Originally posted by UncleVanya
Bit depth is cool, but I wouldn't sweat it.
https://photographylife.com/14-bit-vs-12-bit-raw/amp
16 bit is the APS-C and FF standard these days but I've seen no evidence it really improves end results.
Further, my 16MP 12 bit raw files from my E-M1 give me results in 13x19 prints that match up well with my 24MP 16 bit K-3 files. I think I can see subtle differences, but I'm not viewing double blind and comparing that way.
Thanks for the link - I enjoyed reading the article. It seems I can stay with my Canon PowerShot G10; its sensor uses 12 bit digitization.
If there's talk about 12 vs. 14 bit depth, it means the sensor cells' analog signal is digitized using 12 or 14 bits. That is not the same thing as the bit depth of the RGB data stored in an image file. The K-5, K-3, K-1 and KP use 14 bit digitization, the K-S2 12 bit. Computers address data in bytes = 8 bits, words = 16 bits, double words = 32 bits, and so on.
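To put rough numbers on that, here is a small sketch (plain Python, nothing camera-specific) of how the digitization depth translates into discrete levels per channel:

```python
# Rough numbers only: each extra bit of A/D depth doubles the number of
# discrete levels the digitized sensor signal can take per channel.
for bits in (8, 12, 14, 16):
    print(f"{bits} bit digitization -> {2 ** bits} levels")
```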
Raw files can't be shown on a screen directly. Raw data - the sensor's digitized output - has to be converted to an image file format, usually one based on an RGB (red, green, blue) color space. This is what raw converters do. The converted (real, not raw) image data is usually stored with 8 or 16 bits per color channel; in HDR processing I think formats with up to 64 bits are also used. The more heavily you post-process, the more those extra bits are appreciated: you avoid banding in very soft color gradients, and color mappings can be done more precisely.
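To make the banding point concrete, here is a toy NumPy sketch - not how any real raw converter works, and the push_shadows curve is just an invented stand-in for heavy editing. It quantizes a smooth ramp at a given bit depth, brightens it strongly, and counts how many distinct output tones survive in the deep shadows:

```python
import numpy as np

# Toy illustration of banding (not a real raw converter): quantize a smooth
# ramp at a given bit depth, brighten it strongly, then count how many
# distinct 8 bit output tones remain in the deep shadows.
gradient = np.linspace(0.0, 1.0, 10_000)      # an ideal, continuous ramp

def push_shadows(x):
    # A strong, made-up brightening curve (gamma lift), applied in float.
    return x ** (1.0 / 3.0)

for bits in (8, 16):
    levels = 2 ** bits
    stored = np.round(gradient * (levels - 1)) / (levels - 1)   # stored at this depth
    edited = push_shadows(stored)
    shadows = edited[gradient < 0.1]                            # darkest 10% of the ramp
    distinct = len(np.unique(np.round(shadows * 255)))
    print(f"{bits} bit source -> {distinct} distinct 8 bit tones in the deep shadows")
```

With the 8 bit source you end up with only a couple of dozen distinct shadow tones spread over roughly 120 possible output values, while the 16 bit source fills them all - those gaps are exactly what shows up as banding in a soft gradient.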
It may be that in most cases the differences between 12, 14 and 15 or 16 bit digitization are negligible in the end result. But if you do a lot of post-processing, it's always better to have more bits (more data increments) available when you need them. I don't want to have to decide that at the moment I take the photo - so I go with more bits if I can.
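For anyone who likes the arithmetic spelled out, a back-of-the-envelope sketch (my own simplification, assuming linear raw data and ignoring noise, which in practice matters at least as much as bit depth):

```python
# Back-of-the-envelope only: how many raw levels cover the tones you stretch
# when pushing shadows by N stops, assuming linear raw data and ignoring
# noise (which in practice limits things at least as much as bit depth).
def levels_in_bottom_stops(bit_depth, stops_pushed):
    return (2 ** bit_depth) // (2 ** stops_pushed)

for depth in (12, 14, 16):
    print(f"{depth} bit raw, +3 EV shadow push: "
          f"{levels_in_bottom_stops(depth, 3)} levels to work with")
```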