Originally posted by clackers I think you're confusing resolution - a function of pixel quantity - with dynamic range, Acoufap.
That's the number of possible shades, and is what the bit range stores.
This was only an aside, not my main point.
But since there seems to be some misinterpretation, I'll go a bit deeper. I'm talking about the stage where the light rays don't even "know" yet that they will become pixels of an image.
In the first place I'm talking about the capability of a sensor cell (not a pixel!) to "recognize" light. The cell works within a voltage range: recognizing light means the cell produces a voltage, and the brighter the light, the higher the voltage, up to a maximum. So we have a lower boundary and an upper boundary. On this physical level everything depends on the sensor design and the materials used. Once a cell has produced a signal, the next step is that it is digitized by circuitry on the sensor (also part of the sensor design).
To come back to the starting point of this thread: the digitization is done with 12 (K-30, K-S2) or 14 (K-5, K-3) bits. The lower digital boundary should be the value zero, the upper the maximum of the 12- or 14-bit number. The range in between is divided into steps as I described in my first post. This can be done in equidistant linear steps or in logarithmic steps; the latter is my guess.
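To make the two step layouts concrete, here's a minimal sketch of quantizing a cell voltage into a 12-bit value. The full-scale voltage `V_MAX` and the noise floor `v_min` are hypothetical placeholders, not values from any real Pentax sensor:

```python
import math

BITS = 12
LEVELS = 2 ** BITS          # 4096 possible digital values
V_MAX = 1.0                 # assumed full-scale voltage (hypothetical)

def quantize_linear(v):
    """Equidistant steps: each step covers the same voltage interval."""
    v = min(max(v, 0.0), V_MAX)
    return min(int(v / V_MAX * LEVELS), LEVELS - 1)

def quantize_log(v, v_min=1e-4):
    """Logarithmic steps: each step covers the same *ratio* of voltages."""
    v = min(max(v, v_min), V_MAX)
    frac = math.log(v / v_min) / math.log(V_MAX / v_min)
    return min(int(frac * LEVELS), LEVELS - 1)

print(quantize_linear(0.5))  # half the voltage -> half the code range
print(quantize_log(0.5))     # same voltage lands much higher on a log scale
```

The logarithmic layout spends more codes on the shadows, which is why it is often a better match for how brightness is perceived.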
To sum up: one cell is able to measure (recognize) the whole range of brightness (= dynamic range) of the sensor. Take all the cells of the sensor together and you get the brightness structure the sensor delivers, before taking the Bayer filter pattern into account. This is what I was talking about in my first post.
The bits of an image, and the (color) image itself, are the product of interpreting the light measurements after applying the Bayer pattern. This interpretation is done by the in-camera raw converter or a software raw converter, based on parameters you set in the camera or in the software.
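A toy illustration of that interpretation step: each cell under an RGGB Bayer filter measures only one color, and the converter has to fill in the two missing channels per cell. The raw values below are made up, and the "fill with the channel mean" step is deliberately naive; real converters interpolate from neighboring cells with far more care:

```python
import numpy as np

# Hypothetical 4x4 raw mosaic, RGGB layout, 12-bit values (made-up numbers)
raw = np.array([
    [100, 200, 120, 210],
    [300, 400, 310, 420],
    [110, 205, 125, 215],
    [305, 410, 315, 425],
])

# Masks telling which cell carries which color in an RGGB pattern
h, w = raw.shape
ys, xs = np.mgrid[0:h, 0:w]
r_mask = (ys % 2 == 0) & (xs % 2 == 0)
g_mask = (ys % 2) != (xs % 2)
b_mask = (ys % 2 == 1) & (xs % 2 == 1)

def fill(mask):
    # Keep the measured value where this color was sampled; elsewhere
    # fall back to the channel mean (a real converter interpolates).
    return np.where(mask, raw, raw[mask].mean())

rgb = np.stack([fill(r_mask), fill(g_mask), fill(b_mask)], axis=-1)
print(rgb.shape)  # one full RGB triple per cell after interpretation
```

The point is only that the color image is a *reconstruction* from single-color measurements, which is exactly where the converter's parameters come into play.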
Only after this conversion can we talk about the resolution of an RGB image, and we can show the distribution of brightness within a histogram. This is usually done on a JPEG image, based on eight bits per color channel. At this point another story could start.
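For the histogram step, here's a small sketch of squeezing 12-bit channel values down to the 8-bit range a JPEG histogram is based on. I'm assuming a plain linear scaling for simplicity; real converters apply a tone curve (gamma) before this:

```python
import numpy as np

# A handful of made-up 12-bit channel values
values_12bit = np.array([0, 512, 1024, 2048, 4095])

# Linear rescale from the 0..4095 range to 0..255 (no tone curve)
values_8bit = (values_12bit / 4095 * 255).round().astype(np.uint8)
print(values_8bit)

# The histogram a camera or editor would then display
hist, _ = np.histogram(values_8bit, bins=256, range=(0, 256))
print(hist.sum())  # every sample lands in exactly one of the 256 bins
```

Note how 16 raw levels collapse into each 8-bit level on average, which is one reason shadow detail that exists in the raw file can vanish from the JPEG histogram.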
That's my notion of how things work for Bayer-pattern-based sensors. It helps me explain a lot of effects in digital photography.
I'm not talking about resolution or pixel quantity!
Dynamic range I see as an inherent property of the individual sensor cell and of the whole sensor design.