Originally posted by normhead
But they have become much more interesting with digital...
For example, theoretically, if you have a 4912-pixel-high image: obviously, if you could make 4912 single-pixel lines alternating black and white, you'd have 4912 distinct lines... so why can you only produce 3500 on a test chart with a K-1? The possibilities for exploration are endless. With analogue, I never asked that question. It just was what it was.
What does 3500 have to do with 4912 ?
Where do the extra 1412 pixel rows go?
It's like the meaning of life being 42.
It's more defined as a question in digital, but, it is still better not to think about it.
Where do the extra 1412 pixel rows go? Losses include the lens, the Bayer filter (a K-1 could only resolve 2456 red-and-black lines), the anti-alias filter (if present or simulated), scattering by the sensor microlenses, and even a bit of cross-pixel photon tunneling.
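The Bayer part of that loss budget is easy to sketch with toy numbers (this is just the sampling arithmetic, not a model of an actual K-1 demosaic pipeline): red photosites sit on every other row, so a single-pixel black/red line pattern aliases away completely, while two-pixel lines (2456 of them across 4912 rows) sit right at the red channel's limit.

```python
import numpy as np

rows = 4912  # K-1 sensor height in pixels

# Bayer red photosites occur on every other row, so the red channel
# samples the scene at half the vertical pitch.
def red_channel(pattern):
    return pattern[::2]

# 4912 single-pixel black/red lines: the red channel sees a constant
one_px = np.arange(rows) % 2
print(np.ptp(red_channel(one_px)))   # 0 -- the pattern vanishes

# 2456 two-pixel lines: exactly resolvable by the red photosites
two_px = (np.arange(rows) // 2) % 2
print(np.ptp(red_channel(two_px)))   # 1 -- the alternation survives
```

The first print is the aliasing case: the red samples all land on black rows, so the pattern is simply invisible in red.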
The interesting issue with DR is the trade-off between image noise and resolution of those shadow objects. Even if the optics can put a sharp image on the sensor, a tiny feature in the darkness may be indistinguishable from noise if the DR of the pixels is too low or the ISO has been pushed. But a larger feature in the shadows might be distinguishable in that noise by downsampling the image. That is, one can turn a noisy image made with 1 µm pixels into a higher-quality image, as if it had been made with 5 µm pixels, by bucketing and averaging.
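A hedged sketch of that bucketing trick, with made-up numbers rather than any real camera's values: averaging 5×5 blocks cuts the noise standard deviation by √25 = 5, enough here to pull a faint shadow feature up out of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a faint shadow feature 10 DN above a 20 DN background
scene = np.full((500, 500), 20.0)
scene[200:300, 200:300] += 10.0

# Heavy noise at the "1 µm pixel" scale swamps the feature per-pixel
noisy = scene + rng.normal(0.0, 50.0, scene.shape)

# 5x5 bucketing: average each 5x5 block, as if the pixels were 5 µm
binned = noisy.reshape(100, 5, 100, 5).mean(axis=(1, 3))

# Background noise std drops roughly 5x, so the 10 DN step emerges
print(noisy[:100, :100].std(), binned[:20, :20].std())
print(binned[40:60, 40:60].mean() - binned[:20, :20].mean())
```

The feature region in the binned image sits a comfortable multiple of the residual noise above the background, where in the original it was a fraction of one sigma.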
The other interesting issue is the subtle role of pixel well depth in picture quality and in the ability to "print large" even from low-DR images. Resolving a subtle variation in color or intensity (e.g., the alternating pattern of barbs and barbules in a feather) depends on the sensor's ability to register small differences in light level. It's not a DR issue, because it happens even in bright pixels and would still occur with a magic sensor that had zero noise. The issue is caused by the statistical properties of arriving photons: the chance that, for example, only 9900 or fewer photons hit a pixel when the average light intensity meant that 10,000 should have. That statistical spread is a function of well depth at base ISO and of any ISO-related amplification of the signal. A tiny-pixel sensor will have more trouble resolving high-resolution features that have little variation in intensity; the feather will look more like a flat patch of color than a distinct but subtle ribbed alternation of color. Again, downsampling can help.