What Adam
Originally posted by Adam If you took a 36-megapixel full-frame sensor like the one in the K-1 and compared it to a hypothetical latest-gen 16-megapixel APS-C sensor, the two should have roughly the same dynamic range. That would in turn be comparable to a 60-megapixel medium format sensor.
and jatrax
Originally posted by jatrax I would also expect that an APS-C sized sensor cut from the same wafer as the FF K-1 sensor would show identical performance regarding DR.
wrote seems to make sense at first, but sensor size actually does have an impact on (maximum) dynamic range if you look at a picture at a fixed output resolution.
Dynamic range of a sensor is, as both pointed out, largely a function of the individual pixel size. Assuming the same technology and, for simplicity, the same processing, the maximum intensity of light is determined by how many electrons can be excited by light and then counted/measured after the exposure. This is roughly proportional to the individual pixel area. The minimum amount of light that can be captured is driven by how few electrons can still be discriminated. If that (absolute) number is roughly constant for a given technology, i.e. if the A/D conversion is nearly perfect, then the minimum intensity of light that can be captured is inversely proportional to the pixel area: twice the area means half the intensity is needed to excite that countable number of electrons. The ratio between maximum and minimum determines dynamic range. Even though this is simplified to ideal processing, both formats are equally affected by imperfection, so it shows that these theoretical limits are indeed only related to the size of the photo sites.
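As a rough sketch of that ratio argument (the full-well and minimum-charge figures below are made-up illustrative values, not specs of any real sensor):

```python
import math

def per_pixel_dr_ev(full_well_electrons, min_countable_electrons):
    """Idealized per-pixel dynamic range in EV (stops): log2 of the
    ratio between the largest and smallest signal a photo site can
    distinguish, assuming near-perfect A/D conversion."""
    return math.log2(full_well_electrons / min_countable_electrons)

# Hypothetical values: a pixel with twice the area collects twice the
# electrons at saturation, while the minimum countable charge stays fixed.
small_pixel = per_pixel_dr_ev(40_000, 4)   # ~13.3 EV
large_pixel = per_pixel_dr_ev(80_000, 4)   # ~14.3 EV
print(large_pixel - small_pixel)           # 1.0 EV per doubling of pixel area
```

So under these idealized assumptions, every doubling of photo-site area buys exactly one EV of per-pixel dynamic range.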
So where does sensor format come in? The deepest shadows that can be differentiated in an ideal sensor (and the recent Sony ones come fairly close) are determined by the noise that comes from the particle nature of light: a low light intensity excites only a small, countable number of electrons per photo site, resulting in variance between sites, seen as noise in deep shadows. Now if I have the same intensity of light on a small sensor as on a big one, I have the same noise amplitude on both on a per-pixel basis.
Say we compare a micro 4/3 sensor and a full-frame sensor with the same size of photo sites, for the same picture taken with both at the same shutter speed, angle of view and f-stop, i.e. the same intensity of light. Each pixel on the 4/3 sensor is then covered by 4 on the full-frame sensor. With respect to maximum intensity, they fare the same, as discussed above. With respect to the minimum, each photo cell also fares the same, BUT for the same output resolution, I can take the average of the 4 photo cells on the FF sensor. Random noise in the shadows then roughly scales down by the square root of 4, i.e. a factor of two or one EV. Therefore, on the minimum-intensity side, the larger sensor with the same size of photo sites behaves like a low-resolution sensor with the same number of pixels as our 4/3 sensor. On the maximum side, however, it obviously does not, as the maximum is not a noise property: each photo site still has to cope with the full intensity without hitting the ceiling. So a high-resolution bigger sensor only scales on one end like a low-resolution bigger sensor, which improves on both ends.
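The averaging argument is easy to check numerically. A quick sketch (illustrative numbers only), modelling photon shot noise in deep shadows as Poisson-distributed electron counts and binning 4 FF photo sites down to one 4/3-resolution output pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_electrons = 9  # deep-shadow signal: only a few electrons per photo site

# One million groups of 4 FF photo sites, each group matching one
# output pixel at the 4/3 sensor's resolution.
ff_sites = rng.poisson(mean_electrons, size=(1_000_000, 4))

per_site_noise = ff_sites[:, 0].std()        # noise of a single photo site
binned_noise = ff_sites.mean(axis=1).std()   # noise after averaging 4 sites

print(per_site_noise / binned_noise)  # ≈ 2, i.e. sqrt(4): one EV less shadow noise
```

The single-site noise matches what the 4/3 sensor would show per pixel; binning the FF sites roughly halves it, which is the one EV of shadow advantage described above.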
I hope this gives a little bit of a feeling for the physical boundaries.
Originally posted by alamo5000 if I expose for the highlights but the rest of the frame is in deep shadow... how much is more... and how much more will be able to be recovered?
For the APS-C K-5 vs. the FF K-1 (roughly the same pixel pitch), the scaling would be sqrt(sensor area ratio), which happens to be known as the crop factor and is 1.5. This is a bit more than half an f-stop of dynamic range improvement (0.58 EV) due to sensor size. It is the same amount of sensor-size-driven improvement as for low-light noise performance. Taking technology improvements into account, based on what we see from the K-3, it may get close, but I would not expect a full EV of additional dynamic range.
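Spelling that arithmetic out (crop factor 1.5 between these two formats; the rest follows from the scaling argument above):

```python
import math

crop_factor = 1.5  # APS-C K-5 vs. full-frame K-1, roughly the same pixel pitch
area_ratio = crop_factor ** 2          # 2.25x the sensor area
noise_scaling = math.sqrt(area_ratio)  # sqrt of the area ratio = the crop factor
dr_gain_ev = math.log2(noise_scaling)  # EV gained on the shadow side

print(round(dr_gain_ev, 2))  # 0.58 EV
```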