Originally posted by bdery In that test, you're not comparing ISO performances. You are comparing "effective pixels" (for lack of a better term) that are of different sizes. Or, put differently, you're pixel peeping with different test conditions. You cannot compare noise in that way, it's like comparing two sensors. Maybe I'm not understanding the test you propose correctly, but if I do then there are too many variables implicated for it to be conclusive.
I think perhaps I didn't explain sufficiently well.
The idea is to compare images where there are only two relevant variables - the size of the sensor and the amount of noise at a given sensitivity - and to show that there is a very simple relationship between those two variables.
The idea is that the actual pixels - the physical photon-collecting sites on the sensor - are the same size in this comparison. They are also exactly the same distance apart, with the same physical composition, the same associated signal readout and processing hardware, etc. The *only* thing different is the size of the sensor. It's a *perfect* comparison if your goal is to isolate the effect of sensor size independent of those other factors. We seldom get such perfect comparisons in real life, but we get to see one here.
The end result will be that the image from the "smaller" sensor will show more noise at a given ISO. Yes, obviously, this is because of the greater enlargement being applied - but that doesn't disprove the notion, it shows how simple the concept actually is. This is a case where literally the only difference is the total surface area of the sensor. In the real world, it is unusual to find two sensors whose only difference is total surface area - there are usually other technological differences. But this demonstration illustrates that the surface-area component alone has a measurable effect, and it works out that the difference between APS-C and FF in terms of noise - all else equal - is about one stop.
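To put a number on that "about one stop," here is a minimal sketch. It assumes pure shot noise and identical pixel technology on both sensors, and uses typical nominal sensor dimensions (not figures from this thread):

```python
# Sketch: how sensor area alone translates into a noise ("stop") advantage,
# assuming pure shot noise and identical pixel technology on both sensors.
# Sensor dimensions below are typical nominal values, purely illustrative.
import math

ff_area = 36.0 * 24.0      # full-frame sensor area, mm^2
apsc_area = 23.6 * 15.7    # a common APS-C sensor area, mm^2 (varies by maker)

area_ratio = ff_area / apsc_area        # ~2.3x more light-gathering area
snr_ratio = math.sqrt(area_ratio)       # shot-noise SNR scales as sqrt(light)
stop_advantage = math.log2(area_ratio)  # stops are log2 of the light ratio

print(f"area ratio:     {area_ratio:.2f}x")
print(f"SNR ratio:      {snr_ratio:.2f}x")
print(f"stop advantage: {stop_advantage:.2f} stops")
```

With these dimensions the result lands around 1.2 stops - close enough to "about one stop" for forum purposes, and the exact value shifts a little with the particular APS-C dimensions you plug in.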
Quote: By decreasing the pixel size, you lower the amount of photons hitting it and the averaging is more prone to errors
Yes, of course. But if you reduce pixel size *without changing total surface area*, then you are obviously adding more pixels (I am discounting the fact that some sensor technologies may have larger gaps between pixels than others and assuming this too is held constant). Two sensors with the same total surface area, one of which has smaller pixels, means that one also has more pixels. The increased per-pixel noise from the smaller wells is more or less exactly counteracted by the fact that we now have more pixels to average together. The total amount of light collected by both sensors will be the same, hence the same total noise.
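This cancellation is easy to see in a toy simulation. The sketch below models shot noise as Poisson: a hypothetical sensor B has 4x the pixels of sensor A over the same area, so each B pixel collects a quarter of the photons; binning groups of four B pixels back to A's resolution recovers the same noise statistics. All numbers are illustrative:

```python
# Sketch: same total sensor area, different pixel counts, same total noise.
# Poisson shot-noise model: sensor B has 4x the pixels of sensor A, so each
# B pixel collects 1/4 of the photons. Binning 4 B pixels per output pixel
# recovers A's statistics. Numbers are illustrative, not from real hardware.
import numpy as np

rng = np.random.default_rng(0)
n_big = 100_000              # pixels on the coarse sensor A
photons_per_big = 400.0      # mean photons per large pixel

big = rng.poisson(photons_per_big, size=n_big)
small = rng.poisson(photons_per_big / 4, size=(n_big, 4))
binned = small.sum(axis=1)   # combine 4 small pixels per output pixel

print(f"big-pixel noise (std):    {big.std():.2f}")
print(f"binned small-pixel noise: {binned.std():.2f}")
# Both come out near sqrt(400) = 20: the extra per-pixel noise of the small
# wells cancels against having more of them to combine.
```

Each small pixel is individually noisier (its SNR is sqrt(100) = 10 rather than sqrt(400) = 20), but summing four of them restores the full photon count, which is the point of the argument above.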
Quote: I understand what you mean. Indeed when you lower the resolution enough, everything starts to look evenly sharp. And I understand that it can be useful for a photographer to quantify this in some situations. But in that case, you're not measuring the DOF of your lens/sensor, you're measuring the resolution of your printer and your eye.
True, but that's exactly how DOF has always been measured in photography - using "typical" values for print size, viewing distance, and visual acuity to yield an appropriate CoC value to plug in to the rest of the formula.
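For concreteness, here is a sketch of that conventional calculation using the standard thin-lens DOF formulas. The CoC convention used (sensor diagonal / 1500) and all the example values are common illustrative choices, not anything specific from this thread:

```python
# Sketch: conventional DOF calculation from a CoC that encodes assumed
# print size / viewing distance / visual acuity, here via the common
# "sensor diagonal / 1500" convention. All values are illustrative.
import math

def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        far = math.inf  # focused at/beyond hyperfocal: sharp to infinity
    else:
        far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# Full frame: diagonal ~43.3 mm -> CoC ~0.029 mm under the d/1500 convention
coc = math.hypot(36, 24) / 1500
near, far = dof_limits(focal_mm=50, f_number=2.8, subject_mm=3000, coc_mm=coc)
print(f"CoC: {coc:.3f} mm, DOF: {near:.0f} mm to {far:.0f} mm")
```

Change the assumed print size or viewing distance and the CoC changes, which shifts the computed DOF - which is exactly the point being conceded here: the "depth" depends on the output and the viewer, not on the optics alone.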
Quote: Maybe the gap here is caused by the fact that photographers and physics students don't always use the same language and metrics. For instance, f-numbers aren't used all that often in optics; many physics majors will never have used them, because you can do optics without doing imaging science. So my metric for DOF might be tinted by my background. To me, it is a property of the sensor system. Maybe it's actually more common for photographers to quantify it on a printed standard, and that's why we're (politely) arguing.
I think you have completely nailed it here, so I consider that aspect of the matter closed.