Originally posted by fehknt:
I think we're actually saying the same thing even though you sound like you're disagreeing. If you keep FoV, DoF and shutter speed the same (so all exposure params except ISO), then the larger format will have a dimmer image because the light is spread over a wider area, and you must compensate by increasing ISO/gain. If you can somehow increase the light intensity recorded so you can run the sensor at ISO 100 again, then you're recording more light, so of course you get a better SNR! If you took the same larger format and tried to condense it onto a smaller area, you'd find that you're already at base ISO and you blow the highlights.
So, yes, we both agree that IFF you can run both sensors at base ISO, the larger one will do better. I carry it one step further to argue that for an equivalent image above base ISO, you'll have to be at a higher ISO on the larger sensor by exactly the difference in sensor size, so there's no benefit anymore in terms of noise.
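The equivalence argument quoted above can be sketched numerically. This is a minimal model, not anyone's actual camera firmware: it assumes total captured light scales as sensor area divided by f-number squared (fixed shutter speed and scene), a 1.5× linear crop between APS-C and full frame, and illustrative starting settings of f/2.8 at ISO 400.

```python
# Simplified model (an assumption, not measured data): total captured
# light ~ sensor_area / f_number**2 at a fixed shutter speed and scene.
CROP = 1.5  # linear crop factor, APS-C vs full frame

def equivalent_settings(apsc_f_number, apsc_iso):
    """Full-frame settings giving the same FoV, DoF and image brightness."""
    ff_f_number = apsc_f_number * CROP   # same DoF needs a narrower aperture
    ff_iso = apsc_iso * CROP ** 2        # dimmer per-area image needs more gain
    return ff_f_number, ff_iso

def relative_total_light(sensor_area, f_number):
    """Total light reaching the sensor, up to a constant factor."""
    return sensor_area / f_number ** 2

f_ff, iso_ff = equivalent_settings(2.8, 400)
apsc_light = relative_total_light(1.0, 2.8)       # APS-C area = 1 unit
ff_light = relative_total_light(CROP ** 2, f_ff)  # FF area = crop^2 units
# The higher ISO exactly tracks the larger area: total light captured
# (and hence shot-noise SNR) comes out the same for the equivalent pair.
print(iso_ff, apsc_light, ff_light)
```

Under this toy model the required full-frame ISO is higher by exactly the area ratio (crop factor squared), while the total light, and therefore the shot-noise SNR, is identical for the two "equivalent" images, which is precisely the claim being made.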
If that's true, it would be the best explanation for why writers perceive that image quality suffers when lenses designed for full-frame cameras are used with APS-C sensors. I'm not sure it is true, though. There are two things that affect the perception of image quality that I don't think have been considered.
First, everybody's eyeballs are different. If you think of the retina as the "sensor", then the photoreceptor cells are the "pixels". There are two basic kinds of photoreceptors: cones, which detect color, and rods, which detect luminance. Their sizes and densities are very different: the color receptors are wider (fewer "pixels" per unit of surface area), while the luminance receptors are tiny (vastly more per unit area). The thing is that people see differently because the ratio of color to luminance receptors varies from person to person. People with really good color vision see the world as blobs of color (low resolution but very good color discrimination), while others see the world as sharply defined but washed out (very high resolution and light-gathering ability, but poor ability to discriminate among hues). I fall in the latter category myself: I have what I call "wolf vision" and other people call "color blindness". I can see in what other people think is total cave darkness, and "image quality" to me means the ability to resolve fine detail regardless of color.
So two aspects are important to me: the sheer number of pixels, and the size of each pixel. If it takes four pixels on a KP's sensor to do the same thing as one pixel on a K-1's sensor, then the KP has greater image quality (with respect to that one area), because there will be fine differences in luminance among its four pixels that are all averaged together into the K-1's one pixel. The K-1 more than makes up for it, however, by having a larger number of pixels overall. When the resulting images are compared at proportional sizes, the KP's picture will be smaller than the K-1's but will show more fine detail; when both are resized to the same output size, the K-1 will have the better image quality.
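The four-pixels-averaged-into-one effect described above can be sketched numerically. This is a made-up 4×4 luminance patch, not real sensor data: a fine checkerboard that a dense sensor records but that vanishes when each 2×2 group is averaged into a single larger "pixel".

```python
import numpy as np

# Hypothetical 4x4 patch of luminance values with fine alternating detail.
fine = np.array([[10, 12, 10, 12],
                 [12, 10, 12, 10],
                 [10, 12, 10, 12],
                 [12, 10, 12, 10]], dtype=float)

def bin_2x2(img):
    """Average each 2x2 block into one value, mimicking larger pixels."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

coarse = bin_2x2(fine)
print(fine.std())    # 1.0 -- the fine luminance variation is present
print(coarse.std())  # 0.0 -- it averages away inside the larger pixels
```

The standard deviation of the dense patch is nonzero (the detail exists), while every 2×2 average lands on the same value, so the coarse version is perfectly flat: the fine luminance differences are gone.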
The second thing I'm thinking is that we fail to recognize that the light arriving at the sensor is not digital; for all practical purposes it varies continuously. Whether the light hitting a given point on the sensor is brighter or dimmer (because of the kind of lens used) has more to do with the ISO setting than anything else. The way sensitivity is adjusted appears to me to be the result of ganging pixels together (groups being used, in effect, as a single pixel) to add up the light they capture, analogous to using larger silver-salt crystals in a film emulsion to achieve greater sensitivity. On this view, "noise" isn't stray light hitting over-sensitized pixels; it's the loss of resolution due to increased granularity. That would be why a "Gaussian blur" can make the picture look better: it creates new values between the pre-existing pixels by interpolation, in effect "resizing" the pixels themselves.
People with better color vision can tolerate a reduced ability to resolve fine detail in images, while those tending toward color blindness will be more tolerant of an inability to distinguish adjacent hues. Both kinds of contrast information (variation in luminance and variation in color) are important to image quality, but everyone will weigh them differently.