Originally posted by wildman "Why is the Fuji GFX50s using a 43.8mm x 32.9mm sensor and not something larger like a real 6x6cm sensor?"
I often wonder why we insist on using old film formats to describe sensor formats, as if the two different technologies were comparable in performance.
For instance, it may be that a modern "full frame" sensor gives performance equal to, or better than, any 120 film format. So if you want, say, the performance you would get from your Rollei TLR loaded with 120 Plus-X, in digital, get a K-1, not a 645Z.
As a more or less arbitrary point of reference, I assume, for all practical purposes, that my 16MP K-5 is roughly comparable to 135 film back in the day.
Interesting!
So far, there's no equivalent to "Plus-X" in the semiconductor world, where a single sensor design would be made in large wafers and then cut down to cellphone, P&S, 35mm, medium-format, and large-format sizes.
Sure, large format substantially outperforms smaller formats, but at this stage in the silicon sensor technology roadmap, new sensors substantially outperform older sensors, too. For example, I'd bet that today's best smartphone cameras rival the early 35mm full-frame digital cameras in resolution, ISO, and dynamic range despite the 7X crop factor between them. A 35mm-sized sheet of the sensor stuff in the latest smartphones would probably best a 4x5 large format camera. What further complicates the format-size equivalence issue is that smaller-sensor cameras often get new sensor technologies first. That makes comparing format sizes hard at the moment.
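A quick back-of-the-envelope sketch of what that 7X crop factor means for light-gathering area (the phone sensor dimensions here are simply full frame divided by 7, a simplification, not a real sensor spec):

```python
# Crop factor compares diagonals, so sensor area scales as crop_factor squared.
crop_factor = 7.0
full_frame = (36.0, 24.0)                        # 35mm "full frame", in mm
phone = (36.0 / crop_factor, 24.0 / crop_factor)  # hypothetical 7X-crop sensor

area_ff = full_frame[0] * full_frame[1]          # 864 mm^2
area_phone = phone[0] * phone[1]

ratio = area_ff / area_phone
print(round(ratio))  # area ratio = crop_factor**2
```

So the full-frame sensor collects roughly 49 times the light of the phone sensor at the same exposure, which is the gap that newer sensor technology has to claw back.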
I'd bet film used to be that way too, until Kodak, Fuji, etc. refined their emulsions and developers. I can't help but believe that the early film emulsions varied significantly in grain size, sensitivity, and dynamic range, such that a TLR loaded with a low-performance film produced worse images than a 110-format camera loaded with the best film. That seems to be where we are in digital.
At some point, the sensors are likely to hit technological limits in which this year's sensors aren't really much better than last year's sensors. For example, there's nothing that semiconductor makers can do to get around the basic physics of light and the statistical noise inherent in the low numbers of photons flying around on a moonlit night. Once sensor performance plateaus, it will be easier to make format comparisons based on pixel size and pixel count.
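That statistical floor can be made concrete with a little Poisson arithmetic. Photon arrivals follow a Poisson distribution, so the noise on a count of N photons is sqrt(N), and the best possible signal-to-noise ratio is therefore sqrt(N) no matter how good the sensor gets. The photon counts below are illustrative guesses, not measurements:

```python
import math

def shot_noise_snr(photons):
    """Best-case SNR for a pixel that collects `photons` photons.

    Poisson noise is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    """
    return photons / math.sqrt(photons)

# Hypothetical counts: a small pixel on a moonlit night vs. a big
# pixel in daylight. More photons -> higher SNR, and no sensor
# technology can beat this limit.
for n in (25, 10_000):
    print(f"{n} photons -> SNR {shot_noise_snr(n):.0f}")
```

Bigger pixels (or bigger sensors) collect more photons per exposure, which is why, once electronics stop improving, pixel size and pixel count become the honest basis for comparing formats.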