Originally posted by GUB: First of all, both the resolution of the lens and the megapixels of the camera contribute to the final image, so there is no black-and-white answer here.
A good lens is expected to achieve 100 lines per millimetre.
Let's turn that into points of detail 1/100 mm square. And yeah, I know, a questionable conversion; I think the reality wouldn't be that good.
That converts into an APS-C rectangle on the image circle at 3.5 megadots.
So with the K-3 that is 3.5 megadots converted to digital with 24 megapixels.
On the K-1 sensor that would be 5.25 megadots converted with 36 megapixels.
So still a no-brainer.
I apologize if I've missed something obvious, but would you walk me through your math here? How did you get to 3.5 and 5.25 megadots (and is a dot a 0.01 mm x 0.01 mm square)?
---------- Post added 12-01-20 at 09:25 AM ----------
And for clarity - here are the numbers I get on my end.
The KP (APS-C) sensor is 23.5 mm x 15.6 mm. This gives an area of 366.6 mm^2.
The K-1 (FF) sensor is 35.9 mm x 24.0 mm. This gives an area of 861.6 mm^2.
Using your assumption that a good lens resolves dots of 1/100 mm squares (0.01 mm x 0.01 mm, or 0.0001 mm^2 each), the KP sensor would read 3,666,000 dots, or 3.666 Mdots. I got there by dividing the sensor area by 0.0001 mm^2 (the area of one of your dots).
Using that same math, the K-1 would read 8.616 Mdots.
These numbers then translate to about 6.5 pixels per dot from the KP and 4.2 pixels per dot from the K-1. So yes, the K-1 reads a larger area. But the area read by the KP has more pixels per dot, so one might assume more detail could be resolved in that area. Now, of course these are different-size pixels, etc. But I'm just trying to follow the math you laid out.
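For anyone who wants to check me, here's a quick script of the math as I'm doing it. The sensor dimensions and the 0.01 mm "dot" size are just the assumptions from this thread, not anything official:

```python
# Sketch of the arithmetic above (my assumptions, not GUB's numbers):
# a "dot" is a 0.01 mm x 0.01 mm square resolved by a 100 lines/mm lens.

DOT_SIDE_MM = 0.01                  # 1/100 mm per dot
DOT_AREA_MM2 = DOT_SIDE_MM ** 2     # 0.0001 mm^2 per dot

def megadots(width_mm: float, height_mm: float) -> float:
    """Dots the lens resolves over a sensor of the given size, in millions."""
    return (width_mm * height_mm) / DOT_AREA_MM2 / 1e6

kp_mdots = megadots(23.5, 15.6)   # APS-C (KP)  -> 3.666 Mdots
k1_mdots = megadots(35.9, 24.0)   # FF (K-1)    -> 8.616 Mdots

# pixels available per lens dot (24 MP on the KP, 36 MP on the K-1)
print(round(kp_mdots, 3), round(24 / kp_mdots, 1))  # 3.666, 6.5
print(round(k1_mdots, 3), round(36 / k1_mdots, 1))  # 8.616, 4.2
```

So by this accounting the K-1 collects more total dots, while the KP puts more pixels under each dot.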
If I am mistaken, and I certainly may be, please correct me.