Originally posted by Zephos Just learned about this in my Physics class:
Optical resolution is limited by the diffraction of light. The formula goes like this:
Minimum separation between two printed objects (pixels) = (1.22 × distance from observer to objects × wavelength of light) / diameter of your pupil.
If we choose a red wavelength of 750 nanometers, it will give us the minimum separation for visible light in general. The human pupil varies between 1.5 and 8 mm, so we can choose the midpoint of 4.75 mm. The formula simplifies down to:
min separation in meters = 1.926 × 10^-4 × distance from observer in meters
This is actually how Apple's Retina displays work. They pack the pixels so close together that your eye is incapable of resolving them. So yes, if your separation between pixels becomes large enough, and your observers are close enough, a full frame would help. Only playing with the numbers will tell you the answer though.
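For concreteness, here is that calculation as a small Python sketch. The wavelength and pupil values are the ones chosen above; the viewing distances are arbitrary examples.

```python
# Rayleigh diffraction limit for the eye, per the formula quoted above:
# min separation = 1.22 * wavelength * distance / pupil diameter
WAVELENGTH = 750e-9   # red light, in meters (the post's choice)
PUPIL = 4.75e-3       # midpoint of the 1.5-8 mm pupil range, in meters

def min_separation(distance_m):
    """Smallest separation (meters) the eye can resolve at distance_m."""
    return 1.22 * WAVELENGTH * distance_m / PUPIL

for d in (0.3, 1.0, 3.0):  # example viewing distances in meters
    s = min_separation(d)
    print(f"{d:4.1f} m: {s * 1e6:5.0f} microns  (~{0.0254 / s:4.0f} PPI)")
```

The implied resolvable PPI falls off quickly with distance, which is why the "would full frame help" question depends so heavily on how the print is viewed.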
There are a couple of criticisms of this. Red is one end of the spectrum, for example; others use yellow/green as a reference. You get substantially more resolution out of the blue end of the spectrum, so anchoring on red is about as misleading as anchoring on blue and claiming APS-C can give you 100 MP images, which by that math it would. (The sketch below puts numbers on this.)
Quote: The human pupil varies between 1.5 and 8 mm, so we can choose the midpoint of 4.75 mm.
Not really; you have to decide what pupil size you actually have and what looks good to you. What your eye resolves at 4.75 mm is not really applicable to what it resolves at 1.5 or 8 mm.
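To put numbers on both objections, the wavelength one above and the pupil one here, a sketch of how far the prediction swings when you vary the two free parameters (distance fixed at 1 m, an arbitrary choice):

```python
# Same Rayleigh formula, swept across the wavelength and pupil ranges
# mentioned in this thread (450 nm blue to 750 nm red, 1.5 to 8 mm pupil).
DISTANCE = 1.0  # meters; arbitrary example

for wavelength in (450e-9, 550e-9, 750e-9):   # blue, yellow/green, red
    for pupil in (1.5e-3, 4.75e-3, 8.0e-3):   # small, midpoint, large
        s = 1.22 * wavelength * DISTANCE / pupil
        print(f"{wavelength * 1e9:3.0f} nm, {pupil * 1e3:4.2f} mm pupil: "
              f"{s * 1e6:4.0f} microns (~{0.0254 / s:4.0f} PPI)")
```

The predicted limit runs from roughly 69 to 610 microns at 1 m, nearly an order of magnitude of spread, purely from parameter choice.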
Quote: So yes, if your separation between pixels becomes large enough, and your observers are close enough, a full frame would help.
You haven't said anything; we know that. The only thing that matters is the point at which full frame will help, and this tells us nothing about it. That magical point has never been defined, and for good reason: it's different for every image. On top of that, science has never established the physical requirements for a person to perceive an image as "sharp enough." In photography we deal not with "sharp focus" but with "acceptable focus," a completely subjective term that could be established only by multiple large-scale studies of multiple images and subjects.
Quote: Only playing with the numbers will tell you the answer though.
Actually, playing with the numbers doesn't tell you anything; getting out, printing, and looking at some images will. Physics is useful only up to the point where you hit variables such as the eyesight of the viewer, the viewer's likely minimum distance from the print, and how much the resolution in blue makes up for the lack of resolution in red. That last one has to be judged image by image, because sometimes the combination of blues and reds can create the impression of resolution in the reds even when there is very little red resolution there.

I have never seen anyone in physics take an image and actually apply all the variables that have to be accounted for to come up with any kind of answer supported by numbers. I've never even seen a post by anyone using physics that has acknowledged, mathematically, all the variables that exist, let alone applied that knowledge empirically to see if they are right. What happens when you apply physics this way is that you assume the worst-case scenario for everything, like the predominance of red light, and come up with an overkill answer that would not be necessary for a vast number of prints. Go with that type of science and you will always predict far more than your actual requirements.
Amateur physicists who attack this kind of problem are always far too lazy, and simply ignore the more complex variables to arrive at a number that almost always means next to nothing in the real world.
Your instructor is teaching you physics, and doing a fine job; these are fine instructional tools that show you how to use physics to attack certain problems. But it falls well short of answering the question asked in anything but a strictly theoretical sense, and in a practical sense it's not even good theory.
For that to happen, there has to be empirical proof, using actual people and actual prints, showing the correlation between the physics being applied and the perception of acceptable focus. That, as far as I know, no one has ever provided.
Quote: Retina Display is a marketing term developed by Apple to refer to devices and monitors that have a resolution and pixel density so high – roughly 300 or more pixels per inch – that a person is unable to discern the individual pixels at a normal viewing distance.
The fact that 300 pixels per inch obliterates individual pixels when you hold a phone in your hand actually says very little about your perception of a 100 DPI print that has been upscaled to 300 DPI and then printed. The individual pixels are still invisible, and detail at a resolution of 100 lines per inch is still quite sharp. The mistake is in assuming the 100 DPI image will be printed and viewed at that resolution. With digital there is no reason to do that: you can enlarge your "negative," unlike film, where you are stuck with the size of the film you shot the image on.
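Turning the quoted formula around gives the distance beyond which a given print resolution stops being resolvable; a sketch, again using the red-light and midpoint-pupil numbers from the original post:

```python
# distance = pixel separation * pupil / (1.22 * wavelength)
WAVELENGTH = 750e-9   # red light, meters (the post's choice)
PUPIL = 4.75e-3       # midpoint pupil, meters

def merge_distance(dpi):
    """Viewing distance (meters) beyond which pixels at `dpi` blend together."""
    separation = 0.0254 / dpi  # pixel pitch in meters
    return separation * PUPIL / (1.22 * WAVELENGTH)

for dpi in (100, 300):
    print(f"{dpi} DPI pixels merge beyond ~{merge_distance(dpi):.2f} m")
```

By those numbers, 100 DPI pixels merge beyond about 1.3 m, so a 100 DPI print is already pixel-free from ordinary large-print viewing distances, which is exactly the point.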
So the real question here is: how much can I upscale an image before it starts to look soft from pixelation and the like? I've seen images upscaled from 72 DPI to 300 DPI that look quite good at 20x30 inches and have sold multiple times.
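The mechanical part of that upscaling is trivial; whether the result "looks soft" is exactly the subjective judgment argued above. A minimal sketch with the Pillow library (assumed installed; filenames and sizes are hypothetical):

```python
from PIL import Image

PRINT_W_IN, PRINT_H_IN = 30, 20   # target print size in inches
TARGET_DPI = 300

img = Image.open("original.jpg")  # e.g. a file laid out at 72 DPI
upscaled = img.resize(
    (PRINT_W_IN * TARGET_DPI, PRINT_H_IN * TARGET_DPI),
    resample=Image.LANCZOS,       # high-quality resampling filter
)
upscaled.save("print_ready.tif", dpi=(TARGET_DPI, TARGET_DPI))
```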
Until physics comes up with a formula that accounts for that, it's all academic.