Originally posted by GUB: No, of course there is no black and white boundary. (Just hypothetical and unsupported by empirical evidence.) If you take the standard circle of confusion utilised in the calculators, we are already in the futile area. (But we don't.)
And then again, as photoptimist factually and logically stated in post #83, only 1 in 4 pixels samples red and only 1 in 4 samples blue, so if you quarter the numbers above the result is not quite so futile.
An example in practice: look at this image.
I have applied edge detection to it. White indicates areas of high contrast, i.e. in focus.
Only the absolute whitest of the white areas may benefit from a higher pixel count.
This image would in no way be unique - most portraiture would be like this.
The rest of the image would get no gain from higher resolution; in fact a 5 MP sensor would probably do the job. How can higher resolution improve a blur?
As you have already touched on, the old-school DoF tables, based on an 8x10 inch print viewed at approx 10", came about many years ago in relation to film and are pretty much outdated these days.
I would not put too much faith in edge-finding algorithms: they are based on contrast and can be misleading, as resolution is a different thing from acutance.
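For illustration, here is a minimal sketch of that kind of contrast-based edge map using Pillow's built-in FIND_EDGES filter (the filenames are hypothetical); note it highlights acutance, not resolved detail:

```python
from PIL import Image, ImageFilter, ImageOps

# Convert to greyscale so only luminance contrast drives the edge map
img = ImageOps.grayscale(Image.open("portrait.jpg"))  # hypothetical filename

# FIND_EDGES is a simple 3x3 convolution: bright output pixels mark strong
# local contrast (acutance), which is not the same thing as resolved detail
edges = img.filter(ImageFilter.FIND_EDGES)
edges.save("portrait_edges.jpg")
```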
A 5 MP portrait would resolve detail well enough only for a specific output size and device, and for a given desire/need to resolve fine features such as hair, eyelashes, skin pores etc. Indeed, the lack of resolving ability at a certain output size may even be preferred for ladies' portraits.
I am sorry, but I do not think that 'only the absolute whitest of white areas would benefit' makes sense, so I may be misunderstanding your intent. But a higher pixel count, all things being equal, equates to better-resolved detail.
Photoptimist did state correctly:
Quote: The Bayer effect: A 100 MPix color sensor is really a 25 MPix red sensor + 50 MPix green sensor + 25 MPix blue sensor due to the Bayer filter, so 100 MPix isn't really as high res as it seems.
However, I do not think he meant to imply that you can quarter the resolution, as in this case we are really looking at colour sampling, and at the interpolation that has to take place in camera to estimate the probable colour of each pixel from its neighbours. The net result of the demosaic is a slight softening of image detail, which would be mitigated by capture sharpening. Resolution is a different matter, and it is generally agreed that you need at least 2 pixels to resolve a detail.
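To put rough numbers on the Bayer point, a sketch of the channel arithmetic only (the real in-camera demosaic is of course far more sophisticated than simple counting):

```python
def bayer_channel_counts(total_mpix: float) -> dict:
    """Split a Bayer sensor's pixel count into per-channel sample counts.

    In the standard RGGB Bayer pattern, half the photosites sample green
    and a quarter each sample red and blue.
    """
    return {
        "red": total_mpix * 0.25,
        "green": total_mpix * 0.50,
        "blue": total_mpix * 0.25,
    }

print(bayer_channel_counts(100))  # {'red': 25.0, 'green': 50.0, 'blue': 25.0}
```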
Take a current 36 MP camera, e.g. the Pentax K-1 (pixel pitch 4.86 µm) or Nikon D800 (pixel pitch 4.87 µm), and let's assume they are both exactly 5 µm.
At its very best, the smallest detail these sensors can resolve spans the width of at least 2 pixels, therefore the system should be able to resolve detail as small as 10 microns. But can it?
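A quick check of that arithmetic, using the Nikon D800's published sensor width and horizontal pixel count (the 2-pixel criterion is the assumption stated above):

```python
def pixel_pitch_um(sensor_width_mm: float, pixels_across: int) -> float:
    """Pixel pitch in microns: sensor width divided by the pixel count across it."""
    return sensor_width_mm * 1000 / pixels_across

# Nikon D800: 35.9 mm sensor width, 7360 pixels across
pitch = pixel_pitch_um(35.9, 7360)
print(f"pixel pitch     ≈ {pitch:.2f} µm")      # ≈ 4.88 µm
print(f"smallest detail ≈ {2 * pitch:.1f} µm")  # 2-pixel criterion: ≈ 9.8 µm
```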
So a better method for calculating DoF, as proposed by Douvas and Torger, is to set the blur diameter ("circle of confusion") to our 2x pixel pitch, i.e. 10 microns, much more in line with today's needs and obviously a lot smaller than the old recommendation of 30 µm.
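To see what that smaller CoC does to the numbers, here is a sketch using the standard hyperfocal/thin-lens DoF approximations; the 50 mm, f/4, subject-at-2-m example is purely illustrative and my own:

```python
def dof_mm(focal_mm: float, f_number: float, subject_mm: float, coc_mm: float):
    """Near limit, far limit and total depth of field (all in mm),
    using the standard hyperfocal-distance approximation.
    (Assumes the subject is closer than the hyperfocal distance,
    otherwise the far limit goes to infinity.)"""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

# 50 mm lens at f/4, subject at 2 m, old vs new circle of confusion
for coc_um in (30, 10):
    near, far, total = dof_mm(50, 4, 2000, coc_um / 1000)
    print(f"CoC {coc_um} µm: DoF ≈ {total:.0f} mm ({near:.0f} to {far:.0f} mm)")
```

In this example the calculated DoF shrinks to roughly a third when the CoC drops from 30 µm to 10 µm, which is exactly why the old tables overstate what will look sharp from a high-resolution sensor.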
If you are really serious about the subject and have a real need to calculate the best outcome for digital acquisition, then you should have a look at these:
Depth of Field, Diffraction and High Resolution Sensors
Lumariver Depth of Field Calculator