03-08-2021, 12:56 PM - 1 Like   #136
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by TonyW:
I am not disagreeing with you as it is an interesting thought, but could you quantify where you feel the limit is (I agree there has to be one) and which manufacturer has so far overstepped the line?
No, of course there is no black and white boundary (just hypothetical and unsupported by empirical evidence). If you take the standard circle of confusion used in the calculators, we are already in the futile area. (But we don't.)
And then again, as photoptimist factually and logically stated in post #83, only 1 in 4 pixels samples red and only 1 in 4 samples blue, so if you quarter the numbers above the result is not quite so futile.
An example in practice: look at this image.
I have applied edge detect to it. White indicates areas of high contrast, i.e. in focus.
Only the absolute whitest of the white areas may benefit from a higher pixel count.
This image would in no way be unique; most portraiture would be like this.
The rest of the image would get no gain from higher resolution; in fact 5 MP would probably do the job. How can higher resolution improve a blur?

[Attached image: the portrait after edge detect (PENTAX K-1)]
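For anyone wanting to run the same check on their own shots, here is a minimal sketch of this kind of edge detect, assuming Python with the Pillow library installed; the file name is a placeholder and this is not necessarily the exact filter GUB used.

Code:
from PIL import Image, ImageFilter  # assumes Pillow is installed

# Load a portrait (hypothetical file name), convert to greyscale, and apply a
# simple 3x3 edge-detect kernel: bright output marks areas of high local contrast.
img = Image.open("portrait.jpg").convert("L")
edges = img.filter(ImageFilter.FIND_EDGES)
edges.save("portrait_edges.png")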
03-08-2021, 12:58 PM   #137
Veteran Member
eddie1960's Avatar

Join Date: Aug 2010
Location: Toronto
Photos: Gallery | Albums
Posts: 13,667
Originally posted by MossyRocks:
But even having a 100 MP sensor, someone would still need a lens that can resolve that much detail at the given f-stop and pixel pitch for it to be worthwhile.
True. In the case of the Fuji lenses, they have stated all were designed to resolve at 100 MP, and based on the 4 I own I can believe it. Even the simplest of the 4 (the 50mm 3.5) is very clean and sharp (if anything they can be a bit clinical looking... though the 110 is pretty wow). I only need to worry about 50 MP, though, with no plan to change for some time.
03-08-2021, 01:31 PM   #138
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by eddie1960:
True. In the case of the Fuji lenses, they have stated all were designed to resolve at 100 MP, and based on the 4 I own I can believe it. Even the simplest of the 4 (the 50mm 3.5) is very clean and sharp (if anything they can be a bit clinical looking... though the 110 is pretty wow). I only need to worry about 50 MP, though, with no plan to change for some time.
Remember that megapixels are an area count. 100 MP only means about a 1.7x increase in linear pixel count per image height (EDIT compared to the K-1).
And as Photoptimist said in post #83, only 1 in 4 pixels samples red, 1 in 4 samples blue, and 1 in 2 samples green.
I don't think my old Takumar lenses would feel obsolete in the slightest.
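To put a number on that area-versus-linear point, a quick sketch (Python; the 36.4 MP K-1 figure is assumed) of why the jump to 100 MP is only about a 1.7x linear gain:

Code:
import math

def linear_gain(mp_new, mp_old):
    # Megapixels count area; pixels per image height scale with the square root.
    return math.sqrt(mp_new / mp_old)

print(round(linear_gain(100, 36.4), 2))  # ~1.66x more pixels along each edge than the K-1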

Last edited by GUB; 03-08-2021 at 01:40 PM.
03-08-2021, 02:44 PM   #139
Veteran Member




Join Date: Feb 2016
Posts: 706
Originally posted by GUB:
No, of course there is no black and white boundary (just hypothetical and unsupported by empirical evidence). If you take the standard circle of confusion used in the calculators, we are already in the futile area. (But we don't.)
And then again, as photoptimist factually and logically stated in post #83, only 1 in 4 pixels samples red and only 1 in 4 samples blue, so if you quarter the numbers above the result is not quite so futile.
An example in practice: look at this image.
I have applied edge detect to it. White indicates areas of high contrast, i.e. in focus.
Only the absolute whitest of the white areas may benefit from a higher pixel count.
This image would in no way be unique; most portraiture would be like this.
The rest of the image would get no gain from higher resolution; in fact 5 MP would probably do the job. How can higher resolution improve a blur?
As you have already touched on, the old-school DoF tables, based on an 8x10 inch print viewed at approximately 10 inches, came about many years ago in relation to film and are these days pretty much outdated.

I would not put too much faith in the edge-finding algorithms, as they are based on contrast and can be misleading: resolution is not the same thing as acutance.

A 5 MP portrait would resolve enough detail only for a specific output size and device, depending on the desire/need to resolve fine features such as hair, eyelashes and skin pores; indeed, the lack of resolving ability at a certain output size may even be preferred for some subjects.

I am sorry, but I do not think that 'only the absolute whitest of white areas would benefit' makes sense, so I may be misunderstanding your intent. But a higher pixel count, all things being equal, equates to better-resolved detail.

Photoptimist did state correctly:
Quote:
The Bayer effect: A 100 MPix color sensor is really a 25 MPix red sensor + 50 MPix green sensor + 25 MPix blue sensor due to the Bayer filter, so 100 MPix isn't really as high res as it seems.
However, I do not think he meant to imply that you can quarter the resolution, as in this case we are really looking at colour sampling depth and the interpolation that has to take place in camera, estimating the probable colour of each pixel from its neighbours. The net result of the demosaic is a slight softening of image detail, which can be mitigated with capture sharpening. Resolution is a different matter, and it is generally agreed that at least 2 pixels are needed to resolve something.

Take a current 36 MP camera, e.g. the Pentax K-1 (pixel pitch 4.86 µm) or Nikon D800 (pixel pitch 4.87 µm), and let's assume they are exactly equal at 5 µm.

At its very best, the smallest detail these sensors can resolve spans the width of at least 2 pixels, therefore the system should be able to resolve detail as small as 10 microns. But can it?

So a better method for calculating DoF, as proposed by Douvas and Torger, is setting the blur diameter ("circle of confusion") to our 2x pixel pitch, i.e. 10 microns, which is much more in line with today's needs and obviously a lot smaller than the old recommendation of 30 µm.
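As a rough illustration of how much the chosen CoC moves the numbers, here is a sketch using the standard hyperfocal-distance approximations (Python; the 24mm f/8 focus-at-3m values are just an example, and this is not the exact model used by the apps linked below, which also fold in diffraction):

Code:
def dof_limits(f_mm, N, s_mm, coc_mm):
    # Standard thin-lens/hyperfocal approximation for the near and far limits
    # of acceptable sharpness, all distances in millimetres.
    H = f_mm ** 2 / (N * coc_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = float('inf') if s_mm >= H else s_mm * (H - f_mm) / (H - s_mm)
    return near, far

for coc in (0.030, 0.010):  # classic 30 micron CoC vs ~2x the K-1 pixel pitch
    near, far = dof_limits(f_mm=24.0, N=8.0, s_mm=3000.0, coc_mm=coc)
    print(coc, round(near), "mm ->", "inf" if far == float('inf') else round(far), "mm")

With the 30 µm CoC the 3 m focus distance already reaches past the hyperfocal point, while the 10 µm CoC shrinks the zone of acceptable sharpness to roughly 2.1-5.1 m, which is the point about needing a tighter CoC for a higher-resolution sensor.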

If you are really serious about the subject and have a real need to calculate the best outcome for digital acquisition, then you should have a look at these:
Depth of Field, Diffraction and High Resolution Sensors
Lumariver Depth of Field Calculator


Last edited by TonyW; 03-08-2021 at 03:17 PM.
03-08-2021, 05:33 PM - 1 Like   #140
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by TonyW:
As you have already touched on, the old-school DoF tables, based on an 8x10 inch print viewed at approximately 10 inches, came about many years ago in relation to film and are these days pretty much outdated.
Both times I quoted this I clearly stated that fact. I used them as a point of reference to demonstrate that to realise the effect of the extra megapixels we would have to think in terms of a smaller DoF for that higher-resolution image; that is, a smaller CoC for the 100 MP than what we choose for the K-1.
Originally posted by TonyW:
I would not put too much faith in the edge-finding algorithms, as they are based on contrast and can be misleading: resolution is not the same thing as acutance.
Maybe, but they clearly indicate the plane of focus. Anything out of this white area will have, by the laws of physics, a larger circle of confusion and therefore there will be sod all improvement from increased resolution.
Or are you saying even a blurred area would respond to the higher resolution?
Originally posted by TonyW:
However, I do not think he meant to imply that you can quarter the resolution, as in this case we are really looking at colour sampling depth and the interpolation that has to take place in camera, estimating the probable colour of each pixel from its neighbours. The net result of the demosaic is a slight softening of image detail, which can be mitigated with capture sharpening. Resolution is a different matter, and it is generally agreed that at least 2 pixels are needed to resolve something.
I am sorry, but all this boils down to: 1 in 4 pixels samples blue and the demosaicing guesses the other 3.
I don't believe the neighbouring red and green pixels can reliably contribute to the guesswork, because if there is any colour saturation involved their luminance values would bear no relevance to the blue value. So a smart guess from 1/4 of the resolution it is.
03-08-2021, 06:23 PM   #141
Veteran Member
eddie1960's Avatar

Join Date: Aug 2010
Location: Toronto
Photos: Gallery | Albums
Posts: 13,667
Originally posted by GUB:
Remember that megapixels are an area count. 100 MP only means about a 1.7x increase in linear pixel count per image height (EDIT compared to the K-1).
And as Photoptimist said in post #83, only 1 in 4 pixels samples red, 1 in 4 samples blue, and 1 in 2 samples green.
I don't think my old Takumar lenses would feel obsolete in the slightest.
I have used Taks on the GFX 50; even the 135 vignettes, though they are decent in 35mm mode. But they are not as sharp as modern lenses. Same with my Bronica PE lenses: decent performance and full sensor coverage, but not as sharp as the more modern lenses.

Last edited by eddie1960; 03-08-2021 at 07:37 PM.
03-08-2021, 06:25 PM   #142
Veteran Member




Join Date: Feb 2016
Posts: 706
Don't get snitty; this is supposed to be a discussion, i.e. an interchange of views, supported where possible by fact.
Originally posted by GUB:
Both times I quoted this I clearly stated that fact. I used them as a point of reference to demonstrate that to realise the effect of the extra megapixels we would have to think in terms of a smaller DoF for that higher-resolution image; that is, a smaller CoC for the 100 MP than what we choose for the K-1.
Yes, and that view is broadly supported.

Quote:
Maybe, but they clearly indicate the plane of focus. Anything out of this white area will have, by the laws of physics, a larger circle of confusion and therefore there will be sod all improvement from increased resolution.
Or are you saying even a blurred area would respond to the higher resolution?
No, merely trying to make you aware of the fact that resolution and acutance do not always go hand in hand, and this has nothing to do with higher resolution. A series of objects can be well resolved with low acutance and escape any edge filter's enhancement.

Quote:
I am sorry, but all this boils down to: 1 in 4 pixels samples blue and the demosaicing guesses the other 3.
I don't believe the neighbouring red and green pixels can reliably contribute to the guesswork, because if there is any colour saturation involved their luminance values would bear no relevance to the blue value. So a smart guess from 1/4 of the resolution it is.
I am also sorry, but the guess is far from smart and a world away from fact. In simple terms, demosaicing:
Quote:
Digital cameras use specialized demosaicing algorithms to convert this mosaic into an equally sized mosaic of true colors. The key is that each colored pixel can be used more than once. The true color of a single pixel can be determined by averaging the values from the closest surrounding pixels.
I will also add that this is where pixel shift scores, as the resolution lost in demosaicing a normal image is not an issue with PS, as is the case with Foveon sensors.
And this:
Another simple method is bilinear interpolation, whereby the red value of a non-red pixel is computed as the average of the two or four adjacent red pixels, and similarly for blue and green. More complex methods that interpolate independently within each color plane include bicubic interpolation, spline interpolation, and Lanczos resampling.
In simple terms, for a sensor to resolve something requires a minimum of two pixels. This is actually illustrated in many articles and forms the basis of a sharply observed point relating to DoF, illustrated in the links you do not seem to have read. Thus the real resolution of a sensor is closer to half its pixel count.
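To make the bilinear interpolation quoted above concrete, here is a minimal sketch for an RGGB mosaic, assuming Python with NumPy and SciPy; real raw converters use far more sophisticated algorithms, so treat this purely as an illustration of how the missing colours are averaged from sampled neighbours.

Code:
import numpy as np
from scipy.signal import convolve2d  # assumes SciPy is available

def bilinear_demosaic(mosaic):
    """Bilinear demosaic of a single-plane RGGB Bayer mosaic (float array, h x w)."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = {
        'R': (y % 2 == 0) & (x % 2 == 0),   # red sampled on even rows / even columns
        'G': (y + x) % 2 == 1,              # green on the other half of the checkerboard
        'B': (y % 2 == 1) & (x % 2 == 1),   # blue on odd rows / odd columns
    }
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)
    planes = []
    for ch in 'RGB':
        m = masks[ch].astype(float)
        # Average of the nearest sampled neighbours (normalised convolution)...
        est = convolve2d(mosaic * m, kernel, mode='same') / convolve2d(m, kernel, mode='same')
        est[masks[ch]] = mosaic[masks[ch]]  # ...keeping the measured value where one exists
        planes.append(est)
    return np.dstack(planes)                # h x w x 3 estimated RGB image

Every output pixel ends up with three values, but two of the three are estimates, which is exactly why the effective resolution is lower than the raw pixel count suggests.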


Last edited by TonyW; 03-08-2021 at 06:36 PM.
03-08-2021, 06:55 PM   #143
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by TonyW:
Don't get snitty; this is supposed to be a discussion, i.e. an interchange of views, supported where possible by fact.
Yes, and that view is broadly supported.

No, merely trying to make you aware of the fact that resolution and acutance do not always go hand in hand, and this has nothing to do with higher resolution. A series of objects can be well resolved with low acutance and escape any edge filter's enhancement.

I am also sorry, but the guess is far from smart and a world away from fact. In simple terms, demosaicing:

In simple terms, for a sensor to resolve something requires a minimum of two pixels. This is actually illustrated in many articles and forms the basis of a sharply observed point relating to DoF, illustrated in the links you do not seem to have read. Thus the real resolution of a sensor is closer to half its pixel count.
Not "snitty", and not sure where I implied it.
Still waiting for your opinion on: "Or are you saying even a blurred area would respond to the higher resolution?"

Originally posted by TonyW:
Quote:
Digital cameras use specialized demosaicing algorithms to convert this mosaic into an equally sized mosaic of true colors. The key is that each colored pixel can be used more than once. The true color of a single pixel can be determined by averaging the values from the closest surrounding pixels.
And this:
Another simple method is bilinear interpolation, whereby the red value of a non-red pixel is computed as the average of the two or four adjacent red pixels, and similarly for blue and green. More complex methods that interpolate independently within each color plane include bicubic interpolation, spline interpolation, and Lanczos resampling.
Not sure where you got this quote from - it doesn't appear to be from your links.
But what possible information can a pixel with a green filter over it, excluding the vast majority of blue light from it, impart to a neighbouring blue pixel?
03-09-2021, 12:52 AM   #144
Site Supporter




Join Date: Feb 2017
Photos: Gallery | Albums
Posts: 2,034
Originally posted by Risxsoul:
Hello fellow Pentaxians.
To me, the unwashed hobby photographer, my question becomes: if I take a 35mm film photograph and a digital photograph, both at the same ISO, f-stop and shutter speed, all else being equal, then print the film pic at the largest beautiful size, and the digital image at the same size, at what point in megapixels is the digital as good as the film shot? At what point is the digital better? Are we already there? Will we ever be there? Yes, I shoot Pentax APS-C (K-50), so will this format ever be as good as 35mm film? Is it already?
More or less agree with everything Normhead says in response to this. However, IMO there are still very valid reasons why film may be "better" for some pictures. A film B&W image exposed, processed and printed with good technique (and I stress with good technique) is a case in point. In addition, film images within certain print sizes (dependent on format) are still good enough, and the extra resolution of the digital image is often wasted. And then we come to two elephants in the room. The first is that digital imaging is just way more convenient, and most users, most of the time, will get technically superior images with digital. The second is what I call the texture difference of the final result between digital and film (thinking metaphorically, the difference between silk and wool). Some images just look better when taken on film due to its texture; others look better on digital.
03-09-2021, 06:54 AM   #145
Site Supporter
Michail_P's Avatar

Join Date: Nov 2019
Location: Kalymnos
Photos: Gallery
Posts: 3,006
Originally posted by Kobayashi.K:
The pixel race has ended. The new trend is eye, bird, squirrel and camel detection.
Still laughing!! ;P
As for the lenses, they are not enough for hypothetical more-than-150 MP scenarios. At least till we get an UltraHD **Pentax 43/f1.4 Limited Limited....
Megapixels are certainly rising in numbers in cellphones, where they actually are somewhat important. Otherwise, current cameras are perfectly capable of covering the vast majority of pro and commercial needs and virtually all enthusiasts.
03-09-2021, 02:08 PM   #146
Veteran Member




Join Date: Feb 2016
Posts: 706
Originally posted by GUB:
Still waiting for your opinion on: "Or are you saying even a blurred area would respond to the higher resolution?"
Seriously, the question seemed so absolutely silly that it was not seen as a question but rather a statement. Maybe you need to know, so: there is only one spot or plane in an image that is fully resolved, and that is the actual plane of focus. Either side of that are varying degrees of blur, increasing the further you get from that plane. Any blurred areas of the image will respond in as much as there will be more pixels in a higher-resolution capture recording the blur, but they will still remain blurred.

Quote:
Not sure where you got this quote from - it doesn't appear to be from your links.
Well, yes, the information in the quote is missing its source, but even a cursory search would have revealed these fairly widely understood facts about demosaicing.
Demosaicing - Wikipedia

http://web.stanford.edu/class/ee367/reading/Demosaicing_ICASSP04.pdf

Developing a RAW photo file 'by hand' - Part 1

Quote:
Strategy to Get the Three RGB Colors per Pixel

If in a raw file, we have information for only one RGB color value per pixel, there are two missing colors per pixel. The regular way to obtain them is using raw photo development software (either in-camera firmware or in desktop computer application). This software interpolates the colors around each photosite to compute —or rather to get an estimated value— the missing colors. This procedure is called de-mosaicing or de-bayering.
There are many demosaicing algorithms, some of them are publicly known or open source, while others are software vendor trade secrets (e.g. Adobe Lightroom, DxO Optics Pro, Capture One Software). If you are interested, here you have a comparative study of some demosaicing algorithms.
Quote:
But what possible information can a pixel with a green filter over it, excluding the vast majority of blue light from it, impart to a neighbouring blue pixel?
Well, it must be done with pixel dust if not with an interpolation algorithm to get an estimated value of the missing colours per pixel.


Quick notes (sorry not brief) relating to CoC and Megapixels

Common methods for calculation of CoC
I know there are a few methods of calculation. The so-called "Zeiss formula" calculates the CoC as d/1730, d being the diagonal measure of the original negative (or sensor), giving a CoC of 0.025 mm. Another widely used limit is d/1500, giving a CoC of 0.029 mm for full-frame 35mm.
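For reference, both limits fall straight out of the full-frame diagonal; a quick check (Python):

Code:
import math

d = math.hypot(36.0, 24.0)   # 35mm full-frame diagonal, ~43.27 mm
print(round(d / 1730, 3))    # "Zeiss formula": ~0.025 mm
print(round(d / 1500, 3))    # d/1500 limit:    ~0.029 mm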

Cameras and Pixel Pitch

Sony (IMX555CQR), the newest 100 MP FF sensor, 12,288 x 8,192 pixels = 0.003 mm or 3.00 µm
Pentax K-1 FF = 4.86 µm
Canon EOS 90D APS-C = 3.23 µm
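A hedged note on where those pitch figures come from: dividing the active sensor width by the horizontal pixel count gets you close, with the exact value depending on the active-area width assumed (hence ~4.88 µm below versus the 4.86 µm quoted above for the K-1).

Code:
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    # Pixel pitch in microns = sensor width / horizontal pixel count.
    return sensor_width_mm / horizontal_pixels * 1000

print(round(pixel_pitch_um(36.0, 12288), 2))  # ~2.93 um for the 12,288-wide 100 MP FF sensor
print(round(pixel_pitch_um(35.9, 7360), 2))   # ~4.88 um for the 7360-wide Pentax K-1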
Looking at 'standard' charts with CoC calculated as 0.03 mm, using a Blur Spot of 30 µm:
100 MP new Sony and the Pentax K-1, both with a 24mm lens, required DoF as close as possible at f/8 to infinity:
Focus point @ 8'4", Near focus = 4'3", Far focus = Inf.

Different method 1: George Douvas' TrueDoF or OptimumCS-Pro. Using the simple 2x pixel pitch and minimising the effects of diffraction and defocus blur:
Pentax K-1, 24mm, required DoF as close as possible at f/8 to infinity (2 x 4.86 µm rounded to Blur Spot = 10 µm)
Diffraction limited to f/5
Focus point @ f/5 = 50', Near focus = 25', Far focus = Inf.
Sony 100 MP, 24mm, same DoF requirement at f/8 to infinity (2 x 3 µm, Blur Spot = 6 µm)
Diffraction limited to f/3.2
Focus point @ f/3.2 = 140', Near focus = 70', Far focus = Inf.


Although I only showed the one method, I think Anders Torger's Lumariver DoF is potentially of greater use. Its default uses an Airy disc model, but when the pixel pitch is known the preferred method uses an Airy Pixel model with a scaling factor for Airy disc and pixel pitch.


I rarely use or need this type of calculation, but for critical output planning of a shot the apps can be very useful. I would also add that these figures are not carved in stone, and neither should they be taken as absolutes (beware of false precision). In practice you may have to alter the blur spot size and not worry about diffraction to reach your goals. A lot can now be mitigated in post, within certain boundaries, although, I am sure we all agree, get it right in camera as much as possible first. And many of us can just fly by the seat of our pants, with experience and practice, knowing the acceptable limits for our work.

In practice, moving to a 100 MP FF 35mm from another FF or APS-C, or indeed from either to MF, will bring its own challenges and introduce some limitations (lens resolving power should not be one of them, at least for system lenses).

Last edited by TonyW; 03-09-2021 at 02:16 PM.
03-09-2021, 02:34 PM   #147
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by TonyW:
Seriously, the question seemed so absolutely silly that it was not seen as a question but rather a statement. Maybe you need to know, so: there is only one spot or plane in an image that is fully resolved, and that is the actual plane of focus. Either side of that are varying degrees of blur, increasing the further you get from that plane. Any blurred areas of the image will respond in as much as there will be more pixels in a higher-resolution capture recording the blur, but they will still remain blurred.
So we agree then that there would be little bonus from increased megapixels in the black areas of the edge-detect image?

And as far as the CoC data strewn across the rest of the post, I wasn't aware we were in dispute over CoC application. Is there something you have taken issue with?
03-09-2021, 03:08 PM   #148
Veteran Member




Join Date: Feb 2016
Posts: 706
Originally posted by GUB:
So we agree then that there would be little bonus from increased megapixels in the black areas of the edge-detect image?

And as far as the CoC data strewn across the rest of the post, I wasn't aware we were in dispute over CoC application. Is there something you have taken issue with?
If the black area of the image contains no useful resolvable low-contrast areas, yes.

No dispute from me, just facts relating to CoC that may be useful to some who consider 100 MP not useful.

So I can take it that you now realise that real sensor resolution is around 1/2 the stated camera resolution due to the requirements for 2 pixels to resolve something and that the two colours missing from each pixel have to be interpolated from surrounding pixels?
03-09-2021, 04:26 PM   #149
GUB
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,760
Originally posted by TonyW:
So I can take it that you now realise that real sensor resolution is around 1/2 the stated camera resolution due to the requirements for 2 pixels to resolve something and that the two colours missing from each pixel have to be interpolated from surrounding pixels?
No problem with that. And isn't that similar to why they often measure lens resolution in line pairs per image height?
ps trying to get my head around this -
Originally posted by TonyW:
So I can take it that you now realise that real sensor resolution is around 1/2 the stated camera resolution
Wouldn't that be a lineal measurement and to compare to a Megapixel unit it would be 1/4 of the resolution?

EDIT "due to the requirements for 2 pixels to resolve something and that the two colours missing from each pixel have to be interpolated from surrounding pixels?[/quote]"

Perhaps that should be fine tuned to "surrounding pixels that sample the missing colours".

Last edited by GUB; 03-09-2021 at 04:32 PM.
03-10-2021, 04:22 AM - 2 Likes   #150
Veteran Member




Join Date: Feb 2016
Posts: 706
Originally posted by GUB:
No problem with that. And isn't that similar to why they often measure lens resolution in line pairs per image height?
ps trying to get my head around this -

Wouldn't that be a lineal measurement and to compare to a Megapixel unit it would be 1/4 of the resolution?
TBH, that was my thinking when originally introduced to digital camera imaging, upon realising that we were not able to get the full potential resolution. Then further confusion when 'experts' started to claim figures of around 50%. Being curious, and it being job-related at the time, I looked into this in more detail, relying on the opinions of others more expert than me.

So my basic understanding of this subject:
The limits of resolution of any sampling system are dictated, and can be predicted, by sampling theory, known as the Nyquist theorem. This dictates that a sampling system can only perfectly reproduce signals up to half of the sampling frequency. Needing two pixels to resolve a detail means one to capture the line and one to capture the 'not a line'.

The total sampling frequency is made up from a combination of the diagonal and horizontal sampling, with the diagonals of course sampling more detail than the horizontals and verticals. Also, as human vision is more sensitive to green information, the green part of a Bayer sensor is used to provide most of the resolution.
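As a side note on why green carries most of the resolution, the standard luma weighting (Rec. 709 shown here as an example, not anything camera-specific) gives green by far the largest share of perceived brightness:

Code:
def luma_709(r, g, b):
    # Rec. 709 luma: green contributes about 71% of perceived brightness.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luma_709(1, 0, 0), luma_709(0, 1, 0), luma_709(0, 0, 1))  # 0.2126 0.7152 0.0722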

Hence the comments that our 36 MP systems are only capable of resolving 18 MP. I have also seen mention somewhere of a higher figure of 70% quoted for a Bayer sensor! I think from a videographer with a Red system?

Edit: The Nyquist limit may be observed in a few situations with photographs containing horizontal or vertical patterns. If the detail is finer than two pixels you cannot distinguish those details. Instead you may see a wave pattern, slightly offset from the vertical or horizontal with jagged edges (moiré patterns), which occurs just before the Nyquist limit. I only saw it on a couple of occasions when using the Nikon D800E.
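A small sketch of that folding effect (Python; the frequencies are arbitrary): any pattern finer than half the sampling rate is recorded as a false, lower frequency, which is the moiré described above.

Code:
fs = 1.0            # one sample (photosite) per pixel
nyquist = fs / 2    # finest genuine detail: 0.5 cycles per pixel

def recorded_frequency(f):
    # Frequencies above the Nyquist limit fold back (alias) to a lower apparent frequency.
    return abs(f - round(f / fs) * fs)

for f in (0.3, 0.5, 0.7):
    print(f, "->", recorded_frequency(f))   # 0.7 cycles/pixel comes out as 0.3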

Quote:
EDIT "due to the requirements for 2 pixels to resolve something and that the two colours missing from each pixel have to be interpolated from surrounding pixels?
Quote:
Perhaps that should be fine tuned to "surrounding pixels that sample the missing colours".
Yes, I wish I had phrased it better

Last edited by TonyW; 03-10-2021 at 04:48 AM.