02-24-2018, 03:25 PM   #76
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
QuoteOriginally posted by photoptimist Quote
That scanner almost certainly uses a 3-line linear array sensor which means it captures 17 million red pixels, 17 million green pixels, and 17 million blue pixels for a total of 51 million pixels of data.
The Noritsu is a CCD device and captures the full frame at once.


Steve

02-24-2018, 03:39 PM   #77
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
QuoteOriginally posted by stevebrot Quote
(BTW...can you provide a link for the scandig reference? They have nothing about Noritsu on their Web site(s)? My experience is that Noritsu scans are only good to about 20 lp/mm.)
Found it: Vergleich unterschiedlicher Filmscanner-Typen: CMOS-Scanner, Flachbettscanner, Diascanner, virtuelle Trommelscanner

QuoteQuote:
An interesting question is which resolution such a laboratory scanner achieves in practice. We have scanned a USAF-1951 test chart with a Noritsu HS-1800. The maximum nominal resolution in the 35mm field is 4790 dpi. The adjacent image shows the inner part of the scan. You can recognize the 3 bars of the element 6.4. This corresponds to an effective resolution of approx. 4600 ppi according to our resolution chart. This is an excellent value and proves that the Noritsu HS-1800 achieves in practice what it promises, namely approx. 96% of its nominal resolution.
This model Noritsu is relatively new on the market and I have no experience with its output. Those are respectable numbers. At 4600 dpi the effective pixel count should be quite high. Noritsu claims 30 Mpx from 35mm. Why the Noritsu literature only says 17 Mpx for a 645 frame is a puzzle, however. At $10,000, I would probably opt for a used Nikon 9000 ED.
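For anyone wanting to check the quoted scandig figure: the USAF-1951 chart encodes resolution as 2^(group + (element - 1)/6) line pairs per mm. A quick sketch of the arithmetic (my own, not from the article):

```python
# Resolution encoded by a USAF-1951 target element (standard chart formula).
def usaf_lp_per_mm(group, element):
    return 2 ** (group + (element - 1) / 6)

def lp_per_mm_to_ppi(lp_mm):
    # One line pair spans two samples; 25.4 mm per inch.
    return lp_mm * 2 * 25.4

lp = usaf_lp_per_mm(6, 4)                          # "element 6.4" resolved in the test
print(round(lp, 1), round(lp_per_mm_to_ppi(lp)))   # 90.5 lp/mm, ~4598 ppi
```

That lines up with the "approx. 4600 ppi" and the 96%-of-4790-dpi claims in the quote.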


Steve

Last edited by stevebrot; 02-24-2018 at 03:44 PM.
02-24-2018, 03:51 PM   #78
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,122
QuoteOriginally posted by stevebrot Quote
The Noritsu is a CCD device and captures the full frame at once.


Steve
That's not what the spec sheet says (http://www.photoxport.com/file-manager/PDF's/Noritsu%20Labs/Scanners/ENS1800SPE_1.pdf)
02-24-2018, 04:02 PM - 1 Like   #79
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
QuoteOriginally posted by photoptimist Quote
That's not what the spec sheet says (http://www.photoxport.com/file-manager/PDF's/Noritsu%20Labs/Scanners/ENS1800SPE_1.pdf)
I stand corrected. This is something new for them and may explain the leisurely throughput compared to most mini-lab scanners, though 5.4 frames per minute auto-fed is hardly slow by film scanner standards.

BTW...the 17 Mpx for 645 is pixels, not sensor sites. Each pixel has 16-bit RGB. Pixels are sort of strange that way. The 645z's sensor actually has zero pixels,* although it is capable (through the image processor) of 50 Mpx output, with each pixel defined for 14-bit RGB. I know...picky, picky, but I strongly believe in sensels for capture.


Steve

* Pixels for capture purposes are a logical construct defining a point of a raster image. Pixels for display purposes, OTOH, while still an abstraction, have a physical reality. According to Wikipedia, they are "the smallest controllable element of a picture represented on the screen."


Last edited by stevebrot; 02-24-2018 at 04:28 PM.
02-24-2018, 05:31 PM - 1 Like   #80
Loyal Site Supporter




Join Date: Dec 2017
Photos: Gallery | Albums
Posts: 1,138
QuoteOriginally posted by stevebrot Quote
Found it: Vergleich unterschiedlicher Filmscanner-Typen: CMOS-Scanner, Flachbettscanner, Diascanner, virtuelle Trommelscanner
(a) This model Noritsu is relatively new on the market and I have no experience with its output. Those are respectable numbers. At 4600 dpi the effective pixel count should be quite high. Noritsu claims 30 Mpx from 35mm. Why the Noritsu literature only says 17 Mpx for a 645 frame is a puzzle, however. At $10,000, I would probably opt for a used Nikon 9000 ED.

(b) This model Noritsu is relatively new on the market and I have no experience with its output. Those are respectable numbers. At 4600 dpi the effective pixel count should be quite high. Noritsu claims 30 Mpx from 35mm. Why the Noritsu literature only says 17 Mpx for a 645 frame is a puzzle, however. At $10,000, I would probably opt for a used Nikon 9000 ED.

(c) BTW...the 17 Mpx for 645 is pixels, not sensor sites. Each pixel has 16-bit RGB. Pixels are sort of strange that way. The 645z's sensor actually has zero pixels,* although it is capable (through the image processor) of 50 Mpx output, with each pixel defined for 14-bit RGB. I know...picky, picky, but I strongly believe in sensels for capture.

Steve
(a) Sorry for the late response, I was out. You found the reference that I had found.

(b) I have wondered why the resolution per inch or mm is less with the 645 format than with the 35mm format. I assumed the reason was that the scanner uses a line element behind a lens, and the entire assembly "backs off" to cover the larger field of view (from its POV) when scanning the larger negative, so the total pixels in the scan are the same but stretched over a larger negative. I didn't check this proportion out, so it was only a wag.

(c) While I believe 17 Mpixels for the Noritsu scan of the 645 negative, I cannot believe that 50 million detectors, if you will, arranged in the 645Z's Bayer filtering scheme as photoptimist describes, have an effective resolution anywhere close to 50 million RGB pixels, no matter how processed. You may well be able to generate an image with 50 million RGB values, but the effective resolution will not be based on such a sampling density. Given the Bayer layout, the MTF and corresponding CTF can only be those well documented for 25 Mp sampling for green (in their respective axes' proportions), while red and blue responses are effectively limited to the sampling capability of 12.5 Mp (in their respective axes' proportions). In other words, the spatial bandwidth (cy/mm for MTF or lp/mm for CTF) of the sampling is established by each color's array density. The well-known function is listed here, inter alia.

Sampling MTFs | SPIE Homepage: SPIE
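The per-color sampling-density point can be sketched numerically; this is my own illustration with a hypothetical ~50 M-sensel chip, not something taken from the SPIE reference:

```python
# In an RGGB Bayer mosaic, each 2x2 tile holds one red, two green, and one
# blue sensel, so each color is sampled at a fraction of the total count.
def bayer_counts(height, width):
    """Per-color sensel counts for an RGGB mosaic (even height and width)."""
    quads = (height // 2) * (width // 2)
    return {'R': quads, 'G': 2 * quads, 'B': quads}

counts = bayer_counts(6144, 8192)   # hypothetical chip, ~50.3 M sensels total
print({c: round(n / 1e6, 1) for c, n in counts.items()})
# green is sampled at ~25.2 M sites, red and blue at ~12.6 M each
```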

Differing MTFs among colors is also seen with Portra film, for example, where one might guess that the grain size distributions are slightly different for each color. I don't have it, but this is treated, I vaguely recall from my relative youth, in Goodman: Statistical Optics.
02-25-2018, 07:49 AM - 1 Like   #81
Loyal Site Supporter




Join Date: Dec 2017
Photos: Gallery | Albums
Posts: 1,138
In all fairness to this topic, after giving it some more thought, I should note that when imaging a B/W subject like the USAF resolution test target, all colors of pixels see the same contrast and can effectively contribute to the sampling density. In such cases, perhaps asserting a 50 Mpixel number for the camera performance is valid, particularly if the data are displayed/printed in black and white. A green test target could only be imaged with 25 Mpixel sampling density, however.

I think this will carry over to many natural scenes where the detail is structurally similar in all colors, and a B/W print at full resolution would show detail near the 50 Mpixel level. A sufficiently magnified color print of the raw data, however, should show what looks like chromatic aberration everywhere, Trinitron TV style. On the other hand, broad tonal areas such as in portraits would have lower resolution in rendering the tonal variation.

I am unclear what the effective resolution might be if the tri-color image data are processed (interpolated) so that a photographic print could be made with each pixel a mixed color as is the case with optical printing from negatives. In other words, a print that wasn't supposed to look like an OLED TV up close. Interpolation has to degrade resolution. The interpolation filter has a Fourier transform that, when multiplied point by point with the MTF of the sampling function, will reduce the effective bandwidth and hence resolution.
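The bandwidth argument in the last paragraph can be made concrete: linear interpolation is convolution with a triangle kernel, whose Fourier transform is sinc squared, and multiplying any sampling MTF by that factor can only attenuate it. A minimal sketch of my own, assuming plain linear interpolation:

```python
import math

# |Fourier transform| of the triangle (linear-interpolation) kernel,
# evaluated at normalized frequency f in cycles per sample.
def linear_interp_mtf(f):
    if f == 0:
        return 1.0
    s = math.sin(math.pi * f) / (math.pi * f)
    return s * s

for f in (0.0, 0.25, 0.5):   # DC, half-Nyquist, Nyquist
    print(f, round(linear_interp_mtf(f), 3))
# the multiplier falls from 1.0 at DC to about 0.405 at the Nyquist
# frequency, so whatever MTF the sampling had, the result has less.
```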
02-25-2018, 09:53 AM - 1 Like   #82
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,122
QuoteOriginally posted by kaseki Quote
In all fairness to this topic, after giving it some more thought, I should note that when imaging a B/W subject like the USAF resolution test target, all colors of pixels see the same contrast and can effectively contribute to the sampling density. In such cases, perhaps asserting a 50 Mpixel number for the camera performance is valid, particularly if the data are displayed/printed in black and white. A green test target could only be imaged with 25 Mpixel sampling density, however.

I think this will carry over to many natural scenes where the detail is structurally similar in all colors, and a B/W print at full resolution would show detail near the 50 Mpixel level. A sufficiently magnified color print of the raw data, however, should show what looks like chromatic aberration everywhere, Trinitron TV style. On the other hand, broad tonal areas such as in portraits would have lower resolution in rendering the tonal variation.

I am unclear what the effective resolution might be if the tri-color image data are processed (interpolated) so that a photographic print could be made with each pixel a mixed color as is the case with optical printing from negatives. In other words, a print that wasn't supposed to look like an OLED TV up close. Interpolation has to degrade resolution. The interpolation filter has a Fourier transform that, when multiplied point by point with the MTF of the sampling function, will reduce the effective bandwidth and hence resolution.
Very good points.

Yes, for a B&W resolution test target, a 50 MPix Bayer filter sensor does test quite well. But shoot that same test target through either a deep red or a blue filter and the measured resolution plummets to only 12.5 MPix. (Of course, the resolution of color film in each of the colors would also be less than the B&W resolution.)

Interpolation certainly does degrade resolution if linear interpolation is used. Consider a single row of pixels of a Bayer filter sensor which might have a color filter pattern of RGRGRGRGRG. If we want to estimate the amount of R in pixel #4 (which is a green-sensitive pixel), we could use linear interpolation and estimate the value as half-way between the red values seen in pixels #3 and #5. That linear interpolation would produce an accurate red-value for pixels in scenes that have smooth variations in tonality (e.g., the reds on a gentle curve of a rose petal, a sky at sunset, or the middle of a bokeh disk from a red light). But that same linear interpolation would blur a scene with a sharp edge (e.g., the edge of the rose petal, where the red sky meets the black horizon, or the edge of the red bokeh circle).
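The RGRGRG example above can be sketched in a few lines; this is my own toy illustration (integer midpoints for readability, not a real demosaic algorithm):

```python
# Fill in the missing red value at each green position of an RG row by
# averaging the two neighboring red samples (plain linear interpolation).
def interp_red(red_samples):
    """red_samples: red values at positions 0, 2, 4, ...; returns the full row."""
    row = []
    for left, right in zip(red_samples, red_samples[1:]):
        row += [left, (left + right) // 2]   # measured red, then interpolated
    row.append(red_samples[-1])
    return row

print(interp_red([10, 20, 30, 40]))   # smooth gradient: [10, 15, 20, 25, 30, 35, 40]
print(interp_red([0, 0, 100, 100]))   # sharp edge:      [0, 0, 0, 50, 100, 100, 100]
# the gradient is reconstructed faithfully, but the edge picks up a smeared
# half-value at the boundary, which is exactly the blurring described above.
```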

The modern approach to reconstructing the full 3-color RGB values of each pixel in a scene is to use the scene data to statistically estimate what's going on in the image (is a patch of pixels part of a smooth change in tonality, or is there an edge, or a texture, or what?). The values assigned to the missing RGB channels then depend on the estimated type of image element, followed by calculations appropriate to that category of scene element. It's not perfect, but the resulting resolution will be higher than that expected from pure interpolation.


Note: if the actual scene contains a tiny red berry (or tiny red star) that happened to show up only in green pixel #4, then it will be entirely missing from the digital image!

03-24-2018, 10:22 AM - 1 Like   #83
Site Supporter




Join Date: Feb 2018
Location: NoVA
Posts: 635
Resolution of Film vs Digital

QuoteOriginally posted by gofour3 Quote
Yep this was standard practice for printing a slide, however Kodak no longer makes the film to do that.

Phil.
Edit: I didn’t realize what a late hit this was until after I posted it. But I think it still may add something to the discussion for posterity.

The main reason for internegs was that color positive print paper was contrasty, and goodness knows slides, which are intended for direct viewing, are contrasty enough. The alternative was dye transfer, which was expensive, difficult, and even then required the transparencies to be exposed in a certain way. (But dye transfer got amazing results; the work of Eliot Porter comes to mind.) DT was also the only archival color process in those days.

Then, Cibachrome came out. Interneg use dropped like a stone after that. Cibachrome also had a contrast problem, but lots of things became possible because of it.

I printed 35mm Kodachrome on Ciba at 16x20, and at the time was pretty impressed with the results. But only 1 in 100 slides had what it took to work at that size. I look at those prints now (what few have survived) and I realize that my own standard of sharpness has increased over the years.

On the topic:

Resolution is a red herring. It’s not an important question, but rather is a pseudo-scientific surrogate for a sense of detail, which is more perceptual.

I published an online article in 1999 exploring the $1000 digital darkroom. Even then, it was possible to scan film with cheap equipment and see detail at the Nyquist limit (something like 50 lines/mm for the 2700-ppi film scanners then available).

What I want from both scanners and digital sensors is pixels crowded together closely enough so that pixel spacing becomes entirely unimportant. Then, the detail in the image will be limited only by lens and technique.

I rather think we are there now. One complaint we hear about the 645z is the lack of modern lenses. But most of those lenses were highly respected, say, in the '90s. Why is that? Because nobody ever blew up a 6-micron pixel to a hundredth of an inch on a computer monitor (a 42x enlargement).

To wit: I had a range of lenses for my Canons that I thought were excellent, for 35mm. When I bought my first digital camera, a 6mp Canon 10D, I immediately started replacing lenses. Then, Canon came out with the 5D, the first relatively affordable 24x36 DSLR, and Canon started replacing their lenses. Why? People had sensors that exceeded lenses, and now they wanted lenses that exceeded sensors. It’s nuts: few ever print or display large enough to notice the difference. But that launched the resolution race, which is still in full swing.

Back to detail: I try to attain a sense of endless detail for much of my work. I want the viewer to believe that the only reason they can't see more detail is that they forgot to bring a magnifying glass. That is a tougher standard than "normal viewing distance". I get that with every camera in my collection, film or digital, small or large, cheap or expensive. I simply don't enlarge images beyond a size that maintains that illusion.

I have medium-format photos I can’t enlarge beyond 8x10. In one case, it’s because the atmosphere was turbulent and obscured detail on a distant ridge:


(180mm Zeiss Sonnar, 6x6 Fuji Reala, f/16, big and heavy tripod, accurate focus, mirror locked up, proven technique, etc.)

But at 8x10, it’s as crisp as I could hope for.

There are those who are more strict than I am, and this thread already contains a post by one who prefers 8x10 contact prints. I’ve never seen a contact print that didn’t look sharper than any enlargement.

Sometimes, the product dictates the choice. I recently photographed a stained glass window:



They wanted to print it on transparent plastic film at about 150% of life size, and have it be detailed enough that people could walk up to it and still not be sure it wasn’t stained glass. The target print was 60” wide—that’s 18,000 pixels I needed. I photographed the original as three 6x12 film exposures, using the rear rise and fall in my Sinar view camera to avoid distortion. I had to scan each end of the 6x12 frame in my Nikon scanner, and ended up merging six 4000 ppi scans. I got the resolution I needed.

But I didn’t get the dynamic range I really needed. The image exceeded the 10-stop range of the Kodak Ektar I was using. Sure, I could have exposed the highlights into oblivion, but then my scanner wouldn’t have been able to punch through the negatives and I’d have had to triple costs and time by getting laser scans. The highlights would still have blocked up. Instead, I made the decision to let the shadows go black. (I still had enough front side exposure to retain detail on the soldered joints.)

I had considered renting a 645z, but that would have tripled costs, too. An 8x10 camera (if I had one) would not, on reflection, have solved the problem. I can only scan that film at 2000 ppi.

In this case, detail trumped dynamic range, and film was the efficient solution for me.

The point of all this is to say that both film and digital have an abundance of resolution, to the point we use it to reject formerly revered lenses, when handled correctly.

Resolution is no longer the discriminator between the two capture technologies. The discriminators now are dynamic range, sensitivity, noise/grain, workflow, convenience, cost, and the available surrounding equipment.

The notion that digital has less range because it can’t be overexposed as much is frankly baffling. The 645z sensor has every bit of 14 stops of range, which makes it possible to underexpose it by several stops to preserve highlights while also preserving the ability to pull the shadows up. Try that with negative film. It’s like slide film, with about 8 additional stops worth of shadow separation. Only black-and-white film processed in compensating developers like pyro can boast range like that.

When I upgrade my Canon 5D, I will probably not get a 5Ds, but will rather get a used 5DIII. 22mp is enough. But I would like all that extra sensitivity.

Rick “who still works in film and digital” Denney

Last edited by rdenney; 03-24-2018 at 10:49 AM.
03-25-2018, 08:31 AM   #84
Loyal Site Supporter




Join Date: Dec 2017
Photos: Gallery | Albums
Posts: 1,138
QuoteOriginally posted by rdenney Quote
Edit: I didn’t realize what a late hit this was until after I posted it. But I think it still may add something to the discussion for posterity.
....

I have medium-format photos I can’t enlarge beyond 8x10. In one case, it’s because the atmosphere was turbulent and obscured detail on a distant ridge:

....

Rick “who still works in film and digital” Denney
I never find useful information to be too late, although eventually I might be too 'late' for it. Thanks for all the perspective.

I had been wondering whether atmospheric turbulence was noticeably intrusive on any shots not involving a lens long enough for the pixels to subtend 20 microradians or less and a shutter speed longer than 1/100 s, more or less. A lot of landscape photos are made at dawn and dusk, and while the usual argument is the color of the sunlight, turbulence is normally less at those times, perhaps aiding the photographer.
07-27-2021, 01:45 AM   #85
Loyal Site Supporter
TDvN57's Avatar

Join Date: Jul 2011
Location: Berlin
Photos: Gallery
Posts: 1,149
Talk about being late to the conversation.....

I don't want to chase down the rabbit hole of guessing the resolution of film versus digital. However, a recurring question remains unanswered: whether the Pentax film-era lenses can resolve adequately for high-resolution (high-Mpx) sensors. For example, we frequently hear that the 645 full frame sensor (150 Mpx), or even the 645 crop frame at 100 Mpx, has higher resolution than what the film-era lenses can resolve.

Whilst that might be the case for some lenses, I am not so quick to jump to that conclusion for some of the known high-end performers.

So for those that are determined to establish a correlation between film resolution and digital, I recommend this article written by Tim Vitale (version 24, Aug 2010).

https://cool.culturalheritage.org/videopreservation/library/film_grain_resol...eption_v24.pdf

It is a fairly technical article, and he clearly explains the difference between grain on film and resolution, and the misconceptions about these terms in the film industry. If I can try to summarize: he explains that the resolution is determined by the silver-halide crystals on the film, both their size and their density on the film. The crystals range in size from 0.2 microns to 2.0 microns. Keep in mind that the wavelengths of visible light are around 0.4 to 0.75 microns.

It would appear that for some films the resolution could be in the range of 1 to 2 microns. This is my own conclusion, although I have to admit it is trying to describe an analog phenomenon with a digital reference.


I am proposing a different approach in an attempt to get some answers and make the data more relatable. Since these days we normally look at analogue pictures after they have been scanned, the analogue original is no longer relevant. The quality of the scanned image is totally dependent on the scanner and the scanner operator's skills and technique.

If we assume that we have a perfect scanner, a perfect operator, and perfect software, producing a perfect representation of the film image, then we can assume that we have removed the additional imperfections of the film-to-digital transformation process. At least as far as resolution is concerned.

Hope you concur with me so far....

Let's assume the only variable in our perfect scanning system is the resolution or dpi; then we have removed the uncomfortable leap of comparing apples and oranges.

At this point it is a fairly easy calculation to see at which dpi we need to scan (a perfect scan) to match the resolution of a digital sensor. I am avoiding the term image quality, because to me image quality is more than just resolution. Thus we only look at a single factor: the resolution of the images.

Here are the correlations I found:

To match the resolution of a perfectly scanned image to that of the following cameras (or sensors):

Pentax 645z: +/- 4800 dpi
Fuji 50mpx: +/- 4800 dpi
Fuji 100 mpx: +/- 6800 dpi
Pentax KP : +/- 6500 dpi
Pentax K3 iii: +/- 6800 dpi
Sony sensor 150 mpx 645 FF: +/- 6800 dpi
Hasselblad H6D 100 and 400 full frame: +/- 5600 dpi
Phase One XF IQ4 100 full frame: +/- 5500 dpi
Phase One XF IQ4 150 full frame: +/- 6800 dpi

These dpi numbers are rounded to represent an order of magnitude because of conflicting information available. They refer to real native dpi in both the X and Y axes (dpi and lpi), not upscaled dpi. Also remember this correlation is with our imaginary perfect scanning system.
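One plausible way to arrive at numbers like these is to express the sensor's own pixel pitch as dots per inch across the sensor. The pixel counts and sensor widths below are approximate published figures I'm assuming for illustration, not values from this thread:

```python
# Pixel pitch expressed as "dpi" across the sensor width.
def sensor_dpi(pixels_wide, sensor_width_mm):
    return pixels_wide / (sensor_width_mm / 25.4)

print(round(sensor_dpi(8256, 43.8)))   # Pentax 645z (8256 px / 43.8 mm): ~4788, i.e. +/- 4800 dpi
print(round(sensor_dpi(6192, 23.3)))   # Pentax K-3 III (6192 px / 23.3 mm): ~6750, i.e. +/- 6800 dpi
```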

So how does this bring us to the lens resolving issue?
Lens resolution is measured by the ability to resolve line pairs, either LP per mm or LP per inch; pick your own sauce. I'll stick with LP/mm. Although, to stay with reality and practicality, we don't have the data on all these older lenses, and most of us (me included) have neither the equipment nor the skills to really measure the LP resolution of these lenses.

So how can we compare the resolving capabilities of our lenses and if they will stand up to the requirements of sensors like the 100 mpx 645 crop frame and full frame or the 150 mpx 645 full frame?

If we look at our dpi correlation chart above, we see that the K3 iii and the KP have similar dpi resolution to the 150 mpx 645 full frame sensors and the Fuji 100 mpx 645 crop frame sensor.

The Hasselblad and Phase One 100 mpx use a lower density sensor on 645 FF and correlate with a 5500 dpi scan.

Thus, you can test your legacy lenses on the K3iii, KP, or Fuji 100, and I think it is safe to assume they will perform equally well or poorly on the larger sensors, except that with this test you cannot evaluate the image circle beyond the test sensor.

Alternatively, you could take some of your older film images and have them scanned at the higher resolutions to verify whether the lenses were able to resolve adequately.

So how much resolution is enough?
Consider that the human limit to resolve line pairs is about 11 to 12 line pairs per mm, yet all the sensors referenced above have LP/mm resolutions from 94 to 133, roughly ten times more than what we can see. Also consider how much we enlarge and crop images, since we all have access to some form of digital "dark room" where we manipulate images endlessly, way beyond what was ever possible in real-life darkroom scenarios.

For example, I regularly print 150 inch x 44 inch enlargements and I really need all the resolution I can get. 94 LP/mm performs adequately, but only just. I can see the difference between a large print at 94 LP/mm and one at 130 LP/mm.
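The 94 to 133 LP/mm figures follow directly from the dpi chart above, since one line pair needs two samples:

```python
# Convert a scan/sensor dpi figure to line pairs per mm.
def dpi_to_lp_per_mm(dpi):
    return dpi / 25.4 / 2

print(round(dpi_to_lp_per_mm(4800), 1))   # ~94.5 lp/mm
print(round(dpi_to_lp_per_mm(6800), 1))   # ~133.9 lp/mm
```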

For the sake of discussion I have ignored that larger sensors produce sharper images than their smaller form factor counterparts. I think that is a related issue, yet not directly addressing the resolving abilities of the lenses.

Please share your thoughts and I hope this belated posting will convince the more knowledgeable amongst us to share their wisdom and stretch our small dark rooms to the next level of understanding. :-)

Eagerly awaiting to learn more....

Last edited by TDvN57; 07-27-2021 at 01:57 AM.
07-28-2021, 05:06 AM - 2 Likes   #86
Site Supporter




Join Date: Feb 2018
Location: NoVA
Posts: 635
Your visual acuity limit is a bit generous. I think of it as closer to 5 lp/mm for attaining my objective of a sense of unlimited detail when viewed closely.

One reason it's hard to measure lens acuity with a single number is that the boundary between enough and not enough is fuzzy. I was measuring lenses by counting how many line pairs I could see at a modulation transfer (MTF) of 50%, which preserves apparent contrast, and at 10%, which is minimum discernible acuity, using targets with line pairs that varied from white to black according to a sine wave. Most people using a USAF target are measuring 10% MTF with binary line pairs (I've done that, too). The latter doesn't say how resolved a real image will appear and is too optimistic.

Manufacturers measure MTF at specific spatial resolutions, rather than my easier approach of measuring spatial resolution at specific target MTF.

One could argue that a scanner’s job is to reproduce the grain structure, and if I’m trying to duplicate the film at a fundamental level, that’s the objective. But that is beyond what is needed to make real prints even at the film’s limiting degree of enlargement. And it’s beyond what a sensor needs to make a print that size. Grain is random noise, whereas sensels are ordered in rows. Random noise is far more tolerable (like a quiet hiss in audio) compared to aliasing (like loud intermodulation distortion in audio).

We simply measure how big a print can be given the lens, and desire that our film or sensor isn’t the limiting factor.

But what sells sensors and lenses is what we see at 100% in Photoshop. I have overcome that itch. I can read the name tags in a group photo of 70 people made using good technique with the 645z and the film-era 45-80 zoom. That’s good enough for me.

I just bought a 512 GB memory card for my 645z. Eventually its contents have to fit on my computer hard drive and be backed up in my backup server. Why in the world would I want bigger files?

Rick “this thread is like watching a chess game where moves are made once a year—lol” Denney
07-29-2021, 06:48 AM   #87
Loyal Site Supporter
TDvN57's Avatar

Join Date: Jul 2011
Location: Berlin
Photos: Gallery
Posts: 1,149
QuoteOriginally posted by rdenney Quote
Your visual acuity limit is a bit generous. I think of it as closer to 5 lp/mm for attaining my objective of a sense of unlimited detail when viewed closely.

One reason it’s hard to measure lens acuity with a single number is that the boundary between enough and not enough is fuzzy. I was measuring lenses by measuring how many pairs I could see at a modulation transfer (MTF) of 50%, which preserves apparent contrast, and 10%, which is minimum discernible acuity, using targets with line pairs that varied from white to black according to a sine wave. Most using a USAF target are measuring 10% MTF using binary line pairs (I’ve done that, too). The latter doesn’t say how resolved a real image will appear and is too optimistic.

Manufactures measure MTF at specific spatial resolutions, rather than my easier approach of measuring spatial resolution at specific target MTF.

One could argue that a scanner’s job is to reproduce the grain structure, and if I’m trying to duplicate the film at a fundamental level, that’s the objective. But that is beyond what is needed to make real prints even at the film’s limiting degree of enlargement. And it’s beyond what a sensor needs to make a print that size. Grain is random noise, whereas sensels are ordered in rows. Random noise is far more tolerable (like a quiet hiss in audio) compared to aliasing (like loud intermodulation distortion in audio).

We simply measure how big a print can be given the lens, and desire that our film or sensor isn’t the limiting factor.

But what sells sensors and lenses is what we see at 100% in Photoshop. I have overcome that itch. I can read the name tags in a group photo of 70 people made using good technique with the 645z and the film-era 45-80 zoom. That’s good enough for me.

I just bought a 512 GB memory card for my 645z. Eventually its contents have to fit on my computer hard drive and be backed up in my backup server. Why in the world would I want bigger files?

Rick “this thread is like watching a chess game where moves are made once a year—lol” Denney

I concur with you. I guess my point is that none of the 645 or 6x7 film photographers will object to a scan of their film pictures at 6400+ dpi.

None of them will say that the lenses they have cannot resolve enough detail for a 6400+ dpi scan.

Yet a 6400dpi scan is on par with the most advanced high resolution sensors available.

Thus, I believe that the current range of Pentax 645 lenses will not perform much different than what they are currently performing. Film or digital.
07-30-2021, 08:44 AM   #88
Moderator
Loyal Site Supporter
Wheatfield's Avatar

Join Date: Apr 2008
Location: The wheatfields of Canada
Posts: 15,981
QuoteOriginally posted by sibyrnes Quote
I have seen it stated and accepted by most here that the resolution of digital photography has surpassed that of film. It seems to be a very confusing issue to me. The comparisons I have seen are of scanned film images compared to digital camera images. Would not such comparisons be dependent on the resolution of the scanner used? How does the resolution of a 35mm slide (the actual slide, not a scan) compare to a digital image, and how does one go about making such comparisons? I have read articles where the stated resolution of 35mm film varies from 20 mp to as high as 80 mp. What's the real story?
Very late to the conversation. I didn't realize it was several years old until after I had posted.

When we were transitioning from film to digital, film was still the shooting medium, but we were scanning film to create a digital image.
A friend of mine was, at the time, very much on the cutting edge of the technology. He was working for one of our Crown Corporations as an imaging technician, which gave him a very large budget for equipment, and he was working with Photoshop right from the very beginning of the program.
His thoughts were that a 35mm negative was good for a 6mp scan, but not much more. After that, all that was being scanned was the grain structure of the film, not image detail.
He was working mostly with 100-400 ISO film, so I suspect something like Ektar 25 or Fuji Velvia 50 would give a greater resolution, but I doubt very much if the useful resolution would exceed 10mp.

When I got my *istD, I compared optical prints from film with digital prints from the DSLR. I found that, detail-wise, the DSLR was about on par with 35mm film, but due to the nature of how digital files were sampled, the noise (grain, if you will) was more on par with medium format film, at least for sizes up to 12x18.
Now there is no contest. With full frame digital exceeding 36mp, 35mm film cannot compete on a pure quality level. I can get prints from my K1 that are easily on par with what I could get from my Pentax 6x7, though it is still not quite up there with 4x5 prints.
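For context, one can invert the calculation and ask what scan resolution a 35mm frame would need to match a ~36 MP full-frame file. A minimal sketch, using the K-1's 7360-pixel-wide output across the 36 mm frame width:

```python
# dpi a 35mm (36 mm wide) scan needs to match a given horizontal pixel count.
MM_PER_INCH = 25.4

def dpi_to_match(px_wide, frame_width_mm=36.0):
    """Scan dpi at which `frame_width_mm` of film yields `px_wide` pixels."""
    return px_wide * MM_PER_INCH / frame_width_mm

# Pentax K-1 files are 7360 px wide:
print(round(dpi_to_match(7360)))  # ~5193 dpi
```

So matching a K-1 file on paper takes a roughly 5200 dpi scan of 35mm film, beyond what most film stocks resolve in practice, which is consistent with the "no contest" conclusion above.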

Last edited by Wheatfield; 07-30-2021 at 08:53 AM.
07-30-2021, 03:59 PM   #89
Loyal Site Supporter
Loyal Site Supporter
TDvN57's Avatar

Join Date: Jul 2011
Location: Berlin
Photos: Gallery
Posts: 1,149
QuoteOriginally posted by Wheatfield Quote
Very late to the conversation. I didn't notice that it was several years old until after I had posted.

When we were transitioning from film to digital, film was still the shooting medium, but we were scanning film to create a digital image.
A friend of mine was, at the time, very much on the cutting edge of the technology. He was working for one of our Crown Corporations as an imaging technician, which gave him a very large budget for equipment, and he was working with Photoshop right from the very beginning of the program.
His thoughts were that a 35mm negative was good for a 6mp scan, but not much more. After that, all that was being scanned was the grain structure of the film, not image detail.
He was working mostly with 100-400 ISO film, so I suspect something like Ektar 25 or Fuji Velvia 50 would give a greater resolution, but I doubt very much if the useful resolution would exceed 10mp.

When I got my *istD, I compared optical prints from film with digital prints from the DSLR. I found that, detail-wise, the DSLR was about on par with 35mm film, but due to the nature of how digital files were sampled, the noise (grain, if you will) was more on par with medium format film, at least for sizes up to 12x18.
Now there is no contest. With full frame digital exceeding 36mp, 35mm film cannot compete on a pure quality level. I can get prints from my K1 that are easily on par with what I could get from my Pentax 6x7, though it is still not quite up there with 4x5 prints.

I think it also depends on the type of scan and the film. A wet scan will give much better results than a dry scan, regardless of resolution. With a dry scan at high resolution you start to pick up the grain structure of the film itself; with a wet scan that is much less likely, at least up to 6400 dpi.
07-31-2021, 07:41 AM   #90
Pentaxian
ZombieArmy's Avatar

Join Date: Jun 2014
Location: Florida
Posts: 3,210
All I'm gonna say is that darkroom prints of my 35mm film at very large sizes look better than some of my older, lower-res digital files. They have a different look to them as well. But at the end of the day I don't think it matters; film is good enough for large prints if needed.