12-27-2017, 02:41 PM   #46
GUB
Loyal Site Supporter
Loyal Site Supporter
GUB's Avatar

Join Date: Aug 2012
Location: Wanganui
Photos: Gallery | Albums
Posts: 5,763
Originally posted by cyberjunkie:
Another one is affection, or historical interest. I thought that the last designs by a genius like Bertele should have been good enough, so I purchased the Schacht 2.8/35mm.
Then there is the pleasure of handling (and using!) a well-made example of opto-mechanics, but the main reason is the hope (which with time becomes an educated guess) of finding a new paintbrush, or a new color for the palette, that would in some way help me develop my own humble way of painting with the camera.
I think this is about as good a justification for having an unreasonable number of old lenses as any idea I can think up. I have a few too many sitting around!!
I wonder whether we underestimate the iris as a source of some of the subtle differences between lenses. Of course we know the shape of the aperture makes a big difference (especially in the oof areas), but how about slightly different placement of the iris within the lens subtly affecting the rendering?

12-27-2017, 05:00 PM - 2 Likes   #47
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,129
Originally posted by cyberjunkie:
You're spot on with all your points. But one...
Even the most advanced multicoating is not 100% effective. Each glass-to-air surface reflects back (instead of refracting and letting through) at least 7% of the incoming light. IIRC any uncoated element reflects as much as 30%. That's why for long time the glass-to-air surfaces were kept to a minimum, and the Planar (which actually predates other common designs like the Cooke triplet and the Tessar) was practically abandoned until the vacuum coating process was invented.
So the actual number of elements makes a difference, albeit not as relevant as in the past, even with modern multicoated objectives.
Your history is correct but your numbers are way off. The glass-air reflection for uncoated lenses is only 4%-5% for basic crown and flint glasses and up to 8-10% for high-index lead and rare-earth glasses. That's for uncoated. Single coated lenses are usually almost half that (2.5%-3%) and modern high-quality multi-coating gets the number down to 0.5%.
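Those uncoated figures follow directly from the Fresnel reflection formula at normal incidence. A quick sketch (the refractive indices are illustrative values for typical glass types, not data for any specific lens):

```python
# Fresnel reflectance at normal incidence: R = ((n2 - n1) / (n2 + n1))**2
def reflectance(n_glass, n_air=1.0):
    """Fraction of light reflected at one uncoated glass-air surface."""
    return ((n_glass - n_air) / (n_glass + n_air)) ** 2

# illustrative refractive indices for common optical glass families
for name, n in [("crown (BK7-like)", 1.52), ("dense flint", 1.72), ("rare-earth", 1.90)]:
    print(f"{name}: {reflectance(n):.1%}")
```

With these numbers, crown glass lands near 4.3% per surface and high-index glass near 9.6%, matching the 4%-5% and 8-10% ranges above.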

These reflections have two main effects on the image: reduced transmission and flare.

Reduced transmission comes from light reflected by a glass-air interface, which bounces light from the scene back out of the lens. An uncoated Cooke triplet with 6 glass-air interfaces and about 4%-5% reflection per interface will reflect a total of about 24%-30% of incoming light back out (note: I've simplified the math here to make it easy. The numbers for the exact math aren't much different). The point is that only about 70%-76% of the scene's light reaches the film or sensor. It sounds like a huge loss but it's only about 1/2 stop, and it has no effect on color saturation, tonality, dynamic range, micro-contrast, rendering, or anything else. It's literally like someone turned down the lights just a little. With modern multicoating, a lens such as the Zeiss Otus 55 f/1.4 only loses 13% of the light. Note that the Otus has four times as many elements as the Cooke (12 vs. 3) and yet it has half the transmission losses. Per interface, the Otus is 8 times better than the uncoated Cooke, which suggests Zeiss is using coatings that reflect only 0.5% of the light.
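For anyone who wants to play with the arithmetic, here is the simplified per-interface model described above. The interface counts and reflectances are the illustrative numbers from the text, not measured values:

```python
import math

def transmission(n_interfaces, r):
    """Fraction of light surviving n glass-air interfaces that each reflect r."""
    return (1.0 - r) ** n_interfaces

# uncoated Cooke triplet: 6 interfaces at ~4.5% each
cooke = transmission(6, 0.045)
# a hypothetical multicoated 12-element design: 24 interfaces at 0.5% each
modern = transmission(24, 0.005)

print(f"Cooke:  {1 - cooke:.0%} lost ({-math.log2(cooke):.2f} stops)")
print(f"Modern: {1 - modern:.0%} lost ({-math.log2(modern):.2f} stops)")
```

The simple model gives about 24% loss (roughly 0.4 stop) for the uncoated triplet and about 11% for the hypothetical 24-interface modern lens, in line with the figures quoted above.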

Flare is the image killer. Flare occurs when light coming into the lens reflects off one interface, heads out of the lens, then reflects off a second interface, and heads back into the lens. For uncoated lenses, the first reflection is about 4-5% of the light and the second reflection is 4-5% of that 4-5%, which is only about 0.16% to 0.25% of the original light. But if a lens has multiple elements, the opportunities for these double reflections multiply. For example, reflections off the back surface of the last element of the Cooke triplet have 5 opportunities for a second reflection as they head out the front of the lens. Adding up all the possible double-bounce flares in a simple but uncoated Cooke triplet yields a total flare of about 3%.
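The "adding up" step is just counting pairs of interfaces: with n interfaces there are n(n-1)/2 possible double-bounce paths. A sketch using the illustrative numbers from the text:

```python
from math import comb

def total_flare(n_interfaces, r):
    """Crude veiling-flare estimate: each pair of interfaces contributes one
    double reflection carrying about r*r of the light (geometry ignored)."""
    return comb(n_interfaces, 2) * r * r

# uncoated Cooke triplet: 6 interfaces -> 15 double-bounce paths
print(f"{total_flare(6, 0.045):.1%}")    # about 3%
# halving the per-surface reflectance cuts the flare to a quarter
print(f"{total_flare(6, 0.0225):.2%}")   # about 0.76%
```

Note the scaling: flare grows with the square of the interface count but also with the square of the per-surface reflectance, which is why better coatings can pay for more elements.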

Because the lens surfaces are curved, the reflected light may be more strongly diverging or converging than the original rays. It is then refracted by the other elements it passes through, changing the amount of divergence or convergence, and when it reflects off a second interface (which is usually curved, too) the divergence or convergence changes yet again. Sometimes the total effect of all these reflections and refractions is a very distorted copy of the light in the scene, such as the bubbles and halos of a sun flare. But most flare is so defocused by all the curved surfaces that it simply fogs the image. That's why uncoated lenses tend to give a misty look.

If you can cut the reflections per interface by half, the flare drops to one quarter. The original 4% of 4% (0.16%) becomes 2% of 2% (only 0.04%). Overall, a single-coated Cooke triplet will have about 0.75% flare after accounting for the 15 possible opportunities for flaring double reflections. That sounds like a tiny number, but it means the dynamic range of the scene may be limited to about 7 stops in the worst case, because the deep shadows of the scene will be fogged by flare from the surrounding bright areas. That's the worst case (something like a small dark object in shadow backlit by the sky). If the overall scene is not very bright, with only a narrow band of bright sky, the total flare will be much less and the shadows will be better resolved. Lenses of all ages and element counts that don't have good enough coatings need good hoods!
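The 7-stop figure comes from asking how deep a shadow can get before it disappears beneath the flare floor; a one-liner, assuming the worst case where the veiling flare lands uniformly on the frame:

```python
import math

def dr_limit_stops(flare_fraction):
    """Worst-case usable dynamic range: a shadow dimmer than the veiling-flare
    floor can no longer be separated from the flare itself."""
    return -math.log2(flare_fraction)

print(f"{dr_limit_stops(0.0075):.1f} stops")   # about 7.1 for 0.75% flare
```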

Now if you double the number of elements, the flare multiplies by four! There's a reason those old zoom lenses had such poor contrast. But if you also cut the reflections at each interface in half, then the total flare is the same as in the low-element-count lens with no or low-quality coatings. Overall, the Otus is way better than the uncoated Cooke triplet and is probably equivalent to a single-coated Cooke triplet in terms of flare.

The point is that a modern lens with lots of elements can equal or out-perform an older lens with few elements if the coatings are good enough. There's also the issue that time and use take a toll on lenses: worn coatings, dust, fading edge-blacking on the elements, clouded balsam, etc. all conspire to reduce the contrast of older lenses.


Originally posted by cyberjunkie:
You don't need a super-clinical lens to capture the maximum amount of information, which could eventually be degraded/enhanced/morphed/whatever afterwards by an algorithm. It is the sensor that does it. All lenses provide the same amount of data. It's the quality of that data that is not the same.
This is untrue. All lenses of a given f-stop (and transmission factor) might supply the same amount of light to the sensor, but the amount of data will vary with the lens. Sharper lenses deliver more data. Properly focused lenses deliver more data (especially about the intended subject). Lenses with good flare control deliver more data. A fogged, de-centered copy of a lens delivers less data to the sensor than does a clean, non-defective copy of that lens.


Originally posted by cyberjunkie:
Interpolation, or if you prefer, fabricating pseudo-sharpness in a slightly blurry image, is almost as difficult as faking the exact sharpness-meets-halo of a true soft-focus lens.
I'm sure there are bitmap wizards who can work with plug-ins and layers, plus a considerable amount of time, and get almost there... but it's not photography anymore, at least not the kind of photography I like. It gets pretty close to graphic art.

The day this kind of process is made as simple as a couple of clicks and available to everyone, it would be the programmer who makes most of the esthetic choices, not the photographer.
What the photographer does with the data is up to them but it is limited by the laws of statistics that create trade-offs in resolution versus noise. Although some sharpness algorithms are just arbitrary hacks, the better ones use physical models to estimate the detail that must have been in the scene to create the kind of blur found in the image. But there's always a trade-off and if the initial image has poor DR, the ability to estimate detail will be poor, too.
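As a concrete (and much simplified) example of such a physically modeled approach, here is a one-dimensional Wiener deconvolution in numpy. Everything in it is an assumption for illustration: the blur is a known Gaussian PSF, sensor noise is left out, and the small constant k stands in for the noise-to-signal ratio a real implementation would have to estimate:

```python
import numpy as np

# a "scene" with sharp edges
scene = np.zeros(256)
scene[60:90] = 1.0
scene[150:160] = 0.5

# physical blur model: Gaussian PSF with sigma = 3 samples (assumed known)
x = np.arange(256)
psf = np.exp(-((x - 128) ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))   # transfer function of the blur

# what the lens delivers: the scene convolved with the PSF
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Wiener filter: G = H* / (|H|^2 + k); k guards the near-zero frequencies
# and plays the role of the noise-to-signal trade-off in the text
k = 1e-6
G = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))

mse_blurred = np.mean((blurred - scene) ** 2)
mse_restored = np.mean((restored - scene) ** 2)
print(mse_blurred, mse_restored)
```

The restored signal gets much closer to the original edges, but the frequencies the blur destroyed (where |H| is essentially zero) stay lost, which is exactly the trade-off described above.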

The way I see it, the programmers actually bring more aesthetic choices to the table. It may be true that auto-exposure, automagical, autolevel, autosharpen buttons might hard-code the programmer's aesthetic choices, but most decent software also lets the user combine and manipulate the filters and effects in ways never imagined by the programmer. And the easier that is, the easier it is for the user to try different effects, undo the ones they don't like, tweak the ones they like, etc.

Originally posted by cyberjunkie:
I like vintage optics, I really do, and because I do I also know that generalizing a judgment, either good or bad, makes no sense at all. I often shoot in manual focus; the pleasure of using a smooth, well-built object with pleasant ergonomics has a positive impact on the quality of the pictures. Though build quality is secondary, if optical quality is not at the same level.
To be more precise, a vintage lens that is purchased with the intention to use it as a picture-taking tool has to give something that a reasonably priced modern zoom can't offer. Otherwise it's a collector's item.
I'm fine with it, but I'm aware that the scope is different.
I own a large number of vintage lenses (BTW, most of them are linked to my signature, so whoever got curious can check my tastes) and I really treasure a few of them, but since the beginning I always tried to have a reasonable answer to the question "why I want to buy it?"
I, too, enjoy vintage lenses although my collection is minuscule compared to yours. A lens with "character" (an excellent euphemism for optical flaw) can really bring something special to an image as long as the scene is in character with the lens's character. There's no bad lens, only lenses of narrower usefulness in terms of the scene, style, and aesthetic goals of the photographer. Some people love lomo, free-lensing, pinhole, lens baby, and the occasional smear of vaseline on the lens. And some people want corner-to-corner sharpness on a huge print. Different lenses work in different situations.

That's all part of the art of photography.

Cheers!

Photoptimist

Last edited by photoptimist; 12-27-2017 at 06:02 PM.
12-27-2017, 08:14 PM   #48
Pentaxian
cyberjunkie's Avatar

Join Date: Mar 2010
Location: Chiang Mai, Bologna, Amsterdam
Photos: Gallery
Posts: 1,198
Originally posted by photoptimist:
This is untrue. All lenses of a given f-stop (and transmission factor) might supply the same amount of light to the sensor, but the amount of data will vary with the lens. Sharper lenses deliver more data. Properly focused lenses deliver more data (especially about the intended subject). Lenses with good flare control deliver more data. A fogged, de-centered copy of a lens delivers less data to the sensor than does a clean, non-defective copy of that lens.
Maybe I haven't been clear enough.
When I wrote "quality of the data" I meant that given a small area of pixels, the imperfect/pleasant optic gives exactly the same amount of pixels as the state-of-the-art/clinical one. In one case some pixels would be blurred, while the other would represent (more) accurately the reality, but what changes is not the number of the digitized core components that "make" the image, but just the accuracy, and the iconographic value that we give them. You can say that a certain lens doesn't resolve a couple of lines beyond a certain value, but the data that represent those lines is made by the same amount of pixels. Unsharp pixels, but still there...
In practice what I'm trying to say is that in non-technical photography the value of unsharp "faulty" pixels is exactly what makes some pictures beautiful to the eye. I'm talking about portraits, for example, or pictorial photography in general.
We can compress bitmaps and audio with lossless algorithms, which means there are redundant data in the originals.
The unsharp/imperfect pixels of a picture can't be considered redundant, IMHO, because they are an integral part of the aesthetical value of such picture.
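The compression point is easy to demonstrate: smooth, predictable data (like a heavily blurred image region) compresses far better losslessly than unpredictable data. A toy sketch with Python's zlib, where the byte patterns are invented stand-ins rather than real image data:

```python
import random
import zlib

random.seed(0)
# stand-in for "sharp, noisy" data: independent random bytes, little redundancy
noisy = bytes(random.randrange(256) for _ in range(100_000))
# stand-in for "smooth, blurred" data: a slowly varying repeating ramp
smooth = bytes(abs(i % 510 - 255) for i in range(100_000))

print(len(zlib.compress(noisy)), len(zlib.compress(smooth)))
```

The random bytes barely compress at all, while the smooth ramp shrinks to a tiny fraction of its size: redundancy is exactly what lossless compressors exploit.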


Regarding the figures of glass-to-air reflection, of course you're right.
I had those numbers in my head, but my memory isn't what it used to be. Maybe they had to do with actual light loss (the difference between F and T stops), who knows.
I should not trust my memory anymore...

My compliments for the section of your post about the effectiveness of advanced multicoating and the influence of what in Italy we call (quite effectively, I think) "parasite light".
I just want to add a little personal observation.
When it comes to veiling flare (and inter-reflections), form is substance, in the sense that barrel construction plays an important role in keeping unwanted inter-reflections to a minimum and preventing excess coverage from reaching the sensor. The rear baffle of some Pentax lenses, released for APS-C but with actual FF coverage, is a perfect example.
I just got an old, tiny Komura 4.5/200mm. I opened it because something was a little loose. It's carefully designed to minimize inter-reflections. The optical design is simple, but the barrel was designed with the utmost care, to avoid light rays bouncing back and forth between the glass-to-air surfaces and the mount.
I guess it is not as relevant anymore. Plastic is more forgiving than aluminum, and coating is much more effective.
The old Komura made do with what was available at the time. I'll see how it works as a picture-taking tool; the pleasure of checking how it's made, and the feel of its build in my hands, is already a fact.

Cheers
Paolo

Last edited by cyberjunkie; 12-28-2017 at 12:04 AM.
12-28-2017, 03:21 AM   #49
Senior Member




Join Date: Dec 2012
Posts: 126
Original Poster
In the sixties and seventies, I and other photographers pushed the hell out of Kodak Tri-X 400 ASA film to get that "thing" describing the things around us.
Grain was art! I still love it.

In a way that was quality.

What's left is "rendering" and pixel-peeping.

OK - I'm the OP and should know better....

We just used the flaws!


Last edited by Gutta Perka; 12-28-2017 at 05:14 AM.
12-28-2017, 08:52 AM - 1 Like   #50
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,129
Originally posted by cyberjunkie:
Maybe I haven't been clear enough.
When I wrote "quality of the data" I meant that given a small area of pixels, the imperfect/pleasant optic gives exactly the same amount of pixels as the state-of-the-art/clinical one. In one case some pixels would be blurred, while the other would represent (more) accurately the reality, but what changes is not the number of the digitized core components that "make" the image, but just the accuracy, and the iconographic value that we give them. You can say that a certain lens doesn't resolve a couple of lines beyond a certain value, but the data that represent those lines is made by the same amount of pixels. Unsharp pixels, but still there...
Good points. Maybe it was my turn not to have been clear enough.

I was speaking of the effective amounts of photon data passing into and through the system at the various points you mentioned. If we start at the front of the lens and think about all the data encoded by all the photons arriving at the filter ring, there's a tremendous amount of data in that light. The arriving light contains a 180° view of everything at full sharpness from infrared to ultraviolet. Next, the lens samples a portion of all those arriving photons in proportion to its field of view and aperture. Yet some lenses do a better job than others at accurately collecting, collating, and transmitting all the data coming from the scene. That's why I said that the amount of data coming from the lens depends on the quality of the lens. (And a lens with a lens cap on delivers no data at all to the sensor.)

Next, the sensor samples the photon data arriving from the lens. Most sensors ignore all the UV and IR data, and the pixels of a Bayer-filter sensor even ignore two-thirds of the color data (which is why K-1 pixel shift is so amazing). You are entirely right that every frame coming out of the sensor has the identical number of pixels measured with the identical number of bits per pixel. Depending on the lens, the sensor might over-sample the lens (one does not need 36 MPix to capture the output of a pinhole lens) or it might under-sample the lens (some high-quality, diffraction-limited primes resolve more detail than most sensors can handle). And with the lens cap on, that pixel "data" is only about the sensor (which is useful as a dark frame for astrophotography or other long-exposure images).
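The over/under-sampling comparison can be made concrete with the usual rule of thumb that a sensor resolves at most one line pair per two pixels. The pixel pitch and pinhole blur figures below are rough illustrative numbers, not official specs:

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist limit in line pairs per mm: one line pair per two pixels."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# ~4.9 um pitch, roughly a 36 MPix full-frame sensor
print(f"{nyquist_lp_per_mm(4.9):.0f} lp/mm")     # ~102 lp/mm
# a pinhole's ~150 um blur disc acts like a hugely coarse "pixel"
print(f"{nyquist_lp_per_mm(150.0):.1f} lp/mm")   # ~3.3 lp/mm
```

Any lens detail beyond the first figure is wasted on the sensor, and any sensor resolution beyond the second is wasted on the pinhole.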


Originally posted by cyberjunkie:
In practice what I'm trying to say is that in non-technical photography the value of unsharp "faulty" pixels is exactly what makes some pictures beautiful to the eye. I'm talking about portraits, for example, or pictorial photography in general.
We can compress bitmaps and audio with lossless algorithms, which means there are redundant data in the originals.
The unsharp/imperfect pixels of a picture can't be considered redundant, IMHO, because they are an integral part of the aesthetical value of such picture.
I agree!

Some kinds of unsharpness certainly can improve the aesthetics of some kinds of images (and ruin others). Whether a blurry image can be said to contain redundant data is a bit tricky. At one level you are entirely correct that the specific character of the blur matters and contributes to the image. And to the extent that the specific blur makes a specific difference to the subjective quality of the image, then that specific blur is important data. But at another level, it's often possible to analyze the statistical properties of the blur, down sample the image to a low resolution for storage, and then upsample the image for viewing or printing with the blur reconstructed and no one can see the difference. Thus a 36 MPix image from a blurry lens might really be a 4 Mpix image with a few hundred numerical values that encode the blur function. Of course, some blur functions (e.g., soap-bubble bokeh) are harder to model than others.
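This down-then-up-sampling argument is easy to simulate. The numpy sketch below uses a 1-D signal as a stand-in for an image row, a wide Gaussian kernel as the hypothetical blurry lens, and plain linear interpolation for the reconstruction; all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200

# "scene" detail seen through a very blurry lens: white noise smoothed
# by a wide Gaussian PSF (sigma = 12 samples)
x = np.arange(-36, 37)
kernel = np.exp(-x ** 2 / (2 * 12.0 ** 2))
kernel /= kernel.sum()
blurry = np.convolve(rng.standard_normal(n), kernel, mode="same")

# store only every 8th sample (8x less data)...
stored = blurry[::8]
# ...then upsample again for "viewing"
recon = np.interp(np.arange(n), np.arange(0, n, 8), stored)

# away from the edges, the reconstruction error is a small fraction of
# the signal's own variation
rel_err = np.max(np.abs(recon - blurry)[50:-50]) / np.std(blurry)
print(rel_err)
```

Because the blur has already removed the high frequencies, the decimated version retains essentially everything the "lens" delivered, which is the sense in which a blurry 36 MPix frame can behave like a much smaller image plus a blur description.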

Originally posted by cyberjunkie:
My compliments for the section of your post about the effectiveness of advanced multicoating and the influence of what in Italy we call (quite effectively, I think) "parasite light".
I just want to add a little personal observation.
When it comes to veiling flare (and inter-reflections), form is substance, in the sense that barrel construction plays an important role in keeping unwanted inter-reflections to a minimum and preventing excess coverage from reaching the sensor. The rear baffle of some Pentax lenses, released for APS-C but with actual FF coverage, is a perfect example.
I just got an old, tiny Komura 4.5/200mm. I opened it because something was a little loose. It's carefully designed to minimize inter-reflections. The optical design is simple, but the barrel was designed with the utmost care, to avoid light rays bouncing back and forth between the glass-to-air surfaces and the mount.
I guess it is not as relevant anymore. Plastic is more forgiving than aluminum, and coating is much more effective.
The old Komura made do with what was available at the time. I'll see how it works as a picture-taking tool; the pleasure of checking how it's made, and the feel of its build in my hands, is already a fact.

Cheers
Paolo
Thanks and those are excellent points about lens barrel design. Yes, the internal baffles, internal barrel design, surface textures, glass element edges, and black paint/coatings make a huge difference in controlling "parasite light." One of the first steps I do when evaluating a lens is to point it toward the sky or a bright light, look through the back of the lens, and check for reflective areas on internal components of the lens. Better lenses have almost no reflections off the internals. Cheap lenses (and cheap extension tubes) have all manner of shiny bits that fog the image.

And let's not forget what's on the front of the lens. A good petal hood (or rectangular hood) cuts parasite light by 40% (or more with some scenes) relative to even the best-optimized circular hood. A good petal hood can let the lens designer add a few more elements without unacceptable contrast loss. Of course, if the user puts the petal hood on backwards, they may need to blame themselves and not the modern lens designer when their pictures have poor contrast and saturation.

Cheers,
Photoptimist