06-22-2013, 10:16 AM - 2 Likes   #2056
Veteran Member
falconeye's Avatar

Join Date: Jan 2008
Location: Munich, Alps, Germany
Photos: Gallery
Posts: 6,871
QuoteOriginally posted by VoiceOfReason Quote
just to find I was shooting myself in the foot.

Nice try, but you actually shot yourself in the foot quite successfully. You are asking secondary questions, and Class A's correct answers may not have made you notice. Let me try to explain ...

What you are trying to achieve is gallery-quality prints. Well, once you have managed the artistic part of that challenge, sharpness may not matter so much anymore. But let's ignore this for a second and talk about sharpness.

The easiest way to look at sharpness is that there are many sources of blur, and they all add up in the final image. It is not normally taught in a coherent way, though. I will avoid the math here and just introduce the general concept. You can find all the math, and how it relates to MTF (whatever that may mean), in my blog. Below, I quote some numbers, but the mathematical details are over-simplified. I quote them to help interested readers understand, not to provide a means to compute sharpness.

So, let me introduce blur from a particular source as "b", which shall denote a blur spot diameter, where the blur spot is the image created by a single point-shaped light source. So, ideally, b = 0. Also, think of b as a sort of inverse resolution.

The overall blur results from adding up all the blur terms b. Think of taking the sum. The true math is much more complicated, though; even more complicated than taking the RMS (feel free to ignore this sentence).

So, blur from pixellation may be characterized by something like b = 5 microns, which shall be the pixel size. Obviously, this number is somewhat smaller for a 24 MP camera than for a 16 MP camera (about 1 micron smaller). All other blur is unrelated to MP.

Another blur is defocus, which is about 20 microns at the limit of depth of field for an APS-C camera.

Another blur is from lens aberration, which can be very small, like 5 microns / N, where N is the f-stop number. This can obviously become as small as 1 micron, but only for a few famous prime lenses, and in the image center. It is much larger in general.

Another blur is from the Bayer anti-alias filter. It creates a bit less additional blur than the pixellation above.

Another blur is from shake. And yet another from subject motion. Both are computable.

Yet another blur is from noise. However, noise affects point blur much more than line blur, and it is best to treat noise as the remaining artefact size after noise reduction. Again, say a few microns, depending on ISO.

And then there is blur from diffraction. It is about 0.33 microns * N, where N is the f-stop number again. So, at f/15, it is roughly 15/3, or 5 microns, which happens to be the Rayleigh limit of diffraction for a sensor with a 5 micron pixel pitch. I've chosen the numbers such that diffraction causes blur as large as a pixel at the Rayleigh limit. Again, a useful simplification for easier understanding: divide the f-stop by three, and if the resulting number is smaller than the pixel pitch in microns, then you can still resolve detail at the pixel level. Beyond that, no. Obviously though, diffraction creates some blur at any aperture, and now you have a feeling for its size.
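
To make the blur budget concrete, here is a minimal Python sketch. It uses only the rough numbers from above and combines the terms via RMS which, as said, is itself a simplification; treat it as a toy for intuition, not as a sharpness calculator.

Code:
import math

# Toy blur budget; all values in microns. RMS combination is a simplification.
def total_blur(f_stop, pixel_pitch=5.0, defocus=0.0, shake=0.0, noise=0.0):
    pixellation = pixel_pitch          # b ~ pixel size
    aa_filter   = 0.8 * pixel_pitch    # assumed: a bit less than pixellation
    aberration  = 5.0 / f_stop         # excellent prime, image center
    diffraction = 0.33 * f_stop        # green light, ~Rayleigh
    terms = [pixellation, aa_filter, aberration, diffraction,
             defocus, shake, noise]
    return math.sqrt(sum(b * b for b in terms))

print(round(total_blur(15), 1))        # ~8.1 microns at f/15 with 5 micron pixels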

And now for the trick ...

In order to create a very sharp image, you have to *BALANCE* all blur terms so that they become roughly equally large!

This is why experience matters so much. You won't be able to compute it, as it is too complex to be done in the field. The camera firmware could help you, but for some reason, firmware authors aren't that innovative...

But why is it so?

Because the different blur terms counteract each other: lens aberration counteracts diffraction (look at the terms, /N and *N; you must select an N in between to take both effects into account). Shake, motion, and defocus counteract noise, and so on ...
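
For the aberration/diffraction pair alone, the balance point can even be computed. A small sketch using the illustrative constants from above (real lenses differ, of course): minimizing (5/N)^2 + (0.33*N)^2 over N gives N = sqrt(5/0.33).

Code:
import math

# Balance point of aberration (~5/N) against diffraction (~0.33*N):
# minimizing (5/N)**2 + (0.33*N)**2 over N gives N = sqrt(5/0.33).
a, b = 5.0, 0.33
n_opt = math.sqrt(a / b)
print(round(n_opt, 1))   # ~3.9, i.e. around f/4 for an excellent prime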

Everything considered, you see that a possible higher MP count in the K-3 cannot but help. But only in the regime where all blur terms combined are already down to a few microns, which is not the case for the majority of images. But it will never hurt.

Note to fellow photographers: Why always point to other sites? I think it is better to understand than to cite.


Last edited by falconeye; 06-22-2013 at 10:38 AM.
06-22-2013, 11:09 AM   #2057
Veteran Member




Join Date: Feb 2011
Posts: 4,873
QuoteOriginally posted by Ferdinand Quote
I am sorry that the link does not work. Nevertheless, please try to look up the article on
Cambridge in Colour - Photography Tutorials & Learning Community
about why sensor size matters, and make a better-informed judgement about whether Class A is correct.

Other parts of the article are very good too, not just the part on diffraction.
Sensor size matters quite a bit.

The higher ISO noise of higher megapixel cameras does not matter.
06-22-2013, 12:13 PM   #2058
Forum Member




Join Date: Jan 2012
Location: Frankfurt
Photos: Albums
Posts: 89
QuoteOriginally posted by Class A Quote
Maybe I'd better explain my premise, which is to assume that one wants to know whether two images taken with different sensors that have a different pixel pitch will look different for a given f-ratio. They won't.

There is another legitimate question, which is to ask: "Beyond which f-ratio will I reduce the maximum sharpness that I can achieve with a given sensor?" The answer to that question does depend on the number of megapixels. Sensors with fewer MP simply resolve less, so they can tolerate more diffraction before it starts compromising pixel-level sharpness.
Exactly. Now we agree. I thought that the original question was about the impact of MPs on the diffraction limit for a given sensor size. (A higher MP count at a given sensor size is synonymous with smaller pixels.) The article I tried to link to has a diffraction-limit calculator for any given sensor size. That is where I got the figures that 24 MP corresponds to an f/7.4 and 16 MP to an f/9.1 diffraction limit (on average).
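
For anyone who wants to reproduce such figures, here is a rough sketch of what a calculator like that does. The 2.5x threshold and the 0.55 micron wavelength are assumptions chosen to land near the quoted numbers; published calculators use slightly different constants.

Code:
import math

# Flag diffraction once the Airy disk diameter (2.44 * lambda * N)
# exceeds ~2.5x the pixel pitch; the threshold constant is an assumption.
def diffraction_limited_fstop(width_mm, height_mm, megapixels,
                              wavelength_um=0.55, threshold=2.5):
    pitch_um = 1000.0 * math.sqrt(width_mm * height_mm / (megapixels * 1e6))
    return threshold * pitch_um / (2.44 * wavelength_um)

# APS-C, ~23.7 x 15.7 mm:
print(round(diffraction_limited_fstop(23.7, 15.7, 24), 1))   # ~7.3
print(round(diffraction_limited_fstop(23.7, 15.7, 16), 1))   # ~9.0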
06-22-2013, 12:25 PM   #2059
Forum Member




Join Date: Jan 2012
Location: Frankfurt
Photos: Albums
Posts: 89
QuoteOriginally posted by falconeye Quote
Nice try, but you actually shot yourself in the foot quite successfully. [...] Note to fellow photographers: Why always point to other sites? I think it is better to understand than to cite.
Falconeye,

For some people it is just a hobby, and the engineering part is of course not the very essence of it. So we learn from wherever we can. I've found the Cambridge in Colour site a very good place to learn.

You have an impressive knowledge of this technical stuff. I also learn from your explanations.

I am wondering: if a higher MP count can only help and never hurt, then why not create 400 MP cameras? So far my understanding was that there are some drawbacks and trade-offs with an increase in pixel density, and that there is an optimal pixel count for a given sensor size at a given level of technology.

06-22-2013, 12:34 PM   #2060
Forum Member




Join Date: Jan 2012
Location: Frankfurt
Photos: Albums
Posts: 89
(Citing a reputable source is useful for someone who is not an authority in this area, you know...)
06-22-2013, 04:35 PM   #2061
Veteran Member




Join Date: Mar 2010
Location: SW Washington
Posts: 833
QuoteOriginally posted by Ferdinand Quote
I am wondering: if a higher MP count can only help and never hurt, then why not create 400 MP cameras? So far my understanding was that there are some drawbacks and trade-offs with an increase in pixel density, and that there is an optimal pixel count for a given sensor size at a given level of technology.
Falconeye is talking about optical limitations. Diffraction and the like are optical limitations; sensor performance is a technological limitation. The "higher MP doesn't hurt" theory relies on the assumption that the underlying technology will scale directly with the pixel density. Of course, in reality it doesn't work this way from a consumer-device standpoint.

"Given level of technology"? Assume we have a sensor with 1 pixel. We can make this the most super awesome pixel ever made. It's so big we can wire it up by hand. Processing of 1 pixel is so simple it can be done with a 30 year old calculator. Storage requirement is so small that it's below the minimum file size of most file systems today. This "wire by hand, 30 year old calculator, negligible file size" process is the "given level of technology" we have. Can we use this same level of technology to make 4 pixels 1/4 the size of the original? 40000? 400 million? Do we have the supporting technology to read out, process and store upwards of 1GB RAW files in a timely manner in a handheld consumer device?

Look at Intel's "Tick-Tock" model of processor development. Every second processor generation is built on a new architecture; the generation immediately following it is a die shrink, meaning the same processor architecture with smaller transistors and a negligible performance difference. If the second generation is pretty much the same as the first in architecture and performance, why did they not just build the first generation with the smaller transistors? Because it was not possible to do so with the technology at that time.

You cannot arbitrarily scale technology at a given point in time. It takes a lot of technological development to take a device, shrink and increase the density of its components, and have the overall package, including supporting technologies, perform even at the same level as the original.

Last edited by Cannikin; 06-22-2013 at 04:52 PM.
06-22-2013, 04:54 PM   #2062
Site Supporter
Site Supporter
Aristophanes's Avatar

Join Date: Jul 2008
Location: Rankin Inlet, Nunavut
Photos: Albums
Posts: 3,948
I thought Nyquist prevented pixel density above a certain point. About 50MP for FF and 35MP for APS-C. Theoretical limits, of course.

06-22-2013, 05:30 PM   #2063
Veteran Member
falconeye's Avatar

Join Date: Jan 2008
Location: Munich, Alps, Germany
Photos: Gallery
Posts: 6,871
QuoteOriginally posted by Ferdinand Quote
I've found the Cambridge in Colour site a very good place to learn.
...
there is an optimal pixel count for a given sensor size at a given level of technology.
QuoteOriginally posted by Ferdinand Quote
(Citing a reputable source is useful for someone who is not an authority in this area, you know...)
I was a bit harsh in what I said about citations. I wouldn't apply it to you.

My point is that in many forum discussions, people fire citations at each other in order "to be right", without actually trying to understand what the other party is saying (who in turn isn't saying anything either, just firing citations back).

In academia, it is a respected tradition to explain things in your own words (and give references, but without quoting them at length; excluding the soft sciences for a moment). I just wanted to make this a bit more popular in our forum discussions ...

A problem I have with Cambridge in Colour, Luminous Landscape, and similar sites is that they are verbose. My belief is that any good explanation is also a short one.

As for pixel count: I didn't want to complicate matters. Of course, there are limits beyond which too many pixels start to degrade the image quality, because of diminishing fill factors, well capacities, and edge-ray problems. But I think 24 MP APS-C DSLRs haven't reached that territory yet.

QuoteOriginally posted by Aristophanes Quote
I thought Nyquist prevented pixel density above a certain point. About 50MP for FF and 35MP for APS-C. Theoretical limits, of course.
There is no such theoretical limit, except for the finite wavelength of light, which limits resolution to about 3 gigapixels (FF). Of course, you won't be able to build the sensors or lenses to even come close to this limit. Maybe Zeiss could, for the price of another Hubble scope, but that's another story. On a lower budget, I can assure you that high-resolution B&W film in a Pentax film body with an excellent prime well exceeds the resolution limits you quote. I needed a microscope to see all the detail on film, but the difference compared to digital was shocking.
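
Where the 3 gigapixel figure comes from, as a sketch: assume the pixel pitch shrinks to roughly one wavelength of green light (an assumption chosen to reproduce the number).

Code:
# FF pixel count at a pixel pitch of ~one wavelength of green light.
width_um, height_um = 36_000, 24_000   # 36 x 24 mm sensor
pitch_um = 0.55
gigapixels = (width_um / pitch_um) * (height_um / pitch_um) / 1e9
print(round(gigapixels, 1))            # ~2.9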

So, what do I consider reasonable resolutions for 1", 4/3, APS-C and FF? Well, 22 MP, 35 MP, 55 MP, and 100 MP, respectively. There is only intuition behind those numbers; it is just a feeling that they are kind of a sweet spot in the long term. OTOH, the 808 PureView, with 41 MP on its roughly 1/1.2" sensor, already surpasses my supposed sweet spot. So, what do I know?

Last edited by falconeye; 06-22-2013 at 05:42 PM.
06-22-2013, 05:38 PM   #2064
Site Supporter
Site Supporter
jatrax's Avatar

Join Date: May 2010
Location: Washington Cascades
Photos: Gallery | Albums
Posts: 12,991
QuoteOriginally posted by falconeye Quote
What do I consider reasonable resolutions for 1", 4/3, APS-C and FF? Well, 22 MP, 35 MP, 55 MP, and 100 MP, respectively.
By this, do you mean that this resolution would achieve the maximum quality for that particular sensor size? And that above that resolution, other factors not related to the sensor become the controlling ones?
06-22-2013, 08:07 PM   #2065
Veteran Member




Join Date: Mar 2010
Location: SW Washington
Posts: 833
You know, I don't think all this talk of diffraction, blur, and why megapixels theoretically don't hurt makes much intuitive sense to a lot of people. Perhaps an analogy will help:

Say you have two cars that are the same size. One has a 200 horsepower engine, the other a 2000 hp engine. Your end goal is to get from Point A to Point B in a reasonable amount of time. Which will do better at the task? The 2000 hp one is obviously not going to be slower than the 200 hp one, but at the same time it doesn't really do any better, because both are limited by other factors, like the speed limit and traffic. Due to minor factors such as acceleration from a stop, the 2000 hp one will technically perform better, but practically speaking these two cars do pretty much the same job. At the same time, the 2000 hp one is a lot more costly and wasteful; the R&D money spent on it could have been used to improve things like fuel efficiency and car features. If the car is underpowered and cannot drive as fast as the speed limit and traffic allow, then of course increasing the power will get the job done better, but eventually the differences become negligible.

In the same way, theoretically speaking, increasing the number of pixels will not make the real detail resolution of a picture any worse, but the amount by which it improves things eventually becomes negligible due to optical limitations like diffraction and lens performance. At the same time, increasing the pixel count increases the processing load, wastes storage space, and spends R&D money that might otherwise have gone into improving other aspects of sensor performance, or into reducing cost.
06-22-2013, 08:14 PM   #2066
Site Supporter
VoiceOfReason's Avatar

Join Date: Jun 2010
Location: Mishawaka IN area
Photos: Albums
Posts: 6,124
QuoteOriginally posted by Class A Quote
I did not question your need for sharpness.

I questioned why you care about individual pixels.

Your question was about the impact of the number of MP with respect to diffraction. The answer is that if you print two images (one from a 16MP and another from a 24MP sensor) to the same size with the same f-ratio then they will look the same. Even though at a 100% view the 24MP image will look softer (due to the higher magnification).

If you are printing the two images to two different sizes -- e.g., by maintaining 300dpi for both -- then obviously diffraction will be more visible in the larger image. That is, if you view them from the same distance. But in this case the DOF will be shallower in the larger print as well.
I meant at higher f-numbers. I do understand that it wouldn't do anything to either of them at f/5.6; I meant f/9 or above. Something where I want everything in focus, like landscapes where the foreground, background, etc. are all interesting.
06-22-2013, 08:19 PM   #2067
Site Supporter
VoiceOfReason's Avatar

Join Date: Jun 2010
Location: Mishawaka IN area
Photos: Albums
Posts: 6,124
QuoteOriginally posted by falconeye Quote
Nice try, but you actually shot yourself in the foot quite successfully. [...] Note to fellow photographers: Why always point to other sites? I think it is better to understand than to cite.
That's all I wanted to know about: the sharpness standpoint. Basically, shooting with a 16 MP sensor at f/9 vs. a 24 MP sensor at f/9, where one is below and one is above the diffraction limit. In a nutshell, would it have a noticeable effect on my image, all other things being equal in the real world at full resolution, or would other factors nullify the differences in the question I posted?
06-23-2013, 12:10 AM   #2068
Veteran Member
Cynog Ap Brychan's Avatar

Join Date: Sep 2012
Location: Gloucester
Photos: Gallery
Posts: 1,199
May I ask what may be a very naïve question at this point? For reasons that escape me, I had always thought that diffraction was a function of the actual diameter of the aperture opening, not its f-number per se. In other words, would diffraction be less on a 300 mm lens at f/16 than on, say, a 50 mm lens at the same f-stop? How did members of the f/64 Club get such sharp pictures, or was that just because their negative sizes were large? Could someone please elucidate?
06-23-2013, 01:25 AM   #2069
Veteran Member




Join Date: Mar 2010
Location: SW Washington
Posts: 833
QuoteOriginally posted by Cynog Ap Brychan Quote
May I ask what may be a very naïve question at this point? For reasons that escape me, I had always thought that diffraction was a function of the actual diameter of the aperture opening, not its f-number per se. In other words, would diffraction be less on a 300 mm lens at f/16 than on, say, a 50 mm lens at the same f-stop? How did members of the f/64 Club get such sharp pictures, or was that just because their negative sizes were large? Could someone please elucidate?
That depends on what your definition of capturing detail is.

Diffraction in terms of absolute aperture (the actual size of the opening) determines the minimum angular separation between points before their Airy disks overlap. That is to say: how close together can two points be, in terms of visual angle, and still have the lens render them as two separate points, regardless of focal length? Astronomers are especially interested in this (to be able to distinguish stars that are close together), which is why the aperture of telescopes is usually given in units of length, not as an f-number. If you were to try to see more detail by viewing an image at increased magnification (say, by cropping or using a teleconverter), the absolute aperture would determine the finest detail separation the lens could possibly distinguish (assuming you have enough pixels).

Diffraction in terms of relative aperture (f-number) determines the size of the Airy disk at the image plane. A lens cannot render a point of light as a point, but rather as a spread-out pattern of some size. The higher the f-number (the smaller the aperture), the bigger the disk. Diffraction limits image detail once the Airy disk of a single point covers more than a single pixel; past that point, increasing the number of pixels doesn't record any more detail, because pixels end up recording the same Airy disk (the same point source of light) as the pixels next to them.

In simpler terms (a numeric sketch follows the summary):
- The diffraction limit in terms of absolute aperture determines when visually close points are rendered as overlapping (regardless of focal length).
- The diffraction limit in terms of relative aperture (f-number) determines when a single point is rendered large enough to cover more than one pixel.
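
And the promised sketch of the two formulations, using the standard Rayleigh-criterion formulas and the 300 mm / 50 mm at f/16 example from the question (the wavelength choice is an assumption):

Code:
# Angular resolution from absolute aperture (Rayleigh criterion), and
# Airy disk diameter at the image plane from the f-number alone.
WAVELENGTH_UM = 0.55                        # green light

def angular_resolution_rad(aperture_mm):
    return 1.22 * (WAVELENGTH_UM * 1e-3) / aperture_mm

def airy_disk_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

# 300 mm f/16 vs 50 mm f/16: same Airy disk size at the sensor ...
print(round(airy_disk_diameter_um(16), 1))  # ~21.5 um for both lenses
# ... but very different angular resolution (18.75 mm vs 3.125 mm apertures)
print(angular_resolution_rad(300 / 16))     # ~3.6e-05 rad (finer detail)
print(angular_resolution_rad(50 / 16))      # ~2.1e-04 rad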

Last edited by Cannikin; 06-23-2013 at 02:41 AM.
06-23-2013, 03:09 AM   #2070
Pentaxian
Class A's Avatar

Join Date: Aug 2008
Location: Wellington, New Zealand
Posts: 11,251
QuoteOriginally posted by Cynog Ap Brychan Quote
In other words, would diffraction be less on a 300mm lens at f16 than, say a 50 mm lens at the same f-stop?
No.

From a Cambridge in Colour Tutorial:
"Technical Note: Independence of Focal Length
Since the physical size of an aperture is larger for telephoto lenses (f/4 has a 50 mm diameter at 200 mm, but only a 25 mm diameter at 100 mm), why doesn't the airy disk become smaller? This is because longer focal lengths also cause light to travel further before hitting the camera sensor -- thus increasing the distance over which the airy disk can continue to diverge. The competing effects of larger aperture and longer focal length therefore cancel, leaving only the f-number as being important (which describes focal length relative to aperture size).
"
Falk would explain it in terms of the Heisenberg uncertainty principle, which is more concise and more elegant but less intuitively accessible.

QuoteOriginally posted by Cynog Ap Brychan Quote
How did members of the f64 Club get such sharp pictures, or was that just because the negative sizes were large?
The reason is indeed that their negatives were much larger. The equivalent f-stop on APS-C is f/11 (or less, depending on which particular "large format" they were using).

Last edited by Class A; 06-23-2013 at 03:14 AM.