Pentax Camera Forums Home
 

06-03-2009, 10:07 PM   #1
Senior Member
architorture's Avatar

Join Date: Apr 2008
Location: New York, NY
Posts: 151
crazy thought about improving results from image sensors

So a thought just popped into my head as I was looking at some K-7 samples (the thought has nothing to do with the K-7 specifically, though).

Here goes:
Most current digital cameras use sensors built with a Bayer color filter array (CFA) as their image capture devices. The Bayer array is made up of G(reen)R(ed)G(reen)B(lue) pixels. I have read that this is because it mimics our eyes' naturally higher sensitivity to green light.

However, this means, I think, that the sensor is not as efficient at registering red and blue light (and whatever colors are formed by their combination). The result, as I think we have all experienced, is that pictures taken at high ISO and/or with long exposures show mostly red and blue color blotchiness.

Tangent/part 2: I learned in a history of photography course that early film emulsions were more sensitive to blue light than to other wavelengths, making photography of seascapes with sky difficult if you wanted anything other than a washed-out bright sky. Most of us are aware that using a red filter on the lens with B&W film (or via the adjustment panel in PS) makes the capture less sensitive to blue light (cf. Ansel Adams' Half Dome photo) and increases the exposure time.

SO, my thought/question was: what effect, other than obviously changing the color balance, would result from using a red or blue filter on the lens at the time of capture with a digital camera? Would changing/hacking the sensitivity of the color channels in this way cause any appreciable change in image quality (cleanliness) at higher ISOs and long exposure times? Presumably it would be trivial to fix the white balance shift caused by the filter. Would the step of correcting white balance in RAW, or pre-capture in-camera for JPEG, negate any image quality gains?

This would not be too difficult to test, but I don't have such a filter (would a purple filter be best, or am I wrong?), and I don't feel like spending more moolah on stuff I have no use for beyond another geek-out experiment.

What do you guys think? Would it work?

06-04-2009, 12:53 AM   #2
Dom
Guest




I think I've got a blue filter kicking around somewhere, I'll do some test shots and post them.
06-04-2009, 01:35 AM   #3
Veteran Member




Join Date: Nov 2006
Location: Hong Kong
Posts: 1,934
It won't work!

Originally posted by architorture:

Here goes:
Most current digital cameras use sensors built with a Bayer color filter array (CFA) as their image capture devices. The Bayer array is made up of G(reen)R(ed)G(reen)B(lue) pixels. I have read that this is because it mimics our eyes' naturally higher sensitivity to green light.

(snipped)

What do you guys think? would it work?
It will not work.

Most people have a misconception about the two green pixels paired with one R and one B in each picture element of a Bayer sensor - the idea that we need two G pixels because our eyes need to see a double amount of green light. This is not correct.

Just think about how a CRT or an LCD works - they have *equal* RGB stripes in each picture element: equal numbers and equal size!

So, why double the green pixels? The only purpose of doing so is to increase the sensing area for green light and thus improve the S/N of the green channel. Hence, if you do any filtering on any colour, it will only decrease the amount of light energy received in that particular channel. And that will not help!

In fact, there is a lot of massaging done already, from the CCD/CMOS sensor through A/D conversion to image processing. The signal has already been tuned to the best for each ISO, hardware- and software-wise.

The only way to improve noise is to increase the pixel area receiving light. And of course, better circuitry afterwards, with less noise, will always help.

There is a practical solution already, indeed. It is called Full Frame!
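The S/N argument above can be sketched numerically. This is a minimal illustration assuming Poisson shot-noise statistics; the photon counts and the filter's 25% transmission are hypothetical values, not measurements:

```python
import math

# Shot noise follows Poisson statistics: for N collected photons,
# SNR = N / sqrt(N) = sqrt(N). A colour filter on the lens only
# throws photons away before they ever reach the sensor.

photons_unfiltered = 10000                     # hypothetical count at one pixel
photons_filtered = photons_unfiltered * 0.25   # e.g. a dense colour filter

snr_unfiltered = math.sqrt(photons_unfiltered)
snr_filtered = math.sqrt(photons_filtered)

# White-balance correction afterwards multiplies signal AND noise by
# the same gain, so it cannot recover the SNR lost to the filter.
print(f"SNR without filter: {snr_unfiltered:.0f}")  # 100
print(f"SNR with filter:    {snr_filtered:.0f}")    # 50
```

Halving the SNR here is exactly the "decreased receivable light energy" effect described above: fewer photons in, proportionally noisier channel out.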
06-04-2009, 01:47 AM   #4
Dom
Guest




These are 200% crops. I've very quickly corrected the colour balance; it's not perfect. As you can see, the blue filter doesn't help at all.


ISO 6400 no filter.


ISO 6400 blue 80B(D-A) filter.

06-04-2009, 01:59 AM   #5
Veteran Member




Join Date: Nov 2006
Location: Hong Kong
Posts: 1,934
Originally posted by Dom:
These are 200% crops. I've very quickly corrected the colour balance; it's not perfect. As you can see, the blue filter doesn't help at all.

ISO 6400 blue 80B(D-A) filter.
It will only make it *worse*, as the amount of light (energy) received has been decreased!

And as I pointed out in my first reply, the more dominant red and blue noise (as observed by the OP) is owing to the smaller total sensing areas of those two colours. So the noise can by no means be filtered out, because it comes from the sensor itself - it is just physics.
06-04-2009, 02:25 AM   #6
Dom
Guest




Originally posted by RiceHigh:
It will only make it *worse*, as the amount of light (energy) received has been decreased!

And as I pointed out in my first reply, the more dominant red and blue noise (as observed by the OP) is owing to the smaller total sensing areas of those two colours. So the noise can by no means be filtered out, because it comes from the sensor itself - it is just physics.
Totally agree with you. All the filter is going to do is make the image noisier. As an experiment it worked very well, and we have a conclusive result.
06-04-2009, 09:49 AM   #7
Veteran Member




Join Date: Jun 2007
Location: Owego, NY
Posts: 976
I think the only case where a blue (or red) filter might help is a situation where the white balance is so severely skewed that, prior to white balance adjustment, one raw channel clips while another is severely underexposed.

E.g. in a place lit by semi-dimmed tungsten lamps (an even lower color temp than normal tungsten), one might encounter situations where the red channel clips while the blue channel has very little data. So a blue cooling filter to bring the color temp up (man, that's always been a weird naming convention) would allow the blue channel to be exposed better without clipping the red channel.
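A rough numeric sketch of this edge case. All channel responses and filter transmission values below are made-up illustrative numbers, not measured data for any real lamp or filter:

```python
# Hypothetical relative raw channel responses to a grey patch under
# dim tungsten light (~2500 K). Illustrative values only.
tungsten_response = {"R": 1.0, "G": 0.45, "B": 0.15}

# Exposure is limited by the first channel to clip (red here); the
# blue channel then sits far down the scale before WB amplification.
ratio_no_filter = min(tungsten_response.values()) / max(tungsten_response.values())

# A blue cooling filter roughly evens out the channels by cutting red
# transmission (again, illustrative transmission values):
filter_transmission = {"R": 0.35, "G": 0.7, "B": 1.0}
filtered = {c: tungsten_response[c] * filter_transmission[c] for c in "RGB"}
ratio_with_filter = min(filtered.values()) / max(filtered.values())

print(f"weakest/strongest channel, no filter:   {ratio_no_filter:.2f}")
print(f"weakest/strongest channel, with filter: {ratio_with_filter:.2f}")
```

The point of the sketch: the filter narrows the spread between channels, so the weak channel can be exposed higher before the strong one clips. It only pays off in this extreme mixed-exposure case, not as a general noise cure.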
06-04-2009, 10:37 AM   #8
Senior Member
architorture's Avatar

Join Date: Apr 2008
Location: New York, NY
Posts: 151
Original Poster
Thanks for the great replies.

I figured I was getting something wrong in my thought process.

Pretty cool that you guys settled it so well, though.

06-04-2009, 12:00 PM   #9
Senior Member




Join Date: May 2009
Posts: 125
Foveon X3

This is almost exactly where Foveon got their idea.

To get similar results, people just upped the megapixels on Bayer sensors. So far, for me, that has worked beautifully, but your idea is not bad. It's just that it may be too difficult to implement, because the sensor and the colour filters are inseparable.

I imagine switching filters would be a more complicated solution than Foveon's.
06-05-2009, 02:36 PM   #10
Senior Member




Join Date: Feb 2009
Location: west coast USA
Posts: 206
We've already seen examples of what happens, but to elaborate on why...

Originally posted by architorture:
Most current digital cameras use sensors built with a Bayer color filter array (CFA) as their image capture devices. The Bayer array is made up of G(reen)R(ed)G(reen)B(lue) pixels. I have read that this is because it mimics our eyes' naturally higher sensitivity to green light.
It's not to mimic our eyes' sensitivity, but rather to take advantage of it.

Our eyes are more sensitive to green light, which means we use it primarily to determine how bright something is. Relative brightness is how we detect contrast, the difference between light levels on objects or parts of objects, and thus detail in a scene. You can see that to some extent on the linked page: in the single-color images, there are always two crayons that are hard to tell apart, but it's easier in the green image than in the others. (That's a lousy example because it's been through several levels of computer processing and display hardware, but try roaming around at night with one of those tri-color flashlights.)

That sensitivity is why early night vision systems output in green, and why red is often used as indication/incidental light at night -- since our eyes are less sensitive to it, our pupils don't contract as far, so it doesn't disrupt our natural night vision as much as other colors would.

The problem faced by the sensor is one of spatial resolution in color. If the sensor didn't have color filters, it would be great at creating high-resolution monochrome images -- one pixel of output exactly represents one pixel of light input in space. Since the filters are required to sense color, each pixel of output is missing two colors of input, and those two colors are lost at that point in space. The demosaicing step of digital processing basically takes each pixel and guesses what the other two colors should be.

The idea behind having more green pixels, then, is that they can be weighted toward relative brightness on output. That provides the most accurate local contrast for our eyes, and thus approximates spatial detail more closely. The color may not be perfectly accurate at each pixel, but since the brightness is more accurate, our eyes fill in the gaps and we see, e.g., feathers instead of a smooth surface.

Quote:
however, this means, i think, that the sensor is not as efficient at registering red and blue light (and whatever colors formed by their combination).
Actually, sensors tend to be most efficient at registering the red frequencies; there's usually a big infrared filter in front of the entire sensor. You'll sometimes see people complain about "the red channel blowing first" when they're trying to take pictures of things like bright red flowers, meaning the relative brightness wasn't that high but the sensor picked up so much red it saturated the image anyway.

Quote:
the result, as i think we have all experienced, is that pictures taken at high ISO and/or long exposures show mostly red and blue color blotchiness.
The red and blue blotches are actually a result of processing rather than of the noise characteristics of the sensor itself. Underneath the color filters the sensor is monochrome, and the noise comes from there and from later analog stages in the sensing pipeline, so every pixel is equally noisy regardless of color.

What happens is that the demosaicing algorithm goes for local contrast by paying more attention to the green channel as mentioned above. When filling in the missing colors for each pixel, it translates much of the green channel toward relative brightness (luminance/luma) and the red and blue channels more toward color shift (chrominance/chroma). The noise present then takes on those two characteristics as well.

The result is that we see luminance noise as false contrast, or detail/texture, and our eyes are quite good at filtering that out when we look at a scene. Chroma noise shows up as the color blotches, which we find annoying because it changes the fundamental colors of the object we're looking at.

You can see an example of this processing in my K200D NR comparison post, in the bottom section, middle column. The raw converter I used doesn't do green weighting, so in the bottom image you can see the green pixels that result from noise in that channel. (And that there is roughly as much green noise as there is red and blue combined, matching the ratio of color filters on the sensor, showing that the noise is spread equally.) Above is the camera's JPEG engine, which translated many of the noisy green pixels into whitish ones, as if those were simply "brighter" areas of the scene compared to the base near-black area.

This type of processing has so far turned out to be the best general approximation of how our eyes view the scene the camera is trying to capture.
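The way equal per-pixel noise ends up split between luminance and chroma can be sketched with some simple arithmetic. This is a deliberately simplified model using the ITU-R BT.601 luma weights as one common example of green-heavy weighting; a camera's actual demosaic weighting differs and is proprietary:

```python
# One common set of green-heavy luma weights (ITU-R BT.601); actual
# camera pipelines use their own weighting.
W_R, W_G, W_B = 0.299, 0.587, 0.114

# Suppose each raw channel carries the same noise amplitude n: the
# sensor under the filters is monochrome, so noise is colour-blind.
n = 1.0

# Simplified model: each channel's noise contributes to luminance in
# proportion to its luma weight...
luma_noise = {"R": W_R * n, "G": W_G * n, "B": W_B * n}

# ...and the remainder lands in the colour-difference (chroma) signals.
chroma_noise = {"R": (1 - W_R) * n, "G": (1 - W_G) * n, "B": (1 - W_B) * n}

# Green noise mostly becomes "false brightness" (easy for our eyes to
# ignore); red and blue noise mostly becomes colour blotches.
print("luma contributions:  ", luma_noise)
print("chroma contributions:", chroma_noise)
```

Under this toy model the green channel dominates the luminance noise while red and blue dominate the chroma noise, matching the blotchiness the OP observed.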

Quote:
SO, my thought/question was: what effect, other than obviously changing color balance, would result from using a red or blue filter on the lens at time of capture with a digital camera.
One of the most detrimental effects is a reduction in spatial resolution, as only 25% of the sensor area is being used to capture the scene. As the others have commented, this also reduces the amount of light captured, and since the other sensor pixels aren't getting light, they essentially read as pure noise. Standard processing that assumes the green channel is present just makes the results worse.

Hope that helps.
06-05-2009, 03:06 PM   #11
Inactive Account




Join Date: Dec 2008
Location: Ames, Iowa, USA
Photos: Albums
Posts: 2,965
It seems to me a color array of blue-white-white-red might be an improvement over the Bayer red-green-green-blue in use.

The RGGB array makes white by sum. A RWWB array would make green by difference (2G = W + W - 2R - 2B, since W = R + G + B).

There is no intrinsic color advantage to this, but there would be a significant shot noise improvement.

Roughly speaking, an exposure of white-light photons incident on an RGGB array gives 4 equal count outputs: one for red + two for green + one for blue. But for an RWWB array the same stream of photons gives a total count output of 8: one red + (one red + one green + one blue) + (one red + one green + one blue) + one blue = 8.

The relative luminance noise for the RWWB array is therefore smaller (by a factor of the square root of 2), which makes it more desirable.

In essence, more photons are used to measure luminance in the RWWB scheme. I don't know why this trick isn't implemented to boost the S/N ratio.
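The photon-count arithmetic above can be checked in a few lines. This assumes, as in the post, that each colour filter passes only its own primary and a clear pixel passes all three; the count k is an arbitrary illustrative number:

```python
import math

# One 2x2 cell under uniform white light. Let k photons of each
# primary (R, G, B) arrive at every pixel site (illustrative).
k = 100

rggb_counts = [k, k, k, k]          # R + G + G + B: each filtered pixel counts k
rwwb_counts = [k, 3 * k, 3 * k, k]  # clear ("W") pixels count all 3k photons

total_rggb = sum(rggb_counts)       # 4k
total_rwwb = sum(rwwb_counts)       # 8k

# Poisson shot noise: relative luminance noise ~ 1 / sqrt(total count),
# so doubling the total count improves it by sqrt(2).
improvement = math.sqrt(total_rwwb / total_rggb)
print(f"RWWB collects {total_rwwb // total_rggb}x the photons;")
print(f"relative luminance noise improves by {improvement:.3f}x")
```

This reproduces the 4-vs-8 count and the square-root-of-2 figure claimed in the post, under those filter assumptions.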

Iowa Dave

Last edited by newarts; 06-05-2009 at 03:48 PM.
06-05-2009, 03:57 PM   #12
Senior Member




Join Date: Feb 2009
Location: west coast USA
Posts: 206
Originally posted by newarts:
It seems to me a color array of blue-white-white-red might be an improvement over the Bayer red-green-green-blue in use.
Unfortunately, the hardware can't detect white. An individual photon has a wave frequency indicating its color in the light spectrum. "White" is literally a sum: a set of several photons representing a specific spread of frequencies in the right balance.

The hardware currently only "counts" photons to detect brightness; it can't tell what frequency they are. The color filters only let a specific frequency range through, but there's no way to filter for white, since you would need to examine the precise spread of frequencies across several photons to determine that the light is white.

The closest technology to frequency reading would probably be the Foveon sensor, which exploits the fact that different light frequencies are absorbed at different depths in silicon. Foveon has problems of its own though, such as color bleeding; I suspect the problem is either that we don't yet know how to handle more frequencies with more precision, or we know what needs to be done but the manufacturing technology doesn't exist yet, or it's just not remotely cost-effective at this stage.
06-05-2009, 04:21 PM   #13
Inactive Account




Join Date: Dec 2008
Location: Ames, Iowa, USA
Photos: Albums
Posts: 2,965
Originally posted by Quension:
Unfortunately, the hardware can't detect white. An individual photon has a wave frequency indicating its color in the light spectrum. "White" is literally a sum: a set of several photons representing a specific spread of frequencies in the right balance.
That's what I thought I said. I should have used "colorless" in place of "white"; I understand light and color.

Please re-read what I said, replacing "white" with "colorless". The point is that the luminance signal-to-noise ratio would be better if some photons were not rejected before reaching the sensor; rather, reject them with arithmetic after they've registered their presence.

Iowa Dave
06-05-2009, 07:04 PM   #14
Site Supporter
LeoTaylor's Avatar

Join Date: Feb 2007
Location: Connecticut
Photos: Gallery
Posts: 679
The discussion of white sensor cells somewhat resembles what I and many other astrophotographers do with astro cameras: we use LRGB. The same monochrome camera is used with a clear filter, a red filter, a green filter, and a blue filter. Of course, it helps that the stars stay put between filter changes! The luminance frames are used for their high sensitivity and contrast; the RGB frames are used to "color" the luminance image.

There was talk about a regular daytime camera coming out with luminance cells in the array, but I have not heard it mentioned recently.
06-06-2009, 02:53 AM   #15
Senior Member




Join Date: Feb 2009
Location: west coast USA
Posts: 206
Originally posted by newarts:
Please re-read what I said, replacing "white" with "colorless". The point is that the luminance signal-to-noise ratio would be better if some photons were not rejected before reaching the sensor; rather, reject them with arithmetic after they've registered their presence.
Whoops; at one point I thought that might be what you meant, but I was a bit rushed at the end of my last post and didn't think it through. Sorry!

One issue is that the sensitivity of the colorless pixel would need to be different from that of the filtered ones; you would have to nearly triple the dynamic range to accommodate full white without losing sensitivity to pure green. Increased dynamic range is a goal sensor manufacturers are still chasing, and sticking with RGGB (or other individual color filters) allows improvements to be applied to the entire image instead of being limited to subtractive color detection in part of it. It might not be considered worth trying until sensors exceed our eyes' dynamic range and there is excess to play with.
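The "nearly triple" figure comes from simple full-well arithmetic. A sketch, with a hypothetical full-well capacity as the only assumed number:

```python
import math

# If a green-filtered pixel saturates its full well under pure green
# light, a clear ("colorless") pixel exposed to white light of equal
# per-channel intensity collects roughly 3x the photons (R + G + B).
full_well_filtered = 50_000            # hypothetical electrons

# To hold a full white exposure without clipping, while keeping the
# same sensitivity to a pure-green scene, the clear pixel needs about:
full_well_clear = 3 * full_well_filtered

extra_stops = math.log2(full_well_clear / full_well_filtered)
print(f"Required full well: {full_well_clear} e-")
print(f"~{extra_stops:.2f} extra stops of headroom")
```

So the clear pixels demand roughly log2(3), about 1.6 stops, of additional headroom over the filtered ones, which is the dynamic-range cost described above.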