We've already seen examples of what happens, but to elaborate on why...
Originally posted by architorture:
most current digital cameras use sensors built in the bayer color filter array (CFA) style as their image capture devices. explanation. the bayer array is made up of G(reen)R(ed)G(reen)B(lue) pixels. i have read that this is because it mimics our eyes' natural higher sensitivity to green light.
It's not to mimic our eyes' sensitivity, but rather to take advantage of it.
Our eyes are more sensitive to green light, which means we use it primarily to determine how bright something is. Relative brightness is how we detect contrast, the difference between light levels on objects or parts of objects, and thus detail in a scene. You can see that to some extent on the linked page: in the single-color images there are always two crayons that are hard to tell apart, but it's easier to separate them in the green image than in the others. (That's a lousy example because it's been through several levels of computer processing and display hardware, but try roaming around at night with one of those tri-color flashlights.)
That sensitivity is why early night vision systems output in green, and why red is often used as indication/incidental light at night -- since our eyes are less sensitive to it, our pupils don't contract as far, so it doesn't disrupt our natural night vision as much as other colors would.
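To put rough numbers on the brightness point above: these are the standard Rec. 709 luma weights (nothing camera-specific, just the usual video/display convention), and they show how heavily green dominates what we perceive as brightness.

```python
# Standard Rec. 709 luma weights: how much each primary contributes to
# perceived brightness. Green carries roughly 70% of it on its own.
def rec709_luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The same amount of light in each channel contributes very differently:
print(rec709_luma(1.0, 0.0, 0.0))  # red only   -> 0.2126
print(rec709_luma(0.0, 1.0, 0.0))  # green only -> 0.7152
print(rec709_luma(0.0, 0.0, 1.0))  # blue only  -> 0.0722
```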
The problem faced by the sensor is one of spatial resolution in color. If the sensor didn't have color filters, it would be great at creating monochrome images in high resolution -- one pixel of output exactly represents one pixel of light input in space. Since the filters are required to sense color, each pixel of output is missing two colors of input, and those two colors are lost at that point in space. The demosaicing part of digital processing basically takes each pixel and guesses what the other two colors should be.
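If you want to see what that guessing looks like in its simplest possible form, here's a toy sketch of a plain bilinear-style demosaic over an RGGB mosaic. Real converters are far more sophisticated, so treat this purely as an illustration of "fill in the two missing colors from the neighbors that did measure them."

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy demosaic of an RGGB Bayer mosaic: each output pixel keeps the one
    color it measured and gets the other two by averaging the nearest
    photosites that did measure them. `raw` is a 2-D array of sensor values."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    mask = np.zeros((h, w, 3), dtype=bool)

    # RGGB layout: R at even row/even col, G at the two mixed sites, B at odd/odd.
    mask[0::2, 0::2, 0] = True   # red sites
    mask[0::2, 1::2, 1] = True   # green sites on red rows
    mask[1::2, 0::2, 1] = True   # green sites on blue rows
    mask[1::2, 1::2, 2] = True   # blue sites

    for c in range(3):
        rgb[:, :, c] = np.where(mask[:, :, c], raw, 0.0)

    out = rgb.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            for c in range(3):
                if not mask[y, x, c]:
                    # Average the measured samples of this color in the 3x3 neighborhood.
                    vals = rgb[y0:y1, x0:x1, c][mask[y0:y1, x0:x1, c]]
                    out[y, x, c] = vals.mean() if vals.size else 0.0
    return out
```

Feeding it something like `demosaic_bilinear(np.random.rand(8, 8))` gives an 8x8x3 image in which two of every three values at each pixel are guesses, which is exactly the point.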
The idea behind having more green pixels, then, is that they can be weighted for relative brightness on output. That will provide the most accurate local contrast for our eyes, and thus approximate spatial detail more closely. The color may not be perfectly accurate at each pixel, but since the brightness is more accurate, our eyes will fill in the gaps and we'll see e.g. feathers instead of a smooth surface.
Quote: however, this means, i think, that the sensor is not as efficient at registering red and blue light (and whatever colors formed by their combination).
Actually, sensors tend to be most efficient at registering the red frequencies; that sensitivity extends into the infrared, which is why there's usually a big infrared-cut filter in front of the entire sensor. You'll sometimes see people complain about "the red channel blowing first" when they're trying to take pictures of things like bright red flowers, meaning the relative brightness wasn't that high but the sensor picked up so much red it saturated the image anyway.
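Made-up numbers, but they illustrate the effect: using the same Rec. 709 luma weights as above, a strongly red subject can clip the red channel while the overall brightness still looks quite moderate.

```python
# Hypothetical linear values for a bright red flower; the sensor clips at 1.0.
r, g, b = 1.4, 0.25, 0.20
r_recorded = min(r, 1.0)        # anything above 1.0 in red is simply gone

# Overall brightness is still modest even though red has blown.
luma = 0.2126 * r_recorded + 0.7152 * g + 0.0722 * b
print(r_recorded, round(luma, 2))   # 1.0, ~0.41
```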
Quote: the result, as i think we have all experienced, is that pictures taken at high ISO and/or long exposures show mostly red and blue color blotchiness.
The red and blue blotches are actually a result of processing rather than the noise characteristics of the sensor itself. Underneath the color filters the sensor itself is monochrome, and the noise comes from there and later analog stages in the sensing pipeline, so every pixel is equally noisy regardless of color.
What happens is that the demosaicing algorithm goes for local contrast by paying more attention to the green channel as mentioned above. When filling in the missing colors for each pixel, it translates much of the green channel toward relative brightness (luminance/luma) and the red and blue channels more toward color shift (chrominance/chroma). The noise present then takes on those two characteristics as well.
The result is that we see luminance noise as false contrast, or detail/texture, and our eyes are quite good at filtering that out when we look at a scene. Chroma noise shows up as the color blotches, which we find annoying because it changes the fundamental colors of the object we're looking at.
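Here's a rough sketch of that split, assuming a generic Rec. 709-style luma/chroma transform rather than any particular camera's processing: give every channel of a flat gray patch the same independent noise (standing in for the equal per-pixel noise of the sensor) and look at where it lands.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Flat mid-gray patch with equal, independent noise in every channel.
r = 0.5 + rng.normal(0, 0.05, n)
g = 0.5 + rng.normal(0, 0.05, n)
b = 0.5 + rng.normal(0, 0.05, n)

# Generic luma/chroma split (Rec. 709-style weights).
y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
cb = b - y
cr = r - y

print(np.corrcoef(y, g)[0, 1])   # ~0.95: luma noise mostly tracks the green channel
print(np.corrcoef(cr, r)[0, 1])  # ~0.74: red noise ends up mostly as chroma
print(np.corrcoef(y, b)[0, 1])   # ~0.10: blue contributes little to luma noise
```

Even though every channel started out equally noisy, most of the green noise shows up as luminance variation and most of the red and blue noise shows up as color shifts.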
You can see an example of this processing in my K200D NR comparison post, in the bottom section, middle column. The raw converter I used doesn't do green weighting, so in the bottom image you can see the green pixels that result from noise in that channel. (And there is roughly as much green noise as there is red and blue combined, matching the ratio of color filters on the sensor and showing that the noise is spread equally.) Above it is the camera's JPEG engine, which translated many of the noisy green pixels into whitish ones, as if they were simply "brighter" areas of the scene compared to the base near-black area.
This type of processing has so far turned out to be the best general approximation of how our eyes view the scene the camera is trying to capture.
Quote: SO, my thought/question was: what effect, other than obviously changing color balance, would result from using a red or blue filter on the lens at time of capture with a digital camera.
One of the most detrimental effects is a reduction in spatial resolution, as only 25% of the sensor area is being used to capture parts of the scene. As the others have commented, this also reduces the amount of light captured, and since the other sensor pixels aren't getting light, they essentially read as pure noise. Standard processing that assumes the green channel is present just makes the results worse.
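A toy simulation of that, with completely made-up numbers and the same RGGB layout as the demosaic sketch above: behind a deep red filter, only the red photosites get any real signal, and the green and blue sites record little more than read noise.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 8, 8
scene = rng.uniform(0.2, 0.8, (h, w))   # hypothetical scene as seen through a deep red filter

raw = rng.normal(0.0, 0.02, (h, w))     # read noise at every photosite
raw[0::2, 0::2] += scene[0::2, 0::2]    # only the red sites (1 in 4) see the scene

# Roughly a quarter of the photosites end up carrying anything above the noise.
print("usable photosites:", int((raw > 0.1).sum()), "of", h * w)
```

Run that raw frame through a demosaicer that assumes the green channel is real and it will happily interpolate detail out of what is mostly noise.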
Hope that helps.