Originally posted by RobA_Oz:
"Doubling the image will have the concomitant effect of halving the light that falls on each sensor site, ignoring losses. On the surface of it, this means that removing such a filter may improve low-light performance, for a given sensor."
Aaargh!
At the risk of considerable oversimplification (it is much easier to explain this with an overhead transparency and coloured felt-tip pens):
Imagine that your camera is taking a picture of a very small blue object, just big enough to give an image on the sensor one pixel in size. Now move the camera a little so that this image falls onto a red-sensitive element. The sensor will not see it: the red filter over that photosite blocks the blue light.
Now put in a birefringent filter just thick enough that the double image it produces (a birefringent plate splits incoming light into two offset images, the ordinary and extraordinary rays) has a spacing equal to the distance between the sensor elements. (Ignore for the moment how the colour-sensitive elements are arranged, i.e. imagine the Bayer sensor turned by 45 degrees.) You'll get an image of your blue point falling not only on the blind red-sensitive element but also on an adjacent blue-sensitive element. Bingo. Your Bayer sensor will now see it.
You'll also need a second birefringent filter rotated 90 degrees to the first to make this work in the perpendicular direction as well. So the image of your blue point actually becomes four blue point images, each as sharp as the original but offset from it, and each carrying a quarter of the light.
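If it helps to see the counting, here is a minimal Python sketch of the argument. Everything in it is my own illustration (the RGGB tile, the coordinates, the function names), not anything taken from a real sensor:

BAYER = [["R", "G"],   # standard RGGB Bayer tile, repeated across the sensor
         ["G", "B"]]

def photosite_color(row, col):
    """Colour filter over the photosite at (row, col)."""
    return BAYER[row % 2][col % 2]

def blue_signal(point_images):
    """Blue signal recorded for a set of point images, each given
    as the (row, col) of the photosite it lands on."""
    return sum(1 for r, c in point_images if photosite_color(r, c) == "B")

# The blue point lands on a red-filtered photosite: invisible.
print(blue_signal([(0, 0)]))                          # 0

# One birefringent plate: two images, one photosite apart. On a real
# RGGB mosaic the horizontal neighbour is green, so still no blue signal;
# the single-plate step above assumes the simplified 45-degree layout.
print(blue_signal([(0, 0), (0, 1)]))                  # 0

# Two plates at 90 degrees: four images in a 2x2 block. Any 2x2 window
# on an RGGB mosaic contains exactly one blue photosite, so the point
# is seen wherever it lands.
print(blue_signal([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 1

(The same counting shows why the full four-way split is needed: green photosites appear in every row and column, so a green point would be covered by a single split, but red and blue points need both.)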