Originally posted by photoptimist:

Absolutely! A well-designed demosaicing algorithm will estimate the full-resolution luminance pattern implied by pixel-to-pixel variations in all three color bands to fill in the missing samples in each color band.
But if the photo is taken with a strong red filter on the lens, then there will be very little signal and detail in the green and blue pixel channels and little ability to estimate the red-channel details that fell on green and blue pixels.
Thus, it's probably better to shoot in full color with no filter and then post-process with the channel mixer to convert the RGB image into a color-filtered monochrome. The only caveat is that channel mixing cannot exactly replicate all the spectral effects of a color filter on the lens. For example, a picture taken through an orange filter might be subtly different from an unfiltered color image mixed in post to simulate an orange filter. A true orange glass filter might provide stronger green-versus-yellow contrast than the unfiltered-plus-post-processing version can offer.
Post-processing of the unfiltered RGB image might be good enough to create the desired effect. Yet some photographers might seek a truer color filtration that only a color filter can provide.
Note: for those who have (and can use) pixelshift, the results with a color filter will be at full resolution.
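The channel-mixer approach described above amounts to taking a weighted sum of the three color channels. A minimal sketch in Python/NumPy, with purely illustrative weights (these coefficients are my assumption, not a calibrated match to any real orange filter):

```python
import numpy as np

# Hypothetical channel-mixer weights loosely imitating an orange filter:
# favor red, pass some green, block blue. Illustrative only, not calibrated.
ORANGE_MIX = np.array([0.75, 0.25, 0.00])  # R, G, B weights summing to 1

def filtered_monochrome(rgb, weights=ORANGE_MIX):
    """Convert a linear RGB image of shape (H, W, 3) to monochrome
    by mixing the channels with the given weights."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb @ weights

# Example: a mid-gray pixel and a strongly blue pixel
img = np.array([[[0.5, 0.5, 0.5], [0.1, 0.2, 0.9]]])
mono = filtered_monochrome(img)
# Gray stays at 0.5; the blue pixel drops to 0.125, darkening
# the way blue subjects do behind an orange filter.
```

Note that the mix operates on the three already-integrated RGB samples, which is exactly why it can only approximate, not replicate, a true spectral filter.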
I think I agree for what I consider to be the general case given some assumptions about the spectral responses of the pixels.
Gedanken experiment: If there were no overlap between green and red pixel spectral responses, a pure orange source emitting in the spectral gap wouldn't be detected, with or without an orange filter. An orange laser (Raman shifted green, say) illuminating a board in front of a luminous white background would appear in the image data as a black area against a white background, both in color and B&W. This indicates that some pixel spectral overlap is needed for reasonable raw performance.
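The gedanken experiment can be made concrete with toy Gaussian spectral sensitivities. All the center wavelengths and widths below are assumptions chosen for illustration, not measured CFA data:

```python
import numpy as np

def gaussian(wl, center, width):
    """Toy spectral sensitivity: a Gaussian centered on `center` nm."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

orange_nm = 593.0  # e.g. a Raman-shifted green laser line

def sensor_response(width):
    """Combined R+G pixel response to a monochromatic orange source,
    with R centered at 610 nm and G at 540 nm (illustrative values)."""
    r = gaussian(orange_nm, 610.0, width)
    g = gaussian(orange_nm, 540.0, width)
    return r + g

narrow = sensor_response(width=5.0)   # nearly disjoint R/G bands: ~0 signal
broad = sensor_response(width=35.0)   # overlapping bands: clear signal

# With non-overlapping responses the orange line is essentially invisible
# (the "black board" outcome); with overlap it registers in both channels.
```

The two cases bracket the point: without spectral overlap between the R and G responses, the in-gap source produces almost no raw signal at all.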
Because a computer monitor with red, green, and blue pixels can emit an RGB combination that looks as orange as one might want, recreation of the orange board would depend on the photographer using his memory of the board's illumination color to recover it in PP by filling in the black area. The luminance data of the result should then represent a valid B&W representation (which could have positive or negative board contrast, depending on laser power).
A PP synthetic orange filter (if the physical filter's spectral response is known) could be imposed on the PP color image and the filtered luminance data that resulted could represent a valid expression of an orange filtered B&W scene. I suspect that this process could be very monitor dependent, even with calibration.
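To see why a synthetic filter applied after capture differs from a physical one, compare what a true spectral filter does: it multiplies the scene spectrum by a transmission curve before any integration into channels. A rough numeric sketch, where the scene spectrum, filter curve, and photopic weighting are all invented stand-ins:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)  # wavelength samples in nm
dw = 10.0                           # sample spacing for the integrals

def gaussian(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

scene = gaussian(wl, 560.0, 60.0)        # toy scene spectral radiance
luminosity = gaussian(wl, 555.0, 45.0)   # rough stand-in for photopic curve
# Orange long-pass filter: transmission rising through the yellows
orange_filter = 1.0 / (1.0 + np.exp(-(wl - 560.0) / 15.0))

# A physical filter acts per-wavelength, before capture...
filtered_lum = np.sum(scene * orange_filter * luminosity) * dw
unfiltered_lum = np.sum(scene * luminosity) * dw

# ...whereas a PP synthetic filter can only reweight the three RGB
# samples, in which the per-wavelength detail has already been
# integrated away. The filtered luminance is necessarily lower here,
# but its spectral shaping is unavailable to a 3-channel mix.
```

This is consistent with the monitor-dependence worry: once the scene is reduced to three channel values (and then to three display primaries), the per-wavelength product above can no longer be formed exactly, only approximated.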