Originally posted by falconeye: Ben, it appears we both use the same terminology, then.
But something else catches my attention: what you write above is of course correct, but I stumble over your "perceive" vs. "record".
In the example you just gave ("narrow white" producing falsely coloured surfaces), both the human eye AND the camera will "see" the same wrong colours.
Also, when you write "certain
colour will be emphasized" you actually mean "certain
wavelength will be emphasized".
Colour is a three-dimensional vector of recorded/perceived luminosities for the three distinct receptor types of an eye/camera. A single wavelength can be assigned a colour (by the relative weights it produces in the neighbouring receptor channels), but not every colour corresponds to a wavelength; white, for example, does not. A mixture of wavelengths is a spectrum, not a colour. And of course, infinitely many spectra correspond to any given single colour.
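In formula form (notation added for clarity): the recorded colour is cᵢ = ∫ S(λ) Rᵢ(λ) dλ for i = 1, 2, 3, where S is the spectrum and Rᵢ are the three receptor sensitivities. Any two spectra with the same three integrals (metamers) are indistinguishable after this projection.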
Of course, you know all this already, no doubt about it. But it is difficult to discuss if one isn't very careful with wording.
So this is what it all boils down to:
Both the eye and a camera convert a spectrum (a function) into a colour (a 3D vector). The conversion operators (the spectral sensitivities I quoted above) may be slightly different, but this doesn't matter in our discussion. The point is that after the conversion it no longer matters what the spectrum looked like. And both eye and camera sensor transmit only the converted result, not the spectrum. A camera doesn't record a spectrum!
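To make this concrete, here is a toy numeric sketch of that projection. The Gaussian sensitivities are invented for illustration, not real cone or camera curves; the script constructs a three-line spectrum that projects onto exactly the same colour vector as a flat "white" spectrum:

```python
import numpy as np

# Wavelength grid in nm over the visible range.
wl = np.arange(400, 701, 1.0)

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy receptor sensitivities: invented Gaussians standing in for real
# cone fundamentals or camera channel responses.
sens = np.stack([gaussian(600, 40),   # R-like channel
                 gaussian(540, 40),   # G-like channel
                 gaussian(460, 40)])  # B-like channel

def to_colour(spectrum):
    """Project a spectrum (a function of wavelength) onto a 3D colour
    vector: three inner products, and nothing else survives."""
    return sens @ spectrum

flat = np.ones_like(wl)  # broadband "white" spectrum

# Solve for three narrow-line amplitudes whose projection matches `flat`.
line_shapes = np.stack([gaussian(c, 2) for c in (600, 540, 460)], axis=1)
amps = np.linalg.solve(sens @ line_shapes, to_colour(flat))
lines = line_shapes @ amps

print(to_colour(flat))   # identical 3D vectors ...
print(to_colour(lines))  # ... from two very different spectra (metamers)
```

Once projected, nothing downstream can tell the two spectra apart.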
Any attempt to construct a difference between camera and eye here must fail.
Ben, are you sure you understand what I'm trying to say? Or are you just trying to explain what you think I may not understand?
I agree with most of what you write. Maybe my attempt to summarize what I mean was too short and again led to misunderstanding.
When I talk about "colour", I mean colour, not wavelength or spectrum, simply because colour is what we are talking about in photography (leaving B&W aside).
I think that colour perception, which is a product of the physiological properties of the eye and of post-processing by our brain (based on experience and applied fuzzy logic), is very different from colour recording with a camera sensor.
Human perception enables us to "automatically" correct colours under most lighting conditions to match what they would look like in daylight, simply because we "know" from experience how these colours should look. We can also fill in colour gaps if the light source is a strong emission-line source (sodium/mercury vapour).
A sensor or film will only record those colours that are actually present in the scene. A continuous light source therefore enables the sensor to record all the colours present in the scene (represented by the wavelengths reflected by the surfaces we perceive as coloured). If the colour temperature of that continuous light source is higher or lower than the conventional 5600 K, white balance will rescale the colour channels to match the colour temperature and thus provide a final image corrected to the 5600 K convention (unless we choose otherwise).
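As a minimal sketch of what such a correction amounts to in practice (the numbers are illustrative, not any real camera's coefficients): white balance rescales the raw channels so that a patch known to be neutral comes out with equal channel values:

```python
import numpy as np

def white_balance(rgb, neutral_rgb):
    """Rescale the channels so that `neutral_rgb` (a patch that should be
    neutral grey) maps to equal channel values.  Gains are normalised to
    the green channel, the usual convention in raw converters."""
    gains = neutral_rgb[1] / neutral_rgb
    return rgb * gains

# Hypothetical raw values under a warm (low colour temperature) source:
grey_card = np.array([0.80, 0.60, 0.35])   # should be neutral, reads reddish
pixel     = np.array([0.55, 0.40, 0.20])

print(white_balance(grey_card, grey_card))  # -> equal channels
print(white_balance(pixel, grey_card))      # same gains applied to the scene
```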
If the subject is illuminated by a light source with only a few strong emission lines (as is the case with those LED PARs under discussion in this thread), many coloured surfaces will not receive the wavelengths representative of their surface colour and thus cannot reflect them. They will appear dull and in different colours than they would under standard light.
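A toy simulation of this effect (all curves invented for illustration): a surface whose reflectance falls into the gap between the emission lines returns almost no energy, so the sensor records next to nothing, regardless of any later correction:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)
g = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

sens = np.stack([g(600, 40), g(540, 40), g(460, 40)])  # toy R/G/B channels
orange = 0.9 * g(590, 15)   # invented reflectance of an orange surface

continuous = np.ones_like(wl)        # broadband illuminant
led_par = g(450, 10) + g(545, 10)    # two narrow emission lines only

for name, light in [("continuous", continuous), ("LED PAR", led_par)]:
    reflected = light * orange       # what actually leaves the surface
    print(name, sens @ reflected)    # recorded colour vector
```

Under the line source the recorded vector is close to zero in every channel: the orange surface was never lit at its own wavelengths, so there is nothing left to correct.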
White balance tries to correct this to a certain degree by adding a green-magenta shift correction to the simple colour-temperature shift. But with only a few emission lines available, this is not possible, or at least not to the full extent. The result may be an image that is perceived as having correct whites and blacks, but certain colours will nevertheless be missing or off.
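A sketch of that two-axis idea, with a deliberately simplified parametrisation: temperature tilts red against blue, tint scales green relative to magenta. Note that both axes only rescale what was recorded; a channel that received no signal stays at zero:

```python
import numpy as np

def wb_temp_tint(rgb, temp=1.0, tint=1.0):
    """Two-axis white balance on linear RGB: `temp` tilts red against
    blue (the colour-temperature shift), `tint` scales green relative to
    magenta (the green-magenta shift).  1.0 means no correction."""
    return rgb * np.array([temp, tint, 1.0 / temp])

pixel = np.array([0.45, 0.60, 0.40])   # greenish cast from a line source
print(wb_temp_tint(pixel, temp=1.05, tint=0.75))

# A channel that recorded nothing stays at zero, no matter the setting:
print(wb_temp_tint(np.array([0.0, 0.5, 0.3]), temp=1.4, tint=0.8))
```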
Of course, there are post-processing technologies available that use fuzzy logic or AI algorithms to reconstruct these missing or off colours. You can use face detection and then apply a correction algorithm to fix skin tones automatically, as some software seems to do.
My point is only that today this is not possible to the full extent, and that continuous light sources are much better (i.e. easier to correct, if correction is necessary at all) than line radiators. That may change someday. But still: consider, for instance, subjects with strong, pure colours. How should any algorithm reconstruct the colour as perceived under standard light when the subject's colours fall completely into the emission gaps? There is no way to do that correctly.
This whole lengthy discussion (from my side) only served to illustrate that emission-line-based light sources pose a much more severe problem for colour balancing and correction than conventional thermal radiators do, and that we obviously have to deal with two very different lighting concepts.
Ben