Originally posted by KungPOW:
"Of all the cameras I've used, the light meter can only be trusted to adjust the exposure to 18% on average. Even complex matrix systems do this. So a matrix metered snow scene will still underexpose. How could the camera know that the scene was bright?"
That's the logical flaw right there. In fact, all the camera knows is that the scene is bright. It "knows" that in direct sun in the middle of summer, and it "knows" it when shooting a field covered in snow. Either way, the sensor reads "wow, it's bright." What it can't know is whether the subject is inherently bright (like snow) or merely happens to look bright but should be exposed to appear normal. Adding another sensor just adds more "yup, it sure is bright," and no more knowledge about the scene.
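To make that concrete, here's a toy sketch. The numbers and the 18%-target formula are my own illustration, not any camera's actual firmware:

```python
import math

# A reflected-light meter sees only average luminance, so two very
# different scenes that happen to reflect the same amount of light
# get identical exposure advice. The 0.72 readings are invented.

def meter(avg_luminance, target=0.18):
    """Stops of compensation needed to pull the scene to 18% gray."""
    return math.log2(target / avg_luminance)

snow_field = 0.72    # bright because the subject itself is white
sunlit_scene = 0.72  # bright because the light falling on it is strong

# Both come back as -2 stops: the snow gets darkened toward gray,
# which is exactly the underexposure the quote complains about.
assert meter(snow_field) == meter(sunlit_scene)
```

Same reading in, same recommendation out; the meter literally has no input that distinguishes the two cases.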
Matrix metering attempts to add some artificial intelligence by using multiple sensors to compare different parts of the frame and guess what the subject is. For example, a dark center against a bright background is probably a backlit subject, so expose to get the center at 18% even if the background blows out. A second whole-frame or ambient-light sensor isn't going to add much more information.
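That backlit-subject heuristic can be sketched in a few lines. Everything here (the 4x4 zone layout, the 2x threshold, the names) is invented for illustration; a real matrix system uses far more elaborate rules:

```python
# Toy sketch of the matrix-metering guess described above: compare the
# center zones to the surrounding zones and pick an exposure strategy.

def classify_scene(zones):
    """zones: 4x4 grid of relative luminance readings (0.0-1.0)."""
    center = [zones[r][c] for r in (1, 2) for c in (1, 2)]
    edges = [zones[r][c] for r in range(4) for c in range(4)
             if not (r in (1, 2) and c in (1, 2))]
    center_avg = sum(center) / len(center)
    edge_avg = sum(edges) / len(edges)

    if edge_avg > 2 * center_avg:
        # Dark subject on a bright background: probably backlit,
        # so meter for the center even if the background blows out.
        return "backlit", center_avg
    # Otherwise average the whole frame toward 18% gray as usual.
    overall = (sum(center) + sum(edges)) / 16
    return "average", overall

# A dark subject surrounded by bright sky:
scene, target = classify_scene([
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.9],
])
```

Even this crude version "knows" more about the scene than any number of extra whole-frame brightness sensors would, which is the point: the intelligence comes from comparing regions, not from measuring more light.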
However, one could imagine a matrix metering system that seems a lot smarter simply by giving it more to work with. The current one has 16 areas, and presumably works only on luminance. Instead, the entire live-view sensor could be read and a small full-color thumbnail constructed. This thumbnail could be compared against a much, much larger database and the best match used. Flash memory is cheap — put half a gigabyte of possible scenes in there and a fast hash algorithm to match 'em up, and there you go.
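Purely as a sketch of how that lookup might work — the hash, the database entries, and the compensation values are all hypothetical, nothing like a shipping camera:

```python
# Reduce the live-view image to a tiny luminance hash, then find the
# closest stored scene and reuse its exposure compensation.

def thumb_hash(pixels):
    """Threshold each pixel against the mean to get a compact bit tuple."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# (scene hash, exposure compensation in stops) -- values made up;
# e.g. a recognized snow scene gets pushed +2 stops brighter.
database = [
    (thumb_hash([0.9] * 12 + [0.7] * 4), +2.0),  # "snow field"
    (thumb_hash([0.2] * 8 + [0.8] * 8), -0.3),   # "backlit portrait"
    (thumb_hash([0.5] * 16), 0.0),               # "average scene"
]

def best_match(pixels):
    """Return the stored compensation of the nearest database scene."""
    h = thumb_hash(pixels)
    return min(database, key=lambda entry: hamming(entry[0], h))[1]
```

A real version would want a perceptual hash over full-color thumbnails and an index that scales to millions of entries, but the shape of the idea is the same: recognition by lookup, not by measuring more light.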