Originally posted by nosnoop Even though an EVF can accurately display the sensor's coverage area and things like white balance, there will always be some difference between LiveView and the actual capture - mainly because LiveView is done at open aperture and has to cap the shutter speed to maintain a reasonable refresh rate.
In low light, the LiveView shutter speed (1/60 or 1/30 s) may be quite a bit faster than the shutter speed selected for the actual shot, simply to maintain the EVF refresh rate.
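To put a number on that mismatch: if the preview exposure is capped at the refresh-rate floor, the feed has to make up the difference with gain (noise) or simply show a darker image. A minimal sketch of that arithmetic - the function name and the 1/60 s floor are my illustrative assumptions, not any camera's actual spec:

```python
import math

def liveview_gap_stops(capture_shutter_s: float, refresh_floor_s: float = 1/60) -> float:
    """Stops of extra gain (or darkness) a live feed incurs when its
    exposure time is capped at a hypothetical refresh-rate floor."""
    liveview_shutter = min(capture_shutter_s, refresh_floor_s)
    return math.log2(capture_shutter_s / liveview_shutter)

# A 1/4 s night exposure previewed at a 1/60 s refresh floor:
print(round(liveview_gap_stops(0.25), 2))  # -> 3.91 stops of gain needed

# A 1/125 s daylight exposure fits under the floor: no gap.
print(liveview_gap_stops(1/125))  # -> 0.0
```

So a long night exposure previewed at 60 fps is almost four stops short, which is exactly the grainy/dark preview trade-off discussed here.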
That's why on some cameras you can select whether the EVF should bias towards previewing the final image capture, or towards ease of viewing.
Of course, you could use the lever currently used for DOF preview to toggle, or momentarily switch, between the two modes.
Originally posted by falconeye One aspect often overlooked is the high dynamic range of the human eye (if allowed to use microsaccades). Large sensors and forthcoming sensors with high quantum efficiency and low readout noise come pretty close.
So, what the human eye sees and the image information available for post-processing will soon be pretty similar - though today, the eye still has the advantage.
This means that an EVF, with its 8-bit dynamic range and its tendency to ruin the eye's dark adaptation, can only provide a glimpse of the available image information.
Why would a future high-quality EVF for use in semi-pro and pro cams have only 8 bit dynamic range? Why couldn't it have adaptive backlighting or ND filters?
Well, with adaptive backlighting, 8 bits per color is actually plenty for real-time applications.
The eye's dynamic range without time to adapt is actually not that large - which causes the problem you mention in the last sentence of your post. This means we don't need huge static contrast (a reasonably high static contrast suffices), but we do need very large dynamic contrast: both to properly show very bright scenes in bright sunlight, and to avoid ruining dark adaptation in dimly lit scenes.
With adaptive LED backlighting, LCD TVs can achieve a dynamic contrast of 2,000,000:1, and static (in-scene) contrasts of about 1,000:1 aren't uncommon. OLEDs can achieve about 1,000,000:1 - Samsung even have a prototype with theoretically infinite contrast; apparently they're able to dim their OLEDs down quite well, though I suspect precision suffers at the lowest levels. And we both know this isn't the end of technological advance. An EVF's black level with a dimmed backlight can be essentially zero.
I couldn't find solid data on the eye's static dynamic range, but I suspect it's not that great after all - maybe 1,000:1 or 10,000:1. Its dynamic dynamic range - i.e. with time to adapt - is astonishingly high, though. But we probably wouldn't want an EVF that can push out sun-like brightness anyway.
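As a back-of-the-envelope check, those contrast ratios translate directly into photographic stops via a base-2 logarithm. A quick sketch (the helper name is mine) using the figures quoted above:

```python
import math

def contrast_to_stops(ratio: float) -> float:
    """Convert a contrast ratio (e.g. 1000:1 -> pass 1000) to photographic stops."""
    return math.log2(ratio)

for name, ratio in [("static LCD, 1,000:1", 1_000),
                    ("eye static (guessed), 10,000:1", 10_000),
                    ("OLED, 1,000,000:1", 1_000_000),
                    ("LED-backlit dynamic, 2,000,000:1", 2_000_000)]:
    print(f"{name}: {contrast_to_stops(ratio):.1f} stops")
```

So 1,000:1 static contrast is already about 10 stops in-scene, and 2,000,000:1 dynamic contrast is roughly 21 stops - which is why the dynamic (backlight) range, not the 8-bit panel depth, is the lever that matters here.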
Originally posted by falconeye A couple of weeks ago, I photographed a great Midsummer fire, with people's faces partly indirectly illuminated by the fire and their bodies creating great shadow shapes. All of it was clearly visible in the OVF, and I could recover part of it in RAW processing. But the rear screen showed nothing but black with a few bright blotches. Completely useless.
So you underexposed, and the rear screen reflected that. That's exactly what an EVF in exposure-preview mode would (correctly) show you. An EVF in "let me see something" mode would show much the same as your OVF did - though in extremely low light like this, today's LiveView implementations would be very grainy, very choppy/blurry, or both. My old Minolta Dimage 7i's (pretty crappy) ferroelectric EVF would switch to black and white in light like that.
I also don't quite see the connection to ruined night vision. Did your OVF ruin your night vision? Why would a good EVF necessarily be any brighter? You actually complained about the screen being too dark.