Originally posted by Erik: ISO amplification is performed at the analog stage, before the ADC, where there are no discrete bit values. The signal has something like infinite bit depth (there is, of course, a signal-to-noise ratio that lowers the quality of the signal even in the analog domain) until it is converted to digital.
Erik, as this topic is coming back -- and because we have a few posts to go until 2009 -- let me comment on this.
Just on your remark for now -- not on whether shooting at higher ISO makes sense, as that involves a couple more factors ...
First, I see exactly what you are getting at. And theoretically, you are right.
But there are two factors which totally destroy your argument:
- The quantization noise on the analog side, coming from the finite number of photons, a.k.a. "shot noise".
- The amount of noise in the signal after quantization. Of, say, 12 bits after quantization, about 4 bits are noise (at the per-pixel scale, K20D, ISO 200). These 4 bits are still valuable information, because they give an area of ~16x16 pixels a fine tonal rendition -- a sort of Floyd-Steinberg dithering (see the sketch after this list).
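To make the dithering point concrete, here is a minimal sketch. The sensor parameters (electron counts, read noise, gain) are made up for illustration and are not the K20D's actual calibration. It simulates a flat grey patch with Poisson shot noise and Gaussian read noise, quantizes to 12 bits, and compares the per-pixel error to the error of the 16x16 average:

    import numpy as np

    rng = np.random.default_rng(0)

    signal_e = 500.0      # mean photoelectrons per pixel (assumed)
    read_noise_e = 6.0    # read noise in electrons (assumed)
    gain = 0.25           # DN per electron (assumed)

    # 16x16 patch of a uniform grey surface: shot noise is Poisson,
    # read noise is Gaussian, both on the analog side
    photons = rng.poisson(signal_e, size=(16, 16)).astype(float)
    analog = photons + rng.normal(0.0, read_noise_e, size=(16, 16))

    dn = np.clip(np.round(analog * gain), 0, 4095)  # 12-bit quantization

    true_dn = signal_e * gain
    print(f"per-pixel error  ~ {np.abs(dn - true_dn).mean():.2f} DN")
    print(f"16x16 mean error ~ {abs(dn.mean() - true_dn):.3f} DN")

The per-pixel error is several DN, but the average over the patch lands well below one DN from the true level: the noise dithers the quantizer, so the area as a whole is rendered with finer tonal resolution than a single pixel's LSB.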
But, and this is why you are wrong: there is nothing to gain from amplifying before quantization only to end up with more bits to encode the noise. There are already enough of them.
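And a minimal sketch of why the extra bits buy nothing, again with made-up numbers: the analog noise is already several LSB wide, and we compare amplifying 4x before the quantizer against quantizing at unity gain:

    import numpy as np

    rng = np.random.default_rng(1)

    true_level = 100.3    # analog signal, in DN units (assumed)
    noise_dn = 4.0        # analog noise, several LSB wide (assumed)
    analog = true_level + rng.normal(0.0, noise_dn, size=(16, 16))

    pre_amp = np.round(analog * 4) / 4   # amplify 4x, quantize, undo the gain
    unity = np.round(analog)             # quantize directly at unity gain

    print(f"amplified before quantization: {pre_amp.mean():.3f}")
    print(f"quantized at unity gain      : {unity.mean():.3f}")

Both paths recover the patch level to within the scatter of the noise itself; the two extra bits in the amplified path encode nothing but a finer description of the noise.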
Addendum:
I concede that a dark pixel would have fewer bits in the noise. But still enough to encode the noise well.
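A back-of-envelope check of that, with assumed numbers (a deep-shadow pixel at roughly the same gain as above):

    import math

    signal_e = 20.0       # photoelectrons in a dark pixel (assumed)
    read_noise_e = 5.0    # electrons (assumed)
    gain = 0.25           # DN per electron (assumed)

    noise_e = math.sqrt(signal_e + read_noise_e**2)  # shot + read, in quadrature
    noise_dn = noise_e * gain
    print(f"noise ~ {noise_dn:.2f} DN ~ {math.log2(noise_dn):.1f} noise bits")

That comes out to ~1.7 DN, i.e. a bit under one "noise bit" -- fewer than in the midtones, but still on the order of an LSB, so the dithering mechanism keeps working.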
--
Maybe this is enough "science" to settle this side-track discussion. But I am not sure.
BTW, high ISO still makes sense to avoid extreme quantization noise (e.g., shooting at ISO 100 rather than 1600 would be bad if the image is underexposed even at 1600 ...) and to reduce some low-spatial-frequency banding artifacts.
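A last minimal sketch of that underexposure case, assumed numbers again: a shadow region shot 4 stops under, once quantized at base-ISO gain (think ISO 100) and pushed 16x digitally afterwards, once amplified 16x in the analog domain before the ADC (think ISO 1600):

    import numpy as np

    rng = np.random.default_rng(2)

    signal_e = 10.0       # photoelectrons in a deep shadow (assumed)
    read_noise_e = 3.0    # electrons (assumed)
    gain = 0.05           # DN per electron at base ISO, well under 1 DN/e- (assumed)

    e = rng.poisson(signal_e, size=(16, 16)) \
        + rng.normal(0.0, read_noise_e, size=(16, 16))

    iso100 = np.round(e * gain) * 16     # coarse quantization, 4-stop digital push
    iso1600 = np.round(e * gain * 16)    # 16x analog gain first, then quantize

    print(f"distinct shadow levels, ISO 100 pushed: {len(np.unique(iso100))}")
    print(f"distinct shadow levels, ISO 1600      : {len(np.unique(iso1600))}")

The unity-gain path collapses the shadows to a couple of posterized levels, while the pre-ADC gain preserves a properly dithered signal. The same mechanism, amplifying before the readout chain, is also what lifts the signal above some of the banding.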