I was a bit loose in my explanations, so let's see if this helps...
Originally posted by Marc Sabatella:
Originally posted by Quension: Amplifying the signal in analog form results in the A/D converter having a different range to quantize, resulting in a different and more precise set of values on output.
I still don't see where "more precise" comes from.
Let's say the analog signal has a nominal range of 0 to 4V (AFAIK the real values are ridiculously small, but those details are beyond my knowledge), and the ADC's output is 12 bits (4096 values). Each output unit of the ADC therefore represents about 1mV.
We'll say the sensor is 2 stops underexposed, so the signal peaks at 1V. Only the bottom 10 bits of the ADC's output contain data, giving a range of 1024 values. Digitally amplifying this will still only leave us with 10 significant bits of image data.
Using an analog linear amplifier to boost the input to the ADC to peak at 4V will result in each ADC output value representing 250uV of the original signal, and the entire range of 4096 possible values will be restored. This is four times more precise than the same data digitally amplified.
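Here's a rough sketch in Python of that arithmetic (the 0-4V range, 12-bit depth, and perfect 4x amplifier are just the assumed numbers from my example, not real sensor specs):

```python
import numpy as np

FULL_SCALE_V = 4.0             # assumed ADC input range from the example
BITS = 12                      # assumed ADC output depth
LEVELS = 2 ** BITS             # 4096 output codes
LSB_V = FULL_SCALE_V / LEVELS  # ~1mV of signal per code

def adc(signal_v):
    """Quantize an analog voltage to a 12-bit code."""
    return np.clip(np.round(signal_v / LSB_V), 0, LEVELS - 1).astype(int)

# A smooth ramp standing in for image data, 2 stops under (peaks at 1V).
scene = np.linspace(0.0, 1.0, 100_000)

# Digital amplification: quantize first, then multiply by 4.
digital = adc(scene) * 4
# Analog amplification: multiply by 4 first (perfect amp), then quantize.
analog = adc(scene * 4.0)

print(len(np.unique(digital)))  # ~1024 distinct values (10 significant bits)
print(len(np.unique(analog)))   # ~4096 distinct values (full 12 bits)
```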
Quote: I guess if we assume that the A/D converter was actually capable of generating more bits of precision than is being stored in the RAW file - as is supposedly the case with the K10D - then I might believe that amplifying the signal could result in more precision - or at least a better guarantee that the least significant bit of the RAW data actually had any real significance.
I don't follow your reasoning here. What I meant with the 22-bit ADC is that, in theory, it would be the perfect digital amplifier: the input signal is converted at maximum precision in one shot, so you can slice the 12 bits of output from any appropriate section of it without losing anything -- just chop off the bits you don't need.
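To illustrate the bit-chopping (the sample reading and window positions here are made up for illustration; a real raw pipeline is more involved):

```python
BITS_OUT = 12

def slice_bits(code22, shift):
    """Take a 12-bit window from a 22-bit ADC code.
    shift=10 keeps the top 12 bits (the "unamplified" value);
    smaller shifts read further down, i.e. amplify a weaker signal.
    Any set bits above the window would be clipped highlights."""
    return (code22 >> shift) & ((1 << BITS_OUT) - 1)

sample = 0b0000110110101101100101  # an arbitrary 22-bit reading

print(slice_bits(sample, 10))  # top 12 bits: 218
print(slice_bits(sample, 8))   # 2 bits lower: 875, i.e. ~4x the value
                               # plus two extra bits of real precision
```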
(I don't know why the K10D had a 22-bit ADC. Pentax was proud of it at the time, but I believe the later cameras all use 14-bit ADCs. Some web searching just now suggests the extra bits come from oversampling, used primarily to reduce noise from the sensor-digitizing hardware itself, but I can't find an authoritative reference for that. The company that supposedly made the chips has mysteriously lost its web presence, and I don't have the engineering background to fill in the blanks myself.)
Quote: But this would still require still trusting that this analog amplification is perfect, would it not?
At least close enough to perfect that it can faithfully scale the weaker parts of the input signal to a range the ADC can handle, yes.
Quote: OK, I can also see that the very least significant bit in the original (unamplified) data would have been the result of rounding, and digital amplification now "promotes" this a place or two. Assuming it was rounded *correctly* in the first place, though, we're still rounded correctly in the bit-shifted result, meaning we're off by at most half the amount represented (e.g., if we're now talking about 100b = 4, we're off by no more than 10b = 2). So I'd still need to see some assurance that the analog amplifier was capable of doubling the signal accurately enough to guarantee better accuracy than this.
Right. In practice I gather this is mostly true for the lower ISO levels in many cameras, but higher ISOs are apparently dominated by both amplifier and sensor noise.
Also keep in mind that due to the logarithmic nature of exposure, "half" is a significant amount for the darker tones (as we perceive them), which are often important when push processing. The brighter tones aren't affected nearly as much, because they start out with such a huge range of gradations that we normally don't change them enough to notice any loss of precision.
(This is the point that's most relevant to ETTR.)
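To put numbers on that: in a linear 12-bit encoding, each stop down from clipping gets half as many code values, so a one-code rounding error is a much bigger slice of a shadow tone than of a highlight. A quick sketch (idealized linear sensor assumed):

```python
BITS = 12
top = 2 ** BITS  # 4096 codes at clipping

# Code values available in each stop below clipping, brightest first.
for stop in range(6):
    hi = top >> stop
    lo = top >> (stop + 1)
    print(f"stop -{stop}: codes {lo}..{hi - 1} -> {hi - lo} levels")

# The brightest stop gets 2048 levels; five stops down gets only 64,
# so losing a bit or two costs the shadows far more visible precision.
```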
Quote: the specifics of which method works better would seem likely to depend on all sorts of factors like the specific sensor, a/d converter, the amount of amplification, and the nature of the data being amplified.
This is where the analyses by GordonBGood and Oleg_V come in. Sadly the details are still over my head.
Quote:
Quote: If the actual noise level was below that point but caused the rounding, the analog amplifier just gained you an extra bit (or more) of real signal to work with.
Like I said, for some reason I didn't quite follow your example, but it seems possible you are here saying the same thing I just did? That an analog amplifier *could* improve on digital, but the results actually depend on the specifics?
Yes. Using my new example above, let's say the median noise level from the sensor is about 250uV, and the amplifier is perfect. The least significant bit may then be rounded at ISO 100 and 200, but we can resolve the noise itself at ISO 400. Beyond that it wouldn't matter whether it was amplified in analog or digital form, because the brighter tones are already as gradated as we can resolve, and the darkest tones are buried in noise, so the bottom bits are useless anyway.
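Sketching those numbers (the 250uV noise floor is the assumed value from above, ISO 100 is taken as unity gain, and the amplifier is assumed perfect):

```python
NOISE_V = 250e-6    # assumed median sensor noise from the example
LSB_V = 4.0 / 4096  # ~1mV of original signal per ADC code at unity gain

for iso, gain in [(100, 1), (200, 2), (400, 4), (800, 8)]:
    # Analog gain shrinks the slice of original signal per ADC code.
    effective_lsb = LSB_V / gain
    status = ("noise resolved" if NOISE_V >= effective_lsb
              else "noise below 1 LSB (rounded away)")
    print(f"ISO {iso}: {effective_lsb * 1e6:.0f} uV/code -> {status}")

# At ISO 100/200 the noise sits below one code and gets rounded away;
# from ISO 400 up, each code step is no bigger than the noise itself,
# so further analog gain can't reveal any more real signal.
```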