Originally posted by photoptimist We seem to disagree that:
anything done in-camera can be done in post processing
We don't disagree on this point.
It is obviously true that there are noise reduction approaches that can only be performed in-camera.
Where we disagree is in our estimation of whether this happens in the case of the K-1 II. According to the data I have been able to assemble, it is very unlikely that the accelerator makes use of any information that isn't already available in standard RAW files, or that couldn't be made available by adding some data to them.
Originally posted by photoptimist signal can't be distinguished from added noise
With all due respect, I think your idea of signal vs noise is based on the notion of a "strong signal" with identifiable features rising above a "weak noise" floor, and in my estimation such a perspective is too naive for high-ISO (read: low-light) scenarios.
We may also want to distinguish between "source noise" (the Poisson distribution of photon events), which one could consider to be part of the signal, and system-generated noise (from analog amplification, A/D conversion, etc.). How would you separate the "source noise" from the "system noise" (unless you gain access to sensor internals, which I understand to be off-limits to the accelerator chip)?
It is too bad that member falconeye has defected to Nikon, as he had some interesting thoughts about forgoing storage of the "source noise" altogether and adding it back later (thus gaining better data compression in storage). This could suggest that there are ways to separate noise from signal that are not harmful, but I'm not immersed enough in this subject to comment competently. My current level of involvement with the matter suggests to me that you'll find it extremely difficult to separate a weak signal from strong noise.
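To illustrate why the separation is hard, here is a minimal sketch (all numbers are hypothetical, chosen only for illustration): signal-dependent Poisson "source noise" and Gaussian "system noise" combine additively into the one value the chip actually observes, and their variances simply add. Nothing in a single observed pixel value tells you which component contributed what.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 9.0   # hypothetical low-light photon rate per pixel
read_sigma = 3.0     # hypothetical Gaussian read-noise sigma

n = 1_000_000
shot = rng.poisson(mean_photons, n).astype(float)  # "source noise" (part of the signal)
read = rng.normal(0.0, read_sigma, n)              # "system noise"
observed = shot + read                             # all a downstream chip ever sees

# Variances add: Var(observed) ~ mean_photons + read_sigma**2 = 18.
# The per-pixel observed value offers no handle to split it back
# into its shot and read components.
print(observed.var())
```

The only statistical lever is that shot-noise variance scales with the signal level while read noise does not, but exploiting that requires a model, not a per-pixel decomposition.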
Originally posted by photoptimist the chip is doing a "nearest neighbour"-type denoising approach
I'm not saying it is doing just that. I fully expect it to do something much more clever overall. However, the attenuation of high frequencies one can observe is consistent with the notion that some "nearest neighbour"-processing is part of the overall manipulation.
Originally posted by photoptimist Actually, if you look carefully, the FFTs are not consistent with this at all.
I did spot features in the FT plots (FFT= Fast Fourier Transform / FT = Fourier Transform: The first "F" refers to an implementation choice which is irrelevant for the outcome) that are not consistent with a simple pure "nearest neighbour" processing. That is absolutely correct, but I never stated that the processing just amounted to a simple "nearest neighbour" smoothing. What I always meant to communicate is that a large part of the FT plot features is consistent with a "nearest neighbour"-type processing component.
In any event, to a RAW purist (who has good reasons for their principles, other than being pedantic) any kind of processing that isn't reversible is a problem; we need not fight over the specifics of the processing as long as we agree that it is destructive in some sense. I'm not sure you'd agree to that, but I don't know how you can explain why the 2D FT plots show high frequency attenuation while not destroying information. I'm making the (very reasonable) assumption that the processing cannot distinguish between noise and weak, almost random signal, and thus destroys signal along with noise.
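The high-frequency attenuation I'm describing is easy to reproduce. Below is a minimal sketch (not the accelerator's actual algorithm, just a stand-in 3x3 box average for the "nearest neighbour"-type component): applying it to a random field and comparing 2D FFT power in the high-frequency band shows the attenuation directly.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))  # synthetic "signal + noise" field

# Stand-in "nearest neighbour" component: 3x3 box average (wrap-around edges)
kern = np.ones((3, 3)) / 9.0
pad = np.pad(img, 1, mode="wrap")
smooth = sum(pad[i:i + 64, j:j + 64] * kern[i, j]
             for i in range(3) for j in range(3))

def high_band_power(x, frac=0.5):
    """Mean FFT power outside a radius of frac * Nyquist."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(x))) ** 2
    yy, xx = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32, indexing="ij")
    r = np.hypot(yy, xx)
    return f[r > frac * 32].mean()

# The smoothed field has visibly less power in the high-frequency band.
print(high_band_power(smooth) < high_band_power(img))  # True
```

Any such attenuation is not invertible in the presence of quantization and noise, which is exactly the RAW purist's complaint.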
Originally posted by photoptimist Noise has independent differences that are statistically bounded to modest values. Image detail has structured differences (and similarities) and are potentially unbounded in value.
I believe this is an example of where your view on signal and noise may be too simplistic.
Have a look at this visual demonstration of signal surviving noise levels that exceed the signal level. Interestingly, the blog entry makes reference to our friends at DPReview, who apparently espoused the untenable notion that signal is only meaningful as long as SNR >= 1.
Note that the "signal" in the chosen examples is pretty regular, which nurtures hopes of extracting it by looking at pixel correlations; in general, however, the signal could be more random, and I dispute the notion that any signal that looks like noise is noise.
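That signal survives below SNR = 1 is easy to verify numerically. Here is a minimal sketch (my own toy example, not the blog's): a sine pattern buried under noise five times stronger is invisible in any single frame, yet emerges cleanly when frames are stacked, because the noise averages down as 1/sqrt(N) while the signal does not.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 8 * np.pi, 256))  # weak pattern, amplitude 1
noise_sigma = 5.0                                # per-frame SNR well below 1

frames = signal + rng.normal(0.0, noise_sigma, (400, 256))
stacked = frames.mean(axis=0)  # noise shrinks as 1/sqrt(400) = 1/20

# Correlation with the true signal: weak per frame, strong after stacking.
print(np.corrcoef(frames[0], signal)[0, 1])  # small
print(np.corrcoef(stacked, signal)[0, 1])    # close to 1
```

The information was in the noisy frames all along; SNR >= 1 per sample is simply not a prerequisite for detectability.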
Originally posted by photoptimist As much as I totally respect MJKoski's photographic skills and applaud his efforts to push cameras to their limits, I'm not confident of his conclusions because I fear that his choices of camera settings and RAW developer may be contributing to the problem.
Possibly true.
A thorough analysis of the problem would have to go beyond anything that has been done by anyone so far.
Originally posted by photoptimist I like this test!
Thanks!
Do you think I should propose it elsewhere more prominently so that there is some hope that someone may conduct the respective experiment?
Originally posted by photoptimist What's interesting is that the phenomenon of Stochastic resonance - Wikipedia implies that noise will occasionally help reveal the smoothed low-amplitude signal in some images of the stack.
I'd say that is only remotely relevant to the discussion at hand, as stochastic resonance is about passing thresholds that would otherwise block the signal entirely. As such it seems related to the idea of "dithering" used in audio applications to avoid systematic quantization errors. However, a possible takeaway from this area is that there is "good noise", i.e. that by removing noise at the wrong stage or in the wrong way, you can introduce non-linearities that give rise to bothersome artefacts that some people refer to as a "digital signature".
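The dithering point can be made concrete with a minimal sketch (toy numbers, not an audio-grade implementation): quantizing a slow ramp without dither produces a staircase whose error is a deterministic function of the signal, while adding 1-LSB uniform dither before quantizing turns that error into zero-mean noise that a simple low-pass filter averages away.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 10_000)  # slowly varying "signal"
step = 0.25                        # deliberately coarse quantizer

hard = np.round(x / step) * step   # no dither: staircase locked to the signal
dith = np.round((x + rng.uniform(-step / 2, step / 2, x.size)) / step) * step

# Low-pass both versions and compare against the low-passed truth.
# With dither the quantization error has ~zero mean at every signal level,
# so smoothing recovers the ramp; without dither the systematic staircase
# error (the "digital signature") survives the smoothing.
w = 200
box = np.ones(w) / w
truth = np.convolve(x, box, mode="valid")
err_hard = np.abs(np.convolve(hard, box, mode="valid") - truth).mean()
err_dith = np.abs(np.convolve(dith, box, mode="valid") - truth).mean()
print(err_dith < err_hard)  # True
```

The analogy to denoising is that removing (or suppressing) noise at the wrong stage can leave exactly this kind of signal-correlated residue behind.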