Originally posted by photoptimist There's zero evidence of "smoothing" in bclaff's analysis because bclaff's analysis does not look at an image with any signal in it.
Of course there is evidence of "smoothing", even if one takes the -- in my view untenable -- position that only noise is smoothed out.
Perhaps we can find common ground by agreeing that there is proof of image processing. You seem to be of the opinion that such processing of RAW files is uncontentious. I disagree, but be that as it may, it is clear that some people will take exception to having their RAW data processed for them. These people hold the view that external computing power and future processing methods are, or will be, superior to anything an in-camera solution can provide, as impressive as the latter may seem by today's standards.
It is not only RAW purists we have to consider, though; we also have DPReview as a known entity, and their negative response regarding the accelerator could easily have been avoided by making the processing optional.
It may turn out -- although given the current evidence I find this to be extremely unlikely -- that the accelerator processing is not some after-the-fact beautification but instead reduces system-generated noise only. In that case DPReview would have to send a huge apology to Japan, and many posters here (including me) would have to acknowledge that they drew the wrong conclusions. However, even considering this hypothetical scenario, I cannot understand why Ricoh invites major (mainly DPReview-induced) trouble by making the processing mandatory.
Originally posted by photoptimist There's only evidence of noise reduction.
True "noise reduction", as opposed to after-the-fact "denoising", would not produce the frequency analysis results we've seen.
Again, given the source of the images (Rishi Sanyal), we cannot be certain whether due rigour has been applied when creating them, but it seems unlikely that he messed this up.
Originally posted by photoptimist There's an assumption that noise reduction requires smoothing but that is unproven and may be false.
Of course not all noise reduction requires smoothing. But the results of the accelerator processing are consistent with a "nearest neighbour"-type denoising approach.
You wouldn't get the 2D FT plot characteristics we see for higher-ISO K-1 II images if the noise reduction had been achieved by dark-frame subtraction, for instance.
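To illustrate the point, here is a toy simulation (entirely synthetic white noise, not actual K-1 II data; the 3x3 box blur is merely a stand-in for whatever smoothing the accelerator may apply): smoothing leaves a clear fingerprint in the 2D spectrum of a pure-noise frame, while dark-frame subtraction does not.

```python
# Toy simulation: spectral fingerprint of smoothing vs dark-frame subtraction.
import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)
noise = rng.normal(0.0, 1.0, shape)   # white sensor noise
dark = rng.normal(0.0, 1.0, shape)    # an independent dark frame

# Smoothing: circular 3x3 box blur, applied via FFT-based convolution.
kernel = np.zeros(shape)
kernel[:3, :3] = 1.0 / 9.0
smoothed = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(kernel)))

# Genuine noise reduction that involves no smoothing at all.
subtracted = noise - dark

def hf_fraction(img):
    """Fraction of spectral power above half the Nyquist frequency."""
    power = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return power[r > 0.25].sum() / power.sum()

# White noise has a flat spectrum; smoothing depresses the high
# frequencies, dark-frame subtraction leaves the spectrum flat.
print(f"raw noise        : {hf_fraction(noise):.2f}")
print(f"after smoothing  : {hf_fraction(smoothed):.2f}")
print(f"after dark frame : {hf_fraction(subtracted):.2f}")
```

If the K-1 II spectra really came from dark-frame-style subtraction, the high frequencies would be intact; the attenuation we see is what the smoothing branch of this toy model produces.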
Originally posted by photoptimist Attenuation of higher spatial frequencies in a noise image says nothing about attenuation of higher spatial frequencies of a signal if the filter is a non-linear.
How do you suggest any processing could distinguish signal from noise?
That distinction is only possible at a stage where signal and noise have not yet been mixed. Given the design of modern Sony sensors, it is, to the best of my knowledge, impossible for any external "accelerator" processor to access system-generated noise independently from the signal.
Perhaps Ricoh does something clever involving dark frames, etc., but why would such genuinely useful noise reduction result in the attenuation of higher frequencies (i.e., "loss of detail") that is typical of after-the-fact denoising?
I fully acknowledge the possibility that Ricoh found some non-linear processing that retains quite a bit of detail, provided it is recognised as detail. This clearly seems to be the case with the non-smooth transitions between sharp detail and mushy background that MJKoski takes issue with, and with the apparent sharpening that seems to occur in some K-1 II files. The problem is that Ricoh's algorithm will be more or less successful at distinguishing suspected detail from noise, depending on the image. It is absolutely great that Ricoh provides this processing, but it just has to be optional. There are no two ways about it.
Note that I do not know whether all the issues MJKoski has seen, and the apparent sharpening, are artefacts of developing RAW files with converters that haven't been tuned correctly or aren't yet optimised for handling K-1 II files. This is possible, and AFAIC the jury is still out on what is really happening. However, it makes sense to make some educated guesses, and my money wouldn't be on the scenario that Ricoh has found a way to considerably improve the performance of Sony sensors without disadvantages of any kind.
Originally posted by photoptimist The only signal that is in danger of attenuation from these kinds of filters is signal of an amplitude so low that it's indistinguishable from noise.
It has been shown in the audio domain that very weak signals can be retained in a noise floor of much higher amplitude. You can think of the very weak signal as modulating the comparatively much stronger noise.
Again, Ricoh's algorithm may be really good at discovering even such low signal levels, but
- there can be signal that is impossible to distinguish from noise (-> extremely small dust specks?!?).
- future algorithms will probably be even better, but only if they are fed the original data, not something that has been processed already.
Originally posted by photoptimist But if the signal is indistinguishable from noise, it will also be lost in an image without NR anyway.
I dispute that view (see above).
I would expect RAW image stacking (as is common in astrophotography) to be more successful with high-ISO K-1 files than with high-ISO K-1 II files. The K-1 files should retain the real signal within the potentially high noise floor, whereas the K-1 II files will most likely have smoothed it away (mistaking it for noise). As a result, stacking high-ISO K-1 II images should not recover the signal as well as stacking K-1 images would.
Perhaps the above hypothesis regarding high-ISO RAW image stacking can serve as an experimental design that someone with access to both a K-1 and a K-1 II could carry out?
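Until someone runs the real experiment, the expected outcome can at least be sketched in simulation. All assumptions here are mine: the accelerator is modelled as a plain per-frame 3x3 blur, and the "scene" is a one-pixel checkerboard, i.e. detail right at the Nyquist limit.

```python
# Simulated stacking experiment: fine detail at the Nyquist limit, buried
# in heavy noise, recovered by averaging many frames.
# Assumption: the K-1 II accelerator is modelled as a simple 3x3 box blur.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n = 200, 128
yy, xx = np.indices((n, n))
target = 0.1 * (((xx + yy) % 2) * 2.0 - 1.0)   # 1-pixel checkerboard
sigma = 1.0                                     # noise 10x the detail amplitude

def blur3(img):
    """Circular 3x3 box blur standing in for the in-camera smoothing."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / 9.0

k1_stack = np.mean(
    [target + rng.normal(0.0, sigma, (n, n)) for _ in range(n_frames)], axis=0)
k1ii_stack = np.mean(
    [blur3(target + rng.normal(0.0, sigma, (n, n))) for _ in range(n_frames)], axis=0)

def detail_amp(img):
    """Amplitude of the checker pattern left in the stacked image."""
    return float(np.mean(img * np.sign(target)))

print(f"detail surviving the K-1 style stack   : {detail_amp(k1_stack):.3f}")
print(f"detail surviving the K-1 II style stack: {detail_amp(k1ii_stack):.3f}")
```

If the real cameras behave anything like this model, stacked K-1 II files would show measurably less fine detail than stacked K-1 files at the same ISO, which would be straightforward to verify with a test chart and a tripod.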