Originally posted by Class A:
I don't see how that is a good analogy.
- The effects of the "accelerator" unit are measurable in an objective manner. Nobody claims that one can only see them when one is trained to see them and is relaxed, or that A/B comparisons are unsuitable for confirming a real effect. None of this voodoo argumentation is needed in the case of the "accelerator" unit.
- If someone wants to pay through the nose for no real effect but feels happier afterwards because they are convinced they have improved their life, good for them. Who are we to deprive them of their newly gained happiness? The case of mandatory denoising, however, is an entirely different one. Mandatory denoising affects everybody. One cannot simply opt out and let other people decide what they want to do.
The real analogy would be if a certain brilliant turntable were only sold with built-in noise reduction which removes surface noise, clicks, pops, noise on the recording, etc. Of course the music would inevitably be affected as well, but some would argue that you cannot hear it, etc. Yet, measurements would be available that show how the frequency response is affected by the built-in noise reduction.
Haha, okay. My analogy was tongue in cheek (and sure, I won't judge anyone who is convinced he can improve things by spending some money), but this is where it's headed. The beauty, or pitfall, of social media is that facts can be spun to tell different stories. They might not be wrong, but they might support different ideas. If you prefer your "purity" approach, that is fine with me.

I see the processing pipeline as a complex system involving a lot of hardware and software algorithms. These are proprietary systems, and several steps happen after the light arrives at the sensor through filters > lens > (AA filter) > Bayer filter > microlens: equalisation of Pixel Non-Uniformity (maybe already influencing/"correcting" the lens vignetting?), elimination of signals from stuck pixels, and dark floor noise subtraction (from the ring of pixels around the used image area) are all done before anything like a RAW file is created. Every manufacturer does this signal processing differently, even when they use similar hardware components. Obviously Ricoh decided to include a so-called accelerator unit to support the main processor. The results from cameras with the same sensor thus vary greatly! I consider these differences far larger, by a very large margin, than anything that might be visually detectable between a K-1 and the Mk II.

Despite the "measurements", we still have no clue at which stage, or under which circumstances, this unit has an effect. The tests were done with "synthetic" images, so the extrapolated "knowledge" that it kicks in at a certain ISO is just an assumption. It might show a higher impact at a certain noise floor in the data, or depend on some other parameters. I don't think a hardware switch is something to look for; it is probably not the case that a wired bypass could exist.
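Just to illustrate the kind of pre-RAW steps I listed above, here is a rough sketch (purely illustrative, using NumPy; all function and parameter names are made up, and real manufacturer pipelines are proprietary and certainly far more involved than this):

```python
import numpy as np

def preprocess_sensor_data(raw, dark_floor, prnu_gain, stuck_mask):
    """Illustrative sketch of pre-RAW processing steps.

    raw        : 2D array of sensor readings
    dark_floor : black level, e.g. estimated from the ring of masked
                 pixels around the active image area
    prnu_gain  : per-pixel gain map for Pixel Non-Uniformity equalisation
    stuck_mask : boolean map flagging stuck pixels
    """
    # Dark floor noise subtraction.
    corrected = raw.astype(np.float64) - dark_floor

    # Pixel Non-Uniformity equalisation: per-pixel gain correction
    # (a manufacturer might fold lens vignetting correction in here too).
    corrected *= prnu_gain

    # Stuck-pixel elimination: replace each flagged pixel with the
    # median of its local neighbourhood.
    for y, x in zip(*np.nonzero(stuck_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, corrected.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, corrected.shape[1])
        corrected[y, x] = np.median(corrected[y0:y1, x0:x1])

    # Clamp negative values left over from the subtraction.
    return np.clip(corrected, 0, None)
```

The point is only that several corrections of this kind already happen, in some manufacturer-specific order, before anything we call "RAW" exists.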
Cameras are produced by manufacturers to deliver pleasing images to customers. They are not specified as scientific devices for collecting photons.
If the K-1 II failed at that first task, I could understand an outcry and a wish for the designers to reconsider their design choices.
But, and please understand me correctly, I think it might be a somewhat risky tactic to pin your wishes for this company's future cameras on whether or not they contain one particular component of the current processing pipeline. They may have completely redone it and fine-tuned it to the extreme, and the spec sheet would still say it uses an accelerator unit. And then? Would you still hold to your "pureness" arguments?
Disclaimer: I have absolutely no intention to discourage a discussion about such concerns. I am just adding my own concerns.