Originally posted by Kunzite: "That isn't true."
Indeed, perhaps I was a bit too quick and brief in stating that DSLRs only allow one frame as input for computation. Some technologies already implemented in our DSLRs that might be classified as computational photography, such as pixel shift, dark frame subtraction, in-camera HDR, ... do indeed expose multiple frames to produce a single image file resulting from some computation. All of these frames are taken *after* pressing the shutter release, however.
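Of the techniques listed, dark frame subtraction is the easiest to illustrate: the camera takes a second exposure with the shutter closed and subtracts it from the real one to cancel hot pixels and thermal noise. A minimal sketch in Python/NumPy (the function name and toy values are my own, not any camera's actual firmware):

```python
import numpy as np

def subtract_dark_frame(exposure, dark_frame):
    """Subtract a closed-shutter dark frame from an exposure to cancel
    hot pixels and thermal noise; clip so values stay non-negative."""
    # Work in a signed type so the subtraction cannot wrap around.
    result = exposure.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(result, 0, None).astype(exposure.dtype)

# Toy 2x2 sensor readout; the bottom-right pixel is a "hot" pixel
# that also shows up brightly in the dark frame.
exposure = np.array([[100, 120], [110, 250]], dtype=np.uint16)
dark = np.array([[5, 4], [6, 200]], dtype=np.uint16)
print(subtract_dark_frame(exposure, dark))  # → [[ 95 116] [104  50]]
```

Note that both frames here are captured after the shutter press, which is exactly the distinction made above.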
The point is that the mirror a DSLR places between the lens and the sensor prior to pressing the shutter release limits the possibilities in a way smartphones don't have to deal with. A DSLR cannot constantly refresh a buffer of a dozen or so exposures, ready for processing at the press of a button, the way smartphones do. I guess mirrorless cameras would or could have such a buffer too? DSLRs with translucent mirrors or sophisticated AF sensors/meters might be able to collect some useful information before exposure, but not at the same level of detail. Perhaps they could in Live View, but then what would be the added value of a DSLR compared to a mirrorless?
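The "constantly refreshed buffer" idea is essentially a rolling window over the live sensor feed: new frames push old ones out, and at shutter press the whole window is handed to the merge stage. A hypothetical sketch (class and method names are mine, not any vendor's pipeline):

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer of the most recent N frames, as a smartphone
    pipeline might maintain *before* the shutter is pressed."""

    def __init__(self, capacity=12):
        # deque with maxlen silently drops the oldest frame on overflow.
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        """Called for every frame of the live sensor feed."""
        self.frames.append(frame)

    def snapshot(self):
        """On shutter press: hand the buffered frames to the merge/HDR stage."""
        return list(self.frames)

buf = FrameBuffer(capacity=3)
for i in range(5):
    buf.push(f"frame-{i}")
print(buf.snapshot())  # → ['frame-2', 'frame-3', 'frame-4']
```

A DSLR with the mirror down simply never gets to call `push()`, which is the whole limitation described above; in Live View or on a mirrorless body, nothing in principle prevents it.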
Regardless, there may have been some justified(?) resistance to this "baking" in the early stages, when the technology was noticeably not yet perfected, but I don't think we're far from the point where we can no longer detect any negative effects of computation in our images. Perhaps we're already there with the K-3 III? I'll find out soon enough, I hope... Keeping an open mind to technological advances...
Wim