Originally posted by glanglois Any idea how effective this approach is? And how well it discriminates between subject motion and camera motion? And what "blur volume" might be? And why one would prefer this approach to optical image stabilization?
I expect that there's some interesting math involved in the computations - math that is carried out (at least partially) in the ASIC to reduce calculation time. ASICs sound expensive unless they're COTS (Commercial Off The Shelf).
Can someone explain this without calculus?
I can't really answer most of your questions, but as to why this approach was taken instead of OIS: I reckon it's cheaper to ship a piece of code, whose R&D man-hour cost can be spread across a mass of cameras, than real hardware that adds to the manufacturing cost of every unit. And once the code is in place, it can be carried over to future cameras at zero cost (or minimal cost if they decide to improve it).
It'll be interesting to see how it works in the real world, whether it's effective, and whether it degrades image quality by a big chunk.
On a semi-related note, the new iMovie from Apple also includes an option to apply software-based image stabilization to movies in post-production. Perhaps software-based stabilization is becoming a big thing nowadays.
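For anyone curious what software stabilization looks like under the hood, here's a toy sketch of one common building block: estimating the global (camera) shift between two frames with phase correlation, then shifting each frame to cancel it. This is my own illustration, not Apple's or any camera maker's actual algorithm; real stabilizers also handle rotation, rolling shutter, subpixel motion, and the subject-vs-camera-motion problem glanglois asked about, which a pure global estimate like this can't distinguish.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) translation between two grayscale frames.

    Phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum marks the global offset between the frames.
    """
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12  # keep only phase; avoid divide-by-zero
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT indices wrap around, so map large positive peaks to negative shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stabilize(frames):
    """Align every frame to the first one by undoing the estimated shift."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return out
```

A hardware ASIC doing this in-camera would essentially be running the FFTs and peak search in silicon instead of in post, which is presumably where the speed (and the cost) comes from.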