Originally posted by BrianR: In-body SR moves the sensor around, letting you grab slightly offset images. In-lens stabilization can move the projected image around, letting the sensor grab slightly offset images. On the surface they seem pretty equivalent for an application like this?
Does in-lens allow the same degree of control? (I'd expect so?) Maybe the camera has no input into what this generation of in-lens stabilization does (I've no idea!). I understand there are small optical penalties to swinging the stabilization element around, but we're really looking at tiny distances relative to what either stabilization system should be capable of (the same concern applies to in-body, but these sub-pixel shifts are tiny). Much of the heavy computing could probably be skipped, since you'd be sampling images in a pre-determined pattern.
As far as I can tell, the ability to move the sensor by precisely known, micron-scale distances in Pentax's system came with the development of the K-3's AA simulator. And one would need to move the sensor by known distances at the pixel-pitch level in order to deconvolute the resulting image files.
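For what it's worth, here's roughly how I picture the combining step: a minimal Python sketch, assuming a standard RGGB Bayer layout, four exposures offset by exactly one pixel pitch, and frames already registered to scene coordinates. The function name and offset sequence are my own illustration, not Pentax's actual processing.

```python
import numpy as np

# Hypothetical sensor-shift offsets (rows, cols) for the four exposures,
# chosen so every scene pixel gets sampled under R, G, G and B filters.
OFFSETS = [(0, 0), (0, 1), (1, 1), (1, 0)]

# Standard RGGB Bayer layout: colour filter at sensor site (row, col).
CFA = [['R', 'G'],
       ['G', 'B']]

def combine_pixel_shift(frames, offsets=OFFSETS):
    """Merge four single-pixel-offset Bayer frames into one full-RGB image.

    `frames` is a list of four 2-D arrays of raw values, already registered
    to scene coordinates (i.e. the geometric shift has been undone).
    """
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    green_count = np.zeros((h, w))

    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                # Which colour filter sat over this scene pixel in this exposure
                colour = CFA[(y + dy) % 2][(x + dx) % 2]
                if colour == 'R':
                    rgb[y, x, 0] = frame[y, x]
                elif colour == 'B':
                    rgb[y, x, 2] = frame[y, x]
                else:  # two of the four frames contribute green
                    rgb[y, x, 1] += frame[y, x]
                    green_count[y, x] += 1

    rgb[..., 1] /= np.maximum(green_count, 1)
    return rgb
```

The point being: the whole thing only works if each exposure lands exactly one pixel pitch from the last, which is why the known-distance control matters so much.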
I don't think in-lens stabilization systems know how many pixels of correction they are applying, only that the correction is the right amount for the measured movement. The lens doesn't even know the number of pixels in the camera, as far as I know. So at the very least, lens-camera communication would have to be added to 1) command correction-element movements that produce a known image displacement on the sensor, and 2) signal when it's OK to move to the next position in the sequence.
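To put a number on why the camera would have to tell the lens what to do: the image displacement corresponding to one pixel is just the pixel pitch, and that varies from body to body. A quick back-of-the-envelope sketch (the sensor figures are generic examples, not tied to any particular camera):

```python
# A one-pixel shift requires an image displacement equal to the pixel pitch,
# which the lens has no way of knowing on its own.

def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in microns."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# Example: a 24 MP APS-C sensor (~23.5 mm wide, ~6016 pixels across)
print(f"One-pixel shift = {pixel_pitch_um(23.5, 6016):.2f} um of image movement")

# The same lens in front of a 36 MP full-frame sensor (~35.9 mm, ~7360 pixels)
# would need a noticeably different movement for the same one-pixel shift:
print(f"...but {pixel_pitch_um(35.9, 7360):.2f} um on that sensor")
```

So without the camera feeding the lens its pixel pitch (or the lens just being told "move the image by X microns"), the lens can't hit pixel-exact positions.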