Originally posted by Rondec With regard to hardware, it is entirely possible that (a) the processor in older cameras and (b) the shake reduction in older cameras are just not up to the task. The shake reduction in the K5 IIs can not be used to simulate an AA filter, while it can on newer cameras. I wouldn't blame Pentax for that. It probably just is what it is.
Beyond which, I don't know that I would pay extra for this. It is something that will increase color depth, but not resolution in situations where you have a static scene.
It is not a question of hardware or processing power.
Every camera on the market has a buffer capable of holding at least four raw images, so a single-frame implementation could be done on any body that has shake reduction. Given that the required movement is a single sensor pitch (perhaps 10 microns on a 6 MP *istD), it might even be possible using the sensor-cleaning shift mechanism, which has always been present.
The bigger issue, I think, relates to the shutter. Are we discussing four separate frames combined in post-processing, HDR-style, or are we discussing performing the shift during a single exposure?
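The four-frame, combine-in-post variant can be sketched roughly as follows. This is only an illustrative toy, not anyone's actual firmware: the function name, the RGGB pattern, and the one-photosite offsets (0,0), (0,1), (1,0), (1,1) are all assumptions, and real raw decoding, black-level handling, and alignment are omitted. The point is just that with four shifted Bayer mosaics, every photosite ends up sampled once through red, twice through green, and once through blue, so full colour is recovered without demosaicing interpolation.

```python
import numpy as np

# Hypothetical RGGB colour filter array layout: (row % 2, col % 2) -> channel.
CFA = {(0, 0): 'R', (0, 1): 'G', (1, 0): 'G', (1, 1): 'B'}
CH = {'R': 0, 'G': 1, 'B': 2}

# Assumed one-photosite sensor offsets for the four exposures.
OFFSETS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def combine_pixel_shift(frames):
    """Toy combiner: merge four one-pixel-shifted Bayer mosaics
    into a full-colour image. `frames` are 2-D arrays in OFFSETS order."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for frame, (dy, dx) in zip(frames, OFFSETS):
        for (cy, cx), ch in CFA.items():
            # Pixels whose effective CFA position, after this shift,
            # is (cy, cx) contribute to channel `ch`.
            mask = (((rows + dy) % 2) == cy) & (((cols + dx) % 2) == cx)
            rgb[..., CH[ch]] += np.where(mask, frame, 0.0)
            count[..., CH[ch]] += mask
    # Green is sampled twice per site, red and blue once; average.
    return rgb / np.maximum(count, 1)
```

Green ends up averaged over two samples per site, which is where the improvement in colour depth and noise comes from; resolution only improves relative to demosaiced output, not in absolute sensel count.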
To me, regardless of the method, a good deal of blind faith will be involved with respect to image sharpness, because nothing is ever that still.
How much real benefit would this give compared with, for example, adding luminance-only sensels between the present GRGB pattern? Food for thought.