Originally posted by biz-engineer: This is a method of spatial oversampling. The same technique is used in high-speed oscilloscopes, where the same signal is sampled by multiple channels, each with a sampling clock phase-shifted in small increments. For example, if the signal is sampled by 4 channels, each offset by a 90-degree phase shift, the time resolution is 4 times finer than the base sampling rate of the scope.
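The interleaving described above can be sketched numerically. A toy illustration, not scope firmware -- the channel count, rates, and test signal are all assumptions for the demo:

```python
import numpy as np

# Toy time-interleaved sampling (all numbers assumed): 4 channels each
# sample a 1 kHz sine at fs, with their clocks staggered by a quarter
# of the base sample period (90-degree phase steps).
fs = 10_000                      # per-channel sample rate, Hz
n_ch = 4                         # number of interleaved channels
n = 20                           # samples per channel

def signal(t):
    return np.sin(2 * np.pi * 1_000 * t)

t_base = np.arange(n) / fs
# Channel k's clock is delayed by k / (n_ch * fs).
channels = [signal(t_base + k / (n_ch * fs)) for k in range(n_ch)]

# Interleave the channel streams into one record at n_ch times the rate.
interleaved = np.empty(n_ch * n)
for k, ch in enumerate(channels):
    interleaved[k::n_ch] = ch

# The merged record matches sampling the signal directly at 4 x fs.
t_fast = np.arange(n_ch * n) / (n_ch * fs)
assert np.allclose(interleaved, signal(t_fast))
```

Each channel's ADC still runs at the slow rate; only the merged record sees the 4x effective rate, which is the same trick the multi-shot camera plays in space instead of time.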
How does that relate to super resolution from multiple aligned shots? It is the same technique as the oversampling used by high-speed scopes, except that the phase shift between shots is random rather than controlled, and each photo still needs to be equally sharp. So shooting at 1/(2*FL) would add some blur, which is not good at all. Shooting on a tripod is not a problem as long as the camera position is moved slightly for every shot. Also, even 20 shots is not sufficient to ensure evenly spread camera positions; 30 to 50 shots would be better to achieve super resolution on the basis of random camera position.
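The random-shift idea can be sketched as a shift-and-add toy model. Everything here is an assumption for illustration (the scene, the whole-fine-grid-step shift model, the 40-shot count); a real pipeline would also estimate the shifts from the images themselves and add a deconvolution/sharpening step, neither of which is shown:

```python
import numpy as np

rng = np.random.default_rng(0)
up = 4                                   # super-resolution factor (assumed)
lr = 32                                  # low-res sensor size (assumed)
scene = rng.random((lr * up, lr * up))   # toy "true" fine-grained scene

def capture(sy, sx):
    """One simulated shot: the camera moves by a sub-pixel amount
    (modelled as whole fine-grid steps), then each up x up block of
    scene detail averages down onto one sensor pixel."""
    shifted = np.roll(scene, (sy, sx), axis=(0, 1))
    return shifted.reshape(lr, up, lr, up).mean(axis=(1, 3))

# A few dozen shots at random sub-pixel offsets, as the post suggests.
shifts = [tuple(rng.integers(0, up, size=2)) for _ in range(40)]
shots = [capture(sy, sx) for sy, sx in shifts]

# Shift-and-add: push each low-res shot back onto the fine grid at the
# position implied by its shift, then average. (No deconvolution here,
# so the result stays softened by the pixel-aperture blur.)
acc = np.zeros_like(scene)
for (sy, sx), shot in zip(shifts, shots):
    up_img = np.kron(shot, np.ones((up, up)))   # nearest-neighbour upsample
    acc += np.roll(up_img, (-sy, -sx), axis=(0, 1))
result = acc / len(shots)
```

The random offsets only help if they actually cover the sub-pixel positions reasonably evenly, which is why a few dozen shots are needed rather than the 4 a controlled pixel-shift sequence gets away with.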
Between the constraint of a very stable tripod for a sequence of 4 pixel-shifted images and shooting a few dozen shots, I don't know which method is easier; IMO neither is as straightforward as a single shot.
Anyway, as I wrote in another thread, unless you are printing XXXL, more resolution is pretty useless for the rendering of an image compared to using a larger sensor. More resolution also makes the camera slower to operate, while a lower pixel count on a larger sensor brings better images at fast speeds.
I agree. My goal is to have the best single 36-megapixel file possible. From Pentax's standpoint, the only way to really achieve this is to shoot RAW pixel shift and then run the image through DCU or RawTherapee (RT works better) to generate a high-quality TIFF file. If Pentax cameras could write 16-bit TIFF files with excellent motion-correction masking already applied, that would be the best possible scenario, but currently they don't. Even then, probably only ten to twenty percent of landscape images actually benefit from pixel shift -- for most of them, a single extracted frame looks roughly the same and takes less work to achieve the same result.
My guess as to Sony's implementation is that it had to work around existing patents from the other brands that already do pixel shift and super resolution, and Sony had to figure out its own version without getting sued for patent infringement. It is certainly tough when you are the fourth or fifth brand to release a feature.
From a personal standpoint, I don't have a reason for more than a 36-megapixel image. I just don't print that big.