Originally posted by Fenwoodian Interesting...
Paraphrasing - the optimum hand-held, dynamic pixel shift motion would be: an odd number of pixels shifted, angled (or curving) motions, and smaller overall motions. I don't think that many of us can do this type of complex motion consistently. So, would it be fair to say that because one can't consistently move optimally, it's probably a good idea to shoot multiple shots of any dynamic pixel shift subject? By taking multiple shots, aren't you increasing your odds of achieving optimum movement and better super-res images?
I wonder if the "continuous" shooting setting is available when in the dynamic pixel shift mode?
It's true that few of us will ever twitch the camera optimally. Yet chances are that every processed output pixel location will sit on top of, or overlap with, at least one red, one green, and one blue sensel from one or another of the four frames. People may be unlikely to replicate the full resolution boost of the optimum pixel shift motion, but those complex hand-held motions are still likely to deliver a huge resolution boost over a single frame.
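Here's one way to see why overlap helps so much: with a non-integer (sub-pixel) shift, each output location straddles a 2x2 block of sensels, and on a Bayer mosaic every possible 2x2 block contains all three colors. A quick sanity check of that claim (my own sketch; the RGGB layout is an assumed sensor pattern, not something stated in the thread):

```python
# Sketch: in an RGGB Bayer mosaic, every 2x2 neighborhood of sensels
# contains at least one R, one G, and one B sensel. So an output pixel
# location that partially overlaps four neighboring sensels (i.e. a
# sub-pixel shift) touches all three colors even in a single frame.
BAYER = {(0, 0): 'R', (0, 1): 'G', (1, 0): 'G', (1, 1): 'B'}

def colors_in_2x2(row, col):
    """Colors of the 2x2 sensel block whose top-left corner is (row, col)."""
    return {BAYER[((row + dr) % 2, (col + dc) % 2)]
            for dr in (0, 1) for dc in (0, 1)}

# Every possible 2x2 window alignment covers all three colors.
for r in range(2):
    for c in range(2):
        assert colors_in_2x2(r, c) == {'R', 'G', 'B'}
print("every 2x2 Bayer window contains R, G, and B")
```

The catch, as the berry example below illustrates, is that "overlapping a little" is weaker than "landing on top of": a sliver of overlap dilutes the red signal rather than recording it cleanly.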
And, yes, taking multiple shots helps.
---------- Post added 04-13-18 at 07:49 PM ----------
Originally posted by monochrome @ photoptimist Can’t the processor simply crop a few pixels off the edges and align the remaining slightly cropped section optimally?
Alas, no. The alignment is determined by the random motion of the photographer's hand. The camera measures that alignment with the SR sensors, but it doesn't fully control it as it does in traditional pixel shift. The camera has to use the data it got, which may not be aligned as optimally as a traditional pixel shift image.
If we think about resolving some tiny one-pixel red dot in the scene (imagine a tiny red berry in a very distant fruit tree), we need red-sensitive pixels to see both the berry and the adjacent non-berry surroundings.
In a single-shot image, there's a 75% chance of missing the berry entirely because only 25% of the sensels on the sensor can see red light. If the light from that berry falls on a green or blue sensel, that sensel sees nothing (dark in the green or blue channel). And with the surrounding sensels seeing only green leaves, the final image would probably show a green or darkish-green spot among the leaves where the unseen berry lies.
In a standard pixel shift image, the camera moves the sensor so that red sensels are guaranteed to visit every bit of the image. The result is a beautiful 1-pixel red berry surrounded by green.
In the new hand-held, dynamic pixel shift process, the natural motion of the photographer moves the sensor between the frames, but there's always some chance that a red sensel never lands on the part of the scene with the red berry. Roughly speaking, there's about a 30% chance (0.75^4 ≈ 32%) that none of the four frames catches the berry, although the result is still likely to show some hint of it in the green.
To summarize:
* the single-shot image has only a 25% chance of resolving the red berry
* the traditional pixel shift image has a 100% chance of resolving the red berry
* the dynamic pixel shift image has about a 70% chance of resolving the red berry
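Those rough numbers fall out of simple independent-trial arithmetic. A back-of-envelope sketch (my own, assuming each of the four frames independently gives a red sensel a 1-in-4 chance of landing on the berry):

```python
# Odds that a red sensel lands on the one-pixel berry.
# Assumption: 1 in 4 Bayer sensels is red, and each of the four dynamic
# pixel shift frames is an independent random alignment.
p_red = 0.25

single_shot = p_red                   # one frame, one chance
traditional = 1.0                     # controlled shifts guarantee coverage
dynamic = 1 - (1 - p_red) ** 4        # at least one hit in four frames

print(f"single shot: {single_shot:.0%}")   # 25%
print(f"traditional: {traditional:.0%}")   # 100%
print(f"dynamic:     {dynamic:.1%}")       # 68.4%, i.e. "about 70%"
```

The four frames are of course not truly independent of one another, so treat 68.4% as a ballpark figure, not a spec.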
Note: I'm playing a little fast and loose with the math here, because there's a chance that a bit of the berry overlaps with a red sensel, in which case the demosaiced image shows something in the green-yellow-orange range depending on how much overlap there was. And with four frames in a dynamic pixel shift, the chance of some overlap in some frame is pretty high. But the point remains: if no red sensel ever lands right on top of the red berry, the data will never show a nice saturated value in the red channel, and the final dynamic pixel shift image will do a worse job resolving the red berry than traditional tripod pixel shift.