I get the impression that lots of shooters here on PF are quite 'traditional' (and maybe older folks too).
So the prevailing style is still old-school 'straight shots', with less digital blending and Photoshop "trickery".
Don't get me wrong, I certainly believe there are a dozen ways to skin a cat; I'm just saying that this preferred approach doesn't naturally lend itself to exploring the capabilities of pixel shift.
So the odds of finding a tech-savvy shooter who actually buys a K-3 II, bothers to learn and explore the possibilities of pixel shift, and then goes on to talk/post about it, all under the shadow of a full-frame body coming (and the temptation to spend on that instead of the K-3 II), seem low to me.
Originally posted by Cynog Ap Brychan:
Thank you Gimbal for confirming what I was thinking. Despite biz-engineer's explanation of the underlying physics (of which I know little), I couldn't fathom how pixel shift would lead to a better image on an APS-C sensor and not on a full-frame one. After all, the camera is taking an RGB reading at each pixel site location rather than interpolating the colours from surrounding pixels, somewhat in the manner of a Foveon sensor, though of course that does it in one take.

I think biz is talking about noise levels, though I'm not sure about this either. By making four exposures, the camera is reading information from four times more light, although some of that information is probably discarded when amalgamating the readings into the final RAW file. Would that not result in lower noise levels? Perhaps someone more knowledgeable than I would care to elaborate on this.

I only know that when I used pixel shift for the first time on the K-3 II, I was blown away by the clarity and colour rendition which, to me, looked better than my D810.
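For what it's worth, the noise part of that question is easy to sketch numerically: if the four pixel-shift exposures have independent noise and the camera effectively averages them per pixel, the noise standard deviation should drop by roughly sqrt(4) = 2. This is only a toy simulation with made-up signal and noise numbers, not how the K-3 II actually combines frames internally:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0   # hypothetical per-pixel signal level
noise_sigma = 10.0    # hypothetical per-exposure noise (std dev)

# Simulate four exposures of the same static scene,
# each with its own independent random noise.
exposures = true_signal + noise_sigma * rng.normal(size=(4, 100_000))

single = exposures[0]              # one "straight shot"
combined = exposures.mean(axis=0)  # four-frame average, pixel-shift style

# Averaging N uncorrelated frames cuts noise by sqrt(N); here sqrt(4) = 2.
print(round(single.std(), 1))    # ~10
print(round(combined.std(), 1))  # ~5
```

So, all else being equal, four combined exposures buy you about one stop of noise improvement, on top of the true-colour-per-site benefit.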
I think you can summarize it as:
"I have the cameras, and there is a difference!"