Originally posted by Class A That is true. It is, however, often useful to distinguish between "sensels" and "pixels".
In our discussion, the distinction is important because one is talking about the alignment of either raw data (sensel data) or demosaiced data (pixel data).
Agreed.
Originally posted by Class A Yes, demosaiced data can work but it consumes 3X the RAM and gets poorer results. If the RAW data is available, both the speed and the results are superior.
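To make the raw-versus-demosaiced point concrete, here is a minimal numpy sketch of the classic four-frame merge on sensel data. The RGGB layout, the one-sensel shift order, and all the names are my assumptions for illustration, not Pentax's documented pipeline:

    import numpy as np

    CFA = np.array([[0, 1],    # RGGB: 0 = R, 1 = G,
                    [1, 2]])   #       2 = B

    SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]  # assumed one-sensel pattern

    def merge_pixel_shift(frames):
        """Merge four H x W raw mosaics into one H x W x 3 RGB image.

        No demosaicing: every output value is a real sample, because
        each sensel location was exposed once through R, twice through
        G, and once through B over the four shifted frames.
        """
        h, w = frames[0].shape
        yy, xx = np.mgrid[0:h, 0:w]
        rgb = np.zeros((h, w, 3), np.float32)
        weight = np.zeros((h, w, 3), np.float32)
        for frame, (dy, dx) in zip(frames, SHIFTS):
            # Which filter colour saw sensel (y, x) in this frame?
            colour = CFA[(yy + dy) % 2, (xx + dx) % 2]
            for c in range(3):
                m = colour == c
                rgb[m, c] += frame[m]
                weight[m, c] += 1.0
        return rgb / weight  # averages the two green samples

Note the memory contrast: this holds four H x W mosaics plus one H x W x 3 output, whereas aligning already-demosaiced data means carrying four H x W x 3 frames from the start, which is where the 3X figure comes from.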
Originally posted by Class A
What Pixel Shift does best is avoid colour moiré and increase colour resolution (in particular for red and blue).
AFAIC, it is moot to argue about whether increasing colour resolution in this manner constitutes increasing "spatial resolution" (in the regular meaning of the word).
Well, you can shoot a USAF test target in B/W or in various colors and measure the significant enhancement of spatial resolution produced by Pixel Shift.
Originally posted by Class A
I believe you know what I mean when I try to distinguish between processes that either
- aim to emulate a sensor that records full RGB data at each pixel location, or
- aim to emulate a sensor with more pixels.
Personally, I'd reserve the use of "increasing spatial resolution" for the latter process, but I won't argue if you want to use it for the first process as well.
In the red channel, the K-1 is a 9 megapixel camera with sensels that have a 25% fill factor. The steps to use the K-1's low spatial resolution sensor to get the full 36 MP of spatial resolution in RGB are identical to superresolution.
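To put a number on that, the red sensels of an RGGB mosaic occupy one corner of each 2x2 block, so pulling them out of a 36 MP K-1 mosaic yields a 9 MP image (the indexing convention below is an assumption about CFA orientation):

    import numpy as np

    def red_channel(mosaic):
        """Red sensels of an RGGB mosaic: one sample per 2x2 block."""
        return mosaic[0::2, 0::2]  # (H/2) x (W/2), i.e. 1/4 the pixels

The four shifted exposures drop red samples onto the other three grid phases (mosaic[0::2, 1::2], mosaic[1::2, 0::2], mosaic[1::2, 1::2]), which is exactly the superresolution step of filling a finer sampling grid from multiple displaced low-resolution captures.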
Originally posted by Class A
I personally wouldn't use that term for classic Pixel Shift (as I associate "superresolution" with sub-pixel shifting), but I won't argue that it is an incorrect usage.
Whether the shifts in the pixels are carefully controlled, carefully measured by SR sensors, carefully estimated from the image data, or some combination of the three really doesn't change the superresolution process much, although careful shifting (original Pixel Shift) can produce superior results to uncontrolled "hand-shake" shifting (the new "Dynamic Pixel Shift").
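For the "carefully estimated from the image data" case, phase correlation is a standard way to recover the shift between two frames. This sketch is generic and whole-pixel only; real dynamic shift handling would need sub-pixel refinement around the peak, and it is not a claim about what the camera actually does:

    import numpy as np

    def estimate_shift(ref, img):
        """Estimate the (dy, dx) that registers img onto ref."""
        # Normalised cross-power spectrum; its inverse FFT peaks at
        # the translation between the two frames.
        f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
        corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint wrap around to negative shifts.
        return tuple(p - s if p > s // 2 else p
                     for p, s in zip(peak, corr.shape))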
Originally posted by Class A That's their statement, but I'm assuming they may be considering adverse influences like motion in the scene, or perhaps they are not doing an exhaustive alignment analysis and hence include the possibility of errors?
Otherwise, it seems to me that a correlation analysis between the images should provide the best alignment possible. Acceleration measurements won't be 100% accurate or noiseless. Would you adjust a perfect alignment just because camera movement measurements point to a very slightly different alignment, one that yields less sharpness than is actually possible?
I'd use both the sensor data and the image data. To the extent that both sources are somewhat noisy but the noise is not 100% correlated, the combination is superior to either data source alone.
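A minimal sketch of that combination, assuming the two error sources are roughly independent and that usable variances exist (calibrated for the motion sensors, derived from the correlation peak's sharpness for the image data); inverse-variance weighting is the textbook way to do better than either source alone:

    import numpy as np

    def fuse_shift_estimates(shift_imu, var_imu, shift_corr, var_corr):
        """Inverse-variance weighted fusion of two noisy shift estimates."""
        w_imu = 1.0 / np.asarray(var_imu)
        w_corr = 1.0 / np.asarray(var_corr)
        fused = (w_imu * shift_imu + w_corr * shift_corr) / (w_imu + w_corr)
        fused_var = 1.0 / (w_imu + w_corr)  # lower than either input variance
        return fused, fused_var

With independent noise the fused variance, 1/(1/var_imu + 1/var_corr), is strictly smaller than either input variance, which is the mathematical version of "the combination is superior to either data source alone".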
Originally posted by Class A I'm not "choosing to disbelieve" at all.
What makes you think that?
I said that I believe that my description of "Dynamic Pixel Shift" is compatible with Pentax's description, so I never implied that they could be lying. All I said was that one should not read too much into marketing statements; marketing statements are not white papers. It is not uncommon for marketing statements to be not quite accurate in a technical sense, without any intention of deceit on the part of the company.
To clarify, I'm not saying anyone, including you, is wrong about what "Dynamic Pixel Shift" is. I'm just saying I'd be surprised if some of the theories that have been offered accurately described the actual implementation.
That we can agree on. It's easy (and fun) to speculate but hard to know what's really going on without extensive testing.