Originally posted by thigmo: With the announcement of the Olympus OM-D E-M5 II and its High Res Shot mode (40 MP from a 16 MP sensor), this is obviously possible.
I'm curious enough to hypothesize how this could work on Pentax.
Okay, so Pentax bodies have in-body shake reduction. If the sensor can be moved one pixel in each direction (up, down, left, and right) in quick succession, with a shot taken at each position, the camera could theoretically compile them into a single image. If a sensor is, say, 10 megapixels (for easy math) and it takes five exposures (four directions plus center), that would be 50 MP of data, right? The pixel dimensions would still be those of a 10 MP image, but the data volume would be 50 MP.
With a Bayer array, that would mean each pixel location gets sampled through more than one color filter over the sequence, so each site would record actual R, G, and B data rather than a single interpolated channel. In theory, the added detail and color data would be significant.
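As a toy sketch of how shifted Bayer exposures could be merged, here's a NumPy simulation. Note one wrinkle: with an RGGB mosaic, a four-shot "square" shift pattern (center, right, down, diagonal) is what actually places R, G, G, and B over every photosite, so the sketch uses that instead of the five-shot cross described above. All names and the wrap-around border handling are illustrative assumptions, not any camera's actual pipeline:

```python
import numpy as np

H, W = 8, 8
rng = np.random.default_rng(0)
scene = rng.random((H, W, 3))          # ground-truth full-RGB scene

# RGGB color-filter array: 0 = R, 1 = G, 2 = B
cfa = np.ones((H, W), dtype=int)       # green everywhere...
cfa[0::2, 0::2] = 0                    # ...red on even row/even col
cfa[1::2, 1::2] = 2                    # ...blue on odd row/odd col

# Four one-pixel shifts in a square: over the sequence, every scene
# pixel is sampled through one R, two G, and one B filter.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]

acc = np.zeros((H, W, 3))              # per-channel sums at each site
cnt = np.zeros((H, W, 3))              # how many samples each channel got

for dy, dx in shifts:
    # With the sensor shifted by (dy, dx), scene pixel (y, x) falls
    # under the filter at CFA position (y + dy, x + dx); np.roll
    # wraps at the border, which is fine for this toy demo.
    filt = np.roll(cfa, (-dy, -dx), axis=(0, 1))
    sample = np.take_along_axis(scene, filt[..., None], axis=2)[..., 0]
    for c in range(3):
        mask = filt == c
        acc[..., c] += np.where(mask, sample, 0.0)
        cnt[..., c] += mask

# Full RGB at every photosite, with no demosaic interpolation needed.
rgb = acc / np.maximum(cnt, 1)
```

With a noiseless, perfectly aligned scene the merged `rgb` reproduces it exactly, which is the whole appeal: each site's color is measured, not guessed from neighbors.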
I could see real value in that. If 50 MP's worth of dynamic range and color data could be packed into a 10 MP-sized image, I would consider that a highly useful tool for photography. The process I'm imagining would essentially mimic how a Foveon sensor works, with every photosite recording full color.