Originally posted by Barry Pearson The impression I'm getting is that the AA won't be obtained by multiple exposures. It will be achieved by vibrating the sensor extremely fast within one exposure.
If so, the final resolution will be the natural resolution of the sensor.
An optical anti-aliasing filter is essentially a sheet of screening laid over the sensor, so it is like viewing the scene through a screen door. The vibration analogy would work in an audio environment, somewhat like noise-cancelling headphones.
Originally posted by Clavius But wouldn't vibrating the sensor just cause blur?
Originally posted by gazonk Isn't that exactly what an AA filter does?
I don't know whether the approach described here has been used before; I am reading tea leaves based on what Pentax has let out of the bag so far. Traditionally, an anti-aliasing filter sits in front of the sensor. Dithering the sensor so as to take two images that are then combined does increase final resolution, and I am positing that the increased overall resolution may effectively resolve moire. Medium format imagers are largely unaffected by moire because of their higher resolution. The big question in my mind is whether moire can be negated computationally (from multiple images), or whether it must simply never be captured in the first place (a single high-resolution exposure taken at one instant in time).

Until now, though, the individual pieces - 1) the ability to reposition the sensor, and 2) the ability to combine images in real time - have not been available together. Neither Canon nor Nikon use shifting sensors for image stabilization; only Pentax, Sony and Olympus have that capability within their imaging systems. This is effectively expanding the solution space for one problem and applying it to another. All I'm doing is thinking out loud about how Pentax might have done it.
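To make the "shift the sensor and combine" idea concrete, here is a toy sketch (my own construction, not Pentax's actual pipeline - the function name, frame ordering, and half-pixel offsets are all assumptions). It interleaves four captures, each offset by half a pixel, into a grid with twice the sampling density on each axis, which is roughly how sensor-shift resolution gains are usually described:

```python
import numpy as np

def combine_pixel_shifted(frames):
    """Interleave four captures, each taken with the sensor shifted
    by half a pixel, into one image with double sampling per axis.

    frames: list of four HxW arrays, assumed to be captured at
    offsets (0,0), (0,+half), (+half,0), (+half,+half).
    """
    h, w = frames[0].shape
    out = np.zeros((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]   # no shift
    out[0::2, 1::2] = frames[1]   # half pixel right
    out[1::2, 0::2] = frames[2]   # half pixel down
    out[1::2, 1::2] = frames[3]   # half pixel down and right
    return out
```

A 24MP sensor sampled this way would yield an image with four times the pixel count, which is at least in the neighbourhood of the 40MP-from-24MP claim (the real pipeline would also have to deal with demosaicing and subject motion, which this sketch ignores).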
Olympus has a 5-axis sensor system, and I have always wondered why they did not use it for focus stacking - it seems a natural capability to apply to that need (if you actually need or want it). Another application is mapping a spherical view onto a flat surface (the sensor). Using the Olympus 5-axis system, you should be able to tilt each corner back and forward individually, take an image at each position, and then combine them into a virtual spherical imaging surface (formed over a short instant in time). That too would take a lot of real-time processing power that is only now becoming available. I don't know of anyone that has tried this computationally with optics. In SARs (Synthetic Aperture Radars), forming individual beams (beam formers, or BFs) is essentially the same idea: taking a lot of individual radar images and putting them together like a giant jigsaw puzzle. I am way oversimplifying, but the analogy should work.
If you take a look at where Fujitsu picked up the technology for combining the images - "improved high dynamic range (HDR) photograph quality using a JPEG-HDR™ format developed by Dolby Laboratories" - it becomes clear that audio companies like Bose and Dolby have developed techniques for building three-dimensional (surround sound) systems out of multiple combined signals. Trying to think outside the box about what Pentax may have used them for: HDR is just a technique for combining images taken at different EVs (shutter speed, aperture, etc.). I am wondering out loud whether using the sensor-shifting ability to increase image resolution also happens to be a reasonable solution to moire. Think about it: in software, and in real time, how could they examine an image to 1) detect moire and then 2) correct it? To be efficient, it would need to be done across the entire image in a way that repairs the areas with moire without damaging the areas that are free of it. Again, in medium format the natural byproduct of higher resolution is not capturing moire in the first place.
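As a side note on what "combining images taken at different EVs" looks like in practice, here is a naive exposure-fusion sketch (again my own illustration, not the JPEG-HDR format or anyone's shipping algorithm - the mid-grey weighting and its width are arbitrary assumptions). Each pixel is weighted by how well exposed it is, then the frames are blended:

```python
import numpy as np

def fuse_exposures(frames):
    """Naive exposure fusion: weight each pixel by how close it sits
    to mid-grey (0.5), then blend the frames with those weights.

    frames: list of float arrays with values in [0, 1].
    """
    stack = np.stack(frames)                       # shape (N, H, W)
    # Gaussian "well-exposedness" weight, peaked at mid-grey
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)
```

The point of the sketch is that fusion is a per-pixel weighted combination of multiple frames, so the same processing hardware that does this could plausibly be repurposed for other multi-frame tricks, which is the speculation above.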
There is a good chance that I could be all wrong. I guess we will know sometime tomorrow - Tuesday or Wednesday, depending on where in the world you may be....
Then again, combining two images could just produce additional moire (take a look at this link - scroll down to "1080/60p"). Rather than using the HDR capabilities of the image processing chip, they could be using its video capability to remove moire: essentially treating the input from the sensor as a stream and extracting a still image from it. The bottom line is the reference to 40MP resolution when using a 24MP sensor. The only way to exceed the pixel count of the physical sensor is to move it slightly, take another image, and combine the images in some respect.
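For what the "treat the sensor output as a stream" idea could mean at its simplest, here is a minimal sketch (purely illustrative - I have no idea what Pentax's processor actually does): average a short burst of frames, so any interference pattern that shifts from frame to frame tends to cancel while the stable scene content survives:

```python
import numpy as np

def still_from_stream(frames):
    """Extract a still from a short burst of video frames by
    averaging them; frame-to-frame shimmer (including some moire,
    if it is not locked to the scene) tends to average out.

    frames: list of same-shaped float arrays.
    """
    return np.mean(np.stack(frames), axis=0)
```

In reality the frames would need to be aligned first, and moire that is locked to the subject would survive averaging, so this is only the crudest version of the idea.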