Originally posted by justruppert:

"Since DSLR video is currently only able to use a fraction of the sensor's actual resolution to record moving images, wouldn't it be possible to turn this limitation into an advantage by creating something like a stochastic readout of the sensor to avoid moiré patterns (rather than blurring the image by mechanical means like the Caprock and such)?

Best"

This is a very interesting and innovative idea. It reminds me of Floyd-Steinberg dithering.
IMHO, it wouldn't work though. Two reasons:
1. The stochastic subsampling pattern would transform the moiré pattern into strong noise, more specifically strong color noise. While that may still look better in a still frame, it would look like strong flicker in a video (with a subsampling pattern that changes from frame to frame). Even with a constant subsampling pattern, the video would typically still flicker, just as it does with moiré patterns (because even on a tripod, the frames shift by a few µm -- you can see this in my video two posts up).
BTW, one can easily check this with the current regular subsampling pattern and a random texture like grass or sand: it looks ugly because of strongly flickering color noise. This has been criticised by many users of entry-level DSLR cameras (including the K-x), usually without identifying texture noise as the cause. The K-7 has fewer problems because of its finer-grained subsampling pattern.
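For those who want to play with this: a minimal 1-D toy model in numpy (my own sketch, nothing to do with actual camera firmware) showing why stochastic subsampling trades a coherent alias (moiré) for broadband noise. The signal frequency and block size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fine 1-D pattern above the Nyquist limit of the subsampled grid.
n = 4096
x = np.arange(n)
signal = np.sin(2 * np.pi * x * 0.26)

# Regular 4:1 subsampling: the detail aliases into one coherent
# low-frequency beat -- the 1-D analogue of a moiré pattern.
regular = signal[::4]

# Stochastic subsampling: pick one random sample out of each block of 4.
offsets = rng.integers(0, 4, size=n // 4)
stochastic = signal[np.arange(0, n, 4) + offsets]

def peakiness(y):
    """Fraction of spectral energy in the single strongest bin."""
    spec = np.abs(np.fft.rfft(y - y.mean()))
    return spec.max() / spec.sum()

# The regular pick concentrates the aliased energy in one frequency
# (structured moiré); the stochastic pick spreads it out as noise.
print(peakiness(regular) > peakiness(stochastic))  # True
```

The aliased energy doesn't disappear with stochastic readout; it just turns from a visible pattern into noise, which is exactly the flicker problem described above.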
2. In most image sensors, the read-out pattern is pre-wired in silicon. It isn't like a memory chip where you can read an arbitrary address. Rather, you set the sensor into one of a few read-out modes, give a start trigger, and then sequentially read out all values coming from a so-called channel (the K-7's sensor has 4 channels). Most image sensors deliver the values as analog voltages, some as digitized values (Sony Exmor).
Sure, as soon as sensors can be read out like a memory chip (requiring 24-32 address pins and 12-16 data pins per channel, i.e., roughly 96 rather than 4 pins ...), your idea (and many others) becomes feasible. But then there is still reason #1.
Really, talking about the future, the most reasonable thing to do would be on-chip ADCs (like Sony's) and four times the number of channels, so that in video mode each 2x2 Bayer cell can be binned into one digital 12:4:4 RGB value to be read out. This would be fast enough to read 24fps video out of today's 6fps cameras and would even allow 4k video on some future 43+ MP cameras. However, I haven't checked the power consumption of such a solution.
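A quick back-of-envelope check of those numbers (the 4k part is my own reading of why 43+ MP would suffice; the rest is straight arithmetic from the figures above):

```python
# Four times the read-out channels -> roughly four times the frame rate.
stills_fps = 6                  # today's full-resolution burst rate
video_fps = stills_fps * 4      # with 4x the channels
print(video_fps)                # 24

# 4k is about 3840 x 2160 = 8.3 MP per frame; a 43 MP sensor binned
# 2x2 still delivers ~10.75 MP, enough to cover a 4k frame.
print(3840 * 2160 / 1e6)        # 8.2944
print(43 / 4)                   # 10.75
```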
A simpler solution is Fujifilm's EXR binning on the sensor cells themselves, but that is only 2x binning. One would probably need 4x binning (e.g., 4x4 sensor cells binned into one pseudo 2x2 Bayer cell to be read out), which becomes possible with 39+ MP cameras.
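As a sketch of what such 4x binning would do to the data (a software mock-up, of course -- in the EXR-style case this averaging happens on the sensor itself, before read-out):

```python
import numpy as np

def bin4_bayer(raw):
    """Bin a Bayer mosaic 4x: every 4x4 block of sensor cells becomes one
    pseudo 2x2 RGGB cell by averaging the four same-colour cells."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0
    out = np.empty((h // 2, w // 2))
    for di in (0, 1):        # row offset within the 2x2 Bayer tile
        for dj in (0, 1):    # column offset within the 2x2 Bayer tile
            # Extract one colour plane, then average 2x2 blocks of it.
            plane = raw[di::2, dj::2].astype(float)
            out[di::2, dj::2] = plane.reshape(h // 4, 2, w // 4, 2).mean(axis=(1, 3))
    return out

# A 16x16 mosaic shrinks to an 8x8 mosaic with the same Bayer layout.
raw = np.arange(16 * 16, dtype=float).reshape(16, 16)
print(bin4_bayer(raw).shape)  # (8, 8)
```

The point is that the output is still a valid (smaller) Bayer mosaic, so the downstream demosaicing pipeline needs no changes.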
On the other hand, high-speed sensors like the one in the Casio Exilim achieve 6MP x 60fps, which is the same throughput as 15MP x 24fps, i.e., fast enough for a real-time read-out of all pixels. However, these modes are only active for 1s or so, and power consumption may be an issue. Nevertheless, the problem of sub-par video quality will soon be gone.
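Just to verify that throughput equivalence:

```python
# Read-out throughput in megapixels per second.
print(6 * 60)   # 360 MP/s at 6 MP x 60 fps (Casio Exilim mode)
print(15 * 24)  # 360 MP/s at 15 MP x 24 fps -- the same pixel rate
```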