Originally posted by falconeye That's because the "workflow" has to deal with 4x the data and has to guess which pixels did change.
Not exactly. Without pixel shift (PS), each RAW pixel carries 14 bits of data representing the intensity of only one colour at that pixel. With PS, each RAW pixel has 56 bits of data recorded, representing the intensities of R, G, B and G (the green value is sampled twice).
These are then converted into a 42-bit pixel by discarding one of the redundant green values, and that is what gets stored. If there were only three pixel shifts, a pixel initially blue would capture blue, green, red, while a pixel which was initially red would capture red, green, blue. The problem is the pixels which were initially green: three shifts would capture green, blue, green or green, red, green, and the fourth shift captures the missing red or blue value respectively. So now you have an image with three times the data (ignoring overhead; insignificantly less considering overhead).
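To make the combining step concrete, here is a minimal sketch (my own illustration, not Pentax's actual firmware; the RGGB layout and the shift sequence are assumptions) of how four one-pixel-shifted Bayer frames, once realigned to the scene, yield full R, G and B values at every photosite, with one redundant green sample discarded:

```python
import numpy as np

# Assumed RGGB colour filter array (0=R, 1=G, 2=B) and an assumed
# shift sequence of (0,0), (1,0), (1,1), (0,1) whole pixels.
CFA = np.array([[0, 1],
                [1, 2]])
OFFSETS = [(0, 0), (1, 0), (1, 1), (0, 1)]

def combine_pixel_shift(frames):
    """frames: four HxW arrays of 14-bit values, already realigned to the scene.
    Returns an HxWx3 array holding full R, G, B per pixel."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3), dtype=np.uint16)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dx, dy) in zip(frames, OFFSETS):
        # colour each photosite saw in this frame, given the sensor shift
        channel = CFA[(ys + dy) % 2, (xs + dx) % 2]
        for c in range(3):
            mask = channel == c
            # the second green sample overwrites the first, i.e. one
            # redundant green value is simply discarded
            rgb[..., c][mask] = frame[mask]
    return rgb
```

Over the four offsets every pixel sees all four cells of the 2x2 pattern, so it ends up with one red, one blue and two green samples, which is exactly the RGBG capture described above.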
I also believe (but I am not sure) that the 42-bit pixel is actually further reduced to a 24-bit or 28-bit pixel, either as a (10,14)- or (14,14)-bit (intensity, colour) pair, or as an (8,8,8)- or (12,12,12)-bit (R,G,B) triplet. This may make far more sense, since a PS image is not three times the file size of a non-shifted image.
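A quick back-of-envelope check (my own arithmetic, not from any camera spec) of what an uncompressed 36.4 MP frame would occupy at those candidate bit depths is consistent with that hunch; 28 bits per pixel lands near the roughly 130 MB figure mentioned further down:

```python
# Rough arithmetic only: raw payload size, ignoring metadata and compression.
MEGAPIXELS = 36.4e6          # K-1 pixel count
for bits_per_pixel in (14, 28, 42, 56):
    size_mb = MEGAPIXELS * bits_per_pixel / 8 / 1e6
    print(f"{bits_per_pixel:2d} bits/pixel -> ~{size_mb:.0f} MB uncompressed")
# 14 -> ~64 MB, 28 -> ~127 MB, 42 -> ~191 MB, 56 -> ~255 MB
```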
However, there is significantly less processing to be done because there are no block, maze, or colour moiré artefacts to mitigate with the various Bayer transform approximation (BTA) algorithms. Indeed, with any given BTA algorithm one is actually computing more data, because for every pixel one may have to deal with the data of a minimum of four neighbouring samples and as many as sixteen, each carrying 14 bits. That is a minimum of 56 bits per pixel and up to 224 bits per pixel. With the full colour and full luminance information of each pixel, none of that extra processing of neighbouring pixels is needed.
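For comparison, here is a sketch of bilinear demosaicing, about the simplest BTA algorithm there is (again my own illustration, assuming an RGGB pattern): even this crude method has to average two to four neighbouring samples for every missing colour value at every pixel, which is exactly the extra work PS avoids:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """raw: HxW Bayer mosaic (RGGB assumed). Returns HxWx3 interpolated RGB."""
    raw = raw.astype(float)
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r_mask = (ys % 2 == 0) & (xs % 2 == 0)
    b_mask = (ys % 2 == 1) & (xs % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Green is averaged from its 4-connected neighbours; red and blue from
    # 2 or 4 neighbours within a 3x3 window, depending on the site.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```

More sophisticated BTA algorithms widen that window to the sixteen-neighbour case mentioned above, which is where the 224-bits-per-pixel figure comes from.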
And, no, there aren't any “changed” pixels. Each pixel is accurately recorded.
Originally posted by falconeye The immense success of the Bayer pattern has to do with its efficient "workflow", needing much less original data for the same perceived image quality.
Again, it is not for the same perceived image quality. It is for a far superior image quality, with more detail, less noise, and no artefacts. And unless you treat your mostly monotone, low-contrast, simple scenes the exact same way (same algorithm) as your high-contrast, multi-colour-transition, complex scenes, the BTA workflow is far from efficient. That complexity is what PS reduces.
Originally posted by falconeye The real merit of pixel-shift is an absolute better image quality at the pixel-level than a single capture with a Bayer filter could provide.
True: better image quality in terms of colour accuracy and luminance accuracy (ergo detail), noise reduction, artefact reduction, etc., all reducing the effort needed in the workflow.
Well, pixel shift the way Pentax does it, not the way Sony does it. They actually do pixel shift quite differently, and the Sony method does in fact create an image of four times the resolution (a 20 MP image becomes an 80 MP image), while the Pentax method starts and ends with a 24 MP or 36 MP image (K-3 II or K-1 respectively).
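A hedged sketch of that difference (my own illustration, not either maker's actual pipeline): where the Pentax-style merge above keeps the H x W grid, a sub-pixel method interleaves four half-pixel-shifted frames into a 2H x 2W grid, which is where the fourfold pixel count comes from:

```python
import numpy as np

def interleave_subpixel(frames):
    """frames: four HxW captures shifted by half a pixel (assumed order:
    (0,0), (+1/2,0), (+1/2,+1/2), (0,+1/2)). Returns a 2Hx2W mosaic."""
    h, w = frames[0].shape
    out = np.zeros((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]
    out[0::2, 1::2] = frames[1]
    out[1::2, 1::2] = frames[2]
    out[1::2, 0::2] = frames[3]
    return out
```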
[CORRECTIONS]
Handling a 130 MB image is not really an issue if your computer is up to it. (I currently have 32 GiB of RAM and about 5 TB of HDD space for data, plus eSATA III and USB 3.0.) It may mean more time importing or running standard scripts on each image, but the same added time exists when going from a 6 MP workflow to a 24 MP workflow. The complexity of the workflow, however, decreases.