Quote: What Sensor Does the K-1 Use?
It's the same Sony sensor used in the Nikon D800, though it's been adapted to Pentax's specifications.
That pretty much confirms it's a D800-family chip, or a revision of that chip. The degree to which it varies from the stock D800E package is what we have been debating, and will likely keep debating until iFixit does a teardown!
However, the "built to Pentax's specifications" line has me wondering a few things beyond the obvious possible differences in sensor substrate (D800E IMX094AQP vs A7r IMX094AQR) and the potential rev C differences of a third generation of this sensor class.
I wonder if Sony/Pentax have completely removed the physical AA filter from the K-1 variant rather than disabling it. On the D800E the AA filter is physically present but disabled. However, on the K-1 site they mention using SR to simulate an AA filter, when in *theory* this could be something that is enabled/disabled electronically. I believe Sony had this built in as a feature on the original D800 chip, and it is also present on the variant of the A7rii chip used in the Rx1rii. It is not outside the realms of possibility that Pentax's "specifications" include its removal.
With respect to the differences between ISO64 and ISO100, my feeling is that for real-world use it offers only corner-case advantages where the difference would even be perceptible. The D810 and D800 have similar ISO100 performance, and both are exceptional at ISO100 already.
Practically speaking, it's subjectively hard (for me anyway) to see any noticeable difference between ISO64 and ISO100 looking at the DPReview studio scene. On a graph, yes, 2/3 of a stop does show up in SNR; in real-world usage, maybe not so much.
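For anyone wondering where the "2/3 of a stop" figure comes from: ISO is linear in exposure, so the gap in stops is just the base-2 log of the ratio. A quick sanity check:

```python
import math

# ISO is linear in exposure, so the difference in stops between two
# base ISO settings is log2 of their ratio.
stops = math.log2(100 / 64)
print(f"ISO64 vs ISO100: {stops:.2f} stops")  # ~0.64, i.e. roughly 2/3 stop
```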
Pixel shift is more likely to provide a tangible, visible benefit to your overall IQ than the 2/3-stop drop from ISO100 to ISO64, in the scenarios where it can be leveraged. The introduction of motion-compensated pixel shift should extend the shooting envelope of this feature beyond the K3ii or EM5ii implementations, although I expect it to remain a feature with limited use cases.
Focus stacking uses colour pixel information that has already been interpolated at each pixel before you do the actual stacking operation in PS.
Hardware-based pixel shift samples every colour channel at each photosite, giving a true, measured colour value at each and every pixel. In theory (and in practice, where it can be deployed) it provides significant improvements in moiré, noise, colour accuracy, tonality and detail.
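To make the "measured, not interpolated" point concrete, here is a minimal numpy sketch of the idea (a toy model, not Pentax's actual pipeline): four exposures with the sensor shifted by one pixel put the R, G, G and B filters of an RGGB Bayer array over every scene location, so full colour is recovered at each pixel with no demosaic interpolation at all.

```python
import numpy as np

# Toy RGGB Bayer colour filter array: CFA[r % 2, c % 2] gives the channel
# index (0=R, 1=G, 2=B) sampled at row r, column c.
CFA = np.array([[0, 1],
                [1, 2]])

def capture(scene_rgb, dy, dx):
    """Simulate one Bayer exposure with the sensor shifted by (dy, dx) pixels.
    scene_rgb is an H x W x 3 'ground truth' scene; returns an H x W mosaic."""
    h, w, _ = scene_rgb.shape
    rows = (np.arange(h)[:, None] + dy) % 2
    cols = (np.arange(w)[None, :] + dx) % 2
    chan = CFA[rows, cols]  # which channel each photosite records
    return np.take_along_axis(scene_rgb, chan[..., None], axis=2)[..., 0]

def pixel_shift_combine(scene_rgb):
    """Combine four one-pixel-shifted exposures. Every location is sampled
    once under R, once under B and twice under G, so each output pixel is a
    measured colour, not an interpolated one."""
    h, w, _ = scene_rgb.shape
    out = np.zeros((h, w, 3))
    g_count = np.zeros((h, w))
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        mosaic = capture(scene_rgb, dy, dx)
        rows = (np.arange(h)[:, None] + dy) % 2
        cols = (np.arange(w)[None, :] + dx) % 2
        chan = CFA[rows, cols]
        for c in range(3):
            mask = chan == c
            out[..., c][mask] += mosaic[mask]
            if c == 1:
                g_count[mask] += 1
    out[..., 1] /= g_count  # green was sampled twice per location
    return out

# With a static scene the four shifts reconstruct the colour exactly:
scene = np.random.rand(4, 4, 3)
assert np.allclose(pixel_shift_combine(scene), scene)
```

It also hints at the feature's main limitation: the reconstruction is exact only when the scene holds still across all four exposures, which is why motion compensation matters.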
diglloyd: Pentax K1 Super Resolution Pixel Shift Mode
Pentax K-3 II Review: Now Shooting! - Pixel Shift Resolution mode
To echo your earlier point, falconeye, I would love to see a side-by-side of a K3ii with pixel shift vs a K3ii shot normally and stacked in PS. Lloyd Chambers has an interesting article here on the EM5ii implementation, and the advantages go beyond stacking:
diglloyd Mirrorless - Olympus OM-D E-M5 Mark II - Hi-Res Sensor Shift Mode vs Standard Resolution, NOISE (Mining Artifacts) (diglloyd.com excerpt) (subscription needed).
As a Sony shooter I can attest that the Sony Smooth Reflections app has not provided me with massive improvements in IQ in the areas of detail and colour acuity. It's a nice way to simulate an ND filter without carrying one, but I gave up on it and went back to shooting bracketed frames and stacking them in PS when I want to pull every last drop of detail for large prints greater than 30x40.
There is a certain irony in chatting about resolution when the majority of my work goes to the web, home print costs are nuts, and, despite my preference for printing BIG, I have a bunch of prints in art folder sleeves that cannot be hung on the wall as I've no space! (I'm typing this on a 2MP laptop monitor with 8-bit colour.)