Originally posted by RGlasel
Today, the only certainty at Sony is that they will continue to make sensors for sale to other manufacturers of finished goods.
To make an imager, you use a standard semiconductor process and add extra stages, especially at the end of the manufacturing flow. The difference between the camera business and the sensor business is that Sony uses its fabs not only for imagers but for other ASICs as well. You can design and release a new imager in six months with fewer than 15 designers. The thing is, once the base pixel cell and the signal-conditioning circuitry are designed, it's almost a matter of copy-paste to go from 1 pixel to 24 million pixels. Of course, some decoding logic is added, along with redundancy to repair dead pixels and improve production yield, but it's not a big deal.
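To give a feel for what "redundancy to repair dead pixels" means in practice, here is a minimal sketch (not Sony's actual repair logic, and purely illustrative): defects found at production test are recorded in a defect map, and each dead pixel is later replaced by the average of its live neighbours. The function name, array shapes, and interpolation rule are all assumptions.

```python
# Illustrative dead-pixel repair: replace each pixel in the defect map
# with the average of its live 4-connected neighbours.
def repair_dead_pixels(frame, defect_map):
    """frame: 2-D list of pixel values; defect_map: set of (row, col) defects."""
    h, w = len(frame), len(frame[0])
    repaired = [row[:] for row in frame]  # don't mutate the raw readout
    for (r, c) in defect_map:
        neighbours = [
            frame[rr][cc]
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in defect_map
        ]
        if neighbours:  # leave the pixel alone if it has no live neighbours
            repaired[r][c] = sum(neighbours) // len(neighbours)
    return repaired
```

Real sensors typically do this (or spare-column substitution) in on-chip or ISP hardware, but the principle is the same: a small amount of extra logic turns a die with a few defects into a sellable part, which is what lifts the yield.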
Originally posted by Wired
Look at how well the 16MP sensor did in the Pentax K5, for example; nothing else could touch it the way Pentax tweaked that thing.
Pentax can only use the features that Sony Semiconductor provides in its designs. Sony application engineers explain to Pentax how to integrate the sensor and how to use it. To get the best signal-to-noise performance, all of the analog signal processing is designed by Sony on the sensor itself. Pentax's job consists of the mechanical and electrical integration and the programming of the whole camera system. Pentax cannot act on the intrinsic performance of the sensor, apart from using (or not) the options given to them, such as ISO 80, and selecting the color filters and antialias filters. Semiconductor companies offer reference designs that customers such as Pentax can use as-is or customize to their needs.
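To illustrate what "using or not using the options given to them" looks like on the camera side, here is a toy sketch: the integrator writes configuration registers that the sensor datasheet exposes, rather than touching the analog chain itself. Every register name, address, and method here is invented for illustration; real sensors use an I2C or SPI register interface defined in their datasheet.

```python
# Hypothetical register map (invented addresses, for illustration only).
REG_GAIN      = 0x10  # analog gain select, in fixed steps defined by the sensor
REG_EXT_ISO   = 0x11  # enable/disable an optional extended low-ISO mode
REG_READ_MODE = 0x12  # full-frame vs. binned readout

class SensorConfig:
    """Toy register file standing in for the sensor's control interface."""
    def __init__(self):
        self.regs = {}

    def write(self, addr, value):
        # In a real camera this would be an I2C/SPI transaction to the sensor.
        self.regs[addr] = value

    def enable_iso80(self):
        # The camera maker can only flip the switches the sensor exposes;
        # the analog signal path behind them is fixed silicon.
        self.write(REG_EXT_ISO, 1)
```

The point of the sketch: the camera firmware picks among modes the sensor vendor designed in, which is exactly why two cameras with the same sensor can differ in tuning but not in the sensor's underlying analog performance.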