Today I wanted to write an article that shows, at a very high level, how sensor size affects image quality. I'll start by saying that while my results are informal and not perfectly controlled in every respect (the ideal test would use three cameras with the same number of megapixels), I've tried to eliminate as many inconsistencies as possible in the hope that my conclusions will generally be valid. In the remarks to follow, I'll compare image quality from a compact camera (represented by the Pentax Q7), an APS-C camera (represented by the Pentax K-50), and a full-frame camera (represented by the Nikon D800).
I've done the following to reduce the number of confounding factors:
- All 3 cameras are of the same generation (current models as of the time this post was written)
- All 3 cameras were using consumer-grade zoom lenses (Pentax 02 Standard Zoom, Sigma 17-70mm F2.8-4 "C", Nikon 24-120mm F4)
- All photos were taken in aperture priority mode near the sharpest aperture setting
- All photos were taken at focal lengths delivering virtually the same horizontal field of view (Q7 = 10mm, K-50 = 34mm, D800 = 50mm)
- All photos were shot at default settings in JPEG mode
- No post-processing was applied except for monochromatic auto-levels, in order to suppress differences in color and allow you to focus on differences in detail and dynamic range
- All images are losslessly compressed (PNG format)
- All images were shot at the same sensitivity (ISO)
So, without further ado, here we go.
Everybody says that the bigger a camera's sensor, the better its image quality. But why is this? Clearly, larger sensors have an advantage in terms of overall surface area. That additional area lets sensor manufacturers improve the sensor in two respects: the size of individual pixels (pixel pitch, or pixel area), and the overall number of pixels (resolution).
- The bigger the pixel pitch of a sensor, the higher the signal-to-noise ratio. This leads to better color reproduction, detail, and dynamic range, as the color values the pixels record will be closer to the true values (a toy shot-noise calculation illustrating this appears right after this list).
- The higher the resolution, the better the overall clarity. Because most photos are scaled down before being printed or displayed, the extra pixels can be used to suppress much of the noise present in the original recording.
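To make the first point a bit more concrete, here's a toy calculation of my own (not data from any of these cameras) using the simplest possible noise model: if photon shot noise is the only noise source, a pixel that collects N photons has a signal-to-noise ratio of sqrt(N), and N grows with pixel area. The photon flux below is an arbitrary, illustrative number.

```python
import math

# Simplified shot-noise model: a pixel that collects N photons has signal N
# and shot-noise standard deviation sqrt(N), so its SNR is sqrt(N).
# The photon count scales with pixel area for a given exposure.

photons_per_sq_micron = 500  # illustrative photon flux, not a measured value

for label, pitch_um in [("compact (~1.9 um pitch)", 1.9),
                        ("APS-C (~4.8 um pitch)", 4.8),
                        ("full frame, low-res (~7.3 um pitch)", 7.3)]:
    area = pitch_um ** 2                    # pixel area in square microns
    photons = photons_per_sq_micron * area  # photons collected by one pixel
    snr_db = 20 * math.log10(math.sqrt(photons))
    print(f"{label}: ~{photons:.0f} photons, shot-noise SNR ~ {snr_db:.1f} dB")
```

The absolute numbers are meaningless, but the trend is the point: every doubling of pixel pitch quadruples the collected light and adds about 6 dB of shot-noise headroom.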
Let's look at a few different sensors to compare pixel pitch and resolution:
| Camera | Format | Sensor Area (mm²) | Resolution (MP) | Pixel Pitch (µm) | Pixel Density (MP/cm²) |
| --- | --- | --- | --- | --- | --- |
| Pentax 645D | Medium | 1452 | 40.0 | 6.0 | 2.8 |
| Nikon D800 | FF | 861 | 36.3 | 4.8 | 4.2 |
| Nikon D600 | FF | 861 | 24.3 | 5.9 | 2.8 |
| Nikon D4 | FF | 860 | 16.2 | 7.3 | 1.9 |
| Nikon D7100 | APS-C | 366 | 24.0 | 3.9 | 6.6 |
| Pentax K-50 | APS-C | 371 | 16.3 | 4.8 | 4.4 |
| Pentax Q7 | 1/1.7" | 43 | 12.4 | 1.9 | 29.3 |
| Pentax Q | 1/2.3" | 28.5 | 12.4 | 1.5 | 43.4 |
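If you'd like to sanity-check figures like these yourself, they follow directly from the sensor dimensions and pixel counts. The quick sketch below uses approximate sensor dimensions that I've rounded for readability (treat them as assumptions rather than official specifications), so the results will differ slightly from the table.

```python
import math

# Approximate sensor dimensions in mm (rounded; assumptions, not official
# specifications) and resolution in megapixels.
cameras = {
    "Nikon D800 (FF)":     (35.9, 24.0, 36.3),
    "Pentax K-50 (APS-C)": (23.7, 15.7, 16.3),
    "Pentax Q7 (1/1.7\")": (7.6, 5.7, 12.4),
}

for name, (width_mm, height_mm, mp) in cameras.items():
    area_mm2 = width_mm * height_mm
    # Pixel pitch: side length of one pixel, assuming square pixels.
    pitch_um = math.sqrt(area_mm2 / (mp * 1e6)) * 1000  # mm -> micrometers
    # Pixel density: megapixels per square centimeter (1 cm^2 = 100 mm^2).
    density = mp / (area_mm2 / 100)
    print(f"{name}: area {area_mm2:.0f} mm^2, pitch {pitch_um:.1f} um, "
          f"density {density:.1f} MP/cm^2")
```

The results land close to the table values, with small differences caused by how the sensor dimensions are rounded.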
Manufacturer-specific and generational hardware differences aside, if we shoot the Pentax K-50 and the Nikon D800 side by side, we should observe very similar levels of noise per pixel, since their pixel pitch is nearly identical. However, because the D800 has roughly twice the resolution of the K-50, once its files are scaled down to 16 megapixels they will appear to have significantly less noise and therefore more clarity. On the other hand, if we took the same photo with the K-50 and the D4, the original file from the D4 should have much less noise, simply because its pixels are physically much larger. You can make further comparisons on your own using the data in the table above. Notice how much bigger the jump from 1/1.7" to APS-C is than the jump from APS-C to FF in terms of sensor area; that gap is, of course, reflected in the respective difference in image quality.
So why are pixel pitch, resolution, and signal-to-noise ratio such a big deal? The short answer is that when noise is low, more of the data recorded by the sensor is accurate, and so the overall image is clearer. Similarly, when we have more resolution, we have more room for error in the original image, because we can make up for it when scaling the photo down.
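Here's a tiny simulation of that second point (purely synthetic numbers, just to illustrate the principle): averaging each 2x2 block of pixels, which is roughly what happens when an image is scaled to half its linear resolution, cuts random noise roughly in half.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat mid-gray patch with synthetic Gaussian noise (illustrative values only).
true_value = 0.5
noise_sigma = 0.05
patch = true_value + rng.normal(0.0, noise_sigma, size=(1000, 1000))

# Downsample by averaging each 2x2 block, similar in spirit to scaling an
# image down to half its linear resolution (a quarter of the pixel count).
downsampled = patch.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"noise std before scaling:      {patch.std():.4f}")        # ~0.050
print(f"noise std after 2x2 averaging: {downsampled.std():.4f}")  # ~0.025
```

Averaging four noisy samples shrinks the noise standard deviation by a factor of two (the square root of four), which is the same mechanism the D800-versus-K-50 comparison above relies on.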
But I want to take my explanation a little further than that. Having a background in computing, I have personally implemented and worked with a host of image processing algorithms, including the ones that convert raw sensor data into full-color images. This has given me a great deal of insight into how images are generated and enhanced.

The big issue at hand is that color digital image sensors only record a fraction of the color data (light) that they are trying to portray. Each pixel on a color sensor is sensitive to only one of three color channels: red, green, or blue. Most sensors today are designed around a Bayer pattern, which lays the pixels out in a uniform checkerboard consisting of 50% green pixels, 25% red pixels, and 25% blue pixels (green gets the extra share because the human eye is most sensitive to it). Thus, after the sensor captures an image, a computer algorithm has to calculate (a.k.a. guess with high accuracy) the intensity of the two missing color channels at each pixel location; this is known as demosaicing. Demosaicing techniques generally work by analyzing the neighbors of each pixel and interpolating the missing color intensities from the intensities found at neighboring locations. But what if the neighboring intensities are inaccurate due to noise? The short answer is that the effectiveness of demosaicing drops dramatically, as the guesses about the two missing color channels can be poor or flat-out wrong. This is what leads to significant loss of edge detail and to color noise. If an image has a lot of pixels, the probability of having a good guess within any given area is high, which is why scaling down (downsampling) reduces the apparent noise. Similarly, if the overall noise is low to begin with, the original representation of detail and color will already be accurate, so we will not need to do any (or as much) downsampling to get a good image.
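To give you a flavor of what demosaicing actually does, here is a deliberately naive bilinear sketch of my own. It is not the algorithm any particular camera uses; real demosaicing is far more sophisticated, but the reliance on neighboring pixels, and therefore the sensitivity to noisy neighbors, is exactly the same.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Very naive bilinear demosaic of an RGGB Bayer mosaic.

    raw: 2D array where each element holds the single color value the
    sensor recorded at that photosite. Returns an (H, W, 3) RGB image.
    Illustrative only -- real cameras use far smarter algorithms.
    """
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Which channel each photosite records, for an RGGB layout:
    #   R G R G ...
    #   G B G B ...
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }

    kernel = np.ones((3, 3))  # average over the 3x3 neighborhood
    rgb = np.zeros((h, w, 3))
    for i, ch in enumerate("RGB"):
        mask = masks[ch].astype(float)
        known = raw * mask
        # Normalized convolution: sum of the known neighbors divided by how
        # many neighbors actually carried this channel.
        est = convolve(known, kernel, mode="mirror") / convolve(mask, kernel, mode="mirror")
        # Keep measured values where we have them, estimates everywhere else.
        rgb[..., i] = np.where(mask > 0, raw, est)
    return rgb
```

Every output value that wasn't measured directly is an average of its neighbors, so a single noisy photosite contaminates the color estimates of everything around it.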
And this brings us back to sensor sizes. If we have a very large sensor, its tolerance to noise will be higher, either through higher resolution or through increased pixel pitch. This, together with the precision of the optics that deliver light to the sensor, is what ultimately dictates image quality. So, when you go to choose a camera and sensor format, the main questions you have to ask yourself, at least as far as image quality goes, are:

- when is the difference big enough to warrant a larger sensor, and
- is my lens good enough?

To help answer these questions, I have prepared a number of test photos. Here is the same scene shot with the D800 (FF), K-50 (APS-C), and Q7 (1/1.7" compact size):
1.
2.
3.
Can you guess which is which? It will probably be very difficult to tell the difference between the D800 and the K-50 shots, though you might be able to spot a little less detail in the Q7 shot, even at this web resolution. The answer is given in gray below.
#1 is from the K-50, #2 is from the Q7, and #3 is from the D800.
Now, let's zoom in by about 2x.
1.
2.
3.
Is the difference easier to spot now? It should be, but try to guess again!
#1 is from the K-50, #2 is from the D800, and #3 is from the Q7.
Finally, let's zoom in to the limit of the Q7's resolution.
1.
2.
3.
Here, it should be obvious that the first photo is from the D800 while the last is from the Q7. The larger the sensor, the better the colors and the more detail you see.
All the photos above were taken at each camera's lowest sensitivity setting, meaning that their respective image quality was at its absolute best. But what happens as you crank up the ISO? Read on for some examples.