Originally posted by Rondec: "The big benefit with higher megapixels is increasing print size (and, I suppose, cropping ability), but most of us aren't printing huge prints any more, so that is sort of wasted."
Contrary to popular opinion, higher pixel density improves image quality in two ways:
1) absence of aliasing (false colors and irregular, jagged edges)
2) noise management: noise can be reduced with less impact on image detail
I am not concerned with noise management.
But there is a real problem with sensors that lack an optical low-pass filter and have insufficient pixel density (such as the Z7, A7R III, GFX 50 and, to a lesser extent, the K-1): when up-scaling, e.g. with Gigapixel AI or Sharpen AI, pixel-level artifacts are magnified.
This may seem counter-intuitive, but GFX 50S files, for example, don't scale well because of aliasing at the pixel level in the original file, and AI sharpening applied to Z7 files looks atrocious. The 40-megapixel X-H2, on the other hand, up-scales and sharpens very well with Sharpen AI, and after up-scaling and sharpening, A7R V files look better than GFX 50 files. Simply put, when re-sampling images, software doesn't quite know what to do with edge and color artifacts: they are either interpolated with adjacent pixels when down-sampling, or treated as real texture when up-sampling.
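As a toy one-dimensional illustration (my own sketch, not anything from the cameras or the Topaz tools mentioned above): take an abrupt "hard" edge, as an OLPF-less sensor would record it, and a "soft" edge pre-blurred by an optical low-pass filter, and upscale both 4x with plain linear interpolation. The stair-step in the hard edge survives the upscale at full strength, which is exactly the kind of structure AI sharpeners then amplify as if it were texture.

```python
import numpy as np

# Hypothetical 1-D edges: "hard" = no OLPF (abrupt transition at pixel level),
# "soft" = edge pre-filtered below the pixel pitch.
hard = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
soft = np.array([0.0, 0.0, 0.25, 0.75, 1.0, 1.0])

x = np.arange(len(hard))
xu = np.linspace(0, len(hard) - 1, 4 * (len(hard) - 1) + 1)  # 4x denser grid

# Upscale both edges by linear interpolation.
hard_up = np.interp(xu, x, hard)
soft_up = np.interp(xu, x, soft)

# Largest per-step jump after upscaling (a crude proxy for stair-stepping):
print(np.diff(hard_up).max())  # 0.25 -- the abrupt edge stays twice as steep
print(np.diff(soft_up).max())  # 0.125 -- the pre-filtered edge scales smoothly
```

The upscaler cannot distinguish the pixel-level step from genuine detail, so the artifact is simply carried up in scale.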
It is counter-intuitive, but the best image fidelity is achieved when the sensor out-resolves the lens. Most people will find such images soft when zooming to 100%, but it is precisely when images are soft at the pixel level that they are free from false detail.
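This is just the Nyquist sampling limit at work. A quick numerical sketch (my own example, using arbitrary frequencies): a lens-resolved detail at 0.9 cycles per pixel, sampled by a sensor with no optical low-pass filter, is indistinguishable from a false detail at 0.1 cycles per pixel, because anything above the Nyquist limit of 0.5 cycles per sample folds back as a lower "alias" frequency.

```python
import numpy as np

f_true = 0.9       # detail frequency in cycles per pixel (above Nyquist = 0.5)
n = np.arange(32)  # pixel (sample) indices

# What the sensor records when sampling the fine detail:
samples = np.cos(2 * np.pi * f_true * n)

# The alias folds down to |0.9 - 1| = 0.1 cycles per pixel:
alias = np.cos(2 * np.pi * 0.1 * n)

# The two sample sets are identical -- the sensor cannot tell them apart.
print(np.allclose(samples, alias))  # True
```

A sensor that out-resolves the lens keeps all real detail below Nyquist, so no such false pattern can be recorded in the first place.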
Pentax is very aware of this and provides two ways to correct the problem: the AA filter simulator for shutter speeds slower than 1/1000 s, or pixel shift for static subjects. With over 60 megapixels on full frame, neither AA simulation nor pixel shift would be needed anymore, and every single image taken would be free of motion artifacts.