So what empirical evidence is there that these differences are significant to the human eye?
If there are areas of significance, what are the parameters that would define them?
The obvious criticism of this type of graph is that 8 bit images can be quite acceptable. Dynamic range is determined more by the DR capability of the sensor than by the number of bits in the capture. My ZS100 will never have the DR of my K-1 or K-3, no matter how many bits the output file has.
Looking at the top graph, the obvious (to me) question is: how does a K-1 achieve a measured 14.9 stop DR from a 14 bit sensor? The hypothesis is disproved by actual scientific measurements.
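There is a plausible resolution of that apparent paradox, for what it's worth: if the 14.9 stop figure is a normalized "print" DR (DxOMark-style, scaled to a reference output size), downsampling averages away noise, and the measured DR can legitimately exceed the per-pixel bit depth. A rough sketch of the arithmetic; the 8 MP reference and the roughly 13.8 stop per-pixel DR are my assumptions, not published specs:

```python
import math

# Downsampling an N MP image to a reference size averages noise,
# gaining roughly 0.5 * log2(N / N_ref) stops of measured DR over
# the per-pixel ("screen") figure. Illustrative numbers only.
sensor_mp = 36.0      # K-1 resolution
reference_mp = 8.0    # assumed normalization target
per_pixel_dr = 13.8   # assumed per-pixel DR in stops

gain = 0.5 * math.log2(sensor_mp / reference_mp)              # ~1.08 stops
print(f"downsampling gain: {gain:.2f} stops")
print(f"normalized DR:     {per_pixel_dr + gain:.1f} stops")  # ~14.9
```

Which would put a 14.9 stop headline figure from a 14 bit file well within reach, and reinforces the point that measured DR is about the sensor and the measurement protocol, not the bit count.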
It all depends on this:
Quote: Migrating to 16 bit would create much more usable and manipulatable data. This would be nice in the future, and would incentivise me to spend, rather than greater MP counts.
That's a pretty bold hypothesis. It will definitely need empirical evidence before I buy into it.
Not everything that can be graphed amounts to useful, significant data.
14 bit (36 MP)
12 bit (12 MP)
I'm going to venture that there is so little difference between 12 bit and 14 bit that it makes absolutely no difference to anything.
And the corollary to that: going from 14 bit to 16 bit will provide even less benefit, if any.
The difference in these test shots is more about 12 MP vs 36 MP. Any DR differences are minor; the sketch below suggests why.
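To put a number on the 12 bit vs 14 bit claim, here is a toy simulation (my own sketch, with an assumed read noise of about three 12-bit steps, a made-up but plausible figure): once the ADC step is finer than the sensor noise, the extra bits mostly digitize noise.

```python
import numpy as np

# Quantize the same noisy tonal ramp at 12 and 14 bits and compare
# the total error against the clean signal. The noise level is an
# assumption, not a measurement of any particular camera.
rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 100_000)                  # clean ramp, full scale = 1
noisy = signal + rng.normal(0.0, 3 / 4096, signal.size)  # ~3 DN read noise on 12-bit scale

for bits in (12, 14):
    levels = 2 ** bits
    q = np.round(np.clip(noisy, 0.0, 1.0) * (levels - 1)) / (levels - 1)
    err = np.sqrt(np.mean((q - signal) ** 2))  # total RMS error vs clean signal
    print(f"{bits}-bit total RMS error: {err:.3e}")
```

Run it and the two numbers come out essentially identical: with noise at roughly three 12-bit steps, the total error is dominated by the noise, not the quantization. The same logic would apply, a fortiori, to 14 bit vs 16 bit.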
But I'm open to any real-world proof-of-concept photos anyone cares to share.
The assumption that camera makers didn't decide 12 bit was sufficient, but instead went to 14 bit as a bit of redundant capacity (making 16 bit just more redundant capacity), is unwarranted.
The notion that 16 bit would provide DR the sensor doesn't support is just silly.
And the benefits of 16 bit images are unsupported by empirical data.
The "more is always better" folks seem to think their reference for more of whatever somehow translates into better. Even too much water and there things critical to life can kill you. More is not always better. That needs to be established with sm kind o data. Speculative graphs based on unknown data samples (assuming there were any data) and this graph isn't just a hypothesis) don't count as data. Real world applications count as data. You can't make an image with a graph. We need t be able to evaluate the rigour of the graph maker.
Bottom line: once optimal is achieved, "improvements" after that are by definition less than optimal. I see little evidence that 12-14 bit isn't optimal. But hey, convince me with images. I'll be happy to change my mind.
What I'm saying is that it looks like a bunch of pseudoscientific gobbledygook to me.
What seriously needs to be addressed by the "more is always better" crowd is the hypothesis that competing constraints (A/D conversion, bit depth, human perception, etc.) converge on an optimum, and that going above or below those settings produces a decrease in performance. When you propose something like 16 bit depth, you really have to demonstrate in the real world that it's even useful, never mind optimal. Adding another 5 horsepower to a 395 horsepower engine does practically nothing in terms of power production, but 400 sounds a lot better than 395 to a prospective purchaser. Not all "bigger is better" is about performance.
My guess is that those currently selling 16 bit cameras are using it as a marketing hook to justify an overpriced product.