Originally posted by kadajawi: I don't think you need something like OpenGL to protect their IP. Just a simple API with commands that can, for example, demand the raw data from the sensor, or tell the sensor to move by a certain amount.
Well, what I mean by that is actually controlling the low-level process of reading out the sensor, in the same fashion that shader languages (GLSL) control the rendering of the actual pixels in an image. What I want is not just to demand raw data from the sensor - that's the equivalent of an OpenGL rendering call - but to specify the set of processing operations, pixel by pixel, by which that raw data is read out: the equivalent of a GLSL shader.
In other words: what we can do right now is ask for a RAW file. I want to be able to control how that RAW file is generated. To protect the company's intellectual property (the low-level implementation details of the sensor), we come up with some generic language, and the image processor implements the commands that language allows. Physical operations are controlled by signal pins from the image processor (as now, except on a per-pixel basis), and by interconnecting them via an FPGA (for setting the overall signal path).
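Purely as an illustration of the idea, here's a toy sketch of what a per-pixel "readout kernel" could look like. Every name here is invented (no real camera exposes such an API), and a real implementation would be firmware running on the image processor, not Python - this just shows the shader-like shape of it:

```python
# Hypothetical sketch: a per-pixel "readout kernel", analogous to a GLSL
# fragment shader, run by the image processor as each sample is read.
# All names are invented for illustration; no real camera exposes this.

def default_kernel(raw, row, col, params):
    """Pass-through: emit the raw sample unchanged (the stock behavior)."""
    return raw

def gain_kernel(raw, row, col, params):
    """Apply a per-row digital gain (e.g. boosting alternate rows)."""
    gain = params["even_gain"] if row % 2 == 0 else params["odd_gain"]
    return min(raw * gain, params["clip"])  # clip at ADC full scale

def read_sensor(sensor, kernel, params):
    """Simulated readout: the processor runs `kernel` on every sample."""
    return [
        [kernel(sample, r, c, params) for c, sample in enumerate(line)]
        for r, line in enumerate(sensor)
    ]

sensor = [[100, 200], [100, 200]]  # fake 2x2 sensor data
params = {"even_gain": 1, "odd_gain": 4, "clip": 4095}
print(read_sensor(sensor, gain_kernel, params))
# [[100, 200], [400, 800]]
```

The point is that the camera maker only has to document the kernel language and its parameters, not the sensor's internals - the same way GPU vendors expose GLSL without exposing their silicon.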
Some things just aren't possible to alter once the data has been read out from the sensor (e.g. the analog amplifier's gain setting). Some things (dark current, etc.) vary over time and can't really be recovered after the fact. Right now camera companies use "general" approaches to these things, and I don't think those are necessarily the best approach for every single situation. Perhaps you can outright do better than the defaults in certain situations, or trade off certain aspects of the sensor's performance to boost others. I'm not even convinced the defaults are "the best" overall - there's certainly no "right way" to render an image, so I don't see why there would be a "right way" to capture one.
For an example of what can be accomplished with this, check out Magic Lantern's "Dual ISO" feature. By reading out alternate lines of the sensor at different ISOs, it boosts dynamic range by about 3 stops, at the cost of half the vertical resolution. Right now it's an awkward hack layered on top of the stock firmware. What would be possible with much finer-grained control, or by tweaking other operations/parameters in the readout process?
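To make the Dual ISO idea concrete, here's a toy numerical model (this is my simplification, not Magic Lantern's actual algorithm): odd rows get 8x analog gain (+3 stops), which lifts shadows above the noise floor but clips highlights; merging prefers the gained row where it didn't clip, falling back to the base-ISO row where it did:

```python
# Toy model of Dual ISO (not Magic Lantern's real code): even rows at base
# ISO, odd rows at base + 3 stops (8x gain), then a half-height merge.

CLIP = 4095  # assumed 12-bit ADC full scale

def capture_dual_iso(scene, gain=8):
    """Simulate readout: odd rows get `gain`x amplification, clipped at CLIP."""
    return [
        [min(v * gain, CLIP) for v in row] if r % 2 else list(row)
        for r, row in enumerate(scene)
    ]

def merge(frame, gain=8):
    """Half-height merge: use the gained row (normalized) unless it clipped."""
    out = []
    for lo, hi in zip(frame[0::2], frame[1::2]):
        out.append([
            h / gain if h < CLIP else l  # gained value, or base-ISO fallback
            for l, h in zip(lo, hi)
        ])
    return out

scene = [[2, 1000], [2, 1000]]   # deep shadow next to a bright highlight
frame = capture_dual_iso(scene)  # [[2, 1000], [16, 4095]]
print(merge(frame))              # [[2.0, 1000]]
```

The gained row carries the shadow value at 8x the signal level (better shadow SNR), while the base row preserves the highlight the gained row blew out - the extra dynamic range comes from combining the two, and the halved vertical resolution is the zip of row pairs.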
Last edited by Paul MaudDib; 10-21-2014 at 04:16 PM.