See, I told you this was a whole 'nother thread.
Originally posted by falconeye Sorry Matt,
but this is entirely wrong. I guess you skipped the nastier parts of your quantum mechanics courses and therefore, you are excused
Well, you've got me there on my own education. However, in my job, I am literally surrounded (no, really, they're on every side of me!) by physics PhDs, and I floated the idea by several of them, and was assured that I'm on pretty solid ground.
Originally posted by falconeye A lens is not a sort of analog computer. Because there is no input data to work with. If you would try to obtain input data (by measuring the wavefront hitting the front lens) you would destroy any chance to obtain a result. And if you tried to insist, let me refresh your memory and say that you simply cannot measure both, location and phase, of a photon. Heisenberg and all the rest.
So, as I'm given to understand it, frequency and direction are not complementary properties in the quantum sense, which means Heisenberg isn't a problem from that point of view. There's some uncertainty, on the order of nanoseconds, about exactly when a photon is measured, but that's small enough that we don't care about it.
So the problem that remains is how exactly to gather that data. Specifically, you need to deal with making a coherent picture from incoherent light.
There are two basic approaches I can think of.
First, there's the pinhole aperture used in current applications of this sort of technology. That has some serious disadvantages: diffraction, and simply losing a whole lot of light because you're at f/200 or whatever. The composite result is brighter than f/200 would be because you have numerous pinholes, but it's still a problem. I'm not sure how far this approach can go with more advanced engineering, but I'll take your word for it that it's a dead end.
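To put a rough number on "brighter than f/200" (the pinhole count here is purely an illustrative assumption of mine, not anything from the original discussion):

```python
# Back-of-the-envelope light budget: total light gathered scales as
# 1/f_number^2, so K identical pinholes at f/200 collect roughly as much
# light as a single aperture at f/(200 / sqrt(K)).
import math

def equivalent_f_number(pinhole_f_number: float, num_pinholes: int) -> float:
    """f-number of a single aperture with the same total light gathering."""
    return pinhole_f_number / math.sqrt(num_pinholes)

print(equivalent_f_number(200, 1))       # 200.0 -- one pinhole, very dim
print(equivalent_f_number(200, 10_000))  # 2.0   -- 10,000 pinholes gather roughly f/2 worth of light
```

Of course, each individual pinhole image is still dim and diffraction-limited, which is the part that doesn't go away.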
So, the second idea: the front-element sensor is an array of small tubes, each pointing in a different but known direction, and each connected to one photosite. These tubes could also serve as the color filters, so you have direction and frequency.
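Just to make that concrete, here's a minimal sketch of what one sample from such a tube array might look like; the field names and units are my own assumptions, not any real format:

```python
# Hypothetical "extra-raw" sample from the tube-array front element described
# above. Each photosite knows where its tube sits, which way it points, and
# which passband it filters for.
from dataclasses import dataclass

@dataclass
class TubeSample:
    x_mm: float           # tube position on the front element
    y_mm: float
    dir_x: float          # unit vector of the tube's pointing direction
    dir_y: float
    dir_z: float
    wavelength_nm: float  # passband center, since the tube doubles as the color filter
    intensity: float      # exposure recorded at the photosite behind the tube

# A whole capture is then just a big list of these -- a set of rays, not a bitmap.
capture: list[TubeSample] = []
```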
Then it's a "simple" matter of deconvolving the input data to produce a photographic image. Unlike current raw files, these "extra-raw" files are nothing like a bitmap image -- information for a small portion of the frame is distributed across the whole dataset. By choosing parameters when calculating your final image, you could generate something like what a traditional lens of any given type would produce (as well as images that would be impossible to make traditionally).
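One way that reconstruction step could go (this is my own illustrative sketch, using the TubeSample fields from above and a simple shift-and-add scheme rather than a full deconvolution): treat each sample as a ray, follow it out to a virtual focus plane of your choosing, and accumulate it into an output grid. Picking a different focus distance, or weighting the rays differently, is what stands in for choosing a different "lens."

```python
import numpy as np

def render(samples, focus_mm, width_px, height_px, scale_px_per_mm):
    """Back-project each ray to a virtual plane at focus_mm and accumulate."""
    img = np.zeros((height_px, width_px))
    hits = np.zeros_like(img)
    for s in samples:
        if s.dir_z == 0:
            continue  # ray parallel to the focus plane, never reaches it
        # Follow the ray from the tube's position along its pointing direction
        # until it crosses the chosen focus plane.
        t = focus_mm / s.dir_z
        x = s.x_mm + t * s.dir_x
        y = s.y_mm + t * s.dir_y
        col = int(x * scale_px_per_mm) + width_px // 2
        row = int(y * scale_px_per_mm) + height_px // 2
        if 0 <= row < height_px and 0 <= col < width_px:
            img[row, col] += s.intensity
            hits[row, col] += 1
    return img / np.maximum(hits, 1)  # average the rays landing in each pixel
```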
Now, by making your photosites direction-specific, you're losing a bunch of light -- but since that light hasn't had to pass through an aperture, you've got a lot more to work with in the first place.
There may be yet-unthought-of ways to gather the information in an even less lossy way.
Originally posted by falconeye BTW, another one who missed his quantum mechanics course was God. He tried your idea first (many pinhole lenses hooked up to a neural network computer, aka an insect's compound eye). But after hundreds of millions of years in frustration (as far as we know, his beard turned white because of this) about the bad image quality, he gave up and eventually, gave green light to the development of the lens (aka lens-bearing eye or normal eye). [Smiley intentionally left blank]
It may be worth noting that although you characterize this branch as given up on, there are certainly many orders of magnitude more creatures with compound eyes on the earth right now than there are with "normal" eyes.
But beyond that: doing this well requires considerable computing power. A lot of our brain is devoted to vision already (something like a quarter of the neocortex). Given those constraints, it makes sense to use an analog approach -- you don't want to have to think for a minute in order to see something. But that's not going to be a constraint on computing devices in the World Of The Future.