Not sure if this will generate any interest, but it might point to a new direction for the development of photographic imaging tools.
Combining the 3D point cloud generated by an infrared laser scanning system for z-values (the first-generation Kinect on the Xbox does this at 640x480, with roughly 1 cm depth resolution) with the high-resolution output of a quality 2D digital imaging sensor would allow the computational generation of selective DOF.
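To combine the two sources, the coarse depth map first has to be brought onto the high-resolution image grid. Here's a minimal sketch of that step, assuming the two views are already geometrically aligned (a real setup would also need a per-camera calibration and reprojection, which this skips); the function name and nearest-neighbor approach are just illustrative:

```python
import numpy as np

def upsample_depth(depth, out_h, out_w):
    """Nearest-neighbor upsample of a coarse depth map (e.g. a 640x480
    Kinect z-buffer) onto the high-resolution image grid.

    Assumes the depth camera and the imaging sensor see the scene from
    the same viewpoint; real registration needs calibration and warping.
    """
    in_h, in_w = depth.shape
    # map each output row/column back to its source row/column
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return depth[rows[:, None], cols[None, :]]
```

Nearest-neighbor keeps depth edges hard (no invented intermediate depths at object boundaries), which matters when the depth map later decides what gets blurred.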
For example, you could shoot a scene containing several different foreground and background elements at a narrow aperture (so that most of the elements fall within an acceptable DOF), combine that image with the depth information from the laser scanner, and then select which foreground and background elements in the scene should remain at a chosen level of focus. Additionally, you could use the depth data to artificially simulate, likely to good effect, the out-of-focus rendering characteristics of many different lenses.
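The selective-focus step described above amounts to blurring each pixel by an amount that grows with its distance from a chosen focal plane. A minimal sketch, using a simple box blur rather than a proper lens-shaped circle-of-confusion kernel (the function name, the linear radius ramp, and all parameters are illustrative assumptions, not an established algorithm):

```python
import numpy as np

def selective_dof(image, depth, focus_z, dof=0.5, max_blur=5):
    """Simulate selective DOF on a grayscale float image.

    Pixels whose depth lies within `dof` of `focus_z` stay sharp; beyond
    that, the blur radius grows linearly with distance from the focal
    plane, capped at `max_blur`. `depth` is in the same units as
    `focus_z`. A real renderer would use a lens-shaped kernel and
    handle occlusion at depth edges; this is just a proof of concept.
    """
    h, w = image.shape
    # per-pixel blur radius from distance to the focal plane
    radius = np.clip((np.abs(depth - focus_z) - dof) / dof, 0, None)
    radius = np.minimum(radius.astype(int), max_blur)
    # pad once so windows at the image border are well-defined
    pad = max_blur
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            win = padded[y + pad - r:y + pad + r + 1,
                         x + pad - r:x + pad + r + 1]
            out[y, x] = win.mean()
    return out
```

Simulating a particular lens's bokeh would then mostly be a matter of swapping the box window for that lens's aperture-shaped kernel.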
There are, of course, limitations to this technique.
Here is a generalized proof of concept for scene mapping. All of the elements needed for selective DOF are in place:
Last edited by spade111; 03-05-2014 at 10:24 AM.