Originally posted by brofkand DSLRs do not have any of those deficits. I do not see a need to apply "computational photography" to DSLRs.
I think it's inevitable, and it will open up all kinds of possibilities. For example, the new Pixel phone keeps a running buffer of 15 images that it uses for things like exposure stacking, which gives its relatively modest, small-aperture lens the equivalent light gathering of a much larger one, and it does this transparently at the touch of the shutter button. All of the computational magic happens in the background.
Do that with a DSLR sensor and lens and you could have the equivalent of a physically unobtainable lens/camera combination. Your 50mm f/1.4 could have the light-gathering capability of an f/0.2 lens while keeping usable depth of field, a combination no single physical lens can deliver. Sure, you could roughly approximate this with your DSLR by stacking multiple exposures, but it would take a lot of post-processing. Phones today already do this with zero work in post.
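For what it's worth, the core of exposure stacking is just averaging an aligned burst of frames: random sensor noise drops by roughly the square root of the frame count, so a 15-frame burst behaves like a much longer single exposure without the motion blur. Here's a minimal sketch with NumPy and synthetic data (the scene, noise level, and frame count are all made up for illustration; a real pipeline would also have to align the frames first):

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_exposures(frames):
    """Average a burst of aligned frames.

    Averaging N frames cuts random sensor noise by roughly sqrt(N),
    which is the essence of burst-mode exposure stacking. This assumes
    the frames are already registered (aligned) to each other.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0)

# Synthetic 15-frame burst: a flat grey scene plus per-frame read noise.
scene = np.full((64, 64), 100.0)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(15)]

stacked = stack_exposures(burst)
print(np.std(burst[0] - scene))  # noise of a single frame, ~10
print(np.std(stacked - scene))   # roughly 10 / sqrt(15)
```

The hard part on a real camera isn't the averaging, it's the alignment and the handling of moving subjects, which is exactly where the phone makers have poured their engineering effort.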