Thanks for the reply. I know you already covered the C>P issue, understood.
And I agree there is more potential in CDAF due to smart image processing. Also, higher camera resolutions demand greater optical effort in the PDAF relay optics, which may approach economic limits in the future.
The additional effective aperture limit of the AF system is a good point which I had forgotten about.
However, perhaps 1/5 sensor patch width is a bit pessimistic, judging by PDAF sensors with high low-light sensitivity like the 1DX or 6D:
http://cdn1.mos.techradar.futurecdn.net//art/cameras/Canon/1DX/Canon_EOS_1DX...or-580-100.jpg http://www.phreekz.de/wordpress/wp-content/uploads/2012/09/5DII_6D_AF_Sensor.jpg
Also, the QE of monochromatic pixels can be on the order of 90%, whereas an averaged Bayer unit of 4 pixels is probably around 30% in practice. This can provide an additional benefit (if not a drastic one, as you noted) when reading the same area of the image plane.
So, being more optimistic, we can maybe reduce your 1/10 PDAF light intensity disadvantage.
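For concreteness, here is the rough light-budget arithmetic in a short sketch. All numbers (patch geometry, AF mirror transmission, QE figures) are the ballpark values quoted in this thread, not measurements:

```python
# Rough PDAF vs. CDAF light-budget arithmetic.
# All numbers are the ballpark figures from this discussion, not measurements.

taking_f_number = 2.8    # lens wide open at F/2.8: CDAF sees the full exit pupil
patch_f_number = 28.0    # one PDAF pupil patch: F/5.6 baseline * 1/5 relative width

# Treating the patch and the CDAF region of interest as strips of equal height
# across the pupil, collected light scales with strip width, i.e. with 1/F-number.
# Each PDAF phase image is built from a single patch.
geometry_ratio = taking_f_number / patch_f_number        # ~0.1

af_mirror_transmission = 0.3   # assumed share of light reaching the AF module
mono_vs_bayer_qe = 0.9 / 0.3   # monochrome ~90% QE vs. averaged Bayer unit ~30%

# The mirror loss and the monochrome QE gain roughly cancel (0.3 * 3 ~ 1),
# leaving the ~1/10 geometric disadvantage largely intact.
pdaf_vs_cdaf = geometry_ratio * af_mirror_transmission * mono_vs_bayer_qe
print(f"PDAF gathers roughly {pdaf_vs_cdaf:.2f}x the light of CDAF at F/2.8")
```

A wider patch (less pessimistic than 1/5) would raise `geometry_ratio` directly, which is the point above.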
Additionally there are other possible specific challenges associated with CDAF:
- Potential delays in AF acquisition introduced by an incorrect initial guess of the focusing direction, which leads to greater defocus.
- Large defocus that requires significant lens travel before a practically measurable contrast difference is registered (such as in telephoto or large-aperture lenses). This can compound the previous issue.
- Limits on main sensor readout speed at high resolution. Line skipping may be needed, reducing effective CDAF exposure per unit area and resolution, although the PDAF sensor is at a resolution disadvantage here anyway due to its large pixels.
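The first two points can be illustrated with a toy hill-climb model of CDAF; the contrast curve, step sizes, and convergence threshold below are invented purely for illustration, not taken from any real AF implementation:

```python
# Toy CDAF hill-climb over lens position, maximizing a synthetic contrast curve.
# The contrast function, step sizes, and threshold are invented for illustration.

def contrast(pos, focus=50.0):
    # Single-peaked stand-in for "measured contrast vs. lens position".
    # Note how flat it is far from focus: large defocus gives tiny differences.
    return 1.0 / (1.0 + (pos - focus) ** 2)

def steps_to_peak(start, direction, step=2.0, focus=50.0):
    """Count contrast measurements until the search step shrinks below 0.1."""
    pos = start
    prev = contrast(pos, focus)
    steps = 0
    while steps < 1000:
        nxt = pos + direction * step
        c = contrast(nxt, focus)
        steps += 1
        if c < prev:
            # Contrast dropped: we passed the peak (or guessed the wrong
            # direction on the very first step), so reverse and refine.
            direction = -direction
            step /= 2.0
            if step < 0.1:
                return pos, steps
        else:
            pos, prev = nxt, c
    return pos, steps

print(steps_to_peak(40.0, +1))  # correct initial direction guess
print(steps_to_peak(40.0, -1))  # wrong initial guess: extra steps to converge
```

In this toy model a wrong initial direction costs extra measurement cycles, and in the flat tail of the curve each step changes the measured contrast only slightly, which is where real-world noise would hurt most.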
Anyway, on the whole I agree with you that CDAF will probably be the superior approach in the future, once the image processing algorithms become good enough to minimize the above disadvantages. Perhaps the rapidly developing depth map imaging field has potential to help with tracking too.
BTW, why use the term "sensel"? It does not seem to be a common term; "pixel" seems perfectly good.
Originally posted by falconeye Again, I already covered this, summarized by C>P (you may want to go back and read my post again).
The S/N advantage of CDAF doesn't come from the light losses in AF mirrors etc. It comes from the fact that PDAF uses only two very small patches (long but narrow) "cut out" from the exit pupil (I already said so). The loss due to the AF mirror comes extra. And larger sensels do not help.
I don't know, but I'll do you the favour of estimating now: the patches are approx. F/5.6 apart and have maybe 1/5 width compared to their distance (probably less, for better phase separation), which makes them F/28 wide (the real reason why they can be refocussed in an out-of-focus plane). Assuming the same height as a CDAF region of interest (which can be made larger), PDAF then uses roughly 1/10 the light of CDAF at F/2.8. Subtract the AF mirror, add back the monochromatic sensels, and we roughly stay at that.
I have no way to predict the near future. But even w/o resorting to making use of lens aberrations, the potential of CDAF exceeds that of PDAF. PDAF throws away too much light to get into a simpler algorithmic domain.