
Confusion about the principle of the on-sensor PDAF technique

Photography Asked on May 2, 2021

There are many diagrams on the internet illustrating the principle of phase-detection autofocus (PDAF), such as this one:
https://www.androidauthority.com/how-pdaf-works-1102272/
[Figure: on-sensor PDAF principle diagram from the article above]

The simplest way to understand how PDAF works is to start by thinking about light passing the camera lens at the very extreme edges. When in perfect focus, light from even these extremes of the lens will refract back to meet at an exact point on the camera sensor.
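To see that convergence in numbers, here is a minimal Python sketch (focal length, subject distance, and aperture radius are made-up values) of the thin-lens relation 1/f = 1/d_o + 1/d_i and of the gap left on the sensor between rays from opposite edges of the lens when the sensor is not at the convergence point:

    # Minimal sketch with hypothetical numbers: where rays from the two
    # edges of the lens land, using the thin-lens equation 1/f = 1/d_o + 1/d_i.

    def image_distance(f_mm, object_mm):
        """Distance behind the lens at which the object comes to focus."""
        return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

    def edge_ray_separation(f_mm, object_mm, sensor_mm, aperture_radius_mm):
        """Gap on the sensor between rays from opposite edges of the lens.

        Zero means both edge rays meet at one point: perfect focus. The
        sign indicates front- vs. back-focus, i.e. which way to move the lens.
        """
        d_i = image_distance(f_mm, object_mm)
        # Each edge ray travels from height +/-r at the lens toward the
        # convergence point at d_i; at the sensor plane it sits at height
        # +/- r * (d_i - sensor) / d_i, so the gap is twice that.
        return 2.0 * aperture_radius_mm * (d_i - sensor_mm) / d_i

    f, subject, r = 50.0, 2000.0, 12.5                # 50 mm lens at f/2, subject at 2 m
    print(image_distance(f, subject))                 # ~51.28 mm behind the lens
    print(edge_ray_separation(f, subject, 51.28, r))  # ~0 mm: in focus
    print(edge_ray_separation(f, subject, 50.00, r))  # ~0.63 mm gap: defocused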

[Figure: light rays from the lens edges converging to a point on the sensor]

https://photographylife.com/how-phase-detection-autofocus-works

When the light reaches these two sensors, if an object is in focus, light rays from the extreme sides of the lens converge right in the center of each sensor (like they would on an image sensor). Both sensors would have identical images on them, indicating that the object is indeed in perfect focus.
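To make that comparison concrete, here is a small Python sketch (the edge profile and shift values are invented) of how the displacement between the two AF images can be estimated by cross-correlation; a result of zero corresponds to the "identical images" case described above:

    import numpy as np

    def step_edge(position, length=40):
        """Hypothetical 1-D intensity profile: a vertical line of contrast."""
        profile = np.zeros(length)
        profile[position:] = 1.0
        return profile

    def estimate_phase_shift(left, right):
        """Shift (in samples) that best aligns the two AF images.

        0 means the two images coincide, i.e. the subject is in focus;
        the sign says which way to drive the lens, the magnitude how far.
        """
        left = left - left.mean()
        right = right - right.mean()
        corr = np.correlate(left, right, mode="full")
        return int(np.argmax(corr)) - (len(right) - 1)

    print(estimate_phase_shift(step_edge(20), step_edge(20)))  # 0 -> in focus
    print(estimate_phase_shift(step_edge(23), step_edge(17)))  # 6 -> defocused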

For the on-sensor PDAF technique, the sensor contains many special pixels with an opaque mask over one half.
They may look like this:
[Figure: masked left-looking and right-looking PDAF pixels]

https://www.imaging-resource.com/PRODS/olympus-e-m1/ZTECH_PDAF_PIXELS.gif

The right-masked pixels and left-masked pixels are not adjacent.
How can the image formed by the left-masked pixels and the image formed by the right-masked pixels be identical when the object is in focus? According to the first figure, an object point should be imaged onto a single pixel location when the object is in focus.

2 Answers

Because PDAF, whether done using the main imaging sensor or using a dedicated PDAF array (as in traditional SLR cameras with reflex and secondary mirrors), does not focus on a discrete point; it focuses on lines of contrast.

If a line of contrast in a scene is vertical, then two different masked photosites¹, one looking left and the other looking right, that are in the same vertical column can measure slightly different parts of that line of contrast. If a line of contrast in a scene is horizontal, then two different masked photosites, one looking up and the other looking down, that are in the same horizontal row can measure slightly different parts of that line of contrast.
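A toy example may make that orientation requirement clearer. This hypothetical Python sketch builds a scene containing only a vertical line of contrast and checks which pair orientation can actually see it:

    import numpy as np

    # Hypothetical scene: a vertical line of contrast (dark left, bright right).
    scene = np.zeros((16, 16))
    scene[:, 8:] = 1.0

    def has_contrast(profile):
        """A masked pair can only measure a phase shift if the profile it
        samples actually varies, i.e. contains an edge."""
        return profile.max() - profile.min() > 0

    row = scene[8, :]  # what a left/right-looking pair samples, across the image
    col = scene[:, 8]  # what an up/down-looking pair samples, down the image

    print(has_contrast(row))  # True  -> vertical edge: the L/R pair can focus
    print(has_contrast(col))  # False -> no horizontal detail: the U/D pair is blind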

Note that your image of a theoretical sensor with masked photosites is a bit simplistic and primitive compared to the way most current cameras implement image-sensor-based phase-detect focusing. Canon's 'Dual Pixel CMOS AF', for example, masks no photosites; rather, it has two separate sub-sensels for each output pixel, with microlenses over them shaped so that in each pair one looks slightly right and the other looks slightly left. Approximately 80-90% of the photosites on Canon's sensors that offer 'DPAF' are dual; only the photosites at the extreme edges of the sensor are not split into two sub-sensels.
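As a rough illustration of the dual-photodiode idea (the layout and numbers below are hypothetical, not Canon's actual readout), summing each pixel's two sub-sensels yields the photograph, while reading them separately yields two half-aperture views that can be correlated for AF:

    import numpy as np

    rng = np.random.default_rng(0)
    left_subs = rng.random(100)          # left-looking sub-sensels along one row
    right_subs = np.roll(left_subs, 2)   # right-looking view, displaced by defocus

    # Imaging path: the two sub-sensels are simply summed per output pixel.
    image_pixels = left_subs + right_subs

    # AF path: correlate the two half-aperture views to find the defocus.
    l = left_subs - left_subs.mean()
    r = right_subs - right_subs.mean()
    shift = int(np.argmax(np.correlate(l, r, mode="full"))) - (len(r) - 1)
    print(shift)  # -2: magnitude = amount of defocus, sign = drive direction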

¹ a/k/a sensels or "pixel wells". Technically, sensors do not have pixels; digital images have pixels.

Answered by Michael C on May 2, 2021

How on-sensor phase detection is accomplished varies somewhat. It can be pixels whose microlenses are oriented in opposite directions. It can be multiple *pixels under a single microlens with a baffle between them. And it can be partially masked pixels (though that example image isn't very good, IMO).

And there are probably arrangements yet to be designed, or that I don't know about; but they all work based on the converging virtual images shown in your first two diagrams.

And you are right: however it is accomplished, when the left/right (or upper/lower) virtual images merge into a single focused image, there is no longer a phase difference. At that point a mirrorless camera must switch to contrast detection for any further changes/refinements. Of course, what counts as "focused" in this sense depends on the resolution of the sensor and the size of the PDAF points. E.g., if the two virtual images fall on two separate photodiodes that are binned as a single pixel in the output, they are maximally "in focus" (combined as a single image) even though some separation remains.
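As a sketch of that resolution limit (the pitch and separations are illustrative only), any separation smaller than one binned AF sample reads back as zero disparity, i.e. "in focus":

    def measured_disparity(separation_um, af_sample_pitch_um):
        """Disparity in whole AF samples; sub-sample separations vanish."""
        return round(separation_um / af_sample_pitch_um)

    pitch = 8.0  # hypothetical pitch of the binned AF samples, in micrometers
    for separation in (0.0, 3.0, 16.0):
        print(separation, "->", measured_disparity(separation, pitch))
    # 0.0 -> 0, 3.0 -> 0 (still "in focus" to PDAF), 16.0 -> 2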

And even if both virtual images are perfectly aligned, the camera can keep monitoring the PDAF focus points in order to detect when a phase difference reappears (and then correct for it).
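A hypothetical monitoring loop along those lines (the function names and rates are made up) might look like:

    import time

    def continuous_af(read_phase_shift, move_lens, frames=100, gain=0.5):
        """Keep reading the PDAF points even after focus is achieved, so a
        reappearing phase difference is corrected immediately."""
        for _ in range(frames):            # once per sensor readout
            shift = read_phase_shift()     # 0 while the subject stays put
            if shift != 0:
                move_lens(gain * shift)    # re-converge the virtual images
            time.sleep(1 / 30)             # hypothetical 30 fps AF rate

    shifts = iter([0, 0, 4, 2, 0, 0])      # subject moves, then settles
    continuous_af(lambda: next(shifts, 0), print, frames=6)  # prints 2.0, 1.0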

On-sensor PDAF is very much like using a split-prism viewfinder: with a split prism you can see when the left/right virtual images are not combined as a single image, you can see when they are (in focus), and you can see when they separate again. And likewise, on-sensor PDAF is very dependent on the light/image falling on it, just as the split prism is... this is all quite different from a DSLR's dedicated PDAF system (which uses multiple real images).

*Manufacturers are now starting to distinguish between photodiode/detector and pixel/picture element when multiple photosites are binned in the sensor output.

EDIT TO ADD: I found a good reference on the history of focusing in cameras. It covers the physics of using ground glass (a diffused focus screen), rangefinders, and split prisms (phase), and discusses their implementation in autofocus. https://www.pointsinfocus.com/learning/cameras-lenses/brief-history-focusing/

Answered by Steven Kersting on May 2, 2021
