Originally posted by Kobie "Subject recognition only works in Auto or Zone AF modes".
I still find it amusing that this sentence has to be written down. It is like saying "beware, water is wet".
If one understands that "subject recognition" is used to follow a pattern and then automatically activate/use AF points within the boundaries of the recognized subject outline, it is absolutely clear that this can only work in a mode where you have allowed the camera to choose from a very wide array of AF points.
Thinking the other way around: what if you restricted the AF points to "single select", put that point in the upper right corner, and subject recognition then detected a face on the left side? Basically, the camera's resulting thoughts would be: "OK, master, yes, I have detected a face, but no, I will not focus there, as you have explicitly told me to only use that single tiny AF point which you decided to put in the upper right corner."
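To make that logic explicit: the camera can only ever focus on the intersection of the detected subject area and the AF points you have allowed it to use. Here is a minimal Python sketch of that decision; the function name, grid coordinates, and values are all hypothetical illustrations, not anyone's actual firmware logic:

```python
# Minimal sketch of the AF-point selection logic described above.
# All names, coordinates, and the grid layout are hypothetical.

def usable_af_points(allowed_points, subject_outline):
    """Return the AF points the camera may actually drive.

    allowed_points  -- set of (x, y) grid positions the user permitted
                       (the whole grid in Auto/Zone, one point in single select)
    subject_outline -- set of (x, y) grid positions covered by the
                       recognized subject (e.g. a detected face)
    """
    return allowed_points & subject_outline

single_select = {(10, 0)}                    # one point, upper right corner
face = {(0, 3), (0, 4), (1, 3), (1, 4)}      # detected face, left side

print(usable_af_points(single_select, face))  # set() -> nothing to focus on
```

With the whole grid allowed, the intersection is simply the face itself, which is exactly why subject recognition only makes sense in Auto/Zone modes.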
Another comment:
My good old K-1 already has some basic pattern recognition, based on the metering sensor, when using the OVF. It is not great and the processing power is very limited, but it is there.
It's easy to prove. Select the "large select" AF area in AF.C, point the selected AF point at the eye of a face with the back AF button pressed, then very (!) slowly swivel the camera. The active AF point will stick (more or less) to the eye; more exactly, the camera will activate whichever AF points it currently detects the eye under.
The better the pattern-recognition software and the more processing power you have, the better this works.
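That kind of stickiness can be mimicked with a few lines of template matching: remember the image patch under the active AF point, then look for that patch again after the camera moves. A toy Python sketch of the idea follows; it is purely illustrative, since the K-1's actual firmware is of course not public:

```python
import numpy as np

# Toy template tracker: remember the patch under the active AF point,
# then find where that patch reappears after the camera swivels.

def find_patch(frame, patch):
    """Locate `patch` in `frame` by minimal sum of absolute differences."""
    ph, pw = patch.shape
    fh, fw = frame.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - ph + 1):
        for x in range(fw - pw + 1):
            sad = np.abs(frame[y:y + ph, x:x + pw] - patch).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(40, 60)).astype(float)
patch = scene[18:24, 28:34].copy()           # "eye" under the active AF point
swiveled = np.roll(scene, shift=5, axis=1)   # slow swivel: 5 px to the right

print(find_patch(swiveled, patch))  # (18, 33) -> activate the AF point there
```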
---------- Post added 23rd Sep 2021 at 09:10 ----------
Originally posted by clackers It is not. In the old cameras, the pattern under the focus point is registered and in real time, if that pattern appears under an adjacent point, the focus is shifted to that point.
I do have to object here (at least based on my understanding).
The AF sensors only see greyscale pixel lines and as such are completely unable to recognize anything I'd call a "pattern". They basically only identify the biggest jump in brightness (= a contrast edge) in that pixel line.
For one line-type AF point, which is actually made up of two separate physical sensor lines, the data looks something like this:
1: 222222224888888
2: 222488888888888
The numbers represent brightness values.
The contrast edge is where the values go from 2 via 4 to 8, and the focus is moved until the detections of both sensor lines match.
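As a toy illustration, here is what "matching both sensor detections" boils down to in Python, using the brightness readings from above. Real AF modules correlate the whole line pair rather than a single edge, so treat this as a sketch of the principle only:

```python
# Find the contrast edge in each of the two sensor lines and compare.
# Brightness values are the ones from the example above.

def edge_position(line):
    """Index of the biggest brightness jump (the contrast edge)."""
    jumps = [abs(b - a) for a, b in zip(line, line[1:])]
    return jumps.index(max(jumps))

line_1 = [2, 2, 2, 2, 2, 2, 2, 2, 4, 8, 8, 8, 8, 8, 8]
line_2 = [2, 2, 2, 4, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]

shift = edge_position(line_1) - edge_position(line_2)
print(shift)  # 5 -> out of focus; drive the lens until the shift reaches 0
```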
Old-style handover between multiple AF points was, to my understanding, just based on the assumption that the desired subject is always the closest thing, as unwanted background sits farther back. So basically, when the initial AF point suddenly lost the subject (= its measured distance suddenly got much bigger), the camera activated whichever AF points gave a reading similarly "close" to the initially/previously activated point's.
Or in other words: the camera always had a super-simplistic 3D map from the AF sensor readings (consisting of, say, 11 depth data points) and chose the AF point where the "closeness" peaked.
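A minimal Python sketch of that handover rule; the 11-point layout and all distance values are invented for illustration:

```python
# "Closeness handover": when the active AF point loses the subject,
# re-activate whichever point still reports a similar distance.

def hand_over(depths, last_subject_distance, tolerance=0.5):
    """Pick the AF point whose depth reading best matches the subject.

    depths -- one distance reading (in metres) per AF point
    """
    error, point = min((abs(d - last_subject_distance), i)
                       for i, d in enumerate(depths))
    return point if error <= tolerance else None

# Subject sat at ~2.1 m under point 2; that point suddenly reads 9 m of
# background, but point 7 still reports something equally close.
depths = [9.0, 9.1, 9.2, 9.0, 8.9, 9.0, 9.2, 2.1, 9.1, 9.0, 8.8]
print(hand_over(depths, last_subject_distance=2.1))  # 7
```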