
Apple Envisions Face & Presence Detection Security for iOS Devices


On December 29, 2011, the US Patent & Trademark Office published a patent application from Apple that reveals one of the next chapters for device security. In 2009, Apple’s presence detection patent first came to light in relation to future MacBooks. Then in November 2011, Apple revealed a heavy-duty 3D face and object recognition system that could be used for home and enterprise security applications. In today’s revelations, Apple introduces us to a more down-to-earth and practical security system for our portable devices. For simple home or personal use, the system could be set up to recognize your presence and face to quickly turn on your device. This would bypass the need to enter a password or even to touch the home button to get to your home screen. For use at work, the facial recognition system could be set to higher levels of security. All in all, it sounds like a very promising security system is in our future.

The Problems of Face Recognition Apple Seeks to Solve

Most face recognition systems fall into one of two categories. Systems in the first category tend to be robust, handling varied lighting conditions, orientations, scales and the like, but they are computationally expensive. Systems in the second category are specialized for security-type applications and work only under controlled lighting conditions.

Adopting first-category systems for face recognition on consumer-operated portable appliances equipped with a camera would unnecessarily consume an appliance’s computing resources and drain its power. Moreover, because consumer portable appliances tend to be used both indoors and outdoors, second-category systems for face recognition may be ineffective. Such ineffectiveness may be further exacerbated by the proximity of the user to the camera: small changes in the distance to and tilt of the appliance’s camera dramatically distort features, making the traditional biometrics used in security-type face recognition ineffective.

A Basic Overview of Apple’s Solution

One aspect of Apple’s invention could be implemented in methods performed by an image processor. These methods include processing a captured image of the face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model, where the reference model corresponds to a high-information portion of human faces. The methods further include comparing the processed captured image to at least one target profile corresponding to a user associated with the resource, and selectively recognizing the user seeking access to the resource based on the result of that comparison.
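To make that conform-compare-recognize flow concrete, here is a minimal Python sketch. The similarity measure, threshold, and profile structure are illustrative assumptions for this article, not anything disclosed in Apple’s application.

```python
import numpy as np

# Hypothetical sketch of the claimed flow: compare a normalized captured face
# against stored target profiles and recognize the user only if the best match
# clears a threshold. Names and the similarity measure are illustrative.

def similarity(normalized_face: np.ndarray, target_face: np.ndarray) -> float:
    """Toy similarity for images with pixel values in [0, 1]: 1.0 when identical."""
    diff = np.abs(normalized_face.astype(float) - target_face.astype(float))
    return 1.0 / (1.0 + diff.mean())

def recognize_user(normalized_face, target_profiles, threshold=0.8):
    """Return the name of the best-matching profile, or None to deny access."""
    best_name, best_score = None, 0.0
    for name, target_face in target_profiles.items():
        score = similarity(normalized_face, target_face)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Example: one stored profile and a captured image that has already been
# conformed to the reference model (normalization is sketched further below).
profiles = {"alice": np.full((64, 64), 0.50)}
captured = np.full((64, 64), 0.52)
print(recognize_user(captured, profiles))  # -> "alice"
```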

These and other implementations could include one or more of the following features. In some cases, the high-information portion includes the eyes and mouth; in other cases, it also includes the tip of the nose. Processing the captured image could include detecting a face within the captured image by identifying the eyes in the upper third of the captured image and the mouth in the lower third.
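As a rough illustration of that region heuristic, the sketch below accepts a face candidate only when eye and mouth coordinates (assumed to come from some upstream feature detector) land in the expected thirds of the frame. The function and parameter names are hypothetical.

```python
# Illustrative check only: this region test is not a complete face detector.
def plausible_face(image_height: int,
                   eye_rows: tuple[int, int],
                   mouth_row: int) -> bool:
    upper_third = image_height / 3.0
    lower_third_start = 2.0 * image_height / 3.0
    eyes_ok = all(row < upper_third for row in eye_rows)   # eyes in upper third
    mouth_ok = mouth_row >= lower_third_start               # mouth in lower third
    return eyes_ok and mouth_ok

# Example: a 300-pixel-tall crop with eyes near row 80 and a mouth near row 230.
print(plausible_face(300, eye_rows=(78, 82), mouth_row=230))  # True
```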

The reference model includes a reference image of a face, and processing the captured image could further include matching the eyes of the detected face with the eyes of the face in the reference image to obtain a normalized image of the detected face. Processing could additionally include vertically scaling the distance between the eyes-line and the mouth of the detected face to equal the corresponding distance for the face in the reference image, and matching the mouth of the detected face to the mouth of the face in the reference image, in order to obtain the normalized image of the detected face.
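A minimal sketch of that normalization step might look like the following, assuming the eye-to-eye and eyes-line-to-mouth distances have already been measured; the nearest-neighbor resampling and the specific landmark values are placeholders, not Apple’s method.

```python
import numpy as np

# Scale a detected face crop so its inter-eye distance and eyes-line-to-mouth
# distance match those of the reference face, roughly aligning eyes and mouth.
def normalize_face(face: np.ndarray,
                   eye_dist: float, eyes_to_mouth: float,
                   ref_eye_dist: float, ref_eyes_to_mouth: float) -> np.ndarray:
    h, w = face.shape[:2]
    scale_x = ref_eye_dist / eye_dist            # match inter-eye distance
    scale_y = ref_eyes_to_mouth / eyes_to_mouth  # match eyes-line-to-mouth distance
    new_h, new_w = int(round(h * scale_y)), int(round(w * scale_x))
    rows = np.clip((np.arange(new_h) / scale_y).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale_x).astype(int), 0, w - 1)
    return face[np.ix_(rows, cols)]              # nearest-neighbor resample

# Example: stretch a crop so its landmark spacing matches a reference spacing.
crop = np.random.rand(120, 100)
normalized = normalize_face(crop, eye_dist=40, eyes_to_mouth=50,
                            ref_eye_dist=48, ref_eyes_to_mouth=60)
print(normalized.shape)  # (144, 120)
```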

In some implementations, comparing the processed captured image could include obtaining a difference image of the detected face by subtracting the normalized image of the detected face from a normalized image of a target face associated with a target profile. Comparing could further include calculating scores for respective pixels of the difference image based on a weight defined according to the proximity of those pixels to the high-information portions of human faces. The weight decreases with distance from the high-information portions: for example, continuously, or discretely, or from a maximum weight value at the mouth level to a minimum value at the eyes-line.
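The weighted difference map could be approximated along these lines, assuming both faces are already normalized to a common size. The exponential falloff, the landmark positions, and the decay constant are illustrative choices, not values from the patent.

```python
import numpy as np

def weight_map(shape, landmarks, falloff=30.0):
    """Per-pixel weight that decreases continuously with distance to the nearest landmark."""
    rows, cols = np.indices(shape)
    dist = np.full(shape, np.inf)
    for (r, c) in landmarks:
        d = np.sqrt((rows - r) ** 2 + (cols - c) ** 2)
        dist = np.minimum(dist, d)
    return np.exp(-dist / falloff)

def match_score(test_face: np.ndarray, target_face: np.ndarray, weights) -> float:
    """Weighted mean absolute difference; lower scores indicate a closer match."""
    diff = np.abs(test_face.astype(float) - target_face.astype(float))
    return float((weights * diff).sum() / weights.sum())

# Example on 96x96 crops with assumed landmark positions (two eyes, one mouth).
landmarks = [(30, 32), (30, 64), (72, 48)]
w = weight_map((96, 96), landmarks)
print(match_score(np.random.rand(96, 96), np.random.rand(96, 96), w))
```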

In some implementations, processing the captured image could include applying an orange-distance filter to the captured image and segmenting a skin-tone orange portion of the filtered image to represent the likely presence of a face in front of the image capture device. Processing could further include determining changes in the area and location of the skin-tone orange portion relative to a previously captured image, representing likely movement of the face in front of the image capture device, and detecting a face within the skin-tone orange portion of the orange-distance filtered image when the determined changes are less than predetermined respective variations.
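A rough sketch of such an orange-distance segmentation and stability check might look like this; the reference skin-tone color and the thresholds are assumptions chosen purely for illustration.

```python
import numpy as np

SKIN_ORANGE = np.array([220.0, 160.0, 120.0])  # assumed RGB skin-tone reference

def skin_mask(rgb: np.ndarray, max_dist: float = 90.0) -> np.ndarray:
    """Keep pixels whose color lies close to the assumed skin-tone orange."""
    dist = np.linalg.norm(rgb.astype(float) - SKIN_ORANGE, axis=-1)
    return dist < max_dist

def segment_stats(mask: np.ndarray):
    """Area (pixel count) and centroid of the segmented skin-tone region."""
    area = int(mask.sum())
    if area == 0:
        return area, (0.0, 0.0)
    rows, cols = np.nonzero(mask)
    return area, (rows.mean(), cols.mean())

def face_is_stable(prev_mask, curr_mask, max_area_change=0.2, max_shift=15.0):
    """True if the region's area and centroid changed less than the allowed variations."""
    prev_area, prev_c = segment_stats(prev_mask)
    curr_area, curr_c = segment_stats(curr_mask)
    if prev_area == 0 or curr_area == 0:
        return False
    area_change = abs(curr_area - prev_area) / prev_area
    shift = np.hypot(curr_c[0] - prev_c[0], curr_c[1] - prev_c[1])
    return area_change < max_area_change and shift < max_shift

# Example: two consecutive frames of random pixels (stand-ins for camera frames).
frame_a = np.random.randint(0, 256, (120, 160, 3))
frame_b = np.random.randint(0, 256, (120, 160, 3))
print(face_is_stable(skin_mask(frame_a), skin_mask(frame_b)))
```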

The Advantages of Apple’s Invention

Particular implementations of the subject matter described in this specification could be configured to realize one or more of the following potential advantages. The techniques and systems disclosed could reduce the impact of lighting and emphasize skin variance. Because images are acquired with the appliance’s own image capture device, the approximate location and orientation of face features can be assumed in advance, avoiding the overhead of other face recognition systems. The disclosed methods could ignore face biometrics and instead use feature locations to normalize an image of a test face. Further, the face recognition techniques are based on a simple, weighted difference map rather than traditional (and computationally expensive) correlation matching.

Read more by Patently Apple