With the Core Image framework in iOS 5 and Lion 10.7, Apple introduces a really cool and easy-to-use face detection API. The face detection technology came to Apple through its acquisition, reportedly for over $22 million, of Polar Rose, a Swedish company known for its face-tagging picture service and facial recognition software. Face detection with the Core Image framework in iOS 5 looks pretty straightforward.
Let’s take a look at how it works.
The most important option for the CIDetector is the accuracy. Depending on the requirements, we need to find a compromise between detection accuracy and performance. There are two possible values for CIDetectorAccuracy: CIDetectorAccuracyLow and CIDetectorAccuracyHigh.
In applications where speed is crucial, for example when face detection is processed on a live video stream, it is recommended to use low accuracy, especially with a high-quality video stream. It is not recommended to use a resolution higher than 640×480 in AVCaptureSession together with face detection. With a still photo we can use high accuracy, as long as the device GPU is able to handle it in a reasonable time. For instance, the above photo was processed with low accuracy, and a face was detected on the hand drawing as well.
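For the live-video case, capping the capture resolution is a one-liner. A minimal sketch using AVFoundation (the session setup around it is omitted):

```objc
#import <AVFoundation/AVFoundation.h>

// Keep the capture stream at 640x480 so face detection can keep up.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
}
```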
Finally, we can create a detector:
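A minimal sketch in iOS 5-era Objective-C (the accuracy value is up to the app, per the trade-off above):

```objc
// High accuracy is fine for a still photo; use CIDetectorAccuracyLow for video.
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                    forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:options];
```

Passing nil for the context lets Core Image create one for us.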
It would be great if in the future Apple extended the scope of possible detections to, for example, hands, but today CIDetectorTypeFace is the only option to choose from.
Now we can use the newly created detector instance to detect features in the photo. In the Core Image framework we, of course, use CIImage as the base image class to work on. Let’s prepare a CIImage in the image picker’s callback method:
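A sketch of the standard UIImagePickerControllerDelegate callback; the iOS 5-era dismissal call is used here:

```objc
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Grab the picked photo and wrap it in a CIImage for Core Image.
    UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    CIImage *ciImage = [CIImage imageWithCGImage:pickedImage.CGImage];
    [picker dismissModalViewControllerAnimated:YES];
    // ... pass ciImage to the detector together with the detection options
}
```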
Next, we need to define the options for the feature detection. The most important option is the orientation (CIDetectorImageOrientation). We need to make sure that our detector uses the same orientation as the picture. The orientation takes the same values as kCGImagePropertyOrientation for a CGImageRef, which are not the same as UIImage’s orientation values. We can map one to the other in an easy way:
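A sketch of the mapping (the helper name is mine; the 1–8 values are the EXIF orientations that CIDetectorImageOrientation expects):

```objc
// Map UIImageOrientation to the EXIF/kCGImagePropertyOrientation values (1-8).
static int exifOrientationForUIImageOrientation(UIImageOrientation orientation)
{
    switch (orientation) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1;
}

// Usage: feed the mapped value to the detector via the options dictionary.
int exifOrientation = exifOrientationForUIImageOrientation(pickedImage.imageOrientation);
NSDictionary *detectorOptions =
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:exifOrientation]
                                forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:detectorOptions];
```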