Face detection in iOS 11

With iOS 11 we get a bunch of new shiny tools to play with, and one of them is the new Vision framework. The Vision framework allows developers to analyse images and video to identify faces, features and scenes. In this post, we’re going to take a quick look at how we can detect faces in an image using these new APIs and, guess what, it’s easy peasy.

You can download the sample project here if you want to run it yourself, or see the complete code at the bottom of this article. All of this code requires iOS 11 and Xcode 9 to be installed.

The three new Vision classes we’ll be interacting with are:

- VNImageRequestHandler
- VNDetectFaceRectanglesRequest
- VNFaceObservation

Set up the face detection request

The first thing to do is set up the request using VNImageRequestHandler and VNDetectFaceRectanglesRequest:
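Based on the walkthrough that follows, the setup might look something like this minimal sketch (the completion handler name `handleFaces` is an assumption, not from the original project):

```swift
import UIKit
import Vision

// The request, with a completion handler we'll define in the next step.
let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaces)

// Load the image we want to analyse and grab its CGImage backing.
guard let peopleImage = UIImage(named: "people"),
      let cgImage = peopleImage.cgImage else { return }

// The handler performs Vision requests on a single image.
let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? imageRequestHandler.perform([faceDetectionRequest])
```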

Let’s break this down.

In the code above we’re creating a UIImage instance of the image we want to analyse, in this case an image called “people”. Next we grab its cgImage property and, using our imageRequestHandler, we perform a VNRequest on that image. This can throw an error, so in production you might want to catch it, but for demonstration purposes we’re happy to ignore it with try?. The VNRequest in this case is our faceDetectionRequest, which is an instance of VNDetectFaceRectanglesRequest, which itself inherits from VNRequest.

The VNDetectFaceRectanglesRequest is documented quite simply by Apple as: “An image analysis request that finds faces within an image.”

Set up the detect face rectangles completion handler

The completion handler is quite simple; the only thing to mention here is that the observations returned in request.results will be an Array of VNFaceObservation objects. These objects contain information about the faces or facial features detected by the image analysis request we set up in the first step.
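A minimal sketch of what that handler might look like, assuming the function name `handleFaces` used when creating the request:

```swift
import Vision

// Called by Vision when the face rectangles request finishes.
func handleFaces(request: VNRequest, error: Error?) {
    // The results of a VNDetectFaceRectanglesRequest are VNFaceObservations.
    guard let observations = request.results as? [VNFaceObservation] else { return }
    print("Detected \(observations.count) face(s)")
}
```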

And that’s pretty much all there is to it.

Add a visual indicator of the detected faces

Let’s say we wanted to check whether the request we set up actually detected the faces in the image. We could check the count of request.results to see if it matches the number of faces we expect, or we could add a visual indicator onto the image we’re processing.

To do this, we can process the Array of VNFaceObservation objects and, using each one’s boundingBox property, transform it into a size relevant to our image, and then draw the results onto the screen. The code to do this is really simple, and we can achieve all we need by extending the functionality of VNFaceObservation.
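A sketch of what such an extension might look like. Vision’s boundingBox is normalised (0–1) with a bottom-left origin, so it needs to be scaled up and flipped into UIKit’s top-left coordinate space; the method name `boundingBox(for:)` here is an assumption:

```swift
import UIKit
import Vision

extension VNFaceObservation {
    // Convert the normalised, bottom-left-origin boundingBox into a
    // CGRect in UIKit coordinates for an image of the given size.
    func boundingBox(for imageSize: CGSize) -> CGRect {
        let width = boundingBox.width * imageSize.width
        let height = boundingBox.height * imageSize.height
        let x = boundingBox.origin.x * imageSize.width
        // Flip the y axis: Vision's origin is bottom-left, UIKit's is top-left.
        let y = (1 - boundingBox.origin.y) * imageSize.height - height
        return CGRect(x: x, y: y, width: width, height: height)
    }
}
```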

Thanks to @NilStack for this piece of code from this article.

Then we can update our completionHandler from earlier to this:
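A sketch of the updated handler, assuming `peopleImage` and an `imageView` outlet exist on the view controller (the coordinate flip is inlined here so the snippet stands on its own):

```swift
import UIKit
import Vision

func handleFaces(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNFaceObservation],
          let image = peopleImage else { return }

    // Redraw the original image and stroke a rectangle over each face.
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    image.draw(at: .zero)
    let context = UIGraphicsGetCurrentContext()
    context?.setStrokeColor(UIColor.red.cgColor)
    context?.setLineWidth(2)

    for observation in observations {
        // boundingBox is normalised with a bottom-left origin,
        // so scale it up and flip the y axis for UIKit.
        let box = observation.boundingBox
        let rect = CGRect(x: box.origin.x * image.size.width,
                          y: (1 - box.origin.y - box.height) * image.size.height,
                          width: box.width * image.size.width,
                          height: box.height * image.size.height)
        context?.stroke(rect)
    }

    imageView.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
```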

Note: This requires making peopleImage an instance variable and also having a UIImageView on screen.

The output will look something like this:

Image demonstrating results of face detection in iOS 11

Complete UIViewController code:
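A sketch of the complete view controller, assembled from the steps above; the outlet name, image name and handler name are assumptions:

```swift
import UIKit
import Vision

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!
    var peopleImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let image = UIImage(named: "people"),
              let cgImage = image.cgImage else { return }
        peopleImage = image

        // Build the face detection request and run it against our image.
        let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaces)
        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? imageRequestHandler.perform([faceDetectionRequest])
    }

    func handleFaces(request: VNRequest, error: Error?) {
        guard let observations = request.results as? [VNFaceObservation],
              let image = peopleImage else { return }

        // Redraw the image with a red outline around each detected face.
        UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
        image.draw(at: .zero)
        let context = UIGraphicsGetCurrentContext()
        context?.setStrokeColor(UIColor.red.cgColor)
        context?.setLineWidth(2)

        for observation in observations {
            // Scale the normalised boundingBox and flip its y axis for UIKit.
            let box = observation.boundingBox
            let rect = CGRect(x: box.origin.x * image.size.width,
                              y: (1 - box.origin.y - box.height) * image.size.height,
                              width: box.width * image.size.width,
                              height: box.height * image.size.height)
            context?.stroke(rect)
        }

        imageView.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}
```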

