How to loop through Set, Array and Dictionary in Swift

Hands up who loves Swift 🙌

Ok now that’s over, let’s take a look at the several ways we can loop through a Set, Array and a Dictionary in Swift. The approach you choose can be personal preference, related to the type of thing you’re doing, or perhaps the general code style of the application you’re working on.

Array

All of the below loops essentially do the same thing. The result is that they print the contents of “array” to the console in the expected order, i.e. “1”, “2”, “3”. Arrays are normally used when you’re working with ordered collections of elements.
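
The original snippets aren’t embedded here, but the usual approaches look something like this (a simple array of the strings “1”, “2” and “3” is assumed):

```swift
let array = ["1", "2", "3"]

// Plain for-in loop
for element in array {
    print(element)
}

// forEach with a closure
array.forEach { element in
    print(element)
}

// Loop over the indices when you also need the position
for index in array.indices {
    print(array[index])
}

// enumerated() gives you both the offset and the element
for (index, element) in array.enumerated() {
    print("\(index): \(element)")
}
```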

Set

A Set is a collection of unordered, unique elements. You mainly use a Set in three situations:

  1. When you want to test whether an element exists in an efficient manner.
  2. When you’re not bothered about the order of elements.
  3. When you need to ensure that an element only appears once in a collection.

It’s important to be aware of these things, as looping through a Set will not output elements in the same order as it does with an array. You can create a Set with any element that implements the Hashable protocol. In the examples below, we create a Set from an Array of Strings.
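
Something along these lines (the example values are placeholders):

```swift
// Building a Set from an Array of Strings removes any duplicates
let array = ["1", "2", "3", "3"]
let set = Set(array) // contains "1", "2", "3" in no particular order

// Iteration order is not defined
for element in set {
    print(element)
}

set.forEach { print($0) }

// Efficient membership test, which is what Sets are best at
print(set.contains("2")) // true
```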

If you like, you could extend Set to make getting elements by an Integer “index” a bit easier:
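
The original extension isn’t shown here, but a sketch along these lines (using Collection’s index(_:offsetBy:)) does the job; just remember the element at a given offset is arbitrary, because a Set is unordered:

```swift
extension Set {
    subscript(offset position: Int) -> Element {
        return self[index(startIndex, offsetBy: position)]
    }
}

let letters: Set = ["a", "b", "c"]
let someElement = letters[offset: 1] // an arbitrary element, not a stable "index"
```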

Dictionary

A Dictionary is a collection whose elements are key-value pairs. To get a value out of a Dictionary, you provide its key. The key must be a Hashable type, such as a String or an Int.

The general approach to looping through a Dictionary is similar to a Set. Because the elements are key-value pairs, their order is not defined, and as such, accessing them via a loop may not always give you the output you expect.
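
For example (the keys and values here are just made-up sample data):

```swift
let ages = ["Alice": 30, "Bob": 25, "Charlie": 35]

// Each element is a (key, value) tuple; the order you get them in is not defined
for (name, age) in ages {
    print("\(name) is \(age)")
}

// You can also loop over the keys or values on their own
for name in ages.keys {
    print(name)
}

for age in ages.values {
    print(age)
}
```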

Supporting the Files App in iOS 11 using Swift

In this video tutorial we cover how to support reading and writing files in your app, so that they appear in the native Files app in iOS 11. It’s not as complex as you might think!

Things covered:

  • UIFileSharingEnabled
  • LSSupportsOpeningDocumentsInPlace
  • UIDocumentPickerViewController

The completed Github project for this tutorial can be found here: https://github.com/timrichardsn/Supporting-Files-in-iOS-11
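
If you just want a flavour of the document picker part before watching, a minimal sketch of presenting one on iOS 11 looks something like this (the class and delegate wiring here are illustrative, not lifted from the project):

```swift
import UIKit
import MobileCoreServices

class DocumentsViewController: UIViewController, UIDocumentPickerDelegate {

    func presentDocumentPicker() {
        // iOS 11 initialiser: pass the UTIs you're interested in
        let picker = UIDocumentPickerViewController(documentTypes: [kUTTypeItem as String], in: .import)
        picker.delegate = self
        present(picker, animated: true)
    }

    // iOS 11 delegate callback with the URLs the user picked
    func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
        print("Picked: \(urls)")
    }
}
```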

Face detection in iOS 11

With iOS 11 we get a bunch of new shiny tools to play with, and one of them is the new Vision framework. The Vision framework allows developers to analyse images and video to identify faces, features and scenes. In this post, we’re going to take a quick look at how we can detect faces in an image using these new APIs and guess what, it’s easy peasy.

You can download the sample project here if you want to run it yourself, or see the complete code at the bottom of this article. All of this code requires iOS 11 and Xcode 9.

The 3 new Vision classes we’ll be interacting with are:

  • VNImageRequestHandler
  • VNDetectFaceRectanglesRequest
  • VNFaceObservation

Set up the face detection request

The first thing to do is setup the request using VNImageRequestHandler and VNDetectFaceRectanglesRequest:
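
The original embed isn’t included here; this sketch assumes an asset named “people” and a handleFaceDetection method that we define in the next step:

```swift
import UIKit
import Vision

// The image we want to analyse
let peopleImage = UIImage(named: "people")!

func detectFaces() {
    guard let cgImage = peopleImage.cgImage else { return }

    // The completion handler is defined in the next step
    let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaceDetection)
    let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])

    // perform(_:) can throw; in production you'd catch the error properly,
    // but for demonstration purposes we ignore it with try?
    try? imageRequestHandler.perform([faceDetectionRequest])
}
```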

Let’s break this down.

In the code above we’re creating a UIImage instance of the image we want to analyse, in this case an image called “people”. Next we grab its cgImage property and, using our imageRequestHandler, we perform a VNRequest on that image. perform(_:) can throw an error, so in production you’d want to catch this, but for demonstration purposes we’re happy to ignore it with try?. The VNRequest in this case is our faceDetectionRequest, which is an instance of VNDetectFaceRectanglesRequest, which itself inherits from VNRequest.

The VNDetectFaceRectanglesRequest is documented quite simply by Apple as: “An image analysis request that finds faces within an image.”.

Set up the detect face rectangles completion handler

The completion handler is quite simple. The only thing to mention here is that the observations returned in request.results will be an Array of VNFaceObservation objects. These objects contain information about the faces or facial features detected by the image analysis request we set up in the first step.
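
A minimal version of that handler (matching the handleFaceDetection name used above) could be:

```swift
func handleFaceDetection(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNFaceObservation] else { return }
    print("Detected \(observations.count) face(s)")
}
```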

And that’s pretty much all there is to it.

Add a visual indicator of the detected faces

Let’s say we wanted to check whether the request we set up actually detected the faces in the image. We could check the count of request.results to see if it matches the number of faces we expect, or we could add a visual indicator onto the image we’re processing.

To do this, we can process the Array of VNFaceObservation objects, use each one’s boundingBox property to transform it into coordinates relevant to our image, and then draw the results onto the screen. The code to do this is really simple, and we can achieve all we need by extending the functionality of VNFaceObservation.

Thanks to @NilStack for this piece of code from this article.
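
The original snippet isn’t embedded here, but an extension along those lines, converting the normalised boundingBox into UIKit coordinates for a given image size, might look like:

```swift
import UIKit
import Vision

extension VNFaceObservation {
    /// boundingBox is normalised (0...1) with its origin in the bottom-left,
    /// so we scale it up and flip the y-axis to get a UIKit-style CGRect.
    func boundingBox(for imageSize: CGSize) -> CGRect {
        let width = boundingBox.width * imageSize.width
        let height = boundingBox.height * imageSize.height
        let x = boundingBox.origin.x * imageSize.width
        let y = (1 - boundingBox.origin.y) * imageSize.height - height
        return CGRect(x: x, y: y, width: width, height: height)
    }
}
```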

Then we can update our completionHandler from earlier to this:
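
Again, a sketch rather than the original code: it draws the image into a graphics context, strokes each bounding box, and pushes the result to an assumed imageView outlet:

```swift
func handleFaceDetection(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNFaceObservation] else { return }

    UIGraphicsBeginImageContextWithOptions(peopleImage.size, false, 0)
    peopleImage.draw(at: .zero)

    let context = UIGraphicsGetCurrentContext()
    context?.setStrokeColor(UIColor.red.cgColor)
    context?.setLineWidth(4)

    for observation in observations {
        context?.stroke(observation.boundingBox(for: peopleImage.size))
    }

    let annotatedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    DispatchQueue.main.async {
        self.imageView.image = annotatedImage
    }
}
```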

Note: This requires making our peopleImage property an instance variable and also having a UIImageView on screen.

The output will look something like this:

Image demonstrating results of face detection in iOS 11

Complete UIViewController code:
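
The full listing isn’t embedded here either, but piecing together the snippets above, a complete (sketch) view controller could look like this:

```swift
import UIKit
import Vision

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    // An asset named "people" is assumed
    let peopleImage = UIImage(named: "people")!

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.image = peopleImage
        detectFaces()
    }

    func detectFaces() {
        guard let cgImage = peopleImage.cgImage else { return }

        let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaceDetection)
        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])

        // Ignoring the thrown error for demonstration purposes
        try? imageRequestHandler.perform([faceDetectionRequest])
    }

    func handleFaceDetection(request: VNRequest, error: Error?) {
        guard let observations = request.results as? [VNFaceObservation] else { return }

        // Draw the original image, then stroke a rectangle over each detected face
        UIGraphicsBeginImageContextWithOptions(peopleImage.size, false, 0)
        peopleImage.draw(at: .zero)

        let context = UIGraphicsGetCurrentContext()
        context?.setStrokeColor(UIColor.red.cgColor)
        context?.setLineWidth(4)

        observations.forEach { context?.stroke($0.boundingBox(for: peopleImage.size)) }

        let annotatedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        DispatchQueue.main.async {
            self.imageView.image = annotatedImage
        }
    }
}

extension VNFaceObservation {
    /// Converts the normalised, bottom-left-origin boundingBox into UIKit coordinates
    func boundingBox(for imageSize: CGSize) -> CGRect {
        let width = boundingBox.width * imageSize.width
        let height = boundingBox.height * imageSize.height
        let x = boundingBox.origin.x * imageSize.width
        let y = (1 - boundingBox.origin.y) * imageSize.height - height
        return CGRect(x: x, y: y, width: width, height: height)
    }
}
```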


ARKit Device Compatibility

With iOS 11 around the corner and the release of ARKit, maybe like me you’re starting to wonder if your device will be compatible. According to Apple, devices that have an A9 or A10 chip will be able to run ARKit. For those of us who don’t have this information memorised, let’s take a quick look at what devices this applies to.

Compatible ARKit Devices

  • iPhone 6s
  • iPhone 6s Plus
  • iPhone 7
  • iPhone 7 Plus
  • iPhone SE
  • iPad Pro (9.7, 10.5 or 12.9)
  • iPad (2017)

So if you want to run ARKit apps when iOS 11 is released, but don’t already own one of the above devices, it might be worth considering an upgrade or waiting for the iPhone 8 (fall 2017).
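
If you’d rather not rely on a hard-coded device list, you can also ask ARKit at runtime whether the current device supports world tracking:

```swift
import ARKit

if ARWorldTrackingConfiguration.isSupported {
    // A9 or newer: safe to run a world-tracking AR session
} else {
    // Fall back to a non-AR experience
}
```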

As a side note, people are already creating some awesome apps with ARKit. Check out http://www.madewitharkit.com if you want to see more!


Top iOS Newsletters

For a while now, I’ve looked to iOS newsletters for my weekly fix of what’s new and shiny in the magic world of iOS and Swift development. If you do a quick search for iOS newsletters, chances are you’ll find a bunch. The problem is, not all of them are regularly updated. iOS and Swift development is fast paced and the only way to keep up is by subscribing to data streams that provide you with useful information, regularly. With that in mind, I recently realised that I now only read a handful of the 10+ iOS newsletters I subscribe to. I therefore decided to do a bit of spring cleaning…

My Top iOS Newsletters

This week in Swift

The first one I’d like to mention is run by @NatashaTheRobot and is called This Week in Swift. There is always at least one article worth reading, as well as the occasional video. She also recommends podcasts and throws in a bit of light humour. A must subscribe!

iOS Dev Weekly

The next one (and probably the most well known) on my list is run by @daveverwer and is called iOS Dev Weekly. Dave has been running this newsletter for a loooong time and has a lot of subscribers, and with good reason: it’s brilliant! He recently hired two more curators to help him out, which shows Dave plans to keep it running for a long time to come. Always useful, always worth reading, another must sign up.

Swift Weekly Brief

The final one on my list is a newsletter focused on the Swift open source project and is run by @jesse_squires. Like the two above, Swift Weekly Brief provides some nice links to worthwhile articles and news but, as a bonus, contains a section on commits and pull requests related to the Swift open source project on GitHub. It’s a great way to keep on top of what’s coming next in Swift, so it’s highly recommended if you write a lot of Swift. Even if you primarily code in Objective-C, I’d still recommend keeping an eye on what’s going on. Jesse also runs a weekly podcast called Swift Unwrapped, which complements his newsletter and goes into more detail on some of the week’s issues.

And that’s my top 3 picks! Did I miss any? If I did, let me know on Twitter @timrichardsn.


Accessing device motion data in Swift and Core Motion

When utilised correctly, accessing device motion using Core Motion in iOS can be super useful. If you’re accessing the raw accelerometer and gyroscope data, you can do all sorts of magical UI transformations in real time which can make the user experience much better. It can also be used to ascertain the user’s current motion “state”. And by this I mean whether the user is walking, cycling, stationary, in a car, etc.

This sounds great, but anyone who’s ever tried to use Core Motion knows it’s far from straightforward and processing the raw data from an accelerometer and/or a gyroscope is definitely a challenge (and beyond the scope of this article).

In this brief article, I’m going to focus on the functionality of one particular class in Core Motion, and that’s the CMMotionActivityManager.

CMMotionActivityManager can be used to find out the user’s current or past motion state. There are 2 main interaction points:

  • startActivityUpdates(...) – used to get motion updates periodically
  • queryActivityStarting(...) – used to get historical motion data

Both calls hand you CMMotionActivity objects: the live updates deliver a single activity each time the state changes, while a historical query returns an array of them. These objects tell you what type of motion iOS thinks your device was involved in.
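
As a rough sketch (not the code from my test app), subscribing to live updates looks like this; note you’ll also need the motion permission (NSMotionUsageDescription) in your Info.plist:

```swift
import CoreMotion

let activityManager = CMMotionActivityManager()

func startLoggingLiveActivity() {
    guard CMMotionActivityManager.isActivityAvailable() else { return }

    // The handler receives a single CMMotionActivity each time the motion state changes
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        print("walking: \(activity.walking), running: \(activity.running), automotive: \(activity.automotive), stationary: \(activity.stationary), confidence: \(activity.confidence.rawValue)")
    }
}
```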

Simple right? Well this is software development, so give yourself a slap if you thought it would be plain sailing…

To fully understand how this particular part of Core Motion works, I decided to build a test app and log the data for a few days as I moved around.

The good news is, startActivityUpdates performed really well and the data returned and logged from this API seemed pretty much spot on. I did however observe some anomalies with the second API call, queryActivityStarting.

Querying historical Core Motion data works…sometimes

queryActivityStarting occasionally misbehaves and either doesn’t return any motion activities, or returns “unknown” even if you’ve been out running for the last 20 minutes.

I logged the data returned from this call using the following piece of code:
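
The original logging snippet isn’t reproduced here, but it was along these lines, using the same activityManager from the sketch above (the four-hour window is just an example):

```swift
func logHistoricalActivity() {
    let now = Date()
    let fourHoursAgo = now.addingTimeInterval(-4 * 60 * 60)

    activityManager.queryActivityStarting(from: fourHoursAgo, to: now, to: .main) { activities, error in
        if let error = error {
            print("Query failed: \(error)")
            return
        }

        activities?.forEach { activity in
            print("\(activity.startDate): stationary=\(activity.stationary) walking=\(activity.walking) running=\(activity.running) automotive=\(activity.automotive) cycling=\(activity.cycling) unknown=\(activity.unknown) confidence=\(activity.confidence.rawValue)")
        }
    }
}
```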

The main situation I found this happening in was when the phone had been left on a surface for some time before making the query.

This also happened much more frequently on iOS 10. I found iOS 11 behaved much much better and around 90% of the time the data was spot on.

I guess the thing to take away from this test is that CMMotionActivityManager actually performs really well without annihilating your battery (yay!). I was a bit surprised especially at how well it performed on iOS 11. If you’re thinking of using it in your app, just be aware it’s not 100% reliable, but we’re certainly getting there with iOS 11.

UserDefaults extension for Swifty variables

For quite some time now I’ve been using a UserDefaults extension to make reading and writing user information much more manageable. In the “old” days, writing a value to UserDefaults (or NSUserDefaults) was done like so:
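
Roughly like this, with “Tim” as a stand-in value:

```swift
// Writing
UserDefaults.standard.set("Tim", forKey: "name")

// Reading
let name = UserDefaults.standard.string(forKey: "name")
```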

Now there’s nothing wrong with doing this, it works fine. But who wants “fine”? We’re Swift developers and we can make this way more Swifty! In a real-world application, chances are you’ll want to access the “name” key at various points throughout your app. With our current solution, we’ll end up with a lot of repeated string keys.

Enter UserDefaults extension!

Adding a variable “name” as an extension to UserDefaults makes this problem disappear, whilst simultaneously giving us extra functionality for free.
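
A sketch of that extension, plus what the call site looks like:

```swift
extension UserDefaults {
    static var name: String? {
        get { return standard.string(forKey: "name") }
        set { standard.set(newValue, forKey: "name") }
    }
}

// Call site
UserDefaults.name = "Tim"
let name = UserDefaults.name
```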

This is great. At our call site we can now reference the “name” property directly via UserDefaults.name, without having to type out the “name” string key. But let’s take this one step further and make our variable non-optional. UserDefaults.standard.string(...) returns an optional String by default; combined with the nil coalescing operator, we can make our “name” variable non-optional:
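
For example (using an empty string as the fallback, a choice made purely for this example):

```swift
extension UserDefaults {
    static var name: String {
        get { return standard.string(forKey: "name") ?? "" }
        set { standard.set(newValue, forKey: "name") }
    }
}
```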

Note: You may not always want this, and in most cases you’d probably prefer to keep “name” as an optional; this is purely for demonstration purposes.

One final example of why I do this in my apps. Say we are reading and writing a custom data type not offered directly by UserDefaults, for example a Dictionary that uses a String as its key and a Date as its value. With our new extension, we can handle this easily:
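
A sketch of that (the key name and call-site usage are illustrative):

```swift
extension UserDefaults {
    static var stringDateDictionary: [String: Date] {
        get { return standard.dictionary(forKey: "stringDateDictionary") as? [String: Date] ?? [:] }
        set { standard.set(newValue, forKey: "stringDateDictionary") }
    }
}

// Call site
UserDefaults.stringDateDictionary = ["lastOpened": Date()]
let dates = UserDefaults.stringDateDictionary
```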

Referencing this value at the call site throughout our entire app now looks like UserDefaults.stringDateDictionary. I think we can all agree this is now much more readable, much more maintainable, and much more Swifty!


Get your app launch time with Swift

In this post, we’ll be looking at how to get your app launch time with Swift. App launch times are fundamental to a good user experience and as an iOS Developer, you should be looking to optimise this as much as you can.

In iOS, the application launch cycle does a lot of things before it even calls “didFinishLaunchingWithOptions” inside your AppDelegate. It’s important to understand what iOS actually does before it gives you control so you can properly measure the launch time. I won’t go into the specifics of this in this post but if you’re interested, Ole Begemann has written a great post on the iOS app launch sequence that I highly recommend you read. In it he references Objective-C, but the fundamentals are the same and can be applied to Swift very easily.

For us, we want to know how quickly our app is launching. The first thing we need to do is create a new Swift file called main.swift. It should look like this:
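
A sketch of it, assuming a global constant for the start time (on older SDKs the argv parameter needs an extra cast):

```swift
import UIKit

// Recorded as early as possible once our own code starts running
let appLaunchStartTime = CFAbsoluteTimeGetCurrent()

// Hand control back to UIKit, telling it which class is our app delegate.
// Note: on older SDKs you may need to cast CommandLine.unsafeArgv to match
// UIApplicationMain's argv parameter type.
UIApplicationMain(
    CommandLine.argc,
    CommandLine.unsafeArgv,
    nil,
    NSStringFromClass(AppDelegate.self)
)
```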

By creating a main.swift file we’re telling Xcode where our application kicks things off. By default this file isn’t required, but adding one ourselves means we then need to call UIApplicationMain() and tell it which class is our AppDelegate. We do this via NSStringFromClass(AppDelegate.self).

And now, in our AppDelegate.swift file, make the following changes:

  1. Comment out the @UIApplicationMain attribute. Among other things, this line tells the compiler that this class is our AppDelegate. Since we’re doing that manually in our main.swift file, we don’t need or want it.
  2. In didFinishLaunchingWithOptions, add an operation onto the main queue that logs our launch time to the console (see the sketch after this list). The reason we’re using GCD here is to make sure “didFinishLaunchingWithOptions” has actually finished before we execute this piece of code.
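
Put together, the relevant part of AppDelegate.swift might look like this (the launch-options key type varies slightly between Swift versions):

```swift
import UIKit

// @UIApplicationMain   // 1. commented out, main.swift is now the entry point
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        // 2. queued on the main queue, so it runs after didFinishLaunchingWithOptions returns
        DispatchQueue.main.async {
            let launchTime = CFAbsoluteTimeGetCurrent() - appLaunchStartTime
            print("App launched in \(launchTime) seconds")
        }

        return true
    }
}
```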

And that’s it! If you make these changes you’ll see your application launch time printed to the console. The lower you can get it, the better 🙂

Weighted quick union algorithm with Swift

Algorithms are pretty awesome. Understanding how they work and being able to analyse their performance (without actually having to test them) is an essential skill for all programmers, whatever their niche. For every mobile app I’ve ever made, there was at least one challenging algorithm I needed to write.

I recently tweeted a link to a repository containing Swift implementations of lots of algorithms, aptly named “the swift algorithm club”. This followed on from a challenge whereby I needed the weighted quick union algorithm to achieve percolation within my application. I had to write a Swift implementation of this, and I’d like to share it with you guys. You can see it on my GitHub page as a Gist, and below:
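
The Gist isn’t embedded here, but a straightforward weighted quick-union (union by size) in Swift looks like this:

```swift
/// Weighted quick-union: `count` elements, indexed 0..<count.
final class WeightedQuickUnion {

    private var parent: [Int]   // parent[i] = parent of i
    private var size: [Int]     // size[i] = number of nodes in the tree rooted at i
    private(set) var componentCount: Int

    init(count: Int) {
        parent = Array(0..<count)
        size = Array(repeating: 1, count: count)
        componentCount = count
    }

    /// Finds the root of the component containing `p`.
    func find(_ p: Int) -> Int {
        var p = p
        while parent[p] != p {
            p = parent[p]
        }
        return p
    }

    func connected(_ p: Int, _ q: Int) -> Bool {
        return find(p) == find(q)
    }

    /// Merges the components containing `p` and `q`,
    /// always attaching the smaller tree to the root of the larger one.
    func union(_ p: Int, _ q: Int) {
        let rootP = find(p)
        let rootQ = find(q)
        guard rootP != rootQ else { return }

        if size[rootP] < size[rootQ] {
            parent[rootP] = rootQ
            size[rootQ] += size[rootP]
        } else {
            parent[rootQ] = rootP
            size[rootP] += size[rootQ]
        }
        componentCount -= 1
    }
}
```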


Detecting Emoji characters in Swift

I recently had to write some code to check for the presence of Emoji characters in a UITextField in Swift. I tried a few approaches before arriving at the conclusion below. Credit to Ziewvater for helping.
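
The snippet below is a reconstruction of that approach rather than the exact original: it walks the string’s unicode scalars and checks them against ranges commonly used for Emoji (the ranges are approximate and by no means exhaustive):

```swift
extension String {
    func containsEmoji() -> Bool {
        for scalar in unicodeScalars {
            switch scalar.value {
            case 0x1F600...0x1F64F, // Emoticons
                 0x1F300...0x1F5FF, // Misc Symbols and Pictographs
                 0x1F680...0x1F6FF, // Transport and Map
                 0x1F900...0x1F9FF, // Supplemental Symbols and Pictographs
                 0x2600...0x26FF,   // Misc Symbols
                 0x2700...0x27BF,   // Dingbats
                 0xFE00...0xFE0F:   // Variation Selectors
                return true
            default:
                continue
            }
        }
        return false
    }
}
```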

Adding this code as an extension to String makes usage simple: “string”.containsEmoji()

That’s it!