31/03/2023

Dave Duprey, iOS Engineer at what3words, announces the updated what3words Scan feature – now powered by Apple’s Core ML – and explains how it couldn’t be easier to add to your app, too.

What is what3words Scan?

what3words Scan lets users scan a what3words address with their device’s camera to open the location directly in the app. This means that whenever they see a what3words address on an event ticket, in a travel guidebook or on a sign, they can easily scan it to share, save or navigate to their precise destination.

Working with Apple’s Core ML

The latest version of what3words Scan now uses Apple’s Core ML – or, more precisely, the Vision Framework, which is built on top of it. This unlocks a breathtakingly quick and capable experience for iOS users, enabling them to scan what3words addresses on a much wider range of objects and in more formats. This includes handwritten addresses, those printed on packaging, or even stitched onto what3words socks – and more. See the announcement film below to find out more.

Incorporating Apple’s Vision Framework (built on Core ML) into our OCR component was surprisingly straightforward. In simple terms, we pass it images and it calls back with an array of “observations” of the text it found. We then filter those results with our what3words regex, and check anything that passes against our SDK or API to rapidly validate the result.
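As a rough sketch of that flow – the function name and the simplified three-word regex here are ours for illustration, and the real component’s filtering and validation are more involved – it looks something like this:

import UIKit
import Vision

func findCandidateAddresses(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        // Vision calls back with an array of text "observations"
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let strings = observations.compactMap { $0.topCandidates(1).first?.string }

        // filter to text shaped like a what3words address (simplified pattern);
        // anything that passes would then be validated against the API or SDK
        let pattern = "[a-z]+\\.[a-z]+\\.[a-z]+"
        completion(strings.filter { $0.range(of: pattern, options: .regularExpression) != nil })
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}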

Overall, we think the results speak for themselves; watch the film below to see what we mean.

Add what3words Scan to your app

If you’re a business that uses what3words and you’d like to add what3words Scan to your app, it couldn’t be easier thanks to our adoption of Apple’s Core ML. Follow these steps to get started.

Step-by-step guide

Use Swift Package Manager and add the URL below:
https://github.com/what3words/w3w-swift-components-ocr.git
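If you declare dependencies in a Package.swift manifest rather than through Xcode’s UI, the entry would look something like this (the version shown is illustrative – pin to the release you need):

dependencies: [
    .package(url: "https://github.com/what3words/w3w-swift-components-ocr.git", from: "1.0.0")
]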

Import the libraries wherever you use the components:

import W3WSwiftComponentsOcr
import W3WSwiftApi

Info.plist

You must set the camera permission in your app’s Info.plist:
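The permission is set with the standard NSCameraUsageDescription key – the description string below is just an example, so word it for your own app:

<key>NSCameraUsageDescription</key>
<string>This app needs camera access to scan what3words addresses.</string>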

Using the component

Using the API with the Vision Framework:
Our W3WOcrNative class uses the iOS Vision Framework and requires our API (or SDK) to be passed into its constructor:

let api = What3WordsV3(apiKey: "YourApiKey")
let ocr = W3WOcrNative(api)
let ocrViewController = W3WOcrViewController(ocr: ocr)

Typical usage

Here’s a typical usage example, set in a UIViewController’s IBAction function connected to a UIButton (presuming the initialisation code above was used somewhere in the class):

@IBAction func scanButtonPressed(_ sender: Any) {

  // show the OCR ViewController
  self.show(ocrViewController, sender: self)

  // start the OCR processing images
  ocrViewController.start()

  // when it finds an address, show it in the viewfinder and stop scanning
  ocrViewController.onSuggestions = { [weak self] suggestions in
    if let suggestion = suggestions.first {
      self?.ocrViewController.show(suggestion: suggestion)
      self?.ocrViewController.stop()
    }
  }

  // if there is an error, stop scanning and show the user
  ocrViewController.onError = { [weak self] error in
    self?.ocrViewController.stop()
    self?.showError(error: error)
  }
}
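Here, showError(error:) stands in for however your app presents errors to the user. Note that stop() halts image processing, so it’s called as soon as a suggestion is found or an error comes back.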

Example code

An example called OcrComponent can be found in the Examples/OcrComponent directory of the repository.

Download it here

Get everything you need to add what3words Scan to your app here.

How can I add what3words Scan to my app?

Watch our tutorial on how to add what3words to your app.