Vision Framework Tutorial for iOS: Scanning Barcodes
In this Vision Framework tutorial, you’ll learn how to use your iPhone’s camera to scan QR codes and automatically open encoded URLs in Safari. By Emad Ghorbaninia.
Contents
- Getting Started
- Getting Permission to Use Camera
- Starting an AVCaptureSession
- Setting Capture Session Quality
- Defining a Camera for Input
- Making an Output
- Running the Capture Session
- Vision Framework
- Vision and the Camera
- Using the Vision Framework
- Creating a Vision Request
- Vision Handler
- Vision Observation
- Adding a Confidence Score
- Using Barcode Symbology
- Opening Encoded URLs in Safari
- Setting Up Safari
- Opening Safari
- Where to Go From Here?
Barcodes are everywhere: on products, in advertising, on movie tickets. In this tutorial, you’ll learn how to scan barcodes with your iPhone using the Vision Framework. You’ll work with Vision Framework APIs such as VNDetectBarcodesRequest, VNBarcodeObservation and VNBarcodeSymbology, as well as learn how to use AVCaptureSession to perform real-time image capture.
That’s not all! You’ll also become familiar with:
- Using the camera as an input device.
- Generating and evaluating an image confidence score.
- Opening a web page in SFSafariViewController.
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of this tutorial. Open the project inside the starter folder in Xcode and explore it.
Take a look at ViewController.swift. You’ll find some helper methods already in the code.
Before you start scanning barcodes, you’d better get permission to use the camera.
Getting Permission to Use Camera
To protect user privacy, Apple requires developers to get permission from users before accessing their camera. There are two steps to prepare your app to ask for the right permission:
- Explain why and how your app will use the camera by adding a key and value in your Info.plist.
- Use AVCaptureDevice.requestAccess(for:completionHandler:) to prompt the user with your explanation and get user input for permission.
The starter project includes the key-value pair in Info.plist. You can find it under the key Privacy – Camera Usage Description.
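Under the hood, that display name maps to the NSCameraUsageDescription key. In raw XML, the entry looks something like the snippet below; the description string here is illustrative, and the starter project ships with its own wording:

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan barcodes.</string>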
To prompt the user for permission to use the camera, open ViewController.swift. Next, find // TODO: Checking permissions inside checkPermissions(). Add the following code inside the method:
switch AVCaptureDevice.authorizationStatus(for: .video) {
// 1
case .notDetermined:
  AVCaptureDevice.requestAccess(for: .video) { [self] granted in
    if !granted {
      showPermissionsAlert()
    }
  }
// 2
case .denied, .restricted:
  showPermissionsAlert()
// 3
default:
  return
}
In the code above, you ask iOS for the current camera authorization status for your app.
- When the status isn’t determined, meaning the user hasn’t made a permissions selection yet, you call AVCaptureDevice.requestAccess(for:completionHandler:). It presents the user with a dialog asking for permission to use the camera. If the user denies your request, you show a pop-up message asking for permission again, this time in the iPhone settings.
- If the user previously provided restricted access to the camera, or denied the app access to the camera, you show an alert asking for an update to settings to allow access.
- Otherwise, the user already granted permission for your app to use the camera, so you don’t have to do anything.
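The starter project already implements showPermissionsAlert() for you. Purely as an illustration of what such a helper might do (this sketch is an assumption, not the starter's actual code), it could present an alert with a shortcut to your app's page in Settings:

private func showPermissionsAlert() {
  let alert = UIAlertController(
    title: "Camera Access Required",
    message: "Please allow camera access in Settings to scan barcodes.",
    preferredStyle: .alert)
  alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
  alert.addAction(UIAlertAction(title: "Open Settings", style: .default) { _ in
    // Deep-link to this app's page in the Settings app.
    if let url = URL(string: UIApplication.openSettingsURLString) {
      UIApplication.shared.open(url)
    }
  })
  present(alert, animated: true)
}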
Build and run and you’ll see the following:
With the camera permission in place, you can move on to starting a capturing session.
Starting an AVCaptureSession
Now you have permission to access the camera on your device. But when you dismiss the alert on your phone, nothing happens! You’ll fix that now by following these steps to start using the iPhone camera:
- Set quality for the capture session.
- Define a camera for input.
- Make an output for the camera.
- Run the capture session.
Setting Capture Session Quality
In Xcode, navigate to ViewController.swift. Find setupCameraLiveView() and add the following code after // TODO: Setup captureSession:
captureSession.sessionPreset = .hd1280x720
captureSession is an instance of AVCaptureSession. With an AVCaptureSession, you can manage capture activity and coordinate how data flows from input devices to capture outputs.
In the code above, you set the capture session quality to HD.
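Not every device supports every preset, so if you want to be defensive you could check before assigning and fall back to a lower-quality preset. This variation is an optional sketch, not part of the starter project:

// Prefer HD, but fall back gracefully on devices that don't support it.
if captureSession.canSetSessionPreset(.hd1280x720) {
  captureSession.sessionPreset = .hd1280x720
} else {
  captureSession.sessionPreset = .medium
}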
Next, you’ll define which of the many iPhone cameras you want your app to use and pass your selection to the capture session.
Defining a Camera for Input
Continuing in setupCameraLiveView(), add this code right after // TODO: Add input:
// 1
let videoDevice = AVCaptureDevice
  .default(.builtInWideAngleCamera, for: .video, position: .back)
// 2
guard
  let device = videoDevice,
  let videoDeviceInput = try? AVCaptureDeviceInput(device: device),
  captureSession.canAddInput(videoDeviceInput)
else {
  // 3
  showAlert(
    withTitle: "Cannot Find Camera",
    message: "There seems to be a problem with the camera on your device.")
  return
}
// 4
captureSession.addInput(videoDeviceInput)
Here you:
- Find the default wide-angle camera, located on the rear of the iPhone.
- Make sure your app can use the camera as an input device for the capture session.
- If there’s a problem with the camera, show the user an error message. A possible fallback approach is sketched just after this list.
- Otherwise, set the rear wide-angle camera as the input device for the capture session.
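If .default(_:for:position:) returns nil, say on a device without a rear wide-angle camera, one possible fallback is to discover whatever cameras are available with AVCaptureDevice.DiscoverySession. This is a hedged sketch, not something the starter project does:

// Look for any suitable rear camera as a fallback.
let discovery = AVCaptureDevice.DiscoverySession(
  deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
  mediaType: .video,
  position: .back)
let fallbackDevice = discovery.devices.first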
With the capturing session ready, you can now make the camera output.
Making an Output
Now that you have video coming in from the camera, you need a place for it to go. Continuing where you left off, after // TODO: Add output, add:
let captureOutput = AVCaptureVideoDataOutput()
// TODO: Set video sample rate
captureSession.addOutput(captureOutput)
Here, you set the output of your capture session to an instance of AVCaptureVideoDataOutput. AVCaptureVideoDataOutput is a capture output that records video and provides access to video frames for processing. You’ll add more to this later.
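For context, the // TODO: Set video sample rate placeholder is where frames eventually get routed to your code. Later in this tutorial you'll wire up a frame-by-frame callback, which generally looks something like the sketch below; the queue label is an illustrative choice:

// Deliver each captured frame to this view controller on a dedicated queue.
// Requires the view controller to adopt AVCaptureVideoDataOutputSampleBufferDelegate.
captureOutput.setSampleBufferDelegate(
  self,
  queue: DispatchQueue(label: "CameraFeedDispatchQueue"))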
Finally, time to run the capturing session.
Running the Capture Session
Find the // TODO: Run session comment and add the following code directly after it:
captureSession.startRunning()
This starts your camera session and enables you to continue with the Vision framework.
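One caveat: startRunning() is a blocking call that can take a moment to return, so production code often dispatches it off the main thread. A minimal sketch of that pattern, not something the starter project requires:

// startRunning() blocks until the session starts, so run it on a
// background queue to keep the UI responsive.
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
  self?.captureSession.startRunning()
}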
But first! What many forget is stopping the camera session again.
To do this, find the // TODO: Stop Session comment in viewWillDisappear(_:) and add the following after it:
captureSession.stopRunning()
This stops your capture session if your view happens to disappear, freeing up some precious memory.
Build and run the project on your device. Just like that, your rear camera shows up on your iPhone screen! If you have your phone facing your computer, it’ll look similar to this:
With that in place, it’s time to move on to the Vision framework.
Vision Framework
Apple created the Vision Framework to let developers apply computer vision algorithms to perform a variety of tasks on input images and video. For example, you can use Vision for:
- Face and landmark detection
- Text detection
- Image registration
- General feature tracking
Vision also lets you use custom Core ML models for tasks like image classification or object detection.
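To give you a feel for the request/handler pattern you'll build out in the rest of this tutorial, here's a rough sketch of running a barcode request against a single image. The detectBarcode(in:) name and the CVPixelBuffer input are illustrative assumptions, not code from this project:

import Vision

// Illustrative: create a barcode request, then run it with an image request handler.
func detectBarcode(in pixelBuffer: CVPixelBuffer) {
  let request = VNDetectBarcodesRequest { request, error in
    guard let observations = request.results as? [VNBarcodeObservation] else {
      return
    }
    // Each observation carries a decoded payload and a confidence score.
    for barcode in observations {
      print(barcode.payloadStringValue ?? "no payload", barcode.confidence)
    }
  }
  let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
  try? handler.perform([request])
}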