Building a Camera App With SwiftUI and Combine
Learn to natively build your own SwiftUI camera app using Combine and create fun filters using the power of Core Image. By Yono Mittlefehldt.
SwiftUI enables developers to design and create working user interfaces almost as quickly as they could prototype one in Sketch or Photoshop. Think about how powerful this is. Instead of making static mockups, you can create working prototypes for almost the same amount of effort. How cool is that?
Add Combine as a data pipeline to the mix, and you’ve got the Batman and Robin of the programming world. You can decide which one is Batman and which one is Robin. :]
Some people may complain that SwiftUI and Combine aren’t ready for prime time, but do you really want to tell Batman he can’t go out and fight crime? In fact, would you believe you can write a camera app completely in SwiftUI without even touching UIViewRepresentable?
Creating a camera app with SwiftUI and Combine makes processing real-time video easy and delightful. Video processing can already be thought of as a data pipeline, and since Combine manages the flow of data like a pipeline, the two patterns have a lot in common. Integrating them allows you to create powerful effects, and the resulting pipelines are easy to extend when future features demand it.
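If the pipeline analogy feels abstract, here’s a tiny, self-contained Combine sketch (not part of the app, with placeholder Int values standing in for video frames) that shows the shape you’ll be building toward: a publisher, a few operators and a subscriber.
import Combine

// Placeholder publisher: integers stand in for camera frames.
let frames = PassthroughSubject<Int, Never>()

let subscription = frames
  .map { $0 * 2 }     // transform each value, like applying a filter
  .filter { $0 > 2 }  // drop values you don't care about
  .sink { print("Received \($0)") }

frames.send(1) // filtered out after being mapped to 2
frames.send(2) // prints "Received 4"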
In this tutorial, you’ll learn how to use this dynamic duo to:
- Manage the camera and the video frames it captures.
- Create a data pipeline in Combine to do interesting things with the captured frames.
- Present the resulting image stream using SwiftUI.
You’ll do these with an app called Filter the World. So, get ready to filter the world through your iPhone — even more so than you already do!
Getting Started
Click the Download Materials button at the top or bottom of this tutorial. There’s currently not a lot there, aside from a custom Error, a helpful extension to convert from a CVPixelBuffer to a CGImage, and some basic SwiftUI views that you’ll use to build up the UI.
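That pixel buffer conversion is worth a quick look. The exact implementation in the download may differ, but the extension will look something along these lines, using VideoToolbox to do the heavy lifting:
import VideoToolbox

// A sketch of the kind of helper the starter project provides; the exact
// implementation in the download may differ.
extension CGImage {
  static func create(from cvPixelBuffer: CVPixelBuffer?) -> CGImage? {
    guard let pixelBuffer = cvPixelBuffer else {
      return nil
    }
    var image: CGImage?
    VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &image)
    return image
  }
}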
If you build and run now, you won’t see much.
Currently, there’s a blank screen with the name of the app in the center.
If you want to make a camera-based app, what’s the most important thing you need? Aside from having a cool name, being able to display the camera feed is probably a close second.
So that’s exactly what you’ll do first!
Displaying Captured Frames
If you were going to use a UIViewRepresentable, you’d probably opt for attaching an AVCaptureVideoPreviewLayer to your UIView, but that’s not what you’re going to do! In SwiftUI, you’ll display the captured frames as Image objects.
Since the data you get from the camera will be a CVPixelBuffer, you’ll need some way to convert it to an Image. You can initialize an Image from a UIImage or a CGImage, and the second route is the one you’ll take.
Inside the Views group, create an iOS SwiftUI View file and call it FrameView.swift.
Add the following properties to FrameView:
var image: CGImage?
private let label = Text("Camera feed")
When adding FrameView to ContentView in a little bit, you’ll pass in the image it should display. label is there to make your code in the next step a little bit cleaner!
Replace Text in the body with the following:
// 1
if let image = image {
  // 2
  GeometryReader { geometry in
    // 3
    Image(image, scale: 1.0, orientation: .upMirrored, label: label)
      .resizable()
      .scaledToFill()
      .frame(
        width: geometry.size.width,
        height: geometry.size.height,
        alignment: .center)
      .clipped()
  }
} else {
  // 4
  Color.black
}
In this code block, you:
1. Conditionally unwrap the optional image.
2. Set up a GeometryReader to access the size of the view. This is necessary to ensure the image is clipped to the screen bounds. Otherwise, UI elements on the screen could potentially be anchored to the bounds of the image instead of the screen.
3. Create Image from CGImage, scale it to fill the frame and clip it to the bounds. Here, you set the orientation to .upMirrored, because you’ll be using the front camera. If you wanted to use the back camera, this would need to be .up.
4. Return a black view if the image property is nil.
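For reference, here’s the completed FrameView with the pieces from above assembled (any generated preview code in the file can stay as it is):
import SwiftUI

struct FrameView: View {
  var image: CGImage?
  private let label = Text("Camera feed")

  var body: some View {
    if let image = image {
      GeometryReader { geometry in
        Image(image, scale: 1.0, orientation: .upMirrored, label: label)
          .resizable()
          .scaledToFill()
          .frame(
            width: geometry.size.width,
            height: geometry.size.height,
            alignment: .center)
          .clipped()
      }
    } else {
      Color.black
    }
  }
}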
Great work! Now you need to hook it up in the ContentView.
Open ContentView.swift and replace the contents of ZStack with:
FrameView(image: nil)
  .edgesIgnoringSafeArea(.all)
This adds the newly created FrameView and ignores the safe area, so the frames will flow edge to edge. For now, you’re passing in nil, as you don’t have a CGImage yet.
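Assuming the starter’s ContentView wraps its content in a plain ZStack, as the step above implies, the whole view now looks roughly like this:
import SwiftUI

// A sketch of ContentView after the change, assuming the starter's body is a
// simple ZStack; the rest of the file stays as it was.
struct ContentView: View {
  var body: some View {
    ZStack {
      FrameView(image: nil)
        .edgesIgnoringSafeArea(.all)
    }
  }
}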
There’s no need to build and run now. If you did, it would show up black.
It’s still a blank screen. Is that really an improvement?
To display the frames now, you’ll need to add some code to set up the camera and receive the captured output.
Managing the Camera
You’ll start by creating a manager for your camera: a CameraManager, if you will.
First, add a new Swift file named CameraManager.swift to the Camera group.
Now, replace the contents Xcode provides with the following code:
import AVFoundation

// 1
class CameraManager: ObservableObject {
  // 2
  enum Status {
    case unconfigured
    case configured
    case unauthorized
    case failed
  }

  // 3
  static let shared = CameraManager()

  // 4
  private init() {
    configure()
  }

  // 5
  private func configure() {
  }
}
So far, you’ve set up a basic structure for CameraManager. More specifically, you:
1. Created a class that conforms to ObservableObject to make it easier to use with future Combine code.
2. Added an internal enumeration to represent the status of the camera.
3. Included a static shared instance of the camera manager to make it easily accessible.
4. Turned the camera manager into a singleton by making init private (see the short usage sketch after this list).
5. Added a stub for a configure() you’ll fill out soon.
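Here’s the short usage sketch mentioned above, just to illustrate what the shared singleton and private initializer buy you. It isn’t part of the app code:
// The shared instance is the only way to get hold of a CameraManager.
let manager = CameraManager.shared

// This would not compile, because init() is private:
// let another = CameraManager()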
Configuring the camera requires two steps. First, check for permission to use the camera and request it, if necessary. Second, configure AVCaptureSession.
Checking for Permission
Privacy is one of Apple’s most touted pillars. The data captured by a camera has the potential to be extremely sensitive and private. Since Apple (and hopefully you) care about users’ privacy, it only makes sense that the user needs to grant an app permission to use the camera. You’ll take care of that now.
Add the following properties to CameraManager:
// 1
@Published var error: CameraError?
// 2
let session = AVCaptureSession()
// 3
private let sessionQueue = DispatchQueue(label: "com.raywenderlich.SessionQ")
// 4
private let videoOutput = AVCaptureVideoDataOutput()
// 5
private var status = Status.unconfigured
Here, you define:
1. An error to represent any camera-related error. You made it a published property so that other objects can subscribe to this stream and handle any errors as necessary.
2. AVCaptureSession, which will coordinate sending the camera images to the appropriate data outputs.
3. A session queue, which you’ll use to change any of the camera configurations.
4. The video data output that will connect to AVCaptureSession. You’ll want this stored as a property so you can change the delegate after the session is configured (see the sketch after this list).
5. The current status of the camera.
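To make the videoOutput point concrete, here’s the kind of method CameraManager could expose later so another object can receive the captured frames. This is only a hypothetical sketch; the method name and shape the tutorial eventually uses may differ:
// Hypothetical sketch only: hands a sample buffer delegate to the stored
// video output on the session queue.
func set(
  _ delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
  queue: DispatchQueue
) {
  sessionQueue.async {
    self.videoOutput.setSampleBufferDelegate(delegate, queue: queue)
  }
}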
Next, add the following method to CameraManager:
private func set(error: CameraError?) {
  DispatchQueue.main.async {
    self.error = error
  }
}
Here, you set the published error to whatever error is passed in. You do this on the main thread, because any published properties should be set on the main thread.
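Because error is published, other objects can react to it with a plain Combine subscription. Here’s a minimal usage sketch (the print is just a placeholder for real error handling):
import Combine

// Usage sketch: subscribe to the published error from anywhere in the app.
// Since set(error:) already hops to the main queue, this closure runs on the
// main thread and can safely drive UI state.
let errorSubscription = CameraManager.shared.$error
  .compactMap { $0 }
  .sink { error in
    print("Camera error: \(error)")
  }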
Next, to check for camera permissions, add the following method to CameraManager:
private func checkPermissions() {
  // 1
  switch AVCaptureDevice.authorizationStatus(for: .video) {
  case .notDetermined:
    // 2
    sessionQueue.suspend()
    AVCaptureDevice.requestAccess(for: .video) { authorized in
      // 3
      if !authorized {
        self.status = .unauthorized
        self.set(error: .deniedAuthorization)
      }
      self.sessionQueue.resume()
    }
  // 4
  case .restricted:
    status = .unauthorized
    set(error: .restrictedAuthorization)
  case .denied:
    status = .unauthorized
    set(error: .deniedAuthorization)
  // 5
  case .authorized:
    break
  // 6
  @unknown default:
    status = .unauthorized
    set(error: .unknownAuthorization)
  }
}
In this method:
1. You switch on the camera’s authorization status, specifically for video.
2. If the returned device status is undetermined, you suspend the session queue and have iOS request permission to use the camera.
3. If the user denies access, then you set the CameraManager’s status to .unauthorized and set the error. Regardless of the outcome, you resume the session queue.
4. For the .restricted and .denied statuses, you set the CameraManager’s status to .unauthorized and set an appropriate error.
5. In the case that permission was already given, nothing needs to be done, so you break out of the switch.
6. To make Swift happy, you add an unknown default case, just in case Apple adds more cases to AVAuthorizationStatus in the future.
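One practical note before moving on: iOS will terminate the app when it requests camera access unless the target’s Info.plist contains an NSCameraUsageDescription entry, so check that the starter project includes one. Also, checkPermissions() still needs to be called. Here’s a sketch of how it might slot into the configure() stub, assuming the session setup follows in the next step:
private func configure() {
  checkPermissions()
  // The AVCaptureSession setup will be added here in the next step.
}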
Now, you’ll move on to the second step needed to use the camera: configuring it. But before that, a quick explanation!