Video Depth Maps Tutorial for iOS: Getting Started
In this iOS video depth maps tutorial, you’ll harness iOS 13’s video depth maps to apply realtime video filters and create a special effects masterpiece! By Owen L Brown.
Admit it. Ever since you took your first video with the iPhone, you’ve had a burning desire to break into Hollywood. But you’re asking yourself, how can I do it?
Simple! You can use your iOS development skills to enhance your videos, become a special effects genius and take Hollywood by storm.
In this video depth maps tutorial, you’ll learn how to:
- Request depth information for a video feed.
- Manipulate the depth information.
- Combine the video feed with depth data and filters to create an SFX masterpiece.
Since the core of the project deals with depth maps, it’d be nice to know what depth maps are and how the iPhone gets them before you get started.
What are Depth Maps?
A depth map is distance data of surfaces from the camera’s point of view. This data is tied to a given image or video and, once it’s grabbed, you can do some pretty cool things with it.
For example, the image below displays closer objects in white and far away ones in black.
Using two offset cameras, the iPhone calculates the relative distances of objects. The process is very similar to your eyes working together to determine depth perception.
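That stereo relationship can be sketched with the classic pinhole model: the farther an object is, the less it shifts (its disparity) between the two camera images, so depth is inversely proportional to disparity. The baseline and focal length values below are made-up placeholders, not real iPhone calibration data:

```swift
// Classic pinhole stereo relation: Z = f * B / d
// (depth = focal length * camera baseline / disparity).
func depthInMeters(disparity: Float, baselineMeters: Float, focalLengthPixels: Float) -> Float {
    return focalLengthPixels * baselineMeters / disparity
}

let baseline: Float = 0.01        // ~1 cm between the two back cameras (assumed)
let focalLength: Float = 1_000    // focal length in pixels (assumed)

// A larger disparity means the object is closer.
let near = depthInMeters(disparity: 20, baselineMeters: baseline, focalLengthPixels: focalLength) // 0.5 m
let far  = depthInMeters(disparity: 5,  baselineMeters: baseline, focalLengthPixels: focalLength) // 2.0 m
```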
Filters
Filtering is another concept important to depth data handling. There are many ways to filter data, but for this tutorial you’ll focus on two:
- High-pass: These filters only keep values above a certain threshold.
- Band-pass: These filters only keep values between a minimum and maximum range.
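Assuming depth values normalized to the range 0.0 to 1.0, the two filter types can be sketched as simple masks over an array of values (the thresholds here are arbitrary examples, not values from the project):

```swift
// High-pass: keep only values above a threshold; zero out the rest.
func highPassMask(_ values: [Float], threshold: Float) -> [Float] {
    return values.map { $0 > threshold ? $0 : 0.0 }
}

// Band-pass: keep only values inside a [min, max] window.
func bandPassMask(_ values: [Float], min: Float, max: Float) -> [Float] {
    return values.map { ($0 >= min && $0 <= max) ? $0 : 0.0 }
}

let depthValues: [Float] = [0.1, 0.4, 0.6, 0.9]
let high = highPassMask(depthValues, threshold: 0.5)       // [0.0, 0.0, 0.6, 0.9]
let band = bandPassMask(depthValues, min: 0.3, max: 0.7)   // [0.0, 0.4, 0.6, 0.0]
```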
That’s enough theory for now!
OK, it’s time to launch Xcode and get your formal wear ready for the Oscars!
Getting Started
For this video depth maps tutorial, you need Xcode 11 or later. You also need an iPhone with dual cameras on the back because that’s how the iPhone generates depth information. Since you need to run this app on a device, not the simulator, you also need an Apple Developer account.
Once you have everything ready, download and explore the materials for this tutorial by clicking the Download Materials button at the top or bottom of this page.
Open the starter project and select your development team in the Signing & Capabilities section of the Project Settings. Build and run it on your device. You’ll see something like this:
At this point, the app doesn’t do much. That’s where you come in!
Capturing Video Depth Maps Data
Before you can capture depth data for videos, you need to add an AVCaptureDepthDataOutput object to the AVCaptureSession.
Apple added AVCaptureDepthDataOutput in iOS 11 specifically to handle depth data, as the name suggests.
Open DepthVideoViewController.swift and add the following lines to the bottom of configureCaptureSession():
// 1
let depthOutput = AVCaptureDepthDataOutput()
// 2
depthOutput.setDelegate(self, callbackQueue: dataOutputQueue)
// 3
depthOutput.isFilteringEnabled = true
// 4
session.addOutput(depthOutput)
// 5
let depthConnection = depthOutput.connection(with: .depthData)
// 6
depthConnection?.videoOrientation = .portrait
Here’s the step-by-step breakdown:
1. You create a new AVCaptureDepthDataOutput object.
2. Then, you set the current view controller as the delegate for the new object. The callbackQueue parameter is the dispatch queue on which to call the delegate methods. For now, ignore the error; you’ll fix it later.
3. Enable filtering on the depth data to take advantage of Apple’s algorithms to fill in any holes in the data.
4. At this point, you’re ready to add the configured AVCaptureDepthDataOutput to the AVCaptureSession.
5. Finally, get the AVCaptureConnection for the depth output in order to…
6. …ensure the video orientation of the depth data matches the video feed.
Simple, right?
But hang on! Before you build and run the project, you first need to tell the app what to do with this depth data. That’s where the delegate method comes in.
Still in DepthVideoViewController.swift, add the following extension and delegate method at the end of the file:
// MARK: - Capture Depth Data Delegate Methods
extension DepthVideoViewController: AVCaptureDepthDataOutputDelegate {
  func depthDataOutput(
    _ output: AVCaptureDepthDataOutput,
    didOutput depthData: AVDepthData,
    timestamp: CMTime,
    connection: AVCaptureConnection) {
  }
}
This method gets called every time the camera records more depth data. Add the following code to the method:
// 1
guard previewMode != .original else {
  return
}

var convertedDepth: AVDepthData

// 2
let depthDataType = kCVPixelFormatType_DisparityFloat32
if depthData.depthDataType != depthDataType {
  convertedDepth = depthData.converting(toDepthDataType: depthDataType)
} else {
  convertedDepth = depthData
}

// 3
let pixelBuffer = convertedDepth.depthDataMap

// 4
pixelBuffer.clamp()

// 5
let depthMap = CIImage(cvPixelBuffer: pixelBuffer)

// 6
DispatchQueue.main.async {
  self.depthMap = depthMap
}
Here’s what’s happening:
1. First, optimize this function to create a depth map only if the current preview mode is one that would use the depth map.
2. Next, ensure the depth data is the format you need: 32-bit floating point disparity information. Disparity tells you how much one image is shifted compared to another.
3. Save the depth data map from the AVDepthData object as a CVPixelBuffer, an efficient data structure for holding a bunch of pixels.
4. Using an extension included in the project, clamp the pixels in the pixel buffer to keep them between 0.0 and 1.0.
5. Convert the pixel buffer into a CIImage.
6. Finally, store this in a class property for later use.
Phew! You’re probably itching to run this now. But before you do, there’s one small addition you need to make to view the depth map: You need to display it!
Find the AVCaptureVideoDataOutputSampleBufferDelegate extension and look for the switch statement in captureOutput(_:didOutput:from:). Add the following case:
case (.depth, _, _):
  previewImage = depthMap ?? image
The switch statement sets previewImage for each different filter. This new case displays the depth map when the Depth segment is selected, falling back to the unfiltered camera image if no depth map has been captured yet.
Build and run the project. Tap the Depth segment of the segmented control at the bottom.
This is the visual representation of the depth data captured alongside the video data.