Image Depth Maps Tutorial for iOS: Getting Started
Learn how to use the powerful image manipulation frameworks in iOS to work with image depth maps in just a few lines of code. By Owen L Brown.
Contents
- Getting Started
- Reading Depth Data
- Implementing the Depth Data
- How Does the iPhone Do This?
- Depth vs Disparity
- Creating a Mask
- Setting up the Left Side of the Mask
- Setting up the Right Side of the Mask
- Combining the Two Masks
- Your First Depth-Inspired Filter
- Color Highlight Filter
- Change the Focal Length
- More About AVDepthData
- Where to Go From Here?
Depth vs Disparity
So far, you’ve mostly used the term depth data, but in your code you requested kCGImageAuxiliaryDataTypeDisparity data. What gives?
Depth and disparity are essentially inversely proportional. The farther away an object is, the greater the object’s depth. The disparity is the distance between the equivalent object in the two images. As the formula below shows, as this distance approaches zero, the depth approaches infinity.
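As a quick sketch of that relationship (this formula is standard stereo geometry rather than anything from the project’s code; baseline and focalLength are fixed properties of the capturing camera pair):

depth = (baseline × focalLength) / disparity

Dividing by a disparity near zero sends the depth toward infinity, which is exactly the behavior described above.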
If you played around with the starter project, you might have noticed a slider at the bottom of the screen that’s visible when you select the Mask and Filter segments.
You’re going to use this slider, along with the depth data, to make a mask for the image at a certain depth. Then you’ll use this mask to filter the original image and create some neat effects!
Creating a Mask
Open DepthImageFilters.swift and find createMask(for:withFocus:). Then add the following code to the top:
let s1 = MaskParams.slope
let s2 = -MaskParams.slope
let filterWidth = 2 / MaskParams.slope + MaskParams.width
let b1 = -s1 * (focus - filterWidth / 2)
let b2 = -s2 * (focus + filterWidth / 2)
These constants are going to define how you convert the depth data into an image mask.
Think of the depth data map as the following function:
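In formula form, that function is just the identity on normalized disparity:

pixelValue(disparity) = disparity, for disparity in 0.0...1.0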
The pixel value of your depth map image is equal to the normalized disparity. Remember, a pixel value of 1.0 is white and a disparity value of 1.0 is closest to the camera. On the other side of the scale, a pixel value of 0.0 is black and a disparity value of 0.0 is farthest from the camera.
To create a mask from the depth data, you’ll change this function to be something much more interesting: one that essentially picks out a certain depth. As a concrete example, consider a version of the same pixel-value-to-disparity function with a focal point at 0.75 disparity and a peak of width 0.1 that slopes down at 4.0 on either side. createMask(for:withFocus:) will use some funky math to create this function.
This means that the whitest pixels (value 1.0) will be those with a disparity of 0.75 ± 0.05 (focal point ± width / 2). The pixels will then quickly fade to black for disparity values above and below this range. The larger the slope, the faster they’ll fade to black.
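If you want to see all of that math in one place before wiring it into Core Image, here’s a small self-contained sketch. The function maskValue is purely illustrative and isn’t part of the project; the real implementation below uses Core Image filters to apply the same math to every pixel at once:

import CoreGraphics

/// Illustrative only: the piecewise-linear "tent" the mask encodes,
/// evaluated for a single normalized disparity value.
func maskValue(
  disparity d: CGFloat,
  focus: CGFloat,
  width: CGFloat = 0.1,
  slope: CGFloat = 4.0
) -> CGFloat {
  let s1 = slope                       // rising edge, left of the focus
  let s2 = -slope                      // falling edge, right of the focus
  let filterWidth = 2 / slope + width  // full base of the tent
  let b1 = -s1 * (focus - filterWidth / 2)
  let b2 = -s2 * (focus + filterWidth / 2)

  // The left line reaches 1.0 at focus - width / 2 and the right line
  // leaves 1.0 at focus + width / 2, so after taking the minimum and
  // clamping to 0...1 you get a plateau of 1.0 that is `width` wide,
  // centered on the focus.
  let left = s1 * d + b1
  let right = s2 * d + b2
  return min(max(min(left, right), 0), 1)
}

// With a focus of 0.75: disparities in 0.70...0.80 map to 1.0 (white)
// and fade linearly to 0.0 (black) outside that range.
print(maskValue(disparity: 0.75, focus: 0.75)) // 1.0
print(maskValue(disparity: 0.60, focus: 0.75)) // 0.6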
You’ll set the mask up in two parts — the left side and the right side. You’ll then combine them.
Setting up the Left Side of the Mask
After the constants you previously added, add the following:
let depthImage = image.depthData.ciImage!
let mask0 = depthImage
  .applyingFilter("CIColorMatrix", parameters: [
    "inputRVector": CIVector(x: s1, y: 0, z: 0, w: 0),
    "inputGVector": CIVector(x: 0, y: s1, z: 0, w: 0),
    "inputBVector": CIVector(x: 0, y: 0, z: s1, w: 0),
    "inputBiasVector": CIVector(x: b1, y: b1, z: b1, w: 0)])
  .applyingFilter("CIColorClamp")
This filter multiplies all the pixels by the slope s1 and then adds the bias b1. Since the mask is grayscale, you need to make sure that all color channels have the same value. Finally, CIColorClamp clamps the values to be between 0.0 and 1.0, so the filter chain applies the following function:
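Written out, with d the normalized disparity of a pixel and clamp limiting the result to the range 0.0...1.0:

mask0(d) = clamp(s1 × d + b1, 0.0, 1.0)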
The larger s1 is, the steeper the slope of the line will be. The constant b1 moves the line left or right.
Setting up the Right Side of the Mask
To take care of the other side of the mask function, add the following:
let mask1 = depthImage
  .applyingFilter("CIColorMatrix", parameters: [
    "inputRVector": CIVector(x: s2, y: 0, z: 0, w: 0),
    "inputGVector": CIVector(x: 0, y: s2, z: 0, w: 0),
    "inputBVector": CIVector(x: 0, y: 0, z: s2, w: 0),
    "inputBiasVector": CIVector(x: b2, y: b2, z: b2, w: 0)])
  .applyingFilter("CIColorClamp")
Since the slope s2 is negative, the filter applies the following function:
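That is:

mask1(d) = clamp(s2 × d + b2, 0.0, 1.0)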
Combining the Two Masks
Now, put the two masks together. Add the following code:
let combinedMask = mask0.applyingFilter("CIDarkenBlendMode", parameters: [
  "inputBackgroundImage": mask1
])
let mask = combinedMask.applyingFilter("CIBicubicScaleTransform", parameters: [
  "inputScale": image.depthDataScale
])
You combine the masks by using the CIDarkenBlendMode filter, which chooses the lower of the two values of the input masks. Then you scale the mask to match the image size, since the depth data map is a lower resolution.
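Because the darken blend takes the per-pixel minimum, the combined mask is exactly the tent function from earlier:

combinedMask(d) = min(mask0(d), mask1(d))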
Finally, replace the return line with:
return mask
Build and run your project. Tap the Mask segment and play with the slider.
You’ll see something like this:
As you move the slider from left to right, the mask is picking out pixels from far to near. So when the slider is all the way to the left, the white pixels will be those that are far away. And when the slider is all the way to the right, the white pixels will be those that are near.
Your First Depth-Inspired Filter
Next, you’ll create a filter that mimics a spotlight. The spotlight will shine on objects at a chosen depth and fade to black from there.
Because you’ve already put in the hard work of reading in the depth data and creating the mask, it’s going to be super simple.
Return to DepthImageFilters.swift and add the following method at the bottom of the DepthImageFilters class:
func createSpotlightImage(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  // 1
  let mask = createMask(for: image, withFocus: focus)
  // 2
  let output = image.filterImage.applyingFilter("CIBlendWithMask", parameters: [
    "inputMaskImage": mask
  ])
  // 3
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  // 4
  return UIImage(cgImage: cgImage)
}
Here’s what you did in these lines:
1. Got the depth mask that you implemented in createMask(for:withFocus:).
2. Used CIBlendWithMask and passed in the mask you created in the previous line. The filter essentially sets the alpha value of each pixel to the corresponding mask pixel value. When the mask pixel value is 1.0, the image pixel is completely opaque; when it’s 0.0, the image pixel is completely transparent. Since the UIView behind the UIImageView has a black color, black is what you will see coming from behind the image.
3. Created a CGImage using the CIContext.
4. Created a UIImage from the CGImage and returned it.
To see this filter in action, you first need to tell DepthImageViewController to call this method when appropriate. Open DepthImageViewController.swift and go to createImage(for:mode:filter:).
Look for the switch case that matches the .filtered and .spotlight cases, and replace the return statement with the following:
return depthFilters.createSpotlightImage(for: image, withFocus: focus)
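For orientation only, the surrounding code should end up looking something like the sketch below. The exact shape of the switch and its other cases come from the starter project, so treat this as an approximation rather than the project’s literal code:

switch (mode, filter) {
case (.filtered, .spotlight):
  return depthFilters.createSpotlightImage(for: image, withFocus: focus)
// ...the starter project’s other mode/filter cases...
default:
  return nil
}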
Build and run. Tap the Filtered segment and ensure that you select Spotlight at the top. Play with the slider. You should see this:
Congratulations! You’ve written your first depth-inspired image filter.
But you’re just getting warmed up. You want to write another one, right?