Vision Framework Tutorial for iOS: Contour Detection
Learn how to detect and modify image contours in your SwiftUI iOS apps in a fun and artistic way using the Vision framework. By Yono Mittlefehldt.
Changing the Contrast
Now that you understand the settings available to you, you'll write some code to change the values for the two contrast settings. For this tutorial, you'll leave the detectsDarkOnLight and maximumImageDimension properties alone and just use their default values.
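For reference, both are plain properties on VNDetectContoursRequest, so changing them later would look something like this (hypothetical values shown; the tutorial sticks with the defaults):

```swift
import Vision

let request = VNDetectContoursRequest()

// Hypothetical overrides; the tutorial keeps the Vision defaults.
request.detectsDarkOnLight = true      // expect dark contours on a light background
request.maximumImageDimension = 512    // Vision scales larger images down to this size
```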
Open ContourDetector.swift and add the following methods to the bottom of ContourDetector:
func set(contrastPivot: CGFloat?) {
  request.contrastPivot = contrastPivot.map {
    NSNumber(value: $0)
  }
}

func set(contrastAdjustment: CGFloat) {
  request.contrastAdjustment = Float(contrastAdjustment)
}
These methods change the contrastPivot and contrastAdjustment on the VNDetectContoursRequest, respectively, with a little extra logic to allow you to set the contrastPivot to nil.

You'll recall that request is a lazy var, meaning that if it hasn't been instantiated by the time you call one of these methods, it will be then.
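For context, here's a minimal sketch of what that lazy property might look like inside ContourDetector (the starter project's actual implementation may differ):

```swift
import Vision

class ContourDetector {
  static let shared = ContourDetector()

  // Built on first access, so calling set(contrastPivot:) or
  // set(contrastAdjustment:) before any processing creates it then.
  private lazy var request: VNDetectContoursRequest = {
    let request = VNDetectContoursRequest()
    return request
  }()
}
```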
Next, open ContentViewModel.swift and find asyncUpdateContours. Update the method so it looks like this:
func asyncUpdateContours() async -> [Contour] {
  let detector = ContourDetector.shared

  // New logic
  detector.set(contrastPivot: 0.5)
  detector.set(contrastAdjustment: 2.0)

  return (try? detector.process(image: self.image)) ?? []
}
Those two new lines hard-code values for the contrastPivot and the contrastAdjustment.

Build and run the app and experiment with different values for these settings (you'll need to change the values and then build and run again). Here are some screenshots of different values in action:

OK, now you're getting some interesting results. However, it's a bit annoying that there's no magical setting to get all the contours from the image and combine them into one result.
But… there's a solution for that.
When exploring the starter project, you might have tapped on the settings icon in the bottom right corner. If you tapped on it, you would see sliders for minimum and maximum contrast pivot and adjustment.
You'll use these sliders to create ranges for these settings and loop through them. Then you'll combine all the contours from each setting pair to create a more complete set of contours for the image.
If you don't still have ContentViewModel.swift open, go ahead and open it. Delete the entire contents of asyncUpdateContours and replace it with the following code:
// 1
var contours: [Contour] = []

// 2
let pivotStride = stride(
  from: UserDefaults.standard.minPivot,
  to: UserDefaults.standard.maxPivot,
  by: 0.1)
let adjustStride = stride(
  from: UserDefaults.standard.minAdjust,
  to: UserDefaults.standard.maxAdjust,
  by: 0.2)

// 3
let detector = ContourDetector.shared

// 4
for pivot in pivotStride {
  for adjustment in adjustStride {

    // 5
    detector.set(contrastPivot: pivot)
    detector.set(contrastAdjustment: adjustment)

    // 6
    let newContours = (try? detector.process(image: self.image)) ?? []

    // 7
    contours.append(contentsOf: newContours)
  }
}

// 8
return contours
In this new version of asyncUpdateContours, you:

1. Create an empty array of Contours to store all the contours in.
2. Set up the strides for the contrastPivot and contrastAdjustment values to loop through.
3. Get a reference to the ContourDetector singleton.
4. Loop through both strides. Notice that this is a nested loop, so each value of contrastPivot will be paired with each value of contrastAdjustment.
5. Change the settings for the VNDetectContoursRequest using the accessor methods you created.
6. Run the image through the Vision contour detector API.
7. Append the results to the list of Contours and…
8. Return this list of Contours.
Phew! That was a lot, but it'll be worth it. Go ahead and build and run the app and change the sliders in the settings menu. After you dismiss the settings menu by swiping down or tapping outside it, it will begin recalculating the contours.
The ranges used in the screenshot below are:
- Contrast Pivot: 0.2 - 0.7
- Contrast Adjustment: 0.5 - 3.0
Very cool!
Thinning the Contours
This is a pretty cool effect, but you can do even better!
You might notice that some contours now look thick while others are thin. The "thick" contours are actually multiple contours of the same area but slightly offset from one another due to how the contrast was adjusted.
If you could detect duplicate contours, you'd be able to remove them, which should make the lines look thinner.
An easy way to determine whether two contours are the same is to look at how much overlap they have. It's not exactly 100% accurate, but it's a relatively fast approximation. To determine overlap, you can calculate the intersection-over-union of their bounding boxes.
Intersection over union, or IoU, is the intersection area of two bounding boxes divided by the area of their union.
When the IoU is 1.0, the bounding boxes are exactly the same. If the IoU is 0.0, there's no overlap between the two bounding boxes.
You can use this as a threshold to filter out bounding boxes that seem "close enough" to the same.
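The project's Contour type exposes this calculation through an intersectionOverUnion(with:) method, which you'll call shortly. As a sketch of the underlying math, here's one plausible IoU implementation over CGRect bounding boxes (an assumption; the project's version may differ):

```swift
import CoreGraphics

extension CGRect {
  // Intersection-over-union: 1.0 for identical rects, 0.0 for no overlap.
  func intersectionOverUnion(with other: CGRect) -> CGFloat {
    let inter = intersection(other)
    guard !inter.isNull else { return 0 }
    let interArea = inter.width * inter.height
    let unionArea = width * height + other.width * other.height - interArea
    return unionArea > 0 ? interArea / unionArea : 0
  }
}

// Two unit squares offset by half a side: intersection 0.5, union 1.5, IoU 1/3.
let a = CGRect(x: 0, y: 0, width: 1, height: 1)
let b = CGRect(x: 0.5, y: 0, width: 1, height: 1)
let overlap = a.intersectionOverUnion(with: b)
```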
Back in asyncUpdateContours in ContentViewModel.swift, add the following code just before the return statement:
// 1
if contours.count < 9000 {

  // 2
  let iouThreshold = UserDefaults.standard.iouThresh

  // 3
  var pos = 0
  while pos < contours.count {

    // 4
    let contour = contours[pos]

    // 5
    contours = contours[0...pos] + contours[(pos+1)...].filter {
      contour.intersectionOverUnion(with: $0) < iouThreshold
    }

    // 6
    pos += 1
  }
}
With this code, you:

1. Only run if the number of contours is less than 9,000. This can be the slowest part of the entire function, so try to limit when it runs.
2. Grab the IoU threshold setting, which can be changed in the settings screen.
3. Loop through each contour. You use a while loop here because you'll be dynamically changing the contours array, and you don't want to accidentally index outside of the array's bounds.
4. Index the contour array to get the current contour.
5. Keep only the contours after the current contour whose IoU is less than the threshold. Remember, if the IoU is greater than or equal to the threshold, you've determined it to be similar to the current contour, and it should be removed.
6. Increment the indexing position.
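To see the dedup pattern in isolation, here's the same while-loop applied to plain CGRect bounding boxes rather than the project's Contour type (a standalone sketch with its own helper IoU function):

```swift
import CoreGraphics

// Helper IoU over rectangles, standing in for Contour.intersectionOverUnion(with:).
func iou(_ a: CGRect, _ b: CGRect) -> CGFloat {
  let inter = a.intersection(b)
  guard !inter.isNull else { return 0 }
  let interArea = inter.width * inter.height
  let unionArea = a.width * a.height + b.width * b.height - interArea
  return interArea / unionArea
}

var boxes: [CGRect] = [
  CGRect(x: 0, y: 0, width: 10, height: 10),
  CGRect(x: 1, y: 1, width: 10, height: 10),   // near-duplicate of the first
  CGRect(x: 50, y: 50, width: 10, height: 10), // distinct
]

let iouThreshold: CGFloat = 0.5
var pos = 0
while pos < boxes.count {
  let box = boxes[pos]
  // Keep everything up to and including `box`, plus only the later
  // boxes that don't overlap it too heavily.
  boxes = Array(boxes[0...pos]) + boxes[(pos + 1)...].filter {
    iou(box, $0) < iouThreshold
  }
  pos += 1
}
// boxes now holds two rectangles; the near-duplicate was dropped.
```

Note that the suffix past `pos` shrinks as duplicates are filtered out, which is exactly why the tutorial's loop checks `pos < contours.count` on every iteration rather than iterating over a fixed range.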
Go ahead and build and run the app.
Notice how many of the thick contours are now significantly thinner!