Swift Accelerate and vImage: Getting Started
Learn how to process images using Accelerate and vImage in a SwiftUI application. By Bill Morefield.
Contents
Swift Accelerate and vImage: Getting Started
30 mins
- Getting Started
- Introducing vImage and vImage_Buffer
- Creating vImage Buffers
- Converting UIImage to vImage
- Converting vImage to UIImage
- Instantiating the Wrapper
- Managing Image Orientation
- Image Processing
- Implementing Equalize Histogram
- Processing the Image
- Hooking it up to the UI
- Implementing Image Reflection
- Histograms
- Getting the Histogram
- Working with Pointers
- Finalizing the Histogram Data
- Visualizing the Histogram Data
- Where to Go From Here?
The most valuable benefit of modern computing might be the ability to do complicated math quickly and accurately. Early computers existed almost exclusively to automate tedious and error-prone calculations previously done by hand.
Today, a phone can do calculations in a fraction of a second that once would have taken a team of people weeks or even months. This increased ability invites programmers to add more complicated calculations to these devices, increasing the utility of a phone.
The Accelerate framework gives app developers an efficient, high-speed library for large-scale mathematical or image-based calculations. It uses the vector-processing units built into modern CPUs to perform calculations quickly while maintaining efficient energy usage.
Getting Started
Before diving into code, you need to understand a bit about what Accelerate is and the components you’ll use in this tutorial.
Accelerate comprises several related libraries, each of which performs a dedicated type of mathematical process. The libraries are:
- BNNS: for training and running neural networks
- vImage: an image processing library
- vDSP: a library of digital signal processing functions
- vForce: to perform arithmetic and transcendental calculations on large sets of numbers
- Sparse Solvers, BLAS and LAPACK: for linear algebra calculations
Apple also uses these libraries as building blocks of other frameworks. For example, CoreML builds on top of BNNS. The archive and compression frameworks also build on top of Accelerate. Because Apple uses these frameworks extensively, you’ll also find support on all current Apple platforms.
In this tutorial, you’ll explore the Accelerate framework using the vImage library. All the libraries work similarly, and the vImage library provides clear visual examples that will be easier to understand than more complex tasks like digital signal processing.
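To get a feel for that shared style before focusing on images, here’s a minimal sketch using vDSP rather than vImage. It isn’t part of the tutorial’s project; it just shows how a single Accelerate call processes whole vectors at once:

```swift
import Accelerate

// One call adds two vectors element-wise using the CPU's SIMD instructions,
// instead of looping over the elements yourself.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
let sum = vDSP.add(a, b)
print(sum)  // [11.0, 22.0, 33.0, 44.0]
```

The same pattern — hand Accelerate a large block of numbers and get a transformed block back — is exactly what vImage does with pixel data.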
Introducing vImage and vImage_Buffer
vImage gives you a way to manipulate large images using the CPU’s vector instruction set. These instructions let you write apps that can do complex image calculations quickly while placing less stress on mobile devices’ batteries than general-purpose instructions would. vImage works well when you need to perform very complex calculations, process real-time video or achieve high accuracy.
Accelerate is unusual among older Apple frameworks in that it has been partially updated for Swift. You’ll find functionality that works as you’d expect from a native Swift library. But you can’t ignore the framework’s older origins, because many calls still expect and use pre-Swift idioms and patterns. You’ll see how to manage those later in this tutorial.
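As a sketch of what those pre-Swift idioms look like, the snippet below calls vImageVerticalReflect_ARGB8888 — chosen purely as an illustration here; you’ll implement reflection properly later in the tutorial. The function takes C-style buffer pointers and returns a status code instead of throwing:

```swift
import Accelerate

// Allocate two uninitialized 4x4, 32-bit-per-pixel buffers to act as
// source and destination. (The pixel contents don't matter for this sketch.)
var source = try! vImage_Buffer(width: 4, height: 4, bitsPerPixel: 32)
var destination = try! vImage_Buffer(width: 4, height: 4, bitsPerPixel: 32)

// The C-style call: pointers in, vImage_Error status code out.
let error = vImageVerticalReflect_ARGB8888(
  &source, &destination, vImage_Flags(kvImageNoFlags))
if error != kvImageNoError {
  print("vImage call failed with status \(error)")
}

// The buffers own their memory, so free them when done.
source.free()
destination.free()
```

Compare that with the throwing, Swift-native vImage_Buffer initializers you’ll use below — both styles coexist in the framework.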
For an image to work with vImage, you must first convert it to the native format for vImage — a vImage_Buffer. This buffer represents raw image data, and vImage functions treat it more as a set of numbers than as image data, keeping with the vector processing paradigm of Accelerate.
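Although you’ll usually create buffers from existing images, a quick standalone sketch shows the raw-data nature of vImage_Buffer — it’s little more than a pointer to pixel data plus its dimensions:

```swift
import Accelerate

// Allocate an empty 640x480 buffer with 32 bits (4 bytes) per pixel.
var buffer = try! vImage_Buffer(width: 640, height: 480, bitsPerPixel: 32)

print(buffer.width)     // pixels per row: 640
print(buffer.height)    // number of rows: 480
print(buffer.rowBytes)  // bytes per row; may exceed 640 * 4 due to padding

// The buffer owns its memory; free it when you're done.
buffer.free()
```

Notice there’s no color space or channel order here — the buffer is just numbers, which is why you’ll need format information later to turn one back into an image.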
Creating vImage Buffers
Time to start coding!
Download the starter project by clicking the Download Materials button at the top or bottom of this tutorial. Open the starter project, then build and run.
You’ll see an app that allows you to select a photo from the camera roll. It then displays the selected image. In UIKit and SwiftUI, you usually work with a UIImage. Because vImage does not understand this format, you’ll first convert this UIImage to something it can use.
Create a new Swift file named VImageWrapper.swift. Replace the contents of the file with:
import UIKit
import Accelerate

struct VImageWrapper {
  // 1
  let vNoFlags = vImage_Flags(kvImageNoFlags)
}

extension VImageWrapper {
  // 2
  func printVImageError(error: vImage_Error) {
    let errDescription = vImage.Error(vImageError: error).localizedDescription
    print("vImage Error: \(errDescription)")
  }
}
This file contains the start of a Swift wrapper for the vImage processes you’ll create in this tutorial. Begin by importing UIKit for access to UIImage, along with the Accelerate framework. The rest of the code will be useful later:
- Most Accelerate functions expect a flags parameter used to restrict or provide context to the function. For this tutorial, you won’t need to provide this, and vNoFlags = vImage_Flags(kvImageNoFlags) provides a handy constant reflecting that.
- Even with the improved Swift integration, many methods still return an Objective-C style value to indicate the method’s status. printVImageError(error:) converts the returned vImage_Error value to a Swift-friendly vImage.Error, then prints the description to the console for debugging. You’ll use this method to handle errors throughout this tutorial.
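A minimal sketch of that error-handling pattern in isolation, using kvImageInvalidParameter purely as a stand-in for a failing call’s return value:

```swift
import Accelerate

// Older-style vImage calls return a vImage_Error status code rather than
// throwing. Here, kvImageInvalidParameter simulates such a failure.
let status = vImage_Error(kvImageInvalidParameter)

if status != kvImageNoError {
  // Convert the raw status code to a Swift-friendly vImage.Error
  // to get a readable description.
  let error = vImage.Error(vImageError: status)
  print("vImage Error: \(error.localizedDescription)")
}
```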
Converting UIImage to vImage
Next, add the following code to the VImageWrapper struct:
var uiImage: UIImage

init(uiImage: UIImage) {
  self.uiImage = uiImage
}
This code creates a UIImage property along with a custom initializer accepting a UIImage.
Next, add the following method to the structure.
func createVImage(image: UIImage) -> vImage_Buffer? {
  guard
    // 1
    let cgImage = image.cgImage,
    // 2
    let imageBuffer = try? vImage_Buffer(cgImage: cgImage)
  else {
    // 3
    return nil
  }
  // 4
  return imageBuffer
}
Notice the use of the guard statement to ensure that each step works. If any step fails, the else branch returns nil to indicate something went wrong. You’ll use this pattern often in this tutorial.
- Accelerate doesn’t provide a direct conversion from a UIImage to a vImage_Buffer, but it does support converting a CGImage to a vImage_Buffer. Because a UIImage (usually) contains its CGImage in the cgImage property, you attempt to access the underlying CGImage of the UIImage.
- With a CGImage, you attempt to create a vImage_Buffer from it. Creating this buffer can throw an error, so you use the try? operator to get nil if an error occurs.
- If either of these steps fails, you return nil.
- Otherwise, you return the vImage_Buffer.
That concludes the setup necessary for converting your images into a buffer that vImage can handle. You’ll use this extensively throughout the tutorial.
You’ll also need a way to convert back from a vImage_Buffer into a UIImage. So, code that now.
Converting vImage to UIImage
Add the following method at the end of the VImageWrapper struct:
func convertToUIImage(buffer: vImage_Buffer) -> UIImage? {
  guard
    // 1
    let originalCgImage = uiImage.cgImage,
    // 2
    let format = vImage_CGImageFormat(cgImage: originalCgImage),
    // 3
    let cgImage = try? buffer.createCGImage(format: format)
  else {
    return nil
  }

  let image = UIImage(cgImage: cgImage)
  return image
}
You can see it takes a bit more work to go from the vImage_Buffer back to the UIImage:
- To begin, you get the CGImage for the UIImage as earlier.
- As mentioned earlier, the vImage_Buffer contains only the raw image data. It has no information about what that data represents, and it needs to know the order and number of bits used for the image data. You obtain this information from the original image by creating a vImage_CGImageFormat, which contains the information needed to interpret the image data.
- You call createCGImage(format:) on the buffer to convert the image data into an image using the format determined in the previous step.
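The same round trip can be sketched outside the app using only CoreGraphics and Accelerate. This standalone illustration — not project code — builds a small CGImage in place of the tutorial’s photo, converts it to a buffer and back:

```swift
import Accelerate
import CoreGraphics

// Build a tiny 8x8 RGBA image with a bitmap context, standing in for
// the photo selected in the app.
let context = CGContext(
  data: nil, width: 8, height: 8,
  bitsPerComponent: 8, bytesPerRow: 0,
  space: CGColorSpaceCreateDeviceRGB(),
  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
context.fill(CGRect(x: 0, y: 0, width: 8, height: 8))
let original = context.makeImage()!

// Forward: CGImage → vImage_Buffer.
var buffer = try! vImage_Buffer(cgImage: original)

// Back: the format describes how to interpret the buffer's raw bytes.
let format = vImage_CGImageFormat(cgImage: original)!
let roundTripped = try! buffer.createCGImage(format: format)

buffer.free()
print(roundTripped.width, roundTripped.height)  // 8 8
```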
Now, to see your work, add the following optional property to the end of the list of properties in the VImageWrapper struct:
var processedImage: UIImage?
Then, add the following code at the end of init(uiImage:):
if let buffer = createVImage(image: uiImage),
   let converted = convertToUIImage(buffer: buffer) {
  processedImage = converted
}
The block of code above attempts to convert the image to a buffer and then back to a UIImage.
Next, open ContentView.swift and, after the existing ImageView, add the following code:
if let image = processedImage {
  ImageView(
    title: "Processed",
    image: image)
    .padding(3)
}
When a processed image exists, the app will display it using the ImageView defined in the starter project. You need to make one more change before you can see your work.