Core ML and Vision Tutorial: On-device training on iOS
This tutorial introduces you to Core ML and Vision, two cutting-edge iOS frameworks, and shows you how to fine-tune a model on the device. By Christine Abernathy.
Contents
Core ML and Vision Tutorial: On-device training on iOS
30 mins
- Getting Started
- What is Machine Learning?
- Training With Models
- Apple’s Frameworks and Tools for Machine Learning
- Integrating a Core ML Model Into Your App
- Creating a Request
- Integrating the Request
- Adding a Related Quote
- Personalizing a Model on the Device
- k-Nearest Neighbors
- Setting Up Training Drawing Flow
- Adding the Shortcut Drawing View
- Making Model Predictions
- Updating the Model
- Loading the Model Into Memory
- Preparing the Prediction
- Testing the Prediction
- Updating the Model
- Saving the Model
- Performing the Update
- Where to Go From Here?
Adding the Shortcut Drawing View
It’s time to prepare the drawing view over the image by following these steps:
- First, declare a DrawingView.
- Next, add the drawing view to the main view.
- Then, call addCanvasForDrawing() from viewDidLoad().
- Finally, clear the canvas when the user selects an image.
Open CreateQuoteViewController.swift and add the following property after the IBOutlet declarations:
var drawingView: DrawingView!
This contains the view where the user draws the shortcut.
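If you’re curious what DrawingView looks like under the hood, here’s a rough sketch of its likely shape, wrapping PencilKit’s PKCanvasView. This is an assumption based on how the tutorial uses the class, not the starter’s actual code:

import PencilKit
import UIKit

protocol DrawingViewDelegate: AnyObject {
  func drawingDidChange(_ drawingView: DrawingView)
}

// Assumed shape of the starter's DrawingView; the real
// implementation ships with the starter project.
class DrawingView: UIView, PKCanvasViewDelegate {
  weak var delegate: DrawingViewDelegate?
  let canvasView = PKCanvasView()

  override init(frame: CGRect) {
    super.init(frame: frame)
    canvasView.frame = bounds
    canvasView.backgroundColor = .clear
    canvasView.delegate = self
    addSubview(canvasView)
  }

  required init?(coder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
  }

  // Clears any strokes from the canvas.
  func clearCanvas() {
    canvasView.drawing = PKDrawing()
  }

  // PKCanvasViewDelegate: forward stroke changes to our delegate.
  func canvasViewDrawingDidChange(_ canvasView: PKCanvasView) {
    delegate?.drawingDidChange(self)
  }
}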
Next, add the following code to implement addCanvasForDrawing():
drawingView = DrawingView(frame: stickerView.bounds)
view.addSubview(drawingView)
drawingView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
  drawingView.topAnchor.constraint(equalTo: stickerView.topAnchor),
  drawingView.leftAnchor.constraint(equalTo: stickerView.leftAnchor),
  drawingView.rightAnchor.constraint(equalTo: stickerView.rightAnchor),
  drawingView.bottomAnchor.constraint(equalTo: stickerView.bottomAnchor)
])
Here you create an instance of the drawing view and add it to the main view. You set Auto Layout constraints so that it overlaps only the sticker view.
Then, add the following to the end of viewDidLoad():
addCanvasForDrawing()
drawingView.isHidden = true
Here you add the drawing view and make sure it’s initially hidden.
Now, in imagePickerController(_:didFinishPickingMediaWithInfo:), add the following right after addStickerButton is enabled:
drawingView.clearCanvas()
drawingView.isHidden = false
Here you clear any previous drawings and unhide the drawing view so the user can add stickers.
Build and run the app and select a photo. Use your mouse, or finger, to verify that you can draw on the selected image:
Progress has been made. Onwards!
Making Model Predictions
Drag UpdatableDrawingClassifier.mlmodel from the starter’s Models directory into your Xcode project’s Models folder:
Now, select UpdatableDrawingClassifier.mlmodel in Project navigator. The Update section lists the two inputs the model expects during training. One represents the drawing and the other the emoji label:
The Prediction section lists the input and outputs. The drawing input format matches the one used during training. The label output represents the predicted emoji label.
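You can also inspect this interface programmatically via the generated class’s modelDescription, which is handy for debugging. A minimal sketch (the comments show what you’d expect to see):

import CoreML

// Print the model's prediction and training interfaces.
let description = UpdatableDrawingClassifier().model.modelDescription
print(description.inputDescriptionsByName)          // "drawing" image input
print(description.outputDescriptionsByName)         // "label" string output
print(description.trainingInputDescriptionsByName)  // inputs used for updates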
Select the Models folder in Xcode’s Project navigator. Then, go to File ▸ New ▸ File…, choose the iOS ▸ Source ▸ Swift File template, and click Next. Name the file UpdatableModel.swift and click Create.
Now, replace the Foundation import with the following:
import CoreML
This brings in the machine learning framework.
Now add the following extension to the end of the file:
extension UpdatableDrawingClassifier {
  var imageConstraint: MLImageConstraint {
    return model.modelDescription
      .inputDescriptionsByName["drawing"]!
      .imageConstraint!
  }

  func predictLabelFor(_ value: MLFeatureValue) -> String? {
    guard
      let pixelBuffer = value.imageBufferValue,
      let prediction = try? prediction(drawing: pixelBuffer).label
      else {
        return nil
    }
    if prediction == "unknown" {
      print("No prediction found")
      return nil
    }
    return prediction
  }
}
This extends UpdatableDrawingClassifier, which is the generated model class. Your code adds the following:
- imageConstraint: Ensures the image matches what the model expects.
- predictLabelFor(_:): Calls the model’s prediction method with the CVPixelBuffer representation of the drawing. It returns the predicted label, or nil if there’s no prediction.
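To see how these two pieces fit together, here’s a hypothetical helper; emoji(for:) and its drawingImage parameter are illustrative names, not part of the tutorial code:

import CoreGraphics
import CoreML

// Hypothetical helper: wrap a CGImage in an MLFeatureValue that
// satisfies the model's constraint, then ask the model for a label.
func emoji(for drawingImage: CGImage) -> String? {
  let classifier = UpdatableDrawingClassifier()
  guard let value = try? MLFeatureValue(
    cgImage: drawingImage,
    constraint: classifier.imageConstraint) else {
      return nil
  }
  return classifier.predictLabelFor(value)
}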
Updating the Model
Add the following after the import statement:
struct UpdatableModel {
  private static var updatedDrawingClassifier: UpdatableDrawingClassifier?
  private static let appDirectory = FileManager.default.urls(
    for: .applicationSupportDirectory,
    in: .userDomainMask).first!
  private static let defaultModelURL =
    UpdatableDrawingClassifier.urlOfModelInThisBundle
  private static var updatedModelURL =
    appDirectory.appendingPathComponent("personalized.mlmodelc")
  private static var tempUpdatedModelURL =
    appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

  private init() { }

  static var imageConstraint: MLImageConstraint {
    let model = updatedDrawingClassifier ?? UpdatableDrawingClassifier()
    return model.imageConstraint
  }
}
The struct represents your updatable model. The definition here sets up properties for the model, including the locations of the original compiled model and the saved, personalized model.
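Both saved-model URLs end in .mlmodelc because Core ML loads compiled model bundles; Xcode compiles the .mlmodel you dragged in at build time, and urlOfModelInThisBundle points to that compiled copy. If you ever need to compile a raw .mlmodel at runtime, which this tutorial doesn’t require, a minimal sketch:

import CoreML

// Hypothetical example: compile a raw .mlmodel file at runtime.
// rawModelURL is a placeholder for wherever the file lives on disk.
func loadCompiledModel(from rawModelURL: URL) throws -> MLModel {
  // compileModel(at:) writes a temporary .mlmodelc directory and
  // returns its URL; move it somewhere permanent to reuse it later.
  let compiledURL = try MLModel.compileModel(at: rawModelURL)
  return try MLModel(contentsOf: compiledURL)
}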
Loading the Model Into Memory
Now, add the following private extension after the struct definition:
private extension UpdatableModel {
  static func loadModel() {
    let fileManager = FileManager.default
    if !fileManager.fileExists(atPath: updatedModelURL.path) {
      do {
        let updatedModelParentURL =
          updatedModelURL.deletingLastPathComponent()
        try fileManager.createDirectory(
          at: updatedModelParentURL,
          withIntermediateDirectories: true,
          attributes: nil)
        let toTemp = updatedModelParentURL
          .appendingPathComponent(defaultModelURL.lastPathComponent)
        try fileManager.copyItem(
          at: defaultModelURL,
          to: toTemp)
        try fileManager.moveItem(
          at: toTemp,
          to: updatedModelURL)
      } catch {
        print("Error: \(error)")
        return
      }
    }
    guard let model = try? UpdatableDrawingClassifier(
      contentsOf: updatedModelURL) else {
        return
    }
    updatedDrawingClassifier = model
  }
}
This code loads the updated, compiled model into memory. If the personalized model doesn’t exist on disk yet, it first copies the default compiled model from the app bundle into the Application Support directory. Next, add the following public extension right after the struct definition:
extension UpdatableModel {
  static func predictLabelFor(_ value: MLFeatureValue) -> String? {
    loadModel()
    return updatedDrawingClassifier?.predictLabelFor(value)
  }
}
The predict method loads the model into memory, then calls the prediction method you added to the UpdatableDrawingClassifier extension.
Now, open Drawing.swift and add the following after the PencilKit import:
import CoreML
You need this to prepare the prediction input.
Preparing the Prediction
Core ML expects you to wrap the input data for a prediction in an MLFeatureValue object. This object includes both the data value and its type.
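MLFeatureValue has initializers for each Core ML feature type. A few illustrative examples; the values here are placeholders, not tutorial code:

import CoreML

// Wrapping simple feature types:
let labelFeature = MLFeatureValue(string: "😊")  // a string feature
let countFeature = MLFeatureValue(int64: 42)     // an integer feature
print(labelFeature.type, countFeature.type)
// Image features are validated against a model's MLImageConstraint,
// which is exactly what you'll do next with
// MLFeatureValue(cgImage:constraint:).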
In Drawing.swift, add the following property to the struct:
var featureValue: MLFeatureValue {
  let imageConstraint = UpdatableModel.imageConstraint
  let preparedImage = whiteTintedImage
  let imageFeatureValue =
    try? MLFeatureValue(cgImage: preparedImage, constraint: imageConstraint)
  return imageFeatureValue!
}
This defines a computed property that sets up the drawing’s feature value. The feature value is based on the white-tinted representation of the image and the model’s image constraint.
Now that you’ve prepared the input, you can focus on triggering the prediction.
First, open CreateQuoteViewController.swift and add the DrawingViewDelegate extension to the end of the file:
extension CreateQuoteViewController: DrawingViewDelegate {
  func drawingDidChange(_ drawingView: DrawingView) {
    // 1
    let drawingRect = drawingView.boundingSquare()
    let drawing = Drawing(
      drawing: drawingView.canvasView.drawing,
      rect: drawingRect)
    // 2
    let imageFeatureValue = drawing.featureValue
    // 3
    let drawingLabel =
      UpdatableModel.predictLabelFor(imageFeatureValue)
    // 4
    DispatchQueue.main.async {
      drawingView.clearCanvas()
      guard let emoji = drawingLabel else {
        return
      }
      self.addStickerToCanvas(emoji, at: drawingRect)
    }
  }
}
Recall that you added a DrawingView to draw sticker shortcuts. In this code, you conform to the protocol to get notified whenever the drawing changes. Your implementation does the following:
- Creates a Drawing instance with the drawing info and its bounding square.
- Creates the feature value for the drawing prediction input.
- Makes a prediction to get the emoji that corresponds to the drawing.
- Updates the view on the main queue to clear the canvas and add the predicted emoji to the view.
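Note that drawingDidChange(_:) runs the prediction synchronously on whatever queue calls it, then hops to the main queue for the UI work. If predictions ever caused UI hiccups, one variation, which is an assumption on my part rather than something the tutorial calls for, would be to push the prediction to a background queue:

// Hypothetical variation: run the prediction off the current queue.
DispatchQueue.global(qos: .userInitiated).async {
  let drawingLabel = UpdatableModel.predictLabelFor(imageFeatureValue)
  DispatchQueue.main.async {
    drawingView.clearCanvas()
    guard let emoji = drawingLabel else { return }
    self.addStickerToCanvas(emoji, at: drawingRect)
  }
}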
Then, in imagePickerController(_:didFinishPickingMediaWithInfo:), remove the following:
drawingView.clearCanvas()
You don’t need to clear the drawing here. You’ll do this after you make a prediction.