Beginning Machine Learning with Keras & Core ML
In this Keras machine learning tutorial, you’ll learn how to train a convolutional neural network model, convert it to Core ML, and integrate it into an iOS app. By Audrey Tam.
Contents
Beginning Machine Learning with Keras & Core ML
50 mins
- Why Use Keras?
- Getting Started
- Setting Up Docker
- ML in a Nutshell
- Keras Code Time!
- Import Utilities & Dependencies
- Load & Pre-Process Data
- Define Model Architecture
- Train the Model
- Convolutional Neural Network: Explanations
- Sequential
- Conv2D
- MaxPooling2D
- Dropout
- Flatten
- Dense
- Compile
- Fit
- Verbose
- Results
- Convert to Core ML Model
- Inspect Core ML model
- Add Metadata for Xcode
- Save the Core ML Model
- Use Model in iOS App
- Where To Go From Here?
- Resources
- Further Reading
Add Metadata for Xcode
Now add the following, substituting your own name and license info for the first two items, and run it.
coreml_mnist.author = 'raywenderlich.com'
coreml_mnist.license = 'Razeware'
coreml_mnist.short_description = 'Image based digit recognition (MNIST)'
coreml_mnist.input_description['image'] = 'Digit image'
coreml_mnist.output_description['output'] = 'Probability of each digit'
coreml_mnist.output_description['classLabel'] = 'Labels of digits'
This information appears when you select the model in Xcode’s Project navigator.
Save the Core ML Model
Finally, add the following, and run it.
coreml_mnist.save('MNISTClassifier.mlmodel')
This saves the mlmodel file in the notebook folder.
Congratulations, you now have a Core ML model that classifies handwritten digits! It’s time to use it in the iOS app.
Use Model in iOS App
Now you just follow the procedure described in Core ML and Vision: Machine Learning in iOS 11 Tutorial. The steps are the same, but I’ve rearranged the code to match Apple’s sample app Image Classification with Vision and CoreML.
Step 1. Drag the model into the app:
Open the starter app in Xcode, and drag MNISTClassifier.mlmodel from Finder into the project’s Project navigator. Select it to see the metadata you added.
If instead of Automatically generated Swift model class it says to build the project to generate the model class, go ahead and do that.
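When the model is in the project, Xcode generates a Swift class named after the file. You never need to read the generated file, but it helps to know its rough shape. The sketch below is an approximation for orientation only, not Xcode’s actual output (which varies by Xcode version); the input and output names come from the metadata you added earlier:
import CoreML

// A rough sketch of the interface Xcode generates; the real file differs
// by Xcode version. The names come from the model's inputs and outputs.
class MNISTClassifierOutput {
  /// Probability of each digit, keyed by label.
  let output: [String: Double]
  /// Label of the most likely digit.
  let classLabel: String

  init(output: [String: Double], classLabel: String) {
    self.output = output
    self.classLabel = classLabel
  }
}

class MNISTClassifier {
  let model: MLModel

  init() {
    // The generated class loads the compiled model from the app bundle.
    let url = Bundle.main.url(forResource: "MNISTClassifier", withExtension: "mlmodelc")!
    model = try! MLModel(contentsOf: url)
  }

  func prediction(image: CVPixelBuffer) throws -> MNISTClassifierOutput {
    let input = try MLDictionaryFeatureProvider(
      dictionary: ["image": MLFeatureValue(pixelBuffer: image)])
    let result = try model.prediction(from: input)
    return MNISTClassifierOutput(
      output: result.featureValue(for: "output")?.dictionaryValue as? [String: Double] ?? [:],
      classLabel: result.featureValue(for: "classLabel")?.stringValue ?? "")
  }
}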
Step 2. Import the CoreML and Vision frameworks:
Open ViewController.swift, and import the two frameworks, just below import UIKit:
import CoreML
import Vision
Step 3. Create VNCoreMLModel and VNCoreMLRequest objects:
Add the following code below the outlets:
lazy var classificationRequest: VNCoreMLRequest = {
  // Load the ML model through its generated class and create a Vision request for it.
  do {
    let model = try VNCoreMLModel(for: MNISTClassifier().model)
    return VNCoreMLRequest(model: model, completionHandler: handleClassification)
  } catch {
    fatalError("Can't load Vision ML model: \(error).")
  }
}()

func handleClassification(request: VNRequest, error: Error?) {
  guard let observations = request.results as? [VNClassificationObservation]
    else { fatalError("Unexpected result type from VNCoreMLRequest.") }
  guard let best = observations.first
    else { fatalError("Can't get best result.") }
  DispatchQueue.main.async {
    self.predictLabel.text = best.identifier
    self.predictLabel.isHidden = false
  }
}
The request object works for any image that the handler in Step 4 passes to it, so you only need to define it once, as a lazy var.
The request object’s completion handler receives request and error objects. You check that request.results is an array of VNClassificationObservation objects, which is what the Vision framework returns when the Core ML model is a classifier, rather than a predictor or image processor.
A VNClassificationObservation object has two properties: identifier, a String, and confidence, a number between 0 and 1 giving the probability that the classification is correct. You take the first result, which has the highest confidence value, and dispatch back to the main queue to update predictLabel. Classification work happens off the main queue, because it can be slow.
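If you’d like to see how sure the model is, one option (not part of the starter app) is to show the top candidates along with their confidence values. A possible variation on handleClassification:
func handleClassification(request: VNRequest, error: Error?) {
  guard let observations = request.results as? [VNClassificationObservation]
    else { fatalError("Unexpected result type from VNCoreMLRequest.") }
  // Vision returns observations sorted by confidence, highest first.
  let top3 = observations.prefix(3)
    .map { String(format: "%@: %.2f", $0.identifier, $0.confidence) }
    .joined(separator: "  ")
  DispatchQueue.main.async {
    self.predictLabel.text = top3
    self.predictLabel.isHidden = false
  }
}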
Step 4. Create and run a VNImageRequestHandler:
Locate predictTapped(), and replace the print statement with the following code:
let ciImage = CIImage(cgImage: inputImage)
let handler = VNImageRequestHandler(ciImage: ciImage)
do {
  try handler.perform([classificationRequest])
} catch {
  print(error)
}
You create a CIImage from inputImage, then create the VNImageRequestHandler object for this ciImage, and run the handler on an array of VNCoreMLRequest objects; in this case, the array holds just the one request object you created in Step 3.
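Apple’s sample app, mentioned above, performs the Vision request on a background queue so the main queue stays responsive. A variation of the code above that follows that pattern might look like this (the completion handler from Step 3 already dispatches UI updates back to the main queue):
let ciImage = CIImage(cgImage: inputImage)
// Perform the request off the main queue; handler.perform(_:) is synchronous
// and blocks the thread it runs on until classification finishes.
DispatchQueue.global(qos: .userInitiated).async {
  let handler = VNImageRequestHandler(ciImage: ciImage)
  do {
    try handler.perform([self.classificationRequest])
  } catch {
    print(error)
  }
}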
Build and run. Draw a digit in the center of the drawing area, then tap Predict. Tap Clear to try again.
Larger drawings tend to work better, but the model often has trouble with ‘7’ and ‘4’. Not surprising, as a PCA visualization of the MNIST data shows 7s and 4s clustered with 9s.
Vision also handles converting the UIImage object to CVPixelBuffer format for you. If you don’t use Vision, include image_scale=1/255.0 as a parameter when you convert the Keras model to Core ML: the Keras model trains on images with grayscale values in the range [0, 1], while CVPixelBuffer values are in the range [0, 255].
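For comparison, here’s a sketch of the non-Vision path, calling the generated class directly. The makePixelBuffer(from:) helper is hypothetical; it stands in for the UIImage-to-CVPixelBuffer conversion (and scaling down to the model’s 28×28 grayscale input) that Vision otherwise does for you:
func classifyWithoutVision(_ image: UIImage) {
  // makePixelBuffer(from:) is a hypothetical helper: it must produce a
  // 28x28 grayscale CVPixelBuffer, the format this model expects.
  guard let pixelBuffer = makePixelBuffer(from: image) else { return }
  do {
    let prediction = try MNISTClassifier().prediction(image: pixelBuffer)
    predictLabel.text = prediction.classLabel  // most likely digit
    predictLabel.isHidden = false
  } catch {
    print(error)
  }
}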
Thanks to Sri Raghu M, Matthijs Hollemans and Hon Weng Chong for helpful discussions!
Where To Go From Here?
You can download the complete notebook and project for this tutorial here. If the model shows up as missing in the app, replace it with the one in the notebook folder.
You’re now well-equipped to train a deep learning model in Keras, and integrate it into your app. Here are some resources and further reading to deepen your own learning:
Resources
- Keras Documentation
- coremltools.converters.keras.convert
- Matthijs Hollemans’s blog
- Jason Brownlee’s blog
Further Reading
- François Chollet, Deep Learning with Python, Manning Publications
- Stanford CS231N Convolutional Networks
- Comparing Top Deep Learning Frameworks
- Preprocessing in Data Science (Part 1): Centering, Scaling, and KNN
- Gentle Introduction to the Adam Optimization Algorithm for Deep Learning
I hope you enjoyed this introduction to machine learning and Keras. Please join the discussion below if you have any questions or comments.