Instruction

Exporting the Model from Create ML

To export your trained model from Create ML, navigate to the Output section, where you’ll find your model ready for export. Simply click the Get button, which will prompt you to choose a location to save the model file. Ensure you select a location that’s easy to access for the following steps.

Integrating the Model Into a SwiftUI App

To integrate your custom image classification model into a SwiftUI app, you’ll follow a process that involves creating an instance of your model, setting up the image-classification request, and updating the UI with the results. Here’s how you can achieve this:

1. Creating an Image Classifier Instance

Start by creating an instance of your Core ML model. Do this once, when the app launches, so a single instance is reused throughout the app for efficient performance.

// 1. Initialize the model
private let model: VNCoreMLModel

init() {
  // 2. Load the Core ML model
  guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
    fatalError("Failed to load Core ML model.")
  }
  self.model = model
}

2. Creating an Image-Classification Request

To classify an image, you must create a VNCoreMLRequest using your model. This request will process the image and provide classification results.

func classifyImage(_ image: UIImage) {
  // 1. Create a VNCoreMLRequest with the model
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // 2. Handle the classification results
    guard let results = request.results as? [VNClassificationObservation],
          let firstResult = results.first else {
      return
    }
    print("Classification: \(firstResult.identifier), Confidence: \(firstResult.confidence)")
  }

  // 3. Configure the request to crop and scale input images to the model's expected size
  request.imageCropAndScaleOption = .centerCrop

  // You'll perform this request with a VNImageRequestHandler in the next step.
}

3. Creating a Request Handler

Use a VNImageRequestHandler to perform the request on an image. The handler processes the image and delivers the results through the request's completion handler.

func performClassification(for image: UIImage) {
  guard let cgImage = image.cgImage else {
    return
  }

  // 1. Create a VNImageRequestHandler with the image
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

  // 2. Perform the classification request
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // Handle the results in the completion handler
  }
  do {
    try handler.perform([request])
  } catch {
    print("Failed to perform classification request: \(error)")
  }
}

4. Handling and Extracting High-Confidence Results

Once you receive the classification results from the Core ML model, the next step is to handle these results and identify the most accurate classification based on confidence scores. This process involves checking for valid results and selecting the one with the highest confidence to ensure that you present the most reliable classification to the user.

// 1. Handle the classification results
guard let results = request.results as? [VNClassificationObservation] else {
  print("No results found")
  completion(nil, nil)
  return
}

// 2. Find the top result based on confidence
let topResult = results.max(by: { a, b in a.confidence < b.confidence })
guard let bestResult = topResult else {
  print("No top result found")
  completion(nil, nil)
  return
}

5. Updating the UI with Classification Results

After receiving the classification results, it’s essential to update the UI to present these results to the user in a clear and meaningful way. This step involves converting the raw prediction data into a user-friendly format and ensuring the UI elements reflect the updated information. Typically, this means updating labels, text fields, or other UI components with the classification results. It’s crucial to perform these updates on the main thread to ensure smooth and responsive user interactions.
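
Here's a minimal sketch of what that could look like in SwiftUI. The EmotionClassifierViewModel type and classificationLabel property below are illustrative names, not part of the lesson's starter project:

import SwiftUI
import Vision

final class EmotionClassifierViewModel: ObservableObject {
  // Published property the SwiftUI view observes
  @Published var classificationLabel = "No prediction yet"

  func showResult(_ observation: VNClassificationObservation) {
    // 1. Always publish UI changes from the main thread
    DispatchQueue.main.async {
      self.classificationLabel =
        "\(observation.identifier) (\(Int(observation.confidence * 100))%)"
    }
  }
}

struct ResultView: View {
  @ObservedObject var viewModel: EmotionClassifierViewModel

  var body: some View {
    // 2. The label updates automatically whenever classificationLabel changes
    Text(viewModel.classificationLabel)
      .font(.headline)
  }
}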

Tips to Optimize the Model for Real-Time Performance

Optimize Predictions on Background Threads

Run your model’s predictions off the main thread to keep the UI responsive.

DispatchQueue.global(qos: .userInitiated).async {
  classifyImage(inputImage)
}

Batch Processing

For tasks that require many classifications in a short period, reuse a single request and run it over the images in a batch. This minimizes the overhead of creating a new request for every image.

func classifyBatchImages(_ images: [UIImage]) {
  // 1. Reuse a single request for every image instead of rebuilding it each time
  let request = VNCoreMLRequest(model: model) { request, error in
    // Handle each image's results in the completion handler
  }
  // 2. Perform the request once per image
  for image in images {
    guard let cgImage = image.cgImage else { continue }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
  }
}

Reduce Image Size

Before passing images to the model, resize them to match the input size your model expects (e.g., 224x224 pixels). This reduces the computational load.

func resizeImage(_ image: UIImage) -> UIImage? {
  let targetSize = CGSize(width: 224, height: 224)
  let format = UIGraphicsImageRendererFormat()
  format.scale = 1 // Produce exactly 224x224 pixels, independent of screen scale
  let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: targetSize))
  }
}

Profile Performance

Use Xcode's profiling tools, such as Instruments, to measure how long your model's predictions take and to identify bottlenecks or areas for improvement.
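
For example, one lightweight approach is to wrap each prediction in a signpost interval so it shows up in Instruments. This is a minimal sketch; the subsystem and category strings are placeholders, and classifyWithTiming is an illustrative wrapper around the classifyImage function from earlier:

import os
import UIKit

// Placeholder subsystem/category; use your app's own identifiers in practice
private let signposter = OSSignposter(subsystem: "com.example.EmotionApp",
                                      category: "Classification")

func classifyWithTiming(_ image: UIImage) {
  // Mark the start and end of a prediction so Instruments can measure its duration
  let state = signposter.beginInterval("classifyImage")
  classifyImage(image)
  signposter.endInterval("classifyImage", state)
}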
