Instruction

Now that you have a model, you can integrate it into your iOS app. Find and open the starter project for this lesson; you'll find the materials for this project in the Walkthrough folder. Run the app, and you'll see a basic app that lets you select a photo using the photo picker and then displays it in the view. To help test the app, add the sample image from the starter project to the Photos app in the simulator, if you didn't already in the previous lesson: open the folder containing the sample image in Finder and drag the sample-image.jpg file onto the iOS Simulator.

Now activate the conda environment you used in the last lesson, and change to the directory where you worked in that lesson. Start the Python interpreter and enter the following, one line at a time:

from ultralytics import YOLO
import os

# Download the extra-large (x) model and export it to Core ML
# with non-maximum suppression and Int8-quantized weights.
model = YOLO("yolov8x-oiv7")
model.export(format="coreml", nms=True, int8=True)
# Rename the quantized package so the next export doesn't overwrite it,
# then export again at the default Float16 precision.
os.rename("yolov8x-oiv7.mlpackage", "yolov8x-oiv7-int.mlpackage")
model.export(format="coreml", nms=True)

# Download and export the nano (n) model.
model = YOLO("yolov8n-oiv7")
model.export(format="coreml", nms=True)

# Download and export the medium (m) model.
model = YOLO("yolov8m-oiv7")
model.export(format="coreml", nms=True)

You’ll find these commands in the materials as download-models.py.

You start by downloading the same Ultralytics model from the last lesson and converting it to Core ML while quantizing the weights to Int8. You then rename the resulting .mlpackage file so the next export won't overwrite it, and export the model again at the default Float16 precision. Finally, you download and export two smaller variants of the YOLOv8 model: the nano (n) and medium (m) sizes. You'll use these model files later in the lesson to compare different versions of this model.

Open the starter project for this lesson. In Finder, find the following four model files: yolov8x-oiv7.mlpackage, yolov8x-oiv7-int.mlpackage, yolov8m-oiv7.mlpackage, and yolov8n-oiv7.mlpackage. Drag them into the Models group of the Xcode project. Make sure to set the Action to Copy files to destination and check the ImageDetection target for the copied files. Then click Finish.
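When Xcode copies an .mlpackage into a project, it generates a Swift class for it, replacing dashes in the file name with underscores. As a rough sketch of how you might later load different variants for comparison (the class names below assume that usual dash-to-underscore conversion), you could write something like this:

import CoreML
import Vision

// A minimal sketch: load one of the generated model classes so you can
// swap variants when comparing them later in the lesson.
func loadDetector() throws -> VNCoreMLModel {
  let config = MLModelConfiguration()
  // Swap in yolov8x_oiv7_int, yolov8m_oiv7, or yolov8n_oiv7 to compare.
  let model = try yolov8x_oiv7(configuration: config).model
  return try VNCoreMLModel(for: model)
}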

Using a Model with the Vision Framework

Since you're dealing with image-related models, you can use the Vision framework to simplify interaction with the model. The Vision framework provides features to perform computer vision tasks in your app and is a natural fit for any task where you analyze images or videos. It also abstracts away some basic tasks you'd otherwise need to handle manually. For example, the model expects a 640 x 640 pixel image, meaning you'd need to resize each image before the model could process it. The Vision framework takes care of that for you.

Here's how that looks in code. The runModel() method below loads the model, wraps it for Vision, and performs a request on the selected image:

import Vision

func runModel() {
  guard
    // 1. Get the CGImage created from the selected photo.
    let cgImage = cgImage,
    // 2. Load the Core ML model using a default configuration.
    let model = try? yolov8x_oiv7(configuration: .init()).model,
    // 3. Wrap the Core ML model so Vision can work with it.
    let detector = try? VNCoreMLModel(for: model) else {
      // 4. If any of these steps fail, log the problem and stop.
      print("Unable to load photo.")
      return
  }

  // Create a Vision request that runs the detector and handles
  // the results in its completion handler.
  let visionRequest = VNCoreMLRequest(model: detector) { request, error in
    if let error = error {
      print(error.localizedDescription)
      return
    }
    // An object-detection model produces VNRecognizedObjectObservation results.
    if let results = request.results as? [VNRecognizedObjectObservation] {
      // Insert result processing code here
    }
  }

  // Scale the image to fill the model's expected 640 x 640 input.
  visionRequest.imageCropAndScaleOption = .scaleFill
  // Create a handler that performs Vision requests on the selected image.
  let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
  // Run the request, logging any error it throws.
  do {
    try handler.perform([visionRequest])
  } catch {
    print(error)
  }
}
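To give you an idea of what might replace the placeholder comment, here's a minimal sketch that simply logs each detection. Each observation carries an array of classification labels sorted by confidence, so the first entry is the model's best guess, and boundingBox is a rectangle in normalized coordinates with its origin in the lower-left corner:

// A minimal sketch of result processing, assuming you just want to
// log what the model found.
for result in results {
  // Take the highest-confidence label for this observation.
  guard let bestLabel = result.labels.first else { continue }
  print("Found \(bestLabel.identifier) " +
        "(confidence: \(bestLabel.confidence)) " +
        "at \(result.boundingBox)")
}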