Now that you’ve learned how to download models and convert them into the Core ML file format, this lesson will guide you through integrating a pre-trained local model into an iOS application. You’ll incorporate the Ultralytics model you worked with in the last lesson into your app to perform object detection on user-selected images. You’ll learn how to leverage the Vision framework to streamline model integration and see how to visualize the detection results for the app’s user.
Additionally, you’ll add several versions of the same model to the app. You’ll then let the user select the different models within the app and compare their performance by running them on the same images. By the end of this lesson, you’ll have a solid understanding of how to deploy and use machine learning models directly on iOS devices.
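As a preview of the integration you’ll build, here’s a rough sketch of running a converted detection model through the Vision framework. This is a minimal example, not the lesson’s finished code: the model class name `yolov8n` is a placeholder for whatever class Xcode generates from your converted Ultralytics model.

```swift
import Vision
import CoreML
import UIKit

// Sketch: run a converted Core ML detection model on a user-selected image.
// `yolov8n` is a hypothetical name; use the class Xcode generates for your model.
func detectObjects(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // Wrap the Core ML model so Vision can drive it.
    guard let coreMLModel = try? yolov8n(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    // The request runs inference and returns recognized objects.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Each observation carries a normalized bounding box plus ranked labels.
            let topLabel = observation.labels.first?.identifier ?? "unknown"
            print("\(topLabel): \(observation.boundingBox)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    // Perform the request against the selected image.
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    try? handler.perform([request])
}
```

Because Vision handles image scaling, pixel-format conversion, and result parsing, you avoid writing that plumbing yourself; the detection results arrive as `VNRecognizedObjectObservation` values you can draw over the image.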
By the end of this lesson, you will have learned to:
Describe how to integrate converted third-party models into a simple iOS app
Identify how to evaluate the performance and user experience of the app with a third-party model
This content was released on Sep 19, 2024. The official support period is six months from this date.
An introduction to key concepts of releasing third-party models.