This module has equipped you with the skills to integrate external machine-learning models into your iOS apps. You explored common model formats found outside of Apple platforms and learned how to set up an isolated Python environment for each model using conda. Conda lets you keep a separate environment per project, which avoids the dependency conflicts that are common in Python machine-learning work. You then learned how to convert these models to Core ML, including the important caveat that you should only convert when necessary: many models are already available in Core ML format, even when that isn't the model's native environment.
Finally, in this lesson, you took converted machine-learning Vision models and implemented them in an app. You also saw how the model size reduction steps from the previous lesson can improve performance at little cost in model accuracy, and how different model sizes offer a predictable trade-off between resource use and results. You should now know the basics of converting models to work on iOS and running them locally in your app.
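As a quick refresher on why model size reduction costs so little accuracy, here is a minimal sketch of 8-bit linear quantization in pure Python. The helper names are hypothetical and for illustration only; in practice you'd use the coremltools compression utilities rather than code like this.

```python
# Minimal sketch of 8-bit linear quantization, the kind of weight
# compression used to shrink converted models. Pure Python, for
# illustration only; real conversions use coremltools utilities.

def quantize(weights, bits=8):
    """Map floats onto 2**bits evenly spaced levels (hypothetical helper)."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Reconstruct approximate float weights from the integer codes."""
    return [c * scale + lo for c in codes]

weights = [0.013, -0.842, 0.507, 0.999, -0.35]
codes, scale, lo = quantize(weights)
restored = dequantize(codes, scale, lo)

# Each 32-bit float becomes one 8-bit code: roughly a 4x size
# reduction, with a small, bounded error per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err <= scale / 2)  # error is at most half a quantization step
```

Rounding each weight to the nearest of 256 levels bounds the per-weight error to half a quantization step, which is why a well-chosen lower precision barely moves overall model accuracy while cutting the file size substantially.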
This content was released on Sep 19 2024. The official support period is six months from this date.
Some closing thoughts and a brief review of incorporating third-party models into your app.