Conclusion


This module has equipped you with the skills to integrate external machine-learning models into your iOS apps. You've explored common model formats used outside of Apple platforms and learned how to set up a Python environment with conda, which lets you isolate your work on each model. Conda keeps a separate environment for each project, avoiding the dependency conflicts that are common in Python machine-learning projects. You then learned how to convert these models to Core ML, including the important first step of checking whether conversion is needed at all, since many models are already available in Core ML format even when that isn't their native environment.
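As a quick refresher, a per-project conda environment can be captured in an environment file. The sketch below is illustrative only: the environment name and the exact package list are assumptions, not the course's specific setup.

```yaml
# environment.yml -- hypothetical example of an isolated conversion environment
name: coreml-convert        # illustrative project name
channels:
  - conda-forge
dependencies:
  - python=3.10             # choose a version your conversion tools support
  - pip
  - pip:
      - coremltools         # Apple's model-conversion toolkit
      - torch               # only needed if converting PyTorch models
```

You can create and activate the environment with `conda env create -f environment.yml` followed by `conda activate coreml-convert`. Keeping one such file per model project is what prevents one model's dependencies from breaking another's.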

Finally, in this lesson, you took the converted machine-learning models and implemented them with Vision in an app. You saw how the model size reduction techniques from the previous lesson can improve performance at little cost in accuracy, and how different model sizes trade resources for results in a predictable way. You should now know the basics of converting models to run on iOS and executing them locally in your app.
