Now that you have a model, you can integrate it into your iOS app. Find and open the starter project for this lesson; you’ll find the materials for this project in the Walkthrough folder. Run the app, and you’ll see a basic app that lets you select a photo using the photo picker and then shows it in the view. To help test the app, add the sample image from the starter project to the Photos app in the simulator if you didn’t already do so in the previous lesson: open the folder with the sample image in Finder and drag the sample-image.jpg file onto the iOS simulator.
Now activate the conda environment you used in the last lesson, and change to the directory where you worked in that lesson. Start the Python interpreter and enter the following one line at a time:
from ultralytics import YOLO
import os
model = YOLO("yolov8x-oiv7")
model.export(format="coreml", nms=True, int8=True)
os.rename("yolov8x-oiv7.mlpackage", "yolov8x-oiv7-int.mlpackage")
model.export(format="coreml", nms=True)
model = YOLO("yolov8n-oiv7")
model.export(format="coreml", nms=True)
model = YOLO("yolov8m-oiv7")
model.export(format="coreml", nms=True)
You’ll find these commands in the materials as download-models.py.
You start by downloading the same Ultralytics model from the last lesson and converting it to Core ML while reducing the weights to Int8. You then rename the resulting .mlpackage model file before exporting the model again at the full Float16 size. Finally, you download and export two more versions of the YOLOv8 model in different sizes: the nano (yolov8n) and medium (yolov8m) variants. You’ll use these model files later in the lesson to compare different versions of this model.
Return to the starter project in Xcode. In Finder, find the following four model files: yolov8x-oiv7.mlpackage, yolov8x-oiv7-int.mlpackage, yolov8m-oiv7.mlpackage, and yolov8n-oiv7.mlpackage. Drag them into the Models group of the Xcode project. Make sure to set the Action to Copy files to destination and check the ImageDetection target for the copied files. Then click Finish.
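Each .mlpackage you add produces its own generated Swift class, so all four models become available to your code in the same way. If you’d like a quick sanity check that a copied model loads, a sketch like the one below would do; it assumes Xcode’s usual naming convention of replacing hyphens with underscores, as in yolov8n_oiv7 for yolov8n-oiv7.mlpackage:
import CoreML

// Illustrative only: load one of the smaller copied models and print its
// inputs, outputs and metadata to confirm it made it into the app target.
func describeNanoModel() {
  if let model = try? yolov8n_oiv7(configuration: .init()).model {
    print(model.modelDescription)
  }
}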
Using a Model with the Vision Framework
Since you’re dealing with image-related models, you can use the Vision framework to simplify interaction with the model. The Vision framework provides features to perform computer vision tasks in your app, and it fits nicely with any task where you analyze images or video. It also abstracts away some basic tasks you’d otherwise need to handle manually. For example, the model expects a 640 x 640 image, meaning you’d need to resize each image before the model can process it. The Vision framework will take care of that for you.
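To get a feel for what that saves you, here’s a rough sketch of the kind of manual preprocessing you’d otherwise have to write yourself: scaling an arbitrary CGImage down to the model’s 640 x 640 input. This helper is purely illustrative and isn’t part of the lesson’s code:
import CoreGraphics

// Draw the source image into a 640 x 640 bitmap context and return the
// scaled result. Vision performs equivalent scaling for you automatically.
func resizeForModel(_ image: CGImage, to side: Int = 640) -> CGImage? {
  guard let context = CGContext(
    data: nil,
    width: side,
    height: side,
    bitsPerComponent: 8,
    bytesPerRow: 0,
    space: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
  ) else { return nil }
  context.interpolationQuality = .high
  context.draw(image, in: CGRect(x: 0, y: 0, width: side, height: side))
  return context.makeImage()
}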
To use the framework, open ContentView.swift and add the following import below the import at the top of the file:
import Vision
Now add the following method to the view below the cgImage parameter:
func runModel() {
  guard
    // 1
    let cgImage = cgImage,
    // 2
    let model = try? yolov8x_oiv7(configuration: .init()).model,
    // 3
    let detector = try? VNCoreMLModel(for: model) else {
    // 4
    print("Unable to load photo.")
    return
  }
}
You’ll use this method to perform object detection on the image. Here are the first steps.
You first need to set up the model and Vision framework, and since each can fail, you’ll wrap them inside a guard statement. This first step validates that the user has chosen a valid photo that can be represented as a CGImage. The PhotosPicker .onChange(of:initial:_:) modifier for selectedImage handles setting this photo property when the user selects an image from the photo library.
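For reference, a minimal sketch of that wiring might look like the code below, assuming state properties named selectedImage and cgImage; the starter project already contains its own version of this, so you don’t need to add it:
import SwiftUI
import PhotosUI
import UIKit

// Illustrative only: convert the picked item's data into a CGImage that the
// detection code can pass to the model.
struct PickerSketch: View {
  @State private var selectedImage: PhotosPickerItem?
  @State private var cgImage: CGImage?

  var body: some View {
    PhotosPicker("Select a photo", selection: $selectedImage, matching: .images)
      .onChange(of: selectedImage, initial: false) { _, newItem in
        Task {
          if let item = newItem,
             let data = try? await item.loadTransferable(type: Data.self),
             let uiImage = UIImage(data: data) {
            cgImage = uiImage.cgImage
          }
        }
      }
  }
}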
You attempt to load one of the models that you added earlier in this section. Note the yolov8x_oiv7 class name shown when you viewed the information about the mlpackage file. You create an instance of the class with the default configuration specified by .init(). For the Vision framework, you need access to the model object, accessible using the model property of the class. Since this can throw an exception, you use the try? keyword, which returns nil if the call throws an error.
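While debugging, you might temporarily trade try? for do/catch so a failure tells you why it happened. The loadDetector() helper below is only an illustrative sketch, not part of the lesson’s code:
import CoreML
import Vision

// Illustrative only: the same loading steps, but surfacing the underlying
// error instead of silently turning it into nil.
func loadDetector() -> VNCoreMLModel? {
  do {
    let model = try yolov8x_oiv7(configuration: .init()).model
    return try VNCoreMLModel(for: model)
  } catch {
    print("Couldn't prepare the model: \(error.localizedDescription)")
    return nil
  }
}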
With the model loaded, you attempt to create a VNCoreMLModel class using the model loaded in step two. This class encapsulates the information loaded from the model file and acts as the interface between Swift and the model.
If any of these steps fail, then the guard/else block takes over, which will print a debugging message and return. In a real app, you’d want to provide more information and assistance to the user.
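For example, one possible approach stores a message in view state and presents it with an alert; this is only a sketch, and the view and its properties are illustrative rather than part of the project:
import SwiftUI

// Illustrative only: keep a user-facing message in state and show it as an
// alert instead of just printing to the console.
struct DetectionErrorSketch: View {
  @State private var showError = false
  @State private var errorMessage = ""

  var body: some View {
    Text("Selected photo appears here")
      .alert("Detection failed", isPresented: $showError) {
        Button("OK", role: .cancel) { }
      } message: {
        Text(errorMessage)
      }
  }
}
In the guard’s else block, you’d then set errorMessage and showError instead of calling print.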
Now add the following code to the end of your new function:
// 1
let visionRequest = VNCoreMLRequest(model: detector) { request, error in
  if let error = error {
    print(error.localizedDescription)
    return
  }
  // 2
  if let results = request.results as? [VNRecognizedObjectObservation] {
    // Insert result processing code here
  }
}
This code builds the Vision request to run the model against an image but doesn’t execute it. It also defines a closure block that will execute when the detection completes. Here’s what each step does.
A VNCoreMLRequest creates the actual request for the Vision framework. You pass the VNCoreMLModel that you instantiated in step three as a parameter. The results of the request will be sent to the closure as a VNRequest named request, with any errors sent through the error parameter passed to the closure block. This will not work if the model has errors or is incompatible with Vision, such as a model that doesn’t accept an image as input. If it fails, you print the error and return from the method.
If no error exists, you then attempt to extract the results from the VNRequest as VNRecognizedObjectObservation objects. These objects contain the results of object detection such as you’re doing in this app with this model. You’ll work with them more in the next section to use this information.
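As a rough preview, each observation carries a labels array ordered by confidence along with a normalized boundingBox. The helper below is only a sketch of what that data looks like; the next section builds the real processing code:
import Vision

// Illustrative only: print the best label and bounding box for each detection.
func logObservations(_ observations: [VNRecognizedObjectObservation]) {
  for observation in observations {
    // The first label is the model's most confident guess for this object.
    if let topLabel = observation.labels.first {
      // boundingBox is a normalized rect (0...1) in Vision's coordinate space.
      print("\(topLabel.identifier): \(Int(topLabel.confidence * 100))% " +
            "at \(observation.boundingBox)")
    }
  }
}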