To export your trained model from Create ML, navigate to the Output section, where you’ll find your model ready for export. Simply click the Get button, which will prompt you to choose a location to save the model file. Ensure you select a location that’s easy to access for the following steps.
The exported model will have a .mlmodel extension, which is the Core ML format. This format is directly compatible with iOS applications, so there’s no need to convert the model to another format.
Integrating the Model Into a SwiftUI App
To integrate your custom image classification model into a SwiftUI app, you’ll follow a process that involves creating an instance of your model, setting up the image-classification request, and updating the UI with the results. Here’s how you can achieve this:
1. Creating an Image Classifier Instance
Start by creating an instance of your Core ML model. This should be done when the app launches, ensuring that you have a single instance of the model available for efficient performance throughout the app.
// 1. Initialize the model
private let model: VNCoreMLModel

init() {
  // 2. Load the Core ML model
  guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
    fatalError("Failed to load Core ML model.")
  }
  self.model = model
}
Here’s a breakdown of the code above:
Initialize the model: This code declares a property to hold the Core ML model instance.
Load the Core ML model: This code attempts to create a VNCoreMLModel instance from your Core ML model. If it fails, it triggers a fatal error, ensuring you’re notified so you can debug the problem.
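One way to satisfy the single-instance requirement is a small shared wrapper class. Here’s a minimal sketch, assuming the wrapper is named EmotionClassifier (the batching tip later in this lesson refers to EmotionClassifier.shared):
import Vision

// A shared wrapper so the model is loaded once and reused everywhere
final class EmotionClassifier {
  static let shared = EmotionClassifier()

  let model: VNCoreMLModel

  private init() {
    // Load the Core ML model generated by Create ML
    guard let model = try? VNCoreMLModel(for: EmotionsImageClassifier().model) else {
      fatalError("Failed to load Core ML model.")
    }
    self.model = model
  }
}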
2. Creating an Image-Classification Request
To classify an image, you must create a VNCoreMLRequest using your model. This request will process the image and provide classification results.
func classifyImage(_ image: UIImage) {
  // 1. Create a VNCoreMLRequest with the model
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // 2. Handle the classification results
    guard let results = request.results as? [VNClassificationObservation],
          let firstResult = results.first else {
      return
    }
    print("Classification: \(firstResult.identifier), Confidence: \(firstResult.confidence)")
  }
  // 3. Configure the request to crop and scale images
  request.imageCropAndScaleOption = .centerCrop
}
Here’s a breakdown of the code above:
Create a VNCoreMLRequest with the model: This code creates a new image-classification request using the model you initialized. It includes a completion handler to process the results.
Handle the classification results: Inside the completion handler, this code checks if the results can be cast to an array of VNClassificationObservation and then processes the first result.
3. Performing the Classification Request
You use a VNImageRequestHandler to perform the request on a specific image. It processes the image and delivers the results back through the request’s completion handler.
func performClassification(for image: UIImage) {
  guard let cgImage = image.cgImage else {
    return
  }
  // 1. Create a VNImageRequestHandler with the image
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
  // 2. Perform the classification request
  let request = VNCoreMLRequest(model: model) { (request, error) in
    // Handle the results in the completion handler
  }
  do {
    try handler.perform([request])
  } catch {
    print("Failed to perform classification request: \(error)")
  }
}
Here’s a breakdown of the code above:
Create a VNImageRequestHandler: This code initializes a request handler with the provided image. The image must be converted to a CGImage format.
4. Handling and Extracting High-Confidence Results
Once you receive the classification results from the Core ML model, the next step is to handle these results and identify the most accurate classification based on confidence scores. This process involves checking for valid results and selecting the one with the highest confidence to ensure that you present the most reliable classification to the user.
// 1. Handle the classification results
guard let results = request.results as? [VNClassificationObservation] else {
  print("No results found")
  completion(nil, nil)
  return
}
// 2. Find the top result based on confidence
let topResult = results.max(by: { a, b in a.confidence < b.confidence })
guard let bestResult = topResult else {
  print("No top result found")
  completion(nil, nil)
  return
}
Here’s a breakdown of the code above:
Handle the classification results: In this part, the code checks whether request.results can be cast to an array of VNClassificationObservation. This step ensures that the results are valid and contain the expected classification observations. If the cast fails, indicating that no results were found, an error message is printed and the completion handler is called with nil values.
Find the top result based on confidence: This section finds the classification observation with the highest confidence score. The results.max(by:) method iterates through the VNClassificationObservation array and compares each observation’s confidence score. The observation with the highest confidence is returned as topResult. If no result is found, an error message is printed and the completion handler is called with nil values. If a top result is successfully identified, it’s used for the final classification output.
By focusing on the classification with the highest confidence, you ensure that the most accurate and reliable result is presented to the user, enhancing the effectiveness of your app’s image classification feature.
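As an optional refinement, you can also discard results whose confidence is too low before reporting them. Here’s a minimal sketch extending the code above, with a hypothetical minimumConfidence cutoff you’d tune for your own model:
// Hypothetical cutoff; tune this value for your own model
let minimumConfidence: VNConfidence = 0.7

if bestResult.confidence >= minimumConfidence {
  // 3. Pass the reliable result back to the caller
  completion(bestResult.identifier, bestResult.confidence)
} else {
  print("Top result below confidence threshold")
  completion(nil, nil)
}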
5. Updating the UI with Classification Results
After receiving the classification results, it’s essential to update the UI to present these results to the user in a clear and meaningful way. This step involves converting the raw prediction data into a user-friendly format and ensuring the UI elements reflect the updated information. Typically, this means updating labels, text fields, or other UI components with the classification results. It’s crucial to perform these updates on the main thread to ensure smooth and responsive user interactions.
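Here’s a minimal sketch of one way to structure this, assuming a hypothetical ClassifierViewModel with a classificationLabel property that a SwiftUI view observes:
import SwiftUI
import Vision

class ClassifierViewModel: ObservableObject {
  // Published property the SwiftUI view observes
  @Published var classificationLabel = "No classification yet"

  func update(with identifier: String, confidence: VNConfidence) {
    // Always update UI state on the main thread
    DispatchQueue.main.async {
      self.classificationLabel = "\(identifier) (\(Int(confidence * 100))% confidence)"
    }
  }
}

struct ClassificationView: View {
  @ObservedObject var viewModel: ClassifierViewModel

  var body: some View {
    // The label refreshes automatically when the published property changes
    Text(viewModel.classificationLabel)
  }
}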
Tips to Optimize the Model for Real-Time Performance
Optimize Predictions on Background Threads
Run your model’s predictions off the main thread to keep the UI responsive.
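For example, you might dispatch the performClassification(for:) call from earlier in this lesson onto a background queue. Here’s a minimal sketch, using a hypothetical classifyInBackground(_:) helper:
func classifyInBackground(_ image: UIImage) {
  // Keep the Vision work off the main thread
  DispatchQueue.global(qos: .userInitiated).async {
    self.performClassification(for: image)
  }
}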
Batch Multiple Requests
For tasks requiring multiple classifications in a short period, consider batching your requests. This method minimizes the overhead of individual requests.
func classifyBatchImages(_ images: [UIImage]) {
  // Reuse one request and perform it once per image
  let request = VNCoreMLRequest(model: EmotionClassifier.shared.model)
  for image in images {
    guard let cgImage = image.cgImage else { continue }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
      try handler.perform([request])
      // Read request.results here, before the next iteration overwrites them
    } catch {
      print("Failed to perform batch classification: \(error)")
    }
  }
}
Reduce Image Size
Before passing images to the model, resize them to match the input size your model expects (e.g., 224x224 pixels). This reduces the computational load.
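Here’s a minimal sketch of one way to do this, using a hypothetical resizeForModel(_:to:) helper built on UIGraphicsImageRenderer:
import UIKit

// Hypothetical helper: redraw the image at the model's expected input size
func resizeForModel(_ image: UIImage,
                    to size: CGSize = CGSize(width: 224, height: 224)) -> UIImage {
  let renderer = UIGraphicsImageRenderer(size: size)
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: size))
  }
}
Because the request’s imageCropAndScaleOption already scales input for the model, pre-resizing mainly pays off when your source images are much larger than the model’s input.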
Profile Model Performance
Use Xcode’s profiling tools, such as Instruments, to monitor your model’s performance and identify any bottlenecks or areas for improvement.