In Lesson 2, you laid the groundwork for the MoodTracker app by implementing the basic UI for emotion detection. In this demo, you’ll go through the process of integrating the Core ML model into your MoodTracker app to perform emotion detection on images. This includes setting up a view model, configuring the classifier, and updating the user interface to display the results.
In the starter folder, you'll find the MoodTracker app as you left it in Lesson 2, and you'll find the Create ML project containing the three classifiers you created in Lesson 1. First, you'll extract the .mlmodel file from the Create ML project. The Create ML project contains data sources based on absolute paths, which change when you move and open them. To work with this model, you'll need to update the paths in Create ML to match your local paths.
Open the EmotionsImageClassifier project and, in the Model Sources section, choose the classifier you configured that had the best accuracy among the model sources. Then, open the Output tab. Next, click the Get button to export the model. When saving the file, name it EmotionsImageClassifier so that it matches the instructions. Now, you have the model ready to use in your project in the Core ML format.
Now, open the MoodTracker app. In this part, you'll introduce some functionality to the EmotionDetectionView. That's why you'll create a view model for this view to handle its logic and functionality. Create a new folder named ViewModel, then add a new Swift file named EmotionDetectionViewModel. This view model will have only a property for the image and a reset method to clear the image.
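The demo doesn't show the initial contents of this file, so here's a minimal sketch of the starting point, assuming the view model publishes the selected image as an optional UIImage and clears it in reset(). The class conforms to ObservableObject so the view can observe it through @StateObject:

import SwiftUI
import UIKit

class EmotionDetectionViewModel: ObservableObject {
  // The image the user picks or captures; the view binds to this.
  @Published var image: UIImage?

  // Clears the current image so the user can pick another one.
  func reset() {
    image = nil
  }
}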
Now, open the EmotionDetectionView and replace the image property with a viewModel property of the newly created EmotionDetectionViewModel type. Then, replace each occurrence of $image with $viewModel.image. Also, change reset to viewModel.reset.
@StateObject private var viewModel = EmotionDetectionViewModel()
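The full body of EmotionDetectionView comes from Lesson 2 and isn't reproduced here, so treat the snippet below as a simplified sketch that only shows where the references change; your actual layout, picker, and button titles will differ:

import SwiftUI

struct EmotionDetectionView: View {
  @StateObject private var viewModel = EmotionDetectionViewModel()

  var body: some View {
    VStack {
      if let image = viewModel.image {
        // Previously this read a local @State image property.
        Image(uiImage: image)
          .resizable()
          .scaledToFit()
      } else {
        Text("No image selected")
      }
      // Wherever the view passed $image to the picker, pass $viewModel.image instead.
      Button("Select Another Image") {
        viewModel.reset() // was: reset()
      }
    }
  }
}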
Build and run your app. Press the Start Emotion Detection button, then select an image and ensure that the image appears as it did previously. Then, press the Select Another Image button to verify that the reset functionality works as expected. Now, you're going to start integrating your model into the app.
Drag and drop the EmotionsImageClassifier.mlmodel file into the Models folder. Next, create a Swift file in the same folder and name it EmotionClassifier. In this file, you'll create the classifier as you learned in the Instruction section of this lesson.
Pou’vy poiy rxa Moke KX qotiv ad nmi emukaiyumiq. Ycew, mjo vcegbilw zeplof mefr feyjimb qqe OIOsehu ke PIEkewa. Bord, gia’xt qniofi a CWRutaMWTiweanq rodw nfu lenot. Emdub bjez, beu’lx fubgve cwo nnehjixihuyeah xanogqt ozf zeqw bsu mej ehe. Xuxerlw, gee’wy dheata i zuprsov ids lissijz xwos biriivw us i cupvvmoawh cbnaud aw a pipd jpilpoho, ot yoo wouhhaj oj ssi xpidooaw maxhuef. Ol fuo juac yigi lidoihh ubeix atr il hkojo svomt ag cfi dbowwihaik, gua das piter lojg ku qfu Adsfgecxoat mejxuew vi zabiic tzow.
import SwiftUI
import Vision
import CoreML
class EmotionClassifier {
  private let model: VNCoreMLModel

  init() {
    // 1. Load the Core ML model
    let configuration = MLModelConfiguration()
    guard let mlModel = try? EmotionsImageClassifier(configuration: configuration).model else {
      fatalError("Failed to load model")
    }
    self.model = try! VNCoreMLModel(for: mlModel)
  }

  func classify(image: UIImage, completion: @escaping (String?, Float?) -> Void) {
    // 2. Convert UIImage to CIImage
    guard let ciImage = CIImage(image: image) else {
      completion(nil, nil)
      return
    }

    // 3. Create a VNCoreMLRequest with the model
    let request = VNCoreMLRequest(model: model) { request, error in
      if let error = error {
        print("Error during classification: \(error.localizedDescription)")
        completion(nil, nil)
        return
      }

      // 4. Handle the classification results
      guard let results = request.results as? [VNClassificationObservation] else {
        print("No results found")
        completion(nil, nil)
        return
      }

      // 5. Find the top result based on confidence
      let topResult = results.max(by: { a, b in a.confidence < b.confidence })
      guard let bestResult = topResult else {
        print("No top result found")
        completion(nil, nil)
        return
      }

      // 6. Pass the top result to the completion handler
      completion(bestResult.identifier, bestResult.confidence)
    }

    // 7. Create a VNImageRequestHandler
    let handler = VNImageRequestHandler(ciImage: ciImage)

    // 8. Perform the request on a background thread
    DispatchQueue.global(qos: .userInteractive).async {
      do {
        try handler.perform([request])
      } catch {
        print("Failed to perform classification: \(error.localizedDescription)")
        completion(nil, nil)
      }
    }
  }
}
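The demo sticks with the completion-handler API throughout. If you'd rather call the classifier with async/await in your own code, one option (not shown in the demo) is to bridge it with a checked continuation; a minimal sketch:

import UIKit

extension EmotionClassifier {
  // Hypothetical async convenience over the completion-based classify(image:completion:).
  func classify(image: UIImage) async -> (emotion: String?, confidence: Float?) {
    await withCheckedContinuation { continuation in
      classify(image: image) { emotion, confidence in
        continuation.resume(returning: (emotion, confidence))
      }
    }
  }
}

Because classify calls its completion exactly once on every code path, resuming the continuation once is safe.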
Next, open the EmotionDetectionViewModel. Add the classifier property and two more published properties to hold the emotion found after classification and the accuracy of that emotion.
@Published var emotion: String?
@Published var accuracy: String?
private let classifier = EmotionClassifier()
Now, create a classifyImage method that resizes the image before classification, which, as you learned, is a best practice. Then, use the new EmotionClassifier class to classify the image and update both the emotion and accuracy properties accordingly after the classification completes.
func classifyImage() {
  if let image = self.image {
    // Resize the image before classification
    let resizedImage = resizeImage(image)
    DispatchQueue.global(qos: .userInteractive).async {
      self.classifier.classify(image: resizedImage ?? image) { [weak self] emotion, confidence in
        // Update the published properties on the main thread
        DispatchQueue.main.async {
          self?.emotion = emotion ?? "Unknown"
          self?.accuracy = String(format: "%.2f%%", (confidence ?? 0) * 100.0)
        }
      }
    }
  }
}

private func resizeImage(_ image: UIImage) -> UIImage? {
  UIGraphicsBeginImageContext(CGSize(width: 224, height: 224))
  image.draw(in: CGRect(x: 0, y: 0, width: 224, height: 224))
  let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  return resizedImage
}
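resizeImage(_:) uses UIGraphicsBeginImageContext, which works but is the older drawing API. If you prefer, UIGraphicsImageRenderer (available since iOS 10) does the same job; here's a sketch of an equivalent helper that keeps the 224 x 224 target size from the code above:

private func resizeImage(_ image: UIImage) -> UIImage? {
  let targetSize = CGSize(width: 224, height: 224)
  let renderer = UIGraphicsImageRenderer(size: targetSize)
  // Draw the original image into the smaller canvas and return the result.
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: targetSize))
  }
}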
Build and run the app on a real device so you can either attach an image from your gallery or take a picture of a happy or sad face. Choose an image as you did previously. Notice that you now have a new button named Detect Emotion to classify that image. Click it and notice the results view that appears, showing whether that emotion is more happy or sad, along with the accuracy. If you try this project on the simulator, you'll get incorrect results. That's why it's essential to test it on a real device to obtain accurate data. Try different images with different emotions and notice that the model might make some mistakes in classification, which is acceptable.
Congratulations! You did a great job implementing the MoodTracker app to detect the dominant emotion from an image. Now, you have a fully working app ready to use.