Once you know your performance metrics and how to monitor them, you’ll be in a good position to optimize your AI agent.
Optimizing Prompts
If you aren’t getting the desired output from an LLM, one solution might be to improve your prompt. Study different prompt engineering techniques to guide the LLM toward better answers. Some techniques you should be familiar with are:
Chain-of-Thought: Tell the LLM to break the task down and solve it step by step.
Few-shot: Provide the LLM with several examples of the output you want for a given input.
Prompt writing isn't an exact science. You should iteratively refine your prompts to discover what produces the best results. And if you change your LLM, you'll need to re-evaluate your prompts.
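As a rough illustration of the few-shot technique, you can prepend worked input/output pairs to the user's input before sending it to the model. The helper function and the sentiment examples below are hypothetical, not from the lesson's code:

```python
def build_few_shot_prompt(instruction, examples, user_input):
    """Assemble a few-shot prompt: instruction, example pairs, then the new input."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Input: {user_input}")
    lines.append("Output:")  # The model continues from here, mimicking the examples.
    return "\n".join(lines)

# Hypothetical sentiment-labeling examples that steer the model's output format.
examples = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds. Amazing!", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    examples,
    "The screen is bright but the speakers crackle.",
)
print(prompt)
```

Because the examples demonstrate both the task and the exact answer format, the model is far more likely to reply with a bare label instead of a full sentence.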
Optimizing Efficiency
You can do many things to improve the efficiency of your AI agents in terms of time and resources.
Time
One way to speed up an agent is to perform tasks in parallel rather than sequentially. For example, if two nodes both need to make an LLM call and neither depends on the result of the other, then this is a good candidate for running them in parallel.
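The underlying idea is ordinary concurrency. Here's a minimal sketch using Python's asyncio, where two fake delay-based functions stand in for real network-bound LLM calls:

```python
import asyncio

async def fake_llm_call(name: str, delay: float) -> str:
    # Stand-in for a network-bound LLM request.
    await asyncio.sleep(delay)
    return f"{name} result"

async def run_parallel():
    # Neither call depends on the other, so run them concurrently.
    # Total wall time is roughly max(delays), not their sum.
    return await asyncio.gather(
        fake_llm_call("summarize", 0.2),
        fake_llm_call("classify", 0.3),
    )

results = asyncio.run(run_parallel())
print(results)  # ['summarize result', 'classify result']
```

With sequential awaits this would take about 0.5 seconds; with `asyncio.gather` it takes about 0.3.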
LangGraph supports both sequential and parallel execution. It just depends on how you build your graph. In the image below, the graph is set up to execute sequentially.
However, the graph in this next diagram shows nodes A and B running in parallel.
Branching isn't limited to conditional logic. You can also have multiple nodes that "fan out" from a given node. Then, they can "fan in" to a single node where the values are combined according to a reducer function. You can read more about this in the LangGraph branching documentation.
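Stripped of the framework, a fan-out/fan-in step runs several branches on the same state and then merges their outputs with a reducer. The sketch below uses plain Python with hypothetical node functions and list concatenation as the reducer; LangGraph wires this up declaratively instead:

```python
from functools import reduce
import operator

# Hypothetical branch nodes: each reads the shared state, returns a partial update.
def node_a(state): return {"notes": ["draft from A"]}
def node_b(state): return {"notes": ["draft from B"]}
def node_c(state): return {"notes": ["draft from C"]}

def fan_out_fan_in(state, branches, reducer):
    # Fan out: run every branch node on the same input state.
    partials = [branch(state) for branch in branches]
    # Fan in: merge the per-branch values with the reducer function.
    merged = reduce(reducer, (p["notes"] for p in partials))
    return {**state, "notes": merged}

state = fan_out_fan_in({"topic": "agents"}, [node_a, node_b, node_c], operator.add)
print(state["notes"])  # ['draft from A', 'draft from B', 'draft from C']
```

The reducer is what makes concurrent writes to the same state key safe: each branch contributes a list, and `operator.add` concatenates them deterministically.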
Another trick to get an agent to respond faster is to stream the tokens from the LLM rather than waiting for a request to complete before presenting it to the user. LangGraph supports this with astream_events. Read more about this in the streaming events documentation.
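Conceptually, streaming just means consuming tokens as they arrive instead of waiting for the whole response. Here's a standard-library sketch in which an async generator stands in for the model's token stream (the function names are made up for illustration):

```python
import asyncio

async def fake_token_stream(text: str):
    # Stand-in for an LLM that emits one token at a time over the network.
    for token in text.split():
        await asyncio.sleep(0.01)
        yield token + " "

async def main():
    shown = []
    async for token in fake_token_stream("Streaming feels much faster"):
        shown.append(token)  # In a real UI you'd render each token immediately.
    return "".join(shown).strip()

reply = asyncio.run(main())
print(reply)
```

The total latency is unchanged, but the perceived latency drops sharply because the user sees the first words almost immediately.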
Resources
When you think about optimizing resources, consider how you can reduce token usage. Do you need to send the entire message conversation on every request? Probably not. You noticed in the Lesson 3 and 4 demos that when the screenshot image was converted to Base64, it was a massive text string. The tutorial didn't have you send the entire message list to the LLM because you wouldn't want to upload all those tokens on every request. You only needed the screenshot image when you were generating the contextual comments. After you had those, you no longer needed the image.
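One way to apply that idea is to strip the heavyweight image parts out of the history once they've served their purpose. The sketch below assumes a multi-part message shape similar to the one used in the lessons; the helper function itself is hypothetical:

```python
def drop_images(messages):
    """Remove Base64 image parts from multi-part messages, keeping the text."""
    pruned = []
    for message in messages:
        content = message["content"]
        if isinstance(content, list):
            # Keep only the non-image parts of multi-part content.
            content = [part for part in content if part.get("type") != "image_url"]
        pruned.append({**message, "content": content})
    return pruned

history = [
    {"role": "user", "content": [
        {"type": "text", "text": "Describe this screenshot."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,AAAA..."}},
    ]},
    {"role": "assistant", "content": "It shows a login form."},
]
slim = drop_images(history)
print(slim[0]["content"])
```

Running this before each follow-up request keeps the conversation's text intact while shedding the enormous Base64 payload.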
Even if you do need to retain a record of the chat history, there are tricks to cut down on token usage. For example, when the message list grows over a certain length, you can ask the LLM to summarize the chat history. Then, on future requests, you can drop the old messages and just include the summary. You'll find this example in the corresponding documentation.
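That trimming strategy can be sketched in a few lines. The `summarize` function below is a hypothetical stand-in for an LLM call, and the thresholds are arbitrary: once the history passes a limit, everything but the most recent messages collapses into a single summary message.

```python
MAX_MESSAGES = 6   # Threshold before we compact the history.
KEEP_RECENT = 2    # Most recent messages to keep verbatim.

def summarize(messages):
    # Hypothetical: in practice this would be an LLM call.
    return f"Summary of {len(messages)} earlier messages."

def compact_history(messages):
    if len(messages) <= MAX_MESSAGES:
        return messages
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(8)]
compacted = compact_history(history)
print(len(compacted))            # 3
print(compacted[0]["content"])   # Summary of 6 earlier messages.
```

Every future request then carries one short summary plus a couple of recent turns instead of the full transcript.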
Eg oslekaiy gu girweutacd rsi kuylof ol royasg saa udo, gea zob esne ebzididafn vefm sozyuluyk suwahp. Wle kefr yusebbex zumuml ewa yxoafik vih ebe htedd seuxu hoot iz sqazukabh nexeyaq-koijdech sutz iys opsnimedc hobeb gouxmeiwc. Zicuibi ac vper, lii hih ni uxpe yi xouvhuus hpe xoojibq og yius ivepx nwuya kaybiegejz egh jikm sy oguvn i wiye femetpaj woxij jol mafdgam vaupujilt totfv wtuji oqivs u xquayig jopic zuc quxmvo jeplt.
Note: While optimization and efficiency tips are important, don't worry if your agent consumes a lot of tokens. As mentioned previously, the cost of LLMs is on a downward trend. Things that are expensive today may be affordable tomorrow. And even if you continuously streamed tokens from an LLM provider, you'd probably still pay far less than you would pay a human.
Optimizing UX
Step back occasionally and ask yourself what would make the entire experience better for the end user. Perhaps you need to re-architect how the application works. Perhaps you need to use a more powerful model or a better text-to-speech engine. Maybe you need to work on decreasing latency. Don’t be afraid to make big changes or even start over from scratch if your current implementation isn’t working.
You also need to accept the limitations of the technology and the current models. LLMs still haven't reached the level of humans, so part of optimizing your agent's workflow might be to add some human-in-the-loop interactions.
This content was released on Nov 12 2024. The official support period is 6 months from this date.