Once you know your assessment metrics, you must learn how to monitor them. If you have a complex graph with many nodes and LLM calls, it can be hard to track down errors when your agent isn’t behaving as you want it to. You have several tools to help with this, though.
Logging
Printing output messages in the notebook is the most direct way to gauge what’s going on in your AI agent workflow. So far in this module, you’ve been using print statements. However, you can also use the standard Python logging library for more granular control. This lets you set various logging levels or send output to a file instead of the console.
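For example, a setup along these lines routes everything at DEBUG level or higher to a file instead of the console. The app.log filename, the logger name, and the messages are just illustrations:

import logging

# Route every record at DEBUG level or higher to a file instead of the console.
logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("agent")
logger.debug("Entering the planning node")  # recorded because DEBUG is enabled
logger.info("LLM call finished")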
In the code snippet above, the logger will now record everything at DEBUG level or higher and send the logs to a file named app.log.
Step-by-Step Execution
JupyterLab has a debugger that lets you set breakpoints.
You can certainly experiment with this. However, stepping across nodes tends to take more coordination when you're using LangGraph.
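If clicking around in the debugger UI feels clumsy, Python's built-in breakpoint() gives you a similar pause-and-inspect workflow from code by dropping you into the pdb debugger. Here's a sketch with a hypothetical node function and state shape; nothing here comes from the lesson's project:

def summarize_node(state: dict) -> dict:
    # Execution pauses here: at the pdb prompt, `p state` prints the state,
    # `n` steps to the next line, and `c` continues running.
    breakpoint()
    summary = f"{len(state.get('messages', []))} messages so far"
    return {"summary": summary}

Remove the breakpoint() call once you've found the problem, or it will pause every time the node runs.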
Streaming Output
LangGraph natively supports streaming output. This allows you to see what’s happening at each step along the way. Rather than calling app.invoke, you call app.stream:
for output in app.stream(state, thread, stream_mode="values"):
    print(output)
A stream_mode of values, which is the default, means the app will stream the full state at each node. Another option is updates, which only streams the state changes. Finally, you also have debug, which will tell you more than you ever wanted to know.
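As a quick sketch, reusing the same app, state, and thread from the snippet above, switching to updates shows only what each node wrote:

# Stream only what each node writes to the state, rather than the full state.
for update in app.stream(state, thread, stream_mode="updates"):
    print(update)  # one dict per step, keyed by the node that just ran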
Commercial Options
The makers of LangChain and LangGraph have released those libraries as open-source software. However, they also provide commercial products to help debug your AI agent app.
LangSmith
LangSmith lets you see information about the various nodes your graph traverses during execution in a nice visual format.
LangSmith isn't difficult to set up. You sign up, get an API key, and then feed the key to your project's environment variables, just as you did with your OpenAI API key. After that, the node calls automatically appear in the LangSmith web dashboard. You'll get to try this out in the demo project later.
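If you set those variables in code rather than in a .env file, a minimal sketch looks like this; the key and project name are placeholders:

import os

# Placeholders: substitute your real LangSmith API key and any project name you like.
os.environ["LANGCHAIN_TRACING_V2"] = "true"                 # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
os.environ["LANGCHAIN_PROJECT"] = "agent-monitoring"        # optional: groups runs by project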
LangGraph Studio
LangGraph Studio is still in Beta and doesn't support all platforms, so this lesson won't cover it in depth. However, it looks like a promising way to interact with your graph more visually and intuitively.
User Feedback
Low-tech monitoring solutions are just as important as, or even more important than, high-tech ones. You should be collecting feedback from your users about where the pain points are with your agent:
How natural does the translation read in your language? Does it feel like a native translation?
How well did the answers the agent gave work for you? Could it do everything you wanted it to?
How did it compare to the existing website tools? Did it handle idioms or metaphors at a human level?
As developers, it's easy to live in a cave, but it's important to get human feedback about what you're building.