Monitoring Agent Behavior


Once you know your assessment metrics, you must learn how to monitor them. If you have a complex graph with many nodes and LLM calls, it can be hard to track down errors when your agent isn’t behaving as you want it to. You have several tools to help with this, though.

Logging

Printing output messages in the notebook is the most direct way to gauge what’s going on in your AI agent workflow. So far in this module, you’ve been using print statements. However, you can also use the standard Python logging library for more granular control. This lets you set various logging levels or send output to a file instead of the console.

import logging

# Log everything from DEBUG level up, and write to a file instead of the console.
logging.basicConfig(
  level=logging.DEBUG,
  filename='app.log'
)

logging.debug('debug message')
logging.info('info message')
logging.warning('warning message')
logging.error('error message')
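
You can also sprinkle logging calls throughout your graph's node functions to record what each node receives and returns. The following is only a sketch: summarize_node and its state keys are hypothetical and not part of this lesson's project.

import logging

logger = logging.getLogger(__name__)

def summarize_node(state: dict) -> dict:
  # Hypothetical node: log the incoming state, do some work, log the result.
  logger.debug("Entering summarize_node with state: %s", state)
  result = {"summary": "placeholder summary"}  # your LLM call would go here
  logger.info("summarize_node returning keys: %s", list(result.keys()))
  return result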

Step-by-Step Execution

JupyterLab includes a visual debugger that lets you set breakpoints, step through your code, and inspect variables while a cell runs.
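
If you prefer a programmatic alternative, Python's built-in breakpoint() pauses execution wherever you call it and drops you into the pdb debugger in the notebook output. The node function below is a hypothetical sketch, not part of the lesson's project.

def review_node(state: dict) -> dict:
  # Hypothetical node function: pause here so you can inspect the state.
  breakpoint()  # type `p state` to print the state, `c` to continue
  return state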

Streaming Output

LangGraph natively supports streaming output. This allows you to see what’s happening at each step along the way. Rather than calling app.invoke, you call app.stream:

# Print the full state after each step instead of waiting for the final result.
for output in app.stream(state, thread, stream_mode="values"):
  print(output)
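
LangGraph supports other streaming modes as well. For instance, stream_mode="updates" yields only what each node changed rather than the full state, which keeps the output readable in larger graphs. This sketch assumes the same app, state, and thread as above:

for output in app.stream(state, thread, stream_mode="updates"):
  # Each output maps the node that just ran to the state keys it updated.
  for node_name, update in output.items():
    print(f"{node_name}: {update}")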

Commercial Options

The makers of LangChain and LangGraph have released those libraries as open-source software. However, they also provide commercial products to help debug your AI agent app.

LangSmith

LangSmith shows you, in a clear visual format, information about the various nodes your graph traverses during execution.

LangSmith
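
To send traces to LangSmith, you typically set a few environment variables before running your graph. The exact variable names can vary by LangChain version, and the API key and project name below are placeholders:

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"           # turn on tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"    # from your LangSmith account
os.environ["LANGCHAIN_PROJECT"] = "agent-monitoring"  # hypothetical project name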

LangGraph Studio

LangGraph Studio is still in beta and doesn't support all platforms, so this lesson won't cover it in depth. However, it looks like a promising way to interact with your graph more visually and intuitively. The following is a clip from one of its documentation images:

LangGraph Studio

User Feedback

Low-tech monitoring solutions are just as important as, or even more important than, high-tech ones. You should collect feedback from your users about where the pain points are with your agent.
