An agent graph is an interactive knowledge graph on our Holistic AI Governance platform that converts raw execution logs from your AI agents into a structured visual map of everything that happened during a workflow. Instead of manually reading through traces and logs to understand what an agent did, the agent graph shows you the full picture: which agents were involved, what tasks they performed, which tools they called, what data moved between them, and where something went wrong.
This capability is built on our published research, *AgentGraph: Trace-to-Graph Platform for Interactive Analysis and Robustness Testing in Agentic AI Systems*.
Modern agentic AI systems plan, reason, and act across multiple steps. A single workflow might involve several agents delegating tasks to each other, calling external APIs, pulling data from different sources, and making decisions at each step. When something fails or behaves unexpectedly, the challenge is figuring out where in that chain the problem started.
Existing observability tools track basic inputs and outputs or surface operational metrics, but they require manual inspection to reconstruct what actually happened. You end up reading through logs line by line trying to piece together the structure. Agent graphs solve this by automatically building that structure for you and linking every element back to the raw trace data.
Every agent graph is made up of nodes and edges that represent the full execution of an agentic workflow.
Nodes represent the building blocks:

- The agents involved in the workflow
- The tasks those agents performed
- The tools and external APIs they called
- The data that moved through the workflow
Edges capture the relationships between them:

- Delegation of tasks from one agent to another
- Tool and API calls
- Data flowing between components
Every node and edge in the graph links directly back to its exact trace span. Nothing is abstracted or summarized away. You can click into any element and see the raw execution data behind it.
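To make the node/edge/trace-span structure concrete, here is a minimal sketch of how a graph like this could be represented. All class and field names (`TraceSpan`, `Node`, `Edge`, `AgentGraph`, `inspect`) are illustrative assumptions, not the platform's actual schema; the point is that every graph element carries a direct reference to its raw trace data.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the real platform schema may differ.

@dataclass
class TraceSpan:
    span_id: str
    raw: dict  # raw execution data recorded for this step

@dataclass
class Node:
    node_id: str
    kind: str        # e.g. "agent", "task", "tool", "data"
    span: TraceSpan  # every node links back to its exact trace span

@dataclass
class Edge:
    source: str
    target: str
    kind: str        # e.g. "delegates_to", "calls", "passes_data"
    span: TraceSpan

@dataclass
class AgentGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, edge: Edge) -> None:
        self.edges.append(edge)

    def inspect(self, node_id: str) -> dict:
        # "Click into" an element: return the raw trace data behind it.
        return self.nodes[node_id].span.raw

# Build a tiny two-node graph from made-up trace spans.
g = AgentGraph()
g.add_node(Node("planner", "agent", TraceSpan("s1", {"input": "book a flight"})))
g.add_node(Node("search_tool", "tool", TraceSpan("s2", {"query": "NYC->SFO"})))
g.add_edge(Edge("planner", "search_tool", "calls", TraceSpan("s3", {"latency_ms": 120})))

print(g.inspect("search_tool"))  # {'query': 'NYC->SFO'}
```

Because nothing is summarized away, inspecting any element is just a lookup into the underlying trace rather than a reconstruction of it.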
Agent graphs support two types of analysis on the platform:
Qualitative analysis lets you visually trace through a workflow to find where things went wrong. You can follow the path an agent took, see which decisions led to a failure, and get optimization recommendations based on the structure of the graph. This analysis is trace-grounded, meaning every finding maps back to a specific moment in the actual execution.
Quantitative analysis lets you run robustness evaluations by introducing controlled changes into the workflow (perturbation testing) and then measuring the impact. The platform also provides causal attribution, which identifies the specific component that caused a failure rather than just showing you where the failure surfaced.
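The perturbation-testing idea can be sketched in a few lines: break one component at a time, rerun the workflow, and record which perturbations flip the outcome. Everything here is a toy stand-in under stated assumptions (`run_workflow` and the component names are invented for illustration, not the platform's API), but it shows why this attributes a failure to its cause rather than to where it surfaced.

```python
# Hypothetical sketch of perturbation testing and causal attribution.

def run_workflow(components: dict) -> bool:
    # Toy workflow: succeeds as long as the planner and search tool behave;
    # the summarizer has a fallback, so breaking it does not cause failure.
    return components["planner"] and components["search_tool"]

def perturbation_test(baseline: dict) -> dict:
    """Perturb one component at a time and measure the impact on the outcome."""
    impact = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] = False  # controlled change: break this one component
        # Impact = the workflow succeeded before but fails under this perturbation.
        impact[name] = run_workflow(baseline) and not run_workflow(perturbed)
    return impact

components = {"planner": True, "search_tool": True, "summarizer": True}
print(perturbation_test(components))
# {'planner': True, 'search_tool': True, 'summarizer': False}
```

Here the failure is attributed to the planner and search tool, not the summarizer, even though in a real trace the symptom might only surface in the summarizer's output downstream.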
Agent graphs are not a standalone debugging tool. They feed directly into your governance workflows on the platform.
Using the Holistic AI platform, you can open agent graphs for any agentic workflow that has been tested or is running in production. You can trace decision paths step by step, inspect individual nodes and edges, run perturbation tests against specific components, and export findings into your compliance reporting.
If you want to know more about how we use agent graphs to trace and govern agentic AI systems, get a demo now.