Agent Trace Viewer

About Agent Trace Viewer

Visualize and debug AI agent execution traces. Supports LangChain runs, OpenAI Assistant steps, and generic JSON arrays with type/name/input/output fields. All processing happens client-side.

What This Tool Does

Agent Trace Viewer is built for deterministic developer and agent workflows: the same trace input always produces the same analysis.

Visualize AI agent traces with timeline and table views. Debug LangChain, OpenAI, and custom agent runs.

See the How to Use section for execution steps and the FAQ for constraints, policies, and edge cases.

This tool is provided as-is for convenience. Output should be verified before use in any production or critical context.

Agent Invocation

Best Path For Builders

Browser workflow

Runs instantly in the browser with private local processing and copy/export-ready output.


/agent-trace-viewer/

For automation planning, fetch the canonical contract at /api/tool/agent-trace-viewer.json.

How to Use Agent Trace Viewer

  1. Paste or load your agent trace JSON

     Export trace JSON from your agent system, then paste it into the textarea, or click 'Load Sample Trace' for an example. LangChain, OpenAI Assistants, and generic JSON arrays are supported.

  2. Click 'Analyze Trace' to visualize

     The parser detects the trace format automatically and shows a summary: total steps, token count, duration, and error count. View the trace as a Timeline (visual bars) or a Table (structured rows).

  3. Debug agent behavior

     Click any step to see its full input/output, duration, and token count. Look for tool failures, missing outputs, or loops. Filter by type (LLM, Tool, Error) to focus on issues.

  4. Search and filter steps

     Use text search to find specific tool calls or LLM outputs, and filter by step type. Measure efficiency by counting steps and comparing durations across runs.
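The format auto-detection in step 2 can be sketched roughly as below. The checks (child_runs for LangChain, steps for the OpenAI format, a plain array for generic traces) follow the field names mentioned in the FAQ; the function name itself is illustrative, not the tool's actual implementation:

```javascript
// Rough format detection, keyed on each format's distinguishing field:
// LangChain runs carry child_runs, OpenAI-style traces carry steps,
// and a bare array is treated as a generic trace.
function detectFormat(data) {
  if (Array.isArray(data)) return "generic";
  if (data && Array.isArray(data.child_runs)) return "langchain";
  if (data && Array.isArray(data.steps)) return "openai";
  return "unknown";
}

console.log(detectFormat([{ type: "llm" }]));  // "generic"
console.log(detectFormat({ child_runs: [] })); // "langchain"
console.log(detectFormat({ steps: [] }));      // "openai"
```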

Frequently Asked Questions

What is an agent trace?
An agent trace is a JSON log of every step an AI agent takes during execution: LLM calls, tool invocations, decisions, and errors. It's used for debugging and understanding agent behavior.
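To make that concrete, here is a minimal generic trace in the type/name/input/output shape described above, plus the kind of per-type summary the viewer computes. The step contents are invented for illustration:

```javascript
// A minimal generic trace: an array of step objects with
// type/name/input/output fields (contents are made up).
const trace = [
  { type: "llm",  name: "plan",   input: "user question", output: "call search" },
  { type: "tool", name: "search", input: "weather Paris", output: "18°C, cloudy" },
  { type: "llm",  name: "answer", input: "18°C, cloudy",  output: "It is 18°C in Paris." }
];

// Count steps per type — one piece of the summary shown after analysis.
function countByType(steps) {
  const counts = {};
  for (const s of steps) counts[s.type] = (counts[s.type] || 0) + 1;
  return counts;
}

console.log(countByType(trace)); // { llm: 2, tool: 1 }
```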
What trace formats are supported?
The viewer supports LangChain run format (with child_runs), OpenAI Agents SDK format (with steps and tool_calls), and generic JSON arrays containing step objects with type and input/output fields.
Can I debug LangChain agents with this tool?
Yes, paste LangChain's trace JSON to see a visual timeline of runs, child runs, and tool calls. Each step shows duration, token usage, and full input/output details.
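Because LangChain runs nest their child_runs, rendering them on a timeline requires flattening the tree first. A sketch, assuming only that each run has a name and an optional child_runs array (other run fields are omitted):

```javascript
// Flatten a nested LangChain-style run tree into a depth-annotated
// list, suitable for rendering as an indented timeline.
function flattenRuns(run, depth = 0, out = []) {
  out.push({ name: run.name, depth });
  for (const child of run.child_runs || []) flattenRuns(child, depth + 1, out);
  return out;
}

const run = {
  name: "AgentExecutor",
  child_runs: [
    { name: "llm_call" },
    { name: "search_tool", child_runs: [{ name: "http_get" }] }
  ]
};

console.log(flattenRuns(run).map(r => r.name).join(" > "));
// AgentExecutor > llm_call > search_tool > http_get
```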
Does this tool send my traces to a server?
No. All processing happens entirely in your browser. Your agent traces never leave your machine — nothing is sent to any server.
How is this different from Langfuse or AgentOps?
This is a quick paste-and-view tool with zero setup — no account, no backend, no SDK integration. For continuous production monitoring, use a full observability platform like Langfuse or AgentOps.