
What is Observatory?

Observatory is an open-source AI agent observability SDK that helps you monitor, trace, and debug your AI applications in real time. Built by The Context Company, it provides deep visibility into how your AI agents behave, what they do, and where they might be going wrong.
Local-first by default — Observatory can run completely offline with no account, API key, or external dependencies required.

Why Observatory?

Building AI agents is fundamentally different from traditional software development. With AI agents:
  • Non-deterministic behavior makes it hard to predict what your agent will do
  • Complex multi-step workflows combine LLM calls, tool usage, and reasoning steps
  • Token costs and latency require careful monitoring and optimization
  • User feedback is critical for understanding agent performance in production
Observatory was built to address these challenges, with developer experience as the top priority.

Key Features

Local-First Mode

Run Observatory completely offline without any account or API key. Perfect for development, debugging, and privacy-sensitive applications.
instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { registerOTelTCC } = await import("@contextcompany/otel/nextjs");
    registerOTelTCC({ local: true }); // No account required
  }
}
Local mode currently supports Vercel AI SDK on Next.js. Support for other frameworks is coming soon.

Real-Time Visualization Widget

See your AI agent traces in real time with an in-browser overlay that shows:
  • LLM requests and responses
  • Token usage and estimated costs
  • Tool calls and execution times
  • Multi-step agent workflows
  • Performance metrics and latency
Observatory widget showing real-time traces
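The widget's cost estimates can be derived from token counts alone. A minimal sketch of that calculation (the per-token prices below are illustrative assumptions, not Observatory's actual rate table):

```typescript
// Illustrative per-million-token prices (assumed values, not Observatory's real table).
const PRICES_PER_MILLION: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

// Estimate the cost of one LLM call from its token usage.
function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICES_PER_MILLION[model];
  if (!p) return 0; // Unknown model: no estimate.
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

Because each trace span carries its own token counts, per-run and per-session totals are just sums over spans.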

OpenTelemetry-Based Instrumentation

Observatory uses industry-standard OpenTelemetry for tracing, making it:
  • Compatible with existing observability tools
  • Extensible for custom instrumentation needs
  • Standards-based rather than proprietary
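Because traces are plain OpenTelemetry spans, custom attributes can be attached alongside the built-in ones. A hedged sketch of building LLM-call attributes in the style of OpenTelemetry's GenAI semantic conventions (the exact attribute keys Observatory emits are not specified here and may differ):

```typescript
// Build OpenTelemetry-style span attributes for an LLM call.
// Attribute names follow the GenAI semantic conventions draft;
// the keys Observatory itself emits may differ.
function llmSpanAttributes(opts: {
  system: string; // e.g. "openai"
  model: string; // e.g. "gpt-4"
  inputTokens: number;
  outputTokens: number;
}): Record<string, string | number> {
  return {
    "gen_ai.system": opts.system,
    "gen_ai.request.model": opts.model,
    "gen_ai.usage.input_tokens": opts.inputTokens,
    "gen_ai.usage.output_tokens": opts.outputTokens,
  };
}
```

Attributes shaped like this are what make the traces legible to any OpenTelemetry-compatible backend, not just Observatory's widget.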

Multi-Framework Support

Observatory provides first-class integrations for popular AI frameworks:

Vercel AI SDK

Automatic instrumentation via OpenTelemetry for Next.js applications

Claude Agent SDK

Built-in observability for Anthropic’s Claude agents

Mastra

Native integration with the Mastra AI framework

Custom Agents

Manual instrumentation SDK for any TypeScript or Python agent

Session and Run Tracking

Track conversations and individual AI interactions with built-in session management:
route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { randomUUID } from "crypto";

export async function POST(req: Request) {
  const body = await req.json();

  const sessionId = body.sessionId; // Track entire conversation
  const runId = randomUUID(); // Track this specific AI call

  const result = streamText({
    model: openai("gpt-4"),
    messages: body.messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        "tcc.runId": runId,
        "tcc.sessionId": sessionId,
      },
    },
  });

  return result.toDataStreamResponse();
}
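On the client side, the sessionId should be minted once per conversation and reused on every turn, while the server mints a fresh runId per call. A minimal sketch of that session-side bookkeeping (the helper names are illustrative, not part of the SDK):

```typescript
import { randomUUID } from "crypto";

// Conversation-scoped session store: one stable sessionId per conversation,
// created on first use and reused on every subsequent turn.
const sessions = new Map<string, string>();

function getSessionId(conversationKey: string): string {
  let id = sessions.get(conversationKey);
  if (!id) {
    id = randomUUID();
    sessions.set(conversationKey, id);
  }
  return id;
}

// Request body for each chat turn: the sessionId stays stable across turns.
function chatRequestBody(conversationKey: string, messages: unknown[]) {
  return { sessionId: getSessionId(conversationKey), messages };
}
```

With this split, every span in a conversation shares one tcc.sessionId while each individual AI call gets its own tcc.runId.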

User Feedback Integration

Capture and link user feedback (thumbs up/down, comments) directly to specific agent runs for continuous improvement.
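Feedback only needs the runId to be attributable to a specific interaction. A hedged sketch of the kind of record a feedback endpoint might accept (the field names and shape are assumptions for illustration, not Observatory's documented API):

```typescript
// Illustrative feedback record shape; not Observatory's documented API.
type FeedbackPayload = {
  runId: string; // Links feedback to a specific agent run.
  rating: "up" | "down"; // Thumbs up / thumbs down.
  comment?: string; // Optional free-text comment.
  submittedAt: string; // ISO 8601 timestamp.
};

// Build a feedback record tied to one run.
function buildFeedback(
  runId: string,
  rating: "up" | "down",
  comment?: string,
): FeedbackPayload {
  return { runId, rating, comment, submittedAt: new Date().toISOString() };
}
```

Keying feedback by runId is what lets a thumbs-down be traced back to the exact prompts, tool calls, and model responses that produced it.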

How It Works

Observatory follows a simple architecture:
  1. Instrumentation — Observatory wraps your AI framework calls using OpenTelemetry
  2. Collection — Traces are collected locally or sent to The Context Company backend
  3. Visualization — The widget displays traces in real time in your browser
  4. Analysis — View detailed metrics, logs, and performance data
To get started:
  1. Install the package: Add Observatory to your project with your package manager of choice.
  2. Configure instrumentation: Set up OpenTelemetry instrumentation for your AI framework.
  3. Add the widget: Include the visualization widget in your application.
  4. Enable telemetry: Add the telemetry flag to your AI SDK calls.

Observatory Packages

Observatory is a monorepo containing multiple packages for different use cases, with SDKs for both TypeScript/JavaScript and Python.

Use Cases

Development

  • Debug agent behavior in real-time
  • Understand multi-step reasoning workflows
  • Identify performance bottlenecks
  • Monitor token usage and costs

Testing

  • Validate agent outputs and tool calls
  • Ensure proper error handling
  • Test edge cases and failure modes

Production

  • Monitor live agent performance
  • Collect user feedback
  • Track costs and usage patterns
  • Identify and fix issues quickly

Privacy & Telemetry

When running in local mode, Observatory collects limited anonymous usage data by default. No sensitive or personally identifiable information is ever collected. You can view exactly which events and values are tracked in the source code. To disable anonymous telemetry, set the TCC_DISABLE_ANONYMOUS_TELEMETRY environment variable:
.env
TCC_DISABLE_ANONYMOUS_TELEMETRY=true

Open Source

Observatory is 100% open source and available on GitHub. Contributions are welcome!

View Source Code

Explore the code, report issues, or contribute to Observatory

Contributing Guide

Learn how to contribute to the project

Next Steps

Ready to get started? Follow the quickstart guide to add Observatory to your AI application:

Quickstart Guide

Get Observatory running in your Next.js + AI SDK application in 5 minutes
