AI agents make multiple LLM calls, tool invocations, and database queries to complete tasks. This tutorial shows you how to use Helicone Sessions to monitor entire agent workflows from start to finish.

What You’ll Build

A monitored AI research agent that:
  • Takes a user query
  • Searches multiple sources
  • Synthesizes findings
  • Generates a report
All tracked as a single session with hierarchical traces.

Prerequisites

  • Helicone API key (get one here)
  • OpenAI API key
  • Node.js 18+ or Python 3.8+

Step 1: Set Up Your Project

npm install openai
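The code below reads both API keys from environment variables. One way to set them for the current shell session (the `sk-...` values are placeholders for your real keys):

```shell
# Replace the placeholders with your actual keys
export OPENAI_API_KEY="sk-..."
export HELICONE_API_KEY="sk-helicone-..."
```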

Step 2: Configure the Helicone Client

import { OpenAI } from "openai";
import { randomUUID } from "crypto";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

Step 3: Create Session Structure

Define your agent’s workflow hierarchy:
const sessionId = randomUUID(); // Unique ID for this research task
const sessionName = "Research Agent"; // Groups all research tasks together

// Path hierarchy shows agent workflow:
// /research              - Top level
// /research/query        - Query analysis
// /research/search       - Web searches
// /research/synthesize   - Combining results
// /research/report       - Final output
Use descriptive paths that represent the type of work, not the order. Multiple requests can use the same path if they do the same conceptual task.
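To keep the three session headers consistent across every request, it can help to centralize them in a small helper. A minimal sketch — the `sessionHeaders` name and shape are this tutorial's own convention, not part of any Helicone SDK:

```typescript
// Builds the per-request Helicone headers for one step of the workflow.
function sessionHeaders(sessionId: string, path: string): Record<string, string> {
  if (!path.startsWith("/")) {
    throw new Error(`Session path must start with "/": ${path}`);
  }
  return {
    "Helicone-Session-Id": sessionId,
    "Helicone-Session-Path": path,
    "Helicone-Session-Name": "Research Agent",
  };
}
```

Each step can then pass `{ headers: sessionHeaders(sessionId, "/research/query") }` as the second argument to `client.chat.completions.create`, instead of repeating the header block by hand.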

Step 4: Implement Agent Steps

1. Query Analysis

First, have the agent analyze the user’s query:
async function analyzeQuery(query: string, sessionId: string) {
  const response = await client.chat.completions.create(
    {
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "Analyze the research query and identify key topics to investigate."
        },
        { role: "user", content: query }
      ],
    },
    {
      headers: {
        "Helicone-Session-Id": sessionId,
        "Helicone-Session-Path": "/research/query",
        "Helicone-Session-Name": "Research Agent",
      },
    }
  );
  
  return response.choices[0].message.content ?? "";
}
2. Search Multiple Sources

Perform searches for each identified topic:
async function searchSources(topics: string[], sessionId: string) {
  const searches = topics.map(async (topic, index) => {
    const response = await client.chat.completions.create(
      {
        model: "gpt-4o-mini",
        messages: [
          {
            role: "system",
            content: "Search for relevant information about this topic."
          },
          { role: "user", content: topic }
        ],
      },
      {
        headers: {
          "Helicone-Session-Id": sessionId,
          "Helicone-Session-Path": `/research/search/${topic}`,
          "Helicone-Session-Name": "Research Agent",
          "Helicone-Property-SearchIndex": index.toString(),
        },
      }
    );
    
    return response.choices[0].message.content ?? "";
  });
  
  return await Promise.all(searches);
}
3. Synthesize Results

Combine findings into coherent insights:
async function synthesizeResults(
  searchResults: string[],
  sessionId: string
) {
  const response = await client.chat.completions.create(
    {
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content: "Synthesize research findings into key insights."
        },
        {
          role: "user",
          content: `Research results:\n${searchResults.join("\n\n")}`
        }
      ],
    },
    {
      headers: {
        "Helicone-Session-Id": sessionId,
        "Helicone-Session-Path": "/research/synthesize",
        "Helicone-Session-Name": "Research Agent",
      },
    }
  );
  
  return response.choices[0].message.content ?? "";
}
4. Generate Final Report

Create the final research report:
async function generateReport(
  synthesis: string,
  sessionId: string
) {
  const response = await client.chat.completions.create(
    {
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content: "Create a well-formatted research report."
        },
        { role: "user", content: synthesis }
      ],
    },
    {
      headers: {
        "Helicone-Session-Id": sessionId,
        "Helicone-Session-Path": "/research/report",
        "Helicone-Session-Name": "Research Agent",
      },
    }
  );
  
  return response.choices[0].message.content ?? "";
}

Step 5: Orchestrate the Agent

Put it all together:
async function runResearchAgent(userQuery: string) {
  const sessionId = randomUUID();
  
  console.log(`Starting research session: ${sessionId}`);
  
  // Step 1: Analyze query
  const analysis = await analyzeQuery(userQuery, sessionId);
  const topics = (analysis ?? "").split("\n").filter(Boolean); // Simplified extraction; drops empty lines
  
  // Step 2: Search sources
  const searchResults = await searchSources(topics, sessionId);
  
  // Step 3: Synthesize
  const synthesis = await synthesizeResults(searchResults, sessionId);
  
  // Step 4: Generate report
  const report = await generateReport(synthesis, sessionId);
  
  console.log(`Research complete! View in Helicone: https://helicone.ai/sessions/${sessionId}`);
  
  return report;
}

// Run the agent
runResearchAgent("What are the latest trends in AI agent architectures?");

Step 6: View Results in Helicone

1. Navigate to Sessions

Go to Helicone Sessions in your dashboard.
2. Find Your Session

Filter by session name “Research Agent” or search for your session ID.
3. Analyze the Flow

You’ll see:
  • Complete request hierarchy
  • Duration of each step
  • Costs per operation
  • Total agent cost and latency
  • Request/response details for debugging

Expected Output

After running the agent, you’ll see in Helicone:
Research Agent Session (550e8400-e29b-41d4-a716-446655440000)
├── /research/query (1 request, 0.8s, $0.002)
├── /research/search/topic1 (1 request, 1.2s, $0.003)
├── /research/search/topic2 (1 request, 1.1s, $0.003)
├── /research/search/topic3 (1 request, 1.3s, $0.003)
├── /research/synthesize (1 request, 2.1s, $0.008)
└── /research/report (1 request, 2.5s, $0.010)

Total: 6 requests, 9.0s, $0.029

Best Practices

  • Use descriptive paths: /research/search/web is better than /step3
  • Add custom properties: track user tiers, environments, or feature flags with Helicone-Property-* headers
  • Reuse session names: all research tasks should use “Research Agent” so you can compare performance across runs
  • Don’t reuse session IDs across workflows: each agent run should get its own unique session ID
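As a sketch of the custom-properties idea, extra `Helicone-Property-*` headers can be merged into the per-request options. The property names here (`Environment`, `UserTier`) are arbitrary examples, not required keys — any suffix after `Helicone-Property-` becomes a property:

```typescript
// Merges arbitrary custom properties into an existing header object.
function withProperties(
  headers: Record<string, string>,
  properties: Record<string, string>
): Record<string, string> {
  const propertyHeaders: Record<string, string> = {};
  for (const [key, value] of Object.entries(properties)) {
    propertyHeaders[`Helicone-Property-${key}`] = value;
  }
  return { ...headers, ...propertyHeaders };
}

const headers = withProperties(
  { "Helicone-Session-Id": "example-id" },
  { Environment: "staging", UserTier: "pro" }
);
// headers now also carries Helicone-Property-Environment and Helicone-Property-UserTier
```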

Troubleshooting

If sessions aren’t appearing, check that:
  • All three headers are present: Helicone-Session-Id, Helicone-Session-Path, Helicone-Session-Name
  • The session ID is consistent across all requests
  • Requests are successfully reaching Helicone (check response headers for helicone-id)
If the hierarchy looks wrong, check that:
  • Paths start with /
  • Levels are separated by /: /parent/child
  • Paths are consistent across related requests
Costs depend on accurate model detection. If using custom models or providers, costs may show as “not supported”. Contact [email protected] to add support.
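To confirm a request actually went through Helicone, you can inspect its response headers for helicone-id. A sketch of one way to do this with the openai Node SDK, assuming its `.withResponse()` helper to access the raw HTTP response:

```typescript
// Extracts Helicone's request ID from a response's headers, if present.
function getHeliconeId(headers: Headers): string | null {
  return headers.get("helicone-id");
}

// With the openai Node SDK, the raw response is available via .withResponse():
//   const { data, response } = await client.chat.completions
//     .create({ model: "gpt-4o-mini", messages })
//     .withResponse();
//   console.log("Helicone request ID:", getHeliconeId(response.headers));
```

If `getHeliconeId` returns null, the request likely went directly to OpenAI rather than through the Helicone baseURL.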

Next Steps

Sessions Documentation

Deep dive into session features and configuration

Custom Properties

Add metadata to track environments, users, and features

Cost Tracking

Monitor and optimize agent costs

User Metrics

Track agent usage per user
