The Open Source LLM Observability Platform
Monitor, evaluate, and route requests to 100+ AI models. Built for developers who need production-grade observability and intelligent routing.
Why Helicone?
AI Gateway
Access 100+ AI models with one unified API. Intelligent routing, automatic fallbacks, and observability built in.
Full Observability
Track every request, session, and trace. Monitor cost, latency, and quality across your entire AI stack.
Open Source
Self-host with Docker or Kubernetes. Your data stays under your control. Apache 2.0 licensed.
Key Features
Prompt Management
Version and deploy prompts without code changes. Test iterations with production data.
Session Tracking
Trace complex agent workflows and multi-step processes end-to-end.
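Sessions are typically attached by adding headers to each request in a workflow. A minimal sketch, assuming the `Helicone-Session-Id`, `Helicone-Session-Name`, and `Helicone-Session-Path` header names from Helicone's session docs (verify against the current documentation):

```python
import uuid

def session_headers(session_name: str, path: str) -> dict:
    """Build per-request headers that group calls into one traced session.

    Header names are assumed from Helicone's session tracking docs.
    The path lets you nest steps, e.g. "/research" -> "/research/fetch".
    """
    return {
        "Helicone-Session-Id": str(uuid.uuid4()),   # shared across the workflow
        "Helicone-Session-Name": session_name,      # human-readable label
        "Helicone-Session-Path": path,              # position in the trace tree
    }

headers = session_headers("research-agent", "/research/fetch")
```

Every request sent with the same session ID is stitched into a single end-to-end trace in the dashboard.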
Evaluation Framework
Score and evaluate responses. Build datasets from production traffic.
Cost Analytics
Track spending across providers. Identify expensive patterns and optimize costs.
Response Caching
Reduce costs by up to 90% with intelligent response caching.
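Caching is opt-in per request via headers. A sketch assuming the `Helicone-Cache-Enabled` and `Helicone-Cache-Bucket-Max-Size` header names from Helicone's caching docs (check the current docs before relying on them):

```python
def cache_headers(bucket_size: int = 1) -> dict:
    """Enable response caching for a single request.

    Header names assumed from Helicone's caching documentation.
    A bucket size > 1 stores multiple responses for the same prompt
    and serves one at random, so cached output isn't always identical.
    """
    return {
        "Helicone-Cache-Enabled": "true",
        "Helicone-Cache-Bucket-Max-Size": str(bucket_size),
    }

headers = cache_headers(bucket_size=3)
```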
Rate Limiting
Control usage per user, team, or API key with custom rate limits.
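Rate limits are expressed as a policy string in a request header. A sketch assuming the `Helicone-RateLimit-Policy` header and its `quota;w=window;s=segment` format from Helicone's rate-limiting docs (treat the exact format as an assumption to verify):

```python
def rate_limit_policy(quota: int, window_seconds: int, segment: str = "user") -> dict:
    """Build a per-request rate-limit policy header.

    Assumed format from Helicone's docs: "[quota];w=[window];s=[segment]",
    meaning at most `quota` requests per `window_seconds`, counted per segment
    (e.g. per user, per team, or per API key).
    """
    return {"Helicone-RateLimit-Policy": f"{quota};w={window_seconds};s={segment}"}

# e.g. at most 1,000 requests per user per hour
headers = rate_limit_policy(1000, 3600, segment="user")
```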
Quick Start
Get your first request logged in under 2 minutes:
Sign up and get your API key
Create a free account at helicone.ai and generate your API key.
View your logs
Navigate to your dashboard to see requests, costs, and performance metrics.
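Between signing up and viewing logs, you point your client at the Helicone gateway. A minimal sketch using the OpenAI SDK pattern, assuming the `https://oai.helicone.ai/v1` endpoint and the `Helicone-Auth` header from Helicone's integration docs (confirm both against the current docs):

```python
import os

# Assumed gateway endpoint from Helicone's OpenAI integration docs
HELICONE_BASE_URL = "https://oai.helicone.ai/v1"

def helicone_headers(helicone_api_key: str) -> dict:
    """Headers that attach Helicone logging to each request."""
    return {"Helicone-Auth": f"Bearer {helicone_api_key}"}

headers = helicone_headers(os.environ.get("HELICONE_API_KEY", ""))

# With the official OpenAI SDK this would look like (not executed here):
#
#   from openai import OpenAI
#   client = OpenAI(base_url=HELICONE_BASE_URL, default_headers=headers)
#   client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": "Hello"}],
#   )
```

No other code changes are needed; requests flow through the gateway and appear in your dashboard.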
Supported Providers
Access 100+ models from leading AI providers through a single API:
OpenAI
GPT-4, GPT-4o, GPT-3.5
Anthropic
Claude 3.5, Claude 3 Opus
Google
Gemini, PaLM, Vertex AI
More Providers
Groq, Together, Anyscale, and more
Enterprise Ready
Self-Hosting
Deploy on your infrastructure with Docker or Kubernetes. Full control over your data.
Security & Compliance
SOC 2 and GDPR compliant. Encryption at rest and in transit.
Community & Support
Documentation
Comprehensive guides and API reference
Discord Community
Join 5,000+ developers building with Helicone
GitHub
5,000+ stars. Contribute and report issues.
