Create your account
Sign up for Helicone
- Sign up for free (10,000 requests/month on the free tier)
- Complete the onboarding flow
- Generate your Helicone API key at API Keys
Free tier includes: 10K requests/month, all core features, and no credit card required
Add credits (optional)
Use the AI Gateway with credits
For the easiest experience, add credits to access 100+ models without signing up for each provider:
- Go to helicone.ai/credits
- Add funds to your account (we charge exactly what providers charge - 0% markup)
- Use any model from any provider with a single API key
What are credits?
Instead of managing API keys for each provider (OpenAI, Anthropic, Google, etc.), Helicone maintains the keys for you. You simply add credits to your account, and we handle the rest.
Benefits:
- 0% markup - Pay exactly what providers charge, no hidden fees
- No need to sign up for multiple LLM providers
- Switch between 100+ models by just changing the model name
- Automatic fallbacks if a provider is down
- Unified billing across all providers
Already have provider keys?
Skip this step and use your own API keys for OpenAI, Anthropic, or other providers. Configure them at Provider Keys. You’ll still get full observability, but you’ll manage provider relationships directly. See the “Bring Your Own Keys” tab in Step 3.
Send your first request
Choose your integration method
Helicone’s AI Gateway is OpenAI-compatible, so you can use the OpenAI SDK with any provider.
- TypeScript (Credits)
- Python (Credits)
- cURL (Credits)
- Bring Your Own Keys
Using Helicone credits, you can access any model and switch providers instantly by changing the model name.
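As a minimal sketch, here is the shape of an OpenAI-compatible chat completion request sent through the gateway with your Helicone API key. The gateway base URL, the helper name `build_gateway_request`, and the model names are assumptions for illustration; confirm the exact endpoint in your dashboard.

```python
import json

# Assumed gateway endpoint; verify against your Helicone dashboard.
GATEWAY_URL = "https://ai-gateway.helicone.ai/v1/chat/completions"

def build_gateway_request(helicone_api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for an OpenAI-compatible gateway call.

    Switching providers is just a matter of changing `model`, e.g.
    "gpt-4o-mini" vs. "claude-3-5-haiku" (model names are examples).
    """
    headers = {
        # Your Helicone API key authenticates against your credits balance.
        "Authorization": f"Bearer {helicone_api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return GATEWAY_URL, headers, body

url, headers, body = build_gateway_request("<HELICONE_API_KEY>", "gpt-4o-mini", "Hello!")
```

You can send this with any HTTP client, or point the OpenAI SDK’s `base_url` at the gateway and let it build the request for you.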
View your logs
See your request in the dashboard
Once you run the code, you’ll see your request appear in the Requests tab within seconds.
- Full request and response details
- Token usage (input, output, cached)
- Exact cost per request
- Latency and processing time
- Model and provider information
- Custom properties and user tracking
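Custom properties from the list above are attached as request headers. A sketch, assuming Helicone’s `Helicone-Property-<Name>` header prefix (the helper name is hypothetical):

```python
def with_custom_properties(headers: dict, props: dict) -> dict:
    """Return a copy of the request headers with one
    Helicone-Property-<Name> header per custom property, so requests
    can be segmented by user, feature, or environment in the dashboard."""
    tagged = dict(headers)
    for name, value in props.items():
        tagged[f"Helicone-Property-{name}"] = value
    return tagged

headers = with_custom_properties(
    {"Authorization": "Bearer <HELICONE_API_KEY>"},
    {"Environment": "staging", "Feature": "chat"},
)
```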
You’re All Set! 🎉
Congratulations! You’ve successfully integrated Helicone and logged your first LLM request. Now let’s explore what you can do with the platform.
What’s Next?
Understand the Platform
Learn how Helicone solves production AI challenges with architecture overview
Track Sessions & Agents
Debug multi-step AI workflows with session trees and full visibility
Add Custom Properties
Segment requests by user, feature, or environment for better insights
Set Up Fallbacks
Configure automatic failover when providers go down
Manage Prompts
Version control prompts and deploy without code changes
Cost Tracking
Understand your LLM economics and optimize spending
Common Use Cases
How do I track costs by user?
Add a Helicone-User-Id header to tag requests with user IDs. Then filter by user in the dashboard to see per-user costs and usage.
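A minimal sketch of tagging a request with a user ID via the `Helicone-User-Id` header (the helper name is hypothetical; only the header name comes from the text above):

```python
def tag_request_with_user(headers: dict, user_id: str) -> dict:
    """Return a copy of the request headers with Helicone-User-Id set,
    so the dashboard can break down cost and usage per user."""
    tagged = dict(headers)
    tagged["Helicone-User-Id"] = user_id
    return tagged

headers = tag_request_with_user(
    {"Authorization": "Bearer <HELICONE_API_KEY>"}, "user-123"
)
```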
How do I debug AI agent workflows?
Use sessions to group related requests and trace multi-step workflows. View the complete workflow tree in the Sessions tab.
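A sketch of grouping an agent’s steps into one session. The header names follow Helicone’s sessions feature but should be verified against the sessions docs; the path values are illustrative:

```python
import uuid

def session_headers(session_id: str, path: str, name: str) -> dict:
    """Build the headers that group related requests into one session
    tree: a shared ID, a step path, and a human-readable name."""
    return {
        "Helicone-Session-Id": session_id,
        "Helicone-Session-Path": path,  # e.g. "/trip-planner/plan"
        "Helicone-Session-Name": name,
    }

# Every step of the workflow reuses the same session ID with a
# different path, which becomes the tree you see in the Sessions tab.
sid = str(uuid.uuid4())
plan_step = session_headers(sid, "/trip-planner/plan", "trip-planner")
search_step = session_headers(sid, "/trip-planner/plan/search", "trip-planner")
```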
How do I set up automatic fallbacks?
Specify multiple models separated by commas, and Helicone will try them in order. Your app stays online even during provider outages.
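A sketch of the comma-separated fallback list in a request body (the helper name and the model names are illustrative assumptions):

```python
def fallback_model_string(models: list[str]) -> str:
    """Join candidate models into the comma-separated string the
    gateway tries in order until one succeeds."""
    return ",".join(models)

payload = {
    # Primary model first, fallbacks after it.
    "model": fallback_model_string(["gpt-4o-mini", "claude-3-5-haiku"]),
    "messages": [{"role": "user", "content": "Hello!"}],
}
```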
How do I cache responses to save costs?
Enable caching with a header to reuse identical responses. Identical requests are served from cache instantly at zero cost.
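A sketch of switching caching on for a request. The header name follows Helicone’s caching feature (verify against the caching docs); the helper name is hypothetical:

```python
def enable_cache(headers: dict) -> dict:
    """Return a copy of the request headers with Helicone response
    caching enabled, so identical requests are answered from cache."""
    cached = dict(headers)
    cached["Helicone-Cache-Enabled"] = "true"  # header value is a string
    return cached

headers = enable_cache({"Authorization": "Bearer <HELICONE_API_KEY>"})
```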
Need Help?
We’re here to help you succeed:
Join Discord
Chat with 2000+ developers in our community
Email Support
Contact [email protected] with questions
Documentation
Explore integration guides for all frameworks
GitHub
Star us and contribute to the project
