Already using OpenAI, Anthropic, or other providers? Helicone integrates with just a URL change—no refactoring required.
Prerequisites
- A Helicone account (sign up free)
- Node.js or Python installed (or just use cURL)
- 2 minutes
Step 1: Create Your Account
Sign up for Helicone
Navigate to helicone.ai/signup and create your free account.
Generate your API key
After signing up, go to Settings > API Keys and generate a new API key. Save this key securely—you’ll use it to authenticate your requests.
Add credits (optional)
If you want to use Helicone’s AI Gateway to access 100+ models without managing individual provider keys:
- Visit helicone.ai/credits
- Add credits to your account (starting at $10)
- Access any model instantly with 0% markup
What are credits and how do they work?
Credits let you access 100+ LLM providers (OpenAI, Anthropic, Google, etc.) without signing up for each one individually. Here’s how it works:
- 0% markup: You pay exactly what providers charge
- Unified billing: One account for all providers
- Instant access: No need to sign up for OpenAI, Anthropic, etc.
- Automatic fallbacks: Switch providers when one is down
- Simplified management: We handle provider API keys
Step 2: Send Your First Request
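Because the gateway speaks the OpenAI wire format, your first request is just an HTTP POST authenticated with your Helicone key. Here is a minimal stdlib sketch; the gateway URL and model name are assumptions, so copy the exact values from your dashboard (with the OpenAI SDK you would instead pass the same URL as `base_url`):

```python
import json
import os
import urllib.request

# Assumed gateway endpoint; confirm the exact URL in your Helicone dashboard.
GATEWAY_URL = "https://ai-gateway.helicone.ai/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request aimed at the gateway."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # your Helicone API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    os.environ.get("HELICONE_API_KEY", "<HELICONE_API_KEY>"),
    "gpt-4o-mini",  # illustrative model id
    "Say hello in five words.",
)
print(req.full_url)
# To actually send it (needs a valid key, plus credits if you use the gateway):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

The only change from calling a provider directly is the base URL and the key you authenticate with; the request and response bodies are unchanged.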
Helicone’s AI Gateway provides an OpenAI-compatible API. Simply point your existing OpenAI SDK to our gateway URL.
Step 3: View Your Request in the Dashboard
Within seconds of sending your request, it will appear in your Helicone dashboard:
- Navigate to us.helicone.ai/requests
- You’ll see your request with full details:
- Request and response bodies
- Cost breakdown
- Latency metrics (total time, time to first token)
- Token usage
- Model and provider information
Try More Models
One of Helicone’s superpowers is unified access to 100+ models. Try switching models by just changing the model parameter.
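In practice the request body stays identical and only the model string changes. A sketch (the model identifiers below are illustrative; check the catalog for the exact names):

```python
def chat_payload(model: str, prompt: str) -> dict:
    """OpenAI-compatible request body; switching providers is just a new model string."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# Illustrative identifiers -- verify exact names in the model catalog.
for model in ("gpt-4o-mini", "claude-3-5-haiku", "gemini-2.0-flash"):
    body = chat_payload(model, "Summarize Helicone in one sentence.")
    print(body["model"])
```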
Explore All Models
Browse our catalog of 100+ supported models across 20+ providers
Add Custom Metadata
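Custom properties travel as request headers of the form Helicone-Property-&lt;Name&gt; (treat the exact convention as something to verify against the properties docs). A small sketch, with property names invented for illustration:

```python
def property_headers(props: dict) -> dict:
    """Map {"Environment": "staging"} to {"Helicone-Property-Environment": "staging"}."""
    return {f"Helicone-Property-{name}": value for name, value in props.items()}

# Merge the properties into the headers of any request you send through Helicone.
headers = {
    "Authorization": "Bearer <HELICONE_API_KEY>",
    "Content-Type": "application/json",
    **property_headers({"Environment": "staging", "Feature": "onboarding"}),
}
print(sorted(h for h in headers if h.startswith("Helicone-Property-")))
```

Each property then becomes a filterable dimension in the dashboard.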
Enhance your requests with custom properties for better filtering and debugging.
Track Sessions (Multi-Step Workflows)
Building an AI agent or chatbot with multiple LLM calls? Use sessions to group related requests.
Learn More About Sessions
Deep dive into session tracking for complex AI agents and workflows
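One way to wire sessions up: generate a single session id for the whole workflow and attach it, along with a per-step path, to every request via Helicone’s session headers. The header names follow the sessions docs but should be treated as an assumption to verify, and the paths and session name below are invented for illustration:

```python
import uuid

SESSION_ID = str(uuid.uuid4())  # one id shared by every call in the workflow

def session_headers(step_path: str, session_name: str = "onboarding-agent") -> dict:
    """Headers that group a request into one session in the Helicone dashboard."""
    return {
        "Helicone-Session-Id": SESSION_ID,    # constant across the workflow
        "Helicone-Session-Path": step_path,   # e.g. "/classify" then "/answer"
        "Helicone-Session-Name": session_name,
    }

first = session_headers("/classify")
second = session_headers("/answer")
print(first["Helicone-Session-Id"] == second["Helicone-Session-Id"])
```

Requests sharing the same session id appear as a single grouped trace, with the path giving each step its place in the hierarchy.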
What’s Next?
Now that you’re logging requests, explore what else Helicone can do:
Platform Overview
Understand how Helicone works and explore the architecture
Gateway Features
Set up automatic fallbacks, caching, and rate limits
Cost Tracking
Track costs per user, feature, or any custom dimension
Prompt Management
Deploy and version prompts without code changes
Common Integration Patterns
Using with LangChain
Using with Vercel AI SDK
Async logging (without proxy)
If you prefer not to use a proxy, you can log requests asynchronously.
View async logging documentation in our integrations guide
Need Help?
We’re here to support you:
- Join our Discord community for live help
- Email us at [email protected]
- Browse our documentation for common questions
- Check out example code on GitHub