Installation
Setup
Set your OpenAI API key as the `OPENAI_API_KEY` environment variable.

Usage
Streaming
API Reference
ChatOpenAI
- `model`: Name of the OpenAI model to use (e.g., `gpt-4o`, `gpt-4o-mini`, `o1`, `o3-mini`).
- `temperature`: Sampling temperature between 0 and 2. Higher values make output more random; lower values make it more deterministic.
- `max_tokens`: Maximum number of tokens to generate.
- `logprobs`: Whether to return log probabilities of output tokens.
- `stream_options`: Configure streaming outputs, such as whether to return token usage when streaming (e.g., `{"include_usage": True}`).
- `use_responses_api`: Whether to use the Responses API.
- `timeout`: Timeout for requests in seconds.
- `max_retries`: Maximum number of retries for failed requests.
- `api_key`: OpenAI API key. If not provided, reads from the `OPENAI_API_KEY` environment variable.
- `base_url`: Base URL for API requests. Only specify if using a proxy or service emulator.
- `organization`: OpenAI organization ID. If not provided, reads from the `OPENAI_ORG_ID` environment variable.

Supported Models
- GPT-4o series: `gpt-4o`, `gpt-4o-mini` - latest multimodal models
- o-series: `o1`, `o1-mini`, `o3-mini` - reasoning models with advanced problem-solving
- GPT-4 Turbo: `gpt-4-turbo`, `gpt-4-turbo-preview` - high-intelligence models
- GPT-3.5 Turbo: `gpt-3.5-turbo` - fast, cost-effective model
Features
- Text generation
- Function/tool calling
- Vision (multimodal input with GPT-4o)
- JSON mode
- Streaming
- Async support
- Token usage tracking
- Prompt caching
ChatOpenAI targets official OpenAI API specifications only. Non-standard response fields added by third-party providers are not extracted. If using a provider like OpenRouter or vLLM, use the corresponding provider-specific package instead.