Introduction
Chat models are language models that take messages as input and return messages as output. Unlike completion models, which work with raw text, chat models understand conversational context through structured message types. LangChain.js provides a unified interface for working with chat models from different providers:
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Google (Gemini, Vertex AI)
- And many more
Quick Start
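A minimal sketch of calling a chat model, assuming the `@langchain/openai` package is installed, `OPENAI_API_KEY` is set in the environment, and an ES module context (for top-level `await`); constructor field names may vary slightly by package version:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Reads OPENAI_API_KEY from the environment by default.
const model = new ChatOpenAI({ model: "gpt-4", temperature: 0.7 });

const response = await model.invoke("What is LangChain?");
console.log(response.content);
```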
Message Types
LangChain.js defines several message types for chat:
SystemMessage
Provides instructions and context to the model.
HumanMessage
Represents user input.
AIMessage
Represents the model’s response.
ToolMessage
Carries results from tool/function calls.
Basic Usage
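The message types above can be constructed directly. A sketch, assuming `@langchain/core` is installed (the tool call id is illustrative):

```typescript
import {
  SystemMessage,
  HumanMessage,
  AIMessage,
  ToolMessage,
} from "@langchain/core/messages";

const system = new SystemMessage("You are a helpful assistant.");
const human = new HumanMessage("What's the capital of France?");
const ai = new AIMessage("The capital of France is Paris.");
// A ToolMessage links a tool's result back to the tool call that produced it.
const toolResult = new ToolMessage({ content: "42", tool_call_id: "call_123" });
```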
Single Message
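Invoking with a single message, sketched under the same assumptions (`@langchain/openai` installed, `OPENAI_API_KEY` set):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4" });

const response = await model.invoke([new HumanMessage("Hello!")]);
console.log(response.content);
```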
Conversation History
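To carry conversational context, pass the prior turns as an array of messages. A sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  SystemMessage,
  HumanMessage,
  AIMessage,
} from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4" });

const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What's the capital of France?"),
  new AIMessage("The capital of France is Paris."),
  new HumanMessage("What's its population?"), // "its" resolves via history
];
const response = await model.invoke(messages);
```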
Streaming Responses
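A streaming sketch, assuming `@langchain/openai`; each chunk carries a fragment of the response:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

const stream = await model.stream("Write a haiku about the sea.");
for await (const chunk of stream) {
  // chunk.content may be a string or structured content depending on the model.
  process.stdout.write(String(chunk.content));
}
```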
Stream tokens as they’re generated.
Configuration Options
Temperature
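For example (a sketch; model names are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Low temperature for deterministic, factual answers.
const factual = new ChatOpenAI({ model: "gpt-4", temperature: 0 });

// Higher temperature for more varied, creative output.
const creative = new ChatOpenAI({ model: "gpt-4", temperature: 0.9 });
```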
Controls randomness (0 = deterministic, 1 = creative).
Max Tokens
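A sketch capping the response length (the constructor field is `maxTokens` in recent `@langchain/openai` versions, but verify against your version):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Responses are truncated after roughly 100 tokens.
const model = new ChatOpenAI({ model: "gpt-4", maxTokens: 100 });
```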
Limit response length.
Stop Sequences
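Stop sequences can typically be passed per call via the invocation options; support varies by provider. A sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

// Generation halts as soon as the model emits "END".
const response = await model.invoke("List three fruits, then write END.", {
  stop: ["END"],
});
```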
Stop generation at specific strings.
Function Calling
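A sketch of binding a tool to a model, assuming `@langchain/core` and `zod`; the weather tool is hypothetical:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A hypothetical tool; name, description, and schema are illustrative.
const getWeather = tool(
  async ({ city }) => `Sunny in ${city}`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({ city: z.string() }),
  },
);

const model = new ChatOpenAI({ model: "gpt-4" });
const withTools = model.bindTools([getWeather]);

const reply = await withTools.invoke("What's the weather in Paris?");
// If the model decided to call the tool, the calls are listed here:
console.log(reply.tool_calls);
```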
Chat models can call functions/tools to interact with external systems.
Forcing Tool Usage
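With OpenAI-style models, a `tool_choice` option can force a specific tool; accepted values vary by provider, so treat this as a sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical tool, as in the function-calling example.
const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the current weather for a city",
  schema: z.object({ city: z.string() }),
});

const model = new ChatOpenAI({ model: "gpt-4" });

// The model must call get_weather rather than answering in prose.
const forced = model.bindTools([getWeather], { tool_choice: "get_weather" });
const reply = await forced.invoke("Weather in Paris?");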
Require the model to use specific tools.
Tool Choice Options
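Commonly available `tool_choice` values follow OpenAI semantics; exact support differs by provider, so verify against your provider's documentation. A sketch (assume `model` and `tools` are defined as in the examples above):

```typescript
model.bindTools(tools, { tool_choice: "auto" });        // model decides (default)
model.bindTools(tools, { tool_choice: "none" });        // never call a tool
model.bindTools(tools, { tool_choice: "required" });    // must call some tool
model.bindTools(tools, { tool_choice: "get_weather" }); // must call this tool
```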
Structured Output
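A sketch using `withStructuredOutput` with a `zod` schema; field descriptions help the model fill the shape correctly:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const personSchema = z.object({
  name: z.string().describe("The person's name"),
  age: z.number().describe("The person's age in years"),
});

const model = new ChatOpenAI({ model: "gpt-4" });
const structured = model.withStructuredOutput(personSchema);

// Returns a plain object matching the schema, e.g. { name: ..., age: ... }.
const person = await structured.invoke("Alice is 42 years old.");
```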
Get responses in a specific format using schemas.
Complex Schemas
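Nested objects, arrays, enums, and optional fields work the same way. An illustrative sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const recipeSchema = z.object({
  title: z.string(),
  difficulty: z.enum(["easy", "medium", "hard"]),
  ingredients: z.array(
    z.object({ name: z.string(), quantity: z.string() }),
  ),
  steps: z.array(z.string()),
  servings: z.number().optional(),
});

const model = new ChatOpenAI({ model: "gpt-4" });
const extractRecipe = model.withStructuredOutput(recipeSchema);

const recipe = await extractRecipe.invoke("Give me a simple pancake recipe.");
```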
Batch Processing
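A batching sketch; `batch` runs the inputs with managed concurrency and returns one response per input:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

const questions = ["What is TypeScript?", "What is Rust?", "What is Go?"];
const responses = await model.batch(questions);

// Concurrency can usually be capped via the config; the exact field
// name may vary by version:
// await model.batch(questions, { maxConcurrency: 2 });
```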
Process multiple inputs efficiently.
Batch with Different Configurations
Caching
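A caching sketch; passing `cache: true` enables LangChain's default in-memory cache, so identical requests are answered without a second API call:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4", cache: true });

const first = await model.invoke("Tell me a joke");  // hits the API
const second = await model.invoke("Tell me a joke"); // served from the cache
```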
Cache responses to reduce API calls and costs.
Multi-Modal Inputs
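For vision-capable models, a `HumanMessage` can carry an array of content parts. A sketch (the image URL is a placeholder):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o" }); // a vision-capable model

const message = new HumanMessage({
  content: [
    { type: "text", text: "What is in this image?" },
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
  ],
});

const response = await model.invoke([message]);
```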
Some models support images and other media.
Error Handling
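Calls can fail on rate limits, timeouts, or invalid credentials; these surface as thrown errors. A sketch:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

try {
  const response = await model.invoke("Hello");
  console.log(response.content);
} catch (err) {
  // Rate limits, network timeouts, and auth failures all land here.
  console.error(
    "Chat model call failed:",
    err instanceof Error ? err.message : err,
  );
}
```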
Provider-Specific Features
OpenAI
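`ChatOpenAI` exposes OpenAI-specific sampling options; the field names below follow recent `@langchain/openai` versions, so verify against yours:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4",
  topP: 0.9,             // nucleus sampling
  frequencyPenalty: 0.5, // discourage repeated tokens
  presencePenalty: 0.5,  // encourage new topics
});
```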
Anthropic
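A sketch using `ChatAnthropic`, assuming the `@langchain/anthropic` package and an `ANTHROPIC_API_KEY` in the environment; model names change over time:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  maxTokens: 1024, // Anthropic requires an explicit max token budget
});

const response = await model.invoke("Hello, Claude!");
```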
Best Practices
Use System Messages
System messages set the behavior and context:
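For example (the company and persona are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4" });

const messages = [
  new SystemMessage(
    "You are a support agent for Acme Corp. Answer only questions about " +
      "Acme products, and keep responses friendly and concise.",
  ),
  new HumanMessage("How do I reset my password?"),
];
const response = await model.invoke(messages);
```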
Set Appropriate Temperature
Match temperature to your use case:
- 0-0.3: Factual tasks, classification, data extraction
- 0.4-0.7: Balanced responses, general conversation
- 0.8-1.0: Creative writing, brainstorming
Handle Streaming for Long Responses
Use streaming for better UX with long outputs:
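A sketch that renders tokens as they arrive while also accumulating the full text:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

const stream = await model.stream("Write a short story about a robot.");
let full = "";
for await (const chunk of stream) {
  const piece = String(chunk.content);
  full += piece;
  process.stdout.write(piece); // show progress immediately
}
```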
Implement Retry Logic
Handle transient failures gracefully:
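A generic exponential-backoff sketch in plain TypeScript (`withRetries` is a hypothetical helper name); LangChain.js runnables also ship a built-in `withRetry()` method that can wrap a model directly:

```typescript
// Retry an async operation with exponential backoff.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 100ms, 200ms, 400ms, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage: withRetries(() => model.invoke("Hello"))
```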
Next Steps
- Prompt Engineering: Learn techniques for better prompts
- Streaming: Implement streaming responses
- Building Agents: Create agents with chat models
- Creating Tools: Add tools for function calling
