Chat models are the reasoning engines that power LangChain applications. They process messages and generate responses, optionally calling tools to accomplish tasks. All chat models in LangChain.js extend BaseChatModel and implement the Runnable interface.
The BaseChatModel class is exported from the @langchain/core/language_models/chat_models entry point.
```typescript
import { HumanMessage } from "@langchain/core/messages";

const response = await model.invoke([
  new HumanMessage("What is LangChain?"),
]);

console.log(response.content);
// "LangChain is a framework for building applications with large language models..."
```
Messages can also be written as `[role, content]` tuples using the shorthand format:
```typescript
const response = await model.invoke([
  ["system", "You are a helpful assistant."],
  ["human", "What is LangChain?"],
]);
```
```typescript
const stream = await model.stream([
  ["human", "Write a short poem about TypeScript"],
]);

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```
Handle streaming with callbacks:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

const response = await model.invoke([["human", "Count to 10"]]);
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  // Model identifier
  model: "gpt-4o",
  // Controls randomness (0.0 to 2.0)
  temperature: 0.7,
  // Maximum tokens in response
  maxTokens: 1000,
  // Alternative to temperature
  topP: 0.9,
  // Stop sequences
  stop: ["\n\n", "END"],
  // Number of responses to generate
  n: 1,
  // Streaming enabled
  streaming: false,
  // Timeout in milliseconds
  timeout: 60000,
  // Max retries
  maxRetries: 2,
});
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  // OpenAI-specific
  presencePenalty: 0.5,
  frequencyPenalty: 0.5,
  logitBias: { "50256": -100 },
  user: "user-123",
});
```
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20241022",
  // Anthropic-specific
  maxTokens: 4096,
  topK: 40,
});
```
```typescript
// Must use a tool
const anyResponse = await modelWithTools.invoke(
  [["human", "What's the weather?"]],
  { tool_choice: "any" }
);

// Must use a specific tool
const forcedResponse = await modelWithTools.invoke(
  [["human", "What's the weather?"]],
  { tool_choice: "get_weather" }
);

// Can use tools but doesn't have to
const autoResponse = await modelWithTools.invoke(
  [["human", "What's the weather?"]],
  { tool_choice: "auto" }
);
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { InMemoryCache } from "@langchain/core/caches";

const cache = new InMemoryCache();
const model = new ChatOpenAI({ cache });

// First call - hits the API
const response1 = await model.invoke([["human", "What is AI?"]]);

// Second call - returns from cache
const response2 = await model.invoke([["human", "What is AI?"]]);
```
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20241022",
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  apiKey: process.env.GOOGLE_API_KEY,
});
```