Overview
LangChain.js is built on a composable architecture that makes it easy to build complex LLM applications from simple, interoperable components. The framework follows several key design principles:
- Modularity: Every component is self-contained and can be used independently
- Composability: Components can be chained together using the Runnable interface
- Type Safety: Full TypeScript support with strict typing throughout
- Streaming: First-class support for streaming responses
- Observability: Built-in tracing and callbacks for debugging and monitoring
Core Abstractions
The framework is organized around several foundational abstractions:
Runnables
The Runnable interface is the foundation of LangChain.js. It provides a standard interface for components that can be invoked, streamed, and batched.
- invoke() - Single invocation
- stream() - Streaming invocation
- batch() - Batch invocation
Messages
Messages represent the fundamental units of conversation. LangChain.js provides typed message classes for different roles.
Chat Models
Chat models are the reasoning engines that power LangChain applications. They extend the Runnable interface and provide additional capabilities like tool calling and structured output.
Tools
Tools give LLMs the ability to take actions and interact with external systems. They’re defined using Zod schemas for type-safe input validation.
Prompts
Prompt templates allow you to construct messages dynamically with variable substitution and formatting.
Agents
Agents combine models, tools, and reasoning patterns to create autonomous systems that can plan and execute multi-step tasks.
Package Structure
LangChain.js is organized as a monorepo with several key packages:
@langchain/core
The core package contains the fundamental abstractions and interfaces:
- Runnable interface and base implementations
- Message types and utilities
- Base chat model and LLM classes
- Tool interfaces
- Prompt templates
- Output parsers
langchain
The main package provides high-level features built on core:
- Agent implementations (ReAct, etc.)
- Chains and orchestration
- Memory systems
- Advanced prompt engineering
Provider Packages
Integration packages for specific LLM providers:
- @langchain/openai - OpenAI models
- @langchain/anthropic - Anthropic models
- @langchain/google-genai - Google models
- And many more…
@langchain/community
Community-maintained integrations for:
- Vector stores
- Document loaders
- Retrievers
- Additional tools
Composition Patterns
Chaining with pipe()
The most common pattern is to chain Runnables using the pipe() method.
Parallel Execution
Use RunnableParallel to execute multiple Runnables in parallel.
Conditional Routing
Route inputs based on conditions.
Configuration and Callbacks
RunnableConfig
All Runnables accept a config object for controlling execution.
Callbacks
Callbacks provide hooks into the execution lifecycle.
Error Handling
Retries
Add automatic retries to any Runnable.
Fallbacks
Provide fallback Runnables if the primary fails.
Streaming
Streaming is a first-class feature in LangChain.js.
Stream Events
Get granular events from streaming execution.
Multi-Environment Support
LangChain.js works across all JavaScript environments:
- Node.js - Full support for all features
- Browser - Client-side applications
- Edge Runtime - Vercel Edge, Cloudflare Workers
- Deno - Alternative runtime support
- React Native - Mobile applications
Next Steps
- Runnables - Learn about the Runnable interface
- Messages - Understand message types
- Chat Models - Work with language models
- Tools - Give agents abilities
