LangChain Integration

LangChain is a popular framework for building LLM-powered applications. Since LLM Gateway is OpenAI-compatible, you can use LangChain’s ChatOpenAI class with a custom base URL.

Quick Start

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5"
)

response = llm.invoke("What is LangChain?")
print(response.content)

Installation

Python:

pip install langchain langchain-openai

JavaScript:

npm install @langchain/openai @langchain/core

Before and After Comparison

The snippets below show typical code that calls OpenAI directly. To switch to LLM Gateway, keep the same code and change where it points: set base_url (Python) or configuration.baseURL (JavaScript) to https://api.llmgateway.io/v1 and use your LLM Gateway API key.

Python

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="sk-...",  # OpenAI API key
    model="gpt-4o"
)

response = llm.invoke("Hello!")

JavaScript

import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({
    apiKey: 'sk-...',  // OpenAI API key
    model: 'gpt-4o'
});

const response = await llm.invoke('Hello!');
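For the "after" side in JavaScript, the base URL goes inside a configuration object rather than as a top-level option (see Caveats and Limitations). A minimal sketch, assuming the @langchain/openai package:

```javascript
import { ChatOpenAI } from '@langchain/openai';

// After: the same code, routed through LLM Gateway.
// The base URL must be wrapped in `configuration`;
// the API key can stay at the top level.
const llm = new ChatOpenAI({
  apiKey: 'your-llmgateway-api-key',
  model: 'gpt-5',
  configuration: {
    baseURL: 'https://api.llmgateway.io/v1',
  },
});

const response = await llm.invoke('Hello!');
```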

Streaming

LLM Gateway fully supports LangChain’s streaming interface:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5",
    streaming=True
)

for chunk in llm.stream("Write a short story"):
    print(chunk.content, end="", flush=True)

Async Streaming

from langchain_openai import ChatOpenAI
import asyncio

async def main():
    llm = ChatOpenAI(
        base_url="https://api.llmgateway.io/v1",
        api_key="your-llmgateway-api-key",
        model="gpt-5"
    )

    async for chunk in llm.astream("Write a poem"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())

Chains and Prompts

LLM Gateway works seamlessly with LangChain’s prompt templates and chains:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5"
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm | StrOutputParser()

result = chain.invoke({"input": "What is LangChain?"})
print(result)

Tools and Function Calling

LLM Gateway supports LangChain’s tool calling:

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5"
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny."

llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in Boston?")
print(response.tool_calls)

Retrieval-Augmented Generation (RAG)

# Requires: pip install langchain-community faiss-cpu
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Initialize LLM with LLM Gateway
llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5"
)

# Initialize embeddings (can also use LLM Gateway)
embeddings = OpenAIEmbeddings(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key"
)

# Create vector store
texts = ["LangChain is a framework...", "LLM Gateway is..."]
vectorstore = FAISS.from_texts(texts, embeddings)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever()
)

result = qa_chain.invoke({"query": "What is LangChain?"})
print(result["result"])

Agents

LLM Gateway works with LangChain agents:

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    base_url="https://api.llmgateway.io/v1",
    api_key="your-llmgateway-api-key",
    model="gpt-5"
)

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

tools = [search]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke({"input": "Search for LangChain tutorials"})
print(result["output"])

Environment Variables

.env

OPENAI_API_BASE=https://api.llmgateway.io/v1
OPENAI_API_KEY=your-llmgateway-api-key

from langchain_openai import ChatOpenAI

# Automatically reads OPENAI_API_BASE and OPENAI_API_KEY
llm = ChatOpenAI(model="gpt-5")

response = llm.invoke("Hello!")
print(response.content)

Model Selection

# Use LLM Gateway's unified model names
llm = ChatOpenAI(model="gpt-5")  # Auto-routes to the best provider

# Pin a specific provider
llm = ChatOpenAI(model="openai/gpt-4o")
llm = ChatOpenAI(model="anthropic/claude-3-5-sonnet-20241022")

# Use automatic routing
llm = ChatOpenAI(model="auto")  # Selects the cheapest model

Caveats and Limitations

  • Configuration syntax: in JavaScript, the base URL must go inside a configuration object (configuration.baseURL); the API key may be passed at the top level
  • Model names: use LLM Gateway’s model naming scheme (e.g. gpt-5 or openai/gpt-4o)
  • Environment variables: in Python, use OPENAI_API_BASE; in JavaScript, use OPENAI_API_BASE or pass configuration.baseURL
  • Embeddings: LLM Gateway also supports embeddings (OpenAIEmbeddings) with the same configuration
