Overview
AgentOS provides automated migration from LangChain Python projects. The migration tool scans your Python files, detects LangChain patterns, and converts them to AgentOS configuration.
Detection: The migration scanner looks for LangChain imports and patterns like ChatOpenAI, LLMChain, AgentExecutor, Tool, and more.
What Gets Migrated
The LangChain migration tool detects and converts:
| LangChain Component | AgentOS Equivalent | Status |
|---|---|---|
| ChatOpenAI, ChatAnthropic | Agent with model config | ✅ Full |
| AgentExecutor, create_react_agent | Agent with ReAct loop | ✅ Full |
| Tool, StructuredTool | Integration or custom tool | ✅ Full |
| LLMChain, SequentialChain | Workflow | ⚠️ Partial |
| ConversationBufferMemory | Agent memory | ✅ Full |
| VectorStoreRetrieverMemory | Memory with embeddings | ⚠️ Partial |
| OpenAIEmbeddings | Embedding worker | ✅ Full |
Quick Migration
Scan Your Project
```bash
cd /path/to/langchain-project
agentos migrate scan
```
Output:

```json
{
  "frameworks": [
    {
      "framework": "langchain",
      "detected": true,
      "configPath": "./agent.py",
      "version": "0.1.0",
      "migratable": true
    }
  ]
}
```
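If you want to script against the scanner, the JSON above can be consumed directly. A minimal sketch using the field names from the sample output (it assumes the CLI prints this JSON to stdout, so you could pipe `agentos migrate scan` into it):

```python
import json

# Sample scan output, matching the shape shown above
scan_output = """
{
  "frameworks": [
    {
      "framework": "langchain",
      "detected": true,
      "configPath": "./agent.py",
      "version": "0.1.0",
      "migratable": true
    }
  ]
}
"""

def migratable_frameworks(raw: str) -> list[str]:
    """Return the names of frameworks the scanner marked as migratable."""
    data = json.loads(raw)
    return [
        f["framework"]
        for f in data.get("frameworks", [])
        if f.get("detected") and f.get("migratable")
    ]

print(migratable_frameworks(scan_output))  # ['langchain']
```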
Preview Migration
```bash
agentos migrate langchain --dry-run
```
Shows what will be created without making changes.
Execute Migration
```bash
agentos migrate langchain
```
Creates:
agents/*/agent.toml - Agent configurations
integrations/*.toml - Custom tools
workflows/*.toml - Chain definitions
data/migrations/langchain-{timestamp}.json - Migration report
Review & Test
```bash
# View migration report
agentos migrate report

# Test migrated agent
agentos agent list | grep langchain
agentos chat my_agent_llm_0
```
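The report itself is JSON, so the review step can also be scripted. A sketch that picks up the most recent langchain report from `data/migrations/`; the `summary` field names here are an assumption, borrowed from the result shape of the programmatic migration API shown later in this guide:

```python
import glob
import json
import os

def latest_report(migrations_dir: str = "data/migrations"):
    """Load the most recent langchain migration report, if any."""
    reports = sorted(glob.glob(os.path.join(migrations_dir, "langchain-*.json")))
    if not reports:
        return None
    with open(reports[-1]) as f:
        return json.load(f)

report = latest_report()
if report:
    # 'summary' keys are assumed -- check your actual report file
    summary = report.get("summary", {})
    print(f"migrated={summary.get('migrated')} "
          f"skipped={summary.get('skipped')} errors={summary.get('errors')}")
```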
Migration Examples
Example 1: Simple LLM Agent
Before (LangChain)
```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool

# Initialize LLM
llm = ChatOpenAI(
    model_name="gpt-4",
    temperature=0.7
)

# Define tools
tools = [
    Tool(
        name="web_search",
        func=lambda x: search_web(x),
        description="Search the web for information"
    )
]

# Create agent
agent = create_react_agent(llm, tools)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

# Run
result = executor.invoke({"input": "What's the weather?"})
```
After (AgentOS): agents/agent_llm_0/agent.toml
```toml
[agent]
name = "agent_llm_0"
description = "Migrated from LangChain (agent.py)"
module = "builtin:chat"

[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"  # Auto-mapped from gpt-4
max_tokens = 4096

[agent.capabilities]
tools = ["tool::*"]
memory_scopes = ["self.*", "shared.*"]
network_hosts = ["*"]

[agent.resources]
max_tokens_per_hour = 500000

system_prompt = """
Migrated llm from agent.py. Review and customize this prompt.
"""

tags = ["migrated", "langchain"]
```

```bash
# Usage
agentos message agent_llm_0 "What's the weather?"
```
Example 2: Agent with Structured Tools
Before (LangChain)
```python
import requests
from langchain.chat_models import ChatAnthropic
from langchain.agents import initialize_agent
from langchain.tools import StructuredTool

def fetch_data(url: str, max_results: int = 10):
    """Fetch data from a URL."""
    return requests.get(url).json()

def process_data(data: dict):
    """Process fetched data."""
    return {"processed": True, "count": len(data)}

llm = ChatAnthropic(model="claude-3-sonnet")

tools = [
    StructuredTool.from_function(
        func=fetch_data,
        name="fetch_data",
        description="Fetch data from a URL"
    ),
    StructuredTool.from_function(
        func=process_data,
        name="process_data",
        description="Process fetched data"
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True
)
Agent config: agents/tools_agent_llm_0/agent.toml
```toml
[agent]
name = "tools_agent_llm_0"
description = "Migrated from LangChain (tools_agent.py)"
module = "builtin:chat"

[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
max_tokens = 4096

[agent.capabilities]
tools = ["tool::*", "fetch_data", "process_data"]

system_prompt = """
Migrated llm from tools_agent.py. Review and customize this prompt.
"""

tags = ["migrated", "langchain"]
```
Tool integrations: integrations/fetch_data.toml
```toml
[integration]
id = "fetch_data"
name = "fetch_data"
description = "Migrated tool from LangChain (tools_agent.py)"
category = "migrated"
transport = "stdio"
command = "python"
args = ["-m", "tools_agent"]

[integration.env]

[integration.oauth]
enabled = false
```
integrations/process_data.toml
```toml
[integration]
id = "process_data"
name = "process_data"
description = "Migrated tool from LangChain (tools_agent.py)"
category = "migrated"
transport = "stdio"
command = "python"
args = ["-m", "tools_agent"]

[integration.env]

[integration.oauth]
enabled = false
```
Example 3: Chain as Workflow
Before (LangChain)
```python
from langchain.chains import LLMChain, SequentialChain
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# Chain 1: Generate outline
outline_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Create an outline for: {topic}"
)
outline_chain = LLMChain(llm=llm, prompt=outline_prompt, output_key="outline")

# Chain 2: Write content
content_prompt = PromptTemplate(
    input_variables=["outline"],
    template="Write content based on: {outline}"
)
content_chain = LLMChain(llm=llm, prompt=content_prompt, output_key="content")

# Sequential chain
full_chain = SequentialChain(
    chains=[outline_chain, content_chain],
    input_variables=["topic"],
    output_variables=["outline", "content"]
)

result = full_chain({"topic": "AI agents"})
```
After (AgentOS): workflows/chain_chain_0.toml
```toml
[workflow]
id = "chain_chain_0"
name = "chain_chain_0"
description = "Migrated from LangChain chain (chain.py)"

[[workflow.steps]]
id = "outline"
type = "llm"
prompt = "Create an outline for: {{input.topic}}"
output_key = "outline"

[[workflow.steps]]
id = "content"
type = "llm"
prompt = "Write content based on: {{steps.outline.output}}"
output_key = "content"
depends_on = ["outline"]

[workflow.config]
model = "claude-sonnet-4-6"
max_tokens = 4096
```

```bash
# Usage
agentos workflow run chain_chain_0 --input '{"topic": "AI agents"}'
```
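The `{{input.topic}}` and `{{steps.outline.output}}` placeholders follow a dotted-path convention: the first segment names a namespace (workflow input or a prior step), the rest walk into it. As an illustration only, not the engine's actual implementation, substitution over a nested context could look like:

```python
import re

def render(template: str, context: dict) -> str:
    """Resolve {{dotted.path}} placeholders against a nested context dict."""
    def lookup(match):
        value = context
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

context = {
    "input": {"topic": "AI agents"},
    "steps": {"outline": {"output": "1. Intro\n2. Tools"}},
}
print(render("Create an outline for: {{input.topic}}", context))
# Create an outline for: AI agents
```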
Example 4: Memory with Retrieval
Before (LangChain)
```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

llm = ChatOpenAI(model="gpt-4")
memory = ConversationBufferMemory()

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(
    collection_name="conversations",
    embedding_function=embeddings
)

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

response = conversation.predict(input="Hello, I'm learning about agents")
```
After (AgentOS): agents/memory_agent_llm_0/agent.toml
```toml
[agent]
name = "memory_agent_llm_0"
description = "Migrated from LangChain (memory_agent.py)"
module = "builtin:chat"

[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
max_tokens = 4096

[agent.capabilities]
tools = ["tool::*"]
memory_scopes = ["self.*", "shared.*"]  # Built-in memory with embeddings
network_hosts = ["*"]

system_prompt = """
Migrated llm from memory_agent.py. Review and customize this prompt.
"""

tags = ["migrated", "langchain", "memory"]
```
Memory is automatically handled by AgentOS:
memory::store - Store conversation turns
memory::recall - Retrieve relevant memories using embeddings
embedding::generate - Generate embeddings (SentenceTransformers)
Pattern Detection
The migration tool detects these LangChain patterns:
```
# LLM initialization
ChatOpenAI(...)                → agent with OpenAI model
ChatAnthropic(...)             → agent with Anthropic model
ChatGoogleGenerativeAI(...)    → agent with Google model
AzureChatOpenAI(...)           → agent with Azure OpenAI

# Agent creation
create_react_agent(...)        → agent config
create_openai_tools_agent(...) → agent config
AgentExecutor(...)             → agent config
initialize_agent(...)          → agent config

# Tools
Tool(...)                      → integration
StructuredTool(...)            → integration
@tool decorator                → integration

# Chains
LLMChain(...)                  → workflow step
SequentialChain(...)           → workflow
RouterChain(...)               → workflow with conditional
ConversationChain(...)         → agent with memory
RetrievalQA(...)               → agent with memory recall

# Memory
ConversationBufferMemory(...)  → agent memory
ConversationSummaryMemory(...) → agent memory
VectorStoreMemory(...)         → agent memory with embeddings
ChatMessageHistory(...)        → session storage

# Retrievers
VectorStoreRetriever(...)      → memory recall
SelfQueryRetriever(...)        → memory search

# Embeddings
OpenAIEmbeddings(...)          → embedding worker
HuggingFaceEmbeddings(...)     → embedding worker
CohereEmbeddings(...)          → embedding worker
```
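Conceptually, the scanner is a pattern matcher over your Python sources. A simplified sketch with an illustrative subset of the patterns above (the real tool's detection rules are more thorough and likely AST-based rather than regex-based):

```python
import re
from pathlib import Path

# A few of the patterns listed above, as regexes (illustrative subset)
LANGCHAIN_PATTERNS = [
    r"from\s+langchain[\w.]*\s+import",
    r"import\s+langchain",
    r"\bChatOpenAI\s*\(",
    r"\bAgentExecutor\s*\(",
    r"\bLLMChain\s*\(",
]

def detect_langchain(source: str) -> bool:
    """Return True if any LangChain pattern appears in the source text."""
    return any(re.search(p, source) for p in LANGCHAIN_PATTERNS)

def scan_project(root: str) -> list[Path]:
    """List Python files under root that look like LangChain code."""
    return [
        p for p in Path(root).rglob("*.py")
        if detect_langchain(p.read_text(errors="ignore"))
    ]

print(detect_langchain("from langchain.chat_models import ChatOpenAI"))  # True
```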
Model Mapping
LangChain models are mapped to AgentOS equivalents:
| LangChain Model | AgentOS Model |
|---|---|
| gpt-4, gpt-4o, gpt-4-turbo | claude-sonnet-4-6 |
| gpt-4o-mini, gpt-3.5-turbo | claude-haiku-3.5 |
| claude-3-opus-20240229 | claude-opus-4 |
| claude-3-sonnet-20240229 | claude-sonnet-4-6 |
| claude-3-haiku-20240307 | claude-haiku-3.5 |
| gemini-pro, gemini-1.5-pro | claude-sonnet-4-6 |
| llama-3-70b | llama-3.3-70b |
| mixtral-8x7b | mixtral-8x7b |
Tool Mapping
Common LangChain tools are mapped:
```
# Web tools
SerpAPIWrapper         → tool::web_search
GoogleSearchAPIWrapper → tool::web_search
DuckDuckGoSearchRun    → tool::web_search
BraveSearch            → tool::web_search
WikipediaQueryRun      → tool::web_fetch

# File tools
ReadFileTool           → tool::file_read
WriteFileTool          → tool::file_write
ListDirectoryTool      → tool::file_list

# Code tools
PythonREPLTool         → tool::shell_exec
BashTool               → tool::shell_exec

# Other
CalculatorTool         → tool::calculate
RequestsGetTool        → tool::web_fetch
```
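Both mappings are simple lookup tables. The sketch below is transcribed from the tables above; the fallback model for unmapped names is an assumption for illustration, not documented behavior, and the migration tool's internal tables are authoritative:

```python
# Transcribed from the model and tool mapping tables above (sketch only)
MODEL_MAP = {
    "gpt-4": "claude-sonnet-4-6",
    "gpt-4o": "claude-sonnet-4-6",
    "gpt-4-turbo": "claude-sonnet-4-6",
    "gpt-4o-mini": "claude-haiku-3.5",
    "gpt-3.5-turbo": "claude-haiku-3.5",
    "claude-3-opus-20240229": "claude-opus-4",
    "claude-3-sonnet-20240229": "claude-sonnet-4-6",
    "claude-3-haiku-20240307": "claude-haiku-3.5",
    "gemini-pro": "claude-sonnet-4-6",
    "gemini-1.5-pro": "claude-sonnet-4-6",
    "llama-3-70b": "llama-3.3-70b",
    "mixtral-8x7b": "mixtral-8x7b",
}

TOOL_MAP = {
    "SerpAPIWrapper": "tool::web_search",
    "DuckDuckGoSearchRun": "tool::web_search",
    "ReadFileTool": "tool::file_read",
    "PythonREPLTool": "tool::shell_exec",
    "RequestsGetTool": "tool::web_fetch",
}

def map_model(name: str, default: str = "claude-sonnet-4-6") -> str:
    """Map a LangChain model name to its AgentOS equivalent (assumed default)."""
    return MODEL_MAP.get(name, default)

print(map_model("gpt-4"))  # claude-sonnet-4-6
```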
Post-Migration Steps
Install Dependencies
Ensure AgentOS is running:

```bash
# Start iii-engine
iii --config config.yaml &

# Start workers
agentos start

# Or manually
cargo run --release -p agentos-core &
npx tsx src/agent-core.ts &
npx tsx src/tools.ts &
python workers/embedding/main.py &
```
Review System Prompts
Migrated agents have generic system prompts. Customize them:

```bash
# List migrated agents
ls agents/ | grep langchain

# Edit system prompts
for agent in agents/*langchain*/agent.toml; do
  echo "Reviewing: $agent"
  vim "$agent"
done
```
Implement Custom Tools
If you have custom LangChain tools, implement them as AgentOS functions:

```typescript
import { init } from "iii-sdk";

const { registerFunction } = init("ws://localhost:49134", {
  workerName: "my-tools"
});

registerFunction(
  { id: "fetch_data", description: "Fetch data from URL" },
  async ({ url, max_results }: any) => {
    // Your implementation
    const response = await fetch(url);
    return await response.json();
  }
);
```

```bash
# Run the worker
npx tsx src/my-tools.ts &
```
Test Agents
Verify each migrated agent works:

```bash
# List agents
agentos agent list | grep langchain

# Test agent
agentos message my_agent_llm_0 "Test message"

# Interactive chat
agentos chat my_agent_llm_0
```
Migrate Data
If you have conversation history in LangChain:

```python
import json
from langchain.memory import FileChatMessageHistory

# Load LangChain history
history = FileChatMessageHistory("langchain_history.json")
messages = history.messages

# Convert to AgentOS format
agentos_history = {
    "id": "migrated-session-1",
    "agent": "my_agent_llm_0",
    "history": [
        {"role": m.type, "content": m.content}
        for m in messages
    ],
    "created": "2025-03-01T00:00:00Z",
    "migrated": "2025-03-09T15:00:00Z",
    "source": "langchain"
}

# Save
with open("data/sessions/migrated-session-1.json", "w") as f:
    json.dump(agentos_history, f, indent=2)
```
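One caveat worth checking: LangChain message `.type` values are typically "human" and "ai", while chat-history formats often expect "user" and "assistant". If your AgentOS session files use the latter (an assumption here; verify against an existing file in data/sessions/), normalize the roles before writing:

```python
# Assumed target roles -- verify against an existing AgentOS session file
ROLE_MAP = {"human": "user", "ai": "assistant", "system": "system"}

def to_agentos_turn(message_type: str, content: str) -> dict:
    """Convert one LangChain message into an AgentOS history entry."""
    return {"role": ROLE_MAP.get(message_type, message_type), "content": content}

print(to_agentos_turn("human", "Hello"))  # {'role': 'user', 'content': 'Hello'}
```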
Advanced Migration
Custom Config Path
```bash
agentos migrate langchain --config-dir /path/to/project
```
Skip Specific Patterns
Edit migration output manually:
```bash
# Dry run first
agentos migrate langchain --dry-run > migration-plan.json

# Review and edit
vim migration-plan.json

# Apply manually
# (Create agents/tools based on edited plan)
```
Programmatic Migration
```typescript
import { init } from "iii-sdk";

const { trigger } = init("ws://localhost:49134", { workerName: "migrator" });

const result = await trigger("migrate::langchain", {
  dryRun: false,
  configDir: "/path/to/langchain/project"
}, 300_000); // 5 minute timeout

console.log(`Migrated ${result.summary.migrated} items`);
console.log(`Skipped ${result.summary.skipped} items`);
console.log(`Errors: ${result.summary.errors}`);
```
Common Issues
LangChain not detected
If the migration scan doesn't find LangChain:

```bash
# Check for LangChain installation
pip show langchain

# Look for Python files with LangChain imports
grep -r "from langchain" .
grep -r "import langchain" .

# Specify directory explicitly
agentos migrate langchain --config-dir /path/to/project
```
Chain migration incomplete
LangChain chains may need manual workflow creation:

```bash
# Check what was migrated
ls workflows/

# Create workflow manually if needed
vim workflows/my-workflow.toml

# Test workflow
agentos workflow run my-workflow --input '{"key": "value"}'
```
Embedding worker not running
Memory recall depends on the embedding worker. Ensure it is running:

```bash
# Check if the embedding worker is running
ps aux | grep "python.*embedding"

# Start the embedding worker
python workers/embedding/main.py &

# Verify it's registered
curl http://localhost:3111/functions | jq '.[] | select(.id | startswith("embedding::"))'
```
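The same registry check can be done from Python. A sketch built on the endpoint shown in the curl command above; the filtering logic is separated out so it can be exercised without a running engine:

```python
import json
import urllib.request

def embedding_functions(functions: list[dict]) -> list[str]:
    """Filter a registry listing down to embedding:: function IDs."""
    return [f["id"] for f in functions if f.get("id", "").startswith("embedding::")]

def check_registry(url: str = "http://localhost:3111/functions") -> list[str]:
    """Fetch the live function list and return registered embedding functions."""
    with urllib.request.urlopen(url) as resp:
        return embedding_functions(json.load(resp))
```

If `check_registry()` returns an empty list, the worker started but never registered; check its logs.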
Next Steps
- Creating Agents: Customize your migrated agents
- Creating Tools: Build custom tools for your agents
- Testing: Test your migrated setup
- Migration Overview: General migration guide