You can use LangSmith to trace any application, not just applications built with LangChain. The @traceable decorator (Python) and the traceable wrapper (TypeScript) make it easy to instrument your code.

Basic usage

The @traceable decorator automatically captures inputs, outputs, and execution time:
from langsmith import traceable

@traceable
def generate_response(query: str) -> str:
    # Your application logic
    response = f"Answer to: {query}"
    return response

result = generate_response("What is machine learning?")
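Conceptually, a tracing decorator records the call's inputs, outputs, and timing around the wrapped function, then ships that record to the tracing backend. A minimal stand-in sketch of that idea (stdlib only; this is illustrative, not the real SDK's implementation):

```python
import functools
import time

def mini_traceable(func):
    """Toy stand-in for @traceable: records inputs, outputs, and duration."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        # In the real SDK this record would be sent to LangSmith;
        # here we just stash it on the wrapper for inspection.
        wrapper.last_run = {
            "name": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "outputs": result,
            "duration_s": time.perf_counter() - start,
        }
        return result
    wrapper.last_run = None
    return wrapper

@mini_traceable
def generate_response(query: str) -> str:
    return f"Answer to: {query}"

print(generate_response("What is machine learning?"))
print(generate_response.last_run["name"])  # generate_response
```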

Async functions

Both sync and async functions are supported:
import asyncio
from langsmith import traceable

@traceable
async def fetch_data(url: str) -> dict:
    # Simulate async API call
    await asyncio.sleep(0.1)
    return {"data": f"Content from {url}"}

async def main():
    result = await fetch_data("https://api.example.com")
    print(result)

asyncio.run(main())
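Supporting sync and async functions from a single decorator generally comes down to checking whether the wrapped function is a coroutine function and returning a matching wrapper. A rough sketch of that dispatch (stdlib only; the real SDK is more involved):

```python
import asyncio
import functools
import inspect
import time

def timed(func):
    """Wrap sync and async functions alike, recording call duration."""
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = await func(*args, **kwargs)
            async_wrapper.duration_s = time.perf_counter() - start
            return result
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        sync_wrapper.duration_s = time.perf_counter() - start
        return result
    return sync_wrapper

@timed
async def fetch_data(url: str) -> dict:
    await asyncio.sleep(0.01)
    return {"data": f"Content from {url}"}

print(asyncio.run(fetch_data("https://api.example.com")))
```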

Nesting traces

Create hierarchical traces by calling traceable functions from within other traceable functions:
from langsmith import traceable

@traceable
def retrieve_documents(query: str) -> list[str]:
    # Simulate document retrieval
    return ["doc1", "doc2", "doc3"]

@traceable
def generate_answer(query: str, documents: list[str]) -> str:
    # Simulate answer generation
    return f"Answer based on {len(documents)} documents"

@traceable(name="rag_pipeline")
def rag_pipeline(query: str) -> str:
    # Nested traces: retrieve -> generate
    docs = retrieve_documents(query)
    answer = generate_answer(query, docs)
    return answer

result = rag_pipeline("What is RAG?")
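Under the hood, nesting like this is typically implemented with context-local state: each call looks up the currently running span, attaches itself as a child, and becomes the current span for anything it calls. A minimal sketch using contextvars (illustrative only, not the SDK's internals):

```python
import contextvars

_current_run = contextvars.ContextVar("current_run", default=None)

def traced(func):
    def wrapper(*args, **kwargs):
        run = {"name": func.__name__, "children": []}
        parent = _current_run.get()
        if parent is not None:
            parent["children"].append(run)   # nest under the enclosing run
        else:
            wrapper.root = run               # top-level call becomes the root
        token = _current_run.set(run)
        try:
            return func(*args, **kwargs)
        finally:
            _current_run.reset(token)
    return wrapper

@traced
def retrieve(query):
    return ["doc1", "doc2"]

@traced
def generate(query, docs):
    return f"Answer based on {len(docs)} documents"

@traced
def pipeline(query):
    return generate(query, retrieve(query))

pipeline("What is RAG?")
print(pipeline.root["name"])                           # pipeline
print([c["name"] for c in pipeline.root["children"]])  # ['retrieve', 'generate']
```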

Adding metadata and tags

Enrich your traces with metadata and tags for better filtering and organization:
from langsmith import traceable

@traceable(
    tags=["production", "api-v2"],
    metadata={"version": "2.0.0", "model": "gpt-4"}
)
def process_request(request: dict) -> dict:
    # Your logic here
    return {"status": "success", "result": request}

result = process_request({"query": "test"})

Run types

Specify the type of operation for better categorization:
from langsmith import traceable

@traceable(run_type="llm")
def call_llm(prompt: str) -> str:
    # LLM call
    return "LLM response"

@traceable(run_type="chain")
def orchestrate(prompt: str) -> str:
    # Orchestration logic
    return call_llm(prompt)

@traceable(run_type="tool")
def search_database(query: str) -> list:
    # Tool/function call
    return ["result1", "result2"]

@traceable(run_type="retriever")
def retrieve(query: str) -> list:
    # Document retrieval
    return search_database(query)

Available run types:
  • llm - Language model calls
  • chain - Sequences/orchestration
  • tool - Function/tool calls
  • retriever - Document retrieval
  • embedding - Embedding generation
  • prompt - Prompt formatting

Generators and streaming

Trace streaming/generator functions:
from langsmith import traceable

@traceable
def stream_tokens(text: str):
    """Stream text word by word."""
    for word in text.split():
        yield word

# Use the generator
for token in stream_tokens("Hello world from LangSmith"):
    print(token, end=" ")
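To trace a generator, a wrapper can consume and re-yield each item, collecting the stream so the complete output can be attached to the trace once the generator is exhausted. A stdlib-only sketch of the idea (not the SDK's implementation):

```python
import functools

def traced_stream(func):
    """Re-yield items from a generator while collecting them for the trace."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        collected = []
        for item in func(*args, **kwargs):
            collected.append(item)
            yield item
        # Recorded only once the stream has been fully consumed
        wrapper.last_output = collected
    wrapper.last_output = None
    return wrapper

@traced_stream
def stream_tokens(text: str):
    for word in text.split():
        yield word

print(list(stream_tokens("Hello world")))  # ['Hello', 'world']
print(stream_tokens.last_output)           # ['Hello', 'world']
```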

Setting project name

Specify which project to log traces to:
import os
from langsmith import traceable

# Via environment variable (recommended)
os.environ["LANGSMITH_PROJECT"] = "my-project"

@traceable
def my_function(text: str) -> str:
    return f"Processed: {text}"

# Or via decorator parameter
@traceable(project_name="my-other-project")
def another_function(text: str) -> str:
    return f"Processed: {text}"

Error handling

Errors are automatically captured in traces:
from langsmith import traceable

@traceable
def process_data(data: dict) -> dict:
    if not data:
        raise ValueError("Data cannot be empty")
    return {"processed": data}

try:
    result = process_data({})
except ValueError as e:
    # The error will still be logged in the LangSmith trace
    print(f"Error: {e}")
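Exception capture works the same way as input/output capture: the wrapper records the error on the run, then re-raises it so your own error handling still fires. A minimal sketch (stdlib only, not the SDK's internals):

```python
import functools

def traced_errors(func):
    """Record any exception on the run record, then re-raise it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.last_error = None
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            wrapper.last_error = repr(exc)  # recorded before re-raising
            raise
    wrapper.last_error = None
    return wrapper

@traced_errors
def process_data(data: dict) -> dict:
    if not data:
        raise ValueError("Data cannot be empty")
    return {"processed": data}

try:
    process_data({})
except ValueError as e:
    print(f"Error: {e}")        # Error: Data cannot be empty
print(process_data.last_error)  # ValueError('Data cannot be empty')
```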
