Architecture Overview

nanobot is an ultra-lightweight AI agent framework that delivers core agent functionality with just ~4,000 lines of code. Its architecture is designed for simplicity, extensibility, and research-readiness.

System Architecture

The architecture follows a clean event-driven design with four main layers:
┌─────────────────────────────────────────────────────────────┐
│                    Channel Layer                            │
│  (Telegram, Discord, Slack, CLI, Email, etc.)              │
└────────────────┬────────────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────────────┐
│                    Message Bus                              │
│  (Async queue: InboundMessage → OutboundMessage)           │
└────────────────┬────────────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────────────┐
│                    Agent Loop                               │
│  Context → LLM → Tools → Response                          │
└─────┬──────────────────────────────────┬────────────────────┘
      │                                  │
      ▼                                  ▼
┌──────────────┐                  ┌──────────────────┐
│ Tool Registry│                  │ Session Manager  │
│ Memory Store │                  │ Subagent Manager │
│ Skills Loader│                  │ Provider         │
└──────────────┘                  └──────────────────┘

Core Components

Message Bus

Decouples channels from the agent loop using async queues for inbound and outbound messages.
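A minimal sketch of such a bus, assuming simple dataclass message types and two asyncio queues (the actual fields in bus/events.py and the API in bus/queue.py may differ):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class InboundMessage:
    """A message arriving from a channel (fields are illustrative)."""
    channel: str
    sender: str
    content: str


@dataclass
class OutboundMessage:
    """A response to be delivered back through a channel."""
    channel: str
    recipient: str
    content: str


class MessageBus:
    """Decouples channels from the agent loop via two async queues."""

    def __init__(self) -> None:
        self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()
        self.outbound: asyncio.Queue[OutboundMessage] = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage) -> None:
        await self.inbound.put(msg)

    async def publish_outbound(self, msg: OutboundMessage) -> None:
        await self.outbound.put(msg)
```

Because producers and consumers only share the queues, a channel never needs to know which agent (if any) is consuming its messages.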

Agent Loop

The heart of nanobot — processes messages through context building, LLM calls, and tool execution.

Tool Registry

A dynamic registry through which agent capabilities are registered at startup and executed by name during the agent loop.
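In its simplest form, such a registry is a name-to-callable map with an async dispatch method. This is a hedged sketch, not the real registry in agent/tools/registry.py:

```python
import asyncio
from typing import Any, Awaitable, Callable


class ToolRegistry:
    """Sketch: maps tool names to async callables."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Awaitable[Any]]] = {}

    def register(self, name: str, fn: Callable[..., Awaitable[Any]]) -> None:
        self._tools[name] = fn

    async def execute(self, name: str, **params: Any) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return await self._tools[name](**params)


async def shout(text: str) -> str:
    """Toy tool for illustration only."""
    return text.upper()


registry = ToolRegistry()
registry.register("shout", shout)
```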

Context Builder

Assembles system prompts from identity, memory, skills, and runtime context.
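Conceptually, the builder concatenates labeled sections into one system prompt. A minimal illustration, with invented field names and section labels (the real ContextBuilder in agent/context.py also folds in runtime context):

```python
class ContextBuilder:
    """Sketch: joins identity, memory, and skills into one system prompt."""

    def __init__(self, identity: str, memory: list[str], skills: list[str]) -> None:
        self.identity = identity
        self.memory = memory
        self.skills = skills

    def build_system_prompt(self) -> str:
        parts = [self.identity]
        if self.memory:
            parts.append("Memory:\n" + "\n".join(f"- {m}" for m in self.memory))
        if self.skills:
            parts.append("Skills:\n" + "\n".join(f"- {s}" for s in self.skills))
        return "\n\n".join(parts)


prompt = ContextBuilder(
    identity="You are nanobot, a helpful assistant.",
    memory=["User prefers concise answers."],
    skills=["web-search"],
).build_system_prompt()
```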

Implementation Details

AgentLoop Class

The AgentLoop class in nanobot/agent/loop.py is the core processing engine:
class AgentLoop:
    """
    The agent loop is the core processing engine.

    It:
    1. Receives messages from the bus
    2. Builds context with history, memory, skills
    3. Calls the LLM
    4. Executes tool calls
    5. Sends responses back
    """
Key responsibilities:

1. Message Processing: Consumes messages from the message bus and dispatches them as async tasks to stay responsive to /stop commands.
2. Context Building: Uses ContextBuilder to assemble system prompts with identity, memory, skills, and runtime metadata.
3. LLM Interaction: Calls the configured LLM provider with context and available tools, handling streaming and progress updates.
4. Tool Execution: Executes tool calls returned by the LLM through the ToolRegistry, with parameter validation and error handling.
5. Session Management: Saves conversation history and triggers memory consolidation when thresholds are reached.

Message Flow

  1. Channel Layer receives input (e.g., user message in Telegram)
  2. Message Bus enqueues an InboundMessage
  3. Agent Loop dequeues the message and:
    • Loads session history from SessionManager
    • Builds context with ContextBuilder (system prompt + history + current message)
    • Enters the agent iteration loop:
      • Calls LLM via LLMProvider
      • If tool calls are returned:
        • Executes tools via ToolRegistry
        • Adds tool results to messages
        • Continues iteration
      • If text response is returned:
        • Breaks loop with final content
    • Saves updated session history
    • Publishes OutboundMessage to the bus
  4. Channel Layer receives outbound message and sends to user
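The inner iteration loop of step 3 can be sketched as below. This is a synchronous simplification (the real loop is async), and call_llm, execute_tool, and the message-dict shape are hypothetical stand-ins for the provider and registry:

```python
from typing import Any, Callable


def agent_iteration(
    messages: list[dict[str, Any]],
    call_llm: Callable[[list[dict[str, Any]]], dict[str, Any]],
    execute_tool: Callable[[str, dict[str, Any]], str],
    max_iterations: int = 10,
) -> str:
    """Call the LLM; run any tool calls it returns; stop on a plain text reply."""
    for _ in range(max_iterations):
        reply = call_llm(messages)
        tool_calls = reply.get("tool_calls")
        if not tool_calls:
            return reply["content"]  # text response breaks the loop
        messages.append(reply)
        for call in tool_calls:
            # Feed each tool result back so the next LLM call can see it.
            result = execute_tool(call["name"], call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("agent loop exceeded max_iterations")
```

The max_iterations bound guards against a model that requests tools indefinitely.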

Design Principles

Separation of Concerns: Each component has a single, well-defined responsibility.
Async-First: All I/O operations are asynchronous for better concurrency and responsiveness.
Provider Abstraction: LLM providers implement a common interface, making it easy to add new models.
Tool Extensibility: Tools inherit from a base Tool class with automatic parameter validation.
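One way "automatic parameter validation" can work is by inspecting the subclass's run() signature; this is a sketch under that assumption, and the real base class in agent/tools/base.py likely differs:

```python
import inspect
from typing import Any


class Tool:
    """Hypothetical base class; subclasses implement run()."""

    name: str = "tool"

    async def run(self, **params: Any) -> str:
        raise NotImplementedError

    def validate(self, params: dict[str, Any]) -> None:
        """Check required parameters against run()'s signature."""
        sig = inspect.signature(self.run)
        for pname, p in sig.parameters.items():
            if p.kind in (p.VAR_POSITIONAL, p.VAR_KEYWORD):
                continue
            if p.default is inspect.Parameter.empty and pname not in params:
                raise ValueError(f"{self.name}: missing required parameter '{pname}'")


class ReadFile(Tool):
    """Example subclass with one required parameter (illustrative only)."""

    name = "read_file"

    async def run(self, *, path: str) -> str:
        return f"contents of {path}"
```

With this design, a new tool only declares its parameters in run(); the base class rejects malformed calls before execution.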

File Structure

nanobot/
├── agent/
│   ├── loop.py           # Core agent loop
│   ├── context.py        # Context building
│   ├── memory.py         # Memory system
│   ├── skills.py         # Skills loader
│   ├── subagent.py       # Background task execution
│   └── tools/            # Tool implementations
│       ├── base.py       # Tool base class
│       ├── registry.py   # Tool registry
│       ├── filesystem.py # File operations
│       ├── shell.py      # Command execution
│       ├── web.py        # Web search/fetch
│       ├── mcp.py        # MCP integration
│       └── ...
├── bus/
│   ├── events.py         # Message types
│   └── queue.py          # Message bus
├── channels/
│   ├── base.py           # Channel base class
│   ├── telegram.py       # Telegram integration
│   ├── discord.py        # Discord integration
│   └── ...               # Other channels
├── providers/
│   ├── base.py           # Provider interface
│   ├── litellm_provider.py
│   ├── openai_codex_provider.py
│   └── ...
├── session/
│   └── manager.py        # Session persistence
└── config/
    ├── schema.py         # Configuration schema
    └── loader.py         # Configuration loading

Performance Characteristics

Startup time: < 1 second (cold start)
Memory footprint: ~50-100 MB base (excluding LLM provider SDKs)
Message latency: ~100-500 ms (excluding LLM API time)
Concurrent sessions: Limited only by system resources (async design)

Extensibility Points

The architecture provides clear extension points:

Agent Loop: Deep dive into the iteration logic
Tools: How tools work and how to create them
Memory: Understanding the memory system
