Architecture Overview
nanobot is an ultra-lightweight AI agent framework that delivers core agent functionality with just ~4,000 lines of code. Its architecture is designed for simplicity, extensibility, and research-readiness.

System Architecture
The architecture follows a clean event-driven design with four main layers:

Core Components
Message Bus
Decouples channels from the agent loop using async queues for inbound and outbound messages.
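The decoupling idea can be sketched with two asyncio queues, one per direction. The class and message names below are illustrative stand-ins, not nanobot's actual API:

```python
# Minimal sketch of a message bus: two async queues decouple channels
# from the agent loop. Names here are assumptions for illustration.
import asyncio
from dataclasses import dataclass

@dataclass
class InboundMessage:
    channel: str
    sender: str
    text: str

@dataclass
class OutboundMessage:
    channel: str
    recipient: str
    text: str

class MessageBus:
    def __init__(self) -> None:
        self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()
        self.outbound: asyncio.Queue[OutboundMessage] = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage) -> None:
        await self.inbound.put(msg)

    async def publish_outbound(self, msg: OutboundMessage) -> None:
        await self.outbound.put(msg)
```

Because producers only touch the queues, a channel never blocks on the agent loop and vice versa.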
Agent Loop
The heart of nanobot — processes messages through context building, LLM calls, and tool execution.
Tool Registry
Dynamic tool management system that allows registration and execution of agent capabilities.
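A registry of this kind can be sketched as a name-to-callable map with a lookup-and-execute step. This is a hypothetical shape, not nanobot's actual `ToolRegistry` interface:

```python
# Illustrative tool registry: register named async tools, then execute
# a tool call by name with keyword parameters.
import asyncio
from typing import Any, Awaitable, Callable

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Awaitable[Any]]] = {}

    def register(self, name: str, fn: Callable[..., Awaitable[Any]]) -> None:
        self._tools[name] = fn

    async def execute(self, name: str, **params: Any) -> Any:
        # Fail loudly on unknown tool names instead of calling into nothing.
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return await self._tools[name](**params)

async def add(a: int, b: int) -> int:
    return a + b
```

Registration at startup plus name-based dispatch is what lets new capabilities be added without touching the agent loop itself.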
Context Builder
Assembles system prompts from identity, memory, skills, and runtime context.
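The assembly step amounts to concatenating the prompt sections in a fixed order. The structure below is assumed for illustration and is not nanobot's exact prompt format:

```python
# Sketch of a context builder that joins identity, memory, skills, and
# runtime metadata into one system prompt.
from datetime import datetime, timezone

class ContextBuilder:
    def __init__(self, identity: str, memory: list[str], skills: list[str]) -> None:
        self.identity = identity
        self.memory = memory
        self.skills = skills

    def build_system_prompt(self) -> str:
        parts = [self.identity]
        if self.memory:
            parts.append("Memory:\n" + "\n".join(f"- {m}" for m in self.memory))
        if self.skills:
            parts.append("Skills:\n" + "\n".join(f"- {s}" for s in self.skills))
        # Runtime metadata, e.g. the current time, goes at the end.
        parts.append(f"Current time (UTC): {datetime.now(timezone.utc):%Y-%m-%d %H:%M}")
        return "\n\n".join(parts)
```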
Implementation Details
AgentLoop Class
The AgentLoop class in nanobot/agent/loop.py is the core processing engine:
Message Processing
Consumes messages from the message bus and dispatches them as async tasks to stay responsive to /stop commands.

Context Building

Uses ContextBuilder to assemble system prompts with identity, memory, skills, and runtime metadata.

LLM Interaction
Calls the configured LLM provider with context and available tools, handling streaming and progress updates.
Tool Execution
Executes tool calls returned by the LLM through the ToolRegistry, with parameter validation and error handling.

Message Flow
Detailed Message Flow
- Channel Layer receives input (e.g., user message in Telegram)
- Message Bus enqueues an InboundMessage
- Agent Loop dequeues the message and:
  - Loads session history from SessionManager
  - Builds context with ContextBuilder (system prompt + history + current message)
  - Enters the agent iteration loop:
    - Calls LLM via LLMProvider
    - If tool calls are returned:
      - Executes tools via ToolRegistry
      - Adds tool results to messages
      - Continues iteration
    - If a text response is returned:
      - Breaks the loop with the final content
  - Saves updated session history
  - Publishes OutboundMessage to the bus
- Channel Layer receives the outbound message and sends it to the user
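The iteration loop at the center of this flow can be sketched as follows. The provider and registry interfaces (`provider.chat`, `registry.execute`) are assumptions for illustration, not nanobot's actual signatures:

```python
# Sketch of the agent iteration loop: call the LLM, execute any returned
# tool calls, feed the results back, and stop on a plain text response.
import asyncio

async def run_agent_loop(provider, registry, messages, max_iterations=10):
    for _ in range(max_iterations):
        reply = await provider.chat(messages)  # assumed provider interface
        if reply.get("tool_calls"):
            for call in reply["tool_calls"]:
                result = await registry.execute(call["name"], **call["args"])
                # Tool results go back into the conversation for the next call.
                messages.append({"role": "tool", "name": call["name"],
                                 "content": str(result)})
            continue  # iterate with the tool results included
        return reply["content"]  # final text response ends the loop
    raise RuntimeError("agent loop exceeded max iterations")
```

Capping the number of iterations is a common safeguard against an LLM that keeps requesting tools indefinitely.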
Design Principles
Separation of Concerns: Each component has a single, well-defined responsibility.
Async-First: All I/O operations are asynchronous for better concurrency and responsiveness.
Provider Abstraction: LLM providers implement a common interface, making it easy to add new models.
Tool Extensibility: Tools inherit from a base Tool class with automatic parameter validation.

File Structure
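A partial layout can be inferred from the paths cited on this page; the tree below is a sketch of those files only, and the actual repository layout may differ:

```
nanobot/
├── agent/
│   └── loop.py        # AgentLoop
├── channels/
│   └── base.py        # BaseChannel
├── providers/
│   └── base.py        # LLMProvider
└── ...
skills/                # markdown skill files
```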
Performance Characteristics
Startup time: < 1 second (cold start)
Memory footprint: ~50-100 MB base (excluding LLM provider SDKs)
Message latency: ~100-500 ms (excluding LLM API time)
Concurrent sessions: limited only by system resources (async design)
Extensibility Points
The architecture provides clear extension points:

- New channels: inherit from BaseChannel (see nanobot/channels/base.py:1)
- New tools: inherit from Tool (see Tools)
- New providers: inherit from LLMProvider (see nanobot/providers/base.py:1)
- New skills: add markdown files to the skills/ directory (see Skills)
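The "inherit from a base class" pattern for tools can be sketched as below. The base class shown is a stand-in for illustration, not nanobot's actual Tool definition:

```python
# Hypothetical base Tool class and a concrete subclass, mirroring the
# extension pattern described above.
from abc import ABC, abstractmethod
from typing import Any

class Tool(ABC):
    name: str
    description: str

    @abstractmethod
    async def run(self, **params: Any) -> str:
        ...

class WordCountTool(Tool):
    name = "word_count"
    description = "Count the words in a piece of text."

    async def run(self, text: str = "", **params: Any) -> str:
        # Tools return strings so results can be appended to the message list.
        return str(len(text.split()))
```

A registry would then expose `name` and `description` to the LLM as the tool schema, while `run` carries the implementation.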
Related Concepts
Agent Loop
Deep dive into the iteration logic
Tools
How tools work and how to create them
Memory
Understanding the memory system