The Agent is the central component: it manages conversation flow, executes the reasoning loop, and coordinates tool calls.
Agent Type
From pkg/ai-sdk/agent/agent.go:17-42:
Fields
- MaxIterations: Maximum reasoning steps before stopping (default: 10)
- Tools: Tools available to the agent
- UserInputTools: Tools that require human intervention
- SystemPrompt: System instructions for the agent
- Model: The LLM provider (OpenAI, Anthropic, Gemini)
- Memory: Storage for conversation history
- ConversationHistory: Number of messages to retrieve from memory
- TotalUsage: Accumulated token usage across all steps
- FinishReason: Why the agent stopped (stop, length, tool_calls, etc.)
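The field list above can be sketched as a Go struct. The field types and stub interfaces below are assumptions inferred from the descriptions, not the exact definition in agent.go:

```go
package main

import "fmt"

// Hypothetical stand-ins for the SDK's interfaces; names are assumptions.
type Tool interface{ Name() string }
type Memory interface{}
type Model interface{}

// Agent mirrors the field list above; the real definition lives in
// pkg/ai-sdk/agent/agent.go:17-42.
type Agent struct {
	MaxIterations       int    // reasoning-step cap (default: 10)
	Tools               []Tool // tools available to the agent
	UserInputTools      []Tool // tools that require human intervention
	SystemPrompt        string // system instructions for the agent
	Model               Model  // LLM provider (OpenAI, Anthropic, Gemini)
	Memory              Memory // storage for conversation history
	ConversationHistory int    // number of messages to retrieve from memory
	TotalUsage          int    // accumulated token usage across all steps
	FinishReason        string // why the agent stopped
}

func main() {
	a := Agent{MaxIterations: 10, SystemPrompt: "You are a helpful assistant."}
	fmt.Println(a.MaxIterations) // prints 10
}
```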
Creating an Agent
From pkg/ai-sdk/agent/agent.go:59-90:
Options
From pkg/ai-sdk/agent/options.go:
WithModel
WithSystemPrompt
WithMaxIterations
WithTools
WithMemory
WithConversationHistoryLimit
WithCancelContext
WithHooks
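These options follow Go's functional-options pattern. A minimal sketch of how a constructor might apply them; the `Option` signature, `New` constructor, and field wiring are assumptions, only the option names come from options.go:

```go
package main

import "fmt"

// Minimal agent for illustration; the real struct is in agent.go.
type Agent struct {
	SystemPrompt  string
	MaxIterations int
}

// Option is the assumed functional-option signature.
type Option func(*Agent)

func WithSystemPrompt(p string) Option { return func(a *Agent) { a.SystemPrompt = p } }
func WithMaxIterations(n int) Option   { return func(a *Agent) { a.MaxIterations = n } }

// New applies options over defaults, as the constructor at
// agent.go:59-90 presumably does.
func New(opts ...Option) *Agent {
	a := &Agent{MaxIterations: 10} // documented default
	for _, opt := range opts {
		opt(a)
	}
	return a
}

func main() {
	a := New(WithSystemPrompt("You are a code reviewer."), WithMaxIterations(5))
	fmt.Println(a.MaxIterations) // prints 5
}
```

The pattern keeps the zero-value defaults in one place (`New`) while letting callers opt into only the settings they care about.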
Chat Methods
ChatSync (Synchronous)
From pkg/ai-sdk/agent/agent.go:196-223:
Chat (Streaming)
From pkg/ai-sdk/agent/agent.go:114-188:
ChatRequest
From pkg/ai-sdk/agent/agent.go:92-96:
- Prompt: User message to send
- SessionID: Identifier for conversation continuity
- ToolResults: Results from user input tools (for resuming interrupted conversations)
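A request carrying those three fields might look like this; the field types and the `ToolResult` shape are assumptions inferred from the descriptions above:

```go
package main

import "fmt"

// ToolResult is an assumed shape for a user-input tool's result.
type ToolResult struct {
	ToolCallID string
	Content    string
}

// ChatRequest mirrors the field list above (agent.go:92-96).
type ChatRequest struct {
	Prompt      string       // user message to send
	SessionID   string       // identifier for conversation continuity
	ToolResults []ToolResult // results from user input tools, when resuming
}

func main() {
	// A fresh turn on an existing session: no ToolResults needed.
	req := ChatRequest{
		Prompt:    "Summarize the last deployment logs.",
		SessionID: "session-42",
	}
	fmt.Println(req.SessionID) // prints session-42
}
```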
Step Type
From pkg/ai-sdk/agent/agent.go:259-269:
Hooks
From pkg/ai-sdk/agent/agent.go:44-57:
Stream Events
When using Chat(), the agent emits these events:
- StreamStartEvent: Stream begins
- AgentStepStartEvent: New reasoning step begins
- TextDeltaEvent: Incremental text content
- TextCompleteEvent: Complete text content
- ToolCallStartEvent: Tool call begins
- ToolCallDeltaEvent: Tool arguments streaming
- ToolCallCompleteEvent: Tool call complete
- ToolExecutionStartEvent: Tool execution begins
- ToolExecutionCompleteEvent: Tool execution complete
- UsageEvent: Token usage update
- FinishReasonEvent: Finish reason
- AgentStepCompleteEvent: Step complete
- StreamEndEvent: Stream ends
- StreamErrorEvent: Error occurred
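Consuming the stream usually means a type switch over incoming events. The channel shape and event payload fields below are assumptions; only the event names come from the list above:

```go
package main

import "fmt"

// A few of the events listed above, with assumed payloads.
type TextDeltaEvent struct{ Delta string }
type ToolExecutionStartEvent struct{ ToolName string }
type StreamErrorEvent struct{ Err error }
type StreamEndEvent struct{}

// consume drains the stream, accumulating text deltas until the stream ends.
func consume(events <-chan any) string {
	var out string
	for ev := range events {
		switch e := ev.(type) {
		case TextDeltaEvent:
			out += e.Delta // incremental text content
		case ToolExecutionStartEvent:
			fmt.Println("running tool:", e.ToolName)
		case StreamErrorEvent:
			fmt.Println("stream error:", e.Err)
		case StreamEndEvent:
			return out
		}
	}
	return out
}

func main() {
	ch := make(chan any, 3)
	ch <- TextDeltaEvent{"Hello, "}
	ch <- TextDeltaEvent{"world"}
	ch <- StreamEndEvent{}
	close(ch)
	fmt.Println(consume(ch)) // prints Hello, world
}
```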
Reasoning Loop
The agent executes a reasoning loop:
- Retrieve memory: Load conversation history from memory store
- Add user message: Append new user prompt to conversation
- Generate response: Call LLM with messages and tools
- Process tool calls: Execute any tools the LLM requested
- Add tool results: Append results to conversation
- Repeat: Continue until no more tool calls or max iterations
From pkg/ai-sdk/agent/agent.go:123-178, the loop continues while:
- Current step < MaxIterations
- The agent hasn't finished (the last step produced tool calls, or its finish reason requires another step)
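The loop structure can be sketched as follows; `stepResult` and the `generate` callback are stand-ins for the SDK's internals, not its actual API:

```go
package main

import "fmt"

// stepResult is a stand-in for one LLM turn.
type stepResult struct {
	Text      string
	ToolCalls []string // names of tools the model asked to run
}

// runLoop sketches the reasoning loop described above: generate a
// response, execute any requested tools, feed results back, and stop
// when there are no more tool calls or the iteration cap is reached.
func runLoop(maxIterations int, generate func(step int) stepResult) (text string, steps int) {
	for steps = 0; steps < maxIterations; steps++ {
		res := generate(steps)
		if len(res.ToolCalls) == 0 {
			return res.Text, steps + 1 // no pending tool calls: done
		}
		// Execute requested tools and append their results to the
		// conversation before the next iteration (elided here).
	}
	return "", steps // iteration cap reached; the SDK sets a finish reason
}

func main() {
	// Simulate two tool-calling steps followed by a final answer.
	text, steps := runLoop(10, func(step int) stepResult {
		if step < 2 {
			return stepResult{ToolCalls: []string{"search"}}
		}
		return stepResult{Text: "done"}
	})
	fmt.Println(text, steps) // prints done 3
}
```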
Finish Reasons
From pkg/ai-sdk/types/response.go:24-31:
- stop: Natural completion
- length: Max tokens reached
- tool_calls: Waiting for tool results
- content_filter: Content filtered by provider
- error: Error occurred
- human_intervention: User input required
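Callers typically branch on the finish reason after a chat completes. The string values come from the list above; the constants and switch are just an illustrative pattern, not SDK code:

```go
package main

import "fmt"

// Finish-reason values as listed in types/response.go:24-31.
const (
	FinishStop              = "stop"
	FinishLength            = "length"
	FinishToolCalls         = "tool_calls"
	FinishContentFilter     = "content_filter"
	FinishError             = "error"
	FinishHumanIntervention = "human_intervention"
)

// describe maps a finish reason to its documented meaning.
func describe(reason string) string {
	switch reason {
	case FinishStop:
		return "natural completion"
	case FinishLength:
		return "max tokens reached"
	case FinishToolCalls:
		return "waiting for tool results"
	case FinishContentFilter:
		return "content filtered by provider"
	case FinishError:
		return "error occurred"
	case FinishHumanIntervention:
		return "user input required"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(FinishHumanIntervention)) // prints user input required
}
```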
Human Intervention
Agents can pause for user input using a UserInputTool. When the agent reaches such a tool, it stops with the human_intervention finish reason; the caller gathers the user's answer and resumes the same session by sending ToolResults in the next request.
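The pause/resume round trip can be sketched like this. The `chat` function below is a stub standing in for the agent's chat method, and the pending tool-call ID handling is an assumption about how resumption is wired:

```go
package main

import "fmt"

type ToolResult struct {
	ToolCallID string
	Content    string
}

type ChatRequest struct {
	Prompt      string
	SessionID   string
	ToolResults []ToolResult
}

// chat is a stub for the agent's chat method: it pauses once for user
// input, then finishes once results are supplied.
func chat(req ChatRequest) (finishReason, pendingCallID string) {
	if len(req.ToolResults) == 0 {
		return "human_intervention", "call_1" // paused on a UserInputTool
	}
	return "stop", ""
}

func main() {
	req := ChatRequest{Prompt: "Book a flight", SessionID: "s1"}
	reason, callID := chat(req)
	if reason == "human_intervention" {
		// Collect the user's answer out-of-band, then resume the same session.
		req.ToolResults = []ToolResult{{ToolCallID: callID, Content: "Yes, book it"}}
		reason, _ = chat(req)
	}
	fmt.Println(reason) // prints stop
}
```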
Best Practices
- Set appropriate MaxIterations to prevent infinite loops
- Use hooks for logging and monitoring
- Handle errors from both agent creation and chat methods
- Stream for UX - use Chat() for responsive interfaces
- Sync for simplicity - use ChatSync() for scripts and batch processing
- Memory management - limit conversation history to relevant messages
- Tool design - keep tools focused and well-documented
