System Overview
SSV Node implements a multi-layered architecture that coordinates distributed validator operations across multiple operators. The system combines Byzantine fault-tolerant consensus, threshold cryptography, and peer-to-peer networking to enable secure, decentralized Ethereum validator operation. The architecture follows event-driven and message-queue patterns, enabling concurrent processing of multiple validator duties across different roles.
Core Components
1. Operator Node (operator/)
The Operator Node is the central orchestrator that manages the complete validator lifecycle. Located in operator/node.go, it coordinates all interactions between the blockchain, network, and protocol layers.
Key Responsibilities:
- Receives validator duties from the Beacon Chain
- Schedules duty execution based on slot timing
- Manages validator registration and deregistration
- Coordinates with other operators through the P2P network
- Handles fee recipient management
Operator Node Structure (operator/node.go)
2. SSV Protocol (protocol/v2/)
The SSV Protocol layer implements the core distributed validator protocol, including consensus and duty execution. This is where the magic of distributed validation happens.
QBFT Consensus (protocol/v2/qbft/)
SSV implements Istanbul BFT (QBFT), a Byzantine fault-tolerant consensus algorithm that ensures operators agree on validator duties before signing.
Consensus Flow:
- Proposal Phase: The round leader broadcasts a proposed value (the duty data to sign)
- Prepare Phase: Operators exchange prepare messages for the proposal
- Commit Phase: Upon receiving a quorum of 2f+1 prepares, operators send commits
- Decided: With 2f+1 commits, consensus is reached
- Round Change: If a timeout occurs, operators move to the next round with a new leader
QBFT can tolerate up to ⌊(n-1)/3⌋ Byzantine (malicious or faulty) operators. For 4 operators, 1 fault is tolerated; for 7 operators, 2 faults are tolerated.
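The fault-tolerance arithmetic above can be sketched in Go (the helper names here are illustrative, not the actual SSV API):

```go
package main

import "fmt"

// FaultTolerance returns the maximum number of Byzantine operators f
// that a committee of n operators can tolerate: f = (n-1)/3.
func FaultTolerance(n int) int {
	return (n - 1) / 3
}

// QuorumSize returns the number of matching messages (2f+1) required
// to advance a QBFT phase.
func QuorumSize(n int) int {
	return 2*FaultTolerance(n) + 1
}

func main() {
	for _, n := range []int{4, 7, 13} {
		fmt.Printf("operators=%d faults tolerated=%d quorum=%d\n",
			n, FaultTolerance(n), QuorumSize(n))
	}
}
```

For 4 operators this yields a quorum of 3, matching the share thresholds discussed under Security Architecture.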
Consensus Controller (protocol/v2/qbft/controller/controller.go)
Runners (protocol/v2/ssv/runner/)
Runners execute specific validator duties through a common interface. Each duty type has a dedicated runner implementation:
- Attestation Runner: Handles epoch attestations
- Proposal Runner: Manages block proposals
- Sync Committee Runner: Executes sync committee duties
- Aggregator Runner: Aggregates attestations from other validators
Runner Interface (protocol/v2/ssv/runner/runner.go)
During execution, each runner drives a QBFT consensus instance so that operators reach Byzantine fault-tolerant agreement on the data to sign before signatures are aggregated.
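A minimal sketch of the common runner interface, assuming simplified Duty and Runner types (the real types in runner.go are considerably richer):

```go
package main

import "fmt"

// Duty carries the minimum information a runner needs; a hypothetical
// simplification of the real protocol types.
type Duty struct {
	Slot uint64
	Role string
}

// Runner is an illustrative version of the common duty-execution
// interface: every duty type starts from a duty, then processes the
// consensus output before signing and submission.
type Runner interface {
	StartDuty(d Duty) error
	ProcessConsensus(decided []byte) error
}

// AttestationRunner is one concrete strategy behind the interface.
type AttestationRunner struct{ started bool }

func (r *AttestationRunner) StartDuty(d Duty) error {
	r.started = true
	fmt.Printf("attestation duty started at slot %d\n", d.Slot)
	return nil
}

func (r *AttestationRunner) ProcessConsensus(decided []byte) error {
	if !r.started {
		return fmt.Errorf("no duty in progress")
	}
	fmt.Printf("signing decided value (%d bytes)\n", len(decided))
	return nil
}

func main() {
	var r Runner = &AttestationRunner{}
	_ = r.StartDuty(Duty{Slot: 123, Role: "ATTESTER"})
	_ = r.ProcessConsensus([]byte{0x01, 0x02})
}
```

Because every duty type satisfies the same interface, the scheduler can dispatch to any runner without knowing its concrete type.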
3. P2P Network (network/)
The P2P Network layer handles all peer-to-peer communication between operators. Built on libp2p, it provides discovery, message routing, and gossip-based broadcasting.
Network Architecture:
- Discovery: Uses discv5 protocol for peer discovery
- Transport: TCP and QUIC support for reliable communication
- Pubsub: GossipSub for efficient message broadcasting
- Subnets: Validator-specific subnets for message isolation
Network Interface (network/network.go)
Subnet Benefits:
- Scalability: Operators don’t receive messages for validators they don’t manage
- Privacy: Validator communications are isolated
- Efficiency: Reduced bandwidth and processing overhead
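One way such a validator-to-subnet mapping can work is a stable hash of the committee identifier modulo the subnet count; this is an illustrative scheme, not the exact SSV mapping:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const subnetCount = 128 // illustrative; the real count is defined by the network spec

// SubnetFor deterministically maps a committee identifier to a gossip
// subnet, so every operator of that validator subscribes to the same
// topic without any coordination.
func SubnetFor(committeeID []byte) uint64 {
	h := sha256.Sum256(committeeID)
	return binary.BigEndian.Uint64(h[:8]) % subnetCount
}

func main() {
	id := []byte("example-committee")
	fmt.Printf("committee maps to subnet %d\n", SubnetFor(id))
}
```

Determinism is the key property: all operators compute the same subnet locally, which is what makes message isolation possible without a directory service.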
4. Beacon Chain Integration (beacon/)
The Beacon Chain Integration layer interfaces with Ethereum’s consensus layer to fetch duties and submit validator signatures.
Core Functions:
- Fetches validator duties for upcoming slots
- Retrieves Beacon Chain state and fork information
- Submits attestations, block proposals, and sync committee messages
- Supports multiple Beacon nodes for redundancy
| Duty Type | Frequency | Description |
|---|---|---|
| Attestation | Every epoch (~6.4 min) | Vote on the current chain head |
| Block Proposal | When selected | Propose new blocks to the chain |
| Sync Committee | When assigned | Participate in light client support |
| Aggregation | When selected | Aggregate attestations from other validators |
5. Contract Sync (eth/)
The Contract Sync component monitors the SSV smart contract on Ethereum mainnet for validator and operator events.
Monitored Events:
- Validator Registered: New validator added to the network
- Validator Removed: Validator deregistered
- Operator Added: New operator joined
- Operator Removed: Operator exited
- Fee Recipient Updated: Fee configuration changed
- Cluster Liquidated: Insufficient balance for cluster operation
The contract sync is event-driven, responding to blockchain events in real-time to maintain an up-to-date view of the SSV network state.
6. Storage Layer (storage/, registry/storage/)
The Storage Layer provides persistent state management for validator shares, metadata, and consensus data.
Storage Technologies:
- BadgerDB: Default key-value store (mature, battle-tested)
- PebbleDB: Alternative high-performance option
- In-Memory Caches: For frequently accessed data
Persisted Data:
- Validator shares and operator assignments
- Consensus instance state and decided values
- Slashing protection database (prevents double-signing)
- Duty execution history
- Network metadata and configuration
Design Patterns
SSV Node employs several architectural patterns for reliability and maintainability:
Message Queue Pattern
Each duty type has dedicated queues for concurrent processing, enabling parallel execution of multiple validators without blocking.
Event-Driven Architecture
The system responds to three types of events:
- Beacon Slots: A time-based slot ticker triggers duty fetching
- Contract Events: Smart contract updates trigger validator lifecycle changes
- P2P Messages: Network messages trigger consensus and duty processing
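The three event sources above can be multiplexed with a single select loop; the event types here are illustrative stand-ins for the node's real ones:

```go
package main

import "fmt"

// Illustrative stand-ins for the node's real event types.
type (
	SlotTick      struct{ Slot uint64 }
	ContractEvent struct{ Name string }
	P2PMessage    struct{ Topic string }
)

// eventLoop handles n events from the three sources, whichever is
// ready first, mirroring the node's event-driven core.
func eventLoop(n int, slots <-chan SlotTick, contract <-chan ContractEvent, p2p <-chan P2PMessage) []string {
	handled := make([]string, 0, n)
	for i := 0; i < n; i++ {
		select {
		case t := <-slots:
			handled = append(handled, fmt.Sprintf("slot %d: fetch duties", t.Slot))
		case e := <-contract:
			handled = append(handled, "contract: "+e.Name)
		case m := <-p2p:
			handled = append(handled, "p2p: "+m.Topic)
		}
	}
	return handled
}

func main() {
	slots := make(chan SlotTick, 1)
	contract := make(chan ContractEvent, 1)
	p2p := make(chan P2PMessage, 1)

	slots <- SlotTick{Slot: 100}
	contract <- ContractEvent{Name: "ValidatorAdded"}
	p2p <- P2PMessage{Topic: "decided"}

	for _, line := range eventLoop(3, slots, contract, p2p) {
		fmt.Println(line)
	}
}
```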
Repository Pattern
Clean storage interfaces abstract database operations, enabling:
- Testability: Easy mocking for unit tests
- Flexibility: Swap storage backends without code changes
- Consistency: Uniform access patterns across the codebase
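A minimal sketch of the repository pattern, assuming a hypothetical SharesStore interface with an in-memory implementation of the kind used in tests:

```go
package main

import (
	"fmt"
	"sync"
)

// Share is a minimal stand-in for a stored validator share record.
type Share struct {
	PubKey string
	Data   []byte
}

// SharesStore is an illustrative repository interface: callers depend
// on this abstraction rather than on BadgerDB or Pebble directly, so
// backends can be swapped and tests can use an in-memory fake.
type SharesStore interface {
	Save(s Share) error
	Get(pubKey string) (Share, bool)
}

// memStore is the in-memory implementation used in unit tests.
type memStore struct {
	mu     sync.RWMutex
	shares map[string]Share
}

func newMemStore() *memStore {
	return &memStore{shares: make(map[string]Share)}
}

func (m *memStore) Save(s Share) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.shares[s.PubKey] = s
	return nil
}

func (m *memStore) Get(pubKey string) (Share, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	s, ok := m.shares[pubKey]
	return s, ok
}

func main() {
	var store SharesStore = newMemStore()
	_ = store.Save(Share{PubKey: "0xabc", Data: []byte{1}})
	if s, ok := store.Get("0xabc"); ok {
		fmt.Println("loaded share for", s.PubKey)
	}
}
```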
Strategy Pattern
Different runners implement a common interface for various duties, allowing the scheduler to treat all duty types uniformly while maintaining duty-specific logic.
Critical Data Flows
1. Validator Duty Flow
The complete flow from duty detection to Beacon Chain submission:
2. Message Processing Flow
How P2P messages are validated and processed:
3. Contract Event Flow
Response to smart contract events:
- Event sync detects a ValidatorAdded event
- Handler extracts validator data and shares
- Validator controller creates new validator instance
- Shares are stored in the database
- Runner is initialized for the new validator
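The dispatch step of this flow can be sketched as a switch over decoded event names (the handler actions here are illustrative placeholders):

```go
package main

import "fmt"

// Event mirrors the shape of a decoded contract log (illustrative).
type Event struct {
	Name string
	Data map[string]string
}

// handleEvent dispatches a decoded SSV contract event to the matching
// lifecycle action, following the flow described above.
func handleEvent(e Event) (string, error) {
	switch e.Name {
	case "ValidatorAdded":
		return "start validator " + e.Data["pubkey"], nil
	case "ValidatorRemoved":
		return "stop validator " + e.Data["pubkey"], nil
	case "FeeRecipientUpdated":
		return "update fee recipient", nil
	case "ClusterLiquidated":
		return "pause cluster", nil
	default:
		return "", fmt.Errorf("unknown event %q", e.Name)
	}
}

func main() {
	action, err := handleEvent(Event{
		Name: "ValidatorAdded",
		Data: map[string]string{"pubkey": "0xabc"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(action)
}
```

Only ValidatorAdded is named in the text above; the other case labels are hypothetical stand-ins for the monitored events listed earlier.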
Package Structure
Understanding the Go package organization in github.com/ssvlabs/ssv:
| Package | Purpose | Key Files |
|---|---|---|
| operator/ | Main node orchestration | node.go, duties/scheduler.go |
| protocol/v2/qbft/ | QBFT consensus implementation | controller/controller.go, instance/instance.go |
| protocol/v2/ssv/runner/ | Duty execution runners | runner.go, attestation.go, proposal.go |
| network/ | P2P networking layer | network.go, p2p/p2p.go |
| beacon/ | Beacon Chain integration | goclient/*.go |
| eth/ | Smart contract synchronization | eventhandler/, eventsyncer/ |
| storage/ | Storage backends | badger/, pebble/ |
| registry/storage/ | Validator data persistence | shares.go, validator_store.go |
Security Architecture
Slashing Protection
SSV Node implements comprehensive slashing protection to prevent validator penalties:
- Database-Backed Protection: All signing operations check against historical data
- Distributed Checks: Each operator independently validates signing safety
- Attestation Protection: Prevents conflicting source/target votes
- Proposal Protection: Prevents double block proposals
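The attestation check reduces to detecting double votes and surround votes against previously signed source/target epochs; a simplified illustration of the database-backed check:

```go
package main

import "fmt"

// Vote records the source/target epochs of a signed attestation.
type Vote struct{ Source, Target uint64 }

// IsSlashable reports whether signing `next` would conflict with an
// already-signed attestation `prev`: a double vote (same target) or a
// surround vote in either direction.
func IsSlashable(prev, next Vote) bool {
	if next.Target == prev.Target {
		return true // double vote
	}
	if next.Source < prev.Source && next.Target > prev.Target {
		return true // next surrounds prev
	}
	if next.Source > prev.Source && next.Target < prev.Target {
		return true // prev surrounds next
	}
	return false
}

func main() {
	signed := Vote{Source: 10, Target: 11}
	fmt.Println(IsSlashable(signed, Vote{Source: 11, Target: 12})) // safe
	fmt.Println(IsSlashable(signed, Vote{Source: 9, Target: 12}))  // surround vote
}
```

The real protection checks every new signing request against the full history in the slashing protection database, not a single prior vote.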
Key Isolation
Validator keys never exist in complete form:- With 4 operators: Need 3 shares (75%)
- With 7 operators: Need 5 shares (71%)
- With 13 operators: Need 9 shares (69%)
Message Validation
All network messages undergo rigorous validation:
- Signature Verification: Cryptographic signature check
- Operator Verification: Sender is part of the committee
- Sequence Validation: Message ordering is correct
- Content Validation: Message data passes role-specific checks
- Replay Protection: Message hasn’t been processed before
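These checks compose naturally into a fail-fast pipeline; a sketch with a hypothetical Msg type and three of the checks above:

```go
package main

import (
	"errors"
	"fmt"
)

// Msg is a pared-down network message (illustrative fields only).
type Msg struct {
	Sender   string
	Sequence uint64
	SigValid bool
}

// check is one stage of the validation pipeline.
type check func(Msg) error

// validate runs the checks in order and stops at the first failure.
func validate(m Msg, checks ...check) error {
	for _, c := range checks {
		if err := c(m); err != nil {
			return err
		}
	}
	return nil
}

// defaultChecks builds signature, committee-membership, and
// sequence/replay checks against the given state.
func defaultChecks(committee map[string]bool, lastSeq uint64) []check {
	return []check{
		func(m Msg) error {
			if !m.SigValid {
				return errors.New("bad signature")
			}
			return nil
		},
		func(m Msg) error {
			if !committee[m.Sender] {
				return errors.New("sender not in committee")
			}
			return nil
		},
		func(m Msg) error {
			if m.Sequence <= lastSeq {
				return errors.New("stale or replayed message")
			}
			return nil
		},
	}
}

func main() {
	checks := defaultChecks(map[string]bool{"op1": true, "op2": true}, 4)
	fmt.Println(validate(Msg{Sender: "op1", Sequence: 5, SigValid: true}, checks...))
	fmt.Println(validate(Msg{Sender: "evil", Sequence: 5, SigValid: true}, checks...))
}
```

Ordering matters: cheap checks run first so invalid traffic is rejected before the node spends effort on content validation.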
Performance Optimizations
Concurrent Processing
SSV Node leverages Go's concurrency primitives:
- Goroutines: Each validator runs in parallel
- Channels: Message passing between components
- Mutex Protection: Thread-safe state access
- Worker Pools: Bounded concurrency for resource management
Caching Strategy
Multiple caching layers reduce latency:
- In-Memory Shares Cache: Fast validator share access
- Beacon State Cache: Reduces Beacon node queries
- Fork Version Cache: Avoids repeated fork lookups
Database Optimization
- Batch Operations: Group writes for efficiency
- Transaction Management: Consistent state updates
- Garbage Collection: Periodic cleanup of old data
- SSZ Encoding: Compact storage representation
Observability
Metrics (Prometheus)
Key metrics exposed on the /metrics endpoint:
- ssv_validator_duties_total: Count of duties processed
- ssv_consensus_duration_seconds: Consensus latency
- ssv_beacon_submissions_total: Beacon submission success/failures
- ssv_network_peers: Current peer count
- ssv_storage_operations_total: Database operation stats
Traces (OpenTelemetry)
Distributed tracing for debugging:
- Duty execution flow from start to submission
- Consensus round progression
- Message processing latency
- Cross-component request tracking
Structured Logging
All logs use a structured format with standard severity levels:
- DEBUG: Detailed execution flow (development only)
- INFO: Normal operational events
- WARN: Potential issues requiring attention
- ERROR: Critical failures requiring operator action
Deployment Architecture
Recommended Setup
High Availability
For production deployments:
- Multiple Beacon Nodes: Automatic failover between Beacon clients
- Execution Client Redundancy: Backup execution clients
- Persistent Storage: Ensure database survives restarts
- Monitoring: Prometheus + Grafana for visibility
- Alerting: PagerDuty/OpsGenie for critical issues
Next Steps
- Node Setup: Install and configure your SSV Node
- Developer Guide: Contribute to SSV Node development
- API Reference: Explore the Go package documentation
- Network Specification: Learn about the P2P network protocol
