
Overview

Rate limiting protects your Sockudo server from abuse and ensures fair resource allocation across applications. It provides per-application limits for HTTP API requests, preventing any single app from overwhelming the server.

Key Benefits:
  • Prevents API abuse and DoS attacks
  • Ensures fair resource allocation
  • Configurable per-application limits
  • Multiple backend drivers (memory, Redis, Redis Cluster)

Quick Start

Basic Configuration

config/config.json
{
  "rate_limiter": {
    "enabled": true,
    "driver": "memory"
  }
}

Environment Variable

RATE_LIMITER_ENABLED=true
RATE_LIMITER_DRIVER=memory

Configuration

Global Settings

{
  "rate_limiter": {
    "enabled": true,
    "driver": "memory"
  }
}
| Option | Values | Description |
| --- | --- | --- |
| `enabled` | `true`/`false` | Enable/disable rate limiting globally |
| `driver` | `memory`, `redis`, `redis-cluster`, `none` | Rate limiter backend |

Per-App Limits

Configure rate limits for individual applications:
{
  "app_manager": {
    "driver": "memory",
    "array": {
      "apps": [
        {
          "id": "my-app",
          "key": "my-key",
          "secret": "my-secret",
          "max_client_events_per_second": 1000
        }
      ]
    }
  }
}
| Option | Default | Description |
| --- | --- | --- |
| `max_client_events_per_second` | `1000` | Maximum events per second per app |

Rate Limiter Drivers

Memory Driver (Default)

Best for: Single-node deployments, development, testing
{
  "rate_limiter": {
    "enabled": true,
    "driver": "memory"
  }
}
Characteristics:
  • In-memory counters (fast)
  • No external dependencies
  • Per-node limits (not shared across instances)
  • Lost on server restart
Use when:
  • Running single Sockudo instance
  • Development/testing environment
  • Low-traffic applications

Redis Driver

Best for: Multi-node deployments with shared limits
{
  "rate_limiter": {
    "enabled": true,
    "driver": "redis"
  },
  "database": {
    "redis": {
      "host": "localhost",
      "port": 6379,
      "password": "optional-password",
      "db": 0
    }
  }
}
Characteristics:
  • Shared counters across all nodes
  • Cluster-wide rate limits
  • Persistent across restarts (with Redis persistence)
  • Slightly higher latency (~1-2ms)
Use when:
  • Running multiple Sockudo instances
  • Need cluster-wide rate limits
  • Production deployments

Redis Cluster Driver

Best for: High-availability deployments with Redis Cluster
{
  "rate_limiter": {
    "enabled": true,
    "driver": "redis-cluster"
  },
  "redis_cluster": {
    "nodes": [
      "redis1:6379",
      "redis2:6379",
      "redis3:6379"
    ]
  }
}
Characteristics:
  • Distributed across Redis Cluster
  • High availability
  • Automatic failover
  • Horizontal scalability
Use when:
  • High-availability requirements
  • Large-scale deployments
  • Redis Cluster infrastructure

None Driver

Best for: Disabling rate limiting
{
  "rate_limiter": {
    "enabled": false,
    "driver": "none"
  }
}
Use when:
  • Behind external rate limiter (e.g., nginx, API gateway)
  • Trusted internal network
  • Development with no limits

Rate Limiting Behavior

HTTP API Endpoints

Rate limiting is enforced on these endpoints:
| Endpoint | Rate Limited | Limit Type |
| --- | --- | --- |
| `POST /apps/:app_id/events` | ✅ Yes | Per-app |
| `POST /apps/:app_id/batch_events` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels/:channel` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels/:channel/users` | ✅ Yes | Per-app |
| Health endpoints (`/up/:app_id`) | ❌ No | N/A |
| Metrics endpoint (`/metrics`) | ❌ No | N/A |

Rate Limit Algorithm

Sockudo uses a token bucket algorithm:
  1. Each app has a bucket with max capacity = max_client_events_per_second
  2. Tokens refill at rate of max_client_events_per_second per second
  3. Each request consumes 1 token
  4. If no tokens available, request is rejected with 429 Too Many Requests
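The algorithm above can be sketched in a few lines of JavaScript. This is illustrative only, not Sockudo's internal implementation; the class and method names are hypothetical:

```javascript
// Token-bucket sketch: capacity doubles as both the bucket size
// and the refill rate (tokens per second), matching the steps above.
class TokenBucket {
  constructor(capacity) {
    this.capacity = capacity;     // max_client_events_per_second
    this.tokens = capacity;       // bucket starts full
    this.lastRefill = Date.now();
  }

  // `now` can be injected to make behavior deterministic in tests.
  tryConsume(now = Date.now()) {
    // Refill at `capacity` tokens per second, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.capacity);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;                // request allowed
    }
    return false;                 // server would answer 429 Too Many Requests
  }
}

// With capacity 2: two requests pass, the third is rejected,
// and tokens are available again one second later.
const bucket = new TokenBucket(2);
const t0 = bucket.lastRefill;
bucket.tryConsume(t0);            // true
bucket.tryConsume(t0);            // true
bucket.tryConsume(t0);            // false (bucket empty)
bucket.tryConsume(t0 + 1000);     // true  (refilled after 1s)
```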

Response Headers

Rate limit information is included in response headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 742
X-RateLimit-Reset: 1678901234
| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Maximum requests per second |
| `X-RateLimit-Remaining` | Tokens remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when limit resets |
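Clients can use these headers to decide how long to pause before retrying. A minimal sketch, assuming a fetch-style `Headers` object (`response.headers.get`); the function name is hypothetical:

```javascript
// Derive a client-side wait (in milliseconds) from the rate-limit headers.
function msUntilReset(response) {
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const resetAt = Number(response.headers.get('X-RateLimit-Reset')); // Unix seconds
  if (remaining > 0) return 0;                    // tokens left: send immediately
  return Math.max(0, resetAt * 1000 - Date.now()); // otherwise wait for the reset
}
```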

Rate Limit Exceeded

When rate limit is exceeded, the server returns:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1678901234

{
  "error": "Rate limit exceeded for app: my-app"
}
Client behavior:
  • Wait until X-RateLimit-Reset timestamp
  • Implement exponential backoff
  • Queue requests client-side

Use Cases

1. Preventing API Abuse

Problem: A malicious client making excessive API calls.
Solution:
{
  "apps": [
    {
      "id": "public-app",
      "max_client_events_per_second": 100  // Low limit for public apps
    }
  ]
}
Result: API abuse blocked, server remains stable

2. Fair Resource Allocation

Problem: One app consuming all server resources.
Solution:
{
  "apps": [
    {
      "id": "premium-app",
      "max_client_events_per_second": 10000  // High limit
    },
    {
      "id": "free-app",
      "max_client_events_per_second": 100    // Low limit
    }
  ]
}
Result: Resources fairly distributed across tiers

3. Multi-Tenant SaaS

Problem: Need different limits per customer.
Solution:
{
  "apps": [
    {
      "id": "customer-1",
      "max_client_events_per_second": 1000
    },
    {
      "id": "customer-2",
      "max_client_events_per_second": 5000
    }
  ]
}
Result: Each customer gets appropriate limits based on plan

4. Development vs Production

Problem: Different limits for different environments.
Solution:
# Development
RATE_LIMITER_ENABLED=false

# Production
RATE_LIMITER_ENABLED=true
RATE_LIMITER_DRIVER=redis
Result: No limits in dev, enforced in production

Best Practices

1. Set Appropriate Limits

Too low: Legitimate traffic gets blocked.
Too high: Doesn't prevent abuse.
Just right: Allows normal usage, blocks abuse.
// Good starting points
{
  "free-tier": 100,      // 100 events/sec
  "standard-tier": 1000, // 1k events/sec
  "premium-tier": 10000  // 10k events/sec
}

2. Use Redis for Multi-Node Deployments

{
  "rate_limiter": {
    "driver": "redis"  // Share limits across nodes
  }
}
The memory driver gives each node independent limits, effectively multiplying the limit by the number of nodes.

3. Monitor Rate Limit Metrics

Track rate limit hits via Prometheus metrics:
# Rate limit exceeded count
sockudo_rate_limit_exceeded_total{app_id="my-app"}

# Current rate limit usage
sockudo_rate_limit_remaining{app_id="my-app"}

4. Implement Client-Side Backoff

// Simple sleep helper (setTimeout wrapped in a promise).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendWithBackoff(event, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      await sendEvent(event); // your HTTP call to the Sockudo API
      return;
    } catch (error) {
      if (error.status === 429) {
        // Wait until the reset timestamp, with exponential backoff as a floor
        const resetTime = Number(error.headers['x-ratelimit-reset']);
        const waitMs = resetTime * 1000 - Date.now();
        await sleep(Math.max(waitMs, 1000 * Math.pow(2, i)));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Rate limit exceeded after retries');
}

5. Consider External Rate Limiters

For advanced scenarios, use nginx or API gateway:
# Nginx rate limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;

server {
  location /apps/ {
    limit_req zone=api burst=20;
    proxy_pass http://sockudo;
  }
}

Monitoring

Prometheus Metrics

# Rate limit exceeded count by app
sockudo_rate_limit_exceeded_total{app_id="my-app"} 142

# Current remaining tokens
sockudo_rate_limit_remaining{app_id="my-app"} 758

# Rate limit configuration
sockudo_rate_limit_max{app_id="my-app"} 1000
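If you scrape these metrics with Prometheus, you can alert on sustained throttling. A sketch of an alerting rule using the counter shown above (the rule name, threshold, and durations are illustrative, not Sockudo defaults):

```yaml
groups:
  - name: sockudo-rate-limits
    rules:
      - alert: SockudoAppThrottled
        # Fires when an app is rejected more than 5 times/sec
        # averaged over 5 minutes, for at least 10 minutes.
        expr: rate(sockudo_rate_limit_exceeded_total[5m]) > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "App {{ $labels.app_id }} is hitting its rate limit"
```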

Logs

[WARN] Rate limit exceeded for app: my-app (limit: 1000/sec)
[INFO] Rate limiter initialized with driver: redis
[DEBUG] Rate limit check: app=my-app, remaining=742/1000

Troubleshooting

Rate Limits Not Enforced

Check 1: Is rate limiting enabled?
grep -A 2 "rate_limiter" config/config.json
Check 2: Are per-app limits configured?
grep "max_client_events_per_second" config/config.json
Check 3: Check logs for initialization
grep "rate limiter" sockudo.log

Legitimate Traffic Blocked

Symptom: Users reporting 429 errors during normal usage.
Solution: Increase the per-app limit.
{
  "apps": [{
    "id": "my-app",
    "max_client_events_per_second": 2000  // Increase from 1000
  }]
}

Multi-Node Limit Multiplication

Symptom: Effective limit is higher than configured (e.g., 3000 instead of 1000).
Cause: Using the memory driver with 3 nodes (1000 × 3 = 3000).
Solution: Switch to the Redis driver for shared limits.
{
  "rate_limiter": {
    "driver": "redis"
  }
}

Redis Connection Issues

Symptom: Rate limiting not working with the Redis driver.
Check Redis connection:
redis-cli -h localhost -p 6379 ping
Check Sockudo logs:
grep "redis" sockudo.log | grep -i error
Fallback: Use memory driver temporarily
{
  "rate_limiter": {
    "driver": "memory"  // Temporary fallback
  }
}

Migration Guide

Enabling Rate Limiting

1. Enable in configuration:
{
  "rate_limiter": {
    "enabled": true,
    "driver": "memory"
  }
}
2. Configure per-app limits:
{
  "apps": [{
    "max_client_events_per_second": 1000
  }]
}
3. Monitor for blocked requests:
grep "Rate limit exceeded" sockudo.log
4. Adjust limits based on usage:
{
  "max_client_events_per_second": 2000  // Increase if needed
}

Switching from Memory to Redis

1. Configure Redis connection:
{
  "database": {
    "redis": {
      "host": "localhost",
      "port": 6379
    }
  }
}
2. Update rate limiter driver:
{
  "rate_limiter": {
    "driver": "redis"
  }
}
3. Restart Sockudo:
systemctl restart sockudo
4. Verify Redis keys:
redis-cli --scan --pattern "sockudo:rate_limit:*"

Next Steps

Webhooks

Configure event notifications with batching

Presence Channels

Track online users with presence channels
