
What is this load balancer?

This is a Layer 7 HTTP Load Balancer built from first principles in TypeScript using the Bun runtime. It distributes incoming HTTP traffic across multiple backend servers with automatic health monitoring, intelligent failover handling, and structured request logging. Unlike simple reverse proxies, this load balancer actively monitors backend health, automatically removes failing servers from rotation, and brings them back once they recover—all without manual intervention.
Why this project exists: To deeply understand how production load balancers like NGINX and HAProxy work under the hood by implementing request routing, health monitoring, and fault tolerance from scratch.

Key features

This load balancer implements essential production features:

Round-robin distribution

Distributes requests evenly across healthy backend servers using a round-robin algorithm
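The round-robin selection described above can be sketched as a small strategy class. The `Strategy` interface and `pick` method below are illustrative, not the project's exact API:

```typescript
// Minimal round-robin sketch: cycle through the healthy backends in order.
// Interface and method names are assumptions for illustration.
interface Strategy {
  pick(backends: string[]): string | undefined;
}

class RoundRobin implements Strategy {
  private index = 0;

  pick(backends: string[]): string | undefined {
    if (backends.length === 0) return undefined; // nothing healthy to pick
    const choice = backends[this.index % backends.length];
    this.index++;
    return choice;
  }
}

// Usage: successive calls walk the list and wrap around.
const rr = new RoundRobin();
rr.pick(["http://localhost:3001", "http://localhost:3002"]); // first backend
rr.pick(["http://localhost:3001", "http://localhost:3002"]); // second backend
```

Because the counter lives in the strategy rather than in the pool, the pool can shrink or grow between calls without the strategy needing to know why.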

Automatic health checks

Monitors backend availability every 5 seconds with configurable timeout handling

Intelligent failover

Automatically removes failing backends and restores them when they recover

Structured logging

Color-coded logs for requests, responses, health checks, and errors with timing metrics
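A categorized, color-coded log line with timing might be formatted like the sketch below. The category names, ANSI colors, and layout are assumptions, not the project's actual output format:

```typescript
// Illustrative log formatter: one ANSI color per category, optional timing.
const COLORS: Record<string, string> = {
  REQUEST: "\x1b[36m", // cyan
  RESPONSE: "\x1b[32m", // green
  HEALTH: "\x1b[33m", // yellow
  ERROR: "\x1b[31m", // red
};
const RESET = "\x1b[0m";

function formatLog(category: string, message: string, durationMs?: number): string {
  const color = COLORS[category] ?? "";
  const timing = durationMs !== undefined ? ` (${durationMs}ms)` : "";
  return `${color}[${new Date().toISOString()}] [${category}]${RESET} ${message}${timing}`;
}

console.log(formatLog("RESPONSE", "GET / -> 200", 12));
```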

Architecture highlights

- Strategy pattern for load balancing — The load balancing algorithm is decoupled from the core balancer via a strategy interface. Currently implements round-robin, but the architecture makes it trivial to add least connections, weighted round-robin, or IP hash.
- Parallel health checks — Health checks run concurrently using Promise.all() with per-request AbortController timeouts (3s). This prevents a single slow or dead backend from blocking health evaluation of the entire pool.
- Clean separation of concerns — Each module has a single responsibility:

| Component     | Responsibility                                    |
| ------------- | ------------------------------------------------- |
| BackendPool   | Manages backend server registry and health state  |
| RoundRobin    | Implements the routing algorithm                  |
| LoadBalancer  | Orchestrates strategy + pool to pick a backend    |
| ProxyHandler  | Forwards requests and handles proxy errors        |
| HealthChecker | Periodically verifies backend availability        |
| Logger        | Structured, categorized log output                |
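The parallel health-check pattern described above can be sketched as follows. The function names and the `/health` path are illustrative assumptions; the 3-second AbortController timeout matches the behavior described in this section:

```typescript
// Check one backend with a hard timeout so a dead host can't hang the check.
async function checkBackend(url: string, timeoutMs = 3000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return res.ok;
  } catch {
    return false; // timeout, connection refused, or other network error
  } finally {
    clearTimeout(timer);
  }
}

// Run every check concurrently; one slow backend only costs its own timeout.
async function checkAll(urls: string[]): Promise<Map<string, boolean>> {
  const results = await Promise.all(urls.map((u) => checkBackend(u)));
  return new Map(urls.map((u, i) => [u, results[i]] as [string, boolean]));
}
```

With sequential checks, a pool of N backends could take up to N × 3s to evaluate; with `Promise.all()` the whole sweep is bounded by the single slowest check.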

How it works

The request flow follows this sequence:
1. Startup — The load balancer initializes a pool of backend servers and starts periodic health checks every 5 seconds.
2. Incoming request — Express receives a request and passes it to the proxy handler middleware.
3. Backend selection — The load balancer queries the backend pool for healthy servers and uses the round-robin strategy to pick the next one.
4. Proxying — The request is forwarded to the selected backend via express-http-proxy. Response time is measured and logged.
5. Error handling — If the backend fails, it’s marked unhealthy immediately and a 502 Bad Gateway is returned. If no backends are available, a 503 Service Unavailable is returned.
6. Health recovery — The health checker runs every 5 seconds, pinging each backend. Recovered servers are automatically re-added to the healthy pool.
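The selection and failover steps above (pick a healthy backend, forward, and fall back to 502/503) can be sketched framework-free. `pickBackend` and `handleRequest` are hypothetical names for illustration; the real project wires this logic through Express middleware and express-http-proxy:

```typescript
type Backend = { url: string; healthy: boolean };

// Round-robin over only the currently healthy backends.
function pickBackend(pool: Backend[], counter: { n: number }): Backend | null {
  const healthy = pool.filter((b) => b.healthy);
  if (healthy.length === 0) return null;
  return healthy[counter.n++ % healthy.length];
}

async function handleRequest(
  pool: Backend[],
  counter: { n: number },
  path: string,
): Promise<Response> {
  const backend = pickBackend(pool, counter);
  // No healthy backends at all -> 503 Service Unavailable.
  if (!backend) return new Response("Service Unavailable", { status: 503 });
  try {
    return await fetch(backend.url + path);
  } catch {
    // The chosen backend failed mid-request: drop it from rotation
    // immediately and report 502 Bad Gateway for this request.
    backend.healthy = false;
    return new Response("Bad Gateway", { status: 502 });
  }
}
```

The health checker closes the loop: once a marked-unhealthy backend answers its periodic ping again, flipping `healthy` back to `true` is all it takes to rejoin rotation.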

Use cases

Learning resource

Study how production load balancers work by exploring clean, well-structured TypeScript code with clear separation of concerns

Development environment

Use as a local load balancer for testing microservices or distributed applications during development

Foundation for custom solutions

Extend with additional algorithms (least connections, IP hash), rate limiting, or admin dashboards for your specific needs
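Because routing is behind a strategy interface, adding an algorithm means writing one small class. A least-connections strategy, for example, might look like this sketch; the `Tracked` shape and class name are assumptions, not part of the project:

```typescript
// Hypothetical least-connections strategy: prefer the backend with the
// fewest in-flight requests.
interface Tracked {
  url: string;
  active: number; // current in-flight request count
}

class LeastConnections {
  pick(backends: Tracked[]): Tracked | undefined {
    if (backends.length === 0) return undefined;
    return backends.reduce((best, b) => (b.active < best.active ? b : best));
  }
}
```

The caller would increment `active` when forwarding a request and decrement it when the response completes; the core balancer never needs to change.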

Interview preparation

Understand load balancer internals, health checking strategies, and the strategy pattern in practice

Tech stack

| Technology         | Purpose                                     |
| ------------------ | ------------------------------------------- |
| TypeScript         | Type-safe development with strict mode      |
| Bun                | Fast JavaScript runtime and package manager |
| Express 5          | HTTP server framework                       |
| express-http-proxy | Reverse proxy middleware                    |

Next steps

Quickstart

Get up and running in 5 minutes

Installation

Detailed setup and prerequisites

Architecture

Deep dive into system design
