Why Multi-Gateway
A single Gateway instance has inherent limitations: it is a single point of failure, its throughput is capped by one machine, and it cannot optimize latency across regions. A multi-gateway deployment addresses all three, providing high availability and horizontal scaling.
Architecture Patterns
Active-Standby
One primary instance handles all requests; a standby instance takes over on failure.
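One common way to implement failover is a floating virtual IP managed by keepalived: clients always connect to the virtual IP, which moves to the standby node if the primary stops answering VRRP advertisements. The sketch below is illustrative only; the interface name, IPs, and password are assumptions, not OpenClaw specifics.

```conf
vrrp_instance GATEWAY_VIP {
    state MASTER              # set to BACKUP on the standby node
    interface eth0            # assumed interface name
    virtual_router_id 51
    priority 100              # use a lower priority on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        10.0.0.100            # clients connect to this floating IP
    }
}
```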
Active-Active
Multiple instances process requests simultaneously with a load balancer distributing traffic.
Nginx Load Balancing
# Map the Upgrade header so keepalive still works for plain HTTP requests;
# hardcoding Connection "upgrade" would disable upstream keepalive entirely.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

upstream openclaw_backend {
    least_conn;
    server 10.0.0.1:3000 weight=5;
    server 10.0.0.2:3000 weight=5;
    server 10.0.0.3:3000 backup;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name gateway.example.com;

    location / {
        proxy_pass http://openclaw_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Shared State Storage
Multi-gateway deployments need shared session state via Redis:
{
  "gateway": {
    "session": {
      "store": "redis",
      "redis": {
        "host": "redis.example.com",
        "port": 6379,
        "password": "{{REDIS_PASSWORD}}",
        "db": 0,
        "keyPrefix": "openclaw:"
      }
    }
  }
}
Docker Compose Multi-Instance
A typical setup runs two Gateway instances alongside Redis, with Nginx in front as the load balancer.
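A minimal compose file for that topology might look like the following. The image name, environment variables, and config mount are assumptions for illustration, not documented OpenClaw names; adapt them to your actual image and configuration.

```yaml
services:
  redis:
    image: redis:7-alpine
    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]

  gateway-1:
    image: openclaw/gateway:latest     # assumed image name
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    depends_on:
      - redis

  gateway-2:
    image: openclaw/gateway:latest
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    depends_on:
      - redis

  nginx:
    image: nginx:stable
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - gateway-1
      - gateway-2
```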
Session Affinity
For WebSocket connections, enable session affinity with ip_hash in Nginx so that requests from the same client IP are always routed to the same Gateway instance.
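A sticky upstream might look like this. Note that ip_hash replaces least_conn (Nginx allows only one balancing method per upstream block) and is incompatible with the backup parameter, so the backup server is dropped here:

```conf
upstream openclaw_backend {
    ip_hash;                  # same client IP -> same backend instance
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    keepalive 32;
}
```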
Webhook Deduplication
Use Redis locks to prevent duplicate processing:
{
  "gateway": {
    "webhook": {
      "deduplication": true,
      "deduplicationStore": "redis",
      "deduplicationTTL": 60
    }
  }
}
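Under the hood, this kind of deduplication typically reduces to an atomic Redis SET key NX EX ttl: the first gateway to set the key wins and processes the event; the others see the key and skip it. The sketch below illustrates the idea with an in-memory stand-in for Redis (in a real deployment you would call redis.set(key, "1", nx=True, ex=ttl) instead; the class and function names here are hypothetical):

```python
import time

class DedupStore:
    """In-memory stand-in for Redis SET key NX EX ttl semantics."""

    def __init__(self):
        self._keys = {}  # key -> expiry timestamp

    def set_nx_ex(self, key, ttl):
        now = time.monotonic()
        expiry = self._keys.get(key)
        if expiry is not None and expiry > now:
            return False          # key still held: duplicate delivery
        self._keys[key] = now + ttl
        return True               # lock acquired: safe to process

def handle_webhook(store, event_id, ttl=60):
    # Only the instance that wins the lock processes the event.
    if not store.set_nx_ex(f"openclaw:webhook:{event_id}", ttl):
        return "skipped (duplicate)"
    return "processed"

store = DedupStore()
print(handle_webhook(store, "evt_123"))  # processed
print(handle_webhook(store, "evt_123"))  # skipped (duplicate)
```

The TTL matters: it must outlast the provider's retry window, or a late retry will be processed a second time after the key expires.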
Rolling Updates
Multi-gateway architecture enables zero-downtime updates: remove one node from the load balancer, update it, wait for it to become healthy, then repeat for the next node.
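The loop above can be sketched as a small driver. The disable, update, is_healthy, and enable callbacks are placeholders for your load-balancer API and deploy tooling (all hypothetical here, not OpenClaw interfaces); the key invariant is that at most one node is out of rotation at a time.

```python
import time

def rolling_update(nodes, disable, update, is_healthy, enable, timeout=120):
    """Zero-downtime update: take one node out of rotation at a time."""
    for node in nodes:
        disable(node)                 # stop routing traffic to this node
        update(node)                  # deploy the new Gateway version
        deadline = time.monotonic() + timeout
        while not is_healthy(node):   # poll the node's health endpoint
            if time.monotonic() > deadline:
                raise RuntimeError(f"{node} did not become healthy in time")
            time.sleep(1)
        enable(node)                  # put the node back in rotation
```

Failing fast when a node never becomes healthy keeps the remaining nodes on the old version, so a bad release degrades capacity by one node instead of taking the cluster down.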
Summary
Multi-gateway deployment is a best practice for OpenClaw production environments. With Nginx load balancing and Redis shared state, you can build a highly available, scalable Gateway cluster for high-concurrency and high-reliability needs.