## Introduction
When your OpenClaw instance is connected to multiple channels and serving multiple users, an active user might send a large volume of messages in a short period, exhausting API quotas or increasing response latency and degrading the experience for others. This article provides a detailed walkthrough of how to configure OpenClaw's rate limiting and anti-abuse mechanisms to ensure fair resource allocation.
## 1. Why Rate Limiting Is Needed

### 1.1 Common Problem Scenarios
- Single-user flooding: A user rapidly sends dozens of messages, filling the request queue
- Group message floods: Every message in an active group triggers an AI reply, generating a burst of API calls
- Malicious abuse: A public bot is deliberately called at high frequency
- API quota protection: Preventing RPM (requests per minute) limits from the AI provider from being exceeded
### 1.2 Consequences of No Limits
| Impact | Description |
|---|---|
| Runaway API costs | Could generate hundreds of dollars in charges per day |
| Other users blocked | Request queue is full, normal users don't get replies |
| Upstream throttling | Claude/OpenAI rate limits slow down all requests |
| Service crash | Memory and CPU resources exhausted |
## 2. Global Rate Limiting

### 2.1 Basic Configuration

```json5
// ~/.config/openclaw/openclaw.json5
{
  "rateLimit": {
    "enabled": true,
    // Global limit: maximum messages processed per minute
    "global": {
      "maxRequests": 60,
      "window": "1m"
    }
  }
}
```
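The `global` block above amounts to a counter that resets once per window. As a minimal sketch of that behavior (an illustration of the fixed-window technique, not OpenClaw's actual implementation):

```python
import time

class WindowRateLimiter:
    """Fixed-window counter: allow at most max_requests per window_seconds.
    Illustrative sketch only -- not OpenClaw's internal limiter."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.window_start = 0.0
        self.count = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.window_start >= self.window_seconds:
            # Window elapsed: start a fresh one
            self.window_start = now
            self.count = 0
        if self.count < self.max_requests:
            self.count += 1
            return True
        return False

# Mirrors the config above: 60 requests per 1-minute window
limiter = WindowRateLimiter(max_requests=60, window_seconds=60)
```

A fixed window is simple, but it can admit up to twice the limit across a window boundary; sliding-window or token-bucket variants smooth that out at the cost of a little bookkeeping.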
### 2.2 Model API Call Limits

Configure upstream limits based on your API plan:

```json5
{
  "rateLimit": {
    "model": {
      // Maximum API requests per minute
      "rpm": 50,
      // Maximum tokens per minute
      "tpm": 100000,
      // Maximum daily requests (0 = unlimited)
      "dailyLimit": 2000,
      // Maximum daily cost (USD)
      "dailyCostLimit": 10.00,
      // Behavior when a limit is reached:
      // "queue" = wait in queue, "reject" = reject immediately
      "onLimitReached": "queue"
    }
  }
}
```
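The difference between `"queue"` and `"reject"` comes down to what the dispatcher does once the RPM budget is spent: estimate a wait time, or fail fast. A hedged sketch with a sliding 60-second window (the `RpmGate` class and its return values are invented for illustration):

```python
import time
from collections import deque

class RpmGate:
    """Sliding-window RPM gate that either queues (returning an estimated
    wait) or rejects. Illustrative sketch, not OpenClaw internals."""

    def __init__(self, rpm, on_limit="queue"):
        self.rpm = rpm
        self.on_limit = on_limit
        self.stamps = deque()  # request times within the last 60 s

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self.stamps and now - self.stamps[0] >= 60:
            self.stamps.popleft()
        if len(self.stamps) < self.rpm:
            self.stamps.append(now)
            return ("ok", 0.0)
        if self.on_limit == "reject":
            return ("reject", 0.0)
        # Queue: wait until the oldest request falls out of the window
        return ("queue", 60 - (now - self.stamps[0]))
```

The wait estimate returned in the `"queue"` case is the kind of value a templated notification message (like the `{waitTime}` placeholder below) could report to the user.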
When a limit is reached, OpenClaw sends the user a notification message:

```json5
{
  "rateLimit": {
    "messages": {
      "rateLimited": "Too many requests. Please try again later.",
      "dailyLimitReached": "Daily usage quota has been exhausted. Please come back tomorrow.",
      "queueing": "Currently queued, estimated wait: {waitTime} seconds."
    }
  }
}
```
## 3. Per-User Rate Limiting

### 3.1 Configuring User-Level Limits

```json5
{
  "rateLimit": {
    "perUser": {
      // Maximum messages per user per minute
      "maxRequests": 10,
      "window": "1m",
      // Maximum requests per user per hour
      "maxRequestsPerHour": 100,
      // Maximum daily token consumption per user
      "dailyTokenLimit": 50000,
      // Cooldown: wait period after triggering the rate limit
      "cooldown": "30s"
    }
  }
}
```
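One plausible reading of the `cooldown` setting above: once a user exceeds `maxRequests` within the window, every further request is rejected for the cooldown period, regardless of the window. A sketch under that assumption (not OpenClaw's actual code):

```python
import time

class PerUserLimiter:
    """Per-user fixed window plus cooldown. Assumed semantics: exceeding
    the window limit starts a cooldown during which all requests fail."""

    def __init__(self, max_requests, window_seconds, cooldown_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.cooldown = cooldown_seconds
        self.state = {}  # user_id -> (window_start, count, cooldown_until)

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        start, count, until = self.state.get(user_id, (now, 0, 0.0))
        if now < until:
            return False                  # still cooling down
        if now - start >= self.window:
            start, count = now, 0         # fresh window
        if count < self.max_requests:
            self.state[user_id] = (start, count + 1, 0.0)
            return True
        # Over the limit: start the cooldown clock
        self.state[user_id] = (start, count, now + self.cooldown)
        return False
```

Keying the state by user ID is what makes this per-user rather than global: one flooding user exhausts only their own bucket.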
### 3.2 VIP User Exemptions

For important users or administrators, you can set higher limits or full exemptions:

```json5
{
  "rateLimit": {
    "perUser": {
      "maxRequests": 10,
      "window": "1m"
    },
    // User whitelist: exempt from rate limiting
    "whitelist": [
      "+8613800138000",      // Phone number
      "telegram:123456789",  // Telegram user ID
      "discord:987654321"    // Discord user ID
    ],
    // Custom per-user quotas
    "customLimits": {
      "telegram:123456789": {
        "maxRequests": 30,
        "window": "1m",
        "dailyTokenLimit": 200000
      }
    }
  }
}
```
### 3.3 User Group Management

With many users, you can assign quotas by group:

```json5
{
  "rateLimit": {
    "groups": {
      "free": {
        "maxRequests": 5,
        "window": "1m",
        "dailyTokenLimit": 10000,
        "dailyRequestLimit": 50
      },
      "premium": {
        "maxRequests": 20,
        "window": "1m",
        "dailyTokenLimit": 100000,
        "dailyRequestLimit": 500
      },
      "admin": {
        "maxRequests": 60,
        "window": "1m",
        "dailyTokenLimit": 0  // 0 = unlimited
      }
    },
    // User-to-group mapping
    "userGroups": {
      "telegram:111": "premium",
      "telegram:222": "admin"
    },
    // Default group for users not assigned to any group
    "defaultGroup": "free"
  }
}
```
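A natural precedence for the per-user settings in 3.2 and 3.3 is: whitelist (fully exempt), then `customLimits`, then the user's group quota, then `defaultGroup`. That ordering is an assumption on my part, not documented OpenClaw behavior; a sketch of the lookup:

```python
def resolve_limits(user_id, config):
    """Resolve which quota applies to a user. Assumed precedence:
    whitelist > customLimits > userGroups > defaultGroup.
    Returns None when the user is exempt from rate limiting."""
    if user_id in config.get("whitelist", []):
        return None
    custom = config.get("customLimits", {})
    if user_id in custom:
        return custom[user_id]
    groups = config.get("groups", {})
    group = config.get("userGroups", {}).get(user_id,
                                             config.get("defaultGroup"))
    return groups.get(group)

# Trimmed-down version of the configs shown above
cfg = {
    "whitelist": ["telegram:123456789"],
    "customLimits": {"telegram:999": {"maxRequests": 30}},
    "groups": {"free": {"maxRequests": 5}, "premium": {"maxRequests": 20}},
    "userGroups": {"telegram:111": "premium"},
    "defaultGroup": "free",
}
```

Under this reading, `resolve_limits("telegram:unknown", cfg)` falls through to the `free` group, which is exactly the safety net `defaultGroup` provides.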
## 4. Per-Channel Rate Limiting

### 4.1 Channel-Level Limits

Different channels have different usage patterns and priorities:

```json5
{
  "rateLimit": {
    "perChannel": {
      "telegram": {
        "maxRequests": 30,
        "window": "1m"
      },
      "discord": {
        "maxRequests": 20,
        "window": "1m",
        // Special limits for Discord group chats
        "groupChat": {
          "maxRequests": 10,
          "window": "1m",
          // Only respond to @Bot mentions in group chats
          "mentionOnly": true
        }
      },
      "whatsapp": {
        "maxRequests": 25,
        "window": "1m"
      }
    }
  }
}
```
### 4.2 Special Handling for Group Chats

Group chats typically generate far more messages than DMs and need their own strategy:

```json5
{
  "rateLimit": {
    "groupChat": {
      // Maximum messages to respond to per minute in groups
      "maxResponses": 5,
      "window": "1m",
      // Only reply to messages that @mention the bot
      "mentionOnly": true,
      // Or reply probabilistically (for occasional participation)
      "responseRate": 0.3,  // 30% chance of replying
      // Batching: merge multiple messages within a short window into one request
      "batchWindow": "5s"
    }
  }
}
```
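The `mentionOnly` / `responseRate` pair above can be read as: always answer @mentions, and otherwise either stay silent or reply with some probability. A sketch of that gate (the function name and exact semantics are illustrative, not OpenClaw's):

```python
import random

def should_respond(message, bot_name, *, mention_only, response_rate,
                   rng=random):
    """Decide whether the bot replies to a group-chat message.
    @mentions always win; otherwise mention_only silences the bot,
    or response_rate gives a probabilistic reply."""
    if f"@{bot_name}" in message:
        return True
    if mention_only:
        return False
    return rng.random() < response_rate
```

Passing the random source in (`rng`) keeps the function testable; in production the module-level default is fine.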
## 5. Anti-Abuse Mechanisms

### 5.1 Message Content Filtering

```json5
{
  "antiAbuse": {
    "enabled": true,
    // Maximum message length (characters)
    "maxMessageLength": 2000,
    // Reject exact duplicate messages
    "rejectDuplicates": true,
    "duplicateWindow": "1m",
    // Keyword blocklist
    "blockedPatterns": [
      "ignore previous instructions",
      "system prompt"
    ],
    // Action when abuse is detected:
    // "warn" = send warning, "block" = silently ignore, "ban" = temporary ban
    "onAbuse": "warn"
  }
}
```
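A content filter like the one above runs a few cheap checks before any tokens are spent: length cap, blocked-pattern match, and duplicate rejection within a time window. A sketch under assumed semantics (case-insensitive substring matching is my assumption; OpenClaw may use something else):

```python
import time

def check_message(text, history, *, max_len=2000, dup_window=60,
                  blocked=("ignore previous instructions", "system prompt"),
                  now=None):
    """Pre-flight filter. `history` maps message text -> last time seen.
    Returns "ok" or the name of the first rule the message breaks."""
    now = time.monotonic() if now is None else now
    if len(text) > max_len:
        return "too_long"
    lowered = text.lower()
    if any(pattern in lowered for pattern in blocked):
        return "blocked"
    last_seen = history.get(text)
    history[text] = now
    if last_seen is not None and now - last_seen < dup_window:
        return "duplicate"
    return "ok"
```

Running these checks before the rate limiter means abusive traffic never even counts against a user's legitimate quota.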
### 5.2 Temporary Ban Mechanism

When a user repeatedly trips the rate limiter, apply temporary bans automatically:

```json5
{
  "antiAbuse": {
    "autoBan": {
      "enabled": true,
      // Ban if the rate limit is triggered more than this many times within the window
      "triggerCount": 5,
      "triggerWindow": "10m",
      // Ban duration
      "banDuration": "1h",
      // Permanent ban once the cumulative ban count reaches this threshold
      "permanentBanThreshold": 3,
      // Ban notification message
      "banMessage": "Due to excessive requests, your account has been temporarily restricted. Please try again after {duration}."
    }
  }
}
```
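The trigger logic above boils down to counting rate-limit trips inside a sliding window and escalating to a permanent ban after repeated offenses. A sketch under those assumed semantics (`AutoBan` is an invented name, not an OpenClaw class):

```python
from collections import defaultdict, deque

class AutoBan:
    """Ban a user when they trip the rate limiter trigger_count times
    within trigger_window seconds; escalate to a permanent ban after
    permanent_threshold bans. Illustrative sketch only."""

    def __init__(self, trigger_count=5, trigger_window=600,
                 ban_duration=3600, permanent_threshold=3):
        self.trigger_count = trigger_count
        self.trigger_window = trigger_window
        self.ban_duration = ban_duration
        self.permanent_threshold = permanent_threshold
        self.trips = defaultdict(deque)   # user -> recent trip times
        self.banned_until = {}            # user -> unban time (inf = forever)
        self.ban_counts = defaultdict(int)

    def record_trip(self, user, now):
        trips = self.trips[user]
        trips.append(now)
        while trips and now - trips[0] > self.trigger_window:
            trips.popleft()
        if len(trips) >= self.trigger_count:
            trips.clear()
            self.ban_counts[user] += 1
            if self.ban_counts[user] >= self.permanent_threshold:
                self.banned_until[user] = float("inf")
            else:
                self.banned_until[user] = now + self.ban_duration

    def is_banned(self, user, now):
        return now < self.banned_until.get(user, 0.0)
```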
### 5.3 Ban Management

```bash
# View the current ban list
openclaw ban list

# Manually ban a user
openclaw ban add "telegram:123456" --duration 24h --reason "abuse"

# Remove a ban
openclaw ban remove "telegram:123456"

# View ban history
openclaw ban history --user "telegram:123456"
```
## 6. Monitoring Rate Limit Status

### 6.1 View Rate Limit Statistics

```bash
# View current rate limit status
openclaw stats --rate-limit

# Output:
# Rate Limit Statistics (Last 1 Hour)
# ─────────────────────────────
# Global Requests: 580 / 3600
# Times Rate Limited: 12
# Users Rate Limited: 3
# Currently Queued: 2
# Today's Tokens: 89,500 / Unlimited
# Today's Cost: $1.85 / $10.00
```
### 6.2 Prometheus Metrics

```promql
# Rate-limited request count
rate(openclaw_rate_limited_total[5m])

# Rate-limited count by user
topk(10, increase(openclaw_rate_limited_total[24h]))

# Current queue length
openclaw_queue_length

# Daily cost progress (% of the cap)
openclaw_daily_cost / openclaw_daily_cost_limit * 100
```
## 7. Best Practices
- Start loose, tighten gradually: Set generous limits initially and tighten them based on actual usage data
- Friendly messages: Give users clear notifications when rate limited, telling them when they can resume
- Priority queuing: Process VIP users' or administrators' requests with higher priority
- Tiered strategy: Group chats need stricter rate limiting than DMs
- Cost safety net: Always set a daily cost cap to prevent unexpected high bills
- Regular adjustments: Review rate limit statistics monthly and adjust quotas as the user base grows
Proper rate limiting isn't about restricting users; it's about ensuring everyone receives fair, stable service. A well-configured rate-limiting strategy is the foundation for running OpenClaw at scale.