When you expose an AI agent on a public chat platform, access control becomes critical. An unrestricted bot can be abused: it can rack up massive API costs, generate inappropriate content, or leak sensitive information. OpenClaw provides multi-layered security mechanisms to help you control who can interact with the AI, and how.
Security Threat Overview
The main risks of running an AI bot on open chat platforms include:
- Unauthorized users abusing AI resources, generating high API costs
- Malicious users attempting to manipulate AI behavior through prompt injection
- AI being triggered inappropriately in group chats, responding with unsuitable content
- Sensitive information leaking to unauthorized individuals through AI responses
OpenClaw's security mechanisms are designed to address these scenarios.
Allowlist Mechanism
The allowlist is OpenClaw's most fundamental and effective access control measure. The principle is simple: only users on the allowlist can interact with the AI, and messages from all other users are silently ignored.
Global Allowlist
You can set up a global allowlist at the top level of openclaw.json, which applies to all channels:
{
  "security": {
    "allowlist": {
      "enabled": true,
      "users": [
        {"platform": "telegram", "id": "123456789"},
        {"platform": "whatsapp", "id": "[email protected]"},
        {"platform": "discord", "id": "987654321098765432"}
      ]
    }
  }
}
The global allowlist uses a platform name plus user ID format to identify users. This lets you manage authorized users across all platforms in one place.
Channel-Level Allowlist
Each channel can also have its own allowlist, which forms a union with the global allowlist — meaning users in either the global or channel allowlist can access the bot:
{
  "channels": {
    "telegram": {
      "allowlist": [123456789, 111222333]
    },
    "whatsapp": {
      "allowlist": ["[email protected]"]
    }
  }
}
Channel-level allowlists only require the platform's user ID, making the format more concise.
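The union behaviour described above can be sketched as follows. This is an illustrative helper, not OpenClaw's actual internals; note that channel-level IDs may be numbers (Telegram) or strings (WhatsApp), so everything is compared as a string:

```typescript
// A user is authorized if they appear on either the global allowlist
// (platform + ID entries) or the channel-level allowlist (bare IDs).
function isAuthorized(
  globalUsers: Array<{ platform: string; id: string }>,
  channelIds: Array<string | number>,
  platform: string,
  userId: string | number,
): boolean {
  const id = String(userId);
  const onGlobal = globalUsers.some((u) => u.platform === platform && u.id === id);
  const onChannel = channelIds.some((c) => String(c) === id);
  return onGlobal || onChannel;
}
```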
Allowlist Modes
OpenClaw supports two allowlist modes:
Strict mode (strict): Only allowlisted users can interact. This is the default mode, suitable for private deployments.
Permissive mode (permissive): All users can interact, but allowlisted users enjoy higher priority and more feature permissions. For example, regular users may be limited to 10 messages per day, while allowlisted users have no limit.
{
  "security": {
    "allowlist": {
      "mode": "strict"
    }
  }
}
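The difference between the two modes comes down to how non-allowlisted users are treated. A minimal sketch, using the example quota of 10 messages per day for regular users in permissive mode (the function name and quota handling are illustrative):

```typescript
type AllowlistMode = "strict" | "permissive";

// Strict mode: non-allowlisted users get no quota (ignored entirely).
// Permissive mode: non-allowlisted users get a reduced daily quota.
// Allowlisted users are unlimited in both modes.
function dailyQuota(mode: AllowlistMode, onAllowlist: boolean): number {
  if (onAllowlist) return Infinity;
  return mode === "strict" ? 0 : 10;
}
```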
Private Message Pairing Mechanism
Private Message Pairing is a unique security feature of OpenClaw. It requires users to first complete identity verification through a private message with the bot before they can use AI features in group chats and other contexts.
Workflow
The pairing process works as follows:
- A user attempts to interact with the AI in a group chat for the first time
- The bot detects that the user is not paired and sends a private message to the user (or prompts them to message the bot directly)
- The user receives a one-time pairing code via private message
- The user replies with the pairing code in the private message to complete verification
- After successful verification, the user can use AI features normally in group chats
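The issue-and-verify steps above can be sketched like this. All names (`PendingCode`, `issueCode`, `verifyCode`) are illustrative, and the defaults mirror the configuration shown below (6-digit code, 300-second expiry, 3 attempts):

```typescript
interface PendingCode {
  code: string;
  expiresAt: number;     // epoch milliseconds
  attemptsLeft: number;
}

const pending = new Map<string, PendingCode>();

// Generate a zero-padded numeric one-time code and remember it for the user.
function issueCode(userId: string, codeLength = 6, expirySec = 300, maxAttempts = 3): string {
  const code = Math.floor(Math.random() * 10 ** codeLength)
    .toString()
    .padStart(codeLength, "0");
  pending.set(userId, {
    code,
    expiresAt: Date.now() + expirySec * 1000,
    attemptsLeft: maxAttempts,
  });
  return code;
}

// Check an attempt: fails if no code is pending, the code expired,
// or the attempt budget is used up. A correct code is single-use.
function verifyCode(userId: string, attempt: string): boolean {
  const entry = pending.get(userId);
  if (!entry || Date.now() > entry.expiresAt || entry.attemptsLeft <= 0) return false;
  entry.attemptsLeft -= 1;
  if (attempt !== entry.code) return false;
  pending.delete(userId);
  return true;
}
```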
Configuring Private Message Pairing
{
  "security": {
    "pairing": {
      "enabled": true,
      "codeExpiry": 300,
      "codeLength": 6,
      "maxAttempts": 3,
      "pairingMessage": "Please reply with the following pairing code within 5 minutes to activate the AI assistant: {code}"
    }
  }
}
- codeExpiry: the pairing code's validity period, in seconds
- codeLength: the number of digits in the code
- maxAttempts: the maximum number of verification attempts; once exceeded, the user must restart the pairing process
- pairingMessage: the template for the message sent to the user, where {code} is replaced with the actual pairing code
Pairing State Persistence
After successful pairing, the user's authentication state is persisted. Even if OpenClaw restarts, users do not need to pair again. Pairing records are stored in a local database, and you can manage them via commands:
openclaw pairing list # List all paired users
openclaw pairing revoke <id> # Revoke pairing for a specific user
openclaw pairing clear # Clear all pairing records
Group Chat Access Control
Security control in group chat scenarios is more complex, as it involves dual permissions at both the group and user levels.
Group Allowlist
Restrict the bot to work only in specific groups:
{
  "security": {
    "groups": {
      "allowedGroups": {
        "telegram": [-1001234567890],
        "whatsapp": ["[email protected]"],
        "discord": ["guild_id_here"]
      }
    }
  }
}
If the bot is added to a group not on the allowlist, it silently ignores all messages. You can also configure the bot to automatically leave unauthorized groups.
Group Role-Based Access
On platforms that support role systems (such as Discord and Telegram), you can control access based on users' group roles:
{
  "security": {
    "groups": {
      "roleBasedAccess": {
        "adminOnly": false,
        "allowedRoles": ["ai-user", "premium"]
      }
    }
  }
}
When adminOnly is set to true, only group administrators can use the bot. allowedRoles restricts AI features to users holding at least one of the listed roles (on platforms with role systems, such as Discord).
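Combining the two settings, a role gate might evaluate like this (an illustrative sketch; the `roleAllows` helper and its argument shapes are assumptions, not OpenClaw's API):

```typescript
// A user passes the gate when: (1) adminOnly is satisfied, and
// (2) if allowedRoles is non-empty, the user holds at least one listed role.
function roleAllows(
  config: { adminOnly: boolean; allowedRoles: string[] },
  user: { isAdmin: boolean; roles: string[] },
): boolean {
  if (config.adminOnly && !user.isAdmin) return false;
  if (config.allowedRoles.length === 0) return true;
  return user.roles.some((r) => config.allowedRoles.includes(r));
}
```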
Rate Limiting
Beyond access control, rate limiting is another important security measure. It prevents any single user from overusing AI resources:
{
  "security": {
    "rateLimit": {
      "enabled": true,
      "maxRequests": 30,
      "windowSeconds": 3600,
      "cooldownMessage": "You've reached your usage limit. Please try again later."
    }
  }
}
The above configuration limits each user to 30 AI interactions per hour. Rate limits are calculated independently per user but shared across channels — meaning if the same user uses 20 interactions on Telegram, they only have 10 remaining on Discord.
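One simple way to implement this behaviour is a fixed-window counter keyed by user only (not by user-plus-channel), which is what makes the quota shared across channels. A sketch under that assumption; the class and method names are illustrative:

```typescript
interface Window {
  start: number;  // window start, epoch milliseconds
  count: number;  // requests seen in this window
}

class RateLimiter {
  private windows = new Map<string, Window>();

  constructor(private maxRequests: number, private windowSeconds: number) {}

  // Returns true and records the request if the user is under the limit;
  // returns false once maxRequests is reached within the current window.
  tryAcquire(userId: string, now = Date.now()): boolean {
    const w = this.windows.get(userId);
    if (!w || now - w.start >= this.windowSeconds * 1000) {
      this.windows.set(userId, { start: now, count: 1 });
      return true;
    }
    if (w.count >= this.maxRequests) return false;
    w.count += 1;
    return true;
  }
}
```

Because the key is the user ID alone, a request from Telegram and a request from Discord by the same user both count against the same window.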
Content Safety
OpenClaw also provides basic content safety filtering. You can configure keyword filtering to intercept messages when user input or AI responses contain specific keywords:
{
  "security": {
    "contentFilter": {
      "enabled": true,
      "blockedPatterns": ["sensitive_word_1", "sensitive_word_2"],
      "action": "block_and_warn"
    }
  }
}
action supports two modes: block_silently (silent interception) and block_and_warn (intercept and notify the user).
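The filter logic amounts to a case-insensitive substring check plus a branch on the configured action. A minimal sketch (the `filterMessage` helper and its return shape are assumptions for illustration):

```typescript
type FilterAction = "block_silently" | "block_and_warn";

// Checks a message (user input or AI response) against the blocked
// patterns. When blocked, block_and_warn also produces a user-facing
// notice; block_silently just drops the message.
function filterMessage(
  text: string,
  blockedPatterns: string[],
  action: FilterAction,
): { blocked: boolean; reply?: string } {
  const lower = text.toLowerCase();
  const hit = blockedPatterns.some((p) => lower.includes(p.toLowerCase()));
  if (!hit) return { blocked: false };
  return action === "block_and_warn"
    ? { blocked: true, reply: "Your message was blocked by the content filter." }
    : { blocked: true };
}
```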
Security Configuration Best Practices
For production environments, the following combination of security measures is recommended:
- Enable the global allowlist in strict mode, allowing only known users
- Enable private message pairing for group chat scenarios as an extra verification layer
- Configure reasonable rate limits to prevent abuse
- Set up group allowlists to prevent the bot from being used in unknown groups
- Regularly review pairing records and usage logs
These security mechanisms can be flexibly combined. For internal team use, an allowlist plus rate limiting is usually sufficient. For public-facing services, enabling the full suite of security measures is recommended.