Introduction
OpenClaw provides multi-layered security mechanisms to protect your AI gateway. From access control to sandbox isolation, from authentication management to error handling, these security features form a complete defense system. This article systematically covers the configuration and usage of each security feature based on the official OpenClaw documentation.
Access Restrictions: Security Configuration in openclaw.json
OpenClaw's security policies are centrally managed in the openclaw.json configuration file. You can define who can access the service, how they access it, and the scope of their usage.
```json5
{
  security: {
    // Global access control toggle
    accessControl: {
      enabled: true,
      defaultPolicy: "deny", // Deny all unauthorized access by default
    }
  }
}
```
Setting `defaultPolicy` to `"deny"` is the recommended approach: only explicitly authorized users can interact with your AI assistant, preventing abuse from open access.
Allowlist Mechanism: Precise Interaction Control
The allowlist is OpenClaw's primary access control mechanism. You can configure allowed users by user ID, username, or user group.
```json5
{
  security: {
    allowlist: {
      // Configure per platform
      telegram: {
        userIds: [123456789, 987654321],
        usernames: ["alice", "bob"],
      },
      discord: {
        userIds: ["1100000000000000001"],
        roleIds: ["1100000000000000099"], // Authorize by role
      },
      wechat: {
        remarkNames: ["Zhang San", "Li Si"],
      }
    }
  }
}
```
The allowlist supports multiple matching dimensions, and different platforms can use their own unique identifiers. When a user sends a message, OpenClaw first checks whether the user is on the allowlist. Messages that fail the check are discarded immediately without consuming any API quota.
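The check described above can be sketched roughly as follows. This is a hypothetical simplification for illustration; the field names and actual OpenClaw internals may differ:

```python
# Hypothetical sketch of the allowlist check; field names mirror the
# config above but the logic is an illustrative assumption.
def is_allowed(platform: str, user: dict, allowlist: dict) -> bool:
    rules = allowlist.get(platform)
    if rules is None:
        return False  # no rules for this platform: deny by default
    if user.get("id") in rules.get("userIds", []):
        return True
    if user.get("username") in rules.get("usernames", []):
        return True
    # Discord-style role matching: any shared role grants access
    if set(user.get("roleIds", [])) & set(rules.get("roleIds", [])):
        return True
    return False
```

Because this check runs before any model request is assembled, a rejected message is dropped at zero API cost.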
DM Pairing Mechanism
In direct message (DM) scenarios, OpenClaw introduces a pairing mechanism to ensure security. This mechanism requires users to complete identity verification pairing during their first interaction before they can have normal conversations.
The core logic of the pairing flow is: after a user sends a DM to the bot, the system verifies whether the user has been paired. Unpaired users must complete initial binding through a preset verification method (such as a passphrase or invitation code). Once pairing is complete, subsequent interactions do not require re-verification.
This design is especially suitable for scenarios where you need to limit DM access without manually adding each user to the allowlist. Administrators can flexibly control admission rules by configuring pairing policies.
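The pairing flow can be sketched as a small state check. The passphrase mechanism and in-memory storage here are assumptions for illustration, not OpenClaw's actual implementation:

```python
# Hypothetical sketch of the DM pairing flow described above.
paired_users: set[str] = set()  # would be persisted in a real deployment
PAIRING_SECRET = "open-sesame"  # e.g. a preset passphrase or invite code

def handle_dm(user_id: str, text: str) -> str:
    if user_id in paired_users:
        return "chat"               # already paired: normal conversation
    if text.strip() == PAIRING_SECRET:
        paired_users.add(user_id)   # one-time binding
        return "paired"
    return "rejected"               # unpaired and verification failed
```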
Group Chat Rules and Security
Group chat environments are more complex than DMs, and OpenClaw provides dedicated group chat security rules.
```json5
{
  channels: {
    telegram: {
      groupRules: {
        // Trigger mode in group chats
        triggerMode: "mention", // Only respond when @mentioned
        allowedGroups: ["-100123456789"], // Restrict to specific groups
        adminOnly: false, // Whether only admins can use it
        cooldown: 5, // Minimum response interval per group (seconds)
      }
    }
  }
}
```
Setting `triggerMode` to `"mention"` prevents the bot from responding to every message in the group, saving API calls and preventing information leakage. `allowedGroups` restricts the bot to only operate in specified groups, further narrowing the attack surface. The per-group cooldown also effectively mitigates spam-like abuse within groups.
Multi-Account Authentication: Failover and Cooldown
OpenClaw supports configuring multiple API accounts for the same AI provider, enabling automatic failover. When the primary account encounters authentication errors or rate limits, the system automatically switches to a backup account to continue service.
```json5
{
  providers: {
    openai: {
      accounts: [
        {
          name: "primary",
          apiKey: "${OPENAI_KEY_1}",
          priority: 1,
        },
        {
          name: "backup",
          apiKey: "${OPENAI_KEY_2}",
          priority: 2,
        }
      ],
      failover: {
        enabled: true,
        cooldownSeconds: 60, // Cooldown period after account failure
        maxRetries: 3, // Maximum retry attempts
      }
    }
  }
}
```
When an account triggers rate limiting (429 error) or authentication failure (401/403 error), it enters a cooldown period. During cooldown, requests are routed to other available accounts. After cooldown ends, the account rejoins the rotation pool. This mechanism ensures high availability while avoiding over-reliance on a single account.
Error Handling Strategy
OpenClaw has explicit handling logic for authentication errors and rate limits:
- Authentication errors (401/403): Mark the account as unavailable, immediately trigger failover, and log an alert.
- Rate limiting (429): Read the `Retry-After` value from the response headers to set the cooldown period, automatically switching to backup accounts during this time.
- Server errors (5xx): Brief cooldown followed by retry; if failures persist, switch accounts.
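The three rules above amount to a mapping from HTTP status code to action, which can be sketched as follows (an illustrative simplification; the default values are assumptions):

```python
# Hypothetical sketch of the error-handling strategy listed above.
def classify_error(status: int, headers: dict) -> dict:
    if status in (401, 403):
        # Auth failure: take the account out of rotation and alert.
        return {"action": "failover", "cooldown": None, "alert": True}
    if status == 429:
        # Honor the server-provided Retry-After as the cooldown
        # (assumed default of 60s when the header is absent).
        retry_after = int(headers.get("Retry-After", 60))
        return {"action": "cooldown", "cooldown": retry_after, "alert": False}
    if 500 <= status < 600:
        # Brief backoff, then retry; persistent failures switch accounts.
        return {"action": "retry", "cooldown": 5, "alert": False}
    return {"action": "none", "cooldown": None, "alert": False}
```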
Sandbox Isolation: Tool and Execution Environment Security
For scenarios that integrate tool use capabilities, OpenClaw ensures secure execution through sandbox mechanisms.
Path and Tool Constraints
Sandbox configuration restricts all tool file access to a specified root directory, preventing AI from accessing sensitive system files.
```json5
{
  sandbox: {
    enabled: true,
    roots: ["/data/openclaw/workspace"], // Tools can only access this directory
    allowedTools: ["web_search", "calculator", "file_reader"],
  }
}
```
The tool list uses an allowlist approach: only tools explicitly listed in `allowedTools` can be invoked. Path constraints ensure that even when file operation tools are enabled, their access scope is strictly limited to the sandbox root directory.
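A path constraint of this kind is typically enforced by resolving the requested path and checking it against the configured roots, so that `../` traversal cannot escape the sandbox. A minimal sketch, assuming this general approach rather than OpenClaw's actual implementation:

```python
from pathlib import Path

# Hypothetical sketch of the sandbox path constraint: resolve the
# requested path and verify it stays under one of the configured roots.
def path_allowed(requested: str, roots: list[str]) -> bool:
    target = Path(requested).resolve()  # normalizes "../" components
    for root in roots:
        root_path = Path(root).resolve()
        if target == root_path or root_path in target.parents:
            return True
    return False
```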
Command Execution Isolation
When the AI needs to execute system commands, OpenClaw runs them in a containerized environment completely isolated from the host. All invocations of exec-type tools are executed in independent container instances, leaving no trace after container destruction.
Browser Bridging
Browser tools within the sandbox access external resources through a bridge URL rather than making requests directly from the host network. This proxy isolation layer effectively prevents SSRF (Server-Side Request Forgery) attacks while making network traffic auditable and controllable.
Special Security Measures for the Anthropic Provider
When using Anthropic (Claude) as the AI provider, OpenClaw implements additional security checks:
- Magic string clearing rejection: The system detects and blocks requests that attempt to clear context or system prompts through special strings, preventing prompt injection attacks.
- Consecutive role validation: OpenClaw validates that the role sequence in conversation messages is legitimate, ensuring no consecutive messages with the same role appear. This is both an Anthropic API format requirement and a security barrier against message forgery.
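The alternation rule in the second point can be sketched as a single pass over the message list. This is an illustrative check of the constraint as the article states it, not OpenClaw's actual validator:

```python
# Hypothetical sketch of consecutive-role validation: reject any message
# sequence where two adjacent messages share the same role.
def validate_roles(messages: list[dict]) -> bool:
    prev = None
    for msg in messages:
        role = msg.get("role")
        if role not in ("user", "assistant"):
            return False  # unexpected role
        if role == prev:
            return False  # two consecutive messages with the same role
        prev = role
    return True
```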
Security Configuration Best Practices
- Principle of least privilege: Set the default policy to deny and only open necessary allowlist entries.
- Manage keys with environment variables: Reference all API keys using `${ENV_VAR}` instead of hardcoding them in configuration files.
- Enable the sandbox: If using tool calling features, always enable sandbox isolation.
- Configure multi-account failover: Avoid single points of failure while distributing rate limiting pressure.
- Regularly review the allowlist: Remove user authorizations that are no longer needed to keep the access list lean.
- Use mention mode for group chats: Reduce unnecessary API calls and information exposure.
Summary
OpenClaw's security system covers the entire chain from user admission to execution environment. By properly configuring allowlists, DM pairing, group chat rules, multi-account authentication, and sandbox isolation, you can build an AI gateway that is both open and secure. It's recommended to complete these security configurations early in deployment rather than after an incident. The cost of security prevention is far lower than the cost of incident recovery.