## Introduction
OpenRouter is a unified AI model gateway that lets you access virtually all major large language models with a single API Key, including Claude, GPT, Gemini, Llama, Mistral, and more. For users who prefer not to register separately with multiple providers, OpenRouter is an extremely convenient option. This tutorial explains how to configure OpenRouter in OpenClaw.
## Advantages of OpenRouter
| Feature | Description |
|---|---|
| One-stop access | A single API Key for 200+ models |
| Unified billing | All model costs settled in one place |
| Automatic failover | Switches to alternatives when a provider is down |
| No VPN needed | Some models are directly accessible (region-dependent) |
| Free models | Offers free quota for select open-source models |
## Step 1: Register for OpenRouter and Get an API Key

### 1.1 Create an Account
- Visit openrouter.ai
- Sign up with a Google account or email
- After registration, enter the Dashboard
### 1.2 Create an API Key
- Go to the Keys page
- Click Create Key
- Enter a name, e.g., `openclaw-prod`
- Copy the generated key

API Key format:

```
sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
### 1.3 Add Credits (Optional)
OpenRouter supports multiple payment methods. Some open-source models have free quotas, but commercial models like Claude and GPT require prepaid credits. We recommend starting with $5 for testing.
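To get a feel for how far $5 goes, here is a rough back-of-the-envelope calculation using the Claude Sonnet 4 rates listed in Step 3 ($3.00 input / $15.00 output per million tokens). The token counts per turn are illustrative assumptions; actual costs depend on prompt and response length.

```python
# Back-of-the-envelope cost estimate for a $5 test budget.
# Rates are the Claude Sonnet 4 prices from the table in Step 3.
def cost_usd(input_tokens: int, output_tokens: int,
             in_price: float = 3.00, out_price: float = 15.00) -> float:
    """Cost of one request, with prices given per million tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# An assumed typical chat turn: ~500 input tokens, ~300 output tokens.
per_turn = cost_usd(500, 300)
print(f"${per_turn:.4f} per turn")           # → $0.0060 per turn
print(f"~{int(5 / per_turn)} turns for $5")  # → ~833 turns for $5
```

Cheaper models such as Gemini 2.5 Flash stretch the same budget much further, which is why $5 is plenty for initial testing.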
## Step 2: Basic Configuration

### 2.1 Edit the Configuration File

```bash
nano ~/.config/openclaw/openclaw.json5
```
Add the OpenRouter configuration:
```json5
{
  models: {
    openrouter: {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "anthropic/claude-sonnet-4",
    }
  }
}
```
### 2.2 Set Environment Variables

```bash
# Add to ~/.bashrc
export OPENROUTER_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
source ~/.bashrc
```
### 2.3 Restart and Verify

```bash
openclaw restart
openclaw doctor
```
## Step 3: Available Models
The following mainstream models are accessible through OpenRouter:
| Model | OpenRouter Identifier | Input Price (per million tokens) | Output Price (per million tokens) |
|---|---|---|---|
| Claude Sonnet 4 | anthropic/claude-sonnet-4 | $3.00 | $15.00 |
| Claude Haiku 3.5 | anthropic/claude-3.5-haiku | $0.80 | $4.00 |
| GPT-4o | openai/gpt-4o | $2.50 | $10.00 |
| GPT-4o mini | openai/gpt-4o-mini | $0.15 | $0.60 |
| Gemini 2.5 Pro | google/gemini-2.5-pro | $1.25 | $10.00 |
| Gemini 2.5 Flash | google/gemini-2.5-flash | $0.15 | $0.60 |
| Llama 3.3 70B | meta-llama/llama-3.3-70b | $0.10 | $0.10 |
| DeepSeek V3 | deepseek/deepseek-chat | $0.14 | $0.28 |
| Mistral Large | mistralai/mistral-large | $2.00 | $6.00 |
| Qwen 2.5 72B | qwen/qwen-2.5-72b-instruct | $0.15 | $0.15 |
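All of these identifiers go through the same endpoint: OpenRouter exposes an OpenAI-compatible API at `https://openrouter.ai/api/v1/chat/completions`, so you can also call any model in the table directly, outside of OpenClaw. The helper below is an illustrative sketch using only the Python standard library:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for OpenRouter's OpenAI-compatible API."""
    body = json.dumps({
        "model": model,  # any identifier from the table above
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("anthropic/claude-sonnet-4", "Hello!")
# With a funded key, uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

This is handy for verifying your key and credits before wiring OpenRouter into OpenClaw.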
### 3.1 Free Models
OpenRouter provides free quota for certain models, suitable for testing and low-frequency use:
```json5
{
  models: {
    "openrouter-free": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "meta-llama/llama-3.3-70b:free", // Free tier identifier
    }
  }
}
```
**Note:** Free models have strict rate limits and are not suitable for high-frequency use.
## Step 4: Multi-Model Configuration
The core advantage of OpenRouter is the ability to flexibly switch between multiple models.
### 4.1 Assign Different Models to Different Channels
```json5
{
  models: {
    "router-premium": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "anthropic/claude-sonnet-4",
    },
    "router-standard": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "google/gemini-2.5-flash",
    },
    "router-budget": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "meta-llama/llama-3.3-70b",
    }
  },
  channels: {
    whatsapp: {
      model: "router-premium", // WhatsApp uses the strongest model
    },
    telegram: {
      model: "router-standard", // Telegram uses a mid-tier model
    },
    discord: {
      model: "router-budget", // Discord uses an economy model
    }
  }
}
```
### 4.2 Model Routing Strategy
OpenRouter supports automatic routing, which selects the most appropriate model based on the request content:
```json5
{
  models: {
    "router-auto": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "openrouter/auto", // Automatically selects the best model
      parameters: {
        temperature: 0.7,
      }
    }
  }
}
```
## Step 5: Failover Configuration
When a model provider experiences an outage, OpenRouter can automatically switch to a fallback model.
### 5.1 Configure a Fallback List
```json5
{
  models: {
    "router-resilient": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "anthropic/claude-sonnet-4",
      fallbackModels: [
        "openai/gpt-4o",
        "google/gemini-2.5-pro",
        "deepseek/deepseek-chat",
      ],
      retryOnFailure: true,
      maxRetries: 2,
    }
  }
}
```
### 5.2 How It Works

The failover trigger flow:

```
Request sent → Claude Sonnet 4
    ↓ Failed
Retry → GPT-4o
    ↓ Failed
Retry → Gemini 2.5 Pro
    ↓ Failed
Final retry → DeepSeek Chat
    ↓ Success
Return response
```
This configuration provides very high availability: even a complete outage at one provider will not interrupt your service, as long as at least one fallback remains reachable.
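The loop below sketches this fallback logic in Python. It is illustrative only: `call_model` is a hypothetical stand-in for the real API call, not an OpenClaw or OpenRouter function.

```python
# Illustrative failover loop: try each model in order, return the first
# success, and give up only after the whole chain has failed.
def complete_with_fallback(models, call_model):
    errors = []
    for model in models:
        try:
            return model, call_model(model)
        except RuntimeError as err:  # e.g. an upstream 503
            errors.append((model, str(err)))
    raise RuntimeError(f"all models failed: {errors}")

# Simulated outage: only the third model in the chain responds.
chain = ["anthropic/claude-sonnet-4", "openai/gpt-4o", "google/gemini-2.5-pro"]

def call_model(model):
    if model != "google/gemini-2.5-pro":
        raise RuntimeError("503 Model temporarily unavailable")
    return "Hello from the fallback!"

used, reply = complete_with_fallback(chain, call_model)
print(used)  # → google/gemini-2.5-pro
```

In practice OpenRouter performs this switching server-side; the sketch just shows the order of attempts.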
## Step 6: Cost Optimization

### 6.1 Set Budget Limits
You can set a total budget and daily limits in the OpenRouter Dashboard. Additionally, you can add a layer of control within the OpenClaw configuration:
```json5
{
  models: {
    openrouter: {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "google/gemini-2.5-flash",
      budget: {
        dailyLimit: 2.00, // Maximum daily spend of $2
        monthlyLimit: 30.00, // Maximum monthly spend of $30
        alertThreshold: 0.8, // Alert when reaching 80%
      }
    }
  }
}
```
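The intent behind these fields can be sketched as a small tracker: warn once spending crosses the alert threshold, and refuse requests that would exceed the hard limit. This is an illustration of the idea, not OpenClaw's actual accounting code.

```python
# Sketch of daily budget enforcement with an alert threshold.
class DailyBudget:
    def __init__(self, daily_limit: float, alert_threshold: float = 0.8):
        self.daily_limit = daily_limit
        self.alert_threshold = alert_threshold
        self.spent = 0.0

    def record(self, cost: float) -> str:
        if self.spent + cost > self.daily_limit:
            return "blocked"  # request refused, limit would be exceeded
        self.spent += cost
        if self.spent >= self.daily_limit * self.alert_threshold:
            return "alert"    # still allowed, but raise a warning
        return "ok"

budget = DailyBudget(daily_limit=2.00)
print(budget.record(1.50))  # → ok
print(budget.record(0.20))  # → alert   (spend reaches 80% of $2)
print(budget.record(0.50))  # → blocked (would exceed $2)
```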
### 6.2 Dynamic Model Selection by Task Complexity
A smart approach is to select models dynamically based on the type of task:
```json5
{
  models: {
    "router-smart": {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "google/gemini-2.5-flash", // Use the cheaper model by default
      complexModel: "anthropic/claude-sonnet-4", // Use a stronger model for complex tasks
      complexTriggers: [
        "code",
        "analysis",
        "translate a long text",
        "write an article",
      ]
    }
  }
}
```
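A minimal version of this trigger matching might look like the following. The trigger words and the `pick_model` helper are illustrative assumptions, not part of OpenClaw's API:

```python
# Route to a stronger model when the message mentions a complex-task
# trigger word; otherwise use the cheap default.
DEFAULT_MODEL = "google/gemini-2.5-flash"
COMPLEX_MODEL = "anthropic/claude-sonnet-4"
TRIGGERS = ("code", "analysis", "translate", "essay")  # example triggers

def pick_model(message: str) -> str:
    text = message.lower()
    return COMPLEX_MODEL if any(t in text for t in TRIGGERS) else DEFAULT_MODEL

print(pick_model("What's the weather today?"))         # → google/gemini-2.5-flash
print(pick_model("Please review this code for bugs"))  # → anthropic/claude-sonnet-4
```

Simple substring matching like this is crude but cheap; a production router would likely also consider message length and conversation context.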
## Step 7: Advanced Configuration Options

### 7.1 Custom HTTP Headers
Some scenarios may require passing additional HTTP headers:
```json5
{
  models: {
    openrouter: {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "anthropic/claude-sonnet-4",
      headers: {
        "HTTP-Referer": "https://your-site.com", // Helps OpenRouter track the source
        "X-Title": "OpenClaw Assistant", // Displayed in OpenRouter logs
      }
    }
  }
}
```
### 7.2 Exclude Specific Providers
If you prefer not to access models through certain providers:
```json5
{
  models: {
    openrouter: {
      provider: "openrouter",
      apiKey: "${OPENROUTER_API_KEY}",
      defaultModel: "anthropic/claude-sonnet-4",
      providerPreferences: {
        exclude: ["azure"], // Do not use Azure-provided models
        prefer: ["anthropic"], // Prefer direct Anthropic connection
      }
    }
  }
}
```
## Troubleshooting

### Insufficient Balance

```
Error: 402 Payment Required - Insufficient credits
```

Add credits in the OpenRouter Dashboard to resolve this.
### Model Unavailable

```
Error: 503 Model temporarily unavailable
```

This is typically a temporary issue with the upstream provider. If you have `fallbackModels` configured, the system will switch automatically; otherwise, wait a few minutes and retry.
### Slow Response Times

As a middleware layer, OpenRouter adds a small amount of latency (typically 100–300 ms). If latency is critical, consider connecting directly to the model provider.
## Summary

OpenRouter is a powerful model aggregation solution within the OpenClaw ecosystem. Its core value lies in simplifying multi-model management, providing failover, and unifying billing. For users who want to use multiple models flexibly without managing separate provider accounts and keys, OpenRouter is an excellent choice. Use it as either your primary or backup provider, combined with failover configuration, to run a highly available AI assistant service.