Why Configure Models Per Channel
Different chat scenarios have different AI model requirements. A technical support channel may need the most powerful reasoning model to answer complex questions, while a casual chat channel may only need a fast and cost-effective model. Similarly, channels serving Chinese users may be better suited to Chinese-optimized models like Qwen or GLM, while international channels may prefer Claude or GPT.
OpenClaw supports configuring different AI models per channel (or per chat platform), allowing you to choose the most appropriate model for each channel's actual needs. This flexibility is one of the core advantages of OpenClaw's provider-agnostic architecture.
Basic Configuration Structure
OpenClaw's model configuration operates on two levels: global defaults and channel-level overrides. Global defaults are defined through agents.defaults, while channel-level configurations override settings for specific channels.
Global Default Configuration
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-5"
      }
    }
  }
}
This sets the default model for all channels. Any channel without its own configuration falls back to this value.
Per-Channel Overrides
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-5"
      }
    },
    "channels": {
      "discord-tech-support": {
        "model": {
          "primary": "anthropic/claude-opus-4-5",
          "fallback": "openai/gpt-4o"
        }
      },
      "discord-casual-chat": {
        "model": {
          "primary": "openai/gpt-4o-mini"
        }
      },
      "telegram-cn-group": {
        "model": {
          "primary": "qwen/qwen-max",
          "fallback": "glm/glm-4"
        }
      },
      "slack-dev-team": {
        "model": {
          "primary": "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"
        }
      }
    }
  }
}
In this configuration:
- Technical support channel uses the most powerful Claude Opus model, ensuring complex questions get high-quality answers.
- Casual chat channel uses the lightweight GPT-4o-mini, reducing costs while maintaining basic conversation quality.
- Chinese group uses Qwen, leveraging its Chinese comprehension strengths, with GLM as a fallback.
- Development team channel uses Claude through Bedrock, benefiting from AWS enterprise security guarantees.
Per-Platform Configuration
In addition to per-channel configuration, you can also configure by entire chat platform:
{
  "agents": {
    "platforms": {
      "discord": {
        "model": {
          "primary": "anthropic/claude-opus-4-5"
        }
      },
      "telegram": {
        "model": {
          "primary": "qwen/qwen-max"
        }
      },
      "slack": {
        "model": {
          "primary": "openai/gpt-4o"
        }
      }
    }
  }
}
The configuration priority from highest to lowest is: Channel-level > Platform-level > Global default.
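This priority order can be sketched as a small lookup helper. This is a minimal illustration of the resolution rule described above, not OpenClaw's actual internal resolver; the `resolveModel` function and its types are hypothetical.

```typescript
// Hypothetical sketch of the lookup order: channel > platform > global default.
type ModelConfig = { primary: string; fallback?: string };

interface AgentsConfig {
  defaults?: { model?: ModelConfig };
  platforms?: Record<string, { model?: ModelConfig }>;
  channels?: Record<string, { model?: ModelConfig }>;
}

function resolveModel(
  agents: AgentsConfig,
  channel: string,
  platform: string
): ModelConfig | undefined {
  // The first level that defines a model wins.
  return (
    agents.channels?.[channel]?.model ??
    agents.platforms?.[platform]?.model ??
    agents.defaults?.model
  );
}

const agents: AgentsConfig = {
  defaults: { model: { primary: "anthropic/claude-opus-4-5" } },
  platforms: { telegram: { model: { primary: "qwen/qwen-max" } } },
  channels: {
    "discord-casual-chat": { model: { primary: "openai/gpt-4o-mini" } },
  },
};

// A channel with its own config uses it; a Telegram channel without one
// inherits the platform setting; anything else gets the global default.
const casual = resolveModel(agents, "discord-casual-chat", "discord");
const telegramGroup = resolveModel(agents, "some-telegram-group", "telegram");
const other = resolveModel(agents, "unknown-channel", "slack");
```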
Practical Use Cases
Scenario 1: Multilingual Community
If you operate a multilingual community, you can assign models by language:
- Chinese channels use qwen/qwen-max or glm/glm-4.
- English channels use anthropic/claude-opus-4-5.
- Japanese channels use openai/gpt-4o.
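Under the channel-override structure shown earlier, such a language-based assignment might look like the following (the channel names are illustrative, not part of any real deployment):

```json
{
  "agents": {
    "channels": {
      "discord-cn-general": {
        "model": {
          "primary": "qwen/qwen-max",
          "fallback": "glm/glm-4"
        }
      },
      "discord-en-general": {
        "model": {
          "primary": "anthropic/claude-opus-4-5"
        }
      },
      "discord-jp-general": {
        "model": {
          "primary": "openai/gpt-4o"
        }
      }
    }
  }
}
```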
Scenario 2: Cost Tiering
Assign different-cost models based on channel importance and usage frequency:
- VIP user channels use flagship models (e.g., Claude Opus).
- Regular user channels use cost-effective models (e.g., GPT-4o-mini).
- Test channels use low-cost models or local Ollama models.
Scenario 3: Feature Matching
Choose models based on channel functional requirements:
- Code assistant channels use models that excel at programming.
- Creative writing channels use minimax/abab6.5s-chat.
- Privacy-sensitive channels use venice/llama-3.3-70b.
- Document processing channels use long-context models such as moonshot/moonshot-v1-128k.
Independent Failover Per Channel
Each channel's model configuration can have its own independent failover chain:
{
  "agents": {
    "channels": {
      "important-channel": {
        "model": {
          "primary": "anthropic/claude-opus-4-5",
          "fallback": "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
          "fallback2": "openai/gpt-4o"
        }
      },
      "budget-channel": {
        "model": {
          "primary": "openai/gpt-4o-mini",
          "fallback": "ollama/llama3.2"
        }
      }
    }
  }
}
Important channels are configured with three-level failover for maximum availability, while budget-constrained channels use a local Ollama model as a fallback.
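The failover behavior amounts to walking the chain in order until a model responds. The sketch below illustrates that logic under stated assumptions: `callModel` is a hypothetical stand-in for a real provider call, and OpenClaw's actual failover implementation may differ.

```typescript
// Hypothetical sketch of per-channel failover: try each configured model
// in order, returning the first successful response.
type ModelConfig = { primary: string; fallback?: string; fallback2?: string };

async function completeWithFailover(
  model: ModelConfig,
  callModel: (modelId: string, prompt: string) => Promise<string>,
  prompt: string
): Promise<string> {
  // Build the ordered chain, skipping any unset fallback slots.
  const chain = [model.primary, model.fallback, model.fallback2].filter(
    (m): m is string => m !== undefined
  );
  let lastError: unknown;
  for (const modelId of chain) {
    try {
      return await callModel(modelId, prompt);
    } catch (err) {
      lastError = err; // this model failed; try the next one in the chain
    }
  }
  // Every model in the chain failed; surface the last error.
  throw lastError;
}
```

A three-model chain like `important-channel` above only pays the fallback latency when the primary actually errors; on the happy path it behaves exactly like a single-model configuration.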
Dynamic Model Switching
OpenClaw's provider-agnostic architecture means you can modify a channel's model configuration at any time without restarting the service. When you update a channel's model setting in openclaw.json, new requests will automatically use the updated model.
This flexibility allows you to:
- Temporarily switch to a more affordable model during provider promotions.
- Quickly try out new models upon release without affecting other channels.
- Dynamically adjust model assignments based on real-time cost monitoring.
Configuration Validation
After completing channel-level model configuration, it's recommended to send test messages in each channel configured with a specific model to verify:
- Whether each channel correctly uses the designated model.
- Whether the failover chain works as expected.
- Whether the response quality of different models meets the channel's requirements.
Reviewing routing information in the OpenClaw logs can confirm which provider and model each request actually used.
By configuring different models per channel, you can fully leverage OpenClaw's multi-provider architecture to choose the optimal model solution for each use case.