Choosing the right AI model provider is a key step in building an intelligent assistant. OpenClaw, as a self-hosted AI assistant platform, currently supports 14 model providers, covering the full spectrum from cloud-based commercial models to local open-source models. This article systematically covers the configuration methods for all available providers, as well as advanced features like multi-account authentication and failover.
## Supported Providers Overview
OpenClaw currently supports the following 14 providers:
| Provider | Type | Typical Use |
|---|---|---|
| Anthropic | Cloud commercial | Claude series, strong reasoning |
| OpenAI | Cloud commercial | GPT series, excellent general capabilities |
| Amazon Bedrock | Cloud hosted | AWS ecosystem integration |
| OpenRouter | Model gateway | Access multiple providers with one key |
| Ollama | Local inference | Run open-source models for free |
| GLM | Cloud commercial | Zhipu AI, excellent Chinese capabilities |
| Qwen | Cloud commercial | Alibaba Tongyi Qianwen series |
| MiniMax | Cloud commercial | Hailuo AI, strong long-text capabilities |
| Moonshot AI | Cloud commercial | Kimi, long-context processing |
| Xiaomi | Cloud commercial | Xiaomi MiLM series |
| Venice AI | Privacy inference | No-log, privacy-first inference |
| OpenCode Zen | Cloud commercial | Specialized for code generation |
| Z.AI | Cloud commercial | General multimodal model |
| Deepgram | Speech-to-text | Audio transcription service |
The first 13 providers offer large language model services, while Deepgram is specifically for speech-to-text (transcription) functionality.
## Quick Start: Configure via the `onboard` Command

For new users, the simplest approach is the interactive onboarding command:

```
openclaw onboard
```
This command guides you step by step through provider selection and API key configuration. The system automatically validates your key and generates the corresponding configuration file.
## Default Model Configuration
After completing onboard, OpenClaw generates default model settings in the configuration file. The core configuration structure is:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-5"
      }
    }
  }
}
```
The `primary` field specifies the main model in the format `provider/model-name`. OpenClaw's default recommendation is Anthropic's Claude Opus 4.5, one of the strongest all-around options currently available.
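The `provider/model-name` format splits at the first slash. The helper below is a hypothetical illustration of that parsing rule, not part of OpenClaw's actual code; note that the model portion itself may contain further slashes (e.g. for gateway-routed models), which is why only the first slash separates provider from model:

```python
def parse_model_id(identifier: str) -> tuple[str, str]:
    """Split a 'provider/model-name' identifier at the FIRST slash.

    Hypothetical helper for illustration. maxsplit=1 keeps any
    remaining slashes inside the model name.
    """
    provider, model = identifier.split("/", 1)
    return provider, model
```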
You can also switch to other models based on your needs:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai/gpt-4o"
      }
    }
  }
}
```
## Provider-Agnostic Model Switching
OpenClaw uses a provider-agnostic model switching mechanism. This means you can switch between models from different providers by simply changing the model identifier in the configuration, without modifying any business logic.
For example, switching from Anthropic to a local Ollama model requires changing just one line:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.1"
      }
    }
  }
}
```
All providers share a unified interface abstraction layer, so application-level code requires no changes.
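Conceptually, such an abstraction layer can be pictured as a minimal interface that every provider adapter implements. The sketch below is purely illustrative; the names `ChatModel`, `complete`, and `EchoModel` are assumptions for this example, not OpenClaw's actual API:

```python
from typing import Protocol


class ChatModel(Protocol):
    """Hypothetical provider-agnostic interface: every adapter
    (Anthropic, OpenAI, Ollama, ...) would expose the same method."""

    def complete(self, prompt: str) -> str: ...


class EchoModel:
    """Stand-in adapter used here purely for demonstration."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def answer(model: ChatModel, question: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers never touches this function.
    return model.complete(question)
```

Because application code is written against the interface rather than a concrete provider, switching the `primary` model identifier is the only change needed.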
## Privacy-First Option: Venice AI
If you have strict data privacy requirements, Venice AI is worth considering. Venice offers no-log inference services — your conversation data is neither stored nor used for training.
Currently available Venice AI models include:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "venice/llama-3.3-70b"
      }
    }
  }
}
```
Higher-end models are also supported:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "venice/claude-opus-45"
      }
    }
  }
}
```
Venice is suitable for sensitive business scenarios such as medical consultation and legal documents that require high levels of privacy protection.
## Expanding Provider Access via OpenRouter
Beyond the 14 natively supported providers, you can reach additional model vendors indirectly through OpenRouter, including:
- xAI — Grok series models
- Groq — Ultra-low latency inference acceleration
- Mistral — Leading European open-source models
With a single OpenRouter API key, you can uniformly access models from all these providers without registering and managing multiple keys separately.
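A gateway-routed model is referenced with an `openrouter/`-prefixed identifier, where the rest of the string is the model's OpenRouter slug. The exact slug below is an illustrative assumption, not a verified identifier:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/mistralai/mistral-large"
      }
    }
  }
}
```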
## Multi-Account Authentication and Failover
In production environments, relying on a single API account carries clear risks. OpenClaw supports configuring multiple authentication profiles for the same provider, enabling automatic failover.
### Configuring Multiple Accounts
```json
{
  "providers": {
    "anthropic": {
      "profiles": [
        {
          "name": "primary",
          "apiKey": "sk-ant-xxxxx-primary"
        },
        {
          "name": "backup",
          "apiKey": "sk-ant-xxxxx-backup"
        }
      ]
    }
  }
}
```
### Failover and Cooldown Tracking
When the primary profile's API calls fail (due to rate limiting, insufficient balance, or service downtime), OpenClaw automatically switches to the next available profile. The system has a built-in cooldown tracking mechanism:
- A profile is marked as cooling down after failure
- The profile won't be selected during the cooldown period
- It automatically becomes available again after the cooldown expires
- Multiple profiles rotate in sequence, ensuring service continuity
This mechanism is especially valuable in high-concurrency or long-running scenarios, effectively preventing the entire service from going down due to a single account issue.
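The cooldown rotation described above can be sketched in a few lines. This is a minimal illustration of the general technique, not OpenClaw's actual implementation; the class name, method names, and the 60-second default are all assumptions:

```python
import time


class ProfileRotator:
    """Minimal sketch of cooldown-based profile failover (illustrative
    only, not OpenClaw's real code). Profiles are tried in priority
    order; a failed profile is skipped until its cooldown expires."""

    def __init__(self, profiles, cooldown_seconds=60, clock=time.monotonic):
        self.profiles = list(profiles)   # profile names, highest priority first
        self.cooldown = cooldown_seconds
        self.clock = clock               # injectable for testing
        self.cooling_until = {}          # profile -> timestamp when usable again

    def pick(self):
        """Return the first profile not currently cooling down, or None."""
        now = self.clock()
        for p in self.profiles:
            if self.cooling_until.get(p, 0) <= now:
                return p
        return None

    def mark_failed(self, profile):
        """Put a failed profile on cooldown so pick() skips it."""
        self.cooling_until[profile] = self.clock() + self.cooldown
```

On each failed call the caller invokes `mark_failed` and retries with whatever `pick` returns next, so a single rate-limited account degrades gracefully instead of taking the whole service down.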
## Community Tool: Claude Max API Proxy
For users who have already subscribed to the Claude Max plan, the community has developed a tool called Claude Max API Proxy. It converts your Claude Max subscription into a compatible API interface, allowing OpenClaw to call Claude models directly through your Max subscription without purchasing additional API credits.
This is a community-maintained third-party tool — please evaluate its stability and security before use.
## Speech-to-Text: Deepgram
Among the 14 providers, Deepgram is unique — it doesn't offer a large language model but instead focuses on speech-to-text. If your OpenClaw assistant needs to handle voice messages, you can configure Deepgram as the transcription service provider:
```json
{
  "transcription": {
    "provider": "deepgram",
    "apiKey": "your-deepgram-api-key"
  }
}
```
## Selection Recommendations
With 14 providers to choose from, how do you decide? Here are some practical guidelines:
- Best quality: Choose `anthropic/claude-opus-4-5` or `openai/gpt-4o`
- Privacy protection: Choose Venice AI models
- Zero-cost operation: Choose Ollama for local open-source models
- Flexible multi-model switching: Choose OpenRouter as a unified gateway
- Chinese language optimization: Choose GLM (Zhipu) or Qwen (Tongyi Qianwen)
- Voice capabilities: Additionally configure Deepgram as the transcription service
Regardless of which provider you choose, OpenClaw's provider-agnostic architecture ensures you can switch at any time without being locked into a single platform. It's recommended to start with the default Anthropic configuration and gradually add more providers and failover settings as your needs evolve.