Introduction
OpenAI's GPT series is one of the most widely used families of large language models today. OpenClaw provides native support for the entire OpenAI lineup, including GPT-4, GPT-4o, GPT-4o mini, and the latest o3 reasoning model. This guide will walk you through the complete configuration process from scratch.
Prerequisites
Before getting started, make sure you have the following:
| Requirement | Details |
|---|---|
| OpenClaw installed | Run openclaw doctor to verify the installation |
| Node.js 22+ | Runtime dependency for OpenClaw |
| OpenAI account | An account with a linked payment method |
| Network access | Ability to reach the OpenAI API (some regions may require a proxy) |
Step 1: Obtain Your OpenAI API Key
1.1 Log In to the OpenAI Platform
Visit platform.openai.com and sign in to your account.
1.2 Create an API Key
Navigate to the API Keys page and click Create new secret key:
- Name: openclaw-production (use a descriptive name)
- Permissions: All (or select as needed)
Once created, copy the key immediately and store it securely. The API Key is only displayed once and follows this format:
```
sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
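If you store keys in scripts or config templates, a quick format sanity check can catch copy-paste truncation early. The sketch below is an assumption based only on the `sk-proj-` pattern shown above; OpenAI does not publish a formal key grammar, so treat the regex as illustrative:

```python
import re

# Rough shape of the key format shown above; the exact pattern is an
# assumption (OpenAI does not document a formal key grammar).
KEY_PATTERN = re.compile(r"^sk-(proj-)?[A-Za-z0-9_-]{20,}$")

def looks_like_openai_key(key: str) -> bool:
    """Return True if the string superficially matches an OpenAI API key."""
    return bool(KEY_PATTERN.match(key.strip()))

print(looks_like_openai_key("sk-proj-" + "x" * 40))  # True
print(looks_like_openai_key("not-a-key"))            # False
```

A passing check only means the string is key-shaped; `openclaw doctor` (Step 2.3) is what actually confirms the key is valid.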
1.3 Set Spending Limits
It is strongly recommended to set a monthly spending cap under Billing → Usage limits:
- Hard limit: $50 (adjust based on your budget)
- Soft limit: $40 (triggers an email alert as you approach the cap)
Step 2: Configure OpenAI in OpenClaw
2.1 Edit the Configuration File
Open the OpenClaw configuration file:
```bash
nano ~/.config/openclaw/openclaw.json5
```
Add the OpenAI configuration under the models section:
```json5
{
  // OpenClaw main configuration file
  models: {
    openai: {
      provider: "openai",
      apiKey: "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      defaultModel: "gpt-4o",
      baseUrl: "https://api.openai.com/v1", // Default value, usually no need to change
    }
  }
}
```
2.2 Use Environment Variables (Recommended)
For better security, pass the API Key via environment variables rather than hardcoding it in the configuration file:
```bash
# Add to ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
Then reference the environment variable in the configuration file:
```json5
{
  models: {
    openai: {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o",
    }
  }
}
```
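The `${OPENAI_API_KEY}` placeholder is expanded from the environment when the config is loaded. As a minimal sketch of that mechanism (hypothetical — OpenClaw's actual loader may handle missing variables differently), the substitution amounts to:

```python
import os
import re

def expand_env_placeholders(value: str) -> str:
    """Replace ${NAME} placeholders with environment variable values.

    Illustrative sketch only; a real config loader may instead raise an
    error when the variable is unset.
    """
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),  # leave unknown vars as-is
        value,
    )

os.environ["OPENAI_API_KEY"] = "sk-proj-demo"
print(expand_env_placeholders("${OPENAI_API_KEY}"))  # sk-proj-demo
```

The practical upshot: the shell that launches OpenClaw must have the variable exported, or the placeholder will not resolve to your key.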
2.3 Restart OpenClaw to Apply Changes
```bash
openclaw restart
```
Verify the configuration:
```bash
openclaw doctor
```
If the output shows `✓ OpenAI connection OK`, the setup is complete.
Step 3: Model Selection Guide
OpenAI currently offers several models, each with distinct characteristics:
| Model | Highlights | Input Price (per million tokens) | Output Price (per million tokens) | Recommended Use Case |
|---|---|---|---|---|
| gpt-4o | Flagship multimodal model | $2.50 | $10.00 | General conversation, image understanding |
| gpt-4o-mini | Lightweight and efficient | $0.15 | $0.60 | Simple tasks, high concurrency |
| gpt-4 | Classic powerful model | $30.00 | $60.00 | Complex reasoning (expensive) |
| o3 | Reasoning-enhanced model | $10.00 | $40.00 | Math, coding, logical reasoning |
| o3-mini | Lightweight reasoning model | $1.10 | $4.40 | Balance between reasoning and cost |
3.1 Configuring Multiple Models
You can set up multiple OpenAI models and assign different models to different channels:
```json5
{
  models: {
    "openai-main": {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o",
    },
    "openai-lite": {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o-mini",
    },
    "openai-reasoning": {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "o3",
    }
  },
  channels: {
    telegram: {
      model: "openai-main", // Telegram uses GPT-4o
    },
    discord: {
      model: "openai-lite", // Discord uses the cheaper mini model
    }
  }
}
```
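The routing this config implies — each channel's `model` key points at a named model entry, whose `defaultModel` is what actually gets called — can be sketched as a two-step lookup. This is a hypothetical illustration of the mechanism, not OpenClaw's actual resolution code:

```python
# Channel-to-model routing implied by the config above (sketch only).
CONFIG = {
    "models": {
        "openai-main": {"defaultModel": "gpt-4o"},
        "openai-lite": {"defaultModel": "gpt-4o-mini"},
        "openai-reasoning": {"defaultModel": "o3"},
    },
    "channels": {
        "telegram": {"model": "openai-main"},
        "discord": {"model": "openai-lite"},
    },
}

def resolve_model(channel: str, fallback: str = "openai-main") -> str:
    """Return the concrete model name a channel's messages would use."""
    entry = CONFIG["channels"].get(channel, {}).get("model", fallback)
    return CONFIG["models"][entry]["defaultModel"]

print(resolve_model("telegram"))  # gpt-4o
print(resolve_model("discord"))   # gpt-4o-mini
```

The fallback behavior for channels without an explicit `model` key is an assumption; check OpenClaw's documentation for its actual default.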
Step 4: Parameter Tuning
4.1 Temperature
Temperature controls the randomness of the output, ranging from 0 to 2:
```json5
{
  models: {
    openai: {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o",
      parameters: {
        temperature: 0.7, // Default, balances creativity and consistency
        // temperature: 0, // Fully deterministic, ideal for code generation
        // temperature: 1.2, // More creative, suitable for writing
      }
    }
  }
}
```
Recommended temperature values for common scenarios:
| Scenario | Recommended Temperature | Notes |
|---|---|---|
| Code generation | 0 - 0.2 | Requires precise, consistent output |
| General conversation | 0.5 - 0.8 | Natural with some variation |
| Creative writing | 0.8 - 1.2 | More diverse expression |
| Data extraction | 0 | Requires strict format adherence |
4.2 Token Limits
Control the maximum length of a single response:
```json5
parameters: {
  temperature: 0.7,
  maxTokens: 4096, // Maximum output tokens per response
  // GPT-4o supports up to 16384 output tokens
  // o3 supports up to 100000 output tokens
}
```
4.3 System Prompt
Define the model's role and behavioral guidelines:
```json5
{
  models: {
    openai: {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o",
      systemPrompt: "You are a friendly AI assistant named Xiaozhi. You provide concise and accurate answers in Chinese.",
      parameters: {
        temperature: 0.7,
        maxTokens: 4096,
      }
    }
  }
}
```
Step 5: Azure OpenAI Configuration
If you are using Azure-hosted OpenAI services, the configuration differs slightly:
```json5
{
  models: {
    "azure-openai": {
      provider: "openai",
      apiKey: "${AZURE_OPENAI_API_KEY}",
      baseUrl: "https://your-resource.openai.azure.com/openai/deployments/your-deployment",
      defaultModel: "gpt-4o", // The deployment name you created in Azure
      azureApiVersion: "2024-12-01-preview",
      parameters: {
        temperature: 0.7,
        maxTokens: 4096,
      }
    }
  }
}
```
Advantages of Azure OpenAI:
- Enterprise-grade SLA guarantees
- Data stays within your designated Azure region
- Supports private network deployments
- Comprehensive compliance certifications
Step 6: Cost Estimation
Here are some monthly cost references for common usage patterns (using GPT-4o):
| Usage Level | Daily Messages | Avg Tokens per Message | Estimated Monthly Cost |
|---|---|---|---|
| Light personal use | 20 | ~1000 | $3 - $5 |
| Heavy personal use | 100 | ~1500 | $15 - $25 |
| Small team | 500 | ~1200 | $50 - $80 |
| Medium scale | 2000 | ~1000 | $150 - $250 |
If you are on a tight budget, switching daily conversations to gpt-4o-mini can reduce costs by approximately 90%.
Step 7: Verification and Testing
After completing the configuration, verify with the following steps:
```bash
# Restart the service
openclaw restart

# Check connection status
openclaw doctor

# View real-time logs
openclaw logs
```
Then send a test message through any connected channel to confirm the model responds correctly.
Troubleshooting
Invalid API Key
```
Error: 401 Unauthorized - Invalid API key
```
Solution: Verify that the API Key was copied correctly, starts with sk-, and has been enabled in the OpenAI dashboard.
Quota Exceeded
```
Error: 429 Rate limit exceeded
```
Solution: Check the usage limits in the OpenAI dashboard and confirm a payment method is linked. New accounts may have lower rate limits that automatically increase over time.
Network Connection Issues
If you are in a region where the OpenAI API is not directly accessible, configure a proxy:
```json5
{
  proxy: {
    url: "http://127.0.0.1:7890", // Your proxy address
  },
  models: {
    openai: {
      provider: "openai",
      apiKey: "${OPENAI_API_KEY}",
      defaultModel: "gpt-4o",
    }
  }
}
```
Summary
This guide covered the complete process of configuring OpenAI GPT models in OpenClaw. The key steps include obtaining an API Key, editing the configuration file, selecting the right model, and tuning parameters. For most users, GPT-4o is recommended as the primary model, paired with GPT-4o mini for simple tasks to save on costs. If you are in mainland China or other restricted regions, Azure OpenAI is a stable and reliable alternative.