What Is Cloudflare AI Gateway
Cloudflare AI Gateway is a proxy service that sits between your application and AI providers, adding caching, rate limiting, logging, and cost controls. Routing OpenClaw's requests through the gateway gives you better request observability and more options for controlling cost.
Prerequisites
Before starting, you need:
- A Cloudflare account (the free plan is sufficient)
- An AI Gateway created in the Cloudflare Dashboard
- A running OpenClaw instance
- Your upstream model provider API key (e.g., OpenAI)
Create a Cloudflare AI Gateway
Log in to the Cloudflare Dashboard, go to the AI menu, click "AI Gateway," and create a new gateway. You will receive a gateway endpoint URL in this format:
https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_name}/openai
Note this URL for configuration below.
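The endpoint is just the fixed gateway host plus your account ID, gateway name, and a provider slug. As a quick illustration (the helper function below is ours, not part of any SDK):

```python
# Illustrative helper: assemble a Cloudflare AI Gateway endpoint URL
# from the account ID and gateway name shown in the Dashboard.
def gateway_url(account_id: str, gateway_name: str, provider: str = "openai") -> str:
    return f"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_name}/{provider}"

print(gateway_url("abc123", "my-gateway"))
# https://gateway.ai.cloudflare.com/v1/abc123/my-gateway/openai
```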
Configure in OpenClaw
Edit the OpenClaw configuration file and replace the model provider's base URL with the Cloudflare AI Gateway endpoint:
{
  "providers": {
    "cloudflare-openai": {
      "type": "openai",
      "baseUrl": "https://gateway.ai.cloudflare.com/v1/your_account_id/your_gateway/openai",
      "apiKey": "{{OPENAI_API_KEY}}",
      "models": ["gpt-4o", "gpt-4o-mini"]
    }
  }
}
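The {{OPENAI_API_KEY}} value is a placeholder resolved at runtime. Assuming a conventional environment-variable templating scheme (OpenClaw's exact substitution rules may differ), the expansion works roughly like this:

```python
import os
import re

def expand_placeholders(value: str) -> str:
    # Replace each {{VAR}} token with the value of the VAR environment
    # variable; unset variables expand to an empty string in this sketch.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_placeholders("{{OPENAI_API_KEY}}"))  # sk-demo
```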
You can also configure via environment variables:
export OPENAI_BASE_URL="https://gateway.ai.cloudflare.com/v1/your_account_id/your_gateway/openai"
export OPENAI_API_KEY="sk-your-openai-key"
Supported Upstream Providers
Cloudflare AI Gateway supports proxying multiple upstream providers. In OpenClaw, configure different gateway paths for different providers:
{
  "providers": {
    "cf-openai": {
      "type": "openai",
      "baseUrl": "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway}/openai"
    },
    "cf-anthropic": {
      "type": "anthropic",
      "baseUrl": "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway}/anthropic"
    },
    "cf-azure": {
      "type": "azure",
      "baseUrl": "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway}/azure-openai"
    }
  }
}
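Note that the three baseUrls differ only in the trailing provider slug, so they can all be derived from one account/gateway pair. A small hypothetical sketch (the mapping reflects the paths shown above):

```python
# Illustrative: derive per-provider gateway base URLs from one account/gateway.
# Slugs taken from the configuration example above.
PROVIDER_SLUGS = {"openai": "openai", "anthropic": "anthropic", "azure": "azure-openai"}

def provider_base_urls(account_id: str, gateway: str) -> dict:
    root = f"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway}"
    return {name: f"{root}/{slug}" for name, slug in PROVIDER_SLUGS.items()}

urls = provider_base_urls("abc123", "my-gateway")
print(urls["azure"])
# https://gateway.ai.cloudflare.com/v1/abc123/my-gateway/azure-openai
```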
Enable Caching
Cloudflare AI Gateway can cache responses to identical requests, saving API costs: once caching is enabled in the Cloudflare Dashboard, a repeated identical request is served from the cache instead of being forwarded to the upstream provider.
In the gateway settings page:
- Toggle "Caching" on
- Set a cache TTL (time-to-live); 3600 seconds is recommended
- Choose a caching strategy (default is fine)
Caching works well for knowledge Q&A scenarios but is not recommended for conversations that require real-time responses.
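Cloudflare's actual cache-keying is internal to the gateway; the sketch below only illustrates the underlying idea that identical request bodies map to the same cache entry, which is why repeated questions are cheap but live conversations gain nothing:

```python
import hashlib
import json

def cache_key(body: dict) -> str:
    # Illustrative only, not Cloudflare's algorithm: canonicalize the
    # JSON body (stable key order) and hash it, so identical requests
    # produce identical keys regardless of field ordering.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = cache_key({"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "What is DNS?"}]})
b = cache_key({"messages": [{"role": "user", "content": "What is DNS?"}], "model": "gpt-4o-mini"})
print(a == b)  # True: the same question maps to the same cache entry
```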
Configure Rate Limiting
To prevent unexpectedly high request volumes from causing cost spikes, set rate limiting rules in the AI Gateway:
- Go to the gateway's "Rate Limiting" settings
- Set a maximum requests per minute (e.g., 60/minute)
- Set the over-limit behavior (return error or queue)
When OpenClaw hits the rate limit, it receives a 429 status code. OpenClaw's built-in retry mechanism automatically resends the request after a brief delay.
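The retry behavior described above amounts to exponential backoff on 429. A minimal sketch (names are illustrative, not OpenClaw internals; the stub stands in for a real HTTP call):

```python
import time

def call_with_retry(send, max_retries=3, base_delay=0.5):
    # Retry on HTTP 429 with exponential backoff; give up after max_retries.
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429 or attempt == max_retries:
            return status, body
        time.sleep(base_delay * (2 ** attempt))

# Stub upstream: rate-limited twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, "ok")])
print(call_with_retry(lambda: next(responses), base_delay=0.0))  # (200, 'ok')
```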
Logging and Monitoring
Cloudflare AI Gateway automatically logs all requests. In the Dashboard you can view:
- Total requests and success rate
- Token usage statistics
- Response latency distribution
- Error request details
- Per-model cost estimates
This is very helpful for troubleshooting OpenClaw model call issues.
Troubleshooting
If the model does not respond properly after configuration, check the following:
- URL format: ensure the gateway URL ends with the correct provider identifier (e.g., /openai)
- API key validity: Cloudflare AI Gateway passes your API key through to the upstream provider, so the key must be valid for that provider
- Network connectivity: Confirm the OpenClaw server can reach Cloudflare's domain
- Gateway status: Check that the gateway is active in the Cloudflare Dashboard
Use OpenClaw's diagnostic command to quickly test the connection:
openclaw doctor --provider cloudflare-openai
Summary
Cloudflare AI Gateway gives OpenClaw a powerful proxy layer, delivering enterprise-grade features like caching, rate limiting, and logging without any code changes: all it takes is swapping in the gateway's base URL. That makes it a good fit for teams that need fine-grained management of their AI requests.