Introduction
OpenClaw is more than just a chatbot framework — it is fundamentally an AI Agent gateway. Beyond interacting with AI through chat platforms like Telegram and Discord, OpenClaw exposes a complete set of HTTP API endpoints, allowing you to programmatically invoke AI Agents from any application, script, or service.
This article provides a comprehensive overview of OpenClaw's HTTP API architecture, authentication methods, core endpoints, and practical integration scenarios.
Enabling the API Gateway
By default, OpenClaw starts an HTTP server alongside the main process. You can configure the gateway parameters in openclaw.json5:
{
  gateway: {
    // HTTP server listening port
    port: 3000,
    // Listening address; 0.0.0.0 allows external access
    host: "0.0.0.0",
    // Whether to enable API endpoints
    apiEnabled: true,
    // API key authentication
    apiKeys: [
      "sk-openclaw-xxxxxxxxxxxx",
      "sk-openclaw-yyyyyyyyyyyy"
    ],
    // CORS configuration
    cors: {
      origins: ["https://your-app.com"],
      methods: ["GET", "POST", "PUT", "DELETE"]
    }
  }
}
Once enabled, the API gateway will serve RESTful endpoints on the specified port.
Authentication
All API requests require authentication. OpenClaw supports two authentication methods:
Bearer Token Authentication
curl -X POST http://localhost:3000/api/v1/chat \
-H "Authorization: Bearer sk-openclaw-xxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{"message": "Hello"}'
Query Parameter Authentication
curl "http://localhost:3000/api/v1/agents?api_key=sk-openclaw-xxxxxxxxxxxx"
Bearer Token authentication is recommended, since keys passed as query parameters can leak into server access logs and browser history.
Core API Endpoints
1. Send a Message
This is the most commonly used endpoint — send a message to a specified Agent and receive an AI response.
POST /api/v1/chat
Request body:
{
  "agentId": "my-agent",
  "sessionId": "session-001",
  "message": "Write a sorting algorithm for me",
  "stream": false
}
Response body:
{
  "status": "ok",
  "response": {
    "content": "Here is the AI's response...",
    "messageId": "msg_abc123",
    "model": "claude-sonnet-4-20250514",
    "usage": {
      "inputTokens": 150,
      "outputTokens": 320
    }
  }
}
2. Streaming Responses
For long responses, it is recommended to use Server-Sent Events (SSE) for streaming output:
POST /api/v1/chat/stream
curl -N -X POST http://localhost:3000/api/v1/chat/stream \
-H "Authorization: Bearer sk-openclaw-xxxxxxxxxxxx" \
-H "Content-Type: application/json" \
-d '{"agentId": "my-agent", "message": "Explain quicksort in detail"}'
The returned SSE event stream:
data: {"type":"start","messageId":"msg_def456"}
data: {"type":"text","content":"Quick Sort"}
data: {"type":"text","content":" is an efficient"}
data: {"type":"text","content":" sorting algorithm..."}
data: {"type":"tool_use","name":"run_code","input":{"code":"..."}}
data: {"type":"tool_result","output":"Sort result: [1, 2, 3, 5, 8]"}
data: {"type":"end","usage":{"inputTokens":85,"outputTokens":540}}
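A minimal Python consumer for this stream can be sketched with the standard library alone; the `parse_sse_line` helper and `stream_chat` function are our own names, not part of OpenClaw, and they assume the event payloads shown above:

```python
import json
import urllib.request

def parse_sse_line(line: str):
    """Parse one 'data: {...}' SSE line into a dict; return None for other lines."""
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

def stream_chat(base_url, api_key, agent_id, message):
    """Yield text chunks from POST /api/v1/chat/stream as they arrive."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/chat/stream",
        data=json.dumps({"agentId": agent_id, "message": message}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            event = parse_sse_line(raw.decode("utf-8"))
            if event and event.get("type") == "text":
                yield event["content"]

if __name__ == "__main__":
    for chunk in stream_chat("http://localhost:3000", "sk-openclaw-xxx",
                             "my-agent", "Explain quicksort in detail"):
        print(chunk, end="", flush=True)
```

Tool events (`tool_use`, `tool_result`) are simply skipped here; a richer client would dispatch on `event["type"]`.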
3. Manage Sessions
# List all active sessions for an Agent
GET /api/v1/agents/{agentId}/sessions
# Get session details (including message history)
GET /api/v1/agents/{agentId}/sessions/{sessionId}
# Delete a session
DELETE /api/v1/agents/{agentId}/sessions/{sessionId}
# Clear a session's message history
POST /api/v1/agents/{agentId}/sessions/{sessionId}/clear
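As a sketch, the four session endpoints above can be wrapped in a small helper class; only the URL paths come from the list above, while the class and method names are our own:

```python
import json
import urllib.request

class SessionAPI:
    """Thin wrapper around OpenClaw's session-management endpoints."""

    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def _url(self, agent_id, session_id=None, suffix=""):
        # Build /api/v1/agents/{agentId}/sessions[/{sessionId}][suffix]
        path = f"/api/v1/agents/{agent_id}/sessions"
        if session_id:
            path += f"/{session_id}"
        return self.base_url + path + suffix

    def _request(self, url, method="GET"):
        req = urllib.request.Request(url, headers=self.headers, method=method)
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            return json.loads(body) if body else None

    def list(self, agent_id):
        return self._request(self._url(agent_id))

    def get(self, agent_id, session_id):
        return self._request(self._url(agent_id, session_id))

    def delete(self, agent_id, session_id):
        return self._request(self._url(agent_id, session_id), method="DELETE")

    def clear(self, agent_id, session_id):
        return self._request(self._url(agent_id, session_id, "/clear"), method="POST")
```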
4. Query Agent Information
# List all Agents
GET /api/v1/agents
# Get details for a single Agent
GET /api/v1/agents/{agentId}
# Get an Agent's tool list
GET /api/v1/agents/{agentId}/tools
5. System Status
# Health check
GET /api/v1/health
# System metrics
GET /api/v1/metrics
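The health endpoint is handy as a readiness probe before sending real traffic. Since the response body's shape is not documented here, this sketch only checks the HTTP status code:

```python
import urllib.error
import urllib.request

def is_healthy(base_url, timeout=2.0):
    """Return True if GET /api/v1/health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/v1/health",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False

if __name__ == "__main__":
    print("gateway up:", is_healthy("http://localhost:3000"))
```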
Practical Integration Examples
Python Integration
import requests
class OpenClawClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def chat(self, agent_id, message, session_id=None):
        resp = requests.post(
            f"{self.base_url}/api/v1/chat",
            headers=self.headers,
            json={
                "agentId": agent_id,
                "sessionId": session_id,
                "message": message
            }
        )
        # Surface HTTP errors (401, 429, ...) instead of failing on a missing key
        resp.raise_for_status()
        return resp.json()["response"]["content"]
client = OpenClawClient("http://localhost:3000", "sk-openclaw-xxx")
reply = client.chat("my-agent", "What's the weather like today?")
print(reply)
Node.js Integration
const response = await fetch("http://localhost:3000/api/v1/chat", {
  method: "POST",
  headers: {
    "Authorization": "Bearer sk-openclaw-xxx",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    agentId: "my-agent",
    message: "Analyze this code for me"
  })
});

const data = await response.json();
console.log(data.response.content);
Frontend Application Integration
OpenClaw's streaming API can be used directly to build custom chat interfaces. Note that EventSource only issues GET requests, so all parameters, including the API key, must be passed in the query string:
// URLSearchParams handles percent-encoding of the message text
const params = new URLSearchParams({
  agentId: "my-agent",
  message: "Hello",
  api_key: "sk-openclaw-xxx"
});
const eventSource = new EventSource(
  `http://localhost:3000/api/v1/chat/stream?${params}`
);

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.type === "text") {
    appendToChat(data.content);
  }
};
Rate Limiting and Quotas
The API gateway includes built-in rate limiting to prevent abuse:
{
  gateway: {
    rateLimit: {
      // Maximum requests per minute
      maxRequestsPerMinute: 60,
      // Maximum daily token consumption
      maxTokensPerDay: 1000000,
      // Calculated independently per API Key
      perKey: true
    }
  }
}
When limits are exceeded, the API returns a 429 Too Many Requests status code.
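The usual remedy for a 429 is to retry with exponential backoff. The schedule below is our own choice, not something OpenClaw prescribes, and the `status` attribute check should be adapted to whatever exception your HTTP client raises:

```python
import time

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))

def call_with_retry(send, max_attempts=5):
    """Call `send()` (any function issuing an API request); retry on HTTP 429.

    Assumes rate-limit errors raise an exception carrying a `status`
    attribute of 429; adjust the check for your client library.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception as exc:
            if getattr(exc, "status", None) != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt))
```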
OpenAI-Compatible Mode
OpenClaw provides an OpenAI-compatible API endpoint, allowing you to connect to OpenClaw directly using the OpenAI SDK:
POST /api/v1/openai/chat/completions
This means any client tool that supports the OpenAI API can seamlessly connect to OpenClaw — simply point the base_url to your OpenClaw gateway address.
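On the wire, the endpoint accepts the standard OpenAI chat-completions request shape. The sketch below builds such a request with the standard library; it assumes the agent id is passed as the `model` field, which is a guess on our part. (With the official openai SDK you would instead just point `base_url` at `http://localhost:3000/api/v1/openai`.)

```python
import json
import urllib.request

def completion_request(base_url, api_key, model, messages):
    """Build a POST request for the OpenAI-compatible endpoint."""
    return urllib.request.Request(
        f"{base_url}/api/v1/openai/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = completion_request("http://localhost:3000", "sk-openclaw-xxx",
                             "my-agent", [{"role": "user", "content": "Hello"}])
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        # OpenAI-style response shape: choices[0].message.content
        print(body["choices"][0]["message"]["content"])
```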
Summary
OpenClaw's HTTP API gateway exposes AI Agent capabilities through standardized endpoints, allowing you to integrate OpenClaw into any tech stack — whether it's a web application, mobile app, automation script, or internal enterprise system. Combined with streaming responses, session management, and OpenAI-compatible mode, OpenClaw can handle a wide range of API gateway scenarios from simple chats to complex workflows.