Introduction
OpenClaw Gateway not only serves chat channels but also exposes a complete set of REST APIs. Through these APIs you can integrate AI capabilities into any application: a custom web frontend, a CRM system, or a customer service ticketing platform. This article provides a detailed guide to API usage and integration practices.
1. API Overview
1.1 Basic Information
| Property | Value |
|---|---|
| Base URL | http://localhost:18789/api/v1 |
| Protocol | HTTP/HTTPS |
| Data Format | JSON |
| Authentication | Bearer Token |
| Rate Limit | 60 RPM by default |
1.2 Core Endpoint List
| Method | Endpoint | Function |
|---|---|---|
| POST | /api/v1/chat | Send a message and get an AI reply |
| POST | /api/v1/chat/stream | Stream a message response |
| POST | /api/v1/send | Send a message to a specific channel |
| GET | /api/v1/conversations | Get conversation list |
| GET | /api/v1/conversations/:id | Get conversation details |
| DELETE | /api/v1/conversations/:id | Delete a conversation |
| GET | /api/v1/skills | Get skill list |
| GET | /api/v1/models | Get available models |
| GET | /health | Health check |
| GET | /health/detail | Detailed status |
2. Authentication Configuration
2.1 Generating an API Token
```shell
# Generate an API Token
openclaw token create --name "my-app" --scope "chat,send,read"

# Output:
# Token: oc_tok_a1b2c3d4e5f6g7h8i9j0...
# Scope: chat, send, read
# Created: 2026-04-09

# List all Tokens
openclaw token list

# Revoke a Token
openclaw token revoke oc_tok_a1b2c3d4e5f6g7h8i9j0
```
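The curl examples later in this article reference an `$OPENCLAW_TOKEN` environment variable. Exporting the generated token once keeps the secret out of individual command lines:

```shell
# Export the token so subsequent curl examples can reference $OPENCLAW_TOKEN
export OPENCLAW_TOKEN="oc_tok_a1b2c3d4e5f6g7h8i9j0"

# Quick sanity check: print only a masked prefix, never the full secret
echo "Token set: ${OPENCLAW_TOKEN:0:10}..."
```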
2.2 Configuration File Setup
```json5
// ~/.config/openclaw/openclaw.json5
{
  "api": {
    "enabled": true,
    "port": 18789,
    "auth": {
      "enabled": true,
      "tokens": [
        {
          "name": "my-web-app",
          "token": "oc_tok_a1b2c3d4e5f6g7h8i9j0",
          "scope": ["chat", "send", "read", "admin"],
          "rateLimit": 120 // RPM
        },
        {
          "name": "crm-integration",
          "token": "oc_tok_x9y8z7w6v5u4t3s2r1q0",
          "scope": ["chat", "send"],
          "rateLimit": 30
        }
      ]
    }
  }
}
```
2.3 Authenticated Request Example
All API requests must include the token in the `Authorization` header:
```shell
curl -H "Authorization: Bearer oc_tok_a1b2c3d4e5f6g7h8i9j0" \
  http://localhost:18789/api/v1/models
```
3. Core API Usage
3.1 Sending a Message and Getting a Reply
curl example:
```shell
curl -X POST http://localhost:18789/api/v1/chat \
  -H "Authorization: Bearer $OPENCLAW_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Write a quicksort algorithm in Python",
    "conversationId": "conv_123",
    "model": "claude-sonnet-4-20250514",
    "systemPrompt": "You are a professional programming assistant",
    "maxTokens": 2000,
    "temperature": 0.7
  }'
```
Response example:
```json
{
  "id": "msg_abc123",
  "conversationId": "conv_123",
  "role": "assistant",
  "content": "Here is a Python implementation of quicksort:\n\n```python\ndef quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quicksort(left) + middle + quicksort(right)\n```",
  "model": "claude-sonnet-4-20250514",
  "usage": {
    "inputTokens": 45,
    "outputTokens": 156
  },
  "latency": 2341
}
```
3.2 Streaming Output
curl example (SSE):
```shell
curl -X POST http://localhost:18789/api/v1/chat/stream \
  -H "Authorization: Bearer $OPENCLAW_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "message": "Write a short essay about artificial intelligence",
    "model": "claude-sonnet-4-20250514"
  }'

# Returns an SSE event stream:
# data: {"type":"start","conversationId":"conv_456"}
# data: {"type":"delta","content":"Artificial"}
# data: {"type":"delta","content":" intelligence"}
# data: {"type":"delta","content":" (AI)"}
# ...
# data: {"type":"done","usage":{"inputTokens":20,"outputTokens":350}}
```
3.3 Sending Messages to a Channel
```shell
# Send a message to a Telegram user
curl -X POST http://localhost:18789/api/v1/send \
  -H "Authorization: Bearer $OPENCLAW_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "telegram",
    "chatId": "123456789",
    "message": "This is a message sent via the API"
  }'

# Send a message to a Discord channel
curl -X POST http://localhost:18789/api/v1/send \
  -H "Authorization: Bearer $OPENCLAW_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "discord",
    "chatId": "CHANNEL_ID",
    "message": "System notice: Server maintenance will begin at 10:00 PM tonight"
  }'
```
4. Webhook Reception
4.1 Configuring Webhook Callbacks
When OpenClaw receives a message or completes processing, it can send Webhook notifications to your application:
```json5
{
  "api": {
    "webhooks": {
      "outgoing": [
        {
          "url": "https://your-app.com/api/openclaw-callback",
          "secret": "your-webhook-secret",
          "events": [
            "message.received",
            "message.sent",
            "conversation.created",
            "channel.connected",
            "channel.disconnected"
          ],
          "retry": {
            "maxAttempts": 3,
            "backoffMs": 5000
          }
        }
      ]
    }
  }
}
```
4.2 Webhook Event Format
```json
{
  "event": "message.received",
  "timestamp": "2026-04-09T10:30:00Z",
  "data": {
    "messageId": "msg_xyz789",
    "conversationId": "conv_123",
    "channel": "telegram",
    "chatId": "123456789",
    "from": {
      "id": "user_456",
      "name": "Alice"
    },
    "content": "The message content sent by the user",
    "timestamp": "2026-04-09T10:30:00Z"
  },
  "signature": "sha256=abcdef..."
}
```
4.3 Verifying the Webhook Signature
```python
import hmac
import hashlib

def verify_webhook(payload: str, signature: str, secret: str) -> bool:
    """Compare the delivery's signature against an HMAC-SHA256 of the raw body."""
    expected = 'sha256=' + hmac.new(
        secret.encode(),
        payload.encode(),
        hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing
    return hmac.compare_digest(expected, signature)
```
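The verification helper slots into whatever web framework receives the callbacks. As a framework-agnostic sketch (the status-code convention and per-event handling here are illustrative choices, not part of OpenClaw), a dispatcher should reject unsigned or tampered deliveries before acting on them:

```python
import hmac
import hashlib
import json

def verify_webhook(payload: str, signature: str, secret: str) -> bool:
    """HMAC-SHA256 signature check, as in the snippet above."""
    expected = 'sha256=' + hmac.new(
        secret.encode(), payload.encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_event(raw_body: str, signature: str, secret: str):
    """Verify a delivery, then dispatch on the event type.

    Returns an (http_status, result) tuple for the surrounding framework.
    """
    if not verify_webhook(raw_body, signature, secret):
        return 401, None  # reject unsigned or tampered deliveries
    event = json.loads(raw_body)
    if event["event"] == "message.received":
        # e.g. log the inbound message, open a ticket, trigger a workflow...
        return 200, event["data"]["content"]
    return 200, None      # acknowledge events we don't act on
```

Note that verification must run over the raw request body exactly as received; re-serializing the parsed JSON can change key order or whitespace and break the signature.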
5. Python Integration Example
5.1 Basic Client Wrapper
```python
import json
import requests
from typing import Optional, Generator

class OpenClawClient:
    def __init__(self, base_url: str = "http://localhost:18789",
                 token: str = ""):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        }

    def chat(self, message: str, conversation_id: Optional[str] = None,
             model: str = "claude-sonnet-4-20250514",
             system_prompt: Optional[str] = None) -> dict:
        """Send a message and get an AI reply"""
        payload = {
            "message": message,
            "model": model,
        }
        if conversation_id:
            payload["conversationId"] = conversation_id
        if system_prompt:
            payload["systemPrompt"] = system_prompt
        resp = requests.post(
            f"{self.base_url}/api/v1/chat",
            json=payload,
            headers=self.headers,
            timeout=120
        )
        resp.raise_for_status()
        return resp.json()

    def chat_stream(self, message: str,
                    model: str = "claude-sonnet-4-20250514") -> Generator:
        """Stream a message response; yields the JSON payload of each SSE event"""
        payload = {"message": message, "model": model}
        headers = {**self.headers, "Accept": "text/event-stream"}
        resp = requests.post(
            f"{self.base_url}/api/v1/chat/stream",
            json=payload,
            headers=headers,
            stream=True,
            timeout=120
        )
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                line = line.decode("utf-8")
                if line.startswith("data: "):
                    yield line[6:]

    def send(self, channel: str, chat_id: str, message: str) -> dict:
        """Send a message to a specific channel"""
        resp = requests.post(
            f"{self.base_url}/api/v1/send",
            json={
                "channel": channel,
                "chatId": chat_id,
                "message": message
            },
            headers=self.headers
        )
        resp.raise_for_status()
        return resp.json()

    def health(self) -> dict:
        """Health check"""
        resp = requests.get(f"{self.base_url}/health")
        return resp.json()

# Usage example
client = OpenClawClient(token="oc_tok_a1b2c3d4e5f6g7h8i9j0")

# Regular conversation
result = client.chat("What's the weather like today?")
print(result["content"])

# Streaming conversation: each chunk is a JSON-encoded SSE event,
# so parse it and print only the text deltas
for chunk in client.chat_stream("Tell me a story"):
    event = json.loads(chunk)
    if event.get("type") == "delta":
        print(event["content"], end="", flush=True)

# Send to Telegram
client.send("telegram", "123456789", "Hello from API!")
```
6. Node.js Integration Example
6.1 Basic Client Wrapper
```javascript
// openclaw-client.js
const axios = require('axios');

class OpenClawClient {
  constructor(baseUrl = 'http://localhost:18789', token = '') {
    this.baseUrl = baseUrl;
    this.headers = {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    };
  }

  async chat(message, options = {}) {
    const { conversationId, model = 'claude-sonnet-4-20250514', systemPrompt } = options;
    const payload = { message, model };
    if (conversationId) payload.conversationId = conversationId;
    if (systemPrompt) payload.systemPrompt = systemPrompt;
    const resp = await axios.post(
      `${this.baseUrl}/api/v1/chat`,
      payload,
      { headers: this.headers, timeout: 120000 }
    );
    return resp.data;
  }

  async send(channel, chatId, message) {
    const resp = await axios.post(
      `${this.baseUrl}/api/v1/send`,
      { channel, chatId, message },
      { headers: this.headers }
    );
    return resp.data;
  }

  async health() {
    const resp = await axios.get(`${this.baseUrl}/health`);
    return resp.data;
  }

  async getModels() {
    const resp = await axios.get(
      `${this.baseUrl}/api/v1/models`,
      { headers: this.headers }
    );
    return resp.data;
  }
}

module.exports = OpenClawClient;

// Usage example
async function main() {
  const client = new OpenClawClient(
    'http://localhost:18789',
    'oc_tok_a1b2c3d4e5f6g7h8i9j0'
  );

  // Regular conversation
  const result = await client.chat('Implement the Fibonacci sequence in JavaScript', {
    systemPrompt: 'You are a programming tutor. Explain clearly and simply.'
  });
  console.log(result.content);

  // Send to Discord
  await client.send('discord', 'CHANNEL_ID', 'Message from the API');

  // Get available models
  const models = await client.getModels();
  console.log('Available models:', models);
}

main().catch(console.error);
```
7. Third-Party System Integration
7.1 CRM System Integration
The following example integrates OpenClaw into a CRM to automatically analyze customer feedback:
```python
# crm_integration.py
from openclaw_client import OpenClawClient

client = OpenClawClient(token="oc_tok_xxx")

def analyze_customer_feedback(feedback_text, customer_id):
    """Analyze customer feedback and classify it"""
    result = client.chat(
        f"Please analyze the following customer feedback and return the result in JSON format:\n\n{feedback_text}",
        system_prompt="""You are a customer feedback analysis expert. Classify the feedback and return JSON:
{
  "sentiment": "positive/neutral/negative",
  "category": "product quality/customer service/pricing/logistics/other",
  "urgency": "high/medium/low",
  "summary": "one-sentence summary",
  "suggestedAction": "recommended course of action"
}"""
    )
    return result["content"]

# Usage
analysis = analyze_customer_feedback(
    "Your product quality is terrible. It broke after just two days, and your customer service was rude!",
    "CUST_001"
)
print(analysis)
```
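Because the model's reply arrives as free text, it is worth parsing defensively before writing it into the CRM. The following sketch (field names follow the prompt above; the fallback record is an illustrative convention, not an OpenClaw feature) tolerates a Markdown code fence around the JSON and flags unparseable replies for manual review:

```python
import json

def parse_analysis(reply: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding code fences."""
    text = reply.strip()
    # Strip a Markdown fence if the model wrapped its JSON in one
    if text.startswith("```"):
        text = text.strip("`").lstrip("json").strip()
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        # Unparseable reply: keep the raw text and route to a human
        return {"sentiment": "unknown", "needs_manual_review": True, "raw": reply}
    # High-urgency feedback also goes to a human by default
    data.setdefault("needs_manual_review", data.get("urgency") == "high")
    return data
```

In practice you would call `parse_analysis(analyze_customer_feedback(...))` and branch on `needs_manual_review` before updating the customer record.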
7.2 Ticketing System Integration
```python
# helpdesk_integration.py
from openclaw_client import OpenClawClient

client = OpenClawClient(token="oc_tok_xxx")

def auto_reply_ticket(ticket):
    """Automatically generate an initial ticket reply"""
    result = client.chat(
        f"Ticket title: {ticket['title']}\n"
        f"Ticket description: {ticket['description']}\n"
        f"Priority: {ticket['priority']}\n\n"
        f"Please generate a professional initial response.",
        system_prompt="You are a technical support expert. Provide an initial response and handling recommendations based on the ticket content. Be professional and friendly."
    )
    return {
        "reply": result["content"],
        "auto_generated": True,
        "model": result.get("model"),
        "needs_review": ticket["priority"] == "high"
    }
```
7.3 Custom Web Frontend
```html
<!-- A simple web chat frontend -->
<!DOCTYPE html>
<html>
<head>
  <title>AI Assistant</title>
  <style>
    #chat { max-width: 600px; margin: 0 auto; padding: 20px; }
    .message { margin: 10px 0; padding: 10px; border-radius: 8px; }
    .user { background: #e3f2fd; text-align: right; }
    .assistant { background: #f5f5f5; }
    #input-area { display: flex; gap: 10px; margin-top: 20px; }
    #message-input { flex: 1; padding: 10px; border: 1px solid #ccc; border-radius: 4px; }
    button { padding: 10px 20px; background: #1976d2; color: white; border: none; border-radius: 4px; cursor: pointer; }
  </style>
</head>
<body>
  <div id="chat">
    <h2>AI Assistant</h2>
    <div id="messages"></div>
    <div id="input-area">
      <input id="message-input" placeholder="Type a message..." />
      <button onclick="sendMessage()">Send</button>
    </div>
  </div>
  <script>
    const API_URL = 'http://localhost:18789/api/v1/chat';
    // For local demos only: never embed a real token in client-side code
    const TOKEN = 'oc_tok_a1b2c3d4e5f6g7h8i9j0';
    let conversationId = null;

    async function sendMessage() {
      const input = document.getElementById('message-input');
      const message = input.value.trim();
      if (!message) return;
      appendMessage('user', message);
      input.value = '';
      try {
        const resp = await fetch(API_URL, {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${TOKEN}`,
            'Content-Type': 'application/json'
          },
          body: JSON.stringify({
            message,
            conversationId,
            model: 'claude-sonnet-4-20250514'
          })
        });
        const data = await resp.json();
        conversationId = data.conversationId;
        appendMessage('assistant', data.content);
      } catch (error) {
        appendMessage('assistant', 'An error occurred: ' + error.message);
      }
    }

    function appendMessage(role, content) {
      const div = document.createElement('div');
      div.className = `message ${role}`;
      div.textContent = content;
      document.getElementById('messages').appendChild(div);
      div.scrollIntoView({ behavior: 'smooth' });
    }

    document.getElementById('message-input')
      .addEventListener('keydown', e => { if (e.key === 'Enter') sendMessage(); });
  </script>
</body>
</html>
```
8. API Rate Limiting
8.1 Rate Limit Configuration
```json5
{
  "api": {
    "rateLimit": {
      // Global limit
      "global": {
        "windowMs": 60000,   // 1-minute window
        "maxRequests": 120   // Maximum 120 requests
      },
      // Per-Token limit
      "perToken": {
        "windowMs": 60000,
        "maxRequests": 60
      }
    }
  }
}
```
8.2 Handling Rate Limit Errors
When rate limited, the API returns 429 Too Many Requests:
```json
{
  "error": "rate_limit_exceeded",
  "message": "Too many requests. Please try again later.",
  "retryAfter": 30
}
```
Implement retry logic in your client:
```python
import time
import requests

def chat_with_retry(client, message, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat(message)
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 429:
                # Honor the server's Retry-After header, defaulting to 30s
                retry_after = int(e.response.headers.get('Retry-After', 30))
                print(f"Rate limited, retrying in {retry_after} seconds...")
                time.sleep(retry_after)
            else:
                raise
    raise Exception("Maximum retries exceeded")
```
9. Security Best Practices
| Measure | Description |
|---|---|
| Use HTTPS | In production, always enable HTTPS via an Nginx reverse proxy |
| Minimize Token permissions | Only grant necessary scopes |
| IP whitelist | Restrict API access origins |
| Input validation | Sanitize user inputs to prevent prompt injection |
| Audit logging | Log all API calls |
| Rotate Tokens regularly | Replace every 90 days |
```json5
{
  "api": {
    "security": {
      // IP whitelist
      "allowedIPs": ["10.0.0.0/8", "192.168.1.0/24"],
      // CORS configuration
      "cors": {
        "enabled": true,
        "origins": ["https://your-app.com"],
        "methods": ["GET", "POST", "DELETE"]
      },
      // Request body size limit
      "maxBodySize": "1MB"
    }
  }
}
```
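The HTTPS recommendation in the table above is typically handled by terminating TLS in front of the gateway. A minimal Nginx reverse-proxy sketch (the server name and certificate paths are placeholders to adapt to your deployment):

```nginx
server {
    listen 443 ssl;
    server_name your-app.com;  # placeholder

    ssl_certificate     /etc/letsencrypt/live/your-app.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-app.com/privkey.pem;

    location /api/ {
        proxy_pass http://127.0.0.1:18789;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # The SSE streaming endpoint needs response buffering disabled
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```

With TLS terminated here, the gateway itself can stay bound to localhost, which pairs well with the IP whitelist above.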
10. Debugging and Troubleshooting
```shell
# View API request logs
openclaw logs | grep -i "api\|request\|response"

# Debug with verbose curl
curl -v -X POST http://localhost:18789/api/v1/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message":"test"}'

# Check API service status
curl -s http://localhost:18789/health | jq .
```
With these API integration approaches, you can seamlessly incorporate OpenClaw's AI capabilities into any existing system, building powerful intelligent applications.