
OpenClaw Skill Debugging and Performance Optimization Tips


Introduction

Debugging and optimization are inevitable parts of Skill development. A misbehaving Skill can lead to inaccurate responses, and an inefficient Skill can waste large amounts of tokens. This tutorial systematically introduces OpenClaw Skill debugging tools and performance optimization strategies.

Debug Mode

Enabling Global Debug

OpenClaw provides multiple log output levels:

# Default log level (info)
openclaw logs

# Debug level (shows Skill matching and execution details)
openclaw logs --level debug

# Verbose level (includes MCP communication data)
openclaw logs --level verbose

Setting the Log Level in Configuration

You can also persist the setting in ~/.config/openclaw/openclaw.json5:

{
  logging: {
    level: "debug",        // info, debug, verbose
    // Log output location
    file: "~/.openclaw/logs/openclaw.log",
    // Maximum log file size
    maxSize: "10MB",
    // Number of log files to retain
    maxFiles: 5,
    // Whether to output to the console
    console: true
  }
}

Reading Debug Logs

With debug level enabled, you'll see the following key log entries:

[DEBUG] === Message Processing Start ===
[DEBUG] Input: "北京天气怎么样"
[DEBUG] Channel: telegram, User: user_12345

[DEBUG] --- Skill Matching Phase ---
[DEBUG] Checking skill: weather (triggers: 天气,weather,气温)
[DEBUG]   ✓ Trigger matched: "天气" found in message
[DEBUG] Checking skill: reminder (triggers: 提醒,remind,闹钟)
[DEBUG]   ✗ No trigger match
[DEBUG] Checking skill: translator (triggers: 翻译,translate)
[DEBUG]   ✗ No trigger match

[DEBUG] --- Skill Execution Phase ---
[DEBUG] Active skill: weather
[DEBUG] MCP tools available: [http_fetch]
[DEBUG] Sending to model with skill context...

[DEBUG] --- MCP Tool Calls ---
[DEBUG] Tool call: http_fetch.http_get({url: "https://api.openweathermap.org/..."})
[DEBUG] Tool result: 200 OK, 1.2KB response
[DEBUG] Total tool calls: 1

[DEBUG] --- Response Generation ---
[DEBUG] Model response tokens: 245
[DEBUG] Total latency: 2340ms
[DEBUG] === Message Processing End ===
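When you are watching these logs over a long session, it helps to pull the latency and token figures out programmatically. Below is a minimal sketch in Python that scans debug-log text for the "Total latency" and "Model response tokens" lines shown above; the line formats are taken from this sample output and may differ between OpenClaw versions.

```python
import re

# Regexes matching the sample debug lines above; adjust them if your
# OpenClaw version formats these lines differently.
LATENCY_RE = re.compile(r"Total latency: (\d+)ms")
TOKENS_RE = re.compile(r"Model response tokens: (\d+)")

def summarize_debug_log(text):
    """Collect per-message latency and output-token figures from a debug log."""
    latencies = [int(m) for m in LATENCY_RE.findall(text)]
    tokens = [int(m) for m in TOKENS_RE.findall(text)]
    return {
        "messages": len(latencies),
        "avg_latency_ms": sum(latencies) / len(latencies) if latencies else 0,
        "avg_output_tokens": sum(tokens) / len(tokens) if tokens else 0,
    }

sample = """
[DEBUG] Model response tokens: 245
[DEBUG] Total latency: 2340ms
"""
print(summarize_debug_log(sample))
# → {'messages': 1, 'avg_latency_ms': 2340.0, 'avg_output_tokens': 245.0}
```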

Isolated Skill Testing

Testing via the Gateway API

Send test messages directly through the Gateway API, bypassing chat channels:

# Test a specific Skill
curl -X POST http://localhost:18789/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "北京天气怎么样",
    "options": {
      "skill": "weather",
      "debug": true
    }
  }'

With the debug: true parameter, the response will include debug information:

{
  "response": "🌤️ 北京天气...",
  "debug": {
    "skillMatched": "weather",
    "triggerWord": "天气",
    "mcpCalls": [
      {
        "server": "http-fetch",
        "tool": "http_get",
        "latency": 890,
        "status": "success"
      }
    ],
    "modelTokens": {
      "input": 1250,
      "output": 245
    },
    "totalLatency": 2340
  }
}
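Because the debug payload is plain JSON, you can script checks against it. Here is a small sketch, assuming the field names shown in the sample above (`mcpCalls`, `latency`): it flags any tool call slower than a threshold, matching the "< 2 seconds" guideline used later in this tutorial.

```python
def slow_mcp_calls(debug, threshold_ms=2000):
    """Return the MCP calls in a debug payload that exceeded threshold_ms."""
    return [c for c in debug.get("mcpCalls", []) if c.get("latency", 0) > threshold_ms]

# Sample payload mirroring the API response above
debug = {
    "skillMatched": "weather",
    "mcpCalls": [
        {"server": "http-fetch", "tool": "http_get", "latency": 890, "status": "success"},
    ],
    "totalLatency": 2340,
}
print(slow_mcp_calls(debug))  # → [] (890ms is under the 2s threshold)
```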

Disabling Other Skills for Isolated Testing

When testing a specific Skill, temporarily disable other Skills to eliminate interference:

{
  skills: {
    // Only enable the Skill you're testing
    enabled: ["weather"],
    // Or disable all other Skills
    // disabled: ["reminder", "translator", "rss-reader"]
  }
}

After modifying, restart:

openclaw restart

Remember to restore the configuration after testing.

Batch Validation with Test Messages

Create a test script to batch-verify Skill behavior:

#!/bin/bash
# test-weather-skill.sh

GATEWAY="http://localhost:18789/api/chat"

test_cases=(
  "北京天气怎么样"
  "上海明天会下雨吗"
  "weather in Tokyo"
  "深圳这周气温多少"
  "要不要带伞"
)

for msg in "${test_cases[@]}"; do
  echo "=== Testing: $msg ==="
  curl -s -X POST "$GATEWAY" \
    -H "Content-Type: application/json" \
    -d "{\"message\": \"$msg\", \"options\": {\"debug\": true}}" \
    | python3 -m json.tool
  echo ""
done
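The loop above just dumps raw JSON for each case. If you want a pass/fail summary instead, a small checker can be layered on top. This is a sketch only: it assumes each response carries the `debug.skillMatched` field shown earlier, and the HTTP call itself is left out so the check logic can be exercised offline.

```python
def check_case(message, debug, expected_skill):
    """Return a one-line PASS/FAIL verdict for a single test message."""
    matched = debug.get("skillMatched")
    verdict = "PASS" if matched == expected_skill else "FAIL"
    return f"{verdict} {message!r}: matched {matched!r}, expected {expected_skill!r}"

# Offline examples; in practice `debug` comes from the Gateway API response.
print(check_case("北京天气怎么样", {"skillMatched": "weather"}, "weather"))
print(check_case("要不要带伞", {"skillMatched": None}, "weather"))
```

A verdict line per case makes regressions obvious when you re-run the suite after editing trigger words.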

Common Debugging Scenarios

Scenario 1: Skill Not Being Triggered

Symptom: Messages containing trigger words don't activate the Skill.

Troubleshooting steps:

# 1. Confirm the Skill is loaded
openclaw skill list

# 2. Check matching logs
openclaw logs --level debug

Common causes:

• Incorrect filename → the file must end with .SKILL.md
• SKILL.md syntax error → check the YAML frontmatter format
• Trigger words don't match → add more trigger word variants
• Skill is disabled → check the disabled list in the configuration
• Another Skill takes priority → adjust the Skill priorities

Scenario 2: MCP Tool Call Fails

Symptom: The Skill is triggered but the MCP tool returns an error.

# View detailed MCP communication logs
openclaw logs --level verbose

Common errors and solutions:

[ERROR] MCP tool error: ECONNREFUSED
→ MCP Server isn't running or the port is incorrect

[ERROR] MCP tool error: TIMEOUT
→ Tool call timed out — increase the timeout configuration

[ERROR] MCP tool error: PERMISSION_DENIED
→ Filesystem MCP lacks read/write permissions

[ERROR] MCP tool error: INVALID_PARAMS
→ AI passed incorrect parameter format
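The timeout case is often fixable in configuration. The fragment below is a sketch only: the key names (`timeout`, `retries`) are hypothetical, so verify them against your OpenClaw version's openclaw.json5 schema before using it.

```json5
{
  mcp: {
    servers: {
      "http-fetch": {
        // Hypothetical keys; check your OpenClaw version's config schema
        timeout: 15000,   // ms allowed per tool call
        retries: 1        // keep low to avoid retry storms on flaky endpoints
      }
    }
  }
}
```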

Scenario 3: Response Content Doesn't Match Expectations

Symptom: The Skill is triggered and tools are called, but the output format or content is wrong.

Debugging approach:

  1. Check whether the Output Format description in SKILL.md is sufficiently clear
  2. Review the complete prompt sent to the AI model (visible in verbose logs)
  3. Try adding more examples to the Output Format section
  4. Consider lowering the temperature for more stable output

Performance Metrics Monitoring

Key Performance Indicators

Monitor the following metrics through the Dashboard or API:

# Open the Dashboard
openclaw dashboard

• Response Latency: time from receiving a message to sending a reply (healthy: < 5 seconds)
• MCP Call Latency: time for a single MCP tool call (healthy: < 2 seconds)
• Input Token Count: input tokens per request (healthy range varies by scenario)
• Output Token Count: output tokens per response (healthy: < 500)
• Skill Match Time: time spent on trigger word matching (healthy: < 10ms)
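These healthy ranges are easy to encode as an automated check. A minimal sketch, with thresholds copied from the table above (tune them for your own deployment):

```python
# Thresholds from the healthy ranges above, in ms / tokens
HEALTHY = {
    "avgLatency": 5000,     # response latency < 5s
    "mcpLatency": 2000,     # single MCP call < 2s
    "outputTokens": 500,    # output tokens per response < 500
}

def unhealthy_metrics(stats):
    """Return the names of metrics that exceed their healthy threshold."""
    return [name for name, limit in HEALTHY.items() if stats.get(name, 0) > limit]

print(unhealthy_metrics({"avgLatency": 2100, "mcpLatency": 890, "outputTokens": 245}))
# → []
print(unhealthy_metrics({"avgLatency": 7800, "mcpLatency": 890, "outputTokens": 245}))
# → ['avgLatency']
```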

Retrieving Performance Data via API

curl http://localhost:18789/api/stats

{
  "uptime": "3d 14h 22m",
  "totalMessages": 1250,
  "avgLatency": 2100,
  "skillStats": {
    "weather": {
      "invocations": 89,
      "avgLatency": 2340,
      "avgInputTokens": 1250,
      "avgOutputTokens": 245,
      "errorRate": 0.02
    },
    "reminder": {
      "invocations": 156,
      "avgLatency": 1100,
      "avgInputTokens": 890,
      "avgOutputTokens": 120,
      "errorRate": 0.0
    }
  }
}
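The `skillStats` figures make it straightforward to estimate the per-invocation API cost of each Skill. The sketch below applies placeholder prices to the averages from the response above; $3 per million input tokens and $15 per million output tokens are made-up numbers, so substitute your provider's real rates.

```python
# Placeholder prices per 1M tokens; replace with your provider's real rates.
PRICE_IN = 3.00
PRICE_OUT = 15.00

def cost_per_invocation(avg_input_tokens, avg_output_tokens):
    """Estimated USD cost of one Skill invocation."""
    return (avg_input_tokens * PRICE_IN + avg_output_tokens * PRICE_OUT) / 1_000_000

# Averages for the weather Skill from the /api/stats sample above
print(round(cost_per_invocation(1250, 245), 6))  # → 0.007425
```

Multiplying by `invocations` then gives a rough per-Skill spend, which is useful when deciding where the token optimizations below will pay off most.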

Token Usage Optimization

Token consumption directly impacts API costs. Here are practical tips for reducing token usage.

Streamline Your SKILL.md

The contents of SKILL.md are sent to the model as a system prompt, so a larger file means higher per-request token costs.

Before optimization:

## Output Format

When the user asks about the weather, present the weather data in an attractive
and information-rich way. First show the city name, then a description of the
current conditions, then the temperature information (both the actual and the
feels-like temperature), then the humidity and wind speed, and finally, if the
data includes sunrise and sunset times, show those as well. Be sure to use
appropriate emoji to make the information more vivid...

After optimization:

## Output Format

🌤️ {city} | {conditions} 🌡️ {temp}°C (feels like {feels_like}°C) | 💧{humidity}% | 💨{wind} m/s

Reduce Unnecessary Context

{
  context: {
    // Shrink the context window
    maxMessages: 20,   // Reduced from the default 50
    // Whether to carry full context when a Skill is activated
    skillContext: "minimal"  // minimal keeps only the latest 3 messages
  }
}

Choose the Right Model

Not every scenario needs the most powerful model:

{
  channels: {
    telegram: {
      // Use a smaller model for simple scenarios
      model: {
        provider: "claude",
        name: "claude-haiku-4-20250514"   // Faster and cheaper
      }
    }
  }
}

Skill Load Order

When multiple Skills have overlapping trigger words, load order determines priority.

View Current Load Order

openclaw skill list --verbose

Skills (loaded in order):
  1. weather        (priority: 10, triggers: 天气,weather,气温)
  2. reminder       (priority: 10, triggers: 提醒,remind,闹钟)
  3. translator     (priority: 5,  triggers: 翻译,translate)
  4. rss-reader     (priority: 5,  triggers: 订阅,新闻,rss)

Setting Priority

Set it in the SKILL.md frontmatter:

---
name: weather
priority: 20    # Higher value means higher priority
triggers:
  - 天气
---

Or override it globally in the configuration:

{
  skills: {
    priority: {
      "weather": 20,
      "reminder": 15,
      "translator": 10
    }
  }
}
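To reason about which Skill wins when trigger words overlap, the selection logic can be modeled roughly as follows. This is a sketch only: it assumes higher `priority` wins and that ties fall back to load order, as the `skill list --verbose` output above suggests; OpenClaw's actual matcher may differ.

```python
def select_skill(message, skills):
    """Pick the matching skill with the highest priority; ties go to load order.

    `skills` is a list of dicts in load order, each with name/priority/triggers.
    """
    matches = [s for s in skills if any(t in message for t in s["triggers"])]
    if not matches:
        return None
    # max() keeps the first (earliest-loaded) skill when priorities are equal
    return max(matches, key=lambda s: s["priority"])["name"]

skills = [
    {"name": "weather", "priority": 20, "triggers": ["天气", "weather"]},
    {"name": "reminder", "priority": 15, "triggers": ["提醒", "remind"]},
]
print(select_skill("北京天气怎么样", skills))  # → weather
```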

Performance Optimization Checklist

Before taking your Skill live, go through this optimization checklist:

  • [ ] Is the SKILL.md file size reasonable (recommended < 3KB)?
  • [ ] Are the trigger words precise (avoiding false triggers)?
  • [ ] Is the number of MCP tool calls minimized (avoiding redundant calls)?
  • [ ] Is the context window size appropriate?
  • [ ] Is the output format concise?
  • [ ] Have you chosen the right model for the use case?
  • [ ] Is error handling robust (avoiding retry storms)?
  • [ ] Are priority settings reasonable?

Handy Debugging Command Reference

# View all skill statuses
openclaw skill list

# View real-time logs
openclaw logs --level debug

# System health check
openclaw doctor

# Open the monitoring dashboard
openclaw dashboard

# Restart (load new changes)
openclaw restart

# Test via the Gateway API
curl http://localhost:18789/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "test", "options": {"debug": true}}'

# Check Gateway status
curl http://localhost:18789/health

Summary

Effective debugging and performance optimization are essential to delivering high-quality Skills. By mastering log analysis, isolated testing, token optimization, and load order management, you'll be able to efficiently diagnose issues, optimize costs, and build top-notch OpenClaw Skills.
