
Handling OpenClaw Context Overflow and Auto-Compaction Failures


Problem Description

During an extended conversation with an OpenClaw bot, you may encounter the following error:

[openclaw:gateway] Error: Context length exceeded. Model maximum: 128000 tokens, requested: 135842 tokens.

Or logs indicating auto-compaction failure:

[openclaw:gateway] Context compaction triggered (usage: 92%)
[openclaw:gateway] Error during context compaction: Model returned empty summary
[openclaw:gateway] Falling back to truncation strategy

A third scenario is the message the end user sees in chat:

Bot reply: Sorry, the conversation history is too long for me to continue processing. Please send /reset to clear the conversation history.

OpenClaw maintains a conversation context for each user, containing message history and the system prompt. When conversations become lengthy, the token count can exceed the AI model's maximum context window limit.

How It Works

OpenClaw's context management flow is as follows:

  1. Each user message and AI reply is appended to the context
  2. Before sending an API request, OpenClaw calculates the current context's token count
  3. When token usage exceeds the configured threshold (default 85%), auto-compaction is triggered
  4. The compaction process sends older conversation history to the AI model for summarization
  5. The compressed summary replaces the original old messages, freeing up context space
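
The five steps above can be sketched as follows. The helper names (`count_tokens`, `summarize`) and the ten-message cutoff are illustrative assumptions, not OpenClaw internals:

```python
TRIGGER_THRESHOLD = 0.85  # step 3: default compaction threshold

def maybe_compact(messages, max_tokens, count_tokens, summarize):
    """Compact older history when token usage crosses the threshold."""
    # Step 2: measure the current context size
    total = sum(count_tokens(m) for m in messages)
    if total <= TRIGGER_THRESHOLD * max_tokens:
        return messages  # step 3: under threshold, nothing to do

    # Step 4: keep the most recent messages verbatim and send the
    # older portion to the model for summarization
    keep = messages[-10:]
    older = messages[:-10]
    summary = summarize(older)  # one model call

    # Step 5: the summary replaces the old messages, freeing space
    return [summary] + keep
```

Note that step 4 itself consumes a model call; this is why compaction can fail (for example, when the model returns an empty summary, as in the log above).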

Diagnostic Steps

Check the current user's context status:

openclaw context list

This displays all active sessions and their token usage:

Channel     User ID      Tokens    Model Max    Usage
telegram    12345678     98500     128000       77%
whatsapp    8612345      125000    128000       98%  ⚠️

Check compaction logs:

DEBUG=openclaw:context* openclaw start

View context management settings in the configuration:

cat ~/.openclaw/openclaw.json | grep -A 20 context

Solutions

Solution 1: Adjust Auto-Compaction Configuration

Optimize the compaction strategy in ~/.openclaw/openclaw.json:

{
  "context": {
    "compaction": {
      "enabled": true,
      "triggerThreshold": 0.75,
      "targetUsage": 0.5,
      "strategy": "summarize",
      "summaryModel": "gpt-4o-mini",
      "summaryPrompt": "Please compress the following conversation history into a concise summary, preserving key information and user preferences:",
      "maxRetries": 3
    }
  }
}

Key parameter explanations:

  • triggerThreshold: Usage threshold that triggers compaction; lowering this value triggers compaction earlier, preventing edge-case overflow
  • targetUsage: Target usage after compaction; setting this to 0.5 means context should occupy no more than 50% of capacity after compaction
  • summaryModel: Model used for generating summaries; a lower-cost model like gpt-4o-mini is recommended
  • maxRetries: Number of retries if compaction fails
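
To see how these parameters interact, here is the arithmetic for the configuration above on a 128K-token model:

```python
# Back-of-the-envelope numbers for the config above on a 128K model.
max_tokens = 128_000
trigger = 0.75
target = 0.5

trigger_at = int(max_tokens * trigger)  # compaction starts here
compact_to = int(max_tokens * target)   # and aims to end here
headroom = max_tokens - trigger_at      # room left for the summary call itself

print(trigger_at, compact_to, headroom)  # 96000 64000 32000
```

Lowering triggerThreshold from the default 0.85 to 0.75 widens the headroom between the trigger point and the hard limit, so the summarization request itself is far less likely to overflow.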

Solution 2: Manually Clean Up Context

When auto-compaction fails, you can manually handle a specific user's context:

# View context details for a specific user
openclaw context show --channel telegram --user 12345678

# Manually compact a specific user's context
openclaw context compact --channel telegram --user 12345678

# Reset a specific user's conversation history (clear all context)
openclaw context reset --channel telegram --user 12345678

Users can also send the /reset command directly in chat to clear their own conversation history.

Solution 3: Switch to a Model with a Larger Context Window

If your use case requires long, continuous conversations, consider using a model with a larger context window:

{
  "models": {
    "default": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "maxContextTokens": 200000
    }
  }
}

Claude models support a 200K token context window, GPT-4o supports 128K, and Gemini 1.5 Pro supports up to 1M. Choose the model that fits your needs.

Solution 4: Configure Truncation as a Fallback Strategy

When compaction fails, configure a truncation strategy to ensure the system does not crash:

{
  "context": {
    "compaction": {
      "enabled": true,
      "strategy": "summarize",
      "fallback": {
        "strategy": "truncate",
        "keepSystemPrompt": true,
        "keepRecentMessages": 10
      }
    }
  }
}

The truncation strategy retains the system prompt and the most recent N messages, discarding older messages outright. While this loses historical context, it ensures the conversation can continue.
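
A minimal sketch of this fallback behavior, assuming messages are dicts with a role field and that the system prompt is the first message (both assumptions for illustration):

```python
def truncate_fallback(messages, keep_recent=10, keep_system_prompt=True):
    """Keep the system prompt plus the most recent N messages; drop the rest."""
    head = []
    body = messages
    if keep_system_prompt and messages and messages[0]["role"] == "system":
        head = [messages[0]]   # preserve the system prompt
        body = messages[1:]
    return head + body[-keep_recent:]  # older messages are discarded outright
```

Unlike summarization, this needs no model call, which is exactly why it works as a fallback when the summarizer itself is failing.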

Solution 5: Limit Individual Message Length

Prevent users from sending excessively long messages that cause rapid context growth:

{
  "context": {
    "maxMessageTokens": 4000,
    "truncateMessage": true
  }
}

Individual messages exceeding the maxMessageTokens limit will be truncated, and the user will be notified that their message was too long.
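
As a rough sketch of what this limit does: exact token counting depends on the model's tokenizer, but approximating 1 token as roughly 4 characters of English text (a common rule of thumb, not OpenClaw's actual counter) gives:

```python
def clamp_message(text, max_message_tokens=4000, chars_per_token=4):
    """Truncate a single message to the configured token budget."""
    limit = max_message_tokens * chars_per_token
    if len(text) <= limit:
        return text, False
    return text[:limit], True  # truncated; caller should notify the user
```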

Monitoring and Prevention

Configure context usage alerts to receive notifications before thresholds are reached:

{
  "context": {
    "alerts": {
      "warningThreshold": 0.7,
      "criticalThreshold": 0.9,
      "webhook": "https://your-webhook-url.com/alerts"
    }
  }
}
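
To illustrate how the two thresholds classify the sessions from the diagnostic table earlier, a small sketch (the function is illustrative, not part of OpenClaw):

```python
def alert_level(tokens, max_tokens, warning=0.7, critical=0.9):
    """Map a session's token usage onto the alert thresholds above."""
    usage = tokens / max_tokens
    if usage >= critical:
        return "critical"
    if usage >= warning:
        return "warning"
    return "ok"

# The telegram session (98500/128000 ≈ 77%) would fire a warning;
# the whatsapp session (125000/128000 ≈ 98%) would fire a critical alert.
```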

Periodically clean up inactive session contexts to free up memory:

# Clear sessions inactive for more than 7 days
openclaw context cleanup --inactive-days 7

This prevents long-idle sessions from consuming system resources.
