Moonshot AI and the Kimi Model
Moonshot AI is one of China's leading AI companies, and its flagship product Kimi is renowned for its ultra-long context window. Kimi can handle inputs of up to 128K tokens, giving it a unique advantage in long document analysis, deep multi-turn conversations, and similar scenarios.
OpenClaw natively supports Moonshot AI as a model provider. You can call Kimi series models directly within OpenClaw and enjoy their powerful long-context processing capabilities.
Obtaining a Moonshot API Key
- Visit the Moonshot AI Open Platform (platform.moonshot.cn) and register an account.
- Complete identity verification (if required by the platform).
- Create a new API key on the API management page.
- Save your key; you will need it in the configuration steps below.
Moonshot typically provides free credits for new users, sufficient for initial testing and evaluation.
Quick Configuration with Onboard
Run the interactive onboarding wizard:
openclaw onboard
Select Moonshot AI as the model provider, enter your API key, and choose the default model. The onboarding tool will automatically generate the correct configuration file.
Manual Configuration
Configure manually in openclaw.json:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "moonshot/moonshot-v1-128k"
      }
    }
  }
}
Authentication Setup
Add your API key under the moonshot provider:
{
  "providers": {
    "moonshot": {
      "auth": [
        {
          "key": "your-moonshot-api-key"
        }
      ]
    }
  }
}
Multi-key rotation configuration:
{
  "providers": {
    "moonshot": {
      "auth": [
        { "key": "key-a", "profile": "Primary account" },
        { "key": "key-b", "profile": "Backup account" }
      ]
    }
  }
}
OpenClaw's multi-account authentication mechanism automatically switches to the backup key when the primary key triggers rate limits. Failed keys enter a cooldown state and automatically recover after the cooldown period ends.
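The switch-and-cool-down behavior can be sketched in a few lines of Python (a minimal illustration of the idea; `KeyRotator` and its methods are hypothetical, not OpenClaw's actual internals):

```python
import time


class KeyRotator:
    """Sketch of rate-limit-aware key rotation with cooldowns (hypothetical)."""

    def __init__(self, keys, cooldown_seconds=60):
        self.keys = list(keys)
        self.cooldown = cooldown_seconds
        self.cooling = {}  # key -> timestamp at which the key becomes usable again

    def next_key(self, now=None):
        """Return the first key that is not cooling down."""
        now = time.time() if now is None else now
        for key in self.keys:
            if self.cooling.get(key, 0) <= now:
                return key
        raise RuntimeError("all keys are cooling down")

    def report_rate_limited(self, key, now=None):
        """Put a key into cooldown after the provider returns a rate-limit error."""
        now = time.time() if now is None else now
        self.cooling[key] = now + self.cooldown
```

With `key-a` rate limited, `next_key` returns `key-b`; once the cooldown elapses, `key-a` is handed out again.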
Available Models
Moonshot AI offers model versions with different context lengths:
- moonshot/moonshot-v1-8k: 8K context window, suitable for short conversations and simple Q&A, with fast response times and the lowest cost.
- moonshot/moonshot-v1-32k: 32K context window, suitable for medium-length document processing and multi-turn conversations.
- moonshot/moonshot-v1-128k: 128K context window, the flagship model, suitable for ultra-long document analysis and complex conversations.
The main factor in choosing a model is your context-length requirement. If your use case doesn't involve long text, selecting a smaller context version will give you faster response times and lower costs.
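This trade-off can be made concrete with a small helper that picks the cheapest model whose window fits the request (a sketch; the token estimate and reply budget are illustrative assumptions, and the model IDs are taken from the list above):

```python
# Context windows in tokens for the models listed above.
MODELS = [
    ("moonshot/moonshot-v1-8k", 8_000),
    ("moonshot/moonshot-v1-32k", 32_000),
    ("moonshot/moonshot-v1-128k", 128_000),
]


def pick_model(prompt_tokens, reply_budget=1_000):
    """Return the smallest (cheapest) model whose window fits prompt + reply."""
    needed = prompt_tokens + reply_budget
    for model_id, window in MODELS:
        if window >= needed:
            return model_id
    raise ValueError(f"request needs {needed} tokens, exceeding every window")
```

For example, a 20K-token prompt lands on the 32K model rather than paying for the 128K flagship.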
Long-Context Use Cases in Practice
Moonshot's 128K context window is particularly well-suited for the following scenarios:
- Long document Q&A: Upload an entire paper, report, or contract and ask questions based on the full text.
- Deep multi-turn conversations: Conduct dozens of continuous conversation turns without losing historical context.
- Code review: Submit multiple code files at once for holistic review and analysis.
- Meeting minutes generation: Process complete transcripts from long meetings to generate structured minutes.
Configuring Failover
Specify a fallback model for OpenClaw to use when the primary is unavailable:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "moonshot/moonshot-v1-128k",
        "fallback": "qwen/qwen-long"
      }
    }
  }
}
Using Qwen's long-context model as a fallback is a sensible failover strategy, as both offer excellent Chinese language capabilities and long-context support.
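The behavior this config describes, namely try the primary and retry on the fallback when the primary fails, can be sketched as follows (hypothetical helper, not OpenClaw's actual routing code; `call_model` stands in for whatever function sends the request):

```python
class ProviderError(Exception):
    """Stand-in for a rate-limit or outage error from a model provider."""


def complete_with_fallback(prompt, call_model, primary, fallback):
    """Try the primary model; on a provider error, retry once on the fallback."""
    try:
        return call_model(primary, prompt)
    except ProviderError:
        return call_model(fallback, prompt)
```

A rate-limit error from `moonshot/moonshot-v1-128k` would thus route the same prompt to `qwen/qwen-long` instead of failing the request.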
Usage Recommendations
- Choose context length wisely: Don't default to the 128K version. Most daily conversations are adequately served by 8K or 32K, and using a smaller context window can significantly reduce costs and latency.
- Mind the rate limits: Moonshot's API has concurrency and RPM limits; for high-frequency call scenarios, configure multiple API keys.
- Chinese-first: The Kimi model has been extensively optimized for Chinese understanding and generation, typically performing better in Chinese scenarios than in English.
Verifying the Configuration
After completing the configuration, send a test message. Try sending a longer piece of text and asking the model to summarize it, verifying that the long-context feature works properly. Check the OpenClaw logs to confirm that requests are being correctly routed to the Moonshot provider.
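Independent of the chat test, you can sanity-check the config file itself. A quick sketch that asserts the expected keys exist, with field names following the examples above (the helper itself is not part of OpenClaw):

```python
import json


def check_moonshot_config(raw):
    """Verify an openclaw.json fragment wires Moonshot up as shown above."""
    cfg = json.loads(raw)
    model = cfg["agents"]["defaults"]["model"]["primary"]
    assert model.startswith("moonshot/"), f"unexpected primary model: {model}"
    keys = [entry["key"] for entry in cfg["providers"]["moonshot"]["auth"]]
    assert keys, "no Moonshot API keys configured"
    return model, keys
```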
With that, you have successfully integrated Moonshot AI's Kimi model into OpenClaw. Its long-context capability will noticeably improve your conversation experience.