📦
Ollama
Local AI Models & Routing
Install Command
npx clawhub@latest install ollama
Installation Guide
1. Check Environment
Make sure Node.js 22+ and OpenClaw are installed. Run openclaw --version in your terminal to verify.
2. Run Installation
Run the install command above in your terminal. ClawHub will automatically download the Ollama skill and install it to the ~/.openclaw/skills/ directory.
3. Verify Installation
Run openclaw skills list to check your installed skills and confirm Ollama appears in the list.
4. Configure (Optional)
Follow the configuration instructions in the description below to add skill settings to ~/.config/openclaw/openclaw.json5.
Manual Installation: Copy the skill folder to ~/.openclaw/skills/ or the skills/ directory in your project root. Make sure the folder contains a SKILL.md file.
Local Execution · Offline Available · Privacy Protection
Detailed Description
The Ollama skill connects OpenClaw to a locally running Ollama service, using open-source large language models for fully offline AI conversations.
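As a quick sanity check that a local Ollama service is actually reachable, here is a minimal sketch. It assumes Ollama's default address (http://localhost:11434) and uses the third-party requests package; adjust the URL if your service runs elsewhere.

```python
# Minimal reachability check for a local Ollama service (GET /api/version).
# Assumes the default address http://localhost:11434; "requests" is third-party.
import requests

resp = requests.get("http://localhost:11434/api/version", timeout=5)
resp.raise_for_status()
print("Ollama reachable, version:", resp.json()["version"])
```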
Core Features
- Local Models: Use Llama 3, Mistral, Qwen, Gemma, and other open-source models
- Fully Offline: Data never leaves your machine, ideal for privacy-sensitive scenarios
- Model Management: List, pull, and delete local models (see the sketch after this list)
- Custom Models: Create custom models using Modelfiles
- Multi-Model Parallel: Load multiple models simultaneously and switch as needed
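Model management goes through Ollama's REST API. The sketch below shows listing, pulling, and deleting models, assuming the default endpoint and the third-party requests package; note that the request-body key ("name" vs. "model") has varied slightly across Ollama releases.

```python
# Sketch: manage local Ollama models over the REST API.
# Assumes Ollama is running on its default port; "requests" is third-party.
import requests

BASE = "http://localhost:11434"

# List models already pulled to this machine (GET /api/tags).
for m in requests.get(f"{BASE}/api/tags", timeout=5).json()["models"]:
    print(m["name"])

# Pull a model from the registry (POST /api/pull).
# stream=False returns one final status object instead of progress lines.
requests.post(f"{BASE}/api/pull",
              json={"name": "llama3.1:8b", "stream": False},
              timeout=None).raise_for_status()

# Delete a local model (DELETE /api/delete).
requests.delete(f"{BASE}/api/delete",
                json={"name": "llama3.1:8b"}, timeout=5).raise_for_status()
```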
Configuration
{
  skills: {
    ollama: {
      baseUrl: "http://localhost:11434",
      defaultModel: "llama3.1:8b",
      keepAlive: "5m"
    }
  }
}
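Here baseUrl points at the Ollama server, defaultModel is used when a request names no model, and keepAlive presumably maps to Ollama's keep_alive request field, which controls how long a model stays loaded in memory after the last request. For reference, a chat turn equivalent to what the skill sends can be reproduced directly against Ollama's chat endpoint; this sketch assumes the baseUrl and model from the config above and the third-party requests package.

```python
# Sketch: a fully offline chat turn against a local Ollama service
# (POST /api/chat), using the baseUrl and model from the config above.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,      # return one JSON object instead of a stream
        "keep_alive": "5m",   # keep the model loaded, mirroring keepAlive above
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```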
Use Cases
- Scenarios with high privacy requirements where data cannot be transmitted externally
- Offline AI assistant when there is no network connection
- Local development and testing without consuming API credits
- Running custom models fine-tuned for specific tasks
System Requirements
Requires a local Ollama installation with at least one model already pulled. At least 8 GB of VRAM (GPU inference) or 16 GB of RAM (CPU inference) is recommended.