🔥 Firecrawl
Web Scraping · Search & Productivity

Install Command
npx clawhub@latest install firecrawl
Installation Guide
1. Check Environment
Make sure Node.js 22+ and OpenClaw are installed. Run openclaw --version in your terminal to verify.
2. Run Installation
Run the install command above in your terminal. ClawHub will automatically download and install Firecrawl to the ~/.openclaw/skills/ directory.
3. Verify Installation
Run openclaw skills list to check your installed skills and confirm Firecrawl appears in the list.
4. Configure (Optional)
Follow the configuration instructions in the description below to add skill settings to ~/.config/openclaw/openclaw.json5.
Manual Installation: Copy the skill folder to ~/.openclaw/skills/ or the skills/ directory in your project root. Make sure the folder contains a SKILL.md file.
- Deep web crawling and structured extraction
- Batch URL scraping and Markdown conversion
- Dynamic page rendering and JavaScript execution
Detailed Description
Firecrawl is a web scraping and data extraction tool designed for large language models, capable of converting any website content into clean, structured Markdown or JSON format for direct AI comprehension and use.
Core Features
- Web Scraping (Scrape): Input a single URL, automatically handles JavaScript rendering and anti-scraping mechanisms, returns clean Markdown content with navigation bars, ads, and other irrelevant elements stripped
- Site Crawling (Crawl): Starting from an entry URL, automatically discovers and crawls all subpages across the entire site, with configurable crawl depth and page count limits
- Data Extraction (Extract): Extracts structured data from webpages based on custom Schema, supporting JSON Schema field and type definitions
- Batch Processing: Submit multiple URLs at once for batch scraping; jobs execute asynchronously and results are returned when complete
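The Scrape feature above can be driven through Firecrawl's REST API. A minimal Python sketch, assuming the v1 scrape endpoint at api.firecrawl.dev and a request body with `url` and `formats` fields (check firecrawl.dev for the current API shape):

```python
import json
import urllib.request

# Assumed v1 REST endpoint; verify against the docs at firecrawl.dev.
API_URL = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a POST request asking Firecrawl to return the page as clean Markdown."""
    payload = {"url": url, "formats": ["markdown"]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("https://example.com", "fc-your-api-key")
# urllib.request.urlopen(req) would then return a JSON response containing
# the scraped page as Markdown, with navigation and ads stripped.
```

The same pattern extends to Crawl and Batch: only the endpoint and payload fields change.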
Configuration
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-your-api-key" // Get from firecrawl.dev
      }
    }
  }
}
Use Cases
- Building RAG knowledge bases: Batch scrape documentation sites and convert content into vector database indexable formats
- Competitive analysis: Crawl competitor websites to extract product information and pricing data
- Content aggregation: Scrape articles from multiple news sources and store them in a structured format
- Data collection: Extract product names, prices, ratings, and other fields from e-commerce websites
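The data-collection use case above leans on the Extract feature's custom Schema. A sketch of such a schema for e-commerce product fields, in standard JSON Schema; the field names and the payload wrapper are illustrative assumptions, and the exact request shape is defined by the Firecrawl API docs:

```python
import json

# Standard JSON Schema describing the product fields to extract.
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "rating": {"type": "number"},
    },
    "required": ["name", "price"],
}

# Hypothetical extract payload wrapping the schema; the actual field names
# (e.g. how the schema is attached to the request) may differ per the docs.
extract_payload = {
    "url": "https://shop.example.com/item/123",
    "schema": product_schema,
}
print(json.dumps(extract_payload, indent=2))
```

Given such a schema, Extract returns a JSON object whose keys and types match the declared fields, ready to store without further parsing.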