Claude Agent Skill · by Firecrawl

Firecrawl Crawl

The firecrawl-crawl skill enables developers and content extraction teams to bulk extract structured content from entire websites or specific site sections by following links up to a configurable depth and page limit.

Install
```bash
npx skills add https://github.com/firecrawl/cli --skill firecrawl-crawl
```
Works with Paperclip

How Firecrawl Crawl fits into a Paperclip company.

Firecrawl Crawl drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

SaaS Factory (Paired)

Pre-configured AI company — 18 agents, 18 skills, one-time purchase.

$27 (regular price $59)
Source file
SKILL.md (58 lines)
---
name: firecrawl-crawl
description: |
  Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
allowed-tools:
  - Bash(firecrawl *)
  - Bash(npx firecrawl *)
---

# firecrawl crawl

Bulk extract content from a website. Crawls pages following links up to a depth/limit.

## When to use

- You need content from many pages on a site (e.g., all `/docs/`)
- You want to extract an entire site section
- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → interact

## Quick start

```bash
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

## Options

| Option                    | Description                                 |
| ------------------------- | ------------------------------------------- |
| `--wait`                  | Wait for crawl to complete before returning |
| `--progress`              | Show progress while waiting                 |
| `--limit <n>`             | Max pages to crawl                          |
| `--max-depth <n>`         | Max link depth to follow                    |
| `--include-paths <paths>` | Only crawl URLs matching these paths        |
| `--exclude-paths <paths>` | Skip URLs matching these paths              |
| `--delay <ms>`            | Delay between requests                      |
| `--max-concurrency <n>`   | Max parallel crawl workers                  |
| `--pretty`                | Pretty print JSON output                    |
| `-o, --output <path>`     | Output file path                            |

## Tips

- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl — don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.

## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs before deciding to crawl
- [firecrawl-download](../firecrawl-download/SKILL.md) — download site to local files (uses map + scrape)
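A crawl written with `-o .firecrawl/crawl.json` lands as one JSON file covering many pages, which often needs to be split back into per-page documents before feeding downstream tools. The sketch below is one way to do that in Python. Note the hedge: the assumed schema (a top-level `data` list whose items carry `markdown` and `metadata.sourceURL` fields) and the `split_crawl` helper name are illustrative assumptions, not a documented contract — inspect your own crawl output before relying on this shape.

```python
import json
import re
from pathlib import Path


def split_crawl(crawl_json: Path, out_dir: Path) -> list[Path]:
    """Write each crawled page's markdown to its own file; return the paths.

    ASSUMPTION: crawl_json contains {"data": [{"markdown": ...,
    "metadata": {"sourceURL": ...}}, ...]} -- verify against your
    actual .firecrawl/crawl.json before using.
    """
    pages = json.loads(crawl_json.read_text())["data"]  # assumed schema
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for page in pages:
        url = page.get("metadata", {}).get("sourceURL", "page")
        # Derive a filesystem-safe name from the source URL.
        slug = re.sub(r"[^a-zA-Z0-9]+", "-", url).strip("-") or "index"
        path = out_dir / f"{slug}.md"
        path.write_text(page.get("markdown", ""))
        written.append(path)
    return written
```

Splitting per page also makes it easy to diff successive crawls of the same section, since each URL maps to a stable filename.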