```bash
npx skills add https://github.com/firecrawl/cli --skill firecrawl-scrape
```

How Firecrawl Scrape fits into a Paperclip company.
Firecrawl Scrape drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.
Pre-configured AI company — 18 agents, 18 skills, one-time purchase.
SKILL.md (68 lines)
---
name: firecrawl-scrape
description: |
  Extract clean markdown from any URL, including JavaScript-rendered SPAs.
  Use this skill whenever the user provides a URL and wants its content, says
  "scrape", "grab", "fetch", "pull", "get the page", "extract from this URL",
  or "read this webpage". Handles JS-rendered pages, multiple concurrent URLs,
  and returns LLM-optimized markdown. Use this instead of WebFetch for any
  webpage content extraction.
allowed-tools:
  - Bash(firecrawl *)
  - Bash(npx firecrawl *)
---

# firecrawl scrape

Scrape one or more URLs. Returns clean, LLM-optimized markdown. Multiple URLs are scraped concurrently.

## When to use

- You have a specific URL and want its content
- The page is static or JS-rendered (SPA)
- Step 2 in the [workflow escalation pattern](firecrawl-cli): search → **scrape** → map → crawl → interact

## Quick start

```bash
# Basic markdown extraction
firecrawl scrape "<url>" -o .firecrawl/page.md

# Main content only, no nav/footer
firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md

# Wait for JS to render, then scrape
firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md

# Multiple URLs (each saved to .firecrawl/)
firecrawl scrape https://example.com https://example.com/blog https://example.com/docs

# Get markdown and links together
firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json

# Ask a question about the page
firecrawl scrape "https://example.com/pricing" --query "What is the enterprise plan price?"
```

## Options

| Option                   | Description                                                      |
| ------------------------ | ---------------------------------------------------------------- |
| `-f, --format <formats>` | Output formats: markdown, html, rawHtml, links, screenshot, json |
| `-Q, --query <prompt>`   | Ask a question about the page content (5 credits)                |
| `-H`                     | Include HTTP headers in output                                   |
| `--only-main-content`    | Strip nav, footer, sidebar — main content only                   |
| `--wait-for <ms>`        | Wait for JS rendering before scraping                            |
| `--include-tags <tags>`  | Only include these HTML tags                                     |
| `--exclude-tags <tags>`  | Exclude these HTML tags                                          |
| `-o, --output <path>`    | Output file path                                                 |

## Tips

- **Prefer plain scrape over `--query`.** Scrape to a file, then use `grep`, `head`, or read the markdown directly — you can search and reason over the full content yourself. Use `--query` only when you want a single targeted answer without saving the page (costs 5 extra credits).
- **Try scrape before interact.** Scrape handles static pages and JS-rendered SPAs. Only escalate to `interact` when you need interaction (clicks, form fills, pagination).
- Multiple URLs are scraped concurrently — check `firecrawl --status` for your concurrency limit.
- Single format outputs raw content. Multiple formats (e.g., `--format markdown,links`) output JSON.
- Always quote URLs — the shell interprets `?` and `&` as special characters.
- Naming convention: `.firecrawl/{site}-{path}.md`

## See also

- [firecrawl-search](../firecrawl-search/SKILL.md) — find pages when you don't have a URL
- [firecrawl-interact](../firecrawl-interact/SKILL.md) — when scrape can't get the content, use `interact` to click, fill forms, etc.
- [firecrawl-download](../firecrawl-download/SKILL.md) — bulk download an entire site to local files

Firecrawl
This is autonomous web scraping that actually works for complex data extraction tasks. Instead of writing brittle scrapers that break when sites change, you des…
Firecrawl Agent
The firecrawl-agent skill uses AI to autonomously navigate and extract structured data from complex multi-page websites, returning results as JSON that conforms…
Firecrawl Build Interact
When basic web scraping hits a wall because content only appears after clicking buttons, filling forms, or navigating through multi-step flows, this skill integ…
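The "scrape to a file, then search it yourself" tip from the SKILL.md above can be sketched as a short shell session. The URL, filename, and file contents here are illustrative stand-ins, and the commented-out `firecrawl` invocation assumes an installed, authenticated CLI:

```shell
mkdir -p .firecrawl

# In a real session this file would come from the CLI, e.g.:
#   firecrawl scrape "https://example.com/pricing" --only-main-content -o .firecrawl/example-pricing.md
# Stand-in content so the workflow below is reproducible without credentials:
printf '# Pricing\n\nEnterprise plan: contact sales\n' > .firecrawl/example-pricing.md

# Search the saved markdown yourself instead of spending --query credits
grep -i "enterprise" .firecrawl/example-pricing.md
```

Keeping scraped pages under `.firecrawl/` (per the naming convention in Tips) means repeated questions about the same page cost nothing beyond the initial scrape.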