Claude Agent Skill · by Firecrawl

Firecrawl Map

The firecrawl-map skill discovers and lists all URLs on a website, with optional search filtering to locate specific pages within large sites.

Install
```bash
npx skills add https://github.com/firecrawl/cli --skill firecrawl-map
```
Works with Paperclip

How Firecrawl Map fits into a Paperclip company.

Firecrawl Map drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

Paired pack: **SaaS Factory** — a pre-configured AI company with 18 agents and 18 skills, available as a one-time purchase ($27, down from $59).
Source file: SKILL.md (50 lines)
---
name: firecrawl-map
description: |
  Discover and list all URLs on a website, with optional search filtering. Use this skill when the user wants to find a specific page on a large site, list all URLs, see the site structure, find where something is on a domain, or says "map the site", "find the URL for", "what pages are on", or "list all pages". Essential when the user knows which site but not which exact page.
allowed-tools:
  - Bash(firecrawl *)
  - Bash(npx firecrawl *)
---

# firecrawl map

Discover URLs on a site. Use `--search` to find a specific page within a large site.

## When to use

- You need to find a specific subpage on a large site
- You want a list of all URLs on a site before scraping or crawling
- Step 3 in the [workflow escalation pattern](firecrawl-cli): search → scrape → **map** → crawl → interact

## Quick start

```bash
# Find a specific page on a large site
firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt

# Get all URLs
firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
```

## Options

| Option                            | Description                  |
| --------------------------------- | ---------------------------- |
| `--limit <n>`                     | Max number of URLs to return |
| `--search <query>`                | Filter URLs by search query  |
| `--sitemap <include\|skip\|only>` | Sitemap handling strategy    |
| `--include-subdomains`            | Include subdomain URLs       |
| `--json`                          | Output as JSON               |
| `-o, --output <path>`             | Output file path             |

## Tips

- **Map + scrape is a common pattern**: use `map --search` to find the right URL, then `scrape` it.
- Example: `map https://docs.example.com --search "auth"` → found `/docs/api/authentication` → `scrape` that URL.

## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape the URLs you discover
- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract instead of map + scrape
- [firecrawl-download](../firecrawl-download/SKILL.md) — download entire site (uses map internally)
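The "map + scrape" tip above can be sketched as a small shell pipeline. This is a hedged sketch, not part of the skill itself: the sample URL list and file names are hypothetical, and the `grep` step only emulates locally what `--search` would do for you.

```shell
# Hypothetical URL list, standing in for the output of:
#   firecrawl map "https://docs.example.com" -o urls.txt
cat > urls.txt <<'EOF'
https://docs.example.com/
https://docs.example.com/docs/api/authentication
https://docs.example.com/docs/guides/quickstart
EOF

# Pick the first URL matching the topic (locally emulates `--search "auth"`).
target=$(grep -i 'auth' urls.txt | head -n 1)
echo "target: $target"

# Then scrape just that page (requires the firecrawl CLI and an API key):
# firecrawl scrape "$target"
```

In practice you would skip the local `grep` and pass `--search` to `firecrawl map` directly, so the filtering happens before the URL list ever reaches your machine.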