Note: the all-new v2 API is now available, offering more powerful features and higher performance.
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
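As a minimal sketch, the header described above could be constructed like this (the token value is a placeholder, not a real credential):

```python
def auth_header(token):
    """Build the Authorization header in the Bearer form described above."""
    return {"Authorization": f"Bearer {token}"}

print(auth_header("fc-YOUR-TOKEN"))  # {'Authorization': 'Bearer fc-YOUR-TOKEN'}
```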
Body
The base URL to start crawling from
URL pathname regex patterns that exclude matching URLs from the crawl. For example, if you set "excludePaths": ["blog/.*"] for the base URL firecrawl.dev, any results matching that pattern will be excluded, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.
URL pathname regex patterns that include matching URLs in the crawl. Only the paths that match the specified patterns will be included in the response. For example, if you set "includePaths": ["blog/.*"] for the base URL firecrawl.dev, only results matching that pattern will be included, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.
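The include/exclude behavior described above can be sketched as a filter that matches the regex patterns against a URL's pathname (leading slash stripped). This is an illustrative approximation; how the crawler applies the patterns internally may differ in edge cases:

```python
import re
from urllib.parse import urlparse

def path_allowed(url, include_patterns=None, exclude_patterns=None):
    """Approximate excludePaths/includePaths filtering: patterns are
    matched against the URL pathname with the leading slash stripped."""
    path = urlparse(url).path.lstrip("/")
    # Exclusions win: any matching exclude pattern drops the URL.
    if exclude_patterns and any(re.match(p, path) for p in exclude_patterns):
        return False
    # If include patterns are set, at least one must match.
    if include_patterns:
        return any(re.match(p, path) for p in include_patterns)
    return True

# With "excludePaths": ["blog/.*"], blog posts are dropped:
print(path_allowed("https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap",
                   exclude_patterns=["blog/.*"]))  # False
print(path_allowed("https://www.firecrawl.dev/pricing",
                   exclude_patterns=["blog/.*"]))  # True
```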
Maximum absolute depth to crawl, measured from the base of the entered URL. In practice, this is the maximum number of slashes the pathname of a scraped URL may contain.
Maximum depth to crawl based on discovery order. The root site and sitemapped pages have a discovery depth of 0. For example, if you set this to 1 and also set ignoreSitemap, the crawl covers only the entered URL and the URLs linked on that page.
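The slash-counting measure used by the absolute depth limit can be sketched as follows. This mirrors the description above; the server-side counting may differ slightly, for example around trailing slashes:

```python
from urllib.parse import urlparse

def pathname_depth(url):
    """Count slashes in the URL pathname, the measure the absolute
    depth limit is described as using."""
    return urlparse(url).path.count("/")

print(pathname_depth("https://firecrawl.dev/"))                # 1
print(pathname_depth("https://firecrawl.dev/blog/post-name"))  # 2
```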
Ignore the website sitemap when crawling
Do not re-scrape the same path with different (or no) query parameters
Maximum number of pages to crawl. Default limit is 10000.
⚠️ DEPRECATED: Use 'crawlEntireDomain' instead. Allows the crawler to follow internal links to sibling or parent URLs, not just child paths.
Allows the crawler to follow internal links to sibling or parent URLs, not just child paths.
false: Only crawls deeper (child) URLs. → e.g. /features/feature-1 → /features/feature-1/tips ✅ → Won't follow /pricing or / ❌
true: Crawls any internal links, including siblings and parents. → e.g. /features/feature-1 → /pricing, /, etc. ✅
Use true for broader internal coverage beyond nested paths.
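The child-only versus entire-domain behavior described above can be sketched as a decision on whether to follow an internal link. This is an illustrative approximation of the rule, not the crawler's actual implementation:

```python
from urllib.parse import urlparse

def follows_link(start_url, link_url, crawl_entire_domain=False):
    """Sketch of the follow decision: with crawl_entire_domain=False,
    only child paths of the start URL are followed; with True, any
    internal link on the same host qualifies."""
    start, link = urlparse(start_url), urlparse(link_url)
    if start.netloc != link.netloc:
        return False  # external links are governed by a separate setting
    if crawl_entire_domain:
        return True
    # Child-only: the link pathname must extend the start pathname.
    start_path = start.path.rstrip("/") + "/"
    return link.path.startswith(start_path)

base = "https://example.com/features/feature-1"
print(follows_link(base, "https://example.com/features/feature-1/tips"))  # True
print(follows_link(base, "https://example.com/pricing"))                  # False
print(follows_link(base, "https://example.com/pricing", True))            # True
```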
Allows the crawler to follow links to external websites.
Allows the crawler to follow links to subdomains of the main domain.
Delay in seconds between scrapes. This helps respect website rate limits.
Maximum number of concurrent scrapes. This parameter allows you to set a concurrency limit for this crawl. If not specified, the crawl adheres to your team's concurrency limit.
A webhook specification object.
If true, this will enable zero data retention for this crawl. To enable this feature, please contact help@firecrawl.dev
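Putting the parameters together, a request body might look like the sketch below. The names excludePaths, includePaths, ignoreSitemap, and crawlEntireDomain appear in the descriptions above; the remaining field names (url, maxDepth, limit, delay) are assumptions for illustration and should be checked against the current API reference:

```python
import json

# Hypothetical crawl request body; fields marked "assumed" are not
# confirmed by this page and should be verified against the API docs.
crawl_body = {
    "url": "https://firecrawl.dev",   # assumed name for the base URL field
    "excludePaths": ["blog/.*"],
    "ignoreSitemap": False,
    "crawlEntireDomain": True,
    "maxDepth": 3,                    # assumed name for the absolute depth cap
    "limit": 100,                     # assumed name for the page cap
    "delay": 1,                       # assumed: seconds between scrapes
}
print(json.dumps(crawl_body, indent=2))
```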