POST /crawl

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
url
string
required

The base URL to start crawling from

allowBackwardLinks
boolean
default:
false

Enables the crawler to navigate from a specific URL to previously linked pages.

allowExternalLinks
boolean
default:
false

Allows the crawler to follow links to external websites.

excludePaths
string[]

URL pathname regex patterns that exclude matching URLs from the crawl. For example, if you set "excludePaths": ["blog/.*"] for the base URL firecrawl.dev, any results matching that pattern will be excluded, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.

ignoreQueryParameters
boolean
default:
false

Do not re-scrape the same path with different (or no) query parameters

ignoreSitemap
boolean
default:
false

Ignore the website sitemap when crawling

includePaths
string[]

URL pathname regex patterns that include matching URLs in the crawl. Only the paths that match the specified patterns will be included in the response. For example, if you set "includePaths": ["blog/.*"] for the base URL firecrawl.dev, only results matching that pattern will be included, such as https://www.firecrawl.dev/blog/firecrawl-launch-week-1-recap.

limit
integer
default:
10000

Maximum number of pages to crawl. Default limit is 10000.

maxDepth
integer
default:
2

Maximum depth to crawl relative to the entered URL.

scrapeOptions
object
webhook

The URL to send the webhook to. This will trigger for crawl started (crawl.started), every page crawled (crawl.page), and when the crawl is completed (crawl.completed or crawl.failed). The response will be the same as the /scrape endpoint.
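A minimal request sketch using the parameters above. The field names come from this page; the base URL (`https://api.firecrawl.dev/v1/crawl`) and the token value are placeholder assumptions to be replaced with your own:

```python
import json
from urllib import request

API_URL = "https://api.firecrawl.dev/v1/crawl"  # assumed endpoint URL; adjust for your deployment
TOKEN = "fc-YOUR_TOKEN"  # placeholder auth token

# Request body built from the parameters documented above.
body = {
    "url": "https://firecrawl.dev",   # required: base URL to start crawling from
    "excludePaths": ["blog/.*"],      # exclude URLs whose pathname matches this regex
    "ignoreQueryParameters": True,    # don't re-scrape the same path with different query params
    "limit": 100,                     # stay well under the 10000-page default
    "maxDepth": 2,                    # maximum depth relative to the entered URL
}

req = request.Request(
    API_URL,
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",  # Bearer authentication header
        "Content-Type": "application/json",
    },
    method="POST",
)
# resp = request.urlopen(req)  # uncomment once a valid token is in place
```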

Response

200 - application/json
id
string
success
boolean
url
string
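A sketch of handling the 200 response, which returns the three fields listed above. The literal values here are illustrative, not real job identifiers:

```python
import json

# Illustrative 200 response body (id and url values are made up):
raw = '{"success": true, "id": "123-abc", "url": "https://api.firecrawl.dev/v1/crawl/123-abc"}'

payload = json.loads(raw)
if payload["success"]:
    job_id = payload["id"]       # identifier of the crawl job that was started
    status_url = payload["url"]  # URL associated with the crawl job
```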