Key changes, mappings, and before/after snippets to upgrade your integration to v2.
## What's new in v2

- **Faster by default**: caching is on, with `maxAge` defaulting to 2 days, and sensible defaults like `blockAds`, `skipTlsVerification`, and `removeBase64Images` are enabled.
- **Summary format**: pass `"summary"` as a format to directly receive a concise summary of the page content.
- **JSON extraction**: use `{ type: "json", prompt, schema }`. The old `"extract"` format has been renamed to `"json"`.
- **Screenshot options**: use `{ type: "screenshot", fullPage, quality, viewport }`.
- **Search sources**: get `"news"` and `"images"` results in addition to web results by setting the `sources` parameter.
- **Smart crawling with prompts**: provide a `prompt` to crawl, and the system derives paths/limits automatically. Use the new crawl-params-preview endpoint to inspect the derived options before starting a job.
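To see how the new format options compose, here is a sketch of a v2 scrape request body built as plain data (field names come from the list above; the `maxAge` value in milliseconds is our assumption about the unit, and the URL and schema are illustrative):

```python
# Sketch: composing a v2 scrape request body with the new format options.
# Basic formats stay strings; json and screenshot become option objects.
import json

scrape_body = {
    "url": "https://example.com",
    "formats": [
        "markdown",
        "summary",  # new in v2: concise summary of the page content
        {  # old "extract" format, renamed to "json" in v2
            "type": "json",
            "prompt": "Extract the page title and author",
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "author": {"type": "string"},
                },
            },
        },
        {"type": "screenshot", "fullPage": True, "quality": 80},
    ],
    # v2 default made explicit here: reuse cached results up to 2 days old
    # (assumed to be expressed in milliseconds)
    "maxAge": 2 * 24 * 60 * 60 * 1000,
}

print(json.dumps(scrape_body, indent=2))
```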
JS/TS:

```js
import Firecrawl from '@mendable/firecrawl-js';

const firecrawl = new Firecrawl({ apiKey: 'fc-YOUR-API-KEY' });
```

Python:

```python
from firecrawl import Firecrawl

firecrawl = Firecrawl(api_key='fc-YOUR-API-KEY')
```
## Migration checklist

- Call `https://api.firecrawl.dev/v2/` endpoints.
- Use `"summary"` where needed.
- Use `{ type: "json", prompt, schema }` for JSON extraction.
- Use `startCrawl` + `getCrawlStatus` (or the `crawl` waiter).
- Use `startBatchScrape` + `getBatchScrapeStatus` (or the `batchScrape` waiter).
- Use `startExtract` + `getExtractStatus` (or the `extract` waiter).
- Preview a crawl `prompt` with crawl-params-preview.
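The start + status pairs above all follow the same shape: kick off a job, then poll until it reaches a terminal state (this is exactly what the waiter methods wrap for you). A minimal, SDK-agnostic sketch of that loop, with the status fetcher injected so it can stand in for any of the `get*Status` calls (the terminal status names are our assumption):

```python
# Sketch of the generic start + poll pattern behind the v2 waiter methods.
import time
from typing import Callable


def wait_for_job(
    get_status: Callable[[], dict],
    poll_interval: float = 2.0,
    timeout: float = 300.0,
) -> dict:
    """Poll a started job until it reaches a terminal status.

    get_status stands in for a call like get_crawl_status(job_id);
    the terminal status names here are assumptions for illustration.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("status") in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")


# Usage with a stubbed fetcher that reports progress, then completion:
states = iter([{"status": "scraping"}, {"status": "completed", "total": 5}])
result = wait_for_job(lambda: next(states), poll_interval=0)
print(result["status"])  # completed
```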
## JS SDK method changes

### Scrape, search, and map

| v1 (`FirecrawlApp`) | v2 (`Firecrawl`) |
|---|---|
| `scrapeUrl(url, ...)` | `scrape(url, options?)` |
| `search(query, ...)` | `search(query, options?)` |
| `mapUrl(url, ...)` | `map(url, options?)` |
### Crawls

| v1 | v2 |
|---|---|
| `crawlUrl(url, ...)` | `crawl(url, options?)` (waiter) |
| `asyncCrawlUrl(url, ...)` | `startCrawl(url, options?)` |
| `checkCrawlStatus(id, ...)` | `getCrawlStatus(id)` |
| `cancelCrawl(id)` | `cancelCrawl(id)` |
| `checkCrawlErrors(id)` | `getCrawlErrors(id)` |
### Batch scraping

| v1 | v2 |
|---|---|
| `batchScrapeUrls(urls, ...)` | `batchScrape(urls, opts?)` (waiter) |
| `asyncBatchScrapeUrls(urls, ...)` | `startBatchScrape(urls, opts?)` |
| `checkBatchScrapeStatus(id, ...)` | `getBatchScrapeStatus(id)` |
| `checkBatchScrapeErrors(id)` | `getBatchScrapeErrors(id)` |
### Extraction

| v1 | v2 |
|---|---|
| `extract(urls?, params?)` | `extract(args)` |
| `asyncExtract(urls, params?)` | `startExtract(args)` |
| `getExtractStatus(id)` | `getExtractStatus(id)` |
### Other

| v1 | v2 |
|---|---|
| `generateLLMsText(...)` | (not in v2 SDK) |
| `checkGenerateLLMsTextStatus(id)` | (not in v2 SDK) |
| `crawlUrlAndWatch(...)` | `watcher(jobId, ...)` |
| `batchScrapeUrlsAndWatch(...)` | `watcher(jobId, ...)` |
## Python SDK method changes

### Scrape, search, and map

| v1 | v2 |
|---|---|
| `scrape_url(...)` | `scrape(...)` |
| `search(...)` | `search(...)` |
| `map_url(...)` | `map(...)` |
### Crawls

| v1 | v2 |
|---|---|
| `crawl_url(...)` | `crawl(...)` (waiter) |
| `async_crawl_url(...)` | `start_crawl(...)` |
| `check_crawl_status(...)` | `get_crawl_status(...)` |
| `cancel_crawl(...)` | `cancel_crawl(...)` |
### Batch scraping

| v1 | v2 |
|---|---|
| `batch_scrape_urls(...)` | `batch_scrape(...)` (waiter) |
| `async_batch_scrape_urls(...)` | `start_batch_scrape(...)` |
| `get_batch_scrape_status(...)` | `get_batch_scrape_status(...)` |
| `get_batch_scrape_errors(...)` | `get_batch_scrape_errors(...)` |
### Extraction

| v1 | v2 |
|---|---|
| `extract(...)` | `extract(...)` |
| `start_extract(...)` | `start_extract(...)` |
| `get_extract_status(...)` | `get_extract_status(...)` |
### Other

| v1 | v2 |
|---|---|
| `generate_llms_text(...)` | (not in v2 SDK) |
| `get_generate_llms_text_status(...)` | (not in v2 SDK) |
| `watch_crawl(...)` | `watcher(job_id, ...)` |
`AsyncFirecrawl` mirrors the same methods (all awaitable).

## Formats

Basic formats are strings: `"markdown"`, `"html"`, `"rawHtml"`, `"links"`, `"summary"`. Instead of `parsePDF`, use `parsers: [ { "type": "pdf" } | "pdf" ]`.
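Put together, a v2 scrape options object using string formats and the new `parsers` field might look like this sketch (field names taken from the text above; the exact combination is illustrative):

```python
# Sketch: v2 scrape options with string formats and the parsers field.
scrape_options = {
    "formats": ["markdown", "links", "summary"],  # basic formats are plain strings
    "parsers": [{"type": "pdf"}],  # replaces v1's parsePDF; the "pdf" string shorthand also works
}

print(scrape_options["parsers"][0]["type"])  # pdf
```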
## Crawl options mapping

| v1 | v2 |
|---|---|
| `allowBackwardCrawling` | (removed) use `crawlEntireDomain` |
| `maxDepth` | (removed) use `maxDiscoveryDepth` |
| `ignoreSitemap` (bool) | `sitemap` (e.g., `"only"`, `"skip"`, or `"include"`) |
| (none) | `prompt` |
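If you have many call sites, the renames in the table above can be applied mechanically. A hedged sketch of such a helper (the function name is ours, and the bool-to-string mapping for `ignoreSitemap` is our reading of the table: `True` meant "skip the sitemap", `False` the default behavior):

```python
def migrate_crawl_options(v1_opts: dict) -> dict:
    """Translate v1 crawl option names to their v2 equivalents.

    Mapping taken from the table above; keys not listed there pass through.
    """
    v2_opts = dict(v1_opts)
    if "allowBackwardCrawling" in v2_opts:
        v2_opts["crawlEntireDomain"] = v2_opts.pop("allowBackwardCrawling")
    if "maxDepth" in v2_opts:
        v2_opts["maxDiscoveryDepth"] = v2_opts.pop("maxDepth")
    if "ignoreSitemap" in v2_opts:
        # v1 bool becomes a v2 mode string (assumed: True -> "skip", False -> "include")
        v2_opts["sitemap"] = "skip" if v2_opts.pop("ignoreSitemap") else "include"
    return v2_opts


print(migrate_crawl_options({"maxDepth": 3, "ignoreSitemap": True, "limit": 10}))
# {'limit': 10, 'maxDiscoveryDepth': 3, 'sitemap': 'skip'}
```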