Firecrawl V1 is here! With it, we introduce a more reliable and developer-friendly API.

Here is what’s new:

  • Output Formats for /scrape. Choose what formats you want your output in.
  • New /map endpoint for getting most of the URLs of a website.
  • Developer-friendly API for /crawl/{id} status.
  • 2x Rate Limits for all plans.
  • Go SDK and Rust SDK.
  • Teams support.
  • API Key Management in the dashboard.
  • onlyMainContent now defaults to true.
  • /crawl webhooks and WebSocket support.

Scrape Formats

You can now choose what formats you want your output in, and you can specify multiple output formats at once. Supported formats are:

  • Markdown (markdown)
  • HTML (html)
  • Raw HTML (rawHtml) (with no modifications)
  • Screenshot (screenshot or screenshot@fullPage)
  • Links (links)
  • Extract (extract) - structured output

Output keys will match the formats you choose.
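
For example, a request asking for both markdown and HTML might look like the sketch below (substitute your own API key):

cURL
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev",
      "formats": ["markdown", "html"]
    }'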

Response

SDKs will return the data object directly. cURL will return the payload exactly as shown below.

{
  "success": true,
  "data" : {
    "markdown": "Launch Week I is here! [See our Day 2 Release πŸš€](https://www.firecrawl.dev/blog/launch-week-i-day-2-doubled-rate-limits)[πŸ’₯ Get 2 months free...",
    "html": "<!DOCTYPE html><html lang=\"en\" class=\"light\" style=\"color-scheme: light;\"><body class=\"__variable_36bd41 __variable_d7dc5d font-inter ...",
    "metadata": {
      "title": "Home - Firecrawl",
      "description": "Firecrawl crawls and converts any website into clean markdown.",
      "language": "en",
      "keywords": "Firecrawl,Markdown,Data,Mendable,Langchain",
      "robots": "follow, index",
      "ogTitle": "Firecrawl",
      "ogDescription": "Turn any website into LLM-ready data.",
      "ogUrl": "https://www.firecrawl.dev/",
      "ogImage": "https://www.firecrawl.dev/og.png?123",
      "ogLocaleAlternate": [],
      "ogSiteName": "Firecrawl",
      "sourceURL": "https://firecrawl.dev",
      "statusCode": 200
    }
  }
}

Introducing /map (Alpha)

The easiest way to go from a single URL to a map of an entire website.

Usage
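
A minimal request is sketched below; it assumes /map takes a url in the request body and uses the same bearer-token authentication as /scrape:

cURL
curl -X POST https://api.firecrawl.dev/v1/map \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev"
    }'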

Response

SDKs will return the data object directly. cURL will return the payload exactly as shown below.

{
  "status": "success",
  "links": [
    "https://firecrawl.dev",
    "https://www.firecrawl.dev/pricing",
    "https://www.firecrawl.dev/blog",
    "https://www.firecrawl.dev/playground",
    "https://www.firecrawl.dev/smart-crawl",
    ...
  ]
}

WebSockets

To crawl a website over WebSockets, use the Crawl URL and Watch method in our SDKs, which streams pages to your event handlers as they are scraped.

Extract format

LLM extraction is now available in v1 under the extract format. To extract structured data from a page, you can pass a schema to the endpoint or just provide a prompt, as in the sketch below.
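
A request passing a JSON Schema might look like this sketch (the schema fields mirror the example output below):

cURL
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev/",
      "formats": ["extract"],
      "extract": {
        "schema": {
          "type": "object",
          "properties": {
            "company_mission": { "type": "string" },
            "supports_sso": { "type": "boolean" },
            "is_open_source": { "type": "boolean" },
            "is_in_yc": { "type": "boolean" }
          },
          "required": ["company_mission", "supports_sso", "is_open_source", "is_in_yc"]
        }
      }
    }'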

Output:

JSON
{
    "success": true,
    "data": {
      "extract": {
        "company_mission": "Train a secure AI on your technical resources that answers customer and employee questions so your team doesn't have to",
        "supports_sso": true,
        "is_open_source": false,
        "is_in_yc": true
      },
      "metadata": {
        "title": "Mendable",
        "description": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
        "robots": "follow, index",
        "ogTitle": "Mendable",
        "ogDescription": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
        "ogUrl": "https://docs.firecrawl.dev/",
        "ogImage": "https://docs.firecrawl.dev/mendable_new_og1.png",
        "ogLocaleAlternate": [],
        "ogSiteName": "Mendable",
        "sourceURL": "https://docs.firecrawl.dev/"
      }
    }
}

Extracting without schema (New)

You can now extract without a schema by just passing a prompt to the endpoint. The LLM chooses the structure of the data.
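
For example (a sketch; the prompt wording is illustrative):

cURL
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev/",
      "formats": ["extract"],
      "extract": {
        "prompt": "Extract the company mission from the page."
      }
    }'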

Output:

JSON
{
    "success": true,
    "data": {
      "extract": {
        "company_mission": "Train a secure AI on your technical resources that answers customer and employee questions so your team doesn't have to",
      },
      "metadata": {
        "title": "Mendable",
        "description": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
        "robots": "follow, index",
        "ogTitle": "Mendable",
        "ogDescription": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
        "ogUrl": "https://docs.firecrawl.dev/",
        "ogImage": "https://docs.firecrawl.dev/mendable_new_og1.png",
        "ogLocaleAlternate": [],
        "ogSiteName": "Mendable",
        "sourceURL": "https://docs.firecrawl.dev/"
      }
    }
}

New Crawl Webhook

You can now pass a webhook parameter to the /crawl endpoint. This will send a POST request to the URL you specify when the crawl is started, updated, and completed.

The webhook will now trigger for every page crawled and not just the whole result at the end.

cURL
curl -X POST https://api.firecrawl.dev/v1/crawl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "limit": 100,
      "webhook": "https://example.com/webhook"
    }'

Webhook Events

There are now 4 types of events:

  • crawl.started - Triggered when the crawl is started.
  • crawl.page - Triggered for every page crawled.
  • crawl.completed - Triggered when the crawl is completed to let you know it’s done.
  • crawl.failed - Triggered when the crawl fails.

Webhook Response

  • success - Whether the page was crawled successfully.
  • type - The type of event that occurred.
  • id - The ID of the crawl.
  • data - The data that was scraped (Array). This will only be non-empty on crawl.page events, and will contain one item if the page was scraped successfully. The response format is the same as the /scrape endpoint's (see the example payload below).
  • error - If the webhook event failed, this will contain the error message.
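
Put together, a crawl.page event posted to your webhook might look like this (all values are illustrative placeholders):

JSON
{
  "success": true,
  "type": "crawl.page",
  "id": "<crawl-id>",
  "data": [
    {
      "markdown": "...",
      "metadata": {
        "sourceURL": "https://docs.firecrawl.dev/",
        "statusCode": 200
      }
    }
  ]
}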

Migrating from V0

/scrape endpoint

The updated /scrape endpoint has been redesigned for enhanced reliability and ease of use. The structure of the new /scrape request body is as follows:

{
  "url": "<string>",
  "formats": ["markdown", "html", "rawHtml", "links", "screenshot"],
  "includeTags": ["<string>"],
  "excludeTags": ["<string>"],
  "headers": { "<key>": "<value>" },
  "waitFor": 123,
  "timeout": 123
}

Formats

You can now choose what formats you want your output in, and you can specify multiple output formats at once. Supported formats are:

  • Markdown (markdown)
  • HTML (html)
  • Raw HTML (rawHtml) (with no modifications)
  • Screenshot (screenshot or screenshot@fullPage)
  • Links (links)

By default, the output will include only the markdown format.

Details on the new request body

The table below outlines the changes to the request body parameters for the /scrape endpoint in V1.

| Parameter | Change | Description |
| --- | --- | --- |
| onlyIncludeTags | Moved and Renamed | Moved to root level and renamed to includeTags. |
| removeTags | Moved and Renamed | Moved to root level and renamed to excludeTags. |
| onlyMainContent | Moved | Moved to root level. true by default. |
| waitFor | Moved | Moved to root level. |
| headers | Moved | Moved to root level. |
| parsePDF | Moved | Moved to root level. |
| timeout | No Change | |
| pageOptions | Removed | No longer needed; the scrape options were moved to root level. |
| replaceAllPathsWithAbsolutePaths | Removed | No longer needed; every path now defaults to an absolute path. |
| includeHtml | Removed | Add "html" to formats instead. |
| includeRawHtml | Removed | Add "rawHtml" to formats instead. |
| screenshot | Removed | Add "screenshot" to formats instead. |
| fullPageScreenshot | Removed | Add "screenshot@fullPage" to formats instead. |
| extractorOptions | Removed | Use the "extract" format with the extract object instead. |

The new extract format is described in the Extract format section above.
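
As a concrete before/after example of the /scrape changes, here is a sketch; the v0 shape is reconstructed from the table above, and the values are illustrative:

cURL
# v0: scrape options nested under pageOptions
curl -X POST https://api.firecrawl.dev/v0/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev",
      "pageOptions": { "includeHtml": true, "waitFor": 1000 }
    }'

# v1: formats plus root-level options
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev",
      "formats": ["markdown", "html"],
      "waitFor": 1000
    }'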

/crawl endpoint

We’ve also updated the /crawl endpoint on v1. Check out the improved request body below:

{
  "url": "<string>",
  "excludePaths": ["<string>"],
  "includePaths": ["<string>"],
  "maxDepth": 2,
  "ignoreSitemap": true,
  "limit": 10,
  "allowBackwardLinks": true,
  "allowExternalLinks": true,
  "scrapeOptions": {
    // same options as in /scrape
    "formats": ["markdown", "html", "rawHtml", "screenshot", "links"],
    "headers": { "<key>": "<value>" },
    "includeTags": ["<string>"],
    "excludeTags": ["<string>"],
    "onlyMainContent": true,
    "waitFor": 123
  }
}

Details on the new request body

The table below outlines the changes to the request body parameters for the /crawl endpoint in V1.

| Parameter | Change | Description |
| --- | --- | --- |
| pageOptions | Renamed | Renamed to scrapeOptions. |
| includes | Moved and Renamed | Moved to root level and renamed to includePaths. |
| excludes | Moved and Renamed | Moved to root level and renamed to excludePaths. |
| allowBackwardCrawling | Moved and Renamed | Moved to root level and renamed to allowBackwardLinks. |
| allowExternalLinks | Moved | Moved to root level. |
| maxDepth | Moved | Moved to root level. |
| ignoreSitemap | Moved | Moved to root level. |
| limit | Moved | Moved to root level. |
| crawlerOptions | Removed | No longer needed; the crawl options were moved to root level. |
| timeout | Removed | Use timeout in scrapeOptions instead. |
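
As with /scrape, here is a before/after sketch of the /crawl changes; the v0 shape is reconstructed from the table above, and the paths and limits are illustrative:

cURL
# v0: options split across crawlerOptions and pageOptions
curl -X POST https://api.firecrawl.dev/v0/crawl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "crawlerOptions": { "includes": ["/blog/*"], "limit": 10 },
      "pageOptions": { "onlyMainContent": true }
    }'

# v1: crawl options at root level, scrape options under scrapeOptions
curl -X POST https://api.firecrawl.dev/v1/crawl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "includePaths": ["/blog/*"],
      "limit": 10,
      "scrapeOptions": { "onlyMainContent": true }
    }'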