# Welcome to Firecrawl

Firecrawl is an API service that takes a URL, crawls it, and converts it into clean markdown. We crawl all accessible subpages and give you clean markdown for each. No sitemap required.

## How to use it?

We provide an easy-to-use API with our hosted version. You can find the playground and documentation here. You can also self-host the backend if you’d like.

Self-host: To self-host, refer to the guide here.

## API Key

To use the API, you need to sign up on Firecrawl and get an API key.

## Crawling

Used to crawl a URL and all accessible subpages. This submits a crawl job and returns a job ID to check the status of the crawl.

### Installation
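
For the Python SDK, install the `firecrawl-py` package from PyPI:

```bash
pip install firecrawl-py
```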

### Usage
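
A minimal sketch with the Python SDK; the `crawlerOptions` values and the target URL are illustrative, and you should replace the API key with your own:

```python
from firecrawl import FirecrawlApp

# Create a client with your Firecrawl API key
app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Crawl a URL and all accessible subpages; wait_until_done=True blocks
# until the crawl job finishes and returns the scraped pages directly
crawl_result = app.crawl_url(
    "https://docs.firecrawl.dev",
    params={"crawlerOptions": {"excludes": ["blog/*"]}},
    wait_until_done=True,
)

# One entry per crawled page, each with clean markdown and metadata
for page in crawl_result:
    print(page["markdown"])
```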

If you are not using the SDK, or prefer to use a webhook or a different polling method, you can set `wait_until_done` to `false`. This will return a `jobId` instead of the crawl data.
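
For instance, with the client above (the exact return shape may vary by SDK version):

```python
# Submit the crawl job without blocking; returns a job ID to poll later
job = app.crawl_url("https://docs.firecrawl.dev", wait_until_done=False)
print(job["jobId"])  # e.g. "1234-5678-9101"
```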

For cURL, `/crawl` will always return a `jobId`, which you can use to check the status of the crawl.
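
A sketch of the request against the hosted API (the `v0` endpoint path is an assumption; substitute your own API key):

```bash
curl -X POST https://api.firecrawl.dev/v0/crawl \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer fc-YOUR_API_KEY' \
  -d '{"url": "https://docs.firecrawl.dev"}'
```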

{ "jobId": "1234-5678-9101" }

## Check Crawl Job

Used to check the status of a crawl job and get its result.
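
A sketch using the Python SDK's `check_crawl_status`, reusing the client from the Usage section:

```python
# Poll the crawl job by its ID; "status" becomes "completed" once all
# pages have been scraped, and "data" then holds one entry per page
status = app.check_crawl_status("1234-5678-9101")
print(status["status"])  # e.g. "active" or "completed"
```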

### Response

```json
{
  "status": "completed",
  "current": 22,
  "total": 22,
  "data": [
    {
      "content": "Raw Content",
      "markdown": "# Markdown Content",
      "provider": "web-scraper",
      "metadata": {
        "title": "Firecrawl | Scrape the web reliably for your LLMs",
        "description": "AI for CX and Sales",
        "language": null,
        "sourceURL": "https://docs.firecrawl.dev/"
      }
    }
  ]
}
```

## Scraping

To scrape a single URL, use the `scrape_url` method. It takes the URL as a parameter and returns the scraped data as a dictionary.
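
A minimal sketch, reusing the client from the Usage section:

```python
# Scrape a single URL; the returned dictionary includes the markdown,
# raw content, and page metadata shown in the response below
scraped = app.scrape_url("https://docs.firecrawl.dev")
print(scraped["markdown"])
```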

### Response

```json
{
  "success": true,
  "data": {
    "markdown": "<string>",
    "content": "<string>",
    "html": "<string>",
    "rawHtml": "<string>",
    "metadata": {
      "title": "<string>",
      "description": "<string>",
      "language": "<string>",
      "sourceURL": "<string>",
      "<any other metadata>": "<string>",
      "pageStatusCode": 123,
      "pageError": "<string>"
    },
    "llm_extraction": {},
    "warning": "<string>"
  }
}
```

## Extraction

With LLM extraction, you can easily extract structured data from any URL. We support pydantic schemas to make it even easier for you. Here is how to use it:
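
A sketch assuming the `llm-extraction` mode of `scrape_url` with a pydantic schema; the class names, fields, and example URL are illustrative:

```python
from typing import List
from pydantic import BaseModel, Field

# Define the shape of the data we want the LLM to extract
class ArticleSchema(BaseModel):
    title: str
    points: int
    by: str
    commentsURL: str

class TopArticlesSchema(BaseModel):
    top: List[ArticleSchema] = Field(..., description="Top 5 stories")

data = app.scrape_url(
    "https://news.ycombinator.com",
    {
        "extractorOptions": {
            # Pass the JSON schema of the pydantic model to the extractor
            "extractionSchema": TopArticlesSchema.model_json_schema(),
            "mode": "llm-extraction",
        },
        "pageOptions": {"onlyMainContent": True},
    },
)

# The structured output lands under the llm_extraction key of the response
print(data["llm_extraction"])
```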

## Contributing

We love contributions! Please read our contributing guide before submitting a pull request.