Canonical Firecrawl Elixir source of truth for agents. Generated from SDK source and the v2 OpenAPI spec.

Install

Add to the deps list in mix.exs:

def deps do
  [
    {:firecrawl, "~> 1.0.0"}
  ]
end

Authenticate

# config/runtime.exs or config.exs
config :firecrawl, api_key: System.get_env("FIRECRAWL_API_KEY")

# or pass api_key per call
{:ok, res} = Firecrawl.search_and_scrape([query: "site:docs.firecrawl.dev webhook retries"], api_key: "fc-your-api-key")

When To Use What

  • search: use when you start with a query and need discovery.
  • scrape: use when you already have a URL and want page content.
  • interact: use when the page needs clicks, forms, or post-scrape browser actions.

Search

Why use it

Use search to discover relevant pages from a query, then pick URLs to scrape or interact with. You can constrain results to a site with site:, for example site:docs.firecrawl.dev crawl webhooks.
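As a sketch, a site-constrained query can be assembled from a domain and search terms with plain string interpolation (the helper below is illustrative, not part of the SDK):

```elixir
# Illustrative helper (not part of the SDK): build a site-constrained query string.
site_query = fn domain, terms -> "site:#{domain} #{terms}" end

query = site_query.("docs.firecrawl.dev", "crawl webhooks")
# => "site:docs.firecrawl.dev crawl webhooks"
```

The resulting string is passed as the query: parameter, for example Firecrawl.search_and_scrape(query: query).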

Preferred SDK method

Firecrawl.search_and_scrape(params \\ [], opts \\ [])

Simple Example

{:ok, res} = Firecrawl.search_and_scrape(query: "site:docs.firecrawl.dev webhook retries")

Complex Example

{:ok, res} = Firecrawl.search_and_scrape(
  query: "site:docs.firecrawl.dev crawl webhooks",
  sources: [:web, :news],
  categories: [:research],
  limit: 10,
  tbs: "qdr:m",
  location: "San Francisco,California,United States",
  country: "US",
  ignore_invalid_urls: true,
  timeout: 60_000,
  scrape_options: [
    formats: [
      "markdown",
      "links",
      %{type: "json", prompt: "Extract title and key endpoints."}
    ],
    only_main_content: true,
    include_tags: ["main", "article"],
    exclude_tags: ["nav", "footer"],
    wait_for: 1000
  ]
)

Parameters

  • query
    • Type: string
    • Use when: you need a search query.
    • Notes: use site:example.com to limit results to a domain.
  • sources
    • Type: list of atoms, strings, or maps
    • Use when: you want to control which sources are searched.
    • Confirmed values:
      • "web" or :web
      • "news" or :news
      • "images" or :images
      • %{type: "web" | "news" | "images"}
  • categories
    • Type: list of atoms, strings, or maps
    • Use when: you want to filter results by category.
    • Confirmed values:
      • "github" or :github
      • "research" or :research
      • "pdf" or :pdf
      • %{type: "github" | "research" | "pdf"}
  • limit
    • Type: integer
    • Use when: you want to cap results.
  • tbs
    • Type: string
    • Use when: you need a time-based filter (for example qdr:d for the past day, qdr:w for the past week, or a combined value such as sbd:1,qdr:m).
  • location
    • Type: string
    • Use when: you want localized results.
  • country
    • Type: string
    • Use when: you want ISO 3166-1 alpha-2 targeting (for example "US").
  • ignore_invalid_urls
    • Type: boolean
    • Use when: you want to drop URLs that cannot be scraped by other endpoints.
  • timeout
    • Type: integer
    • Use when: you need a request timeout in milliseconds.
  • enterprise
    • Type: list of strings
    • Use when: you need enterprise search controls.
    • Confirmed values:
      • "zdr": end-to-end zero data retention
      • "anon": anonymized zero data retention
  • scrape_options
    • Type: keyword list
    • Use when: you want to scrape each search result (see Scrape parameters for fields).

Scrape

Why use it

Use scrape when you already have a URL and want structured content in one or more formats.

Preferred SDK method

Firecrawl.scrape_and_extract_from_url(params \\ [], opts \\ [])

Simple Example

{:ok, res} = Firecrawl.scrape_and_extract_from_url(
  url: "https://docs.firecrawl.dev",
  formats: ["markdown"]
)

Complex Example

{:ok, res} = Firecrawl.scrape_and_extract_from_url(
  url: "https://example.com/pricing",
  formats: [
    "markdown",
    "links",
    %{type: "json", prompt: "Extract plan names and prices."},
    %{type: "screenshot", fullPage: true, quality: 80, viewport: %{width: 1280, height: 720}},
    %{type: "changeTracking", modes: ["git-diff"], tag: "pricing"}
  ],
  headers: %{"User-Agent" => "FirecrawlDocsBot/1.0"},
  only_main_content: true,
  wait_for: 1000,
  parsers: [%{type: "pdf", mode: "auto", maxPages: 5}],
  actions: [
    %{type: "click", selector: "#accept"},
    %{type: "wait", milliseconds: 750},
    %{type: "scrape"}
  ],
  location: [country: "US", languages: ["en-US"]],
  remove_base64_images: true,
  block_ads: true,
  proxy: :auto,
  max_age: 86_400_000,
  min_age: 1,
  store_in_cache: true,
  profile: [name: "docs-session", save_changes: true],
  zero_data_retention: false
)

Parameters

  • url
    • Type: string
    • Use when: you want to scrape a specific page.
  • formats
    • Type: list of format strings or format maps
    • Use when: you want multiple output formats.
    • Confirmed format strings:
      • "markdown": markdown content
      • "html": cleaned HTML
      • "rawHtml": raw HTML
      • "links": page links
      • "images": image URLs
      • "screenshot": screenshot output
      • "summary": summary output
      • "changeTracking": change tracking output
      • "json": JSON extraction
      • "branding": branding profile output
      • "audio": audio extraction
    • Format map fields:
      • type: one of the format strings above
      • prompt, schema: JSON extraction options for type: "json"
      • modes, schema, prompt, tag: change tracking options for type: "changeTracking"
      • fullPage, quality, viewport: screenshot options for type: "screenshot"
  • headers
    • Type: map
    • Use when: you need custom request headers.
  • include_tags
    • Type: list of strings
    • Use when: you want to include only specific HTML tags.
  • exclude_tags
    • Type: list of strings
    • Use when: you want to exclude specific HTML tags.
  • only_main_content
    • Type: boolean
    • Use when: you want to strip nav, footer, and other boilerplate.
  • timeout
    • Type: integer
    • Use when: you need a timeout in milliseconds.
  • wait_for
    • Type: integer
    • Use when: you need to wait for the page to render (milliseconds).
  • mobile
    • Type: boolean
    • Use when: you want a mobile viewport.
  • parsers
    • Type: list of parser strings or maps
    • Use when: you need file parsing controls.
    • Confirmed values:
      • "pdf"
      • %{type: "pdf", mode: "fast" | "auto" | "ocr", maxPages: integer}
  • actions
    • Type: list of action maps
    • Use when: you need lightweight pre-scrape actions.
    • Confirmed action types:
      • wait: milliseconds or selector required
      • screenshot: fullPage, quality, viewport optional
      • click: selector required; all optional (clicks every matching element)
      • write: text required (click to focus the input first)
      • press: key required
      • scroll: direction (up or down) required, selector optional
      • scrape: no additional fields
      • executeJavascript: script required
      • pdf: format (A0, A1, A2, A3, A4, A5, A6, Letter, Legal, Tabloid, Ledger), landscape, scale optional
  • location
    • Type: keyword list (for example country: and languages:)
    • Use when: you need geo or language-aware scraping.
  • skip_tls_verification
    • Type: boolean
    • Use when: you need to skip TLS verification.
  • remove_base64_images
    • Type: boolean
    • Use when: you want to drop base64 images from markdown output.
  • block_ads
    • Type: boolean
    • Use when: you want ad and cookie popup blocking.
  • proxy
    • Type: atom or string
    • Use when: you need proxy control.
    • Confirmed values: :basic, :enhanced, :auto
  • max_age
    • Type: integer
    • Use when: you want cached data up to a maximum age (milliseconds).
  • min_age
    • Type: integer
    • Use when: you want cached data only if it is at least this old (milliseconds).
  • store_in_cache
    • Type: boolean
    • Use when: you want Firecrawl to cache the result.
  • profile
    • Type: keyword list with name: and optional save_changes: (or saveChanges:)
    • Use when: you want a persistent browser profile shared across scrapes and interactions.
  • zero_data_retention
    • Type: boolean
    • Use when: you want zero data retention for this scrape.
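Both max_age and min_age are expressed in milliseconds. As a sketch, Erlang's :timer conversion helpers (standard library, available from Elixir without imports) avoid hand-written magic numbers:

```elixir
# :timer.hours/1 and :timer.minutes/1 return milliseconds.
one_day_ms = :timer.hours(24)   # => 86_400_000
one_min_ms = :timer.minutes(1)  # => 60_000

# For example, to accept cached data up to a day old:
#   Firecrawl.scrape_and_extract_from_url(url: "https://example.com", max_age: one_day_ms)
```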

Interact

Why use it

Use interact when a page requires browser actions or code execution after a scrape starts.

Preferred SDK method

Firecrawl.interact_with_scrape_browser_session(job_id, params \\ [], opts \\ [])

Simple Example

{:ok, res} = Firecrawl.interact_with_scrape_browser_session(
  "<scrapeJobId>",
  code: "console.log(await page.title());",
  language: :node,
  timeout: 60
)

Complex Example

{:ok, res} = Firecrawl.interact_with_scrape_browser_session(
  "<scrapeJobId>",
  code: "// Use Playwright page methods here",
  language: :node,
  timeout: 120
)

Parameters

  • job_id
    • Type: string
    • Use when: you have a scrape job ID.
  • code
    • Type: string
    • Use when: you want to run code in the browser session.
  • language
    • Type: atom or string
    • Use when: you need a specific runtime.
    • Confirmed values: :python, :node, :bash (or the string equivalents)
  • timeout
    • Type: integer
    • Use when: you need an execution timeout in seconds.
  • origin
    • Type: string
    • Use when: you want an optional origin label for execution telemetry.

Stop interactive session

Firecrawl.stop_interactive_scrape_browser_session(job_id, opts \\ []) issues DELETE /scrape/{jobId}/interact and tears down the browser session for that scrape job. A bang variant stop_interactive_scrape_browser_session!/2 is also available.

{:ok, res} = Firecrawl.stop_interactive_scrape_browser_session("<scrapeJobId>")

Notes

  • The Elixir client is OpenAPI-shaped; function names and parameter keys are generated from the spec.
  • Each public function has a bang (!) variant that raises on error instead of returning {:error, _}.
  • This SDK exposes code-based interactions only (no prompt parameter on interact_with_scrape_browser_session).
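The bang convention noted above can be illustrated with a stand-in function (the Demo module below is hypothetical, not part of the SDK):

```elixir
# Illustration of the {:ok, _} / bang convention described above.
# Demo.fetch/0 stands in for any Firecrawl call; it is not the SDK itself.
defmodule Demo do
  def fetch, do: {:ok, %{"status" => "completed"}}

  # Bang variant: unwrap {:ok, result}, or raise on {:error, reason}.
  def fetch! do
    case fetch() do
      {:ok, res} -> res
      {:error, reason} -> raise "request failed: #{inspect(reason)}"
    end
  end
end

Demo.fetch!()
# => %{"status" => "completed"}
```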

Source Of Truth

  • firecrawl/apps/elixir-sdk/mix.exs
  • firecrawl/apps/elixir-sdk/lib/firecrawl.ex
  • firecrawl-docs/api-reference/v2-openapi.json