Every function has a bang (`!`) variant that raises on error instead of returning `{:error, ...}` tuples.
## Installation

Add `firecrawl` to your list of dependencies in `mix.exs` and configure your API key:
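A typical dependency entry might look like this — the `:firecrawl` package name is taken from the install instructions, but check Hex for the current version:

```elixir
# mix.exs — the version requirement below is illustrative
defp deps do
  [
    {:firecrawl, "~> 0.1"}
  ]
end
```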
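A minimal config sketch — the `:api_key` key under the `:firecrawl` app is an assumption:

```elixir
# config/config.exs
import Config

config :firecrawl, api_key: System.get_env("FIRECRAWL_API_KEY")
```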
## Usage
- Get an API key from firecrawl.dev
- Set the API key in your application config or pass it as an option to any function.
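Passing the key per call instead of configuring it globally might look like this — the `:api_key` option name and the top-level `Firecrawl` module are assumptions:

```elixir
# Per-call API key instead of application config
{:ok, page} =
  Firecrawl.scrape_and_extract_from_url("https://example.com",
    api_key: "fc-YOUR-API-KEY"
  )
```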
## Scraping a URL

Scrape a single URL with `scrape_and_extract_from_url`. It returns the page content as structured data, including markdown, metadata, and any other formats you request.
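A sketch of a basic scrape, assuming a top-level `Firecrawl` module; the `:formats` option mirrors the Firecrawl REST API and is an assumption:

```elixir
# Request markdown plus raw HTML for one page
{:ok, page} =
  Firecrawl.scrape_and_extract_from_url("https://firecrawl.dev",
    formats: ["markdown", "html"]
  )

# Inspect the returned map for markdown, metadata, etc.
IO.inspect(page)
```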
## Crawl a Website

To crawl a website, use `crawl_urls`. It takes the starting URL and optional parameters such as page limit, allowed domains, and output format.
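A crawl sketch — the `:limit` and `:scrape_options` names mirror the Firecrawl REST API and are assumptions:

```elixir
# Crawl up to 10 pages starting from the given URL
{:ok, result} =
  Firecrawl.crawl_urls("https://firecrawl.dev",
    limit: 10,
    scrape_options: %{formats: ["markdown"]}
  )
```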
## Start a Crawl

Start a crawl job and get back the job ID without blocking:
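The function table lists only `crawl_urls` for crawling, so this sketch assumes `crawl_urls` returns the job id immediately; the `"id"` response key is an assumption:

```elixir
# Kick off a crawl and capture the job id for later status checks
{:ok, %{"id" => job_id}} = Firecrawl.crawl_urls("https://firecrawl.dev", limit: 25)
```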
## Checking Crawl Status

Check the status of a crawl job with `get_crawl_status`:
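A status check sketch, assuming a top-level `Firecrawl` module; the response key names are assumptions:

```elixir
{:ok, status} = Firecrawl.get_crawl_status(job_id)
# status may include the job state, page counts, and scraped data
IO.inspect(status)
```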
## Cancelling a Crawl

Cancel a crawl job with `cancel_crawl`:
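A cancellation sketch, assuming a top-level `Firecrawl` module:

```elixir
{:ok, _cancelled} = Firecrawl.cancel_crawl(job_id)
```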
## Map a Website

Use `map_urls` to generate a list of URLs from a website:
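A mapping sketch — the response shape is an assumption:

```elixir
# Returns the URLs discovered on the site
{:ok, links} = Firecrawl.map_urls("https://firecrawl.dev")
IO.inspect(links)
```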
## Search

Search the web and optionally scrape results:
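A search sketch using `search_and_scrape` from the function table; the `:limit` and `:scrape_options` names are assumptions:

```elixir
# Search, then scrape the top results as markdown
{:ok, results} =
  Firecrawl.search_and_scrape("elixir web scraping",
    limit: 5,
    scrape_options: %{formats: ["markdown"]}
  )
```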
## Batch Scrape

Scrape multiple URLs in a single batch job:
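A batch sketch using `scrape_and_extract_from_urls` and `get_batch_scrape_status` from the function table; option names and the `"id"` response key are assumptions:

```elixir
{:ok, %{"id" => batch_id}} =
  Firecrawl.scrape_and_extract_from_urls(
    ["https://firecrawl.dev", "https://hex.pm"],
    formats: ["markdown"]
  )

{:ok, status} = Firecrawl.get_batch_scrape_status(batch_id)
```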
## Agent

Start an agentic data extraction task:
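An agent sketch using `start_agent` and `get_agent_status` from the function table; the `:prompt` option and response shape are assumptions:

```elixir
{:ok, %{"id" => agent_id}} =
  Firecrawl.start_agent(prompt: "Find the pricing tiers on firecrawl.dev")

{:ok, status} = Firecrawl.get_agent_status(agent_id)
```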
## Browser

Launch cloud browser sessions and execute code remotely.

### Create a Session
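A session sketch using `create_browser_session` from the function table; the `"id"` response key is an assumption:

```elixir
{:ok, session} = Firecrawl.create_browser_session()
session_id = session["id"]
```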
### Execute Code
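A remote-execution sketch using `execute_browser_code` from the function table; the `:code` option name and the snippet syntax it accepts are assumptions:

```elixir
{:ok, result} =
  Firecrawl.execute_browser_code(session_id,
    code: "await page.goto('https://example.com'); await page.title()"
  )
```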
### Profiles

Save and reuse browser state (cookies, localStorage, etc.) across sessions:
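No profile-specific function appears in the table, so this sketch assumes a hypothetical `:profile` option on `create_browser_session`; treat it as illustrative only:

```elixir
# Hypothetical :profile option — not confirmed by the function table
{:ok, session} = Firecrawl.create_browser_session(profile: "my-profile")
```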
### List & Close Sessions
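A sketch using `list_browser_sessions` and `delete_browser_session` from the function table:

```elixir
{:ok, sessions} = Firecrawl.list_browser_sessions()
{:ok, _} = Firecrawl.delete_browser_session(session_id)
```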
## Self-Hosted Instances

To use a self-hosted Firecrawl instance, pass the `base_url` option:
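A sketch with the `base_url` option; the localhost address is an illustrative placeholder for wherever your instance runs:

```elixir
{:ok, page} =
  Firecrawl.scrape_and_extract_from_url("https://example.com",
    base_url: "http://localhost:3002"
  )
```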
## Error Handling

Non-bang functions return `{:ok, response}` or `{:error, exception}`. Bang variants raise on error. NimbleOptions validates all parameters before the request is sent, catching typos, missing required fields, and type errors immediately.
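A sketch of both styles, assuming a top-level `Firecrawl` module:

```elixir
# Tuple-returning variant: pattern match on the result
case Firecrawl.scrape_and_extract_from_url("https://example.com") do
  {:ok, page} -> IO.inspect(page)
  {:error, exception} -> IO.puts(Exception.message(exception))
end

# Bang variant: raises on any error
page = Firecrawl.scrape_and_extract_from_url!("https://example.com")
```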
## All Available Functions
| Function | Description |
|---|---|
| `scrape_and_extract_from_url` | Scrape a single URL |
| `scrape_and_extract_from_urls` | Batch scrape multiple URLs |
| `crawl_urls` | Crawl a website |
| `get_crawl_status` | Check crawl job status |
| `get_crawl_errors` | Get crawl job errors |
| `get_active_crawls` | List active crawls |
| `cancel_crawl` | Cancel a crawl job |
| `map_urls` | Map URLs on a website |
| `search_and_scrape` | Search and scrape results |
| `start_agent` | Start an agent extraction task |
| `get_agent_status` | Check agent job status |
| `cancel_agent` | Cancel an agent job |
| `create_browser_session` | Create a browser session |
| `execute_browser_code` | Execute code in a browser session |
| `list_browser_sessions` | List browser sessions |
| `delete_browser_session` | Delete a browser session |
| `get_batch_scrape_status` | Check batch scrape status |
| `get_batch_scrape_errors` | Get batch scrape errors |
| `cancel_batch_scrape` | Cancel a batch scrape |
| `get_credit_usage` | Get remaining credits |
Every function has a bang (`!`) variant (e.g., `scrape_and_extract_from_url!`) that raises instead of returning error tuples.
For full API documentation, see hexdocs.pm/firecrawl.
