Rust
Firecrawl Rust SDK is a library to help you easily scrape and crawl websites, and output the data in a format ready for use with large language models (LLMs).
Installation
To install the Firecrawl Rust SDK, add the following to your Cargo.toml:
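A minimal sketch of the dependencies section; the version constraints and the tokio/serde entries are assumptions, so check crates.io for the versions your project actually needs:

```toml
[dependencies]
# Firecrawl Rust SDK (version constraint is an assumption; see crates.io for the latest release)
firecrawl = "^1"
# The SDK's methods are async, so an async runtime such as tokio is required
tokio = { version = "^1", features = ["full"] }
# serde/serde_json are used for building JSON Schemas with the json! macro
serde = { version = "^1", features = ["derive"] }
serde_json = "^1"
```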
Usage
First, you need to obtain an API key from firecrawl.dev. Then, you need to initialize the FirecrawlApp. From there, you can access functions like FirecrawlApp::scrape_url, which let you use our API.
Here’s an example of how to use the SDK in Rust:
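The sketch below assumes the v1 layout of the firecrawl crate, with FirecrawlApp::new taking the API key and ScrapeOptions/ScrapeFormats living under firecrawl::scrape; names may differ in other releases:

```rust
use firecrawl::FirecrawlApp;
use firecrawl::scrape::{ScrapeFormats, ScrapeOptions};

#[tokio::main]
async fn main() {
    // Initialize the client with your API key from firecrawl.dev
    let app = FirecrawlApp::new("fc-YOUR-API-KEY")
        .expect("Failed to initialize FirecrawlApp");

    // Request Markdown output for a single page
    let options = ScrapeOptions {
        formats: vec![ScrapeFormats::Markdown].into(),
        ..Default::default()
    };

    match app.scrape_url("https://firecrawl.dev", options).await {
        Ok(document) => println!("{}", document.markdown.unwrap_or_default()),
        Err(e) => eprintln!("Scrape failed: {}", e),
    }
}
```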
Scraping a URL
To scrape a single URL, use the scrape_url method. It takes the URL as a parameter and returns the scraped data as a Document.
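A sketch, assuming app is an initialized FirecrawlApp (see Usage above); the formats field and the optional markdown/html fields on Document reflect the v1 crate and may differ in your version:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};
use firecrawl::scrape::{ScrapeFormats, ScrapeOptions};

async fn scrape_example(app: &FirecrawlApp) -> Result<(), FirecrawlError> {
    // Ask for both Markdown and HTML representations of the page
    let options = ScrapeOptions {
        formats: vec![ScrapeFormats::Markdown, ScrapeFormats::HTML].into(),
        ..Default::default()
    };

    // scrape_url returns a Document; each requested format is an optional field
    let document = app.scrape_url("https://firecrawl.dev", options).await?;
    println!("Markdown:\n{}", document.markdown.unwrap_or_default());
    Ok(())
}
```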
Scraping with Extract
With Extract, you can easily extract structured data from any URL. You need to specify your schema in the JSON Schema format, using the serde_json::json! macro.
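A sketch under the assumption that the crate exposes ScrapeFormats::Extract and an ExtractOptions type with a schema field, as in some v1 releases; the schema itself is only an illustration:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};
use firecrawl::scrape::{ExtractOptions, ScrapeFormats, ScrapeOptions};
use serde_json::json;

async fn extract_example(app: &FirecrawlApp) -> Result<(), FirecrawlError> {
    // Describe the shape of the data you want back, in JSON Schema form
    let schema = json!({
        "type": "object",
        "properties": {
            "top": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": { "type": "string" },
                        "points": { "type": "number" },
                        "by": { "type": "string" }
                    },
                    "required": ["title", "points", "by"]
                },
                "description": "Top stories on Hacker News"
            }
        },
        "required": ["top"]
    });

    // Request the Extract format and attach the schema
    let options = ScrapeOptions {
        formats: vec![ScrapeFormats::Extract].into(),
        extract: ExtractOptions {
            schema: schema.into(),
            ..Default::default()
        }.into(),
        ..Default::default()
    };

    let document = app.scrape_url("https://news.ycombinator.com", options).await?;
    println!("Extracted data:\n{:#?}", document.extract);
    Ok(())
}
```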
Crawling a Website
To crawl a website, use the crawl_url method. This will wait for the crawl to complete, which may take a long time depending on your starting URL and your options.
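A sketch, assuming CrawlOptions/CrawlScrapeOptions under firecrawl::crawl and a response carrying data and credits_used, as in the v1 crate; field names may differ in your version:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};
use firecrawl::crawl::{CrawlOptions, CrawlScrapeFormats, CrawlScrapeOptions};

async fn crawl_example(app: &FirecrawlApp) -> Result<(), FirecrawlError> {
    let options = CrawlOptions {
        // Formats to scrape for every page visited during the crawl
        scrape_options: CrawlScrapeOptions {
            formats: vec![CrawlScrapeFormats::Markdown, CrawlScrapeFormats::HTML].into(),
            ..Default::default()
        }.into(),
        // Cap the number of pages crawled
        limit: 100.into(),
        ..Default::default()
    };

    // crawl_url waits until the crawl job finishes before returning
    let crawl_result = app.crawl_url("https://firecrawl.dev", options).await?;
    println!(
        "Crawled {} pages, used {} credits",
        crawl_result.data.len(),
        crawl_result.credits_used
    );
    Ok(())
}
```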
Crawling asynchronously
To crawl without waiting for the result, use the crawl_url_async method. It takes the same parameters, but it returns a CrawlAsyncResponse struct containing the crawl’s ID. You can use that ID with the check_crawl_status method to check the status at any time. Note that completed crawls are deleted after 24 hours.
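A sketch, assuming check_crawl_status returns a status struct with a CrawlStatusTypes field; the names reflect the v1 crate and may differ in your version:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};
use firecrawl::crawl::CrawlStatusTypes;

async fn async_crawl_example(app: &FirecrawlApp) -> Result<(), FirecrawlError> {
    // Kick off the crawl with default options (None) and keep only the job ID
    let crawl_id = app.crawl_url_async("https://firecrawl.dev", None).await?.id;
    println!("Started crawl {}", crawl_id);

    // Later (within 24 hours), poll the job by its ID
    let status = app.check_crawl_status(crawl_id).await?;
    if status.status == CrawlStatusTypes::Completed {
        println!("Crawl is done: {:#?}", status.data);
    } else {
        println!("Crawl still in progress: {:?}", status.status);
    }
    Ok(())
}
```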
Map a URL (Alpha)
Map all associated links from a starting URL.
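A sketch, assuming map_url takes a starting URL plus optional options and returns the discovered links:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};

async fn map_example(app: &FirecrawlApp) -> Result<(), FirecrawlError> {
    // Map the site starting from this URL, using default options (None)
    let links = app.map_url("https://firecrawl.dev", None).await?;
    println!("Mapped URLs: {:#?}", links);
    Ok(())
}
```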
Error Handling
The SDK handles errors returned by the Firecrawl API and by our dependencies, and combines them into the FirecrawlError enum, implementing Error, Debug and Display. All of our methods return a Result<T, FirecrawlError>.
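A sketch of handling that Result, assuming app is an initialized FirecrawlApp as shown under Usage:

```rust
use firecrawl::{FirecrawlApp, FirecrawlError};
use firecrawl::scrape::ScrapeOptions;

async fn error_handling_example(app: &FirecrawlApp) {
    // Every SDK method returns Result<T, FirecrawlError>, so errors can be
    // matched on, printed (via Display), or propagated with the ? operator.
    match app.scrape_url("https://example.com", ScrapeOptions::default()).await {
        Ok(document) => println!("{}", document.markdown.unwrap_or_default()),
        Err(e) => eprintln!("Request failed: {}", e),
    }
}
```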