# Installation

The official .NET SDK is maintained in the Firecrawl monorepo at `apps/.net-sdk`.

To install the Firecrawl .NET SDK, add the NuGet package:
**.NET CLI**

```shell
dotnet add package firecrawl-sdk
```

**Package Manager**

```shell
Install-Package firecrawl-sdk
```

**PackageReference**

```xml
<PackageReference Include="firecrawl-sdk" Version="1.0.0" />
```
Requires .NET 8.0 or later.
## Usage

- Get an API key from [firecrawl.dev](https://firecrawl.dev)
- Set the API key as an environment variable named `FIRECRAWL_API_KEY`, or pass it to the `FirecrawlClient` constructor
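For example, on macOS or Linux the key can be exported in the shell before running your application (the key value below is a placeholder):

```shell
# Make the API key available to the SDK for this shell session
export FIRECRAWL_API_KEY="fc-your-api-key"
```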
Here is a quick example using the current SDK API surface:
```csharp
using Firecrawl;
using Firecrawl.Models;

var client = new FirecrawlClient("fc-your-api-key");

// Scrape a single page
var doc = await client.ScrapeAsync("https://firecrawl.dev",
    new ScrapeOptions { Formats = new List<object> { "markdown" } });
Console.WriteLine(doc.Markdown);

// Crawl a website
var job = await client.CrawlAsync("https://firecrawl.dev",
    new CrawlOptions { Limit = 5 });
Console.WriteLine($"Crawled pages: {job.Data?.Count ?? 0}");
```
## Scraping a URL

To scrape a single URL, use the `ScrapeAsync` method.
```csharp
using Firecrawl.Models;

var doc = await client.ScrapeAsync("https://firecrawl.dev",
    new ScrapeOptions
    {
        Formats = new List<object> { "markdown", "html" },
        OnlyMainContent = true,
        WaitFor = 5000
    });

Console.WriteLine(doc.Markdown);
Console.WriteLine(doc.Metadata?["title"]);
```
Extract structured JSON with `JsonFormat` via the scrape endpoint:
```csharp
using Firecrawl.Models;

var jsonFmt = new JsonFormat
{
    Prompt = "Extract the product name and price",
    Schema = new Dictionary<string, object>
    {
        ["type"] = "object",
        ["properties"] = new Dictionary<string, object>
        {
            ["name"] = new Dictionary<string, object> { ["type"] = "string" },
            ["price"] = new Dictionary<string, object> { ["type"] = "number" }
        }
    }
};

var doc = await client.ScrapeAsync("https://example.com/product",
    new ScrapeOptions
    {
        Formats = new List<object> { jsonFmt }
    });

Console.WriteLine(doc.Json);
```
## Crawling a Website

To crawl a website and wait for completion, use `CrawlAsync`. This method handles polling and pagination automatically.
```csharp
using Firecrawl.Models;

var job = await client.CrawlAsync("https://firecrawl.dev",
    new CrawlOptions
    {
        Limit = 50,
        MaxDiscoveryDepth = 3,
        ScrapeOptions = new ScrapeOptions
        {
            Formats = new List<object> { "markdown" }
        }
    });

Console.WriteLine($"Status: {job.Status}");
Console.WriteLine($"Progress: {job.Completed}/{job.Total}");

if (job.Data != null)
{
    foreach (var page in job.Data)
    {
        Console.WriteLine(page.Metadata?["sourceURL"]);
    }
}
```
### Start a Crawl

Start a crawl job without waiting for completion using `StartCrawlAsync`.
```csharp
using Firecrawl.Models;

var start = await client.StartCrawlAsync("https://firecrawl.dev",
    new CrawlOptions { Limit = 100 });

Console.WriteLine($"Job ID: {start.Id}");
```
### Checking Crawl Status

Check crawl progress with `GetCrawlStatusAsync`.

```csharp
var status = await client.GetCrawlStatusAsync(start.Id!);

Console.WriteLine($"Status: {status.Status}");
Console.WriteLine($"Progress: {status.Completed}/{status.Total}");
```
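If you started a job with `StartCrawlAsync`, you can combine it with `GetCrawlStatusAsync` in a simple polling loop. This is a minimal sketch: the `"scraping"` in-progress status string is an assumption about the API's reported status values, and a production loop would also add a timeout.

```csharp
// Poll the crawl status until it leaves the in-progress state
// (assumed here to be reported as "scraping").
var status = await client.GetCrawlStatusAsync(start.Id!);
while (status.Status == "scraping")
{
    await Task.Delay(TimeSpan.FromSeconds(2));
    status = await client.GetCrawlStatusAsync(start.Id!);
}
Console.WriteLine($"Final status: {status.Status}");
```

Note that `CrawlAsync` already does this polling for you; a manual loop is only useful when you need custom progress reporting between polls.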
### Cancelling a Crawl

Cancel a running crawl with `CancelCrawlAsync`.

```csharp
var result = await client.CancelCrawlAsync(start.Id!);
Console.WriteLine(result);
```
## Mapping a Website

Discover the links on a site using `MapAsync`.
```csharp
using Firecrawl.Models;

var data = await client.MapAsync("https://firecrawl.dev",
    new MapOptions
    {
        Limit = 100,
        Search = "blog"
    });

if (data.Links != null)
{
    foreach (var link in data.Links)
    {
        Console.WriteLine(link);
    }
}
```
## Searching the Web

Search the web with optional settings using `SearchAsync`.
```csharp
using Firecrawl.Models;

var results = await client.SearchAsync("firecrawl web scraping",
    new SearchOptions
    {
        Limit = 10,
        Location = "US"
    });

if (results.Web != null)
{
    foreach (var hit in results.Web)
    {
        Console.WriteLine($"{hit.Title} - {hit.Url}");
    }
}
```
## Batch Scraping

Scrape multiple URLs in parallel using `BatchScrapeAsync`. This method handles polling and pagination automatically.
```csharp
using Firecrawl.Models;

var urls = new List<string>
{
    "https://firecrawl.dev",
    "https://firecrawl.dev/blog"
};

var job = await client.BatchScrapeAsync(urls,
    new BatchScrapeOptions
    {
        Options = new ScrapeOptions
        {
            Formats = new List<object> { "markdown" }
        }
    });

if (job.Data != null)
{
    foreach (var doc in job.Data)
    {
        Console.WriteLine(doc.Markdown);
    }
}
```
### Batch Scrape with Idempotency Key

To ensure duplicate requests are not processed twice, pass an `IdempotencyKey`:
```csharp
var job = await client.BatchScrapeAsync(urls,
    new BatchScrapeOptions
    {
        IdempotencyKey = "my-unique-key",
        Options = new ScrapeOptions
        {
            Formats = new List<object> { "markdown" }
        }
    });
```
## Usage & Metrics
Check concurrency and remaining credits:
```csharp
using Firecrawl.Models;

var concurrency = await client.GetConcurrencyAsync();
Console.WriteLine($"Concurrency: {concurrency.Current}/{concurrency.MaxConcurrency}");

var credits = await client.GetCreditUsageAsync();
Console.WriteLine($"Remaining credits: {credits.RemainingCredits}");
```
## Async Support

All methods in the .NET SDK are asynchronous and return `Task<T>`. They accept a `CancellationToken` for cooperative cancellation.
```csharp
using Firecrawl.Models;

using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

var doc = await client.ScrapeAsync("https://example.com",
    new ScrapeOptions
    {
        Formats = new List<object> { "markdown" }
    },
    cancellationToken: cts.Token);

Console.WriteLine(doc.Markdown);
```
## Configuration

The `FirecrawlClient` constructor supports the following options:
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string?` | `FIRECRAWL_API_KEY` env var | Your Firecrawl API key |
| `apiUrl` | `string?` | `https://api.firecrawl.dev` (or `FIRECRAWL_API_URL`) | API base URL |
| `timeout` | `TimeSpan?` | 5 minutes | HTTP request timeout |
| `maxRetries` | `int` | 3 | Automatic retries for transient failures |
| `backoffFactor` | `double` | 0.5 | Exponential backoff factor in seconds |
| `httpClient` | `HttpClient?` | Built from `timeout` | Pre-configured `HttpClient` instance |
```csharp
using Firecrawl;

var client = new FirecrawlClient(
    apiKey: "fc-your-api-key",
    apiUrl: "https://api.firecrawl.dev",
    timeout: TimeSpan.FromMinutes(5),
    maxRetries: 3,
    backoffFactor: 0.5);
```
### Custom HTTP Client

You can pass a pre-configured `HttpClient` to control connection pooling, proxies, message handlers, and any other `HttpClient` feature. When an `httpClient` is provided, the `timeout` setting is ignored in favor of the client's own configuration.
```csharp
using System.Net;
using Firecrawl;

var handler = new HttpClientHandler
{
    Proxy = new WebProxy("http://proxy.example.com:8080"),
    UseProxy = true
};

var httpClient = new HttpClient(handler)
{
    Timeout = TimeSpan.FromSeconds(60)
};

var client = new FirecrawlClient(
    apiKey: "fc-your-api-key",
    httpClient: httpClient);
```
### Environment Variable Configuration

When constructor parameters are omitted, the SDK resolves configuration from environment variables:
```csharp
// Uses the FIRECRAWL_API_KEY and FIRECRAWL_API_URL environment variables
var client = new FirecrawlClient();
```
## Error Handling

The SDK throws typed exceptions from the `Firecrawl.Exceptions` namespace.
```csharp
using Firecrawl.Exceptions;
using Firecrawl.Models;

try
{
    var doc = await client.ScrapeAsync("https://example.com");
}
catch (AuthenticationException ex)
{
    Console.Error.WriteLine($"Auth failed: {ex.Message}");
}
catch (RateLimitException ex)
{
    Console.Error.WriteLine($"Rate limited: {ex.Message}");
}
catch (JobTimeoutException ex)
{
    Console.Error.WriteLine($"Job {ex.JobId} timed out after {ex.TimeoutSeconds}s");
}
catch (FirecrawlException ex)
{
    Console.Error.WriteLine($"Error {ex.StatusCode}: {ex.Message}");
}
```
The exception hierarchy:
| Exception | HTTP Code | When |
|---|---|---|
| `AuthenticationException` | 401 | Invalid or missing API key |
| `RateLimitException` | 429 | Too many requests |
| `JobTimeoutException` | — | Async job (crawl/batch scrape) did not complete in time |
| `FirecrawlException` | varies | Base exception for all other API errors |
Transient failures (408, 409, 502, and other 5xx errors) are retried automatically with exponential backoff before an exception is thrown.
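To illustrate what an exponential backoff schedule of this shape looks like, here is a sketch of the delay arithmetic only, assuming the common formula `delay = backoffFactor * 2^attempt`; the SDK's exact retry formula may differ:

```csharp
// With backoffFactor = 0.5, successive retry delays grow as
// 0.5s, 1s, 2s, ... (delay = backoffFactor * 2^attempt).
double backoffFactor = 0.5;
for (int attempt = 0; attempt < 3; attempt++)
{
    double delaySeconds = backoffFactor * Math.Pow(2, attempt);
    Console.WriteLine($"Retry {attempt + 1}: wait {delaySeconds}s");
}
```

Raising `backoffFactor` spaces retries further apart, which can help when you are frequently hitting rate limits.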