Installation
The official PHP SDK is maintained in the Firecrawl monorepo at apps/php-sdk.
To install the Firecrawl PHP SDK, add the dependency via Composer:
composer require firecrawl/firecrawl-php
Requires PHP 8.1 or later.
Laravel Integration
The SDK includes first-class Laravel support with auto-discovery. After installing the package, publish the configuration file:
php artisan vendor:publish --provider="Firecrawl\Laravel\FirecrawlServiceProvider"
Then add your API key to your .env file:
FIRECRAWL_API_KEY=fc-your-api-key
The following environment variables are supported:
| Variable | Default | Description |
|---|---|---|
| FIRECRAWL_API_KEY | — | Your Firecrawl API key (required) |
| FIRECRAWL_API_URL | https://api.firecrawl.dev | API base URL |
| FIRECRAWL_TIMEOUT | 300 | HTTP request timeout in seconds |
| FIRECRAWL_MAX_RETRIES | 3 | Automatic retries for transient failures |
| FIRECRAWL_BACKOFF_FACTOR | 0.5 | Exponential backoff factor in seconds |
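The published config file typically just maps these variables into Laravel's config repository. A minimal sketch of what config/firecrawl.php might contain (the exact key names are an assumption, check the published file in your application):
<?php
// config/firecrawl.php — sketch only; key names are assumptions, not the published file verbatim
return [
    'api_key' => env('FIRECRAWL_API_KEY'),
    'api_url' => env('FIRECRAWL_API_URL', 'https://api.firecrawl.dev'),
    'timeout' => env('FIRECRAWL_TIMEOUT', 300),
    'max_retries' => env('FIRECRAWL_MAX_RETRIES', 3),
    'backoff_factor' => env('FIRECRAWL_BACKOFF_FACTOR', 0.5),
];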
Usage
- Get an API key from firecrawl.dev
- Set the API key as an environment variable named FIRECRAWL_API_KEY, or pass it with FirecrawlClient::create(apiKey: ...)
Here is a quick example using the current SDK API surface:
use Firecrawl\Client\FirecrawlClient;
use Firecrawl\Models\CrawlOptions;
use Firecrawl\Models\ScrapeOptions;
$client = FirecrawlClient::fromEnv();
$doc = $client->scrape(
'https://firecrawl.dev',
ScrapeOptions::with(formats: ['markdown'])
);
$crawl = $client->crawl(
'https://firecrawl.dev',
CrawlOptions::with(limit: 5)
);
echo $doc->getMarkdown();
echo 'Crawled pages: ' . count($crawl->getData());
Using the Laravel Facade
In a Laravel application you can use the Firecrawl facade or dependency injection:
use Firecrawl\Client\FirecrawlClient;
use Firecrawl\Laravel\Facades\Firecrawl;
// Via Facade
$doc = Firecrawl::scrape('https://example.com');
// Via Dependency Injection
class ScrapeController
{
public function __construct(
private readonly FirecrawlClient $firecrawl,
) {}
public function index()
{
$doc = $this->firecrawl->scrape('https://example.com');
return response()->json(['markdown' => $doc->getMarkdown()]);
}
}
Scraping a URL
To scrape a single URL, use the scrape method.
use Firecrawl\Models\Document;
use Firecrawl\Models\ScrapeOptions;
$doc = $client->scrape(
'https://firecrawl.dev',
ScrapeOptions::with(
formats: ['markdown', 'html'],
onlyMainContent: true,
waitFor: 5000,
)
);
echo $doc->getMarkdown();
echo $doc->getMetadata()['title'] ?? '';
Extract structured JSON with JsonFormat via the scrape endpoint:
use Firecrawl\Models\JsonFormat;
use Firecrawl\Models\ScrapeOptions;
$jsonFmt = JsonFormat::with(
prompt: 'Extract the product name and price',
schema: [
'type' => 'object',
'properties' => [
'name' => ['type' => 'string'],
'price' => ['type' => 'number'],
],
],
);
$doc = $client->scrape(
'https://example.com/product',
ScrapeOptions::with(formats: [$jsonFmt])
);
print_r($doc->getJson());
Crawling a Website
To crawl a website and wait for completion, use crawl.
use Firecrawl\Models\CrawlOptions;
use Firecrawl\Models\ScrapeOptions;
$job = $client->crawl(
'https://firecrawl.dev',
CrawlOptions::with(
limit: 50,
maxDiscoveryDepth: 3,
scrapeOptions: ScrapeOptions::with(formats: ['markdown']),
)
);
echo 'Status: ' . $job->getStatus();
echo 'Progress: ' . $job->getCompleted() . '/' . $job->getTotal();
foreach ($job->getData() as $page) {
echo $page->getMetadata()['sourceURL'] ?? '';
}
Start a Crawl
Start a crawl without waiting for it to complete using startCrawl.
use Firecrawl\Models\CrawlOptions;
$start = $client->startCrawl(
'https://firecrawl.dev',
CrawlOptions::with(limit: 100)
);
echo 'Job ID: ' . $start->getId();
Checking Crawl Status
Check crawl progress with getCrawlStatus.
$status = $client->getCrawlStatus($start->getId());
echo 'Status: ' . $status->getStatus();
echo 'Progress: ' . $status->getCompleted() . '/' . $status->getTotal();
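For long-running crawls you can poll until the job reaches a terminal state. The sketch below assumes the status string ends up as 'completed', 'failed', or 'cancelled'; adjust these terminal values if your API responses differ:
// Poll an async crawl started with startCrawl() until it reaches a terminal state.
// The terminal status values below are assumptions, not guaranteed by the SDK.
$status = $client->getCrawlStatus($start->getId());
while (!in_array($status->getStatus(), ['completed', 'failed', 'cancelled'], true)) {
    sleep(5); // avoid hammering the API between checks
    $status = $client->getCrawlStatus($start->getId());
}
echo 'Finished with status: ' . $status->getStatus();
echo 'Pages scraped: ' . $status->getCompleted() . '/' . $status->getTotal();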
Cancelling a Crawl
Cancel a running crawl with cancelCrawl.
$result = $client->cancelCrawl($start->getId());
print_r($result);
Crawl Errors
Fetch crawl-level errors (if any) with getCrawlErrors.
$errors = $client->getCrawlErrors($start->getId());
print_r($errors);
Mapping a Website
Discover links on a site using map.
use Firecrawl\Models\MapOptions;
$data = $client->map(
'https://firecrawl.dev',
MapOptions::with(
limit: 100,
search: 'blog',
)
);
foreach ($data->getLinks() as $link) {
echo ($link['url'] ?? '') . ' - ' . ($link['title'] ?? '');
}
Searching the Web
Run a web search with optional settings using search.
use Firecrawl\Models\SearchOptions;
$results = $client->search(
'firecrawl web scraping',
SearchOptions::with(limit: 10)
);
foreach ($results->getWeb() as $result) {
echo ($result['title'] ?? '') . ' - ' . ($result['url'] ?? '');
}
Batch Scraping
Scrape multiple URLs in parallel using batchScrape.
use Firecrawl\Models\BatchScrapeOptions;
use Firecrawl\Models\ScrapeOptions;
$job = $client->batchScrape(
['https://firecrawl.dev', 'https://firecrawl.dev/blog'],
BatchScrapeOptions::with(
options: ScrapeOptions::with(formats: ['markdown']),
)
);
foreach ($job->getData() as $doc) {
echo $doc->getMarkdown();
}
For manual async control, use startBatchScrape, getBatchScrapeStatus, and cancelBatchScrape:
use Firecrawl\Models\BatchScrapeOptions;
use Firecrawl\Models\ScrapeOptions;
$start = $client->startBatchScrape(
['https://firecrawl.dev', 'https://firecrawl.dev/blog'],
BatchScrapeOptions::with(
options: ScrapeOptions::with(formats: ['markdown']),
)
);
$status = $client->getBatchScrapeStatus($start->getId());
echo 'Batch status: ' . $status->getStatus();
$cancel = $client->cancelBatchScrape($start->getId());
print_r($cancel);
Agent
Run an AI-powered agent with agent.
use Firecrawl\Models\AgentOptions;
$result = $client->agent(
AgentOptions::with(
prompt: 'Find the pricing plans for Firecrawl and compare them',
)
);
print_r($result->getData());
With a JSON schema for structured output:
use Firecrawl\Models\AgentOptions;
$result = $client->agent(
AgentOptions::with(
prompt: 'Extract pricing plan details',
urls: ['https://firecrawl.dev'],
schema: [
'type' => 'object',
'properties' => [
'plans' => [
'type' => 'array',
'items' => [
'type' => 'object',
'properties' => [
'name' => ['type' => 'string'],
'price' => ['type' => 'string'],
],
],
],
],
],
)
);
print_r($result->getData());
For manual async control, use startAgent, getAgentStatus, and cancelAgent:
use Firecrawl\Models\AgentOptions;
$start = $client->startAgent(
AgentOptions::with(
prompt: 'Summarize what Firecrawl does in one sentence',
urls: ['https://firecrawl.dev'],
)
);
$status = $client->getAgentStatus($start->getId());
echo 'Agent status: ' . $status->getStatus();
$cancel = $client->cancelAgent($start->getId());
print_r($cancel);
Usage & Metrics
Check concurrency and remaining credits:
use Firecrawl\Models\ConcurrencyCheck;
use Firecrawl\Models\CreditUsage;
$concurrency = $client->getConcurrency();
echo 'Concurrency: ' . $concurrency->getConcurrency() . '/' . $concurrency->getMaxConcurrency();
$credits = $client->getCreditUsage();
echo 'Remaining credits: ' . $credits->getRemainingCredits();
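One practical use is to guard large jobs behind a credit check. A minimal sketch (the 500-credit threshold is an arbitrary example, not an SDK constant):
use Firecrawl\Models\CrawlOptions;
// Skip the crawl if the account is low on credits (threshold is illustrative only).
$credits = $client->getCreditUsage();
if ($credits->getRemainingCredits() < 500) {
    echo 'Not enough credits for a 500-page crawl, skipping.';
} else {
    $job = $client->crawl('https://firecrawl.dev', CrawlOptions::with(limit: 500));
}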
Browser
The PHP SDK includes Browser Sandbox helpers.
Create a Session
use Firecrawl\Models\BrowserCreateResponse;
$session = $client->browser(ttl: 120, activityTtl: 60, streamWebView: true);
echo $session->getId();
echo $session->getCdpUrl();
echo $session->getLiveViewUrl();
Execute Code
use Firecrawl\Models\BrowserExecuteResponse;
$run = $client->browserExecute(
sessionId: $session->getId(),
code: 'await page.goto("https://example.com"); console.log(await page.title());',
language: 'node',
timeout: 60,
);
echo $run->getStdout();
echo $run->getExitCode();
Scrape-Bound Interactive Session
Use a scrape job ID to run follow-up browser code in the same replayed context:
- interact(...) runs code in the scrape-bound browser session (and initializes it on first use).
- stopInteractiveBrowser(...) explicitly stops the interactive session when you are done.
use Firecrawl\Models\BrowserExecuteResponse;
use Firecrawl\Models\BrowserDeleteResponse;
use Firecrawl\Models\ScrapeOptions;
$doc = $client->scrape(
'https://example.com',
ScrapeOptions::with(formats: ['markdown'])
);
$scrapeJobId = $doc->getMetadata()['scrapeId'] ?? null;
if ($scrapeJobId === null) {
throw new RuntimeException('scrapeId not found in metadata');
}
$scrapeRun = $client->interact(
jobId: $scrapeJobId,
code: 'console.log(page.url());',
language: 'node',
timeout: 60,
);
echo $scrapeRun->getStdout();
$deleted = $client->stopInteractiveBrowser($scrapeJobId);
echo 'Deleted: ' . ($deleted->isSuccess() ? 'true' : 'false');
List & Close Sessions
use Firecrawl\Models\BrowserListResponse;
use Firecrawl\Models\BrowserSession;
$active = $client->listBrowsers('active');
foreach ($active->getSessions() as $s) {
echo $s->getId() . ' - ' . $s->getStatus();
}
$closed = $client->deleteBrowser($session->getId());
echo 'Closed: ' . ($closed->isSuccess() ? 'true' : 'false');
Configuration
FirecrawlClient::create() supports the following options:
| Option | Type | Default | Description |
|---|---|---|---|
| apiKey | string | FIRECRAWL_API_KEY env var | Your Firecrawl API key |
| apiUrl | string | https://api.firecrawl.dev (or FIRECRAWL_API_URL) | API base URL |
| timeoutSeconds | float | 300 | HTTP request timeout in seconds |
| maxRetries | int | 3 | Automatic retries for transient failures |
| backoffFactor | float | 0.5 | Exponential backoff factor in seconds |
| httpClient | GuzzleHttp\ClientInterface | Built from timeout | Custom Guzzle-compatible HTTP client |
use Firecrawl\Client\FirecrawlClient;
$client = FirecrawlClient::create(
apiKey: 'fc-your-api-key',
apiUrl: 'https://api.firecrawl.dev',
timeoutSeconds: 300,
maxRetries: 3,
backoffFactor: 0.5,
);
Custom HTTP Client
You can pass a pre-configured GuzzleHttp\ClientInterface implementation to control connection pooling, middleware, proxy settings, and other HTTP features. When provided, the timeoutSeconds setting is ignored in favor of the client’s own configuration.
use Firecrawl\Client\FirecrawlClient;
use GuzzleHttp\Client as GuzzleClient;
$guzzle = new GuzzleClient([
'proxy' => 'http://proxy.example.com:8080',
'timeout' => 60,
'connect_timeout' => 10,
]);
$client = FirecrawlClient::create(
apiKey: 'fc-your-api-key',
httpClient: $guzzle,
);
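Because the client is plain Guzzle, you can also attach middleware through a handler stack, for example request logging or a custom retry decider. A sketch using Guzzle's built-in retry middleware (the retry logic here is illustrative, not the SDK's own behavior):
use Firecrawl\Client\FirecrawlClient;
use GuzzleHttp\Client as GuzzleClient;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;
// Retry up to 2 extra times on HTTP 5xx responses (illustrative decider only).
$stack = HandlerStack::create();
$stack->push(Middleware::retry(
    function (int $retries, RequestInterface $request, ?ResponseInterface $response = null): bool {
        return $retries < 2 && $response !== null && $response->getStatusCode() >= 500;
    }
));
$guzzle = new GuzzleClient([
    'handler' => $stack,
    'timeout' => 60,
]);
$client = FirecrawlClient::create(
    apiKey: 'fc-your-api-key',
    httpClient: $guzzle,
);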
Error Handling
The SDK throws runtime exceptions under Firecrawl\Exceptions.
use Firecrawl\Exceptions\AuthenticationException;
use Firecrawl\Exceptions\FirecrawlException;
use Firecrawl\Exceptions\JobTimeoutException;
use Firecrawl\Exceptions\RateLimitException;
try {
$doc = $client->scrape('https://example.com');
} catch (AuthenticationException $e) {
echo 'Auth failed: ' . $e->getMessage();
} catch (RateLimitException $e) {
echo 'Rate limited: ' . $e->getMessage();
} catch (JobTimeoutException $e) {
echo 'Job ' . $e->getJobId() . ' timed out after ' . $e->getTimeoutSeconds() . 's';
} catch (FirecrawlException $e) {
echo 'Error ' . $e->getStatusCode() . ': ' . $e->getMessage();
}
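Beyond the SDK's built-in retries, you can also handle RateLimitException yourself and back off before trying again. A minimal sketch (the three attempts and the fixed 10-second wait are arbitrary choices):
use Firecrawl\Exceptions\RateLimitException;
// Retry a scrape a few times after rate limiting (back-off values are illustrative).
$doc = null;
foreach ([1, 2, 3] as $attempt) {
    try {
        $doc = $client->scrape('https://example.com');
        break;
    } catch (RateLimitException $e) {
        if ($attempt === 3) {
            throw $e; // give up after the last attempt
        }
        sleep(10); // wait before retrying; tune to your rate limits
    }
}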
Are you an AI agent that needs a Firecrawl API key? See firecrawl.dev/agent-onboarding/SKILL.md for automated onboarding instructions.