# Lockdown mode

Lockdown mode forces the scrape endpoint to read from Firecrawl’s existing index and cache only; it never makes an outbound request to the target URL. It is designed for compliance-constrained and air-gapped environments where the scrape request itself (the URL, headers, and body) could leak sensitive information over the network.
## How it works

When `lockdown: true` is set on a `/v2/scrape` request:

- No outbound traffic. Firecrawl never connects to the target URL. All outbound paths (HTTP engines, robots.txt fetching, search-index writes, audio transforms, etc.) are gated off.
- Cache-only reads. The request is served from Firecrawl’s index if a matching entry exists. The default `maxAge` is bumped to 2 years so existing cached pages are eligible regardless of age.
- Cache miss returns an error. If no cached data is available, Firecrawl returns a `404` with error code `SCRAPE_LOCKDOWN_CACHE_MISS`. The URL is never logged on miss.
- Zero data retention. Lockdown requests are treated as ZDR: no URL is persisted, no response blob is written to long-term storage, and the scrape job is cleaned up after delivery.
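The decision flow above can be sketched in a few lines of Python. This is an illustrative model, not Firecrawl internals: the `cache` dict, the function name, and the response shapes are hypothetical; only the behavior (cache-only read, `404` with `SCRAPE_LOCKDOWN_CACHE_MISS` on miss, `cacheState: "hit"` on hit) comes from this page.

```python
# Hypothetical sketch of the lockdown decision flow; names are illustrative.

def serve_lockdown_scrape(cache: dict, url: str) -> dict:
    """Serve a lockdown request from the index only; never fetch the URL."""
    entry = cache.get(url)
    if entry is None:
        # Cache miss: no outbound request is attempted and the URL is not logged.
        return {"status": 404, "error": {"code": "SCRAPE_LOCKDOWN_CACHE_MISS"}}
    # Cache hit: return the stored page; metadata.cacheState reports "hit".
    return {"status": 200, "data": entry, "metadata": {"cacheState": "hit"}}
```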
## When to use this

Great for:

- Regulated industries (healthcare, finance, legal) where outbound requests require audit or approval
- Air-gapped or compliance-constrained environments where the URL itself is sensitive
- Replaying already-indexed pages without re-hitting origins

Not a fit for:

- Fresh content that has never been scraped before (lockdown mode returns an error on cache miss)
- Real-time or time-sensitive data
## Usage

Add `lockdown: true` to your scrape request.
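For example, with a raw HTTP request in Python. The `/v2/scrape` endpoint and the `lockdown` field are documented on this page; the API host, the `formats` field, the key, and the target URL are placeholders you should adapt.

```python
import json
import urllib.request

def build_lockdown_request(api_key: str, url: str) -> urllib.request.Request:
    """Build a POST to /v2/scrape with lockdown enabled (request is not sent here)."""
    payload = {"url": url, "lockdown": True, "formats": ["markdown"]}
    return urllib.request.Request(
        "https://api.firecrawl.dev/v2/scrape",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_lockdown_request("fc-YOUR-KEY", "https://example.com")
# urllib.request.urlopen(req) would send it; a cache miss surfaces as HTTPError 404.
```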
## Cache miss response

If the URL has not been previously scraped and cached, the request fails with a `404` and error code `SCRAPE_LOCKDOWN_CACHE_MISS`.
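The error body looks roughly like the following. Only the `404` status and the `SCRAPE_LOCKDOWN_CACHE_MISS` code are documented here; the surrounding field names and message are illustrative.

```json
{
  "success": false,
  "code": "SCRAPE_LOCKDOWN_CACHE_MISS",
  "error": "No cached entry is available for this URL in lockdown mode"
}
```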
## Billing

| Outcome | Credits |
|---|---|
| Cache hit | 5 credits |
| Cache miss (`SCRAPE_LOCKDOWN_CACHE_MISS`) | 1 credit |
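As a quick worked example of the table above, the cost of a batch of lockdown requests depends only on how many hit versus miss the cache:

```python
# Credit costs from the billing table above.
CREDITS_CACHE_HIT = 5
CREDITS_CACHE_MISS = 1

def lockdown_credits(hits: int, misses: int) -> int:
    """Total credits for a batch of lockdown requests."""
    return hits * CREDITS_CACHE_HIT + misses * CREDITS_CACHE_MISS

# e.g. 80 cache hits and 20 misses cost 80*5 + 20*1 = 420 credits.
```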
## Cache hit matching

Lockdown uses the same cache-match rules as regular scrapes. For a cache hit, these parameters must match the cached entry: `url`, `mobile`, `location`, `waitFor`, `blockAds`, `screenshot` (enabled/disabled and full-page), and enhanced proxy mode. You can verify behavior via `metadata.cacheState` in the response; it will be `"hit"` on a served response.
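One way to picture the matching rule is as a composite cache key over those parameters. The key layout below is hypothetical (Firecrawl's index is not publicly keyed this way), and `proxy` is an assumed name for the enhanced proxy mode setting; the other parameter names are from the list above.

```python
# Hypothetical composite cache key over the documented match parameters.
# "proxy" is an assumed field name for enhanced proxy mode.
MATCH_PARAMS = ("url", "mobile", "location", "waitFor", "blockAds", "screenshot", "proxy")

def cache_match_key(request: dict) -> tuple:
    """Two requests with equal keys would match the same cached entry."""
    return tuple(repr(request.get(p)) for p in MATCH_PARAMS)

a = {"url": "https://example.com", "mobile": False, "waitFor": 0}
b = {"url": "https://example.com", "mobile": False, "waitFor": 0}
c = {"url": "https://example.com", "mobile": True, "waitFor": 0}
# a and b would hit the same cached entry; c differs on `mobile`, so it would not.
```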
## Availability

Lockdown mode is supported on the `/v2/scrape` endpoint and is exposed across all surfaces that call it:

- SDKs — Python, Node.js, Go, Rust, Java, .NET, Ruby, PHP, and Elixir (`lockdown: true` on the scrape options).
- CLI — pass `--lockdown` to `firecrawl scrape`.
- MCP server — include `"lockdown": true` in the `firecrawl_scrape` tool arguments.

Lockdown is not available on `crawl`, `map`, `extract`, or `search`.

