/agent is a magic API that searches, navigates, and gathers data from the widest range of websites, finding data in hard-to-reach places and uncovering data in ways no other API can. It accomplishes in a few minutes what would take a human many hours — end-to-end data collection, without scripts or manual work.
Whether you need one data point or entire datasets at scale, Firecrawl /agent works to get your data.
Think of /agent as deep research for data, wherever it is!
Research Preview: Agent is in early access. Expect rough edges. It will get significantly better over time. Share feedback →
/agent builds on /extract and takes it further:
- No URLs Required: Just describe what you need via the `prompt` parameter. URLs are optional
- Deep Web Search: Autonomously searches and navigates deep into sites to find your data
- Reliable and Accurate: Works with a wide variety of queries and use cases
- Faster: Processes multiple sources in parallel for quicker results
Using /agent
The only required parameter is `prompt`. Simply describe what data you want to extract. For structured output, provide a JSON schema. The SDKs support Pydantic (Python) and Zod (Node) for type-safe schema definitions:
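A minimal sketch of such a request over raw HTTP, for illustration only: the endpoint path, header names, and body shape below are assumptions based on the parameter table later in this page, not confirmed API details (the official SDKs wrap this call for you):

```python
import json
import os
import urllib.request

# A prompt plus an optional JSON schema for structured output.
# Field names ("prompt", "schema") come from the parameter table;
# the schema contents here are purely illustrative.
payload = {
    "prompt": "Find the founding year and headquarters city of Firecrawl",
    "schema": {
        "type": "object",
        "properties": {
            "founding_year": {"type": "integer"},
            "headquarters": {"type": "string"},
        },
        "required": ["founding_year", "headquarters"],
    },
}

body = json.dumps(payload).encode()

# Only send the request when an API key is configured.
api_key = os.environ.get("FIRECRAWL_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.firecrawl.dev/v2/agent",  # assumed endpoint path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

The schema is plain JSON Schema; with the Python SDK you could build the same structure from a Pydantic model instead of writing the dict by hand.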
Providing URLs (Optional)
You can optionally provide URLs to focus the agent on specific pages.
Job Status and Completion
Agent jobs run asynchronously. When you submit a job, you’ll receive a Job ID that you can use to check status:
- Default method: `agent()` waits and returns the final results
- Start then poll: Use `start_agent` (Python) or `startAgent` (Node) to get a Job ID immediately, then poll with `get_agent_status`/`getAgentStatus`
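The start-then-poll pattern can be sketched generically. Here `start` and `get_status` are stand-ins for the SDK's `start_agent`/`get_agent_status`; the status strings match the states table that follows, and the stub functions in the demo are hypothetical, not real SDK calls:

```python
import time

# Terminal states per the states table: the job is done once it reaches one of these.
TERMINAL = {"completed", "failed"}

def wait_for_agent(start, get_status, prompt, interval=2.0, timeout=600.0):
    """Submit an agent job, then poll its status until it completes or fails."""
    job_id = start(prompt)  # returns a Job ID immediately
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status(job_id)
        if job["status"] in TERMINAL:
            return job
        time.sleep(interval)
    raise TimeoutError(f"agent job {job_id} did not finish in {timeout}s")

# Demo with stub functions standing in for the real SDK calls:
states = iter(["processing", "processing", "completed"])
result = wait_for_agent(
    start=lambda prompt: "job-123",
    get_status=lambda jid: {"id": jid, "status": next(states)},
    prompt="Find the top 5 AI startups and their funding amounts",
    interval=0.0,
)
assert result["status"] == "completed"
```

In real code you would pass the SDK's own start and status functions and keep a non-zero polling interval to avoid hammering the API.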
Job results are available via the API for 24 hours after completion. After this period, you can still view your agent history and results in the activity logs.
Possible States
| Status | Description |
|---|---|
| `processing` | The agent is still working on your request |
| `completed` | Extraction finished successfully |
| `failed` | An error occurred during extraction |
Model Selection
Firecrawl Agent offers two models. Spark 1 Mini is 60% cheaper and is the default — perfect for most use cases. Upgrade to Spark 1 Pro when you need maximum accuracy on complex tasks.
| Model | Cost | Accuracy | Best For |
|---|---|---|---|
| `spark-1-mini` | 60% cheaper | Standard | Most tasks (default) |
| `spark-1-pro` | Standard | Higher | Complex research, critical extraction |
Spark 1 Mini (Default)
`spark-1-mini` is our efficient model, ideal for straightforward data extraction tasks.
Use Mini when:
- Extracting simple data points (contact info, pricing, etc.)
- Working with well-structured websites
- Cost efficiency is a priority
- Running high-volume extraction jobs
Spark 1 Pro
`spark-1-pro` is our flagship model, designed for maximum accuracy on complex extraction tasks.
Use Pro when:
- Performing complex competitive analysis
- Extracting data that requires deep reasoning
- Accuracy is critical for your use case
- Dealing with ambiguous or hard-to-find data
Specifying a Model
Pass the `model` parameter to select which model to use:
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Natural language description of the data you want to extract (max 10,000 characters) |
| `model` | string | No | Model to use: `spark-1-mini` (default) or `spark-1-pro` |
| `urls` | array | No | Optional list of URLs to focus the extraction |
| `schema` | object | No | Optional JSON schema for structured output |
| `maxCredits` | number | No | Maximum number of credits to spend on this agent task. If the limit is reached, the job fails and no data is returned, though credits consumed for work performed are still charged. |
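Putting the table together, one request body can exercise every parameter. The field names below come from the table above; the URL and schema contents are illustrative, and no request is actually sent:

```python
import json

# A payload using all five parameters from the table.
payload = {
    "prompt": "Extract the name and monthly price of each pricing tier",
    "model": "spark-1-mini",
    "urls": ["https://slack.com/pricing"],  # optional: focus the agent
    "schema": {
        "type": "object",
        "properties": {
            "tiers": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "monthly_price_usd": {"type": "number"},
                    },
                },
            }
        },
    },
    "maxCredits": 50,  # fail the job rather than exceed this spend
}

print(json.dumps(payload, indent=2))
```

Note the `maxCredits` trade-off from the table: if the cap is hit, the job fails with no data, but credits already consumed are still charged, so set it comfortably above your expected usage.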
Agent vs Extract: What’s Improved
| Feature | Agent (New) | Extract |
|---|---|---|
| URLs Required | No | Yes |
| Speed | Faster | Standard |
| Cost | Lower | Standard |
| Reliability | Higher | Standard |
| Query Flexibility | High | Moderate |
Example Use Cases
- Research: “Find the top 5 AI startups and their funding amounts”
- Competitive Analysis: “Compare pricing plans between Slack and Microsoft Teams”
- Data Gathering: “Extract contact information from company websites”
- Content Summarization: “Summarize the latest blog posts about web scraping”
API Reference
Check out the Agent API Reference for more details. Have feedback or need help? Email [email protected].
Pricing
Firecrawl Agent uses dynamic billing that scales with the complexity of your data extraction request. You pay based on the actual work Agent performs, ensuring fair pricing whether you’re extracting simple data points or complex structured information from multiple sources.
How Agent pricing works
Agent pricing is dynamic and credit-based during Research Preview:
- Simple extractions (like contact info from a single page) typically use fewer credits and cost less
- Complex research tasks (like competitive analysis across multiple domains) use more credits but reflect the total effort involved
- Transparent usage shows you exactly how many credits each request consumed
- Credit conversion automatically converts agent usage to credits for easy billing
Credit usage varies based on the complexity of your prompt, the amount of data processed, and the structure of the output requested.
Getting started
All users receive 5 free daily runs to explore Agent’s capabilities at no cost. Additional usage is billed based on credit consumption.
Managing costs
Agent can be expensive, but there are some ways to decrease the cost:
- Start with free runs: Use your 5 daily free requests to understand pricing
- Set a `maxCredits` parameter: Limit your spending by setting a maximum number of credits you’re willing to spend
- Optimize prompts: More specific prompts often use fewer credits
- Monitor usage: Track your consumption through the dashboard
- Set expectations: Complex multi-domain research will use more credits than simple single-page extractions
Pricing is subject to change as we move from Research Preview to general availability. Current users will receive advance notice of any pricing updates.

