/agent is a magic API that searches, navigates, and gathers data from even the most complex websites, finding information in hard-to-reach places anywhere on the internet. It accomplishes in a few minutes what would take a human many hours, and makes traditional web scraping obsolete.
Just describe what data you want and /agent handles the rest.
Research Preview: Agent is in early access. Expect rough edges. It will get significantly better over time. Share feedback →
Agent builds on /extract and takes it further:
- No URLs Required: Just describe what you need via the `prompt` parameter. URLs are optional.
- Deep Web Search: Autonomously searches and navigates deep into sites to find your data
- Reliable and Accurate: Works with a wide variety of queries and use cases
- Faster: Processes multiple sources in parallel for quicker results
- Cheaper: Agent is more cost-effective than `/extract` for complex use cases
Using /agent
The only required parameter is `prompt`. Simply describe what data you want to extract. For structured output, provide a JSON schema. The SDKs support Pydantic (Python) and Zod (Node) for type-safe schema definitions.
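As a sketch, here is what a structured-output request could look like with a plain JSON schema. The schema fields (`startups`, `name`, `funding_usd`) are illustrative, and the commented-out SDK call assumes a `Firecrawl` client class and `agent()` signature as described above:

```python
import json

# A JSON schema describing the structured output we want back.
# The field names here are illustrative, not part of the API.
schema = {
    "type": "object",
    "properties": {
        "startups": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "funding_usd": {"type": "number"},
                },
                "required": ["name", "funding_usd"],
            },
        }
    },
    "required": ["startups"],
}

# With the Python SDK (client name and exact signature assumed),
# the call would look roughly like:
#
#   from firecrawl import Firecrawl
#   firecrawl = Firecrawl(api_key="fc-YOUR-KEY")
#   result = firecrawl.agent(
#       prompt="Find the top 5 AI startups and their funding amounts",
#       schema=schema,
#   )

print(json.dumps(schema["required"]))
```

With Pydantic or Zod, the SDK converts your model to an equivalent JSON schema before sending the request.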
Providing URLs (Optional)
You can optionally provide URLs to focus the agent on specific pages.

Job Status and Completion
Agent jobs run asynchronously. When you submit a job, you’ll receive a Job ID that you can use to check status:

- Default method: `agent()` waits and returns final results
- Start then poll: Use `start_agent` (Python) or `startAgent` (Node) to get a Job ID immediately, then poll with `get_agent_status`/`getAgentStatus`
Job results are available for 24 hours after completion.
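The start-then-poll pattern can be sketched as a small loop. The status function is injected here so the sketch runs without network access; in real code it would wrap `get_agent_status(job_id)`:

```python
import time

# Generic polling loop around the start_agent() / get_agent_status()
# pair. `get_status` is injected so the sketch stays runnable offline.
def wait_for_agent(get_status, interval=2.0, max_wait=600.0):
    """Poll until the job leaves the 'processing' state."""
    waited = 0.0
    while True:
        job = get_status()
        if job["status"] != "processing":
            return job
        if waited >= max_wait:
            raise TimeoutError("agent job did not finish in time")
        time.sleep(interval)
        waited += interval

# Simulated status sequence standing in for get_agent_status(job_id):
states = iter([{"status": "processing"}, {"status": "completed"}])
final = wait_for_agent(lambda: next(states), interval=0.0)
print(final["status"])  # -> completed
```

Because results expire 24 hours after completion, fetch and persist the data once the status reaches `completed`.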
Possible States
| Status | Description |
|---|---|
| `processing` | The agent is still working on your request |
| `completed` | Extraction finished successfully |
| `failed` | An error occurred during extraction |
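As an illustration of the two main states, a status response while processing and one after completion might look like the following. The field names (`success`, `data`) and the sample data are assumptions for illustration, not confirmed response shapes:

```json
{
  "success": true,
  "status": "processing"
}
```

```json
{
  "success": true,
  "status": "completed",
  "data": {
    "startups": [
      { "name": "Example AI", "funding_usd": 12000000 }
    ]
  }
}
```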
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Natural language description of the data you want to extract (max 10,000 characters) |
| `urls` | array | No | Optional list of URLs to focus the extraction |
| `schema` | object | No | Optional JSON schema for structured output |
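Putting the parameters together, a request body can be assembled and validated client-side before sending. The `build_agent_payload` helper is hypothetical (it is not part of the SDK); only the three documented parameters and the 10,000-character prompt limit come from this page:

```python
import json

MAX_PROMPT_CHARS = 10_000  # documented limit on `prompt`

def build_agent_payload(prompt, urls=None, schema=None):
    """Assemble a request body from the documented parameters."""
    if not prompt:
        raise ValueError("prompt is required")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(f"prompt exceeds {MAX_PROMPT_CHARS} characters")
    payload = {"prompt": prompt}
    if urls:
        payload["urls"] = list(urls)   # optional: focus the extraction
    if schema:
        payload["schema"] = schema     # optional: structured output
    return payload

payload = build_agent_payload(
    "Compare pricing plans between Slack and Microsoft Teams",
    urls=["https://slack.com/pricing"],
)
print(json.dumps(payload, indent=2))
```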
Agent vs Extract: What’s Improved
| Feature | Agent (New) | Extract |
|---|---|---|
| URLs Required | No | Yes |
| Speed | Faster | Standard |
| Cost | Lower | Standard |
| Reliability | Higher | Standard |
| Query Flexibility | High | Moderate |
Example Use Cases
- Research: “Find the top 5 AI startups and their funding amounts”
- Competitive Analysis: “Compare pricing plans between Slack and Microsoft Teams”
- Data Gathering: “Extract contact information from company websites”
- Content Summarization: “Summarize the latest blog posts about web scraping”
API Reference
Check out the Agent API Reference for more details. Have feedback or need help? Email [email protected].

Pricing
Firecrawl Agent uses dynamic billing that scales with the complexity of your data extraction request. You pay based on the actual work Agent performs, ensuring fair pricing whether you’re extracting simple data points or complex structured information from multiple sources.

How Agent pricing works
Agent pricing is dynamic and credit-based during the Research Preview:

- Simple extractions (like contact info from a single page) typically use fewer credits and cost less
- Complex research tasks (like competitive analysis across multiple domains) use more credits but reflect the total effort involved
- Transparent usage shows you exactly how many credits each request consumed
- Credit conversion automatically converts Agent usage to standard credits for easy billing
Credit usage varies based on the complexity of your prompt, the amount of data processed, and the structure of the output requested.
Getting started
All users receive 5 free daily runs to explore Agent’s capabilities without any cost. Additional usage consumes Agent credits, which are converted to standard credits for billing.

Managing costs
Take control of your Agent spending:

- Start with free runs: Use your 5 daily free requests to understand pricing
- Set a `maxCredits` parameter: Limit your spending by setting a maximum number of credits you’re willing to spend
- Optimize prompts: More specific prompts often use fewer credits
- Monitor usage: Track your consumption through the dashboard
- Set expectations: Complex multi-domain research will use more credits than simple single-page extractions
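The `maxCredits` cap above can also be mirrored client-side. The SDK call is only sketched in comments (client name and signature assumed); the `CreditBudget` class is a hypothetical helper, not part of the SDK:

```python
# Per-request cap via the documented maxCredits parameter (sketch):
#
#   result = firecrawl.agent(
#       prompt="Extract contact information from company websites",
#       maxCredits=50,
#   )
#
# A simple client-side tracker that mirrors the same idea:
class CreditBudget:
    def __init__(self, max_credits):
        self.max_credits = max_credits
        self.used = 0

    def record(self, credits):
        """Add a request's credit usage; report whether the cap holds."""
        self.used += credits
        return self.used <= self.max_credits

budget = CreditBudget(max_credits=50)
print(budget.record(20))  # -> True  (20 of 50 used)
print(budget.record(35))  # -> False (55 of 50 used)
```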
Pricing is subject to change as we move from Research Preview to general availability. Current users will receive advance notice of any pricing updates.

