Batch scrape multiple URLs
Batch scraping works similarly to how the `/crawl` endpoint works. You can either start the batch and wait for completion, or start it and handle completion yourself.
- `batchScrape` (JS) / `batch_scrape` (Python): starts a batch job and waits for it to complete, returning the results.
- `startBatchScrape` (JS) / `start_batch_scrape` (Python): starts a batch job and returns the job ID so you can poll or use webhooks.

`batchScrape` / `batch_scrape` returns the full results when the batch completes. `startBatchScrape` / `start_batch_scrape` returns a job ID you can track via `getBatchScrapeStatus` / `get_batch_scrape_status`, via the API endpoint `/batch/scrape/{id}`, or via webhooks. This endpoint is intended for in-progress checks or immediately after completion, as batch jobs expire after 24 hours.
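For illustration, here is a minimal Python sketch of checking a job against that endpoint directly. The path `/batch/scrape/{id}` comes from the text above; the base URL, version prefix, bearer-token auth header, and the placeholder job ID are assumptions.

```python
import requests  # third-party HTTP client (pip install requests)

# Direct status check. Path is from the docs above; base URL, version
# prefix, and auth scheme are assumptions to adapt to your setup.
resp = requests.get(
    "https://api.firecrawl.dev/v1/batch/scrape/YOUR-JOB-ID",
    headers={"Authorization": "Bearer fc-YOUR-API-KEY"},
)
resp.raise_for_status()
print(resp.json())
```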
`batchScrape` / `batch_scrape` returns full results:
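A minimal Python sketch of the blocking call. The method name `batch_scrape` comes from the text above; the `Firecrawl` client class, `api_key` argument, `formats` option, and the `data`/`markdown` result fields are assumptions based on common SDK conventions.

```python
from firecrawl import Firecrawl  # client class name is an assumption

firecrawl = Firecrawl(api_key="fc-YOUR-API-KEY")

# batch_scrape blocks until every URL is processed, then returns the results.
job = firecrawl.batch_scrape(
    ["https://example.com", "https://docs.example.com"],  # illustrative URLs
    formats=["markdown"],  # option name is an assumption
)

# One document per successfully scraped URL (result field names assumed).
for doc in job.data:
    print(doc.markdown[:200])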
`startBatchScrape` / `start_batch_scrape` returns a job ID:
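A sketch of the non-blocking variant, polling with `get_batch_scrape_status` as described above. The `Firecrawl` client, the `.id` field on the started job, and the `"completed"`/`"failed"` status values are assumptions.

```python
import time

from firecrawl import Firecrawl  # client class name is an assumption

firecrawl = Firecrawl(api_key="fc-YOUR-API-KEY")

# start_batch_scrape returns immediately with a job ID instead of results.
started = firecrawl.start_batch_scrape(
    ["https://example.com", "https://docs.example.com"],  # illustrative URLs
)
job_id = started.id  # field name is an assumption

# Poll get_batch_scrape_status until the job leaves its in-progress state.
while True:
    status = firecrawl.get_batch_scrape_status(job_id)
    if status.status in ("completed", "failed"):  # status values assumed
        break
    time.sleep(2)

print(status.status)
```

Instead of polling, you can subscribe to webhooks and react as pages finish.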
Batch scrape jobs emit the following webhook events; a minimal receiver sketch follows the list.

- `batch_scrape.started` - When the batch scrape begins
- `batch_scrape.page` - For each URL successfully scraped
- `batch_scrape.completed` - When all URLs are processed
- `batch_scrape.failed` - If the batch scrape encounters an error
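A minimal Python receiver sketch dispatching on the event names above. The endpoint path is hypothetical, and the payload shape (a JSON body with a `type` field carrying the event name) is an assumption; consult the webhook documentation for the actual schema and signature verification.

```python
from flask import Flask, request  # pip install flask

app = Flask(__name__)

# Minimal webhook receiver. The "type" field in the payload is an
# assumption; verify the real schema before relying on it.
@app.post("/firecrawl-webhook")
def handle_batch_events():
    event = request.get_json(force=True)
    kind = event.get("type")
    if kind == "batch_scrape.page":
        pass  # one URL finished scraping; process its document here
    elif kind == "batch_scrape.completed":
        pass  # all URLs are processed
    elif kind == "batch_scrape.failed":
        pass  # the batch hit a job-level error
    return "", 200
```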