Introduction to ScraperAPI Crawler v1.0
Want to crawl multiple linked pages without writing your own crawler? Our Crawler handles link discovery, scraping, retries, and webhook delivery for you. Just define the start URL, a link pattern, and a crawl budget, and we take care of the rest.
Getting Started
No installation needed! Just send a POST request to the job endpoint:
POST https://crawler.scraperapi.com/job
Example Payload
{
  "api_key": "<YOUR API KEY>",
  "start_url": "https://www.zillow.com/homes/44269_rid/",
  "max_depth": 5,
  "crawl_budget": 50,
  "url_regexp": "\"(?<full_url>https:\\/\\/(www.)?zillow.com\\/homedetails\\/[^\"]+)|href=\"(?<relative_url>\\/homedetails\\/[^\"]+)",
  "api_params": {
    "country_code": "us"
  },
  "callback": {
    "type": "webhook",
    "url": "<YOUR CALLBACK WEBHOOK URL>"
  }
}

Once the job is running, it streams results to your webhook in real time and sends you a summary after the job is complete.
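The payload above can also be submitted from code. Here is a minimal sketch using only Python's standard library; the payload values mirror the example above, and the API key and webhook URL are placeholders you must replace with your own:

```python
import json
import urllib.request

CRAWLER_ENDPOINT = "https://crawler.scraperapi.com/job"

def build_crawl_job(api_key: str, webhook_url: str) -> urllib.request.Request:
    """Build the POST request for a Crawler job, mirroring the example payload."""
    payload = {
        "api_key": api_key,
        "start_url": "https://www.zillow.com/homes/44269_rid/",
        "max_depth": 5,
        "crawl_budget": 50,
        # Same pattern as the example payload: capture absolute or relative
        # /homedetails/ links via the two named groups.
        "url_regexp": r'"(?<full_url>https:\/\/(www.)?zillow.com\/homedetails\/[^"]+)|href="(?<relative_url>\/homedetails\/[^"]+)',
        "api_params": {"country_code": "us"},
        "callback": {"type": "webhook", "url": webhook_url},
    }
    return urllib.request.Request(
        CRAWLER_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually start the job (requires a valid API key and a reachable webhook):
# with urllib.request.urlopen(build_crawl_job("<YOUR API KEY>", "<YOUR WEBHOOK URL>")) as resp:
#     print(resp.status, resp.read())
```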
Check out the full guide and integration examples for more details.
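Because results are streamed as HTTP POSTs to your callback URL, you need an endpoint that accepts them. A minimal receiver sketch using Python's standard library (it simply acknowledges each delivery and logs the body; the exact shape of each result object is not specified here):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CrawlerWebhook(BaseHTTPRequestHandler):
    """Minimal webhook receiver for results streamed by a Crawler job."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            delivery = json.loads(body)  # deliveries are JSON documents
        except json.JSONDecodeError:
            delivery = body.decode("utf-8", errors="replace")
        print("received crawl result:", str(delivery)[:200])
        self.send_response(200)  # acknowledge so the Crawler doesn't retry
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request logging

if __name__ == "__main__":
    # Expose this host publicly and use its URL as "callback.url" in the job.
    HTTPServer(("0.0.0.0", 8000), CrawlerWebhook).serve_forever()
```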