Batch Requests
To avoid the overhead of submitting many individual requests, use our Async batch processing endpoint at https://async.scraperapi.com/batchjobs. Instead of sending a single URL, pass an array of URL strings in the request body, for example with cURL:
curl --request POST \
--url "https://async.scraperapi.com/batchjobs" \
--header "Content-Type: application/json" \
--data '{
"apiKey": "API_KEY",
"urls": [
"https://wikipedia.org/wiki/Cowboy_boot",
"https://wikipedia.org/wiki/Web_scraping"
]
}'

The same request in Python:

import requests
r = requests.post(
    url='https://async.scraperapi.com/batchjobs',
    json={
        'apiKey': 'API_KEY',  # Replace API_KEY with your actual API key.
        'urls': [
            'https://wikipedia.org/wiki/Cowboy_boot',
            'https://wikipedia.org/wiki/Web_scraping'
        ]
    }
)
print(r.text)

The response contains one job entry per URL, each with its own id, status, and statusUrl:
[
  {
    "id": "04888c53-e322-4976-969d-8f8b39f016da",
    "attempts": 0,
    "status": "running",
    "statusUrl": "https://async.scraperapi.com/jobs/04888c53-e322-4976-969d-8f8b39f016da",
    "url": "https://wikipedia.org/wiki/Cowboy_boot"
  },
  {
    "id": "946ada9c-2f57-490b-900a-fa14193ae029",
    "attempts": 0,
    "status": "running",
    "statusUrl": "https://async.scraperapi.com/jobs/946ada9c-2f57-490b-900a-fa14193ae029",
    "url": "https://wikipedia.org/wiki/Web_scraping"
  }
]

A single batch job can include up to 50,000 URLs. This is a hard limit and helps ensure stability, reliability, and efficient processing. If your workload requires more than 50,000 URLs, split it into multiple batch jobs, as sketched below.
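One way to handle larger workloads is to chunk the URL list client-side and submit one batch job per chunk. The following is a minimal Python sketch, not an official client: the submit_in_batches helper and the BATCH_LIMIT constant are our own names, and it assumes each call returns a job list like the example above.

import requests

API_KEY = 'API_KEY'   # Replace with your actual API key.
BATCH_LIMIT = 50_000  # Maximum number of URLs accepted per batch job.

def submit_in_batches(urls):
    """Split a large URL list into chunks of at most BATCH_LIMIT and
    submit one batch job per chunk. Returns all returned job entries."""
    jobs = []
    for start in range(0, len(urls), BATCH_LIMIT):
        chunk = urls[start:start + BATCH_LIMIT]
        r = requests.post(
            url='https://async.scraperapi.com/batchjobs',
            json={'apiKey': API_KEY, 'urls': chunk},
        )
        r.raise_for_status()
        jobs.extend(r.json())  # One job entry per URL in this chunk.
    return jobs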
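Once the jobs are submitted, each entry's statusUrl can be polled until the job completes. The sketch below is illustrative only: it assumes a GET on statusUrl returns the job object and that the status field eventually changes from "running" (for example to "finished" or "failed"); check the Async API reference for the exact status values and response shape.

import time
import requests

def wait_for_jobs(jobs, poll_interval=5):
    """Poll each job's statusUrl until it is no longer 'running'.
    Returns the final job payloads keyed by the original URL."""
    results = {}
    pending = {job['url']: job['statusUrl'] for job in jobs}
    while pending:
        for url, status_url in list(pending.items()):
            job = requests.get(status_url).json()
            if job.get('status') != 'running':  # Assumed terminal once no longer 'running'.
                results[url] = job
                del pending[url]
        if pending:
            time.sleep(poll_interval)  # Avoid hammering the status endpoint.
    return results

For example, after the batch request shown earlier you could call results = wait_for_jobs(r.json()) to collect the finished jobs.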