Async Batch Requests | Python
Learn to use ScraperAPI's batch processing in Python for mass web scraping. Submit arrays of URLs to the async endpoint and track multiple jobs simultaneously.
To scrape multiple URLs at the same time, we provide a separate endpoint that accepts an array of URLs instead of a single one: https://async.scraperapi.com/batchjobs. The API works almost the same as the single-job endpoint, except it expects an array of strings in the urls field instead of a single string in url.
import requests

# API endpoint
url = 'https://async.scraperapi.com/batchjobs'

# Data payload
data = {
    'apiKey': 'API_KEY',
    'urls': ['https://example.com/page1', 'https://example.com/page2'],  # List of URLs to scrape
    'apiParams': {
        'ultra_premium': 'false'
    }
}

# Send the POST request
r = requests.post(url=url, json=data)

# Print the response text
print(r.text)
In response, you'll get an array of job objects, each in the same format returned by the single-job endpoint:
[
  {
    "id": "0962a8e0-5f1a-4e14-bf8c-5efcc18f0953",
    "status": "running",
    "statusUrl": "https://async.scraperapi.com/jobs/0962a8e0-5f1a-4e14-bf8c-5efcc18f0953",
    "url": "https://example.com/page1"
  },
  {
    "id": "238d54a1-62af-41a9-b0b4-63f240bad439",
    "status": "running",
    "statusUrl": "https://async.scraperapi.com/jobs/238d54a1-62af-41a9-b0b4-63f240bad439",
    "url": "https://example.com/page2"
  }
]
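To track the submitted jobs, you can poll each statusUrl until the job is no longer running. The sketch below continues from the request above and assumes the per-job status responses match the single-job endpoint; adjust the field names to what you actually receive.

import time
import requests

# Jobs returned by the batch request above (r = requests.post(...))
jobs = r.json()

# Map each job id to its status URL so we can poll until every job completes.
pending = {job['id']: job['statusUrl'] for job in jobs}
results = {}

while pending:
    for job_id, status_url in list(pending.items()):
        status = requests.get(status_url).json()
        # Assumption: a job reports "running" until it has finished or failed.
        if status.get('status') != 'running':
            results[job_id] = status
            del pending[job_id]
    if pending:
        time.sleep(5)  # wait before polling again to avoid hammering the API

print(results)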
We recommend sending a maximum of 50,000 URLs in one batch job.
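If your URL list is larger than that, one approach is to split it into chunks and submit each chunk as its own batch job. The helper below is a hypothetical sketch built on the same endpoint and payload as the example above; it is not part of the API itself.

import requests

def submit_in_batches(api_key, urls, batch_size=50000):
    # Hypothetical helper: splits `urls` into chunks of at most `batch_size`
    # and submits each chunk to the batch endpoint, collecting the job objects.
    jobs = []
    for i in range(0, len(urls), batch_size):
        chunk = urls[i:i + batch_size]
        r = requests.post(
            url='https://async.scraperapi.com/batchjobs',
            json={'apiKey': api_key, 'urls': chunk},
        )
        jobs.extend(r.json())
    return jobs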