To initiate scraping of multiple URLs at the same time, we have created a separate endpoint that accepts an array of URLs instead of just one: https://async.scraperapi.com/batchjobs. The API is almost the same as the single-job endpoint, except that it expects an array of strings in the urls field instead of a single string in url.
import requests

# API endpoint
url = 'https://async.scraperapi.com/batchjobs'

# Data payload
data = {
    'apiKey': 'API_KEY',
    'urls': ['https://example.com/page1', 'https://example.com/page2'],  # List of URLs
    'apiParams': {
        'ultra_premium': 'false'
    }
}

# Send the POST request
r = requests.post(url=url, json=data)

# Print the response text
print(r.text)
In response, you'll get an array of job objects, one per URL, each in the same format as the response returned by our single-job endpoint:
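Because each element of that array mirrors the single-job response, you can poll the batch's jobs the same way you would a single job. Below is a minimal sketch that submits a batch and waits for each job to finish; it assumes each job object carries id, status, and statusUrl fields and that status reads 'running' until the job completes (these field names and values are assumptions based on the single-job flow, not guaranteed by this section).

import time

import requests

# Submit the batch job (same request as above)
data = {
    'apiKey': 'API_KEY',
    'urls': ['https://example.com/page1', 'https://example.com/page2'],
}
jobs = requests.post('https://async.scraperapi.com/batchjobs', json=data).json()

# Poll each job until it is no longer running.
# Assumption: each job object exposes 'id', 'status', and 'statusUrl',
# mirroring the single-job response.
for job in jobs:
    status = requests.get(job['statusUrl']).json()
    while status.get('status') == 'running':
        time.sleep(5)  # wait between polls to avoid hammering the API
        status = requests.get(job['statusUrl']).json()
    print(job['id'], status.get('status'))

In practice you may want to poll all jobs in a single loop (or use a webhook callback, if configured) rather than waiting on each job sequentially as this sketch does.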