Request Async Batch Scraping with ScraperAPI in Java
Learn to use ScraperAPI's batch processing in Java for mass web scraping. Submit arrays of URLs to the async endpoint and track multiple jobs simultaneously.
To initiate scraping of multiple URLs at the same time, we have created a separate endpoint that accepts an array of URLs instead of just one: https://async.scraperapi.com/batchjobs. The API is almost identical to the single-job endpoint, except that it expects an array of strings in the urls field instead of a single string in url.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

try {
    URL asyncApiUrl = new URL("https://async.scraperapi.com/batchjobs");
    HttpURLConnection connection = (HttpURLConnection) asyncApiUrl.openConnection();
    connection.setRequestMethod("POST");
    connection.setRequestProperty("Content-Type", "application/json");
    connection.setDoOutput(true);

    // Send the API key and an array of URLs in the "urls" field
    try (OutputStream outputStream = connection.getOutputStream()) {
        outputStream.write("{\"apiKey\": \"xxxxxx\", \"urls\": [\"https://example.com/page1\", \"https://example.com/page2\"]}".getBytes(StandardCharsets.UTF_8));
    }

    int responseCode = connection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        // Read the JSON response describing the submitted jobs
        try (BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String readLine;
            StringBuilder response = new StringBuilder();
            while ((readLine = in.readLine()) != null) {
                response.append(readLine);
            }
            System.out.println(response);
        }
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
In response, you'll get an array containing the same kind of job objects that the single job endpoint returns:
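For illustration only, a batch of two URLs might produce a response along these lines (the job IDs and field values here are placeholders; the exact fields are whatever your single job endpoint returns):

[
  {
    "id": "0962a8e0-5f1a-4e14-bf8c-5efcc18f0953",
    "status": "queued",
    "statusUrl": "https://async.scraperapi.com/jobs/0962a8e0-5f1a-4e14-bf8c-5efcc18f0953",
    "url": "https://example.com/page1"
  },
  {
    "id": "238d54a1-62af-41a9-b0b4-63f240bda0e3",
    "status": "queued",
    "statusUrl": "https://async.scraperapi.com/jobs/238d54a1-62af-41a9-b0b4-63f240bda0e3",
    "url": "https://example.com/page2"
  }
]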
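To track the jobs individually, you can request each job's statusUrl. The snippet below is a minimal sketch, assuming each job object in the batch response carries a statusUrl field as shown above; the job ID in the URL is a placeholder that you would replace with the value returned for your own job:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

try {
    // Placeholder statusUrl taken from the batch response above
    URL statusUrl = new URL("https://async.scraperapi.com/jobs/0962a8e0-5f1a-4e14-bf8c-5efcc18f0953");
    HttpURLConnection connection = (HttpURLConnection) statusUrl.openConnection();
    connection.setRequestMethod("GET");

    if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line);
            }
            // Print the job's current state; repeat this request
            // until the job reports that it has finished
            System.out.println(response);
        }
    }
} catch (Exception ex) {
    ex.printStackTrace();
}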