# Overview

We've designed our Async API with high reliability in mind. It allows you to submit a job (or a batch of jobs) that runs in the background until the requested pages are successfully scraped, for up to **24 hours**. You can retrieve the results either from the **status endpoint** or have them delivered *directly* to a **webhook**.

We recommend using our **Async API** when success rate matters more than response time (ideal for recurring or scheduled data collection).
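The submit-then-retrieve flow can be sketched as follows. The endpoint URL, field names, and callback shape below are assumptions for illustration only; consult the job submission reference for the exact contract.

```python
import json
import urllib.request

ASYNC_ENDPOINT = "https://async.scraperapi.com/jobs"  # assumed endpoint URL


def build_job_payload(api_key, url, webhook_url=None):
    """Build the JSON body for a single async job submission."""
    payload = {"apiKey": api_key, "url": url}
    if webhook_url:
        # Assumed callback shape: results are POSTed here instead of polled.
        payload["callback"] = {"type": "webhook", "url": webhook_url}
    return payload


def submit_job(api_key, url, webhook_url=None):
    """Submit one job; the response includes a status URL to poll."""
    body = json.dumps(build_job_payload(api_key, url, webhook_url)).encode()
    req = urllib.request.Request(
        ASYNC_ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If no webhook is supplied, you would poll the status URL from the response until the job finishes; with a webhook, the results arrive at your endpoint and no polling is needed.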

### Highlights

* **Resilient** - The Async API keeps retrying the job until the requested pages return a successful response (a 100% success rate where possible), making it ideal for scraping pages with heavy protection.
* **Flexible** - Serves results via a status URL or streams them directly to a webhook endpoint. Webhook callbacks remove the need to poll and can report on both successful and failed jobs.
* **Batch Job Submission** - You can submit up to **50,000 URLs** per batch job, making it easy to handle large scraping projects in a single request.
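A batch submission might be sketched as below. The `/batchjobs` endpoint and the `urls` field are assumptions for illustration; check the batch submission reference for the exact request shape.

```python
import json
import urllib.request

BATCH_ENDPOINT = "https://async.scraperapi.com/batchjobs"  # assumed endpoint URL
BATCH_LIMIT = 50_000  # documented maximum URLs per batch job


def submit_batch(api_key, urls):
    """Submit up to 50,000 URLs as a single batch job."""
    if len(urls) > BATCH_LIMIT:
        # Validate locally before sending anything over the wire.
        raise ValueError(f"batch exceeds the {BATCH_LIMIT}-URL limit")
    body = json.dumps({"apiKey": api_key, "urls": urls}).encode()
    req = urllib.request.Request(
        BATCH_ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # typically one job record per submitted URL
```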

### Constraints

* **Data Retention** - Each job runs until it succeeds or until 24 hours have passed. Results are stored for up to **72 hours** (*24 hours guaranteed*). If not retrieved in time, the data is deleted and you’ll need to resubmit.
* **Batch Size** - A single batch job can include up to **50,000 URLs**. This is the maximum allowed per job and cannot be exceeded; the cap keeps processing stable, reliable, and efficient. If your workload requires more than 50,000 URLs, split it into multiple batches.
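Splitting a larger workload into limit-sized batches is a one-liner; this helper is just a sketch of that idea:

```python
def chunk_urls(urls, size=50_000):
    """Split a URL list into batches that respect the per-job limit."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]


# 120,000 URLs → three batches of 50,000 + 50,000 + 20,000
batches = chunk_urls([f"https://example.com/p/{n}" for n in range(120_000)])
```

Each resulting batch can then be submitted as its own job.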

### Costs

* **Credit Per Request** - Normal (flat) requests typically cost 1 API credit. Additional costs may apply in some cases, such as when extra parameters are enabled or certain domains are scraped. You can learn more about costs [here](/getting-started/quick-start/credits-and-requests-costs.md).
* **Cost Control** - You can set a `max_cost` parameter inside `apiParams` to cap how many credits a job may consume. If the cost exceeds the specified limit, the job will return a `403` error.
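As a minimal sketch of the cost cap, a job body carrying `max_cost` inside `apiParams` might look like this (the surrounding field names are assumptions; only `max_cost` under `apiParams` is taken from the text above):

```python
def job_payload_with_cap(api_key, url, max_cost):
    """Attach a credit cap via apiParams; the API answers 403 if it is exceeded."""
    return {
        "apiKey": api_key,
        "url": url,
        "apiParams": {"max_cost": max_cost},  # maximum credits this job may spend
    }
```

When submitting such a payload, treat an HTTP `403` response as "rejected for exceeding `max_cost`" rather than as an authentication failure.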


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/asynchronous-api/overview.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
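Building such a query boils down to URL-encoding the question into the `ask` parameter; a small stdlib sketch (the helper name is ours, not part of the API):

```python
from urllib.parse import urlencode


def build_ask_url(page_url, question):
    """Append a URL-encoded `ask` query to a documentation page URL."""
    return f"{page_url}?{urlencode({'ask': question})}"


url = build_ask_url(
    "https://docs.scraperapi.com/asynchronous-api/overview.md",
    "What is the batch limit?",
)
```

A plain HTTP GET on the resulting URL returns the answer and supporting excerpts.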
