# Introduction

The `ScraperAPI Crawler` is ideal for scraping websites where the data you want to extract spans multiple linked pages. It's great for extracting data from product listings, paginated search results, real estate catalogs, or any other structure where one page leads to dozens or hundreds more. It takes care of **crawling**, **scraping**, and **retries**, and delivers the results back to you (webhook callbacks are also supported).

## What the Crawler Does

* Discovers and scrapes new pages based on how it's configured.
* Skips duplicates to avoid infinite loops.
* Handles failed requests gracefully.
* Stops when the credit budget or depth limit is hit.
* Streams page results during the crawl and sends a full summary at the end.

Whether you're crawling 10 pages or 10,000, it runs the job from start to finish and saves each page result in real time (or sends it over to your webhook).

{% hint style="warning" %}
**Free Plan limitations**

* **Link depth:** limited to `1` (seed URL plus direct links).
* **Scheduling:** recurring schedules (hourly/daily/weekly/monthly) are not available.

*Attempting to exceed these limits returns a `403` error. Upgrade to a paid plan to run deeper crawls and enable scheduling.*
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/scraperapi-crawler-v2.0/introduction.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
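As a minimal sketch, the request above can be built and sent from Python. The base URL comes from this page; the question text and the `build_ask_url` helper are illustrative, and the question must be URL-encoded before it is placed in the `ask` parameter:

```python
from urllib.parse import urlencode

# Base URL of this documentation page (from the example above)
BASE = "https://docs.scraperapi.com/scraperapi-crawler-v2.0/introduction.md"

def build_ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` query parameter
    return f"{BASE}?{urlencode({'ask': question})}"

# Hypothetical question for illustration
url = build_ask_url("How do I configure a webhook for crawl results?")
print(url)

# Fetch the answer with any HTTP client, e.g.:
# import urllib.request
# answer = urllib.request.urlopen(url).read().decode()
```

The response body contains a direct answer plus relevant excerpts and sources, so it can be consumed as plain text by an agent.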
