# What is DataPipeline?

DataPipeline is a **low-code solution** that **automates** your data collection for scraping projects. It lets you avoid writing complex code and maintaining your own scrapers, which **reduces engineering resources and costs**.

With DataPipeline you can automate your scraping jobs, receive the data wherever you need it, and scale up your projects.

All features at a glance:

* Run up to 100,000 URLs, keywords, ASINs, or Walmart IDs at once.
* Submit your input via an input field, a CSV upload, or a webhook for more flexible and dynamic scraping.
* Get results in HTML, structured JSON, or CSV.
* Schedule when your project should run automatically.
* Get your results delivered directly to your webhook.
* Get updates on the status of your projects and the success of your jobs directly in your email inbox.

<figure><img src="/files/mAZs5GUz1iRI3eureOFt" alt="" width="375"><figcaption></figcaption></figure>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/data-pipeline/what-is-datapipeline.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
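As a minimal sketch, the query above can be issued from Python's standard library. The base URL and the `ask` parameter come from this page; the question text and helper name below are illustrative:

```python
from urllib.parse import urlencode

# Base URL of this documentation page (from the docs above).
BASE_URL = "https://docs.scraperapi.com/data-pipeline/what-is-datapipeline.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL that queries the docs via the `ask` parameter.

    urlencode handles percent-encoding, so the question can be plain
    natural language with spaces and punctuation.
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

# Hypothetical example question:
url = build_ask_url("How do I schedule a DataPipeline project?")
print(url)

# Performing the actual GET (requires network access):
#   import urllib.request
#   answer = urllib.request.urlopen(url).read().decode()
```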
