# Endpoints and Parameters

The following section lists the current API endpoints and parameters. Explore these options to tailor your experience and optimize your integration with our DataPipeline.

## List of Endpoints <a href="#list-of-endpoints" id="list-of-endpoints"></a>

```
Create a project
POST   https://datapipeline.scraperapi.com/api/projects

Get a single project
GET    https://datapipeline.scraperapi.com/api/projects/:id

Update a project
PATCH  https://datapipeline.scraperapi.com/api/projects/:id

Archive a project
DELETE https://datapipeline.scraperapi.com/api/projects/:id

List projects
GET    https://datapipeline.scraperapi.com/api/projects

List the jobs of a project
GET    https://datapipeline.scraperapi.com/api/projects/:id/jobs

Cancel a job
DELETE https://datapipeline.scraperapi.com/api/projects/:id/jobs/:jobId
```
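The routes above can be sketched as a small helper that maps each action to its HTTP method and URL. This is a minimal illustration only: authentication is not covered on this page, so the sketch builds method/URL pairs without sending requests (pair it with a client such as `requests` and your account's auth scheme).

```python
# Hypothetical helper mapping the DataPipeline actions documented above
# to (HTTP method, URL) pairs. Route paths come from the endpoint list;
# the function and argument names are illustrative, not part of the API.
BASE = "https://datapipeline.scraperapi.com/api"

def endpoint(action, project_id=None, job_id=None):
    """Return the (method, url) pair for a DataPipeline action."""
    routes = {
        "create_project":  ("POST",   f"{BASE}/projects"),
        "list_projects":   ("GET",    f"{BASE}/projects"),
        "get_project":     ("GET",    f"{BASE}/projects/{project_id}"),
        "update_project":  ("PATCH",  f"{BASE}/projects/{project_id}"),
        "archive_project": ("DELETE", f"{BASE}/projects/{project_id}"),
        "list_jobs":       ("GET",    f"{BASE}/projects/{project_id}/jobs"),
        "cancel_job":      ("DELETE", f"{BASE}/projects/{project_id}/jobs/{job_id}"),
    }
    return routes[action]
```

For example, `endpoint("cancel_job", project_id=12, job_id=34)` yields the `DELETE` request that cancels job 34 of project 12.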

## Fields / Parameters

<table data-header-hidden><thead><tr><th width="295"></th><th></th></tr></thead><tbody><tr><td><code>name</code></td><td>The project name; purely for display purposes.</td></tr><tr><td><code>schedulingEnabled</code></td><td><p><code>true</code> / <code>false</code></p><p>This specifies the automatic rescheduling of a project. If it's set to <code>false</code>, the project won't be rescheduled after the next run (if applicable).</p></td></tr><tr><td><code>scrapingInterval</code></td><td><p>- just scrape once (default)<br>- hourly<br>- daily<br>- weekly<br>- monthly<br>- a cron expression like "10 * * * *"</p></td></tr><tr><td><code>scheduledAt</code></td><td>The next project runtime, specified in ISO 8601 format. Use "now" to start at the earliest possible time.</td></tr><tr><td><code>createdAt</code></td><td>The ISO 8601 representation of the date when the project was created.</td></tr><tr><td><code>projectType</code></td><td><p>This can be <code>urls</code> for simple URL scraping, or one of a number of other values if you need structured data (JSON, CSV).</p><p>Valid <code>projectType</code> parameters:</p><p>- urls<br>- amazon_product<br>- amazon_offer<br>- amazon_review<br>- amazon_search<br>- ebay_product<br>- ebay_search<br>- google_jobs<br>- google_news<br>- google_search<br>- google_shopping<br>- redfin_search<br>- redfin_agent<br>- redfin_forsale<br>- redfin_forrent<br>- walmart_category<br>- walmart_product<br>- walmart_search<br>- walmart_review</p></td></tr><tr><td><code>projectInput</code></td><td><p>The source of the list of URLs, search terms, etc. for the project.</p><p>There are two valid variants:</p><p>- A simple list (the default):<br><code>{"type": "list", "list": ["https://example.com", "https://httpbin.org"] }</code></p><p>- Webhook input. The webhook URL must return a newline-delimited list of search terms, ASINs, etc.:<br><code>{"type": "webhook_input", "url": "&#x3C;https://the.url.where.your.list.is>" }</code></p></td></tr><tr><td><code>webhookOutput</code></td><td>The results are always saved, and you can download them through the results link or through the UI.<br>If you want a webhook callback from your jobs, you can add an optional parameter:<br><br><code>"webhookOutput": { "url": "&#x3C;url>" }</code><br><br>If you want to encode the results as multipart/form-data (for example because you use webhook.site), you can use the extra <em>webhookEncoding</em> parameter like this:<br><br><code>"webhookOutput": { "url": "&#x3C;url>", "webhookEncoding": "multipart_form_data_encoding" }</code><br><br>To unset <code>webhookOutput</code>, set its value to null:<br><br><code>"webhookOutput": null</code></td></tr><tr><td><code>notificationConfig</code></td><td><p>Configure when to send email notifications about finished jobs.</p><p>• <code>notifyOnSuccess</code> - valid values: <code>never</code>, <code>with_every_run</code>, <code>daily</code>, <code>weekly</code></p><p>• <code>notifyOnFailure</code> - valid values: <code>with_every_run</code>, <code>daily</code>, <code>weekly</code></p></td></tr><tr><td><code>apiParams</code></td><td>Please refer to the <a href="../../../asynchronous-api/callbacks-and-api-params#api-params">Async API params</a>.</td></tr><tr><td><code>output_format</code></td><td><p>Set the output format of the response:</p><p>• CSV<br>• JSON</p></td></tr></tbody></table>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/data-pipeline/datapipeline-endpoints/endpoints-and-parameters.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
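Since the question is passed as a query-string parameter, it should be URL-encoded. A minimal sketch using only the Python standard library (the helper name is illustrative):

```python
from urllib.parse import quote

# The documentation page that accepts the `ask` query parameter.
PAGE = ("https://docs.scraperapi.com/data-pipeline/"
        "datapipeline-endpoints/endpoints-and-parameters.md")

def ask_url(question):
    """Build the GET URL for a natural-language documentation query."""
    return f"{PAGE}?ask={quote(question)}"
```

For example, `ask_url("How do I authenticate DataPipeline requests?")` yields a URL whose `ask` parameter carries the percent-encoded question, ready for a plain HTTP GET.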
