How to use πŸ“š

To help you get started with the new DataPipeline API endpoints, please review the following examples. These will guide you through the process of creating, managing, and integrating your projects and jobs programmatically, ensuring a seamless transition to this powerful new feature.

  1. Choose the endpoint for your desired action

  2. Add the necessary parameters


Create a New Project

Scheduling is enabled by default and set to a weekly interval. You can disable it by adding schedulingEnabled: false to your request.

Example:

curl -X POST \
  --data '{ "projectInput": { "type": "list", "list": ["https://www.amazon.com/AmazonBasics-3-Button-Wired-Computer-1-Pack/dp/B005EJH6RW/"] } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'

Output:

{
    "id": 522,
    "name": "Project created at 2024-05-10T16:04:28.263Z",
    "schedulingEnabled": true,
    "scrapingInterval": "weekly",
    "createdAt": "2024-05-10T16:04:28.306Z",
    "projectType": "urls",
    "projectInput": {
        "type": "list",
        "list": [
            "https://www.amazon.com/AmazonBasics-3-Button-Wired-Computer-1-Pack/dp/B005EJH6RW/"
        ]
    },
    "projectOutput": {
        "type": "save"
    },
    "notificationConfig": {
        "notifyOnSuccess": "never",
        "notifyOnFailure": "with_every_run"
    }
}

Except for the id and createdAt fields, everything can be specified at project creation.

Specifying Parameters

Let's see what a request looks like when sending parameters along with it.

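As a rough sketch, scraping parameters could be passed alongside the project input at creation time. Note that the apiParams field name and the country_code and render parameters are assumptions for illustration, not confirmed by this page:

```shell
curl -X POST \
  --data '{ "apiParams": { "country_code": "us", "render": true }, "projectInput": { "type": "list", "list": ["https://example.com/"] } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'
```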

Updating Parameters Of An Existing Project

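A hedged sketch of updating an existing project, assuming the project endpoint accepts a PATCH with only the fields to change. The project id 522 comes from the creation output above; the apiParams field is an assumption:

```shell
curl -X PATCH \
  --data '{ "apiParams": { "render": false } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```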

Specifying Project Type

Similarly to the DataPipeline GUI, you can specify a project type (e.g. Google Search) here as well. If not specified, projectType defaults to urls (simple URL scraping).

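For illustration, creating a Google Search project might look like the request below. The google_search value for projectType is an assumption (this page only confirms the urls default), and the search term is a placeholder:

```shell
curl -X POST \
  --data '{ "projectType": "google_search", "projectInput": { "type": "list", "list": ["wireless mouse"] } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'
```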

Specifying Project Input

Choose the source for providing lists of URLs, search terms, and other relevant data for your project:

  • A simple list (this is the default)

  • Webhook Input: with every run the system downloads the contents from the webhook, allowing you to update the list of URLs, ASINs, etc. without interacting with ScraperAPI.

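A sketch of a project whose input is fetched from a webhook on every run. The webhook_input type value and the url field name are assumptions, and the URL is a placeholder:

```shell
curl -X POST \
  --data '{ "projectInput": { "type": "webhook_input", "url": "https://example.com/url-list" } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'
```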

Changing Output Format For SDE Project Types

The default output format is JSON, but you can opt for CSV if it better suits your needs for the SDE project types.

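A hedged sketch of requesting CSV output for an SDE project. Both the google_search project type and the format field inside projectOutput are assumptions for illustration:

```shell
curl -X POST \
  --data '{ "projectType": "google_search", "projectInput": { "type": "list", "list": ["wireless mouse"] }, "projectOutput": { "type": "save", "format": "csv" } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'
```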

Changing Scheduling Interval

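As a sketch, the scrapingInterval field shown in the project output above can be changed on an existing project. The daily value and the PATCH method are assumptions:

```shell
curl -X PATCH \
  --data '{ "scrapingInterval": "daily" }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```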

Creating a Project With a Custom Cron Scheduling

Cron is a time-based job scheduler. It allows users to schedule jobs to run periodically at fixed times, dates, or intervals. A cron expression consists of five fields that define when a job should execute:

  • Minute (0-59)

  • Hour (0-23)

  • Day of the month (1-31)

  • Month (1-12 or JAN-DEC)

  • Day of the week (0-7 or SUN-SAT, where 0 and 7 represent Sunday)

For example, the cron expression "0 0 * * 1" means the job will run at midnight (00:00) every Monday (1 represents Monday in cron).

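A sketch of creating a project scheduled with that cron expression; passing the expression in the scrapingInterval field is an assumption:

```shell
curl -X POST \
  --data '{ "scrapingInterval": "0 0 * * 1", "projectInput": { "type": "list", "list": ["https://example.com/"] } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects?api_key=xxxxxx'
```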

Rescheduling The Project's Next Run Date

This works even with schedulingEnabled set to false, ensuring the project runs once.

If you want to schedule the project to run as soon as possible, you can use the now interval.
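A sketch of triggering a run as soon as possible using the now interval mentioned above; the PATCH method and the placement of now in scrapingInterval are assumptions:

```shell
curl -X PATCH \
  --data '{ "scrapingInterval": "now" }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```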

Configuring Webhook Output

If you are using webhook.site, it requires the content to be encoded as multipart/form-data, so an extra parameter is used here.

Set the webhook output

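A sketch of pointing the project output at a webhook. The webhook type value, the url field, and the encoding parameter name are all assumptions, and the webhook URL is a placeholder:

```shell
curl -X PATCH \
  --data '{ "projectOutput": { "type": "webhook", "url": "https://webhook.site/your-unique-id", "webhookEncoding": "multipart_form_data_encoding" } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```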

Unset the webhook output
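A sketch of reverting the project to the save default shown in the project output above; the PATCH method is an assumption:

```shell
curl -X PATCH \
  --data '{ "projectOutput": { "type": "save" } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```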

Configuring Notifications

Define when you would like to receive email notifications for completed jobs. This can be set at project creation or edited later on. Both notifyOnSuccess and notifyOnFailure have to be specified.

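A sketch of updating the notification settings of an existing project, using the notificationConfig object and the never and with_every_run values from the project output above; the PATCH method is an assumption (the same object could also be sent in the POST body at creation):

```shell
curl -X PATCH \
  --data '{ "notificationConfig": { "notifyOnSuccess": "with_every_run", "notifyOnFailure": "never" } }' \
  -H 'content-type: application/json' \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```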

Check The List of Jobs For a Project

You can list a project's jobs with the jobs endpoint.
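As a sketch, following the endpoint patterns above, the /jobs sub-path under the project resource is an assumption:

```shell
curl 'https://datapipeline.scraperapi.com/api/projects/522/jobs?api_key=xxxxxx'
```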

Cancel a Job Within a Project

Canceling a job within a project allows you to manage projects effectively by halting specific jobs that are no longer needed.

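A sketch of cancelling a job; the jobs sub-path and the job id 1234 are assumptions, while the DELETE method follows from the deletion wording in this section:

```shell
curl -X DELETE \
  'https://datapipeline.scraperapi.com/api/projects/522/jobs/1234?api_key=xxxxxx'
```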

Once you delete the job, the system will respond with ok if the command was successful.

If we now look up that same project, we'll see the job status is cancelled:

Archiving a Project

To archive an existing project, just send the following request:

Example:
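A sketch, assuming archiving is done with a DELETE on the project resource (the project id 522 comes from the creation output above):

```shell
curl -X DELETE \
  'https://datapipeline.scraperapi.com/api/projects/522?api_key=xxxxxx'
```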

When you try to look up that project now, the system will return Project not found.

These changes are also visible in the Dashboard under the DataPipeline Projects list.


Error Messages
