# n8n Integration

Our [n8n node](https://n8n.io/integrations/scraperapi/) lets you integrate ScraperAPI into your workflows. Send API requests directly from n8n and use the scraped data in your automations. We handle the heavy lifting in the background: **proxy/user-agent rotation**, **CAPTCHA and bot-blocker bypass**, and **rendering** (when necessary), so you can focus on building efficient and reliable data workflows.

### Installation

**Grab your ScraperAPI API Key:**

1. Sign up for a ScraperAPI account at [ScraperAPI Dashboard](https://dashboard.scraperapi.com/signup).
2. Once logged in, navigate to your dashboard.
3. Copy your API key from the dashboard.

Add the `ScraperAPI` Node **inside n8n:**

1. Log into n8n.
2. Open an existing workflow or create a new one.
3. Click the **`+`** button (Add Node) on the canvas.
4. In the search bar, type **ScraperAPI**.
5. Select the ScraperAPI node from the list.
6. The node will be added directly to your workflow.

#### How it works <a href="#how-it-works" id="how-it-works"></a>

**Scraping Workflow**

1. Add a **ScraperAPI** node to your workflow.
2. Select the **API** resource.
3. Enter the **URL** you want to scrape.
4. Configure any optional parameters (see available [Parameters](https://github.com/scraperapi/n8n-nodes-scraperapi-official/blob/master/README.md#parameters)).
5. Execute the workflow.

The node returns the scraped content.
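Under the hood, the node calls ScraperAPI's standard HTTP endpoint. As a rough sketch of the equivalent raw request (`YOUR_API_KEY` is a placeholder; copy your real key from the dashboard):

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder - copy yours from the dashboard

# Build the same kind of request the node sends for the API resource
params = {
    "api_key": API_KEY,
    "url": "https://example.com",  # the target URL to scrape
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)

# In a live workflow you would fetch it, e.g.:
#   import requests; html = requests.get(request_url).text
print(request_url)
```

The node builds and sends this request for you; you only fill in the URL and any optional parameters.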

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2F9biNC94UQtqu91XZ5wqi%2FScraperAPIn8n.gif?alt=media&#x26;token=ba6957e6-de29-4fe8-8c13-da3e42646a14" alt=""><figcaption></figcaption></figure>

#### AI Chat Model Scraping Workflow

Integrating an AI Chat Model into your workflow unlocks prompt-driven scraping, allowing you to scrape using natural language.

1. Add a **Chat Message Received** trigger.
2. Add an **AI Agent** node.
3. Connect an **AI Chat Model** (e.g. OpenAI) node to the Agent (Chat Model input).
4. Connect a **Simple Memory** node to the Agent (Memory input).
5. Connect the **ScraperAPI** node to the Agent (Tool input).
6. Add a **system prompt** to the **AI Agent** explaining how it should behave.

The rest of the workflow depends on your use case.
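For step 6, a starting-point system prompt might look like the following. It is purely illustrative; tune the wording to your use case:

```python
# Hypothetical system prompt for the AI Agent node (illustrative only)
SYSTEM_PROMPT = (
    "You are a web scraping assistant. When the user asks for data from a "
    "website, call the ScraperAPI tool with the target URL, then summarize "
    "the returned content for the user. If no URL is provided, ask for one."
)
print(SYSTEM_PROMPT)
```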

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FcSfWGBNRAkVs4nvDJIdW%2FAIAgent.gif?alt=media&#x26;token=7209f9d5-eace-46f6-b78d-b80c85811a09" alt=""><figcaption></figcaption></figure>

### Resources

#### API Endpoint

The **API** resource allows you to scrape any website using ScraperAPI's endpoint. It supports:

* JavaScript rendering for dynamic content.
* Geo-targeting with country codes.
* Device-specific user agents (desktop/mobile).
* Premium and ultra-premium proxy options.
* Automatic parsing of structured data for select websites.

| Parameter       | Parameter Type | Description                                                                                                                                                                                                                                                                                                                           |
| --------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `URL`           | REQUIRED       | The target URL to scrape (e.g., `https://example.com`)                                                                                                                                                                                                                                                                                |
| `COUNTRY_CODE`  | OPTIONAL       | Two-letter ISO country code (e.g., `US`, `GB`, `DE`) for geo-targeted scraping.                                                                                                                                                                                                                                                       |
| `Device Type`   | OPTIONAL       | <p>Choose the device type to scrape the page as:<br>- <strong><code>Desktop</code></strong>: Standard desktop browser user agent.<br>- <strong><code>Mobile</code></strong>: Mobile device user agent.</p>                                                                                                                            |
| `RENDER`        | OPTIONAL       | Enable JavaScript rendering for pages that require JavaScript to load content. Set to `true` only when needed, as it increases processing time.                                                                                                                                                                                       |
| `PREMIUM`       | OPTIONAL       | Use premium residential/mobile proxies for higher success rates. This option costs more but provides better reliability. **Note**: Cannot be combined with Ultra Premium.                                                                                                                                                             |
| `ULTRA_PREMIUM` | OPTIONAL       | Activate advanced bypass mechanisms for the most difficult websites. This is the most powerful option for sites with advanced anti-bot protection. **Note**: Cannot be combined with Premium.                                                                                                                                         |
| `OUTPUT_FORMAT` | OPTIONAL       | <p>The <code>output_format</code> parameter instructs the API on the format of the response. Valid options:</p><p></p><ul><li>markdown</li><li>text</li></ul><p></p><p><a href="../../structured-data-endpoints">SDEs</a> valid options:</p><ul><li>json</li><li>csv</li><li>markdown</li><li>text</li></ul> |
| `AUTOPARSE`     | OPTIONAL       | <p>Activate auto parsing for <a href="../../structured-data-endpoints">selected</a> websites by setting <code>autoparse=true</code>. The API will parse the data on the page and return it in JSON format.</p><p>This parameter does not increase the cost of the API request.</p>                                                    |
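To illustrate how the optional parameters above map onto a raw request, here is a sketch using the query-string names ScraperAPI documents for its API (the values are examples only; `YOUR_API_KEY` is a placeholder):

```python
from urllib.parse import urlencode

# Optional parameters from the table above, passed as query-string fields
params = {
    "api_key": "YOUR_API_KEY",        # placeholder
    "url": "https://example.com/products",
    "country_code": "us",             # geo-targeted scraping
    "device_type": "mobile",          # mobile user agent
    "render": "true",                 # JavaScript rendering (slower, use when needed)
    "output_format": "markdown",      # markdown or text
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
```

In the n8n node you set these fields in the node's options panel instead of building the URL yourself.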

#### MCP Server

ScraperAPI also provides an **MCP (Model Context Protocol) server** that enables AI models and agents to scrape websites.

#### Hosted MCP Server

ScraperAPI offers a hosted MCP server that you can use with n8n's [MCP Client Tool](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolmcp/).

**Configuration Steps:**

1. Add an **MCP Client Tool** node to your workflow.
2. Configure the following settings:
   * **Endpoint**: `https://mcp.scraperapi.com/mcp`
   * **Server Transport**: `HTTP Streamable`
   * **Authentication**: `Bearer Auth`
   * **Credential for Bearer Auth**: Enter your ScraperAPI API key as a Bearer Token.
   * **Tools to include**: `All` (or select specific tools as needed)
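For reference, an MCP client such as n8n's MCP Client Tool speaks JSON-RPC over HTTP to the endpoint above. A hedged sketch of the request it issues to discover the server's tools (the headers follow the MCP Streamable HTTP transport; the payload shape is standard JSON-RPC 2.0):

```python
import json

endpoint = "https://mcp.scraperapi.com/mcp"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # your ScraperAPI key as Bearer token
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}
# JSON-RPC 2.0 call asking the server which tools it exposes
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
body = json.dumps(payload)
# A live client would POST `body` with `headers` to `endpoint`.
print(body)
```

n8n handles this handshake for you; the sketch only shows what the node's settings translate into on the wire.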

#### Self-Hosted MCP Server

If you prefer to self-host the MCP server, you can find the implementation and setup instructions in the [scraperapi-mcp repository](https://github.com/scraperapi/scraperapi-mcp).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/integrations/automation-and-workflow-integrations/n8n-integration.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
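For example, a query URL for this page can be built like so (the question text is illustrative):

```python
from urllib.parse import urlencode

base = ("https://docs.scraperapi.com/integrations/"
        "automation-and-workflow-integrations/n8n-integration.md")
question = "Which output formats does the n8n node support?"
ask_url = base + "?" + urlencode({"ask": question})
print(ask_url)
```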
