Introduction & Setup

This community n8n node allows you to integrate ScraperAPI into your workflows. Send API requests directly from n8n and use the scraped data directly in your automations. We handle the heavy lifting in the background: proxy and user-agent rotation, CAPTCHA and bot-blocker bypass, and JavaScript rendering (when necessary), so you can focus on building efficient and reliable data workflows.

Installation

Grab your ScraperAPI API Key:

  1. Sign up for a ScraperAPI account at the ScraperAPI Dashboard.

  2. Once logged in, navigate to your dashboard.

  3. Copy your API key from the dashboard.

Installing the community node inside n8n:

  1. Bottom left --> Profile Icon --> Settings.

  2. Community Nodes --> Install.

  3. Enter n8n-nodes-scraperapi-official in the npm Package Name field.

  4. Agree to the [risks](https://docs.n8n.io/integrations/community-nodes/risks/) of using community nodes: select I understand the risks of installing unverified code from a public source.

  5. Select Install. n8n installs the node and returns you to the Community Nodes list in Settings.

  6. The ScraperAPI node is now visible in the nodes list.

How it works

Scraping Workflow

  1. Add a ScraperAPI node to your workflow.

  2. Select the API resource.

  3. Enter the URL you want to scrape.

  4. Configure any optional parameters (see Parameters below).

  5. Execute the workflow.

The node returns the scraped content.
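
Under the hood, the API resource calls ScraperAPI's standard HTTP endpoint. As a rough sketch (not the node's exact implementation, and assuming your key is exported as a SCRAPERAPI_KEY environment variable), the equivalent request looks like this:

```typescript
// Illustrative sketch of the request the API resource makes for you.
// Assumes the ScraperAPI key is available as the SCRAPERAPI_KEY environment variable.
const endpoint = new URL('https://api.scraperapi.com/');
endpoint.searchParams.set('api_key', process.env.SCRAPERAPI_KEY ?? '');
endpoint.searchParams.set('url', 'https://example.com'); // the target URL to scrape

const response = await fetch(endpoint); // ScraperAPI fetches the target through its proxy pool
const html = await response.text();     // scraped page content, ready for downstream nodes
console.log(html.slice(0, 200));
```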

AI Chat Model Scraping Workflow

Integrating an AI Chat Model into your workflow unlocks prompt-driven scraping, allowing you to scrape using natural language.

  1. Add a Chat Message Received trigger.

  2. Add an AI Agent node.

  3. Connect an AI Chat Model (e.g. OpenAI) node to the Agent (Chat Model input).

  4. Connect a Simple Memory node to the Agent (Memory input).

  5. Connect the ScraperAPI node to the Agent (Tool input).

  6. Add a system prompt to the AI Agent explaining how it should behave (see the example below).

The rest of the workflow depends on your use case.
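
As an illustration for step 6, a system prompt along these lines (adapt it to your use case) tells the agent when to call the ScraperAPI tool:

```
You are a web scraping assistant. When the user asks for information from a
specific website, call the ScraperAPI tool with that URL and answer using only
the returned content. If no URL is provided, ask for one. Do not invent data
that was not present in the scraped page.
```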

Resources

API Endpoint

The API resource allows you to scrape any website using ScraperAPI's endpoint. It supports:

  • JavaScript rendering for dynamic content.

  • Geo-targeting with country codes.

  • Device-specific user agents (desktop/mobile).

  • Premium and ultra-premium proxy options.

  • Automatic parsing of structured data for select websites.

| Parameter | Parameter Type | Description |
| --- | --- | --- |
| URL | REQUIRED | The target URL to scrape (e.g., https://example.com). |
| COUNTRY_CODE | OPTIONAL | Two-letter ISO country code (e.g., US, GB, DE) for geo-targeted scraping. |
| Device Type | OPTIONAL | The device type to scrape the page as: Desktop (standard desktop browser user agent) or Mobile (mobile device user agent). |
| RENDER | OPTIONAL | Enable JavaScript rendering for pages that require JavaScript to load content. Set to true only when needed, as it increases processing time. |
| PREMIUM | OPTIONAL | Use premium residential/mobile proxies for higher success rates. This option costs more but provides better reliability. Note: cannot be combined with Ultra Premium. |
| ULTRA_PREMIUM | OPTIONAL | Activate advanced bypass mechanisms for the most difficult websites. This is the most powerful option for sites with advanced anti-bot protection. Note: cannot be combined with Premium. |
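
These fields map onto ScraperAPI's query-string parameters (url, country_code, device_type, render, premium, ultra_premium). As a hedged sketch continuing the earlier example, a geo-targeted, rendered, mobile scrape might look roughly like this:

```typescript
// Illustrative only: how the optional parameters above end up on the query string.
const endpoint = new URL('https://api.scraperapi.com/');
endpoint.searchParams.set('api_key', process.env.SCRAPERAPI_KEY ?? '');
endpoint.searchParams.set('url', 'https://example.com/products'); // hypothetical target URL
endpoint.searchParams.set('country_code', 'de');    // geo-target through German proxies
endpoint.searchParams.set('device_type', 'mobile'); // mobile user agent
endpoint.searchParams.set('render', 'true');        // JavaScript rendering; slower, use only when needed
endpoint.searchParams.set('premium', 'true');       // premium proxies; do not combine with ultra_premium

const html = await (await fetch(endpoint)).text();
```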

MCP Server

ScraperAPI also provides an MCP (Model Context Protocol) server that enables AI models and agents to scrape websites.

Hosted MCP Server

ScraperAPI offers a hosted MCP server that you can use with n8n's MCP Client Tool.

Configuration Steps:

  1. Add an MCP Client Tool node to your workflow

  2. Configure the following settings:

    • Endpoint: https://mcp.scraperapi.com/mcp

    • Server Transport: HTTP Streamable

    • Authentication: Bearer Auth

    • Credential for Bearer Auth: Enter your ScraperAPI API key as a Bearer Token.

    • Tools to include: All (or select specific tools as needed)
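
n8n's MCP Client Tool handles the MCP handshake for you; the sketch below only illustrates where the Bearer token goes if you want to test the hosted server outside n8n. The JSON-RPC initialize payload follows the MCP specification; treat the protocolVersion and clientInfo values as placeholders:

```typescript
// Rough sketch of a bearer-authenticated MCP initialize request over streamable HTTP.
const response = await fetch('https://mcp.scraperapi.com/mcp', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.SCRAPERAPI_KEY ?? ''}`, // your ScraperAPI API key
    'Content-Type': 'application/json',
    Accept: 'application/json, text/event-stream', // streamable HTTP responses may use either
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'initialize',
    params: {
      protocolVersion: '2025-03-26',                             // placeholder; use your client's version
      capabilities: {},
      clientInfo: { name: 'example-client', version: '0.1.0' },  // hypothetical client details
    },
  }),
});
console.log(response.status, await response.text());
```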

Self-Hosted MCP Server

If you prefer to self-host the MCP server, you can find the implementation and setup instructions in the scraperapi-mcp repository.
