Introduction & Setup
This community n8n node allows you to integrate ScraperAPI into your workflows. Send API requests directly from n8n and feed the scraped data straight into your automations. We handle the heavy lifting in the background: proxy and user-agent rotation, CAPTCHA and bot-blocker bypass, and rendering (when necessary), so you can focus on building efficient and reliable data workflows.
Installation
Grab your ScraperAPI API Key:
Sign up for a ScraperAPI account on the ScraperAPI Dashboard.
Once logged in, navigate to your dashboard.
Copy your API key from the dashboard.
Install the ScraperAPI node inside n8n:
Bottom left --> Profile Icon --> Settings.
Community Nodes --> Install.
Enter 'n8n-nodes-scraperapi-official' inside npm Package Name field.
Agree to the [risks](https://docs.n8n.io/integrations/community-nodes/risks/) of using community nodes: select I understand the risks of installing unverified code from a public source.
Select Install. n8n installs the node and returns you to the Community Nodes list in Settings.
The ScraperAPI node is now visible in the nodes list.
How it works
Scraping Workflow
Add a ScraperAPI node to your workflow.
Select the API resource.
Enter the URL you want to scrape.
Configure any optional parameters (see Parameters below).
Execute the workflow.
The node returns the scraped content.
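Under the hood, the API resource issues a single HTTP request to ScraperAPI's endpoint and returns the response body. The TypeScript sketch below shows the rough equivalent of a default node call; the api.scraperapi.com endpoint and the api_key/url query parameters follow ScraperAPI's standard API, while the variable names and placeholder values are illustrative.

```typescript
// Rough equivalent of a basic node execution (illustrative sketch).
const apiKey = "YOUR_API_KEY";                 // from your ScraperAPI dashboard
const targetUrl = "https://example.com";       // the URL entered in the node

const endpoint = new URL("https://api.scraperapi.com/");
endpoint.searchParams.set("api_key", apiKey);
endpoint.searchParams.set("url", targetUrl);

const response = await fetch(endpoint);        // ScraperAPI handles proxies, retries, and blocks
const html = await response.text();            // scraped page content, as returned by the node
console.log(html.slice(0, 200));
```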

AI Chat Model Scraping Workflow
Integrating an AI Chat Model into your workflow unlocks prompt-driven scraping, allowing you to scrape using natural language.
Add a Chat Message Received trigger.
Add an AI Agent node.
Connect an AI Chat Model (e.g. OpenAI) node to the Agent (Chat Model input).
Connect a Simple Memory node to the Agent (Memory input).
Connect the ScraperAPI node to the Agent (Tool input).
Add a system prompt to the AI Agent explaining how it should behave (see the example below).
The rest of the workflow depends on your use case.
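The system prompt itself is up to you; as a purely illustrative starting point, it might look something like this:

```
You are a web scraping assistant. When the user asks for data from a website,
call the ScraperAPI tool with the target URL, then summarize or extract the
requested information from the returned content. Only scrape URLs the user
explicitly provides.
```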

Resources
API Endpoint
The API resource allows you to scrape any website using ScraperAPI's endpoint. It supports:
JavaScript rendering for dynamic content.
Geo-targeting with country codes.
Device-specific user agents (desktop/mobile).
Premium and ultra-premium proxy options.
Automatic parsing of structured data for select websites.
URL (Required)
The target URL to scrape (e.g., https://example.com).

Country Code (Optional)
Two-letter ISO country code (e.g., US, GB, DE) for geo-targeted scraping.

Device Type (Optional)
Choose the device type to scrape the page as:
- Desktop: Standard desktop browser user agent.
- Mobile: Mobile device user agent.

Render (Optional)
Enable JavaScript rendering for pages that require JavaScript to load content. Set to true only when needed, as it increases processing time.

Premium (Optional)
Use premium residential/mobile proxies for higher success rates. This option costs more but provides better reliability. Note: Cannot be combined with Ultra Premium.

Ultra Premium (Optional)
Activate advanced bypass mechanisms for the most difficult websites. This is the most powerful option for sites with advanced anti-bot protection. Note: Cannot be combined with Premium.
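These options map onto ScraperAPI's documented query parameters (country_code, device_type, render, premium, ultra_premium). The TypeScript sketch below assembles a request using several of them; the values are illustrative only.

```typescript
// Illustrative sketch: a ScraperAPI request with optional parameters set.
const params = new URLSearchParams({
  api_key: "YOUR_API_KEY",
  url: "https://example.com/products",
  country_code: "us",      // geo-targeted scraping
  device_type: "mobile",   // mobile user agent
  render: "true",          // JavaScript rendering; slower, so enable only when needed
  premium: "true",         // premium proxies; do not combine with ultra_premium
});

const response = await fetch(`https://api.scraperapi.com/?${params}`);
console.log(response.status, (await response.text()).length);
```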
MCP Server
ScraperAPI also provides an MCP (Model Context Protocol) server that enables AI models and agents to scrape websites.
Hosted MCP Server
ScraperAPI offers a hosted MCP server that you can use with n8n's MCP Client Tool.
Configuration Steps:
Add an MCP Client Tool node to your workflow.
Configure the following settings:
- Endpoint: https://mcp.scraperapi.com/mcp
- Server Transport: HTTP Streamable
- Authentication: Bearer Auth
- Credential for Bearer Auth: Enter your ScraperAPI API key as a Bearer Token.
- Tools to Include: All (or select specific tools as needed)
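Outside of n8n, the same hosted server can be reached from any MCP-capable client. The sketch below assumes the TypeScript MCP SDK (@modelcontextprotocol/sdk) and its Streamable HTTP transport; passing the Bearer token via the transport's requestInit option is an assumption here, so check the SDK documentation for the auth mechanism that fits your setup.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Assumption: the Bearer token is attached through the transport's requestInit headers.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.scraperapi.com/mcp"),
  { requestInit: { headers: { Authorization: "Bearer YOUR_API_KEY" } } },
);

const client = new Client({ name: "scraperapi-mcp-example", version: "1.0.0" });
await client.connect(transport);

const tools = await client.listTools();   // discover the tools the hosted server exposes
console.log(tools);
```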
Self-Hosted MCP Server
If you prefer to self-host the MCP server, you can find the implementation and setup instructions in the scraperapi-mcp repository.