Hosted (Remote)

Prerequisites

  • ScraperAPI account.

  • Claude (used in this guide).

Setup

If you don't have an account with us yet, head over to scraperapi.com to create one and grab your API key from the Dashboard area. You will need it to authenticate the requests that your LLM client will be making.

Configuration for Claude Desktop App:

  1. Open Claude Desktop Application.

  2. Access the Settings Menu.

  3. Click on the settings icon (typically a gear or three dots in the upper right corner).

  4. Select the "Developer" tab.

  5. Click on "Edit Config" and paste the following JSON block into the configuration file.

{
  "mcpServers": {
    "ScraperAPIRemote": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scraperapi.com/mcp",
        "--header",
        "Authorization: Bearer {YOUR_API_KEY}"
      ]
    }
  }
}
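
A common slip is leaving the `{YOUR_API_KEY}` placeholder (braces included) in the header. As a quick sanity check, you could load the config and verify the header was filled in; `check_scraperapi_config` below is a hypothetical helper sketch, not part of any SDK:

```python
import json

def check_scraperapi_config(config: dict) -> list:
    """Return a list of problems found in the mcpServers block.

    Hypothetical helper; the "ScraperAPIRemote" name matches the
    server key used in the JSON block above.
    """
    problems = []
    server = config.get("mcpServers", {}).get("ScraperAPIRemote")
    if server is None:
        return ["no 'ScraperAPIRemote' entry under 'mcpServers'"]
    auth = next((a for a in server.get("args", [])
                 if a.startswith("Authorization:")), "")
    if not auth:
        problems.append("no Authorization header in args")
    elif "{YOUR_API_KEY}" in auth:
        problems.append("API key placeholder was never replaced")
    return problems

# Parse the same JSON block shown above (placeholder still in place).
config = json.loads('''{
  "mcpServers": {
    "ScraperAPIRemote": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scraperapi.com/mcp",
        "--header",
        "Authorization: Bearer {YOUR_API_KEY}"
      ]
    }
  }
}''')
print(check_scraperapi_config(config))  # ['API key placeholder was never replaced']
```

After replacing the placeholder with your real key, the check returns an empty list.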

Parameters

scrape (required)

Tells the LLM to scrape a URL from the internet using ScraperAPI.

url (required)

URL you wish to scrape.

render (optional)

Defaults to False. Set to True if the page requires JavaScript rendering to display its contents.

country_code (optional)

Activates country-level geotargeting (e.g. “us”, “es”, “uk”).

premium (optional)

Set to True to use residential IPs with your scrapes.

ultra_premium (optional)

Activates advanced bypass mechanisms when set to True. Cannot be combined with premium.

device_type (optional)

Defaults to desktop. Set to mobile to use mobile user agents with the scrapes.
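
Under the hood, these tool options mirror ScraperAPI's standard request parameters. As a rough sketch (assuming the documented `api.scraperapi.com` query-string interface), the same options could be assembled for a direct HTTP call:

```python
def build_scrape_params(api_key, url, render=False, country_code=None,
                        premium=False, ultra_premium=False,
                        device_type="desktop"):
    """Map the scrape-tool options onto ScraperAPI query parameters.

    Sketch only -- mirrors the parameter list above, including the rule
    that premium and ultra_premium cannot be combined.
    """
    if premium and ultra_premium:
        raise ValueError("premium and ultra_premium cannot be combined")
    params = {"api_key": api_key, "url": url}
    if render:
        params["render"] = "true"
    if country_code:
        params["country_code"] = country_code
    if premium:
        params["premium"] = "true"
    if ultra_premium:
        params["ultra_premium"] = "true"
    if device_type != "desktop":          # desktop is the default
        params["device_type"] = device_type
    return params

params = build_scrape_params("YOUR_API_KEY", "https://example.com",
                             render=True, country_code="us",
                             device_type="mobile")
print(params)
```

With a library like `requests`, this dict would be passed as the query string of a GET request to `https://api.scraperapi.com`.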

Prompt Examples

Scrape Zillow to get Real Estate insights

- "Scrape this Zillow search results URL https://www.zillow.com/queens-new-york-ny/under-500000/ and return a list of homes with: address, neighborhood, price, beds, baths, square feet (if available), listing URL, and a 1-2 sentence summary."

Response

Claude successfully scraped the search results page, but the response exceeded Claude’s 1MB MCP output limit, so the data could not be returned in full at first. To work around this limit, the request was retried using a different output format.

A list of properties, including pricing details and property-related information, was successfully extracted and returned once the response was reformatted to fit within Claude's MCP output size limit.

ZillowSearch.md

Crawl Walmart Seller Profile pages to extract product data

- "I want to crawl this Walmart seller page https://www.walmart.com/brand/bose/10026932 to get and scrape the product URLs for that seller (max depth should be 1), then have the data streamed to this webhook: https://webhook.site/203557d4-0c71-437d-9093-86d42f5d2b79. Starting URL is https://www.walmart.com/brand/bose/10026932. The budget for this is 500 API Credits. Use the following as regexp: urlRegexpInclude: .*/ip/.* The name of the crawling job should be 'Walmart Seller Profile page products' and it should be run only once."

Response

Claude automatically configured and created a new Crawler job via the ScraperAPI Crawler using the provided information. It also offered to check the crawl job status, making it easy to monitor progress and confirm when results are delivered.
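
In effect, the prompt asks Claude to translate free-form instructions into a structured crawl-job configuration. A hypothetical sketch of that job as a plain dict (field names are illustrative, not ScraperAPI's actual Crawler schema):

```python
import re

# Hypothetical representation of the crawl job Claude configured from the
# prompt above; field names are illustrative, not ScraperAPI's actual schema.
crawl_job = {
    "name": "Walmart Seller Profile page products",
    "startUrl": "https://www.walmart.com/brand/bose/10026932",
    "maxDepth": 1,                      # only follow links one level deep
    "urlRegexpInclude": r".*/ip/.*",    # keep product-detail URLs only
    "webhook": "https://webhook.site/203557d4-0c71-437d-9093-86d42f5d2b79",
    "budget": 500,                      # API-credit cap from the prompt
    "runOnce": True,
}

# The include pattern matches Walmart product pages but not the seller page.
assert re.match(crawl_job["urlRegexpInclude"], "https://www.walmart.com/ip/123")
assert not re.match(crawl_job["urlRegexpInclude"], crawl_job["startUrl"])
```

The `urlRegexpInclude` filter is what restricts the crawl to product-detail pages (Walmart product URLs contain `/ip/`), which keeps the 500-credit budget from being spent on irrelevant links.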

Job Status Response
Webhook Results

Google Shopping product search

- "I want to get an All-Clad kitchen utensils set. Check what's available on Google in the US and return the top 10 results. My budget is $2000."

Response

Claude executed a Google Shopping search using ScraperAPI, then extracted and returned the top 10 results, presenting them in a structured table for easy comparison.

GoogleShoppingResults.md

ScraperAPI plays a key role in making these workflows possible by handling the heavy lifting behind the scenes. It ensures that Claude can retrieve accurate results efficiently, even when dealing with difficult-to-scrape websites.
