# Claude Desktop

## Remote Integration

#### Prerequisites

* ScraperAPI account
* Claude Desktop installed
* Claude account

#### Setup

If you don't have an account with us yet, head over to [scraperapi.com](https://www.scraperapi.com/) to create one and grab your API key from the [Dashboard](https://dashboard.scraperapi.com/home) area. You will need it to authenticate the requests that your LLM client will be making.
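If you want to confirm that your key works before wiring it into Claude, ScraperAPI exposes an account endpoint that returns your usage details as JSON. A minimal sketch (the helper names here are ours, not part of any SDK):

```python
import json
import urllib.request

ACCOUNT_ENDPOINT = "https://api.scraperapi.com/account"

def account_url(api_key: str) -> str:
    """Build the account-info URL for the given API key."""
    return f"{ACCOUNT_ENDPOINT}?api_key={api_key}"

def check_api_key(api_key: str) -> dict:
    """Call the account endpoint; a valid key returns usage details,
    an invalid one raises an HTTPError."""
    with urllib.request.urlopen(account_url(api_key)) as resp:
        return json.load(resp)
```

A successful call returns your remaining credits and concurrency limit, which is a quick way to rule out key problems before debugging the MCP configuration itself.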

**Configuration for Claude Desktop App:**

1. Open Claude Desktop Application.
2. Access the Settings Menu.
3. Click on the settings icon (typically a gear or three dots in the upper right corner).
4. Select the "Developer" tab.
5. Click on "Edit Config" and paste the JSON block in the configuration file.

```json
{
  "mcpServers": {
    "ScraperAPIRemote": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.scraperapi.com/mcp",
        "--header",
        "Authorization: Bearer {YOUR_API_KEY}"
      ]
    }
  }
}
```

#### Parameters

| **`scrape`** (required)        | Tells the LLM to scrape a URL from the internet using ScraperAPI.                                                                                                                                                                       |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`url`** (required)           | The URL you wish to scrape.                                                                                                                                                                                                             |
| **`render`** (optional)        | Defaults to **`False`**. Set to **`True`** if the page requires JavaScript rendering to display its contents.                                                                                                                           |
| **`country_code`** (optional)  | Activates country geotargeting (e.g. “us”, “es”, “uk”, etc.).                                                                                                                                                                           |
| **`premium`** (optional)       | Set to **`True`** to use residential IPs with your scrapes.                                                                                                                                                                             |
| **`ultra_premium`** (optional) | Activates advanced bypass mechanisms when set to **`True`**. Can **not** be combined with **`premium`**.                                                                                                                                |
| **`device_type`** (optional)   | Defaults to **`desktop`**. Set to **`mobile`** to use mobile user agents with the scrapes.                                                                                                                                              |
| **`output_format`** (optional) | <p>Instructs the API on what the response file type should be. Default is HTML. Valid options:<br>- markdown<br>- text<br>- json (for a <a href="https://docs.scraperapi.com/structured-data-endpoints/overview">list</a> of domains)</p> |
| **`autoparse`** (optional)     | Boolean. Activates auto parsing for a [list](https://docs.scraperapi.com/structured-data-endpoints/overview) of supported domains. Defaults to `False`. Set to `True` only if you want the output in CSV or JSON format.                 |
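These tool parameters mirror ScraperAPI's standard HTTP API, so you can sanity-check any combination with a direct request. A sketch that only assembles the request URL (the function name is ours; the endpoint and parameter names come from the table above):

```python
import urllib.parse

def build_scrape_url(api_key, url, render=False, country_code=None,
                     premium=False, ultra_premium=False,
                     device_type="desktop", output_format=None,
                     autoparse=False):
    """Assemble a direct ScraperAPI request URL from the MCP tool parameters."""
    if premium and ultra_premium:
        raise ValueError("premium and ultra_premium cannot be combined")
    params = {"api_key": api_key, "url": url}
    if render:
        params["render"] = "true"
    if country_code:
        params["country_code"] = country_code
    if premium:
        params["premium"] = "true"
    if ultra_premium:
        params["ultra_premium"] = "true"
    if device_type != "desktop":
        params["device_type"] = device_type
    if output_format:
        params["output_format"] = output_format
    if autoparse:
        params["autoparse"] = "true"
    return "https://api.scraperapi.com/?" + urllib.parse.urlencode(params)
```

Fetching the resulting URL with your HTTP client of choice should return the same content the MCP tool hands to Claude, which makes it easy to isolate whether a problem lies with the parameters or with the client configuration.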

#### Prompt Examples

<mark style="background-color:blue;">Scrape Zillow to get Real Estate insights</mark>

\- *"Scrape this Zillow search results URL <https://www.zillow.com/queens-new-york-ny/under-500000/> and return a list of homes with: address, neighborhood, price, beds, baths, square feet (if available), listing URL, and a 1-2 sentence summary."*

**Response**

Claude successfully scraped the search results page, but the response exceeded Claude’s 1MB MCP output limit, so the data could not be returned in full at first. To work around this limit, the request was retried using a different output format.

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FXGuqwylDiME3G53OkqV7%2Fimage.png?alt=media&#x26;token=27f084ce-b8a6-4470-9f74-42e80de7b486" alt=""><figcaption></figcaption></figure>

A list of properties, including pricing details and property-related information, was successfully extracted and returned once the response was reformatted to fit within Claude's MCP output size limit.

<details>

<summary>ZillowSearch.md</summary>

```markdown
155-27 89th Street #27

Neighborhood: Howard Beach

Price: $299,999

Beds/Baths: 3 / 2

Sq Ft: —

Listing URL: https://www.zillow.com/homedetails/155-27-89th-St-Howard-Beach-NY-11414/458182526_zpid/

Summary: 3-bed, 2-bath condo available for open house. Listing has 26 photos available.

10151 116th St

Neighborhood: South Richmond Hill

Price: $195,000

Beds/Baths: 3 / 2

Sq Ft: —

Listing URL: https://www.zillow.com/homedetails/10151-116th-St-South-Richmond-Hill-NY-11419/32123970_zpid/

Summary: 3-bed, 2-bath house listed by Joshua's Realty. Been on market 13 days.

86-16 60th Avenue #4L

Neighborhood: Elmhurst

Price: $200,000

Beds/Baths: 2 / 2

Sq Ft: 1,400

Listing URL: https://www.zillow.com/homedetails/8616-60th-Ave-APT-4L-Flushing-NY-11373/245387446_zpid/

Summary: 2-bed, 2-bath condo with generously sized bedrooms in a well-proportioned space.

17343 103rd Road

Neighborhood: Jamaica

Price: $464,900

Beds/Baths: 3 / 2

Sq Ft: 930

Listing URL: https://www.zillow.com/homedetails/17343-103rd-Rd-Jamaica-NY-11433/32140263_zpid/

Summary: Foreclosure property with 3 beds, 2 baths and a detached garage.

21-28 35th Street #2A

Neighborhood: Astoria

Price: $349,000

Beds/Baths: 3 / 1

Sq Ft: 750

Listing URL: https://www.zillow.com/homedetails/21-28-35th-St-APT-2A-Astoria-NY-11105/2068770096_zpid/

Summary: 3-bed, 1-bath condo featuring exposed brick and 24 photos showing unit details.

12031 142nd St

Neighborhood: Jamaica

Price: $275,000

Beds/Baths: 3 / 2

Sq Ft: —

Listing URL: https://www.zillow.com/homedetails/12031-142nd-St-Jamaica-NY-11436/32182860_zpid/

Summary: 3-bed, 2-bath house listed by Joshua's Realty. Listed 13 days ago.

182-16 144th Ave

Neighborhood: Jamaica

Price: $399,000

Beds/Baths: 6 / 3

Sq Ft: —

Listing URL: https://www.zillow.com/homedetails/182-16-144th-Ave-Jamaica-NY-11413/32203125_zpid/

Summary: Large 6-bed, 3-bath house offering significant space for a family. Listed 13 days ago.

11132 172nd St

Neighborhood: Jamaica

Price: $450,000

Beds/Baths: 3 / 1

Sq Ft: —

Listing URL: https://www.zillow.com/homedetails/11132-172nd-St-Jamaica-NY-11433/460157124_zpid/

Summary: 3-bed, 1-bath for-sale-by-owner property with 18 photos. On market 10 days.

10 Holder Place #2A

Neighborhood: Forest Hills

Price: $449,000

Beds/Baths: 1 / 2

Sq Ft: 1,033

Listing URL: https://www.zillow.com/homedetails/10-Holder-Pl-APT-2A-Forest-Hills-NY-11375/110283461_zpid/

Summary: —
```

</details>

<mark style="background-color:blue;">Crawl Walmart Seller Profile pages to extract product data</mark>

*- "I want to crawl this Walmart seller page <https://www.walmart.com/brand/bose/10026932> to get and scrape the product URLs for that seller (max depth should be 1), then have the data streamed to this webhook: <https://webhook.site/203557d4-0c71-437d-9093-86d42f5d2b79>.* \
*Starting URL is <https://www.walmart.com/brand/bose/10026932>. The budget for this is 500 API Credits. Use the following as regexp: urlRegexpInclude: .\*/ip/.\**\
*The name of the crawling job should be 'Walmart Seller Profile page products' and it should run only once."*

**Response**

Claude automatically configured and created a new Crawler job via the ScraperAPI Crawler, using the provided information. It also offered to check the crawl job's status, making it easy to monitor progress and confirm when results are being delivered.

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FVGdnF6Nio3nBZyT07BPj%2Fimage.png?alt=media&#x26;token=eaabf075-9bc3-4da1-950a-c0348531ed26" alt=""><figcaption></figcaption></figure>

<details>

<summary>Job Status Response</summary>

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FX1lMJ0dlzGwU8hLZ2oCd%2Fimage.png?alt=media&#x26;token=246f9563-7aa6-4b67-885d-8eb07b2a4285" alt=""><figcaption></figcaption></figure>

</details>

<details>

<summary>Webhook Results</summary>

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FI6ae0k5BBqzNlVgRbMYn%2Fimage.png?alt=media&#x26;token=53f4a1fc-e320-41ee-8c50-803e2ba38646" alt=""><figcaption></figcaption></figure>

</details>

<mark style="background-color:blue;">Google Shopping product search</mark>

\- *"I want to get an All-Clad kitchen utensils set. Check what's available on Google in the US and return the top 10 results. My budget is $2000."*

**Response**

Claude executed a Google Shopping search using ScraperAPI, then extracted and returned the top 10 results, presenting them in a structured table for easy comparison.

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2FhkVG4u5JWZoW0q60diz8%2Fimage.png?alt=media&#x26;token=cbd76c81-b838-4592-9255-5b7680a67278" alt=""><figcaption></figcaption></figure>

<details>

<summary>GoogleShoppingResults.md</summary>

```markdown
| #  | Product                                              | Price    | Retailer              |
|----|------------------------------------------------------|----------|-----------------------|
| 1  | All-Clad Stainless Steel Kitchen Tool Set             | $99.95   | Sur La Table          |
| 2  | All-Clad 5-Piece Nonstick Kitchen Tool Set            | $119.99  | All-Clad              |
| 3  | All-Clad Cook & Serve 6-Piece Tool Set                | $99.99   | All-Clad              |
| 4  | All-Clad Silicone Tools 5-Piece Ultimate Set          | $69.99   | All-Clad              |
| 5  | All-Clad Precision Stainless-Steel 8-Piece Utensil Set | $329.95  | Williams-Sonoma       |
| 6  | All-Clad Stainless Steel Kitchen Utensil Set (8 pcs)  | $164.99  | Walmart (Saya Corp)   |
| 7  | All-Clad Silicone Tools Spatula Set                   | $59.99   | All-Clad              |
| 8  | Precision Stainless Steel Tools Set (5 piece)         | $199.99  | All-Clad              |
| 9  | All-Clad Precision Nonstick 8-Piece Utensil Set       | $339.95  | Williams-Sonoma       |
| 10 | All-Clad 4-Piece Barbecue Tool Set with Case          | $154.99  | All-Clad              |

```

</details>

ScraperAPI plays a key role in making these workflows possible by handling the heavy lifting behind the scenes. It ensures that Claude can retrieve accurate results efficiently, even when dealing with difficult-to-scrape websites.

## Local (Self-Hosted) Integration

#### Prerequisites

* ScraperAPI Account
* Python 3.11+
* Claude Desktop installed
* Claude Account
* Docker (optional)

Install the **`scraperapi-mcp-server`** package using pip:

```bash
pip install scraperapi-mcp-server
```
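If the server later fails to start from Claude, a quick first check is whether the package actually resolved on the Python interpreter Claude will invoke. A small sketch (the helper name is ours):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the module can be found on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# After `pip install scraperapi-mcp-server`, this should hold:
# is_installed("scraperapi_mcp_server")
```

Note the package installs as `scraperapi-mcp-server` but imports as `scraperapi_mcp_server`; a `False` result usually means pip installed it into a different Python environment than the one in your config.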

### Setup

If you don't have an account with us yet, head over to [scraperapi.com](https://www.scraperapi.com/) to create one and grab your API key from the [Dashboard](https://dashboard.scraperapi.com/home) area. You will need it to authenticate the requests that your LLM client will be making.

Now that you have your API Key, add this JSON block to your client configuration file:

**Using Python:**

```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "python",
      "args": ["-m", "scraperapi_mcp_server"],
      "env": {
        "API_KEY": "<YOUR_SCRAPERAPI_API_KEY>"
      }
    }
  }
}
```

**Using Docker:**

```json
{
  "mcpServers": {
    "ScraperAPI": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "API_KEY=${YOUR_SCRAPERAPI_API_KEY}",
        "--rm",
        "scraperapi-mcp-server"]
    }
  }
}
```

**Configuration for Claude Desktop App:**

1. Open Claude Desktop Application.
2. Access the Settings Menu.
3. Click on the settings icon (typically a gear or three dots in the upper right corner).
4. Select the "Developer" tab.
5. Click on "Edit Config" and paste the JSON block in the configuration file.

**That's it, the MCP Server is fully configured!** Include the keyword **`scrape`** in a prompt, and the LLM will automatically use **ScraperAPI** to retrieve the data you need.

If you are interested in digging deeper, you can set up the MCP server locally on your machine. [Here](https://pypi.org/project/scraperapi-mcp-server/#development) you will find instructions on how to setup, debug and customize the server for advanced development.

#### Parameters

| **`scrape`** (required)        | Tells the LLM to scrape a URL from the internet using ScraperAPI.                                                                                                                                                                       |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`url`** (required)           | The URL you wish to scrape.                                                                                                                                                                                                             |
| **`render`** (optional)        | Defaults to **`False`**. Set to **`True`** if the page requires JavaScript rendering to display its contents.                                                                                                                           |
| **`country_code`** (optional)  | Activates country geotargeting (e.g. “us”, “es”, “uk”, etc.).                                                                                                                                                                           |
| **`premium`** (optional)       | Set to **`True`** to use residential IPs with your scrapes.                                                                                                                                                                             |
| **`ultra_premium`** (optional) | Activates advanced bypass mechanisms when set to **`True`**. Can **not** be combined with **`premium`**.                                                                                                                                |
| **`device_type`** (optional)   | Defaults to **`desktop`**. Set to **`mobile`** to use mobile user agents with the scrapes.                                                                                                                                              |
| **`output_format`** (optional) | <p>Instructs the API on what the response file type should be. Default is HTML. Valid options:<br>- markdown<br>- text<br>- json (for a <a href="https://docs.scraperapi.com/structured-data-endpoints/overview">list</a> of domains)</p> |
| **`autoparse`** (optional)     | Boolean. Activates auto parsing for a [list](https://docs.scraperapi.com/structured-data-endpoints/overview) of supported domains. Defaults to `False`. Set to `True` only if you want the output in CSV or JSON format.                 |

#### Prompt Template

* Please scrape this URL **`<URL>`**. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country\_code to overcome geo-restrictions. If errors persist, upgrade the request to use premium proxies by adding **premium=true**. For persistent failures, activate **ultra\_premium=true** to use enhanced anti-blocking measures.
* Can you scrape URL **`<URL>`** to extract **`<SPECIFIC_DATA>`**? If the request returns missing or incomplete **`<SPECIFIC_DATA>`**, set **render=true** to enable JS Rendering.
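The escalation strategy in the first template can also be expressed as a retry ladder if you want the same behavior outside the LLM. A sketch (names are ours; the `"us"` country code is a placeholder to be chosen from the site's geo-targeting):

```python
# Parameter sets, in escalation order, matching the prompt template above:
# flat request -> geotargeting -> residential IPs -> advanced bypass.
ESCALATION_LADDER = [
    {},
    {"country_code": "us"},       # placeholder; match the target site's region
    {"premium": "true"},
    {"ultra_premium": "true"},
]

def next_params(attempt: int) -> dict:
    """Return the extra ScraperAPI parameters for a given retry (0-based)."""
    if attempt >= len(ESCALATION_LADDER):
        raise RuntimeError("all escalation tiers exhausted")
    return ESCALATION_LADDER[attempt]
```

Each retry would merge `next_params(attempt)` into the base request until the scrape succeeds or the ladder runs out, which is exactly the decision process the prompt delegates to the LLM.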

#### Example Prompt

*"Please scrape this URL* [*https://www.lowes.com/pd/Kozyard-12-ft-x-16-ft-Gazebo-Dark-Brown-Metal-Square-Screened-Gazebo-with-Steel-Roof/5014900669*](https://www.lowes.com/pd/Kozyard-12-ft-x-16-ft-Gazebo-Dark-Brown-Metal-Square-Screened-Gazebo-with-Steel-Roof/5014900669)*. If you receive a 500 server error, identify the website's geo-targeting and add the corresponding country\_code to overcome geo-restrictions. If errors persist, upgrade the request to use premium proxies by adding **premium=true**. For persistent failures, activate **ultra\_premium=true** to use enhanced anti-blocking measures. Get me the price of the product."*

#### Response

The LLM first attempted to scrape the product page with a ***flat*** request (no extra parameters), but it was unable to find the price information requested in the prompt. It then identified that the price was likely being loaded dynamically via JavaScript and determined that enabling JS Rendering (**render=true**) would be the next step.

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2Fm4bKsXDK66rCvftLgA85%2Fimage.png?alt=media&#x26;token=8a83493c-60a9-40e1-bcfb-537969e10312" alt=""><figcaption></figcaption></figure>

The LLM then retried the request, this time with **`render=true`** and successfully retrieved the price information.

<figure><img src="https://921583510-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FXJv4kz1e8RdAq9HrFwyo%2Fuploads%2Fgtw6K8q7rrIf1px3BjdA%2Fimage.png?alt=media&#x26;token=0f8a383b-8655-4c98-a8be-8279e3563430" alt=""><figcaption></figcaption></figure>

The full set of price options, product details, and availability was successfully extracted once JavaScript Rendering was applied.

```markdown
Price Information:
Current Price: $2,058.00
Additional Pricing Options:
MyLowe's Rewards Credit Card: $1,955.10 (when you choose 5% savings on eligible purchases)
84-month financing: $35/month payments available
Buy Now, Pay Later (Lowe's Pay): $123.60 with 18 monthly payments
Product Details:
Model: Kozyard KZAHG1216BRW
Item #: 5682350
Size: 12-ft x 16-ft Alexander Dark Brown Gazebo with Hardtop Roof
Rating: 3.8/5 stars (15 reviews)
Features: Includes screen, hardtop roof, curtains, and mosquito netting
The gazebo is currently available for pickup (ready by Thu, Aug 21) and delivery (as soon as tomorrow) at the Presque Isle Lowe's location, with 18 units available for both options.
```

{% hint style="info" %}
This illustrates why **ScraperAPI** is essential to the process. Without the smart proxy, user agent rotation, and JavaScript Rendering, the LLM would not have been able to access the webpage without disruption and capture the price details successfully.
{% endhint %}
