# Premium Residential/Mobile Proxy Pools

Our standard proxy pools include millions of proxies from over a dozen ISPs and are sufficient for the vast majority of scraping jobs. For particularly difficult-to-scrape sites, however, we also maintain a private internal pool of residential and mobile IPs. This pool is available to all paid users.

Requests through our premium residential and mobile pool are charged at 10 times the normal rate: every successful request counts as 10 API credits against your monthly limit. Requests that combine JavaScript rendering with the premium proxy pool are charged at 25 times the normal rate (25 API credits per successful request). To route a request through the premium proxy pool, add the `premium=true` parameter to your request.

We also offer a higher premium tier for particularly tough target websites. These pools can be accessed by adding the `ultra_premium=true` parameter. Requests use 30 API credits against your monthly limit, or 75 when combined with JS Rendering (`render=true`). Please note, this tier is **only available on our paid plans**. Requests with the `ultra_premium=true` parameter are cached by default to improve performance and efficiency. For details on how caching works and its benefits, see our [Cached Responses](/control-and-optimization/cached-responses.md) page.
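The credit rates above can be summarized in a small illustrative helper (the function name is ours, and only the premium tiers described on this page are covered; other parameter combinations are priced elsewhere in the docs):

```python
def premium_credits(ultra=False, render=False):
    """Credits consumed per successful request through the premium pools,
    per the rates quoted above: premium=10 (25 with render),
    ultra_premium=30 (75 with render)."""
    if ultra:
        return 75 if render else 30
    return 25 if render else 10

# e.g. budget check: 1,000 premium requests with JS rendering
print(1000 * premium_credits(render=True))  # 25000
```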

{% hint style="success" %}
The `premium=true` and `ultra_premium=true` parameters are mutually exclusive and cannot be used in the same request.
{% endhint %}

{% hint style="danger" %}
Custom headers **cannot be used** together with `ultra_premium=true`. If `ultra_premium=true` is set, we discard all custom headers, even when the request includes `keep_headers=true`.
{% endhint %}
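The tabs below all use `premium=true`; an `ultra_premium` request is built the same way with the parameter swapped. A minimal sketch that only constructs the request URL (pass it to any of the HTTP clients shown below):

```python
from urllib.parse import urlencode

# Replace API_KEY with your actual API Key.
params = {
    'api_key': 'API_KEY',
    'ultra_premium': 'true',   # 30 credits per request, or 75 with render=true
    'url': 'https://example.com/',
}

# urlencode also percent-encodes the target URL for safe transport.
request_url = 'https://api.scraperapi.com/?' + urlencode(params)
print(request_url)
```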

* **API REQUEST**

{% tabs %}
{% tab title="cURL" %}

```bash
curl --request GET \
  --url 'https://api.scraperapi.com?api_key=API_KEY&premium=true&url=https://example.com/'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

target_url = 'https://example.com/'
# Replace the value for api_key with your actual API Key.
api_key = 'API_KEY'

request_url = (
    f'https://api.scraperapi.com?'
    f'api_key={api_key}'
    f'&premium=true'
    f'&url={target_url}'
)

response = requests.get(request_url)
print(response.text)
```

{% endtab %}

{% tab title="NodeJS" %}

```javascript
import fetch from 'node-fetch';

// Replace the value for api_key with your actual API Key
const url = 'https://api.scraperapi.com/?api_key=API_KEY&premium=true&url=https://example.com/';

fetch(url)
  .then(response => response.text())
  .then(html => {
    console.log(html);
  })
  .catch(error => {
    console.error(error);
  });
```

{% endtab %}

{% tab title="PHP" %}

```php
<?php

// Replace the value for api_key with your actual API Key
$url = "https://api.scraperapi.com?api_key=API_KEY&premium=true&url=https://example.com";

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);

$response = curl_exec($ch);

if (curl_errno($ch)) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    print_r($response);
}

curl_close($ch);
```

{% endtab %}

{% tab title="Ruby" %}

```ruby
require 'net/http'

# Replace the value for api_key with your actual API Key
params = {
  api_key: "API_KEY",
  premium: "true",
  url: "https://example.com/"
}

uri = URI('https://api.scraperapi.com/')
uri.query = URI.encode_www_form(params)

website_content = Net::HTTP.get(uri)
puts website_content
```

{% endtab %}

{% tab title="Java" %}

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Main {
    public static void main(String[] args) throws Exception {
        // Replace the value for api_key with your actual API Key
        String apiKey = "API_KEY";
        String targetUrl = "https://example.com/";

        String scraperApiUrl =
                "https://api.scraperapi.com?api_key=" + apiKey
                + "&premium=true"
                + "&url=" + targetUrl;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(scraperApiUrl))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```

{% endtab %}
{% endtabs %}

* **ASYNC REQUEST**

{% tabs %}
{% tab title="cURL" %}

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
        "apiKey": "API_KEY",
        "url": "https://example.com/",
        "apiParams": {
          "premium": "true"
        }
      }' \
  "https://async.scraperapi.com/jobs"
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

r = requests.post(
    url='https://async.scraperapi.com/jobs',
    json={
        # Replace the value for apiKey with your actual API Key.
        'apiKey': 'API_KEY',
        'url': 'https://example.com/',
        'apiParams': {
            'premium': 'true'
        }
    }
)

print(r.text)
```

{% endtab %}

{% tab title="NodeJS" %}

```javascript
import axios from 'axios';

(async () => {
  try {
    const { data } = await axios({
      method: 'POST',
      url: 'https://async.scraperapi.com/jobs',
      headers: { 'Content-Type': 'application/json' },
      data: {
        // Replace the value for api_key with your actual API Key.
        apiKey: 'API_KEY',
        url: 'https://example.com',
        apiParams: {
          premium: 'true'
        }
      }
    });

    console.log(data);
  } catch (error) {
    console.error(error);
  }
})();
```

{% endtab %}

{% tab title="PHP" %}

```php
<?php
$payload = json_encode([
    // Replace the value for api_key with your actual API Key.
    "apiKey" => "API_KEY",
    "url"    => "https://example.com",
    "apiParams" => [
        "premium"  => "true"
    ]
]);

$ch = curl_init("https://async.scraperapi.com/jobs");

curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, ["Content-Type: application/json"]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);

print_r($response);
?>
```

{% endtab %}

{% tab title="Ruby" %}

```ruby
require 'net/http'
require 'json'

uri = URI('https://async.scraperapi.com/jobs')

payload = {
  # Replace the value for api_key with your actual API Key
  "apiKey" => "API_KEY",
  "url" => "https://example.com",
  "apiParams" => {
    "premium" => "true"
  }
}

response = Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
puts response.body
```

{% endtab %}

{% tab title="Java" %}

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) {
        try {
            URL url = new URL("https://async.scraperapi.com/jobs");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            String payload = "{"
                    // Replace the value for api_key with your actual API Key.
                    + "\"apiKey\": \"API_KEY\","
                    + "\"url\": \"https://example.com\","
                    + "\"apiParams\": {"
                    + "\"premium\": \"true\""
                    + "}"
                    + "}";

            try (OutputStream os = conn.getOutputStream()) {
                os.write(payload.getBytes(StandardCharsets.UTF_8));
            }

            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    conn.getResponseCode() == 200 ? conn.getInputStream() : conn.getErrorStream()
            ))) {
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) response.append(line);
                System.out.println(response);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

{% endtab %}
{% endtabs %}

* **PROXY MODE**

{% tabs %}
{% tab title="cURL" %}

```bash
curl --proxy 'http://scraperapi.premium=true:API_KEY@proxy-server.scraperapi.com:8001' \
  -k \
  'https://example.com/'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

# Replace the value for api_key with your actual API Key.
proxies = {
    "http": "http://scraperapi.premium=true:API_KEY@proxy-server.scraperapi.com:8001",
    "https": "http://scraperapi.premium=true:API_KEY@proxy-server.scraperapi.com:8001"
}

r = requests.get('http://example.com/', proxies=proxies, verify=False)
print(r.text)
```

{% endtab %}

{% tab title="NodeJS" %}

```javascript
import axios from 'axios';

axios.get('http://example.com/', {
  method: 'GET',
  proxy: {
    host: 'proxy-server.scraperapi.com',
    port: 8001,
    auth: {
      username: 'scraperapi.premium=true',
      // Replace the value for password with your actual API Key.
      password: 'API_KEY'
    },
    protocol: 'http'
  }
})
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.log(error);
  });
```

{% endtab %}

{% tab title="PHP" %}

```php
<?php

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_PROXY, 
    // Replace the value for api_key with your actual API Key.
    "http://scraperapi.premium=true:API_KEY@proxy-server.scraperapi.com:8001"
);

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);

$response = curl_exec($ch);
curl_close($ch);
var_dump($response);
```

{% endtab %}

{% tab title="Ruby" %}

```ruby
require 'httparty'

HTTParty::Basement.default_options.update(verify: false)

response = HTTParty.get('http://example.com/', {
  http_proxyaddr: "proxy-server.scraperapi.com",
  http_proxyport: 8001,
  http_proxyuser: "scraperapi.premium=true",
  # Replace the value for http_proxypass with your actual API Key.
  http_proxypass: "API_KEY"
})

results = response.body
puts results
```

{% endtab %}

{% tab title="Java" %}

```java
import java.io.*;
import java.net.*;
import javax.net.ssl.*;
import java.security.cert.X509Certificate;

public class Main {
    public static void main(String[] args) {
        try {
            SSLContext sc = SSLContext.getInstance("TLS");
            sc.init(null, new TrustManager[]{
                new X509TrustManager() {
                    public X509Certificate[] getAcceptedIssuers() { return null; }
                    public void checkClientTrusted(X509Certificate[] certs, String authType) {}
                    public void checkServerTrusted(X509Certificate[] certs, String authType) {}
                }
            }, new java.security.SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);

            String apiKey = "API_KEY"; // Replace with your actual API Key
            Authenticator.setDefault(new Authenticator() {
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(
                        "scraperapi.premium=true", apiKey.toCharArray()
                    );
                }
            });

            System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");
            System.setProperty("https.proxyHost", "proxy-server.scraperapi.com");
            System.setProperty("https.proxyPort", "8001");

            HttpsURLConnection conn = (HttpsURLConnection) new URL("https://example.com/").openConnection();
            conn.setRequestMethod("GET");

            BufferedReader in;
            if (conn.getResponseCode() >= 400) {
                in = new BufferedReader(new InputStreamReader(conn.getErrorStream()));
            } else {
                in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            }

            StringBuilder resp = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                resp.append(line).append("\n");
            }
            in.close();

            System.out.println(resp);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

{% endtab %}
{% endtabs %}
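In proxy mode, API parameters travel in the proxy username, dot-joined onto `scraperapi` (as in the `scraperapi.premium=true` username used in all the tabs above). A sketch of building that proxy URL programmatically; the helper name is ours, and we assume additional parameters follow the same dot-joined pattern:

```python
def build_proxy_url(api_key, **params):
    """Builds a ScraperAPI proxy URL. Parameters are dot-joined into the
    username, matching the scraperapi.premium=true examples above."""
    username = '.'.join(['scraperapi'] + [f'{k}={v}' for k, v in params.items()])
    return f'http://{username}:{api_key}@proxy-server.scraperapi.com:8001'

# Replace API_KEY with your actual API Key.
proxy_url = build_proxy_url('API_KEY', premium='true')
print(proxy_url)
# http://scraperapi.premium=true:API_KEY@proxy-server.scraperapi.com:8001
```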


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/control-and-optimization/premium-residential-mobile-proxy-pools.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
