# Using the API Endpoint

Making a request to the **Sync API** is straightforward: send a request, and we take care of proxies, browsers, CAPTCHAs, and anti-bot protections in the background.

{% stepper %}
{% step %}

### Base URL

```bash
https://api.scraperapi.com
```

{% endstep %}

{% step %}

### Required query parameters

* `api_key` - your API Key
* `url` - the target URL

{% endstep %}

{% step %}

### Sample Request

{% tabs %}
{% tab title="cURL" %}

```bash
curl --request GET \
--url 'https://api.scraperapi.com?api_key=API_KEY&url=https://www.example.com'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

# Target URL
target_url = 'https://www.example.com'
# ScraperAPI API Key
api_key = 'API_KEY'

request_url = f'https://api.scraperapi.com?api_key={api_key}&url={target_url}'
response = requests.get(request_url)

print(response.text)
```

{% endtab %}

{% tab title="NodeJS" %}

```javascript
import fetch from 'node-fetch';

// Replace API_KEY with your actual API Key
const url = 'https://api.scraperapi.com/?api_key=API_KEY&url=https://example.com/';

fetch(url)
  .then(response => response.text())
  .then(body => {
    console.log(body);
  })
  .catch(error => {
    console.error(error);
  });
```

{% endtab %}

{% tab title="PHP" %}

```php
<?php

// Replace the value for api_key with your actual API Key
$url = "https://api.scraperapi.com?api_key=API_KEY&url=https://example.com/";

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);

$response = curl_exec($ch);

if (curl_errno($ch)) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    print_r($response);
}

curl_close($ch);
```

{% endtab %}

{% tab title="Ruby" %}

```ruby
require 'net/http'

# Replace the value for api_key with your actual API Key
params = {
  api_key: "API_KEY",
  url: "https://www.example.com/"
}

uri = URI('https://api.scraperapi.com/')
uri.query = URI.encode_www_form(params)

website_content = Net::HTTP.get(uri)
puts website_content
```

{% endtab %}

{% tab title="Java" %}

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Main {
    public static void main(String[] args) throws Exception {
        // Replace the value for api_key with your actual API Key
        String apiKey = "API_KEY";
        String targetUrl = "https://www.example.com/";

        String scraperApiUrl = "https://api.scraperapi.com?api_key=" + apiKey + "&url=" + targetUrl;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(scraperApiUrl))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```

{% endtab %}
{% endtabs %}
{% endstep %}
{% endstepper %}

### Optional Parameters

Sometimes a plain request is not enough: the domain may be geo-locked to a specific region, require JavaScript rendering, or employ stronger bot protection. In those cases, you can add extra ScraperAPI parameters to your requests to ensure you get the data you need. Here are some common examples:

* `render=true` - enables JavaScript Rendering with the request.
* `country_code=us` - get results from a specific region. For the complete list of supported countries, visit [this](/control-and-optimization/geotargeting/standard-geo.md) page.
* `premium=true` - instructs the API to use high-quality residential proxies.
* `session_number=123` - keep reusing the same IP across multiple requests. Sessions expire 15 minutes after the last usage.
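
As a sketch of how these parameters combine, the Python snippet below builds a request URL that pins results to the US and reuses one session. The key and session number are placeholders; substitute your own values.

```python
from urllib.parse import urlencode

# Placeholder values - replace api_key with your actual API Key.
params = {
    'api_key': 'API_KEY',
    'country_code': 'us',
    'session_number': '123',
    'url': 'https://www.example.com/',
}

# Python dicts preserve insertion order, so urlencode keeps the
# ScraperAPI parameters ahead of the target `url`.
request_url = f'https://api.scraperapi.com/?{urlencode(params)}'
print(request_url)

# response = requests.get(request_url)  # then fetch as usual
```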

Here's an example request with JavaScript Rendering enabled:

{% tabs %}
{% tab title="cURL" %}

```bash
curl --request GET \
--url 'https://api.scraperapi.com?api_key=API_KEY&render=true&url=https://www.example.com'
```

{% endtab %}

{% tab title="Python" %}

```python
import requests

target_url = 'https://www.example.com'
# Replace the value for api_key with your actual API Key.
api_key = 'API_KEY'

request_url = f'https://api.scraperapi.com?api_key={api_key}&render=true&url={target_url}'
response = requests.get(request_url)

print(response.text)
```

{% endtab %}

{% tab title="NodeJS" %}

```javascript
import fetch from 'node-fetch';

// Replace API_KEY with your actual API Key.
const url = 'https://api.scraperapi.com/?api_key=API_KEY&render=true&url=https://example.com/';

fetch(url)
  .then(response => response.text())
  .then(body => {
    console.log(body);
  })
  .catch(error => {
    console.error(error);
  });
```

{% endtab %}

{% tab title="PHP" %}

```php
<?php

// Replace the value for api_key with your actual API Key.
$url = "https://api.scraperapi.com?api_key=API_KEY&render=true&url=https://example.com/";

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);

$response = curl_exec($ch);

if (curl_errno($ch)) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    print_r($response);
}

curl_close($ch);
```

{% endtab %}

{% tab title="Ruby" %}

```ruby
require 'net/http'

params = {
  # Replace the value for api_key with your actual API Key.
  api_key: "API_KEY",
  render: "true",
  url: "https://www.example.com/"
}

uri = URI('https://api.scraperapi.com/')
uri.query = URI.encode_www_form(params)

website_content = Net::HTTP.get(uri)
print website_content
```

{% endtab %}

{% tab title="Java" %}

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Main {
    public static void main(String[] args) throws Exception {
        // Replace the value for api_key with your actual API Key.
        String apiKey = "API_KEY";
        String targetUrl = "https://www.example.com/";

        String scraperApiUrl = "https://api.scraperapi.com?api_key=" + apiKey + "&render=true" + "&url=" + targetUrl;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(scraperApiUrl))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```

{% endtab %}
{% endtabs %}

{% hint style="warning" %}
**Note:** *Make sure all ScraperAPI parameters appear **before** the `url` parameter, to avoid conflicts with parameters that may already exist in the target URL.*
{% endhint %}
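
If your target URL carries its own query string, percent-encoding it keeps its parameters from being read as ScraperAPI parameters. A minimal Python sketch (the key is a placeholder):

```python
from urllib.parse import quote

api_key = 'API_KEY'  # placeholder - use your actual API Key
# A target URL that itself contains query parameters:
target_url = 'https://www.example.com/search?q=shoes&page=2'

# Percent-encode the target so its `&` and `?` characters are not
# interpreted as separators for ScraperAPI parameters.
request_url = (
    'https://api.scraperapi.com/'
    f'?api_key={api_key}&render=true&url={quote(target_url, safe="")}'
)
print(request_url)
```

With `safe=""`, every reserved character in the target URL is escaped, so only the two `&` separators belonging to ScraperAPI remain in the outer query string.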

### Related Sections

[Supported Geolocations.](/control-and-optimization/geotargeting.md)

[Full list of supported parameters.](/control-and-optimization/supported-parameters.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.scraperapi.com/synchronous-apis/using-the-api-endpoint.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
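
For example, a question about geotargeting could be assembled like this (Python sketch; the GET itself is left commented out):

```python
from urllib.parse import quote

question = 'Which countries does the country_code parameter support?'
docs_url = (
    'https://docs.scraperapi.com/synchronous-apis/using-the-api-endpoint.md'
    f'?ask={quote(question)}'
)
print(docs_url)

# The answer comes back as the response body:
# response = requests.get(docs_url)
# print(response.text)
```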
