Learn how to make requests using ScraperAPI. Sign up for a free trial to get 5,000 free API credits.
Using ScraperAPI is easy. Just send the URL you would like to scrape to the API along with your API key and the API will return the HTML response from the URL you want to scrape.
ScraperAPI uses API keys to authenticate requests. To use the API you need to sign up for an account and include your unique API key in every request.
You can use the API to scrape web pages, API endpoints, images, documents, PDFs, or other files just as you would any other URL. Note: there is a 2MB limit per request.
There are five ways in which you can send GET requests to ScraperAPI:
Via our Async Scraper service http://async.scraperapi.com
Via our API endpoint http://api.scraperapi.com?
Via one of our SDKs (only available for some programming languages)
Via our proxy port http://scraperapi:APIKEY@proxy-server.scraperapi.com:8001
Via our Structured Data service https://api.scraperapi.com/structured/
Choose whichever option best suits your scraping requirements.
Important note: regardless of how you invoke the service, we highly recommend setting a 70-second timeout in your application to get the best possible success rates, especially on hard-to-scrape domains.
ScraperAPI exposes a single API endpoint for you to send GET requests. Simply send a GET request to http://api.scraperapi.com with two query string parameters and the API will return the HTML response for that URL:
api_key, which contains your API key, and
url, which contains the URL you would like to scrape.
You should format your requests to the API endpoint as follows:
To enable other API functionality when sending a request to the API endpoint simply add the appropriate query parameters to the end of the ScraperAPI URL.
For example, if you want to enable Javascript rendering with a request, then add render=true
to the request:
To use two or more parameters, simply separate them with the “&” sign.
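As a minimal sketch of building such a request in Python (the API key and target URL below are placeholders, not real values), the parameters can be assembled with the standard library and then fetched with any HTTP client:

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder: replace with your own key

# api_key and url are required; additional features such as JavaScript
# rendering are enabled by appending their query parameters with "&".
params = {
    "api_key": API_KEY,
    "url": "https://example.com/",  # hypothetical target page
    "render": "true",
}
request_url = "http://api.scraperapi.com/?" + urlencode(params)
print(request_url)

# With the requests library, remember the recommended 70-second timeout:
# html = requests.get(request_url, timeout=70).text
```

Note that `urlencode` takes care of escaping the target URL, so it can safely contain its own query string.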
To simplify implementation for users with existing proxy pools, we offer a proxy front-end to the API. The proxy will take your requests and pass them through to the API which will take care of proxy rotation, captchas, and retries.
The proxy mode is a light front-end for the API and has all the same functionality and performance as sending requests to the API endpoint.
The username for the proxy is scraperapi and the password is your API key.
Note: So that we can properly direct your requests through the API, your code must be configured to not verify SSL certificates.
To enable extra functionality whilst using the API in proxy mode, you can pass parameters to the API by adding them to username, separated by periods.
For example, if you want to enable Javascript rendering with a request, the username would be scraperapi.render=true
Multiple parameters can be included by separating them with periods; for example:
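A minimal sketch of building the proxy URL in Python (the API key is a placeholder; the target URL is hypothetical). Parameters are appended to the scraperapi username, separated by periods:

```python
API_KEY = "YOUR_API_KEY"  # placeholder: replace with your own key

# Each extra parameter is appended to the username with a period.
username = "scraperapi.render=true.country_code=us"
proxy_url = "http://" + username + ":" + API_KEY + "@proxy-server.scraperapi.com:8001"
print(proxy_url)

# With the requests library, route traffic through the proxy and either
# disable SSL verification or trust the ScraperAPI CA certificate:
# requests.get("https://example.com/",
#              proxies={"http": proxy_url, "https": proxy_url},
#              verify=False, timeout=70)
```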
If you would like to send requests to our Proxy API with SSL verification, you can manually trust our certificate by following these steps:
Download Our Proxy CA Certificate:
Please follow this link to download our proxy CA certificate.
Manual trust:
Once you've downloaded the certificate, manually trust it in your scraping tool or library settings. This step may vary depending on the tool or library you're using, but typically involves importing the certificate into your trusted root store or by configuring SSL/TLS settings. Depending on your operating system, follow the instructions below to install the ScraperAPI CA Certificate:
1. Press the Win key + R hotkey and input mmc in Run to open the Microsoft Management Console window.
2. Click File and select Add/Remove Snap-ins.
3. In the opened window select Certificates and press the Add > button.
4. In the Certificates Snap-in window select Computer account > Local Account, and press the Finish button to close the window.
5. Press the OK button in the Add or Remove Snap-in window.
6. Back in the Microsoft Management Console window, select Certificates under Console Root and right-click Trusted Root Certification Authorities.
7. From the context menu select All Tasks > Import to open the Certificate Import Wizard window, from which you can add the ScraperAPI certificate.
More details can be found here.
1. Open the Keychain Access window (Launchpad > Other > Keychain Access).
2. Select the System tab under Keychains, then drag and drop the downloaded certificate file (or select File > Import Items... and navigate to the file).
3. Enter the administrator password to modify the keychain.
4. Double-click the ScraperAPI CA certificate entry, expand Trust, and next to When using this certificate: select Always Trust.
5. Close the window and enter the administrator password again to update the settings.
Install the downloaded ScraperAPI proxyca.pem file:
Update stored Certificate Authority files:
If you have any questions or need further assistance regarding web scraping or certificate management, don't hesitate to reach out to support team. We're here to help you every step of the way!
To ensure a higher level of successful requests when using our scraper, we’ve built a new product, Async Scraper. Rather than making a request to our endpoint and waiting for the response, this endpoint submits a scraping job, from which you can later collect the data using our status endpoint.
Scraping websites can be a difficult process; it takes numerous steps and significant effort to get through some sites’ protection which sometimes proves to be difficult with the timeout constraints of synchronous APIs. The Async Scraper will work on your requested URLs until we have achieved a 100% success rate (when applicable), returning the data to you.
Async Scraping is the recommended way to scrape pages when success rate on difficult sites is more important to you than response time (e.g. you need a set of data periodically).
Submit an async job
A simple example showing how to submit a job for scraping and receive a status endpoint URL through which you can poll for the status (and later the result) of your scraping job:
You can also send POST requests to the Async scraper by using the parameter “method”: “POST”. Here is an example on how to make a POST request to the Async scraper:
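As a minimal sketch of the submission payload (the API key, target URL, and the /jobs path are placeholders/assumptions based on the async.scraperapi.com service described above):

```python
import json

# Job submission payload: apiKey and url are required. Adding
# "method": "POST" makes the Async Scraper issue a POST to the target.
payload = {
    "apiKey": "YOUR_API_KEY",            # placeholder
    "url": "https://example.com/form",   # hypothetical target
    "method": "POST",
}
body = json.dumps(payload)
print(body)

# Assumed endpoint path for single jobs:
# requests.post("https://async.scraperapi.com/jobs", json=payload)
# The JSON response contains a statusUrl to poll for status and results.
```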
Response:
Note the statusUrl field in the response. That is a personal URL to retrieve the status and results of your scraping job. Invoking that endpoint provides you with the status first:
Response:
Once your job is finished, the response will change and will contain the results of your scraping job:
Please note that the response for an Async job is stored for up to 72 hours (24hrs guaranteed) or until you retrieve the results, whichever comes first. If you do not get the response in due time, it will be deleted from our side and you will have to send another request for the same job.
Using a status URL is a great way to test the API or get started quickly, but some customer environments may require more robust solutions, so we implemented callbacks. Currently only webhook callbacks are supported, but we are planning to introduce more over time (e.g. direct database callbacks, AWS S3, etc.).
An example of using a webhook callback:
Using a callback you don’t need to use the status URL (although you still can) to fetch the status and results of the job. Once the job is finished the provided webhook URL will be invoked by our system with the same content as the status URL provides.
Just replace the https://yourcompany.com/scraperapi URL with your preferred endpoint. You can even add basic auth to the URL in the following format: https://user:pass@yourcompany.com/scraperapi
By default, we'll call the webhook URL you provide for successful requests. If you'd like to receive data on failed attempts too, include the expectsUnsuccessReport: true parameter in your request structure.
An example of using callbacks that report on the failed attempts as well:
Response:
Note: The system will try to invoke the webhook URL 3 times, then it cancels the job. So please make sure that the webhook URL is available through the public internet and will be capable of handling the traffic that you need.
Hint: Webhook.site is a free online service to test webhooks without requiring you to build a complex infrastructure.
You can use the usual API parameters just the same way you’d use them with our synchronous API. These parameters should go into an apiParams object inside the POST data, e.g.:
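A minimal sketch of such a payload (the key, target URL, and chosen parameters are placeholders for illustration):

```python
# Synchronous API parameters go inside an "apiParams" object in the POST body.
payload = {
    "apiKey": "YOUR_API_KEY",        # placeholder
    "url": "https://example.com/",   # hypothetical target
    "apiParams": {
        "render": "true",
        "country_code": "us",
    },
}
print(payload["apiParams"])
```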
We have created a separate endpoint that accepts an array of URLs instead of just one to initiate scraping of multiple URLs at the same time: https://async.scraperapi.com/batchjobs. The API is almost the same as the single endpoint, but we expect an array of strings in the urls field instead of a string in url.
As a response you’ll also get an array of the same response that you get using our single job endpoint:
We recommend sending a maximum of 50,000 URLs in one batch job.
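A minimal sketch of a batch payload (key and target URLs are placeholders), showing the urls array that replaces the single url field:

```python
# Batch jobs take an array of strings in "urls" instead of a single "url".
payload = {
    "apiKey": "YOUR_API_KEY",  # placeholder
    "urls": [
        "https://example.com/page1",  # hypothetical targets
        "https://example.com/page2",
    ],
}
assert len(payload["urls"]) <= 50000  # recommended maximum per batch job
# requests.post("https://async.scraperapi.com/batchjobs", json=payload)
```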
The responses returned by the Async API for binary requests require you to decode the data, as they are encoded using Base64 encoding. This allows the binary data to be sent as a text string, which can then be decoded back into its original form when you want to use it.
Example request:
Decode response:
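The decode step itself is a one-liner with the standard library; the sketch below simulates an encoded response field (the PDF bytes are made up for illustration):

```python
import base64

# The Async API returns binary bodies (images, PDFs, ...) Base64-encoded
# as a text string; decode it to recover the original bytes.
encoded_body = base64.b64encode(b"%PDF-1.7 example bytes").decode("ascii")  # simulated response field
raw_bytes = base64.b64decode(encoded_body)
print(raw_bytes[:8])
```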
To make it even easier to get structured content, we created custom endpoints within our API that provide a shorthand method of retrieving content from supported domains. This method is ideal for users that need to receive structured data in JSON or CSV.
Amazon Endpoints:
Ebay Endpoints:
Google Endpoints:
Redfin Endpoints:
Walmart Endpoints:
This endpoint will retrieve product data from an Amazon product page and transform it into usable JSON. It also provides links to all variants of the product (if any).
API_KEY
(required)
User's normal API Key
ASIN
(required)
Amazon Standard Identification Number. Please note that ASINs are market-specific (TLD). You can usually find the ASIN in the URL of an Amazon product (example: B07FTKQ97Q)
TLD
Amazon market to be scraped.
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
co.jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
com.sg (amazon.com.sg)
com.mx (amazon.com.mx)
ae (amazon.ae)
com.br (amazon.com.br)
nl (amazon.nl)
com.au (amazon.com.au)
com.tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both TLD and COUNTRY parameters must be specified.
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
ZIP Code Targeting
To find out more about ZIP Code targeting, please follow this link
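A minimal sketch of an Amazon Product Page request (the /structured/amazon/product path is assumed from the structured service base URL; key and values are placeholders or examples from the parameter list above):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the Amazon Product Page API.
params = {
    "api_key": "YOUR_API_KEY",   # placeholder
    "asin": "B07FTKQ97Q",        # example ASIN from the docs
    "tld": "com",                # amazon.com
    "country": "us",             # geotarget the request
    "output_format": "json",     # or "csv"
}
url = "https://api.scraperapi.com/structured/amazon/product?" + urlencode(params)
print(url)
```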
This endpoint will retrieve products for a specified search term from Amazon search page and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Amazon Search query string
TLD
Amazon market to be scraped.
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
co.jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
com.sg (amazon.com.sg)
com.mx (amazon.com.mx)
ae (amazon.ae)
com.br (amazon.com.br)
nl (amazon.nl)
com.au (amazon.com.au)
com.tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an Amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both TLD and COUNTRY parameters must be specified.
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
page=N
Paginating the result. For example: 1
ZIP Code Targeting
To find out more about ZIP Code targeting, please follow this link
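A minimal sketch of an Amazon Search request (the /structured/amazon/search path is assumed; key and query are placeholders):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the Amazon Search API.
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "query": "wireless earbuds",   # hypothetical search term
    "tld": "co.uk",                # amazon.co.uk
    "country": "uk",               # geotarget the request
    "page": 1,                     # pagination
}
url = "https://api.scraperapi.com/structured/amazon/search?" + urlencode(params)
print(url)
```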
This endpoint will retrieve offers for a specified product from an Amazon offers page and transform it into usable JSON.
API_KEY
(required)
User account’s normal API key.
ASIN
(required)
Amazon Standard Identification Number. Please note that ASINs are market-specific (TLD). You can usually find the ASIN in the URL of an Amazon product (example: B07FTKQ97Q)
TLD
Amazon market to be scraped.
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
co.jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
com.sg (amazon.com.sg)
com.mx (amazon.com.mx)
ae (amazon.ae)
com.br (amazon.com.br)
nl (amazon.nl)
com.au (amazon.com.au)
com.tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both TLD and COUNTRY parameters must be specified.
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
F_NEW
Boolean parameter with a possible value of true
or false
indicating the condition of the listed items.
F_USED_GOOD
Boolean parameter with a possible value of true
or false
indicating the condition of the listed items.
F_USED_LIKE_NEW
Boolean parameter with a possible value of true
or false
indicating the condition of the listed items.
F_USED_VERY_GOOD
Boolean parameter with a possible value of true
or false
indicating the condition of the listed items.
F_USED_ACCEPTABLE
Boolean parameter with a possible value of true
or false
indicating the condition of the listed items.
ZIP Code Targeting
To find out more about ZIP Code targeting, please follow this link
Amazon review pages currently require a login to access. Since ScraperAPI does not support scraping behind a login, this endpoint is not available.
Please use the Amazon Product Page API instead to scrape all publicly available reviews for an Amazon product.
This endpoint will retrieve reviews for a specified product from an Amazon reviews page and transform it into usable JSON.
API_KEY
(required)
User account’s normal API key.
ASIN
(required)
Amazon Standard Identification Number. Please note that ASINs are market-specific (TLD). You can usually find the ASIN in the URL of an Amazon product (example: B07FTKQ97Q)
TLD
Amazon market to be scraped.
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
co.jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
com.sg (amazon.com.sg)
com.mx (amazon.com.mx)
ae (amazon.ae)
com.br (amazon.com.br)
nl (amazon.nl)
com.au (amazon.com.au)
com.tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both TLD and COUNTRY parameters must be specified.
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
REF
A reference string used by Amazon. For example: olp_f_usedAcceptable, aod_dpdsk_used_1 (used offers), aod_dpdsk_new_1 (new offers), aod_dpdsk_new_0_cart (prob. opening a new cart), dp_start-bbf_1_glance (adding to cart), aod-swatch-id-usedAcceptable
FILTER_BY_STAR
Filter the results by stars.
All values: all_stars
five_star
four_star
three_star
two_star
one_star
All positive: positive
All critical: critical
REVIEW_TYPE
Filter the results by type. All values: all
reviewers: all_reviews
Verified purchase only: avp_only_reviews
PAGE_NUMBER
Paginating the result. For example: 1
ZIP Code Targeting
To find out more about ZIP Code targeting, please follow this link
This endpoint will retrieve product data from an eBay product page (/itm/) and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
PRODUCT_ID
(required)
eBay product ID (12 digits). Example: 166619046796
TLD
Top-level eBay domain to scrape. This is an optional argument and defaults to “com” (ebay.com). Valid values include:
com (ebay.com)
co.uk (ebay.co.uk)
com.au (ebay.com.au)
de (ebay.de)
ca (ebay.ca)
fr (ebay.fr)
it (ebay.it)
es (ebay.es)
at (ebay.at)
ch (ebay.ch)
com.sg (ebay.com.sg)
com.my (ebay.com.my)
ph (ebay.ph)
ie (ebay.ie)
pl (ebay.pl)
nl (ebay.nl)
COUNTRY
country_code influences the language and the currency of the page. The TLD should be set to ‘com’ if you are using languages that are not used by the TLDs listed above.
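A minimal sketch of an eBay Product request (the /structured/ebay/product path is assumed; the key is a placeholder and the product ID is the example from the parameter list):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the eBay Product API.
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "product_id": "166619046796",  # example 12-digit ID from the docs
    "tld": "com",                  # ebay.com
    "country_code": "us",          # influences language and currency
}
url = "https://api.scraperapi.com/structured/ebay/product?" + urlencode(params)
print(url)
```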
This endpoint will retrieve products for a specified search term from an eBay search page and transform them into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Search query. Example: iPhone
TLD
Top-level eBay domain to scrape. This is an optional argument and defaults to “com” (ebay.com). Valid values include:
com (ebay.com)
co.uk (ebay.co.uk)
com.au (ebay.com.au)
de (ebay.de)
ca (ebay.ca)
fr (ebay.fr)
it (ebay.it)
es (ebay.es)
at (ebay.at)
ch (ebay.ch)
com.sg (ebay.com.sg)
com.my (ebay.com.my)
ph (ebay.ph)
ie (ebay.ie)
pl (ebay.pl)
nl (ebay.nl)
COUNTRY
country_code influences the language and the currency of the page. The TLD should be set to ‘com’ if you are using languages that are not used by the TLDs listed above.
SORT_BY
Instructs the API to sort the results. Supported values:
ending_soonest newly_listed price_lowest price_highest distance_nearest best_match
PAGE
Page number
ITEMS_PER_PAGE
Number of items returned. Supported values: 60, 120, 240
SELLER_ID
The ‘name’ of the seller. This is a textual id, for example ‘gadget-solutions’
Filter options
CONDITION
Condition of the products. Supported values:
'new', 'used', 'open_box', 'refurbished', 'for_parts', 'not_working'
Note: These categories don’t have the same name for all products. For example, open_box is sometimes called ‘without tag’ on the eBay results page. Please ensure that you always use the values from the supported list when sending requests to the API.
Note 2: Multiple options can be used here. So if you want to search for used and open_box products, you can use the query param ‘used,open_box’ in URL-encoded form, like &condition=used%2Copen_box
BUYING_FORMAT
Buying format. Supported values:
buy_it_now
auction
accepts_offers
Note: You can specify multiple options just as with the condition parameter.
SHOW_ONLY
Additional filters for search. Supported values:
returns_accepted, authorized_seller, completed_items, sold_items, sale_items, listed_as_lots, search_in_description, benefits_charity, authenticity_guarantee
Note: Multiple options can be used here. So if you want to search for a product where returns are accepted and it’s from an authorized seller, use the query param ‘returns_accepted,authorized_seller’ in URL-encoded form, like &show_only=returns_accepted%2Cauthorized_seller
This endpoint will retrieve product data from a Google search results page and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Query keywords that a user wants to search for
COUNTRY_CODE
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.). Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY_CODE parameters must be specified.
TLD
Country of Google domain to scrape. This is an optional argument and defaults to “com” (google.com). Valid values include: com (google.com) co.uk (google.co.uk) ca (google.ca) de (google.de) es (google.es) fr (google.fr) it (google.it) co.jp (google.co.jp) in (google.in) cn (google.cn) com.sg (google.com.sg) com.mx (google.com.mx) ae (google.ae) com.br (google.com.br) nl (google.nl) com.au (google.com.au) com.tr (google.com.tr) sa (google.sa) se (google.se) pl (google.pl)
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
UULE
Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ. You can find an online UULE generator here.
NUM
Number of results
HL
Host Language. For example: DE
GL
Boosts matches whose country of origin matches the parameter value. For example: DE
IE
Character encoding that the engine uses to interpret the query string. For example: UTF8
OE
Character encoding used for the results. For example: UTF8
START
Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 10th search result (meaning it starts with page 2 of results when "num" is 10).
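A minimal sketch of a Google Search request combining these parameters (the /structured/google/search path is assumed; key and query are placeholders):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the Google Search API.
params = {
    "api_key": "YOUR_API_KEY",  # placeholder
    "query": "web scraping",    # hypothetical query
    "tld": "de",                # google.de
    "country_code": "de",       # geotarget the request
    "num": 10,                  # number of results
    "hl": "DE",                 # host language
    "start": 10,                # begin at the 10th result (page 2 when num=10)
}
url = "https://api.scraperapi.com/structured/google/search?" + urlencode(params)
print(url)
```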
This endpoint will retrieve news data from a Google News results page and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Query keywords that a user wants to search for.
COUNTRY_CODE
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.). Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY_CODE parameters must be specified.
TLD
Country of Google domain to scrape. This is an optional argument and defaults to “com” (google.com). Valid values include: com (google.com) co.uk (google.co.uk) ca (google.ca) de (google.de) es (google.es) fr (google.fr) it (google.it) co.jp (google.co.jp) in (google.in) cn (google.cn) com.sg (google.com.sg) com.mx (google.com.mx) ae (google.ae) com.br (google.com.br) nl (google.nl) com.au (google.com.au) com.tr (google.com.tr) sa (google.sa) se (google.se) pl (google.pl)
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
UULE
Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ. You can find an online UULE generator here.
NUM
Number of results
HL
Host Language. For example: DE
GL
Guest Language: Boosts matches whose country of origin matches the parameter value. For example: DE
IE
Query Encoding: Character encoding that the engine uses to interpret the query string. For example: UTF8
OE
Character encoding used for the results. For example: UTF8
START
Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 10th search result.
This endpoint will retrieve jobs data from a Google Jobs results page and transform it into usable JSON.
API_KEY
(required)
User account’s normal API key.
QUERY
(required)
Search term the user is looking to scrape.
TLD
Country of Google domain to scrape. This is an optional argument and defaults to “com” (google.com). Valid values include: com (google.com) co.uk (google.co.uk) ca (google.ca) de (google.de) es (google.es) fr (google.fr) it (google.it) co.jp (google.co.jp) in (google.in) cn (google.cn) com.sg (google.com.sg) com.mx (google.com.mx) ae (google.ae) com.br (google.com.br) nl (google.nl) com.au (google.com.au) com.tr (google.com.tr) sa (google.sa) se (google.se) pl (google.pl)
COUNTRY_CODE
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.). Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY_CODE parameters must be specified.
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
UULE
Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ. You can find an online UULE generator here.
NUM
Number of results
HL
Host Language. For example: DE
GL
Guest Language: Boosts matches whose country of origin matches the parameter value. For example: DE
IE
Query Encoding: Character encoding that the engine uses to interpret the query string. For example: UTF8
OE
Result Encoding: Character encoding used for the results. For example: UTF8
START
Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 10th search result.
This endpoint will retrieve shopping data from a Google Shopping results page and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Query keywords that a user wants to search for
COUNTRY_CODE
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.). Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY_CODE parameters must be specified.
TLD
Country of Google domain to scrape. This is an optional argument and defaults to “com” (google.com). Valid values include: com (google.com) co.uk (google.co.uk) ca (google.ca) de (google.de) es (google.es) fr (google.fr) it (google.it) co.jp (google.co.jp) in (google.in) cn (google.cn) com.sg (google.com.sg) com.mx (google.com.mx) ae (google.ae) com.br (google.com.br) nl (google.nl) com.au (google.com.au) com.tr (google.com.tr) sa (google.sa) se (google.se) pl (google.pl)
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
UULE
Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ. You can find an online UULE generator here.
NUM
Number of results
HL
Host Language. For example: DE
GL
Guest Language: Boosts matches whose country of origin matches the parameter value. For example: DE
IE
Query Encoding: Character encoding that the engine uses to interpret the query string. For example: UTF8
OE
Result Encoding: Character encoding used for the results. For example: UTF8
START
Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 10th search result.
This endpoint will retrieve data from a Google Maps search results page and transform it into usable JSON.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Query string for example: vegan restaurant
LATITUDE
(required)
Latitude value, for example: 21.029738077531196
LONGITUDE
(required)
Longitude value, for example: 105.85222341863856
Note: To fetch additional pages, you need to use the next_page_url, adding your unique api_key parameter at the end.
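A minimal sketch of a Google Maps Search request (the /structured/google/mapsearch path is an assumption; the key is a placeholder and the coordinates are the examples from the parameter list):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the Google Maps Search API.
params = {
    "api_key": "YOUR_API_KEY",          # placeholder
    "query": "vegan restaurant",        # example query from the docs
    "latitude": "21.029738077531196",   # example coordinates from the docs
    "longitude": "105.85222341863856",
}
url = "https://api.scraperapi.com/structured/google/mapsearch?" + urlencode(params)
print(url)
```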
This endpoint will retrieve listing information from a single 'For Rent' property listing page and transform it into usable JSON.
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin page. The URL has to be the URL of a property for rent.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
raw
This is a boolean param: true or false. If the raw parameter is set to true, the raw data will be extracted from the page without further parsing.
Important: The structure of the data in raw mode cannot be guaranteed; it’s a tradeoff: you get a lot more information back, but the structure of the response may change if Redfin modifies their page layout.
This endpoint will retrieve listing information from a single 'For Sale' property listing page and transform it into usable JSON.
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin page. The URL has to be the URL of a property for sale.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
raw
This is a boolean param: true or false. If the raw parameter is set to true, the raw data will be extracted from the page without further parsing.
Important: The structure of the data in raw mode cannot be guaranteed; it’s a tradeoff: you get a lot more information back, but the structure of the response may change if Redfin modifies their page layout.
This endpoint will return the search results from a listing search page and transform it into usable JSON.
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin search page. The URL has to be a Redfin Search page.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
This endpoint will retrieve product list data from Walmart as a result of a search.
API_KEY
(required)
User's normal API Key
QUERY
(required)
Example: matchbox+cars
PAGE
Pagination. Example: 2
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
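A minimal sketch of a Walmart Search request (the /structured/walmart/search path is assumed; the key is a placeholder and the query is the example from the parameter list):

```python
from urllib.parse import urlencode

# Assumed endpoint path for the Walmart Search API.
params = {
    "api_key": "YOUR_API_KEY",  # placeholder
    "query": "matchbox cars",   # example query from the docs
    "page": 2,                  # pagination
    "output_format": "csv",     # or "json" (the default)
}
url = "https://api.scraperapi.com/structured/walmart/search?" + urlencode(params)
print(url)
```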
This endpoint will retrieve Walmart product list for a specified product category.
API_KEY
(required)
User's normal API Key
CATEGORY
(required)
Walmart category ID as identifier. You can find the Walmart category ID in the URL. Example: 3944_1089430_37807 (/browse/3944_1089430_37807)
PAGE
Pagination. Example: 3
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
This endpoint will retrieve Walmart product details for one product.
API_KEY
(required)
User's normal API Key
PRODUCT_ID
(required)
Walmart Product id. You can find the product ID in the URL. Example: 5253396052 (/ip/5253396052)
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is default if parameter is not added. Options:
csv
json
This endpoint will retrieve reviews for a specified product from a Walmart reviews page and transform it into usable JSON.
api_key
Your API Key.
product_id
Walmart Product id. Example: 5253396052
sort
Sort by option. Valid values are:
relevancy
helpful
submission-desc
submission-asc
rating-desc
rating-asc
page
Specify the page number
TLD
Top-level domain. Valid values are com and ca.
To make it even easier to get structured content, we created custom endpoints within our API that provide a shorthand method of retrieving content from supported domains. This method is ideal for users that need to receive data in JSON.
Amazon Endpoints:
Ebay Endpoints:
Google Endpoints:
Redfin Endpoints:
This endpoint will retrieve product data from an Amazon product page and transform it into usable JSON.
Single ASIN request:
Multiple ASIN request:
Parameters available for this method:
ApiKey
User account’s normal API key.
ASIN/ASINS
Amazon product ASIN(s).
TLD
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
sg (amazon.com.sg)
mx (amazon.com.mx)
ae (amazon.ae)
br (amazon.com.br)
nl (amazon.nl)
au (amazon.com.au)
tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both TLD and COUNTRY parameters must be specified.
Single ASIN Request:
Multiple ASIN Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve products for a specified search term from Amazon search page and transform it into usable JSON.
Single Query request:
Multiple Query request:
Parameters available for this method:
ApiKey
User account’s normal API key.
QUERY
Search term.
TLD
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
sg (amazon.com.sg)
mx (amazon.com.mx)
ae (amazon.ae)
br (amazon.com.br)
nl (amazon.nl)
au (amazon.com.au)
tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an Amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both the TLD and COUNTRY parameters must be specified.
REFERENCE
A reference string used by Amazon. For example: olp_f_usedAcceptable
SORTING
Changes the sort order of the results.
For example: price-desc-rank
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
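A search-term request follows the same pattern. The query and sorting values below are illustrative, and the `/structured/amazon/search` path is assumed from the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder search request against the structured Amazon search endpoint.
params = {
    "api_key": "YOUR_API_KEY",
    "query": "wireless headphones",   # illustrative search term
    "tld": "co.uk",                   # search amazon.co.uk
    "sorting": "price-desc-rank",     # sort by price, descending
}
url = "https://api.scraperapi.com/structured/amazon/search?" + urlencode(params)
print(url)
```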
This endpoint will retrieve offers for a specified product from an Amazon offers page and transform it into usable JSON.
Single ASIN Request:
Multiple ASIN Request:
Multiple parameters can be used with this method:
APIKEY
User account’s normal API key.
ASIN
Amazon product ASIN.
TLD
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
sg (amazon.com.sg)
mx (amazon.com.mx)
ae (amazon.ae)
br (amazon.com.br)
nl (amazon.nl)
au (amazon.com.au)
tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an Amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both the TLD and COUNTRY parameters must be specified.
F_NEW
Boolean parameter (true or false). Restricts the offers to new-condition items.
F_USEDGOOD
Boolean parameter (true or false). Restricts the offers to used items in good condition.
F_USEDLIKENEW
Boolean parameter (true or false). Restricts the offers to used items in like-new condition.
F_USEDVERYGOOD
Boolean parameter (true or false). Restricts the offers to used items in very good condition.
F_USEDACCEPTABLE
Boolean parameter (true or false). Restricts the offers to used items in acceptable condition.
Single ASIN Request:
Multiple ASIN Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
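As a sketch, the boolean condition filters are passed as ordinary query parameters. The lowercase parameter names and the `/structured/amazon/offers` path are assumed from the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder offers request restricted to new and like-new listings.
params = {
    "api_key": "YOUR_API_KEY",
    "asin": "B07FTKQ97Q",      # illustrative ASIN
    "f_new": "true",           # include new-condition offers
    "f_usedlikenew": "true",   # include used-like-new offers
}
url = "https://api.scraperapi.com/structured/amazon/offers?" + urlencode(params)
print(url)
```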
Amazon review pages currently require a login to access. Since ScraperAPI does not support scraping behind a login, this endpoint is not available.
Please use the Amazon Product Page API instead to scrape all publicly available reviews for an Amazon product.
This endpoint will retrieve reviews for a specified product from an Amazon reviews page and transform it into usable JSON.
Single ASIN Request:
Multiple ASIN Request:
Multiple Parameters can be used with this method:
APIKEY
User account’s normal API key.
ASIN
Amazon product ASIN.
TLD
Valid values include:
com (amazon.com)
co.uk (amazon.co.uk)
ca (amazon.ca)
de (amazon.de)
es (amazon.es)
fr (amazon.fr)
it (amazon.it)
jp (amazon.co.jp)
in (amazon.in)
cn (amazon.cn)
sg (amazon.com.sg)
mx (amazon.com.mx)
ae (amazon.ae)
br (amazon.com.br)
nl (amazon.nl)
au (amazon.com.au)
tr (amazon.com.tr)
sa (amazon.sa)
se (amazon.se)
pl (amazon.pl)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where an Amazon domain needs to be scraped from another country (e.g. scraping amazon.com from Canada to get Canadian shipping information), both the TLD and COUNTRY parameters must be specified.
REFERENCE
A reference string used by Amazon. For example: olp_f_usedAcceptable
FILTERBYSTAR
Filter the results by stars. For example: all_stars
REVIEWERTYPE
Filter the results by type. For example: all_reviews
PAGENUMBER
Paginating the result. For example: 1
Single ASIN Request:
Multiple ASIN Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve product data from an eBay product page (/itm/) and transform it into usable JSON.
Single product request:
Multiple products Request:
API_KEY
(required)
User's normal API Key
PRODUCTID
(required)
eBay product ID (12 digits). Example: 166619046796
TLD
Top-level eBay domain to scrape. This is an optional argument and defaults to “com” (ebay.com). Valid values include:
com (ebay.com)
co.uk (ebay.co.uk)
com.au (ebay.com.au)
de (ebay.de)
ca (ebay.ca)
fr (ebay.fr)
it (ebay.it)
es (ebay.es)
at (ebay.at)
ch (ebay.ch)
com.sg (ebay.com.sg)
com.my (ebay.com.my)
ph (ebay.ph)
ie (ebay.ie)
pl (ebay.pl)
nl (ebay.nl)
COUNTRY
Influences the language and the currency of the page. The TLD should be set to ‘com’ if you are using languages that are not used by the TLDs listed above.
Single Product Request:
Multiple Products Request:
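A minimal sketch of a single-product request, using the 12-digit example ID from the parameter list above. The `/structured/ebay/product` path and the `product_id` parameter name are assumptions based on the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder request to the structured eBay product endpoint.
params = {
    "api_key": "YOUR_API_KEY",
    "product_id": "166619046796",  # 12-digit eBay product ID (example value)
    "tld": "com",                  # ebay.com
}
url = "https://api.scraperapi.com/structured/ebay/product?" + urlencode(params)
print(url)
```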
This endpoint will retrieve products for a specified search term from an eBay search page and transform them into usable JSON.
Single Query request:
Multiple Query request:
API_KEY
(required)
User's normal API Key
QUERY
(required)
Search query. Example: iPhone
TLD
Top-level eBay domain to scrape. This is an optional argument and defaults to “com” (ebay.com). Valid values include:
com (ebay.com)
co.uk (ebay.co.uk)
com.au (ebay.com.au)
de (ebay.de)
ca (ebay.ca)
fr (ebay.fr)
it (ebay.it)
es (ebay.es)
at (ebay.at)
ch (ebay.ch)
com.sg (ebay.com.sg)
com.my (ebay.com.my)
ph (ebay.ph)
ie (ebay.ie)
pl (ebay.pl)
nl (ebay.nl)
COUNTRY
Influences the language and the currency of the page. The TLD should be set to ‘com’ if you are using languages that are not used by the TLDs listed above.
SORT_BY
Instructs the API to sort the results. Supported values:
ending_soonest
newly_listed
price_lowest
price_highest
distance_nearest
best_match
PAGE
Page number
ITEMS_PER_PAGE
Number of items returned. Supported values: 60, 120, 240
SELLER_ID
The ‘name’ of the seller. This is a textual id, for example ‘gadget-solutions’
Filter options
CONDITION
Condition of the products. Supported values:
new, used, open_box, refurbished, for_parts, not_working
Note: these categories don’t have the same name for all products. For example, open_box is sometimes called ‘without tag’ on the eBay results page. Please ensure that you always use the values from the supported list when sending requests to the API.
Note 2: multiple options can be combined. So if you want to search for new and open_box products, pass ‘new,open_box’ in URL-encoded form:
&condition=new%2Copen_box
BUYING_FORMAT
Buying format. Supported values:
buy_it_now
auction
accepts_offers
Note: You can specify multiple options just as with the condition
parameter.
SHOW_ONLY
Additional filters for search. Supported values:
returns_accepted
authorized_seller
completed_items
sold_items
sale_items
listed_as_lots
search_in_description
benefits_charity
authenticity_guarantee
Note: multiple options can be combined. So if you want to search for products where returns are accepted and the seller is authorized, pass ‘returns_accepted,authorized_seller’ in URL-encoded form: &show_only=returns_accepted%2Cauthorized_seller
Single Query Request:
Multiple Query Request:
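The comma-separated multi-value filters described above can be built without hand-encoding anything, since standard query-string encoders percent-encode the comma as %2C automatically. A sketch (endpoint path `/structured/ebay/search` assumed from the request examples for this method):

```python
from urllib.parse import urlencode

# Placeholder eBay search with comma-separated multi-value filters.
# urlencode() percent-encodes the commas as %2C for us.
params = {
    "api_key": "YOUR_API_KEY",
    "query": "iPhone",
    "condition": "new,open_box",                        # two conditions at once
    "show_only": "returns_accepted,authorized_seller",  # two search filters
    "items_per_page": 60,
}
url = "https://api.scraperapi.com/structured/ebay/search?" + urlencode(params)
print(url)
```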
This endpoint will retrieve search data from a Google search result page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
Multiple Parameters can be used with this method:
APIKEY
User account’s normal API key.
QUERY
Google Search Query.
TLD
Set this value to scrape the respective Google domain. Valid values include TLDs for those countries or regions where Google has a search engine:
ae: (google.ae)
ca: (google.ca)
cn: (google.cn)
co.jp: (google.co.jp)
co.uk: (google.co.uk)
com: (google.com)
com.au: (google.com.au)
com.be: (google.com.be)
com.br: (google.com.br)
com.mx: (google.com.mx)
com.tr: (google.com.tr)
de: (google.de)
eg: (google.eg)
es: (google.es)
fr: (google.fr)
in: (google.in)
it: (google.it)
nl: (google.nl)
pl: (google.pl)
sa: (google.sa)
se: (google.se)
sg: (google.sg)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY parameters must be specified.
We also support Google Search parameters for this endpoint.
UULE
: Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ
You can find an online UULE generator here: https://site-analyzer.pro/services-seo/uule/
NUM
: Number of results
HL
: Host Language. For example: DE
GL
: Boosts matches whose country of origin matches the parameter value.
For example: DE
IE
: Character encoding the engine uses to interpret the query string. For example: UTF8
OE
: Character encoding used for the results. For example: UTF8
START
: Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 11th search result (meaning results start with page 2 when "num" is 10)
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
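A sketch combining the TLD, COUNTRY, and Google search parameters described above. The `/structured/google/search` path is assumed from the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder SERP request: google.fr, scraped from a French IP,
# asking for 20 French-language results.
params = {
    "api_key": "YOUR_API_KEY",
    "query": "best sushi in paris",
    "tld": "fr",       # google.fr
    "country": "fr",   # scrape from France
    "num": 20,         # number of results
    "hl": "fr",        # host language
}
url = "https://api.scraperapi.com/structured/google/search?" + urlencode(params)
print(url)
```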
This endpoint will retrieve news data from a Google News result page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
Multiple Parameters can be used with this method:
APIKEY
User account’s normal API key.
QUERY
Google Search Query.
TLD
Set this value to scrape the respective Google domain. Valid values include TLDs for those countries or regions where Google has a search engine:
ae: (google.ae)
ca: (google.ca)
cn: (google.cn)
co.jp: (google.co.jp)
co.uk: (google.co.uk)
com: (google.com)
com.au: (google.com.au)
com.be: (google.com.be)
com.br: (google.com.br)
com.mx: (google.com.mx)
com.tr: (google.com.tr)
de: (google.de)
eg: (google.eg)
es: (google.es)
fr: (google.fr)
in: (google.in)
it: (google.it)
nl: (google.nl)
pl: (google.pl)
sa: (google.sa)
se: (google.se)
sg: (google.sg)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY parameters must be specified.
We also support Google Search parameters for this endpoint.
UULE
: Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ
You can find an online UULE generator here: https://site-analyzer.pro/services-seo/uule/
NUM
: Number of results
HL
: Host Language. For example: DE
GL
: Boosts matches whose country of origin matches the parameter value.
For example: DE
IE
: Character encoding the engine uses to interpret the query string. For example: UTF8
OE
: Character encoding used for the results. For example: UTF8
START
: Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 11th search result (meaning results start with page 2 when "num" is 10)
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve jobs data from a Google Jobs result page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
Multiple Parameters can be used with this method:
APIKEY
User account’s normal API key.
QUERY
Google Search Query.
TLD
Set this value to scrape the respective Google domain. Valid values include TLDs for those countries or regions where Google has a search engine:
ae: (google.ae)
ca: (google.ca)
cn: (google.cn)
co.jp: (google.co.jp)
co.uk: (google.co.uk)
com: (google.com)
com.au: (google.com.au)
com.be: (google.com.be)
com.br: (google.com.br)
com.mx: (google.com.mx)
com.tr: (google.com.tr)
de: (google.de)
eg: (google.eg)
es: (google.es)
fr: (google.fr)
in: (google.in)
it: (google.it)
nl: (google.nl)
pl: (google.pl)
sa: (google.sa)
se: (google.se)
sg: (google.sg)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY parameters must be specified.
We also support Google Search parameters for this endpoint.
UULE
: Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ
You can find an online UULE generator here: https://site-analyzer.pro/services-seo/uule/
NUM
: Number of results
HL
: Host Language. For example: DE
GL
: Boosts matches whose country of origin matches the parameter value.
For example: DE
IE
: Character encoding the engine uses to interpret the query string. For example: UTF8
OE
: Character encoding used for the results. For example: UTF8
START
: Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 11th search result (meaning results start with page 2 when "num" is 10)
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve shopping data from a Google Shopping result page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
Multiple Parameters can be used with this method:
APIKEY
User account’s normal API key.
QUERY
Google Search Query.
TLD
Set this value to scrape the respective Google domain. Valid values include TLDs for those countries or regions where Google has a search engine:
ae: (google.ae)
ca: (google.ca)
cn: (google.cn)
co.jp: (google.co.jp)
co.uk: (google.co.uk)
com: (google.com)
com.au: (google.com.au)
com.be: (google.com.be)
com.br: (google.com.br)
com.mx: (google.com.mx)
com.tr: (google.com.tr)
de: (google.de)
eg: (google.eg)
es: (google.es)
fr: (google.fr)
in: (google.in)
it: (google.it)
nl: (google.nl)
pl: (google.pl)
sa: (google.sa)
se: (google.se)
sg: (google.sg)
COUNTRY
Valid values are two-letter country codes for which we offer Geo Targeting (e.g. “au”, “es”, “it”, etc.).
Where a Google domain needs to be scraped from another country (e.g. scraping google.com from Canada), both TLD and COUNTRY parameters must be specified.
We also support Google Search parameters for this endpoint.
UULE
: Set a region for a search. For example: w+CAIQICINUGFyaXMsIEZyYW5jZQ
You can find an online UULE generator here: https://site-analyzer.pro/services-seo/uule/
NUM
: Number of results
HL
: Host Language. For example: DE
GL
: Boosts matches whose country of origin matches the parameter value.
For example: DE
IE
: Character encoding the engine uses to interpret the query string. For example: UTF8
OE
: Character encoding used for the results. For example: UTF8
START
: Set the starting offset in the result list. When start=10 is set, the first element in the result list will be the 11th search result (meaning results start with page 2 when "num" is 10)
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve maps data from a Google Maps search and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
API_KEY
(required)
User's normal API Key
QUERY
(required)
Query string. For example: vegan restaurant
LATITUDE
(required)
Latitude value, for example: 21.029738077531196
LONGITUDE
(required)
Longitude value, for example: 105.85222341863856
Single Query Request:
Multiple Query Request:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
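A sketch of a Maps search built around the coordinate pair from the parameter list above. The `/structured/google/mapssearch` path is an assumption based on the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder Maps search: vegan restaurants near a Hanoi coordinate.
params = {
    "api_key": "YOUR_API_KEY",
    "query": "vegan restaurant",
    "latitude": "21.029738077531196",    # example latitude from the docs
    "longitude": "105.85222341863856",   # example longitude from the docs
}
url = "https://api.scraperapi.com/structured/google/mapssearch?" + urlencode(params)
print(url)
```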
This endpoint will retrieve listing information from a single 'For Rent' property listing page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin page. The URL has to be the URL of a property for rent.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
raw
This is a boolean param: true or false. If the raw parameter is set to true, the raw data will be extracted from the page without further parsing.
Important: The structure of the data in raw mode cannot be guaranteed. It’s a tradeoff: you get a lot more information back, but the structure of the response may change if Redfin modifies their page layout.
For single query requests:
For multiple query requests:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve listing information from a single 'For Sale' property listing page and transform it into usable JSON.
Single Query Request:
Multiple Query Request:
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin page. The URL has to be the URL of a property for sale.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
raw
This is a boolean param: true or false. If the raw parameter is set to true, the raw data will be extracted from the page without further parsing.
Important: The structure of the data in raw mode cannot be guaranteed. It’s a tradeoff: you get a lot more information back, but the structure of the response may change if Redfin modifies their page layout.
For single query requests:
For multiple query requests:
This endpoint will return the search results from a listing search page and transform them into usable JSON.
Single Query Request:
Multiple Query Request:
API_KEY
(required)
User's API Key.
URL
(required)
The URL of the Redfin search page. The URL has to be a Redfin Search page.
country_code
Allows you to geotarget the request. Use this parameter if you want Redfin to be scraped from a specific country.
TLD
For single query requests:
For multiple query requests:
This endpoint will retrieve product list data from Walmart as a result of a search.
Single query request:
Multiple Query Request:
APIKEY
User account’s normal API key.
QUERY
Walmart search term
PAGE
Pagination. Example: 3
TLD
Top level domain. Valid values are com and ca
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is the default if the parameter is not added. Options:
csv
json
For single query requests:
For multiple query requests:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
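A sketch of a paginated Walmart search requesting CSV output via the output_format parameter described above. The `/structured/walmart/search` path is assumed from the request examples for this method:

```python
from urllib.parse import urlencode

# Placeholder Walmart search: page 2 of results, returned as CSV.
params = {
    "api_key": "YOUR_API_KEY",
    "query": "coffee maker",   # illustrative search term
    "page": 2,                 # pagination
    "tld": "com",              # walmart.com (the other valid value is "ca")
    "output_format": "csv",    # CSV instead of the default JSON
}
url = "https://api.scraperapi.com/structured/walmart/search?" + urlencode(params)
print(url)
```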
This endpoint will retrieve Walmart product list for a specified product category.
Single query request:
Multiple query request:
APIKEY
User account’s normal API key.
CATEGORY
Walmart category id. Example: 3944_1089430_37807
PAGE
Pagination. Example: 3
TLD
Top level domain. Valid values are com and ca
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is the default if the parameter is not added. Options:
csv
json
For single query requests:
For multiple query requests:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve information about a Walmart product.
Single query request:
Multiple query request:
APIKEY
User account’s normal API key.
PRODUCTID
Walmart product id. Example: 65EDSZVED3NS
TLD
Top level domain. Valid values are com and ca
OUTPUT_FORMAT
For structured data methods we offer CSV and JSON output. JSON is the default if the parameter is not added. Options:
csv
json
For single query requests:
For multiple query requests:
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
This endpoint will retrieve reviews for a specified product from a Walmart reviews page and transform it into usable JSON.
api_key
Your API Key.
product_id
Walmart Product id. Example: 5253396052
sort
Sort by option. Valid values are:
relevancy
helpful
submission-desc
submission-asc
rating-desc
rating-asc
page
Specify the page number
TLD
Top level domain. Valid values are com and ca.
After the job(s) finish, you will find the result under the response key in the response JSON object. The structure is the same as in the corresponding SYNC data endpoint.
Some advanced users may want to send POST/PUT requests in order to scrape forms and API endpoints directly. You can do this by sending a POST/PUT request through ScraperAPI. The return value will be stringified. So if you want to use it as JSON, you will need to parse it into a JSON object.
API ENDPOINT REQUEST
PROXY MODE
ASYNC MODE
You need to send a POST request to the Async service. The details of the POST request you want to send to the target site go in the data field.
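A sketch of a POST routed through the API endpoint. The target URL and form fields are placeholders, and because the body comes back stringified, you parse it yourself when the target returns JSON:

```python
from urllib.parse import urlencode

# Build the API endpoint URL; the target URL is percent-encoded for us.
endpoint = "https://api.scraperapi.com/?" + urlencode({
    "api_key": "YOUR_API_KEY",
    "url": "https://httpbin.org/post",  # placeholder target that echoes POSTs
})
form_data = {"field1": "value1"}        # illustrative form payload
# With the requests library:
#   resp = requests.post(endpoint, data=form_data, timeout=70)
#   parsed = json.loads(resp.text)      # stringified JSON -> dict
print(endpoint)
```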
ScraperAPI enables you to customize the API’s functionality by adding additional parameters to your requests. The API will accept the following parameters:
render
Activate javascript rendering by setting render=true
in your request. The API will automatically render the javascript on the page and return the HTML response after the javascript has been rendered.
Requests using this parameter cost 10 API credits, or 75 if used in combination with ultra-premium ultra_premium=true
.
screenshot
Gives you the ability to take a screenshot of the target page through the use of the parameter screenshot=true
. This parameter automatically enables JS rendering to get the full page content, before taking a screenshot.
country_code
Activate country geotargeting by setting country_code=us
to use US proxies for example.
This parameter does not increase the cost of the API request.
premium
Activate premium residential and mobile IPs by setting premium=true
. Using premium proxies costs 10 API credits, or 25 API credits if used in combination with Javascript rendering render=true
. (Cannot be combined with Ultra_Premium)
session_number
Reuse the same proxy by setting session_number=123
for example.
This parameter does not increase the cost of the API request. (Cannot be combined with Premium/Ultra_Premium)
binary_target
Helpful when trying to scrape files or images. This tells our API that the target is a file.
keep_headers
Use your own custom headers by setting keep_headers=true
along with sending your own headers to the API.
This parameter does not increase the cost of the API request.
device_type
Set your requests to use mobile or desktop user agents by setting device_type=desktop
or device_type=mobile
.
This parameter does not increase the cost of the API request.
autoparse
Activate auto parsing for select websites by setting autoparse=true
. The API will parse the data on the page and return it in JSON format.
This parameter does not increase the cost of the API request.
ultra_premium
Activate our advanced bypass mechanisms by setting ultra_premium=true
.
Requests using this parameter cost 30 API credits, or 75 if used in combination with Javascript rendering. (Cannot be combined with Premium)
output_format
The output_format
parameter allows you to instruct the API on what the response file type should be. Valid options:
markdown
text
Valid options for Structured Data Endpoints (SDEs):
json
csv
markdown
text
Please be aware that our current support is limited to US ZIP Codes exclusively.
Target specific ZIP codes to scrape product listings, offers, search queries, and more from regions of interest, providing valuable insights for market research, competitor analysis, and pricing optimization. When you need to focus solely on a particular ZIP code, incorporate the zip
parameter into your requests.
Here are some example ZIP codes that we support:
33837 (Davenport, FL)
62864 (Mount Vernon, IL)
92223 (Beaumont, CA)
92392 (Victorville, CA)
For example, set zip=92223
to retrieve information tailored to the Beaumont, CA area within the response:
API REQUEST
PROXY MODE
SDE METHOD
ASYNC SDE METHOD
Coming soon
We encourage you to explore the ZIP code targeting feature to see how it can enhance your data collection. Try various ZIP codes to get a feel for how the targeted information changes and tailor your requests to suit your needs. If certain ZIP codes are not working or accepted during your testing, please contact our support team to enable them for you.
ScraperAPI's caching system unlocks a new level of efficiency. By leveraging advanced decision-making mechanisms, we maintain an extensive amount of cached data that is ready to be served when called upon. When you request a page with a cached response available, you'll receive the cached data, ensuring much faster access to the information you need with a 100% success rate.
Difficult Pages: Perfect for pages that are challenging to scrape.
10-Minute Updates: cached values are no older than 10 minutes.
Guaranteed Success: 100% success rate for cached results.
Faster Response Times: Retrieve data quicker from cached results.
Fewer Retries: Reduced number of retries needed to serve back the response.
🚀Increased Efficiency:
Save time and resources by reducing the need to scrape the same page multiple times.
✅Improved Reliability:
Enhance the reliability of your scraping tasks with consistent and timely data retrieval.
For cases where real-time data is required, you can ensure the API serves uncached data by adding the parameter cache_control=no-cache
to the URL, as shown below:
We tag cached responses with the sa-from-cache: 1
response header, making it easy to distinguish between cached and non-cached responses.
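A sketch of forcing a fresh scrape and detecting cached responses via the sa-from-cache header. The target URL is a placeholder:

```python
from urllib.parse import urlencode

# Bypass the cache for this request with cache_control=no-cache.
url = "https://api.scraperapi.com/?" + urlencode({
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/",     # placeholder target
    "cache_control": "no-cache",       # force an uncached scrape
})

def served_from_cache(headers):
    """True when the response carries the sa-from-cache: 1 marker header."""
    return headers.get("sa-from-cache") == "1"

print(served_from_cache({"sa-from-cache": "1"}))  # a cached response
print(served_from_cache({}))                      # a fresh scrape
```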
We're always looking to improve. Stay tuned for the upcoming max_age
option, which will give you even more control over your caching preferences.
ScraperAPI helps you control and manage your costs efficiently. By using the max_cost
parameter with your requests, you instruct the API to set a limit on the maximum API credits you'd like to spend per each individual scrape. This helps prevent overspending, ensuring you stay within your individual project's budget.
API REQUEST
ASYNC REQUEST
PROXY MODE
If the scrape cost exceeds your limit, a 403
status code will be returned for the request, with the following error message:
"This request exceeds your max_cost. You can view the cost per request in your response header or in the API Playground on the dashboard."
Important note: Only use this feature if you need to send custom headers to retrieve specific results from the website. Within the API we have a sophisticated header management system designed to increase success rates and performance on difficult sites. When you send your own custom headers you override our header system, which oftentimes lowers your success rates. Unless you absolutely need to send custom headers to get the data you need, we advise that you don’t use this functionality.
If you would like to use your own custom headers (user agents, cookies, etc.) when making a request to the website, simply set keep_headers=true
and send the API the headers you want to use. The API will then use these headers when sending requests to the website.
If you need to get results for mobile devices, use the device_type
parameter to set the user-agent header for your request, instead of setting your own.
API REQUEST
PROXY MODE
If your use case requires you to exclusively use either desktop or mobile user agents in the headers it sends to the website then you can use the device_type
parameter.
Set device_type=desktop
to have the API set a desktop (e.g. macOS, Windows, or Linux) user agent. Note: This is the default behavior. Not setting the parameter will have the same effect.
Set device_type=mobile
to have the API set a mobile (e.g. iPhone or Android) user agent.
Note: The device type you set will be overridden if you use keep_headers=true
and send your own user agent in the requests header.
API REQUEST
PROXY MODE
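A sketch of requesting a page with a mobile user agent. Note that combining this with keep_headers=true and your own User-Agent header would override it, as described above. The target URL is a placeholder:

```python
from urllib.parse import urlencode

# Ask the API to use a mobile (e.g. iPhone or Android) user agent.
url = "https://api.scraperapi.com/?" + urlencode({
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/",   # placeholder target
    "device_type": "mobile",         # desktop is the default
})
print(url)
```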
Certain websites (typically e-commerce stores and search engines) display different data to different users based on the geolocation of the IP used to make the request to the website. In cases like these, you can use the API’s geotargeting functionality to easily use proxies from the required country to retrieve the correct data from the website.
To control the geolocation of the IP used to make the request, simply set the country_code
parameter to the country you want the proxy to be from and the API will automatically use the correct IP for that request.
For example: to ensure your requests come from the United States, set the country_code
parameter to country_code=us
.
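A sketch of geotargeting a request, here routing it through German proxies. The target URL is a placeholder:

```python
from urllib.parse import urlencode

# Route this request through a German IP with country_code=de.
url = "https://api.scraperapi.com/?" + urlencode({
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/",   # placeholder target
    "country_code": "de",            # Germany (Business Plan and higher)
})
print(url)
```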
Business and Enterprise Plan users can geotarget their requests to the following 69 countries (Hobby and Startup Plans can only use US and EU geotargeting) by using the country_code
in their request:
us
United States
Hobby Plan and higher.
eu
Europe (general)
Hobby Plan and higher.
au
Australia
Business Plan and higher.
ae
United Arab Emirates
Business Plan and higher.
ar
Argentina
Business Plan and higher.
at
Austria
Business Plan and higher.
bd
Bangladesh
Business Plan and higher.
be
Belgium
Business Plan and higher.
br
Brazil
Business Plan and higher.
ca
Canada
Business Plan and higher.
ch
Switzerland
Business Plan and higher.
cl
Chile
Business Plan and higher.
cn
China
Business Plan and higher.
de
Germany
Business Plan and higher.
dk
Denmark
Business Plan and higher.
eg
Egypt
Business Plan and higher.
es
Spain
Business Plan and higher.
fi
Finland
Business Plan and higher.
fr
France
Business Plan and higher.
gr
Greece
Business Plan and higher.
hk
Hong Kong
Business Plan and higher.
hu
Hungary
Business Plan and higher.
id
Indonesia
Business Plan and higher.
ie
Ireland
Business Plan and higher.
il
Israel
Business Plan and higher.
in
India
Business Plan and higher.
it
Italy
Business Plan and higher.
jo
Jordan
Business Plan and higher.
jp
Japan
Business Plan and higher.
ke
Kenya
Business Plan and higher.
kr
South Korea
Business Plan and higher.
mx
Mexico
Business Plan and higher.
my
Malaysia
Business Plan and higher.
ng
Nigeria
Business Plan and higher.
nl
Netherlands
Business Plan and higher.
no
Norway
Business Plan and higher.
nz
New Zealand
Business Plan and higher.
pe
Peru
Business Plan and higher.
ph
Philippines
Business Plan and higher.
pk
Pakistan
Business Plan and higher.
pl
Poland
Business Plan and higher.
pt
Portugal
Business Plan and higher.
ru
Russia
Business Plan and higher.
sa
Saudi Arabia
Business Plan and higher.
se
Sweden
Business Plan and higher.
sg
Singapore
Business Plan and higher.
th
Thailand
Business Plan and higher.
tr
Turkey
Business Plan and higher.
tw
Taiwan
Business Plan and higher.
ua
Ukraine
Business Plan and higher.
uk
United Kingdom
Business Plan and higher.
ve
Venezuela
Business Plan and higher.
vn
Vietnam
Business Plan and higher.
za
South Africa
Business Plan and higher.
bg
Bulgaria
Business Plan and higher.
hr
Croatia
Business Plan and higher.
cy
Cyprus
Business Plan and higher.
cz
Czechia
Business Plan and higher.
ee
Estonia
Business Plan and higher.
is
Iceland
Business Plan and higher.
lv
Latvia
Business Plan and higher.
li
Liechtenstein
Business Plan and higher.
lt
Lithuania
Business Plan and higher.
mt
Malta
Business Plan and higher.
ro
Romania
Business Plan and higher.
sk
Slovakia
Business Plan and higher.
si
Slovenia
Business Plan and higher.
ec
Ecuador
Business Plan and higher.
pa
Panama
Business Plan and higher.
Other countries are available to Enterprise customers upon request.
ZIP Code Geo targeting is currently supported for the Amazon domain. To find out more, please visit this link.
API REQUEST
PROXY MODE
In addition to the list of Country Codes listed on the Geotargeting page, our premium geotargeting service offers advanced capabilities for accessing and targeting exclusive geographic locations.
To get access to these geographic locations, please use premium=true
together with the country_code
parameter:
API REQUEST
PROXY MODE
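As a sketch, a premium geotargeted request can be put together like this in Python (the API key is a placeholder, httpbin.org is an illustrative target, and ad is the Andorra code from the list below):

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder: use your own key

# Premium geotargeting: premium=true together with country_code
params = {
    "api_key": API_KEY,
    "url": "https://httpbin.org/ip",  # illustrative target URL
    "premium": "true",
    "country_code": "ad",             # Andorra, from the list below
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# To send it (remember the recommended 70-second timeout):
#   import requests
#   html = requests.get(request_url, timeout=70).text
```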
ad
Andorra
Business Plan and higher.
af
Afghanistan
Business Plan and higher.
ag
Antigua and Barbuda
Business Plan and higher.
ai
Anguilla
Business Plan and higher.
al
Albania
Business Plan and higher.
am
Armenia
Business Plan and higher.
ao
Angola
Business Plan and higher.
as
American Samoa
Business Plan and higher.
aw
Aruba
Business Plan and higher.
ax
Åland Islands
Business Plan and higher.
az
Azerbaijan
Business Plan and higher.
ba
Bosnia and Herzegovina
Business Plan and higher.
bb
Barbados
Business Plan and higher.
bf
Burkina Faso
Business Plan and higher.
bh
Bahrain
Business Plan and higher.
bi
Burundi
Business Plan and higher.
bj
Benin
Business Plan and higher.
bl
Saint Barthélemy
Business Plan and higher.
bm
Bermuda
Business Plan and higher.
bn
Brunei
Business Plan and higher.
bo
Bolivia
Business Plan and higher.
bq
Bonaire, Sint Eustatius and Saba
Business Plan and higher.
bs
Bahamas
Business Plan and higher.
bt
Bhutan
Business Plan and higher.
bw
Botswana
Business Plan and higher.
by
Belarus
Business Plan and higher.
bz
Belize
Business Plan and higher.
cd
Democratic Republic of the Congo
Business Plan and higher.
cf
Central African Republic
Business Plan and higher.
cg
Republic of the Congo
Business Plan and higher.
ci
Côte d'Ivoire (Ivory Coast)
Business Plan and higher.
ck
Cook Islands
Business Plan and higher.
cm
Cameroon
Business Plan and higher.
cr
Costa Rica
Business Plan and higher.
cu
Cuba
Business Plan and higher.
cv
Cape Verde
Business Plan and higher.
cw
Curaçao
Business Plan and higher.
dj
Djibouti
Business Plan and higher.
dm
Dominica
Business Plan and higher.
do
Dominican Republic
Business Plan and higher.
dz
Algeria
Business Plan and higher.
er
Eritrea
Business Plan and higher.
et
Ethiopia
Business Plan and higher.
fj
Fiji
Business Plan and higher.
fk
Falkland Islands
Business Plan and higher.
fm
Micronesia
Business Plan and higher.
fo
Faroe Islands
Business Plan and higher.
ga
Gabon
Business Plan and higher.
gd
Grenada
Business Plan and higher.
ge
Georgia
Business Plan and higher.
gf
French Guiana
Business Plan and higher.
gg
Guernsey
Business Plan and higher.
gh
Ghana
Business Plan and higher.
gi
Gibraltar
Business Plan and higher.
gl
Greenland
Business Plan and higher.
gm
The Gambia
Business Plan and higher.
gn
Guinea
Business Plan and higher.
gp
Guadeloupe
Business Plan and higher.
gq
Equatorial Guinea
Business Plan and higher.
gt
Guatemala
Business Plan and higher.
gu
Guam
Business Plan and higher.
gw
Guinea-Bissau
Business Plan and higher.
gy
Guyana
Business Plan and higher.
hn
Honduras
Business Plan and higher.
ht
Haiti
Business Plan and higher.
im
Isle of Man
Business Plan and higher.
io
British Indian Ocean Territory
Business Plan and higher.
je
Jersey
Business Plan and higher.
jm
Jamaica
Business Plan and higher.
kg
Kyrgyzstan
Business Plan and higher.
kh
Cambodia
Business Plan and higher.
km
Comoros
Business Plan and higher.
kn
Saint Kitts and Nevis
Business Plan and higher.
kw
Kuwait
Business Plan and higher.
ky
Cayman Islands
Business Plan and higher.
kz
Kazakhstan
Business Plan and higher.
la
Laos
Business Plan and higher.
lc
Saint Lucia
Business Plan and higher.
lk
Sri Lanka
Business Plan and higher.
lr
Liberia
Business Plan and higher.
ls
Lesotho
Business Plan and higher.
lu
Luxembourg
Business Plan and higher.
ly
Libya
Business Plan and higher.
ma
Morocco
Business Plan and higher.
mc
Monaco
Business Plan and higher.
md
Moldova
Business Plan and higher.
me
Montenegro
Business Plan and higher.
mf
Saint Martin
Business Plan and higher.
mg
Madagascar
Business Plan and higher.
mh
Marshall Islands
Business Plan and higher.
mk
North Macedonia
Business Plan and higher.
ml
Mali
Business Plan and higher.
mm
Myanmar
Business Plan and higher.
mn
Mongolia
Business Plan and higher.
mo
Macau
Business Plan and higher.
mp
Northern Mariana Islands
Business Plan and higher.
mq
Martinique
Business Plan and higher.
mr
Mauritania
Business Plan and higher.
ms
Montserrat
Business Plan and higher.
mu
Mauritius
Business Plan and higher.
mv
Maldives
Business Plan and higher.
mw
Malawi
Business Plan and higher.
mz
Mozambique
Business Plan and higher.
na
Namibia
Business Plan and higher.
nc
New Caledonia
Business Plan and higher.
ne
Niger
Business Plan and higher.
ni
Nicaragua
Business Plan and higher.
np
Nepal
Business Plan and higher.
pf
French Polynesia
Business Plan and higher.
pg
Papua New Guinea
Business Plan and higher.
pm
Saint Pierre and Miquelon
Business Plan and higher.
pr
Puerto Rico
Business Plan and higher.
ps
Palestine
Business Plan and higher.
pw
Palau
Business Plan and higher.
py
Paraguay
Business Plan and higher.
qa
Qatar
Business Plan and higher.
re
Réunion
Business Plan and higher.
rw
Rwanda
Business Plan and higher.
sb
Solomon Islands
Business Plan and higher.
sc
Seychelles
Business Plan and higher.
sd
Sudan
Business Plan and higher.
sh
Saint Helena
Business Plan and higher.
sj
Svalbard and Jan Mayen
Business Plan and higher.
sl
Sierra Leone
Business Plan and higher.
sm
San Marino
Business Plan and higher.
sn
Senegal
Business Plan and higher.
so
Somalia
Business Plan and higher.
sr
Suriname
Business Plan and higher.
ss
South Sudan
Business Plan and higher.
st
São Tomé and Príncipe
Business Plan and higher.
sv
El Salvador
Business Plan and higher.
sx
Sint Maarten
Business Plan and higher.
sz
Eswatini
Business Plan and higher.
tc
Turks and Caicos Islands
Business Plan and higher.
td
Chad
Business Plan and higher.
tj
Tajikistan
Business Plan and higher.
tl
Timor-Leste
Business Plan and higher.
tm
Turkmenistan
Business Plan and higher.
tn
Tunisia
Business Plan and higher.
tv
Tuvalu
Business Plan and higher.
tz
Tanzania
Business Plan and higher.
ug
Uganda
Business Plan and higher.
uy
Uruguay
Business Plan and higher.
uz
Uzbekistan
Business Plan and higher.
va
Vatican City
Business Plan and higher.
vc
Saint Vincent and the Grenadines
Business Plan and higher.
vg
British Virgin Islands
Business Plan and higher.
vi
U.S. Virgin Islands
Business Plan and higher.
vu
Vanuatu
Business Plan and higher.
wf
Wallis and Futuna
Business Plan and higher.
ws
Samoa
Business Plan and higher.
ye
Yemen
Business Plan and higher.
yt
Mayotte
Business Plan and higher.
zm
Zambia
Business Plan and higher.
zw
Zimbabwe
Business Plan and higher.
Along with the “traditional” means of passing parameters, we also support passing parameters as headers. Passing parameters such as api_key
, render
, ultra_premium
and instruction_set
is very straightforward.
API REQUEST
Instead of including the parameters in the URL
you can just pass them as headers
PROXY MODE
Note that credentials must still be passed to the proxy in the manner shown above, not as headers.
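As an illustrative sketch, assuming the parameter names are used verbatim as header names, a request with header-based parameters could look like this (the key and target URL are placeholders):

```python
from urllib.parse import urlencode

# Parameters such as api_key and render can travel as HTTP headers instead
# of query-string parameters; only the target url stays in the query string.
headers = {
    "api_key": "YOUR_API_KEY",  # placeholder
    "render": "true",
}
request_url = "https://api.scraperapi.com/?" + urlencode({"url": "https://example.com"})
print(request_url)
# To send it:
#   import requests
#   html = requests.get(request_url, headers=headers, timeout=70).text
```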
Our standard proxy pools include millions of proxies from over a dozen ISPs and should be sufficient for the vast majority of scraping jobs. However, for a few particularly difficult to scrape sites, we also maintain a private internal pool of residential and mobile IPs. This pool is available to all paid users.
Requests through our premium residential and mobile pool are charged at 10 times the normal rate (every successful request will count as 10 API credits against your monthly limit). Each request that uses both javascript rendering and our premium proxy pools will be charged at 25 times the normal rate (every successful request will count as 25 API credits against your monthly limit). To send a request through our premium proxy pool, please set the premium
query parameter to premium=true
.
We also have a higher premium level that you can use for really tough targets, such as LinkedIn. You can access these pools by adding the ultra_premium=true
query parameter. These requests will use 30 API credits against your monthly limit, or 75 if used together with rendering. Please note, this is only available on our paid plans. Requests with the ultra_premium=true
parameter are cached (by default) to enhance performance and efficiency. For detailed information about how caching works and its benefits, please refer to our Cached Results page.
API REQUEST
PROXY MODE
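A minimal sketch of a premium-pool request (placeholder key and illustrative target):

```python
from urllib.parse import urlencode

# Route a request through the premium residential/mobile pool (10 credits
# per successful request, 25 when combined with render=true).
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "premium": "true",
    # "ultra_premium": "true",     # use instead of premium for the hardest targets
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests; html = requests.get(request_url, timeout=70).text
```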
Learn more about our rotating proxy and residential proxy solution.
If you are crawling a page that requires JavaScript rendering to display the data you need, we can fetch these pages using a headless browser.
To render JavaScript, simply set render=true
and we will use a headless Google Chrome instance to fetch the page. This feature is available on all plans.
Pass the JavaScript rendering parameter within the URL:
API REQUEST
PROXY MODE
Pass the parameter in the headers:
API REQUEST
PROXY MODE
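A minimal sketch of a rendering request (placeholder key and illustrative target):

```python
from urllib.parse import urlencode

# Enable headless-Chrome JavaScript rendering with render=true.
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "render": "true",
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests; html = requests.get(request_url, timeout=70).text
```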
The Render Instruction Set is a set of instructions that tells the browser which actions to execute during page rendering. By combining these instructions, you can perform complex operations such as completing a search form or scrolling through an endlessly scrolling page. This capability enables efficient automation of dynamic web content interactions.
To send an instruction set to the browser, you send a JSON object to the API as a header, along with any other necessary parameters, including the "render=true" parameter.
In the following example, we enter a search term into a form, click the search icon, and then wait for the search results to load.
To send the above instruction set to our API endpoint, it must be formatted as a single string and passed as a header.
API REQUEST
PROXY MODE
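A sketch of serializing an instruction set into the single string the API expects, passed here via an instruction_set header (the header name comes from the parameter list earlier; the selectors, key, and target are illustrative):

```python
import json
from urllib.parse import urlencode

# The instruction set must be one JSON string, sent as a header together
# with render=true. The selectors below are illustrative.
instructions = [
    {"type": "input",
     "selector": {"type": "css", "value": "#searchInput"},
     "value": "cowboy boots"},
    {"type": "click",
     "selector": {"type": "css", "value": "#search-form button[type=\"submit\"]"}},
    {"type": "wait_for_event", "event": "networkidle", "timeout": 10},
]
headers = {"instruction_set": json.dumps(instructions)}
request_url = "https://api.scraperapi.com/?" + urlencode({
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "render": "true",
})
# import requests
# html = requests.get(request_url, headers=headers, timeout=70).text
```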
Browser instructions are organized as an array of objects within the instruction set, each with a specific structure. Below are the various instructions and the corresponding data they require:
Click on an element on the page.
Args
type: str = "click"
selector: dict
    type: Enum["xpath", "css", "text"]
    value: str
timeout: int (in seconds, optional)
Example
[{
  "type": "click",
  "selector": {
    "type": "css",
    "value": "#search-form button[type=\"submit\"]"
  }
}]
Enter a value into an input field on the page.
Args
type: str = "input"
selector: dict
    type: Enum["xpath", "css", "text"]
    value: str
value: str
timeout: int (in seconds, optional)
Example
[{
  "type": "input",
  "selector": {
    "type": "css",
    "value": "#searchInput"
  },
  "value": "cowboy boots"
}]
Execute a set of instructions in a loop a specified number of times by using the loop instruction with a sequence of standard instructions in the “instructions” argument.
Note that nesting loops isn't supported, so you can't create a “loop” instruction inside another “loop” instruction. This method is effective for automating actions on web pages with infinitely scrolling content, such as loading multiple pages of results by scrolling to the bottom of the page and waiting for additional content to load.
Args
type: str = "loop"
for: int
instructions: array
Example
[{
  "type": "loop",
  "for": 3,
  "instructions": [
    { "type": "scroll", "direction": "y", "value": "bottom" },
    { "type": "wait", "value": 5 }
  ]
}]
Scroll the page in the X (horizontal) or Y (vertical) direction, by a given number of pixels or to the top or bottom of the page.
Args
type: str = "scroll"
direction: Enum["x", "y"]
value: int or Enum["bottom", "top"]
Example
[{ "type": "scroll", "direction": "y", "value": "bottom" }]
Waits for a given number of seconds to elapse.
Args
type: str = "wait"
value: int
Example
[{ "type": "wait", "value": 10 }]
Waits for an event to occur within the browser.
Args
type: str = "wait_for_event"
event: Enum["domcontentloaded", "load", "navigation", "networkidle"]
timeout: int (in seconds, optional)
Example
[{ "type": "wait_for_event", "event": "networkidle", "timeout": 10 }]
- domcontentloaded = initial HTML loaded
- load = full page load
- navigation = page navigation
- networkidle = network requests stopped
- stabilize = page reaches steady state
Waits for an element to appear on the page. Takes an optional 'timeout' argument that instructs the rendering engine to wait a specified number of seconds for the element to appear.
Args
type: str = "wait_for_selector"
selector: dict
    type: Enum["xpath", "css", "text"]
    value: str
timeout: int (in seconds, optional)
Example
[{ "type": "wait_for_selector", "selector": { "type": "css", "value": "#content" }, "timeout": 5 }]
Our JavaScript rendering solution now gives you the ability to take a screenshot of the target page through the use of the parameter screenshot=true
. This parameter automatically enables JS rendering to get the full page content, before taking a screenshot.
After we serve the response, you can find the screenshot URL in the sa-screenshot
response header. The screenshot format is PNG
:
API REQUEST
ASYNC REQUEST
PROXY MODE - COMING SOON!
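A sketch of a screenshot request, with the location of the sa-screenshot header shown in comments (placeholder key and illustrative target):

```python
from urllib.parse import urlencode

# screenshot=true implies JS rendering; the PNG's URL comes back in the
# sa-screenshot response header.
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "screenshot": "true",
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests
# response = requests.get(request_url, timeout=70)
# print(response.headers.get("sa-screenshot"))  # URL of the PNG screenshot
```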
To reuse the same proxy for multiple requests, simply use the session_number
parameter by setting it equal to a unique integer for every session you want to maintain (e.g. session_number=123
). This will allow you to continue using the same proxy for each request with that session number. To create a new session simply set the session_number
parameter with a new integer to the API. The session value can be any integer. Sessions expire 15 minutes after the last usage.
API REQUEST
PROXY MODE
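A sketch of pinning a session (placeholder key and illustrative target; 123 is an arbitrary session integer):

```python
from urllib.parse import urlencode

# Reuse the same proxy across requests by pinning a session_number
# (any integer; sessions expire 15 minutes after the last use).
SESSION = 123
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "session_number": SESSION,
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests; html = requests.get(request_url, timeout=70).text
```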
At ScraperAPI, we ensure that the data you need is handled with precision and delivered in a timely and efficient manner.
Depending on the outcome of each request, the API returns specific status codes. In case of failure, we will retry for up to 70 seconds to maximize the chances of a successful response before returning an error.
For certain supported websites, we offer an autoparse
feature that returns parsed data in JSON format, streamlining your data processing.
All content is standardized to UTF-8
encoding, so you receive consistent results no matter the encoding format of the original site.
For more detailed insights, please refer to the linked articles below.
The API will return a specific status code after every request depending on whether the request was successful, failed or some other error occurred. ScraperAPI will retry failed requests for up to 70 seconds to try and get a successful response from the target URL before responding with a 500
error indicating a failed request.
Note: To avoid timing out your request before the API has had a chance to complete all retries, remember to set your timeout to 70 seconds.
In cases where a request fails after 70 seconds of retrying, you will not be charged for the unsuccessful request (you are only charged for successful requests, 200
and 404
status codes).
Errors can occasionally occur, so it's important to ensure your code handles them appropriately. Configuring your system to retry failed requests often leads to success. If you find that a request is persistently failing, double-check your request configuration to ensure it’s correct. If you’re repeatedly encountering anti-bot bans, please create a ticket with our support team and we will try to find a solution to bypass the restrictions.
If you receive a successful 200
status code response from the API but the response contains a CAPTCHA, please contact our support team and they will add it to our CAPTCHA detection database. Once included in our CAPTCHA database the API will treat it as a ban in the future and automatically retry the request.
Below are the possible status codes you will receive:
200
Successful response.
400
Bad request. Please check your request structure.
404
Requested page does not exist.
410
Requested page is no longer available.
500
After retrying for 70 seconds, the API was unable to receive a successful response.
429
You are sending requests too fast and exceeding your concurrency limit.
403
You have used up all your API credits.
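The codes above can be translated into a small client-side dispatch. The policy sketched here (retry on 500, back off on 429) is a suggestion, not an official recommendation:

```python
# Map each documented status code to a client-side action. The policy
# here (retry on 500, back off on 429) is a suggested sketch only.
def next_action(status_code: int) -> str:
    if status_code in (200, 404):  # charged, final: use the response
        return "done"
    if status_code == 429:         # over your concurrency limit: slow down
        return "backoff"
    if status_code == 500:         # API exhausted its 70 s of retries
        return "retry"
    if status_code == 403:         # out of API credits
        return "stop"
    return "inspect"               # 400, 410, anything unexpected

print(next_action(500))  # retry
```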
If you have specific project needs that require you to parse the data you get back from us, or transform it into a more readable or structured format, you are in the right place.
The portfolio of output formats that our product supports is growing, as we aim to meet your diverse project requirements and evolving industry standards. The subpages below go into detail about our existing output formats, where they apply, and which ones are best suited for your needs.
👉 Simply parse your data with one parameter - autoparse=true
👉 Get your parsed results in structured JSON
or CSV
for supported domains by choosing output_format=json
or output_format=csv
Read more about available domains here JSON Response - Autoparse 📜
👉 Get responses in an LLM-friendly structure with the output formats markdown
or text
for every URL on the web
Read more here LLM Output Formats 💻
For selected domains we offer a parameter that parses the data and returns structured JSON format.
You enable the parsing simply by adding autoparse=true
to your request.
Available domains:
Search Result
Product Pages
Product Pages
Product Pages
'For Sale' Listing
News Results
Search Results
Category Pages
Search Results
'For Rent' Listing
Job Results
Offers
Search Results
Listing Search Page
Shopping Results
Product Reviews
Google Maps
API REQUEST
PROXY MODE
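A sketch of an autoparse request; the key is a placeholder and the Amazon product URL is illustrative:

```python
from urllib.parse import urlencode

# autoparse=true returns structured JSON for supported domains.
params = {
    "api_key": "YOUR_API_KEY",  # placeholder
    "url": "https://www.amazon.com/dp/B0DGHPQJLP",  # illustrative product URL
    "autoparse": "true",
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests
# data = requests.get(request_url, timeout=70).json()  # parsed product data
```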
In addition to parsing the data, you can choose between two different formats for your structured response.
output_format=json
output_format=csv
Both options are available for the results listed above and can be used with the API in combination with the autoparse=true parameter, or with the Structured Data Collection Method.
Request:
Response:
Request:
Response:
input
name
brand
brand_url
pricing
list_price
shipping_price
availability_status
images
product_category
average_rating
feature_bullets
total_reviews
customization_options
seller_id
seller_name
ships_from
sold_by
B0DGHPQJLP
Apple iPhone 16 Pro 128 GB: 5G Handy mit Kamerasteuerung, 4K 120 fps Dolby Vision und einem großen Sprung bei der Batterielaufzeit. Funktioniert mit AirPods, Titan Natur
Besuche den Apple-Store
1.042,52 €
1.199,00 €
GRATIS
Auf Lager
["https://m.media-amazon.com/images/I/318JqEQUsPL.jpg","https://m.media-amazon.com/images/I/21bPL-xQTrL.jpg","https://m.media-amazon.com/images/I/31CK+Sv8xPL.jpg","https://m.media-amazon.com/images/I/31uzqWpN6ZL.jpg","https://m.media-amazon.com/images/I/51BLlolRXuL.jpg","https://m.media-amazon.com/images/I/31nafjXCXwL.jpg"]
4.4
["BEEINDRUCKENDES TITAN DESIGN – Das iPhone 16 Pro hat ein robustes und leichtes Design aus Titan mit einem größeren 6,3\" Super Retina XDR Display. Es ist extrem widerstandsfähig und hat einen Ceramic Shield der neuesten Generation auf der Vorderseite, der 2x härter ist als jedes andere Smartphone Glas.","ÜBERNIMM DIE KAMERASTEUERUNG – Mit der Kamerasteuerung kannst du einfacher und schneller auf Kameratools wie Zoom oder Tiefenschärfe zugreifen und das perfekte Foto in Rekordzeit aufnehmen.","...]
832
{"Farbe":[{"asin":"B0DGHPQJLP","is_selected":true,"value":"Titan Natur","image":"https://m.media-amazon.com/images/I/11LYpzRb1cL.jpg"},{"asin":"B0DGHH9JY3","is_selected":false,"value":"Titan Schwarz","image":"https://m.media-amazon.com/images/I/11EC0wYqODL.jpg"},{"asin":"B0DGHS5NND","is_selected":false,"value":"Titan Weiß","image":"https://m.media-amazon.com/images/I/011TB187wYL.jpg"},{"asin":"B0DGHZQXTJ","is_selected":false,"value":"Titan Wüstensand","image":"https://m.media-amazon.com/images/I/11lV2YEmZ9L.jpg"}]}
Amazon
Amazon
To properly train LLMs, a lot of high-quality, unbiased data is needed. Plenty of public data is relevant for LLMs, but at times it can be too noisy and too large. Luckily, we have a solution: one that gathers large-scale data and cleans it by removing irrelevant or duplicate content. The result is structured responses that can be used to train LLMs effectively. Simply add the parameter output_format=text
or output_format=markdown
to the request structure. Here are some examples:
API REQUEST
ASYNC REQUEST
PROXY MODE - COMING SOON!
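A sketch of requesting Markdown output (placeholder key and illustrative target; swap in text for plain text):

```python
from urllib.parse import urlencode

# Fetch a page as LLM-friendly Markdown. Works for any URL.
params = {
    "api_key": "YOUR_API_KEY",      # placeholder
    "url": "https://example.com",   # illustrative target
    "output_format": "markdown",    # or "text"
}
request_url = "https://api.scraperapi.com/?" + urlencode(params)
print(request_url)
# import requests; markdown = requests.get(request_url, timeout=70).text
```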
Markdown:
Text:
Regardless of any tags in the HTML response body that might specify a different encoding (for example ISO-8859-2), ScraperAPI processes and delivers all content in UTF-8 encoding. This standardization offers several key advantages:
🌐Uniform Data Handling:
UTF-8 encoding avoids issues related to special characters and symbols, making data processing smoother and reducing errors.
🔗Compatibility Across Systems:
UTF-8 is widely supported across various platforms and programming languages, ensuring compatibility and reducing integration challenges.
🛠️Easier Debugging:
Consistent encoding simplifies troubleshooting and debugging, as you can expect uniform data format in all your responses.
⚡Streamlined Development:
Developers can work with a single encoding format, reducing the need for additional encoding/decoding steps and simplifying the development process.
You only need to refer to the Content-Type header in the response to verify this:
Content-type: text/html; charset=utf-8
The Dashboard shows you all the information you need to keep track of your usage. It is divided into Usage, Billing, Documentation, and links to contact us.
Usage
Here you will find your API key, sample codes, monitoring tools, and usage statistics such as credits used, concurrency, failed requests, your current plan, and the end date of your billing cycle.
Billing
In this section, you are able to see your current plan, the end date of the current billing cycle, as well as subscribe to another plan. This is also the right place to set or update your billing details, payment method, view your invoices, manage your subscription, or cancel it.
Here you can also renew your subscription early, in case you run out of API credits sooner than planned. This option will let you renew your subscription at an earlier date than planned, charging you for the subscription price and resetting your credits. If you find yourself using this option regularly, please reach out to us so we can find a plan that can accommodate the number of credits you need.
Documentation
This will direct you to our documentation – exactly where you are right now.
Contact Support
Direct link to reach out to our technical support team.
Contact Sales
Direct link to our sales team, for any subscription or billing-related inquiries.
If you believe that your API key has been exposed to a third party, it is very important to generate a new one. To do this, click your email address in the bottom left corner of your Dashboard, then ‘Manage API keys’ and finally ‘New API key’. Copy the new API key and replace the old one in your scraping setup.
You can see your overall usage, the total of credits used in the current monthly cycle as well as your current concurrency usage in the ‘Monitoring & Stats’ part of your Dashboard. For more detailed information, you can download a domain report by clicking the small download button in the top right corner of the ‘Monitoring & Stats’ window of your dashboard.
The domain report includes which domains were scraped on which date, the parameters used and the credit consumption per domain per date. It also includes the number of canceled and failed requests on each scraping date.
If you would like to find out how to monitor your account usage and limits programmatically, please visit the Account Information page.
You can delete your ScraperAPI account by going to your Billing Page. Then click ‘Request account deletion’ in the top right corner. You will receive a confirmation email once your account has been successfully deleted.
You can download all previous invoices on the Billing Page: click ‘View billing history’ in the top right corner, and your invoice history will show up in a popup window. Then click download on the invoice you would like to view.
If you want to receive billing information related to your ScraperAPI subscription to an additional email address, you can add it by clicking ‘Edit billing address’ in the top right corner of the Billing Page.
Add or change your billing address by clicking ‘Edit billing address’ in the top right corner of the Billing Page.
Add or update your VAT number by clicking ‘Edit billing address’ in the top right corner of the Billing Page, then click the billing address and input the VAT number.
We accept credit cards and PayPal. For Enterprise accounts we can also arrange for other methods of payment.
To add or change the payment method used for your subscription, go to the Billing Page and click ‘Edit payment method’ in the top right corner.
To cancel your subscription, go to the Billing Page and click ‘Cancel subscription’ in the top right corner of the page. This will schedule the cancellation for the next renewal date; you can continue using any remaining credits until the end of the cycle and will not be charged again.
If you require an immediate cancellation and/or refund, please create a support ticket: Contact Support.
Every time you make a request, you consume credits. The credits you have available are defined by the plan you are currently using. The amount of credits you consume depends on the domain and parameters you add to your request.
We have created custom scrapers to target certain sites; if you scrape one of these domain categories, you will activate that scraper and the credit cost will change.
Normal Requests
1
E-Commerce Amazon
5
SERP Google, Bing - (applies to all subdomains)
25
Social Media LinkedIn, Twitter
30
Other domain-specific costs may apply; please use our API Playground before scraping to check what the cost per request will be.
If you are not a fan of GUIs, you can call this API endpoint instead: https://api.scraperapi.com/account/urlcost
It supports all scraping parameters as well. This comes in handy if you'd like to check what the cost is when enabling JS rendering, for example.
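A sketch of such a cost check, assuming the urlcost endpoint accepts the same api_key and url parameters (plus any feature parameters) as the main API endpoint:

```python
from urllib.parse import urlencode

# Check the credit cost of a request before sending it. This assumes the
# urlcost endpoint takes the same parameters as the main API endpoint.
params = {
    "api_key": "YOUR_API_KEY",     # placeholder
    "url": "https://example.com",  # illustrative target
    "render": "true",              # see how JS rendering changes the cost
}
cost_url = "https://api.scraperapi.com/account/urlcost?" + urlencode(params)
print(cost_url)
# import requests; print(requests.get(cost_url, timeout=70).text)
```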
Accessing domains with anti-scraping protections requires the activation of a bypass mechanism. This specialized process involves additional resources, resulting in an extra cost for scraping these protected websites.
Cloudflare Bypass
10
Datadome Bypass
10
PerimeterX/Human Bypass
10
Response Headers
Our response headers for each request will contain information on how many credits were spent on that request: sa-credit-cost
Depending on your needs, you may want to access different features on our platform. Each feature affects the credit cost of a request:
premium=true
– requests cost 10 credits
render=true
– requests cost 10 credits
screenshot=true
- requests cost 10 credits
wait_for_selector = x
- no extra cost
premium=true
+ render=true
– requests cost 25 credits
ultra_premium=true
– requests cost 30 credits*
ultra_premium=true
+ render=true
– requests cost 75 credits*
keep_headers = true
- no extra cost
country_code = x
- no extra cost
session_number = x
- no extra cost
In any requests, with or without these parameters, we will only charge for successful requests (200
and 404
status codes) and for requests that have been cancelled from your side before giving us enough time to finish them (70 seconds). If you run out of credits sooner than planned, you can renew your subscription early as explained in the section titled "Dashboard".
*Accounts that have discounted pricing will have a higher credit cost per request for our Ultra Premium domains to meet our minimum pricing of $3 per one thousand requests without rendering and $7 with rendered pages.
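The feature costs above can be summarized in a small helper. This sketch covers standard (non-discounted) accounts and non-custom domains only; domain-specific and bypass costs are separate:

```python
# Sketch: per-request credit cost from the feature list above, for
# standard accounts and non-custom domains.
def credit_cost(premium=False, ultra_premium=False, render=False):
    if ultra_premium:
        return 75 if render else 30
    if premium:
        return 25 if render else 10
    return 10 if render else 1  # render alone costs 10 credits

print(credit_cost(premium=True, render=True))  # 25
```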
If you would like to monitor your account usage and limits programmatically (how many concurrent requests you’re using, how many requests you’ve made, etc.) you may use the /account endpoint, which returns JSON.
Note: the requestCount
and failedRequestCount
numbers only refresh once every 15 seconds, while the concurrentRequests
number is available in real-time.
API REQUEST
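A sketch of querying the /account endpoint (placeholder key):

```python
from urllib.parse import urlencode

# Query your usage programmatically via the /account endpoint, which
# returns JSON (requestCount, failedRequestCount, concurrentRequests, ...).
account_url = "https://api.scraperapi.com/account?" + urlencode(
    {"api_key": "YOUR_API_KEY"}  # placeholder
)
print(account_url)
# import requests; print(requests.get(account_url, timeout=70).json())
```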