
FAQ

Sections: General, Plans & Billing, JS Rendering, Geolocation & Residential IPs, Anti-bots & CAPTCHAs, Low Success Rates

How to use the API

ScraperAPI is a proxy solution for web scraping. It is designed to make scraping the web at scale as simple as possible by removing the hassle of finding high-quality proxies, rotating proxy pools, detecting bans, solving CAPTCHAs, managing geotargeting, and rendering JavaScript.

With ScraperAPI, you simply send a request to either our simple REST API interface or our proxy port, and we will return the HTML response from the target website. For more information on how to get started, check out our documentation.
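
For illustration, here is a minimal Python sketch of a request to the REST interface. The endpoint and the api_key/url parameters follow our documentation; the target URL is just a placeholder.

```python
import requests

# Minimal ScraperAPI request: pass your key and the target URL as query
# parameters and read back the HTML of the page.
payload = {
    "api_key": "YOUR_API_KEY",          # from your dashboard
    "url": "https://httpbin.org/html",  # placeholder target page
}
# Allow the full 60 seconds so the API can retry internally before giving up.
response = requests.get("https://api.scraperapi.com/", params=payload, timeout=60)

print(response.status_code)
print(response.text[:500])  # first 500 characters of the returned HTML
```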

What is an API credit?

We know that scraping is complex, and so is pricing. To properly reflect the effort it takes to scrape each site and make it easier for you to grow and scale, our pricing model is based on API credits. This also allows us to bring everything we have to bear on keeping Success Rates as high as possible and opens up all our features to you, rather than restricting them based on your plan.

Depending on the type of website you want to scrape and what parameters you need to use for a request, a different number of API Credits will be used for a single request.

For more details, please see our documentation.

Change Credit Card Details

You can change your card details at any time on the Billing page in your dashboard, or by contacting support, who will help you update them securely.

Getting a refund

We offer a 7-day, no-questions-asked refund policy. If you are unhappy with the service for any reason, contact support and we’ll refund you right away.

Increasing scraping speed

When you send a request to the API, we route it through our proxy pools, check the response for CAPTCHAs, bans, etc., and then either return the valid HTML response to you or keep retrying the request with different proxies for up to 60 seconds before returning a 500 status error.

The API often has higher latency than sending requests directly through a normal proxy: average latencies typically range from 4–12 seconds (depending on the website), and single requests can sometimes take up to 60 seconds. However, this is compensated for by the fact that our average success rate is around 98%.

If you would like to increase the volume of successful requests you can make in a given time period, we can increase your number of concurrent threads. Contact our sales team to enquire about increasing your concurrency limit.

If you would like to reduce the latency of each request, or reduce the long tail of requests taking 20–30 seconds, you can use our premium proxy pools by adding premium=true to your request, or contact our support team to see if they can increase the speed of your requests.


Page Elements Still Missing

To avoid rendering unnecessary images, tracking scripts, and other assets that would slow your requests down, the API doesn’t render everything on the page by default. Sometimes the skipped content includes data that you actually need. If you find yourself in such a situation, you can instruct the API to wait for a specific selector to appear on the page, via the wait_for_selector=x parameter, before we return the final rendered response to you.

Please note that you need to include the render=true parameter whenever you specify wait_for_selector; otherwise the API will ignore it.
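
As a sketch, assuming the same REST interface as above (the target URL and CSS selector are just examples):

```python
import requests

# Render the page in a headless browser and wait for a specific element
# to appear before the HTML is returned. wait_for_selector requires
# render=true, otherwise it is ignored.
payload = {
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/products",  # placeholder target
    "render": "true",
    "wait_for_selector": ".product-list",   # example CSS selector
}
response = requests.get("https://api.scraperapi.com/", params=payload, timeout=60)
print(response.text[:500])
```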

If you find yourself in need of help, you can contact our support team.

Pay-As-You-Go Option

Users on our Scaling, Enterprise or Custom plans have access to a Pay-As-You-Go option.

To protect you from surprise bills, you will have the option to set a monthly spending cap on your Pay-As-You-Go usage. Your service will not exceed this cap unless you change your settings on the billing page. This is a spending limit, not an amount you will be pre-charged.

Getting Failed Requests From The API

ScraperAPI routes your requests through proxy pools containing over 40 million proxies and retries each request for up to 60 seconds to get a successful response; even so, some of your requests will fail. You can expect 1–3% of your requests to fail, but you won’t be charged for these failed requests. If you configure your code to automatically retry failed requests (as in the sketch below), the retry will be successful in the majority of cases.
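
A minimal retry wrapper, for illustration; the function name and retry count are our own choices:

```python
import requests

API_ENDPOINT = "https://api.scraperapi.com/"

def fetch_with_retries(url: str, api_key: str, max_retries: int = 3) -> str:
    """Retry failed requests; a retry usually succeeds on a fresh proxy."""
    for _ in range(max_retries):
        response = requests.get(
            API_ENDPOINT,
            params={"api_key": api_key, "url": url},
            timeout=60,
        )
        if response.status_code == 200:
            return response.text
    raise RuntimeError(f"{url} still failing after {max_retries} attempts")
```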

If you are experiencing failure rates in excess of 10%, contact our support team, who will look at tuning the API to yield a higher success rate.

Unused Requests

At the moment, we don’t offer rolling over unused requests or credits: when your subscription renews, your request count is reset. If you want to increase or decrease the number of requests, please get in touch with our support team.

Mobile IPs

Our premium proxy pools contain mobile IPs; however, if you want to exclusively use mobile proxies, contact our support team, who will be able to create a custom plan for you.

JS Rendering Concurrency

The rendering concurrency (burst) limit is set to 10 requests/second by default. This burst limit controls the number of rendered requests you can start each second. For example, if each rendered request takes 25 seconds to complete and you are consistently sending 10 requests per second, you could have up to 250 rendered requests running concurrently at any given time (10 requests/sec × 25 seconds). If you need to handle more rendered requests concurrently (Enterprise users only), please contact our support team for assistance.
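
A sketch of client-side pacing under these numbers; the pool size and sleep interval are our own illustrative choices:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch_rendered(url: str, api_key: str) -> requests.Response:
    # One rendered request; may take ~25 s end to end.
    return requests.get(
        "https://api.scraperapi.com/",
        params={"api_key": api_key, "url": url, "render": "true"},
        timeout=60,
    )

def scrape_all(urls, api_key):
    # Start at most 10 rendered requests per second (the default burst limit);
    # with ~25 s per request, roughly 250 end up in flight at once.
    with ThreadPoolExecutor(max_workers=250) as pool:
        futures = []
        for url in urls:
            futures.append(pool.submit(fetch_rendered, url, api_key))
            time.sleep(0.1)  # throttle starts to 10 requests per second
        return [f.result() for f in futures]
```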

Check if the URL is allowed

Some websites have strict policies against web scraping. For example, Facebook and Instagram do not allow scraping of their pages. Similarly, pages that require a login may not be scraped. Make sure the website you're trying to scrape allows it before proceeding.

Solving embedded CAPTCHAs

Currently, the API doesn’t solve CAPTCHAs that are permanently embedded on the page, like those often found on forms or buttons to reveal personal information. You will need to use a dedicated CAPTCHA solver service to unlock these CAPTCHAs. On Enterprise Plans, we can implement this functionality for you upon request.

Enable JavaScript Rendering

To enable JavaScript rendering, simply add the render=true parameter to your request. The API will then route your request through a Chromium instance and render any JavaScript on the page before returning the HTML response to you.
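
If you use our proxy port rather than the REST endpoint, rendering can be enabled there too. The sketch below assumes the proxy host, port, and username-parameter convention from our documentation; verify them there before relying on this:

```python
import requests

# Proxy-port mode (sketch): parameters such as render are embedded in the
# proxy username. Host, port, and the username format are assumptions
# taken from the ScraperAPI docs.
proxies = {
    "https": "http://scraperapi.render=true:YOUR_API_KEY@proxy-server.scraperapi.com:8001",
}
response = requests.get(
    "https://example.com",  # placeholder target
    proxies=proxies,
    verify=False,  # the proxy re-signs TLS, so certificate checks are disabled
    timeout=60,
)
print(response.text[:500])
```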

Buying Individual IPs

Currently, we don’t have an option to purchase individual proxies from our pools.

City-Level Geotargeting

At the moment the API doesn’t support state- or city-level geotargeting with our proxy pools. However, we can implement this on request for Enterprise-level users.

Extra JavaScript Rendering Costs

All JS-rendered requests cost 10 API credits. However, if you use JS rendering with premium proxies, a request costs 25 API credits, and with ultra premium proxies it costs 75 API credits. We highly recommend using JS rendering only when you absolutely need it to extract your target data, as it increases latency and can reduce your success rate, which in turn reduces the volume of requests you can process through the API.
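
As a worked example of the figures above (the helper function is hypothetical, purely to make the arithmetic concrete):

```python
# Credit cost of a JS-rendered request, per the figures quoted above.
def js_render_credit_cost(premium: bool = False, ultra_premium: bool = False) -> int:
    if ultra_premium:
        return 75  # ultra premium proxies + JS rendering
    if premium:
        return 25  # premium proxies + JS rendering
    return 10      # JS rendering on standard proxies

# 10,000 rendered requests on premium proxies consume 250,000 credits:
print(10_000 * js_render_credit_cost(premium=True))
```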

How CAPTCHA solving works

After we receive an HTML response from your target website, we automatically run it through our ban and CAPTCHA detection algorithms. If the API detects a CAPTCHA, it retries the request with another IP and has the blocked IP unblocked in parallel. This ensures that you don’t have to wait until the CAPTCHA is solved before the request can be retried.

Bandwidth Based Pricing

Currently, we don’t offer bandwidth-based pricing. All our plans are based on the number of requests you make to the API each month.

Cancelling a Plan

Yes, you can cancel your subscription at any time in your dashboard or by contacting support; you will not be charged for cancelling.

Getting a CAPTCHA as successful response

We maintain a database of thousands of ban and CAPTCHA types that we use to detect whether a request contains a CAPTCHA or has been blocked by an anti-bot. If you are getting a CAPTCHA or an anti-bot message back as a successful status 200 response, just let our support team know and they will add this new CAPTCHA or anti-bot message to our database so it will be detected in the future, triggering the API to keep retrying the request until it gets the correct successful response.

Clicking Page Elements

The latest addition to our JS rendering solution, the JavaScript Rendering Instruction Set, allows you to interact with the page and perform actions such as clicking, inputting data, scrolling, and waiting for certain events to finish before the rendered response is returned to you. To find out more, please follow this link.

Concurrent Requests

Every ScraperAPI plan has a limited number of concurrent threads, which caps the number of requests you can make in parallel to the API (the API doesn’t accept batch requests). The more concurrent threads your plan has, the faster you can scrape a website. If you would like to increase the number of concurrent requests you can make with the API, contact our customer support team.

If your scraper is not utilizing the full concurrency limit of your subscription, first make sure its settings are correct and configured to use the exact number of threads your subscription allows.

If you are still not using all of your concurrent threads, something else on your machine might be blocking your requests. Check both your antivirus/firewall and your network for any issues. Another possible culprit is per-user resource limits on your machine.

You can raise per-user resource limits on both Linux and Windows machines to make sure there are no restrictions on resource usage for the accounts running your scraper. On Linux, this is done via "ulimit" settings in the file /etc/security/limits.conf; on Windows, via registry values. There are multiple guides on the internet explaining exactly how to do this.
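
For example, on Linux the relevant limits.conf entries might look like this (the username and values are illustrative):

```
# /etc/security/limits.conf: raise the open-file limit for user "scraper"
scraper  soft  nofile  65535
scraper  hard  nofile  65535
```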

Try geo-targeting

Certain websites, such as e-commerce stores and search engines, display different data to users based on their geolocation. Some domains even block visitors who connect from a proxy location different from the site's own. If you're experiencing low success rates with your scraping requests on these types of websites, geotargeting can help.

Our API offers geotargeting functionality that allows you to use proxies from a specific country to retrieve the correct data from the website. This can be especially helpful when scraping websites that display different data based on geolocation.

By using geotargeting, you can ensure that your scraping requests are coming from the required country and increase your chances of success.

If you're interested in using geotargeting, we offer a variety of locations to choose from. For more information on how to set up geotargeting, which locations are included in each plan, and how to get started, please visit this link. Give it a try and see if it improves your success rates on geolocation-dependent websites.
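
A sketch using the country_code parameter (documented under Geolocation & Residential IPs below); the country and target URL are just examples:

```python
import requests

# Route the request through German proxies so the site serves German results.
payload = {
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com",  # placeholder target
    "country_code": "de",
}
response = requests.get("https://api.scraperapi.com/", params=payload, timeout=60)
print(response.status_code)
```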

Use your own custom headers (Advanced)

Some websites may block requests from known scraper user agents. Using your own custom headers can help disguise your requests as normal web traffic and avoid getting blocked. By setting keep_headers=true and sending the API the headers you want to use, you can customize your requests and potentially increase your success rates.
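
A sketch of this, with illustrative header values:

```python
import requests

# keep_headers=true tells the API to forward your own headers instead of
# its managed header set. The header values below are just examples.
payload = {
    "api_key": "YOUR_API_KEY",
    "url": "https://httpbin.org/headers",  # echoes received headers, handy for testing
    "keep_headers": "true",
}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-US,en;q=0.9",
}
response = requests.get(
    "https://api.scraperapi.com/", params=payload, headers=headers, timeout=60
)
print(response.text)
```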

For more information on how to use custom headers and our other features, please see our documentation.

If you're new to custom headers, we don't recommend this approach. Custom headers are typically employed in particular situations, such as making subsequent requests with a session_number, or when you need to send specific headers to obtain particular results from a website. Please note that when you use your own custom headers, our header system is overridden, which may occasionally lead to lower success rates. Nonetheless, if you need help setting this up, let us know; we're always here to support you through the process!

Free Plan & 7-Day Free Trial

ScraperAPI offers a free plan of 1,000 free API credits per month (with a maximum of 5 concurrent connections) for small scraping projects. For the first 7 days after you sign up, you will have access to 5,000 free requests so you can test the API at a larger scale. If you need additional API credits for testing purposes, please contact support.


What happens if I run out of credits before the end of my current subscription?

  • If you are on a Student, Hobby, Hobby Legacy, Startup, Business, or Professional monthly plan:

Starting November 4, 2025, you will have two options if you reach 100% of your credit usage before your plan renews:

  1. Auto-Upgrade: You can seamlessly upgrade to the next plan tier, which often provides a better price-per-credit.

  2. Get a Custom Plan: If a standard upgrade isn’t the right fit, you can contact our support team to create a custom plan tailored to your needs.

  • If you are on a Hobby, Startup, Business, or Professional annual plan:

Starting at your next renewal, if you use all your annual credits before your year is over, you will have new, more flexible options. Instead of the old auto-renewal, you can choose to:

  1. Re-subscribe to Your Annual Plan: Start a new 12-month plan on your upcoming billing date to get a fresh batch of annual credits.

  2. Upgrade to a Higher Monthly Plan: Move to a flexible monthly plan that better suits your new usage level.

  3. Talk to Us: Contact our support team to discuss a custom solution.

  • If you are on a Scaling monthly or annual plan:

Starting November 4, 2025, you have new options for when you run out of credits. On January 5, 2026, these new options will fully replace the old "Renew Now" and "Auto-Renew" features.

  1. Continue with Pay-As-You-Go: A pop-up will give you the option to continue service by using extra credits at a fixed, predictable rate.

  2. Contact Your Sales Representative: You can get in touch with our sales team to discuss a custom enterprise plan.

  • If you are on an Enterprise or Custom plan:

Starting November 4, 2025, we're introducing a Pay-As-You-Go model. On January 5, 2026, this new feature will replace the old "Renew Now" and "Auto-Renew" options. If you reach your credit limit mid-cycle, a pop-up will allow you to continue using extra credits at a fixed, predictable rate.

  • All plans:

Users who registered before November 4, 2025 will continue to have access to the auto-renewal feature until January 5, 2026.

Residential IPs

We use residential proxies as fallback proxies within our standard proxy pools if a request has repeatedly failed. However, if you would like to exclusively use our residential proxy pools then you can enable this functionality by adding premium=true to the requests you send to the API.

Check your timeout setting

The connection timeout is the maximum time allowed for the API to attempt a request. With ScraperAPI, you should set it to 60 seconds to give the API enough time to retry your request with different proxies until it returns a successful response (or an error code if the request was unsuccessful). Keep in mind that setting the timeout lower than 60 seconds will speed up each request but may lower your overall success rate.

Check if it's a general issue

Are you experiencing low success rates with your API requests? Don't worry, we've got some solutions for you to try!

Before making any changes, make sure that the page you want to scrape doesn't have any issues and that the URL is valid and correct. If you can access the page and still have issues, then the next thing you want to check is if the issue is with the API or just your requests. You can check the status board to see if there are any ongoing issues. If the API is experiencing issues, wait until they are resolved before attempting your requests again.

Check what type of protection your target domain is using

Inspect your target URL to see whether the site uses bot blockers such as Cloudflare, DataDome, or CloudFront. These make a domain more difficult to scrape, and you may need to use our premium or ultra premium proxies.

Try our Async scraper

If you're experiencing low success rates with synchronous API requests, consider trying our Async scraper. This product is designed to improve success rates by letting you submit scraping jobs, rather than making requests to our endpoint and waiting for a response.

The Async scraper works on your requested URLs until it manages to get a successful response back (when applicable) and returns the data to you. This method is especially helpful when scraping difficult sites that have protection measures in place and may require more time to successfully scrape.

If a high success rate is more important to you than response time (for example, if you need a set of data periodically), then Async scraping is the recommended way to scrape pages. Give it a try and see the difference it can make in the success rate of your scraping requests.
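
A sketch of the submit-and-poll flow, assuming the Async scraper's documented endpoint and response fields (apiKey, statusUrl, status, response.body); please confirm these in the Async scraper documentation:

```python
import time

import requests

# Submit a scraping job instead of waiting on a synchronous request.
job = requests.post(
    "https://async.scraperapi.com/jobs",
    json={"apiKey": "YOUR_API_KEY", "url": "https://example.com"},
    timeout=60,
).json()

# Poll the job's status URL until it finishes. Production code should
# back off between polls and cap the number of attempts.
while True:
    status = requests.get(job["statusUrl"], timeout=60).json()
    if status["status"] == "finished":
        print(status["response"]["body"][:500])
        break
    time.sleep(5)
```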

By following these steps, you can improve your success rate and get the data you need from your API requests. If you continue to experience issues, don't hesitate to reach out to our support team for assistance.

Getting Distil, Cloudflare, etc. bans.

Along with constantly fine-tuning our proxy and header pools, we’ve built numerous anti-bot bypasses into the API that let it get past most challenges thrown by anti-bots. Generally, your success rates will be slightly lower on sites that make heavy use of anti-bots; however, you should still be able to scrape such sites reliably at scale with the API.

If the API is completely blocked by a site, or you are experiencing a very low success rate (under 70%), please let our support team know about the issue.

This is generally due to the site using a combination of two or more anti-bots in tandem, or a customised version of an anti-bot with higher security settings that stops the general bypass from working. In cases like these, one of our engineers will put a custom bypass in place for you if you contact our support team.


Geolocation & Residential IPs

Business and Enterprise Plan users can geotarget their requests to the following countries (Hobby and Startup Plan users can only use US and EU geotargeting) by using the country_code flag in their request.

Country Code   Country            Plans
us             United States      Hobby Plan and higher
eu             Europe (general)   Hobby Plan and higher
ca             Canada             Business Plan and higher
uk             United Kingdom     Business Plan and higher
de             Germany            Business Plan and higher
fr             France             Business Plan and higher
es             Spain              Business Plan and higher
br             Brazil             Business Plan and higher
mx             Mexico             Business Plan and higher
in             India              Business Plan and higher
jp             Japan              Business Plan and higher
cn             China              Business Plan and higher
au             Australia          Business Plan and higher
be             Belgium            Business Plan and higher

Other countries are available to Enterprise customers upon request.
