Low Success Rates

Check if it's a general issue

Before making changes to your requests, make sure the page you want to scrape doesn't have any issues and that the URL is valid. If you can access the page in a browser but still have issues scraping it, the next thing to check is whether the issue is with the API rather than your request. Head over to the status board to check for any ongoing incidents. If the API is experiencing issues, wait for them to be resolved before retrying.
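As a quick sanity check, you can fetch the target URL directly (without the API) and confirm it responds at all. A minimal sketch in Python; the target URL is a placeholder:

```python
import requests

# Sanity check: request the target URL directly (no API, no proxy) to
# confirm the URL is valid and the page is reachable before debugging
# your ScraperAPI requests. The URL below is a placeholder.
url = "https://example.com/products"

try:
    response = requests.get(url, timeout=30)
    print(f"Direct request returned HTTP {response.status_code}")
except requests.RequestException as exc:
    print(f"The URL itself is unreachable: {exc}")
```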

Check what type of protection your target domain is using

Inspect your target URL to determine whether it is protected by services such as Cloudflare, PerimeterX, DataDome, Akamai, or AWS WAF. We have in-house bypasses for most of these protections, though higher-security implementations may require the use of our ultra premium proxies.
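One way to get a first hint is to look at the response headers and cookies for vendor fingerprints, for example cf-ray (Cloudflare), a datadome cookie (DataDome), _abck or ak_bmsc cookies (Akamai), _px cookies (PerimeterX), or aws-waf-token (AWS WAF). The sketch below treats these markers as rough heuristics; they are not exhaustive and vendors change them over time:

```python
import requests

# Common response fingerprints left by popular bot-protection vendors.
# These are heuristics only; absence of a marker proves nothing.
FINGERPRINTS = {
    "Cloudflare": ["cf-ray", "cloudflare"],
    "DataDome": ["datadome", "x-datadome"],
    "Akamai": ["_abck", "ak_bmsc"],
    "PerimeterX": ["_px", "x-px"],
    "AWS WAF": ["aws-waf-token", "x-amzn-waf"],
}

url = "https://example.com"  # placeholder target
response = requests.get(url, timeout=30)

# Flatten headers and cookie names into one lowercase string to search.
haystack = " ".join(
    [f"{key}: {value}" for key, value in response.headers.items()]
    + list(response.cookies.keys())
).lower()

for vendor, markers in FINGERPRINTS.items():
    if any(marker in haystack for marker in markers):
        print(f"Possible protection detected: {vendor}")
```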

Check if the URL is allowed

Some websites have strict anti-scraping policies. For example, platforms like Meta and X explicitly prohibit scraping. Make sure the website you're trying to scrape allows it before proceeding. Additionally, we only support the scraping of publicly available data. Any content that requires authentication or is hidden behind a login is out of scope.

Check your timeout settings

connection_timeout defines how long the API is allowed to attempt a request. With ScraperAPI, this should be set to at least 60 seconds (70 recommended) to allow enough time for retries with different proxies until a successful response is returned or the request ultimately fails. Lowering the timeout below 60 seconds will make each request return faster, but it may also decrease your overall success rate.
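For example, with Python's requests library you can apply the recommended 70-second timeout on the client side. A minimal sketch; the API key and target URL are placeholders:

```python
import requests

payload = {
    "api_key": "YOUR_API_KEY",               # your ScraperAPI key
    "url": "https://example.com/products",   # placeholder target URL
}

# Allow at least 60 seconds (70 recommended) so the API has time to
# retry the request through different proxies before giving up.
response = requests.get(
    "https://api.scraperapi.com/",
    params=payload,
    timeout=70,  # client-side timeout in seconds
)
print(response.status_code)
```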

Try geo-targeting

Some websites, like e-commerce platforms and search engines, show different content depending on where the user is located. Other websites block visitors from outside their region. Sending geotargeted requests can improve success rates when scraping location-sensitive websites.

We support request geotargeting, which allows you to route requests through proxies from a specific country (or region), so you don't get blocked and receive the correct, region-specific data. Supported countries and regions vary by plan. Visit this page for the full list of supported countries and setup instructions.
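For example, to route a request through proxies in the United States, add the country_code parameter. A minimal sketch; the API key and target URL are placeholders:

```python
import requests

payload = {
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/products",  # placeholder target URL
    "country_code": "us",  # route the request through US proxies
}

response = requests.get(
    "https://api.scraperapi.com/",
    params=payload,
    timeout=70,
)
print(response.status_code)
```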

Use your own custom headers (Advanced)

Some websites may block requests from known scraper User Agents. Using custom headers can help disguise your requests as normal web traffic and reduce the likelihood of getting blocked. By enabling keep_headers=true and passing your own headers, you can customize your requests and potentially improve your success rates. For more information on how to use custom headers, visit this page.
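A minimal sketch of this: enable keep_headers=true and send your own headers alongside the request. The API key, target URL, and header values below are illustrative placeholders, not recommended strings:

```python
import requests

payload = {
    "api_key": "YOUR_API_KEY",
    "url": "https://example.com/products",  # placeholder target URL
    "keep_headers": "true",  # forward the custom headers to the target site
}

# Example custom headers that mimic a regular browser session.
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
    ),
    "Accept-Language": "en-US,en;q=0.9",
}

response = requests.get(
    "https://api.scraperapi.com/",
    params=payload,
    headers=headers,
    timeout=70,
)
print(response.status_code)
```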

Try our Async scraper

If you're experiencing low success rates with the synchronous API (and response time is not critical), consider trying our Async API. Instead of holding a live connection open, async requests are submitted as jobs in a queue. They are then retried, for up to 24 hours, until the requested pages are successfully scraped.

This method is helpful when scraping difficult sites that have robust protection measures in place and may require more time and effort to scrape successfully.

If a high success rate is more important to you than response time (for example, if you need a set of data periodically), then we recommend trying out the Async API.
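Below is a minimal sketch of the submit-and-poll flow, assuming the Async API's jobs endpoint and the id, status, and statusUrl fields described in its documentation; the API key and target URL are placeholders:

```python
import time

import requests

# Submit a scrape job instead of holding a live connection open.
job = requests.post(
    "https://async.scraperapi.com/jobs",
    json={
        "apiKey": "YOUR_API_KEY",
        "url": "https://example.com/products",  # placeholder target URL
    },
    timeout=30,
).json()

# Poll the job's status URL until the API reports it finished.
# (A production version should also handle failed jobs and add a cap
# on polling time.)
while job["status"] != "finished":
    time.sleep(5)
    job = requests.get(job["statusUrl"], timeout=30).json()

# The scraped page body is included in the finished job payload.
print(job["response"]["body"][:500])
```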
