Concurrent Requests

Every ScraperAPI plan includes a limited number of concurrent threads, which caps the number of requests you can make in parallel to the API (the API does not accept batch requests). The more concurrent threads your plan has, the faster you can scrape a website. If you would like to increase the number of concurrent requests you can make with the API, contact our customer support team.
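One common way to stay at, but not over, your plan's thread limit is to run all requests through a fixed-size worker pool. The sketch below uses Python's `concurrent.futures.ThreadPoolExecutor` with `max_workers` set to an assumed 5-thread plan; the `fetch()` function is a stand-in that only tracks how many calls run at once, and in a real scraper you would replace its body with an HTTP request through the API, passing your own API key and target URL.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Assumption: a plan with 5 concurrent threads. Set this to the exact
# limit of your subscription.
MAX_CONCURRENCY = 5

_lock = threading.Lock()
_in_flight = 0
peak_in_flight = 0

def fetch(url):
    """Stand-in for the real API call; records how many run in parallel.

    In production, replace the sleep with a request to the API for `url`
    (e.g. via the requests library), using your own API key.
    """
    global _in_flight, peak_in_flight
    with _lock:
        _in_flight += 1
        peak_in_flight = max(peak_in_flight, _in_flight)
    time.sleep(0.05)  # simulate network latency
    with _lock:
        _in_flight -= 1
    return url

urls = [f"https://example.com/page/{i}" for i in range(20)]

# Capping the pool at the plan's limit makes excess requests queue
# locally instead of being rejected by the API for over-concurrency.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
    futures = [pool.submit(fetch, u) for u in urls]
    results = [f.result() for f in as_completed(futures)]
```

With this pattern, submitting more URLs than you have threads is safe: the executor never runs more than `max_workers` requests at the same time, so the API sees at most your plan's concurrency.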

If your scraper is not using the full concurrency limit of your subscription, first check its configuration and make sure it is set to use exactly the number of threads your subscription allows.

If you are still not using all of your concurrent threads, something else on your machine may be blocking your requests. Check both your antivirus/firewall and your network for issues. Another possible culprit is a per-user resource limit on your machine.

You can raise these resource limits so that they do not restrict the users set up on your machines. On Linux, adjust "ulimit" by modifying the file "/etc/security/limits.conf"; on Windows, the equivalent limits are adjusted through Registry values. Multiple guides on how exactly this can be done are available on the Internet.
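On Linux, the limit most likely to throttle a scraper is the open-file (socket) limit. A minimal sketch of checking and raising it for the current session, plus example limits.conf lines (the username and values are illustrative assumptions, not recommended settings):

```shell
# Show the current soft limit on open files for this user
ulimit -n

# Raise the soft limit up to the hard limit for the current session only
ulimit -S -n "$(ulimit -H -n)"

# For a permanent change, add lines like these to /etc/security/limits.conf
# ("youruser" and 65535 are placeholders; tune them for your workload):
#   youruser  soft  nofile  65535
#   youruser  hard  nofile  65535
```

Session-level `ulimit` changes are lost when the shell exits, so long-running scrapers should rely on the limits.conf entries (which take effect at the next login).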
