Our standard proxy pools include millions of proxies from over a dozen ISPs and should be sufficient for the vast majority of scraping jobs. However, for a few particularly difficult-to-scrape sites, we also maintain a private internal pool of residential and mobile IPs. This pool is available to all paid users.
Requests through our premium residential and mobile pool are charged at 10 times the normal rate (every successful request counts as 10 API credits against your monthly limit). Requests that use both JavaScript rendering and our premium proxy pool are charged at 25 times the normal rate (25 API credits per successful request). To send a request through our premium proxy pool, set the query parameter premium=true.
We also offer a higher premium tier for really tough targets, such as LinkedIn. You can access these pools by adding the ultra_premium=true query parameter. These requests use 30 API credits against your monthly limit, or 75 when combined with JavaScript rendering. Please note that this is only available on our paid plans. Requests with the ultra_premium=true parameter are cached by default to enhance performance and efficiency. For detailed information about how caching works and its benefits, please refer to our Cached Results page.
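The credit multipliers above can be summarized in a small helper. This is a hypothetical sketch, not part of the API or SDK: it assumes a plain request costs 1 credit and only covers the combinations listed on this page (rendering-only pricing is documented elsewhere).

```python
def credits_per_request(premium=False, ultra_premium=False, render=False):
    # Credit values taken from this page; base cost of 1 is an assumption.
    if ultra_premium:
        return 75 if render else 30
    if premium:
        return 25 if render else 10
    return 1

print(credits_per_request(premium=True))                     # 10
print(credits_per_request(premium=True, render=True))        # 25
print(credits_per_request(ultra_premium=True))               # 30
print(credits_per_request(ultra_premium=True, render=True))  # 75
```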
API REQUEST
import requests

payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'premium': 'true'}
r = requests.get('http://api.scraperapi.com', params=payload)
print(r.text)

# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.scraperapi.com?api_key=APIKEY&url=' + url + '&premium=true']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.scraperapi.com/?api_key=APIKEY&url=' + url + '&premium=true', self.parse)
PROXY MODE
import requests

proxies = {"http": "http://scraperapi.premium=true:APIKEY@proxy-server.scraperapi.com:8001"}
r = requests.get('http://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)

# Scrapy users can likewise pass the proxy connection string in request meta.
# NB: Scrapy skips SSL verification by default.
# ...other scrapy setup code
start_urls = ['http://httpbin.org/ip']
meta = {"proxy": "http://scraperapi.premium=true:APIKEY@proxy-server.scraperapi.com:8001"}

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(url, callback=self.parse, meta=meta)
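In proxy mode, flags such as premium=true are embedded in the proxy username, separated by dots. As an illustration only, a small hypothetical helper (proxy_url is not part of any SDK) can assemble that connection string:

```python
def proxy_url(api_key, **flags):
    # Build "scraperapi.<flag>=<value>..." as the proxy username,
    # followed by the API key as the password.
    parts = ['scraperapi'] + [f'{k}={v}' for k, v in flags.items()]
    return f"http://{'.'.join(parts)}:{api_key}@proxy-server.scraperapi.com:8001"

print(proxy_url('APIKEY', premium='true'))
# http://scraperapi.premium=true:APIKEY@proxy-server.scraperapi.com:8001
```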
SDK Method
from scraperapi_sdk import ScraperAPIClient

client = ScraperAPIClient('APIKEY')
result = client.get(url='http://httpbin.org/ip', premium=True).text
print(result)

# Scrapy users can simply replace the urls in their start_urls and parse function
# Note for Scrapy, you should not use DOWNLOAD_DELAY and
# RANDOMIZE_DOWNLOAD_DELAY; these will lower your concurrency and are not
# needed with our API
# ...other scrapy setup code
start_urls = [client.scrapyGet(url='http://httpbin.org/ip', premium=True)]

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(client.scrapyGet(url='http://httpbin.org/ip', premium=True), self.parse)