To reuse the same proxy across multiple requests, set the session_number parameter to a unique integer for each session you want to maintain (e.g. session_number=123). Every request sent with that session number will then go out through the same proxy. To start a new session, simply pass a new integer to the API; the session value can be any integer. Sessions expire 15 minutes after their last use.
API REQUEST
import requests

payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'session_number': '123'}
r = requests.get('http://api.scraperapi.com', params=payload)
print(r.text)

# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.scraperapi.com?api_key=APIKEY&url=' + url + '&session_number=123']
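To see the session in action, you can make two calls with the same session_number and compare the IPs that httpbin reports. The snippet below is a small illustrative sketch rather than part of the official example; it assumes httpbin's /ip endpoint returns a JSON object with an "origin" field and that both calls happen within the 15-minute session window.

import requests

payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'session_number': '123'}

# Two requests with the same session_number should be routed through the
# same proxy, so httpbin should report the same origin IP for both.
first = requests.get('http://api.scraperapi.com', params=payload).json()
second = requests.get('http://api.scraperapi.com', params=payload).json()
print(first['origin'], second['origin'])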
PROXY MODE
import requests

proxies = {
    "http": "http://scraperapi.session_number=123:APIKEY@proxy-server.scraperapi.com:8001"
}
r = requests.get('http://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)

# Scrapy users can likewise simply pass their API key in headers.
# NB: Scrapy skips SSL verification by default.
# ...other scrapy setup code
start_urls = ['http://httpbin.org/ip']
meta = {
    "proxy": "http://scraperapi.session_number=123:APIKEY@proxy-server.scraperapi.com:8001"
}

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(url, callback=self.parse, headers=headers, meta=meta)
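Because verify=False turns off certificate verification, the requests library prints an InsecureRequestWarning on every call. If you want to silence that warning, urllib3 exposes a disable_warnings helper; the snippet below is an optional, illustrative addition rather than part of the proxy mode example itself.

import requests
import urllib3

# Suppress the InsecureRequestWarning emitted when verify=False is used.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

proxies = {"http": "http://scraperapi.session_number=123:APIKEY@proxy-server.scraperapi.com:8001"}
r = requests.get('http://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)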
SDK METHOD
from scraperapi_sdk import ScraperAPIClient

client = ScraperAPIClient('APIKEY')
result = client.get(url='http://httpbin.org/ip', session_number=123).text
print(result)

# Scrapy users can simply replace the urls in their start_urls and parse function
# Note for Scrapy, you should not use DOWNLOAD_DELAY and
# RANDOMIZE_DOWNLOAD_DELAY, these will lower your concurrency and are not
# needed with our API
# ...other scrapy setup code
start_urls = [client.scrapyGet(url='http://httpbin.org/ip', session_number=123)]

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(client.scrapyGet(url='http://httpbin.org/ip', session_number=123), self.parse)
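Since any new integer starts a fresh session, one simple way to move to a new proxy after a failed request is to bump the session number and retry. The sketch below reuses the SDK client from the example above; the retry policy (three attempts, catching any exception) is an assumption for illustration, not behaviour defined by the SDK.

from scraperapi_sdk import ScraperAPIClient

client = ScraperAPIClient('APIKEY')

def fetch_with_rotation(url, session_number=123, max_attempts=3):
    # On each failed attempt, switch to a new session_number so the next
    # attempt goes out through a different proxy. Illustrative retry policy.
    for attempt in range(max_attempts):
        try:
            return client.get(url=url, session_number=session_number + attempt).text
        except Exception:
            continue
    raise RuntimeError('all attempts failed for ' + url)

print(fetch_with_rotation('http://httpbin.org/ip'))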