Proxy Port Method

To simplify implementation for users with existing proxy pools, we offer a proxy front-end to the API. The proxy takes your requests and passes them through to the API, which handles proxy rotation, CAPTCHAs, and retries.
Proxy mode is a light front-end for the API and offers the same functionality and performance as sending requests directly to the API endpoint.
The username for the proxy is scraperapi and the password is your API key.
"http://scraperapi:[email protected]:8001"
You can use the ScraperAPI proxy the same way as you would use any other proxy:
import requests

# Route both HTTP and HTTPS traffic through the ScraperAPI proxy
proxies = {
    "http": "http://scraperapi:[email protected]:8001",
    "https": "http://scraperapi:[email protected]:8001",
}

# verify=False is required so the proxy can direct your request through the API
r = requests.get('http://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)
# Scrapy users can likewise route requests through the proxy by setting
# the "proxy" key in the request's meta.
# NB: Scrapy skips SSL verification by default.
import scrapy

class MySpider(scrapy.Spider):
    # ...other scrapy setup code
    start_urls = ['http://httpbin.org/ip']
    meta = {
        "proxy": "http://scraperapi:[email protected]:8001"
    }

    def parse(self, response):
        # ...your parsing logic here
        yield scrapy.Request(url, callback=self.parse, meta=self.meta)
Note: So that we can properly direct your requests through the API, your code must be configured not to verify SSL certificates.
To enable extra functionality while using the API in proxy mode, you can pass parameters to the API by appending them to the username, separated by periods.
For example, to enable JavaScript rendering for a request, the username would be scraperapi.render=true:
"http://scraperapi.render=true:[email protected]:8001"
Multiple parameters can be included by separating them with periods; for example:
"http://scraperapi.render=true.country_code=us:[email protected]:8001"
