Rendering JavaScript
Learn to scrape JavaScript-rendered pages using ScraperAPI in Python. Enable headless browser rendering with render=true for dynamic content, SPAs, and JS-heavy sites.
If you are crawling a page that needs its JavaScript rendered before the data you want appears, we can fetch it for you using a headless browser.
To render JavaScript, simply set render=true and we will use a headless Google Chrome instance to fetch the page. This feature is available on all plans.
Pass the JavaScript rendering parameter within the URL:
API REQUEST
import requests
payload = {'api_key': 'APIKEY', 'url':'https://httpbin.org/ip', 'render': 'true'}
r = requests.get('https://api.scraperapi.com', params=payload)
print(r.text)
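# Note: rendered requests take longer than plain fetches, and the target URL should be
# URL-encoded so its own query string is not mixed up with ScraperAPI's parameters.
# The helper below is a minimal sketch (function names, retry count and timeout are
# illustrative choices, not part of the API):
from urllib.parse import urlencode

def scraperapi_url(url, render=True):
    """Build a ScraperAPI request URL with the target URL safely encoded."""
    params = {'api_key': 'APIKEY', 'url': url, 'render': str(render).lower()}
    return 'https://api.scraperapi.com/?' + urlencode(params)

def fetch_rendered(url, retries=3):
    """Fetch a JS-rendered page, retrying a few times since rendering can occasionally fail."""
    for attempt in range(retries):
        r = requests.get(scraperapi_url(url), timeout=70)
        if r.status_code == 200:
            return r.text
    r.raise_for_status()

print(fetch_rendered('https://httpbin.org/ip'))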
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
# 'url' is the target page you want to scrape
start_urls = ['https://api.scraperapi.com/?api_key=APIKEY&url=' + url + '&render=true']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('https://api.scraperapi.com/?api_key=APIKEY&url=' + url + '&render=true', self.parse)

PROXY MODE
import requests
proxy_url = "http://scraperapi.render=true:<YOUR_API_KEY>@proxy-server.scraperapi.com:8001"
proxies = {
"http": proxy_url,
"https": proxy_url
}
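# Note: SSL verification is disabled below (verify=False) because requests are tunneled through the ScraperAPI proxy.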
r = requests.get('https://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)
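# With verify=False, the requests library emits an InsecureRequestWarning on every call.
# Optional sketch for silencing it via urllib3 (installed alongside requests):
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)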
# Scrapy users can likewise route requests through the proxy by setting the 'proxy' meta key.
# NB: Scrapy skips SSL verification by default.
# ...other scrapy setup code
start_urls = ['http://httpbin.org/ip']
meta = {
    "proxy": "http://scraperapi.render=true:APIKEY@proxy-server.scraperapi.com:8001"
}

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(url, callback=self.parse, meta=meta)

SDK METHOD
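# A minimal sketch only: the import path and get() signature below assume the scraper_api
# Python SDK and are not confirmed by this page - check the README of the SDK version you
# install before relying on them.
from scraper_api import ScraperAPIClient

client = ScraperAPIClient('APIKEY')
# render=True mirrors the render=true API parameter used above
result = client.get(url='https://httpbin.org/ip', render=True).text
print(result)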
Pass the parameter in the headers:
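The snippet below is only a sketch of the header-based form: the header name 'X-Sapi-Render' is an assumption used for illustration and is not confirmed by this page, so check the headers documentation for the exact name.
import requests

# NOTE: 'X-Sapi-Render' is a hypothetical header name used for illustration only.
headers = {'X-Sapi-Render': 'true'}
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip'}
r = requests.get('https://api.scraperapi.com', params=payload, headers=headers)
print(r.text)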