Cached Results | Python
Learn how to use ScraperAPI’s caching feature in Python to boost speed, success rate, and efficiency. Cached results update every 10 mins for fresh data.
ScraperAPI's caching system unlocks a new level of efficiency. By leveraging advanced decision-making mechanisms, we maintain an extensive store of cached data that is ready to be served on demand. When you request a page for which a cached response is available, you receive the cached data, giving you much faster access to the information you need with a 100% success rate.
Why is this good?
Difficult Pages: Perfect for pages that are challenging to scrape.
10-Minute Updates: Cached values are never more than 10 minutes old.
Guaranteed Success: 100% success rate for cached results.
Faster Response Times: Retrieve data quicker from cached results.
Fewer Retries: Fewer retries are needed to serve back a response.
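Using the cache requires nothing extra on your side: any standard API request is automatically eligible, and a cached copy is served whenever one is available. A minimal sketch (with a placeholder API key) showing how such a request is composed; here the request is only prepared, not sent, so you can inspect the final URL:

```python
import requests

# Placeholder credentials; substitute your own ScraperAPI key.
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip'}

# Prepare the request to inspect the composed URL without sending it.
req = requests.Request('GET', 'https://api.scraperapi.com', params=payload).prepare()
print(req.url)

# To actually fetch (a cached copy is served automatically when available):
# r = requests.Session().send(req)
# print(r.text)
```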
What's in it for you?
🚀 Increased Efficiency: Save time and resources by reducing the need to scrape the same page multiple times.
✅ Improved Reliability: Enhance the reliability of your scraping tasks with consistent and timely data retrieval.
Advanced Use-Case Scenario
For cases where real-time data is required, you can ensure the API serves uncached data by adding the cache_control=no-cache parameter to the request, as shown below:
import requests

payload = {
    'api_key': 'APIKEY',              # your ScraperAPI key
    'url': 'https://httpbin.org/ip',  # target page to scrape
    'ultra_premium': 'true',
    'cache_control': 'no-cache',      # bypass the cache and force a fresh scrape
}
r = requests.get('https://api.scraperapi.com', params=payload)
print(r.text)
We tag cached responses with the sa-from-cache: 1 response header, making it easy to distinguish between cached and non-cached responses.
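A small sketch of how you might act on that header in code; the helper below simply checks for sa-from-cache: 1, and the example header dictionaries are illustrative, not real responses:

```python
def served_from_cache(headers):
    # Cached ScraperAPI responses carry the sa-from-cache: 1 response header.
    return headers.get('sa-from-cache') == '1'

# Illustrative header sets:
print(served_from_cache({'sa-from-cache': '1'}))  # cached response
print(served_from_cache({}))                      # freshly scraped response
```

With the requests library, you would pass `r.headers` (a case-insensitive dict) straight into the helper after a call to the API.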
Future Enhancements
We're always looking to improve. Stay tuned for the upcoming max_age option, which will give you even more control over your caching preferences.