July 2025
Use the LangChain ScraperAPI package
Building your own LLM application with LangChain? Then you should use the LangChain ScraperAPI package to access web data without getting blocked.
What you can do
- ScraperAPITool: grab the HTML, text, or Markdown of any web page
- ScraperAPIGoogleSearchTool: get structured Google Search SERP data
- ScraperAPIAmazonSearchTool: get structured Amazon product-search data
Installation
pip install -U langchain-scraperapi
Setup
Create an account at https://www.scraperapi.com/ and get an API key, then set it as an environment variable:
import os
os.environ["SCRAPERAPI_API_KEY"] = "your-api-key"

Learn more in the official LangChain ScraperAPI GitHub repo, or read our guide on how to integrate it.
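Under the hood, these tools wrap ScraperAPI's HTTP endpoint, which takes your API key and the target page as query parameters. As a rough sketch of the equivalent raw request (assuming the standard api.scraperapi.com endpoint with `api_key` and `url` parameters; see the official API docs for the full parameter list):

```python
import os
from urllib.parse import urlencode

def build_scrape_url(target_url: str) -> str:
    """Build a ScraperAPI request URL for a target page."""
    params = {
        "api_key": os.environ["SCRAPERAPI_API_KEY"],  # set during Setup above
        "url": target_url,
    }
    return "https://api.scraperapi.com/?" + urlencode(params)

os.environ["SCRAPERAPI_API_KEY"] = "your-api-key"
print(build_scrape_url("https://example.com"))
```

Fetching the resulting URL returns the rendered page; the LangChain tools handle this round trip for you.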
Introduction to ScraperAPI Crawler v1.0
Want to crawl multiple linked pages without writing your own crawler? Our Crawler handles link discovery, scraping, retries, and webhook delivery for you. Just define the start URL, a link pattern, and a crawl budget; we take care of the rest.
Getting Started
No installation needed! Just send a POST request to https://crawler.scraperapi.com/job
Example Payload
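A minimal sketch of submitting a job with Python's standard library. The field names here (start_url, link_pattern, max_pages, webhook_url) are illustrative assumptions, not the Crawler's actual schema; check the guide below for the real payload format and authentication details.

```python
import json
import urllib.request

# Illustrative payload only -- every field name is an assumption
# for this sketch; consult the Crawler guide for the real schema.
payload = {
    "start_url": "https://example.com/blog",          # where the crawl begins
    "link_pattern": "https://example.com/blog/*",     # which links to follow
    "max_pages": 100,                                 # crawl budget
    "webhook_url": "https://yourapp.com/webhook",     # where results stream to
}

req = urllib.request.Request(
    "https://crawler.scraperapi.com/job",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},     # auth omitted; see the guide
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually submit the job
print(json.dumps(payload, indent=2))
```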
Once the job is running, it streams results to your webhook in real time and sends you a summary after the job is complete.
Check out the full guide and integration examples below: