Welcome
Using ScraperAPI is easy. Just send the URL you’d like to scrape to one of our APIs, along with your API key, and we will return the HTML of that page right back to you.
All ScraperAPI requests must be authenticated with an API key. Sign up to get one and include your unique API key with each request that you send to us.
You can use ScraperAPI to scrape web pages, API endpoints, images, documents, PDFs, or other files just as you would any other URL. Note: there is a 50MB request size limit.
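As a minimal sketch of the request format described above, the target URL and your API key travel as query parameters on the API endpoint. `YOUR_API_KEY` is a placeholder, and the helper name is illustrative:

```python
from urllib.parse import urlencode

def scraperapi_url(api_key: str, target_url: str) -> str:
    """Build a ScraperAPI request URL; api_key and url are the two required parameters."""
    return "https://api.scraperapi.com/?" + urlencode({"api_key": api_key, "url": target_url})

# A GET request to this URL returns the HTML of the target page:
print(scraperapi_url("YOUR_API_KEY", "https://example.com"))
```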
There are six ways you can send requests (GET, POST) to ScraperAPI:

- Via our API endpoint: https://api.scraperapi.com
- Via our Async API endpoint: https://async.scraperapi.com
- Via our proxy port: http://scraperapi:[email protected]:8001
- Via our Structured Data Endpoints: https://api.scraperapi.com/structured/
- Via our DataPipeline service: https://datapipeline.scraperapi.com/api/projects
- Via one of our SDKs (only available for some programming languages)
Choose the API that best fits your scraping needs.
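For the proxy-port option, here is a hedged sketch of how the credentials from the URL format above plug into a standard HTTP client. The helper name is illustrative, and the actual request (using the third-party `requests` package) is shown only as a comment:

```python
def scraperapi_proxies(api_key: str) -> dict:
    # Proxy-port mode: the username is the literal string "scraperapi"
    # and the password is your API key, matching the proxy URL format above.
    proxy = f"http://scraperapi:{api_key}@proxy-server.scraperapi.com:8001"
    return {"http": proxy, "https": proxy}

# With the third-party `requests` package, the dict plugs straight into a
# normal request, e.g.:
#   import requests
#   html = requests.get("https://example.com",
#                       proxies=scraperapi_proxies("YOUR_API_KEY"),
#                       timeout=70).text
```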
Important note: regardless of how you invoke the service, we highly recommend setting a 70-second timeout in your application to get the best possible success rates, especially on hard-to-scrape domains.
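The recommended timeout can be applied with only the Python standard library; the 70-second value comes from the note above, while the function and variable names are illustrative:

```python
import urllib.request
from urllib.parse import urlencode

RECOMMENDED_TIMEOUT = 70  # seconds, per the recommendation above

def fetch_via_scraperapi(api_key: str, target_url: str,
                         timeout: float = RECOMMENDED_TIMEOUT) -> str:
    """Fetch a page through the API endpoint, waiting up to `timeout` seconds."""
    query = urlencode({"api_key": api_key, "url": target_url})
    with urllib.request.urlopen("https://api.scraperapi.com/?" + query,
                                timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")
```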
In addition to our scraping APIs, we provide scalable solutions like DataPipeline for managing bulk jobs and scheduled tasks. Our Crawler makes it easy to extract, follow, and scrape URLs across a target domain, and our MCP Server allows you to plug ScraperAPI directly into your LLM, making scraping as easy as writing a prompt.
Speaking of LLMs, we also integrate with LangChain, giving AI agents the ability to browse web pages or pull Google and Amazon search results with just a few lines of code. For no-code and low-code workflows, you can also use our community n8n node to connect ScraperAPI directly into your automations. Our AI Parser allows you to extract structured data from virtually any website using flexible, schema-based definitions.
These tools give you everything you need to scrape, scale and power your projects with reliable data.