It’s no secret that businesses and individuals use web scrapers to collect public data from various websites. However, getting blacklisted is a common problem for those who don’t know how to keep their IP addresses from being blocked while scraping.
If you’re interested in collecting public data without getting blocked, we’ve put together a list of actions that will help you do it.
How do websites detect web crawlers?
Websites detect web crawlers and scraping tools by checking their IP addresses, user agents, browser parameters, and general behavior. If a website finds your activity suspicious, it starts serving CAPTCHAs, and eventually your requests get blocked once your crawler is detected.
Here are the main tips on how to crawl a website without getting blocked:
1. Check robots exclusion protocol
Before crawling or scraping any website, ensure your target allows data gathering from their page. Inspect the robots exclusion protocol (robots.txt) file and respect the website's rules.
Even when the web page allows crawling, be respectful and don't harm the page. Follow the rules outlined in the robots exclusion protocol, crawl during off-peak hours, limit requests coming from one IP address, and set a delay between them.
However, even if the website allows web scraping, you may still get blocked, so it’s important to follow other steps, too. For a more in-depth look at the topic, see our web scraping Python tutorial.
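To make this concrete, here’s a minimal sketch using Python’s built-in robotparser to check whether a path may be crawled before you fetch it; the target URL, path, and user agent string are placeholders.

```python
from urllib import robotparser

# Parse the target site's robots exclusion protocol file
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

user_agent = "MyCrawler/1.0"  # placeholder user agent for your crawler

# Check whether this user agent is allowed to fetch a specific path
if rp.can_fetch(user_agent, "https://example.com/products/"):
    print("Crawling this path is allowed")
else:
    print("robots.txt disallows this path - skip it")

# Some sites also declare a crawl delay; respect it if present
delay = rp.crawl_delay(user_agent)
if delay:
    print(f"Requested crawl delay: {delay} seconds")
```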
2. Use a proxy server
Web crawling would be hardly possible without proxies. Pick a reliable proxy service provider and choose between datacenter and residential IP proxies, depending on your task.
Using an intermediary between your device and the target website reduces IP address blocks, ensures anonymity, and allows you to access websites that might be unavailable in your region. For example, if you’re based in Germany, you may need to use a US proxy in order to access web content in the United States.
For the best results, choose a proxy provider with a large pool of IPs and a wide set of locations.
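As an illustration, here’s a minimal sketch of routing a request through a proxy with the Python requests library; the proxy host, port, and credentials are placeholders for whatever your provider gives you.

```python
import requests

# Placeholder proxy endpoint - substitute your provider's host, port, and credentials
proxies = {
    "http": "http://username:password@proxy.example.com:8080",
    "https": "http://username:password@proxy.example.com:8080",
}

# The request goes through the proxy, so the target site sees the proxy's IP, not yours
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```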
3. Rotate IP addresses
When you’re using a proxy pool, it’s essential that you rotate your IP addresses.
If you send too many requests from the same IP address, the target website will soon identify you as a threat and block your IP address. Proxy rotation makes you look like a number of different internet users and reduces your chances of getting blocked.
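Many providers rotate IPs for you behind a single endpoint, but if you manage your own pool, a sketch like the one below shows the basic idea: pick a different proxy for each request. The proxy addresses are placeholders.

```python
import random
import requests

# Placeholder pool - replace with the proxies from your provider
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
    "http://user:pass@proxy3.example.com:8080",
]

def fetch(url):
    # Pick a random proxy so consecutive requests come from different IPs
    proxy = random.choice(PROXY_POOL)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

for page in ["https://example.com/page/1", "https://example.com/page/2"]:
    print(page, fetch(page).status_code)
```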
4. Use real user agents
Most servers that host websites can analyze the headers of the HTTP requests that crawling bots make. One of these headers, the user agent, contains various information ranging from the operating system and software to the application type and its version.
Servers can easily detect suspicious user agents. Real user agents contain popular HTTP request configurations that are submitted by organic visitors. To avoid getting blocked, make sure to customize your user agent to look like an organic one.
Since every request made by a web browser contains a user agent, sending a large number of requests with the exact same one makes you easy to spot, so you should switch the user agent frequently.
It’s also important to use up-to-date, commonly seen user agents. If you’re making requests with a five-year-old user agent from a Firefox version that is no longer supported, it raises a lot of red flags. You can find public databases on the internet that show which user agents are currently the most popular.
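As a sketch, the example below rotates through a small list of common desktop user agents with the requests library; the strings are illustrative and go stale quickly, so refresh them from an up-to-date public list.

```python
import random
import requests

# Illustrative user agent strings - check a public database for current, popular ones
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

def fetch(url):
    # Send a different, realistic-looking User-Agent header on each request
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=10)

response = fetch("https://example.com")
print(response.request.headers["User-Agent"])
```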
5. Set your fingerprint right
Anti-scraping mechanisms are getting more sophisticated, and some websites use Transmission Control Protocol (TCP) or IP fingerprinting to detect bots.
When you scrape the web, your connection exposes various TCP/IP parameters. These parameters are set by the end user’s operating system or device. If you’re wondering how to prevent getting blocked while scraping, make sure these parameters stay consistent with the browser and device you claim to be.
6. Beware of honeypot traps
Honeypots are links placed in a page's HTML code. They are invisible to organic users but still present in the markup, so web scrapers pick them up. Since only robots would follow such links, websites use honeypots to identify and block web crawlers.
Because setting up honeypots requires a relatively large amount of work, this technique is not widely used. However, if your requests get blocked and your crawler detected, be aware that your target might be using honeypot traps.
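One practical precaution, sketched below with BeautifulSoup, is to skip links hidden from human visitors via inline styles or the hidden attribute. Real honeypots can be hidden in other ways (external CSS, off-screen positioning), so treat this as a rough filter rather than a complete defense.

```python
from bs4 import BeautifulSoup

html = """
<a href="/products">Products</a>
<a href="/trap" style="display:none">Hidden trap</a>
<a href="/trap2" hidden>Another trap</a>
"""

soup = BeautifulSoup(html, "html.parser")

def looks_hidden(tag):
    # Rough heuristic: treat links hidden via inline CSS or the hidden attribute as traps
    style = (tag.get("style") or "").replace(" ", "").lower()
    return tag.has_attr("hidden") or "display:none" in style or "visibility:hidden" in style

safe_links = [a["href"] for a in soup.find_all("a", href=True) if not looks_hidden(a)]
print(safe_links)  # ['/products']
```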
7. Use CAPTCHA solving services
CAPTCHAs are one of the biggest web crawling challenges. Websites ask visitors to solve various puzzles in order to confirm they’re human, and modern CAPTCHAs often include images that are nearly impossible for computers to read.
We suggest using dedicated CAPTCHA-solving services or ready-to-use crawling tools. There are various scraping solutions available that will help you deal with CAPTCHAs and other data gathering issues.
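Solving-service APIs differ from vendor to vendor, so the sketch below only covers the generic first step: detecting that a response is probably a CAPTCHA page so your scraper can pause, rotate its proxy, or hand the challenge off to whichever service you use. The detection heuristics are simplistic assumptions, not a reliable check.

```python
import requests

def looks_like_captcha(response):
    # Crude heuristics: challenge pages often return 403/429 or mention a CAPTCHA in the body
    if response.status_code in (403, 429):
        return True
    return "captcha" in response.text.lower()

response = requests.get("https://example.com", timeout=10)
if looks_like_captcha(response):
    # Back off, switch proxy / user agent, or forward the challenge to a solving service
    print("CAPTCHA suspected - slow down or hand off to a solver")
else:
    print("Page fetched normally")
```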
8. Change the crawling pattern
The pattern refers to how your crawler is configured to navigate the website. If you constantly use the same basic crawling pattern, it's only a matter of time before you get blocked.
You can add random clicks, scrolls, and mouse movements to make your crawling seem less predictable. However, the behavior should not be completely random. One of the best practices when developing a crawling pattern is to think of how a regular user would browse the website and then apply those principles to the tool itself. For example, visiting the home page first and only then making some requests to the inner pages makes a lot of sense.
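As an illustration, the sketch below mimics a more human-like flow: it visits the home page first, then a randomized subset of inner pages with uneven pauses in between. The URLs and timing ranges are arbitrary placeholders.

```python
import random
import time
import requests

session = requests.Session()

# Visit the home page first, like a regular user landing on the site
session.get("https://example.com/", timeout=10)
time.sleep(random.uniform(2, 5))

# Then browse a randomized subset of inner pages instead of a fixed sequence
inner_pages = ["/products", "/about", "/blog", "/contact"]
random.shuffle(inner_pages)

for path in inner_pages[:3]:
    session.get(f"https://example.com{path}", timeout=10)
    # Uneven pauses look less mechanical than a constant interval
    time.sleep(random.uniform(2, 8))
```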
9. Reduce the scraping speed
To mitigate the risk of being blocked, you should slow down your scraper speed. For instance, you can add random breaks between requests or initiate wait commands before performing a specific action.
What if I can’t scrape the URL because it is rate limited?
Rate limiting means the target website restricts the number of actions an IP address can perform within a given time window. To avoid having your requests throttled, respect the website’s limits and reduce your scraping speed.
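Here’s a minimal sketch of both ideas: random pauses between requests, plus a backoff when the server signals throttling with a 429 status. It assumes any Retry-After header is given in seconds; the delay ranges are arbitrary.

```python
import random
import time
import requests

def polite_get(url, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:
            return response
        # Rate limited: honor Retry-After (assumed to be seconds), else back off exponentially
        wait = int(response.headers.get("Retry-After", 2 ** attempt * 5))
        time.sleep(wait)
    return response

for page in ["https://example.com/page/1", "https://example.com/page/2"]:
    polite_get(page)
    # Random break between requests instead of hammering the server at full speed
    time.sleep(random.uniform(1, 4))
```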
10. Crawl during off-peak hours
Most crawlers move through pages significantly faster than an average user as they don’t actually read the content. Thus, a single unrestrained web crawling tool will affect server load more than any regular internet user. In turn, crawling during high-load times might negatively impact user experience due to service slowdowns.
The best time to crawl a website varies case by case, but picking off-peak hours just after midnight (local to the service) is a good starting point.
11. Avoid image scraping
Images are data-heavy objects that are often copyright-protected. Scraping them not only takes additional bandwidth and storage space but also carries a higher risk of infringing on someone else’s rights.
Additionally, since images are data-heavy, they are often hidden in JavaScript elements (e.g., behind Lazy loading), which will significantly increase the complexity of the data acquisition process and slow down the web scraper itself. A more complicated scraping procedure (something that would force the website to load all content) would have to be written and employed to get images out of JS elements.
12. Avoid JavaScript
Data nested in JavaScript elements is hard to acquire. Websites use many different JavaScript features to display content based on specific user actions. A common practice is to display product images only after the user has provided some input in the search bar.
Rendering JavaScript can also cause many other issues for a scraper: memory leaks, application instability, or, at times, complete crashes. Dynamic features can quickly become a burden, so avoid JavaScript unless absolutely necessary.
13. Use a headless browser
One of the additional tools for block-free web scraping is a headless browser. It works like any other browser, except a headless browser doesn’t have a graphical user interface (GUI).
A headless browser also allows scraping content that is loaded by rendering JavaScript elements. The most widely-used web browsers, Chrome and Firefox, have headless modes.
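As a minimal sketch, the example below uses Playwright’s headless Chromium to load a page and capture the JavaScript-rendered HTML (Selenium’s headless mode works similarly); the target URL is a placeholder, and Playwright’s browser binaries need to be installed first.

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch Chromium without a GUI; JavaScript still runs and renders the page
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    # The resulting HTML includes content rendered by JavaScript
    html = page.content()
    browser.close()

print(len(html))
```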
Conclusion
If you follow the tips above, you will be able to gather public data without getting blocked. Set your browser parameters right, take care of fingerprinting, and beware of honeypot traps. Most importantly, use reliable proxies and scrape websites with respect. Then all your public data gathering jobs will go smoothly, and you’ll be able to use fresh information for your purposes.
If you have any questions or something is unclear, leave a comment here and we will make sure to answer as quickly as possible! :)