Friday, 24 May 2013

Collection of Robots.txt Files


A suitable robots.txt file is important for search engine optimization. There is plenty of advice around the Internet on creating such files (if you are looking for an introduction to the topic, read “Create a robots.txt file”), but what if, instead of looking at what people say, we looked at what people do?
That is what I did: I collected the robots.txt files from a wide range of blogs and websites. Below you will find them.

Key Takeaways

  • Only 2 out of the 30 websites I checked were not using a robots.txt file
  • Therefore, even if you don’t have any specific requirements for search bots, you should probably use a simple robots.txt file
  • Most people stick to the “User-agent: *” directive to cover all agents
  • The most commonly “Disallowed” item is the RSS feed
  • Google itself uses a combination of closed folders (e.g., /searchhistory/) and open ones (e.g., /search), which probably means they are treated differently
  • A minority of the sites included the sitemap URL in the robots.txt file
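As a quick sanity check, the patterns above can be tried with Python’s standard urllib.robotparser module. The sample rules below are hypothetical, combining the common elements noted in the takeaways (a single “User-agent: *” block, a disallowed feed path, a sitemap URL); they are not taken from any of the surveyed sites.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration only; the domain and paths
# are made up, not copied from any real site in the collection.
SAMPLE = """\
User-agent: *
Disallow: /feeds/
Disallow: /searchhistory/

Sitemap: http://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(SAMPLE.splitlines())

# The feed is blocked for every agent; ordinary pages remain crawlable.
print(parser.can_fetch("*", "http://example.com/feeds/posts/default"))  # False
print(parser.can_fetch("*", "http://example.com/2013/05/some-post.html"))  # True
```

This is the same logic a well-behaved crawler applies before fetching a page, so it is a convenient way to verify that a robots.txt file actually blocks (or permits) what you intended.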
