
Google pushes for an official web crawler standard, open-sources its robots.txt parser

As you know, Google’s business depends to some extent on robots.txt files: they let a site exclude some of its content from the search engine’s web crawler, Googlebot. That cuts down on unwanted or pointless indexing and keeps sensitive information under wraps. Google now wants to improve this system, even at the cost of shedding some of its secrecy: it wants to turn the decades-old Robots Exclusion Protocol (REP) into an official internet standard, and it has released its robots.txt parser as open source as part of the effort.
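For context, robots.txt is a plain-text file served from a site’s root that tells crawlers which paths to stay away from. A minimal illustrative example (the paths here are hypothetical):

```
# Hypothetical robots.txt served at https://example.com/robots.txt
User-agent: Googlebot
Disallow: /private/    # keep sensitive pages out of Google's index

User-agent: *
Disallow: /tmp/        # auto-generated pages no crawler should index
```

Directives are grouped by User-agent, so a site can give Googlebot different instructions than it gives other crawlers.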

As you know, the Robots Exclusion Protocol was introduced around a quarter of a century ago and has remained an unofficial standard ever since. Google now wants to make it an official format, both to streamline its use and to make sure it is followed the way it is intended. To that end, Google has submitted a proposal to the Internet Engineering Task Force documenting how a crawler is supposed to handle robots.txt, which could bring positive value to the internet and to websites in general.
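To illustrate what “handling robots.txt” means in practice, here is a minimal sketch of a compliant check using Python’s standard-library urllib.robotparser; the rules and URLs are made up for the example, and this is Python’s parser, not the C++ one Google open-sourced.

```python
import urllib.robotparser

# The hypothetical rules from the example above, as a site might serve them
rules = [
    "User-agent: Googlebot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Disallow: /tmp/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)  # parse in-memory rules instead of fetching over HTTP

# A well-behaved crawler checks every URL before fetching it
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False: disallowed
print(rp.can_fetch("Googlebot", "https://example.com/blog/post.html"))     # True: allowed
```

Note that compliance is voluntary: robots.txt is a request rather than an access control, which is part of why a formal standard for interpreting it matters.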

Source: Engadget
