Google has announced that it is ending support for the noindex directive in robots.txt. In an official blog post, the Webmasters team confirmed the end of support for undocumented indexing-related rules. The news was announced today, 2 July 2019, just one day after Google open-sourced its robots.txt parser.
Google Webmasters retire the robots.txt noindex directive
Google made the announcement on the official Google Webmasters Twitter account.
Today we're saying goodbye to undocumented and unsupported rules in robots.txt 👋
If you were relying on these rules, learn about your options in our blog post. https://t.co/Go39kmFPLT
— Google Webmasters (@googlewmc) July 2, 2019
A blog post linked from the tweet explains the change in full. Publishers and bloggers have until September 1, 2019 to remove the noindex directive from their robots.txt files and switch to an alternative way of controlling indexing. The blog lists five alternatives:
- Noindex in robots meta tags.
- 404 and 410 HTTP status codes.
- Password protection.
- Disallow in robots.txt.
- The Search Console Remove URL tool.
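For illustration, the first alternative, a robots meta tag, looks like this when placed in a page's `<head>` (the page itself is a placeholder, not from Google's blog):

```html
<!-- Keeps a crawlable page out of the search index -->
<meta name="robots" content="noindex">
```

For non-HTML resources, the same signal can be sent as an `X-Robots-Tag: noindex` HTTP response header.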
Google makes the robots.txt parser open source
The announcement comes a day after Google released its robots.txt parser code as open source. Developers and other crawler makers can now read and reuse the code, and learn exactly how Googlebot interprets robots.txt files.
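Google's open-sourced parser is written in C++, but the kind of behaviour it codifies can be sketched with Python's standard-library `urllib.robotparser`. This is a minimal illustration of robots.txt matching, not Google's code; the rules and URLs below are made up:

```python
from urllib import robotparser

# A made-up robots.txt body. Note there is no "Noindex:" line —
# after September 1, 2019, Google ignores such lines anyway.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Crawl decisions a compliant crawler would make:
print(rp.can_fetch("*", "https://example.com/private/page"))  # False: disallowed
print(rp.can_fetch("*", "https://example.com/public/page"))   # True: crawlable
```

Note that Disallow only blocks crawling, not indexing as such, which is why Google's blog lists it as just one of several alternatives to the retired noindex directive.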