07-08-2020, 02:55 AM
A robots.txt file tells search engine crawlers which files or pages the crawler may or may not request from your website. It is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a page out of Google, you should use a noindex directive or password-protect the page.
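As a quick illustration, a minimal robots.txt placed at the site root might look like this (the paths shown are just examples, not required names):

```
# Block all crawlers from a hypothetical /private/ directory,
# but allow everything else.
User-agent: *
Disallow: /private/

# Point crawlers at the sitemap (optional).
Sitemap: https://www.example.com/sitemap.xml
```

By contrast, keeping a page out of Google's index is done on the page itself, for example with a noindex robots meta tag in the page's head:

```
<meta name="robots" content="noindex">
```

Note that for the noindex tag to work, the crawler must be allowed to fetch the page; if robots.txt blocks it, Google never sees the tag and may still list the URL.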