Google On Robots.txt: When To Use Noindex vs. Disallow
By AmeriWeb Hosting
There is some confusion about the difference between noindex and disallow as they relate to Google and Bing. You can use them in the HTML head section of your web pages or in your robots.txt file, and both are used to instruct search engines how to handle your content.
- noindex - The noindex directive tells search engines not to include a specific page in their search results. Add this instruction in the HTML head section using the robots meta tag, or send it as an X-Robots-Tag HTTP response header. Use noindex when you want to keep a page from showing up in search results but still allow search engines to read the page's content. This is helpful for pages that users can see but that you don't want search engines to display, like thank-you pages or internal search result pages. (See the examples after this list.)
- disallow - The disallow directive in a website's robots.txt file stops search engine crawlers from accessing specific URLs or URL patterns. When a page is disallowed, search engines will not crawl it, so its content is never read or indexed. Use disallow when you want to block search engines completely from retrieving or processing a page, such as sections of your site that aren't relevant to search engines at all. Keep in mind that robots.txt is not a security mechanism: a disallowed URL can still appear in search results (without a snippet) if other pages link to it, so truly sensitive information, like private user data, should be protected with authentication rather than robots.txt alone.
- nofollow - Just for collateral information, the nofollow directive in a page's robots meta tag tells search engines not to follow the links found on that page. Use nofollow when you don't want search engines to use your page to discover additional web pages or to pass ranking credit to them. This is suitable for something like a links page listing external sites, where you don't want to share your mojo with other sites.
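To make the difference concrete, here is a rough sketch of what each directive looks like in practice. The example.com domain and the /private/ path below are placeholders, not recommendations for any particular site.

```html
<!-- noindex: placed in the <head> of the page you want kept out of search results -->
<meta name="robots" content="noindex">

<!-- noindex plus nofollow: also tells crawlers not to follow links on this page -->
<meta name="robots" content="noindex, nofollow">
```

The same noindex instruction can be sent as an HTTP response header (handy for non-HTML files such as PDFs), while disallow goes in the robots.txt file at the root of your site:

```text
X-Robots-Tag: noindex
```

```text
# robots.txt, served from https://www.example.com/robots.txt
User-agent: *
Disallow: /private/    # crawlers should not fetch anything under /private/
```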
It is not necessary to provide the positive commands index, follow or allow; search engines assume they are in effect unless told otherwise with noindex, nofollow or disallow.
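For example (again a sketch with placeholder values), omitting the robots meta tag entirely is equivalent to spelling out the positive defaults, and an empty Disallow line in robots.txt blocks nothing:

```html
<!-- These two are equivalent; index and follow are the defaults -->
<meta name="robots" content="index, follow">
<!-- ...or simply leave the robots meta tag out altogether -->
```

```text
# robots.txt: an empty Disallow value means every URL may be crawled
User-agent: *
Disallow:
```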
As always, if you have any questions, contact us!