How to prevent indexing

Learn how to prevent your Locator & Pages from being indexed

Last updated on September 10th, 2021

This article will go over two methods for preventing your Locator & Pages from being indexed by search engines.

Why prevent indexing?

It can be beneficial to keep your Locator & Pages out of search engine indexes when the same content is duplicated across multiple instances of the Locator. This applies to use cases where several pages host the same Locator with the same Locations: search engines will identify those pages as duplicates, which causes indexing issues, and the indexed links may end up split between the different instances of the same Locator. To remedy this, search engines should only evaluate URLs from one Locator and ignore the rest. This gives you the best chance of having your Locations properly indexed.

Blocking using robots.txt

This method uses the robots.txt file, which lets you add rules that disallow crawling by search engines. You can add rules like the ones below to your robots.txt file to prevent crawling of the main Locator page as well as any page inside the Locator folder (replace /locator_folder with the path where your Locator actually lives):

User-agent: *
Disallow: /locator_folder
Disallow: /locator_folder/*
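
For example, if the same Locator is embedded at several paths and you only want one of them crawled, you can block the duplicate instances and leave the canonical one alone. The paths below are hypothetical placeholders; substitute the folders where your duplicate Locator instances actually live:

User-agent: *
# Block the duplicate Locator instances
Disallow: /stores-copy
Disallow: /stores-copy/*
# The instance you want indexed (e.g. /stores) is intentionally not listed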

Blocking using noindex

This method uses a meta tag placed in the <head> of your Locator page to tell search engine crawlers not to index it. Add the following tag to the page where the Locator is located:

<meta name="robots" content="noindex">

If your Locator can be reached at https://mydomain.com/locator, add the tag on that page. Since the noindex tag has been added to the main page of the Locator, it will also apply to all subpages.
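
As a minimal sketch, the tag simply sits alongside the other elements in the page's <head>. The markup below is an illustrative skeleton, not your actual page:

<!DOCTYPE html>
<html>
  <head>
    <title>Store Locator</title>
    <!-- Tells search engine crawlers not to index this page -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    <!-- Locator embed code goes here -->
  </body>
</html>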
