    How to prevent indexing

    Learn how to prevent your Locator & Pages from being indexed

    Written by Marco Degano

    Updated September 2nd, 2025

    This article will go over two methods for preventing your Locator & Pages from being indexed by search engines.

    Why prevent indexing?

    It can be beneficial to keep your Locator & Pages out of search engine indexes when duplicate content exists across multiple instances of the Locator. This solution is for use cases where several pages host the same Locator with the same Locations. Search engines will identify those pages as duplicates, which causes indexing issues: the indexed links may end up split between all existing instances of the same Locator. To remedy this, search engines should evaluate URLs from only one Locator and ignore the rest. This gives you the best chance of having your Locations properly indexed.

    Blocking using robots.txt

    This method makes use of the robots.txt file, to which you can add rules that disallow crawling by search engines. Add rules like the following to your robots.txt file to prevent crawling of the main Locator page as well as any page in the Locator folder (replace /locator_folder with the actual path of your Locator):

    # Apply to all crawlers; block the Locator page and everything beneath it
    User-agent: *
    Disallow: /locator_folder
    Disallow: /locator_folder/*
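
    Note that robots.txt must live at the root of the domain that serves the Locator (for example https://mydomain.com/robots.txt). As an illustrative sketch with hypothetical paths, if the same Locator is embedded at both /store-locator and /partners/locator, you would block only the duplicate instance and leave the other crawlable:

    # Keep /store-locator crawlable; block only the duplicate Locator instance
    User-agent: *
    Disallow: /partners/locator
    Disallow: /partners/locator/*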

    Blocking using noindex

    This method makes use of a meta tag located in the <head> of your Locator page to stop crawlers from indexing it. Add the following to the page where the Locator is situated:

    <meta name="robots" content="noindex">

    If your Locator can be reached at https://mydomain.com/locator, add the tag on that page. Since the noindex tag has been added to the main page of the Locator, it will also apply to all subpages.
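
    For context, here is a minimal sketch of what the Locator page's <head> might look like with the tag in place (the title and surrounding markup are placeholders):

    <!DOCTYPE html>
    <html>
      <head>
        <title>Store Locator</title>
        <!-- Tells search engine crawlers not to index this page -->
        <meta name="robots" content="noindex">
      </head>
      <body>
        <!-- Locator embed goes here -->
      </body>
    </html>

    Keep in mind that the two methods are alternatives: if robots.txt blocks crawling of the Locator, crawlers will never fetch the page and therefore never see the noindex tag.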

    Related Articles

    • FAQ about Self Serve Locator & Pages
    • What is embedded Locator + Pages (Self Serve)?
    • Customizations of Locator + Pages via CSS
    • Locator and Pages attributes
    • Filters for Locator & Pages
