
© 2026 SEO Lebedev · All rights reserved.

Noindex

Noindex is a directive that instructs search engines not to index a specific page on a website. It is used to exclude a page from search results without deleting it from the site or blocking direct user access via a link.

What is Noindex?

Noindex is a command for search engine robots indicating that a page’s content should not be added to the search engine’s index (Google, Yandex, etc.). When a robot encounters a noindex directive, it does not add the page to the search database, meaning the page will not appear in search results.

Noindex can be implemented:

  • In meta tags within the page’s HTML code.
  • As an X-Robots-Tag in HTTP headers.

Example of a Noindex Meta Tag

```html
<meta name="robots" content="noindex">
```

This code is placed in the <head> section of a page and tells search engines not to index the page.
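As an illustration, a page's HTML can be checked for this directive programmatically. The sketch below uses Python's standard `html.parser`; the function name `has_noindex` is our own invention for this example, not part of any SEO tool.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content values of <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if (a.get("name") or "").lower() == "robots":
                self.directives.append((a.get("content") or "").lower())

def has_noindex(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(has_noindex(page))  # True
```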

To block indexing only for specific search engines, you can specify them directly:

```html
<meta name="googlebot" content="noindex">
<meta name="yandex" content="noindex">
```

Example via HTTP Header

For files without HTML markup (e.g., PDF, DOC, images), the directive can be sent in an HTTP header:

```text
X-Robots-Tag: noindex
```

This method is typically used for non-HTML resources, where a meta tag cannot be placed in the document itself.
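The same check works for response headers. Below is a minimal sketch (the helper `header_blocks_indexing` is hypothetical) that inspects an `X-Robots-Tag` value; real crawlers also handle bot-specific forms such as `googlebot: noindex`, which this sketch does not cover.

```python
def header_blocks_indexing(headers: dict) -> bool:
    """Return True if an X-Robots-Tag header contains a noindex directive.

    `headers` is a mapping of header names to values, as returned by most
    HTTP clients; the lookup is case-insensitive.
    """
    for name, value in headers.items():
        if name.lower() == "x-robots-tag":
            # One header may carry several comma-separated directives,
            # e.g. "noindex, nofollow".
            directives = [d.strip().lower() for d in value.split(",")]
            if "noindex" in directives:
                return True
    return False

print(header_blocks_indexing({"X-Robots-Tag": "noindex, nofollow"}))  # True
print(header_blocks_indexing({"Content-Type": "application/pdf"}))    # False
```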

When to Use Noindex

  • Administrative Pages.
    For example, login pages, shopping carts, user accounts, privacy policy pages. They hold no value for search engines.
  • Filtering and URL Parameters.
    Pages with parameters (?sort=price, ?page=2) often duplicate main content and should not be indexed.
  • Test and Temporary Pages.
    During development or A/B testing, to avoid cluttering the index with unfinished pages.
  • Content Duplicates.
    When a site has pages with identical content (e.g., sorted pages, versions with UTM tags, filtered results).
  • Auxiliary Sections.
    For example, internal site search results pages (/search?q=…) or archives.
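When auditing a large site, rules like these can be automated. The sketch below flags URLs whose query strings suggest a noindex candidate; the parameter list is an assumption for illustration, and a real site would use its own set.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative parameter names that often signal duplicate or
# auxiliary content (sorting, pagination, search, UTM tags).
NOINDEX_PARAMS = {"sort", "page", "q", "utm_source", "utm_medium", "utm_campaign"}

def is_noindex_candidate(url: str) -> bool:
    """Return True if the URL's query parameters suggest it should carry noindex."""
    query = parse_qs(urlparse(url).query)
    return any(param in NOINDEX_PARAMS for param in query)

print(is_noindex_candidate("https://example.com/catalog?sort=price"))  # True
print(is_noindex_candidate("https://example.com/catalog"))             # False
```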

Common Combinations with Other Directives

  • noindex, follow — The page is not indexed, but search engines follow the links on it:

```html
<meta name="robots" content="noindex, follow">
```

    This is the most commonly used option: the page is excluded from search but still passes link equity.

  • noindex, nofollow — The page is not indexed, and the links on it are not followed or counted:

```html
<meta name="robots" content="noindex, nofollow">
```

    Used when a page has no SEO value at all.

Noindex in Yandex

Yandex has a specific feature: it supports not only the meta tag but also the <noindex> HTML tag, which allows hiding specific parts of text on a page from indexing.

Example:

```html
<noindex>This text will not be indexed by the search engine</noindex>
```

This is useful for hiding individual blocks from indexing—for example, user comments, widgets, or text duplicates.

Important: The <noindex> tag works only in Yandex; Google does not support it. Because <noindex> is not valid HTML, Yandex also accepts a comment form, <!--noindex-->…<!--/noindex-->, which passes markup validation.

Difference Between Noindex and Robots.txt

| Parameter | Noindex | Robots.txt |
| --- | --- | --- |
| What it does | Prevents indexing of the page content. | Prevents crawling of the page. |
| Where it's specified | In the page code or HTTP header. | In the robots.txt file. |
| Impact on links | Can be controlled (follow / nofollow). | Links are invisible if the page is blocked. |
| Page visibility | Yes, if there's a direct link. | No, if access is completely blocked. |
| Example | `<meta name="robots" content="noindex">` | `Disallow: /private/` |

Use noindex when you want a page to be accessible to users but not appear in search results.

Checking if Noindex is Working

Check if noindex has been applied using these tools:

  • Google Search Console → URL Inspection. Shows if the page is excluded from the index.
  • Yandex Webmaster → Page Diagnostics. Displays whether Yandex has processed the noindex directive.
  • SEO crawlers (Screaming Frog, Netpeak Spider) — find pages with noindex and nofollow directives.

Common Mistakes When Using Noindex

  • Blocking important pages. The tag might be accidentally placed on a page that should be indexed.
  • Using it alongside Disallow in robots.txt. If a page is blocked from crawling via robots.txt, the search engine won’t see the noindex tag in its code.
  • Relying on it for protection. noindex only keeps a page out of search results; it does not block crawling or user access. For confidential content, combine it with authentication or server-level restrictions.
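The conflict between noindex and a robots.txt Disallow can be caught with Python's standard `urllib.robotparser`: if a URL is disallowed for crawling, the robot never fetches the page and therefore never sees the meta tag. The rules and URLs below are illustrative.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse robots.txt rules directly from a list of lines.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A noindex tag on this page would be pointless: crawlers are not
# allowed to fetch it, so they will never read the tag.
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
```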

Advantages of Using Noindex

  • Complete control over which pages appear in search.
  • Helps avoid content duplication and keyword cannibalization.
  • Improves SEO structure and saves crawl budget.
  • Does not require deleting the page from the site—it remains accessible to users.

Conclusion

Noindex is a directive that instructs search engines not to include a page in their index. It is used to exclude administrative, duplicate, or temporary pages from search results to avoid cluttering the search database and wasting crawl budget.

The optimal usage is:

```html
<meta name="robots" content="noindex, follow">
```

This way, the page won’t appear in search, but links from it will still pass equity—this is the best option for maintaining a site’s SEO hygiene.
