Tags: angularjs, robots.txt, googlebot

Prevent Googlebot from indexing a page while still allowing access


In an AngularJS app, I'm using fragments like /fragments/welcome-1.html which get displayed as part of /welcome. I thought I could exclude them from Google search via

Disallow: /fragments

in robots.txt, but that blocks Googlebot's access entirely, so it can't render the page /welcome correctly.

Obviously, I can't do this, but how can I ensure that Google can fetch the fragment without indexing it? Note that a fragment isn't a full HTML document, just part of a body, so I can't really use a meta tag.


Solution

  • I've just found the X-Robots-Tag response header, and I now serve all pages under /fragments with

    X-Robots-Tag: googlebot: noindex
    

    Let's see if it really works.
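    To illustrate, here is a minimal sketch of setting that header with Node's built-in http module; the path check and the `robotsTagFor` helper are assumptions for illustration, not part of the original setup.

    ```javascript
    const http = require('http');

    // Return the X-Robots-Tag value for a given URL path, or null if
    // the response should carry no such header. Fragments stay fetchable
    // by Googlebot but are marked as not to be indexed.
    function robotsTagFor(path) {
      return path.startsWith('/fragments/') ? 'googlebot: noindex' : null;
    }

    const server = http.createServer((req, res) => {
      const tag = robotsTagFor(req.url);
      if (tag) res.setHeader('X-Robots-Tag', tag);
      res.setHeader('Content-Type', 'text/html');
      res.end('<div>fragment body goes here</div>');
    });
    ```

    The same effect can be had from the web server config (e.g. an nginx `add_header` or Apache `Header set` rule scoped to /fragments), which avoids touching application code.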