Disallow or Noindex on Subdomain with robots.txt


I have dev.example.com and www.example.com hosted as separate subdomains. I want crawlers to drop all records of the dev subdomain but keep them for www. I keep the code for both in the same git repository, so ideally both sites would use the same robots.txt file.

Is it possible to use one robots.txt file and have it exclude crawlers from the dev subdomain?


Solution

  • Sorry, this is most likely not possible. Crawlers treat each subdomain as a separate site, so each one needs its own robots.txt file (example files are sketched below).

    Subdomains are often implemented as subfolders, with URL rewriting doing the mapping; in that setup you can share a single robots.txt file across subdomains. Here's a good discussion of how to do this: http://www.webmasterworld.com/apache/4253501.htm.

    However, in your case you want different behavior for each subdomain, which is going to require separate files. A rewrite-based workaround that keeps both files in one repository is sketched further below.
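
    For illustration, the two separate files could look like the following, assuming dev.example.com should be fully excluded and www.example.com fully crawlable (the hostnames are the ones from the question):

        # robots.txt served at dev.example.com/robots.txt
        User-agent: *
        Disallow: /

        # robots.txt served at www.example.com/robots.txt
        User-agent: *
        Disallow:

    Note that Disallow only stops crawling and does not by itself remove pages that are already indexed; to have existing dev pages dropped, search engines generally need to be able to recrawl them and see a noindex signal (for example an X-Robots-Tag: noindex response header), or you can use the search engine's URL removal tools.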
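
    If both hosts run Apache from the same codebase, one common workaround (not covered in the answer above, so treat it as a sketch) is to keep two files in the repository, say robots.txt and robots_dev.txt (the second file name is just an assumption for this example), and rewrite requests for /robots.txt on the dev host to the dev-specific file. A minimal .htaccess sketch, assuming mod_rewrite is enabled:

        RewriteEngine On
        # Only when the request arrives on the dev host...
        RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
        # ...answer /robots.txt with the dev-specific file instead
        RewriteRule ^robots\.txt$ robots_dev.txt [L]

    With a rule like this, the same repository can be deployed to both subdomains while only dev.example.com serves the Disallow-all rules.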