This is my script code:
<script type="text/javascript" src="//example.com/js/infolinks_main.js"></script>
I want to prevent crawlers from following or indexing example.com/js/infolinks_main.js.

How can I do this? I have a robots.txt in my root, but that URL is on an external host.
NB: I do not want to use an iframe.
The script element can't have a rel attribute, so nofollow can't be used. And even if it could be used, note that nofollow is not about disallowing bots from crawling or indexing the URL.
To disallow crawling the script, you have to use robots.txt:
User-agent: *
Disallow: /js/infolinks_main.js
Or if you want to disallow crawling of all your scripts:
User-agent: *
Disallow: /js/
You have to use the robots.txt file of the host where the scripts are hosted. It doesn’t necessarily have to be the host where your HTML documents are hosted.
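In the case from the question, the script is served from example.com, so the Disallow rules would have to go into that host's own robots.txt, which only the operator of example.com can edit. The file is always fetched from the root of the host serving the URL:

//example.com/robots.txt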
(Note that this doesn’t disallow indexing the script. If you want to disallow indexing, you can use the X-Robots-Tag header with a noindex value, but then you have to allow crawling. As scripts are typically not indexed by general-purpose search engines, you probably want to prevent crawling, not indexing.)