I'm told by various sources that http://www.anthonygalli.com/robots.txt does not exist, yet I do have a file at public/robots.txt:
```
User-agent: *
Allow: /
Sitemap: http://www.anthonygalli.com/sitemap.xml.gz
```
How can I make this route work so that the error goes away and Google can properly crawl the site?
Ensure that the following line is present in your config/environments/production.rb:

```ruby
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
```
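
For context, here is a minimal sketch of where that line sits in a Rails 5+ environment file, assuming your app serves its own static files rather than delegating them to a front-end web server like Nginx or Apache:

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Serve files from public/ (robots.txt, sitemap, etc.) directly from
  # Rails whenever RAILS_SERVE_STATIC_FILES is set in the environment.
  # Assumption: your host does not already serve public/ for you.
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
end
```

The Heroku Ruby buildpack typically sets RAILS_SERVE_STATIC_FILES for Rails 5+ apps; on other hosts you may need to export it yourself (e.g. RAILS_SERVE_STATIC_FILES=true). After redeploying, `curl -I http://www.anthonygalli.com/robots.txt` should return a 200 instead of a 404.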