For some reason, my Rails assets aren't being served with Cache-Control headers when the app runs on Heroku under the Puma server. I have
config.public_file_server.headers = { 'Cache-Control' => 'public, max-age=172800' }
set in my config/environments/production.rb, and when I run the app on my local machine it works fine:
Cache-Control: public, max-age=31536000
Etag: "fdffdae515ab907046e7deed6a567968ab3e689f8505a281988bf6892003ff92"
X-Request-Id: 065e704c-1bea-428c-9c40-3cd6b6e4330a
X-Runtime: 0.002411
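For context, this is roughly how that setting sits in config/environments/production.rb, trimmed down to the relevant lines of a stock Rails 5 file, so treat it as a sketch rather than my exact config:

# config/environments/production.rb (sketch of the relevant lines only)
Rails.application.configure do
  # Rails only serves files from /public itself when this is enabled;
  # the Cache-Control header below is only ever sent by this file server.
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
  config.public_file_server.headers = { 'Cache-Control' => 'public, max-age=172800' }
end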
However, when I deploy to Heroku, I get the following headers:
Server: Cowboy
Date: Sat, 10 Jun 2017 18:56:26 GMT
Content-Length: 0
Connection: keep-alive
strict-transport-security: max-age=15552000; includeSubDomains
Via: 1.1 vegur
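In case the way I'm checking matters, I see the same thing from a quick Ruby one-off like the following (the app name and asset digest here are just made-up examples):

# Minimal header check against a deployed asset (hypothetical URL and digest).
require 'net/http'

uri = URI('https://my-app.herokuapp.com/assets/application-abc123.css')
response = Net::HTTP.get_response(uri)
puts response['Cache-Control'].inspect  # nil on Heroku, the configured value locally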
This seems odd, and I'm at a loss to understand why it's happening. I'm using the Puma web server, as recommended by Heroku themselves. Here's my Procfile:
web: bundle exec puma -C config/puma.rb
And here's my config/puma.rb:
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. The default is 5 threads for both
# minimum and maximum, which matches the default thread size of Active Record.
#
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }

# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together,
# the concurrency of the application will be at most `threads` * `workers`.
# Workers do not work on JRuby or Windows (neither of which supports forked
# processes).
workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to boot the application and load code
# before forking worker processes. This takes advantage of copy-on-write
# process behavior so workers use less memory. If you use this option,
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
preload_app!

# The code in the `on_worker_boot` block will be called in clustered mode,
# i.e. when a number of `workers` is specified. It runs after each worker
# process is booted. If you are using the `preload_app!` option, use this
# block to re-establish any connections (such as to the database) that were
# created at application boot, since Ruby cannot share connections between
# processes.
on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end

# Allow puma to be restarted by the `rails restart` command.
plugin :tmp_restart
Anyone got any ideas? Incidentally, contrary to this post, it doesn't make any difference what type of file is being served: none of them get a Cache-Control header.
Found the problem!
It turns out that, although Heroku doesn't make this obvious, if you want the asset pipeline's caching headers to be sent, you need to add the rails_12factor gem to your Gemfile:
group :production do
  gem 'rails_12factor'
end
As soon as I added the rails_12factor gem, things started working fine :)
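For what it's worth, my understanding is that rails_12factor is essentially a thin wrapper around the rails_serve_static_assets and rails_stdout_logging gems, so it's the static-file-serving half that makes Rails actually send these headers on Heroku.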