I have two production AWS instances, each of which will be running Resque but listening to a different set of queues. Here is an example of my current configuration:
config/deploy/production/prod_resque_1.rb
server "<ip>", :web, :app, :resque_worker, :db, primary: true
set :resque_log_file, "log/resque.log"
set :resque_environment_task, true
set :workers, {
  "queue1" => 5,
  "*" => 2
}
after "deploy:restart", "resque:restart"
config/deploy/production/prod_resque_2.rb
server "<ip>", :web, :app, :resque_worker, :db, primary: true
set :resque_log_file, "log/resque.log"
set :resque_environment_task, true
set :workers, {
  "queue2,queue3,queue4" => 5
}
after "deploy:restart", "resque:restart"
Then, I have a "global" recipe:
load 'config/deploy/production/common'
load 'config/deploy/production/prod_resque_1'
load 'config/deploy/production/prod_resque_2'
The obvious problem is that when I call cap prod_resque resque:start, the :workers definition in prod_resque_1 is overwritten by the load of prod_resque_2, leaving both prod_resque_1 and prod_resque_2 with workers listening only to queue2, queue3, and queue4.
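In other words, since Capistrano's set simply reassigns the variable, loading both files in sequence boils down to this (comments mine):

set :workers, { "queue1" => 5, "*" => 2 }     # from prod_resque_1
set :workers, { "queue2,queue3,queue4" => 5 } # from prod_resque_2 -- replaces the hash above
# fetch(:workers) now returns only the second hash, on every server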
My workaround has been to run cap prod_resque_1 resque:start and then cap prod_resque_2 resque:start, but this rather defeats the purpose of Capistrano.
Any suggestions for a cleaner solution that lets me run cap prod_resque resque:start and have the "first" server run 7 workers (5 listening to queue1 and 2 listening to all queues), while the "second" server runs 5 workers listening only to queue2, queue3, and queue4?
An example of this is given in the capistrano-resque docs: if you assign a different role to each server (or group of servers), you can define workers on a per-role basis.
In your case, you would do something like:
role :queue_one_workers, [ip_from_prod_resque_1]
role :other_queue_workers, [ip_from_prod_resque_2]
set :workers, {
  :queue_one_workers => { "queue1" => 5, "*" => 2 },
  :other_queue_workers => { "queue2" => 5, "queue3" => 5, "queue4" => 5 }
}
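Putting it together, a minimal sketch of a single combined stage file (the IPs are placeholders for the ones in your two files, and the log/restart settings are carried over from them):

# config/deploy/production/prod_resque.rb
load 'config/deploy/production/common'

role :queue_one_workers, ["<ip_from_prod_resque_1>"]
role :other_queue_workers, ["<ip_from_prod_resque_2>"]

set :resque_log_file, "log/resque.log"
set :resque_environment_task, true

# capistrano-resque starts each worker group only on servers holding the
# matching role, so neither definition overwrites the other
set :workers, {
  :queue_one_workers => { "queue1" => 5, "*" => 2 },
  :other_queue_workers => { "queue2" => 5, "queue3" => 5, "queue4" => 5 }
}

after "deploy:restart", "resque:restart"

Note that "queue2" => 5, "queue3" => 5, "queue4" => 5 starts five workers per queue (15 in total); if you want to keep your original five workers that each poll all three queues, the combined key "queue2,queue3,queue4" => 5 from prod_resque_2 should work inside the per-role hash as well.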