Following Ryan Bates' Railscast (http://railscasts.com/episodes/127-rake-in-background) on background tasks, I'm trying it out with some mailers, but it's not working out for me.
Application controller:
def call_rake(task, options = {})
  options[:rails_env] ||= Rails.env
  args = options.map { |n, v| "#{n.to_s}='#{v}'" }
  system "rake #{task} #{args.join(' ')} --trace 2>&1 >> #{Rails.root}/log/rake.log &"
  # need to add path to rake (/usr/bin/rake, etc.)
end
Request controller:
call_rake :connect_email, :requestrecord => @requestrecord, :lender_array => @lender_array, :item => item, :quantity => quantity
Rake task (connect_email.rb):
desc "Send email to connect borrower/lender pairs for items found"
task :connect_email => :environment do
RequestMailer.found_email(requestrecord, lender_array, item, quantity).deliver unless lender_array.blank?
end
I'm getting the error NameError: undefined local variable or method `lender_array' for main:Object. Funny thing is, on a different mailer where the same parameters are passed except for lender_array, I get NameError: undefined local variable or method `requestrecord' for main:Object.
Any thoughts on why?
And since it's my first time doing background tasks, I'd love to get additional insight from smart people on:
1. /usr/bin/rake: how do I find what path this is on my local machine and also in production? (using Heroku)
2. The rake.log file I created: does it only register when the action is completed (i.e., a success)?
3. If person A does the action that calls the rake emails, and then person B does that same action a few milliseconds later, will Rails first execute the rake tasks for person A, then do person B's request, then do person B's rake tasks? Or will it do person A's request, then person B's request, and then do the rake tasks in the order received, whenever there is downtime from actual requests?
Thanks!
There are many questions packed in here; I'll start with the problem in your code. When you run the rake task, the code that gets executed is:
task :connect_email => :environment do
  RequestMailer.found_email(requestrecord, lender_array, item, quantity).deliver unless lender_array.blank?
end
When that code is executed, requestrecord, lender_array, item, and quantity are not defined; it is plain Ruby code. If you check the screencast in detail, what call_rake passes to the rake task are environment variables and values, not Ruby variables. The right code would be:
task :connect_email => :environment do
  RequestMailer.found_email(ENV["requestrecord"], ENV["lender_array"], ENV["item"], ENV["quantity"]).deliver unless ENV["lender_array"].blank?
end
Now, even with that code it won't work, because all environment variables are received by Ruby as Strings, so you have to convert them to the kind of value you want. If you call:
call_rake :connect_email, lender_array: ['hello', 'world']
it will set the environment variable lender_array to '["hello", "world"]', which the connect_email task receives as a Ruby String. If you want to convert it to a Ruby Array, you can do something like:
RequestMailer.found_email(ENV["requestrecord"], eval(ENV["lender_array"]))
If you eval the string '["hello", "world"]', it is evaluated as a Ruby Array.
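Note that eval executes whatever code is in that string, so if anything user-supplied ever reaches these environment variables it becomes a code-injection hole. A safer sketch (my suggestion, not from the screencast; it assumes the values are JSON-serializable, so an ActiveRecord object like requestrecord would need to be passed as an id and re-fetched in the task) is to serialize with JSON on both ends:

require "json"

# In call_rake: serialize each value so structured data survives the shell.
args = options.map { |n, v| "#{n}='#{v.to_json}'" }

# In the task: parse the string back into a Ruby Array.
lender_array = JSON.parse(ENV["lender_array"])  # => ["hello", "world"]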
A: Yes, every time you run a task it initializes Rails from zero in a new process, and yes, that is expensive.
A: You can set up an environment variable for that; it's pretty common to set environment variables on Heroku. (Locally, running which rake in a terminal will show you the path.) For example, set the environment variable RAKE_PATH="/usr/bin/rake" and then in your code do:
system "#{ENV['RAKE_PATH'} #{task} #{args.join(' ')} --trace 2>&1 >> #{Rails.root}/log/rake.log &"
A: No. Every time you run a rake task, a separate process is created to run that specific task; there is no queue at all.
A: Rake tasks usually log information. I'm not sure why that exception isn't being logged, though; I usually check the logs and see the rake tasks running and writing to them.
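One likely culprit, though (an observation about your shell command, not something from the screencast): redirections are applied left to right, so in 2>&1 >> rake.log stderr is pointed at the original stdout before stdout is redirected to the file, and error output never reaches the log. Putting the file redirect first captures both streams:

# Redirect stdout to the log first, then send stderr to the same place,
# so exceptions raised by the task end up in rake.log as well.
system "rake #{task} #{args.join(' ')} --trace >> #{Rails.root}/log/rake.log 2>&1 &"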
To run your background tasks in a more robust way, check out delayed_job, which stores the jobs to be run in a database; a daemon then fetches the jobs and runs them in the background.
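As a minimal sketch of what that looks like (assuming the delayed_job gem is installed and its jobs table migrated), the controller enqueues the mail instead of shelling out to rake:

# In the controller: delayed_job serializes the call and its arguments into
# the database; a worker started with `rake jobs:work` delivers the mail later.
# Note there is no .deliver here: delayed_job performs the delivery itself.
RequestMailer.delay.found_email(@requestrecord, @lender_array, item, quantity) unless @lender_array.blank?

This also sidesteps the string-conversion problem entirely, since the arguments are serialized and restored as Ruby objects.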