I have several Resque workers processing the same entity (User). After successful processing, each worker should decrease the user's calls_left attribute. It works perfectly with perform_now (jobs run sequentially), but produces unpredictable results with perform_later (jobs run in parallel): the logs show commits with the same calls_left value. I tried the reload method and even set the highest isolation level, but the problem persists. How can I solve this?
class DataProcessJob < ActiveJob::Base
  queue_as :default

  def perform(user_id, profile_id)
    User.transaction(isolation: :serializable) do
      user = User.find(user_id).reload
      user.data_process(profile_id)
      user.update(calls_left: user.calls_left - 1)
    end
  end
end
The first option would be to use locking, either optimistic or pessimistic. The documentation explains the difference, so you can choose the one that suits your case. (Note that optimistic locking requires an integer lock_version column on the users table.) Here is a relevant snippet from the docs that should help if you go with optimistic locking; a pessimistic sketch follows after it.
def with_optimistic_retry
  begin
    yield
  rescue ActiveRecord::StaleObjectError
    begin
      # Reload lock_version in particular.
      reload
    rescue ActiveRecord::RecordNotFound
      # If the record is gone there is nothing to do.
    else
      retry
    end
  end
end
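For the pessimistic variant, here is a minimal sketch using with_lock, which opens a transaction and reloads the row under a row-level lock (SELECT ... FOR UPDATE); the data_process call is taken from your job, everything else is standard Rails:

class DataProcessJob < ActiveJob::Base
  queue_as :default

  def perform(user_id, profile_id)
    user = User.find(user_id)

    # with_lock starts a transaction and reloads the record with a
    # row-level lock, so concurrent workers block here instead of
    # reading a stale calls_left.
    user.with_lock do
      user.data_process(profile_id)
      user.update(calls_left: user.calls_left - 1)
    end
  end
end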
The second option would be to decrement the calls_left field with a raw SQL update, so the database itself performs the decrement atomically.
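For example, a minimal sketch using update_all with a SQL fragment (one atomic UPDATE statement, no read-modify-write in the application):

class DataProcessJob < ActiveJob::Base
  queue_as :default

  def perform(user_id, profile_id)
    user = User.find(user_id)
    user.data_process(profile_id)

    # The decrement happens inside the database in a single UPDATE,
    # so concurrent workers cannot overwrite each other's result.
    User.where(id: user_id).update_all("calls_left = calls_left - 1")
  end
end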
Last, but not least, you could use the decrement!(:calls_left) method to make your code more readable.
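For reference, the perform body would then read as below. Be aware that whether decrement! issues an atomic UPDATE or a plain read-then-save depends on your Rails version, so check the API docs before relying on it for concurrency:

def perform(user_id, profile_id)
  user = User.find(user_id)
  user.data_process(profile_id)

  # Reads the same as update(calls_left: user.calls_left - 1) but states
  # the intent directly; on older Rails versions this is still a
  # read-then-write, so combine it with one of the options above.
  user.decrement!(:calls_left)
end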