My application creates Resque jobs that must be processed sequentially per user, and they should be processed as fast as possible (1 second maximum delay).
An example: job1 and job2 are created for user1 and job3 for user2. Resque can process job1 and job3 in parallel, but job1 and job2 should be processed sequentially.
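For illustration, this is roughly what that scenario looks like at enqueue time (UserJob and the arguments are just placeholders, not my actual code):

    # Hypothetical enqueue calls illustrating the ordering constraint.
    Resque.enqueue(UserJob, user1.id, "job1")  # must finish before job2 starts
    Resque.enqueue(UserJob, user1.id, "job2")  # must wait for job1
    Resque.enqueue(UserJob, user2.id, "job3")  # may run in parallel with job1/job2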
I have different thoughts for a solution:

One idea is to create a queue for every user and start a dedicated worker per queue (e.g. rake resque:work QUEUE=queue_1). Users are assigned to a queue/worker at runtime (e.g. on login, every day, etc.).

Do you have any experience with such a scenario in practice? Or do you have other ideas that might be worth thinking about? I would appreciate any input, thank you!
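A rough sketch of what I have in mind for the queue-per-user variant (assuming Resque.enqueue_to is available in your Resque version; class and queue names are made up):

    # Hypothetical sketch: pick a per-user queue at enqueue time.
    class UserScopedJob
      def self.perform(user_id, payload)
        # do your stuff here ...
      end
    end

    def enqueue_for_user(user, payload)
      # Each user gets an own queue; a dedicated worker per queue then
      # processes that user's jobs strictly in order.
      Resque.enqueue_to("user_queue_#{user.id}", UserScopedJob, user.id, payload)
    end

    # One worker per queue, started separately, e.g.:
    #   rake resque:work QUEUE=user_queue_1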
Thanks to the answer from @Isotope I finally came to a solution that seems to work (using resque-retry and locks in Redis):
    class MyJob
      extend Resque::Plugins::Retry

      # Re-enqueue the job immediately when a lock timeout occurs.
      @retry_delay = 0
      # Effectively no limit, since the lock will be released at some point.
      @retry_limit = 10000
      # Only retry on lock timeouts, not on other errors.
      @retry_exceptions = [Redis::Lock::LockTimeout]

      def self.perform(user_id, ...)
        # Lock the job for the given user.
        # If another job for this user is already in progress,
        # Redis::Lock::LockTimeout is raised and the job is requeued.
        Redis::Lock.new("my_job.user##{user_id}",
          # Let the lock expire after 1 second so it cannot be held forever.
          :expiration => 1,
          # We don't want to wait for the lock, just requeue the job as fast as possible.
          :timeout => 0.1
        ).lock do
          # do your stuff here ...
        end
      end
    end
I am using Redis::Lock from https://github.com/nateware/redis-objects here (it encapsulates the pattern from http://redis.io/commands/setex).
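For completeness, a small usage sketch (the user objects are just examples, and I leave out the job's other arguments): a second job for the same user that hits the lock raises Redis::Lock::LockTimeout, resque-retry catches it and re-enqueues the job until the lock is free.

    Resque.enqueue(MyJob, user1.id)  # acquires the lock my_job.user#<id> and runs
    Resque.enqueue(MyJob, user1.id)  # hits the lock, is re-enqueued until the first job is done
    Resque.enqueue(MyJob, user2.id)  # different lock key, can run in parallel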