
Prevent a cron job from running multiple times in Nest.js on Docker


In Docker we have used deploy: replicas: 3 for our microservice. We have some cron jobs, and the problem is that each cron job is getting called 3 times, once per replica, which is not what we want; we want it to run only once. A sample cron in Nest.js:

  @Cron(CronExpression.EVERY_5_MINUTES)
  async runBiEventProcessor() {
    const calculationDate = new Date();
    Logger.log(`Bi Event Processor started at ${calculationDate}`);
    // ... event processing logic ...
  }

How can I run this cron only once without changing the replicas to 1?


Solution

  • This is a common problem whenever a cron or background job is part of an application that has multiple instances running concurrently.

    There are multiple ways to deal with this kind of scenario. The following are some workarounds if you don't have a concrete solution:

    1. Create a separate service dedicated to the background processing and ensure only one instance of it runs at a time.

    2. Expose the cron job as an API and trigger that API to start the background processing. The load balancer will hand the request over to only one instance, which ensures that only one instance handles the job. You will still need an external entity to hit the API on schedule, which can be in-house or third-party (see the controller sketch after this list).

    3. Use the repeatable jobs feature from Bull Queue, or any other tool or library that provides similar features. Bull hands a repeatable job over to a single active processor, which ensures the job is processed only once, by only one instance. Nest.js has a wrapper for Bull (@nestjs/bull); read more about repeatable jobs in the Bull documentation, and see the sketch after this list.

    4. Implement a custom locking mechanism. It is not as difficult as it sounds. Many schedulers in other frameworks work on similar principles to handle concurrency:

      • If you are using an RDBMS, make use of transactions and locking. Create a cron record in the database and acquire a lock on it as soon as the first instance enters; the other concurrent instances will fail or time out because they cannot acquire the lock. You will need to handle a few edge cases to make this approach bug-free (see the advisory-lock sketch after this list).
      • If you are using MongoDB, or any similar database that supports TTL (time-to-live) and unique indexes, insert a document with a unique constraint on one of its fields; concurrent instances trying to insert the same document will fail due to the database-level unique constraint. Also set a TTL index on the document so it is deleted after a configured time (see the TTL-index sketch after this list).

    These are workarounds to use if you don't have any other concrete options.
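
    A minimal sketch for option 2, exposing the job as an endpoint. The controller name and route are illustrative assumptions, not from the question:

      import { Controller, Logger, Post } from '@nestjs/common';

      @Controller('jobs')
      export class BiEventProcessorController {
        // An external scheduler (in-house or third-party) calls this endpoint
        // every 5 minutes; the load balancer routes each call to one replica.
        @Post('bi-event-processor')
        async runBiEventProcessor(): Promise<{ startedAt: string }> {
          const calculationDate = new Date();
          Logger.log(`Bi Event Processor started at ${calculationDate}`);
          // ... event processing logic ...
          return { startedAt: calculationDate.toISOString() };
        }
      }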
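
    A minimal sketch for option 3 using the @nestjs/bull wrapper; the queue name and job name are assumptions, and Redis connection and module wiring are omitted. Because the repeatable-job schedule lives in Redis and Bull deduplicates a job added with identical repeat options, every replica can safely register it:

      import { Injectable, Logger, OnModuleInit } from '@nestjs/common';
      import { InjectQueue, Process, Processor } from '@nestjs/bull';
      import { Queue } from 'bull';

      @Injectable()
      export class BiEventSchedulerService implements OnModuleInit {
        constructor(@InjectQueue('bi-events') private readonly queue: Queue) {}

        async onModuleInit() {
          // Safe to call from every replica: Bull keys repeatable jobs by
          // name + repeat options, so this schedules the job exactly once.
          await this.queue.add(
            'run-bi-event-processor',
            {},
            { repeat: { cron: '*/5 * * * *' } }, // every 5 minutes
          );
        }
      }

      @Processor('bi-events')
      export class BiEventProcessor {
        // Only one active processor picks up each scheduled run.
        @Process('run-bi-event-processor')
        async handle() {
          Logger.log(`Bi Event Processor started at ${new Date()}`);
          // ... event processing logic ...
        }
      }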
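
    A minimal sketch for the RDBMS variant of option 4, here using a Postgres advisory lock through the pg client; the lock key, pool setup, and module wiring are assumptions, and a cron-records table with transactions works on the same principle:

      import { Injectable, Logger } from '@nestjs/common';
      import { Cron, CronExpression } from '@nestjs/schedule';
      import { Pool } from 'pg';

      const pool = new Pool(); // reads PG* environment variables
      const JOB_LOCK_KEY = 421337; // arbitrary app-wide key for this job

      @Injectable()
      export class BiEventProcessorService {
        @Cron(CronExpression.EVERY_5_MINUTES)
        async runBiEventProcessor() {
          const client = await pool.connect();
          try {
            // pg_try_advisory_lock returns true for exactly one session;
            // the other replicas get false and skip this tick.
            const { rows } = await client.query(
              'SELECT pg_try_advisory_lock($1) AS locked',
              [JOB_LOCK_KEY],
            );
            if (!rows[0].locked) {
              return; // another replica is already running the job
            }
            try {
              Logger.log(`Bi Event Processor started at ${new Date()}`);
              // ... event processing logic ...
            } finally {
              await client.query('SELECT pg_advisory_unlock($1)', [JOB_LOCK_KEY]);
            }
          } finally {
            client.release();
          }
        }
      }

    One of the edge cases mentioned above: if the job finishes before a slower replica's tick fires, that replica can still acquire the lock and run the job again, so you may also want to record the last run time and skip runs that are too recent.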
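
    A minimal sketch for the MongoDB variant of option 4; the collection and field names are assumptions. All replicas compute the same key for a given 5-minute tick, so the unique index lets exactly one insert succeed, and the TTL index removes the lock document later:

      import { Injectable, Logger } from '@nestjs/common';
      import { Cron, CronExpression } from '@nestjs/schedule';
      import { Collection, MongoError } from 'mongodb';

      @Injectable()
      export class BiEventProcessorService {
        constructor(private readonly locks: Collection) {}

        @Cron(CronExpression.EVERY_5_MINUTES)
        async runBiEventProcessor() {
          // Idempotent setup: a unique key per tick, plus a TTL index that
          // expires lock documents 10 minutes after creation.
          await this.locks.createIndex({ jobKey: 1 }, { unique: true });
          await this.locks.createIndex(
            { createdAt: 1 },
            { expireAfterSeconds: 600 },
          );

          // Every replica derives the same key for the same 5-minute tick.
          const tick = Math.floor(Date.now() / (5 * 60 * 1000));
          try {
            await this.locks.insertOne({
              jobKey: `bi-event-processor:${tick}`,
              createdAt: new Date(),
            });
          } catch (err) {
            // Duplicate key error 11000: another replica won this tick.
            if (err instanceof MongoError && err.code === 11000) return;
            throw err;
          }

          Logger.log(`Bi Event Processor started at ${new Date()}`);
          // ... event processing logic ...
        }
      }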