Tags: django, celery, amazon-sqs, production-environment

Celery task worker not updating in production


I have set up a Django project on an EC2 instance with SQS as the broker for Celery, running under Supervisord. The problem started when I updated the parameter arguments for a task: on calling the task, I get an error in Sentry which clearly shows that the task is running the old code. How do I update it?

I have tried supervisorctl restart all, but there are still issues. The strange thing is that for some arguments the updated code runs, while for others it does not.

I checked the logs for the Celery worker, and it never receives the tasks that give me the error. I am running with -P solo, so there is only one worker (I ran ps auxww | grep 'celery worker' to check). Then who else is processing those tasks?

Any kind of help is appreciated.

P.S. I use RabbitMQ for local development, and it works totally fine there.


Solution

  • Never use the same queue in different environments. SQS delivers each message to a single consumer, so if a worker in another environment is attached to the same queue, it will silently consume some of your tasks and run them with its stale code. That is why only some calls hit the old code, and why your local worker's logs never show the failing tasks.
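
One way to enforce this with Celery on SQS is to derive a per-environment queue prefix from an environment variable, so each deployment gets its own queues. Below is a minimal sketch of a Django-style celery.py; DJANGO_ENV and myproject are placeholder names, and queue_name_prefix is the SQS transport option kombu uses to namespace queue names:

    import os

    from celery import Celery

    # Read the deployment environment from a variable set per machine.
    # DJANGO_ENV is a placeholder -- use whatever your deployment defines.
    ENVIRONMENT = os.environ.get("DJANGO_ENV", "development")

    app = Celery("myproject")  # "myproject" is a placeholder app name
    app.config_from_object("django.conf:settings", namespace="CELERY")

    # Prefix every SQS queue with the environment name so that production,
    # staging, and local workers can never consume from the same queue.
    app.conf.broker_transport_options = {
        "queue_name_prefix": f"{ENVIRONMENT}-",
    }

    # Make the default queue name explicit as well; with the prefix above,
    # the actual SQS queue becomes e.g. "production-celery".
    app.conf.task_default_queue = "celery"

Note that after switching to per-environment names, any messages already sitting in the old shared queue will still go to whichever worker reads them first, so drain or purge that queue before relying on the new layout.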