Tags: django, celery, cookiecutter-django

Celery and Django, queries cause ProgrammingError


I'm building a small Django project with cookiecutter-django and I need to run tasks in the background. Even though I set up the project with cookiecutter, I'm running into issues with Celery.

Let's say I have a model class called Job with three fields: a default primary key, a UUID and a date:

import uuid

from django.db import models

class Job(models.Model):
    # the default "id" primary key is added automatically by Django
    access_id = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)
    date = models.DateTimeField(auto_now_add=True)

Now if I do the following in a Django view everything works fine:

job1 = Job()
job1.save()
logger.info("Created job {}".format(job1.access_id))

job2 = Job.objects.get(access_id=job1.access_id)
logger.info("Retrieved job {}".format(job2.access_id))

If I create a Celery task that does exactly the same, I get an error:

django.db.utils.ProgrammingError: relation "metagrabber_job" does not exist
LINE 1: INSERT INTO "metagrabber_job" ("access_id", "date") VALUES ('e8a2...

Similarly this is what my Postgres docker container says at that moment:

postgres_1      | 2018-03-05 18:23:23.008 UTC [85] STATEMENT:  INSERT INTO "metagrabber_job" ("access_id", "date") VALUES ('e8a26945-67c7-4c66-afd1-bbf77cc7ff6d'::uuid, '2018-03-05T18:23:23.008085+00:00'::timestamptz) RETURNING "metagrabber_job"."id"

Interestingly enough, if I look into my Django admin I do see that a Job object was created, but it carries a different UUID than the one in the logs.

If I then set CELERY_ALWAYS_EAGER = True to make the Django process execute the task synchronously instead of handing it to a Celery worker: voila, it works again without error. But running the tasks inside Django isn't the point.
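For anyone trying to reproduce this debugging step, the eager-mode switch is just a settings change. A minimal sketch for a local settings module (Celery 4 also accepts the lowercase `task_always_eager` name; the second flag is optional but makes task exceptions visible in the caller):

```python
# settings/local.py -- for local debugging only, never in production

# Run tasks inline in the Django process instead of sending them to a worker.
CELERY_ALWAYS_EAGER = True

# Re-raise exceptions from eagerly executed tasks in the calling process.
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
```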

I did quite a bit of searching and only found similar issues where the solution was to run manage.py migrate. I've already done that, and it can't be the solution anyway: otherwise Django wouldn't be able to execute the problematic code outside of Celery either.

So what's going on? I'm getting this exact same behavior for all my model objects.

edit: Just in case it matters, I'm using Django 2.0.2 and Celery 4.1.


Solution

  • I found my mistake. If you are sure that your database is migrated properly and you still get errors like the above, it may well be that the process can't connect to the database: the database host is reachable, but not the database itself.

    In other words, your configuration is probably broken.

    Why it was misconfigured: with cookiecutter-django there is a known issue where Celery complains about running as root on Mac, so I set the environment variable C_FORCE_ROOT in my docker-compose file. [Only for local development, you should never do this in production!] Read about the issue here: https://github.com/pydanny/cookiecutter-django/issues/1304

    The relevant parts of the config looked like this:

    django: &django
      build:
        context: .
        dockerfile: ./compose/local/django/Dockerfile
      depends_on:
        - postgres
      volumes:
        - .:/app
      environment:
        - POSTGRES_USER=asdfg123456
        - USE_DOCKER=yes
      ports:
        - "8000:8000"
        - "3000:3000"
      command: /start.sh
    
    celeryworker:
      <<: *django
      depends_on:
        - redis
        - postgres
      environment:
        - C_FORCE_ROOT=true
      ports: []
      command: /start-celeryworker.sh
    

    However, setting this environment variable via the docker-compose file prevented the django service's environment variables from being set on the celeryworker container. The YAML merge key (<<: *django) only does a shallow merge: because celeryworker defines its own environment key, that key replaces the anchor's entire environment list instead of being combined with it, leaving me with a nonexistent database configuration.
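    The override behavior is easy to see once you know that a YAML merge key is equivalent to a shallow dictionary update. A small Python sketch of the effect (the dicts mirror the compose file above):

    ```python
    # What the "django" anchor contributes to the celeryworker service:
    django_anchor = {
        "environment": ["POSTGRES_USER=asdfg123456", "USE_DOCKER=yes"],
        "command": "/start.sh",
    }

    # What celeryworker defines locally:
    celeryworker_overrides = {
        "environment": ["C_FORCE_ROOT=true"],  # replaces the whole list above
        "command": "/start-celeryworker.sh",
    }

    # YAML merge keys behave like a shallow dict update: local keys win
    # wholesale, lists are NOT concatenated.
    celeryworker = {**django_anchor, **celeryworker_overrides}
    print(celeryworker["environment"])  # ['C_FORCE_ROOT=true'] -- POSTGRES_USER is gone
    ```

    So the worker container never saw POSTGRES_USER, and Django on that container ended up pointing at a database that doesn't exist.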

    I added the POSTGRES_USER variable to that container's environment manually and things started working again. A stupid mistake on my end, but I hope this answer saves someone some time.
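    For completeness, the fixed celeryworker section just repeats the database variable explicitly (same compose file as above; any other variables the django service needs would have to be repeated the same way):

    ```yaml
    celeryworker:
      <<: *django
      depends_on:
        - redis
        - postgres
      environment:
        - C_FORCE_ROOT=true
        # must be repeated here: the merge key does not combine the lists
        - POSTGRES_USER=asdfg123456
      ports: []
      command: /start-celeryworker.sh
    ```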