Tags: django, ubuntu, rabbitmq, celery, supervisord

Django celery daemon gives 'supervisor FATAL can't find command', but path is correct


Overview:

I'm trying to run Celery as a daemon so my tasks can send emails. It worked fine in development, but not in production. My website is up now and every function works fine (no Django errors), but the tasks aren't going through because the daemon isn't set up properly, and I get this error on Ubuntu 16.04:

project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'

Installed programs / hardware, and what I've done so far:

I'm using Django 2.0.5, Python 3.5, Ubuntu 16.04, RabbitMQ, and Celery, all on a VPS, and I'm using a venv for all of it. I've installed Supervisor too, and it's running when I check with sudo service --status-all (it has a + next to it). Erlang is also installed, and when I check with top, RabbitMQ is running. sudo service rabbitmq-server status shows RabbitMQ is active too.

Originally, I followed the directions on the Celery website, but they were very confusing and I couldn't get them to work after ~40 hours of testing/reading/watching other people's solutions. Feeling very aggravated and defeated, I switched to the directions here to get the daemon set up, hoping to get somewhere. I have gotten further, but now I get the error above.

I read through the Supervisor documentation, checked the process states and program settings to try to debug the problem, and I'm lost, because as far as I can tell my paths are correct according to the documentation.

Here's my file structure stripped down:

home/
    my_user/               # is a superuser
        portfolio-project/
            project/
                __init__.py
                celery.py
                settings.py     # this file is in here too
            app_1/
            app_2/
            ...
            ...
        logs/
            celery.log
        myvenv/
            bin/
                celery       # executable file, is colored green
    celery_user_nobody/      # not a superuser, but created for celery tasks
etc/
    supervisor/
        conf.d/
            project_celery.conf
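
Just to double-check the claim that the path is correct, I threw together the little sanity-check sketch below (a throwaway script run as my_user, so os.access only reflects my_user's own permissions; the directory modes it prints are what would matter for celery_user_nobody):

# Throwaway sanity check: confirm the celery binary exists and is executable,
# and print the permission bits of every parent directory, since Supervisor
# starts the worker as celery_user_nobody rather than my_user.
import os
import stat

path = '/home/my_user/myvenv/bin/celery'
print(path, 'exists:', os.path.exists(path),
      'executable (by current user):', os.access(path, os.X_OK))

parent = os.path.dirname(path)
while parent not in ('/', ''):
    mode = stat.S_IMODE(os.stat(parent).st_mode)
    print(parent, oct(mode),
          'others can traverse:', bool(mode & stat.S_IXOTH))
    parent = os.path.dirname(parent)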

Here is my project_celery.conf:

[program:project_celery]
command=/home/my_user/myvenv/bin/celery worker -A project --loglevel=INFO
directory=/home/my_user/portfolio-project/project
user=celery_user_nobody
numprocs=1
stdout_logfile=/home/my_user/logs/celery.log
stderr_logfile=/home/my_user/logs/celery.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
stopasgroup=true
priority=1000
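
A quick way to rule out a typo in command or directory is to run the same pair by hand. Something like this throwaway sketch works (it is not part of the config; celery report imports the app and then exits, so it's easier to script than celery worker):

# Throwaway check: run the same executable / working-directory pair that
# project_celery.conf uses. "report" imports the app and exits, so any
# "not found" or import error should show up here too.
import subprocess

result = subprocess.run(
    ['/home/my_user/myvenv/bin/celery', '-A', 'project', 'report'],
    cwd='/home/my_user/portfolio-project/project',
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,   # text mode, works on Python 3.5
)
print(result.returncode)
print(result.stdout or result.stderr)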

Here's my project/__init__.py:

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']

And here's my celery.py:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')

app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
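
For reference, the email tasks themselves live in the apps' tasks.py files so that app.autodiscover_tasks() above can find them. A stripped-down sketch of what one looks like (the real task names and arguments are different):

# app_1/tasks.py -- stripped-down sketch of an email task that
# autodiscover_tasks() picks up automatically.
from __future__ import absolute_import, unicode_literals

from celery import shared_task
from django.core.mail import send_mail


@shared_task
def send_notification_email(subject, message, recipient):
    # Executed inside the Celery worker once the daemon is running.
    send_mail(subject, message, 'noreply@example.com', [recipient])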

UPDATE: Here is my settings.py:

This is the only Celery setting I have, because the example in the Django instructions on the Celery website shows nothing more (unless I were to use something like Redis). I put this in my settings.py file because those instructions say you can:

CELERY_BROKER_URL = 'amqp://localhost'

UPDATE: I created the rabbitmq user:

$ sudo rabbitmqctl add_user rabbit_user1 mypassword
$ sudo rabbitmqctl add_vhost myvhost
$ sudo rabbitmqctl set_user_tags rabbit_user1 mytag
$ sudo rabbitmqctl set_permissions -p myvhost rabbit_user1 ".*" ".*" ".*"
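
(If I end up pointing Celery at this user instead of the default guest account, my understanding is that the broker URL in settings.py would become something like the line below; I haven't actually switched to it yet.)

# settings.py -- hypothetical broker URL using the rabbit_user1 and myvhost
# created above, instead of the default guest user on the '/' vhost.
CELERY_BROKER_URL = 'amqp://rabbit_user1:mypassword@localhost:5672/myvhost'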

And when I do sudo rabbitmqctl status, I get Status of node 'rabbit@django2-portfolio', but oddly, I don't see any running nodes listed like the following, even though the directions here show that I should:

{nodes,[rabbit@myhost]},
{running_nodes,[rabbit@myhost]}]

Steps I followed:

  1. I created the .conf and .log files in the places I said.
  2. sudo systemctl enable supervisor
  3. sudo systemctl start supervisor
  4. sudo supervisorctl reread
  5. sudo supervisorctl update # no errors up to this point
  6. sudo supervisorctl status

And after 6 I get this error:

project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'

UPDATE: I checked the error logs, and I have multiple instances of the following in /var/log/rabbitmq/rabbit@django2-portfolio.log:

=INFO REPORT==== 9-Aug-2018::18:26:58 ===
connection <0.690.0> (127.0.0.1:42452 -> 127.0.0.1:5672): user 'guest' authenticated and granted access to vhost '/'

=ERROR REPORT==== 9-Aug-2018::18:29:58 ===
closing AMQP connection <0.687.0> (127.0.0.1:42450 -> 127.0.0.1:5672):
missed heartbeats from client, timeout: 60s
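
(Side note: I haven't changed anything based on these heartbeat lines, and I don't know whether they're related to the FATAL error, but from what I've read the broker heartbeat is tunable, and with the CELERY_ namespace from celery.py it would presumably go in settings.py roughly like this:)

# settings.py -- possibly unrelated to the FATAL error; the broker heartbeat
# can be tuned here (0 disables heartbeats entirely).
CELERY_BROKER_HEARTBEAT = 0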

Closing statement:

Anyone have any idea what's going on? When I look at the absolute paths in my project_celery.conf file, everything looks correct, but something's obviously wrong. Looking things over more, RabbitMQ says no nodes are running when I do sudo rabbitmqctl status, but Celery does show one when I do celery status (it reports OK 1 node online).

Any help would be greatly appreciated. I even made this account specifically because I had this problem. It's driving me mad. And if anyone needs any more info, please ask. This is my first time deploying anything, so I'm not a pro.


Solution

  • Can you try either of the following in your project_celery.conf:

    command=/home/my_user/myvenv/bin/celery worker -A celery --loglevel=INFO
    directory=/home/my_user/portfolio-project/project
    

    or

    command=/home/my_user/myvenv/bin/celery worker -A project.celery --loglevel=INFO
    directory=/home/my_user/portfolio-project/
    

    Additionally, in celery.py can you add the parent folder of the project module to sys.path (see the sketch below), or make sure that you've packaged your deploy properly and have installed it via pip or otherwise?

    I suspect (from your comments with @Jack Shedd) that you're referring to a non-existent project due to where directory is set relative to the magic celery.py file.
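
    For the sys.path part, here's a rough sketch of what I mean at the top of celery.py, assuming the layout from your question (i.e. the parent of the project package is /home/my_user/portfolio-project):

    # Top of project/celery.py -- rough sketch: make the folder containing the
    # "project" package importable regardless of the working directory
    # Supervisor starts the worker in.
    import os
    import sys

    sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))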