Can anyone help me find the problem? The workers start fine, then Celery just dies for no reason, with exit code 0.
This is the celery part of my docker-compose.yml:
celery:
  build:
    context: ./backend
    dockerfile: ./celeryDockerfile
  container_name: celery
  env_file: .env
  healthcheck:
    test: celery -b amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@rabbit status || exit 1
    interval: 5s
    timeout: 10s
    retries: 3
  entrypoint: [ "/app/celery-entrypoint.sh" ]
  command: celery multi start 2 --logfile=logs/celery.log -l INFO
  depends_on:
    backend:
      condition: service_healthy
    rabbit:
      condition: service_healthy
And this is the log output:
celery | celery started
celery | celery multi v5.4.0 (opalescent)
celery | > Starting nodes...
celery | > celery1@3e42fac37fa0: OK
celery | > celery2@3e42fac37fa0: OK
Gracefully stopping... (press Ctrl+C again to force)
Container frontend Stopping
Container frontend Stopped
Container backend Stopping
Container backend Stopped
Container gateway Stopping
Container gateway Stopped
Container dashboard Stopping
Container dashboard Stopped
Container celery Stopping
Container celery Stopped
Container rabbit Stopping
Container rabbit Stopped
Container db Stopping
Container db Stopped
Container files Stopping
Container files Stopped
dependency failed to start: container celery exited (0)
Ignore the entrypoint part; there is only an echo "celery started" inside.
Turns out celery multi start daemonizes the workers and exits immediately, so Docker sees the container's main process finish with exit code 0 and tears everything down. I found this gist to help with my problem. To summarize what the code from the gist does:
- runs celery multi start with the related parameters
- sets a trap to capture TERM and INT signals
- runs tail -f on the log file to redirect the logs into the console, and saves tail -f's PID
- wait "$child" blocks on the tail -f process, which streams the logs indefinitely while still letting the trap catch any signal, e.g. TERM
- on shutdown, the trap runs celery multi stop and kills the tail -f process by its PID
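The steps above can be sketched as an entrypoint script. This is a minimal sketch of the gist's pattern, not the gist itself; the log path and worker count are copied from the compose file above, and the rest is an assumption about how the pieces fit together:

```shell
#!/bin/sh
# celery-entrypoint.sh (sketch): keep a foreground process alive for Docker
# while celery multi daemonizes, and shut the workers down cleanly on signals.

# 1. Start the workers; celery multi forks them and returns immediately.
celery multi start 2 --logfile=logs/celery.log -l INFO

# 2. On TERM/INT (docker stop / Ctrl+C), stop the workers and the log tail.
_shutdown() {
    celery multi stop 2
    kill "$child" 2>/dev/null
    exit 0
}
trap _shutdown TERM INT

# 3. Stream the worker log to the container's stdout in the background
#    and remember tail's PID so the trap can kill it later.
tail -f logs/celery.log &
child=$!

# 4. Block on tail -f; wait is interruptible, so the trap still fires
#    when Docker sends TERM, instead of the container exiting with 0.
wait "$child"
```

With this as the entrypoint, the compose command can stay as it is, since `tail -f` becomes PID 1's foreground child and the container no longer exits once `celery multi` detaches.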