Tags: python, django, supervisord, celeryd

Celery worker can't complete job when running as daemon


I have a Celery setup with Django and Redis. When I start Celery manually as a regular user, e.g. celery multi start 123_work -A 123 --pidfile="/var/log/celery/%n.pid" --logfile="/var/log/celery/%n.log" --workdir="/data/ports/dj_dois" --loglevel=INFO, jobs work fine. But if I run Celery via celeryd or supervisor, some jobs give me an error:

[2015-12-28 09:10:59,229: ERROR/MainProcess] Unrecoverable error: UnpicklingError('NEWOBJ class argument has NULL tp_new',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/__init__.py", line 206, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 374, in start
    return self.obj.start()
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/consumer.py", line 278, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/consumer.py", line 821, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/loops.py", line 76, in asynloop
    next(loop)
  File "/usr/local/lib/python3.4/dist-packages/kombu/async/hub.py", line 328, in create_loop
    next(cb)
  File "/usr/local/lib/python3.4/dist-packages/celery/concurrency/asynpool.py", line 258, in _recv_message
    message = load(bufv)
_pickle.UnpicklingError: NEWOBJ class argument has NULL tp_new

[2015-12-28 09:10:59,317: ERROR/MainProcess] Task db_select_task[dd5af67d-6bbe-49bb-8f13-59d0a0a9717b] raised unexpected: WorkerLostError('Worker exited prematurely: exitcode 0.',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/__init__.py", line 206, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 374, in start
    return self.obj.start()
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/consumer.py", line 278, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.4/dist-packages/celery/bootsteps.py", line 123, in start
    step.start(parent)
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/consumer.py", line 821, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.4/dist-packages/celery/worker/loops.py", line 76, in asynloop
    next(loop)
  File "/usr/local/lib/python3.4/dist-packages/kombu/async/hub.py", line 328, in create_loop
    next(cb)
  File "/usr/local/lib/python3.4/dist-packages/celery/concurrency/asynpool.py", line 258, in _recv_message
    message = load(bufv)
_pickle.UnpicklingError: NEWOBJ class argument has NULL tp_new

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/billiard/pool.py", line 1175, in mark_as_worker_lost
    human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: exitcode 0.

My Celery version report:

software -> celery:3.1.19 (Cipater) kombu:3.0.32 py:3.4.2
            billiard:3.3.0.22 py-amqp:1.4.8
platform -> system:Linux arch:64bit, ELF imp:CPython
loader   -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled

Python - 3.4

Django - 1.8.7

Redis server - 2.8.17

Example of a job that gives me the error:

import cx_Oracle
from celery import shared_task
from sqlalchemy import pool  # SQLAlchemy's legacy DBAPI proxy pool

@shared_task(name='db_select_task')
def db_select_task(arg1, arg2):
    conn_pool = pool.manage(cx_Oracle)
    db = conn_pool.connect("user/pass@db")
    try:
        cursor = db.cursor()
        cursor.execute("sql")  # actual query elided
        data = cursor.fetchall()
    except Exception:
        return 'Error: with db'
    finally:
        cursor.close()
        db.close()
    return data
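Since the task runs fine from a shell but not under the daemon, the difference is almost certainly the process environment. A small helper like the following (a sketch; the variable names are the ones the Oracle client typically needs, adjust for your install) can be called from inside a task to see which Oracle-related variables are missing in the daemon's environment:

```python
import os

# Variables the cx_Oracle client typically requires (assumption; adjust to your setup).
REQUIRED_ORACLE_VARS = ('ORACLE_HOME', 'LD_LIBRARY_PATH')

def missing_oracle_env(environ=os.environ):
    """Return the Oracle-related variables absent from the given environment."""
    return [name for name in REQUIRED_ORACLE_VARS if name not in environ]
```

Returning the result of missing_oracle_env() from a trivial task run under celeryd or supervisor shows immediately whether the daemon inherited the Oracle paths.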

Solution

  • The problem was the Oracle client paths missing from the celeryd daemon's environment. Adding the extra exports to the celeryd config fixed it.
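A minimal sketch of what those exports might look like in the celeryd environment file (the Oracle install path below is a placeholder; use the location of your actual Oracle client):

```shell
# /etc/default/celeryd -- sourced by the celeryd init script.
# Placeholder path for illustration; point ORACLE_HOME at your client install.
export ORACLE_HOME="/usr/lib/oracle/client64"
export LD_LIBRARY_PATH="$ORACLE_HOME/lib:$LD_LIBRARY_PATH"
```

The same effect can be achieved under supervisor with the environment= option in the program section, since supervisor-managed processes do not inherit a login shell's profile.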