I have tasks that I run with Celery in my application. I set it up without trouble in my dev environment, and it was working perfectly with Redis as the broker. Yesterday I transferred the code to my server and set up Redis, but Celery cannot discover the tasks. The code is exactly the same.
My celery_conf.py file (initially celery.py):
# coding: utf-8
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'vertNews.settings')
app = Celery('vertNews')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Celery configuration in settings:
# Celery Configuration
CELERY_TASK_ALWAYS_EAGER = False
CELERY_BROKER_URL = SECRETS['celery']['broker_url']
CELERY_RESULT_BACKEND = SECRETS['celery']['result_backend']
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
__init__.py of the root app:
# coding: utf-8
from __future__ import absolute_import, unicode_literals
from .celery_conf import app as celery_app
__all__ = ['celery_app']
My tasks:
# coding=utf-8
from __future__ import unicode_literals, absolute_import
import logging
from celery.schedules import crontab
from celery.task import periodic_task
from .api import fetch_tweets, delete_tweets
logger = logging.getLogger(__name__)
@periodic_task(
    run_every=(crontab(minute=10, hour='0, 6, 12, 18, 23')),
    name="fetch_tweets_task",
    ignore_result=True)
def fetch_tweets_task():
    logger.info("Tweet download started")
    fetch_tweets()
    logger.info("Tweet download and summarization finished")


@periodic_task(
    run_every=(crontab(minute=13, hour=13)),
    name="delete_tweets_task",
    ignore_result=True)
def delete_tweets_task():
    logger.info("Tweet deletion started")
    delete_tweets()
    logger.info("Tweet deletion finished")
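As a side note, the `celery.task.periodic_task` decorator is deprecated in Celery 4.x; the same schedules can instead be declared on the app via `beat_schedule`. A minimal sketch of the equivalent configuration, assuming the `app` instance defined in celery_conf.py above (this is a config fragment, not the poster's actual code):

```python
from celery.schedules import crontab

# `app` is the Celery instance created in celery_conf.py.
app.conf.beat_schedule = {
    'fetch_tweets_task': {
        'task': 'fetch_tweets_task',
        'schedule': crontab(minute=10, hour='0,6,12,18,23'),
    },
    'delete_tweets_task': {
        'task': 'delete_tweets_task',
        'schedule': crontab(minute=13, hour=13),
    },
}
```

With this style the task functions are plain `@app.task`-decorated functions, and the schedule lives in one place next to the rest of the Celery settings.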
The output when I run Beat on the remote server (not working):
(trendiz) kenneth@bots:~/projects/verticals-news/src$ celery -A vertNews beat -l debug
Trying import production.py settings...
celery beat v4.0.2 (latentcall) is starting.
__ - ... __ - _
LocalTime -> 2017-04-03 13:55:49
Configuration ->
. broker -> redis://localhost:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%DEBUG
. maxinterval -> 5.00 minutes (300s)
[2017-04-03 13:55:49,770: DEBUG/MainProcess] Setting default socket timeout to 30
[2017-04-03 13:55:49,771: INFO/MainProcess] beat: Starting...
[2017-04-03 13:55:49,785: DEBUG/MainProcess] Current schedule:
[2017-04-03 13:55:49,785: DEBUG/MainProcess] beat: Ticking with max interval->5.00 minutes
[2017-04-03 13:55:49,785: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
The output when I run Beat on the dev server (working):
LocalTime -> 2017-04-03 14:16:19
Configuration ->
. broker -> redis://localhost:6379//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%DEBUG
. maxinterval -> 5.00 minutes (300s)
[2017-04-03 14:16:19,919: DEBUG/MainProcess] Setting default socket timeout to 30
[2017-04-03 14:16:19,919: INFO/MainProcess] beat: Starting...
[2017-04-03 14:16:19,952: DEBUG/MainProcess] Current schedule:
<ScheduleEntry: fetch_tweets_task fetch_tweets_task() <crontab: 36 0, 6, 12, 18, 22 * * * (m/h/d/dM/MY)>
<ScheduleEntry: delete_tweets_task delete_tweets_task() <crontab: 13 13 * * * (m/h/d/dM/MY)>
[2017-04-03 14:16:19,952: DEBUG/MainProcess] beat: Ticking with max interval->5.00 minutes
[2017-04-03 14:16:19,953: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
I'm running Python 3.5 and Celery 4.0.2 in both environments.
I don't know exactly what the problem was, but clearing all the *.pyc files in the project got rid of it. My guess is that stale bytecode from before the module rename (celery.py to celery_conf.py) was shadowing the real celery package.
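For anyone hitting the same thing, here is a minimal sketch of the cleanup step (run from the project root; Python simply regenerates the bytecode on the next import):

```python
# Delete all stale compiled bytecode files under the current directory.
import pathlib

for pyc in pathlib.Path('.').rglob('*.pyc'):
    pyc.unlink()
    print('removed', pyc)
```

After the cleanup, restart both the worker and Beat so they re-import the modules fresh.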