python-2.7, tornado, apscheduler

Logging not working in callback of tornado.concurrent.Future


I'm using APScheduler's Tornado scheduler. Whenever a task is called, I need to log its exceptions. To handle exceptions I've created a decorator that gets the future object and takes the appropriate action. It works fine, except that it doesn't log anything inside the future's callback function. I've stepped through the callback with pdb, and the logger instance's properties are as expected, but still nothing is logged to the file. The code is:

import logging
from functools import wraps
from tornado import gen

logger = logging.getLogger('myLogger')

def handle_callback(result):
    logger.debug(result.result())
    logger.info(result.result())
    logger.error(result.result())
    logger.exception(result.result())
    logger.exception(result.exc_info())


def handle_exceptions():
    def wrapped(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            future = fn(*args, **kwargs)
            future.add_done_callback(handle_callback)
        return wrapper
    return wrapped


@handle_exceptions()
@gen.coroutine
def run_task(job_id):
    logger.info('executing job {}'.format(job_id))
    raise MyException

P.S. I'm using Python 2.7.


Solution

  • The wrapper is missing the return of the future; without it, the IOLoop won't continue if there is an async call inside the coroutine. Let's add an async call to demonstrate:

    @handle_exceptions
    @gen.coroutine
    def run_task(job_id):
        logger.info('executing job {}'.format(job_id))
        yield gen.sleep(1)
        raise Exception('blah')
    

    As you may have noticed, I've removed the () from the decorator to simplify it; it doesn't need to be nested. The decorator could look like this:

    def handle_exceptions(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            future = fn(*args, **kwargs)
            future.add_done_callback(handle_callback)
            return future  # <<< we need this
        return wrapper
    

    Next, the callback calls Future.result(), which immediately re-raises the exception. So it's better to check whether there is an exception in the first place:

    def handle_callback(result):
        exc_info = result.exc_info()
        if exc_info:
            logger.error('EXCEPTION %s', exc_info)
    
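    If you'd rather see the formatted traceback in the log instead of the raw tuple, you can pass the tuple to the logger via exc_info. This is a small variation on the callback above, not required for the fix:

    def handle_callback(result):
        exc_info = result.exc_info()
        if exc_info:
            # passing the (type, value, traceback) tuple as exc_info makes
            # the logging module render the full traceback in the record
            logger.error('EXCEPTION in task', exc_info=exc_info)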

    Putting this together in simple example:

    import logging
    from functools import wraps, partial
    from tornado import gen, ioloop
    
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    
    
    def handle_callback(result):
        exc_info = result.exc_info()
        if exc_info:
            logger.error('EXCEPTION %s', exc_info)
    
    
    def handle_exceptions(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            future = fn(*args, **kwargs)
            future.add_done_callback(handle_callback)
            return future
        return wrapper
    
    
    @handle_exceptions
    @gen.coroutine
    def run_task(job_id):
        logger.info('executing job {}'.format(job_id))
        yield gen.sleep(1)
        raise Exception('blah')
    
    
    ioloop.IOLoop.instance().run_sync(partial(run_task, 123))
    
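    If the task is scheduled through APScheduler's TornadoScheduler, as in the question, the decorated coroutine can be registered as a job in the usual way (in that case you start the IOLoop instead of calling run_sync). A sketch, with an interval trigger assumed just as an example:

    from apscheduler.schedulers.tornado import TornadoScheduler

    scheduler = TornadoScheduler()
    # the decorator returns the coroutine's future, so the exception-logging
    # callback is attached every time the scheduler fires the job
    scheduler.add_job(run_task, 'interval', seconds=5, args=[123])
    scheduler.start()
    ioloop.IOLoop.instance().start()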

    Since the question does not provide any info about the logging setup itself, I've used the standard (root) logger with the level changed. The code output is:

    INFO:root:executing job 123
    ERROR:root:EXCEPTION (<type 'exceptions.Exception'>, Exception('blah',), <traceback object at 0x7f807df07dd0>)
    Traceback (most recent call last):
      File "test.py", line 31, in <module>
        ioloop.IOLoop.instance().run_sync(partial(run_task, 123))
      File "/tmp/so/lib/python2.7/site-packages/tornado/ioloop.py", line 458, in run_sync
        return future_cell[0].result()
      File "/tmp/so/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
        raise_exc_info(self._exc_info)
      File "/tmp/so/lib/python2.7/site-packages/tornado/gen.py", line 1069, in run
        yielded = self.gen.send(value)
      File "test.py", line 28, in run_task
        raise Exception('blah')
    Exception: blah
    

    If there is any other issue, I presume it's related to the logging config/setup.
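
    For completeness, a file-based setup for the 'myLogger' logger from the question could look something like the sketch below; the log file path, level and format are assumptions, since the question doesn't show the real config:

    import logging

    logger = logging.getLogger('myLogger')
    logger.setLevel(logging.DEBUG)
    # assumed log file path and format; adjust to your real setup
    handler = logging.FileHandler('/tmp/tasks.log')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s %(name)s %(message)s'))
    logger.addHandler(handler)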