Update: `TimedRotatingFileHandler` is not working properly when I use multiprocessing. What am I supposed to do for logging with multiprocessing?
I wrote my own `Logger` class, shown below, and use it as a module in all my other Python scripts.
import logging
import logging.handlers


class Logger:

    DEFAULT_LOG_OUTPUT = "/home/haifzhan/"

    def __init__(self, logger_name, log_file_name, log_dir=DEFAULT_LOG_OUTPUT, log_level=logging.DEBUG):
        self.logger = logging.getLogger(logger_name)
        self.formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

        #self.file_handler = logging.FileHandler(log_dir + log_file_name)
        file_path = log_dir + log_file_name
        self.file_handler = logging.handlers.TimedRotatingFileHandler(file_path, when='H', backupCount=30)
        self.file_handler.setFormatter(self.formatter)
        self.logger.setLevel(log_level)
        self.logger.addHandler(self.file_handler)

        self.console_handler = logging.StreamHandler()
        self.console_handler.setFormatter(self.formatter)
        self.console_handler.setLevel(logging.DEBUG)
        self.logger.addHandler(self.console_handler)

    def get_logger(self):
        return self.logger
At the top of my Python script, I create an instance of `Logger`:
`logger = Logger("logger name", "logfile.log", log_dir=LOG_DIR, log_level=logging.INFO).get_logger()` # always put it at the top of my script
It worked perfectly when I was using `FileHandler`. Unfortunately, it omits logging lines after I switched to `TimedRotatingFileHandler`. Log file rotation works as it is supposed to, but not all lines get logged. The console logging works fine, so how can that be?
self.file_handler = logging.FileHandler(log_dir + log_file_name)  # before: all lines logged
self.file_handler = logging.handlers.TimedRotatingFileHandler(file_path, when='H', backupCount=30)  # after: lines go missing
Can anyone help me solve this?
Don't write to the file from all the processes. Instead, put the log messages on some kind of Queue and have one dedicated process (the main one or a special one) perform the actual logging. This should remove the race conditions between the processes and the problems they cause.
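If your Python ships `logging.handlers.QueueHandler` and `QueueListener` (added in Python 3.2), the standard library already implements exactly this pattern, so you may not need a custom handler at all. A minimal sketch under that assumption; the names `worker` and `log_queue` are only illustrative:

import logging
import logging.handlers
import multiprocessing

def worker(log_queue):
    # Workers never touch the log file; they only put records on the queue.
    logger = logging.getLogger("worker")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.info("hello from %s", multiprocessing.current_process().name)

if __name__ == "__main__":
    log_queue = multiprocessing.Queue()
    # The listener in the main process is the only writer to the file,
    # so rotation cannot race with the other processes.
    file_handler = logging.handlers.TimedRotatingFileHandler("logfile.log", when='H', backupCount=30)
    file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    listener = logging.handlers.QueueListener(log_queue, file_handler)
    listener.start()

    processes = [multiprocessing.Process(target=worker, args=(log_queue,)) for _ in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    listener.stop()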
Given that you have already set up a `Logger` class, the implementation should be quite easy. You can have a global / singleton instance of the logging `Queue` (where every `Logger` instance puts its records) and manage the actual logging from a single central process.
Edit: A possible approach would be to use a special handler:
from logging import Handler


class QueueLogger(Handler):
    def __init__(self, log_queue):
        """
        Initialize the handler with the logging Queue.
        """
        Handler.__init__(self)
        self.log_queue = log_queue

    def emit(self, record):
        # Do not write anywhere; just hand the record off to the hub.
        self.log_queue.put(record)
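Wiring that into the question's `Logger` class could look like this; a sketch, assuming `log_queue` is a `multiprocessing.Queue` created before the worker processes are started:

import logging
import multiprocessing

log_queue = multiprocessing.Queue()  # create once, share with every process

logger = logging.getLogger("worker logger")
logger.setLevel(logging.INFO)
logger.addHandler(QueueLogger(log_queue))  # replaces the TimedRotatingFileHandler
logger.info("this record travels over the queue")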
That would allow you to put the record (which contains the log level and extra information) on a queue. On the other side you can have a `HubLogger` which would do something like:
while True:
    r = log_queue.get()
    my_handler.emit(r)
And `my_handler` can be a `TimedRotatingFileHandler` or whatever handler you want.
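Put together, the hub side might look like the sketch below; the `None` sentinel used for shutdown is one possible convention, not part of the original answer:

import logging.handlers

def hub(log_queue):
    # The hub is the only process that touches the file, so rotation is safe.
    my_handler = logging.handlers.TimedRotatingFileHandler("logfile.log", when='H', backupCount=30)
    my_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    while True:
        r = log_queue.get()
        if r is None:  # sentinel: stop logging and shut down
            break
        my_handler.emit(r)
    my_handler.close()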
Credit to @unutbu, who had already suggested the "Hub" approach in the comments.