I have a Lambda on AWS running dockerized Python code. When I run the Lambda twice back to back, it double-logs, but when it runs three or more times, or only once, each message is logged only once.
What do I mean by double logging?
START RequestId: <UUID>
{"message": "starting to do something"...}
{"message": "starting to do something"...}
{"message": "another log message with context data in it"...}
{"message": "another log message with context data in it"...}
END RequestId: <UUID>
REPORT RequestId: <UUID>
As you can see, Lambda logs its own system lines only once, but my code's log lines appear twice. I also checked the monitoring charts for the Lambda and it definitely ran once, yet the code logs are duplicated.
I create and use my logger like this:
import json
import logging

def make_logger():
    logger = logging.getLogger('my_name')
    ...
    return logger

def handler(event, context):
    logger = make_logger()
    logger.info("starting to do something")
    message = json.loads(event['Records'][0]['body'])
    # some other code and logic
    logger = logging.LoggerAdapter(logger, message)
    logger.info("another log message with context data in it")
I suspect something fishy is going on with how I build my logger. I don't think it's the LoggerAdapter, because the log line emitted before it is also duplicated.
Update 1: while adding print statements to my code as suggested in the comments, I could not see the print output, but something interesting happened:
I passed in data that would break the Lambda code before the adapter ran, to see the effect. The Lambda retried, and by the second retry I was getting up to 8 copies of each log line in a single run.
The answer was in a small part I left out of the question:
def make_logger():
    logger = logging.getLogger('my_name')
    logger.propagate = False
    formatter = UtilFormatter(get_default_log_format())
    log_handler = logging.StreamHandler()
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)  # appends another handler on every call
    logger.setLevel(logging.INFO)
    return logger
When a Lambda is invoked multiple times back to back, the invocations reuse the same warm Lambda instance. logging.getLogger('my_name') returns the same logger object each time, so every invocation runs make_logger and adds yet another handler to the handlers already attached; they are never cleared between invocations, and each attached handler emits its own copy of every record.
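The accumulation is easy to reproduce outside Lambda. In this minimal sketch (plain logging.Formatter stands in for UtilFormatter), calling make_logger twice in the same process leaves two handlers on the 'my_name' logger, so the second message is printed twice:

import logging
import sys

def make_logger():
    logger = logging.getLogger('my_name')
    logger.propagate = False
    log_handler = logging.StreamHandler(sys.stdout)
    log_handler.setFormatter(logging.Formatter('{"message": "%(message)s"}'))
    logger.addHandler(log_handler)  # appends a handler on every call
    logger.setLevel(logging.INFO)
    return logger

# Simulate two back-to-back invocations landing on the same warm instance:
make_logger().info("starting to do something")  # 1 handler -> printed once
make_logger().info("starting to do something")  # 2 handlers -> printed twice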
The fix was a simple if statement:
def make_logger():
    logger = logging.getLogger('my_name')
    logger.propagate = False
    if not logger.handlers:  # only attach a handler on the first (cold-start) call
        formatter = UtilFormatter(get_default_log_format())
        log_handler = logging.StreamHandler()
        log_handler.setFormatter(formatter)
        logger.addHandler(log_handler)
        logger.setLevel(logging.INFO)
    return logger
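Another common pattern, shown here only as a hedged sketch and not as my actual code, is to configure the logger once at module scope. The module body runs only on a cold start, so warm invocations reuse the already-configured logger instead of rebuilding it inside the handler (again, logging.Formatter stands in for UtilFormatter):

import logging

# Runs once per execution environment (cold start); warm invocations reuse it.
logger = logging.getLogger('my_name')
logger.propagate = False
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter('{"message": "%(message)s"}'))  # stand-in format
logger.addHandler(_handler)
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("starting to do something")
    # ... rest of the handler logic

Either approach works; the key point is that attaching a handler must not happen once per invocation.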