Tags: python, docker, flask, logging, tcp

How to configure the logging system in one file in Python


I have two files. The first is a TCP server and the second is a Flask app. They belong to the same project, but they run in separate Docker containers. Since they are part of the same project, they should write logs to the same file. I tried to create my own logging library and import it into both files, and I have tried lots of things.

First, I deleted the code below:

    if (logger.hasHandlers()):
        logger.handlers.clear()

When I delete it, I get the same logs twice.

My structure:

    docker-compose
    Dockerfile
    loggingLib.py
    app.py
    tcp.py
    requirements.txt
    ...

My latest logging code:

from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path

project_name = "proje_name"


def get_logger():
    if not os.path.exists("logs/"):
        os.makedirs("logs/")
    now = datetime.now()
    file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
    log_handler = RotatingFileHandler('logs/' + file_name, mode='a', maxBytes=10000000, backupCount=50)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s  ', '%d-%b-%y %H:%M:%S')

    formatter.converter = time.gmtime
    log_handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    if (logger.hasHandlers()):
        logger.handlers.clear()
    logger.addHandler(log_handler)
    return logger

It is working, but only for one file: if app.py starts first, only it writes logs and the other file doesn't write any logs.


Solution

  • Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than when running locally. For logs in particular, it's usually better to have your process log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else, or use a log collector like fluentd or logstash to manage them.
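
    For example, assuming Compose service names like app and tcp (placeholders for whatever your services are actually called), you can follow the log streams with the usual commands:

    docker logs -f <container-name>
    docker compose logs -f app tcp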

    In your Python code, you will usually want to configure the detailed logging setup once, at the top level, on the root logger:

    import logging
    def main():
      logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s  ',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
      )
      ...
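
    One detail worth knowing: basicConfig writes to standard error by default, which Docker captures just like stdout. If you specifically want standard output, basicConfig accepts a stream argument; a minimal sketch, reusing your format:

    import logging
    import sys

    def main():
        logging.basicConfig(
            stream=sys.stdout,  # write records to stdout instead of the default stderr
            format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
            datefmt='%d-%b-%y %H:%M:%S',
            level=logging.INFO,
        )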
    

    and in each individual module you can just get a local logger, which will inherit the root logger's setup:

    import logging
    LOGGER = logging.getLogger(__name__)
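
    Applied to your project, that could look roughly like the sketch below; the route, port, and log messages are placeholders, not taken from your code:

    # app.py
    import logging
    from flask import Flask

    LOGGER = logging.getLogger(__name__)
    app = Flask(__name__)

    @app.route("/")  # placeholder route
    def index():
        LOGGER.info("request received")
        return "ok"

    if __name__ == "__main__":
        # configure the root logger once, in the entry point
        logging.basicConfig(
            format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
            datefmt='%d-%b-%y %H:%M:%S',
            level=logging.INFO,
        )
        app.run(host="0.0.0.0", port=5000)

    tcp.py would do the same: call logging.basicConfig(...) in its own entry point and logging.getLogger(__name__) in its modules, so each container writes to its own stdout and Docker keeps the two streams separate.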
    

    With its default setup, Docker captures log messages into JSON files on disk. If a long-running container generates a large volume of log messages, this can exhaust local disk space (it has no effect on the memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:

    version: '3.8'
    services:
      app:
        image: ...
        logging:
          driver: local
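
    The local driver rotates files on its own, but if you want limits that mirror your RotatingFileHandler settings (10 MB, 50 files), it accepts options in the same style; the values below are simply carried over from your code:

    version: '3.8'
    services:
      app:
        image: ...
        logging:
          driver: local
          options:
            max-size: "10m"
            max-file: "50"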
    

    You can also configure log rotation on the default JSON File logging driver:

    version: '3.8'
    services:
      app:
        image: ...
        logging:
          driver: json-file # default, can be omitted
          options:
            max-size: "10m"
            max-file: "50"
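
    If you would rather set this once for the whole Docker host instead of per service, the same settings can also go into the daemon configuration (typically /etc/docker/daemon.json on Linux); they only apply to containers created after the daemon restarts. A sketch with the same illustrative values:

    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "10m",
        "max-file": "50"
      }
    }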
    

    You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.

    If you ever decide to run this application in a cluster environment like Kubernetes, that will have its own log-management system, but again one designed around containers that log directly to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.
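
    For reference, the Kubernetes counterpart of docker logs is kubectl logs; the names below are placeholders:

    kubectl logs -f deployment/your-app
    kubectl logs -f your-pod-name -c your-container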