I'm offloading more and more frequently used tasks into small libraries of mine. However, I haven't figured out the best practice for logging. There are lots of resources explaining something like:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.')
But how do I deal with that in packages intended for reuse? Should I just import logging and log messages in each package, but do the configuration once in the main program that imports them? Obviously I want to set the logging format and level in my main app, not in every single package.
So how do I make packages "ready for logging"?
You're pretty much spot on. Handlers are meant to be defined on the app side; in your libraries, only create loggers. Using the root logger may be easy but quickly gets out of hand, so give each logger a name derived from its module:
# mylibA.py
import logging
logger = logging.getLogger(__name__)
logger.info('hello from mylibA')
# mylibB.py
import logging
logger = logging.getLogger(__name__)
logger.info('hello from mylibB')
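One extra step the stdlib recommends for reusable libraries: attach a NullHandler to the library's logger so that importing the library into an app that hasn't configured logging at all produces no spurious output or "no handlers could be found" warnings. A minimal sketch (the module name mylibC and the function are made up for illustration):

```python
# mylibC.py
import logging

logger = logging.getLogger(__name__)
# Swallow records when the importing app has not configured logging;
# if the app *has* configured handlers, records still reach them via
# normal propagation to the root logger.
logger.addHandler(logging.NullHandler())

def do_work():
    logger.debug('doing work')
```

The library itself never calls basicConfig or adds real handlers; that stays the app's job.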
Calling basicConfig in the app adds a handler to the root logger, which captures records from all loggers: the ones we just defined in your libraries as well as any third-party libraries. You can set the format as you wish:
# myapp.py
import logging
logging.basicConfig(filename='myapp.log', level=logging.INFO)
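Because each library logger is named after its module, including %(name)s in the format string shows at a glance which library emitted each record. One possible format (logging to the console here rather than a file, just to make the output visible):

```python
# myapp.py
import logging

# %(name)s prints the logger name, i.e. the library's module name
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s',
)

logging.getLogger('mylibA').info('hello')
# emits something like:
# 2024-01-01 12:00:00,000 mylibA INFO: hello
```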
Imagine mylibA is mature and you only want to capture its critical logs, and you also want to quiet a noisy third-party library. Because we used module names as logger names, it's easy to target exactly what you want:
# myapp.py
import logging

logging.basicConfig(filename='myapp.log', level=logging.INFO)

# raise the threshold for the mature library: only CRITICAL gets through
logging.getLogger('mylibA').setLevel(logging.CRITICAL)

# quiet the noisy third-party library the same way (name assumed here)
logging.getLogger('noisy_thirdparty').setLevel(logging.CRITICAL)
Also, you'd have no trouble running multiple apps concurrently, with each one collecting, filtering, and formatting the logs in whatever way is relevant to that app.
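Once these per-logger tweaks accumulate, it can be cleaner to declare the whole setup in one place with logging.config.dictConfig. A sketch using the same assumed module names:

```python
# myapp.py
import logging.config

logging.config.dictConfig({
    'version': 1,
    # keep loggers that libraries created at import time
    'disable_existing_loggers': False,
    'formatters': {
        'std': {'format': '%(asctime)s %(name)s %(levelname)s: %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'myapp.log',
            'formatter': 'std',
        },
    },
    # everything at INFO and above goes to the file by default
    'root': {'level': 'INFO', 'handlers': ['file']},
    # per-library overrides live here, keyed by logger name
    'loggers': {
        'mylibA': {'level': 'CRITICAL'},
    },
})
```

The dict lives in code here, but it can just as well be loaded from a JSON or YAML file, which keeps logging policy out of your application logic entirely.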