Logging in Python: the loguru module


When a project needs to deploy scheduled or long-running tasks, it's common practice to keep logs so that the information behind any exception or error is preserved for later review.

For logging in Python, the built-in logging standard library is the unavoidable starting point. Although logging has a modular design that lets you combine different handlers, its configuration tends to be tedious; and without special handling, using logging in some multi-threaded or multi-process scenarios can leave log records garbled or lost.

But there is a library that cuts out this tedious configuration while offering everything logging does, guarantees thread- and process-safe logging, stays compatible with logging, and can even trace exceptions back through your code. That library is loguru — a logging library built for lazy people like me.

loguru is a library which aims to bring enjoyable logging to Python. Its central idea is that there is one and only one logger.

Without the logging or loguru modules

import sys

class Logger:
    """Redirect stdout so that everything printed also lands in a file."""

    def __init__(self, filename='./logger.txt'):
        self.terminal = sys.stdout
        self.log = open(filename, 'a', encoding='utf-8')

    def write(self, message):
        # Echo to the real terminal and append to the log file.
        self.terminal.write(message)
        self.log.write(message)
        self.log.flush()

    def flush(self):
        self.terminal.flush()
        self.log.flush()

sys.stdout = Logger()

Wrapping this up as a small class captures printed output and saves the program's error messages — quick and crude, but not practical: since the file is opened in append mode, it grows without bound as more output is printed.

Using logging

import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

logger.info('this is an info message')
logger.warning('this is a warning message')
logger.error('this is an error message')
logger.info('this is another info message')

This is the most basic logging configuration, and it is already more cumbersome than loguru.
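For comparison, logging to a file with the standard library alone requires wiring each piece by hand. A minimal sketch (the logger name, file name, and format below are arbitrary choices for illustration):

```python
import logging

# Every piece must be created and wired up explicitly.
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

handler = logging.FileHandler("app.log", encoding="utf-8")
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)

logger.info("written to app.log")
```

With loguru, the equivalent setup is a single `logger.add("app.log")` call.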

Installing loguru

Run the pip command in a terminal:

pip install loguru

Basic usage

Using loguru could hardly be simpler: import its ready-made logger object and call it directly.

Here is the table of log levels:

| Level    | Value | Method            |
|----------|-------|-------------------|
| TRACE    | 5     | logger.trace()    |
| DEBUG    | 10    | logger.debug()    |
| INFO     | 20    | logger.info()     |
| SUCCESS  | 25    | logger.success()  |
| WARNING  | 30    | logger.warning()  |
| ERROR    | 40    | logger.error()    |
| CRITICAL | 50    | logger.critical() |

from loguru import logger

logger.trace('this is trace message!')
logger.debug('this is debug message!')
logger.info('this is info message!')
logger.success('this is success message!')
logger.warning('this is warning message!')
logger.error('this is error message!')
logger.critical('this is critical message!')


When run in PyCharm, each level also gets its own styling, which makes the output easier to read.


Detailed usage

Parameter description

def add(
        self,
        sink,
        *,
        level=_defaults.LOGURU_LEVEL,
        format=_defaults.LOGURU_FORMAT,
        filter=_defaults.LOGURU_FILTER,
        colorize=_defaults.LOGURU_COLORIZE,
        serialize=_defaults.LOGURU_SERIALIZE,
        backtrace=_defaults.LOGURU_BACKTRACE,
        diagnose=_defaults.LOGURU_DIAGNOSE,
        enqueue=_defaults.LOGURU_ENQUEUE,
        catch=_defaults.LOGURU_CATCH,
        **kwargs
    ):
    pass
  1. sink (file-like object, str, pathlib.Path, callable, coroutine function or logging.Handler) – An object in charge of receiving formatted logging messages and propagating them to an appropriate endpoint. [where log messages are sent]
  2. level (int or str, optional) – The minimum severity level from which logged messages should be sent to the sink. [minimum log level]
  3. format (str or callable, optional) – The template used to format logged messages before being sent to the sink. [log message format]
  4. filter (callable, str or dict, optional) – A directive optionally used to decide for each logged message whether it should be sent to the sink or not. [log filter]
  5. colorize (bool, optional) – Whether the color markups contained in the formatted message should be converted to ansi codes for terminal coloration, or stripped otherwise. If None, the choice is automatically made based on the sink being a tty or not. [whether to colorize output]
  6. serialize (bool, optional) – Whether the logged message and its records should be first converted to a JSON string before being sent to the sink. [whether to serialize to JSON]
  7. backtrace (bool, optional) – Whether the formatted exception trace should be extended upward, beyond the catching point, to show the full stacktrace which generated the error.
  8. diagnose (bool, optional) – Whether the exception trace should display the variables' values to ease debugging. This should be set to False in production to avoid leaking sensitive data.
  9. enqueue (bool, optional) – Whether the messages to be logged should first pass through a multiprocess-safe queue before reaching the sink. This is useful while logging to a file through multiple processes. This also has the advantage of making logging calls non-blocking.
  10. catch (bool, optional) – Whether errors occurring while the sink handles log messages should be automatically caught. If True, an exception message is displayed on sys.stderr but the exception is not propagated to the caller, preventing your app from crashing.
  11. **kwargs – Additional parameters that are only valid to configure a coroutine or file sink.
  12. rotation (str, int, datetime.time, datetime.timedelta or callable, optional) – A condition indicating whenever the current logged file should be closed and a new one started. [log rotation]
  13. retention (str, int, datetime.timedelta or callable, optional) – A directive filtering old files that should be removed during rotation or end of program. [removal of old log files]
  14. compression (str or callable, optional) – A compression or archive format to which log files should be converted at closure. [compression format]
  15. delay (bool, optional) – Whether the file should be created as soon as the sink is configured, or delayed until the first logged message. It defaults to False.
  16. mode (str, optional) – The opening mode as for the built-in open() function. It defaults to "a" (open the file in appending mode).
  17. buffering (int, optional) – The buffering policy as for the built-in open() function. It defaults to 1 (line buffered file).
  18. encoding (str, optional) – The file encoding as for the built-in open() function. If None, it defaults to locale.getpreferredencoding().

Writing logs to a file

Just pass the path of the file you want to write to as the first argument (I like to keep it in the current directory):

logger.add('./log.txt')


format

| Key       | Official description                                  | Notes                              |
|-----------|-------------------------------------------------------|------------------------------------|
| elapsed   | The time elapsed since the start of the program       | time since startup                 |
| exception | The formatted exception if any, None otherwise        |                                    |
| extra     | The dict of attributes bound by the user (see bind()) |                                    |
| file      | The file where the logging call was made              | source file                        |
| function  | The function from which the logging call was made     | calling function                   |
| level     | The severity used to log the message                  | log level                          |
| line      | The line number in the source code                    | line number                        |
| message   | The logged message (not yet formatted)                | message text                       |
| module    | The module where the logging call was made            | module                             |
| name      | The name where the logging call was made              | `__name__` value                   |
| process   | The process in which the logging call was made        | process id or name (default: id)   |
| thread    | The thread in which the logging call was made         | thread id or name (default: id)    |
| time      | The aware local time when the logging call was made   | timestamp                          |

The rotation setting

Splits the log file whenever it reaches the configured size. Example values: '100KB', '100MB', '100GB'

logger.add('./log{time}.txt', rotation='100KB')
for n in range(10000):
    logger.info(f'test - {n}')


The compression setting

Sets the compression format for rotated log files. Example values: 'gz', 'tar', 'zip'

logger.add('./log{time}.txt', rotation='100KB', compression='zip')
for n in range(10000):
    logger.info(f'test - {n}')


The retention setting

Limits how many rotated log files are kept:

logger.add('./log{time}.txt', rotation='100KB', compression='zip', retention=1)
for n in range(10000):
    logger.info(f'test - {n}')


The serialize setting

Serializes each log record to JSON before it is written:

logger.add('./log{time}.txt', serialize=True)
logger.info('hello, world!')

Two ways to trace exceptions

To trace exceptions, set backtrace=True; during development, also set diagnose=True to display variable values and simplify debugging (set it to False in production to avoid leaking sensitive data).

from loguru import logger

logger.add("./log.txt", backtrace=True, diagnose=True)


def func(a, b):
    return a / b


def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")


if __name__ == "__main__":
    nested(0)


With loguru, the @logger.catch decorator records the traceback for us directly; after the program runs, the traceback shows up in the log, complete with the variable values at the time of the error.

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)

my_function(0, 0, 0)  # the ZeroDivisionError is caught and written to the log


When catching errors from multiple processes, the log clearly and intuitively shows which process raised which error, making the cause easy to trace.

from loguru import logger
from multiprocessing import Process

logger.add("./log.txt", backtrace=True, diagnose=True)


def func(a, b):
    return a / b


def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")


if __name__ == "__main__":
    process_list = []
    for i in range(1, 4):
        process_list.append(Process(target=nested, args=(0,), name=f'process-{i}'))
    for p in process_list:
        p.start()
    for p in process_list:
        p.join()



I hope this helps — if anything here is wrong, corrections are welcome. Thanks!

A follow would be even better.