Abstract: This article walks through a lightweight log-search stack built on Docker with Grafana + Loki. It is not as powerful as ELK, but it covers most everyday needs.
1. Basic Environment Setup
Docker installation guide: juejin.cn/post/696827…
2. Starting the Services
First, download the base configuration files for the services:
```shell
wget https://raw.githubusercontent.com/grafana/loki/v2.3.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
wget https://raw.githubusercontent.com/grafana/loki/v2.3.0/clients/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
wget https://raw.githubusercontent.com/grafana/loki/v2.3.0/production/docker-compose.yaml -O docker-compose.yaml
```
Then make the changes shown below. If the downloads fail, you can also copy the files below directly.
loki-config.yaml
```yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /tmp/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /tmp/loki/boltdb-shipper-active
    cache_location: /tmp/loki/boltdb-shipper-cache
    cache_ttl: 24h  # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /tmp/loki/chunks

compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
```
promtail-config.yaml
```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
```
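The config above only tails `/var/log/*log`. To pick up application logs as well, you can add a second scrape job. A sketch, where the `/app/logs` path, the `app` job name, and the label are all assumptions for illustration (the directory would also need to be mounted into the promtail container in docker-compose):

```yaml
# Hypothetical extra scrape job for application logs
- job_name: app
  static_configs:
    - targets:
        - localhost
      labels:
        job: app                     # queryable later as {job="app"}
        __path__: /app/logs/*.log    # assumed path; mount it into the container
```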
docker-compose.yaml
```yaml
version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:2.3.0
    container_name: loki
    ports:
      - "3100:3100"
    command: -config.file=/mnt/config/loki-config.yaml
    networks:
      - loki
    volumes:
      - ./config:/mnt/config

  promtail:
    image: grafana/promtail:2.3.0
    container_name: promtail
    volumes:
      - ./config:/mnt/config
      - /var/log:/var/log
    command: -config.file=/mnt/config/promtail-config.yaml
    networks:
      - loki

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    networks:
      - loki
```
I placed the two configuration files in the `./config` directory; adjust the volume mounts to match your actual paths, then run `docker-compose up -d`. Once the containers are up, Grafana is reachable on port 3000 (per the compose file above).
Configure the Log Data Source
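In the Grafana UI, add a data source of type Loki with the URL `http://loki:3100`; since all three containers share the `loki` network, the service name resolves directly. Alternatively, Grafana supports file-based data source provisioning. A minimal sketch (the mount target is Grafana's standard provisioning directory; the file name is arbitrary):

```yaml
# e.g. config/loki-datasource.yaml, mounted into the grafana container at
# /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```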
Query Logs
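With the promtail config above, log lines arrive under the `varlogs` job, so a first query in Grafana's Explore view could look like this (the second line adds a simple contains filter):

```logql
{job="varlogs"}
{job="varlogs"} |= "error"
```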
Advanced Usage
Some query functions and patterns I use frequently.
Log Stream Selector (broad stream-level matching)
{app="mysql",name="mysql-backup"}
=: exactly equal
!=: not equal
=~: regex matches
!~: regex does not match
- Examples:
{name =~ "mysql.+"}
{name !~ "mysql.+"}
{name !~ `mysql-\d+`}
Log Pipeline (fine-grained line filters)
|=: Log line contains string
!=: Log line does not contain string
|~: Log line contains a match to the regular expression
!~: Log line does not contain a match to the regular expression
- Example:
{job="mysql"} |= "error"
Log Range Aggregations (range queries and aggregation)
rate(log-range): calculates the number of entries per second.
count_over_time(log-range): counts the entries for each log stream within the given range.
bytes_rate(log-range): calculates the number of bytes per second for each stream.
bytes_over_time(log-range): counts the amount of bytes used by each log stream for a given range.
absent_over_time(log-range): returns an empty vector if the range vector passed to it has any elements, and a 1-element vector with the value 1 if it has no elements.
- Count log lines:
count_over_time({job="mysql"}[5m])
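The other range functions follow the same shape. For instance, to chart the per-second entry rate and the log throughput in bytes per second for the same (assumed) `mysql` job:

```logql
rate({job="mysql"}[5m])
bytes_rate({job="mysql"}[5m])
```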
Unwrapped Range Aggregations
duration_seconds(label_identifier) (or its short equivalent duration): converts the label value into seconds from the Go duration format (e.g. 5m, 24s30ms).
bytes(label_identifier): converts the label value to raw bytes applying the bytes unit (e.g. 5 MiB, 3k, 1G).
Supported functions for operating over unwrapped ranges are:
rate(unwrapped-range): calculates the per-second rate of all values in the specified interval.
sum_over_time(unwrapped-range): the sum of all values in the specified interval.
avg_over_time(unwrapped-range): the average value of all points in the specified interval.
max_over_time(unwrapped-range): the maximum value of all points in the specified interval.
min_over_time(unwrapped-range): the minimum value of all points in the specified interval.
first_over_time(unwrapped-range): the first value of all points in the specified interval.
last_over_time(unwrapped-range): the last value of all points in the specified interval.
stdvar_over_time(unwrapped-range): the population standard variance of the values in the specified interval.
stddev_over_time(unwrapped-range): the population standard deviation of the values in the specified interval.
quantile_over_time(scalar, unwrapped-range): the φ-quantile (0 ≤ φ ≤ 1) of the values in the specified interval.
absent_over_time(unwrapped-range): returns an empty vector if the range vector passed to it has any elements, and a 1-element vector with the value 1 if it has no elements. (absent_over_time is useful for alerting on when no time series and log streams exist for a label combination for a certain amount of time.)
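These functions operate on a numeric label extracted with `unwrap`, usually after a parser stage. A sketch, assuming the log lines are logfmt-formatted and carry a `request_time` duration field and a `host` label (both assumptions for illustration):

```logql
quantile_over_time(0.99,
  {job="mysql"} | logfmt | unwrap duration(request_time) [1m]) by (host)
```

The `duration()` conversion turns the Go-duration label value into seconds before aggregation.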
Aggregation operators
sum: Calculate sum over labels
min: Select minimum over labels
max: Select maximum over labels
avg: Calculate the average over labels
stddev: Calculate the population standard deviation over labels
stdvar: Calculate the population standard variance over labels
count: Count number of elements in the vector
bottomk: Select smallest k elements by sample value
topk: Select largest k elements by sample value
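These operators wrap a range aggregation. For example, to find the three files producing the most error lines over the last five minutes (the `filename` label is attached to each stream by promtail automatically):

```logql
topk(3, sum by (filename) (count_over_time({job="varlogs"} |= "error" [5m])))
```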