Log Architecture
As you can see, all logs eventually end up in ES, which is very convenient for search, but it still does not satisfy our requirements for statistical analysis.
Log Collection
Doris
Doris needs no lengthy introduction here: its performance is very strong, and its syntax is almost identical to MySQL, so it is quick to pick up.
Comparing the Options
logback configuration
<appender name="FEIGN_LOG" class="com.xy.FeignLogAppender"/>
<appender name="ASYNC_FEIGN_LOG" class="ch.qos.logback.classic.AsyncAppender">
    <discardingThreshold>0</discardingThreshold>
    <queueSize>500</queueSize>
    <appender-ref ref="FEIGN_LOG"/>
</appender>
<logger name="com.xy.controller" level="INFO">
    <appender-ref ref="ASYNC_FEIGN_LOG"/>
</logger>
Option 1: implement a custom Appender that filters log lines by an agreed keyword, parses them, and writes the result to Kafka.
Pros: no ops support needed.
Cons: every application takes a hard dependency on Kafka.
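The core of such a custom appender can be sketched as follows. This is a minimal sketch, not the scaffold's real code: the "FEIGN|" marker is an assumed keyword convention, and a plain `Consumer<String>` sink stands in for the real logback `AppenderBase` subclass and Kafka producer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of option 1's custom appender. The "FEIGN|" marker and the class
// shape are illustrative assumptions: the real implementation would extend
// logback's AppenderBase and publish to Kafka instead of a Consumer.
public class FeignLogAppender {
    private static final String MARKER = "FEIGN|"; // assumed keyword prefix
    private final Consumer<String> kafkaSink;      // stands in for KafkaProducer#send

    public FeignLogAppender(Consumer<String> kafkaSink) {
        this.kafkaSink = kafkaSink;
    }

    // Called once per log event; returns true when the event was forwarded.
    public boolean append(String message) {
        if (message == null || !message.startsWith(MARKER)) {
            return false; // not a feign log line: ignore it
        }
        // Strip the marker and forward the already-JSON payload to Kafka.
        kafkaSink.accept(message.substring(MARKER.length()));
        return true;
    }

    public static void main(String[] args) {
        List<String> sent = new ArrayList<>();
        FeignLogAppender appender = new FeignLogAppender(sent::add);
        appender.append("FEIGN|{\"method\":\"GET\",\"path\":\"/user/info\"}");
        appender.append("ordinary log line"); // filtered out, no keyword
        System.out.println(sent);             // only the feign payload remains
    }
}
```

Because every application embeds this appender, each one must bundle Kafka client configuration, which is exactly the hard dependency listed as the drawback.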
logback: no extra configuration needed
logstash: configuration requires ops support
input {
  tcp {
    mode => "server"
    port => 4567
    codec => json_lines
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => [ "@version", "@timestamp", "message" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  jdbc {
    driver_jar_path => "/Users/xiongyan/Documents/fuchuang/logstash-8.12.2/mysql-connector-java-8.0.25.jar"
    driver_class => "com.mysql.cj.jdbc.Driver"
    connection_string => "jdbc:mysql://127.0.0.1:9030/xy?rewriteBatchedStatements=true"
    username => "xy"
    password => "123456"
    # connection pool size
    max_pool_size => 100
    # batch size for bulk inserts
    flush_size => 1000
    statement => ["INSERT INTO service_remote_log (op_time, request_id, env, from_name, to_name, method, path, protocol, content_type, status_code, cost) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", "%{optTime}", "%{requestId}", "%{env}", "%{from}", "%{to}", "%{method}", "%{path}", "%{protocol}", "%{contentType}", "%{statusCode}", "%{cost}"]
  }
}
Option 2: play to logstash's own strengths and write the logs into Doris through the logstash-output-jdbc plugin.
Pros: applications have no hard dependency on Kafka.
Cons: requires ops support.
logback: no extra configuration needed
logstash: configuration requires ops support
input {
  tcp {
    mode => "server"
    port => 4567
    codec => json_lines
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => [ "@version", "@timestamp", "message" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  kafka {
    codec => json
    bootstrap_servers => "127.0.0.1:9001,127.0.0.1:9002,127.0.0.1:9003"
    topic_id => "%{topic}"
    security_protocol => "SASL_PLAINTEXT"
    sasl_mechanism => "PLAIN"
  }
}
Option 3: again play to logstash's strengths, this time using the logstash-output-kafka plugin to write the logs to Kafka; a routine load job configured in Doris then consumes them from Kafka straight into the table.
Pros: applications have no hard dependency on Kafka.
Cons: requires ops support.
Doris configuration:
CREATE ROUTINE LOAD job_service_remote_log ON service_remote_log
COLUMNS(op_time, timestamp, trace_id, span_id, env, from_name, to_name, method, path, protocol, content_type, status_code, cost)
PROPERTIES
(
    "desired_concurrent_number" = "1",
    "max_error_number" = "1",
    "format" = "json",
    "strict_mode" = "false",
    "timezone" = "Asia/Shanghai",
    "max_batch_interval" = "10",
    "max_batch_rows" = "200000",
    "max_batch_size" = "209715200",
    "jsonpaths" = "[\"$.opTime\",\"$.timestamp\",\"$.traceId\",\"$.spanId\",\"$.env\",\"$.from\",\"$.to\",\"$.method\",\"$.path\",\"$.protocol\",\"$.contentType\",\"$.statusCode\",\"$.cost\"]"
)
FROM KAFKA
(
    "kafka_broker_list" = "127.0.0.1:9001,127.0.0.1:9002,127.0.0.1:9003",
    "kafka_topic" = "topic",
    "property.group.id" = "group"
);
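Once the job is created, its consumption progress and error state can be checked, and the job paused or resumed, with Doris's routine load management statements:

```sql
-- inspect offsets, lag and error rows of the job
SHOW ROUTINE LOAD FOR job_service_remote_log;

-- stop and restart consumption, e.g. while fixing bad messages
PAUSE ROUTINE LOAD FOR job_service_remote_log;
RESUME ROUTINE LOAD FOR job_service_remote_log;
```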
Viewing the Logs
Log output before JSON conversion
Log output after JSON conversion
Log rows in the database
API Call Statistics
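With the data in Doris, the per-interface statistics here reduce to a single aggregation over the service_remote_log table. A sketch, assuming cost is stored in milliseconds; the one-day window and the LIMIT are arbitrary choices:

```sql
SELECT from_name, to_name, method, path,
       COUNT(*)                                            AS calls,
       SUM(CASE WHEN status_code >= 500 THEN 1 ELSE 0 END) AS errors,
       ROUND(AVG(cost), 1)                                  AS avg_cost_ms,
       MAX(cost)                                            AS max_cost_ms
FROM service_remote_log
WHERE op_time >= DATE_SUB(NOW(), INTERVAL 1 DAY)
GROUP BY from_name, to_name, method, path
ORDER BY calls DESC
LIMIT 20;
```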
Summary
Just add a log.info(message) call in the scaffold and business statistics fall out almost for free.
For different business statistics, simply assemble a different data structure in the scaffold.
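Assembling such a message in the scaffold might look like the following sketch. The field names mirror the jsonpaths above; the hand-rolled JSON builder is purely illustrative and skips escaping, which a real scaffold would delegate to a JSON library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RemoteLogMessage {
    // Build the JSON line the pipeline expects. Values are assumed to
    // contain no quotes, so escaping is omitted in this sketch.
    public static String build(Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) sb.append(v);       // numbers unquoted
            else sb.append('"').append(v).append('"');   // everything else as string
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("method", "GET");
        m.put("path", "/user/info");
        m.put("statusCode", 200);
        m.put("cost", 12);
        // In the scaffold this would simply be: log.info(build(m));
        System.out.println(build(m));
    }
}
```

Changing what gets counted is then just a matter of putting different keys into the map before the log.info call.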