Integrating Kafka into an ELK Logging Pipeline


For setting up ELK itself, see my earlier blog post.

  • Previously, Logstash collected the application logs directly into Elasticsearch and Kibana displayed them. We now ship the logs to Kafka first, and have Logstash consume them from Kafka and output them to Elasticsearch for display in Kibana.
  • The reason: when the application produces a large volume of logs, putting Kafka in front provides a high-throughput buffer between the application and Logstash.

Steps

  • Add the Kafka dependencies to the project (a quick connectivity check follows the snippet):

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <!-- logback-kafka-appender dependency -->
        <dependency>
            <groupId>com.github.danielwegener</groupId>
            <artifactId>logback-kafka-appender</artifactId>
            <version>0.2.0-RC2</version>
        </dependency>
    

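Before wiring up logging, the broker connection can be sanity-checked with a one-off send through spring-kafka's `KafkaTemplate`. This is a minimal sketch, not part of the original setup: the class name `KafkaSmokeTest` is illustrative, and it assumes `spring.kafka.bootstrap-servers` points at the broker used in the logback configuration below.

        import org.springframework.boot.CommandLineRunner;
        import org.springframework.kafka.core.KafkaTemplate;
        import org.springframework.stereotype.Component;

        // Sends one test message on startup so a broker misconfiguration
        // shows up immediately rather than as silently dropped log events.
        @Component
        public class KafkaSmokeTest implements CommandLineRunner {

            private final KafkaTemplate<String, String> kafkaTemplate;

            public KafkaSmokeTest(KafkaTemplate<String, String> kafkaTemplate) {
                this.kafkaTemplate = kafkaTemplate;
            }

            @Override
            public void run(String... args) {
                // "applog" is the topic the logback KafkaAppender writes to
                kafkaTemplate.send("applog", "kafka connectivity check");
            }
        }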

  • In the Logstash configuration, the input has to change so that Logstash reads from Kafka rather than listening for TCP connections. The original TCP input (with the note that the address should point at Kafka) looked like this; a working Kafka-input sketch follows it:

    input {
      tcp {
        mode => "server"
        host => "0.0.0.0"    # the original note here: change this to the Kafka address
        port => 4560
        codec => json_lines
      }
    }

    output {
      elasticsearch {
        hosts => "127.0.0.1:9200"
        index => "springboot-logstash-%{+YYYY.MM.dd}"
      }
    }
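A tcp input cannot consume from Kafka, though; reading the topic takes Logstash's kafka input plugin. Here is a minimal sketch, assuming the broker address and the `applog` topic from the logback configuration below (the KafkaAppender there emits plain pattern-formatted lines, so no JSON codec is applied):

    input {
      kafka {
        bootstrap_servers => "192.168.217.130:9092"   # Kafka broker, as in the logback config
        topics => ["applog"]                          # topic the KafkaAppender publishes to
      }
    }

    output {
      elasticsearch {
        hosts => "127.0.0.1:9200"
        index => "springboot-logstash-%{+YYYY.MM.dd}"
      }
    }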

  • Add the logback.xml configuration:

  <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>applog</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=192.168.217.130:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

    </appender>

    <!-- CONSOLE and FILE appenders are assumed to be defined elsewhere in this file -->

    <!-- appender that ships logs to Logstash directly over TCP -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- must match the address the Logstash tcp input listens on
             (the input shown above uses port 4560) -->
        <destination>localhost:9500</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="kafkaAppender"/>
        <!-- enable when shipping directly to Logstash instead of through Kafka -->
        <appender-ref ref="LOGSTASH"/>
    </root>
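Once logback is configured, nothing special is needed in application code: any ordinary SLF4J log statement flows through the kafkaAppender onto the `applog` topic and, after Logstash consumes it, into Elasticsearch. A minimal sketch (the controller and endpoint are illustrative):

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.RestController;

        @RestController
        public class LogDemoController {

            private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

            @GetMapping("/hello")
            public String hello() {
                log.info("hello endpoint hit");   // ends up on the applog topic
                return "ok";
            }
        }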
  • Log4j2 configuration (if the project uses Log4j2 instead of Logback). The fragment below ships logs to Logstash over TCP; a Kafka-appender sketch follows it:

    Socket:
      name: logstash-tcp
      host: 127.0.0.1
      port: 4560
      protocol: TCP
      PatternLayout:
        pattern: ${log.pattern}

    Root:
      level: debug
      AppenderRef:
        - ref: CONSOLE
        - ref: ROLLING_FILE
        - ref: EXCEPTION_ROLLING_FILE
        - ref: logstash-tcp
        - ref: TASK_ROLLING_FILE
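To route Log4j2 output through Kafka instead, Log4j2's built-in Kafka appender can replace the Socket route. A minimal sketch in the same YAML fragment style, with the topic and broker taken from the logback configuration above (kafka-clients must be on the classpath; `${log.pattern}` is assumed to be the pattern property already used in this file):

    Kafka:
      name: kafkaAppender
      topic: applog
      Property:
        name: bootstrap.servers
        value: 192.168.217.130:9092
      PatternLayout:
        pattern: ${log.pattern}

    Root:
      level: info
      AppenderRef:
        - ref: kafkaAppender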