Kafka not receiving Filebeat messages


A problem I ran into while deploying the ELK stack, recorded here. Because resources were limited, everything runs in Docker, and that is exactly where the problem came from. Below is the Filebeat-to-Kafka environment.

Symptom

Logs were being collected, but no data showed up in Kafka.
Filebeat error message: Kafka publish failed with: dial tcp [::1]:9092: connect: connection refused

Environment setup

The Kafka deployment is the single-node docker-compose copied straight from the official repository. Original source: github.com/apache/kafk…

version: '2'
services:
  broker:
    image: ${IMAGE}
    hostname: broker
    container_name: broker
    ports:
      - '9092:9092'
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT_HOST://localhost:9092,PLAINTEXT://broker:19092'
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'CONTROLLER://:29093,PLAINTEXT_HOST://:9092,PLAINTEXT://:19092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'

Filebeat's docker-compose:

version: "2.2"

services:
  # filebeat
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    command: filebeat -e --strict.perms=false 
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./filebeat/data:/usr/share/filebeat/data:rw
      - <host log path>:<container log path>:ro

filebeat.yml

# Global settings
fields_under_root: true
fields:
  log_topic: "smooth-logs"

# filebeat.config:
#   modules:
#     path: ${path.config}/modules.d/*.yml
#     reload.enabled: false
# processors:
#   - add_cloud_metadata: ~
#   - add_docker_metadata: ~
# Inputs
filebeat.inputs:
- type: filestream 
  id: smooth-server
  # fields:
  #   serverName: smooth-server
  paths:
    - /usr/share/filebeat/smooth-server/logs/smooth-server*.log
  parsers:
   - multiline:
      type: pattern
      pattern: '^\d{4}-\d{2}-\d{2}'
      match: after
      negate: true
# Outputs
# output:
#   console:
#     pretty: true
# logging.level: debug
output.kafka:
    # enable: false
    # initial brokers for reading cluster metadata
    # version: 2.6.0
    hosts: ["main.supx.tech:9092"]

    # message topic selection + partitioning
    # topic: '%{[fields.log_topic]}'
    topic: "smooth-logs"
    partition.round_robin:
      reachable_only: true

    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000
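As a side note, the multiline parser in the input above can be illustrated with a small sketch (a hypothetical helper, not Filebeat code): with negate: true and match: after, any line that does not start with a date is appended to the previous event, so stack traces stay attached to their log line:

```python
import re

# Same pattern as the multiline parser in filebeat.yml
DATE_LINE = re.compile(r'^\d{4}-\d{2}-\d{2}')

def group_multiline(lines):
    """Group raw lines into events: a line NOT matching the date pattern
    (negate: true, match: after) is appended to the previous event."""
    events = []
    for line in lines:
        if DATE_LINE.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events
```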

Troubleshooting

At first Filebeat printed no errors at all; the console showed the connection had been established, and the topic had even been auto-created in Kafka, yet no data arrived. I tried several Kafka versions and combed through the Filebeat documentation, without success.
I then switched the Filebeat output to console and saw that logs were being collected correctly. After adding logging.level: debug to filebeat.yml and going through the output carefully, the error finally surfaced: Kafka publish failed with: dial tcp [::1]:9092: connect: connection refused
So the problem clearly lay with the Kafka deployment. And yet, right after deploying Kafka, producing and consuming messages with the command-line tools had worked without errors. Looking closely at the message raised the real question: why was the Filebeat container connecting to Kafka through the IPv6 loopback address?
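The console-debugging step above corresponds to a temporary filebeat.yml change like the following sketch (output.kafka must be commented out meanwhile, since Filebeat allows only one active output at a time):

```yaml
# Temporary debug settings (sketch): print collected events to stdout
logging.level: debug
output.console:
  pretty: true
```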

Solution

KAFKA_LISTENERS: the addresses the broker binds to and listens on
KAFKA_ADVERTISED_LISTENERS: the addresses advertised to clients (kept in the broker metadata under KRaft; stored in ZooKeeper in older deployments); clients send data to whatever address is advertised here

The problem is now obvious. Because each service runs from its own docker-compose file, they sit on different Docker networks, and none of the addresses in KAFKA_ADVERTISED_LISTENERS was reachable from both. An address reachable across the separate Docker networks was needed. A domain name is used here, and that domain maps to the WSL IP address.
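The handshake can be illustrated with a small sketch (a hypothetical helper, not the broker's real parser): a client first dials a bootstrap address from hosts, the broker answers the metadata request with the advertised address of the listener it was reached on, and the client then connects there to produce. Parsing the original advertised listeners shows why Filebeat ended up dialing localhost:

```python
def parse_advertised_listeners(value):
    """Parse a KAFKA_ADVERTISED_LISTENERS-style string into
    {listener_name: (host, port)} -- illustration only."""
    listeners = {}
    for entry in value.split(","):
        name, address = entry.split("://", 1)
        host, port = address.rsplit(":", 1)
        listeners[name] = (host, int(port))
    return listeners

# Filebeat bootstrapped via the PLAINTEXT_HOST listener on port 9092, so the
# broker advertised "localhost:9092" back -- which the Filebeat container
# resolves to its own loopback ([::1]), hence the connection refused error.
advertised = parse_advertised_listeners(
    "PLAINTEXT_HOST://localhost:9092,PLAINTEXT://broker:19092")
```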

  1. Get the IP address inside WSL (screenshot not preserved)

  2. Add a hosts mapping on the host machine (screenshot not preserved)

  3. Update the Kafka configuration:

version: "2.2"

services:
  # kafka
  kafka:
    image: apache/kafka:3.8.0
    hostname: kafka
    container_name: kafka
    ports:
      - '9092:9092'
      - '9094:9094'
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,SUPX:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT_HOST://localhost:9092,PLAINTEXT://kafka:19092,SUPX://main.supx.tech:9094'
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka:29093'
      KAFKA_LISTENERS: 'CONTROLLER://:29093,PLAINTEXT_HOST://:9092,PLAINTEXT://:19092,SUPX://:9094'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
volumes:
  kafka:
    driver: local

  4. Point Filebeat's Kafka output at port 9094. Only the broker address in filebeat.yml changes; the output section becomes:

output.kafka:
    hosts: ["main.supx.tech:9094"]

    # message topic selection + partitioning
    topic: "smooth-logs"
    partition.round_robin:
      reachable_only: true

    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000
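Since the screenshots for steps 1 and 2 did not survive, here is a sketch of what they do: inside WSL, `ip -4 addr show eth0` prints the interface address (the interface name eth0 is an assumption), and the resulting line is appended to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts. The helper below is hypothetical:

```python
import re

def hosts_entry(ip_addr_output, hostname):
    """Build a hosts-file line from `ip -4 addr show <iface>` output.
    Illustration only; interface name and hosts path are assumptions."""
    m = re.search(r'inet (\d+(?:\.\d+){3})/', ip_addr_output)
    return f"{m.group(1)} {hostname}"

# e.g. hosts_entry(output_of_ip_addr, "main.supx.tech")
# then append the printed line to C:\Windows\System32\drivers\etc\hosts
# and verify from the host, e.g.:  nc -zv main.supx.tech 9094
```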

References

Kafka configuration: KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS - 简书 (jianshu.com)