ELK + Kafka Setup
- Install Docker and Docker Compose. Make sure both Docker and Docker Compose are installed on your machine; if not, follow the official documentation to install them.
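A quick way to verify both tools are available before going any further (the exact versions on your machine will differ):

```bash
docker --version
docker-compose --version
```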
- Write the docker-compose.yml file. In your working directory, create a file named docker-compose.yml and add the following content:
```yaml
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
  logstash:
    image: docker.elastic.co/logstash/logstash:7.12.0
    container_name: logstash
    user: root
    environment:
      - XPACK_MONITORING_ENABLED=false
    volumes:
      - ./logstash/indexer/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/indexer/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
      - kafka
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - XPACK_MONITORING_ENABLED=false
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  kafka:
    image: wurstmeister/kafka:2.12-2.4.1
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.200.100
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper:2888:3888
    ports:
      - "2181:2181"
    volumes:
      - zookeeper-data:/tmp/zookeeper
volumes:
  elasticsearch-data:
  zookeeper-data:
```
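Note that KAFKA_ADVERTISED_HOST_NAME must be an address clients can use to reach the broker, so replace 192.168.200.100 with the IP of your own Docker host. Before starting anything, you can have Docker Compose validate the file (a quick sanity check, run from the same directory):

```bash
# Parses docker-compose.yml and prints the resolved configuration on success
docker-compose config
```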
- Create the Logstash configuration files.
- In the current working directory, create the config and pipeline directories:
```bash
mkdir -p logstash/indexer/config
mkdir -p logstash/indexer/pipeline
```
- In the config directory, create a file named logstash.yml (this binds the Logstash HTTP API to all interfaces and turns off X-Pack monitoring):
```yaml
http.host: 0.0.0.0
xpack.monitoring.enabled: false
```
- In the pipeline directory, create a file named logstash.conf:
```conf
input {
  kafka {
    bootstrap_servers => "192.168.99.82:9092"
    decorate_events => true
    topics => ["http-log", "mqtt-log"]
    codec => "json"
  }
}
filter {
  if [@metadata][kafka][topic] == "http-log" {
    mutate {
      add_field => { "[@metadata][target_index]" => "xm-http-log" }
    }
  } else if [@metadata][kafka][topic] == "mqtt-log" {
    mutate {
      add_field => { "[@metadata][target_index]" => "xm-mqtt-log" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.99.82:9200"]
    index => "%{[@metadata][target_index]}-%{+YYYY.MM.dd}"
  }
}
```
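A few things to note: decorate_events => true is what populates [@metadata][kafka][topic], so without it neither filter branch would ever match, and each topic would lose its per-topic index routing. Also, the addresses here (192.168.99.82) must point at wherever your Kafka and Elasticsearch are actually reachable, so adjust them (and the KAFKA_ADVERTISED_HOST_NAME above) to your environment. Once both files are in place, the working directory should look like this, matching the bind mounts in docker-compose.yml:

```
.
├── docker-compose.yml
└── logstash
    └── indexer
        ├── config
        │   └── logstash.yml
        └── pipeline
            └── logstash.conf
```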
- Start the containers. Open a terminal, change into your working directory, and run the following command to start all the containers in the background:
```bash
docker-compose up -d
```
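To confirm the stack came up (a quick check; the container names below match the container_name values in the compose file, and Elasticsearch may take a minute to finish booting):

```bash
# All five containers should show an "Up" state
docker-compose ps

# Elasticsearch answers with its cluster info once it is ready
curl http://localhost:9200
```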
At this point you can view the log data in Elasticsearch (provided the logs are already being collected into Kafka).
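If nothing is producing to Kafka yet, you can push a test event by hand. A minimal sketch, assuming the wurstmeister/kafka image (which ships the Kafka CLI under /opt/kafka/bin) and that KAFKA_ADVERTISED_HOST_NAME points at an address reachable from the container:

```bash
# Send one JSON event into the http-log topic (codec => "json" expects JSON)
echo '{"service":"demo","message":"hello elk"}' | \
  docker exec -i kafka /opt/kafka/bin/kafka-console-producer.sh \
    --broker-list localhost:9092 --topic http-log

# Shortly afterwards the event should show up in the dated xm-http-log index
curl 'http://localhost:9200/xm-http-log-*/_search?pretty'
```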