Filebeat Basics (Section 6)

Using Filebeat instead of Logstash to collect logs

Filebeat is a lightweight, single-purpose log shipper, intended for collecting logs on servers that have no Java runtime installed. It can forward logs to Logstash, Elasticsearch, Redis, Kafka, and similar destinations for further processing.

Download: www.elastic.co/downloads/b…

Docs: www.elastic.co/guide/en/be…

web (106-108). If your machines are underpowered, you can run just 106.

# start ZooKeeper first (Kafka depends on it)
/usr/local/zookeeper/bin/zkServer.sh start

# then start Kafka
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
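Kafka can take a few seconds to come up after the daemon starts. A small sketch to wait until the ports accept connections, assuming bash's /dev/tcp pseudo-device and the default ports (2181 for ZooKeeper, 9092 for Kafka); the helper name is my own:

```shell
# Hedged sketch: poll until host:port accepts a TCP connection.
# Uses bash's /dev/tcp device; adjust hosts/ports to your setup.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}; i=0
  while [ "$i" -lt "$tries" ]; do
    # open (and implicitly close) a TCP connection in a subshell
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# e.g.: wait_for_port 127.0.0.1 2181 && wait_for_port 127.0.0.1 9092
```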

web1(106)

If Logstash is installed, stop it first:

# stop logstash and disable it at boot
systemctl disable logstash
systemctl stop logstash

Install the filebeat package:

cd /usr/local/src/

dpkg -i filebeat-6.8.3-amd64.deb

Back up the config file:

cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

Edit the config file (the numbers below are line positions inside filebeat.yml):

vim /etc/filebeat/filebeat.yml
 24   enabled: true
 28     - /var/log/syslog
 45   fields:
 46     type: syslog
 47     level: debug
 48     review: 1
152   hosts: ["192.168.37.101:9200"]

Start filebeat and enable it at boot:

systemctl enable filebeat
systemctl start filebeat


Module templates, for reference:

ll /etc/filebeat/modules.d/
total 84
drwxr-xr-x 2 root root 4096 May 22 09:00 ./
drwxr-xr-x 3 root root 4096 May 22 09:32 ../
-rw-r--r-- 1 root root  371 Aug 30  2019 apache2.yml.disabled
-rw-r--r-- 1 root root  175 Aug 30  2019 auditd.yml.disabled
-rw-r--r-- 1 root root 1250 Aug 30  2019 elasticsearch.yml.disabled
-rw-r--r-- 1 root root  269 Aug 30  2019 haproxy.yml.disabled
-rw-r--r-- 1 root root  546 Aug 30  2019 icinga.yml.disabled
-rw-r--r-- 1 root root  371 Aug 30  2019 iis.yml.disabled
-rw-r--r-- 1 root root  257 Aug 30  2019 iptables.yml.disabled
-rw-r--r-- 1 root root  396 Aug 30  2019 kafka.yml.disabled
-rw-r--r-- 1 root root  188 Aug 30  2019 kibana.yml.disabled
-rw-r--r-- 1 root root  563 Aug 30  2019 logstash.yml.disabled
-rw-r--r-- 1 root root  189 Aug 30  2019 mongodb.yml.disabled
-rw-r--r-- 1 root root  368 Aug 30  2019 mysql.yml.disabled
-rw-r--r-- 1 root root  569 Aug 30  2019 nginx.yml.disabled
-rw-r--r-- 1 root root  388 Aug 30  2019 osquery.yml.disabled
-rw-r--r-- 1 root root  192 Aug 30  2019 postgresql.yml.disabled
-rw-r--r-- 1 root root  463 Aug 30  2019 redis.yml.disabled
-rw-r--r-- 1 root root  190 Aug 30  2019 suricata.yml.disabled
-rw-r--r-- 1 root root  574 Aug 30  2019 system.yml.disabled
-rw-r--r-- 1 root root  195 Aug 30  2019 traefik.yml.disabled

Now write the logs to Kafka and Redis instead of Elasticsearch. Kafka first:

vim /etc/filebeat/filebeat.yml
# comment out these two lines
149 #output.elasticsearch:
151   #  hosts: ["192.168.37.101:9200"]
...
# append at the end (write to Kafka)
210 output.kafka:
211   # initial brokers for reading cluster metadata
212   hosts: ["192.168.37.106:9092", "192.168.37.107:9092", "192.168.37.108:9092"]
213
214   # message topic selection + partitioning
215   topic: 'filebeat-syslog-37-106'
216
217   required_acks: 1
218   compression: gzip
219   max_message_bytes: 1000000
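The topic name here encodes the last two octets of the host's IP ("37-106" for 192.168.37.106). A throwaway helper to derive it, assuming that naming convention holds (the function name is my own):

```shell
# Hypothetical helper: build "<prefix>-<3rd octet>-<4th octet>" from an IP,
# matching the filebeat-syslog-37-106 convention used above.
topic_for_ip() {
  prefix=$1; ip=$2
  echo "$prefix-$(echo "$ip" | awk -F. '{print $3 "-" $4}')"
}

topic_for_ip filebeat-syslog 192.168.37.106   # filebeat-syslog-37-106
```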

Restart filebeat:

systemctl restart filebeat

Write some test data:

echo "111" > /var/log/syslog
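Note the difference between `>` and `>>`: the former truncates the file before writing, the latter appends (the later examples in this section use `>>`). A quick demo on a temp file:

```shell
f=$(mktemp)
echo "111" > "$f"     # '>' truncates first: file has 1 line
echo "222" > "$f"     # truncated again: still 1 line
echo "333" >> "$f"    # '>>' appends: now 2 lines
lines=$(wc -l < "$f")
echo "$lines"         # only the last '>' write and the '>>' append remain
rm -f "$f"
```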

logstash(103)

Edit the config:

cd /etc/logstash/conf.d/

cat kafka-to-es.conf 
input {
  kafka {
    bootstrap_servers => "192.168.37.106:9092"
    topics => "syslog-37-106"
    codec => "json"
  }

  kafka {
    bootstrap_servers => "192.168.37.106:9092"
    topics => "nginx-access-log-37-106"
    codec => "json"
  }

  kafka {
    bootstrap_servers => "192.168.37.106:9092"
    topics => "filebeat-syslog-37-106"
    codec => "json"
  }
}

output {
  if [type] == "syslog-37-106" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "kafka-syslog-37-106-%{+YYYY.MM.dd}"
  }}

  if [type] == "nginx-access-log-37-106" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "kafka-nginx-access-log-37-106-%{+YYYY.MM.dd}"
  }}

  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "filebeat-syslog-37-106-%{+YYYY.MM.dd}"
  }}
}
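The conditionals above route each event to an index by its type: events that Logstash collected itself carry [type], while Filebeat events carry the custom [fields][type] set in filebeat.yml. The same routing, sketched as a shell function (the function name and the explicit date argument are mine):

```shell
# Hypothetical sketch of the index routing in the output block above.
index_for() {
  type=$1; day=$2   # day passed explicitly, e.g. 2024.05.22
  case "$type" in
    syslog-37-106)           echo "kafka-syslog-37-106-$day" ;;
    nginx-access-log-37-106) echo "kafka-nginx-access-log-37-106-$day" ;;
    syslog)                  echo "filebeat-syslog-37-106-$day" ;;  # [fields][type]
    *)                       return 1 ;;                            # unmatched: dropped
  esac
}

index_for syslog 2024.05.22   # filebeat-syslog-37-106-2024.05.22
```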

Restart logstash:

systemctl restart logstash


web1(106)

Append some log entries:

echo "111" >> /var/log/syslog
echo "222" >> /var/log/syslog

You can see the two entries that were just appended.

Collecting a single log with Filebeat and writing it to Redis

web1(106)

Reboot, then stop the logstash service:

reboot
systemctl stop logstash

Edit the config. Filebeat allows only one output at a time, so comment out the output.kafka block added earlier, then append at the end:

vim /etc/filebeat/filebeat.yml
# append at the end (after commenting out output.kafka)
output.redis:
  hosts: ["192.168.37.104:6379"]
  password: "123456"
  key: "filebeat-log-37-106"
  db: 0
  timeout: 5

Restart the service:

systemctl restart filebeat

redis(104)

Check whether data has arrived:

# redis-cli 
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> KEYS *
1) "filebeat-log-37-106"

logstash(103)

cat redis-to-es.conf 
input {
  redis {
    host => "192.168.37.104"
    port => "6379"
    password => "123456"
    key => "filebeat-log-37-106"
    data_type => list
    db => 0
  }
}

output {
  if [fields][type] == "syslog" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "filebeat-syslog-37-106-%{+YYYY.MM.dd}"
  }}
}

Check the syntax:

/usr/share/logstash/bin/logstash -f redis-to-es.conf -t

Restart the service:

systemctl restart logstash

Then add the index pattern in Kibana.

web1(106)

Write the log to 'log.txt', then append 'log.txt' back to 'syslog' (to quickly generate a batch of entries):

cat /var/log/syslog > /opt/log.txt
cat /opt/log.txt >> /var/log/syslog
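This snapshot-and-append trick doubles the file's line count, which gives the pipeline a burst of events to process. The same effect demonstrated on throwaway temp files:

```shell
src=$(mktemp); snap=$(mktemp)
printf 'a\nb\nc\n' > "$src"    # stand-in for /var/log/syslog
cat "$src" > "$snap"           # snapshot, like /opt/log.txt
cat "$snap" >> "$src"          # append the snapshot back: count doubles
lines=$(wc -l < "$src")
echo "$lines"                  # twice the original 3 lines
rm -f "$src" "$snap"
```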

redis(104)

A quick script to check the length of the Redis list:

cat redis_monitor.sh
#!/usr/bin/env python
# coding:utf-8
# Note: despite the .sh name, this is a Python script.

import redis

def redis_conn():
    # password must be a string, not an int
    pool = redis.ConnectionPool(host="192.168.37.104", port=6379, db=0, password="123456")
    conn = redis.Redis(connection_pool=pool)
    # LLEN: number of entries still queued in the list
    data = conn.llen('filebeat-log-37-106')
    print(data)

redis_conn()

Make it executable:

chmod a+x redis_monitor.sh

Run it (if the redis module is missing, install it first, e.g. with pip):

python2.7 redis_monitor.sh
12634    <-- the number of queued entries