Filebeat Log Collection Architecture in Practice (Part 7)


Log Collection in Practice

Architecture plan:

Read the diagram below from left to right. A request to the ELK platform first hits a highly available load-balancing pair built from two nginx servers plus keepalived; clients use the keepalived VIP, so losing one nginx proxy does not interrupt access. nginx forwards the request to Kibana, and Kibana queries Elasticsearch for the data. Elasticsearch is a two-node cluster, and data may be stored on either node.

A Redis server buffers data temporarily: when the web servers produce logs faster than they can be collected and stored, events are parked in Redis (which can itself be a cluster) instead of being lost, and a Logstash server then drains them continuously, ideally during off-peak hours. A separate MySQL server persists specific data for the long term. The web servers' logs are collected by filebeat and sent to another Logstash instance, which writes them into Redis; that completes the collection side.

As the diagram shows, Redis sits at the very center of the pipeline, and both sides depend on it running correctly: web-server logs collected by filebeat pass through the forwarding-layer Logstash into different Redis keys, the extraction-layer Logstash then pulls the data out and writes each type into its own Elasticsearch index, and users finally view the collected logs through the nginx-proxied Kibana.
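The buffering role Redis plays here can be sketched with an in-memory queue. This is a hypothetical stand-in: a real deployment uses a Redis list, but the decoupling idea is the same — the forwarding layer appends at its own pace, the extraction layer drains in batches at its own pace.

```python
from collections import deque

# Hypothetical stand-in for the Redis list between the two Logstash layers.
buffer = deque()

def forward(events):
    """Forwarding layer: filebeat -> logstash -> buffer (RPUSH-like append)."""
    for e in events:
        buffer.append(e)

def extract(batch_size):
    """Extraction layer: pop up to batch_size events (LPOP-like) for ES."""
    batch = []
    while buffer and len(batch) < batch_size:
        batch.append(buffer.popleft())
    return batch

# A traffic burst arrives faster than the backend indexes it:
forward([f"log line {i}" for i in range(10)])
first = extract(4)   # the backend drains small batches during off-peak time
print(first)
print(len(buffer))   # remaining events wait safely in the buffer
```

Nothing is dropped when the producer outpaces the consumer; events simply accumulate in the buffer until the extraction layer catches up.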

logstash2(105)

Install the JDK

apt install openjdk-8-jdk -y

Install Logstash

Package: logstash

cd /usr/local/src/
dpkg -i logstash-6.8.3.deb

Edit the configuration

cd /etc/logstash/conf.d/
# test config file
cat beats.conf
input {
  beats {
   port => 5044
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

Start

/usr/share/logstash/bin/logstash -f beats.conf
... wait for startup to finish

web1(106)

Redirect the filebeat output to Logstash

vim /etc/filebeat/filebeat.yml
# append at the end of the file
output.logstash:
  hosts: ["192.168.37.105:5044","192.168.37.105:5045"]
  loadbalance: true
  worker: 1
  compression_level: 3
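With `loadbalance: true`, filebeat spreads events across all listed Logstash endpoints rather than using the extra hosts only for failover. Filebeat's actual scheduling is more involved; the hypothetical round-robin sketch below just illustrates the even-distribution idea.

```python
from itertools import cycle
from collections import Counter

# The two Logstash endpoints from filebeat.yml above.
hosts = ["192.168.37.105:5044", "192.168.37.105:5045"]
picker = cycle(hosts)  # simplest possible balancing: round-robin

# Count where 100 events would be sent.
sent = Counter(next(picker) for _ in range(100))
print(sent)  # each endpoint receives an even share
```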

Restart filebeat

systemctl restart filebeat

Append data to the log

echo "123 input" >> /var/log/syslog

logstash2(105)

Check whether 105 receives the data

{
           "log" => {
        "file" => {
            "path" => "/var/log/syslog"
        }
    },
    "@timestamp" => 2023-05-28T04:48:24.727Z,
      "@version" => "1",
        "fields" => {
         "level" => "debug",
          "type" => "syslog",
        "review" => 1
    },
         "input" => {
        "type" => "log"
    },
    "prospector" => {
        "type" => "log"
    },
          "host" => {
         "architecture" => "x86_64",
                 "name" => "web1",
                   "os" => {
             "version" => "18.04.1 LTS (Bionic Beaver)",
                "name" => "Ubuntu",
              "family" => "debian",
            "codename" => "bionic",
            "platform" => "ubuntu"
        },
        "containerized" => false,
                   "id" => "6b1f70a8909b4b0dbb63f938c28ca940"
    },
          "beat" => {
        "hostname" => "web1",
            "name" => "web1",
         "version" => "6.8.3"
    },
        "offset" => 6609389,
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
        "source" => "/var/log/syslog",
       "message" => "123 input"
}
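Note where the custom `fields` from filebeat.yml end up in the event above: nested under a top-level `fields` key. That is exactly the path the Logstash conditionals used later test with `[fields][type]`. A minimal sketch of that lookup (the trimmed event dict is hypothetical, copied from the output above):

```python
# A trimmed-down version of the event printed above.
event = {
    "message": "123 input",
    "source": "/var/log/syslog",
    "fields": {"type": "syslog", "level": "debug", "review": 1},
}

# Logstash's [fields][type] is just a nested lookup:
event_type = event.get("fields", {}).get("type")
print(event_type)  # -> syslog
```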
Edit beats.conf so Logstash listens on both ports:

vim beats.conf

input {
  beats {
   port => 5044
   codec => "json"
  }
  beats {
   port => 5045
   codec => "json"
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

Start

/usr/share/logstash/bin/logstash -f beats.conf

web1(106)

Collect access logs and system logs

Full file for reference: filebeat.yml

grep -v "#" /etc/filebeat/filebeat.yml |grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
  fields:
    type: syslog-106
    level: debug
    review: 1
# add the following 8 lines
- type: log
  enabled: true
  paths:
    - /var/log/access.log
  fields:
    app: nginx-106
    level: debug
    review: 1
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
output.logstash:
  hosts: ["192.168.37.105:5044","192.168.37.105:5045"]
  loadbalance: true

Restart the service

systemctl restart filebeat

logstash2(105)

Logs now arrive on 105; update beats.conf to route them into Redis by type

cat beats.conf
input {
  beats {
   port => 5044
   codec => "json"
  }
  beats {
   port => 5045
   codec => "json"
  }
}

output {
  if [fields][type] == "syslog-106" {
    redis {
     host => "192.168.37.104"
     port => "6379"
     password => "123456"
     key => "syslog-37-106"
     data_type => list
     db => 3
  }}

  if [fields][app] == "nginx-106" {
    redis {
      host => "192.168.37.104"
      port => "6379"
      password => "123456"
      key => "nginx-accesslog-37-106"
      data_type => list
      db => 3
  }}
}
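The two conditionals above route each event to its own Redis key based on the custom fields set in filebeat.yml. The same decision can be sketched as a small lookup table (key names copied from the config above; the function itself is hypothetical):

```python
# Map (field name, field value) -> Redis key, mirroring beats.conf above.
ROUTES = {
    ("type", "syslog-106"): "syslog-37-106",
    ("app", "nginx-106"): "nginx-accesslog-37-106",
}

def redis_key(event):
    """Return the Redis list key an event would be written to, or None."""
    fields = event.get("fields", {})
    for (name, value), key in ROUTES.items():
        if fields.get(name) == value:
            return key
    return None  # event matches no conditional and is not forwarded

print(redis_key({"fields": {"type": "syslog-106"}}))  # -> syslog-37-106
print(redis_key({"fields": {"app": "nginx-106"}}))    # -> nginx-accesslog-37-106
```

Keeping one key per log type per host is what later lets the extraction Logstash write each stream to its own Elasticsearch index.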

Check the configuration syntax

/usr/share/logstash/bin/logstash -f beats.conf -t

Restart

systemctl restart logstash

redis(104)

# redis-cli 
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> SELECT 3
OK
127.0.0.1:6379[3]> KEYS *
1) "nginx-accesslog-37-106"  <-- if this key is missing, run '/apps/nginx/sbin/nginx' on web1(106) and then open 'http://192.168.37.106/' in a browser
2) "syslog-37-106"     <-- if there is no data, run 'echo 123 >> /var/log/syslog' on web1(106)

web2(107)

Install the JDK

apt install openjdk-8-jdk -y

Install filebeat

Package: filebeat

cd /usr/local/src/
dpkg -i filebeat-6.8.3-amd64.deb

web1(106)

Copy the filebeat config file

scp /etc/filebeat/filebeat.yml 192.168.37.107:/etc/filebeat/

Copy the nginx installation

Stop the service

/apps/nginx/sbin/nginx -s stop

web2(107)

Create the directory

mkdir /apps

web1(106)

Tar it up and copy it over

cd /apps
tar czvf nginx.tar.gz nginx/*

scp nginx.tar.gz 192.168.37.107:/apps

Start the service

/apps/nginx/sbin/nginx

web2(107)

Extract and start

cd /apps/
tar xvf nginx.tar.gz
/apps/nginx/sbin/nginx

Edit the filebeat config so the two hosts can be told apart

vim /etc/filebeat/filebeat.yml
# change these two values (around lines 46 and 71 of the file):
    type: syslog-107
    app: nginx-107

logstash2(105)

cd /etc/logstash/conf.d

cat beats.conf
input {
  beats {
   port => 5044
   codec => "json"
  }
  beats {
   port => 5045
   codec => "json"
  }
}

output {
  if [fields][type] == "syslog-106" {
    redis {
     host => "192.168.37.104"
     port => "6379"
     password => "123456"
     key => "syslog-37-106"
     data_type => list
     db => 3
  }}

  if [fields][app] == "nginx-106" {
    redis {
      host => "192.168.37.104"
      port => "6379"
      password => "123456"
      key => "nginx-accesslog-37-106"
      data_type => list
      db => 3
  }}
#added the 107 entries below
  if [fields][type] == "syslog-107" {
    redis {
     host => "192.168.37.104"
     port => "6379"
     password => "123456"
     key => "syslog-37-107"
     data_type => list
     db => 3
  }}

  if [fields][app] == "nginx-107" {
    redis {
      host => "192.168.37.104"
      port => "6379"
      password => "123456"
      key => "nginx-accesslog-37-107"
      data_type => list
      db => 3
  }}
}

Restart logstash

systemctl restart logstash

web2(107)

Restart filebeat

systemctl restart filebeat

redis(104)

Visit 192.168.37.106 and 192.168.37.107 in a browser to generate new logs

127.0.0.1:6379[3]> KEYS *
1) "nginx-accesslog-37-106"
2) "nginx-accesslog-37-107"
3) "syslog-37-107"
4) "syslog-37-106"

logstash(103)

Edit the file

cd /etc/logstash/conf.d/

cat redis-to-es.conf 
input {
  redis {
    host => "192.168.37.104"
    port => "6379"
    password => "123456"
    key => "syslog-37-106"
    data_type => list
    db => 3
  }

  redis {
    host => "192.168.37.104"
    port => "6379"
    password => "123456"
    key => "syslog-37-107"
    data_type => list
    db => 3
  }

  redis {
    host => "192.168.37.104"
    port => "6379"
    password => "123456"
    key => "nginx-accesslog-37-106"
    data_type => list
    db => 3
  }

  redis {
    host => "192.168.37.104"
    port => "6379"
    password => "123456"
    key => "nginx-accesslog-37-107"
    data_type => list
    db => 3
  }
}

output {
#system logs
  if [fields][type] == "syslog-106" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "filebeat-syslog-37-106-%{+YYYY.MM.dd}"
  }}

  if [fields][type] == "syslog-107" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "filebeat-syslog-37-107-%{+YYYY.MM.dd}"
  }}
#nginx logs
  if [fields][app] == "nginx-106" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "logstash-nginx-accesslog-37-106-%{+YYYY.MM.dd}"
  }}

  if [fields][app] == "nginx-107" {
    elasticsearch {
      hosts => ["http://192.168.37.102:9200"]
      index => "logstash-nginx-accesslog-37-107-%{+YYYY.MM.dd}"
  }}
}
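The `%{+YYYY.MM.dd}` suffix in the index names above creates one index per day, which makes it easy to expire old data by deleting whole indices. A sketch of the resulting name (the strftime format is Python's equivalent of Logstash's Joda-style pattern; the date is just an example):

```python
from datetime import date

def daily_index(prefix, day):
    """Build a dated index name like Logstash's %{+YYYY.MM.dd} suffix does."""
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(daily_index("filebeat-syslog-37-106", date(2023, 5, 28)))
# -> filebeat-syslog-37-106-2023.05.28
```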

Restart the service

systemctl restart logstash

redis(104)

The data has been consumed

127.0.0.1:6379[3]> KEYS *
(empty list or set)


In Kibana (http://192.168.37.101:5601), add index patterns for logstash-nginx-accesslog-37-10{6,7} and filebeat-syslog-37-10{6,7}.

Proxy Kibana through nginx and add login authentication

host1(101)

Download nginx

cd /usr/local/src/
wget http://nginx.org/download/nginx-1.16.1.tar.gz

Extract

tar xvf nginx-1.16.1.tar.gz

Compile and install

cd nginx-1.16.1/
./configure --prefix=/apps/

make
make install

Edit the kibana config so it only listens locally

vim /etc/kibana/kibana.yml

server.host: "127.0.0.1"

Restart the service

systemctl restart kibana

Since nginx was built with --prefix=/apps/, move its installed files into their own directory:

cd /apps/
mkdir nginx
mv * nginx/

Configure

vim nginx/conf/nginx.conf
... # add inside the http block
http {
    include       /apps/nginx/conf.d/*.conf;

Create the directory

mkdir /apps/nginx/conf.d/

Configure nginx to proxy Kibana

cd /apps

vim nginx/conf.d/kibana.conf
upstream kibana_server {
        server  127.0.0.1:5601 weight=1 max_fails=3  fail_timeout=60;
}

server {
        listen 80;
        server_name www.kibana101.com;
        location / {
        proxy_pass http://kibana_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        }
}

Start nginx with the config file specified

mkdir /apps/logs/
/apps/nginx/sbin/nginx -c /apps/nginx/conf/nginx.conf

Local hostname resolution: map www.kibana101.com to 192.168.37.101 in the client's hosts file.

Access Kibana via the domain name.

Add authentication

host1(101)

Note: use '-c' only when creating the file for the first time; do NOT use it when appending — running with '-c' again would overwrite the existing entries!

htpasswd -bc /apps/nginx/conf/htpasswd.users zhao 123456
Adding password for user zhao

htpasswd -b /apps/nginx/conf/htpasswd.users qian 123456
Adding password for user qian
cat /apps/nginx/conf.d/kibana.conf
upstream kibana_server {
        server  127.0.0.1:5601 weight=1 max_fails=3  fail_timeout=60;
}

server {
        listen 80;
        server_name www.kibana101.com;
        auth_basic "Restricted Access";
        auth_basic_user_file /apps/nginx/conf/htpasswd.users;
        location / {
        proxy_pass http://kibana_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        }
}
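With `auth_basic` enabled, the browser sends an `Authorization: Basic` header containing the base64 encoding of `user:password`, which nginx checks against the htpasswd file. A sketch of the header value a browser would send for the `zhao` account created above (the helper function is hypothetical):

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header value a browser sends for Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("zhao", "123456"))
```

Because base64 is trivially reversible, Basic auth only protects credentials if the connection itself is encrypted; in production the proxy should also terminate TLS.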

Reload the configuration

/apps/nginx/sbin/nginx -c /apps/nginx/conf/nginx.conf -s reload

Both users can now log in.