Common commands:
Start Filebeat:
nohup /opt/efk/filebeat-7.5.1-linux-x86_64/filebeat -e -c filebeat.yml > filebeat.log &
Start Kibana:
nohup /opt/efk/kibana-7.5.1-linux-x86_64/bin/kibana --allow-root > kibana.log &
Check the Kibana port:
netstat -antp |grep 5601
Elasticsearch service commands:
service elasticsearch start
service elasticsearch stop
service elasticsearch restart
Collecting Spring Boot logs with ELK: Logstash best practices and three ways to ship Spring Boot logs into ELK
- ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core of the stack, though not all of it.
- Elasticsearch is a real-time full-text search and analytics engine that provides three major functions: collecting, analyzing, and storing data. It is a scalable distributed system exposing efficient search through open REST and Java APIs, built on top of the Apache Lucene search engine library.
- Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in multiple ways, including email, WebSockets, and Elasticsearch.
- Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, and lets users not only build custom dashboard views of their own data, but also query and filter it in ad-hoc ways.
1. Environment
OS:
CentOS 7.3
Install the JDK (Java 1.8 or later is required). List the available JDK packages:
yum search java | grep -i --color JDK
Pick the version you need and install it.
Install command:
yum install java-1.8.0-openjdk-devel.x86_64
2. Install Elasticsearch
Import the GPG key for the Elasticsearch yum repository (this must be configured on every server):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the Elasticsearch yum repository.
Create the following file on the server:
# vim /etc/yum.repos.d/elasticsearch.repo
Add the following content to elasticsearch.repo:
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install Elasticsearch:
yum install -y elasticsearch
Create the Elasticsearch data directory and change its owner and group:
mkdir -p /data/es-data    # custom directory for storing the data files
chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the Elasticsearch log directory:
chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Edit the Elasticsearch configuration file:
vim /etc/elasticsearch/elasticsearch.yml
Find cluster.name in the file, uncomment it, and set the cluster name:
cluster.name: demon
Find node.name, uncomment it, and set the node name:
node.name: elk-1
Set the data path:
path.data: /data/es-data
Set the log path:
path.logs: /var/log/elasticsearch/
Lock the process memory so it cannot be swapped out:
bootstrap.memory_lock: true
Network address to listen on:
network.host: 0.0.0.0
HTTP port to listen on:
http.port: 9200
Add these new parameters so the head plugin can access ES (5.x; add them manually if they are not present):
http.cors.enabled: true
http.cors.allow-origin: "*"
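Once the service is up (it is started below), you can verify the CORS settings by sending a request with an Origin header and looking for Access-Control-Allow-Origin in the response; a quick sketch:
curl -s -i -H "Origin: http://localhost:9100" http://127.0.0.1:9200/ | grep -i access-control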
Before starting the Elasticsearch service, two more tweaks.
This parameter disables the seccomp system-call-filter bootstrap check, which fails on older kernels; add it or startup will error out:
bootstrap.system_call_filter: false
Elasticsearch defaults to a 2 GB heap; let's shrink it:
Edit the JVM options:
vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
Notes
A few user limits need to be raised, or startup will fail:
vim /etc/security/limits.conf
Append the following at the end (elk is the user that starts the service; you can also use * instead):
elk soft nofile 65536
elk hard nofile 65536
elk soft nproc 2048
elk hard nproc 2048
elk soft memlock unlimited
elk hard memlock unlimited
One more parameter to change:
vim /etc/security/limits.d/20-nproc.conf
Change
soft nproc 4096
to:
soft nproc 2048
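Assuming elk is the startup user as above, you can verify the new limits from a fresh login shell; a quick sketch:
su - elk -c 'ulimit -n'   # expect 65536
su - elk -c 'ulimit -u'   # expect 2048
su - elk -c 'ulimit -l'   # expect unlimited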
Startup may still fail at this point with:
memory locking requested for elasticsearch process but memory is not locked
To fix it, add the following.
Resource limits for systemd services now live in /etc/systemd/system.conf:
vim /etc/systemd/system.conf
Append at the end:
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
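systemd has to re-read /etc/systemd/system.conf before these defaults apply; a sketch of the usual steps (the grep assumes the unit is named elasticsearch.service):
systemctl daemon-reexec
systemctl restart elasticsearch
systemctl show elasticsearch | grep -E 'LimitMEMLOCK|LimitNOFILE'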
Start it:
/etc/init.d/elasticsearch start
Check the service status:
service elasticsearch status
Log location (the log file is named after the cluster name):
/var/log/elasticsearch/demon.log
Enable the service at boot:
chkconfig elasticsearch on
netstat ships with the net-tools package; install it with:
yum install net-tools
Request port 9200 from a browser to see whether the node is up.
First check that port 9200 is listening:
netstat -antp |grep 9200
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN 2833/java
Test it over HTTP (the following output is what a healthy node returns):
curl http://127.0.0.1:9200/
[root@zpf ~]# curl http://127.0.0.1:9200/
{
"name" : "elk-1",
"cluster_name" : "demon",
"cluster_uuid" : "t8-0566XQuaCsp_V3Q315A",
"version" : {
"number" : "5.6.16",
"build_hash" : "3a740d1",
"build_date" : "2019-03-13T15:33:36.565Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
}
A quick recap of Elasticsearch: it is the real-time full-text search and analytics engine described above, a scalable distributed system built on Apache Lucene that exposes its functionality through REST and Java APIs.
How to interact with Elasticsearch:
[root@zpf ~]# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 114
{
"count" : 0,
"_shards" : {
"total" : 0,
"successful" : 0,
"skipped" : 0,
"failed" : 0
}
}
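The same REST interface covers the full document lifecycle, not just _count. A minimal sketch; the index name test-index, the type logs, and the document body here are made up for illustration:
# index a document (POST lets ES assign the id)
curl -XPOST 'localhost:9200/test-index/logs?pretty' -H 'Content-Type: application/json' -d '{"message": "hello elk"}'
# search for it
curl -XGET 'localhost:9200/test-index/_search?q=message:hello&pretty'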
Configure start on boot:
1. Create the script in /etc/init.d/: vim /etc/init.d/elasticsearch
File contents:
#!/bin/bash
#
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch-7.5.1
ES_HOME=/opt/efk/elasticsearch-7.5.1
case "$1" in
start)
    su - elasticsearch -c "$ES_HOME/bin/elasticsearch -d -p pid"
    echo "elasticsearch is started"
    ;;
stop)
    pid=$(cat "$ES_HOME/pid")
    kill "$pid"    # plain SIGTERM lets ES shut down cleanly; avoid kill -9
    echo "elasticsearch is stopped"
    ;;
restart)
    pid=$(cat "$ES_HOME/pid")
    kill "$pid"
    echo "elasticsearch is stopped"
    sleep 1
    su - elasticsearch -c "$ES_HOME/bin/elasticsearch -d -p pid"
    echo "elasticsearch is started"
    ;;
*)
    echo "usage: $0 start|stop|restart"
    ;;
esac
exit 0
2. Make the script executable (755 is sufficient; avoid 777):
chmod 755 /etc/init.d/elasticsearch
3. Register or remove the service with chkconfig (see the chkconfig man page for details):
chkconfig --add elasticsearch
chkconfig --del elasticsearch
4. Start and stop the service:
service elasticsearch start    // start the service
service elasticsearch stop     // stop the service
service elasticsearch restart  // restart the service
5. Configure start on boot:
chkconfig elasticsearch on     // enable start on boot
chkconfig elasticsearch off    // disable start on boot
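On CentOS 7 the same thing can also be expressed as a native systemd unit instead of a SysV init script. A minimal sketch, assuming the same ES_HOME and elasticsearch user as above (file: /etc/systemd/system/elasticsearch.service):
[Unit]
Description=elasticsearch
After=network.target

[Service]
Type=forking
User=elasticsearch
# -d daemonizes, -p writes the pid file that systemd tracks
ExecStart=/opt/efk/elasticsearch-7.5.1/bin/elasticsearch -d -p /opt/efk/elasticsearch-7.5.1/pid
PIDFile=/opt/efk/elasticsearch-7.5.1/pid
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
Then enable it with: systemctl daemon-reload && systemctl enable --now elasticsearch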
Installing plugins
Install the elasticsearch-head plugin
You can either run the Docker image or clone the elasticsearch-head project from GitHub; pick one of the two methods below.
1. Use the prebuilt elasticsearch-head Docker image
Pull the image:
docker pull mobz/elasticsearch-head:5
List images:
docker images
Run the container:
docker run -d -p 9100:9100 docker.io/mobz/elasticsearch-head:5
List running containers:
docker ps
Once the container is up, open http://localhost:9100/ in a browser.
2. Install elasticsearch-head from Git
# yum install -y npm
If Git is missing on the server, the clone will fail; install it first:
yum install git -y
# git clone https://github.com/mobz/elasticsearch-head.git   (GitHub no longer serves the git:// protocol)
# cd elasticsearch-head
npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
# npm install
# npm run start    (to run in the background: nohup npm run start &)
Check that the port is listening:
netstat -antp |grep 9100
Test in a browser:
http://IP:9100/
If http://IP:9100/ is unreachable:
Check the firewall status:
firewall-cmd --state
Stop firewalld:
systemctl stop firewalld.service
Disable firewalld at boot:
systemctl disable firewalld.service
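Rather than disabling the firewall outright, a safer alternative is to open only the ports the stack needs; a sketch:
firewall-cmd --permanent --add-port=9100/tcp   # elasticsearch-head
firewall-cmd --permanent --add-port=9200/tcp   # elasticsearch HTTP
firewall-cmd --permanent --add-port=5601/tcp   # kibana
firewall-cmd --reload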
Reference: fixing "git: command not found" on Linux:
yum install git -y
To start elasticsearch-head in the background:
nohup grunt server & exit
3. Using Logstash and its configuration files
Install Logstash:
Official installation guide:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the yum repository GPG key:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install Logstash with yum:
# yum install -y logstash
Check where Logstash was installed:
# rpm -ql logstash
Create a symlink so you don't have to type the full install path every time (it installs under /usr/share by default):
ln -s /usr/share/logstash/bin/logstash /bin/
Run Logstash:
# logstash -e 'input { stdin { } } output { stdout {} }'
Once it is running, type:
nihao
stdout echoes the result back.
This JDK warning simply means it expects more CPU cores to be available:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Check Logstash status:
sudo service logstash status
Start Logstash:
sudo service logstash start
Stop Logstash:
sudo service logstash stop
Enable Logstash at boot:
chkconfig logstash on
One issue worth noting:
Method 1: start with sudo service logstash start; the reported status looks fine:
[root@zpf bin]# sudo service logstash status
Redirecting to /bin/systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 17:07:29 CST; 3s ago
Main PID: 8622 (java)
Tasks: 15
Memory: 220.2M
CGroup: /system.slice/logstash.service
└─8622 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitG...
Method 2: start with nohup ./logstash -f /etc/logstash/conf.d/elk.conf &:
[root@zpf bin]# sudo service logstash status
Redirecting to /bin/systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2019-07-09 17:07:46 CST; 4min 31s ago
Process: 8744 ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash (code=exited, status=143)
Main PID: 8744 (code=exited, status=143)
Checking the status with systemctl now reports failure (Active: failed), yet Logstash did in fact start successfully.
Both methods run Logstash as a Java process, but only the second confuses the status check. The explanation: systemctl only tracks processes it launched itself, so a Logstash started manually with nohup is invisible to the logstash.service unit, whose last managed process exited with status 143 (SIGTERM).
[root@zpf bin]# ps -ef|grep java
elastic+ 3177 1 1 15:48 ? 00:00:57 /bin/java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch
root 26464 4378 7 16:35 pts/0 00:01:03 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb -f /etc/logstash/conf.d/elk.conf
Problem:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Fix:
cd /usr/share/logstash
ln -s /etc/logstash ./config
Logstash logs live under /var/log/logstash.
Problem: Logstash is not able to start since configuration auto reloading was enabled but the configuration contains plugins that don't support it. Quitting...
Fix: in /etc/logstash/logstash.yml set:
config.test_and_exit: false
(Auto reloading itself is controlled by config.reload.automatic; setting that to false also resolves this error.)
Problem:
Logstash could not be started because there is already another instance
using the configured data directory. If you wish to run multiple
instances, you must change the "path.data" setting.
Fix: stop the other instance first: sudo service logstash stop
Running Logstash with a configuration file
Official guide:
https://www.elastic.co/guide/en/logstash/current/configuration.html
Create the configuration file elk.conf:
# vim /etc/logstash/conf.d/elk.conf
Add the following to the file:
input { stdin { } }
output {
elasticsearch { hosts => ["192.168.1.202:9200"] }
stdout { codec => rubydebug }
}
Run Logstash with the configuration file:
# logstash -f /etc/logstash/conf.d/elk.conf
Once it is running, type something on stdin and it is echoed on stdout and indexed into Elasticsearch.
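To confirm events actually reached Elasticsearch, list the indices; with the output block above, Logstash writes to a logstash-YYYY.MM.dd index by default:
curl 'http://192.168.1.202:9200/_cat/indices?v'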
4. Installing and using Kibana
The Kibana version must match the Elasticsearch version.
I am using Elasticsearch 5.6.16, so:
Download the Kibana tar.gz package:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.16-linux-x86_64.tar.gz
Extract the tarball:
# tar -xzf kibana-5.6.16-linux-x86_64.tar.gz
Move the extracted directory to /usr/local:
# mv kibana-5.6.16-linux-x86_64 /usr/local
Create a symlink for Kibana:
# ln -s /usr/local/kibana-5.6.16-linux-x86_64/ /usr/local/kibana
Edit the Kibana configuration file:
# vim /usr/local/kibana/config/kibana.yml
Uncomment and set the following options:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.202:9200"
kibana.index: ".kibana"
Start Kibana:
nohup /usr/local/kibana/bin/kibana &
netstat -antp |grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 17007/node
Open a browser and set up the matching index pattern:
http://IP:5601
Kibana setup
Mind the time picker (I ignored it at first and could not find any logs).
That completes the basic setup.
Good. Now that indices can be created, let's ship the nginx, apache, messages, and secure logs into the stack for display (if Nginx is already installed just edit its config; otherwise install it first).
Edit the nginx configuration and add the following inside the http block:
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domian":"$host",'
'"host":"$server_addr",'
'"size":"$body_bytes_sent",'
'"responsetime":"$request_time",'
'"referer":"$http_referer",'
'"ua":"$http_user_agent"'
'}';
access_log logs/elk.access.log json;
Keep these directives inside the http block; log_format is only valid at the http level in nginx.
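After reloading nginx it is worth checking that valid JSON is actually being written; a quick sketch, assuming nginx lives under /usr/local/nginx as used below:
/usr/local/nginx/sbin/nginx -s reload
curl -s http://127.0.0.1/ > /dev/null
tail -n 1 /usr/local/nginx/logs/elk.access.log | python -m json.tool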
Edit the Logstash configuration to collect these logs:
vim /etc/logstash/conf.d/elk.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
}
Run it and see how it goes:
logstash -f /etc/logstash/conf.d/elk.conf
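If everything works, one new index per type per day should appear in Elasticsearch; the head plugin shows them, or check from the shell:
curl 'http://192.168.1.202:9200/_cat/indices/nagios-*?v'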
## ELK, the final chapter
Install Redis:
# yum install -y redis
Edit the Redis configuration file:
# vim /etc/redis.conf
Change the following:
daemonize yes
bind 192.168.1.202
Start the Redis service:
# /etc/init.d/redis restart
Test whether Redis came up:
# redis-cli -h 192.168.1.202
Type info; if it returns data without an error, Redis is working:
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf configuration file to store data from stdin into Redis:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
stdin {}
}
output {
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
}
}
Run Logstash with the redis-out.conf configuration file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Once it is running, type some content into Logstash and check the effect.
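To see the events land in Redis, a quick redis-cli sketch (db 1 and the elk-test key match the output config above; -a passes the password, acceptable on a test box):
redis-cli -h 192.168.1.202 -a test -n 1 llen elk-test
redis-cli -h 192.168.1.202 -a test -n 1 lrange elk-test 0 0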
Now edit redis-out.conf again, this time to read the stored data out of Redis and ship it to Elasticsearch:
# vim /etc/logstash/conf.d/redis-out.conf
Replace the contents with:
input{
redis {
host => "192.168.1.202"
port => "6379"
password => 'test'
db => '1'
data_type => "list"
key => 'elk-test'
batch_count => 1 # how many events to pull from the list per read; the default is 125, and with fewer than 125 entries in Redis it errors out, hence setting it to 1 while testing
}
}
output {
elasticsearch {
hosts => ['192.168.1.202:9200']
index => 'redis-test-%{+YYYY.MM.dd}'
}
}
Run Logstash with the redis-out.conf configuration file:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Next, modify the earlier configuration so that every monitored log source is written into Redis first, and Redis then feeds Elasticsearch.
Edit elk.conf as follows:
input {
file {
path => "/var/log/httpd/access_log"
type => "http"
start_position => "beginning"
}
file {
path => "/usr/local/nginx/logs/elk.access.log"
type => "nginx"
start_position => "beginning"
}
file {
path => "/var/log/secure"
type => "secure"
start_position => "beginning"
}
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
if [type] == "http" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
}
}
if [type] == "nginx" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
}
}
if [type] == "secure" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
}
}
if [type] == "system" {
redis {
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
}
}
}
Run Logstash with this shipper configuration (elk.conf):
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elk.conf
Check in Redis whether the data has been written (sometimes a watched log file produces no new lines, in which case nothing shows up in Redis either).
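A sketch of that check with redis-cli, matching db 6 and the nagios_* keys above:
redis-cli -h 192.168.1.202 -a test -n 6 keys 'nagios_*'
redis-cli -h 192.168.1.202 -a test -n 6 llen nagios_system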
Now read the data back out of Redis and write it to Elasticsearch (the indexer).
Edit the configuration file:
# vim /etc/logstash/conf.d/redis-out.conf
Add the following:
input {
redis {
type => "system"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_system'
batch_count => 1
}
redis {
type => "http"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_http'
batch_count => 1
}
redis {
type => "nginx"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_nginx'
batch_count => 1
}
redis {
type => "secure"
host => "192.168.1.202"
password => 'test'
port => "6379"
db => "6"
data_type => "list"
key => 'nagios_secure'
batch_count => 1
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-system-%{+YYYY.MM.dd}"
}
}
if [type] == "http" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-http-%{+YYYY.MM.dd}"
}
}
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-nginx-%{+YYYY.MM.dd}"
}
}
if [type] == "secure" {
elasticsearch {
hosts => ["192.168.1.202:9200"]
index => "nagios-secure-%{+YYYY.MM.dd}"
}
}
}
Note:
The input side collects from the client.
The output again stores into Elasticsearch on 192.168.1.202. To store on the current host instead, change hosts in the output section to localhost; if you also want to view the data in Kibana, deploy Kibana on that machine too. The point of this split is loose coupling.
Put simply: collect logs on the client, write them to Redis (on the server or locally), and point the output stage at the ES server.
Run the command and check the result:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
The result is the same as writing directly to the ES server, except the logs are first staged in Redis and then pulled back out of it.
Because ES keeps logs forever, old ones need to be deleted periodically. The command below deletes the indices from $n days ago; note that the date format must match the index naming pattern (here %{+YYYY.MM.dd}, i.e. dot-separated):
curl -X DELETE "http://xx.xx.com:9200/nagios-*-$(date +%Y.%m.%d -d "-$n days")"
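A minimal cron sketch for running that cleanup nightly; the 30-day retention and the schedule are illustrative, and % must be escaped in crontab entries:
# /etc/cron.d/es-purge - delete indices older than 30 days, daily at 02:30
30 2 * * * root curl -s -X DELETE "http://xx.xx.com:9200/nagios-*-$(date +\%Y.\%m.\%d -d '-30 days')"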
Using Filebeat
Install Filebeat
Download Filebeat:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
Extract it:
tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz
Enter the top-level directory and edit the configuration:
vi filebeat.yml
Find the configuration that looks like the following and modify it:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/xxx/*.log
    - /var/xxx/*.out
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "localhost:5601"
output.elasticsearch:
  hosts: ["localhost:9200"]
Mind the YAML format: child keys are indented two spaces. All of the options listed already exist in the config file; only the parts to change are shown. enabled defaults to false and must be set to true before any logs are collected. Replace /var/xxx/*.log with your own log path, and note the space after the leading dash. For multiple paths, add another line, indented four spaces before the dash. The multiline.* options just need to be uncommented; they fold multi-line entries (such as stack traces) into a single event. Uncomment host under setup.kibana and set it for your environment, and do the same for hosts under output.elasticsearch. A config check is shown below.
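Filebeat can validate the file before you start it; a quick sketch using its test subcommands:
./filebeat test config -c filebeat.yml   # check the YAML for errors
./filebeat test output -c filebeat.yml   # check connectivity to the configured output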
Start Filebeat
Filebeat startup commands:
# foreground
./filebeat -e -c filebeat.yml
# background: discard output / write output to a file
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
nohup ./filebeat -e -c filebeat.yml > filebeat.log &
Filebeat with multiple indices
Filebeat is small and easy to use. The following configuration routes two log sources to two different indices:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  fields:
    type: "nginx"
- input_type: log
  paths:
    - /mnt/www/bi.xxxxx.com/app/runtime/tasklog/task_*.log
  fields:
    type: "task"
  json.message_key: log
  json.keys_under_root: true
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  #index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    - index: "www-f-nginx-log"
      when.equals:
        fields.type: "nginx"
    - index: "www-f-task-log"
      when.equals:
        fields.type: "task"
#./filebeat -e -c filebeat.yml -d "Publish"