ELK Installation and Deployment in Practice
Background
With the rapid growth of the company's platform, the need for ELK-based log analysis has become pressing:
- analysis of user behavior
- analysis of operations and marketing activities
- centralized log management for log analysis, fault locating, and troubleshooting
Installing and Deploying ELK
Installing Elasticsearch
Download elasticsearch-6.4.0
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz.sha512
# If the shasum command is missing, install it with: yum install perl-Digest-SHA
shasum -a 512 -c elasticsearch-6.4.0.tar.gz.sha512
tar -xzf elasticsearch-6.4.0.tar.gz
cd elasticsearch-6.4.0/
Configuring Elasticsearch
1. Configure hosts on each server. To mirror a production environment and scale out compute, a cross-host cluster is the preferred setup.
1.1 Add the host entries on every node:
vim /etc/hosts
192.168.199.21 es-node1
192.168.199.22 es-node2
192.168.199.23 es-node3
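Before editing any Elasticsearch config, it is worth confirming that each node can resolve and reach the others; a quick sanity check using the hostnames defined above (run on every host):

```shell
# Resolve the cluster hostnames through /etc/hosts
getent hosts es-node1 es-node2 es-node3
# Confirm basic network reachability from this node
ping -c 1 es-node2
```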
2. Configure Elasticsearch
2.1 Cluster configuration for es-node1. In the settings below, node.master: true marks the node as master-eligible and node.data: true marks it as a data node.
vim config/elasticsearch.yml
cluster.name: my_es_cluster
node.name: es-node1
path.data: /data/elk/es/data
path.logs: /data/elk/es/logs
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
# Bind address; 0.0.0.0 means any host can connect
network.host: 0.0.0.0
transport.tcp.port: 9300
# Compress TCP transport traffic
transport.tcp.compress: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["es-node1","es-node2","es-node3"]
2.2 Cluster configuration for es-node2
vi config/elasticsearch.yml
cluster.name: my_es_cluster
node.name: es-node2
path.data: /data/elk/es/data
path.logs: /data/elk/es/logs
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: true
# Bind address; 0.0.0.0 means any host can connect
network.host: 0.0.0.0
transport.tcp.port: 9300
# Compress TCP transport traffic
transport.tcp.compress: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["es-node1","es-node2","es-node3"]
2.3 Cluster configuration for es-node3
vi config/elasticsearch.yml
cluster.name: my_es_cluster
node.name: es-node3
path.data: /data/elk/es/data
path.logs: /data/elk/es/logs
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: true
# Bind address; 0.0.0.0 means any host can connect
network.host: 0.0.0.0
transport.tcp.port: 9300
# Compress TCP transport traffic
transport.tcp.compress: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["es-node1","es-node2","es-node3"]
Starting Elasticsearch
1. Before starting the Elasticsearch service, create an es group and an es user; for security reasons, Elasticsearch refuses to run as root.
groupadd es                          # create the es group
useradd es -g es -p pwd              # create the es user and add it to the es group
chown -R es:es elasticsearch-6.4.0   # give es ownership of the install directory
su - es                              # switch to the es user
2. Start command
cd /tools/elk/elasticsearch-6.4.0
./bin/elasticsearch
Problems you will hit on first startup
ERROR: [2] bootstrap checks failed [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
# Switch to root and edit
vi /etc/security/limits.conf
# Append at the end
es hard nofile 65536
es soft nofile 65536
# Log back in as the es user and verify; 65536 means the change took effect
ulimit -Hn
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
# Switch to root
vi /etc/sysctl.conf
# Append at the end
vm.max_map_count=262144
# Run sysctl -p to apply the change and show the result
sysctl -p
With the above fixed, start the data nodes first and the master node last.
cd /tools/elk/elasticsearch-6.4.0
./bin/elasticsearch
Once all nodes are up, run the following curl command on the master node; output like the snippet below means the Elasticsearch cluster started successfully.
curl http://localhost:9200/_nodes/process?pretty
{
  "_nodes": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  ...
}
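Beyond the node count, the standard cluster health API summarizes shard allocation in one response; as an optional extra check, a green status means all shards are assigned (yellow means some replicas are unassigned):

```shell
# One-line summary of cluster state, node count, and shard allocation
curl -s 'http://localhost:9200/_cluster/health?pretty'
```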
Installing Kibana
Download Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz.sha512
shasum -a 512 -c kibana-6.4.0-linux-x86_64.tar.gz.sha512
tar -xzf kibana-6.4.0-linux-x86_64.tar.gz
mv kibana-6.4.0-linux-x86_64/ kibana-6.4.0
cd kibana-6.4.0/
Configuring Kibana
vim ./config/kibana.yml
server.host: "192.168.199.21"
Starting Kibana
./bin/kibana
Accessing Kibana
http://192.168.199.21:5601
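Besides opening the UI in a browser, Kibana 6.x exposes a status API that is convenient for scripting a readiness check (adjust the address to your server.host setting):

```shell
# Reports Kibana's overall state and per-plugin status as JSON
curl -s 'http://192.168.199.21:5601/api/status'
```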
Installing Logstash and Filebeat
Download Logstash and Filebeat
# Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz.sha512
shasum -a 512 -c logstash-6.4.0.tar.gz.sha512
tar -xzf logstash-6.4.0.tar.gz
cd logstash-6.4.0
# Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz.sha512
shasum -a 512 -c filebeat-6.4.0-linux-x86_64.tar.gz.sha512
tar -xzf filebeat-6.4.0-linux-x86_64.tar.gz
mv filebeat-6.4.0-linux-x86_64 filebeat-6.4.0
cd filebeat-6.4.0
Configuring Logstash and Filebeat
Official guide: https://www.elastic.co/guide/en/logstash/6.4/advanced-pipeline.html
# filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - /path/to/file/logstash-tutorial.log
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
output.logstash:
  hosts: ["localhost:5044"]
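Filebeat ships with a `test` subcommand that validates the configuration and the output connection before any logs are shipped; a quick pre-flight check:

```shell
# Check filebeat.yml for syntax errors
./filebeat test config -c ./filebeat.yml
# Try to connect to the Logstash endpoint configured under output.logstash
./filebeat test output -c ./filebeat.yml
```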
# logstash-test.conf
cd logstash-6.4.0
vi logstash-test.conf
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
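Logstash can parse a pipeline file without starting it, which catches config and grok typos early:

```shell
# Validate the pipeline definition and exit without processing any events
./bin/logstash -f logstash-test.conf --config.test_and_exit
```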
Starting Logstash and Filebeat
# Start Filebeat in the background
nohup ./filebeat -c ./filebeat.yml &
# Start Logstash in the background
nohup ./bin/logstash -f logstash-test.conf &
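Once both processes are running, events from the tailed log should land in daily logstash-* indices. An end-to-end check against the cluster (this assumes the default index naming of the elasticsearch output plugin):

```shell
# List logstash indices and their document counts
curl -s 'http://localhost:9200/_cat/indices/logstash-*?v'
# Fetch one event to confirm the grok and geoip fields were added
curl -s 'http://localhost:9200/logstash-*/_search?pretty&size=1'
```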