0. Preface:
My deployment layout:
The ES cluster runs bare on VMs; everything else is brought up from Helm charts: a DaemonSet for filebeat, a StatefulSet for logstash, and a Deployment for kibana.
Chart packages:
[GitHub - elastic/helm-charts: You know, for Kubernetes](https://github.com/elastic/helm-charts)
1. ES in a container:
1.1 Write the main config to a file that will be mounted in:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
# Allow remote access
http.host: 0.0.0.0
# elasticsearch and the elasticsearch-head tool are a separate front end and back end, so cross-origin (CORS) requests must be allowed
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Enable username/password authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
1.2 Run the container:
docker run --name elasticsearch -d \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "http.host=0.0.0.0" \
  -e ES_JAVA_OPTS="-Xms10g -Xmx10g" \
  -v /data/es-test:/usr/share/elasticsearch/data \
  -v /data/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /usr/share/elasticsearch/logs:/usr/share/elasticsearch/logs \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.3
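For repeatable setups, the same container can be captured as a docker-compose service — a sketch using the image, ports, and mounts from the command above (the service name is arbitrary):

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.3
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
      - http.host=0.0.0.0
      - ES_JAVA_OPTS=-Xms10g -Xmx10g
    volumes:
      - /data/es-test:/usr/share/elasticsearch/data
      - /data/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/share/elasticsearch/logs:/usr/share/elasticsearch/logs
```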
1.3 Exec into the container and set the built-in user passwords:
docker exec -it elasticsearch bin/elasticsearch-setup-passwords interactive
2. ES single node on a VM:
2.1 Fetch and unpack the release:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.17.3-linux-x86_64.tar.gz -C /data/
2.2 Adjust system limits:
vim /etc/security/limits.conf
* soft nofile 300000
* hard nofile 300000
* soft nproc 102400
* hard nproc 102400
* soft memlock unlimited
* hard memlock unlimited
vim /etc/sysctl.conf
vm.max_map_count=655360
# Run sysctl -p afterwards to apply the change
2.3 Create a service user and grant ownership (Elasticsearch refuses to run as root):
groupadd elastic && useradd elastic -g elastic
chown -R elastic:elastic /data/elasticsearch-7.17.3
2.4 Generate the transport certificates before enabling X-Pack security:
./bin/elasticsearch-certutil ca --out config/elastic-stack-ca.p12 --pass ""
./bin/elasticsearch-certutil cert --ca config/elastic-stack-ca.p12 --ca-pass "" --out config/elastic-certificates.p12 --pass ""
2.5 Edit the main config and JVM options:
vim config/jvm.options
-Xms10g
-Xmx10g
vim config/elasticsearch.yml
cluster.name: "elasticsearch"
node.name: es-node0
network.host: 0.0.0.0
cluster.initial_master_nodes: ["es-node0"]
# path.data: /data/es-test/
# Allow remote access
http.host: 0.0.0.0
# elasticsearch and the elasticsearch-head tool are a separate front end and back end, so cross-origin (CORS) requests must be allowed
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Enable username/password authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
2.6 Start the service in the background (run as the elastic user; -d daemonizes):
su - elastic
cd /data/elasticsearch-7.17.3 && ./bin/elasticsearch -d
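Instead of a background shell job, a systemd unit keeps Elasticsearch supervised across crashes and reboots. A minimal sketch, assuming the unpack path /data/elasticsearch-7.17.3 and the elastic user created in 2.3 (the unit file path and name are illustrative):

```ini
# /etc/systemd/system/elasticsearch.service  (hypothetical path/name)
[Unit]
Description=Elasticsearch 7.17.3
After=network.target

[Service]
Type=simple
User=elastic
Group=elastic
ExecStart=/data/elasticsearch-7.17.3/bin/elasticsearch
# Mirror the limits.conf settings from 2.2
LimitNOFILE=300000
LimitMEMLOCK=infinity
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the unit: systemctl daemon-reload && systemctl enable --now elasticsearch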
2.7 Set the built-in user passwords after startup:
bin/elasticsearch-setup-passwords interactive
3. ES distributed cluster:
3.1-3.4 Same as 2.1-2.4; generate the certificate on one node only, then copy it to the other two nodes
3.5 Edit the main config and JVM options:
vim config/jvm.options
-Xms10g
-Xmx10g
---
# The only per-node difference in the config below is node.name; adjust it on each node
---
vim config/elasticsearch.yml
cluster.name: "elasticsearch"
node.name: es-node0
cluster.initial_master_nodes: ["es-node0", "es-node1", "es-node2"]
# node.master/node.data/node.ingest are deprecated in 7.x in favor of node.roles, but still accepted
node.master: true
node.data: true
node.ingest: true
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["172.16.250.229:9300","172.16.250.230:9300","172.16.250.231:9300"]
# The discovery.zen.* options below are legacy 6.x fault-detection settings; 7.x ignores them
# (their 7.x counterparts live under cluster.fault_detection.*)
# Interval between fault-detection pings (tune to the environment)
# discovery.zen.fd.ping_interval: 30s
# Timeout for each ping
# discovery.zen.fd.ping_timeout: 120s
# Failed pings before a node is considered down
# discovery.zen.fd.ping_retries: 6
# Minimum master-eligible nodes for a quorum, usually (n/2 + 1); ignored in 7.x
# discovery.zen.minimum_master_nodes: 2
# Start recovery once this many nodes have joined
# gateway.recover_after_nodes: 2
# Number of nodes expected before shard allocation begins
# gateway.expected_nodes: 3
# How long to wait before starting recovery anyway
# gateway.recover_after_time: 1m
# Allow remote access
http.host: 0.0.0.0
# elasticsearch and the elasticsearch-head tool are a separate front end and back end, so cross-origin (CORS) requests must be allowed
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Enable username/password authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
thread_pool.search.size: 20
thread_pool.search.queue_size: 200
indices.memory.index_buffer_size: 30%
indices.memory.min_index_buffer_size: 96mb
indices.fielddata.cache.size: 30%
3.6-3.7 Same as 2.6-2.7; set the passwords on one node only — built-in users are stored in the .security index and replicate across the cluster
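The (n/2 + 1) rule from the commented minimum_master_nodes setting, worked out for this three-node cluster (note that 7.x manages the voting quorum automatically, so this is background arithmetic only):

```shell
# Master-election quorum: a strict majority of master-eligible nodes
n=3                      # master-eligible nodes in this cluster
quorum=$(( n / 2 + 1 ))  # floor(n/2) + 1
echo "$quorum"           # → 2
```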
4. Filebeat shipping to Logstash (container logs):
filebeat.yml: |
# When shipping directly to ES, add these multiline settings here to merge stack traces;
# when shipping to Logstash (as below), do the merging in Logstash instead
# multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
# multiline.negate: true
# multiline.match: after
# multiline.timeout: 30s
queue.mem.events: 2048
queue.mem.flush.min_events: 1536
# 输入
filebeat.inputs:
- type: container
ignore_older: 168h
max_bytes: 20480
tail_files: true
paths:
- /var/log/containers/*.log
# Drop events we don't need
processors:
- drop_event:
    when:
      or:
        - regexp:
            kubernetes.pod.name: "filebeat.*"
# Add Kubernetes metadata (including the host field)
- add_kubernetes_metadata:
default_indexers.enabled: true
default_matchers.enabled: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# Trim the Kubernetes metadata down to a few fields
- script:
lang: javascript
id: format_k8s
tag: enable
source: >
function process(event) {
var k8s=event.Get("kubernetes");
var newK8s = {
podname: k8s.pod.name,
namespace: k8s.namespace,
container: k8s.container.name,
host: k8s.node.hostname
}
event.Put("k8s", newK8s);
}
# Drop redundant fields
- drop_fields:
fields: ["host", "ecs", "log", "tags", "agent", "input", "stream", "container", "orchestrator", "kubernetes"]
ignore_missing: true
output.logstash:
  # The default port must match the beats input port in logstash.conf (5044) unless a Service remaps it
  hosts: ['${LOGSTASH_HOST:logstash}:${LOGSTASH_PORT:8080}']
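The config above references ${NODE_NAME}, ${LOGSTASH_HOST}, and ${LOGSTASH_PORT}, which must come from the filebeat container's environment. A sketch of that part of a DaemonSet spec (the values are placeholders; the node name is supplied via the standard fieldRef mechanism):

```yaml
# Fragment of the filebeat DaemonSet container spec (names/values are illustrative)
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # injects the Kubernetes node name
  - name: LOGSTASH_HOST
    value: "logstash"              # the logstash Service name
  - name: LOGSTASH_PORT
    value: "5044"                  # must reach the beats input port
```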
4-1. Filebeat shipping directly to ES (node logs):
filebeat.yml: |
queue.mem.events: 2048
queue.mem.flush.min_events: 1536
filebeat.inputs:
- type: log
ignore_older: 168h
max_bytes: 20480
paths:
- /var/log/message*
filebeat.config.modules:
enabled: true
path: ${path.config}/modules.d/*.yml
setup.template.enabled: false
setup.template.name: "message233"
setup.template.pattern: "message233-*"
setup.template.overwrite: true
setup.ilm.enabled: false
output.elasticsearch:
hosts: ["http://192.xxx.xxx.xxx:9200", "http://192.xxx.xxx.xxx:9200", "http://192.xxx.xxx.xxx:9200"]
username: "elastic"
password: "xxxxxxxxxxxx"
index: "message233-%{+YYYY.MM.dd}"
5. Logstash configuration:
logstash.yml: |
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: xxxxxxxx
# xpack.monitoring.elasticsearch.hosts: [ "http://172.16.xxx.xxx:9200" ]
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.xxx.xxx:9200", "http://192.168.xxx.xxx:9200", "http://192.168.xxx.xxx:9200" ]
xpack.monitoring.elasticsearch.sniffing: false
http.host: "0.0.0.0"
pipeline.ecs_compatibility: disabled
# To merge multiline logs, bake the plugin into the image first, then use the multiline filter below
# logstash-plugin install logstash-filter-multiline
logstash.conf: |
input {
  beats {
    port => 5044
  }
}
filter {
if [k8s][namespace] == "ops" {
drop {}
}
# Comment out the next five lines if multiline merging is not needed
multiline {
pattern => "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
negate => true
what => "previous"
}
}
output {
if [k8s][namespace] == "lzos2" {
elasticsearch{
hosts => ["http://192.168.xxx.xxx:40059", "http://192.168.xxx.xxx:40065", "http://192.168.xxx.xxx:40066"]
user => "xxxxxx"
password => "xxxxxxxxxxx"
index => "dev-221-lzos2-%{+YYYY.MM.dd}"
ssl_certificate_verification => false
ssl => false
}
}
else if [k8s][namespace] == "zadig" {
elasticsearch{
hosts => ["http://192.168.xxx.xxx:40059", "http://192.168.xxx.xxx:40065", "http://192.168.xxx.xxx:40066"]
user => "xxxxxx"
password => "xxxxxxxxxxxxxx"
index => "dev-221-zadig-%{+YYYY.MM.dd}"
ssl_certificate_verification => false
ssl => false
}
}
else {
elasticsearch {
hosts => ["http://192.168.xxx.xxx:40059", "http://192.168.xxx.xxx:40065", "http://192.168.xxx.xxx:40066"]
user => "xxxxx"
password => "xxxxxxxxxxxxx"
# index => "logstash-%{+YYYY.MM.dd}"
# index => "logstash-221"
index => "dev-221-%{[k8s][namespace]}"
ssl_certificate_verification => false
ssl => false
}
}
}
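The multiline pattern used above can be sanity-checked outside Logstash: only lines beginning with a yyyy-mm-dd date start a new event; everything else merges into the previous one. A quick check with grep on a sample stack trace:

```shell
# Count the lines that would START a new event under pattern ^[0-9]{4}-[0-9]{2}-[0-9]{2}
printf '2024-01-02 ERROR boom\n  at Foo.java:1\n  at Bar.java:2\n' \
  | grep -Ec '^[0-9]{4}-[0-9]{2}-[0-9]{2}'   # → 1 (the stack-trace lines merge into the first)
```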
6. Scheduled index cleanup for ES:
# Variant 1: delete whole dated indices (recommended)
#!/bin/bash
# Cron job: delete the indices stamped 7 days ago
DATE=$(date -d "7 days ago" +%Y.%m.%d)
#INDEX=$(curl -XGET 'http://127.0.0.1:9200/_cat/indices/?v' | awk '{print $3}')
curl -X DELETE -u elastic:xxxxxxxx "http://172.16.xxx.xxx:9200/*-$DATE"
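The dated-index script above deletes only the index stamped exactly seven days ago, so it relies on being run daily from cron. A dry run that prints the request instead of sending it (assumes GNU date; credentials masked):

```shell
# Compute the date suffix the cleanup job targets (GNU date syntax)
DATE=$(date -d "7 days ago" +%Y.%m.%d)
# Print the DELETE request instead of executing it (dry run)
echo "curl -X DELETE -u elastic:***** \"http://172.16.xxx.xxx:9200/*-$DATE\""
```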
# Variant 2: one continuously-written index (not recommended — delete_by_query is extremely slow)
#!/bin/bash
# Index patterns to clean
a=("prd-hwy*" "uat-xxx*" "uat-xxx*" "dev-xxx*" "test-xx*")
# Loop over the patterns, marking matching docs as deleted
for i in "${a[@]}"
do
curl -XPOST -u elastic:xxxxxxxx "http://127.0.0.1:9200/$i/_delete_by_query" -H 'Content-Type: application/json' -d'{
"query": {
"range": {
"@timestamp": {
"lt": "now-7d"
}
}
}
}'
echo "Done cleaning $i data."
done
# Physically purge the deleted documents
# (only_expunge_deletes and max_num_segments are mutually exclusive, so pass only the former)
curl -XPOST -u elastic:xxxxxxxx 'http://127.0.0.1:9200/_forcemerge?only_expunge_deletes=true'
echo "Clean docs done!"
7. Handy ES commands
# Check cluster health
curl elastic:password@127.0.0.1:9200/_cat/health
# List indices
curl elastic:password@127.0.0.1:9200/_cat/indices
# Delete one or more indices (use with care; the comma-separated list must contain no spaces)
curl -X DELETE -u elastic:password http://localhost:9200/index-x1,index-x2
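When deleting several indices in one request, the name list must be comma-separated with no spaces (a space truncates the URL at the first index). A small sketch of building such a list safely:

```shell
# Join index names with commas; a space after the comma would break the URL
joined=$(printf '%s,' index-x1 index-x2)
joined=${joined%,}       # strip the trailing comma
echo "$joined"           # → index-x1,index-x2
```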