What is the ELK Stack?
It refers to the Elastic Stack.
So what exactly is ELK? "ELK" is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.
What is ELKB?
ELKB stands for four open-source projects: Elasticsearch, Logstash, Kibana, and Beats.
Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. It can run and combine many types of searches (structured data, unstructured data, geo, metrics), making it easy to search, analyze, and explore large volumes of data.
Logstash
Logstash is a free and open server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to a "stash"; here the stash is Elasticsearch.
It can dynamically ingest, transform, and ship data regardless of format or complexity. Use Grok to derive structure from unstructured data, decode geo coordinates from IP addresses, anonymize or exclude sensitive fields, and simplify overall processing.
As data travels from source to stash, Logstash filters parse each event, identify named fields to build structure, and transform them into a common format for more powerful analysis and business value.
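As a concrete sketch of that filtering stage (illustrative only, assuming access logs in the standard combined format; not taken from this article's deployment), a Logstash filter block could look like:

```conf
filter {
  # Derive structure from unstructured log lines with Grok
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Decode geo coordinates from the parsed client IP
  geoip {
    source => "clientip"
  }
  # Drop a sensitive field before the event is shipped on
  mutate {
    remove_field => ["auth"]
  }
}
```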
Kibana
Kibana is a free and open user interface that visualizes Elasticsearch data and lets you navigate the Elastic Stack. You can do anything from tracking query load to understanding how requests flow through your applications.
Use the Discover data exploration tool to go from ingest to analysis quickly. Build visualizations simply and intuitively. Quickly create dashboards that combine charts, maps, and filters to show the full picture of your data. Build alerts to trigger custom actions.
Beats
Beats is a free and open platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.
A full range of shippers covers every data type, including Filebeat (log files), Metricbeat (metrics), Auditbeat (audit data), and more. This article focuses on Filebeat, a lightweight shipper for forwarding and centralizing logs and files.
Filebeat is part of the Elastic Stack, so it works seamlessly with Logstash, Elasticsearch, and Kibana. Whether you want to transform or enrich logs and files with Logstash, run some ad-hoc analytics in Elasticsearch, or build and share dashboards in Kibana, Filebeat makes it easy to ship your data where it matters most.
Logstash vs Beats
Both Beats and Logstash can meet specific data-collection needs. Logstash was designed from the start to combine data collection, filtering, and output in one tool; if all you need is log collection, Logstash is overkill and consumes too many resources. Beats, by contrast, is lightweight and well suited to pure log collection, leaving Logstash to focus on parsing, transforming, and formatting the data.
ELKB Docker image deployment
Note: {{ip}} in what follows is a placeholder; replace it with your own IP address.
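For example, the placeholder can be filled in with a quick sed substitution before use:

```shell
# Substitute a real host IP for the {{ip}} placeholder
IP=192.168.1.10
echo 'es_host: {{ip}}' | sed "s/{{ip}}/$IP/g"   # prints: es_host: 192.168.1.10
```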
Pull the images
# 1 Pull the images
docker pull elasticsearch:7.7.0
docker pull kibana:7.7.0
docker pull logstash:7.7.0
docker pull elastic/filebeat:7.7.0
Deploy Elasticsearch
# 2 Deploy Elasticsearch and test it
docker run -d --name elasticsearch \
-v /etc/localtime:/etc/localtime:ro \
-v /elkb/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /elkb/elasticsearch/jvm.options:/usr/share/elasticsearch/config/jvm.options \
-v /elkb/elasticsearch/data:/usr/share/elasticsearch/data \
-p 30920:9200 -p 30930:9300 -e "discovery.type=single-node" elasticsearch:7.7.0
# 2.1 Contents of elasticsearch.yml (the config file)
cluster.name: "docker-cluster"
network.host: 0.0.0.0 # 0.0.0.0 is not recommended in production
# 2.2 Contents of jvm.options (the runtime config file; Xms and Xmx matter most)
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
# adjust to the resources of your machine
-Xms1g
-Xmx1g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log
## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# 2.3 The data directory
The data directory is mounted so that Elasticsearch's storage lives on the host.
# 2.4 Verify that Elasticsearch deployed successfully
curl {{ip}}:30920 # a valid response means the deployment succeeded
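Two further read-only checks using the standard _cat APIs (substitute your host for {{ip}}):

```shell
curl {{ip}}:30920/_cat/health?v    # cluster status should be green or yellow
curl {{ip}}:30920/_cat/indices?v   # lists the indices created so far
```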
Deploy Kibana
# 3 Run Kibana (to make the config file easier to edit, the steps below copy it out to the host and then mount it into a new container)
docker run -d --name kibana \
--link elasticsearch:elasticsearch \
-e ELASTICSEARCH_HOSTS=http://{{ip}}:30920 -e I18N_LOCALE=zh-CN -p 30561:5601 kibana:7.7.0
docker cp kibana:/usr/share/kibana/config/kibana.yml .
docker rm -f kibana
docker run -d --privileged --name kibana \
-v /etc/localtime:/etc/localtime:ro \
-v /elkb/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
--link elasticsearch:elasticsearch \
-e ELASTICSEARCH_HOSTS=http://{{ip}}:30920 -e I18N_LOCALE=zh-CN -p 30561:5601 kibana:7.7.0
# 3.1 Contents of kibana.yml (the config file)
server.host: '0.0.0.0' # not recommended in production, since it allows access from any address; adjust other settings as needed
# 3.2 Verify the deployment
Open http://{{ip}}:30561 in a browser
Deploy Logstash
# 4 Run Logstash
docker run -d -p 30544:5044 --name logstash \
-v /elkb/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v /elkb/logstash/conf.d/:/usr/share/logstash/conf.d/ logstash:7.7.0
# 4.1 Contents of logstash.yml (the config file)
path.config: /usr/share/logstash/conf.d/*.conf
path.logs: /var/log/logstash
# 4.2 Create a beats.conf file in the conf.d directory with the following content
input {
beats {
port => 5044
codec => "json"
}
}
output {
elasticsearch { hosts => ["{{ip}}:30920"] }
stdout { codec => rubydebug }
}
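Before wiring up Filebeat, you can confirm that Logstash loaded this pipeline and that the Beats input is listening on 5044 (a quick sanity check, not part of the original steps):

```shell
# The startup log should mention the beats input and port 5044
docker logs logstash 2>&1 | grep -i 5044
```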
Deploy Filebeat
# 5 Run Filebeat (monitoring the host's nginx log directory /nginx/logs/)
# https://raw.githubusercontent.com/elastic/beats/7.7/deploy/docker/filebeat.docker.yml
docker run --name filebeat --user=root -d \
-v /nginx/logs/:/var/log/nginx/ \
-v /elkb/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro \
-v /var/lib/docker/containers:/var/lib/docker/containers:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro elastic/filebeat:7.7.0
# 5.1 Contents of filebeat.docker.yml (the config file)
filebeat.config:
modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
filebeat.autodiscover:
providers:
- type: docker
hints.enabled: true
processors:
- add_cloud_metadata: ~
filebeat.inputs:
- type: log
enabled: true
paths:
    #- /nginx/logs/*.log  # this is the host-side nginx log path; use the container-side mounted path below instead
- /var/log/nginx/*
output.logstash:
hosts: ['{{ip}}:30544']
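With all four containers running, a rough end-to-end check (a sketch; the JSON test line matches the json codec set in beats.conf above) is to drop a line into the watched directory and look for a new index:

```shell
echo '{"message":"elkb pipeline test"}' >> /nginx/logs/test.log
sleep 10   # give Filebeat and Logstash a moment to ship the event
curl {{ip}}:30920/_cat/indices?v   # a logstash-* index should now appear
```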
Sentinl log alerting
# Compatible with Kibana 6.8.4 (no longer supported on 7.7.0); Sentinl supports only a limited range of versions and has not been updated in a long time
# 1 Edit the Kibana config file kibana.yml and add the following
sentinl:
settings:
email:
active: true
      user: ***********@163.com # email address
      password: ****** # email password or authorization code
      host: smtp.163.com # SMTP server for outgoing mail
      ssl: true # enable as needed; keep it on for cloud servers, since they usually block port 25
      port: 465
report:
active: true
# 2 Download the plugin from https://github.com/sentinl/sentinl/releases and pick the 6.8.4 build
# 3 Copy it into the kibana container
docker cp ./sentinl-v6.8.4.zip kibana:/usr/share/kibana/bin/
# 4 Install the plugin
docker exec -it kibana /bin/bash
cd /usr/share/kibana/bin
./kibana-plugin install file:///usr/share/kibana/bin/sentinl-v6.8.4.zip
./kibana-plugin remove sentinl # (if the install fails, remove the plugin first, then install again)
# 5 Restart the kibana container
docker restart kibana
# 6 Result
Open the Kibana web UI; if an alerting entry appears in the sidebar, the deployment succeeded.
ElastAlert log alerting
ElastAlert 2 project: maintained and developed on top of the original ElastAlert
elastalert Kibana alerting plugin (not supported on Kibana 7.10 and later)
Start the elastalert server
# 1 Start the elastalert service
docker run -d -p 30303:3030 -p 30333:3333 \
-v /elkb/elastalert/config/config.yaml:/opt/elastalert/config.yaml \
-v /elkb/elastalert/config/config.json:/opt/elastalert-server/config/config.json \
-v /elkb/elastalert/config/smtp_auth.yaml:/opt/elastalert/config/smtp_auth.yaml \
-v /elkb/elastalert/rules:/opt/elastalert/rules \
-v /elkb/elastalert/rule_templates:/opt/elastalert/rule_templates \
-v /etc/localtime:/etc/localtime:ro \
-e TZ=Asia/Shanghai --name elastalert bitsensor/elastalert:3.0.0-beta.1
# 1.1 Contents of config.yaml (the config file)
rules_folder: rules # folder where the alert rules live
run_every: # run once every run_every interval
  minutes: 1
buffer_time: # how long the results of ElastAlert's filtered ES queries are buffered
  minutes: 15
es_host: {{ip}} # ES host address
es_port: 30920 # ES port
# ES authentication
#es_username: elastic
#es_password: elastic
# The index ElastAlert creates in ES by default, used to store ElastAlert's run logs
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
alert_time_limit: # retry failed alerts for up to 2 days
  days: 2
# 1.2 Contents of config.json (startup parameters)
{
"appName": "elastalert-server",
"port": 3030,
"wsport": 3333,
"elastalertPath": "/opt/elastalert",
"verbose": true,
"es_debug": false,
"debug": false,
"rulesPath": {
"relative": true,
"path": "/rules"
},
"templatesPath": {
"relative": true,
"path": "/rule_templates"
},
"es_host": "{{ip}}",
"es_port": 30920,
"writeback_index": "elastalert_status"
}
# 1.3 Contents of smtp_auth.yaml (SMTP auth for the mailbox)
user: "12345678900@163.com"
password: "***********" # authorization code
Install the Kibana UI plugin
# 1 Download the plugin, picking the build for Kibana 7.7.0: https://github.com/nsano-rururu/elastalert-kibana-plugin/releases
# 2 Copy it into the kibana container
docker cp ./elastalert-kibana-plugin-1.2.0-7.7.0.zip kibana:/usr/share/kibana/bin/
# 3 Enter the kibana container's bin directory and install the plugin
./kibana-plugin install file:///usr/share/kibana/bin/elastalert-kibana-plugin-1.2.0-7.7.0.zip
Test elastalert by sending an alert
# 1 This can actually be tested as soon as the container from step 1 is up; skipping this step and testing from the Kibana UI instead is also fine
docker exec -i elastalert python -m elastalert.elastalert --verbose --config /opt/elastalert/config.yaml --rule /opt/elastalert/rules/rule.yaml
# 1.1 Contents of the rule file rule.yaml
es_host: {{ip}}
es_port: 30920
#use_ssl: True
#es_username: elastic
#es_password: elastic
name: xxx_server_rule
type: frequency # 'frequency' matches when at least y events occur within x time
query_key: # fields used to suppress duplicate alerts
- message
aggregation: # aggregate the results of 2 minutes and send them together
  minutes: 2
realert: # minimum time between two alerts for the same rule
  minutes: 2
index: "filebeat-*" # ES index name
num_events: 1 # number of log occurrences matching the rule
#threshold: 1
timeframe: # the alert fires when num_events matching logs occur within timeframe
  minutes: 5
# filter rules
filter:
- term:
input.type: "log"
#alert: post # alert method
alert:
- "email"
email_format: html
alert_text_type: alert_text_only
alert_subject: "Urgent: XXX log alert notification"
email: # alert recipients
- "***********@163.com"
- "***********@qq.com"
smtp_host: "smtp.163.com"
smtp_port: 465
smtp_auth_file: /opt/elastalert/config/smtp_auth.yaml
email_reply_to: "***********@163.com"
from_addr: "***********@163.com"
smtp_ssl: true
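If the image bundles ElastAlert's CLI utilities, the rule can also be dry-run with elastalert-test-rule, which queries Elasticsearch without sending any alerts:

```shell
docker exec -i elastalert elastalert-test-rule \
  --config /opt/elastalert/config.yaml /opt/elastalert/rules/rule.yaml
```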
# 2 Alternatively, add a rule from the Kibana UI to test
# The rule content is as follows:
# Example rule
es_host: {{ip}}
es_port: 30920
name: frequency test rule
type: frequency
index: filebeat-*
num_events: 1
timeframe:
minutes: 1
filter:
- term:
input.type: "log"
smtp_host: smtp.163.com
smtp_port: 465
smtp_ssl: True
smtp_auth_file: /opt/elastalert/config/smtp_auth.yaml
email_reply_to: 12345678900@163.com
from_addr: 12345678900@163.com
alert:
- "email"
alert_subject: "Urgent: XXX log alert notification"
email:
- "12345678900@163.com"
ELKB Kubernetes deployment
Elasticsearch cluster deployment (a 3-master-node ES cluster)
# 1 Label three cluster nodes (ES will be deployed on these three nodes to form the cluster)
kubectl label node node1 es=data
kubectl label node node2 es=data
kubectl label node node3 es=data
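Before applying the manifest, confirm the labels took effect:

```shell
kubectl get nodes -l es=data   # should list node1, node2 and node3
```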
# 2 Deploy elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: elkb
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
k8s-app: elasticsearch
template:
metadata:
labels:
k8s-app: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.7.0
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 0.5
memory: 500Mi
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: localtime
mountPath: /etc/localtime
env:
- name: cluster.name
value: elasticsearch-cluster
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.zen.minimum_master_nodes
value: "2"
- name: discovery.seed_hosts
value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
volumes:
- name: elasticsearch-data
hostPath:
path: /data/es/
- name: localtime
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
      nodeSelector: # schedule ES only onto the three nodes labeled above
es: data
initContainers:
- name: elasticsearch-logging-init
image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: elasticsearch-volume-init
image: alpine:3.6
command:
- chmod
- -R
- "777"
- /usr/share/elasticsearch/data/
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data/
      affinity: # ensure each eligible node runs at most one ES pod
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- elasticsearch
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: elkb
spec:
clusterIP: None
ports:
- name: db
port: 9200
protocol: TCP
targetPort: 9200
- name: transport
port: 9300
protocol: TCP
targetPort: 9300
selector:
k8s-app: elasticsearch
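After applying the manifest, the cluster state can be checked from one of the pods (security is not enabled in this variant):

```shell
kubectl -n elkb get pods -l k8s-app=elasticsearch -o wide   # expect three Running pods
kubectl -n elkb exec elasticsearch-0 -- curl -s localhost:9200/_cat/nodes?v
```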
filebeat deployment
# 1 Label the nodes where filebeat should run to collect logs
kubectl label node node1 log=filebeat
kubectl label node node2 log=filebeat
kubectl label node node3 log=filebeat
# 2 Deploy filebeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elkb
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
enabled: true
paths:
        - /var/log/containers/*.log # the directory being monitored
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
output.elasticsearch:
hosts: ["elasticsearch:9200"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elkb
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
nodeSelector:
log: filebeat
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: 172.31.215.191:5000/filebeat:7.7.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: timezone
mountPath: /etc/localtime
volumes:
- name: config
configMap:
defaultMode: 0640
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
# When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
path: /var/lib/filebeat-data
type: DirectoryOrCreate
- name: timezone
hostPath:
path: /etc/localtime
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: filebeat
namespace: elkb
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: Role
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: filebeat-kubeadm-config
namespace: elkb
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: Role
name: filebeat-kubeadm-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups: ["apps"]
resources:
- replicasets
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: filebeat
# should be the namespace where filebeat is running
  namespace: elkb
labels:
k8s-app: filebeat
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: filebeat-kubeadm-config
namespace: elkb
labels:
k8s-app: filebeat
rules:
- apiGroups: [""]
resources:
- configmaps
resourceNames:
- kubeadm-config
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elkb
labels:
k8s-app: filebeat
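A quick way to confirm the DaemonSet landed on the labeled nodes and is writing indices:

```shell
kubectl -n elkb get ds filebeat   # DESIRED/READY should equal the number of labeled nodes
kubectl -n elkb exec elasticsearch-0 -- curl -s 'localhost:9200/_cat/indices/filebeat-*?v'
```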
elastalert deployment
# 1 Create the secret
# 1.1 Prepare the smtp_auth.yaml file
user: "12345678900@163.com"
password: "*********"
# 1.2 Create the secret from smtp_auth.yaml
kubectl create secret generic smtp-auth --from-file=smtp_auth.yaml -n elkb
# 2 Deploy elastalert.yaml
apiVersion: v1
data:
config.json: |
{
"appName": "elastalert-server",
"port": 3030,
"wsport": 3333,
"elastalertPath": "/opt/elastalert",
"verbose": true,
"es_debug": false,
"debug": false,
"rulesPath": {
"relative": true,
"path": "/rules"
},
"templatesPath": {
"relative": true,
"path": "/rule_templates"
},
"es_host": "elasticsearch.elkb",
"es_port": 9200,
"writeback_index": "elastalert_status"
}
config.yaml: |
rules_folder: rules
run_every:
minutes: 1
buffer_time:
minutes: 30
es_host: elasticsearch.elkb
es_port: 9200
max_query_size: 9000
max_scrolling_count: 1
#es_username: elastic
#es_password: elastic
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
alert_time_limit:
days: 2
kind: ConfigMap
metadata:
name: elastalert
namespace: elkb
labels:
k8s-app: elastalert
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elastalert
namespace: elkb
labels:
k8s-app: elastalert
spec:
serviceName: elastalert
replicas: 1
selector:
matchLabels:
k8s-app: elastalert
template:
metadata:
labels:
k8s-app: elastalert
spec:
containers:
- name: elastalert
image: elastalert:7.7.0
command: [ "/bin/sh", "-c", "sed -i 's|10000|60000|' src/common/websocket.js && npm start"]
volumeMounts:
- name: elastalert
mountPath: /opt/elastalert/config.yaml
subPath: config.yaml
- name: elastalert
mountPath: /opt/elastalert-server/config/config.json
subPath: config.json
- name: auth
mountPath: /opt/elastalert/config/
- name: rules
mountPath: /opt/elastalert/rules
- name: rule-templates
mountPath: /opt/elastalert/rule_templates
- name: localtime
mountPath: /etc/localtime
ports:
- containerPort: 3030
name: serverport
protocol: TCP
- containerPort: 3333
name: transport
protocol: TCP
resources:
limits:
cpu: 50m
memory: 1Gi
requests:
cpu: 50m
memory: 256Mi
volumes:
- name: auth
secret:
secretName: smtp-auth
- name: elastalert
configMap:
name: elastalert
- name: localtime
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
volumeClaimTemplates:
- metadata:
name: rules
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 300Mi
volumeMode: Filesystem
- metadata:
name: rule-templates
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 50Mi
volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
name: elastalert
namespace: elkb
spec:
clusterIP: None
ports:
- name: serverport
port: 3030
protocol: TCP
targetPort: 3030
- name: transport
port: 3333
protocol: TCP
targetPort: 3333
selector:
k8s-app: elastalert
kibana deployment
# 1 Deploy kibana.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kibana-config
namespace: elkb
labels:
k8s-app: kibana
data:
kibana.yml: |
server.host: '0.0.0.0'
i18n.locale: 'zh-CN'
xpack.reporting.capture.browser.chromium.disableSandbox: true
xpack.reporting.capture.browser.chromium.proxy.enabled: false
xpack.reporting.encryptionKey: mima
xpack.monitoring.ui.container.elasticsearch.enabled: false
alert.serverHost: elastalert
alert.serverPort: 3030
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elkb
labels:
k8s-app: kibana
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana
template:
metadata:
labels:
k8s-app: kibana
spec:
containers:
- name: kibana
        image: kibana:7.7.0
imagePullPolicy: Always
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
env:
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
volumeMounts:
- name: kibana-config
mountPath: /usr/share/kibana/config/kibana.yml
subPath: kibana.yml
volumes:
- name: kibana-config
configMap:
name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elkb
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30561
selector:
k8s-app: kibana
ELKB (with authentication) Kubernetes deployment
Elasticsearch cluster deployment
# Prepare the certificates
# 1 Start a throwaway Elasticsearch container and enter its working directory
# 2 Generate the certificates (setting a password is optional)
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
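By default certutil writes elastic-stack-ca.p12 and elastic-certificates.p12 (the default file names) into the directory it was run from; copy the node certificate out to the host before creating the secret in step 2 below. The container name es-tmp is hypothetical, and the path assumes the commands were run from the Elasticsearch home directory:

```shell
docker cp es-tmp:/usr/share/elasticsearch/elastic-certificates.p12 .
```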
# 1 Label three nodes
kubectl label node node1 es=data
kubectl label node node2 es=data
kubectl label node node3 es=data
# 2 Add the certificate by mounting it as a secret
kubectl create secret -n elkb generic elastic-certs --from-file=elastic-certificates.p12
# 3 Set the cluster username and password
kubectl create secret -n elkb generic elastic-auth --from-literal=username=elastic --from-literal=password=xx123456
# 4 Deploy elasticsearch.yaml; just apply it directly
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-config
namespace: elkb
labels:
k8s-app: elasticsearch
data:
elasticsearch.yml: |
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.keystore.password: 654321 # the certificate password; omit if you did not set one when generating
    xpack.security.transport.ssl.truststore.password: 654321
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: elkb
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
k8s-app: elasticsearch
template:
metadata:
labels:
k8s-app: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.7.0
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 0.5
memory: 500Mi
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: localtime
mountPath: /etc/localtime
- name: elasticsearch-certs
mountPath: /usr/share/elasticsearch/config/certs
readOnly: true
- name: esconfig
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
env:
- name: cluster.name
value: elasticsearch-cluster
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.zen.minimum_master_nodes
value: "2"
- name: discovery.seed_hosts
value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
- name: ELASTIC_USERNAME
valueFrom:
secretKeyRef:
name: elastic-auth
key: username
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-auth
key: password
volumes:
- name: elasticsearch-data
hostPath:
path: /data/es/
- name: localtime
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
- name: elasticsearch-certs
secret:
secretName: elastic-certs
- name: esconfig
configMap:
name: elasticsearch-config
nodeSelector:
es: data
initContainers:
- name: elasticsearch-logging-init
image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: elasticsearch-volume-init
image: alpine:3.6
command:
- chmod
- -R
- "777"
- /usr/share/elasticsearch/data/
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data/
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- elasticsearch
topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: elkb
spec:
clusterIP: None
ports:
- name: db
port: 9200
protocol: TCP
targetPort: 9200
- name: transport
port: 9300
protocol: TCP
targetPort: 9300
selector:
k8s-app: elasticsearch
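With X-Pack security enabled, verification needs the credentials from the elastic-auth secret:

```shell
kubectl -n elkb exec elasticsearch-0 -- curl -s -u elastic:xx123456 localhost:9200/_cat/nodes?v
```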
filebeat deployment
# 1 Label the nodes where filebeat should run to collect logs
kubectl label node node1 log=filebeat
kubectl label node node2 log=filebeat
kubectl label node node3 log=filebeat
# 2 Deploy filebeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elkb
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
enabled: true
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
output.elasticsearch:
hosts: ["elasticsearch:9200"]
username: "elastic"
password: "xx123456"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elkb
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
nodeSelector:
log: filebeat
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: filebeat:7.7.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: timezone
mountPath: /etc/localtime
volumes:
- name: config
configMap:
defaultMode: 0640
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
# When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
path: /var/lib/filebeat-data
type: DirectoryOrCreate
- name: timezone
hostPath:
path: /etc/localtime
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: filebeat
namespace: elkb
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: Role
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: filebeat-kubeadm-config
namespace: elkb
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elkb
roleRef:
kind: Role
name: filebeat-kubeadm-config
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
- nodes
verbs:
- get
- watch
- list
- apiGroups: ["apps"]
resources:
- replicasets
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: filebeat
# should be the namespace where filebeat is running
namespace: elkb
labels:
k8s-app: filebeat
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: filebeat-kubeadm-config
namespace: elkb
labels:
k8s-app: filebeat
rules:
- apiGroups: [""]
resources:
- configmaps
resourceNames:
- kubeadm-config
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elkb
labels:
k8s-app: filebeat
elastalert deployment
# 1 Create the secret
# 1.1 Prepare the smtp_auth.yaml file
user: "12345678900@163.com"
password: "*********"
# 1.2 Create the secret from smtp_auth.yaml
kubectl create secret generic smtp-auth --from-file=smtp_auth.yaml -n elkb
# 2 Deploy elastalert.yaml
# 2.1 Note: if startup fails with a "client of null" error, enlarge the timeout by adding: command: [ "/bin/sh", "-c", "sed -i 's|10000|160000|' src/common/websocket.js && npm start"]
# 2.2 Note: rule files must be formatted correctly; otherwise elastalert keeps erroring and restarting, and Kibana returns a 502
apiVersion: v1
data:
config.json: |
{
"appName": "elastalert-server",
"port": 3030,
"wsport": 3333,
"elastalertPath": "/opt/elastalert",
"verbose": true,
"es_debug": false,
"debug": false,
"rulesPath": {
"relative": true,
"path": "/rules"
},
"templatesPath": {
"relative": true,
"path": "/rule_templates"
},
"es_host": "elasticsearch.elkb",
"es_port": 9200,
"es_username": "elastic",
"es_password": "xx123456",
"writeback_index": "elastalert_status"
}
config.yaml: |
rules_folder: rules
run_every:
minutes: 1
buffer_time:
minutes: 30
es_host: elasticsearch.elkb
es_port: 9200
max_query_size: 9000
max_scrolling_count: 1
es_username: elastic
es_password: xx123456
writeback_index: elastalert_status
writeback_alias: elastalert_alerts
alert_time_limit:
days: 2
kind: ConfigMap
metadata:
name: elastalert
namespace: elkb
labels:
k8s-app: elastalert
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elastalert
namespace: elkb
labels:
k8s-app: elastalert
spec:
serviceName: elastalert
replicas: 1
selector:
matchLabels:
k8s-app: elastalert
template:
metadata:
labels:
k8s-app: elastalert
spec:
containers:
- name: elastalert
image: elastalert:7.7.0
command: [ "/bin/sh", "-c", "sed -i 's|10000|60000|' src/common/websocket.js && npm start"]
volumeMounts:
- name: elastalert
mountPath: /opt/elastalert/config.yaml
subPath: config.yaml
- name: elastalert
mountPath: /opt/elastalert-server/config/config.json
subPath: config.json
- name: auth
mountPath: /opt/elastalert/config/
- name: rules
mountPath: /opt/elastalert/rules
- name: rule-templates
mountPath: /opt/elastalert/rule_templates
- name: localtime
mountPath: /etc/localtime
ports:
- containerPort: 3030
name: serverport
protocol: TCP
- containerPort: 3333
name: transport
protocol: TCP
resources:
limits:
cpu: 50m
memory: 1Gi
requests:
cpu: 50m
memory: 256Mi
volumes:
- name: auth
secret:
secretName: smtp-auth
- name: elastalert
configMap:
name: elastalert
- name: localtime
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
volumeClaimTemplates:
- metadata:
name: rules
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 300Mi
volumeMode: Filesystem
- metadata:
name: rule-templates
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 50Mi
volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
name: elastalert
namespace: elkb
spec:
clusterIP: None
ports:
- name: serverport
port: 3030
protocol: TCP
targetPort: 3030
- name: transport
port: 3333
protocol: TCP
targetPort: 3333
selector:
k8s-app: elastalert
kibana deployment
# 1 kibana.yaml; just apply it directly
apiVersion: v1
kind: ConfigMap
metadata:
name: kibana-config
namespace: elkb
labels:
k8s-app: kibana
data:
kibana.yml: |
server.host: '0.0.0.0'
i18n.locale: 'zh-CN'
xpack.reporting.capture.browser.chromium.disableSandbox: true
xpack.reporting.capture.browser.chromium.proxy.enabled: false
xpack.reporting.encryptionKey: mima
xpack.monitoring.ui.container.elasticsearch.enabled: false
alert.serverHost: elastalert
alert.serverPort: 3030
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elkb
labels:
k8s-app: kibana
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana
template:
metadata:
labels:
k8s-app: kibana
spec:
containers:
- name: kibana
        image: kibana:7.7.0
imagePullPolicy: Always
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
env:
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
- name: ELASTICSEARCH_USERNAME
value: "elastic"
- name: ELASTICSEARCH_PASSWORD
value: "xx123456"
ports:
- containerPort: 5601
name: ui
protocol: TCP
volumeMounts:
- name: kibana-config
mountPath: /usr/share/kibana/config/kibana.yml
subPath: kibana.yml
volumes:
- name: kibana-config
configMap:
name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elkb
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
nodePort: 30561
selector:
k8s-app: kibana