Deploying ELK on k8s with Username/Password Authentication


k8s cluster setup

The cluster here has one master node and three worker nodes. Use the following commands to set the hostnames and label each node, which makes placing the deployments easier:

# master node
hostnamectl set-hostname k8s-master

# worker nodes
hostnamectl set-hostname k8s-node-1
hostnamectl set-hostname k8s-node-2
hostnamectl set-hostname k8s-preprod

kubectl label node k8s-node-1 env=k8s
kubectl label node k8s-node-2 env=k8s
kubectl label node k8s-preprod env=preprod
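The nodeSelector fields in the manifests below rely on these labels, so it is worth confirming they took effect. A quick check (a sketch; assumes kubectl is already configured against this cluster):

```shell
# List nodes together with their env label; expect env=k8s on the two
# worker nodes and env=preprod on the preprod node.
kubectl get nodes -L env
```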

Deployment layout

elasticsearch, logstash, and kibana are deployed on the ops servers node1 and node2; filebeat is deployed on the preprod server to collect logs from the containers running there.

Deploying ELK

Deploying elasticsearch

elasticsearch.yaml:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kubernetes
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - name: http
    port: 9200
    targetPort: 9200
  selector:
    k8s-app: elasticsearch-logging

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kubernetes
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  namespace: kubernetes
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kubernetes
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kubernetes
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-elasticsearch
spec:
  serviceName: elasticsearch-logging
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
    spec:
      nodeSelector:
        env: k8s
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.io/library/elasticsearch:7.9.3
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
      volumes:
      - name: config
        configMap:
          name: elasticsearch-logging
      - name: elasticsearch-logging
        hostPath:
          path: /data/es/
      tolerations:
      - effect: NoSchedule
        operator: Exists
      initContainers:
      - name: elasticsearch-logging-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: elasticsearch-volume-init
        image: alpine:3.6
        command:
          - chmod
          - -R
          - "777"
          - /usr/share/elasticsearch/data/
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-logging
  namespace: kubernetes
  labels:
    k8s-app: elasticsearch-logging
data:
  elasticsearch.yml: |-
    network.host: 0.0.0.0
    xpack.security.enabled: true

All of these manifests target a namespace named kubernetes; if it does not exist yet, create it first (kubectl create namespace kubernetes). Then deploy elasticsearch from the master node:

kubectl create -f elasticsearch.yaml

After the deployment finishes, check that the pod has started:

kubectl get pods -n kubernetes

Once the pod is running, open a shell inside it:

kubectl exec -it elasticsearch-logging-0 -n kubernetes -- bash

Inside the pod, run the following command and set the initial passwords at the prompts (I used "123456" for all of them):

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

You only need to exec into the pod like this for the initial password setup. Afterwards you can change the elasticsearch passwords directly from the kibana UI, but after changing them there you still have to update the corresponding values in the logstash and kibana YAML and re-apply those manifests.
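To confirm that xpack security is actually enforcing the password, query the HTTP port from inside the cluster. This is a sketch: it assumes the password "123456" chosen above and that the curlimages/curl image can be pulled.

```shell
# Without -u this request returns a 401 security exception; with the
# elastic user's credentials it returns the cluster info JSON.
kubectl run es-auth-check --rm -i --restart=Never -n kubernetes \
  --image=curlimages/curl -- \
  curl -s -u elastic:123456 http://elasticsearch-logging:9200
```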

Deploying logstash

logstash.yaml:

apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: kubernetes
spec:
  ports:
  - port: 5044
    targetPort: beats
  selector:
    type: logstash

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      type: logstash
  template:
    metadata:
      labels:
        type: logstash
        srv: srv-logstash
    spec:
      nodeSelector:
        env: k8s
      containers:
      - image: docker.io/kubeimages/logstash:7.9.3
        name: logstash
        ports:
        - containerPort: 5044
          name: beats
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash.conf'
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        - name: config-yml-volume
          mountPath: /usr/share/logstash/config/
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: config-volume
        configMap:
          name: logstash-conf
          items:
          - key: logstash.conf
            path: logstash.conf
      - name: timezone
        hostPath:
          path: /etc/localtime
      - name: config-yml-volume
        configMap:
          name: logstash-yml
          items:
          - key: logstash.yml
            path: logstash.yml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-conf
  namespace: kubernetes
  labels:
    type: logstash
data:
  logstash.conf: |-
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-logging:9200"]
        user => "elastic"
        password => "123456"
        codec => json
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-yml
  namespace: kubernetes
  labels:
    type: logstash
data:
  logstash.yml: |-
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: "123456"
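One detail worth noting in logstash.conf above: the index => "logstash-%{+YYYY.MM.dd}" setting rolls over to a new index each day, with the date pattern applied to each event's @timestamp. The resulting names follow the same shape as this date format (shown with the shell's date utility purely for illustration):

```shell
# Produces a name like logstash-2024.01.31; logstash substitutes the
# event's @timestamp rather than the local clock.
date +logstash-%Y.%m.%d
```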

Deploy logstash from the master node:

kubectl create -f logstash.yaml
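Once the deployment is up, it helps to confirm that logstash parsed its pipeline and is listening on 5044 before moving on to filebeat. A sketch of the checks (assumes the namespace and names used above):

```shell
# The pod should be Running, and the recent logs should show the
# beats input starting its server on port 5044.
kubectl get pods -n kubernetes -l type=logstash
kubectl logs -n kubernetes deploy/logstash | tail -n 20
```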

Deploying filebeat

filebeat.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-preprod-config
  namespace: kubernetes
  labels:
    k8s-app: filebeat-preprod
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    output.logstash:
       hosts: ["logstash:5044"]
       enabled: true

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat-preprod
  namespace: kubernetes
  labels:
    k8s-app: filebeat-preprod

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat-preprod
  labels:
    k8s-app: filebeat-preprod
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat-preprod
subjects:
- kind: ServiceAccount
  name: filebeat-preprod
  namespace: kubernetes
roleRef:
  kind: ClusterRole
  name: filebeat-preprod
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-preprod
  namespace: kubernetes
  labels:
    k8s-app: filebeat-preprod
spec:
  selector:
    matchLabels:
      k8s-app: filebeat-preprod
  template:
    metadata:
      labels:
        k8s-app: filebeat-preprod
    spec:
      nodeSelector:
        env: preprod
      serviceAccountName: filebeat-preprod
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat-preprod
        image: docker.io/kubeimages/filebeat:7.9.3
        args: [
          "-c", "/etc/filebeat.yml",
          "-e","-httpprof","0.0.0.0:6060"
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/
          readOnly: true
        - name: varlog
          mountPath: /var/log/
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-preprod-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/
      - name: varlog
        hostPath:
          path: /var/log/
      - name: data
        hostPath:
          path: /data/filebeat-preprod-data
          type: DirectoryOrCreate
      - name: timezone
        hostPath:
          path: /etc/localtime
      tolerations:
      - effect: NoExecute
        key: dedicated
        operator: Equal
        value: gpu
      - effect: NoSchedule
        operator: Exists

Deploy filebeat from the master node:

kubectl create -f filebeat.yaml
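filebeat is a DaemonSet restricted by nodeSelector to env=preprod, so you should see exactly one pod per preprod node. A quick check (sketch, same namespace and labels as above):

```shell
# DESIRED/READY should match the number of nodes labeled env=preprod
# (one in this setup), and the pod should land on k8s-preprod.
kubectl get daemonset filebeat-preprod -n kubernetes
kubectl get pods -n kubernetes -l k8s-app=filebeat-preprod -o wide
```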

Deploying kibana

kibana.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kubernetes
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
    srv: srv-kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    nodePort: 30000
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kubernetes
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      nodeSelector:
        env: k8s
      containers:
      - name: kibana
        image: docker.io/kubeimages/kibana:7.9.3
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
      volumes:
      - name: kibana-config
        configMap:
          name: kibana

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: kubernetes
  labels:
    k8s-app: kibana
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0.0.0.0"
    elasticsearch.hosts: [ "http://elasticsearch-logging:9200" ]
    elasticsearch.username: "elastic"
    elasticsearch.password: "123456"
    monitoring.ui.container.elasticsearch.enabled: true

Deploy kibana from the master node:

kubectl create -f kibana.yaml

Find out which node the kibana pod is running on:

kubectl get pods -n kubernetes -o wide

Now you can access ELK through port 30000 on the node where kibana is running. Username: elastic, password: 123456.
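If everything is wired up, logs flow filebeat → logstash → elasticsearch, and a daily logstash-* index should appear shortly after the first container log lines are shipped. A sketch of a final end-to-end check from inside the cluster (assumes the elastic password set earlier and that curlimages/curl is pullable):

```shell
# Lists indices; expect an entry like logstash-2024.01.31 once the
# pipeline has delivered its first events.
kubectl run es-index-check --rm -i --restart=Never -n kubernetes \
  --image=curlimages/curl -- \
  curl -s -u elastic:123456 "http://elasticsearch-logging:9200/_cat/indices?v"
```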