Log System Design and Practice (1): Building an EFK Logging Stack on Kubernetes (the ECK Way)


Preface

With part of our production services now running on Kubernetes, my recent work has revolved around the surrounding infrastructure. The first gap that came to mind is observability, which covers both logging and monitoring; of the two, I consider the logging system the more urgent.

Logging can be a small matter or a big one; it all depends on the scale of the application, and both the importance and the complexity of logs climb sharply as that scale grows. A logging system involves quite a lot: on one hand the logging infrastructure itself, and more importantly the logging conventions and the overall system built around them. Besides setting up a logging platform on Kubernetes, I want to take this opportunity to sort out everything log-related and establish a reasonably complete logging system.

Among open-source logging solutions the first choice is ELK. Before containerization we used ELK + Filebeat, collecting everything from files, so the idea this time is to migrate that logging stack properly into the Kubernetes environment. An EFK (Elasticsearch, Fluentd, Kibana) setup also happens to be the approach recommended by the Kubernetes documentation.

The plan includes:

  • Use a collection agent such as Filebeat or Fluentd to collect data from containers in a unified way. (Fluentd replaces Logstash because of its performance, its seamless integration with Kubernetes, and its status as a CNCF project.)
  • Feed the collected data into Elasticsearch for real-time query and retrieval.
  • Visualize the data with common components such as Grafana or Kibana.

Given our current logging requirements and data volume, the above is sufficient; customized data cleansing and real-time or offline analytics are left as future extensions.

Characteristics of Logging on Kubernetes

In Kubernetes, log collection is considerably more complex than on traditional VMs or physical machines. The root cause is that Kubernetes hides the failures of the underlying layer, provides fine-grained resource scheduling, and exposes a stable yet dynamic environment upward. Log collection therefore faces a richer and more dynamic environment, with many more points to consider.

  1. Log forms become more diverse: besides logs on physical machines/VMs, there are container stdout/stderr, files inside containers, container events, Kubernetes events, and more to collect.
  2. The environment is far more dynamic: node crashes, machines going offline or coming online, Pod deletion, and scale-out/scale-in are routine in Kubernetes. Logs are therefore transient (for example, once a Pod is deleted its logs are no longer visible), so log data must be shipped to the server side in real time, and the collection pipeline must cope with this highly dynamic environment.
  3. There are many more kinds of logs: a single request travels from the client through CDN, Ingress, Service Mesh, Pod, and other components, touching multiple layers of infrastructure, so the number of log categories grows a lot, e.g. the various Kubernetes system component logs, audit logs, Service Mesh logs, Ingress logs, and so on.

Collection Approaches

Collection approaches fall into active and passive. Active pushing includes direct writes from the application and pushing via the DockerEngine: the former embeds a logging SDK into the application and is tightly coupled to it, while the latter depends too heavily on the container runtime and is not flexible enough, so neither is considered here.

The passive approach relies on a log agent running on each node that collects node logs by polling (Alibaba's Logtail reportedly uses event notification instead). In a Kubernetes environment there are again two options:

  • DaemonSet mode runs a single log agent on each node and collects all logs on that node. It uses far fewer resources, but its extensibility and tenant isolation are limited, so it suits clusters with a single purpose or a small number of workloads;
  • Sidecar mode deploys a dedicated log agent per Pod, responsible only for that one application's logs. It consumes more resources, but offers better flexibility and multi-tenant isolation; it is recommended for large Kubernetes clusters or clusters that serve multiple business parties as a PaaS platform (a minimal sketch of this mode follows).
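To make the Sidecar mode concrete, here is a minimal sketch, assuming an application that writes log files to a directory shared through an emptyDir volume; the Pod name, application image, and fluentd image tag are all placeholders, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar     # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest             # placeholder application image
    volumeMounts:
    - name: app-logs
      mountPath: /app/logs           # the application writes its log files here
  - name: fluentd-sidecar
    image: fluent/fluentd:v1.12      # illustrative tag; needs a suitable tail/output config
    volumeMounts:
    - name: app-logs
      mountPath: /app/logs
      readOnly: true                 # the sidecar only reads the shared log directory
  volumes:
  - name: app-logs
    emptyDir: {}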

Here is a comparison table found online:

| | DockerEngine | App direct write | DaemonSet | Sidecar |
| --- | --- | --- | --- | --- |
| Log types collected | stdout | application logs | stdout + some files | files |
| Deployment & operations | Low, natively supported | Low, only a configuration file to maintain | Moderate, a DaemonSet to maintain | Fairly high, every Pod that needs collection must run a sidecar container |
| Per-category log storage | Not possible | Configured independently by each application | Moderate, can map by container/path etc. | Each Pod configured separately, very flexible |
| Multi-tenant isolation | Weak | Weak, direct writes compete with business logic for resources | Moderate, isolation only through configuration | Strong, isolated by container, resources can be allocated separately |
| Supported cluster scale | Unlimited with local storage; syslog/fluentd introduce a single point | Unlimited | Depends on the number of configurations | Unlimited |
| Resource usage | Low, provided by the docker engine | Lowest overall, no collection overhead | Fairly low, one container per node | Fairly high, one container per Pod |
| Query convenience | Low, can only grep raw logs | High, customizable per business | Fairly high, custom queries and statistics possible | High, customizable per business |
| Customizability | Low | High, freely extensible | Low | High, configured per Pod |
| Coupling | High, tightly bound to the DockerEngine; changes require restarting it | High, changing/upgrading the collector requires redeploying the application | Low, the agent can be upgraded independently | Moderate, by default upgrading the agent also restarts the sidecar's application (some extensions support hot-upgrading sidecars) |
| Applicable scenarios | Testing, PoC, and other non-production scenarios | Scenarios with extreme performance requirements | Clusters with clear log categories and a single purpose | Large, mixed, or PaaS-style clusters |

After this comparison, DaemonSet mode is the best fit for our current situation.

Log Output Methods

Unlike on VMs/physical machines, containers in Kubernetes offer two output methods: standard output and files. With standard output, the application writes directly to stdout or stderr; the DockerEngine takes over these file descriptors and processes the received logs according to the LogDriver configured for it. Writing logs to files works much like on a VM or physical machine, except that the logs can use different kinds of storage, e.g. the default storage, EmptyDir, HostVolume, NFS, and so on.

Note that the stdout approach still ends up writing files. With Docker's JSON LogDriver, for example, the path is: application stdout -> DockerEngine -> LogDriver -> serialized to JSON -> saved to a file, and finally collected by the log agent.
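For reference, with the default json-file LogDriver each stdout/stderr line becomes one JSON record in a file under /var/lib/docker/containers/<container-id>/; the content below is purely illustrative:

{"log":"2021-06-04 10:00:00 INFO starting server\n","stream":"stdout","time":"2021-06-04T02:00:00.123456789Z"}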

By comparison:

  1. The file approach performs somewhat better, because the stdout path goes through several intermediate steps.
  2. With files, different logs can go into different files, which effectively classifies them for collection and analysis; with stdout everything ends up in a single stream.
  3. File handling offers more policies, e.g. synchronous/asynchronous writes, buffer sizes, rotation, compression, and cleanup, so it is more flexible.

So while building the base platform I will start with the stdout approach; the file approach depends more on the concrete rules of the logging system.

How the EFK Stack Works

EFK uses a Fluentd instance deployed on every node to collect the logs under the node's /var/log and /var/lib/docker/containers directories and ship them to Elasticsearch. Users then query the logs through Kibana.

The process in detail (a stripped-down Fluentd config sketch follows the list):

  1. Create Fluentd and mount the Kubernetes node's log directories into its container.
  2. Fluentd collects the log files under the containers directory of the node's log path.
  3. Fluentd converts the collected logs into JSON.
  4. Fluentd uses the exception detection plugin to check whether a log line belongs to an exception thrown inside the container, and if so merges the multi-line stack trace into one record.
  5. Fluentd merges multi-line JSON logs that were split across line breaks.
  6. Fluentd uses the Kubernetes Metadata Plugin to enrich and filter records with Kubernetes metadata such as Namespace and Pod name.
  7. Fluentd uses the Elasticsearch plugin to output the processed JSON logs to Elasticsearch.
  8. Elasticsearch builds the corresponding indices and persists the log data.
  9. Kibana searches the Kubernetes log data in Elasticsearch and presents it.
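Roughly, steps 2, 6, and 7 map to a tail source, the kubernetes_metadata filter, and the elasticsearch output. A stripped-down sketch, assuming the ECK quickstart cluster deployed later in this article (the host, credentials, and paths are assumptions; the full ConfigMap below is what is actually used):

<source>
  @type tail                               # step 2: tail container log files on the node
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json                             # step 3: parse each line as JSON
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata                # step 6: enrich records with namespace, pod name, labels
</filter>

<match kubernetes.**>
  @type elasticsearch                      # step 7: ship to Elasticsearch
  host quickstart-es-http
  port 9200
  scheme https
  ssl_verify false
  user elastic
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  logstash_format true                     # step 8: daily logstash-* indices
</match>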

Installing and Configuring EFK

ECK

The official description of ECK (Elastic Cloud on Kubernetes) reads:

Elastic Cloud on Kubernetes simplifies the work of running Elasticsearch and Kibana on Kubernetes, including setup, upgrades, snapshots, scaling, high availability, security, and more.

In short, it is a new, Kubernetes-native, and convenient way provided by Elastic to deploy Elasticsearch. The architecture of an ECK installation looks like this:

(figure: architecture of an ECK installation)

Using Local PV and local-path-provisioner

Before installing Elasticsearch we need to talk about storage. Elasticsearch stores the log data, so it is not a stateless application and needs persistent storage. We have neither cloud storage nor NFS; fortunately Kubernetes offers the Local PV concept to provide persistent storage backed by local disks. It still has some limitations, though:

  1. Local PV currently has no capacity management for space requests; disk space has to be configured and managed manually.
  2. The default provisioner of the Local PV StorageClass is kubernetes.io/no-provisioner, because Local PV does not support Dynamic Provisioning and therefore cannot automatically create a matching PV when a PVC is created.

I won't repeat the concepts of PV, PVC, and StorageClass here. What the above means is that Kubernetes implements dynamic storage allocation through a StorageClass, and the Local PV StorageClass has no real provisioner, so PVs cannot be provisioned dynamically: they must be created in advance and bound to PVCs before use, which is clearly tedious. For learning purposes, creating the PV and PVC by hand is perfectly workable (a minimal sketch follows). To fill this gap, the community and some vendors provide provisioner packages for Local PV; here I use Rancher's open-source local-path-provisioner as an example.
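For the manual route, a minimal sketch, assuming a node named node01 and an existing directory /data/es on it (both are placeholders, not from my environment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-local-pv                  # placeholder name
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/es                   # must already exist on the node
  nodeAffinity:                      # a Local PV must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node01"]
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # Local PV has no dynamic provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-local-pvc                 # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi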

  • Install local-path-provisioner

    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    

    If network issues keep you from downloading it, the following manifest can be used instead:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: local-path-storage
    
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: local-path-provisioner-service-account
      namespace: local-path-storage
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: local-path-provisioner-role
    rules:
      - apiGroups: [ "" ]
        resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
        verbs: [ "get", "list", "watch" ]
      - apiGroups: [ "" ]
        resources: [ "endpoints", "persistentvolumes", "pods" ]
        verbs: [ "*" ]
      - apiGroups: [ "" ]
        resources: [ "events" ]
        verbs: [ "create", "patch" ]
      - apiGroups: [ "storage.k8s.io" ]
        resources: [ "storageclasses" ]
        verbs: [ "get", "list", "watch" ]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: local-path-provisioner-bind
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: local-path-provisioner-role
    subjects:
      - kind: ServiceAccount
        name: local-path-provisioner-service-account
        namespace: local-path-storage
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: local-path-provisioner
      namespace: local-path-storage
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: local-path-provisioner
      template:
        metadata:
          labels:
            app: local-path-provisioner
        spec:
          serviceAccountName: local-path-provisioner-service-account
          containers:
            - name: local-path-provisioner
              image: rancher/local-path-provisioner:v0.0.19
              imagePullPolicy: IfNotPresent
              command:
                - local-path-provisioner
                - --debug
                - start
                - --config
                - /etc/config/config.json
              volumeMounts:
                - name: config-volume
                  mountPath: /etc/config/
              env:
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
          volumes:
            - name: config-volume
              configMap:
                name: local-path-config
    
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
    provisioner: rancher.io/local-path
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: local-path-config
      namespace: local-path-storage
    data:
      config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/home/k8s"]
                },
                 {
                         "node":"master02",
                         "paths":["/opt/local-path-provisioner", "/app/k8s"]
                 },
                 {
                          "node":"node05",
                          "paths":["/opt/local-path-provisioner", "/app/k8s"]
                 }
                ]
        }
      setup: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done
    
        mkdir -m 0777 -p ${absolutePath}
      teardown: |-
        #!/bin/sh
        while getopts "m:s:p:" opt
        do
            case $opt in
                p)
                absolutePath=$OPTARG
                ;;
                s)
                sizeInBytes=$OPTARG
                ;;
                m)
                volMode=$OPTARG
                ;;
            esac
        done
    
        rm -rf ${absolutePath}
      helperPod.yaml: |-
        apiVersion: v1
        kind: Pod
        metadata:
          name: helper-pod
        spec:
          containers:
          - name: helper-pod
            image: busybox
            imagePullPolicy: IfNotPresent
    
    
    
  • Configure Local Path Provisioner

    Local Path Provisioner supports a number of options (see the official documentation for details); I will mention just one:

    My servers mount their disks at different directories, so when Elasticsearch runs on different nodes I want the data to live in different places. By default Local Path Provisioner puts data under /opt/local-path-provisioner; this can be changed with the configuration below, where the DEFAULT_PATH_FOR_NON_LISTED_NODES entry sets the default data directory for nodes not listed explicitly (a quick way to verify provisioning is sketched after the config).

    config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/home/k8s"]
                },
                 {
                         "node":"master02",
                         "paths":["/opt/local-path-provisioner", "/app/k8s"]
                 },
                 {
                          "node":"node05",
                          "paths":["/opt/local-path-provisioner", "/app/k8s"]
                 }
                ]
        }
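    To confirm that dynamic provisioning works, a throwaway PVC bound to the local-path StorageClass can be created (test-pvc is a placeholder name, only for verification). Because of volumeBindingMode: WaitForFirstConsumer it stays Pending until a Pod actually uses it:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc                  # placeholder, only for verification
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-path    # the StorageClass created by local-path-provisioner
      resources:
        requests:
          storage: 1Gi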
    

Installing ECK

  • Install the Operator

    ## install
    kubectl apply -f https://download.elastic.co/downloads/eck/1.5.0/all-in-one.yaml
    ## uninstall
    kubectl delete -f https://download.elastic.co/downloads/eck/1.5.0/all-in-one.yaml
    

    After a successful installation, an elastic-system namespace and an operator Pod are created automatically:

    ❯ kubectl get all -n elastic-system
    NAME                     READY   STATUS    RESTARTS   AGE
    pod/elastic-operator-0   1/1     Running   0          53s
    
    NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
    service/elastic-webhook-server   ClusterIP   10.0.73.219   <none>        443/TCP   55s
    
    NAME                                READY   AGE
    statefulset.apps/elastic-operator   1/1     57s
    
  • Deploy ECK

    Here is an ECK resource file; for the details of ECK configuration refer to the documentation.

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: quickstart
      namespace: elastic-system
    spec:
      version: 7.12.1
      nodeSets:
      - name: default
        count: 3
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: local-path
        config:
            node.master: true
            node.data: true
            node.ingest: true
            node.store.allow_mmap: false
    ---
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: quickstart
      namespace: elastic-system
    spec:
      version: 7.12.1
      count: 1
      elasticsearchRef:
        name: quickstart
      config:
         i18n.locale: "zh-CN"
    
    

    Here storageClassName: local-path is the name of the StorageClass provided by Local Path Provisioner. Kibana defaults to English; i18n.locale: "zh-CN" switches it to Chinese.

    kubectl apply -f eck.yaml
    

    Apply the resource file. Once deployment finishes, you can see that Elasticsearch and Kibana are running in the elastic-system namespace:

    ❯ kubectl get all -n elastic-system
    NAME                             READY   STATUS    RESTARTS   AGE
    pod/elastic-es-default-0         1/1     Running   0          10d
    pod/elastic-es-default-1         1/1     Running   0          10d
    pod/elastic-es-default-2         1/1     Running   0          10d
    pod/elastic-operator-0           1/1     Running   1          10d
    pod/kibana-kb-5bcd9f45dc-hzc9s   1/1     Running   0          10d
    
    NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/elastic-es-default       ClusterIP   None           <none>        9200/TCP   10d
    service/elastic-es-http          ClusterIP   172.23.4.246   <none>        9200/TCP   10d
    service/elastic-es-transport     ClusterIP   None           <none>        9300/TCP   10d
    service/elastic-webhook-server   ClusterIP   172.23.8.16    <none>        443/TCP    10d
    service/kibana-kb-http           ClusterIP   172.23.7.101   <none>        5601/TCP   10d
    
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kibana-kb   1/1     1            1           10d
    
    NAME                                   DESIRED   CURRENT   READY   AGE
    replicaset.apps/kibana-kb-5bcd9f45dc   1         1         1       10d
    
    NAME                                  READY   AGE
    statefulset.apps/elastic-es-default   3/3     10d
    statefulset.apps/elastic-operator     1/1     10d
    

Deploying Fluentd

The Fluentd project maintains the fluentd-kubernetes-daemonset repository on GitHub, which we can use as a reference.

# fluentd-es-ds.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: elastic-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
spec:
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      serviceAccount: fluentd-es
      serviceAccountName: fluentd-es
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule      
      containers:
      - name: fluentd-es
        image: fluent/fluentd-kubernetes-daemonset:v1.11.5-debian-elasticsearch7-1.1
        env:
        - name:  FLUENT_ELASTICSEARCH_HOST
          value: quickstart-es-http
        # default user
        - name:  FLUENT_ELASTICSEARCH_USER
          value: elastic
        # is already present from the elasticsearch deployment
        - name:  FLUENT_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: quickstart-es-elastic-user
              key: elastic
        # elasticsearch standard port
        - name:  FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        # the elastic operator serves https by default
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        # don't need systemd logs for now
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        # the certificates are self-signed, so verification must be disabled
        - name:  FLUENT_ELASTICSEARCH_SSL_VERIFY
          value: "false"
        # to avoid issue https://github.com/uken/fluent-plugin-elasticsearch/issues/525
        - name:  FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
          value: "false"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config

The Fluentd configuration resource file is as follows:

# fluentd-es-configmap
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config
  namespace: elastic-system
data:
  fluent.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/fluent.conf

    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
    @include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
    @include kubernetes.conf
    @include conf.d/*.conf

    <match kubernetes.**>
      # https://github.com/kubernetes/kubernetes/issues/23001
      @type elasticsearch_dynamic
      @id  kubernetes_elasticsearch
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix logstash-${record['kubernetes']['namespace_name']}
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>

    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
  kubernetes.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/kubernetes.conf

    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
        @id ignore_fluent_logs
      </match>
    </label>

    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      chunk_limit_size 512m
      max_bytes 50000000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

    <source>
      @type tail
      @id in_tail_minion
      path /var/log/salt/minion
      pos_file /var/log/fluentd-salt.pos
      tag salt
      <parse>
        @type regexp
        expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
        time_format %Y-%m-%d %H:%M:%S
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_startupscript
      path /var/log/startupscript.log
      pos_file /var/log/fluentd-startupscript.log.pos
      tag startupscript
      <parse>
        @type syslog
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_docker
      path /var/log/docker.log
      pos_file /var/log/fluentd-docker.log.pos
      tag docker
      <parse>
        @type regexp
        expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_etcd
      path /var/log/etcd.log
      pos_file /var/log/fluentd-etcd.log.pos
      tag etcd
      <parse>
        @type none
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kubelet
      multiline_flush_interval 5s
      path /var/log/kubelet.log
      pos_file /var/log/fluentd-kubelet.log.pos
      tag kubelet
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_proxy
      multiline_flush_interval 5s
      path /var/log/kube-proxy.log
      pos_file /var/log/fluentd-kube-proxy.log.pos
      tag kube-proxy
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_apiserver
      multiline_flush_interval 5s
      path /var/log/kube-apiserver.log
      pos_file /var/log/fluentd-kube-apiserver.log.pos
      tag kube-apiserver
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_controller_manager
      multiline_flush_interval 5s
      path /var/log/kube-controller-manager.log
      pos_file /var/log/fluentd-kube-controller-manager.log.pos
      tag kube-controller-manager
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_scheduler
      multiline_flush_interval 5s
      path /var/log/kube-scheduler.log
      pos_file /var/log/fluentd-kube-scheduler.log.pos
      tag kube-scheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_rescheduler
      multiline_flush_interval 5s
      path /var/log/rescheduler.log
      pos_file /var/log/fluentd-rescheduler.log.pos
      tag rescheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_glbc
      multiline_flush_interval 5s
      path /var/log/glbc.log
      pos_file /var/log/fluentd-glbc.log.pos
      tag glbc
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_cluster_autoscaler
      multiline_flush_interval 5s
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/fluentd-cluster-autoscaler.log.pos
      tag cluster-autoscaler
      <parse>
        @type kubernetes
      </parse>
    </source>

    # Example:
    # 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    # 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
    <source>
      @type tail
      @id in_tail_kube_apiserver_audit
      multiline_flush_interval 5s
      path /var/log/kubernetes/kube-apiserver-audit.log
      pos_file /var/log/kube-apiserver-audit.log.pos
      tag kube-apiserver-audit
      <parse>
        @type multiline
        format_firstline /^\S+\s+AUDIT:/
        # Fields must be explicitly captured by name to be parsed into the record.
        # Fields may not always be present, and order may change, so this just looks
        # for a list of key="\"quoted\" value" pairs separated by spaces.
        # Unknown fields are ignored.
        # Note: We can't separate query/response lines as format1/format2 because
        #       they don't always come one after the other for a given query.
        format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
        time_format %Y-%m-%dT%T.%L%Z
      </parse>
    </source>

One issue worth mentioning: if Docker's data directory is the same on every node, skip this part; if it differs between nodes, read on.

In my case the disks are mounted at different directories, so the Docker directories differ between nodes too, and that causes a problem:

Fluentd collects every log under /var/log/containers; taking one container log as an example:

(screenshot: a log file under /var/log/containers, shown as a symlink that ultimately points into the Docker data directory)

You can see that the log file is ultimately a symlink to the log file under Docker's data directory, so if that directory is not /var/lib/docker, the configuration above must be adjusted, otherwise no logs will be collected.
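To find out where Docker actually keeps container data on a given node, you can, for example, query docker info or resolve the symlink chain of one collected file (the pod log name below is a placeholder):

# Docker's data root, /var/lib/docker by default
docker info --format '{{ .DockerRootDir }}'

# Resolve where a collected log file really lives
readlink -f /var/log/containers/<some-pod>.log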

- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers      ## set this to Docker's actual data directory
  readOnly: true

If Docker's directory differs across nodes, mount all of them (and add the matching hostPath volumes, as sketched after this snippet):

- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers
  readOnly: true
- name: varlibdockercontainers2
  mountPath: /app/docker/containers
  readOnly: true
- name: varlibdockercontainers3
  mountPath: /home/docker/containers
  readOnly: true
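Each extra mount needs a matching hostPath entry under the DaemonSet's volumes section; a sketch mirroring the mounts above:

- name: varlibdockercontainers2
  hostPath:
    path: /app/docker/containers
- name: varlibdockercontainers3
  hostPath:
    path: /home/docker/containers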

Adjust the other options as needed, then apply the resource files:

kubectl apply -f fluentd-es-configmap.yaml
kubectl apply -f fluentd-es-ds.yaml

Once deployment finishes, you can see that fluentd is running in the elastic-system namespace:

❯ kubectl get all -n elastic-system
NAME                             READY   STATUS    RESTARTS   AGE
pod/elastic-es-default-0         1/1     Running   0          10d
pod/elastic-es-default-1         1/1     Running   0          10d
pod/elastic-es-default-2         1/1     Running   0          10d
pod/elastic-operator-0           1/1     Running   1          10d
pod/fluentd-es-lrmqt             1/1     Running   0          4d6h
pod/fluentd-es-rd6xz             1/1     Running   0          4d6h
pod/fluentd-es-spq54             1/1     Running   0          4d6h
pod/fluentd-es-xc6pv             1/1     Running   0          4d6h
pod/kibana-kb-5bcd9f45dc-hzc9s   1/1     Running   0          10d

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/elastic-es-default       ClusterIP   None           <none>        9200/TCP   10d
service/elastic-es-http          ClusterIP   172.23.4.246   <none>        9200/TCP   10d
service/elastic-es-transport     ClusterIP   None           <none>        9300/TCP   10d
service/elastic-webhook-server   ClusterIP   172.23.8.16    <none>        443/TCP    10d
service/kibana-kb-http           ClusterIP   172.23.7.101   <none>        5601/TCP   10d

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluentd-es   4         4         4       4            4           <none>          4d6h

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana-kb   1/1     1            1           10d

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-kb-5bcd9f45dc   1         1         1       10d

NAME                                  READY   AGE
statefulset.apps/elastic-es-default   3/3     10d
statefulset.apps/elastic-operator     1/1     10d

Accessing Kibana

After Kibana is deployed, its service defaults to ClusterIP and is not reachable from outside; you can enable a hostPort or NodePort to access it. Using Kuboard, for example, I enabled a hostPort and could then reach Kibana via the node IP plus that port (alternatives are sketched below).
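Two alternatives, as sketches rather than what I actually used: ECK can expose Kibana through a NodePort service directly from the Kibana resource, or kubectl port-forward works for a quick look.

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
  namespace: elastic-system
spec:
  version: 7.12.1
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      spec:
        type: NodePort    # have ECK create a NodePort service instead of ClusterIP

For temporary access from a workstation, forwarding the existing service also works:

kubectl port-forward -n elastic-system service/kibana-kb-http 5601:5601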

The default Kibana username is elastic, and the password can be retrieved with the following command:

❯ kubectl get secret quickstart-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode; echo
02fY4QjAC0C9361i0ftBA4Zo

That concludes the deployment. As for day-to-day usage, I think it deserves a separate discussion together with the logging conventions, so this article ends here.