Deploying zookeeper-exporter for Prometheus Monitoring


1. Introduction

Starting with version 3.5.5, ZooKeeper has native support for exposing a metrics endpoint that Prometheus can scrape.

For configuration details, see:

zookeeper.apache.org/doc/r3.8.0/…
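
For reference, enabling the built-in metrics provider takes only a few lines in zoo.cfg; a minimal sketch (the port is configurable, 7000 is the documented default):

# zoo.cfg: enable the built-in Prometheus metrics provider
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
# port the HTTP /metrics endpoint listens on
metricsProvider.httpPort=7000
# also export JVM metrics
metricsProvider.exportJvmInfo=true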

Versions earlier than 3.5.5 do not support the configuration above, so zookeeper-exporter must be used for collection instead.

This article describes how to install zookeeper-exporter.

zookeeper-exporter resources:

Open-source project: github.com/dabealu/zoo…

Docker image: hub.docker.com/r/dabealu/z…

A Helm chart named "prometheus-zookeeper-exporter" is already in the repository and can be used directly. To install with Helm, just adjust values.yaml as needed and run helm install, as sketched below.
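
For example (a sketch; internal-charts is a hypothetical alias standing in for whatever repository the chart is published under):

# internal-charts is a placeholder; substitute your chart repository alias
helm install zookeeper-exporter internal-charts/prometheus-zookeeper-exporter \
  -n monitoring -f values.yaml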

This article only covers the manual deployment steps.

2. Create the Deployment

Skip this step if you are installing with Helm.

Save the following YAML as deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-exporter
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper-exporter
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zookeeper-exporter
    spec:
      containers:
      - args:
        - --zk-hosts
        - zk-0.zk-hs.uat-middleware:2181,zk-1.zk-hs.uat-middleware:2181 # change to your ZooKeeper instance addresses, comma-separated
        - --listen
        - 0.0.0.0:9141
        - --location
        - /metrics
        - --timeout
        - "30"
        env:
        - name: TZ
          value: Asia/Shanghai
        image: dabealu/zookeeper-exporter:v0.1.13
        imagePullPolicy: IfNotPresent
        name: zookeeper-exporter
        ports:
        - containerPort: 9141
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

Apply the Deployment:

kubectl -n monitoring apply -f deployment.yaml
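
A quick sanity check that the Pod is running and the exporter started cleanly (using the app label from the manifest above):

kubectl -n monitoring get pods -l app=zookeeper-exporter
kubectl -n monitoring logs deployment/zookeeper-exporter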

3. Create the Service

Create a Service to expose the port for Prometheus to scrape.

Save the following as service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-exporter
  name: zookeeper-exporter
spec:
  ports:
  - name: zookeeper-exporter
    port: 9141
    protocol: TCP
    targetPort: 9141
  selector:
    app: zookeeper-exporter
  sessionAffinity: None
  type: ClusterIP

Apply the Service:

kubectl -n monitoring apply -f service.yaml
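
Before wiring Prometheus in, you can hit the endpoint manually; a sketch using a port-forward (assuming local port 9141 is free):

kubectl -n monitoring port-forward svc/zookeeper-exporter 9141:9141 &
curl -s http://localhost:9141/metrics | head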

4. Create the ServiceMonitor

Create a ServiceMonitor so that the Prometheus Operator adds the scrape job automatically.

Save the following as servicemonitor.yaml:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: zookeeper-exporter
  name: zookeeper-exporter
  namespace: monitoring
spec:
  endpoints:
  - honorLabels: true
    interval: 20s
    path: /metrics
    port: zookeeper-exporter
    scheme: http
    scrapeTimeout: 20s
  jobLabel: zookeeper-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  sampleLimit: 0
  selector:
    matchLabels:
      app: zookeeper-exporter

Apply the ServiceMonitor:

kubectl -n monitoring apply -f servicemonitor.yaml

5. Verify the Deployment

Confirm that the resources are up:

# kubectl -n monitoring get svc,deployment,servicemonitor zookeeper-exporter
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/zookeeper-exporter   ClusterIP   10.247.54.209   <none>        9141/TCP   176m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/zookeeper-exporter   1/1     1            1           176m

NAME                                                      AGE
servicemonitor.monitoring.coreos.com/zookeeper-exporter   6m46s

Once everything is configured, the Prometheus Targets page should show the metrics being scraped successfully.

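As an extra check, you can query the exporter's per-instance availability gauge; a sketch assuming your exporter version exposes zk_up (derived from ZooKeeper's mntr output; if the name differs, inspect /metrics) and that you substitute your own Prometheus address:

# <prometheus-host>:9090 is a placeholder; substitute your Prometheus address
curl -s 'http://<prometheus-host>:9090/api/v1/query?query=zk_up'
# expect one result per ZooKeeper instance, with value "1" when healthy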

6. Configure the Grafana Dashboard

A dashboard built by the zookeeper-exporter developer is available here:

grafana.com/grafana/das…
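
In Grafana, it can be imported via the Dashboards → Import page: paste the dashboard URL or ID from the link above, then select your Prometheus data source.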