14-Kubernetes-zookeeper-nfs Cluster Deployment



Background

Install a ZooKeeper cluster on Kubernetes.

1. Environment Preparation: Install the Kubernetes Cluster

Current nodes: base, master, node1, node2, node3

Kubernetes installation is not covered here; see the references below.

NFS installation for Kubernetes is likewise not covered here; see the references below.

References:

/kubernetes/1-Kubernetes基于Centos7构建基础环境(一)

/kubernetes/2-Kubernetes基于Centos7构建基础环境(二)

/kubernetes/3-Kubernetes基于Centos7构建基础环境(三)

4-Kubernetes-基于Centos7安装面板及监控(四)

/kubernetes/nfs/1-kubernetes-nfs动态存储部署

| Node | Domain | Description |
| --- | --- | --- |
| base | base.xincan.cn | Hosts Harbor, NFS, and other services |
| master | master.xincan.cn | Kubernetes master node; tainted to keep business workloads off; NFS client |
| node1 | node1.xincan.cn | Kubernetes worker node; NFS client |
| node2 | node2.xincan.cn | Kubernetes worker node; NFS client |
| node3 | node3.xincan.cn | Kubernetes worker node; NFS client |

2. Overall Workflow

  1. Base image: mirrorgcrio/kubernetes-zookeeper:1.0-3.4.10
  2. Pull the base image, then push it to the private Harbor registry
  3. On the Kubernetes master node, create a zookeeper directory to hold the Kubernetes manifests; the directory layout is as follows:
[root@master kafka]# tree
.
├── 0-kafka-namespace.yaml
├── 1-zookeeper-kafka-rbac.yaml
├── kafka
│   ├── 1-kafka-storageclass.yaml
│   ├── 2-kafka-nfs-provisioner.yaml
│   ├── 3-kafka-poddisruptionbudget.yaml
│   ├── 4-kafka-service.yaml
│   └── 5-kafka-statefulset.yaml
└── zookeeper
    ├── 1-zookeeper-storageclass.yaml
    ├── 2-zookeeper-nfs-provisioner.yaml
    ├── 3-zookeeper-configmap.yaml
    ├── 4-zookeeper-service.yaml
    ├── 5-zookeeper-poddisruptionbudget.yaml
    └── 6-zookeeper-statefulset.yaml
    
  4. The Kafka half (12-Kubernetes-kafka-nfs cluster deployment) is covered in the next chapter, not here
  5. Create the resources
  6. Show the results

3. Image Handling

  1. Pull the image
[root@master /]# docker pull mirrorgcrio/kubernetes-zookeeper:1.0-3.4.10
  2. Tag the image
[root@master /]# docker tag mirrorgcrio/kubernetes-zookeeper:1.0-3.4.10 base.xincan.cn/library/k8s-zookeeper:3.4.10
  3. Push it to Harbor
[root@master /]# docker push base.xincan.cn/library/k8s-zookeeper:3.4.10

4. Resource Creation

  1. Create the Kafka Namespace, named kafka
    • All subsequent resources live in this namespace
[root@master kafka]# cat 0-kafka-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/product: xincan
[root@master kafka]#
  2. Create the RBAC configuration: ServiceAccount, ClusterRole, ClusterRoleBinding, Role, RoleBinding
[root@master kafka]# cat 1-zookeeper-kafka-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: zk-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan

---


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: zk-nfs-client-provisioner-runner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get","watch","list", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

---


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-zk-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
subjects:
  - kind: ServiceAccount
    name: zk-nfs-client-provisioner
    namespace: kafka
roleRef:
  kind: ClusterRole
  name: zk-nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-zk-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-zk-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
subjects:
  - kind: ServiceAccount
    name: zk-nfs-client-provisioner
    namespace: kafka
roleRef:
  kind: Role
  name: leader-locking-zk-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@master kafka]#
  3. Create the ZooKeeper StorageClass, named zookeeper-nfs-storage
    • PVs and PVCs are later created dynamically through the provisioner Deployment
[root@master zookeeper]# cat 1-zookeeper-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-nfs-storage
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
provisioner: nfs.xincan.kafka/zookeeper # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
[root@master zookeeper]#
  4. Create the NFS provisioner Deployment, named zookeeper-nfs-client-provisioner
    • mountPath: /persistentvolumes — a fixed path inside the nfs-client-provisioner container; it is backed by the zookeeper-multiple directory exported by the NFS server (/hatech/nfs/data/xincan/zookeeper-multiple)
    • value: 192.168.1.80 — the address of the NFS server
    • path: /hatech/nfs/data/xincan/zookeeper-multiple — the export path created on the NFS server; this is where the data is ultimately stored
    • serviceAccountName: zk-nfs-client-provisioner — grants the provisioner the permissions to manage PVs, PVCs, and so on
    • PROVISIONER_NAME — its value must match the provisioner field of the StorageClass exactly
[root@master zookeeper]# cat 2-zookeeper-nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
  template:
    metadata:
      labels:
        xincan.kubernetes.io/company: xincan.cn
        xincan.kubernetes.io/version: 0.0.1
        xincan.kubernetes.io/type: plugins
        xincan.kubernetes.io/product: xincan
    spec:
      serviceAccountName: zk-nfs-client-provisioner
      containers:
        - name: zookeeper-nfs-client-provisioner
          image: base.xincan.cn/library/nfs-client-provisioner:v1.5.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.xincan.kafka/zookeeper
            - name: NFS_SERVER
              value: 192.168.1.80
            - name: NFS_PATH
              value: /hatech/nfs/data/xincan/zookeeper-multiple
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.80
            path: /hatech/nfs/data/xincan/zookeeper-multiple
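Dynamic provisioning only works when the StorageClass's provisioner field and this Deployment's PROVISIONER_NAME environment variable are identical. A minimal shell sanity check of the two values used above:

```shell
# Values copied from the manifests above; if they ever drift apart,
# new PVCs stay Pending because no provisioner claims them.
sc_provisioner="nfs.xincan.kafka/zookeeper"   # StorageClass .provisioner
env_provisioner="nfs.xincan.kafka/zookeeper"  # Deployment env PROVISIONER_NAME
if [ "$sc_provisioner" = "$env_provisioner" ]; then
  echo "provisioner names match"
else
  echo "MISMATCH: dynamic provisioning will not trigger"
fi
```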
  5. Create the ZooKeeper PodDisruptionBudget, named zookeeper-pdb
[root@master zookeeper]# cat 3-zookeeper-poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pdb
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
spec:
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
      xincan.kubernetes.io/app: zookeeper
  minAvailable: 2
[root@master zookeeper]#
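With 3 ZooKeeper replicas and minAvailable: 2 here, the budget permits only one pod to be down at a time during voluntary disruptions (node drains, rolling maintenance), which preserves the quorum. The arithmetic, as a quick sketch:

```shell
replicas=3       # StatefulSet replicas
min_available=2  # PodDisruptionBudget minAvailable
# Voluntary disruptions allowed simultaneously
allowed=$((replicas - min_available))
echo "voluntary disruptions allowed at once: $allowed"
```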
  6. Create the externally accessible ZooKeeper Services, named zk-hs (headless) and zk-cs (client)
[root@master zookeeper]# cat 4-zookeeper-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
spec:
  selector:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
  clusterIP: None
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election

---


apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
spec:
  ports:
  - port: 2181
    name: client
  selector:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
[root@master zookeeper]#
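The headless service zk-hs (clusterIP: None) gives each StatefulSet pod a stable DNS name of the form pod.service.namespace.svc.cluster.local, which is what the ZooKeeper quorum configuration relies on. A sketch of the names the three replicas get (assuming the default cluster.local domain):

```shell
# Stable per-pod DNS names under the zk-hs headless service
names=""
for i in 0 1 2; do
  name="zk-$i.zk-hs.kafka.svc.cluster.local"
  names="$names$name "
  echo "$name"
done
```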
  7. Create the ZooKeeper StatefulSet; pod names are metadata.name + "-" + ordinal, starting from 0, so the result here is zk-0, zk-1, zk-2
  • replicas: 3 ultimately produces 3 ZooKeeper instances
[root@master zookeeper]# cat 6-zookeeper-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: zookeeper
spec:
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
      xincan.kubernetes.io/app: zookeeper
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        xincan.kubernetes.io/company: xincan.cn
        xincan.kubernetes.io/version: 0.0.1
        xincan.kubernetes.io/type: plugins
        xincan.kubernetes.io/product: xincan
        xincan.kubernetes.io/app: zookeeper
    spec:
      affinity:
       podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "xincan.kubernetes.io/app"
                    operator: In
                    values:
                    - zookeeper
              topologyKey: "kubernetes.io/hostname"

      containers:
      - name: zk
        image: base.xincan.cn/library/k8s-zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"

        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: data
          mountPath: /var/lib/zookeeper
      # serviceAccountName: kafka-rbac
      securityContext:
        runAsUser: 1000
        fsGroup: 1000

  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: zookeeper-nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
[root@master zookeeper]#
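Each replica needs a unique myid; the start-zookeeper script in this image derives it from the pod hostname's ordinal (ordinal + 1), which is why the StatefulSet's stable names matter. A plain-shell sketch of that derivation, using a hypothetical hostname:

```shell
HOST="zk-2"        # hypothetical pod hostname (StatefulSet ordinal 2)
ORD=${HOST##*-}    # strip everything up to the last "-" to get the ordinal
MYID=$((ORD + 1))  # start-zookeeper writes ordinal+1 into the myid file
echo "myid=$MYID"
```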
  8. Create the ZooKeeper resources
[root@master kafka]# kubectl apply -f 0-kafka-namespace.yaml
[root@master kafka]# kubectl apply -f 1-zookeeper-kafka-rbac.yaml
[root@master kafka]# kubectl apply -f zookeeper/
storageclass.storage.k8s.io/zookeeper-nfs-storage created
deployment.apps/zookeeper-nfs-client-provisioner created
configmap/zookeeper-config created
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zookeeper-pdb created
statefulset.apps/zk created
  9. Check the installation
[root@master kafka]# kubectl -n kafka get pod
NAME                                                READY   STATUS    RESTARTS   AGE
zk-0                                                1/1     Running   0          18m
zk-1                                                1/1     Running   0          18m
zk-2                                                1/1     Running   0          18m
zookeeper-nfs-client-provisioner-6d7b494668-66h5k   1/1     Running   0          18m
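The volumeClaimTemplates entry named data means each pod gets its own PVC named template-pod; these are the claims the NFS provisioner binds. A sketch of the expected claim names, given the manifests above:

```shell
# PVC names follow <volumeClaimTemplate name>-<pod name>
claims=""
for i in 0 1 2; do
  claim="data-zk-$i"
  claims="$claims$claim "
  echo "$claim"
done
```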

5. Testing the Cluster Data

  1. Write data from the zk-0 node
    • Path: /name
    • Value: zhangshan
[root@master kafka]# kubectl -n kafka exec -it zk-0 zkCli.sh create /name zhangshan
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Connecting to localhost:2181
2021-09-08 10:14:19,465 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2021-09-08 10:14:19,471 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.kafka.svc.cluster.local
2021-09-08 10:14:19,471 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
2021-09-08 10:14:19,475 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2021-09-08 10:14:19,475 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2021-09-08 10:14:19,475 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
2021-09-08 10:14:19,475 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2021-09-08 10:14:19,475 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=4.4.247-1.el7.elrepo.x86_64
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=zookeeper
2021-09-08 10:14:19,476 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/zookeeper
2021-09-08 10:14:19,477 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2021-09-08 10:14:19,479 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
2021-09-08 10:14:19,510 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2021-09-08 10:14:19,617 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2021-09-08 10:14:19,638 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x17bc4d3bafa0001, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /name
[root@master kafka]#
  2. Read the data from the zk-2 node
    • Path: /name
    • The value zhangshan appears in the output
[root@master kafka]# kubectl -n kafka exec -it zk-2 zkCli.sh get /name
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Connecting to localhost:2181
2021-09-08 10:14:33,885 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2021-09-08 10:14:33,890 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-2.zk-hs.kafka.svc.cluster.local
2021-09-08 10:14:33,890 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
2021-09-08 10:14:33,895 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2021-09-08 10:14:33,895 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2021-09-08 10:14:33,895 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
2021-09-08 10:14:33,895 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2021-09-08 10:14:33,895 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2021-09-08 10:14:33,896 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2021-09-08 10:14:33,896 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2021-09-08 10:14:33,896 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2021-09-08 10:14:33,897 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=4.4.247-1.el7.elrepo.x86_64
2021-09-08 10:14:33,897 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=zookeeper
2021-09-08 10:14:33,897 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/zookeeper
2021-09-08 10:14:33,897 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2021-09-08 10:14:33,900 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
2021-09-08 10:14:33,937 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2021-09-08 10:14:34,047 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2021-09-08 10:14:34,060 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x37bc4d3fae80001, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
zhangshan
cZxid = 0x100000043
ctime = Wed Sep 08 10:14:19 UTC 2021
mZxid = 0x100000043
mtime = Wed Sep 08 10:14:19 UTC 2021
pZxid = 0x100000043
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[root@master kafka]#

6. Conclusion

This completes the deployment of a ZooKeeper cluster on Kubernetes.