15-Kubernetes-kafka-nfs Cluster Deployment


Background

Install a Kafka cluster on Kubernetes. It builds on the ZooKeeper cluster deployed earlier (see /kubernetes/nfs/11-Kubernetes-zookeeper-nfs集群部署).

1. Environment preparation: install the Kubernetes cluster

Current nodes: base, master, node1, node2, node3

Kubernetes installation itself is not covered here; follow the references below.

Installing NFS for the Kubernetes cluster is likewise not covered here; follow the references below.

References

/kubernetes/1-Kubernetes基于Centos7构建基础环境(一)

/kubernetes/2-Kubernetes基于Centos7构建基础环境(二)

/kubernetes/3-Kubernetes基于Centos7构建基础环境(三)

4-Kubernetes-基于Centos7安装面板及监控(四)

/kubernetes/nfs/10-Kubernetes-elasticsearch-nfs集群部署

/kubernetes/nfs/6-Kubernetes-nacos-nfs单机版部署

Cluster node    Cluster domain      Description
base            base.xincan.cn      Hosts Harbor, NFS, and other supporting services
master          master.xincan.cn    Kubernetes master node; kept tainted (with tolerations) so business workloads stay off it; NFS client, etc.
node1           node1.xincan.cn     Kubernetes worker node; NFS client, etc.
node2           node2.xincan.cn     Kubernetes worker node; NFS client, etc.
node3           node3.xincan.cn     Kubernetes worker node; NFS client, etc.
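
The master node keeps a scheduling taint so that business workloads land only on the worker nodes. A quick way to confirm the taint, and to add one if it is missing, is sketched below; the node-role.kubernetes.io/master key is the common kubeadm default and is only an assumption here:
[root@master ~]# kubectl describe node master.xincan.cn | grep -i taints
[root@master ~]# kubectl taint nodes master.xincan.cn node-role.kubernetes.io/master=:NoSchedule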

2. Overall workflow:

  1. Original image: fastop/kafka:2.2.0
  2. Pull the original image and push it to the private Harbor registry
  3. On the Kubernetes master node, create a kafka directory to hold the Kubernetes manifests; the directory layout is as follows:
[root@master kafka]# tree
.
├── 0-kafka-namespace.yaml
├── 1-zookeeper-kafka-rbac.yaml
├── kafka
│   ├── 1-kafka-storageclass.yaml
│   ├── 2-kafka-nfs-provisioner.yaml
│   ├── 3-kafka-poddisruptionbudget.yaml
│   ├── 4-kafka-service.yaml
│   └── 5-kafka-statefulset.yaml
└── zookeeper
    ├── 1-zookeeper-storageclass.yaml
    ├── 2-zookeeper-nfs-provisioner.yaml
    ├── 3-zookeeper-configmap.yaml
    ├── 4-zookeeper-service.yaml
    ├── 5-zookeeper-poddisruptionbudget.yaml
    └── 6-zookeeper-statefulset.yaml
    
  4. Create the resources
  5. Show the results

3. Image handling

  1. Pull the image
[root@master /]# docker pull fastop/kafka:2.2.0
  2. Tag the image
[root@master /]# docker tag fastop/kafka:2.2.0 base.xincan.cn/library/k8s-kafka:2.2.0
  3. Push the image to Harbor
[root@master /]# docker push base.xincan.cn/library/k8s-kafka:2.2.0
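    • If the Harbor registry requires authentication, log in before pushing and confirm the tagged image exists locally; a rough sketch (account details are your own):
[root@master /]# docker login base.xincan.cn
[root@master /]# docker images | grep k8s-kafka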

4. Resource creation

  1. The namespace and RBAC permissions were already created; see /kubernetes/nfs/11-Kubernetes-zookeeper-nfs集群部署

  2. Create the StorageClass for Kafka, named kafka-nfs-storage

    • Subsequent PVs and PVCs are created dynamically through the provisioner Deployment
[root@master kafka]# cat 1-kafka-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka-nfs-storage
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
provisioner: nfs.xincan.kafka/kafka # or choose another name; must match the Deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
[root@master kafka]#
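    • Note that StorageClass is a cluster-scoped resource, so the namespace field above is ignored by Kubernetes. Once the manifests are applied (step 7 below), the StorageClass can be verified with a command such as:
[root@master kafka]# kubectl get storageclass kafka-nfs-storage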
  3. Create the NFS provisioner Deployment, named kafka-nfs-client-provisioner
    • mountPath: /persistentvolumes is the fixed directory inside the nfs-client-provisioner container; it is mounted to the kafka-multiple directory /hatech/nfs/data/xincan/kafka-multiple on the NFS server
    • value: 192.168.1.80 is the address of the NFS server
    • path: /hatech/nfs/data/xincan/kafka-multiple is the export created on the NFS server, i.e. where the data will actually be stored
    • serviceAccountName: zk-nfs-client-provisioner grants the dynamic provisioner the permissions to operate on PVs, PVCs, etc.
    • The value of PROVISIONER_NAME must match the provisioner value in the StorageClass exactly
[root@master kafka]# cat 2-kafka-nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-nfs-client-provisioner
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
  template:
    metadata:
      labels:
        xincan.kubernetes.io/company: xincan.cn
        xincan.kubernetes.io/version: 0.0.1
        xincan.kubernetes.io/type: plugins
        xincan.kubernetes.io/product: xincan
    spec:
      serviceAccountName: zk-nfs-client-provisioner
      containers:
        - name: kafka-nfs-client-provisioner
          image: base.xincan.cn/library/nfs-client-provisioner:v1.5.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.xincan.kafka/kafka #fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.80
            - name: NFS_PATH
              value: /hatech/nfs/data/xincan/kafka-multiple
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.80
            path: /hatech/nfs/data/xincan/kafka-multiple
[root@master kafka]#
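    • The export must already exist on the NFS server (192.168.1.80); from any NFS client you can confirm the exported path with something like:
[root@master kafka]# showmount -e 192.168.1.80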
  4. Create the Kafka PodDisruptionBudget, named kafka-pdb
[root@master kafka]# cat 3-kafka-poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
spec:
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
      xincan.kubernetes.io/app: kafka
  minAvailable: 2
[root@master kafka]#
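    • With replicas: 3 in the StatefulSet below and minAvailable: 2 here, at most one Kafka pod can be taken down by a voluntary disruption at any time. After the manifests are applied, the budget can be checked with:
[root@master kafka]# kubectl -n kafka get pdb kafka-pdb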
  5. Create the Services that expose Kafka, named kafka-hs and kafka-cs
[root@master kafka]# cat 4-kafka-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-hs
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
spec:
  selector:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
  clusterIP: None
  ports:
  - port: 9092
    name: server

---


apiVersion: v1
kind: Service
metadata:
  name: kafka-cs
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
spec:
  selector:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
  type: NodePort
  ports:
  - name: client
    port: 9092
  #  nodePort: 19092
[root@master kafka]#
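    • kafka-hs is a headless Service (clusterIP: None), so each broker gets a stable DNS name of the form kafka-<ordinal>.kafka-hs.kafka.svc.cluster.local, while kafka-cs exposes port 9092 through a NodePort. Once the StatefulSet below is running, a rough in-cluster DNS check looks like this (the busybox image and pod name are only an illustration):
[root@master kafka]# kubectl -n kafka run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup kafka-0.kafka-hs.kafka.svc.cluster.local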
  6. Create the Kafka StatefulSet behind the headless Service; pod names follow metadata.name + "-" + ordinal, starting at 0, so the pods here end up as kafka-0, kafka-1, kafka-2
  • replicas: 3 produces three Kafka instances
  • Pod anti-affinity keeps the Kafka brokers on different nodes, and pod affinity prefers scheduling each broker on a node that already runs a ZooKeeper pod
[root@master kafka]# cat 5-kafka-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
  labels:
    xincan.kubernetes.io/company: xincan.cn
    xincan.kubernetes.io/version: 0.0.1
    xincan.kubernetes.io/type: plugins
    xincan.kubernetes.io/product: xincan
    xincan.kubernetes.io/app: kafka
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      xincan.kubernetes.io/company: xincan.cn
      xincan.kubernetes.io/version: 0.0.1
      xincan.kubernetes.io/type: plugins
      xincan.kubernetes.io/product: xincan
      xincan.kubernetes.io/app: kafka
  template:
    metadata:
      labels:
        xincan.kubernetes.io/company: xincan.cn
        xincan.kubernetes.io/version: 0.0.1
        xincan.kubernetes.io/type: plugins
        xincan.kubernetes.io/product: xincan
        xincan.kubernetes.io/app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "xincan.kubernetes.io/app"
                    operator: In
                    values:
                    - kafka
              topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                    matchExpressions:
                      - key: "xincan.kubernetes.io/app"
                        operator: In
                        values:
                        - zookeeper
                 topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
        - name: k8s-kafka
          imagePullPolicy: Always
          image: base.xincan.cn/library/k8s-kafka:2.2.0
          resources:
            requests:
              memory: "600Mi"
              cpu: 500m
          ports:
            - containerPort: 9092
              name: server
          command:
            - sh
            - -c
            - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=2.2.0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=4 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
          env:
            - name: KAFKA_HEAP_OPTS
              value : "-Xmx512M -Xms512M"
            - name: KAFKA_OPTS
              value: "-Dlogging.level=INFO"
          volumeMounts:
            - name: kafka
              mountPath: /var/lib/kafka
          readinessProbe:
            tcpSocket:
              port: 9092
            timeoutSeconds: 1
            initialDelaySeconds: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: kafka
      annotations:
        volume.beta.kubernetes.io/storage-class: "kafka-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
[root@master kafka]#
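    • The broker.id is derived from the pod ordinal through the shell parameter expansion ${HOSTNAME##*-}, which strips everything up to the last "-"; a minimal illustration:
[root@master kafka]# sh -c 'HOSTNAME=kafka-2; echo ${HOSTNAME##*-}'
2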
  7. Create the Kafka resources
[root@master kafka]# kubectl apply -f kafka/
storageclass.storage.k8s.io/kafka-nfs-storage created
deployment.apps/kafka-nfs-client-provisioner created
poddisruptionbudget.policy/kafka-pdb created
service/kafka-hs created
service/kafka-cs created
statefulset.apps/kafka created
[root@master kafka]#
  8. Check the installation status
[root@master kafka]# kubectl -n kafka get all
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/kafka-0                                             1/1     Running   0          76s
pod/kafka-1                                             1/1     Running   0          66s
pod/kafka-2                                             1/1     Running   0          56s
pod/kafka-nfs-client-provisioner-65fd4d5cbc-xc4rj       1/1     Running   0          77s
pod/zk-0                                                1/1     Running   0          40m
pod/zk-1                                                1/1     Running   0          40m
pod/zk-2                                                1/1     Running   0          40m
pod/zookeeper-nfs-client-provisioner-6d7b494668-66h5k   1/1     Running   0          40m

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/kafka-cs   NodePort    10.97.249.127   <none>        9092:31903/TCP      78s
service/kafka-hs   ClusterIP   None            <none>        9092/TCP            78s
service/zk-cs      ClusterIP   10.100.46.147   <none>        2181/TCP            40m
service/zk-hs      ClusterIP   None            <none>        2888/TCP,3888/TCP   40m

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kafka-nfs-client-provisioner       1/1     1            1           78s
deployment.apps/zookeeper-nfs-client-provisioner   1/1     1            1           40m

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/kafka-nfs-client-provisioner-65fd4d5cbc       1         1         1       78s
replicaset.apps/zookeeper-nfs-client-provisioner-6d7b494668   1         1         1       40m

NAME                     READY   AGE
statefulset.apps/kafka   3/3     78s
statefulset.apps/zk      3/3     40m
[root@master kafka]#
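    • You can also confirm that each broker's PVC was created from the volumeClaimTemplates and bound to a dynamically provisioned PV:
[root@master kafka]# kubectl -n kafka get pvc
[root@master kafka]# kubectl get pv | grep kafka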

5. Test the Kafka cluster: producer and consumer data

  1. Exec into kafka-0 and create a topic named test
[root@master kafka]# kubectl -n kafka exec -it kafka-0 -- bash

kafka@kafka-0:/$ kafka-topics.sh --create \
--topic test \
--zookeeper zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181 \
--partitions 3 \
--replication-factor 2

Created topic test.
kafka@kafka-0:/$
  2. List the existing topics
kafka@kafka-0:/$ kafka-topics.sh --list --zookeeper zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181
test
kafka@kafka-0:/$
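    • To see how the partitions and replicas were placed, the topic can also be described with the same ZooKeeper connect string:
kafka@kafka-0:/$ kafka-topics.sh --describe --topic test --zookeeper zk-0.zk-hs.kafka.svc.cluster.local:2181,zk-1.zk-hs.kafka.svc.cluster.local:2181,zk-2.zk-hs.kafka.svc.cluster.local:2181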
  3. Leave this window open; it is the consumer window
kafka@kafka-0:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9092
  4. Open a new terminal and exec into the kafka-1 container
[root@master ~]# kubectl -n kafka exec -it kafka-1 -- bash
kafka@kafka-1:/$
  5. Open the producer window
    • Type any content, e.g. "测试kafka", and press Enter
kafka@kafka-1:/$ kafka-console-producer.sh --topic test --broker-list localhost:9092

>测试kafka

>
  6. Go back to the consumer window opened in step 3 and check whether it received the data sent by the producer
    • The consumer shows the data produced by the producer
    • This verifies that the Kafka cluster is working
kafka@kafka-0:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9092
测试kafka
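    • To replay all messages written to the topic so far, the consumer can be restarted with --from-beginning:
kafka@kafka-0:/$ kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092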

6. Conclusion

This completes the deployment of a Kafka cluster on Kubernetes.