1-Kubernetes NFS Dynamic Storage Provisioning


Dynamic Volume Provisioning

Dynamic volume provisioning allows storage volumes to be created on demand. Without it, a cluster administrator has to manually contact their cloud or storage provider to create new volumes, and then create PersistentVolume objects in the Kubernetes cluster to represent them. Dynamic provisioning removes the need for administrators to pre-provision storage: instead, storage is provisioned automatically when users request it.

Background

The implementation of dynamic volume provisioning is based on the StorageClass API object in the storage.k8s.io API group. A cluster administrator can define as many StorageClass objects as needed, each specifying a volume plugin (a.k.a. provisioner) along with the parameters the provisioner needs when creating a volume.

Administrators can define and expose multiple flavors of storage within a cluster (from the same or different storage systems), each with a custom set of parameters. This design also ensures that end users don't have to worry about the complexity and nuance of how storage is provisioned, while still being able to choose from multiple storage options.

I. Environment Preparation: Installing the Kubernetes Cluster

The NFS server runs on the base VM; master is the Kubernetes control-plane node, and slave1 and slave2 are the two worker nodes.

Installing Kubernetes itself is not covered here; see the references below.

References

1-Kubernetes: Building the Base Environment on CentOS 7 (Part 1)

2-Kubernetes: Building the Base Environment on CentOS 7 (Part 2)

3-Kubernetes: Building the Base Environment on CentOS 7 (Part 3)

4-Kubernetes: Installing the Dashboard and Monitoring on CentOS 7 (Part 4) (xincan.gitee.io/posts/2aa23…)

Host     Domain              Description
base     base.xincan.cn      Hosts Harbor, NFS, and other supporting services
master   master.xincan.cn    Kubernetes control-plane node; tainted to keep workloads off; NFS client
slave1   slave1.xincan.cn    Kubernetes worker node; NFS client
slave2   slave2.xincan.cn    Kubernetes worker node; NFS client

II. Overall Workflow

  1. Create the NFS server;
  2. Create the Kubernetes namespace study;
  3. Create a Service Account that governs the permissions the NFS provisioner has while running in the Kubernetes cluster;
  4. Create a StorageClass, which PVCs reference; it names the NFS provisioner that does the provisioning work and ties the resulting PV to the PVC;
  5. Create the NFS provisioner, which does two things: it creates a mount point (volume) under the NFS shared directory, and it creates a PV and associates that PV with the NFS mount point;
  6. Create an Nginx StatefulSet with a headless service to dynamically create the corresponding PVs and PVCs.

III. Installing the NFS Server

  1. Install the NFS server on the base host

    [root@base ~]# yum -y install nfs-utils rpcbind
    [root@base ~]# systemctl start rpcbind.service && systemctl enable rpcbind.service
    [root@base ~]# systemctl start nfs.service && systemctl enable nfs.service
    
    # The exports below use subdirectories of /nfs/data, so create them as well
    # and grant them 777 permissions (here: nginx, mysql-multiple, mysql-single, prometheus)
    [root@base ~]# mkdir -p /nfs/data/{nginx,mysql-multiple,mysql-single,prometheus}
    [root@base ~]# chmod -R 777 /nfs/data
    
    # Edit /etc/exports and add the lines below
    # Options:
    # sync: write data through to both the memory buffer and the disk; slower, but keeps data consistent
    # async: keep data in the memory buffer and write it to disk only when necessary
    [root@base ~]# vim /etc/exports
    /nfs/data/prometheus            172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
    /nfs/data/mysql-single          172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
    /nfs/data/mysql-multiple        172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
    /nfs/data/nginx                 172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
    
    [root@base ~]# exportfs -arv
    exporting 172.16.124.1/24:/nfs/data/nginx
    exporting 172.16.124.1/24:/nfs/data/mysql-multiple
    exporting 172.16.124.1/24:/nfs/data/mysql-single
    exporting 172.16.124.1/24:/nfs/data/prometheus
    [root@base ~]# 
    
    # Verify the exports are visible
    [root@base ~]# showmount -e localhost
    Export list for localhost:
    /nfs/data/nginx          172.16.124.1/24
    /nfs/data/mysql-multiple 172.16.124.1/24
    /nfs/data/mysql-single   172.16.124.1/24
    /nfs/data/prometheus     172.16.124.1/24
    [root@base ~]# 
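The /etc/exports format used above (absolute path, client spec, options in parentheses) can be sanity-checked before running exportfs. A hypothetical helper, not part of the original setup:

```shell
#!/bin/sh
# Hypothetical sanity check for /etc/exports entries: each non-comment line
# should look like "<absolute-path> <client-spec>(<options>)".
check_exports() {
  awk 'NF && $1 !~ /^#/ {
    if ($1 !~ /^\// || $2 !~ /^[^(]+\([a-z_,0-9]+\)$/) {
      print "bad entry on line " NR ": " $0
      bad = 1
    }
  } END { exit bad }' "$1"
}

# Example: validate a copy of two of the exports from this article
cat > /tmp/exports.test <<'EOF'
/nfs/data/nginx                 172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
/nfs/data/prometheus            172.16.124.1/24(rw,sync,no_root_squash,no_all_squash)
EOF
check_exports /tmp/exports.test && echo "exports OK"
```

This only checks the line shape; exportfs -arv remains the authoritative test, since it also verifies that the exported directories exist.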
    
  2. Install NFS on the Kubernetes cluster: control-plane node master

    [root@master ~]# yum -y install nfs-utils rpcbind
    [root@master ~]# systemctl start rpcbind.service
    [root@master ~]# systemctl enable rpcbind.service
    [root@master ~]# systemctl start nfs.service
    [root@master ~]# systemctl enable nfs.service
    
    # Verify connectivity to the NFS server at 172.16.124.130
    [root@master ~]# showmount -e 172.16.124.130
    Export list for 172.16.124.130:
    /nfs/data/nginx          172.16.124.1/24
    /nfs/data/mysql-multiple 172.16.124.1/24
    /nfs/data/mysql-single   172.16.124.1/24
    /nfs/data/prometheus     172.16.124.1/24
    [root@master ~]#
    
  3. Install NFS on the Kubernetes cluster: worker node slave1

    [root@slave1 ~]# yum -y install nfs-utils rpcbind
    [root@slave1 ~]# systemctl start rpcbind.service
    [root@slave1 ~]# systemctl enable rpcbind.service
    [root@slave1 ~]# systemctl start nfs.service
    [root@slave1 ~]# systemctl enable nfs.service
    
    # Verify connectivity to the NFS server at 172.16.124.130
    [root@slave1 ~]# showmount -e 172.16.124.130
    Export list for 172.16.124.130:
    /nfs/data/nginx          172.16.124.1/24
    /nfs/data/mysql-multiple 172.16.124.1/24
    /nfs/data/mysql-single   172.16.124.1/24
    /nfs/data/prometheus     172.16.124.1/24
    [root@slave1 ~]#
    
  4. Install NFS on the Kubernetes cluster: worker node slave2

    [root@slave2 ~]# yum -y install nfs-utils rpcbind
    [root@slave2 ~]# systemctl start rpcbind.service
    [root@slave2 ~]# systemctl enable rpcbind.service
    [root@slave2 ~]# systemctl start nfs.service
    [root@slave2 ~]# systemctl enable nfs.service
    
    # Verify connectivity to the NFS server at 172.16.124.130
    [root@slave2 ~]# showmount -e 172.16.124.130
    Export list for 172.16.124.130:
    /nfs/data/nginx          172.16.124.1/24
    /nfs/data/mysql-multiple 172.16.124.1/24
    /nfs/data/mysql-single   172.16.124.1/24
    /nfs/data/prometheus     172.16.124.1/24
    [root@slave2 ~]#
    

IV. Building the Kubernetes Resources

Reference code

Note: all the YAML files in this section live on the Kubernetes master node, in any directory you like.

The final directory layout looks like this:

dynamic-nfs
├── nfs-namespace.yaml
├── nfs-rbac.yaml
├── nfs-sc.yaml
├── nfs-provisioner.yaml
└── nfs-statefulset.yaml
  1. Create the Kubernetes namespace study.

    [root@master dynamic-nfs]# vim nfs-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: study
    
  2. Create the Service Account that governs the NFS provisioner's permissions in the Kubernetes cluster

    [root@master dynamic-nfs]# vim nfs-rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: study
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    
    ---
    
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: study
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: study
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    
    ---
    
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: study
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: study
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    
  3. Create the StorageClass. PVCs reference it by name; when a claim is made, the named NFS provisioner is invoked to do the provisioning work, and the resulting PV is bound to the PVC

    [root@master dynamic-nfs]# vim nfs-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-client-storage
    provisioner: fuseim.pri/ifs # or choose another name; it must match the deployment's PROVISIONER_NAME env var
    parameters:
      archiveOnDelete: "false"
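Once this class exists, any PVC can request dynamic provisioning simply by naming it. A minimal sketch (the claim name example-claim is made up for illustration; applying it against the cluster would trigger the provisioner):

```shell
# Write a hypothetical PVC manifest that requests a volume from the
# nfs-client-storage class defined above.
cat > pvc-example.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: study
spec:
  storageClassName: nfs-client-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl create -f pvc-example.yaml   # run this on the cluster
grep storageClassName pvc-example.yaml
```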
    
  4. Create the NFS provisioner. It has two functions: one is to create the mount point (volume) under the NFS shared directory, the other is to create a PV and associate that PV with the NFS mount point

    • Automatically created PVs are stored on the NFS server in directories named ${namespace}-${pvcName}-${pvName}

    • When such a PV is reclaimed, its directory is kept on the NFS server renamed to archived-${namespace}-${pvcName}-${pvName} (this archiving only happens when the StorageClass sets archiveOnDelete: "true"; with "false", as configured above, the data is removed)

    [root@master dynamic-nfs]# vim nfs-provisioner.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: study
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: base.xincan.cn/library/nfs-client-provisioner:v1.5.2
              volumeMounts:
                - name: nfs-client-root
                  # This path is fixed inside the nfs-client-provisioner container;
                  # it is mounted from the /nfs/data/nginx directory on the NFS server
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 172.16.124.130
                - name: NFS_PATH
                  value: /nfs/data/nginx
          volumes:
            - name: nfs-client-root
              nfs:
                server: 172.16.124.130
                path: /nfs/data/nginx
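The directory naming rules quoted above can be expressed as a small helper. A hypothetical sketch, just to make the convention concrete:

```shell
# Directory the provisioner creates under the NFS export for a PV,
# and the name used when the PV is reclaimed with archiveOnDelete=true.
pv_dir() {
  # $1=namespace  $2=PVC name  $3=PV name
  echo "$1-$2-$3"
}
archive_dir() {
  echo "archived-$(pv_dir "$@")"
}

# One of the directories that appears later in this article:
pv_dir study nginx-nfs-web-0 pvc-ea5330c9-06b7-4de6-9040-c28082151434
```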
    
  5. Create the Nginx StatefulSet with a headless service to dynamically create the corresponding PVs and PVCs.

    [root@master dynamic-nfs]# vim nfs-statefulset.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      namespace: study
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None
      selector:
        app: nfs-web # must match the StatefulSet pod labels so the headless service resolves the pods
    
    ---
    
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nfs-web
      namespace: study
    spec:
      serviceName: "nginx"
      replicas: 3
      selector:
        matchLabels:
          app: nfs-web # has to match .spec.template.metadata.labels
      template:
        metadata:
          labels:
            app: nfs-web
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: nginx
            image: base.xincan.cn/library/nginx:v1.7.9
            ports:
            - containerPort: 80
              name: web
            volumeMounts:
            - name: nginx
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: nginx
          annotations:
            # Reference the StorageClass created above (newer clusters can use spec.storageClassName instead)
            volume.beta.kubernetes.io/storage-class: nfs-client-storage
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 1Gi
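Each StatefulSet replica gets its own PVC, named from the volumeClaimTemplate name, the StatefulSet name, and the pod ordinal. A sketch of that convention (hypothetical helper):

```shell
# PVC name for a StatefulSet replica:
# <volumeClaimTemplate name>-<statefulset name>-<ordinal>
pvc_name() {
  echo "$1-$2-$3"
}

# The three replicas above therefore claim:
for i in 0 1 2; do pvc_name nginx nfs-web "$i"; done
```

These are exactly the PVC names that show up in the kubectl output in the next step.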
    
  6. From inside the dynamic-nfs directory, run the following commands to create the corresponding Kubernetes resources

    [root@master dynamic-nfs]# kubectl create -f nfs-namespace.yaml
    [root@master dynamic-nfs]# kubectl create -f nfs-rbac.yaml
    [root@master dynamic-nfs]# kubectl create -f nfs-sc.yaml
    [root@master dynamic-nfs]# kubectl create -f nfs-provisioner.yaml
    [root@master dynamic-nfs]# kubectl create -f nfs-statefulset.yaml
    deployment.apps/nfs-client-provisioner created
    serviceaccount/nfs-client-provisioner created
    clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
    clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
    role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    storageclass.storage.k8s.io/nfs-client-storage created
    service/nginx created
    statefulset.apps/nfs-web created
    [root@master dynamic-nfs]#
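The five manifests have to be created in dependency order: namespace first, then RBAC, StorageClass, provisioner, and the StatefulSet last. A hypothetical wrapper that applies them in sequence (with a dry-run mode that just prints the commands):

```shell
# Apply the manifests in dependency order; set DRY_RUN=1 to only print.
apply_all() {
  for f in nfs-namespace nfs-rbac nfs-sc nfs-provisioner nfs-statefulset; do
    if [ -n "$DRY_RUN" ]; then
      echo "kubectl create -f $f.yaml"
    else
      kubectl create -f "$f.yaml"
    fi
  done
}

DRY_RUN=1 apply_all
```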
    
  7. Check the results

    # View the sc, pv, and pvc resources
    [root@master dynamic-nfs]# kubectl -n study get sc,pv,pvc
    NAME                                             PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    storageclass.storage.k8s.io/nfs-client-storage   fuseim.pri/ifs   Delete          Immediate           false                  63s
    
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS         REASON   AGE
    persistentvolume/pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4   1Gi        RWO            Delete           Bound    study/nginx-nfs-web-2   nfs-client-storage            31s
    persistentvolume/pvc-a039e19e-0ae7-4a83-ba18-887eeb955959   1Gi        RWO            Delete           Bound    study/nginx-nfs-web-1   nfs-client-storage            34s
    persistentvolume/pvc-ea5330c9-06b7-4de6-9040-c28082151434   1Gi        RWO            Delete           Bound    study/nginx-nfs-web-0   nfs-client-storage            44s
    
    NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    persistentvolumeclaim/nginx-nfs-web-0   Bound    pvc-ea5330c9-06b7-4de6-9040-c28082151434   1Gi        RWO            nfs-client-storage   63s
    persistentvolumeclaim/nginx-nfs-web-1   Bound    pvc-a039e19e-0ae7-4a83-ba18-887eeb955959   1Gi        RWO            nfs-client-storage   34s
    persistentvolumeclaim/nginx-nfs-web-2   Bound    pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4   1Gi        RWO            nfs-client-storage   31s
    [root@master dynamic-nfs]#
    
    # View the pod, service, deployment, replicaset, and statefulset resources in the study namespace
    [root@master dynamic-nfs]# kubectl -n study get all
    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/nfs-client-provisioner-68cb7f8754-wcn2b   1/1     Running   0          12m
    pod/nfs-web-0                                 1/1     Running   0          12m
    pod/nfs-web-1                                 1/1     Running   0          11m
    pod/nfs-web-2                                 1/1     Running   0          11m
    
    NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/nginx   ClusterIP   None         <none>        80/TCP    12m
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nfs-client-provisioner   1/1     1            1           12m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/nfs-client-provisioner-68cb7f8754   1         1         1       12m
    
    NAME                       READY   AGE
    statefulset.apps/nfs-web   3/3     12m
    [root@master dynamic-nfs]#
    
  8. Inspect the dynamically created mount directories on the remote NFS server

    [root@base nginx]# pwd
    /nfs/data/nginx
    [root@base nginx]# ls
    study-nginx-nfs-web-0-pvc-ea5330c9-06b7-4de6-9040-c28082151434
    study-nginx-nfs-web-1-pvc-a039e19e-0ae7-4a83-ba18-887eeb955959
    study-nginx-nfs-web-2-pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4
    [root@base nginx]#
    

V. Destructive Testing

  1. On the Kubernetes master node, use a for loop to write an index.html into the nginx container of each pod

    [root@master ~]# for i in 0 1 2; do kubectl -n study exec -it nfs-web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done
    [root@master ~]#
    
  2. Check the mounts under /nfs/data on the remote NFS server: each directory now contains an index.html

    # mount path
    [root@base nginx]# pwd
    /nfs/data/nginx
    # Directories were created per the naming rule above, one per pod of the headless service
    [root@base nginx]# ls
    study-nginx-nfs-web-0-pvc-ea5330c9-06b7-4de6-9040-c28082151434
    study-nginx-nfs-web-1-pvc-a039e19e-0ae7-4a83-ba18-887eeb955959
    study-nginx-nfs-web-2-pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4
    # Enter each directory listed by ls: every one contains an index.html
    [root@base nginx]# cd study-nginx-nfs-web-0-pvc-ea5330c9-06b7-4de6-9040-c28082151434/
    [root@base study-nginx-nfs-web-0-pvc-ea5330c9-06b7-4de6-9040-c28082151434]# ls
    index.html
    [root@base study-nginx-nfs-web-0-pvc-ea5330c9-06b7-4de6-9040-c28082151434]# cd ..
    [root@base nginx]# cd study-nginx-nfs-web-1-pvc-a039e19e-0ae7-4a83-ba18-887eeb955959/
    [root@base study-nginx-nfs-web-1-pvc-a039e19e-0ae7-4a83-ba18-887eeb955959]# ls
    index.html
    [root@base study-nginx-nfs-web-1-pvc-a039e19e-0ae7-4a83-ba18-887eeb955959]# cd ..
    [root@base nginx]# cd study-nginx-nfs-web-2-pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4/
    [root@base study-nginx-nfs-web-2-pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4]# ls
    index.html
    [root@base study-nginx-nfs-web-2-pvc-4d5ad539-28df-4eea-b57b-7a9a184addb4]#
    
  3. On the Kubernetes master node, read back the nginx index file from each pod

    # Read index.html from each pod's nginx container
    [root@master ~]# for i in 0 1 2; do kubectl -n study exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
    nfs-web-0
    nfs-web-1
    nfs-web-2
    [root@master ~]#
    
  4. Delete the pods, and note how the AGE changes

    [root@master ~]# kubectl -n study get pod -l app=nfs-web
    NAME        READY   STATUS    RESTARTS   AGE
    nfs-web-0   1/1     Running   1          51m
    nfs-web-1   1/1     Running   1          51m
    nfs-web-2   1/1     Running   1          51m
    [root@master ~]# kubectl -n study delete pod -l app=nfs-web
    pod "nfs-web-0" deleted
    pod "nfs-web-1" deleted
    pod "nfs-web-2" deleted
    [root@master ~]# kubectl -n study get pod -l app=nfs-web
    NAME        READY   STATUS    RESTARTS   AGE
    nfs-web-0   1/1     Running   0          7s
    nfs-web-1   1/1     Running   0          3s
    nfs-web-2   1/1     Running   0          2s
    [root@master ~]#
    

VI. Conclusion

As you can see, the StatefulSet controller's fixed pod creation order keeps the topology between pods stable, while nfs-client-provisioner automatically creates a remote storage volume with a fixed mapping to each pod, ensuring that data survives pod rebuilds.