Kubernetes Persistent Storage Volumes

Introduction to Storage Volumes

Pods have a lifecycle; when it ends, the data inside the pod (config files, business data, etc.) disappears. Solution: separate the data from the pod by placing it on a dedicated storage volume.

Pods can be scheduled across the nodes of a k8s cluster. If a pod dies and is rescheduled to another node, the link between the pod and its data is broken. Solution: we need a storage system decoupled from the cluster nodes to achieve data persistence.

Volumes provide the ability to mount external storage into containers.

Types of Storage Volumes

Kubernetes supports a rich variety of volume types. List them with the kubectl explain pod.spec.volumes command, or see: kubernetes.io/docs/concep…

Kubernetes supports a long list of volume types. We can group the common ones into a few simple categories (a sketch of how some of them appear in a pod spec follows the list):

  • Local storage volumes
    • emptyDir: data is deleted along with the pod; used for temporary data storage
    • hostPath: maps a directory on the host node (local storage volume)
  • Network storage volumes
    • NAS: nfs, etc.
    • SAN: iscsi, FC, etc.
    • Distributed storage: glusterfs, cephfs, rbd, cinder, etc.
    • Cloud storage: aws, azurefile, etc.
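
For orientation, this is how a few of these types appear under pod.spec.volumes (a minimal sketch; the NFS server address and all paths are placeholders):

volumes:
- name: scratch            # temporary storage, removed together with the pod
  emptyDir: {}
- name: host-dir           # maps an existing directory on the node
  hostPath:
    path: /opt
    type: Directory
- name: shared             # NFS network storage
  nfs:
    server: 192.168.91.101
    path: /data/nfs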

Choosing a Storage Volume

There are many storage products on the market, but from an application perspective they fall into three main categories:

  • File storage, e.g., nfs, glusterfs, cephfs
    • Pros: data sharing (multiple pods can mount it and read/write concurrently)
    • Cons: relatively poor performance
  • Block storage, e.g., iscsi, rbd
    • Pros: better performance than file storage
    • Cons: generally no data sharing (with some exceptions)
  • Object storage, e.g., Ceph object storage
    • Pros: good performance and data sharing
    • Cons: accessed in a special way (via API), so application support is limited

Faced with the wide variety of storage volumes Kubernetes supports, choosing one can be difficult. When selecting storage, focus on the core requirements:

  • Does the data need to be persistent?
  • Data reliability: e.g., does the storage cluster have single points of failure? Are there data replicas?
  • Performance
  • Scalability: e.g., can capacity be expanded easily to keep up with data growth?
  • Operational complexity: storage is hard to operate, so prefer stable open-source solutions or commercial products
  • Cost

In short, choosing storage involves many factors. Get familiar with the various storage products, understand their strengths and weaknesses, and weigh them against your own requirements to pick what fits.

Local Storage Volumes: emptyDir

  • Use case: sharing data between containers in the same pod
  • Characteristic: the volume is deleted when the pod is deleted

volume-emptydir.yml

apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
spec:
  containers:
  - name: write
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data

  - name: read
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","cat /data/1.txt; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
      
  volumes:
  - name: data
    emptyDir: {}

kubectl apply -f volume-emptydir.yml

kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
volume-emptydir   2/2     Running   0          20s

kubectl describe pod volume-emptydir | tail -10
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  36s   default-scheduler  Successfully assigned default/volume-emptydir to worker02
  Normal  Pulling    36s   kubelet            Pulling image "centos"
  Normal  Pulled     29s   kubelet            Successfully pulled image "centos" in 7.047554868s (7.047570822s including waiting)
  Normal  Created    29s   kubelet            Created container write
  Normal  Started    28s   kubelet            Started container write
  Normal  Pulled     28s   kubelet            Container image "centos" already present on machine
  Normal  Created    28s   kubelet            Created container read
  Normal  Started    28s   kubelet            Started container read

kubectl logs volume-emptydir -c write

kubectl logs volume-emptydir -c read
haha

# Clean up
kubectl delete -f volume-emptydir.yml
rm -f volume-emptydir.yml
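
Note that emptyDir also accepts optional fields worth knowing about; a minimal sketch (the sizeLimit value here is an arbitrary example):

  volumes:
  - name: cache
    emptyDir:
      medium: Memory      # back the volume with tmpfs (node RAM) instead of node disk
      sizeLimit: 100Mi    # the pod is evicted if usage exceeds this limit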

Local Storage Volumes: hostPath

  • Use case

    Mapping a node directory into the pod (containers in the pod need access to data on the node; for example, a monitoring agent can only report a node's state if it can read the host's files)

  • Drawback

    If the node fails and the controller starts the container on another node, the pod will then see that other node's data instead (no data sharing across nodes)

volume-hostpath.yml

apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo haha > /data/1.txt ; sleep 600"]
    volumeMounts:
    - name: data
      mountPath: /data
      
  volumes:
  - name: data
    hostPath:
      path: /opt
      type: Directory

kubectl apply -f volume-hostpath.yml

# The pod is running on the worker02 node
kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
volume-hostpath   1/1     Running   0          7s    10.244.30.70   worker02   <none>           <none>

# Verify the file written to the node where the pod is running
# worker02
cat /opt/1.txt
haha

# master01
# Clean up
kubectl delete  -f volume-hostpath.yml
rm -f volume-hostpath.yml
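
The hostPath type field controls how the path is validated: Directory requires the directory to already exist on the node. A sketch of a commonly used alternative (the path here is a placeholder):

  volumes:
  - name: data
    hostPath:
      path: /opt/app-data
      type: DirectoryOrCreate   # create the directory on the node if it does not exist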

Network Storage Volumes: NFS

# NFS node, 192.168.91.101
mkdir -p /data/nfs
yum -y install nfs-utils
echo "/data/nfs       *(rw,sync,no_root_squash)" > /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

showmount -e
Export list for test:
/data/nfs *

# On all worker nodes (worker01 and worker02), install the NFS client packages and verify the export is reachable
yum -y install nfs-utils

showmount -e 192.168.91.101
Export list for 192.168.91.101:
/data/nfs *
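
Optionally, do a manual mount test from a worker node before handing the export to Kubernetes (unmount afterwards; kubelet will manage the mounts itself):

mount -t nfs 192.168.91.101:/data/nfs /mnt
df -h /mnt
umount /mnt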

# master01 

volume-nfs.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: volume-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: documentroot
        nfs:
          server: 192.168.91.101
          path: /data/nfs

kubectl apply -f volume-nfs.yml

# nfs
# Create a test file in the exported directory on the NFS server
echo "volume-nfs" > /data/nfs/index.html

# master01
# Verify the pods
kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
volume-nfs-7fdb89d5b5-9z4hs   1/1     Running   0          2m8s
volume-nfs-7fdb89d5b5-n5crr   1/1     Running   0          2m8s

kubectl exec volume-nfs-7fdb89d5b5-9z4hs -- cat /usr/share/nginx/html/index.html
volume-nfs

kubectl exec volume-nfs-7fdb89d5b5-n5crr -- cat /usr/share/nginx/html/index.html
volume-nfs
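
Because both pods mount the same NFS export read-write, a write from one pod is immediately visible in the other (pod names are the ones from this run):

kubectl exec volume-nfs-7fdb89d5b5-9z4hs -- sh -c 'echo updated > /usr/share/nginx/html/index.html'
# should now print "updated"
kubectl exec volume-nfs-7fdb89d5b5-n5crr -- cat /usr/share/nginx/html/index.html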

# Clean up
kubectl delete -f volume-nfs.yml
rm -f volume-nfs.yml

PV (PersistentVolume) and PVC (PersistentVolumeClaim)

Understanding PV and PVC

Kubernetes supports a great many volume types, and each one comes with its own interface and parameters, which makes maintenance and management harder.

A PersistentVolume (PV) is a pre-configured piece of storage (of any supported volume type). In other words, shared network storage is exported and defined as a PV.

A PersistentVolumeClaim (PVC) is a request from a user's pod to use a PV. Users don't need to care about the underlying volume implementation details, only about their usage requirements.

The Relationship Between PV and PVC

  • A PV provides storage resources (producer)
  • A PVC consumes storage resources (consumer)
  • A PVC binds to a PV


Implementing an NFS-backed PV and PVC

Create the PV manifest, pv-nfs.yml

apiVersion: v1
kind: PersistentVolume              # type: PersistentVolume (PV)
metadata:
  name: pv-nfs                      # name
spec:
  capacity:
    storage: 1Gi                    # size
  accessModes:
    - ReadWriteMany                 # access mode
  nfs:
    path: /data/nfs                 # NFS export path
    server: 192.168.91.101          # NFS server IP

There are three access modes; see: kubernetes.io/docs/concep…

  • ReadWriteOnce: mounted read-write by a single node
  • ReadOnlyMany: mounted read-only by many nodes
  • ReadWriteMany: mounted read-write by many nodes

NFS volumes support all three modes. We want multiple nginx pods on different nodes to share the same data, so we choose ReadWriteMany.

kubectl apply -f pv-nfs.yml

# RWX is short for ReadWriteMany; Retain is the reclaim policy, meaning the PV must be reclaimed manually when no longer in use, see: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy
kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   1Gi        RWX            Retain           Available                                   9s

Create the PVC manifest: pvc-nfs.yml

apiVersion: v1
kind: PersistentVolumeClaim         # type: PersistentVolumeClaim (PVC)
metadata:
  name: pvc-nfs                     # PVC name
spec:
  accessModes:
    - ReadWriteMany                 # access mode
  resources:
    requests:
      storage: 1Gi                  # requested size; must not exceed the PV's capacity

kubectl apply -f pvc-nfs.yml

# STATUS must be Bound (Bound means the PVC and PV are bound together)
kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs     Bound     pv-nfs   1Gi        RWX                           10s
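
The PV side should now show the binding as well; the output should look roughly like this:

kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-nfs   1Gi        RWX            Retain           Bound    default/pvc-nfs                           1m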

Create a deployment to verify

deploy-nginx-nfs.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs

kubectl apply -f deploy-nginx-nfs.yml

kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
deploy-nginx-nfs-559f8567-j85hj   1/1     Running   0          18s
deploy-nginx-nfs-559f8567-s6psd   1/1     Running   0          18s

# Verify the volume data inside the pods
kubectl exec deploy-nginx-nfs-559f8567-j85hj -- cat /usr/share/nginx/html/index.html
volume-nfs

kubectl exec deploy-nginx-nfs-559f8567-s6psd -- cat /usr/share/nginx/html/index.html
volume-nfs

# Clean up
kubectl delete -f deploy-nginx-nfs.yml
kubectl delete -f pvc-nfs.yml
kubectl delete -f pv-nfs.yml
rm -f deploy-nginx-nfs.yml pvc-nfs.yml pv-nfs.yml

Using subPath

subPath is a way to mount different subdirectories of the same volume at different paths inside a container. The following example demonstrates it.

01_create_pv_nfs.yaml

apiVersion: v1
kind: PersistentVolume              # type: PersistentVolume (PV)
metadata:
  name: pv-nfs                      # name
spec:
  capacity:
    storage: 1Gi                    # size
  accessModes:
    - ReadWriteMany                 # access mode
  nfs:
    path: /data/nfs                 # NFS export path
    server: 192.168.91.101

02_create_pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim         # type: PersistentVolumeClaim (PVC)
metadata:
  name: pvc-nfs                     # PVC name
spec:
  accessModes:
    - ReadWriteMany                 # access mode
  resources:
    requests:
      storage: 1Gi                  # requested size; must not exceed the PV's capacity

03_create_pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: c1
    image: busybox
    command: ["/bin/sleep","100000"]
    volumeMounts:
      - name: data
        mountPath: /opt/data1
        subPath: data1
      - name: data
        mountPath: /opt/data2
        subPath: data2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-nfs

# master01
kubectl apply -f 01_create_pv_nfs.yaml
kubectl apply -f 02_create_pvc.yaml
kubectl apply -f 03_create_pod.yaml

# nfs
# On the NFS server, check whether the pod's subPath directories were created automatically under /data/nfs
ll /data/nfs
total 4
drwxr-xr-x. 2 root root  6 Nov 14 09:14 data1
drwxr-xr-x. 2 root root  6 Nov 14 09:14 data2
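
You can verify the mapping by writing through one of the mounts; the file lands in the matching subdirectory on the NFS server (a quick sketch):

# master01
kubectl exec pod1 -- sh -c 'echo hello > /opt/data1/test.txt'

# nfs node
cat /data/nfs/data1/test.txt
# should print "hello"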

# master01
# Clean up
kubectl delete -f 03_create_pod.yaml
kubectl delete -f 02_create_pvc.yaml
kubectl delete -f 01_create_pv_nfs.yaml
rm -f 01_create_pv_nfs.yaml 02_create_pvc.yaml 03_create_pod.yaml

Dynamic Provisioning of Storage

What Is Dynamic Provisioning

Creating a PV first and then a PVC every time we need storage is exhausting! Instead, we can use dynamic provisioning:

  • With static provisioning, a user's PVC must line up with a pre-created PV's capacity and access modes; dynamic provisioning has no such requirement
  • Administrators no longer need to pre-create large numbers of PVs as storage resources

Starting with version 1.4, Kubernetes introduced a new resource object, StorageClass, which describes storage as classes with distinct characteristics rather than as concrete PVs. A user's PVC requests a class directly; it is then matched to a PV the administrator created in advance, or a PV is created on demand, removing the need to create PVs up front.
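
With dynamic provisioning, the user-facing workflow shrinks to a single PVC that names the class; a minimal sketch (nfs-client matches the StorageClass created below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic
spec:
  storageClassName: nfs-client      # request the class; the provisioner creates a matching PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi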

Dynamic Provisioning with NFS

PV support for a specific storage system is implemented through plugins. Kubernetes currently supports the following plugin types; official reference: kubernetes.io/docs/concep…

The in-tree plugins do not support dynamic provisioning for NFS, but we can use a third-party provisioner instead: github.com/kubernetes-…


Download and Create the StorageClass

# Note: use the raw file URL; the GitHub blob URL returns an HTML page
wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/v4.0.0/deploy/class.yaml
mv class.yaml storageclass-nfs.yml
# Rename the StorageClass
sed -i 's/managed-nfs-storage/nfs-client/' storageclass-nfs.yml

cat storageclass-nfs.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client # class name; PVCs reference this name to use the class
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # dynamic provisioner plugin
parameters:
  archiveOnDelete: "false" # archive data on delete? "false" = do not archive, "true" = archive

kubectl apply -f storageclass-nfs.yml

# RECLAIMPOLICY: the PV reclaim policy; whether the PV is deleted or retained after its pod/PVC is deleted
# VOLUMEBINDINGMODE: Immediate binds the PVC to a PV right away, without waiting for a pod to be scheduled or caring which node it lands on; WaitForFirstConsumer waits for pod scheduling before binding
# ALLOWVOLUMEEXPANSION: whether PVC expansion is allowed
kubectl get storageclass
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  27s
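
Optionally, marking the class as the cluster default lets PVCs omit storageClassName entirely (a sketch using the standard annotation):

kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'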

Download and Create the RBAC Objects

Because the provisioner creates PVs through kube-apiserver, it needs to be authorized.

wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/v4.0.0/deploy/rbac.yaml
mv rbac.yaml storageclass-nfs-rbac.yaml
kubectl apply -f storageclass-nfs-rbac.yaml
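
For reference, rbac.yaml creates a ServiceAccount named nfs-client-provisioner plus the roles and bindings it needs. Abridged, its cluster-wide rules look roughly like this (the downloaded file is authoritative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]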

Create the Provisioner Deployment

We need a dedicated deployment running the provisioner; it watches for PVCs of our class and automatically creates the matching PVs.

deploy-nfs-client-provisioner.yml can be downloaded from github.com/kubernetes-… ; just change the NFS server IP and path to match your environment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.91.101
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.91.101
            path: /data/nfs

kubectl apply -f deploy-nfs-client-provisioner.yml

kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5bffc5866c-thmtl   1/1     Running   0          20s

Test That Dynamic Provisioning Works

nginx-sc.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 1Gi

kubectl apply -f nginx-sc.yaml

kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5bffc5866c-thmtl   1/1     Running   0          2m28s
web-0                                     1/1     Running   0          110s
web-1                                     1/1     Running   0          42s

# NFS node
ls /data/nfs/
default-www-web-0-pvc-77d27423-e1d9-4f13-9e2e-a3f03d9616bc  default-www-web-1-pvc-b3d98d49-3924-4a30-8f75-feca1013a45a
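
Back on master01 you can also confirm the objects the provisioner created; names differ per run, but the output looks roughly like this:

kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-77d27423-e1d9-4f13-9e2e-a3f03d9616bc   1Gi        RWO            nfs-client     2m
www-web-1   Bound    pvc-b3d98d49-3924-4a30-8f75-feca1013a45a   1Gi        RWO            nfs-client     60s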

# master01
# Clean up
kubectl delete -f nginx-sc.yaml
kubectl delete -f deploy-nfs-client-provisioner.yml
kubectl delete -f storageclass-nfs-rbac.yaml
kubectl delete -f storageclass-nfs.yml
rm -f nginx-sc.yaml deploy-nfs-client-provisioner.yml storageclass-nfs-rbac.yaml storageclass-nfs.yml