Pitfalls Encountered Configuring Kubernetes Dynamic Storage Volumes


Writing this down while it's still fresh, for my future self.

0 - Preface

While configuring Kubernetes dynamic storage volumes (StorageClass + PVC), I found that the PVC stayed stuck in the Pending state.

`kubectl describe pvc` shows the error: waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator

The provisioner pod's log shows the error: unexpected error getting claim reference: selfLink was empty, can't make reference

After some digging, it turned out that starting with Kubernetes v1.20, selfLink is disabled by default, and the kube-apiserver configuration must be changed to re-enable it. Below are the steps to reproduce the problem and the fix.

1 - Configuration Steps

  1. Create the RBAC authorization

wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
kubectl apply -f rbac.yaml
  2. Create the StorageClass

storageclass.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: 'false'
reclaimPolicy: Delete
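Before wiring the class into a StatefulSet, it can be sanity-checked with a standalone PVC that references it directly. This is a sketch; the claim name test-claim is just an example, not part of the setup above:

```yaml
# Hypothetical test claim to verify the StorageClass provisions correctly.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage   # must match the StorageClass above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```

If the provisioner is healthy, this claim should go to Bound shortly after `kubectl apply`; it can be deleted afterwards.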
  3. Create the provisioner

This provisioner automatically creates PVs; each one is backed by a directory named `${namespace}-${pvcName}-${pvName}` under /mnt/storage on the NFS server.

nfs-provisioner-deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: 'quay.io/external_storage/nfs-client-provisioner:latest'
          volumeMounts:
            - name: nfs-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.1.36.89
            - name: NFS_PATH
              value: /mnt/storage
      volumes:
        - name: nfs-root
          nfs:
            server: 10.1.36.89
            path: /mnt/storage

  4. Create a stateful application

nfs-statefulset.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

2 - The Problem

After starting everything, the PVCs that were created remained in the Pending state.

Troubleshooting steps:

  1. `kubectl describe pvc` shows the error: waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
  2. `kubectl logs` on the provisioner pod shows the error: unexpected error getting claim reference: selfLink was empty, can't make reference

These messages led to the relevant clue:

Starting with v1.20, Kubernetes disables selfLinks by default.

3 - The Fix

The fix is to re-enable selfLinks. Here are three approaches, depending on how the cluster was deployed:

  1. YAML deployment - edit kube-apiserver.yaml

The file usually lives under /etc/kubernetes; you can search for it globally with find / -name kube-apiserver.yaml. Add the following content, then restart the kube-apiserver.
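A sketch of the change, assuming a standard kubeadm-style static-pod manifest; only the feature-gates line is added, via the well-known RemoveSelfLink feature gate:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        # ...existing flags unchanged...
        - --feature-gates=RemoveSelfLink=false   # re-enable selfLink
```

With a static-pod manifest, saving the file is enough: the kubelet restarts the kube-apiserver automatically when the manifest changes.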

  2. Binary deployment - edit kube-apiserver.service

If there is no kube-apiserver.yaml file, for example because the cluster was deployed from binaries, search globally for kube-apiserver.service instead (usually /etc/systemd/system/kube-apiserver.service) and add the same content. After editing, remember to run systemctl daemon-reload, then restart the kube-apiserver service.
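A sketch of the systemd change, assuming a typical unit file; the binary path below is an example and the only addition is the feature-gates flag:

```ini
# /etc/systemd/system/kube-apiserver.service (excerpt)
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --feature-gates=RemoveSelfLink=false
  # ...all existing flags stay as they were...
```

Then reload and restart: systemctl daemon-reload && systemctl restart kube-apiserver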

  3. Rancher deployment - edit cluster.yml

If the cluster was deployed with Rancher, edit its cluster.yml as follows, then apply the updated configuration with: ./rke up
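For RKE, the feature gate goes under the kube-api service's extra_args; a sketch, with all other cluster.yml settings left unchanged:

```yaml
# cluster.yml (excerpt)
services:
  kube-api:
    extra_args:
      feature-gates: RemoveSelfLink=false
```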

Problem solved.

4 - Deploying on Alibaba Cloud

The above covers dynamic storage configuration on a locally deployed Kubernetes cluster. On Alibaba Cloud, refer to:

help.aliyun.com/document_de…