Creating block, object, and file storage with Rook



This article walks through the main steps of using Rook to create block storage, file storage, and object storage in Kubernetes.
Ceph provides the underlying storage. CephFS supports all three Kubernetes PV access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany), while RBD supports only ReadWriteOnce and ReadOnlyMany.

Creating a block device

[root@node1 rbd]# pwd
/root/rook/deploy/examples/csi/rbd
[root@node1 rbd]# kubectl apply -f storageclass.yaml 
cephblockpool.ceph.rook.io/replicapool unchanged
storageclass.storage.k8s.io/rook-ceph-block unchanged
## Make rook-ceph-block the default StorageClass
[root@node1 rbd]# kubectl annotate sc/rook-ceph-block storageclass.kubernetes.io/is-default-class=true
storageclass.storage.k8s.io/rook-ceph-block annotated
[root@node1 rbd]# kubectl get sc 
NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   5d

Log in to the Ceph dashboard to see the pool that was created (screenshot omitted). For how to consume RBD-backed storage, see my earlier article: juejin.cn/post/716840…
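Because rook-ceph-block is now the default StorageClass, a PVC that omits storageClassName will be provisioned from the RBD pool automatically. A minimal sketch (the PVC name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default class rook-ceph-block is used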

Creating object storage

Reference: blog.csdn.net/Man_In_The_…
Object storage exposes an S3 API to the storage cluster so that applications can put and get data. Rook can deploy an object store inside Kubernetes or connect to an external RGW service. Most commonly, the object store is provisioned locally by Rook; alternatively, you can connect to an existing Ceph cluster that already runs a RADOS Gateway.

  1. Start an RGW service in the local cluster
    [root@node3 examples]# kubectl create -f object.yaml 
    cephobjectstore.ceph.rook.io/my-s3 created
    
    Check the RGW pod that was created:
    [root@node3 examples]#  kubectl -n rook-ceph get pod -l app=rook-ceph-rgw
    NAME                                     READY   STATUS    RESTARTS   AGE
    rook-ceph-rgw-my-s3-a-5444f67b84-wfcnl   1/2     Running   0          17s
    [root@node3 examples]# kubectl -n rook-ceph get pod -l app=rook-ceph-rgw
    NAME                                     READY   STATUS    RESTARTS   AGE
    rook-ceph-rgw-my-s3-a-5444f67b84-wfcnl   2/2     Running   0          4h50m
    
  2. Create a bucket
    Now that the object store is configured, we need to create a bucket where clients can read and write objects (in practice this feels much like a StorageClass). Buckets are created by defining a storage class, similar to the pattern used by block and file storage.
    [root@node3 examples]# kubectl apply -f storageclass-bucket-delete.yaml 
    storageclass.storage.k8s.io/rook-ceph-delete-bucket created
    
    Based on this storage class, an object client can now request a bucket by creating an ObjectBucketClaim (OBC), roughly the object-store counterpart of a PVC.
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: knowdee-bucket-test33344
      namespace: rook-ceph ## the namespace matters: the generated Secret and ConfigMap live here
    spec:
      # To create a new bucket specify either `bucketName` or
      # `generateBucketName` here. Both cannot be used. To access
      # an existing bucket the bucket name needs to be defined in
      # the StorageClass referenced here, and both `bucketName` and
      # `generateBucketName` must be omitted in the OBC.
      bucketName: test333
      #generateBucketName: ceph-bkt
      storageClassName: rook-ceph-s3
      additionalConfig:
        # To set for quota for OBC
        #maxObjects: "1000"
        #maxSize: "2G"
    
    
    [root@node3 examples]# kubectl apply -f object-bucket-claim-delete.yaml
    objectbucketclaim.objectbucket.io/ceph-delete-bucket created
    
    
    
    Now that the claim has been created, the operator will create the bucket and generate the other artifacts needed to access it. A Secret and a ConfigMap are created with the same name as the OBC and in the same namespace (which is why the OBC's namespace matters). The Secret holds the credentials the application pod uses to access the bucket; the ConfigMap holds the bucket endpoint information, which the pod also uses. A pod that consumes both via environment variables is sketched after these steps.
  3. Access the Rook object store from outside the cluster
    Rook sets up the object store so that pods inside the cluster can access it. If your application runs outside the cluster, you need to expose the store through a NodePort.
    Create the external service:
    [root@node3 examples]# kubectl apply -f rgw-external.yaml 
    service/rook-ceph-rgw-my-store-external created
    [root@node3 examples]# kubectl -n rook-ceph get service rook-ceph-rgw-my-s3 rook-ceph-rgw-my-store-external
    NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    rook-ceph-rgw-my-s3               ClusterIP   10.104.188.76   <none>        80/TCP         31m
    rook-ceph-rgw-my-store-external   NodePort    10.100.51.141   <none>        80:31936/TCP   30s
    
    Internally, the RGW service runs on port 80. In this case the external port is 31936. You can now access the CephObjectStore from anywhere: all you need is the hostname of any machine in the cluster, the external port, and user credentials.
  4. Create a user
    If you need a separate set of user credentials to access the S3 endpoint, create a CephObjectStoreUser. The user will be used to connect to the RGW service in the cluster via the S3 API, and is independent of any ObjectBucketClaims created in the earlier steps.
     [root@node3 examples]# kubectl apply -f object-user.yaml 
    cephobjectstoreuser.ceph.rook.io/my-user created
    ## The Rook operator automatically creates the corresponding secret
    [root@sc-master-1 examples]# kubectl -n rook-ceph get secret
    NAME                                               TYPE                 DATA   AGE
    rook-ceph-object-user-knowdee-s3-knowdee-s3-user   kubernetes.io/rook   3      22m
    [root@sc-master-1 examples]# kubectl -n rook-ceph get secret rook-ceph-object-user-knowdee-s3-knowdee-s3-user -o yaml | grep AccessKey | awk '{print $2}' | base64 --decode
    U4JBK1XIK7GNU2P0ZBXW
    [root@sc-master-1 examples]# kubectl -n rook-ceph get secret rook-ceph-object-user-knowdee-s3-knowdee-s3-user -o yaml | grep SecretKey | awk '{print $2}' | base64 --decode
    aAJk8XXduYZ9rFs34obDGGB2GOEIAAobyIjj0ed7
    
  5. Configure the s3cmd client
    Install s3cmd:
    [root@node3 examples]# yum -y install s3cmd
    ## Get the service address (not actually needed from outside the cluster)
    [root@node3 examples]# kubectl -n default get cm ceph-delete-bucket -o jsonpath='{.data.BUCKET_HOST}'
    rook-ceph-rgw-my-s3.rook-ceph.svc
    ## Get the access key
    [root@node3 examples]# kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
    M0U2U9HA78UQURBGVRT3
    ## Get the secret key
    [root@node3 examples]# kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
    A6QBY2lprWS6Js8rwAdkFwDnyvMPqT9bVHpfgdTB
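As noted in step 2, the operator creates a Secret and a ConfigMap with the same name as the OBC. A pod can consume them directly as environment variables instead of extracting the keys by hand. A minimal sketch, assuming the ceph-delete-bucket OBC in the default namespace created above (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: obc-demo
  namespace: default
spec:
  containers:
  - name: app
    image: nginx:latest
    envFrom:
    - configMapRef:
        name: ceph-delete-bucket   # provides BUCKET_HOST, BUCKET_NAME, BUCKET_PORT
    - secretRef:
        name: ceph-delete-bucket   # provides AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY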
    

Next, configure s3cmd.
Note:
If the AccessKey and SecretKey in the .s3cfg file belong to a user (CephObjectStoreUser), you can create buckets as well as upload and download objects.
If the AccessKey and SecretKey in the .s3cfg file belong to a bucket (an OBC), you cannot create buckets, only upload and download objects in that bucket. (Configuration screenshot omitted.) A minimal .s3cfg sketch follows.
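This sketch assumes the object store is reached through the NodePort service created earlier; the host and port are placeholders for a node IP and the actual NodePort, and the keys come from the user or bucket secret shown above:

[default]
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>
host_base = <node-ip>:31936
host_bucket = <node-ip>:31936
use_https = False

Then use s3cmd to upload, download, and list files: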

[root@node3 ~]# s3cmd ls s3://
[root@node3 ~]# s3cmd mb s3://test1
Bucket 's3://test1/' created
[root@node3 ~]# s3cmd ls s3://
2022-11-22 03:10  s3://test1
[root@node3 ~]# du -sh *
2.4M    rook-1.10.5.tar.gz
[root@node3 ~]# s3cmd put rook-1.10.5.tar.gz s3://test1/rook-1.10.5.tar.gz
upload: 'rook-1.10.5.tar.gz' -> 's3://test1/rook-1.10.5.tar.gz'  [1 of 1]
 2424786 of 2424786   100% in    0s     4.07 MB/s  done
[root@node3 ~]# s3cmd ls s3://test1
2022-11-22 03:34      2424786  s3://test1/rook-1.10.5.tar.gz
[root@node3 ~]# md5sum rook-1.10.5.tar.gz 
b384c2d39d4f8e460623530bc198ecff  rook-1.10.5.tar.gz
[root@node3 ~]# rm -rf rook-1.10.5.tar.gz 
[root@node3 ~]# s3cmd get s3://test1/rook-1.10.5.tar.gz
download: 's3://test1/rook-1.10.5.tar.gz' -> './rook-1.10.5.tar.gz'  [1 of 1]
 2424786 of 2424786   100% in    0s    11.55 MB/s  done
[root@node3 ~]# md5sum rook-1.10.5.tar.gz 
b384c2d39d4f8e460623530bc198ecff  rook-1.10.5.tar.gz

The bucket and the uploaded object can also be seen on the dashboard (screenshot omitted).

Exposing the RGW service over HTTPS with a domain name

Create the TLS secret:

[root@sc-master-1 ~]# kubectl create secret tls s3.knowdee.com --cert=/root/xxxx.com_bundle.crt  --key=/root/xxxx.com.key  -n rook-ceph
secret/s3.knowdee.com created
[root@sc-master-1 kubeVela]# kubectl -n rook-ceph get svc
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr                       ClusterIP   10.101.254.146   <none>        9283/TCP            46h
rook-ceph-mgr-dashboard             ClusterIP   10.107.50.231    <none>        8443/TCP            46h
rook-ceph-mon-a                     ClusterIP   10.102.199.219   <none>        6789/TCP,3300/TCP   47h
rook-ceph-mon-b                     ClusterIP   10.104.70.180    <none>        6789/TCP,3300/TCP   46h
rook-ceph-mon-c                     ClusterIP   10.111.173.53    <none>        6789/TCP,3300/TCP   46h
rook-ceph-rgw-knowdee-s3            ClusterIP   10.106.129.47    <none>        9080/TCP            142m
rook-ceph-rgw-knowdee-s3-external   NodePort    10.104.241.200   <none>        80:47409/TCP        39m

Create the Ingress:

#The Ingress should route to the rook-ceph-rgw-knowdee-s3 service
[root@sc-master-1 ~]# cat rgw-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations: 
    nginx.ingress.kubernetes.io/proxy-body-size: 102400m # without this, large uploads may fail with 413 Request Entity Too Large
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '300'
  labels:
    overlay-label: overlay-app
  name: s3-ingress
  namespace: rook-ceph
spec:
  ingressClassName: nginx
  rules:
  - host: s3.knowdee.com
    http:
      paths:
      - backend:
          service:
            name: rook-ceph-rgw-knowdee-s3
            port:
              number: 9080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - s3.knowdee.com
    secretName: s3.knowdee.com
[root@sc-master-1 ~]# kubectl apply -f rgw-ingress.yaml 
ingress.networking.k8s.io/s3-ingress created    

Regenerate the s3cmd configuration so that it points at the new HTTPS endpoint (screenshot omitted).
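The relevant .s3cfg changes, assuming the domain configured above (the remaining fields stay as before):

host_base = s3.knowdee.com
host_bucket = s3.knowdee.com
use_https = True

Then test: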

root@oldsix:~# s3cmd ls s3://
2023-08-16 09:13  s3://caoyong
root@oldsix:~# s3cmd ls s3://caoyong/
2023-08-16 09:25    656048640  s3://caoyong/xx.tar.gz
root@oldsix:~# s3cmd get s3://caoyong/xx.tar.gz
download: 's3://caoyong/xx.tar.gz' -> './xx.tar.gz'  [1 of 1]
 656048640 of 656048640   100% in   53s    11.77 MB/s  done
root@oldsix:~# s3cmd mb s3://xxx
Bucket 's3://xxx/' created
root@oldsix:~# s3cmd ls s3://
2023-08-16 09:13  s3://caoyong
2023-08-16 10:31  s3://xxx
root@oldsix:~# 

Creating a file system

A shared file system can be mounted with read/write permission from multiple pods. This is useful for applications that share data through a common file system.
  1. Create the file system
    A CephFilesystem is created by specifying the desired settings for the metadata pool, data pools, and metadata servers in the CRD. In this example we create a metadata pool and a single data pool, each with three replicas (a trimmed sketch of the spec follows the commands below). The Rook operator will create all the pools and other resources needed to start the service; this may take a minute to complete.
[root@node3 examples]# kubectl apply -f filesystem.yaml 
cephfilesystem.ceph.rook.io/myfs created
[root@node3 examples]# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-7dd8dc9dcc-cj4xr   2/2     Running   0          4m59s
rook-ceph-mds-myfs-b-86b96fd8db-dsg5f   2/2     Running   0          4m57s
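For reference, a trimmed sketch of what the filesystem.yaml applied above typically contains (abridged from the upstream Rook example; check the file shipped with your Rook version for the full spec):

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true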

To see the detailed status of the file system, start and connect to the Rook toolbox:

[root@node1 ~]# kubectl exec -it pod/rook-ceph-tools-5679b7d8f-jzbkr -n rook-ceph -- /bin/bash
bash-4.4$ ceph status
  cluster:
    id:     0055fdfc-9741-40e5-b108-87e02574e98b
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 29h)
    mgr: a(active, since 29h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 12 osds: 12 up (since 27h), 12 in (since 27h)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   12 pools, 337 pgs
    objects: 487 objects, 17 MiB
    usage:   531 MiB used, 5.9 TiB / 5.9 TiB avail
    pgs:     337 active+clean
 
  io:
    client:   2.2 KiB/s rd, 255 B/s wr, 3 op/s rd, 0 op/s wr

The services section will now show an mds line. In this example, one MDS instance is active and another is in standby-replay mode, ready for failover.
2. Provision storage
Before Rook starts provisioning storage, a StorageClass based on the file system is needed. Kubernetes needs it to interoperate with the CSI driver when creating persistent volumes.
If you deployed the Rook operator in a namespace other than "rook-ceph", change the prefix of the provisioner to match the namespace you used. For example, if the operator runs in "rook-op", the provisioner value should be "rook-op.cephfs.csi.ceph.com".

Create the storage class:

[root@node3 cephfs]# pwd
/root/rook-1.10.5/deploy/examples/csi/cephfs
[root@node3 cephfs]# kubectl apply -f storageclass.yaml 
storageclass.storage.k8s.io/rook-cephfs created
[root@node3 cephfs]# kubectl get sc
NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   5d2h
rook-ceph-delete-bucket     rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  5h59m
rook-cephfs                 rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   41s
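For reference, the rook-cephfs StorageClass applied above roughly looks like this (trimmed; the pool and secret names follow the upstream Rook example and can differ between versions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true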
  3. Consume the storage
    Contents of pvc.yaml:
[root@node1 cephfs]# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs

Create the PVC:

[root@node1 cephfs]# kubectl apply -f pvc.yaml
persistentvolumeclaim/cephfs-pvc created
[root@node1 cephfs]# vim pvc.yaml
[root@node1 cephfs]# kubectl get pvc 
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
cephfs-pvc       Bound    pvc-b7f55803-3788-4e69-9f7f-73b1a8d6449c   1Gi        RWX            rook-cephfs       17s
rbd-my-nginx-0   Bound    pvc-bff940d2-1b78-4224-8b62-7b3b05e1951c   1Gi        RWO            rook-ceph-block   23h
rbd-my-nginx-1   Bound    pvc-749ab45c-c3a7-4ce1-8d06-2d7b976c97a0   1Gi        RWO            rook-ceph-block   23h
rbd-my-nginx-2   Bound    pvc-b0d9f1ec-256a-465f-a34f-47f3f03c4051   1Gi        RWO            rook-ceph-block   23h
rbd-pvc          Bound    pvc-5329ab89-8e7c-40de-883f-933344eb4a5e   1Gi        RWO            rook-ceph-block   5d2h

Contents of the pod manifest (nginx.yaml, a Deployment plus a Service):

[root@node1 ~]# cat nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd
          mountPath: /usr/share/rbd
      volumes:
      - name: rbd
        persistentVolumeClaim:              # reference the PVC
          claimName: cephfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32501

Create the pods:

[root@node1 ~]# kubectl apply -f nginx.yaml 
deployment.apps/my-nginx created
service/nginx-service unchanged
[root@node1 ~]# kubectl get pods 
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-5dbb7847b7-4d8vd   1/1     Running   0          11s
my-nginx-5dbb7847b7-dvssf   1/1     Running   0          11s
my-nginx-5dbb7847b7-xg76j   1/1     Running   0          11s

In one of the pods sharing the volume, write a file into the mounted directory:

[root@node1 ~]# kubectl exec -it pod/my-nginx-5dbb7847b7-4d8vd -- /bin/bash
root@my-nginx-5dbb7847b7-4d8vd:/# cd /usr/share/rbd
root@my-nginx-5dbb7847b7-4d8vd:/usr/share/rbd# ls
root@my-nginx-5dbb7847b7-4d8vd:/usr/share/rbd# echo "knowdee_cephfs" > mark.txt
root@my-nginx-5dbb7847b7-4d8vd:/usr/share/rbd# cat mark.txt 
knowdee_cephfs
root@my-nginx-5dbb7847b7-4d8vd:/usr/share/rbd# exit

In another pod sharing the same volume, the file written by the first pod is visible:

[root@node1 ~]# kubectl exec -it pod/my-nginx-5dbb7847b7-xg76j -- /bin/bash
root@my-nginx-5dbb7847b7-xg76j:/# cd /usr/share/rbd
root@my-nginx-5dbb7847b7-xg76j:/usr/share/rbd# ls
mark.txt
root@my-nginx-5dbb7847b7-xg76j:/usr/share/rbd# cat mark.txt 
knowdee_cephfs

Creating a PV (static provisioning)

References: www.bbsmax.com/A/pRdB64X6d…
kubernetes.io/zh/docs/con…
Kubernetes supports two volume modes (volumeMode) for PVs: Filesystem and Block.
volumeMode is an optional API parameter; if it is omitted, the default mode is Filesystem.
A volume with volumeMode: Filesystem is mounted into a directory in the pod. A volume with volumeMode: Block is attached to the pod as a raw block device, so the pod must reference it with

volumeDevices:
- name: rbdxx
  devicePath: /dev/rbdxx

instead of

volumeMounts:
- name: rbdxx
  mountPath: /usr/share/rbd

If the volume's backing storage is a block device and the device is empty, Kubernetes creates a file system on the device before mounting it for the first time.

volumeMode set to Filesystem (the default)

  1. First, create an RBD image on Ceph (from the toolbox):
bash-4.4$ rbd create caoyong-fs-rbd-image  --size 4Gi --pool rbdpool

For the Ceph commands, see the reference screenshot (omitted).
  2. Create the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: caoyong-ceph-pv-200
spec:
  accessModes:
    #- ReadWriteOnce
    - ReadOnlyMany
  capacity:
    storage: 4Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: csi-rbd-secret
      # node stage secret namespace where above secret is created
      namespace: default
    volumeAttributes:
      # Required options from storageclass parameters need to be added in volumeAttributes
      clusterID: "6779d759-5604-425d-bc98-c95a5f54e31d"
      pool: "rbdpool"
      staticVolume: "true"
      imageFeatures: "layering"
      #mounter: rbd-nbd
    # volumeHandle should be same as rbd image name
    volumeHandle: caoyong-fs-rbd-image 
    #volumeHandle: static-image
  persistentVolumeReclaimPolicy: Retain
  # The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`
  volumeMode: Filesystem
#  volumeMode: Block

The fields largely mirror the storage class parameters; the main difference is that volumeHandle must be set to the image created on Ceph in step 1 (caoyong-fs-rbd-image here).
Then apply the manifest, as shown below.
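Applying and checking the PV works like any other manifest (the file name is illustrative):

kubectl apply -f caoyong-ceph-pv-200.yaml
kubectl get pv caoyong-ceph-pv-200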

  3. Create the PVC (volumeMode: Filesystem, the default)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: caoyong-ceph-claim-200
spec:
  accessModes:     
    #- ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 3Gi 
  # The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`
  #volumeMode: Block
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: caoyong-ceph-pv-200 #fs-static-pv
  #volumeName: fs-static-pv

You can then see the binding information:

[root@server14 filestystem]# kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
caoyong-ceph-claim-200   Bound    caoyong-ceph-pv-200                        4Gi        ROX                            43m
  4. Use the PVC (ROX)
[root@server14 filestystem]# cat nginx.yaml 
apiVersion: apps/v1
#kind: StatefulSet
kind: Deployment
metadata:
  name: test-rbd-fs-pv
spec:
  selector:
    matchLabels:
      app: test-rbd-fs-pv
#  serviceName: ngx-service
  replicas: 2
  template:
    metadata:
      labels:
        app: test-rbd-fs-pv
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbdxx
          mountPath: /usr/share/rbd
      volumes:
      - name: rbdxx
        persistentVolumeClaim:              # reference the PVC
          claimName: caoyong-ceph-claim-200 #caoyong-ceph-claim-2 #rbd-pvc #cephfs-pvc
          #readOnly: true

Then you can see:

[root@server14 filestystem]# kubectl get pods
NAME                                            READY   STATUS      RESTARTS        AGE
test-rbd-fs-pv-6f88b6665d-bx88v                 1/1     Running     0               46m
test-rbd-fs-pv-6f88b6665d-d8zjk                 1/1     Running     0               42m
[root@server14 filestystem]# kubectl edit pod/test-rbd-fs-pv-6f88b6665d-d8zjk -- /bin/bash
error: arguments in resource/name form may not have more than one slash
[root@server14 filestystem]# kubectl exec -it pod/test-rbd-fs-pv-6f88b6665d-d8zjk -- /bin/bash
root@test-rbd-fs-pv-6f88b6665d-d8zjk:/# cd /usr/share/rbd/
root@test-rbd-fs-pv-6f88b6665d-d8zjk:/usr/share/rbd# ls
lost+found  m.txt  mark.txt
root@test-rbd-fs-pv-6f88b6665d-d8zjk:/usr/share/rbd# cat mark.txt 
root@test-rbd-fs-pv-6f88b6665d-nlm9m:/usr/share/rbd#.................
root@test-rbd-fs-pv-6f88b6665d-d8zjk:/usr/share/rbd# cat m.txt 
read only many

As shown above, both nginx pods mounted the PVC correctly. The files under the mount directory were written earlier through a different PV and PVC whose access mode was:

accessModes:
    - ReadWriteOnce

In other words, to use a ReadOnlyMany PVC: first create PV1/PVC1 with accessModes ReadWriteOnce and write the content into the Ceph image (caoyong-fs-rbd-image in this article); then create another PV2/PVC2 with accessModes ReadOnlyMany, and have the multi-replica pods use PVC2 to achieve ROX.

Dynamically resizing an RBD image (reference screenshot omitted):
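A minimal sketch of growing a statically provisioned image by hand, using the pool and image names from this article (for a Filesystem-mode volume the file system inside must also be grown; the mapped device name is illustrative):

# from the Rook toolbox; --size is in MiB by default
rbd resize --size 8192 rbdpool/caoyong-fs-rbd-image
# on the node or pod where the image is mapped and formatted as ext4
resize2fs /dev/rbd1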

With the release of Kubernetes 1.24, volume populators and data sources were added, which enable new practices for read-only PVCs. I have not investigated them yet, so stay tuned. See the reference on volume populators.

volumeMode set to Block

  1. Create the image on Ceph
bash-4.4$ rbd create caoyong-fs-rbd-image  --size 4Gi --pool rbdpool


  2. Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: caoyong-ceph-pv-100
spec:
  accessModes:
    - ReadWriteOnce
#    - ReadOnlyMany
  capacity:
    storage: 2Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      # node stage secret name
      name: csi-rbd-secret
      # node stage secret namespace where above secret is created
      namespace: default
    volumeAttributes:
      # Required options from storageclass parameters need to be added in volumeAttributes
      clusterID: "6779d759-5604-425d-bc98-c95a5f54e31d"
      pool: "rbdpool"
      staticVolume: "true"
      imageFeatures: "layering"
      #mounter: rbd-nbd
    # volumeHandle should be same as rbd image name
    volumeHandle: caoyong-image 
    #volumeHandle: static-image
  persistentVolumeReclaimPolicy: Retain
  # The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`
#  volumeMode: Filesystem
  volumeMode: Block
  3. Create the PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: caoyong-ceph-claim-100
spec:
  accessModes:     
    - ReadWriteOnce
    #- ReadOnlyMany
  resources:
    requests:
      storage: 2Gi 
  # The volumeMode can be either `Filesystem` or `Block` if you are creating Filesystem PVC it should be `Filesystem`, if you are creating Block PV you need to change it to `Block`
  volumeMode: Block
  #volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: caoyong-ceph-pv-100 #fs-static-pv
  #volumeName: fs-static-pv
  4. Create a pod and mount the PVC
[root@server14 block_device]# cat nginx.yaml 
apiVersion: apps/v1
#kind: StatefulSet
kind: Deployment
metadata:
  name: my-nginx-test-rbd-repeat
spec:
  selector:
    matchLabels:
      app: nginx
#  serviceName: ngx-service
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        securityContext:
          privileged: true
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeDevices:
        - name: rbdxx
          devicePath: /dev/rbdxx
      volumes:
      - name: rbdxx
        persistentVolumeClaim:              # reference the PVC
          claimName: caoyong-ceph-claim-100 #caoyong-ceph-claim-2 #rbd-pvc #cephfs-pvc
          #readOnly: true
---

Two things to note:

  1. The container needs privileged access:
            securityContext:
              privileged: true
    
    Otherwise mounting fails with: mount: permission denied
    (reference links omitted)
  2. The volume must be attached as a device:
        volumeDevices:
        - name: rbdxx
          devicePath: /dev/rbdxx

Otherwise you get an error like this:

Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    15s   default-scheduler  Successfully assigned default/my-nginx-test-rbd-repeat-5d6b485b49-pctbf to server19
  Warning  FailedMount  15s   kubelet            Unable to attach or mount volumes: unmounted volumes=[kube-api-access-mqg44 rbdxx], unattached volumes=[kube-api-access-mqg44 rbdxx]: volume rbdxx has volumeMode Block, but is specified in volumeMounts
  5. Mount the block device manually
[root@server14 block_device]# kubectl exec -it pod/my-nginx-test-rbd-repeat-96966544c-9lj7v -- /bin/bash
## The device file's type starts with 'b', i.e. a block device
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# ls -al /dev/rbdxx 
brw-rw----. 1 root disk 252, 16 Dec  9 10:03 /dev/rbdxx
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1    7:1    0    2G  0 loop 
sda      8:0    0 25.5T  0 disk 
|-sda1   8:1    0    2M  0 part 
|-sda2   8:2    0   10G  0 part 
`-sda3   8:3    0 25.5T  0 part 
rbd0   252:0    0    1G  0 disk 
rbd1   252:16   0    2G  0 disk 
rbd2   252:32   0    4G  1 disk 
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# mkdir /root/caoyong
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# mount /dev/rbdxx /root/caoyong
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# df -lh
Filesystem               Size  Used Avail Use% Mounted on
overlay                   24T   43G   24T   1% /
tmpfs                     64M     0   64M   0% /dev
tmpfs                     95G     0   95G   0% /sys/fs/cgroup
/dev/mapper/centos-root   24T   43G   24T   1% /etc/hosts
shm                       64M     0   64M   0% /dev/shm
tmpfs                    189G   12K  189G   1% /run/secrets/kubernetes.io/serviceaccount
/dev/rbdxx               2.0G   24K  1.8G   1% /root/caoyong
root@my-nginx-test-rbd-repeat-96966544c-9lj7v:/# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1    7:1    0    2G  0 loop 
sda      8:0    0 25.5T  0 disk 
|-sda1   8:1    0    2M  0 part 
|-sda2   8:2    0   10G  0 part 
`-sda3   8:3    0 25.5T  0 part 
rbd0   252:0    0    1G  0 disk 
rbd1   252:16   0    2G  0 disk /root/caoyong
rbd2   252:32   0    4G  1 disk 

As you can see, the block device /dev/rbdxx has been mounted inside the pod's container.
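Note that Kubernetes does not create a file system on a Block-mode volume for you. If the backing image had never been formatted, the mount above would fail until a file system is created on the raw device first, for example (assuming ext4):

mkfs.ext4 /dev/rbdxx
mount /dev/rbdxx /root/caoyong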

Fixing the "driver name rook-ceph.cephfs.csi.ceph.com not found in the list of registered CSI driver" problem

[root@sc-neutral-1 ~]# cd /var/lib/kubelet/plugins_registry/
[root@sc-neutral-1 plugins_registry]# ls
rook-ceph.cephfs.csi.ceph.com-reg.sock  rook-ceph.rbd.csi.ceph.com-reg.sock

These two sockets belong to the RBD and CephFS CSI plugins, the clients that request storage from the Ceph monitors; they are registered under /var/lib/kubelet/plugins_registry/. The corresponding configuration is in operator.yaml:

  CSI_PLUGIN_TOLERATIONS: |
    - effect: NoExecute
      key: taint.knowdee.com/neutral
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/apps
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/midWare
      operator: Exists
  CSI_RBD_PLUGIN_TOLERATIONS: |
    - effect: NoExecute
      key: taint.knowdee.com/neutral
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/apps
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/midWare
      operator: Exists    
  CSI_CEPHFS_PLUGIN_TOLERATIONS: |
    - effect: NoExecute
      key: taint.knowdee.com/neutral
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/apps
      operator: Exists
    - effect: NoExecute
      key: taint.knowdee.com/midWare
      operator: Exists      

These toleration settings determine how the csi-plugin pods are distributed:

csi-cephfsplugin-4ldrg                                     2/2     Running     0          25m     172.70.21.19     sc-neutral-1      <none>           <none>
csi-cephfsplugin-fmm2x                                     2/2     Running     0          25m     172.70.10.24     sc-node-mysql-3   <none>           <none>
csi-cephfsplugin-gtc5p                                     2/2     Running     0          25m     172.70.10.23     sc-node-mysql-2   <none>           <none>
csi-cephfsplugin-kscv6                                     2/2     Running     0          25m     172.70.21.15     sc-node-app-2     <none>           <none>
csi-cephfsplugin-mlbpm                                     2/2     Running     0          25m     172.70.21.18     sc-node-app-5     <none>           <none>
csi-cephfsplugin-provisioner-6777bdb89d-bq5w9              5/5     Running     0          16m     10.244.49.49     sc-neutral-2      <none>           <none>
csi-cephfsplugin-provisioner-6777bdb89d-q69jh              5/5     Running     0          16m     10.244.252.104   sc-neutral-1      <none>           <none>
csi-cephfsplugin-pvp2p                                     2/2     Running     0          25m     172.70.10.22     sc-node-mysql-1   <none>           <none>
csi-cephfsplugin-pxv9k                                     2/2     Running     0          25m     172.70.21.17     sc-node-app-4     <none>           <none>
csi-cephfsplugin-rvxfq                                     2/2     Running     0          25m     172.70.21.16     sc-node-app-3     <none>           <none>
csi-cephfsplugin-sts6k                                     2/2     Running     0          25m     172.70.21.20     sc-neutral-2      <none>           <none>
csi-cephfsplugin-x9mzr                                     2/2     Running     0          25m     172.70.21.14     sc-node-app-1     <none>           <none>
csi-rbdplugin-24hqj                                        2/2     Running     0          25m     172.70.21.20     sc-neutral-2      <none>           <none>
csi-rbdplugin-48db7                                        2/2     Running     0          25m     172.70.21.17     sc-node-app-4     <none>           <none>
csi-rbdplugin-5d7x6                                        2/2     Running     0          25m     172.70.21.15     sc-node-app-2     <none>           <none>
csi-rbdplugin-7r8hc                                        2/2     Running     0          25m     172.70.21.19     sc-neutral-1      <none>           <none>
csi-rbdplugin-8hkcd                                        2/2     Running     0          25m     172.70.21.14     sc-node-app-1     <none>           <none>
csi-rbdplugin-bn78b                                        2/2     Running     0          25m     172.70.10.22     sc-node-mysql-1   <none>           <none>
csi-rbdplugin-m48tq                                        2/2     Running     0          25m     172.70.21.16     sc-node-app-3     <none>           <none>
csi-rbdplugin-mqg72                                        2/2     Running     0          25m     172.70.21.18     sc-node-app-5     <none>           <none>
csi-rbdplugin-provisioner-fdbdf4548-bbwkx                  5/5     Running     0          16m     10.244.252.103   sc-neutral-1      <none>           <none>
csi-rbdplugin-provisioner-fdbdf4548-pq5nz                  5/5     Running     0          16m     10.244.49.48     sc-neutral-2      <none>           <none>
csi-rbdplugin-qlnmw                                        2/2     Running     0          25m     172.70.10.23     sc-node-mysql-2   <none>           <none>
csi-rbdplugin-zmd8m                                        2/2     Running     0          25m     172.70.10.24     sc-node-mysql-3   <none>           <none>
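If a node still reports the "not found in the list of registered CSI driver" error after adjusting the tolerations, check that the plugin pods are actually running on that node and that the registration sockets exist there (the node name is a placeholder):

kubectl -n rook-ceph get pod -o wide | grep csi | grep <node-name>
# on the affected node:
ls /var/lib/kubelet/plugins_registry/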

Frequently used commands

# Fixing the "The Object Gateway Service is not configured" problem

bash-4.4$ ceph dashboard set-rgw-api-admin-resource admin
Option RGW_API_ADMIN_RESOURCE updated

Then delete the rgw and mgr deployments (and similar), then delete the operator pod so that everything is recreated.

Get the RGW user credentials:

bash-4.4$ radosgw-admin  user list
[
    "obc-rook-ceph-model-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-123scello-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-chinakb-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-wwwscello-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-123data-platform-s3-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-k8s-install-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-env-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-harbor-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-operate-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "rgw-admin",
    "obc-rook-ceph-bucket01-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "dashboard-admin",
    "obc-rook-ceph-docker-compose-file-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-rdf-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-zhangchen-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-kafka-service-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-kafka-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-ansible-hosts-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-ansible-mount-playbook-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-ansible-deploy-playbook-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-my-bucket-name-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "knowdee-s3-user",
    "obc-rook-ceph-rfp-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-sunshine-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-sentinel-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-knola-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-dgraph-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-sql-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-data-platform-s3-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-app-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-image-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-certificate-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-nacos-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-aaa-scello-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-docker-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-mysql-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-inventory-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-nginx-conf-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-ansible-localize-playbook-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-env-deploy-file-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-pub-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-namespace-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-redis-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-bot-data-s3-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-delivery-meta-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-bj20-scello-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-sync-script-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-gitlab-backup-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-lcig-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-flink-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-chatbot-data-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-start-k8s-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-lcig.dev-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-flyway-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-origin-install-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-config-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-test-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-pipeline-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-package-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-ansible-backup-playbook-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-restart-deploy-file-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-static-script-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "rgw-admin-ops-user",
    "obc-rook-ceph-scello-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d",
    "obc-rook-ceph-zookeeper-d2d88f82-2a5a-43f3-8bd0-bb2881d1206d"
]
bash-4.4$ radosgw-admin  user info  --uid knowdee-s3-user
{
    "user_id": "knowdee-s3-user",
    "display_name": "knowdee s3 of shichuang ",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "knowdee-s3-user",
            "access_key": "FXEPSIIKK2GWV0XCSSJE",
            "secret_key": "3CMG9hf1n2uV8BK7wtTLTv68IDrY9ymw4z7tp1IA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}



Configure the dashboard with the RGW API credentials (run from the toolbox):

echo AEL98BJZOFS1XT9OPOM1 > access_key
echo CcXfx6cIzgfYdjnessqinQPnDyXY83JM0BNs72Cs > secret_key
ceph dashboard set-rgw-api-access-key -i access_key
ceph dashboard set-rgw-api-secret-key -i secret_key

All.