Using GlusterFS for PV/PVC in Kubernetes


GlusterFS

Common shared-storage options for Kubernetes include CephFS and NFS, but Ceph consumes a lot of resources and NFS lacks reliability. Is there a reasonable middle ground? Yes: GlusterFS is a good choice.

Introduction

Gluster is a scalable distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

docs.gluster.org/en/latest/A…

GlusterFS supports several volume types, including the following (a creation sketch follows the list):

  • Distributed: distributed volumes spread files across the bricks in the volume. Use them where the requirement is to scale storage and redundancy is either unimportant or provided by other hardware/software layers.
  • Replicated: replicated volumes replicate files across the bricks in the volume. Use them in environments where high availability and high reliability are critical.
  • Distributed replicated: distributed replicated volumes (similar to RAID 10) distribute files across replicated sets of bricks. Use them where the requirement is to scale storage while keeping high reliability; they also give better read performance in most environments.
  • Dispersed: dispersed volumes are based on erasure codes and provide space-efficient protection against disk or server failures. An encoded fragment of each original file is stored on every brick, so only a subset of the fragments is needed to recover the file. The number of bricks that may be lost without losing access to the data is configured by the administrator at volume creation time.
  • Distributed Dispersed: distributed dispersed volumes distribute files across dispersed subvolumes. They have the same advantages as distributed replicated volumes, but use dispersion instead of replication to store the data on the bricks.
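
As a quick orientation, here is a minimal sketch of how these types map to the gluster volume create syntax (server1..server4 and the /exp* brick paths are placeholders, not hosts from this article):

# Distributed (the default when no keyword is given)
gluster volume create dist-vol server1:/exp1 server2:/exp2
# Replicated: every file is kept on all three bricks
gluster volume create repl-vol replica 3 server1:/exp1 server2:/exp2 server3:/exp3
# Distributed replicated: two replica-2 sets, files distributed across the sets
gluster volume create distrepl-vol replica 2 server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
# Dispersed: erasure coded, any one of the three bricks may be lost
gluster volume create disp-vol disperse 3 redundancy 1 server1:/exp1 server2:/exp2 server3:/exp3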

Managing volumes

Expanding a volume. The number of bricks added should normally be a multiple of the replica count; with replica 2, for example, you add bricks in groups of 2, 4, 6, and so on.

# Add a node to the trusted pool
gluster peer probe server4
# Add a brick
gluster volume add-brick test-volume server4:/exp4
# Check the volume
gluster volume info test-volume
# Rebalance the data onto the new brick
gluster volume rebalance test-volume start

Shrinking a volume

# Shrink: start removing the brick
gluster volume remove-brick test-volume server2:/exp2 start
# Check the removal status
gluster volume remove-brick test-volume server2:/exp2 status
# Commit the removal once data migration has finished
gluster volume remove-brick test-volume server2:/exp2 commit

After expanding a volume with add-brick, you may need to rebalance the data among the servers. New directories created after the volume has been expanded or shrunk are spread evenly automatically; for existing directories, the distribution can be fixed with rebalance fix-layout or a full rebalance.

# Rebalance: fix the layout only
gluster volume rebalance test-volume fix-layout start
# Rebalance: fix the layout and migrate data
gluster volume rebalance test-volume start force
# Check the rebalance status
gluster volume rebalance test-volume status

Replacing a failing brick in a distributed volume, for example a disk with bad sectors that can still be read:

# To replace a brick, add the new brick and then remove the brick being replaced.
# This triggers a rebalance that migrates data off the removed brick.
# Add the new brick
gluster volume add-brick test-volume Server1:/home/gfs/r2_2
# Start removing the old brick
gluster volume remove-brick test-volume Server1:/home/gfs/r2_1 start
# Check the migration status
gluster volume remove-brick test-volume Server1:/home/gfs/r2_1 status
# Commit the removal
gluster volume remove-brick test-volume Server1:/home/gfs/r2_1 commit

Replacing a failed brick in a replicated/distributed replicated volume, for example bad sectors or a completely dead disk:

# Replace r2_0 with r2_5; commit force is required
gluster volume replace-brick test-volume Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 commit force
# Check the heal progress
gluster volume heal test-volume info
# Number of entries: 0 means the replacement is complete

Manually triggering self-heal on replicated volumes

# Trigger self-heal only on the files that need healing
gluster volume heal test-volume
# Trigger self-heal on all files of the volume
gluster volume heal test-volume full
# List the files on the volume that need healing
gluster volume heal test-volume info
# List the files that have already been healed
gluster volume heal test-volume info healed
# List the files for which self-heal failed
gluster volume heal test-volume info failed
# List the files in split-brain state
gluster volume heal test-volume info split-brain

Other volume options

# Enable both tcp and rdma transports (stop the volume first)
$ gluster volume set test-volume config.transport tcp,rdma
# Enable quota on a volume
$ gluster volume quota k8s-volume enable
# Set a quota limit on a volume
$ gluster volume quota k8s-volume limit-usage / 1TB
# Set the cache size (default 32MB)
$ gluster volume set k8s-volume performance.cache-size 4GB
# Set the number of io threads; too large a value can crash the process
$ gluster volume set k8s-volume performance.io-thread-count 16
# Set the network ping timeout (default 42s)
$ gluster volume set k8s-volume network.ping-timeout 10
# Set the write-behind buffer size (default 1MB)
$ gluster volume set k8s-volume performance.write-behind-window-size 1024MB
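
To confirm the value an option actually has after setting it, gluster volume get can be used (volume name as in the examples above):

# Show a single option of the volume
$ gluster volume get k8s-volume performance.cache-size
# Dump every option of the volume
$ gluster volume get k8s-volume all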

Performance profiling

# Start profiling
$ sudo gluster volume profile vol_test start
# View the statistics
$ sudo gluster volume profile vol_test info
# Stop profiling
$ sudo gluster volume profile vol_test stop

Monitoring with volume top

Usage:
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] | {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>]
# Find the VOLNAME and the brick names
$ sudo gluster volume info
# Show the open fd count of a brick
$ sudo gluster volume top $VOLNAME open brick $brick

Deploying GlusterFS

Deploying glusterfs-server

$ sudo apt-get install glusterfs-server
$ cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option ping-timeout 0
    option event-threads 1
#   option lock-timer 180
#   option transport.address-family inet6
#   option base-port 49152
    option max-port  60999
end-volume

$ sudo systemctl start glusterd.service
$ sudo systemctl enable glusterd.service

# Add peers to the trusted pool
$ sudo gluster peer probe XXX-node1.gy.ntes
$ sudo gluster peer probe XXX-node2.gy.ntes
$ sudo gluster peer probe XXX-node3.gy.ntes
$ sudo gluster peer status
Number of Peers: 3

Preparing the bricks (not needed if you use heketi)

# Format the disk, create the mount point, add an fstab entry, and mount it
mkfs.xfs /dev/sdX
mkdir /gfsdata1
echo $(blkid /dev/sdX |awk '{print $2}') /gfsdata1 xfs defaults 0 0 >> /etc/fstab
mount -a

Creating a distributed replicated volume (not needed if you use heketi)

$ gluster volume create k8s replica 2 transport tcp \
XXX-master1.gy.ntes:/gfsdata1 \
XXX-node1.gy.ntes:/gfsdata1 \
XXX-node2.gy.ntes:/gfsdata1 \
XXX-node3.gy.ntes:/gfsdata1 force
# Inspect the volume
$ sudo gluster volume info
Volume Name: k8s
Type: Distributed-Replicate
Volume ID: 457c29de-6b55-48e9-9917-70ffea68dd11
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: XXX-master1.gy.ntes:/gfsdata1
Brick2: XXX-node1.gy.ntes:/gfsdata1
Brick3: XXX-node2.gy.ntes:/gfsdata1
Brick4: XXX-node3.gy.ntes:/gfsdata1
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
# Start the volume
$ sudo gluster volume start k8s

Deploying heketi: a RESTful volume management interface for GlusterFS

Creating the heketi user

# On every node
useradd -m -s /bin/bash heketi
echo 'heketi ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/heketi
# On the first node, generate the key pair; the public key must be appended to ~heketi/.ssh/authorized_keys on every node
ssh-keygen -m PEM -t rsa -b 4096 -q -f /etc/heketi/heketi_key -N ''
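
A minimal sketch of pushing the public key to each node's heketi account (the hostnames are the ones used elsewhere in this article; this assumes password-based SSH is temporarily available for the heketi user):

# Append /etc/heketi/heketi_key.pub to ~heketi/.ssh/authorized_keys on each node
for node in XXX-node1.gy.ntes XXX-node2.gy.ntes XXX-node3.gy.ntes; do
  ssh-copy-id -i /etc/heketi/heketi_key.pub heketi@${node}
done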

Deploying heketi

github.com/heketi/heke…

$ wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-v7.0.0.linux.amd64.tar.gz
$ tar -xf heketi-v7.0.0.linux.amd64.tar.gz
$ cd heketi
$ ls
heketi  heketi-cli  heketi.json
$ sudo cp heketi /usr/bin
$ sudo cp heketi-cli /usr/bin
$ sudo vi /lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server
[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target

$ sudo mkdir -p /var/lib/heketi
$ sudo mkdir -p /etc/heketi
# Edit heketi.json; the fields below are the ones changed from the defaults
  "port": "8880",
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "heketi",
      "port": "1046",
      "fstab": "/etc/fstab",
      "backup_lvm_metadata": false,
      "sudo": true
    },
$ sudo cp heketi.json /etc/heketi/heketi.json
$ sudo systemctl start heketi
$ sudo systemctl enable heketi

Creating topology.json (note: only IP addresses can be used here, not hostnames)

$ vi /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.XX.1.95"
              ],
              "storage": [
                "10.XX.1.95"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.XX.1.98"
              ],
              "storage": [
                "10.XX.1.98"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.XX.1.99"
              ],
              "storage": [
                "10.XX.1.99"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.XX.1.94"
              ],
              "storage": [
                "10.XX.1.94"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        }
      ]
    }
  ]
}

Loading the topology

$ export HEKETI_CLI_SERVER=http://localhost:8880
$ heketi-cli topology load --json=/etc/heketi/topology.json
# Cluster management
$ heketi-cli cluster list
Clusters:
Id:ab37ff950c77de3cfebdd29c5f8dba47 [file][block]
$ heketi-cli cluster info ab37ff950c77de3cfebdd29c5f8dba47
Cluster id: ab37ff950c77de3cfebdd29c5f8dba47
Nodes:
2c17ef117e0e87b63d51a4c5132b21b6
62cb428dc426f29556bb84f9a859a39a
80636e9d5bd39749a54ac77458f7fbea
cb20f587250a7675036cee218dc4ab5c
Volumes:
Block: true
File: true
# Volume management
$ heketi-cli volume list

Testing

$ export HEKETI_CLI_SERVER=http://localhost:8880 HEKETI_CLI_USER=admin HEKETI_CLI_KEY=neteasepsw
$ heketi-cli volume create --size=10 --durability=none --user "admin" --secret "neteasepsw"

Creating the StorageClass

$ cat clustersc.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
type: kubernetes.io/glusterfs
data:
  key: "bmV0ZWFzZXBzdw==" # echo -n "neteasepsw" | base64
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  name: glusterfs
parameters:
  clusterid: "ab37ff950c77de3cfebdd29c5f8dba47"
  gidMax: "50000"
  gidMin: "40000"
  restauthenabled: "true"
  resturl: "http://10.XX.1.95:8880"
  restuser: admin
  secretName: heketi-secret
  secretNamespace: kube-system
  # restuserkey: neteasepsw # the key can also be given inline instead of via secretName/secretNamespace
  volumetype: "replicate:2"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
$ kubectl apply -f clustersc.yaml
secret/heketi-secret created
storageclass.storage.k8s.io/glusterfs created
$ kubectl get sc
NAME                  PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
glusterfs (default)   kubernetes.io/glusterfs   Delete          Immediate              true                   38m
local                 openebs.io/local          Delete          WaitForFirstConsumer   false                  75d

Troubleshooting: PVs fail to be created

# Raise the log verbosity of kube-controller-manager
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
  - --v=4

The logs show that repeated creation attempts after earlier failures have exhausted the available space:

I1030 18:25:00.061658       1 glusterfs.go:837] failed to create volume: Failed to allocate new volume: No space

After cleaning up, create a test PVC:

$ cat test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs
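
Apply the claim and check that a PV is provisioned and bound (standard kubectl commands):

$ kubectl apply -f test.yaml
$ kubectl get pvc glusterfs-test
$ kubectl get pv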

The logs show that IP addresses are required; the earlier /etc/heketi/topology.json must not use hostnames:

: glusterfs server node ip address XXX-node2.gy.ntes must be a valid IP address, (e.g. 10.9.8.7)

On the nodes, clean up the leftover VGs/PVs, change /etc/heketi/topology.json to use IP addresses, and run heketi-cli topology load again.
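
A hedged sketch of that node-level cleanup, assuming the leftover heketi volume groups follow the vg_<id> naming and /dev/sda is the device being re-added (wipefs is destructive, so verify the device first):

# List and remove leftover heketi volume groups and physical volumes
$ sudo vgs
$ sudo vgremove vg_XXXXXXXXXXXX
$ sudo pvremove /dev/sda
# Clear remaining signatures so heketi can re-initialize the device
$ sudo wipefs -a /dev/sda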

When trying to clean up the old cluster, the old nodes cannot be deleted directly; heketi reports that they still contain devices:

# Delete volumes, then devices, then nodes
$ heketi-cli  node delete 80636e9d5bd39749a54ac77458f7fbea
Error: Unable to delete node [80636e9d5bd39749a54ac77458f7fbea] because it contains devices
$ sudo vgremove vg_bf59ca136513143d28d4a3eb8fcd710a
  Volume group "vg_bf59ca136513143d28d4a3eb8fcd710a" successfully removed
$ heketi-cli  device disable bf59ca136513143d28d4a3eb8fcd710a
Device bf59ca136513143d28d4a3eb8fcd710a is now offline
$ heketi-cli  device remove bf59ca136513143d28d4a3eb8fcd710a
Device bf59ca136513143d28d4a3eb8fcd710a is now removed
$ heketi-cli  device delete bf59ca136513143d28d4a3eb8fcd710a
Device bf59ca136513143d28d4a3eb8fcd710a deleted
$ heketi-cli  node delete 80636e9d5bd39749a54ac77458f7fbea
Node 80636e9d5bd39749a54ac77458f7fbea deleted