Ceph-FS File Storage


Introduction

  • Ceph FS, i.e. the Ceph filesystem, provides shared file system access: clients mount it over the ceph protocol and use the Ceph cluster as the backing storage.
  • CephFS requires the Metadata Server (MDS) service, whose daemon is ceph-mds. The ceph-mds process manages the metadata of files stored on CephFS and coordinates access to the Ceph storage cluster.

Deploy the MDS servers

Node selection

Deploy the MDS service on the following three nodes:

192.168.0.184 node1
192.168.0.101 node2
192.168.0.100 node3

Install ceph-mds

 yum install ceph-mds -y
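The package has to be installed on all three MDS nodes. If passwordless SSH from the deploy node is already set up (as ceph-deploy normally requires), a simple loop can do it in one go; this is just a sketch under that assumption.

# install ceph-mds on node1/node2/node3 from the deploy node (assumes passwordless ssh as root)
for n in node1 node2 node3; do
    ssh $n "yum install -y ceph-mds"
done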

Deploy ceph-mds from the ceph-deploy node

ceph-deploy --overwrite-conf mds create node1
ceph-deploy --overwrite-conf mds create node2
ceph-deploy --overwrite-conf mds create node3
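After ceph-deploy finishes, each node should be running a ceph-mds systemd unit named after the host (the MDS id defaults to the hostname with ceph-deploy). A quick per-node sanity check, assuming the default systemd-managed daemons, might look like this:

# on node1 (repeat on node2/node3 with their own hostnames)
systemctl status ceph-mds@node1
# confirm the installed daemon version
ceph-mds --version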

Check the Ceph cluster status

[root@node1 cephcluster]# ceph status
  cluster:
    id:     9e7b59a6-c3ee-43d4-9baf-60d5bb05484a
    health: HEALTH_OK
 
  services:
    mon: 5 daemons, quorum node3,node4,node1,node2,node5 (age 5h)
    mgr: node2(active, since 5h), standbys: node1, node3, node5, node4
    osd: 3 osds: 3 up (since 5h), 3 in (since 5d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    pools:   7 pools, 225 pgs
    objects: 207 objects, 8.9 MiB
    usage:   106 MiB used, 477 GiB / 477 GiB avail
    pgs:     225 active+clean
 
  io:
    client:   36 KiB/s rd, 0 B/s wr, 35 op/s rd, 23 op/s wr
[root@node1 cephcluster]# ceph mds stat
 3 up:standby  # all three daemons are in standby; a filesystem and its pools must be created before one becomes active

Create the CephFS metadata and data pools

Before CephFS can be used, a filesystem must be created in the cluster, with one pool designated for metadata and one for data. The commands below create a filesystem named mycephfs that uses cephfs-metadata as its metadata pool and cephfs-data as its data pool.

# pool for metadata
[root@node1 cephcluster]# ceph osd pool create cephfs-metadata 16 16
pool 'cephfs-metadata' created 

# pool for data
[root@node1 cephcluster]# ceph osd pool create cephfs-data 8 8
pool 'cephfs-data' created
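Before creating the filesystem, the two pools and their PG counts can be double-checked; this step is optional.

# confirm both pools exist and show their pg_num settings
ceph osd pool ls detail | grep cephfs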

Create the CephFS filesystem

[root@node1 cephcluster]# ceph fs new mycephfs cephfs-metadata cephfs-data
new fs with metadata pool 8 and data pool 9
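The new filesystem and its pool assignment can also be listed directly:

# list filesystems and their pools
ceph fs ls
# expected to report something like: name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data]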

Check the MDS service status again

[root@node1 cephcluster]# ceph -s
  cluster:
    id:     9e7b59a6-c3ee-43d4-9baf-60d5bb05484a
    health: HEALTH_OK
 
  services:
    mon: 5 daemons, quorum node3,node4,node1,node2,node5 (age 6h)
    mgr: node2(active, since 6h), standbys: node1, node3, node5, node4
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 6h), 3 in (since 5d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 249 pgs
    objects: 229 objects, 8.9 MiB
    usage:   110 MiB used, 477 GiB / 477 GiB avail
    pgs:     249 active+clean
 
  io:
    client:   1.3 KiB/s wr, 0 op/s rd, 4 op/s wr
[root@node1 cephcluster]# ceph mds stat
mycephfs:1 {0=node3=up:active} 2 up:standby
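With the filesystem created, one MDS (node3) has gone active and the other two remain standby. If a larger metadata workload ever calls for more than one active MDS, the active count can be raised; this is optional and only a sketch:

# allow up to two active MDS daemons for mycephfs (one daemon stays standby)
ceph fs set mycephfs max_mds 2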

Check the CephFS status

[root@node1 cephcluster]# ceph fs status mycephfs
mycephfs - 0 clients
========
RANK  STATE    MDS      ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  node3  Reqs:    0 /s    10     13     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  96.0k   150G  
  cephfs-data      data       0    150G  
STANDBY MDS  
   node1     
   node2     
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
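The mount test in the next section uses the client.admin key for simplicity. For anything beyond a test, a dedicated client key restricted to this filesystem is usually preferable; the command below is a sketch, and the client name client.cephfs is just an example.

# create a key that only grants rw access to mycephfs (client name is arbitrary)
ceph fs authorize mycephfs client.cephfs / rw
# save the printed keyring on the client, e.g. as /etc/ceph/ceph.client.cephfs.keyring, and mount with name=cephfs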

Mount CephFS on a client

Test mounting CephFS on a client. The mon address must include port 6789.

root@onda:~# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
        key = AQBakzplc7fHGxAA3jNeWcQXV2QyXiXh9Bzurw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
        
root@onda:~# mount -t ceph  192.168.0.100:6789:/ /mnt -o name=admin,secret=AQBakzplc7fHGxAA3jNeWcQXV2QyXiXh9Bzurw==

# verify the mount
root@onda:~# df -TH
Filesystem           Type   Size  Used Avail Use% Mounted on
tmpfs                tmpfs  394M  2.1M  392M   1% /run
/dev/sda2            ext4   118G   19G   93G  17% /
tmpfs                tmpfs  2.0G     0  2.0G   0% /dev/shm
tmpfs                tmpfs  5.3M  4.1k  5.3M   1% /run/lock
/dev/sda1            vfat   536M  6.4M  530M   2% /boot/efi
tmpfs                tmpfs  394M  107k  394M   1% /run/user/1000
tmpfs                tmpfs  394M   62k  394M   1% /run/user/0
192.168.0.100:6789:/ ceph   163G     0  163G   0% /mnt
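Passing the secret on the command line leaves it in shell history and visible in ps output. A common alternative is to store the key in a secret file and, if the mount should survive reboots, add an /etc/fstab entry; the file paths below are just examples.

# store only the base64 key (no "key =" prefix) in a root-readable file
echo 'AQBakzplc7fHGxAA3jNeWcQXV2QyXiXh9Bzurw==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 192.168.0.100:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret

# optional /etc/fstab entry for a persistent mount
192.168.0.100:6789:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0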