Distributed File Storage with GlusterFS

The GlusterFS distributed file system can pool all of the available space across a set of servers, combining their spare disks into one large volume for internal company use. The deployment plan is as follows:

IP address       OS         Gluster version
192.168.4.225    CentOS 7   7
192.168.4.226    CentOS 7   7
192.168.4.227    CentOS 7   7
192.168.4.228    CentOS 7   7

Gluster is installed with yum, which makes it easy to resolve dependencies and pull in the related components automatically.

1. Install the Gluster yum repository package
# Must be installed on every server
[root@server-225 ~]# yum install centos-release-gluster7
[root@server-225 ~]# yum makecache
[root@server-225 ~]# yum install -y glusterfs-server
[root@server-225 ~]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /data/glusterd  # adjusted data directory (default is /var/lib/glusterd)
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option transport.socket.listen-port 24007
    option transport.rdma.listen-port 24008
    option ping-timeout 0
    option event-threads 1
    option max-port  60999
end-volume
[root@server-225 ~]# systemctl restart glusterd
[root@server-225 ~]# systemctl enable glusterd
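
If firewalld is active on these hosts, glusterd and the brick ports also have to be reachable between the nodes and from the clients. A minimal sketch based on the ports in the configuration above (24007/24008 for glusterd, 49152 up to the configured max-port for bricks), to be run on every node:

[root@server-225 ~]# firewall-cmd --permanent --add-port=24007-24008/tcp
[root@server-225 ~]# firewall-cmd --permanent --add-port=49152-60999/tcp
[root@server-225 ~]# firewall-cmd --reload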

Once the installation is complete on all nodes, the next step is to add the peers; this only needs to be done from one of the machines.

[root@server-225 ~]# gluster peer probe 192.168.4.226
[root@server-225 ~]# gluster peer probe 192.168.4.227
[root@server-225 ~]# gluster peer probe 192.168.4.228
[root@server-225 ~]# gluster peer status
Number of Peers: 3
Hostname: 192.168.4.228
Uuid: 94f8c106-d6f1-40f4-b4f3-beaac700e0bd
State: Peer in Cluster (Connected)
Other names:
192.168.4.228

Hostname: 192.168.4.226
Uuid: 23f8f2db-3d65-446b-a0c4-fdfb4e51bd9b
State: Peer in Cluster (Connected)

Hostname: 192.168.4.227
Uuid: 0b481909-2e79-4927-9eae-92d898419b32
State: Peer in Cluster (Connected)

After joining the four machines into one cluster we can create volumes. Here I create a replicated volume (each file is stored as multiple copies) so that no files are lost. For this I add a 20G disk to each host and mount it; if your systems already have spare disk space you can skip that step (the brick directory can be any path). A sketch of preparing such a disk follows.
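
The disk preparation commands are not part of the original walkthrough; the sketch below assumes the new disk shows up as /dev/sdb and has to be repeated on every node:

[root@server-225 ~]# mkfs.xfs /dev/sdb
[root@server-225 ~]# mkdir -p /gluster
[root@server-225 ~]# mount /dev/sdb /gluster
[root@server-225 ~]# echo '/dev/sdb /gluster xfs defaults 0 0' >> /etc/fstab
[root@server-225 ~]# mkdir -p /gluster/replica /gluster/distr

With the bricks in place, the replicated volume can be created: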

[root@server-225 ~]# gluster volume create magicreplica replica 3 192.168.4.227:/gluster/replica/ 192.168.4.228:/gluster/replica/ 192.168.4.226:/gluster/replica/ force

If you simply want to pool all of the available space, you can instead create a distributed volume (similar to RAID 0):

[root@server-225 ~]# gluster volume create magicdist 192.168.4.227:/gluster/distr 192.168.4.228:/gluster/distr 192.168.4.226:/gluster/distr force
[root@server-225 ~]# gluster volume start magicreplica
[root@server-225 ~]# gluster volume start magicdist
[root@server-225 ~]# gluster volume status magicdist
Status of volume: magicdist
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.4.227:/gluster/distr          49153     0          Y       18837
Brick 192.168.4.228:/gluster/distr          49153     0          Y       18927
Brick 192.168.4.226:/gluster/distr          49152     0          Y       36464
Self-heal Daemon on localhost               N/A       N/A        Y       36485
Self-heal Daemon on 192.168.4.228           N/A       N/A        Y       18948
Self-heal Daemon on 192.168.4.226           N/A       N/A        Y       96800
Self-heal Daemon on 192.168.4.227           N/A       N/A        Y       18858

Task Status of Volume magicdist
------------------------------------------------------------------------------
There are no active volume tasks

After creation, the status and configuration of both volumes can be inspected:

[root@server-225 ~]# gluster volume status magicreplica
Status of volume: magicreplica
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.4.227:/gluster/replica/       49153     0          Y       18837
Brick 192.168.4.228:/gluster/replica/       49153     0          Y       18927
Brick 192.168.4.226:/gluster/replica/       49152     0          Y       36464
Self-heal Daemon on localhost               N/A       N/A        Y       36485
Self-heal Daemon on 192.168.4.228           N/A       N/A        Y       18948
Self-heal Daemon on 192.168.4.226           N/A       N/A        Y       96800
Self-heal Daemon on 192.168.4.227           N/A       N/A        Y       18858
Task Status of Volume magicreplica
------------------------------------------------------------------------------
There are no active volume tasks
[root@server-225 ~]# gluster volume info magicreplica
Volume Name: magicreplica
Type: Replicate
Volume ID: 5546f656-50a0-4fff-82f2-2e882ab81ba1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.4.227:/gluster/replica/
Brick2: 192.168.4.228:/gluster/replica/
Brick3: 192.168.4.226:/gluster/replica/
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Because magicreplica is a replicated volume, every file written to it is stored three times, once on each brick; the plain distributed volume does not do this.
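
To confirm that the three copies stay in sync, the self-heal status of the replicated volume can be queried at any time (output omitted here; it lists any entries still pending heal, per brick):

[root@server-225 ~]# gluster volume heal magicreplica info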

[root@server-225 ~]# gluster volume info magicdist
Volume Name: magicdist
Type: Distribute
Volume ID: 5546f656-50a0-4fff-82f2-2e882ab81ba1
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.4.227:/gluster/distr
Brick2: 192.168.4.228:/gluster/distr
Brick3: 192.168.4.226:/gluster/distr
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Installing the client and mounting the file system

[root@localjenkins replica]# yum install centos-release-gluster7
[root@localjenkins replica]# yum makecache
[root@localjenkins replica]# yum install glusterfs glusterfs-fuse
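#### The mount points used below are assumed not to exist yet; create them first
[root@localjenkins ~]# mkdir -p /media/replica /media/dist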
[root@localjenkins replica]# mount.glusterfs 192.168.4.227,192.168.4.228,192.168.4.226:/magicreplica /media/replica
[root@localjenkins replica]# mount.glusterfs 192.168.4.227,192.168.4.228,192.168.4.226:/magicdist /media/dist
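#### Optional (a sketch): make both mounts persistent across reboots; the
#### backup-volfile-servers mount option lets the client fall back to the other nodes
[root@localjenkins ~]# cat >> /etc/fstab <<'EOF'
192.168.4.227:/magicreplica  /media/replica  glusterfs  defaults,_netdev,backup-volfile-servers=192.168.4.228:192.168.4.226  0 0
192.168.4.227:/magicdist     /media/dist     glusterfs  defaults,_netdev,backup-volfile-servers=192.168.4.228:192.168.4.226  0 0
EOF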
#### Capacity check: the replica volume shows the size of a single 20G brick (20G x 3 nodes of raw space)
#### Capacity check: the distribute volume merges the 3 bricks into one 60G volume
[root@localjenkins ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
192.168.253.128:/magicreplica   20G  238M   20G   2% /media/replica
192.168.253.128:/magicdist      60G  712M   60G   2% /media/dist
#### File system performance test
[root@localjenkins replica]# time dd if=/dev/zero bs=1M count=7072 of=centos.iso
7072+0 records in
7072+0 records out
7415529472 bytes (7.4 GB, 6.9 GiB) copied, 41.3784 s, 179 MB/s

real	0m41.266s
user	0m0.018s
sys	0m4.812s

[root@localjenkins dist]# time dd if=/dev/zero bs=1M count=7072 of=centos.iso
7072+0 records in
7072+0 records out
7415529472 bytes (7.4 GB, 6.9 GiB) copied, 17.2403 s, 430 MB/s

real	0m20.417s
user	0m0.003s
sys	0m9.382s
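#### Checking the brick directories on the servers: every replica brick holds a full copy of the files written from the client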
[root@localgitlab gluster]# ls /gluster/replica/
centos.iso  officesite-v2.1.0.tar.gz
[root@localfabu ~]# ls /gluster/replica/
centos.iso  officesite-v2.1.0.tar.gz
[root@glusterfs ~]# ls /gluster/replica/
centos.iso  officesite-v2.1.0.tar.gz

#The centos.iso generated above is present on only one of the distribute bricks
[root@localgitlab gluster]# ls /gluster/distr/
centos.iso
[root@localfabu ~]# ls /gluster/distr/

[root@glusterfs ~]# ls /gluster/distr/

Large-file write test

Type (local dd test)   Runs   File size   Time (s)
replica                30     7072M       46.3838
distribute             30     7072M       36.6701
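
The dd command is the same one shown on the client above; a sketch of how the repeated runs might be scripted (the loop itself is an assumption, not taken from the original test):

[root@localjenkins replica]# for i in $(seq 1 30); do dd if=/dev/zero bs=1M count=7072 of=centos.iso; done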

Small-file write test

Type (scp)   Rate      Runs   File size   Time
replica      2-4MB/s   2      14G         5m25.663s
distribute   2-4MB/s   2      14G         5m16.426s
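
The small-file runs themselves are not shown; a sketch of how such a test could be reproduced (file count and sizes are assumptions, and CLIENT_IP stands in for the client's address):

[root@localgitlab ~]# mkdir smallfiles
[root@localgitlab ~]# for i in $(seq 1 2000); do dd if=/dev/urandom of=smallfiles/f$i bs=64k count=1; done
[root@localgitlab ~]# time scp -r smallfiles/ root@CLIENT_IP:/media/replica/
[root@localgitlab ~]# time scp -r smallfiles/ root@CLIENT_IP:/media/dist/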