Ceph Cluster Setup



I. Environment Preparation

Prepare three CentOS 7 virtual machines (three nodes).

Install two network cards in each VM: one in host-only mode and one in NAT mode.

Give each VM two disks: one system disk and one 1024 GB disk dedicated to Ceph.

Configure the IP addresses (the 192.168.100.0/24 addresses are used for cluster communication, as the hosts file below shows):

ceph01  192.168.158.9   192.168.100.9
ceph02  192.168.158.11  192.168.100.11
ceph03  192.168.158.12  192.168.100.70

1. Hostname and Passwordless Login Setup

1.1 On each node, set the hostname and edit the hosts file

vi /etc/hosts
192.168.100.9 ceph01
192.168.100.11 ceph02
192.168.100.70 ceph03
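Instead of editing the file by hand on every node, the three entries above can be appended in one idempotent loop. A minimal sketch; the `HOSTS_FILE` variable is an assumption of this sketch so it can run without root, on a real node point it at `/etc/hosts`:

```shell
#!/bin/sh
# Sketch: append the cluster hosts entries idempotently.
# HOSTS_FILE is a local path here so the sketch runs without root;
# use HOSTS_FILE=/etc/hosts on a real node.
HOSTS_FILE="${HOSTS_FILE:-./hosts}"
touch "$HOSTS_FILE"
for entry in "192.168.100.9 ceph01" \
             "192.168.100.11 ceph02" \
             "192.168.100.70 ceph03"; do
    # only append when the exact line is not already present
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
```

Running it a second time adds nothing, so it is safe to include in a node-provisioning script.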

1.2 Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

1.3 Set up passwordless SSH among the three nodes

ssh-keygen
ssh-copy-id root@ceph01
ssh-copy-id root@ceph02
ssh-copy-id root@ceph03
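The key generation and the three `ssh-copy-id` calls can be wrapped in a loop so the node list is typed only once. A sketch, not the article's method; the `DRY_RUN` switch is an assumption of this sketch so it can be tried without live nodes (it prints the commands instead of running them):

```shell
#!/bin/sh
# Sketch: distribute the SSH key to every node in a loop.
# DRY_RUN defaults to 1 in this sketch (print only); set DRY_RUN=0
# on a real admin node to actually run ssh-keygen/ssh-copy-id.
NODES="ceph01 ceph02 ceph03"
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

# generate a key only if one does not exist yet
[ -f ~/.ssh/id_rsa ] || run ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for node in $NODES; do
    run ssh-copy-id "root@$node"
done
```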

2. YUM and Ceph Repository Setup

2.1 Configure the base YUM repositories

yum install wget -y    //install wget to fetch the new repo files
cd /etc/yum.repos.d/
mkdir backup
mv C* backup           //back up the existing repo files
//download the new repo files with wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

2.2 Configure the Ceph repository

//configure the Ceph repository
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch 
enabled=1 
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc 
priority=1 
[ceph-noarch] 
name=Ceph noarch packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ 
enabled=1 
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1 
[ceph-source] 
name=Ceph source packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/ 
enabled=1 
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc 
priority=1
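The three repo sections differ only in their id, name, and baseurl suffix, so the whole file can be generated from a loop. A sketch, with the `REPO_FILE` variable as an assumption so it can run without root; point it at `/etc/yum.repos.d/ceph.repo` on a real node:

```shell
#!/bin/sh
# Sketch: generate ceph.repo from a loop instead of typing the three
# near-identical sections by hand. REPO_FILE is a local path here so
# the sketch runs without root.
REPO_FILE="${REPO_FILE:-./ceph.repo}"
MIRROR="https://mirrors.aliyun.com/ceph/rpm-mimic/el7"
KEY="https://mirrors.aliyun.com/ceph/keys/release.asc"

: > "$REPO_FILE"    # truncate/create the output file
# each entry: section-id|human-readable name|baseurl suffix
# (\$basearch stays literal so yum expands it at install time)
for entry in "ceph|Ceph packages for \$basearch|\$basearch" \
             "ceph-noarch|Ceph noarch packages|noarch" \
             "ceph-source|Ceph source packages|SRPMS"; do
    id=${entry%%|*}; rest=${entry#*|}
    name=${rest%%|*}; path=${rest#*|}
    cat >> "$REPO_FILE" <<EOF
[$id]
name=$name
baseurl=$MIRROR/$path
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=$KEY
priority=1
EOF
done
```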

II. Installing the Ceph Cluster

1. Installation

1.1 Install the packages

//on ceph01 (the admin node)
yum install ceph-deploy ceph python-setuptools -y
//on ceph02 and ceph03
yum install ceph python-setuptools -y

1.2 Create the configuration directory

//on every node
mkdir /etc/ceph
//on the admin node, create the monitors and initialize them, collecting the keys
[root@ceph01 yum.repos.d]# cd /etc/ceph
[root@ceph01 ceph]# ceph-deploy new ceph01 ceph02   //create the mons
[root@ceph01 ceph]# ceph-deploy mon create-initial  //initialize and collect the keys

For deployment errors, see the note "Ceph 部署错误应对" (Ceph deployment troubleshooting): http://note.youdao.com/noteshare?id=4b387b7fd71278cb6a1c3752c9f13092&sub=BBA6C176400043DE8744AECFE2E48E7E

1.3 Check the cluster status

[root@ceph01 ceph]# ceph -s

1.4 Create the OSDs

[root@ceph01 ceph]# ceph-deploy osd create --data /dev/sdb ceph01
[root@ceph01 ceph]# ceph-deploy osd create --data /dev/sdb ceph02
[root@ceph01 ceph]# ceph -s
//the cluster status now shows two OSDs joined
[root@ceph01 ceph]# ceph osd tree   //view the OSD tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       1.99799 root default
-3       0.99899     host ceph01
 0   hdd 0.99899         osd.0       up  1.00000 1.00000
-5       0.99899     host ceph02
 1   hdd 0.99899         osd.1       up  1.00000 1.00000
[root@ceph01 ceph]# ceph osd stat   //view the OSD status
2 osds: 2 up, 2 in; epoch: e9
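The one-line `ceph osd stat` summary is convenient for scripted health checks. A sketch that pulls the counts out with `awk`; the sample line mirrors the output above (on a live cluster, set `STAT="$(ceph osd stat)"` instead):

```shell
#!/bin/sh
# Sketch: extract the total/up/in OSD counts from a `ceph osd stat`
# line, e.g. so a monitoring script can alert when OSDs drop out.
# The sample line is copied from the output shown above.
STAT="2 osds: 2 up, 2 in; epoch: e9"
total=$(echo "$STAT" | awk '{print $1}')      # field 1: total OSD count
up=$(echo "$STAT" | awk '{print $3}')         # field 3: OSDs up
in_count=$(echo "$STAT" | awk '{print $5}')   # field 5: OSDs in
echo "total=$total up=$up in=$in_count"
```

A check such as `[ "$up" = "$total" ]` then flags any down OSD.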

1.5 Push the configuration file and admin keyring to the nodes and make the keyring readable

[root@ceph01 ceph]# ceph-deploy admin ceph01 ceph02 
[root@ceph01 ceph]# chmod +r ceph.client.admin.keyring 
[root@ceph02 ceph]# chmod +r ceph.client.admin.keyring

2. Scaling Out

2.1 Add an OSD and a mon on ceph03

[root@ceph01 ceph]# ceph-deploy osd create --data /dev/sdb ceph03
[root@ceph01 ceph]# ceph -s
//there are now three OSDs
[root@ceph01 ceph]# vi ceph.conf
[global]
fsid = b175fb1a-fdd6-4c57-a41f-b2d964dff248
mon_initial_members = ceph01, ceph02, ceph03   //add ceph03
mon_host = 192.168.100.9,192.168.100.11,192.168.100.70   //add ceph03's IP address
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.100.0/24   //add the internal communication network
[root@ceph01 ceph]# ceph-deploy mon add ceph03   //add the mon

2.2 Push the keys and configuration file again

[root@ceph01 ceph]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
[root@ceph01 ceph]# systemctl list-unit-files | grep mon   //list enabled units containing "mon"
ceph-mon@.service enabled
lvm2-monitor.service enabled
ceph-mon.target enabled

2.3 Restart the mon service

//on every node
[root@ceph01 ceph]# systemctl restart ceph-mon.target 
[root@ceph01 ceph]# ceph -s

3. Recovering a Failed OSD

3.1 Simulate a failure by removing an OSD from the cluster

//mark the OSD out, remove it from the CRUSH map, and delete its authentication
ceph osd out osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
//restart the OSD service on every node; osd.2 then goes down
systemctl restart ceph-osd.target
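The removal steps above can be wrapped in a small script so the OSD id is typed only once. A sketch, not the article's method; the `DRY_RUN` switch is an assumption of this sketch so it can be tried without a cluster (it prints the commands instead of running them):

```shell
#!/bin/sh
# Sketch: remove one OSD from the cluster, parameterized by id.
# DRY_RUN defaults to 1 in this sketch (print only); set DRY_RUN=0
# on a real admin node to execute the commands.
OSD_ID="${1:-2}"
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run ceph osd out "osd.$OSD_ID"            # stop placing data on it
run ceph osd crush remove "osd.$OSD_ID"   # drop it from the CRUSH map
run ceph auth del "osd.$OSD_ID"           # delete its auth key
run systemctl restart ceph-osd.target     # restart OSD services
```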

3.2 Restore the OSD to the cluster

//on ceph03, find where the OSD's data directory is mounted
df -hT
//the mount appears as:
tmpfs tmpfs 910M 52K 910M 1% /var/lib/ceph/osd/ceph-2
//read the fsid stored in /var/lib/ceph/osd/ceph-2
[root@ceph03 ~]# cd /var/lib/ceph/osd/ceph-2
[root@ceph03 ceph-2]# more fsid
57df2d3e-2f53-4143-9c3f-5e98d0ae619b

//re-create the OSD in the cluster with the same fsid
[root@ceph03 ceph-2]# ceph osd create 57df2d3e-2f53-4143-9c3f-5e98d0ae619b
//re-add its authentication key
[root@ceph03 ceph-2]# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
//set the CRUSH weight: ceph osd crush add <osd-id> <weight> host=<hostname>
[root@ceph03 ceph-2]# ceph osd crush add 2 0.99899 host=ceph03
//mark the OSD back in
[root@ceph03 ceph-2]# ceph osd in osd.2
//restart the OSD service on every node
systemctl restart ceph-osd.target
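The recovery sequence can likewise be scripted, reading the fsid from the OSD's data directory instead of copying it by hand. A sketch under the same assumptions as before; `DRY_RUN` is a switch invented for this sketch, and the fallback fsid is the one shown above so the sketch can run without a cluster:

```shell
#!/bin/sh
# Sketch: restore a failed OSD, parameterized by id, weight, and host.
# DRY_RUN defaults to 1 (print only); set DRY_RUN=0 on the failed node.
OSD_ID=2
WEIGHT=0.99899
HOST=ceph03
OSD_DIR="/var/lib/ceph/osd/ceph-$OSD_ID"
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

# read the fsid from the data directory; the fallback value here is the
# fsid from the walkthrough above, used only when the directory is absent
FSID=$( [ -r "$OSD_DIR/fsid" ] && cat "$OSD_DIR/fsid" || echo "57df2d3e-2f53-4143-9c3f-5e98d0ae619b" )

run ceph osd create "$FSID"                                             # re-create with same fsid
run ceph auth add "osd.$OSD_ID" osd 'allow *' mon 'allow rwx' -i "$OSD_DIR/keyring"
run ceph osd crush add "$OSD_ID" "$WEIGHT" "host=$HOST"                 # restore CRUSH weight
run ceph osd in "osd.$OSD_ID"                                           # mark it back in
run systemctl restart ceph-osd.target
```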

Checking with ceph osd tree shows that the OSD on ceph03 has been restored.

References:

Preflight — Ceph Documentation

CEPH环境搭建01 — Neohope's Blog

学会Ceph群集搭建一篇就够了!!! — 下一个艺术家, CSDN blog