Ceph Single-Node Deployment (Offline Learning Environment)


Background

This is purely for learning: I wanted to build my own Ceph environment to get a rough feel for what each component does and to support deeper study later. My budget is limited, I don't have spare machines, and I didn't want to borrow company resources for personal study. I first tried installing in VMware on my low-spec PC from 2014 (2 cores, 4 GB RAM, 512 GB HDD); after a few attempts it did come up, but the OSDs were never configured properly and everything was painfully slow, and I didn't want to burn more time at that stage. So I bought a cheap cloud host instead, mainly for the cloud disks, which turn out to be very cheap: a 20 GB disk is ¥7/month, and the lowest-tier host is ¥80/month (Tencent Cloud). Below I record my deployment steps so I can redeploy quickly later without repeating the work.

Environment

OS: CentOS 7.4 (nothing else was considered)
Storage: 2 raw disks

Deployment

  • Installing ceph-deploy
This walkthrough uses ceph-deploy for a quick deployment, so install it first (typically yum -y install ceph-deploy once the Ceph noarch repo below is configured). If you want a newer version, wget the rpm from the official site and install it manually.
  • Installing the Ceph packages
First, swap in a faster yum mirror:
1.mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
2.wget -O /etc/yum.repos.d/CentOS-Base.repo mirrors.aliyun.com/repo/Centos…
3.yum clean all
4.yum makecache
Then configure the Ceph repo:
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
Once configured, run: yum update
mkdir /root/myceph
cd /root/myceph
ceph-deploy new VM_0_11_centos
Adjust the replica settings in ceph.conf, otherwise OSD writes will fail, because the required number of replicas can never be completed on a single node:
[global]
osd pool default size = 1
osd pool default min size = 1
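The two settings above must sit in the [global] section of the ceph.conf that ceph-deploy new generated under /root/myceph. A minimal sketch of appending and verifying them, run here against a throwaway copy (the path and the sample fsid are assumptions):

```shell
# Scratch copy; on the real node this is /root/myceph/ceph.conf.
conf=$(mktemp)
printf '[global]\nfsid = 00000000-0000-0000-0000-000000000000\n' > "$conf"

# Append the single-node replica settings under [global].
printf 'osd pool default size = 1\nosd pool default min size = 1\n' >> "$conf"

# Both lines should be present before running ceph-deploy install.
grep -c 'osd pool default' "$conf"
```

ceph-deploy distributes this file to the node during install, so newly created pools pick up the size-1 defaults.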
Run: ceph-deploy install --release luminous VM_0_11_centos
If the install step above hangs or fails, kill it and run yum -y install ceph directly instead.
  • Initialize the mon
ceph-deploy mon create-initial
ceph-deploy admin VM_0_11_centos
Verify: cd /root/myceph && ceph -s
  • Deploy the mgr
ceph-deploy mgr create VM_0_11_centos
  • Deploy the OSDs
Create two kinds of OSDs: one built from three logical volumes (data, DB, and WAL on separate LVs, simulating different storage media), and one on a raw disk.
LV-backed:
$ pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
$ vgcreate ceph-pool /dev/sdb
Volume group "ceph-pool" successfully created
$ lvcreate -n osd0.wal -L 1G ceph-pool
Logical volume "osd0.wal" created.
$ lvcreate -n osd0.db -L 1G ceph-pool
Logical volume "osd0.db" created.
$ lvcreate -n osd0 -l 100%FREE ceph-pool
Logical volume "osd0" created.

ceph-deploy osd create \
--data ceph-pool/osd0 \
--block-db ceph-pool/osd0.db \
--block-wal ceph-pool/osd0.wal \
--bluestore VM_0_11_centos

Raw-disk-backed:

ceph-deploy osd create --bluestore VM_0_11_centos --data /dev/sdc
  • Install radosgw
ceph-deploy install --rgw VM_0_11_centos
ceph-deploy admin VM_0_11_centos
ceph-deploy rgw create VM_0_11_centos
Also, the network settings here must be correct, or the gateway will be unreachable.
Here is my configuration:
[global]
fsid = 5f4384c9-f63a-4c8a-b97a-45dcb523b3f9
ms_bind_ipv6 = true
mon_initial_members = VM_0_11_centos
mon_host = [172.21.0.11]
public network = 172.21.0.11/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1
[client.rgw.VM_0_11_centos]
host = VM_0_11_centos
rgw_enable_ops_log = true
rgw_frontends = "civetweb port=7480"
rgw dns name = s3.caoge.com
rgw socket path = /var/run/ceph-client.radosgw.sock
keyring = /etc/ceph/ceph.client.radosgw.keyring

If everything above worked, you should now have a usable Ceph cluster. Let's verify it with s3cmd.
First, create an access user:
sudo radosgw-admin user create --uid="test" --display-name="caoge"
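radosgw-admin prints the new user as JSON, and the credentials s3cmd needs sit in its keys array. A sketch of pulling them out with sed; the JSON here is a trimmed stand-in for the real output shape, with placeholder keys:

```shell
# Trimmed, assumed shape of `radosgw-admin user create` output.
json='{"user_id":"test","display_name":"caoge","keys":[{"user":"test","access_key":"AKEXAMPLE","secret_key":"SKEXAMPLE"}]}'

# Pull out the first access/secret key pair.
access_key=$(printf '%s' "$json" | sed -n 's/.*"access_key": *"\([^"]*\)".*/\1/p')
secret_key=$(printf '%s' "$json" | sed -n 's/.*"secret_key": *"\([^"]*\)".*/\1/p')
echo "$access_key $secret_key"
```

If you lose the output, radosgw-admin user info --uid=test prints the same JSON again.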
Install s3cmd:
yum -y install s3cmd
Configure the s3 client:
s3cmd --configure
This walks you through a series of prompts (access key, secret key, and so on). You can enter placeholder values for now, skip the final connection test, and answer yes to save the configuration.
vim ~/.s3cfg
The main fields to change:
access_key: the access key of the user created above
secret_key: the secret key of the user created above
host_base = your-ip:your-rgw-port
host_bucket = your-ip:your-rgw-port/%(bucket)
use_https = False
The s3cmd-with-Ceph link in the references explains concrete usage clearly; follow it and you should see your uploads and downloads working, which is enough to call the setup verified.
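Putting the fields together, the relevant part of ~/.s3cfg ends up roughly like this. The sketch writes it to a scratch file; the keys are placeholders, and the IP and default civetweb port are assumed values you would replace with your own:

```shell
cfg=$(mktemp)   # stand-in for ~/.s3cfg
cat > "$cfg" <<'EOF'
[default]
access_key = AKEXAMPLE
secret_key = SKEXAMPLE
host_base = 172.21.0.11:7480
host_bucket = 172.21.0.11:7480/%(bucket)
use_https = False
EOF

# With the real ~/.s3cfg in place, a smoke test would be:
#   s3cmd mb s3://testbucket
#   s3cmd put ceph.conf s3://testbucket
#   s3cmd ls s3://testbucket
grep -c ':7480' "$cfg"
```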

Monitoring

The basic steps follow the Ceph visual-monitoring article in the references; here I only record what differed for me. When editing
vim /etc/prometheus/prometheus.yml, pay attention to the YAML indentation, for example:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'ceph_exporter'
    static_configs:
      - targets: ['localhost:9128']

For grafana, wget the latest release rpm from the official site and install it, then continue following the Ceph visual-monitoring reference. If everything went smoothly, you should see the dashboards.

References:

Ceph official installation docs: docs.ceph.com/docs/master…

Ceph visual monitoring: www.jianshu.com/p/f0fae97d9…
grafana rpm downloads: grafana.com/grafana/dow…

OSD rebuild: www.strugglesquirrel.com/2018/11/20/…

Appendix:

Accessing Ceph with aws-sdk-go:

package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func test() {
	bucket := aws.String("BUCKET")
	key := aws.String("caogeobject")
	accessKey := "xxxx"
	secretKey := "xxxx"
	endPoint := "http://xxxxxxx:7480"
	myContentType := aws.String("application/zip")
	myACL := aws.String("public-read")
	metadataKey := "udf-metadata"
	metadataValue := "abc"
	myMetadata := map[string]*string{
		metadataKey: &metadataValue,
	}
	//config to use s3 server
	s3Config := &aws.Config {
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
		Endpoint: aws.String(endPoint),
		Region: aws.String("us"),
		DisableSSL: aws.Bool(true),
		S3ForcePathStyle: aws.Bool(true), // path-style requests; virtual-hosted style needs wildcard DNS for the rgw
	}
	newSession := session.New(s3Config)
	s3Client := s3.New(newSession)
	cparams := &s3.HeadBucketInput{
		Bucket: bucket,
	}
	_, err := s3Client.HeadBucket(cparams)
	if err != nil {
		fmt.Println("HeadBucket", err.Error())
		return
	}
	uploader := s3manager.NewUploader(newSession)
	filename := "data"
	f, err := os.Open(filename)
	if err != nil {
		fmt.Printf("failed to open file %q, %v\n", filename, err)
		return
	}
	defer f.Close()
	//upload the file to s3
	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: bucket,
		Key: key,
		Body: f,
		ContentType: myContentType,
		ACL: myACL,
		Metadata: myMetadata,
	}, func(u *s3manager.Uploader) {
		u.PartSize = 10 * 1024 * 1024
		u.LeavePartsOnError = true
		u.Concurrency = 3
	})
	if err != nil {
		fmt.Printf("Failed to upload data to %s/%s, %s\n", *bucket, *key, err.Error())
		return
	}
	fmt.Printf("file uploaded to %s\n", result.Location)
	dowFile := "data_ceph"
	file, err := os.Create(dowFile)
	if err != nil {
		fmt.Println("Failed to create file", err)
		return
	}
	defer file.Close()
	downloader := s3manager.NewDownloader(newSession)
	numBytes, err := downloader.Download(file,
		&s3.GetObjectInput{
			Bucket: bucket,
			Key: key,
		})
	if err != nil {
		fmt.Println("Failed to download file", err)
		return
	}
	fmt.Println("Download file", file.Name(), numBytes, "bytes")
	// delete the uploaded object to clean up
	_, err = s3Client.DeleteObject(&s3.DeleteObjectInput{
		Bucket: bucket,
		Key:    key,
	})
	if err != nil {
		fmt.Println("Failed to delete object", err)
		return
	}
	fmt.Println("Deleted object", *key)
}

func main() {
	test()
}
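One gotcha when running the program above: it opens a local file literally named data for the upload, so create a payload first (contents and size are arbitrary):

```shell
# Create the payload file the uploader expects.
printf 'hello ceph\n' > data

# Sanity check: 11 bytes.
wc -c < data
```

Then fill in your bucket, keys, and endpoint in the source and go run it (the SDK comes from github.com/aws/aws-sdk-go).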