Configuring highly available MongoDB on Docker Swarm


(sharding, replica sets, elections, routing)

Part 1: Environment prerequisites

Docker version: Docker version 20.10.1, build 831ebea

MongoDB version: 4.4.3 (matching the mongo:v4.4.3 image used throughout)

Server OS: CentOS Linux release 7.7.1908 (Core)

1. Prepare three servers that can reach one another, for example:

10.11.32.24

10.11.32.25

10.11.32.26

2. Install Docker on all three servers

Configure daemon.conf.

Create the docker.service file on all three servers (full content in the appendix):

vi /lib/systemd/system/docker.service 

Restart Docker:

systemctl daemon-reload
systemctl restart docker
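The post references configuring the daemon file but never shows its contents. A minimal sketch, assuming the only non-default setting needed is the private registry address from the appendix (the standard location is /etc/docker/daemon.json; adjust if your setup differs):

```json
{
  "insecure-registries": ["10.11.32.23:5000"]
}
```

This must agree with the --insecure-registry flag in docker.service, otherwise pulls from the private registry will fail.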

3. MongoDB cluster architecture (three shard replica sets, a config server replica set, and mongos routers)

Part 2: Creating the swarm cluster and its network

1. Initialize the swarm on one server

docker swarm init --advertise-addr 10.11.32.24   # this machine's own IP

Run the printed join command on the other two servers:

docker swarm join --token SWMTKN-1-42w2dmkv2u18k9d6vi65rhqggjcqcc31iijnecd4bbi43bmom2-b3o89g28thkcken0v3qxtczuj 172.16.10.85:2377

If you have lost the token, run:

docker swarm join-token manager   # prints the command for joining as a manager

Check the cluster members:

docker node ls  

On a manager node, promote the other worker nodes to managers:

docker node promote <worker-node-hostname-1>
docker node promote <worker-node-hostname-2>

2. Create the cluster network on a manager node

docker network create -d overlay --attachable GIE-IOT-mongodbs

The network name must match the external network declared at the bottom of stack.yml.

--attachable allows standalone containers to join this network.

List the networks:

docker network ls

Part 3: Setting up the MongoDB cluster

1. Create the directories and the stack file

Create the data directories on every server:

for d in shard1 shard2 shard3 config1 config2 config3; do
  mkdir -p /data/docker/composes/prod/middleware/mongo_cluster/$d/{backup,log,data,conf}
done

Create the stack.yml file on the leader (a manager node):

version: '3.7'
services:
  rs_1_1:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node03
  rs_2_1:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node04
  rs_3_1:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node05
  rs_1_2:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node04
  rs_2_2:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node05
  rs_3_2:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node03
  rs_1_3:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard1/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node05
  rs_2_3:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard2/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node03
  rs_3_3:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/shard3/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node04
  config_1:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --configsvr --replSet config --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/config1/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/config1/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/config1/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/config1/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node03
  config_2:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --configsvr --replSet config --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/config2/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/config2/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/config2/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/config2/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node04
  config_3:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongod --configsvr --replSet config --dbpath /data/db --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/docker/composes/prod/middleware/mongo_cluster/config3/backup:/data/backup
      - /data/docker/composes/prod/middleware/mongo_cluster/config3/log:/data/log
      - /data/docker/composes/prod/middleware/mongo_cluster/config3/data:/data/db
      - /data/docker/composes/prod/middleware/mongo_cluster/config3/conf:/data/conf
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == GIE-EC-Node05
  route:
    image: 10.11.32.23:5000/mongo:v4.4.3
    command: mongos --configdb config/config_1:27017,config_2:27017,config_3:27017 --bind_ip_all --port 27017
    networks:
      - GIE-IOT-mongodbs
    ports:
      - 27017:27017
    environment:
      - TZ=Asia/Shanghai
    deploy:
      mode: global
networks:
  GIE-IOT-mongodbs:
    external: true

Deploy the stack from a manager node:

docker stack deploy -c stack.yml mongo

Check that the services started, on a manager node:

docker service ls

2. Initialize the cluster

[node1] Initialize the config server replica set; run the following on the server hosting the config_1 container:

docker exec -it $(docker ps | grep "config_1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"config\", configsvr: true, members: [{ _id : 0, host : \"config_1\" },{ _id : 1, host : \"config_2\" },{ _id : 2, host : \"config_3\" }]})' | mongo"

Initialize the three shard replica sets, each on the server hosting its rs_X_1 container:

On the server hosting rs_1_1:
docker exec -it $(docker ps | grep "rs_1_1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"rs_1_1\" },{ _id : 1, host : \"rs_1_2\" },{ _id : 2, host : \"rs_1_3\"}]})' | mongo"

On the server hosting rs_2_1:
docker exec -it $(docker ps | grep "rs_2_1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"rs_2_1\" },{ _id : 1, host : \"rs_2_2\" },{ _id : 2, host : \"rs_2_3\"}]})' | mongo"

On the server hosting rs_3_1:
docker exec -it $(docker ps | grep "rs_3_1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"rs_3_1\" },{ _id : 1, host : \"rs_3_2\" },{ _id : 2, host : \"rs_3_3\"}]})' | mongo"

Register the three shard replica sets with mongos; run the following on any server (the route service runs on every node):

docker exec -it $(docker ps | grep "route" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/rs_1_1:27017,rs_1_2:27017,rs_1_3:27017\")' | mongo"

docker exec -it $(docker ps | grep "route" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/rs_2_1:27017,rs_2_2:27017,rs_2_3:27017\")' | mongo"

docker exec -it $(docker ps | grep "route" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/rs_3_1:27017,rs_3_2:27017,rs_3_3:27017\")' | mongo"
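The rs.initiate and sh.addShard snippets above all follow one pattern, so a small helper script (illustrative only, not part of the original setup) can generate them and avoid copy-paste slips:

```python
def rs_initiate_js(rs_name, members):
    """Build the rs.initiate() mongo-shell command from member hostnames."""
    body = ", ".join(
        '{{ _id : {i}, host : "{h}" }}'.format(i=i, h=h) for i, h in enumerate(members)
    )
    return 'rs.initiate({{_id : "{0}", members: [{1}]}})'.format(rs_name, body)

def add_shard_js(rs_name, members, port=27017):
    """Build the sh.addShard() command for one shard replica set."""
    hosts = ",".join("{0}:{1}".format(h, port) for h in members)
    return 'sh.addShard("{0}/{1}")'.format(rs_name, hosts)

# Generate the commands for all three shards of this stack:
for n in (1, 2, 3):
    members = ["rs_{0}_{1}".format(n, m) for m in (1, 2, 3)]
    print(rs_initiate_js("shard{0}".format(n), members))
    print(add_shard_js("shard{0}".format(n), members))
```

The printed strings can then be piped into `mongo` exactly as in the docker exec commands above.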

Internal clients: containers attached to the overlay network connect via route:27017 (the mongos service name).

External clients: connect via IP:27017, where the IP can be any of the three swarm nodes; the ingress routing mesh forwards the connection to a mongos instance.
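As a small illustration of those two connection paths, a hypothetical stdlib-only helper that builds the connection URI a client would use:

```python
def mongos_uri(hosts, port=27017):
    """Build a MongoDB connection URI from one or more mongos endpoints."""
    return "mongodb://" + ",".join("{0}:{1}".format(h, port) for h in hosts)

# Inside the overlay network, the service name resolves via Docker's DNS:
internal = mongos_uri(["route"])  # mongodb://route:27017
# From outside, any swarm node's IP reaches mongos through the routing mesh;
# listing all three lets the driver fail over if one node goes down:
external = mongos_uri(["10.11.32.24", "10.11.32.25", "10.11.32.26"])
print(internal)
print(external)
```

Drivers such as pymongo accept either URI directly.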

Test data

 # [node1] connect to the cluster through mongos
 docker exec -it $(docker ps | grep "route" | awk '{ print $1 }') mongo route:27017
 
 # [node1] switch to (and implicitly create) the database
 use test_db
 
# [node1] create a collection
db.createCollection("user")

# [node1] enable sharding for the database
 sh.enableSharding("test_db")

# [node1] shard the collection on a hashed _id key
sh.shardCollection( "test_db.user", {_id:"hashed"} )

# [node1] insert test data
for(var i = 0; i < 1000; i++) { db.user.insert({_id:i, name:"test" + i}) }

# [node1] query the data
db.user.find().pretty()

# [node1] check how the documents are distributed across shards
db.user.getShardDistribution()
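To see why a hashed shard key spreads the 1000 inserts roughly evenly, here is a toy simulation. Note this is NOT MongoDB's actual hash function (MongoDB uses its own 64-bit hash of the BSON value); it only illustrates the idea of hash-based placement:

```python
import hashlib

def toy_shard_for(key, n_shards=3):
    """Map a key to a shard by hashing it; illustrative stand-in for
    MongoDB's hashed index, using MD5 mod n instead of the real hash."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Simulate the 1000 sequential _id inserts from the test above:
counts = [0, 0, 0]
for i in range(1000):
    counts[toy_shard_for(i)] += 1
print(counts)  # each shard receives roughly a third of the documents
```

With a ranged (non-hashed) key, sequential _id values would instead pile onto a single shard until the balancer splits chunks; hashing avoids that hot spot.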

Appendix

1. docker.service template

--insecure-registry 10.11.32.23:5000 is the newly added private registry address.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
  
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry 10.11.32.23:5000
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
  
[Install]
WantedBy=multi-user.target

2. Pitfalls encountered

1. systemctl start docker fails with "Unit docker.service has failed"

Fix:

1) Create the docker.service file

2) systemctl daemon-reload

3) systemctl restart docker

Some reports attribute this to socket issues; in the end a server reboot resolved it.

2. Image pull failures

Fix: add --insecure-registry 10.11.32.23:5000 to docker.service, matching the private registry address in daemon.conf.

3. docker service ls shows zero replicas, or containers fail to start

Likely causes: the image pull failed, or the data directories were not created on the other servers.

Fix: docker service ps [serviceID] --no-trunc shows the detailed error.
