Dusk Edition: Building a Redis Cluster with Docker, Cluster Scale-Out (next section), Cluster Scale-In (next section)


This walkthrough uses the redis:6.0.8 image.

# Start the 8 Redis containers (only the first 6 join the initial cluster; 7 and 8 are for the scale-out later)
# --net host              share the host's network stack (the container uses the host's IP and ports directly)
# --privileged=true       run the container with extended privileges so it can fully access the mounted volume
# --cluster-enabled yes   start Redis in cluster mode
# --appendonly yes        enable AOF persistence
# --port 6381             set the Redis listening port

docker run -d --name redis-node-1 --net host --privileged=true -v /docker/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-node-2 --net host --privileged=true -v /docker/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-node-3 --net host --privileged=true -v /docker/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383

docker run -d --name redis-node-4 --net host --privileged=true -v /docker/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384

docker run -d --name redis-node-5 --net host --privileged=true -v /docker/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-node-6 --net host --privileged=true -v /docker/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386

docker run -d --name redis-node-7 --net host --privileged=true -v /docker/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387

docker run -d --name redis-node-8 --net host --privileged=true -v /docker/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
Spin up the Redis container environment (a loop version of these commands is sketched below):
  1. Start the first 6 containers (redis-node-1 through redis-node-6); redis-node-7 and redis-node-8 are reserved for the scale-out in the next section.
  2. Run docker ps to confirm that all 6 containers are up.
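The eight docker run commands above differ only in the node number, so a small loop can start them all. A minimal sketch, assuming bash and the same image, mount paths, and ports as above:

# Start redis-node-1 .. redis-node-8 on ports 6381 .. 6388
for i in $(seq 1 8); do
  docker run -d --name redis-node-$i --net host --privileged=true \
    -v /docker/redis/share/redis-node-$i:/data redis:6.0.8 \
    --cluster-enabled yes --appendonly yes --port 638$i
done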
## Build the master/replica topology
# Enter any one of the redis containers
docker exec -it redis-node-1 /bin/bash

# Create the cluster (172.17.0.1 is my host's IP; --cluster-replicas 1 gives each master one replica, so 6 nodes form 3 masters + 3 replicas)
redis-cli --cluster create 172.17.0.1:6381 172.17.0.1:6382 172.17.0.1:6383 172.17.0.1:6384 172.17.0.1:6385 172.17.0.1:6386 --cluster-replicas 1

# If the command fails with [ERR] Node 172.17.0.1:6381 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0., the mounted data directories still hold files from a previous Redis run; delete them and retry (‼️ back up anything important first).
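
# Recovery sketch, assuming the container names and -v mount paths used above
# (back up anything you still need first):
docker stop redis-node-{1..6}                    # stop the affected nodes
rm -rf /docker/redis/share/redis-node-{1..6}/*   # remove stale nodes.conf / AOF / RDB files
docker start redis-node-{1..6}                   # restart with clean data dirs, then re-run the create command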

# The command then prints the plan below 👇 type yes to accept it.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.1:6385 to 172.17.0.1:6381
Adding replica 172.17.0.1:6386 to 172.17.0.1:6382
Adding replica 172.17.0.1:6384 to 172.17.0.1:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: dc34b05da968c12d68330794c923e073ea722ab4 172.17.0.1:6381
   slots:[0-5460] (5461 slots) master
M: 6b5b2673c25da6ee301bf9d21245106f8956d1f4 172.17.0.1:6382
   slots:[5461-10922] (5462 slots) master
M: 31325c21cfca77be68e6bd2f0723c6e53c470e48 172.17.0.1:6383
   slots:[10923-16383] (5461 slots) master
S: 8ee1d6f33f12cd22637ed70807d39fa1d6929fb2 172.17.0.1:6384
   replicates 6b5b2673c25da6ee301bf9d21245106f8956d1f4
S: 4c72c444e14385af96e05183aea897a9d6d44474 172.17.0.1:6385
   replicates 31325c21cfca77be68e6bd2f0723c6e53c470e48
S: 14e4d6eab97fd738383b982bdd39fb05500db668 172.17.0.1:6386
   replicates dc34b05da968c12d68330794c923e073ea722ab4
Can I set the above configuration? (type 'yes' to accept): yes

# After you type yes, the hash slots are assigned automatically.
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 172.17.0.1:6381)
M: dc34b05da968c12d68330794c923e073ea722ab4 172.17.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6b5b2673c25da6ee301bf9d21245106f8956d1f4 10.0.12.8:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 14e4d6eab97fd738383b982bdd39fb05500db668 10.0.12.8:6386
   slots: (0 slots) slave
   replicates dc34b05da968c12d68330794c923e073ea722ab4
S: 4c72c444e14385af96e05183aea897a9d6d44474 10.0.12.8:6385
   slots: (0 slots) slave
   replicates 31325c21cfca77be68e6bd2f0723c6e53c470e48
S: 8ee1d6f33f12cd22637ed70807d39fa1d6929fb2 10.0.12.8:6384
   slots: (0 slots) slave
   replicates 6b5b2673c25da6ee301bf9d21245106f8956d1f4
M: 31325c21cfca77be68e6bd2f0723c6e53c470e48 10.0.12.8:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
## Check the cluster status
# Enter any one of the cluster containers
docker exec -it redis-node-2 /bin/bash

# Connect to the node on port 6381 (any container can reach it, since all share the host's network)
root@VM-12-8-centos:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:208
cluster_stats_messages_pong_sent:220
cluster_stats_messages_sent:428
cluster_stats_messages_ping_received:215
cluster_stats_messages_pong_received:208
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:428

# List the cluster's nodes and their roles
cluster nodes

6b5b2673c25da6ee301bf9d21245106f8956d1f4 10.0.12.8:6382@16382 master - 0 1700051193160 2 connected 5461-10922
14e4d6eab97fd738383b982bdd39fb05500db668 10.0.12.8:6386@16386 slave dc34b05da968c12d68330794c923e073ea722ab4 0 1700051194163 1 connected
dc34b05da968c12d68330794c923e073ea722ab4 172.17.0.1:6381@16381 myself,master - 0 1700051191000 1 connected 0-5460
4c72c444e14385af96e05183aea897a9d6d44474 10.0.12.8:6385@16385 slave 31325c21cfca77be68e6bd2f0723c6e53c470e48 0 1700051191000 3 connected
8ee1d6f33f12cd22637ed70807d39fa1d6929fb2 10.0.12.8:6384@16384 slave 6b5b2673c25da6ee301bf9d21245106f8956d1f4 0 1700051191000 2 connected
31325c21cfca77be68e6bd2f0723c6e53c470e48 10.0.12.8:6383@16383 master - 0 1700051192157 3 connected 10923-16383
🔥🔥🔥 Note ⚠️: the cluster is now up. To read and write data you must connect in cluster mode with redis-cli -p 6381 -c; a plain (non -c) connection will reject any key whose hash slot lives on another node with a MOVED error.
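For example (illustrative session; the key k1 hashes to slot 12706, which the allocation above assigned to the master on 6383):

# Without -c, a write whose slot lives on another node is refused
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 10.0.12.8:6383

# With -c, the client follows the redirect transparently
root@VM-12-8-centos:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 10.0.12.8:6383
OK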
## Cluster health check
# Enter any one of the cluster containers
docker exec -it redis-node-3 /bin/bash

# Point --cluster check at any node's address (a master or a replica both work; 6384 here is a replica)
redis-cli --cluster check 10.0.12.8:6384

10.0.12.8:6381 (dc34b05d...) -> 0 keys | 5461 slots | 1 slaves.
10.0.12.8:6382 (6b5b2673...) -> 0 keys | 5462 slots | 1 slaves.
10.0.12.8:6383 (31325c21...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.0.12.8:6384)
S: 8ee1d6f33f12cd22637ed70807d39fa1d6929fb2 10.0.12.8:6384
   slots: (0 slots) slave
   replicates 6b5b2673c25da6ee301bf9d21245106f8956d1f4
S: 4c72c444e14385af96e05183aea897a9d6d44474 10.0.12.8:6385
   slots: (0 slots) slave
   replicates 31325c21cfca77be68e6bd2f0723c6e53c470e48
M: dc34b05da968c12d68330794c923e073ea722ab4 10.0.12.8:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6b5b2673c25da6ee301bf9d21245106f8956d1f4 10.0.12.8:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 14e4d6eab97fd738383b982bdd39fb05500db668 10.0.12.8:6386
   slots: (0 slots) slave
   replicates dc34b05da968c12d68330794c923e073ea722ab4
M: 31325c21cfca77be68e6bd2f0723c6e53c470e48 10.0.12.8:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
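As a final sanity check, any node can report which slot a key hashes to, which tells you which master will serve it; for example:

# Which of the 16384 slots does key k1 map to?
127.0.0.1:6381> cluster keyslot k1
(integer) 12706

Slot 12706 falls inside the 10923-16383 range, so reads and writes for k1 are served by the master on 6383.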