Setting Up a Redis Cluster


A Redis cluster can be set up in two ways:

  • manually
  • with redis-cli

Setting up the cluster involves three steps:

  1. Prepare the nodes
  2. Node handshake
  3. Assign slots

I. Manual setup

We will build a cluster of six nodes (the minimum recommended size for a working cluster): three masters and three replicas.

1. Prepare the nodes

Figure: Redis cluster-mode startup process

6379.conf

port 6379 # node port
cluster-enabled yes # enable cluster mode
cluster-node-timeout 15000 # node timeout, in milliseconds
cluster-config-file "nodes-6379.conf" # cluster node configuration file
dir "/mnt/d/Dev_software/Server/redis/redis/data/"
logfile "6379.log"
dbfilename "dump-6379.rdb"
appendfilename "appendonly-6379.aof"
daemonize yes

Copy 6379.conf and adjust the port-specific settings to get 6380.conf, 6381.conf, 6382.conf, 6383.conf, and 6384.conf, for example with the loop sketched below.
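
A minimal sketch of that copy-and-edit step, assuming the six files live in a conf/ directory (the directory name is an assumption; adjust to your layout). Replacing every occurrence of 6379 also updates the log, RDB, AOF, and cluster-config-file names, since they all embed the port:

# Derive the other five config files from 6379.conf
for port in 6380 6381 6382 6383 6384; do
  sed "s/6379/${port}/g" conf/6379.conf > conf/${port}.conf
done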

Notes:

  • The cluster node configuration file (nodes-6379.conf) is separate from the regular Redis configuration file; Redis creates and maintains it itself.

Start the nodes (once per config file):

./src/redis-server conf/637{x}.conf
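
Or, as a loop over all six ports (same conf/ layout as assumed above; daemonize yes sends each server to the background):

for port in 6379 6380 6381 6382 6383 6384; do
  ./src/redis-server conf/${port}.conf
done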

Check that the nodes are up:

$ ps -ef|grep redis
liudh       94    10  0 15:20 ?        00:00:00 ./src/redis-server *:6379 [cluster]
liudh      139    10  0 15:32 ?        00:00:00 ./src/redis-server *:6380 [cluster]
liudh      144    10  0 15:32 ?        00:00:00 ./src/redis-server *:6381 [cluster]
liudh      149    10  0 15:32 ?        00:00:00 ./src/redis-server *:6382 [cluster]
liudh      154    10  0 15:32 ?        00:00:00 ./src/redis-server *:6383 [cluster]
liudh      159    10  0 15:32 ?        00:00:00 ./src/redis-server *:6384 [cluster]

View node information:

127.0.0.1:6379> cluster nodes
fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e :6379@16379 myself,master - 0 0 0 connected

The same information is persisted in the nodes-6379.conf file, created under the directory specified by dir.

2. Node handshake

Node handshake: a group of nodes running in cluster mode get to know each other and then communicate via the Gossip protocol.

Figure: node handshake driven by the cluster meet command

The handshake proceeds as follows:

  1. Node A sends a meet message to node B.
  2. Node B accepts the meet message, saves node A's information, and replies with a pong message.
  3. From then on, nodes A and B communicate normally via ping/pong messages.

cluster meet

It is enough to run cluster meet from any single node in the cluster; the handshake state spreads through the cluster via Gossip messages.

127.0.0.1:6379> cluster meet 127.0.0.1 6380
OK
127.0.0.1:6379> cluster meet 127.0.0.1 6381
OK
127.0.0.1:6379> cluster meet 127.0.0.1 6382
OK
127.0.0.1:6379> cluster meet 127.0.0.1 6383
OK
127.0.0.1:6379> cluster meet 127.0.0.1 6384
OK
  • cluster meet is asynchronous: it returns immediately, before the handshake completes
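
The five meet commands above can equally be issued from the shell in one loop (a sketch, run from the same src/ directory as before):

# Introduce the five other nodes to 6379; Gossip propagates the rest
for port in 6380 6381 6382 6383 6384; do
  ./src/redis-cli -p 6379 cluster meet 127.0.0.1 ${port}
done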

View the nodes:

> cluster nodes
b58a974ac674814f54c92732f55df1cbf23e8571 127.0.0.1:6384@16384 master - 0 1647071659000 5 connected
d1bbf0d4f98ab120e99a4d04ec663ac18bfff6f3 127.0.0.1:6381@16381 master - 0 1647071660000 2 connected
d71815f1f17beb57f548fff1d032eb3558d2860c 127.0.0.1:6382@16382 master - 0 1647071660000 3 connected
9aa83361f75e27683f36b7486db898cb320facb5 127.0.0.1:6383@16383 master - 0 1647071660815 4 connected
74b54becda3093173c1bd9217794f8ac0c7939dc 127.0.0.1:6380@16380 master - 0 1647071662821 1 connected
fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e 127.0.0.1:6379@16379 myself,master - 0 1647071661000 0 connected

After the handshake the cluster is still down, and all data reads and writes are rejected:

127.0.0.1:6379> set hello world
(error) CLUSTERDOWN Hash slot not served
> cluster info
cluster_state:fail # cluster is unavailable
cluster_slots_assigned:0 # no slots assigned
cluster_slots_ok:0 # no slots available

3. Assign slots

Redis Cluster maps all data to 16384 hash slots: a key's slot is CRC16(key) mod 16384 (computed over the hash tag if the key contains one), and each master serves a subset of the slots.
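
Any node can tell you which slot a key maps to via the CLUSTER KEYSLOT command (the key names here are just examples):

# With a hash tag like {user}, only the part between { and } is hashed,
# so related keys can be forced into the same slot
./src/redis-cli -p 6379 cluster keyslot user:1000
./src/redis-cli -p 6379 cluster keyslot '{user}:1000'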

Assign the slots to the three masters:

liudh@DESKTOP-1LNEQAE:/mnt/d/Dev_Software/Server/redis/redis/src$ ./redis-cli -p 6379 cluster addslots {0..5461}
OK
liudh@DESKTOP-1LNEQAE:/mnt/d/Dev_Software/Server/redis/redis/src$ ./redis-cli -p 6380 cluster addslots {5462..10922}
OK
liudh@DESKTOP-1LNEQAE:/mnt/d/Dev_Software/Server/redis/redis/src$ ./redis-cli -p 6381 cluster addslots {10923..16383}
OK
liudh@DESKTOP-1LNEQAE:/mnt/d/Dev_Software/Server/redis/redis/src$ ./redis-cli -p 6382 cluster addslots {16384..16385}
(error) ERR Invalid or out of range slot

Valid slots lie in [0, 16383]; anything outside that range is rejected. The {0..5461} syntax is bash brace expansion, which expands into one argument per slot number for cluster addslots.

View the slot assignment:

> cluster nodes
b58a974ac674814f54c92732f55df1cbf23e8571 127.0.0.1:6384@16384 master - 0 1647072747000 5 connected
d1bbf0d4f98ab120e99a4d04ec663ac18bfff6f3 127.0.0.1:6381@16381 master - 0 1647072748000 2 connected 10923-16383
d71815f1f17beb57f548fff1d032eb3558d2860c 127.0.0.1:6382@16382 master - 0 1647072749470 3 connected
9aa83361f75e27683f36b7486db898cb320facb5 127.0.0.1:6383@16383 master - 0 1647072749000 4 connected
74b54becda3093173c1bd9217794f8ac0c7939dc 127.0.0.1:6380@16380 master - 0 1647072747463 1 connected 5462-10922
fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e 127.0.0.1:6379@16379 myself,master - 0 1647072748000 0 connected 0-5461
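
With all 16384 slots now covered, the cluster comes online; cluster_state should have flipped from fail to ok:

./src/redis-cli -p 6379 cluster info | grep cluster_state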

4. Configure the replicas

cluster replicate {nodeId}

127.0.0.1:6382> cluster replicate fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e
127.0.0.1:6383> cluster replicate 74b54becda3093173c1bd9217794f8ac0c7939dc
127.0.0.1:6384> cluster replicate d1bbf0d4f98ab120e99a4d04ec663ac18bfff6f3
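
The same pairing can be scripted with CLUSTER MYID, which returns a node's own ID, instead of copying the IDs by hand (a sketch; replica-to-master pairings as above):

# 6382 -> 6379, 6383 -> 6380, 6384 -> 6381
for pair in "6379 6382" "6380 6383" "6381 6384"; do
  set -- $pair   # $1 = master port, $2 = replica port
  master_id=$(./src/redis-cli -p "$1" cluster myid)
  ./src/redis-cli -p "$2" cluster replicate "$master_id"
done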

View node status:

> cluster nodes
74b54becda3093173c1bd9217794f8ac0c7939dc 127.0.0.1:6380@16380 master - 0 1647073059131 1 connected 5462-10922
fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e 127.0.0.1:6379@16379 master - 0 1647073059000 0 connected 0-5461
9aa83361f75e27683f36b7486db898cb320facb5 127.0.0.1:6383@16383 slave 74b54becda3093173c1bd9217794f8ac0c7939dc 0 1647073060000 1 connected
d1bbf0d4f98ab120e99a4d04ec663ac18bfff6f3 127.0.0.1:6381@16381 master - 0 1647073061647 2 connected 10923-16383
b58a974ac674814f54c92732f55df1cbf23e8571 127.0.0.1:6384@16384 myself,slave d1bbf0d4f98ab120e99a4d04ec663ac18bfff6f3 0 1647073061000 2 connected
d71815f1f17beb57f548fff1d032eb3558d2860c 127.0.0.1:6382@16382 slave fdcb45e7efbcefc422e1ab9aaea1b2e0d859e08e 0 1647073060270 0 connected

All three replicas are now attached to their masters.

Master-replica replication in cluster mode still supports both full and partial resynchronization.
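
To confirm a replication link is healthy, inspect any replica directly, e.g. 6382 as configured above:

# Reports role:slave plus the master's host, port, and link status
./src/redis-cli -p 6382 info replication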

5. Test the cluster

Start the client with the -c flag to enable cluster mode; without it, any command whose key hashes to a slot on another node fails with a MOVED error instead of following the redirection.

$ ./redis-cli -c -p 6379
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 127.0.0.1:6381
OK
127.0.0.1:6381> get a
"b"
127.0.0.1:6381> set b c
-> Redirected to slot [3300] located at 127.0.0.1:6379
OK
127.0.0.1:6379> get b
"c"
127.0.0.1:6379> keys *
1) "b"
  • Different keys land in different slots, and therefore possibly on different nodes
  • keys * only returns the keys held by the current node; see the sketch below for querying every node
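
To run a command against every node in one go, redis-cli provides --cluster call:

# Broadcasts the command to all known nodes and prints each node's reply.
# KEYS is O(N): fine on a test cluster, avoid on production data.
./src/redis-cli --cluster call 127.0.0.1:6379 keys '*'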

II. Setup with redis-cli

This walkthrough uses Redis 6.x: the cluster-creation tooling that older releases shipped as the redis-trib.rb script has been folded into redis-cli (as the --cluster subcommands) since Redis 5.0.

1. Prepare the nodes

The per-node configuration is the same as in Part I:

    port 6379 # node port
    cluster-enabled yes # enable cluster mode
    cluster-node-timeout 15000 # node timeout, in milliseconds
    cluster-config-file "nodes-6379.conf" # cluster node configuration file
    dir "/mnt/d/Dev_software/Server/redis/redis/data/"
    logfile "6379.log"
    dbfilename "dump-6379.rdb"
    appendfilename "appendonly-6379.aof"
    daemonize yes
$ ./src/redis-server conf/6379.conf
$ ./src/redis-server conf/6380.conf
$ ./src/redis-server conf/6381.conf
$ ./src/redis-server conf/6382.conf
$ ./src/redis-server conf/6383.conf
$ ./src/redis-server conf/6384.conf

2. Create the cluster

$ ./src/redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
  • --cluster-replicas 1: give each master one replica
>>> Performing hash slots allocation on 6 nodes...
-- slot allocation
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
-- assigning replicas to masters
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 0e0e1c0f3a01aafe694c844f2422817b4ed6883a 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 98efc4d4cd26e60bce874193615ba369416a91a4 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: 414dfc5d2c47c64ab75fbaa75f69130efac4117e 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: e09a111f7f43e035910d042fc1d26c1264e6da3a 127.0.0.1:6382
   replicates 0e0e1c0f3a01aafe694c844f2422817b4ed6883a
S: 4d443bface811e42b5e3b5c300dcc4b9268b561c 127.0.0.1:6383
   replicates 98efc4d4cd26e60bce874193615ba369416a91a4
S: b882aed5a94132fff0e521bc9c9a879a5e3f368e 127.0.0.1:6384
   replicates 414dfc5d2c47c64ab75fbaa75f69130efac4117e
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
-- cluster check
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 0e0e1c0f3a01aafe694c844f2422817b4ed6883a 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 98efc4d4cd26e60bce874193615ba369416a91a4 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: b882aed5a94132fff0e521bc9c9a879a5e3f368e 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 414dfc5d2c47c64ab75fbaa75f69130efac4117e
S: 4d443bface811e42b5e3b5c300dcc4b9268b561c 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 98efc4d4cd26e60bce874193615ba369416a91a4
S: e09a111f7f43e035910d042fc1d26c1264e6da3a 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 0e0e1c0f3a01aafe694c844f2422817b4ed6883a
M: 414dfc5d2c47c64ab75fbaa75f69130efac4117e 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
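
The same check can be re-run at any time, pointed at any node of the cluster:

# Re-verifies slot coverage and that all nodes agree on the configuration
./src/redis-cli --cluster check 127.0.0.1:6379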