Redis 6 Cluster Node Management


I. Overview

This document is an operations manual for managing nodes in a Redis 6.2 cluster. It covers adding a master node to an existing cluster, assigning hash slots, adding a slave node, and removing nodes.

II. Environment

OS: CentOS Linux release 7.6.1810 (Core)

Database: Redis server v=6.2.5

III. Steps

Background: the current environment is a 3-master / 3-slave Redis cluster deployed on 192.168.6.130, ports 9001-9006. Two nodes will be added, 9007 and 9008: 9007 joins as a new master and 9008 becomes its slave, turning the cluster into a 4-master / 4-slave setup. The 9007 and 9008 instances have already been created and their services started.

1. Check the current node status

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 -p 9001 cluster nodes

fe583c18857792ac9cea98af169f5b344c94846f 192.168.6.130:9006@19006 slave e3dfdb6c09017af74989f0fc4920719981587caf 0 1629714454000 3 connected

e1d48f1831c4bfb7683d8631ac53d34d2e5b1125 192.168.6.130:9004@19004 slave 4f45fe684d157f69360af676e5a1583f708042f7 0 1629714456032 1 connected

e3dfdb6c09017af74989f0fc4920719981587caf 192.168.6.130:9003@19003 master - 0 1629714457045 3 connected 10923-16383

4f45fe684d157f69360af676e5a1583f708042f7 192.168.6.130:9001@19001 myself,master - 0 1629714455000 1 connected 0-5460

f6e23e0a94e573e9f612aada502b112c2bcd02c1 192.168.6.130:9002@19002 master - 0 1629714455021 2 connected 5461-10922

8ddb932ab0429f7cc555474c0501b45cd9e65c7e 192.168.6.130:9005@19005 slave f6e23e0a94e573e9f612aada502b112c2bcd02c1 0 1629714456000 2 connected
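The `cluster nodes` output is line-oriented and easy to filter with standard tools; the flags are in the third field and the slot ranges follow the `connected` field. A minimal sketch that lists each master and its slot range, shown here against a saved copy of a few lines from the output above (on a live system, pipe `redis-cli ... cluster nodes` into the same awk filter):

```shell
# List masters and their slot ranges from a saved `cluster nodes` dump.
# On a live cluster:
#   redis-cli -c -h 192.168.6.130 -p 9001 cluster nodes | awk '$3 ~ /master/ {print $2, $9}'
nodes='e3dfdb6c09017af74989f0fc4920719981587caf 192.168.6.130:9003@19003 master - 0 1629714457045 3 connected 10923-16383
4f45fe684d157f69360af676e5a1583f708042f7 192.168.6.130:9001@19001 myself,master - 0 1629714455000 1 connected 0-5460
f6e23e0a94e573e9f612aada502b112c2bcd02c1 192.168.6.130:9002@19002 master - 0 1629714455021 2 connected 5461-10922
fe583c18857792ac9cea98af169f5b344c94846f 192.168.6.130:9006@19006 slave e3dfdb6c09017af74989f0fc4920719981587caf 0 1629714454000 3 connected'

# $3 matches both "master" and "myself,master"; slave lines ($3 == "slave") are skipped.
masters=$(printf '%s\n' "$nodes" | awk '$3 ~ /master/ {print $2, $9}')
printf '%s\n' "$masters"
```

The three masters printed here cover 0-5460, 5461-10922, and 10923-16383, i.e. all 16384 slots split three ways.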

2. Add a master node

Add node 9007 to the cluster. In the `add-node` command, the first address is the node being added; the second (9001 here) can be any node already in the cluster.

The new node must not contain any data, or the command will fail.

After running the commands below, both nodes are part of the cluster, but since no hash slots have been assigned to them yet, they cannot serve data.

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster add-node 192.168.6.130:9007 192.168.6.130:9001

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster add-node 192.168.6.130:9008 192.168.6.130:9001

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 -p 9007 cluster nodes

4f45fe684d157f69360af676e5a1583f708042f7 192.168.6.130:9001@19001 master - 0 1629797552328 1 connected 0-5460

e1d48f1831c4bfb7683d8631ac53d34d2e5b1125 192.168.6.130:9004@19004 slave 4f45fe684d157f69360af676e5a1583f708042f7 0 1629797550000 1 connected

5703e2b216f487148b45ce47a5139e38fc620ecd 192.168.6.130:9007@19007 myself,master - 0 1629797549000 0 connected

fe583c18857792ac9cea98af169f5b344c94846f 192.168.6.130:9006@19006 slave e3dfdb6c09017af74989f0fc4920719981587caf 0 1629797550000 3 connected

8ddb932ab0429f7cc555474c0501b45cd9e65c7e 192.168.6.130:9005@19005 slave f6e23e0a94e573e9f612aada502b112c2bcd02c1 0 1629797550000 2 connected

e3dfdb6c09017af74989f0fc4920719981587caf 192.168.6.130:9003@19003 master - 0 1629797550256 3 connected 10923-16383

f6e23e0a94e573e9f612aada502b112c2bcd02c1 192.168.6.130:9002@19002 master - 0 1629797550000 2 connected 5461-10922

1e963d1df7b849a29379e275f7bb2faa237564f7 192.168.6.130:9008@19008 master - 0 1629797551273 7 connected
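Note that at this point 9008 also joined as an (empty) master; step 4 below converts it to a slave with `cluster replicate`. As an alternative, redis-cli can add a node directly as a slave of a given master in one step, using the `--cluster-slave` and `--cluster-master-id` options of `add-node`. A sketch, with the command built and printed rather than executed since it assumes a live cluster:

```shell
# Add 9008 directly as a slave of 9007 (master node ID taken from the output above).
# Printed, not run: the command needs a live cluster to act on.
master_id=5703e2b216f487148b45ce47a5139e38fc620ecd
cmd="redis-cli --cluster add-node 192.168.6.130:9008 192.168.6.130:9001 --cluster-slave --cluster-master-id $master_id"
echo "$cmd"
```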

3. Assign hash slots

When assigning hash slots, the address after `reshard` can be any node in the cluster.

Adding one master grows the cluster from 3 masters to 4, but Redis has only 16384 hash slots in total, so the existing masters' slot assignments must be adjusted. The new node should receive 16384 / 4 = 4096 slots.

Slots can be drawn from all existing masters, or only from specific masters chosen by ID.

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster reshard 192.168.6.130:9001

How many slots do you want to move (from 1 to 16384)? 4096

What is the receiving node ID? 5703e2b216f487148b45ce47a5139e38fc620ecd

Source node #1: all
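The interactive answers above can also be supplied on the command line: redis-cli supports a non-interactive reshard via `--cluster-from`, `--cluster-to`, `--cluster-slots`, and `--cluster-yes`. A sketch of the equivalent one-shot invocation, with the actual call left commented out because it needs a live cluster:

```shell
# Move 16384/4 = 4096 slots from all existing masters to the new master.
target_id=5703e2b216f487148b45ce47a5139e38fc620ecd   # 9007's node ID from the output above
slots=$((16384 / 4))
echo "moving $slots slots to $target_id"
# redis-cli --cluster reshard 192.168.6.130:9001 \
#   --cluster-from all --cluster-to "$target_id" \
#   --cluster-slots "$slots" --cluster-yes
```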

4. Add a slave to the master node

Log in to the node that will become the slave and run the following command; the ID after `cluster replicate` is the ID of the master node to attach to.

192.168.6.130:9008> cluster replicate 5703e2b216f487148b45ce47a5139e38fc620ecd
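The same attachment can be done without opening an interactive session by pointing redis-cli at the future slave with `-p 9008`. A sketch, with the command built and printed rather than executed since it assumes a live cluster:

```shell
# Run CLUSTER REPLICATE on 9008 directly (non-interactive form).
# Printed, not run: the command needs a live cluster to act on.
master_id=5703e2b216f487148b45ce47a5139e38fc620ecd
cmd="redis-cli -c -h 192.168.6.130 -p 9008 cluster replicate $master_id"
echo "$cmd"
```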

5. Verify the new nodes

The nodes are now in place: 9007 is the new master and 9008 is its slave.

[root@Y001 data]# redis-cli -c -h 192.168.6.130 -p 9001 cluster nodes

fe583c18857792ac9cea98af169f5b344c94846f 192.168.6.130:9006@19006 slave e3dfdb6c09017af74989f0fc4920719981587caf 0 1629859173508 3 connected

e1d48f1831c4bfb7683d8631ac53d34d2e5b1125 192.168.6.130:9004@19004 slave 4f45fe684d157f69360af676e5a1583f708042f7 0 1629859173000 8 connected

e3dfdb6c09017af74989f0fc4920719981587caf 192.168.6.130:9003@19003 master - 0 1629859174527 3 connected 13823-16383

4f45fe684d157f69360af676e5a1583f708042f7 192.168.6.130:9001@19001 myself,master - 0 1629859168000 8 connected 2390-7509 10923-12969

1e963d1df7b849a29379e275f7bb2faa237564f7 192.168.6.130:9008@19008 slave 5703e2b216f487148b45ce47a5139e38fc620ecd 0 1629859176562 9 connected

5703e2b216f487148b45ce47a5139e38fc620ecd 192.168.6.130:9007@19007 master - 0 1629859172496 9 connected 0-2389 7510-8362 12970-13822

f6e23e0a94e573e9f612aada502b112c2bcd02c1 192.168.6.130:9002@19002 master - 0 1629859174000 2 connected 8363-10922

8ddb932ab0429f7cc555474c0501b45cd9e65c7e 192.168.6.130:9005@19005 slave f6e23e0a94e573e9f612aada502b112c2bcd02c1 0 1629859175542 2 connected
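As a quick sanity check on the output above, count the master and slave flags to confirm the 4-master / 4-slave layout. A sketch against a saved copy of the dump (on a live system, pipe `redis-cli ... cluster nodes` into the same awk filters):

```shell
# Count masters and slaves in a `cluster nodes` dump (flags are field 3).
nodes='fe583c18857792ac9cea98af169f5b344c94846f 192.168.6.130:9006@19006 slave e3dfdb6c09017af74989f0fc4920719981587caf 0 1629859173508 3 connected
e1d48f1831c4bfb7683d8631ac53d34d2e5b1125 192.168.6.130:9004@19004 slave 4f45fe684d157f69360af676e5a1583f708042f7 0 1629859173000 8 connected
e3dfdb6c09017af74989f0fc4920719981587caf 192.168.6.130:9003@19003 master - 0 1629859174527 3 connected 13823-16383
4f45fe684d157f69360af676e5a1583f708042f7 192.168.6.130:9001@19001 myself,master - 0 1629859168000 8 connected 2390-7509 10923-12969
1e963d1df7b849a29379e275f7bb2faa237564f7 192.168.6.130:9008@19008 slave 5703e2b216f487148b45ce47a5139e38fc620ecd 0 1629859176562 9 connected
5703e2b216f487148b45ce47a5139e38fc620ecd 192.168.6.130:9007@19007 master - 0 1629859172496 9 connected 0-2389 7510-8362 12970-13822
f6e23e0a94e573e9f612aada502b112c2bcd02c1 192.168.6.130:9002@19002 master - 0 1629859174000 2 connected 8363-10922
8ddb932ab0429f7cc555474c0501b45cd9e65c7e 192.168.6.130:9005@19005 slave f6e23e0a94e573e9f612aada502b112c2bcd02c1 0 1629859175542 2 connected'

masters=$(printf '%s\n' "$nodes" | awk '$3 ~ /master/ {n++} END {print n+0}')
slaves=$(printf '%s\n' "$nodes" | awk '$3 == "slave" {n++} END {print n+0}')
echo "masters=$masters slaves=$slaves"
```

Note that 9007's slots (0-2389, 7510-8362, 12970-13822) add up to exactly 4096, taken evenly from the three original masters.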

6. Remove nodes

Remove the slave node:

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster del-node 192.168.6.130:9008 1e963d1df7b849a29379e275f7bb2faa237564f7

>>> Removing node 1e963d1df7b849a29379e275f7bb2faa237564f7 from cluster 192.168.6.130:9008

>>> Sending CLUSTER FORGET messages to the cluster...

>>> Sending CLUSTER RESET SOFT to the deleted node.

Remove the master node: a master owns hash slots that may contain data, so before removal its slots must be moved to other nodes by resharding. At the prompt, enter the number of slots the departing node currently owns (4096 here, as assigned in step 3).

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster reshard 192.168.6.130:9007

How many slots do you want to move (from 1 to 16384)? 4096

What is the receiving node ID? <ID of the node receiving the slots>

Source node #1: <ID of the node being removed>

Source node #2: done

Then remove the master node, specifying its IP:port and ID:

[root@Y001 ~]# redis-cli -c -h 192.168.6.130 --cluster del-node 192.168.6.130:9007 5703e2b216f487148b45ce47a5139e38fc620ecd

>>> Removing node 5703e2b216f487148b45ce47a5139e38fc620ecd from cluster 192.168.6.130:9007

>>> Sending CLUSTER FORGET messages to the cluster...

>>> Sending CLUSTER RESET SOFT to the deleted node.
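After 9007 is removed, all 4096 of its slots sit on whichever node received them in the reshard, leaving the slot distribution uneven. redis-cli's `rebalance` subcommand can even the slot counts out across the remaining masters. A sketch, with the command printed rather than executed since it assumes a live cluster:

```shell
# Even out slot counts across the remaining masters after removing a node.
# Printed, not run: the command needs a live cluster to act on.
cmd="redis-cli --cluster rebalance 192.168.6.130:9001"
echo "$cmd"
```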

IV. Notes

1. When removing a master node, its hash slots must be resharded away first, or the command will fail.

2. When adding nodes, add the master first, then attach the slave.