Docker (Part 4: Common Installation Summary)


All content in this article is based on instructor Zhou Yang's course from 尚硅谷 (Atguigu).

Video: www.bilibili.com/video/BV1gr…

This article is for personal study only; contact me for removal if anything infringes.

12. Common Docker installation summary

Overall workflow: search image -> pull image -> inspect image -> run image (mapping service ports) -> stop container -> remove container
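As a concrete sketch of those steps (nginx and the container name `web` are placeholder examples, not part of this tutorial):

```shell
# One pass through the lifecycle: search -> pull -> inspect -> run -> stop -> remove.
docker search nginx --limit 5               # search Docker Hub
docker pull nginx:latest                    # pull the image locally
docker images nginx                         # confirm the image arrived
docker run -d -p 80:80 --name web nginx     # run it, mapping host:container ports
docker stop web                             # stop the container
docker rm web                               # remove the container
```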

1. Installing Tomcat

1.1 Search Docker Hub for a tomcat image

docker search tomcat

[root@localhost myHostData]# docker search tomcat --limit 5
NAME                          DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
tomcat                        Apache Tomcat is an open source implementati…   3435      [OK]
tomee                         Apache TomEE is an all-Apache Java EE certif…   98        [OK]
bitnami/tomcat                Bitnami Tomcat Docker Image                     47                   [OK]
secoresearch/tomcat-varnish   Tomcat and Varnish 5.0                          0                    [OK]
wnprcehr/tomcat                                                               0

1.2 Pull the tomcat image from Docker Hub

docker pull tomcat

[root@localhost myHostData]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
0e29546d541c: Pull complete
9b829c73b52b: Pull complete
cb5b7ae36172: Pull complete
6494e4811622: Pull complete
668f6fcc5fa5: Pull complete
dc120c3e0290: Pull complete
8f7c0eebb7b1: Pull complete
77b694f83996: Pull complete
0f611256ec3a: Pull complete
4f25def12f23: Pull complete
Digest: sha256:9dee185c3b161cdfede1f5e35e8b56ebc9de88ed3a79526939701f3537a52324
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest

1.3 Confirm the pulled image with docker images

docker images tomcat

[root@localhost myHostData]# docker images tomcat
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
tomcat       latest    fb5657adc892   10 months ago   680MB

1.4 Create a container from the tomcat image (i.e. run the image)

docker run -it -p 8080:8080 tomcat

(1) -p (lowercase): explicit mapping, host port:container port

(2) -P (uppercase): map all exposed ports to random host ports, e.g. docker run -it -P tomcat

(3) -i: keep STDIN open (interactive)

(4) -t: allocate a pseudo-terminal

(5) -d: run detached in the background
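With -P you do not know the host port up front; `docker port` reports the actual mapping (a sketch; the container name `tc` is a placeholder):

```shell
# -P maps every exposed container port to a random free host port.
docker run -d -P --name tc tomcat
docker port tc        # e.g. prints "8080/tcp -> 0.0.0.0:49153" (port varies)
```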

[root@localhost myHostData]# docker run -it -p 8080:8080 tomcat
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr/local/openjdk-11
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:
NOTE: Picked up JDK_JAVA_OPTIONS:  --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
16-Nov-2022 01:57:41.584 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/10.0.14
16-Nov-2022 01:57:41.590 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Dec 2 2021 22:01:36 UTC
16-Nov-2022 01:57:41.590 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 10.0.14.0
16-Nov-2022 01:57:41.590 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
16-Nov-2022 01:57:41.590 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            3.10.0-1127.el7.x86_64
16-Nov-2022 01:57:41.591 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
16-Nov-2022 01:57:41.591 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/local/openjdk-11
16-Nov-2022 01:57:41.591 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           11.0.13+8
16-Nov-2022 01:57:41.591 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
16-Nov-2022 01:57:41.596 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
16-Nov-2022 01:57:41.596 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
16-Nov-2022 01:57:41.615 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.lang=ALL-UNNAMED
16-Nov-2022 01:57:41.616 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.io=ALL-UNNAMED
16-Nov-2022 01:57:41.616 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util=ALL-UNNAMED
16-Nov-2022 01:57:41.616 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
16-Nov-2022 01:57:41.616 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
16-Nov-2022 01:57:41.616 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
16-Nov-2022 01:57:41.617 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
16-Nov-2022 01:57:41.617 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
16-Nov-2022 01:57:41.617 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
16-Nov-2022 01:57:41.617 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
16-Nov-2022 01:57:41.617 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
16-Nov-2022 01:57:41.618 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
16-Nov-2022 01:57:41.618 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
16-Nov-2022 01:57:41.619 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
16-Nov-2022 01:57:41.633 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.31] using APR version [1.7.0].
16-Nov-2022 01:57:41.633 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true], UDS [true].
16-Nov-2022 01:57:41.637 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1k  25 Mar 2021]
16-Nov-2022 01:57:42.048 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
16-Nov-2022 01:57:42.080 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [795] milliseconds
16-Nov-2022 01:57:42.169 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
16-Nov-2022 01:57:42.169 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
16-Nov-2022 01:57:42.187 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
16-Nov-2022 01:57:42.199 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [119] milliseconds

1.5 Visit the Tomcat home page

http://192.168.198.100:8080/

Problem: 404

This happens with the Tomcat 9 and later images.


Fix:

(1) Check that the port is mapped; prefer -p over -P so the host port is predictable

(2) Disable the firewall

(3) The webapps directory in the container ships empty; delete it and rename the sibling webapps.dist to webapps

[root@localhost ~]# docker exec -it fce94c1a889c /bin/bash
root@fce94c1a889c:/usr/local/tomcat# ls -l
total 132
-rw-r--r--. 1 root root 18994 Dec  2  2021 BUILDING.txt
-rw-r--r--. 1 root root  6210 Dec  2  2021 CONTRIBUTING.md
-rw-r--r--. 1 root root 60269 Dec  2  2021 LICENSE
-rw-r--r--. 1 root root  2333 Dec  2  2021 NOTICE
-rw-r--r--. 1 root root  3378 Dec  2  2021 README.md
-rw-r--r--. 1 root root  6905 Dec  2  2021 RELEASE-NOTES
-rw-r--r--. 1 root root 16517 Dec  2  2021 RUNNING.txt
drwxr-xr-x. 2 root root  4096 Dec 22  2021 bin
drwxr-xr-x. 1 root root    22 Nov 16 01:57 conf
drwxr-xr-x. 2 root root  4096 Dec 22  2021 lib
drwxrwxrwx. 1 root root    80 Nov 16 01:57 logs
drwxr-xr-x. 2 root root   159 Dec 22  2021 native-jni-lib
drwxrwxrwx. 2 root root    30 Dec 22  2021 temp
drwxr-xr-x. 2 root root     6 Dec 22  2021 webapps
drwxr-xr-x. 7 root root    81 Dec  2  2021 webapps.dist
drwxrwxrwx. 2 root root     6 Dec  2  2021 work
root@fce94c1a889c:/usr/local/tomcat# rm -r webapps
root@fce94c1a889c:/usr/local/tomcat# mv webapps.dist webapps
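The same fix can be done in one shot from the host, without an interactive shell (a sketch, using the container ID from the transcript above):

```shell
# Swap the empty webapps for the shipped default apps, then re-test the home page.
docker exec fce94c1a889c sh -c 'cd /usr/local/tomcat && rm -rf webapps && mv webapps.dist webapps'
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/   # expect 200 now
```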


1.6 A version that works without modification

docker pull billygoo/tomcat8-jdk8

docker run -d -p 8080:8080 --name mytomcat8 billygoo/tomcat8-jdk8

[root@localhost ~]# docker pull billygoo/tomcat8-jdk8
Using default tag: latest
latest: Pulling from billygoo/tomcat8-jdk8
55cbf04beb70: Pull complete
1607093a898c: Pull complete
9a8ea045c926: Pull complete
1290813abd9d: Pull complete
8a6b982ad6d7: Pull complete
abb029e68402: Pull complete
8cd067dc06dc: Pull complete
1b9ce2097b98: Pull complete
d6db5874b692: Pull complete
25b4aa3d52c5: Pull complete
d26b86f009c9: Pull complete
e54998e5e699: Pull complete
4a1e415a3c2e: Pull complete
Digest: sha256:4e21f52d29e3a0baafc18979da2f9725449b54652db69d4cdaef9ba807097e11
Status: Downloaded newer image for billygoo/tomcat8-jdk8:latest
docker.io/billygoo/tomcat8-jdk8:latest
[root@localhost ~]# docker run -d -p 8080:8080 --name mytomcat8 billygoo/tomcat8-jdk8
1ae30c2e8af46c7a40ce5274f529e39e72151a2d339b10d9c74c6f68b2d1fd14
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED         STATUS             PORTS                                       NAMES
1ae30c2e8af4   billygoo/tomcat8-jdk8   "catalina.sh run"   3 seconds ago   Up 2 seconds       0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   mytomcat8
9a32e157bf8c   ubuntu                  "/bin/bash"         2 hours ago     Up About an hour                                               myubuntu


2. Installing MySQL

2.1 Basic installation of MySQL 5.7

2.1.1 Search Docker Hub for a mysql image

docker search mysql

[root@localhost ~]# docker search mysql --limit 5
NAME             DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mysql            MySQL is a widely used, open-source relation…   13475     [OK]
mariadb          MariaDB Server is a high performing open sou…   5142      [OK]
phpmyadmin       phpMyAdmin - A web interface for MySQL and M…   689       [OK]
percona          Percona Server is a fork of the MySQL relati…   593       [OK]
circleci/mysql   MySQL is a widely used, open-source relation…   28

2.1.2 Pull the mysql image with tag 5.7

docker pull mysql:5.7

[root@localhost ~]# docker pull mysql:5.7
5.7: Pulling from library/mysql
72a69066d2fe: Pull complete
93619dbc5b36: Pull complete
99da31dd6142: Pull complete
626033c43d70: Pull complete
37d5d7efb64e: Pull complete
ac563158d721: Pull complete
d2ba16033dad: Pull complete
0ceb82207cd7: Pull complete
37f2405cae96: Pull complete
e2482e017e53: Pull complete
70deed891d42: Pull complete
Digest: sha256:f2ad209efe9c67104167fc609cca6973c8422939491c9345270175a300419f94
Status: Downloaded newer image for mysql:5.7
docker.io/library/mysql:5.7

2.1.3 Create the mysql container instance

docker run -d -p 3306:3306 --privileged=true -v /root/mysql/log:/var/log/mysql -v /root/mysql/data:/var/lib/mysql -v /root/mysql/conf:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=123456  --name mysql mysql:5.7

2.1.4 Create my.cnf and sync it into the container via the mounted conf volume

my.cnf contents:

[client]
default_character_set=utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
[root@localhost ~]# cd /root/mysql/conf
[root@localhost conf]# ls
[root@localhost conf]# vim my.cnf
[root@localhost conf]# cat my.cnf
[client]
default_character_set=utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
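The same file can be created non-interactively with a heredoc (a sketch; /root/mysql/conf is the host side of the conf volume mounted at /etc/mysql/conf.d above):

```shell
# Write the charset settings into the mounted conf directory.
CONF_DIR=/root/mysql/conf
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/my.cnf" <<'EOF'
[client]
default_character_set=utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
EOF
cat "$CONF_DIR/my.cnf"    # verify the contents
```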

2.1.5 Restart the mysql container and check the character encoding

Restart mysql:

[root@localhost conf]# docker restart mysql

Open a shell in the container:

[root@localhost conf]# docker exec -it mysql bash

Log in:

root@f4e7b361f010:/# mysql -uroot -p123456

Check the character encoding:

mysql> show variables like 'character%';

Create a database:

mysql> create database db01;

Select it and create a table:

mysql> use db01;

# Restart mysql
[root@localhost conf]# docker restart mysql
mysql

# Open a shell in the mysql container
[root@localhost conf]# docker exec -it mysql bash
# Log in
root@f4e7b361f010:/# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# Check the character encoding
mysql> show variables like 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.04 sec)

# Create a database
mysql> create database db01;
Query OK, 1 row affected (0.00 sec)

# Create a table
mysql> use db01;
Database changed
mysql> create table bb(id int,name varchar(20));
Query OK, 0 rows affected (0.01 sec)
mysql>

2.1.6 Connect and verify the character set

After Docker installs MySQL and the container is running, change the character-set configuration first, and only then create databases, tables, and data.

A client can now connect successfully and insert data.

2.2 Basic installation of MySQL 8.x

To be added later.

2.3 Master-slave replication

2.3.1 Create the master container on port 3307

docker run -p 3307:3306 --name mysql-master \
-v /mydata/mysql-master/log:/var/log/mysql \
-v /mydata/mysql-master/data:/var/lib/mysql \
-v /mydata/mysql-master/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

[root@localhost ~]# docker run -p 3307:3306 --name mysql-master -v /mydata/mysql-master/log:/var/log/mysql -v /mydata/mysql-master/data:/var/lib/mysql -v /mydata/mysql-master/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
d670b7ae668f40cdc87d358aba6feb1b56c0b9997c15ea6fee59c64760bdccd6

2.3.2 Create my.cnf under /mydata/mysql-master/conf


[root@localhost ~]# cd /mydata/mysql-master/conf/
[root@localhost conf]# vim my.cnf

# my.cnf contents:

[mysqld]
## server_id must be unique within the replication group
server_id=101
## database(s) excluded from binary logging
binlog-ignore-db=mysql
## enable the binary log
log-bin=mall-mysql-bin
## binary log cache size for transactions
binlog_cache_size=1M
## binary log format: mixed, statement, or row
binlog_format=mixed
## days before binary logs are purged; the default 0 means never
expire_logs_days=7
## skip the listed replication errors so replication on the slave is not interrupted;
## e.g. 1062 = duplicate primary key, 1032 = master/slave row mismatch
slave_skip_errors=1062

2.3.3 Restart the master after editing the config

docker restart mysql-master

[root@localhost conf]# docker restart mysql-master
mysql-master

2.3.4 Enter the mysql-master container

docker exec -it mysql-master /bin/bash

mysql -uroot -proot

[root@localhost conf]# docker exec -it mysql-master /bin/bash
root@d670b7ae668f:/# mysql -uroot -proot

2.3.5 Create the replication user inside the master

CREATE USER 'slave'@'%' IDENTIFIED BY '123456';

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';

mysql> CREATE USER 'slave'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
Query OK, 0 rows affected (0.00 sec)

2.3.6 Create the slave container on port 3308

docker run -p 3308:3306 --name mysql-slave \
-v /mydata/mysql-slave/log:/var/log/mysql \
-v /mydata/mysql-slave/data:/var/lib/mysql \
-v /mydata/mysql-slave/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7

[root@localhost conf]# docker run -p 3308:3306 --name mysql-slave -v /mydata/mysql-slave/log:/var/log/mysql -v /mydata/mysql-slave/data:/var/lib/mysql -v /mydata/mysql-slave/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
053253a872a529bdc835c6ffafbd8305359e1b8fea33743d66ebe50b1c92d89d

2.3.7 Create my.cnf under /mydata/mysql-slave/conf


[root@localhost conf]# vim /mydata/mysql-slave/conf/my.cnf

# my.cnf contents:

[mysqld]
## server_id must be unique within the replication group
server_id=102
## database(s) excluded from binary logging
binlog-ignore-db=mysql
## enable the binary log, in case this slave later acts as a master for another instance
log-bin=mall-mysql-slave1-bin
## binary log cache size for transactions
binlog_cache_size=1M
## binary log format: mixed, statement, or row
binlog_format=mixed
## days before binary logs are purged; the default 0 means never
expire_logs_days=7
## skip the listed replication errors so replication on the slave is not interrupted;
## e.g. 1062 = duplicate primary key, 1032 = master/slave row mismatch
slave_skip_errors=1062
## relay log file name
relay_log=mall-mysql-relay-bin
## write replicated events to this slave's own binary log
log_slave_updates=1
## make the slave read-only (except for users with SUPER privilege)
read_only=1

2.3.8 Restart the slave

docker restart mysql-slave

[root@localhost conf]# docker restart mysql-slave
mysql-slave

2.3.9 Check the master's binlog status on the master

show master status;

[root@localhost conf]# docker exec -it mysql-master /bin/bash
root@d670b7ae668f:/# mysql -uroot -proot

mysql> show master status;
+-----------------------+----------+--------------+------------------+-------------------+
| File                  | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-----------------------+----------+--------------+------------------+-------------------+
| mall-mysql-bin.000001 |      617 |              | mysql            |                   |
+-----------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql>

2.3.10 Configure replication on the slave

change master to master_host='<host ip>', master_user='slave', master_password='123456', master_port=3307, master_log_file='mall-mysql-bin.000001', master_log_pos=617, master_connect_retry=30;

master_host: IP address of the master

master_port: port the master listens on

master_user: the replication account created on the master

master_password: that account's password

master_log_file: binlog file to start replicating from; take it from the File column of show master status on the master

master_log_pos: position to start replicating from; take it from the Position column of show master status

master_connect_retry: seconds to wait between connection retries

[root@localhost conf]# docker exec -it mysql-slave /bin/bash
root@053253a872a5:/# mysql -uroot -proot

mysql> change master to master_host='192.168.198.100', master_user='slave', master_password='123456', master_port=3307, master_log_file='mall-mysql-bin.000001', master_log_pos=617, master_connect_retry=30;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

2.3.11 Check replication status on the slave

Command: show slave status \G (the stray ; after \G in the transcript below only causes a harmless "No query specified" message)

Both flags below are No, so replication has not started yet:

Slave_IO_Running: No, Slave_SQL_Running: No

mysql> show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.198.100
                  Master_User: slave
                  Master_Port: 3307
                Connect_Retry: 30
              Master_Log_File: mall-mysql-bin.000001
          Read_Master_Log_Pos: 617
               Relay_Log_File: mall-mysql-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mall-mysql-bin.000001
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 617
              Relay_Log_Space: 154
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 0
                  Master_UUID:
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State:
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)

ERROR:
No query specified

2.3.12 Start replication on the slave and check the status again

Commands:

(1) start slave;

(2) show slave status \G

Both flags below are Yes, so replication is running:

Slave_IO_Running: Yes, Slave_SQL_Running: Yes
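To avoid scanning the full \G dump every time, the two health flags can be extracted with a one-liner from the host (a sketch, using the container name and credentials from this section):

```shell
# Print only the two replication flags; both should read "Yes" when healthy.
docker exec mysql-slave mysql -uroot -proot -e 'show slave status\G' 2>/dev/null \
  | grep -E 'Slave_(IO|SQL)_Running:'
```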

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.198.100
                  Master_User: slave
                  Master_Port: 3307
                Connect_Retry: 30
              Master_Log_File: mall-mysql-bin.000001
          Read_Master_Log_Pos: 617
               Relay_Log_File: mall-mysql-relay-bin.000002
                Relay_Log_Pos: 325
        Relay_Master_Log_File: mall-mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 617
              Relay_Log_Space: 537
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 101
                  Master_UUID: 4c913747-66d9-11ed-bb36-0242ac110002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)

ERROR:
No query specified

2.3.13 Test

On the master: create a database -> create a table -> insert data

On the slave: check whether the data replicated

# On the master
[root@localhost ~]# docker exec -it mysql-master /bin/bash

root@d670b7ae668f:/# mysql -uroot -proot

mysql> create database test01;
Query OK, 1 row affected (0.00 sec)

mysql> use test01;
Database changed

mysql> create table aa(id int,name varchar(20));
Query OK, 0 rows affected (0.02 sec)

mysql> insert into aa(id,name) values (1,'jack');
Query OK, 1 row affected (0.00 sec)

# On the slave
mysql> select * from aa \G;
*************************** 1. row ***************************
  id: 1
name: jack
1 row in set (0.00 sec)

3. Installing Redis

3.1 Basic installation

3.1.1 Pull the redis:6.0.8 image

docker pull redis:6.0.8

[root@localhost conf]# docker pull redis:6.0.8
6.0.8: Pulling from library/redis
bb79b6b2107f: Pull complete
1ed3521a5dcb: Pull complete
5999b99cee8f: Pull complete
3f806f5245c9: Pull complete
f8a4497572b2: Pull complete
eafe3b6b8d06: Pull complete
Digest: sha256:21db12e5ab3cc343e9376d655e8eabbdbe5516801373e95a8a9e66010c5b8819
Status: Downloaded newer image for redis:6.0.8
docker.io/library/redis:6.0.8

3.1.2 Run it and test availability from redis-cli

docker run -d -p 6379:6379 redis:6.0.8

docker exec -it 6bf45aabe152 /bin/bash

redis-cli

Note:

If accessing a mounted host directory inside the container fails with "cannot open directory .: Permission denied", add --privileged=true to the docker run options.

[root@localhost conf]# docker run -d -p 6379:6379 redis:6.0.8
6bf45aabe15295e9df7563c7ffd0de377c44b0b4bf9d120f791137cbbe97e3b1
[root@localhost conf]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS                                       NAMES
6bf45aabe152   redis:6.0.8   "docker-entrypoint.s…"   4 seconds ago   Up 2 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   clever_montalcini
[root@localhost conf]# docker exec -it 6bf45aabe152 /bin/bash
root@6bf45aabe152:/data# redis-cli
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

3.1.3 Create /app/redis on the host and copy a redis.conf into it

mkdir -p /app/redis/

cp /root/redis.conf /app/redis/

vim /app/redis/redis.conf

Edits to make:

(1) Enable authentication -> requirepass 123 (optional)

(2) Allow remote connections -> comment out bind 127.0.0.1

(3) Set daemonize no (or comment out daemonize yes); daemonize yes conflicts with docker run's -d flag and the container would keep failing to start

(4) Enable AOF persistence -> appendonly yes (optional)

[root@localhost ~]# mkdir -p /app/redis/

[root@localhost ~]# cp /root/redis.conf /app/redis/

[root@localhost ~]# vim /app/redis/redis.conf
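The four edits can also be applied with sed instead of vim. The sketch below runs against a throwaway sample file so it is safe to try; point CONF at /app/redis/redis.conf for the real thing.

```shell
# Demo config containing the four relevant default lines.
CONF=$(mktemp)
printf '# requirepass foobared\nbind 127.0.0.1\ndaemonize yes\nappendonly no\n' > "$CONF"

sed -i 's/^# *requirepass .*/requirepass 123/' "$CONF"   # (1) enable auth (optional)
sed -i 's/^bind 127.0.0.1/# bind 127.0.0.1/' "$CONF"     # (2) allow remote connections
sed -i 's/^daemonize yes/daemonize no/' "$CONF"          # (3) daemonize yes clashes with -d
sed -i 's/^appendonly no/appendonly yes/' "$CONF"        # (4) enable AOF (optional)
cat "$CONF"
```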

3.1.4 Create the container (run the image)

[root@localhost ~]# docker run  -p 6379:6379 --name myr3 --privileged=true -v /app/redis/redis.conf:/etc/redis/redis.conf -v /app/redis/data:/data -d redis:6.0.8 redis-server /etc/redis/redis.conf
c1417ef2fb049f262dd419d609017325fa4f2c56c073412d52b55edb09b49ef5

[root@localhost ~]# docker exec -it c1417ef2fb04 /bin/bash
root@c1417ef2fb04:/data# redis-cli
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> get k2
"v2"
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

3.2 Cluster

Designing storage for 100-200 million records: a single machine is out of the question, so distributed storage is needed. Three schemes are commonly used in practice: hash modulo partitioning, consistent hashing, and hash slot partitioning.

3.2.1 Three partitioning schemes

3.2.1.1 Hash modulo partitioning


With 200 million records (200 million k,v pairs), a single machine is not enough; we must go distributed. Suppose 3 machines form the cluster. Every read and write picks its node with the formula hash(key) % N, where N is the number of machines.

Pros:

Simple and effective: estimate the data volume, plan the node count up front (3, 8, 10 machines, ...), and it will hold for a while. Hashing pins a fixed share of requests to each server, so every server handles a fixed portion of the load (and maintains that portion's state), giving load balancing plus divide-and-conquer.

Cons:

Scaling the planned node set out or in is painful. Any change in the node count invalidates every mapping and forces a recompute: hash(key) % 3 becomes hash(key) % ?, and the resulting server for most keys changes unpredictably. This is fine while the server count is fixed, but with elastic scaling or a machine failure the denominator changes, and the modulo reshuffles essentially all of the data.
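The reshuffle is easy to demonstrate in the shell: hash 20 sample keys (cksum standing in for the hash function) and count how many land on a different node when the cluster grows from 3 machines to 4. Typically most of them move.

```shell
# hash(key) % 3 vs hash(key) % 4: count keys whose node assignment changes.
moved=0
for i in $(seq 1 20); do
  h=$(printf 'key:%s' "$i" | cksum | cut -d' ' -f1)   # cksum as a stand-in hash
  if [ $((h % 3)) -ne $((h % 4)) ]; then moved=$((moved + 1)); fi
done
echo "$moved of 20 keys remapped"
```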

3.2.1.2 Consistent hashing

Consistent hashing was proposed at MIT in 1997 to address the remapping problem of distributed caches: when a machine goes down, the modulo denominator changes, and plain modulo hashing breaks.

The goal: when the number of servers changes, disturb as few client-to-server mappings as possible.

Three steps:

(1) Build the consistent hash ring

Consistent hashing still relies on a hash function; the set of all possible hash values forms the hash space [0, 2^32-1]. This is a linear space, but the algorithm logically joins its two ends (0 = 2^32) so that it forms a ring. It still uses modulo, but instead of taking the hash modulo the node count as in the previous scheme, it takes it modulo 2^32. The whole hash space is organized as a virtual circle: going clockwise, the point at the top is 0, the first point to its right is 1, then 2, 3, ... up to 2^32-1, which sits immediately to the left of 0. This ring of 2^32 points is the hash ring.


(2) Map server IP nodes onto the ring

Each node in the cluster is mapped to a position on the ring by hashing a key such as its IP address or hostname. With four nodes A, B, C, and D, hashing their IP addresses (hash(ip)) places each node at a fixed position on the ring.


(3) Locate the server for a key

To store a k,v pair, compute hash(key) with the same hash function to find the key's position on the ring, then walk clockwise from there; the first server encountered is the one that stores the pair.

For example, with four data objects A, B, C, and D hashed onto the ring, object A is assigned to Node A, B to Node B, C to Node C, and D to Node D.


Pros:

(1) Fault tolerance

If Node C goes down, objects A, B, and D are unaffected; only object C is relocated to Node D. In general, when a server becomes unavailable, only the data between it and the previous server on the ring (the first one met walking counter-clockwise) is affected, and that data moves to the next server clockwise. In short: if C dies, only the keys between B and C are affected, and they move to D.

(2) Scalability

If data volume grows and a node X is added between A and B, only the keys between A and X are affected; re-register them on X and you are done, with no full reshuffle as in modulo hashing.

Cons:

Data skew.

When there are too few service nodes, uneven node placement on the ring easily skews the data (most cached objects pile up on one server). For example, with only two servers in the system, most keys may land on just one of them.

Summary:

To migrate as little data as possible when the node count changes, all storage nodes are arranged on a head-to-tail hash ring, and each key, once hashed, is stored on the nearest node found clockwise. When a node joins or leaves, only the keys between it and its clockwise neighbor are affected.

Pros: adding or removing a node affects only its clockwise neighbor on the ring; other nodes are untouched.

Cons: data placement depends on node positions, and because the nodes are not evenly distributed on the ring, the data does not end up evenly distributed either.
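A toy version of the clockwise lookup fits in a few lines of shell (the 0-999 ring size and the four node positions are made up for illustration):

```shell
# Four nodes at fixed positions on a 0..999 ring, listed in ascending order.
ring="100:NodeA 350:NodeB 600:NodeC 850:NodeD"

locate() {   # clockwise walk: first node at or after the given ring position
  pos=$1
  for entry in $ring; do
    p=${entry%%:*}; n=${entry#*:}
    if [ "$pos" -le "$p" ]; then echo "$n"; return; fi
  done
  echo "NodeA"   # wrapped past the last node back around to the first
}

h=$(( $(printf 'ObjectB' | cksum | cut -d' ' -f1) % 1000 ))  # key's ring position
echo "ObjectB at position $h -> stored on $(locate "$h")"
```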

3.2.1.3 Hash slot partitioning

A hash slot table is essentially an array; the array [0, 2^14-1] forms the hash slot space.

To solve the uniform-distribution problem, a layer is inserted between data and nodes, called hash slots: nodes hold slots, and slots hold data.

Slots solve the granularity problem: they enlarge the unit of data movement, which makes moving data easier. Hashing solves the mapping problem: a key's hash value determines its slot, and hence its placement.

A cluster has exactly 16384 slots, numbered 0-16383 (0 to 2^14-1). The slots are distributed among all master nodes in the cluster; there is no required allocation policy, and specific slot ranges can be assigned to specific masters. The cluster records the node-to-slot mapping. A key is then hashed and taken modulo 16384; the remainder is the slot the key falls into: slot = CRC16(key) % 16384. Data is moved in units of slots, and because the slot count is fixed, those moves are easy to handle.

Slot calculation:

Redis Cluster has 16384 built-in hash slots, mapped roughly evenly onto the nodes. To place a key-value pair, Redis computes CRC16 of the key and takes the result modulo 16384, so every key maps to a slot numbered 0-16383, and hence to a node. In the example, keys A and B fall on Node2 and key C falls on Node3.
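The slot math can be sketched as below (cksum stands in for CRC16, and the three slot ranges assume an even split across a 3-master cluster; against a live cluster, redis-cli's `cluster keyslot <key>` reports the real slot):

```shell
# slot = HASH(key) % 16384, then look the slot up in the node ranges:
# Node1: 0-5460, Node2: 5461-10922, Node3: 10923-16383.
for key in A B C; do
  slot=$(( $(printf '%s' "$key" | cksum | cut -d' ' -f1) % 16384 ))
  node=$(( slot < 5461 ? 1 : (slot < 10923 ? 2 : 3) ))
  echo "key $key -> slot $slot -> Node$node"
done
```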


3.2.2 Configuration steps (3-master, 3-slave Redis cluster)

3.2.2.1 Create six Redis container instances

Parameter explanations:

  • docker run : create and start a container instance

  • --name redis-node-6 : container name

  • --net host : use the host's IP and ports directly

  • --privileged=true : run with host root privileges

  • -v /data/redis/share/redis-node-6:/data : volume mount, host path:container path

  • redis:6.0.8 : Redis image and tag

  • --cluster-enabled yes : enable Redis cluster mode

  • --appendonly yes : enable AOF persistence

  • --port 6386 : Redis port

[root@localhost cig]# docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
d2e858cb98440fec6799288065a216e8da0c47928773925f1b6ec8ad435f4e92

[root@localhost cig]# docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
bef91fcb4fd6ebd527985662f8864b1507132157dacee17cb409191652256c6f

[root@localhost cig]# docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
7e391f0fbe60c8b4f58e7605087743332dddd9b7b37e4469f7229c38a0ba44eb

[root@localhost cig]# docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
0b7324cc5f667b665490d1701891fd879f5cf6b7cd9931f8ec0842fd39d4582d

[root@localhost cig]# docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
8a65615570bf0204c2c9020fcbedbc6e993d3840c77c86a8636397690420b63d

[root@localhost cig]# docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
28ffd34f3af6d64f75402dc9d829f2ec4f74aa8bacafb606a38cc391afab142e
[root@localhost cig]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS         PORTS     NAMES
28ffd34f3af6   redis:6.0.8   "docker-entrypoint.s…"   4 minutes ago   Up 4 minutes             redis-node-6
8a65615570bf   redis:6.0.8   "docker-entrypoint.s…"   5 minutes ago   Up 4 minutes             redis-node-5
0b7324cc5f66   redis:6.0.8   "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes             redis-node-4
7e391f0fbe60   redis:6.0.8   "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes             redis-node-3
bef91fcb4fd6   redis:6.0.8   "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes             redis-node-2
d2e858cb9844   redis:6.0.8   "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes             redis-node-1

3.2.2.2 Enter redis-node-1 and build the cluster across all six nodes

Enter the container: docker exec -it redis-node-1 /bin/bash

Build the master/slave relationships: redis-cli --cluster create 192.168.198.100:6381 192.168.198.100:6382 192.168.198.100:6383 192.168.198.100:6384 192.168.198.100:6385 192.168.198.100:6386 --cluster-replicas 1

# enter the container
[root@localhost cig]# docker exec -it redis-node-1 /bin/bash

# build the master/replica relationships
root@localhost:/data# redis-cli --cluster create 192.168.198.100:6381 192.168.198.100:6382 192.168.198.100:6383 192.168.198.100:6384 192.168.198.100:6385 192.168.198.100:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.198.100:6385 to 192.168.198.100:6381
Adding replica 192.168.198.100:6386 to 192.168.198.100:6382
Adding replica 192.168.198.100:6384 to 192.168.198.100:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
Can I set the above configuration? (type 'yes' to accept): yes # type yes here
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3.2.2.3 Connect to 6381 as an entry point and inspect the cluster state
# connect to 6381
root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
# view cluster info
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:138
cluster_stats_messages_pong_sent:142
cluster_stats_messages_sent:280
cluster_stats_messages_ping_received:137
cluster_stats_messages_pong_received:138
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:280
# list all nodes
127.0.0.1:6381> cluster nodes
e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381@16381 myself,master - 0 1669083089000 1 connected 0-5460
0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382@16382 master - 0 1669083092206 2 connected 5461-10922
c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384@16384 slave 0f954a05cabd504de1835e9d02fab832c918c26a 0 1669083091194 2 connected
606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383@16383 master - 0 1669083090187 3 connected 10923-16383
3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386@16386 slave e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 0 1669083089177 1 connected
271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385@16385 slave 606b6da0c631a20592cb0db22d7f52342ff77cb5 0 1669083090000 3 connected

Note:

cluster_known_nodes:6 ---> the cluster has 6 nodes in total

3.2.3 Master/replica failover and recovery

3.2.3.1 Reading and writing data
# -------- connect to 6381 and try to set a new key
root@localhost:/data# redis-cli -p 6381
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.198.100:6383
127.0.0.1:6381> quit

# -------- add -c to enable cluster redirection, then set two keys k1 and k2
root@localhost:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v-cluster1
-> Redirected to slot [12706] located at 192.168.198.100:6383
OK
192.168.198.100:6383> set k2 v-cluster2
-> Redirected to slot [449] located at 192.168.198.100:6381
OK
192.168.198.100:6381> quit

# -------- connect to 6382 and check that k1 and k2 were stored
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> get k1
-> Redirected to slot [12706] located at 192.168.198.100:6383
"v-cluster1"
192.168.198.100:6383> get k2
-> Redirected to slot [449] located at 192.168.198.100:6381
"v-cluster2"
192.168.198.100:6381> get k3
(nil)
192.168.198.100:6381>

# -------- check cluster info
root@localhost:/data# redis-cli --cluster check 192.168.198.100:6381
192.168.198.100:6381 (e9041b90...) -> 1 keys | 5461 slots | 1 slaves.
192.168.198.100:6382 (0f954a05...) -> 0 keys | 5462 slots | 1 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
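The MOVED replies in the transcript above are deterministic: Redis Cluster maps every key to a slot with HASH_SLOT = CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant. Below is a pure-bash sketch of that mapping (plain keys only, no `{hash-tag}` handling), reproducing the slots seen in the redirects above.

```shell
#!/usr/bin/env bash
# Sketch: compute a key's cluster hash slot, i.e. CRC16-XMODEM(key) mod 16384.
hash_slot() {
  local key=$1 crc=0 i c j
  for ((i = 0; i < ${#key}; i++)); do
    printf -v c '%d' "'${key:i:1}"                  # byte value of the character
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))   # XMODEM polynomial 0x1021
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}

hash_slot k1   # -> 12706, owned by 6383 above
hash_slot k2   # -> 449, owned by 6381 above
```

The results match the `Redirected to slot [12706]` and `[449]` lines in the transcript.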
3.2.3.2 Failover

Fail over between master 6381 and its replica: first stop master 6381.

Once 6381 is down, its replica is promoted to master. Which replica was assigned to master 6381 depends on your actual deployment, so go by what your own cluster shows.

Commands used:

docker stop redis-node-1

docker start redis-node-1

docker exec -it redis-node-2 /bin/bash

redis-cli -p 6382 -c

cluster nodes

redis-cli --cluster check 192.168.198.100:6381

# --------- first stop master 6381
[root@localhost cig]# docker stop redis-node-1
redis-node-1
[root@localhost cig]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS     NAMES
28ffd34f3af6   redis:6.0.8   "docker-entrypoint.s…"   30 minutes ago   Up 30 minutes             redis-node-6
8a65615570bf   redis:6.0.8   "docker-entrypoint.s…"   30 minutes ago   Up 30 minutes             redis-node-5
0b7324cc5f66   redis:6.0.8   "docker-entrypoint.s…"   30 minutes ago   Up 30 minutes             redis-node-4
7e391f0fbe60   redis:6.0.8   "docker-entrypoint.s…"   30 minutes ago   Up 30 minutes             redis-node-3
bef91fcb4fd6   redis:6.0.8   "docker-entrypoint.s…"   30 minutes ago   Up 30 minutes             redis-node-2

# --------- check the cluster info again
[root@localhost cig]# docker exec -it redis-node-2 /bin/bash
root@localhost:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385@16385 slave 606b6da0c631a20592cb0db22d7f52342ff77cb5 0 1669084220279 3 connected
e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381@16381 master,fail - 1669084175043 1669084171997 1 disconnected
c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384@16384 slave 0f954a05cabd504de1835e9d02fab832c918c26a 0 1669084218000 2 connected
3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386@16386 master - 0 1669084221310 7 connected 0-5460
606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383@16383 master - 0 1669084219247 3 connected 10923-16383
0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382@16382 myself,master - 0 1669084219000 2 connected 5461-10922
127.0.0.1:6382>

Comparing the state before and after the stop:

Before: 6381, 6382, and 6383 were masters; 6384, 6385, and 6386 were replicas.

After: 6382, 6383, and 6386 are masters while 6384 and 6385 are replicas; 6381 is marked fail, and 6386 has been promoted to master.

# before the stop
127.0.0.1:6381> cluster nodes
e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381@16381 myself,master - 0 1669083089000 1 connected 0-5460
0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382@16382 master - 0 1669083092206 2 connected 5461-10922
c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384@16384 slave 0f954a05cabd504de1835e9d02fab832c918c26a 0 1669083091194 2 connected
606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383@16383 master - 0 1669083090187 3 connected 10923-16383
3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386@16386 slave e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 0 1669083089177 1 connected
271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385@16385 slave 606b6da0c631a20592cb0db22d7f52342ff77cb5 0 1669083090000 3 connected

# after the stop
127.0.0.1:6382> cluster nodes
271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385@16385 slave 606b6da0c631a20592cb0db22d7f52342ff77cb5 0 1669084220279 3 connected
e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381@16381 master,fail - 1669084175043 1669084171997 1 disconnected
c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384@16384 slave 0f954a05cabd504de1835e9d02fab832c918c26a 0 1669084218000 2 connected
3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386@16386 master - 0 1669084221310 7 connected 0-5460
606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383@16383 master - 0 1669084219247 3 connected 10923-16383
0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382@16382 myself,master - 0 1669084219000 2 connected 5461-10922
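To spot the promotion at a glance, the live (non-fail) masters can be filtered out of a `cluster nodes` dump with a small awk one-liner. A sketch, run here against lines pasted from the "after the stop" listing rather than a live node:

```shell
#!/usr/bin/env bash
# Sketch: print the address of every live (non-fail) master in "cluster nodes" output.
# Against a live cluster you would pipe:  redis-cli -p 6382 cluster nodes | list_masters
list_masters() {
  awk '$3 ~ /master/ && $3 !~ /fail/ { split($2, a, "@"); print a[1] }'
}

# Sample lines copied from the "after the stop" listing above
SAMPLE='e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381@16381 master,fail - 1669084175043 1669084171997 1 disconnected
3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386@16386 master - 0 1669084221310 7 connected 0-5460
271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385@16385 slave 606b6da0c631a20592cb0db22d7f52342ff77cb5 0 1669084220279 3 connected
0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382@16382 myself,master - 0 1669084219000 2 connected 5461-10922'

printf '%s\n' "$SAMPLE" | list_masters
```

Only 6386 and 6382 are printed: the failed 6381 is excluded and the slave line is skipped.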

Now restore the original 3-master/3-replica layout and check the cluster state again.

# first start redis-node-1; if you check the state now, you will find it has become a slave
[root@localhost cig]# docker start redis-node-1
redis-node-1

# to make redis-node-1 a master again, stop redis-node-6 and then start it
[root@localhost cig]# docker stop redis-node-6
redis-node-6
# wait a while in between so the cluster can re-elect; if you restart too quickly, redis-node-1 will stay a slave
[root@localhost cig]# docker start redis-node-6
redis-node-6

# check the cluster state
root@localhost:/data# redis-cli --cluster check 192.168.198.100:6381
192.168.198.100:6381 (e9041b90...) -> 1 keys | 5461 slots | 1 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 5461 slots | 1 slaves.
192.168.198.100:6382 (0f954a05...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

3.2.4 Scaling the cluster up

3.2.4.1 Create two new nodes, 6387 and 6388
[root@localhost cig]# docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
8d113177433a57ddede20a58cfcf2f3b475420cdd6da9bf8d30ab7036bb176cb

[root@localhost cig]# docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
1b9ebfc31c9d42216a282814f94964fe0719c253fd4956d7c46578b795e555b3

[root@localhost cig]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS     NAMES
1b9ebfc31c9d   redis:6.0.8   "docker-entrypoint.s…"   5 seconds ago    Up 4 seconds              redis-node-8
8d113177433a   redis:6.0.8   "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds              redis-node-7
28ffd34f3af6   redis:6.0.8   "docker-entrypoint.s…"   57 minutes ago   Up 9 minutes              redis-node-6
8a65615570bf   redis:6.0.8   "docker-entrypoint.s…"   57 minutes ago   Up 57 minutes             redis-node-5
0b7324cc5f66   redis:6.0.8   "docker-entrypoint.s…"   57 minutes ago   Up 57 minutes             redis-node-4
7e391f0fbe60   redis:6.0.8   "docker-entrypoint.s…"   58 minutes ago   Up 58 minutes             redis-node-3
bef91fcb4fd6   redis:6.0.8   "docker-entrypoint.s…"   58 minutes ago   Up 58 minutes             redis-node-2
d2e858cb9844   redis:6.0.8   "docker-entrypoint.s…"   58 minutes ago   Up 12 minutes             redis-node-1
3.2.4.2 Enter the 6387 container and add 6387 to the cluster as a master

Add the new 6387 node to the cluster as a master:

redis-cli --cluster add-node your-actual-IP:6387 your-actual-IP:6381

6387 is the new node to be added as a master.

6381 is an existing cluster node acting as the introducer: 6387 contacts 6381 to discover the cluster and join it.

[root@localhost cig]# docker exec -it redis-node-7 /bin/bash
root@localhost:/data# redis-cli --cluster add-node 192.168.198.100:6387 192.168.198.100:6381
>>> Adding node 192.168.198.100:6387 to cluster 192.168.198.100:6381
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.198.100:6387 to make it join the cluster.
[OK] New node added correctly.
3.2.4.3 Check the cluster (first time) and reshard the slots

Reshard command: redis-cli --cluster reshard IP:port

Note:

# 1. no 'replicates' line yet, so this master has no slave
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots: (0 slots) master
   
# 2. at the reshard prompt, 4096 = 16384 / number of masters, i.e. 16384 / 4 = 4096
How many slots do you want to move (from 1 to 16384)? 4096

# 3. the ID eaec73df549819ff60a553be8f60591e74dceac7 is taken from the line 'M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387'
What is the receiving node ID? eaec73df549819ff60a553be8f60591e74dceac7

# 4. choose all here
Source node #1: all

root@localhost:/data# redis-cli --cluster reshard 192.168.198.100:6381
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots: (0 slots) master
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? eaec73df549819ff60a553be8f60591e74dceac7
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Ready to move 4096 slots.
  Source nodes:
    M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
  Destination node:
    M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
       slots: (0 slots) master
  Resharding plan:
    Moving slot 5461 from 0f954a05cabd504de1835e9d02fab832c918c26a
    Moving slot 5462 from 0f954a05cabd504de1835e9d02fab832c918c26a
    Moving slot 5463 from 0f954a05cabd504de1835e9d02fab832c918c26a
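The 4096 entered at the "How many slots" prompt above is simply an even share: with N masters, each should own 16384 / N slots (integer division). A quick sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# Sketch: even slot share per master in a 16384-slot cluster (integer division).
slots_per_master() {
  echo $(( 16384 / $1 ))
}

slots_per_master 3   # -> 5461 (the original 3-master layout above)
slots_per_master 4   # -> 4096 (after adding master 6387)
```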
3.2.4.4 Check the cluster (second time): how the slots were assigned

redis-cli --cluster check your-actual-IP:6381

You can see that 6387 now owns slots --> (4096 slots) master

slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master

Why does 6387 get three new ranges while the old masters keep contiguous ones?

A full redistribution would cost too much, so each of the three old masters (6381/6382/6383) hands over roughly a third of the 4096 slots (1365, 1366, and 1365 slots respectively) to the new node 6387.

root@localhost:/data# redis-cli --cluster check 192.168.198.100:6381
192.168.198.100:6381 (e9041b90...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6387 (eaec73df...) -> 1 keys | 4096 slots | 0 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
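You can confirm that the three ranges assigned to 6387 really total 4096, with each old master donating about a third; this just re-does the arithmetic on the ranges printed above:

```shell
#!/usr/bin/env bash
# Sketch: size of each slot range handed to 6387, and their total.
range_size() { echo $(( $2 - $1 + 1 )); }

a=$(range_size 0 1364)        # donated by 6381
b=$(range_size 5461 6826)     # donated by 6382
c=$(range_size 10923 12287)   # donated by 6383
echo "$a + $b + $c = $(( a + b + c ))"   # -> 1365 + 1366 + 1365 = 4096
```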
3.2.4.5 Assign replica 6388 to master 6387

Command: redis-cli --cluster add-node ip:new-replica-port ip:new-master-port --cluster-slave --cluster-master-id new-master-node-ID

# eaec73df549819ff60a553be8f60591e74dceac7 --> this is 6387's node ID; use the value from your own cluster
root@localhost:/data# redis-cli --cluster add-node 192.168.198.100:6388 192.168.198.100:6387 --cluster-slave --cluster-master-id eaec73df549819ff60a553be8f60591e74dceac7
>>> Adding node 192.168.198.100:6388 to cluster 192.168.198.100:6387
>>> Performing Cluster Check (using node 192.168.198.100:6387)
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.198.100:6388 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 192.168.198.100:6387.
[OK] New node added correctly.
3.2.4.6 Check the cluster (third time)

redis-cli --cluster check 192.168.198.100:6382

root@localhost:/data# redis-cli --cluster check 192.168.198.100:6382
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6381 (e9041b90...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6387 (eaec73df...) -> 1 keys | 4096 slots | 1 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6382)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 08adfc36d361f23e52658f2ac8688b9098883b6c 192.168.198.100:6388
   slots: (0 slots) slave
   replicates eaec73df549819ff60a553be8f60591e74dceac7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

3.2.5 Scaling the cluster down

3.2.5.1 Check the cluster (first time) and get the node ID of 6388

redis-cli --cluster check 192.168.198.100:6382

eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387

08adfc36d361f23e52658f2ac8688b9098883b6c 192.168.198.100:6388

root@localhost:/data# redis-cli --cluster check 192.168.198.100:6382
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6381 (e9041b90...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6387 (eaec73df...) -> 1 keys | 4096 slots | 1 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6382)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 08adfc36d361f23e52658f2ac8688b9098883b6c 192.168.198.100:6388
   slots: (0 slots) slave
   replicates eaec73df549819ff60a553be8f60591e74dceac7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
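The node IDs that del-node needs can be extracted from the M:/S: lines of the check output instead of being copied by hand. A sketch, run against two lines taken from the listing above:

```shell
#!/usr/bin/env bash
# Sketch: pull the node ID for a given port out of "cluster check" M:/S: lines.
node_id_for_port() {
  local port=$1
  awk -v p=":$port$" '($1 == "M:" || $1 == "S:") && $3 ~ p { print $2 }'
}

# The two lines are copied from the cluster check output above
SAMPLE='M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
S: 08adfc36d361f23e52658f2ac8688b9098883b6c 192.168.198.100:6388'

printf '%s\n' "$SAMPLE" | node_id_for_port 6388   # -> 08adfc36d361f23e52658f2ac8688b9098883b6c
```

In practice you would pipe `redis-cli --cluster check 192.168.198.100:6382 | node_id_for_port 6388`.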
3.2.5.2 Remove replica 6388 from the cluster

Command: redis-cli --cluster del-node ip:replica-port replica-6388-node-ID

root@localhost:/data# redis-cli --cluster del-node 192.168.198.100:6388 08adfc36d361f23e52658f2ac8688b9098883b6c
>>> Removing node 08adfc36d361f23e52658f2ac8688b9098883b6c from cluster 192.168.198.100:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

# check again: 6388 has been removed
root@localhost:/data# redis-cli --cluster check 192.168.198.100:6382
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6381 (e9041b90...) -> 0 keys | 4096 slots | 1 slaves.
192.168.198.100:6387 (eaec73df...) -> 1 keys | 4096 slots | 0 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6382)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3.2.5.3 Empty 6387's slots and reassign them; in this example all freed slots go to 6381

Note: all 4096 slots are assigned to 6381.

# 4096 is the number of slots to hand to 6381, taken from slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
How many slots do you want to move (from 1 to 16384)? 4096

# e9041b9016ee93ac9aa3ad6d21160fd923bea0bd is 6381's node ID; it receives the freed slots
What is the receiving node ID? e9041b9016ee93ac9aa3ad6d21160fd923bea0bd

# eaec73df549819ff60a553be8f60591e74dceac7 is 6387's node ID, i.e. the node being emptied
Source node #1: eaec73df549819ff60a553be8f60591e74dceac7
Source node #2: done

# Do you want to proceed with the proposed reshard plan (yes/no)? yes
root@localhost:/data# redis-cli --cluster reshard 192.168.198.100:6381
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: eaec73df549819ff60a553be8f60591e74dceac7
Source node #2: done

Ready to move 4096 slots.
  Source nodes:
    M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
       slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
  Destination node:
    M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
       slots:[1365-5460] (4096 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 0 from eaec73df549819ff60a553be8f60591e74dceac7
    Moving slot 1 from eaec73df549819ff60a553be8f60591e74dceac7
3.2.5.4 Check the cluster (second time)

All 4096 slots were handed to 6381, which now has 8192 slots. Giving everything to one node keeps it simple; splitting the slots across the three remaining masters would mean answering the reshard prompts three times.

Key point:

# 1. 6381 now holds 8192 slots
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
[root@localhost ~]# docker exec -it redis-node-1 /bin/bash

root@localhost:/data# redis-cli --cluster check 192.168.198.100:6381
192.168.198.100:6381 (e9041b90...) -> 1 keys | 8192 slots | 1 slaves.
192.168.198.100:6387 (eaec73df...) -> 0 keys | 0 slots | 0 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
M: eaec73df549819ff60a553be8f60591e74dceac7 192.168.198.100:6387
   slots: (0 slots) master
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3.2.5.5 Remove 6387 and run a final cluster check

Command: redis-cli --cluster del-node ip:port 6387-node-ID

root@localhost:/data# redis-cli --cluster del-node 192.168.198.100:6387 eaec73df549819ff60a553be8f60591e74dceac7
>>> Removing node eaec73df549819ff60a553be8f60591e74dceac7 from cluster 192.168.198.100:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

root@localhost:/data# redis-cli --cluster check 192.168.198.100:6381
192.168.198.100:6381 (e9041b90...) -> 1 keys | 8192 slots | 1 slaves.
192.168.198.100:6383 (606b6da0...) -> 1 keys | 4096 slots | 1 slaves.
192.168.198.100:6382 (0f954a05...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.198.100:6381)
M: e9041b9016ee93ac9aa3ad6d21160fd923bea0bd 192.168.198.100:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
S: 3b98da815e93d18f56765fcbbbd99f29156207b6 192.168.198.100:6386
   slots: (0 slots) slave
   replicates e9041b9016ee93ac9aa3ad6d21160fd923bea0bd
S: 271a3487bf5cbbfaf79741294eec943b256500e1 192.168.198.100:6385
   slots: (0 slots) slave
   replicates 606b6da0c631a20592cb0db22d7f52342ff77cb5
M: 606b6da0c631a20592cb0db22d7f52342ff77cb5 192.168.198.100:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 0f954a05cabd504de1835e9d02fab832c918c26a 192.168.198.100:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: c7f9d4cc3c230273f3ec6b2a696ab79dc8b9bd61 192.168.198.100:6384
   slots: (0 slots) slave
   replicates 0f954a05cabd504de1835e9d02fab832c918c26a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
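The whole scale-down flow in 3.2.5 boils down to four commands. A dry-run sketch that only prints them, using the host and node IDs from the cluster above (substitute your own values):

```shell
#!/usr/bin/env bash
# Sketch: print the four scale-down steps from 3.2.5 without executing them.
scale_down_plan() {
  local host=192.168.198.100
  local replica_id=08adfc36d361f23e52658f2ac8688b9098883b6c   # 6388, from cluster check
  local master_id=eaec73df549819ff60a553be8f60591e74dceac7    # 6387, from cluster check
  cat <<EOF
redis-cli --cluster del-node $host:6388 $replica_id
redis-cli --cluster reshard $host:6381   # then move all 4096 slots off $master_id
redis-cli --cluster del-node $host:6387 $master_id
redis-cli --cluster check $host:6381
EOF
}

scale_down_plan
```

The reshard step is interactive, so run it by hand and answer the prompts as shown in 3.2.5.3.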