Redis Data Migration (Single Node → Cluster)

Building on the success of the previous migration (RDB file → cluster), this article keeps the momentum going and migrates a single Redis node into a cluster.

Note on file encoding: make sure every file involved uses LF (UNIX) line endings.

1. Network connectivity check

First, check which Docker network the cluster uses:

$ docker network ls
NETWORK ID     NAME                            DRIVER    SCOPE
bfa40a6eb2ba   bridge                          bridge    local
b6055b32849c   docker-redis-cluster_redisnet   bridge    local

The cluster uses the network docker-redis-cluster_redisnet.
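
To pick a free static IP for the single node (10.0.0.9 is used below), it helps to confirm the subnet of that network first; a quick check, assuming the network name shown above:

$ docker network inspect docker-redis-cluster_redisnet --format '{{json .IPAM.Config}}'

The Subnet field in the output (presumably 10.0.0.0/24 here, judging by the addresses used later) is the range that ipv4_address must fall into.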

2. Single-node Redis

2.1 Configuration file

redis.conf

# Listen on the default port inside the container (mapped to host port 7000 by docker-compose)
port 6379
# Run as a plain standalone instance; the cluster-* options below are ignored while this is "no"
cluster-enabled no
cluster-config-file nodes.conf
cluster-node-timeout 5000
# No AOF persistence on the migration source
appendonly no

2.2 docker-compose.yml

Here the container joins the same network as the cluster: docker-redis-cluster_redisnet.
Note the external: true setting, which tells Compose to reuse the existing network instead of creating a new one.

version: '3'
networks:
  docker-redis-cluster_redisnet:
    external: true
services:
  redis:
    image: redis:6.0.9
    container_name: redis
    hostname: redis
    restart: always
    ports:
      - 7000:6379
    networks:
      docker-redis-cluster_redisnet:
        ipv4_address: 10.0.0.9
    volumes:
      - ./conf/redis.conf:/etc/redis/redis.conf
      - ./data/:/data/
    command:
      redis-server /etc/redis/redis.conf --appendonly no
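
Optionally, validate the file and the external network reference before starting anything; docker-compose config renders the effective configuration and fails fast on mistakes:

$ docker-compose config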

2.3 Deploy the single-node Redis

$ docker-compose up --build -d
Creating redis ... done
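
A quick way to confirm the node is up and got the intended static IP (the container name redis and the address 10.0.0.9 come from the compose file above); the first command should print PONG and the second should print 10.0.0.9:

$ docker exec -it redis redis-cli ping
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis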

3. Single node → cluster migration

3.1 Pull the migration image

$ docker pull lyman1567/redis-migrate-tool
Using default tag: latest
latest: Pulling from lyman1567/redis-migrate-tool
2d473b07cdd5: Pull complete
db3f9a69e07b: Pull complete
Digest: sha256:c6d39642f42d1882058664a10920cdff97b1439d4c5a91593549d8c1b3e68529
Status: Downloaded newer image for lyman1567/redis-migrate-tool:latest
docker.io/lyman1567/redis-migrate-tool:latest
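
If you want to confirm which binary the image ships, redis-migrate-tool prints its version with -V; this assumes the image's default entrypoint runs commands directly, as the bash invocation in step 3.3 suggests:

$ docker run --rm lyman1567/redis-migrate-tool redis-migrate-tool -V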

3.2 Migration configuration (rmt.conf)

Place the file at C:/redis-migrate-tool/data/conf/rmt.conf. Note: for a cluster-to-cluster migration you only need to change type: single to type: redis cluster in the [source] section (see the variant after the config below).

#rmt.conf
[source]
type: single
servers:
 - 10.0.0.9:6379

[target]
type: redis cluster
servers:
 - 10.0.0.10:6379

[common]
listen: 0.0.0.0:8888
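
For reference, the cluster-to-cluster variant mentioned above only changes the [source] section; a sketch, with 10.0.0.2:6379 as a purely hypothetical address of one source-cluster node:

[source]
type: redis cluster
servers:
 - 10.0.0.2:6379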

3.3 Attach the migration tool to the single node

$ docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                    NAMES
8986142ff4d4   redis:6.0.9          "docker-entrypoint.s…"   11 minutes ago   Up 11 minutes   0.0.0.0:7000->6379/tcp   redis
9775b194c25b   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6380->6379/tcp   docker-redis-cluster_redis-2_1
419e4364c06f   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6384->6379/tcp   docker-redis-cluster_redis-6_1
e3a6230f2a20   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6382->6379/tcp   docker-redis-cluster_redis-4_1
bf62da9c04d9   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6381->6379/tcp   docker-redis-cluster_redis-3_1
a507843f88eb   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6379->6379/tcp   docker-redis-cluster_redis-1_1
843d7f746595   redis-cluster-node   "/bin/bash /start.sh"    51 minutes ago   Up 51 minutes   0.0.0.0:6383->6379/tcp   docker-redis-cluster_redis-5_1

Attach the tool to the single-node container named redis (ID 8986142ff4d4): --network=container: and --pid=container: put the tool in the same network and PID namespaces as that container, and the -v mount exposes the rmt.conf directory inside the tool container at /usr/local/etc.

$ docker run -it --network=container:8986142ff4d4 --pid=container:8986142ff4d4 -v C:/redis-migrate-tool/data/conf:/usr/local/etc lyman1567/redis-migrate-tool bash

4. Verify the migration configuration inside the container

[root@redis /]# cd /usr/local/etc
[root@redis etc]# ls
rmt.conf
[root@redis etc]# vi rmt.conf
[source]
type: single
servers:
 - 10.0.0.9:6379

[target]
type: redis cluster
servers:
 - 10.0.0.10:6379

[common]
listen: 0.0.0.0:8888

5. Run the migration and check output.log

5.1 Run the migration

The -d flag runs the tool as a daemon, so the command returns immediately and all progress is written to output.log.

[root@redis etc]# redis-migrate-tool -c rmt.conf -o output.log -d
[root@redis etc]#

5.2 Check output.log

Temporary files such as node10.0.0.96379@16379-1612084077217638-4314.rdb will appear in the same directory as output.log.
Once these temporary files disappear, the migration has finished.

[2021-01-31 08:09:52.784] rmt_core.c:525 Nodes count of source group : 1
[2021-01-31 08:09:52.784] rmt_core.c:526 Total threads count : 12
[2021-01-31 08:09:52.785] rmt_core.c:527 Read threads count assigned: 1
[2021-01-31 08:09:52.785] rmt_core.c:528 Write threads count assigned: 1
[2021-01-31 08:09:52.785] rmt_core.c:836 instances_by_host:
[2021-01-31 08:09:52.786] rmt_core.c:840 10.0.0.9:6379
[2021-01-31 08:09:52.786] rmt_core.c:842 
[2021-01-31 08:09:52.786] rmt_core.c:2443 Total threads count in fact: 2
[2021-01-31 08:09:52.786] rmt_core.c:2444 Read threads count in fact: 1
[2021-01-31 08:09:52.786] rmt_core.c:2445 Write threads count in fact: 1
[2021-01-31 08:09:52.787] rmt_core.c:2454 read thread(0):
[2021-01-31 08:09:52.787] rmt_core.c:2460 10.0.0.9:6379
[2021-01-31 08:09:52.787] rmt_core.c:2487 write thread(0):
[2021-01-31 08:09:52.787] rmt_core.c:2493 10.0.0.9:6379
[2021-01-31 08:09:52.788] rmt_core.c:2550 migrate job is running...
[2021-01-31 08:09:52.788] rmt_redis.c:1735 Start connecting to MASTER[10.0.0.9:6379].
[2021-01-31 08:09:52.788] rmt_redis.c:1769 Master[10.0.0.9:6379] replied to PING, replication can continue...
[2021-01-31 08:09:52.788] rmt_redis.c:1080 Partial resynchronization for MASTER[10.0.0.9:6379] not possible (no cached master).
[2021-01-31 08:09:52.798] rmt_redis.c:1139 Full resync from MASTER[10.0.0.9:6379]: 59f526849c29aa8b2dbb2d835fc85bc91d326739:0
[2021-01-31 08:10:59.012] rmt_redis.c:1546 MASTER <-> SLAVE sync: receiving 1151163248 bytes from master[10.0.0.9:6379]
[2021-01-31 08:11:44.377] rmt_redis.c:1652 MASTER <-> SLAVE sync: RDB data for node[10.0.0.9:6379] is received, used: 111 s
[2021-01-31 08:11:44.377] rmt_redis.c:1672 rdb file node10.0.0.9:6379-1612080592798302-63.rdb write complete
[2021-01-31 08:13:31.756] rmt_redis.c:6685 Rdb file for node[10.0.0.9:6379] parsed finished, use: 107 s.
[2021-01-31 08:13:31.873] rmt_redis.c:6793 All nodes' rdb file parsed finished for this write thread(0).
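
Besides reading the log, the listen address from the [common] section (0.0.0.0:8888 here) is a status endpoint that, per the redis-migrate-tool documentation, speaks the Redis protocol and answers an info command. Because the tool shares the redis container's network namespace, it can be queried from that container:

$ docker exec -it redis redis-cli -p 8888 info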

6. Verify the cluster

Connect to one of the cluster nodes (port 6379 on the host maps to docker-redis-cluster_redis-1_1, per the docker ps output above) and check the keyspace:

127.0.0.1:6379> info
....
# Keyspace
db0:keys=495601,expires=0,avg_ttl=0
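
For a stronger check than a key count, the tool's README describes a sampling consistency mode (-C redis_check) that compares randomly chosen keys between source and target; it can be run from the same attached container, reusing the same config (a sketch based on that documented option):

[root@redis etc]# redis-migrate-tool -c rmt.conf -o check.log -C redis_check

check.log then reports how many of the sampled keys are identical on both sides.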