Cross-Subnet / Cross-Layer-3 MySQL + PostgreSQL High-Availability Deployment Guide (No Single Point of Failure, with Architecture Diagram)


1. Architecture Overview

1.1 Core architecture (no single point of failure, layer-2/layer-3 compatible, dual-database)

Target production architecture: application → ProxySQL active/standby pair (Keepalived + VIP) → dual database clusters (MySQL cross-subnet semi-synchronous replication + PostgreSQL cross-subnet streaming replication) → Orchestrator (automatic failover for both databases) → dedicated backup server (scheduled full + incremental backups)

Key advantages:

  • No single point of failure: the ProxySQL pair is hot-standby, both databases run primary/replica pairs, and Orchestrator watches for failures, so losing any one node does not interrupt the business
  • Network flexibility: MySQL and PostgreSQL replication works across subnets, layer-3 boundaries, and data centers; only the two ProxySQL nodes must share a subnet (required for VIP failover)
  • Transparent to applications: applications connect only to the ProxySQL VIP; primary-database and ProxySQL failures are handled automatically with no application-side changes, and both databases are served in parallel
  • No wasted resources: each replica doubles as a disaster-recovery standby and a read node, leaving the primaries free to handle writes
  • Data safety: instead of on-host archiving, a dedicated backup server takes scheduled full and incremental backups of both databases, consuming no resources on the primary/replica nodes

1.2 Role plan (static IPs; substitute your real addresses at deployment time)

| Role | Hostname | IP address | Subnet | Purpose |
| --- | --- | --- | --- | --- |
| MySQL primary | mysql-m | 192.168.1.10 | 192.168.1.0/24 | MySQL write node; deployed across subnets |
| MySQL replica | mysql-s | 192.168.2.10 | 192.168.2.0/24 | MySQL read node and failover candidate |
| PostgreSQL primary | pg-m | 192.168.1.11 | 192.168.1.0/24 | PostgreSQL write node; deployed across subnets |
| PostgreSQL standby | pg-s | 192.168.2.11 | 192.168.2.0/24 | PostgreSQL read node and failover candidate |
| Orchestrator manager | orch-mgr | 192.168.3.12 | 192.168.3.0/24 | Failure detection and automatic failover for both clusters |
| ProxySQL primary | proxysql-m | 192.168.3.10 | 192.168.3.0/24 | Database proxy (read/write splitting), active node |
| ProxySQL standby | proxysql-s | 192.168.3.11 | 192.168.3.0/24 | Database proxy, standby node (same subnet as the active node) |
| ProxySQL cluster VIP | - | 192.168.3.100 | 192.168.3.0/24 | Single entry point for applications; floats between ProxySQL nodes |
| Backup server | db-backup | 192.168.4.10 | 192.168.4.0/24 | Scheduled full + incremental backups of both databases, off-cluster |

1.3 Architecture diagram

[The original mermaid source for the diagram was not preserved in this export.]

Note: paste mermaid code into any mermaid-capable tool (such as Typora or a slide-deck plugin) to render an architecture diagram you can drop straight into a design document or presentation.
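A minimal mermaid sketch of the topology, reconstructed from §1.1 and the role table in §1.2 (the original diagram source was lost in export, so treat this as an approximation):

```mermaid
flowchart TD
    APP[Application] --> VIP[ProxySQL VIP 192.168.3.100]
    VIP --> PM[proxysql-m 192.168.3.10]
    VIP --> PS[proxysql-s 192.168.3.11]
    PM --> MM[mysql-m 192.168.1.10 writes]
    PM --> MS[mysql-s 192.168.2.10 reads]
    PM --> GM[pg-m 192.168.1.11 writes]
    PM --> GS[pg-s 192.168.2.11 reads]
    MM -. semi-sync replication .-> MS
    GM -. streaming replication .-> GS
    ORCH[orch-mgr 192.168.3.12] -. monitors and fails over .-> MM
    ORCH -.-> MS
    ORCH -.-> GM
    ORCH -.-> GS
    BK[db-backup 192.168.4.10] -. full and incremental backups .-> MM
    BK -.-> GM
```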

2. Full Deployment Steps (copy-paste ready)

2.1 Prerequisites (run on every node)

Requirements: all nodes must be able to reach each other (in particular, open MySQL 3306, PostgreSQL 5432, ProxySQL 6032/6033, Orchestrator 3000, and SSH 22 on the backup server), clocks must be synchronized, and hostnames must resolve.

# 1. Disable firewalld and SELinux (in production, open the specific ports instead)
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0

# 2. Install a time-sync daemon; consistent clocks are essential for replication
yum install -y chrony && systemctl start chronyd && systemctl enable chronyd

# 3. Hostname resolution (add on every node; substitute real IPs)
cat >> /etc/hosts << EOF
192.168.1.10 mysql-m
192.168.2.10 mysql-s
192.168.1.11 pg-m
192.168.2.11 pg-s
192.168.3.10 proxysql-m
192.168.3.11 proxysql-s
192.168.3.12 orch-mgr
192.168.4.10 db-backup
EOF

# 4. Passwordless SSH (Orchestrator must reach both database pairs; the ProxySQL nodes and backup server may use it too)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.1.10
ssh-copy-id root@192.168.2.10
ssh-copy-id root@192.168.1.11
ssh-copy-id root@192.168.2.11
ssh-copy-id root@192.168.3.10
ssh-copy-id root@192.168.3.11
ssh-copy-id root@192.168.3.12
ssh-copy-id root@192.168.4.10
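Before proceeding, it is worth verifying cross-subnet reachability of the ports listed above. A small helper like the following can probe each host/port pair from any node (a sketch using bash's built-in /dev/tcp; the target list is an example, adjust it to your plan):

```shell
# check_port <host> <port> -- returns 0 if a TCP connection succeeds within 2s
check_port() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# Example: probe the main database/proxy ports from the role table
for target in 192.168.1.10:3306 192.168.2.10:3306 192.168.1.11:5432 \
              192.168.2.11:5432 192.168.3.10:6032 192.168.3.12:3000; do
  host=${target%:*}; port=${target#*:}
  if check_port "$host" "$port"; then
    echo "OK   $target"
  else
    echo "FAIL $target"
  fi
done
```

Any FAIL line points at a routing or firewall problem to fix before continuing.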

2.2 Deploy MySQL 8.0 cross-subnet semi-synchronous replication

2.2.1 Install MySQL on every MySQL node (primary and replica)

wget https://dev.mysql.com/get/mysql80-community-release-el8-3.noarch.rpm
rpm -ivh mysql80-community-release-el8-3.noarch.rpm
yum install -y mysql-community-server --nogpgcheck

# Start MySQL and enable it at boot
systemctl start mysqld && systemctl enable mysqld

# Retrieve the generated root password (needed for first login)
grep 'temporary password' /var/log/mysqld.log

2.2.2 Primary (192.168.1.10) configuration

# 1. Log in to MySQL with the temporary password from the previous step
mysql -uroot -p

# 2. Change the root password (use a strong password in production)
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@123456';
FLUSH PRIVILEGES;

# 3. Create the required accounts (replication, Orchestrator, ProxySQL monitoring)
# Replication account
CREATE USER 'repl'@'%' IDENTIFIED WITH mysql_native_password BY 'Repl@123456';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

# Orchestrator management account
CREATE USER 'orch'@'%' IDENTIFIED WITH mysql_native_password BY 'Orch@123456';
GRANT SUPER, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT, PROCESS, SELECT ON *.* TO 'orch'@'%';

# ProxySQL monitoring account
CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'Proxy@123456';
GRANT SELECT ON *.* TO 'proxysql'@'%';
FLUSH PRIVILEGES;
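An optional sanity check on the accounts just created, run in the same MySQL session, catches typos before they surface as replication or monitoring failures later:

```sql
-- Confirm each account exists and carries the expected privileges
SELECT user, host FROM mysql.user WHERE user IN ('repl', 'orch', 'proxysql');
SHOW GRANTS FOR 'repl'@'%';
SHOW GRANTS FOR 'orch'@'%';
SHOW GRANTS FOR 'proxysql'@'%';
```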

# 4. Edit the primary's configuration file
vi /etc/my.cnf

Primary my.cnf (copy and replace):

[mysqld]
server-id=10  # must be unique across the replication topology
log-bin=mysql-bin  # enable the binary log (required for replication)
binlog_format=row  # row format for data consistency
relay_log=relay-bin
log_slave_updates=1
read_only=0  # primary is writable
# Enable semi-synchronous replication (reduces the risk of data loss on failover)
plugin_load_add="rpl_semi_sync_master.so"
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000

Then restart the primary so the configuration takes effect:

systemctl restart mysqld

2.2.3 Replica (192.168.2.10) configuration

# 1. Log in to MySQL and set the root password (same as on the primary)
mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@123456';
FLUSH PRIVILEGES;

# 2. Edit the replica's configuration file
vi /etc/my.cnf

Replica my.cnf (copy and replace):

[mysqld]
server-id=20  # must differ from the primary
log-bin=mysql-bin
binlog_format=row
relay_log=relay-bin
log_slave_updates=1
read_only=1  # read-only for ordinary users
super_read_only=1  # read-only for privileged users as well
# Enable semi-synchronous replication
plugin_load_add="rpl_semi_sync_slave.so"
rpl_semi_sync_slave_enabled=1

Then restart the replica so the configuration takes effect:

systemctl restart mysqld
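The binlog file and position used in the CHANGE MASTER TO step come from the primary, so read them there first (run on mysql-m; your values will differ from the placeholders used in this guide):

```sql
-- On the primary: current binlog coordinates
SHOW MASTER STATUS;
-- The File column maps to master_log_file, Position to master_log_pos
```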

# 3. Point the replica at the primary (run inside MySQL)
CHANGE MASTER TO
master_host='192.168.1.10',  # primary IP (reachable across subnets)
master_user='repl',
master_password='Repl@123456',
master_log_file='mysql-bin.000001',  # binlog file from SHOW MASTER STATUS on the primary
master_log_pos=156;  # binlog position from the same output

# Start replication
START SLAVE;

2.2.4 Verify replication (on the replica)

SHOW SLAVE STATUS\G

Success criteria: both Slave_IO_Running: Yes and Slave_SQL_Running: Yes, confirming that cross-subnet replication is working.
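Semi-synchronous replication is worth verifying separately: if the plugin handshake failed, replication silently degrades to asynchronous. Run the first query on the primary and the second on the replica:

```sql
-- On the primary: Rpl_semi_sync_master_status should be ON and
-- Rpl_semi_sync_master_clients should be 1
SHOW STATUS LIKE 'Rpl_semi_sync_master%';

-- On the replica: Rpl_semi_sync_slave_status should be ON
SHOW STATUS LIKE 'Rpl_semi_sync_slave%';
```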

2.3 Deploy PostgreSQL 14 cross-subnet streaming replication

2.3.1 Install PostgreSQL on both nodes (primary and standby)

# 1. Add the official PostgreSQL repository
yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# 2. Disable the distribution's built-in PostgreSQL module
dnf -y module disable postgresql

# 3. Install PostgreSQL 14 and extensions
yum install -y postgresql14-server postgresql14-contrib

# 4. Initialize the data directory (both nodes)
/usr/pgsql-14/bin/postgresql-14-setup initdb

# 5. Start PostgreSQL and enable it at boot
systemctl start postgresql-14 && systemctl enable postgresql-14

Note: unlike MySQL, PostgreSQL does not generate an initial password; local logins as the postgres OS user rely on peer authentication, and a password is set explicitly in the next step.

2.3.2 Primary (192.168.1.11) configuration

# 1. Switch to the postgres user and set a password
su - postgres
psql -U postgres
ALTER USER postgres WITH PASSWORD 'Postgres@123456';
\q
exit

# 2. Edit the main configuration file
vi /var/lib/pgsql/14/data/postgresql.conf

Primary postgresql.conf key settings (copy and replace):

# Listen on all addresses (needed for cross-subnet access)
listen_addresses = '*'
# Port (default 5432)
port = 5432
# Streaming-replication parameters
wal_level = replica  # must be replica or logical for streaming replication
max_wal_senders = 10  # maximum concurrent replication connections
wal_keep_size = 16MB  # WAL retained for standbys; increase on slow or lossy links
hot_standby = on  # allow the standby to serve reads
# Leave the remaining defaults unchanged

# 3. Edit the host-based authentication file
vi /var/lib/pgsql/14/data/pg_hba.conf

Primary pg_hba.conf (copy and replace; allows the standby, Orchestrator/ProxySQL subnet, and the backup server):

# Replication connections for the standby (required by pg_basebackup)
host    replication     repl            192.168.2.0/24      scram-sha-256
host    all             all             192.168.2.0/24      scram-sha-256
host    all             all             192.168.3.0/24      scram-sha-256
host    all             all             192.168.4.0/24      scram-sha-256
# Keep the default local rules
host    all             all             127.0.0.1/32        scram-sha-256
host    all             all             ::1/128             scram-sha-256
local   all             all                                     peer

# 4. Create the replication role (as the postgres user)
su - postgres
psql -U postgres
CREATE ROLE repl WITH REPLICATION LOGIN PASSWORD 'Repl@123456';
\q
exit

# 5. Restart PostgreSQL to apply the changes
systemctl restart postgresql-14
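Before cloning the standby, it is worth confirming that the primary actually picked up these settings (run as the postgres user on pg-m):

```sql
-- Expected values per the configuration above
SHOW wal_level;        -- replica
SHOW max_wal_senders;  -- 10
-- The replication role should exist with rolreplication = t
SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'repl';
```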

2.3.3 Standby (192.168.2.11) configuration

# 1. Stop PostgreSQL (the standby is initialized from a fresh base backup)
systemctl stop postgresql-14

# 2. Empty the data directory (avoids conflicts)
rm -rf /var/lib/pgsql/14/data/*

# 3. Clone the primary with a streaming base backup (as the postgres user)
su - postgres
pg_basebackup -h 192.168.1.11 -U repl -D /var/lib/pgsql/14/data -P -X stream -R
exit

# 4. -R already wrote standby.signal and primary_conninfo; this is only a safeguard
touch /var/lib/pgsql/14/data/standby.signal

# 5. Edit the standby's configuration file
vi /var/lib/pgsql/14/data/postgresql.conf

Standby postgresql.conf key settings (copy and replace):

# Listen on all addresses
listen_addresses = '*'
port = 5432
# Streaming-replication parameters (same as the primary)
wal_level = replica
max_wal_senders = 10
wal_keep_size = 16MB
hot_standby = on  # serve read queries while in recovery
# Read-only default for ordinary sessions (the standby rejects writes anyway)
default_transaction_read_only = on
# Leave the remaining defaults unchanged

# 6. Edit pg_hba.conf (mirror of the primary's rules)
vi /var/lib/pgsql/14/data/pg_hba.conf

Standby pg_hba.conf (copy and replace):

# Replication connections (needed if this node is ever promoted)
host    replication     repl            192.168.1.0/24      scram-sha-256
host    all             all             192.168.1.0/24      scram-sha-256
host    all             all             192.168.3.0/24      scram-sha-256
host    all             all             192.168.4.0/24      scram-sha-256
host    all             all             127.0.0.1/32        scram-sha-256
host    all             all             ::1/128             scram-sha-256
local   all             all                                     peer

# 7. Fix data-directory ownership and permissions
chown -R postgres:postgres /var/lib/pgsql/14/data
chmod -R 700 /var/lib/pgsql/14/data

# 8. Start the standby
systemctl start postgresql-14

# 9. Verify replication (run on the PRIMARY: pg_stat_replication is populated there, not on the standby)
su - postgres
psql -U postgres -c "SELECT * FROM pg_stat_replication;"
exit

Success criteria: the row for the standby shows values for sent_lsn, write_lsn, and flush_lsn, and state is streaming, confirming cross-subnet streaming replication is working.
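Recovery status can also be confirmed directly on the standby, without touching the primary (run as postgres on pg-s):

```sql
-- t means this node is in recovery, i.e. acting as a standby
SELECT pg_is_in_recovery();
-- status should be 'streaming' and sender_host the primary's IP
SELECT status, sender_host, flushed_lsn FROM pg_stat_wal_receiver;
```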

2.4 Deploy Orchestrator (automatic failover)

2.4.1 Install Orchestrator on the manager node (192.168.3.12)

wget https://github.com/openark/orchestrator/releases/download/v3.2.6/orchestrator-3.2.6-1.el8.x86_64.rpm
rpm -ivh orchestrator-3.2.6-1.el8.x86_64.rpm

2.4.2 Configure Orchestrator

vi /etc/orchestrator.conf.json

Caution: upstream Orchestrator manages MySQL topologies only. The Postgres* keys and the Instances/DbType entries below are not part of the stock configuration schema and assume a custom or extended build; with vanilla Orchestrator, handle PostgreSQL failover with a dedicated tool such as repmgr or Patroni instead.

Core configuration (copy, then substitute your IPs and passwords):

{
  "MySQLTopologyUser": "orch",
  "MySQLTopologyPassword": "Orch@123456",
  "PostgresTopologyUser": "postgres",
  "PostgresTopologyPassword": "Postgres@123456",
  "MySQLOrchestratorHost": "127.0.0.1",
  "MySQLOrchestratorPort": 3306,
  "PostgresOrchestratorHost": "127.0.0.1",
  "PostgresOrchestratorPort": 5432,
  "MySQLOrchestratorDatabase": "orchestrator",
  "PostgresOrchestratorDatabase": "orchestrator",
  "MySQLOrchestratorUser": "root",
  "MySQLOrchestratorPassword": "Root@123456",
  "PostgresOrchestratorUser": "postgres",
  "PostgresOrchestratorPassword": "Postgres@123456",
  "FailoverMasterCluster": true,
  "ApplyMySQLPromotionAfterMasterFailover": true,
  "ApplyPostgresPromotionAfterMasterFailover": true,
  "DetectClusterAliasQuery": "SELECT @@server_id",
  "DiscoverByShowSlaveHosts": false,
  "Instances": [
    {"Alias": "mysql-m", "Hostname": "192.168.1.10", "Port": 3306, "DbType": "mysql"},
    {"Alias": "mysql-s", "Hostname": "192.168.2.10", "Port": 3306, "DbType": "mysql"},
    {"Alias": "pg-m", "Hostname": "192.168.1.11", "Port": 5432, "DbType": "postgres"},
    {"Alias": "pg-s", "Hostname": "192.168.2.11", "Port": 5432, "DbType": "postgres"}
  ],
  "PostMasterFailoverScript": "/usr/bin/orch_proxysql_failover.sh {{newMasterHost}} {{dbType}}"
}

The failover hook receives the new primary host and the database type so it can update the matching ProxySQL routing. (JSON does not permit inline comments, so the explanation has to live outside the file.)
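Orchestrator refuses to start on malformed JSON (a stray comment or trailing comma is enough), so validating the file after every edit is cheap insurance. A minimal helper, assuming python3 is available on the manager node:

```shell
# validate_json <file> -- exit 0 iff the file parses as JSON
validate_json() {
  python3 -m json.tool "$1" >/dev/null 2>&1
}

validate_json /etc/orchestrator.conf.json && echo "config OK" || echo "config INVALID"
```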

2.4.3 Create the Orchestrator→ProxySQL failover hook (handles both databases)

vi /usr/bin/orch_proxysql_failover.sh

Script contents (copy as-is):

#!/bin/bash
NEW_MASTER=$1
DB_TYPE=$2
# Pick the ProxySQL writer hostgroup by database type (MySQL writes: 10, PostgreSQL writes: 30)
if [ "$DB_TYPE" = "mysql" ]; then
    HOSTGROUP_ID=10
    PORT=3306
elif [ "$DB_TYPE" = "postgres" ]; then
    HOSTGROUP_ID=30
    PORT=5432
else
    echo "unknown database type; failover aborted" >> /var/log/orch_failover.log
    exit 1
fi

# Update the writer hostgroup on the active ProxySQL node
# (note: ProxySQL rejects remote logins with the default admin account;
# add a remote admin user via admin-admin_credentials first)
mysql -uadmin -padmin -h192.168.3.10 -P6032 -e "
DELETE FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_ID;
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES ($HOSTGROUP_ID, '$NEW_MASTER', $PORT);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
"

# Mirror the change on the standby ProxySQL node
mysql -uadmin -padmin -h192.168.3.11 -P6032 -e "
DELETE FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_ID;
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES ($HOSTGROUP_ID, '$NEW_MASTER', $PORT);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
"

# Log the result
echo "$(date +'%Y-%m-%d %H:%M:%S') - $DB_TYPE primary switched to $NEW_MASTER; ProxySQL routing updated" >> /var/log/orch_failover.log

Then make the script executable:

chmod +x /usr/bin/orch_proxysql_failover.sh

2.4.4 Start Orchestrator

systemctl start orchestrator && systemctl enable orchestrator

Verify: open http://192.168.3.12:3000 in a browser. If the MySQL replication topology (and, with an extended build, the PostgreSQL one) is visible, the deployment succeeded.

2.5 Deploy the ProxySQL pair (single application entry point)

2.5.1 Install ProxySQL on both nodes (3.10 and 3.11)

wget https://github.com/sysown/proxysql/releases/download/v2.5.5/proxysql-2.5.5-1.el8.x86_64.rpm
rpm -ivh proxysql-2.5.5-1.el8.x86_64.rpm
systemctl start proxysql && systemctl enable proxysql

2.5.2 Configure ProxySQL (run on the active node 3.10; mirror on the standby)

Caution: ProxySQL 2.x speaks the MySQL wire protocol only, so the PostgreSQL entries below will not actually proxy PostgreSQL traffic; native PostgreSQL support only arrived with ProxySQL 3.0. On 2.x, front PostgreSQL with Pgpool-II or HAProxy instead and keep ProxySQL for MySQL.

# Log in to the ProxySQL admin interface (default credentials: admin/admin)
mysql -uadmin -padmin -h127.0.0.1 -P6032

Run the following (read/write splitting plus monitoring):

-- 1. MySQL primary/replica (writer hostgroup 10, reader hostgroup 20)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '192.168.1.10', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '192.168.2.10', 3306);

-- 2. PostgreSQL primary/standby (writer hostgroup 30, reader hostgroup 40;
-- ProxySQL 2.x cannot actually speak the PostgreSQL protocol to these)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (30, '192.168.1.11', 5432);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (40, '192.168.2.11', 5432);

-- 3. Read/write-split hostgroup pairs
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup) VALUES (10, 20); -- MySQL
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup) VALUES (30, 40); -- PostgreSQL

-- 4. Monitoring credentials (the proxysql account created earlier)
UPDATE global_variables SET variable_value='proxysql' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='Proxy@123456' WHERE variable_name='mysql-monitor_password';
-- Note: ProxySQL keeps monitor credentials only in these global variables; there
-- is no per-database monitor-user table, and the monitor module speaks the MySQL
-- protocol only, so it cannot health-check the PostgreSQL entries.

-- 5. Read-traffic weights (shield the primaries; replicas take most reads)
UPDATE mysql_servers SET weight=1 WHERE hostname='192.168.1.10';   -- MySQL primary: low read weight
UPDATE mysql_servers SET weight=10 WHERE hostname='192.168.2.10';  -- MySQL replica: high read weight
UPDATE mysql_servers SET weight=1 WHERE hostname='192.168.1.11';   -- PostgreSQL primary: low read weight
UPDATE mysql_servers SET weight=10 WHERE hostname='192.168.2.11';  -- PostgreSQL standby: high read weight

-- 6. Apply and persist
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
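One piece the listing above does not cover is the application-facing side: clients connecting to ProxySQL on 6033 must exist in mysql_users, and routing SELECTs to the reader hostgroup requires query rules. A sketch for the MySQL pair (appuser and its password are placeholders, not accounts created earlier in this guide; create the same account on the MySQL primary first):

```sql
-- Application account (default_hostgroup 10 = MySQL writer group)
INSERT INTO mysql_users (username, password, default_hostgroup, active)
VALUES ('appuser', 'App@123456', 10, 1);

-- Route SELECT ... FOR UPDATE to the writer, plain SELECTs to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```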

2.5.3 Mirror the configuration on the standby ProxySQL node (3.11)

Run the same SQL from the active node (3.10) on the standby node's admin interface (3.11) so both ProxySQL instances carry identical configuration.

2.6 Deploy Keepalived (ProxySQL high availability via a floating VIP)

Run the following on both ProxySQL nodes (3.10 and 3.11); they must share a subnet (192.168.3.0/24).

2.6.1 Install Keepalived

yum install -y keepalived

2.6.2 Create the ProxySQL health-check script (both nodes)

vi /etc/keepalived/check_proxysql.sh

Script contents (copy as-is):

#!/bin/bash
# If the ProxySQL process is gone, stop Keepalived so the VIP fails over
pgrep proxysql > /dev/null
if [ $? -ne 0 ]; then
    systemctl stop keepalived
fi

Then make the script executable:

chmod +x /etc/keepalived/check_proxysql.sh

2.6.3 Configure Keepalived (active node 3.10)

vi /etc/keepalived/keepalived.conf

Configuration (copy as-is):

global_defs {
   router_id proxysql_ha  # identifier; same on both nodes
}

# Health-check hook for ProxySQL
vrrp_script check_proxysql {
    script "/etc/keepalived/check_proxysql.sh"
    interval 2  # run every 2 seconds
    weight -20  # drop priority by 20 on failure
}

vrrp_instance VI_1 {
    state MASTER  # active node
    interface eth0  # replace with your NIC name (see ip addr)
    virtual_router_id 52  # must match on both nodes
    priority 100  # higher than the standby
    advert_int 1  # 1-second VRRP advertisements
    authentication {
        auth_type PASS
        auth_pass 1111  # same on both nodes
    }
    virtual_ipaddress {
        192.168.3.100/24  # the application-facing VIP
    }
    track_script {
        check_proxysql  # attach the health check
    }
}

2.6.4 Configure Keepalived (standby node 3.11)

vi /etc/keepalived/keepalived.conf

Configuration (only two lines differ from the active node):

global_defs {
   router_id proxysql_ha
}

vrrp_script check_proxysql {
    script "/etc/keepalived/check_proxysql.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP  # standby role
    interface eth0
    virtual_router_id 52
    priority 90  # lower than the active node's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.3.100/24
    }
    track_script {
        check_proxysql
    }
}

2.6.5 Start Keepalived (both nodes)

systemctl start keepalived && systemctl enable keepalived

Verify: on the active node (3.10), ip addr should show 192.168.3.100 (the VIP).

2.7 Deploy the dedicated backup server (scheduled full + incremental backups of both databases)

2.7.1 Install backup tooling on the backup server (192.168.4.10)

# Client tools for both databases, plus compression and cron
# (the PGDG client package is named postgresql14, not postgresql14-client)
yum install -y mysql-community-client postgresql14 gzip cronie

# Backup directory layout, split by database and backup type
mkdir -p /data/db_backup/{mysql,postgresql}/{full,incr}
# mysql/full: MySQL full backups; mysql/incr: MySQL incrementals
# postgresql/full: PostgreSQL full backups; postgresql/incr: PostgreSQL incrementals

# Directory permissions
chmod -R 755 /data/db_backup

Note: mysqldump alone produces only full backups; MySQL incrementals are usually implemented by copying binlogs (mysqlbinlog) or with Percona XtraBackup, and PostgreSQL incrementals by archiving WAL alongside pg_basebackup.
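The retention cleanup referenced by the backup policy below can live in its own function, which also makes it easy to test against a scratch directory (retention of 30 days for full backups and 7 days for incrementals is an assumption; adjust to your policy):

```shell
# prune_backups [root] -- delete full backups older than 30 days and
# incrementals older than 7 days under the layout created above
prune_backups() {
  local root="${1:-/data/db_backup}"
  local db
  for db in mysql postgresql; do
    find "$root/$db/full" -type f -mtime +30 -delete 2>/dev/null
    find "$root/$db/incr" -type f -mtime +7 -delete 2>/dev/null
  done
  return 0
}
```

Call prune_backups at the end of /usr/bin/db_backup.sh, or as its own cron entry.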

2.7.2 Write the scheduled backup script for both databases

# Create the backup script
vi /usr/bin/db_backup.sh

Script contents (copy as-is; adjust the retention periods to your needs):

#!/bin/bash
# Scheduled backup of both databases (runs on the dedicated server, outside business hours)
# Policy: daily incremental backups and a full backup every Sunday for both
# MySQL and PostgreSQL, with automatic cleanup of expired backups

# 1. Variables
# MySQL
MYSQL_USER="root"
MYSQL_PASS="Root@123456"
MYSQL_HOST="192.16
MYSQL_HOST="192.16