引言
在企业数字化转型进程中,数据库系统的稳定运行承载着业务连续性的核心使命。当在线服务的中断可能直接转化为企业经济损失时,构建可靠的数据库高可用架构已从技术选项变为业务必需。面对不同规模、不同特征的应用场景,单一的架构方案往往难以满足多样化的可用性需求。
本文将从技术实现角度,系统剖析三种典型的数据库高可用架构模式及其在具体数据库产品中的应用实践。
一、KES主备集群:基础高可用架构实战
1.1 架构原理深度解析
KES主备集群采用了经典的基于日志传输的物理复制机制。与逻辑复制不同,物理复制在数据块级别进行同步,这种方式在数据一致性和同步效率方面具有明显优势。核心架构基于"一主多备"模型,主节点处理所有读写事务,备节点通过重做主节点的WAL(Write-Ahead Logging)日志保持数据同步。
复制流工作原理:
主节点在事务提交时,首先将WAL日志写入本地持久存储,然后通过TCP连接将日志流式传输到备节点。备节点接收日志后,按照与主节点完全相同的顺序重做这些日志,从而保持与主节点一致的数据状态。在同步复制模式下,这一机制可以保证即使主节点发生灾难性故障,已提交事务的数据也不会丢失;异步模式下则可能丢失故障前尚未传输的少量事务。
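为了更直观地观察这种日志流式复制的进度,可以分别在主、备节点上查询WAL位置并计算差值。下面是一个示意性的Python片段(假设KES兼容pg_current_wal_lsn等PostgreSQL函数、可通过psycopg2连接,连接串为虚构,仅作演示):
#!/usr/bin/env python3
# replication_progress.py -- 示意:比较主备节点的WAL位置,估算备节点落后的字节数
import psycopg2

# 以下连接串均为假设,按实际环境调整
PRIMARY_DSN = "host=192.168.1.101 port=5432 dbname=mydb user=monitor"
STANDBY_DSN = "host=192.168.1.102 port=5432 dbname=mydb user=monitor"

def query_one(dsn, sql, params=None):
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchone()[0]

# 主节点当前已生成的WAL位置
primary_lsn = query_one(PRIMARY_DSN, "SELECT pg_current_wal_lsn()")
# 备节点已重放(应用)到的WAL位置
standby_lsn = query_one(STANDBY_DSN, "SELECT pg_last_wal_replay_lsn()")
# 在主节点上计算两个LSN之间的字节差,即备节点尚未追上的数据量
lag_bytes = query_one(PRIMARY_DSN,
                      "SELECT pg_wal_lsn_diff(%s, %s)",
                      (primary_lsn, standby_lsn))
print(f"standby replay lag: {lag_bytes} bytes")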
同步模式选择策略:
实际部署中,同步模式的选择需要权衡数据安全性和性能影响。在金融、交易等对数据一致性要求极高的场景,通常配置为同步模式。以下是一个典型配置示例:
# kingbase.conf(主节点配置示例)
synchronous_commit = on
synchronous_standby_names = 'standby1'
-- 查看同步状态
SELECT application_name, sync_state, sync_priority
FROM sys_stat_replication;
异步模式适用于数据分析、报表生成等可容忍少量数据延迟的场景。在这种模式下,主节点无需等待备节点确认即可提交事务,显著降低了写操作的延迟。
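值得一提的是,synchronous_commit可以按会话甚至单个事务临时下调,实现"关键业务同步、批量任务只等本地刷盘"的混合策略。下面是一个Python示意(连接串与表report_staging均为假设):
#!/usr/bin/env python3
# async_batch_write.py -- 示意:对可容忍延迟的批量写入,按事务下调同步级别
import psycopg2

conn = psycopg2.connect("host=192.168.1.101 port=5432 dbname=mydb user=app_user")
try:
    with conn:  # with conn: 块结束时自动提交本事务
        with conn.cursor() as cur:
            # 仅对当前事务生效:提交时只等待本地WAL刷盘,不等待备节点确认
            cur.execute("SET LOCAL synchronous_commit TO local")
            cur.execute(
                "INSERT INTO report_staging (generated_at, payload) VALUES (now(), %s)",
                ("nightly report",),
            )
finally:
    conn.close()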
1.2 故障检测与切换机制
KES集群的故障检测机制基于多层健康检查体系,确保能够及时准确地识别故障节点。
心跳检测网络:
集群管理服务会在节点间建立专用的心跳网络,定期交换健康状态信息。心跳间隔通常配置在秒级,超时时间根据网络质量调整。一个实际部署中的心跳配置可能如下:
# 集群配置文件示例
node_id = 1
node_name = 'primary'
node_host = '192.168.1.101'
heartbeat_interval = 1s
heartbeat_timeout = 3s
max_failover_retries = 3
智能故障判定:
系统会综合多个指标判定节点状态,避免因网络抖动导致的误切换。除了心跳检测外,还包括(综合判定的思路可参考清单后的示意代码):
- 数据库进程状态检查
- 磁盘空间监控
- 内存使用率检查
- 关键系统服务状态
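下面用一段简化的Python代码示意这种交叉验证的判定思路(服务名、数据目录与阈值均为假设,实际由集群管理服务实现):
#!/usr/bin/env python3
# node_health_check.py -- 示意:综合多项指标判定节点故障,避免仅凭心跳抖动误切换
import shutil
import subprocess
import time

HEARTBEAT_TIMEOUT = 3      # 秒,对应 heartbeat_timeout
DISK_USAGE_LIMIT = 0.90    # 数据盘使用率告警阈值

def process_alive(service="kingbase"):
    """数据库服务进程是否存活(假设以systemd托管,服务名为示例)"""
    return subprocess.run(["systemctl", "is-active", "--quiet", service]).returncode == 0

def disk_ok(path="/data/kingbase"):
    """数据目录所在磁盘是否仍有足够空间"""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total < DISK_USAGE_LIMIT

def heartbeat_ok(last_heartbeat_ts):
    """最近一次心跳是否仍在超时窗口内"""
    return time.time() - last_heartbeat_ts < HEARTBEAT_TIMEOUT

def node_is_failed(last_heartbeat_ts):
    # 心跳超时只是触发条件,还需结合进程与磁盘状态交叉验证,
    # 以降低网络抖动导致的误切换概率
    if heartbeat_ok(last_heartbeat_ts):
        return False
    return (not process_alive()) or (not disk_ok())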
自动切换流程:
当主节点被确认为故障状态后,切换流程自动触发。以下是典型的切换时序(时序的代码化示意见清单之后):
- 集群管理服务检测到主节点心跳丢失
- 向见证节点请求仲裁,避免脑裂
- 选择数据最新(复制延迟最小)的备节点作为新主节点
- 提升备节点为可写状态
- 重新配置虚拟IP指向新主节点
- 通知其他备节点从新主节点同步
- 记录切换事件到集群日志
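下面用一段Python伪代码把上述时序串联起来,其中cluster及各节点对象的方法均为假设的集群管理接口,仅用于示意编排逻辑:
# failover_sequence.py -- 示意:主节点故障后的自动切换编排(接口均为假设)
import logging

log = logging.getLogger("failover")

def perform_failover(cluster):
    # 1. 向见证节点请求仲裁,确认主节点确实不可达,避免脑裂
    if not cluster.witness_confirms_primary_down():
        log.warning("witness disagrees, abort failover")
        return None
    # 2. 在存活备节点中选择WAL重放位置最新(延迟最小)的一个
    candidate = max(cluster.alive_standbys(), key=lambda node: node.replay_lsn())
    # 3. 提升该备节点为可写的新主节点
    candidate.promote()
    # 4. 将虚拟IP漂移到新主节点,客户端无需修改连接串
    cluster.move_virtual_ip(candidate)
    # 5. 通知其余备节点改从新主节点同步
    for standby in cluster.alive_standbys():
        if standby is not candidate:
            standby.repoint_to(candidate)
    # 6. 记录切换事件,便于事后审计
    cluster.log_event("failover", new_primary=candidate.name)
    return candidate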
客户端重连处理:
应用层需要实现适当的重试逻辑以处理切换期间的连接中断。以下是一个JDBC连接参数的配置示例:
// JDBC连接配置示例
String url = "jdbc:kingbase8://virtual-ip:5432/mydb";
Properties props = new Properties();
props.setProperty("user", "app_user");
props.setProperty("password", "password");
props.setProperty("connectTimeout", "10");
props.setProperty("socketTimeout", "30");
props.setProperty("tcpKeepAlive", "true");
props.setProperty("loadBalanceHosts", "true");
// 配置重试策略
props.setProperty("maxReconnects", "3");
props.setProperty("initialReconnectDelay", "1");
1.3 数据同步状态监控
有效的监控是保障高可用的关键。以下是一些关键监控指标和查询示例:
复制延迟监控:
-- 查看各备节点的复制延迟
SELECT
client_addr,
application_name,
state,
sync_state,
write_lag,
flush_lag,
replay_lag
FROM sys_stat_replication
WHERE client_addr IS NOT NULL;
-- 计算具体的延迟秒数
SELECT
application_name,
EXTRACT(EPOCH FROM write_lag) AS write_delay_seconds,
EXTRACT(EPOCH FROM flush_lag) AS flush_delay_seconds,
EXTRACT(EPOCH FROM replay_lag) AS replay_delay_seconds
FROM sys_stat_replication;
WAL日志传输状态:
-- 查看WAL日志生成和传输状态
SELECT
pg_current_wal_lsn() AS current_lsn,
pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) AS sent_lag_bytes,
pg_wal_lsn_diff(pg_current_wal_lsn(), write_lsn) AS write_lag_bytes,
pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn) AS flush_lag_bytes
FROM sys_stat_replication;
历史切换记录:
-- 创建切换记录表(如果不存在)
CREATE TABLE IF NOT EXISTS failover_history (
id SERIAL PRIMARY KEY,
event_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
old_primary VARCHAR(100),
new_primary VARCHAR(100),
trigger_reason VARCHAR(200),
duration INTERVAL
);
-- 记录切换事件(通常在切换脚本中执行)
INSERT INTO failover_history
(old_primary, new_primary, trigger_reason, duration)
VALUES ('node1', 'node2', 'heartbeat_timeout', '00:01:23');
1.4 性能优化实践
在主备架构中,性能优化需要同时考虑主节点和备节点。
主节点写优化:
-- 调整WAL相关参数优化写性能
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET max_wal_size = '2GB';
ALTER SYSTEM SET min_wal_size = '1GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
-- 批量事务优化
BEGIN;
-- 批量插入操作
INSERT INTO large_table SELECT generate_series(1,1000000);
-- 更多操作...
COMMIT;
备节点读优化:
备节点本身只读、不接受写操作,因此视图、索引等DDL需要在主节点创建,再通过复制同步到备节点,从而加速路由到备节点的只读查询:
-- 创建面向备节点报表查询的视图(在主节点执行,经复制同步到备节点)
CREATE VIEW sales_summary AS
SELECT
region,
product_category,
SUM(amount) as total_sales,
COUNT(*) as transaction_count
FROM sales
WHERE sale_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY region, product_category;
-- 为常见查询创建索引(同样在主节点执行,索引随WAL同步到备节点)
CREATE INDEX CONCURRENTLY idx_sales_date
ON sales(sale_date);
-- 定期在主节点更新统计信息
ANALYZE VERBOSE sales;
网络优化配置:
主备节点间的网络质量直接影响同步性能:
-- 调整TCP参数优化网络传输
ALTER SYSTEM SET tcp_keepalives_idle = 60;
ALTER SYSTEM SET tcp_keepalives_interval = 10;
ALTER SYSTEM SET tcp_keepalives_count = 3;
-- 调整复制相关网络参数
ALTER SYSTEM SET wal_sender_timeout = '60s';
ALTER SYSTEM SET wal_receiver_timeout = '60s';
ALTER SYSTEM SET max_wal_senders = 10;
1.5 备份与恢复策略
完善的备份策略是数据安全的最后防线:
物理备份配置:
#!/bin/bash
# 物理备份脚本示例
BACKUP_DIR="/backup/kingbase"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_PATH="$BACKUP_DIR/full_$DATE"
# 执行基础备份
sys_basebackup -D $BACKUP_PATH \
-h primary_host -p 5432 \
-U replica_user \
--progress \
--verbose \
--wal-method=stream
# 备份完成后保留策略
find $BACKUP_DIR -name "full_*" -mtime +7 -delete
逻辑备份与恢复:
# 逻辑备份特定数据库
sys_dump -h primary_host -U backup_user \
-d mydb \
-Fc \
-f /backup/mydb_$(date +%Y%m%d).dump
# 恢复到备用节点
sys_restore -h standby_host -U restore_user \
-d mydb \
--clean \
--create \
/backup/mydb_20240101.dump
时间点恢复:
-- 创建恢复点
SELECT pg_create_restore_point('before_major_change');
# recovery.conf(恢复配置文件示例)
restore_command = 'cp /backup/wal_archive/%f %p'
recovery_target_time = '2024-01-15 14:30:00'
recovery_target_action = 'promote'
二、 读写分离架构实战部署
2.1 读写分离中间件配置
读写分离的核心在于智能的请求路由。以下是一个典型的中间件配置示例:
连接池配置:
# 读写分离中间件配置文件
# application.yml
kingbase:
datasource:
primary:
jdbc-url: jdbc:kingbase8://primary:5432/mydb
username: app_user
password: ${DB_PASSWORD}
driver-class-name: com.kingbase8.Driver
hikari:
maximum-pool-size: 20
minimum-idle: 5
connection-timeout: 30000
replica:
jdbc-url: jdbc:kingbase8://replica1:5432,replica2:5432/mydb
username: app_user
password: ${DB_PASSWORD}
load-balance: true
hikari:
maximum-pool-size: 30
minimum-idle: 10
connection-timeout: 30000
路由规则定义:
// 基于注解的读写分离配置
@Configuration
@EnableTransactionManagement
public class DataSourceConfig {
@Bean
@ConfigurationProperties("spring.datasource.primary")
public DataSource primaryDataSource() {
return DataSourceBuilder.create().build();
}
@Bean
@ConfigurationProperties("spring.datasource.replica")
public DataSource replicaDataSource() {
return DataSourceBuilder.create().build();
}
@Bean
public DataSource routingDataSource() {
Map<Object, Object> targetDataSources = new HashMap<>();
targetDataSources.put(DbContextHolder.DbType.PRIMARY, primaryDataSource());
targetDataSources.put(DbContextHolder.DbType.REPLICA, replicaDataSource());
RoutingDataSource routingDataSource = new RoutingDataSource();
routingDataSource.setTargetDataSources(targetDataSources);
routingDataSource.setDefaultTargetDataSource(primaryDataSource());
return routingDataSource;
}
}
// 读写分离上下文管理
public class DbContextHolder {
private static final ThreadLocal<DbType> contextHolder = new ThreadLocal<>();
public enum DbType {
PRIMARY, REPLICA
}
public static void setDbType(DbType dbType) {
contextHolder.set(dbType);
}
public static DbType getDbType() {
return contextHolder.get() == null ? DbType.PRIMARY : contextHolder.get();
}
public static void clearDbType() {
contextHolder.remove();
}
}
// 读写分离切面
@Aspect
@Component
public class DataSourceAspect {
@Before("@annotation(readOnly) || @within(readOnly)")
public void setReadOnlyDataSource(JoinPoint joinPoint, ReadOnly readOnly) {
DbContextHolder.setDbType(DbContextHolder.DbType.REPLICA);
}
@Before("execution(* com..*.service..*.save*(..)) || " +
"execution(* com..*.service..*.update*(..)) || " +
"execution(* com..*.service..*.delete*(..)) || " +
"execution(* com..*.service..*.insert*(..))")
public void setWriteDataSource(JoinPoint joinPoint) {
DbContextHolder.setDbType(DbContextHolder.DbType.PRIMARY);
}
@After("execution(* com..*.service..*.*(..))")
public void clearDataSource(JoinPoint joinPoint) {
DbContextHolder.clearDbType();
}
}
2.2 负载均衡策略实现
负载均衡策略直接影响系统性能和资源利用率:
权重轮询算法:
public class WeightedRoundRobinLoadBalancer implements LoadBalancer {
private List<DataSourceInstance> instances;
private AtomicInteger currentIndex = new AtomicInteger(0);
private int currentWeight = 0;
private int maxWeight;
private int gcdWeight;
@Override
public DataSourceInstance select(List<DataSourceInstance> instances) {
if (instances == null || instances.isEmpty()) {
return null;
}
if (instances.size() == 1) {
return instances.get(0);
}
this.instances = instances;
calculateWeights();
// 经典的基于最大公约数的加权轮询:每轮询完一圈,当前权重阈值减去gcdWeight,
// 只有权重不低于阈值的实例才会被选中,因此权重越大的实例被选中的次数越多
while (true) {
int index = currentIndex.getAndUpdate(i -> (i + 1) % instances.size());
if (index == 0) {
currentWeight -= gcdWeight;
if (currentWeight <= 0) {
currentWeight = maxWeight;
}
}
DataSourceInstance instance = instances.get(index);
if (instance.getWeight() >= currentWeight) {
return instance;
}
}
}
// 计算最大权重和最大公约数
private void calculateWeights() {
maxWeight = instances.stream()
.mapToInt(DataSourceInstance::getWeight)
.max()
.orElse(1);
gcdWeight = instances.stream()
.mapToInt(DataSourceInstance::getWeight)
.reduce(this::gcd)
.orElse(1);
}
private int gcd(int a, int b) {
return b == 0 ? a : gcd(b, a % b);
}
}
基于响应时间的动态负载均衡:
public class ResponseTimeLoadBalancer implements LoadBalancer {
private Map<String, ResponseTimeStats> statsMap = new ConcurrentHashMap<>();
private long decayFactor = 1000; // 衰减因子
@Override
public DataSourceInstance select(List<DataSourceInstance> instances) {
if (instances.isEmpty()) {
return null;
}
// 计算权重
Map<DataSourceInstance, Double> weights = new HashMap<>();
double totalWeight = 0;
for (DataSourceInstance instance : instances) {
ResponseTimeStats stats = statsMap.computeIfAbsent(
instance.getId(),
k -> new ResponseTimeStats()
);
double avgResponseTime = stats.getAverageResponseTime();
double weight = avgResponseTime > 0 ? 1.0 / avgResponseTime : 1.0;
weights.put(instance, weight);
totalWeight += weight;
}
// 加权随机选择
double random = Math.random() * totalWeight;
double current = 0;
for (Map.Entry<DataSourceInstance, Double> entry : weights.entrySet()) {
current += entry.getValue();
if (random <= current) {
return entry.getKey();
}
}
return instances.get(0);
}
public void recordResponseTime(String instanceId, long responseTime) {
ResponseTimeStats stats = statsMap.get(instanceId);
if (stats != null) {
stats.record(responseTime);
}
}
static class ResponseTimeStats {
private volatile double average = 0;
private long count = 0;
public synchronized void record(long responseTime) {
// 指数加权移动平均
average = (average * count + responseTime) / (count + 1);
count = Math.min(count + 1, 1000); // 限制历史记录数量
}
public double getAverageResponseTime() {
return average;
}
}
}
2.3 数据一致性处理
在读写分离架构中,处理数据一致性是关键挑战:
会话一致性实现:
@Component
public class SessionConsistencyManager {
// 记录会话的写操作时间
private Map<String, Long> sessionWriteTime = new ConcurrentHashMap<>();
private long consistencyWindow = 5000; // 5秒一致性窗口
// 供静态切面获取Spring托管实例的引用
private static SessionConsistencyManager instance;
public SessionConsistencyManager() {
instance = this;
}
public static SessionConsistencyManager getInstance() {
return instance;
}
@Aspect
@Component
public static class WriteOperationAspect {
@AfterReturning("execution(* com..*.service..*.save*(..)) || " +
"execution(* com..*.service..*.update*(..)) || " +
"execution(* com..*.service..*.delete*(..))")
public void afterWriteOperation(JoinPoint joinPoint) {
HttpServletRequest request =
((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()).getRequest();
String sessionId = request.getSession().getId();
// 更新会话的最后写时间
SessionConsistencyManager.getInstance()
.updateSessionWriteTime(sessionId, System.currentTimeMillis());
}
}
public boolean shouldReadFromPrimary(String sessionId) {
Long lastWriteTime = sessionWriteTime.get(sessionId);
if (lastWriteTime == null) {
return false;
}
long timeSinceLastWrite = System.currentTimeMillis() - lastWriteTime;
return timeSinceLastWrite < consistencyWindow;
}
public void updateSessionWriteTime(String sessionId, long timestamp) {
sessionWriteTime.put(sessionId, timestamp);
}
// 清理过期会话
@Scheduled(fixedDelay = 60000) // 每分钟清理一次
public void cleanupExpiredSessions() {
long cutoffTime = System.currentTimeMillis() - 3600000; // 1小时前
sessionWriteTime.entrySet().removeIf(entry -> entry.getValue() < cutoffTime);
}
}
读写分离代理配置:
-- 在数据库层面配置复制延迟监控
CREATE OR REPLACE FUNCTION check_replica_delay(max_delay interval)
RETURNS boolean AS $$
DECLARE
delay interval;
BEGIN
SELECT write_lag INTO delay
FROM sys_stat_replication
WHERE application_name = 'replica1';
RETURN delay IS NULL OR delay <= max_delay;
END;
$$ LANGUAGE plpgsql;
-- 创建只读用户
CREATE USER read_only_user WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO read_only_user;
GRANT USAGE ON SCHEMA public TO read_only_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only_user;
-- 设置默认权限
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO read_only_user;
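在应用或代理层,读请求路由前可以先调用上面定义的check_replica_delay函数:延迟可接受时读备库,否则退回主库。下面是一个Python示意(连接串为假设):
# delay_aware_routing.py -- 示意:基于复制延迟决定读请求走备库还是主库
import psycopg2

PRIMARY_DSN = "host=primary port=5432 dbname=mydb user=app_user"
REPLICA_DSN = "host=replica1 port=5432 dbname=mydb user=read_only_user"

def replica_is_fresh(max_delay="5 seconds"):
    """调用前文创建的 check_replica_delay 函数,判断备库延迟是否可接受"""
    with psycopg2.connect(PRIMARY_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT check_replica_delay(%s::interval)", (max_delay,))
            return cur.fetchone()[0]

def dsn_for_read():
    # 延迟过大时宁可占用主库资源,也要保证读到足够新的数据
    return REPLICA_DSN if replica_is_fresh() else PRIMARY_DSN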
2.4 故障处理与自动恢复
健康检查机制:
@Component
public class HealthChecker {
private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
private Map<String, HealthStatus> statusMap = new ConcurrentHashMap<>();
@PostConstruct
public void init() {
scheduler.scheduleAtFixedRate(this::checkAllInstances, 0, 5, TimeUnit.SECONDS);
}
private void checkAllInstances() {
for (DataSourceInstance instance : getAllInstances()) {
checkInstanceHealth(instance);
}
}
private void checkInstanceHealth(DataSourceInstance instance) {
try (Connection conn = instance.getConnection();
Statement stmt = conn.createStatement()) {
// 执行健康检查查询
ResultSet rs = stmt.executeQuery("SELECT 1");
if (rs.next()) {
updateInstanceStatus(instance.getId(), HealthStatus.HEALTHY);
}
} catch (SQLException e) {
// 健康检查失败
updateInstanceStatus(instance.getId(), HealthStatus.UNHEALTHY);
log.warn("Health check failed for instance: {}", instance.getId(), e);
}
}
public enum HealthStatus {
HEALTHY, UNHEALTHY, UNKNOWN
}
}
自动故障切换:
@Component
public class AutoFailoverManager {
@Autowired
private LoadBalancer loadBalancer;
@Autowired
private HealthChecker healthChecker;
public void handleInstanceFailure(String failedInstanceId) {
// 1. 从负载均衡池中移除故障实例
loadBalancer.removeInstance(failedInstanceId);
// 2. 记录故障事件
log.error("Instance {} failed, removed from load balancer", failedInstanceId);
// 3. 尝试恢复故障实例
recoverFailedInstance(failedInstanceId);
}
private void recoverFailedInstance(String instanceId) {
ScheduledExecutorService recoveryScheduler =
Executors.newSingleThreadScheduledExecutor();
// 定时重试恢复(首次延迟30秒,之后每60秒尝试一次)
recoveryScheduler.scheduleAtFixedRate(() -> {
if (tryRecoverInstance(instanceId)) {
loadBalancer.addInstance(getInstance(instanceId));
recoveryScheduler.shutdown();
}
}, 30, 60, TimeUnit.SECONDS);
}
}
三、 MPP分布式集群实战部署
3.1 集群初始化与配置
集群配置文件示例:
# mpp_cluster_config.yaml
cluster:
name: "production_cluster"
version: "2.0"
coordinator_nodes:
- id: "coord1"
host: "192.168.1.101"
port: 5432
data_dir: "/data/kingbase/coord1"
- id: "coord2"
host: "192.168.1.102"
port: 5432
data_dir: "/data/kingbase/coord2"
data_nodes:
- id: "dn1"
host: "192.168.1.111"
port: 5433
data_dir: "/data/kingbase/dn1"
segment_id: 1
- id: "dn2"
host: "192.168.1.112"
port: 5433
data_dir: "/data/kingbase/dn2"
segment_id: 2
- id: "dn3"
host: "192.168.1.113"
port: 5433
data_dir: "/data/kingbase/dn3"
segment_id: 3
- id: "dn4"
host: "192.168.1.114"
port: 5433
data_dir: "/data/kingbase/dn4"
segment_id: 4
network:
internal_network: "10.0.0.0/24"
external_network: "192.168.1.0/24"
replication_port: 5434
storage:
shared_storage: false
local_storage_path: "/data/kingbase"
security:
ssl_enabled: true
certificate_path: "/etc/kingbase/certs"
集群初始化脚本:
#!/bin/bash
# initialize_mpp_cluster.sh
set -e
echo "开始初始化MPP集群..."
# 1. 创建系统用户
useradd -r -s /bin/false kingbase
mkdir -p /data/kingbase
chown -R kingbase:kingbase /data/kingbase
# 2. 初始化协调节点
for coord in coord1 coord2; do
echo "初始化协调节点: $coord"
ssh $coord "sudo -u kingbase initdb \
-D /data/kingbase/$coord \
--encoding=UTF8 \
--locale=C \
--data-checksums"
done
# 3. 初始化数据节点
for ((i=1; i<=4; i++)); do
node="dn$i"
echo "初始化数据节点: $node"
ssh $node "sudo -u kingbase initdb \
-D /data/kingbase/$node \
--encoding=UTF8 \
--locale=C \
--data-checksums \
--segment-id=$i"
done
# 4. 配置集群参数
configure_cluster_parameters
# 5. 启动集群
start_cluster
# 6. 验证集群状态
verify_cluster_status
echo "MPP集群初始化完成"
3.2 数据分布与分片策略
创建分布式表:
-- 创建哈希分布表
CREATE TABLE distributed_sales (
sale_id BIGSERIAL,
customer_id INT NOT NULL,
product_id INT NOT NULL,
sale_date DATE NOT NULL,
amount DECIMAL(10,2) NOT NULL,
region VARCHAR(50)
) DISTRIBUTED BY (customer_id);
-- 创建复制表(小表)
CREATE TABLE products (
product_id INT PRIMARY KEY,
product_name VARCHAR(200) NOT NULL,
category VARCHAR(100),
price DECIMAL(10,2)
) DISTRIBUTED REPLICATED;
-- 创建范围分区表
CREATE TABLE sales_by_date (
sale_id BIGSERIAL,
customer_id INT,
sale_date DATE,
amount DECIMAL(10,2)
) DISTRIBUTED BY (sale_date)
PARTITION BY RANGE (sale_date);
-- 创建日期分区
CREATE TABLE sales_2023_q1 PARTITION OF sales_by_date
FOR VALUES FROM ('2023-01-01') TO ('2023-04-01');
CREATE TABLE sales_2023_q2 PARTITION OF sales_by_date
FOR VALUES FROM ('2023-04-01') TO ('2023-07-01');
CREATE TABLE sales_2023_q3 PARTITION OF sales_by_date
FOR VALUES FROM ('2023-07-01') TO ('2023-10-01');
CREATE TABLE sales_2023_q4 PARTITION OF sales_by_date
FOR VALUES FROM ('2023-10-01') TO ('2024-01-01');
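为直观理解DISTRIBUTED BY (customer_id)的效果,下面用一个极简的Python示意说明行是如何按分布键映射到数据节点的(此处用CRC32模拟哈希,真实引擎的哈希算法与映射结果会有所不同):
# hash_distribution_demo.py -- 示意:按分布键取哈希后对数据节点数取模,决定行落在哪个分片
import zlib

SEGMENT_COUNT = 4  # 对应前文配置的 dn1~dn4

def segment_for(customer_id: int) -> int:
    # 用CRC32模拟分布哈希;真实MPP引擎使用自己的哈希实现
    return zlib.crc32(str(customer_id).encode()) % SEGMENT_COUNT

for cid in (1001, 1002, 1003, 1004):
    print(f"customer_id={cid} -> segment dn{segment_for(cid) + 1}")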
数据重分布:
-- 查看数据分布情况
SELECT
gp_segment_id,
COUNT(*) as row_count,
pg_size_pretty(pg_relation_size('distributed_sales')) as segment_size
FROM distributed_sales
GROUP BY gp_segment_id
ORDER BY gp_segment_id;
-- 重新分布数据以平衡负载
ALTER TABLE distributed_sales
SET DISTRIBUTED BY (sale_date, customer_id);
-- 数据重分布在上面的 ALTER TABLE ... SET DISTRIBUTED BY 执行时即已完成,VACUUM FULL 用于回收表重写后的空间
VACUUM FULL distributed_sales;
-- 检查重分布后的平衡性
SELECT
gp_segment_id,
COUNT(*) as row_count,
COUNT(*) * 100.0 / SUM(COUNT(*)) OVER() as percentage
FROM distributed_sales
GROUP BY gp_segment_id
ORDER BY row_count DESC;
3.3 分布式查询优化
查询执行计划分析:
-- 启用详细执行计划输出
SET explain_distributed_queries = on;
SET client_min_messages = debug1;
-- 分析分布式查询计划
EXPLAIN (ANALYZE, VERBOSE, DISTSQL)
SELECT
c.customer_name,
SUM(s.amount) as total_spent,
COUNT(*) as purchase_count
FROM customers c
JOIN distributed_sales s ON c.customer_id = s.customer_id
WHERE s.sale_date >= '2023-01-01'
AND s.region = 'East'
GROUP BY c.customer_id, c.customer_name
HAVING SUM(s.amount) > 10000
ORDER BY total_spent DESC
LIMIT 100;
-- 查询计划优化提示
/*
计划输出会显示:
1. 数据节点上的本地扫描
2. 节点间的数据移动(重分布、广播)
3. 协调节点上的最终聚合
4. 各阶段的资源消耗
*/
分布式连接优化:
-- 启用查询优化器提示
SET optimizer = on;
SET enable_hashjoin = on;
SET enable_mergejoin = on;
SET enable_nestloop = on;
-- 创建统计信息以帮助优化器
ANALYZE distributed_sales;
ANALYZE customers;
-- 收集列统计信息
CREATE STATISTICS sales_customer_stats
ON customer_id, sale_date, region
FROM distributed_sales;
-- 手动优化连接顺序
BEGIN;
SET LOCAL join_collapse_limit = 1;
SET LOCAL from_collapse_limit = 1;
-- 手动指定连接顺序
SELECT /*+ LEADING(c s) USE_HASH(s) */
c.customer_name,
s.total_amount
FROM customers c
JOIN (
SELECT customer_id, SUM(amount) as total_amount
FROM distributed_sales
WHERE sale_date >= '2023-01-01'
GROUP BY customer_id
) s ON c.customer_id = s.customer_id
WHERE c.region = 'East';
COMMIT;
3.4 集群监控与维护
集群状态监控:
-- 查看集群整体状态
SELECT
node_name,
node_host,
node_port,
node_status,
node_role,
segment_id,
total_disk,
used_disk,
(used_disk * 100.0 / total_disk) as disk_usage_percent
FROM sys_distributed_nodes
ORDER BY segment_id, node_role;
-- 监控查询执行状态
SELECT
pid,
usename,
application_name,
query_start,
state,
wait_event_type,
wait_event,
query
FROM sys_stat_activity
WHERE state != 'idle'
ORDER BY query_start DESC
LIMIT 20;
-- 查看数据分布倾斜(以 distributed_sales 为例,统计各数据节点上的行数及占比)
SELECT
gp_segment_id,
COUNT(*) as row_count,
COUNT(*) * 100.0 / SUM(COUNT(*)) OVER() as percentage
FROM distributed_sales
GROUP BY gp_segment_id
ORDER BY row_count DESC;
自动化维护脚本:
#!/bin/bash
# mpp_cluster_maintenance.sh
set -e
LOG_FILE="/var/log/kingbase/maintenance_$(date +%Y%m%d).log"
exec >> $LOG_FILE 2>&1
echo "开始集群维护作业: $(date)"
# 1. 统计信息更新
echo "更新统计信息..."
psql -h coord1 -U kingbase -d mydb << EOF
ANALYZE VERBOSE;
VACUUM ANALYZE;
EOF
# 2. 检查数据分布平衡
echo "检查数据分布..."
psql -h coord1 -U kingbase -d mydb << EOF
SELECT
'数据倾斜检查' as check_type,
MAX(row_count) - MIN(row_count) as skew_difference
FROM (
SELECT gp_segment_id, COUNT(*) as row_count
FROM distributed_sales
GROUP BY gp_segment_id
) segment_counts;
EOF
# 3. 备份关键元数据
echo "备份集群元数据..."
pg_dumpall -h coord1 -U kingbase --globals-only \
-f /backup/globals_$(date +%Y%m%d).sql
# 4. 检查磁盘空间
echo "检查磁盘空间..."
df -h /data/kingbase
# 5. 检查复制状态
echo "检查复制状态..."
psql -h coord1 -U kingbase -d mydb << EOF
SELECT
application_name,
state,
sync_state,
write_lag,
flush_lag,
replay_lag
FROM sys_stat_replication;
EOF
echo "维护作业完成: $(date)"
3.5 故障恢复与数据重平衡
节点故障恢复:
#!/bin/bash
# recover_failed_node.sh
FAILED_NODE=$1
SEGMENT_ID=$2
BACKUP_HOST=$3
echo "开始恢复故障节点: $FAILED_NODE"
# 1. 停止故障节点
echo "停止故障节点..."
ssh $FAILED_NODE "systemctl stop kingbase"
# 2. 从备份节点恢复数据
echo "从备份节点恢复数据..."
ssh $BACKUP_HOST "pg_basebackup -h $BACKUP_HOST \
-D /data/kingbase/recovery \
-X stream -P -R"
# 3. 同步数据到故障节点
echo "同步数据到故障节点..."
rsync -avz --delete \
$BACKUP_HOST:/data/kingbase/recovery/ \
$FAILED_NODE:/data/kingbase/dn$SEGMENT_ID/
# 4. 更新恢复配置
ssh $FAILED_NODE "cat > /data/kingbase/dn$SEGMENT_ID/recovery.conf << EOF
standby_mode = 'on'
primary_conninfo = 'host=$BACKUP_HOST port=5433 user=replicator'
recovery_target_timeline = 'latest'
restore_command = ''
EOF"
# 5. 启动恢复的节点
echo "启动恢复的节点..."
ssh $FAILED_NODE "systemctl start kingbase"
# 6. 验证恢复状态
echo "验证恢复状态..."
sleep 10
ssh $FAILED_NODE "psql -p 5433 -c 'SELECT pg_is_in_recovery();'"
echo "节点恢复完成: $FAILED_NODE"
数据重平衡操作:
-- 检查当前数据分布(以 large_table 为例,统计各数据节点上的行数)
SELECT
gp_segment_id,
COUNT(*) as row_count
FROM large_table
GROUP BY gp_segment_id
ORDER BY row_count DESC;
-- 执行重平衡操作
BEGIN;
-- 按当前分布键重写表数据,使其在各数据节点间重新均衡
ALTER TABLE large_table SET WITH (reorganize=true);
-- 整库级别的重平衡通常由集群自带的扩容/重分布工具完成,此处仅以单表为例
COMMIT;
-- 验证重平衡结果
SELECT
'重平衡后' as status,
MAX(row_count) - MIN(row_count) as max_skew
FROM (
SELECT gp_segment_id, COUNT(*) as row_count
FROM large_table
GROUP BY gp_segment_id
) segment_counts;
四、 方案对比与选型指南
4.1 技术特性对比分析
架构复杂度对比:
-- 各方案配置复杂度评分(1-5分,5分为最复杂)
WITH complexity_scores AS (
SELECT 'KES主备集群' as solution, 2 as config_complexity, 3 as ops_complexity
UNION ALL
SELECT '读写分离架构', 3, 4
UNION ALL
SELECT 'MPP分布式集群', 5, 5
)
SELECT
solution,
config_complexity,
ops_complexity,
(config_complexity + ops_complexity) as total_complexity
FROM complexity_scores
ORDER BY total_complexity;
性能特征对比:
-- 性能特征分析查询示例
SELECT
solution,
read_scalability,
write_scalability,
query_complexity,
consistency_level
FROM (
VALUES
('KES主备集群', '低', '无', '中等', '强一致性'),
('读写分离架构', '高', '无', '中等', '可调一致性'),
('MPP分布式集群', '高', '高', '高', '最终一致性')
) AS performance_matrix(solution, read_scalability, write_scalability, query_complexity, consistency_level);
4.2 选型决策矩阵
# 选型决策支持工具
def evaluate_solution_requirements(requirements):
"""
根据需求评估最适合的方案
requirements: 包含各种需求指标的字典
"""
scores = {
'KES主备集群': 0,
'读写分离架构': 0,
'MPP分布式集群': 0
}
# 数据规模评分
if requirements['data_size_gb'] < 100:
scores['KES主备集群'] += 3
scores['读写分离架构'] += 2
elif requirements['data_size_gb'] < 1000:
scores['KES主备集群'] += 1
scores['读写分离架构'] += 3
scores['MPP分布式集群'] += 2
else:
scores['MPP分布式集群'] += 3
# 读写比例评分
read_ratio = requirements['read_ratio']
if read_ratio > 0.8:
scores['读写分离架构'] += 3
scores['MPP分布式集群'] += 2
else:
scores['KES主备集群'] += 2
scores['MPP分布式集群'] += 1
# 并发用户数评分
if requirements['concurrent_users'] < 100:
scores['KES主备集群'] += 3
elif requirements['concurrent_users'] < 1000:
scores['读写分离架构'] += 3
scores['KES主备集群'] += 1
else:
scores['MPP分布式集群'] += 3
scores['读写分离架构'] += 2
# 查询复杂度评分
if requirements['query_complexity'] == 'simple':
scores['KES主备集群'] += 3
elif requirements['query_complexity'] == 'medium':
scores['读写分离架构'] += 3
else: # complex
scores['MPP分布式集群'] += 3
# 预算限制评分
if requirements['budget'] == 'low':
scores['KES主备集群'] += 3
elif requirements['budget'] == 'medium':
scores['读写分离架构'] += 2
else: # high
scores['MPP分布式集群'] += 3
return scores
# 使用示例
requirements = {
'data_size_gb': 500,
'read_ratio': 0.7,
'concurrent_users': 500,
'query_complexity': 'medium',
'budget': 'medium'
}
result = evaluate_solution_requirements(requirements)
print("方案评分结果:", result)
4.3 混合架构配置示例
# 混合架构配置示例
mixed_architecture:
description: "KES主备 + 读写分离混合架构"
primary_cluster:
type: "kes_ha"
nodes:
- role: "primary"
host: "db-primary-1"
spec: "16c32g"
- role: "standby"
host: "db-standby-1"
spec: "16c32g"
- role: "witness"
host: "db-witness-1"
spec: "4c8g"
read_replicas:
- host: "db-read-1"
spec: "8c16g"
load_balancer_weight: 30
- host: "db-read-2"
spec: "8c16g"
load_balancer_weight: 30
- host: "db-read-3"
spec: "8c16g"
load_balancer_weight: 40
mpp_cluster:
enabled: true
purpose: "analytics"
nodes:
coordinator:
- host: "mpp-coord-1"
- host: "mpp-coord-2"
data_nodes:
- segment: 1
host: "mpp-dn-1"
spec: "32c64g"
- segment: 2
host: "mpp-dn-2"
spec: "32c64g"
data_sync:
oltp_to_olap:
method: "logical_replication"
schedule: "hourly"
latency_tolerance: "3600" # 1小时
load_balancer:
type: "haproxy"
config:
frontend_port: 5432
backend_checks:
interval: "10s"
timeout: "5s"
routing_rules:
- pattern: "^SELECT"
backend_pool: "read_replicas"
- pattern: ".*"
backend_pool: "primary_cluster"
五、 监控与运维体系
5.1 综合监控方案
监控指标收集:
-- 创建监控数据收集表
CREATE SCHEMA IF NOT EXISTS monitoring;
CREATE TABLE monitoring.metrics (
id BIGSERIAL PRIMARY KEY,
metric_name VARCHAR(100) NOT NULL,
metric_value DOUBLE PRECISION NOT NULL,
metric_labels JSONB,
instance_id VARCHAR(100),
collected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) DISTRIBUTED BY (metric_name);
-- 按指标名和采集时间建立索引,加速按时间范围的查询
CREATE INDEX idx_metrics_name_time
ON monitoring.metrics (metric_name, collected_at);
-- 收集连接数指标
INSERT INTO monitoring.metrics (metric_name, metric_value, metric_labels)
SELECT
'database_connections',
COUNT(*) as connection_count,
jsonb_build_object(
'database', datname,
'state', state
) as labels
FROM sys_stat_activity
WHERE datname IS NOT NULL
GROUP BY datname, state;
-- 收集表大小和行数
INSERT INTO monitoring.metrics (metric_name, metric_value, metric_labels)
SELECT
'table_size_bytes',
pg_relation_size(schemaname||'.'||tablename),
jsonb_build_object(
'schema', schemaname,
'table', tablename
)
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
-- 收集索引使用情况(表级别:索引扫描占全部扫描的比例)
INSERT INTO monitoring.metrics (metric_name, metric_value, metric_labels)
SELECT
'index_usage_ratio',
idx_scan::float / NULLIF(idx_scan + seq_scan, 0),
jsonb_build_object(
'schema', schemaname,
'table', relname
)
FROM pg_stat_user_tables
WHERE idx_scan + seq_scan > 0;
自动化监控脚本:
#!/usr/bin/env python3
# database_monitor.py
import psycopg2
import json
import time
from datetime import datetime
from prometheus_client import Gauge, start_http_server
class DatabaseMonitor:
def __init__(self, connection_string):
self.conn = psycopg2.connect(connection_string)
self.metrics = self._setup_metrics()
def _setup_metrics(self):
"""初始化监控指标"""
return {
'active_connections': Gauge(
'kingbase_connections_active',
'Active connections',
['database', 'state']
),
'transactions_per_second': Gauge(
'kingbase_tx_rate',
'Transactions per second',
['database']
),
'replication_lag': Gauge(
'kingbase_replication_lag_seconds',
'Replication lag in seconds',
['client_addr', 'application_name']
),
'table_size': Gauge(
'kingbase_table_size_bytes',
'Table size in bytes',
['schema', 'table']
)
}
def collect_connection_metrics(self):
"""收集连接相关指标"""
with self.conn.cursor() as cur:
cur.execute("""
SELECT datname, state, COUNT(*)
FROM sys_stat_activity
WHERE datname IS NOT NULL
GROUP BY datname, state
""")
for db, state, count in cur.fetchall():
self.metrics['active_connections'].labels(
database=db, state=state
).set(count)
def collect_replication_metrics(self):
"""收集复制延迟指标"""
with self.conn.cursor() as cur:
cur.execute("""
SELECT client_addr, application_name,
EXTRACT(EPOCH FROM write_lag) as write_lag_seconds
FROM sys_stat_replication
WHERE write_lag IS NOT NULL
""")
for client_addr, app_name, lag in cur.fetchall():
self.metrics['replication_lag'].labels(
client_addr=str(client_addr),
application_name=app_name
).set(lag or 0)
def run(self):
"""主监控循环"""
start_http_server(9090) # Prometheus指标端点
while True:
try:
self.collect_connection_metrics()
self.collect_replication_metrics()
# 收集其他指标...
time.sleep(15) # 15秒收集间隔
except Exception as e:
print(f"监控收集出错: {e}")
time.sleep(60)
if __name__ == "__main__":
monitor = DatabaseMonitor(
"host=localhost dbname=monitoring user=monitor"
)
monitor.run()
5.2 告警与自动修复
智能告警规则:
# alert_rules.yaml
groups:
- name: kingbase_ha
rules:
- alert: HighReplicationLag
expr: kingbase_replication_lag_seconds > 30
for: 5m
labels:
severity: warning
annotations:
summary: "复制延迟过高"
description: "实例 {{ $labels.client_addr }} 的复制延迟为 {{ $value }} 秒"
- alert: TooManyConnections
expr: kingbase_connections_active > 100
for: 2m
labels:
severity: critical
annotations:
summary: "数据库连接数过多"
description: "数据库 {{ $labels.database }} 有 {{ $value }} 个活跃连接"
- alert: PrimaryNodeDown
expr: up{job="kingbase_primary"} == 0
for: 1m
labels:
severity: critical
annotations:
summary: "主节点宕机"
description: "主数据库节点不可用"
- alert: DiskSpaceLow
expr: node_filesystem_free_bytes{mountpoint="/data"} / node_filesystem_size_bytes{mountpoint="/data"} < 0.2
for: 5m
labels:
severity: warning
annotations:
summary: "磁盘空间不足"
description: "数据目录剩余空间不足20%"
自动修复脚本:
#!/usr/bin/env python3
# auto_healer.py
import subprocess
import logging
import time
import yaml
from typing import Dict, Any
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class DatabaseHealer:
def __init__(self, config_path: str):
with open(config_path, 'r') as f:
self.config = yaml.safe_load(f)
self.alert_history = {}
def handle_alert(self, alert: Dict[str, Any]):
"""处理告警事件"""
alert_name = alert['labels']['alertname']
instance = alert['labels'].get('instance', 'unknown')
# 检查告警频率
alert_key = f"{alert_name}_{instance}"
if self._is_too_frequent(alert_key):
logger.warning(f"告警 {alert_key} 触发过于频繁,跳过处理")
return
# 根据告警类型执行修复
if alert_name == 'HighReplicationLag':
self._fix_replication_lag(alert)
elif alert_name == 'TooManyConnections':
self._reduce_connections(alert)
elif alert_name == 'PrimaryNodeDown':
self._failover_primary(alert)
elif alert_name == 'DiskSpaceLow':
self._cleanup_disk_space(alert)
# 记录处理历史
self.alert_history[alert_key] = {
'last_handled': time.time(),
'count': self.alert_history.get(alert_key, {}).get('count', 0) + 1
}
def _fix_replication_lag(self, alert: Dict[str, Any]):
"""修复复制延迟"""
instance = alert['labels']['instance']
logger.info(f"处理复制延迟告警: {instance}")
# 1. 检查网络连通性
self._check_network(instance)
# 2. 检查备节点状态
self._check_replica_status(instance)
# 3. 如果延迟持续,尝试重建复制
if self._is_persistent_lag(instance):
logger.info(f"复制延迟持续,尝试重建复制: {instance}")
self._rebuild_replication(instance)
def _reduce_connections(self, alert: Dict[str, Any]):
"""减少连接数"""
db_name = alert['labels']['database']
logger.info(f"处理连接数过多告警: {db_name}")
# 1. 终止空闲连接
self._terminate_idle_connections(db_name, 300) # 5分钟空闲
# 2. 如果仍然过多,终止长时间运行的查询
if self._get_active_connections(db_name) > 100:
self._terminate_long_queries(db_name, 300) # 5分钟以上
def _failover_primary(self, alert: Dict[str, Any]):
"""主节点故障切换"""
logger.info("开始主节点故障切换流程")
# 1. 确认主节点确实不可用
if not self._confirm_primary_down():
logger.warning("主节点可能误报,跳过切换")
return
# 2. 选择最优备节点
new_primary = self._select_new_primary()
# 3. 执行切换
self._execute_failover(new_primary)
# 4. 更新DNS/负载均衡器
self._update_service_discovery(new_primary)
logger.info(f"故障切换完成,新主节点: {new_primary}")
def _cleanup_disk_space(self, alert: Dict[str, Any]):
"""清理磁盘空间"""
mountpoint = alert['labels']['mountpoint']
logger.info(f"清理磁盘空间: {mountpoint}")
# 1. 清理旧WAL日志
self._cleanup_old_wal(mountpoint)
# 2. 清理旧备份
self._cleanup_old_backups(mountpoint)
# 3. 如果仍然不足,发出紧急告警
if self._get_disk_usage(mountpoint) > 0.9:
logger.critical(f"磁盘空间严重不足: {mountpoint}")
self._escalate_alert(alert)
def _is_too_frequent(self, alert_key: str) -> bool:
"""检查告警频率是否过高"""
if alert_key not in self.alert_history:
return False
history = self.alert_history[alert_key]
time_since_last = time.time() - history['last_handled']
# 5分钟内触发超过3次视为过于频繁
return history['count'] >= 3 and time_since_last < 300
def run(self):
"""主运行循环"""
logger.info("数据库自动修复服务启动")
# 这里应该连接告警系统(如Alertmanager)的Webhook
# 为简化示例,我们使用模拟循环
while True:
# 实际应用中应该从消息队列或Webhook获取告警
time.sleep(1)
if __name__ == "__main__":
healer = DatabaseHealer("healer_config.yaml")
healer.run()
结语
金仓数据库提供了从基础的KES主备集群到复杂的MPP分布式集群的完整高可用解决方案体系。每种方案都有其特定的适用场景和优化方向,在实际应用中需要根据业务需求、数据特征、性能要求和运维资源进行综合选择。
通过本文的详细解析和实战示例,我们可以看到:
- KES主备集群 提供了简单可靠的基础高可用保障,适合大多数对RPO、RTO要求严格的在线事务处理系统。其优势在于配置简单、数据一致性高,但扩展性相对有限。
- 读写分离架构 在KES集群基础上,通过智能路由实现了读能力的水平扩展,特别适合读多写少的Web应用场景。该架构在保证数据一致性的同时,显著提升了系统吞吐量。
- MPP分布式集群 面向海量数据处理需求,通过数据分片和并行计算实现了真正的水平扩展。虽然架构和运维复杂度较高,但为大数据分析、数据仓库等场景提供了强大的处理能力。
在实际部署中,建议采用渐进式演进策略。从基础的主备集群开始,随着业务增长逐步引入读写分离,最终在数据量和复杂度达到阈值时考虑MPP集群。同时,无论采用哪种架构,完善的监控体系、定期的容灾演练和严格的变更管理都是保障高可用性的关键。
随着云计算和容器化技术的发展,金仓数据库的高可用架构也在不断演进。未来,我们可以期待更加智能化、自动化的高可用解决方案,进一步降低运维复杂度,提升系统可靠性。但不变的核心原则是:技术为业务服务,架构选择应始终以业务需求为出发点,在可靠性、性能、成本和复杂度之间找到最佳平衡点。