MySQL Advanced Techniques Handbook - Chapter 1: Connection and Session Management
Goal: understand MySQL's connection mechanics in depth and master session diagnostics and performance tuning.
1.1 Diagnosing Connection State (SHOW PROCESSLIST)
1.1.1 Basic Commands
-- List all current connections
SHOW PROCESSLIST;
-- Show the full SQL text (not truncated)
SHOW FULL PROCESSLIST;
-- Query via INFORMATION_SCHEMA
SELECT
ID,
USER,
HOST,
DB,
COMMAND,
TIME,
STATE,
LEFT(INFO, 100) AS INFO_PREVIEW
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;
Output fields:
┌─────────┬─────────────────────────────────────────────────────────┐
│ Field   │ Description                                             │
├─────────┼─────────────────────────────────────────────────────────┤
│ ID      │ Connection ID, usable with KILL                         │
│ USER    │ User name                                               │
│ HOST    │ Client IP:port                                          │
│ DB      │ Current database                                        │
│ COMMAND │ Command type (Sleep/Query/Connect)                      │
│ TIME    │ Time spent in the current state, in seconds             │
│ STATE   │ Thread state (very important!)                          │
│ INFO    │ SQL statement being executed                            │
└─────────┴─────────────────────────────────────────────────────────┘
1.1.2 Analyzing Key States
Common COMMAND types:
-- Count connections by command type
SELECT
COMMAND,
COUNT(*) as count,
GROUP_CONCAT(DISTINCT USER) as users
FROM INFORMATION_SCHEMA.PROCESSLIST
GROUP BY COMMAND;
| COMMAND | Meaning | What to watch |
|---|---|---|
| Sleep | Idle connection | Too many open connections? |
| Query | Executing SQL | Long-running statements? |
| Connect | Establishing a connection | Connection storms |
| Execute | Executing a prepared statement | Prepared statement usage |
| Binlog Dump | Replication source thread | Replication health |
Key STATE values:
-- Summarize all current states
SELECT
STATE,
COUNT(*) as count,
AVG(TIME) as avg_time,
MAX(TIME) as max_time
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND != 'Sleep'
GROUP BY STATE
ORDER BY count DESC;
High-risk states (act immediately):
┌──────────────────────────────┬──────────────────────────────────────────────────┐
│ State                        │ Meaning & response                               │
├──────────────────────────────┼──────────────────────────────────────────────────┤
│ Waiting for table lock       │ Waiting for a table lock; check lock contention  │
│ Waiting for global read lock │ Waiting for the global read lock; a backup or    │
│                              │ DDL may be running                               │
│ Copying to tmp table         │ Building a temporary table; may spill to disk if │
│                              │ memory is short                                  │
│ Locked                       │ Blocked by a lock; check for deadlocks           │
│ Waiting for lock             │ Waiting for a row lock; check how long           │
│                              │ transactions hold locks                          │
│ User lock                    │ Waiting on an explicit user lock (GET_LOCK)      │
└──────────────────────────────┴──────────────────────────────────────────────────┘
States worth watching:
┌──────────────────────────────┬──────────────────────────────────────────────────┐
│ State                        │ Meaning & tuning advice                          │
├──────────────────────────────┼──────────────────────────────────────────────────┤
│ Sending data                 │ Reading and sending rows; result set may be      │
│                              │ too large                                        │
│ Sorting result               │ Sorting; check for filesort                      │
│ Creating sort index          │ Building a sort index; large sort volume         │
│ Statistics                   │ Computing statistics; table stats may be stale   │
│ Preparing                    │ Query optimization; complex queries take time    │
│ Opening tables               │ Opening tables; possibly too many tables         │
│ System lock                  │ System-level lock; check file system / disk      │
└──────────────────────────────┴──────────────────────────────────────────────────┘
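The two state tables above can be turned into a small triage helper: given a (STATE, TIME) pair from PROCESSLIST, classify the row as critical, watch, or normal. A minimal sketch; the state lists come from the tables above, while the time threshold is an illustrative assumption, not a MySQL default:

```python
# Classify a PROCESSLIST row by STATE, mirroring the two tables above.
# The 10-second threshold for "watch" states is an assumed cutoff.

CRITICAL_STATES = {
    "Waiting for table lock",
    "Waiting for global read lock",
    "Copying to tmp table",
    "Locked",
    "Waiting for lock",
    "User lock",
}

WATCH_STATES = {
    "Sending data",
    "Sorting result",
    "Creating sort index",
    "Statistics",
    "Preparing",
    "Opening tables",
    "System lock",
}

def classify(state: str, time_s: int) -> str:
    """Return 'critical', 'watch', or 'normal' for one PROCESSLIST row."""
    if state in CRITICAL_STATES:
        return "critical"
    if state in WATCH_STATES and time_s > 10:  # assumed threshold
        return "watch"
    return "normal"
```

Feeding each row of the diagnostic queries below through such a function is an easy way to build a prioritized worklist instead of eyeballing raw output.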
1.1.3 Practical Diagnostic Scripts
-- 1. Find long-running queries
SELECT
ID,
USER,
HOST,
DB,
TIME as seconds,
STATE,
LEFT(INFO, 200) as SQL_PREVIEW
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query'
AND TIME > 60 -- running for more than 60 seconds
ORDER BY TIME DESC;
-- 2. Find blocked queries
-- Note: INNODB_LOCK_WAITS holds transaction IDs, so map them back to
-- connection IDs through INNODB_TRX (joining trx IDs directly against
-- PROCESSLIST.ID is wrong). This works on 5.7; on MySQL 8.0 use
-- sys.innodb_lock_waits or performance_schema.data_lock_waits instead.
SELECT
r.TRX_MYSQL_THREAD_ID as waiting_id,
pr.USER as waiting_user,
r.TRX_QUERY as waiting_query,
b.TRX_MYSQL_THREAD_ID as blocking_id,
pb.USER as blocking_user,
b.TRX_QUERY as blocking_query
FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS w
JOIN INFORMATION_SCHEMA.INNODB_TRX r ON r.TRX_ID = w.REQUESTING_TRX_ID
JOIN INFORMATION_SCHEMA.INNODB_TRX b ON b.TRX_ID = w.BLOCKING_TRX_ID
JOIN INFORMATION_SCHEMA.PROCESSLIST pr ON pr.ID = r.TRX_MYSQL_THREAD_ID
JOIN INFORMATION_SCHEMA.PROCESSLIST pb ON pb.ID = b.TRX_MYSQL_THREAD_ID;
-- 3. Count connections per user
SELECT
USER,
HOST,
COUNT(*) as connections,
SUM(CASE WHEN COMMAND = 'Sleep' THEN 1 ELSE 0 END) as idle,
SUM(CASE WHEN COMMAND = 'Query' THEN 1 ELSE 0 END) as active,
MAX(TIME) as max_time
FROM INFORMATION_SCHEMA.PROCESSLIST
GROUP BY USER, HOST
ORDER BY connections DESC;
-- 4. Find open (uncommitted) transactions
SELECT
p.ID,
p.USER,
p.HOST,
p.DB,
t.TRX_ID,
t.TRX_STATE,
t.TRX_STARTED,
TIMESTAMPDIFF(SECOND, t.TRX_STARTED, NOW()) as trx_seconds,
t.TRX_MYSQL_THREAD_ID,
LEFT(t.TRX_QUERY, 200) as query
FROM INFORMATION_SCHEMA.INNODB_TRX t
JOIN INFORMATION_SCHEMA.PROCESSLIST p ON t.TRX_MYSQL_THREAD_ID = p.ID
ORDER BY t.TRX_STARTED;
-- 5. Track connection counts over time (for a scheduled job)
CREATE TABLE IF NOT EXISTS monitor_connections (
id INT AUTO_INCREMENT PRIMARY KEY,
snap_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
total_conn INT,
active_conn INT,
sleeping_conn INT,
slow_query_count INT
);
INSERT INTO monitor_connections (total_conn, active_conn, sleeping_conn, slow_query_count)
SELECT
COUNT(*) as total,
SUM(CASE WHEN COMMAND != 'Sleep' THEN 1 ELSE 0 END) as active,
SUM(CASE WHEN COMMAND = 'Sleep' THEN 1 ELSE 0 END) as sleeping,
SUM(CASE WHEN COMMAND = 'Query' AND TIME > 10 THEN 1 ELSE 0 END) as slow
FROM INFORMATION_SCHEMA.PROCESSLIST;
1.2 Connection Management and Optimization
1.2.1 Tuning Connection Limits
-- Current connection limits
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'max_user_connections';
-- Current connection status
SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Aborted_%';
Key parameters:
┌──────────────────────────┬──────────────────────────────────────────┐
│ Parameter / status       │ Description                              │
├──────────────────────────┼──────────────────────────────────────────┤
│ max_connections          │ Maximum connections; default 151         │
│ max_user_connections     │ Per-user limit; 0 means unlimited        │
│ Threads_connected        │ Currently open connections               │
│ Threads_running          │ Active (non-Sleep) threads               │
│ Threads_cached           │ Threads held in the thread cache         │
│ Max_used_connections     │ High-water mark of connections           │
│ Aborted_connects         │ Failed connection attempts               │
│ Aborted_clients          │ Clients that disconnected abnormally     │
└──────────────────────────┴──────────────────────────────────────────┘
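The status and variable values above combine into a simple utilization check (the 80% rule of thumb used later in this chapter). A minimal sketch; the input dicts stand in for `SHOW GLOBAL STATUS` / `SHOW VARIABLES` results fetched by any client:

```python
# Compute connection utilization from server status/variable values.
# Key names match the table above; values arrive as strings from MySQL.

def connection_report(status: dict, variables: dict) -> dict:
    max_conn = int(variables["max_connections"])
    connected = int(status["Threads_connected"])
    running = int(status["Threads_running"])
    return {
        "utilization_pct": round(100.0 * connected / max_conn, 1),
        "running": running,
        # Common rule of thumb: alert above 80% utilization.
        "warn": connected > max_conn * 0.8,
    }
```

Running this on a schedule and alerting on `warn` gives early notice well before ERROR 1040 appears.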
Suggested configuration:
# my.cnf
[mysqld]
# Basic connection limits
max_connections = 500        # size to available memory; roughly 256KB-1MB per connection
max_user_connections = 100   # per-user limit
max_connect_errors = 1000    # error threshold before a host is blocked
# Thread cache
thread_cache_size = 100      # reuse threads to reduce creation overhead
# Timeouts
wait_timeout = 600           # idle timeout for non-interactive connections (seconds)
interactive_timeout = 600    # idle timeout for interactive connections (seconds)
connect_timeout = 10         # handshake timeout
# Note: applies only to INSERT DELAYED, which is deprecated in 5.7 and removed in 8.0
delayed_insert_timeout = 300
Sizing formula:
Recommended max_connections = (total memory - system reservation) / memory per connection
Memory per connection ≈
sort_buffer_size (256KB)
+ read_buffer_size (128KB)
+ read_rnd_buffer_size (256KB)
+ join_buffer_size (256KB)
+ binlog_cache_size (32KB)
+ thread_stack (256KB)
≈ 1-2MB
Example: a server with 64GB of RAM
max_connections ≈ (64GB - 8GB) / 2MB ≈ 28000
In practice: 500-1000 (performance degrades from too many connections long before memory runs out)
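The sizing formula above can be written out directly. A sketch; the per-buffer sizes are the example values from the text, and the result is a memory-based upper bound, not a recommendation:

```python
# Sketch of the max_connections sizing formula above.
# Per-connection buffer sizes are the example values from the text (bytes).

KB = 1024
PER_CONNECTION_BYTES = (
    256 * KB    # sort_buffer_size
    + 128 * KB  # read_buffer_size
    + 256 * KB  # read_rnd_buffer_size
    + 256 * KB  # join_buffer_size
    + 32 * KB   # binlog_cache_size
    + 256 * KB  # thread_stack
)  # ≈ 1.2MB; the text rounds up to 2MB to be safe

def max_connections_estimate(total_gb: float, reserved_gb: float,
                             per_conn_bytes: int = 2 * 1024 * 1024) -> int:
    """Memory-based upper bound; cap far lower in practice (500-1000)."""
    usable = (total_gb - reserved_gb) * 1024 ** 3
    return int(usable // per_conn_bytes)
```

For the 64GB example: `max_connections_estimate(64, 8)` returns 28672, matching the ≈28000 in the text.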
1.2.2 Connection Pool Configuration
Java HikariCP best practices:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class MySQLConnectionPool {
    public HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        // Basics
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb?useSSL=false&serverTimezone=Asia/Shanghai");
        config.setUsername("user");
        config.setPassword("password");
        // Pool sizing (important!)
        // Rule of thumb: connections = (core_count * 2) + effective_spindle_count
        // For SSDs, effective_spindle_count = 1
        // 8-core server: connections = (8 * 2) + 1 = 17
        config.setMaximumPoolSize(20);       // maximum pool size
        config.setMinimumIdle(5);            // minimum idle connections
        // Timeouts
        config.setConnectionTimeout(30000);  // max wait to obtain a connection
        config.setIdleTimeout(600000);       // idle connection timeout
        config.setMaxLifetime(1800000);      // maximum connection lifetime
        // Connection testing
        // (unnecessary with JDBC4 drivers such as Connector/J, which use isValid())
        // config.setConnectionTestQuery("SELECT 1");
        config.setValidationTimeout(5000);   // validation timeout
        // Performance tuning
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        config.addDataSourceProperty("useServerPrepStmts", "true");
        config.addDataSourceProperty("useLocalSessionState", "true");
        config.addDataSourceProperty("rewriteBatchedStatements", "true");
        config.addDataSourceProperty("cacheResultSetMetadata", "true");
        config.addDataSourceProperty("cacheServerConfiguration", "true");
        config.addDataSourceProperty("elideSetAutoCommits", "true");
        config.addDataSourceProperty("maintainTimeStats", "false");
        // Monitoring (optional; requires a MetricRegistry instance)
        // config.setMetricRegistry(metricRegistry);
        return new HikariDataSource(config);
    }
}
Python connection pooling (SQLAlchemy):
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

def create_engine_with_pool():
    """Create an engine backed by a connection pool."""
    engine = create_engine(
        'mysql+pymysql://user:password@localhost:3306/mydb',
        # Pool configuration
        poolclass=QueuePool,
        pool_size=10,        # connections kept open
        max_overflow=20,     # extra connections beyond pool_size
        pool_timeout=30,     # wait time to obtain a connection
        pool_recycle=3600,   # recycle connections (avoids the 8-hour wait_timeout problem)
        pool_pre_ping=True,  # ping before handing out a connection
        # Driver-level connection parameters
        connect_args={
            'connect_timeout': 10,
            'read_timeout': 30,
            'write_timeout': 30,
        }
    )
    return engine

# Pool monitoring
def monitor_pool(engine):
    """Report connection pool status."""
    pool = engine.pool
    return {
        'size': pool.size(),               # configured pool size
        'checked_in': pool.checkedin(),    # idle connections
        'checked_out': pool.checkedout(),  # connections in use
        'overflow': pool.overflow(),       # overflow connections
    }
Go connection pooling (database/sql):
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

type DBPool struct {
	db *sql.DB
}

func NewDBPool() (*DBPool, error) {
	dsn := "user:password@tcp(localhost:3306)/mydb?charset=utf8mb4&parseTime=True&loc=Local"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	// Pool configuration
	db.SetMaxOpenConns(25)                  // maximum open connections
	db.SetMaxIdleConns(10)                  // maximum idle connections
	db.SetConnMaxLifetime(5 * time.Minute)  // maximum connection lifetime
	db.SetConnMaxIdleTime(10 * time.Minute) // idle connection timeout
	// Verify connectivity
	if err := db.Ping(); err != nil {
		return nil, err
	}
	return &DBPool{db: db}, nil
}

func (p *DBPool) Stats() sql.DBStats {
	return p.db.Stats()
}

func (p *DBPool) Close() error {
	return p.db.Close()
}

// Usage example
func main() {
	pool, err := NewDBPool()
	if err != nil {
		panic(err)
	}
	defer pool.Close()
	// Pool statistics
	stats := pool.Stats()
	fmt.Printf("Open: %d, InUse: %d, Idle: %d\n",
		stats.OpenConnections,
		stats.InUse,
		stats.Idle)
}
1.3 Diagnosing Connection Problems
1.3.1 Handling Connection Storms
Symptoms:
ERROR 1040 (08004): Too many connections
Aborted_connects climbing rapidly
Threads_connected approaching max_connections
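Since Threads_created and Aborted_connects are cumulative counters, a storm shows up as a sudden growth *rate* between two samples. A minimal detection sketch; the counter names are real SHOW GLOBAL STATUS variables, while the per-second thresholds are illustrative assumptions:

```python
# Detect a connection storm from two samples of cumulative status counters.
# Threads_created / Aborted_connects are real SHOW GLOBAL STATUS counters;
# the per-second thresholds below are assumed values, not MySQL defaults.

def storm_detected(prev: dict, curr: dict, interval_s: float,
                   created_per_s: float = 50.0,
                   aborted_per_s: float = 5.0) -> bool:
    created_rate = (int(curr["Threads_created"])
                    - int(prev["Threads_created"])) / interval_s
    aborted_rate = (int(curr["Aborted_connects"])
                    - int(prev["Aborted_connects"])) / interval_s
    return created_rate > created_per_s or aborted_rate > aborted_per_s
```

This mirrors what the Prometheus `rate()` alerts in 1.3.2 do, but is handy in an ad-hoc script when no exporter is deployed.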
Diagnostic scripts:
-- 1. Connection distribution by client IP
SELECT
SUBSTRING_INDEX(HOST, ':', 1) as client_ip,
COUNT(*) as conn_count,
GROUP_CONCAT(DISTINCT USER) as users
FROM INFORMATION_SCHEMA.PROCESSLIST
GROUP BY client_ip
ORDER BY conn_count DESC
LIMIT 10;
-- 2. Connection creation counter (sample repeatedly to estimate the rate)
SELECT
DATE_FORMAT(NOW(), '%Y-%m-%d %H:%i:%s') as snap_time,
VARIABLE_VALUE as threads_created
FROM PERFORMANCE_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Threads_created';
-- 3. Find applications leaking connections
SELECT
USER,
DB,
COMMAND,
COUNT(*) as count,
AVG(TIME) as avg_idle_time
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep'
GROUP BY USER, DB, COMMAND
HAVING count > 10
ORDER BY count DESC;
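The client-IP aggregation in query 1 above can also be done in application code on already-fetched PROCESSLIST rows. A sketch; the input mirrors the HOST column's `ip:port` format:

```python
from collections import Counter

def connections_by_client_ip(hosts: list) -> list:
    """Aggregate PROCESSLIST HOST values ('ip:port') into (ip, count) pairs,
    like SUBSTRING_INDEX(HOST, ':', 1) + GROUP BY in the query above."""
    counts = Counter(h.rsplit(":", 1)[0] for h in hosts)
    return counts.most_common()  # sorted by count, descending
```

This is useful when you snapshot PROCESSLIST once and want several aggregations without re-querying a server that is already under connection pressure.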
Emergency response:
-- 1. Temporarily raise the connection limit (dynamic; lasts until restart)
SET GLOBAL max_connections = 1000;
-- 2. Terminate idle connections (use with care!)
SELECT GROUP_CONCAT(ID SEPARATOR ',')
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep' AND TIME > 600;
-- Then KILL in bulk (from a shell)
-- mysql -e "KILL 123; KILL 124; KILL 125;"
Automated cleanup script in Python:
import mysql.connector

class ConnectionManager:
    """Connection manager."""

    def __init__(self, host, user, password):
        self.conn = mysql.connector.connect(
            host=host,
            user=user,
            password=password,
            database='information_schema'
        )

    def kill_idle_connections(self, idle_seconds=600, dry_run=True):
        """
        Terminate idle connections.

        Args:
            idle_seconds: idle-time threshold
            dry_run: preview only, do not kill
        """
        cursor = self.conn.cursor(dictionary=True)
        # Find idle connections
        query = """
            SELECT ID, USER, HOST, DB, TIME, INFO
            FROM PROCESSLIST
            WHERE COMMAND = 'Sleep'
            AND TIME > %s
            AND USER != 'root'  -- protect root connections
            ORDER BY TIME DESC
        """
        cursor.execute(query, (idle_seconds,))
        connections = cursor.fetchall()
        print(f"Found {len(connections)} connections idle for more than {idle_seconds} seconds")
        if dry_run:
            print("\n[Dry run] These connections would be terminated:")
            for conn in connections[:10]:
                print(f"  ID={conn['ID']}, USER={conn['USER']}, "
                      f"TIME={conn['TIME']}s, DB={conn['DB']}")
            if len(connections) > 10:
                print(f"  ... and {len(connections) - 10} more")
        else:
            killed = 0
            for conn in connections:
                try:
                    cursor.execute(f"KILL {conn['ID']}")
                    killed += 1
                except Exception as e:
                    print(f"Failed to kill connection {conn['ID']}: {e}")
            print(f"Killed {killed} connections")
        cursor.close()

    def kill_long_queries(self, max_seconds=60, dry_run=True):
        """Terminate long-running queries."""
        cursor = self.conn.cursor(dictionary=True)
        query = """
            SELECT ID, USER, HOST, DB, TIME, STATE, LEFT(INFO, 100) as INFO
            FROM PROCESSLIST
            WHERE COMMAND = 'Query'
            AND TIME > %s
            ORDER BY TIME DESC
        """
        cursor.execute(query, (max_seconds,))
        queries = cursor.fetchall()
        print(f"Found {len(queries)} queries running for more than {max_seconds} seconds")
        for q in queries:
            print(f"\n  ID={q['ID']}, TIME={q['TIME']}s, STATE={q['STATE']}")
            print(f"  SQL: {q['INFO']}")
            if not dry_run:
                confirm = input("Kill this query? (y/n): ")
                if confirm.lower() == 'y':
                    try:
                        cursor.execute(f"KILL {q['ID']}")
                        print("  Killed")
                    except Exception as e:
                        print(f"  Kill failed: {e}")
        cursor.close()

    def close(self):
        self.conn.close()

# Usage example
if __name__ == '__main__':
    manager = ConnectionManager('localhost', 'root', 'password')
    # Preview the idle connections that would be terminated
    manager.kill_idle_connections(idle_seconds=600, dry_run=True)
    # Actually terminate (uncomment)
    # manager.kill_idle_connections(idle_seconds=600, dry_run=False)
    manager.close()
1.3.2 Connection Monitoring and Alerting
Prometheus exporter configuration:
# mysql_exporter.yml
# (these map to mysqld_exporter's --collect.* command-line flags)
collect.info_schema.processlist: true
collect.info_schema.processlist.min_time: 0
collect.global_status: true
collect.global_variables: true
Prometheus alerting rules:
# mysql_alerts.yml
groups:
  - name: mysql_connection
    rules:
      # Connection count too high
      - alert: MySQLTooManyConnections
        expr: |
          mysql_global_status_threads_connected / mysql_global_variables_max_connections * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL connection count is high"
          description: "{{ $labels.instance }} is using more than 80% of max_connections"
      # Unusually many running threads
      - alert: MySQLHighThreadsRunning
        expr: |
          mysql_global_status_threads_running > 50
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL running thread count is high"
          description: "Currently running threads: {{ $value }}"
      # Connections failing
      - alert: MySQLConnectionRefused
        expr: |
          rate(mysql_global_status_aborted_connects[5m]) > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "MySQL connections are failing"
          description: "Aborted connection attempts detected"
      # Slow queries
      - alert: MySQLSlowQueries
        expr: |
          rate(mysql_global_status_slow_queries[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL slow queries detected"
          description: "The slow query count is increasing"
1.4 Session Variable Management
1.4.1 Session-level Tuning
-- Current session variables
SHOW SESSION VARIABLES LIKE '%buffer%';
-- Set session-level variables (current connection only)
SET SESSION sort_buffer_size = 2 * 1024 * 1024; -- 2MB
SET SESSION join_buffer_size = 1 * 1024 * 1024; -- 1MB
SET SESSION read_buffer_size = 1 * 1024 * 1024; -- 1MB
-- Timeouts
SET SESSION wait_timeout = 3600;
SET SESSION interactive_timeout = 3600;
-- SQL mode
SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
Session tuning example in Python:
import mysql.connector

def create_optimized_connection():
    """Create a connection with tuned session variables."""
    conn = mysql.connector.connect(
        host='localhost',
        user='user',
        password='password',
        database='mydb'
    )
    cursor = conn.cursor()
    # Session-level tuning parameters
    optimizations = [
        "SET SESSION sort_buffer_size = 2097152",      # 2MB
        "SET SESSION join_buffer_size = 1048576",      # 1MB
        "SET SESSION read_buffer_size = 1048576",      # 1MB
        "SET SESSION read_rnd_buffer_size = 524288",   # 512KB
        "SET SESSION tmp_table_size = 67108864",       # 64MB
        "SET SESSION max_heap_table_size = 67108864",  # 64MB
    ]
    for opt in optimizations:
        cursor.execute(opt)
    cursor.close()
    return conn
# Optimized connection for bulk imports
def create_bulk_connection():
    """Create a connection tuned for bulk loading."""
    conn = mysql.connector.connect(
        host='localhost',
        user='user',
        password='password',
        database='mydb',
        autocommit=False,
        use_pure=True,
        # Connection parameters
        connection_timeout=300,
        consume_results=True
    )
    cursor = conn.cursor()
    # Bulk-load tuning
    optimizations = [
        "SET SESSION autocommit = 0",
        "SET SESSION unique_checks = 0",       # skip unique checks
        "SET SESSION foreign_key_checks = 0",  # skip foreign key checks
        # Skip binlog for this session (requires SUPER; only when these
        # writes must not replicate)
        "SET SESSION sql_log_bin = 0",
    ]
    # Note: innodb_flush_log_at_trx_commit is a GLOBAL variable and cannot
    # be set per session; change it with SET GLOBAL (with care) if needed.
    for opt in optimizations:
        cursor.execute(opt)
    cursor.close()
    return conn
1.4.2 Diagnostic Views
Create a diagnostic view:
-- A convenient summary view
CREATE OR REPLACE VIEW v_processlist_summary AS
SELECT
USER,
DB,
COMMAND,
STATE,
COUNT(*) as conn_count,
MIN(TIME) as min_time,
MAX(TIME) as max_time,
AVG(TIME) as avg_time,
GROUP_CONCAT(DISTINCT LEFT(INFO, 50) SEPARATOR ' | ') as sample_queries
FROM INFORMATION_SCHEMA.PROCESSLIST
GROUP BY USER, DB, COMMAND, STATE;
-- Use the view
SELECT * FROM v_processlist_summary
WHERE COMMAND != 'Sleep'
ORDER BY conn_count DESC;
Create stored procedures:
DELIMITER //
-- Show active connection details
CREATE PROCEDURE sp_show_active_processes()
BEGIN
SELECT
ID,
USER,
HOST,
DB,
COMMAND,
TIME as seconds,
STATE,
INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND != 'Sleep'
ORDER BY TIME DESC;
END //
-- Show long-running queries
CREATE PROCEDURE sp_show_slow_processes(IN threshold INT)
BEGIN
SELECT
ID,
USER,
HOST,
DB,
TIME as seconds,
STATE,
LEFT(INFO, 200) as SQL_PREVIEW
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query' AND TIME > threshold
ORDER BY TIME DESC;
END //
-- Kill all idle connections
CREATE PROCEDURE sp_kill_idle_connections(IN idle_seconds INT)
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE conn_id BIGINT;
DECLARE cur CURSOR FOR
SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep' AND TIME > idle_seconds;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur;
read_loop: LOOP
FETCH cur INTO conn_id;
IF done THEN
LEAVE read_loop;
END IF;
-- KILL does not accept a variable directly; use a prepared statement
SET @kill_stmt = CONCAT('KILL ', conn_id);
PREPARE ps FROM @kill_stmt;
EXECUTE ps;
DEALLOCATE PREPARE ps;
END LOOP;
CLOSE cur;
SELECT CONCAT('Killed connections idle > ', idle_seconds, ' seconds') as result;
END //
DELIMITER ;
1.5 Chapter Summary
✅ Covered:
- SHOW PROCESSLIST in depth (FULL variant, field meanings)
- Key STATE analysis (high-risk states, states to watch)
- Practical diagnostic scripts (slow queries, blocking, connection stats, open transactions)
- Connection limit tuning (sizing max_connections, key parameters)
- Connection pool configuration (Java HikariCP, Python SQLAlchemy, Go database/sql)
- Connection storm handling (diagnosis, cleanup scripts, emergency response)
- Session variable management (session-level tuning, bulk-import tuning)
- Monitoring and alerting (Prometheus + Alertmanager)
📋 Connection management essentials:
Three diagnostic staples:
├── SHOW FULL PROCESSLIST - current activity
├── SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST - detailed analysis
└── SHOW GLOBAL STATUS LIKE 'Threads_%' - connection statistics
Key metrics:
├── Threads_connected < 80% of max_connections
├── Threads_running should stay stable, not keep climbing
├── Aborted_connects should stay low
└── Slow queries should be caught and fixed promptly
Recommendations:
├── Use a connection pool; avoid creating connections per request
├── Size the pool sensibly (cores * 2 + 1)
├── Set reasonable timeouts
├── Watch for connection leaks
└── Clean up idle connections regularly
Next: Chapter 2 - Deep Index Optimization