Cache Consistency Solutions
Overview
Cache consistency is one of the core problems in distributed systems. When data in the database changes, ensuring that the cached copy is updated in time, so that stale data is never served, is a challenge every cache-backed system must face. This article takes a deep look at the main solutions for Redis cache consistency, from basic cache-deletion strategies to advanced binlog-listening schemes.
1. Cache Double Delete and Double Check
1.1 Cache Double-Delete Strategy
1.1.1 Delayed Double Delete
The idea: delete the cache, update the database, then delete the cache again after a short delay, so that any stale value a concurrent read may have written back during the update window is also cleared.
@Slf4j
@Component
public class CacheDoubleDeleter {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private UserMapper userMapper;

    /**
     * Update with delayed double delete.
     */
    public boolean updateUserWithDelayDoubleDelete(User user) {
        try {
            String cacheKey = "user:" + user.getId();
            // 1. First cache delete
            redisTemplate.delete(cacheKey);
            // 2. Update the database
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // 3. Second, delayed cache delete. The delay should cover one
                // read round-trip plus replication lag, so a read that loaded
                // stale data during the update is cleaned up.
                delayDelete(cacheKey, 500); // delay 500 ms
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Delayed double-delete update failed", e);
            return false;
        }
    }

    /**
     * Delete the cache after a delay.
     * Note: @Async only takes effect through the Spring proxy; calling this
     * method from the same class (as above) bypasses the proxy, so in practice
     * the delayed delete should live in a separate bean (see CacheDelayDeleter
     * in section 2.4.3).
     */
    @Async
    public void delayDelete(String cacheKey, long delayMs) {
        try {
            Thread.sleep(delayMs);
            redisTemplate.delete(cacheKey);
            log.info("Delayed double delete for cache key: {}", cacheKey);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.error("Delayed double delete interrupted: {}", cacheKey);
        }
    }
}
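One design note on the sketch above: `Thread.sleep` inside an `@Async` method occupies a pool thread for the whole delay. A non-blocking alternative is sketched below; `ScheduledCacheDeleter` is an illustrative name, not part of the original.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.annotation.PreDestroy;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class ScheduledCacheDeleter {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // One scheduler thread suffices: the scheduled task is a quick Redis call.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Schedule the second delete without blocking any thread during the delay. */
    public void scheduleDelete(String cacheKey, long delayMs) {
        scheduler.schedule(() -> redisTemplate.delete(cacheKey),
                delayMs, TimeUnit.MILLISECONDS);
    }

    @PreDestroy
    public void shutdown() {
        scheduler.shutdown();
    }
}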
1.2 Cache Double-Check Mechanism (Double Check)
1.2.1 How Double Check Works
flowchart TD
    A[Query request] --> B[Check cache]
    B --> C{Cache hit?}
    C -->|Yes| D[Return cached data]
    C -->|No| E[Acquire distributed lock]
    E --> F{Lock acquired?}
    F -->|No| G[Wait and retry]
    F -->|Yes| H[Check cache again]
    H --> I{Cache hit?}
    I -->|Yes| J[Release lock and return cached data]
    I -->|No| K[Query database]
    K --> L[Update cache]
    L --> M[Release lock]
    M --> N[Return data]
    G --> B
    style E fill:#fff3e0
    style H fill:#e8f5e8
    style K fill:#e1f5fe
1.2.2 Double-Check Implementation
@Slf4j
@Service
public class UserCacheService {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private RedissonClient redissonClient;

    private static final String LOCK_PREFIX = "lock:user:";
    private static final String CACHE_PREFIX = "user:";

    public User getUserById(Long userId) {
        String cacheKey = CACHE_PREFIX + userId;
        // First cache check
        User user = (User) redisTemplate.opsForValue().get(cacheKey);
        if (user != null) {
            return user;
        }
        // Acquire the distributed lock
        String lockKey = LOCK_PREFIX + userId;
        RLock lock = redissonClient.getLock(lockKey);
        try {
            // Wait up to 10 seconds for the lock; auto-release after 30 seconds
            if (lock.tryLock(10, 30, TimeUnit.SECONDS)) {
                // Second cache check (the "double check")
                user = (User) redisTemplate.opsForValue().get(cacheKey);
                if (user != null) {
                    return user;
                }
                // Query the database
                user = userMapper.selectById(userId);
                if (user != null) {
                    // Populate the cache with an expiration time
                    redisTemplate.opsForValue().set(cacheKey, user, 30, TimeUnit.MINUTES);
                } else {
                    // Cache an empty object briefly to prevent cache penetration.
                    // Note: later reads that hit this sentinel receive a User with
                    // null fields, so callers must treat it as "not found".
                    redisTemplate.opsForValue().set(cacheKey, new User(), 5, TimeUnit.MINUTES);
                }
                return user;
            } else {
                // Failed to acquire the lock; fall back to the database
                return userMapper.selectById(userId);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.error("Interrupted while acquiring lock", e);
            return userMapper.selectById(userId);
        } finally {
            if (lock.isHeldByCurrentThread()) {
                lock.unlock();
            }
        }
    }
}
2. Four Cache Update Strategies
2.1 Strategy 1: Update the Database First, Then Update the Cache
2.1.1 Flow
sequenceDiagram
    participant C as Client
    participant A as Application
    participant D as Database
    participant R as Redis
    C->>A: Update request
    A->>D: Update database
    D-->>A: Update OK
    A->>R: Update cache
    R-->>A: Update OK
    A-->>C: Return success
2.1.2 Concurrency Problem Walkthrough
sequenceDiagram
    participant T1 as Thread 1
    participant T2 as Thread 2
    participant D as Database
    participant R as Redis
    Note over T1,T2: Initial values: DB=100, Cache=100
    T1->>D: Update to 200
    T2->>D: Update to 300
    D-->>T2: Update OK
    D-->>T1: Update OK
    T2->>R: Set cache to 300
    T1->>R: Set cache to 200
    R-->>T1: Update OK
    R-->>T2: Update OK
    Note over D,R: Result: DB=300, Cache=200 (inconsistent!)
2.1.3 Implementation
@Slf4j
@Service
public class CacheUpdateStrategy1 {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Transactional
    public boolean updateUserStrategy1(User user) {
        try {
            // 1. Update the database first
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // 2. Then update the cache
                String cacheKey = "user:" + user.getId();
                redisTemplate.opsForValue().set(cacheKey, user, 30, TimeUnit.MINUTES);
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Strategy 1 update failed", e);
            throw new RuntimeException("Update failed", e);
        }
    }
}
Pros:
- Simple to implement
- Both the database and the cache hold the latest data
Cons:
- Under concurrency, the cache can end up inconsistent with the database
- If the cache update fails, data becomes inconsistent
- Wasted work (the updated cache entry may never be read)
2.2 Strategy 2: Update the Cache First, Then Update the Database
2.2.1 Flow
sequenceDiagram
    participant C as Client
    participant A as Application
    participant R as Redis
    participant D as Database
    C->>A: Update request
    A->>R: Update cache
    R-->>A: Update OK
    A->>D: Update database
    D-->>A: Update OK
    A-->>C: Return success
2.2.2 Concurrency Problem Walkthrough
sequenceDiagram
    participant T1 as Thread 1
    participant T2 as Thread 2
    participant R as Redis
    participant D as Database
    Note over T1,T2: Initial values: Cache=100, DB=100
    T1->>R: Set cache to 200
    T2->>R: Set cache to 300
    R-->>T1: Update OK
    R-->>T2: Update OK
    T1->>D: Update database to 200
    T2->>D: Update database to 300
    D-->>T2: Update OK
    D-->>T1: Update OK
    Note over R,D: Result: Cache=300, DB=200 (inconsistent!)
2.2.3 Implementation
@Slf4j
@Service
public class CacheUpdateStrategy2 {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public boolean updateUserStrategy2(User user) {
        String cacheKey = "user:" + user.getId();
        try {
            // 1. Update the cache first
            redisTemplate.opsForValue().set(cacheKey, user, 30, TimeUnit.MINUTES);
            // 2. Then update the database
            int result = userMapper.updateUser(user);
            if (result > 0) {
                return true;
            } else {
                // Database update failed: roll back by deleting the cache entry
                redisTemplate.delete(cacheKey);
                return false;
            }
        } catch (Exception e) {
            // On error, delete the cache entry
            redisTemplate.delete(cacheKey);
            log.error("Strategy 2 update failed", e);
            return false;
        }
    }
}
Pros:
- Fast cache responses
Cons:
- Higher risk of inconsistency under concurrency
- If the database update fails, the cache must be rolled back
- Violates the principle that the database is the source of truth
2.3 Strategy 3: Delete the Cache First, Then Update the Database
2.3.1 Flow
sequenceDiagram
    participant C as Client
    participant A as Application
    participant R as Redis
    participant D as Database
    C->>A: Update request
    A->>R: Delete cache
    R-->>A: Delete OK
    A->>D: Update database
    D-->>A: Update OK
    A-->>C: Return success
2.3.2 Concurrency Problem Walkthrough
sequenceDiagram
    participant W as Writer thread
    participant R1 as Reader thread
    participant Cache as Redis
    participant DB as Database
    Note over W,R1: Initial values: Cache=100, DB=100
    W->>Cache: Delete cache
    Cache-->>W: Delete OK
    R1->>Cache: Query cache
    Cache-->>R1: Cache miss
    R1->>DB: Query database (old value 100)
    DB-->>R1: Return 100
    R1->>Cache: Set cache=100
    W->>DB: Update database to 200
    DB-->>W: Update OK
    Note over Cache,DB: Result: Cache=100, DB=200 (inconsistent!)
2.3.3 Implementation
@Slf4j
@Service
public class CacheUpdateStrategy3 {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Transactional
    public boolean updateUserStrategy3(User user) {
        String cacheKey = "user:" + user.getId();
        try {
            // 1. Delete the cache first
            redisTemplate.delete(cacheKey);
            // 2. Then update the database
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // Delayed double delete
                delayDelete(cacheKey, 1000);
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Strategy 3 update failed", e);
            return false;
        }
    }

    /**
     * Note: @Async is ignored on a self-invoked method (the call never crosses
     * the Spring proxy, and it must not be private). Move this into a separate
     * bean, as in CacheDelayDeleter in 2.4.3, for it to actually run asynchronously.
     */
    @Async
    public void delayDelete(String cacheKey, long delayMs) {
        try {
            Thread.sleep(delayMs);
            redisTemplate.delete(cacheKey);
            log.info("Delayed cache delete: {}", cacheKey);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Pros:
- Avoids the cost of writing the cache on every update
- Relatively simple to implement
Cons:
- Prone to inconsistency under heavy concurrent reads and writes
- Needs delayed double delete to shrink the inconsistency window
2.4 Strategy 4: Update the Database First, Then Delete the Cache (Recommended)
2.4.1 Flow
sequenceDiagram
    participant C as Client
    participant A as Application
    participant D as Database
    participant R as Redis
    C->>A: Update request
    A->>D: Update database
    D-->>A: Update OK
    A->>R: Delete cache
    R-->>A: Delete OK
    A-->>C: Return success
2.4.2 Concurrency Problem Walkthrough
sequenceDiagram
    participant W as Writer thread
    participant R1 as Reader thread
    participant DB as Database
    participant Cache as Redis
    Note over W,R1: Initial values: DB=100, Cache=100
    R1->>Cache: Query cache
    Cache-->>R1: Return 100
    W->>DB: Update database to 200
    DB-->>W: Update OK
    W->>Cache: Delete cache
    Cache-->>W: Delete OK
    Note over DB,Cache: Briefly inconsistent, but the next read reloads the latest value from the DB
2.4.3 Implementation
@Slf4j
@Service
public class CacheUpdateStrategy4 {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private CacheDelayDeleter delayDeleter;

    @Transactional
    public boolean updateUserStrategy4(User user) {
        String cacheKey = "user:" + user.getId();
        try {
            // 1. Update the database first
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // 2. Then delete the cache
                redisTemplate.delete(cacheKey);
                // 3. Delayed double delete (optional)
                delayDeleter.delayDelete(cacheKey, 1000);
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Strategy 4 update failed", e);
            return false;
        }
    }
}
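The `CacheDelayDeleter` injected above is referenced but never defined in the original; a minimal sketch of what such a bean might look like. Keeping it in its own bean is what makes `@Async` effective, since the annotation only applies when the call goes through the Spring proxy (and it requires `@EnableAsync` on a configuration class).

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Component
public class CacheDelayDeleter {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    /** Delete the key again after the given delay, off the caller's thread. */
    @Async
    public void delayDelete(String cacheKey, long delayMs) {
        try {
            Thread.sleep(delayMs);
            redisTemplate.delete(cacheKey);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}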
2.4.4 Retry Mechanism
@Slf4j
@Component
public class CacheDeleteRetryHandler {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Retryable(value = Exception.class, maxAttempts = 3, backoff = @Backoff(delay = 1000))
    public void deleteWithRetry(String cacheKey) {
        try {
            redisTemplate.delete(cacheKey);
            log.info("Cache deleted: {}", cacheKey);
        } catch (Exception e) {
            log.error("Cache delete failed: {}", cacheKey, e);
            throw e;
        }
    }

    @Recover
    public void recover(Exception e, String cacheKey) {
        log.error("Cache delete failed after all retries, handing off to message queue: {}", cacheKey, e);
        // Send to a message queue for asynchronous retry
        sendToMQ(cacheKey);
    }

    private void sendToMQ(String cacheKey) {
        // Message-queue publishing logic
    }
}
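Two gaps in the sketch above are worth closing: `@Retryable` does nothing unless retry support is enabled, and `sendToMQ` is left as a stub. One hedged way to fill both in, assuming RabbitMQ; the `cache.delete.*` exchange and routing-key names are illustrative only.

// Retry support must be switched on once, on any @Configuration class:
@Configuration
@EnableRetry
public class RetryConfig {
}

// A possible sendToMQ body inside CacheDeleteRetryHandler, assuming an
// injected RabbitTemplate:
@Autowired
private RabbitTemplate rabbitTemplate;

private void sendToMQ(String cacheKey) {
    rabbitTemplate.convertAndSend("cache.delete.exchange", "cache.delete.key", cacheKey);
}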
Pros:
- Smallest inconsistency risk of the four strategies
- Good performance (no cache writes on update)
- Simple to implement
Cons:
- A short window of staleness is still possible
- If the cache delete fails, a retry mechanism is required
3. MySQL Binlog + Canal Architecture
3.1 Canal Overall Architecture
graph TD
subgraph "MySQL Master"
A[MySQL Binlog]
end
subgraph "Canal Server"
B[Canal Server]
C[Binlog Parser]
D[Event Filter]
E[Event Dispatcher]
end
subgraph "Message Queue"
F[RocketMQ/Kafka]
end
subgraph "Cache Service"
G[Cache Update Service]
H[Message Consumer]
end
subgraph "Cache Layer"
I[Redis Cluster]
end
A --> B
B --> C
C --> D
D --> E
E --> F
F --> H
H --> G
G --> I
style A fill:#e1f5fe
style B fill:#fff3e0
style F fill:#ffecb3
style G fill:#e8f5e8
style I fill:#c8e6c9
3.2 How Canal Works
sequenceDiagram
    participant App as Application
    participant MySQL as MySQL Master
    participant Canal as Canal Server
    participant MQ as Message Queue
    participant Consumer as Cache Service
    participant Redis as Redis
    App->>MySQL: Update data
    MySQL->>MySQL: Write binlog
    Canal->>MySQL: Subscribe to binlog
    MySQL-->>Canal: Push binlog events
    Canal->>Canal: Parse binlog
    Canal->>MQ: Publish message to MQ
    MQ-->>Canal: Publish OK
    Consumer->>MQ: Consume message
    MQ-->>Consumer: Deliver message
    Consumer->>Consumer: Apply cache update logic
    Consumer->>Redis: Update/delete cache
    Redis-->>Consumer: Operation OK
3.3 Canal Deployment and Configuration
3.3.1 MySQL Configuration
-- 1. Enable binlog
-- my.cnf configuration
[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server-id=1
-- 2. Create the Canal user
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal123';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
-- 3. Check binlog status
SHOW MASTER STATUS;
SHOW VARIABLES LIKE 'binlog_format';
3.3.2 Canal Server Configuration
# canal.properties
canal.id = 1
canal.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
canal.zkServers =
# canal.destinations
canal.destinations = example
canal.auto.scan = true
canal.auto.scan.interval = 5
# instance.properties
canal.instance.master.address=127.0.0.1:3306
canal.instance.master.journal.name=mysql-bin.000001
canal.instance.master.position=154
canal.instance.master.timestamp=
canal.instance.master.gtid=
# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal123
canal.instance.connectionCharset = UTF-8
# table regex
canal.instance.filter.regex=test\\.user,test\\.order
canal.instance.filter.black.regex=
3.4 Canal Client Implementation
3.4.1 Basic Client
@Component
public class CanalClient {

    private static final Logger log = LoggerFactory.getLogger(CanalClient.class);

    @Autowired
    private CacheUpdateHandler cacheUpdateHandler;

    private CanalConnector connector;

    @PostConstruct
    public void init() {
        // Create the connection
        connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111),
                "example", "", ""
        );
        // Start listening on a dedicated thread. (An @Async annotation would be
        // ignored here: a self-invocation bypasses the Spring proxy, and the
        // infinite loop would then block @PostConstruct forever.)
        new Thread(this::startListening, "canal-listener").start();
    }

    public void startListening() {
        try {
            connector.connect();
            connector.subscribe("test\\.user,test\\.order");
            connector.rollback();
            while (true) {
                // Fetch up to 100 entries without auto-ack
                Message message = connector.getWithoutAck(100);
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    Thread.sleep(1000);
                } else {
                    processEntries(message.getEntries());
                }
                // Acknowledge the batch
                connector.ack(batchId);
            }
        } catch (Exception e) {
            log.error("Canal client error", e);
        } finally {
            connector.disconnect();
        }
    }

    private void processEntries(List<Entry> entries) {
        for (Entry entry : entries) {
            if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN ||
                    entry.getEntryType() == EntryType.TRANSACTIONEND) {
                continue;
            }
            RowChange rowChange;
            try {
                rowChange = RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                log.error("Failed to parse binlog entry", e);
                continue;
            }
            EventType eventType = rowChange.getEventType();
            String tableName = entry.getHeader().getTableName();
            for (RowData rowData : rowChange.getRowDatasList()) {
                cacheUpdateHandler.handleDataChange(tableName, eventType, rowData);
            }
        }
    }
}
3.4.2 Cache Update Handler
@Slf4j
@Component
public class CacheUpdateHandler {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public void handleDataChange(String tableName, EventType eventType, RowData rowData) {
        switch (tableName) {
            case "user":
                handleUserChange(eventType, rowData);
                break;
            case "order":
                handleOrderChange(eventType, rowData);
                break;
            default:
                log.warn("Unhandled table: {}", tableName);
        }
    }

    private void handleUserChange(EventType eventType, RowData rowData) {
        // For deletes the row image is in the "before" columns; otherwise "after"
        String userId;
        if (eventType == EventType.DELETE) {
            userId = getColumnValue(rowData.getBeforeColumnsList(), "id");
        } else {
            userId = getColumnValue(rowData.getAfterColumnsList(), "id");
        }
        if (userId == null) {
            return;
        }
        String cacheKey = "user:" + userId;
        switch (eventType) {
            case INSERT:
            case UPDATE:
            case DELETE:
                // Delete the cache entry so the next read reloads from the DB
                redisTemplate.delete(cacheKey);
                log.info("Cache for user {} deleted", userId);
                break;
            default:
                break;
        }
    }

    private void handleOrderChange(EventType eventType, RowData rowData) {
        String orderId;
        String userId;
        if (eventType == EventType.DELETE) {
            orderId = getColumnValue(rowData.getBeforeColumnsList(), "id");
            userId = getColumnValue(rowData.getBeforeColumnsList(), "user_id");
        } else {
            orderId = getColumnValue(rowData.getAfterColumnsList(), "id");
            userId = getColumnValue(rowData.getAfterColumnsList(), "user_id");
        }
        // Delete the order cache
        if (orderId != null) {
            redisTemplate.delete("order:" + orderId);
        }
        // Delete the user's order-list cache
        if (userId != null) {
            redisTemplate.delete("user:orders:" + userId);
        }
        log.info("Caches related to order {} deleted", orderId);
    }

    private String getColumnValue(List<Column> columns, String columnName) {
        for (Column column : columns) {
            if (columnName.equals(column.getName())) {
                return column.getValue();
            }
        }
        return null;
    }
}
3.5 Testing with Sample Data
3.5.1 Test Data Setup
-- Create the test table
CREATE TABLE `user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL,
  `email` varchar(100) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  `created_time` datetime DEFAULT CURRENT_TIMESTAMP,
  `updated_time` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_username` (`username`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
-- Insert test data
INSERT INTO user (username, email, age) VALUES
('user1', 'user1@example.com', 25),
('user2', 'user2@example.com', 30),
('user3', 'user3@example.com', 28);
3.5.2 Test Cases
@Slf4j
@SpringBootTest
public class CanalIntegrationTest {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Test
    public void testCanalCacheSync() throws InterruptedException {
        // 1. Read the user and populate the cache
        User user = userMapper.selectById(1L);
        String cacheKey = "user:1";
        redisTemplate.opsForValue().set(cacheKey, user, 30, TimeUnit.MINUTES);
        // Verify the cache entry exists
        User cachedUser = (User) redisTemplate.opsForValue().get(cacheKey);
        assertNotNull(cachedUser);
        assertEquals("user1", cachedUser.getUsername());
        // 2. Update the database
        user.setUsername("user1_updated");
        user.setAge(26);
        userMapper.updateById(user);
        // 3. Give Canal time to process
        Thread.sleep(2000);
        // 4. Verify the cache entry was deleted
        User cachedUserAfterUpdate = (User) redisTemplate.opsForValue().get(cacheKey);
        assertNull(cachedUserAfterUpdate);
        log.info("Canal cache sync test passed");
    }

    @Test
    public void testCanalPerformance() throws InterruptedException {
        int batchSize = 1000;
        long startTime = System.currentTimeMillis();
        // Batch-update the data
        for (int i = 1; i <= batchSize; i++) {
            User user = new User();
            user.setId((long) i);
            user.setUsername("user" + i + "_batch");
            user.setAge(20 + (i % 50));
            userMapper.updateById(user);
        }
        long endTime = System.currentTimeMillis();
        log.info("Batch update of {} rows took {} ms", batchSize, endTime - startTime);
        // Wait for Canal to catch up
        Thread.sleep(5000);
        // Count how many cache entries were cleared
        int deletedCacheCount = 0;
        for (int i = 1; i <= batchSize; i++) {
            String cacheKey = "user:" + i;
            if (!Boolean.TRUE.equals(redisTemplate.hasKey(cacheKey))) {
                deletedCacheCount++;
            }
        }
        log.info("Canal finished; cache entries cleared: {}", deletedCacheCount);
    }
}
3.6 Canal High-Availability Configuration
3.6.1 Canal HA Architecture
graph TD
subgraph "MySQL Cluster"
M1[MySQL Master]
S1[MySQL Slave1]
S2[MySQL Slave2]
end
subgraph "ZooKeeper Cluster"
Z1[ZK Node1]
Z2[ZK Node2]
Z3[ZK Node3]
end
subgraph "Canal Cluster"
C1[Canal Server1]
C2[Canal Server2]
C3[Canal Server3]
end
subgraph "Canal Client Cluster"
CL1[Client1]
CL2[Client2]
CL3[Client3]
end
M1 --> S1
M1 --> S2
M1 --> C1
M1 --> C2
M1 --> C3
Z1 --- Z2
Z2 --- Z3
Z3 --- Z1
C1 --> Z1
C2 --> Z2
C3 --> Z3
C1 --> CL1
C2 --> CL2
C3 --> CL3
3.6.2 HA Configuration
# canal.properties (HA mode)
canal.zkServers = 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# instance.properties (HA mode)
canal.instance.mysql.slaveId = 1234
canal.instance.master.address = 127.0.0.1:3306
canal.instance.master.fallbackAddress = 127.0.0.1:3307
canal.instance.master.standby.address = 127.0.0.1:3308
# Failover detection
canal.instance.detecting.enable = true
canal.instance.detecting.sql = SELECT 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = true
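On the client side, an HA deployment is consumed through Canal's cluster connector, which discovers the currently active server via ZooKeeper instead of a fixed address. A sketch assuming the ZooKeeper addresses configured above:

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;

// The client follows whichever Canal server currently owns the "example"
// instance; failover is transparent as long as ZooKeeper stays reachable.
CanalConnector connector = CanalConnectors.newClusterConnector(
        "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183",
        "example", "", "");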
4. Recommended Mainstream Cache Consistency Schemes
4.1 Scheme Comparison
| Scheme | Consistency | Performance | Complexity | Typical use case |
|---|---|---|---|---|
| Cache Aside | Eventual (small window) | Medium | Simple | Read-heavy, write-light |
| Write Through | Strong | Lower | Medium | Write-heavy |
| Write Behind | Eventual | High | Complex | High-concurrency writes |
| Binlog listening | Eventual | High | Medium | Large systems |
| Distributed transactions | Strong | Low | Complex | Strict consistency requirements |
4.2 Recommended Schemes
4.2.1 For Small Systems: Cache Aside + Delayed Double Delete
@Slf4j
@Service
public class RecommendedCacheService {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private AsyncTaskExecutor asyncExecutor;

    /**
     * Recommended cache update strategy.
     */
    @Transactional
    public boolean updateUser(User user) {
        String cacheKey = "user:" + user.getId();
        try {
            // 1. Update the database
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // 2. Delete the cache
                redisTemplate.delete(cacheKey);
                // 3. Delayed double delete
                asyncExecutor.execute(() -> {
                    try {
                        Thread.sleep(1000); // 1-second delay
                        redisTemplate.delete(cacheKey);
                        log.info("Delayed cache delete: {}", cacheKey);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Failed to update user", e);
            throw new RuntimeException("Update failed", e);
        }
    }

    /**
     * Read a user (double-checked locking).
     */
    public User getUser(Long userId) {
        String cacheKey = "user:" + userId;
        // First cache check
        User user = (User) redisTemplate.opsForValue().get(cacheKey);
        if (user != null) {
            return user;
        }
        // Distributed lock via SET NX with a TTL
        String lockKey = "lock:user:" + userId;
        Boolean lockAcquired = redisTemplate.opsForValue().setIfAbsent(lockKey, "1", 10, TimeUnit.SECONDS);
        try {
            if (Boolean.TRUE.equals(lockAcquired)) {
                // Second cache check
                user = (User) redisTemplate.opsForValue().get(cacheKey);
                if (user != null) {
                    return user;
                }
                // Query the database
                user = userMapper.selectById(userId);
                if (user != null) {
                    redisTemplate.opsForValue().set(cacheKey, user, 30, TimeUnit.MINUTES);
                }
                return user;
            } else {
                // Lock not acquired: wait briefly, then retry. In production
                // this recursion should be bounded to avoid deep call stacks
                // under sustained contention.
                Thread.sleep(50);
                return getUser(userId);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return userMapper.selectById(userId);
        } finally {
            if (Boolean.TRUE.equals(lockAcquired)) {
                // Caveat: deleting without a per-owner token can release a lock
                // that expired and was re-acquired by another thread.
                redisTemplate.delete(lockKey);
            }
        }
    }
}
4.2.2 For Medium and Large Systems: Binlog + MQ
@Slf4j
@Component
public class AdvancedCacheConsistencyService {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    /**
     * Publish a cache update message.
     */
    public void sendCacheUpdateMessage(String tableName, String operation, String primaryKey) {
        CacheUpdateMessage message = new CacheUpdateMessage();
        message.setTableName(tableName);
        message.setOperation(operation);
        message.setPrimaryKey(primaryKey);
        message.setTimestamp(System.currentTimeMillis());
        rabbitTemplate.convertAndSend("cache.update.exchange", "cache.update.key", message);
    }

    /**
     * Consume cache update messages.
     */
    @RabbitListener(queues = "cache.update.queue")
    public void handleCacheUpdate(CacheUpdateMessage message) {
        try {
            String cacheKey = message.getTableName() + ":" + message.getPrimaryKey();
            switch (message.getOperation().toUpperCase()) {
                case "INSERT":
                case "UPDATE":
                case "DELETE":
                    redisTemplate.delete(cacheKey);
                    log.info("Cache deleted: {}", cacheKey);
                    break;
                default:
                    log.warn("Unknown operation type: {}", message.getOperation());
            }
        } catch (Exception e) {
            log.error("Failed to process cache update message", e);
            // Consider retries or a dead-letter queue here
        }
    }
}

@Data
public class CacheUpdateMessage {
    private String tableName;
    private String operation;
    private String primaryKey;
    private Long timestamp;
}
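What the original leaves implicit is the glue between the Canal client of section 3.4 and this MQ consumer: something must publish a `CacheUpdateMessage` per row change. A hedged sketch of that bridge (the class and method names are illustrative), which would be invoked from `processEntries` in place of the direct Redis calls:

import java.util.List;
import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class BinlogToMqBridge {

    @Autowired
    private AdvancedCacheConsistencyService cacheConsistencyService;

    /**
     * Publish one message per row change. Routing through MQ decouples binlog
     * parsing from cache maintenance and absorbs write bursts.
     */
    public void onRowChange(String tableName, EventType eventType, RowData rowData) {
        List<Column> columns = (eventType == EventType.DELETE)
                ? rowData.getBeforeColumnsList()
                : rowData.getAfterColumnsList();
        String primaryKey = getColumnValue(columns, "id");
        if (primaryKey != null) {
            cacheConsistencyService.sendCacheUpdateMessage(tableName, eventType.name(), primaryKey);
        }
    }

    private String getColumnValue(List<Column> columns, String columnName) {
        for (Column column : columns) {
            if (columnName.equals(column.getName())) {
                return column.getValue();
            }
        }
        return null;
    }
}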
4.2.3 For High-Concurrency Systems: Multi-Level Cache + Asynchronous Updates
@Slf4j
@Service
public class MultiLevelCacheService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private UserMapper userMapper;

    // Local (in-process) cache
    private final Cache<String, User> localCache = Caffeine.newBuilder()
            .maximumSize(10000)
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .build();

    /**
     * Multi-level cache read.
     */
    public User getUserMultiLevel(Long userId) {
        String cacheKey = "user:" + userId;
        // 1. Local cache
        User user = localCache.getIfPresent(cacheKey);
        if (user != null) {
            return user;
        }
        // 2. Redis cache
        user = (User) redisTemplate.opsForValue().get(cacheKey);
        if (user != null) {
            localCache.put(cacheKey, user);
            return user;
        }
        // 3. Database. A separate effectively-final variable is required so
        // the lambda below can capture it ("user" is reassigned above).
        final User dbUser = userMapper.selectById(userId);
        if (dbUser != null) {
            // Populate both cache levels asynchronously
            CompletableFuture.runAsync(() -> {
                redisTemplate.opsForValue().set(cacheKey, dbUser, 30, TimeUnit.MINUTES);
                localCache.put(cacheKey, dbUser);
            });
        }
        return dbUser;
    }

    /**
     * Multi-level cache update.
     */
    @Transactional
    public boolean updateUserMultiLevel(User user) {
        String cacheKey = "user:" + user.getId();
        try {
            // 1. Update the database
            int result = userMapper.updateUser(user);
            if (result > 0) {
                // 2. Evict both cache levels asynchronously
                CompletableFuture.runAsync(() -> {
                    localCache.invalidate(cacheKey);
                    redisTemplate.delete(cacheKey);
                });
                return true;
            }
            return false;
        } catch (Exception e) {
            log.error("Multi-level cache update failed", e);
            return false;
        }
    }
}
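One caveat with the local layer: `localCache.invalidate` only evicts the Caffeine entry on the node that handled the write, while every other node keeps its copy until the 5-minute TTL expires. A common remedy is a Redis pub/sub broadcast; a sketch assuming a `cache-invalidation` channel name, that keys are published as plain strings (e.g. via a `StringRedisTemplate`), and a hypothetical `evictLocal` method that simply wraps `localCache.invalidate`:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

@Configuration
public class LocalCacheInvalidationConfig {

    /**
     * Every node subscribes; after a write, the updating node publishes the
     * cache key (stringRedisTemplate.convertAndSend("cache-invalidation", cacheKey))
     * and all nodes evict their local copies.
     */
    @Bean
    public RedisMessageListenerContainer invalidationListener(
            RedisConnectionFactory connectionFactory,
            MultiLevelCacheService cacheService) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.addMessageListener(
                (message, pattern) -> cacheService.evictLocal(new String(message.getBody())),
                new ChannelTopic("cache-invalidation"));
        return container;
    }
}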
4.3 Choosing a Scheme
flowchart TD
    A[Choosing a cache consistency scheme] --> B{System scale}
    B -->|Small system<br/>QPS < 1000| C[Cache Aside + delayed double delete]
    B -->|Medium system<br/>1000 < QPS < 10000| D{Consistency requirement}
    B -->|Large system<br/>QPS > 10000| E[Binlog + MQ + multi-level cache]
    D -->|Strong consistency| F[Distributed transactions + cache]
    D -->|Eventual consistency| G[Binlog listening + async updates]
    C --> H[Simple to implement<br/>Low maintenance cost]
    F --> I[Strong consistency<br/>Lower performance]
    G --> J[Good performance<br/>Moderate complexity]
    E --> K[High performance<br/>High availability]
    style C fill:#c8e6c9
    style G fill:#fff3e0
    style E fill:#e1f5fe
5. Summary
5.1 Key Takeaways
- Cache consistency is a core challenge in distributed systems
  - A balance must be struck among consistency, performance, and complexity
  - There is no silver bullet; choose the scheme that fits the business scenario
- The four update strategies each have trade-offs
  - Update the database first, then delete the cache (recommended)
  - Adding delayed double delete further reduces the inconsistency risk
  - A retry mechanism is needed for failed cache deletes
- Binlog + Canal is the preferred option for large systems
  - It decouples business logic from cache maintenance
  - It supports a variety of data sources and targets
  - High availability and failure recovery must be designed in
- Multi-level caching can raise performance further
  - Local cache plus distributed cache
  - Consistency between cache levels must be handled
5.2 Best-Practice Recommendations
- Design phase
  - Pin down the consistency requirement (strong vs. eventual)
  - Assess system scale and performance targets
  - Choose a caching strategy accordingly
- Implementation phase
  - Build in retry mechanisms and exception handling
  - Add monitoring and alerting
  - Plan for cache warm-up and degradation
- Operations phase
  - Monitor cache hit rate and consistency
  - Periodically check for and clean up expired data
  - Establish an incident response plan
5.3 Trends
- Technical directions
  - Smarter caching strategies
  - Stronger consistency guarantees
  - Lower latency and higher throughput
- Tools and frameworks
  - More mature caching middleware
  - More complete monitoring and management tooling
  - Better cloud-native support
With a well-chosen, well-executed cache consistency scheme, a system can preserve data consistency while significantly improving performance and user experience. The key is to pick the approach that best fits the concrete business scenario and technical constraints, and to keep optimizing it in practice.