【REDIS SCAN】Using SCAN to clean up invalid keys caused the slow log to spike

1 The problem code

    // Scan the key space for keys matching 'pattern' and delete those without a TTL
    // (or delete unconditionally when type == 2). Relies on the enclosing class's
    // redisTemplate and redisCacheUtil fields.
    private List<String> scanWithLimit(String pattern, int limit, int type) {
        Preconditions.checkArgument(StringUtils.isNotBlank(pattern));
        List<String> list = Lists.newArrayList();
        AtomicInteger count = new AtomicInteger();
        AtomicInteger expired = new AtomicInteger();
        this.redisTemplate.execute((RedisConnection connection) -> {
            Cursor<byte[]> cursor = null;
            try {
                // COUNT 30000: ask each SCAN iteration to examine up to 30000 keys
                cursor = connection.scan(ScanOptions.scanOptions().count(30000).match(pattern).build());
                while (cursor.hasNext()) {
                    String key = new String(cursor.next(), StandardCharsets.UTF_8);

                    // list.add(key); // matched keys are deliberately not collected here
                    long expire = redisCacheUtil.getExpireMillis(RedisKeys.BLANK, key);
                    if (expire == -1 || type == 2) {
                        // PTTL == -1: the key exists but has no TTL -> delete it
                        // (type == 2 forces deletion regardless of TTL)
                        expired.getAndIncrement();
                        redisCacheUtil.delBatch("", key);
                    } else if (expire < -1) {
                        // PTTL == -2: the key no longer exists; only count it
                        expired.getAndIncrement();
                    }
                    // keys with a valid TTL are left untouched

                    if (count.getAndIncrement() > limit) {
                        break;
                    }
                }
                return list;
            } catch (Exception e) {
                return null;
            } finally {
                if (null != cursor) {
                    try {
                        cursor.close();
                    } catch (IOException e) {
                        // ignore the close failure; never return from a finally block
                    }
                }
            }
        });
        return list;
    }
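
For reference, the branching above keys off the PTTL return convention: Redis reports -2 for a key that no longer exists and -1 for a key that exists without a TTL. Below is a minimal sketch of that distinction, assuming a wired StringRedisTemplate; the helper name describeTtl is made up for illustration and is not part of the original code.

    import java.nio.charset.StandardCharsets;

    import org.springframework.data.redis.connection.RedisConnection;
    import org.springframework.data.redis.core.StringRedisTemplate;

    public class PttlSemanticsDemo {

        // Hypothetical helper: classifies a key by its PTTL, mirroring the branches in scanWithLimit.
        static String describeTtl(StringRedisTemplate redisTemplate, String key) {
            return redisTemplate.execute((RedisConnection connection) -> {
                Long pttl = connection.pTtl(key.getBytes(StandardCharsets.UTF_8));
                if (pttl == null || pttl == -2) {
                    return "key does not exist (PTTL == -2)";
                } else if (pttl == -1) {
                    return "key exists but has no TTL (PTTL == -1) -> deletion candidate";
                }
                return "key expires in " + pttl + " ms";
            });
        }
    }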

2 The slow log evidence

3 The implementation used by the code above

org.springframework.data.redis.connection.lettuce.LettuceKeyCommands#scan(org.springframework.data.redis.core.ScanOptions) 

The documentation for this method reads "Incrementally iterate the keys space over the whole Cluster." In other words, the scanner walks every cluster node in turn, by default 0 -> 1 -> 2 -> ... -> 31. The large COUNT was chosen so that each node scans as many keys as possible per iteration, but across 32 nodes the total execution time was far longer than expected. Marked for follow-up.
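
One possible mitigation, not from the original post, is to keep each SCAN round trip cheap by using a much smaller COUNT hint and letting the caller's limit bound the total work per invocation. Below is a minimal sketch under those assumptions (Spring Data Redis 2.x, where the Cursor can be used in try-with-resources; the COUNT of 500 is illustrative, not a tuned value).

    import java.nio.charset.StandardCharsets;
    import java.util.List;

    import org.springframework.data.redis.connection.RedisConnection;
    import org.springframework.data.redis.core.Cursor;
    import org.springframework.data.redis.core.ScanOptions;
    import org.springframework.data.redis.core.StringRedisTemplate;

    import com.google.common.collect.Lists;

    public class BoundedScan {

        // Collects at most 'limit' keys matching 'pattern', keeping each SCAN command small.
        static List<String> scanBounded(StringRedisTemplate redisTemplate, String pattern, int limit) {
            List<String> keys = Lists.newArrayList();
            redisTemplate.execute((RedisConnection connection) -> {
                // A modest COUNT keeps every per-node SCAN call fast, trading more
                // round trips for shorter individual commands in the slow log.
                try (Cursor<byte[]> cursor = connection.scan(
                        ScanOptions.scanOptions().match(pattern).count(500).build())) {
                    while (cursor.hasNext() && keys.size() < limit) {
                        keys.add(new String(cursor.next(), StandardCharsets.UTF_8));
                    }
                }
                return null;
            });
            return keys;
        }
    }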