RuoYi-Cloud-Plus: using Redis sentinel mode for the Dubbo 3 metadata center (metadata-report)



First of all, thanks to the open-source authors of RuoYi, to 疯狂的狮子Li, and to the dromara open-source community.

This article records a problem I ran into while deploying RuoYi-Cloud-Plus, an open-source Spring Cloud scaffold, and the framework extension that solved it.

The problem
  • How to make the Dubbo metadata center (metadata-report) use Redis in sentinel mode

Root-cause analysis

How do you configure Redis sentinel mode for Dubbo's metadata center? Browsing the project structure, the ruoyi-common base module contains ruoyi-common-dubbo, the sub-module through which the project pulls in Dubbo. That sub-module ships a common-dubbo.yml file holding Dubbo's base configuration; the initial configuration looks like this:

# Built-in configuration; do not modify here. To override, write the same keys on nacos.
dubbo:
  application:
    logger: slf4j
    # Metadata type: local or remote. Remote is used here so other services can read it.
    metadataType: remote
    # Allowed values: interface, instance, all. The default is all,
    # i.e. both interface-level and application-level addresses are registered.
    register-mode: instance
    service-discovery:
      # FORCE_INTERFACE: consume only interface-level (2.x) addresses, error if none; single subscription
      # APPLICATION_FIRST: smart decision between interface-level and application-level addresses; dual subscription
      # FORCE_APPLICATION: consume only application-level (3.x) addresses, error if none; single subscription
      migration: FORCE_APPLICATION
  # Registry configuration
  registry:
    address: nacos://${spring.cloud.nacos.server-addr}
    group: DUBBO_GROUP
    username: ${spring.cloud.nacos.username}
    password: ${spring.cloud.nacos.password}
    parameters:
      namespace: ${spring.profiles.active}
  metadata-report:
    address: redis://${spring.data.redis.host}:${spring.data.redis.port}
    group: DUBBO_GROUP
    username: dubbo
    password: ${spring.data.redis.password}
    parameters:
      namespace: ${spring.profiles.active}
      database: ${spring.data.redis.database}
  # Consumer configuration
  consumer:
    # Result cache (LRU)
    # Can serve stale data; prefer enabling it locally via annotations
    cache: false
    # Enable validation annotations
    validation: jvalidationNew
    # Retries on failure, not counting the first call; 0 disables retries
    retries: 0
    # Check provider availability at startup
    check: false

(Taking ruoyi-system as the example service.) The metadata-report section shows where the values come from: the microservice's own spring.data.redis.host and spring.data.redis.port properties.

From this configuration it is clear that the framework assumes a standalone Redis.

Here is where the trouble started: I overrode the original Redis configuration in ruoyi-system.yaml on nacos with a sentinel configuration:

………………

  # Common Redis configuration; sub-services can override it
  data:
    redis:
      sentinel:
        master: mymaster
        nodes: 10.104.209.83:6379
      max-redirects: 3  
      database: 13
      password: RJn+!lduAgvs@7K
      timeout: 10s
      ssl.enabled: false
………………

The change looked fine, but startup failed. The reason: Dubbo also reads its Redis address from the configuration on nacos, and since I had replaced the project's Redis configuration, the spring.data.redis.host and port keys it interpolates were gone, so Dubbo could no longer resolve a Redis address.

So I tried the blunt approach: hard-code the Redis address instead of letting Dubbo read it from nacos. Startup failed again, unsurprisingly.

In the habit of checking the official docs first when something breaks, I read the Dubbo metadata center configuration reference.

And there it was in the docs: Dubbo says it does not support Redis. That couldn't be right, I thought — the framework is open source, and it had started fine against a standalone Redis before.

With no other option, I dug into where the error actually came from. The error log:

2024-08-07 16:47:49 [DubboSaveMetadataReport-thread-1] ERROR o.a.d.m.s.redis.RedisMetadataReport
 -  [DUBBO] Failed to put provider metadata MetadataIdentifier{application='base-system', serviceInterface='org.dromara.system.api.RemoteDictService', version='', group='', side='provider'} in  FullServiceDefinition{parameters={dubbo=2.0.2, pid=19557, metadata-type=remote, release=3.2.14, interface=org.dromara.system.api.RemoteDictService, anyhost=true, side=provider, application=base-system, executor-management-mode=isolation, file-cache=true, methods=selectDictDataByType, logger=slf4j, deprecated=false, service-name-mapping=true, register-mode=instance, qos.enable=false, generic=false, bind.port=20880, bind.ip=192.168.255.6, prefer.serialization=fastjson2,hessian2, background=false, dynamic=true, timestamp=1723020468602}} ServiceDefinition [canonicalName=org.dromara.system.api.RemoteDictService, codeSource=file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/, methods=[MethodDefinition [name=selectDictDataByType, parameterTypes=[java.lang.String], returnType=java.util.List<org.dromara.system.api.domain.vo.RemoteDictDataVo>]]], cause: Failed to put MetadataIdentifier{application='base-system', serviceInterface='org.dromara.system.api.RemoteDictService', version='', group='', side='provider'} to redis 
{"annotations":[],"canonicalName":"org.dromara.system.api.RemoteDictService","codeSource":"file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/","methods":[{"annotations":[],"name":"selectDictDataByType","parameterTypes":["java.lang.String"],"parameters":[],"returnType":"java.util.List<org.dromara.system.api.domain.vo.RemoteDictDataVo>"}],"parameters":{"dubbo":"2.0.2","pid":"19557","metadata-type":"remote","release":"3.2.14","interface":"org.dromara.system.api.RemoteDictService","anyhost":"true","side":"provider","application":"base-system","executor-management-mode":"isolation","file-cache":"true","methods":"selectDictDataByType","logger":"slf4j","deprecated":"false","service-name-mapping":"true","register-mode":"instance","qos.enable":"false","generic":"false","bind.port":"20880","bind.ip":"192.168.255.6","prefer.serialization":"fastjson2,hessian2","background":"false","dynamic":"true","timestamp":"1723020468602"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.Long"},{"enums":[],"items":[],"properties":{},"type":"java.util.Date"},{"enums":[],"items":[],"properties":{},"type":"java.lang.String"},{"enums":[],"items":["org.dromara.system.api.domain.vo.RemoteDictDataVo"],"properties":{},"type":"java.util.List<org.dromara.system.api.domain.vo.RemoteDictDataVo>"},{"enums":[],"items":[],"properties":{"dictLabel":"java.lang.String","dictValue":"java.lang.String","listClass":"java.lang.String","isDefault":"java.lang.String","cssClass":"java.lang.String","dictCode":"java.lang.Long","createTime":"java.util.Date","dictSort":"java.lang.Integer","remark":"java.lang.String","dictType":"java.lang.String","status":"java.lang.String"},"type":"org.dromara.system.api.domain.vo.RemoteDictDataVo"},{"enums":[],"items":[],"properties":{},"type":"java.lang.Integer"}],"uniqueId":"org.dromara.system.api.RemoteDictService@file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/"}, cause: 
ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?, dubbo version: 3.2.14, current host: 192.168.255.6, error code: 3-2. This may be caused by , go to https://dubbo.apache.org/faq/3/2 to find instructions. 
org.apache.dubbo.rpc.RpcException: Failed to put MetadataIdentifier{application='base-system', serviceInterface='org.dromara.system.api.RemoteDictService', version='', group='', side='provider'} to redis {"annotations":[],"canonicalName":"org.dromara.system.api.RemoteDictService","codeSource":"file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/","methods":[{"annotations":[],"name":"selectDictDataByType","parameterTypes":["java.lang.String"],"parameters":[],"returnType":"java.util.List<org.dromara.system.api.domain.vo.RemoteDictDataVo>"}],"parameters":{"dubbo":"2.0.2","pid":"19557","metadata-type":"remote","release":"3.2.14","interface":"org.dromara.system.api.RemoteDictService","anyhost":"true","side":"provider","application":"base-system","executor-management-mode":"isolation","file-cache":"true","methods":"selectDictDataByType","logger":"slf4j","deprecated":"false","service-name-mapping":"true","register-mode":"instance","qos.enable":"false","generic":"false","bind.port":"20880","bind.ip":"192.168.255.6","prefer.serialization":"fastjson2,hessian2","background":"false","dynamic":"true","timestamp":"1723020468602"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.Long"},{"enums":[],"items":[],"properties":{},"type":"java.util.Date"},{"enums":[],"items":[],"properties":{},"type":"java.lang.String"},{"enums":[],"items":["org.dromara.system.api.domain.vo.RemoteDictDataVo"],"properties":{},"type":"java.util.List<org.dromara.system.api.domain.vo.RemoteDictDataVo>"},{"enums":[],"items":[],"properties":{"dictLabel":"java.lang.String","dictValue":"java.lang.String","listClass":"java.lang.String","isDefault":"java.lang.String","cssClass":"java.lang.String","dictCode":"java.lang.Long","createTime":"java.util.Date","dictSort":"java.lang.Integer","remark":"java.lang.String","dictType":"java.lang.String","status":"java.lang.String"},"type":"org.dromara.system.api.domain.vo.RemoteDictDataVo"},{"enums":[],"items":[],"pr
operties":{},"type":"java.lang.Integer"}],"uniqueId":"org.dromara.system.api.RemoteDictService@file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/"}, cause: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadataStandalone(RedisMetadataReport.java:196)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadata(RedisMetadataReport.java:159)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.doStoreProviderMetadata(RedisMetadataReport.java:115)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadataTask$1(AbstractMetadataReport.java:283)
	at org.apache.dubbo.metrics.event.MetricsEventBus.post(MetricsEventBus.java:84)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:271)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:262)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: redis.clients.jedis.exceptions.JedisDataException: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
	at redis.clients.jedis.Protocol.processError(Protocol.java:105)
	at redis.clients.jedis.Protocol.process(Protocol.java:162)
	at redis.clients.jedis.Protocol.read(Protocol.java:221)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:350)
	at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:268)
	at redis.clients.jedis.Connection.helloOrAuth(Connection.java:492)
	at redis.clients.jedis.Connection.initializeFromClientConfig(Connection.java:401)
	at redis.clients.jedis.Connection.<init>(Connection.java:67)
	at redis.clients.jedis.Jedis.<init>(Jedis.java:220)
	at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:170)
	at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:566)
	at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:306)
	at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:233)
	at redis.clients.jedis.util.Pool.getResource(Pool.java:38)
	at redis.clients.jedis.JedisPool.getResource(JedisPool.java:378)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadataStandalone(RedisMetadataReport.java:191)
	... 9 common frames omitted
2024-08-07 16:47:49 [DubboSaveMetadataReport-thread-1] ERROR o.a.d.m.s.redis.RedisMetadataReport
 -  [DUBBO] Failed to put MetadataIdentifier{application='base-system', serviceInterface='org.dromara.system.api.RemoteLogService', version='', group='', side='provider'} to redis {"annotations":[],"canonicalName":"org.dromara.system.api.RemoteLogService","codeSource":"file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/","methods":[{"annotations":[],"name":"saveLogininfor","parameterTypes":["org.dromara.system.api.domain.bo.RemoteLogininforBo"],"parameters":[],"returnType":"void"},{"annotations":[],"name":"saveLog","parameterTypes":["org.dromara.system.api.domain.bo.RemoteOperLogBo"],"parameters":[],"returnType":"void"}],"parameters":{"dubbo":"2.0.2","pid":"19557","metadata-type":"remote","release":"3.2.14","interface":"org.dromara.system.api.RemoteLogService","anyhost":"true","side":"provider","application":"base-system","executor-management-mode":"isolation","file-cache":"true","methods":"saveLog,saveLogininfor","logger":"slf4j","deprecated":"false","service-name-mapping":"true","register-mode":"instance","qos.enable":"false","generic":"false","bind.port":"20880","bind.ip":"192.168.255.6","prefer.serialization":"fastjson2,hessian2","background":"false","dynamic":"true","timestamp":"1723020468995"},"types":[{"enums":[],"items":[],"properties":{},"type":"void"},{"enums":[],"items":[],"properties":{"deviceType":"java.lang.String","msg":"java.lang.String","infoId":"java.lang.Long","os":"java.lang.String","userName":"java.lang.String","params":"java.util.Map<java.lang.String,java.lang.Object>","loginTime":"java.util.Date","clientKey":"java.lang.String","browser":"java.lang.String","tenantId":"java.lang.String","ipaddr":"java.lang.String","loginLocation":"java.lang.String","status":"java.lang.String"},"type":"org.dromara.system.api.domain.bo.RemoteLogininforBo"},{"enums":[],"items":[],"properties":{},"type":"java.lang.Long"},{"enums":[],"items":[],"properties":{"deptName":"java.lang.String","method":"java.lang.String","reques
tMethod":"java.lang.String","operId":"java.lang.Long","title":"java.lang.String","jsonResult":"java.lang.String","params":"java.util.Map<java.lang.String,java.lang.Object>","errorMsg":"java.lang.String","operLocation":"java.lang.String","costTime":"java.lang.Long","operIp":"java.lang.String","operUrl":"java.lang.String","tenantId":"java.lang.String","operName":"java.lang.String","operatorType":"java.lang.Integer","businessType":"java.lang.Integer","operParam":"java.lang.String","status":"java.lang.Integer","operTime":"java.util.Date"},"type":"org.dromara.system.api.domain.bo.RemoteOperLogBo"},{"enums":[],"items":[],"properties":{},"type":"java.util.Date"},{"enums":[],"items":["java.lang.String","java.lang.Object"],"properties":{},"type":"java.util.Map<java.lang.String,java.lang.Object>"},{"enums":[],"items":[],"properties":{},"type":"java.lang.Object"},{"enums":[],"items":[],"properties":{},"type":"java.lang.String"},{"enums":[],"items":[],"properties":{},"type":"java.lang.Integer"}],"uniqueId":"org.dromara.system.api.RemoteLogService@file:/Users/picachu/work/A-java-dev/User-System-Service/base-api/base-api-system/target/classes/"}, cause: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?, dubbo version: 3.2.14, current host: 192.168.255.6, error code: 6-14. This may be caused by , go to https://dubbo.apache.org/faq/6/14 to find instructions. 
redis.clients.jedis.exceptions.JedisDataException: ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
	at redis.clients.jedis.Protocol.processError(Protocol.java:105)
	at redis.clients.jedis.Protocol.process(Protocol.java:162)
	at redis.clients.jedis.Protocol.read(Protocol.java:221)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:350)
	at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:268)
	at redis.clients.jedis.Connection.helloOrAuth(Connection.java:492)
	at redis.clients.jedis.Connection.initializeFromClientConfig(Connection.java:401)
	at redis.clients.jedis.Connection.<init>(Connection.java:67)
	at redis.clients.jedis.Jedis.<init>(Jedis.java:220)
	at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:170)
	at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:566)
	at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:306)
	at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:233)
	at redis.clients.jedis.util.Pool.getResource(Pool.java:38)
	at redis.clients.jedis.JedisPool.getResource(JedisPool.java:378)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadataStandalone(RedisMetadataReport.java:191)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadata(RedisMetadataReport.java:159)
	at org.apache.dubbo.metadata.store.redis.RedisMetadataReport.doStoreProviderMetadata(RedisMetadataReport.java:115)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadataTask$1(AbstractMetadataReport.java:283)
	at org.apache.dubbo.metrics.event.MetricsEventBus.post(MetricsEventBus.java:84)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:271)
	at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:262)

The gist of the error: Dubbo can neither write to nor read from the Redis I pointed it at.

The underlying cause: a Redis sentinel node does not serve reads and writes itself; its job is to monitor the health and topology of the master and replica nodes.

In my ruoyi-system.yml above I had configured Redis in sentinel mode, so redis-spring-boot-starter connects according to that mode. Combined with the official statement that Dubbo does not support Redis, my guess was that this project's ability to connect to Redis must be a custom extension (I had naively assumed Dubbo's Redis support would detect the deployment mode by itself).

I then started from the classes named in the stack trace:

org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadataStandalone(RedisMetadataReport.java:191)
org.apache.dubbo.metadata.store.redis.RedisMetadataReport.storeMetadata(RedisMetadataReport.java:159)
org.apache.dubbo.metadata.store.redis.RedisMetadataReport.doStoreProviderMetadata(RedisMetadataReport.java:115)

Below is part of RedisMetadataReport, the class from the error:

………………
    public RedisMetadataReport(URL url) {
        super(url);
        timeout = url.getParameter(TIMEOUT_KEY, DEFAULT_TIMEOUT);
        password = url.getPassword();
        this.root = url.getGroup(DEFAULT_ROOT);
        if (url.getParameter(CYCLE_REPORT_KEY, DEFAULT_METADATA_REPORT_CYCLE_REPORT)) {
            // ttl default is twice the cycle-report time
            jedisParams.ex(ONE_DAY_IN_MILLISECONDS * 2);
        }
        if (url.getParameter(CLUSTER_KEY, false)) {
            jedisClusterNodes = new HashSet<>();
            List<URL> urls = url.getBackupUrls();
            for (URL tmpUrl : urls) {
                jedisClusterNodes.add(new HostAndPort(tmpUrl.getHost(), tmpUrl.getPort()));
            }
        } else {
            int database = url.getParameter(REDIS_DATABASE_KEY, 0);
            pool = new JedisPool(new JedisPoolConfig(), url.getHost(), url.getPort(), timeout, password, database);
        }
    }
………………

Reading this code, the cause of the error became clear: this constructor initializes the metadata center's Redis connection (the client library is Jedis), and the call path above goes straight through it.

It distinguishes exactly two cases, Redis cluster mode or standalone mode, switched by one parameter:

dubbo.metadata-report.parameters.cluster=true  # cluster mode

Without that parameter, standalone mode is assumed.
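For cluster mode, then, the metadata-report would be declared roughly like this (a sketch with hypothetical node addresses; extra nodes go into Dubbo's `backup` URL parameter, which is what `getBackupUrls()` expands):

```yaml
dubbo:
  metadata-report:
    # first node in the address, remaining nodes via the backup parameter
    address: redis://10.0.0.1:6379?backup=10.0.0.2:6379,10.0.0.3:6379
    password: ${spring.data.redis.password}
    parameters:
      cluster: true
```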

At this point the fix is clear: to make Dubbo's metadata center support Redis sentinel mode, add a sentinel branch following the same pattern as the standalone one — initialize the connection, then route the calls to it. And since the existing if only checks for cluster mode, we simply add a sentinel check of our own.
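With that sentinel branch in place, the metadata-report can be pointed at the sentinel nodes the same way (a sketch with hypothetical sentinel addresses; note that the final code hard-codes the master name to mymaster, so the sentinels must be monitoring a master by that name):

```yaml
dubbo:
  metadata-report:
    # list every sentinel: the first in the address, the rest via backup
    address: redis://10.104.209.83:26379?backup=10.104.209.84:26379,10.104.209.85:26379
    password: ${spring.data.redis.password}
    parameters:
      sentinel: true
      database: ${spring.data.redis.database}
```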

The final, complete code

  • A Redis sentinel branch is added, gated by the SENTINEL_KEY parameter that marks the connection as sentinel mode
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.dubbo.metadata.store.redis;

import org.apache.dubbo.common.URL;
import org.apache.dubbo.common.config.configcenter.ConfigItem;
import org.apache.dubbo.common.logger.ErrorTypeAwareLogger;
import org.apache.dubbo.common.logger.LoggerFactory;
import org.apache.dubbo.common.utils.ConcurrentHashMapUtils;
import org.apache.dubbo.common.utils.ConcurrentHashSet;
import org.apache.dubbo.common.utils.JsonUtils;
import org.apache.dubbo.common.utils.StringUtils;
import org.apache.dubbo.metadata.MappingChangedEvent;
import org.apache.dubbo.metadata.MappingListener;
import org.apache.dubbo.metadata.MetadataInfo;
import org.apache.dubbo.metadata.ServiceNameMapping;
import org.apache.dubbo.metadata.report.identifier.BaseMetadataIdentifier;
import org.apache.dubbo.metadata.report.identifier.KeyTypeEnum;
import org.apache.dubbo.metadata.report.identifier.MetadataIdentifier;
import org.apache.dubbo.metadata.report.identifier.ServiceMetadataIdentifier;
import org.apache.dubbo.metadata.report.identifier.SubscriberMetadataIdentifier;
import org.apache.dubbo.metadata.report.support.AbstractMetadataReport;
import org.apache.dubbo.rpc.RpcException;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.*;
import redis.clients.jedis.params.SetParams;
import redis.clients.jedis.util.JedisClusterCRC16;

import static org.apache.dubbo.common.constants.CommonConstants.CLUSTER_KEY;
import static org.apache.dubbo.common.constants.CommonConstants.CYCLE_REPORT_KEY;
import static org.apache.dubbo.common.constants.CommonConstants.DEFAULT_TIMEOUT;
import static org.apache.dubbo.common.constants.CommonConstants.GROUP_CHAR_SEPARATOR;
import static org.apache.dubbo.common.constants.CommonConstants.QUEUES_KEY;
import static org.apache.dubbo.common.constants.CommonConstants.TIMEOUT_KEY;
import static org.apache.dubbo.common.constants.LoggerCodeConstants.TRANSPORT_FAILED_RESPONSE;
import static org.apache.dubbo.metadata.MetadataConstants.META_DATA_STORE_TAG;
import static org.apache.dubbo.metadata.ServiceNameMapping.DEFAULT_MAPPING_GROUP;
import static org.apache.dubbo.metadata.ServiceNameMapping.getAppNames;
import static org.apache.dubbo.metadata.report.support.Constants.DEFAULT_METADATA_REPORT_CYCLE_REPORT;

/**
 * RedisMetadataReport
 */
public class RedisMetadataReport extends AbstractMetadataReport {

    private static final String REDIS_DATABASE_KEY = "database";
    private static final String SENTINEL_KEY = "sentinel";
    private static final ErrorTypeAwareLogger logger = LoggerFactory.getErrorTypeAwareLogger(RedisMetadataReport.class);

    // protected , for test
    protected JedisPool pool;
    protected JedisSentinelPool sentinelPool;
    private Set<HostAndPort> jedisClusterNodes;
    private int timeout;
    private String password;
    private final String root;
    private final ConcurrentHashMap<String, MappingDataListener> mappingDataListenerMap = new ConcurrentHashMap<>();
    private SetParams jedisParams = SetParams.setParams();

    public RedisMetadataReport(URL url) {
        super(url);
        timeout = url.getParameter(TIMEOUT_KEY, DEFAULT_TIMEOUT);
        password = url.getPassword();
        this.root = url.getGroup(DEFAULT_ROOT);
        if (url.getParameter(CYCLE_REPORT_KEY, DEFAULT_METADATA_REPORT_CYCLE_REPORT)) {
            // ttl default is twice the cycle-report time
            jedisParams.ex(ONE_DAY_IN_MILLISECONDS * 2);
        }
        if (url.getParameter(CLUSTER_KEY, false)) {
            jedisClusterNodes = new HashSet<>();
            List<URL> urls = url.getBackupUrls();
            for (URL tmpUrl : urls) {
                jedisClusterNodes.add(new HostAndPort(tmpUrl.getHost(), tmpUrl.getPort()));
            }
        } else if (url.getParameter(SENTINEL_KEY, false)) {
            Set<String> sentinels = new HashSet<>();
            List<URL> urls = url.getBackupUrls();
            for (URL tmpUrl : urls) {
                sentinels.add(tmpUrl.getHost() + ":" + tmpUrl.getPort());
            }
            int database = url.getParameter(REDIS_DATABASE_KEY, 0);
            // Note: the master name is hard-coded to "mymaster" and must match the
            // master name the sentinels monitor (spring.data.redis.sentinel.master).
            sentinelPool = new JedisSentinelPool(
                "mymaster", sentinels, new GenericObjectPoolConfig<>(), timeout, password, database);
        } else {
            int database = url.getParameter(REDIS_DATABASE_KEY, 0);
            pool = new JedisPool(new JedisPoolConfig(), url.getHost(), url.getPort(), timeout, password, database);
        }
    }

    @Override
    protected void doStoreProviderMetadata(MetadataIdentifier providerMetadataIdentifier, String serviceDefinitions) {
        this.storeMetadata(providerMetadataIdentifier, serviceDefinitions);
    }

    @Override
    protected void doStoreConsumerMetadata(MetadataIdentifier consumerMetadataIdentifier, String value) {
        this.storeMetadata(consumerMetadataIdentifier, value);
    }

    @Override
    protected void doSaveMetadata(ServiceMetadataIdentifier serviceMetadataIdentifier, URL url) {
        this.storeMetadata(serviceMetadataIdentifier, URL.encode(url.toFullString()));
    }

    @Override
    protected void doRemoveMetadata(ServiceMetadataIdentifier serviceMetadataIdentifier) {
        this.deleteMetadata(serviceMetadataIdentifier);
    }

    @Override
    protected List<String> doGetExportedURLs(ServiceMetadataIdentifier metadataIdentifier) {
        String content = getMetadata(metadataIdentifier);
        if (StringUtils.isEmpty(content)) {
            return Collections.emptyList();
        }
        return new ArrayList<>(Arrays.asList(URL.decode(content)));
    }

    @Override
    protected void doSaveSubscriberData(SubscriberMetadataIdentifier subscriberMetadataIdentifier, String urlListStr) {
        this.storeMetadata(subscriberMetadataIdentifier, urlListStr);
    }

    @Override
    protected String doGetSubscribedURLs(SubscriberMetadataIdentifier subscriberMetadataIdentifier) {
        return this.getMetadata(subscriberMetadataIdentifier);
    }

    @Override
    public String getServiceDefinition(MetadataIdentifier metadataIdentifier) {
        return this.getMetadata(metadataIdentifier);
    }

    private void storeMetadata(BaseMetadataIdentifier metadataIdentifier, String v) {
        if (pool != null) {
            storeMetadataStandalone(metadataIdentifier, v);
        } else if (sentinelPool != null) {
            storeMetadataInSentinel(metadataIdentifier, v);
        } else {
            storeMetadataInCluster(metadataIdentifier, v);
        }
    }

    private void storeMetadataInSentinel(BaseMetadataIdentifier metadataIdentifier, String v) {
        try (Jedis jedisSentinel = sentinelPool.getResource()) {
            jedisSentinel.set(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY), v, jedisParams);
        } catch (Throwable e) {
            String msg =
                "Failed to put " + metadataIdentifier + " to redis sentinel " + v + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private void storeMetadataInCluster(BaseMetadataIdentifier metadataIdentifier, String v) {
        try (JedisCluster jedisCluster =
                 new JedisCluster(jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
            jedisCluster.set(metadataIdentifier.getIdentifierKey() + META_DATA_STORE_TAG, v, jedisParams);
        } catch (Throwable e) {
            String msg =
                "Failed to put " + metadataIdentifier + " to redis cluster " + v + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private void storeMetadataStandalone(BaseMetadataIdentifier metadataIdentifier, String v) {
        try (Jedis jedis = pool.getResource()) {
            jedis.set(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY), v, jedisParams);
        } catch (Throwable e) {
            String msg = "Failed to put " + metadataIdentifier + " to redis " + v + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private void deleteMetadata(BaseMetadataIdentifier metadataIdentifier) {
        if (pool != null) {
            deleteMetadataStandalone(metadataIdentifier);
        } else if (sentinelPool != null) {
            deleteMetadataSentinel(metadataIdentifier);
        } else {
            deleteMetadataInCluster(metadataIdentifier);
        }
    }

    private void deleteMetadataInCluster(BaseMetadataIdentifier metadataIdentifier) {
        try (JedisCluster jedisCluster =
                 new JedisCluster(jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
            jedisCluster.del(metadataIdentifier.getIdentifierKey() + META_DATA_STORE_TAG);
        } catch (Throwable e) {
            String msg = "Failed to delete " + metadataIdentifier + " from redis cluster , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private void deleteMetadataSentinel(BaseMetadataIdentifier metadataIdentifier) {
        try (Jedis jedisSentinel = sentinelPool.getResource()) {
            jedisSentinel.del(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY));
        } catch (Throwable e) {
            String msg = "Failed to delete " + metadataIdentifier + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private void deleteMetadataStandalone(BaseMetadataIdentifier metadataIdentifier) {
        try (Jedis jedis = pool.getResource()) {
            jedis.del(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY));
        } catch (Throwable e) {
            String msg = "Failed to delete " + metadataIdentifier + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private String getMetadata(BaseMetadataIdentifier metadataIdentifier) {
        if (pool != null) {
            return getMetadataStandalone(metadataIdentifier);
        } else if (sentinelPool != null) {
            return getMetadataSentinel(metadataIdentifier);
        } else {
            return getMetadataInCluster(metadataIdentifier);
        }
    }

    private String getMetadataInCluster(BaseMetadataIdentifier metadataIdentifier) {
        try (JedisCluster jedisCluster =
                 new JedisCluster(jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
            return jedisCluster.get(metadataIdentifier.getIdentifierKey() + META_DATA_STORE_TAG);
        } catch (Throwable e) {
            String msg = "Failed to get " + metadataIdentifier + " from redis cluster , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private String getMetadataSentinel(BaseMetadataIdentifier metadataIdentifier) {
        try (Jedis jedisSentinel = sentinelPool.getResource()) {
            return jedisSentinel.get(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY));
        } catch (Throwable e) {
            String msg = "Failed to get " + metadataIdentifier + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private String getMetadataStandalone(BaseMetadataIdentifier metadataIdentifier) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(metadataIdentifier.getUniqueKey(KeyTypeEnum.UNIQUE_KEY));
        } catch (Throwable e) {
            String msg = "Failed to get " + metadataIdentifier + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    /**
     * Store class and application names using Redis hashes
     * key: default 'dubbo:mapping'
     * field: class (serviceInterface)
     * value: application_names
     * @param serviceInterface field(class)
     * @param defaultMappingGroup  {@link ServiceNameMapping#DEFAULT_MAPPING_GROUP}
     * @param newConfigContent new application_names
     * @param ticket previous application_names
     * @return
     */
    @Override
    public boolean registerServiceAppMapping(
        String serviceInterface, String defaultMappingGroup, String newConfigContent, Object ticket) {
        try {
            if (null != ticket && !(ticket instanceof String)) {
                throw new IllegalArgumentException("redis publishConfigCas requires String type ticket");
            }
            String pathKey = buildMappingKey(defaultMappingGroup);

            return storeMapping(pathKey, serviceInterface, newConfigContent, (String) ticket);
        } catch (Exception e) {
            logger.warn(TRANSPORT_FAILED_RESPONSE, "", "", "redis publishConfigCas failed.", e);
            return false;
        }
    }

    private boolean storeMapping(String key, String field, String value, String ticket) {
        if (pool != null) {
            return storeMappingStandalone(key, field, value, ticket);
        } else if (sentinelPool != null) {
            return storeMappingSentinel(key, field, value, ticket);
        } else {
            return storeMappingInCluster(key, field, value, ticket);
        }
    }

    /**
     * use 'watch' to implement cas.
     * Find information about slot distribution by key.
     */
    private boolean storeMappingInCluster(String key, String field, String value, String ticket) {
        try (JedisCluster jedisCluster =
                 new JedisCluster(jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
            Jedis jedis = new Jedis(jedisCluster.getConnectionFromSlot(JedisClusterCRC16.getSlot(key)));
            jedis.watch(key);
            String oldValue = jedis.hget(key, field);
            if (null == oldValue || null == ticket || oldValue.equals(ticket)) {
                Transaction transaction = jedis.multi();
                transaction.hset(key, field, value);
                List<Object> result = transaction.exec();
                if (null != result) {
                    jedisCluster.publish(buildPubSubKey(), field);
                    return true;
                }
            } else {
                jedis.unwatch();
            }
            jedis.close();
        } catch (Throwable e) {
            String msg = "Failed to put " + key + ":" + field + " to redis " + value + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
        return false;
    }

    /**
     * use 'watch' to implement cas.
     */
    private boolean storeMappingSentinel(String key, String field, String value, String ticket) {
        try (Jedis jedisSentinel = sentinelPool.getResource()) {
            jedisSentinel.watch(key);
            String oldValue = jedisSentinel.hget(key, field);
            if (null == oldValue || null == ticket || oldValue.equals(ticket)) {
                Transaction transaction = jedisSentinel.multi();
                transaction.hset(key, field, value);
                List<Object> result = transaction.exec();
                if (null != result) {
                    jedisSentinel.publish(buildPubSubKey(), field);
                    return true;
                }
            }
            jedisSentinel.unwatch();
        } catch (Throwable e) {
            String msg = "Failed to put " + key + ":" + field + " to redis " + value + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
        return false;
    }

    /**
     * use 'watch' to implement cas.
     */
    private boolean storeMappingStandalone(String key, String field, String value, String ticket) {
        try (Jedis jedis = pool.getResource()) {
            jedis.watch(key);
            String oldValue = jedis.hget(key, field);
            if (null == oldValue || null == ticket || oldValue.equals(ticket)) {
                Transaction transaction = jedis.multi();
                transaction.hset(key, field, value);
                List<Object> result = transaction.exec();
                if (null != result) {
                    jedis.publish(buildPubSubKey(), field);
                    return true;
                }
            }
            jedis.unwatch();
        } catch (Throwable e) {
            String msg = "Failed to put " + key + ":" + field + " to redis " + value + ", cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
        return false;
    }

    /**
     * build mapping key
     * @param defaultMappingGroup {@link ServiceNameMapping#DEFAULT_MAPPING_GROUP}
     * @return mapping key composed of root and group
     */
    private String buildMappingKey(String defaultMappingGroup) {
        return this.root + GROUP_CHAR_SEPARATOR + defaultMappingGroup;
    }

    /**
     * build pub/sub key
     */
    private String buildPubSubKey() {
        return buildMappingKey(DEFAULT_MAPPING_GROUP) + GROUP_CHAR_SEPARATOR + QUEUES_KEY;
    }

    /**
     * get content and use content to complete cas
     * @param serviceKey class
     * @param group {@link ServiceNameMapping#DEFAULT_MAPPING_GROUP}
     */
    @Override
    public ConfigItem getConfigItem(String serviceKey, String group) {
        String key = buildMappingKey(group);
        String content = getMappingData(key, serviceKey);

        return new ConfigItem(content, content);
    }

    /**
     * get current application_names
     */
    private String getMappingData(String key, String field) {
        if (pool != null) {
            return getMappingDataStandalone(key, field);
        } else if (sentinelPool != null) {
            return getMappingDataSentinel(key, field);
        } else {
            return getMappingDataInCluster(key, field);
        }
    }

    private String getMappingDataInCluster(String key, String field) {
        try (JedisCluster jedisCluster =
                 new JedisCluster(jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
            return jedisCluster.hget(key, field);
        } catch (Throwable e) {
            String msg = "Failed to get " + key + ":" + field + " from redis cluster , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private String getMappingDataSentinel(String key, String field) {
        try (Jedis jedisSentinel = sentinelPool.getResource()) {
            return jedisSentinel.hget(key, field);
        } catch (Throwable e) {
            String msg = "Failed to get " + key + ":" + field + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    private String getMappingDataStandalone(String key, String field) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.hget(key, field);
        } catch (Throwable e) {
            String msg = "Failed to get " + key + ":" + field + " from redis , cause: " + e.getMessage();
            logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            throw new RpcException(msg, e);
        }
    }

    /**
     * Remove a listener. When no listeners remain, the subscriber thread is shut down.
     */
    @Override
    public void removeServiceAppMappingListener(String serviceKey, MappingListener listener) {
        MappingDataListener mappingDataListener = mappingDataListenerMap.get(buildPubSubKey());
        if (null != mappingDataListener) {
            NotifySub notifySub = mappingDataListener.getNotifySub();
            notifySub.removeListener(serviceKey, listener);
            if (notifySub.isEmpty()) {
                mappingDataListener.shutdown();
            }
        }
    }

    /**
     * Start a thread and subscribe to {@link this#buildPubSubKey()}.
     * Notify {@link MappingListener} if there is a change in the 'application_names' message.
     */
    @Override
    public Set<String> getServiceAppMapping(String serviceKey, MappingListener listener, URL url) {
        MappingDataListener mappingDataListener =
            ConcurrentHashMapUtils.computeIfAbsent(mappingDataListenerMap, buildPubSubKey(), k -> {
                MappingDataListener dataListener = new MappingDataListener(buildPubSubKey());
                dataListener.start();
                return dataListener;
            });
        mappingDataListener.getNotifySub().addListener(serviceKey, listener);
        return this.getServiceAppMapping(serviceKey, url);
    }

    @Override
    public Set<String> getServiceAppMapping(String serviceKey, URL url) {
        String key = buildMappingKey(DEFAULT_MAPPING_GROUP);
        return getAppNames(getMappingData(key, serviceKey));
    }

    @Override
    public MetadataInfo getAppMetadata(SubscriberMetadataIdentifier identifier, Map<String, String> instanceMetadata) {
        String content = this.getMetadata(identifier);
        return JsonUtils.toJavaObject(content, MetadataInfo.class);
    }

    @Override
    public void publishAppMetadata(SubscriberMetadataIdentifier identifier, MetadataInfo metadataInfo) {
        this.storeMetadata(identifier, metadataInfo.getContent());
    }

    @Override
    public void unPublishAppMetadata(SubscriberMetadataIdentifier identifier, MetadataInfo metadataInfo) {
        this.deleteMetadata(identifier);
    }

    // for test
    public MappingDataListener getMappingDataListener() {
        return mappingDataListenerMap.get(buildPubSubKey());
    }

    /**
     * Listen for changes in the 'application_names' message and notify the listener.
     */
    class NotifySub extends JedisPubSub {

        private final Map<String, Set<MappingListener>> listeners = new ConcurrentHashMap<>();

        public void addListener(String key, MappingListener listener) {
            Set<MappingListener> listenerSet = listeners.computeIfAbsent(key, k -> new ConcurrentHashSet<>());
            listenerSet.add(listener);
        }

        public void removeListener(String serviceKey, MappingListener listener) {
            Set<MappingListener> listenerSet = this.listeners.get(serviceKey);
            if (listenerSet != null) {
                listenerSet.remove(listener);
                if (listenerSet.isEmpty()) {
                    this.listeners.remove(serviceKey);
                }
            }
        }

        public Boolean isEmpty() {
            return this.listeners.isEmpty();
        }

        @Override
        public void onMessage(String key, String msg) {
            logger.info("sub from redis:" + key + " message:" + msg);
            String applicationNames = getMappingData(buildMappingKey(DEFAULT_MAPPING_GROUP), msg);
            MappingChangedEvent mappingChangedEvent = new MappingChangedEvent(msg, getAppNames(applicationNames));
            Set<MappingListener> listenerSet = listeners.get(msg);
            if (listenerSet != null && !listenerSet.isEmpty()) {
                for (MappingListener mappingListener : listenerSet) {
                    mappingListener.onEvent(mappingChangedEvent);
                }
            }
        }

        @Override
        public void onPMessage(String pattern, String key, String msg) {
            onMessage(key, msg);
        }

        @Override
        public void onPSubscribe(String pattern, int subscribedChannels) {
            super.onPSubscribe(pattern, subscribedChannels);
        }
    }

    /**
     * Subscribe application names change message.
     */
    class MappingDataListener extends Thread {

        private String path;

        private final NotifySub notifySub = new NotifySub();
        // for test
        protected volatile boolean running = true;

        public MappingDataListener(String path) {
            this.path = path;
        }

        public NotifySub getNotifySub() {
            return notifySub;
        }

        @Override
        public void run() {
            while (running) {
                if (pool != null) {
                    try (Jedis jedis = pool.getResource()) {
                        jedis.subscribe(notifySub, path);
                    } catch (Throwable e) {
                        String msg = "Failed to subscribe " + path + ", cause: " + e.getMessage();
                        logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
                        throw new RpcException(msg, e);
                    }
                } else if (sentinelPool != null) {
                    try (Jedis jedisSentinel = sentinelPool.getResource()) {
                        jedisSentinel.subscribe(notifySub, path);
                    } catch (Throwable e) {
                        String msg = "Failed to subscribe " + path + ", cause: " + e.getMessage();
                        logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
                        throw new RpcException(msg, e);
                    }
                } else {
                    try (JedisCluster jedisCluster = new JedisCluster(
                        jedisClusterNodes, timeout, timeout, 2, password, new GenericObjectPoolConfig<>())) {
                        jedisCluster.subscribe(notifySub, path);
                    } catch (Throwable e) {
                        String msg = "Failed to subscribe " + path + ", cause: " + e.getMessage();
                        logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
                        throw new RpcException(msg, e);
                    }
                }
            }
        }

        public void shutdown() {
            try {
                running = false;
                notifySub.unsubscribe(path);
            } catch (Throwable e) {
                String msg = "Failed to unsubscribe " + path + ", cause: " + e.getMessage();
                logger.error(TRANSPORT_FAILED_RESPONSE, "", "", msg, e);
            }
        }
    }
}
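For orientation: each operation above dispatches on which pool field (`pool`, `sentinelPool`, or the cluster nodes) is non-null, and which pool gets created is in turn driven by the `sentinel`/`cluster` parameters on the metadata-report URL. That dispatch can be sketched as follows (a hypothetical illustration only; the real constructor additionally builds the corresponding Jedis pool):

```java
import java.util.Map;

public class PoolModeSelector {

    enum Mode { STANDALONE, SENTINEL, CLUSTER }

    // Hypothetical helper mirroring how the metadata-report URL parameters
    // (dubbo.metadata-report.parameters.sentinel / .cluster) could select
    // which of the three Redis code paths is used.
    static Mode select(Map<String, String> urlParameters) {
        if (Boolean.parseBoolean(urlParameters.getOrDefault("sentinel", "false"))) {
            return Mode.SENTINEL;
        }
        if (Boolean.parseBoolean(urlParameters.getOrDefault("cluster", "false"))) {
            return Mode.CLUSTER;
        }
        return Mode.STANDALONE;
    }
}
```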

  • Here is the complete configuration in the ruoyi-common-dubbo configuration file
# Built-in configuration. Do not modify it here; to change it, write the same keys in nacos to override.
dubbo:
  application:
    logger: slf4j
    # Metadata type: local or remote. Remote is used here so other services can read the metadata.
    metadataType: remote
    # Options: interface, instance, all. Default is all, i.e. both interface-level and application-level addresses are registered.
    register-mode: instance
    service-discovery:
      # FORCE_INTERFACE: consume interface-level addresses only, fail if none; single subscription to 2.x addresses
      # APPLICATION_FIRST: decide between interface-level/application-level addresses automatically, double subscription
      # FORCE_APPLICATION: consume application-level addresses only, fail if none; single subscription to 3.x addresses
      migration: FORCE_APPLICATION
  # Registry configuration
  registry:
    address: nacos://${spring.cloud.nacos.server-addr}
    group: DUBBO_GROUP
    username: ${spring.cloud.nacos.username}
    password: ${spring.cloud.nacos.password}
    parameters:
      namespace: ${spring.profiles.active}
  metadata-report:
    address: redis://10.104.209.83:6379
    group: DUBBO_GROUP
    username: dubbo
    password: ${spring.data.redis.password}
    parameters:
      namespace: ${spring.profiles.active}
      database: ${spring.data.redis.database}
      sentinel: true
  # Consumer configuration
  consumer:
    # Result cache (LRU algorithm)
    # Can cause stale data; prefer enabling it per method via annotations
    cache: false
    # Enable validation annotations
    validation: jvalidationNew
    # Retries on failure, not counting the first call; 0 disables retries
    retries: 0
    # Availability check on startup
    check: false

I hard-coded the values here because I was afraid teammates on the same project, not knowing about this, would blindly override the configuration.

As you can see, my dubbo.metadata-report.address hard-codes the IP and port.
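If the hard-coded address ever needs to change, one middle ground (a hypothetical sketch, not part of the framework) is Spring's placeholder-with-default syntax, so the value stays fixed unless a dedicated property such as dubbo.metadata.redis.address is deliberately defined in nacos:

```yaml
# Hypothetical alternative: default to the fixed address, but allow a
# deliberate override via a dedicated property defined in nacos.
dubbo:
  metadata-report:
    address: redis://${dubbo.metadata.redis.address:10.104.209.83:6379}
```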

One more thing to note: while reading the official docs I found that Dubbo's recommended format for multiple addresses is redis://host:port?backup=host1:port1.

Example: dubbo.metadata-report.address=redis://localhost:6379?backup=localhost:6379,localhost:6379

Also, while rewriting the RedisMetadataReport class, I found via a breakpoint (set at line 100) that at runtime the recommended form redis://localhost:6379?backup=localhost:6379,localhost:6379 really is converted into a collection of three URLs, so I suggest following the official format (I did not try a plain comma-separated list).
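To illustrate the expansion the debugger showed, here is a small stand-alone parser (a hypothetical sketch, not Dubbo's actual URL-parsing code) that turns the backup= form into one address per node:

```java
import java.util.ArrayList;
import java.util.List;

public class BackupAddressExpander {

    // Expands "redis://host:port?backup=h1:p1,h2:p2" into one entry per node,
    // mirroring the collection of URLs observed at the breakpoint.
    static List<String> expand(String address) {
        String body = address.substring(address.indexOf("://") + 3);
        int q = body.indexOf('?');
        List<String> nodes = new ArrayList<>();
        nodes.add(q >= 0 ? body.substring(0, q) : body);
        if (q >= 0) {
            for (String param : body.substring(q + 1).split("&")) {
                if (param.startsWith("backup=")) {
                    for (String node : param.substring("backup=".length()).split(",")) {
                        nodes.add(node);
                    }
                }
            }
        }
        return nodes;
    }
}
```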

Also remember to add dubbo.metadata-report.parameters.sentinel=true; only with this parameter set will the Redis sentinel code path be used.

After adding all of this, the project starts up successfully.

(Screenshot: successful startup, 2024-08-07 19:57)

That completes the sentinel setup!

  • A further note on the cluster-mode configuration (if you do not need Redis sentinel, there is no need to modify the RedisMetadataReport class)
# Built-in configuration. Do not modify it here; to change it, write the same keys in nacos to override.
dubbo:
  application:
    logger: slf4j
    # Metadata type: local or remote. Remote is used here so other services can read the metadata.
    metadataType: remote
    # Options: interface, instance, all. Default is all, i.e. both interface-level and application-level addresses are registered.
    register-mode: instance
    service-discovery:
      # FORCE_INTERFACE: consume interface-level addresses only, fail if none; single subscription to 2.x addresses
      # APPLICATION_FIRST: decide between interface-level/application-level addresses automatically, double subscription
      # FORCE_APPLICATION: consume application-level addresses only, fail if none; single subscription to 3.x addresses
      migration: FORCE_APPLICATION
  # Registry configuration
  registry:
    address: nacos://${spring.cloud.nacos.server-addr}
    group: DUBBO_GROUP
    username: ${spring.cloud.nacos.username}
    password: ${spring.cloud.nacos.password}
    parameters:
      namespace: ${spring.profiles.active}
  metadata-report:
    address: redis://localhost:7000?backup=localhost:7001,localhost:7000,localhost:7001
    group: DUBBO_GROUP
    username: dubbo
    password: ${spring.data.redis.password}
    parameters:
      namespace: ${spring.profiles.active}
      database: ${spring.data.redis.database}
      cluster: true
  # Consumer configuration
  consumer:
    # Result cache (LRU algorithm)
    # Can cause stale data; prefer enabling it per method via annotations
    cache: false
    # Enable validation annotations
    validation: jvalidationNew
    # Retries on failure, not counting the first call; 0 disables retries
    retries: 0
    # Availability check on startup
    check: false

Likewise, just add dubbo.metadata-report.parameters.cluster=true so that the cluster code path is used.