Common configuration centers
1. Spring Cloud Config: pair it with Actuator monitoring, but you must predict which beans need to be re-initialized while the program is running, so it is somewhat invasive to the code.
2. Apollo, open-sourced by Ctrip.
3. Nacos.
Nacos
Dynamic configuration management is one of Nacos's three core capabilities. With the dynamic configuration service, we can manage the configuration of all applications and services across every environment in a centralized, dynamic way. Configuration updates take effect without redeploying the applications or services, which greatly improves the system's operability.
Usage:
1. Files that hold the configuration values:
- application.properties
- external config files declared on the startup class; the entry listed first has the highest priority:
@SpringBootApplication
@NacosPropertySource(dataId = "a.yaml") // the dataId extension may be yaml, properties, or json
2. Referencing the configured values: @NacosValue, together with @RefreshScope (enables automatic refresh).
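As a minimal sketch of step 2 (assumptions: the nacos-spring-context dependency is on the classpath; the class name and the `demo.timeout` key are made up for illustration):

```java
import com.alibaba.nacos.api.config.annotation.NacosValue;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    // "demo.timeout" is an illustrative key; autoRefreshed = true makes the
    // field update in place whenever the watched Nacos config file changes
    @NacosValue(value = "${demo.timeout:30}", autoRefreshed = true)
    private int timeout;

    @GetMapping("/timeout")
    public int timeout() {
        return timeout;
    }
}
```

With Spring Cloud Alibaba the equivalent is a plain @Value field inside a @RefreshScope bean.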
Each configuration set (config file) in Nacos is identified by three pieces of metadata:
- namespace: environment isolation.
  To reuse a config file in another namespace, clone it into that namespace and edit the copy.
- group: business isolation.
- dataId: one individual config file. Naming format: ${prefix}-${spring.profiles.active}.${file-extension}, e.g. account-dev.yaml.
Shared configuration
Sometimes an application needs to load several config files. Suppose the dev namespace holds three config files: service-statistics.properties, redis.properties, and jdbc.properties. To override specific properties of a shared dataId within a particular scope (e.g. for one application), configure extension configs for that application.
spring:
  application:
    name: ddd-demo-service
  cloud:
    nacos:
      config:
        server-addr: nacos-2.nacos-headless.public.svc.cluster.local:8848
        namespace: ygjpro-test2
        group: ddd-demo
        ......
        shared-configs[3]:
          data-id: mysql.yaml
          refresh: true
        ......
        extension-configs[3]:
          data-id: mysql.yaml
          group: ddd-demo
          refresh: true
Priority
1. Both kinds of configs are arrays; within the same kind, a larger index means higher priority: extension-configs[2] > extension-configs[1].
2. Across kinds, the priority order is: main config > extension configs (extension-configs) > shared configs (shared-configs).
Nacos as a config center: source code
Nacos 1.4 uses HTTP short connections plus long polling: the client sends an HTTP request, the server holds it and responds when the configuration changes, with a 30s timeout. This is pull mode. (Short polling vs long polling: with short polling the client sends request after request, and each one gets an immediate response.
With long polling, the server does not answer the client's request right away but parks it for a while; if the data changes within that window the server responds immediately, otherwise at the timeout it responds with 304 and the client re-issues the long poll. Long polling is usually implemented with asynchronous responses, i.e. the Servlet AsyncContext mechanism.)
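The hold-until-change-or-timeout behaviour can be modelled with a plain-Java condition wait. This is only a sketch of the semantics, not Nacos's actual AsyncContext-based implementation; the class name and timeouts are illustrative:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Models the server side of long polling: a poll blocks until the config
// version changes or the timeout elapses (Nacos holds for 30s; shorter here).
public class LongPollModel {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private long version = 0;

    // Called by the publisher when the configuration changes.
    public void publish() {
        lock.lock();
        try {
            version++;
            changed.signalAll(); // wake up every held long-poll request
        } finally {
            lock.unlock();
        }
    }

    // Returns true if a change arrived within the timeout ("200 + new config"),
    // false if the hold timed out ("304 Not Modified" in the HTTP analogy).
    public boolean poll(long knownVersion, long timeoutMs) {
        lock.lock();
        try {
            long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
            while (version == knownVersion) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) {
                    return false; // timed out: nothing changed
                }
                try {
                    changed.awaitNanos(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true; // config changed while the request was held
        } finally {
            lock.unlock();
        }
    }
}
```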
Nacos 2.0 replaced the HTTP short-connection long polling with gRPC long connections, so 2.x syncs configuration with a combination of push and pull. Pull: every 5 minutes the client sends a ConfigBatchListenRequest covering all of its CacheData; if any config's md5 has changed, the changed items are returned and the client issues a ConfigQuery request to fetch the live content. Push: when a configuration changes on the server, the server sends a ConfigChangeNotifyRequest to every client holding a long connection to that node.
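Change detection in both directions boils down to comparing content md5 values. A minimal sketch of that check using the JDK's MessageDigest (this mirrors the idea, not Nacos's internal MD5Utils; names are mine):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Detects a config change the way the protocol does: compare the md5 the
// client last saw against the md5 of the server's current content.
public class ConfigMd5 {

    public static String md5Hex(String content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(content.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b)); // two's-complement byte as 2 hex chars
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // True when the client's cached md5 no longer matches the server content.
    public static boolean hasChanged(String clientMd5, String serverContent) {
        return !md5Hex(serverContent).equals(clientMd5);
    }
}
```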
public static void main(String[] args) throws NacosException, InterruptedException {
    String serverAddr = "localhost";
    String dataId = "test";
    String group = "DEFAULT_GROUP";
    Properties properties = new Properties();
    properties.put("serverAddr", serverAddr);
    ConfigService configService = NacosFactory.createConfigService(properties); // 1. create the ConfigService
    String content = configService.getConfig(dataId, group, 5000); // 2. read the current config
    System.out.println(content);
    configService.addListener(dataId, group, new Listener() { // 3. register a change listener
        @Override
        public void receiveConfigInfo(String configInfo) {
            System.out.println("receive:" + configInfo);
        }

        @Override
        public Executor getExecutor() {
            return null;
        }
    });
    Thread.sleep(Long.MAX_VALUE); // keep the process alive so the listener keeps receiving callbacks
}
Client-side config listening
# ConfigFactory
public static ConfigService createConfigService(Properties properties) throws NacosException {
    try {
        Class<?> driverImplClass = Class.forName("com.alibaba.nacos.client.config.NacosConfigService");
        Constructor constructor = driverImplClass.getConstructor(Properties.class); // instantiate NacosConfigService reflectively
        ConfigService vendorImpl = (ConfigService) constructor.newInstance(properties);
        return vendorImpl;
    } catch (Throwable e) {
        throw new NacosException(NacosException.CLIENT_INVALID_PARAM, e);
    }
}
# NacosConfigService constructor
public NacosConfigService(Properties properties) throws NacosException {
    ValidatorUtils.checkInitParam(properties);
    initNamespace(properties);
    this.configFilterChainManager = new ConfigFilterChainManager(properties);
    ServerListManager serverListManager = new ServerListManager(properties);
    serverListManager.start();
    this.worker = new ClientWorker(this.configFilterChainManager, serverListManager, properties); // ClientWorker drives the config sync
    // will be deleted in 2.0 later versions
    agent = new ServerHttpAgent(serverListManager);
}
public ClientWorker(final ConfigFilterChainManager configFilterChainManager, ServerListManager serverListManager,
        final Properties properties) throws NacosException {
    this.configFilterChainManager = configFilterChainManager;
    init(properties);
    agent = new ConfigRpcTransportClient(properties, serverListManager);
    int count = ThreadUtils.getSuitableThreadCount(THREAD_MULTIPLE);
    ScheduledExecutorService executorService = Executors
            .newScheduledThreadPool(Math.max(count, MIN_THREAD_NUM), r -> {
                Thread t = new Thread(r);
                t.setName("com.alibaba.nacos.client.Worker");
                t.setDaemon(true);
                return t;
            });
    agent.setExecutor(executorService);
    agent.start(); // start() uses the injected executor to run the config-sync task in an endless loop
}
# agent.start() -> startInternal()
@Override
public void startInternal() {
    executor.schedule(() -> {
        while (!executor.isShutdown() && !executor.isTerminated()) {
            try {
                // listenExecutebell is a blocking queue: poll() dequeues an element immediately
                // if one is present, otherwise waits up to the timeout and then returns null
                listenExecutebell.poll(5L, TimeUnit.SECONDS);
                if (executor.isShutdown() || executor.isTerminated()) {
                    continue;
                }
                // periodically compare local config against the server and sync any differences
                executeConfigListen();
            }
            ...
        }
    }, 0L, TimeUnit.MILLISECONDS);
}
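The "bell" pattern used above (a blocking poll with a timeout that can be cut short by offering an element into the queue) can be isolated into a small runnable sketch; the class and method names are mine, not Nacos's:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Isolates the listenExecutebell idea: a worker loop runs a sync pass at
// least once per timeout, but can be woken immediately by ringing the bell.
public class SyncBell {
    private static final Object SIGNAL = new Object();
    private final BlockingQueue<Object> bell = new LinkedBlockingQueue<>();

    // Equivalent of notifyListenConfig(): wake the worker without waiting.
    public void ring() {
        bell.offer(SIGNAL);
    }

    // One wait of the worker loop; returns true if woken by the bell,
    // false if the timeout elapsed (a periodic pass).
    public boolean awaitWork(long timeout, TimeUnit unit) {
        try {
            return bell.poll(timeout, unit) != null;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```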
@Override
public void executeConfigListen() {
    ...
    boolean needAllSync = now - lastAllSyncTime >= ALL_SYNC_INTERNAL; // >= 5 minutes since the last full sync
    for (CacheData cache : cacheMap.get().values()) {
        synchronized (cache) {
            // check local listeners consistent.
            if (cache.isSyncWithServer()) {
                // 1. for configs already in sync with the server, re-verify that cacheData and its listeners agree on the md5
                cache.checkListenerMd5();
                if (!needAllSync) {
                    continue; // skip when the last full sync was less than 5 minutes ago
                }
            }
            // collect configs with listeners into listenCachesMap, and those without into removeListenCachesMap
        }
    }
    boolean hasChangedKeys = false;
    // 2. for cacheData that has listeners, send a listen request
    if (!listenCachesMap.isEmpty()) {
        for (Map.Entry<String, List<CacheData>> entry : listenCachesMap.entrySet()) {
            String taskId = entry.getKey();
            Map<String, Long> timestampMap = new HashMap<>(listenCachesMap.size() * 2);
            List<CacheData> listenCaches = entry.getValue();
            for (CacheData cacheData : listenCaches) {
                timestampMap.put(GroupKey.getKeyTenant(cacheData.dataId, cacheData.group, cacheData.tenant),
                        cacheData.getLastModifiedTs().longValue());
            }
            ConfigBatchListenRequest configChangeListenRequest = buildConfigRequest(listenCaches);
            configChangeListenRequest.setListen(true);
            try {
                RpcClient rpcClient = ensureRpcClient(taskId); // ensureRpcClient guarantees all caches of one taskId share a single RpcClient
                // calls the Nacos server on port 9848 to register the listen. Unlike the 1.4 server,
                // the request is NOT held here: it returns immediately, and ConfigChangeBatchListenResponse
                // carries the config items whose md5 has changed
                ConfigChangeBatchListenResponse configChangeBatchListenResponse = (ConfigChangeBatchListenResponse) requestProxy(
                        rpcClient, configChangeListenRequest);
                if (configChangeBatchListenResponse != null && configChangeBatchListenResponse.isSuccess()) {
                    Set<String> changeKeys = new HashSet<>();
                    // handle changed keys, notify listener
                    if (!CollectionUtils.isEmpty(configChangeBatchListenResponse.getChangedConfigs())) {
                        hasChangedKeys = true;
                        for (ConfigChangeBatchListenResponse.ConfigContext changeConfig : configChangeBatchListenResponse
                                .getChangedConfigs()) {
                            String changeKey = GroupKey
                                    .getKeyTenant(changeConfig.getDataId(), changeConfig.getGroup(),
                                            changeConfig.getTenant());
                            changeKeys.add(changeKey);
                            boolean isInitializing = cacheMap.get().get(changeKey).isInitializing();
                            // query the latest config from the server, update the snapshot and the CacheData
                            refreshContentAndCheck(changeKey, !isInitializing);
                        }
                    }
                    // handle the unchanged configs
                    for (CacheData cacheData : listenCaches) {
                        String groupKey = GroupKey
                                .getKeyTenant(cacheData.dataId, cacheData.group, cacheData.getTenant());
                        if (!changeKeys.contains(groupKey)) {
                            // sync: cache data md5 == server md5 && cache data md5 == all listeners' md5
                            synchronized (cacheData) {
                                if (!cacheData.getListeners().isEmpty()) {
                                    Long previousTimesStamp = timestampMap.get(groupKey);
                                    if (previousTimesStamp != null && !cacheData.getLastModifiedTs()
                                            .compareAndSet(previousTimesStamp, System.currentTimeMillis())) {
                                        continue;
                                    }
                                    cacheData.setSyncWithServer(true);
                                }
                            }
                        }
                        cacheData.setInitializing(false);
                    }
                }
            }
            ...
        }
    }
    // 3. remove listeners
    if (!removeListenCachesMap.isEmpty()) {
        ...
    }
    // 4. if any keys changed, immediately trigger executeConfigListen again
    if (hasChangedKeys) {
        notifyListenConfig(); // offering an element into the blocking queue listenExecutebell wakes the sync task
    }
}
private void refreshContentAndCheck(CacheData cacheData, boolean notify) {
    try {
        ConfigResponse response = getServerConfig(cacheData.dataId, cacheData.group, cacheData.tenant, 3000L,
                notify); // fetch the config from the server and save a local snapshot
        cacheData.setEncryptedDataKey(response.getEncryptedDataKey());
        cacheData.setContent(response.getContent());
        if (null != response.getConfigType()) {
            cacheData.setType(response.getConfigType());
        }
        if (notify) {
            LOGGER.info("[{}] [data-received] dataId={}, group={}, tenant={}, md5={}, content={}, type={}",
                    agent.getName(), cacheData.dataId, cacheData.group, cacheData.tenant, cacheData.getMd5(),
                    ContentUtils.truncateContent(response.getContent()), response.getConfigType());
        }
        cacheData.checkListenerMd5(); // notify listeners whose md5 differs from the new content
    } catch (Exception e) {
        LOGGER.error("refresh content and check md5 fail, dataId={}, group={}, tenant={} ", cacheData.dataId,
                cacheData.group, cacheData.tenant, e);
    }
}
Server-side handling of listen requests
public class ConfigChangeBatchListenRequestHandler
        extends RequestHandler<ConfigBatchListenRequest, ConfigChangeBatchListenResponse> {

    @Autowired
    private ConfigChangeListenContext configChangeListenContext;

    @Override
    @TpsControl(pointName = "ConfigListen")
    @Secured(action = ActionTypes.READ, signType = SignType.CONFIG)
    public ConfigChangeBatchListenResponse handle(ConfigBatchListenRequest configChangeListenRequest, RequestMeta meta)
            throws NacosException {
        String connectionId = StringPool.get(meta.getConnectionId());
        String tag = configChangeListenRequest.getHeader(Constants.VIPSERVER_TAG);
        ConfigChangeBatchListenResponse configChangeBatchListenResponse = new ConfigChangeBatchListenResponse();
        for (ConfigBatchListenRequest.ConfigListenContext listenContext : configChangeListenRequest
                .getConfigListenContexts()) {
            String groupKey = GroupKey2
                    .getKey(listenContext.getDataId(), listenContext.getGroup(), listenContext.getTenant());
            groupKey = StringPool.get(groupKey);
            String md5 = StringPool.get(listenContext.getMd5());
            if (configChangeListenRequest.isListen()) {
                configChangeListenContext.addListen(groupKey, md5, connectionId);
                // check whether the config has changed relative to the client's md5; if so, add the groupKey to the response
                boolean isUptoDate = ConfigCacheService.isUptodate(groupKey, md5, meta.getClientIp(), tag);
                if (!isUptoDate) {
                    configChangeBatchListenResponse.addChangeConfig(listenContext.getDataId(), listenContext.getGroup(),
                            listenContext.getTenant());
                }
            } else {
                configChangeListenContext.removeListen(groupKey, connectionId);
            }
        }
        return configChangeBatchListenResponse;
    }
}
Server-side config publishing
Once the Nacos server has updated both its on-disk and in-memory copies of a configuration, it publishes a LocalDataChangeEvent. The server-side handler for LocalDataChangeEvent is RpcConfigChangeNotifier.
@Override
public void onEvent(LocalDataChangeEvent event) {
    String groupKey = event.groupKey;
    boolean isBeta = event.isBeta;
    List<String> betaIps = event.betaIps;
    String[] strings = GroupKey.parseKey(groupKey);
    String dataId = strings[0];
    String group = strings[1];
    String tenant = strings.length > 2 ? strings[2] : "";
    String tag = event.tag;
    configDataChanged(groupKey, dataId, group, tenant, isBeta, betaIps, tag); // fan out the change to listening clients
}
configDataChanged looks up, in the listen context ConfigChangeListenContext, every listening connectionId for the groupKey, then resolves each connectionId to its gRPC long-lived Connection via ConnectionManager. Finally it builds a ConfigChangeNotifyRequest and submits an RpcPushTask to a separate executor, so that other event processing is not blocked.
public void configDataChanged(String groupKey, String dataId, String group, String tenant, boolean isBeta,
        List<String> betaIps, String tag) {
    Set<String> listeners = configChangeListenContext.getListeners(groupKey);
    if (CollectionUtils.isEmpty(listeners)) {
        return;
    }
    int notifyClientCount = 0;
    for (final String client : listeners) {
        Connection connection = connectionManager.getConnection(client);
        if (connection == null) {
            continue;
        }
        ConnectionMeta metaInfo = connection.getMetaInfo();
        // beta ips check
        String clientIp = metaInfo.getClientIp();
        String clientTag = metaInfo.getTag();
        if (isBeta && betaIps != null && !betaIps.contains(clientIp)) {
            continue;
        }
        // tag check
        if (StringUtils.isNotBlank(tag) && !tag.equals(clientTag)) {
            continue;
        }
        ConfigChangeNotifyRequest notifyRequest = ConfigChangeNotifyRequest.build(dataId, group, tenant);
        RpcPushTask rpcPushRetryTask = new RpcPushTask(notifyRequest, 50, client, clientIp, metaInfo.getAppName());
        push(rpcPushRetryTask);
        notifyClientCount++;
    }
    Loggers.REMOTE_PUSH.info("push [{}] clients, groupKey=[{}]", notifyClientCount, groupKey);
}
push() has three branches. If the task has already exceeded the retry limit of 50 attempts, ConnectionManager closes the corresponding long connection. If the connectionId still exists in ConnectionManager, the task is submitted normally, with delay compensation after every failure: delay = failure count × 2 seconds. If the connection no longer exists, nothing is done.
private void push(RpcPushTask retryTask) {
    ConfigChangeNotifyRequest notifyRequest = retryTask.notifyRequest;
    if (retryTask.isOverTimes()) {
        Loggers.REMOTE_PUSH.warn(
                "push callback retry fail over times. dataId={}, group={}, tenant={}, clientId={}, will unregister client.",
                notifyRequest.getDataId(), notifyRequest.getGroup(), notifyRequest.getTenant(),
                retryTask.connectionId);
        connectionManager.unregister(retryTask.connectionId);
    } else if (connectionManager.getConnection(retryTask.connectionId) != null) {
        // first time: delay 0s; second time: delay 2s; third time: delay 4s
        ConfigExecutor.getClientConfigNotifierServiceExecutor()
                .schedule(retryTask, retryTask.tryTimes * 2, TimeUnit.SECONDS);
    } else {
        // client is already offline, ignore task.
    }
}
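The retry policy inside push() can be captured as a tiny pure function. The class name is mine; the constants mirror the 50-attempt limit and the 2-second step from the code above:

```java
// Mirrors the retry policy in push(): attempt n is re-scheduled after n * 2
// seconds, and the connection is unregistered once the retry limit is exceeded.
public class PushRetryPolicy {
    private final int maxRetryTimes;

    public PushRetryPolicy(int maxRetryTimes) {
        this.maxRetryTimes = maxRetryTimes;
    }

    // True once the task has failed more often than the limit allows.
    public boolean isOverTimes(int tryTimes) {
        return tryTimes > maxRetryTimes;
    }

    // Linear delay compensation: first try 0s, second 2s, third 4s, ...
    public long delaySeconds(int tryTimes) {
        return (long) tryTimes * 2;
    }
}
```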