This analysis of RocketMQ is based on rocketmq-4.9.0.
1. Receiving and Processing Requests
In the earlier article on the NameServer startup process, we saw that the NameServer ultimately starts a Netty server that accepts read and write requests. In this article we look at how it receives and processes those requests.
While walking through the NameServer startup source, we saw that the Netty server bootstrap binds a number of ChannelHandlers: one for the handshake, one for heartbeats (idle detection), one for connection management, and one for reads and writes. NettyServerHandler is the one that handles reads and writes.
//TODO: ServerBootstrap is Netty's server bootstrap class
ServerBootstrap childHandler =
this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
.channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 1024)
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.SO_KEEPALIVE, false)
.childOption(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
.childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
.localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline()
.addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME, handshakeHandler)
.addLast(defaultEventExecutorGroup,
encoder,
new NettyDecoder(),
//TODO: heartbeat (idle-state) ChannelHandler
new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
//TODO: connection-management ChannelHandler
connectionManageHandler,
//TODO: read/write ChannelHandler (NettyServerHandler)
serverHandler
);
}
});
So NettyServerHandler is the entry point for our analysis. It is an inner class of NettyRemotingServer:
@Sharable
class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {
@Override
protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
processMessageReceived(ctx, msg);
}
}
Let's step into its implementation:
public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
final RemotingCommand cmd = msg;
if (cmd != null) {
switch (cmd.getType()) {
case REQUEST_COMMAND:
//TODO: handle a request command
processRequestCommand(ctx, cmd);
break;
case RESPONSE_COMMAND:
processResponseCommand(ctx, cmd);
break;
default:
break;
}
}
}
Again, let's step into the implementation:
public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
final int opaque = cmd.getOpaque();
if (pair != null) {
//TODO: build a Runnable; it will be executed once it is submitted to the thread pool
//TODO: don't expand this yet; keep reading, this run is submitted to a thread pool further down
Runnable run = new Runnable() {
@Override
public void run() {
try {
doBeforeRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
final RemotingResponseCallback callback = new RemotingResponseCallback() {
@Override
public void callback(RemotingCommand response) {
doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
if (!cmd.isOnewayRPC()) {
if (response != null) {
response.setOpaque(opaque);
response.markResponseType();
try {
ctx.writeAndFlush(response);
} catch (Throwable e) {
log.error("process request over, but response failed", e);
log.error(cmd.toString());
log.error(response.toString());
}
} else {
}
}
}
};
if (pair.getObject1() instanceof AsyncNettyRequestProcessor) {
AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor)pair.getObject1();
processor.asyncProcessRequest(ctx, cmd, callback);
} else {
NettyRequestProcessor processor = pair.getObject1();
RemotingCommand response = processor.processRequest(ctx, cmd);
callback.callback(response);
}
} catch (Throwable e) {
//TODO: ... catch omitted ...
}
}
};
//TODO: ... some code omitted ...
try {
final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
pair.getObject2().submit(requestTask);
} catch (RejectedExecutionException e) {
//TODO: ... catch omitted ...
}
} else {
//TODO: ... else branch omitted ...
}
}
This method does quite a lot, so let's break it down.
1. Get the Pair object
//TODO: look up the Pair in the processor table by the request code
final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
//TODO: decide whether to use the Pair from the processor table or the default Pair
final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
So what is this Pair object?
In the earlier NameServer startup article, step 4.1 registers a processor: it registers the default processor, DefaultRequestProcessor, wraps it in a Pair, and assigns that Pair to defaultRequestProcessor.
@Override
public void registerDefaultProcessor(NettyRequestProcessor processor, ExecutorService executor) {
//TODO: processor ---> DefaultRequestProcessor
this.defaultRequestProcessor = new Pair<NettyRequestProcessor, ExecutorService>(processor, executor);
}
Back in the code that fetches the Pair, it first looks one up in processorTable by request code. Can it find one?
The answer is no! As we saw when analyzing the NameServer startup source, the NameServer never puts any processor into processorTable; it only creates the default processor DefaultRequestProcessor, so that default is the one that gets used.
During broker startup, by contrast, a large number of processors are registered into processorTable; the NameServer registers none and uses only the default processor (a sketch of that registration call follows below).
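For contrast, here is a rough sketch of the processor registration on the remoting-server side, paraphrased from memory of the 4.9.x NettyRemotingServer#registerProcessor, so treat the exact body as an approximation. The broker calls this once per request code during startup, which is why its processorTable ends up densely populated, while the NameServer never calls it.
// Sketch of NettyRemotingServer#registerProcessor (approximate, not the exact 4.9.0 source)
@Override
public void registerProcessor(int requestCode, NettyRequestProcessor processor, ExecutorService executor) {
    // fall back to the shared public executor when the caller does not supply one
    ExecutorService executorThis = (executor != null) ? executor : this.publicExecutor;
    Pair<NettyRequestProcessor, ExecutorService> pair =
        new Pair<NettyRequestProcessor, ExecutorService>(processor, executorThis);
    this.processorTable.put(requestCode, pair);
}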
2. Build the Runnable. We will skip its body for now; a Runnable like this is bound to be handed to a thread pool, so keep reading.
3. Build a RequestTask, wrap the Runnable above into it, and submit it to the thread pool (a sketch of RequestTask follows a little further down):
final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
pair.getObject2().submit(requestTask);
What is pair.getObject2()? Presumably a thread pool. Let's check the structure of the Pair class:
public class Pair<T1, T2> {
    private T1 object1;
    private T2 object2;

    public Pair(T1 object1, T2 object2) {
        this.object1 = object1;
        this.object2 = object2;
    }
    // getters and setters omitted
}
Now look back at where the Pair is created (that is, where the default processor defaultRequestProcessor is registered): what are T1 and T2?
@Override
public void registerDefaultProcessor(NettyRequestProcessor processor, ExecutorService executor) {
this.defaultRequestProcessor = new Pair<NettyRequestProcessor, ExecutorService>(processor, executor);
}
- Object1: a processor; here it is the default processor DefaultRequestProcessor
- Object2: a thread pool (an ExecutorService)
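Before moving on: RequestTask itself, the wrapper submitted in step 3, is just a thin Runnable wrapper. A simplified sketch follows (not the exact source; the real class also records a creation timestamp and a stop flag):
// Simplified sketch of RequestTask (approximate, not the exact source):
// it carries the channel and the original command alongside the Runnable
// and delegates run() to it, so the thread pool ends up running the Runnable built above.
public class RequestTask implements Runnable {
    private final Runnable runnable;
    private final Channel channel;
    private final RemotingCommand request;

    public RequestTask(final Runnable runnable, final Channel channel, final RemotingCommand request) {
        this.runnable = runnable;
        this.channel = channel;
        this.request = request;
    }

    @Override
    public void run() {
        this.runnable.run();
    }
}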
4. Once the task has been submitted to the thread pool, the Runnable built in step 2 starts to run:
Runnable run = new Runnable() {
@Override
public void run() {
try {
//TODO: the callback built earlier is elided here
if (pair.getObject1() instanceof AsyncNettyRequestProcessor) {
AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor)pair.getObject1();
processor.asyncProcessRequest(ctx, cmd, callback);
} else {
NettyRequestProcessor processor = pair.getObject1();
RemotingCommand response = processor.processRequest(ctx, cmd);
callback.callback(response);
}
} catch (Throwable e) {
//TODO: ... catch omitted ...
}
}
};
We now know that Object1 of the Pair is DefaultRequestProcessor. So does execution take the if branch or the else branch? The class hierarchy of DefaultRequestProcessor tells us:
//TODO: it extends AsyncNettyRequestProcessor
public class DefaultRequestProcessor extends AsyncNettyRequestProcessor implements NettyRequestProcessor {
//......
}
So it takes the if branch: the default processor DefaultRequestProcessor is cast to AsyncNettyRequestProcessor. Let's follow processor.asyncProcessRequest(...):
public abstract class AsyncNettyRequestProcessor implements NettyRequestProcessor {
public void asyncProcessRequest(ChannelHandlerContext ctx, RemotingCommand request, RemotingResponseCallback responseCallback) throws Exception {
RemotingCommand response = processRequest(ctx, request);
responseCallback.callback(response);
}
}
Following processRequest(..) leads to the implementation class DefaultRequestProcessor (the NameServer's default processor):
@Override
public RemotingCommand processRequest(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
//TODO: ... some code omitted ...
switch (request.getCode()) {
//TODO: ... some cases omitted ...
//TODO: register a broker; when a broker starts it sends a registration request to the NameServer and ends up here
case RequestCode.REGISTER_BROKER:
Version brokerVersion = MQVersion.value2Version(request.getVersion());
if (brokerVersion.ordinal() >= MQVersion.Version.V3_0_11.ordinal()) {
return this.registerBrokerWithFilterServer(ctx, request);
} else {
return this.registerBroker(ctx, request);
}
//TODO: broker unregistration
case RequestCode.UNREGISTER_BROKER:
return this.unregisterBroker(ctx, request);
//TODO: query topic route info
case RequestCode.GET_ROUTEINFO_BY_TOPIC:
return this.getRouteInfoByTopic(ctx, request);
//TODO: query broker cluster info
case RequestCode.GET_BROKER_CLUSTER_INFO:
return this.getBrokerClusterInfo(ctx, request);
//TODO: ... some cases omitted ...
default:
break;
}
return null;
}
At this point the request has truly arrived, and what follows is the concrete handling logic. Three examples:
- code = RequestCode.REGISTER_BROKER: the broker registration request. When a broker starts, it sends this to the NameServer and execution lands here. Note: the broker re-sends this registration (as a heartbeat) every 30 seconds, and besides the broker information it also registers the topic route information. (The broker maintains its topic configuration internally and carries it along with each heartbeat, which is how topic routes get registered; see the sketch after this list.)
- code = RequestCode.UNREGISTER_BROKER: sent to the NameServer before a broker shuts down, to remove the broker's information.
- code = RequestCode.GET_ROUTEINFO_BY_TOPIC: the route query request. The result contains the topic, its queues, and the broker information for the broker each queue belongs to. When a producer sends a message, this is the data used to decide which broker and which queue the message goes to.
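For reference, the 30-second heartbeat mentioned above is driven by a scheduled task set up when the broker starts. A rough sketch of that scheduling in BrokerController, paraphrased from the 4.9.x source, so treat the exact arguments and period clamping as approximate:
// Rough sketch of the periodic registration scheduled in BrokerController#start()
// (paraphrased, not the exact source). registerNameServerPeriod defaults to 30s.
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        try {
            // re-register the broker (and its topic config) with every NameServer
            BrokerController.this.registerBrokerAll(true, false, brokerConfig.isForceRegister());
        } catch (Throwable e) {
            log.error("registerBrokerAll Exception", e);
        }
    }
}, 1000 * 10, Math.max(10000, Math.min(brokerConfig.getRegisterNameServerPeriod(), 60000)), TimeUnit.MILLISECONDS);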
That covers, from the source code's point of view, how the NameServer receives and processes read/write requests.
We know that when a broker starts it must register itself with the NameServer, so next let's take a quick look at what happens during broker registration.
2. The Broker Registers with the NameServer
2.1 The Broker Sends the Registration Request
Note: you can skip this part and go straight to section 2.2, where the NameServer handles the registration request. We will not dig into the broker's internals here (that is for the next article); we only look at the registration-related code in BrokerController.
//TODO: topicConfigWrapper carries the topic-related configuration
private void doRegisterBrokerAll(boolean checkOrderConfig, boolean oneway,
TopicConfigSerializeWrapper topicConfigWrapper) {
//TODO: register the broker with the NameServer
List<RegisterBrokerResult> registerBrokerResultList = this.brokerOuterAPI.registerBrokerAll(
//TODO: broker cluster name
this.brokerConfig.getBrokerClusterName(),
//TODO: broker address
this.getBrokerAddr(),
//TODO: broker name
this.brokerConfig.getBrokerName(),
//TODO: broker id
this.brokerConfig.getBrokerId(),
//TODO: HA server address
this.getHAServerAddr(),
//TODO: topic route configuration
topicConfigWrapper,
this.filterServerManager.buildNewFilterServerList(),
oneway,
this.brokerConfig.getRegisterBrokerTimeoutMills(),
this.brokerConfig.isCompressedRegister());
//TODO: ... some code omitted ...
}
It is not hard to see that BrokerConfig holds the broker-related configuration:
- broker cluster name (default: DefaultCluster)
- broker address (172.34.91.144:10911)
- broker name (broker-a, which I set in broker.conf)
- broker id (brokerId = 0 means master)
- HA server address (172.34.91.141:10912; the port is the listen port 10911 + 1)
- topic route information
The broker information is packed into a RegisterBrokerRequestHeader, and the topic route information into a RegisterBrokerBody:
final RegisterBrokerRequestHeader requestHeader = new RegisterBrokerRequestHeader();
requestHeader.setBrokerAddr(brokerAddr);
requestHeader.setBrokerId(brokerId);
requestHeader.setBrokerName(brokerName);
requestHeader.setClusterName(clusterName);
requestHeader.setHaServerAddr(haServerAddr);
requestHeader.setCompressed(compressed);
//TODO: the topic route information goes into the request body
RegisterBrokerBody requestBody = new RegisterBrokerBody();
requestBody.setTopicConfigSerializeWrapper(topicConfigWrapper);
Some of these values are exactly what we configured in broker.conf.
Let's keep going deeper:
private RegisterBrokerResult registerBroker(
final String namesrvAddr,
final boolean oneway,
final int timeoutMills,
final RegisterBrokerRequestHeader requestHeader,
final byte[] body
) throws RemotingCommandException, MQBrokerException, RemotingConnectException, RemotingSendRequestException, RemotingTimeoutException,
InterruptedException {
//TODO: build the register-broker remoting command; the request data is packed into it
RemotingCommand request = RemotingCommand.createRequestCommand(RequestCode.REGISTER_BROKER, requestHeader);
request.setBody(body);
//TODO: ... some code omitted ...
//TODO: send the registration request to the NameServer; the nameserver address was configured in broker.conf
RemotingCommand response = this.remotingClient.invokeSync(namesrvAddr, request, timeoutMills);
//TODO: ... some code omitted ...
}
This RemotingCommand wraps the data we want to send and also sets code = RequestCode.REGISTER_BROKER; we will see this code again when the NameServer receives the request. Let's step into the implementation:
@Override
public RemotingCommand invokeSync(String addr, final RemotingCommand request, long timeoutMillis)
throws InterruptedException, RemotingConnectException, RemotingSendRequestException, RemotingTimeoutException {
//TODO: build a channel from the NameServer address (127.0.0.1:9876)
final Channel channel = this.getAndCreateChannel(addr);
if (channel != null && channel.isActive()) {
try {
//TODO: ... some code omitted ...
//TODO: make the call
RemotingCommand response = this.invokeSyncImpl(channel, request, timeoutMillis - costTime);
return response;
} catch (RemotingSendRequestException e) {
//TODO: ... catch omitted ...
}
}
//TODO: ... the else branch and the rest of the method are omitted ...
}
- Here a Channel is built from the NameServer address (127.0.0.1:9876); the broker can then write data into that channel, and the NameServer, as the server side, receives the request.
Let's step in one more level:
public RemotingCommand invokeSyncImpl(final Channel channel, final RemotingCommand request,
final long timeoutMillis)
throws InterruptedException, RemotingSendRequestException, RemotingTimeoutException {
final int opaque = request.getOpaque();
try {
final ResponseFuture responseFuture = new ResponseFuture(channel, opaque, timeoutMillis, null, null);
this.responseTable.put(opaque, responseFuture);
final SocketAddress addr = channel.remoteAddress();
//TODO: write the request into the channel built earlier; the NameServer side is waiting to receive it
channel.writeAndFlush(request).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture f) throws Exception {
if (f.isSuccess()) {
responseFuture.setSendRequestOK(true);
return;
} else {
responseFuture.setSendRequestOK(false);
}
responseTable.remove(opaque);
responseFuture.setCause(f.cause());
responseFuture.putResponse(null);
log.warn("send a request command to channel <" + addr + "> failed.");
}
});
    //TODO: waiting for the response via the ResponseFuture is omitted here (see the sketch below)
} finally {
    //TODO: cleanup of the responseTable entry is omitted
}
}
The request is written into the Channel here, so the NameServer can now receive it.
At this point the broker has issued its request and is waiting for the NameServer to handle it.
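The part elided inside the try block above is where the synchronous call actually blocks: it waits on the ResponseFuture until the NameServer's response for this opaque arrives or the timeout expires. A rough sketch of that elided tail, paraphrased rather than the exact 4.9.0 source:
// Approximate tail of invokeSyncImpl (paraphrased): block on the ResponseFuture,
// translate a missing response into a timeout or send failure, and always clean up responseTable.
RemotingCommand responseCommand = responseFuture.waitResponse(timeoutMillis);
if (null == responseCommand) {
    if (responseFuture.isSendRequestOK()) {
        // the request went out, but no response came back in time
        throw new RemotingTimeoutException(RemotingHelper.parseSocketAddressAddr(addr), timeoutMillis,
            responseFuture.getCause());
    } else {
        // the request never left successfully
        throw new RemotingSendRequestException(RemotingHelper.parseSocketAddressAddr(addr), responseFuture.getCause());
    }
}
return responseCommand;
// ...and the finally block removes this opaque from responseTable.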
2.2 The NameServer Receives the Registration Request
From section 1 we know that when the NameServer receives a request, the only processor that handles it is the default DefaultRequestProcessor, which brings us back to the method we saw earlier:
@Override
public RemotingCommand processRequest(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
//TODO: ... some code omitted ...
switch (request.getCode()) {
//TODO: ... some cases omitted ...
//TODO: register a broker; when a broker starts it sends a registration request to the NameServer and ends up here
case RequestCode.REGISTER_BROKER:
Version brokerVersion = MQVersion.value2Version(request.getVersion());
if (brokerVersion.ordinal() >= MQVersion.Version.V3_0_11.ordinal()) {
//TODO: we are on version 4.9.0, so this branch is taken
return this.registerBrokerWithFilterServer(ctx, request);
} else {
return this.registerBroker(ctx, request);
}
//TODO: broker unregistration
case RequestCode.UNREGISTER_BROKER:
return this.unregisterBroker(ctx, request);
//TODO: query topic route info
case RequestCode.GET_ROUTEINFO_BY_TOPIC:
return this.getRouteInfoByTopic(ctx, request);
//TODO: query broker cluster info
case RequestCode.GET_BROKER_CLUSTER_INFO:
return this.getBrokerClusterInfo(ctx, request);
//TODO: ... some cases omitted ...
default:
break;
}
return null;
}
The request code decides which branch runs. In step 2.1, when the RemotingCommand was built, code was set to RequestCode.REGISTER_BROKER, so the broker-registration branch is taken.
Now let's look at the registration logic:
public RemotingCommand registerBrokerWithFilterServer(ChannelHandlerContext ctx, RemotingCommand request)
throws RemotingCommandException {
final RemotingCommand response = RemotingCommand.createResponseCommand(RegisterBrokerResponseHeader.class);
final RegisterBrokerResponseHeader responseHeader = (RegisterBrokerResponseHeader) response.readCustomHeader();
//TODO: decode the custom header that carries the broker registration information
final RegisterBrokerRequestHeader requestHeader =
(RegisterBrokerRequestHeader) request.decodeCommandCustomHeader(RegisterBrokerRequestHeader.class);
//TODO: ... some code omitted ...
RegisterBrokerBody registerBrokerBody = new RegisterBrokerBody();
//TODO: request.getBody() != null is false here, so that part of the code is omitted
//TODO: get the route info manager and register the broker
RegisterBrokerResult result = this.namesrvController.getRouteInfoManager().registerBroker(
requestHeader.getClusterName(),
requestHeader.getBrokerAddr(),
requestHeader.getBrokerName(),
requestHeader.getBrokerId(),
requestHeader.getHaServerAddr(),
registerBrokerBody.getTopicConfigSerializeWrapper(),
registerBrokerBody.getFilterServerList(),
ctx.channel());
//TODO: ... some code omitted ...
return response;
}
Before looking at the registration logic itself, let's look at the structure of the route info manager, RouteInfoManager.
Broker registrations and topic route information are all stored in this class's maps.
public class RouteInfoManager {
private final HashMap<String/* topic */, List<QueueData>> topicQueueTable;
private final HashMap<String/* brokerName */, BrokerData> brokerAddrTable;
private final HashMap<String/* clusterName */, Set<String/* brokerName */>> clusterAddrTable;
private final HashMap<String/* brokerAddr */, BrokerLiveInfo> brokerLiveTable;
private final HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable;
//TODO:.........
}
- topicQueueTable: key is the topic, value is the list of queues (QueueData) under that topic
- brokerAddrTable: key is the brokerName, value is the broker information (addresses, name, cluster)
- clusterAddrTable: key is the cluster name, value is the names of all brokers in that cluster
- brokerLiveTable: key is the broker address, value is the live (heartbeat) information of that broker
Now, the registration logic:
public RegisterBrokerResult registerBroker(
final String clusterName,
final String brokerAddr,
final String brokerName,
final long brokerId,
final String haServerAddr,
final TopicConfigSerializeWrapper topicConfigWrapper,
final List<String> filterServerList,
final Channel channel) {
RegisterBrokerResult result = new RegisterBrokerResult();
try {
try {
this.lock.writeLock().lockInterruptibly();
Set<String> brokerNames = this.clusterAddrTable.get(clusterName);
if (null == brokerNames) {
brokerNames = new HashSet<String>();
//TODO: save the cluster info
this.clusterAddrTable.put(clusterName, brokerNames);
}
brokerNames.add(brokerName);
boolean registerFirst = false;
BrokerData brokerData = this.brokerAddrTable.get(brokerName);
if (null == brokerData) {
registerFirst = true;
brokerData = new BrokerData(clusterName, brokerName, new HashMap<Long, String>());
//TODO: save the broker info
this.brokerAddrTable.put(brokerName, brokerData);
}
Map<Long, String> brokerAddrsMap = brokerData.getBrokerAddrs();
//Switch slave to master: first remove <1, IP:PORT> in namesrv, then add <0, IP:PORT>
//The same IP:PORT must only have one record in brokerAddrTable
Iterator<Entry<Long, String>> it = brokerAddrsMap.entrySet().iterator();
while (it.hasNext()) {
Entry<Long, String> item = it.next();
if (null != brokerAddr && brokerAddr.equals(item.getValue()) && brokerId != item.getKey()) {
it.remove();
}
}
String oldAddr = brokerData.getBrokerAddrs().put(brokerId, brokerAddr);
registerFirst = registerFirst || (null == oldAddr);
if (null != topicConfigWrapper
&& MixAll.MASTER_ID == brokerId) {
if (this.isBrokerTopicConfigChanged(brokerAddr, topicConfigWrapper.getDataVersion())
|| registerFirst) {
ConcurrentMap<String, TopicConfig> tcTable =
topicConfigWrapper.getTopicConfigTable();
if (tcTable != null) {
for (Map.Entry<String, TopicConfig> entry : tcTable.entrySet()) {
//TODO: save the topic route info
this.createAndUpdateQueueData(brokerName, entry.getValue());
}
}
}
}
//TODO: save the broker live (heartbeat) info
BrokerLiveInfo prevBrokerLiveInfo = this.brokerLiveTable.put(brokerAddr,
new BrokerLiveInfo(
System.currentTimeMillis(),
topicConfigWrapper.getDataVersion(),
channel,
haServerAddr));
if (null == prevBrokerLiveInfo) {
log.info("new broker registered, {} HAServer: {}", brokerAddr, haServerAddr);
}
if (filterServerList != null) {
if (filterServerList.isEmpty()) {
this.filterServerTable.remove(brokerAddr);
} else {
this.filterServerTable.put(brokerAddr, filterServerList);
}
}
if (MixAll.MASTER_ID != brokerId) {
String masterAddr = brokerData.getBrokerAddrs().get(MixAll.MASTER_ID);
if (masterAddr != null) {
BrokerLiveInfo brokerLiveInfo = this.brokerLiveTable.get(masterAddr);
if (brokerLiveInfo != null) {
result.setHaServerAddr(brokerLiveInfo.getHaServerAddr());
result.setMasterAddr(masterAddr);
}
}
}
} finally {
this.lock.writeLock().unlock();
}
} catch (Exception e) {
log.error("registerBroker Exception", e);
}
return result;
}
To summarize:
- The broker cluster information is saved into clusterAddrTable: key is the cluster name (DefaultCluster in my setup), value is the set of brokerNames in that cluster (broker-a in my case).
- The broker information is saved into brokerAddrTable: key is the brokerName, value is a BrokerData object.
public class BrokerData implements Comparable<BrokerData> {
private String cluster;
private String brokerName;
//TODO: broker address info: key = brokerId, value = broker address
private HashMap<Long/* brokerId */, String/* broker address */> brokerAddrs;
}
If a broker is deployed with master and slaves, they all share the same brokerName; the master has brokerId = 0 and each slave has a non-zero brokerId (see the illustrative snippet below).
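As a purely hypothetical illustration (the slave address below is made up), a master/slave pair named broker-a would end up in brokerAddrTable roughly like this:
// Hypothetical illustration only: what the brokerAddrTable entry for broker-a might hold
// (run inside RouteInfoManager; BrokerData's constructor is the one used in registerBroker() above).
HashMap<Long, String> brokerAddrs = new HashMap<>();
brokerAddrs.put(0L, "172.34.91.144:10911"); // brokerId 0 = master (MixAll.MASTER_ID)
brokerAddrs.put(1L, "172.34.91.145:10911"); // non-zero brokerId = slave
brokerAddrTable.put("broker-a", new BrokerData("DefaultCluster", "broker-a", brokerAddrs));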
- The topic route information is saved into topicQueueTable: key is the topic name, value is the list of QueueData objects.
public class QueueData implements Comparable<QueueData> {
//TODO:brokerName
private String brokerName;
//TODO: number of (logical) read queues
private int readQueueNums;
//TODO: number of (logical) write queues
private int writeQueueNums;
//TODO: permission flags (read/write)
private int perm;
private int topicSysFlag;
}
- The broker heartbeat information is saved into brokerLiveTable: key is the broker address, value is a BrokerLiveInfo object.
class BrokerLiveInfo {
//TODO: last update timestamp; refreshed on every heartbeat
private long lastUpdateTimestamp;
//TODO: data version
private DataVersion dataVersion;
private Channel channel;
//TODO: HA server address (the address slaves use to sync from this master)
private String haServerAddr;
}
Earlier, when analyzing the NameServer startup process, we saw that it starts a scheduled task to scan for inactive brokers; what that task scans is exactly this brokerLiveTable.
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
NamesrvController.this.routeInfoManager.scanNotActiveBroker();
}
}, 5, 10, TimeUnit.SECONDS);
It checks each BrokerLiveInfo's last heartbeat timestamp: if the most recent heartbeat was more than 2 minutes ago, the broker is assumed to be down, so it is removed from brokerLiveTable and also purged from the other registration tables. A rough sketch of that scan follows.
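A rough sketch of that scan, paraphrased from RouteInfoManager#scanNotActiveBroker in 4.9.x (treat the details as approximate):
// Rough sketch of RouteInfoManager#scanNotActiveBroker (paraphrased, not the exact source).
// BROKER_CHANNEL_EXPIRED_TIME is 2 minutes; a broker whose last heartbeat is older than that is evicted.
public void scanNotActiveBroker() {
    Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
    while (it.hasNext()) {
        Entry<String, BrokerLiveInfo> next = it.next();
        long last = next.getValue().getLastUpdateTimestamp();
        if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
            RemotingUtil.closeChannel(next.getValue().getChannel());
            it.remove();
            log.warn("The broker channel expired, {} {}ms", next.getKey(), BROKER_CHANNEL_EXPIRED_TIME);
            // also purge this broker from the other tables (brokerAddrTable, clusterAddrTable, topicQueueTable)
            this.onChannelDestroy(next.getKey(), next.getValue().getChannel());
        }
    }
}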
At this point the broker's registration with the NameServer is complete.
3. Summary
This article analyzed how the NameServer handles requests. The NameServer uses Netty for communication; the ChannelHandler that handles request messages from brokers, consumers, and producers is NettyServerHandler, and the actual processing ends up in the DefaultRequestProcessor object, whose processRequest method handles many kinds of requests. We used broker registration as the example and walked through the registration flow.
Registering/unregistering a broker, fetching topic routes, and fetching broker cluster information are all ultimately handled in the RouteInfoManager class, which holds several very important registration tables:
- topicQueueTable: key is the topic, value is the list of queues (QueueData) under that topic
- brokerAddrTable: key is the brokerName, value is the broker information (addresses, name, cluster)
- clusterAddrTable: key is the cluster name, value is the names of all brokers in that cluster
- brokerLiveTable: key is the broker address, value is the live (heartbeat) information of that broker
These member variables are why the NameServer is called a registry: "registering" or "unregistering" a broker just means putting broker information into, or removing it from, these HashMaps, and fetching a topic route means reading broker/messageQueue information out of topicQueueTable.
The NameServer's so-called "registration", "discovery", and "heartbeat" are all just operations on these HashMap fields of RouteInfoManager, as illustrated by the sketch below.
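As an illustration of the "discovery" half, here is a simplified sketch of how a topic-route query could be answered purely from these tables; this captures the idea behind RouteInfoManager#pickupTopicRouteData rather than its exact body:
// Simplified sketch (not the exact source) of serving GET_ROUTEINFO_BY_TOPIC from the in-memory tables.
public TopicRouteData pickupTopicRouteDataSketch(final String topic) {
    this.lock.readLock().lock();
    try {
        List<QueueData> queueDataList = this.topicQueueTable.get(topic);
        if (queueDataList == null) {
            return null; // unknown topic
        }
        TopicRouteData topicRouteData = new TopicRouteData();
        topicRouteData.setQueueDatas(queueDataList);
        // collect the BrokerData of every broker that hosts a queue of this topic
        List<BrokerData> brokerDataList = new ArrayList<>();
        for (QueueData qd : queueDataList) {
            BrokerData bd = this.brokerAddrTable.get(qd.getBrokerName());
            if (bd != null) {
                brokerDataList.add(bd);
            }
        }
        topicRouteData.setBrokerDatas(brokerDataList);
        return topicRouteData;
    } finally {
        this.lock.readLock().unlock();
    }
}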
That concludes how the NameServer receives requests and how the broker completes its registration. Next we will look at the broker itself.
Given the limits of my own knowledge, there are bound to be mistakes in this article; corrections are welcome. Thanks!