Overview
In the first part of this source-code series, we saw how to start a consumer; now let's analyze the consumer's startup flow.
A consumer is used by first creating a DefaultMQPushConsumer object, configuring some properties (the key one being the registration of a message listener, inside which the messages are received), and then calling start() to launch the consumer.
In the following, we walk through this consumption process.
1. The constructor: DefaultMQPushConsumer
The consumer's main class is DefaultMQPushConsumer; let's first look at its constructor:
public DefaultMQPushConsumer(final String consumerGroup) {
// The queue allocation strategy is specified here
this(null, consumerGroup, null, new AllocateMessageQueueAveragely());
}
public DefaultMQPushConsumer(final String namespace, final String consumerGroup,
RPCHook rpcHook, AllocateMessageQueueStrategy allocateMessageQueueStrategy) {
this.consumerGroup = consumerGroup;
this.namespace = namespace;
this.allocateMessageQueueStrategy = allocateMessageQueueStrategy;
defaultMQPushConsumerImpl = new DefaultMQPushConsumerImpl(this, rpcHook);
}
The constructor only assigns a few member variables. The key one is the message queue allocation strategy, allocateMessageQueueStrategy: if none is specified, AllocateMessageQueueAveragely is used by default, which distributes the message queues evenly among the consumers in the group.
2. Starting the consumer: DefaultMQPushConsumer#start
The consumer is started via DefaultMQPushConsumer#start; the code is as follows:
public void start() throws MQClientException {
setConsumerGroup(
NamespaceUtil.wrapNamespace(this.getNamespace(), this.consumerGroup));
// Start
this.defaultMQPushConsumerImpl.start();
// Message-trace related code, not our concern here
if (null != traceDispatcher) {
...
}
}
Let's step into DefaultMQPushConsumerImpl#start:
public synchronized void start() throws MQClientException {
switch (this.serviceState) {
case CREATE_JUST:
log.info(...);
this.serviceState = ServiceState.START_FAILED;
this.checkConfig();
this.copySubscription();
if (this.defaultMQPushConsumer.getMessageModel() == MessageModel.CLUSTERING) {
this.defaultMQPushConsumer.changeInstanceNameToPID();
}
// Get the client instance
this.mQClientFactory = MQClientManager.getInstance()
.getOrCreateMQClientInstance(this.defaultMQPushConsumer, this.rpcHook);
// Set rebalance-related properties
this.rebalanceImpl.setConsumerGroup(this.defaultMQPushConsumer.getConsumerGroup());
this.rebalanceImpl.setMessageModel(this.defaultMQPushConsumer.getMessageModel());
this.rebalanceImpl.setAllocateMessageQueueStrategy(
this.defaultMQPushConsumer.getAllocateMessageQueueStrategy());
this.rebalanceImpl.setmQClientFactory(this.mQClientFactory);
this.pullAPIWrapper = new PullAPIWrapper(mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup(), isUnitMode());
this.pullAPIWrapper.registerFilterMessageHook(filterMessageHookList);
if (this.defaultMQPushConsumer.getOffsetStore() != null) {
this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
} else {
// Message model: broadcasting stores offsets locally; clustering stores them remotely (on the broker)
switch (this.defaultMQPushConsumer.getMessageModel()) {
case BROADCASTING:
this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
case CLUSTERING:
this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
default:
break;
}
this.defaultMQPushConsumer.setOffsetStore(this.offsetStore);
}
// Load the consumption offsets
this.offsetStore.load();
// Instantiate the appropriate consumeMessageService: orderly vs. concurrent consumption
if (this.getMessageListenerInner() instanceof MessageListenerOrderly) {
this.consumeOrderly = true;
this.consumeMessageService = new ConsumeMessageOrderlyService(this,
(MessageListenerOrderly) this.getMessageListenerInner());
} else if (this.getMessageListenerInner() instanceof MessageListenerConcurrently) {
this.consumeOrderly = false;
this.consumeMessageService = new ConsumeMessageConcurrentlyService(this,
(MessageListenerConcurrently) this.getMessageListenerInner());
}
this.consumeMessageService.start();
// Register the consumer group
boolean registerOK = mQClientFactory.registerConsumer(
this.defaultMQPushConsumer.getConsumerGroup(), this);
if (!registerOK) {
this.serviceState = ServiceState.CREATE_JUST;
this.consumeMessageService.shutdown(
defaultMQPushConsumer.getAwaitTerminationMillisWhenShutdown());
throw new MQClientException(...);
}
// Start the client factory
mQClientFactory.start();
log.info(...);
this.serviceState = ServiceState.RUNNING;
break;
case RUNNING:
case START_FAILED:
case SHUTDOWN_ALREADY:
throw new MQClientException(...);
default:
break;
}
// Update topic subscription info with data from the nameServer
this.updateTopicSubscribeInfoWhenSubscriptionChanged();
this.mQClientFactory.checkClientInBroker();
// Send heartbeats to all brokers
this.mQClientFactory.sendHeartbeatToAllBrokerWithLock();
// Trigger rebalancing immediately
this.mQClientFactory.rebalanceImmediately();
}
This method is long; the whole consumer startup flow is here. Let's focus on the highlights and summarize what it does.
- Obtain the client instance mQClientFactory, of type org.apache.rocketmq.client.impl.factory.MQClientInstance. If you remember the producer analysis, the producer's mQClientFactory is of the very same type.
- Create the offsetStore, which differs between broadcasting and clustering mode. The offsetStore is simply a store for the consumer's consumption offsets: in broadcasting mode the offset is kept in a local file, while in clustering mode it is kept on the remote broker. We will analyze the two modes in detail later.
- Instantiate the appropriate consumeMessageService, which distinguishes orderly from concurrent consumption; again, more on this later.
- Start mQClientFactory, i.e., start the client.
- Update topic info, send heartbeats to the brokers, and trigger rebalancing.
That is the main work done by DefaultMQPushConsumerImpl#start. In fact, points 1-3 above are just configuration; the services behind those configurations are started in the mQClientFactory.start() method, so let's continue there.
3. Starting mQClientFactory: MQClientInstance#start
Let's look at the startup flow of mQClientFactory, entering MQClientInstance#start:
public void start() throws MQClientException {
synchronized (this) {
switch (this.serviceState) {
case CREATE_JUST:
this.serviceState = ServiceState.START_FAILED;
// Fetch the nameServer address
if (null == this.clientConfig.getNamesrvAddr()) {
this.mQClientAPIImpl.fetchNameServerAddr();
}
// Start the client's remoting service; this configures the netty client
this.mQClientAPIImpl.start();
// Start the scheduled tasks
this.startScheduledTask();
// Pull message service; only effective for consumers
this.pullMessageService.start();
// Start the rebalance service; only effective for consumers
this.rebalanceService.start();
// Start the internal producer
this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
log.info("the client factory [{}] start OK", this.clientId);
this.serviceState = ServiceState.RUNNING;
break;
case START_FAILED:
throw new MQClientException(...);
default:
break;
}
}
}
This method is also called during producer startup, where we already analyzed it once; this time we revisit it from the consumer's perspective.
The method does the following:
- Fetch the nameServer address
- Start the client's remoting service, which configures the netty client
- Start the scheduled tasks
- Start the pull message service
- Start the rebalance service
Steps 1 and 2 are identical to the producer flow, so we skip them and look at the scheduled tasks instead, entering MQClientInstance#startScheduledTask:
private void startScheduledTask() {
...
// Persist the consumers' consumption offsets periodically (every 5 seconds by default)
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.persistAllConsumerOffset();
} catch (Exception e) {
log.error("ScheduledTask persistAllConsumerOffset exception", e);
}
}
}, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
// Other scheduled tasks omitted
...
}
This method starts several other scheduled tasks as well; here we focus on the one that runs MQClientInstance#persistAllConsumerOffset(), which persists the consumer's consumption offsets. For now, just keep this task in mind; we will analyze the persistence flow in detail in the section on offset persistence.
Back to the MQClientInstance#start flow: steps 4 and 5 start two services, pullMessageService and rebalanceService. Their classes look like this:
/**
* PullMessageService
*/
public class PullMessageService extends ServiceThread {
...
}
/**
* RebalanceService
*/
public class RebalanceService extends ServiceThread {
...
}
Both classes extend ServiceThread, and both inherit their start() method from it:
public abstract class ServiceThread implements Runnable {
// Other code omitted
...
/**
* The start() method
*/
public void start() {
log.info(...);
if (!started.compareAndSet(false, true)) {
return;
}
stopped = false;
this.thread = new Thread(this, getServiceName());
this.thread.setDaemon(isDaemon);
this.thread.start();
}
}
As the code shows, ServiceThread implements the Runnable interface, and its start() method spawns a thread whose execution logic comes from the subclass's run() method. So to understand what starting pullMessageService and rebalanceService does, we only need to look at the run() method of each class.
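The CAS-guarded start() pattern described above can be sketched as follows. This is a minimal illustration, not RocketMQ's actual code; MiniServiceThread and its fields are illustrative names:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal sketch of the ServiceThread start pattern: the service
// implements Runnable, and start() uses a CAS on an AtomicBoolean so
// that only the first call actually spawns the thread.
public class MiniServiceThread implements Runnable {
    private final AtomicBoolean started = new AtomicBoolean(false);
    private volatile Thread thread;

    public boolean start() {
        // Only the first caller wins the CAS and creates the thread
        if (!started.compareAndSet(false, true)) {
            return false; // already started; later calls are no-ops
        }
        thread = new Thread(this, "MiniServiceThread");
        thread.setDaemon(true);
        thread.start();
        return true;
    }

    @Override
    public void run() {
        // A ServiceThread subclass would put its service loop here
    }
}
```

Calling start() twice is harmless: the second call simply fails the CAS and returns, which is why RocketMQ can expose start() without extra locking.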
At this point the consumer startup is complete and all its services are running; pulling messages is exactly the joint work of these services, so next we analyze what they do.
4. Pulling messages: PullMessageService
In MQClientInstance#start, the message-pulling service PullMessageService is started. PullMessageService is a subclass of ServiceThread, so starting it creates a new thread; let's go straight to PullMessageService#run():
public class PullMessageService extends ServiceThread {
...
private final LinkedBlockingQueue<PullRequest> pullRequestQueue
= new LinkedBlockingQueue<PullRequest>();
/**
* Put the pullRequest into the pullRequestQueue
*/
public void executePullRequestImmediately(final PullRequest pullRequest) {
try {
this.pullRequestQueue.put(pullRequest);
} catch (InterruptedException e) {
log.error("executePullRequestImmediately pullRequestQueue.put", e);
}
}
@Override
public void run() {
log.info(this.getServiceName() + " service started");
while (!this.isStopped()) {
try {
// Take a pullRequest from the pullRequestQueue (blocking)
PullRequest pullRequest = this.pullRequestQueue.take();
this.pullMessage(pullRequest);
} catch (InterruptedException ignored) {
} catch (Exception e) {
log.error("Pull Message Service Run Method exception", e);
}
}
log.info(this.getServiceName() + " service end");
}
...
}
In PullMessageService#run(), the thread takes a pullRequest from pullRequestQueue and then calls this.pullMessage(pullRequest) to perform the pull. Note that pullRequestQueue is a LinkedBlockingQueue and the blocking take() method is used, so if the queue is empty, take() blocks until an element arrives.
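The take()-driven loop above can be illustrated with a tiny sketch (a plain String stands in for PullRequest; the names are illustrative, not RocketMQ's):

```java
import java.util.concurrent.LinkedBlockingQueue;

// A minimal sketch of the blocking hand-off between the rebalance side
// (which put()s requests) and the pull thread (which take()s them):
// take() blocks while the queue is empty, which lets the pull thread
// idle cheaply when there is nothing to pull.
public class PullLoopDemo {
    public static String pullOnce(LinkedBlockingQueue<String> queue) throws InterruptedException {
        // Blocks until an element is available, like pullRequestQueue.take()
        return queue.take();
    }

    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        // Another thread enqueues a request after a short delay
        new Thread(() -> {
            try {
                Thread.sleep(100);
                queue.put("pullRequest-1");
            } catch (InterruptedException ignored) { }
        }).start();
        // The consumer side blocks here until the request arrives
        System.out.println(pullOnce(queue));
    }
}
```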
Where do the elements of pullRequestQueue come from? As shown in PullMessageService#executePullRequestImmediately, they are added via pullRequestQueue.put(pullRequest). But who calls PullMessageService#executePullRequestImmediately(...)? Let's hold that question until we analyze the rebalance service.
Back in PullMessageService#run(), the pullRequest is handed to this.pullMessage(pullRequest) for further processing; let's follow it:
private void pullMessage(final PullRequest pullRequest) {
final MQConsumerInner consumer = this.mQClientFactory
.selectConsumer(pullRequest.getConsumerGroup());
if (consumer != null) {
DefaultMQPushConsumerImpl impl = (DefaultMQPushConsumerImpl) consumer;
// Continue processing
impl.pullMessage(pullRequest);
} else {
log.warn(...);
}
}
This method delegates to DefaultMQPushConsumerImpl#pullMessage to process the pullRequest further:
/**
* Core message-pulling flow
* @param pullRequest
*/
public void pullMessage(final PullRequest pullRequest) {
// A large amount of code omitted here
...
// The pullCallback is created here; we won't discuss it for now
PullCallback pullCallback = new PullCallback() {
...
}
int sysFlag = PullSysFlag.buildSysFlag(
commitOffsetEnable, // commitOffset
true, // suspend
subExpression != null, // subscription
classFilter // class filter
);
try {
// Pull messages
this.pullAPIWrapper.pullKernelImpl(
pullRequest.getMessageQueue(),
subExpression,
subscriptionData.getExpressionType(),
subscriptionData.getSubVersion(),
pullRequest.getNextOffset(),
this.defaultMQPushConsumer.getPullBatchSize(),
sysFlag,
commitOffsetValue,
BROKER_SUSPEND_MAX_TIME_MILLIS,
CONSUMER_TIMEOUT_MILLIS_WHEN_SUSPEND,
CommunicationMode.ASYNC,
pullCallback
);
} catch (Exception e) {
log.error("pullKernelImpl exception", e);
this.executePullRequestLater(pullRequest, pullTimeDelayMillsWhenException);
}
}
As you can see, the heart of this method is the pull itself, performed by PullAPIWrapper#pullKernelImpl:
public PullResult pullKernelImpl(
final MessageQueue mq,
final String subExpression,
final String expressionType,
final long subVersion,
final long offset,
final int maxNums,
final int sysFlag,
final long commitOffset,
final long brokerSuspendMaxTimeMillis,
final long timeoutMillis,
final CommunicationMode communicationMode,
final PullCallback pullCallback
) throws MQClientException, RemotingException, MQBrokerException, InterruptedException {
// Find the broker
FindBrokerResult findBrokerResult =
this.mQClientFactory.findBrokerAddressInSubscribe(mq.getBrokerName(),
this.recalculatePullFromWhichNode(mq), false);
if (null == findBrokerResult) {
// Broker not found: update the topic route info, then try again
this.mQClientFactory.updateTopicRouteInfoFromNameServer(mq.getTopic());
findBrokerResult =
this.mQClientFactory.findBrokerAddressInSubscribe(mq.getBrokerName(),
this.recalculatePullFromWhichNode(mq), false);
}
if (findBrokerResult != null) {
{
// check version
...
}
int sysFlagInner = sysFlag;
if (findBrokerResult.isSlave()) {
sysFlagInner = PullSysFlag.clearCommitOffsetFlag(sysFlagInner);
}
// Build the request
PullMessageRequestHeader requestHeader = new PullMessageRequestHeader();
// Many requestHeader.setXxx calls omitted here
...
String brokerAddr = findBrokerResult.getBrokerAddr();
if (PullSysFlag.hasClassFilterFlag(sysFlagInner)) {
brokerAddr = computePullFromWhichFilterServer(mq.getTopic(), brokerAddr);
}
// Pull messages from the broker
PullResult pullResult = this.mQClientFactory.getMQClientAPIImpl().pullMessage(
brokerAddr,
requestHeader,
timeoutMillis,
communicationMode,
pullCallback);
return pullResult;
}
throw new MQClientException("The broker[" + mq.getBrokerName() + "] not exist", null);
}
This method mainly assembles the pull request; once assembled, it calls MQClientAPIImpl#pullMessage, so let's look inside:
public PullResult pullMessage(
final String addr,
final PullMessageRequestHeader requestHeader,
final long timeoutMillis,
final CommunicationMode communicationMode,
final PullCallback pullCallback
) throws RemotingException, MQBrokerException, InterruptedException {
// The request code is PULL_MESSAGE
RemotingCommand request = RemotingCommand
.createRequestCommand(RequestCode.PULL_MESSAGE, requestHeader);
// The available modes of pulling data
switch (communicationMode) {
case ONEWAY:
assert false;
return null;
case ASYNC:
// Async mode: handled via the pullCallback
this.pullMessageAsync(addr, request, timeoutMillis, pullCallback);
return null;
case SYNC:
return this.pullMessageSync(addr, request, timeoutMillis);
default:
assert false;
break;
}
return null;
}
Just as with sending, rocketMq supports three modes of pulling messages:
- ONEWAY: does nothing and returns null directly
- ASYNC: asynchronous; on success or failure, the result is handled via callbacks on the pullCallback object
- SYNC: synchronous; the pulled messages are returned directly
Since we entered this method in async mode, we focus on the asynchronous implementation, MQClientAPIImpl#pullMessageAsync:
private void pullMessageAsync(
final String addr,
final RemotingCommand request,
final long timeoutMillis,
final PullCallback pullCallback
) throws RemotingException, InterruptedException {
// Pull asynchronously
this.remotingClient.invokeAsync(addr, request, timeoutMillis, new InvokeCallback() {
@Override
public void operationComplete(ResponseFuture responseFuture) {
// Handle the pull result
RemotingCommand response = responseFuture.getResponseCommand();
// A response was received
if (response != null) {
try {
PullResult pullResult = MQClientAPIImpl.this
.processPullResponse(response, addr);
assert pullResult != null;
// Invoke pullCallback's onSuccess(...) method
pullCallback.onSuccess(pullResult);
} catch (Exception e) {
// Invoke pullCallback's onException(...) method
pullCallback.onException(e);
}
} else {
...
}
}
});
}
This works exactly like the producer's asynchronous send: the same remotingClient.invokeAsync(...) is called, and the result is likewise handled in an InvokeCallback object. In InvokeCallback#operationComplete, pullCallback.onSuccess(...) is invoked on success and pullCallback.onException(...) on failure. Next, let's look at the pullCallback itself.
The pullCallback object is created and passed in inside DefaultMQPushConsumerImpl#pullMessage; its content is as follows:
/**
* Core message-pulling flow
* @param pullRequest
*/
public void pullMessage(final PullRequest pullRequest) {
// Other code omitted; focus on pullCallback
...
// The pull callback: invoked to handle the messages once a pull completes
PullCallback pullCallback = new PullCallback() {
@Override
public void onSuccess(PullResult pullResult) {
if (pullResult != null) {
// Process the messages: decode the binary data into java objects; tag filtering also happens here
pullResult = DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(
pullRequest.getMessageQueue(), pullResult, subscriptionData);
switch (pullResult.getPullStatus()) {
case FOUND:
...
long firstMsgOffset = Long.MAX_VALUE;
if (pullResult.getMsgFoundList() == null
|| pullResult.getMsgFoundList().isEmpty()) {
DefaultMQPushConsumerImpl.this
.executePullRequestImmediately(pullRequest);
} else {
...
// Consume the messages, either orderly or concurrently
DefaultMQPushConsumerImpl.this.consumeMessageService
.submitConsumeRequest(
pullResult.getMsgFoundList(),
processQueue,
pullRequest.getMessageQueue(),
dispatchToConsume);
// Prepare the next run
if (DefaultMQPushConsumerImpl.this
.defaultMQPushConsumer.getPullInterval() > 0) {
DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
DefaultMQPushConsumerImpl.this
.defaultMQPushConsumer.getPullInterval());
} else {
DefaultMQPushConsumerImpl.this
.executePullRequestImmediately(pullRequest);
}
}
...
break;
// Handling of other statuses omitted
...
}
}
}
@Override
public void onException(Throwable e) {
if (!pullRequest.getMessageQueue().getTopic()
.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn("execute the pull request exception", e);
}
// This method eventually puts the pullRequest back into the pullRequestQueue
DefaultMQPushConsumerImpl.this
.executePullRequestLater(pullRequest, pullTimeDelayMillsWhenException);
}
};
// Other code omitted; focus on pullCallback
...
}
PullCallback has two main methods:
- onSuccess(...): invoked when the pull succeeds; it decodes the messages, consumes them, and prepares the next pullRequest
- onException(...): invoked when the pull fails; it mainly puts the failed pullRequest back into the pullRequestQueue to be retried later
Next, we analyze these two methods in detail.
4.1 Message decoding
Message decoding is handled by PullAPIWrapper#processPullResult; note, though, that this method does more than decoding:
public PullResult processPullResult(final MessageQueue mq, final PullResult pullResult,
final SubscriptionData subscriptionData) {
PullResultExt pullResultExt = (PullResultExt) pullResult;
this.updatePullFromWhichNode(mq, pullResultExt.getSuggestWhichBrokerId());
if (PullStatus.FOUND == pullResult.getPullStatus()) {
// Decode the binary data into objects
ByteBuffer byteBuffer = ByteBuffer.wrap(pullResultExt.getMessageBinary());
List<MessageExt> msgList = MessageDecoder.decodes(byteBuffer);
List<MessageExt> msgListFilterAgain = msgList;
// Filter by tag
if (!subscriptionData.getTagsSet().isEmpty() && !subscriptionData.isClassFilterMode()) {
msgListFilterAgain = new ArrayList<MessageExt>(msgList.size());
for (MessageExt msg : msgList) {
if (msg.getTags() != null) {
// The actual filtering
if (subscriptionData.getTagsSet().contains(msg.getTags())) {
msgListFilterAgain.add(msg);
}
}
}
}
if (this.hasHook()) {
FilterMessageContext filterMessageContext = new FilterMessageContext();
filterMessageContext.setUnitMode(unitMode);
filterMessageContext.setMsgList(msgListFilterAgain);
this.executeHook(filterMessageContext);
}
// Further process the filtered messages
for (MessageExt msg : msgListFilterAgain) {
// Transaction message flag
String traFlag = msg.getProperty(MessageConst.PROPERTY_TRANSACTION_PREPARED);
if (Boolean.parseBoolean(traFlag)) {
msg.setTransactionId(msg.getProperty(
MessageConst.PROPERTY_UNIQ_CLIENT_MESSAGE_ID_KEYIDX));
}
// Offsets
MessageAccessor.putProperty(msg, MessageConst.PROPERTY_MIN_OFFSET,
Long.toString(pullResult.getMinOffset()));
MessageAccessor.putProperty(msg, MessageConst.PROPERTY_MAX_OFFSET,
Long.toString(pullResult.getMaxOffset()));
msg.setBrokerName(mq.getBrokerName());
}
pullResultExt.setMsgFoundList(msgListFilterAgain);
}
pullResultExt.setMessageBinary(null);
return pullResult;
}
This method does three things:
- Decode the binary data into objects, i.e., turn byte[] into List<MessageExt>
- If the consumer specified tags, filter by tag; this is just a Set#contains() check on each message's tag
- Set message properties such as TransactionId and BrokerName
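The tag-filtering step above is simple enough to demonstrate in isolation. The sketch below works on plain tag strings instead of MessageExt objects (TagFilterDemo and its names are illustrative, not RocketMQ's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// A minimal sketch of the tag filter: a message survives only if its
// tag is contained in the subscription's tag set, exactly a
// Set#contains check, as in processPullResult above.
public class TagFilterDemo {
    public static List<String> filterByTag(List<String> msgTags, Set<String> subscribedTags) {
        List<String> kept = new ArrayList<>();
        for (String tag : msgTags) {
            // Messages without a tag are skipped, mirroring the
            // msg.getTags() != null check in the source
            if (tag != null && subscribedTags.contains(tag)) {
                kept.add(tag);
            }
        }
        return kept;
    }
}
```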
4.2 Consuming messages
The message-consumption call is:
DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(
pullResult.getMsgFoundList(),
processQueue,
pullRequest.getMessageQueue(),
dispatchToConsume);
There are two modes of consuming messages:
- ConsumeMessageConcurrentlyService: concurrent consumption
- ConsumeMessageOrderlyService: orderly consumption
We will analyze the differences between the two later; here the concurrent mode is used, so let's enter ConsumeMessageConcurrentlyService#submitConsumeRequest:
public void submitConsumeRequest(
final List<MessageExt> msgs,
final ProcessQueue processQueue,
final MessageQueue messageQueue,
final boolean dispatchToConsume) {
final int consumeBatchSize = this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
// If the batch does not exceed consumeBatchSize, submit it directly
if (msgs.size() <= consumeBatchSize) {
ConsumeRequest consumeRequest
= new ConsumeRequest(msgs, processQueue, messageQueue);
try {
// Submit the task
this.consumeExecutor.submit(consumeRequest);
} catch (RejectedExecutionException e) {
this.submitConsumeRequestLater(consumeRequest);
}
} else {
// Otherwise split the messages into pages, each handled by its own task
for (int total = 0; total < msgs.size(); ) {
List<MessageExt> msgThis = new ArrayList<MessageExt>(consumeBatchSize);
for (int i = 0; i < consumeBatchSize; i++, total++) {
if (total < msgs.size()) {
msgThis.add(msgs.get(total));
} else {
break;
}
}
ConsumeRequest consumeRequest
= new ConsumeRequest(msgThis, processQueue, messageQueue);
try {
this.consumeExecutor.submit(consumeRequest);
} catch (RejectedExecutionException e) {
for (; total < msgs.size(); total++) {
msgThis.add(msgs.get(total));
}
this.submitConsumeRequestLater(consumeRequest);
}
}
}
}
This method wraps the received messages into ConsumeRequests and submits them to a thread pool. If the number of messages exceeds consumeMessageBatchMaxSize, they are split into pages, each page submitted as its own task (a single pull returns at most pullBatchSize messages, 32 by default).
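The paging loop above can be sketched as a standalone helper (BatchPagingDemo is an illustrative name; the loop structure mirrors the source):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the paging in submitConsumeRequest: messages are
// split into pages of at most consumeBatchSize, one page per
// ConsumeRequest (here each page is just a sub-list).
public class BatchPagingDemo {
    public static <T> List<List<T>> paginate(List<T> msgs, int consumeBatchSize) {
        List<List<T>> pages = new ArrayList<>();
        if (msgs.size() <= consumeBatchSize) {
            // Small batch: handled directly, no paging needed
            pages.add(new ArrayList<>(msgs));
            return pages;
        }
        for (int total = 0; total < msgs.size(); ) {
            List<T> page = new ArrayList<>(consumeBatchSize);
            for (int i = 0; i < consumeBatchSize && total < msgs.size(); i++, total++) {
                page.add(msgs.get(total));
            }
            pages.add(page);
        }
        return pages;
    }
}
```

For example, 7 messages with a batch size of 3 yield pages of sizes 3, 3 and 1.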
The ConsumeRequest ultimately runs in the thread pool, so let's go straight into its run() method to see what it does:
class ConsumeRequest implements Runnable {
...
@Override
public void run() {
...
// Fetch the message listener
MessageListenerConcurrently listener
= ConsumeMessageConcurrentlyService.this.messageListener;
...
ConsumeReturnType returnType = ConsumeReturnType.SUCCESS;
try {
if (msgs != null && !msgs.isEmpty()) {
for (MessageExt msg : msgs) {
MessageAccessor.setConsumeStartTimeStamp(
msg, String.valueOf(System.currentTimeMillis()));
}
}
// Hand the messages to the listener for actual processing
status = listener.consumeMessage(
Collections.unmodifiableList(msgs), context);
} catch (Throwable e) {
...
}
...
// Process the result
if (!processQueue.isDropped()) {
ConsumeMessageConcurrentlyService.this.processConsumeResult(status, context, this);
} else {
log.warn(...);
}
}
}
The code above is heavily trimmed; the remaining, essential part performs three operations:
- Fetch the current consumer's message listener
- Invoke the listener's consumeMessage() method
- Process the return value of consumeMessage()
So what is this consumer's message listener? In org.apache.rocketmq.example.simple.PushConsumer, the listener is registered like this:
public class PushConsumer {
public static void main(String[] args)
throws InterruptedException, MQClientException {
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("CID_JODIE_1");
...
// Register the listener that receives the messages
consumer.registerMessageListener(new MessageListenerConcurrently() {
@Override
public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
ConsumeConcurrentlyContext context) {
// The messages are received here
System.out.printf("%s Receive New Messages: %s %n",
Thread.currentThread().getName(), msgs);
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}
});
// Start
consumer.start();
}
}
The MessageListenerConcurrently#consumeMessage(...) method specified here is exactly what ConsumeRequest#run() invokes.
After MessageListenerConcurrently#consumeMessage(...) returns, its return value is processed by ConsumeMessageConcurrentlyService#processConsumeResult; here is the key code:
public void processConsumeResult(
final ConsumeConcurrentlyStatus status,
final ConsumeConcurrentlyContext context,
final ConsumeRequest consumeRequest
) {
// Retry handling omitted; we will cover the retry mechanism later
...
long offset = consumeRequest.getProcessQueue().removeMessage(consumeRequest.getMsgs());
if (offset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
// Update the offset
this.defaultMQPushConsumerImpl.getOffsetStore()
.updateOffset(consumeRequest.getMessageQueue(), offset, true);
}
}
This method performs two operations:
- Decide, based on the return value, whether a retry is needed; we defer the retry mechanism to a later analysis
- Update the consumption offset; the update strategy differs between broadcasting and clustering mode, which we will look at shortly
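The safety of the removeMessage()-then-updateOffset sequence above deserves a note. The source of ProcessQueue#removeMessage is not shown here; the sketch below is my reconstruction of the idea, under the assumption that in-flight messages are kept in a map sorted by queue offset (names are illustrative):

```java
import java.util.List;
import java.util.TreeMap;

// A sketch of why removing a consumed batch yields a safe offset to
// persist: with in-flight messages keyed by queue offset in a TreeMap,
// the offset reported back is the smallest offset still in flight (or
// max+1 when everything is consumed), so the persisted offset never
// jumps past an unconsumed message even when batches finish out of order.
public class OffsetTrackerDemo {
    private final TreeMap<Long, String> inFlight = new TreeMap<>();
    private long queueOffsetMax = -1;

    public void put(long offset, String msg) {
        inFlight.put(offset, msg);
        queueOffsetMax = Math.max(queueOffsetMax, offset);
    }

    /** Remove consumed offsets; return the offset that is safe to persist. */
    public long removeMessage(List<Long> consumed) {
        for (Long offset : consumed) {
            inFlight.remove(offset);
        }
        // If something is still in flight, we may only persist up to the
        // smallest unconsumed offset
        return inFlight.isEmpty() ? queueOffsetMax + 1 : inFlight.firstKey();
    }
}
```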
4.3 Preparing the next pullRequest
Back in DefaultMQPushConsumerImpl#pullMessage, the code that prepares the next run is:
// Prepare the next run
if (DefaultMQPushConsumerImpl.this
.defaultMQPushConsumer.getPullInterval() > 0) {
// Issue the next pullRequest after a delay of pullInterval milliseconds
DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
DefaultMQPushConsumerImpl.this
.defaultMQPushConsumer.getPullInterval());
} else {
// Issue the next pullRequest immediately
DefaultMQPushConsumerImpl.this
.executePullRequestImmediately(pullRequest);
}
The two branches are very similar; the difference is that one issues the next pullRequest after a delay, while the other issues it immediately. Let's look at the delayed path:
private void executePullRequestLater(final PullRequest pullRequest, final long timeDelay) {
// Delegate further
this.mQClientFactory.getPullMessageService()
.executePullRequestLater(pullRequest, timeDelay);
}
public void executePullRequestLater(final PullRequest pullRequest, final long timeDelay) {
if (!isStopped()) {
// Runs only once, after the given delay
this.scheduledExecutorService.schedule(new Runnable() {
@Override
public void run() {
// What it calls is executePullRequestImmediately(...)
PullMessageService.this.executePullRequestImmediately(pullRequest);
}
}, timeDelay, TimeUnit.MILLISECONDS);
} else {
log.warn("PullMessageServiceScheduledThread has shutdown");
}
}
The delayed path ultimately uses scheduledExecutorService to invoke executePullRequestImmediately(...). Note that this schedule() call runs the task only once, after the given timeDelay, which is the value of defaultMQPushConsumer.getPullInterval().
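The one-shot schedule-then-requeue pattern can be sketched as follows (DelayedRequeueDemo and its fields are illustrative; a String stands in for PullRequest):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A minimal sketch of executePullRequestLater: a one-shot schedule()
// that, after timeDelay milliseconds, simply puts the request back into
// the blocking queue that the pull thread take()s from.
public class DelayedRequeueDemo {
    private final LinkedBlockingQueue<String> pullRequestQueue = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public void executePullRequestLater(String pullRequest, long timeDelayMillis) {
        // schedule() runs the task exactly once after the delay
        scheduler.schedule(() -> pullRequestQueue.offer(pullRequest),
                timeDelayMillis, TimeUnit.MILLISECONDS);
    }

    public String awaitNext(long timeoutMillis) throws InterruptedException {
        return pullRequestQueue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

Because schedule() (unlike scheduleAtFixedRate()) fires once, each pull must re-arm the next one, which is exactly what the onSuccess branch does via executePullRequestLater/Immediately.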
In the end, whether delayed or immediate, PullMessageService#executePullRequestImmediately is called:
public void executePullRequestImmediately(final PullRequest pullRequest) {
try {
this.pullRequestQueue.put(pullRequest);
} catch (InterruptedException e) {
log.error("executePullRequestImmediately pullRequestQueue.put", e);
}
}
As you can see, this simply puts the pullRequest into pullRequestQueue, from which the PullMessageService thread will take it and initiate yet another pull.
5. How message queues are assigned: RebalanceService
Let's return to PullMessageService#run():
public class PullMessageService extends ServiceThread {
...
private final LinkedBlockingQueue<PullRequest> pullRequestQueue
= new LinkedBlockingQueue<PullRequest>();
/**
* Put the pullRequest into the pullRequestQueue
*/
public void executePullRequestImmediately(final PullRequest pullRequest) {
try {
this.pullRequestQueue.put(pullRequest);
} catch (InterruptedException e) {
log.error("executePullRequestImmediately pullRequestQueue.put", e);
}
}
@Override
public void run() {
log.info(this.getServiceName() + " service started");
while (!this.isStopped()) {
try {
// Take a pullRequest from the pullRequestQueue (blocking)
PullRequest pullRequest = this.pullRequestQueue.take();
this.pullMessage(pullRequest);
} catch (InterruptedException ignored) {
} catch (Exception e) {
log.error("Pull Message Service Run Method exception", e);
}
}
log.info(this.getServiceName() + " service end");
}
...
}
Once the PullMessageService thread obtains a pullRequest, it pulls messages again and again. But where does the very first pullRequest come from? That is exactly the load-balancing function this section analyzes.
Rebalancing is handled by the RebalanceService thread, which is started in MQClientInstance#start. Let's go straight to its run() method:
public class RebalanceService extends ServiceThread {
// Other code omitted
...
@Override
public void run() {
log.info(this.getServiceName() + " service started");
while (!this.isStopped()) {
this.waitForRunning(waitInterval);
this.mqClientFactory.doRebalance();
}
log.info(this.getServiceName() + " service end");
}
}
Its run() method merely calls MQClientInstance#doRebalance; let's continue:
public void doRebalance() {
// consumerTable holds the current client's consumers
for (Map.Entry<String, MQConsumerInner> entry : this.consumerTable.entrySet()) {
MQConsumerInner impl = entry.getValue();
if (impl != null) {
try {
impl.doRebalance();
} catch (Throwable e) {
log.error("doRebalance exception", e);
}
}
}
}
MQClientInstance#doRebalance iterates over all consumers and calls DefaultMQPushConsumerImpl#doRebalance for each of them; consumerTable is where the DefaultMQPushConsumerImpl instances are kept. Let's enter DefaultMQPushConsumerImpl#doRebalance:
@Override
public void doRebalance() {
if (!this.pause) {
this.rebalanceImpl.doRebalance(this.isConsumeOrderly());
}
}
Following the call chain, we arrive at RebalanceImpl#doRebalance:
public void doRebalance(final boolean isOrder) {
Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
if (subTable != null) {
for (final Map.Entry<String, SubscriptionData> entry : subTable.entrySet()) {
final String topic = entry.getKey();
try {
// Client-side rebalancing: balance the load per topic
this.rebalanceByTopic(topic, isOrder);
} catch (Throwable e) {
if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn("rebalanceByTopic Exception", e);
}
}
}
}
this.truncateMessageQueueNotMyTopic();
}
/**
* This is where rebalancing is ultimately handled
*/
private void rebalanceByTopic(final String topic, final boolean isOrder) {
switch (messageModel) {
// Broadcasting mode: no balancing needed, every consumer consumes every queue; just update the load info
case BROADCASTING: {
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
if (mqSet != null) {
// Update the rebalance info; note the parameter is mqSet, i.e., all queues
boolean changed = this
.updateProcessQueueTableInRebalance(topic, mqSet, isOrder);
if (changed) {
this.messageQueueChanged(topic, mqSet, mqSet);
log.info(...);
}
} else {
log.warn(...);
}
break;
}
// Clustering mode
case CLUSTERING: {
// Get the message queues for the subscribed topic
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
// Client ids: fetch all consumerIds for this topic and consumerGroup
List<String> cidAll = this.mQClientFactory
.findConsumerIdList(topic, consumerGroup);
if (null == mqSet) {
if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn(...);
}
}
if (null == cidAll) {
log.warn(...);
}
if (mqSet != null && cidAll != null) {
List<MessageQueue> mqAll = new ArrayList<MessageQueue>();
mqAll.addAll(mqSet);
// Sorting keeps the allocation relatively stable across consumers
Collections.sort(mqAll);
Collections.sort(cidAll);
// The MessageQueue allocation strategy
AllocateMessageQueueStrategy strategy = this.allocateMessageQueueStrategy;
List<MessageQueue> allocateResult = null;
try {
// Allocate by strategy; returns the messageQueues this consumer actually subscribes to
allocateResult = strategy.allocate(
this.consumerGroup,
this.mQClientFactory.getClientId(),
mqAll,
cidAll);
} catch (Throwable e) {
log.error(...);
return;
}
Set<MessageQueue> allocateResultSet = new HashSet<MessageQueue>();
if (allocateResult != null) {
allocateResultSet.addAll(allocateResult);
}
// Update the rebalance info; the parameter is allocateResultSet, i.e., the queues assigned to this consumer
boolean changed = this
.updateProcessQueueTableInRebalance(topic, allocateResultSet, isOrder);
if (changed) {
log.info(...);
this.messageQueueChanged(topic, mqSet, allocateResultSet);
}
}
break;
}
default:
break;
}
}
RebalanceImpl#rebalanceByTopic is where rebalancing is finally performed, distinguishing broadcasting mode from clustering mode.
In broadcasting mode, a message is consumed by every consumer in the consumer group, whereas in clustering mode a message is consumed by only one consumer in the group.
For that reason, there is no real load balancing in broadcasting mode: all queues are assigned to the current consumer, and then the processQueueTable load info is updated. In clustering mode, the queues to be consumed by the current consumer are allocated first, and then the processQueueTable load info is updated.
Let's focus on clustering mode, which performs two operations:
- strategy.allocate(...): assign queues to the current consumer according to the load-balancing strategy
- updateProcessQueueTableInRebalance(...): update the rebalance info
rocketMq provides the following load-balancing strategies:
- AllocateMessageQueueAveragely: average allocation, rocketMq's default strategy
- AllocateMessageQueueAveragelyByCircle: circular average allocation; the only difference from average allocation is how the queues are dealt out: average allocation gives each consumer its MessageQueues as one contiguous block, while circular allocation deals one queue per consumer per round, so the queues a consumer gets are not contiguous
- AllocateMessageQueueByConfig: user-defined configuration
- AllocateMessageQueueByMachineRoom: same-machine-room strategy; the current consumer only handles MessageQueues located in the specified machine rooms, and brokerName must follow the required format: machineRoomName@brokerName
- AllocateMachineRoomNearby: nearest-machine-room strategy; in AllocateMessageQueueByMachineRoom, if a machine room has MessageQueues but no consumer, who consumes those messages? AllocateMachineRoomNearby extends the previous strategy to handle exactly that case
- AllocateMessageQueueConsistentHash: consistent hashing
Here we focus on the average allocation strategy, AllocateMessageQueueAveragely:
public List<MessageQueue> allocate(String consumerGroup, String currentCID,
List<MessageQueue> mqAll, List<String> cidAll) {
// The return value
List<MessageQueue> result = new ArrayList<MessageQueue>();
// Some validation omitted
...
int index = cidAll.indexOf(currentCID);
int mod = mqAll.size() % cidAll.size();
// 1. More consumers than queues: averageSize = 1
// 2. Consumers <= queues: averageSize = queues / consumers, plus one extra for the first `mod` consumers
int averageSize = mqAll.size() <= cidAll.size()
? 1 : (mod > 0 && index < mod
? mqAll.size() / cidAll.size() + 1 : mqAll.size() / cidAll.size());
int startIndex = (mod > 0 && index < mod) ? index * averageSize : index * averageSize + mod;
int range = Math.min(averageSize, mqAll.size() - startIndex);
for (int i = 0; i < range; i++) {
result.add(mqAll.get((startIndex + i) % mqAll.size()));
}
return result;
}
The key allocation logic sits in the last few lines; reading the code alone can be dizzying, so here is a simple example to explain it.
Assume there are 6 messageQueues and 4 consumers, and the current consumer's index is 1. With these premises, let's walk through the allocation.
1. Compute the remainder: 6 % 4 = 2. The messageQueues cannot be divided evenly among the consumers, so let's see how this remainder of 2 is handled.
2. Compute how many messageQueues each consumer handles on average.
   - Note that if there are more consumers than messageQueues, each consumer gets at most one messageQueue; in that case the remainder is not used, some consumers handle zero messageQueues, and a messageQueue is never consumed by two or more consumers of the group at the same time.
   - Here there are 6 messageQueues and 4 consumers, so each consumer handles at least 1 queue; to keep things "even", 2 consumers must handle 1 extra messageQueue. By the allocation rule, a consumer whose index is less than mod gets the extra queue; mod is 2 here, so the result is:

     | consumer index | 0 | 1 | 2 | 3 |
     | --- | --- | --- | --- | --- |
     | queues handled | 2 | 2 | 1 | 1 |

3. Once each consumer's share is fixed, which messageQueues go to which consumer? From the code, one consumer's full share is assigned before moving on to the next, so the final result is:

   | queue | Q0 | Q1 | Q2 | Q3 | Q4 | Q5 |
   | --- | --- | --- | --- | --- | --- | --- |
   | consumer | C0 | C0 | C1 | C1 | C2 | C3 |

So with 6 messageQueues, 4 consumers, and a current consumer index of 1, the current consumer is assigned 2 queues: Q2 and Q3.
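The walkthrough above can be verified by running the allocation arithmetic directly. The sketch below copies the formulas from the source (validation omitted; queues and consumer ids are plain strings for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A runnable sketch of AllocateMessageQueueAveragely's index arithmetic,
// using the same averageSize/startIndex/range formulas as the source.
public class AverageAllocateDemo {
    public static List<String> allocate(String currentCID, List<String> mqAll, List<String> cidAll) {
        List<String> result = new ArrayList<>();
        int index = cidAll.indexOf(currentCID);
        int mod = mqAll.size() % cidAll.size();
        // Each consumer gets floor(queues / consumers) queues;
        // the first `mod` consumers get one extra
        int averageSize = mqAll.size() <= cidAll.size()
            ? 1 : (mod > 0 && index < mod
                ? mqAll.size() / cidAll.size() + 1 : mqAll.size() / cidAll.size());
        int startIndex = (mod > 0 && index < mod) ? index * averageSize : index * averageSize + mod;
        int range = Math.min(averageSize, mqAll.size() - startIndex);
        for (int i = 0; i < range; i++) {
            result.add(mqAll.get((startIndex + i) % mqAll.size()));
        }
        return result;
    }
}
```

With 6 queues Q0..Q5 and consumers C0..C3, consumer C1 (index 1) indeed gets [Q2, Q3], matching the table above.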
With the messageQueues allocated, the next step is to update the load info, in RebalanceImpl#updateProcessQueueTableInRebalance:
private boolean updateProcessQueueTableInRebalance(final String topic,
final Set<MessageQueue> mqSet, final boolean isOrder) {
...
List<PullRequest> pullRequestList = new ArrayList<PullRequest>();
for (MessageQueue mq : mqSet) {
if (!this.processQueueTable.containsKey(mq)) {
if (isOrder && !this.lock(mq)) {
log.warn(...);
continue;
}
this.removeDirtyOffset(mq);
ProcessQueue pq = new ProcessQueue();
long nextOffset = this.computePullFromWhere(mq);
if (nextOffset >= 0) {
ProcessQueue pre = this.processQueueTable.putIfAbsent(mq, pq);
if (pre != null) {
log.info("doRebalance, {}, mq already exists, {}", consumerGroup, mq);
}
// Where pullRequests originate: if the mq is not yet tracked, add one
else {
log.info("doRebalance, {}, add a new mq, {}", consumerGroup, mq);
// Create the pullRequest
PullRequest pullRequest = new PullRequest();
pullRequest.setConsumerGroup(consumerGroup);
pullRequest.setNextOffset(nextOffset);
pullRequest.setMessageQueue(mq);
pullRequest.setProcessQueue(pq);
pullRequestList.add(pullRequest);
changed = true;
}
} else {
log.warn("doRebalance, {}, add new mq failed, {}", consumerGroup, mq);
}
}
}
// Dispatch
this.dispatchPullRequest(pullRequestList);
return changed;
}
The crucial part of this method is how pullRequestList is populated: it iterates over the given MessageQueues and, for each messageQueue the current consumer is not yet consuming, adds a new pullRequest to pullRequestList, which is then dispatched.
Now we can see where the very first pullRequest is born, and dispatching pullRequestList simply hands the pullRequests over to the pullMessageService thread:
/**
* RebalancePushImpl#dispatchPullRequest: dispatches the pull requests
*/
public void dispatchPullRequest(List<PullRequest> pullRequestList) {
for (PullRequest pullRequest : pullRequestList) {
// "executing" the pullRequest here really just means adding it to
// PullMessageService#pullRequestQueue
this.defaultMQPushConsumerImpl.executePullRequestImmediately(pullRequest);
log.info("doRebalance, {}, add a new pull request {}", consumerGroup, pullRequest);
}
}
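The handoff above can be sketched without any RocketMQ dependency as a plain producer/consumer pattern: dispatching only enqueues the request, and a dedicated service thread drains the queue to perform the actual pull. Class and method names below mirror the real ones, but strings stand in for PullRequest objects.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Dependency-free sketch of the handoff to PullMessageService.
class PullDispatchSketch {
    private final BlockingQueue<String> pullRequestQueue = new LinkedBlockingQueue<>();

    // analog of DefaultMQPushConsumerImpl#executePullRequestImmediately
    void executePullRequestImmediately(String pullRequest) {
        pullRequestQueue.offer(pullRequest);
    }

    // one iteration of the service loop; the real thread blocks with take(),
    // poll() is used here only so the sketch never blocks
    String takeNext() {
        return pullRequestQueue.poll();
    }
}
```

The design point: rebalance decides *what* to pull, while a single long-lived thread decides *when*, decoupling the two through the queue.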
六. Avoiding duplicate consumption: the consume offset
How does a RocketMQ consumer avoid consuming the same message twice? The answer lies in the offset! Before pulling, the consumer reads the offset and sends it along with the pull request; the broker then uses that offset to locate the messages to return.
6.1 Initializing the offset store
The interface responsible for offset storage is OffsetStore, which has two implementations:

- LocalFileOffsetStore: stores offsets in a local file
- RemoteBrokerOffsetStore: stores offsets on the remote broker

The store is initialized in DefaultMQPushConsumerImpl#start:
public synchronized void start() throws MQClientException {
switch (this.serviceState) {
case CREATE_JUST:
...
if (this.defaultMQPushConsumer.getOffsetStore() != null) {
this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
} else {
// message model: broadcasting stores offsets locally, clustering stores them on the remote broker
switch (this.defaultMQPushConsumer.getMessageModel()) {
case BROADCASTING:
this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
case CLUSTERING:
this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
default:
break;
}
this.defaultMQPushConsumer.setOffsetStore(this.offsetStore);
}
// load the consume offsets
this.offsetStore.load();
...
}
...
}
Two offset-related things happen in this method:

- Initialization: a different OffsetStore is created depending on the message model; in short, the broadcasting model keeps offsets locally, while the clustering model keeps them on the remote broker
- Loading the stored offsets via offsetStore.load()

Let's look at each implementation's load operation, starting with LocalFileOffsetStore:
@Override
public void load() throws MQClientException {
// read the local file
OffsetSerializeWrapper offsetSerializeWrapper = this.readLocalOffset();
if (offsetSerializeWrapper != null && offsetSerializeWrapper.getOffsetTable() != null) {
offsetTable.putAll(offsetSerializeWrapper.getOffsetTable());
// load and log each queue's offset
for (MessageQueue mq : offsetSerializeWrapper.getOffsetTable().keySet()) {
AtomicLong offset = offsetSerializeWrapper.getOffsetTable().get(mq);
log.info("load consumer's offset, {} {} {}", this.groupName, mq, offset.get());
}
}
}
private OffsetSerializeWrapper readLocalOffset() throws MQClientException {
String content = null;
try {
// read the file content into a String
content = MixAll.file2String(this.storePath);
} catch (IOException e) {
log.warn("Load local offset store file exception", e);
}
if (null == content || content.length() == 0) {
// fall back to the .bak file
return this.readLocalOffsetBak();
} else {
OffsetSerializeWrapper offsetSerializeWrapper = null;
try {
offsetSerializeWrapper =
OffsetSerializeWrapper.fromJson(content, OffsetSerializeWrapper.class);
} catch (Exception e) {
log.warn("readLocalOffset Exception, and try to correct", e);
return this.readLocalOffsetBak();
}
return offsetSerializeWrapper;
}
}
As you can see, loading here is nothing more than reading the local file.
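The load path boils down to "read the primary file, fall back to the .bak copy when it is missing or empty". A sketch of that pattern follows; it returns the raw content, whereas the real code additionally deserializes it into OffsetSerializeWrapper.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of readLocalOffset's "primary file, then .bak" fallback.
class OffsetFileReader {
    static String readWithBackup(Path storePath) throws IOException {
        String content = null;
        if (Files.exists(storePath)) {
            content = Files.readString(storePath); // MixAll.file2String analog
        }
        if (content == null || content.isEmpty()) {
            // primary file missing or empty: try the backup copy
            Path bak = Path.of(storePath.toString() + ".bak");
            return Files.exists(bak) ? Files.readString(bak) : null;
        }
        return content;
    }
}
```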
Now the load operation of RemoteBrokerOffsetStore:
@Override
public void load() {
}
For the remote broker store, load() does nothing at all.
6.2 Persisting the offsets
Offsets are persisted by a scheduled task, started in MQClientInstance#startScheduledTask:
private void startScheduledTask() {
// persist every consumer's consume offsets, by default every 5 seconds (after a 10s initial delay)
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.persistAllConsumerOffset();
} catch (Exception e) {
log.error("ScheduledTask persistAllConsumerOffset exception", e);
}
}
}, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
}
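The scheduling itself is plain ScheduledExecutorService usage, sketched below with shortened intervals for illustration; a counter stands in for persistAllConsumerOffset(). Note the try/catch: with scheduleAtFixedRate, an exception that escapes the task silently cancels all future runs, which is why the real code wraps the call.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the periodic-persist pattern used by startScheduledTask.
class OffsetPersistScheduler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    final AtomicInteger persistCount = new AtomicInteger();

    void start(long initialDelayMs, long periodMs) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                persistCount.incrementAndGet(); // would call persistAllConsumerOffset()
            } catch (Exception e) {
                // swallow: an escaping exception would cancel the periodic task
            }
        }, initialDelayMs, periodMs, TimeUnit.MILLISECONDS);
    }

    void shutdown() {
        scheduler.shutdownNow();
    }
}
```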
The persistence itself is handled by OffsetStore#persistAll. First, LocalFileOffsetStore:
public void persistAll(Set<MessageQueue> mqs) {
if (null == mqs || mqs.isEmpty())
return;
OffsetSerializeWrapper offsetSerializeWrapper = new OffsetSerializeWrapper();
for (Map.Entry<MessageQueue, AtomicLong> entry : this.offsetTable.entrySet()) {
if (mqs.contains(entry.getKey())) {
AtomicLong offset = entry.getValue();
offsetSerializeWrapper.getOffsetTable().put(entry.getKey(), offset);
}
}
String jsonString = offsetSerializeWrapper.toJson(true);
if (jsonString != null) {
try {
// write to the file
MixAll.string2File(jsonString, this.storePath);
} catch (IOException e) {
log.error("persistAll consumer offset Exception, " + this.storePath, e);
}
}
}
This one is simple: it just writes the offset table to a file.
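The shape of persistAll can be sketched without RocketMQ's JSON wrapper: filter the table down to the queues this consumer still owns, then serialize to disk. In the sketch below, java.util.Properties and plain string queue names are stand-ins for OffsetSerializeWrapper and MessageQueue keys.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of LocalFileOffsetStore#persistAll's filter-then-write shape.
class LocalOffsetPersister {
    final Map<String, Long> offsetTable = new ConcurrentHashMap<>();

    void persistAll(Set<String> ownedQueues, Path storePath) throws IOException {
        if (ownedQueues == null || ownedQueues.isEmpty()) {
            return;
        }
        Properties out = new Properties();
        for (Map.Entry<String, Long> e : offsetTable.entrySet()) {
            if (ownedQueues.contains(e.getKey())) { // skip queues we no longer own
                out.setProperty(e.getKey(), Long.toString(e.getValue()));
            }
        }
        try (var w = Files.newBufferedWriter(storePath)) {
            out.store(w, "consumer offsets"); // MixAll.string2File analog
        }
    }
}
```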
Now the RemoteBrokerOffsetStore version:
public void persistAll(Set<MessageQueue> mqs) {
if (null == mqs || mqs.isEmpty())
return;
final HashSet<MessageQueue> unusedMQ = new HashSet<MessageQueue>();
for (Map.Entry<MessageQueue, AtomicLong> entry : this.offsetTable.entrySet()) {
MessageQueue mq = entry.getKey();
AtomicLong offset = entry.getValue();
if (offset != null) {
if (mqs.contains(mq)) {
try {
// push the offset to the broker
this.updateConsumeOffsetToBroker(mq, offset.get());
log.info(...);
} catch (Exception e) {
log.error(...);
}
} else {
unusedMQ.add(mq);
}
}
}
if (!unusedMQ.isEmpty()) {
for (MessageQueue mq : unusedMQ) {
this.offsetTable.remove(mq);
log.info("remove unused mq, {}, {}", mq, this.groupName);
}
}
}
RemoteBrokerOffsetStore#persistAll calls this.updateConsumeOffsetToBroker(...) to submit the offsets to the broker; the update goes through MQClientAPIImpl#updateConsumerOffset:
public void updateConsumerOffset(
final String addr,
final UpdateConsumerOffsetRequestHeader requestHeader,
final long timeoutMillis
) throws RemotingException, MQBrokerException, InterruptedException {
// the request code for updating offsets is UPDATE_CONSUMER_OFFSET
RemotingCommand request = RemotingCommand.createRequestCommand(
RequestCode.UPDATE_CONSUMER_OFFSET, requestHeader);
// send the request over netty
RemotingCommand response = this.remotingClient.invokeSync(
MixAll.brokerVIPChannel(this.clientConfig.isVipChannelEnabled(), addr),
request, timeoutMillis);
assert response != null;
switch (response.getCode()) {
case ResponseCode.SUCCESS: {
return;
}
default:
break;
}
throw new MQBrokerException(response.getCode(), response.getRemark(), addr);
}
How does the broker handle this request? Searching for UPDATE_CONSUMER_OFFSET leads to its processor, ConsumerManageProcessor, whose processRequest method looks like this:
@Override
public RemotingCommand processRequest(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
switch (request.getCode()) {
case RequestCode.GET_CONSUMER_LIST_BY_GROUP:
return this.getConsumerListByGroup(ctx, request);
case RequestCode.UPDATE_CONSUMER_OFFSET:
return this.updateConsumerOffset(ctx, request);
case RequestCode.QUERY_CONSUMER_OFFSET:
return this.queryConsumerOffset(ctx, request);
default:
break;
}
return null;
}
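This switch is a typical request-code dispatcher. The same shape can be sketched as a table of handlers; the numeric codes below follow RocketMQ's RequestCode constants but should be treated as illustrative, and the handlers return labels rather than RemotingCommand objects.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch of processRequest's code-to-handler dispatch.
class RequestDispatcherSketch {
    static final int QUERY_CONSUMER_OFFSET = 14;
    static final int UPDATE_CONSUMER_OFFSET = 15;
    static final int GET_CONSUMER_LIST_BY_GROUP = 38;

    private final Map<Integer, UnaryOperator<String>> handlers = new HashMap<>();

    RequestDispatcherSketch() {
        handlers.put(GET_CONSUMER_LIST_BY_GROUP, req -> "consumerList:" + req);
        handlers.put(UPDATE_CONSUMER_OFFSET, req -> "updated:" + req);
        handlers.put(QUERY_CONSUMER_OFFSET, req -> "queried:" + req);
    }

    String process(int code, String request) {
        UnaryOperator<String> h = handlers.get(code);
        return h == null ? null : h.apply(request); // unknown codes yield null, like the default branch
    }
}
```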
This method handles three request types:

- GET_CONSUMER_LIST_BY_GROUP: fetch all consumers in the given consumer group
- UPDATE_CONSUMER_OFFSET: update the consume offset
- QUERY_CONSUMER_OFFSET: query the consume offset

Here we only care about updating the offset, so into ConsumerManageProcessor#updateConsumerOffset:
private RemotingCommand updateConsumerOffset(ChannelHandlerContext ctx, RemotingCommand request)
throws RemotingCommandException {
final RemotingCommand response =
RemotingCommand.createResponseCommand(UpdateConsumerOffsetResponseHeader.class);
final UpdateConsumerOffsetRequestHeader requestHeader =
(UpdateConsumerOffsetRequestHeader) request
.decodeCommandCustomHeader(UpdateConsumerOffsetRequestHeader.class);
// hand off to the offset manager
this.brokerController.getConsumerOffsetManager().commitOffset(
RemotingHelper.parseChannelRemoteAddr(ctx.channel()), requestHeader.getConsumerGroup(),
requestHeader.getTopic(), requestHeader.getQueueId(), requestHeader.getCommitOffset());
response.setCode(ResponseCode.SUCCESS);
response.setRemark(null);
return response;
}
Following the call chain, we finally arrive at ConsumerOffsetManager:
public class ConsumerOffsetManager extends ConfigManager {
...
/**
* the map holding the offsets
*/
private ConcurrentMap<String/* topic@group */, ConcurrentMap<Integer, Long>> offsetTable =
new ConcurrentHashMap<String, ConcurrentMap<Integer, Long>>(512);
/**
* entry point for the save operation
*/
public void commitOffset(final String clientHost, final String group, final String topic,
final int queueId,
final long offset) {
// topic@group
String key = topic + TOPIC_GROUP_SEPARATOR + group;
// delegate to the overloaded method
this.commitOffset(clientHost, key, queueId, offset);
}
/**
* where the offset is finally stored
*/
private void commitOffset(final String clientHost, final String key, final int queueId,
final long offset) {
ConcurrentMap<Integer, Long> map = this.offsetTable.get(key);
if (null == map) {
map = new ConcurrentHashMap<Integer, Long>(32);
map.put(queueId, offset);
this.offsetTable.put(key, map);
} else {
Long storeOffset = map.put(queueId, offset);
if (storeOffset != null && offset < storeOffset) {
log.warn(...);
}
}
}
...
ConsumerOffsetManager is the broker-side store for consume offsets: a ConcurrentMap whose key is topic@group and whose value is another map from queue id to the consume offset.
From this snippet alone the offsets appear to live only in memory. Note, however, that ConsumerOffsetManager extends ConfigManager: the broker periodically persists this table to the consumerOffset.json config file, so consume offsets are not simply lost when the broker restarts.
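The broker-side table can be sketched independently of RocketMQ. In the sketch below, computeIfAbsent replaces the original get-then-put null check (closing a small race in that pattern); the observable behavior otherwise matches commitOffset/queryOffset.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Standalone sketch of ConsumerOffsetManager's offset table.
class OffsetTableSketch {
    private final ConcurrentMap<String, ConcurrentMap<Integer, Long>> offsetTable =
            new ConcurrentHashMap<>(512);

    void commitOffset(String group, String topic, int queueId, long offset) {
        String key = topic + "@" + group; // TOPIC_GROUP_SEPARATOR is "@"
        offsetTable.computeIfAbsent(key, k -> new ConcurrentHashMap<>(32))
                   .put(queueId, offset);
    }

    long queryOffset(String group, String topic, int queueId) {
        ConcurrentMap<Integer, Long> map = offsetTable.get(topic + "@" + group);
        if (map != null) {
            Long offset = map.get(queueId);
            if (offset != null) {
                return offset;
            }
        }
        return -1; // no offset recorded for this queue
    }
}
```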
6.3 Reading the offset
When the consumer first prepares its pullRequest objects, it loads the consume offset, again in RebalanceImpl#updateProcessQueueTableInRebalance:
private boolean updateProcessQueueTableInRebalance(final String topic,
final Set<MessageQueue> mqSet, final boolean isOrder) {
...
List<PullRequest> pullRequestList = new ArrayList<PullRequest>();
for (MessageQueue mq : mqSet) {
if (!this.processQueueTable.containsKey(mq)) {
...
this.removeDirtyOffset(mq);
ProcessQueue pq = new ProcessQueue();
// compute the offset to pull from
long nextOffset = this.computePullFromWhere(mq);
if (nextOffset >= 0) {
ProcessQueue pre = this.processQueueTable.putIfAbsent(mq, pq);
if (pre != null) {
log.info(...);
} else {
log.info(...);
// build the pullRequest
PullRequest pullRequest = new PullRequest();
pullRequest.setConsumerGroup(consumerGroup);
// set the offset
pullRequest.setNextOffset(nextOffset);
pullRequest.setMessageQueue(mq);
pullRequest.setProcessQueue(pq);
pullRequestList.add(pullRequest);
changed = true;
}
} else {
log.warn(...);
}
}
}
// dispatch the pull requests
this.dispatchPullRequest(pullRequestList);
return changed;
}
The offset is obtained by this line:
// compute the offset to pull from
long nextOffset = this.computePullFromWhere(mq);
which lands in RebalancePushImpl#computePullFromWhere:
public long computePullFromWhere(MessageQueue mq) {
long result = -1;
final ConsumeFromWhere consumeFromWhere = this.defaultMQPushConsumerImpl
.getDefaultMQPushConsumer().getConsumeFromWhere();
// get the offsetStore
final OffsetStore offsetStore = this.defaultMQPushConsumerImpl.getOffsetStore();
switch (consumeFromWhere) {
case CONSUME_FROM_LAST_OFFSET_AND_FROM_MIN_WHEN_BOOT_FIRST:
case CONSUME_FROM_MIN_OFFSET:
case CONSUME_FROM_MAX_OFFSET:
case CONSUME_FROM_LAST_OFFSET: {
// the read operation
long lastOffset = offsetStore.readOffset(
mq, ReadOffsetType.READ_FROM_STORE);
// various checks omitted
...
break;
}
case CONSUME_FROM_FIRST_OFFSET: {
long lastOffset = offsetStore.readOffset(
mq, ReadOffsetType.READ_FROM_STORE);
// various checks omitted
...
break;
}
case CONSUME_FROM_TIMESTAMP: {
long lastOffset = offsetStore.readOffset(
mq, ReadOffsetType.READ_FROM_STORE);
// various checks omitted
...
break;
}
default:
break;
}
return result;
}
The read happens right here; let's look at the local-file read and the remote read in turn.
Local file storage simply reads the file, in LocalFileOffsetStore#readOffset:
public long readOffset(final MessageQueue mq, final ReadOffsetType type) {
if (mq != null) {
switch (type) {
case MEMORY_FIRST_THEN_STORE:
case READ_FROM_MEMORY: {
AtomicLong offset = this.offsetTable.get(mq);
if (offset != null) {
return offset.get();
} else if (ReadOffsetType.READ_FROM_MEMORY == type) {
return -1;
}
}
// note: MEMORY_FIRST_THEN_STORE deliberately falls through to here on a memory miss
case READ_FROM_STORE: {
OffsetSerializeWrapper offsetSerializeWrapper;
try {
// read the local file
offsetSerializeWrapper = this.readLocalOffset();
} catch (MQClientException e) {
return -1;
}
if (offsetSerializeWrapper != null && offsetSerializeWrapper.getOffsetTable() != null) {
AtomicLong offset = offsetSerializeWrapper.getOffsetTable().get(mq);
if (offset != null) {
this.updateOffset(mq, offset.get(), false);
return offset.get();
}
}
}
default:
break;
}
}
return -1;
}
/**
* the file-reading operation
*/
private OffsetSerializeWrapper readLocalOffset() throws MQClientException {
String content = null;
try {
// read the file content into a String
content = MixAll.file2String(this.storePath);
} catch (IOException e) {
log.warn("Load local offset store file exception", e);
}
if (null == content || content.length() == 0) {
return this.readLocalOffsetBak();
} else {
OffsetSerializeWrapper offsetSerializeWrapper = null;
try {
offsetSerializeWrapper =
OffsetSerializeWrapper.fromJson(content, OffsetSerializeWrapper.class);
} catch (Exception e) {
log.warn("readLocalOffset Exception, and try to correct", e);
return this.readLocalOffsetBak();
}
return offsetSerializeWrapper;
}
}
Now the remote case: how is the offset fetched from the broker? Into RemoteBrokerOffsetStore#readOffset:
public long readOffset(final MessageQueue mq, final ReadOffsetType type) {
if (mq != null) {
switch (type) {
case MEMORY_FIRST_THEN_STORE:
case READ_FROM_MEMORY: {
AtomicLong offset = this.offsetTable.get(mq);
if (offset != null) {
return offset.get();
} else if (ReadOffsetType.READ_FROM_MEMORY == type) {
return -1;
}
}
// note: MEMORY_FIRST_THEN_STORE deliberately falls through to here on a memory miss
case READ_FROM_STORE: {
try {
// fetch from the broker
long brokerOffset = this.fetchConsumeOffsetFromBroker(mq);
AtomicLong offset = new AtomicLong(brokerOffset);
this.updateOffset(mq, offset.get(), false);
return brokerOffset;
}
// No offset in broker
catch (MQBrokerException e) {
return -1;
}
// other exceptions
catch (Exception e) {
log.warn("fetchConsumeOffsetFromBroker exception, " + mq, e);
return -2;
}
}
default:
break;
}
}
return -1;
}
/**
* the actual fetch from the broker
*/
private long fetchConsumeOffsetFromBroker(MessageQueue mq) throws RemotingException,
MQBrokerException, InterruptedException, MQClientException {
// locate the broker that owns this queue; the offset is fetched from that broker only
FindBrokerResult findBrokerResult = this.mQClientFactory
.findBrokerAddressInAdmin(mq.getBrokerName());
if (null == findBrokerResult) {
this.mQClientFactory.updateTopicRouteInfoFromNameServer(mq.getTopic());
findBrokerResult = this.mQClientFactory.findBrokerAddressInAdmin(mq.getBrokerName());
}
if (findBrokerResult != null) {
QueryConsumerOffsetRequestHeader requestHeader = new QueryConsumerOffsetRequestHeader();
requestHeader.setTopic(mq.getTopic());
requestHeader.setConsumerGroup(this.groupName);
requestHeader.setQueueId(mq.getQueueId());
// query the broker
return this.mQClientFactory.getMQClientAPIImpl().queryConsumerOffset(
findBrokerResult.getBrokerAddr(), requestHeader, 1000 * 5);
} else {
throw new MQClientException("The broker[" + mq.getBrokerName() + "] not exist", null);
}
}
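fetchConsumeOffsetFromBroker follows a "look up, refresh on miss, retry once, else fail" pattern for broker addresses. A generic sketch of that pattern, where the table and the refresher are made-up stand-ins for the client's broker address table and updateTopicRouteInfoFromNameServer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the lookup/refresh/retry pattern in fetchConsumeOffsetFromBroker.
class BrokerLookupSketch {
    final Map<String, String> brokerAddrTable = new ConcurrentHashMap<>();
    private final Runnable refreshFromNameServer;

    BrokerLookupSketch(Runnable refreshFromNameServer) {
        this.refreshFromNameServer = refreshFromNameServer;
    }

    String findBrokerAddr(String brokerName) {
        String addr = brokerAddrTable.get(brokerName);
        if (addr == null) {
            refreshFromNameServer.run();            // refresh route info on a miss
            addr = brokerAddrTable.get(brokerName); // single retry
        }
        if (addr == null) {
            throw new IllegalStateException("The broker[" + brokerName + "] not exist");
        }
        return addr;
    }
}
```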
The query request is sent in MQClientAPIImpl#queryConsumerOffset:
public long queryConsumerOffset(
final String addr,
final QueryConsumerOffsetRequestHeader requestHeader,
final long timeoutMillis
) throws RemotingException, MQBrokerException, InterruptedException {
RemotingCommand request = RemotingCommand.createRequestCommand(
RequestCode.QUERY_CONSUMER_OFFSET, requestHeader);
RemotingCommand response = this.remotingClient.invokeSync(
MixAll.brokerVIPChannel(this.clientConfig.isVipChannelEnabled(), addr),
request, timeoutMillis);
assert response != null;
switch (response.getCode()) {
case ResponseCode.SUCCESS: {
// decode the result
QueryConsumerOffsetResponseHeader responseHeader =
(QueryConsumerOffsetResponseHeader) response.decodeCommandCustomHeader(
QueryConsumerOffsetResponseHeader.class);
return responseHeader.getOffset();
}
default:
break;
}
throw new MQBrokerException(response.getCode(), response.getRemark(), addr);
}
How does the broker answer this request? ConsumerManageProcessor#processRequest again:
@Override
public RemotingCommand processRequest(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
switch (request.getCode()) {
case RequestCode.GET_CONSUMER_LIST_BY_GROUP:
return this.getConsumerListByGroup(ctx, request);
case RequestCode.UPDATE_CONSUMER_OFFSET:
return this.updateConsumerOffset(ctx, request);
case RequestCode.QUERY_CONSUMER_OFFSET:
return this.queryConsumerOffset(ctx, request);
default:
break;
}
return null;
}
The query is handled in ConsumerManageProcessor#queryConsumerOffset:
private RemotingCommand queryConsumerOffset(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
final RemotingCommand response =
RemotingCommand.createResponseCommand(QueryConsumerOffsetResponseHeader.class);
final QueryConsumerOffsetResponseHeader responseHeader =
(QueryConsumerOffsetResponseHeader) response.readCustomHeader();
final QueryConsumerOffsetRequestHeader requestHeader =
(QueryConsumerOffsetRequestHeader) request
.decodeCommandCustomHeader(QueryConsumerOffsetRequestHeader.class);
// the query
long offset =
this.brokerController.getConsumerOffsetManager().queryOffset(
requestHeader.getConsumerGroup(), requestHeader.getTopic(),
requestHeader.getQueueId());
...
return response;
}
The query finally lands in ConsumerOffsetManager#queryOffset(...):
public long queryOffset(final String group, final String topic, final int queueId) {
// topic@group
String key = topic + TOPIC_GROUP_SEPARATOR + group;
ConcurrentMap<Integer, Long> map = this.offsetTable.get(key);
if (null != map) {
Long offset = map.get(queueId);
if (offset != null)
return offset;
}
return -1;
}
In other words, it simply reads the data back out of ConsumerOffsetManager's offsetTable field.