RocketMQ Source Code Analysis - 02 (Dispatching Records to the consumeQueue & Index Files)


Picking up where we left off with RocketMQ's message storage: last time we walked through how messages are written to the commitLog, which is effectively the root store for all messages. Consumption, however, works the other way around: the consumer first uses an offset to look up the consumeQueue, reads the message's commitLog offset and length from the consumeQueue entry, and only then fetches the actual message from the commitLog.

So the questions for this installment are:

1. How are messages dispatched into the consumeQueue and index files for storage?
2. How does the client use the consumeQueue to find the corresponding message in the commitLog?

Question 1: How are messages dispatched into the consumeQueue and index files?

When the messageStore is initialized, it creates a ReputMessageService. This service's job is to dispatch the messages in the commitLog to the appropriate consumeQueue files (each topic has many consumeQueues under it).


This class indirectly implements the Runnable interface, so we only need to look at its run() method:

@Override
public void run() {
    DefaultMessageStore.log.info(this.getServiceName() + " service started");

    while (!this.isStopped()) {
        try {
            Thread.sleep(1);   // 1 ms interval
            this.doReput();
        } catch (Exception e) {
            DefaultMessageStore.log.warn(this.getServiceName() + " service has exception. ", e);
        }
    }

    DefaultMessageStore.log.info(this.getServiceName() + " service end");
}

So RocketMQ runs a dispatch pass every 1 ms; the main logic lives in doReput().

Next, let's look at the doReput() method:

private void doReput() {
   // ... preliminary checks omitted
   
    for (boolean doNext = true; this.isCommitLogAvailable() && doNext; ) {

      // ... checks omitted

        SelectMappedBufferResult result = DefaultMessageStore.this.commitLog.getData(reputFromOffset);  // fetch commitLog data starting at the reput offset
        if (result != null) {
            try {
                this.reputFromOffset = result.getStartOffset();

                for (int readSize = 0; readSize < result.getSize() && doNext; ) {
                  // parse and validate the message, wrapping it in a DispatchRequest
                    DispatchRequest dispatchRequest =
   DefaultMessageStore.this.commitLog.checkMessageAndReturnSize(result.getByteBuffer(), false, false);
                    int size = dispatchRequest.getBufferSize() == -1 ? dispatchRequest.getMsgSize() : dispatchRequest.getBufferSize();

                    if (dispatchRequest.isSuccess()) {
                        if (size > 0) {
                        // this is where the dispatch actually happens [important]
                            DefaultMessageStore.this.doDispatch(dispatchRequest);

                            if (BrokerRole.SLAVE != DefaultMessageStore.this.getMessageStoreConfig().getBrokerRole()
                                && DefaultMessageStore.this.brokerConfig.isLongPollingEnable()) {
                                DefaultMessageStore.this.messageArrivingListener.arriving(dispatchRequest.getTopic(),
                                    dispatchRequest.getQueueId(), dispatchRequest.getConsumeQueueOffset() + 1,
                                    dispatchRequest.getTagsCode(), dispatchRequest.getStoreTimestamp(),
                                    dispatchRequest.getBitMap(), dispatchRequest.getPropertiesMap());
                            }

                            this.reputFromOffset += size;
                            readSize += size;
                            if (DefaultMessageStore.this.getMessageStoreConfig().getBrokerRole() == BrokerRole.SLAVE) {
                                DefaultMessageStore.this.storeStatsService
                                    .getSinglePutMessageTopicTimesTotal(dispatchRequest.getTopic()).incrementAndGet();
                                DefaultMessageStore.this.storeStatsService
                                    .getSinglePutMessageTopicSizeTotal(dispatchRequest.getTopic())
                                    .addAndGet(dispatchRequest.getMsgSize());
                            }
                        } else if (size == 0) {
                            this.reputFromOffset = DefaultMessageStore.this.commitLog.rollNextFile(this.reputFromOffset);
                            readSize = result.getSize();
                        }
                    } else if (!dispatchRequest.isSuccess()) {

                        if (size > 0) {
                            log.error("[BUG]read total count not equals msg total size. reputFromOffset={}", reputFromOffset);
                            this.reputFromOffset += size;
                        } else {
                            doNext = false;
                            // If user open the dledger pattern or the broker is master node,
                            // it will not ignore the exception and fix the reputFromOffset variable
                            if (DefaultMessageStore.this.getMessageStoreConfig().isEnableDLegerCommitLog() ||
                                DefaultMessageStore.this.brokerConfig.getBrokerId() == MixAll.MASTER_ID) {
                                log.error("[BUG]dispatch message to consume queue error, COMMITLOG OFFSET: {}",
                                    this.reputFromOffset);
                                this.reputFromOffset += result.getSize() - readSize;
                            }
                        }
                    }
                }
            } finally {
                result.release();
            }
        } else {
            doNext = false;
        }
    }
}

The doDispatch() method above is where the actual dispatching happens: it calls CommitLogDispatcher.dispatch(final DispatchRequest request) on each registered dispatcher.

Implementations:

1. CommitLogDispatcherBuildConsumeQueue
2. CommitLogDispatcherBuildIndex

The first builds and stores consumeQueue entries; the second builds the index file. Let's start with how consumeQueue entries are built.
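
Before diving into the real code, here is a minimal, hypothetical sketch of the dispatcher-chain pattern described above: doDispatch hands each DispatchRequest to every registered CommitLogDispatcher in turn. The DispatchRequest stand-in and its fields are simplified for illustration, not RocketMQ's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

public class DispatcherChainSketch {

    // Simplified stand-in for RocketMQ's DispatchRequest.
    static class DispatchRequest {
        final String topic;
        final int queueId;
        DispatchRequest(String topic, int queueId) {
            this.topic = topic;
            this.queueId = queueId;
        }
    }

    interface CommitLogDispatcher {
        void dispatch(DispatchRequest request);
    }

    static final List<CommitLogDispatcher> dispatcherList = new ArrayList<>();
    static final List<String> trace = new ArrayList<>();

    public static void main(String[] args) {
        // Register two dispatchers, mirroring the two implementations above.
        dispatcherList.add(r -> trace.add("buildConsumeQueue:" + r.topic + "-" + r.queueId));
        dispatcherList.add(r -> trace.add("buildIndex:" + r.topic + "-" + r.queueId));

        // doDispatch: every registered dispatcher sees every request.
        DispatchRequest request = new DispatchRequest("TopicTest", 0);
        for (CommitLogDispatcher dispatcher : dispatcherList) {
            dispatcher.dispatch(request);
        }
        System.out.println(trace);
    }
}
```

The point of the chain is that adding a new storage view (e.g. another kind of index) only requires registering one more dispatcher; the reput loop itself never changes.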

@Override
public void dispatch(DispatchRequest request) {
    final int tranType = MessageSysFlag.getTransactionValue(request.getSysFlag());
    switch (tranType) {
        case MessageSysFlag.TRANSACTION_NOT_TYPE:
        case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
            DefaultMessageStore.this.putMessagePositionInfo(request);  // only committed messages become visible to consumers
            break;
        case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
        case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
            break;
    }
}

putMessagePositionInfo

public void putMessagePositionInfo(DispatchRequest dispatchRequest) {
   ConsumeQueue cq = this.findConsumeQueue(dispatchRequest.getTopic(), dispatchRequest.getQueueId());  // look up the consumeQueue for this topic and queueId
   cq.putMessagePositionInfoWrapper(dispatchRequest);
}
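
findConsumeQueue resolves the queue from an in-memory table keyed by topic and queueId. Here is a hedged sketch of that two-level lookup, assuming a ConcurrentHashMap-based table and a trimmed-down ConsumeQueue stand-in (names are illustrative, not the exact RocketMQ fields):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FindConsumeQueueSketch {

    // Minimal stand-in for ConsumeQueue.
    static class ConsumeQueue {
        final String topic;
        final int queueId;
        ConsumeQueue(String topic, int queueId) {
            this.topic = topic;
            this.queueId = queueId;
        }
    }

    // Two-level table: topic -> queueId -> ConsumeQueue.
    static final ConcurrentMap<String, ConcurrentMap<Integer, ConsumeQueue>> consumeQueueTable =
        new ConcurrentHashMap<>();

    static ConsumeQueue findConsumeQueue(String topic, int queueId) {
        // computeIfAbsent keeps concurrent callers from creating duplicate queues.
        return consumeQueueTable
            .computeIfAbsent(topic, t -> new ConcurrentHashMap<>())
            .computeIfAbsent(queueId, id -> new ConsumeQueue(topic, id));
    }

    public static void main(String[] args) {
        ConsumeQueue a = findConsumeQueue("TopicTest", 0);
        ConsumeQueue b = findConsumeQueue("TopicTest", 0);
        // Same topic/queueId always resolves to the same instance.
        System.out.println(a == b);
    }
}
```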

cq.putMessagePositionInfoWrapper

public void putMessagePositionInfoWrapper(DispatchRequest request) {
    final int maxRetries = 30;  // maximum retry count
    boolean canWrite = this.defaultMessageStore.getRunningFlags().isCQWriteable();
    for (int i = 0; i < maxRetries && canWrite; i++) {
        long tagsCode = request.getTagsCode();
        if (isExtWriteEnable()) {  // record extended info such as the filter bitmap and store time
            ConsumeQueueExt.CqExtUnit cqExtUnit = new ConsumeQueueExt.CqExtUnit();
            cqExtUnit.setFilterBitMap(request.getBitMap());
            cqExtUnit.setMsgStoreTime(request.getStoreTimestamp());
            cqExtUnit.setTagsCode(request.getTagsCode());

            long extAddr = this.consumeQueueExt.put(cqExtUnit);
            if (isExtAddr(extAddr)) {
                tagsCode = extAddr;
            } else {
                log.warn("Save consume queue extend fail, So just save tagsCode! {}, topic:{}, queueId:{}, offset:{}", cqExtUnit,
                    topic, queueId, request.getCommitLogOffset());
            }
        }
        // write the consumeQueue entry
        boolean result = this.putMessagePositionInfo(request.getCommitLogOffset(),
            request.getMsgSize(), tagsCode, request.getConsumeQueueOffset());
        if (result) {
            if (this.defaultMessageStore.getMessageStoreConfig().getBrokerRole() == BrokerRole.SLAVE ||
                this.defaultMessageStore.getMessageStoreConfig().isEnableDLegerCommitLog()) {
                this.defaultMessageStore.getStoreCheckpoint().setPhysicMsgTimestamp(request.getStoreTimestamp());
            }
            this.defaultMessageStore.getStoreCheckpoint().setLogicsMsgTimestamp(request.getStoreTimestamp());
            return;
        } else {
            // XXX: warn and notify me
            log.warn("[BUG]put commit log position info to " + topic + ":" + queueId + " " + request.getCommitLogOffset()
                + " failed, retry " + i + " times");

            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                log.warn("", e);
            }
        }
    }

    // XXX: warn and notify me
    log.error("[BUG]consume queue can not write, {} {}", this.topic, this.queueId);
    this.defaultMessageStore.getRunningFlags().makeLogicsQueueError();
}

putMessagePositionInfo(final long offset, final int size, final long tagsCode,final long cqOffset)

Parameters:

1. offset: the message's offset in the commitLog, i.e. the first 8 bytes stored in the consumeQueue entry
2. size: the message size
3. tagsCode: the tag hash code
4. cqOffset: the logical offset within the consumeQueue; since each consumeQueue entry is 20 bytes, this is effectively the index of the entry

private boolean putMessagePositionInfo(final long offset, final int size, final long tagsCode,
    final long cqOffset) {

    if (offset + size <= this.maxPhysicOffset) {
        log.warn("Maybe try to build consume queue repeatedly maxPhysicOffset={} phyOffset={}", maxPhysicOffset, offset);
        return true;
    }
    // build the consumeQueue entry
    this.byteBufferIndex.flip();
    this.byteBufferIndex.limit(CQ_STORE_UNIT_SIZE);
    this.byteBufferIndex.putLong(offset);
    this.byteBufferIndex.putInt(size);
    this.byteBufferIndex.putLong(tagsCode);

    final long expectLogicOffset = cqOffset * CQ_STORE_UNIT_SIZE;
    // get the mapped file covering the expected logical offset
    MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile(expectLogicOffset);
    if (mappedFile != null) {
        // if this mappedFile was just created, fill the gap before the expected offset
        if (mappedFile.isFirstCreateInQueue() && cqOffset != 0 && mappedFile.getWrotePosition() == 0) {
            this.minLogicOffset = expectLogicOffset;
            this.mappedFileQueue.setFlushedWhere(expectLogicOffset);
            this.mappedFileQueue.setCommittedWhere(expectLogicOffset);
            this.fillPreBlank(mappedFile, expectLogicOffset);
            log.info("fill pre blank space " + mappedFile.getFileName() + " " + expectLogicOffset + " "
                + mappedFile.getWrotePosition());
        }

        if (cqOffset != 0) {
            // get the current write position and compare it against the expected write offset
            long currentLogicOffset = mappedFile.getWrotePosition() + mappedFile.getFileFromOffset();

            if (expectLogicOffset < currentLogicOffset) {
                log.warn("Build  consume queue repeatedly, expectLogicOffset: {} currentLogicOffset: {} Topic: {} QID: {} Diff: {}",
                    expectLogicOffset, currentLogicOffset, this.topic, this.queueId, expectLogicOffset - currentLogicOffset);
                return true;
            }

            if (expectLogicOffset != currentLogicOffset) {
                LOG_ERROR.warn(
                    "[BUG]logic queue order maybe wrong, expectLogicOffset: {} currentLogicOffset: {} Topic: {} QID: {} Diff: {}",
                    expectLogicOffset,
                    currentLogicOffset,
                    this.topic,
                    this.queueId,
                    expectLogicOffset - currentLogicOffset
                );
            }
        }
        this.maxPhysicOffset = offset + size; // record the max physical offset
        // append the 20-byte entry
        return mappedFile.appendMessage(this.byteBufferIndex.array());
    }
    return false;
}
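
To make the 20-byte entry layout concrete, here is a small self-contained sketch (illustration only, not RocketMQ code) that encodes an entry the same way byteBufferIndex is filled above, and shows how cqOffset maps to a byte position in the file:

```java
import java.nio.ByteBuffer;

public class CqUnitSketch {

    // Fixed consume-queue entry size: 8 + 4 + 8 bytes.
    static final int CQ_STORE_UNIT_SIZE = 20;

    static byte[] encode(long commitLogOffset, int size, long tagsCode) {
        ByteBuffer buf = ByteBuffer.allocate(CQ_STORE_UNIT_SIZE);
        buf.putLong(commitLogOffset); // where the message lives in the commitLog
        buf.putInt(size);             // total message length in bytes
        buf.putLong(tagsCode);        // tag hash code used for filtering
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] unit = encode(4096L, 123, 777L);
        System.out.println(unit.length);

        // The n-th entry of a queue always starts at byte n * 20,
        // which is exactly the expectLogicOffset computation above.
        long cqOffset = 5;
        System.out.println(cqOffset * CQ_STORE_UNIT_SIZE);
    }
}
```

Because every entry has the same fixed size, locating entry n never requires scanning: it is pure offset arithmetic, which is what makes consumeQueue reads cheap.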

That's the whole path from commitLog dispatch to consumeQueue storage. The index file is recorded in much the same way; the difference is that the index file exists so a message can be looked up by its msgId. Interested readers can check the CommitLogDispatcherBuildIndex class.

Next, let's take a quick look at how the consumer side uses the consumeQueue to reach the commitLog when pulling messages: DefaultMessageStore#getMessage(final String group, final String topic, final int queueId, final long offset, final int maxMsgNums, final MessageFilter messageFilter)

This is the message-pull entry point. Here we only care about how the consumeQueue leads us to the commitLog; the rest is left for the chapter on message consumption.

The key steps:

// get the consumeQueue for this topic and queueId
ConsumeQueue consumeQueue = findConsumeQueue(topic, queueId);

// read consumeQueue entries starting at the given cq offset
SelectMappedBufferResult bufferConsumeQueue = consumeQueue.getIndexBuffer(offset);

for (; i < bufferConsumeQueue.getSize() && i < maxFilterMessageCount; i += ConsumeQueue.CQ_STORE_UNIT_SIZE) {
    // offset in the commitLog
    long offsetPy = bufferConsumeQueue.getByteBuffer().getLong();
    // message length
    int sizePy = bufferConsumeQueue.getByteBuffer().getInt();
    // tag hash code
    long tagsCode = bufferConsumeQueue.getByteBuffer().getLong();

    maxPhyOffsetPulling = offsetPy;

    if (nextPhyFileStartOffset != Long.MIN_VALUE) {
        if (offsetPy < nextPhyFileStartOffset)
            continue;
    }
//.....
    // fetch the message from the commitLog using the offset and length stored in the consumeQueue entry
    SelectMappedBufferResult selectResult = this.commitLog.getMessage(offsetPy, sizePy);
    //...
 
}
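
The read loop above can be mimicked end to end with a tiny self-contained sketch. Everything here is illustrative: the "commitLog" is just a plain byte array, and the queue segment holds two hand-built entries, but the walk in 20-byte steps and the (offsetPy, sizePy) slicing mirror the snippet:

```java
import java.nio.ByteBuffer;

public class CqReadSketch {

    static final int CQ_STORE_UNIT_SIZE = 20;

    public static void main(String[] args) {
        // Build a fake consume-queue segment with two entries.
        ByteBuffer cq = ByteBuffer.allocate(2 * CQ_STORE_UNIT_SIZE);
        cq.putLong(0L).putInt(5).putLong(11L);   // msg #0: commitLog offset 0, 5 bytes
        cq.putLong(5L).putInt(7).putLong(22L);   // msg #1: commitLog offset 5, 7 bytes
        cq.flip();

        byte[] commitLog = "helloworld!!".getBytes(); // 5 + 7 bytes back to back

        for (int i = 0; i < cq.limit(); i += CQ_STORE_UNIT_SIZE) {
            long offsetPy = cq.getLong();  // physical offset in the commitLog
            int sizePy = cq.getInt();      // message length
            long tagsCode = cq.getLong();  // tag hash (filtering would happen here)

            // commitLog.getMessage(offsetPy, sizePy) boils down to this slice:
            String msg = new String(commitLog, (int) offsetPy, sizePy);
            System.out.println(tagsCode + ":" + msg);
        }
    }
}
```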

That's it for how consumeQueue entries are stored and read back. We only covered the core flow here; if you want to go deeper, dig into the source yourself! 0v0