RocketMQ Source Code Analysis (6): Broker Message Reception


Overview: in the previous article, RocketMQ Source Code Analysis (5): Producer Startup and Sending Messages to the Broker, we saw how the producer wraps up a message and sends it to the broker. In this article we continue with what the broker does once it receives that message.

The overall RocketMQ message-processing flow is as follows:

  • Message reception: receiving messages from producers. The handling class is SendMessageProcessor; once the message has been written to the CommitLog file, the reception flow is done.

  • Message dispatch: the broker dispatches messages with ReputMessageService, which starts a thread that continuously dispatches CommitLog entries to the corresponding ConsumeQueue. This step writes two files, ConsumeQueue and IndexFile; once they are written, the dispatch flow is done.

  • Message delivery: sending messages on to consumers. The consumer issues a pull request; the broker handles it with PullMessageProcessor, reads the message via the ConsumeQueue file, and returns it to the consumer, completing the delivery flow.

1. The broker creates a NettyRemotingServer during startup

As we saw in RocketMQ Source Code Analysis (4): Broker Startup Flow, the broker creates a NettyRemotingServer when it starts, binds the port, and waits for client connections. In its start() method:

serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
    .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
    .option(ChannelOption.SO_BACKLOG, 1024)
    .option(ChannelOption.SO_REUSEADDR, true)
    .childOption(ChannelOption.SO_KEEPALIVE, false)
    .childOption(ChannelOption.TCP_NODELAY, true)
    // bind the ip and port
    .localAddress(new InetSocketAddress(this.nettyServerConfig.getBindAddress(),
        this.nettyServerConfig.getListenPort()))
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) {
            ch.pipeline()
                // handshakeHandler: handles the handshake, used to detect whether TLS is enabled
                .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME, handshakeHandler)
                /**
                 * Add five handlers in one batch:
                 * encoder/NettyDecoder: encode/decode the wire protocol
                 * IdleStateHandler: heartbeats / idle-connection handling
                 * connectionManageHandler: connection management
                 * serverHandler: read/write requests, e.g. broker registration and
                 *                producer/consumer topic requests
                 */
                .addLast(defaultEventExecutorGroup,
                    encoder,
                    new NettyDecoder(),
                    new IdleStateHandler(0, 0,
                        nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
                    connectionManageHandler,
                    // serverHandler: handles read/write requests
                    serverHandler
                );
        }
    });

The handler that processes read/write requests is NettyServerHandler.

2. NettyServerHandler: handling Netty read/write requests

NettyServerHandler is the handler that reads and writes data on the channel. Let's look at its source:

class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) {
        int localPort = RemotingHelper.parseSocketAddressPort(ctx.channel().localAddress());
        NettyRemotingAbstract remotingAbstract = NettyRemotingServer.this.remotingServerTable.get(localPort);
        if (localPort != -1 && remotingAbstract != null) {
            // process the request
            remotingAbstract.processMessageReceived(ctx, msg);
            return;
        }
        // The related remoting server has been shutdown, so close the connected channel
        RemotingUtil.closeChannel(ctx.channel());
    }
}

Stepping into processMessageReceived:

public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) {
    if (msg != null) {
        switch (msg.getType()) {
            case REQUEST_COMMAND:
                // handle a request command
                processRequestCommand(ctx, msg);
                break;
            case RESPONSE_COMMAND:
                // handle a response command
                processResponseCommand(ctx, msg);
                break;
            default:
                break;
        }
    }
}

Stepping into processRequestCommand(ctx, msg):

public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
    // look up the Pair in processorTable by request code
    final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
    // fall back to the default processor if none is registered
    final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessorPair : matched;
    final int opaque = cmd.getOpaque();

    if (pair == null) {
        String error = " request type " + cmd.getCode() + " not supported";
        final RemotingCommand response =
            RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
        response.setOpaque(opaque);
        ctx.writeAndFlush(response);
        log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
        return;
    }

    /**
     * Using the pair found by code, build a Runnable and hand it to a thread pool:
     * pair.getObject1() is the processor,
     * pair.getObject2() is the thread pool.
     */
    Runnable run = buildProcessRequestHandler(ctx, cmd, pair, opaque);

    if (pair.getObject1().rejectRequest()) {
        final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                "[REJECTREQUEST]system busy, start flow control for a while");
        response.setOpaque(opaque);
        ctx.writeAndFlush(response);
        return;
    }

    try {
        // submit the Runnable built above to the processor's thread pool
        final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
        // async execute task, current thread returns directly
        pair.getObject2().submit(requestTask);
    } catch (RejectedExecutionException e) {
        // ... rejection handling omitted
    }
}

The logic above:

  • 1. Look up the Pair object for the request code.
  • 2. Wrap the request and the Pair into a Runnable.
  • 3. Submit it to the thread pool for execution.

The key question: where are these Pair objects initialized?

3. When the Pair objects are initialized

Back in RocketMQ Source Code Analysis (5): Producer Startup and Sending Messages to the Broker, we came across BrokerController's initialize() method, which calls registerProcessor():


public void registerProcessor() {
    /*
     * SendMessageProcessor
     */
    sendMessageProcessor.registerSendMessageHook(sendMessageHookList);
    sendMessageProcessor.registerConsumeMessageHook(consumeMessageHookList);

    this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendMessageProcessor, this.sendMessageExecutor);
    this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendMessageProcessor, this.sendMessageExecutor);
    this.remotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendMessageProcessor, this.sendMessageExecutor);
    this.remotingServer.registerProcessor(RequestCode.CONSUMER_SEND_MSG_BACK, sendMessageProcessor, this.sendMessageExecutor);
    // SEND_MESSAGE = 10, the code carried by a plain producer send
    this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendMessageProcessor, this.sendMessageExecutor);
    // SEND_MESSAGE_V2 = 310, the code carried by a v2-header producer send
    this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendMessageProcessor, this.sendMessageExecutor);
    this.fastRemotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendMessageProcessor, this.sendMessageExecutor);
    this.fastRemotingServer.registerProcessor(RequestCode.CONSUMER_SEND_MSG_BACK, sendMessageProcessor, this.sendMessageExecutor);
    // ... many more registrations omitted

Here the broker registers a large number of processors with the remoting server: each processor is wrapped together with a thread pool into a Pair object, which is stored in a map under its request code. Stepping into registerProcessor():

public void registerProcessor(int requestCode, NettyRequestProcessor processor, ExecutorService executor) {
    ExecutorService executorThis = executor;
    if (null == executor) {
        executorThis = this.publicExecutor;
    }
    // wrap the processor and the thread pool into a Pair
    Pair<NettyRequestProcessor, ExecutorService> pair = new Pair<>(processor, executorThis);
    // store it in the map: one Pair per request code
    this.processorTable.put(requestCode, pair);
}

At this point the whole chain is clear: when the broker starts, it creates the Netty server and registers a processor for each request code. When a client connects and sends a request carrying a code, the server looks up the processor registered for that code and lets it handle the request.
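The register-then-dispatch mechanism can be sketched with plain JDK classes. This is only an illustration of the idea, not the real RocketMQ types: RequestProcessor stands in for NettyRequestProcessor, and an Object[] stands in for RocketMQ's Pair class.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the code -> (processor, thread pool) dispatch table.
public class ProcessorTableSketch {
    public interface RequestProcessor { String process(String request); }

    // requestCode -> [processor, executor], as in NettyRemotingAbstract#processorTable
    static final Map<Integer, Object[]> processorTable = new ConcurrentHashMap<>();

    static void registerProcessor(int code, RequestProcessor p, ExecutorService e) {
        processorTable.put(code, new Object[] {p, e});
    }

    // Mirrors processRequestCommand: look up the pair, build a task, submit it.
    static String dispatch(int code, String request) {
        Object[] pair = processorTable.get(code);
        if (pair == null) {
            return "request type " + code + " not supported";
        }
        RequestProcessor processor = (RequestProcessor) pair[0];
        ExecutorService executor = (ExecutorService) pair[1];
        try {
            // the real broker submits asynchronously; we wait here so the demo can print the result
            return executor.submit(() -> processor.process(request)).get();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        ExecutorService sendExecutor = Executors.newFixedThreadPool(2);
        registerProcessor(10, req -> "stored: " + req, sendExecutor); // 10 = SEND_MESSAGE
        System.out.println(dispatch(10, "hello"));  // handled by the registered processor
        System.out.println(dispatch(99, "hello"));  // no processor registered for this code
        sendExecutor.shutdown();
    }
}
```

The design point worth noticing is that each request code can get its own executor, so a flood of one request type (say, sends) cannot starve the thread pool that serves another type (say, pulls).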

4. Receiving the MQ messages sent by producers

At the end of RocketMQ Source Code Analysis (5): Producer Startup and Sending Messages to the Broker, we analyzed how the producer sends a message to the broker. So how does the broker receive it? Let's find out.

4.1 Recap: the data the producer sends

Request header:

// build the request header
SendMessageRequestHeader requestHeader = new SendMessageRequestHeader();
requestHeader.setProducerGroup(this.defaultMQProducer.getProducerGroup());
requestHeader.setTopic(msg.getTopic());
requestHeader.setDefaultTopic(this.defaultMQProducer.getCreateTopicKey());
requestHeader.setDefaultTopicQueueNums(this.defaultMQProducer.getDefaultTopicQueueNums());
requestHeader.setQueueId(mq.getQueueId());
requestHeader.setSysFlag(sysFlag);
requestHeader.setBornTimestamp(System.currentTimeMillis());
requestHeader.setFlag(msg.getFlag());
requestHeader.setProperties(MessageDecoder.messageProperties2String(msg.getProperties()));
requestHeader.setReconsumeTimes(0);
requestHeader.setUnitMode(this.isUnitMode());
requestHeader.setBatch(msg instanceof MessageBatch);

Request command:

RemotingCommand  request = RemotingCommand.createRequestCommand(RequestCode.SEND_MESSAGE, requestHeader);

Request body:

request.setBody(msg.getBody());

4.2 Broker logic for receiving producer messages

When the broker's Netty server receives the message, the processor that handles message sending is SendMessageProcessor:

public RemotingCommand processRequest(ChannelHandlerContext ctx,
    RemotingCommand request) throws RemotingCommandException {
    SendMessageContext traceContext;
    switch (request.getCode()) {
        // an ack message sent back by a consumer
        case RequestCode.CONSUMER_SEND_MSG_BACK:
            return this.consumerSendMsgBack(ctx, request);
        default:
            // data sent by a producer
            SendMessageRequestHeader requestHeader = parseRequestHeader(request);
            if (requestHeader == null) {
                return null;
            }
            TopicQueueMappingContext mappingContext = this.brokerController.getTopicQueueMappingManager().buildTopicQueueMappingContext(requestHeader, true);
            RemotingCommand rewriteResult = this.brokerController.getTopicQueueMappingManager().rewriteRequestForStaticTopic(requestHeader, mappingContext);
            if (rewriteResult != null) {
                return rewriteResult;
            }
            // build the trace context from the data the producer sent over
            traceContext = buildMsgContext(ctx, requestHeader);
            System.out.println("broker received header ==> " + JSON.toJSONString(traceContext));
            System.out.println("broker received body ==> " + new String(request.getBody()));
            String owner = request.getExtFields().get(BrokerStatsManager.COMMERCIAL_OWNER);
            traceContext.setCommercialOwner(owner);
            try {
                this.executeSendMessageHookBefore(ctx, request, traceContext);
            } catch (AbortProcessException e) {
                final RemotingCommand errorResponse = RemotingCommand.createResponseCommand(e.getResponseCode(), e.getErrorMessage());
                errorResponse.setOpaque(request.getOpaque());
                return errorResponse;
            }

            RemotingCommand response;
            if (requestHeader.isBatch()) {
                // ordinary batch message
                response = this.sendBatchMessage(ctx, request, traceContext, requestHeader, mappingContext,
                    (ctx1, response1) -> executeSendMessageHookAfter(response1, ctx1));
            } else {
                // ordinary single message
                response = this.sendMessage(ctx, request, traceContext, requestHeader, mappingContext,
                    (ctx12, response12) -> executeSendMessageHookAfter(response12, ctx12));
            }

            return response;
    }
}

Stepping into the sendMessage method:

public RemotingCommand sendMessage(final ChannelHandlerContext ctx,
    final RemotingCommand request,
    final SendMessageContext sendMessageContext,
    final SendMessageRequestHeader requestHeader,
    final TopicQueueMappingContext mappingContext,
    final SendMessageCallback sendMessageCallback) throws RemotingCommandException {

    final RemotingCommand response = preSend(ctx, request, requestHeader);
    if (response.getCode() != -1) {
        return response;
    }

    final SendMessageResponseHeader responseHeader = (SendMessageResponseHeader) response.readCustomHeader();

    // get the message body
    final byte[] body = request.getBody();

    int queueIdInt = requestHeader.getQueueId();
    TopicConfig topicConfig = this.brokerController.getTopicConfigManager().selectTopicConfig(requestHeader.getTopic());

    // if queueIdInt is negative, pick one of the write queues at random
    if (queueIdInt < 0) {
        queueIdInt = randomQueueId(topicConfig.getWriteQueueNums());
    }
    // wrap the message into a MessageExtBrokerInner
    MessageExtBrokerInner msgInner = new MessageExtBrokerInner();
    msgInner.setTopic(requestHeader.getTopic());
    msgInner.setQueueId(queueIdInt);

    // ... more field population omitted

    if (brokerController.getBrokerConfig().isAsyncSendEnable()) {
        CompletableFuture<PutMessageResult> asyncPutMessageFuture;
        if (sendTransactionPrepareMessage) {
            // transactional message ==> stored via TransactionalMessageService
            asyncPutMessageFuture = this.brokerController.getTransactionalMessageService().asyncPrepareMessage(msgInner);
        } else {
            // ordinary message ==> stored via the MessageStore
            asyncPutMessageFuture = this.brokerController.getMessageStore().asyncPutMessage(msgInner);
        }
        // ... rest of the method omitted

There is a lot of code here, but at its core it does three things:

  • Resolve the queue id.
  • Populate msgInner with the message fields.
  • Call brokerController.getMessageStore().asyncPutMessage(msgInner) to store the data in the CommitLog.
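The queue-id resolution in the first step can be sketched in a few lines. This is a stand-in under the assumption that randomQueueId simply picks a uniform value in [0, writeQueueNums); the method names here are illustrative, not the real broker helpers:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the queueId resolution above: a non-negative id from the producer is
// kept as-is; a negative id means "let the broker pick", and the broker chooses
// one of the topic's write queues at random.
public class QueueIdSketch {
    static int randomQueueId(int writeQueueNums) {
        return ThreadLocalRandom.current().nextInt(writeQueueNums);
    }

    static int resolveQueueId(int requestedQueueId, int writeQueueNums) {
        return requestedQueueId < 0 ? randomQueueId(writeQueueNums) : requestedQueueId;
    }

    public static void main(String[] args) {
        System.out.println(resolveQueueId(2, 4));   // explicit id kept: 2
        System.out.println(resolveQueueId(-1, 4));  // random id in [0, 4)
    }
}
```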

Stepping into brokerController.getMessageStore().asyncPutMessage(msgInner):

/**
 * The core step: hand the message to the CommitLog.
 */
CompletableFuture<PutMessageResult> putResultFuture = this.commitLog.asyncPutMessage(msg);

Stepping in further:

public CompletableFuture<PutMessageResult> asyncPutMessage(final MessageExtBrokerInner msg) {
    // Set the storage time
    if (!defaultMessageStore.getMessageStoreConfig().isDuplicationEnable()) {
        msg.setStoreTimestamp(System.currentTimeMillis());
    }

    // Set the message body CRC (consider the most appropriate setting on the client)
    msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
    // Back to Results
    AppendMessageResult result = null;

    StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();

    String topic = msg.getTopic();

    InetSocketAddress bornSocketAddress = (InetSocketAddress) msg.getBornHost();
    if (bornSocketAddress.getAddress() instanceof Inet6Address) {
        msg.setBornHostV6Flag();
    }

    InetSocketAddress storeSocketAddress = (InetSocketAddress) msg.getStoreHost();
    if (storeSocketAddress.getAddress() instanceof Inet6Address) {
        msg.setStoreHostAddressV6Flag();
    }

    PutMessageThreadLocal putMessageThreadLocal = this.putMessageThreadLocal.get();
    updateMaxMessageSize(putMessageThreadLocal);
    String topicQueueKey = generateKey(putMessageThreadLocal.getKeyBuilder(), msg);
    long elapsedTimeInLock = 0;
    MappedFile unlockMappedFile = null;
    MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();

    long currOffset;
    if (mappedFile == null) {
        currOffset = 0;
    } else { // compute the current write offset
        currOffset = mappedFile.getFileFromOffset() + mappedFile.getWrotePosition();
    }

    int needAckNums = this.defaultMessageStore.getMessageStoreConfig().getInSyncReplicas();
    boolean needHandleHA = needHandleHA(msg);

    if (needHandleHA && this.defaultMessageStore.getBrokerConfig().isEnableControllerMode()) {
        if (this.defaultMessageStore.getHaService().inSyncReplicasNums(currOffset) < this.defaultMessageStore.getMessageStoreConfig().getMinInSyncReplicas()) {
            return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.IN_SYNC_REPLICAS_NOT_ENOUGH, null));
        }
        if (this.defaultMessageStore.getMessageStoreConfig().isAllAckInSyncStateSet()) {
            // -1 means all ack in SyncStateSet
            needAckNums = MixAll.ALL_ACK_IN_SYNC_STATE_SET;
        }
    } else if (needHandleHA && this.defaultMessageStore.getBrokerConfig().isEnableSlaveActingMaster()) {
        int inSyncReplicas = Math.min(this.defaultMessageStore.getAliveReplicaNumInGroup(),
            this.defaultMessageStore.getHaService().inSyncReplicasNums(currOffset));
        needAckNums = calcNeedAckNums(inSyncReplicas);
        if (needAckNums > inSyncReplicas) {
            // Tell the producer, don't have enough slaves to handle the send request
            return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.IN_SYNC_REPLICAS_NOT_ENOUGH, null));
        }
    }

    topicQueueLock.lock(topicQueueKey);
    try {

        boolean needAssignOffset = true;
        if (defaultMessageStore.getMessageStoreConfig().isDuplicationEnable()
            && defaultMessageStore.getMessageStoreConfig().getBrokerRole() != BrokerRole.SLAVE) {
            needAssignOffset = false;
        }
        if (needAssignOffset) {
            defaultMessageStore.assignOffset(msg, getMessageNum(msg));
        }

        PutMessageResult encodeResult = putMessageThreadLocal.getEncoder().encode(msg);
        if (encodeResult != null) {
            return CompletableFuture.completedFuture(encodeResult);
        }
        msg.setEncodedBuff(putMessageThreadLocal.getEncoder().getEncoderBuffer());
        PutMessageContext putMessageContext = new PutMessageContext(topicQueueKey);

    putMessageLock.lock(); // spin or ReentrantLock, depending on store config
        try {
            long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
            this.beginTimeInLock = beginLockTimestamp;

            // ... some code omitted

            // append the message to the mapped file, e.g. {user.home}/store/commitlog/00000000000000000000
            result = mappedFile.appendMessage(msg, this.appendMessageCallback, putMessageContext);

This method is also very long in the source; most of it is trimmed here. The key points:

  1. If the message is a delayed message, first back up the original topic and queueId, then switch to the delay queue's dedicated topic and queueId.
  2. Write the message to the file.
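Point 1 can be sketched as follows. The constant values mirror RocketMQ's ("SCHEDULE_TOPIC_XXXX", "REAL_TOPIC", "REAL_QID"), but the Msg class here is a stand-in for MessageExtBrokerInner, not the real type:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the delayed-message rewrite: back up the original topic/queueId in
// the message properties, then redirect the message to the schedule topic,
// whose queueId encodes the delay level.
public class DelayRewriteSketch {
    static final String SCHEDULE_TOPIC = "SCHEDULE_TOPIC_XXXX";

    static class Msg {
        String topic;
        int queueId;
        int delayTimeLevel;
        Map<String, String> properties = new HashMap<>();
    }

    static void rewriteForDelay(Msg msg, int maxDelayLevel) {
        if (msg.delayTimeLevel <= 0) {
            return; // not a delayed message: leave topic and queueId untouched
        }
        if (msg.delayTimeLevel > maxDelayLevel) {
            msg.delayTimeLevel = maxDelayLevel; // clamp to the largest configured level
        }
        msg.properties.put("REAL_TOPIC", msg.topic);                 // back up the original topic
        msg.properties.put("REAL_QID", String.valueOf(msg.queueId)); // back up the original queueId
        msg.topic = SCHEDULE_TOPIC;           // the schedule service consumes from this topic
        msg.queueId = msg.delayTimeLevel - 1; // one schedule queue per delay level
    }

    public static void main(String[] args) {
        Msg msg = new Msg();
        msg.topic = "orderTopic";
        msg.queueId = 1;
        msg.delayTimeLevel = 3;
        rewriteForDelay(msg, 18);
        System.out.println(msg.topic + " / " + msg.queueId); // SCHEDULE_TOPIC_XXXX / 2
        System.out.println(msg.properties.get("REAL_TOPIC")); // orderTopic
    }
}
```

Once the delay expires, the schedule service restores the message to its real topic/queueId from these backed-up properties and re-appends it, at which point consumers can see it.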

As the message storage architecture diagram shows, storage is built on the following three kinds of files.

(1) CommitLog: the storage body for message content and metadata. It stores the message bodies written by producers; entries are variable-length. A single file is 1G by default. The file name is 20 digits long, left-padded with zeros, and encodes the file's starting offset: 00000000000000000000 is the first file, with starting offset 0 and size 1G = 1073741824 bytes; when it fills up, the next file is 00000000001073741824, with starting offset 1073741824, and so on. Messages are written to the log files sequentially; when a file is full, writing moves on to the next file.
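The naming rule above is just arithmetic, which we can check in a few lines (an illustrative helper, not RocketMQ's own utility):

```java
// The commitlog file name is the file's starting offset, zero-padded to 20 digits.
public class CommitLogNameSketch {
    static final long FILE_SIZE = 1024L * 1024 * 1024; // default mapped file size: 1G

    // Given any global offset, compute the name of the file that contains it.
    static String fileNameForOffset(long globalOffset) {
        long fileStartOffset = globalOffset - (globalOffset % FILE_SIZE);
        return String.format("%020d", fileStartOffset);
    }

    public static void main(String[] args) {
        System.out.println(fileNameForOffset(0));           // 00000000000000000000
        System.out.println(fileNameForOffset(1073741824L)); // 00000000001073741824
    }
}
```

Because the name is the starting offset, the broker can locate any message's file with a single division instead of an index lookup.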

(2) ConsumeQueue: the message consumption queue, introduced mainly to make consumption efficient. Since RocketMQ uses topic-based subscription, consumption is per topic, and scanning the commitlog files to find messages for a given topic would be very slow. Instead, a consumer locates the messages to consume through the ConsumeQueue. The ConsumeQueue (logical consumption queue) acts as the consumption index: for the queues of a given topic, it records each message's starting physical offset in the CommitLog, the message size, and the hash code of the message tag. A consumequeue file can therefore be seen as a topic-based index over the commitlog, organized in three levels, topic/queue/file, stored at $HOME/store/consumequeue/{topic}/{queueId}/{fileName}. ConsumeQueue entries are fixed-length: each is 20 bytes, made up of an 8-byte commitlog physical offset, a 4-byte message size, and an 8-byte tag hashcode. A single file holds 300,000 entries and can be accessed randomly like an array, so each ConsumeQueue file is about 5.72M.
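The fixed 20-byte entry layout makes those numbers easy to verify. Here is a sketch that encodes one entry and computes the per-file size (an assumed helper for illustration, not the real ConsumeQueue class):

```java
import java.nio.ByteBuffer;

// One ConsumeQueue entry: 8-byte commitlog offset + 4-byte size + 8-byte tag hashcode = 20 bytes.
public class ConsumeQueueEntrySketch {
    static final int CQ_STORE_UNIT_SIZE = 20;
    static final int ENTRIES_PER_FILE = 300_000;

    static byte[] encode(long commitLogOffset, int size, long tagsCode) {
        ByteBuffer buf = ByteBuffer.allocate(CQ_STORE_UNIT_SIZE);
        buf.putLong(commitLogOffset);
        buf.putInt(size);
        buf.putLong(tagsCode);
        return buf.array();
    }

    // Entries are fixed-length, so the Nth entry can be addressed like an array element.
    static long entryOffsetInFile(long consumeQueueIndex) {
        return (consumeQueueIndex % ENTRIES_PER_FILE) * CQ_STORE_UNIT_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(encode(0L, 100, 42L).length);                  // 20
        System.out.println((long) ENTRIES_PER_FILE * CQ_STORE_UNIT_SIZE); // 6000000 bytes, about 5.72M
        System.out.println(entryOffsetInFile(3));                         // 60
    }
}
```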

(3) IndexFile: the index file, which allows messages to be queried by key or by time range. IndexFiles are stored at $HOME/store/index/{fileName}, where the file name is the timestamp at creation. A single IndexFile has a fixed size of about 400M and can hold 20 million index entries. Its on-disk layout implements a HashMap-like structure in the file system, so RocketMQ's index files are, at bottom, hash indexes.

As the storage architecture above shows, RocketMQ uses a hybrid storage layout: all queues under a single Broker instance share one log data file (the CommitLog). This hybrid layout (messages of many topics all stored in one CommitLog) separates data from indexes on both the Producer and the Consumer side: the Producer sends a message to the Broker, and the Broker flushes it to the CommitLog either synchronously or asynchronously. Once a message has been flushed and persisted to the CommitLog file on disk, the Producer's message cannot be lost.

And because of that, the Consumer is guaranteed a chance to consume the message. If a pull finds no message, the consumer can simply pull again later; the server also supports long polling: if a pull request finds nothing, the Broker can hold the request for up to 30s, and as soon as a new message arrives within that window it is returned to the consumer directly. Concretely, RocketMQ runs a background service thread on the Broker, ReputMessageService, which continuously dispatches requests and asynchronously builds the ConsumeQueue (logical consumption queue) and IndexFile data.
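The long-polling idea can be sketched with one CompletableFuture per held request. This is only an illustration of the mechanism; the real implementation lives in the broker's PullRequestHoldService and is woken by ReputMessageService:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Sketch of broker-side long polling: a pull that finds nothing is parked,
// and completed as soon as a new message for the topic is dispatched
// (or with an empty result when the hold timeout fires).
public class LongPollSketch {
    private final Map<String, CompletableFuture<String>> heldRequests = new ConcurrentHashMap<>();

    CompletableFuture<String> pull(String topic, String availableMessage, long holdTimeoutMs) {
        if (availableMessage != null) {
            return CompletableFuture.completedFuture(availableMessage); // data ready: answer at once
        }
        CompletableFuture<String> parked = new CompletableFuture<>();
        parked.completeOnTimeout("NO_NEW_MESSAGE", holdTimeoutMs, TimeUnit.MILLISECONDS);
        heldRequests.put(topic, parked); // hold the request (the real broker holds up to 30s)
        return parked;
    }

    // Called when a new message for the topic arrives.
    void notifyMessageArriving(String topic, String message) {
        CompletableFuture<String> parked = heldRequests.remove(topic);
        if (parked != null) {
            parked.complete(message); // wake the held pull immediately
        }
    }

    public static void main(String[] args) {
        LongPollSketch broker = new LongPollSketch();
        CompletableFuture<String> pull = broker.pull("orderTopic", null, 30_000);
        broker.notifyMessageArriving("orderTopic", "msg-1");
        System.out.println(pull.join()); // msg-1
    }
}
```

Holding the request on the broker gives near-push latency while keeping the simplicity of a pull model: the consumer never busy-loops, and the broker never pushes to a consumer that is not asking.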

5. Summary

This article analyzed how the broker receives messages from producers. The flow:

  1. The underlying service that receives messages is Netty, started in the BrokerController#start method.

  2. Inside the Netty server, the ChannelHandler that receives messages is NettyServerHandler, which ultimately calls SendMessageProcessor#processRequest to handle them.

  3. At the end of the reception flow, MappedFile#appendMessage(...) writes the message content into the CommitLog file.