Introduction
RocketMQ is distributed, queue-model message middleware.
Features
- Guarantees strict message ordering
- Rich message pull modes
- Efficient horizontal scaling of subscribers
- Real-time message subscription
- Capacity to accumulate hundreds of millions of messages
Why we chose it
- No single point in the cluster; every node is highly available and the cluster scales horizontally.
- Massive message-accumulation capacity; write latency stays low even with a large backlog.
- Supports tens of thousands of queues
- Retry mechanism for failed messages
- Messages can be queried
- Active open-source community
- Mature (battle-tested by Double 11)
Key concepts
Topic: the first-level message type, like a book's title
Tags: the second-level message type, like a book's table of contents; messages can be filtered by Tag
Producer group: used for sending messages
Consumer group: used for subscribing to and processing messages
Note: producer and consumer groups make it easy to add or remove machines and adjust processing capacity. The group name marks an instance as a member of the group, and each message is delivered to only one randomly chosen member of the group.
RocketMQ installation
mkdir /usr/local/apache-rocketmq
tar -zxvf apache-rocketmq.tar.gz -C /usr/local/apache-rocketmq
# with -s: creates a symbolic link that points at the source path and takes no extra disk space
# without -s: creates a hard link that shares the source file's inode
ln -s apache-rocketmq rocketmq
mkdir /usr/local/rocketmq/store
mkdir /usr/local/rocketmq/store/commitlog
mkdir /usr/local/rocketmq/store/consumequeue
mkdir /usr/local/rocketmq/store/index
Edit /usr/local/rocketmq/conf/2m-2s-sync/broker-a.properties, whose default contents are:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH
Change it to:
#cluster this broker belongs to
brokerClusterName=rocketmq-cluster
#broker name; note this differs between the configuration files
brokerName=broker-a|broker-b
#0 means Master, >0 means Slave
brokerId=0
#nameServer addresses, separated by semicolons
namesrvAddr=rocketmq-nameserver1:9876;rocketmq-nameserver2:9876
#number of queues created when a topic missing on the server is auto-created at send time
defaultTopicQueueNums=4
#whether the Broker may auto-create Topics; recommended on in development, off in production
autoCreateTopicEnable=true
#whether the Broker may auto-create subscription groups; recommended on in development, off in production
autoCreateSubscriptionGroup=true
#port the Broker listens on
listenPort=10911
#time of day to delete expired files, default 4 a.m.
deleteWhen=04
#file retention time in hours (default 48)
fileReservedTime=120
#size of each commitLog file, default 1G
mapedFileSizeCommitLog=1073741824
#entries per ConsumeQueue file, default 300,000; tune to your workload
mapedFileSizeConsumeQueue=300000
#destroyMapedFileIntervalForcibly=120000
#redeleteHangedFileInterval=120000
#maximum disk usage ratio for the physical files
diskMaxUsedSpaceRatio=88
#storage root path
storePathRootDir=/usr/local/rocketmq/store
#commitLog storage path
storePathCommitLog=/usr/local/rocketmq/store/commitlog
#consume queue storage path
storePathConsumeQueue=/usr/local/rocketmq/store/consumequeue
#message index storage path
storePathIndex=/usr/local/rocketmq/store/index
#checkpoint file path
storeCheckpoint=/usr/local/rocketmq/store/checkpoint
#abort file path
abortFile=/usr/local/rocketmq/store/abort
#maximum message size
maxMessageSize=65536
#flushCommitLogLeastPages=4
#flushConsumeQueueLeastPages=2
#flushCommitLogThoroughInterval=10000
#flushConsumeQueueThoroughInterval=60000
#Broker role
#- ASYNC_MASTER: master with asynchronous replication
#- SYNC_MASTER: master with synchronous double-write
#- SLAVE
brokerRole=ASYNC_MASTER
#flush mode
#- ASYNC_FLUSH: asynchronous flush
#- SYNC_FLUSH: synchronous flush
flushDiskType=ASYNC_FLUSH
#checkTransactionMessageEnable=false
#send-message thread pool size
#sendMessageThreadPoolNums=128
#pull-message thread pool size
#pullMessageThreadPoolNums=128
Logs
mkdir -p /usr/local/rocketmq/logs
cd /usr/local/rocketmq/conf && sed -i 's#${user.home}#/usr/local/rocketmq#g' *.xml
Start
# JVM startup parameters can be adjusted in runserver.sh
nohup sh mqnamesrv &
# JVM startup parameters can be adjusted in runbroker.sh
nohup sh mqbroker -c /usr/local/rocketmq/conf/2m-2s-sync/broker-a.properties >/dev/null 2>&1 &
Check the logs
tail -f -n 500 /usr/local/rocketmq/logs/rocketmqlogs/broker.log
tail -f -n 500 /usr/local/rocketmq/logs/rocketmqlogs/namesrv.log
Shutdown
# stop the broker first
sh mqshutdown broker
# then stop the namesrv
sh mqshutdown namesrv
rocketmq-console installation
Download it, change rocketmq.config.namesrvAddr to your name servers, then rebuild and run:
mvn clean package -Dmaven.test.skip=true
Usage
Producer
Metadata is synchronized by scheduled tasks over Netty, while the commitlog (the actual message data) is replicated from master to slave in real time over raw sockets.
// the producer group name must be unique within one application
DefaultMQProducer producer = new DefaultMQProducer("test_producer");
producer.setNamesrvAddr(Const.NAMESRV_ADDR_SINGLE);
producer.setVipChannelEnabled(false);
producer.start();
for (int i = 0; i < 5; i++) {
    Message message = new Message("test_topic",     // topic
            "TagA",                                 // tag
            "key" + i,                              // user-defined unique key
            ("Hello RocketMQ" + i).getBytes());     // message body (byte[])
    SendResult sendResult = producer.send(message);
    System.out.println(sendResult);
}
producer.shutdown();
By default a topic gets four queues; this can be changed via configuration (defaultTopicQueueNums). Sample output:
SendResult [sendStatus=SEND_OK, msgId=C0A81F962323135FBAA48E570D7E0000, offsetMsgId=784CDABE00002A9F0000000000000000, messageQueue=MessageQueue [topic=test_topic, brokerName=broker-a, queueId=1], queueOffset=0]
SendResult [sendStatus=SEND_OK, msgId=C0A81F962323135FBAA48E570DC60001, offsetMsgId=784CDABE00002A9F00000000000000BC, messageQueue=MessageQueue [topic=test_topic, brokerName=broker-a, queueId=2], queueOffset=0]
SendResult [sendStatus=SEND_OK, msgId=C0A81F962323135FBAA48E570FA90002, offsetMsgId=784CDABE00002A9F0000000000000178, messageQueue=MessageQueue [topic=test_topic, brokerName=broker-a, queueId=3], queueOffset=0]
SendResult [sendStatus=SEND_OK, msgId=C0A81F962323135FBAA48E570FD30003, offsetMsgId=784CDABE00002A9F0000000000000234, messageQueue=MessageQueue [topic=test_topic, brokerName=broker-a, queueId=0], queueOffset=0]
SendResult [sendStatus=SEND_OK, msgId=C0A81F962323135FBAA48E570FEE0004, offsetMsgId=784CDABE00002A9F00000000000002F0, messageQueue=MessageQueue [topic=test_topic, brokerName=broker-a, queueId=1], queueOffset=1]
Other notes
# delayed messages support only a fixed set of delay levels
message.setDelayTimeLevel(3);
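The available levels come from the broker's default messageDelayLevel table (1s 5s 10s 30s 1m ... 2h, as the broker log later in these notes shows). A throwaway helper, hypothetical and not part of the RocketMQ client, to see what delay a given level maps to:

```java
// Hypothetical helper (not part of the RocketMQ client): maps a delay level
// to its duration, using the broker's default messageDelayLevel table.
public class DelayLevels {
    static final String LEVELS =
            "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";

    // Levels are 1-based: setDelayTimeLevel(3) selects the third entry, 10s.
    static long toMillis(int level) {
        String token = LEVELS.split(" ")[level - 1];
        long value = Long.parseLong(token.substring(0, token.length() - 1));
        switch (token.charAt(token.length() - 1)) {
            case 's': return value * 1_000L;
            case 'm': return value * 60_000L;
            case 'h': return value * 3_600_000L;
            default:  throw new IllegalArgumentException(token);
        }
    }

    public static void main(String[] args) {
        System.out.println("level 3 delays the message by " + toMillis(3) / 1000 + "s");
    }
}
```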
# a message can be sent to a specific queue (4 per topic by default)
SendResult sr = producer.send(message, new MessageQueueSelector() {
    @Override
    public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
        Integer queueNumber = (Integer) arg;
        return mqs.get(queueNumber);
    }
}, 2);
# asynchronous send
producer.send(message, new SendCallback() {
    @Override
    public void onSuccess(SendResult sendResult) {
        System.err.println("msgId: " + sendResult.getMsgId() + ", status: " + sendResult.getSendStatus());
    }

    @Override
    public void onException(Throwable e) {
        e.printStackTrace();
        System.err.println("------ send failed");
    }
});
# SendResult status values: all but the first require compensation
SEND_OK,
FLUSH_DISK_TIMEOUT,
FLUSH_SLAVE_TIMEOUT,
SLAVE_NOT_AVAILABLE;
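A minimal sketch of that compensation decision, using a stand-in enum that mirrors the four SendStatus values listed above (the real enum lives in the RocketMQ client library):

```java
public class SendCompensation {
    // Hypothetical stand-in mirroring the four SendStatus values above;
    // the real type is org.apache.rocketmq.client.producer.SendStatus.
    public enum SendStatus { SEND_OK, FLUSH_DISK_TIMEOUT, FLUSH_SLAVE_TIMEOUT, SLAVE_NOT_AVAILABLE }

    // Everything except SEND_OK means the message reached the master but the
    // flush or replication step did not complete, so log and compensate (e.g. resend).
    public static boolean needsCompensation(SendStatus status) {
        return status != SendStatus.SEND_OK;
    }

    public static void main(String[] args) {
        System.out.println(needsCompensation(SendStatus.FLUSH_SLAVE_TIMEOUT));
    }
}
```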
Consumer
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("test_consumer");
consumer.setNamesrvAddr(Const.NAMESRV_ADDR_SINGLE);
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
consumer.subscribe("test_topic", "*");
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        MessageExt me = msgs.get(0);
        try {
            String topic = me.getTopic();
            String tags = me.getTags();
            String keys = me.getKeys();
            if (keys.equals("key2")) {
                System.err.println("simulating the failure-retry mechanism");
                throw new RuntimeException("simulating the failure-retry mechanism");
            }
            String msgBody = new String(me.getBody(), RemotingHelper.DEFAULT_CHARSET);
            System.out.println("topic: " + topic + ", tags: " + tags + ", keys: " + keys + ", body: " + msgBody);
        } catch (Exception e) {
            e.printStackTrace();
            int reconsumeTimes = me.getReconsumeTimes();
            if (reconsumeTimes == 2) {
                System.out.println("giving up after 2 retries; log, compensate, or alert here");
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;
        }
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});
consumer.start();
Failure retries are implemented inside RocketMQ by a scheduled task
INFO main - messageDelayLevel=1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h
After 2 retries the program does its own compensation; sample output:
topic: test_topic,tags: TagA, keys: key0,body: Hello RocketMQ0
topic: test_topic,tags: TagA, keys: key1,body: Hello RocketMQ1
simulating the failure-retry mechanism
java.lang.RuntimeException: simulating the failure-retry mechanism
at com.bfxy.rocketmq.quickstart.Consumer$1.consumeMessage(Consumer.java:36)
at org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService$ConsumeRequest.run(ConsumeMessageConcurrentlyService.java:417)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
topic: test_topic,tags: TagA, keys: key3,body: Hello RocketMQ3
topic: test_topic,tags: TagA, keys: key4,body: Hello RocketMQ4
simulating the failure-retry mechanism
java.lang.RuntimeException: simulating the failure-retry mechanism
at com.bfxy.rocketmq.quickstart.Consumer$1.consumeMessage(Consumer.java:36)
at org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService$ConsumeRequest.run(ConsumeMessageConcurrentlyService.java:417)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
simulating the failure-retry mechanism
giving up after 2 retries; log, compensate, or alert here
java.lang.RuntimeException: simulating the failure-retry mechanism
at com.bfxy.rocketmq.quickstart.Consumer$1.consumeMessage(Consumer.java:36)
at org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService$ConsumeRequest.run(ConsumeMessageConcurrentlyService.java:417)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Other notes
- Cluster mode (MessageModel.CLUSTERING)
1. Offsets are stored on the broker. ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET only applies the first time the group starts (consume from the tail); after a disconnect the group resumes from its stored offset.
2. A consumer group organizes consumers together and load-balances across them at queue granularity. If the group subscribes only to TagA while the queue also contains TagB, the TagB messages are filtered out (treated as consumed).
3. The wildcard * is supported
4. TagA || TagB expressions are supported
- Broadcast mode (MessageModel.BROADCASTING)
Each consumer keeps its offset locally and consumes every message in the topic
- offset
An offset is the position of a message inside a MessageQueue of a topic; a message can be addressed by its offset. There are two store implementations: RemoteBrokerOffsetStore in cluster mode and LocalFileOffsetStore in broadcast mode
- Push mode is essentially long polling
The broker holds each pull request for up to 15 s, re-checking roughly every 5 s; each request blocks for a while at the channel level
- Pull mode
Steps
1. Fetch the topic's Message Queues and iterate over them
2. Maintain the offsetStore yourself (e.g. in Redis) (DefaultMQPullConsumer)
2.1 Alternatively let MQPullConsumerScheduleService manage it
// fetch the offset
long offset = consumer.fetchConsumeOffset(mq, false);
// update the offset
consumer.updateConsumeOffset(mq, pullResult.getNextBeginOffset());
3. Handle each pull status differently
switch (pullResult.getPullStatus()) {
    case FOUND:
        List<MessageExt> list = pullResult.getMsgFoundList();
        for (MessageExt msg : list) {
            // consume the data...
            System.out.println(new String(msg.getBody()));
        }
        break;
    case NO_MATCHED_MSG:
        break;
    case NO_NEW_MSG:
    case OFFSET_ILLEGAL:
        break;
    default:
        break;
}
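In pull mode the offset bookkeeping in step 2 is the caller's job. A minimal in-memory sketch of such an offsetStore (a HashMap standing in for Redis; the "topic@broker@queueId" key format is a made-up convention, not a RocketMQ API):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for the external offsetStore suggested in step 2
// (Redis in practice). Keys like "test_topic@broker-a@0" are an assumed format.
public class SimpleOffsetStore {
    private final Map<String, Long> offsets = new HashMap<>();

    // Offset to start pulling from; 0 for a queue never seen before.
    public long fetchConsumeOffset(String queueKey) {
        return offsets.getOrDefault(queueKey, 0L);
    }

    // Record pullResult.getNextBeginOffset() after a successful pull.
    public void updateConsumeOffset(String queueKey, long nextBeginOffset) {
        offsets.put(queueKey, nextBeginOffset);
    }

    public static void main(String[] args) {
        SimpleOffsetStore store = new SimpleOffsetStore();
        store.updateConsumeOffset("test_topic@broker-a@0", 32L);
        System.out.println(store.fetchConsumeOffset("test_topic@broker-a@0"));
    }
}
```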
Core principles
Message storage structure
Sequential writes, random reads. Messages are appended to the commitlog; consumer groups consume through the Consume Queue.
Asynchronous flush
A write succeeds once it is in memory; accumulated entries are flushed to disk in batches.
Synchronous flush
A write returns only after it is in memory and flushed to disk.
Synchronous replication
producer -> master and slave both persist -> return. Asynchronous replication: producer -> master persists -> return success -> async task -> slave persists.
Recommendation
Asynchronous flush with synchronous replication
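Mapped to broker.properties, the "asynchronous flush with synchronous replication" recommendation would read (using the role and flush options listed in the configuration section):

```properties
# master waits for the slave's acknowledgement before acking the producer (synchronous double-write)
brokerRole=SYNC_MASTER
# a write is acked once it is in memory; flushing to disk happens asynchronously
flushDiskType=ASYNC_FLUSH
```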
High availability
master (brokerId=0, read/write), slave (brokerId>0, read-only); reads switch to the slave only when the master is too busy
Clusters
Single node
Master-slave
If the master goes down, the slave keeps serving consumers; when the master comes back online, it syncs the consumption offsets.
Dual master
Multi-master multi-slave
2m-2s-sync guarantees the message is written synchronously into memory on both master and slave; configure asynchronous flush on each broker.
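For one master-slave pair in 2m-2s-sync, the per-broker settings would look roughly like this (the slave file name broker-a-s.properties is an assumption; brokerId semantics are as described in the configuration section):

```properties
# broker-a.properties (master: brokerId=0)
brokerName=broker-a
brokerId=0
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH

# broker-a-s.properties (slave: brokerId>0)
brokerName=broker-a
brokerId=1
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH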