[Quick Start] Message Queues in Practice: Producing and Consuming


Producing and consuming with Spring Boot and Kafka

A previous post covered basic Kafka command-line operations. This time, let's put it into practice: producing and consuming messages with Spring.

Dependencies

First, add the Maven dependency:

 <dependency>
     <groupId>org.springframework.kafka</groupId>
     <artifactId>spring-kafka</artifactId>
     <version>2.3.3.RELEASE</version>
 </dependency>

Configuration

Write the application.yml configuration:

spring:
  # Kafka settings, bound to the KafkaProperties configuration class
  kafka:
    bootstrap-servers: 172.27.51.211:9092 # Kafka broker address(es); separate multiple brokers with commas
    # Kafka producer settings
    producer:
      acks: 1 # 0 - no acknowledgement; 1 - leader acknowledges; all - leader and all in-sync replicas acknowledge
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # serializer for the message key
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for the message value
    # Kafka consumer settings
    consumer:
      auto-offset-reset: earliest # start a new consumer group from the earliest offset; see https://blog.csdn.net/lishuangzhe7047/article/details/74530417 for background
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.wqy.cloud.kafka.message
    # Kafka listener settings
    listener:
      missing-topics-fatal: false # by default startup fails when a listened-to topic does not exist; false suppresses that error

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka INFO logging is noisy, so only print ERROR
      apache:
        kafka: ERROR # kafka-clients INFO logging is noisy, so only print ERROR
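One optional addition: instead of hard-coding the consumer group on each listener annotation, Spring Boot lets you set a default group via the standard `spring.kafka.consumer.group-id` property. A minimal fragment (the group name here is just an example):

```yaml
spring:
  kafka:
    consumer:
      group-id: demo-consumer-group # default group used when @KafkaListener does not specify groupId
```

A `groupId` given on a `@KafkaListener` annotation overrides this default.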

Producer

The producer is the side that sends messages; it must specify the target topic.

import javax.annotation.Resource;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import com.wqy.cloud.kafka.message.Message;

@Component
public class Demo01Producer {

    @Resource
    private KafkaTemplate<Object, Object> kafkaTemplate;

    public ListenableFuture<SendResult<Object, Object>> asyncSend(Integer id) {
        // Build the message payload
        Message message = new Message();
        message.setId(id);
        // Send asynchronously; the returned future completes with the broker's acknowledgement
        return kafkaTemplate.send(Message.TOPIC, message);
    }

}
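The `Message` payload class itself is not shown in the article. A minimal sketch could look like the following; the topic name `demo_wqy` comes from the logs below and the package from the trusted-packages config, but the field layout beyond `id` is an assumption:

```java
// Sketch of the payload class (would live in com.wqy.cloud.kafka.message,
// the package trusted by spring.json.trusted.packages).
// The configured JsonSerializer turns it into JSON on send.
public class Message {

    // Topic name as seen in the article's logs
    public static final String TOPIC = "demo_wqy";

    private Integer id;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    @Override
    public String toString() {
        return "Message{id=" + id + "}";
    }
}
```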

Consumer

The consumer reads messages from a topic and must specify both the topic and a consumer group. Within a single group, each partition is owned by exactly one member, so a record is delivered to only one consumer per group; if you want another consumer to also receive every message (like Consumer2 in the logs below), register it under a different group ID.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import com.wqy.cloud.kafka.message.Message;

@Component
public class Consumer1 {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @KafkaListener(topics = Message.TOPIC,
            groupId = "demo-consumer1-group-" + Message.TOPIC)
    public void onMessage(ConsumerRecord<Integer, String> record) {
        logger.info("[onMessage1][thread id:{} message:{}]", Thread.currentThread().getId(), record);
    }
}
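The delivery rules described above (every group receives each record once; inside a group, a partition belongs to exactly one member) can be sketched with a toy model. This is only an illustration of the idea, not Kafka's actual partition-assignment algorithm:

```java
import java.util.List;

// Toy model of Kafka fan-out: each consumer group independently receives
// every record, but within one group a given partition is consumed by
// exactly one member.
public class GroupSemanticsDemo {

    // Simplified stand-in for Kafka's partition assignment:
    // map a partition to one member of the group.
    public static String ownerOf(int partition, List<String> groupMembers) {
        return groupMembers.get(partition % groupMembers.size());
    }

    public static void main(String[] args) {
        List<String> group1 = List.of("consumer1-a", "consumer1-b");
        List<String> group2 = List.of("consumer2-a");
        int partition = 0;
        // The same record on partition 0 is delivered once per group
        System.out.println("group1 delivery -> " + ownerOf(partition, group1));
        System.out.println("group2 delivery -> " + ownerOf(partition, group2));
    }
}
```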

Testing send and receive

@Test
public void testSyncSend() throws ExecutionException, InterruptedException {
    int id = (int) (System.currentTimeMillis() / 1000);
    // Block on the future so the send is effectively synchronous
    SendResult<Object, Object> result = producer.asyncSend(id).get();
    logger.info("[testSyncSend][send id:[{}] result:[{}]]", id, result);

    // Block the test thread so the listener has time to consume the message
    new CountDownLatch(1).await();
}

Check the logs:

2021-10-18 20:14:07.676  INFO 117476 --- [           main] c.w.c.kafka.producer.Demo01ProducerTest  : [testSyncSend][send id:[1634559247] result:[SendResult [producerRecord=ProducerRecord(topic=demo_wqy, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = __TypeId__, value = [99, 111, 109, 46, 119, 113, 121, 46, 99, 108, 111, 117, 100, 46, 107, 97, 102, 107, 97, 46, 109, 101, 115, 115, 97, 103, 101, 46, 77, 101, 115, 115, 97, 103, 101])], isReadOnly = true), key=null, value=Demo01Message{id=1634559247}, timestamp=null), recordMetadata=demo_wqy-0@3]]]
2021-10-18 20:14:10.644  INFO 117476 --- [ntainer#0-0-C-1] com.wqy.cloud.kafka.consumer.Consumer1   : [onMessage1][thread id:17 message:ConsumerRecord(topic = demo_wqy, partition = 0, leaderEpoch = 0, offset = 3, CreateTime = 1634559247652, serialized key size = -1, serialized value size = 17, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = Demo01Message{id=1634559247})]
2021-10-18 20:14:10.649  INFO 117476 --- [ntainer#1-0-C-1] com.wqy.cloud.kafka.consumer.Consumer2   : [onMessage2][thread id:19 message:Demo01Message{id=1634559247}]

Wrap-up

Seeing both the send result and the consumed message content in the logs confirms the round trip: a minimal example of producing one message and consuming it is complete.