Some less-common spring-kafka operation classes


## **GenericMessageListener**

 

data : the type of data is not actually restricted; it is decided by the type the KafkaTemplate declares. When data is a List, the listener is used for batch consumption.
ConsumerRecord : the full consumed record, carrying headers, partition information, timestamp and so on.
Acknowledgment : the interface used for the Ack mechanism.
Consumer : the consumer itself; through it we can commit offsets manually, control the consumption rate, and more (see the sketch after this list).
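
To make the parameter list concrete, here is a minimal sketch of a manual-ack listener method combining these types. It relies on the ackContainerFactory bean configured later in this article; the listener id and topic are illustrative:

```
@KafkaListener(id = "paramDemo", topics = "topic.quick.ack", containerFactory = "ackContainerFactory")
public void paramDemoListener(ConsumerRecord<Integer, String> record,
                              Acknowledgment ack,
                              Consumer<Integer, String> consumer) {
    // ConsumerRecord exposes headers, partition, offset and timestamp
    System.out.println(record.partition() + "-" + record.offset() + " : " + record.value());
    // Acknowledgment drives the manual Ack mechanism
    ack.acknowledge();
}
```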

 

id : the consumer's id; when groupId is not configured, the id is used as the GroupId by default.
containerFactory : as mentioned above, switching @KafkaListener between single-record and batch consumption only requires configuring this attribute; it takes the BeanName of the listener container factory, i.e. a ConcurrentKafkaListenerContainerFactory.
topics : the Topic(s) to listen to; several can be listed.
topicPartitions : more detailed listening configuration, such as listening only to certain partitions of a Topic, or starting from offset 200 (a sketch follows this list).
errorHandler : the listener exception handler; takes a BeanName.
groupId : the consumer group ID.
idIsGroup : whether the id should double as the GroupId.
clientIdPrefix : prefix for the consumer client id.
beanRef : the BeanName of the actual listener container; prefix it with "__".
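
A hedged sketch of the topicPartitions attribute in use; the listener name, the topic "topic.quick.anno", and the partition/offset values are purely illustrative:

```
// Illustrative only: listen to partition 0 of a hypothetical topic
// "topic.quick.anno", starting from offset 200.
@KafkaListener(
        id = "anno",
        clientIdPrefix = "anno",
        topicPartitions = @TopicPartition(
                topic = "topic.quick.anno",
                partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "200")))
public void annoListener(ConsumerRecord<Integer, String> record) {
    System.out.println("anno receive : " + record.value());
}
```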


Create a listener container factory, set it to batch consumption, and set its concurrency to 5. The concurrency should be chosen based on the partition count and must be less than or equal to it; otherwise some consumer threads will sit idle forever.

```
@Component
public class BatchListener {

    private static final Logger log = LoggerFactory.getLogger(BatchListener.class);

    private Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        // maximum number of records returned by a single poll
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "5");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean("batchContainerFactory")
    public ConcurrentKafkaListenerContainerFactory listenerContainer() {
        ConcurrentKafkaListenerContainerFactory container = new ConcurrentKafkaListenerContainerFactory();
        container.setConsumerFactory(new DefaultKafkaConsumerFactory(consumerProps()));
        // concurrency must be less than or equal to the topic's partition count
        container.setConcurrency(5);
        // enable batch listening
        container.setBatchListener(true);
        return container;
    }

    @Bean
    public NewTopic batchTopic() {
        return new NewTopic("topic.quick.batch", 8, (short) 1);
    }

    @KafkaListener(id = "batch", clientIdPrefix = "batch", topics = {"topic.quick.batch"}, containerFactory = "batchContainerFactory")
    public void batchListener(List<String> data) {
        log.info("topic.quick.batch receive : ");
        for (String s : data) {
            log.info(s);
        }
    }
}
```

## **Confirming consumption with the Ack mechanism**

Using Kafka's Ack mechanism takes just three simple steps:

1. Set ENABLE_AUTO_COMMIT_CONFIG=false to disable auto-commit
2. Set the container's AckMode to MANUAL_IMMEDIATE
3. Add an Acknowledgment ack parameter to the listener method

```
@Component
public class AckListener {

    private static final Logger log = LoggerFactory.getLogger(AckListener.class);

    private Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // disable auto-commit; offsets are committed via Acknowledgment
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean("ackContainerFactory")
    public ConcurrentKafkaListenerContainerFactory ackContainerFactory() {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory(consumerProps()));
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }

    @KafkaListener(id = "ack", topics = "topic.quick.ack", containerFactory = "ackContainerFactory")
    public void ackListener(ConsumerRecord record, Acknowledgment ack) {
        log.info("topic.quick.ack receive : " + record.value());
        ack.acknowledge();
    }
}
```

Because of how Kafka commits offsets, there are situations where a message that was not Ack'ed cannot simply be consumed again. The workarounds are as follows:

```
// Option 1: send the message back to the topic. This is simple, and a header
// can record how many times the message has been consumed, for use on the
// next attempt.
@KafkaListener(id = "ack", topics = "topic.quick.ack", containerFactory = "ackContainerFactory")
public void ackListener(ConsumerRecord record, Acknowledgment ack, Consumer consumer) {
    log.info("topic.quick.ack receive : " + record.value());
    // ack records with even offsets, reject the rest
    if (record.offset() % 2 == 0) {
        log.info(record.offset() + "--ack");
        ack.acknowledge();
    } else {
        log.info(record.offset() + "--nack");
        kafkaTemplate.send("topic.quick.ack", record.value());
    }
}

// Option 2: use Consumer.seek to move back to the offset of the un-acked
// message and consume it again. This can spin forever if the business logic
// can never handle the record, because the consumer keeps seeking back to it.
@KafkaListener(id = "ack", topics = "topic.quick.ack", containerFactory = "ackContainerFactory")
public void ackListener(ConsumerRecord record, Acknowledgment ack, Consumer consumer) {
    log.info("topic.quick.ack receive : " + record.value());
    // ack records with even offsets, reject the rest
    if (record.offset() % 2 == 0) {
        log.info(record.offset() + "--ack");
        ack.acknowledge();
    } else {
        log.info(record.offset() + "--nack");
        consumer.seek(new TopicPartition("topic.quick.ack", record.partition()), record.offset());
    }
}
```

 

## **KafkaTemplate**

**Message result callbacks**

Usually we inspect the result of a KafkaTemplate send to decide whether the message went out successfully; if it failed, we resend it or run the appropriate business logic. So let's implement that here.

KafkaSendResultHandler

The first step is to write a result-callback class, KafkaSendResultHandler. When a KafkaTemplate send succeeds, onSuccess is called; when it fails, onError is called.
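
The original snippet for this class is not included, so here is a minimal sketch, assuming the ProducerRecord-based callback signatures of the spring-kafka 2.x ProducerListener interface (they differ slightly between versions):

```
@Component
public class KafkaSendResultHandler implements ProducerListener<Integer, String> {

    private static final Logger log = LoggerFactory.getLogger(KafkaSendResultHandler.class);

    @Override
    public void onSuccess(ProducerRecord<Integer, String> producerRecord, RecordMetadata recordMetadata) {
        log.info("send success : " + producerRecord.toString());
    }

    @Override
    public void onError(ProducerRecord<Integer, String> producerRecord, Exception exception) {
        log.error("send fail : " + producerRecord.toString(), exception);
    }
}
```

Register it with the template before sending, for example via kafkaTemplate.setProducerListener(kafkaSendResultHandler).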

**Sending messages asynchronously with KafkaTemplate**

KafkaTemplate sends messages asynchronously, so in a short-lived test we need to sleep briefly after sending; otherwise, when the send takes a while, the process exits early and the callback never gets invoked. This behaviour can be seen in KafkaTemplate's source code.
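A minimal test sketch of this asynchronous behaviour; the topic name "topic.quick.demo" is an assumption:

```
@Autowired
private KafkaTemplate<Integer, String> kafkaTemplate;

@Test
public void testAsyncSend() throws InterruptedException {
    // send() returns immediately; the result arrives later on another thread
    kafkaTemplate.send("topic.quick.demo", "test async send");
    // keep the test process alive long enough for the callback to fire
    Thread.sleep(1000);
}
```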

**Sending messages synchronously with KafkaTemplate**

Asynchronous sending greatly improves the producer's throughput, but in some scenarios we don't need it. In those cases we can send synchronously, which is also very simple to implement.

Just call get() on the future returned by the send method.

In the Future pattern, the work runs asynchronously, and we call get() only when we actually need the future's return value.
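
A minimal sketch of both forms of the blocking call, again assuming a topic named "topic.quick.demo":

```
@Test
public void testSyncSend() throws Exception {
    // block until the broker has acknowledged the record
    kafkaTemplate.send("topic.quick.demo", "test sync send").get();
    // or bound the wait with a timeout
    kafkaTemplate.send("topic.quick.demo", "test sync send").get(1, TimeUnit.SECONDS);
}
```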

**KafkaTransactionManager**

Using the @Transactional annotation

```
@Bean
public ProducerFactory<Integer, String> producerFactory() {
    DefaultKafkaProducerFactory factory = new DefaultKafkaProducerFactory<>(senderProps());
    factory.transactionCapable();
    factory.setTransactionIdPrefix("tran-");
    return factory;
}

@Bean
public KafkaTransactionManager transactionManager(ProducerFactory producerFactory) {
    KafkaTransactionManager manager = new KafkaTransactionManager(producerFactory);
    return manager;
}

@Test
@Transactional
public void testTransactionalAnnotation() throws InterruptedException {
    kafkaTemplate.send("topic.quick.tran", "test transactional annotation");
    // the exception rolls the transaction back, so the message is never committed
    throw new RuntimeException("fail");
}
```
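senderProps() is referenced above but never shown; a minimal sketch, assuming a local broker (a transactional producer needs retries enabled):

```
private Map<String, Object> senderProps() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // transactions require a retry-capable, idempotent producer
    props.put(ProducerConfig.RETRIES_CONFIG, 1);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return props;
}
```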

Using KafkaTemplate's executeInTransaction method

```
@Test
public void testExecuteInTransaction() throws InterruptedException {
    kafkaTemplate.executeInTransaction(new KafkaOperations.OperationsCallback() {
        @Override
        public Object doInOperations(KafkaOperations kafkaOperations) {
            kafkaOperations.send("topic.quick.tran", "test executeInTransaction");
            // throwing here rolls back the local transaction
            throw new RuntimeException("fail");
            //return true;
        }
    });
}
```

**ReplyingKafkaTemplate**

Spring-Kafka ships with two ways of forwarding messages:

1. Setting a reply topic through headers (KafkaHeaders.REPLY_TOPIC). This is a somewhat special request/response pattern, built on the ReplyingKafkaTemplate class.
2. Manual forwarding, using the @SendTo annotation to forward a listener method's return value to another Topic.

The ReplyingKafkaTemplate way

```
@Autowired
private ReplyingKafkaTemplate replyingKafkaTemplate;

@Test
public void testReplyingKafkaTemplate() throws ExecutionException, InterruptedException, TimeoutException {
    ProducerRecord<String, String> record = new ProducerRecord<>("topic.quick.request", "this is a message");
    // tell the listener where to send the reply
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "topic.quick.reply".getBytes()));
    RequestReplyFuture<String, String, String> replyFuture = replyingKafkaTemplate.sendAndReceive(record);
    SendResult<String, String> sendResult = replyFuture.getSendFuture().get();
    System.out.println("Sent ok: " + sendResult.getRecordMetadata());
    ConsumerRecord<String, String> consumerRecord = replyFuture.get();
    System.out.println("Return value: " + consumerRecord.value());
    Thread.sleep(20000);
}
```
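The bean behind the injected replyingKafkaTemplate is not shown in the original. A minimal sketch, assuming spring-kafka 2.2+ (for containerFactory.createContainer) and a reply consumer group name of our own choosing:

```
@Bean
public ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate(
        ProducerFactory<String, String> producerFactory,
        ConcurrentKafkaListenerContainerFactory<String, String> containerFactory) {
    // a dedicated container that consumes the reply topic
    ConcurrentMessageListenerContainer<String, String> replyContainer =
            containerFactory.createContainer("topic.quick.reply");
    replyContainer.getContainerProperties().setGroupId("reply-group"); // assumed group name
    return new ReplyingKafkaTemplate<>(producerFactory, replyContainer);
}
```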

The @SendTo way

```
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    // the reply template is what actually sends the @SendTo result
    factory.setReplyTemplate(kafkaTemplate());
    return factory;
}

@Component
public class ForwardListener {

    private static final Logger log = LoggerFactory.getLogger(ForwardListener.class);

    @KafkaListener(id = "forward", topics = "topic.quick.target")
    @SendTo("topic.quick.real")
    public String forward(String data) {
        log.info("topic.quick.target forward " + data + " to topic.quick.real");
        return "topic.quick.target send msg : " + data;
    }
}
```

## **KafkaListenerEndpointRegistry**

One thing worth mentioning: methods annotated with @KafkaListener are not registered as beans in the IoC container. Instead they are registered with KafkaListenerEndpointRegistry, and KafkaListenerEndpointRegistry itself is registered as a bean in the Spring IoC container (programmatically rather than via annotations; see the class's source code for details).

Also note how the listener container is started: containers created this way are not running when the project starts, and resume means resume, not start. So we check whether the container is running; if it is, we call resume, otherwise we call start.

```
@Component
@EnableScheduling
public class TaskListener {

    private static final Logger log = LoggerFactory.getLogger(TaskListener.class);

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ConsumerFactory consumerFactory;

    @Bean
    public ConcurrentKafkaListenerContainerFactory delayContainerFactory() {
        ConcurrentKafkaListenerContainerFactory container = new ConcurrentKafkaListenerContainerFactory();
        container.setConsumerFactory(consumerFactory);
        // do not start the container automatically
        container.setAutoStartup(false);
        return container;
    }

    @KafkaListener(id = "durable", topics = "topic.quick.durable", containerFactory = "delayContainerFactory")
    public void durableListener(String data) {
        // persist the data here
        log.info("topic.quick.durable receive : " + data);
    }

    // scheduled task: start listening at midnight every day
    @Scheduled(cron = "0 0 0 * * ?")
    public void startListener() {
        log.info("start listening");
        // start the container if it is not running, then resume it
        if (!registry.getListenerContainer("durable").isRunning()) {
            registry.getListenerContainer("durable").start();
        }
        registry.getListenerContainer("durable").resume();
    }

    // scheduled task: pause listening at 10 a.m. every day
    @Scheduled(cron = "0 0 10 * * ?")
    public void shutDownListener() {
        log.info("pause listening");
        registry.getListenerContainer("durable").pause();
    }
}
```

## **RecordFilterStrategy**

A message filter intercepts records before they reach the listener container; the filter applies business logic to pick out the records that are actually needed and hands only those to the KafkaListener.

Configuring a filter is actually very simple: just give the listener container factory a RecordFilterStrategy (message filtering strategy). When it returns true the record is discarded; when it returns false the record reaches the listener container as usual.

```
@Component
public class FilterListener {

    private static final Logger log = LoggerFactory.getLogger(FilterListener.class);

    @Autowired
    private ConsumerFactory consumerFactory;

    @Bean
    public ConcurrentKafkaListenerContainerFactory filterContainerFactory() {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(consumerFactory);
        // used together with RecordFilterStrategy: discarded records are acked
        factory.setAckDiscarded(true);
        factory.setRecordFilterStrategy(new RecordFilterStrategy() {
            @Override
            public boolean filter(ConsumerRecord consumerRecord) {
                long data = Long.parseLong((String) consumerRecord.value());
                log.info("filterContainerFactory filter : " + data);
                if (data % 2 == 0) {
                    return false;
                }
                // returning true discards the record
                return true;
            }
        });
        return factory;
    }

    @KafkaListener(id = "filterCons", topics = "topic.quick.filter", containerFactory = "filterContainerFactory")
    public void filterListener(String data) {
        // persist the data here
        log.error("topic.quick.filter receive : " + data);
    }
}
```

Reference:

<http://blog.seasedge.cn/category/Kafka/>