Kafka Server-Side Consumption


Production-Grade Consumer Code (Spring Boot 3.x)

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import java.time.Duration;
import java.util.List;

@Component
public class ProductionKafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(ProductionKafkaConsumer.class);

    private final AlertService alertService;

    public ProductionKafkaConsumer(AlertService alertService) {
        this.alertService = alertService;
    }

    // Base consumer template (manual offset commit)
    @KafkaListener(
        topics = "${kafka.topic.order-events}",
        groupId = "${spring.kafka.consumer.group-id}",
        concurrency = "3" // concurrent consumers = partition count
    )
    @Transactional(transactionManager = "kafkaTransactionManager")
    public void handleOrderEvents(
        List<OrderEvent> events, // batch consumption
        Acknowledgment acknowledgment
    ) {
        int index = 0;
        try {
            for (OrderEvent event : events) {
                try {
                    validateEvent(event);        // message validation
                    processBusinessLogic(event); // business processing
                    writeToDatabase(event);      // database write
                } catch (FatalException e) {
                    sendToDlq(event);            // route the poison record to the dead-letter queue
                }
                index++;
            }
            acknowledgment.acknowledge();        // commit offsets manually
        } catch (BusinessException e) {
            // transient failure: commit the records before the failed one,
            // then redeliver from `index` after a 2-second back-off
            acknowledgment.nack(index, Duration.ofSeconds(2));
        }
    }

    // Dead-letter queue handler
    @KafkaListener(
        topics = "${kafka.topic.order-events-dlq}",
        groupId = "${spring.kafka.consumer.group-id}-dlq"
    )
    public void handleDlqMessages(OrderEvent event) {
        // alerting + manual intervention
        alertService.sendCriticalAlert(event);
        log.error("DLQ message received: {}", event);
    }
}
```

Production Configuration (application.yml)

```yaml
spring:
  kafka:
    bootstrap-servers: kafka-prod-1:9092,kafka-prod-2:9092,kafka-prod-3:9092
    consumer:
      group-id: order-service-group
      auto-offset-reset: latest
      enable-auto-commit: false
      max-poll-records: 200 # tune to your processing capacity
      fetch-max-wait: 500ms
      fetch-min-size: 1MB
      isolation-level: read_committed # transactional message support
      properties:
        session.timeout.ms: 10000
        heartbeat.interval.ms: 3000
        request.timeout.ms: 30000
        max.partition.fetch.bytes: 1048576 # 1MB
    listener:
      ack-mode: MANUAL_IMMEDIATE # precise commit control
      concurrency: 3 # match the partition count
      poll-timeout: 5000
      shutdown-timeout: 60000 # graceful shutdown wait time
      type: batch # enable batch consumption
    properties:
      security.protocol: SASL_SSL
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="pass";
```
Essential Production Features

1. Idempotent Message Processing

```java
@Transactional
public void processBusinessLogic(OrderEvent event) {
    if (deduplicationCache.exists(event.getMessageId())) {
        log.warn("Duplicate message detected: {}", event.getMessageId());
        return;
    }
    // core business logic goes here
    deduplicationCache.save(event.getMessageId(), TTL.ONE_DAY);
}
```
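The `deduplicationCache` above is left abstract. As a minimal sketch of its contract (assuming a single consumer instance; production setups typically back this with Redis `SET NX` or a database unique key, and the class name here is hypothetical):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory TTL-based deduplication cache; illustrative stand-in for the
// deduplicationCache used in processBusinessLogic above.
class DeduplicationCache {
    private final Map<String, Instant> seen = new ConcurrentHashMap<>();

    // Returns true if the message id was recorded and has not yet expired.
    public boolean exists(String messageId) {
        Instant expiry = seen.get(messageId);
        if (expiry == null) return false;
        if (Instant.now().isAfter(expiry)) {
            seen.remove(messageId); // lazily evict expired entries
            return false;
        }
        return true;
    }

    // Records the message id with a time-to-live.
    public void save(String messageId, Duration ttl) {
        seen.put(messageId, Instant.now().plus(ttl));
    }
}
```

Note that a purely in-memory cache does not survive restarts or protect across replicas, which is why a shared store is the usual production choice.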

2. Consumption Rate Control

```java
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
        "kafka-prod-1:9092,kafka-prod-2:9092,kafka-prod-3:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Throttling: at most 500 records per poll, polls waiting up to 1s for data.
    // Note this bounds the batch size, not records per second.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
    props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 1000);
    return new DefaultKafkaConsumerFactory<>(props);
}
```

3. Monitoring Integration

```java
@Bean
public MicrometerConsumerListener<Object, Object> consumerMetrics() {
    return new MicrometerConsumerListener<>(
        Metrics.globalRegistry,
        List.of(Tag.of("service", "order-service"))
    );
}

// Example Prometheus metrics:
// kafka_consumer_records_consumed_total
// kafka_consumer_fetch_latency_seconds
```

4. Dynamic Configuration Refresh

```java
@RefreshScope
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String>
        kafkaListenerContainerFactory(ConsumerFactory<String, String> consumerFactory) {
    // Recreated on a refresh event, so updated configuration takes effect at runtime
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    return factory;
}
```

Key Production Configuration Notes

| Config | Recommended value | Notes |
| --- | --- | --- |
| session.timeout.ms | 10,000-30,000 | Balances rebalance speed against tolerance for network jitter |
| max.poll.interval.ms | 300,000 | Set according to the longest expected batch processing time |
| fetch.max.bytes | 50MB | Adjust to message size |
| max.partition.fetch.bytes | 1MB | Caps the amount fetched per partition |
| request.timeout.ms | 30,000 | Prevents false "dead consumer" verdicts under network jitter |
| auto.commit.interval.ms | disabled | Auto-commit is off; commits must be manual |
| enable.auto.commit | false | Guarantees precise control over commit timing |
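A useful sanity check on these values: a batch must finish within `max.poll.interval.ms`, or the broker evicts the consumer and triggers a rebalance. With the settings above (`max-poll-records: 200`, `max.poll.interval.ms` = 300,000), the worst-case per-record budget works out as:

```java
// Sanity check: worst-case batch processing time must stay under
// max.poll.interval.ms, or the consumer is kicked out of the group.
class PollIntervalBudget {
    // Returns the per-record processing budget in milliseconds.
    static long perRecordBudgetMs(long maxPollIntervalMs, int maxPollRecords) {
        return maxPollIntervalMs / maxPollRecords;
    }
}
```

With the document's values this is 300,000 / 200 = 1,500 ms per record; if your business logic can exceed that, raise `max.poll.interval.ms` or lower `max-poll-records`.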

Production Deployment Recommendations

  1. Consumer group management

    • Use a dedicated consumer group per service
    • Follow a naming convention: {service-name}-{env}-group
  2. Partition assignment strategy

    ```java
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        CooperativeStickyAssignor.class.getName());
    ```
  3. Health check endpoint

    ```java
    @Component
    public class KafkaHealthIndicator implements HealthIndicator {
        private final Consumer<?, ?> consumer;

        public KafkaHealthIndicator(Consumer<?, ?> consumer) {
            this.consumer = consumer;
        }

        @Override
        public Health health() {
            try {
                // listTopics doubles as a cheap broker-connectivity probe.
                // Note: KafkaConsumer is not thread-safe; synchronize access
                // if this instance is shared with a listener thread.
                consumer.listTopics(Duration.ofSeconds(5));
                return Health.up().build();
            } catch (Exception e) {
                return Health.down(e).build();
            }
        }
    }
    ```
  4. Message tracing

    ```java
    @Bean
    public ProducerListener<String, String> tracingProducerListener() {
        return new TracingProducerListener<>(tracer);
    }
    ```

Disaster Recovery

  1. Cross-cluster consumption

    ```java
    @KafkaListener(
        topics = "${primary.topic}",
        groupId = "${consumer.group}",
        containerFactory = "primaryKafkaFactory"
    )
    public void primaryListener() { /* ... */ }

    @KafkaListener(
        topics = "${backup.topic}",
        groupId = "${consumer.group}",
        containerFactory = "backupKafkaFactory",
        autoStartup = "false" // standby: started only on failover
    )
    public void backupListener() { /* ... */ }
    ```
  2. Automatic failover

    ```java
    @Scheduled(fixedDelay = 5000)
    public void checkClusterHealth() {
        if (primaryClusterDown) {
            backupListenerContainer.start(); // bring up the standby first
            primaryListenerContainer.stop(); // then halt the primary
        }
    }
    ```
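The `primaryClusterDown` flag in the failover check is left undefined. One common policy is to declare the cluster down only after N consecutive failed health checks, so a single timeout does not trigger failover flapping. A minimal sketch of that policy (class and method names hypothetical):

```java
// Tracks consecutive health-check failures and reports the cluster as down
// only after `threshold` failures in a row, to avoid failover flapping.
class ClusterHealthTracker {
    private final int threshold;
    private int consecutiveFailures;

    ClusterHealthTracker(int threshold) {
        this.threshold = threshold;
    }

    // Record one health-check result; returns true if the cluster should
    // currently be considered down.
    synchronized boolean record(boolean checkSucceeded) {
        consecutiveFailures = checkSucceeded ? 0 : consecutiveFailures + 1;
        return consecutiveFailures >= threshold;
    }
}
```

The scheduled `checkClusterHealth` method would feed each probe result into `record(...)` and use the return value as `primaryClusterDown`; a single success resets the counter, so recovery is immediate.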