Tracking down a Kafka MemberIdRequiredException


The MemberIdRequiredException

2022-08-03 15:09:14.948 INFO 9 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-c8d91224-c83d-4a5f-8786-b3a226e3cabd-1, groupId=c8d91224-c83d-4a5f-8786-b3a226e3cabd] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group

Exception logs

2022-08-08 11:15:30.923  INFO 64411 --- [ntainer#0-0-C-1] org.cl.kafka.mq.KafkaConsumer            : Client A consumed: GroupId[46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Topic[topic.test] Partition[2] Message[hello0]

### the consumer drops out of the group
2022-08-08 11:15:31.883  INFO 64411 --- [22-644b4f7d16ed] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Member consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1-73019ea9-dfbd-4790-9e10-24cf3e3d5798 sending LeaveGroup request to coordinator 192.168.94.133:9092 (id: 2147483647 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.

### entry point of the problem
2022-08-08 11:15:41.934  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Failing OffsetCommit request since the consumer is not part of an active group

2022-08-08 11:15:41.940  WARN 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Synchronous auto-commit of offsets {topic.test-0=OffsetAndMetadata{offset=4097, leaderEpoch=null, metadata=''}, topic.test-1=OffsetAndMetadata{offset=4121, leaderEpoch=null, metadata=''}, topic.test-2=OffsetAndMetadata{offset=4048, leaderEpoch=0, metadata=''}} failed: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.

2022-08-08 11:15:41.940  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Giving away all assigned partitions as lost since generation has been reset,indicating that consumer is no longer part of the group

2022-08-08 11:15:41.940  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Lost previously assigned partitions topic.test-0, topic.test-1, topic.test-2

2022-08-08 11:15:41.941  INFO 64411 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : 46f4da71-e7ce-4fbf-a122-644b4f7d16ed: partitions lost: [topic.test-0, topic.test-1, topic.test-2]

2022-08-08 11:15:41.942  INFO 64411 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : 46f4da71-e7ce-4fbf-a122-644b4f7d16ed: partitions revoked: [topic.test-0, topic.test-1, topic.test-2]

### the consumer rejoins the group
2022-08-08 11:15:41.944  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] (Re-)joining group

2022-08-08 11:15:41.951  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Join group failed with org.apache.kafka.common.errors.MemberIdRequiredException: The group member needs to have a valid member id before actually entering a consumer group

2022-08-08 11:15:41.951  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] (Re-)joining group

2022-08-08 11:15:41.962  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Finished assignment for group at generation 3: {consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1-5a9e31cb-57fa-4ead-8655-1ec328ddcf04=Assignment(partitions=[topic.test-0, topic.test-1, topic.test-2])}

2022-08-08 11:15:41.985  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Successfully joined group with generation 3

2022-08-08 11:15:41.987  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Adding newly assigned partitions: topic.test-0, topic.test-1, topic.test-2

2022-08-08 11:15:42.029  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Setting offset for partition topic.test-0 to the committed offset FetchPosition{offset=4097, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[192.168.94.133:9092 (id: 0 rack: null)], epoch=0}}

2022-08-08 11:15:42.030  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Setting offset for partition topic.test-1 to the committed offset FetchPosition{offset=4121, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[192.168.94.133:9092 (id: 0 rack: null)], epoch=0}}

2022-08-08 11:15:42.030  INFO 64411 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-46f4da71-e7ce-4fbf-a122-644b4f7d16ed-1, groupId=46f4da71-e7ce-4fbf-a122-644b4f7d16ed] Setting offset for partition topic.test-2 to the committed offset FetchPosition{offset=4047, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[192.168.94.133:9092 (id: 0 rack: null)], epoch=0}}

2022-08-08 11:15:42.032  INFO 64411 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : 46f4da71-e7ce-4fbf-a122-644b4f7d16ed: partitions assigned: [topic.test-0, topic.test-1, topic.test-2]

Diagnosis:

The consumer took too long to process a batch; the broker decided it had left the consumer group and triggered a rebalance.

On its next poll the consumer was told it was no longer part of an active group, so it re-registered with the group.

Because the offset commit had failed, the client re-consumed the subscribed messages after rejoining.

Note that the MemberIdRequiredException itself is expected behavior since KIP-394 (Kafka 2.2): a consumer that joins with an empty member.id receives this error together with a broker-assigned member id, then immediately rejoins with it — which is exactly the "Join group failed ... (Re-)joining group ... Successfully joined" sequence in the log above.
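The fix follows directly from the log's own advice: give the poll loop more time, or hand it less work per poll. A minimal sketch of safer consumer settings (the class name `SafeConsumerProps` is hypothetical; the string keys mirror Kafka's `ConsumerConfig` constants, used here so the snippet has no external dependency):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of consumer settings that avoid the poll-timeout rebalance above:
// raise max.poll.interval.ms and/or lower max.poll.records so one full poll
// loop (records * per-record processing time) fits inside the interval.
public class SafeConsumerProps {
    public static Map<String, Object> props() {
        Map<String, Object> p = new HashMap<>();
        // give the poll loop 5 minutes (the Kafka default) instead of 1 second
        p.put("max.poll.interval.ms", 300_000);
        // fetch fewer records per poll so processing stays under the interval
        p.put("max.poll.records", 100);
        return p;
    }
}
```

With the article's 11s-per-record listener, even 100 records would not fit in 5 minutes; the real lesson is to budget `max.poll.records * processing time < max.poll.interval.ms`.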

Reproducing the problem

Scenario

A Spring Boot application consuming from Kafka

One consumer consuming multiple topics

Installing Kafka

Installing Kafka with Docker

Single-node ZooKeeper + Kafka deployment

docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper

docker exec -it zookeeper /bin/sh

Single-node Kafka deployment

# for local access, point KAFKA_ADVERTISED_LISTENERS at the host address
docker run -d --name kafka \
-p 9092:9092 \
-e KAFKA_BROKER_ID=0 \
-e KAFKA_ZOOKEEPER_CONNECT=172.17.0.2:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.10.183:9092 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 wurstmeister/kafka

# verify
docker exec -it kafka /bin/bash
> cd /opt/kafka_2.13-2.8.1/bin
# create a topic
> kafka-topics.sh --create --zookeeper 172.17.0.2:2181 --replication-factor 1 --partitions 1 --topic demo
# produce
> kafka-console-producer.sh --broker-list localhost:9092 --topic demo
# consume
> kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic demo --from-beginning

Modifying the Kafka configuration

# config file path
/opt/kafka_2.13-2.8.1/config/server.properties

spring boot kafka

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <!--        <version>2.7.2</version>-->
        <version>2.3.12.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <groupId>com.ming</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.0-SNAPSHOT</version>
    <description>Demo project for Spring Boot AND Kafka</description>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.24</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>2.0.10</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
  
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Producer

Producer configuration (application.properties)

# producer
spring.kafka.producer.acks=0
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=16384
spring.kafka.producer.retries=3
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

Initialize topics

@Configuration
public class KafkaConfig {

    @Bean
    public NewTopic initialTopic() {
        // 3 partitions, replication factor 1
        return new NewTopic("topic.test", 3, (short) 1);
    }

    @Bean
    public NewTopic initialTopic2() {
        return new NewTopic("topic.test2", 3, (short) 1);
    }

}

Producer class

package org.cl.kafka.mq;

import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFutureCallback;

import javax.annotation.Resource;

@Component
@Slf4j
public class KafkaProducer {

    @Resource
    private KafkaTemplate<String, Object> kafkaTemplate;

    public void send(Object obj) {
        // send the message
        kafkaTemplate.send("topic.test", obj).addCallback(new ListenableFutureCallback<>() {
            @Override
            public void onFailure(Throwable throwable) {
                // handle send failure
                log.info("topic[{}] producer failed to send message [{}]", "topic.test", throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> stringObjectSendResult) {
                // handle send success
                log.info("topic[{}] producer sent message [{}]", "topic.test", stringObjectSendResult.getProducerRecord().value());
            }
        });
    }
}

Driving it with JUnit

package org.cl.kafka;

import org.cl.kafka.mq.KafkaProducer;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class KafkaTest {
    @Autowired
    private KafkaProducer kafkaProducer;

    @Test
    public void testSendMsg() throws InterruptedException {
        for (int i = 0; i < 100000; i++) {
            String msg = "hello" + i;
            kafkaProducer.send(msg);
            Thread.sleep(200L);
        }
    }
}

Consumer

Consumer configuration (application.properties)

# consumer
spring.kafka.consumer.group-id=demo
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# listener
spring.kafka.listener.concurrency=5
spring.kafka.listener.ack-mode=manual_immediate
spring.kafka.listener.missing-topics-fatal=false

Consumer factory

package org.cl.kafka.configuration;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

@Configuration
@ConfigurationProperties(prefix = "spring.kafka.consumer")
public class KafkaConsumerConfig {
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;
    @Value("${spring.kafka.consumer.enable-auto-commit}")
    private Boolean enableAutoCommit;
    private Integer maxPollRecords = 10000;
    private Integer autoCommitIntervalMs = 1000;
    private Integer consumerRequestTimeoutMs = 1000;
    @Value("${spring.kafka.consumer.key-deserializer}")
    private String keyDeserializer;
    @Value("${spring.kafka.consumer.value-deserializer}")
    private String valueDeserializer;
    @Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset;

    @Bean("kafkaListenerContainerFactory")
    public KafkaListenerContainerFactory<?> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
//        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }

    private Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // a random group id per start, so every run begins as a fresh group
        String uuid = UUID.randomUUID().toString();
        props.put(ConsumerConfig.GROUP_ID_CONFIG, uuid);
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, consumerRequestTimeoutMs);
//        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitIntervalMs);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        // deliberately tiny (1s, default is 5 min) so the slow listener below
        // exceeds the poll interval and forces a rebalance
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 1000);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        return props;
    }
}

consumer

package org.cl.kafka.mq;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

import java.util.Optional;

@Component
@Slf4j
public class KafkaConsumer {

    @KafkaListener(containerFactory = "kafkaListenerContainerFactory", topics = "topic.test")
//    public void topicTest(ConsumerRecord<?, ?> record, Acknowledgment ack, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
//                          @Header(KafkaHeaders.GROUP_ID) String groupId) throws InterruptedException {
    public void topicTest(ConsumerRecord<?, ?> record,
                          @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                          @Header(KafkaHeaders.GROUP_ID) String groupId) throws InterruptedException {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            int partition = record.partition();
            Object msg = message.get();
            log.info("Client A consumed: GroupId[{}] Topic[{}] Partition[{}] Message[{}]", groupId, topic, partition, msg);

//            ack.acknowledge();
            // simulate slow processing: 11s per record, far beyond max.poll.interval.ms=1000
            Thread.sleep(11000);
        }
    }

}

kafka rebalance

ISR (in-sync replicas)

For each topic partition, the broker maintains the list of replicas that are in sync with the leader.

replica.lag.time.max.ms  # how long a follower may lag behind the leader before it is dropped from the ISR

What triggers a rebalance

# a consumer that takes too long to process triggers a rebalance
session.timeout.ms=1000
max.poll.interval.ms=1000  # default is 5 minutes
max.poll.records=100  # default is 500 records

# heartbeat frequency
heartbeat.interval.ms

Rebalance partition-assignment strategies

range, round-robin, sticky

Assume a topic with 10 partitions and three consumers.

The range strategy sorts partitions by number. Let n = partitions / consumers = 3 and m = partitions % consumers = 1; the first m consumers each get n+1 partitions, and the remaining (consumers - m) consumers each get n partitions. So partitions 0~3 go to one consumer, partitions 4~6 to another, and partitions 7~9 to the third.

The round-robin strategy assigns partitions in turn: partitions 0, 3, 6, 9 to one consumer, partitions 1, 4, 7 to another, and partitions 2, 5, 8 to the third.

The sticky strategy initially assigns much like round-robin, but on rebalance it tries to honor two rules:
a) the assignment should be as even as possible;
b) the assignment should stay as close as possible to the previous one.
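The range arithmetic above can be sketched in a few lines (an illustrative toy, not Kafka's actual RangeAssignor, and it ignores multi-topic subscriptions):

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the range strategy: partitions are sorted by number, the
// first (partitions % consumers) consumers get one extra partition each.
public class RangeSketch {
    public static List<List<Integer>> assign(int partitions, int consumers) {
        int n = partitions / consumers;   // base share per consumer
        int m = partitions % consumers;   // first m consumers get n + 1
        List<List<Integer>> out = new ArrayList<>();
        int next = 0;
        for (int c = 0; c < consumers; c++) {
            int size = (c < m) ? n + 1 : n;
            List<Integer> slice = new ArrayList<>();
            for (int i = 0; i < size; i++) slice.add(next++);
            out.add(slice);
        }
        return out;
    }
}
```

For 10 partitions and 3 consumers this yields [0..3], [4..6], [7..9], matching the example above.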

The rebalance process

Phase 1: find the group coordinator

GroupCoordinator: each consumer group gets one broker as its coordinator, which monitors the heartbeats of every consumer in the group, decides whether a consumer has died, and then starts a consumer rebalance. When a consumer in the group starts, it sends a FindCoordinatorRequest to some node in the Kafka cluster to locate its GroupCoordinator and establishes a network connection to it.

How the coordinator is chosen: take the partition of __consumer_offsets to which this consumer group commits its offsets; the broker that leads that partition is the group's coordinator.
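That lookup rule can be written down concretely. The sketch below mirrors the formula `abs(hash(group.id)) % offsets.topic.num.partitions` (the default partition count for __consumer_offsets is 50); Kafka's `Utils.abs` is the non-negative masking shown here, and the class name is hypothetical:

```java
// The coordinator of a group is the leader of this partition of
// __consumer_offsets. N is offsets.topic.num.partitions (default 50).
public class CoordinatorPartition {
    public static int partitionFor(String groupId, int offsetsTopicPartitions) {
        // mask the sign bit so the result is non-negative, like Kafka's Utils.abs
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }
}
```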

Phase 2: JOIN GROUP

Once the group's GroupCoordinator is found, consumers enter the join phase: each sends a JoinGroupRequest to the GroupCoordinator and handles the response. The GroupCoordinator then picks the first consumer that joined the group as the leader, sends it the group's membership, and that leader becomes responsible for computing the partition-assignment plan.

Phase 3: SYNC GROUP

The consumer leader sends the plan to the GroupCoordinator in a SyncGroupRequest; the GroupCoordinator distributes the assignment to each consumer, which then connects to the leader brokers of its assigned partitions and starts consuming.

How the producer writes

1. Write mode

The producer pushes messages to the broker; each message is appended to a partition, i.e. written to disk sequentially (sequential disk writes are far faster than random writes, which underpins Kafka's throughput).

2. Message routing

When the producer sends a message to the broker, a partitioning algorithm chooses which partition it is stored in:

  1. if a partition is specified, use it directly;
  2. if no partition is specified but a key is, hash the key to pick a partition;
  3. if neither a partition nor a key is specified, pick a partition round-robin.
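Those three rules can be sketched as follows (simplified: real Kafka hashes keys with murmur2 rather than `hashCode`, and since 2.4 keyless records use sticky batching instead of strict round-robin):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy partitioner implementing the three routing rules above.
public class PartitionRouter {
    private final AtomicInteger counter = new AtomicInteger();

    public int route(Integer explicitPartition, String key, int numPartitions) {
        if (explicitPartition != null) return explicitPartition;   // rule 1: explicit partition wins
        if (key != null)                                           // rule 2: hash the key
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        return counter.getAndIncrement() % numPartitions;          // rule 3: round-robin
    }
}
```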

3. Write flow

  1. the producer looks up the partition's leader from the ZooKeeper node "/brokers/…/state";
  2. the producer sends the message to that leader;
  3. the leader writes the message to its local log;
  4. the followers pull the message from the leader, write it to their local logs, and ACK the leader;
  5. once the leader has ACKs from every replica in the ISR, it advances the HW (high watermark, the last committed offset) and ACKs the producer.

HW, the "high watermark" (short for HighWatermark), is the minimum LEO (log-end-offset) across the partition's ISR; a consumer can consume at most up to the HW.

Every replica has its own HW; the leader and each follower are responsible for updating their own.

A message newly written to the leader cannot be consumed immediately: the leader waits until the message has been replicated by all replicas in the ISR and then advances the HW; only then can consumers read it.

This guarantees that if the leader's broker fails, the message can still be served by the newly elected leader. Replication fetches from other brokers inside the cluster are not bounded by the HW.
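The HW rule above reduces to one expression, shown here as a sketch (the class name is hypothetical; real brokers track LEOs per replica object, not as a bare list):

```java
import java.util.Collections;
import java.util.List;

// HW = min(LEO) over the ISR; consumers may read offsets strictly below it.
public class HighWatermark {
    public static long hw(List<Long> isrLogEndOffsets) {
        return Collections.min(isrLogEndOffsets);
    }
}
```

For example, with ISR LEOs of 12, 10 and 11, the HW is 10: offsets 10 and 11 exist on the leader but are not yet consumable.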

Node diagram


spring-kafka source code

References

issues.apache.org/jira/browse…

cwiki.apache.org/confluence/…

www.cnblogs.com/sniffs/p/13…