1. Introduction
Why explore the Kafka consumer flow at all? Before answering that, let me show you two examples; once you have seen them, you will likely understand the reason.
2. Examples
2.1 Example One
@KafkaListener(topics = "product", groupId = "product1")
public void consumeMessage1(ConsumerRecord<String, String> consumerRecord, Acknowledgment ack) {
    log.info("receive message topic:{}, partition:{},offset:{},message:{}", consumerRecord.topic(),
            consumerRecord.partition(), consumerRecord.offset(), consumerRecord.value());
    if ("test".equals(consumerRecord.value())) {
        throw new IllegalArgumentException("message content is empty");
    }
    ack.acknowledge();
}
The example is straightforward: consume the message, and once the business logic completes, call ack to acknowledge it. In other words, if ack is never called, the message should keep being redelivered until it is processed successfully. But is that really what happens?
2.2 Test Plan
Send the following messages via the Kafka client:
aa
bb
test
cc
dd
ee
The application console outputs the following:
receive message topic:product, partition:0,offset:5,message:aa
receive message topic:product, partition:0,offset:6,message:bb
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:7,message:test
receive message topic:product, partition:0,offset:8,message:cc
receive message topic:product, partition:0,offset:9,message:dd
receive message topic:product, partition:0,offset:10,message:ee
From the output, the test message is lost after 10 delivery attempts: once processing reaches the ee message and its ack succeeds, the next fetch starts from ee's offset + 1, so unless the offset is manually reset, test can never be consumed again. Imagine the impact if this kind of message loss happened on a core transaction path. There is also something counter-intuitive here: why is a message lost even though it was never acked?
2.3 Example Two
@KafkaListener(topics = "product", groupId = "product2")
public void consumeMessage2(ConsumerRecord<String, String> consumerRecord, Acknowledgment ack) {
    log.info("receive message topic:{}, partition:{},offset:{},message:{}", consumerRecord.topic(),
            consumerRecord.partition(), consumerRecord.offset(), consumerRecord.value());
    try {
        if ("test".equals(consumerRecord.value())) {
            throw new IllegalArgumentException("message content is empty");
        }
        ack.acknowledge();
    } catch (Exception e) {
        log.error("consume message error:", e);
    }
}
Example two differs from example one only in the added exception handling. The test below may, again, defy your expectations.
2.4 Test Plan
Send the following message via the Kafka client:
test
The application console prints the exception and the error log. Since ack was never called, the message should in theory be fetchable again. Is it? A simple application restart verifies this: after the restart, the message is consumed again, which matches our intuition that an un-acked message can be redelivered.
Now let's run one more test. Send the following messages via the Kafka client:
aa
test
bb
Before the answer is revealed, can you guess what the result will be?
After restarting the application, the console outputs the following:
receive message topic:product, partition:0,offset:76,message:aa
receive message topic:product, partition:0,offset:77,message:test
consume message error:
receive message topic:product, partition:0,offset:78,message:bb
To verify whether the un-acked test message can be consumed again, simply restart the application once more. After the restart you may be somewhat bewildered: test is never consumed again; the message is gone. The same strange phenomenon as in example one appears: an un-acked message was lost.
These two tests left me with the following questions:
- Why is the message retried when the exception is not caught, but not retried when it is caught?
- With the exception caught, why can the failed message still be consumed after a restart as long as no further messages are sent, yet it is lost once other messages follow it?
- Why can an un-acked message be lost at all?
Given these questions, you can see why I set out to explore the Kafka consumer flow: only by understanding the underlying mechanics can they be answered.
3. Exploring the Kafka Consumer Flow
3.1 Finding the Group's Coordinator

When it comes to group management, a consumer talks to the Coordinator assigned to its consumer group, so the consumer's first task is to locate the Coordinator for its group.
See ensureCoordinatorReady in org.apache.kafka.clients.consumer.internals.ConsumerCoordinator#poll(org.apache.kafka.common.utils.Timer, boolean): it sends a FindCoordinator request to one of the brokers, and the response carries the Coordinator's host, port, and node id.
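As background on how the broker answers this FindCoordinator request: the coordinator of a group is the leader of one partition of the internal __consumer_offsets topic, chosen by hashing the group id. Below is a minimal sketch of that mapping, assuming the default of 50 offsets-topic partitions (broker config offsets.topic.num.partitions):

```java
public class CoordinatorPartition {

    // Broker config offsets.topic.num.partitions, assumed at its default of 50.
    static final int OFFSETS_TOPIC_PARTITIONS = 50;

    // Mirrors Kafka's Utils.abs: clears the sign bit instead of negating,
    // so Integer.MIN_VALUE is handled safely.
    static int toPositive(int n) {
        return n & 0x7fffffff;
    }

    // The group coordinator is the leader broker of this __consumer_offsets
    // partition; every consumer with the same group id maps to the same
    // partition, and therefore to the same coordinator.
    static int partitionFor(String groupId) {
        return toPositive(groupId.hashCode()) % OFFSETS_TOPIC_PARTITIONS;
    }

    public static void main(String[] args) {
        System.out.println("group product1 -> __consumer_offsets partition "
                + partitionFor("product1"));
    }
}
```

Because the mapping is deterministic, every member of a group ends up talking to the same coordinator broker.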
private class FindCoordinatorResponseHandler extends RequestFutureAdapter<ClientResponse, Void> {
@Override
public void onSuccess(ClientResponse resp, RequestFuture<Void> future) {
log.debug("Received FindCoordinator response {}", resp);
FindCoordinatorResponse findCoordinatorResponse = (FindCoordinatorResponse) resp.responseBody();
Errors error = findCoordinatorResponse.error();
if (error == Errors.NONE) {
synchronized (AbstractCoordinator.this) {
int coordinatorConnectionId = Integer.MAX_VALUE - findCoordinatorResponse.data().nodeId();
// Record the discovered Coordinator
AbstractCoordinator.this.coordinator = new Node(
coordinatorConnectionId,
findCoordinatorResponse.data().host(),
findCoordinatorResponse.data().port());
log.info("Discovered group coordinator {}", coordinator);
client.tryConnect(coordinator);
heartbeat.resetSessionTimeout();
}
future.complete(null);
}
}
}
3.2 Joining the Consumer Group

After locating the Coordinator, the consumer sends it a request to join the consumer group.
See ensureActiveGroup in org.apache.kafka.clients.consumer.internals.ConsumerCoordinator#poll(org.apache.kafka.common.utils.Timer, boolean): it sends the JoinGroup request to the Coordinator.
RequestFuture<ByteBuffer> sendJoinGroupRequest() {
if (coordinatorUnknown())
return RequestFuture.coordinatorNotAvailable();
// send a join group request to the coordinator
log.info("(Re-)joining group");
JoinGroupRequest.Builder requestBuilder = new JoinGroupRequest.Builder(
new JoinGroupRequestData()
.setGroupId(rebalanceConfig.groupId)
.setSessionTimeoutMs(this.rebalanceConfig.sessionTimeoutMs)
.setMemberId(this.generation.memberId)
.setGroupInstanceId(this.rebalanceConfig.groupInstanceId.orElse(null))
.setProtocolType(protocolType())
.setProtocols(metadata())
.setRebalanceTimeoutMs(this.rebalanceConfig.rebalanceTimeoutMs)
);
log.debug("Sending JoinGroup ({}) to coordinator {}", requestBuilder, this.coordinator);
int joinGroupTimeoutMs = Math.max(client.defaultRequestTimeoutMs(),
rebalanceConfig.rebalanceTimeoutMs + JOIN_GROUP_TIMEOUT_LAPSE);
// Send the JoinGroup request; the response is handled by JoinGroupResponseHandler
return client.send(coordinator, requestBuilder, joinGroupTimeoutMs)
.compose(new JoinGroupResponseHandler(generation));
}
3.3 The Coordinator Handles the JoinGroup Request

Upon receiving the group's JoinGroup requests, the Coordinator picks one consumer in the group to act as leader; the remaining consumers become followers.
private class JoinGroupResponseHandler extends CoordinatorResponseHandler<JoinGroupResponse, ByteBuffer> {
@Override
public void handle(JoinGroupResponse joinResponse, RequestFuture<ByteBuffer> future) {
Errors error = joinResponse.error();
if (error == Errors.NONE) {
if (isProtocolTypeInconsistent(joinResponse.data().protocolType())) {
} else {
synchronized (AbstractCoordinator.this) {
if (state != MemberState.PREPARING_REBALANCE) {
} else {
// If the current consumer was elected leader
if (joinResponse.isLeader()) {
onJoinLeader(joinResponse).chain(future);
} else {
onJoinFollower().chain(future);
}
}
}
}
}
}
}
3.4 The Leader Consumer Builds the Partition Assignment
If a consumer is elected leader by the Coordinator, it becomes responsible for producing the group's partition assignment.
See performAssignment in org.apache.kafka.clients.consumer.internals.AbstractCoordinator#onJoinLeader: it first looks up the configured partition assignor, then uses it to assign partitions to the consumers in the group.
@Override
protected Map<String, ByteBuffer> performAssignment(String leaderId,
String assignmentStrategy,
List<JoinGroupResponseData.JoinGroupResponseMember> allSubscriptions) {
// 1. Look up the partition assignor
ConsumerPartitionAssignor assignor = lookupAssignor(assignmentStrategy);
Set<String> allSubscribedTopics = new HashSet<>();
Map<String, Subscription> subscriptions = new HashMap<>();
// collect all the owned partitions
Map<String, List<TopicPartition>> ownedPartitions = new HashMap<>();
for (JoinGroupResponseData.JoinGroupResponseMember memberSubscription : allSubscriptions) {
Subscription subscription = ConsumerProtocol.deserializeSubscription(ByteBuffer.wrap(memberSubscription.metadata()));
subscription.setGroupInstanceId(Optional.ofNullable(memberSubscription.groupInstanceId()));
subscriptions.put(memberSubscription.memberId(), subscription);
allSubscribedTopics.addAll(subscription.topics());
ownedPartitions.put(memberSubscription.memberId(), subscription.ownedPartitions());
}
// the leader will begin watching for changes to any of the topics the group is interested in,
// which ensures that all metadata changes will eventually be seen
updateGroupSubscription(allSubscribedTopics);
isLeader = true;
log.debug("Performing assignment using strategy {} with subscriptions {}", assignor.name(), subscriptions);
// 2. Use the assignor to produce the group's partition assignment
Map<String, Assignment> assignments = assignor.assign(metadata.fetch(), new GroupSubscription(subscriptions)).groupAssignment();
log.info("Finished assignment for group at generation {}: {}", generation().generationId, assignments);
Map<String, ByteBuffer> groupAssignment = new HashMap<>();
for (Map.Entry<String, Assignment> assignmentEntry : assignments.entrySet()) {
ByteBuffer buffer = ConsumerProtocol.serializeAssignment(assignmentEntry.getValue());
groupAssignment.put(assignmentEntry.getKey(), buffer);
}
return groupAssignment;
}
3.5 Sending the SyncGroup Request

After receiving the JoinGroup response, the consumer elected leader builds the partition assignment (the others do not); then all consumers send a SyncGroup request to the Coordinator.
3.6 The Coordinator Distributes the Assignment

After receiving the leader's partition assignment, the Coordinator distributes it to every consumer, telling each one which partitions it should consume.
See onJoinComplete in org.apache.kafka.clients.consumer.internals.AbstractCoordinator#joinGroupIfNeeded: it records the consumer's assigned partitions.
@Override
protected void onJoinComplete(int generation,
String memberId,
String assignmentStrategy,
ByteBuffer assignmentBuffer) {
log.debug("Executing onJoinComplete with generation {} and memberId {}", generation, memberId);
ConsumerPartitionAssignor assignor = lookupAssignor(assignmentStrategy);
// Give the assignor a chance to update internal state based on the received assignment
groupMetadata = new ConsumerGroupMetadata(rebalanceConfig.groupId, generation, memberId, rebalanceConfig.groupInstanceId);
Set<TopicPartition> ownedPartitions = new HashSet<>(subscriptions.assignedPartitions());
// 1. Deserialize the partition assignment
Assignment assignment = ConsumerProtocol.deserializeAssignment(assignmentBuffer);
Set<TopicPartition> assignedPartitions = new HashSet<>(assignment.partitions());
// 2. Record the assigned partitions
subscriptions.assignFromSubscribed(assignedPartitions);
}
3.6.1 Deserializing the Assignment

Debugging shows that this consumer is assigned partition 0 of topic product.
3.6.2 Recording the Assignment

After the assignment is deserialized, the partitions are stored in the assignment field of the SubscriptionState object, which the consumer uses later when fetching messages.
3.7 Recap
At this point we have a rough picture of the consumer flow. To summarize:
- Find the Coordinator for all subsequent group interaction
- Each consumer asks to join the group, and the Coordinator elects one of them as leader
- The leader consumer builds the partition assignment and syncs it to the Coordinator
- The Coordinator distributes the assignment to every consumer
- Each consumer deserializes the assignment and records its partitions locally
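To make the leader's assignment step concrete, here is a minimal, hypothetical range-style assignment for a single topic. It follows the idea behind Kafka's default RangeAssignor (members sorted, each handed a contiguous range, earlier members absorbing the remainder), but it is a sketch rather than the real implementation:

```java
import java.util.*;

public class RangeAssignSketch {

    // Assign `partitions` partitions of one topic across the given members:
    // sort member ids, give each floor(P/N) partitions, and hand one extra
    // partition to each of the first P % N members.
    static Map<String, List<Integer>> assign(List<String> members, int partitions) {
        List<String> sorted = new ArrayList<>(members);
        Collections.sort(sorted);
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        int n = sorted.size();
        int perMember = partitions / n;
        int extra = partitions % n;
        int next = 0;
        for (int i = 0; i < n; i++) {
            int count = perMember + (i < extra ? 1 : 0);
            List<Integer> owned = new ArrayList<>();
            for (int j = 0; j < count; j++) {
                owned.add(next++);
            }
            result.put(sorted.get(i), owned);
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 partitions split across 2 consumers: c1 -> [0, 1], c2 -> [2]
        System.out.println(assign(Arrays.asList("c2", "c1"), 3));
    }
}
```

The real assignor works from the deserialized Subscription of every member, but the partition arithmetic is essentially this.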
3.8 Initializing Partition Offsets

Once the consumer knows its partitions, it can fetch the records stored in them. But before fetching it must determine where to start, so the partition offsets have to be initialized.
See updateFetchPositions in org.apache.kafka.clients.consumer.KafkaConsumer#updateAssignmentMetadataIfNeeded(org.apache.kafka.common.utils.Timer, boolean).
private boolean updateFetchPositions(final Timer timer) {
// If any partitions have been truncated due to a leader change, we need to validate the offsets
fetcher.validateOffsetsIfNeeded();
// 1. If all fetch positions are already set, return immediately
cachedSubscriptionHashAllFetchPositions = subscriptions.hasAllFetchPositions();
if (cachedSubscriptionHashAllFetchPositions) return true;
// 2. Ask Kafka for committed offsets and, if present, use them as positions
if (coordinator != null && !coordinator.refreshCommittedOffsetsIfNeeded(timer)) return false;
// 3. Partitions without a committed offset are marked for reset (AWAIT_RESET)
subscriptions.resetInitializingPositions();
// 4. Reset those offsets according to the offset reset strategy
fetcher.resetOffsetsIfNeeded();
return true;
}
The source reveals two cases: either the Kafka cluster holds a committed offset for the partition, or it does not. Let's follow the first case.
public boolean refreshCommittedOffsetsIfNeeded(Timer timer) {
final Set<TopicPartition> initializingPartitions = subscriptions.initializingPartitions();
// 1. Ask the Coordinator for the partitions' committed offsets
final Map<TopicPartition, OffsetAndMetadata> offsets = fetchCommittedOffsets(initializingPartitions, timer);
// 2. No committed offsets could be retrieved; return
if (offsets == null) return false;
for (final Map.Entry<TopicPartition, OffsetAndMetadata> entry : offsets.entrySet()) {
final TopicPartition tp = entry.getKey();
final OffsetAndMetadata offsetAndMetadata = entry.getValue();
if (offsetAndMetadata != null) {
// first update the epoch if necessary
entry.getValue().leaderEpoch().ifPresent(epoch -> this.metadata.updateLastSeenEpochIfNewer(entry.getKey(), epoch));
// it's possible that the partition is no longer assigned when the response is received,
// so we need to ignore seeking if that's the case
if (this.subscriptions.isAssigned(tp)) {
final ConsumerMetadata.LeaderAndEpoch leaderAndEpoch = metadata.currentLeader(tp);
final SubscriptionState.FetchPosition position = new SubscriptionState.FetchPosition(
offsetAndMetadata.offset(), offsetAndMetadata.leaderEpoch(),
leaderAndEpoch);
// Set the partition's fetch position to the committed offset
this.subscriptions.seekUnvalidated(tp, position);
log.info("Setting offset for partition {} to the committed offset {}", tp, position);
}
}
}
return true;
}
The consumer gathers all partitions still in the initializing state, asks the Coordinator for their committed offsets, and initializes the local positions with the result.


Comparing with the screenshot in 3.6.2, the position object here has been populated and the partition offset is now known; with the offset in hand, the consumer can fetch that partition's records.
3.9 Fetching Records

Everything is now in place. With the partition offset known, the consumer only needs to send the topic, partition, and offset to the broker to get records back.
3.9.1 Sending the Fetch Request
Fetching happens in two parts: first the fetch request is sent, then the fetched records are drained. Let's start with sending the request.
See fetcher.sendFetches() in org.apache.kafka.clients.consumer.KafkaConsumer#pollForFetches.
public synchronized int sendFetches() {
// 1. Prepare the fetch request parameters
Map<Node, FetchSessionHandler.FetchRequestData> fetchRequestMap = prepareFetchRequests();
for (Map.Entry<Node, FetchSessionHandler.FetchRequestData> entry : fetchRequestMap.entrySet()) {
final Node fetchTarget = entry.getKey();
final FetchSessionHandler.FetchRequestData data = entry.getValue();
// 2. Build the request object
final FetchRequest.Builder request = FetchRequest.Builder
.forConsumer(this.maxWaitMs, this.minBytes, data.toSend())
.isolationLevel(isolationLevel)
.setMaxBytes(this.maxBytes)
.metadata(data.metadata())
.toForget(data.toForget())
.rackId(clientRackId);
if (log.isDebugEnabled()) {
log.debug("Sending {} {} to broker {}", isolationLevel, data.toString(), fetchTarget);
}
// 3. Send the fetch request
RequestFuture<ClientResponse> future = client.send(fetchTarget, request);
// We add the node to the set of nodes with pending fetch requests before adding the
// listener because the future may have been fulfilled on another thread (e.g. during a
// disconnection being handled by the heartbeat thread) which will mean the listener
// will be invoked synchronously.
this.nodesWithPendingFetchRequests.add(entry.getKey().id());
future.addListener(new RequestFutureListener<ClientResponse>() {
@Override
public void onSuccess(ClientResponse resp) {
synchronized (Fetcher.this) {
try {
FetchResponse<Records> response = (FetchResponse<Records>) resp.responseBody();
Set<TopicPartition> partitions = new HashSet<>(response.responseData().keySet());
FetchResponseMetricAggregator metricAggregator = new FetchResponseMetricAggregator(sensors, partitions);
for (Map.Entry<TopicPartition, FetchResponse.PartitionData<Records>> entry : response.responseData().entrySet()) {
TopicPartition partition = entry.getKey();
FetchRequest.PartitionData requestData = data.sessionPartitions().get(partition);
if (requestData == null) {
} else {
long fetchOffset = requestData.fetchOffset;
FetchResponse.PartitionData<Records> partitionData = entry.getValue();
log.debug("Fetch {} at offset {} for partition {} returned fetch data {}",
isolationLevel, fetchOffset, partition, partitionData);
Iterator<? extends RecordBatch> batches = partitionData.records().batches().iterator();
short responseVersion = resp.requestHeader().apiVersion();
// 4. Queue the fetched records in the local completedFetches queue
completedFetches.add(new CompletedFetch(partition, partitionData,
metricAggregator, batches, fetchOffset, responseVersion));
}
}
}
}
}
}
return fetchRequestMap.size();
}
From this we can see that once a fetch response arrives, the records are placed in the local completedFetches queue.
3.9.2 Draining the Fetched Records
See org.apache.kafka.clients.consumer.internals.Fetcher.CompletedFetch#fetchRecords.
private List<ConsumerRecord<K, V>> fetchRecords(int maxRecords) {
List<ConsumerRecord<K, V>> records = new ArrayList<>();
try {
for (int i = 0; i < maxRecords; i++) {
if (cachedRecordException == null) {
corruptLastRecord = true;
lastRecord = nextFetchedRecord();
corruptLastRecord = false;
}
if (lastRecord == null)
break;
// 1. Parse the record and add it to the result list
records.add(parseRecord(partition, currentBatch, lastRecord));
recordsRead++;
bytesRead += lastRecord.sizeInBytes();
// 2. nextFetchOffset = last record's offset + 1; e.g. if the last record has offset 81, nextFetchOffset becomes 82
nextFetchOffset = lastRecord.offset() + 1;
// In some cases, the deserialization may have thrown an exception and the retry may succeed,
// we allow user to move forward in this case.
cachedRecordException = null;
}
}
return records;
}
At last we meet the familiar ConsumerRecord<K, V>, i.e. the ConsumerRecord<String, String> consumerRecord parameter of the listener method. The analysis above deliberately highlighted nextFetchOffset; what is it for? Walk through the following code and it becomes clear.
private List<ConsumerRecord<K, V>> fetchRecords(CompletedFetch completedFetch, int maxRecords) {
if (!subscriptions.isAssigned(completedFetch.partition)) {
} else if (!subscriptions.isFetchable(completedFetch.partition)) {
} else {
FetchPosition position = subscriptions.position(completedFetch.partition);
if (completedFetch.nextFetchOffset == position.offset) {
List<ConsumerRecord<K, V>> partRecords = completedFetch.fetchRecords(maxRecords);
log.trace("Returning {} fetched records at offset {} for assigned partition {}",
partRecords.size(), position, completedFetch.partition);
// Suppose position.offset = 81 and the fetch returned the record at offset 81
// (a single message sent from the console).
// From the analysis above, completedFetch.nextFetchOffset = last record's offset + 1, i.e. 82
if (completedFetch.nextFetchOffset > position.offset) {
FetchPosition nextPosition = new FetchPosition(
completedFetch.nextFetchOffset,
completedFetch.lastEpoch,
position.currentLeader);
log.trace("Update fetching position to {} for partition {}", nextPosition, completedFetch.partition);
// Advance the partition position to the last record's offset + 1, i.e. 82
subscriptions.position(completedFetch.partition, nextPosition);
}
return partRecords;
} else {
}
}
return emptyList();
}

Debugging confirms this conclusion.
To stress the point once more: after every fetch, the consumer sets the partition's local position to the last fetched record's offset + 1.
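The rule can be captured in a tiny, hypothetical model (deliberately unrelated to the real Fetcher internals): the position after draining a batch depends only on the last record's offset, and acks play no part in it.

```java
import java.util.Arrays;
import java.util.List;

public class FetchPositionSketch {

    // After a batch is handed to the application, the local fetch position
    // jumps to lastRecord.offset + 1, whether or not any record is acked.
    static long nextPositionAfterFetch(List<Long> fetchedOffsets, long position) {
        if (fetchedOffsets.isEmpty()) {
            return position; // nothing fetched: position unchanged
        }
        return fetchedOffsets.get(fetchedOffsets.size() - 1) + 1;
    }

    public static void main(String[] args) {
        // Records at offsets 81, 82, 83 were fetched: the next fetch starts
        // at 84 even if the record at 82 later fails in the listener.
        System.out.println(nextPositionAfterFetch(Arrays.asList(81L, 82L, 83L), 81L));
    }
}
```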
3.10 Consuming Messages
Once records have been fetched, they need to be processed; see org.springframework.kafka.listener.KafkaMessageListenerContainer.ListenerConsumer#doInvokeWithRecords.
private void doInvokeWithRecords(final ConsumerRecords<K, V> records) {
Iterator<ConsumerRecord<K, V>> iterator = records.iterator();
// 1. Iterate over the fetched records
while (iterator.hasNext()) {
if (this.stopImmediate && !isRunning()) {
break;
}
final ConsumerRecord<K, V> record = checkEarlyIntercept(iterator.next());
if (record == null) {
continue;
}
this.logger.trace(() -> "Processing " + ListenerUtils.recordToString(record));
// 2. Invoke the listener method
doInvokeRecordListener(record, iterator);
if (this.nackSleep >= 0) {
handleNack(records, record);
break;
}
}
}
Consuming, then, is simply iterating over the records and invoking the method annotated with @KafkaListener.
4. Answering the Questions
4.1 Example Two Explained
Before the explanation, here is the question again for convenience:
Why, with the exception caught, can the failed message still be consumed after a restart as long as no further messages are sent, yet it is lost once other messages follow it?
Perhaps you already have the answer in mind; either way, let's walk through the cause.
Suppose the last message the consumer consumed had offset = 81; the next fetch therefore starts at offset = 82.
Now send a message with content "test" via the Kafka client. The consumer fetches it at offset 82, immediately advances the local partition position to 82 + 1 = 83, and processes it; the business logic throws and no ack happens. After a restart, the consumer reads the committed offset 82 back from Kafka, so it can consume test again.
Next, send another message with content "bb". Because fetching "test" already moved the local position to 83, the consumer fetches "bb" normally, consumes it successfully, and acks, committing offset 84. After a restart the consumer resumes from the committed offset 84, and the "test" message is lost.
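The whole sequence can be replayed with a small, hypothetical model tracking the two offsets involved: the committed offset on the broker (where restarts resume) and the local fetch position (which advances on every fetch).

```java
public class AckLossSketch {

    long committed; // offset stored on the broker; a restart resumes from here
    long position;  // local fetch position; advances on every fetch

    AckLossSketch(long committed) {
        this.committed = committed;
        this.position = committed;
    }

    // Fetch one record at the current position; the position always advances.
    long fetchOne() {
        long offset = position;
        position = offset + 1;
        return offset;
    }

    // A successful listener run acks: committed = record offset + 1.
    void ack(long offset) {
        committed = offset + 1;
    }

    // Restart: the consumer resumes fetching from the committed offset.
    void restart() {
        position = committed;
    }

    // Only "test" is sent; it fails and is not acked, so a restart
    // resumes at the same offset and redelivers it.
    static long redeliveredScenarioResumeOffset() {
        AckLossSketch c = new AckLossSketch(82);
        c.fetchOne(); // "test" at offset 82: listener throws, no ack
        c.restart();
        return c.position; // 82 -> "test" is redelivered
    }

    // "test" is followed by "bb"; acking "bb" commits past "test",
    // so a restart skips it forever.
    static long lostScenarioResumeOffset() {
        AckLossSketch c = new AckLossSketch(82);
        c.fetchOne();           // "test" at offset 82: listener throws, no ack
        long bb = c.fetchOne(); // "bb" at offset 83: consumed successfully
        c.ack(bb);              // committed becomes 84
        c.restart();
        return c.position;      // 84 -> offset 82 ("test") is never redelivered
    }

    public static void main(String[] args) {
        System.out.println("test alone, resume at: " + redeliveredScenarioResumeOffset());
        System.out.println("test then bb, resume at: " + lostScenarioResumeOffset());
    }
}
```

The two static methods reproduce exactly the two restart experiments from section 2.4.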
In one sentence, the culprit behind this puzzle is that after each fetch, the local partition position is set to the last fetched record's offset + 1.
4.2 Example One Explained
Again for convenience, here is the question:
Why, when the exception is not caught, is the message retried, and then lost after 9 retries?
Before answering, look at this snippet from org.springframework.kafka.listener.KafkaMessageListenerContainer.ListenerConsumer:
protected ErrorHandler determineErrorHandler(GenericErrorHandler<?> errHandler) {
return errHandler != null ? (ErrorHandler) errHandler
: this.transactionManager != null ? null : new SeekToCurrentErrorHandler();
}
This code shows that when no error handler is configured (and no transaction manager is in use), the default SeekToCurrentErrorHandler is installed.
Knowing that SeekToCurrentErrorHandler is in play, let's look inside it:
/**
* Construct an instance with the default recoverer which simply logs the record after
* {@value SeekUtils#DEFAULT_MAX_FAILURES} (maxFailures) have occurred for a
* topic/partition/offset, with the default back off (9 retries, no delay).
* @since 2.2
*/
public SeekToCurrentErrorHandler() {
this(null, SeekUtils.DEFAULT_BACK_OFF);
}
The Javadoc tells us this error handler performs 9 retries with no delay before giving up.
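The retry behavior can be modeled in plain Java (a hypothetical sketch, not Spring Kafka's actual implementation): the record is delivered up to 10 times in total (1 initial attempt + 9 retries) and then skipped, which matches the 10 repeated test lines in example one's output.

```java
public class RetrySkipSketch {

    // Deliver a record at most maxDeliveries times (initial attempt + retries);
    // if every attempt fails, "recover" by skipping it, after which the offset
    // is committed and the record is effectively lost. Returns the number of
    // deliveries performed.
    static int deliverUntilExhausted(Runnable listener, int maxDeliveries) {
        int deliveries = 0;
        while (deliveries < maxDeliveries) {
            deliveries++;
            try {
                listener.run();
                return deliveries; // success: ack and stop redelivering
            } catch (RuntimeException e) {
                // failure: seek back to the same offset and redeliver
            }
        }
        return deliveries; // exhausted: log-and-skip, offset committed anyway
    }

    public static void main(String[] args) {
        // 1 initial attempt + 9 retries = 10 deliveries, matching the 10
        // repeated "test" lines in example one's console output.
        int deliveries = deliverUntilExhausted(
                () -> { throw new IllegalArgumentException("message content is empty"); }, 10);
        System.out.println("deliveries before skip: " + deliveries);
    }
}
```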
With that in mind, look at this snippet from org.springframework.kafka.listener.KafkaMessageListenerContainer.ListenerConsumer#doInvokeRecordListener:
private RuntimeException doInvokeRecordListener(final ConsumerRecord<K, V> record, // NOSONAR
Iterator<ConsumerRecord<K, V>> iterator) {
Object sample = startMicrometerSample();
try {
// 1. Invoke the business method
invokeOnMessage(record);
successTimer(sample);
recordInterceptAfter(record, null);
}
catch (RuntimeException e) {
try {
// 2. Business logic threw: invoke the (default) error handler
invokeErrorHandler(record, iterator, e);
// 3. Commit the offset
commitOffsetsIfNeeded(record);
}
catch (KafkaException ke) {
}
catch (RuntimeException ee) {
}
catch (Error er) { // NOSONAR
}
}
return null;
}
This code shows that when the business logic throws, the framework catches the exception and lets the default SeekToCurrentErrorHandler retry the record 9 times; if it still fails, the offset is committed anyway, so the failing message is lost after the 9 retries.
5. Source Entry Points
org.apache.kafka.clients.consumer.KafkaConsumer#poll(org.apache.kafka.common.utils.Timer, boolean)
org.apache.kafka.clients.consumer.KafkaConsumer#updateAssignmentMetadataIfNeeded(org.apache.kafka.common.utils.Timer, boolean) covers sections 3.1-3.6
org.apache.kafka.clients.consumer.KafkaConsumer#updateFetchPositions covers section 3.8
org.apache.kafka.clients.consumer.KafkaConsumer#pollForFetches covers section 3.9