Kafka Cluster Installation, Deployment, and Hands-on Use

[TOC]

Kafka Overview :sailboat:

Kafka is a high-throughput distributed publish-subscribe messaging system with the following characteristics:

  • An O(1) disk data structure provides message persistence and keeps performance stable over long periods, even with terabytes of stored messages.
  • High throughput: even on very modest hardware, Kafka can handle millions of messages per second.
  • Messages can be partitioned across Kafka brokers and consumer clusters.
  • Supports parallel data loading into Hadoop.

What a Message Queue Is Good For

  • Decoupling applications and processing in parallel
  • Ordering guarantees
  • High throughput
  • High fault tolerance and high availability
  • Scalability
  • Peak-load shaving

(figure: Kafka cluster)

How Kafka Works

A Kafka cluster consists of multiple instances, each called a broker. Messages are organized by topic; a topic can be divided into multiple partitions, and each partition can have multiple replicas.
(figure: Kafka architecture diagram 01)

Within a partition, messages are stored in order: new messages are appended to the log, and consumers pull them sequentially in FIFO order. A topic can have multiple partitions; Kafka only guarantees ordering within a single partition, not across the topic as a whole (i.e. across partitions).
(figure: Kafka architecture diagram 02)

Consumer Group (CG): to speed up reads, multiple consumers can be grouped together to consume a topic in parallel. A topic can be subscribed to by multiple CGs, and those CGs are independent of each other. A CG contains one or more consumers that compete with each other: a given message is consumed by only one consumer within the CG.
(figure: Kafka architecture diagram 03)
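
To make the per-partition ordering guarantee concrete, here is a minimal sketch (my own illustration, not code from this article) of roughly what Kafka's default partitioner does for a record that has a key: the key is hashed with murmur2 and taken modulo the partition count, so records with the same key always land in the same partition and keep their relative order.

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class DefaultPartitioningSketch {

    // Roughly what the default partitioner does for a keyed record:
    // hash the key bytes with murmur2 and take the result modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition, so its messages stay ordered;
        // different keys may map to different partitions, where no cross-partition order exists.
        System.out.println(partitionFor("192.168.1.7", 3));
        System.out.println(partitionFor("192.168.1.7", 3));
        System.out.println(partitionFor("192.168.1.8", 3));
    }
}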

1. Kafka Cluster Deployment :deciduous_tree:

Cluster plan

| Name | Node (IP) | Description | Hostname |
| --- | --- | --- | --- |
| Broker01 | 192.168.43.22 | Kafka node 01 | hadoop03 |
| Broker02 | 192.168.43.23 | Kafka node 02 | hadoop04 |
| Broker03 | 192.168.43.24 | Kafka node 03 | hadoop05 |
| Zookeeper | 192.168.43.20/21/22 | Zookeeper cluster nodes | hadoop01/hadoop02/hadoop03 |

1. Download the Kafka package and extract it

[root@hadoop03 kafka_2.11-0.10.2.1]# ll
total 52
drwxr-xr-x. 3 hadoop hadoop  4096 Apr 22  2017 bin
drwxr-xr-x. 2 hadoop hadoop  4096 Apr 22  2017 config
drwxr-xr-x. 2 root   root     152 Jan 20 18:57 kafka-logs
drwxr-xr-x. 2 hadoop hadoop  4096 Jan 20 18:43 libs
-rw-r--r--. 1 hadoop hadoop 28824 Apr 22  2017 LICENSE
drwxr-xr-x. 2 root   root    4096 Jan 20 23:07 logs
-rw-r--r--. 1 hadoop hadoop   336 Apr 22  2017 NOTICE
drwxr-xr-x. 2 hadoop hadoop    47 Apr 22  2017 site-docs

2. Create a symbolic link

[root@hadoop03 kafka_2.11-0.10.2.1]# ln -s /home/hadoop/apps/kafka_2.11-0.10.2.1 /usr/local/kafka

3. Create the log directory

[root@hadoop03 kafka]# pwd
/usr/local/kafka
[root@hadoop03 kafka]# mkdir kafka-logs/

4. Configure the broker

Edit the server.properties file under /usr/local/kafka/config as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

#Each broker's id must be unique; different brokers must use different ids
broker.id=0

#Listening port
port=9092

#Host address
host.name=192.168.43.22

#Allow topic deletion
delete.topic.enable=true


# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

#Data storage path; defaults to a directory under /tmp and should be changed
log.dirs=/usr/local/kafka/kafka-logs

#Default number of partitions for new topics
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

#Data retention time in hours; the default is 7 days (168 hours)
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

#Zookeeper connection string; separate multiple addresses with commas
zookeeper.connect=192.168.43.20:2181,192.168.43.21:2181,192.168.43.22:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

5. Copy the installation to Broker02 and Broker03

scp -r /home/hadoop/apps/kafka_2.11-0.10.2.1 hadoop@node04:/home/hadoop/apps/
scp -r /home/hadoop/apps/kafka_2.11-0.10.2.1 hadoop@node05:/home/hadoop/apps/

6. Update the configuration on Broker02 and Broker03

Create the symbolic link

[root@hadoop04 kafka_2.11-0.10.2.1]# ln -s /home/hadoop/apps/kafka_2.11-0.10.2.1 /usr/local/kafka

Update server.properties on Broker02:

broker.id=1
host.name=192.168.43.23

Update server.properties on Broker03:

broker.id=2
host.name=192.168.43.24

7. Start Broker01, Broker02, and Broker03

Start Kafka as a background (daemon) process:

[root@hadoop03 bin]# ./kafka-server-start.sh -daemon ../config/server.properties

2. Kafka in Practice :paintbrush:

1. Create a topic

[root@hadoop03 bin]# pwd
/usr/local/kafka/bin
[root@hadoop03 bin]# ./kafka-topics.sh --create --zookeeper 192.168.43.20:2181 --replication-factor 2 --partitions 3 --topic topicnewtest1
Created topic "topicnewtest1".

2. List topics

[root@hadoop03 bin]# ./kafka-topics.sh  --list --zookeeper 192.168.43.20:2181
topicnewtest1

3. Describe a topic

[root@hadoop03 bin]# ./kafka-topics.sh --describe --zookeeper 192.168.43.20:2181 --topic topicnewtest1
Topic:topicnewtest1	PartitionCount:3	ReplicationFactor:2	Configs:
	Topic: topicnewtest1	Partition: 0	Leader: 2	Replicas: 2,0	Isr: 2,0
	Topic: topicnewtest1	Partition: 1	Leader: 0	Replicas: 0,1	Isr: 0,1
	Topic: topicnewtest1	Partition: 2	Leader: 1	Replicas: 1,2	Isr: 1,2

4. Delete a topic

[root@hadoop03 bin]# ./kafka-topics.sh --delete --zookeeper 192.168.43.20:2181 --topic topicnewtest1
Topic topicnewtest1 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

5. Add partitions

[root@hadoop03 bin]# ./kafka-topics.sh --alter --zookeeper 192.168.43.20:2181 --topic topicnewtest1 --partitions 5
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
[root@hadoop03 bin]# ./kafka-topics.sh --describe --zookeeper 192.168.43.20:2181 --topic topicnewtest1
Topic:topicnewtest1	PartitionCount:5	ReplicationFactor:2	Configs:
	Topic: topicnewtest1	Partition: 0	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: topicnewtest1	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: topicnewtest1	Partition: 2	Leader: 0	Replicas: 0,2	Isr: 0,2
	Topic: topicnewtest1	Partition: 3	Leader: 1	Replicas: 1,2	Isr: 1,2
	Topic: topicnewtest1	Partition: 4	Leader: 2	Replicas: 2,0	Isr: 2,0
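
The same topic-management operations can also be done from Java with the AdminClient. This is a hedged sketch of my own: it assumes a broker and kafka-clients at version 1.0 or later (the 0.10.2 cluster installed above would need the CLI scripts instead) and reuses the topic name and broker address from this section.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class TopicAdminSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.43.22:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and a replication factor of 2
            admin.createTopics(Collections.singleton(new NewTopic("topicnewtest1", 3, (short) 2))).all().get();

            // Describe the topic: partitions, leaders, replicas, ISR
            Map<String, TopicDescription> descriptions =
                    admin.describeTopics(Collections.singleton("topicnewtest1")).all().get();
            System.out.println(descriptions);

            // Increase the partition count to 5 (partitions can only be increased, never decreased)
            admin.createPartitions(
                    Collections.singletonMap("topicnewtest1", NewPartitions.increaseTo(5))).all().get();
        }
    }
}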

6. Use the console producer and consumer scripts that ship with Kafka

Kafka's built-in console producer:

[root@hadoop03 bin]# ./kafka-console-producer.sh --broker-list 192.168.43.22:9092,192.168.43.23:9092 --topic topicnewtest1

Kafka's built-in console consumer:

[root@hadoop04 bin]# ./kafka-console-consumer.sh --bootstrap-server 192.168.43.22:9092  --from-beginning --topic topicnewtest1

Send messages on the producer side and you will see them appear on the consumer side.

  • Check the latest offset of each partition in a topic:
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.43.22:9092 --topic topicnewtest1

bash-5.1#  kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 172.19.0.6:9092 --topic topicnewtest1
topicnewtest1:0:2
topicnewtest1:1:2
topicnewtest1:2:2
topicnewtest1:3:3
topicnewtest1:4:2
  • List the Kafka consumer groups:
 ./kafka-consumer-groups.sh  --bootstrap-server 172.19.0.6:9092 --list

  • Describe a particular consumer group with the --group and --describe options (a Java equivalent follows below):
  ./kafka-consumer-groups.sh  --bootstrap-server 172.19.0.6:9092 --group kafka-map --describe
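
For reference, the same information can be pulled from Java with the AdminClient. A minimal sketch of my own, assuming kafka-clients 2.0+ and reusing the broker address and the kafka-map group from above:

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerGroupInspectionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "172.19.0.6:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent of kafka-consumer-groups.sh --list
            for (ConsumerGroupListing group : admin.listConsumerGroups().all().get()) {
                System.out.println("group: " + group.groupId());
            }
            // Equivalent of --group kafka-map --describe: committed offset per partition
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("kafka-map").partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) -> System.out.println(tp + " committed offset = " + om.offset()));
        }
    }
}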

7. Produce and consume messages from Java

  • Producer
package cn.chinahadoop.client;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

/**
 * Kafka producer
 * @author Zhangyongliang
 */
public class ProducerClient {
    public static void main(String[] args){
        Properties props = new Properties();
        //Kafka broker list
        props.put("bootstrap.servers", "192.168.43.22:9092,192.168.43.23:9092,192.168.43.24:9092");
        //acks=1: the broker acknowledges once the leader has written the message to its local log, without waiting for all followers to finish replicating it
        props.put("acks", "1");
        //string serializers for the key and value
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        //use random numbers to simulate message generation
        Random rand = new Random();
        for(int i = 0; i < 20; i++) {
            //generate a random IP address to use as the message key
            String ip = "192.168.1." + rand.nextInt(255);
            long runtime = new Date().getTime();
            //assemble the message payload
            String msg = runtime + "---" + ip;
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("send to kafka->key:" + ip + " value:" + msg);
            //send the message to the topicnewtest1 topic
            producer.send(new ProducerRecord<String, String>("topicnewtest1", ip, msg));
        }
        producer.close();
    }
}
  • Consumer
package com.yongliang.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

/**
 * Kafka consumer
 * @author Zhangyongliang
 */
public class ConsumerClient {
    /**
     * Manually commit offsets
     */
    public static void manualCommintClient(){
        Properties props = new Properties();
        //Kafka broker list
        props.put("bootstrap.servers", "192.168.43.22:9092,192.168.43.23:9092,192.168.43.24:9092");
        //consumer group id
        props.put("group.id", "yongliang");
        //commit offsets manually
        props.put("enable.auto.commit", "false");
        //earliest: start from the earliest offset; latest: start from the latest offset; none: throw an exception if no previous offset exists for this consumer group. The default is latest.
        props.put("auto.offset.reset", "earliest");
        //string deserializers for the key and value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        //subscribe to the topicnewtest1 topic; separate multiple topics with commas
        consumer.subscribe(Arrays.asList("topicnewtest1"));
        //commit only after at least 10 messages have been processed
        final int minBatchSize = 10;
        //list used to buffer received messages
        List<ConsumerRecord<String, String>> bufferList = new ArrayList<ConsumerRecord<String, String>>();
        while (true) {
            System.out.println("--------------start pull message---------------" );
            long starttime = System.currentTimeMillis();
            //poll() takes a timeout in milliseconds: if no messages are available it waits,
            //and if the timeout expires with nothing to fetch it returns and the next round of polling begins
            ConsumerRecords<String, String> records = consumer.poll(1000);
            long endtime = System.currentTimeMillis();
            long tm = (endtime - starttime) / 1000;
            System.out.println("--------------end pull message and times=" + tm + "s -------------");

            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(), record.offset(), record.key(), record.value());
                bufferList.add(record);
            }
            System.out.println("--------------buffer size->" + bufferList.size());
            //once at least 10 messages have been buffered, process them
            if (bufferList.size() >= minBatchSize) {
                System.out.println("******start deal message******");
                try {
                    //sleep for 1 second to simulate message processing
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }

                System.out.println("manual commint offset start...");
                //commit after processing
                consumer.commitSync();
                //clear the buffer and keep receiving
                bufferList.clear();
                System.out.println("manual commint offset end...");
            }
        }
    }

    /**
     * Automatically commit offsets
     */
    public static void autoCommintClient(){
        Properties props = new Properties();
        //Kafka broker list
        props.put("bootstrap.servers", "192.168.43.22:9092,192.168.43.23:9092,192.168.43.24:9092");
        props.put("group.id", "newConsumerGroup");
        //enable auto commit
        props.put("enable.auto.commit", "true");
        //auto-commit interval: 1000 milliseconds
        props.put("auto.commit.interval.ms", "1000");
        //earliest: start from the earliest offset; latest: start from the latest offset; none: throw an exception if no previous offset exists for this consumer group. The default is latest.
        props.put("auto.offset.reset", "earliest");
        //string deserializers for the key and value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        //subscribe to the topicnewtest1 topic; separate multiple topics with commas
        consumer.subscribe(Arrays.asList("topicnewtest1"));
        while (true) {
            //poll() takes a timeout in milliseconds: if no messages are available it waits,
            //and if the timeout expires with nothing to fetch it returns and the next round of polling begins
            ConsumerRecords<String, String> records = consumer.poll(1000);
            //process the fetched messages
            for (ConsumerRecord<String, String> record : records){
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(), record.offset(), record.key(), record.value());
            }

        }
    }
    public static void main(String[] args){
        //auto-commit offsets
//        autoCommintClient();
        //manually commit offsets
        manualCommintClient();
    }
}

3. Installing a Kafka Cluster with Docker :desert:

docker run  -itd --name kafka --net hadoop --ip 172.19.0.6 -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=Master:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.43.108:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -v /etc/localtime:/etc/localtime  wurstmeister/kafka:latest

docker run  -itd --name kafka2 --net hadoop --ip 172.19.0.8 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=Master:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.43.108:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -v /etc/localtime:/etc/localtime  wurstmeister/kafka:latest


docker run  -itd --name kafka3 --net hadoop --ip 172.19.0.9 -p 9094:9094 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=Master:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.43.108:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 -v /etc/localtime:/etc/localtime  wurstmeister/kafka:latest


## Master is a custom hostname for the Zookeeper node on the same subnet; 192.168.43.108 is the host machine's IP

  • -e KAFKA_BROKER_ID=0: every broker in a Kafka cluster identifies itself with a unique BROKER_ID.
  • -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181/kafka: the Zookeeper address (and path) used to manage Kafka.
  • -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092: the address and port that Kafka registers in Zookeeper and advertises to clients.
  • -e KAFKA_LISTENERS=PLAINTEXT://kafka:9092: the listener address and port Kafka binds to.
  • -v /etc/localtime:/etc/localtime: keep the container's clock in sync with the host. A quick way to verify the advertised listener from the host is sketched below.
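
To check that the advertised listener is actually reachable from outside the containers, a minimal sketch like the one below can be run from the host. This is my own addition; 192.168.43.108:9092 is the advertised listener from the commands above.

import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ListenerCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Connect through the advertised listener, i.e. the host IP and the mapped port
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.43.108:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // If KAFKA_ADVERTISED_LISTENERS is wrong, clients can often fetch metadata
            // but then fail to connect to the broker addresses returned in it.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}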

  • Install a monitoring tool

Use the same network name so that kafka-map is on the same subnet as the Kafka containers.


docker run -itd \
    --net  hadoop \
	--ip  172.19.0.7 \
    -p 8080:8080 \
    -v /opt/kafka-map/data:/usr/local/kafka-map/data \
    -e DEFAULT_USERNAME=admin \
    -e DEFAULT_PASSWORD=admin \
    --name kafka-map \
    --restart always dushixiang/kafka-map:latest

4. Integrating Kafka with Spring Boot :small_airplane:

4.1 Integrating Kafka

  • Step 1: Create the project

Create the project either with Spring Initializr (the official Spring generator) or directly in IDEA.

  • Step 2: Configure Kafka

Configure the basic Kafka settings in the application.yml configuration file:

server:
  port: 9090

spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      # Where to reset the consumer offset when no committed offset exists (earliest lets a reconnecting consumer read from the beginning)
      auto-offset-reset: earliest
    producer:
      bootstrap-servers: localhost:9092
      # Serialize message values as JSON
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
kafka:
  topic:
    my-topic: my-topic
    my-topic2: my-topic2

An additional Kafka configuration class:

package cn.javaguide.springbootkafka01sendobjects.config;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.converter.RecordMessageConverter;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;

/**
 * @author shuang.kou
 */
@Configuration
public class KafkaConfig {

    @Value("${kafka.topic.my-topic}")
    String myTopic;
    @Value("${kafka.topic.my-topic2}")
    String myTopic2;

    /**
     * JSON message converter
     */
    @Bean
    public RecordMessageConverter jsonConverter() {
        return new StringJsonMessageConverter();
    }

    /**
     * Creates the topic by registering a NewTopic bean; if the topic already exists, it is ignored.
     */
    @Bean
    public NewTopic myTopic() {
        return new NewTopic(myTopic, 2, (short) 1);
    }

    @Bean
    public NewTopic myTopic2() {
        return new NewTopic(myTopic2, 1, (short) 1);
    }
}

At this point you can try running the project. Once it starts successfully, you will find that Spring Boot has created two topics for you:

  1. my-topic: 2 partitions, 1 replica
  2. my-topic2: 1 partition, 1 replica

You can check them with the kafka-topics --describe --zookeeper zoo1:2181 command described earlier, or directly with Kafkalytic, the Kafka visualization plugin for IDEA.

  • Step 3: Create the entity class for the messages to be sent
package cn.javaguide.springbootkafka01sendobjects.entity;

public class Book {
    private Long id;
    private String name;

    public Book() {
    }

    public Book(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // getters/setters and toString omitted
}

Step 4: Create the producer that sends messages

This step is fairly long; we will improve the producer code step by step.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class BookProducerService {

    private static final Logger logger = LoggerFactory.getLogger(BookProducerService.class);

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public BookProducerService(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String topic, Object o) {
        kafkaTemplate.send(topic, o);
    }
}

With Spring Kafka's KafkaTemplate, sending a message is as simple as calling send() with the target topic and the message payload:

  kafkaTemplate.send(topic, o);

If we want to know the result of the send, sendMessage can be written like this:

    public void sendMessage(String topic, Object o) {
        try {
            SendResult<String, Object> sendResult = kafkaTemplate.send(topic, o).get();
            if (sendResult.getRecordMetadata() != null) {
                logger.info("生产者成功发送消息到" + sendResult.getProducerRecord().topic() + "-> " + sendResult.getProducerRecord().value().toString());
            }
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

However, this is a synchronous way of sending and is not recommended, because it does not take advantage of the Future.

What KafkaTemplate's send() method actually returns is a ListenableFuture.

The send() method source is as follows:

	@Override
	public ListenableFuture<SendResult<K, V>> send(String topic, @Nullable V data) {
		ProducerRecord<K, V> producerRecord = new ProducerRecord<>(topic, data);
		return doSend(producerRecord);
	}

ListenableFuture is an interface provided by Spring that extends Future.

The ListenableFuture source is as follows:

public interface ListenableFuture<T> extends Future<T> {
    void addCallback(ListenableFutureCallback<? super T> var1);

    void addCallback(SuccessCallback<? super T> var1, FailureCallback var2);

    default CompletableFuture<T> completable() {
        CompletableFuture<T> completable = new DelegatingCompletableFuture(this);
        this.addCallback(completable::complete, completable::completeExceptionally);
        return completable;
    }
}

Let's keep improving sendMessage:

    public void sendMessage(String topic, Object o) {

        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(topic, o);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {

            @Override
            public void onSuccess(SendResult<String, Object> sendResult) {
                logger.info("生产者成功发送消息到" + topic + "-> " + sendResult.getProducerRecord().value().toString());
            }
            @Override
            public void onFailure(Throwable throwable) {
                logger.error("生产者发送消息:{} 失败,原因:{}", o.toString(), throwable.getMessage());
            }
        });
    }

Optimizing once more with a lambda expression:

    public void sendMessage(String topic, Object o) {

        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(topic, o);
        future.addCallback(result -> logger.info("Producer successfully sent message to topic {} partition {}", result.getRecordMetadata().topic(), result.getRecordMetadata().partition()),
                ex -> logger.error("Producer failed to send message, reason: {}", ex.getMessage()));
    }

Now let's take a quick look at the send(String topic, @Nullable V data) method.

When we call send(String topic, @Nullable V data), it actually creates a new ProducerRecord and sends that:

	@Override
	public ListenableFuture<SendResult<K, V>> send(String topic, @Nullable V data) {
		ProducerRecord<K, V> producerRecord = new ProducerRecord<>(topic, data);
		return doSend(producerRecord);
	}

The ProducerRecord class has several constructors:

   public ProducerRecord(String topic, V value) {
        this(topic, null, null, null, value, null);
    }
    public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value) {
        ......
    }

If we want to attach information such as a timestamp or a key when sending, sendMessage() can be written like this:

    public void sendMessage(String topic, Object o) {
      // Leave the partition as null and let Kafka assign it
        ProducerRecord<String, Object> producerRecord = new ProducerRecord<>(topic, null, System.currentTimeMillis(), String.valueOf(o.hashCode()), o);
      
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send(producerRecord);
        future.addCallback(result -> logger.info("Producer successfully sent message to topic {} partition {}", result.getRecordMetadata().topic(), result.getRecordMetadata().partition()),
                ex -> logger.error("Producer failed to send message, reason: {}", ex.getMessage()));
    }
  • Step 5: Create the consumer that consumes messages

Messages are consumed by annotating a method with @KafkaListener; when messages arrive, they are polled down and processed.

import cn.javaguide.springbootkafka01sendobjects.entity.Book;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class BookConsumerService {

    @Value("${kafka.topic.my-topic}")
    private String myTopic;
    @Value("${kafka.topic.my-topic2}")
    private String myTopic2;
    private final Logger logger = LoggerFactory.getLogger(BookProducerService.class);
    private final ObjectMapper objectMapper = new ObjectMapper();


    @KafkaListener(topics = {"${kafka.topic.my-topic}"}, groupId = "group1")
    public void consumeMessage(ConsumerRecord<String, String> bookConsumerRecord) {
        try {
            Book book = objectMapper.readValue(bookConsumerRecord.value(), Book.class);
            logger.info("消费者消费topic:{} partition:{}的消息 -> {}", bookConsumerRecord.topic(), bookConsumerRecord.partition(), book.toString());
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }

    @KafkaListener(topics = {"${kafka.topic.my-topic2}"}, groupId = "group2")
    public void consumeMessage2(Book book) {
        logger.info("消费者消费{}的消息 -> {}", myTopic2, book.toString());
    }
}

  • Step 6: Create a REST controller
import cn.javaguide.springbootkafka01sendobjects.entity.Book;
import cn.javaguide.springbootkafka01sendobjects.service.BookProducerService;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.concurrent.atomic.AtomicLong;

/**
 * @author shuang.kou
 */
@RestController
@RequestMapping(value = "/book")
public class BookController {
    @Value("${kafka.topic.my-topic}")
    String myTopic;
    @Value("${kafka.topic.my-topic2}")
    String myTopic2;
    private final BookProducerService producer;
    private AtomicLong atomicLong = new AtomicLong();

    BookController(BookProducerService producer) {
        this.producer = producer;
    }

    @PostMapping
    public void sendMessageToKafkaTopic(@RequestParam("name") String name) {
        this.producer.sendMessage(myTopic, new Book(atomicLong.addAndGet(1), name));
        this.producer.sendMessage(myTopic2, new Book(atomicLong.addAndGet(1), name));
    }
}
  • Step 7: Test

Run the following command:

curl -X POST -F 'name=Java' http://localhost:9090/book

The console output looks like this:

my-topic has 2 partitions; when you send several messages, you will see them distributed fairly evenly across the partitions.

4.2 A cleaner way to create topics

In the code so far, we created Kafka topics by registering objects of type NewTopic. Once the project needs more and more topics, creating them this way is no longer very friendly. In my view there are two main problems:

  1. The topic information is not laid out clearly;
  2. The amount of code keeps growing.
    /**
     * Creates the topic by registering a NewTopic bean; if the topic already exists, it is ignored.
     */
    @Bean
    public NewTopic myTopic() {
        return new NewTopic(myTopic, 2, (short) 1);
    }

    @Bean
    public NewTopic myTopic2() {
        return new NewTopic(myTopic2, 1, (short) 1);
    }

Here is my solution to this problem:

Configure the Kafka connection information and the topics used by the project in the application.yml configuration file.

server:
  port: 9090
spring:
  kafka:
    bootstrap-servers: localhost:9092
kafka:
  topics:
    - name: topic1
      num-partitions: 3
      replication-factor: 1
    - name: topic2
      num-partitions: 1
      replication-factor: 1
    - name: topic3
      num-partitions: 2
      replication-factor: 1

The TopicConfigurations class is dedicated to reading our topic configuration:


import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import java.util.List;

@Configuration
@ConfigurationProperties(prefix = "kafka")
@Setter
@Getter
@ToString
class TopicConfigurations {
    private List<Topic> topics;

    @Setter
    @Getter
    @ToString
    static class Topic {
        String name;
        Integer numPartitions = 3;
        Short replicationFactor = 1;

        NewTopic toNewTopic() {
            return new NewTopic(this.name, this.numPartitions, this.replicationFactor);
        }

    }
}

In the TopicAdministrator class we manually register the topic objects as beans in the container.


import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.support.GenericWebApplicationContext;

import javax.annotation.PostConstruct;
import java.util.List;

/**
 * @author shuang.kou
 */
@Configuration
public class TopicAdministrator {
    private final TopicConfigurations configurations;
    private final GenericWebApplicationContext context;

    public TopicAdministrator(TopicConfigurations configurations, GenericWebApplicationContext genericContext) {
        this.configurations = configurations;
        this.context = genericContext;
    }

    @PostConstruct
    public void init() {
        initializeBeans(configurations.getTopics());
    }

    private void initializeBeans(List<TopicConfigurations.Topic> topics) {
        topics.forEach(t -> context.registerBean(t.name, NewTopic.class, t::toNewTopic));
    }


}

With this in place, when we run the project, the three topics named topic1, topic2, and topic3 are created automatically.

4.3 Consuming messages in batches

Batch consumption on the consumer side can be implemented with the following steps:

  • Update the yml file; the main new properties are:

    spring:
      kafka:
        ## Enable batch consumption
        listener:
          type: batch
          ## Concurrency should be based on the actual partition count and must not exceed it, otherwise some threads stay idle
          concurrency: 5
          ack-mode: manual
        consumer:
          # Where to reset the consumer offset when no committed offset exists (earliest lets a reconnecting consumer read from the beginning)
          auto-offset-reset: earliest
          ## Maximum number of records returned per poll
          max-poll-records: 5
          enable-auto-commit: false
    
  • Update the consumer

Change the listener's parameter to a List and switch to manual acknowledgment with acknowledge().

@KafkaListener(topics = {"${kafka.topic.my-topic}"}, groupId = "group1")
public void consumeMessage(List<ConsumerRecord<String, Message>> bookConsumerRecordList, Acknowledgment ack) {
    try {
        log.info("消费集合大小:" + bookConsumerRecordList.size());
        for (ConsumerRecord info : bookConsumerRecordList) {
            log.info("Key---" + info.key().toString());
            log.info("AA:" + JSONUtil.toJsonStr(info));
            log.info("结果实体为:" + JSONUtil.toJsonStr(info.value()));
            log.info("消费者消费topic:{} partition:{}的消息 -> {}", info.topic(), info.partition(), info.value().toString());
        }
        ack.acknowledge();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

    @KafkaListener(topics = {"${kafka.topic.my-topic2}"}, groupId = "group2")
    public void consumeMessage2(List<Message> messageList, Acknowledgment ack) {
        log.info("消费的topic2 集合大小为:" + messageList.size());
        log.info("消费者消费{}的消息 -> {}", myTopic2, JSONUtil.toJsonStr(messageList));
        ack.acknowledge();
    }

5. How Kafka Avoids Losing and Duplicating Messages :recycle:

Guaranteeing that every message is eventually consumed takes work on the producer side, the broker side, and the consumer side, namely:

  1. The producer must not lose messages when producing;
  2. The broker must not lose messages;
  3. The consumer must not miss messages.

5.1 The producer does not lose messages

5.1.1 Send messages with a callback

If a message fails to be sent, the producer retries according to the configured retry policy; if the retries are exhausted and the message still cannot be sent, Kafka hands us the exception through the callback. At that point we can persist the message that failed to send and compensate for it later.

kafkaProducer.send(new ProducerRecord<>("foo", "bar"), new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // TODO: handle the message that failed to send
        }
    }
});

5.1.2 Configure reliability parameters

  • Set acks = -1

acks=0: the producer does not wait for any acknowledgment from the broker; a message is considered sent successfully as soon as it goes out.

acks=1: the producer considers the send successful once the leader of the partition has acknowledged it.

acks=-1: the producer only considers a message successfully produced once every replica in the ISR has received it. This is the safest option: if the leader replica fails and a follower is elected leader, the message is not lost. The price is lower throughput, because the producer has to wait until all replicas have received the message before it can send more.

  • Set retries = 3

The retries parameter is the number of times the producer retries a failed send; 3 is a suggested value. If the message still fails after the retries are used up, handle it according to your business needs, for example by persisting it to disk and compensating once the service is healthy again.

  • Set retry.backoff.ms = 300

retry.backoff.ms is the interval between retries, in milliseconds; 300 ms is a suggested value. If the interval is too short, the service may still be unavailable when the retry happens. The sketch after this list shows how these settings map onto producer properties.
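
As a concrete illustration (my own sketch, not code from the article), the three settings above map onto producer properties like this:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ReliableProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.43.22:9092,192.168.43.23:9092,192.168.43.24:9092");
        // Wait for all in-sync replicas to acknowledge ("-1" and "all" are equivalent)
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry a failed send up to 3 times before reporting the error to the callback
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        // Wait 300 ms between retries so a briefly unavailable broker has time to recover
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 300);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}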

5.2 The broker does not lose messages

5.2.1 Set replication.factor > 1

replication.factor is the number of replicas per partition on the broker side. With a value greater than 1, even if the partition's leader goes down, another follower can be elected leader and continue to handle messages.

5.2.2 Set min.insync.replicas > 1

min.insync.replicas is the minimum number of replicas in the ISR. For the same reason as above, it needs to be greater than 1 to keep messages from being lost.

5.2.3 Ensure replication.factor > min.insync.replicas

If the two are equal, losing a single replica makes the whole partition unavailable. We want to improve message durability without sacrificing availability, so the recommended setting is replication.factor = min.insync.replicas + 1.

5.2.4 Set unclean.leader.election.enable = false

unclean.leader.election.enable controls whether replicas outside the ISR can be elected leader; setting it to true allows a follower that is not in the ISR to become leader. A replica outside the ISR may lag far behind the leader and have incomplete data, so electing it as leader can lose messages.
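
These settings live in the broker's server.properties (cluster-wide) or can be applied per topic. As a hedged sketch of my own, the per-topic variant through the AdminClient looks roughly like this (the topic name durable-topic is a placeholder):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class DurableTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.43.22:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Replication factor 3: the partition survives the loss of one broker
            NewTopic topic = new NewTopic("durable-topic", 3, (short) 3);
            Map<String, String> configs = new HashMap<>();
            // With acks=all on the producer, at least 2 replicas must confirm each write
            configs.put(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2");
            // Never elect an out-of-sync replica as leader
            configs.put(TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, "false");
            topic.configs(configs);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}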

5.3 The consumer does not miss messages

5.3.1 Set enable.auto.commit = false

enable.auto.commit controls whether offsets are committed automatically. Setting it to false hands the commit over to the developer: with auto-commit, an offset can be committed even though processing the message failed, and the message is lost. The correct way to commit manually is to process the messages first and commit the offset afterwards, as in the code below:

kafkaConsumer.subscribe(Collections.singleton("foo"));
try {
    new Thread(() -> {
        while (true) {
            ConsumerRecords<String, Object> records = kafkaConsumer.poll(Duration.ofMillis(100));
            handlerRecord(records);
            kafkaConsumer.commitSync();
        }
    }).start();
} catch (Exception e) {
    errHandler(e);
}

However, this can also mean that a message is processed successfully but the consumer crashes right before the offset commit goes through. After the consumer restarts, it may receive messages it has already handled, i.e. duplicates, so manual commits need to be paired with idempotency measures.

5.4 Avoiding duplicate messages

5.4.1 The producer does not send duplicate messages

Because of network issues, the producer may retry a message even though the broker has already received it, so the broker ends up with duplicates.

Since version 0.11.0, Kafka assigns every producer a unique ID and attaches a sequence number to every message, which lets the broker deduplicate retried sends. But if two different producers produce two identical messages, Kafka cannot deduplicate them; in that case we can put a custom unique message ID in the message headers and deduplicate manually on the consumer side, as sketched below.
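
As a hedged sketch of my own (the message-id header name is just a convention chosen here, not something Kafka defines), the producer side can enable idempotence and attach such a custom message ID like this:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.43.22:9092");
        // Broker-side dedup of retried sends (available since 0.11.0, requires acks=all)
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("topicnewtest1", "some-key", "some-value");
            // Application-level unique message ID, used to dedup across different producers
            record.headers().add("message-id",
                    UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}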

5.4.2 The consumer does not consume duplicate messages

Because we use manual commits to avoid missing messages, a rebalance triggered by another consumer joining while a batch is being processed, or a failed offset commit, can cause the consumer to receive messages it has already handled.

We can filter these duplicates out on the consumer side using the custom unique message ID, as in the sketch below.
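
A minimal consumer-side sketch (my own code; it assumes the message-id header written by the producer sketch above, and uses an in-memory set where a real system would use Redis or a database table):

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

public class DedupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.43.22:9092");
        props.put("group.id", "dedup-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Remembers which message IDs have already been processed
        Set<String> seenIds = new HashSet<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("topicnewtest1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    Header idHeader = record.headers().lastHeader("message-id");
                    String id = idHeader == null ? null : new String(idHeader.value(), StandardCharsets.UTF_8);
                    if (id != null && !seenIds.add(id)) {
                        continue; // duplicate: already processed, skip it
                    }
                    System.out.println("processing " + record.value());
                }
                // Commit only after the batch has been processed
                consumer.commitSync();
            }
        }
    }
}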

6. Kafka Transactional Messages :tractor:

Kafka transactions mainly guarantee that a batch of messages sent together is handled atomically (they either all succeed or all fail).

They are used mostly in Kafka stream-processing scenarios. For example, Kafka may apply different stream-processing jobs to one topic and write the results to several other topics, each consumed by a different downstream system (HBase, Redis, ES, and so on); in that case we definitely want the data sent to those topics to be transactionally consistent.

By default, Kafka transactional messages require 3 or more broker nodes, which is one reason we built a Kafka cluster in the first step. You can also set transaction.state.log.replication.factor=1 so that even a single-node Kafka supports transactions. Before looking at the Spring Boot side, the sketch below shows the same idea with the plain Java client.
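
Outside of Spring, the same guarantee can be obtained directly with the Java client. A minimal, hedged sketch of my own (the topic names and the transactional.id are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094");
        // A stable transactional.id enables transactions (and implies idempotence)
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-producer-1");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            // Both sends become visible to read_committed consumers together, or not at all
            producer.send(new ProducerRecord<>("topic-a", "key", "value-1"));
            producer.send(new ProducerRecord<>("topic-b", "key", "value-2"));
            producer.commitTransaction();
        } catch (KafkaException e) {
            // Simplified handling: abort on failure (fatal errors would require closing the producer)
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}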

6.1 Two important configuration parameters

  1. transaction-id-prefix: the transaction ID prefix
  2. isolation-level: read_committed: only read messages that have been committed

The application.yml configuration is as follows:

server:
 port: 9090
spring:
 kafka:
   bootstrap-servers: localhost:9092,localhost:9093,localhost:9094
   consumer:
     # Where to reset the consumer offset when no committed offset exists (earliest lets a reconnecting consumer read from the beginning)
     auto-offset-reset: earliest
     # Transaction isolation level
     isolation-level: read_committed # only read committed messages
   producer:
     value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
     retries: 3  # number of retries
     # Enable transactions
     transaction-id-prefix: my-tx. # transaction ID prefix
kafka:
 topic:
   topic-test-transaction: topic-test-transaction

The executeInTransaction() method

How do we send messages within a transaction? One very simple approach is to put our message-sending logic and business logic inside KafkaTemplate's executeInTransaction().

The signature of executeInTransaction() is as follows:

public <T> T executeInTransaction(OperationsCallback<K, V, T> callback) {
  ......
}

The OperationsCallback source is as follows:

	interface OperationsCallback<K, V, T> {

		T doInOperations(KafkaOperations<K, V> operations);

	}

So our code can be written like this:

@Service
public class BookProducerService {

    private List<Object> sendedBooks = new ArrayList<>();
    private static final Logger logger = LoggerFactory.getLogger(BookProducerService.class);

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public BookProducerService(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String topic, Object o) {
        kafkaTemplate.executeInTransaction(new KafkaOperations.OperationsCallback<String, Object, Object>() {
            @Override
            public Object doInOperations(KafkaOperations<String, Object> operations) {
                // Send the message
                operations.send(topic, o);
                // Simulate an exception
                int a = 1 / 0;
                // Simulate a business operation
                sendedBooks.add(o);
                return null;
            }
        });
    }
}

The code above can be rewritten with a Java 8 lambda. If your lambdas are rusty, brush up quickly; Java 8 idioms are everywhere in the source code you will read:

    public void sendMessage(String topic, Object o) {
        kafkaTemplate.executeInTransaction(kafkaOperations -> {
            // Send the message
            kafkaOperations.send(topic, o);
            // Simulate an exception
            int a = 1 / 0;
            // Simulate a business operation
            sendedBooks.add(o);
            return null;
        });
    }

A quick word on why the operations executed inside KafkaTemplate's executeInTransaction() are transactional.

We pass a callback into executeInTransaction(); if you look at the executeInTransaction() source, you will find that the method already takes care of the transaction handling for us, so we don't have to write it ourselves.

A snippet of the source to help understand:

try {
    // doInOperations contains our message-sending logic and business logic
    T result = callback.doInOperations(this);
    try {
        // Commit the in-flight transaction
        producer.commitTransaction();
    }
    catch (Exception e) {
        throw new SkipAbortException(e);
    }
    return result;
}

6.2 Using the @Transactional annotation

Using @Transactional directly also works:

    @Transactional(rollbackFor = Exception.class)
    public void sendMessage(String topic, Object o) {
        // Send the message
        kafkaTemplate.send(topic, o);
        // Simulate an exception
        int a = 1 / 0;
        // Simulate a business operation
        sendedBooks.add(o);
    }

6.3 Kafka error handling

Spring Kafka refers to a message that cannot be consumed under normal circumstances as a dead-letter message (Dead-Letter Message), and to the special queue that stores dead-letter messages as the dead-letter queue (Dead-Letter Queue).
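
As a hedged sketch of how a dead-letter topic is typically wired up with spring-kafka 2.8+ (my own example: the group ID is a placeholder, and my-topic.DLT follows the recoverer's default naming of "original topic + .DLT", so that topic must exist or topic auto-creation must be enabled):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class DeadLetterConfig {

    /**
     * After the original delivery plus 2 retries (1 second apart) all fail, the record is
     * published to the dead-letter topic. Depending on your Spring Boot / spring-kafka
     * versions, this error handler may need to be set on the listener container factory
     * explicitly instead of being picked up as a bean.
     */
    @Bean
    public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> template) {
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }

    // A listener on the dead-letter topic, e.g. for logging or alerting
    @KafkaListener(topics = "my-topic.DLT", groupId = "dlt-group")
    public void handleDeadLetter(ConsumerRecord<String, String> record) {
        System.out.println("dead letter: " + record.value());
    }
}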