Kafka Series Part 3: Sending and Receiving Kafka Messages in Java


🦴Using the native Kafka Java client


ℹ︎Add the native client dependency to the pom file

<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->  
<dependency>  
    <groupId>org.apache.kafka</groupId>  
    <artifactId>kafka-clients</artifactId>  
    <version>2.0.0</version>  
</dependency>

ℹ︎Writing the code

🖋Producer side. ProducerConfig is the configuration constants class provided by Kafka.

import java.util.Properties;
import java.util.Random;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KProducer {
    public static final String brokerList = "192.168.138.130:9092";
    public static final String topic = "topic-java";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        Random r = new Random();
        for (int i = 0; i < 500; i++) {
            String value = "hello,Kafka:" + r.nextInt(100000);
            ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
            try {
                producer.send(record);
                System.out.println(topic + ":" + value);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        producer.close();
    }
}
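Note that the try/catch above only catches synchronous errors such as serialization failures; `send()` itself is asynchronous, so broker-side errors surface through the returned `Future` or a callback. A minimal callback-based sketch, assuming the same broker address and topic:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KProducerCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.138.130:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // try-with-resources: close() flushes any buffered records
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("topic-java", "hello,Kafka");
            // The callback runs on the producer's I/O thread once the broker responds
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();   // broker-side failure
                } else {
                    System.out.println("sent to partition " + metadata.partition()
                            + " at offset " + metadata.offset());
                }
            });
        }
    }
}
```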

🖋Consumer side. ConsumerConfig is the configuration constants class provided by Kafka.

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KConsumer {
    public static final String brokerList = "192.168.138.130:9092";
    public static final String topic = "topic-java";
    public static final String groupId = "group.demo.new";
    public static final AtomicBoolean isRunning = new AtomicBoolean(true);

    public static Properties initConfig() {
        Properties props = new Properties();
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer.java.demo");
        return props;
    }

    public static void main(String[] args) {
        Properties props = initConfig();
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        try {
            while (isRunning.get()) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                if (!records.isEmpty()) {
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("topic:" + record.topic());
                        System.out.println("partition:" + record.partition());
                        System.out.println("offset:" + record.offset());
                        System.out.println("key = " + record.key() + ",value = " + record.value());
                        // do something
                    }
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}
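The `isRunning` flag above is declared but never flipped, so the loop only ends on an exception. One common pattern (a sketch of my own, not from the original code) is to flip the flag from a JVM shutdown hook and call `consumer.wakeup()`, which makes a blocked `poll()` throw `WakeupException` so the consumer can leave the group cleanly:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KConsumerShutdown {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.138.130:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group.demo.new");

        AtomicBoolean isRunning = new AtomicBoolean(true);
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        Thread mainThread = Thread.currentThread();

        // On Ctrl+C: flip the flag and interrupt a blocked poll()
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            isRunning.set(false);
            consumer.wakeup();           // makes poll() throw WakeupException
            try { mainThread.join(); } catch (InterruptedException ignored) { }
        }));

        try {
            consumer.subscribe(Collections.singletonList("topic-java"));
            while (isRunning.get()) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
            }
        } catch (WakeupException e) {
            // expected during shutdown; safe to ignore
        } finally {
            consumer.close();            // commits (if auto-commit) and leaves the group
        }
    }
}
```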

ℹ︎Experiment results. Every time a producer or consumer starts, the broker prints related logs. ZooKeeper stores broker and partition metadata, while consumer registration and removal are handled on the broker side. I also noticed that when a brand-new group registers with Kafka (assuming the topic already has data), the group shows current-offset = log-end-offset even though it has not done any visible consuming:


[root@localhost bin]# ./kafka-consumer-groups.sh --bootstrap-server 192.168.138.130:9092 --describe --group group.demo
Consumer group 'group.demo' has no active members.

[root@localhost bin]# ./kafka-consumer-groups.sh --bootstrap-server 192.168.138.130:9092 --describe --group new.group.demo

TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                             HOST            CLIENT-ID
topic-java      0          10013           10013           0               consumer.java.demo-2b0467cf-9f94-4aa5-b821-75c54486dd08 /192.168.138.1  consumer.java.demo

Later I looked into consuming from a specified offset. Note that `ConsumerConfig.AUTO_OFFSET_RESET_CONFIG = "earliest"` only takes effect when the group has no committed offset; once an offset has been committed, starting from an arbitrary position such as offset=1 requires seeking explicitly.
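To actually rewind to a given offset, `KafkaConsumer.seek()` can be used after partitions have been assigned. A minimal sketch, assuming the same broker address, topic, and group as above:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KSeekDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.138.130:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group.demo.new");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic-java"));
            // poll once so the group coordinator assigns partitions to this consumer
            consumer.poll(Duration.ofMillis(1000));
            Set<TopicPartition> assignment = consumer.assignment();
            for (TopicPartition tp : assignment) {
                consumer.seek(tp, 1L);   // the next poll() reads from offset 1
            }
            consumer.poll(Duration.ofMillis(1000))
                    .forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}
```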


ZooKeeper registers only broker-side metadata (brokers, topics, partitions); it records nothing about individual producer or consumer clients. (With the new consumer API used here, group offsets are in fact stored in Kafka's internal `__consumer_offsets` topic rather than in ZooKeeper.)

🦴Spring for Apache Kafka

ℹ︎Official documentation ℹ︎Add the pom dependency

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>2.2.8.RELEASE</version>
</dependency>

ℹ︎Writing the code 🖋application.properties configuration

# Kafka settings
spring.kafka.bootstrap-servers = 192.168.138.130:9092
# default consumer group
spring.kafka.consumer.group-id = springDemo.defaultGroup
# key/value serializers and deserializers
spring.kafka.consumer.key-deserializer = org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer = org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer = org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer = org.apache.kafka.common.serialization.StringSerializer

# maximum size of a producer batch, in bytes (not a message count)
spring.kafka.producer.batch-size = 65536
spring.kafka.producer.buffer-memory = 524288

🖋Sending messages

import java.util.Random;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KDemoSender {

    private final String topic = "spring-demo-kafka-topic";

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage() {
        Random r = new Random();
        String value = "hello,Kafka:" + r.nextInt(100000);
        kafkaTemplate.send(topic, value);
    }
}
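`KafkaTemplate.send()` is also asynchronous; in spring-kafka 2.2.x it returns a `ListenableFuture`, so a success/failure callback can be attached. A sketch using the same topic (the class name is my own):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Component
public class KDemoCallbackSender {

    private final String topic = "spring-demo-kafka-topic";

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendWithCallback(String value) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, value);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                // RecordMetadata carries the partition/offset assigned by the broker
                System.out.println("sent to offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                ex.printStackTrace();   // e.g. broker unreachable
            }
        });
    }
}
```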

🖋Listening for messages

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KDemoListener {

    // must be a compile-time constant to be usable inside the annotation
    private static final String topic = "spring-demo-kafka-topic";

    /**
     * Listen on the topic
     */
    @KafkaListener(topics = topic)
    public void receiveMessage(ConsumerRecord<?, ?> record) {
        System.out.println(record);
        System.out.println(record.value());
    }
}
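The listener above falls into the default `spring.kafka.consumer.group-id`; `@KafkaListener` can also declare its own group, which is useful when several listeners on the same topic should each receive every message. A sketch (the group name is a made-up example):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KDemoGroupListener {

    // This group id overrides spring.kafka.consumer.group-id for this listener,
    // so it gets a full copy of the topic independent of other groups.
    @KafkaListener(topics = "spring-demo-kafka-topic", groupId = "springDemo.secondGroup")
    public void receiveMessage(ConsumerRecord<?, ?> record) {
        System.out.println("[secondGroup] " + record.value());
    }
}
```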

ℹ︎Experiment results. In terms of developer experience, spring-kafka is simpler to use and to configure.
