ZooKeeper and Kafka Cluster Setup + Common Kafka Commands + Spring Boot Integration


Setting up a Kafka cluster involves two main steps: building the ZooKeeper ensemble and then the Kafka cluster itself. We start with ZooKeeper.

ZooKeeper serves as Kafka's metadata store, holding information such as partitions, offsets, the cluster id, replicas, and the ISR (in-sync replica) list.

ZooKeeper

Download ZooKeeper

cd  /usr/local/
wget  https://mirrors.aliyun.com/apache/zookeeper/zookeeper-3.5.10/apache-zookeeper-3.5.10.tar.gz
tar -zxvf apache-zookeeper-3.5.10.tar.gz

Configuration

vim conf/zoo.cfg
# dataDir must match the directory that holds myid (default: /tmp/zookeeper)
dataDir=/tmp/zookeeper
# configure the three nodes (quorum port 2888, leader-election port 3888)
server.1=192.168.222.39:2888:3888
server.2=192.168.222.40:2888:3888
server.3=192.168.222.41:2888:3888

# create the data directory
mkdir -p /tmp/zookeeper/
# write this node's id (use "1" on the first node, "2" on the second, and so on)
echo "1" > /tmp/zookeeper/myid

Then, from the ZooKeeper directory, run:

mvn package -Dmaven.test.skip=true

Install Maven first if you don't have it. The build step is needed because the archive downloaded above is the source distribution (the precompiled one is apache-zookeeper-3.5.10-bin.tar.gz); if you skip it, ZooKeeper won't start, and the logs show:

cat /usr/local/zookeeper/logs/*
Error: Could not find or load main class org.apache.zookeeper.server.quorum.

Finally, run the start command:

/usr/local/apache-zookeeper-3.5.10/bin/zkServer.sh start

You can then clone this VM twice, or copy the installation to the other two nodes with scp, for example:

# copy a single file
scp apache-zookeeper-3.5.10.tar.gz  root@192.168.222.41:/usr/local/
# copy a whole directory (-r for recursive)
scp -v -r apache-zookeeper-3.5.10  root@192.168.222.41:/usr/local/

On each node, you then only need to change the contents of /tmp/zookeeper/myid so that it matches the number after server. in conf/zoo.cfg.
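The mapping between myid and server.N can be sketched locally; /tmp/zk-demo below is an illustrative path for the sketch (each real node writes its own id to /tmp/zookeeper/myid instead):

```shell
# Sketch: how the three myid files line up with server.N in zoo.cfg
for id in 1 2 3; do
  mkdir -p "/tmp/zk-demo/node$id"
  echo "$id" > "/tmp/zk-demo/node$id/myid"
done
cat /tmp/zk-demo/node2/myid   # prints 2 — this node would be server.2
```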

Kafka

Kafka's main components:

  • Producer: publishes messages to the Kafka cluster.

  • Consumer: pulls messages from the different partitions, which raises throughput.

  • Broker: a Kafka node; it stores produced data, serves consumers, and keeps replica backups in sync via the ISR (in-sync replica) list for high availability.

  • Topic: the subject/category a message belongs to.

  • Partition: messages are distributed across partitions, giving load balancing and high throughput.

  • Replica: a backup copy of a partition, for high availability.

    # download, same as before
    wget  https://mirrors.aliyun.com/apache/kafka/3.4.1/kafka_2.12-3.4.1.tgz
    tar -zxvf kafka_2.12-3.4.1.tgz
    
    # configuration file
    vim kafka_2.12-3.4.1/config/server.properties
    # must be unique per machine
    broker.id=0
    # use each machine's own IP here
    listeners=PLAINTEXT://192.168.222.41:9092
    # point at the ZooKeeper ensemble
    zookeeper.connect=192.168.222.39:12181,192.168.222.40:12181,192.168.222.41:12181
    
    # clone the VM or copy the directory over, as before
    scp -v -r kafka_2.12-3.4.1  root@192.168.222.41:/usr/local/
    

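Since only broker.id and listeners differ between nodes, the per-node config files can be sketched as generated from one template. The /tmp/kafka-demo paths and the BROKER_ID/HOST placeholders are illustrative; on real nodes you edit kafka_2.12-3.4.1/config/server.properties in place:

```shell
# Sketch: derive each broker's server.properties from a single template
mkdir -p /tmp/kafka-demo
cat > /tmp/kafka-demo/template.properties <<'EOF'
broker.id=BROKER_ID
listeners=PLAINTEXT://HOST:9092
zookeeper.connect=192.168.222.39:12181,192.168.222.40:12181,192.168.222.41:12181
EOF
i=0
for host in 192.168.222.39 192.168.222.40 192.168.222.41; do
  sed -e "s/BROKER_ID/$i/" -e "s|HOST|$host|" /tmp/kafka-demo/template.properties \
    > "/tmp/kafka-demo/server-$host.properties"
  i=$((i + 1))
done
grep broker.id /tmp/kafka-demo/server-192.168.222.40.properties   # prints broker.id=1
```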
Common commands

    # start
    kafka_2.12-3.4.1/bin/kafka-server-start.sh -daemon kafka_2.12-3.4.1/config/server.properties
    # stop
    kafka_2.12-3.4.1/bin/kafka-server-stop.sh
    # create a topic
    kafka_2.12-3.4.1/bin/kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 192.168.222.41:9092
    # describe a topic's partition assignment
    kafka_2.12-3.4.1/bin/kafka-topics.sh  --describe --bootstrap-server 192.168.222.41:9092 --topic test
    # list topics
    kafka_2.12-3.4.1/bin/kafka-topics.sh  --list --bootstrap-server 192.168.222.41:9092
    # change a topic's partition count (it can only be increased, never decreased)
    kafka_2.12-3.4.1/bin/kafka-topics.sh   --bootstrap-server 192.168.222.41:9092 --alter  --partitions 4 --topic test
    

The --describe output shows which node acts as the Leader for each partition, and which nodes hold its Replicas and appear in the Isr list.

    # delete a topic
    kafka_2.12-3.4.1/bin/kafka-topics.sh   --bootstrap-server 192.168.222.41:9092 --delete  --topic test
    # show a consumer group's consumption status (offsets and lag)
    kafka_2.12-3.4.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.222.41:9092 --group test --describe
    
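The consumer-group describe output has one row per partition with a LAG column; as an illustration (the sample rows below are made up, not captured from a real cluster), total lag can be summed with awk:

```shell
# Sum the LAG column (6th) from kafka-consumer-groups --describe output
sample_output='GROUP  TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
test   test   0          100             120             20
test   test   1          90              95              5'
echo "$sample_output" | awk 'NR > 1 { lag += $6 } END { print "total lag:", lag }'
# prints: total lag: 25
```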

Producing messages

    /usr/local/kafka_2.12-3.4.1/bin/kafka-console-producer.sh --broker-list 192.168.222.40:9092  --topic test
    

Consuming messages

    kafka_2.12-3.4.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.222.41:9092 --topic test
    

Note

    kafka_2.12-3.4.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.222.41:9092 --group test --describe

is equivalent to

    kafka_2.12-3.4.1/bin/kafka-consumer-groups.sh --zookeeper 192.168.222.41:2181 --group test --describe

The first talks to Kafka directly, while the second reads the data from ZooKeeper.

Newer Kafka versions have removed the --zookeeper option, so only the --bootstrap-server form still works.

Spring Boot Integration

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.10</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>seek-employeement</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>seek-employeement</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

application.yml

spring:
  kafka:
    bootstrap-servers: 192.168.222.39:9092,192.168.222.40:9092,192.168.222.41:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      acks: 1
      batch-size: 1000
      retries: 3
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      auto-offset-reset: earliest
      group-id: test
      max-poll-records: 50
      enable-auto-commit: true

SpringBootApplication

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
@EnableKafka
public class SeekEmployeementApplication {

    public static void main(String[] args) {
        SpringApplication.run(SeekEmployeementApplication.class, args);
    }

    @KafkaListener(id = "handleMessage1", topics = "test1")
    public void handleMessage1(String message) {
        System.out.println("Received message1: " + message);
    }

    @KafkaListener(id = "handleMessage2", topics = "test2")
    public void handleMessage2(String message) {
        System.out.println("Received message2: " + message);
    }

    @KafkaListener(id = "handleMessage3", topics = "test3")
    public void handleMessage3(String message) {
        System.out.println("Received message3: " + message);
    }

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Hitting any path on this app sends 100 messages to each of the three topics
    @RequestMapping
    public void test1() {
        for (int i = 0; i < 100; i++) {
            kafkaTemplate.send("test1", "data");
            kafkaTemplate.send("test2", "data");
            kafkaTemplate.send("test3", "data");
        }
    }

}

Issues encountered

  • One partition kept failing with NotLeaderOrFollowerException; in the end, deleting and recreating the topic resolved it.
  • Only one node of the cluster could be connected to; the others were unreachable. The cause was that listeners was not set in server.properties — each broker must set it to its own ip:port.
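As a quick way to diagnose the second issue, the sketch below probes each broker's listener port over TCP. It assumes bash (for /dev/tcp) and the timeout utility are available; the IPs are the ones used throughout this guide:

```shell
# Sketch: check that every broker's listener actually accepts TCP connections
check_listener() {
  # $1 = host, $2 = port; give up after 2 seconds
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable"
  fi
}
for host in 192.168.222.39 192.168.222.40 192.168.222.41; do
  check_listener "$host" 9092
done
```

If a broker shows as unreachable here, check its listeners setting and any firewall rules before digging into Kafka itself.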