Starting a Kafka Cluster


Background:

We'll simulate a cluster with 3 partitions, 2 replicas, and 3 brokers.

Analysis: partitions × replicas = 3 × 2 = 6 replicas in total, so each broker hosts 2 of them:

  • b1: replicas 1,2
  • b2: replicas 2,3
  • b3: replicas 3,1
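This layout follows the round-robin flavor of replica placement. A rough sketch of the idea (a simplification: real Kafka also randomizes the starting broker and considers racks, so actual assignments can differ):

```shell
#!/bin/bash
# Rough sketch of round-robin replica placement: partition p's first replica
# sits at broker index p, and further replicas wrap around the broker list.
brokers=(1 2 3)
partitions=3
replicas=2
assign=()
for ((p=0; p<partitions; p++)); do
  line="partition $p -> brokers"
  for ((r=0; r<replicas; r++)); do
    line+=" ${brokers[(p + r) % ${#brokers[@]}]}"
  done
  assign+=("$line")
  echo "$line"
done
```

Each broker ends up with exactly 2 replicas, matching the table above.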

Startup: prepare three copies of the configuration below, each with different ports, so that three brokers can run on one machine.

  1. Initialize the cluster and storage directories
#!/bin/bash

# Kafka KRaft cluster storage initialization script
# Initializes (or re-initializes) the cluster storage

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

echo "🔧 Kafka cluster storage initialization"
echo "================================================================"
echo ""

# Check whether old data needs cleaning up
if [ -d "/tmp/kafka-logs-1" ] || [ -d "/tmp/kafka-logs-2" ] || [ -d "/tmp/kafka-logs-3" ]; then
    echo "⚠️  Existing storage directories detected"
    echo ""
    echo "Existing directories:"
    ls -ld /tmp/kafka-logs-* 2>/dev/null
    echo ""
    read -p "Clean up and re-initialize? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo "🧹 Cleaning up old storage directories..."
        rm -rf /tmp/kafka-logs-1 /tmp/kafka-logs-2 /tmp/kafka-logs-3
        echo "✅ Cleanup complete"
        echo ""
    else
        echo "❌ Operation cancelled"
        exit 1
    fi
fi

# Generate a new cluster UUID
echo "🎲 Generating cluster UUID..."
UUID=$(kafka-storage random-uuid)
echo "✅ UUID: $UUID"
echo ""

# Format the storage directories
echo "📝 Formatting storage directories..."
echo "================================================================"

echo ""
echo "➤ Server 1 (node.id=1, broker=9091, controller=19091)..."
kafka-storage format -t $UUID -c "${SCRIPT_DIR}/server1.properties"

echo ""
echo "➤ Server 2 (node.id=2, broker=9092, controller=19092)..."
kafka-storage format -t $UUID -c "${SCRIPT_DIR}/server2.properties"

echo ""
echo "➤ Server 3 (node.id=3, broker=9093, controller=19093)..."
kafka-storage format -t $UUID -c "${SCRIPT_DIR}/server3.properties"

echo ""
echo "================================================================"
echo "✅ All server storage has been initialized!"
echo ""
echo "集群 UUID: $UUID"
echo ""
echo "📂 Storage directories:"
echo "  - Server 1: /tmp/kafka-logs-1"
echo "  - Server 2: /tmp/kafka-logs-2"
echo "  - Server 3: /tmp/kafka-logs-3"
echo ""
echo "💡 Next steps:"
echo "  1. Start the cluster: ./start-cluster.sh"
echo "  2. Check status: kafka-broker-api-versions --bootstrap-server localhost:9092"
echo ""
echo "================================================================"

# Save the UUID to a file
echo $UUID > .cluster-uuid
echo "💾 UUID saved to .cluster-uuid"

  2. Configure broker1, broker2, and broker3. server1.properties is shown below; server2 and server3 differ only in node.id, the two port numbers, and log.dirs.


# KRaft mode: this node acts as both broker and controller
process.roles=broker,controller
node.id=1

# Controller quorum: all three nodes vote
controller.quorum.voters=1@localhost:19091,2@localhost:19092,3@localhost:19093

# Listeners: broker on 9091, controller on 19091
# (server2/server3 use 9092/19092 and 9093/19093)
listeners=PLAINTEXT://:9091,CONTROLLER://:19091
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://localhost:9091,CONTROLLER://localhost:19091
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# Thread and socket settings
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

# Per-node storage directory (kafka-logs-2 / kafka-logs-3 on the other nodes)
log.dirs=/tmp/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1

# Internal-topic replication, kept at 1 for this local demo
offsets.topic.replication.factor=1
share.coordinator.state.topic.replication.factor=1
share.coordinator.state.topic.min.isr=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# Retention and segment settings
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
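Since the three property files differ only in node.id, the two port numbers, and log.dirs, they can be generated mechanically. A minimal sketch (make_config and the 909N/1909N port pattern are assumptions matching the setup above; the shared settings would be appended verbatim to each file):

```shell
#!/bin/bash
# Hypothetical helper: emit the per-node settings for node n (1..3).
# Broker listens on 909<n>, controller on 1909<n>, matching the config above.
make_config() {
  local n="$1"
  cat <<EOF
process.roles=broker,controller
node.id=${n}
controller.quorum.voters=1@localhost:19091,2@localhost:19092,3@localhost:19093
listeners=PLAINTEXT://:909${n},CONTROLLER://:1909${n}
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://localhost:909${n},CONTROLLER://localhost:1909${n}
controller.listener.names=CONTROLLER
log.dirs=/tmp/kafka-logs-${n}
EOF
}

for n in 1 2 3; do
  make_config "$n" > "server${n}.properties"
done
```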


Check that all three brokers are up:

kafka-broker-api-versions --bootstrap-server localhost:9091,localhost:9092,localhost:9093 2>/dev/null | grep "id:"
localhost:9091 (id: 1 rack: null isFenced: false) -> (
localhost:9092 (id: 2 rack: null isFenced: false) -> (
localhost:9093 (id: 3 rack: null isFenced: false) -> (

Create a topic

kafka-topics --create --topic my-messages --bootstrap-server localhost:9092 --partitions 3 --replication-factor 2

Inspect how the topic my-messages is distributed

kafka-topics --describe --topic my-messages --bootstrap-server localhost:9091

Topic: my-messages      TopicId: WLE130LvSZaPEF7HbZIAPg PartitionCount: 3       ReplicationFactor: 2    Configs: min.insync.replicas=1,segment.bytes=1073741824
Topic: my-messages      Partition: 0    Leader: 3       Replicas: 3,1   Isr: 3,1        Elr:    LastKnownElr: 
Topic: my-messages      Partition: 1    Leader: 1       Replicas: 1,2   Isr: 1,2        Elr:    LastKnownElr: 
Topic: my-messages      Partition: 2    Leader: 2       Replicas: 2,3   Isr: 2,3        Elr:    LastKnownElr: 
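From output like the above you can quickly tally how many partitions each broker leads. A small awk sketch (the sample output is embedded so the snippet is self-contained):

```shell
#!/bin/bash
# Extract the broker id after each "Leader:" field from `kafka-topics --describe` output.
describe_output='Topic: my-messages      Partition: 0    Leader: 3       Replicas: 3,1   Isr: 3,1
Topic: my-messages      Partition: 1    Leader: 1       Replicas: 1,2   Isr: 1,2
Topic: my-messages      Partition: 2    Leader: 2       Replicas: 2,3   Isr: 2,3'

leaders=$(echo "$describe_output" \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "Leader:") print $(i+1)}' \
  | sort)
echo "$leaders"
```

Here each broker leads exactly one partition, so client load is spread evenly.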

Produce messages. Note that key.separator only takes effect together with parse.key=true; each input line is then read as key:value.

kafka-console-producer --topic my-messages --bootstrap-server localhost:9091 --property parse.key=true --property key.separator=:
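The key matters because Kafka hashes it to choose a partition, so the same key always lands in the same partition, preserving per-key ordering. A toy illustration of that property (an assumption for demonstration: Kafka actually uses murmur2 on the key bytes, cksum here is just a deterministic stand-in):

```shell
#!/bin/bash
# Toy stand-in for Kafka's key -> partition mapping (real Kafka uses murmur2).
partitions=3
partition_for() {
  local h
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo $(( h % partitions ))
}

p_first=$(partition_for user1)
p_again=$(partition_for user1)
echo "user1 -> partition $p_first (repeat: $p_again)"
```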

Consume messages

kafka-console-consumer --topic my-messages --from-beginning --max-messages 10 --bootstrap-server localhost:9091

Re-consuming vs. resuming from a remembered position

With --from-beginning and no group, the console consumer re-reads the topic from the start every time. Passing --group instead makes it commit its offsets and resume from the committed position on restart:

kafka-console-consumer --topic my-messages --group my-group --bootstrap-server localhost:9091

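The "remembered position" behavior can be sketched without a cluster: a committed offset is just a durable bookmark stored outside the consumer (Kafka keeps it in the internal __consumer_offsets topic). A toy model:

```shell
#!/bin/bash
# Toy model of consumer-group offset commits: the position survives restarts
# because it is persisted externally (Kafka stores it in __consumer_offsets).
OFFSET_FILE=$(mktemp)
echo 0 > "$OFFSET_FILE"

consume_batch() {          # read the next $1 messages, then commit the new offset
  local n="$1" off
  off=$(cat "$OFFSET_FILE")
  echo "consuming offsets $off..$((off + n - 1))"
  echo $((off + n)) > "$OFFSET_FILE"   # commit
}

consume_batch 5            # first run: offsets 0..4
consume_batch 5            # "restarted" consumer resumes at offset 5
final=$(cat "$OFFSET_FILE")
```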