Prerequisites:
1. Prepare the Docker Compose files
1.1 Prepare the ZooKeeper cluster
Note: I am on macOS here, so replace the paths below with your own.
services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - /Users/liulinfang/data/docker-data/zookeeper/zoo1/data:/data
      - /Users/liulinfang/data/docker-data/zookeeper/zoo1/datalog:/datalog
  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - /Users/liulinfang/data/docker-data/zookeeper/zoo2/data:/data
      - /Users/liulinfang/data/docker-data/zookeeper/zoo2/datalog:/datalog
  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - /Users/liulinfang/data/docker-data/zookeeper/zoo3/data:/data
      - /Users/liulinfang/data/docker-data/zookeeper/zoo3/datalog:/datalog
Create the corresponding directories (again, replace the paths with your own):
mkdir -p /Users/liulinfang/data/docker-data/zookeeper/zoo1/data /Users/liulinfang/data/docker-data/zookeeper/zoo1/datalog /Users/liulinfang/data/docker-data/zookeeper/zoo2/data /Users/liulinfang/data/docker-data/zookeeper/zoo2/datalog /Users/liulinfang/data/docker-data/zookeeper/zoo3/data /Users/liulinfang/data/docker-data/zookeeper/zoo3/datalog
Bring it up with:
docker compose -p zookeeper_compose up -d
At this point, the ZooKeeper cluster is up.
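Before moving on, it is worth confirming that the ensemble actually formed. ZooKeeper answers "four-letter-word" commands over its client port, so a plain TCP probe is enough. Below is a minimal stdlib-only Python sketch, assuming the host port mappings 2181-2183 from the compose file above and that the `ruok` command is allowed by the server's `4lw.commands.whitelist` (newer ZooKeeper versions whitelist only `srvr` by default):

```python
import socket

def zk_four_letter(host: str, port: int, cmd: str = "ruok", timeout: float = 3.0) -> str:
    """Send a ZooKeeper four-letter-word command and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(cmd.encode("ascii"))
        sock.shutdown(socket.SHUT_WR)  # tell the server we are done sending
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes the connection after replying
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Host ports published by the compose file above: zoo1 -> 2181, zoo2 -> 2182, zoo3 -> 2183.
# e.g. zk_four_letter("localhost", 2181) returns "imok" for a healthy node, and
# zk_four_letter("localhost", 2181, "srvr") reports the node's role (leader/follower).
```

Running `srvr` against all three ports should show exactly one leader and two followers.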
1.2 Prepare the Kafka cluster
Create the corresponding directories (again, replace the paths with your own):
mkdir -p /Users/liulinfang/data/docker-data/kafka1/data /Users/liulinfang/data/docker-data/kafka2/data /Users/liulinfang/data/docker-data/kafka3/data
Create a bridge network so that the Kafka and ZooKeeper containers share one network segment:
docker network create --driver bridge zookeeper_kafka_net
Next, write the Docker Compose configuration for the Kafka cluster. Create a docker-compose.yml file and fill it with the following content:
# docker-compose.yml
services:
  kafka1:
    image: bitnami/kafka:3.1
    restart: always
    container_name: kafka1
    hostname: kafka1
    ports:
      - 9091:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_MODE: "zookeeper"
      KAFKA_ENABLE_KRAFT: "no"
      ALLOW_PLAINTEXT_LISTENER: "yes" # required
      KAFKA_ZOOKEEPER_PROTOCOL: "PLAINTEXT" # set explicitly
      KAFKA_CFG_PROCESS_ROLES: ""
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9091
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.13.67:9091 # replace 10.8.13.67 with your host IP
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /Users/liulinfang/data/docker-data/kafka1/docker.sock:/var/run/docker.sock
      - /Users/liulinfang/data/docker-data/kafka1/data:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka2:
    image: bitnami/kafka:3.1
    restart: always
    container_name: kafka2
    hostname: kafka2
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_MODE: "zookeeper"
      KAFKA_ENABLE_KRAFT: "no"
      ALLOW_PLAINTEXT_LISTENER: "yes" # required
      KAFKA_ZOOKEEPER_PROTOCOL: "PLAINTEXT" # set explicitly
      KAFKA_CFG_PROCESS_ROLES: ""
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.13.67:9092 # replace 10.8.13.67 with your host IP
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /Users/liulinfang/data/docker-data/kafka2/docker.sock:/var/run/docker.sock
      - /Users/liulinfang/data/docker-data/kafka2/data:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka3:
    image: bitnami/kafka:3.1
    restart: always
    container_name: kafka3
    hostname: kafka3
    ports:
      - 9093:9092
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ENABLE_KRAFT: "no"
      KAFKA_MODE: "zookeeper"
      ALLOW_PLAINTEXT_LISTENER: "yes" # required
      KAFKA_ZOOKEEPER_PROTOCOL: "PLAINTEXT" # set explicitly
      KAFKA_CFG_PROCESS_ROLES: ""
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.13.67:9093 # replace 10.8.13.67 with your host IP
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /Users/liulinfang/data/docker-data/kafka3/docker.sock:/var/run/docker.sock
      - /Users/liulinfang/data/docker-data/kafka3/data:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka-manager:
    image: sheepkiller/kafka-manager
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - 9010:9000
    links:
      - kafka1
      - kafka2
      - kafka3
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181
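One thing to watch in the file above: KAFKA_ADVERTISED_LISTENERS hardcodes 10.8.13.67, which is my machine's LAN address. Clients connect to whatever address a broker advertises, so you must substitute your own host IP. A small stdlib sketch for discovering it (the UDP connect() never sends a packet; 8.8.8.8 is used only for the route lookup, and we fall back to loopback when no route exists):

```python
import socket

def local_ip() -> str:
    """Best-effort guess at this host's LAN IP, for KAFKA_ADVERTISED_LISTENERS."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket only selects a route and source address;
        # no traffic actually reaches 8.8.8.8.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no usable route; fall back to loopback
    finally:
        s.close()
```

`ifconfig` (macOS) or `ip addr` (Linux) gives you the same answer; the point is just that the advertised address must be reachable from wherever your clients run.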
Attach all the containers to the shared network:
docker network connect zookeeper_kafka_net zoo1
docker network connect zookeeper_kafka_net zoo2
docker network connect zookeeper_kafka_net zoo3
docker network connect zookeeper_kafka_net kafka1
docker network connect zookeeper_kafka_net kafka2
docker network connect zookeeper_kafka_net kafka3
docker network connect zookeeper_kafka_net kafka-manager
2. Configure the Kafka and ZooKeeper environment
A few reminders:
- Make sure ALLOW_PLAINTEXT_LISTENER is set to yes so that plaintext connections are allowed.
- For production use, be sure to enable encrypted connections and authentication instead.
3. Start the cluster
Time to bring the Kafka cluster up. In the directory containing docker-compose.yml, run:
docker compose up -d
Note: ZooKeeper must be running before the Kafka brokers start, since the brokers register with ZooKeeper on boot. The same ordering applies to Kafka installed directly on Linux without Docker.
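The brokers can take a little while to finish registering with ZooKeeper, so scripts that create topics immediately after `docker compose up` often race the startup. A minimal stdlib sketch of a wait-until-the-port-answers helper (the port numbers in the usage comment are the host mappings from the compose files above):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll a TCP port until it accepts connections, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(1.0)  # not up yet; retry
    return False

# e.g. wait for all three brokers before creating topics:
# all(wait_for_port("localhost", p) for p in (9091, 9092, 9093))
```

A port accepting connections only proves the listener is up, not that the broker has joined the cluster, but it removes the most common source of flaky setup scripts.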
4. Test the Kafka cluster
Let's see whether this cluster actually works!
- Create a topic:
Create a topic named test-topic through the kafka1 container:
docker exec -it kafka1 kafka-topics.sh --create --topic test-topic --bootstrap-server kafka1:9092 --replication-factor 3 --partitions 1
- List topics:
Check that the topic was created:
docker exec -it kafka1 kafka-topics.sh --list --bootstrap-server kafka1:9092
You should see:
test-topic
- Produce test messages:
Send a few messages to test-topic through the kafka1 container:
docker exec -it kafka1 kafka-console-producer.sh --topic test-topic --bootstrap-server kafka1:9092
Type the following as test messages:
Hello Kafka
This is a test message
Press Ctrl+C to exit.
- Consume the test messages:
In another terminal window, consume the messages in test-topic through the kafka1 container:
docker exec -it kafka1 kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server kafka1:9092
You should see:
Hello Kafka
This is a test message
Notes and tips
- Port conflicts: make sure host ports 9091, 9092, and 9093 (the brokers), 2181-2183 (ZooKeeper), and 9010 (kafka-manager) are not occupied by other services. Inside the Docker network, the containers talk to each other directly on 9092 and 2181.
- ZooKeeper check: verify ZooKeeper is running by connecting to localhost:2181.
- Kafka check: make sure every broker is running and listening on the expected port.
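The port-conflict check above is easy to script. This is a minimal stdlib-only sketch that tries to bind each published host port; a failed bind means something else is already listening there:

```python
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if the TCP port can be bound, i.e. nothing else is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets us ignore sockets lingering in TIME_WAIT,
        # but an active listener still makes bind() fail.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Host ports published by the compose files above:
# for p in (2181, 2182, 2183, 9091, 9092, 9093, 9010):
#     print(p, "free" if port_is_free(p) else "IN USE")
```

Run it before `docker compose up` to catch conflicts early instead of deciphering a container start failure.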
That's it: you now have your own Kafka cluster, so have fun with it! Questions and suggestions are welcome in the comments.