Deploying a Kafka Cluster on CentOS 7


The Linux operating system information is as follows:

[hadoop@hadoop18 bin]$ uname -a
Linux hadoop18 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@hadoop18 bin]$ cat /etc/issue
\S
Kernel \r on an \m

Setting up a Kafka environment also requires ZooKeeper. Both Kafka and ZooKeeper are services that run on the JVM, so a JDK must be installed first. Note that starting with version 2.0.0, Kafka no longer supports JDK 7 or earlier.

1. Installing and configuring the JDK

The detailed steps are omitted here; verify the installation with java -version:

[hadoop@hadoop18 bin]$ java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

2. Installing and configuring ZooKeeper

ZooKeeper is a required component of a Kafka cluster: Kafka uses ZooKeeper to manage its metadata, including information about the cluster, brokers, topics, and partitions.

ZooKeeper is an open-source distributed coordination service. Distributed applications can build on it to implement features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, and configuration maintenance.

A ZooKeeper ensemble has three roles: leader, follower, and observer. At any given moment there is exactly one leader; the remaining nodes are followers and observers. Observers do not take part in voting, and by default an ensemble contains only the leader and follower roles. See the official ZooKeeper website for more details.

Installation steps: see "Installing ZooKeeper on CentOS 7".

3. Installing and configuring Kafka

With the JDK and ZooKeeper in place, the Kafka broker can be installed. Start by downloading the release tarball from the official website; the package used here is kafka_2.12-2.2.0.tgz.

Extract and rename:

[hadoop@hadoop18 soft]$ tar -zxvf kafka_2.12-2.2.0.tgz
[hadoop@hadoop18 soft]$ mv kafka_2.12-2.2.0 kafka
[hadoop@hadoop18 soft]$ cd kafka
[hadoop@hadoop18 kafka]$ pwd
/home/hadoop/soft/kafka
[hadoop@hadoop18 kafka]$ su -

Configure the environment variables in /etc/profile (as root):

export KAFKA_HOME=/home/hadoop/soft/kafka
export PATH=$PATH:$KAFKA_HOME/bin

Edit the broker configuration file

$KAFKA_HOME/config/server.properties. Only the following parameters need attention:

broker.id=0
listeners=PLAINTEXT://10.2.196.20:9092
log.dirs=/home/hadoop/soft/kafka/kafka-logs
zookeeper.connect=10.2.196.18:2181,10.2.196.19:2181,10.2.196.20:2181

Note: for a standalone (single-node) deployment the defaults are sufficient; for a cluster, the following parameters must be configured:

  1. broker.id: must be different on every machine.
  2. zookeeper.connect: since there are three ZooKeeper servers, all three must be listed here.
  3. listeners: must be set when configuring a cluster; otherwise later operations fail with a leader-not-found error such as:
 WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 40 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

The resulting server.properties on each node:
[hadoop@hadoop20 config]$ cat server.properties | grep -v ^# | grep -v ^$
broker.id=0
listeners=PLAINTEXT://10.2.196.20:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/hadoop/soft/kafka/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.2.196.18:2181,10.2.196.19:2181,10.2.196.20:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
[hadoop@hadoop19 config]$ cat server.properties | grep -v ^# | grep -v ^$
broker.id=1
listeners=PLAINTEXT://10.2.196.19:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/hadoop/soft/kafka/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.2.196.18:2181,10.2.196.19:2181,10.2.196.20:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
[hadoop@hadoop18 config]$ cat server.properties | grep -v ^# | grep -v ^$
broker.id=2
listeners=PLAINTEXT://10.2.196.18:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/hadoop/soft/kafka/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.2.196.18:2181,10.2.196.19:2181,10.2.196.20:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
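
The three configurations above differ only in broker.id and listeners. As a sketch (not part of the original setup), the host-specific lines can be generated from the node's IP instead of edited by hand. The IP is hard-coded below for illustration; on a real node you would substitute that node's address.

```shell
# Sketch: generate the host-specific part of server.properties.
# HOST_IP is hard-coded for illustration; use the node's real IP.
HOST_IP=10.2.196.20
LAST_OCTET=${HOST_IP##*.}
# Map last octets 18/19/20 to broker ids 2/1/0, matching the configs above
BROKER_ID=$((20 - LAST_OCTET))

OUT=$(mktemp)
cat > "$OUT" <<EOF
broker.id=$BROKER_ID
listeners=PLAINTEXT://$HOST_IP:9092
zookeeper.connect=10.2.196.18:2181,10.2.196.19:2181,10.2.196.20:2181
EOF
cat "$OUT"
```

The generated fragment would then be merged into the real server.properties on each host.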

4. Starting Kafka

Starting the Kafka service is straightforward; run the following from the $KAFKA_HOME directory:

bin/kafka-server-start.sh config/server.properties

To run the Kafka service in the background, add the -daemon flag or append an & to the command, for example:

bin/kafka-server-start.sh -daemon config/server.properties
nohup bin/kafka-server-start.sh config/server.properties &

Note: the broker must be started on all three nodes. If startup completes without errors on each, the cluster is up.

Use the jps command to check whether the Kafka service process has started, for example:

[hadoop@hadoop18 kafka]$ jps -l
31812 org.apache.hadoop.hdfs.server.datanode.DataNode
28005 org.apache.spark.deploy.worker.Worker
26487 org.apache.zookeeper.server.quorum.QuorumPeerMain
8119 org.apache.slider.server.appmaster.SliderAppMaster
31929 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
8282 org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon
16522 sun.tools.jps.Jps
8427 org.apache.spark.deploy.worker.Worker
16139 kafka.Kafka # this is the Kafka broker process
6463 org.apache.hadoop.yarn.server.nodemanager.NodeManager

The jps command only confirms that the Kafka process has started.

To verify that the cluster is actually working, use Kafka's command-line tools:

[hadoop@hadoop18 bin]$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic kafka-demo
Created topic kafka-demo.
[hadoop@hadoop18 bin]$ kafka-topics.sh --list --zookeeper localhost:2181
kafka-demo
test
top-demo
tip

An error caused by changing the Kafka broker.id: if the broker.id in server.properties no longer matches the id stored in the data directory, startup fails with:

kafka.common.InconsistentBrokerIdException: Configured broker.id 3 doesn't match stored broker.id 0 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
        at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:710)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:212)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
        at kafka.Kafka$.main(Kafka.scala:75)
        at kafka.Kafka.main(Kafka.scala)
[2019-07-30 01:26:52,343] INFO shutting down (kafka.server.KafkaServer)

To fix it, locate the configuration file: cat server.properties

Check where log.dirs points: log.dirs=/home/user/tools/kafka/data

Find meta.properties under that path

Edit its broker.id to match the configured value
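
The fix can be sketched as follows. The temp file below is a stand-in for the real meta.properties under log.dirs, and the target id (3) matches the error message above:

```shell
# Sketch: correct broker.id in meta.properties so it matches
# server.properties. The temp file stands in for the real
# meta.properties under log.dirs.
META=$(mktemp)
cat > "$META" <<'EOF'
version=0
broker.id=0
EOF

# Rewrite the stored id to the configured one (3, per the error above)
sed -i 's/^broker.id=.*/broker.id=3/' "$META"
cat "$META"
```

Alternatively, if the node is genuinely meant to be a new broker, remove all data under log.dirs as the error message suggests.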


5. Kafka command-line operations

5.1 List all topics on the server

./bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
ack
example
filter_topic
first_top

5.2 Create a topic

[hadoop@hadoop18 kafka]$ ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic installbill --replication-factor 3 --partitions 3
Created topic installbill.
  • --topic: the topic name
  • --replication-factor: the number of replicas
  • --partitions: the number of partitions

5.3 Delete a topic

[hadoop@hadoop18 kafka]$ ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic first_top 
Topic first_top is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

delete.topic.enable=true must be set in server.properties; otherwise the topic is only marked for deletion (or you can simply restart the brokers).
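
Enabling the setting can be sketched with a small idempotent edit. The temp file here stands in for the real server.properties; on a real cluster you would apply this to every broker and then restart it:

```shell
# Sketch: ensure delete.topic.enable=true is present in the config.
# The temp file stands in for the real server.properties.
CONF=$(mktemp)
# Add the setting if absent, otherwise rewrite it in place
if grep -q '^delete.topic.enable=' "$CONF"; then
  sed -i 's/^delete.topic.enable=.*/delete.topic.enable=true/' "$CONF"
else
  echo 'delete.topic.enable=true' >> "$CONF"
fi
grep delete.topic.enable "$CONF"
```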

5.4 Produce messages

[hadoop@hadoop18 kafka]$ ./bin/kafka-console-producer.sh --broker-list 10.2.196.18:9092 --topic installbill
>hello
>hell
>hello
>hello wrod
>hello 
>hello word
>hello

5.5 Consume messages

[hadoop@hadoop18 kafka]$ ./bin/kafka-console-consumer.sh --bootstrap-server 10.2.196.18:9092 --topic installbill --from-beginning
hello
hello word
hello
hello
hello wrod
hello
hell
hello 
hello

  • --from-beginning: reads all existing data in the topic from the beginning. Decide whether to add this flag based on your use case.

5.6 Describe a topic

[hadoop@hadoop18 kafka]$ ./bin/kafka-topics.sh  --topic test --describe  --zookeeper=localhost:2181
Topic:test	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: test	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
[hadoop@hadoop18 kafka]$ ./bin/kafka-topics.sh  --topic installbill --describe  --zookeeper=localhost:2181
Topic:installbill	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: installbill	Partition: 0	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0
	Topic: installbill	Partition: 1	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
	Topic: installbill	Partition: 2	Leader: 1	Replicas: 1,0,2	Isr: 1,0,2
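
The describe output can also be checked mechanically, e.g. to confirm that no partition is under-replicated (every ISR as large as its replica list). A sketch that replays the installbill output above through awk instead of querying a live cluster:

```shell
# Sketch: check from `--describe` output that every partition's ISR is
# as large as its replica list (i.e. no under-replicated partitions).
# The here-doc replays the installbill output above.
DESC_FILE=$(mktemp)
cat > "$DESC_FILE" <<'EOF'
Topic: installbill Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 2,1,0
Topic: installbill Partition: 1 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
Topic: installbill Partition: 2 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2
EOF
UNDER=$(awk '{
  for (i = 1; i <= NF; i++) {
    if ($i == "Replicas:") r = $(i + 1)
    if ($i == "Isr:") s = $(i + 1)
  }
  # Print any partition whose ISR is smaller than its replica list
  if (split(r, a, ",") != split(s, b, ",")) print $0
}' "$DESC_FILE")
echo "${UNDER:-all partitions fully replicated}"
```

On a live cluster, the here-doc would be replaced by piping the kafka-topics.sh --describe output into the same awk filter.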

5.7 Change the number of partitions

[hadoop@hadoop18 kafka]$ ./bin/kafka-topics.sh --zookeeper hadoop102:2181 --alter --topic first --partitions 6

Note: the number of partitions can only be increased, never decreased.