Kafka deployment (PLAINTEXT and SASL_PLAINTEXT): standalone

1. Standalone deployment without authentication

1.1. Download and extract the Kafka tarball

curl -#  -O  https://archive.apache.org/dist/kafka/2.8.1/kafka_2.13-2.8.1.tgz
tar -xvf kafka_2.13-2.8.1.tgz
mv kafka_2.13-2.8.1 kafka_server
mkdir -vp kafka_server/kafka_{data,log} kafka_server/zookeeper_data/{data,log}
cd kafka_server

1.2. Edit the Kafka configuration files

cat config/zookeeper.properties | grep -v -E "^#|^$"

dataDir=/data01/sasl-plaintext/kafka_server/zookeeper_data/data
dataLogDir=/data01/sasl-plaintext/kafka_server/zookeeper_data/log
clientPort=2182
maxClientCnxns=0
admin.enableServer=false
server.0=192.168.31.110:2888:3888

cat config/server.properties | grep -v -E "^#|^$"

broker.id=0
listeners=PLAINTEXT://192.168.31.110:9093
host.name=192.168.31.110
advertised.listeners=PLAINTEXT://192.168.31.110:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data01/sasl-plaintext/kafka_server/kafka_data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.31.110:2182
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

1.3. Write the ZooKeeper myid file (its value must match the N in server.N)

echo 0 > zookeeper_data/data/myid

1.4. Start the services

# Start ZooKeeper in the background
nohup bin/zookeeper-server-start.sh config/zookeeper.properties >zookeeper_data/log/zookeeper.log 2>&1 &
# Start Kafka in the background
nohup bin/kafka-server-start.sh config/server.properties >kafka_log/kafka.log 2>&1 &

1.5. Create a topic

bin/kafka-topics.sh --create --zookeeper 192.168.31.110:2182 --replication-factor 1 --partitions 3 --topic test
bin/kafka-topics.sh --list --zookeeper 192.168.31.110:2182
bin/kafka-topics.sh --describe --zookeeper 192.168.31.110:2182 --topic test

1.6. Verify producing and consuming

bin/kafka-console-producer.sh --broker-list 192.168.31.110:9093 --topic test
bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.110:9093 --topic test --from-beginning

2. Standalone deployment with SASL authentication

2.1. Add SASL settings to zookeeper.properties

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

2.2. Add the zk_server_jaas.conf file

Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_admin="admin";
};

Note: username="admin" and password="admin" are used for internal communication between ZooKeeper nodes. user_admin="admin" creates an account named admin with password admin; this is the account clients use to connect, and Kafka uses it later when connecting to ZooKeeper.
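The user_<name>="<password>" convention used by PlainLoginModule can be illustrated with a small parser (a sketch; the regex and the function name are my own, based on the file layout above):

```python
import re

def plain_users(jaas_text):
    """Extract the accounts defined by user_<name>="<password>" entries
    in a PlainLoginModule section, as in zk_server_jaas.conf above."""
    return dict(re.findall(r'user_(\w+)\s*=\s*"([^"]*)"', jaas_text))

jaas = '''
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_admin="admin";
};
'''
# Each user_<name> entry is one client account; username/password are
# the section's own login credentials and are not matched here.
print(plain_users(jaas))
```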

2.3. Start ZooKeeper with the JAAS config

export KAFKA_OPTS=" -Djava.security.auth.login.config=/data01/sasl-plaintext/kafka_server/config/zk_server_jaas.conf "
nohup bin/zookeeper-server-start.sh config/zookeeper.properties >zookeeper_data/log/zookeeper.log 2>&1 &

Note: zk_server_jaas.conf is the file created in 2.2. The KAFKA_OPTS variable is picked up by zookeeper-server-start.sh, so only the ZooKeeper bundled with Kafka recognizes it. A standalone ZooKeeper needs the JVM flag passed some other way, e.g. via SERVER_JVMFLAGS or by editing its start script.

2.4. Modify the Kafka config file server.properties

listeners=SASL_PLAINTEXT://192.168.31.110:9093
advertised.listeners=SASL_PLAINTEXT://192.168.31.110:9093
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
super.users=User:admin

Note: be careful here, as material on this topic found online is inconsistent. advertised.listeners may only advertise listeners that are declared in listeners, and sasl.mechanism.inter.broker.protocol is the SASL mechanism brokers use to talk to each other (clients pick from sasl.enabled.mechanisms). Also, kafka.security.auth.SimpleAclAuthorizer is deprecated in Kafka 2.x; kafka.security.authorizer.AclAuthorizer is its replacement.
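The subset rule for advertised.listeners can be checked mechanically. A minimal sketch (the property names are real; the parsing is simplified and assumes single-line comma-separated values):

```python
def listener_names(value):
    """Listener names from a listeners/advertised.listeners value,
    e.g. 'SASL_PLAINTEXT://192.168.31.110:9093' -> {'SASL_PLAINTEXT'}."""
    return {part.split("://", 1)[0] for part in value.split(",") if part}

def advertised_ok(listeners, advertised):
    # advertised.listeners must only name listeners that are actually bound
    return listener_names(advertised) <= listener_names(listeners)

print(advertised_ok("SASL_PLAINTEXT://192.168.31.110:9093",
                    "SASL_PLAINTEXT://192.168.31.110:9093"))  # True
print(advertised_ok("PLAINTEXT://192.168.31.110:9093",
                    "SASL_PLAINTEXT://192.168.31.110:9093"))  # False
```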

2.5. The Kafka account file kafka_server_jaas.conf

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin"
  user_admin="admin";
};

KafkaClient {  
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin";
};

Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin";
};

Note: the first section, KafkaServer, holds the account brokers use to authenticate to each other (and its user_* entries define client accounts); the second, KafkaClient, is used by Kafka clients (producers and consumers); the third, Client, is the SASL account Kafka uses when connecting to ZooKeeper.
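With more than one client account the file is easier to generate than to hand-edit. A sketch (the helper name is made up; the section layout mirrors the file above):

```python
def plain_section(name, username, password, users=None):
    """Render one JAAS section for PlainLoginModule, in the same layout
    as kafka_server_jaas.conf above. `users` maps account -> password."""
    lines = [f"{name} {{",
             "  org.apache.kafka.common.security.plain.PlainLoginModule required",
             f'  username="{username}"',
             f'  password="{password}"']
    for user, pw in (users or {}).items():
        lines.append(f'  user_{user}="{pw}"')
    lines[-1] += ";"  # JAAS terminates the option list with a semicolon
    return "\n".join(lines + ["};"])

print(plain_section("KafkaServer", "admin", "admin",
                    users={"admin": "admin", "reader": "reader-pw"}))
```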

2.6. Start Kafka

export KAFKA_OPTS=" -Djava.security.auth.login.config=/data01/sasl-plaintext/kafka_server/config/kafka_server_jaas.conf "
nohup bin/kafka-server-start.sh config/server.properties >kafka_log/kafka.log 2>&1 &

2.7. Open firewall ports (adjust to your environment)

firewall-cmd --permanent --add-port=2182/tcp
firewall-cmd --permanent --add-port=9093/tcp
firewall-cmd --reload

2.8. Add the kafka_client_jaas.conf file

KafkaClient {  
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin";
};

2.9. Add the following to consumer.properties and producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
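For programmatic clients, these two settings map onto client constructor options. A sketch of the equivalent options for the third-party kafka-python package (an assumption, not part of this deployment; only the option dict is built here, so nothing below needs a running broker):

```python
def sasl_plain_kwargs(bootstrap, username, password):
    """Client options equivalent to security.protocol=SASL_PLAINTEXT and
    sasl.mechanism=PLAIN; parameter names follow kafka-python's
    KafkaProducer/KafkaConsumer constructors."""
    return {
        "bootstrap_servers": bootstrap,
        "security_protocol": "SASL_PLAINTEXT",
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": username,  # account from kafka_client_jaas.conf
        "sasl_plain_password": password,
    }

# With kafka-python installed you would pass these straight through, e.g.:
#   KafkaProducer(**sasl_plain_kwargs("192.168.31.110:9093", "admin", "admin"))
print(sasl_plain_kwargs("192.168.31.110:9093", "admin", "admin"))
```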

2.10. Verify producing and consuming

bin/kafka-topics.sh --create --zookeeper 192.168.31.110:2182 --replication-factor 1 --partitions 3 --topic test
bin/kafka-topics.sh --list --zookeeper 192.168.31.110:2182
bin/kafka-topics.sh --describe --zookeeper 192.168.31.110:2182 --topic test

export KAFKA_OPTS=" -Djava.security.auth.login.config=/data01/sasl-plaintext/kafka_server/config/kafka_client_jaas.conf"
bin/kafka-console-producer.sh --broker-list 192.168.31.110:9093 --topic test --producer.config config/producer.properties

export KAFKA_OPTS=" -Djava.security.auth.login.config=/data01/sasl-plaintext/kafka_server/config/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.110:9093 --topic test --consumer.config config/consumer.properties --from-beginning