Linux (UnionTech UOS X86_64, A edition): Deploying a ZooKeeper Cluster and Configuring a Single-Node Kafka


Download ZooKeeper from the official site

Download address: archive.apache.org/dist/zookee…

Find the version you need and download it.
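For instance, version 3.5.9 (the one used throughout this guide) can be fetched straight from the Apache archive; the URL below is assumed from the archive's usual layout:

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9-bin.tar.gz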

Configure the JDK environment variables:

sudo vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_291/
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Reload the configuration file (source is a shell builtin, so it is not run through sudo):

source /etc/profile
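To confirm the variables took effect:

echo $JAVA_HOME
java -version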

================ ZooKeeper installation =======================

1. Unpack ZooKeeper. First upload the downloaded apache-zookeeper-3.5.9-bin.tar.gz to the server, then extract it into the /usr/local/ directory:

tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz -C /usr/local/

Rename it to zookeeper:

mv /usr/local/apache-zookeeper-3.5.9-bin /usr/local/zookeeper

2. Edit the ZooKeeper configuration file

Go into the configuration directory /usr/local/zookeeper/conf/ and rename the sample file zoo_sample.cfg to zoo.cfg:

mv zoo_sample.cfg zoo.cfg

The configuration file, annotated:

# The number of milliseconds of each tick
# the base time unit (in ms) that all other timings are measured in
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
# time allowed (in ticks) for followers to initially connect and sync with the leader
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
# time allowed (in ticks) between a request and its acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# where ZooKeeper stores its data
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

First, create the data and log directories under the zookeeper directory and grant read/write permissions:

cd /usr/local/zookeeper
mkdir data
sudo chmod 777 data
mkdir logs
sudo chmod 777 logs

Then edit the configuration file to point at them:

dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs

Cluster configuration:

# cluster members; 2888 is the quorum port (followers connecting to the leader), 3888 is the leader-election port
server.1=server001:2888:3888
server.2=server002:2888:3888
server.3=server003:2888:3888

server001 is a hostname; an IP address can be used instead.

View and set the hostname:

hostnamectl
sudo hostnamectl set-hostname server001

In the /usr/local/zookeeper/data directory created earlier, record this machine's unique cluster ID by writing 1 into a file named myid. Note: the value in myid must match the number of the corresponding server.N entry in zoo.cfg, so server002 gets 2 and server003 gets 3.

echo "1" > /usr/local/zookeeper/data/myid

Configure the hosts file (vi /etc/hosts) and add the hostnames and IP addresses of all three cluster members:

192.168.1.106 server001
192.168.1.107 server002
192.168.1.108 server003

Start the service on each node and confirm it has bound its ports. If the firewall has not been disabled, open every port involved (including the quorum and election ports); otherwise the cluster cannot form and the nodes cannot connect to each other.
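A minimal start-and-verify sequence, run on each node (the firewall commands assume firewalld is in use; substitute your system's firewall tool if not):

/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status   # one node should report Mode: leader, the others Mode: follower

sudo firewall-cmd --permanent --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp
sudo firewall-cmd --reload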


For a web UI, consider zkui (Releases · DeemOpen/zkui (github.com); you have to clone and build it yourself). A reference zkui config.cfg:

#Server Port
serverPort=9009
#Comma seperated list of all the zookeeper servers
zkServer=localhost:2182,localhost:2181
#Http path of the repository. Ignore if you dont intent to upload files from repository.
scmRepo=http://myserver.com/@rev1=
#Path appended to the repo url. Ignore if you dont intent to upload files from repository.
scmRepoPath=//appconfig.txt
#if set to true then userSet is used for authentication, else ldap authentication is used.
ldapAuth=false
ldapDomain=mycompany,mydomain
#ldap authentication url. Ignore if using file based authentication.
ldapUrl=ldap://<ldap_host>:<ldap_port>/dc=mycom,dc=com
#Specific roles for ldap authenticated users. Ignore if using file based authentication.
ldapRoleSet={"users": [{ "username":"domain\\user1" , "role": "ADMIN" }]}
userSet = {"users": [{ "username":"admin" , "password":"admin","role": "ADMIN" },{ "username":"appconfig" , "password":"appconfig","role": "USER" }]}
#Set to prod in production and dev in local. Setting to dev will clear history each time.
env=prod
jdbcClass=org.h2.Driver
jdbcUrl=jdbc:h2:zkui
jdbcUser=root
jdbcPwd=manager
#If you want to use mysql db to store history then comment the h2 db section.
#jdbcClass=com.mysql.jdbc.Driver
#jdbcUrl=jdbc:mysql://localhost:3306/zkui
#jdbcUser=root
#jdbcPwd=manager
loginMessage=Please login using admin/manager or appconfig/appconfig.
#session timeout 5 mins/300 secs.
sessionTimeout=300
#Default 5 seconds to keep short lived zk sessions. If you have large data then the read will take more than 30 seconds so increase this accordingly. 
#A bigger zkSessionTimeout means the connection will be held longer and resource consumption will be high.
zkSessionTimeout=5
#Block PWD exposure over rest call.
blockPwdOverRest=false
#ignore rest of the props below if https=false.
https=false
keystoreFile=/home/user/keystore.jks
keystorePwd=password
keystoreManagerPwd=password
# The default ACL to use for all creation of nodes. If left blank, then all nodes will be universally accessible
# Permissions are based on single character flags: c (Create), r (read), w (write), d (delete), a (admin), * (all)
# For example defaultAcl={"acls": [{"scheme":"ip", "id":"192.168.1.192", "perms":"*"}, {"scheme":"ip", id":"192.168.1.0/24", "perms":"r"}]
defaultAcl=
# Set X-Forwarded-For to true if zkui is behind a proxy
X-Forwarded-For=false
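Once built, zkui is started from the directory containing config.cfg. The jar name below is the one the Maven build typically produces; adjust it to your actual build output:

nohup java -jar zkui-2.0-SNAPSHOT-jar-with-dependencies.jar &

The UI is then reachable on the serverPort configured above (9009 here).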

If zkui's monitoring page cannot display monitoring data and complains that the command is not whitelisted, modify both zoo.cfg and the startup script zkServer.sh to allow the four-letter-word commands. In the configuration file, add:

# enable the four-letter-word commands
4lw.commands.whitelist=*

In the startup script, find the 'user request' section and insert the following right after it:

# add the JVM option -Dzookeeper.4lw.commands.whitelist=*
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"

if [ "x$SERVER_JVMFLAGS" != "x" ]
then
    JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi
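After restarting, you can verify the whitelist took effect with a four-letter command (nc/netcat required):

echo ruok | nc localhost 2181   # prints imok when the server is running and the command is whitelisted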


================ Kafka installation =======================

Download Kafka directly into the /usr/local directory. Version 2.8 is chosen here because it still works with the ZooKeeper deployment above:

wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.13-2.8.0.tgz

After extracting, rename the directory:

tar -zxvf kafka_2.13-2.8.0.tgz
mv kafka_2.13-2.8.0 kafka

Go into the config directory and edit server.properties.


The key settings to change:

broker.id=0
# the address and port the broker actually listens on; multiple listeners may be configured,
# comma-separated, and listener names and ports must be unique
listeners=PLAINTEXT://192.168.1.108:9091
# port (legacy setting, superseded by listeners)
port=9092
# host IP (legacy setting, superseded by listeners)
host.name=192.168.1.108
# the endpoint the broker advertises: it is registered in ZooKeeper and tells clients which
# address and port to connect to, i.e. the address clients actually use; a domain proxy is
# in place here, so the value is the domain plus the external port that maps to 9091
advertised.listeners=PLAINTEXT://{host}:10339
# log (data) directory
log.dirs=/usr/local/kafka/logs
# ZooKeeper cluster to register with; note the /kafka chroot appears only once, at the end
zookeeper.connect=localhost:2182,192.168.1.107:2182,192.168.1.106:2182/kafka

Once configured, go into the bin directory and start the broker:

cd ../bin
./kafka-server-start.sh -daemon ../config/server.properties

If startup fails, run the script without -daemon to watch the startup log and diagnose the problem. Once the broker is up, you can see the port binding in the log.
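A couple of quick checks once it is running (the zkCli path and the 2182 port follow the configuration above):

ss -lnt | grep 9091   # the listener port should be bound
/usr/local/zookeeper/bin/zkCli.sh -server localhost:2182 ls /kafka/brokers/ids   # should print [0], matching broker.id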


To test producing and consuming over the internal network, start one console consumer and one console producer against the topic test, as shown below. Because the listener is bound to a specific IP, the broker must be addressed by that IP; localhost will not work.
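The console tools in Kafka's bin/ directory cover this; the topic is created first (commands as shipped with Kafka 2.8, using the listener address configured above):

./kafka-topics.sh --create --bootstrap-server 192.168.1.108:9091 --topic test --partitions 1 --replication-factor 1
./kafka-console-producer.sh --bootstrap-server 192.168.1.108:9091 --topic test
./kafka-console-consumer.sh --bootstrap-server 192.168.1.108:9091 --topic test --from-beginning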


Integrating the consumer and producer in Java. Both examples assume the org.apache.kafka:kafka-clients dependency (matching the broker version) is on the classpath.

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerTest {
    public static void main(String[] args) {
        // build the consumer configuration
        Map<String, Object> consumerConfig = new HashMap<>();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "{host}:10339");
        // deserializers for record keys and values
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "group");

        // create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig);

        // subscribe to the topic
        consumer.subscribe(Collections.singletonList("test"));

        // pull records from Kafka in a loop
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record);
            }
        }
        // unreachable while the loop above runs forever
        // consumer.close();
    }
}


import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProducerTest {
    public static void main(String[] args) {

        // configure the Kafka producer
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "{host}:10339");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // create the Kafka producer
        Producer<String, String> producer = new KafkaProducer<>(properties);

        // send one message
        try {
            producer.send(new ProducerRecord<>("test", "Key", "Value"));
            System.out.println("Message sent");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // close the producer, flushing any buffered records
            producer.close();
        }
    }
}


If the firewall has not been disabled, remember to open the internal ports as well.
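For example, for the broker's listener port (again assuming firewalld):

sudo firewall-cmd --permanent --add-port=9091/tcp
sudo firewall-cmd --reload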