Cluster Setup: zookeeper + kafka
Prerequisites (pre-install)
zookeeper cluster
- Download and install zk
- Official download: Download Zookeeper
- Install Zookeeper
# 0. Set up cluster hosts to simplify later configuration
vim /etc/hosts
172.1.1.1 Data_Center_ZK_1
172.1.1.2 Data_Center_ZK_2
172.1.1.3 Data_Center_ZK_3
# 1. unpack and cd to the root
tar xzf zookeeper-3.4.10.tar.gz && cd zookeeper-3.4.10
# 2. Standalone zk configuration, for reference only
# cp conf/zoo_sample.cfg conf/zoo.cfg
# vim conf/zoo.cfg
# tickTime=2000
# initLimit=10
# syncLimit=5
# dataDir=/opt/data/zookeeper
# clientPort=2181
# maxClientCnxns=60
# autopurge.snapRetainCount=3
# autopurge.purgeInterval=24
# 2. Configure the zk cluster
# Note: server ids within a zk ensemble must be unique
vim /opt/data/zookeeper/myid # set each zk server's id, e.g. 1, 2, 3
cp conf/zoo_sample.cfg conf/zoo.cfg
vim conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/data/zookeeper
clientPort=2181
maxClientCnxns=60
autopurge.purgeInterval=24
server.1=Data_Center_ZK_1:2888:3888
server.2=Data_Center_ZK_2:2888:3888
server.3=Data_Center_ZK_3:2888:3888
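With the Data_Center_ZK_<n> naming above, each node's myid can be derived from its hostname suffix instead of typed by hand on every box. A minimal sketch (the hostname is hard-coded here for illustration; in practice it would come from `hostname`):

```shell
# Sketch: derive this node's zk id from the Data_Center_ZK_<n> hostname suffix.
host="Data_Center_ZK_2"      # normally: host=$(hostname)
myid="${host##*_}"           # strip everything up to the last underscore -> "2"
echo "$myid"                 # this value would be written to /opt/data/zookeeper/myid
```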
# 3. Configure the Java heap size (2G/4G)
# Note: keep zk away from swap; swapping degrades performance dramatically
# Here, with 4G of total memory, set the initial JVM heap to 512M and the max to 2G
vim conf/java.env
export JVMFLAGS="-Xmx2048m -Xms512m"
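The 2G cap above can also be computed from total memory rather than hard-coded. A small sketch under the assumption "half of RAM, capped at 2048 MB" (the MemTotal value is hard-coded here so the example is self-contained):

```shell
# Sketch: size -Xmx as half of total memory, capped at 2048 MB to leave room for the OS.
total_kb=4194304                          # normally: awk '/MemTotal/ {print $2}' /proc/meminfo
xmx_mb=$(( total_kb / 1024 / 2 ))         # half of total memory, in MB
if [ "$xmx_mb" -gt 2048 ]; then xmx_mb=2048; fi
echo "export JVMFLAGS=\"-Xmx${xmx_mb}m -Xms512m\""
```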
# 4. Start the service
bin/zkServer.sh start
# bin/zkServer.sh stop
bin/zkServer.sh status
# The steps above must be repeated on all three servers
# 5. Verify cluster availability from a client
bin/zkCli.sh -server Data_Center_ZK_1:2181
help
ls /
create /test "hello world!"
get /test
bin/zkCli.sh -server Data_Center_ZK_3:2181
help
ls /
get /test
- References
kafka cluster
- Download and install kafka
- Official download: Download kafka_2.11
- Install Kafka
# 0. Set up cluster hosts to simplify later configuration
vim /etc/hosts
172.1.1.1 Data_Center_Kafka_1
172.1.1.2 Data_Center_Kafka_2
172.1.1.3 Data_Center_Kafka_3
# 1. unpack
tar xzf kafka_2.11-1.0.0.tgz
cd kafka_2.11-1.0.0
# 2. Cluster configuration
# Kafka uses ZooKeeper. Make sure the zookeeper cluster from the previous section is up
vim config/server.properties
# The id of the broker. Use a different broker id on each of the three servers
broker.id=1
# Zookeeper connection string
zookeeper.connect=Data_Center_ZK_1:2181,Data_Center_ZK_2:2181,Data_Center_ZK_3:2181
# Socket server settings
advertised.host.name=Data_Center_Kafka_1
advertised.port=9092
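The zookeeper.connect string is easy to mistype across three brokers; it can be generated from the host list instead. A sketch using the hostnames from /etc/hosts above:

```shell
# Sketch: build the zookeeper.connect value from the ensemble host list.
zk_hosts="Data_Center_ZK_1 Data_Center_ZK_2 Data_Center_ZK_3"
connect=""
for h in $zk_hosts; do
  connect="${connect}${connect:+,}${h}:2181"   # comma-separate host:port pairs
done
echo "zookeeper.connect=${connect}"
```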
# 3. Start the service (the start script requires the properties file)
bin/kafka-server-start.sh -daemon config/server.properties
# bin/kafka-server-stop.sh
# The steps above must be repeated on all three servers
# 4. Client connectivity tests
# List all topics
bin/kafka-topics.sh --list --zookeeper Data_Center_ZK_1:2181
# Create a topic
bin/kafka-topics.sh --create --zookeeper Data_Center_ZK_1:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
# Run the producer and then type a few messages into the console to send to the server.
bin/kafka-console-producer.sh --broker-list Data_Center_Kafka_1:9092 --topic my-replicated-topic
# Start a consumer
bin/kafka-console-consumer.sh --bootstrap-server Data_Center_Kafka_2:9092 --topic my-replicated-topic --from-beginning
# Check the status of the replicated topic
bin/kafka-topics.sh --describe --zookeeper Data_Center_ZK_1:2181 --topic my-replicated-topic
# Sample status output
# Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 0,2,1
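A quick health check is to compare the Replicas and Isr fields of the --describe line: if every replica id appears in the ISR, all copies are in sync. A parsing sketch over the sample output above:

```shell
# Sketch: pull the Replicas and Isr sets out of a kafka-topics --describe line.
line='Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 0,2,1'
replicas="${line#*Replicas: }"; replicas="${replicas%% *}"   # -> 1,2,0
isr="${line#*Isr: }"                                         # -> 0,2,1
# sort both id sets so ordering differences don't matter
r_sorted=$(echo "$replicas" | tr ',' '\n' | sort | tr '\n' ',')
i_sorted=$(echo "$isr" | tr ',' '\n' | sort | tr '\n' ',')
if [ "$r_sorted" = "$i_sorted" ]; then echo "all replicas in sync"; fi
```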
# 5. Inspect kafka's state stored in zk
/opt/tools/zookeeper-3.4.10/bin/zkCli.sh -server Data_Center_ZK_1:2181
help
ls /
ls /brokers
ls /consumers
ls /config
Cluster high availability
- Use supervisor to daemonize the services
- Supervisor installation reference
- Configure supervisor
# Add Configs: zookeeper and kafka daemons
vim /etc/supervisord.conf
[program:zookeeper]
;command=/opt/tools/zookeeper-3.4.10/bin/zkServer.sh start
command=/opt/tools/zookeeper-3.4.10/bin/zkServer.sh start-foreground
[program:kafka]
;command=/opt/tools/kafka_2.11-1.0.0/bin/kafka-server-start.sh
command=/opt/tools/kafka_2.11-1.0.0/bin/kafka-server-start.sh /opt/tools/kafka_2.11-1.0.0/config/server.properties
# Start
supervisord -c /etc/supervisord.conf
# Check status
supervisorctl status all
- Use systemd to supervise supervisor itself and enable it at boot; see reference
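The systemd hand-off in the last bullet is deferred to a reference; a minimal unit sketch is below. The supervisord binary path and flags are assumptions and should be checked against your install (e.g. `which supervisord`):

```
[Unit]
Description=Supervisor process control daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/supervisord -c /etc/supervisord.conf
ExecStop=/usr/bin/supervisorctl shutdown
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable at boot with `systemctl enable supervisord` after placing this in /etc/systemd/system/supervisord.service.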
Cluster Setup: zookeeper + storm
Prerequisites (pre-install)
See the first part of this chapter for the zookeeper cluster setup
storm cluster
- Storm installation notes
- Storm uses Zookeeper for coordinating the cluster.
- Single node Zookeeper clusters should be sufficient for most cases
- It's critical that you run Zookeeper under supervision
- It's critical that you set up a cron to compact Zookeeper's data and transaction logs.
- Install dependencies on Nimbus and worker machines
- Python 2.7 (check with: python --version)
- Install the Java development environment (JDK 1.8) on CentOS
- Install Supervisor on CentOS
- Install and configure
- Storm 1.2.1 download link
- 参考 Setting Up a Development Environment
# 0. Set up cluster hosts to simplify later configuration
vim /etc/hosts
172.1.1.1 Data_Center_Storm_1
172.1.1.2 Data_Center_Storm_2
172.1.1.3 Data_Center_Storm_3
mkdir -p /opt/data/storm
# 1. Download and extract a Storm release to Nimbus and worker machines
tar xzf apache-storm-1.2.1.tar.gz -C /opt/tools/
cd /opt/tools/apache-storm-1.2.1
# 2. Fill in mandatory configurations into storm.yaml
vim conf/storm.yaml
storm.local.dir: "/opt/data/storm"
storm.zookeeper.servers:
- "Data_Center_ZK_1"
- "Data_Center_ZK_2"
- "Data_Center_ZK_3"
nimbus.seeds: ["Data_Center_Storm_1"]
drpc.servers:
- "Data_Center_Storm_1"
#- "Data_Center_Storm_2"
#- "Data_Center_Storm_3"
drpc.port: 3772
# Leave all other settings at their defaults
# 3. Launch daemons under supervision using "storm" script and a supervisor of your choice
# On Storm_1, start nimbus, supervisor, ui, and drpc
nohup storm nimbus &
nohup storm supervisor &
nohup storm ui &
nohup storm drpc &
# On Storm_2 and Storm_3, start supervisor only
nohup storm supervisor &
# Once started, check the storm cluster status at
# http://Data_Center_Storm_1:8080
Cluster high availability
- Use supervisor to daemonize the services
- Supervisor installation reference
- Configure supervisor
# Add Configs: Storm-Supervisor | Storm-UI | Storm-Nimbus
# Note: ui and nimbus are configured on node 1 only
vim /etc/supervisord.conf
[program:storm_nimbus]
;nohup storm nimbus &
command=/opt/tools/apache-storm-1.2.1/bin/storm nimbus
[program:storm_supervisor]
;nohup storm supervisor &
command=/opt/tools/apache-storm-1.2.1/bin/storm supervisor
[program:storm_ui]
;nohup storm ui &
command=/opt/tools/apache-storm-1.2.1/bin/storm ui
[program:storm_drpc]
;nohup storm drpc &
command=/opt/tools/apache-storm-1.2.1/bin/storm drpc
# Start
supervisord -c /etc/supervisord.conf
# Check status
supervisorctl status all
- Use systemd to supervise supervisor itself and enable it at boot; see reference
Cluster test: zookeeper + kafka + storm
Set up the client development environment
# 0. Extract the storm release
tar xzf software/apache-storm-1.2.1.tar.gz -C tools/
# 1. Configure the environment
vim /etc/profile.d/global_ops_cmd.sh
export JAVA_HOME="/usr/java/jdk1.8.0_161"
export MVN_HOME="/opt/tools/apache-maven-3.5.2"
export STORM_HOME="/opt/tools/apache-storm-1.2.1"
export PATH="$PATH:$STORM_HOME/bin:$MVN_HOME/bin"
. /etc/profile.d/global_ops_cmd.sh
# 2. Configure remote cluster information; point the client at the nimbus node
# Config cluster information,
# The local Storm configs are the ones in ~/.storm/storm.yaml merged in with the configs in defaults.yaml
vim ~/.storm/storm.yaml
nimbus.seeds: ["Data_Center_Storm_1"]
# 3. List the topologies running on the cluster
storm list
# Without a local config, the nimbus host can be passed on the command line
# storm list -c nimbus.host=Data_Center_Storm_1
# 4. Other common Storm client commands
storm kill topology-name [-w wait-time-secs]
storm activate topology-name
storm deactivate topology-name
storm jar topology-jar-path class ...
# 5. Get the code: git clone the repo
# the release tags contain the example code
cd /opt/apps
git clone git://github.com/apache/storm.git
- Refs
Run storm-starter examples
# Root dir
cd /opt/apps/storm/
# 1. Check out the code version matching your cluster to avoid cross-version issues
# The storm cluster here runs 1.2.1, so check out the matching tag
git tag
git checkout tags/v1.2.1
cd /opt/apps/storm/examples/storm-starter
mvn clean package
# Run the WordCountTopology in remote/cluster mode
storm jar target/storm-starter-*.jar org.apache.storm.starter.WordCountTopology WordCountProduction remote
# Run the RollingTopWords in remote/cluster mode,
# under the name "production-topw-1"
storm jar target/storm-starter-*.jar org.apache.storm.starter.RollingTopWords production-topw-1 remote
Run storm-kafka-client examples
# 1. Check out the code version matching your cluster to avoid cross-version issues
# The storm cluster here runs 1.2.1, so check out the matching tag
git tag
git checkout tags/v1.2.1
cd examples/storm-kafka-client-examples/
# 2. Adjust the project dependencies
# Explicitly set scope to compile, otherwise errors like NoClassDefFoundError may occur
vim ./pom.xml
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka-client</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>${storm.kafka.artifact.id}</artifactId>
<version>${storm.kafka.client.version}</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${storm.kafka.client.version}</version>
<scope>compile</scope>
</dependency>
# 3. Patch the code for this version
vim src/main/java/org/apache/storm/kafka/trident/TridentKafkaClientWordCountNamedTopics.java
// Update the parameter list of newKafkaTridentSpoutOpaque as follows:
private KafkaTridentSpoutOpaque<String, String> newKafkaTridentSpoutOpaque(String broker, String topic1, String topic2) { //...
// Update the parameter list of newKafkaSpoutConfig as follows:
protected KafkaSpoutConfig<String,String> newKafkaSpoutConfig(String broker, String topic1, String topic2) { //...
// Original code, defaults to local mode:
// DrpcResultsPrinter.remoteClient().printResults(60, 1, TimeUnit.SECONDS);
// For remote mode, replace it with the following:
Thread.sleep(2000);
Config drpc = new Config();
drpc.setDebug(false);
drpc.put("storm.thrift.transport", "org.apache.storm.security.auth.SimpleTransportPlugin");//"backtype.storm.security.auth.SimpleTransportPlugin");
drpc.put(Config.STORM_NIMBUS_RETRY_TIMES, 3);
drpc.put(Config.STORM_NIMBUS_RETRY_INTERVAL, 10);
drpc.put(Config.STORM_NIMBUS_RETRY_INTERVAL_CEILING, 20);
drpc.put(Config.DRPC_MAX_BUFFER_SIZE, 1048576);
System.out.printf("drpc config: %s \n", drpc);
try {
DrpcResultsPrinter client = DrpcResultsPrinter.remoteClient(drpc, "Data_Center_Storm_1", 3772);
System.out.printf("client: %s \n", client);
client.printResults(60, 1, TimeUnit.SECONDS);
}catch (Exception e) {
e.printStackTrace();
}finally {
System.out.printf("finally \n");
}
# 4. Package with maven
# Set two properties to match your kafka install: kafka_artifact_id and kafka_broker_version
# mvn clean package -Dstorm.kafka.artifact.id=<kafka_artifact_id> -Dstorm.kafka.client.version=<kafka_broker_version>
# The version installed here is kafka_2.11-1.0.0
mvn clean package -Dstorm.kafka.artifact.id=kafka_2.11 -Dstorm.kafka.client.version=1.0.0
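The two -D values must track the kafka build you installed; they can be split out of the tarball name so they never drift apart. A sketch:

```shell
# Sketch: derive the maven -D parameters from the kafka tarball name.
tarball="kafka_2.11-1.0.0.tgz"
base="${tarball%.tgz}"        # kafka_2.11-1.0.0
artifact="${base%-*}"         # kafka_2.11  -> -Dstorm.kafka.artifact.id
version="${base##*-}"         # 1.0.0       -> -Dstorm.kafka.client.version
echo "mvn clean package -Dstorm.kafka.artifact.id=${artifact} -Dstorm.kafka.client.version=${version}"
```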
# 5. Submit the storm topology
# The last 4 arguments are:
# kafka broker address; name of topology 1 (produces msg data); name of topology 2 (produces msg data); run in remote (non-local) mode
storm jar target/storm-kafka-client-examples-1.2.1.jar org.apache.storm.kafka.trident.TridentKafkaClientWordCountNamedTopics Data_Center_Kafka_2:9092 kafka-prod-1 kafka-prod-2 remote
# storm -c nimbus.host=Data_Center_Storm_1 jar target/storm-kafka-client-examples-1.2.1.jar org.apache.storm.kafka.trident.TridentKafkaClientWordCountNamedTopics
Possible exceptions:
# 1. Dependency problem: some dependencies cannot be found
# Error: A JNI error has occurred, please check your installation and try again
# Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/storm/kafka/...
# See the dependency settings in step 2 above
# 2. kafka producer cannot write
# org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
# org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
# Fix the kafka configuration
vim config/server.properties
# Socket server settings
advertised.host.name=Data_Center_Kafka_1
advertised.port=9092
# 3. kafka consumer drpc connection failure
# java.lang.RuntimeException:
# No DRPC servers configured for topology at org.apache.storm.drpc.DRPCSpout.open(DRPCSpout.java:149) at org.apache.storm.trident.spout.RichSpoutBatchTriggerer.open(RichSpo
1. Start the drpc server
vim /opt/tools/apache-storm-1.2.1/conf/storm.yaml
drpc.servers:
- "Data_Center_Storm_1"
#- "Data_Center_Storm_2"
#- "Data_Center_Storm_3"
drpc.port: 3772
2. Configure the connection in code
vim src/main/java/org/apache/storm/kafka/trident/TridentKafkaClientWordCountNamedTopics.java
Thread.sleep(2000);
Config drpc = new Config();
drpc.setDebug(false);
drpc.put("storm.thrift.transport", "org.apache.storm.security.auth.SimpleTransportPlugin");//"backtype.storm.security.auth.SimpleTransportPlugin");
drpc.put(Config.STORM_NIMBUS_RETRY_TIMES, 3);
drpc.put(Config.STORM_NIMBUS_RETRY_INTERVAL, 10);
drpc.put(Config.STORM_NIMBUS_RETRY_INTERVAL_CEILING, 20);
drpc.put(Config.DRPC_MAX_BUFFER_SIZE, 1048576);
System.out.printf("drpc config: %s \n", drpc);
try {
DrpcResultsPrinter client = DrpcResultsPrinter.remoteClient(drpc, "Data_Center_Storm_1", 3772);
System.out.printf("client: %s \n", client);
client.printResults(60, 1, TimeUnit.SECONDS);
}catch (Exception e) {
e.printStackTrace();
}finally {
System.out.printf("finally \n");
}
More references
- Zookeeper Maintenance
- Zookeeper Supervision
- Zookeeper Monitoring
- Zookeeper Logging
- Zookeeper Admin
- ZooKeeper Getting Started Guide
- How to set zk java heap
- Storm Rationale 简介
- Storm Video Tutorial - ETE 2012
- Storm Documentation Index
- Storm Main Page
- Storm Tutorial
- streamparse python
- Running Apache Storm under Supervision: Supervisord
- Streamparse io