Setting Up a Storm 1.2.3 Cluster on CentOS 7


1. Environment Preparation

Provision three test servers with the following IP addresses:

192.168.162.201  m162p201
192.168.162.202  m162p202
192.168.162.203  m162p203

1.1 Edit the hosts file

Edit /etc/hosts with vim on each server and add the corresponding hosts entry:

# on 192.168.162.201
192.168.162.201  m162p201

# on 192.168.162.202
192.168.162.202  m162p202

# on 192.168.162.203
192.168.162.203  m162p203
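Since section 5 will in any case require every node to resolve all three hostnames, the three entries can also be generated in one loop. A minimal sketch (it writes to a temp file for illustration; on a real node you would append to /etc/hosts as root):

```shell
# Sketch: emit all three cluster entries at once; on a real node,
# redirect the loop's output into /etc/hosts instead of a temp file.
HOSTS_TMP=$(mktemp)
for i in 201 202 203; do
    printf '192.168.162.%s  m162p%s\n' "$i" "$i" >> "$HOSTS_TMP"
done
cat "$HOSTS_TMP"
```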


1.2 Update the yum repos

Usable mirrors include the Tsinghua University mirror site, mirror.tuna.tsinghua.edu.cn/help/centos…, and Aliyun, mirrors.aliyun.com/.

A typical CentOS-Base.repo looks like this:

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Replace the existing CentOS-Base.repo with the content above. After that, the epel-release package needs to be installed:

yum makecache
yum install -y epel-release

Then switch the EPEL repo to the desired mirror. In /etc/yum.repos.d/:

mv epel.repo epel.repo.bak

vim epel.repo

Then add the following content:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

After both repo files are updated, run yum -y update to refresh the metadata and upgrade all packages to their latest versions.

2. Install Basic Software

2.1 Utility tools

In my experience, a particularly handy tool is lsb_release, which conveniently shows operating system information. Red Hat includes this command by default, but CentOS does not, so install it yourself:

yum install -y redhat-lsb

2.2 Monitoring tools

Monitoring tools: nmon, atop, htop, iotop

yum -y install nmon atop htop iotop

2.3 Troubleshooting tools

Troubleshooting tools: telnet, mtr

yum -y install telnet mtr

2.4 Base libraries

Install the base development libraries: glibc-headers, openssl-devel, bzip2-devel, readline-devel, sqlite-devel, bison, cmake. Note that CentOS ships with many base packages, but not their devel versions; the devel versions are needed later when building software from source, for example compiling MySQL, which depends on many of these libraries.

yum install -y glibc-headers gcc-c++ openssl-devel readline-devel \
    bzip2-devel sqlite-devel bison cmake

3. Install Common Software

Common software here means the development tools engineers use every day, such as the JDK and Python.

3.1 Install the JDK

sudo su - root
mkdir /opt/software
cd /opt/software
wget https://download.oracle.com/otn/java/jdk/8u231-b11/5b13a193868b4bf28bcb45c792fce896/jdk-8u231-linux-x64.rpm?AuthParam=1572433442_c80def4f441f1b246a14ed0417088aab
mv ./jdk-8u231-linux-x64.rpm\?AuthParam\=1572433442_c80def4f441f1b246a14ed0417088aab ./jdk-8u231-linux-x64.rpm

# start the installation
rpm -ivh ./jdk-8u231-linux-x64.rpm 

The installation output:

warning: ./jdk-8u231-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_231-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...

Following the procedure above, complete the installation on each of the three servers in turn: 192.168.162.201, 192.168.162.202, and 192.168.162.203.

4. Install ZooKeeper

Normally, a dedicated user is created to run ZooKeeper.

# Data disks are usually mounted under /opt; run the following as root, or with sudo
useradd -d  /opt/zookeeper  zookeeper 

Download ZooKeeper into the /opt/software directory. Note that for the newer 3.5.6 release, apache-zookeeper-3.5.6-bin.tar.gz is the compiled, directly runnable distribution, while apache-zookeeper-3.5.6.tar.gz is the source, which you can build yourself with Maven.

# switch user
sudo su - zookeeper
# extract
tar -zxvf /opt/software/apache-zookeeper-3.5.6-bin.tar.gz -C /opt/zookeeper/
# create a symlink, which makes future upgrades easier
ln -s /opt/zookeeper/apache-zookeeper-3.5.6-bin /opt/zookeeper/apache-zookeeper
# create data and logs directories for ZooKeeper's data and logs

mkdir /opt/zookeeper/data
mkdir /opt/zookeeper/logs

Perform the steps above on all three servers. Each ZooKeeper node then needs its own id, stored in a myid file in the data directory. The modified conf/zoo.cfg is as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.162.201:2888:3888
server.2=192.168.162.202:2888:3888
server.3=192.168.162.203:2888:3888

Change the dataDir path to the directory created above. Then create each node's myid according to the server.N mapping:

# on 192.168.162.201
echo '1' > /opt/zookeeper/data/myid
# on 192.168.162.202
echo '2' > /opt/zookeeper/data/myid
# on 192.168.162.203
echo '3' > /opt/zookeeper/data/myid
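The per-node echo commands can also be derived from the node's IP, so the same snippet runs unchanged on every server. A sketch, with the IP hard-coded for illustration (on a real node you would detect it, e.g. with hostname -I, and write to /opt/zookeeper/data/myid):

```shell
# Map this node's IP to its server.N id from zoo.cfg.
# NODE_IP is hard-coded here; a temp dir stands in for /opt/zookeeper/data.
NODE_IP="192.168.162.202"
case "$NODE_IP" in
    192.168.162.201) MYID=1 ;;
    192.168.162.202) MYID=2 ;;
    192.168.162.203) MYID=3 ;;
esac
DATA_DIR=$(mktemp -d)
echo "$MYID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```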

In addition, to make ZooKeeper write its logs into the dedicated log directory, edit the zkEnv.sh file and change ZOO_LOG_DIR to:

if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="/opt/zookeeper/logs"
fi
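The same change can be made non-interactively with sed instead of editing by hand. A sketch, demonstrated on a temp copy (the default value shown is illustrative, not the exact upstream text; on a real node point sed at bin/zkEnv.sh):

```shell
# Patch the ZOO_LOG_DIR assignment in place with sed.
# A temp copy of the relevant snippet is used here for illustration.
ZKENV=$(mktemp)
cat > "$ZKENV" <<'EOF'
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="."
fi
EOF
sed -i 's|ZOO_LOG_DIR=.*|ZOO_LOG_DIR="/opt/zookeeper/logs"|' "$ZKENV"
grep 'ZOO_LOG_DIR=' "$ZKENV"
```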

Then start ZooKeeper on each of the three servers:

/opt/zookeeper/apache-zookeeper/bin/zkServer.sh start
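Rather than running zkServer.sh by hand after every reboot, you could wrap it in a systemd unit. A minimal sketch assuming the user and paths created above; the unit name and restart policy are my own choices, not from the ZooKeeper docs:

```ini
# /etc/systemd/system/zookeeper.service  (illustrative)
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
Type=forking
User=zookeeper
ExecStart=/opt/zookeeper/apache-zookeeper/bin/zkServer.sh start
ExecStop=/opt/zookeeper/apache-zookeeper/bin/zkServer.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this in place, systemctl enable --now zookeeper on each node starts the service and keeps it across reboots.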

5. Install Storm

5.1 Preparation

Download Storm via wget into the /opt/software directory; the full path is /opt/software/apache-storm-1.2.3.tar.gz.

# Data disks are usually mounted under /opt; run the following as root, or with sudo
useradd -d /opt/storm storm
sudo su - storm
# extract
tar -zxvf /opt/software/apache-storm-1.2.3.tar.gz -C /opt/storm/
# create a symlink; always use a symlink so future upgrades are painless
ln -s /opt/storm/apache-storm-1.2.3 /opt/storm/apache-storm

5.2 配置

Storm has many configuration options; the defaults can be found on GitHub at github.com/apache/stor… For these servers, the basic configuration is as follows:

# Storm local storage directory
storm.local.dir: "/opt/storm/data"
# ZooKeeper servers
storm.zookeeper.servers:
    - "192.168.162.201"
    - "192.168.162.202"
    - "192.168.162.203"
# ZooKeeper port; note that different ports per node are not supported, so keep the ZK setup plain
storm.zookeeper.port: 2181
# Nimbus seed nodes; use hostnames here, otherwise the UI will not display correctly
nimbus.seeds: ["m162p201","m162p202","m162p203"]
# UI listen address and port
ui.host: 0.0.0.0
ui.port: 8087
# If you have centralized log collection, the logviewer is optional
logviewer.port: 8000
# supervisor worker slots
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703

Because the configuration above refers to each node by hostname, every node's hosts file must contain all three entries:

192.168.162.201 m162p201
192.168.162.202 m162p202
192.168.162.203 m162p203

Environment variables: edit .bash_profile and add the following:

STORM_HOME=/opt/storm/apache-storm
PATH=$PATH:$STORM_HOME/bin
export PATH

Then reload the environment variables:

source .bash_profile 

5.3 Start Storm

sudo su - storm
# start the supervisor
nohup storm supervisor > /dev/null 2>&1 &
# start nimbus
nohup storm nimbus > /dev/null 2>&1 &
# start the UI
nohup storm ui > /dev/null 2>&1 &
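The start commands can be collected into a small helper script so every node is started the same way. A sketch (the script is written to a temp path for illustration; on a real node you might keep it somewhere like /opt/storm/start-storm.sh, a name of my own choosing):

```shell
# Generate a start-all helper containing the daemon start commands,
# then make it executable. A temp file stands in for the real path.
START=$(mktemp)
cat > "$START" <<'EOF'
#!/bin/sh
nohup storm nimbus     > /dev/null 2>&1 &
nohup storm supervisor > /dev/null 2>&1 &
nohup storm ui         > /dev/null 2>&1 &
EOF
chmod +x "$START"
cat "$START"
```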

The UI only needs to run on one node; the other servers each run nimbus and supervisor. With that, the Storm cluster setup is complete.
