Environment
Server | Memory | CPU | OS | Internal IP |
---|---|---|---|---|
k8s-master | 4G | 2 cores | CentOS 7.6 | 192.168.0.6 |
k8s-node1 | 4G | 2 cores | CentOS 7.6 | 192.168.0.47 |
k8s-node2 | 4G | 2 cores | CentOS 7.6 | 192.168.0.154 |
An existing Hadoop cluster is assumed.
Hadoop cluster installation guide: juejin.cn/post/691987…
Installing the Zookeeper cluster
Downloads
Zookeeper website: zookeeper.apache.org/
Download mirror: mirror.bit.edu.cn/apache/zook…
Download the -bin package (the compiled binaries).
A package whose name does not end in -bin will fail with: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
Installation
cd /usr/local/bigdata
wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin zookeeper
cd zookeeper
mkdir data
echo "1" > data/myid
Official configuration example: https://zookeeper.apache.org/doc/current/zookeeperStarted.html
Edit the configuration file
cp conf/zoo_sample.cfg conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
##the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.
dataDir=/usr/local/bigdata/zookeeper/data
dataLogDir=/usr/local/bigdata/zookeeper/log
server.1=k8s-master:2888:3888
server.2=k8s-node1:2888:3888
server.3=k8s-node2:2888:3888
scp the Zookeeper directory to k8s-node1 and k8s-node2, and change myid on them to 2 and 3 respectively (the master keeps 1).
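It is easy to mix up the ids while copying. As a minimal sketch (a hypothetical helper, assuming the layout above and hostnames without dots), each node can derive its own myid from the server.N lines in zoo.cfg:

```shell
# myid_for HOST CFG: print the N of the server.N line in CFG whose hostname is HOST.
# Assumes hostnames contain no dots (true for k8s-master/k8s-node1/k8s-node2).
myid_for() {
  host=$1; cfg=$2
  awk -F'[.=:]' -v h="$host" '/^server\./ && $3 == h {print $2}' "$cfg"
}
# On each node, after the zookeeper directory has been scp'd over:
#   cd /usr/local/bigdata/zookeeper
#   myid_for "$(hostname)" conf/zoo.cfg > data/myid
```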
Zookeeper commands
cd bin
sh zkServer.sh start    # start
sh zkServer.sh stop     # stop
sh zkServer.sh status   # check status
Starting the cluster
Run sh zkServer.sh start on each of the three machines.
After starting, check the status on each with sh zkServer.sh status
Client port found: 2181. Client address: localhost.
Mode: leader/follower
If each node reports Mode: leader or Mode: follower, the cluster started successfully.
Also verify the port with lsof -i:2181 — the process should be listening.
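The per-node checks above can be run from the master in one pass (a sketch, assuming passwordless ssh between the nodes and the install path used in this guide):

```shell
for h in k8s-master k8s-node1 k8s-node2; do
  echo "== $h =="
  ssh "$h" 'sh /usr/local/bigdata/zookeeper/bin/zkServer.sh status'
done
```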
If zkServer.sh status prints Client port found: 2181. Client address: localhost. and then reports an error instead of a Mode line,
carefully check that the hostnames used in /etc/hosts and zoo.cfg are consistent.
A typical mistake of this kind: typing k8s-master as ks-master.
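This class of typo can be caught with a small helper (hypothetical, not part of Zookeeper) that checks every server.N hostname in zoo.cfg against the hosts file:

```shell
# check_zk_hosts CFG HOSTS: report whether each server.N hostname in CFG
# appears as a whole word in HOSTS (normally /etc/hosts).
check_zk_hosts() {
  cfg=$1; hosts=$2
  for h in $(awk -F'[=:]' '/^server\./ {print $2}' "$cfg"); do
    if grep -qw "$h" "$hosts"; then
      echo "$h ok"
    else
      echo "$h MISSING from $hosts"
    fi
  done
}
# usage: check_zk_hosts /usr/local/bigdata/zookeeper/conf/zoo.cfg /etc/hosts
```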
Installing HBase with the external Zookeeper
Downloads
Website: hbase.apache.org/
Download mirror: mirror.bit.edu.cn/apache/hbas…
Choose an installation directory
/usr/local/bigdata/hbase
mv hbase-1.4.13 hbase
Edit the configuration files
Copy Hadoop's core-site.xml and hdfs-site.xml into the conf folder under the HBase installation directory.
Edit conf/hbase-env.sh
vi hbase-env.sh
Add the environment variables:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
# use the external zk cluster instead of the bundled one
export HBASE_MANAGES_ZK=false
vi hbase-site.xml
<configuration>
  <!-- the path where HBase stores its data on HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://k8s-master:9000/hbase</value>
  </property>
  <!-- run HBase in distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- the addresses of the zk ensemble -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>k8s-master:2181,k8s-node1:2181,k8s-node2:2181</value>
  </property>
</configuration>
vi regionservers
k8s-node1
k8s-node2
Configure environment variables
vi /etc/profile
export HBASE_HOME=/usr/local/bigdata/hbase
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile
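One step worth making explicit: the configured HBase directory also has to exist on the regionserver nodes before starting the cluster. A sketch, assuming the paths and hostnames above and passwordless ssh:

```shell
for h in k8s-node1 k8s-node2; do
  scp -r /usr/local/bigdata/hbase "$h":/usr/local/bigdata/
done
```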
Starting the HBase cluster
On k8s-master, in the /usr/local/bigdata/hbase/bin directory:
Start: sh start-hbase.sh
Stop: sh stop-hbase.sh
Check that it started
Run jps on each of the three machines.
You should see HRegionServer on k8s-node1 and k8s-node2,
and HMaster on k8s-master.
If so, the deployment succeeded.
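Beyond jps, a quick functional check can be run through the HBase shell (the table name smoke_test is made up for this example):

```shell
hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'r1', 'cf:q1', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
```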
Error: k8s-node2:2181 stat is not executed because it is not in the whitelist.
Fix: edit zoo.cfg and add: 4lw.commands.whitelist=*
Then restart Zookeeper.
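After the restart, you can verify that the four-letter-word commands are accepted again (using nc, which must be installed on the machine):

```shell
echo stat | nc k8s-node2 2181
```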
Open http://ip:16010/zk.jsp in a browser:
HBase is rooted at /hbase
Active master address: k8s-master,16000,1611339790174
Backup master addresses:
Region server holding hbase:meta: k8s-node1,16020,1611339791064
Region servers:
k8s-node1,16020,1611339791064
k8s-node2,16020,1611339791080
/hbase/replication:
/hbase/replication/peers:
/hbase/replication/rs:
/hbase/replication/rs/k8s-node2,16020,1611339791080:
/hbase/replication/rs/k8s-node1,16020,1611339791064:
Quorum Server Statistics:
k8s-master:2181
Zookeeper version: 3.6.1--104dcb3e3fb464b30c5186d229e00af9f332524b, built on 04/21/2020 15:01 GMT
Clients:
/192.168.0.6:52660[1](queued=0,recved=2,sent=2)
/192.168.0.6:52664[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0.0/0
Received: 4
Sent: 2
Connections: 2
Outstanding: 0
Zxid: 0x30000006d
Mode: follower
Node count: 43
k8s-node1:2181
Zookeeper version: 3.6.1--104dcb3e3fb464b30c5186d229e00af9f332524b, built on 04/21/2020 15:01 GMT
Clients:
/192.168.0.47:40666[1](queued=0,recved=1,sent=1)
/192.168.0.6:46900[0](queued=0,recved=1,sent=0)
/192.168.0.154:39822[1](queued=0,recved=2,sent=2)
/192.168.0.47:40662[1](queued=0,recved=3,sent=3)
/192.168.0.6:46890[1](queued=0,recved=2,sent=2)
/192.168.0.6:46896[1](queued=0,recved=16,sent=16)
/192.168.0.6:46888[1](queued=0,recved=2,sent=2)
/192.168.0.154:39824[1](queued=0,recved=3,sent=3)
Latency min/avg/max: 0/0.4545/3
Received: 30
Sent: 29
Connections: 8
Outstanding: 0
Zxid: 0x500000000
Mode: leader
Node count: 43
Proposal sizes last/min/max: -1/-1/-1
k8s-node2:2181
Zookeeper version: 3.6.1--104dcb3e3fb464b30c5186d229e00af9f332524b, built on 04/21/2020 15:01 GMT
Clients:
/192.168.0.6:46076[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0.0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x30000006d
Mode: follower
Node count: 43