1. Goals
1.1 Test server (DIY)
- 32 cores / 64 threads
- 256 GB RAM
- 10 TB HDD
- RTX 3070
- VMware ESXi 7.0.3
- CentOS image: CentOS-7-x86_64-Minimal-2009.iso, CentOS Linux release 7.9.2009 (Core)
1.2 Software list
- jdk-8u181-linux-x64.tar.gz
- zookeeper-3.7.1.tar.gz
- hadoop-2.10.2.tar.gz
- apache-hive-2.3.9-bin.tar.gz
- hbase-2.4.15-bin.tar.gz
1.3 My goals
- Hadoop HA (done 2022-11-19)
- YARN HA (done 2022-11-19)
- Hive HA (done 2022-11-20)
- MySQL 5.7 standalone (done 2022-11-20)
- Redis 6.x standalone
- HBase
- Spark
- Flink
- GitLab (done 2023-04-15)
- SonarQube
- Jenkins
- k8s HA Cluster
- JIRA, Confluence
1.4 Completed so far
- Hadoop HA
- node01~node06
- node01, node02: NameNode, ZKFC
- node01, node02, node03: ZooKeeper ensemble
- node04, node05, node06: DataNode, JournalNode
- Yarn HA
- node02, node03: ResourceManager
- node04, node05, node06: NodeManager
- Hive Metastore & HiveServer2 HA
- node07: Metastore, MySQL
- node08, node09: HiveServer2
- node01, node02, node03: ZooKeeper ensemble
- GitLab
2. Hadoop HA Cluster Setup
2.1 Cluster plan
| IP | hostname/alias | CPU/RAM | NN (Active) | NN (Standby) | ZKFC | DN | JN | ZK | RM | NM | JobHistory |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 192.168.1.31 (NTP master) | node01.hadoop.cc node01 | 2 cores / 8 GB | ✅ | - | ✅ | - | - | ✅ | - | - | - |
| 192.168.1.32 | node02.hadoop.cc node02 | 2 cores / 4 GB | - | ✅ | ✅ | - | - | ✅ | ✅ | - | - |
| 192.168.1.33 | node03.hadoop.cc node03 | 2 cores / 8 GB | - | - | - | - | - | ✅ | ✅ | - | ✅ |
| 192.168.1.34 | node04.hadoop.cc node04 | 4 cores / 16 GB | - | - | - | ✅ | ✅ | - | - | ✅ | - |
| 192.168.1.35 | node05.hadoop.cc node05 | 4 cores / 16 GB | - | - | - | ✅ | ✅ | - | - | ✅ | - |
| 192.168.1.36 | node06.hadoop.cc node06 | 4 cores / 16 GB | - | - | - | ✅ | ✅ | - | - | ✅ | - |
2.2 Software and data directory layout
2.2.1 Software directories
/opt/cluster/scripts
- zkctl.sh
- xcall.sh
- clusterctl.sh
/opt/cluster/zookeeper
/opt/cluster/hadoop
2.2.2 Data directories
zookeeper
mkdir -p /var/cluster/zookeeper/{data,logs}
hadoop(HA)
mkdir -p /var/cluster/hadoop/ha/dfs/{jn,nn,dn}
2.3 Base setup
2.3.1 Switch the yum repo to a mirror and install base packages (each node)
NameNode HA failover failure
Issue: a minimal CentOS 7 install does not include psmisc (the rpm that provides fuser and related commands). Without it, after the active NameNode is killed during HA testing, the standby NameNode never promotes itself to active. Installing the package fixes it.
yum install psmisc -y
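A quick optional check that the psmisc commands are now present (fuser is what the sshfence fencing method relies on):
rpm -q psmisc
which fuser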
2.3.2 hosts configuration (each node)
192.168.1.31 node01.hadoop.cc node01
192.168.1.32 node02.hadoop.cc node02
192.168.1.33 node03.hadoop.cc node03
192.168.1.34 node04.hadoop.cc node04
192.168.1.35 node05.hadoop.cc node05
192.168.1.36 node06.hadoop.cc node06
2.3.3 Disable SELinux (each node)
# Disable permanently
# Set SELINUX=disabled
# Then reboot
vi /etc/selinux/config
2.3.4 Disable the firewall (each node)
systemctl stop firewalld
systemctl disable firewalld
2.3.5 Clock synchronization (master and slaves)
NTP master: node01 / node01.hadoop.cc
# 1. Install ntpd
yum install ntp -y
# 2. Configure ntp
vi /etc/ntp.conf
### Add the following
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
### Comment out the original "server 0.centos.pool.ntp.org iburst" lines
### and add the following servers
server ntp1.aliyun.com
server ntp2.aliyun.com
server ntp3.aliyun.com
server ntp4.aliyun.com
server ntp5.aliyun.com
# 3. Start the ntpd service
systemctl start ntpd
systemctl enable ntpd
# 4. Add a cron job
crontab -e
00 01 * * * ntpdate ntp.aliyun.com
NTP slaves: sync from the master
# 1. Install ntpd
yum install ntp -y
# 2. Configure ntp
vi /etc/ntp.conf
### Comment out the original "server 0.centos.pool.ntp.org iburst" lines and add the following
### node01.hadoop.cc is the master
server node01.hadoop.cc iburst
# 3. Start the ntpd service
systemctl start ntpd
systemctl enable ntpd
# 4. Add a cron job
crontab -e
00 01 * * * ntpdate node01.hadoop.cc
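To confirm that a slave is actually syncing from node01, the standard ntp query tool can be used (a quick optional check; ntpq ships with the ntp package):
# node01.hadoop.cc should appear in the peer list, marked with '*' once it is the selected sync source
ntpq -p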
2.3.6 JDK (each node)
# Some software expects the default JDK location /usr/java/default
mkdir -p /usr/java
tar zxvf jdk-8u181-linux-x64.tar.gz -C /usr/java
# The archive extracts to jdk1.8.0_181; link it to the expected path
ln -s /usr/java/jdk1.8.0_181 /usr/java/default
2.3.7 Configure environment variables (each node)
/etc/profile
# 1. Edit the target file
vim /etc/profile
# 2. Add the following
export JAVA_HOME=/usr/java/default
export ZOOKEEPER_HOME=/opt/cluster/zookeeper
export HADOOP_HOME=/opt/cluster/hadoop
export HADOOP_CONF_DIR=/opt/cluster/hadoop/etc/hadoop
export PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_CLASSPATH=`hadoop classpath`
# 3. Save and exit
# 4. Apply the changes
source /etc/profile
# 5. Edit ~/.bashrc
vim ~/.bashrc
### 5.1 Append "source /etc/profile" at the end
### 5.2 This step is required: the helper scripts below run commands on remote servers over passwordless SSH, and those non-interactive sessions read their environment from ~/.bashrc by default
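Once the passwordless SSH in 2.3.8 is in place, a quick sanity check (a sketch; any of the nodes works):
# Should print /usr/java/default if ~/.bashrc on node02 sources /etc/profile
ssh node02 'echo $JAVA_HOME'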
2.3.8 Passwordless SSH login
Master to slaves:
# Run on the master node (node01/node01.hadoop.cc)
# 1. ssh localhost
### 1. Confirms passwordless login is not set up yet
### 2. As a side effect, /root/.ssh gets created
ssh localhost
# 2. Generate a key pair
cd ~/.ssh
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# 3. Append the public key to authorized_keys to enable passwordless login to this host
cd ~/.ssh
cat ./id_dsa.pub >> ./authorized_keys
ssh localhost
# 4. Copy the public key to the other nodes to enable passwordless login to them
ssh-copy-id -i ~/.ssh/id_dsa.pub node02
ssh-copy-id -i ~/.ssh/id_dsa.pub node03
### ... remaining nodes omitted
# 5. Verify passwordless login to node02, node03, and the other nodes
ssh node01
ssh node02
ssh node03
ssh node04
ssh node05
ssh node06
ssh node01.hadoop.cc
ssh node02.hadoop.cc
ssh node03.hadoop.cc
ssh node04.hadoop.cc
ssh node05.hadoop.cc
ssh node06.hadoop.cc
Because the YARN HA ResourceManagers run on separate nodes, the RM hosts also need passwordless login to the DataNode/NodeManager hosts.
# node02/node02.hadoop.cc
### node04, node05, node06 are the DataNode/NodeManager hosts
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ./id_dsa.pub >> ./authorized_keys
ssh-copy-id -i ~/.ssh/id_dsa.pub node04
ssh-copy-id -i ~/.ssh/id_dsa.pub node05
ssh-copy-id -i ~/.ssh/id_dsa.pub node06
ssh node04
ssh node05
ssh node06
ssh node04.hadoop.cc
ssh node05.hadoop.cc
ssh node06.hadoop.cc
# node03/node03.hadoop.cc
### Repeat the same steps as above
2.3.9 Scripts
xcall.sh: runs a command on every server over SSH.
#!/bin/bash
for i in node01.hadoop.cc node02.hadoop.cc node03.hadoop.cc node04.hadoop.cc node05.hadoop.cc node06.hadoop.cc
do
echo --------- $i ----------
ssh $i "$*"
done
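For example, to list the Java processes on every node (this is how the process listings shown later were produced):
/opt/cluster/scripts/xcall.sh jps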
zkctl.sh: starts, stops, and reports the status of the ZooKeeper ensemble.
#!/bin/bash
case $1 in
"start"){
for i in node01 node02 node03
do
ssh $i "/opt/cluster/zookeeper/bin/zkServer.sh start"
done
};;
"stop"){
for i in node01 node02 node03
do
ssh $i "/opt/cluster/zookeeper/bin/zkServer.sh stop"
done
};;
"status"){
for i in node01 node02 node03
do
ssh $i "/opt/cluster/zookeeper/bin/zkServer.sh status"
done
};;
esac
clusterctl.sh: starts and stops ZooKeeper, HDFS (HA), and YARN (HA).
#! /bin/bash
BASE_PATH=/opt/cluster/scripts
HADOOP_HOME=/opt/cluster/hadoop
case $1 in
"start"){
echo " -------- Starting the cluster -------"
echo " -------- Starting the ZooKeeper ensemble -------"
# Start the ZooKeeper ensemble
$BASE_PATH/zkctl.sh start
echo " -------- Starting HDFS -------"
$HADOOP_HOME/sbin/start-dfs.sh
echo " -------- Starting YARN --------"
ssh node03 "/opt/cluster/hadoop/sbin/start-yarn.sh"
ssh node02 "/opt/cluster/hadoop/sbin/yarn-daemon.sh start resourcemanager"
};;
"stop"){
echo " -------- Stopping the cluster -------"
echo " -------- Stopping YARN --------"
ssh node03 "/opt/cluster/hadoop/sbin/stop-yarn.sh"
ssh node02 "/opt/cluster/hadoop/sbin/yarn-daemon.sh stop resourcemanager"
echo " -------- Stopping HDFS -------"
$HADOOP_HOME/sbin/stop-dfs.sh
echo " -------- Stopping the ZooKeeper ensemble --------"
$BASE_PATH/zkctl.sh stop
};;
esac
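Typical usage, matching the case branches above:
/opt/cluster/scripts/clusterctl.sh start
/opt/cluster/scripts/clusterctl.sh stop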
2.4 Initial installation
2.4.1 ZooKeeper
zoo.cfg
After editing the file, distribute it to the other ZooKeeper nodes (see the scp sketch after the config below).
cd /opt/cluster/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/cluster/zookeeper/data
dataLogDir=/var/cluster/zookeeper/logs
clientPort=2181
server.1=node01.hadoop.cc:2888:3888
server.2=node02.hadoop.cc:2888:3888
server.3=node03.hadoop.cc:2888:3888
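One way to distribute the file, assuming the zookeeper directory already exists on the other nodes:
cd /opt/cluster/zookeeper/conf
scp zoo.cfg root@node02:/opt/cluster/zookeeper/conf/
scp zoo.cfg root@node03:/opt/cluster/zookeeper/conf/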
myid
Note: the value is different on each node.
# node01
echo 1 >/var/cluster/zookeeper/data/myid
# node02
echo 2 >/var/cluster/zookeeper/data/myid
# node03
echo 3 >/var/cluster/zookeeper/data/myid
Start the ensemble
/opt/cluster/scripts/zkctl.sh start
Check the ensemble status
/opt/cluster/scripts/zkctl.sh status
# Output:
ZooKeeper JMX enabled by default
Using config: /opt/cluster/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
ZooKeeper JMX enabled by default
Using config: /opt/cluster/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
ZooKeeper JMX enabled by default
Using config: /opt/cluster/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
2.4.2 Hadoop (HA)
Edit the configuration files on the master node first. The following files are involved:
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
- yarn-site.xml
- slaves
a. core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node01.hadoop.cc:2181,node02.hadoop.cc:2181,node03.hadoop.cc:2181</value>
</property>
<property>
<name>ha.zookeeper.session-timeout.ms</name>
<value>30000</value>
</property>
</configuration>
b. hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/var/cluster/hadoop/ha/dfs/nn</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/var/cluster/hadoop/ha/dfs/dn</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node01.hadoop.cc:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node02.hadoop.cc:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node01.hadoop.cc:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node02.hadoop.cc:50070</value>
</property>
<!-- JN -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node04.hadoop.cc:8485;node05.hadoop.cc:8485;node06.hadoop.cc:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/var/cluster/hadoop/ha/dfs/jn</value>
</property>
<!-- Failover proxy provider and fencing method (uses the passwordless SSH key configured earlier) -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
<value>60000</value>
</property>
<property>
<name>dfs.qjournal.start-segment.timeout.ms</name>
<value>60000</value>
</property>
<!-- zkfc -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
c. slaves
slaves lists the DataNode hosts.
node04.hadoop.cc
node05.hadoop.cc
node06.hadoop.cc
d. Distribute the hadoop directory to the other nodes
scp -r hadoop root@node02:/opt/cluster
scp -r hadoop root@node03:/opt/cluster
scp -r hadoop root@node04:/opt/cluster
scp -r hadoop root@node05:/opt/cluster
scp -r hadoop root@node06:/opt/cluster
e. First-time initialization and HDFS startup
# 1. Start the planned JournalNode quorum first (node04, node05, node06)
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
# 2. Format and start the planned Active NameNode (node01)
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
# 3. On the other NameNode (planned Standby), run the following to sync the metadata
$HADOOP_HOME/bin/hadoop namenode -bootstrapStandby
# 4. Format the ZKFC state in ZooKeeper (node01)
$HADOOP_HOME/bin/hdfs zkfc -formatZK
# 5. Start the zkfc process on both NameNode hosts (node01, node02)
$HADOOP_HOME/sbin/hadoop-daemon.sh start zkfc
# 6. Start the remaining services (node01)
$HADOOP_HOME/sbin/start-dfs.sh
After the first-time initialization, only start-dfs.sh / stop-dfs.sh are needed. Use xcall.sh jps to check the processes running on each server:
--------- node01.hadoop.cc ----------
3152 NameNode
4433 Jps
3556 DFSZKFailoverController
2846 QuorumPeerMain
--------- node02.hadoop.cc ----------
2096 QuorumPeerMain
2530 DFSZKFailoverController
3415 Jps
2617 NameNode
--------- node03.hadoop.cc ----------
2116 QuorumPeerMain
2551 Jps
--------- node04.hadoop.cc ----------
2423 DataNode
2009 JournalNode
2669 Jps
--------- node05.hadoop.cc ----------
1872 JournalNode
2261 DataNode
2493 Jps
--------- node06.hadoop.cc ----------
2258 DataNode
1879 JournalNode
2488 Jps
f. HDFS HA verification
Kill the current Active NameNode process and check that the Standby automatically takes over as Active.
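A sketch of one way to verify it; hdfs haadmin is the standard tool, and nn1/nn2 are the ids from hdfs-site.xml above:
# Check which NameNode is currently active
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# On the active host, find the NameNode pid with jps and kill it
jps | grep NameNode
kill -9 <namenode-pid>
# The other NameNode should report "active" shortly afterwards
hdfs haadmin -getServiceState nn2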
2.4.3 YARN (HA)
a. About YARN
Hadoop 1.x had neither NameNode HA nor YARN. HA was added in 2.x, but to stay backward compatible the NameNode itself was left unchanged, so the separate ZKFC role was introduced. YARN appeared at the same time and, with that lesson learned, built the HA logic directly into the ResourceManager, which is why the YARN HA configuration is simpler and involves fewer roles.
b. mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- jobhistory -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>node03.hadoop.cc:10020</value>
</property>
<!-- jobhistory web address-->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>node03.hadoop.cc:19888</value>
</property>
</configuration>
c. yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>my-yarn-cluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node01.hadoop.cc:2181,node02.hadoop.cc:2181,node03.hadoop.cc:2181</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node03.hadoop.cc</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node02.hadoop.cc</value>
</property>
</configuration>
d. Start YARN
# 1. Start on node03/node03.hadoop.cc first
$HADOOP_HOME/sbin/start-yarn.sh
# 2. Then start the standby ResourceManager on node02/node02.hadoop.cc
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
e. YARN HA verification
Kill the current Active ResourceManager process and check that the Standby automatically becomes Active.
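A similar sketch using yarn rmadmin; rm1/rm2 are the ids from yarn-site.xml above:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2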
3. Hive HA
3.1 Server plan
| IP | hostname/alias | CPU/RAM | MySQL | MetaStore | HiveServer2 |
|---|---|---|---|---|---|
| 192.168.1.37 | node07.hadoop.cc node07 | 2 cores / 8 GB | ✅ root/123456 | ✅ | - |
| 192.168.1.38 | node08.hadoop.cc node08 | 2 cores / 8 GB | - | - | ✅ |
| 192.168.1.39 | node09.hadoop.cc node09 | 2 cores / 8 GB | - | - | ✅ |
3.2 Base configuration
3.2.1 SELinux and firewall (each node)
# 1. Disable SELinux (requires a reboot)
vi /etc/selinux/config
SELINUX=disabled
# 2. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
3.2.2 MySQL 5.7 (node07)
Quick install of the MySQL server from the yum repo
# 1. Check for and remove any previously installed version
yum list installed | grep mysql
yum list installed | grep maria
mariadb-libs.x86_64 1:5.5.68-1.el7 @anaconda
### Remove it
yum remove mariadb-libs.x86_64
### Or locate it this way
rpm -qa | grep mariadb
### Remove it
sudo rpm -e mariadb-libs-5.5.65-2.el7.x86_64 --nodeps
# 2. Find any leftover mysql directories and forcibly remove them
find / -name mysql
/etc/selinux/targeted/active/modules/100/mysql
/usr/lib64/mysql
### Forcibly remove the directories found above
rm -rf /usr/lib64/mysql
rm -rf /etc/selinux/targeted/active/modules/100/mysql
### Also delete the config file if it exists: rm -rf /etc/my.cnf
# 3. Install mysql-community-server 5.7 from the MySQL yum repo
wget http://repo.mysql.com/yum/mysql-5.7-community/el/7/x86_64/mysql57-community-release-el7-10.noarch.rpm
rpm -ivh mysql57-community-release-el7-10.noarch.rpm
rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
yum install mysql-community-server
# 4. Try connecting with the client (fails because the service has not been started yet)
mysql
ERROR 2002 (HY000): 'Can not connect to local MySQL server through socket /var/lib/mysql/mysql.sock (2)'
# 5. Start the service
systemctl start mysqld
systemctl enable mysqld
# 6. Find the generated temporary root password and log in
# e.g. 4LC3lF*pidHq
cat /var/log/mysqld.log | grep password
mysql -uroot -p'4LC3lF*pidHq'
# 7. Change the root password
### 7.1 Every other statement fails until the password is reset
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
### 7.2 A simple password is rejected by the default policy, so relax the policy first
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
set global validate_password_policy=LOW;
set global validate_password_length=6;
ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
flush privileges;
quit
systemctl restart mysqld
# 8. Log in again with the new password
mysql -uroot -p'123456'
# 9. Allow remote login
mysql -uroot -p'123456'
use mysql;
select host, user from user;  # list the existing users
set global validate_password_policy=LOW;
set global validate_password_length=6;
### Add a root account with host = '%' (no IP restriction) and password 123456;
### WITH GRANT OPTION lets it grant privileges to other users
grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;
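A quick remote check from any machine with the mysql client installed (a sketch; host and credentials from the plan above):
mysql -h node07.hadoop.cc -uroot -p'123456' -e 'select version();'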
3.3 Metastore & HiveServer2 HA setup
3.3.1 MySQL driver (node07)
Download the mysql-connector-java jar into Hive's lib directory.
3.3.2 Edit hive-site.xml (node07)
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node07.hadoop.cc:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
</configuration>
3.3.3 Initialize the metastore schema in MySQL (node07)
schematool -dbType mysql -initSchema
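Optionally, schematool can confirm the schema version it just created:
schematool -dbType mysql -info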
3.3.4 Start the metastore service (node07)
hive-site.xml
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node07.hadoop.cc:3306/hive?useSSL=false&amp;createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
</configuration>
Hadoop & environment variables
Copy the hadoop directory from the cluster to this machine:
# Run on node01: copy hadoop to node07
scp -r /opt/cluster/hadoop root@node07:/opt/cluster
/etc/profile: add HADOOP_HOME and HADOOP_CONF_DIR
export JAVA_HOME=/usr/java/default
export HIVE_HOME=/opt/cluster/hive
export HBASE_HOME=/opt/cluster/hbase
+ export HADOOP_HOME=/opt/cluster/hadoop
+ export HADOOP_CONF_DIR=/opt/cluster/hadoop/etc/hadoop
export PATH=$JAVA_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Start the metastore service:
hive --service metastore
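hive --service metastore runs in the foreground; one way to keep it running after the terminal closes is nohup (the log path here is just an example):
nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &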
3.3.5 Start HiveServer2 in HA mode (node08, node09)
hive-site.xml
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://node07.hadoop.cc:9083</value>
</property>
<property>
<name>hive.server2.support.dynamic.service.discovery</name>
<value>true</value>
</property>
<property>
<name>hive.server2.zookeeper.namespace</name>
<value>hiveserver2</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>node01.hadoop.cc:2181,node02.hadoop.cc:2181,node03.hadoop.cc:2181</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
<property>
<!-- Note: on node09 this value should be node09.hadoop.cc -->
<name>hive.server2.thrift.bind.host</name>
<value>node08.hadoop.cc</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
</configuration>
Hadoop & environment variables
Copy the hadoop directory from the cluster to these machines:
# Run on node01: copy hadoop to node08 and node09
scp -r /opt/cluster/hadoop root@node08:/opt/cluster
scp -r /opt/cluster/hadoop root@node09:/opt/cluster
/etc/profile: add HADOOP_HOME and HADOOP_CONF_DIR
export JAVA_HOME=/usr/java/default
export HIVE_HOME=/opt/cluster/hive
export HBASE_HOME=/opt/cluster/hbase
+ export HADOOP_HOME=/opt/cluster/hadoop
+ export HADOOP_CONF_DIR=/opt/cluster/hadoop/etc/hadoop
export PATH=$JAVA_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Start the HiveServer2 service
# Start it on both node08 and node09
hive --service hiveserver2 &
Check the HA registration in ZooKeeper:
# ls /hiveserver2
[zk: localhost:2181(CONNECTED) 3] ls /hiveserver2
[serverUri=node08.hadoop.cc:10000;version=2.3.9;sequence=0000000002, serverUri=node08.hadoop.cc:10000;version=2.3.9;sequence=0000000003]
3.3.6 Test with beeline
beeline is the client used with HiveServer2.
beeline
# Direct connection
!connect jdbc:hive2://node08.hadoop.cc:10000/default
# HA connection through ZooKeeper
!connect jdbc:hive2://node01.hadoop.cc,node02.hadoop.cc,node03.hadoop.cc/;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2
The session looks like this:
Beeline version 2.3.9 by Apache Hive
beeline> !connect jdbc:hive2://node01.hadoop.cc,node02.hadoop.cc,node03.hadoop.cc/;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2
Connecting to jdbc:hive2://node01.hadoop.cc,node02.hadoop.cc,node03.hadoop.cc/;serviceDiscoveryMode=zookeeper;zookeeperNamespace=hiveserver2
Enter username for jdbc:hive2://node01.hadoop.cc,node02.hadoop.cc,node03.hadoop.cc/:
Enter password for jdbc:hive2://node01.hadoop.cc,node02.hadoop.cc,node03.hadoop.cc/:
22/11/20 16:54:23 [main]: INFO jdbc.HiveConnection: Connected to node08.hadoop.cc:10000
Connected to: Apache Hive (version 2.3.9)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node01.hadoop.cc,node02.hadoo>
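Once connected, a minimal sanity check from the beeline prompt:
show databases;
!quit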
3.4 Common problems
3.4.1 Proxy user
User: root is not allowed to impersonate anonymous
beeline
!connect jdbc:hive2://node08.hadoop.cc:10000/default
After entering any username/password, the following error appears:
22/11/20 16:29:15 [main]: WARN jdbc.HiveConnection: Failed to connect to node08:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://node08:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate anonymous (state=08S01,code=0)
Fix:
# 1. Edit core-site.xml on the hadoop cluster and add the following properties
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
# 2. Refresh the proxy-user configuration
hdfs dfsadmin -fs hdfs://node01.hadoop.cc:8020 -refreshSuperUserGroupsConfiguration
To be added
Kafka
To be added
N. GitLab
N.1 Server plan
| IP | hostname/alias | CPU/RAM |
|---|---|---|
| 192.168.1.62 | malan.git | 4 cores / 8 GB |
N.2 Installation and deployment
https://gitlab.cn/install/ (CentOS 7)
# 1. Install dependencies
sudo yum install -y curl policycoreutils-python openssh-server perl
sudo systemctl enable sshd
sudo systemctl start sshd
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo systemctl reload firewalld
# 2. Optional: email support
sudo yum install postfix
sudo systemctl enable postfix
sudo systemctl start postfix
# 3. Download and install
curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
# 4. Suppose the URL to configure is https://gitlab.abc.com
sudo EXTERNAL_URL="https://gitlab.abc.com" yum install -y gitlab-jh
# 5. Suppose the domain bought on Aliyun is abc.com
### 5.1 Add a DNS record for gitlab.abc.com
### 5.2 Request the free SSL certificate Aliyun offers, for the domain gitlab.abc.com
### 5.3 Download the issued nginx certificate and upload it to /etc/gitlab/ssl/aliyun/ ({xxx.pem, xxx.key})
### 5.4 Configure gitlab
vi /etc/gitlab/gitlab.rb
### 5.5 Find the nginx['...'] settings and update them
nginx['enable'] = true
nginx['redirect_http_to_https'] = true
nginx['redirect_http_to_https_port'] = 80
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.abc.com.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.abc.com.key"
# 6. Reload the configuration
gitlab-ctl reconfigure
# 7. Open https://gitlab.abc.com to test
# 8. Look up the initial root password
cat /etc/gitlab/initial_root_password
N.3 GitLab SSL configuration
Request a new free SSL certificate on Aliyun, download the nginx certificate, and update the configuration:
/etc/gitlab/gitlab.rb
nginx['enable'] = true
nginx['redirect_http_to_https'] = true
nginx['redirect_http_to_https_port'] = 80
nginx['ssl_certificate'] = "/etc/gitlab/ssl/aliyun/9711796_gitlab.xxxx.cn.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/aliyun/9711796_gitlab.xxxx.cn.key"
Restart the services:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl hup nginx
sudo gitlab-ctl hup registry
N.4 SSH key configuration
ssh-keygen -t rsa -C "your-email" -f sshfile
Add the content of the generated sshfile.pub on the GitLab SSH Keys settings page.
Edit ~/.ssh/config and add:
Host gitlab.js1k.cn
HostName gitlab.js1k.cn
User git
IdentityFile ~/.ssh/gitlab_js1k_mbp
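To test the key, GitLab answers a plain SSH connection with a welcome message (a quick check):
ssh -T git@gitlab.js1k.cn
# Expected output along the lines of: "Welcome to GitLab, @<username>!"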
Confluence & JIRA setup
1. Server plan
| IP | hostname/alias | CPU/RAM |
|---|---|---|
| 192.168.1.80 | confluence | 4 cores / 8 GB |
2. Initialization
# 1. Configure a static IP
#BOOTPROTO="dhcp"
BOOTPROTO="static"
IPADDR=192.168.1.80
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
systemctl restart network
# 2. Switch the yum repo to a mirror
yum install wget -y
cd /etc/yum.repos.d/
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
cd /etc/yum.repos.d
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
# 3. Install MySQL (see the earlier steps)
### Open port 3306
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports
3. Install Confluence
HTTP Port: 8090
RMI Port: 8000
PostgreSQL installation
1. Installation
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# Installing postgresql15-server and postgresql15-contrib may fail with
# "Requires: libzstd.so.1()(64bit)"; fix that first with: yum install epel-release -y
sudo yum install -y postgresql15-server postgresql15-contrib
sudo /usr/pgsql-15/bin/postgresql-15-setup initdb
sudo systemctl enable postgresql-15
sudo systemctl start postgresql-15
sudo systemctl status postgresql-15
2. Configuration
# 1. Find the configuration files
find / -name "*pg_hba.conf"
### Usually located at
/var/lib/pgsql/15/data/pg_hba.conf
# 2. Edit pg_hba.conf to allow remote client IPs
# Comment out the line "host all all 127.0.0.1/32 scram-sha-256" and add the line below
host all all 0.0.0.0/0 scram-sha-256
# 3. Edit /var/lib/pgsql/15/data/postgresql.conf
### Add the following:
listen_addresses = '*'
Restart after changing the configuration:
sudo systemctl restart postgresql-15
3. Open the port and service in the firewall
firewall-cmd --zone=public --add-port=5432/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports
firewall-cmd --zone=public --add-service=postgresql --permanent
firewall-cmd --reload
firewall-cmd --list-service
4. Change the password
# Switch to the postgres user and open the psql console
su - postgres
psql
# Change the password of the postgres user
ALTER USER postgres WITH PASSWORD 'Pwd@123456';
CREATE ROLE replica login replication encrypted password 'replica';
# Quit
\q
5. Remote login test
Log in from DataGrip with account postgres / Pwd@123456.
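Equivalently, from any machine that has the psql client installed (a sketch using the IP from the plan above):
psql -h 192.168.1.80 -U postgres -d postgres -c 'select version();'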