How to Build a Highly Available (HA) Hadoop Big Data Cluster?


1. HA Overview

  1. HA (High Availability) means the service stays available 7 × 24, without interruption.
  2. The single most important strategy for achieving high availability is eliminating single points of failure. Strictly speaking, HA is a per-component mechanism: HDFS HA and YARN HA.
  3. The NameNode affects HDFS cluster availability in two main ways:
  • If the NameNode machine fails unexpectedly, e.g. crashes, the cluster is unusable until an administrator restarts it.
  • If the NameNode machine needs a software or hardware upgrade, the cluster is likewise unusable for the duration.

HDFS HA addresses this by configuring multiple NameNodes (Active/Standby) as hot standbys within the cluster. If a failure occurs, such as a machine crash, or a machine needs upgrade maintenance, the NameNode role can be switched quickly to another machine.
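Clients always address the logical nameservice instead of a particular NameNode, which is what makes the switch transparent to them. A minimal sketch (assuming the mycluster nameservice configured later in this article):

# List the cluster root through the logical nameservice; the client
# library figures out which NameNode is currently Active.
hdfs dfs -ls hdfs://mycluster/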

2. Building an HDFS-HA Cluster

2.1. HDFS-HA Core Questions

  1. How do we keep the data of the three NameNodes consistent?
    1. Fsimage: let one NN generate it and have the other machines sync it.
    2. Edits: introduce a new component, the JournalNode, to keep the edits files consistent.
  2. How do we ensure that only one NN is active at a time while all the others are standby?
    1. Manual assignment
    2. Automatic assignment
  3. The 2NN does not exist in an HA architecture, so who does its job of periodically merging the fsimage and edits?

A standby NN does it.

  4. If an NN really does fail, how does another NN step up and take over? (See the sketch after this list.)
    1. Manual failover
    2. Automatic failover
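As a preview, the manual path is driven entirely by hdfs haadmin subcommands (a sketch, assuming NameNode IDs nn1/nn2 as configured later; note that once automatic failover is enabled, these manual commands are refused unless forced):

# Manually promote nn1 to Active.
hdfs haadmin -transitionToActive nn1
# Or ask HDFS to perform a coordinated failover from nn1 to nn2.
hdfs haadmin -failover nn1 nn2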

3. HDFS-HA Manual Mode

3.1. Environment Preparation

(1) Set static IPs
(2) Set hostnames and hostname-to-IP mappings
(3) Disable the firewall
(4) Set up passwordless ssh login (see the sketch after this list)
(5) Install the JDK and configure environment variables
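Step (4) matters later: the sshfence fencing method configured below needs key-based ssh between the NameNode hosts. A minimal sketch (assuming the atguigu user and the three hosts used throughout):

# Generate a key pair once (accept the defaults).
ssh-keygen -t rsa
# Copy the public key to every node, including the local one.
ssh-copy-id atguigu@hadoop102
ssh-copy-id atguigu@hadoop103
ssh-copy-id atguigu@hadoop104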

3.2. Cluster Plan

3.3. Configuring the HDFS-HA Cluster

  1. Official docs: hadoop.apache.org/
  2. Create an ha directory under /opt
[atguigu@hadoop102 ~]$ cd /opt
[atguigu@hadoop102 opt]$ sudo mkdir ha
[atguigu@hadoop102 opt]$ sudo chown atguigu:atguigu /opt/ha
  3. Copy hadoop-3.1.3 from /opt/module/ to /opt/ha (remember to delete its data and log directories first)
[atguigu@hadoop102 opt]$ cp -r /opt/module/hadoop-3.1.3 /opt/ha/
  4. Configure core-site.xml
<configuration>
  <!-- Assemble the addresses of multiple NameNodes into one cluster, mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

  <!-- Storage directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/ha/hadoop-3.1.3/data</value>
  </property>
</configuration>
  5. Configure hdfs-site.xml
<configuration>

  <!-- NameNode data storage directory -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/name</value>
  </property>

  <!-- DataNode data storage directory -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/data</value>
  </property>

  <!-- JournalNode data storage directory -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>${hadoop.tmp.dir}/jn</value>
  </property>

  <!-- Logical name of the fully distributed cluster -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

  <!-- The NameNodes in the cluster -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>

  <!-- RPC addresses of the NameNodes -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop102:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop103:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>hadoop104:8020</value>
  </property>

  <!-- HTTP addresses of the NameNodes -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>hadoop102:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hadoop103:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn3</name>
    <value>hadoop104:9870</value>
  </property>

  <!-- Where NameNode metadata (edits) is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
  </property>

  <!-- Failover proxy class: how clients determine which NameNode is Active -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- Fencing method, so that only one server responds at any moment -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <!-- sshfence needs key-based ssh login -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/atguigu/.ssh/id_rsa</value>
  </property>

</configuration>
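Before distributing, it does no harm to sanity-check that the nameservice and its NameNode IDs resolve from the local configuration (a quick sketch using hdfs getconf):

# Should print mycluster and nn1,nn2,nn3 respectively.
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster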
  6. Distribute the configured Hadoop environment to the other nodes

3.4. Starting the HDFS-HA Cluster

1) Point the HADOOP_HOME environment variable at the HA directory (on all three machines)
[atguigu@hadoop102 ~]$ sudo vim /etc/profile.d/my_env.sh
Change the HADOOP_HOME part to the following:
#HADOOP_HOME
export HADOOP_HOME=/opt/ha/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Source the environment variables on all three machines
[atguigu@hadoop102 ~]$ source /etc/profile
2) On each JournalNode node, run the following command to start the journalnode service
[atguigu@hadoop102 ~]$ hdfs --daemon start journalnode
[atguigu@hadoop103 ~]$ hdfs --daemon start journalnode
[atguigu@hadoop104 ~]$ hdfs --daemon start journalnode
3) On [nn1], format it and start it
[atguigu@hadoop102 ~]$ hdfs namenode -format
[atguigu@hadoop102 ~]$ hdfs --daemon start namenode
4) On [nn2] and [nn3], synchronize nn1's metadata
[atguigu@hadoop103 ~]$ hdfs namenode -bootstrapStandby
[atguigu@hadoop104 ~]$ hdfs namenode -bootstrapStandby
5) Start [nn2] and [nn3]
[atguigu@hadoop103 ~]$ hdfs --daemon start namenode
[atguigu@hadoop104 ~]$ hdfs --daemon start namenode
6) Check the web UI display

Figure: hadoop102 (standby)

Figure: hadoop103 (standby)

Figure: hadoop104 (standby)
7) Start the datanode on every node
[atguigu@hadoop102 ~]$ hdfs --daemon start datanode
[atguigu@hadoop103 ~]$ hdfs --daemon start datanode
[atguigu@hadoop104 ~]$ hdfs --daemon start datanode
8) Switch [nn1] to Active
[atguigu@hadoop102 ~]$ hdfs haadmin -transitionToActive nn1
9) Check whether it is Active
[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn1
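To see the state of all three NameNodes in one command, Hadoop 3.x also offers -getAllServiceState (a sketch):
# Query every configured NameNode at once.
hdfs haadmin -getAllServiceState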

4. HDFS-HA Automatic Mode

4.1. How HDFS-HA Automatic Failover Works

Automatic failover adds two new components to an HDFS deployment: ZooKeeper and the ZKFailoverController (ZKFC) process, as shown in the figure. ZooKeeper is a highly available service that maintains a small amount of coordination data, notifies clients when that data changes, and monitors clients for failures.
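Each NameNode runs a ZKFC that competes for an ephemeral lock znode in ZooKeeper; whichever ZKFC holds the lock makes its NameNode Active. You can inspect this coordination state directly (a sketch, assuming the mycluster nameservice):

[atguigu@hadoop102 ~]$ zkCli.sh
# ActiveStandbyElectorLock is the ephemeral lock held on behalf of the
# Active NN; ActiveBreadCrumb records the last known Active for fencing.
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha/mycluster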

4.2. Cluster Plan for HDFS-HA Automatic Failover

hadoop102      hadoop103      hadoop104
NameNode       NameNode       NameNode
JournalNode    JournalNode    JournalNode
DataNode       DataNode       DataNode
Zookeeper      Zookeeper      Zookeeper
ZKFC           ZKFC           ZKFC

4.3. Configuring HDFS-HA Automatic Failover

  1. Specific configuration

(1) Add to hdfs-site.xml

<!-- Enable automatic NN failover -->
<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>

(2) Add to core-site.xml

<!-- zkServer addresses for zkfc to connect to -->
<property>
	<name>ha.zookeeper.quorum</name>
	<value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>

(3) Distribute the modified configuration files

[atguigu@hadoop102 etc]$ pwd
/opt/ha/hadoop-3.1.3/etc
[atguigu@hadoop102 etc]$ xsync hadoop/
  2. Startup

(1) Stop all HDFS services:

[atguigu@hadoop102 ~]$ stop-dfs.sh

(2) Start the Zookeeper cluster:

[atguigu@hadoop102 ~]$ zkServer.sh start
[atguigu@hadoop103 ~]$ zkServer.sh start
[atguigu@hadoop104 ~]$ zkServer.sh start

(3) Once Zookeeper is up, initialize the HA state in Zookeeper:

[atguigu@hadoop102 ~]$ hdfs zkfc -formatZK

(4) Start the HDFS services:

[atguigu@hadoop102 ~]$ start-dfs.sh

(5) You can check the NameNode election lock znode contents in the zkCli.sh client:

[zk: localhost:2181(CONNECTED) 7] get -s /hadoop-ha/mycluster/ActiveStandbyElectorLock
	myclusternn2	hadoop103 �>(�>
cZxid = 0x10000000b
ctime = Tue Jul 14 17:00:13 CST 2020
mZxid = 0x10000000b
mtime = Tue Jul 14 17:00:13 CST 2020
pZxid = 0x10000000b
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x40000da2eb70000
dataLength = 33
numChildren = 0
  3. Verification
[atguigu@hadoop102 ~]$ kill -9 <pid of the namenode>
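After killing the Active NameNode, one of the standbys should be promoted within seconds. A quick check (a sketch; nn2/nn3 per the plan above):

# One of these should now report "active".
[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn2
[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn3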

4.4. Common Issue 1: NN Cannot Connect to JN

After automatic failover is configured and the HDFS cluster is started with the start-dfs.sh group-start script, you may find that a NameNode comes up and then its process exits a moment later. The NameNode log shows errors like this:

2020-08-17 10:11:40,658 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop104/192.168.6.104:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-08-17 10:11:40,659 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.6.102:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-08-17 10:11:40,659 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop103/192.168.6.103:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

… …

2020-08-17 10:11:49,669 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop102/192.168.6.102:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-08-17 10:11:49,673 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop104/192.168.6.104:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-08-17 10:11:49,676 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop103/192.168.6.103:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-08-17 10:11:49,678 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.6.102:8485, 192.168.6.103:8485, 192.168.6.104:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.6.103:8485: Call From hadoop102/192.168.6.102 to hadoop103:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.6.102:8485: Call From hadoop102/192.168.6.102 to hadoop102:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.6.104:8485: Call From hadoop102/192.168.6.102 to hadoop104:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Reading the error log, the cause is that the NameNode cannot connect to the JournalNodes. Yet jps shows all three JNs started normally, so why can't the NN reach them? The reason is that the start-dfs.sh group-start script starts the NNs first, then the DNs, and only then the JNs, and the default RPC connection policy is 10 retries at 1-second intervals. If the JNs are not up within the roughly 10 seconds (10 retries × 1 s) after an NN starts, the NN gives up and exits with this error.

core-default.xml contains these two parameters:

<!-- Number of times the NN retries connecting to the JN; default 10 -->
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>10</value>
</property>

<!-- Retry interval; default 1s -->
<property>
  <name>ipc.client.connect.retry.interval</name>
  <value>1000</value>
</property>
Solution: when you hit this problem, wait a moment for the JNs to finish starting, then start the three NNs manually:
[atguigu@hadoop102 ~]$ hdfs --daemon start namenode
[atguigu@hadoop103 ~]$ hdfs --daemon start namenode
[atguigu@hadoop104 ~]$ hdfs --daemon start namenode
You can also increase these two parameters appropriately in core-site.xml:
<!-- Number of times the NN retries connecting to the JN; default 10 -->
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>20</value>
</property>

<!-- Retry interval; default 1s -->
<property>
  <name>ipc.client.connect.retry.interval</name>
  <value>5000</value>
</property>
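A third option is to wait for the JournalNode RPC port before starting each NameNode, instead of racing the group-start script. A minimal sketch (assuming nc is installed and the default JN port 8485):

# Block until the local JournalNode accepts connections, then start the NN.
until nc -z localhost 8485; do
  sleep 1
done
hdfs --daemon start namenode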

5. YARN-HA Configuration

5.1. How YARN-HA Works

1) Official docs:

hadoop.apache.org/docs/r3.1.3…

2) YARN-HA working mechanism

5.2. Configuring a YARN-HA Cluster

5.2.1. Environment Preparation

(1) Set static IPs

(2) Set hostnames and hostname-to-IP mappings

(3) Disable the firewall

(4) Set up passwordless ssh login

(5) Install the JDK and configure environment variables

(6) Configure the Zookeeper cluster

5.2.2. Cluster Plan

5.2.3. Core Questions

    1. If the current active RM dies, how does a standby RM take over?

The core principle is the same as for HDFS: it relies on a ZooKeeper ephemeral node as the election lock.

    2. The failed RM had many applications waiting to run; how does the new RM pick them up and keep them running? (See the sketch after this answer.)

The RM stores the state of all current applications in ZooKeeper; after another RM takes over, it reads that state and carries on running them.
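This state store is visible in ZooKeeper. A sketch (assuming the default ZKRMStateStore parent path /rmstore; adjust if yarn.resourcemanager.zk-state-store.parent-path is overridden):

[atguigu@hadoop102 ~]$ zkCli.sh
# Application state that a newly promoted RM reads on takeover.
[zk: localhost:2181(CONNECTED) 0] ls /rmstore/ZKRMStateRoot/RMAppRoot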

5.2.4. Specific Configuration

  1. yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Enable resourcemanager ha -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Declare the id of the resourcemanager cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>

    <!-- Specify the logical list of resourcemanagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2,rm3</value>
    </property>

<!-- ========== rm1 configuration ========== -->
    <!-- Hostname of rm1 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop102</value>
    </property>

    <!-- Web UI address of rm1 -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop102:8088</value>
    </property>

    <!-- Internal communication address of rm1 -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>hadoop102:8032</value>
    </property>

    <!-- Address AMs use to request resources from rm1 -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>hadoop102:8030</value>
    </property>

    <!-- Address NMs connect to on rm1 -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>hadoop102:8031</value>
    </property>

<!-- ========== rm2 configuration ========== -->
    <!-- Hostname of rm2 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop103</value>
    </property>
    <!-- Web UI address of rm2 -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop103:8088</value>
    </property>
    <!-- Internal communication address of rm2 -->
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>hadoop103:8032</value>
    </property>
    <!-- Address AMs use to request resources from rm2 -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>hadoop103:8030</value>
    </property>
    <!-- Address NMs connect to on rm2 -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>hadoop103:8031</value>
    </property>

<!-- ========== rm3 configuration ========== -->
    <!-- Hostname of rm3 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm3</name>
        <value>hadoop104</value>
    </property>
    <!-- Web UI address of rm3 -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm3</name>
        <value>hadoop104:8088</value>
    </property>
    <!-- Internal communication address of rm3 -->
    <property>
        <name>yarn.resourcemanager.address.rm3</name>
        <value>hadoop104:8032</value>
    </property>
    <!-- Address AMs use to request resources from rm3 -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm3</name>
        <value>hadoop104:8030</value>
    </property>
    <!-- Address NMs connect to on rm3 -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm3</name>
        <value>hadoop104:8031</value>
    </property>

    <!-- Address of the zookeeper cluster -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
    </property>

    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <!-- Store resourcemanager state in the zookeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

    <!-- Environment variable inheritance -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>

</configuration>
  2. Sync the updated configuration to the other nodes by distributing the config files
[atguigu@hadoop102 etc]$ xsync hadoop/

5.2.5. Starting YARN

(1) Start on a node that has a ResourceManager.

[atguigu@hadoop102 ~]$ start-yarn.sh

(2) Check the service state

[atguigu@hadoop102 ~]$ yarn rmadmin -getServiceState rm1
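To see the state of every RM at a glance, loop over the configured IDs (a sketch using only -getServiceState, shown above):

# Query each ResourceManager's HA state in turn.
for rm in rm1 rm2 rm3; do
  echo -n "$rm: "
  yarn rmadmin -getServiceState "$rm"
done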

(3) You can check the ResourceManager election lock znode contents in the zkCli.sh client.

[atguigu@hadoop102 ~]$ zkCli.sh
[zk: localhost:2181(CONNECTED) 16] get -s /yarn-leader-election/cluster-yarn1/ActiveStandbyElectorLock

cluster-yarn1rm1
cZxid = 0x100000022
ctime = Tue Jul 14 17:06:44 CST 2020
mZxid = 0x100000022
mtime = Tue Jul 14 17:06:44 CST 2020
pZxid = 0x100000022
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x30000da33080005
dataLength = 20
numChildren = 0

(4) Check YARN's state from the web UI at hadoop102:8088 and hadoop103:8088

6. Final Hadoop HA Cluster Plan