Summary of cluster start/stop methods
Starting/stopping each module separately (requires passwordless SSH to be configured) is the common approach.
Start/stop all of HDFS at once:
[muyi@hadoop102 ~]$ start-dfs.sh
[muyi@hadoop102 ~]$ stop-dfs.sh
Start/stop all of YARN at once (run on hadoop103, where the ResourceManager lives):
[muyi@hadoop103 ~]$ start-yarn.sh
[muyi@hadoop103 ~]$ stop-yarn.sh
Start/stop individual service components one by one
Start/stop an HDFS component on its own. As a demonstration, first kill the running DataNode process (PID 3730 here), then confirm with jps that only the NameNode is left:
[muyi@hadoop102 ~]$ kill -9 3730
[muyi@hadoop102 ~]$ jps
4320 Jps
3565 NameNode
[muyi@hadoop102 ~]$
[muyi@hadoop102 ~]$ hdfs --daemon start datanode
[muyi@hadoop102 ~]$ jps
4387 DataNode
4455 Jps
3565 NameNode
[muyi@hadoop102 ~]$ hdfs --daemon stop datanode
[muyi@hadoop102 ~]$ jps
4512 Jps
3565 NameNode
[muyi@hadoop102 ~]$
hdfs --daemon start/stop namenode/datanode/secondarynamenode
Start/stop a YARN component on its own:
yarn --daemon start/stop resourcemanager/nodemanager
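As a memory aid for which components belong to `hdfs` and which to `yarn`, the two one-daemon command forms above can be captured in a small helper. `daemon_cmd` is a hypothetical function (not part of Hadoop); it only prints the command it would correspond to:

```shell
# Hypothetical helper: map a component name to the matching single-daemon
# control command. NameNode/DataNode/SecondaryNameNode belong to hdfs;
# ResourceManager/NodeManager belong to yarn.
daemon_cmd() {
    local action=$1 component=$2
    case $component in
        namenode|datanode|secondarynamenode)
            echo "hdfs --daemon $action $component" ;;
        resourcemanager|nodemanager)
            echo "yarn --daemon $action $component" ;;
        *)
            echo "unknown component: $component" >&2
            return 1 ;;
    esac
}

daemon_cmd start datanode    # → hdfs --daemon start datanode
```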
Writing common Hadoop cluster scripts
Cluster start/stop script (covers HDFS, YARN, and the JobHistoryServer): myhadoop.sh
[muyi@hadoop102 ~]$ cd bin/
[muyi@hadoop102 bin]$ ll
total 4
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$ vim myhadoop.sh
Script contents:
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit 1
fi
case $1 in
"start")
    echo " =================== starting hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
    ;;
"stop")
    echo " =================== stopping hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
    ;;
*)
    echo "Input Args Error..."
    ;;
esac
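One detail worth noting: the stop branch shuts components down in the reverse order of startup (historyserver, then yarn, then hdfs). The same argument-check and case dispatch can be exercised in isolation; here is a dry-run sketch with the ssh calls replaced by echo, so the ordering can be verified without a cluster:

```shell
# Dry-run sketch of myhadoop.sh's dispatch logic (ssh calls stubbed out
# with echo). Stop order is the reverse of start order.
myhadoop_dry() {
    if [ $# -lt 1 ]; then
        echo "No Args Input..."
        return 1
    fi
    case $1 in
    "start")
        echo "start hdfs"
        echo "start yarn"
        echo "start historyserver"
        ;;
    "stop")
        echo "stop historyserver"
        echo "stop yarn"
        echo "stop hdfs"
        ;;
    *)
        echo "Input Args Error..."
        ;;
    esac
}
```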
Save and exit, then grant the script execute permission:
[muyi@hadoop102 bin]$ ll
total 8
-rw-rw-r--. 1 muyi muyi 1171 Nov 13 21:33 myhadoop.sh
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$ chmod 777 myhadoop.sh
[muyi@hadoop102 bin]$ ll
total 8
-rwxrwxrwx. 1 muyi muyi 1171 Nov 13 21:33 myhadoop.sh
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$
The argument can also simply be +x, which adds execute permission without opening the file to everyone:
[muyi@hadoop102 bin]$ chmod +x myhadoop.sh
Try it out:
[muyi@hadoop102 ~]$ jps
4928 Jps
3565 NameNode
4575 DataNode
[muyi@hadoop102 ~]$ myhadoop.sh stop
=================== stopping hadoop cluster ===================
--------------- stopping historyserver ---------------
--------------- stopping yarn ---------------
Stopping nodemanagers
Stopping resourcemanager
--------------- stopping hdfs ---------------
Stopping namenodes on [hadoop102]
Stopping datanodes
Stopping secondary namenodes [hadoop104]
[muyi@hadoop102 ~]$ jps
5522 Jps
[muyi@hadoop102 hadoop]$ myhadoop.sh start
=================== starting hadoop cluster ===================
--------------- starting hdfs ---------------
Starting namenodes on [hadoop102]
Starting datanodes
Starting secondary namenodes [hadoop104]
--------------- starting yarn ---------------
Starting resourcemanager
Starting nodemanagers
--------------- starting historyserver ---------------
Script to check the Java processes on all three servers: jpsall
[muyi@hadoop102 hadoop]$ cd ~/bin/
[muyi@hadoop102 bin]$ ll
total 8
-rwxrwxr-x. 1 muyi muyi 1074 Nov 13 22:29 myhadoop.sh
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$ vim jpsall
#!/bin/bash
for host in hadoop102 hadoop103 hadoop104
do
    echo "=============== $host ==============="
    ssh $host jps
done
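The same per-host loop generalizes to running any command on every machine. A sketch of a hypothetical `xcall`-style helper (not part of the original notes, and assuming passwordless ssh); the ssh command is made overridable via `SSH_CMD` so the loop logic can be checked locally without a cluster:

```shell
# Hypothetical xcall-style helper: run an arbitrary command on every host.
# SSH_CMD defaults to ssh; setting SSH_CMD=echo turns it into a dry run
# that just prints which command would go to which host.
xcall() {
    local runner=${SSH_CMD:-ssh}
    for host in hadoop102 hadoop103 hadoop104
    do
        echo "=============== $host ==============="
        $runner "$host" "$@"
    done
}
```

Usage would be e.g. `xcall jps`, which behaves like the jpsall script above.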
Save and exit, then grant the script execute permission:
[muyi@hadoop102 bin]$ ll
total 12
-rw-rw-r--. 1 muyi muyi 124 Nov 13 23:31 jpsall
-rwxrwxr-x. 1 muyi muyi 1074 Nov 13 22:29 myhadoop.sh
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$ chmod 777 jpsall
[muyi@hadoop102 bin]$ ll
total 12
-rwxrwxrwx. 1 muyi muyi 124 Nov 13 23:31 jpsall
-rwxrwxr-x. 1 muyi muyi 1074 Nov 13 22:29 myhadoop.sh
-rwxrwxrwx. 1 muyi muyi 704 Nov 10 10:36 xsync
[muyi@hadoop102 bin]$
Try it out:
[muyi@hadoop102 bin]$ jpsall
=============== hadoop102 ===============
8594 JobHistoryServer
7940 NameNode
8423 NodeManager
8072 DataNode
9944 Jps
=============== hadoop103 ===============
6928 Jps
6325 NodeManager
6198 ResourceManager
5995 DataNode
=============== hadoop104 ===============
4466 SecondaryNameNode
4347 DataNode
4572 NodeManager
4957 Jps
[muyi@hadoop102 bin]$
Finally, distribute the scripts to the other hosts so the custom scripts are available on all three machines, e.g. with the xsync script from earlier:
[muyi@hadoop102 ~]$ xsync ~/bin/