Configure the Hadoop configuration files (this whole step is performed on the master node only)
- mkdir -p /usr/local/hadoop/{data/tmp,dfs/name,dfs/data,tmp} (see the note just below for the worker nodes)
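Once the configs are distributed, the DataNode and NodeManager on node1 and node2 will use these same local paths, so it is safest to create them on the worker nodes too. A minimal sketch, assuming passwordless SSH to node1/node2 is already set up:
for h in node1 node2; do ssh $h "mkdir -p /usr/local/hadoop/{data/tmp,dfs/name,dfs/data,tmp}"; done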
- cd /opt/hadoop-3.1.3/etc/hadoop/
- Add the following to hadoop-env.sh (vi hadoop-env.sh):
export JAVA_HOME=/opt/jdk
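A quick sanity check that the path set above really points at a JDK (this assumes the JDK was unpacked to /opt/jdk):
/opt/jdk/bin/java -version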
- Add the following to core-site.xml (vi core-site.xml):
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
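fs.defaultFS refers to the NameNode by the hostname master, so every node must be able to resolve that name; assuming the hostnames were mapped in /etc/hosts beforehand, a quick check is:
getent hosts master node1 node2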
- Add the following to hdfs-site.xml (vi hdfs-site.xml):
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>
  <property><!-- Local filesystem path where the NameNode persistently stores the namespace and transaction logs -->
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/dfs/name</value>
  </property>
  <property><!-- Local filesystem path where the DataNode stores block data -->
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/dfs/data</value>
  </property>
  <property><!-- Number of replicas per block; must not exceed the number of machines in the cluster (default is 3) -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
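Once the configs have been distributed and the cluster is running, the effective replication factor can be double-checked with:
hdfs getconf -confKey dfs.replication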
- Add the following to yarn-site.xml (vi yarn-site.xml):
<configuration>
  <property><!-- Auxiliary service run on each NodeManager, required for MapReduce -->
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
- Add the following to mapred-site.xml (vi mapred-site.xml):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
- Add the following to the workers file (vi workers):
master
node1
node2
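start-all.sh later starts daemons over SSH on every host listed in workers, so passwordless SSH from the master to all three must already work; a loop like this confirms it without hanging on a password prompt:
for h in master node1 node2; do ssh -o BatchMode=yes $h hostname; done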
- Add the following near the top of both /opt/hadoop-3.1.3/sbin/start-yarn.sh and /opt/hadoop-3.1.3/sbin/stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
- Add the following near the top of both /opt/hadoop-3.1.3/sbin/start-dfs.sh and /opt/hadoop-3.1.3/sbin/stop-dfs.sh:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
- Distribute the modified files to the worker nodes (note the /* on the sources: copying the directory itself would nest it inside the existing directory on the target):
scp -r /opt/hadoop-3.1.3/etc/hadoop/* node1:/opt/hadoop-3.1.3/etc/hadoop/
scp -r /opt/hadoop-3.1.3/etc/hadoop/* node2:/opt/hadoop-3.1.3/etc/hadoop/
scp -r /opt/hadoop-3.1.3/sbin/* node1:/opt/hadoop-3.1.3/sbin/
scp -r /opt/hadoop-3.1.3/sbin/* node2:/opt/hadoop-3.1.3/sbin/
- Run hadoop classpath and copy its output. Then edit /opt/hadoop-3.1.3/etc/hadoop/yarn-site.xml (vi /opt/hadoop-3.1.3/etc/hadoop/yarn-site.xml) and add the following inside <configuration>, where the <value> of yarn.application.classpath is the path list printed by hadoop classpath on your machine:
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*</value>
  </property>
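To compare the pasted value against the real output more easily, hadoop classpath can be split at the colons:
hadoop classpath | tr ':' '\n'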
- Distribute /opt/hadoop-3.1.3/etc/hadoop/* to all nodes:
scp -r /opt/hadoop-3.1.3/etc/hadoop/* node1:/opt/hadoop-3.1.3/etc/hadoop/
scp -r /opt/hadoop-3.1.3/etc/hadoop/* node2:/opt/hadoop-3.1.3/etc/hadoop/
- Initialize Hadoop (run on every node):
cd /opt/hadoop-3.1.3/bin
./hdfs namenode -format
- Start Hadoop (run on the master only). The cluster can then be managed at ip:8088 (YARN) and its data browsed at ip:50070 (HDFS web UI).
cd /opt/hadoop-3.1.3/sbin
./start-all.sh
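After start-all.sh completes, jps on each node is a quick health check: since master is also listed in workers, it should show NameNode, SecondaryNameNode, ResourceManager, DataNode and NodeManager, while node1/node2 should show DataNode and NodeManager:
for h in master node1 node2; do echo "== $h =="; ssh $h jps; done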
6. Run the wordcount example that ships with Hadoop
List the HDFS root directory:
hdfs dfs -ls /
Create an HDFS directory for the input:
hdfs dfs -mkdir /input
Upload a file to HDFS to use as the wordcount input text:
hdfs dfs -put /etc/httpd/conf/httpd.conf /input
cd /opt/hadoop-3.1.3/share/hadoop/mapreduce/
hadoop jar hadoop-mapreduce-examples-3.1.3.jar wordcount /input/httpd.conf /output
hdfs dfs -cat /output/part-r-00000
Note: adjust the jar file name to match your own Hadoop version.
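Also note that MapReduce refuses to start if the output directory already exists, so remove it before re-running the job:
hdfs dfs -rm -r /output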
If a Hadoop job fails because HDFS is in safe mode, leave safe mode with:
hdfs dfsadmin -safemode leave
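The current state can be checked before and after with:
hdfs dfsadmin -safemode get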
Finally, run a health check and delete any corrupted blocks.
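A common way to do this is hdfs fsck: a plain fsck of / reports filesystem health, and -delete removes the corrupted files it finds (use with care):
hdfs fsck /
hdfs fsck / -delete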