Configuring a Hadoop 2.8 Distributed Cluster on CentOS 7


III. Configure and Install Hadoop 2.8


1. Configure the Java runtime environment (all nodes)


[root@namenode ~]# vim /etc/profile.d/java.sh
export JAVA_HOME=/etc/alternatives/java_sdk_1.8.0_openjdk
export PATH=$PATH:$JAVA_HOME/bin

[root@namenode ~]# source /etc/profile.d/java.sh
[root@namenode ~]# env |grep JAVA_HOME
JAVA_HOME=/etc/alternatives/java_sdk_1.8.0_openjdk
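
Before moving on, it is worth confirming that the JDK actually resolves through the new PATH (this assumes the java-1.8.0-openjdk-devel package is installed, which is what provides the alternatives path above):

[root@namenode ~]# which java
[root@namenode ~]# java -version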

2. Configure the hosts file, add a user, and create directories (all nodes)


[root@namenode ~]# vim /etc/hosts

192.168.81.142 namenode.example.com namenode
192.168.81.146 datanode1.example.com datanode1
192.168.81.147 datanode2.example.com datanode2

[root@namenode ~]# useradd hadoop
[root@namenode ~]# passwd hadoop
[root@namenode ~]# mkdir -pv /usr/local/hadoop/datanode
[root@namenode ~]# chmod 755 /usr/local/hadoop/datanode
[root@namenode ~]# chown -R hadoop:hadoop /usr/local/hadoop
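
Each node should now be able to resolve the others by name; a quick sanity check against the entries added above:

[root@namenode ~]# getent hosts datanode1.example.com
[root@namenode ~]# ping -c 1 datanode2.example.com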

3. Configure SSH equivalence, i.e. passwordless login (all nodes)


[root@namenode ~]# su - hadoop
[hadoop@namenode ~]$
[hadoop@namenode ~]$ ssh-keygen
[hadoop@namenode ~]$ ssh-copy-id localhost
[hadoop@namenode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.81.146
[hadoop@namenode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.81.147

[hadoop@namenode ~]$ ssh namenode.example.com date; \
> ssh datanode1.example.com date; \
> ssh datanode2.example.com date
Wed Nov 15 16:06:16 CST 2017
Wed Nov 15 16:06:16 CST 2017
Wed Nov 15 16:06:16 CST 2017
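
If any of these commands still prompts for a password or a host-key confirmation, equivalence is not fully in place; running ssh in non-interactive mode makes such failures explicit instead of hanging at a prompt:

[hadoop@namenode ~]$ ssh -o BatchMode=yes datanode1.example.com hostname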

4. Configure the Hadoop runtime environment (all nodes)

[hadoop@namenode ~]$ vi ~/.bash_profile
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

[hadoop@namenode ~]$ source ~/.bash_profile
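
As with JAVA_HOME in step 1, you can confirm the variables took effect:

[hadoop@namenode ~]$ env | grep HADOOP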

5. Install Hadoop (all nodes)


[hadoop@namenode ~]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz -P /tmp
[hadoop@namenode ~]$ tar -xf /tmp/hadoop-2.8.1.tar.gz -C /usr/local/hadoop --strip-components 1
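
If the variables from step 1 are in place, the bundled scripts should pick up JAVA_HOME automatically (the stock etc/hadoop/hadoop-env.sh in 2.x defaults to export JAVA_HOME=${JAVA_HOME}); otherwise set it there explicitly. Either way, verify the unpacked installation:

[hadoop@namenode ~]$ hadoop version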

6. Edit the Hadoop configuration files


[hadoop@namenode ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///usr/local/hadoop/datanode</value>
    </property>
</configuration>

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml \
> datanode1:/usr/local/hadoop/etc/hadoop

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml \
> datanode2:/usr/local/hadoop/etc/hadoop

[hadoop@namenode ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:9000/</value>
    </property>
</configuration>

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/core-site.xml \
> datanode1:/usr/local/hadoop/etc/hadoop

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/core-site.xml \
> datanode2:/usr/local/hadoop/etc/hadoop
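
With both files distributed, hdfs getconf is a quick way to confirm the values Hadoop will actually see on each node:

[hadoop@namenode ~]$ hdfs getconf -confKey fs.defaultFS
[hadoop@namenode ~]$ hdfs getconf -confKey dfs.replication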

Edit hdfs-site.xml again, on the namenode only:
[hadoop@namenode ~]$ mkdir -pv /usr/local/hadoop/namenode
[hadoop@namenode ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Add the following property inside the <configuration>...</configuration> block:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/namenode</value>
</property>
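
Note that the Hadoop 2.8 tarball ships only a template for the MapReduce configuration, so create mapred-site.xml from it before editing:

[hadoop@namenode ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template \
> /usr/local/hadoop/etc/hadoop/mapred-site.xml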

[hadoop@namenode ~]$ vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

[hadoop@namenode ~]$ vi /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>namenode.example.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.hostname</name>
        <value>namenode.example.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
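
Unlike the HDFS files above, mapred-site.xml and yarn-site.xml are not copied to the datanodes in this walkthrough; without at least yarn.resourcemanager.hostname, the NodeManagers there will likely fall back to the default ResourceManager address of 0.0.0.0 and never register. Distributing the two files the same way avoids this (on the datanodes, drop or adjust the yarn.nodemanager.hostname property, since it names the local NodeManager rather than the master):

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/{mapred-site.xml,yarn-site.xml} \
> datanode1:/usr/local/hadoop/etc/hadoop

[hadoop@namenode ~]$ scp /usr/local/hadoop/etc/hadoop/{mapred-site.xml,yarn-site.xml} \
> datanode2:/usr/local/hadoop/etc/hadoop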

[hadoop@namenode ~]$ vi /usr/local/hadoop/etc/hadoop/slaves
# add all nodes (remove localhost)
namenode.example.com
datanode1.example.com
datanode2.example.com

7. Format the NameNode


[hadoop@namenode ~]$ hdfs namenode -format
17/11/16 16:32:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hadoop
STARTUP_MSG: host = namenode.example.com/192.168.81.142
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.8.1
........

17/11/16 16:32:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at namenode.example.com/192.168.81.142
************************************************************/
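
A successful format populates the directory configured through dfs.namenode.name.dir; you should find a current/ subdirectory holding a VERSION file and an initial fsimage:

[hadoop@namenode ~]$ ls /usr/local/hadoop/namenode/current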

8. Start Hadoop


[hadoop@namenode ~]$ start-dfs.sh
Starting namenodes on [namenode.example.com]
namenode.example.com: starting namenode, logging to /usr/...-namenode-namenode.example.com.out
datanode2.example.com: starting datanode, logging to /usr/...-datanode-datanode2.example.com.out
namenode.example.com: starting datanode, logging to /usr/...-datanode-namenode.example.com.out
datanode1.example.com: starting datanode, logging to /usr/...-datanode-datanode1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/...-secondarynamenode-namenode.example.com.out

[hadoop@namenode ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/...-resourcemanager-namenode.example.com.out
datanode2.example.com: starting nodemanager, logging to /usr/...-datanode2.example.com.out
datanode1.example.com: starting nodemanager, logging to /usr/...-datanode1.example.com.out
namenode.example.com: starting nodemanager, logging to /usr/...-namenode.example.com.out

[root@namenode ~]# jps
12995 Jps
10985 ResourceManager
11179 NodeManager
10061 NameNode
10301 DataNode
10655 SecondaryNameNode
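
Finally, confirm that all three datanodes (the slaves file includes the namenode host itself) registered with the cluster; jps on each datanode should likewise show DataNode and NodeManager processes, and the standard Hadoop 2.x web UIs are available at namenode.example.com:50070 (HDFS) and namenode.example.com:8088 (YARN):

[hadoop@namenode ~]$ hdfs dfsadmin -report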