Hadoop Cluster Setup

Steps for setting up a Hadoop cluster

Lab Introduction

In this lab we will set up a Hadoop cluster on three Linux virtual machines.

Knowledge Points

  • Basic Linux commands

Cluster Installation

Completing this lab requires the following background knowledge:

  1. The extraction command

tar -zxvf XX.tar.gz -C dist

  2. Using the vi editor

vi file opens the given file for editing; consult a vi editor guide to learn more.

  3. Remote copy

scp -r srcfile user@hostName:distpath

  4. The command to shut down the firewall

    service iptables stop

(This applies to CentOS 6; on systemd-based systems such as CentOS 7, use systemctl stop firewalld instead.)

  5. Installing the JDK on Linux (a sketch follows this list)

  6. Passwordless SSH login on Linux (a sketch appears in the preparation section below)

  7. Basic knowledge of Hadoop clusters
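For item 5, here is a minimal sketch of installing JDK 8. The tarball name jdk-8u101-linux-x64.tar.gz and the upload location /root are assumptions, not from the original; the install path matches the JAVA_HOME that hadoop-env.sh uses later.

[root@linux1 ~]# mkdir -p /root/appstest1
[root@linux1 ~]# tar -zxvf /root/jdk-8u101-linux-x64.tar.gz -C /root/appstest1
[root@linux1 ~]# echo 'export JAVA_HOME=/root/appstest1/jdk1.8.0_101' >> /etc/profile
[root@linux1 ~]# echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
[root@linux1 ~]# source /etc/profile
[root@linux1 ~]# java -version    # should report 1.8.0_101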

Preparation

  1. Prepare three Linux virtual machines
  2. Configure the IPs and hostnames; the table below shows the configuration used in this lab (see the /etc/hosts sketch after this list)

IP              Host     Software
192.168.1.111   linux1   java8, hadoop
192.168.1.112   linux2   java8, hadoop
192.168.1.113   linux3   java8, hadoop
  3. Configure passwordless login: linux1 must log in to linux2 and linux3 without a password (see the SSH sketch after this list)
  4. Install JDK 8
  5. Prepare the Hadoop 2.7.7 installation package
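For steps 2 and 3, a minimal sketch; the exact commands are not in the original. Append the mappings to /etc/hosts on every node:

192.168.1.111   linux1
192.168.1.112   linux2
192.168.1.113   linux3

Then, on linux1, generate an RSA key pair and copy the public key to each node (including linux1 itself, since start-dfs.sh also connects to the local node over SSH):

[root@linux1 ~]# ssh-keygen -t rsa      # accept the defaults, empty passphrase
[root@linux1 ~]# ssh-copy-id root@linux1
[root@linux1 ~]# ssh-copy-id root@linux2
[root@linux1 ~]# ssh-copy-id root@linux3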

Now let's begin the lab.

Hadoop Cluster Setup Lab

  1. Upload the Hadoop installation file to /root/srcclauster
  2. On the master node, create a directory apps to serve as the installation directory
[root@linux1 ~]# mkdir  /root/apps
  3. Extract Hadoop
[root@linux1 ~]# tar -zxvf /root/srcclauster/hadoop-2.7.7.tar.gz -C /root/apps
  4. Configure Hadoop

Enter the Hadoop configuration directory and open hadoop-env.sh to set JAVA_HOME.

[root@linux1 ~]# cd /root/apps/hadoop-2.7.7/etc/hadoop
[root@linux1 hadoop]#

[root@linux1 hadoop]# vi   hadoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/root/appstest1/jdk1.8.0_101

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by 
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

Open core-site.xml and configure the master node (the NameNode address) and the working directory.

[root@linux1 hadoop]# vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://linux1:9000</value>
  </property>
  <property>
      <name>hadoop.tmp.dir</name>
      <value>/root/appstest1/appdata</value>
  </property>
</configuration>

Open mapred-site.xml and configure how MapReduce jobs run. Hadoop 2.7.7 ships only mapred-site.xml.template, so copy it to mapred-site.xml first:

[root@linux1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@linux1 hadoop]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
  </property>
</configuration>

Open yarn-site.xml and configure the YARN master node (the ResourceManager) and the shuffle auxiliary service.

[root@linux1 hadoop]# vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>linux1</value>
  </property>
  <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
  </property>
</configuration>

Configure slaves, which lists the worker nodes:

[root@linux1 hadoop]# vi slaves
linux2
linux3
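  5. Distribute the installation to the worker nodes

The original steps do not show this explicitly, but linux2 and linux3 need the same JDK and Hadoop installation before the cluster can start. A minimal sketch using scp from the prerequisites, assuming identical paths on all three nodes:

[root@linux1 ~]# scp -r /root/apps root@linux2:/root/
[root@linux1 ~]# scp -r /root/apps root@linux3:/root/
[root@linux1 ~]# scp -r /root/appstest1 root@linux2:/root/    # the JDK path referenced in hadoop-env.sh
[root@linux1 ~]# scp -r /root/appstest1 root@linux3:/root/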
  6. Format HDFS (this initializes the NameNode metadata under hadoop.tmp.dir; run it only once)
[root@linux1 ~]# /root/apps/hadoop-2.7.7/bin/hdfs namenode -format
  7. Start the Hadoop cluster

On linux1:

[root@linux1 apps]# /root/apps/hadoop-2.7.7/sbin/start-dfs.sh
20/04/27 16:14:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [linux1]
linux1: starting namenode, logging to /root/apps/hadoop-2.7.7/logs/hadoop-root-namenode-linux1.out
linux3: datanode running as process 1618. Stop it first.
linux2: datanode running as process 1617. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/apps/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-linux1.out
20/04/27 16:15:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@linux1 apps]# 
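The "datanode running as process ... Stop it first" lines above just mean the DataNodes were still running from an earlier attempt. Note that start-dfs.sh starts HDFS only (NameNode, DataNodes, SecondaryNameNode); for the yarn-site.xml settings above to take effect, start YARN as well:

[root@linux1 apps]# /root/apps/hadoop-2.7.7/sbin/start-yarn.sh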
  8. Verify that the cluster started successfully
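A quick check, assuming the defaults used above: jps should show NameNode, SecondaryNameNode, and ResourceManager on linux1, and DataNode and NodeManager on linux2 and linux3. The NameNode web UI is at http://linux1:50070 and the ResourceManager UI at http://linux1:8088.

[root@linux1 ~]# jps
[root@linux2 ~]# jps
[root@linux1 ~]# /root/apps/hadoop-2.7.7/bin/hdfs dfsadmin -report    # lists the live DataNodes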

Summary

Four core files are configured: hadoop-env.sh sets JAVA_HOME, core-site.xml sets the master node and working directory, mapred-site.xml sets how MapReduce runs, and yarn-site.xml sets the YARN master node.

For my other articles, see juejin.cn/user/175884…