Deploying a single-node Hadoop 3.3.0


Environment: CentOS 7, with Java already installed

Extract the hadoop-3.3.0.tar.gz archive:

tar -zxvf hadoop-3.3.0.tar.gz
cd hadoop-3.3.0

Configure the Hadoop environment variables and allow the daemons to run as root:

vim /etc/profile
export HADOOP_HOME=/opt/hadoop-3.3.0
export JAVA_HOME=/opt/jdk1.8.0_221
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}:${HADOOP_HOME}/bin

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
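After saving, run source /etc/profile so the current shell picks up the changes. As a quick sanity check, this sketch (paths taken from the listing above) shows which entries end up on PATH:

```shell
# Minimal sketch of the PATH assembly above, using the paths assumed in this article.
export HADOOP_HOME=/opt/hadoop-3.3.0
export JAVA_HOME=/opt/jdk1.8.0_221
export JRE_HOME=${JAVA_HOME}/jre
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}:${HADOOP_HOME}/bin

# Show the entries just appended to PATH, one per line.
echo "$PATH" | tr ':' '\n' | tail -3
```

If the variables are set correctly, the last three PATH entries are the JDK bin, JRE bin, and Hadoop bin directories.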

Configure the Java environment in hadoop-env.sh:

vim etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.8.0_221

Configure core-site.xml (replace "ip" below with the node's hostname or IP address):

vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ip:9000</value>
  </property>
</configuration>
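Before pointing Hadoop at the file, it can help to confirm the value reads back as expected. A minimal sketch, using a temp copy of the snippet above ("ip" is still a placeholder):

```shell
# Write the minimal core-site.xml from above to a temp file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ip:9000</value>
  </property>
</configuration>
EOF

# Pull the fs.defaultFS value back out of the file.
value=$(grep -A1 '<name>fs.defaultFS</name>' "$tmp" | grep -o 'hdfs://[^<]*')
echo "$value"
rm -f "$tmp"
```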

Configure hdfs-site.xml

dfs.replication sets the number of file replicas; the default is 3. Because this is a single-node cluster with only one DataNode, only one replica is possible, so change it to 1:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Set up passwordless SSH; otherwise the start scripts cannot SSH into the local machine:

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
ssh localhost

Format the filesystem:

bin/hdfs namenode -format

Start the NameNode and DataNode daemons:

sbin/start-dfs.sh

Verify with jps. On success you should see a NameNode, a DataNode, and a SecondaryNameNode process.
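The check can be scripted with a small loop. The sample below uses made-up jps output for illustration (the PIDs are invented); on the real node, replace the variable with the actual command output:

```shell
# Illustrative sample of `jps` output (PIDs are made up);
# on the real node, use: jps_output=$(jps)
jps_output="12345 NameNode
12400 DataNode
12600 SecondaryNameNode
12700 Jps"

# Confirm each expected HDFS daemon appears in the output.
for d in NameNode DataNode SecondaryNameNode; do
  if echo "$jps_output" | grep -qw "$d"; then
    echo "$d: running"
  else
    echo "$d: MISSING"
  fi
done
```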

The NameNode web UI is then available at http://ip:9870/ (in Hadoop 3.x this port changed from 50070 to 9870).

Source: cloud.tencent.com/developer/a…