Building an Offline Data Warehouse Environment with Docker


Preface

Versions used:

| Software | Version |
| --- | --- |
| MySQL | 5.5.40 |
| JDK | 1.8 |
| Hadoop | 3.2.1 |
| Hive | 3.1.2 |
| Sqoop | 1.4.7 |
| Flume | 1.9.0 |

1. Docker Installation

1.1 CentOS Docker Installation

# The image is fairly large, so use a stable network connection
# --mirror Aliyun selects the Aliyun mirror
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

1.2 Ubuntu Docker Installation (recommended)

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

1.3 macOS Docker Installation

# Download the installer and drag it into Applications
https://hub.docker.com/editions/community/docker-ce-desktop-mac/

1.4 Windows Docker Installation (not recommended)

# Windows 10 Home (reference)
https://docs.docker.com/docker-for-windows/install-windows-home/

# Windows 10 Pro, Business, or Education (reference)
https://docs.docker.com/docker-for-windows/install/

2. Container Preparation

2.1 Pull the image

docker pull centos:7

2.2 Create and start the container

docker run -itd --privileged --name singleNode -h singleNode \
-p 2222:22 \
-p 3306:3306 \
-p 8020:8020 \
-p 9870:9870 \
-p 19888:19888 \
-p 8088:8088 \
-p 9083:9083 \
-p 10000:10000 \
-p 2181:2181 \
-p 9092:9092 \
-p 8091:8091 \
-p 8080:8080 \
-p 16010:16010 \
-p 4000:4000 \
-p 3000:3000 \
centos:7 /usr/sbin/init

# Port mappings explained
2222:22     # SSH
3306:3306   # MySQL
8020:8020   # HDFS RPC
9870:9870   # HDFS web UI
19888:19888 # YARN job history
8088:8088   # YARN web UI
9083:9083   # Hive metastore
10000:10000 # HiveServer2
2181:2181   # ZooKeeper
9092:9092   # Kafka
8091:8091   # Flink
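
That long run of `-p` flags is easy to mistype. As a small illustrative helper (not part of the original setup), the container=host pairs can be kept as data and the flags generated from them:

```shell
#!/bin/sh
# Illustrative helper: keep the container=host port pairs as data and
# generate the -p flags for the docker run command above.
PORTS="22=2222 3306=3306 8020=8020 9870=9870 19888=19888 8088=8088 \
9083=9083 10000=10000 2181=2181 9092=9092 8091=8091"

FLAGS=""
for pair in $PORTS; do
    container="${pair%%=*}"   # left of '='  : port inside the container
    host="${pair#*=}"         # right of '=' : port on the Docker host
    FLAGS="$FLAGS -p ${host}:${container}"
done

echo "docker run -itd --privileged --name singleNode -h singleNode$FLAGS centos:7 /usr/sbin/init"
```

Adding a service then means adding one more pair to `PORTS` instead of editing the command by hand.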

2.3 Enter the container

docker exec -it singleNode /bin/bash

3. Environment Preparation

3.1 Install required packages

yum clean all
yum -y install unzip bzip2-devel vim bashname
yum install kde-l10n-Chinese -y
yum install glibc-common -y
localedef -c -f UTF-8 -i zh_CN zh_CN.utf8
echo "export LANG=zh_CN.UTF-8" >> /etc/locale.conf
echo "LC_ALL zh_CN.UTF-8" >> ~/.bashrc

3.2 Configure passwordless SSH login

# Change the root password
passwd root  # enter the new password twice
# Install the SSH packages
yum install -y openssh openssh-server openssh-clients openssl openssl-devel
# Generate a key pair
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
# Authorize the key (alternative: ssh-copy-id)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Start the SSH service
systemctl start sshd
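
The effect of the `cat … >> authorized_keys` step can be sanity-checked: passwordless login works once the public key text appears in `authorized_keys`. A minimal sketch using a scratch directory and a placeholder key string (the real commands operate on `~/.ssh`):

```shell
#!/bin/sh
# Sketch: verify the public key ended up in authorized_keys.
# The key text here is a placeholder; on the container use ~/.ssh/id_rsa.pub.
tmp=$(mktemp -d)
echo "ssh-rsa AAAAB3Nza...placeholder root@singleNode" > "$tmp/id_rsa.pub"

# Same append step as above, against the scratch files
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"

if grep -qxF "$(cat "$tmp/id_rsa.pub")" "$tmp/authorized_keys"; then
    echo "key authorized"
fi
rm -rf "$tmp"
```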

3.3 Set the time zone

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

3.4 Disable the firewall

yum -y install firewalld
systemctl stop firewalld
systemctl disable firewalld

3.5 时间同步、静态ip、主机映射

  • 由于本次课使用docker进行环境搭建,所以对于静态ip和主机映射可以不用配置
  • 由于本次课搭建的是单节点的伪分布式集群,所以时间同步可以用不设置
  • 如果在物理机上搭建多节点的完全分布式集群则必须配置

4. MySQL Installation

4.1 Upload and extract the package

cd /opt/software/
tar xvf MySQL-5.5.40-1.linux2.6.x86_64.rpm-bundle.tar

4.2 Install required dependencies

yum -y install libaio perl

4.3 Install the server and client

rpm -ivh MySQL-server-5.5.40-1.linux2.6.x86_64.rpm
rpm -ivh MySQL-client-5.5.40-1.linux2.6.x86_64.rpm

4.4 Start and configure MySQL

Option 1

# Start the service
systemctl start mysql
# Set the MySQL root password
/usr/bin/mysqladmin -u root password 'root'
# Log in and open up remote access
mysql -uroot -proot
> update mysql.user set host='%' where host='localhost';
> delete from mysql.user where host<>'%' or user='';
> flush privileges;

Option 2

# Start the service
systemctl start mysql
# Run the MySQL hardening script
/usr/bin/mysql_secure_installation
# Press Enter once, then enter the new password twice
# Remove anonymous users? [Y/n]  n
# Disallow root login remotely? [Y/n]  y
# Remove test database and access to it? [Y/n]  n
# Reload privilege tables now? [Y/n]  y
# Log in and open up remote access
mysql -uroot -proot
> update mysql.user set host='%' where host='localhost';
> delete from mysql.user where host<>'%' or user='';
> flush privileges;

5. JDK Installation

5.1 Upload and extract

tar zxvf /opt/software/jdk-8u171-linux-x64.tar.gz -C /opt/install/
ln -s /opt/install/jdk1.8.0_171 /opt/install/java

5.2 Configure environment variables

Environment variables go in ~/.bashrc:

vi ~/.bashrc
-------------------------------------------
export JAVA_HOME=/opt/install/java
export PATH=$JAVA_HOME/bin:$PATH
-------------------------------------------
source ~/.bashrc
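
A quick way to confirm the change took effect after `source ~/.bashrc` is to check that `$JAVA_HOME/bin` is actually on `PATH` (paths as configured above):

```shell
#!/bin/sh
# Check that JAVA_HOME/bin is on PATH after sourcing ~/.bashrc.
# JAVA_HOME matches the symlink created in 5.1.
JAVA_HOME=/opt/install/java
PATH="$JAVA_HOME/bin:$PATH"

case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
    *)                    echo "PATH is missing $JAVA_HOME/bin" ;;
esac
```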

5.3 Check the version

java -version

6. Hadoop Installation

6.1 Upload and extract

tar zxvf hadoop-3.2.1.tar.gz -C /opt/install/
ln -s /opt/install/hadoop-3.2.1/ /opt/install/hadoop

6.2 Edit the configuration

# Change into the config directory
cd /opt/install/hadoop/etc/hadoop/

6.2.1 Configure core-site.xml
vi core-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://singleNode:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/install/hadoop/data</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
</configuration>
-------------------------------------------
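
`fs.defaultFS` must agree with the 8020 port mapping chosen in section 2.2. A small illustrative check that extracts the value with `sed` (a here-doc stands in for the real core-site.xml):

```shell
#!/bin/sh
# Illustrative: extract fs.defaultFS from core-site.xml-style input and
# compare it against the expected NameNode address. The here-doc stands
# in for /opt/install/hadoop/etc/hadoop/core-site.xml.
value=$(sed -n 's:.*<value>\(hdfs[^<]*\)</value>.*:\1:p' <<'EOF'
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://singleNode:8020</value>
</property>
EOF
)
if [ "$value" = "hdfs://singleNode:8020" ]; then
    echo "fs.defaultFS matches the mapped port"
fi
```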

6.2.2 Configure hdfs-site.xml
vi hdfs-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>singleNode:9868</value>
    </property>
</configuration>
-------------------------------------------

6.2.3 Configure mapred-site.xml
vi mapred-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>singleNode:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>singleNode:19888</value>
    </property>
</configuration>
-------------------------------------------

6.2.4 Configure yarn-site.xml
vi yarn-site.xml
-------------------------------------------
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>singleNode</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log.server.url</name>
        <value>http://${yarn.timeline-service.webapp.address}/applicationhistory/logs</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.timeline-service.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.timeline-service.hostname</name>
        <value>${yarn.resourcemanager.hostname}</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-cross-origin.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
        <value>true</value>
    </property>
</configuration>
-------------------------------------------

6.2.5 Configure hadoop-env.sh
vi hadoop-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.6 Configure mapred-env.sh
vi mapred-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.7 Configure yarn-env.sh
vi yarn-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------

6.2.8 Configure workers
vi workers
-------------------------------------------
singleNode
-------------------------------------------

6.3 Add environment variables

vi ~/.bashrc
------------------------------------------------
export HADOOP_HOME=/opt/install/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
------------------------------------------------
vi $HADOOP_HOME/sbin/start-dfs.sh
vi $HADOOP_HOME/sbin/stop-dfs.sh
------------------------------------------------
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root
------------------------------------------------
vi $HADOOP_HOME/sbin/start-yarn.sh
vi $HADOOP_HOME/sbin/stop-yarn.sh
------------------------------------------------
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
------------------------------------------------
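
Whether the user variables actually made it into the start/stop scripts can be checked with `grep`. A sketch, with a temp file standing in for `$HADOOP_HOME/sbin/start-dfs.sh`:

```shell
#!/bin/sh
# Sketch: check that each required *_USER variable is present in a script.
# A temp file stands in for $HADOOP_HOME/sbin/start-dfs.sh.
script=$(mktemp)
printf '%s\n' \
    'HDFS_NAMENODE_USER=root' \
    'HDFS_DATANODE_USER=root' \
    'HDFS_SECONDARYNAMENODE_USER=root' > "$script"

for var in HDFS_NAMENODE_USER HDFS_DATANODE_USER HDFS_SECONDARYNAMENODE_USER; do
    if grep -q "^${var}=" "$script"; then
        echo "OK: $var"
    else
        echo "MISSING: $var"
    fi
done
rm -f "$script"
```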

6.4 Format HDFS

hdfs namenode -format

6.5 Start the Hadoop services

# Start HDFS
$HADOOP_HOME/sbin/start-dfs.sh
# Start YARN
$HADOOP_HOME/sbin/start-yarn.sh
# Start the job history server
mapred --daemon start historyserver
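
After these three commands, `jps` on the container should list the NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and JobHistoryServer daemons. A small sketch of such a check (the sample listing is illustrative; on the container, replace it with real `jps` output):

```shell
#!/bin/sh
# Sketch: compare a jps listing against the daemons expected after 6.5.
# The sample text is illustrative; on the container use: sample=$(jps)
sample="1234 NameNode
2345 DataNode
3456 SecondaryNameNode
4567 ResourceManager
5678 NodeManager
6789 JobHistoryServer"

for daemon in NameNode DataNode SecondaryNameNode \
              ResourceManager NodeManager JobHistoryServer; do
    if echo "$sample" | grep -q " ${daemon}\$"; then
        echo "OK: $daemon"
    else
        echo "MISSING: $daemon"
    fi
done
```

The trailing anchor in the pattern keeps `NameNode` from also matching the `SecondaryNameNode` line.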

6.6 Check the web UIs

Visit port 9870 (the HDFS web UI).

Visit port 8088 (the YARN web UI).

7. Hive Installation

