1. Background
Sqoop export jobs in production were extremely slow: even exporting a single row took roughly 10 minutes, delaying data delivery downstream.
2. Troubleshooting
Checking the logs showed that the actual MapReduce phase was short and finished quickly; almost all of the time was spent uploading jars.
As the log excerpt below shows, uploading the jars alone took about 8 minutes.
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,585 INFO hcat.SqoopHCatUtilities: Adding to job classpath: file:/opt/client_new/Hive/install/hive-3.1.0/lib/jetty-http-9.4.20.v20190813.jar
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,585 INFO hcat.SqoopHCatUtilities: Adding jar files under /usr/lib/hcatalog/share/hcatalog/storage-handlers to distributed cache (recursively)
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,585 WARN hcat.SqoopHCatUtilities: No files under /usr/lib/hcatalog/share/hcatalog/storage-handlers to add to distributed cache for hcatalog job
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,586 INFO hcat.SqoopHCatUtilities: Configuring HCatalog for export job
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,586 INFO hcat.SqoopHCatUtilities: Ignoring configuration request for HCatalog info
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,602 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,602 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
27-10-2022 11:02:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:24,602 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
27-10-2022 11:02:31 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:31,345 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to 18
27-10-2022 11:02:35 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:02:35,324 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: hdfs://hacluster/tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877
27-10-2022 11:02:54 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - Oct 27, 2022 11:02:54 AM mrs.shaded.provider.com.fasterxml.jackson.databind.ext.Java7Handlers <clinit>
27-10-2022 11:02:54 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - WARNING: Unable to load JDK7 types (java.nio.file.Path): no Java7 type support added
27-10-2022 11:03:06 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:03:06,867 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/javax.inject-2.4.0-b34.jar retrying...
27-10-2022 11:03:13 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:03:13,770 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/jetty-io-9.4.20.v20190813.jar retrying...
27-10-2022 11:03:29 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:03:29,342 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/jackson-module-paranamer-2.10.4.jar retrying...
27-10-2022 11:03:47 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:03:47,785 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/netty-all-4.1.48.Final.jar retrying...
27-10-2022 11:04:17 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:04:17,348 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/hive-llap-client-3.1.0-hw-ei-302023.jar retrying...
27-10-2022 11:04:26 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:04:26,813 INFO metastore.HiveMetaStoreClient: Closed a connection to metastore, current connections: 1
27-10-2022 11:04:27 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:04:27,984 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/tez-runtime-internals-0.9.2-hw-ei-302023.jar retrying...
27-10-2022 11:05:08 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:05:08,066 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/hadoop-distcp-3.1.1-hw-ei-302023.jar retrying...
27-10-2022 11:05:37 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:05:37,831 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/kerby-config-1.0.1.jar retrying...
27-10-2022 11:05:57 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:05:57,675 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/parquet-hadoop-bundle-1.11.0.jar retrying...
27-10-2022 11:06:17 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:06:17,991 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/hive-streaming-3.1.0-hw-ei-302023.jar retrying...
27-10-2022 11:07:20 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:07:20,009 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/jcommander-1.32.jar retrying...
27-10-2022 11:07:38 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:07:38,346 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/snappy-java-1.1.4.jar retrying...
27-10-2022 11:09:15 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:09:15,678 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/aos-plugin-3.1.0-hw-ei-302023.jar retrying...
27-10-2022 11:09:24 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:09:24,425 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/aos-plugin-3.1.0-hw-ei-302023.jar retrying...
27-10-2022 11:09:42 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:09:42,081 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/opencsv-2.3.jar retrying...
27-10-2022 11:09:42 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:09:42,913 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/opencsv-2.3.jar retrying...
27-10-2022 11:09:44 CST sqoop_export_ads_hive_grafana_maybe_new_reg_stat_1d INFO - 2022-10-27 11:09:44,519 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars/opencsv-2.3.jar retrying...
........................
Many people never pay much attention to the staging directory, since day-to-day jobs rarely require touching its configuration, but understanding it helps explain how a job actually runs. When we submit an MR or Spark job with hadoop jar or spark-submit, executors (for MapReduce, mappers and reducers) are launched on some of the cluster's compute nodes. Each executor is a JVM process, and starting a JVM process requires at least one jar, so where do those jars come from? This is exactly what the staging directory is for (some translate "staging" as the "stage" directory, which is apt: submitting a job means stepping onto the cluster's stage). At submission time, the relevant jars and configuration files are uploaded to the staging directory, and before each executor starts, they are downloaded from there to the local compute node.
Before looking at the fix, let's go over some staging-related configuration parameters; they help in understanding this problem.
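To see this in practice, while a job is still in its submission phase you can inspect its staging directory and list exactly which jars were uploaded. A minimal sketch, reusing the job ID from the log above (adjust the user and path to your environment):
# list the jars uploaded for this job (job ID taken from the log excerpt above)
hdfs dfs -ls /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877/libjars
# count how many files were uploaded for the job in total
hdfs dfs -count /tmp/hadoop-yarn/staging/root/.staging/job_1642060369182_4157877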
MapReduce staging configuration
- yarn.app.mapreduce.am.staging-dir: the staging directory for submitted MapReduce jobs; defaults to /tmp/hadoop-yarn/staging
- yarn.app.mapreduce.am.staging-dir.erasurecoding.enabled: whether files under the staging directory are stored with erasure coding; defaults to false and can be specified at job submission time
- mapreduce.client.submit.file.replication: replication factor for files uploaded to the staging directory; defaults to 10
- mapreduce.job.split.metainfo.maxsize: maximum size of the split metadata file; defaults to 10000000. If the file exceeds this size, the ApplicationMaster will not read it; setting it to -1 removes the limit. This file is usually small and rarely needs tuning.
Finally, why is the replication factor for these files 10? Isn't 3 replicas enough for data safety? This relates to resource localization, one of the steps of running an MR job; the staging directory's role was described above. When an MR job has hundreds or thousands of mappers, thousands of clients may download the same jar at the same time, so a higher replication factor speeds up resource localization. Conversely, for a small MR job such a high replication factor is just wasted space. This also reflects that MapReduce was designed with large-scale data in mind. For small jobs you can lower it at submission time, as sketched below.
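Sqoop accepts generic Hadoop -D options (they must come right after the tool name), so a small export can lower the staging replication factor at submit time. A minimal sketch; the connection string, table names, and credentials below are placeholders, not taken from this environment:
sqoop export \
  -Dmapreduce.client.submit.file.replication=3 \
  --connect jdbc:mysql://db-host:3306/report \
  --username exporter -P \
  --table ads_stat_1d \
  --hcatalog-database ads --hcatalog-table ads_stat_1d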
Spark staging configuration
- spark.yarn.submit.file.replication: replication factor for the files uploaded to HDFS at job submission; defaults to the HDFS default replication, usually 3
- spark.yarn.stagingDir: the staging directory used when submitting a job; defaults to the submitting user's home directory in the filesystem
- spark.yarn.preserve.staging.files: whether the staged files (Spark jar, app jar, distributed cache files) are kept after the job finishes; defaults to false, i.e. they are deleted when the job ends
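For Spark these are ordinary --conf settings on spark-submit. A minimal sketch; the staging path, main class, and application jar are hypothetical:
spark-submit \
  --master yarn \
  --conf spark.yarn.stagingDir=hdfs://hacluster/user/etl/.sparkStaging \
  --conf spark.yarn.submit.file.replication=3 \
  --class com.example.Job app.jar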
3. Solution
In Sqoop's conf directory, edit the sqoop-env.sh configuration file:
#Set the path to where bin/hive is available
export HIVE_HOME=
# Comment out the lines below and instead set HIVE_HOME to empty, as above
# export HIVE_HOME=/opt/client_new/Hive/install/hive-3.1.0
# export HIVE_CONF_DIR=/opt/client_new/Hive/config
# export HADOOP_CLASSPATH=/opt/client_new/HDFS/hadoop/lib/*
# export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
When Sqoop runs, it uploads every jar under HIVE_HOME/lib to the staging directory, more than 360 files in this environment.
If HIVE_HOME is not set in sqoop-env.sh, Sqoop falls back to the HIVE_HOME from the Linux environment and still uploads that lib directory, more than 100 files.
So we set HIVE_HOME to empty, and Sqoop no longer finds any libs to upload. The job will then fail because some required jars are missing, so we also need to copy those jars from /opt/client_new/Hive/install/hive-3.1.0/lib into Sqoop's lib directory.
# run from the Hive lib directory, which is the source of these jars
cd /opt/client_new/Hive/install/hive-3.1.0/lib
cp avatica-1.15.0.jar /data/dmp/sqoop-1.4.7/lib
cp calcite* /data/dmp/sqoop-1.4.7/lib
cp log* /data/dmp/sqoop-1.4.7/lib
cp jline-2.13-hw-aarch64.jar /data/dmp/sqoop-1.4.7/lib
cp hive-cli-3.1.0-hw-ei-302023.jar /data/dmp/sqoop-1.4.7/lib
cp libfb303-0.9.3.jar /data/dmp/sqoop-1.4.7/lib
cp hive-exec-3.1.0-hw-ei-302023.jar /data/dmp/sqoop-1.4.7/lib
cp antlr-runtime-3.5.2.jar /data/dmp/sqoop-1.4.7/lib
After copying the jars, resubmit the Sqoop job: it no longer uploads that long list of lib dependencies, and the job runs roughly 10x faster.