Deploying the Flume Component


This article uses Hadoop 3.2.2 and Flume 1.9.0 as examples.

Unless otherwise specified, run the commands below on all nodes.

I. System Resources and Component Planning

| Node | Hostname | CPU/Memory | NIC | Disk | IP Address | OS |
| --- | --- | --- | --- | --- | --- | --- |
| NameNode | namenode | 2C/4G | ens33 | 128G | 192.168.0.11 | CentOS7 |
| Secondary NameNode | secondarynamenode | 2C/4G | ens33 | 128G | 192.168.0.12 | CentOS7 |
| ResourceManager | resourcemanager | 2C/4G | ens33 | 128G | 192.168.0.13 | CentOS7 |
| Worker1 | worker1 | 2C/4G | ens33 | 128G | 192.168.0.21 | CentOS7 |
| Worker2 | worker2 | 2C/4G | ens33 | 128G | 192.168.0.22 | CentOS7 |
| Worker3 | worker3 | 2C/4G | ens33 | 128G | 192.168.0.23 | CentOS7 |

The Flume component is deployed on the Worker nodes.

II. Set Up the Hadoop Cluster

The steps for building a fully distributed Hadoop cluster are omitted here; see:

juejin.cn/post/709125…

III. Deploy the Flume Component

1. Install Flume

Download the Flume package:

Reference: flume.apache.org/download.ht…
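If you prefer to download from the command line, the tarball is usually available from the Apache archive; the exact URL below is an assumption based on the standard archive layout, and the /root target path matches the extract step that follows:

# download the Flume 1.9.0 binary tarball to /root (URL assumed from the Apache archive layout)
wget -P /root https://archive.apache.org/dist/flume/1.9.0/apache-flume-1.9.0-bin.tar.gz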

 

On the Worker nodes (the data collection nodes), extract the Flume installation package:

tar -xf /root/apache-flume-1.9.0-bin.tar.gz -C /usr/local/


Set the environment variable:

export PATH=$PATH:/usr/local/apache-flume-1.9.0-bin/bin/


Add the environment variable to the /etc/profile file:

PATH=$PATH:/usr/local/apache-flume-1.9.0-bin/bin/
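For example, the line can be appended and the profile reloaded as follows (a minimal sketch):

# append the Flume bin directory to PATH and reload the profile
echo 'export PATH=$PATH:/usr/local/apache-flume-1.9.0-bin/bin/' >> /etc/profile
source /etc/profile
# quick sanity check that the flume-ng command is now on PATH
flume-ng version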


2. Configure Flume to Collect a Directory into HDFS

On the Worker nodes (the data collection nodes), create the flume-env.sh file:

cat > /usr/local/apache-flume-1.9.0-bin/conf/flume-env.sh << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_291/
EOF


On the Worker nodes (the data collection nodes), replace Flume's bundled guava jar with the newer one shipped with Hadoop, to avoid the conflict caused by Flume's older guava version:

cp /usr/local/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /usr/local/apache-flume-1.9.0-bin/lib/
rm -rf /usr/local/apache-flume-1.9.0-bin/lib/guava-11.0.2.jar
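An optional quick check that only the newer guava jar remains in the Flume lib directory:

# only guava-27.0-jre.jar should be listed
ls /usr/local/apache-flume-1.9.0-bin/lib/ | grep guava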


On the Worker nodes (the data collection nodes), create the Flume configuration file:

cat > /usr/local/apache-flume-1.9.0-bin/conf/dir-hdfs.conf << EOF
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1

ag1.sources.source1.type = spooldir
ag1.sources.source1.spoolDir = /file
ag1.sources.source1.fileHeader = true
ag1.sources.source1.deserializer.maxLineLength = 5120

ag1.sinks.sink1.type = hdfs
ag1.sinks.sink1.hdfs.path = hdfs://namenode:9000/file/%y-%m-%d/%H-%M
ag1.sinks.sink1.hdfs.filePrefix = file
ag1.sinks.sink1.hdfs.fileSuffix = .file
ag1.sinks.sink1.hdfs.batchSize = 100
ag1.sinks.sink1.hdfs.fileType = DataStream
ag1.sinks.sink1.hdfs.writeFormat = Text
ag1.sinks.sink1.hdfs.rollSize = 512000
ag1.sinks.sink1.hdfs.rollCount = 1000000
ag1.sinks.sink1.hdfs.rollInterval = 60
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true

ag1.channels.channel1.type = memory
ag1.channels.channel1.capacity = 500000
ag1.channels.channel1.transactionCapacity = 600

ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
EOF


The hdfs.path value must match the fs.defaultFS parameter in core-site.xml.
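To confirm the value, the parameter can be read straight from core-site.xml, for example:

# the URI shown here must match the hdfs://namenode:9000 prefix used in hdfs.path
grep -A 1 'fs.defaultFS' /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml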

 

On the Worker nodes (the data collection nodes), copy the Hadoop configuration files into the Flume conf directory:

cp /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
cp /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml /usr/local/apache-flume-1.9.0-bin/conf/


On the Worker nodes (the data collection nodes), run Flume in the background:

nohup flume-ng agent -c /usr/local/apache-flume-1.9.0-bin/conf/ -f /usr/local/apache-flume-1.9.0-bin/conf/dir-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console &
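To confirm the agent started, you can check the background process and its log output (a sketch; nohup.out is written to the directory the command was launched from):

# the Flume agent runs as the "Application" JVM process
jps | grep Application
# watch the agent output for errors
tail -f nohup.out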


3. Demonstrate Collecting a Directory into HDFS

On the Worker nodes (the data collection nodes), create the file file1 under the /file directory:

mkdir /file
touch /file/file1
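Note that the spooling-directory source expects /file to exist when the agent starts; if the agent was launched before the directory was created, restart it after running mkdir. To give Flume non-empty data to collect, a file with actual content can also be dropped into the directory (file2 below is just an illustrative name):

# any new file placed in /file is picked up and renamed with a .COMPLETED suffix once processed
echo "hello flume" > /file/file2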


Verify the Flume data collection:

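For example, the sink output can be listed with the HDFS client (a sketch; the date/time subdirectories depend on when the events were written):

# processed source files in the spooling directory are renamed with a .COMPLETED suffix
ls /file
# list the files Flume wrote to HDFS
hdfs dfs -ls -R /file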

4. Configure Flume to Collect a File into HDFS

On the Worker nodes (the data collection nodes), install and start a web service:
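The original does not show the install command. Since the exec source below tails /var/log/httpd/access_log, Apache httpd on CentOS 7 is a reasonable assumption:

# assumption: Apache httpd, whose default access log on CentOS 7 is /var/log/httpd/access_log
yum -y install httpd
systemctl start httpd
systemctl enable httpd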

 

On the Worker nodes (the data collection nodes), create the flume-env.sh file (skip if already done in step 2):

cat > /usr/local/apache-flume-1.9.0-bin/conf/flume-env.sh << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_291/
EOF


On the Worker nodes (the data collection nodes), replace the guava jar to avoid the conflict caused by Flume's older guava version (skip if already done in step 2):

cp /usr/local/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /usr/local/apache-flume-1.9.0-bin/lib/
rm -rf /usr/local/apache-flume-1.9.0-bin/lib/guava-11.0.2.jar


On the Worker nodes (the data collection nodes), create the Flume configuration file:

cat > /usr/local/apache-flume-1.9.0-bin/conf/tail-hdfs.conf << EOF
ag2.sources = source2
ag2.sinks = sink2
ag2.channels = channel2

ag2.sources.source2.type = exec
ag2.sources.source2.command = tail -F /var/log/httpd/access_log

ag2.sinks.sink2.type = hdfs
ag2.sinks.sink2.hdfs.path = hdfs://namenode:9000/access_log/%y-%m-%d/%H-%M
ag2.sinks.sink2.hdfs.filePrefix = log
ag2.sinks.sink2.hdfs.fileSuffix = .log
ag2.sinks.sink2.hdfs.batchSize = 100
ag2.sinks.sink2.hdfs.fileType = DataStream
ag2.sinks.sink2.hdfs.writeFormat = Text
ag2.sinks.sink2.hdfs.rollSize = 512000
ag2.sinks.sink2.hdfs.rollCount = 1000000
ag2.sinks.sink2.hdfs.rollInterval = 60
ag2.sinks.sink2.hdfs.round = true
ag2.sinks.sink2.hdfs.roundValue = 10
ag2.sinks.sink2.hdfs.roundUnit = minute
ag2.sinks.sink2.hdfs.useLocalTimeStamp = true

ag2.channels.channel2.type = memory
ag2.channels.channel2.capacity = 500000
ag2.channels.channel2.transactionCapacity = 600

ag2.sources.source2.channels = channel2
ag2.sinks.sink2.channel = channel2
EOF


The hdfs.path value must match the fs.defaultFS parameter in core-site.xml.

 

On the Worker nodes (the data collection nodes), copy the Hadoop configuration files into the Flume conf directory (skip if already done in step 2):

cp /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
cp /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml /usr/local/apache-flume-1.9.0-bin/conf/


On the Worker nodes (the data collection nodes), run Flume in the background:

nohup flume-ng agent -c /usr/local/apache-flume-1.9.0-bin/conf/ -f /usr/local/apache-flume-1.9.0-bin/conf/tail-hdfs.conf -n ag2 -Dflume.root.logger=INFO,console &


5. Demonstrate Collecting a File into HDFS

Access the web service on the Worker nodes to generate new entries in the access log:
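For example, requests can be generated with curl and the result checked in HDFS (worker1 is used here as an example hostname from the planning table):

# generate access-log entries on the web server
curl -s http://worker1/ > /dev/null
# list the files Flume wrote to HDFS
hdfs dfs -ls -R /access_log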
