
Flink

Preparation

1. Resources

jdk-8u151-linux-x64.tar.gz

flink-1.13.2-bin-scala_2.11.tgz (YARN cluster deployment)

hadoop-2.6.4.tar.gz (cluster deployment)

zookeeper-3.4.14.tar.gz (cluster deployment)

rsync-3.1.2-12.el7_9.x86_64.rpm

Baidu Netdisk, extraction code: asio

Aliyun Drive, extraction code: w7g0

2. Configure passwordless SSH
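Passwordless SSH between the three nodes is typically set up with ssh-keygen and ssh-copy-id; a minimal sketch, assuming the hostnames flink046–flink048 used throughout this post:

```shell
# run on each node: generate a key pair (no passphrase)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# copy the public key to every node, including the local one
for host in flink046 flink047 flink048; do
  ssh-copy-id "$host"
done
```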

3. Install xsync

1. vim xsync

(screenshot: contents of the xsync script)

# chmod 777 xsync
# mv xsync /usr/bin/
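The screenshot above held the script body; a common minimal version is sketched below (the host list flink046–flink048 is an assumption based on this cluster, and rsync must already be installed on every node):

```shell
#!/bin/bash
# xsync: rsync files or directories to every other node in the cluster

if [ $# -lt 1 ]; then
  echo "usage: xsync <file-or-dir>..."
  exit 1
fi

for host in flink046 flink047 flink048; do
  echo "==== $host ===="
  for file in "$@"; do
    if [ -e "$file" ]; then
      # resolve the absolute parent directory and base name
      pdir=$(cd -P "$(dirname "$file")" && pwd)
      fname=$(basename "$file")
      ssh "$host" "mkdir -p $pdir"
      rsync -av "$pdir/$fname" "$host:$pdir"
    else
      echo "$file does not exist!"
    fi
  done
done
```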

4. Configure hosts

(screenshot: /etc/hosts entries)
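The mapping itself was only in the screenshot; based on the IPs that appear in zoo.cfg later in this post, it is presumably:

```shell
# /etc/hosts on every node (IP-to-hostname mapping assumed from zoo.cfg below)
192.168.105.245 flink046
192.168.105.246 flink047
192.168.105.247 flink048
```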

5. Configure environment variables (append to /etc/profile and apply with source /etc/profile)

export JAVA_HOME=/usr/local/src/java
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/local/src/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_CLASSPATH=`hadoop classpath`
export ZK_HOME=/usr/local/src/zookeeper
export PATH=$ZK_HOME/bin:$PATH

hadoop

1. Edit the configuration files

# cd /usr/local/src/hadoop/etc/hadoop/
1. vim core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://flink046:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/src/hadoop/data/tmp</value>
</property>

2. vim hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>flink048:50090</value>
</property>

3. vim yarn-site.xml
 <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
 <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>flink047</value>
 </property>
 <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
 </property>
 <property>
    <name>yarn.log.server.url</name>
    <value>http://flink047:19888/jobhistory/logs</value>
 </property>
 <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
 </property>
Note for HA in YARN mode: the official documentation recommends that the following two settings must be added to yarn-site.xml:
<!-- maximum number of restart attempts for the master (JobManager) -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
  <description>
    The maximum number of application master execution attempts.
  </description>
</property>

<!-- Disable YARN memory checks -->
<!-- Controls whether a thread checks each task's memory usage and kills any task exceeding its allocation; defaults to true -->
<!-- Flink jobs on YARN easily exceed these limits and would be killed automatically, so the checks are disabled -->

<property>
   <name>yarn.nodemanager.pmem-check-enabled</name>
   <value>false</value>
</property>

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
</property>
4. vim mapred-site.xml
<property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
 </property>
 <property>
     <name>mapreduce.jobhistory.address</name>
     <value>flink047:10020</value>
 </property>
 <property>
     <name>mapreduce.jobhistory.webapp.address</name>
     <value>flink047:19888</value>
 </property>


5. vim slaves
flink046
flink047
flink048

6. Add the Java installation path to hadoop-env.sh, mapred-env.sh, and yarn-env.sh:
export JAVA_HOME=/usr/local/src/java

2. Start Hadoop

1. Sync the Hadoop directory to all nodes with xsync
2. Format the NameNode
./bin/hdfs namenode -format    # format the filesystem (run on the NameNode host only)
3. Start the cluster
./sbin/start-all.sh

Note: when reformatting, the data under hadoop.tmp.dir must be deleted on every node first, otherwise DataNodes will fail to register due to a clusterID mismatch.
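Once start-all.sh finishes, jps on each node should show the expected daemons. For the layout configured above (NameNode on flink046, ResourceManager on flink047, SecondaryNameNode on flink048) this looks roughly like the following; exact placement depends on where start-all.sh is run:

```shell
# on flink046
jps    # expect: NameNode, DataNode, NodeManager

# on flink047
jps    # expect: ResourceManager, DataNode, NodeManager

# on flink048
jps    # expect: SecondaryNameNode, DataNode, NodeManager
```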

3. Test

http://flink046:8088
http://flink046:50070

zookeeper

1. Configure systemctl

vim /usr/lib/systemd/system/zookeeper.service
[Unit]
# service description
Description=cosmo-bdp zookeeper
# start after the network is up
After=network.target
[Service]
Type=forking
# JDK environment variables
Environment=JAVA_HOME=/usr/local/src/java ZOO_LOG_DIR=/opt/logs
# start command
ExecStart=/usr/local/src/zookeeper/bin/zkServer.sh start
# stop command
ExecStop=/usr/local/src/zookeeper/bin/zkServer.sh stop
# reload command
ExecReload=/usr/local/src/zookeeper/bin/zkServer.sh restart
[Install]
WantedBy=multi-user.target
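After creating the unit file, systemd needs to reload its configuration, and the service can be enabled to start at boot:

```shell
systemctl daemon-reload
systemctl enable zookeeper
```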

2. Edit the configuration file

# vim zoo.cfg
tickTime=2000
dataDir=/var/run/zookeeper/data
dataLogDir=/var/run/zookeeper/log
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.105.245:2888:3888
server.2=192.168.105.246:2888:3888
server.3=192.168.105.247:2888:3888
Create the myid file on each server, matching the server.N lines above (write 1 on 192.168.105.245, 2 on .246, 3 on .247):
vim /var/run/zookeeper/data/myid
systemctl start zookeeper
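Once all three nodes are started, the ensemble state can be checked on each node (assuming $ZK_HOME/bin is on PATH, as configured earlier):

```shell
zkServer.sh status
# one node should report "Mode: leader", the other two "Mode: follower"
```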

flink

1. Edit flink-conf.yaml

state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://192.168.105.245:9000/flink-checkpoints
state.savepoints.dir: hdfs://192.168.105.245:9000/flink-savepoints
high-availability: zookeeper
high-availability.storageDir: hdfs://192.168.105.245:9000/flink/ha/
high-availability.zookeeper.quorum: 192.168.105.245:2181,192.168.105.246:2181,192.168.105.247:2181
high-availability.zookeeper.client.acl: open

# number of restart attempts when a submitted job fails
yarn.application-attempts: 4

# spread tasks evenly across the available TaskManager slots
cluster.evenly-spread-out-slots: true
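Before starting, it may help to confirm that HDFS is reachable and pre-create the state directories referenced above (Flink also creates them on demand, so this step is optional):

```shell
hdfs dfs -mkdir -p /flink-checkpoints /flink-savepoints /flink/ha
hdfs dfs -ls /
```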

2. Test

1. Session mode test
# run on the master node
bin/yarn-session.sh -d -jm 1024 -tm 1024 -s 1

# -jm  memory size of the JobManager (MB)
# -tm  memory size of each TaskManager (MB)
# -s   number of slots per TaskManager
# -d   run detached, as a background process

(screenshot: yarn-session startup output)

# upload the input file
hadoop fs -put wordcount.txt /
# run the WordCount example
./flink run ../examples/batch/WordCount.jar --input hdfs://192.168.105.245:9000/wordcount.txt

2. Per-job mode test
./flink run -t yarn-per-job --detached ../examples/batch/WordCount.jar --input hdfs://192.168.105.245:9000/wordcount.txt

Stop the test application
yarn application -kill application_1693889207648_0001
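The application id passed to -kill (the one above is just an example from this run) can be looked up with:

```shell
yarn application -list
```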