The flink-operator supports two configuration sources: the ConfigMap and spec.flinkConfiguration. Settings in spec.flinkConfiguration take precedence over the ConfigMap, changes take effect immediately, and by default the operator loads the configuration from the generated flink-config ConfigMap.
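As a minimal sketch of the higher-priority path, a setting can be overridden directly in the FlinkDeployment resource via spec.flinkConfiguration (the resource names and values below are illustrative, mirroring the basic-example used later in this page):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
  namespace: flink
spec:
  flinkVersion: v1_17
  flinkConfiguration:
    # Overrides the same key in the generated flink-config ConfigMap;
    # the operator reconciles the change immediately.
    taskmanager.numberOfTaskSlots: "2"
```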
Default Configuration #
The generated ConfigMap is named flink-config-&lt;resourceName&gt;. Below is an example:
➜ ~ kubectl get cm -n flink flink-config-basic-example -o yaml
apiVersion: v1
data:
  flink-conf.yaml: "blob.server.port: 6124\nkubernetes.jobmanager.annotations: flinkdeployment.flink.apache.org/generation:3\nkubernetes.jobmanager.replicas:
    1\njobmanager.rpc.address: basic-example.flink\nkubernetes.service-account: flink\nkubernetes.cluster-id:
    basic-example\nkubernetes.taskmanager.cpu.amount: 1.0\n$internal.application.program-args:
    \nparallelism.default: 2\nkubernetes.namespace: flink\ntaskmanager.numberOfTaskSlots:
    2\nkubernetes.rest-service.exposed.type: ClusterIP\nkubernetes.jobmanager.owner.reference:
    uid:0de2d2a7-2f1a-4595-add9-72a7f0bd8b9c,kind:FlinkDeployment,apiVersion:flink.apache.org/v1beta1,blockOwnerDeletion:true,controller:false,name:basic-example\nkubernetes.container.image.ref:
    harbor.hellobike.cn/kube-public/flink:1.17\ntaskmanager.memory.process.size: 2048m\nkubernetes.internal.jobmanager.entrypoint.class:
    org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint\npipeline.name:
    basic-example\nweb.cancel.enable: false\nexecution.target: kubernetes-application\nexecution.shutdown-on-application-finish:
    false\njobmanager.memory.process.size: 2048m\ntaskmanager.rpc.port: 6122\nkubernetes.jobmanager.cpu.amount:
    1.0\ninternal.cluster.execution-mode: NORMAL\n$internal.pipeline.job-id: 457164662f25d342a566635bb8cee7b6\nexecution.checkpointing.externalized-checkpoint-retention:
    RETAIN_ON_CANCELLATION\npipeline.jars: local:///opt/flink/examples/streaming/StateMachineExample.jar\n$internal.flink.version:
    v1_17\nkubernetes.pod-template-file.jobmanager: /tmp/flink_op_generated_podTemplate_6579629526146254494.yaml\n"
  log4j-console.properties: |
    ################################################################################
    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    ################################################################################
    # This affects logging for both user code and Flink
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = ConsoleAppender
    rootLogger.appenderRef.rolling.ref = RollingFileAppender
    # Uncomment this if you want to _only_ change Flink's logging
    #logger.flink.name = org.apache.flink
    #logger.flink.level = INFO
    # The following lines keep the log level of common libraries/connectors on
    # log level INFO. The root logger does not override this. You have to manually
    # change the log levels here.
    logger.akka.name = akka
    logger.akka.level = INFO
    logger.kafka.name = org.apache.kafka
    logger.kafka.level = INFO
    logger.hadoop.name = org.apache.hadoop
    logger.hadoop.level = INFO
    logger.zookeeper.name = org.apache.zookeeper
    logger.zookeeper.level = INFO
    # Log all infos to the console
    appender.console.name = ConsoleAppender
    appender.console.type = CONSOLE
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    # Log all infos in the given rolling file
    appender.rolling.name = RollingFileAppender
    appender.rolling.type = RollingFile
    appender.rolling.append = false
    appender.rolling.fileName = ${sys:log.file}
    appender.rolling.filePattern = ${sys:log.file}.%i
    appender.rolling.layout.type = PatternLayout
    appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    appender.rolling.policies.type = Policies
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 100MB
    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.max = 10
    # Suppress the irrelevant (wrong) warnings from the Netty channel handler
    logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
    logger.netty.level = OFF
    # Flink Deployment Logging Overrides
    # rootLogger.level = DEBUG
kind: ConfigMap
metadata:
  creationTimestamp: "2023-12-11T08:41:00Z"
  labels:
    app: basic-example
    type: flink-native-kubernetes
  name: flink-config-basic-example
  namespace: flink
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: basic-example
    uid: cebef99f-2260-49be-b825-eab0bf2dc680
  resourceVersion: "6363470"
  uid: 4bbeb9f0-7e1b-431a-b084-51702fcbfd41
For logging-related configuration, refer to the documentation page.
Flink Version and Namespace Specific Defaults #
# Flink Version specific defaults
kubernetes.operator.default-configuration.flink-version.v1_17.k1: v1
kubernetes.operator.default-configuration.flink-version.v1_17.k2: v2
kubernetes.operator.default-configuration.flink-version.v1_17.k3: v3
# Namespace specific defaults
kubernetes.operator.default-configuration.namespace.ns1.k1: v1
kubernetes.operator.default-configuration.namespace.ns1.k2: v2
kubernetes.operator.default-configuration.namespace.ns2.k1: v1
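These defaults are placed in the operator's own configuration, not in individual FlinkDeployment resources. A sketch of where they would go, assuming the Helm chart's defaultConfiguration values section (key names here follow the flink-kubernetes-operator chart, but verify against the chart version you deploy):

```yaml
defaultConfiguration:
  create: true
  # append: true merges these entries into the default operator config
  append: true
  flink-conf.yaml: |+
    # Flink Version specific defaults
    kubernetes.operator.default-configuration.flink-version.v1_17.k1: v1
    # Namespace specific defaults
    kubernetes.operator.default-configuration.namespace.ns1.k1: v1
```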
Dynamic Operator Configuration #
The flink-operator supports dynamic configuration changes through the ConfigMap. Dynamic configuration is enabled by default and can be disabled by setting kubernetes.operator.dynamic.config.enabled to false. The interval at which the operator checks for configuration changes can be set via kubernetes.operator.dynamic.config.check.interval; the default is 5 minutes.
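For instance, both settings can be adjusted in the operator's flink-conf.yaml (the interval value below is illustrative):

```yaml
# Disable dynamic config reloading entirely ...
kubernetes.operator.dynamic.config.enabled: false
# ... or keep it enabled and shorten the check interval instead:
# kubernetes.operator.dynamic.config.check.interval: 1 min
```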
Leader Election and High Availability #
The following two parameters configure whether leader election is enabled:
kubernetes.operator.leader-election.enabled: true
kubernetes.operator.leader-election.lease-name: flink-operator-lease
Environment Variables #
Environment variables for host_ip, pod_ip, and pod_name can be configured.
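A sketch of wiring these up through the Kubernetes downward API in a pod template (the environment variable names are illustrative; the fieldPath values are standard Kubernetes):

```yaml
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```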
Operator Configuration Reference #
System Configuration #
Resource/User Configuration #
Autoscaler Configuration #
System Metrics Configuration #
Advanced System Configuration #