Kafka broker configs list


Background

While maintaining a Kafka cluster, I found that some of our earlier configuration choices were not very sensible, mainly because I did not know the Kafka broker configuration options well. To strengthen my understanding of this area, and to maintain the Kafka cluster better, I reviewed the broker configuration options of kafka-1.0.1.

Configuration options

1. Three parameters must be configured:

	broker.id
	log.dirs
	zookeeper.connect
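
A minimal server.properties carrying just these three settings might look like this (all values below are placeholders):

	# server.properties: minimal broker configuration
	broker.id=0
	log.dirs=/data/kafka-logs
	zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka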

2. The broker-level configuration parameters and their default values are listed below:

The three values in the Dynamic Update Mode column have the following meanings:

> read-only : the broker must be restarted for an updated value to take effect.
> per-broker : can be updated dynamically for each broker.
> cluster-wide : can be updated dynamically as a cluster-wide default, and can also be updated as a per-broker value for testing.

**Below is the list of broker configs:**

name description type default valid values importance dynamic update mode
zookeeper.connect The ZooKeeper connection string (list of ZooKeeper hosts). String high read-only
advertised.host.name DEPRECATED: use 'advertised.listeners' instead. The hostname to publish to ZooKeeper for clients to use. String null high read-only
advertised.listeners Listeners to publish to ZooKeeper for clients to use, if different from the listeners above. In IaaS environments, this may need to differ from the interface the broker binds to. If not set, the value of listeners is used. String null high per-broker
advertised.port DEPRECATED: only used when 'advertised.listeners' or 'listeners' are not set; use 'advertised.listeners' instead. The port to publish to ZooKeeper for clients to use. int null high read-only
auto.create.topics.enable Whether to allow automatic topic creation on the broker. boolean true high read-only
auto.leader.rebalance.enable Whether to enable automatic leader rebalancing (preferred-leader election). boolean true high read-only
If set to true, a background thread checks for and triggers leader rebalance as needed.
background.threads The number of threads used for various background processing tasks. int 10 [1,...] high cluster-wide
broker.id The broker id for this server. If not set, a unique broker id is generated automatically. int -1 high read-only
To avoid clashes between ZooKeeper-generated ids and user-configured ids, generated broker ids start from reserved.broker.max.id + 1.
compression.type The final compression type for a given topic; accepts 'gzip', 'snappy', 'lz4'. String producer high cluster-wide
'uncompressed' means no compression is applied;
'producer' means the compression codec set by the producer is retained.
delete.topic.enable Enables topic deletion. boolean true high read-only
If this is turned off, deleting a topic through the admin tool has no effect.
host.name DEPRECATED: only used when 'listeners' is not set. String "" high read-only
leader.imbalance.check.interval.seconds How often the controller checks whether a partition rebalance is needed. long 300 high read-only
leader.imbalance.per.broker.percentage The leader imbalance ratio allowed per broker. int 10 high read-only
The controller triggers a leader rebalance for a broker whose imbalance goes above this value.
The value is a percentage: 10 <==> 10%.
It is computed as: (number of partitions whose leader is not the preferred leader) / (total number of partitions in the broker's AR lists).
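
For example (illustrative numbers): a broker hosting 40 partitions, 6 of which are currently led by a replica other than the preferred leader, has an imbalance of 6 / 40 = 15%, which exceeds the default 10% threshold, so the controller triggers a preferred-leader election for it.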
listeners The listener list: a comma-separated list of URIs to listen on, with their listener names. If a listener name is not a security protocol, listener.security.protocol.map must also be set. String null high per-broker
Specify hostname 0.0.0.0 to bind to all interfaces; leave the hostname empty to bind to the default interface.
Examples of legal listener lists:
PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
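
A sketch of how listeners and advertised.listeners combine on a broker behind NAT (all addresses below are placeholders):

	# bind on every local interface ...
	listeners=PLAINTEXT://0.0.0.0:9092
	# ... but publish the externally reachable address to ZooKeeper for clients
	advertised.listeners=PLAINTEXT://broker1.example.com:9092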
log.dir The directory in which log data is kept (supplemental to the log.dirs property). string /tmp/kafka-logs high read-only
log.dirs The directories in which log data is kept; if not set, the value of log.dir is used. String null high read-only
log.flush.interval.messages The maximum number of messages accumulated on a log partition before the messages are flushed to disk. long 9223372036854775807 high cluster-wide
log.flush.interval.ms The maximum time a message is kept in memory before being flushed to disk. If not set, the value of log.flush.scheduler.interval.ms is used. long null high cluster-wide
log.flush.offset.checkpoint.interval.ms How often the checkpoint file that acts as the log recovery point is updated. int 60000 [0,...] high read-only
log.flush.scheduler.interval.ms How often the log flusher checks whether any log needs to be flushed to disk. long 9223372036854775807 high read-only
log.flush.start.offset.checkpoint.interval.ms How often the persistent record of log start offsets is updated. int 60000 [0,...] high read-only
log.retention.bytes The maximum size the log may reach before being deleted. long -1 high cluster-wide
log.retention.hours Log retention time, in hours. int 168 high read-only
Lower priority than log.retention.ms.
log.retention.minutes Log retention time, in minutes. int null high read-only
If not set, the value of log.retention.hours is used.
Lower priority than log.retention.ms.
log.retention.ms Log retention time, in ms. long null high cluster-wide
If not set, the value of log.retention.minutes is used.
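
The precedence among the three retention settings, as a sketch with illustrative values:

	# Priority: log.retention.ms > log.retention.minutes > log.retention.hours
	log.retention.hours=168        # ignored: a higher-priority setting is present
	log.retention.minutes=4320     # ignored for the same reason
	log.retention.ms=172800000     # wins: logs are retained for 2 days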
log.roll.hours The maximum time before a new log segment is rolled out, in hours. int 168 [1,...] high read-only
Lower priority than log.roll.ms.
log.roll.ms The maximum time before a new log segment is rolled out, in ms. long null high cluster-wide
If not set, log.roll.hours is used.
log.roll.jitter.hours The maximum jitter subtracted from logRollTimeMillis, in hours. int 0 [0,...] high read-only
Secondary to the log.roll.jitter.ms property.
log.roll.jitter.ms The maximum jitter subtracted from logRollTimeMillis, in ms. long null high cluster-wide
log.segment.bytes The maximum size of a single log segment. int 1073741824 (1 GB) [14,...] high cluster-wide
log.segment.delete.delay.ms The delay before a file is deleted from the filesystem, in ms. long 60000 [0,...] high cluster-wide
message.max.bytes The largest record batch size Kafka allows. int 1000012 [0,...] high cluster-wide
If this is increased and consumers older than 0.10.2 exist, the consumers' fetch size must also be increased so that they can fetch record batches this large.
In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches, and in that case this limit only applies to a single record.
This can also be overridden per topic (via the topic-level max.message.bytes config).
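
For example, raising the limit for a single topic (a sketch: my-topic and the ZooKeeper address are placeholders; in this Kafka version, topic-level overrides are altered via --zookeeper rather than --bootstrap-server):

> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --alter --add-config max.message.bytes=2000000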
min.insync.replicas When a producer sets acks to 'all' (or '-1'), this is the minimum number of replicas that must acknowledge a write for it to be considered successful. int 1 [1,...] high cluster-wide
It works together with the producer's acks option (-1/all, 0, 1):
acks=-1/all:
a send succeeds only after all replicas in the ISR have acknowledged the write.
acks=0:
the producer treats every send as successful without waiting for any acknowledgement; maximum throughput, not safe.
acks=1:
a successful write on the leader replica is enough.
min.insync.replicas=2, 3, 4, ... k ...:
k replicas must be in sync for writes to succeed; if k exceeds the number of replicas, requests fail with NotEnoughReplicas or NotEnoughReplicasAfterAppend.
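
A common durability setup, as a sketch (values and placement are illustrative): a topic with replication factor 3, min.insync.replicas=2 on the broker (or topic), and acks=all on the producer tolerates the loss of one replica without losing acknowledged writes:

	# broker-side (or topic-level) setting
	min.insync.replicas=2
	# producer-side setting
	acks=all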
num.io.threads The number of threads the server uses for processing requests, which may include disk I/O. int 8 [1,...] high cluster-wide
num.network.threads The number of threads the server uses for receiving and sending network requests. int 3 [1,...] high cluster-wide
num.recovery.threads.per.data.dir The number of threads per data directory used for log recovery at startup and for flushing at shutdown. int 1 [1,...] high cluster-wide
num.replica.alter.log.dirs.threads The number of threads that move replicas between log directories, which may include disk I/O. int null high read-only
num.replica.fetchers The number of fetcher threads used to replicate messages from a source broker. int 1 high cluster-wide
Increasing this value can increase the degree of I/O parallelism in the follower broker.
offset.metadata.max.bytes The maximum size of a metadata entry associated with an offset commit. int 4096 high read-only
offsets.commit.required.acks The acks required before an offset commit can be accepted. In general, the default of -1 should not be overridden. short -1 high read-only
offsets.commit.timeout.ms The maximum time to wait for an offset commit. int 5000 [1,...] high read-only
The commit is delayed until all replicas of the offsets topic have received it, or until this timeout is reached. It is similar to the producer request timeout.
offsets.load.buffer.size Batch size for reading from the offsets segments when loading offsets into the cache. int 5242880 [1,...] high read-only
offsets.retention.check.interval.ms How often to check for stale offsets. long 600000 [1,...] high read-only
offsets.retention.minutes Offsets retained longer than this are discarded. int 1440 [1,...] high read-only
offsets.topic.compression.codec Compression codec for the offsets topic; compression can be used to achieve atomic commits. int 0 high read-only
offsets.topic.num.partitions The number of partitions for the offsets topic. int 50 [1,...] high read-only
offsets.topic.replication.factor The replication factor for the offsets topic. short 3 [1,...] high read-only
offsets.topic.segment.bytes The segment size for the offsets topic. int 104857600 [1,...] high read-only
This should be kept relatively small to achieve faster log compaction and cache loading.
port DEPRECATED: the port to listen and accept connections on. int 9092 high read-only
queued.max.requests The number of queued requests allowed before the network threads are blocked. int 500 [1,...] high read-only
quota.consumer.default DEPRECATED: only used when dynamic default quotas are not configured in ZooKeeper. Any consumer distinguished by clientId/consumer group is throttled if it fetches more bytes per second than this value. long 9223372036854775807 [1,...] high read-only
quota.producer.default DEPRECATED: only used when dynamic default quotas are not configured in ZooKeeper. Any producer distinguished by clientId is throttled if it produces more bytes per second than this value. long 9223372036854775807 [1,...] high read-only
replica.fetch.min.bytes The minimum number of bytes expected for each fetch response; if not enough bytes are available, wait up to replicaMaxWaitTimeMs. int 1 high read-only
replica.fetch.wait.max.ms The maximum wait time for each fetcher request issued by follower brokers. int 500 high read-only
This should always be less than replica.lag.time.max.ms, to prevent frequent ISR shrinking for low-throughput topics.
replica.high.watermark.checkpoint.interval.ms How often the high watermark (HW) is saved to disk. long 5000 high read-only
replica.lag.time.max.ms If a follower has not sent any fetch requests, or has not caught up to the leader's latest log offset, for at least this long, the leader removes that follower from the ISR. long 10000 high read-only
replica.socket.receive.buffer.bytes The socket receive buffer for network requests. int 65536 high read-only
replica.socket.timeout.ms The socket timeout for network requests; it should be at least replica.fetch.wait.max.ms. int 30000 high read-only
request.timeout.ms Controls the maximum amount of time a client waits for the response to a request. int 30000 high read-only
If the response is not received before the timeout elapses, the client resends the request if necessary, or fails the request once retries are exhausted.
socket.receive.buffer.bytes The socket receive buffer; if -1, the OS default is used. int 102400 high read-only
socket.request.max.bytes The maximum number of bytes in a socket request. int 104857600 [1,...] high read-only
socket.send.buffer.bytes The socket send buffer; if -1, the OS default is used. int 102400 high read-only
transaction.max.timeout.ms The maximum allowed timeout for transactions. int 900000 [1,...] high read-only
If a client requests a transaction time larger than this, the broker returns an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which could stall consumers reading from topics included in the transaction.
transaction.state.log.load.buffer.size Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache. int 5242880 [1,...] high read-only
transaction.state.log.min.isr Overridden min.insync.replicas config for the transaction topic. int 2 [1,...] high read-only
transaction.state.log.num.partitions The number of partitions for the transaction topic (should not change after deployment). int 50 [1,...] high read-only
transaction.state.log.replication.factor The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. short 3 [1,...] high read-only
transaction.state.log.segment.bytes The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. int 104857600 [1,...] high read-only
transactional.id.expiration.ms The maximum amount of time, in ms, that the transaction coordinator will wait before proactively expiring a producer's transactional id if it has received no transaction status updates from it. int 604800000 [1,...] high read-only
unclean.leader.election.enable Indicates whether replicas not in the ISR set may be elected as leader as a last resort, even though doing so may result in data loss. boolean false high cluster-wide
zookeeper.connection.timeout.ms The max time that the client waits to establish a connection to ZooKeeper. If not set, the value of zookeeper.session.timeout.ms is used. int null high read-only
zookeeper.max.in.flight.requests The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. int 10 [1,...] high read-only
zookeeper.session.timeout.ms ZooKeeper session timeout. int 6000 high read-only
zookeeper.set.acl Set the client to use secure ACLs. boolean false high read-only
broker.id.generation.enable Enable automatic broker id generation on the server. When enabled, the value configured for reserved.broker.max.id should be reviewed. boolean true medium read-only
broker.rack Rack of the broker. This is used in rack-aware replica assignment for fault tolerance. Examples: RACK1, us-east-1d. string null medium read-only
connections.max.idle.ms Idle connection timeout: the server socket processor threads close connections that have been idle longer than this. long 600000 medium read-only
controlled.shutdown.enable Enable controlled shutdown of the server. boolean true medium read-only
controlled.shutdown.max.retries Controlled shutdown can fail for multiple reasons; this determines the number of retries when such a failure happens. int 3 medium read-only
controlled.shutdown.retry.backoff.ms Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This determines the amount of time to wait before retrying. long 5000 medium read-only
controller.socket.timeout.ms The socket timeout for controller-to-broker channels. int 30000 medium read-only
default.replication.factor The default replication factor for automatically created topics. int 1 medium read-only
delegation.token.expiry.time.ms The token validity time, in ms, before the token needs to be renewed. Default value: 1 day. long 86400000 [1,...] medium read-only
delegation.token.master.key The master/secret key used to generate and verify delegation tokens. The same key must be configured across all brokers. If the key is not set, or is set to an empty string, brokers disable delegation token support. password null medium read-only
delegation.token.max.lifetime.ms The maximum lifetime of a token, beyond which it can no longer be renewed. Default value: 7 days. long 604800000 [1,...] medium read-only
delete.records.purgatory.purge.interval.requests The purge interval (in number of requests, not a time interval) of the delete-records request purgatory. int 1 medium read-only
fetch.purgatory.purge.interval.requests The purge interval (in number of requests, not a time interval) of the fetch request purgatory. int 1000 medium read-only
group.initial.rebalance.delay.ms The maximum amount of time the group coordinator waits for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. int 3000 medium read-only
group.max.session.timeout.ms The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats, at the cost of a longer time to detect failures. int 300000 medium read-only
group.min.session.timeout.ms The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection, at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. int 6000 medium read-only
inter.broker.listener.name Name of the listener used for communication between brokers. If unset, the listener name is derived from security.inter.broker.protocol. It is an error to set both this and security.inter.broker.protocol at the same time. string null medium read-only
inter.broker.protocol.version Specifies which version of the inter-broker protocol to use. This is typically bumped after all brokers have been upgraded to a new version. Examples of valid values: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1; check ApiVersion for the full list. string 1.1-IV0 medium read-only
log.cleaner.backoff.ms The amount of time the log cleaner sleeps when there are no logs to clean. long 15000 [0,...] medium cluster-wide
log.cleaner.dedupe.buffer.size The total memory used for log deduplication across all cleaner threads. long 134217728 medium cluster-wide
log.cleaner.delete.retention.ms How long delete records are retained. long 86400000 medium cluster-wide
log.cleaner.enable Enable the log cleaner process to run on the server. boolean true medium read-only
This should be enabled if any topic uses cleanup.policy=compact, including the internal offsets topic; if disabled, those topics will not be compacted and will continually grow in size.
log.cleaner.io.buffer.load.factor Log cleaner dedupe buffer load factor: the percentage full the dedupe buffer can become. A higher value allows more log to be cleaned at once but leads to more hash collisions. double 0.9 medium cluster-wide
log.cleaner.io.buffer.size The total memory used for log cleaner I/O buffers across all cleaner threads. int 524288 [0,...] medium cluster-wide
log.cleaner.io.max.bytes.per.second The log cleaner is throttled so that the sum of its read and write I/O is, on average, less than this value. double 1.7976931348623157E308 medium cluster-wide
log.cleaner.min.cleanable.ratio The minimum ratio of dirty log to total log for a log to be eligible for cleaning. double 0.5 medium cluster-wide
log.cleaner.min.compaction.lag.ms The minimum time a message remains uncompacted in the log. Only applicable to logs that are being compacted. long 0 medium cluster-wide
log.cleaner.threads The number of background threads used for log cleaning. int 1 [0,...] medium cluster-wide
log.cleanup.policy The default cleanup policy for segments beyond the retention window; a comma-separated list of valid policies. Valid policies are "delete" and "compact". list delete [compact, delete] medium cluster-wide
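
For example, switching a single topic to compaction (a sketch: my-compacted-topic and the ZooKeeper address are placeholders; in this Kafka version, topic-level overrides are altered via --zookeeper):

> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-compacted-topic --alter --add-config cleanup.policy=compact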
log.index.interval.bytes The interval (in bytes, not number of messages) at which an entry is added to the offset index. int 4096 [0,...] medium cluster-wide
log.index.size.max.bytes The maximum size in bytes of the offset index file. int 10485760 [4,...] medium cluster-wide
log.message.format.version Specifies the message format version the broker uses to append messages to the logs. string 1.1-IV0 medium read-only
The value must be a valid ApiVersion, e.g. 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details.
By setting a particular message format version, the user certifies that all existing messages on disk are at or below that version. Setting this value incorrectly will break consumers on older versions, since they will receive messages in a format they do not understand.
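
These two settings are commonly adjusted together during a rolling upgrade; a sketch with illustrative versions:

	# While rolling out new binaries, pin both to the old version:
	inter.broker.protocol.version=0.11.0
	log.message.format.version=0.11.0
	# Once every broker runs the new version, bump the protocol first,
	# then the message format, each with its own rolling restart:
	# inter.broker.protocol.version=1.1
	# log.message.format.version=1.1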
log.message.timestamp.difference.max.ms The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. long 9223372036854775807 medium cluster-wide
If log.message.timestamp.type=CreateTime, a message is rejected when the timestamp difference exceeds this threshold.
If log.message.timestamp.type=LogAppendTime, this configuration is ignored.
The value should be no greater than log.retention.ms, to avoid unnecessarily frequent log rolling.
log.message.timestamp.type Defines whether the timestamp in the message is the message create time or the log append time. The value should be either CreateTime or LogAppendTime; the default is CreateTime. string CreateTime [CreateTime, LogAppendTime] medium cluster-wide
log.preallocate Whether to preallocate the file when creating a new segment. If you are running Kafka on Windows, you probably need to set this to true. boolean false medium cluster-wide
log.retention.check.interval.ms The frequency, in milliseconds, with which the log cleaner checks whether any log is eligible for deletion. long 300000 [1,...] medium read-only
max.connections.per.ip The maximum number of connections allowed from each IP address. int 2147483647 [1,...] medium read-only
max.connections.per.ip.overrides Per-IP or per-hostname overrides of the default maximum number of connections. string "" medium read-only
max.incremental.fetch.session.cache.slots The maximum number of incremental fetch sessions that will be maintained. int 1000 [0,...] medium read-only
num.partitions The default number of log partitions per topic. int 1 [1,...] medium read-only
password.encoder.old.secret The old secret that was used for encoding dynamically configured passwords. Only required when the secret is being updated. password null medium read-only
If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when the broker starts up.
password.encoder.secret The secret used for encoding dynamically configured passwords for this broker. password null medium read-only
principal.builder.class The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, used to build the KafkaPrincipal object used during authorization. class null medium per-broker
This config also supports the deprecated PrincipalBuilder interface previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use: for SSL authentication, the principal name is the distinguished name from the client certificate if one is provided, otherwise ANONYMOUS; for SASL authentication, the principal is derived from the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and from the SASL authentication ID for other mechanisms; for PLAINTEXT, the principal is ANONYMOUS.
producer.purgatory.purge.interval.requests The purge interval (in number of requests, not a time interval) of the producer request purgatory. int 1000 medium read-only
queued.max.request.bytes The number of queued bytes allowed before no more requests are read. long -1 medium read-only
replica.fetch.backoff.ms The amount of time to sleep when a fetch-partition error occurs. int 1000 [0,...] medium read-only
replica.fetch.max.bytes The number of bytes of messages to attempt to fetch for each partition. int 1048576 [0,...] medium read-only
This is not an absolute maximum: if the first record batch in the first non-empty partition of the fetch is larger than this value, that batch is still returned to ensure that progress can be made.
The maximum record batch size the broker accepts is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
replica.fetch.response.max.bytes Maximum bytes expected for the entire fetch response. int 10485760 [0,...] medium read-only
Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, it is still returned to ensure that progress can be made; as such, this is not an absolute maximum. The maximum record batch size the broker accepts is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
reserved.broker.max.id The maximum number that can be used for a broker.id. int 1000 [0,...] medium read-only
sasl.enabled.mechanisms The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. list GSSAPI medium per-broker
sasl.jaas.config JAAS login context parameters for SASL connections, in the format used by JAAS configuration files. The format of the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. password null medium per-broker
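
A sketch of what such a value might look like on a broker using the PLAIN mechanism (the listener name, username, and password are placeholders; brokers prefix the property with listener.name.<listener>.<mechanism>.):

	listener.name.sasl_plaintext.plain.sasl.jaas.config=\
	  org.apache.kafka.common.security.plain.PlainLoginModule required \
	  username="admin" \
	  password="admin-secret";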
sasl.kerberos.kinit.cmd Kerberos kinit command path. string /usr/bin/kinit medium per-broker
sasl.kerberos.min.time.before.relogin Login thread sleep time between refresh attempts. long 60000 medium per-broker
sasl.kerberos.principal.to.local.rules A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name; later rules are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format, see the security authorization and ACLs documentation. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided via the principal.builder.class configuration. list DEFAULT medium per-broker
sasl.kerberos.service.name The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. string null medium per-broker
sasl.kerberos.ticket.renew.jitter Percentage of random jitter added to the renewal time. double 0.05 medium per-broker
sasl.kerberos.ticket.renew.window.factor The login thread sleeps until the specified window factor of the time from the last refresh to the ticket's expiry has been reached, at which point it attempts to renew the ticket. double 0.8 medium per-broker
sasl.mechanism.inter.broker.protocol The SASL mechanism used for inter-broker communication. The default is GSSAPI. string GSSAPI medium per-broker
security.inter.broker.protocol The security protocol used to communicate between brokers. Valid values are PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. string PLAINTEXT medium read-only
It is an error to set both this and inter.broker.listener.name at the same time.
ssl.cipher.suites A list of cipher suites. This is a named combination of authentication, encryption, MAC, and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL protocol. By default, all available cipher suites are supported. list "" medium per-broker
ssl.client.auth Configures the Kafka broker to request client authentication. Common settings: string none [required, requested, none] medium per-broker
ssl.client.auth=required: client authentication is required.
ssl.client.auth=requested: client authentication is optional; unlike required, the client can choose not to provide authentication information about itself.
ssl.client.auth=none: no client authentication is needed. This is the default.
ssl.enabled.protocols The list of protocols enabled for SSL connections. list TLSv1.2,TLSv1.1,TLSv1 medium per-broker
ssl.key.password The password of the private key in the key store file. This is optional for clients. password null medium per-broker
ssl.keymanager.algorithm The algorithm used by the key manager factory for SSL connections. The default is the key manager factory algorithm configured for the Java Virtual Machine. string SunX509 medium per-broker
ssl.keystore.location The location of the key store file. This is optional for clients and can be used for two-way client authentication. string null medium per-broker
ssl.keystore.password The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured. password null medium per-broker
ssl.keystore.type The file format of the key store file. This is optional for clients. string JKS medium per-broker
ssl.protocol The SSL protocol used to generate the SSLContext. string TLS medium per-broker
The default, TLS, is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1, and TLSv1.2. SSL, SSLv2, and SSLv3 may be supported in older JVMs, but their use is discouraged due to known security vulnerabilities.
ssl.provider The name of the security provider used for SSL connections. The default is the default security provider of the JVM. string null medium per-broker
ssl.trustmanager.algorithm The algorithm used by the trust manager factory for SSL connections. The default is the trust manager factory algorithm configured for the Java Virtual Machine. string PKIX medium per-broker
ssl.truststore.location The location of the trust store file. string null medium per-broker
ssl.truststore.password The password for the trust store file. If no password is set, the trust store is still accessible, but integrity checking is disabled. password null medium per-broker
ssl.truststore.type The file format of the trust store file. string JKS medium per-broker
alter.config.policy.class.name The alter-configs policy class used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. class null low read-only
alter.log.dirs.replication.quota.window.num The number of samples to retain in memory for alter-log-dirs replication quotas. int 11 [1,...] low read-only
alter.log.dirs.replication.quota.window.size.seconds The time span of each sample for alter-log-dirs replication quotas. int 1 [1,...] low read-only
authorizer.class.name The authorizer class that should be used for authorization. string "" low read-only
create.topic.policy.class.name The create-topic policy class used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. class null low read-only
delegation.token.expiry.check.interval.ms The scan interval for removing expired delegation tokens. long 3600000 [1,...] low read-only
listener.security.protocol.map Map between listener names and security protocols. string PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL low per-broker
This must be defined for the same security protocol to be usable on more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both: the user could define listeners named INTERNAL and EXTERNAL and set this property to INTERNAL:SSL,EXTERNAL:SSL. Keys and values are separated by a colon and map entries by commas, and each listener name should appear only once in the map. Different security (SSL and SASL) settings can be configured per listener by adding a normalised prefix (the lowercased listener name) to the config name; for example, to set a different keystore for the INTERNAL listener, set a config named listener.name.internal.ssl.keystore.location. If no listener-specific config is set, the generic config (i.e. ssl.keystore.location) is used as a fallback.
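
A sketch of the INTERNAL/EXTERNAL split described above (hostnames, ports, and paths are placeholders):

	listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
	listeners=INTERNAL://broker1.internal:9093,EXTERNAL://broker1.example.com:9094
	inter.broker.listener.name=INTERNAL
	# listener-specific override for the INTERNAL listener's keystore:
	listener.name.internal.ssl.keystore.location=/etc/kafka/internal.keystore.jks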
metric.reporters A list of classes to use as metrics reporters. Classes implementing the org.apache.kafka.common.metrics.MetricsReporter interface can be plugged in to be notified of new metric creation. The JmxReporter is always included to register JMX statistics. list "" low cluster-wide
metrics.num.samples The number of samples maintained to compute metrics. int 2 [1,...] low read-only
metrics.recording.level The highest recording level for metrics. string INFO low read-only
metrics.sample.window.ms The window of time over which a metrics sample is computed. long 30000 [1,...] low read-only
password.encoder.cipher.algorithm The cipher algorithm used for encoding dynamically configured passwords. string AES/CBC/PKCS5Padding low read-only
password.encoder.iterations The iteration count used for encoding dynamically configured passwords. int 4096 [1024,...] low read-only
password.encoder.key.length The key length used for encoding dynamically configured passwords. int 128 [8,...] low read-only
password.encoder.keyfactory.algorithm The SecretKeyFactory algorithm used for encoding dynamically configured passwords. The default is PBKDF2WithHmacSHA512 if available, and PBKDF2WithHmacSHA1 otherwise. string null low read-only
quota.window.num The number of samples to retain in memory for client quotas. int 11 [1,...] low read-only
quota.window.size.seconds The time span of each sample for client quotas. int 1 [1,...] low read-only
replication.quota.window.num The number of samples to retain in memory for replication quotas. int 11 [1,...] low read-only
replication.quota.window.size.seconds The time span of each sample for replication quotas. int 1 [1,...] low read-only
ssl.endpoint.identification.algorithm The endpoint identification algorithm used to validate the server hostname against the server certificate. string null low per-broker
ssl.secure.random.implementation The SecureRandom PRNG implementation to use for SSL cryptography operations. string null low per-broker
transaction.abort.timed.out.transaction.cleanup.interval.ms The interval at which to roll back transactions that have timed out. int 60000 [1,...] low read-only
transaction.remove.expired.transaction.cleanup.interval.ms The interval at which to remove transactions that have expired because transactional.id.expiration.ms has passed. int 3600000 [1,...] low read-only
zookeeper.sync.time.ms How far a ZooKeeper follower can be behind the ZooKeeper leader. int 2000 low read-only

3. The parameters in the table above are modified with the kafka-configs.sh script (Kafka version >= 1.1).

For example, to change the number of log cleaner threads on broker 0:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2

To describe the current dynamic configs of broker 0:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe

To delete a config override on broker 0 (reverting it to its default value):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads

To update the parameter on all brokers in the cluster at once (cluster-wide mode, keeping the value consistent across all brokers):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2

To describe the currently configured dynamic cluster-wide default configs:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe

If a parameter is defined at several levels at the same time, the following order of precedence applies (highest first):

Dynamic per-broker config stored in ZooKeeper
Dynamic cluster-wide default config stored in ZooKeeper
Static broker config from server.properties
Kafka default, see broker configs
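
As a sketch of how this precedence plays out for a single parameter (values are illustrative):

	# Effective value of log.cleaner.threads, highest priority first:
	# 1. dynamic per-broker override   = 4   <- currently in effect
	# 2. dynamic cluster-wide default  = 3   <- used if the per-broker override is deleted
	# 3. static server.properties      = 2   <- used if both dynamic values are deleted
	# 4. Kafka default                 = 1   <- used if server.properties does not set it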