ZooKeeper-less Kafka

Docker version

Run the following command to start a local Kafka service, with no ZooKeeper required:

docker run -it --name kafka-zkless -p 9092:9092 -e LOG_DIR=/tmp/logs quay.io/strimzi/kafka:latest-kafka-3.1.0-amd64 /bin/sh -c 'export CLUSTER_ID=$(bin/kafka-storage.sh random-uuid) && bin/kafka-storage.sh format -t $CLUSTER_ID -c config/kraft/server.properties && bin/kafka-server-start.sh config/kraft/server.properties'
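
Once the container is up, a quick reachability check with kafka-python's admin client (a minimal sketch, assuming kafka-python is installed on the host via pip install kafka-python):

from kafka.admin import KafkaAdminClient

# Connect to the broker started above; list_topics() raises if the
# broker is unreachable.
admin = KafkaAdminClient(bootstrap_servers=['localhost:9092'])
print(admin.list_topics())
admin.close()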

Usage

Run the following script to publish and then subscribe to messages. (The script blocks at the end, waiting for incoming messages.)

#!/usr/bin/env python3
# -*- encoding: utf-8 -*-
'''
@File    :   LearnKafka.py
@Author  :   shoujun.li 
@Version :   1.0
@Desc    :   None
'''
from kafka import KafkaConsumer
from kafka import KafkaProducer
from kafka.errors import KafkaError
import json
import logging

# Logger used by the error handlers below
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

producer = KafkaProducer(bootstrap_servers=['localhost:9092'])

# Asynchronous by default
future = producer.send('my-topic', b'raw_bytes')

# Block for 'synchronous' sends
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # Decide what to do if the produce request failed...
    log.exception("produce request failed")
else:
    # Successful result returns the assigned partition and offset
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)

# produce keyed messages to enable hashed partitioning
producer.send('my-topic', key=b'foo', value=b'bar')

# produce json messages
producer = KafkaProducer(
    bootstrap_servers=['localhost:9092'],
    value_serializer=lambda m: json.dumps(m).encode('ascii'))
producer.send('json-topic', {'key': 'value'})


def on_send_success(record_metadata):
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)


def on_send_error(excp):
    log.error('I am an errback', exc_info=excp)
    # handle exception


# produce asynchronously with callbacks
producer.send('json-topic', {'key': 'value'}
              ).add_callback(on_send_success).add_errback(on_send_error)

# block until all async messages are sent
producer.flush()

# configure multiple retries (note: this creates a new producer instance,
# replacing the JSON producer above)
producer = KafkaProducer(retries=5)
producer.close()

print("开始订阅")
# consume earliest available messages, don't commit offsets
consumer = KafkaConsumer('json-topic',
                         group_id='my-group',
                         bootstrap_servers=['localhost:9092'], 
                         auto_offset_reset='earliest', 
                         enable_auto_commit=False,
                         value_deserializer=lambda m: json.loads(m.decode('ascii'))
                         )

for message in consumer:
    # message value and key are raw bytes -- decode if necessary!
    # e.g., for unicode: `message.value.decode('utf-8')`
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                         message.offset, message.key,
                                         message.value))
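
Note that with enable_auto_commit=False the consumer above never persists its offsets, so a restart replays from the earliest offset again. If you do want to record progress, a minimal sketch that commits after each message (consumer.commit() is kafka-python's synchronous commit):

from kafka import KafkaConsumer
import json

consumer = KafkaConsumer('json-topic',
                         group_id='my-group',
                         bootstrap_servers=['localhost:9092'],
                         auto_offset_reset='earliest',
                         enable_auto_commit=False,
                         value_deserializer=lambda m: json.loads(m.decode('ascii')))
for message in consumer:
    print(message.value)
    consumer.commit()  # synchronously commit the offset just consumed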

Manual installation

After installing with Docker last time, today I configured Kafka by hand on a server, so I'm updating these notes.
Download
wget https://mirrors.cloud.tencent.com/apache/kafka/3.2.0/kafka_2.13-3.2.0.tgz
Extract
tar xzvf kafka_2.13-3.2.0.tgz
Format storage
In KRaft mode the data directory must be formatted with a cluster ID before the first start (the same step the Docker command above performs):
bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties
Configure
Edit the config/kraft/server.properties file and set the listener IP to the server's own IP; note that 0.0.0.0 will not work.
Start
bin/kafka-server-start.sh -daemon config/kraft/server.properties
Liveness check
telnet ip 9092
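
If telnet is not installed, the same liveness check can be done from Python (a sketch; the placeholder below stands in for the ip used in these commands):

import socket

# TCP probe of the broker port; create_connection raises if it is unreachable.
sock = socket.create_connection(('<server-ip>', 9092), timeout=5)  # placeholder IP
print('port 9092 is reachable')
sock.close()
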
Create a topic
bin/kafka-topics.sh --create --topic my-message --partitions 1 --replication-factor 1 --bootstrap-server ip:9092
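
The same topic can also be created from Python with kafka-python's admin client (a sketch; placeholder IP as above):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers=['<server-ip>:9092'])  # placeholder IP
admin.create_topics([NewTopic(name='my-message',
                              num_partitions=1,
                              replication_factor=1)])
admin.close()
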
Subscribe
bin/kafka-console-consumer.sh --bootstrap-server ip:9092 --topic my-message
Publish
bin/kafka-console-producer.sh --bootstrap-server ip:9092 --topic my-message
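
To verify the manual install end to end from Python, a minimal produce-then-consume sketch (placeholder IP again; consumer_timeout_ms makes the loop exit after five idle seconds instead of blocking forever):

from kafka import KafkaProducer, KafkaConsumer

SERVER = '<server-ip>:9092'  # placeholder for the ip used above

producer = KafkaProducer(bootstrap_servers=[SERVER])
producer.send('my-message', b'hello from python')
producer.flush()
producer.close()

consumer = KafkaConsumer('my-message',
                         bootstrap_servers=[SERVER],
                         auto_offset_reset='earliest',
                         consumer_timeout_ms=5000)  # give up after 5s with no messages
for message in consumer:
    print(message.value)  # value is raw bytes here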