Python client for the Apache Kafka distributed stream processing system. kafka-python is designed to function much like the official Java client, with a sprinkling of pythonic interfaces (e.g., consumer iterators).
kafka-python is best used with newer brokers (0.9+), but is backwards-compatible with older versions (down to 0.8.0). Some features are only enabled on newer brokers. For example, fully coordinated consumer groups (i.e., dynamic partition assignment to multiple consumers in the same group) require 0.9+ kafka brokers. Supporting this feature on earlier broker releases would require writing and maintaining custom leader election and membership/health-check code (perhaps using ZooKeeper or Consul). For older brokers, you can achieve something similar by manually assigning a different partition to each consumer instance with a configuration management tool such as Chef or Ansible. This approach works fine, though it does not support rebalancing on failure; a sketch of it follows the link below. See the link below for more details.
https://kafka-python.readthedocs.io/en/master/compatibility.html
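As a rough illustration of that manual-assignment workaround (a sketch only; the topic name and partition numbers are hypothetical, and nothing rebalances if an instance dies):

>>> # instance A, pinned to partition 0 (instance B would be pinned to partition 1, etc.)
>>> from kafka import KafkaConsumer, TopicPartition
>>> consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
>>> consumer.assign([TopicPartition('my_favorite_topic', 0)])
>>> for msg in consumer:
...     print(msg)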
Note that the master branch may contain unreleased features. For release documentation, please see readthedocs and/or Python's inline help.
>>> pip install kafka-python
KafkaConsumer
KafkaConsumer is a high-level message consumer, intended to operate as similarly as possible to the official Java client. Full support for coordinated consumer groups requires kafka brokers that support the Group APIs: kafka v0.9+.
See the link below for API and configuration details.
https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html
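For older brokers, you can also pin the wire-protocol version explicitly instead of relying on auto-detection; a minimal sketch, assuming a 0.8.2 broker (group coordination still requires 0.9+ regardless):

>>> # pin the protocol version for an old broker
>>> from kafka import KafkaConsumer
>>> consumer = KafkaConsumer('my_favorite_topic',
...                          bootstrap_servers='localhost:9092',
...                          api_version=(0, 8, 2))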
The consumer iterator returns ConsumerRecords, which are simple namedtuples that expose basic message attributes: topic, partition, offset, key, and value:
>>> from kafka import KafkaConsumer
>>> consumer = KafkaConsumer('my_favorite_topic')
>>> for msg in consumer:
...     print(msg)
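The namedtuple fields can also be read individually, for example:

>>> for msg in consumer:
...     # topic, partition, offset, key and value are plain attributes
...     print("%s:%d:%d: key=%s value=%s" % (msg.topic, msg.partition,
...                                          msg.offset, msg.key, msg.value))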
>>> # join a consumer group for dynamic partition assignment and offset commits
>>> from kafka import KafkaConsumer
>>> consumer = KafkaConsumer('my_favorite_topic', group_id='my_favorite_group')
>>> for msg in consumer:
...     print(msg)
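If you prefer to control when offsets are committed, a minimal sketch (assuming the same topic and group; process() is a hypothetical handler) disables auto-commit and calls commit() explicitly:

>>> # commit offsets manually after processing each message
>>> consumer = KafkaConsumer('my_favorite_topic',
...                          group_id='my_favorite_group',
...                          enable_auto_commit=False)
>>> for msg in consumer:
...     process(msg)       # hypothetical processing function
...     consumer.commit()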
>>> # manually assign the partition list for the consumer
>>> from kafka import TopicPartition
>>> consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
>>> consumer.assign([TopicPartition('foobar', 2)])
>>> msg = next(consumer)
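With manual assignment you can also reposition the consumer yourself, e.g.:

>>> # rewind the assigned partition, or jump to an absolute offset
>>> tp = TopicPartition('foobar', 2)
>>> consumer.seek_to_beginning(tp)
>>> consumer.seek(tp, 5)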
>>> # Deserialize msgpack-encoded values
>>> import msgpack
>>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
>>> consumer.subscribe(['msgpackfoo'])
>>> for msg in consumer:
...     assert isinstance(msg.value, dict)
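The same hook works for other codecs; for instance, a JSON counterpart (a sketch, assuming UTF-8-encoded JSON values on a hypothetical 'jsonfoo' topic):

>>> # Deserialize json-encoded values
>>> import json
>>> consumer = KafkaConsumer('jsonfoo',
...                          value_deserializer=lambda m: json.loads(m.decode('utf-8')))
>>> for msg in consumer:
...     assert isinstance(msg.value, dict)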
>>> # Access record headers. The returned value is a list of tuples
>>> # with str, bytes for key and value
>>> for msg in consumer:
...     print(msg.headers)
>>> # Get consumer metrics
>>> metrics = consumer.metrics()
KafkaProducer
KafkaProducer is a high-level, asynchronous message producer. The class is intended to operate as similarly as possible to the official Java client. See the link below for more details.
https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html
>>> from kafka import KafkaProducer
>>> producer = KafkaProducer(bootstrap_servers='localhost:1234')
>>> for _ in range(100):
...     producer.send('foobar', b'some_message_bytes')
>>> # Block until a single message is sent (or timeout)
>>> future = producer.send('foobar', b'another_message')
>>> result = future.get(timeout=60)
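Since sends are asynchronous, you can also attach callbacks to the returned future instead of blocking on it; for example (the handler names are just illustrative):

>>> # handle the result asynchronously via callbacks
>>> def on_send_success(record_metadata):
...     print(record_metadata.topic, record_metadata.partition, record_metadata.offset)
...
>>> def on_send_error(excp):
...     print('send failed:', excp)
...
>>> producer.send('foobar', b'another_message') \
...     .add_callback(on_send_success) \
...     .add_errback(on_send_error)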
>>> # Block until all pending messages are at least put on the network
>>> # NOTE: This does not guarantee delivery or success! It is really
>>> # only useful if you configure internal batching using linger_ms
>>> producer.flush()
>>> # Use a key for hashed-partitioning
>>> producer.send('foobar', key=b'foo', value=b'bar')
>>> # Serialize json messages
>>> import json
>>> producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode('utf-8'))
>>> producer.send('fizzbuzz', {'foo': 'bar'})
>>> # Serialize string keys
>>> producer = KafkaProducer(key_serializer=str.encode)
>>> producer.send('flipflap', key='ping', value=b'1234')
>>> # Include record headers. The format is list of tuples with string key
>>> # and bytes value.
>>> producer.send('foobar', value=b'c29tZSB2YWx1ZQ==', headers=[('content-encoding', b'base64')])

>>> # Get producer performance metrics
>>> metrics = producer.metrics()

>>> # Compress messages
>>> producer = KafkaProducer(compression_type='gzip')
>>> for i in range(1000):
...     producer.send('foobar', b'msg %d' % i)
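gzip is handled by the Python standard library; compression_type also accepts 'snappy' and 'lz4', but those need third-party packages installed (python-snappy and lz4 respectively), so treat this as a sketch assuming they are present:

>>> # alternative codecs (each requires its extra package)
>>> producer = KafkaProducer(compression_type='snappy')
>>> producer = KafkaProducer(compression_type='lz4')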
Example:
#!/usr/bin/env python
import threading, logging, time
import multiprocessing

from kafka import KafkaConsumer, KafkaProducer


class Producer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.stop_event = threading.Event()

    def stop(self):
        self.stop_event.set()

    def run(self):
        producer = KafkaProducer(bootstrap_servers='localhost:9092')

        while not self.stop_event.is_set():
            producer.send('my-topic', b"test")
            producer.send('my-topic', b"\xc2Hola, mundo!")
            time.sleep(1)

        producer.close()
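A minimal driver for the class above might look like this (a sketch: it assumes a broker on localhost:9092 and simply runs the producer for ten seconds):

if __name__ == '__main__':
    producer = Producer()
    producer.start()     # run() begins sending messages once per second
    time.sleep(10)
    producer.stop()      # set the event checked in run()'s loop
    producer.join()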