Kafka installation (3.0.0)
- Download: archive.apache.org/dist/kafka/…
- Extract: tar -xvf kafka_2.12-3.0.0.tgz
- Edit the configuration
- server.properties
broker.id=0
log.dirs=/data/kafka_2.12-3.0.0/datas
zookeeper.connect=localhost:2181/kafka
advertised.listeners=PLAINTEXT://xxxx.xxxx.xxx.xxx:9092
- zookeeper.properties
dataDir=/data/kafka_2.12-3.0.0/zk-datas
- Starting in ZooKeeper mode
Start ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties
Start Kafka (the JMX_PORT environment variable must come before the command): JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
- Starting in KRaft mode
Generate a cluster ID: bin/kafka-storage.sh random-uuid
Format the storage directory with the generated ID: bin/kafka-storage.sh format -t IvVFoklNT0OUrrhpTBS10A -c config/kraft/server.properties
Edit the configuration: advertised.listeners=PLAINTEXT://xxxx.xxxx.xxx.xxx:9092
Start: bin/kafka-server-start.sh config/kraft/server.properties
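Besides advertised.listeners, a KRaft server depends on a few settings that ship preconfigured in config/kraft/server.properties. Shown here with their single-node defaults for orientation (the values in your copy may differ):

```properties
# Node acts as both broker and controller in a single-node setup
process.roles=broker,controller
# Replaces broker.id in KRaft mode; must be unique per node
node.id=1
# Controller quorum: id@host:port of every controller node
controller.quorum.voters=1@localhost:9093
# Default data directory in the shipped KRaft config
log.dirs=/tmp/kraft-combined-logs
```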
- Testing
Create a topic: bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
Describe the topic: bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test
Produce messages: bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test
Consume messages: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
kafka-manager (CMAK) (3.0.0.6)
- Download: github.com/yahoo/CMAK/…
- Extract: unzip cmak-3.0.0.6.zip -d cmak
- Configure (in conf/application.conf)
kafka-manager.zkhosts="localhost:2181"
cmak.zkhosts="localhost:2181"
- Start
bin/cmak
- Access
localhost:9000
kafka-client
- Add the kafka-clients dependency
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>3.0.0</version>
</dependency>
- Create a Producer
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerDemo {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "xx.xx.xx.xx:9092");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        // Fire-and-forget send
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>("test", "MJoe" + i));
        }
        // Asynchronous send with a callback
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>("test", "MJoe" + i), new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null) {
                        System.out.println("topic: " + recordMetadata.topic() + " partition: " + recordMetadata.partition());
                    }
                }
            });
        }
        // Synchronous send: get() blocks until the broker acknowledges
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>("test", "MJoe" + i)).get();
        }
        producer.close();
    }
}
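The producer above has no plain-Java consuming counterpart in these notes. A minimal consumer sketch, assuming the broker from the install section at localhost:9092 and the `test` topic; the group id `demo-group` is made up for illustration:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group"); // hypothetical group id
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // new group reads from the beginning
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties)) {
            consumer.subscribe(Collections.singletonList("test"));
            // Poll a few times and print whatever arrives; a real consumer would loop indefinitely
            for (int i = 0; i < 5; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("offset=" + record.offset() + " value=" + record.value());
                }
            }
        }
    }
}
```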
Integrating Kafka with Spring Boot
- Add the dependency
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
- Producer
@RestController
public class ProducerController {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/msg")
    public String msg(String msg) {
        kafkaTemplate.send("first", msg);
        return "ok";
    }
}
- Consumer
@Configuration
public class KafkaConsumer {
    @KafkaListener(topics = "first")
    public void consumerTopic(String msg) {
        System.out.println("msg = " + msg);
    }
}
- application.properties configuration
spring.kafka.bootstrap-servers=101.35.225.169:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.group-id=mjoe
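One consumer setting worth knowing on top of the above: by default a brand-new consumer group starts at the latest offset and skips messages produced before it existed. To replay a topic from the beginning, this standard Spring Boot property can be added (value shown is an example):

```properties
# Start a new consumer group from the earliest available offset
# (the default is "latest")
spring.kafka.consumer.auto-offset-reset=earliest
```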