To integrate Kafka with Spring Boot and implement message queue sending and receiving, follow these steps:
1. Add dependencies
Add the following to pom.xml:
<dependencies>
    <!-- Spring Boot Starter Web (optional, used to expose a REST endpoint for testing) -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring Kafka -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
</dependencies>
2. Configure the Kafka connection
Configure the following in application.yml:
spring:
  kafka:
    bootstrap-servers: localhost:9092  # Kafka broker address
    # Producer configuration
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    # Consumer configuration
    consumer:
      group-id: my-group  # Consumer group ID
      auto-offset-reset: earliest  # Start consuming from the earliest messages
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
3. Create the producer service
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaProducerService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaProducerService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Send a message to the given topic
    public void sendMessage(String topic, String message) {
        kafkaTemplate.send(topic, message);
        System.out.println("Produced message: " + message);
    }
}
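If you need to know whether a send actually reached the broker, KafkaTemplate.send() returns a future you can attach a callback to. Below is a minimal sketch of such a producer, assuming Spring Kafka 3.x, where send() returns a CompletableFuture (in 2.x it returns a ListenableFuture instead); the class name KafkaCallbackProducerService is illustrative:

import java.util.concurrent.CompletableFuture;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;

@Service
public class KafkaCallbackProducerService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaCallbackProducerService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Send a message and log whether the broker acknowledged it
    public void sendMessage(String topic, String message) {
        CompletableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
        future.whenComplete((result, ex) -> {
            if (ex == null) {
                System.out.println("Produced message to partition "
                        + result.getRecordMetadata().partition()
                        + " at offset " + result.getRecordMetadata().offset());
            } else {
                System.err.println("Failed to produce message: " + ex.getMessage());
            }
        });
    }
}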
4. Create the consumer service
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class KafkaConsumerService {

    // Listen for messages on the given topic
    @KafkaListener(topics = "test-topic", groupId = "my-group")
    public void consume(String message) {
        System.out.println("Consumed message: " + message);
        // Add business logic here
    }
}
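If the listener also needs the record's key, partition, or offset, @KafkaListener can receive the full ConsumerRecord instead of just the String payload. A minimal sketch, as an alternative to the listener above (the class name is illustrative):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class KafkaRecordConsumerService {

    // Receive the whole record so partition and offset metadata are available
    @KafkaListener(topics = "test-topic", groupId = "my-group")
    public void consume(ConsumerRecord<String, String> record) {
        System.out.println("Consumed message '" + record.value()
                + "' from partition " + record.partition()
                + " at offset " + record.offset());
    }
}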
5. Create a REST endpoint for testing (optional)
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaController {

    private final KafkaProducerService producerService;

    public KafkaController(KafkaProducerService producerService) {
        this.producerService = producerService;
    }

    @PostMapping("/send")
    public String sendMessage(
            @RequestParam("message") String message,
            @RequestParam(value = "topic", defaultValue = "test-topic") String topic
    ) {
        producerService.sendMessage(topic, message);
        return "Message sent: " + message;
    }
}
6. Start the Kafka service (local testing)
- Download and extract Kafka
- Start ZooKeeper:
  bin/zookeeper-server-start.sh config/zookeeper.properties
- Start Kafka:
  bin/kafka-server-start.sh config/server.properties
- Create the topic:
  bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
7. Test flow
- Start the Spring Boot application
- Send an HTTP request:
  curl -X POST "http://localhost:8080/send?message=HelloKafka"
- Check the console output:
  Produced message: HelloKafka
  Consumed message: HelloKafka
Advanced configuration (optional)
spring:
  kafka:
    # Producer retries
    producer:
      retries: 3
      properties:
        delivery.timeout.ms: 30000
    # Manual offset commit for the consumer
    consumer:
      enable-auto-commit: false
    # Listener configuration
    listener:
      ack-mode: manual  # Manual acknowledgment
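With enable-auto-commit: false and ack-mode: manual, the listener must acknowledge each record explicitly, otherwise offsets are never committed. A minimal sketch of a manually acknowledging listener, assuming the configuration above is active (the class name is illustrative):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class ManualAckConsumerService {

    @KafkaListener(topics = "test-topic", groupId = "my-group")
    public void consume(String message, Acknowledgment ack) {
        System.out.println("Consumed message: " + message);
        // Commit the offset only after the business logic has completed successfully
        ack.acknowledge();
    }
}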
Troubleshooting
- Connection failure: check the bootstrap-servers setting and make sure the Kafka service is running
- Consumer not receiving messages:
  - Confirm the group-id is unique (not reused by another application)
  - Check the topic name spelling in @KafkaListener
- Message backlog: increase the number of consumer threads
  @KafkaListener(topics = "test-topic", groupId = "my-group", concurrency = "3")
Tip: in production environments, it is recommended to configure:
- Message serialization (JSON/Protobuf), as shown in the sketch after this list
- An error handling mechanism
- A message acknowledgment (ACK) strategy
- Monitoring and metrics collection
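As an example of the first point, Spring Kafka ships a JsonSerializer that can be wired into a dedicated KafkaTemplate for typed payloads. A minimal sketch, assuming a hypothetical OrderEvent payload type; the class names and bootstrap address are placeholders to adapt to your project:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class KafkaJsonConfig {

    // Hypothetical event payload, used only for illustration
    public record OrderEvent(String orderId, double amount) {}

    // Producer factory that serializes values as JSON
    @Bean
    public ProducerFactory<String, OrderEvent> orderEventProducerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    // Template typed to the JSON payload, injectable wherever OrderEvent messages are sent
    @Bean
    public KafkaTemplate<String, OrderEvent> orderEventKafkaTemplate() {
        return new KafkaTemplate<>(orderEventProducerFactory());
    }
}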
With the steps above, you have integrated Spring Boot with Kafka and can produce and consume messages through a basic message queue.