Spring AI in Practice: A Full Walkthrough of Building an Intelligent Q&A System


Introduction: When Spring Meets AI

Amid the wave of digital transformation, artificial intelligence has become a core competitive capability for enterprise applications. Spring is the most popular framework in the Java ecosystem, and its integration with AI gives developers a powerful way to build enterprise-grade AI applications. Spring AI, the AI integration framework from the Spring team, lets Java developers connect to all kinds of large language models (LLMs) in the familiar Spring style. This article walks through building a complete intelligent Q&A system, covering Spring AI's core techniques and best practices along the way.

1. Spring AI Architecture Overview

1.1 Design Philosophy

Spring AI follows Spring's "convention over configuration" philosophy and provides a consistent set of abstractions for AI application development. Its core architecture is organized into four layers:

  • Application layer: business-facing API endpoints
  • Abstraction layer: unified AI operation interfaces such as ChatClient and EmbeddingClient (a minimal usage sketch follows this list)
  • Adapter layer: integrations with AI service providers (OpenAI, Azure, local models, and so on)
  • Infrastructure layer: configuration management, connection pooling, monitoring, and other foundational support
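
To make the abstraction layer concrete, here is a minimal sketch (the class name is mine, and it assumes the OpenAI starter from section 1.2 is on the classpath with an API key configured) of a service that talks to the model only through the ChatClient and EmbeddingClient interfaces, with no provider-specific classes:

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.embedding.EmbeddingClient;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class AbstractionLayerDemo {

    private final ChatClient chatClient;           // chat abstraction
    private final EmbeddingClient embeddingClient; // embedding abstraction

    // Both beans are auto-configured by the provider starter (OpenAI here)
    public AbstractionLayerDemo(ChatClient chatClient, EmbeddingClient embeddingClient) {
        this.chatClient = chatClient;
        this.embeddingClient = embeddingClient;
    }

    public String ask(String question) {
        // call(String) returns the generated text directly
        return chatClient.call(question);
    }

    public List<Double> vectorize(String text) {
        // embed(String) returns the embedding vector
        return embeddingClient.embed(text);
    }
}

Swapping OpenAI for Azure or a local model is then a matter of changing the starter and configuration, not this code.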

1.2 Core Components

<!-- Maven dependency configuration -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>0.8.1</version>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-pgvector-store-spring-boot-starter</artifactId>
    <version>0.8.1</version>
</dependency>

2. Environment Setup and Configuration

2.1 Project Initialization

Create the project with Spring Initializr and select the following dependencies:

  • Spring Web
  • Spring Data JPA
  • PostgreSQL Driver
  • SpringAI OpenAI
  • SpringAI PGVector

2.2 Configuration Walkthrough

# application.yml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/ai_demo
    username: postgres
    password: postgres
    
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-3.5-turbo
          temperature: 0.7
          max-tokens: 2000
    
    vectorstore:
      pgvector:
        index-type: HNSW
        distance-type: COSINE
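
The options under spring.ai.openai.chat.options act as application-wide defaults. They can also be overridden per request; below is a small sketch of that, assuming the Spring AI 0.8.x API (the class name and the values are illustrative):

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatOptions;

public class PerRequestOptionsDemo {

    private final ChatClient chatClient;

    public PerRequestOptionsDemo(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String preciseAnswer(String question) {
        // Lower the temperature for this call only; the application.yml defaults stay untouched
        OpenAiChatOptions options = OpenAiChatOptions.builder()
                .withModel("gpt-3.5-turbo")
                .withTemperature(0.2f)
                .build();

        ChatResponse response = chatClient.call(new Prompt(question, options));
        return response.getResult().getOutput().getContent();
    }
}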

2.3 Database Initialization

-- Enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create the document store table
CREATE TABLE IF NOT EXISTS document_store (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    content TEXT NOT NULL,
    metadata JSONB,
    embedding vector(1536),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create an HNSW index to speed up similarity queries
CREATE INDEX IF NOT EXISTS document_embedding_idx 
ON document_store 
USING hnsw (embedding vector_cosine_ops);

3. Core Implementation of the Intelligent Q&A System

3.1 Data Model Design

@Entity
@Table(name = "document_store")
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Document {
    
    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private UUID id;
    
    @Column(columnDefinition = "TEXT")
    private String content;
    
    @Type(JsonType.class)
    @Column(columnDefinition = "jsonb")
    private Map<String, Object> metadata;
    
    @Column(columnDefinition = "vector(1536)")
    private float[] embedding;
    
    private LocalDateTime createdAt;
    
    @PrePersist
    protected void onCreate() {
        this.createdAt = LocalDateTime.now();
    }
}

@Repository
public interface DocumentRepository extends JpaRepository<Document, UUID> {
    
    @Query(value = "SELECT * FROM document_store ORDER BY embedding <=> :embedding LIMIT :k", 
           nativeQuery = true)
    List<Document> findSimilarDocuments(@Param("embedding") float[] embedding, 
                                       @Param("k") int k);
}
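
A note on the repository above: binding a float[] parameter against a pgvector column in a native query generally needs a vector-compatible Hibernate mapping or an explicit cast. Since spring-ai-pgvector-store-spring-boot-starter is already among our dependencies, an alternative worth knowing is to let Spring AI's VectorStore abstraction handle embedding and similarity search for us. A hedged sketch, assuming the 0.8.x API (the class name is mine):

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.SearchRequest;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class VectorStoreRetrievalDemo {

    // Auto-configured by spring-ai-pgvector-store-spring-boot-starter
    private final VectorStore vectorStore;

    public VectorStoreRetrievalDemo(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public List<Document> retrieve(String query, int topK) {
        // Embedding the query and running the pgvector similarity search are handled internally
        return vectorStore.similaritySearch(
                SearchRequest.query(query).withTopK(topK));
    }
}

Keep in mind that the auto-configured PgVectorStore manages its own table (vector_store by default), so it replaces the hand-rolled document_store mapping rather than querying it.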

3.2 Document Processing and Vectorization

@Service
@Slf4j
public class DocumentProcessingService {
    
    @Autowired
    private EmbeddingClient embeddingClient;
    
    @Autowired
    private DocumentRepository documentRepository;
    
    // Assumed to be an application-provided splitter that returns plain-text chunks;
    // Spring AI's TokenTextSplitter works on Document objects instead (see the sketch after this class)
    @Autowired
    private TextSplitter textSplitter;
    
    /**
     * Process a document and store it
     */
    @Transactional
    public void processAndStoreDocument(String documentContent, 
                                       Map<String, Object> metadata) {
        
        // 1. Split the text (handles long documents)
        List<String> segments = textSplitter.split(documentContent);
        
        // 2. Embed the chunks in one batch
        List<List<Double>> embeddings = embeddingClient.embed(segments);
        
        // 3. Store each chunk in the vector database
        for (int i = 0; i < segments.size(); i++) {
            Document doc = new Document();
            doc.setContent(segments.get(i));
            
            Map<String, Object> docMetadata = new HashMap<>(metadata);
            docMetadata.put("segment_index", i);
            docMetadata.put("total_segments", segments.size());
            
            doc.setMetadata(docMetadata);
            doc.setEmbedding(convertToFloatArray(embeddings.get(i)));
            
            documentRepository.save(doc);
        }
        
        log.info("Document processed successfully; split into {} segments", segments.size());
    }
    
    /**
     * Retrieve documents by vector similarity
     */
    public List<Document> retrieveRelevantDocuments(String query, int topK) {
        // Embed the query text
        List<Double> queryEmbedding = embeddingClient.embed(query);
        
        // Similarity search
        return documentRepository.findSimilarDocuments(
            convertToFloatArray(queryEmbedding), 
            topK
        );
    }
    
    private float[] convertToFloatArray(List<Double> doubleList) {
        float[] floatArray = new float[doubleList.size()];
        for (int i = 0; i < doubleList.size(); i++) {
            floatArray[i] = doubleList.get(i).floatValue();
        }
        return floatArray;
    }
}
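
The textSplitter used above is assumed to be an application-provided component that returns plain-text chunks. For comparison, here is a sketch of the Spring AI-native ingestion path, which chains the built-in TokenTextSplitter with the VectorStore abstraction (again assuming the 0.8.x API; the chunk-size defaults are the library's):

import org.springframework.ai.document.Document;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;

@Service
public class VectorStoreIngestionDemo {

    private final VectorStore vectorStore;

    public VectorStoreIngestionDemo(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public void ingest(String documentContent, Map<String, Object> metadata) {
        // 1. Wrap the raw text in a Spring AI Document (not the JPA entity above)
        Document source = new Document(documentContent, metadata);

        // 2. Split into token-bounded chunks; metadata is propagated to each chunk
        List<Document> chunks = new TokenTextSplitter().apply(List.of(source));

        // 3. The store embeds and persists the chunks in one call
        vectorStore.add(chunks);
    }
}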

3.3 Intelligent Q&A Service

@Service
public class IntelligentQAService {
    
    @Autowired
    private ChatClient chatClient;
    
    // In Spring AI 0.8.x, streaming responses go through the separate StreamingChatClient interface
    @Autowired
    private StreamingChatClient streamingChatClient;
    
    @Autowired
    private DocumentProcessingService documentService;
    
    /**
     * RAG (Retrieval-Augmented Generation) Q&A
     */
    public AnswerResponse answerWithRAG(String question) {
        
        // 1. Retrieve the relevant document segments
        List<Document> relevantDocs = documentService
            .retrieveRelevantDocuments(question, 5);
        
        // 2. Build the context
        String context = buildContextFromDocuments(relevantDocs);
        
        // 3. Build the prompt
        PromptTemplate promptTemplate = new PromptTemplate("""
            You are a professional assistant. Answer the question based on the context below.
            If the context is not sufficient to answer the question, say that you do not know.
            
            Context:
            {context}
            
            Question: {question}
            
            Provide a detailed, accurate answer:
            """);
            
        Map<String, Object> variables = Map.of(
            "context", context,
            "question", question
        );
        
        Prompt prompt = promptTemplate.create(variables);
        
        // 4. Call the AI model to generate the answer
        ChatResponse response = chatClient.call(prompt);
        
        // 5. Build the response
        return AnswerResponse.builder()
            .question(question)
            .answer(response.getResult().getOutput().getContent())
            .sources(relevantDocs.stream()
                .map(Document::getMetadata)
                .collect(Collectors.toList()))
            .timestamp(LocalDateTime.now())
            .build();
    }
    
    /**
     * Streaming Q&A (suited to long answers)
     */
    public Flux<String> streamAnswer(String question) {
        Prompt prompt = new Prompt(new UserMessage(question));
        
        // stream() already returns a Flux, so map it directly instead of bridging through Flux.create
        return streamingChatClient.stream(prompt)
            .map(chatResponse -> chatResponse.getResult()
                                             .getOutput()
                                             .getContent())
            .filter(Objects::nonNull);
    }
    
    private String buildContextFromDocuments(List<Document> documents) {
        StringBuilder context = new StringBuilder();
        for (int i = 0; i < documents.size(); i++) {
            Document doc = documents.get(i);
            context.append(String.format("[Document segment %d]:\n%s\n\n", 
                i + 1, doc.getContent()));
        }
        return context.toString();
    }
}

@Data
@Builder
class AnswerResponse {
    private String question;
    private String answer;
    private List<Map<String, Object>> sources;
    private LocalDateTime timestamp;
}

3.4 REST API Design

@RestController
@RequestMapping("/api/ai")
@Validated
@Tag(name = "Intelligent Q&A API", description = "Intelligent Q&A endpoints built on Spring AI")
public class QAController {
    
    @Autowired
    private IntelligentQAService qaService;
    
    @PostMapping("/answer")
    @Operation(summary = "Intelligent Q&A", description = "RAG-based question answering endpoint")
    public ResponseEntity<AnswerResponse> answerQuestion(
            @RequestBody @Valid QuestionRequest request) {
        
        AnswerResponse response = qaService.answerWithRAG(request.getQuestion());
        return ResponseEntity.ok(response);
    }
    
    @PostMapping(value = "/stream-answer", 
                produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    @Operation(summary = "Streaming Q&A", description = "Question answering endpoint with streaming output")
    public Flux<String> streamAnswer(
            @RequestBody @Valid QuestionRequest request) {
        
        return qaService.streamAnswer(request.getQuestion());
    }
    
    @PostMapping("/documents")
    @Operation(summary = "Upload document", description = "Upload a document to the knowledge base")
    public ResponseEntity<UploadResponse> uploadDocument(
            @RequestBody @Valid DocumentUploadRequest request) {
        
        // Document upload handling goes here
        return ResponseEntity.ok(UploadResponse.success());
    }
}

@Data
class QuestionRequest {
    @NotBlank(message = "The question must not be blank")
    @Size(max = 1000, message = "The question must not exceed 1000 characters")
    private String question;
    
    private String contextId; // Conversation context ID
}

4. Advanced Features

4.1 Conversation Context Management

@Component
public class ConversationContextManager {
    
    private final Map<String, List<Message>> conversationHistory = 
        new ConcurrentHashMap<>();
    
    private static final int MAX_HISTORY = 10;
    
    /**
     * Add a message to the conversation context
     */
    public void addMessage(String sessionId, Message message) {
        conversationHistory
            .computeIfAbsent(sessionId, k -> new ArrayList<>())
            .add(message);
        
        // Keep only the most recent messages; copy the sub-list so we don't retain a view of the old list
        List<Message> history = conversationHistory.get(sessionId);
        if (history.size() > MAX_HISTORY) {
            conversationHistory.put(sessionId, new ArrayList<>(
                history.subList(history.size() - MAX_HISTORY, history.size())));
        }
    }
    
    /**
     * Build a prompt that carries the conversation context
     */
    public Prompt buildContextualPrompt(String sessionId, String newQuestion) {
        List<Message> history = conversationHistory.getOrDefault(sessionId, 
            new ArrayList<>());
        
        List<Message> messages = new ArrayList<>(history);
        messages.add(new UserMessage(newQuestion));
        
        return new Prompt(messages);
    }
}
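
A short usage sketch (the wrapping service is illustrative, not part of the original code): record both the user's question and the model's answer after each call, so the next turn sees the full history.

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.messages.AssistantMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.stereotype.Service;

@Service
public class ContextualChatService {

    private final ChatClient chatClient;
    private final ConversationContextManager contextManager;

    public ContextualChatService(ChatClient chatClient,
                                 ConversationContextManager contextManager) {
        this.chatClient = chatClient;
        this.contextManager = contextManager;
    }

    public String chat(String sessionId, String question) {
        // Build a prompt that includes the previous turns of this session
        Prompt prompt = contextManager.buildContextualPrompt(sessionId, question);
        ChatResponse response = chatClient.call(prompt);
        String answer = response.getResult().getOutput().getContent();

        // Persist both sides of the exchange for the next turn
        contextManager.addMessage(sessionId, new UserMessage(question));
        contextManager.addMessage(sessionId, new AssistantMessage(answer));
        return answer;
    }
}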

4.2 Asynchronous Batch Processing

@Service
@Slf4j
public class BatchProcessingService {
    
    @Autowired
    private AsyncTaskExecutor taskExecutor;
    
    @Autowired
    private EmbeddingClient embeddingClient;
    
    @Autowired
    private DocumentRepository documentRepository;
    
    /**
     * Batch document processing.
     * runAsync with the injected executor is used instead of @Async so the work is not scheduled twice.
     */
    public CompletableFuture<Void> batchProcessDocuments(
            List<Document> documents) {
        
        return CompletableFuture.runAsync(() -> {
            int batchSize = 100;
            for (int i = 0; i < documents.size(); i += batchSize) {
                List<Document> batch = documents.subList(i, 
                    Math.min(i + batchSize, documents.size()));
                
                processBatch(batch);
                
                log.info("Processed {}/{} documents", 
                    Math.min(i + batchSize, documents.size()), 
                    documents.size());
            }
        }, taskExecutor);
    }
    
    private void processBatch(List<Document> batch) {
        // Batch embedding
        List<String> contents = batch.stream()
            .map(Document::getContent)
            .collect(Collectors.toList());
        
        List<List<Double>> embeddings = embeddingClient.embed(contents);
        
        // Attach the embeddings and save the batch
        for (int i = 0; i < batch.size(); i++) {
            batch.get(i).setEmbedding(
                convertToFloatArray(embeddings.get(i)));
        }
        documentRepository.saveAll(batch);
    }
    
    // Same conversion helper as in DocumentProcessingService
    private float[] convertToFloatArray(List<Double> doubleList) {
        float[] floatArray = new float[doubleList.size()];
        for (int i = 0; i < doubleList.size(); i++) {
            floatArray[i] = doubleList.get(i).floatValue();
        }
        return floatArray;
    }
}

5. Performance Optimization and Monitoring

5.1 Caching Strategy

@Configuration
@EnableCaching
public class CacheConfig {
    
    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCaffeine(Caffeine.newBuilder()
            .expireAfterWrite(30, TimeUnit.MINUTES)
            .maximumSize(1000)
            .recordStats());
        return cacheManager;
    }
}

@Service
public class CachedQAService {
    
    @Autowired
    private IntelligentQAService qaService;
    
    // Key on the question text itself; hashCode() keys can collide
    @Cacheable(value = "answers", key = "#question")
    public AnswerResponse getCachedAnswer(String question) {
        return qaService.answerWithRAG(question);
    }
}

5.2 Monitoring and Metrics

@Component
public class AIMetrics {
    
    private final MeterRegistry meterRegistry;
    
    private final Timer embeddingTimer;
    private final Timer chatTimer;
    
    public AIMetrics(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        
        this.embeddingTimer = Timer.builder("ai.embedding.duration")
            .description("Time spent on embedding calls")
            .register(meterRegistry);
            
        this.chatTimer = Timer.builder("ai.chat.duration")
            .description("Time spent on chat calls")
            .register(meterRegistry);
    }
    
    public <T> T recordEmbeddingTime(Supplier<T> supplier) {
        return embeddingTimer.record(supplier);
    }
    
    public void incrementError(String type) {
        meterRegistry.counter("ai.errors", "type", type).increment();
    }
}
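
A brief usage sketch (the wrapping service is hypothetical): route an embedding call through the timer and count failures with the error counter.

import org.springframework.ai.embedding.EmbeddingClient;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class MeteredEmbeddingService {

    private final EmbeddingClient embeddingClient;
    private final AIMetrics aiMetrics;

    public MeteredEmbeddingService(EmbeddingClient embeddingClient, AIMetrics aiMetrics) {
        this.embeddingClient = embeddingClient;
        this.aiMetrics = aiMetrics;
    }

    public List<Double> embedWithMetrics(String text) {
        try {
            // Timer.record(Supplier) measures the embedding latency
            return aiMetrics.recordEmbeddingTime(() -> embeddingClient.embed(text));
        } catch (RuntimeException e) {
            aiMetrics.incrementError("embedding");
            throw e;
        }
    }
}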

6. Testing Strategy

6.1 Unit Tests

@SpringBootTest
@AutoConfigureMockMvc
class IntelligentQAServiceTest {
    
    @MockBean
    private ChatClient chatClient;
    
    @MockBean
    private EmbeddingClient embeddingClient;
    
    @Autowired
    private IntelligentQAService qaService;
    
    @Test
    void testAnswerWithRAG() {
        // Mock the embedding result
        when(embeddingClient.embed(anyString()))
            .thenReturn(List.of(0.1, 0.2, 0.3));
        
        // Mock the AI answer
        ChatResponse mockResponse = new ChatResponse(
            List.of(new Generation("This is a mock answer")));
        when(chatClient.call(any(Prompt.class)))
            .thenReturn(mockResponse);
        
        AnswerResponse response = qaService.answerWithRAG("test question");
        
        assertNotNull(response);
        assertEquals("This is a mock answer", response.getAnswer());
    }
}

6.2 Integration Tests

@Testcontainers
@SpringBootTest
class QASystemIntegrationTest {
    
    @Container
    static PostgreSQLContainer<?> postgres = 
        new PostgreSQLContainer<>("pgvector/pgvector:pg16")
            .withDatabaseName("testdb");
    
    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }
    
    @Test
    void testCompleteWorkflow() {
        // Full end-to-end integration test flow goes here
    }
}

7. Deployment and Production Practices

7.1 Docker Containerization

# Dockerfile
FROM openjdk:17-jdk-slim
WORKDIR /app

COPY target/*.jar app.jar
COPY entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

EXPOSE 8080

ENTRYPOINT ["/entrypoint.sh"]

#!/bin/bash
# entrypoint.sh
java -jar \
  -Dspring.profiles.active=${SPRING_PROFILES_ACTIVE:-prod} \
  -Dserver.port=${SERVER_PORT:-8080} \
  -Dspring.ai.openai.api-key=${OPENAI_API_KEY} \
  app.jar

7.2 Kubernetes Deployment Configuration

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springai-qa-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springai-qa
  template:
    metadata:
      labels:
        app: springai-qa
    spec:
      containers:
      - name: qa-service
        image: springai-qa:latest
        ports:
        - containerPort: 8080
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: ai-secrets
              key: openai-api-key
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10

8. Summary and Outlook

Through the end-to-end walkthrough in this article, we built an enterprise-grade intelligent Q&A system on top of Spring AI. The system highlights Spring AI's strengths in the following areas:

  1. Development efficiency: the Spring-style API greatly reduces the complexity of AI integration
  2. Clear architecture: the layered design keeps the code maintainable and extensible
  3. Production readiness: monitoring, caching, and fault-tolerance mechanisms are built in
  4. Rich ecosystem: seamless integration with the rest of the Spring stack

As the Spring AI ecosystem keeps evolving, we can look forward to more capabilities, such as:

  • Switching among multiple model providers
  • More advanced prompt-engineering tooling
  • Automated model evaluation and optimization
  • Federated learning support

Spring AI opens the door to AI application development for Java developers and helps make AI capabilities a standard part of enterprise applications. Whether you are a Spring developer getting into AI or an AI engineer building enterprise applications, Spring AI is well worth learning in depth.


Tech Stack Summary

  • Spring Boot 3.x
  • Spring AI 0.8+
  • PostgreSQL + pgvector
  • OpenAI GPT API
  • Docker & Kubernetes
  • Micrometer for monitoring

I hope this article gives you a complete roadmap for putting Spring AI into practice. Good luck building applications on large language models!