LangChain4j Basics in Practice
1. Introduction
LangChain for Java: a framework for working with large language models from Java. LangChain4j's goal is to simplify the process of integrating LLMs into Java applications.
GitHub: github.com/langchain4j…
2. Basic Usage
2.1. Hello World
2.1.1. Prerequisites
We connect to the Qwen (通义千问) model on Alibaba Cloud's Bailian platform: bailian.console.aliyun.com
Before calling the model we need three things: an api-key, the model name, and the baseUrl.
api-key: sk-c****3b12
Model name: qwen-plus
baseUrl: dashscope.aliyuncs.com/compatible-…
Add the dependencies
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai</artifactId>
</dependency>
<!-- langchain4j high-level API -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j</artifactId>
</dependency>
2.1.2. Code
Configuration file (application.properties)
server.port=9001
spring.application.name=langchain4j-01helloworld
Main application class
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class HelloLangChain4JApp
{
public static void main(String[] args)
{
SpringApplication.run(HelloLangChain4JApp.class,args);
}
}
Configuration class
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class LLMConfig
{
@Bean
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
// .apiKey(System.getenv("aliQwen-api")) // read the key from an environment variable
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
}
controller
import dev.langchain4j.model.chat.ChatModel;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 21:43
* @Description: hello-world controller
*/
@RestController
@Slf4j
public class HelloLangChain4JController
{
@Resource
private ChatModel chatModel;
@GetMapping(value = "/langchain4j/hello")
public String hello(@RequestParam(value = "question", defaultValue = "你是谁") String question)
{
String result = chatModel.chat(question);
System.out.println("Model reply: " + result);
return result;
}
}
2.1.3. Test
http://localhost:9001/langchain4j/hello?question=如何学习Java
2.2. Multiple Models Side by Side
Use the qwen-plus and DeepSeek models together.
2.2.1. Preparation
api-key: sk-fbe1701f86464a11bfa59b8dfbcf7f69
Model name: deepseek-chat
base URL: api.deepseek.com/v1
2.2.2. Code
Configuration class:
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
@Bean(name = "deepseek")
public ChatModel chatModelDeepSeek()
{
return OpenAiChatModel.builder()
.apiKey("sk-fbe1701f86464a11bfa59b8dfbcf7f69")
.modelName("deepseek-chat")
//.modelName("deepseek-reasoner")
.baseUrl("https://api.deepseek.com/v1")
.build();
}
}
controller:
import dev.langchain4j.model.chat.ChatModel;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
@Slf4j
public class MultiModelController
{
@Resource(name = "qwen")
private ChatModel chatModelQwen;
@Resource(name = "deepseek")
private ChatModel chatModelDeepSeek;
// http://localhost:9002/multimodel/qwen
@GetMapping(value = "/multimodel/qwen")
public String qwenCall(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
String result = chatModelQwen.chat(prompt);
System.out.println("Model response via LangChain4j:\n" + result);
return result;
}
// http://localhost:9002/multimodel/deepseek
@GetMapping(value = "/multimodel/deepseek")
public String deepseekCall(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
String result = chatModelDeepSeek.chat(prompt);
System.out.println("Model response via LangChain4j:\n" + result);
return result;
}
}
2.2.3. Test
Visit http://localhost:9002/multimodel/qwen
Visit http://localhost:9002/multimodel/deepseek
2.3. Integrating LangChain4j with Spring Boot
2.3.1. Overview
You can use either the plain LangChain4j dependencies or the Spring Boot starters.
Both come in low-level and high-level flavors.
Low-level vs. high-level API
| Aspect | Low-level API | High-level API |
|---|---|---|
| Abstraction | Low, close to the basic components | High, encapsulates complete patterns |
| Code volume | More, manual orchestration needed | Little, very concise |
| Control | Strong, every detail controllable | Weaker, relies on framework conventions |
| Flexibility | High, easy to customize deeply | Lower, suited to standard scenarios |
| Learning curve | Steeper, requires understanding each component | Low, very easy to pick up |
| Best for | Complex, non-standard logic; learning the framework internals | Rapid prototyping; apps that follow common patterns (chat, RAG) |
How to choose
- Prefer the high-level API (**AiServices**): for the vast majority of common scenarios, such as chat bots or document Q&A systems, AiServices is the first choice. It gets you working code fastest and with the least code.
- Fall back to the low-level API when the high-level one cannot meet your needs: very fine-grained prompt engineering, custom conversation-memory strategies, or combining multiple models and tools in a non-standard way call for orchestrating the whole flow manually with the low-level API.
2.3.2. Preparation
Dependencies
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
</dependency>
<!-- LangChain4j high-level Spring Boot support -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-spring-boot-starter</artifactId>
</dependency>
Configuration file
server.port=9003
spring.application.name=langchain4j-12boot-integration
# https://docs.langchain4j.dev/tutorials/spring-boot-integration
langchain4j.open-ai.chat-model.api-key=sk-cf4de7068032411b9de71adde80e3b12
langchain4j.open-ai.chat-model.model-name=qwen-plus
langchain4j.open-ai.chat-model.base-url=https://dashscope.aliyuncs.com/compatible-mode/v1
2.3.3. Code
2.3.3.1. Low-level
controller
import dev.langchain4j.model.chat.ChatModel;
import jakarta.annotation.Resource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class PopularIntegrationController
{
@Resource
private ChatModel chatModel;
// http://localhost:9003/lc4j/boot/chat
@GetMapping(value = "/lc4j/boot/chat")
public String chat(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
return chatModel.chat(prompt);
}
}
2.3.3.2. High-level
controller
import com.atguigu.study.service.ChatAssistant;
import jakarta.annotation.Resource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* @author zzyybs@126.com
* @Date 2025-06-18 15:43
* @Description: declarative AI service controller
*/
@RestController
public class DeclarativeAIServiceController
{
@Resource
private ChatAssistant chatAssistantQwen;
// http://localhost:9003/lc4j/boot/declarative
@GetMapping(value = "/lc4j/boot/declarative")
public String declarative(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
return chatAssistantQwen.chat(prompt);
}
}
service interface
import dev.langchain4j.service.spring.AiService;
@AiService
public interface ChatAssistant
{
String chat(String prompt);
}
2.3.4. Test
Visit http://localhost:9003/lc4j/boot/chat to test the low-level API
Visit http://localhost:9003/lc4j/boot/declarative to test the high-level API
3. Using the High-Level and Low-Level APIs
Low-Level APIs
The low-level APIs are LangChain4j's basic, standalone building blocks. To use them, you "wire" or "orchestrate" these modules together yourself to implement a complete feature.
Characteristics:
- Full control: you decide each step precisely, e.g. how prompts are built, when the model is called, how memory is managed.
- Maximum flexibility: unconventional or highly customized logic is possible.
- More verbose code: more "glue code" is needed to connect the components.
- Clearer flow: every step is an explicit call, which is very helpful for learning how things work internally.
High-Level APIs
The high-level APIs are convenient abstractions built on top of the low-level components. They package common application patterns (such as chat and RAG) behind an extremely concise interface.
Characteristics:
- Extremely concise: complex features in minimal code, which greatly speeds up development.
- Convention over configuration: the framework handles many details for you, so you only need to focus on business logic.
- Easy to pick up: very beginner-friendly; prototypes come together quickly.
- Less flexible: if your needs deviate from the preset patterns, customization becomes difficult.
3.1. Using the Low-Level API
3.1.1. Preparation
Configuration class
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/get-started
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
/**
* @Description: Reference: https://api-docs.deepseek.com/zh-cn/
* @Author: zzyybs@126.com
*/
@Bean(name = "deepseek")
public ChatModel chatModelDeepSeek()
{
return OpenAiChatModel.builder()
.apiKey("sk-fbe1701f86464a11bfa59b8dfbcf7f69")
.modelName("deepseek-chat")
//.modelName("deepseek-reasoner")
.baseUrl("https://api.deepseek.com/v1")
.build();
}
}
3.1.2. Code
Use DeepSeek for a Q&A call and compute the token usage.
controller
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.output.TokenUsage;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
@Slf4j
public class LowApiController
{
@Resource(name = "deepseek")
private ChatModel chatModelDeepSeek;
@GetMapping(value = "/lowapi/api02")
public String api02(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
ChatResponse chatResponse = chatModelDeepSeek.chat(UserMessage.from(prompt));
String result = chatResponse.aiMessage().text();
System.out.println("Model response: " + result);
// low-level API for token-usage accounting
TokenUsage tokenUsage = chatResponse.tokenUsage();
System.out.println("Tokens consumed by this call: " + tokenUsage);
result = result + "\t\n" + tokenUsage;
return result;
}
}
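The TokenUsage returned above exposes inputTokenCount(), outputTokenCount() and totalTokenCount(); from those counts you can estimate what a call cost. A self-contained sketch of that arithmetic; the per-1K-token prices below are made-up placeholders, check your provider's actual price list:

```java
import java.math.BigDecimal;

public class TokenCost {
    // Estimate the cost of one call from token counts and per-1K-token prices.
    static BigDecimal cost(int inputTokens, int outputTokens,
                           BigDecimal inPricePer1K, BigDecimal outPricePer1K) {
        BigDecimal in = inPricePer1K.multiply(BigDecimal.valueOf(inputTokens))
                .divide(BigDecimal.valueOf(1000));
        BigDecimal out = outPricePer1K.multiply(BigDecimal.valueOf(outputTokens))
                .divide(BigDecimal.valueOf(1000));
        return in.add(out);
    }

    public static void main(String[] args) {
        // e.g. tokenUsage.inputTokenCount() = 6, outputTokenCount() = 42 from the ChatResponse
        BigDecimal total = cost(6, 42,
                new BigDecimal("0.002"),   // hypothetical price per 1K input tokens
                new BigDecimal("0.008"));  // hypothetical price per 1K output tokens
        System.out.println("estimated cost: " + total.stripTrailingZeros().toPlainString());
    }
}
```

Plugging in real prices and the counts from chatResponse.tokenUsage() gives a per-request cost you can log or bill against.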
3.1.3. Test
Visit http://localhost:9004/lowapi/api02
3.2. Using the High-Level API
Use the Qwen model through a high-level API call.
3.3. Code
Configuration class
import com.atguigu.study.service.ChatAssistant;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/get-started
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
/**
* @Description: Reference: https://api-docs.deepseek.com/zh-cn/
* @Author: zzyybs@126.com
*/
@Bean(name = "deepseek")
public ChatModel chatModelDeepSeek()
{
return OpenAiChatModel.builder()
.apiKey("sk-fbe1701f86464a11bfa59b8dfbcf7f69")
.modelName("deepseek-chat")
//.modelName("deepseek-reasoner")
.baseUrl("https://api.deepseek.com/v1")
.build();
}
// High-level API: https://docs.langchain4j.dev/tutorials/ai-services#simplest-ai-service
@Bean
public ChatAssistant chatAssistant(@Qualifier("qwen") ChatModel chatModelQwen)
{
return AiServices.create(ChatAssistant.class, chatModelQwen);
}
}
controller
import com.atguigu.study.service.ChatAssistant;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* @author zzyybs@126.com
* @Date 2025-05-28 17:32
* @Description: high-level API controller
*/
@RestController
@Slf4j
public class HighApiController
{
@Resource
private ChatAssistant chatAssistant;
@GetMapping(value = "/highapi/api01")
public String highApi(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
return chatAssistant.chat(prompt);
}
}
service
/**
* @Description: By the usual Java convention, an interface comes with an implementation class:
* interface ChatAssistant would normally have a ChatAssistantImpl.
* With the high-level AiServices API you do not write the impl yourself; LangChain4j generates it for you.
*
* This module uses plain LangChain4j without Spring Boot, so the interface does not need the @AiService annotation.
*/
public interface ChatAssistant
{
String chat(String prompt);
}
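As the comment above explains, AiServices generates the implementation for you; under the hood it builds a JDK dynamic proxy for the interface. A simplified, self-contained illustration of that mechanism (not LangChain4j's actual code; fakeModelCall is a stand-in for the real ChatModel call):

```java
import java.lang.reflect.Proxy;

public class ProxySketch {
    interface ChatAssistant {
        String chat(String prompt);
    }

    // Stand-in for a ChatModel call; in LangChain4j the proxy delegates to the real model.
    static String fakeModelCall(String prompt) {
        return "echo: " + prompt;
    }

    public static void main(String[] args) {
        // Generate an implementation of the interface at runtime; no Impl class written by hand.
        ChatAssistant assistant = (ChatAssistant) Proxy.newProxyInstance(
                ChatAssistant.class.getClassLoader(),
                new Class<?>[]{ChatAssistant.class},
                (proxy, method, methodArgs) -> fakeModelCall((String) methodArgs[0]));

        System.out.println(assistant.chat("hi"));
    }
}
```

AiServices adds much more on top (prompt templates, memory, tools), but the interface-to-proxy step is the core trick.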
3.4. Test
Visit: http://localhost:9004/highapi/api01?prompt=1+1=等于几
4. Model Parameter Configuration
Configuring a few parameters lets you customize the model and enable some advanced capabilities.
4.1. Logging
4.1.1. Preparation
Dependency
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.5.8</version>
</dependency>
If you use the LangChain4j Spring Boot starters, also add this to the configuration file:
logging.level.dev.langchain4j = DEBUG
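When using plain LangChain4j without Spring Boot, logback-classic reads a logback.xml from the classpath instead of the property above. A minimal sketch (appender name and pattern are illustrative):

```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- DEBUG on dev.langchain4j is what makes logRequests/logResponses output visible -->
    <logger name="dev.langchain4j" level="DEBUG"/>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```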
4.1.2. Code
controller
import dev.langchain4j.model.chat.ChatModel;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* @author zzyybs@126.com
* @Date 2025-05-28 18:33
* @Description: Reference: https://docs.langchain4j.dev/tutorials/model-parameters/
*/
@RestController
@Slf4j
public class ModelParameterController
{
@Resource
private ChatModel chatModelQwen;
// http://localhost:9005/modelparam/config
@GetMapping(value = "/modelparam/config")
public String config(@RequestParam(value = "prompt", defaultValue = "你是谁") String prompt)
{
String result = chatModelQwen.chat(prompt);
System.out.println("Model response via LangChain4j: " + result);
return result;
}
}
config
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/tutorials/model-parameters/
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.logRequests(true)
.logResponses(true)
.build();
}
}
4.1.3. Test
Visit http://localhost:9005/modelparam/config and the request/response logs are printed to the console.
4.2. Monitoring
4.2.1. Overview
Similar to around advice in Spring: implement the ChatModelListener interface provided by LangChain4j to observe the following events:
- requests sent to the LLM
- responses from the LLM
- errors
ChatModelListener
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
/**
* A {@link ChatModel} listener that listens for requests, responses and errors.
*/
public interface ChatModelListener {
/**
* This method is called before the request is sent to the model.
*
* @param requestContext The request context. It contains the {@link ChatRequest} and attributes.
* The attributes can be used to pass data between methods of this listener
* or between multiple listeners.
*/
default void onRequest(ChatModelRequestContext requestContext) {
}
/**
* This method is called after the response is received from the model.
*
* @param responseContext The response context.
* It contains {@link ChatResponse}, corresponding {@link ChatRequest} and attributes.
* The attributes can be used to pass data between methods of this listener
* or between multiple listeners.
*/
default void onResponse(ChatModelResponseContext responseContext) {
}
/**
* This method is called when an error occurs during interaction with the model.
*
* @param errorContext The error context.
* It contains the error, corresponding {@link ChatRequest},
* partial {@link ChatResponse} (if available) and attributes.
* The attributes can be used to pass data between methods of this listener
* or between multiple listeners.
*/
default void onError(ChatModelErrorContext errorContext) {
}
}
4.2.2. Preparation
Create TestChatModelListener implementing the ChatModelListener interface. Before the LLM is called, generate a random string and store it in the attributes of the ChatModelRequestContext.
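The attributes map is simply a mutable map shared by every callback of a single model call, so anything stored in onRequest can be read back in onResponse or onError. The pattern, stripped of the LangChain4j types (a self-contained illustration, not library code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class AttributesSketch {
    public static void main(String[] args) {
        Map<Object, Object> attributes = new HashMap<>(); // shared per-call context

        // onRequest phase: store a trace id before the call goes out
        String traceId = UUID.randomUUID().toString().replace("-", "");
        attributes.put("TraceID", traceId);

        // onResponse phase: the same map instance is handed back, so the id is available
        Object seen = attributes.get("TraceID");
        System.out.println(traceId.equals(seen));
    }
}
```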
TestChatModelListener
import cn.hutool.core.util.IdUtil;
import dev.langchain4j.model.chat.listener.ChatModelErrorContext;
import dev.langchain4j.model.chat.listener.ChatModelListener;
import dev.langchain4j.model.chat.listener.ChatModelRequestContext;
import dev.langchain4j.model.chat.listener.ChatModelResponseContext;
import lombok.extern.slf4j.Slf4j;
/**
* @author zzyybs@126.com
* @Date 2025-05-28 18:53
* @Description: Reference: https://docs.langchain4j.dev/tutorials/spring-boot-integration#observability
*/
@Slf4j
public class TestChatModelListener implements ChatModelListener
{
@Override
public void onRequest(ChatModelRequestContext requestContext)
{
// key/value pairs set in onRequest can be read in onResponse; handy for passing context between phases
String uuidValue = IdUtil.simpleUUID();
requestContext.attributes().put("TraceID", uuidValue);
log.info("onRequest requestContext: {}", requestContext + "\t" + uuidValue);
}
@Override
public void onResponse(ChatModelResponseContext responseContext)
{
Object object = responseContext.attributes().get("TraceID");
log.info("onResponse TraceID: {}", object);
}
@Override
public void onError(ChatModelErrorContext errorContext)
{
log.error("onError ChatModelErrorContext: {}", errorContext);
}
}
4.2.3. Code
config
import com.atguigu.study.listener.TestChatModelListener;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.List;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/tutorials/model-parameters/
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.logRequests(true)
.logResponses(true)
.listeners(List.of(new TestChatModelListener()))
.build();
}
}
4.2.4. Test
Visit http://localhost:9005/modelparam/config
4.3. Retry
4.3.1. Overview
Calls to a model through LangChain4j can occasionally be interrupted or fail. Setting the retry parameter makes the call retry automatically after a failure.
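Conceptually the retry mechanism is a loop around the HTTP call that swallows a failure and tries again until the attempts are used up. A simplified, self-contained sketch of the idea (not LangChain4j's actual RetryUtils; here maxRetries counts additional attempts after the first):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Run the action; after a failure, retry up to maxRetries more times.
    static <T> T withRetry(Callable<T> action, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails on the first call and succeeds on the second, like a transient network error.
        String result = withRetry(() -> {
            if (calls[0]++ == 0) throw new RuntimeException("Connection reset");
            return "ok";
        }, 2);
        System.out.println(result + " after " + calls[0] + " call(s)");
    }
}
```

In the real client the retried action is the HTTP request to the model endpoint, and only retriable exceptions trigger another attempt.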
4.3.2. Code
import com.atguigu.study.listener.TestChatModelListener;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.List;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/tutorials/model-parameters/
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey("sk-cf4de7068032411b9de71adde80e3b12")
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.logRequests(true)
.logResponses(true)
.listeners(List.of(new TestChatModelListener()))
.maxRetries(2) // retry up to 2 times after a failure
.build();
}
}
4.3.3. Test
Disconnect the network and visit http://localhost:9005/modelparam/config
Console output
2025-07-17T14:26:34.068+08:00 INFO 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] c.a.s.listener.TestChatModelListener : onRequest requestContext: dev.langchain4j.model.chat.listener.ChatModelRequestContext@19ea835b 3b16258519f64cf3a9b462f0517090ea
2025-07-17T14:26:34.068+08:00 INFO 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] d.l.http.client.log.LoggingHttpClient : HTTP request:
- method: POST
- url: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
- headers: [Authorization: Beare...12], [User-Agent: langchain4j-openai], [Content-Type: application/json]
- body: {
"model" : "qwen-plus",
"messages" : [ {
"role" : "user",
"content" : "你是谁"
} ],
"stream" : false
}
2025-07-17T14:26:53.256+08:00 WARN 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] dev.langchain4j.internal.RetryUtils : A retriable exception occurred. Remaining retries: 2 of 2
java.lang.RuntimeException: java.io.IOException: Connection reset
at dev.langchain4j.http.client.jdk.JdkHttpClient.execute(JdkHttpClient.java:60) ~[langchain4j-http-client-jdk-1.0.1.jar:na]
at dev.langchain4j.http.client.log.LoggingHttpClient.execute(LoggingHttpClient.java:39) ~[langchain4j-http-client-1.0.1.jar:na]
at dev.langchain4j.model.openai.internal.SyncRequestExecutor.execute(SyncRequestExecutor.java:20) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils$RetryPolicy.withRetry(RetryUtils.java:211) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.model.openai.OpenAiChatModel.doChat(OpenAiChatModel.java:142) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.model.chat.ChatModel.chat(ChatModel.java:46) ~[langchain4j-core-1.0.1.jar:na]
at com.atguigu.study.controller.ModelParameterController.config(ModelParameterController.java:29) ~[classes/:na]
... (Spring MVC and Tomcat frames omitted)
Caused by: java.io.IOException: Connection reset
at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:586) ~[java.net.http:na]
... 64 common frames omitted
Caused by: java.net.SocketException: Connection reset
... (JDK HTTP client frames omitted)
2025-07-17T14:26:53.849+08:00 INFO 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] d.l.http.client.log.LoggingHttpClient : HTTP request:
- method: POST
- url: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
- headers: [Authorization: Beare...12], [User-Agent: langchain4j-openai], [Content-Type: application/json]
- body: {
"model" : "qwen-plus",
"messages" : [ {
"role" : "user",
"content" : "你是谁"
} ],
"stream" : false
}
2025-07-17T14:26:53.854+08:00 ERROR 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] c.a.s.listener.TestChatModelListener : onError ChatModelErrorContext: dev.langchain4j.model.chat.listener.ChatModelErrorContext@35277b72
2025-07-17T14:26:53.859+08:00 ERROR 32588 --- [langchain4j-05model-parameters] [nio-9005-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: dev.langchain4j.exception.UnresolvedModelServerException] with root cause
java.nio.channels.UnresolvedAddressException: null
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149) ~[na:na]
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157) ~[na:na]
at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:816) ~[na:na]
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:839) ~[na:na]
at java.net.http/jdk.internal.net.http.PlainHttpConnection.lambda$connectAsync$0(PlainHttpConnection.java:183) ~[java.net.http:na]
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569) ~[na:na]
at java.net.http/jdk.internal.net.http.PlainHttpConnection.connectAsync(PlainHttpConnection.java:185) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.AsyncSSLConnection.connectAsync(AsyncSSLConnection.java:56) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Http2Connection.createAsync(Http2Connection.java:378) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Http2ClientImpl.getConnectionFor(Http2ClientImpl.java:128) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.ExchangeImpl.get(ExchangeImpl.java:93) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Exchange.establishExchange(Exchange.java:343) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl0(Exchange.java:475) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Exchange.responseAsyncImpl(Exchange.java:380) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.Exchange.responseAsync(Exchange.java:372) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:408) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.MultiExchange.lambda$responseAsyncImpl$7(MultiExchange.java:449) ~[java.net.http:na]
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:950) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2340) ~[na:na]
at java.net.http/jdk.internal.net.http.MultiExchange.responseAsyncImpl(MultiExchange.java:439) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.MultiExchange.lambda$responseAsync0$2(MultiExchange.java:341) ~[java.net.http:na]
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1150) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1773) ~[na:na]
at java.net.http/jdk.internal.net.http.HttpClientImpl$DelegatingExecutor.execute(HttpClientImpl.java:157) ~[java.net.http:na]
at java.base/java.util.concurrent.CompletableFuture.completeAsync(CompletableFuture.java:2673) ~[na:na]
at java.net.http/jdk.internal.net.http.MultiExchange.responseAsync(MultiExchange.java:294) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.HttpClientImpl.sendAsync(HttpClientImpl.java:654) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.HttpClientImpl.send(HttpClientImpl.java:552) ~[java.net.http:na]
at java.net.http/jdk.internal.net.http.HttpClientFacade.send(HttpClientFacade.java:123) ~[java.net.http:na]
at dev.langchain4j.http.client.jdk.JdkHttpClient.execute(JdkHttpClient.java:50) ~[langchain4j-http-client-jdk-1.0.1.jar:na]
at dev.langchain4j.http.client.log.LoggingHttpClient.execute(LoggingHttpClient.java:39) ~[langchain4j-http-client-1.0.1.jar:na]
at dev.langchain4j.model.openai.internal.SyncRequestExecutor.execute(SyncRequestExecutor.java:20) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.model.openai.internal.RequestExecutor.execute(RequestExecutor.java:39) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.model.openai.OpenAiChatModel.lambda$doChat$0(OpenAiChatModel.java:143) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.internal.ExceptionMapper.withExceptionMapper(ExceptionMapper.java:29) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils.lambda$withRetryMappingExceptions$2(RetryUtils.java:324) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils$RetryPolicy.withRetry(RetryUtils.java:211) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils.withRetry(RetryUtils.java:264) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils.withRetryMappingExceptions(RetryUtils.java:324) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.internal.RetryUtils.withRetryMappingExceptions(RetryUtils.java:308) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.model.openai.OpenAiChatModel.doChat(OpenAiChatModel.java:142) ~[langchain4j-open-ai-1.0.1.jar:na]
at dev.langchain4j.model.chat.ChatModel.chat(ChatModel.java:46) ~[langchain4j-core-1.0.1.jar:na]
at dev.langchain4j.model.chat.ChatModel.chat(ChatModel.java:77) ~[langchain4j-core-1.0.1.jar:na]
at com.atguigu.study.controller.ModelParameterController.config(ModelParameterController.java:29) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) ~[spring-web-6.2.7.jar:6.2.7]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) ~[spring-web-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) ~[spring-webmvc-6.2.7.jar:6.2.7]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:903) ~[spring-webmvc-6.2.7.jar:6.2.7]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:564) ~[tomcat-embed-core-10.1.41.jar:6.0]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) ~[spring-webmvc-6.2.7.jar:6.2.7]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658) ~[tomcat-embed-core-10.1.41.jar:6.0]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) ~[tomcat-embed-websocket-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.2.7.jar:6.2.7]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.7.jar:6.2.7]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.2.7.jar:6.2.7]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.7.jar:6.2.7]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.2.7.jar:6.2.7]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.7.jar:6.2.7]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) ~[tomcat-embed-core-10.1.41.jar:10.1.41]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
4.4. Timeout mechanism
4.4.1. Overview
When a request is sent to the model and no response arrives within the configured time, the request is aborted and a "request timed out" error is raised.
4.4.2. Code
config
import com.atguigu.study.listener.TestChatModelListener;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Duration;
import java.util.List;
/**
* @author zzyybs@126.com
* @Date 2025-05-27 22:04
* @Description: Reference: https://docs.langchain4j.dev/tutorials/model-parameters/
*/
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.logRequests(true)
.logResponses(true)
.listeners(List.of(new TestChatModelListener()))
.maxRetries(2)
.timeout(Duration.ofSeconds(2)) //abort and raise "request timed out" if no response arrives within 2 seconds
.build();
}
}
4.4.3. Test
Visit http://localhost:9005/modelparam/config?prompt=介绍一下Java
This request is known to take longer than two seconds to answer, so the console prints "request timed out".
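The timeout semantics are the same as any future-style timeout in the JDK. The following is a plain-Java stand-in, not langchain4j code: `slowCall`, `callWithTimeout`, and the sleep/timeout values are invented for illustration, and the "request timed out" string merely mirrors the error message described above.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Simulates a model call that takes about 500 ms to answer
    static String slowCall() {
        try {
            TimeUnit.MILLISECONDS.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "answer";
    }

    // Runs the "model call" with a deadline; if the deadline passes first,
    // the pending call is abandoned and a timeout error is reported instead
    public static String callWithTimeout(long timeoutMillis) {
        try {
            return CompletableFuture.supplyAsync(TimeoutDemo::slowCall)
                    .orTimeout(timeoutMillis, TimeUnit.MILLISECONDS)
                    .join();
        } catch (Exception e) {
            if (e.getCause() instanceof TimeoutException) {
                return "request timed out";
            }
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // 100 ms budget vs a 500 ms call: the timeout fires first
        System.out.println(callWithTimeout(100)); // prints "request timed out"
    }
}
```

The `.timeout(Duration.ofSeconds(2))` builder option plays the role of the deadline here.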
5. Multimodal vision understanding
langchain4j supports more than text: it can also pass images, video, audio, PDFs, and other parsed content to the model. Of course, you must make sure the configured model itself can handle these file types.
5.1. Image understanding
Following the guidance in the langchain4j docs, the image is converted to a string via Base64 encoding.
5.1.1. Preparation
Neither deepseek-chat nor qwen-plus, which we used earlier, supports image understanding, so we need to switch models. The langchain4j docs (docs.langchain4j.dev/integration…) list models with image-understanding support; we use Tongyi Qianwen's qwen-vl-max model.
Usage of qwen-vl-max: help.aliyun.com/zh/model-st…
In the LLMConfig configuration class, change modelName and baseUrl; the api-key stays the same.
For image understanding we need to prepare an image for the model to analyze in advance. We place this image under src/main/resources/static/images and ask the model: 从下面图片中获取来源网站名称,分析股价走势和5月30号股价
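The Base64 step itself is plain JDK code. A minimal, self-contained sketch of turning image bytes into the string that `ImageContent.from(base64, mimeType)` expects; the byte values below are dummy stand-in data, not a real image:

```java
import java.util.Base64;

public class ImageBase64Demo {
    // Encodes raw image bytes into the Base64 string form the model API accepts
    public static String encode(byte[] imageBytes) {
        return Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        // JPEG files start with the bytes FF D8 FF; used here as stand-in data
        byte[] demo = {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF};
        System.out.println(encode(demo)); // prints /9j/
    }
}
```

In the real controller below, the bytes come from `resource.getContentAsByteArray()` instead of a hard-coded array.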
5.1.2. Code
LLMConfig
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 11:24
* @Description: Reference:
* https://docs.langchain4j.dev/tutorials/chat-and-language-models/#image-content
*/
@Configuration
public class LLMConfig
{
@Bean
public ChatModel imageModel() {
return OpenAiChatModel.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
//qwen-vl-max is a multimodal model that accepts combined image and text input, suited to vision-language tasks
.modelName("qwen-vl-max")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
}
controller
Following the usage shown in the langchain4j docs, we write the business class.
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.output.Response;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.Resource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.io.IOException;
import java.util.Base64;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 11:26
* @Description: Reference: https://docs.langchain4j.dev/tutorials/chat-and-language-models/#multimodality
*/
@RestController
@Slf4j
public class ImageModelController
{
@Autowired
private ChatModel chatModel;
@Value("classpath:static/images/mi.jpg")
private Resource resource; //org.springframework.core.io.Resource, not jakarta.annotation.Resource
/**
* @Description: Encode the image to a Base64 string, then combine
* ImageContent and TextContent into a UserMessage and send both to the model.
* @author zzyybs@126.com
*
* Test URL: http://localhost:9006/image/call
*/
@GetMapping(value = "/image/call")
public String readImageContent() throws IOException
{
String result = null;
//Step 1, encode: convert the image to a Base64 string
byte[] byteArray = resource.getContentAsByteArray();
String base64Data = Base64.getEncoder().encodeToString(byteArray);
//Step 2, build the prompt: combine ImageContent and TextContent and send both to the model
UserMessage userMessage = UserMessage.from(
TextContent.from("从下面图片中获取来源网站名称,分析股价走势和5月30号股价"),
ImageContent.from(base64Data, "image/jpg")
);
//Step 3, call the API: build the request with OpenAiChatModel and invoke the model via chat().
//The request carries both the text prompt and the image; the model returns its analysis.
ChatResponse chatResponse = chatModel.chat(userMessage);
//Step 4, parse and output: extract the model's reply from the ChatResponse
result = chatResponse.aiMessage().text();
//log to the console
System.out.println(result);
//return to the caller
return result;
}
}
5.1.3. Test
Visit http://localhost:9006/image/call
to get the answer from the qwen-vl-max model.
5.2. Text-to-image
Generate an image from a text prompt.
5.2.1. Preparation
We use Alibaba's Tongyi Wanxiang (Wanx) for image generation; it supports vision-language multimodal tasks.
docs.langchain4j.dev/integration…
Text-to-image relies on a third-party API, so the Qwen (DashScope) dependency must be added.
Add the dependency to the parent pom
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-bom</artifactId>
<version>${langchain4j-community.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
Add the dependency to the child pom
<!--DashScope (Qwen) integration with the Alibaba Cloud Bailian platform
https://docs.langchain4j.dev/integrations/language-models/dashscope
-->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-dashscope-spring-boot-starter</artifactId>
</dependency>
config
import dev.langchain4j.community.model.dashscope.WanxImageModel;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 11:24
* @Description: Reference:
* https://docs.langchain4j.dev/tutorials/chat-and-language-models/#image-content
*/
@Configuration
public class LLMConfig
{
@Bean
public ChatModel imageModel() {
return OpenAiChatModel.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
//qwen-vl-max is a multimodal model that accepts combined image and text input, suited to vision-language tasks
.modelName("qwen-vl-max")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
/**
* @Description: Generate an image with Tongyi Wanxiang (Wanx).
* Reference: https://help.aliyun.com/zh/model-studio/text-to-image
* @author zzyybs@126.com
*/
@Bean
public WanxImageModel wanxImageModel()
{
return WanxImageModel.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
.modelName("wanx2.1-t2i-turbo") //text-to-image, see https://help.aliyun.com/zh/model-studio/text-to-image
.build();
}
}
controller
The methods createImageContent2 and createImageContent3 call the model in different ways: createImageContent2 uses the langchain4j API to configure and query the model in the usual way, while createImageContent3 uses the native DashScope SDK. The difference lies in the abstraction level and the client library used.
| Aspect | createImageContent2 (LangChain4j) | createImageContent3 (native DashScope SDK) |
|---|---|---|
| Core library | dev.langchain4j.community.model.dashscope.WanxImageModel | com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesis |
| Abstraction level | High-level abstraction | Low-level native |
| Programming style | Framework-style call. LangChain4j wraps the underlying API in a standardized model interface; the developer works with LangChain4j's WanxImageModel rather than the Alibaba Cloud SDK directly. | Direct native SDK call. Uses Alibaba Cloud's official dashscope-sdk-java, with fine-grained control over the request and response. |
| Conciseness | Very concise; the core is one line: wanxImageModel.generate("美女"). | More verbose; you must manually build a detailed ImageSynthesisParam object: model name, prompt, style, count, size, and so on. |
| Configuration | Externalized. API key, model name, timeouts, etc. are managed centrally in Spring Boot configuration (e.g. application.yml), and an already-configured WanxImageModel is injected via @Autowired. This matches enterprise practice. | In-method. In this example the API key, model (WANX_V1), style, and other parameters are set directly inside the method, which is inflexible and harder to secure. |
| Flexibility and control | Lower. LangChain4j exposes common parameters; model-specific parameters not supported by the framework may be hard to reach. | Very high. Every parameter the native SDK supports can be set directly. |
| Return value | LangChain4j's standard Response | The SDK's ImageSynthesisResult object: JSON containing the task ID, status, per-image URLs, and other native fields. |
| When to use | Rapid development and integration, or complex applications that need to switch model providers easily; cleaner, more maintainable code. | Deep customization and fine-grained API control, or simple projects that call the API directly without a large framework. |
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesis;
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesisParam;
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesisResult;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;
import dev.langchain4j.community.model.dashscope.WanxImageModel;
import dev.langchain4j.data.image.Image;
import dev.langchain4j.model.output.Response;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.io.IOException;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 11:57
* @Description: Text-to-image demo controller
*/
@RestController
@Slf4j
public class WanxImageModelController
{
@Autowired
private WanxImageModel wanxImageModel;
// http://localhost:9006/image/create2
@GetMapping(value = "/image/create2")
public String createImageContent2() throws IOException
{
System.out.println(wanxImageModel);
Response<Image> imageResponse = wanxImageModel.generate("美女");
System.out.println(imageResponse.content().url());
return imageResponse.content().url().toString();
}
@GetMapping(value = "/image/create3")
public String createImageContent3() throws IOException
{
String prompt = "近景镜头,18岁的中国女孩,古代服饰,圆脸,正面看着镜头," +
"民族优雅的服装,商业摄影,室外,电影级光照,半身特写,精致的淡妆,锐利的边缘。";
ImageSynthesisParam param =
ImageSynthesisParam.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
.model(ImageSynthesis.Models.WANX_V1)
.prompt(prompt)
.style("<watercolor>")
.n(1)
.size("1024*1024")
.build();
ImageSynthesis imageSynthesis = new ImageSynthesis();
ImageSynthesisResult result = null;
try {
System.out.println("---sync call, please wait a moment----");
result = imageSynthesis.call(param);
} catch (ApiException | NoApiKeyException e){
throw new RuntimeException(e.getMessage());
}
System.out.println(JsonUtils.toJson(result));
return JsonUtils.toJson(result);
}
}
5.2.3. Test
Visit http://localhost:9006/image/create2 to test createImageContent2
Visit http://localhost:9006/image/create3 to test createImageContent3
6. Streaming output
6.1.1. Overview
Streaming is a technique that returns the model's output incrementally: the server pushes each chunk to the client as soon as it is generated, instead of waiting for the whole response to finish.
Streaming can be implemented in two ways:
- low-level: docs.langchain4j.dev/tutorials/r…
- high-level
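The low-level callback pattern can be illustrated without any network call. The sketch below is a simplified stand-in with the same handler shape as langchain4j's StreamingChatResponseHandler; the `Handler` interface, `fakeStream` method, and the canned answer are all invented for the illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingDemo {
    // Same shape as the real streaming handler, reduced to the essentials
    public interface Handler {
        void onPartialResponse(String token);
        void onCompleteResponse(String full);
    }

    // A fake "model" that streams a canned answer token by token,
    // then signals completion, exactly like a streaming chat model would
    public static void fakeStream(String answer, Handler handler) {
        for (String token : answer.split(" ")) {
            handler.onPartialResponse(token);
        }
        handler.onCompleteResponse(answer);
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        fakeStream("天津 好吃的 有 煎饼果子", new Handler() {
            @Override public void onPartialResponse(String token) { received.add(token); }
            @Override public void onCompleteResponse(String full) { received.add("<done>"); }
        });
        // tokens arrive one by one, then the completion signal
        System.out.println(received);
    }
}
```

In the real controller below, `emitter.next(...)` plays the role of collecting each partial token and `emitter.complete()` the role of the completion signal.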
6.1.2. Preparation
pom
The high-level approach requires an additional dependency:
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-reactor</artifactId>
</dependency>
Configuration file
Set the response character encoding to avoid garbled output in the stream:
server.servlet.encoding.charset=utf-8
server.servlet.encoding.enabled=true
server.servlet.encoding.force=true
6.1.3. Code
service
high-level
import reactor.core.publisher.Flux;
public interface ChatAssistant
{
//regular (blocking) answer
String chat(String prompt);
//streaming answer
Flux<String> chatFlux(String prompt);
}
config
import com.atguigu.study.service.ChatAssistant;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.StreamingChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.service.AiServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Duration;
import java.util.List;
@Configuration
public class LLMConfig
{
@Bean(name = "qwen")
public ChatModel chatModelQwen()
{
return OpenAiChatModel.builder()
.apiKey(System.getenv("aliQwen-api"))
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
@Bean
public StreamingChatModel streamingChatModel(){
return OpenAiStreamingChatModel.builder()
.apiKey(System.getenv("aliQwen-api"))
.modelName("qwen-plus")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
@Bean
public ChatAssistant chatAssistant(StreamingChatModel streamingChatModel){
return AiServices.create(ChatAssistant.class, streamingChatModel);
}
}
controller
import com.atguigu.study.service.ChatAssistant;
import dev.langchain4j.model.chat.StreamingChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import jakarta.annotation.Resource;
import jakarta.servlet.http.HttpServletResponse;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 16:25
* @Description: Reference: https://docs.langchain4j.dev/tutorials/response-streaming
*/
@RestController
@Slf4j
public class StreamingChatModelController
{
@Resource //uses the low-level LLM API directly
private StreamingChatModel streamingChatLanguageModel;
@Resource //our own interface backed by the high-level LLM API
private ChatAssistant chatAssistant;
// http://localhost:9007/chatstream/chat?prompt=天津有什么好吃的
@GetMapping(value = "/chatstream/chat")
public Flux<String> chat(@RequestParam("prompt") String prompt)
{
System.out.println("---come in chat");
return Flux.create(emitter -> {
streamingChatLanguageModel.chat(prompt, new StreamingChatResponseHandler()
{
@Override
public void onPartialResponse(String partialResponse)
{
emitter.next(partialResponse);
}
@Override
public void onCompleteResponse(ChatResponse completeResponse)
{
emitter.complete();
}
@Override
public void onError(Throwable throwable)
{
emitter.error(throwable);
}
});
});
}
@GetMapping(value = "/chatstream/chat3")
public Flux<String> chat3(@RequestParam(value = "prompt", defaultValue = "南京有什么好吃") String prompt)
{
System.out.println("---come in chat3");
return chatAssistant.chatFlux(prompt);
}
}
6.1.4. Test
Visit http://localhost:9007/chatstream/chat?prompt=天津有什么好吃的 to test the low-level API
Visit http://localhost:9007/chatstream/chat3 to test the high-level API
7. Chat memory
Chat memory lets the model remember earlier turns of a multi-round conversation with the user, so it can give coherent, personalized replies.
7.1. Overview
Note that the TokenWindowChatMemory option needs a Tokenizer to count tokens, and the counting takes time, especially when the history is long.
Since we are testing the memory feature and the conversation context can get long, we switch to qwen-long, a model that supports a larger context window.
7.2. Preparation
MessageWindowChatMemory is a simple implementation based on message count. It uses a sliding window that keeps the most recent n messages and evicts older ones.
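The sliding-window behavior described above can be sketched in plain Java. This is a simplified stand-in, not the actual MessageWindowChatMemory implementation; messages are plain strings here instead of ChatMessage objects:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class MessageWindowDemo {
    private final int maxMessages;
    private final Deque<String> window = new ArrayDeque<>();

    public MessageWindowDemo(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    // Append the newest message; once the window is full, evict the oldest one
    public void add(String message) {
        window.addLast(message);
        while (window.size() > maxMessages) {
            window.removeFirst();
        }
    }

    public List<String> messages() {
        return List.copyOf(window);
    }

    public static void main(String[] args) {
        MessageWindowDemo memory = new MessageWindowDemo(3);
        for (String m : new String[]{"m1", "m2", "m3", "m4"}) {
            memory.add(m);
        }
        System.out.println(memory.messages()); // prints [m2, m3, m4]
    }
}
```

`MessageWindowChatMemory.withMaxMessages(100)` in the config below applies the same idea with a window of 100 ChatMessages.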
7.3. Code (MessageWindowChatMemory-based)
config
Here we use the high-level approach and cap the window at 100 messages; once the count exceeds 100, older messages are evicted.
import com.atguigu.study.service.ChatAssistant;
import com.atguigu.study.service.ChatMemoryAssistant;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.memory.chat.TokenWindowChatMemory;
import dev.langchain4j.model.TokenCountEstimator;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiTokenCountEstimator;
import dev.langchain4j.service.AiServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class LLMConfig
{
@Bean
public ChatModel chatModel()
{
return OpenAiChatModel.builder()
.apiKey(System.getenv("aliQwen-api")) //read the key from an environment variable instead of hard-coding it
.modelName("qwen-long")
.baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
.build();
}
@Bean(name = "chat")
public ChatAssistant chatAssistant(ChatModel chatModel)
{
return AiServices.create(ChatAssistant.class, chatModel);
}
@Bean(name = "chatMessageWindowChatMemory")
public ChatMemoryAssistant chatMessageWindowChatMemory(ChatModel chatModel)
{
return AiServices.builder(ChatMemoryAssistant.class)
.chatModel(chatModel)
//creates a separate chatMemory for each memoryId
.chatMemoryProvider(memoryId -> MessageWindowChatMemory.withMaxMessages(100))
.build();
}
}
controller
The chat() method has no chat memory; chatMessageWindowChatMemory() uses chat memory.
import cn.hutool.core.date.DateUtil;
import com.atguigu.study.service.ChatAssistant;
import com.atguigu.study.service.ChatMemoryAssistant;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.response.ChatResponse;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.Arrays;
@RestController
@Slf4j
public class ChatMemoryController
{
@Resource(name = "chat")
private ChatAssistant chatAssistant;
@Resource(name = "chatMessageWindowChatMemory")
private ChatMemoryAssistant chatMessageWindowChatMemory;
@Resource(name = "chatTokenWindowChatMemory")
private ChatMemoryAssistant chatTokenWindowChatMemory;
@GetMapping(value = "/chatmemory/test1")
public String chat()
{
String answer01 = chatAssistant.chat("你好,我的名字叫张三");
System.out.println("answer01返回结果:"+answer01);
String answer02 = chatAssistant.chat("我的名字是什么");
System.out.println("answer02返回结果:"+answer02);
return "success : "+ DateUtil.now()+"<br> \n\n answer01: "+answer01+"<br> \n\n answer02: "+answer02;
}
@GetMapping(value = "/chatmemory/test2")
public String chatMessageWindowChatMemory()
{
chatMessageWindowChatMemory.chatWithChatMemory(1L, "你好!我的名字是Java.");
String answer01 = chatMessageWindowChatMemory.chatWithChatMemory(1L, "我的名字是什么");
System.out.println("answer01返回结果:"+answer01);
chatMessageWindowChatMemory.chatWithChatMemory(3L, "你好!我的名字是C++");
String answer02 = chatMessageWindowChatMemory.chatWithChatMemory(3L, "我的名字是什么");
System.out.println("answer02返回结果:"+answer02);
return "chatMessageWindowChatMemory success : "
+ DateUtil.now()+"<br> \n\n answer01: "+answer01+"<br> \n\n answer02: "+answer02;
}
}
service
A plain service without memory:
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;
public interface ChatAssistant
{
/**
* @Description: Plain chat, without memory
*/
String chat(String prompt);
}
A service with memory:
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 19:22
* @Description: Chat service with per-user memory
*/
public interface ChatMemoryAssistant
{
/**
* Chat with memory
*
* @param userId user ID
* @param prompt message
* @return {@link String }
*/
String chatWithChatMemory(@MemoryId Long userId, @UserMessage String prompt);
}
7.4. Test
Visit http://localhost:9008/chatmemory/test1 to test the interface without memory
Visit http://localhost:9008/chatmemory/test2 to test the MessageWindowChatMemory-based interface with memory
7.5. Code (TokenWindowChatMemory-based)
config
Here we again use the high-level approach and cap the memory at 1000 tokens.
import com.atguigu.study.service.ChatAssistant;
import com.atguigu.study.service.ChatMemoryAssistant;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.memory.chat.TokenWindowChatMemory;
import dev.langchain4j.model.TokenCountEstimator;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiTokenCountEstimator;
import dev.langchain4j.service.AiServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
/**
* @author zzyybs@126.com
* @Date 2025-05-30 18:58
* @Description: Reference: https://docs.langchain4j.dev/tutorials/chat-memory/#eviction-policy
*/
@Configuration
public class LLMConfig
{
@Bean(name = "chatTokenWindowChatMemory")
public ChatMemoryAssistant chatTokenWindowChatMemory(ChatModel chatModel)
{
//1 TokenCountEstimator: the default token counter; it needs a tokenizer to count the tokens in each ChatMessage
TokenCountEstimator openAiTokenCountEstimator = new OpenAiTokenCountEstimator("gpt-4");
return AiServices.builder(ChatMemoryAssistant.class)
.chatModel(chatModel)
.chatMemoryProvider(memoryId -> TokenWindowChatMemory.withMaxTokens(1000,openAiTokenCountEstimator))
.build();
}
}
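TokenWindowChatMemory applies the same eviction idea as MessageWindowChatMemory, except the budget is counted in estimated tokens rather than messages. The sketch below is a simplified stand-in, not the library implementation: it uses a crude chars/4 heuristic in place of a real TokenCountEstimator, and the class and method names are invented for the illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class TokenWindowDemo {
    private final int maxTokens;
    private final Deque<String> window = new ArrayDeque<>();

    public TokenWindowDemo(int maxTokens) {
        this.maxTokens = maxTokens;
    }

    // Crude estimate; the real OpenAiTokenCountEstimator uses the model's tokenizer
    static int estimateTokens(String message) {
        return Math.max(1, message.length() / 4);
    }

    private int totalTokens() {
        return window.stream().mapToInt(TokenWindowDemo::estimateTokens).sum();
    }

    // Append the newest message, then evict oldest messages until the whole
    // window fits within the token budget (always keeping at least one message)
    public void add(String message) {
        window.addLast(message);
        while (totalTokens() > maxTokens && window.size() > 1) {
            window.removeFirst();
        }
    }

    public List<String> messages() {
        return List.copyOf(window);
    }

    public static void main(String[] args) {
        TokenWindowDemo memory = new TokenWindowDemo(10);
        memory.add("aaaaaaaaaaaaaaaaaaaaaaaa"); // ~6 estimated tokens
        memory.add("bbbbbbbbbbbbbbbbbbbbbbbb"); // ~6 more, budget exceeded, first evicted
        System.out.println(memory.messages().size()); // prints 1
    }
}
```

This also shows why token counting costs time: every eviction check re-estimates the token count of the remaining history.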
controller
import cn.hutool.core.date.DateUtil;
import com.atguigu.study.service.ChatAssistant;
import com.atguigu.study.service.ChatMemoryAssistant;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.response.ChatResponse;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.Arrays;
@RestController
@Slf4j
public class ChatMemoryController
{
@Resource(name = "chat")
private ChatAssistant chatAssistant;
@Resource(name = "chatMessageWindowChatMemory")
private ChatMemoryAssistant chatMessageWindowChatMemory;
@Resource(name = "chatTokenWindowChatMemory")
private ChatMemoryAssistant chatTokenWindowChatMemory;
@GetMapping(value = "/chatmemory/test3")
public String chatTokenWindowChatMemory()
{
chatTokenWindowChatMemory.chatWithChatMemory(1L, "你好!我的名字是mysql");
String answer01 = chatTokenWindowChatMemory.chatWithChatMemory(1L, "我的名字是什么");
System.out.println("answer01返回结果:"+answer01);
chatTokenWindowChatMemory.chatWithChatMemory(3L, "你好!我的名字是oracle");
String answer02 = chatTokenWindowChatMemory.chatWithChatMemory(3L, "我的名字是什么");
System.out.println("answer02返回结果:"+answer02);
return "chatTokenWindowChatMemory success : "
+ DateUtil.now()+"<br> \n\n answer01: "+answer01+"<br> \n\n answer02: "+answer02;
}
}
The service is the same as above.
7.6. Test
Visit http://localhost:9008/chatmemory/test3 to test the TokenWindowChatMemory-based interface