Our chatbot can now use tools to answer user questions, but it does not remember the context of previous interactions. This limits its ability to hold coherent multi-turn conversations.
LangGraph solves this through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling it, LangGraph automatically saves the state after each step. When you invoke the graph again with the same thread_id, it loads the saved state, allowing the chatbot to pick up where it left off.
Simply put, it is like creating multiple conversations in an AI chat app: each conversation has a thread_id as its identifier, and the graph loads the state saved under that id to continue the conversation.
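The idea can be sketched without LangGraph at all. The `ThreadStore` below is a hypothetical helper, not a LangGraph class; it just illustrates how keying saved history by thread_id lets one conversation resume while another starts blank.

```python
# A minimal, LangGraph-free sketch of the thread_id idea:
# each thread_id keys its own saved message history, so a new call
# with the same id resumes where the conversation left off.
# (ThreadStore is a hypothetical helper, not a LangGraph class.)

class ThreadStore:
    def __init__(self):
        self._threads = {}  # thread_id -> list of messages

    def append(self, thread_id, message):
        self._threads.setdefault(thread_id, []).append(message)

    def history(self, thread_id):
        return list(self._threads.get(thread_id, []))

store = ThreadStore()
store.append("1", {"role": "user", "content": "Hi there! My name is Will."})
store.append("1", {"role": "assistant", "content": "Hello, Will!"})

# Same thread_id: the saved context is still there.
print(len(store.history("1")))  # 2

# A different thread_id starts from a blank state.
print(len(store.history("2")))  # 0
```

LangGraph's checkpointer plays the role of this store, except it saves the full graph state after every step, not just the messages.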
Let's look at the official code.
Create a checkpointer
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
The official example uses an in-memory checkpointer, which is convenient for the tutorial (it keeps everything in memory). In a production application, you would likely switch to SqliteSaver or PostgresSaver and connect to your own database.
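To illustrate why a database-backed checkpointer matters, here is a rough stdlib-only sketch of the concept. This is not the actual SqliteSaver API (which LangGraph provides in a separate package); it just shows why state written to a database survives a process restart while MemorySaver's state does not.

```python
import json
import sqlite3

# Stdlib-only sketch of database-backed checkpointing (not SqliteSaver's API):
# each checkpoint row stores the full serialized state for a thread at a step.
conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute(
    "CREATE TABLE IF NOT EXISTS checkpoints (thread_id TEXT, step INTEGER, state TEXT)"
)

def save_checkpoint(thread_id, step, state):
    conn.execute(
        "INSERT INTO checkpoints VALUES (?, ?, ?)",
        (thread_id, step, json.dumps(state)),
    )
    conn.commit()

def load_latest(thread_id):
    row = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ? ORDER BY step DESC LIMIT 1",
        (thread_id,),
    ).fetchone()
    return json.loads(row[0]) if row else None

save_checkpoint("1", 0, {"messages": ["Hi there! My name is Will."]})
save_checkpoint("1", 1, {"messages": ["Hi there! My name is Will.", "Hello, Will!"]})

print(load_latest("1"))  # latest saved state for thread "1"
print(load_latest("2"))  # None: no checkpoints exist for thread "2"
```

With a file-backed (or Postgres-backed) connection, the saved conversations remain available even after the application restarts, which is exactly what MemorySaver cannot offer.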
Build the graph
from typing import Annotated
from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearch(max_results=2)
tools = [tool]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile(checkpointer=memory)  # compile the graph with the provided checkpointer
This part is largely the same as the previous chapter's code; the official version replaces the BasicToolNode we built earlier with LangGraph's prebuilt ToolNode and tools_condition.
For details, see the corresponding API documentation.
- ToolNode API docs: a node that executes the tools requested in the last AIMessage.
- tools_condition API docs: used in a conditional_edge; routes to the tools node if the last message contains tool calls, otherwise routes to the end.
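The routing that tools_condition performs can be sketched in a few lines: look at the last message, and if it carries tool calls, route to the "tools" node; otherwise route to the end of the graph. The `my_tools_condition` and `FakeAIMessage` below are illustrative stand-ins, not the library implementations.

```python
# Illustrative stand-in for tools_condition (not the library function):
# route to the "tools" node when the last AI message requested tool calls,
# otherwise route to the end of the graph.
END = "__end__"

def my_tools_condition(state):
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return END

class FakeAIMessage:
    """Minimal stand-in for an AIMessage, for demonstration only."""
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

with_call = {"messages": [FakeAIMessage("", tool_calls=[{"name": "tavily_search"}])]}
plain = {"messages": [FakeAIMessage("Hello, Will!")]}

print(my_tools_condition(with_call))  # tools
print(my_tools_condition(plain))      # __end__
```

This is why the conditional edge only needs the function itself: its return value names the next node (or the end), and the graph follows it.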
Test the chatbot's memory
config = {"configurable": {"thread_id": "1"}}
user_input = "Hi there! My name is Will."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
user_input = "Remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
In the second turn, the official example tests whether the chatbot remembers the name given in the first turn.
Hands-on: adding memory to the chatbot
Create a chatbot with memory
import os
from typing import Annotated
# from langchain.chat_models import init_chat_model
from langchain_openai import ChatOpenAI
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()  # set up the checkpointer
# Environment variables: set these to your own API keys
os.environ['TAVILY_API_KEY'] = 'TAVILY_API_KEY'
os.environ['ARK_API_KEY'] = 'API_KEY'
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearch(max_results=2)
tools = [tool]
llm = ChatOpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    api_key=os.environ.get('ARK_API_KEY'),
    model="doubao-1-5-pro-32k-250115"  # change to match your actual model name
)
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile(checkpointer=memory)  # compile the graph with the provided checkpointer
Compared with the previous version, the only change is creating a MemorySaver checkpointer and using it to compile the graph.
Test the chatbot's memory
config = {"configurable": {"thread_id": "1"}}
user_input = "Hi there! My name is Will."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Result of the first turn:
================================ Human Message =================================
Hi there! My name is Will.
================================== Ai Message ==================================
Hello, Will! It's great to meet you. If you have any questions, feel free to tell me.
In the second turn, we test whether the chatbot still remembers our name:
user_input = "Remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Result:
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
Of course! I'll remember that your name is Will. If you have any other needs, just let me know.
Clearly, the chatbot now has memory. Next, let's switch the thread_id in the config and see whether it still remembers our name:
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},  # switch to a different thread_id
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
The result:
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
I'm sorry, you haven't told me your name yet. Could you please share it with me?
We can also inspect the graph's state with .get_state():
snapshot = graph.get_state(config)
snapshot
Result:
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='6696cb1b-0b66-4670-bf71-cc460dde6f87'), AIMessage(content="Hello, Will! It's nice to meet you. If you have any questions, feel free to tell me.", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 1055, 'total_tokens': 1078, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'doubao-1-5-pro-32k-250115', 'system_fingerprint': None, 'id': '0217496320997100f848b5e7d5cd90fc22435541affcfca860dc5', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--1f01829c-0590-4f37-853d-7883e30b7b1b-0', usage_metadata={'input_tokens': 1055, 'output_tokens': 23, 'total_tokens': 1078, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='f92cb8d1-38c0-4bee-8033-55b61171731d'), AIMessage(content="Of course, Will. I'll remember your name. If you have anything you'd like to ask or discuss, just let me know. 
", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 1091, 'total_tokens': 1120, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'doubao-1-5-pro-32k-250115', 'system_fingerprint': None, 'id': '0217496321039670f848b5e7d5cd90fc22435541affcfcacd74c6', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--88da7655-4f55-4cb7-81a5-0cbd05126e2c-0', usage_metadata={'input_tokens': 1091, 'output_tokens': 29, 'total_tokens': 1120, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f046a1c-4f79-6899-8004-a7da7152350f'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, Will. I'll remember your name. If you have anything you'd like to ask or discuss, just let me know. 
", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 1091, 'total_tokens': 1120, 'completion_tokens_details': {'accepted_prediction_tokens': None, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': None}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'doubao-1-5-pro-32k-250115', 'system_fingerprint': None, 'id': '0217496321039670f848b5e7d5cd90fc22435541affcfcacd74c6', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--88da7655-4f55-4cb7-81a5-0cbd05126e2c-0', usage_metadata={'input_tokens': 1091, 'output_tokens': 29, 'total_tokens': 1120, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}, 'step': 4, 'parents': {}, 'thread_id': '1'}, created_at='2025-06-11T08:55:05.672514+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f046a1c-3e81-621e-8003-bb633a5bc23e'}}, tasks=(), interrupts=())
Use .next to see the pending nodes:
snapshot.next
The returned next is an empty tuple, meaning the graph has reached the END state and there are no pending nodes.
Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing can even handle arbitrarily complex graph state, which is far more expressive and powerful than simple chat memory.
Original article (in Chinese): https://www.cnblogs.com/LiShengTrip/p/18924209