LangChain (LCEL) v0.2 Documentation (10): How to Create Dynamic (Self-Constructing) Chains


Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs (routing is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambdas: if a RunnableLambda returns a Runnable, that returned Runnable is itself invoked. Let's see an example.
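
To see the mechanism in isolation before the full example, here is a minimal, self-contained sketch (the route_by_length function and the shout/whisper Runnables are made-up names for illustration, not part of the original doc): a RunnableLambda that returns one of two Runnables, and LCEL then invokes whichever one was returned with the same input.

from langchain_core.runnables import Runnable, RunnableLambda

# Two trivial branch Runnables (illustrative only).
shout = RunnableLambda(lambda text: text.upper())
whisper = RunnableLambda(lambda text: text.lower())

def route_by_length(text: str) -> Runnable:
    # Because this returns a Runnable rather than a plain value,
    # LCEL invokes the returned Runnable with the same input.
    return shout if len(text) < 10 else whisper

dynamic = RunnableLambda(route_by_length)

print(dynamic.invoke("Hi There"))  # -> HI THERE
print(dynamic.invoke("A Much Longer Sentence"))  # -> a much longer sentence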


pip install -qU langchain-openai langchain-anthropic

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Or, to use an Anthropic model instead (requires an Anthropic API key):
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")

API Reference: ChatAnthropic

from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough, chain

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

@chain
def contextualize_if_needed(input_: dict) -> Runnable:
    if input_.get("chat_history"):
        # NOTE: This is returning another Runnable, not an actual output.
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict) -> str:
    # Stand-in for a real retriever: always returns a hard-coded context string.
    return "egypt's population in 2024 is about 111 million"

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)

API Reference: StrOutputParser | ChatPromptTemplate | Runnable | RunnablePassthrough | chain

"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million."

The key here is that contextualize_if_needed returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed.

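For contrast, the same branching can be written directly with RunnableLambda rather than the @chain decorator; this is a sketch using the contextualize_question chain defined above (contextualize_if_needed_alt is a hypothetical name). Both branches are Runnables, so whichever is returned gets invoked rather than being treated as the final answer.

from operator import itemgetter

from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Sketch: equivalent routing without the @chain decorator.
contextualize_if_needed_alt = RunnableLambda(
    lambda input_: contextualize_question
    if input_.get("chat_history")
    else RunnablePassthrough() | itemgetter("question")
)

# If the lambda returned a plain string instead of a Runnable, that string
# would simply be the step's output and nothing further would be invoked.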

Looking at the trace we can see that, since we passed in chat_history, we executed the contextualize_question chain as part of the full chain: smith.langchain.com/public/9e0a…


Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved:

for chunk in contextualize_if_needed.stream(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
):
    print(chunk)
What
 is
 the
 population
 of
 Egypt
?
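
Batching is preserved in the same way. A short sketch, assuming the contextualize_if_needed chain defined above (the expected behavior is described in the comment, not verified output):

results = contextualize_if_needed.batch(
    [
        {"question": "what's the population of indonesia", "chat_history": []},
        {
            "question": "what about egypt",
            "chat_history": [
                ("human", "what's the population of indonesia"),
                ("ai", "about 276 million"),
            ],
        },
    ]
)
# The first input has no chat history, so its question passes through unchanged;
# the second is rewritten into a standalone question by the model.
print(results)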