How to chain runnables
PREREQUISITES
This guide assumes familiarity with the following concepts:
One point about LangChain Expression Language is that any two runnables can be "chained" together into sequences. The output of the previous runnable's .invoke() call is passed as input to the next runnable. This can be done using the pipe operator (|), or the more explicit .pipe() method, which does the same thing.
The resulting RunnableSequence is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like LangSmith.
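The mechanics can be pictured with a small self-contained sketch. This is a toy model, not LangChain's actual implementation: the `Runnable` and `RunnableSequence` classes below are simplified stand-ins that only show how `|` (and the equivalent `.pipe()` method) threads one step's output into the next step's input.

```python
# Toy sketch of pipe-style chaining (NOT LangChain's real classes).
class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` builds a sequence that feeds a's output into b.
        return RunnableSequence(self, other)

    def pipe(self, other):
        # .pipe() is just an explicit spelling of the | operator.
        return self | other


class RunnableSequence(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        # Invoke the first step, then pass its output to the second.
        return self.second.invoke(self.first.invoke(value))


add_one = Runnable(lambda x: x + 1)
double = Runnable(lambda x: x * 2)

chain = add_one | double        # same as add_one.pipe(double)
print(chain.invoke(3))          # (3 + 1) * 2 = 8
```

Because a `RunnableSequence` is itself a `Runnable`, the result of `a | b` can be piped into further steps, which is what makes arbitrarily long chains possible.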
The pipe operator: |
To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a prompt template to format input into a chat model, and finally converting the chat message output into a string with an output parser.
pip install -qU langchain-openai
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo-0125")
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model | StrOutputParser()
API Reference: StrOutputParser | ChatPromptTemplate
Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable:
chain.invoke({"topic": "bears"})
"Here's a bear joke for you:\n\nWhy did the bear dissolve in water?\nBecause it was a polar bear!"
Coercion
We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.
For example, let's say we wanted to compose the joke generating chain with another chain that evaluates whether or not the generated joke was funny.
We would need to be careful with how we format the input into the next chain. In the below example, the dict in the chain is automatically parsed and converted into a RunnableParallel, which runs all of its values in parallel and returns a dict with the results.
This happens to be the same format the next prompt template expects. Here it is in action:
from langchain_core.output_parsers import StrOutputParser
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()
composed_chain.invoke({"topic": "bears"})
API Reference: StrOutputParser
'Haha, that\'s a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing. I appreciate a good pun or wordplay joke.'
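The dict coercion can be sketched in plain Python. This is a simplified model, not the real `RunnableParallel` (which runs its values concurrently): each value in the dict receives the same input, and the results are collected back into a dict under the same keys.

```python
# Toy sketch of dict -> parallel coercion (NOT the real RunnableParallel).
def coerce_dict(steps):
    """Turn a dict of callables into one callable that runs every value
    on the same input and returns a dict of the results.
    (Sequential here; the real RunnableParallel runs values in parallel.)"""
    def invoke(value):
        return {key: step(value) for key, step in steps.items()}
    return invoke


# Hypothetical stand-in for the joke-generating chain above.
joke_chain = lambda inputs: f"a joke about {inputs['topic']}"

# {"joke": chain} is coerced so its output dict matches the
# {joke} variable the next prompt template expects.
parallel = coerce_dict({"joke": joke_chain})
print(parallel({"topic": "bears"}))   # {'joke': 'a joke about bears'}
```

This is why `{"joke": chain}` can appear directly at the head of `composed_chain`: the dict's output shape lines up with the `{joke}` input variable of `analysis_prompt`.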
Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:
composed_chain_with_lambda = (
chain
| (lambda input: {"joke": input})
| analysis_prompt
| model
| StrOutputParser()
)
composed_chain_with_lambda.invoke({"topic": "beets"})
"Haha, that's a cute and punny joke! I like how it plays on the idea of beets blushing or turning red like someone blushing. Food puns can be quite amusing. While not a total knee-slapper, it's a light-hearted, groan-worthy dad joke that would make me chuckle and shake my head. Simple vegetable humor!"
However, keep in mind that using functions like this may interfere with operations like streaming. See this section for more information.
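The reason a plain function can interfere with streaming can be sketched with generators. This is a toy model, not LangChain's API: a step that must consume its entire input before returning anything forces the pipeline to wait, while a generator-style step passes chunks through as they arrive.

```python
# Toy sketch of why an ordinary function blocks streaming (not LangChain API).
def token_stream():
    # Upstream step yielding output chunk by chunk, like a streaming model.
    for token in ["Why", " did", " the", " bear", "..."]:
        yield token


def shout_blocking(chunks):
    # A plain function: it must consume ALL chunks before producing anything,
    # so nothing downstream sees output until the whole input has arrived.
    return "".join(chunks).upper()


def shout_streaming(chunks):
    # A generator: transforms each chunk as it arrives, preserving streaming.
    for chunk in chunks:
        yield chunk.upper()


print(shout_blocking(token_stream()))         # 'WHY DID THE BEAR...'
print(list(shout_streaming(token_stream())))  # chunks emitted incrementally
```

A lambda like `lambda input: {"joke": input}` behaves like `shout_blocking`: it receives the fully accumulated input, so the chunks upstream of it cannot flow through incrementally.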