LangChain Basics 01 - PromptTemplate


FewShotPromptTemplate

Few-shot prompting adds a few worked examples to the prompt to guide the model and standardize its output.

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Example Q&A data
examples = [
    {
        "question": "Who lived longer, Muhammad Ali or Alan Turing?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
""",
    },
    {
        "question": "When was the founder of craigslist born?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
""",
    },
    {
        "question": "Who was the maternal grandfather of George Washington?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
""",
    },
    {
        "question": "Are both the directors of Jaws and Casino Royale from the same country?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
""",
    },
]

# Base prompt that formats a single example

example_prompt = PromptTemplate.from_template("Question: {question}\n{answer}")
# Instantiate the few-shot prompt
prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"]
)

print(
    prompt.invoke({"input": "Who was the father of Mary Ball Washington?"}).to_string()
)

With this approach we can embed example data in the prompt sent to the model, improving its understanding of the question and standardizing its output.

Partially format prompt templates

Sometimes we want a prompt to contain dynamically fetched data, such as the current date and time, without passing it in as an argument on every call. Partial formatting solves this:

from datetime import datetime

from langchain_core.prompts import PromptTemplate


def _get_datetime():
    now = datetime.now()
    return now.strftime("%Y-%m-%d %H:%M:%S")


# Define a prompt template
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"],
)
# Bind the `date` variable to a callable, invoked each time the prompt is formatted
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))

Alternatively, supply the partial values when constructing the PromptTemplate:

prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective"],
    partial_variables={"date": _get_datetime},
)
print(prompt.format(adjective="funny"))

Output:

Tell me a funny joke about the day 2024-07-05 19:08:52

Compose prompts

  • Message composition
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
prompt = SystemMessage(content="Your name is Bob; you are an AI assistant")
new_prompt = (
   prompt + HumanMessage(content="hello") + AIMessage(content="hi") + "{input}"
)

print(new_prompt.invoke({"input": "What is your name?"}))

As shown, messages can be composed this way into a combined message template; a plain string operand is automatically wrapped in a HumanMessage.

  • PipelinePromptTemplate

With PipelinePromptTemplate we can compose multiple prompts in a pipeline:

from langchain_core.prompts import PipelinePromptTemplate, PromptTemplate

full_template = """{introduction}

{example}

{start}"""
full_prompt = PromptTemplate.from_template(full_template)

introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)

example_template = """Here's an example of an interaction:

Q: {example_q}
A: {example_a}"""
example_prompt = PromptTemplate.from_template(example_template)

start_template = """Now, do this for real!

Q: {input}
A:"""
start_prompt = PromptTemplate.from_template(start_template)

input_prompts = [
    ("introduction", introduction_prompt),
    ("example", example_prompt),
    ("start", start_prompt),
]

# Build the PipelinePromptTemplate
pipeline_prompt = PipelinePromptTemplate(
    # final_prompt: the final composed prompt
    # pipeline_prompts: list[tuple[str, BasePromptTemplate]]
    final_prompt=full_prompt, pipeline_prompts=input_prompts
)

print(
    pipeline_prompt.format(
        person="Elon Musk",
        example_q="What's your favorite car?",
        example_a="Tesla",
        input="What's your favorite social media site?",
    )
)

Output:

You are impersonating Elon Musk.

Here's an example of an interaction:

Q: What's your favorite car?
A: Tesla

Now, do this for real!

Q: What's your favorite social media site?
A:

Summary

LangChain provides many ways to manage prompt templates, letting us use prompts in a more disciplined way; further methods are left for you to explore. **Most of the examples in this article come from the official documentation.**

Continuously updated!