OpenAI has introduced a prompt generation feature that refines a rough prompt to match your requirements and produces a high-quality prompt. This post implements the same prompt refinement in Colab, calling the Zhipu AI API instead.
Using Colab
For Colab setup and configuration, refer to the earlier post on Colab basics.
Hands-on
Installation
Install the official SDK. In a Colab cell, shell commands are prefixed with !:
!pip install zhipuai
Configure the secret
Store your API key in Colab's Secrets panel (the key icon in the left sidebar), then read it at runtime:
from google.colab import userdata

# Read the key saved under the name ZHIPUAI_API_KEY in Colab Secrets
ZHIPUAI_API_KEY = userdata.get("ZHIPUAI_API_KEY")
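Outside Colab, google.colab.userdata is unavailable. A minimal fallback sketch that reads the same key name from an environment variable, prompting interactively if it is unset:

import os
from getpass import getpass

# Prefer an environment variable; fall back to an interactive prompt.
ZHIPUAI_API_KEY = os.environ.get("ZHIPUAI_API_KEY") or getpass("ZhipuAI API key: ")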
Using OpenAI's meta prompt
The META_PROMPT below is OpenAI's published meta prompt, quoted verbatim; here it is sent to glm-4-plus as the system message.
from zhipuai import ZhipuAI

# Create a client with the key read from Colab Secrets
client = ZhipuAI(api_key=ZHIPUAI_API_KEY)
META_PROMPT = """
Given a task description or existing prompt, produce a detailed system prompt to guide a language model in completing the task effectively.
# Guidelines
- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions**: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
- Reasoning Order: Call out reasoning portions of the prompt and conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
- Conclusion, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
- What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
- Clarity and Conciseness: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Formatting: Use markdown features for readability. DO NOT USE ``` CODE BLOCKS UNLESS SPECIFICALLY REQUESTED.
- Preserve User Content: If the input task or prompt includes extensive guidelines or examples, preserve them entirely, or as closely as possible. If they are vague, consider breaking down into sub-steps. Keep any details, guidelines, examples, variables, or placeholders provided by the user.
- Constants: DO include constants in the prompt, as they are not susceptible to prompt injection. Such as guides, rubrics, and examples.
- Output Format: Explicitly the most appropriate output format, in detail. This should include length and syntax (e.g. short sentence, paragraph, JSON, etc.)
- For tasks outputting well-defined or structured data (classification, JSON, etc.) bias toward outputting a JSON.
- JSON should never be wrapped in code blocks (```) unless explicitly requested.
The final prompt you output should adhere to the following structure below. Do not include any additional commentary, only output the completed system prompt. SPECIFICALLY, do not include any additional messages at the start or end of the prompt. (e.g. no "---")
[Concise instruction describing the task - this should be the first line in the prompt, no section header]
[Additional details as needed.]
[Optional sections with headings or bullet points for detailed steps.]
# Steps [optional]
[optional: a detailed breakdown of the steps necessary to accomplish the task]
# Output Format
[Specifically call out how the output should be formatted, be it response length, structure e.g. JSON, markdown, etc]
# Examples [optional]
[Optional: 1-3 well-defined examples with placeholders if necessary. Clearly mark where examples start and end, and what the input and output are. User placeholders as necessary.]
[If the examples are shorter than what a realistic example is expected to be, make a reference with () explaining how real examples should be longer / shorter / different. AND USE PLACEHOLDERS! ]
# Notes [optional]
[optional: edge cases, details, and an area to call or repeat out specific important considerations]
""".strip()
def generate_prompt(task_or_prompt: str):
    # Expand a rough task description (or an existing prompt) into a
    # detailed system prompt, using the meta prompt as the system message.
    completion = client.chat.completions.create(
        model="glm-4-plus",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ],
    )
    return completion.choices[0].message.content
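The call can fail transiently (rate limits, network errors). The SDK's exact exception types aren't used in this post, so the sketch below catches broadly and retries with exponential backoff; adapt it to the errors you actually see.

import time

def generate_prompt_with_retry(task_or_prompt: str, retries: int = 3):
    # Retry with exponential backoff: wait 1s, 2s, 4s between attempts.
    for attempt in range(retries):
        try:
            return generate_prompt(task_or_prompt)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)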
Generate a prompt
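print_md is not defined in this post (it likely comes from the Colab basics setup). Assuming it is meant to render Markdown in the notebook, a minimal sketch using IPython:

from IPython.display import Markdown, display

def print_md(text: str):
    # Render a Markdown string as formatted notebook output.
    display(Markdown(text))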
prompt = generate_prompt("Optimize the user's input prompt")
print_md(prompt)
A Q&A function on top of the chat model
ask answers a query under an arbitrary system prompt, so we can plug in the one we just generated:
def ask(query, system_prompt):
    # Answer a user query with the given system prompt in effect.
    completion = client.chat.completions.create(
        model="glm-4-plus",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query},
        ],
    )
    return completion.choices[0].message.content
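For long answers you may prefer incremental output. The zhipuai SDK mirrors the OpenAI-style streaming interface; a hedged sketch (verify against the SDK version you installed):

def ask_stream(query, system_prompt):
    # stream=True yields chunks carrying incremental delta content.
    response = client.chat.completions.create(
        model="glm-4-plus",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query},
        ],
        stream=True,
    )
    for chunk in response:
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="")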
Run and print the output
print_md(ask("I want to become an expert in the LLM field", prompt))
The optimized prompt it returned: "Create a detailed plan to guide me toward becoming an expert in the field of large language models (LLMs), covering the necessary learning resources, skill-building steps, practical experience, and career development advice."
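The two steps compose naturally; a small convenience wrapper over the functions defined above (a sketch, not from the original notebook):

def refine_and_ask(task: str, query: str):
    # 1) Expand the rough task description into a full system prompt.
    system_prompt = generate_prompt(task)
    # 2) Answer the query under that generated system prompt.
    return ask(query, system_prompt)

print_md(refine_and_ask("Optimize the user's input prompt",
                        "I want to become an expert in the LLM field"))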