AutoGen Framework Source Code Reading Series (II): Core Components — Models
This article covers the Models component of the AutoGen framework: how to configure an OpenAI (Azure) model, how to configure a third-party domestic model (e.g., Volcengine's Doubao), and how to set up a local model.
Models
In many cases, agents need access to an LLM model service such as OpenAI (Azure), a domestic third-party provider (Doubao), or a local model. Because these services expose different APIs, autogen-core defines a protocol for model clients, and autogen-ext implements that protocol as a set of model clients for popular services. AgentChat uses these model clients to interact with the model services.
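All of these clients implement the ChatCompletionClient protocol defined in autogen-core, so a client can also be called directly, without wrapping it in an agent. A minimal sketch (the model name and api_key are placeholders to replace with your own):
# Call a model client directly through the ChatCompletionClient protocol
import asyncio
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", api_key="your-api-key")
    # create() is part of the shared protocol, regardless of the backing service
    result = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result.content)

if __name__ == "__main__":
    asyncio.run(main())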
OpenAI (Azure) Models
Install the autogen extensions
# To access OpenAI (Azure) models, install the autogen-ext package, which provides OpenAIChatCompletionClient
uv pip install -U "autogen-ext"
# Or install it together with the openai extra
uv pip install -U "autogen-ext[openai]"
# And, for Azure, with the azure extra as well
uv pip install -U "autogen-ext[openai,azure]"
- Hands-on: a weather agent backed by an OpenAI (Azure) model (gpt-4o)
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient, AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from aioconsole import ainput
import asyncio
# Define a weather tool
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."
# Create a client for the OpenAI model gpt-4o-2024-08-06
gpt_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    api_key="your-api-key",
)
# Create a client for an OpenAI model (gpt-4o) served through Azure
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
azure_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    # Model name, e.g. gpt-4o
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    # For key-based authentication
    api_key="sk-...",
    # Optional when key-based authentication is used
    azure_ad_token_provider=token_provider,
)
async def main() -> None:
    # Define a weather assistant agent
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=gpt_model_client,
        # model_client=azure_model_client,
        tools=[get_weather],
    )
    # Define the termination conditions and a single-agent team
    text_mention_termination = TextMentionTermination("TERMINATE")
    max_messages_termination = MaxMessageTermination(max_messages=5)
    termination = text_mention_termination | max_messages_termination
    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)
    while True:
        # Read user input from the console
        user_input = await ainput("Enter a message (type 'exit' to leave): ")
        if user_input == 'exit':
            break
        # Run the team and stream the resulting messages to the console
        stream = agent_team.run_stream(task=user_input)
        await Console(stream)

# Entry point
if __name__ == "__main__":
    asyncio.run(main())
Domestic Cloud Models (Doubao)
Let's configure a client for Volcengine's Doubao model through the OpenAI-compatible protocol! 💪🏻
- Enable the LLM service on Volcengine and obtain an API_KEY
- Extend autogen-ext so it recognizes Volcengine's Doubao model (a patch-free alternative is sketched after the build step below)
# Open _model_info.py in the autogen-ext package
vim /Users/james/workspace/autogen/python/packages/autogen-ext/src/autogen_ext/models/openai/_model_info.py
# Register the Doubao-pro-128k model (endpoint ID)
_MODEL_POINTERS = {
    ... ...
    # Doubao-pro-128k model
    "ep-******-zvjl7": "ep-******-zvjl7",
}
_MODEL_INFO: Dict[str, ModelInfo] = {
    ... ...
    # Capabilities of the Doubao-pro-128k model
    "ep-******-zvjl7": {
        "vision": False,
        "function_calling": True,
        "json_output": True,
    },
}
_MODEL_TOKEN_LIMITS: Dict[str, int] = {
    ... ...
    # Context window of the Doubao-pro-128k model
    "ep-******-zvjl7": 128000,
}
# Build autogen-ext (now aware of Doubao-pro-128k) and install it into the local development environment
cd /Users/james/workspace/autogen/python/packages/autogen-ext
uv build
uv pip install /Users/james/workspace/autogen/python/dist/autogen_ext-0.4.1.tar.gz
# If the installation succeeds, uv reports the locally built autogen_ext package as installed
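Note: if your version of autogen-ext supports the model_info parameter of OpenAIChatCompletionClient (intended for models that are not in its built-in registry), you can describe the Doubao endpoint inline and skip the patch-and-rebuild step. A minimal sketch under that assumption (the endpoint ID, API key, and base_url are the same placeholders used below; the "family" field is required by newer ModelInfo definitions):
# Alternative: declare the unknown model inline instead of patching _model_info.py
from autogen_ext.models.openai import OpenAIChatCompletionClient

doubao_model_client = OpenAIChatCompletionClient(
    # Your Doubao endpoint ID
    model="ep-******-zvjl7",
    # Your Volcengine API key
    api_key="YOUR_API_KEY",
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    # Capability description for a model unknown to autogen-ext
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
)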
- Hands-on: a weather agent backed by Volcengine's Doubao model (Doubao-pro-128k)
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from aioconsole import ainput
import asyncio
# Define a weather tool
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."
# Create a client for the Doubao model (Doubao-pro-128k) hosted on Volcengine
doubao_model_client = OpenAIChatCompletionClient(
    # The Doubao endpoint ID registered above
    model="ep-******-zvjl7",
    # Your Volcengine API key
    api_key="YOUR_API_KEY",
    base_url="https://ark.cn-beijing.volces.com/api/v3",
)
async def main() -> None:
    # Define a weather assistant agent
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=doubao_model_client,
        tools=[get_weather],
    )
    # Define the termination conditions and a single-agent team
    text_mention_termination = TextMentionTermination("TERMINATE")
    max_messages_termination = MaxMessageTermination(max_messages=5)
    termination = text_mention_termination | max_messages_termination
    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)
    while True:
        # Read user input from the console
        user_input = await ainput("Enter a message (type 'exit' to leave): ")
        if user_input == 'exit':
            break
        # Run the team and stream the resulting messages to the console
        stream = agent_team.run_stream(task=user_input)
        await Console(stream)

# Entry point
if __name__ == "__main__":
    asyncio.run(main())
Local Models (Ollama Qwen2.5)
See the demo that uses a local Qwen2.5 model in "AutoGen Framework Source Code Reading Series (I): Quick Start".
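For reference, a local Ollama model can be wired up through the same OpenAI-compatible protocol. A minimal sketch, assuming Ollama is running locally with the qwen2.5 model pulled and serving its OpenAI-compatible API on the default port 11434 (the api_key is a dummy value, and the model_info parameter depends on your autogen-ext version):
# Client for a local Qwen2.5 model served by Ollama via its OpenAI-compatible endpoint
from autogen_ext.models.openai import OpenAIChatCompletionClient

local_model_client = OpenAIChatCompletionClient(
    model="qwen2.5",
    # Ollama ignores the key, but the client requires a non-empty value
    api_key="ollama",
    base_url="http://localhost:11434/v1",
    # Capability description for a model unknown to autogen-ext
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
    },
)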