LangChain 1.0 Agent Development: Tool Components


Tools are the components an agent calls to perform actions. They extend a model's capabilities by letting it interact with the world through well-defined inputs and outputs. A tool wraps a callable function together with its input schema. Tools can be passed to compatible chat models, allowing the model to decide whether to call a tool and with which arguments. In these scenarios, tool calling lets the model generate requests that conform to the specified input schema.
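For a concrete picture of what tool calling looks like from the model's side, here is a minimal sketch (the @tool decorator is introduced in the next section; the model name is only an example, and any tool-calling chat model will do):

from langchain.chat_models import init_chat_model
from langchain.tools import tool

@tool
def get_time(timezone: str) -> str:
    """Return the current time in the given timezone."""
    return "12:00"  # stub implementation for illustration

# "openai:gpt-4o" is just an assumption; use any tool-calling chat model.
model = init_chat_model("openai:gpt-4o")
model_with_tools = model.bind_tools([get_time])

response = model_with_tools.invoke("What time is it in UTC?")
# If the model decides to call the tool, the structured request appears here,
# with arguments that conform to the tool's input schema:
print(response.tool_calls)
# e.g. [{'name': 'get_time', 'args': {'timezone': 'UTC'}, 'id': '...', 'type': 'tool_call'}]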

I. Creating Tools

1. Basic tool definition

The simplest way to create a tool is the @tool decorator. By default, the function's docstring becomes the tool's description, which helps the model understand when to use it.

from langchain.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """
    Search the customer database for records matching the query.
    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"

Notes:

  • Type hints are required: they define the tool's input schema.
  • The docstring should be informative and concise so the model understands what the tool is for.
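A quick way to verify the generated schema is to inspect and invoke the tool object directly, without involving a model:

# The decorated function becomes a structured tool object.
print(search_database.name)         # search_database
print(search_database.description)  # Search the customer database for records matching the query. ...
print(search_database.args)         # includes 'query' and 'limit' with their types

# Tools can also be invoked directly, which is handy for testing.
print(search_database.invoke({"query": "refund", "limit": 5}))
# Found 5 results for 'refund'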

2. Custom tool names

By default, the tool name comes from the function name. Override it when you need something more descriptive.

@tool("web_search")  # 自定义的工具名称
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

print(search.name)  # web_search

3. Custom tool descriptions

Override the auto-generated tool description to give the model clearer guidance.

@tool("calculator", description="Performs arithmetic calculations. Use this for any math problems.")
def calc(expression: str) -> str:
    """Evaluate mathematical expressions."""
    return str(eval(expression))
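To confirm the override took effect, inspect the tool object (note that eval here is only for illustration and should not be used on untrusted input):

print(calc.name)         # calculator
print(calc.description)  # Performs arithmetic calculations. Use this for any math problems.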

4. Advanced tool schemas

Use a Pydantic model or JSON Schema to define complex inputs.

from pydantic import BaseModel, Field
from typing import Literal

class WeatherInput(BaseModel):
    """Input for weather queries."""
    location: str = Field(description="City name or coordinates")
    units: Literal["celsius", "fahrenheit"] = Field(
        default="celsius",
        description="Temperature unit preference"
    )
    include_forecast: bool = Field(
        default=False,
        description="Include 5-day forecast"
    )

@tool(args_schema=WeatherInput)
def get_weather(location: str, units: str = "celsius", include_forecast: bool = False) -> str:
    """Get current weather and optional forecast."""
    temp = 22 if units == "celsius" else 72
    result = f"Current weather in {location}: {temp} degrees {units[0].upper()}"
    if include_forecast:
        result += "\nNext 5 days: Sunny"
    return result
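Invoking the tool directly runs the arguments through WeatherInput, and the schema exposed to the model can be inspected as well (this assumes Pydantic v2, as used above):

print(get_weather.invoke({"location": "Beijing", "units": "fahrenheit", "include_forecast": True}))
# Current weather in Beijing: 72 degrees F
# Next 5 days: Sunny

# The JSON schema the model sees is derived from the Pydantic model.
print(get_weather.args_schema.model_json_schema())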

II. Handling Tool-Call Errors with Middleware

To customize how tool errors are handled, create middleware with the @wrap_tool_call decorator.

from langchain.chat_models import init_chat_model
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_tool_call
from langchain.messages import ToolMessage
from langchain.tools import tool
from config import api_key, api_base


def init_model():
    model = init_chat_model(
        api_key=api_key,
        base_url=api_base,
        model="Qwen/Qwen3-8B",
        model_provider="openai",
        temperature=0.7,
    )
    return model

@tool
def get_weather(city: str) -> str:
    """获取指定城市的天气"""
    return f"{city} 的天气是晴朗的"

@wrap_tool_call
def handle_tool_errors(request, handler):
    """Handle tool execution errors with custom messages."""
    try:
        return handler(request)
    except Exception as e:
        # Return a custom error message to the model
        return ToolMessage(
            content=f"Tool error: Please check your input and try again. ({str(e)})",
            tool_call_id=request.tool_call["id"]
        )


model = init_model()
agent = create_agent(
    model=model,
    tools=[get_weather],
    system_prompt="你是一个智能个人助理",
    middleware=[handle_tool_errors],
)
input = {"messages": [{"role": "user", "content": "北京天气如何?"}]}

# 流式输出
for chunk in agent.stream(input, stream_mode="values"):
    chunk['messages'][-1].pretty_print()
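To see the middleware in action, add a hypothetical tool that can raise. When the tool call fails, handler(request) raises, and the model receives the custom ToolMessage instead of an unhandled exception:

@tool
def divide(a: float, b: float) -> str:
    """Divide a by b."""
    return str(a / b)  # raises ZeroDivisionError when b == 0

agent_with_divide = create_agent(
    model=model,
    tools=[get_weather, divide],
    system_prompt="You are a helpful personal assistant",
    middleware=[handle_tool_errors],
)

result = agent_with_divide.invoke(
    {"messages": [{"role": "user", "content": "What is 5 divided by 0?"}]}
)
print(result["messages"][-1].content)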

III. Accessing Runtime Information with ToolRuntime

When a tool function includes a parameter type-hinted as ToolRuntime, the tool execution system automatically injects a ToolRuntime instance carrying the following runtime information:

  • state: the current graph state
  • tool_call_id: the ID of the current tool call
  • config: the RunnableConfig of the current execution
  • context: the runtime context
  • store: the base store instance used for persistent storage
  • stream_writer: the stream writer used for streaming custom output

No Annotated wrapper is needed; simply declare runtime: ToolRuntime as a parameter. A short sketch of tool_call_id and config follows; state, context, store, and stream_writer are covered in the subsections below.
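The following is a hedged sketch of a hypothetical logging tool that reads the two fields not demonstrated later, tool_call_id and config:

from langchain.tools import tool, ToolRuntime

@tool
def log_call(message: str, runtime: ToolRuntime) -> str:
    """Log a message together with call metadata."""
    # ID of the tool call that triggered this execution
    call_id = runtime.tool_call_id
    # RunnableConfig of the current run (tags, metadata, callbacks, ...)
    tags = runtime.config.get("tags") or []
    return f"Logged '{message}' (call {call_id}, tags={tags})"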

1. Accessing state

from langchain.tools import tool, ToolRuntime

# Access the current conversation state
@tool
def summarize_conversation(runtime: ToolRuntime) -> str:
    """Summarize the conversation so far."""
    messages = runtime.state["messages"]
    human_msgs = sum(1 for m in messages if m.__class__.__name__ == "HumanMessage")
    ai_msgs = sum(1 for m in messages if m.__class__.__name__ == "AIMessage")
    tool_msgs = sum(1 for m in messages if m.__class__.__name__ == "ToolMessage")
    return f"Conversation has {human_msgs} user messages, {ai_msgs} AI responses, and {tool_msgs} tool results"

# Access custom state fields
# ToolRuntime parameter is not visible to the model
@tool
def get_user_preference(pref_name: str, runtime: ToolRuntime) -> str:
    """Get a user preference value."""
    preferences = runtime.state.get("user_preferences", {})
    return preferences.get(pref_name, "Not set")
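Wired into an agent, both tools receive the runtime automatically. This is a minimal sketch assuming model is a tool-calling chat model initialized as in the earlier examples:

from langchain.agents import create_agent

agent = create_agent(
    model,  # any tool-calling chat model
    tools=[summarize_conversation, get_user_preference],
    system_prompt="You are a helpful assistant.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize our conversation so far."}]}
)
print(result["messages"][-1].content)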

2. Accessing context

from dataclasses import dataclass
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime


USER_DATABASE = {
    "user123": {
        "name": "Alice Johnson",
        "account_type": "Premium",
        "balance": 5000,
        "email": "alice@example.com"
    },
    "user456": {
        "name": "Bob Smith",
        "account_type": "Standard",
        "balance": 1200,
        "email": "bob@example.com"
    }
}

@dataclass
class UserContext:
    user_id: str

@tool
def get_account_info(runtime: ToolRuntime[UserContext]) -> str:
    """Get the current user's account information."""
    user_id = runtime.context.user_id

    if user_id in USER_DATABASE:
        user = USER_DATABASE[user_id]
        return f"Account holder: {user['name']}\nType: {user['account_type']}\nBalance: ${user['balance']}"
    return "User not found"

model = ChatOpenAI(model="gpt-4o")
agent = create_agent(
    model,
    tools=[get_account_info],
    context_schema=UserContext,
    system_prompt="You are a financial assistant."
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my current balance?"}]},
    context=UserContext(user_id="user123")
)

3. Accessing store

from typing import Any
from langgraph.store.memory import InMemoryStore
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime


# Access memory
@tool
def get_user_info(user_id: str, runtime: ToolRuntime) -> str:
    """Look up user info."""
    store = runtime.store
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

# Update memory
@tool
def save_user_info(user_id: str, user_info: dict[str, Any], runtime: ToolRuntime) -> str:
    """Save user info."""
    store = runtime.store
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

store = InMemoryStore()
agent = create_agent(
    model,  # assumes a tool-calling chat model initialized as in the earlier examples
    tools=[get_user_info, save_user_info],
    store=store
)

# First session: save user info
agent.invoke({
    "messages": [{"role": "user", "content": "Save the following user: userid: abc123, name: Foo, age: 25, email: foo@langchain.dev"}]
})

# Second session: get user info
agent.invoke({
    "messages": [{"role": "user", "content": "Get user info for user with id 'abc123'"}]
})
# Here is the user info for user with ID "abc123":
# - Name: Foo
# - Age: 25
# - Email: foo@langchain.dev

4. Accessing stream_writer

from langchain.tools import tool, ToolRuntime

@tool
def get_weather(city: str, runtime: ToolRuntime) -> str:
    """Get weather for a given city."""
    writer = runtime.stream_writer

    # Stream custom updates as the tool executes
    writer(f"Looking up data for city: {city}")
    # Get data for city
    writer(f"Acquired data for city: {city}")

    return f"It's always sunny in {city}!"

IV. Summary

In this post we covered LangChain's tool component: how to create tools, how to handle tool-call errors with middleware, and how tools can access runtime information through ToolRuntime.