LangGraph: Evolving LangChain into a Multi-Agent Runtime


If, like me, you're preparing for the 2024 spring recruiting season, it's well worth adding some AI content on top of front-end full-stack skills. 2024 is a bonus year for AI: AIGC plus all kinds of roles means more opportunities at big companies. If you agree, please give this a like. Friends are also welcome to add me on WeChat (shunwuyu) to chat.

Preface

LangChain recently released a major new feature, LangGraph. As a longtime front-end LangChain enthusiast, I naturally have to share it with you. Let's build AI projects together and strike it rich in the Year of the Dragon.

LangGraph

LangGraph strengthens LangChain's agent capabilities. Is this LangChain aligning itself with AutoGen? Let's head over to the official site and find out.


An Example Use Case

[Figure: a User request routed by a Supervisor between a Search Engine agent and a Twitter Writer agent]

The figure above shows a fairly typical use case: we need to build an AI application that gathers information for the user via a search engine and then writes a tweet. In 逐步掌握最佳Ai Agents框架-AutoGen 三 写新闻稿新方式 - 掘金 (juejin.cn), I showed how AutoGen agents implement a similar feature. It looks like LangChain is marching onto AutoGen's turf.

The User raises a request and hands the task to the Supervisor (what AutoGen calls a UserProxy). The Supervisor does not do the work itself; its job is to decide which task goes to which Agent. If you already know AutoGen, this will feel very familiar.

Search Engine is responsible for gathering information: it takes the keywords passed by the Supervisor, runs the search, and returns the results to the Supervisor; Twitter Writer is responsible for writing the tweet.

When we run into a requirement like this in LangChain development, LangGraph comes in handy, giving LangChain a moat in its competition with AutoGen.

Core Concepts

At its core, LangGraph is a directed graph that allows cycles. It contains three core concepts: StateGraph, Node, and Edge.

  • StateGraph

It describes the state of a LangGraph, which is continuously updated while the graph runs. Let's look at an example of defining that state:

from langgraph.graph import StateGraph
from typing import TypedDict, List, Annotated
import operator

class State(TypedDict):
    input: str
    all_actions: Annotated[List[str], operator.add]

The state above defines an input plus a list of all actions taken so far; the operator.add annotation tells the graph to append each node's updates to the list rather than overwrite it.
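A minimal sketch of how that reducer behaves (the node below is hypothetical, purely for illustration): a node returns a partial state update, and annotated fields are merged with operator.add instead of being replaced.

# Hypothetical node: reads the current state and returns a partial update.
# Because all_actions is annotated with operator.add, the returned list is
# appended to the existing one rather than overwriting it.
def record_search(state: State) -> dict:
    query = state["input"]
    return {"all_actions": [f"searched: {query}"]}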

  • Node

Nodes are responsible for updating the state. Once we have defined a state graph as above, we can add nodes with add_node. We give each Node a name, say 9527. Here is the example from the docs:

graph.add_node("model", model)
graph.add_node("tools", tool_executor)

model and tool_executor can each be a function or a runnable. A node receives the state dictionary as input and outputs a dictionary of updates to that state.
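As a hedged sketch (the body is illustrative, not from the docs), a node callable simply maps the state dict to a partial update:

# Illustrative node: takes the full state dict and returns only the keys
# it wants to update; the graph merges this back into the state.
def model(state: dict) -> dict:
    user_input = state["input"]
    # ... call an LLM with user_input here ...
    return {"all_actions": [f"model handled: {user_input}"]}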

There is also a special node, END, which marks the point where execution stops.

  • Edges

With state and nodes in place, we use Edges to connect the Nodes. There are several edge types:

  • Entry edge
graph.set_entry_point("model")
  • Normal edge
graph.add_edge("tools", "model")

An edge connects two nodes, defining which runs first and which runs next. The code above places the model node after tools, as its successor.

  • Conditional edge

A conditional edge decides at runtime which node runs next. It consists of three parts:

  1. The upstream node, whose output determines what happens next.
  2. A routing function that decides which node to call next.
  3. A mapping from the routing function's return values to destination nodes. Here is the code:
graph.add_conditional_edges(
    "model",          # upstream node
    should_continue,  # routing function
    {                 # mapping
        "end": END,          # stop here
        "continue": "tools"  # go to the tools node
    }
)

The graph adds a conditional edge starting from the model node. Should execution continue? "end" means stop; "continue" means go on to execute tools.
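The snippet assumes a should_continue routing function. A minimal sketch of what it might look like, reusing the State defined earlier (the stopping condition is made up for illustration):

# Hypothetical routing function: it must return one of the keys
# in the mapping above ("end" or "continue").
def should_continue(state: State) -> str:
    # illustrative condition: stop once three actions have been recorded
    if len(state["all_actions"]) >= 3:
        return "end"
    return "continue"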

Once we have LangGraph's StateGraph, Node, and Edge down, all that remains is to compile the graph.

Compilation

With the graph defined, we can call the compile method. Compiling turns the LangGraph into a LangChain runnable, which we can then use as a chain or an agent.

app = graph.compile()
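Because the compiled app is an ordinary runnable, it supports the standard runnable interface. A hedged example, assuming the State shape defined earlier (the input values are made up):

# Hypothetical invocation: the input dict must match the graph's state schema.
result = app.invoke({"input": "hello", "all_actions": []})
print(result["all_actions"])  # the accumulated actions after the run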

Demo

  • Install dependencies
!pip install -q -U langchain langchain_openai langgraph google-search-results

google-search-results performs Google searches via SerpAPI.

  • Set environment variables
import os
# load user secrets from Colab
from google.colab import userdata

os.environ['SERPAPI_API_KEY'] = userdata.get('GOOGLE_API_KEY')
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "LangGraph"
os.environ["LANGCHAIN_API_KEY"] = userdata.get('LANGSMITH_API_KEY')

  • Search
from langchain_community.utilities import SerpAPIWrapper
search = SerpAPIWrapper() 
search.run("Obama's first name?")

The result returned is Barack Hussein Obama II.

  • Import modules
import functools, operator, requests, os, json 
# agent executor
from langchain.agents import AgentExecutor, create_openai_tools_agent 
from langchain_core.messages import BaseMessage, HumanMessage 
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser 
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder 
from langgraph.graph import StateGraph, END 
from langchain.tools import tool 
from langchain_openai import ChatOpenAI 
from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict
  • Instantiate the LLM
llm = ChatOpenAI(model="gpt-4-turbo-preview")
  • Define the tools
from langchain_core.messages import (
    AIMessage, BaseMessage, ChatMessage, FunctionMessage, HumanMessage, SystemMessage
)

@tool("web_search")
def web_search(query: str) -> str:
    """Search with Google SERP API by a query."""
    search = SerpAPIWrapper()
    return search.run(query)

@tool("twitter_writer")
def write_tweet(content: str) -> str:
    """Based on a piece of content, write a tweet."""
    chat = ChatOpenAI()
    messages = [
        SystemMessage(
            content="You are a Twitter account operator."
            " You are responsible for writing a tweet based on the content given."
            " You should follow the Twitter policy and make sure each tweet has"
            " no more than 140 characters."
        ),
        HumanMessage(content=content),
    ]
    response = chat(messages)
    return response.content
  • Define the AgentState
class AgentState(TypedDict):
    # The annotation tells the graph that new messages will always
    # be added to the current state
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # The 'next' field indicates where to route to next
    next: str
  • Create the agents
def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str):
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_tools_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)
    return executor

def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}
  • Build the supervisor chain
members = ["Search_Engine", "Twitter_Writer"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
options = ["FINISH"] + members
# Using openai function calling can make output parsing easier for us
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))
supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)
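To sanity-check the routing before wiring up the graph, we could invoke the chain directly; a hedged example (the message content is made up for illustration):

# Hypothetical check: JsonOutputFunctionsParser returns a dict like {"next": ...}
supervisor_chain.invoke({"messages": [HumanMessage(content="Find LangChain news")]})
# e.g. {'next': 'Search_Engine'}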
  • Assemble the graph
search_engine_agent = create_agent(llm, [web_search], "You are a web search engine.")
search_engine_node = functools.partial(agent_node, agent=search_engine_agent, name="Search_Engine")

twitter_operator_agent = create_agent(
    llm, [write_tweet],
    "You are responsible for writing a tweet based on the content given."
)
twitter_operator_node = functools.partial(agent_node, agent=twitter_operator_agent, name="Twitter_Writer")

workflow = StateGraph(AgentState)
workflow.add_node("Search_Engine", search_engine_node)
workflow.add_node("Twitter_Writer", twitter_operator_node)
workflow.add_node("supervisor", supervisor_chain)
  • Compile and run
for member in members:
    workflow.add_edge(member, "supervisor")
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)
workflow.set_entry_point("supervisor")
graph = workflow.compile()

for s in graph.stream(
    {"messages": [HumanMessage(content="Write a tweet about LangChain news")]}
):
    if "__end__" not in s:
        print(s)
        print("----")
  • Result
{'supervisor': {'next': 'Search_Engine'}}
----
{'Search_Engine': {'messages': [HumanMessage(content="🚀 Exciting news from LangChain! 🌟\n\nWe just launched LangGraph, a revolutionary tool to customize your Agent Runtime, marking a significant milestone in our journey. Also, we're thrilled to announce the release of langchain 0.1.0, our first stable version that's fully backward compatible. 🎉\n\nStay tuned for more updates on how we're transforming the AI ecosystem. #LangChain #Innovation #AI\n\n[Week of 1/22/24]", name='Search_Engine')]}}
----
{'supervisor': {'next': 'Twitter_Writer'}}
----
{'Twitter_Writer': {'messages': [HumanMessage(content='🚀 Exciting news from LangChain! 🌟 Introducing LangGraph, a customizable Agent Runtime tool, and langchain 0.1.0, our first stable release. #LangChain #Innovation #AI', name='Twitter_Writer')]}}
----
{'supervisor': {'next': 'FINISH'}}
