LangChain: An Enterprise-Grade LLM Application Development Framework


LangChain is one of the most mature LLM application development frameworks available today, providing an end-to-end solution from prototype to production.


3.1 LangChain Core Architecture

LangChain provides a complete toolchain for building LLM applications:

┌─────────────────────────────────────────────────────────────┐
│                   LangChain Architecture                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Chains    │  │   Agents    │  │     RAG     │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
│         │               │                │                  │
│         └───────────────┴────────────────┘                  │
│                         │                                   │
│  ┌──────────────────────▼──────────────────────┐            │
│  │              Core Components                │            │
│  │  • Models (LLM/Chat)  • Prompts             │            │
│  │  • Memory             • Tools               │            │
│  │  • Output Parsers     • Document Loaders    │            │
│  └─────────────────────────────────────────────┘            │
│                         │                                   │
│  ┌──────────────────────▼──────────────────────┐            │
│  │                Integrations                 │            │
│  │  200+ third-party integrations: databases,  │            │
│  │  vector stores, APIs, and more              │            │
│  └─────────────────────────────────────────────┘            │
└─────────────────────────────────────────────────────────────┘

Core Components

Component        | Description                    | Key classes
-----------------|--------------------------------|---------------------------------------
Models           | LLM and chat-model interfaces  | ChatOpenAI, OpenAI
Prompts          | Prompt template management     | ChatPromptTemplate
Memory           | Conversation memory management | ConversationBufferMemory
Tools            | Tool definition and invocation | Tool, StructuredTool
Output Parsers   | Output parsing and validation  | StrOutputParser, PydanticOutputParser
Document Loaders | Document loading               | PyPDFLoader, DirectoryLoader

3.2 Core Concepts in Detail

3.2.1 LCEL (LangChain Expression Language)

LCEL is LangChain's declarative composition syntax; components are chained together with the | operator:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Define the components
prompt = ChatPromptTemplate.from_template("Translate the following text into {language}: {text}")
model = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()

# Link the components with the | operator
chain = prompt | model | parser

# Run the chain
result = chain.invoke({"language": "English", "text": "你好世界"})
print(result)  # e.g., "Hello World"

Core LCEL Operators

Operator / method | Description               | Example
------------------|---------------------------|----------------------------------
|                 | Pipe; connects components | prompt | model | parser
invoke()          | Synchronous execution     | chain.invoke({...})
ainvoke()         | Asynchronous execution    | await chain.ainvoke({...})
batch()           | Batch execution           | chain.batch([{...}, {...}])
stream()          | Streaming output          | for chunk in chain.stream({...})
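These methods are available on any LCEL runnable, not just model calls. A self-contained sketch using RunnableLambda (pure functions, no API key required) demonstrates the same interface:

```python
import asyncio
from langchain_core.runnables import RunnableLambda

# Two trivial runnables composed with the pipe operator
add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)
chain = add_one | double  # equivalent to double(add_one(x))

print(chain.invoke(3))                  # 8
print(chain.batch([1, 2, 3]))           # [4, 6, 8]
print(asyncio.run(chain.ainvoke(3)))    # 8
for chunk in chain.stream(3):           # yields the (single) final chunk
    print(chunk)                        # 8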

3.2.2 Chains

A chain strings multiple components together into a pipeline:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# A multi-step pipeline
translate_prompt = ChatPromptTemplate.from_template(
    "Translate the following text into {language}: {text}"
)
summarize_prompt = ChatPromptTemplate.from_template(
    "Summarize the key points of the following text:\n{text}"
)

model = ChatOpenAI(model="gpt-4")

# Compose the chains
translate_chain = translate_prompt | model | StrOutputParser()
summarize_chain = summarize_prompt | model | StrOutputParser()

# Full pipeline: translate first, then summarize
text = "这是一段需要处理的长文本..."
translated = translate_chain.invoke({"language": "English", "text": text})
summary = summarize_chain.invoke({"text": translated})

3.2.3 RAG (Retrieval-Augmented Generation)

Building a knowledge-base question-answering system:

┌───────────────────────────────────────────────────────────────┐
│                       RAG Architecture                        │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│  User query ──▶ Vector search ──▶ Context ──▶ LLM generation  │
│                      │            assembly                    │
│                      ▼                                        │
│                 ┌──────────┐                                  │
│                 │  Vector  │                                  │
│                 │  store   │                                  │
│                 │ (Chroma) │                                  │
│                 └──────────┘                                  │
└───────────────────────────────────────────────────────────────┘

A Complete RAG Implementation

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_chroma import Chroma
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# 1. Load the documents
loader = PyPDFLoader("knowledge_base.pdf")
docs = loader.load()

# 2. Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
    # CJK punctuation is included so Chinese text splits on sentence boundaries
    separators=["\n\n", "\n", "。", "!", "?", ";", " ", ""]
)
splits = text_splitter.split_documents(docs)
print(f"Document split into {len(splits)} chunks")

# 3. Embed and store the chunks
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=OpenAIEmbeddings(),
    persist_directory="./chroma_db"  # Persist to disk
)
retriever = vectorstore.as_retriever(
    search_type="mmr",  # Maximal marginal relevance
    search_kwargs={"k": 5, "fetch_k": 10}
)

# 4. Build the RAG chain
model = ChatOpenAI(model="gpt-4", temperature=0)

prompt = ChatPromptTemplate.from_template("""
You are a professional Q&A assistant. Answer the question based on the context below.
If the context contains no relevant information, say so explicitly.

Context:
{context}

Question: {input}

Answer:
""")

document_chain = create_stuff_documents_chain(model, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

# 5. Run a query
response = retrieval_chain.invoke({"input": "What is RAG?"})
print(response["answer"])
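The search_type="mmr" retriever above first fetches fetch_k candidates by similarity, then keeps k of them by maximal marginal relevance, which penalizes redundancy among the selected chunks. A toy pure-Python sketch of the selection rule (not the library's actual implementation; the similarity numbers are invented):

```python
def mmr_select(query_sims, doc_sims, k, lam=0.5):
    """Pick k candidate indices by maximal marginal relevance.

    query_sims[i]  : similarity of candidate i to the query
    doc_sims[i][j] : similarity between candidates i and j
    lam            : trade-off between relevance and diversity
    """
    selected = []
    candidates = list(range(len(query_sims)))
    while candidates and len(selected) < k:
        def score(i):
            # Redundancy = similarity to the closest already-selected chunk
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lam * query_sims[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Candidates 0 and 1 are near-duplicates; MMR picks 0, then prefers 2 over 1
query_sims = [0.9, 0.85, 0.5]
doc_sims = [[1.0, 0.95, 0.1],
            [0.95, 1.0, 0.1],
            [0.1, 0.1, 1.0]]
print(mmr_select(query_sims, doc_sims, k=2))  # [0, 2]
```

Plain similarity search would return the two near-duplicate chunks (0 and 1); MMR trades a little relevance for coverage, which usually gives the LLM a more informative context window.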

3.2.4 Memory

Implementing multi-turn conversational memory:

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from typing import List

# A custom in-memory chat-history store
class InMemoryHistory(BaseChatMessageHistory):
    def __init__(self):
        self._messages: List[BaseMessage] = []

    @property
    def messages(self) -> List[BaseMessage]:
        return self._messages

    @messages.setter
    def messages(self, value: List[BaseMessage]):
        self._messages = value

    def add_message(self, message: BaseMessage) -> None:
        self._messages.append(message)

    def clear(self) -> None:
        self._messages = []

# Session registry: one history object per session id
store = {}

def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryHistory()
    return store[session_id]

# Build a conversational chain with a history placeholder
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly AI assistant. Remember what the user has told you earlier."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}")
])

model = ChatOpenAI(model="gpt-4")
chain = prompt | model

# Wrap the chain with per-session message history
chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history=get_session_history,
    input_messages_key="input",
    history_messages_key="history"
)

# Multi-turn conversation
session_id = "user_123"

# Turn 1
response1 = chain_with_history.invoke(
    {"input": "My name is Zhang San"},
    config={"configurable": {"session_id": session_id}}
)
print(response1.content)  # e.g., "Hello, Zhang San!"

# Turn 2
response2 = chain_with_history.invoke(
    {"input": "Do you remember my name?"},
    config={"configurable": {"session_id": session_id}}
)
print(response2.content)  # e.g., "Your name is Zhang San."

3.2.5 Tools

Defining and invoking external tools:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

# Option 1: the @tool decorator
@tool
def get_weather(city: str) -> str:
    """Get weather information for the given city."""
    # In a real application, call a weather API here
    weather_data = {
        "Beijing": "Sunny, 25°C",
        "Shanghai": "Cloudy, 28°C",
        "Guangzhou": "Light rain, 30°C"
    }
    return weather_data.get(city, f"No weather information found for {city}")

# Option 2: define the argument schema with Pydantic
class SearchInput(BaseModel):
    query: str = Field(description="Search keywords")
    num_results: int = Field(default=5, description="Number of results to return")

@tool(args_schema=SearchInput)
def search_web(query: str, num_results: int = 5) -> str:
    """Search the web for information."""
    # In a real application, call a search API here
    return f"Search for '{query}' returned {num_results} results"

# Build an agent that can call the tools
tools = [get_weather, search_web]
model = ChatOpenAI(model="gpt-4", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that can use tools to answer questions."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = agent_executor.invoke({"input": "What's the weather like in Beijing today?"})
print(result["output"])

3.3 Hands-On Example: An Intelligent Customer-Service Bot

System Architecture

User input ──▶ Intent detection ──▶ Knowledge retrieval ──▶ Answer generation ──▶ Output
                     │                      │                      │
                     ▼                      ▼                      ▼
             Intent classifier          RAG engine                LLM

Complete Implementation

# customer_service_bot.py
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.document_loaders import DirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

class CustomerServiceBot:
    def __init__(self, knowledge_base_path: str):
        # Initialize the LLM
        self.llm = ChatOpenAI(model="gpt-4", temperature=0)

        # Initialize the vector store
        self.embeddings = OpenAIEmbeddings()
        self.vectorstore = self._load_knowledge_base(knowledge_base_path)

        # Intent classifier
        self.intent_prompt = ChatPromptTemplate.from_template("""
        Analyze the user's question and classify its intent:
        - product: product inquiries
        - order: order status
        - refund: refund issues
        - technical: technical support
        - other: everything else

        User question: {question}
        Output only the category name:
        """)

        # Answer generator
        self.answer_prompt = ChatPromptTemplate.from_template("""
        You are a professional customer-service assistant. Answer the user's
        question based on the information below.

        Intent category: {intent}
        Relevant knowledge: {context}
        User question: {question}

        Give a professional, friendly answer. If you cannot answer, explain
        why and suggest contacting a human agent:
        """)

    def _load_knowledge_base(self, path: str) -> Chroma:
        """Load the knowledge base into the vector store."""
        loader = DirectoryLoader(path, glob="**/*.md")
        documents = loader.load()

        text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=500,
            chunk_overlap=50
        )
        splits = text_splitter.split_documents(documents)

        return Chroma.from_documents(
            documents=splits,
            embedding=self.embeddings,
            persist_directory="./kb_chroma"
        )

    async def classify_intent(self, question: str) -> str:
        """Classify the question's intent."""
        chain = self.intent_prompt | self.llm | StrOutputParser()
        return await chain.ainvoke({"question": question})

    async def retrieve_context(self, question: str, k: int = 3) -> str:
        """Retrieve relevant context from the knowledge base."""
        docs = self.vectorstore.similarity_search(question, k=k)
        return "\n\n".join([doc.page_content for doc in docs])

    async def generate_response(
        self,
        question: str,
        intent: str,
        context: str
    ) -> str:
        """Generate the final answer."""
        chain = self.answer_prompt | self.llm | StrOutputParser()
        return await chain.ainvoke({
            "question": question,
            "intent": intent,
            "context": context
        })

    async def chat(self, question: str) -> dict:
        """Full conversation flow."""
        # 1. Intent detection
        intent = await self.classify_intent(question)

        # 2. Knowledge retrieval
        context = await self.retrieve_context(question)

        # 3. Answer generation
        response = await self.generate_response(question, intent, context)

        return {
            "intent": intent,
            "context": context,
            "response": response
        }

# Usage example
async def main():
    bot = CustomerServiceBot("./knowledge_base")

    result = await bot.chat("When will my order arrive?")
    print(f"Intent: {result['intent']}")
    print(f"Answer: {result['response']}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

3.4 LangGraph: Stateful Workflows

For complex state management, LangGraph provides more powerful building blocks:

┌─────────────────────────────────────────────────────────┐
│                   LangGraph Workflow                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   ┌─────────┐     ┌─────────┐     ┌─────────┐           │
│   │ intent  │────▶│retrieve │────▶│generate │           │
│   └─────────┘     └─────────┘     └─────────┘           │
│                                        │                │
│                                        ▼                │
│                                   ┌─────────┐           │
│                                   │   END   │           │
│                                   └─────────┘           │
└─────────────────────────────────────────────────────────┘

A Complete LangGraph Implementation

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from typing import TypedDict, Annotated
import operator

# Define the graph state
class GraphState(TypedDict):
    question: str
    intent: str
    context: str
    answer: str
    iterations: int
    messages: Annotated[list, operator.add]

# Node functions
def intent_node(state: GraphState) -> dict:
    """Intent-detection node."""
    model = ChatOpenAI(model="gpt-4")
    prompt = ChatPromptTemplate.from_template("""
    Classify the question's intent:
    - product: product inquiries
    - order: order status
    - refund: refund issues
    - technical: technical support

    Question: {question}
    Output only the category:
    """)

    chain = prompt | model
    intent = chain.invoke({"question": state["question"]}).content.strip()

    return {
        "intent": intent,
        "messages": [{"role": "system", "content": f"Detected intent: {intent}"}]
    }

def retrieve_node(state: GraphState) -> dict:
    """Retrieval node."""
    # Retrieval logic goes here;
    # in a real application, query a vector store
    context = f"Relevant knowledge about {state['intent']}..."

    return {
        "context": context,
        "messages": [{"role": "system", "content": "Knowledge retrieval complete"}]
    }

def generate_node(state: GraphState) -> dict:
    """Generation node."""
    model = ChatOpenAI(model="gpt-4")
    prompt = ChatPromptTemplate.from_template("""
    Answer the question based on the information below:

    Intent: {intent}
    Knowledge: {context}
    Question: {question}

    Answer:
    """)

    chain = prompt | model
    answer = chain.invoke({
        "intent": state["intent"],
        "context": state["context"],
        "question": state["question"]
    }).content

    return {
        "answer": answer,
        "iterations": state.get("iterations", 0) + 1,
        "messages": [{"role": "assistant", "content": answer}]
    }

def should_continue(state: GraphState) -> str:
    """Decide whether to loop back for another retrieval pass."""
    if state.get("iterations", 0) >= 3:
        return END
    if "not sure" in state.get("answer", "").lower():
        return "retrieve"
    return END

# Build the graph
workflow = StateGraph(GraphState)

# Add nodes
workflow.add_node("intent", intent_node)
workflow.add_node("retrieve", retrieve_node)
workflow.add_node("generate", generate_node)

# Add edges
workflow.set_entry_point("intent")
workflow.add_edge("intent", "retrieve")
workflow.add_edge("retrieve", "generate")

# Conditional edge: loop back to retrieval or finish
workflow.add_conditional_edges(
    "generate",
    should_continue,
    {
        "retrieve": "retrieve",
        END: END
    }
)

# Compile with a checkpointer so state persists across invocations
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

# Run
result = app.invoke(
    {"question": "When will my order arrive?", "iterations": 0},
    config={"configurable": {"thread_id": "user_123"}}
)
print(result["answer"])

3.5 Deployment and Production Practices

3.5.1 Deploying with LangServe

# server.py
from langserve import add_routes
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

app = FastAPI(title="LangChain Server")

# Define the chain
prompt = ChatPromptTemplate.from_template("{topic}")
model = ChatOpenAI(model="gpt-4")
chain = prompt | model | StrOutputParser()

# Expose the chain under /chat (adds /chat/invoke, /chat/batch, /chat/stream)
add_routes(app, chain, path="/chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

3.5.2 Monitoring with LangSmith

import os

# Configure LangSmith tracing
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "your-project-name"

# Every chain execution is now traced to LangSmith automatically

3.6 Recommended GitHub Projects

Project             | Description              | Link
--------------------|--------------------------|------------------------
LangChain           | Core framework           | github.com/langchain-a…
LangGraph           | Stateful workflows       | github.com/langchain-a…
LangServe           | Deployment service       | github.com/langchain-a…
LangSmith           | Debugging and monitoring | smith.langchain.com
LangChain Templates | Application templates    | github.com/langchain-a…

