5 Must-Know Workflow Patterns for Agent Development: How Many Do You Recognize?


From Workflow to Agentic: An Evolution

AI is now developing faster than most of us expected. Almost overnight, we have fully entered the Agent era.

But have you noticed that as Agents have evolved, the meaning of the term itself has quietly shifted?

The earliest Agents were mostly just Workflows: a predefined call chain in which the LLM executes a fixed process step by step. With the rise of the ReAct (Reasoning + Acting) paradigm, however, the industry gradually converged on a consensus: a truly autonomous Agent should decide for itself which tools to use and how to reason.

Drawing on their enterprise-facing practice, Anthropic proposed grouping both Workflows and Agents under the broader umbrella of Agentic systems:

  • The Agent form suits complex, fast-changing scenarios that require flexible reasoning
  • The Workflow form suits scenarios that demand efficient, stable, predictable results

So when building enterprise-grade Agent applications, we should not blindly chase "the smarter, the better."


Breaking the Workflow Stereotype

Yet when most of us hear "Workflow," the first image that comes to mind is still a simple chained pipeline:

Input → LLM1 → LLM2 → LLM3 → Output

That picture is intuitive, but it does not match how things actually work in practice.

In fact, major AI framework providers such as Anthropic, LangChain, and Vercel have all settled on 5 classic workflow patterns. They are far more powerful than simple chaining and far better suited to real business scenarios.

Today, using LangGraph as the example, let's break down these 5 classic patterns one by one.

(As an aside: when selecting an AI development framework, there are alternatives to LangChain. I'll analyze the options separately when time permits.)


1. Prompt Chaining

Core idea: decompose a complex task into multiple steps, where each LLM call processes the output of the previous one.

This pattern suits tasks that can be cleanly decomposed into verifiable steps. Common scenarios include:

  • Document translation (draft → proofread → polish)
  • Content moderation (detect → classify → act)
  • Multi-turn conversation summarization

[Diagram: Prompt Chaining workflow]

// Graph state
const State = new StateSchema({
  topic: z.string(),
  joke: z.string(),
  improvedJoke: z.string(),
  finalJoke: z.string(),
});

// Define node functions

// First LLM call to generate initial joke
const generateJoke: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(`Write a short joke about ${state.topic}`);
  return { joke: msg.content };
};

// Gate function to check if the joke has a punchline
const checkPunchline: ConditionalEdgeRouter<typeof State, "improveJoke"> = (state) => {
  // Simple check - does the joke contain "?" or "!"
  if (state.joke?.includes("?") || state.joke?.includes("!")) {
    return "Pass";
  }
  return "Fail";
};

// Second LLM call to improve the joke
const improveJoke: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(
    `Make this joke funnier by adding wordplay: ${state.joke}`
  );
  return { improvedJoke: msg.content };
};

// Third LLM call for final polish
const polishJoke: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(
    `Add a surprising twist to this joke: ${state.improvedJoke}`
  );
  return { finalJoke: msg.content };
};

// Build workflow
const chain = new StateGraph(State)
  .addNode("generateJoke", generateJoke)
  .addNode("improveJoke", improveJoke)
  .addNode("polishJoke", polishJoke)
  .addEdge("__start__", "generateJoke")
  .addConditionalEdges("generateJoke", checkPunchline, {
    Pass: "improveJoke",
    Fail: "__end__"
  })
  .addEdge("improveJoke", "polishJoke")
  .addEdge("polishJoke", "__end__")
  .compile();

The strength of this pattern is controllability: validation logic can be attached to every step, so when a step goes wrong you can intervene or roll back promptly.


2. Parallelization

Core idea: multiple LLM calls handle different subtasks simultaneously, and their results are aggregated at the end.

Parallelization takes two common forms:

  1. Subtask parallelism: split one large task into independent subtasks and run them simultaneously
  2. Multiple runs: run the same task several times and pick the best of the results

Typical scenarios:

  • Generating copy, images, and headlines for a marketing campaign at the same time
  • Evaluating the same content along multiple quality dimensions
  • Calling several APIs in parallel to gather information from different sources

[Diagram: Parallelization workflow]

// Graph state
const State = new StateSchema({
  topic: z.string(),
  joke: z.string(),
  story: z.string(),
  poem: z.string(),
  combinedOutput: z.string(),
});

// Nodes
// First LLM call to generate a joke
const callLlm1: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(`Write a joke about ${state.topic}`);
  return { joke: msg.content };
};

// Second LLM call to generate story
const callLlm2: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(`Write a story about ${state.topic}`);
  return { story: msg.content };
};

// Third LLM call to generate poem
const callLlm3: GraphNode<typeof State> = async (state) => {
  const msg = await llm.invoke(`Write a poem about ${state.topic}`);
  return { poem: msg.content };
};

// Combine the joke, story and poem into a single output
const aggregator: GraphNode<typeof State> = async (state) => {
  const combined = `Here's a story, joke, and poem about ${state.topic}!\n\n` +
    `STORY:\n${state.story}\n\n` +
    `JOKE:\n${state.joke}\n\n` +
    `POEM:\n${state.poem}`;
  return { combinedOutput: combined };
};

// Build workflow
const parallelWorkflow = new StateGraph(State)
  .addNode("callLlm1", callLlm1)
  .addNode("callLlm2", callLlm2)
  .addNode("callLlm3", callLlm3)
  .addNode("aggregator", aggregator)
  .addEdge("__start__", "callLlm1")
  .addEdge("__start__", "callLlm2")
  .addEdge("__start__", "callLlm3")
  .addEdge("callLlm1", "aggregator")
  .addEdge("callLlm2", "aggregator")
  .addEdge("callLlm3", "aggregator")
  .addEdge("aggregator", "__end__")
  .compile();

This pattern can significantly reduce response latency, especially in scenarios that require output along multiple dimensions.
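The fan-out/fan-in idea can also be sketched without any graph library. In the sketch below, `fakeLlm` and `runParallel` are hypothetical stand-ins (not LangGraph APIs): three simulated "LLM calls" start at once via `Promise.all`, then an aggregator joins their outputs.

```typescript
// Minimal fan-out/fan-in sketch: three simulated "LLM calls" run
// concurrently, then an aggregator combines their outputs.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fakeLlm(kind: string, topic: string): Promise<string> {
  await delay(50); // stand-in for network latency
  return `${kind.toUpperCase()}: a ${kind} about ${topic}`;
}

async function runParallel(topic: string): Promise<string> {
  // All three calls start immediately, so the total wait is roughly one
  // call's latency (~50 ms) rather than the sum of all three (~150 ms).
  const [joke, story, poem] = await Promise.all([
    fakeLlm("joke", topic),
    fakeLlm("story", topic),
    fakeLlm("poem", topic),
  ]);
  return [story, joke, poem].join("\n\n");
}
```

The same shape is what the graph edges above express declaratively: three edges out of `__start__`, three edges into `aggregator`.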


3. Routing

Core idea: dynamically decide which processing flow to invoke next based on the input and its context.

This is one of the most common patterns in enterprise applications. Picture a customer-service system:

  • User asks about pricing → route to the pricing flow
  • User asks about returns → route to the returns flow
  • User asks about product features → route to the product-introduction flow

[Diagram: Routing workflow]

// Schema for structured output to use as routing logic
const routeSchema = z.object({
  step: z.enum(["poem", "story", "joke"]).describe(
    "The next step in the routing process"
  ),
});

// Augment the LLM with schema for structured output
const router = llm.withStructuredOutput(routeSchema);

// Graph state
const State = new StateSchema({
  input: z.string(),
  decision: z.string(),
  output: z.string(),
});

// Nodes

// Write a story
const llmCall1: GraphNode<typeof State> = async (state) => {
  const result = await llm.invoke([{
    role: "system",
    content: "You are an expert storyteller.",
  }, {
    role: "user",
    content: state.input
  }]);
  return { output: result.content };
};

// Write a joke
const llmCall2: GraphNode<typeof State> = async (state) => {
  const result = await llm.invoke([{
    role: "system",
    content: "You are an expert comedian.",
  }, {
    role: "user",
    content: state.input
  }]);
  return { output: result.content };
};

// Write a poem
const llmCall3: GraphNode<typeof State> = async (state) => {
  const result = await llm.invoke([{
    role: "system",
    content: "You are an expert poet.",
  }, {
    role: "user",
    content: state.input
  }]);
  return { output: result.content };
};

const llmCallRouter: GraphNode<typeof State> = async (state) => {
  // Route the input to the appropriate node
  const decision = await router.invoke([
    {
      role: "system",
      content: "Route the input to story, joke, or poem based on the user's request."
    },
    {
      role: "user",
      content: state.input
    },
  ]);

  return { decision: decision.step };
};

// Conditional edge function to route to the appropriate node
const routeDecision: ConditionalEdgeRouter<typeof State, "llmCall1" | "llmCall2" | "llmCall3"> = (state) => {
  // Return the node name you want to visit next
  if (state.decision === "story") {
    return "llmCall1";
  } else if (state.decision === "joke") {
    return "llmCall2";
  } else {
    return "llmCall3";
  }
};

// Build workflow
const routerWorkflow = new StateGraph(State)
  .addNode("llmCall1", llmCall1)
  .addNode("llmCall2", llmCall2)
  .addNode("llmCall3", llmCall3)
  .addNode("llmCallRouter", llmCallRouter)
  .addEdge("__start__", "llmCallRouter")
  .addConditionalEdges(
    "llmCallRouter",
    routeDecision,
    ["llmCall1", "llmCall2", "llmCall3"]
  )
  .addEdge("llmCall1", "__end__")
  .addEdge("llmCall2", "__end__")
  .addEdge("llmCall3", "__end__")
  .compile();

Routing can be rule-based (keyword matching) or LLM-based (letting the model decide which branch to take). The latter is more flexible but also somewhat more expensive.
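For contrast, a rule-based router can be a plain function with no LLM call at all. The branch names and keyword lists below are illustrative, loosely matching the customer-service example above:

```typescript
// Rule-based routing sketch: match keywords to a branch name,
// falling back to a default branch when nothing matches.
type Branch = "pricing" | "returns" | "product" | "fallback";

const RULES: Array<{ keywords: string[]; branch: Branch }> = [
  { keywords: ["price", "cost", "how much"], branch: "pricing" },
  { keywords: ["refund", "return", "exchange"], branch: "returns" },
  { keywords: ["feature", "spec"], branch: "product" },
];

function routeByKeyword(input: string): Branch {
  const text = input.toLowerCase();
  for (const rule of RULES) {
    if (rule.keywords.some((k) => text.includes(k))) return rule.branch;
  }
  return "fallback"; // no rule matched; hand off to a general flow or an LLM router
}
```

A common hybrid is to try the cheap keyword rules first and only fall back to the LLM router when no rule matches.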


4. Orchestrator-Worker

Core idea: a central orchestrator decomposes the task, dispatches subtasks to multiple workers, and then synthesizes the results.

The difference from plain parallelization: the orchestrator does not know the number of subtasks up front; it decides dynamically during execution.

Typical scenarios:

  • Code generation: first determine which files are needed, then assign each file to a worker
  • Multi-document processing: first determine how many documents there are, then distribute them to workers
  • Complex report generation: first plan the outline, then write each section separately

[Diagram: Orchestrator-Worker workflow]

// Zod schemas for the planner's structured output
const sectionSchema = z.object({
  name: z.string(),
  description: z.string(),
});
const sectionsSchema = z.object({
  sections: z.array(sectionSchema),
});
type SectionSchema = z.infer<typeof sectionSchema>;

// Augment the LLM with schema for structured output
const planner = llm.withStructuredOutput(sectionsSchema);

// Graph state
const State = new StateSchema({
  topic: z.string(),
  sections: z.array(z.custom<SectionSchema>()),
  completedSections: new ReducedValue(
    z.array(z.string()).default(() => []),
    { reducer: (a, b) => a.concat(b) }
  ),
  finalReport: z.string(),
});

// Worker state
const WorkerState = new StateSchema({
  section: z.custom<SectionSchema>(),
  completedSections: new ReducedValue(
    z.array(z.string()).default(() => []),
    { reducer: (a, b) => a.concat(b) }
  ),
});

// Nodes
const orchestrator: GraphNode<typeof State> = async (state) => {
  // Generate the report plan
  const reportSections = await planner.invoke([
    { role: "system", content: "Generate a plan for the report." },
    { role: "user", content: `Here is the report topic: ${state.topic}` },
  ]);

  return { sections: reportSections.sections };
};

const llmCall: GraphNode<typeof WorkerState> = async (state) => {
  // Generate section
  const section = await llm.invoke([
    {
      role: "system",
      content: "Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting.",
    },
    {
      role: "user",
      content: `Here is the section name: ${state.section.name} and description: ${state.section.description}`,
    },
  ]);

  // Write the updated section to completed sections
  return { completedSections: [section.content] };
};

const synthesizer: GraphNode<typeof State> = async (state) => {
  // List of completed sections
  const completedSections = state.completedSections;

  // Format completed section to str to use as context for final sections
  const completedReportSections = completedSections.join("\n\n---\n\n");

  return { finalReport: completedReportSections };
};

// Conditional edge function to create llm_call workers that each write a section of the report
const assignWorkers: ConditionalEdgeRouter<typeof State, "llmCall"> = (state) => {
  // Kick off section writing in parallel via Send() API
  return state.sections.map((section) =>
    new Send("llmCall", { section })
  );
};

// Build workflow
const orchestratorWorker = new StateGraph(State)
  .addNode("orchestrator", orchestrator)
  .addNode("llmCall", llmCall)
  .addNode("synthesizer", synthesizer)
  .addEdge("__start__", "orchestrator")
  .addConditionalEdges(
    "orchestrator",
    assignWorkers,
    ["llmCall"]
  )
  .addEdge("llmCall", "synthesizer")
  .addEdge("synthesizer", "__end__")
  .compile();

This pattern offers the most flexibility, but it is also the most complex; state management and error handling need careful attention.
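The key state-management piece above is the `completedSections` reducer: because workers finish in parallel, each worker's update must be concatenated into shared state rather than overwriting it. A framework-free sketch of that merge rule (type names here are illustrative, not LangGraph APIs):

```typescript
// Reducer sketch: merge a worker's update into shared state by
// concatenation, so concurrent workers never clobber each other's output.
type ReportState = { completedSections: string[] };
type ReportUpdate = { completedSections: string[] };

function mergeSections(state: ReportState, update: ReportUpdate): ReportState {
  return {
    completedSections: state.completedSections.concat(update.completedSections),
  };
}
```

This is exactly what the `reducer: (a, b) => a.concat(b)` option in the state schema above declares; without it, the last worker to finish would silently win.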


5. Evaluator-Optimizer

Core idea: one LLM generates content while another LLM evaluates its quality; if the output falls short, the loop repeats until the bar is met.

This is an LLM-native iteration mechanism, particularly suited to scenarios with no single correct answer but clear evaluation criteria.

Typical scenarios:

  • Translation quality optimization (fidelity, fluency, elegance)
  • Code review and optimization
  • Copy polishing (iterating until it reads well)
  • Search result ranking

[Diagram: Evaluator-Optimizer workflow]

// Graph state
const State = new StateSchema({
  joke: z.string(),
  topic: z.string(),
  feedback: z.string(),
  funnyOrNot: z.string(),
});

// Schema for structured output to use in evaluation
const feedbackSchema = z.object({
  grade: z.enum(["funny", "not funny"]).describe(
    "Decide if the joke is funny or not."
  ),
  feedback: z.string().describe(
    "If the joke is not funny, provide feedback on how to improve it."
  ),
});

// Augment the LLM with schema for structured output
const evaluator = llm.withStructuredOutput(feedbackSchema);

// Nodes
const llmCallGenerator: GraphNode<typeof State> = async (state) => {
  // LLM generates a joke
  let msg;
  if (state.feedback) {
    msg = await llm.invoke(
      `Write a joke about ${state.topic} but take into account the feedback: ${state.feedback}`
    );
  } else {
    msg = await llm.invoke(`Write a joke about ${state.topic}`);
  }
  return { joke: msg.content };
};

const llmCallEvaluator: GraphNode<typeof State> = async (state) => {
  // LLM evaluates the joke
  const grade = await evaluator.invoke(`Grade the joke ${state.joke}`);
  return { funnyOrNot: grade.grade, feedback: grade.feedback };
};

// Conditional edge function to route back to joke generator or end based upon feedback from the evaluator
const routeJoke: ConditionalEdgeRouter<typeof State, "llmCallGenerator"> = (state) => {
  // Route back to joke generator or end based upon feedback from the evaluator
  if (state.funnyOrNot === "funny") {
    return "Accepted";
  } else {
    return "Rejected + Feedback";
  }
};

// Build workflow
const optimizerWorkflow = new StateGraph(State)
  .addNode("llmCallGenerator", llmCallGenerator)
  .addNode("llmCallEvaluator", llmCallEvaluator)
  .addEdge("__start__", "llmCallGenerator")
  .addEdge("llmCallGenerator", "llmCallEvaluator")
  .addConditionalEdges(
    "llmCallEvaluator",
    routeJoke,
    {
      // Name returned by routeJoke : Name of next node to visit
      "Accepted": "__end__",
      "Rejected + Feedback": "llmCallGenerator"
    }
  )
  .compile();

This pattern mimics the human write-revise-rewrite creative process and yields more consistent output quality. Just be sure to set a maximum iteration count to avoid an infinite loop.
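One way to add that cap is to wrap the generate/evaluate cycle in a counted loop. This sketch abstracts the two nodes above into injected async functions (`generate` and `evaluate` are stand-ins, not LangGraph APIs); in the actual graph, the equivalent would be tracking an attempt count in state and routing to `__end__` once it exceeds the limit.

```typescript
// Evaluator-optimizer loop with a hard iteration cap.
// generate/evaluate are injected stand-ins for the two LLM nodes.
async function optimizeWithCap(
  generate: (feedback?: string) => Promise<string>,
  evaluate: (draft: string) => Promise<{ ok: boolean; feedback: string }>,
  maxIters = 3,
): Promise<string> {
  let draft = await generate();
  for (let i = 0; i < maxIters; i++) {
    const verdict = await evaluate(draft);
    if (verdict.ok) return draft;             // accepted: stop early
    draft = await generate(verdict.feedback); // rejected: retry with feedback
  }
  return draft; // cap reached: return the last attempt instead of looping forever
}
```

Returning the last attempt (or escalating to a human) on cap exhaustion is usually better than failing outright, since the intermediate drafts are often already usable.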


Summary

These 5 patterns are the classic Workflow paradigms. They are not mutually exclusive; in real business systems they are usually combined.

In enterprise development, we blend and extend them further according to the actual application scenario, achieving more efficient and stable Agentic applications in vertical domains.