Module 5 - AI System Architecture Design | Lecture 35: Inference Service Architecture - Streaming Output, Async Queues, Token Rate Limiting, and Cost Control
Goal of this lecture: upgrade CodeSentinel's LLM inference layer from demo-grade API calls into a production-ready inference service: push review progress to the frontend in real time with SSE/StreamingResponse; absorb traffic peaks with a Redis queue consumed in parallel by a worker pool; validate budgets with tiktoken before each call; protect the upstream API with a token bucket and adaptive rate limiting; and cut the cost of repeated reviews with a semantic cache. By the end you will be able to draw the inference-service topology, explain the streaming callback chain, and implement the three core components ReviewWorker, TokenBudgetManager, and CostTracker in complete Python.
Opening: an inference service is not "one API call" but a piece of systems engineering
Many teams mistake LLM integration for "dropping a chat.completions.create into the business code." Once you enter real-world conditions, where a PR review takes minutes, users want to see "analyzing file 3", the bill is metered per token, the provider can return 429 at any moment, and the same piece of code is resubmitted by several people, you realize the inference service has to be modeled as its own component. It must resolve four tensions: the experience should be real-time (streaming), throughput should be elastic (queue + workers), cost should be controllable (budgets + caching), and stability should be predictable (rate limiting + backoff).
CodeSentinel's inference service sits at the "waist" of the architecture: downward it consumes the context produced by indexing and RAG retrieval; upward it exposes a unified review contract to the Webhook, the Dashboard, and the IDE plugin. If this layer is designed poorly, even the most elegant orchestrator above it gets dragged down: users hit timeouts, spend runs away, or the cloud provider's rate limits knock you over.
In this lecture we deliberately put "synchronous direct calls" and "asynchronous inference" in the same picture: small tasks can stream results while they compute; large tasks must be enqueued, retryable, and observable. You will see how FastAPI's StreamingResponse works with LangChain's streaming callbacks to turn token deltas into SSE events; you will also see the trade-offs between Redis List and Stream as a task queue, and how the worker pool binds parallelism to the token budget so that ten workers cannot burn through a project's budget at the same time.
Finally, we treat cost control thoroughly: not reading the bill after the fact, but knowing before the call whether this review will exceed the budget; not naive character counting, but aligning with the provider's billing units via tiktoken; plus a semantic cache so that similar changes hit historical review conclusions. Let's first pin down the component relationships with a global architecture diagram, then move into principles and complete code.
The big picture: CodeSentinel inference service architecture (Mermaid)
flowchart TB
subgraph API["API gateway layer"]
GW["FastAPI /reviews/stream"]
RL["RateLimiter\nTokenBucket"]
TB["TokenBudgetManager"]
end
subgraph Queue["Async & elasticity"]
RQ["Redis Queue\nreview:jobs"]
WP["Worker Pool\nReviewWorker x N"]
ST["JobStore\nstatus/progress"]
end
subgraph Inf["Inference & context"]
LC["LangChain\nStreaming Callbacks"]
LLM["LLM Provider API"]
SEM["SemanticCache\nembedding key"]
end
subgraph Obs["Cost & observability (ties into next lecture)"]
CT["CostTracker"]
LOG["Structured Logs"]
end
GW --> RL --> TB
TB -->|budget approved| RQ
TB -->|small task / sync path| LC
RQ --> WP --> LC
LC --> LLM
WP --> SEM
LC --> CT
WP --> ST
CT --> LOG
Streaming output path: from LLM tokens to SSE events (Mermaid)
sequenceDiagram
participant C as Client
participant F as FastAPI
participant W as ReviewWorker
participant L as LangChain Chain
participant P as LLM Provider
C->>F: GET /reviews/stream?job_id
F-->>C: text/event-stream (SSE)
W->>L: stream=True + callbacks
loop token stream
P-->>L: delta token
L-->>W: on_llm_new_token
W-->>F: enqueue SSE chunk
F-->>C: data: {...}\n\n
end
W-->>F: event: done
F-->>C: data: [DONE]
Cost and budget governance pipeline (Mermaid)
flowchart LR
subgraph Pre["Before the call"]
SRC["Source code + context"]
TC["tiktoken count"]
EST["Cost estimate\n$/1K tokens"]
BUD["Budget check\nper-review/per-project"]
end
subgraph Run["During the call"]
TB2["TokenBucket rate limiting"]
ADP["Adaptive backoff\n429/Retry-After"]
STR["Streaming callback metering"]
end
subgraph Post["After the call"]
ACC["CostTracker accounting"]
ALM["Alert thresholds\ndaily/weekly"]
end
SRC --> TC --> EST --> BUD
BUD -->|allowed| TB2 --> ADP --> STR
STR --> ACC --> ALM
Core principles: designing an inference service under high latency, high cost, and non-determinism
1. Why inference must be "service-ized" rather than inlined as a function call
The input size of a PR review varies enormously: a few dozen changed lines versus a thousand-line refactor can differ by an order of magnitude in token consumption. Inlining the call into business routes couples timeouts, retries, concurrency, billing, and caching into the route handlers, which ends up untestable and unscalable. The minimum bar for service-ization includes: a unified entry point (a review job model), a unified exit (streaming events plus a final report), unified quotas (project-level budgets), and unified observability (traces/metrics/logs, covered in Lecture 36).
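To make the "unified entry point" concrete, here is a minimal sketch of what a review-job contract could look like; ReviewJob and JobStatus are illustrative names only, not the EnqueueBody model used in the lab code later in this lecture.

```python
from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field

class JobStatus(str, Enum):
    queued = "queued"
    running = "running"
    done = "done"
    failed = "failed"

class ReviewJob(BaseModel):
    """Hypothetical unified contract: one entry model for every caller (Webhook, Dashboard, IDE plugin)."""
    job_id: str
    project_id: str
    code: str
    max_output_tokens: int = Field(512, ge=16, le=4096)
    status: JobStatus = JobStatus.queued
    progress: float = Field(0.0, ge=0.0, le=1.0)
    result: Optional[str] = None
```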
2. Streaming output: engineering trade-offs of SSE and long-lived HTTP connections
Browsers and most gateways make WebSocket more expensive to operate; for a one-way "push progress" scenario, **SSE (Server-Sent Events)** is usually enough: it runs over plain HTTP, plays well with automatic reconnection, and fits FastAPI's StreamingResponse naturally. The key is the event contract: the event field distinguishes progress, token, finding, error, and done; data carries JSON to avoid parsing ambiguity.
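A minimal sketch of that event contract; sse_frame is a hypothetical helper, not part of the lab code below:

```python
import json
from typing import Any, Dict

def sse_frame(event: str, data: Dict[str, Any]) -> bytes:
    """Format one SSE frame: an 'event:' line plus a JSON 'data:' line, terminated by a blank line."""
    return f"event: {event}\ndata: {json.dumps(data, ensure_ascii=False)}\n\n".encode("utf-8")

# Example frames following the contract described above
sse_frame("progress", {"job_id": "j-1", "progress": 0.4})
sse_frame("finding", {"severity": "high", "title": "Possible hard-coded secret"})
sse_frame("done", {"job_id": "j-1"})
```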
LangChain's streaming exposes model deltas through a callback handler. Note that there are two kinds of tokens: the model's output tokens and the prompt-side tokens; billing and budgets normally count both. The streaming path also has to deal with "partial JSON": if the frontend wants to render structured findings, accumulate the stream inside the Worker while forwarding raw tokens, and emit one final summary event at the end.
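A sketch of that accumulate-then-summarize pattern with a LangChain-style async handler. The class and method names follow langchain_core's AsyncCallbackHandler, but LangChain versions differ, so verify the hook names against the release you actually run:

```python
from typing import Any, Callable, Dict
from langchain_core.callbacks import AsyncCallbackHandler

class AccumulatingHandler(AsyncCallbackHandler):
    """Forward raw token events immediately, accumulate the full text, emit a single summary at the end."""

    def __init__(self, emit: Callable[[str, Dict[str, Any]], None]) -> None:
        self._emit = emit
        self._chunks: list[str] = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self._chunks.append(token)
        self._emit("token", {"token": token})

    async def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        # Parse structured findings out of the accumulated text here, then send one summary event
        self._emit("summary", {"text": "".join(self._chunks)})
```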
3. Async queues: peak shaving, retries, and a parallelizable worker pool
At Webhook peaks, synchronous inference drags down the API pods. Once tasks are written to a Redis queue, the API only does a quick ACK and returns a job_id; workers consume asynchronously. Queue options include LPUSH/BRPOP (simple), Redis Streams (consumer groups, suited to at-least-once semantics), or Celery/RQ (mature ecosystems). The teaching implementation uses List + BRPOP, which is enough to show the pattern; in production, prefer Streams plus a dead-letter queue plus idempotency keys.
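For the production direction, a hedged sketch of the Streams variant using redis-py's consumer-group commands; the stream, group, and consumer names are illustrative:

```python
import redis.asyncio as redis
from redis.exceptions import ResponseError

STREAM, GROUP, CONSUMER = "review:jobs:stream", "review-workers", "worker-1"  # illustrative names

async def produce(r: redis.Redis, job_id: str, payload: str) -> None:
    await r.xadd(STREAM, {"job_id": job_id, "payload": payload})

async def consume_forever(r: redis.Redis) -> None:
    try:
        await r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
    except ResponseError:
        pass  # the consumer group already exists
    while True:
        # ">" means: only messages never delivered to this group before
        entries = await r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"}, count=1, block=5000)
        for _stream, messages in entries or []:
            for msg_id, fields in messages:
                try:
                    ...  # run the review job idempotently, keyed by fields[b"job_id"]
                    await r.xack(STREAM, GROUP, msg_id)
                except Exception:
                    pass  # leave unacked; a XAUTOCLAIM / dead-letter sweep can reclaim it later
```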
Worker-pool parallelism is not "the more the better": parallelism × peak tokens per call ≈ instantaneous spend. Tie the maximum number of concurrent workers to the TokenBudgetManager, or add a global semaphore around the LLM call inside the workers.
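A minimal sketch of that global semaphore, assuming a call_llm coroutine that stands in for the real provider adapter:

```python
import asyncio

async def call_llm(prompt: str) -> str:
    """Stand-in for the real provider adapter (assumption for this sketch)."""
    await asyncio.sleep(0.1)
    return "ok"

# At most 3 in-flight LLM calls across the whole process, regardless of worker count
LLM_CONCURRENCY = asyncio.Semaphore(3)

async def guarded_llm_call(prompt: str) -> str:
    async with LLM_CONCURRENCY:
        return await call_llm(prompt)
```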
4. Token management: tiktoken, budgets, and cost estimation
Before calling the LLM, estimate prompt tokens with tiktoken (or the provider's official tokenizer). For unknown models, fall back to the closest encoding, cl100k_base, as a conservative upper bound, and allow a safety_margin in configuration (for example ×1.1). The per-review budget caps a single task; the per-project budget caps a tenant; both must be persisted (Redis/Postgres), otherwise state lost on restart punches a hole through cost control.
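When the prompt is a list of chat messages rather than a single string, each message adds a few formatting tokens. The constants below follow the commonly cited approximation for cl100k_base-style chat models and are an estimate, not the provider's exact billing:

```python
import tiktoken

def estimate_chat_tokens(messages: list[dict], encoding_name: str = "cl100k_base") -> int:
    """Rough prompt-token estimate for chat messages (approximation, not exact provider billing)."""
    enc = tiktoken.get_encoding(encoding_name)
    tokens_per_message = 3  # commonly cited per-message overhead for cl100k_base-style chat formats
    total = 3               # priming tokens for the assistant reply
    for m in messages:
        total += tokens_per_message
        for value in m.values():
            total += len(enc.encode(str(value)))
    return total

print(estimate_chat_tokens([{"role": "user", "content": "review this diff"}]))
```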
Cost estimation needs a price table (price_per_1k_input, price_per_1k_output) that can be hot-reloaded whenever the provider changes prices. CostTracker writes every call into a time-series table or a Prometheus counter so that finance can reconcile the bill.
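A sketch of the Prometheus-counter variant using prometheus_client; the metric and label names are illustrative:

```python
from prometheus_client import Counter

# Illustrative metric names; align them with your observability conventions
LLM_COST_USD = Counter("llm_cost_usd_total", "Accumulated LLM spend in USD", ["project_id", "model"])
LLM_TOKENS = Counter("llm_tokens_total", "Accumulated LLM tokens", ["project_id", "model", "kind"])

def record_call(project_id: str, model: str, prompt_tokens: int, completion_tokens: int, cost_usd: float) -> None:
    LLM_TOKENS.labels(project_id, model, "prompt").inc(prompt_tokens)
    LLM_TOKENS.labels(project_id, model, "completion").inc(completion_tokens)
    LLM_COST_USD.labels(project_id, model).inc(cost_usd)
```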
5. Rate limiting: token buckets and response-header-driven adaptive strategies
A token bucket is smoother than a fixed window: it allows short bursts while keeping the long-term rate under control. For LLM APIs you also have to handle 429: Retry-After or x-ratelimit-reset should feed back into the global limiter and temporarily lower the refill rate (adaptation). This is system protection, not just business-level retries.
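A sketch of feeding a 429 back into the TokenBucketLimiter defined later in this lecture; the httpx call and the Retry-After parsing are simplified assumptions:

```python
import httpx

async def call_with_adaptive_backoff(limiter, client: httpx.AsyncClient, url: str, payload: dict) -> httpx.Response:
    """On 429, push the provider's Retry-After hint into the shared limiter before retrying."""
    for _ in range(3):
        await limiter.acquire(1.0)
        resp = await client.post(url, json=payload)
        if resp.status_code != 429:
            return resp
        retry_after = float(resp.headers.get("Retry-After", "1"))
        limiter.penalize(retry_after)  # every worker slows down, not just this request
    resp.raise_for_status()
    return resp
```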
6. Semantic cache: similar code shares review conclusions
An exact cache (hashing the source code) has a low hit rate; a semantic cache does approximate retrieval over embeddings: if cosine_similarity >= threshold, return the historical review result directly and record cache_hit=true for observability. Watch out for PII and secrets: the cache key should be built from a hash plus an embedding of the normalized code fragment, never from plaintext tokens.
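A sketch of that normalization step, combined with the "context fingerprint" idea revisited in the questions at the end; the regex is deliberately crude and only illustrative:

```python
import hashlib
import re

SECRET_PATTERN = re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+")  # illustrative, not exhaustive

def normalize_for_cache(code: str) -> str:
    """Strip comment/whitespace noise and mask likely secrets before hashing or embedding."""
    code = re.sub(r"#.*", "", code)                   # drop Python-style comments
    code = SECRET_PATTERN.sub(r"\1=<masked>", code)   # never let plaintext secrets into the cache key
    return re.sub(r"\s+", " ", code).strip().lower()

def cache_key(code: str, file_path: str, ruleset_version: str) -> str:
    """Combine normalized code with a context fingerprint to reduce cross-context false hits."""
    fingerprint = f"{file_path}|{ruleset_version}|{normalize_for_cache(code)}"
    return hashlib.sha256(fingerprint.encode("utf-8")).hexdigest()
```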
Hands-on code: complete runnable components (FastAPI + Redis + LangChain-style callbacks)
Note: the code below is a single-file, runnable teaching version. It uses an in-memory FakeLLM to simulate streaming tokens; in a real deployment, replace FakeStreamingLLM with an OpenAI/Anthropic adapter. Dependencies: fastapi, uvicorn, redis, tiktoken. LangChain versions differ significantly, so a minimal AsyncCallbackHandler-like interface is used here to demonstrate the idea; if you are on LangChain 1.x, align the handler method names with the official docs.
1. Directory layout and dependencies
codesentinel_inference_lab/
app.py
requirements.txt
requirements.txt:
fastapi>=0.110.0
uvicorn[standard]>=0.27.0
redis>=5.0.0
tiktoken>=0.7.0
pydantic>=2.6.0
2. app.py (complete implementation)
from __future__ import annotations
import asyncio
import json
import os
import time
import uuid
from collections import deque
from dataclasses import dataclass, field
from typing import Any, AsyncIterator, Callable, Deque, Dict, List, Optional
import redis.asyncio as redis
import tiktoken
import uvicorn
from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
# -----------------------------
# Token budget and cost tracking
# -----------------------------
@dataclass
class PriceTable:
usd_per_1k_prompt: float = 0.005
usd_per_1k_completion: float = 0.015
class TokenBudgetManager:
"""per-review / per-project 预算(Redis 持久化 + 本地 safety margin)。"""
def __init__(self, r: redis.Redis, encoding_name: str = "cl100k_base") -> None:
self._r = r
try:
self._enc = tiktoken.get_encoding(encoding_name)
except Exception:
self._enc = tiktoken.get_encoding("cl100k_base")
self.safety_margin = float(os.getenv("TOKEN_SAFETY_MARGIN", "1.15"))
def count_text(self, text: str) -> int:
return len(self._enc.encode(text or ""))
def estimate_cost_usd(
self, prompt_tokens: int, max_output_tokens: int, price: PriceTable
) -> float:
        # Conservative: price the full expected output cap at the completion rate
return (
(prompt_tokens / 1000.0) * price.usd_per_1k_prompt
+ (max_output_tokens / 1000.0) * price.usd_per_1k_completion
)
async def ensure_review_budget(
self,
project_id: str,
review_id: str,
prompt_tokens: int,
max_output_tokens: int,
per_review_limit: int,
per_project_daily_limit: int,
) -> None:
if int(prompt_tokens * self.safety_margin) > per_review_limit:
raise HTTPException(
status_code=400,
detail=f"预估 prompt tokens {prompt_tokens} 超出单次审核上限 {per_review_limit}",
)
day = time.strftime("%Y%m%d", time.gmtime())
pkey = f"budget:project:{project_id}:day:{day}"
rkey = f"budget:review:{review_id}"
        async with self._r.pipeline(transaction=True) as pipe:
            # Pipeline commands are queued without awaiting; only execute() is awaited
            pipe.get(rkey)
            pipe.get(pkey)
            prev_review, prev_day = await pipe.execute()
if prev_review is not None:
raise HTTPException(status_code=409, detail="review budget already committed")
used_day = int(prev_day or 0)
projected = used_day + int(prompt_tokens + max_output_tokens)
if projected > per_project_daily_limit:
raise HTTPException(
status_code=429,
detail=f"项目 {project_id} 当日 token 预算将超限:{projected}/{per_project_daily_limit}",
)
async def commit_review_tokens(
self, project_id: str, review_id: str, total_tokens: int, ttl_seconds: int = 86400 * 2
) -> None:
day = time.strftime("%Y%m%d", time.gmtime())
pkey = f"budget:project:{project_id}:day:{day}"
rkey = f"budget:review:{review_id}"
        async with self._r.pipeline(transaction=True) as pipe:
            # Buffer all accounting commands, then submit them atomically
            pipe.incrby(pkey, total_tokens)
            pipe.expire(pkey, ttl_seconds)
            pipe.set(rkey, str(total_tokens), ex=ttl_seconds)
            await pipe.execute()
class CostTracker:
"""简化版成本记账:生产可对接 Prometheus Histogram/Summary + OLAP。"""
def __init__(self, r: redis.Redis) -> None:
self._r = r
async def record(
self,
project_id: str,
review_id: str,
prompt_tokens: int,
completion_tokens: int,
cost_usd: float,
provider: str,
model: str,
) -> None:
payload = {
"ts": time.time(),
"project_id": project_id,
"review_id": review_id,
"prompt_tokens": prompt_tokens,
"completion_tokens": completion_tokens,
"cost_usd": cost_usd,
"provider": provider,
"model": model,
}
await self._r.lpush("cost:events", json.dumps(payload, ensure_ascii=False))
await self._r.ltrim("cost:events", 0, 9_999)
# -----------------------------
# Token bucket rate limiting (global rate + simple adaptation)
# -----------------------------
class TokenBucketLimiter:
def __init__(self, rate_per_sec: float, burst: float) -> None:
self.rate = rate_per_sec
self.burst = burst
self._tokens = burst
self._last = time.monotonic()
self._lock = asyncio.Lock()
self._penalty_until = 0.0
def penalize(self, seconds: float) -> None:
self._penalty_until = max(self._penalty_until, time.monotonic() + seconds)
    async def acquire(self, cost: float = 1.0) -> None:
        # Loop and sleep outside the lock instead of recursing: asyncio.Lock is not
        # reentrant, so re-entering acquire() while holding the lock would deadlock.
        while True:
            async with self._lock:
                now = time.monotonic()
                if now < self._penalty_until:
                    sleep_for = self._penalty_until - now
                else:
                    elapsed = now - self._last
                    self._last = now
                    self._tokens = min(self.burst, self._tokens + elapsed * self.rate)
                    if self._tokens >= cost:
                        self._tokens -= cost
                        return
                    sleep_for = (cost - self._tokens) / self.rate if self.rate > 0 else 0.1
            await asyncio.sleep(sleep_for)
# -----------------------------
# Semantic cache (teaching version: in-memory vectors + cosine similarity)
# -----------------------------
def _cosine(a: List[float], b: List[float]) -> float:
dot = sum(x * y for x, y in zip(a, b))
na = sum(x * x for x in a) ** 0.5
nb = sum(y * y for y in b) ** 0.5
if na == 0 or nb == 0:
return 0.0
return dot / (na * nb)
class SemanticCache:
"""生产请换向量数据库;此处用字符 n-gram 伪嵌入,保证可运行。"""
def __init__(self, dim: int = 64, threshold: float = 0.92) -> None:
self.dim = dim
self.threshold = threshold
self._entries: List[tuple[List[float], str]] = []
def _embed(self, text: str) -> List[float]:
v = [0.0] * self.dim
t = (text or "").lower()
for i in range(len(t) - 1):
idx = (ord(t[i]) * 31 + ord(t[i + 1])) % self.dim
v[idx] += 1.0
norm = sum(x * x for x in v) ** 0.5 or 1.0
return [x / norm for x in v]
def lookup(self, code: str) -> Optional[str]:
q = self._embed(code)
best = -1.0
best_text: Optional[str] = None
for vec, ans in self._entries:
sim = _cosine(q, vec)
if sim > best:
best = sim
best_text = ans
if best >= self.threshold:
return best_text
return None
def store(self, code: str, answer: str) -> None:
self._entries.append((self._embed(code), answer))
# -----------------------------
# LangChain-style streaming callback (minimal implementation)
# -----------------------------
class StreamingReviewHandler:
def __init__(self, emit: Callable[[Dict[str, Any]], None]) -> None:
self.emit = emit
self.prompt_tokens = 0
self.completion_tokens = 0
def on_llm_new_token(self, token: str) -> None:
        self.completion_tokens += 1  # teaching approximation: 1 token ~ 1 chunk
self.emit({"event": "token", "data": {"token": token}})
def on_chain_end(self, outputs: Dict[str, Any]) -> None:
self.emit({"event": "partial", "data": outputs})
class FakeStreamingLLM:
"""模拟供应商流式输出。"""
def stream(self, prompt: str) -> List[str]:
base = "【CodeSentinel 审核摘要】\n"
findings = (
"- 风险:检测到可能的硬编码密钥模式,请确认是否误提交。\n"
"- 架构:该函数圈复杂度偏高,建议拆分或补充单测。\n"
"- 性能:存在同步 HTTP 调用位于热路径,建议异步化或加缓存。\n"
)
text = base + findings + f"\n---\n输入长度提示:{len(prompt)} 字符\n"
# 切成小 token
return [text[i : i + 8] for i in range(0, len(text), 8)]
# -----------------------------
# Redis queue + worker
# -----------------------------
QUEUE_KEY = "review:jobs"
JOB_HASH_PREFIX = "review:job:"
class EnqueueBody(BaseModel):
    project_id: str = Field(..., description="Tenant / project")
    code: str = Field(..., description="Code snippet to review")
max_output_tokens: int = Field(512, ge=16, le=4096)
per_review_token_limit: int = Field(8000, ge=256)
per_project_daily_limit: int = Field(200_000, ge=1024)
class ReviewWorker:
def __init__(
self,
r: redis.Redis,
budget: TokenBudgetManager,
cost: CostTracker,
limiter: TokenBucketLimiter,
cache: SemanticCache,
price: PriceTable,
) -> None:
self._r = r
self._budget = budget
self._cost = cost
self._limiter = limiter
self._cache = cache
self._price = price
self._llm = FakeStreamingLLM()
async def _set_job(self, job_id: str, fields: Dict[str, Any]) -> None:
key = JOB_HASH_PREFIX + job_id
await self._r.hset(key, mapping={k: json.dumps(v, ensure_ascii=False) if not isinstance(v, str) else v for k, v in fields.items()})
await self._r.expire(key, 86400 * 7)
async def run_job(self, job_id: str, body: EnqueueBody) -> None:
await self._set_job(job_id, {"status": "running", "progress": 0.0})
cached = self._cache.lookup(body.code)
if cached:
await self._set_job(
job_id,
{
"status": "done",
"progress": 1.0,
"result": cached,
"cache_hit": True,
},
)
return
        prompt = (
            "You are a senior CodeSentinel reviewer. Produce a structured review conclusion.\n"
            f"Code:\n```\n{body.code}\n```\n"
        )
prompt_tokens = self._budget.count_text(prompt)
await self._budget.ensure_review_budget(
project_id=body.project_id,
review_id=job_id,
prompt_tokens=prompt_tokens,
max_output_tokens=body.max_output_tokens,
per_review_limit=body.per_review_token_limit,
per_project_daily_limit=body.per_project_daily_limit,
)
est_cost = self._budget.estimate_cost_usd(
prompt_tokens, body.max_output_tokens, self._price
)
await self._set_job(job_id, {"status": "running", "progress": 0.1, "est_cost_usd": est_cost})
out_chunks: List[str] = []
def emit(evt: Dict[str, Any]) -> None:
            # Events from the worker go into a Redis list that the SSE endpoint drains
            # (fire-and-forget tasks are acceptable for the teaching version)
            asyncio.create_task(self._r.rpush(f"review:stream:{job_id}", json.dumps(evt, ensure_ascii=False)))
handler = StreamingReviewHandler(emit)
handler.prompt_tokens = prompt_tokens
await self._limiter.acquire(1.0)
for i, tok in enumerate(self._llm.stream(prompt), start=1):
handler.on_llm_new_token(tok)
out_chunks.append(tok)
if i % 5 == 0:
await self._set_job(job_id, {"progress": min(0.95, 0.1 + i / 200)})
result = "".join(out_chunks)
handler.on_chain_end({"result": result})
completion_tokens = self._budget.count_text(result)
total = prompt_tokens + completion_tokens
cost_usd = (
(prompt_tokens / 1000.0) * self._price.usd_per_1k_prompt
+ (completion_tokens / 1000.0) * self._price.usd_per_1k_completion
)
await self._budget.commit_review_tokens(body.project_id, job_id, total)
await self._cost.record(
project_id=body.project_id,
review_id=job_id,
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
cost_usd=cost_usd,
provider="fake",
model="codesentinel-demo",
)
self._cache.store(body.code, result)
await self._set_job(
job_id,
{
"status": "done",
"progress": 1.0,
"result": result,
"prompt_tokens": prompt_tokens,
"completion_tokens": completion_tokens,
"cost_usd": cost_usd,
"cache_hit": False,
},
)
async def worker_loop(worker: ReviewWorker, r: redis.Redis) -> None:
while True:
item = await r.brpop(QUEUE_KEY, timeout=5)
if not item:
continue
_, raw = item
payload = json.loads(raw.decode("utf-8"))
job_id = payload["job_id"]
body = EnqueueBody.model_validate(payload["body"])
try:
await worker.run_job(job_id, body)
except HTTPException as he:
await r.hset(
JOB_HASH_PREFIX + job_id,
mapping={"status": "failed", "error": json.dumps(he.detail, ensure_ascii=False)},
)
except Exception as exc: # noqa: BLE001
await r.hset(
JOB_HASH_PREFIX + job_id,
mapping={"status": "failed", "error": repr(exc)},
)
# -----------------------------
# FastAPI: enqueue + SSE
# -----------------------------
app = FastAPI(title="CodeSentinel Inference Lab", version="0.1.0")
@app.on_event("startup")
async def startup() -> None:
app.state.redis = redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379/0"), decode_responses=False)
app.state.limiter = TokenBucketLimiter(rate_per_sec=5.0, burst=10.0)
app.state.budget = TokenBudgetManager(app.state.redis)
app.state.cost = CostTracker(app.state.redis)
app.state.cache = SemanticCache()
app.state.price = PriceTable()
app.state.worker = ReviewWorker(
app.state.redis, app.state.budget, app.state.cost, app.state.limiter, app.state.cache, app.state.price
)
app.state.worker_task = asyncio.create_task(worker_loop(app.state.worker, app.state.redis))
@app.on_event("shutdown")
async def shutdown() -> None:
app.state.worker_task.cancel()
await app.state.redis.close()
@app.post("/reviews")
async def enqueue_review(body: EnqueueBody) -> Dict[str, Any]:
job_id = str(uuid.uuid4())
await app.state.redis.hset(
JOB_HASH_PREFIX + job_id,
mapping={"status": "queued", "project_id": body.project_id},
)
await app.state.redis.expire(JOB_HASH_PREFIX + job_id, 86400 * 7)
await app.state.redis.lpush(
QUEUE_KEY,
json.dumps({"job_id": job_id, "body": body.model_dump()}, ensure_ascii=False).encode("utf-8"),
)
return {"job_id": job_id, "status": "queued"}
async def sse_stream(job_id: str) -> AsyncIterator[bytes]:
r: redis.Redis = app.state.redis
key = f"review:stream:{job_id}"
last_idle = 0.0
while True:
item = await r.blpop(key, timeout=1)
if not item:
job = await r.hgetall(JOB_HASH_PREFIX + job_id)
status = (job.get(b"status") or b"").decode()
if status in {"done", "failed"}:
if status == "done":
yield f"event: done\ndata: {json.dumps({'job_id': job_id}, ensure_ascii=False)}\n\n".encode()
else:
err = (job.get(b"error") or b"unknown").decode()
yield f"event: error\ndata: {json.dumps({'error': err}, ensure_ascii=False)}\n\n".encode()
break
last_idle += 1
if last_idle > 60:
yield b"event: heartbeat\ndata: {}\n\n"
last_idle = 0.0
continue
_, raw = item
evt = json.loads(raw.decode("utf-8"))
yield f"data: {json.dumps(evt, ensure_ascii=False)}\n\n".encode()
@app.get("/reviews/stream/{job_id}")
async def stream_review(job_id: str) -> StreamingResponse:
return StreamingResponse(sse_stream(job_id), media_type="text/event-stream")
@app.get("/reviews/{job_id}")
async def get_review(job_id: str) -> Dict[str, Any]:
r: redis.Redis = app.state.redis
raw = await r.hgetall(JOB_HASH_PREFIX + job_id)
if not raw:
raise HTTPException(status_code=404, detail="job not found")
out: Dict[str, Any] = {}
for k, v in raw.items():
key = k.decode()
val = v.decode()
if key in {"result", "error"}:
out[key] = val
else:
try:
out[key] = json.loads(val)
except json.JSONDecodeError:
out[key] = val
return out
if __name__ == "__main__":
uvicorn.run("app:app", host="0.0.0.0", port=int(os.getenv("PORT", "8000")), reload=False)
3. How to run
# Terminal 1: start Redis (if Docker is installed)
docker run --rm -p 6379:6379 redis:7
# Terminal 2
export REDIS_URL=redis://127.0.0.1:6379/0
python -m uvicorn app:app --host 0.0.0.0 --port 8000
curl -s -X POST http://127.0.0.1:8000/reviews -H "Content-Type: application/json" -d "{\"project_id\":\"demo\",\"code\":\"def foo():\n pass\n\",\"max_output_tokens\":256,\"per_review_token_limit\":8000,\"per_project_daily_limit\":200000}"
# After noting the job_id:
curl -N http://127.0.0.1:8000/reviews/stream/<job_id>
Production practice: from the lab to a deliverable inference cluster
- Swap FakeLLM for a real provider: use the official async SDK for OpenAI/Anthropic with stream=True and read the async iterator; record the real usage.prompt_tokens and usage.completion_tokens in the callback instead of approximating with chunk counts (see the sketch after this list).
- Queue semantics: at-least-once consumption needs an idempotency key (review_id); on duplicate delivery, commit_review_tokens must use CAS or a Lua script so tokens are charged only once.
- SSE through proxies: Nginx needs proxy_buffering off; and gzip off; (or disable compression for SSE) plus a larger proxy_read_timeout.
- Adaptive rate limiting: after catching a 429, call limiter.penalize(retry_after), write the event to metrics, and trigger alerts.
- Semantic cache in production: use Redis Stack / a vector index or a dedicated vector database; decouple the embedding model from the review model so version drift does not cause false hits.
- Security: the SSE and job-query endpoints must be authenticated; code may contain secrets, so logs and traces need redaction (covered next lecture).
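A hedged sketch of the first bullet using the openai>=1.x async client; the model name, the stream_options flag, and the usage fields should be verified against your SDK version:

```python
from openai import AsyncOpenAI

async def stream_review(prompt: str, emit) -> dict:
    """Stream deltas to `emit`, then return the provider-reported usage for budgeting and accounting."""
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        stream_options={"include_usage": True},  # ask the API to append a final usage chunk
    )
    usage = None
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            emit({"event": "token", "data": {"token": chunk.choices[0].delta.content}})
        if chunk.usage:  # only present on the final chunk when include_usage is set
            usage = chunk.usage
    return {
        "prompt_tokens": usage.prompt_tokens if usage else 0,
        "completion_tokens": usage.completion_tokens if usage else 0,
    }
```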
Lecture recap (Mermaid mindmap)
mindmap
  root((Lecture 35: Inference Service))
    Streaming experience
      SSE
      StreamingResponse
      LangChain callbacks
    Elastic throughput
      Redis queue
      Worker pool
      Progress & status
    Cost governance
      tiktoken estimation
      Per-review / per-project budgets
      CostTracker
    Stability
      Token bucket
      Adaptive handling of 429
      Backoff & retry
    Cost reduction
      Semantic cache
      Embedding similarity
Questions for reflection
- If your team supports both synchronous small reviews and asynchronous large reviews, how would you route traffic at the API layer so the synchronous path cannot exhaust the thread pool?
- When per_project_daily_limit is tracked by UTC day but the business uses its own calendar day, how would you design a rolling 24-hour window and keep it atomic in Redis?
- A semantic-cache false hit is a compliance risk (a wrong conclusion reused in a different context). How would you introduce a "context fingerprint" (file path, dependency versions, rule-set version) to reduce false hits?
Next lecture preview
Lecture 36 moves into LLM application observability: threading a Trace ID through the review pipeline, logging prompts/responses in an auditable way (with PII redaction), aggregating latency, tokens, error rate, and quality scores with Prometheus-style metrics, and sketching CodeSentinel's alert rules and dashboards. Only when this lecture's inference service is combined with the next lecture's observability layer do you truly have an "operable" AI platform.