Implementing Multi-Model Switching


What this update adds

Multi-model switching

Supported:

  • Each session selects its model independently

  • The frontend can switch models

  • The backend routes to a different provider based on the model

  • Initially compatible with:

    • Moonshot
    • DeepSeek
    • OpenAI-compatible endpoints

1) New file

server/llm_clients.py

import os
from openai import OpenAI


def get_provider_config(model_name: str):
    model = (model_name or "").strip()

    if model.startswith("moonshot/"):
        return {
            "provider": "moonshot",
            "base_url": os.getenv("MOONSHOT_BASE_URL", "https://api.moonshot.cn/v1"),
            "api_key": os.getenv("MOONSHOT_API_KEY", ""),
            "model": model.replace("moonshot/", "", 1),
        }

    if model.startswith("deepseek/"):
        return {
            "provider": "deepseek",
            "base_url": os.getenv("DEEPSEEK_BASE_URL", "https://api.deepseek.com"),
            "api_key": os.getenv("DEEPSEEK_API_KEY", ""),
            "model": model.replace("deepseek/", "", 1),
        }

    return {
        "provider": "openai",
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "api_key": os.getenv("OPENAI_API_KEY", ""),
        "model": model.replace("openai/", "", 1) if model.startswith("openai/") else model,
    }


def create_client_by_model(model_name: str):
    config = get_provider_config(model_name)
    client = OpenAI(
        api_key=config["api_key"],
        base_url=config["base_url"],
    )
    return client, config
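The prefix-based routing above can be sanity-checked without any network call or API key. A minimal sketch that inlines the same prefix rule as `get_provider_config` (so it runs standalone, without the `openai` package):

```python
import os

# Same prefix rule as get_provider_config, inlined so this runs standalone
def route(model_name: str) -> dict:
    model = (model_name or "").strip()
    for prefix, env_prefix, default_url in [
        ("moonshot/", "MOONSHOT", "https://api.moonshot.cn/v1"),
        ("deepseek/", "DEEPSEEK", "https://api.deepseek.com"),
    ]:
        if model.startswith(prefix):
            return {
                "provider": prefix.rstrip("/"),
                "base_url": os.getenv(f"{env_prefix}_BASE_URL", default_url),
                "model": model[len(prefix):],
            }
    # Anything without a known prefix falls through to the OpenAI-compatible default
    return {
        "provider": "openai",
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": model.removeprefix("openai/"),
    }

print(route("moonshot/kimi-k2-0905-preview")["provider"])  # moonshot
print(route("deepseek/deepseek-chat")["model"])            # deepseek-chat
print(route("gpt-4o-mini")["provider"])                    # openai
```

Note the fallthrough: any unprefixed model name is treated as OpenAI-compatible, which is what makes the third case work.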

2) New file

web/src/utils/models.js

export const MODEL_OPTIONS = [
  {
    value: 'moonshot/kimi-k2-0905-preview',
    label: 'Moonshot · Kimi K2',
  },
  {
    value: 'deepseek/deepseek-chat',
    label: 'DeepSeek · Chat',
  },
  {
    value: 'openai/gpt-4o-mini',
    label: 'OpenAI · GPT-4o mini',
  },
]

3) Modify web/src/utils/session.js

First add the import

import { MODEL_OPTIONS } from './models'

Add the new fields in createSession

  return {
    id: crypto.randomUUID(),
    title,
    mode,
    customPrompt: persona.systemPrompt,
    model: MODEL_OPTIONS[0].value,
    temperature: 0.7,

Add defaults in the normalize step of loadSessions

        model: MODEL_OPTIONS[0].value,
        temperature: 0.7,
        topP: 1,
        maxTokens: 1200,
        memoryEnabled: true,
        pinned: false,
        ...item,

4) Modify server/app.py

Replace the import section

from llm_clients import create_client_by_model

Delete the old global client, and add a default model name:

DEFAULT_MODEL_NAME = os.getenv("DEFAULT_MODEL_NAME", "moonshot/kimi-k2-0905-preview")

Add the new fields to ChatRequest

class ChatRequest(BaseModel):
    messages: List[Message]
    session_id: Optional[str] = None
    model: Optional[str] = None
    temperature: Optional[float] = 0.7
    top_p: Optional[float] = 1
    max_tokens: Optional[int] = 1200
    memory_enabled: Optional[bool] = True
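Because every new field is Optional with a default, older frontend payloads that omit them still validate. A self-contained sketch of that backward-compatibility point (re-declaring the two models so it runs on its own):

```python
from typing import List, Optional
from pydantic import BaseModel

class Message(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    messages: List[Message]
    session_id: Optional[str] = None
    model: Optional[str] = None
    temperature: Optional[float] = 0.7
    top_p: Optional[float] = 1
    max_tokens: Optional[int] = 1200
    memory_enabled: Optional[bool] = True

# A payload without any of the new fields still validates;
# defaults are filled in and `model` falls back to None,
# which the handlers later replace with DEFAULT_MODEL_NAME.
req = ChatRequest(messages=[{"role": "user", "content": "hi"}])
print(req.model, req.temperature)  # None 0.7
```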

Switch extract_user_memories to the dynamic client

def extract_user_memories(user_text: str, model_name: Optional[str] = None) -> List[str]:
    text = (user_text or "").strip()
    if not text:
        return []

    prompt = f"""
You are an information-extraction assistant.
From the user message below, extract "user facts worth remembering long-term".

Requirements:
1. Only extract information with long-term value, e.g. occupation, interests, goals, preferences, ongoing projects
2. Do not extract one-off small talk
3. One concise sentence per item
4. Return at most 5 items
5. Return only a JSON array, with no other output

User message:
{text}
"""

    try:
        client, config = create_client_by_model(model_name or DEFAULT_MODEL_NAME)

        completion = client.chat.completions.create(
            model=config["model"],
            messages=[
                {"role": "system", "content": "You are a rigorous user-fact extraction assistant."},
                {"role": "user", "content": prompt},
            ],
            temperature=0.2,
        )

        content = completion.choices[0].message.content or "[]"
        print("memory extract raw content:", content)

        result = json.loads(content)

        if isinstance(result, list):
            parsed = [str(item).strip() for item in result if str(item).strip()]
            if parsed:
                print("memory extract parsed:", parsed)
                return parsed

        # Heuristic fallback: Chinese first-person phrases ("I am", "I'm currently", "I want", ...)
        # that often introduce user facts; they match Chinese user input, so keep them as-is
        fallback_keywords = ["我是", "我在", "我最近", "我想", "我希望", "我主要", "我平时", "我的目标"]
        if any(keyword in text for keyword in fallback_keywords):
            print("memory extract fallback:", [text])
            return [text]

        print("memory extract parsed: []")
        return []
    except Exception as e:
        print("extract_user_memories error:", e)
        return []
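The parse-then-fallback behavior can be exercised without calling a model. A minimal sketch with a hypothetical `parse_memories` helper that mirrors the logic above, fed a simulated raw model reply:

```python
import json

# Hypothetical helper mirroring the parse-then-fallback logic of extract_user_memories
def parse_memories(raw_content: str, original_text: str) -> list[str]:
    try:
        result = json.loads(raw_content or "[]")
    except json.JSONDecodeError:
        result = None
    if isinstance(result, list):
        parsed = [str(item).strip() for item in result if str(item).strip()]
        if parsed:
            return parsed
    # Same Chinese first-person fallback keywords as above
    fallback_keywords = ["我是", "我在", "我最近", "我想", "我希望", "我主要", "我平时", "我的目标"]
    if any(k in original_text for k in fallback_keywords):
        return [original_text]
    return []

# Well-formed JSON array from the model → parsed items
print(parse_memories('["likes Rust", "works on a chat app"]', ""))
# Malformed reply, but the text looks like a self-description → keyword fallback kicks in
print(parse_memories("not json", "我是一名后端工程师"))
```

This is why the function degrades gracefully: even when the extraction model returns something that is not JSON, a sentence that looks like a user fact is still captured verbatim.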

Replace the model call in /api/chat

    try:
        client, config = create_client_by_model(req.model or DEFAULT_MODEL_NAME)

        completion = client.chat.completions.create(
            model=config["model"],
            messages=final_messages,
            temperature=req.temperature or 0.7,
            top_p=req.top_p or 1,
            max_tokens=req.max_tokens or 1200,
        )
        reply = completion.choices[0].message.content or ""

Update the memory-extraction call in /api/chat

            new_memories = extract_user_memories(latest_user_text, req.model or DEFAULT_MODEL_NAME)

Replace the model call in /api/chat/stream

        try:
            client, config = create_client_by_model(req.model or DEFAULT_MODEL_NAME)

            stream = client.chat.completions.create(
                model=config["model"],
                messages=final_messages,
                temperature=req.temperature or 0.7,
                top_p=req.top_p or 1,
                max_tokens=req.max_tokens or 1200,
                stream=True,
            )
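Downstream of this call the handler iterates the stream and re-emits deltas as Server-Sent Events. A provider-agnostic sketch of that framing, with a plain iterable standing in for the OpenAI stream (names hypothetical, not the app's actual handler):

```python
import json
from typing import Iterable, Iterator

# Hypothetical: format text deltas as SSE "data:" lines, the shape a
# FastAPI StreamingResponse generator would typically yield
def sse_events(deltas: Iterable[str]) -> Iterator[str]:
    for delta in deltas:
        if delta:  # skip empty chunks
            yield f"data: {json.dumps({'delta': delta}, ensure_ascii=False)}\n\n"
    yield "data: [DONE]\n\n"

events = list(sse_events(["Hel", "lo"]))
print(events[0])  # data: {"delta": "Hel"}
```

With the real stream, `delta` would come from `chunk.choices[0].delta.content` for each chunk.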

Update the memory-extraction call in /api/chat/stream

            new_memories = extract_user_memories(latest_user_text, req.model or DEFAULT_MODEL_NAME)

5) Modify web/src/App.vue

Update the imports

import { PERSONA_MAP, PERSONA_OPTIONS } from './utils/persona'
import { MODEL_OPTIONS } from './utils/models'

Add a computed property

const currentModel = computed(() => {
  return currentSession.value?.model || MODEL_OPTIONS[0].value
})

Add state

const modelDraft = ref(MODEL_OPTIONS[0].value)

Add a watcher

watch(
  currentModel,
  newVal => {
    modelDraft.value = newVal || MODEL_OPTIONS[0].value
  },
  { immediate: true }
)

Add a method to save the model setting

const handleSaveModelSetting = () => {
  if (!currentSession.value) return

  sessions.value = sortSessions(
    sessions.value.map(item =>
      item.id === currentSessionId.value
        ? {
            ...item,
            model: modelDraft.value || MODEL_OPTIONS[0].value,
            updatedAt: Date.now(),
          }
        : item
    )
  )
}

Add model to the streaming request body

    body: JSON.stringify({
      messages,
      session_id: currentSession.value.id,
      model: currentSession.value.model,
      temperature: currentSession.value.temperature,
      top_p: currentSession.value.topP,
      max_tokens: currentSession.value.maxTokens,
      memory_enabled: currentSession.value.memoryEnabled,
    }),

6) Modify the template

Insert a model-selector block at the top of the params-grid parameter panel

<div class="param-item">
  <label class="param-label">model</label>

  <select v-model="modelDraft" class="param-input">
    <option
      v-for="item in MODEL_OPTIONS"
      :key="item.value"
      :value="item.value"
    >
      {{ item.label }}
    </option>
  </select>

  <div class="memory-switch-row">
    <button class="prompt-btn small" @click="handleSaveModelSetting">Save model setting</button>
  </div>

  <div class="param-tip">Each session can be bound to a different model provider</div>
</div>

7) Update .env

Add the following to server/.env:

DEFAULT_MODEL_NAME=moonshot/kimi-k2-0905-preview

MOONSHOT_API_KEY=your_moonshot_key
MOONSHOT_BASE_URL=https://api.moonshot.cn/v1

DEEPSEEK_API_KEY=your_deepseek_key
DEEPSEEK_BASE_URL=https://api.deepseek.com

OPENAI_API_KEY=your_openai_key
OPENAI_BASE_URL=https://api.openai.com/v1

8) How to verify


In the demo only one provider is actually called, because the other two have no API keys configured (the behavior is identical either way, and real calls cost money); that is enough to demonstrate the switching.
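For a scripted check instead of clicking through the UI, the endpoint can be hit directly. A sketch using only the standard library, assuming the dev server runs on localhost:8000 (`build_payload`, the port, and the session id are illustrative assumptions; the actual send is left commented out so nothing runs without a server):

```python
import json
import urllib.request

# Assumed local dev address; adjust to your setup
API_URL = "http://localhost:8000/api/chat"

# Hypothetical helper: assemble the same body shape the frontend sends
def build_payload(model: str) -> dict:
    return {
        "messages": [{"role": "user", "content": "Introduce yourself in one sentence"}],
        "session_id": "local-test",
        "model": model,
        "temperature": 0.7,
        "memory_enabled": False,
    }

def send(model: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# With the server running and keys configured, compare replies per provider:
# for m in ["moonshot/kimi-k2-0905-preview", "deepseek/deepseek-chat", "openai/gpt-4o-mini"]:
#     print(m, send(m)[:80])
```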

Code changes in this commit

github.com/fhj414/ai-c…

For the full code, see the repo: github.com/huanhunmao/… — a star 🌟🌟🌟 would be appreciated, thanks!

Also fixed a UI overlap issue along the way

github.com/fhj414/ai-c…