Runnable Code Collection for LLM Deployment and Fine-Tuning


An end-to-end collection of runnable scripts: code for every step from environment setup to production deployment, ready to copy and run. Main guide: LLM部署微调学习指南.md. Model handbook: LLM部署微调-模型手册.md. Last updated: 2026-04-27

All code uses Qwen2.5-7B-Instruct as the primary example (for other models, swap target_modules and the chat_template according to the model handbook). Linux is the reference environment; on Windows, use WSL2.




Script 0: One-Shot Environment Setup

Filename: setup.sh (Linux/WSL)

#!/bin/bash
# One-shot LLM environment setup
set -e

# 1. Create the conda environment
conda create -n llm python=3.10 -y
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate llm

# 2. Switch to a domestic PyPI mirror
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
pip install --upgrade pip

# 3. Install PyTorch (CUDA 12.1)
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

# 4. Install core libraries
pip install transformers==4.46.0
pip install accelerate==0.34.2
pip install peft==0.13.0
pip install datasets==3.0.0
pip install bitsandbytes==0.44.0
pip install modelscope==1.18.0

# 5. Install deployment libraries
pip install vllm==0.6.3
pip install fastapi==0.115.0 uvicorn==0.30.6
pip install gradio==4.44.0
pip install streamlit==1.39.0

# 6. Install RAG / LangChain libraries
pip install langchain==0.3.0
pip install langchain-openai==0.2.0
pip install langchain-community==0.3.0
pip install langchain-huggingface==0.1.0
pip install chromadb==0.5.0
pip install sentence-transformers==3.1.0

# 7. Tooling
pip install jupyter ipykernel swanlab tensorboard

# 8. Verify the install
python -c "import torch; print(f'CUDA: {torch.cuda.is_available()}, GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else None}')"

echo "环境搭建完成 ✓"

Run:

chmod +x setup.sh && ./setup.sh

Windows users: WSL2 + Ubuntu is recommended; bitsandbytes is problematic on native Windows.


Script 1: Model Download

Filename: download.py

"""从 ModelScope 下载模型权重(国内推荐)。"""
from modelscope import snapshot_download
import os

# Configure the ModelScope cache directory (default ~/.cache/modelscope); snapshot_download below overrides it with cache_dir='./models'
os.environ['MODELSCOPE_CACHE'] = './modelscope_cache'

MODELS = {
    "qwen2.5-7b": "Qwen/Qwen2.5-7B-Instruct",
    "qwen2.5-1.5b": "Qwen/Qwen2.5-1.5B-Instruct",
    "deepseek-7b": "deepseek-ai/deepseek-llm-7b-chat",
    "glm4-9b": "ZhipuAI/glm-4-9b-chat-hf",
    "internlm2.5-7b": "Shanghai_AI_Laboratory/internlm2_5-7b-chat",
    "minicpm-2b": "OpenBMB/MiniCPM-2B-sft-bf16",
    "llama3.1-8b": "LLM-Research/Meta-Llama-3.1-8B-Instruct",
    "qwen2-vl-7b": "Qwen/Qwen2-VL-7B-Instruct",
    "embedding-bge": "BAAI/bge-large-zh-v1.5",
}

def download(name):
    if name not in MODELS:
        raise ValueError(f"Unknown model {name}; available: {list(MODELS.keys())}")
    path = snapshot_download(MODELS[name], cache_dir='./models')
    print(f"模型已下载到: {path}")
    return path

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("用法: python download.py <模型名>")
        print(f"可选模型: {list(MODELS.keys())}")
        sys.exit(1)
    download(sys.argv[1])

Usage:

python download.py qwen2.5-7b
# Downloads to ./models/Qwen/Qwen2.5-7B-Instruct/

HuggingFace mirror (for downloading the Llama family):

export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --local-dir ./models/llama3.1-8b
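The same mirror download can also be scripted from Python with huggingface_hub; a minimal sketch (for gated repos such as Llama you may additionally need to pass token=...):

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"   # must be set before importing huggingface_hub

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B-Instruct",
    local_dir="./models/llama3.1-8b",
)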

Script 2: transformers Inference Sanity Check

Filename: infer_basic.py

"""最朴素的 transformers 推理,用来验证模型能跑。"""
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"

# Load
print("Loading model...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
model.eval()
print(f"模型加载完成,显存占用: {torch.cuda.memory_allocated()/1024**3:.2f} GB")

# Multi-turn conversation
messages = [
    {"role": "system", "content": "你是一个有用的助手。"},
    {"role": "user", "content": "用三句话解释什么是量子纠缠。"}
]

# Key point: use the official chat_template instead of hand-building the prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
print(f"实际输入:\n{text}\n{'-'*40}")

inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        temperature=0.7,
        top_p=0.8,
        do_sample=True,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.eos_token_id
    )

# Decode only the newly generated tokens
response = tokenizer.decode(
    outputs[0][inputs.input_ids.shape[1]:],
    skip_special_tokens=True
)
print(f"模型回复:\n{response}")

Run:

python infer_basic.py

This step must succeed; otherwise all of the deployment and fine-tuning steps that follow are pointless. Common errors:

  • CUDA OOM → the model is too large; switch to a smaller model or load it in 8-bit (see the sketch below)
  • trust_remote_code warning → normal; some models require it
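If CUDA OOM hits at this step and a smaller model is not an option, the weights can be loaded in 8-bit via bitsandbytes; a minimal sketch (slightly lower quality and slower generation than BF16):

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # roughly halves weight memory vs BF16
    device_map="auto",
    trust_remote_code=True
)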

Script 3: FastAPI Serving

Filename: api_server.py

"""FastAPI 部署:提供兼容 OpenAI 格式的 /v1/chat/completions 接口。"""
import time
import uuid
from contextlib import asynccontextmanager
from typing import List, Optional

import torch
import uvicorn
from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
from threading import Thread

MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"

# Global objects, loaded only once
tokenizer = None
model = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    global tokenizer, model
    print("加载模型...")
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True
    )
    model.eval()
    print("模型就绪")
    yield
    # Clean up on shutdown
    del model
    torch.cuda.empty_cache()


app = FastAPI(lifespan=lifespan)


class Message(BaseModel):
    role: str
    content: str


class ChatRequest(BaseModel):
    messages: List[Message]
    model: str = "qwen2.5"
    temperature: float = 0.7
    top_p: float = 0.8
    max_tokens: int = 512
    stream: bool = False


def generate_text(messages, max_tokens, temperature, top_p):
    text = tokenizer.apply_chat_template(
        [m.model_dump() for m in messages],  # Pydantic v2: model_dump() replaces the deprecated dict()
        tokenize=False,
        add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            do_sample=temperature > 0,
            repetition_penalty=1.05,
            pad_token_id=tokenizer.eos_token_id
        )
    return tokenizer.decode(
        outputs[0][inputs.input_ids.shape[1]:],
        skip_special_tokens=True
    )


def generate_stream(messages, max_tokens, temperature, top_p):
    text = tokenizer.apply_chat_template(
        [m.model_dump() for m in messages],
        tokenize=False,
        add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)

    streamer = TextIteratorStreamer(
        tokenizer,
        skip_prompt=True,
        skip_special_tokens=True,
        timeout=60
    )

    gen_kwargs = dict(
        **inputs,
        streamer=streamer,
        max_new_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
        do_sample=temperature > 0,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.eos_token_id
    )

    thread = Thread(target=model.generate, kwargs=gen_kwargs)
    thread.start()

    request_id = f"chatcmpl-{uuid.uuid4()}"
    for chunk in streamer:
        yield f'data: {{"id":"{request_id}","object":"chat.completion.chunk",' \
              f'"created":{int(time.time())},"model":"qwen2.5",' \
              f'"choices":[{{"index":0,"delta":{{"content":{chunk!r}}},"finish_reason":null}}]}}\n\n'
    yield "data: [DONE]\n\n"


@app.post("/v1/chat/completions")
async def chat(req: ChatRequest):
    if model is None:
        raise HTTPException(503, "Model not ready")

    if req.stream:
        return StreamingResponse(
            generate_stream(req.messages, req.max_tokens, req.temperature, req.top_p),
            media_type="text/event-stream"
        )

    response_text = generate_text(req.messages, req.max_tokens, req.temperature, req.top_p)
    return {
        "id": f"chatcmpl-{uuid.uuid4()}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": response_text},
            "finish_reason": "stop"
        }],
        "usage": {"prompt_tokens": -1, "completion_tokens": -1, "total_tokens": -1}
    }


@app.get("/health")
async def health():
    return {"status": "ok" if model else "loading"}


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000, workers=1)

Start:

python api_server.py

Test (with the OpenAI SDK):

from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
resp = client.chat.completions.create(
    model="qwen2.5",
    messages=[{"role": "user", "content": "你好"}]
)
print(resp.choices[0].message.content)
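To exercise the streaming path as well, the raw SSE chunks can be inspected with requests; a minimal sketch (assumes the server above is running on port 8000):

import json
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={"model": "qwen2.5",
          "messages": [{"role": "user", "content": "你好"}],
          "stream": True},
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    data = line[len("data: "):]
    if data == "[DONE]":
        break
    chunk = json.loads(data)
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)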

Script 4: Gradio WebUI

Filename: webui.py

"""Gradio 聊天界面,5 行核心代码搞定多轮对话。"""
import torch
import gradio as gr
from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
from threading import Thread

MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"

print("加载模型...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
model.eval()
print("就绪")


def chat_stream(message, history, system_prompt, temperature, max_tokens):
    """history: [(user1, assistant1), (user2, assistant2), ...]"""
    messages = [{"role": "system", "content": system_prompt}]
    for u, a in history:
        messages.append({"role": "user", "content": u})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": message})

    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)

    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    gen_kwargs = dict(
        **inputs,
        streamer=streamer,
        max_new_tokens=max_tokens,
        temperature=temperature,
        top_p=0.8,
        do_sample=True,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.eos_token_id
    )
    Thread(target=model.generate, kwargs=gen_kwargs).start()

    response = ""
    for chunk in streamer:
        response += chunk
        yield response


with gr.Blocks(title="LLM Chat") as demo:
    gr.Markdown("# Qwen2.5-7B 对话")
    with gr.Row():
        with gr.Column(scale=4):
            chatbot = gr.ChatInterface(
                fn=chat_stream,
                additional_inputs=[
                    gr.Textbox("你是一个有用的助手。", label="System Prompt"),
                    gr.Slider(0.1, 1.5, 0.7, label="Temperature"),
                    gr.Slider(64, 4096, 512, step=64, label="Max Tokens"),
                ]
            )

if __name__ == "__main__":
    demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False)

Start:

python webui.py
# Open http://localhost:7860 in your browser
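If the model is already being served through Script 3 or Script 5, the UI does not have to load the weights itself; a minimal sketch of the same chat interface backed by the OpenAI-compatible endpoint (the filename webui_openai.py and port 7861 are arbitrary choices):

# webui_openai.py — UI only, talks to the API server from Script 3 / Script 5
import gradio as gr
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

def chat_stream(message, history):
    # In Gradio 4.x, history arrives as [(user, assistant), ...] pairs
    messages = [{"role": "system", "content": "你是一个有用的助手。"}]
    for u, a in history:
        messages += [{"role": "user", "content": u}, {"role": "assistant", "content": a}]
    messages.append({"role": "user", "content": message})

    stream = client.chat.completions.create(model="qwen2.5", messages=messages, stream=True)
    response = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            response += delta
            yield response

gr.ChatInterface(fn=chat_stream, title="Qwen2.5 Chat (API backend)").queue().launch(
    server_name="0.0.0.0", server_port=7861
)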

Script 5: vLLM Production Deployment

Launch command (single GPU):

python -m vllm.entrypoints.openai.api_server \
  --model ./models/Qwen/Qwen2.5-7B-Instruct \
  --served-model-name qwen2.5 \
  --host 0.0.0.0 \
  --port 8000 \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.9 \
  --tensor-parallel-size 1 \
  --dtype bfloat16

Multi-GPU (4 GPUs):

python -m vllm.entrypoints.openai.api_server \
  --model ./models/Qwen/Qwen2.5-72B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.85

With quantization (AWQ):

# Download the AWQ-quantized weights first
python download.py qwen2.5-7b-awq   # requires adding this entry to download.py first (see below)
# Launch
python -m vllm.entrypoints.openai.api_server \
  --model ./models/Qwen/Qwen2.5-7B-Instruct-AWQ \
  --quantization awq \
  --max-model-len 32768
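The download.py entry referenced above could look like this (the ModelScope repo id is an assumption; confirm it exists before relying on it):

# Add to the MODELS dict in download.py
MODELS["qwen2.5-7b-awq"] = "Qwen/Qwen2.5-7B-Instruct-AWQ"   # repo id assumed, verify on ModelScope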

Client calls (OpenAI SDK):

from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Standard call
resp = client.chat.completions.create(
    model="qwen2.5",
    messages=[{"role": "user", "content": "讲个笑话"}],
    temperature=0.7,
    max_tokens=256
)
print(resp.choices[0].message.content)

# Streaming call
stream = client.chat.completions.create(
    model="qwen2.5",
    messages=[{"role": "user", "content": "写一首诗"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Load test (for comparison against transformers):

import time
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="x")

start = time.time()
total_tokens = 0
for i in range(10):
    resp = client.chat.completions.create(
        model="qwen2.5",
        messages=[{"role": "user", "content": f"请用 200 字讲述编号 {i} 的故事"}],
        max_tokens=300
    )
    total_tokens += resp.usage.completion_tokens

elapsed = time.time() - start
print(f"10 个请求耗时 {elapsed:.1f}s,吞吐 {total_tokens/elapsed:.1f} tokens/s")

In practice, a 7B model on an RTX 4090 reaches 100+ tokens/s with vLLM, versus only 30-40 tokens/s with plain transformers.
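Note that the loop above sends requests one at a time, so it mostly measures single-stream latency; vLLM's continuous batching only shows up when requests arrive concurrently. A minimal sketch of a concurrent variant (the worker and request counts are arbitrary):

import time
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="x")

def one_request(i):
    resp = client.chat.completions.create(
        model="qwen2.5",
        messages=[{"role": "user", "content": f"请用 200 字讲述编号 {i} 的故事"}],
        max_tokens=300
    )
    return resp.usage.completion_tokens

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:   # 8 requests in flight, batched by vLLM
    total_tokens = sum(pool.map(one_request, range(32)))
elapsed = time.time() - start
print(f"32 concurrent requests: {elapsed:.1f}s, {total_tokens/elapsed:.1f} tokens/s")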


Script 6: Full LoRA Fine-Tuning Pipeline

Filename: train_lora.py

"""端到端 LoRA 微调脚本。"""
import json
import torch
from transformers import (
    AutoTokenizer, AutoModelForCausalLM,
    TrainingArguments, Trainer, DataCollatorForSeq2Seq
)
from peft import LoraConfig, get_peft_model, TaskType
from datasets import Dataset

# ============ Config ============
MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"
DATA_PATH = "./data/train.json"
OUTPUT_DIR = "./output/qwen2.5-7b-lora"
MAX_LENGTH = 1024


# ============ 1. Load tokenizer and model ============
print("[1/5] Loading model...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
model.enable_input_require_grads()  # Key: make input embeddings require grads (needed with gradient_checkpointing + LoRA)


# ============ 2. Configure LoRA ============
print("[2/5] Configuring LoRA...")
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj","k_proj","v_proj","o_proj",
                    "gate_proj","up_proj","down_proj"],
    bias="none"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Prints something like: trainable params: 20,185,088 || all params: 7,635,801,600 || trainable%: 0.2643


# ============ 3. Prepare data ============
print("[3/5] Processing data...")
def process_example(example):
    """Format instruction/input/output into model inputs; compute loss only on the output."""
    instruction = example['instruction']
    input_text = example.get('input', '')
    output = example['output']

    # Assemble the chat-format prompt
    messages = [
        {"role": "system", "content": "你是一个有用的助手。"},
        {"role": "user", "content": f"{instruction}\n{input_text}".strip()},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    full_text = prompt + output + tokenizer.eos_token

    # tokenize
    full_ids = tokenizer(full_text, add_special_tokens=False)['input_ids']
    prompt_ids = tokenizer(prompt, add_special_tokens=False)['input_ids']

    # truncate
    if len(full_ids) > MAX_LENGTH:
        full_ids = full_ids[:MAX_LENGTH]

    # labels: mask the prompt tokens with -100 (no loss); learn only the output
    labels = [-100] * len(prompt_ids) + full_ids[len(prompt_ids):]
    labels = labels[:len(full_ids)]

    return {
        "input_ids": full_ids,
        "attention_mask": [1] * len(full_ids),
        "labels": labels
    }


# Load the JSON data
with open(DATA_PATH, 'r', encoding='utf-8') as f:
    raw_data = json.load(f)

dataset = Dataset.from_list(raw_data)
dataset = dataset.map(process_example, remove_columns=dataset.column_names)
dataset = dataset.train_test_split(test_size=0.05, seed=42)
print(f"训练集 {len(dataset['train'])} 条,验证集 {len(dataset['test'])} 条")


# ============ 4. Training arguments ============
print("[4/5] Starting training...")
training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,        # effective batch size = 16
    gradient_checkpointing=True,           # saves VRAM
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    logging_steps=10,
    save_strategy="epoch",
    eval_strategy="epoch",
    save_total_limit=2,
    bf16=True,
    report_to=["tensorboard"],
    optim="adamw_torch"
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset['train'],
    eval_dataset=dataset['test'],
    data_collator=DataCollatorForSeq2Seq(
        tokenizer=tokenizer, padding=True, return_tensors="pt"
    )
)

trainer.train()


# ============ 5. Save ============
print("[5/5] Saving model...")
final_dir = f"{OUTPUT_DIR}/final"
model.save_pretrained(final_dir)
tokenizer.save_pretrained(final_dir)
print(f"LoRA adapter 保存到 {final_dir} (体积约 80MB)")

Start training:

python train_lora.py
# Monitor: tensorboard --logdir ./output/qwen2.5-7b-lora/runs

VRAM usage (7B + LoRA): about 20 GB (BF16 + gradient_checkpointing).
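Before launching a long run, it is worth printing one processed sample to confirm that the -100 mask covers exactly the prompt part; a minimal sketch that can be dropped into train_lora.py after step 3 (it reuses tokenizer, process_example, and raw_data from the script):

# Sanity-check the label masking on the first sample
sample = process_example(raw_data[0])
prompt_len = sample["labels"].count(-100)            # number of masked prompt tokens
print("prompt (masked, no loss):", tokenizer.decode(sample["input_ids"][:prompt_len]))
print("output (loss computed)  :", tokenizer.decode(sample["input_ids"][prompt_len:]))
assert len(sample["input_ids"]) == len(sample["labels"])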


Script 7: QLoRA Fine-Tuning (6 GB VRAM)

Filename: train_qlora.py

"""QLoRA 微调:4bit 量化 + LoRA,7B 模型只要 6-8GB 显存。"""
import json
import torch
from transformers import (
    AutoTokenizer, AutoModelForCausalLM,
    BitsAndBytesConfig, TrainingArguments, Trainer, DataCollatorForSeq2Seq
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training, TaskType
from datasets import Dataset

MODEL_PATH = "./models/Qwen/Qwen2.5-7B-Instruct"
DATA_PATH = "./data/train.json"
OUTPUT_DIR = "./output/qwen2.5-7b-qlora"
MAX_LENGTH = 1024


# ============ Key QLoRA config ============
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",                  # NF4 quantization, optimized for normally distributed weights
    bnb_4bit_compute_dtype=torch.bfloat16,      # compute in BF16
    bnb_4bit_use_double_quant=True              # double quantization, saves another ~0.4 GB
)
)

# Load the model in 4-bit
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    quantization_config=bnb_config,             # ← key: 4-bit quantization
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Prepare for k-bit training (enables input grads, handles the quantized layers)
model = prepare_model_for_kbit_training(model)

# LoRA config (identical to plain LoRA)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj","k_proj","v_proj","o_proj",
                    "gate_proj","up_proj","down_proj"],
    bias="none"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()


# ============ Data processing (same as the LoRA script) ============
def process_example(example):
    messages = [
        {"role": "system", "content": "你是一个有用的助手。"},
        {"role": "user", "content": f"{example['instruction']}\n{example.get('input','')}".strip()}
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    full_text = prompt + example['output'] + tokenizer.eos_token
    full_ids = tokenizer(full_text, add_special_tokens=False)['input_ids']
    prompt_ids = tokenizer(prompt, add_special_tokens=False)['input_ids']
    if len(full_ids) > MAX_LENGTH:
        full_ids = full_ids[:MAX_LENGTH]
    labels = [-100] * len(prompt_ids) + full_ids[len(prompt_ids):]
    labels = labels[:len(full_ids)]
    return {"input_ids": full_ids, "attention_mask": [1]*len(full_ids), "labels": labels}


with open(DATA_PATH, encoding='utf-8') as f:
    raw_data = json.load(f)
dataset = Dataset.from_list(raw_data).map(process_example, remove_columns=["instruction","input","output"])
dataset = dataset.train_test_split(test_size=0.05, seed=42)


# ============ Training ============
training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    num_train_epochs=3,
    per_device_train_batch_size=1,           # smaller batch for QLoRA
    gradient_accumulation_steps=16,           # but more accumulation steps
    gradient_checkpointing=True,
    learning_rate=1e-4,                       # slightly lower LR than plain LoRA
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    logging_steps=10,
    save_strategy="epoch",
    eval_strategy="epoch",
    bf16=True,
    optim="paged_adamw_8bit",                 # QLoRA 推荐:分页优化器
    report_to=["tensorboard"]
)

trainer = Trainer(
    model=model, args=training_args,
    train_dataset=dataset['train'], eval_dataset=dataset['test'],
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True)
)
trainer.train()
model.save_pretrained(f"{OUTPUT_DIR}/final")
tokenizer.save_pretrained(f"{OUTPUT_DIR}/final")

VRAM comparison:

Method              VRAM (7B training)   Training speed   Quality
Full fine-tuning    80+ GB               1× (baseline)    100%
LoRA                20 GB                1.2×             99%
QLoRA               6-8 GB               0.7×             97%
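To verify these numbers on your own hardware, peak VRAM can be logged during training with a small TrainerCallback; a minimal sketch (works with either training script; pass callbacks=[MemoryCallback()] to the Trainer):

import torch
from transformers import TrainerCallback

class MemoryCallback(TrainerCallback):
    """Print peak GPU memory at every logging step."""
    def on_log(self, args, state, control, **kwargs):
        peak_gb = torch.cuda.max_memory_allocated() / 1024**3
        print(f"step {state.global_step}: peak VRAM {peak_gb:.1f} GB")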

Script 8: LoRA Inference and Merging

Filename: infer_lora.py

"""加载训练好的 LoRA adapter 进行推理,以及合并到 base 模型。"""
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL = "./models/Qwen/Qwen2.5-7B-Instruct"
LORA_PATH = "./output/qwen2.5-7b-lora/final"


def infer_with_adapter():
    """姿势 1:动态加载 adapter(适合多任务切换)。"""
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True
    )
    model = PeftModel.from_pretrained(base, LORA_PATH)
    model.eval()

    messages = [
        {"role": "system", "content": "你是一个有用的助手。"},
        {"role": "user", "content": "测试 LoRA 微调效果"}
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))


def merge_and_save():
    """姿势 2:把 LoRA 合并进 base 模型,产出完整模型(给 vLLM 用)。"""
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL,
        torch_dtype=torch.bfloat16,
        device_map="cpu",                  # 合并在 CPU 上做,省 GPU
        trust_remote_code=True
    )
    model = PeftModel.from_pretrained(base, LORA_PATH)
    merged = model.merge_and_unload()       # the key call

    save_path = "./output/qwen2.5-7b-merged"
    merged.save_pretrained(save_path, safe_serialization=True)
    tokenizer.save_pretrained(save_path)
    print(f"合并后的模型保存到 {save_path}(完整 14GB)")
    print("现在可以用 vLLM 直接加载:")
    print(f"  vllm serve {save_path}")


if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1 and sys.argv[1] == "merge":
        merge_and_save()
    else:
        infer_with_adapter()

Usage:

# Inference with the adapter
python infer_lora.py

# Merge, then deploy with vLLM
python infer_lora.py merge
python -m vllm.entrypoints.openai.api_server --model ./output/qwen2.5-7b-merged
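For the multi-task switching mentioned in Option 1, several adapters can be attached to one base model and swapped at runtime; a minimal sketch (reuses base and LORA_PATH from infer_lora.py; the second adapter path is a placeholder):

from peft import PeftModel

model = PeftModel.from_pretrained(base, LORA_PATH, adapter_name="task_a")
model.load_adapter("./output/another-task-lora/final", adapter_name="task_b")  # placeholder path

model.set_adapter("task_a")   # requests now go through task_a's LoRA weights
# ... generate ...
model.set_adapter("task_b")   # switch tasks without reloading the ~14 GB base model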

Script 9: Local Knowledge-Base RAG

Filename: rag_demo.py

"""完整的本地知识库 RAG 实现。"""
import os
from langchain_community.document_loaders import TextLoader, PyPDFLoader, DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

# ============ Config ============
KNOWLEDGE_DIR = "./knowledge"           # knowledge-base document directory
PERSIST_DIR = "./chroma_db"             # vector store persistence path
EMBEDDING_MODEL = "./models/BAAI/bge-large-zh-v1.5"
LLM_BASE_URL = "http://localhost:8000/v1"   # vLLM server address


def build_index():
    """步骤 1:构建向量索引(只需运行一次)。"""
    print("加载文档...")
    # 支持 .txt 和 .pdf 混合
    txt_loader = DirectoryLoader(KNOWLEDGE_DIR, glob="**/*.txt",
                                  loader_cls=TextLoader,
                                  loader_kwargs={"encoding": "utf-8"})
    pdf_loader = DirectoryLoader(KNOWLEDGE_DIR, glob="**/*.pdf",
                                  loader_cls=PyPDFLoader)
    documents = txt_loader.load() + pdf_loader.load()
    print(f"加载 {len(documents)} 个文档")

    print("切块...")
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=500,                     # 每块约 500 字符
        chunk_overlap=50,                    # 相邻块重叠 50 字符
        separators=["\n\n", "\n", "。", "!", "?", ";", ",", " ", ""]
    )
    chunks = splitter.split_documents(documents)
    print(f"切成 {len(chunks)} 个 chunk")

    print("生成 embedding 并入库...")
    embeddings = HuggingFaceEmbeddings(
        model_name=EMBEDDING_MODEL,
        model_kwargs={"device": "cuda"},
        encode_kwargs={"normalize_embeddings": True}
    )
    vectordb = Chroma.from_documents(
        documents=chunks,
        embedding=embeddings,
        persist_directory=PERSIST_DIR
    )
    print(f"向量库已保存到 {PERSIST_DIR}")
    return vectordb


def query(question, k=4):
    """步骤 2:用知识库回答问题。"""
    embeddings = HuggingFaceEmbeddings(
        model_name=EMBEDDING_MODEL,
        model_kwargs={"device": "cuda"},
        encode_kwargs={"normalize_embeddings": True}
    )
    vectordb = Chroma(persist_directory=PERSIST_DIR, embedding_function=embeddings)

    llm = ChatOpenAI(
        base_url=LLM_BASE_URL,
        api_key="dummy",
        model="qwen2.5",
        temperature=0.3
    )

    # Custom prompt template that forces answers to be grounded in the retrieved context
    prompt = PromptTemplate(
        input_variables=["context", "question"],
        template="""你是一个严谨的助手。请只基于下面提供的资料回答问题。
如果资料里没有相关信息,直接说"我不知道",不要编造。

资料:
{context}

问题:{question}

回答:"""
    )

    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectordb.as_retriever(search_kwargs={"k": k}),
        return_source_documents=True,
        chain_type_kwargs={"prompt": prompt}
    )

    result = qa.invoke({"query": question})
    print(f"\n问题:{question}")
    print(f"回答:{result['result']}\n")
    print("引用来源:")
    for i, doc in enumerate(result['source_documents']):
        src = doc.metadata.get('source', 'unknown')
        print(f"  [{i+1}] {src}")
        print(f"      {doc.page_content[:100]}...")


if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1 and sys.argv[1] == "build":
        build_index()
    else:
        # Interactive Q&A
        print("RAG knowledge-base Q&A (Ctrl+C to exit)")
        while True:
            try:
                q = input("\n> ").strip()
                if q:
                    query(q)
            except KeyboardInterrupt:
                print("\n再见")
                break

Usage flow:

# 1. Prepare the knowledge base (drop in .txt / .pdf files)
mkdir -p ./knowledge
cp /path/to/your/docs/*.txt ./knowledge/

# 2. Start the vLLM server (in another terminal)
python -m vllm.entrypoints.openai.api_server --model ./models/Qwen/Qwen2.5-7B-Instruct

# 3. Build the index (only once)
python rag_demo.py build

# 4. Ask questions
python rag_demo.py
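When answers look off, it usually pays to check what the retriever is actually returning before blaming the LLM; a minimal sketch that queries the same Chroma store directly:

from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings(
    model_name="./models/BAAI/bge-large-zh-v1.5",
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": True},
)
vectordb = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)

# With Chroma's default metric, a lower score means a closer match
for doc, score in vectordb.similarity_search_with_score("报销流程", k=4):
    print(f"{score:.3f}  {doc.metadata.get('source', 'unknown')}  {doc.page_content[:60]}")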

Script 10: Dataset Format Conversion Tools

Filename: prepare_data.py

"""把各种格式的原始数据转成 Alpaca 标准格式。"""
import json
import csv
import re
from pathlib import Path


def from_csv(csv_path, output_path, instruction_col, input_col, output_col):
    """从 CSV 转换。"""
    data = []
    with open(csv_path, encoding='utf-8') as f:
        reader = csv.DictReader(f)
        for row in reader:
            data.append({
                "instruction": row[instruction_col].strip(),
                "input": row.get(input_col, "").strip(),
                "output": row[output_col].strip()
            })
    save(data, output_path)


def from_qa_text(text_path, output_path):
    """从 'Q: xxx\nA: xxx' 格式转换。"""
    text = Path(text_path).read_text(encoding='utf-8')
    pattern = re.compile(r'Q[::]\s*(.+?)\nA[::]\s*(.+?)(?=\nQ[::]|\Z)', re.DOTALL)
    data = []
    for m in pattern.finditer(text):
        data.append({
            "instruction": m.group(1).strip(),
            "input": "",
            "output": m.group(2).strip()
        })
    save(data, output_path)


def from_chatml(chatml_path, output_path):
    """从多轮对话格式转换(每条是 messages 列表)。"""
    with open(chatml_path, encoding='utf-8') as f:
        raw = json.load(f)
    data = []
    for sample in raw:
        msgs = sample['messages']
        # Take each user→assistant turn as a training sample
        for i in range(len(msgs) - 1):
            if msgs[i]['role'] == 'user' and msgs[i+1]['role'] == 'assistant':
                # earlier turns go into the input field as context
                history = '\n'.join([f"{m['role']}: {m['content']}" for m in msgs[:i]])
                data.append({
                    "instruction": msgs[i]['content'],
                    "input": history,
                    "output": msgs[i+1]['content']
                })
    save(data, output_path)


def filter_quality(input_path, output_path, min_len=10, max_len=2000):
    """质量过滤:去重 + 长度过滤。"""
    with open(input_path, encoding='utf-8') as f:
        data = json.load(f)
    print(f"原始样本数:{len(data)}")

    seen = set()
    filtered = []
    for item in data:
        # Deduplicate (keyed on instruction + output)
        key = (item['instruction'], item['output'])
        if key in seen:
            continue
        seen.add(key)
        # Length filter
        if not (min_len <= len(item['output']) <= max_len):
            continue
        # Content check (drop empty fields)
        if not item['instruction'].strip() or not item['output'].strip():
            continue
        filtered.append(item)

    print(f"过滤后样本数:{len(filtered)}(去除 {len(data)-len(filtered)})")
    save(filtered, output_path)


def save(data, path):
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
    print(f"保存 {len(data)} 条样本到 {path}")


if __name__ == "__main__":
    # Examples:
    # from_csv("raw.csv", "data/train.json", "question", "context", "answer")
    # from_qa_text("faq.txt", "data/train.json")
    # filter_quality("data/train.json", "data/train_filtered.json")
    pass

Script 11: LangChain Agent Tool Calling

Filename: agent_demo.py

"""让模型调用工具(计算器、搜索、代码执行等)。需要支持工具调用的模型(GLM-4 / Qwen2.5)。"""
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools import tool
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
import requests


# ============ Define tools ============
@tool
def calculator(expression: str) -> str:
    """计算数学表达式。输入是合法的 Python 表达式,如 '2*3+5'。"""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"计算错误:{e}"


@tool
def get_weather(city: str) -> str:
    """查询城市天气。输入是城市中文名,如 '北京'。"""
    # Mocked here; hook up a real weather API in practice
    return f"{city} 今天晴,温度 22°C"


@tool
def web_search(query: str) -> str:
    """搜索互联网。输入是查询关键词。"""
    # Simplified example; in real use, connect SerpAPI / Tavily, etc.
    return f"搜索 '{query}' 的结果:[模拟] 这是相关网页摘要..."


tools = [calculator, get_weather, web_search]


# ============ Create the agent ============
llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="dummy",
    model="qwen2.5",
    temperature=0
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "你是一个能调用工具的助手。根据用户问题判断是否需要调用工具,只在必要时调用。"),
    ("user", "{input}"),
    MessagesPlaceholder("agent_scratchpad")
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5)


# ============ Test ============
if __name__ == "__main__":
    questions = [
        "123 乘以 456 等于多少?",
        "上海今天天气怎么样?",
        "搜索一下最近的 AI 新闻"
    ]
    for q in questions:
        print(f"\n{'='*40}\n用户:{q}")
        result = executor.invoke({"input": q})
        print(f"答案:{result['output']}")

Sample Dataset

A minimal trainable dataset example (save it as data/train.json):

[
  {
    "instruction": "请把下面的话翻译成英文",
    "input": "今天天气真好",
    "output": "The weather is really nice today."
  },
  {
    "instruction": "用三句话总结下面的文章",
    "input": "人工智能近年来发展迅速,大模型成为主流。",
    "output": "1. 人工智能发展迅速。\n2. 大模型是当前主流方向。\n3. 这是近年来的趋势。"
  },
  {
    "instruction": "推荐一道适合早餐的简单菜",
    "input": "",
    "output": "推荐番茄鸡蛋面:煮一碗面,煎一个鸡蛋,加切碎的番茄,淋点酱油即可。"
  }
]

For a real fine-tune, at least 500 samples are recommended; simple format-adjustment tasks can start from about 50.
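A quick way to confirm a dataset file matches what train_lora.py expects; a minimal sketch:

import json

with open("data/train.json", encoding="utf-8") as f:
    data = json.load(f)

assert isinstance(data, list) and data, "expected a non-empty JSON list"
for i, item in enumerate(data):
    # instruction and output must be non-empty; input may be an empty string but should exist
    assert item.get("instruction", "").strip(), f"sample {i}: empty instruction"
    assert item.get("output", "").strip(), f"sample {i}: empty output"
    assert "input" in item, f"sample {i}: missing 'input' key"
print(f"{len(data)} samples look valid")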

Quick Download of Public Datasets

from datasets import load_dataset

# Chinese Belle dataset (3.5M samples)
belle = load_dataset("BelleGroup/train_3.5M_CN", split="train[:5000]")

# Alpaca-Chinese
alpaca = load_dataset("shibing624/alpaca-zh", split="train")

# Firefly (1.1M multi-task samples)
firefly = load_dataset("YeungNLP/firefly-train-1.1M", split="train[:10000]")

# Convert to this project's format
def to_alpaca(item):
    return {
        "instruction": item.get("instruction", item.get("input", "")),
        "input": item.get("input", "") if "instruction" in item else "",
        "output": item.get("output", item.get("target", ""))
    }
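Applying the converter and writing out a training file could then look like this (a sketch; field names differ between datasets, so spot-check a few converted samples):

import json

converted = [to_alpaca(item) for item in belle]        # or alpaca / firefly
with open("data/train.json", "w", encoding="utf-8") as f:
    json.dump(converted, f, ensure_ascii=False, indent=2)
print(f"Wrote {len(converted)} samples to data/train.json")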

Common Error Reference

Error message | Cause | Fix
CUDA out of memory | Not enough VRAM | Reduce batch_size, enable gradient_checkpointing, or switch to QLoRA
Expected all tensors to be on the same device | Model on GPU but inputs on CPU | inputs.to(model.device)
KeyError: 'qwen2_tokenizer' | transformers version too old | Upgrade to 4.45+
bitsandbytes was not compiled with GPU support | Broken bitsandbytes install | pip install "bitsandbytes>=0.43.0"
RuntimeError: Expected at least one element | All labels are -100 | Check the prompt-length calculation
tokenizer has no pad_token | pad_token not set | tokenizer.pad_token = tokenizer.eos_token
vLLM serving but no response | max-model-len too small | --max-model-len 8192
OpenAI API error 422 | Wrong chat template | Use apply_chat_template instead of a hand-built prompt

Quick-Start One-Shot Script

Filename: quickstart.sh

#!/bin/bash
# Run the whole pipeline, from environment setup to RAG, in one go
set -e

echo "[1/6] 搭建环境..."
bash setup.sh

echo "[2/6] 下载 Qwen2.5-7B 和 BGE embedding..."
python download.py qwen2.5-7b
python download.py embedding-bge

echo "[3/6] 验证推理..."
python infer_basic.py

echo "[4/6] 启动 vLLM 服务(后台)..."
nohup python -m vllm.entrypoints.openai.api_server \
  --model ./models/Qwen/Qwen2.5-7B-Instruct \
  --max-model-len 8192 > vllm.log 2>&1 &
sleep 60

echo "[5/6] 准备示例知识库..."
mkdir -p knowledge
echo "公司报销流程:1. 提交发票 2. 主管审批 3. 财务打款。" > knowledge/report.txt

echo "[6/6] 构建 RAG 索引..."
python rag_demo.py build

echo ""
echo "全部就绪!现在可以:"
echo "  - 网页对话:python webui.py"
echo "  - 知识库问答:python rag_demo.py"
echo "  - LoRA 微调:python train_lora.py"

What to Learn Next

Once the code runs end to end, the next steps are:

  1. Productionization: add monitoring (Prometheus + Grafana), logging, rate limiting, load balancing
  2. Optimization: model quantization (AWQ / GPTQ), inference acceleration (speculative decoding, prompt caching)
  3. Advanced RAG: hybrid retrieval, reranking, query rewriting, multi-hop reasoning
  4. Agent engineering: function calling, the ReAct pattern, multi-agent collaboration
  5. Evaluation: human evaluation, automatic benchmarks (MMLU, CMMLU), side-by-side comparisons