DeepSeek is a powerful large language model with strong base capabilities, but for certain specialized tasks, using the pretrained model directly may not be enough. This article introduces fine-tuning strategies such as LoRA (Low-Rank Adaptation) and full-parameter fine-tuning, with detailed code examples to help developers efficiently customize DeepSeek for their own tasks.
Fine-Tuning DeepSeek with LoRA
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method. Its core idea is to freeze the pretrained weights and add trainable low-rank adapter matrices on top of them, which drastically reduces the number of trainable parameters and the training cost.
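To make the idea concrete, here is a minimal, self-contained sketch of a LoRA-style linear layer in PyTorch. It only illustrates the W + (alpha/r)·B·A decomposition; the class and variable names are invented for this example, and the actual adapter used later in this article comes from the peft library.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (illustration only)."""
    def __init__(self, in_features, out_features, r=8, alpha=32):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False                          # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # low-rank factor A
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # low-rank factor B, zero-initialized
        self.scaling = alpha / r                                        # LoRA scaling factor

    def forward(self, x):
        # y = x W^T + scaling * (x A^T) B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)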
Install dependencies
pip install torch transformers peft accelerate
Load the DeepSeek model
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "deepseek-ai/deepseek-llm-7b-base"  # e.g. the DeepSeek-LLM 7B base checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # some causal-LM tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)
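A 7B model loaded in full fp32 precision can exceed a single GPU's memory. As an optional variation (an assumption about typical hardware, not a requirement of this tutorial), the weights can be loaded in half precision and placed automatically across the available devices:

import torch

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # load weights in fp16 to roughly halve memory usage
    device_map="auto",          # let accelerate place layers on the available devices
)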
LoRA configuration
from peft import LoraConfig, TaskType, get_peft_model

# LoRA training configuration
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,        # causal language modeling task
    r=8,                                 # rank of the low-rank matrices
    lora_alpha=32,                       # LoRA scaling factor
    lora_dropout=0.1,                    # dropout applied to the LoRA layers
    bias="none",
    target_modules=["q_proj", "v_proj"], # only adapt the attention query/value projections
)
# Apply LoRA to the base model
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
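print_trainable_parameters() reports the number of trainable parameters against the model total; with r=8 and only q_proj and v_proj adapted, the trainable share of a 7B-class model is typically well under 1%, which is what makes LoRA training so cheap.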
Train DeepSeek with LoRA
from transformers import Trainer, TrainingArguments, DataCollatorForSeq2Seq
training_args = TrainingArguments(
    output_dir="./lora_model",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    save_steps=100,
    logging_dir="./logs",
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=my_train_dataset,  # built in the "Load the dataset" section below
    data_collator=DataCollatorForSeq2Seq(tokenizer, padding=True),  # pad variable-length examples within each batch
)
trainer.train()
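After training, it is usual to save only the LoRA adapter weights (a few megabytes) and, when needed, merge them back into the base model for inference. A sketch of that workflow, with example paths:

# Save only the LoRA adapter weights, not the full model
model.save_pretrained("./lora_model/adapter")
tokenizer.save_pretrained("./lora_model/adapter")

# For inference: reload the base model, attach the adapter, and optionally merge it in
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained(model_name)
merged_model = PeftModel.from_pretrained(base_model, "./lora_model/adapter").merge_and_unload()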
Load the dataset
from datasets import Dataset
# Load the preprocessed dataset from disk
# (each example is expected to contain "instruction", "input" and "output" fields)
raw_dataset = Dataset.load_from_disk("./wiki_cn_filtered/")
# (Optional) split into train/eval sets here
# Data preprocessing
def process_func(example):
    MAX_LENGTH = 256
    # Build the prompt: "Human: <instruction>\n<input>\n\nAssistant: "
    instruction = tokenizer(
        "\n".join(["Human: " + example["instruction"], example["input"]]).strip() + "\n\nAssistant: "
    )
    # Don't add special tokens here, so no extra BOS lands in the middle of the sequence
    response = tokenizer(example["output"] + tokenizer.eos_token, add_special_tokens=False)
    input_ids = instruction["input_ids"] + response["input_ids"]
    attention_mask = instruction["attention_mask"] + response["attention_mask"]
    # Mask the prompt tokens with -100 so the loss is only computed on the response
    labels = [-100] * len(instruction["input_ids"]) + response["input_ids"]
    # Truncate to the maximum sequence length
    if len(input_ids) > MAX_LENGTH:
        input_ids = input_ids[:MAX_LENGTH]
        attention_mask = attention_mask[:MAX_LENGTH]
        labels = labels[:MAX_LENGTH]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "labels": labels,
    }
my_train_dataset = raw_dataset.map(process_func, remove_columns=raw_dataset.column_names)  # process_func handles one example at a time, so batched=True is not used
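Before launching training it is worth decoding one processed example to confirm that the prompt template and the label masking look right:

# Sanity check: inspect the first processed example
sample = my_train_dataset[0]
print(tokenizer.decode(sample["input_ids"]))
print(sum(label != -100 for label in sample["labels"]), "tokens contribute to the loss")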
Full-Parameter Fine-Tuning of DeepSeek
Full-parameter fine-tuning suits scenarios with large datasets and complex tasks; it updates all of the model's parameters and therefore consumes considerably more compute and GPU memory.
Environment setup
pip install deepspeed transformers torch
Load the DeepSeek model
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "deepseek-ai/deepseek-llm-7b-base"  # e.g. the DeepSeek-LLM 7B base checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
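Full-parameter fine-tuning of a 7B model is memory-hungry. One optional mitigation (not part of the original setup) is gradient checkpointing, which recomputes activations during the backward pass to save GPU memory at the cost of extra compute:

model.gradient_checkpointing_enable()  # trade extra compute for lower activation memory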
Configure the training arguments
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="./full_finetune",
    per_device_train_batch_size=2,
    num_train_epochs=3,
    save_strategy="epoch",
    report_to="tensorboard",
    logging_dir="./logs",
    deepspeed="./ds_config.json",  # enable DeepSpeed acceleration via the config file
)
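The arguments above reference ./ds_config.json but its contents are not shown here. A minimal ZeRO stage-2 configuration such as the one below is a reasonable starting point; the exact values are assumptions to tune for your hardware, and the "auto" fields are filled in from TrainingArguments by the transformers DeepSpeed integration.

{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "fp16": {
    "enabled": "auto"
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}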
Train the model
from transformers import Trainer, DataCollatorForSeq2Seq

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=my_train_dataset,  # replace with your own dataset
    data_collator=DataCollatorForSeq2Seq(tokenizer, padding=True),  # pad variable-length examples within each batch
)
trainer.train()
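Once training finishes, save the fine-tuned weights and the tokenizer together so the result can be reloaded later with from_pretrained (the output path is just an example):

trainer.save_model("./full_finetune/final")         # saves the model weights and config
tokenizer.save_pretrained("./full_finetune/final")  # keep the tokenizer next to the model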