Route: Llama Chinese community, using the model


Second Llama2 route

Llama2-Chinese

In the Llama2-Chinese project environment, start Python:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('/data1/liming/model/Llama2-Chinese-7b-Chat',device_map='auto',torch_dtype=torch.float16,load_in_8bit=True)
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained('/data1/liming/model/Llama2-Chinese-7b-Chat',use_fast=False)
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer(['<s>Human: 介绍一下中国\n</s><s>Assistant: '], return_tensors="pt",add_special_tokens=False).input_ids.to('cuda')        
generate_input = {
    "input_ids":input_ids,
    "max_new_tokens":512,
    "do_sample":True,
    "top_k":50,
    "top_p":0.95,
    "temperature":0.3,
    "repetition_penalty":1.3,
    "eos_token_id":tokenizer.eos_token_id,
    "bos_token_id":tokenizer.bos_token_id,
    "pad_token_id":tokenizer.pad_token_id
}
generate_ids = model.generate(**generate_input)  # max_new_tokens is already set in generate_input
text = tokenizer.decode(generate_ids[0])
print(text)

Below is the Llama Chinese community example (inference only, no training step).

Also, when installing the requirements, the git repositories' dependencies conflict with each other.
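To see which versions actually ended up installed (and spot the conflicting ones), a small hypothetical helper using only the standard library can report each package's installed version; the package list below is my assumption about the relevant dependencies:

```python
# Hypothetical helper: report installed versions of the packages that
# the conflicting requirements files pull in. Uses only the stdlib.
from importlib import metadata

def installed_versions(packages):
    """Return {package_name: version string, or None if not installed}."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # package is missing from the environment
    return versions

# Example: installed_versions(["transformers", "sentencepiece", "bitsandbytes"])
```

Comparing this output against each repository's requirements file shows which pins are in conflict.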

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>
  File "/data1/liming/model/Llama2-Chinese/.venv/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 751, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/data1/liming/model/Llama2-Chinese/.venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1222, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/data1/liming/model/Llama2-Chinese/.venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1210, in requires_backends
    raise ImportError("".join(failed))
ImportError: 
LlamaTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.
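As the traceback says, the tokenizer's SentencePiece backend is missing and can be installed with `pip install sentencepiece`. A minimal check (my own sketch, not part of the original post) to run before calling `AutoTokenizer.from_pretrained`:

```python
# Check that the sentencepiece module is importable before loading
# the tokenizer; if this returns False, run `pip install sentencepiece`
# and restart the Python runtime.
import importlib.util

def has_sentencepiece():
    """True if the SentencePiece backend can be imported."""
    return importlib.util.find_spec("sentencepiece") is not None
```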

Current error (details come from running `python -m bitsandbytes`):

model = AutoModelForCausalLM.from_pretrained('/data1/liming/model/Llama2-Chinese-7b-Chat',device_map='auto',torch_dtype=torch.float16,load_in_8bit=True)
model = model.eval()
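Since the failure involves `load_in_8bit=True`, which requires a working bitsandbytes build, one workaround is to fall back to plain fp16 loading when bitsandbytes is unavailable. The fallback logic below is my own sketch, not from the original post:

```python
# Build kwargs for AutoModelForCausalLM.from_pretrained, requesting
# 8-bit quantization only when bitsandbytes can actually be imported.
import importlib.util

def build_load_kwargs(dtype):
    """Return from_pretrained kwargs; dtype is e.g. torch.float16."""
    kwargs = {"device_map": "auto", "torch_dtype": dtype}
    if importlib.util.find_spec("bitsandbytes") is not None:
        kwargs["load_in_8bit"] = True  # 8-bit quantization is available
    return kwargs

# Usage (with torch installed):
# model = AutoModelForCausalLM.from_pretrained(
#     '/data1/liming/model/Llama2-Chinese-7b-Chat',
#     **build_load_kwargs(torch.float16))
```

Loading in plain fp16 needs more GPU memory than 8-bit, but avoids the bitsandbytes dependency entirely.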

Suggestion: look up the parameters accepted by `AutoModelForCausalLM.from_pretrained` (from `from transformers import AutoTokenizer, AutoModelForCausalLM`) and adjust the environment accordingly. This makes it the second route to be ruled out.


Model training script location: train/sft/finetune_lora.sh

model\Llama2-Chinese\train\sft\finetune_lora.sh
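As its name suggests, finetune_lora.sh trains with LoRA: instead of updating the frozen weight matrix W, it learns a low-rank update B·A scaled by alpha/r. The toy sketch below (plain Python, tiny matrices, no real training) only illustrates that idea:

```python
# Toy illustration of a LoRA weight update: W_eff = W + (alpha / r) * B @ A,
# where A is r x n and B is m x r, so the trainable update has rank <= r.
def matmul(X, Y):
    """Plain-Python matrix multiply (lists of lists)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha, r):
    """Frozen W plus the scaled low-rank update B @ A."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 frozen weight, rank-1 update (r=1)
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]      # shape r x n = 1 x 2
B = [[1.0], [0.0]]    # shape m x r = 2 x 1
print(lora_effective_weight(W, A, B, alpha=2.0, r=1))  # → [[3.0, 4.0], [0.0, 1.0]]
```

In the real script only A and B are trained, which is why LoRA fine-tuning fits in far less GPU memory than full fine-tuning.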

Note: running the same example with the same configuration, this route does not go through (training itself was not attempted).