Inference Performance of GPTQ-Quantized BELLE (LLaMA-7B/Bloomz-7B1-mt) Large Language Models


In a previous post I reproduced the open-source Chinese conversational model BELLE on top of LLaMA-7B/Bloomz-7B1-mt and quantized it with GPTQ. However, I did not measure the size of the quantized models, the GPU memory they occupy during inference, or their inference performance after quantization.

The following compares the inference performance of the LLaMA-7B and Bloomz-7B1-mt based BELLE models before and after 8-bit/4-bit GPTQ quantization.

Basic Environment Information

  • Operating system: CentOS 7
  • CPUs: a single node with 1 TB of RAM and Intel CPUs (64 physical CPUs, 16 cores per CPU)
  • GPUs: 8x A800 80GB GPUs
  • Python: 3.10 (OpenSSL must first be upgraded to 1.1.1t, then Python is built from source)
  • NVIDIA driver version: 515.65.01 (choose the driver that matches your GPU model)
  • CUDA Toolkit: 11.7
  • NCCL: nccl_2.14.3-1+cuda11.7
  • cuDNN: 8.8.1.3_cuda11
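
Before running the benchmarks, it is worth confirming that PyTorch actually sees this CUDA/cuDNN stack and the A800 GPUs. A minimal check like the following can be used; it only assumes a working PyTorch installation:

import torch

# Versions PyTorch was built against and the GPUs it can see.
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)        # expected: 11.7 in this environment
print("cuDNN:", torch.backends.cudnn.version())   # expected: 8xxx for cuDNN 8.8
print("GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))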

Notes on the Inference Benchmark

  1. The benchmark compares the models in a non-concurrent (single-request) scenario.
  2. All numbers below were obtained by running 1,000 inference calls per model and aggregating the results; they are for reference only. The file size and GPU memory figures in the results table are not produced by the benchmark scripts themselves; a sketch of how such numbers can be collected follows this list.
  3. The models were instruction-tuned by BELLE (7B) on top of LLaMA-7B/Bloomz-7B1-mt and then quantized. Downloads: BELLE-7B-2M (Bloom), BELLE-LLAMA-7B-2M, BELLE-7B-gptq (Bloom), BELLE-LLAMA-7B-2M-gptq.
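
The scripts below only record per-request latency. For the "File size" and "GPU memory usage" columns of the results table, one possible way to collect comparable numbers is sketched here; the path is a placeholder and this is not necessarily how the original figures were measured (nvidia-smi may report somewhat more memory than PyTorch's allocator, since it also counts the CUDA context and cache):

import os
import torch

def dir_size_gb(path):
    """Total size of all files under a model/checkpoint directory, in GB."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024 ** 3

print("File size (GB):", round(dir_size_gb("/path/to/model"), 2))  # placeholder path

# Peak GPU memory seen by PyTorch's allocator during inference:
torch.cuda.reset_peak_memory_stats(0)
# ... load the model onto cuda:0 and run model.generate(...) here ...
print("GPU memory usage (GB):", round(torch.cuda.max_memory_allocated(0) / 1024 ** 3, 2))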

Benchmark Code

Download the BELLE code:

git clone https://github.com/LianjiaTech/BELLE.git
cd BELLE
git checkout c794c1d
cd gptq

# Copy `llama_inference.py` to `llama_inference_benchmark.py`.
cp llama_inference.py  llama_inference_benchmark.py

# Copy `bloom_inference.py` to `bloom_inference_benchmark.py`.
cp bloom_inference.py  bloom_inference_benchmark.py

Modify the llama_inference_benchmark.py file as follows:

import time

import torch
import torch.nn as nn

# gptq, modelutils, quant and datautils are local modules from BELLE's gptq directory.
from gptq import *
from modelutils import *
from quant import *

import transformers
from transformers import AutoTokenizer
from random import choice
from statistics import mean
import numpy as np


DEV = torch.device('cuda:0')

def get_llama(model):
    import torch
    # Disable weight initialization; the real weights are loaded from the checkpoint afterwards.
    def skip(*args, **kwargs):
        pass
    torch.nn.init.kaiming_uniform_ = skip
    torch.nn.init.uniform_ = skip
    torch.nn.init.normal_ = skip
    from transformers import LlamaForCausalLM
    model = LlamaForCausalLM.from_pretrained(model, torch_dtype='auto')
    model.seqlen = 2048
    return model

def load_quant(model, checkpoint, wbits, groupsize):
    from transformers import LlamaConfig, LlamaForCausalLM
    config = LlamaConfig.from_pretrained(model)
    def noop(*args, **kwargs):
        pass
    torch.nn.init.kaiming_uniform_ = noop
    torch.nn.init.uniform_ = noop
    torch.nn.init.normal_ = noop

    torch.set_default_dtype(torch.half)
    transformers.modeling_utils._init_weights = False
    torch.set_default_dtype(torch.half)
    model = LlamaForCausalLM(config)
    torch.set_default_dtype(torch.float)
    model = model.eval()
    layers = find_layers(model)
    # Keep lm_head in full precision; only the transformer linear layers are replaced by quantized ones.
    for name in ['lm_head']:
        if name in layers:
            del layers[name]
    make_quant(model, layers, wbits, groupsize)

    print('Loading model ...')
    if checkpoint.endswith('.safetensors'):
        from safetensors.torch import load_file as safe_load
        model.load_state_dict(safe_load(checkpoint))
    else:
        model.load_state_dict(torch.load(checkpoint))
    model.seqlen = 2048
    print('Done.')

    return model



# Chinese test prompts for the benchmark; one is chosen at random per iteration.
inputs = ["使用python写一个二分查找的代码",
          "今天天气怎么样,把这句话翻译成英语",
          "怎么让自己精力充沛,列5点建议",
          "小明的爸爸有三个孩子,老大叫王一,老二叫王二,老三叫什么?",
          "明天就假期结束了,有点抗拒上班,应该什么办?",
          "父母都姓李,取一些男宝宝和女宝宝的名字",
          "推荐几本金庸的武侠小说",
          "写一篇英文散文诗,主题是春雨,想象自己是春雨,和英国古代诗人莎士比亚交流"]




if __name__ == '__main__':
    import argparse
    from datautils import *

    parser = argparse.ArgumentParser()

    parser.add_argument(
        'model', type=str,
        help='llama model to load'
    )
    parser.add_argument(
        '--wbits', type=int, default=16, choices=[2, 3, 4, 8, 16],
        help='#bits to use for quantization; use 16 for evaluating base model.'
    )
    parser.add_argument(
        '--groupsize', type=int, default=-1,
        help='Groupsize to use for quantization; default uses full row.'
    )
    parser.add_argument(
        '--load', type=str, default='',
        help='Load quantized model.'
    )

    parser.add_argument(
        '--text', type=str,
        help='input text'
    )

    parser.add_argument(
        '--min_length', type=int, default=10,
        help='The minimum length of the sequence to be generated.'
    )

    parser.add_argument(
        '--max_length', type=int, default=1024,
        help='The maximum length of the sequence to be generated.'
    )

    parser.add_argument(
        '--top_p', type=float , default=0.95,
        help='If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.'
    )

    parser.add_argument(
        '--temperature', type=float, default=0.8,
        help='The value used to module the next token probabilities.'
    )

    args = parser.parse_args()

    if type(args.load) is not str:
        args.load = args.load.as_posix()

    if args.load:
        model = load_quant(args.model, args.load, args.wbits, args.groupsize)
    else:
        model = get_llama(args.model)
        model.eval()

    model.to(DEV)
    tokenizer = AutoTokenizer.from_pretrained(args.model)

    """
    print("Human:")
    line = input()
    while line:
        inputs = 'Human: ' + line.strip() + '\n\nAssistant:'
        input_ids = tokenizer.encode(inputs, return_tensors="pt").to(DEV)

        with torch.no_grad():
            generated_ids = model.generate(
                input_ids,
                do_sample=True,
                min_length=args.min_length,
                max_length=args.max_length,
                top_p=args.top_p,
                temperature=args.temperature,
            )
        print("Assistant:\n")
        print(tokenizer.decode([el.item() for el in generated_ids[0]])[len(inputs)+4:]) # generated_ids starts with the bos_token; cut off the input prefix so only the Assistant reply is printed
        print("\n-------------------------------\n")
        print("Human:") # Print the Human prompt before each new user input.
        line = input()
    """

    time_list = []

    # Run 1,000 timed generations, each with a randomly chosen prompt.
    for i in range(1000):
        start = time.perf_counter()

        input_str = str(choice(inputs))
        input_str = 'Human: ' + input_str.strip() + '\n\nAssistant:'
        print(input_str)
        input_ids = tokenizer(input_str, return_tensors="pt").input_ids.to(DEV)

        with torch.no_grad():
            outputs = model.generate(
                input_ids,
                do_sample=True,
                min_length=args.min_length,
                max_length=args.max_length,
                top_p=args.top_p,
                temperature=args.temperature,
            )
        """
        with torch.no_grad():
            outputs = model.generate(input_ids, max_new_tokens=500, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.5, repetition_penalty=1., eos_token_id=2, bos_token_id=1, pad_token_id=0)
        """
        rets = tokenizer.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        print("\n" + rets[0].strip().replace(input_str, ""))
        #print(tokenizer.decode([el.item() for el in outputs[0]])[len(inputs)+4:])

        end = time.perf_counter()
        runTime = end - start
        runTime_ms = runTime * 1000
        print("Run time:", round(runTime_ms, 2), "ms")
        time_list.append(round(runTime_ms, 2))
        print("\n-------------------------------\n")

    print(time_list)
    result = mean(time_list)
    print("Mean:", round(result, 2))
    print("Max:", round(max(time_list), 2))
    print("Min:", round(min(time_list), 2))
    print("TP50:", np.percentile(np.array(time_list), 50))
    print("TP90:", np.percentile(np.array(time_list), 90))
    print("TP99:", np.percentile(np.array(time_list), 99))

Run commands:

# 16bit
CUDA_VISIBLE_DEVICES=0 python llama_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-llama-7b

# 8bit
CUDA_VISIBLE_DEVICES=0 python llama_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-llama-7b --wbits 8 --groupsize 128 --load /data/nfs/guodong.li/pretrain/belle/belle-llama-gptq-7b/llama7b-2m-8bit-128g.pt

# 4bit
CUDA_VISIBLE_DEVICES=0 python llama_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-llama-7b --wbits 4 --groupsize 128 --load /data/nfs/guodong.li/pretrain/belle/belle-llama-gptq-7b/llama7b-2m-4bit-128g.pt

Modify the bloom_inference_benchmark.py file as follows:

import time

import torch
import torch.nn as nn

from gptq import *
from modelutils import *
from quant import *

import transformers
from transformers import AutoTokenizer
from random import choice
from statistics import mean
import numpy as np


DEV = torch.device('cuda:0')

def get_bloom(model):
    import torch
    def skip(*args, **kwargs):
        pass
    torch.nn.init.kaiming_uniform_ = skip
    torch.nn.init.uniform_ = skip
    torch.nn.init.normal_ = skip
    from transformers import BloomForCausalLM
    # model = BloomForCausalLM.from_pretrained(model, torch_dtype='auto')
    model = BloomForCausalLM.from_pretrained(model, torch_dtype=torch.float16)
    model.seqlen = 2048
    return model

def load_quant(model, checkpoint, wbits, groupsize):
    from transformers import BloomConfig, BloomForCausalLM
    config = BloomConfig.from_pretrained(model)
    def noop(*args, **kwargs):
        pass
    torch.nn.init.kaiming_uniform_ = noop
    torch.nn.init.uniform_ = noop
    torch.nn.init.normal_ = noop

    torch.set_default_dtype(torch.half)
    transformers.modeling_utils._init_weights = False
    torch.set_default_dtype(torch.half)
    model = BloomForCausalLM(config)
    torch.set_default_dtype(torch.float)
    model = model.eval()
    layers = find_layers(model)
    for name in ['lm_head']:
        if name in layers:
            del layers[name]
    make_quant(model, layers, wbits, groupsize)

    print('Loading model ...')
    if checkpoint.endswith('.safetensors'):
        from safetensors.torch import load_file as safe_load
        model.load_state_dict(safe_load(checkpoint))
    else:
        model.load_state_dict(torch.load(checkpoint))
    model.seqlen = 2048
    print('Done.')

    return model


inputs = ["使用python写一个二分查找的代码",
          "今天天气怎么样,把这句话翻译成英语",
          "怎么让自己精力充沛,列5点建议",
          "小明的爸爸有三个孩子,老大叫王一,老二叫王二,老三叫什么?",
          "明天就假期结束了,有点抗拒上班,应该什么办?",
          "父母都姓李,取一些男宝宝和女宝宝的名字",
          "推荐几本金庸的武侠小说",
          "写一篇英文散文诗,主题是春雨,想象自己是春雨,和英国古代诗人莎士比亚交流"]



if __name__ == '__main__':
    import argparse
    from datautils import *

    parser = argparse.ArgumentParser()

    parser.add_argument(
        'model', type=str,
        help='bloom model to load'
    )
    parser.add_argument(
        '--wbits', type=int, default=16, choices=[2, 3, 4, 8, 16],
        help='#bits to use for quantization; use 16 for evaluating base model.'
    )
    parser.add_argument(
        '--groupsize', type=int, default=-1,
        help='Groupsize to use for quantization; default uses full row.'
    )
    parser.add_argument(
        '--load', type=str, default='',
        help='Load quantized model.'
    )

    parser.add_argument(
        '--text', type=str,
        help='input text'
    )

    parser.add_argument(
        '--min_length', type=int, default=10,
        help='The minimum length of the sequence to be generated.'
    )

    parser.add_argument(
        '--max_length', type=int, default=1024,
        help='The maximum length of the sequence to be generated.'
    )

    parser.add_argument(
        '--top_p', type=float , default=0.95,
        help='If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.'
    )

    parser.add_argument(
        '--temperature', type=float, default=0.8,
        help='The value used to module the next token probabilities.'
    )

    args = parser.parse_args()

    if type(args.load) is not str:
        args.load = args.load.as_posix()

    if args.load:
        model = load_quant(args.model, args.load, args.wbits, args.groupsize)
    else:
        model = get_bloom(args.model)
        model.eval()

    model.to(DEV)
    tokenizer = AutoTokenizer.from_pretrained(args.model)

    """
    print("Human:")
    line = input()
    while line:
        inputs = 'Human: ' + line.strip() + '\n\nAssistant:'
        input_ids = tokenizer.encode(inputs, return_tensors="pt").to(DEV)

        with torch.no_grad():
            generated_ids = model.generate(
                input_ids,
                do_sample=True,
                min_length=args.min_length,
                max_length=args.max_length,
                top_p=args.top_p,
                temperature=args.temperature,
            )
        print("Assistant:\n")
        print(tokenizer.decode([el.item() for el in generated_ids[0]]))
        print("\n-------------------------------\n")
        line = input()

    """
    time_list = []

    for i in range(1000):
        start = time.perf_counter()

        input_str = str(choice(inputs))
        input_str = 'Human: ' + input_str.strip() + '\n\nAssistant:'
        print(input_str)
        input_ids = tokenizer(input_str, return_tensors="pt").input_ids.to(DEV)
        with torch.no_grad():
            outputs = model.generate(
                input_ids,
                do_sample=True,
                min_length=args.min_length,
                max_length=args.max_length,
                top_p=args.top_p,
                temperature=args.temperature,
            )

        #print(tokenizer.decode([el.item() for el in outputs[0]]))

        #with torch.no_grad():
        #    outputs = model.generate(input_ids, max_new_tokens=500, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.5, repetition_penalty=1., eos_token_id=2, bos_token_id=1, pad_token_id=0)
        rets = tokenizer.batch_decode(outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        print("\n" + rets[0].strip().replace(input_str, ""))


        end = time.perf_counter()
        runTime = end - start
        runTime_ms = runTime * 1000
        print("Run time:", round(runTime_ms, 2), "ms")
        time_list.append(round(runTime_ms, 2))
        print("\n-------------------------------\n")

    print(time_list)
    result = mean(time_list)
    print("Mean:", round(result, 2))
    print("Max:", round(max(time_list), 2))
    print("Min:", round(min(time_list), 2))
    print("TP50:", np.percentile(np.array(time_list), 50))
    print("TP90:", np.percentile(np.array(time_list), 90))
    print("TP99:", np.percentile(np.array(time_list), 99))


Run commands:

# 16bit
CUDA_VISIBLE_DEVICES=1 python bloom_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-bloom-7b


# 8bit
CUDA_VISIBLE_DEVICES=1 python bloom_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-bloom-7b --wbits 8 --groupsize 128 --load /data/nfs/guodong.li/pretrain/belle/belle-bloom-gptq-7b/bloom7b-2m-8bit-128g.pt


# 4bit
CUDA_VISIBLE_DEVICES=6 python bloom_inference_benchmark.py /data/nfs/guodong.li/pretrain/belle/belle-bloom-7b --wbits 4 --groupsize 128 --load /data/nfs/guodong.li/pretrain/belle/belle-bloom-gptq-7b/bloom7b-2m-4bit-128g.pt

Benchmark Results

| Model | Source | File size (GB) | GPU memory usage (GB) | Mean (ms) | Min (ms) | Max (ms) | TP50 (ms) | TP90 (ms) | TP99 (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bloom7b-2m (16bit) | belle | 26.3 | 14.7 | 2628.63 | 88.54 | 26368.27 | 2147.885 | 4545.16 | 22938.37 |
| bloom7b-2m-8bit-128g | belle | 9.68 | 12.27 | 3579.39 | 244.02 | 30461.35 | 3190.84 | 6441.85 | 23230.53 |
| bloom7b-2m-4bit-128g | belle | 6.84 | 9.43 | 6018.52 | 171.07 | 34471.44 | 2966.065 | 25627.31 | 28164.99 |
| llama7b-2m (16bit) | belle | 25.1 | 14.5 | 4700.37 | 28.85 | 31664.51 | 2501.98 | 10350.628 | 29715.8 |
| llama7b-2m-8bit-128g | belle | 6.76 | 8.92 | 6124.85 | 453.41 | 19510.81 | 3731.995 | 15758.22 | 18950.66 |
| llama7b-2m-4bit-128g | belle | 3.72 | 5.69 | 6196.32 | 145.63 | 38047.99 | 3408.52 | 15094.93 | 35237.53 |

From the table above, we can see that:

  1. The BELLE model fine-tuned from Bloom is faster at inference than the one fine-tuned from LLaMA.
  2. For both the Bloom-based and the LLaMA-based instruction-tuned BELLE models, inference becomes noticeably slower after GPTQ quantization (see the per-token measurement sketch after this list).
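
When interpreting these numbers, keep in mind that sampling produces a different number of output tokens on every run, so per-request latency mixes generation length with per-token speed. Below is a minimal sketch, not part of the original benchmark, for measuring average per-token decode latency instead; it reuses the model, tokenizer and DEV objects from the scripts above and uses greedy decoding so that runs are more comparable:

import time
import torch

def per_token_latency_ms(model, tokenizer, prompt, max_new_tokens=128):
    """Average latency per generated token, in milliseconds (greedy decoding)."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(DEV)
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        output = model.generate(input_ids, do_sample=False, max_new_tokens=max_new_tokens)
    torch.cuda.synchronize()
    elapsed_ms = (time.perf_counter() - start) * 1000
    generated = output.shape[1] - input_ids.shape[1]  # generation may stop early at EOS
    return elapsed_ms / max(generated, 1)

# Example: run the same prompt against the 16-bit and quantized checkpoints and compare.
# print(per_token_latency_ms(model, tokenizer, "Human: 推荐几本金庸的武侠小说\n\nAssistant:"))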

As for why inference slows down so much after GPTQ quantization, feel free to discuss in the comments.