Introduction
CosyVoice 2.0 is a high-performance speech-generation model open-sourced by Alibaba's Tongyi Lab. It makes significant advances in the naturalness, accuracy, and real-time performance of speech synthesis.
| Aspect | Details |
|---|---|
| Core technology | Finite scalar quantization (FSQ) and a chunk-aware causal flow-matching model, built on a pretrained text LLM (e.g., Qwen2.5-0.5B) as the backbone |
| Core features | Multilingual/dialect synthesis, zero-shot voice cloning, cross-lingual synthesis, fine-grained emotion/prosody control |
| Key performance metrics | MOS score of 5.53 (close to human level), first-packet synthesis latency as low as 150 ms, pronunciation error rate reduced by 30%-50% versus v1.0 |
| Main application scenarios | Real-time interaction (customer-service bots, virtual hosts), content creation (audiobooks, video dubbing), education and entertainment (personalized voice assistants, voice replication), etc. |
Core Features in Detail

CosyVoice 2.0's feature set is designed to cover a wide range of speech-synthesis needs. Its core capabilities fall into the following areas:
- Broad language support: the model handles mainstream languages such as Chinese, English, Japanese, and Korean, and adds Chinese dialects including Cantonese, Sichuanese, and Shanghainese. A standout capability is cross-lingual synthesis: you can provide a Chinese speaker sample and have the model read English text fluently while fully preserving the original timbre.
- Efficient voice cloning: from just a 3-10 second reference clip, the model clones the target voice zero-shot. It reproduces not only the timbre but also the prosodic and emotional details of the original voice, making it well suited to quickly building personalized voice assistants or virtual personas.
- Real-time streaming synthesis: through a unified offline-and-streaming modeling scheme, the model supports ultra-low-latency bidirectional streaming synthesis. In real-time dialogue, first-packet latency can be as low as 150 ms, giving users an essentially imperceptible delay.
- Fine-grained control: synthesis can be steered with natural-language instructions or special tags. You can ask the model to speak "in a cheerful tone", insert a [laughter] tag to generate laughter, or precisely control parameters such as speed and pitch, greatly increasing expressiveness and flexibility. (A minimal sketch of cloning and instruction control follows this list.)
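To make cloning and instruction control concrete, here is a minimal sketch using the repository's Python API (the same `inference_zero_shot` and `inference_instruct` calls that the webui.py listings below rely on). The audio file names and the `中文女` voice are placeholder assumptions, not fixed values:

```python
import sys
sys.path.append('third_party/Matcha-TTS')  # Matcha-TTS must be importable

import torchaudio
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')

# Zero-shot cloning: a 3-10 s reference clip plus its transcript.
prompt_speech_16k = load_wav('reference.wav', 16000)  # hypothetical reference clip
for k, out in enumerate(cosyvoice.inference_zero_shot(
        '收到好友从远方寄来的生日礼物,真是太开心了。',   # text to synthesize
        '参考音频对应的文字内容。',                      # transcript of reference.wav
        prompt_speech_16k, stream=False)):
    torchaudio.save(f'zero_shot_{k}.wav', out['tts_speech'], cosyvoice.sample_rate)

# Instruction control: pick a pretrained voice and describe the delivery.
for k, out in enumerate(cosyvoice.inference_instruct(
        '今天真是太开心了![laughter]',   # the [laughter] tag inserts a laugh
        '中文女',                        # assumed pretrained voice name
        '用欢快的语气说这句话', stream=False)):
    torchaudio.save(f'instruct_{k}.wav', out['tts_speech'], cosyvoice.sample_rate)
```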
Technical Principles and Model Selection

CosyVoice 2.0's technical breakthroughs are the foundation of its performance.
- Architectural innovation: it adopts a simplified text-to-speech language-model architecture that uses a pretrained large language model directly as the backbone, strengthening deep semantic understanding of the input text. It also replaces conventional vector quantization with finite scalar quantization (FSQ) for generating speech tokens, which markedly improves codebook utilization and, in turn, pronunciation accuracy (a toy illustration of FSQ follows this list).
- Unified modeling: a single model supports both streaming and non-streaming synthesis, achieving very low response latency while preserving audio quality.
- Model versions: the project ships several model variants. CosyVoice-300M is the base model and supports zero-shot cloning; CosyVoice 2.0-0.5B is the upgraded version, with more parameters and significant gains across all performance metrics.
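As a toy illustration of the FSQ idea (not CosyVoice's actual implementation): each latent dimension is squashed into a bounded range and rounded to a handful of levels, so the codebook is the implicit product grid of per-dimension levels. Every code is reachable by construction, which is where the improved codebook utilization comes from. Odd level counts keep the naive rounding below exact:

```python
import numpy as np

def fsq_quantize(z: np.ndarray, levels: np.ndarray = np.array([5, 5, 5, 5])) -> np.ndarray:
    """Toy finite scalar quantization: squash each latent dimension into a
    bounded range, then round it to one of levels[d] values. The codebook
    is the implicit product grid (here 5**4 = 625 entries), so every code
    is reachable and there is no learned codebook that can collapse."""
    half = (levels - 1) / 2.0                # e.g. 2.0 for 5 levels
    bounded = np.tanh(z) * half              # each dim now lies in (-half, half)
    return np.round(bounded) / half          # snap to the grid, rescale to [-1, 1]

z = np.random.randn(4)                       # a toy 4-dimensional latent vector
print(fsq_quantize(z))                       # every component sits on a 5-point grid
```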
Application Scenarios and Practical Advice

CosyVoice 2.0 delivers the most value in the following scenarios:
Real-time interaction: its low latency makes it a strong fit for systems that need live voice feedback, such as customer-service bots, voice assistants, and virtual hosts, substantially improving how natural the interaction feels.

Creative content production: for short-video dubbing, audiobook production, and game-dialogue generation, voice cloning and fine-grained control can greatly speed up production and enable rich vocal performances.

Education and entertainment: a teacher's voice can be cloned to produce personalized learning materials, or you can create playful voice greetings for family and friends, opening new possibilities in both domains.
Docker Deployment

Clone the project:

```bash
git clone https://ghfast.top/https://github.com/FunAudioLLM/CosyVoice.git
```
Pull the Matcha-TTS source into the third_party/Matcha-TTS directory. Cloning the CosyVoice repo directly from GitHub fails, and even with the mirror prefix, `--recursive` cannot recursively clone the submodules (their URLs still point at the unreachable origin), so Matcha-TTS has to be fetched separately:

```bash
git clone https://ghfast.top/https://github.com/shivammehta25/Matcha-TTS.git
```
Download the CosyVoice-300M-Instruct model. With the base model, missing files and unsupported synthesis modes caused problems:

```bash
modelscope download --model iic/CosyVoice-300M-Instruct --local_dir /i/xiaom/CosyVoice/pretrained_models/CosyVoice2-0.5B
```
Edit the dependency file. `onnxruntime-gpu` reports a version-compatibility problem, but it has no practical impact:
```text
# requirements.txt
conformer==0.3.2
deepspeed==0.15.1; sys_platform == 'linux'
diffusers==0.29.0
fastapi==0.115.6
fastapi-cli==0.0.4
gdown==5.1.0
gradio==5.4.0
grpcio==1.57.0
grpcio-tools==1.57.0
hydra-core==1.3.2
HyperPyYAML==1.2.2
inflect==7.3.1
librosa==0.10.2
lightning==2.2.4
matplotlib==3.7.5
modelscope==1.20.0
networkx==3.1
omegaconf==2.3.0
onnx==1.16.0
onnxruntime-gpu==1.18.0; sys_platform == 'linux'
openai-whisper==20231117
protobuf==4.25
pyarrow==18.1.0
pydantic==2.7.0
pyworld==0.3.4
rich==13.7.1
soundfile==0.12.1
tensorboard==2.14.0
tensorrt-cu12==10.0.1; sys_platform == 'linux'
tensorrt-cu12-bindings==10.0.1; sys_platform == 'linux'
tensorrt-cu12-libs==10.0.1; sys_platform == 'linux'
torch==2.3.1
torchaudio==2.3.1
transformers==4.51.3
uvicorn==0.30.0
wetext==0.0.4
wget==3.2
```
Write the container build files (the stock Dockerfile failed to run):

```dockerfile
# Dockerfile
FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04

# Set the working directory
WORKDIR /app

RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    git-lfs \
    ffmpeg \
    python3 \
    python3-pip \
    python3-venv \
    && rm -rf /var/lib/apt/lists/* \
    && ln -sf /usr/bin/python3 /usr/bin/python

# Copy project files
COPY . .

# Ensure the third_party/Matcha-TTS directory exists
#RUN if [ ! -d "third_party/Matcha-TTS" ] || [ -z "$(ls -A third_party/Matcha-TTS)" ]; then \
#        echo "Downloading Matcha-TTS..."; \
#        rm -rf third_party/Matcha-TTS && \
#        git clone https://ghfast.top/https://github.com/shivammehta25/Matcha-TTS.git third_party/Matcha-TTS; \
#    fi

# Install Python dependencies
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/

# Initialize git lfs (needed if large files must be downloaded)
RUN git lfs install

# Expose the port
EXPOSE 8000

# Start the Gradio service
CMD ["python", "webui.py", "--port", "8000", "--model_dir", "pretrained_models/CosyVoice2-0.5B"]
```

```yaml
# docker-compose.yml
services:
  cosyvoice-webui:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: cosyvoice-webui
    ports:
      - "29999:8000"
    volumes:
      # Mount the pretrained model directory
      - ./pretrained_models:/app/pretrained_models
      # Optional: mount an output directory for generated audio
      - ./outputs:/app/outputs
    environment:
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
Run:

```bash
docker compose up --build -d
docker compose logs -f cosyvoice-webui
```
Page behavior: with the original Gradio service, the page fails to play the generated audio, although the .wav file can still be downloaded.

The problems in the original code:

- Streaming data format: Gradio's support for some streaming audio data formats can be unreliable
- Ambiguous data type: the type and format of the audio data are never explicitly declared
- Data conversion: raw numpy arrays may not be parsed correctly by the frontend
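One commonly suggested partial mitigation (not the route taken below) is to hand Gradio explicitly typed int16 PCM instead of raw float arrays when streaming, since an ambiguous dtype is a frequent cause of playback failures. A hedged sketch; `to_int16_pcm` is a hypothetical helper:

```python
import numpy as np

def to_int16_pcm(chunk: np.ndarray) -> np.ndarray:
    # CosyVoice yields float32 samples in roughly [-1, 1]; clip and
    # rescale to int16 so the frontend receives an unambiguous format.
    chunk = np.clip(chunk, -1.0, 1.0)
    return (chunk * 32767.0).astype(np.int16)

# Inside generate_audio, each yield would then become:
# yield (cosyvoice.sample_rate, to_int16_pcm(i['tts_speech'].numpy().flatten()))
```

The original webui.py, for reference: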
```python
# webui.py
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Liu Yue)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import argparse
import gradio as gr
import numpy as np
import torch
import torchaudio
import random
import librosa

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav, logging
from cosyvoice.utils.common import set_all_random_seed

inference_mode_list = ['预训练音色', '3s极速复刻', '跨语种复刻', '自然语言控制']
instruct_dict = {'预训练音色': '1. 选择预训练音色\n2. 点击生成音频按钮',
                 '3s极速复刻': '1. 选择prompt音频文件,或录入prompt音频,注意不超过30s,若同时提供,优先选择prompt音频文件\n2. 输入prompt文本\n3. 点击生成音频按钮',
                 '跨语种复刻': '1. 选择prompt音频文件,或录入prompt音频,注意不超过30s,若同时提供,优先选择prompt音频文件\n2. 点击生成音频按钮',
                 '自然语言控制': '1. 选择预训练音色\n2. 输入instruct文本\n3. 点击生成音频按钮'}
stream_mode_list = [('否', False), ('是', True)]
max_val = 0.8


def generate_seed():
    seed = random.randint(1, 100000000)
    return {
        "__type__": "update",
        "value": seed
    }


def postprocess(speech, top_db=60, hop_length=220, win_length=440):
    speech, _ = librosa.effects.trim(
        speech, top_db=top_db,
        frame_length=win_length,
        hop_length=hop_length
    )
    if speech.abs().max() > max_val:
        speech = speech / speech.abs().max() * max_val
    speech = torch.concat([speech, torch.zeros(1, int(cosyvoice.sample_rate * 0.2))], dim=1)
    return speech


def change_instruction(mode_checkbox_group):
    return instruct_dict[mode_checkbox_group]


def generate_audio(tts_text, mode_checkbox_group, sft_dropdown, prompt_text,
                   prompt_wav_upload, prompt_wav_record, instruct_text,
                   seed, stream, speed):
    if prompt_wav_upload is not None:
        prompt_wav = prompt_wav_upload
    elif prompt_wav_record is not None:
        prompt_wav = prompt_wav_record
    else:
        prompt_wav = None
    # if instruct mode, please make sure that model is iic/CosyVoice-300M-Instruct and not cross_lingual mode
    if mode_checkbox_group in ['自然语言控制']:
        if cosyvoice.instruct is False:
            gr.Warning('您正在使用自然语言控制模式, {}模型不支持此模式, 请使用iic/CosyVoice-300M-Instruct模型'.format(args.model_dir))
            yield (cosyvoice.sample_rate, default_data)
        if instruct_text == '':
            gr.Warning('您正在使用自然语言控制模式, 请输入instruct文本')
            yield (cosyvoice.sample_rate, default_data)
        if prompt_wav is not None or prompt_text != '':
            gr.Info('您正在使用自然语言控制模式, prompt音频/prompt文本会被忽略')
    # if cross_lingual mode, please make sure that model is iic/CosyVoice-300M and tts_text prompt_text are different language
    if mode_checkbox_group in ['跨语种复刻']:
        if cosyvoice.instruct is True:
            gr.Warning('您正在使用跨语种复刻模式, {}模型不支持此模式, 请使用iic/CosyVoice-300M模型'.format(args.model_dir))
            yield (cosyvoice.sample_rate, default_data)
        if instruct_text != '':
            gr.Info('您正在使用跨语种复刻模式, instruct文本会被忽略')
        if prompt_wav is None:
            gr.Warning('您正在使用跨语种复刻模式, 请提供prompt音频')
            yield (cosyvoice.sample_rate, default_data)
        gr.Info('您正在使用跨语种复刻模式, 请确保合成文本和prompt文本为不同语言')
    # if in zero_shot cross_lingual, please make sure that prompt_text and prompt_wav meets requirements
    if mode_checkbox_group in ['3s极速复刻', '跨语种复刻']:
        if prompt_wav is None:
            gr.Warning('prompt音频为空,您是否忘记输入prompt音频?')
            yield (cosyvoice.sample_rate, default_data)
        if torchaudio.info(prompt_wav).sample_rate < prompt_sr:
            gr.Warning('prompt音频采样率{}低于{}'.format(torchaudio.info(prompt_wav).sample_rate, prompt_sr))
            yield (cosyvoice.sample_rate, default_data)
    # sft mode only use sft_dropdown
    if mode_checkbox_group in ['预训练音色']:
        if instruct_text != '' or prompt_wav is not None or prompt_text != '':
            gr.Info('您正在使用预训练音色模式,prompt文本/prompt音频/instruct文本会被忽略!')
        if sft_dropdown == '':
            gr.Warning('没有可用的预训练音色!')
            yield (cosyvoice.sample_rate, default_data)
    # zero_shot mode only use prompt_wav prompt text
    if mode_checkbox_group in ['3s极速复刻']:
        if prompt_text == '':
            gr.Warning('prompt文本为空,您是否忘记输入prompt文本?')
            yield (cosyvoice.sample_rate, default_data)
        if instruct_text != '':
            gr.Info('您正在使用3s极速复刻模式,预训练音色/instruct文本会被忽略!')

    if mode_checkbox_group == '预训练音色':
        logging.info('get sft inference request')
        set_all_random_seed(seed)
        for i in cosyvoice.inference_sft(tts_text, sft_dropdown, stream=stream, speed=speed):
            yield (cosyvoice.sample_rate, i['tts_speech'].numpy().flatten())
    elif mode_checkbox_group == '3s极速复刻':
        logging.info('get zero_shot inference request')
        prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
        set_all_random_seed(seed)
        for i in cosyvoice.inference_zero_shot(tts_text, prompt_text, prompt_speech_16k, stream=stream, speed=speed):
            yield (cosyvoice.sample_rate, i['tts_speech'].numpy().flatten())
    elif mode_checkbox_group == '跨语种复刻':
        logging.info('get cross_lingual inference request')
        prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
        set_all_random_seed(seed)
        for i in cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=stream, speed=speed):
            yield (cosyvoice.sample_rate, i['tts_speech'].numpy().flatten())
    else:
        logging.info('get instruct inference request')
        set_all_random_seed(seed)
        for i in cosyvoice.inference_instruct(tts_text, sft_dropdown, instruct_text, stream=stream, speed=speed):
            yield (cosyvoice.sample_rate, i['tts_speech'].numpy().flatten())


def main():
    with gr.Blocks() as demo:
        gr.Markdown("### 代码库 [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) \
                    预训练模型 [CosyVoice-300M](https://www.modelscope.cn/models/iic/CosyVoice-300M) \
                    [CosyVoice-300M-Instruct](https://www.modelscope.cn/models/iic/CosyVoice-300M-Instruct) \
                    [CosyVoice-300M-SFT](https://www.modelscope.cn/models/iic/CosyVoice-300M-SFT)")
        gr.Markdown("#### 请输入需要合成的文本,选择推理模式,并按照提示步骤进行操作")

        tts_text = gr.Textbox(label="输入合成文本", lines=1, value="我是通义实验室语音团队全新推出的生成式语音大模型,提供舒适自然的语音合成能力。")
        with gr.Row():
            mode_checkbox_group = gr.Radio(choices=inference_mode_list, label='选择推理模式', value=inference_mode_list[0])
            instruction_text = gr.Text(label="操作步骤", value=instruct_dict[inference_mode_list[0]], scale=0.5)
            sft_dropdown = gr.Dropdown(choices=sft_spk, label='选择预训练音色', value=sft_spk[0], scale=0.25)
            stream = gr.Radio(choices=stream_mode_list, label='是否流式推理', value=stream_mode_list[0][1])
            speed = gr.Number(value=1, label="速度调节(仅支持非流式推理)", minimum=0.5, maximum=2.0, step=0.1)
            with gr.Column(scale=0.25):
                seed_button = gr.Button(value="\U0001F3B2")
                seed = gr.Number(value=0, label="随机推理种子")

        with gr.Row():
            prompt_wav_upload = gr.Audio(sources='upload', type='filepath', label='选择prompt音频文件,注意采样率不低于16khz')
            prompt_wav_record = gr.Audio(sources='microphone', type='filepath', label='录制prompt音频文件')
        prompt_text = gr.Textbox(label="输入prompt文本", lines=1, placeholder="请输入prompt文本,需与prompt音频内容一致,暂时不支持自动识别...", value='')
        instruct_text = gr.Textbox(label="输入instruct文本", lines=1, placeholder="请输入instruct文本.", value='')

        generate_button = gr.Button("生成音频")

        audio_output = gr.Audio(label="合成音频", autoplay=True, streaming=True)

        seed_button.click(generate_seed, inputs=[], outputs=seed)
        generate_button.click(generate_audio,
                              inputs=[tts_text, mode_checkbox_group, sft_dropdown, prompt_text,
                                      prompt_wav_upload, prompt_wav_record, instruct_text,
                                      seed, stream, speed],
                              outputs=[audio_output])
        mode_checkbox_group.change(fn=change_instruction, inputs=[mode_checkbox_group], outputs=[instruction_text])
    demo.queue(max_size=4, default_concurrency_limit=2)
    demo.launch(server_name='0.0.0.0', server_port=args.port)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=8000)
    parser.add_argument('--model_dir', type=str, default='pretrained_models/CosyVoice2-0.5B',
                        help='local path or modelscope repo id')
    args = parser.parse_args()
    try:
        cosyvoice = CosyVoice(args.model_dir)
    except Exception:
        try:
            cosyvoice = CosyVoice2(args.model_dir)
        except Exception:
            raise TypeError('no valid model_type!')
    sft_spk = cosyvoice.list_available_spks()
    if len(sft_spk) == 0:
        sft_spk = ['']
    prompt_sr = 16000
    default_data = np.zeros(cosyvoice.sample_rate)
    main()
```
Modify webui.py

Key changes:

- Switch from streaming generation to producing the complete audio in one pass
- Use scipy.io.wavfile to write the result directly to a WAV file
- Return a file path instead of raw audio data
- Explicitly set the output component type to filepath
- Specify the output format as wav
- Simplify the pipeline: generating one complete file avoids the problems of chunked handling
- Add the scipy dependency to requirements.txt
```python
# webui.py
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Liu Yue)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import argparse
import gradio as gr
import numpy as np
import torch
import torchaudio
import random
import librosa
import tempfile
from scipy.io import wavfile

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav, logging
from cosyvoice.utils.common import set_all_random_seed

inference_mode_list = ['预训练音色', '3s极速复刻', '跨语种复刻', '自然语言控制']
instruct_dict = {'预训练音色': '1. 选择预训练音色\n2. 点击生成音频按钮',
                 '3s极速复刻': '1. 选择prompt音频文件,或录入prompt音频,注意不超过30s,若同时提供,优先选择prompt音频文件\n2. 输入prompt文本\n3. 点击生成音频按钮',
                 '跨语种复刻': '1. 选择prompt音频文件,或录入prompt音频,注意不超过30s,若同时提供,优先选择prompt音频文件\n2. 点击生成音频按钮',
                 '自然语言控制': '1. 选择预训练音色\n2. 输入instruct文本\n3. 点击生成音频按钮'}
stream_mode_list = [('否', False), ('是', True)]
max_val = 0.8


def generate_seed():
    seed = random.randint(1, 100000000)
    return {
        "__type__": "update",
        "value": seed
    }


def postprocess(speech, top_db=60, hop_length=220, win_length=440):
    speech, _ = librosa.effects.trim(
        speech, top_db=top_db,
        frame_length=win_length,
        hop_length=hop_length
    )
    if speech.abs().max() > max_val:
        speech = speech / speech.abs().max() * max_val
    speech = torch.concat([speech, torch.zeros(1, int(cosyvoice.sample_rate * 0.2))], dim=1)
    return speech


def change_instruction(mode_checkbox_group):
    return instruct_dict[mode_checkbox_group]


def generate_audio(tts_text, mode_checkbox_group, sft_dropdown, prompt_text,
                   prompt_wav_upload, prompt_wav_record, instruct_text,
                   seed, stream, speed):
    if prompt_wav_upload is not None:
        prompt_wav = prompt_wav_upload
    elif prompt_wav_record is not None:
        prompt_wav = prompt_wav_record
    else:
        prompt_wav = None
    # if instruct mode, please make sure that model is iic/CosyVoice-300M-Instruct and not cross_lingual mode
    if mode_checkbox_group in ['自然语言控制']:
        if cosyvoice.instruct is False:
            gr.Warning('您正在使用自然语言控制模式, {}模型不支持此模式, 请使用iic/CosyVoice-300M-Instruct模型'.format(args.model_dir))
            return None
        if instruct_text == '':
            gr.Warning('您正在使用自然语言控制模式, 请输入instruct文本')
            return None
        if prompt_wav is not None or prompt_text != '':
            gr.Info('您正在使用自然语言控制模式, prompt音频/prompt文本会被忽略')
    # if cross_lingual mode, please make sure that model is iic/CosyVoice-300M and tts_text prompt_text are different language
    if mode_checkbox_group in ['跨语种复刻']:
        if cosyvoice.instruct is True:
            gr.Warning('您正在使用跨语种复刻模式, {}模型不支持此模式, 请使用iic/CosyVoice-300M模型'.format(args.model_dir))
            return None
        if instruct_text != '':
            gr.Info('您正在使用跨语种复刻模式, instruct文本会被忽略')
        if prompt_wav is None:
            gr.Warning('您正在使用跨语种复刻模式, 请提供prompt音频')
            return None
        gr.Info('您正在使用跨语种复刻模式, 请确保合成文本和prompt文本为不同语言')
    # if in zero_shot cross_lingual, please make sure that prompt_text and prompt_wav meets requirements
    if mode_checkbox_group in ['3s极速复刻', '跨语种复刻']:
        if prompt_wav is None:
            gr.Warning('prompt音频为空,您是否忘记输入prompt音频?')
            return None
        if torchaudio.info(prompt_wav).sample_rate < prompt_sr:
            gr.Warning('prompt音频采样率{}低于{}'.format(torchaudio.info(prompt_wav).sample_rate, prompt_sr))
            return None
    # sft mode only use sft_dropdown
    if mode_checkbox_group in ['预训练音色']:
        if instruct_text != '' or prompt_wav is not None or prompt_text != '':
            gr.Info('您正在使用预训练音色模式,prompt文本/prompt音频/instruct文本会被忽略!')
        if sft_dropdown == '':
            gr.Warning('没有可用的预训练音色!')
            return None
    # zero_shot mode only use prompt_wav prompt text
    if mode_checkbox_group in ['3s极速复刻']:
        if prompt_text == '':
            gr.Warning('prompt文本为空,您是否忘记输入prompt文本?')
            return None
        if instruct_text != '':
            gr.Info('您正在使用3s极速复刻模式,预训练音色/instruct文本会被忽略!')

    try:
        if mode_checkbox_group == '预训练音色':
            logging.info('get sft inference request')
            set_all_random_seed(seed)
            audio_chunks = []
            for i in cosyvoice.inference_sft(tts_text, sft_dropdown, stream=stream, speed=speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
            if audio_chunks:
                full_audio = np.concatenate(audio_chunks)
                # Create a temp file and return its path
                with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
                    temp_path = f.name
                wavfile.write(temp_path, cosyvoice.sample_rate, full_audio)
                return temp_path
            return None
        elif mode_checkbox_group == '3s极速复刻':
            logging.info('get zero_shot inference request')
            prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
            set_all_random_seed(seed)
            audio_chunks = []
            for i in cosyvoice.inference_zero_shot(tts_text, prompt_text, prompt_speech_16k, stream=stream, speed=speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
            if audio_chunks:
                full_audio = np.concatenate(audio_chunks)
                with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
                    temp_path = f.name
                wavfile.write(temp_path, cosyvoice.sample_rate, full_audio)
                return temp_path
            return None
        elif mode_checkbox_group == '跨语种复刻':
            logging.info('get cross_lingual inference request')
            prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
            set_all_random_seed(seed)
            audio_chunks = []
            for i in cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k, stream=stream, speed=speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
            if audio_chunks:
                full_audio = np.concatenate(audio_chunks)
                with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
                    temp_path = f.name
                wavfile.write(temp_path, cosyvoice.sample_rate, full_audio)
                return temp_path
            return None
        else:
            logging.info('get instruct inference request')
            set_all_random_seed(seed)
            audio_chunks = []
            for i in cosyvoice.inference_instruct(tts_text, sft_dropdown, instruct_text, stream=stream, speed=speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
            if audio_chunks:
                full_audio = np.concatenate(audio_chunks)
                with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
                    temp_path = f.name
                wavfile.write(temp_path, cosyvoice.sample_rate, full_audio)
                return temp_path
            return None
    except Exception as e:
        logging.error(f"音频生成错误: {e}")
        return None


def main():
    with gr.Blocks() as demo:
        gr.Markdown("### 代码库 [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) \
                    预训练模型 [CosyVoice-300M](https://www.modelscope.cn/models/iic/CosyVoice-300M) \
                    [CosyVoice-300M-Instruct](https://www.modelscope.cn/models/iic/CosyVoice-300M-Instruct) \
                    [CosyVoice-300M-SFT](https://www.modelscope.cn/models/iic/CosyVoice-300M-SFT)")
        gr.Markdown("#### 请输入需要合成的文本,选择推理模式,并按照提示步骤进行操作")

        tts_text = gr.Textbox(label="输入合成文本", lines=1, value="我是通义实验室语音团队全新推出的生成式语音大模型,提供舒适自然的语音合成能力。")
        with gr.Row():
            mode_checkbox_group = gr.Radio(choices=inference_mode_list, label='选择推理模式', value=inference_mode_list[0])
            instruction_text = gr.Text(label="操作步骤", value=instruct_dict[inference_mode_list[0]], scale=0.5)
            sft_dropdown = gr.Dropdown(choices=sft_spk, label='选择预训练音色', value=sft_spk[0], scale=0.25)
            stream = gr.Radio(choices=stream_mode_list, label='是否流式推理', value=stream_mode_list[0][1])
            speed = gr.Number(value=1, label="速度调节(仅支持非流式推理)", minimum=0.5, maximum=2.0, step=0.1)
            with gr.Column(scale=0.25):
                seed_button = gr.Button(value="\U0001F3B2")
                seed = gr.Number(value=0, label="随机推理种子")

        with gr.Row():
            prompt_wav_upload = gr.Audio(sources='upload', type='filepath', label='选择prompt音频文件,注意采样率不低于16khz')
            prompt_wav_record = gr.Audio(sources='microphone', type='filepath', label='录制prompt音频文件')
        prompt_text = gr.Textbox(label="输入prompt文本", lines=1, placeholder="请输入prompt文本,需与prompt音频内容一致,暂时不支持自动识别...", value='')
        instruct_text = gr.Textbox(label="输入instruct文本", lines=1, placeholder="请输入instruct文本.", value='')

        generate_button = gr.Button("生成音频")

        # Modified audio component configuration
        audio_output = gr.Audio(
            label="合成音频",
            autoplay=True,
            type="filepath",  # explicitly return a file path
            format="wav"      # output format is wav
        )

        seed_button.click(generate_seed, inputs=[], outputs=seed)
        generate_button.click(
            generate_audio,
            inputs=[tts_text, mode_checkbox_group, sft_dropdown, prompt_text,
                    prompt_wav_upload, prompt_wav_record, instruct_text,
                    seed, stream, speed],
            outputs=[audio_output]
        )
        mode_checkbox_group.change(fn=change_instruction, inputs=[mode_checkbox_group], outputs=[instruction_text])
    demo.queue(max_size=4, default_concurrency_limit=2)
    demo.launch(server_name='0.0.0.0', server_port=args.port)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=8000)
    parser.add_argument('--model_dir', type=str, default='pretrained_models/CosyVoice-300M',
                        help='local path or modelscope repo id')
    args = parser.parse_args()
    try:
        cosyvoice = CosyVoice(args.model_dir)
    except Exception:
        try:
            cosyvoice = CosyVoice2(args.model_dir)
        except Exception:
            raise TypeError('no valid model_type!')
    sft_spk = cosyvoice.list_available_spks()
    if len(sft_spk) == 0:
        sft_spk = ['']
    prompt_sr = 16000
    default_data = np.zeros(cosyvoice.sample_rate)
    main()
```
Voice Quality

Generation is fairly fast: a text of roughly 100 characters takes about 30 seconds to synthesize.

Quality is decent: at least with default parameters, the output audio sounds reasonably natural.
FastAPI Service

To make it easier to integrate other models later on, the Gradio service is converted here into a FastAPI service, exposing an endpoint through which LLM applications can call CosyVoice 2.0.
Modify the Docker configuration:

```dockerfile
# Dockerfile
FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04

# Set the working directory
WORKDIR /app

RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    git-lfs \
    ffmpeg \
    python3 \
    python3-pip \
    python3-venv \
    && rm -rf /var/lib/apt/lists/* \
    && ln -sf /usr/bin/python3 /usr/bin/python

# Copy project files
COPY . .

# Ensure the third_party/Matcha-TTS directory exists
#RUN if [ ! -d "third_party/Matcha-TTS" ] || [ -z "$(ls -A third_party/Matcha-TTS)" ]; then \
#        echo "Downloading Matcha-TTS..."; \
#        rm -rf third_party/Matcha-TTS && \
#        git clone https://ghfast.top/https://github.com/shivammehta25/Matcha-TTS.git third_party/Matcha-TTS; \
#    fi

# Install Python dependencies
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/

# Initialize git lfs (needed if large files must be downloaded)
RUN git lfs install

# Expose the port
EXPOSE 7897

CMD ["python", "cvapi.py", "--port", "7897", "--model_dir", "pretrained_models/CosyVoice2-0.5B"]
```

```yaml
# docker-compose.yml
services:
  cosyvoice-webui:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: cosyvoice-webui
    ports:
      - "29999:7897"
    volumes:
      # Mount the pretrained model directory
      - ./pretrained_models:/app/pretrained_models
      # Optional: mount an output directory for generated audio
      - ./outputs:/app/outputs
      # Mount the temp file directory
      - ./temp:/tmp
    environment:
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

```text
# requirements.txt
conformer==0.3.2
deepspeed==0.15.1; sys_platform == 'linux'
diffusers==0.29.0
fastapi==0.115.6
fastapi-cli==0.0.4
gdown==5.1.0
gradio==5.4.0
grpcio==1.57.0
grpcio-tools==1.57.0
hydra-core==1.3.2
HyperPyYAML==1.2.2
inflect==7.3.1
librosa==0.10.2
lightning==2.2.4
matplotlib==3.7.5
modelscope==1.20.0
networkx==3.1
omegaconf==2.3.0
onnx==1.16.0
onnxruntime-gpu==1.16.0; sys_platform == 'linux'
openai-whisper==20231117
protobuf==4.25
pyarrow==18.1.0
pydantic==2.7.0
pyworld==0.3.4
rich==13.7.1
soundfile==0.12.1
tensorboard==2.14.0
tensorrt-cu12==10.0.1; sys_platform == 'linux'
tensorrt-cu12-bindings==10.0.1; sys_platform == 'linux'
tensorrt-cu12-libs==10.0.1; sys_platform == 'linux'
torch==2.3.1
torchaudio==2.3.1
transformers==4.51.3
uvicorn==0.30.0
wetext==0.0.4
wget==3.2
scipy
python-multipart
```
Create the FastAPI service:

```python
# cvapi.py
import os
import sys
import argparse
import numpy as np
import torch
import torchaudio
import random
import librosa
import tempfile
from scipy.io import wavfile
from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse
from pydantic import BaseModel
import uvicorn
from typing import Optional

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav, logging
from cosyvoice.utils.common import set_all_random_seed

# FastAPI application
app = FastAPI(title="CosyVoice TTS API", version="1.0.0")

# Globals
cosyvoice = None
sft_spk = []
prompt_sr = 16000
max_val = 0.8


# Request model
class TTSRequest(BaseModel):
    text: str
    mode: str = "预训练音色"  # 预训练音色, 3s极速复刻, 跨语种复刻, 自然语言控制
    voice: str = ""  # pretrained voice name
    prompt_text: Optional[str] = None
    prompt_audio_path: Optional[str] = None
    instruct_text: Optional[str] = None
    seed: int = 0
    stream: bool = False
    speed: float = 1.0


class TTSResponse(BaseModel):
    success: bool
    message: str
    audio_path: Optional[str] = None
    error: Optional[str] = None


def postprocess(speech, top_db=60, hop_length=220, win_length=440):
    speech, _ = librosa.effects.trim(
        speech, top_db=top_db,
        frame_length=win_length,
        hop_length=hop_length
    )
    if speech.abs().max() > max_val:
        speech = speech / speech.abs().max() * max_val
    speech = torch.concat([speech, torch.zeros(1, int(cosyvoice.sample_rate * 0.2))], dim=1)
    return speech


def generate_audio_api(request: TTSRequest):
    """API version of audio generation"""
    try:
        prompt_wav = request.prompt_audio_path
        # Parameter validation
        if request.mode in ['自然语言控制']:
            if cosyvoice.instruct is False:
                return TTSResponse(
                    success=False,
                    message="自然语言控制模式不支持当前模型",
                    error="请使用iic/CosyVoice-300M-Instruct模型"
                )
            if not request.instruct_text:
                return TTSResponse(
                    success=False,
                    message="请输入instruct文本",
                    error="instruct_text参数为空"
                )
        if request.mode in ['跨语种复刻']:
            if cosyvoice.instruct is True:
                return TTSResponse(
                    success=False,
                    message="跨语种复刻模式不支持当前模型",
                    error="请使用iic/CosyVoice-300M模型"
                )
            if not prompt_wav:
                return TTSResponse(
                    success=False,
                    message="请提供prompt音频",
                    error="prompt_audio_path参数为空"
                )
        if request.mode in ['3s极速复刻', '跨语种复刻']:
            if not prompt_wav:
                return TTSResponse(
                    success=False,
                    message="prompt音频为空",
                    error="prompt_audio_path参数为空"
                )
            if torchaudio.info(prompt_wav).sample_rate < prompt_sr:
                return TTSResponse(
                    success=False,
                    message="prompt音频采样率过低",
                    error=f"采样率{torchaudio.info(prompt_wav).sample_rate}低于{prompt_sr}"
                )
        if request.mode in ['预训练音色']:
            if not request.voice:
                return TTSResponse(
                    success=False,
                    message="没有选择预训练音色",
                    error="voice参数为空"
                )
        if request.mode in ['3s极速复刻']:
            if not request.prompt_text:
                return TTSResponse(
                    success=False,
                    message="prompt文本为空",
                    error="prompt_text参数为空"
                )

        # Set the random seed
        if request.seed == 0:
            request.seed = random.randint(1, 100000000)
        set_all_random_seed(request.seed)

        # Generate the audio
        if request.mode == '预训练音色':
            logging.info('get sft inference request via API')
            audio_chunks = []
            for i in cosyvoice.inference_sft(request.text, request.voice,
                                             stream=request.stream, speed=request.speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
        elif request.mode == '3s极速复刻':
            logging.info('get zero_shot inference request via API')
            prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
            audio_chunks = []
            for i in cosyvoice.inference_zero_shot(request.text, request.prompt_text, prompt_speech_16k,
                                                   stream=request.stream, speed=request.speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
        elif request.mode == '跨语种复刻':
            logging.info('get cross_lingual inference request via API')
            prompt_speech_16k = postprocess(load_wav(prompt_wav, prompt_sr))
            audio_chunks = []
            for i in cosyvoice.inference_cross_lingual(request.text, prompt_speech_16k,
                                                       stream=request.stream, speed=request.speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)
        else:  # 自然语言控制
            logging.info('get instruct inference request via API')
            audio_chunks = []
            for i in cosyvoice.inference_instruct(request.text, request.voice, request.instruct_text,
                                                  stream=request.stream, speed=request.speed):
                audio_data = i['tts_speech'].numpy().flatten()
                audio_chunks.append(audio_data)

        # Save the audio file
        if audio_chunks:
            full_audio = np.concatenate(audio_chunks)
            with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
                temp_path = f.name
            wavfile.write(temp_path, cosyvoice.sample_rate, full_audio)
            return TTSResponse(
                success=True,
                message="音频生成成功",
                audio_path=temp_path
            )
        else:
            return TTSResponse(
                success=False,
                message="音频生成失败",
                error="没有生成音频数据"
            )
    except Exception as e:
        logging.error(f"API音频生成错误: {e}")
        return TTSResponse(
            success=False,
            message="音频生成过程中出现错误",
            error=str(e)
        )


# API routes
@app.get("/")
async def root():
    return {"message": "CosyVoice TTS API", "status": "running"}


@app.get("/voices")
async def get_voices():
    """List the available pretrained voices"""
    return {"voices": sft_spk}


@app.post("/tts", response_model=TTSResponse)
async def text_to_speech(request: TTSRequest):
    """Text-to-speech endpoint"""
    return generate_audio_api(request)


@app.get("/audio/{file_path:path}")
async def get_audio(file_path: str):
    """Fetch a generated audio file"""
    if os.path.exists(file_path) and file_path.endswith('.wav'):
        return FileResponse(file_path, media_type='audio/wav', filename=os.path.basename(file_path))
    else:
        raise HTTPException(status_code=404, detail="音频文件未找到")


def init_model(model_dir: str):
    """Initialize the model"""
    global cosyvoice, sft_spk
    try:
        cosyvoice = CosyVoice(model_dir)
    except Exception:
        try:
            cosyvoice = CosyVoice2(model_dir)
        except Exception:
            raise TypeError('no valid model_type!')
    sft_spk = cosyvoice.list_available_spks()
    if len(sft_spk) == 0:
        sft_spk = ['']
    logging.info(f"模型加载成功,可用音色: {sft_spk}")


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=7897)
    parser.add_argument('--model_dir', type=str, default='pretrained_models/CosyVoice-300M')
    args = parser.parse_args()

    # Initialize the model
    init_model(args.model_dir)

    # Start the FastAPI service
    uvicorn.run(app, host="0.0.0.0", port=args.port)
```
Test: the audio file is returned correctly:

```python
# api_test.py
import requests
import json

# API base URL
BASE_URL = "http://meta.iinside.cn:29999"

# Fetch the available voices
response = requests.get(f"{BASE_URL}/voices")
voices = response.json()["voices"]
print("可用音色:", voices)

# Text-to-speech
tts_data = {
    "text": "午后阳光透过窗户洒在书桌上,空气中弥漫着淡淡的咖啡香气。街道上车流不息,行人匆匆走过,各自奔赴不同的目的地。公园里孩子们追逐嬉戏,笑声如同清脆的铃铛般回荡在树荫下。远处传来卖花小贩的吆喝声,混合着面包店飘来的奶油甜香,构成城市熟悉的日常交响。老人坐在长椅上翻着报纸,偶尔抬头看看嬉闹的孩童,嘴角泛起一丝温和的笑意。天空中的云朵缓缓移动,时而遮住太阳,在地面投下流动的光影。",
    "mode": "预训练音色",
    "voice": voices[0] if voices else "",
    "speed": 1.0
}

response = requests.post(f"{BASE_URL}/tts", json=tts_data)
result = response.json()

if result["success"]:
    # Download the audio file
    audio_url = f"{BASE_URL}/audio/{result['audio_path']}"
    audio_response = requests.get(audio_url)
    with open("output.wav", "wb") as f:
        f.write(audio_response.content)
    print("音频已保存为 output.wav")
else:
    print(f"错误: {result['error']}")
```
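The cloning path can be exercised over the same API via `prompt_audio_path`. Note that this must be a path visible inside the container (docker-compose.yml mounts ./temp to /tmp); the reference file name below is a placeholder. A hedged sketch:

```python
import requests

BASE_URL = "http://meta.iinside.cn:29999"

# Zero-shot cloning through /tts: copy a reference clip into ./temp on the
# host first so it is visible as /tmp/reference.wav inside the container.
tts_data = {
    "text": "这是用克隆音色合成的一句话。",
    "mode": "3s极速复刻",
    "prompt_text": "参考音频对应的文字内容。",   # must match the reference clip
    "prompt_audio_path": "/tmp/reference.wav",  # placeholder, pre-copied file
    "speed": 1.0,
}
result = requests.post(f"{BASE_URL}/tts", json=tts_data).json()
print(result["audio_path"] if result["success"] else result["error"])
```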