Introduction

FunASR (Fun Audio Speech Recognition) is an open-source speech recognition toolkit that aims to bridge the gap between academic research on speech recognition and industrial application. It is developed under Alibaba's Tongyi Lab and is one of the core components of the lab's "通义百聆" family of enterprise speech models.
Positioning and Core Components

FunASR's central goal is to support the training, fine-tuning, and deployment of industrial-grade speech recognition models, giving developers and researchers a convenient speech recognition solution.
- Multi-function integration: beyond high-accuracy automatic speech recognition (ASR), it bundles several practical capabilities, including voice activity detection (VAD), punctuation restoration, speaker verification, speaker diarization, and multi-speaker recognition.
- Rich model zoo: a large set of pretrained models is published on ModelScope and Hugging Face. A representative example is Paraformer, a non-autoregressive end-to-end model that combines high accuracy with high efficiency; Chinese, English, and other languages are supported.
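The components above can be wired together in one call: AutoModel accepts companion VAD and punctuation models alongside the ASR model. A minimal sketch, assuming funasr is installed and that the model aliases below (taken from the FunASR README: `paraformer-zh`, `fsmn-vad`, `ct-punc`) still resolve in the current model zoo:

```python
# Sketch: combining ASR, VAD and punctuation restoration in one pipeline.
# The aliases below are assumptions based on the FunASR README; verify
# them against the current model zoo before relying on them.
MODEL_IDS = {
    "asr": "paraformer-zh",   # non-autoregressive Paraformer, Chinese
    "vad": "fsmn-vad",        # voice activity detection
    "punc": "ct-punc",        # punctuation restoration
}

def build_pipeline():
    """Instantiate an AutoModel with VAD + punctuation attached (requires funasr)."""
    from funasr import AutoModel  # imported lazily so the sketch loads without funasr
    return AutoModel(
        model=MODEL_IDS["asr"],
        vad_model=MODEL_IDS["vad"],
        punc_model=MODEL_IDS["punc"],
    )

if __name__ == "__main__":
    model = build_pipeline()
    print(model.generate(input="your_audio_file.wav"))
```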
Technical Features and Advantages

FunASR's architecture emphasizes practicality, efficiency, and adaptability:
- Streaming and non-streaming recognition: real-time streaming recognition uses incremental decoding and, per the project's claims, can keep latency under 200 ms, suiting live streams, meetings, and other real-time scenarios; high-accuracy non-streaming recognition handles offline file transcription.
- Modern model architectures: acoustic models can use Conformer or Transformer architectures to capture temporal structure in audio; language models support hybrid deployment of N-gram and neural models, balancing latency against accuracy.
- Noise robustness and domain adaptation: an integrated speech enhancement module and domain-adaptation framework keep recognition rates high in noisy settings (meeting rooms, factory floors), and fine-tuning quickly adapts the models to specialized terminology in fields such as medicine and finance.
- Easy deployment and serving: a concise Python API and command-line tools are provided; models can be exported to ONNX for cross-platform deployment, and Docker images make it straightforward to stand up RESTful or WebSocket API services for high-concurrency workloads.
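To make the streaming bullet concrete, here is the chunking arithmetic behind incremental decoding, sketched in plain Python. The frame and chunk sizes (60 ms frames, 10 frames per step) follow the convention used in FunASR's streaming examples and are assumptions here, not figures from this document:

```python
# Sketch of the chunking behind streaming ASR: audio is cut into fixed
# windows and decoded incrementally. Assumed sizing follows the FunASR
# streaming examples: chunk_size = [0, 10, 5] means 10 frames x 60 ms
# = 600 ms of audio per decoding step.
SAMPLE_RATE = 16000   # Hz, the models' expected input rate
FRAME_MS = 60         # one encoder frame covers 60 ms
CHUNK_FRAMES = 10     # frames decoded per incremental step

def chunk_stride_samples():
    """Samples consumed per incremental decoding step."""
    return CHUNK_FRAMES * FRAME_MS * SAMPLE_RATE // 1000

def split_into_chunks(samples):
    """Split a PCM sample list into streaming-sized chunks (last may be short)."""
    stride = chunk_stride_samples()
    return [samples[i:i + stride] for i in range(0, len(samples), stride)]

# 2 seconds of silence -> 4 chunks: three full 9600-sample chunks + one remainder
chunks = split_into_chunks([0] * (2 * SAMPLE_RATE))
```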
Main Application Scenarios

These capabilities make FunASR broadly useful across domains:
- Intelligent customer service: real-time recognition and automatic routing of voice requests, with keyword spotting for quality control, improving agent efficiency.
- Online education and meeting minutes: live transcription of lectures or discussions, with speaker separation, making it easy to generate minutes and bilingual subtitles.
- Medical documentation: physicians dictate structured electronic medical records; with an integrated medical-term lexicon, recognition accuracy stays high while data privacy is preserved.
- Media production: automatic subtitle generation for audio/video, audiobook production, and similar tasks.
Quick Start

Developers can integrate FunASR in a few steps:
- Install: the core package installs via pip.

  ```shell
  pip3 install -U funasr
  ```

- Basic usage: a few lines of code are enough to try speech recognition.

  ```python
  from funasr import AutoModel

  model = AutoModel(model="paraformer-zh")
  res = model.generate(input="your_audio_file.wav")
  print(res)
  ```

- Going further: consult the official documentation, pick the model that fits your use case (e.g. the streaming model `paraformer-zh-streaming`), and fine-tune as needed.
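For the streaming model just mentioned, inference is incremental: a cache dict carries model state between calls while fixed-size chunks are fed in. The sketch below follows the streaming example in the FunASR README; parameter names such as `chunk_size`, `encoder_chunk_look_back`, and `decoder_chunk_look_back` come from that example and may change between releases, and `soundfile` is an extra dependency.

```python
FRAME_SAMPLES = 960  # one 60 ms encoder frame at 16 kHz

def num_chunks(n_samples, chunk_frames=10):
    """How many incremental decoding steps a clip needs (ceil division)."""
    stride = chunk_frames * FRAME_SAMPLES
    return max(1, -(-n_samples // stride))

def stream_transcribe(wav_path):
    """Chunked inference with paraformer-zh-streaming (requires funasr, soundfile)."""
    import soundfile
    from funasr import AutoModel

    chunk_size = [0, 10, 5]  # 10 frames x 60 ms = 600 ms decoding chunks
    model = AutoModel(model="paraformer-zh-streaming")
    speech, _sr = soundfile.read(wav_path)  # expects 16 kHz mono input
    stride = chunk_size[1] * FRAME_SAMPLES
    cache = {}   # carries encoder/decoder state across calls
    pieces = []
    total = num_chunks(len(speech))
    for i in range(total):
        res = model.generate(
            input=speech[i * stride:(i + 1) * stride],
            cache=cache,
            is_final=(i == total - 1),
            chunk_size=chunk_size,
            encoder_chunk_look_back=4,  # encoder lookback, in chunks
            decoder_chunk_look_back=1,  # decoder cross-attention lookback
        )
        if res:
            pieces.append(res[0].get("text", ""))
    return "".join(pieces)
```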
Docker Deployment (CPU)
- Project structure:

  ```
  funasr-docker/
  ├── docker-compose.yml
  ├── download_models.sh
  └── resources/
      ├── models/   # persisted model files
      ├── logs/     # log directory
      └── audio/    # test audio directory
  ```

- Docker Compose configuration:
  ```yaml
  # docker-compose.yml
  services:
    # Real-time speech recognition service (online mode)
    funasr-online:
      image: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.10
      container_name: funasr-online
      restart: always
      privileged: true
      ports:
        - "10095:10095"  # map the service port to the host
      volumes:
        - ./resources/models:/workspace/models  # mount the model directory for persistence
        - ./resources/logs:/workspace/logs
        - ./resources/audio:/workspace/audio
      command: >
        bash -c "
        cd /workspace/FunASR/runtime &&
        nohup bash run_server_2pass.sh
        --download-model-dir /workspace/models
        --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx
        --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx
        --online-model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx
        --punc-dir damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx
        --hotword /workspace/models/hotwords.txt
        --certfile 0 > /workspace/logs/online.log 2>&1 &
        tail -f /dev/null
        "
      networks:
        - funasr-net

    # Offline speech recognition service (whole audio files)
    funasr-offline:
      image: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.6
      container_name: funasr-offline
      restart: always
      privileged: true
      ports:
        - "10096:10095"
      volumes:
        - ./resources/models:/workspace/models
        - ./resources/logs:/workspace/logs
        - ./resources/audio:/workspace/audio
      command: >
        bash -c "
        cd /workspace/FunASR/runtime &&
        nohup bash run_server.sh
        --download-model-dir /workspace/models
        --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx
        --punc-dir damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx
        --hotword /workspace/models/hotwords.txt
        --certfile 0 > /workspace/logs/offline.log 2>&1 &
        tail -f /dev/null
        "
      networks:
        - funasr-net

  networks:
    funasr-net:
      driver: bridge
  ```
- Model download script:

  ```shell
  #!/bin/bash
  # download_models.sh
  # NOTE: docker -v requires an absolute host path, so resolve it here
  MODEL_DIR="$(pwd)/resources/models"
  mkdir -p "$MODEL_DIR"

  # Download the VAD model
  docker run --rm -v "$MODEL_DIR":/workspace/models \
    registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.10 \
    python -m modelscope.hub.snapshot_download damo/speech_fsmn_vad_zh-cn-16k-common-onnx -d /workspace/models

  # Download the remaining models...
  # See docker-compose.yml for the full model list.
  echo "Model download finished!"
  ```
- Run:

  ```shell
  chmod +x download_models.sh && ./download_models.sh
  docker-compose up -d
  ```
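Once the containers are up, the online service speaks a WebSocket protocol: a JSON handshake, then raw PCM frames, then an end-of-stream message. The sketch below is modeled on the official funasr_wss_client example; the exact field names are assumptions to verify against your runtime version, and the `websockets` package is an extra dependency (with `--certfile 0` as configured above, plain `ws://` works):

```python
import json

def build_handshake(mode="2pass", wav_name="demo", hotwords=""):
    """First message of the FunASR runtime WebSocket protocol.
    Field names follow the funasr_wss_client example; treat as assumptions."""
    return {
        "mode": mode,              # "online", "offline" or "2pass"
        "chunk_size": [5, 10, 5],
        "chunk_interval": 10,
        "wav_name": wav_name,
        "is_speaking": True,
        "hotwords": hotwords,
        "itn": True,
    }

async def transcribe(pcm_bytes, uri="ws://localhost:10095"):
    """Stream 16 kHz/16-bit mono PCM to the dockerized server (requires `websockets`)."""
    import websockets  # third-party: pip install websockets
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps(build_handshake()))
        step = 9600  # ~300 ms of 16-bit mono audio per frame
        for i in range(0, len(pcm_bytes), step):
            await ws.send(pcm_bytes[i:i + step])
        await ws.send(json.dumps({"is_speaking": False}))  # end of stream
        return json.loads(await ws.recv()).get("text", "")
```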
Gradio Service
- Project structure:

  ```
  ├── docker-compose.yml
  ├── Dockerfile
  ├── fun_gradio.py
  ├── logs
  ├── models
  └── requirements.txt
  ```

- Gradio service code:
  Full-featured version:

  ```python
  # fun_gradio.py (full-featured version)
  import gradio as gr
  from funasr import AutoModel
  import os
  import time
  from datetime import datetime
  import torch


  def load_funasr_model(model_size="large", device="auto"):
      """Load a FunASR model.

      model_size: "large", "medium" or "small"
      device: "auto" picks automatically; "cuda" or "cpu" to force a device
      """
      try:
          # Auto-detect the device
          if device == "auto":
              device = "cuda" if torch.cuda.is_available() else "cpu"
          # Map the requested size to a concrete model
          if model_size == "large":
              model_name = "paraformer-zh"
          elif model_size == "medium":
              model_name = "paraformer-zh-streaming"  # streaming variant
          else:
              model_name = "funasr/sensevoice-small"  # lightweight variant
          print(f"Loading model: {model_name}, device: {device}")
          model = AutoModel(model=model_name, device=device)
          print("Model loaded successfully!")
          return model
      except Exception as e:
          print(f"Failed to load model: {e}")
          return None


  def speech_recognition(audio_file, language="zh", enable_itn=True, hotwords_text=""):
      """Recognize a single audio file.

      audio_file: path to the audio file
      language: language code
      enable_itn: apply inverse text normalization (ITN)
      hotwords_text: hotwords, one per line
      """
      if audio_file is None:
          return "Please upload an audio file first", "", ""
      # Make sure the model has been loaded
      if 'funasr_model' not in globals() or funasr_model is None:
          return "Error: model not loaded, initialize it first", "", ""
      try:
          start_time = time.time()
          # Parse the hotword list
          hotwords_list = None
          if hotwords_text.strip():
              hotwords_list = [w.strip() for w in hotwords_text.split('\n') if w.strip()]
              print(f"Using hotwords: {hotwords_list}")
          # Run recognition
          result = funasr_model.generate(
              input=audio_file,
              language=language,
              hotwords=hotwords_list,
              itn=enable_itn
          )
          processing_time = time.time() - start_time
          # Parse the result
          if result and len(result) > 0:
              raw_text = result[0].get("text", "")
              itn_text = result[0].get("itn_text", "")
              # Build the info message
              info_text = f"Done! Took {processing_time:.2f}s\n"
              info_text += f"Language: {language}"
              if hotwords_list:
                  info_text += f"\nHotwords used: {len(hotwords_list)}"
              return raw_text, itn_text if enable_itn else "ITN disabled", info_text
          else:
              return "Recognition failed: empty result", "", f"Took {processing_time:.2f}s"
      except Exception as e:
          error_msg = f"Error during recognition: {e}"
          print(error_msg)
          return error_msg, "", "Processing failed"


  def batch_processing(file_files, language="zh", enable_itn=True):
      """Process several audio files in one go."""
      if not file_files:
          return "Please upload audio files first"
      if 'funasr_model' not in globals() or funasr_model is None:
          return "Error: model not loaded, initialize it first"
      results = []
      total_files = len(file_files)
      for audio_file in file_files:
          try:
              raw_text, itn_text, info = speech_recognition(audio_file.name, language, enable_itn)
              results.append({
                  "filename": os.path.basename(audio_file.name),
                  "raw_text": raw_text,
                  "itn_text": itn_text,
                  "status": "ok"
              })
          except Exception as e:
              results.append({
                  "filename": os.path.basename(audio_file.name),
                  "raw_text": "",
                  "itn_text": "",
                  "status": f"failed: {e}"
              })
      # Build the report
      report = f"Batch finished! Processed {total_files} file(s)\n\n"
      for i, result in enumerate(results, 1):
          report += f"{i}. {result['filename']} - {result['status']}\n"
          if result['status'] == 'ok':
              report += f"   Result: {result['itn_text'][:100]}...\n"
          report += "\n"
      return report


  def reload_model(model_size, device):
      """Reload the model with new settings."""
      global funasr_model
      try:
          # Release the old model first
          if 'funasr_model' in globals() and funasr_model is not None:
              del funasr_model
              import gc
              gc.collect()
              if torch.cuda.is_available():
                  torch.cuda.empty_cache()
          # Load the new model
          funasr_model = load_funasr_model(model_size, device)
          if funasr_model is not None:
              return (f"Model reloaded!\nModel size: {model_size}\nDevice: {device}\n"
                      f"Time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
          else:
              return "Model load failed, please check the settings"
      except Exception as e:
          return f"Error while reloading the model: {e}"


  def get_system_info():
      """Collect basic system information."""
      cuda_available = torch.cuda.is_available()
      cuda_info = "available" if cuda_available else "unavailable"
      if cuda_available:
          cuda_info += (f" (GPU: {torch.cuda.get_device_name(0)}, "
                        f"memory: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.1f}GB)")
      info = f"System time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
      info += f"CUDA: {cuda_info}\n"
      info += f"PyTorch: {torch.__version__}\n"
      if 'funasr_model' in globals() and funasr_model is not None:
          info += "Model status: loaded\n"
      else:
          info += "Model status: not loaded\n"
      return info


  # Initialize the model at startup
  print("Initializing the FunASR model...")
  funasr_model = load_funasr_model("large", "auto")

  # Build the Gradio UI
  with gr.Blocks(theme=gr.themes.Soft(), title="FunASR Speech Recognition") as demo:
      gr.Markdown("""
      # 🎤 FunASR Speech Recognition
      A local speech-to-text system built on Alibaba's FunASR models.
      """)
      with gr.Tab("Single file"):
          with gr.Row():
              with gr.Column():
                  gr.Markdown("### Upload & settings")
                  audio_input = gr.Audio(
                      sources=["upload", "microphone"],
                      type="filepath",
                      label="Upload an audio file or record from the microphone"
                  )
                  with gr.Row():
                      language_dropdown = gr.Dropdown(
                          choices=["zh", "en", "ja"], value="zh", label="Language"
                      )
                      itn_checkbox = gr.Checkbox(value=True, label="Enable ITN")
                  hotwords_textbox = gr.Textbox(
                      lines=3,
                      label="Hotwords (one per line)",
                      placeholder="Words to boost, one per line\ne.g.:\n阿里巴巴\n达摩院\n通义千问"
                  )
                  recognize_btn = gr.Button("Recognize", variant="primary")
              with gr.Column():
                  gr.Markdown("### Results")
                  raw_output = gr.Textbox(lines=4, label="Raw text", interactive=False)
                  itn_output = gr.Textbox(lines=4, label="Normalized text", interactive=False)
                  info_output = gr.Textbox(lines=3, label="Processing info", interactive=False)
      with gr.Tab("Batch"):
          with gr.Row():
              with gr.Column():
                  gr.Markdown("### Batch upload")
                  file_files = gr.File(
                      file_count="multiple",
                      file_types=[".wav", ".mp3", ".m4a", ".flac"],
                      label="Select several audio files"
                  )
                  batch_language = gr.Dropdown(choices=["zh", "en", "ja"], value="zh", label="Language")
                  batch_itn = gr.Checkbox(value=True, label="Enable ITN")
                  batch_btn = gr.Button("Run batch", variant="primary")
              with gr.Column():
                  gr.Markdown("### Batch results")
                  batch_report = gr.Textbox(lines=10, label="Report", interactive=False)
      with gr.Tab("Model"):
          gr.Markdown("### Model settings")
          with gr.Row():
              model_size_dropdown = gr.Dropdown(choices=["large", "medium", "small"], value="large", label="Model size")
              device_dropdown = gr.Dropdown(choices=["auto", "cuda", "cpu"], value="auto", label="Device")
          reload_btn = gr.Button("Reload model", variant="primary")
          reload_output = gr.Textbox(lines=3, label="Reload result", interactive=False)
          gr.Markdown("### System status")
          system_info = gr.Textbox(label="System info", value=get_system_info(), interactive=False)
          refresh_btn = gr.Button("Refresh")
      with gr.Tab("Help"):
          gr.Markdown("### Usage notes")
          gr.Markdown("""
          **Supported formats**: WAV, MP3, M4A, FLAC and other common audio formats

          **Recommended input**:
          - Sample rate: 16 kHz
          - Channels: mono
          - Bit depth: 16 bit

          **Model sizes**:
          - large: highest accuracy, slower
          - medium: balanced accuracy and speed
          - small: fastest, suited to real-time use

          **Device**:
          - auto: pick automatically (GPU preferred)
          - cuda: GPU acceleration (NVIDIA card required)
          - cpu: CPU only

          **Hotwords**: supply domain terms or names to improve their recognition

          **ITN**: converts spoken forms into normalized written text
          """)

      # Wire up the events
      recognize_btn.click(
          fn=speech_recognition,
          inputs=[audio_input, language_dropdown, itn_checkbox, hotwords_textbox],
          outputs=[raw_output, itn_output, info_output]
      )
      # NOTE: fixed from the original, which returned two values and listed
      # batch_report twice in outputs; the report alone is enough here.
      batch_btn.click(
          fn=batch_processing,
          inputs=[file_files, batch_language, batch_itn],
          outputs=[batch_report]
      )
      reload_btn.click(
          fn=reload_model,
          inputs=[model_size_dropdown, device_dropdown],
          outputs=[reload_output]
      )
      refresh_btn.click(fn=get_system_info, inputs=[], outputs=[system_info])

  # Start the service
  if __name__ == "__main__":
      server_port = 7860
      server_name = "0.0.0.0"  # allow remote access
      print(f"Starting Gradio at http://localhost:{server_port}")
      print("For remote access use: http://<your-ip>:7860")
      demo.launch(
          server_name=server_name,
          server_port=server_port,
          share=False,      # set True to create a public link
          inbrowser=True    # open a browser on start
      )
  ```

  Simplified version:

  ```python
  # fun_gradio.py (simplified version)
  import gradio as gr
  from funasr import AutoModel
  import os
  import time
  from datetime import datetime

  # Paths (overridable through the environment)
  MODEL_PATH = os.getenv('MODEL_PATH', '/app/models')
  LOG_PATH = os.getenv('LOG_PATH', '/app/logs')
  # Make sure the directories exist
  os.makedirs(MODEL_PATH, exist_ok=True)
  os.makedirs(LOG_PATH, exist_ok=True)


  def load_model(model_name="paraformer-zh"):
      """Load a FunASR model."""
      try:
          print(f"Loading model: {model_name}")
          model = AutoModel(model=model_name)
          print("Model loaded successfully!")
          return model
      except Exception as e:
          print(f"Failed to load model: {e}")
          return None


  def transcribe_audio(audio_file, model_size="large", enable_itn=True):
      """Recognize one audio file.

      NOTE: model_size is accepted for UI symmetry but currently unused;
      the default model is always loaded.
      """
      if audio_file is None:
          return "Please upload an audio file first", ""
      # Load the model
      model = load_model()
      if model is None:
          return "Model load failed, please check the model configuration", ""
      try:
          start_time = time.time()
          # Run recognition
          result = model.generate(input=audio_file, language="zh", itn=enable_itn)
          processing_time = time.time() - start_time
          # Append a log entry
          log_entry = f"{datetime.now()}: processed {audio_file}, took {processing_time:.2f}s\n"
          with open(os.path.join(LOG_PATH, "processing.log"), "a", encoding="utf-8") as f:
              f.write(log_entry)
          # Parse the result
          if result and len(result) > 0:
              text = result[0].get("text", "recognition failed")
              return text, f"Done! Took {processing_time:.2f}s"
          else:
              return "Recognition failed: empty result", f"Took {processing_time:.2f}s"
      except Exception as e:
          error_msg = f"Error during recognition: {e}"
          print(error_msg)
          return error_msg, "Processing failed"


  def create_gradio_interface():
      with gr.Blocks(theme=gr.themes.Soft(), title="FunASR Speech Recognition") as demo:
          gr.Markdown("""
          # 🎤 FunASR Speech Recognition
          A local speech-to-text system built on Alibaba's FunASR models
          """)
          with gr.Row():
              with gr.Column():
                  gr.Markdown("### Upload")
                  audio_input = gr.Audio(
                      sources=["upload", "microphone"],
                      type="filepath",
                      label="Upload an audio file or record from the microphone"
                  )
                  with gr.Row():
                      model_size = gr.Dropdown(
                          choices=["large", "medium", "small"], value="large", label="Model size"
                      )
                      itn_checkbox = gr.Checkbox(value=True, label="Enable ITN")
                  recognize_btn = gr.Button("Recognize", variant="primary")
              with gr.Column():
                  gr.Markdown("### Results")
                  text_output = gr.Textbox(lines=6, label="Recognized text", interactive=False)
                  info_output = gr.Textbox(lines=2, label="Processing info", interactive=False)
          # Wire up the event
          recognize_btn.click(
              fn=transcribe_audio,
              inputs=[audio_input, model_size, itn_checkbox],
              outputs=[text_output, info_output]
          )
      return demo


  if __name__ == "__main__":
      demo = create_gradio_interface()
      # server_name must be 0.0.0.0 so the service is reachable from outside the container
      demo.launch(server_name="0.0.0.0", server_port=7860, share=False)
  ```

- Docker configuration files:
  ```dockerfile
  # Dockerfile
  # Based on the official Python image
  FROM python:3.10-slim
  # Set the working directory
  WORKDIR /app
  # Install system dependencies
  RUN apt-get update && apt-get install -y --no-install-recommends \
      gcc \
      g++ \
      make \
      curl \
      ffmpeg \
      && rm -rf /var/lib/apt/lists/* \
      && apt-get clean
  # Copy the dependency list
  COPY requirements.txt .
  # Install Python dependencies
  RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
  # Copy the application code
  COPY fun_gradio.py .
  # Create the required directories
  RUN mkdir -p /app/models /app/logs
  # Expose the Gradio port
  EXPOSE 7860
  # Environment variables
  ENV PYTHONUNBUFFERED=1
  ENV GRADIO_SERVER_NAME=0.0.0.0
  ENV GRADIO_SERVER_PORT=7860
  # Start the app
  CMD ["python", "fun_gradio.py"]
  ```

  ```yaml
  # docker-compose.yml
  services:
    funasr-gradio:
      build:
        context: .
        dockerfile: Dockerfile
      image: funasr-gradio:latest
      container_name: funasr-gradio-app
      environment:
        - PYTHONUNBUFFERED=1
        - GRADIO_SERVER_NAME=0.0.0.0
        - GRADIO_SERVER_PORT=7860
        - MODEL_PATH=/app/models
        - LOG_PATH=/app/logs
      volumes:
        - ./models:/app/models
        - ./logs:/app/logs
      ports:
        - "29999:7860"
      networks:
        - funasr-network
      healthcheck:
        test: ["CMD", "curl", "-f", "http://localhost:7860"]
        interval: 30s
        timeout: 10s
        retries: 3
        start_period: 40s
      restart: unless-stopped
      deploy:
        resources:
          limits:
            cpus: '4.0'
            memory: 8G
          reservations:
            devices:
              - driver: nvidia
                count: 1
                capabilities: [gpu]

  networks:
    funasr-network:
      driver: bridge

  volumes:
    model-data:
      driver: local
    log-data:
      driver: local
  ```

  ```text
  # requirements.txt
  funasr
  gradio
  torch
  torchaudio
  librosa
  soundfile
  numpy
  requests
  pillow
  ```

  ```text
  # .env
  # Application
  COMPOSE_PROJECT_NAME=funasr-gradio
  APP_PORT=7860
  # Model
  DEFAULT_MODEL=paraformer-zh
  # Performance
  CPU_LIMIT=4.0
  MEMORY_LIMIT=8G
  # GPU (if any)
  NVIDIA_VISIBLE_DEVICES=all
  ```

- Run:
  ```shell
  docker compose up --build -d
  ```
FastAPI Service
```python
# fun_fastapi.py
from fastapi import FastAPI, File, UploadFile, Form, HTTPException
from fastapi.responses import JSONResponse
import uvicorn
import os
import time
from datetime import datetime
import torch
from funasr import AutoModel
from typing import List, Optional
import tempfile
import shutil
import logging

# Logging setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("funasr-api")

app = FastAPI(
    title="FunASR Speech Recognition API",
    description="Speech recognition API built on Alibaba's FunASR models",
    version="1.0.0",
    docs_url="/api/docs",
    redoc_url="/api/redoc"
)


class FunASRService:
    def __init__(self):
        self.model = None
        self.config = {
            "model_size": "large",
            "device": "cuda",
            "language": "zh",
            "enable_itn": True
        }
        self.is_loaded = False

    def initialize_model(self, model_size: str = "large", device: str = "cuda"):
        """Initialize the speech recognition model."""
        try:
            # Device detection with CPU fallback
            if device == "cuda" and not torch.cuda.is_available():
                device = "cpu"
                logger.warning("CUDA unavailable, falling back to CPU")
            # Model selection
            model_mapping = {
                "large": "paraformer-zh",
                "medium": "paraformer-zh-streaming",
                "small": "funasr/sensevoice-small"
            }
            model_name = model_mapping.get(model_size, "paraformer-zh")
            # Release any existing model
            if self.model is not None:
                del self.model
                import gc
                gc.collect()
                if torch.cuda.is_available():
                    torch.cuda.empty_cache()
            logger.info(f"Loading model: {model_name}, device: {device}")
            # Model initialization
            self.model = AutoModel(
                model=model_name,
                device=device,
                disable_update=True,   # disable auto-updates
                vad_model="fsmn-vad",  # voice activity detection
            )
            # Update the config
            self.config.update({
                "model_size": model_size,
                "device": device
            })
            self.is_loaded = True
            logger.info("Model loaded")
            return True
        except Exception as e:
            logger.error(f"Model load failed: {str(e)}")
            self.is_loaded = False
            return False

    def transcribe_audio(self, audio_path: str, language: str = "zh",
                         enable_itn: bool = True, hotwords: List[str] = None):
        """Run speech recognition on one file."""
        if not self.is_loaded or self.model is None:
            raise Exception("The speech recognition model is not loaded")
        if not os.path.exists(audio_path):
            raise Exception(f"Audio file not found: {audio_path}")
        try:
            start_time = time.time()
            # Hotword handling
            hotwords_list = hotwords if hotwords else []
            # Run recognition
            result = self.model.generate(
                input=audio_path,
                language=language,
                hotwords=hotwords_list,
                itn=enable_itn
            )
            processing_time = time.time() - start_time
            # Parse the result
            if result and len(result) > 0:
                return {
                    "success": True,
                    "text": result[0].get("text", ""),
                    "itn_text": result[0].get("itn_text", "") if enable_itn else "",
                    "language": language,
                    "processing_time": round(processing_time, 3),
                    "hotwords_used": len(hotwords_list)
                }
            else:
                return {
                    "success": False,
                    "error": "No speech recognized"
                }
        except Exception as e:
            logger.error(f"Recognition failed: {str(e)}")
            return {
                "success": False,
                "error": f"Processing failed: {str(e)}"
            }


# Global service instance
funasr_service = FunASRService()
# Temporary upload directory
temp_dir = tempfile.mkdtemp()


@app.on_event("startup")
async def startup_event():
    """Initialize the model at startup."""
    logger.info("Starting the FunASR speech recognition service...")
    success = funasr_service.initialize_model()
    if success:
        logger.info("Service started")
    else:
        logger.error("Service failed to start")


@app.on_event("shutdown")
async def shutdown_event():
    """Clean up on shutdown."""
    if os.path.exists(temp_dir):
        shutil.rmtree(temp_dir)
    logger.info("Service stopped")


@app.get("/api/health")
async def health_check():
    """Health-check endpoint."""
    return {
        "status": "healthy" if funasr_service.is_loaded else "unhealthy",
        "service": "FunASR speech recognition API",
        "timestamp": datetime.now().isoformat(),
        "model_loaded": funasr_service.is_loaded,
        "current_config": funasr_service.config
    }


@app.post("/api/transcribe")
async def transcribe_audio(
    audio_file: UploadFile = File(..., description="Audio file (WAV, MP3, M4A, FLAC)"),
    language: str = Form("zh", description="Language: zh, en, ja"),
    enable_itn: bool = Form(True, description="Apply inverse text normalization"),
    hotwords: Optional[str] = Form("", description="Comma-separated hotwords")
):
    """Single-file recognition endpoint."""
    # Validate the file type
    allowed_extensions = {'.wav', '.mp3', '.m4a', '.flac', '.aac'}
    file_ext = os.path.splitext(audio_file.filename)[1].lower()
    if file_ext not in allowed_extensions:
        raise HTTPException(400, f"Unsupported file format: {file_ext}")
    # Save the upload
    temp_file_path = os.path.join(temp_dir, f"upload_{int(time.time())}_{audio_file.filename}")
    try:
        with open(temp_file_path, "wb") as buffer:
            content = await audio_file.read()
            buffer.write(content)
        # Parse hotwords
        hotwords_list = []
        if hotwords and hotwords.strip():
            hotwords_list = [word.strip() for word in hotwords.split(',') if word.strip()]
        # Run recognition
        result = funasr_service.transcribe_audio(
            audio_path=temp_file_path,
            language=language,
            enable_itn=enable_itn,
            hotwords=hotwords_list
        )
        return JSONResponse(content=result)
    except Exception as e:
        logger.error(f"API error: {str(e)}")
        raise HTTPException(500, f"Server error: {str(e)}")
    finally:
        # Remove the temporary file
        if os.path.exists(temp_file_path):
            os.unlink(temp_file_path)


@app.post("/api/transcribe/batch")
async def batch_transcribe(
    audio_files: List[UploadFile] = File(..., description="Several audio files"),
    language: str = Form("zh", description="Language"),
    enable_itn: bool = Form(True, description="Apply inverse text normalization"),
    hotwords: Optional[str] = Form("", description="Comma-separated hotwords")
):
    """Batch recognition endpoint."""
    if not audio_files:
        raise HTTPException(400, "Upload at least one audio file")
    results = []
    temp_paths = []
    try:
        hotwords_list = []
        if hotwords and hotwords.strip():
            hotwords_list = [word.strip() for word in hotwords.split(',') if word.strip()]
        for audio_file in audio_files:
            # Validate each file
            file_ext = os.path.splitext(audio_file.filename)[1].lower()
            allowed_extensions = {'.wav', '.mp3', '.m4a', '.flac', '.aac'}
            if file_ext not in allowed_extensions:
                results.append({
                    "filename": audio_file.filename,
                    "success": False,
                    "error": f"Unsupported format: {file_ext}"
                })
                continue
            # Save the file
            temp_path = os.path.join(temp_dir, f"batch_{int(time.time())}_{audio_file.filename}")
            temp_paths.append(temp_path)
            with open(temp_path, "wb") as f:
                f.write(await audio_file.read())
            # Run recognition
            result = funasr_service.transcribe_audio(
                audio_path=temp_path,
                language=language,
                enable_itn=enable_itn,
                hotwords=hotwords_list
            )
            result["filename"] = audio_file.filename
            results.append(result)
        return {
            "batch_id": f"batch_{int(time.time())}",
            "total_files": len(audio_files),
            "successful": len([r for r in results if r.get("success", False)]),
            "failed": len([r for r in results if not r.get("success", True)]),
            "results": results
        }
    finally:
        # Remove the temporary files
        for path in temp_paths:
            if os.path.exists(path):
                os.unlink(path)


@app.post("/api/model/reload")
async def reload_model(
    model_size: str = Form("large", description="Model size: large, medium, small"),
    device: str = Form("cuda", description="Device: cuda, cpu, auto")
):
    """Model reload endpoint."""
    valid_sizes = ["large", "medium", "small"]
    valid_devices = ["cuda", "cpu", "auto"]
    if model_size not in valid_sizes:
        raise HTTPException(400, "Invalid model size")
    if device not in valid_devices:
        raise HTTPException(400, "Invalid device type")
    success = funasr_service.initialize_model(model_size, device)
    if success:
        return {
            "success": True,
            "message": "Model reloaded",
            "model_size": model_size,
            "device": funasr_service.config["device"],
            "timestamp": datetime.now().isoformat()
        }
    else:
        raise HTTPException(500, "Model reload failed")


@app.get("/api/system/info")
async def system_info():
    """System information endpoint."""
    cuda_available = torch.cuda.is_available()
    cuda_info = "available" if cuda_available else "unavailable"
    if cuda_available:
        cuda_info += f" (GPU: {torch.cuda.get_device_name(0)})"
    return {
        "system_time": datetime.now().isoformat(),
        "cuda_available": cuda_available,
        "cuda_info": cuda_info,
        "pytorch_version": torch.__version__,
        "model_status": "loaded" if funasr_service.is_loaded else "not loaded",
        "current_config": funasr_service.config
    }


if __name__ == "__main__":
    # Production settings
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=8000,
        log_level="info",
        access_log=True
    )
```
```text
# requirements.txt
fastapi
uvicorn[standard]
python-multipart
funasr
torch
torchaudio
librosa
soundfile
numpy
pydantic
```
```dockerfile
# Dockerfile
FROM python:3.10-slim
# Set the working directory
WORKDIR /app
# Install system dependencies (curl is required by the health checks below)
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    g++ \
    make \
    curl \
    ffmpeg \
    libsndfile1 \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean
# Copy the dependency list
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# Copy the application code
COPY fun_fastapi.py .
# Create a non-root user
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser
# Expose the API port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/api/health || exit 1
# Start command
CMD ["uvicorn", "fun_fastapi:app", "--host", "0.0.0.0", "--port", "8000", "--access-log"]
```
```yaml
# docker-compose.yml
services:
  funasr-api:
    build: .
    container_name: funasr-api-service
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONPATH=/app
    volumes:
      - ./models:/app/models
      - ./logs:/app/logs
    networks:
      - funasr-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 8G
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

networks:
  funasr-network:
    driver: bridge

volumes:
  model-data:
    driver: local
  log-data:
    driver: local
```
Example request:

```shell
curl -X POST "http://localhost:8000/api/transcribe" \
  -F "audio_file=@test.wav" \
  -F "language=zh" \
  -F "enable_itn=true"
```
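The same endpoint can also be called from Python. A small hedged sketch: `transcribe_file` and `summarize` are illustrative helpers, not part of the service, `requests` is an extra dependency, and the response fields match those returned by fun_fastapi.py:

```python
def summarize(result):
    """Render the /api/transcribe JSON response (fields defined in fun_fastapi.py)."""
    if not result.get("success"):
        return f"failed: {result.get('error', 'unknown error')}"
    return f"{result['text']} ({result['processing_time']}s)"

def transcribe_file(path, base="http://localhost:8000"):
    """Upload one audio file to the API (requires the `requests` package)."""
    import requests  # third-party: pip install requests
    with open(path, "rb") as f:
        resp = requests.post(
            f"{base}/api/transcribe",
            files={"audio_file": f},
            data={"language": "zh", "enable_itn": "true"},
            timeout=120,  # offline transcription of long files can be slow
        )
    resp.raise_for_status()
    return summarize(resp.json())
```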
Summary

Compared with Whisper, FunASR performs better on Chinese in both speed and accuracy (notably on homophones and near-homophones), and CUDA acceleration makes it faster still; its models are also compact, and features such as hotword boosting are built in. English performance and robustness to environmental noise have not yet been tested here.