diff --git a/.flake8 b/.flake8 index 6b50de7ed..ae15ad2be 100644 --- a/.flake8 +++ b/.flake8 @@ -33,7 +33,7 @@ filename = # Specify a list of codes to ignore. ignore = W503 - E252,E262,E127,E265,E126,E266,E241,E261,E128,E125 + E252,E262,E127,E265,E126,E266,E241,E261,E128,E125,E129 W291,W293,W605 E203,E305,E402,E501,E721,E741,F403,F405,F821,F841,F999,W503,W504,C408,E302,W291,E303, # shebang has extra meaning in fbcode lints, so I think it's not worth trying diff --git a/README.md b/README.md index 59c61f776..49e40624d 100644 --- a/README.md +++ b/README.md @@ -19,11 +19,9 @@

Quick Start - | Quick Start Server - | Quick Start Streaming Server | Documents | Models List - | AIStudio Courses + | AIStudio Courses | NAACL2022 Best Demo Award Paper | Gitee

@@ -159,6 +157,9 @@ Via the easy-to-use, efficient, flexible and scalable implementation, our vision
- 🧩 *Cascaded models application*: as an extension of the typical traditional audio tasks, we combine the workflows of the aforementioned tasks with other fields like Natural language processing (NLP) and Computer Vision (CV).

### Recent Update
+- 👑 2022.10.11: Add [Wav2vec2ASR](./examples/librispeech/asr3), wav2vec2.0 fine-tuning for ASR on LibriSpeech.
+- 🔥 2022.09.26: Add Voice Cloning, TTS finetune, and ERNIE-SAT to the [PaddleSpeech Web Demo](./demos/speech_web).
+- ⚡ 2022.09.09: Add an AISHELL-3 Voice Cloning [example](./examples/aishell3/vc2) with the ECAPA-TDNN speaker encoder.
- ⚡ 2022.08.25: Release TTS [finetune](./examples/other/tts_finetune/tts3) example.
- 🔥 2022.08.22: Add ERNIE-SAT models: [ERNIE-SAT-vctk](./examples/vctk/ernie_sat)、[ERNIE-SAT-aishell3](./examples/aishell3/ernie_sat)、[ERNIE-SAT-zh_en](./examples/aishell3_vctk/ernie_sat).
- 🔥 2022.08.15: Add [g2pW](https://github.com/GitYCC/g2pW) into TTS Chinese Text Frontend.
@@ -178,17 +179,17 @@ Via the easy-to-use, efficient, flexible and scalable implementation, our vision
- Scan the QR code below with your Wechat to access the official technical exchange group and get the bonus (more than 20GB of learning materials, such as papers, code, and videos) and the live link of the lessons. We look forward to your participation.
- +
## Installation
-We strongly recommend our users to install PaddleSpeech in **Linux** with *python>=3.7* and *paddlepaddle>=2.3.1*.
+We strongly recommend installing PaddleSpeech on **Linux** with *python>=3.7* and *paddlepaddle>=2.4rc*.

### **Dependency Introduction**
+ gcc >= 4.8.5
-+ paddlepaddle >= 2.3.1
++ paddlepaddle >= 2.4rc
+ python >= 3.7
+ OS support: Linux (recommended), Windows, Mac OSX

@@ -197,6 +198,13 @@ PaddleSpeech depends on paddlepaddle. For installation, please refer to the offi
```bash
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
+You can also pin a specific version of paddlepaddle or install the develop version.
+```bash
+# install the 2.3.1 release. Note: 2.3.1 is just an example; pick any version that satisfies the minimum paddlepaddle dependency
+pip install paddlepaddle==2.3.1 -i https://mirror.baidu.com/pypi/simple
+# install the develop version
+pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
+```
There are two quick installation methods for PaddleSpeech: pip installation and source code compilation (recommended).

### pip install
@@ -705,7 +713,7 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
      <td>Speaker Verification</td>
-      <td>VoxCeleb12</td>
+      <td>VoxCeleb1/2</td>
      <td>ECAPA-TDNN</td>
      <td>ecapa-tdnn-voxceleb12</td>
@@ -714,6 +722,31 @@ PaddleSpeech supports a series of most popular models. They are summarized in [r
+
+
+**Speaker Diarization**
+
+<table style="width:100%">
+  <thead>
+    <tr>
+      <th> Task </th>
+      <th> Dataset </th>
+      <th> Model Type </th>
+      <th> Example </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Speaker Diarization</td>
+      <td>AMI</td>
+      <td>ECAPA-TDNN + AHC / SC</td>
+      <td>ecapa-tdnn-ami</td>
+    </tr>
+  </tbody>
+</table>
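In the table above, "ECAPA-TDNN + AHC / SC" means diarization clusters per-segment ECAPA-TDNN speaker embeddings with agglomerative hierarchical clustering (AHC) or spectral clustering (SC). The sketch below is only a rough illustration of the AHC step, not the code in the AMI example: `ahc_labels`, the threshold value, and the toy embeddings are all invented here, and it assumes scikit-learn >= 1.2 (older releases spell the `metric` argument `affinity`).

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def ahc_labels(embeddings, distance_threshold=0.6):
    """Cluster per-segment speaker embeddings; returns one label per segment."""
    # L2-normalize so cosine distances are well-behaved
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    clusterer = AgglomerativeClustering(
        n_clusters=None,                        # infer the speaker count ...
        distance_threshold=distance_threshold,  # ... from a distance cutoff
        metric="cosine",
        linkage="average")
    return clusterer.fit_predict(embeddings)


# toy usage: 4 segments from 2 well-separated "speakers"
segments = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(ahc_labels(segments))  # e.g. [0 0 1 1]
```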
+ **Punctuation Restoration** @@ -767,6 +800,7 @@ Normally, [Speech SoTA](https://paperswithcode.com/area/speech), [Audio SoTA](ht - [Text-to-Speech](#TextToSpeech) - [Audio Classification](#AudioClassification) - [Speaker Verification](#SpeakerVerification) + - [Speaker Diarization](#SpeakerDiarization) - [Punctuation Restoration](#PunctuationRestoration) - [Community](#Community) - [Welcome to contribute](#contribution) diff --git a/README_cn.md b/README_cn.md index 070a656a2..bf3ff4dfd 100644 --- a/README_cn.md +++ b/README_cn.md @@ -19,13 +19,11 @@

- 安装 + 安装 | 快速开始 - | 快速使用服务 - | 快速使用流式服务 | 教程文档 | 模型列表 - | AIStudio 课程 + | AIStudio 课程 | NAACL2022 论文 | Gitee

@@ -164,23 +162,11 @@ - 🧩 级联模型应用: 作为传统语音任务的扩展,我们结合了自然语言处理、计算机视觉等任务,实现更接近实际需求的产业级应用。 -### 近期活动 - - ❗️重磅❗️飞桨智慧金融行业系列直播课 -✅ 覆盖智能风控、智能运维、智能营销、智能客服四大金融主流场景 - -📆 9月6日-9月29日每周二、四19:00 -+ 智慧金融行业深入洞察 -+ 8节理论+实践精品直播课 -+ 10+真实产业场景范例教学及实践 -+ 更有免费算力+结业证书等礼品等你来拿 -扫码报名码住直播链接,与行业精英深度交流 - -
- -
-
+
### 近期更新
+- 👑 2022.10.11: 新增 [Wav2vec2ASR](./examples/librispeech/asr3),在 LibriSpeech 上对 wav2vec2.0 进行 ASR 任务的 fine-tuning。
+- 🔥 2022.09.26: 新增 Voice Cloning, TTS finetune 和 ERNIE-SAT 到 [PaddleSpeech 网页应用](./demos/speech_web)。
+- ⚡ 2022.09.09: 新增基于 ECAPA-TDNN 声纹模型的 AISHELL-3 Voice Cloning [示例](./examples/aishell3/vc2)。
- ⚡ 2022.08.25: 发布 TTS [finetune](./examples/other/tts_finetune/tts3) 示例。
- 🔥 2022.08.22: 新增 ERNIE-SAT 模型: [ERNIE-SAT-vctk](./examples/vctk/ernie_sat)、[ERNIE-SAT-aishell3](./examples/aishell3/ernie_sat)、[ERNIE-SAT-zh_en](./examples/aishell3_vctk/ernie_sat)。
- 🔥 2022.08.15: 将 [g2pW](https://github.com/GitYCC/g2pW) 引入 TTS 中文文本前端。
@@ -199,13 +185,13 @@
### 🔥 加入技术交流群获取入群福利
- - 3 日直播课链接: 深度解读 PP-TTS、PP-ASR、PP-VPR 三项核心语音系统关键技术
+ - 3 日直播课链接: 深度解读【一句话语音合成】【小样本语音合成】【定制化语音识别】语音交互技术
 - 20G 学习大礼包:视频课程、前沿论文与学习资料

微信扫描二维码关注公众号,点击“马上报名”填写问卷加入官方交流群,获得更高效的问题答疑,与各行各业开发者充分交流,期待您的加入。
- +
@@ -215,7 +201,7 @@
### 相关依赖
+ gcc >= 4.8.5
-+ paddlepaddle >= 2.3.1
++ paddlepaddle >= 2.4rc
+ python >= 3.7
+ linux(推荐), mac, windows

@@ -224,7 +210,13 @@ PaddleSpeech 依赖于 paddlepaddle,安装可以参考[ paddlepaddle 官网](h
```shell
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
-
+你也可以安装指定版本的 paddlepaddle,或者安装 develop 版本。
+```bash
+# 安装 2.3.1 版本。注意:2.3.1 只是一个示例,请按照对 paddlepaddle 的最小依赖进行选择。
+pip install paddlepaddle==2.3.1 -i https://mirror.baidu.com/pypi/simple
+# 安装 develop 版本
+pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
+```
PaddleSpeech 快速安装方式有两种,一种是 pip 安装,一种是源码编译(推荐)。

### pip 安装
@@ -717,8 +709,8 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
-      <td>Speaker Verification</td>
-      <td>VoxCeleb12</td>
+      <td>声纹识别</td>
+      <td>VoxCeleb1/2</td>
      <td>ECAPA-TDNN</td>
      <td>ecapa-tdnn-voxceleb12</td>
@@ -727,6 +719,31 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
+
+
+**说话人日志**
+
+<table style="width:100%">
+  <thead>
+    <tr>
+      <th> 任务 </th>
+      <th> 数据集 </th>
+      <th> 模型类型 </th>
+      <th> 脚本 </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>说话人日志</td>
+      <td>AMI</td>
+      <td>ECAPA-TDNN + AHC / SC</td>
+      <td>ecapa-tdnn-ami</td>
+    </tr>
+  </tbody>
+</table>
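上表中的 "ECAPA-TDNN + AHC / SC" 指先用 ECAPA-TDNN 提取每个语音分段的说话人向量,再用层次聚类(AHC)或谱聚类(SC)把分段聚成说话人。下面给出谱聚类这一步的示意草图,仅供理解,并非 AMI 示例中的实际代码:`sc_labels`、说话人数和示例向量均为虚构,且假设已安装 scikit-learn。

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def sc_labels(embeddings, n_speakers=2):
    """用谱聚类把分段说话人向量聚成 n_speakers 个说话人。"""
    # 由余弦相似度构造亲和度矩阵
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    affinity = np.clip(normed @ normed.T, 0.0, 1.0)  # 亲和度须非负
    clusterer = SpectralClustering(
        n_clusters=n_speakers, affinity="precomputed", random_state=0)
    return clusterer.fit_predict(affinity)


# 示例:4 个分段、2 个可分的“说话人”
segments = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(sc_labels(segments, n_speakers=2))  # 例如 [0 0 1 1]
```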
+ **标点恢复** @@ -786,6 +803,7 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声 - [语音合成](#语音合成模型) - [声音分类](#声音分类模型) - [声纹识别](#声纹识别模型) + - [说话人日志](#说话人日志模型) - [标点恢复](#标点恢复模型) - [技术交流群](#技术交流群) - [欢迎贡献](#欢迎贡献) diff --git a/demos/speech_server/README.md b/demos/speech_server/README.md index e400f7e74..7e7d4b2c5 100644 --- a/demos/speech_server/README.md +++ b/demos/speech_server/README.md @@ -13,7 +13,7 @@ For service interface definition, please check: ### 1. Installation see [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md). -It is recommended to use **paddlepaddle 2.3.1** or above. +It is recommended to use **paddlepaddle 2.4rc** or above. You can choose one way from easy, meduim and hard to install paddlespeech. diff --git a/demos/speech_server/README_cn.md b/demos/speech_server/README_cn.md index 628468c83..594928281 100644 --- a/demos/speech_server/README_cn.md +++ b/demos/speech_server/README_cn.md @@ -14,7 +14,7 @@ ### 1. 安装 请看 [安装文档](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md). -推荐使用 **paddlepaddle 2.3.1** 或以上版本。 +推荐使用 **paddlepaddle 2.4rc** 或以上版本。 你可以从简单,中等,困难 几种方式中选择一种方式安装 PaddleSpeech。 diff --git a/demos/speech_web/README.md b/demos/speech_web/README.md index e8c59ea8b..572781ab6 100644 --- a/demos/speech_web/README.md +++ b/demos/speech_web/README.md @@ -21,14 +21,14 @@ Paddle Speech Demo 是一个以 PaddleSpeech 的语音交互功能为主体开 + 小数据微调:基于小数据集的微调方案,内置用12句话标贝中文女声微调示例,你也可以通过一键重置,录制自己的声音,注意在安静环境下录制,效果会更好。你可以在 [【Finetune your own AM based on FastSpeech2 with AISHELL-3】](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/tts_finetune/tts3)中尝试使用自己的数据集进行微调。 -+ ENIRE-SAT:语言-语音跨模态大模型 ENIRE-SAT 可视化展示示例,支持个性化合成,跨语言语音合成(音频为中文则输入英文文本进行合成),语音编辑(修改音频文字中间的结果)功能。 ENIRE-SAT 更多实现细节,可以参考: ++ ERNIE-SAT:语言-语音跨模态大模型 ERNIE-SAT 可视化展示示例,支持个性化合成,跨语言语音合成(音频为中文则输入英文文本进行合成),语音编辑(修改音频文字中间的结果)功能。 ERNIE-SAT 更多实现细节,可以参考: + [【ERNIE-SAT with AISHELL-3 dataset】](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/ernie_sat) + [【ERNIE-SAT with with AISHELL3 and VCTK datasets】](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3_vctk/ernie_sat) + [【ERNIE-SAT with VCTK dataset】](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/vctk/ernie_sat) 运行效果: - ![效果](https://user-images.githubusercontent.com/30135920/191188766-12e7ca15-f7b4-45f8-9da5-0c0b0bbe5fcb.png) + ![效果](https://user-images.githubusercontent.com/30135920/196076507-7eb33d39-2345-4268-aee7-6270b9ac8b98.png) @@ -36,6 +36,7 @@ Paddle Speech Demo 是一个以 PaddleSpeech 的语音交互功能为主体开 ### 后端环境安装 ```bash +# 需要先安装 PaddleSpeech cd speech_server pip install -r requirements.txt -i https://mirror.baidu.com/pypi/simple cd ../ @@ -44,6 +45,8 @@ cd ../ ### 前端环境安装 前端依赖 `node.js` ,需要提前安装,确保 `npm` 可用,`npm` 测试版本 `8.3.1`,建议下载[官网](https://nodejs.org/en/)稳定版的 `node.js` +如果因为网络问题,无法下载依赖库,可以参考 FAQ 部分,`npm / yarn 下载速度慢问题` + ```bash # 进入前端目录 cd web_client @@ -70,7 +73,7 @@ mkdir -p source/model cd source/model # 下载IE模型 wget https://bj.bcebos.com/paddlenlp/applications/speech-cmd-analysis/finetune/model_state.pdparams -cd ../../ +cd ../../../ ``` #### 启动后端服务 @@ -84,6 +87,10 @@ python main.py --port 8010 ### 启动 `vc.py` 后端服务 +参照下面的步骤自行配置项目所需环境。 + +Aistudio 在线体验小样本合成后端功能:[【PaddleSpeech进阶】PaddleSpeech小样本合成方案体验](https://aistudio.baidu.com/aistudio/projectdetail/4573549?sUid=2470186&shared=1&ts=1664174385948) + #### 下载相关模型和音频 ```bash @@ -172,8 +179,19 @@ cd web_client yarn dev --port 8011 ``` -默认配置下,前端中配置的后台地址信息是 
localhost,确保后端服务器和打开页面的游览器在同一台机器上,不在一台机器的配置方式见下方的 FAQ:【后端如果部署在其它机器或者别的端口如何修改】 +默认配置下,前端配置的后台地址信息是 `localhost`,确保后端服务器和打开页面的游览器在同一台机器上,不在一台机器的配置方式见下方的 FAQ:【后端如果部署在其它机器或者别的端口如何修改】 + +#### 关于前端的一些说明 + +为了方便后期的维护,这里并没有给出打包好的 HTML 文件,而是 Vue3 的项目,使用 `yarn dev --port 8011` 的方式启动测试,方便大家debug,相当于是启动了一个前端服务器。 + +比如我们在本机启动的这个前端服务(运行 `yarn dev --port 8011` ),我们就可以通过在游览器中通过 `http://localhost:8011` 访问前端页面 + +如果我们在其它服务器上(例如:`*.*.*.*` )启动这个前端服务(运行 `yarn dev --port 8011` ),我们就可以通过在游览器中访问 `http://*.*.*.*:8011` 访问前端页面 +那前端跟后端是什么关系呢? 两个是独立的,只要前端能够通过代理访问到后端的接口,那就没有问题。你可以在 A 机器上部署后端服务,然后在 B 机器上部署前端服务。我们在 `./web_client/vite.config.js` 中将 `/api` 映射到的是 `http://localhost:8010`,你可以把它配置成任意你想要访问后端地址。 + +当前端在以 `*.*.*.*` 这类以 IP 地址形式的网页中访问时,由于游览器的安全限制,会禁止录音,需要重新配置游览器的安全策略, 可以看下面 FAQ 部分: [【前端以IP地址的形式访问,无法录音】] ## FAQ @@ -210,12 +228,24 @@ ASR_SOCKET_RECORD: 'ws://localhost:8010/ws/asr/onlineStream', // Stream ASR 接 TTS_SOCKET_RECORD: 'ws://localhost:8010/ws/tts/online', // Stream TTS 接口 ``` -#### Q:后端以IP地址的形式,前端无法录音 +#### Q:前端以IP地址的形式访问,无法录音 A:这里主要是游览器安全策略的限制,需要配置游览器后重启。游览器修改配置可参考[使用js-audio-recorder报浏览器不支持getUserMedia](https://blog.csdn.net/YRY_LIKE_YOU/article/details/113745273) chrome设置地址: chrome://flags/#unsafely-treat-insecure-origin-as-secure +#### Q: npm / yarn 配置淘宝镜像源 + +A: 配置淘宝镜像源,详细可以参考 [【yarn npm 设置淘宝镜像】](https://www.jianshu.com/p/f6f43e8f9d6b) + +```bash +# npm 配置淘宝镜像源 +npm config set registry https://registry.npmmirror.com + +# yarn 配置淘宝镜像源 +yarn config set registry http://registry.npm.taobao.org/ +``` + ## 参考资料 vue实现录音参考资料:https://blog.csdn.net/qq_41619796/article/details/107865602#t1 diff --git a/demos/speech_web/speech_server/src/ernie_sat.py b/demos/speech_web/speech_server/src/ernie_sat.py index b74dd8e3f..02e1ed9d9 100644 --- a/demos/speech_web/speech_server/src/ernie_sat.py +++ b/demos/speech_web/speech_server/src/ernie_sat.py @@ -1,5 +1,6 @@ import os +from .util import get_ngpu from .util import MAIN_ROOT from .util import run_cmd @@ -171,6 +172,7 @@ class SAT: output_name: str, source_lang: str, target_lang: str): + ngpu = get_ngpu() cmd = f""" FLAGS_allocator_strategy=naive_best_fit \ FLAGS_fraction_of_gpu_memory_to_use=0.01 \ @@ -189,7 +191,8 @@ class SAT: --voc_config={voc_config} \ --voc_ckpt={voc_ckpt} \ --voc_stat={voc_stat} \ - --output_name={output_name} + --output_name={output_name} \ + --ngpu={ngpu} """ return cmd diff --git a/demos/speech_web/speech_server/src/finetune.py b/demos/speech_web/speech_server/src/finetune.py index d7a440f9a..6ca99251b 100644 --- a/demos/speech_web/speech_server/src/finetune.py +++ b/demos/speech_web/speech_server/src/finetune.py @@ -1,5 +1,6 @@ import os +from .util import get_ngpu from .util import MAIN_ROOT from .util import run_cmd @@ -38,7 +39,7 @@ class FineTune: dump_dir = os.path.join(exp_dir, 'dump') output_dir = os.path.join(exp_dir, 'exp') lang = "zh" - ngpu = 1 + ngpu = get_ngpu() cmd = f""" # check oov @@ -91,7 +92,7 @@ class FineTune: output_dir = os.path.join(exp_dir, 'exp') text_path = os.path.join(exp_dir, 'sentences.txt') lang = "zh" - ngpu = 1 + ngpu = get_ngpu() model_path = f"{output_dir}/checkpoints" ckpt = find_max_ckpt(model_path) @@ -117,7 +118,8 @@ class FineTune: --output_dir={out_wav_dir} \ --phones_dict={dump_dir}/phone_id_map.txt \ --speaker_dict={dump_dir}/speaker_id_map.txt \ - --spk_id=0 + --spk_id=0 \ + --ngpu={ngpu} """ out_path = os.path.join(out_wav_dir, f"{wav_name}.wav") diff --git a/demos/speech_web/speech_server/src/ge2e_clone.py b/demos/speech_web/speech_server/src/ge2e_clone.py index d90013b98..83c2b3f35 100644 --- 
a/demos/speech_web/speech_server/src/ge2e_clone.py +++ b/demos/speech_web/speech_server/src/ge2e_clone.py @@ -1,6 +1,7 @@ import os import shutil +from .util import get_ngpu from .util import MAIN_ROOT from .util import run_cmd @@ -30,11 +31,12 @@ class VoiceCloneGE2E(): ref_audio_dir = os.path.realpath("tmp_dir/ge2e") if os.path.exists(ref_audio_dir): shutil.rmtree(ref_audio_dir) - else: - os.makedirs(ref_audio_dir, exist_ok=True) - shutil.copy(input_wav, ref_audio_dir) + + os.makedirs(ref_audio_dir, exist_ok=True) + shutil.copy(input_wav, ref_audio_dir) output_dir = os.path.dirname(out_wav) + ngpu = get_ngpu() cmd = f""" python3 {self.BIN_DIR}/voice_cloning.py \ @@ -50,7 +52,8 @@ class VoiceCloneGE2E(): --text="{text}" \ --input-dir={ref_audio_dir} \ --output-dir={output_dir} \ - --phones-dict={self.phones_dict} + --phones-dict={self.phones_dict} \ + --ngpu={ngpu} """ output_name = os.path.join(output_dir, full_file_name) diff --git a/demos/speech_web/speech_server/src/tdnn_clone.py b/demos/speech_web/speech_server/src/tdnn_clone.py index c24b9b077..53c5a3816 100644 --- a/demos/speech_web/speech_server/src/tdnn_clone.py +++ b/demos/speech_web/speech_server/src/tdnn_clone.py @@ -1,6 +1,7 @@ import os import shutil +from .util import get_ngpu from .util import MAIN_ROOT from .util import run_cmd @@ -27,11 +28,11 @@ class VoiceCloneTDNN(): ref_audio_dir = os.path.realpath("tmp_dir/tdnn") if os.path.exists(ref_audio_dir): shutil.rmtree(ref_audio_dir) - else: - os.makedirs(ref_audio_dir, exist_ok=True) - shutil.copy(input_wav, ref_audio_dir) + os.makedirs(ref_audio_dir, exist_ok=True) + shutil.copy(input_wav, ref_audio_dir) output_dir = os.path.dirname(out_wav) + ngpu = get_ngpu() cmd = f""" python3 {self.BIN_DIR}/voice_cloning.py \ @@ -47,7 +48,8 @@ class VoiceCloneTDNN(): --input-dir={ref_audio_dir} \ --output-dir={output_dir} \ --phones-dict={self.phones_dict} \ - --use_ecapa=True + --use_ecapa=True \ + --ngpu={ngpu} """ output_name = os.path.join(output_dir, full_file_name) diff --git a/demos/speech_web/speech_server/src/util.py b/demos/speech_web/speech_server/src/util.py index a69e6c42f..0188f0280 100644 --- a/demos/speech_web/speech_server/src/util.py +++ b/demos/speech_web/speech_server/src/util.py @@ -2,10 +2,19 @@ import os import random import subprocess +import paddle + NOW_FILE_PATH = os.path.dirname(__file__) MAIN_ROOT = os.path.realpath(os.path.join(NOW_FILE_PATH, "../../../../")) +def get_ngpu(): + if paddle.device.get_device() == "cpu": + return 0 + else: + return 1 + + def randName(n=5): return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba', n)) diff --git a/demos/speech_web/speech_server/vc.py b/demos/speech_web/speech_server/vc.py index 99e56b404..d035c02a4 100644 --- a/demos/speech_web/speech_server/vc.py +++ b/demos/speech_web/speech_server/vc.py @@ -281,15 +281,18 @@ async def VcCloneG2P(base: VcBaseText): if base.func == 'ge2e': wavName = base.wavName wavPath = os.path.join(VC_OUT_PATH, wavName) - vc_model.vc( + wavPath = vc_model.vc( text=base.text, input_wav=base.wavPath, out_wav=wavPath) else: wavName = base.wavName wavPath = os.path.join(VC_OUT_PATH, wavName) - vc_model_tdnn.vc( + wavPath = vc_model_tdnn.vc( text=base.text, input_wav=base.wavPath, out_wav=wavPath) - res = {"wavName": wavName, "wavPath": wavPath} - return SuccessRequest(result=res) + if wavPath: + res = {"wavName": wavName, "wavPath": wavPath} + return SuccessRequest(result=res) + else: + return ErrorRequest(message="克隆失败,检查克隆脚本是否有效") except Exception as e: print(e) return 
ErrorRequest(message="克隆失败,合成过程报错") diff --git a/demos/speech_web/web_client/src/components/Experience.vue b/demos/speech_web/web_client/src/components/Experience.vue index 4f32faf95..ca0e1440f 100644 --- a/demos/speech_web/web_client/src/components/Experience.vue +++ b/demos/speech_web/web_client/src/components/Experience.vue @@ -7,7 +7,7 @@ import VPRT from './SubMenu/VPR/VPRT.vue' import IET from './SubMenu/IE/IET.vue' import VoiceCloneT from './SubMenu/VoiceClone/VoiceClone.vue' -import ENIRE_SATT from './SubMenu/ENIRE_SAT/ENIRE_SAT.vue' +import ERNIE_SATT from './SubMenu/ERNIE_SAT/ERNIE_SAT.vue' import FineTuneT from './SubMenu/FineTune/FineTune.vue' @@ -47,8 +47,8 @@ import FineTuneT from './SubMenu/FineTune/FineTune.vue' - - + +
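For context on the `get_ngpu()` helper introduced in `util.py` above: it turns the active paddle device into the `--ngpu` value these command templates pass along (0 on CPU-only installs, 1 otherwise). A minimal, illustrative check of what it resolves to on a given machine, assuming only that paddlepaddle is importable:

```python
import paddle

# paddle.device.get_device() returns e.g. "cpu" or "gpu:0"
device = paddle.device.get_device()

# mirrors the logic of the get_ngpu() helper added in util.py
ngpu = 0 if device == "cpu" else 1
print(device, "-> ngpu =", ngpu)
```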
diff --git a/demos/speech_web/web_client/src/components/SubMenu/ASR/RealTime/RealTime.vue b/demos/speech_web/web_client/src/components/SubMenu/ASR/RealTime/RealTime.vue index 761a5c11f..5494bb8f8 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/ASR/RealTime/RealTime.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/ASR/RealTime/RealTime.vue @@ -58,9 +58,6 @@ export default { mounted () { this.wsUrl = apiURL.ASR_SOCKET_RECORD this.ws = new WebSocket(this.wsUrl) - if(this.ws.readyState === this.ws.CONNECTING){ - this.$message.success("实时识别 Websocket 连接成功") - } var _that = this this.ws.addEventListener('message', function (event) { var temp = JSON.parse(event.data); @@ -78,7 +75,7 @@ export default { // 检查 websocket 状态 // debugger if(this.ws.readyState != this.ws.OPEN){ - this.$message.error("websocket 链接失败,请检查链接地址是否正确") + this.$message.error("websocket 链接失败,请检查 Websocket 后端服务是否正确开启") return } diff --git a/demos/speech_web/web_client/src/components/SubMenu/ChatBot/Chat.vue b/demos/speech_web/web_client/src/components/SubMenu/ChatBot/Chat.vue deleted file mode 100644 index 9d356fc80..000000000 --- a/demos/speech_web/web_client/src/components/SubMenu/ChatBot/Chat.vue +++ /dev/null @@ -1,298 +0,0 @@ - - - - - \ No newline at end of file diff --git a/demos/speech_web/web_client/src/components/SubMenu/ChatBot/ChatT.vue b/demos/speech_web/web_client/src/components/SubMenu/ChatBot/ChatT.vue index c37c083ff..6db847706 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/ChatBot/ChatT.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/ChatBot/ChatT.vue @@ -91,6 +91,10 @@ export default { methods: { // 开始录音 startRecorder(){ + if(this.ws.readyState != this.ws.OPEN){ + this.$message.error("websocket 链接失败,请检查 Websocket 后端服务是否正确开启") + return + } this.allResultList = [] if(!this.onReco){ this.asrResult = this.speakingText diff --git a/demos/speech_web/web_client/src/components/SubMenu/ENIRE_SAT/ENIRE_SAT.vue b/demos/speech_web/web_client/src/components/SubMenu/ERNIE_SAT/ERNIE_SAT.vue similarity index 99% rename from demos/speech_web/web_client/src/components/SubMenu/ENIRE_SAT/ENIRE_SAT.vue rename to demos/speech_web/web_client/src/components/SubMenu/ERNIE_SAT/ERNIE_SAT.vue index e1a4f2343..4a0aa2c63 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/ENIRE_SAT/ENIRE_SAT.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/ERNIE_SAT/ERNIE_SAT.vue @@ -98,7 +98,7 @@ 播放 - 播放 + 播放 下载 下载 diff --git a/demos/speech_web/web_client/src/components/SubMenu/FineTune/FineTune.vue b/demos/speech_web/web_client/src/components/SubMenu/FineTune/FineTune.vue index 895dd586d..abf203ae8 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/FineTune/FineTune.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/FineTune/FineTune.vue @@ -80,7 +80,7 @@ - 播放 + 播放 播放 下载 下载 @@ -126,7 +126,7 @@ expPath: '', wav: '', wav_base64: '', - ttsText: '', + ttsText: '欢迎使用飞桨语音套件', cloneWav: '', onEnrollRec: 0, // 录音状态 diff --git a/demos/speech_web/web_client/src/components/SubMenu/IE/IE.vue b/demos/speech_web/web_client/src/components/SubMenu/IE/IE.vue deleted file mode 100644 index c7dd04e9d..000000000 --- a/demos/speech_web/web_client/src/components/SubMenu/IE/IE.vue +++ /dev/null @@ -1,125 +0,0 @@ - - - - - \ No newline at end of file diff --git a/demos/speech_web/web_client/src/components/SubMenu/TTS/TTST.vue b/demos/speech_web/web_client/src/components/SubMenu/TTS/TTST.vue index 353221f7b..ef5591783 100644 --- 
a/demos/speech_web/web_client/src/components/SubMenu/TTS/TTST.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/TTS/TTST.vue @@ -228,6 +228,10 @@ export default { }, // 基于WS的流式合成 async getTtsChunkWavWS(){ + if(this.ws.readyState != this.ws.OPEN){ + this.$message.error("websocket 链接失败,请检查 Websocket 后端服务是否正确开启") + return + } // 初始化 chunks chunks = [] chunk_index = 0 diff --git a/demos/speech_web/web_client/src/components/SubMenu/VPR/VPR.vue b/demos/speech_web/web_client/src/components/SubMenu/VPR/VPR.vue deleted file mode 100644 index 1fe71e4d8..000000000 --- a/demos/speech_web/web_client/src/components/SubMenu/VPR/VPR.vue +++ /dev/null @@ -1,178 +0,0 @@ - - - - - \ No newline at end of file diff --git a/demos/speech_web/web_client/src/components/SubMenu/VPR/VPRT.vue b/demos/speech_web/web_client/src/components/SubMenu/VPR/VPRT.vue index e398da00c..47eb41df5 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/VPR/VPRT.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/VPR/VPRT.vue @@ -214,14 +214,17 @@ export default { let formData = new FormData() formData.append('spk_id', this.enrollSpkId) formData.append('audio', this.wav) - + const result = await vprEnroll(formData) + if (!result){ + this.$message.error("请检查后端服务是否正确开启") + return + } if(result.data.status){ this.$message.success("声纹注册成功") } else { this.$message.error(result.data.msg) } - // console.log(result) this.GetList() this.wav = '' this.randomSpkId() diff --git a/demos/speech_web/web_client/src/components/SubMenu/VoiceClone/VoiceClone.vue b/demos/speech_web/web_client/src/components/SubMenu/VoiceClone/VoiceClone.vue index 1e380d288..afa572417 100644 --- a/demos/speech_web/web_client/src/components/SubMenu/VoiceClone/VoiceClone.vue +++ b/demos/speech_web/web_client/src/components/SubMenu/VoiceClone/VoiceClone.vue @@ -71,7 +71,7 @@ - 播放 + 播放 播放 下载 下载 @@ -270,6 +270,7 @@ export default { } else if (this.nowIndex >= this.vcDatas.length){ return this.$message.error("当前序号不可以超过音频个数") } + this.cloneWav = "" let func = '' if(this.func_radio === '1'){ func = 'ge2e' @@ -289,12 +290,12 @@ export default { } ); this.g2pOnSys = 0 - if(!result.data.code){ + if(result.data.code == 0){ this.cloneWav = result.data.result console.log("clone wav: ", this.cloneWav) - this.$message.success("音色克隆成功") + this.$message.success("音频合成成功") } else { - this.$message.error(result.data.msg) + this.$message.error("音频合成失败,请检查后台错误后重试!") } }, // 播放表格 diff --git a/demos/streaming_asr_server/README.md b/demos/streaming_asr_server/README.md index a97486757..5eef82866 100644 --- a/demos/streaming_asr_server/README.md +++ b/demos/streaming_asr_server/README.md @@ -14,7 +14,7 @@ Streaming ASR server only support `websocket` protocol, and doesn't support `htt ### 1. Installation see [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md). -It is recommended to use **paddlepaddle 2.3.1** or above. +It is recommended to use **paddlepaddle 2.4rc** or above. You can choose one way from easy, meduim and hard to install paddlespeech. diff --git a/demos/streaming_asr_server/README_cn.md b/demos/streaming_asr_server/README_cn.md index 267367729..1902a2fa9 100644 --- a/demos/streaming_asr_server/README_cn.md +++ b/demos/streaming_asr_server/README_cn.md @@ -14,7 +14,7 @@ ### 1. 
安装 安装 PaddleSpeech 的详细过程请看 [安装文档](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md)。 -推荐使用 **paddlepaddle 2.3.1** 或以上版本。 +推荐使用 **paddlepaddle 2.4rc** 或以上版本。 你可以从简单,中等,困难 几种方式中选择一种方式安装 PaddleSpeech。 diff --git a/demos/streaming_asr_server/conf/application.yaml b/demos/streaming_asr_server/conf/application.yaml index a89d312ab..d446e13b6 100644 --- a/demos/streaming_asr_server/conf/application.yaml +++ b/demos/streaming_asr_server/conf/application.yaml @@ -21,7 +21,7 @@ engine_list: ['asr_online'] ################################### ASR ######################################### ################### speech task: asr; engine_type: online ####################### asr_online: - model_type: 'conformer_online_wenetspeech' + model_type: 'conformer_u2pp_online_wenetspeech' am_model: # the pdmodel file of am static model [optional] am_params: # the pdiparams file of am static model [optional] lang: 'zh' diff --git a/demos/streaming_tts_server/README.md b/demos/streaming_tts_server/README.md index 15448a46f..ca5d6f1f8 100644 --- a/demos/streaming_tts_server/README.md +++ b/demos/streaming_tts_server/README.md @@ -13,7 +13,7 @@ For service interface definition, please check: ### 1. Installation see [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md). -It is recommended to use **paddlepaddle 2.3.1** or above. +It is recommended to use **paddlepaddle 2.4rc** or above. You can choose one way from easy, meduim and hard to install paddlespeech. diff --git a/demos/streaming_tts_server/README_cn.md b/demos/streaming_tts_server/README_cn.md index b99155bca..125f37033 100644 --- a/demos/streaming_tts_server/README_cn.md +++ b/demos/streaming_tts_server/README_cn.md @@ -12,7 +12,7 @@ ### 1. 安装 请看 [安装文档](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md). -推荐使用 **paddlepaddle 2.3.1** 或以上版本。 +推荐使用 **paddlepaddle 2.4rc** 或以上版本。 你可以从简单,中等,困难 几种方式中选择一种方式安装 PaddleSpeech。 diff --git a/demos/streaming_tts_serving_fastdeploy/README.md b/demos/streaming_tts_serving_fastdeploy/README.md new file mode 100644 index 000000000..3e983a06d --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/README.md @@ -0,0 +1,67 @@ +([简体中文](./README_cn.md)|English) + +# Streaming Speech Synthesis Service + +## Introduction +This demo is an implementation of starting the streaming speech synthesis service and accessing the service. + +`Server` must be started in the docker, while `Client` does not have to be in the docker. + +**The streaming_tts_serving under the path of this article ($PWD) contains the configuration and code of the model, which needs to be mapped to the docker for use.** + +## Usage +### 1. 
Server
#### 1.1 Docker

+```bash
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker exec -it -u root fastdeploy bash
+```
+
+#### 1.2 Installation (inside the docker)
+```bash
+apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
+pip3 install paddlespeech
+export LC_ALL="zh_CN.UTF-8"
+export LANG="zh_CN.UTF-8"
+export LANGUAGE="zh_CN:zh:en_US:en"
+```
+
+#### 1.3 Download models (inside the docker)
+```bash
+cd /models/streaming_tts_serving/1
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
+unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+unzip mb_melgan_csmsc_onnx_0.2.0.zip
+```
+**For the convenience of users, we recommend using the `-v` option of `docker run` in 1.1 to map $PWD (streaming_tts_serving, together with the model configuration and code it contains) to the docker path `/models`. You can also use other methods, but regardless of which method you use, the final model directory and structure inside the docker are shown in the following figure.**
+
+<p align="center"><img src="tree.png" alt="final model directory and structure inside the docker" /></p>
+ +#### 1.4 Start the server(inside the docker) + +```bash +fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving +``` +Arguments: + - `model-repository`(required): Path of model storage. + - `model-control-mode`(required): The mode of loading the model. At present, you can use 'explicit'. + - `load-model`(required): Name of the model to be loaded. + - `http-port`(optional): Port for http service. Default: `8000`. This is not used in our example. + - `grpc-port`(optional): Port for grpc service. Default: `8001`. + - `metrics-port`(optional): Port for metrics service. Default: `8002`. This is not used in our example. + +### 2. Client +#### 2.1 Installation +```bash +pip3 install tritonclient[all] +``` + +#### 2.2 Send request +```bash +python3 /models/streaming_tts_serving/stream_client.py +``` diff --git a/demos/streaming_tts_serving_fastdeploy/README_cn.md b/demos/streaming_tts_serving_fastdeploy/README_cn.md new file mode 100644 index 000000000..7edd32830 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/README_cn.md @@ -0,0 +1,67 @@ +(简体中文|[English](./README.md)) + +# 流式语音合成服务 + +## 介绍 + +本文介绍了使用FastDeploy搭建流式语音合成服务的方法。 + +`服务端`必须在docker内启动,而`客户端`不是必须在docker容器内. + +**本文所在路径`($PWD)下的streaming_tts_serving里包含模型的配置和代码`(服务端会加载模型和代码以启动服务),需要将其映射到docker中使用。** + +## 使用 +### 1. 服务端 +#### 1.1 Docker +```bash +docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09 +docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09 +docker exec -it -u root fastdeploy bash +``` + +#### 1.2 安装(在docker内) +```bash +apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip +pip3 install paddlespeech +export LC_ALL="zh_CN.UTF-8" +export LANG="zh_CN.UTF-8" +export LANGUAGE="zh_CN:zh:en_US:en" +``` + +#### 1.3 下载模型(在docker内) +```bash +cd /models/streaming_tts_serving/1 +wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip +wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip +unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip +unzip mb_melgan_csmsc_onnx_0.2.0.zip +``` +**为了方便用户使用,我们推荐用户使用1.1中的`docker -v`命令将`$PWD(streaming_tts_serving及里面包含的模型的配置和代码)映射到了docker内的/models路径`,用户也可以使用其他办法,但无论使用哪种方法,最终在docker内的模型目录及结构如下图所示。** + +
+<p align="center"><img src="tree.png" alt="docker 内最终的模型目录及结构" /></p>
+ +#### 1.4 启动服务端(在docker内) +```bash +fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving +``` + +参数: + - `model-repository`(required): 整套模型streaming_tts_serving存放的路径. + - `model-control-mode`(required): 模型加载的方式,现阶段, 使用'explicit'即可. + - `load-model`(required): 需要加载的模型的名称. + - `http-port`(optional): HTTP服务的端口号. 默认: `8000`. 本示例中未使用该端口. + - `grpc-port`(optional): GRPC服务的端口号. 默认: `8001`. + - `metrics-port`(optional): 服务端指标的端口号. 默认: `8002`. 本示例中未使用该端口. + +### 2. 客户端 +#### 2.1 安装 +```bash +pip3 install tritonclient[all] +``` + +#### 2.2 发送请求 +```bash +python3 /models/streaming_tts_serving/stream_client.py +``` diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py new file mode 100644 index 000000000..46473fdb2 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py @@ -0,0 +1,289 @@ +import codecs +import json +import math +import sys +import threading +import time + +import numpy as np +import onnxruntime as ort +import triton_python_backend_utils as pb_utils + +from paddlespeech.server.utils.util import denorm +from paddlespeech.server.utils.util import get_chunks +from paddlespeech.t2s.frontend.zh_frontend import Frontend + +voc_block = 36 +voc_pad = 14 +am_block = 72 +am_pad = 12 +voc_upsample = 300 + +# 模型路径 +dir_name = "/models/streaming_tts_serving/1/" +phones_dict = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/phone_id_map.txt" +am_stat_path = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/speech_stats.npy" + +onnx_am_encoder = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_encoder_infer.onnx" +onnx_am_decoder = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_decoder.onnx" +onnx_am_postnet = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_postnet.onnx" +onnx_voc_melgan = dir_name + "mb_melgan_csmsc_onnx_0.2.0/mb_melgan_csmsc.onnx" + +frontend = Frontend(phone_vocab_path=phones_dict, tone_vocab_path=None) +am_mu, am_std = np.load(am_stat_path) + +# 用CPU推理 +providers = ['CPUExecutionProvider'] + +# 配置ort session +sess_options = ort.SessionOptions() + +# 创建session +am_encoder_infer_sess = ort.InferenceSession( + onnx_am_encoder, providers=providers, sess_options=sess_options) +am_decoder_sess = ort.InferenceSession( + onnx_am_decoder, providers=providers, sess_options=sess_options) +am_postnet_sess = ort.InferenceSession( + onnx_am_postnet, providers=providers, sess_options=sess_options) +voc_melgan_sess = ort.InferenceSession( + onnx_voc_melgan, providers=providers, sess_options=sess_options) + + +def depadding(data, chunk_num, chunk_id, block, pad, upsample): + """ + Streaming inference removes the result of pad inference + """ + front_pad = min(chunk_id * block, pad) + # first chunk + if chunk_id == 0: + data = data[:block * upsample] + # last chunk + elif chunk_id == chunk_num - 1: + data = data[front_pad * upsample:] + # middle chunk + else: + data = data[front_pad * upsample:(front_pad + block) * upsample] + + return data + + +class TritonPythonModel: + """Your Python model must use the same class name. Every Python model + that is created must have "TritonPythonModel" as the class name. + """ + + def initialize(self, args): + """`initialize` is called only once when the model is being loaded. + Implementing `initialize` function is optional. 
This function allows
+        the model to initialize any state associated with this model.
+        Parameters
+        ----------
+        args : dict
+          Both keys and values are strings. The dictionary keys and values are:
+          * model_config: A JSON string containing the model configuration
+          * model_instance_kind: A string containing model instance kind
+          * model_instance_device_id: A string containing model instance device ID
+          * model_repository: Model repository path
+          * model_version: Model version
+          * model_name: Model name
+        """
+        sys.stdout = codecs.getwriter("utf-8")(sys.stdout.detach())
+        print(sys.getdefaultencoding())
+        # You must parse model_config. JSON string is not parsed here
+        self.model_config = model_config = json.loads(args['model_config'])
+        print("model_config:", self.model_config)
+
+        using_decoupled = pb_utils.using_decoupled_model_transaction_policy(
+            model_config)
+
+        if not using_decoupled:
+            raise pb_utils.TritonModelException(
+                """the model `{}` can generate any number of responses per request,
+                enable decoupled transaction policy in model configuration to
+                serve this model""".format(args['model_name']))
+
+        self.input_names = []
+        for input_config in self.model_config["input"]:
+            self.input_names.append(input_config["name"])
+        print("input:", self.input_names)
+
+        self.output_names = []
+        self.output_dtype = []
+        for output_config in self.model_config["output"]:
+            self.output_names.append(output_config["name"])
+            dtype = pb_utils.triton_string_to_numpy(output_config["data_type"])
+            self.output_dtype.append(dtype)
+        print("output:", self.output_names)
+
+        # To keep track of response threads so that we can delay
+        # finalizing the model until all response threads
+        # have completed.
+        self.inflight_thread_count = 0
+        self.inflight_thread_count_lck = threading.Lock()
+
+    def execute(self, requests):
+        """`execute` must be implemented in every Python model. `execute`
+        function receives a list of pb_utils.InferenceRequest as the only
+        argument. This function is called when an inference is requested
+        for this model. Depending on the batching configuration (e.g. Dynamic
+        Batching) used, `requests` may contain multiple requests. Every
+        Python model must create one pb_utils.InferenceResponse for every
+        pb_utils.InferenceRequest in `requests`. If there is an error, you can
+        set the error argument when creating a pb_utils.InferenceResponse.
+        Parameters
+        ----------
+        requests : list
+          A list of pb_utils.InferenceRequest
+        Returns
+        -------
+        list
+          A list of pb_utils.InferenceResponse. The length of this list must
+          be the same as `requests`
+        """
+
+        # This model does not support batching, so 'request_count' should always
+        # be 1.
+        if len(requests) != 1:
+            raise pb_utils.TritonModelException(
+                "unsupported batch size " + str(len(requests)))
+
+        input_data = []
+        for idx in range(len(self.input_names)):
+            data = pb_utils.get_input_tensor_by_name(requests[0],
+                                                     self.input_names[idx])
+            data = data.as_numpy()
+            data = data[0].decode('utf-8')
+            input_data.append(data)
+        text = input_data[0]
+
+        # Start a separate thread to send the responses for the request. Sending
+        # back the responses is delegated to this thread.
+        thread = threading.Thread(
+            target=self.response_thread,
+            args=(requests[0].get_response_sender(), text))
+        thread.daemon = True
+        with self.inflight_thread_count_lck:
+            self.inflight_thread_count += 1
+
+        thread.start()
+        # Unlike in non-decoupled model transaction policy, execute function
+        # here returns no response. A return from this function only notifies
+        # Triton that the model instance is ready to receive another request. As
+        # we are not waiting for the response thread to complete here, it is
+        # possible that at any given time the model may be processing multiple
+        # requests. Depending upon the request workload, this may lead to a lot
+        # of requests being processed by a single model instance at a time. In
+        # real-world models, the developer should be mindful of when to return
+        # from execute and be ready to accept the next request.
+        return None
+
+    def response_thread(self, response_sender, text):
+        input_ids = frontend.get_input_ids(
+            text, merge_sentences=False, get_tone_ids=False)
+        phone_ids = input_ids["phone_ids"]
+        for i in range(len(phone_ids)):
+            part_phone_ids = phone_ids[i].numpy()
+            voc_chunk_id = 0
+
+            orig_hs = am_encoder_infer_sess.run(
+                None, input_feed={'text': part_phone_ids})
+            orig_hs = orig_hs[0]
+
+            # streaming voc chunk info
+            mel_len = orig_hs.shape[1]
+            voc_chunk_num = math.ceil(mel_len / voc_block)
+            start = 0
+            end = min(voc_block + voc_pad, mel_len)
+
+            # streaming am
+            hss = get_chunks(orig_hs, am_block, am_pad, "am")
+            am_chunk_num = len(hss)
+            for i, hs in enumerate(hss):
+                am_decoder_output = am_decoder_sess.run(
+                    None, input_feed={'xs': hs})
+                am_postnet_output = am_postnet_sess.run(
+                    None,
+                    input_feed={
+                        'xs': np.transpose(am_decoder_output[0], (0, 2, 1))
+                    })
+                am_output_data = am_decoder_output + np.transpose(
+                    am_postnet_output[0], (0, 2, 1))
+                normalized_mel = am_output_data[0][0]
+
+                sub_mel = denorm(normalized_mel, am_mu, am_std)
+                sub_mel = depadding(sub_mel, am_chunk_num, i, am_block, am_pad,
+                                    1)
+
+                if i == 0:
+                    mel_streaming = sub_mel
+                else:
+                    mel_streaming = np.concatenate(
+                        (mel_streaming, sub_mel), axis=0)
+
+                # streaming voc
+                # start streaming vocoder inference once the streaming AM has produced more mel frames than the vocoder chunk size
+                while (mel_streaming.shape[0] >= end and
+                       voc_chunk_id < voc_chunk_num):
+                    voc_chunk = mel_streaming[start:end, :]
+
+                    sub_wav = voc_melgan_sess.run(
+                        output_names=None, input_feed={'logmel': voc_chunk})
+                    sub_wav = depadding(sub_wav[0], voc_chunk_num, voc_chunk_id,
+                                        voc_block, voc_pad, voc_upsample)
+
+                    output_np = np.array(sub_wav, dtype=self.output_dtype[0])
+                    out_tensor1 = pb_utils.Tensor(self.output_names[0],
+                                                  output_np)
+
+                    status = 0 if voc_chunk_id != (voc_chunk_num - 1) else 1
+                    output_status = np.array(
+                        [status], dtype=self.output_dtype[1])
+                    out_tensor2 = pb_utils.Tensor(self.output_names[1],
+                                                  output_status)
+
+                    inference_response = pb_utils.InferenceResponse(
+                        output_tensors=[out_tensor1, out_tensor2])
+
+                    # yield sub_wav
+                    response_sender.send(inference_response)
+
+                    voc_chunk_id += 1
+                    start = max(0, voc_chunk_id * voc_block - voc_pad)
+                    end = min((voc_chunk_id + 1) * voc_block + voc_pad, mel_len)
+
+        # We must close the response sender to indicate to Triton that we are
+        # done sending responses for the corresponding request. We can't use the
+        # response sender after closing it. The response sender is closed by
+        # setting the TRITONSERVER_RESPONSE_COMPLETE_FINAL.
+        response_sender.send(
+            flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
+
+        with self.inflight_thread_count_lck:
+            self.inflight_thread_count -= 1
+
+    def finalize(self):
+        """`finalize` is called only once when the model is being unloaded.
+        Implementing `finalize` function is OPTIONAL. This function allows
+        the model to perform any necessary clean ups before exit.
+        Here we will wait for all response threads to complete sending
+        responses.
+ """ + print('Finalize invoked') + + inflight_threads = True + cycles = 0 + logging_time_sec = 5 + sleep_time_sec = 0.1 + cycle_to_log = (logging_time_sec / sleep_time_sec) + while inflight_threads: + with self.inflight_thread_count_lck: + inflight_threads = (self.inflight_thread_count != 0) + if (cycles % cycle_to_log == 0): + print( + f"Waiting for {self.inflight_thread_count} response threads to complete..." + ) + if inflight_threads: + time.sleep(sleep_time_sec) + cycles += 1 + + print('Finalize complete...') diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt new file mode 100644 index 000000000..e63721d1c --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt @@ -0,0 +1,33 @@ +name: "streaming_tts_serving" +backend: "python" +max_batch_size: 0 +model_transaction_policy { + decoupled: True +} +input [ + { + name: "INPUT_0" + data_type: TYPE_STRING + dims: [ 1 ] + } +] + +output [ + { + name: "OUTPUT_0" + data_type: TYPE_FP32 + dims: [ -1, 1 ] + }, + { + name: "status" + data_type: TYPE_BOOL + dims: [ 1 ] + } +] + +instance_group [ + { + count: 1 + kind: KIND_CPU + } +] diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py new file mode 100644 index 000000000..e7f120b7d --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py @@ -0,0 +1,117 @@ +#!/usr/bin/env python +import argparse +import queue +import sys +from functools import partial + +import numpy as np +import tritonclient.grpc as grpcclient +from tritonclient.utils import * + +FLAGS = None + + +class UserData: + def __init__(self): + self._completed_requests = queue.Queue() + + +# Define the callback function. Note the last two parameters should be +# result and error. InferenceServerClient would povide the results of an +# inference as grpcclient.InferResult in result. For successful +# inference, error will be None, otherwise it will be an object of +# tritonclientutils.InferenceServerException holding the error details +def callback(user_data, result, error): + if error: + user_data._completed_requests.put(error) + else: + user_data._completed_requests.put(result) + + +def async_stream_send(triton_client, values, request_id, model_name): + + infer_inputs = [] + outputs = [] + for idx, data in enumerate(values): + data = np.array([data.encode('utf-8')], dtype=np.object_) + infer_input = grpcclient.InferInput('INPUT_0', [len(data)], "BYTES") + infer_input.set_data_from_numpy(data) + infer_inputs.append(infer_input) + + outputs.append(grpcclient.InferRequestedOutput('OUTPUT_0')) + # Issue the asynchronous sequence inference. + triton_client.async_stream_infer( + model_name=model_name, + inputs=infer_inputs, + outputs=outputs, + request_id=request_id) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument( + '-v', + '--verbose', + action="store_true", + required=False, + default=False, + help='Enable verbose output') + parser.add_argument( + '-u', + '--url', + type=str, + required=False, + default='localhost:8001', + help='Inference server URL and it gRPC port. Default is localhost:8001.') + + FLAGS = parser.parse_args() + + # We use custom "sequence" models which take 1 input + # value. The output is the accumulated value of the inputs. See + # src/custom/sequence. 
+ model_name = "streaming_tts_serving" + + values = ["哈哈哈哈"] + + request_id = "0" + + string_result0_list = [] + + user_data = UserData() + + # It is advisable to use client object within with..as clause + # when sending streaming requests. This ensures the client + # is closed when the block inside with exits. + with grpcclient.InferenceServerClient( + url=FLAGS.url, verbose=FLAGS.verbose) as triton_client: + try: + # Establish stream + triton_client.start_stream(callback=partial(callback, user_data)) + # Now send the inference sequences... + async_stream_send(triton_client, values, request_id, model_name) + except InferenceServerException as error: + print(error) + sys.exit(1) + + # Retrieve results... + recv_count = 0 + result_dict = {} + status = True + while True: + data_item = user_data._completed_requests.get() + if type(data_item) == InferenceServerException: + raise data_item + else: + this_id = data_item.get_response().id + if this_id not in result_dict.keys(): + result_dict[this_id] = [] + result_dict[this_id].append((recv_count, data_item)) + sub_wav = data_item.as_numpy('OUTPUT_0') + status = data_item.as_numpy('status') + print('sub_wav = ', sub_wav, "subwav.shape = ", sub_wav.shape) + print('status = ', status) + if status[0] == 1: + break + recv_count += 1 + + print("PASS: stream_client") diff --git a/demos/streaming_tts_serving_fastdeploy/tree.png b/demos/streaming_tts_serving_fastdeploy/tree.png new file mode 100644 index 000000000..b8d61686a Binary files /dev/null and b/demos/streaming_tts_serving_fastdeploy/tree.png differ diff --git a/docker/ubuntu16-gpu/Dockerfile b/docker/ubuntu16-gpu/Dockerfile index f275471ee..a8c11e37b 100644 --- a/docker/ubuntu16-gpu/Dockerfile +++ b/docker/ubuntu16-gpu/Dockerfile @@ -62,7 +62,7 @@ RUN mkdir -p ~/.pip && echo "[global]" > ~/.pip/pip.conf && \ echo "index-url=https://mirror.baidu.com/pypi/simple" >> ~/.pip/pip.conf && \ echo "trusted-host=mirror.baidu.com" >> ~/.pip/pip.conf && \ python3 -m pip install --upgrade pip && \ - pip install paddlepaddle-gpu==2.3.1.post112 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html && \ + pip install paddlepaddle-gpu==2.4.0rc0.post112 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html && \ rm -rf ~/.cache/pip RUN git clone https://github.com/PaddlePaddle/PaddleSpeech.git && cd PaddleSpeech && \ diff --git a/docs/source/install.md b/docs/source/install.md index 6a9ff3bc8..1e6c1c48b 100644 --- a/docs/source/install.md +++ b/docs/source/install.md @@ -61,6 +61,13 @@ Then you can use the following commands: pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple pip install paddlespeech -i https://pypi.tuna.tsinghua.edu.cn/simple ``` +You can also specify the version of paddlepaddle or install the develop version. +```bash +# install 2.3.1 version. Note, 2.3.1 is just an example, please follow the minimum dependency of paddlepaddle for your selection +pip install paddlepaddle==2.3.1 -i https://mirror.baidu.com/pypi/simple +# install develop version +pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html +``` > If you encounter problem with downloading **nltk_data** while using paddlespeech, it maybe due to your poor network, we suggest you download the [nltk_data](https://paddlespeech.bj.bcebos.com/Parakeet/tools/nltk_data.tar.gz) provided by us, and extract it to your `${HOME}`. > If you fail to install paddlespeech-ctcdecoders, you only can not use deepspeech2 model inference. For other models, it doesn't matter. 
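Whichever build you choose (a pinned release or develop), it is worth confirming that the interpreter actually sees the intended paddlepaddle before installing paddlespeech. A minimal sanity check, assuming only that paddle imports cleanly:

```python
import paddle

print(paddle.__version__)  # e.g. 2.3.1, or 0.0.0 for develop builds
paddle.utils.run_check()   # runs a small program to verify the installation
```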
@@ -117,9 +124,14 @@ conda install -y -c gcc_linux-64=8.4.0 gxx_linux-64=8.4.0
```
(Hint: Do not use the last script if you want to install by the **Hard** way):
### Install PaddlePaddle
-You can choose the `PaddlePaddle` version based on your system. For example, for CUDA 10.2, CuDNN7.5 install paddlepaddle-gpu 2.3.1:
+You can choose the `PaddlePaddle` version based on your system. For example, for CUDA 10.2 and CuDNN 7.6, install paddlepaddle-gpu 2.4rc:
+```bash
+# Note: 2.4rc is just an example; pick any version that satisfies the minimum paddlepaddle dependency
+python3 -m pip install paddlepaddle-gpu==2.4.0rc0 -i https://mirror.baidu.com/pypi/simple
+```
+You can also install the develop version of paddlepaddle. For example, for CUDA 10.2 and CuDNN 7.6, install paddlepaddle-gpu develop:
```bash
-python3 -m pip install paddlepaddle-gpu==2.3.1 -i https://mirror.baidu.com/pypi/simple
+python3 -m pip install paddlepaddle-gpu==0.0.0.post102 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
```
### Install PaddleSpeech
You can install `paddlespeech` by the following command, then you can use the `ready-made` examples in `paddlespeech`:
@@ -180,9 +192,14 @@ Some users may fail to install `kaldiio` due to the default download source, you
```bash
pip install pytest-runner -i https://pypi.tuna.tsinghua.edu.cn/simple
```
-Make sure you have GPU and the paddlepaddle version is right. For example, for CUDA 10.2, CuDNN7.5 install paddle 2.3.1:
+Make sure you have a GPU and the right paddlepaddle version. For example, for CUDA 10.2 and CuDNN 7.6, install paddle 2.4rc:
+```bash
+# Note: 2.4rc is just an example; pick any version that satisfies the minimum paddlepaddle dependency
+python3 -m pip install paddlepaddle-gpu==2.4.0rc0 -i https://mirror.baidu.com/pypi/simple
+```
+You can also install the develop version of paddlepaddle. For example, for CUDA 10.2 and CuDNN 7.6, install paddlepaddle-gpu develop:
```bash
-python3 -m pip install paddlepaddle-gpu==2.3.1 -i https://mirror.baidu.com/pypi/simple
+python3 -m pip install paddlepaddle-gpu==0.0.0.post102 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
```
### Install PaddleSpeech in Developing Mode
```bash
diff --git a/docs/source/install_cn.md b/docs/source/install_cn.md
index 9f49ebad6..ebc0cf7a2 100644
--- a/docs/source/install_cn.md
+++ b/docs/source/install_cn.md
@@ -56,7 +56,7 @@ pip install pytest-runner -i https://pypi.tuna.tsinghua.edu.cn/simple
然后你可以使用如下命令:
```bash
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
-pip install paddlespeech -i https://pypi.tuna.tsinghua.edu.cn/simple
+pip install paddlespeech -i https://pypi.tuna.tsinghua.edu.cn/simple
```
+你也可以安装指定版本的 paddlepaddle,或者安装 develop 版本。
+```bash
+# 安装 2.3.1 版本。注意:2.3.1 只是一个示例,请按照对 paddlepaddle 的最小依赖进行选择。
+pip install paddlepaddle==2.3.1 -i https://mirror.baidu.com/pypi/simple
+# 安装 develop 版本
+pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
+```
> 如果您在使用 paddlespeech 的过程中遇到关于下载 **nltk_data** 的问题,可能是您的网络不佳,我们建议您下载我们提供的 [nltk_data](https://paddlespeech.bj.bcebos.com/Parakeet/tools/nltk_data.tar.gz) 并解压缩到您的 `${HOME}` 目录下。

@@ -111,9 +118,14 @@ conda install -y -c gcc_linux-64=8.4.0 gxx_linux-64=8.4.0
```
(提示: 如果你想使用**困难**方式完成安装,请不要使用最后一条命令)
### 安装 PaddlePaddle
-你可以根据系统配置选择 PaddlePaddle 版本,例如系统使用 CUDA 10.2, CuDNN7.5 ,你可以安装 paddlepaddle-gpu 2.3.1:
+你可以根据系统配置选择 PaddlePaddle 版本,例如系统使用 CUDA 10.2, CuDNN7.6,你可以安装 paddlepaddle-gpu 2.4rc:
+```bash
+# 注意:2.4rc 只是一个示例,请按照对 paddlepaddle 的最小依赖进行选择。
+python3 -m pip install paddlepaddle-gpu==2.4.0rc0 -i https://mirror.baidu.com/pypi/simple
+```
+你也可以安装 develop 版本的 PaddlePaddle。例如系统使用 CUDA 10.2, CuDNN7.6,你可以安装 paddlepaddle-gpu develop:
```bash
-python3 -m pip install paddlepaddle-gpu==2.3.1 -i https://mirror.baidu.com/pypi/simple
+python3 -m pip install paddlepaddle-gpu==0.0.0.post102 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
```
### 安装 PaddleSpeech
最后安装 `paddlespeech`,这样你就可以使用 `paddlespeech` 中已有的 examples:
@@ -168,13 +180,18 @@ conda activate tools/venv
conda install -y -c conda-forge sox libsndfile swig bzip2 libflac bc
### 安装 PaddlePaddle
-请确认你系统是否有 GPU,并且使用了正确版本的 paddlepaddle。例如系统使用 CUDA 10.2, CuDNN7.5 ,你可以安装 paddlepaddle-gpu 2.3.1:
+请确认你的系统是否有 GPU,并且使用了正确版本的 paddlepaddle。例如系统使用 CUDA 10.2, CuDNN7.6,你可以安装 paddlepaddle-gpu 2.4rc:
+```bash
+# 注意:2.4rc 只是一个示例,请按照对 paddlepaddle 的最小依赖进行选择。
+python3 -m pip install paddlepaddle-gpu==2.4.0rc0 -i https://mirror.baidu.com/pypi/simple
+```
+你也可以安装 develop 版本的 PaddlePaddle。例如系统使用 CUDA 10.2, CuDNN7.6,你可以安装 paddlepaddle-gpu develop:
```bash
-python3 -m pip install paddlepaddle-gpu==2.3.1 -i https://mirror.baidu.com/pypi/simple
+python3 -m pip install paddlepaddle-gpu==0.0.0.post102 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html
```
### 用开发者模式安装 PaddleSpeech
部分用户系统由于默认源的问题,安装中会出现 kaldiio 安装出错的问题,建议首先安装 pytest-runner:
```bash
pip install pytest-runner -i https://pypi.tuna.tsinghua.edu.cn/simple
```
然后安装 PaddleSpeech:
diff --git a/docs/source/reference.md b/docs/source/reference.md
index 0d36d96f7..9a47a2302 100644
--- a/docs/source/reference.md
+++ b/docs/source/reference.md
@@ -28,6 +28,8 @@ We borrowed a lot of code from these repos to build `model` and `engine`, thanks
* [speechbrain](https://github.com/speechbrain/speechbrain/blob/develop/LICENSE)
- Apache-2.0 License
- ECAPA-TDNN SV model
+- ASR with CTC and pre-trained wav2vec2 models.
+ * [chainer](https://github.com/chainer/chainer/blob/master/LICENSE) - MIT License @@ -43,3 +45,7 @@ We borrowed a lot of code from these repos to build `model` and `engine`, thanks * [g2pW](https://github.com/GitYCC/g2pW/blob/master/LICENCE) - Apache-2.0 license + +*[transformers](https://github.com/huggingface/transformers) +- Apache-2.0 License +- Wav2vec2.0 diff --git a/docs/source/released_model.md b/docs/source/released_model.md index d6691812e..4e76da033 100644 --- a/docs/source/released_model.md +++ b/docs/source/released_model.md @@ -9,6 +9,7 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER | [Ds2 Online Aishell ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_online_aishell_fbank161_ckpt_0.2.1.model.tar.gz) | Aishell Dataset | Char-based | 491 MB | 2 Conv + 5 LSTM layers | 0.0666 |-| 151 h | [D2 Online Aishell ASR0](../../examples/aishell/asr0) | onnx/inference/python | [Ds2 Offline Aishell ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz)| Aishell Dataset | Char-based | 1.4 GB | 2 Conv + 5 bidirectional LSTM layers| 0.0554 |-| 151 h | [Ds2 Offline Aishell ASR0](../../examples/aishell/asr0) | inference/python | [Conformer Online Wenetspeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_wenetspeech_ckpt_1.0.0a.model.tar.gz) | WenetSpeech Dataset | Char-based | 457 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.11 (test\_net) 0.1879 (test\_meeting) |-| 10000 h |- | python | +[Conformer U2PP Online Wenetspeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_u2pp_wenetspeech_ckpt_1.1.4.model.tar.gz) | WenetSpeech Dataset | Char-based | 476 MB | Encoder:Conformer, Decoder:BiTransformer, Decoding method: Attention rescoring| 0.047198 (aishell test\_-1) 0.059212 (aishell test\_16) |-| 10000 h |- | python | [Conformer Online Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_chunk_conformer_aishell_ckpt_0.2.0.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.0544 |-| 151 h | [Conformer Online Aishell ASR1](../../examples/aishell/asr1) | python | [Conformer Offline Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_conformer_aishell_ckpt_1.0.1.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0460 |-| 151 h | [Conformer Offline Aishell ASR1](../../examples/aishell/asr1) | python | [Transformer Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_transformer_aishell_ckpt_0.1.1.model.tar.gz) | Aishell Dataset | Char-based | 128 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0523 || 151 h | [Transformer Aishell ASR1](../../examples/aishell/asr1) | python | @@ -17,6 +18,12 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER | [Transformer Librispeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr1/asr1_transformer_librispeech_ckpt_0.1.1.model.tar.gz) | Librispeech Dataset | subword-based | 131 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention rescoring |-| 0.0381 | 960 h | [Transformer Librispeech ASR1](../../examples/librispeech/asr1) | python | [Transformer Librispeech ASR2 
diff --git a/docs/source/released_model.md b/docs/source/released_model.md
index d6691812e..4e76da033 100644
--- a/docs/source/released_model.md
+++ b/docs/source/released_model.md
@@ -9,6 +9,7 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER |
[Ds2 Online Aishell ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_online_aishell_fbank161_ckpt_0.2.1.model.tar.gz) | Aishell Dataset | Char-based | 491 MB | 2 Conv + 5 LSTM layers | 0.0666 |-| 151 h | [Ds2 Online Aishell ASR0](../../examples/aishell/asr0) | onnx/inference/python |
[Ds2 Offline Aishell ASR0 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz)| Aishell Dataset | Char-based | 1.4 GB | 2 Conv + 5 bidirectional LSTM layers| 0.0554 |-| 151 h | [Ds2 Offline Aishell ASR0](../../examples/aishell/asr0) | inference/python |
[Conformer Online Wenetspeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_wenetspeech_ckpt_1.0.0a.model.tar.gz) | WenetSpeech Dataset | Char-based | 457 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.11 (test\_net) 0.1879 (test\_meeting) |-| 10000 h |- | python |
+[Conformer U2PP Online Wenetspeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/wenetspeech/asr1/asr1_chunk_conformer_u2pp_wenetspeech_ckpt_1.1.4.model.tar.gz) | WenetSpeech Dataset | Char-based | 476 MB | Encoder:Conformer, Decoder:BiTransformer, Decoding method: Attention rescoring| 0.047198 (aishell test\_-1) 0.059212 (aishell test\_16) |-| 10000 h |- | python |
[Conformer Online Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_chunk_conformer_aishell_ckpt_0.2.0.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring| 0.0544 |-| 151 h | [Conformer Online Aishell ASR1](../../examples/aishell/asr1) | python |
[Conformer Offline Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_conformer_aishell_ckpt_1.0.1.model.tar.gz) | Aishell Dataset | Char-based | 189 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0460 |-| 151 h | [Conformer Offline Aishell ASR1](../../examples/aishell/asr1) | python |
[Transformer Aishell ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/asr1_transformer_aishell_ckpt_0.1.1.model.tar.gz) | Aishell Dataset | Char-based | 128 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention rescoring | 0.0523 || 151 h | [Transformer Aishell ASR1](../../examples/aishell/asr1) | python |
@@ -17,6 +18,12 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER |
[Transformer Librispeech ASR1 Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr1/asr1_transformer_librispeech_ckpt_0.1.1.model.tar.gz) | Librispeech Dataset | subword-based | 131 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention rescoring |-| 0.0381 | 960 h | [Transformer Librispeech ASR1](../../examples/librispeech/asr1) | python |
[Transformer Librispeech ASR2 Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr2/asr2_transformer_librispeech_ckpt_0.1.1.model.tar.gz) | Librispeech Dataset | subword-based | 131 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: JoinCTC w/ LM |-| 0.0240 | 960 h | [Transformer Librispeech ASR2](../../examples/librispeech/asr2) | python |
+### Self-Supervised Pre-trained Model
+Model | Pre-Train Method | Pre-Train Data | Finetune Data | Size | Descriptions | CER | WER | Example Link |
+:-------------:| :------------:| :-----: | -----: | :-----: |:-----:| :-----: | :-----: | :-----: |
+[Wav2vec2-large-960h-lv60-self Model](https://paddlespeech.bj.bcebos.com/wav2vec/wav2vec2-large-960h-lv60-self.pdparams) | wav2vec2 | Librispeech and LV-60k Dataset (53,000 h) | - | 1.18 GB | Pre-trained Wav2vec2.0 Model | - | - | - |
+[Wav2vec2ASR-large-960h-librispeech Model](https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr3/wav2vec2ASR-large-960h-librispeech_ckpt_1.3.0.model.tar.gz) | wav2vec2 | Librispeech and LV-60k Dataset (53,000 h) | Librispeech (960 h) | 1.18 GB | Encoder: Wav2vec2.0, Decoder: CTC, Decoding method: Greedy search | - | 0.0189 | [Wav2vecASR Librispeech ASR3](../../examples/librispeech/asr3) |
+
### Language Model based on NGram
Language Model | Training Data | Token-based | Size | Descriptions
:------------:| :------------:|:------------: | :------------: | :------------:
diff --git a/examples/aishell/asr0/local/train.sh b/examples/aishell/asr0/local/train.sh
index 256b30d22..2b71b7f76 100755
--- a/examples/aishell/asr0/local/train.sh
+++ b/examples/aishell/asr0/local/train.sh
@@ -26,6 +26,10 @@ if [ ${seed} != 0 ]; then
    export FLAGS_cudnn_deterministic=True
fi
+# the default memory allocator strategy may cause gpu training to hang,
+# because no OOM is raised when memory is exhausted
+export FLAGS_allocator_strategy=naive_best_fit
+
if [ ${ngpu} == 0 ]; then
python3 -u ${BIN_DIR}/train.py \
--ngpu ${ngpu} \
diff --git a/examples/aishell/asr1/local/train.sh b/examples/aishell/asr1/local/train.sh
index f514de303..bfa8dd97d 100755
--- a/examples/aishell/asr1/local/train.sh
+++ b/examples/aishell/asr1/local/train.sh
@@ -35,6 +35,10 @@ echo ${ips_config}
mkdir -p exp
+# the default memory allocator strategy may cause gpu training to hang,
+# because no OOM is raised when memory is exhausted
+export FLAGS_allocator_strategy=naive_best_fit
+
if [ ${ngpu} == 0 ]; then
python3 -u ${BIN_DIR}/train.py \
--ngpu ${ngpu} \
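The same allocator workaround recurs in the librispeech `train.sh` scripts further below. If you want to confirm that the flag is actually picked up at runtime, a small hedged check (assuming `paddle.get_flags`, available in recent PaddlePaddle releases):
```bash
# print the allocator strategy as paddle sees it; expect naive_best_fit
export FLAGS_allocator_strategy=naive_best_fit
python3 -c "import paddle; print(paddle.get_flags('FLAGS_allocator_strategy'))"
```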
diff --git a/examples/iwslt2012/punc0/conf/ernie-3.0-base.yaml b/examples/iwslt2012/punc0/conf/ernie-3.0-base.yaml
new file mode 100644
index 000000000..845b13fd8
--- /dev/null
+++ b/examples/iwslt2012/punc0/conf/ernie-3.0-base.yaml
@@ -0,0 +1,43 @@
+###########################################################
+#                      DATA SETTING                       #
+###########################################################
+dataset_type: Ernie
+train_path: data/iwslt2012_zh/train.txt
+dev_path: data/iwslt2012_zh/dev.txt
+test_path: data/iwslt2012_zh/test.txt
+batch_size: 64
+num_workers: 2
+data_params:
+  pretrained_token: ernie-3.0-base-zh
+  punc_path: data/iwslt2012_zh/punc_vocab
+  seq_len: 100
+
+
+###########################################################
+#                     MODEL SETTING                       #
+###########################################################
+model_type: ErnieLinear
+model:
+  pretrained_token: ernie-3.0-base-zh
+  num_classes: 4
+
+###########################################################
+#                   OPTIMIZER SETTING                     #
+###########################################################
+optimizer_params:
+  weight_decay: 1.0e-6  # weight decay coefficient.
+
+scheduler_params:
+  learning_rate: 1.0e-5  # learning rate.
+  gamma: 0.9999  # scheduler gamma; must be in (0.0, 1.0), and closer to 1.0 is better.
+
+###########################################################
+#                    TRAINING SETTING                     #
+###########################################################
+max_epoch: 20
+
+###########################################################
+#                     OTHER SETTING                       #
+###########################################################
+num_snapshots: 10  # max number of snapshots to keep while training
+seed: 42  # random seed for paddle, random, and np.random
diff --git a/examples/iwslt2012/punc0/conf/ernie-3.0-medium.yaml b/examples/iwslt2012/punc0/conf/ernie-3.0-medium.yaml
new file mode 100644
index 000000000..392ba011c
--- /dev/null
+++ b/examples/iwslt2012/punc0/conf/ernie-3.0-medium.yaml
@@ -0,0 +1,43 @@
+###########################################################
+#                      DATA SETTING                       #
+###########################################################
+dataset_type: Ernie
+train_path: data/iwslt2012_zh/train.txt
+dev_path: data/iwslt2012_zh/dev.txt
+test_path: data/iwslt2012_zh/test.txt
+batch_size: 64
+num_workers: 2
+data_params:
+  pretrained_token: ernie-3.0-medium-zh
+  punc_path: data/iwslt2012_zh/punc_vocab
+  seq_len: 100
+
+
+###########################################################
+#                     MODEL SETTING                       #
+###########################################################
+model_type: ErnieLinear
+model:
+  pretrained_token: ernie-3.0-medium-zh
+  num_classes: 4
+
+###########################################################
+#                   OPTIMIZER SETTING                     #
+###########################################################
+optimizer_params:
+  weight_decay: 1.0e-6  # weight decay coefficient.
+
+scheduler_params:
+  learning_rate: 1.0e-5  # learning rate.
+  gamma: 0.9999  # scheduler gamma; must be in (0.0, 1.0), and closer to 1.0 is better.
+
+###########################################################
+#                    TRAINING SETTING                     #
+###########################################################
+max_epoch: 20
+
+###########################################################
+#                     OTHER SETTING                       #
+###########################################################
+num_snapshots: 10  # max number of snapshots to keep while training
+seed: 42  # random seed for paddle, random, and np.random
diff --git a/examples/iwslt2012/punc0/conf/ernie-3.0-mini.yaml b/examples/iwslt2012/punc0/conf/ernie-3.0-mini.yaml
new file mode 100644
index 000000000..c57fd94a8
--- /dev/null
+++ b/examples/iwslt2012/punc0/conf/ernie-3.0-mini.yaml
@@ -0,0 +1,43 @@
+###########################################################
+#                      DATA SETTING                       #
+###########################################################
+dataset_type: Ernie
+train_path: data/iwslt2012_zh/train.txt
+dev_path: data/iwslt2012_zh/dev.txt
+test_path: data/iwslt2012_zh/test.txt
+batch_size: 64
+num_workers: 2
+data_params:
+  pretrained_token: ernie-3.0-mini-zh
+  punc_path: data/iwslt2012_zh/punc_vocab
+  seq_len: 100
+
+
+###########################################################
+#                     MODEL SETTING                       #
+###########################################################
+model_type: ErnieLinear
+model:
+  pretrained_token: ernie-3.0-mini-zh
+  num_classes: 4
+
+###########################################################
+#                   OPTIMIZER SETTING                     #
+###########################################################
+optimizer_params:
+  weight_decay: 1.0e-6  # weight decay coefficient.
+
+scheduler_params:
+  learning_rate: 1.0e-5  # learning rate.
+  gamma: 0.9999  # scheduler gamma; must be in (0.0, 1.0), and closer to 1.0 is better.
+
+###########################################################
+#                    TRAINING SETTING                     #
+###########################################################
+max_epoch: 20
+
+###########################################################
+#                     OTHER SETTING                       #
+###########################################################
+num_snapshots: 10  # max number of snapshots to keep while training
+seed: 42  # random seed for paddle, random, and np.random
diff --git a/examples/iwslt2012/punc0/conf/ernie-3.0-nano-zh.yaml b/examples/iwslt2012/punc0/conf/ernie-3.0-nano-zh.yaml
new file mode 100644
index 000000000..a7a84c4c1
--- /dev/null
+++ b/examples/iwslt2012/punc0/conf/ernie-3.0-nano-zh.yaml
@@ -0,0 +1,43 @@
+###########################################################
+#                      DATA SETTING                       #
+###########################################################
+dataset_type: Ernie
+train_path: data/iwslt2012_zh/train.txt
+dev_path: data/iwslt2012_zh/dev.txt
+test_path: data/iwslt2012_zh/test.txt
+batch_size: 64
+num_workers: 2
+data_params:
+  pretrained_token: ernie-3.0-nano-zh
+  punc_path: data/iwslt2012_zh/punc_vocab
+  seq_len: 100
+
+
+###########################################################
+#                     MODEL SETTING                       #
+###########################################################
+model_type: ErnieLinear
+model:
+  pretrained_token: ernie-3.0-nano-zh
+  num_classes: 4
+
+###########################################################
+#                   OPTIMIZER SETTING                     #
+###########################################################
+optimizer_params:
+  weight_decay: 1.0e-6  # weight decay coefficient.
+
+scheduler_params:
+  learning_rate: 1.0e-5  # learning rate.
+  gamma: 0.9999  # scheduler gamma; must be in (0.0, 1.0), and closer to 1.0 is better.
+
+###########################################################
+#                    TRAINING SETTING                     #
+###########################################################
+max_epoch: 20
+
+###########################################################
+#                     OTHER SETTING                       #
+###########################################################
+num_snapshots: 10  # max number of snapshots to keep while training
+seed: 42  # random seed for paddle, random, and np.random
diff --git a/examples/iwslt2012/punc0/conf/ernie-tiny.yaml b/examples/iwslt2012/punc0/conf/ernie-tiny.yaml
new file mode 100644
index 000000000..6a5b7fee2
--- /dev/null
+++ b/examples/iwslt2012/punc0/conf/ernie-tiny.yaml
@@ -0,0 +1,43 @@
+###########################################################
+#                      DATA SETTING                       #
+###########################################################
+dataset_type: Ernie
+train_path: data/iwslt2012_zh/train.txt
+dev_path: data/iwslt2012_zh/dev.txt
+test_path: data/iwslt2012_zh/test.txt
+batch_size: 64
+num_workers: 2
+data_params:
+  pretrained_token: ernie-tiny
+  punc_path: data/iwslt2012_zh/punc_vocab
+  seq_len: 100
+
+
+###########################################################
+#                     MODEL SETTING                       #
+###########################################################
+model_type: ErnieLinear
+model:
+  pretrained_token: ernie-tiny
+  num_classes: 4
+
+###########################################################
+#                   OPTIMIZER SETTING                     #
+###########################################################
+optimizer_params:
+  weight_decay: 1.0e-6  # weight decay coefficient.
+
+scheduler_params:
+  learning_rate: 1.0e-5  # learning rate.
+  gamma: 0.9999  # scheduler gamma; must be in (0.0, 1.0), and closer to 1.0 is better.
+
+###########################################################
+#                    TRAINING SETTING                     #
+###########################################################
+max_epoch: 20
+
+###########################################################
+#                     OTHER SETTING                       #
+###########################################################
+num_snapshots: 10  # max number of snapshots to keep while training
+seed: 42  # random seed for paddle, random, and np.random
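These five configs differ only in `pretrained_token`, so switching ERNIE sizes for the punctuation-restoration task is a one-line change. A hedged sketch of how one of them might be selected (this assumes the punc0 recipe's `run.sh` exposes a `conf_path` variable via `parse_options.sh`, as the librispeech recipes below do; check `examples/iwslt2012/punc0/run.sh` for the actual flag name):
```bash
# illustrative: run the punc0 recipe with the medium ERNIE config
cd examples/iwslt2012/punc0
bash run.sh --conf_path conf/ernie-3.0-medium.yaml
```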
diff --git a/examples/librispeech/README.md b/examples/librispeech/README.md
index 74441fd09..9fcbde97a 100644
--- a/examples/librispeech/README.md
+++ b/examples/librispeech/README.md
@@ -3,7 +3,7 @@
* asr0 - deepspeech2 Streaming/Non-Streaming
* asr1 - transformer/conformer Streaming/Non-Streaming
* asr2 - transformer/conformer Streaming/Non-Streaming with Kaldi feature
-
+* asr3 - wav2vecASR, ASR model with pre-trained wav2vec2 and CTC
## Data
| Data Subset | Duration in Seconds |
diff --git a/examples/librispeech/asr0/local/train.sh b/examples/librispeech/asr0/local/train.sh
index 71659e28d..bb41fd554 100755
--- a/examples/librispeech/asr0/local/train.sh
+++ b/examples/librispeech/asr0/local/train.sh
@@ -26,6 +26,10 @@ if [ ${seed} != 0 ]; then
    export FLAGS_cudnn_deterministic=True
fi
+# the default memory allocator strategy may cause gpu training to hang,
+# because no OOM is raised when memory is exhausted
+export FLAGS_allocator_strategy=naive_best_fit
+
if [ ${ngpu} == 0 ]; then
python3 -u ${BIN_DIR}/train.py \
--ngpu ${ngpu} \
diff --git a/examples/librispeech/asr1/local/train.sh b/examples/librispeech/asr1/local/train.sh
index f729ed22c..e274b9133 100755
--- a/examples/librispeech/asr1/local/train.sh
+++ b/examples/librispeech/asr1/local/train.sh
@@ -29,6 +29,10 @@ fi
# export FLAGS_cudnn_exhaustive_search=true
# export FLAGS_conv_workspace_size_limit=4000
+# the default memory allocator strategy may cause gpu training to hang,
+# because no OOM is raised when memory is exhausted
+export FLAGS_allocator_strategy=naive_best_fit
+
if [ ${ngpu} == 0 ]; then
python3 -u ${BIN_DIR}/train.py \
--ngpu ${ngpu} \
diff --git a/examples/librispeech/asr2/local/train.sh b/examples/librispeech/asr2/local/train.sh
index 1f414ad41..c2f2d4b65 100755
--- a/examples/librispeech/asr2/local/train.sh
+++ b/examples/librispeech/asr2/local/train.sh
@@ -26,6 +26,10 @@ if [ ${seed} != 0 ]; then
    export FLAGS_cudnn_deterministic=True
fi
+# the default memory allocator strategy may cause gpu training to hang,
+# because no OOM is raised when memory is exhausted
+export FLAGS_allocator_strategy=naive_best_fit
+
if [ ${ngpu} == 0 ]; then
python3 -u ${BIN_DIR}/train.py \
--ngpu ${ngpu} \
diff --git a/examples/librispeech/asr3/README.md b/examples/librispeech/asr3/README.md
new file mode 100644
index 000000000..f99beb338
--- /dev/null
+++ b/examples/librispeech/asr3/README.md
@@ -0,0 +1,197 @@
+# Wav2vec2ASR with Librispeech
+This example contains code used to fine-tune the [wav2vec2.0](https://arxiv.org/pdf/2006.11477.pdf) model with the [LibriSpeech dataset](http://www.openslr.org/resources/12).
+## Overview
+All the scripts you need are in `run.sh`. There are several stages in `run.sh`, and each stage has its function.
+| Stage | Function |
+|:---- |:----------------------------------------------------------- |
+| 0 | Process data. It includes:
(1) Download the dataset
(2) Calculate the CMVN of the train dataset
(3) Get the vocabulary file
(4) Get the manifest files of the train, development and test dataset
(5) Download the pretrained wav2vec2 model |
+| 1 | Train the model |
+| 2 | Get the final model by averaging the top-k models; setting k = 1 means choosing the best model |
+| 3 | Test the final model performance |
+| 4 | Infer the single audio file |
+
+
+You can choose to run a range of stages by setting `stage` and `stop_stage`.
+
+For example, if you want to execute the code in stage 2 and stage 3, you can run this script:
+```bash
+bash run.sh --stage 2 --stop_stage 3
+```
+Or you can set `stage` equal to `stop_stage` to run only one stage.
+For example, if you only want to run `stage 0`, you can use the script below:
+```bash
+bash run.sh --stage 0 --stop_stage 0
+```
+The document below describes the scripts in `run.sh` in detail.
+## The Environment Variables
+`path.sh` contains the environment variables.
+```bash
+. ./path.sh
+. ./cmd.sh
+```
+These scripts need to be run first. Another script is also needed:
+```bash
+source ${MAIN_ROOT}/utils/parse_options.sh
+```
+It enables passing options in the form `--variable value` to the shell scripts.
+## The Local Variables
+Some local variables are set in `run.sh`.
+`gpus` denotes the GPU numbers you want to use. If you set `gpus=`, it means you only use the CPU.
+`stage` denotes the number of the stage you want to start from in the experiments.
+`stop_stage` denotes the number of the stage you want to end at in the experiments.
+`conf_path` denotes the config path of the model.
+`avg_num` denotes the number K of top-K models you want to average to get the final model.
+`audio_file` denotes the file path of the single audio file you want to infer in stage 4.
+`ckpt` denotes the checkpoint prefix of the model, e.g. "wav2vec2ASR".
+
+You can set the local variables (except `ckpt`) when you use `run.sh`.
+
+For example, you can set `gpus` and `avg_num` on the command line:
+```bash
+bash run.sh --gpus 0,1 --avg_num 20
+```
+## Stage 0: Data Processing
+To use this example, you need to process the data first; you can use stage 0 in `run.sh` to do this. The code is shown below:
+```bash
+ if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
+     # prepare data
+     bash ./local/data.sh || exit -1
+ fi
+```
+Stage 0 is for processing the data.
+If you only want to process the data, you can run:
+```bash
+bash run.sh --stage 0 --stop_stage 0
+```
+You can also just run these scripts in your command line:
+```bash
+. ./path.sh
+. ./cmd.sh
+bash ./local/data.sh
+```
+After processing the data, the `data` directory will look like this:
+```bash
+data/
+|-- dev.meta
+|-- lang_char
+|   |-- bpe_unigram_5000.model
+|   |-- bpe_unigram_5000.vocab
+|   `-- vocab.txt
+|-- manifest.dev
+|-- manifest.dev.raw
+|-- manifest.test
+|-- manifest.test.raw
+|-- manifest.train
+|-- manifest.train.raw
+|-- mean_std.json
+|-- test.meta
+`-- train.meta
+```
+
+Stage 0 also downloads the pre-trained [wav2vec2](https://paddlespeech.bj.bcebos.com/wav2vec/wav2vec2-large-960h-lv60-self.pdparams) model.
+```bash
+mkdir -p exp/wav2vec2
+wget -P exp/wav2vec2 https://paddlespeech.bj.bcebos.com/wav2vec/wav2vec2-large-960h-lv60-self.pdparams
+```
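Before moving on to training, it can be worth sanity-checking what stage 0 produced. A hedged sketch (the manifests are JSON-lines files as in the other PaddleSpeech recipes; exact field names may differ):
```bash
# count utterances in each split and peek at one manifest record
wc -l data/manifest.train data/manifest.dev data/manifest.test
head -n 1 data/manifest.train   # one JSON record per utterance
# inspect the first few vocabulary entries
head data/lang_char/vocab.txt
```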
+## Stage 1: Model Training
+If you want to train the model, you can use stage 1 in `run.sh`. The code is shown below:
+```bash
+if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
+     # train model, all `ckpt` under `exp` dir
+     CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${ckpt}
+ fi
+```
+If you want to train the model, you can use the script below to execute stage 0 and stage 1:
+```bash
+bash run.sh --stage 0 --stop_stage 1
+```
+Or you can run these scripts in the command line (using only the CPU):
+```bash
+. ./path.sh
+. ./cmd.sh
+bash ./local/data.sh
+CUDA_VISIBLE_DEVICES= ./local/train.sh conf/wav2vec2ASR.yaml wav2vec2ASR
+```
+## Stage 2: Top-k Models Averaging
+After training the model, we need to get the final model for testing and inference. In every epoch, the model checkpoint is saved, so we can choose the best model from them based on the validation loss, or we can sort them and average the parameters of the top-k models to get the final model. We can use stage 2 to do this, and the code is shown below. Note: we only train wav2vec2ASR for one epoch, so `avg_num` is set to 1.
+```bash
+ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
+     # avg n best model
+     avg.sh best exp/${ckpt}/checkpoints ${avg_num}
+ fi
+```
+`avg.sh` is in `../../../utils/`, which is defined in `path.sh`.
+If you want to get the final model, you can use the script below to execute stage 0, stage 1, and stage 2:
+```bash
+bash run.sh --stage 0 --stop_stage 2
+```
+Or you can run these scripts in the command line (using only the CPU):
+
+```bash
+. ./path.sh
+. ./cmd.sh
+bash ./local/data.sh
+CUDA_VISIBLE_DEVICES= ./local/train.sh conf/wav2vec2ASR.yaml wav2vec2ASR
+avg.sh best exp/wav2vec2ASR/checkpoints 1
+```
+## Stage 3: Model Testing
+The test stage evaluates the model's performance. The code for the test stage is shown below:
+```bash
+ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
+     # test ckpt avg_n
+     CUDA_VISIBLE_DEVICES=0 ./local/test.sh ${conf_path} ${decode_conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} || exit -1
+ fi
+```
+If you want to train a model and test it, you can use the script below to execute stage 0, stage 1, stage 2, and stage 3:
+```bash
+bash run.sh --stage 0 --stop_stage 3
+```
+Or you can run these scripts in the command line (using only the CPU):
+```bash
+. ./path.sh
+. ./cmd.sh
+bash ./local/data.sh
+CUDA_VISIBLE_DEVICES= ./local/train.sh conf/wav2vec2ASR.yaml wav2vec2ASR
+avg.sh best exp/wav2vec2ASR/checkpoints 1
+CUDA_VISIBLE_DEVICES= ./local/test.sh conf/wav2vec2ASR.yaml conf/tuning/decode.yaml exp/wav2vec2ASR/checkpoints/avg_1
+```
+## Pretrained Model
+You can get the pretrained wav2vec2ASR model from [released_model.md](../../../docs/source/released_model.md).
+
+Use the `tar` command to unpack the model, and then you can use the scripts below to test it.
+
+For example:
+```bash
+wget https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr3/wav2vec2ASR-large-960h-librispeech_ckpt_1.3.0.model.tar.gz
+tar xzvf wav2vec2ASR-large-960h-librispeech_ckpt_1.3.0.model.tar.gz
+source path.sh
+# If you have processed the data and generated the manifest files, you can skip the following 2 steps
+bash local/data.sh --stage -1 --stop_stage -1
+bash local/data.sh --stage 2 --stop_stage 2
+CUDA_VISIBLE_DEVICES= ./local/test.sh conf/wav2vec2ASR.yaml conf/tuning/decode.yaml exp/wav2vec2ASR/checkpoints/avg_1
+```
+The performance of the released model is shown in [RESULTS.md](./RESULTS.md).
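After unpacking, the test command above expects the averaged checkpoint under `exp/wav2vec2ASR/checkpoints`. A quick hedged check (the `.pdparams` suffix is the usual PaddlePaddle convention; the exact file set inside the tarball may differ):
```bash
# confirm the unpacked checkpoint matches what test.sh expects
ls exp/wav2vec2ASR/checkpoints
# expect something like avg_1.pdparams, matching the avg_1 prefix used above
```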
+## Stage 4: Single Audio File Inference
+In some situations, you may want to use the trained model to run inference on a single audio file. You can use stage 4. The code is shown below:
+```bash
+ if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
+     # test a single .wav file
+     CUDA_VISIBLE_DEVICES=0 ./local/test_wav.sh ${conf_path} ${decode_conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} ${audio_file} || exit -1
+ fi
+```
+You can train the model yourself with `bash run.sh --stage 0 --stop_stage 3`, or you can download the pretrained model with the script below:
+```bash
+wget https://paddlespeech.bj.bcebos.com/s2t/librispeech/asr3/wav2vec2ASR-large-960h-librispeech_ckpt_1.3.0.model.tar.gz
+tar xzvf wav2vec2ASR-large-960h-librispeech_ckpt_1.3.0.model.tar.gz
+```
+You can download the audio demo:
+```bash
+wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/en/demo_002_en.wav -P data/
+```
+You need to prepare an audio file or use the audio demo above; please make sure the sample rate of the audio is 16 kHz. You can get the result for the audio demo by running the script below:
+```bash
+CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/wav2vec2ASR.yaml conf/tuning/decode.yaml exp/wav2vec2ASR/checkpoints/avg_1 data/demo_002_en.wav
+```
diff --git a/examples/librispeech/asr3/RESULTS.md b/examples/librispeech/asr3/RESULTS.md
new file mode 100644
index 000000000..1c5626d9e
--- /dev/null
+++ b/examples/librispeech/asr3/RESULTS.md
@@ -0,0 +1,8 @@
+# LibriSpeech
+
+## Wav2VecASR
+train: 1 epoch, 1 * V100-32G, batch size: 10
+
+| Model | Params | Config | Augmentation | Test set | Decode method | WER |
+| --- | --- | --- | --- | --- | --- | --- |
+| wav2vec2ASR | 302.86 M | conf/wav2vec2ASR.yaml | spec_aug | test-clean | greedy search | 0.018887 |
diff --git a/examples/librispeech/asr3/cmd.sh b/examples/librispeech/asr3/cmd.sh
new file mode 100644
index 000000000..7b70ef5e0
--- /dev/null
+++ b/examples/librispeech/asr3/cmd.sh
@@ -0,0 +1,89 @@
+# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
+# Usage: <script>.pl [options] JOB=1:<N> <log> <command...>
+# e.g.
+# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
+#
+# Options:
+# --time