commit 05447ea93b
@@ -0,0 +1,100 @@
([简体中文](./README_cn.md)|English)

# ASR Deployment by SpeechX

## Introduction

ASR deployment supports U2/U2++/DeepSpeech2 ASR models in C++, which is good practice for industry deployment.

For more info about SpeechX, please see [here](../../speechx/README.md).

## Usage
### 1. Environment

* python - 3.7
* docker - `registry.baidubce.com/paddlepaddle/paddle:2.2.2-gpu-cuda10.2-cudnn7`
* os - Ubuntu 16.04.7 LTS
* gcc/g++/gfortran - 8.2.0
* cmake - 3.16.0

For more info, please see [here](../../speechx/README.md).

### 2. Compile SpeechX

Please see [here](../../speechx/README.md).

### 3. Usage

For a u2++ ASR deployment example, please see [here](../../speechx/examples/u2pp_ol/wenetspeech/).

First, go to the `speechx/speechx/examples/u2pp_ol/wenetspeech` directory.

- Source `path.sh`
```bash
source path.sh
```

- Download the model, prepare the test data and cmvn
```bash
run.sh --stage 0 --stop_stage 1
```

- Decode with WAV

```bash
# FP32
./local/recognizer.sh

# INT8
./local/recognizer_quant.sh
```

Output:
```bash
I1026 16:13:24.683531 48038 u2_recognizer_main.cc:55] utt: BAC009S0916W0495
I1026 16:13:24.683578 48038 u2_recognizer_main.cc:56] wav dur: 4.17119 sec.
I1026 16:13:24.683595 48038 u2_recognizer_main.cc:64] wav len (sample): 66739
I1026 16:13:25.037652 48038 u2_recognizer_main.cc:87] Pratial result: 3 这令
I1026 16:13:25.043697 48038 u2_recognizer_main.cc:87] Pratial result: 4 这令
I1026 16:13:25.222124 48038 u2_recognizer_main.cc:87] Pratial result: 5 这令被贷款
I1026 16:13:25.228385 48038 u2_recognizer_main.cc:87] Pratial result: 6 这令被贷款
I1026 16:13:25.414669 48038 u2_recognizer_main.cc:87] Pratial result: 7 这令被贷款的员工
I1026 16:13:25.420714 48038 u2_recognizer_main.cc:87] Pratial result: 8 这令被贷款的员工
I1026 16:13:25.608129 48038 u2_recognizer_main.cc:87] Pratial result: 9 这令被贷款的员工们请
I1026 16:13:25.801620 48038 u2_recognizer_main.cc:87] Pratial result: 10 这令被贷款的员工们请食难安
I1026 16:13:25.804101 48038 feature_cache.h:44] set finished
I1026 16:13:25.804128 48038 feature_cache.h:51] compute last feats done.
I1026 16:13:25.948771 48038 u2_recognizer_main.cc:87] Pratial result: 11 这令被贷款的员工们请食难安
I1026 16:13:26.246963 48038 u2_recognizer_main.cc:113] BAC009S0916W0495 这令被贷款的员工们请食难安
```

## Result

> CER is computed on aishell-test.
> RTF is computed over feature extraction and decoding, which is closer to end to end.
> Machine: Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz with avx512_vnni.

### FP32

```
Overall -> 5.75 % N=104765 C=99035 S=5587 D=143 I=294
Mandarin -> 5.75 % N=104762 C=99035 S=5584 D=143 I=294
English -> 0.00 % N=0 C=0 S=0 D=0 I=0
Other -> 100.00 % N=3 C=0 S=3 D=0 I=0
```

```
RTF is: 0.315337
```

### INT8

```
Overall -> 5.83 % N=104765 C=98943 S=5675 D=147 I=286
Mandarin -> 5.83 % N=104762 C=98943 S=5672 D=147 I=286
English -> 0.00 % N=0 C=0 S=0 D=0 I=0
Other -> 100.00 % N=3 C=0 S=3 D=0 I=0
```

```
RTF is: 0.269674
```
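For reference, the RTF (real-time factor) above is processing time divided by audio duration, so values below 1.0 mean faster-than-real-time recognition. A minimal sketch of the computation (the helper is illustrative, not part of SpeechX):

```python
import time

def real_time_factor(decode_fn, wav_duration_sec: float) -> float:
    """RTF = (feature extraction + decoding time) / audio duration."""
    start = time.perf_counter()
    decode_fn()  # run the recognizer on one utterance
    return (time.perf_counter() - start) / wav_duration_sec

# e.g. spending 1.31 s on a 4.17 s wav gives RTF ≈ 0.315
```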
@@ -0,0 +1,102 @@
([简体中文](./README_cn.md)|English)
# Speech SSL (Self-Supervised Learning)

## Introduction
Speech SSL, or self-supervised learning, refers to training on large-scale unlabeled speech data. A model trained this way produces good acoustic representations and can be applied to other downstream speech tasks by fine-tuning on labeled datasets.

This demo recognizes text from, or produces an acoustic representation of, a specific audio file with speech SSL models. It can be done with a single command or a few lines of Python using `PaddleSpeech`.

## Usage
### 1. Installation
See [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md).

You can choose one of the easy, medium, or hard ways to install paddlespeech.

### 2. Prepare Input File
The input of this demo should be a WAV file (`.wav`), and its sample rate must match the model's.

Here are sample files for this demo that can be downloaded:
```bash
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav
```
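Since the sample rate must match the model (16 kHz for the default models here), it is worth checking the file before running the demo. A small sketch, using `soundfile` for reading and `librosa` as one possible resampler:

```python
import soundfile

wav, sr = soundfile.read("./en.wav")
if sr != 16000:
    import librosa  # any equivalent resampler works
    wav = librosa.resample(wav, orig_sr=sr, target_sr=16000)
    soundfile.write("./en_16k.wav", wav, 16000)
```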
### 3. Usage
- Command Line (Recommended)
```bash
# to recognize text
paddlespeech ssl --task asr --lang en --input ./en.wav

# to get the acoustic representation
paddlespeech ssl --task vector --lang en --input ./en.wav
```

Usage:
```bash
paddlespeech ssl --help
```
Arguments:
- `input` (required): Audio file to recognize.
- `model`: Model type of the asr task. Default: `wav2vec2ASR_librispeech`.
- `task`: Output type. Default: `asr`.
- `lang`: Model language. Default: `en`.
- `sample_rate`: Sample rate of the model. Default: `16000`.
- `config`: Config of the asr task. Uses the pretrained model when it is None. Default: `None`.
- `ckpt_path`: Model checkpoint. Uses the pretrained model when it is None. Default: `None`.
- `yes`: Takes no value. When set, every prompt from the program is accepted by default, including resampling the audio. Default: `False`.
- `device`: Device to execute model inference on. Default: the default device of paddlepaddle in the current environment.
- `verbose`: Show the log information.


- Python API
```python
import paddle
from paddlespeech.cli.ssl import SSLExecutor

ssl_executor = SSLExecutor()

# to recognize text
text = ssl_executor(
    model='wav2vec2ASR_librispeech',
    task='asr',
    lang='en',
    sample_rate=16000,
    config=None,  # Set `config` and `ckpt_path` to None to use the pretrained model.
    ckpt_path=None,
    audio_file='./en.wav',
    device=paddle.get_device())
print('ASR Result: \n{}'.format(text))

# to get the acoustic representation
feature = ssl_executor(
    model='wav2vec2',
    task='vector',
    lang='en',
    sample_rate=16000,
    config=None,  # Set `config` and `ckpt_path` to None to use the pretrained model.
    ckpt_path=None,
    audio_file='./en.wav',
    device=paddle.get_device())
print('Representation: \n{}'.format(feature))
```

Output:
```bash
ASR Result:
i knocked at the door on the ancient side of the building

Representation:
Tensor(shape=[1, 164, 1024], dtype=float32, place=Place(gpu:0), stop_gradient=True,
       [[[ 0.02351918, -0.12980647,  0.17868176, ...,  0.10118122,
          -0.04614586,  0.17853957],
         [ 0.02361383, -0.12978461,  0.17870593, ...,  0.10103855,
          -0.04638699,  0.17855372],
         [ 0.02345137, -0.12982975,  0.17883906, ...,  0.10104341,
          -0.04643029,  0.17856732],
         ...,
         [ 0.02313030, -0.12918393,  0.17845058, ...,  0.10073373,
          -0.04701405,  0.17862988],
         [ 0.02176583, -0.12929161,  0.17797582, ...,  0.10097728,
          -0.04687393,  0.17864393],
         [ 0.05269200,  0.01297141, -0.23336855, ..., -0.11257174,
          -0.17227529,  0.20338398]]])
```
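The representation tensor has shape `[1, 164, 1024]`: one utterance, 164 frames, 1024 feature dimensions. Assuming the standard wav2vec 2.0 feature extractor (total stride 320 on 16 kHz input, i.e. one frame per 20 ms), the frame count maps back to the audio length:

```python
# each wav2vec 2.0 frame covers 320 samples of 16 kHz audio (20 ms)
frames = 164
seconds = frames * 320 / 16000  # ≈ 3.3 s of audio in en.wav
```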
@@ -0,0 +1,10 @@
#!/bin/bash

# audio download
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav

# to recognize text
paddlespeech ssl --task asr --lang en --input ./en.wav

# to get acoustic representation
paddlespeech ssl --task vector --lang en --input ./en.wav
@@ -0,0 +1,95 @@
([简体中文](./README_cn.md)|English)

## Introduction
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.

The Whisper model was trained and released by OpenAI: https://github.com/openai/whisper

## Usage
### 1. Installation
See [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md).

You can choose one of the easy, medium, or hard ways to install paddlespeech.

### 2. Prepare Input File
The input of this demo should be a WAV file (`.wav`), and its sample rate must match the model's.

Here are sample files for this demo that can be downloaded:
```bash
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
```

### 3. Usage
- Command Line (Recommended)
```bash
# to recognize text
paddlespeech whisper --task transcribe --input ./zh.wav

# to use the English-only base-size model
paddlespeech whisper --lang en --size base --task transcribe --input ./en.wav

# to recognize text and translate it to English
paddlespeech whisper --task translate --input ./zh.wav
```

Usage:
```bash
paddlespeech whisper --help
```
Arguments:
- `input` (required): Audio file to recognize.
- `model`: Model type of the asr task. Default: `whisper-large`.
- `task`: Output type. Default: `transcribe`.
- `lang`: Model language. Default: ``. Use `en` to choose the English-only model. Currently the [medium, base, small, tiny] sizes support English-only models.
- `size`: Model size for decoding. Default: `large`. Supported sizes: [large, medium, base, small, tiny].
- `language`: Decoding language. Default: `None`. Forces the recognized language; by default the model detects the language itself.
- `sample_rate`: Sample rate of the model. Default: `16000`. Other sample rates are not supported yet.
- `config`: Config of the asr task. Uses the pretrained model when it is None. Default: `None`.
- `ckpt_path`: Model checkpoint. Uses the pretrained model when it is None. Default: `None`.
- `yes`: Takes no value. When set, every prompt from the program is accepted by default, including resampling the audio. Default: `False`.
- `device`: Device to execute model inference on. Default: the default device of paddlepaddle in the current environment.
- `verbose`: Show the log information.


- Python API
```python
import paddle
from paddlespeech.cli.whisper import WhisperExecutor

whisper_executor = WhisperExecutor()

# to recognize text
text = whisper_executor(
    model='whisper',
    task='transcribe',
    sample_rate=16000,
    config=None,  # Set `config` and `ckpt_path` to None to use the pretrained model.
    ckpt_path=None,
    audio_file='./zh.wav',
    device=paddle.get_device())
print('ASR Result: \n{}'.format(text))

# to recognize text and translate it to English
feature = whisper_executor(
    model='whisper',
    task='translate',
    sample_rate=16000,
    config=None,  # Set `config` and `ckpt_path` to None to use the pretrained model.
    ckpt_path=None,
    audio_file='./zh.wav',
    device=paddle.get_device())
print('Translate Result: \n{}'.format(feature))
```

Output:
```bash
Transcribe Result:
Detected language: Chinese
[00:00.000 --> 00:05.000] 我认为跑步最重要的就是给我带来了身体健康
{'text': '我认为跑步最重要的就是给我带来了身体健康', 'segments': [{'id': 0, 'seek': 0, 'start': 0.0, 'end': 5.0, 'text': '我认为跑步最重要的就是给我带来了身体健康', 'tokens': [50364, 1654, 7422, 97, 13992, 32585, 31429, 8661, 24928, 1546, 5620, 49076, 4845, 99, 34912, 19847, 29485, 44201, 6346, 115, 50614], 'temperature': 0.0, 'avg_logprob': -0.23577967557040128, 'compression_ratio': 0.28169014084507044, 'no_speech_prob': 0.028302080929279327}], 'language': 'zh'}

Translate Result:
Detected language: Chinese
[00:00.000 --> 00:05.000] I think the most important thing about running is that it brings me good health.
{'text': ' I think the most important thing about running is that it brings me good health.', 'segments': [{'id': 0, 'seek': 0, 'start': 0.0, 'end': 5.0, 'text': ' I think the most important thing about running is that it brings me good health.', 'tokens': [50364, 286, 519, 264, 881, 1021, 551, 466, 2614, 307, 300, 309, 5607, 385, 665, 1585, 13, 50614], 'temperature': 0.0, 'avg_logprob': -0.47945233395225123, 'compression_ratio': 1.095890410958904, 'no_speech_prob': 0.028302080929279327}], 'language': 'zh'}
```
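A note on the `compression_ratio` field above: it is `len(text) / len(zlib-compressed text)` (see the `compression_ratio` helper added later in this commit) and serves as a repetition check during decoding. A quick verification against the transcribe result:

```python
import zlib

text = '我认为跑步最重要的就是给我带来了身体健康'
print(len(text) / len(zlib.compress(text.encode("utf-8"))))  # ≈ 0.2817
```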
@@ -0,0 +1,13 @@
#!/bin/bash

# audio download
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav https://paddlespeech.bj.bcebos.com/PaddleAudio/en.wav

# to recognize text
paddlespeech whisper --task transcribe --input ./zh.wav

# to recognize text and translate it to English
paddlespeech whisper --task translate --input ./zh.wav

# to use the English-only base-size model
paddlespeech whisper --lang en --size base --task transcribe --input ./en.wav
@@ -0,0 +1 @@
../../../csmsc/tts3/local/export2lite.sh
@@ -0,0 +1,32 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_aishell3 \
        --voc=pwgan_aishell3 \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --speaker_dict=dump/speaker_id_map.txt \
        --spk_id=0
fi

# hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_aishell3 \
        --voc=hifigan_aishell3 \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --speaker_dict=dump/speaker_id_map.txt \
        --spk_id=0
fi
@@ -0,0 +1 @@
../../tts3/local/export2lite.sh
@@ -0,0 +1,43 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=speedyspeech_csmsc \
        --voc=pwgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --tones_dict=dump/tone_id_map.txt
fi

# for more GAN Vocoders
# multi band melgan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=speedyspeech_csmsc \
        --voc=mb_melgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --tones_dict=dump/tone_id_map.txt
fi

# hifigan
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=speedyspeech_csmsc \
        --voc=hifigan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --tones_dict=dump/tone_id_map.txt
fi
@@ -0,0 +1,18 @@
train_output_path=$1
model_dir=$2
output_dir=$3
model=$4
valid_targets=$5

# strip the dataset suffix, e.g. fastspeech2_csmsc -> fastspeech2
model_name=${model%_*}
echo model_name: ${model_name}

# drop the last comma-separated item, e.g. "arm,x86" -> "arm"
suffix=${valid_targets%,*}

mkdir -p ${train_output_path}/${output_dir}

paddle_lite_opt \
    --model_file ${train_output_path}/${model_dir}/${model}.pdmodel \
    --param_file ${train_output_path}/${model_dir}/${model}.pdiparams \
    --optimize_out ${train_output_path}/${output_dir}/${model}_${suffix} \
    --valid_targets ${valid_targets}
@@ -0,0 +1,40 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0
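# stages select the vocoder: 0 = pwgan, 1 = mb_melgan, 2 = hifigan;
# e.g. set stage=2 and stop_stage=2 to run only the hifigan block below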

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_csmsc \
        --voc=pwgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi

# for more GAN Vocoders
# multi band melgan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_csmsc \
        --voc=mb_melgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi

# hifigan
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_csmsc \
        --voc=hifigan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi
@@ -0,0 +1,47 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict_streaming.py \
        --inference_dir=${train_output_path}/pdlite_streaming \
        --am=fastspeech2_csmsc \
        --am_stat=dump/train/speech_stats.npy \
        --voc=pwgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out_streaming \
        --phones_dict=dump/phone_id_map.txt \
        --am_streaming=True
fi

# for more GAN Vocoders
# multi band melgan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict_streaming.py \
        --inference_dir=${train_output_path}/pdlite_streaming \
        --am=fastspeech2_csmsc \
        --am_stat=dump/train/speech_stats.npy \
        --voc=mb_melgan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out_streaming \
        --phones_dict=dump/phone_id_map.txt \
        --am_streaming=True
fi

# hifigan
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    python3 ${BIN_DIR}/../lite_predict_streaming.py \
        --inference_dir=${train_output_path}/pdlite_streaming \
        --am=fastspeech2_csmsc \
        --am_stat=dump/train/speech_stats.npy \
        --voc=hifigan_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/lite_infer_out_streaming \
        --phones_dict=dump/phone_id_map.txt \
        --am_streaming=True
fi
@@ -1,8 +1,8 @@
 # LibriSpeech
 
 ## Wav2VecASR
-train: Epoch 1, 1*V100-32G, batchsize:10
+train: Epoch 1, 1*V100-32G, batchsize: 6
 
 | Model | Params | Config | Augmentation| Test set | Decode method | WER |
 | --- | --- | --- | --- | --- | --- | --- |
-| wav2vec2ASR | 302.86 M | conf/wav2vec2ASR.yaml | spec_aug | test-clean | greedy search | 0.018887 |
+| wav2vec2ASR | 302.86 M | conf/wav2vec2ASR.yaml | spec_aug | test-clean | greedy search | 0.018906 |
@@ -0,0 +1 @@
../../../csmsc/tts3/local/export2lite.sh
@@ -0,0 +1,30 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_ljspeech \
        --voc=pwgan_ljspeech \
        --text=${BIN_DIR}/../sentences_en.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --lang=en
fi

# hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_ljspeech \
        --voc=hifigan_ljspeech \
        --text=${BIN_DIR}/../sentences_en.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --lang=en
fi
@@ -0,0 +1 @@
../../../csmsc/tts3/local/export2lite.sh
@@ -0,0 +1,34 @@
#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

# pwgan
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_vctk \
        --voc=pwgan_vctk \
        --text=${BIN_DIR}/../sentences_en.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --speaker_dict=dump/speaker_id_map.txt \
        --spk_id=0 \
        --lang=en
fi

# hifigan
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    python3 ${BIN_DIR}/../lite_predict.py \
        --inference_dir=${train_output_path}/pdlite \
        --am=fastspeech2_vctk \
        --voc=hifigan_vctk \
        --text=${BIN_DIR}/../sentences_en.txt \
        --output_dir=${train_output_path}/lite_infer_out \
        --phones_dict=dump/phone_id_map.txt \
        --speaker_dict=dump/speaker_id_map.txt \
        --spk_id=0 \
        --lang=en
fi
@@ -0,0 +1,14 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .infer import SSLExecutor
@@ -0,0 +1,14 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .infer import WhisperExecutor
@@ -0,0 +1,13 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -0,0 +1,123 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from Whisper (https://github.com/openai/whisper/whisper/)
import os.path
import sys

import distutils.util
import numpy as np
import paddle
import soundfile
from yacs.config import CfgNode

from paddlespeech.s2t.models.whisper import log_mel_spectrogram
from paddlespeech.s2t.models.whisper import ModelDimensions
from paddlespeech.s2t.models.whisper import transcribe
from paddlespeech.s2t.models.whisper import Whisper
from paddlespeech.s2t.training.cli import default_argument_parser
from paddlespeech.s2t.utils.log import Log

logger = Log(__name__).getlog()


class WhisperInfer():
    def __init__(self, config, args):
        self.args = args
        self.config = config
        self.audio_file = args.audio_file

        paddle.set_device('gpu' if self.args.ngpu > 0 else 'cpu')
        config.pop("ngpu")

        # load model
        model_dict = paddle.load(self.config.model_file)
        config.pop("model_file")
        dims = ModelDimensions(**model_dict["dims"])
        self.model = Whisper(dims)
        self.model.load_dict(model_dict)

    def run(self):
        check(self.args.audio_file)

        with paddle.no_grad():
            temperature = self.config.pop("temperature")
            temperature_increment_on_fallback = self.config.pop(
                "temperature_increment_on_fallback")
            if temperature_increment_on_fallback is not None:
                temperature = tuple(
                    np.arange(temperature, 1.0 + 1e-6,
                              temperature_increment_on_fallback))
            else:
                temperature = [temperature]
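            # e.g. temperature=0.0 with increment 0.2 yields the fallback
            # schedule (0.0, 0.2, 0.4, 0.6, 0.8, 1.0); transcribe() retries
            # at the next temperature when decoding fails its quality checks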

            # load audio
            mel = log_mel_spectrogram(
                self.args.audio_file, resource_path=self.config.resource_path)

            result = transcribe(
                self.model, mel, temperature=temperature, **self.config)
            if self.args.result_file is not None:
                with open(self.args.result_file, 'w') as f:
                    f.write(str(result))
            return result


def check(audio_file: str):
    if not os.path.isfile(audio_file):
        print("Please input the right audio file path")
        sys.exit(-1)

    logger.info("checking the audio file format......")
    try:
        _, sample_rate = soundfile.read(audio_file)
    except Exception as e:
        logger.error(str(e))
        logger.error(
            "can not open the wav file, please check the audio file format")
        sys.exit(-1)
    logger.info("The sample rate is %d" % sample_rate)
    assert (sample_rate == 16000)
    logger.info("The audio file format is right")


def main(config, args):
    WhisperInfer(config, args).run()


if __name__ == "__main__":
    parser = default_argument_parser()
    # save asr result to
    parser.add_argument(
        "--result_file", type=str, help="path to save the asr result")
    parser.add_argument(
        "--audio_file", type=str, help="path of the input audio file")
    parser.add_argument(
        "--debug",
        type=distutils.util.strtobool,
        default=False,
        help="for debug.")
    args = parser.parse_args()

    config = CfgNode(new_allowed=True)

    if args.config:
        config.merge_from_file(args.config)
    if args.decode_cfg:
        decode_confs = CfgNode(new_allowed=True)
        decode_confs.merge_from_file(args.decode_cfg)
        config.decode = decode_confs
    if args.opts:
        config.merge_from_list(args.opts)
    config.freeze()
    main(config, args)
@@ -0,0 +1,17 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .wav2vec2_ASR import Wav2vec2ASR
from .wav2vec2_ASR import Wav2vec2Base

__all__ = ["Wav2vec2ASR", "Wav2vec2Base"]
@@ -0,0 +1,13 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -0,0 +1,97 @@
# Authors
#  * Mirco Ravanelli 2020
#  * Guillermo Cámbara 2021
#  * Sarthak Yadav 2022
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from speechbrain (https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/nnet/normalization.py)
import paddle.nn as nn

from paddlespeech.s2t.modules.align import BatchNorm1D


class BatchNorm1d(nn.Layer):
    """Applies 1d batch normalization to the input tensor.

    Arguments
    ---------
    input_shape : tuple
        The expected shape of the input. Alternatively, use ``input_size``.
    input_size : int
        The expected size of the input. Alternatively, use ``input_shape``.
    eps : float
        This value is added to the std deviation estimation to improve the
        numerical stability.
    momentum : float
        It is a value used for the running_mean and running_var computation.
    combine_batch_time : bool
        When true, it combines the batch and time axes.
    skip_transpose : bool
        When true, the input is assumed to already be (batch, channels, time)
        and is not transposed before normalization.

    Example
    -------
    >>> input = paddle.randn([100, 10])
    >>> norm = BatchNorm1d(input_shape=input.shape)
    >>> output = norm(input)
    >>> output.shape
    [100, 10]
    """

    def __init__(
            self,
            input_shape=None,
            input_size=None,
            eps=1e-05,
            momentum=0.9,
            combine_batch_time=False,
            skip_transpose=False, ):
        super().__init__()
        self.combine_batch_time = combine_batch_time
        self.skip_transpose = skip_transpose

        if input_size is None and skip_transpose:
            input_size = input_shape[1]
        elif input_size is None:
            input_size = input_shape[-1]

        self.norm = BatchNorm1D(input_size, momentum=momentum, epsilon=eps)

    def forward(self, x):
        """Returns the normalized input tensor.

        Arguments
        ---------
        x : paddle.Tensor (batch, time, [channels])
            input to normalize. 2d or 3d tensors are expected in input.
            4d tensors can be used when combine_dims=True.
        """
        shape_or = x.shape
        if self.combine_batch_time:
            if x.ndim == 3:
                x = x.reshape([shape_or[0] * shape_or[1], shape_or[2]])
            else:
                x = x.reshape(
                    [shape_or[0] * shape_or[1], shape_or[3], shape_or[2]])

        elif not self.skip_transpose:
            # paddle's BatchNorm1D normalizes axis 1, so move channels there
            x = x.transpose([0, 2, 1])

        x_n = self.norm(x)
        if self.combine_batch_time:
            x_n = x_n.reshape(shape_or)
        elif not self.skip_transpose:
            x_n = x_n.transpose([0, 2, 1])

        return x_n
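A quick shape check of the wrapper (a minimal sketch, assuming the class above is importable):

```python
import paddle

norm = BatchNorm1d(input_shape=[8, 20, 10])  # channels inferred from the last dim
x = paddle.randn([8, 20, 10])                # (batch, time, channels)
y = norm(x)                                  # transposed to NCL internally and back
assert y.shape == x.shape
```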
@@ -0,0 +1,13 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -0,0 +1,12 @@
# MIT License, Copyright (c) 2022 OpenAI.
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Modified from OpenAI Whisper 2022 (https://github.com/openai/whisper/whisper/__init__.py)
from paddlespeech.s2t.models.whisper.whipser import decode
from paddlespeech.s2t.models.whisper.whipser import DecodingOptions
from paddlespeech.s2t.models.whisper.whipser import DecodingResult
from paddlespeech.s2t.models.whisper.whipser import detect_language
from paddlespeech.s2t.models.whisper.whipser import log_mel_spectrogram
from paddlespeech.s2t.models.whisper.whipser import ModelDimensions
from paddlespeech.s2t.models.whisper.whipser import transcribe
from paddlespeech.s2t.models.whisper.whipser import Whisper
@@ -0,0 +1,362 @@
# MIT License, Copyright (c) 2022 OpenAI.
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Modified from OpenAI Whisper 2022 (https://github.com/openai/whisper/whisper/tokenizer.py)
import os
from dataclasses import dataclass
from functools import lru_cache
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union

import numpy as np
import paddle
from paddlenlp.transformers import GPTTokenizer

LANGUAGES = {
    "en": "english",
    "zh": "chinese",
    "de": "german",
    "es": "spanish",
    "ru": "russian",
    "ko": "korean",
    "fr": "french",
    "ja": "japanese",
    "pt": "portuguese",
    "tr": "turkish",
    "pl": "polish",
    "ca": "catalan",
    "nl": "dutch",
    "ar": "arabic",
    "sv": "swedish",
    "it": "italian",
    "id": "indonesian",
    "hi": "hindi",
    "fi": "finnish",
    "vi": "vietnamese",
    "iw": "hebrew",
    "uk": "ukrainian",
    "el": "greek",
    "ms": "malay",
    "cs": "czech",
    "ro": "romanian",
    "da": "danish",
    "hu": "hungarian",
    "ta": "tamil",
    "no": "norwegian",
    "th": "thai",
    "ur": "urdu",
    "hr": "croatian",
    "bg": "bulgarian",
    "lt": "lithuanian",
    "la": "latin",
    "mi": "maori",
    "ml": "malayalam",
    "cy": "welsh",
    "sk": "slovak",
    "te": "telugu",
    "fa": "persian",
    "lv": "latvian",
    "bn": "bengali",
    "sr": "serbian",
    "az": "azerbaijani",
    "sl": "slovenian",
    "kn": "kannada",
    "et": "estonian",
    "mk": "macedonian",
    "br": "breton",
    "eu": "basque",
    "is": "icelandic",
    "hy": "armenian",
    "ne": "nepali",
    "mn": "mongolian",
    "bs": "bosnian",
    "kk": "kazakh",
    "sq": "albanian",
    "sw": "swahili",
    "gl": "galician",
    "mr": "marathi",
    "pa": "punjabi",
    "si": "sinhala",
    "km": "khmer",
    "sn": "shona",
    "yo": "yoruba",
    "so": "somali",
    "af": "afrikaans",
    "oc": "occitan",
    "ka": "georgian",
    "be": "belarusian",
    "tg": "tajik",
    "sd": "sindhi",
    "gu": "gujarati",
    "am": "amharic",
    "yi": "yiddish",
    "lo": "lao",
    "uz": "uzbek",
    "fo": "faroese",
    "ht": "haitian creole",
    "ps": "pashto",
    "tk": "turkmen",
    "nn": "nynorsk",
    "mt": "maltese",
    "sa": "sanskrit",
    "lb": "luxembourgish",
    "my": "myanmar",
    "bo": "tibetan",
    "tl": "tagalog",
    "mg": "malagasy",
    "as": "assamese",
    "tt": "tatar",
    "haw": "hawaiian",
    "ln": "lingala",
    "ha": "hausa",
    "ba": "bashkir",
    "jw": "javanese",
    "su": "sundanese",
}

# language code lookup by name, with a few language aliases
TO_LANGUAGE_CODE = {
    **{language: code for code, language in LANGUAGES.items()},
    "burmese": "my",
    "valencian": "ca",
    "flemish": "nl",
    "haitian": "ht",
    "letzeburgesch": "lb",
    "pushto": "ps",
    "panjabi": "pa",
    "moldavian": "ro",
    "moldovan": "ro",
    "sinhalese": "si",
    "castilian": "es",
}


@dataclass(frozen=True)
class Tokenizer:
    """A thin wrapper around `GPTTokenizer` providing quick access to special tokens"""

    tokenizer: "GPTTokenizer"
    language: Optional[str]
    sot_sequence: Tuple[int]

    def encode(self, text, **kwargs):
        return self.tokenizer.encode(text, **kwargs)

    def decode(self,
               token_ids: Union[int, List[int], np.ndarray, paddle.Tensor],
               **kwargs):
        if len(token_ids) > 1:
            ids_list = []
            for ids in token_ids:
                if paddle.is_tensor(ids):
                    ids = ids.item()
                if ids < len(self.tokenizer):
                    ids_list.append(ids)
            token_ids = ids_list

        return self.tokenizer.decode(token_ids, **kwargs)

    def decode_with_timestamps(self, tokens) -> str:
        """
        Timestamp tokens are above the special tokens' id range and are ignored by `decode()`.
        This method decodes given tokens with timestamp tokens annotated, e.g. "<|1.08|>".
        """
        outputs = [[]]
        for token in tokens:
            if token >= self.timestamp_begin:
                timestamp = f"<|{(token - self.timestamp_begin) * 0.02:.2f}|>"
                outputs.append(timestamp)
                outputs.append([])
            else:
                outputs[-1].append(token)
        outputs = [
            s if isinstance(s, str) else self.tokenizer.decode(s)
            for s in outputs
        ]
        return "".join(outputs)

    @property
    @lru_cache()
    def eot(self) -> int:
        return self.tokenizer.eos_token_id

    @property
    @lru_cache()
    def sot(self) -> int:
        return self._get_single_token_id("<|startoftranscript|>")

    @property
    @lru_cache()
    def sot_lm(self) -> int:
        return self._get_single_token_id("<|startoflm|>")

    @property
    @lru_cache()
    def sot_prev(self) -> int:
        return self._get_single_token_id("<|startofprev|>")

    @property
    @lru_cache()
    def no_speech(self) -> int:
        return self._get_single_token_id("<|nospeech|>")

    @property
    @lru_cache()
    def no_timestamps(self) -> int:
        return self._get_single_token_id("<|notimestamps|>")

    @property
    @lru_cache()
    def timestamp_begin(self) -> int:
        return self.tokenizer.all_special_ids[-1] + 1

    @property
    @lru_cache()
    def language_token(self) -> int:
        """Returns the token id corresponding to the value of the `language` field"""
        if self.language is None:
            raise ValueError(
                "This tokenizer does not have language token configured")

        additional_tokens = dict(
            zip(
                self.tokenizer.additional_special_tokens,
                self.tokenizer.additional_special_tokens_ids, ))
        candidate = f"<|{self.language}|>"
        if candidate in additional_tokens:
            return additional_tokens[candidate]

        raise KeyError(f"Language {self.language} not found in tokenizer.")

    @property
    @lru_cache()
    def all_language_tokens(self) -> Tuple[int]:
        result = []
        for token, token_id in zip(
                self.tokenizer.additional_special_tokens,
                self.tokenizer.additional_special_tokens_ids, ):
            if token.strip("<|>") in LANGUAGES:
                result.append(token_id)
        return tuple(result)

    @property
    @lru_cache()
    def all_language_codes(self) -> Tuple[str]:
        return tuple(
            self.decode([l]).strip("<|>") for l in self.all_language_tokens)

    @property
    @lru_cache()
    def sot_sequence_including_notimestamps(self) -> Tuple[int]:
        return tuple(list(self.sot_sequence) + [self.no_timestamps])

    @property
    @lru_cache()
    def non_speech_tokens(self) -> Tuple[int]:
        """
        Returns the list of tokens to suppress in order to avoid any speaker tags or non-speech
        annotations, to prevent sampling texts that are not actually spoken in the audio, e.g.

        - ♪♪♪
        - ( SPEAKING FOREIGN LANGUAGE )
        - [DAVID] Hey there,

        keeping basic punctuation like commas, periods, question marks, exclamation points, etc.
        """
        symbols = list("\"#()*+/:;<=>@[\\]^_`{|}~「」『』")
        symbols += "<< >> <<< >>> -- --- -( -[ (' (\" (( )) ((( ))) [[ ]] {{ }} ♪♪ ♪♪♪".split(
        )

        # symbols that may be a single token or multiple tokens depending on the tokenizer.
        # In case they're multiple tokens, suppress the first token, which is safe because:
        # These are between U+2640 and U+267F miscellaneous symbols that are okay to suppress
        # in generations, and in the 3-byte UTF-8 representation they share the first two bytes.
        miscellaneous = set("♩♪♫♬♭♮♯")
        assert all(0x2640 <= ord(c) <= 0x267F for c in miscellaneous)

        # allow hyphens "-" and single quotes "'" between words, but not at the beginning of a word
        result = {
            self.tokenizer.encode(" -").input_ids[0],
            self.tokenizer.encode(" '").input_ids[0]
        }
        for symbol in symbols + list(miscellaneous):
            for tokens in [
                    self.tokenizer.encode(symbol).input_ids,
                    self.tokenizer.encode(" " + symbol).input_ids
            ]:
                if len(tokens) == 1 or symbol in miscellaneous:
                    result.add(tokens[0])

        return tuple(sorted(result))

    def _get_single_token_id(self, text) -> int:
        tokens = self.tokenizer.encode(text).input_ids
        assert len(tokens) == 1, f"{text} is not encoded as a single token"
        return tokens[0]


@lru_cache(maxsize=None)
def build_tokenizer(resource_path: str, name: str="gpt2"):
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    path = os.path.join(resource_path, "assets", name)
    tokenizer = GPTTokenizer.from_pretrained(path)

    specials = [
        "<|startoftranscript|>",
        * [f"<|{lang}|>" for lang in LANGUAGES.keys()],
        "<|translate|>",
        "<|transcribe|>",
        "<|startoflm|>",
        "<|startofprev|>",
        "<|nospeech|>",
        "<|notimestamps|>",
    ]

    tokenizer.add_special_tokens(dict(additional_special_tokens=specials))
    return tokenizer


@lru_cache(maxsize=None)
def get_tokenizer(
        multilingual: bool,
        resource_path: str,
        *,
        task: Optional[str]=None,  # Literal["transcribe", "translate", None]
        language: Optional[str]=None, ) -> Tokenizer:
    if language is not None:
        language = language.lower()
        if language not in LANGUAGES:
            if language in TO_LANGUAGE_CODE:
                language = TO_LANGUAGE_CODE[language]
            else:
                raise ValueError(f"Unsupported language: {language}")

    if multilingual:
        tokenizer_name = "multilingual"
        task = task or "transcribe"
        language = language or "en"
    else:
        tokenizer_name = "gpt2"
        task = None
        language = None

    tokenizer = build_tokenizer(
        resource_path=resource_path, name=tokenizer_name)
    all_special_ids: List[int] = tokenizer.all_special_ids
    sot: int = all_special_ids[1]
    translate: int = all_special_ids[-6]
    transcribe: int = all_special_ids[-5]

    langs = tuple(LANGUAGES.keys())
    sot_sequence = [sot]
    if language is not None:
        sot_sequence.append(sot + 1 + langs.index(language))
    if task is not None:
        sot_sequence.append(transcribe if task == "transcribe" else translate)

    return Tokenizer(
        tokenizer=tokenizer,
        language=language,
        sot_sequence=tuple(sot_sequence))
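The timestamp annotation in `decode_with_timestamps` maps each timestamp token to seconds in 0.02 s steps above `timestamp_begin`. This matches the whisper demo output earlier in this commit, where each segment's token list starts at 50364 and ends at 50614, i.e. 0.0 s to 5.0 s:

```python
timestamp_begin = 50364  # id of <|0.00|> in the multilingual vocabulary
token = 50614
print((token - timestamp_begin) * 0.02)  # 5.0, rendered as "<|5.00|>"
```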
@@ -0,0 +1,92 @@
# MIT License, Copyright (c) 2022 OpenAI.
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Modified from OpenAI Whisper 2022 (https://github.com/openai/whisper/whisper/utils.py)
import zlib
from typing import Iterator
from typing import TextIO


def exact_div(x, y):
    assert x % y == 0
    return x // y


def str2bool(string):
    str2val = {"True": True, "False": False}
    if string in str2val:
        return str2val[string]
    else:
        raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}")


def optional_int(string):
    return None if string == "None" else int(string)


def optional_float(string):
    return None if string == "None" else float(string)


def compression_ratio(text) -> float:
    return len(text) / len(zlib.compress(text.encode("utf-8")))


def format_timestamp(seconds: float,
                     always_include_hours: bool=False,
                     decimal_marker: str='.'):
    assert seconds >= 0, "non-negative timestamp expected"
    milliseconds = round(seconds * 1000.0)

    hours = milliseconds // 3_600_000
    milliseconds -= hours * 3_600_000

    minutes = milliseconds // 60_000
    milliseconds -= minutes * 60_000

    seconds = milliseconds // 1_000
    milliseconds -= seconds * 1_000

    hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else ""
    return f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}"


def write_txt(transcript: Iterator[dict], file: TextIO):
    for segment in transcript:
        print(segment['text'].strip(), file=file, flush=True)


def write_vtt(transcript: Iterator[dict], file: TextIO):
    print("WEBVTT\n", file=file)
    for segment in transcript:
        print(
            f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n"
            f"{segment['text'].strip().replace('-->', '->')}\n",
            file=file,
            flush=True, )


def write_srt(transcript: Iterator[dict], file: TextIO):
    """
    Write a transcript to a file in SRT format.

    Example usage:
        from pathlib import Path
        from whisper.utils import write_srt

        result = transcribe(model, audio_path, temperature=temperature, **args)

        # save SRT
        audio_basename = Path(audio_path).stem
        with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt:
            write_srt(result["segments"], file=srt)
    """
    for i, segment in enumerate(transcript, start=1):
        # write srt lines
        print(
            f"{i}\n"
            f"{format_timestamp(segment['start'], always_include_hours=True, decimal_marker=',')} --> "
            f"{format_timestamp(segment['end'], always_include_hours=True, decimal_marker=',')}\n"
            f"{segment['text'].strip().replace('-->', '->')}\n",
            file=file,
            flush=True, )
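`format_timestamp` is what renders the `[00:00.000 --> 00:05.000]` lines in the whisper demo output; the SRT variant only forces the hour field and swaps the decimal marker:

```python
print(format_timestamp(5.0))  # 00:05.000  (VTT style)
print(format_timestamp(5.0, always_include_hours=True, decimal_marker=","))
# 00:00:05,000  (SRT style)
```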
File diff suppressed because it is too large
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2022 OpenAI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,168 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path

import soundfile as sf
from timer import timer

from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_lite_am_output
from paddlespeech.t2s.exps.syn_utils import get_lite_predictor
from paddlespeech.t2s.exps.syn_utils import get_lite_voc_output
from paddlespeech.t2s.exps.syn_utils import get_sentences


def parse_args():
    parser = argparse.ArgumentParser(
        description="Paddle Lite inference with acoustic model & vocoder.")
    # acoustic model
    parser.add_argument(
        '--am',
        type=str,
        default='fastspeech2_csmsc',
        choices=[
            'speedyspeech_csmsc',
            'fastspeech2_csmsc',
            'fastspeech2_aishell3',
            'fastspeech2_ljspeech',
            'fastspeech2_vctk',
            'fastspeech2_mix',
        ],
        help='Choose acoustic model type of tts task.')
    parser.add_argument(
        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--tones_dict", type=str, default=None, help="tone vocabulary file.")
    parser.add_argument(
        "--speaker_dict", type=str, default=None, help="speaker id map file.")
    parser.add_argument(
        '--spk_id',
        type=int,
        default=0,
        help='spk id for multi speaker acoustic model')
    # voc
    parser.add_argument(
        '--voc',
        type=str,
        default='pwgan_csmsc',
        choices=[
            'pwgan_csmsc',
            'pwgan_aishell3',
            'pwgan_ljspeech',
            'pwgan_vctk',
            'mb_melgan_csmsc',
            'hifigan_csmsc',
            'hifigan_aishell3',
            'hifigan_ljspeech',
            'hifigan_vctk',
        ],
        help='Choose vocoder type of tts task.')
    # other
    parser.add_argument(
        '--lang',
        type=str,
        default='zh',
        help='Choose model language. zh or en or mix')
    parser.add_argument(
        "--text",
        type=str,
        help="text to synthesize, a 'utt_id sentence' pair per line")
    parser.add_argument(
        "--inference_dir", type=str, help="dir to save inference models")
    parser.add_argument("--output_dir", type=str, help="output dir")

    args, _ = parser.parse_known_args()
    return args


# only inference for models trained with csmsc now
def main():
    args = parse_args()

    # frontend
    frontend = get_frontend(
        lang=args.lang,
        phones_dict=args.phones_dict,
        tones_dict=args.tones_dict)

    # am_predictor
    am_predictor = get_lite_predictor(
        model_dir=args.inference_dir, model_file=args.am + "_x86.nb")
    # model: {model_name}_{dataset}
    am_dataset = args.am[args.am.rindex('_') + 1:]

    # voc_predictor
    voc_predictor = get_lite_predictor(
        model_dir=args.inference_dir, model_file=args.voc + "_x86.nb")

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    sentences = get_sentences(text_file=args.text, lang=args.lang)

    merge_sentences = True
    fs = 24000 if am_dataset != 'ljspeech' else 22050
    # warmup
    for utt_id, sentence in sentences[:3]:
        with timer() as t:
            mel = get_lite_am_output(
                input=sentence,
                am_predictor=am_predictor,
                am=args.am,
                frontend=frontend,
                lang=args.lang,
                merge_sentences=merge_sentences,
                speaker_dict=args.speaker_dict,
                spk_id=args.spk_id, )
            wav = get_lite_voc_output(voc_predictor=voc_predictor, input=mel)
        speed = wav.size / t.elapse
        rtf = fs / speed
        print(
            f"{utt_id}, mel: {mel.shape}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )

    print("warm up done!")

    N = 0
    T = 0
    for utt_id, sentence in sentences:
        with timer() as t:
            mel = get_lite_am_output(
                input=sentence,
                am_predictor=am_predictor,
                am=args.am,
                frontend=frontend,
                lang=args.lang,
                merge_sentences=merge_sentences,
                speaker_dict=args.speaker_dict,
                spk_id=args.spk_id, )
            wav = get_lite_voc_output(voc_predictor=voc_predictor, input=mel)

        N += wav.size
        T += t.elapse
        speed = wav.size / t.elapse
        rtf = fs / speed

        sf.write(output_dir / (utt_id + ".wav"), wav, samplerate=fs)
        print(
            f"{utt_id}, mel: {mel.shape}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )

        print(f"{utt_id} done!")
    print(f"generation speed: {N / T}Hz, RTF: {fs / (N / T) }")


if __name__ == "__main__":
    main()
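For context, the RTF this script prints is wall-clock synthesis time divided by the duration of the audio produced; a worked example with made-up numbers, mirroring the two lines `speed = wav.size / t.elapse` and `rtf = fs / speed`:

```python
# Hypothetical timing, for illustration only.
fs = 24000            # output sample rate for csmsc models
wav_size = 48000      # samples generated -> 2.0 s of audio
elapse = 0.5          # wall-clock seconds spent synthesizing

speed = wav_size / elapse  # 96000 samples generated per compute second
rtf = fs / speed           # 0.25, i.e. 4x faster than real time
assert rtf == elapse / (wav_size / fs)
```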
@@ -0,0 +1,230 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path

import numpy as np
import soundfile as sf
from timer import timer

from paddlespeech.t2s.exps.syn_utils import denorm
from paddlespeech.t2s.exps.syn_utils import get_chunks
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_lite_am_sublayer_output
from paddlespeech.t2s.exps.syn_utils import get_lite_predictor
from paddlespeech.t2s.exps.syn_utils import get_lite_streaming_am_output
from paddlespeech.t2s.exps.syn_utils import get_lite_voc_output
from paddlespeech.t2s.exps.syn_utils import get_sentences
from paddlespeech.t2s.exps.syn_utils import run_frontend
from paddlespeech.t2s.utils import str2bool


def parse_args():
    parser = argparse.ArgumentParser(
        description="Paddle Lite inference with acoustic model & vocoder.")
    # acoustic model
    parser.add_argument(
        '--am',
        type=str,
        default='fastspeech2_csmsc',
        choices=['fastspeech2_csmsc'],
        help='Choose acoustic model type of tts task.')
    parser.add_argument(
        "--am_stat",
        type=str,
        default=None,
        help="mean and standard deviation used to normalize spectrogram when training acoustic model."
    )
    parser.add_argument(
        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
    parser.add_argument(
        "--tones_dict", type=str, default=None, help="tone vocabulary file.")
    parser.add_argument(
        "--speaker_dict", type=str, default=None, help="speaker id map file.")
    parser.add_argument(
        '--spk_id',
        type=int,
        default=0,
        help='spk id for multi speaker acoustic model')
    # voc
    parser.add_argument(
        '--voc',
        type=str,
        default='pwgan_csmsc',
        choices=['pwgan_csmsc', 'mb_melgan_csmsc', 'hifigan_csmsc'],
        help='Choose vocoder type of tts task.')
    # other
    parser.add_argument(
        '--lang',
        type=str,
        default='zh',
        help='Choose model language. zh or en')
    parser.add_argument(
        "--text",
        type=str,
        help="text to synthesize, a 'utt_id sentence' pair per line")
    parser.add_argument(
        "--inference_dir", type=str, help="dir to save inference models")
    parser.add_argument("--output_dir", type=str, help="output dir")
    # inference

    # streaming related
    parser.add_argument(
        "--am_streaming",
        type=str2bool,
        default=False,
        help="whether use streaming acoustic model")
    parser.add_argument(
        "--block_size", type=int, default=42, help="block size of am streaming")
    parser.add_argument(
        "--pad_size", type=int, default=12, help="pad size of am streaming")

    args, _ = parser.parse_known_args()
    return args


# only inference for models trained with csmsc now
def main():
    args = parse_args()

    # frontend
    frontend = get_frontend(
        lang=args.lang,
        phones_dict=args.phones_dict,
        tones_dict=args.tones_dict)

    # am_predictor
    am_encoder_infer_predictor = get_lite_predictor(
        model_dir=args.inference_dir,
        model_file=args.am + "_am_encoder_infer" + "_x86.nb")
    am_decoder_predictor = get_lite_predictor(
        model_dir=args.inference_dir,
        model_file=args.am + "_am_decoder" + "_x86.nb")
    am_postnet_predictor = get_lite_predictor(
        model_dir=args.inference_dir,
        model_file=args.am + "_am_postnet" + "_x86.nb")
    am_mu, am_std = np.load(args.am_stat)
    # model: {model_name}_{dataset}
    am_dataset = args.am[args.am.rindex('_') + 1:]

    # voc_predictor
    voc_predictor = get_lite_predictor(
        model_dir=args.inference_dir, model_file=args.voc + "_x86.nb")

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    sentences = get_sentences(text_file=args.text, lang=args.lang)

    merge_sentences = True

    fs = 24000 if am_dataset != 'ljspeech' else 22050
    # warmup
    for utt_id, sentence in sentences[:3]:
        with timer() as t:
            normalized_mel = get_lite_streaming_am_output(
                input=sentence,
                am_encoder_infer_predictor=am_encoder_infer_predictor,
                am_decoder_predictor=am_decoder_predictor,
                am_postnet_predictor=am_postnet_predictor,
                frontend=frontend,
                lang=args.lang,
                merge_sentences=merge_sentences, )
            mel = denorm(normalized_mel, am_mu, am_std)
            wav = get_lite_voc_output(voc_predictor=voc_predictor, input=mel)
        speed = wav.size / t.elapse
        rtf = fs / speed
        print(
            f"{utt_id}, mel: {mel.shape}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )

    print("warm up done!")

    N = 0
    T = 0
    block_size = args.block_size
    pad_size = args.pad_size
    get_tone_ids = False
    for utt_id, sentence in sentences:
        with timer() as t:
            # frontend
            frontend_dict = run_frontend(
                frontend=frontend,
                text=sentence,
                merge_sentences=merge_sentences,
                get_tone_ids=get_tone_ids,
                lang=args.lang)
            phone_ids = frontend_dict['phone_ids']
            phones = phone_ids[0].numpy()
            # acoustic model
            orig_hs = get_lite_am_sublayer_output(
                am_encoder_infer_predictor, input=phones)

            if args.am_streaming:
                hss = get_chunks(orig_hs, block_size, pad_size)
                chunk_num = len(hss)
                mel_list = []
                for i, hs in enumerate(hss):
                    am_decoder_output = get_lite_am_sublayer_output(
                        am_decoder_predictor, input=hs)
                    am_postnet_output = get_lite_am_sublayer_output(
                        am_postnet_predictor,
                        input=np.transpose(am_decoder_output, (0, 2, 1)))
                    am_output_data = am_decoder_output + np.transpose(
                        am_postnet_output, (0, 2, 1))
                    normalized_mel = am_output_data[0]

                    sub_mel = denorm(normalized_mel, am_mu, am_std)
                    # clip output part of pad
                    if i == 0:
                        sub_mel = sub_mel[:-pad_size]
                    elif i == chunk_num - 1:
                        # the last chunk never has enough pad on its right side
                        sub_mel = sub_mel[pad_size:]
                    else:
                        # the chunks near the end may also lack right-side pad
                        sub_mel = sub_mel[pad_size:(block_size + pad_size) -
                                          sub_mel.shape[0]]
                    mel_list.append(sub_mel)
                mel = np.concatenate(mel_list, axis=0)

            else:
                am_decoder_output = get_lite_am_sublayer_output(
                    am_decoder_predictor, input=orig_hs)
                am_postnet_output = get_lite_am_sublayer_output(
                    am_postnet_predictor,
                    input=np.transpose(am_decoder_output, (0, 2, 1)))
                am_output_data = am_decoder_output + np.transpose(
                    am_postnet_output, (0, 2, 1))
                normalized_mel = am_output_data[0]
                mel = denorm(normalized_mel, am_mu, am_std)
            # vocoder
            wav = get_lite_voc_output(voc_predictor=voc_predictor, input=mel)

        N += wav.size
        T += t.elapse
        speed = wav.size / t.elapse
        rtf = fs / speed

        sf.write(output_dir / (utt_id + ".wav"), wav, samplerate=24000)
        print(
            f"{utt_id}, mel: {mel.shape}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
        )

        print(f"{utt_id} done!")
    print(f"generation speed: {N / T}Hz, RTF: {fs / (N / T) }")


if __name__ == "__main__":
    main()
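The pad-clipping branch above is the subtle part of streaming synthesis: each chunk carries up to `pad_size` frames of context on each side, and that overlap must be trimmed before concatenation or frames would repeat. For a full middle chunk, `(block_size + pad_size) - sub_mel.shape[0]` equals `-pad_size`, so the slice drops both pads; the negative index also handles shorter, under-padded chunks near the end. A toy sketch with made-up sizes (this mimics, but does not call, the real `get_chunks`):

```python
import numpy as np

# Toy sizes for illustration; the real script defaults to
# block_size=42, pad_size=12 over encoder hidden states.
block_size, pad_size = 4, 2
frames = np.arange(11)  # pretend these are 11 mel frames

# Each chunk is its block plus up to pad_size frames of context per side:
# chunk 0 has no left context, the final chunk has no right context.
chunks = [frames[0:6], frames[2:10], frames[6:11]]

# Trim the context exactly as the loop above does.
trimmed = [
    chunks[0][:-pad_size],          # drop right pad only
    chunks[1][pad_size:-pad_size],  # drop both pads
    chunks[2][pad_size:],           # drop left pad only
]
print(np.concatenate(trimmed))  # [ 0  1  2 ... 10] -> frames recovered once each
```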
@@ -0,0 +1 @@
../../../../utils
Some files were not shown because too many files have changed in this diff