pull/2449/head
吕志轩 3 years ago
parent ef298a3315
commit ca90dfb0f1

@ -1,4 +1,5 @@
# TransformerTTS with CSMSC
This example contains code used to train a TransformerTTS model with [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).
## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/TNtts/) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/BZNSYP`.
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes for TransformerTTS; the durations from MFA are not needed here.
You can download it from here: [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by following the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.
## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
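Before running the recipe, make sure both the corpus and the alignment result are unpacked where the scripts expect them. The commands below are a minimal sketch of that setup; the download tool and the archive names on your machine may differ.
```bash
mkdir -p ~/datasets
# extract the CSMSC archive downloaded from the official website so that
# the data ends up under ~/datasets/BZNSYP
# fetch and unpack the prepared MFA alignments into the current directory
wget https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz
tar xzvf baker_alignment_tone.tar.gz
```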
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from a text file.
```bash
./run.sh
```
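If you only want to run part of the pipeline, `run.sh` follows the usual PaddleSpeech convention of `--stage` and `--stop-stage` options (a sketch, assuming the standard `parse_options.sh` argument handling used by the other examples):
```bash
# run only data preprocessing
./run.sh --stage 0 --stop-stage 0
# run training through synthesis from metadata.jsonl
./run.sh --stage 1 --stop-stage 2
```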
@ -25,6 +46,19 @@ You can choose a range of stages you want to run, or set `stage` equal to `stop-
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── norm
    ├── raw
    └── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains speech features of each utterance, while the norm folder contains normalized ones. The statistics used to normalize features are computed from the training set, and are located in `dump/train/*_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, speaker, and the id of each utterance.
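A quick way to see what one record contains (a sketch; the field names follow the preprocessing script in this PR, and the exact set may differ between the `raw` and `norm` metadata):
```bash
# each line is one JSON object with fields such as utt_id, phones, text_lengths,
# speech_lengths, durations, speech, pitch, energy and speaker
head -n 1 dump/train/raw/metadata.jsonl
```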
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
[--ngpu NGPU] [--phones-dict PHONES_DICT]
Train a TransformerTTS model.
optional arguments:
-h, --help show this help message and exit
@ -76,6 +127,7 @@ optional arguments:
4. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.
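For reference, `./local/train.sh` essentially wraps a call like the one below; the config and output paths are the usual defaults of these examples and are only illustrative.
```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
```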
### Synthesizing
We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/voc1) as the neural vocoder.
Download the pretrained parallel wavegan model from [pwg_baker_ckpt_0.4.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_baker_ckpt_0.4.zip) and unzip it.
```bash
unzip pwg_baker_ckpt_0.4.zip
```
The Parallel WaveGAN checkpoint contains the files listed below.
```text
pwg_baker_ckpt_0.4
├── pwg_default.yaml # default config used to train parallel wavegan
├── pwg_snapshot_iter_400000.pdz # model parameters of parallel wavegan
└── pwg_stats.npy # statistics used to normalize spectrogram when training parallel wavegan
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h]
[--am {speedyspeech_csmsc,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,tacotron2_aishell3,transformer_csmsc}]
[--am_config AM_CONFIG] [--am_ckpt AM_CKPT]
[--am_stat AM_STAT] [--phones_dict PHONES_DICT]
[--tones_dict TONES_DICT] [--speaker_dict SPEAKER_DICT]
[--voice-cloning VOICE_CLONING]
[--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,wavernn_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,style_melgan_csmsc}]
[--voc_config VOC_CONFIG] [--voc_ckpt VOC_CKPT]
[--voc_stat VOC_STAT] [--ngpu NGPU]
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
Synthesize with acoustic model & vocoder
optional arguments:
-h, --help show this help message and exit
--am {speedyspeech_csmsc,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,tacotron2_aishell3,transformer_csmsc}
Choose acoustic model type of tts task.
--am_config AM_CONFIG
Config of acoustic model.
--am_ckpt AM_CKPT Checkpoint file of acoustic model.
--am_stat AM_STAT mean and standard deviation used to normalize
spectrogram when training acoustic model.
--phones_dict PHONES_DICT
phone vocabulary file.
--tones_dict TONES_DICT
tone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--voice-cloning VOICE_CLONING
whether training voice cloning model.
--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,wavernn_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,style_melgan_csmsc}
Choose vocoder type of tts task.
--voc_config VOC_CONFIG
Config of voc.
--voc_ckpt VOC_CKPT Checkpoint file of voc.
--voc_stat VOC_STAT mean and standard deviation used to normalize
spectrogram when training voc.
--ngpu NGPU if ngpu == 0, use cpu.
--test_metadata TEST_METADATA
test metadata.
--output_dir OUTPUT_DIR
output dir.
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/../synthesize_e2e.py`, which can synthesize waveform from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize_e2e.py [-h]
[--am {speedyspeech_csmsc,speedyspeech_aishell3,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,transformer_csmsc}]
[--am_config AM_CONFIG] [--am_ckpt AM_CKPT]
[--am_stat AM_STAT] [--phones_dict PHONES_DICT]
[--tones_dict TONES_DICT]
[--speaker_dict SPEAKER_DICT] [--spk_id SPK_ID]
[--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,style_melgan_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,wavernn_csmsc}]
[--voc_config VOC_CONFIG] [--voc_ckpt VOC_CKPT]
[--voc_stat VOC_STAT] [--lang LANG]
[--inference_dir INFERENCE_DIR] [--ngpu NGPU]
[--text TEXT] [--output_dir OUTPUT_DIR]
Synthesize with acoustic model & vocoder
optional arguments:
-h, --help show this help message and exit
--am {speedyspeech_csmsc,speedyspeech_aishell3,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,transformer_csmsc}
Choose acoustic model type of tts task.
--am_config AM_CONFIG
Config of acoustic model.
--am_ckpt AM_CKPT Checkpoint file of acoustic model.
--am_stat AM_STAT mean and standard deviation used to normalize
spectrogram when training acoustic model.
--phones_dict PHONES_DICT
phone vocabulary file.
--tones_dict TONES_DICT
tone vocabulary file.
--speaker_dict SPEAKER_DICT
speaker id map file.
--spk_id SPK_ID spk id for multi speaker acoustic model
--voc {pwgan_csmsc,pwgan_ljspeech,pwgan_aishell3,pwgan_vctk,mb_melgan_csmsc,style_melgan_csmsc,hifigan_csmsc,hifigan_ljspeech,hifigan_aishell3,hifigan_vctk,wavernn_csmsc}
Choose vocoder type of tts task.
--voc_config VOC_CONFIG
Config of voc.
--voc_ckpt VOC_CKPT Checkpoint file of voc.
--voc_stat VOC_STAT mean and standard deviation used to normalize
spectrogram when training voc.
--lang LANG Choose model language. zh or en
--inference_dir INFERENCE_DIR
dir to save inference models
--ngpu NGPU if ngpu == 0, use cpu.
--text TEXT text to synthesize, a 'utt_id sentence' pair per line.
--output_dir OUTPUT_DIR
output dir.
```
1. `--am` is the acoustic model type with the format {model_name}_{dataset}
2. `--am_config`, `--am_ckpt`, `--am_stat` and `--phones_dict` are arguments for the acoustic model, which correspond to the 4 files in the TransformerTTS pretrained model.
3. `--voc` is the vocoder type with the format {model_name}_{dataset}
4. `--voc_config`, `--voc_ckpt`, `--voc_stat` are arguments for the vocoder, which correspond to the 3 files in the parallel wavegan pretrained model.
5. `--lang` is the model language, which can be `zh` or `en`.
6. `--test_metadata` should be the metadata file in the normalized subfolder of `test` in the `dump` folder.
7. `--text` is the text file, which contains sentences to synthesize.
8. `--output_dir` is the directory to save synthesized audio files.
9. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
## Pretrained Model
Pretrained TransformerTTS model with no silence at the edge of audios:
- [transformer_tts_csmsc_ckpt.zip](https://pan.baidu.com/s/1b-qs5mlWwb75hHprRVqQXw?pwd=jjc3 )
Model | Step | eval/loss | eval/l1_loss | eval/mse_loss | eval/bce_loss| eval/attn_loss
:-------------:| :------------:| :-----: | :-----: | :--------: |:--------:|:---------:
default| 1(gpu) x 30600|0.57185|0.39614|0.14642|0.029|5.8e-05|
The TransformerTTS checkpoint contains the files listed below.
```text
transformer_tts_csmsc_ckpt
├── default.yaml             # default config used to train TransformerTTS
├── phone_id_map.txt         # phone vocabulary file when training TransformerTTS
├── snapshot_iter_30600.pdz  # model parameters and optimizer states
└── speech_stats.npy         # statistics used to normalize spectrogram when training TransformerTTS
```
You can use the following scripts to synthesize for `${BIN_DIR}/../sentences.txt` using pretrained TransformerTTS and parallel wavegan models.
```bash
source path.sh
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=transformer_tts_csmsc_ckpt/default.yaml \
--am_ckpt=transformer_tts_csmsc_ckpt/snapshot_iter_30600.pdz \
--am_stat=transformer_tts_csmsc_ckpt/speech_stats.npy \
--voc=pwgan_csmsc \
--voc_config=pwg_baker_ckpt_0.4/pwg_default.yaml \
--voc_ckpt=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
--voc_stat=pwg_baker_ckpt_0.4/pwg_stats.npy \
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=exp/default/test_e2e \
--inference_dir=exp/default/inference \
--phones_dict=transformer_tts_csmsc_ckpt/phone_id_map.txt
```
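If the command finishes successfully, the synthesized audio should appear under `exp/default/test_e2e`, one wav per line of `sentences.txt`, named by the utt_id of that line:
```bash
ls exp/default/test_e2e
# expect one <utt_id>.wav per input sentence
```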

@ -10,10 +10,18 @@ stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=pwgan_csmsc \
--voc_config=pwg_baker_ckpt_0.4/pwg_default.yaml \
--voc_ckpt=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
@ -29,9 +37,16 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=mb_melgan_csmsc \
--voc_config=mb_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=mb_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1000000.pdz\
@ -46,9 +61,16 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=style_melgan_csmsc \
--voc_config=style_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=style_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1500000.pdz \
@ -64,9 +86,16 @@ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=hifigan_csmsc \
--voc_config=hifigan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=hifigan_csmsc_ckpt_0.1.1/snapshot_iter_2500000.pdz \
@ -82,9 +111,16 @@ if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=wavernn_csmsc \
--voc_config=wavernn_csmsc_ckpt_0.2.0/default.yaml \
--voc_ckpt=wavernn_csmsc_ckpt_0.2.0/snapshot_iter_400000.pdz \
@ -92,4 +128,8 @@ if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test \
--phones_dict=dump/phone_id_map.txt
fi

@ -12,9 +12,16 @@ if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=pwgan_csmsc \
--voc_config=pwg_baker_ckpt_0.4/pwg_default.yaml \
--voc_ckpt=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
@ -22,9 +29,14 @@ if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
--lang=zh \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
# --inference_dir=${train_output_path}/inference
fi
# for more GAN Vocoders
@ -33,9 +45,16 @@ if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=mb_melgan_csmsc \
--voc_config=mb_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=mb_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1000000.pdz\
@ -54,9 +73,16 @@ if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=style_melgan_csmsc \
--voc_config=style_melgan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=style_melgan_csmsc_ckpt_0.1.1/snapshot_iter_1500000.pdz \
@ -74,9 +100,16 @@ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=hifigan_csmsc \
--voc_config=hifigan_csmsc_ckpt_0.1.1/default.yaml \
--voc_ckpt=hifigan_csmsc_ckpt_0.1.1/snapshot_iter_2500000.pdz \
@ -85,18 +118,32 @@ if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
# --inference_dir=${train_output_path}/inference
fi
# wavernn
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
echo "in wavernn syn_e2e"
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/../synthesize_e2e.py \
--am=transformer_csmsc \
--am_config=${config_path} \
--am_ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--am_stat=dump/train/speech_stats.npy \
--voc=wavernn_csmsc \
--voc_config=wavernn_csmsc_ckpt_0.2.0/default.yaml \
--voc_ckpt=wavernn_csmsc_ckpt_0.2.0/snapshot_iter_400000.pdz \
@ -105,5 +152,9 @@ if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/test_e2e \
--phones_dict=dump/phone_id_map.txt \
# --inference_dir=${train_output_path}/inference
fi

@ -47,9 +47,15 @@ model_alias = {
"paddlespeech.t2s.models.tacotron2:Tacotron2",
"tacotron2_inference":
"paddlespeech.t2s.models.tacotron2:Tacotron2Inference",
"transformer":
"paddlespeech.t2s.models.transformer_tts:TransformerTTS",
"transformer_inference":
"paddlespeech.t2s.models.transformer_tts:TransformerTTSInference",
# voc
"pwgan":
@ -199,6 +205,14 @@ def get_am_inference(
am = am_class(idim=vocab_size, odim=odim, **am_config["model"])
elif am_name == 'transformerTTS':
am = am_class(idim=vocab_size, odim=odim, **am_config["model"])
elif am_name == 'transformer':
am = am_class(idim=vocab_size, odim=odim, **am_config["model"])
else:
print("wrong am, please input right am!!!")
am.set_state_dict(paddle.load(am_ckpt)["main_params"])
am.eval()
am_mu, am_std = np.load(am_stat)

@ -107,7 +107,11 @@ def evaluate(args):
if args.voice_cloning and "spk_emb" in datum:
spk_emb = paddle.to_tensor(np.load(datum["spk_emb"]))
mel = am_inference(phone_ids, spk_emb=spk_emb)
elif am_name == 'transformer':
phone_ids = paddle.to_tensor(datum["text"])
spk_emb = None
# multi speaker
@ -143,7 +147,12 @@ def parse_args():
choices=[
'speedyspeech_csmsc', 'fastspeech2_csmsc', 'fastspeech2_ljspeech',
'fastspeech2_aishell3', 'fastspeech2_vctk', 'tacotron2_csmsc',
'tacotron2_ljspeech', 'tacotron2_aishell3', 'fastspeech2_mix',
'transformer_csmsc'
],
help='Choose acoustic model type of tts task.')
parser.add_argument(

@ -141,7 +141,11 @@ def evaluate(args):
mel = am_inference(part_phone_ids, part_tone_ids)
elif am_name == 'tacotron2':
mel = am_inference(part_phone_ids)
elif am_name == 'transformer':
mel = am_inference(part_phone_ids)
# vocoder
wav = voc_inference(mel)
@ -177,7 +181,11 @@ def parse_args():
'speedyspeech_csmsc', 'speedyspeech_aishell3', 'fastspeech2_csmsc',
'fastspeech2_ljspeech', 'fastspeech2_aishell3', 'fastspeech2_vctk',
'tacotron2_csmsc', 'tacotron2_ljspeech', 'fastspeech2_mix',
'transformer_csmsc'
],
help='Choose acoustic model type of tts task.')
parser.add_argument(

@ -27,7 +27,13 @@ import tqdm
import yaml
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.get_feats import Energy
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.datasets.get_feats import Pitch
from paddlespeech.t2s.datasets.preprocess_utils import compare_duration_and_mel_length
from paddlespeech.t2s.datasets.preprocess_utils import get_input_token
from paddlespeech.t2s.datasets.preprocess_utils import get_phn_dur
@ -41,6 +47,11 @@ def process_sentence(config: Dict[str, Any],
sentences: Dict,
output_dir: Path,
mel_extractor=None,
pitch_extractor=None,
energy_extractor=None,
cut_sil: bool=True,
spk_emb_dir: Path=None):
utt_id = fp.stem
@ -96,12 +107,35 @@ def process_sentence(config: Dict[str, Any],
mel_dir.mkdir(parents=True, exist_ok=True)
mel_path = mel_dir / (utt_id + "_speech.npy")
np.save(mel_path, logmel)
# extract pitch and energy
f0 = pitch_extractor.get_pitch(wav, duration=np.array(durations))
assert f0.shape[0] == len(durations)
f0_dir = output_dir / "data_pitch"
f0_dir.mkdir(parents=True, exist_ok=True)
f0_path = f0_dir / (utt_id + "_pitch.npy")
np.save(f0_path, f0)
energy = energy_extractor.get_energy(wav, duration=np.array(durations))
assert energy.shape[0] == len(durations)
energy_dir = output_dir / "data_energy"
energy_dir.mkdir(parents=True, exist_ok=True)
energy_path = energy_dir / (utt_id + "_energy.npy")
np.save(energy_path, energy)
record = {
"utt_id": utt_id,
"phones": phones,
"text_lengths": len(phones),
"speech_lengths": num_frames,
"durations": durations,
"speech": str(mel_path),
"pitch": str(f0_path),
"energy": str(energy_path),
"speaker": speaker
}
if spk_emb_dir:
@ -120,9 +154,18 @@ def process_sentences(config,
sentences: Dict,
output_dir: Path,
mel_extractor=None,
pitch_extractor=None,
energy_extractor=None,
nprocs: int=1,
cut_sil: bool=True,
spk_emb_dir: Path=None,
write_metadata_method: str='w'):
if nprocs == 1:
results = []
for fp in tqdm.tqdm(fps, total=len(fps)):
@ -132,6 +175,11 @@ def process_sentences(config,
sentences=sentences,
output_dir=output_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
cut_sil=cut_sil,
spk_emb_dir=spk_emb_dir)
if record:
@ -143,6 +191,10 @@ def process_sentences(config,
for fp in fps:
future = pool.submit(process_sentence, config, fp,
sentences, output_dir, mel_extractor,
pitch_extractor, energy_extractor,
cut_sil, spk_emb_dir)
future.add_done_callback(lambda p: progress.update())
futures.append(future)
@ -154,7 +206,12 @@ def process_sentences(config,
results.append(record)
results.sort(key=itemgetter("utt_id"))
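# 'a' (append mode) presumably lets repeated preprocessing runs (e.g. for mixed
# datasets) accumulate their records into a single metadata.jsonl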
with jsonlines.open(output_dir / "metadata.jsonl",
write_metadata_method) as writer:
for item in results:
writer.write(item)
print("Done")
@ -198,6 +255,16 @@ def main():
default=None,
type=str,
help="directory to speaker embedding files.")
parser.add_argument(
"--write_metadata_method",
default="w",
type=str,
choices=["w", "a"],
help="How the metadata.jsonl file is written.")
args = parser.parse_args()
rootdir = Path(args.rootdir).expanduser()
@ -292,6 +359,19 @@ def main():
n_mels=config.n_mels,
fmin=config.fmin,
fmax=config.fmax)
pitch_extractor = Pitch(
sr=config.fs,
hop_length=config.n_shift,
f0min=config.f0min,
f0max=config.f0max)
energy_extractor = Energy(
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window)
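# NOTE: pitch/energy extraction mirrors FastSpeech2-style preprocessing;
# TransformerTTS itself only consumes the mel features and speech_stats.npy.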
# process for the 3 sections
if train_wav_files:
@ -301,9 +381,18 @@ def main():
sentences=sentences,
output_dir=train_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=args.num_cpu,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method)
if dev_wav_files:
process_sentences(
config=config,
@ -311,8 +400,16 @@ def main():
sentences=sentences,
output_dir=dev_dump_dir,
mel_extractor=mel_extractor,
=======
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method)
if test_wav_files:
process_sentences(
config=config,
@ -320,9 +417,18 @@ def main():
sentences=sentences,
output_dir=test_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=args.num_cpu,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method)
if __name__ == "__main__":
