This example contains code used to train a [parallel wavegan](http://arxiv.org/abs/1910.11480) model with CSMSC (Chinese Standard Mandarin Speech Corpus).
### Download and Extract
Download CSMSC from its [official website](https://test.data-baker.com/data/index/TNtts/) and extract it to `~/datasets`. The dataset will then be in the directory `~/datasets/BZNSYP`.
After processing the data, the ``BZNSYP`` directory will look like this:
```text
BZNSYP
├── Wave
│   └── *.wav files (audio speech)
├── PhoneLabeling
│   └── *.interval files (alignment between phoneme and duration)
└── ProsodyLabeling
    └── 000001-010000.txt (text with prosody labels in pinyin)
```
This experiment only uses the *.wav files from the `Wave` directory.
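
As a quick sanity check (a minimal sketch; the `~/datasets/BZNSYP` location simply follows the layout described above), you can confirm the audio files are where this example expects them:
```python
from pathlib import Path

# Expected location after extracting CSMSC, as described above.
wave_dir = Path.home() / "datasets" / "BZNSYP" / "Wave"

wav_files = sorted(wave_dir.glob("*.wav"))
print(f"Found {len(wav_files)} wav files under {wave_dir}")
assert wav_files, "No *.wav files found -- check that CSMSC was extracted to ~/datasets/BZNSYP"
```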
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) results to cut the silence at the edges of the audio.
You can download the pre-computed alignments from here: [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by following the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.
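
For example, here is a minimal sketch for fetching and unpacking the pre-computed alignments (placing them under `~/datasets` is an assumption; put them wherever your setup expects):
```python
import tarfile
import urllib.request
from pathlib import Path

URL = "https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz"

# Assumption: keep the alignments next to the BZNSYP dataset.
target_dir = Path.home() / "datasets"
target_dir.mkdir(parents=True, exist_ok=True)

archive = target_dir / "baker_alignment_tone.tar.gz"
if not archive.exists():
    urllib.request.urlretrieve(URL, archive)  # download the pre-computed MFA results

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path=target_dir)           # unpack the alignment files
```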