|
|
|
|
|
|
|
|
# FastSpeech2 + AISHELL-3 Voice Cloning
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This example contains code used to train a [FastSpeech2](https://arxiv.org/abs/2006.04558) model with [AISHELL-3](http://www.aishelltech.com/aishell_3). The trained model can be used for the voice cloning task; we refer to the model structure of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf). The general steps are as follows:
|
|
|
|
|
1. Speaker Encoder: We use Speaker Verification to train a speaker encoder. The datasets used in this task are different from those used in `FastSpeech2` because transcriptions are not needed, so we can use more datasets; refer to [ge2e](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/ge2e).
|
|
|
|
|
2. Synthesizer: We use the trained speaker encoder to generate a speaker embedding for each sentence in AISHELL-3. This embedding is an extra input of `FastSpeech2` and is concatenated with the encoder outputs.
|
|
|
|
|
3. Vocoder: We use [Parallel WaveGAN](http://arxiv.org/abs/1910.11480) as the neural vocoder; refer to [voc1](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc1).
|
|
|
|
|
|
|
|
|
|
## Dataset
|
|
|
|
|
### Download and Extract the dataset
|
|
|
|
|
Download AISHELL-3.
|
|
|
|
|
```bash
|
|
|
|
|
wget https://www.openslr.org/resources/93/data_aishell3.tgz
|
|
|
|
|
```
|
|
|
|
|
Extract AISHELL-3.
|
|
|
|
|
```bash
|
|
|
|
|
mkdir data_aishell3
|
|
|
|
|
tar zxvf data_aishell3.tgz -C data_aishell3
|
|
|
|
|
```
|
|
|
|
|
### Get MFA result of AISHELL-3 and Extract it
|
|
|
|
|
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get durations for aishell3_fastspeech2.
|
|
|
|
|
You can download it from here: [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train your own MFA model by referring to the [use_mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/use_mfa) (which uses MFA1.x for now) in our repo.
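For example, a minimal sketch of fetching and unpacking it (the extracted location is assumed to be `./aishell3_alignment_tone`, the path used in the *Get Started* section below; adjust if the archive unpacks elsewhere):

```bash
wget https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz
tar zxvf aishell3_alignment_tone.tar.gz
```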
|
|
|
|
|
|
|
|
|
|
## Pretrained GE2E model
|
|
|
|
|
We use a pretrained GE2E model to generate a speaker embedding for each sentence.
|
|
|
|
|
|
|
|
|
|
Download pretrained GE2E model from here [ge2e_ckpt_0.3.zip](https://paddlespeech.bj.bcebos.com/Parakeet/ge2e_ckpt_0.3.zip), and `unzip` it.
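For example (the URL is the one linked above; the extracted folder is assumed to be `./ge2e_ckpt_0.3`):

```bash
wget https://paddlespeech.bj.bcebos.com/Parakeet/ge2e_ckpt_0.3.zip
unzip ge2e_ckpt_0.3.zip
```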
|
|
|
|
|
|
|
|
|
|
## Get Started
|
|
|
|
|
Assume the path to the dataset is `~/datasets/data_aishell3`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
|
|
|
|
|
Assume the path to the pretrained ge2e model is `./ge2e_ckpt_0.3`.
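The commands below use a few shell variables. A minimal sketch of how they might be set is shown here; the names mirror the placeholders in this README, `run.sh` defines its own defaults, and values such as `gpus` and `train_output_path` are assumptions to adapt:

```bash
gpus=0                                        # GPU ids for CUDA_VISIBLE_DEVICES
conf_path=conf/default.yaml                   # training config (see the checkpoint listing below)
train_output_path=exp/default                 # where checkpoints and logs are written (assumed)
ckpt_name=snapshot_iter_96400.pdz             # a checkpoint name, e.g. from the pretrained model
ge2e_ckpt_path=./ge2e_ckpt_0.3/step-3000000   # GE2E checkpoint inside the unzipped folder
ref_audio_dir=ref_audio                       # reference audios for voice cloning
```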
|
|
|
|
|
|
|
|
|
|
Run the command below to
|
|
|
|
|
1. **source path**.
|
|
|
|
|
|
|
|
|
|
2. preprocess the dataset.
|
|
|
|
|
3. train the model.
|
|
|
|
|
|
|
|
|
|
4. synthesize waveform from `metadata.jsonl`.
|
|
|
|
|
5. start a voice cloning inference.
|
|
|
|
|
```bash
|
|
|
|
|
./run.sh
|
|
|
|
|
```
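`run.sh` chains all of the stages above. If it wires `stage`/`stop_stage` through the repo's `parse_options.sh` helper (as most recipes in this repo do; this is an assumption, check the script), you can run only a subset of stages, for example just the preprocessing stage:

```bash
./run.sh --stage 0 --stop-stage 0
```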
|
|
|
|
|
### Preprocess the dataset
|
|
|
|
|
```bash
|
|
|
|
|
|
|
|
|
|
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} ${ge2e_ckpt_path}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
|
|
|
|
|
```text
|
|
|
|
|
dump
|
|
|
|
|
├── dev
|
|
|
|
|
│ ├── norm
|
|
|
|
|
│ └── raw
|
|
|
|
|
├── embed
|
|
|
|
|
│ ├── SSB0005
|
|
|
|
|
│ ├── SSB0009
|
|
|
|
|
│ ├── ...
|
|
|
|
|
│ └── ...
|
|
|
|
|
├── phone_id_map.txt
|
|
|
|
|
├── speaker_id_map.txt
|
|
|
|
|
├── test
|
|
|
|
|
│ ├── norm
|
|
|
|
|
│ └── raw
|
|
|
|
|
└── train
|
|
|
|
|
├── energy_stats.npy
|
|
|
|
|
├── norm
|
|
|
|
|
├── pitch_stats.npy
|
|
|
|
|
├── raw
|
|
|
|
|
└── speech_stats.npy
|
|
|
|
|
```
|
|
|
|
|
The `embed` folder contains the generated speaker embeddings for each sentence in AISHELL-3, which have the same file structure as the wav files, and each embedding is saved in `.npy` format.
|
|
|
|
|
|
|
|
|
|
Computing the utterance embeddings can take x hours.
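As a quick sanity check, you could load one of the generated embeddings and print its shape (a rough sketch; the file layout under `dump/embed` follows the tree above):

```bash
emb=$(find dump/embed -name "*.npy" | head -n 1)
python3 -c "import numpy as np, sys; print(np.load(sys.argv[1]).shape)" "${emb}"
```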
|
|
|
|
|
|
|
|
|
|
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and a `raw` subfolder. The raw folder contains the speech, pitch, and energy features of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize the features are computed from the training set and are located in `dump/train/*_stats.npy`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, durations, the path of speech features, the path of pitch features, the path of energy features, the speaker, and the id of each utterance.
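To see what one record looks like, you can peek at the normalized training metadata (the exact path follows the tts3 recipe and is an assumption):

```bash
head -n 1 dump/train/norm/metadata.jsonl
```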
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The preprocessing step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but there is one more `ge2e/inference` step here, sketched below.
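A rough sketch of that extra step, following the GE2E inference interface used elsewhere in this repo (the variable names and the output path are placeholders; see `./local/preprocess.sh` for the exact invocation):

```bash
python3 ${BIN_DIR}/../ge2e/inference.py \
    --input=${input} \
    --output=dump/embed \
    --ngpu=1 \
    --checkpoint_path=${ge2e_ckpt_path}
```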
|
|
|
|
|
|
|
|
|
|
### Train the model
|
|
|
|
|
`./local/train.sh` calls `${BIN_DIR}/train.py`.
|
|
|
|
|
```bash
|
|
|
|
|
|
|
|
|
|
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
|
|
|
|
|
```
|
|
|
|
|
The training step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/train.py`; a rough sketch follows.
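A minimal sketch of what `./local/train.sh` roughly passes, assuming the same arguments as the aishell3 tts3 recipe plus the extra flag (check the script for the exact options and GPU count):

```bash
python3 ${BIN_DIR}/train.py \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --config=${conf_path} \
    --output-dir=${train_output_path} \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt \
    --voice-cloning=True
```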
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Synthesize
|
|
|
|
|
We use [parallel wavegan](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/voc1) as the neural vocoder.
|
|
|
|
|
Download pretrained parallel wavegan model from [pwg_aishell3_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/pwg_aishell3_ckpt_0.5.zip) and unzip it.
|
|
|
|
|
```bash
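# the checkpoint can be fetched first, e.g. (URL taken from the link above):
wget https://paddlespeech.bj.bcebos.com/Parakeet/pwg_aishell3_ckpt_0.5.zip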
|
|
|
|
|
|
|
|
|
|
unzip pwg_aishell3_ckpt_0.5.zip
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Parallel WaveGAN checkpoint contains files listed below.
|
|
|
|
|
```text
|
|
|
|
|
pwg_aishell3_ckpt_0.5
|
|
|
|
|
├── default.yaml # default config used to train parallel wavegan
|
|
|
|
|
├── feats_stats.npy # statistics used to normalize spectrogram when training parallel wavegan
|
|
|
|
|
└── snapshot_iter_1000000.pdz # generator parameters of parallel wavegan
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
|
|
|
|
|
```bash
|
|
|
|
|
|
|
|
|
|
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
|
|
|
|
|
```
|
|
|
|
|
The synthesizing step is very similar to that of [tts3](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/tts3), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/synthesize.py`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Voice Cloning
|
|
|
|
|
Assume there are some reference audios in `./ref_audio`.
|
|
|
|
|
```text
|
|
|
|
|
ref_audio
|
|
|
|
|
├── 001238.wav
|
|
|
|
|
├── LJ015-0254.wav
|
|
|
|
|
└── audio_self_test.mp3
|
|
|
|
|
```
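For example, the folder can simply be created and filled with your own recordings (both `.wav` and `.mp3` appear above); the file name here is just a placeholder:

```bash
mkdir -p ref_audio
cp /path/to/my_recording.wav ref_audio/
```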
|
|
|
|
|
`./local/voice_cloning.sh` calls `${BIN_DIR}/voice_cloning.py`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```bash
|
|
|
|
|
|
|
|
|
|
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${ref_audio_dir}
|
|
|
|
|
```
|
|
|
|
|
## Pretrained Model
|
|
|
|
|
|
|
|
|
|
[fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip](https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip)
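It can be fetched and unpacked like the other checkpoints, for example:

```bash
wget https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip
unzip fastspeech2_nosil_aishell3_vc1_ckpt_0.5.zip
```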
|
|
|
|
|
|
|
|
|
|
FastSpeech2 checkpoint contains files listed below.
|
|
|
|
|
(There is no need for `speaker_id_map.txt` here.)
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
fastspeech2_nosil_aishell3_ckpt_vc1_0.5
|
|
|
|
|
├── default.yaml # default config used to train fastspeech2
|
|
|
|
|
├── phone_id_map.txt # phone vocabulary file when training fastspeech2
|
|
|
|
|
├── snapshot_iter_96400.pdz # model parameters and optimizer states
|
|
|
|
|
└── speech_stats.npy # statistics used to normalize spectrogram when training fastspeech2
|
|
|
|
|
```
|
|
|
|
|