@@ -18,6 +18,7 @@ Run the command below to
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - `--stage` controls the vocoder model during synthesis (0 = pwgan, 1 = hifigan).
    - synthesize waveform from a text file.
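The steps above are typically gated by stage variables in the example's run script. The sketch below is illustrative only, with assumed variable names and step numbers, not the real script's contents:

```shell
# Hypothetical sketch of how a run script gates these steps on
# stage/stop_stage; names and numbering are assumptions for illustration.
stage=3
stop_stage=4
steps_run=""
if [ "${stage}" -le 3 ] && [ "${stop_stage}" -ge 3 ]; then
    steps_run="${steps_run}train "       # step 3: train the model
fi
if [ "${stage}" -le 4 ] && [ "${stop_stage}" -ge 4 ]; then
    steps_run="${steps_run}synthesize "  # step 4: synthesize wavs
fi
echo "running: ${steps_run}"
```

Setting `stage` and `stop_stage` to the same number would run a single step in isolation.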
```bash
@@ -99,9 +100,8 @@ pwg_baker_ckpt_0.4
```
`./local/synthesize.sh` calls `${BIN_DIR}/../synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh --stage 0 ${conf_path} ${train_output_path} ${ckpt_name}
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
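For intuition, the `metadata.jsonl` that `synthesize.py` reads is a JSON-lines file with one utterance per line. The field names and path below are invented for illustration and may not match the real metadata:

```shell
# Hypothetical metadata.jsonl entry; real field names and paths may differ.
line='{"utt_id": "009901", "speech": "dump/test/norm/009901.npy"}'
# Pull a field out with python3's json module (avoids a jq dependency).
utt_id=$(python3 -c 'import json, sys; print(json.loads(sys.argv[1])["utt_id"])' "${line}")
echo "would write ${utt_id}.wav"
```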
`--stage` controls the vocoder used during synthesis: `0` selects the `pwgan` model and `1` selects the `hifigan` model.
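A minimal sketch of that mapping, with a hypothetical helper name (the real `synthesize.sh` wires this choice up through its own variables):

```shell
# Hypothetical helper mapping a stage number to a vocoder name;
# illustrative only, not the actual script logic.
select_vocoder() {
    if [ "$1" -eq 0 ]; then
        echo "pwgan"
    else
        echo "hifigan"
    fi
}
echo "stage 0 -> $(select_vocoder 0)"
echo "stage 1 -> $(select_vocoder 1)"
```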
```text
usage: synthesize.py [-h]
                     [--am {speedyspeech_csmsc,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech,tacotron2_aishell3}]
@@ -148,9 +148,8 @@ optional arguments:
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/../synthesize_e2e.py`, which can synthesize waveform from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh --stage 0 ${conf_path} ${train_output_path} ${ckpt_name}
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
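As a rough sketch of the text input (assumed here to be one utterance per line, an id followed by the sentence; check the script's `--text` help for the actual expected format):

```shell
# Hypothetical input file for text-to-wav synthesis; format is assumed.
cat > sentences.txt <<'EOF'
001 hello world
002 good morning
EOF
count=0
# Each line: <utt_id> <sentence>; the utt_id would name the output wav.
while read -r utt_id sentence; do
    count=$((count + 1))
    echo "would synthesize ${utt_id}.wav from: ${sentence}"
done < sentences.txt
```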
`--stage` controls the vocoder used during synthesis: `0` selects the `pwgan` model and `1` selects the `hifigan` model.
```text
usage: synthesize_e2e.py [-h]
                         [--am {speedyspeech_csmsc,speedyspeech_aishell3,fastspeech2_csmsc,fastspeech2_ljspeech,fastspeech2_aishell3,fastspeech2_vctk,tacotron2_csmsc,tacotron2_ljspeech}]