Update README for VITS.

pull/2268/head
艾梦 3 years ago
parent 1450e74b4f
commit 227ff5df8e

@ -476,7 +476,7 @@ PaddleSpeech supports a series of the most popular models. They are summarized in [r
</td>
</tr>
<tr>
<td rowspan="3">Voice Cloning</td>
<td rowspan="4">Voice Cloning</td>
<td>GE2E</td>
<td >Librispeech, etc.</td>
<td>
@ -496,6 +496,13 @@ PaddleSpeech supports a series of the most popular models. They are summarized in [r
<td>
<a href = "./examples/aishell3/vc1">ge2e-fastspeech2-aishell3</a>
</td>
</tr>
+ <tr>
+ <td>GE2E + VITS</td>
+ <td>AISHELL-3</td>
+ <td>
+ <a href = "./examples/aishell3/vits-vc">ge2e-vits-aishell3</a>
+ </td>
+ </tr>
<tr>
<td rowspan="3">End-to-End</td>

@ -601,7 +601,7 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
</td>
</tr>
<tr>
<td rowspan="3">声音克隆</td>
<td rowspan="4">声音克隆</td>
<td>GE2E</td>
<td >Librispeech, etc.</td>
<td>
@ -622,13 +622,20 @@ PaddleSpeech 的 **语音合成** 主要包含三个模块:文本前端、声
<a href = "./examples/aishell3/vc1">ge2e-fastspeech2-aishell3</a>
</td>
</tr>
+ <tr>
+ <td>GE2E + VITS</td>
+ <td>AISHELL-3</td>
+ <td>
+ <a href = "./examples/aishell3/vits-vc">ge2e-vits-aishell3</a>
+ </td>
+ </tr>
</tr>
<tr>
<td rowspan="3">端到端</td>
<td>VITS</td>
- <td >CSMSC</td>
+ <td>CSMSC / AISHELL-3</td>
<td>
<a href = "./examples/csmsc/vits">VITS-csmsc</a>
<a href = "./examples/csmsc/vits">VITS-csmsc</a> / <a href = "./examples/aishell3/vits">VITS-aishell3</a>
</td>
</tr>
</tbody>
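For anyone who wants to try the recipes newly linked in these tables, a minimal sketch is shown below. It assumes the recipes follow the standard PaddleSpeech layout with a top-level `run.sh`; the stage defaults are assumptions, not something specified by this commit.

```bash
# Sketch only; assumes a cloned PaddleSpeech repo with dependencies installed.
# Multi-speaker VITS trained on AISHELL-3:
(cd examples/aishell3/vits && ./run.sh)
# GE2E + VITS voice cloning on AISHELL-3:
(cd examples/aishell3/vits-vc && ./run.sh)
```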

@ -131,6 +131,8 @@ If you want to convert a speaker's audio file to the reference speaker's voice, run:
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path}
```
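For readers unfamiliar with the recipe, the positional variables above are normally set inside `run.sh`; the values below are only an illustrative sketch, and the config, checkpoint, and audio names are assumptions rather than the recipe's actual defaults.

```bash
# Illustrative values only; substitute the paths from your own training run.
gpus=0
conf_path=conf/default.yaml                        # assumed VITS config
train_output_path=exp/default                      # assumed training output dir
ckpt_name=snapshot_iter_150000.pdz                 # assumed VITS checkpoint name
ge2e_params_path=ge2e_ckpt/step-3000000.pdparams   # assumed GE2E speaker-encoder weights
add_blank=true
ref_audio_dir=ref_audio                            # wavs of the reference (target) speaker
src_audio_path=source.wav                          # audio to be converted
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path}
```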
+ <!-- TODO display these after we have trained the model -->
+ <!--
## Pretrained Model
The pretrained model can be downloaded here:
@ -148,3 +150,4 @@ vits_vc_aishell3_ckpt_1.1.0
```
ps: This ckpt is not good enough yet; a better one is still being trained.
+ -->

@ -163,6 +163,8 @@ optional arguments:
5. `--output_dir` is the directory to save synthesized audio files.
6. `--ngpu` is the number of GPUs to use; if `ngpu` == 0, the CPU is used (see the sketch below).
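As a quick illustration of item 6, the `ngpu` convention described above usually boils down to a Paddle device choice like the minimal sketch below; this illustrates the convention only and is not the script's actual code.

```python
# Minimal sketch of the ngpu convention: 0 selects the CPU, a positive value selects the GPU.
import paddle

ngpu = 0
paddle.set_device("cpu" if ngpu == 0 else "gpu")
print(paddle.get_device())  # e.g. "cpu" or "gpu:0"
```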
+ <!-- TODO display these after we have trained the model -->
+ <!--
## Pretrained Model
The pretrained model can be downloaded here:
@ -197,3 +199,4 @@ python3 ${BIN_DIR}/synthesize_e2e.py \
--text=${BIN_DIR}/../sentences.txt \
--add-blank=${add_blank}
```
+ -->
