Merge branch 'develop' of https://github.com/PaddlePaddle/PaddleSpeech into add_vc2
commit 35c6ffa90b
@ -0,0 +1,27 @@
(简体中文|[English](./README.md))

# Metaverse

## Introduction

Metaverse is a new form of Internet application and social interaction that integrates a variety of new technologies to produce virtual reality.

This demo makes a celebrity in a picture "speak". By combining the `TTS` module of `PaddleSpeech` with `PaddleGAN`, we integrate the installation and the specific modules into one shell script.

## Usage

You can use the `TTS` module of `PaddleSpeech` together with `PaddleGAN` to make your favorite person say the specified content and build your own virtual human.

Run `run.sh` to complete all the basic procedures, including installation.

```bash
./run.sh
```

In `run.sh`, `source path.sh` is executed first to set the environment variables.

If you want to try your own sentences, replace the sentences in `sentences.txt`.

If you want to try other images, replace `download/Lamarr.png` in the shell script.

The result has been shown in our [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
@ -1,14 +1,13 @@
aiofiles
faiss-cpu
fastapi
librosa
numpy
paddlenlp
paddlepaddle
paddlespeech
pydantic
python-multipart
scikit_learn
SoundFile
starlette
uvicorn
@ -1,23 +1,23 @@
from paddlenlp import Taskflow


class NLP:
    def __init__(self, ie_model_path=None):
        schema = ["时间", "出发地", "目的地", "费用"]
        if ie_model_path:
            self.ie_model = Taskflow(
                "information_extraction",
                schema=schema,
                task_path=ie_model_path)
        else:
            self.ie_model = Taskflow("information_extraction", schema=schema)

        self.dialogue_model = Taskflow("dialogue")

    def chat(self, text):
        result = self.dialogue_model([text])
        return result[0]

    def ie(self, text):
        result = self.ie_model(text)
        return result
@ -1,18 +1,13 @@
import random


def randName(n=5):
    return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba', n))


def SuccessRequest(result=None, message="ok"):
    return {"code": 0, "result": result, "message": message}


def ErrorRequest(result=None, message="error"):
    return {"code": -1, "result": result, "message": message}
@ -0,0 +1,20 @@
(简体中文|[English](./README.md))

# Story Talker

## Introduction

Storybooks are very important for children's early education, but parents usually do not have enough time to read them to their kids. Very young children may not recognize the Chinese characters in a storybook, and sometimes children just want to "listen" rather than "read".

You can use `PaddleOCR` to extract the text of a storybook and have it read aloud by the `TTS` module of `PaddleSpeech`.

## Usage

Run the following command to get started:

```bash
./run.sh
```

The result has been shown in the [notebook](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/tutorial/tts/tts_tutorial.ipynb).
@ -0,0 +1,33 @@
(简体中文|[English](./README.md))

# Style FastSpeech2

## Introduction

[FastSpeech2](https://arxiv.org/abs/2006.04558) is a classical acoustic model for speech synthesis. It introduces controllable speech inputs, including `phoneme duration`, `energy`, and `pitch`.

At the prediction stage, you can change these variables to get some interesting results.

For example:

1. The `duration` in `FastSpeech2` can control the speed of the audio while keeping the `pitch` unchanged. (In some speech tools, increasing the speed also raises the pitch, and vice versa.)
2. When we set the `pitch` of a sentence to its average value and set the `tones` of the phonemes to `1`, we get a `robot-style` timbre.
3. When we raise the `pitch` of an adult female voice (by a fixed ratio), we get a `child-style` timbre.

The `duration` and `pitch` of different phonemes in a sentence can have different scales. You can set different scale ratios to emphasize or weaken the pronunciation of certain phonemes; a small illustrative sketch of these controls follows below.
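The following is a minimal, hypothetical sketch (plain NumPy, not the `style_syn.py` API) of the kind of transforms described above: flattening the pitch contour to its mean for a robot-style voice, raising it by a fixed ratio for a child-style voice, and stretching durations to slow speech down.

```python
import numpy as np

# Made-up per-phoneme predictions; real values come from FastSpeech2's variance predictors.
pitch = np.array([180.0, 220.0, 260.0, 240.0, 200.0])   # F0 in Hz
durations = np.array([5, 8, 6, 7, 5])                    # frames per phoneme

robot_pitch = np.full_like(pitch, pitch.mean())          # constant pitch -> robot-style timbre
child_pitch = pitch * 1.5                                # fixed ratio up -> child-style timbre
slow_durations = np.round(durations * 1.2).astype(int)   # longer durations -> slower speech, pitch unchanged

print(robot_pitch, child_pitch, slow_durations)
```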
## Run

Run the following command to get started:

```bash
./run.sh
```

In `run.sh`, `source path.sh` is executed first to set the environment variables.

If you want to try your own sentences, replace the sentences in `sentences.txt`.

For more details, please see `style_syn.py`.

Audio samples can be found at [style-control-in-fastspeech2](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html#style-control-in-fastspeech2).
@ -0,0 +1,154 @@
|
||||
# VITS with AISHELL-3
|
||||
This example contains code used to train a [VITS](https://arxiv.org/abs/2106.06103) model with [AISHELL-3](http://www.aishelltech.com/aishell_3). The trained model can be used in the voice cloning task. We refer to the model structure of [Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis](https://arxiv.org/pdf/1806.04558.pdf). The general steps are as follows:
|
||||
1. Speaker Encoder: We use a speaker verification task to train a speaker encoder. The datasets used for this task are different from those used for `VITS`: since transcriptions are not needed, we can use more data; refer to [ge2e](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/ge2e).
|
||||
2. Synthesizer and Vocoder: We use the trained speaker encoder to generate a speaker embedding for each sentence in AISHELL-3. This embedding is an extra input of `VITS` that is concatenated with the encoder outputs (see the sketch below). The vocoder is part of `VITS` because of its end-to-end structure.
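Conceptually, the conditioning works like the following sketch (the shapes are assumptions for illustration, not the exact PaddleSpeech internals):

```python
import numpy as np

T, enc_dim, spk_dim = 120, 192, 256        # frames, text-encoder channels, GE2E embedding size
encoder_out = np.random.randn(T, enc_dim)  # per-frame text-encoder outputs
spk_embed = np.random.randn(spk_dim)       # one GE2E embedding per utterance

# Broadcast the utterance-level embedding over time and join it with the encoder outputs.
conditioned = np.concatenate([encoder_out, np.tile(spk_embed, (T, 1))], axis=-1)
print(conditioned.shape)  # (120, 448)
```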
|
||||
|
||||
## Dataset
|
||||
### Download and Extract
|
||||
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
|
||||
|
||||
### Get MFA Result and Extract
|
||||
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get the phonemes for VITS; the durations produced by MFA are not needed here.
|
||||
You can download it from here: [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which uses MFA1.x for now) in our repo.
|
||||
|
||||
## Pretrained GE2E Model
|
||||
We use a pretrained GE2E model to generate a speaker embedding for each sentence.
|
||||
|
||||
Download the pretrained GE2E model from here: [ge2e_ckpt_0.3.zip](https://bj.bcebos.com/paddlespeech/Parakeet/released_models/ge2e/ge2e_ckpt_0.3.zip), and `unzip` it.
|
||||
|
||||
## Get Started
|
||||
Assume the path to the dataset is `~/datasets/data_aishell3`.
|
||||
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
|
||||
Assume the path to the pretrained ge2e model is `./ge2e_ckpt_0.3`.
|
||||
|
||||
Run the command below to
|
||||
1. **source path**.
|
||||
2. preprocess the dataset.
|
||||
3. train the model.
|
||||
4. synthesize waveform from `metadata.jsonl`.
|
||||
5. start a voice cloning inference.
|
||||
|
||||
```bash
|
||||
./run.sh
|
||||
```
|
||||
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
|
||||
```bash
|
||||
./run.sh --stage 0 --stop-stage 0
|
||||
```
|
||||
|
||||
### Data Preprocessing
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} ${ge2e_ckpt_path}
|
||||
```
|
||||
When it is done, a `dump` folder is created in the current directory. Its structure is listed below.
|
||||
|
||||
```text
|
||||
dump
|
||||
├── dev
|
||||
│ ├── norm
|
||||
│ └── raw
|
||||
├── embed
|
||||
│ ├── SSB0005
|
||||
│ ├── SSB0009
|
||||
│ ├── ...
|
||||
│ └── ...
|
||||
├── phone_id_map.txt
|
||||
├── speaker_id_map.txt
|
||||
├── test
|
||||
│ ├── norm
|
||||
│ └── raw
|
||||
└── train
|
||||
├── feats_stats.npy
|
||||
├── norm
|
||||
└── raw
|
||||
```
|
||||
The `embed` folder contains the generated speaker embedding for each sentence in AISHELL-3; it mirrors the file structure of the wav files, and each embedding is stored in `.npy` format.
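Each embedding can be inspected directly, for instance (the utterance id below is only a hypothetical example):

```python
import numpy as np

# dump/embed/<speaker>/<utt_id>.npy mirrors the layout of the wav files.
embed = np.load("dump/embed/SSB0005/SSB00050001.npy")  # hypothetical utterance id
print(embed.shape)  # expected to match spk_embed_dim (256) in conf/default.yaml
```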
|
||||
|
||||
Computing the utterance embeddings can take x hours.
|
||||
|
||||
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the wave and linear spectrogram of each utterance, while the norm folder contains normalized ones. The statistics used to normalize the features are computed from the training set and are stored in `dump/train/feats_stats.npy`.
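As a rough sketch of how those statistics are applied (the exact layout of `feats_stats.npy` is an assumption here; `normalize.py` is the authoritative implementation):

```python
import numpy as np

# feats_stats.npy is assumed to stack the per-dimension mean and scale computed by
# compute_statistics.py; normalization is then a z-score with training-set statistics.
stats = np.load("dump/train/feats_stats.npy")
mean, scale = stats[0], stats[1]

feats = np.random.randn(100, mean.shape[0])  # fake utterance: frames x feature bins
normalized = (feats - mean) / scale
```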
|
||||
|
||||
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains utterance ids, phones, text_lengths, feats, feats_lengths, the paths of the linear spectrogram features and raw waves, and the speaker of each utterance.
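A metadata file can be read line by line as JSON, for example (a minimal sketch):

```python
import json

# Each line of metadata.jsonl is one JSON object describing one utterance.
with open("dump/train/norm/metadata.jsonl") as f:
    records = [json.loads(line) for line in f]

print(len(records), sorted(records[0].keys()))  # fields such as utt_id, phones, feats, speaker, ...
```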
|
||||
|
||||
The preprocessing step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but there is one more `ge2e/inference` step here.
|
||||
|
||||
### Model Training
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
|
||||
```
|
||||
The training step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/train.py`.
|
||||
|
||||
### Synthesizing
|
||||
|
||||
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
|
||||
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
|
||||
```
|
||||
```text
|
||||
usage: synthesize.py [-h] [--config CONFIG] [--ckpt CKPT]
|
||||
[--phones_dict PHONES_DICT] [--speaker_dict SPEAKER_DICT]
|
||||
[--voice-cloning VOICE_CLONING] [--ngpu NGPU]
|
||||
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
|
||||
|
||||
Synthesize with VITS
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG Config of VITS.
|
||||
--ckpt CKPT Checkpoint file of VITS.
|
||||
--phones_dict PHONES_DICT
|
||||
phone vocabulary file.
|
||||
--speaker_dict SPEAKER_DICT
|
||||
speaker id map file.
|
||||
--voice-cloning VOICE_CLONING
|
||||
whether training voice cloning model.
|
||||
--ngpu NGPU if ngpu == 0, use cpu.
|
||||
--test_metadata TEST_METADATA
|
||||
test metadata.
|
||||
--output_dir OUTPUT_DIR
|
||||
output dir.
|
||||
```
|
||||
The synthesizing step is very similar to that of [vits](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vits), but we should set `--voice-cloning=True` when calling `${BIN_DIR}/synthesize.py`.
|
||||
|
||||
### Voice Cloning
|
||||
Assume there are some reference audios in `./ref_audio`
|
||||
```text
|
||||
ref_audio
|
||||
├── 001238.wav
|
||||
├── LJ015-0254.wav
|
||||
└── audio_self_test.mp3
|
||||
```
|
||||
`./local/voice_cloning.sh` calls `${BIN_DIR}/voice_cloning.py`
|
||||
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir}
|
||||
```
|
||||
|
||||
If you want to convert the voice of a source audio file to that of the reference speaker, run:
|
||||
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} ${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path}
|
||||
```
|
||||
|
||||
<!-- TODO display these after we trained the model -->
|
||||
<!--
|
||||
## Pretrained Model
|
||||
|
||||
The pretrained model can be downloaded here:
|
||||
|
||||
- [vits_vc_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/vits/vits_vc_aishell3_ckpt_1.1.0.zip) (add_blank=true)
|
||||
|
||||
VITS checkpoint contains files listed below.
|
||||
(There is no need for `speaker_id_map.txt` here )
|
||||
|
||||
```text
|
||||
vits_vc_aishell3_ckpt_1.1.0
|
||||
├── default.yaml # default config used to train vits
|
||||
├── phone_id_map.txt # phone vocabulary file when training vits
|
||||
└── snapshot_iter_333000.pdz # model parameters and optimizer states
|
||||
```
|
||||
|
||||
P.S. This checkpoint is not good enough yet; a better one is still being trained.
|
||||
|
||||
-->
|
@ -0,0 +1,185 @@
|
||||
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
|
||||
# memory. It takes around 2 weeks to finish the training
|
||||
# but a 100k-iteration model should already generate reasonable results.
|
||||
###########################################################
|
||||
# FEATURE EXTRACTION SETTING #
|
||||
###########################################################
|
||||
|
||||
fs: 22050 # sr
|
||||
n_fft: 1024 # FFT size (samples).
|
||||
n_shift: 256 # Hop size (samples), about 11.6 ms at 22050 Hz.
|
||||
win_length: null # Window length (samples).
|
||||
# If set to null, it will be the same as fft_size.
|
||||
window: "hann" # Window function.
|
||||
|
||||
|
||||
##########################################################
|
||||
# TTS MODEL SETTING #
|
||||
##########################################################
|
||||
model:
|
||||
# generator related
|
||||
generator_type: vits_generator
|
||||
generator_params:
|
||||
hidden_channels: 192
|
||||
spk_embed_dim: 256
|
||||
global_channels: 256
|
||||
segment_size: 32
|
||||
text_encoder_attention_heads: 2
|
||||
text_encoder_ffn_expand: 4
|
||||
text_encoder_blocks: 6
|
||||
text_encoder_positionwise_layer_type: "conv1d"
|
||||
text_encoder_positionwise_conv_kernel_size: 3
|
||||
text_encoder_positional_encoding_layer_type: "rel_pos"
|
||||
text_encoder_self_attention_layer_type: "rel_selfattn"
|
||||
text_encoder_activation_type: "swish"
|
||||
text_encoder_normalize_before: True
|
||||
text_encoder_dropout_rate: 0.1
|
||||
text_encoder_positional_dropout_rate: 0.0
|
||||
text_encoder_attention_dropout_rate: 0.1
|
||||
use_macaron_style_in_text_encoder: True
|
||||
use_conformer_conv_in_text_encoder: False
|
||||
text_encoder_conformer_kernel_size: -1
|
||||
decoder_kernel_size: 7
|
||||
decoder_channels: 512
|
||||
decoder_upsample_scales: [8, 8, 2, 2]
|
||||
decoder_upsample_kernel_sizes: [16, 16, 4, 4]
|
||||
decoder_resblock_kernel_sizes: [3, 7, 11]
|
||||
decoder_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
|
||||
use_weight_norm_in_decoder: True
|
||||
posterior_encoder_kernel_size: 5
|
||||
posterior_encoder_layers: 16
|
||||
posterior_encoder_stacks: 1
|
||||
posterior_encoder_base_dilation: 1
|
||||
posterior_encoder_dropout_rate: 0.0
|
||||
use_weight_norm_in_posterior_encoder: True
|
||||
flow_flows: 4
|
||||
flow_kernel_size: 5
|
||||
flow_base_dilation: 1
|
||||
flow_layers: 4
|
||||
flow_dropout_rate: 0.0
|
||||
use_weight_norm_in_flow: True
|
||||
use_only_mean_in_flow: True
|
||||
stochastic_duration_predictor_kernel_size: 3
|
||||
stochastic_duration_predictor_dropout_rate: 0.5
|
||||
stochastic_duration_predictor_flows: 4
|
||||
stochastic_duration_predictor_dds_conv_layers: 3
|
||||
# discriminator related
|
||||
discriminator_type: hifigan_multi_scale_multi_period_discriminator
|
||||
discriminator_params:
|
||||
scales: 1
|
||||
scale_downsample_pooling: "AvgPool1D"
|
||||
scale_downsample_pooling_params:
|
||||
kernel_size: 4
|
||||
stride: 2
|
||||
padding: 2
|
||||
scale_discriminator_params:
|
||||
in_channels: 1
|
||||
out_channels: 1
|
||||
kernel_sizes: [15, 41, 5, 3]
|
||||
channels: 128
|
||||
max_downsample_channels: 1024
|
||||
max_groups: 16
|
||||
bias: True
|
||||
downsample_scales: [2, 2, 4, 4, 1]
|
||||
nonlinear_activation: "leakyrelu"
|
||||
nonlinear_activation_params:
|
||||
negative_slope: 0.1
|
||||
use_weight_norm: True
|
||||
use_spectral_norm: False
|
||||
follow_official_norm: False
|
||||
periods: [2, 3, 5, 7, 11]
|
||||
period_discriminator_params:
|
||||
in_channels: 1
|
||||
out_channels: 1
|
||||
kernel_sizes: [5, 3]
|
||||
channels: 32
|
||||
downsample_scales: [3, 3, 3, 3, 1]
|
||||
max_downsample_channels: 1024
|
||||
bias: True
|
||||
nonlinear_activation: "leakyrelu"
|
||||
nonlinear_activation_params:
|
||||
negative_slope: 0.1
|
||||
use_weight_norm: True
|
||||
use_spectral_norm: False
|
||||
# others
|
||||
sampling_rate: 22050 # needed in the inference for saving wav
|
||||
cache_generator_outputs: True # whether to cache generator outputs in the training
|
||||
|
||||
###########################################################
|
||||
# LOSS SETTING #
|
||||
###########################################################
|
||||
# loss function related
|
||||
generator_adv_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
loss_type: mse # loss type, "mse" or "hinge"
|
||||
discriminator_adv_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
loss_type: mse # loss type, "mse" or "hinge"
|
||||
feat_match_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
average_by_layers: False # whether to average loss value by #layers of each discriminator
|
||||
include_final_outputs: True # whether to include final outputs for loss calculation
|
||||
mel_loss_params:
|
||||
fs: 22050 # must be the same as the training data
|
||||
fft_size: 1024 # fft points
|
||||
hop_size: 256 # hop size
|
||||
win_length: null # window length
|
||||
window: hann # window type
|
||||
num_mels: 80 # number of Mel basis
|
||||
fmin: 0 # minimum frequency for Mel basis
|
||||
fmax: null # maximum frequency for Mel basis
|
||||
log_base: null # null represent natural log
|
||||
|
||||
###########################################################
|
||||
# ADVERSARIAL LOSS SETTING #
|
||||
###########################################################
|
||||
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
|
||||
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
|
||||
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
|
||||
lambda_dur: 1.0 # loss scaling coefficient for duration loss
|
||||
lambda_kl: 1.0 # loss scaling coefficient for KL divergence loss
|
||||
# others
|
||||
sampling_rate: 22050 # needed in the inference for saving wav
|
||||
cache_generator_outputs: True # whether to cache generator outputs in the training
|
||||
|
||||
|
||||
###########################################################
|
||||
# DATA LOADER SETTING #
|
||||
###########################################################
|
||||
batch_size: 50 # Batch size.
|
||||
num_workers: 4 # Number of workers in DataLoader.
|
||||
|
||||
##########################################################
|
||||
# OPTIMIZER & SCHEDULER SETTING #
|
||||
##########################################################
|
||||
# optimizer setting for generator
|
||||
generator_optimizer_params:
|
||||
beta1: 0.8
|
||||
beta2: 0.99
|
||||
epsilon: 1.0e-9
|
||||
weight_decay: 0.0
|
||||
generator_scheduler: exponential_decay
|
||||
generator_scheduler_params:
|
||||
learning_rate: 2.0e-4
|
||||
gamma: 0.999875
|
||||
|
||||
# optimizer setting for discriminator
|
||||
discriminator_optimizer_params:
|
||||
beta1: 0.8
|
||||
beta2: 0.99
|
||||
epsilon: 1.0e-9
|
||||
weight_decay: 0.0
|
||||
discriminator_scheduler: exponential_decay
|
||||
discriminator_scheduler_params:
|
||||
learning_rate: 2.0e-4
|
||||
gamma: 0.999875
|
||||
generator_first: False # whether to start updating generator first
|
||||
|
||||
##########################################################
|
||||
# OTHER TRAINING SETTING #
|
||||
##########################################################
|
||||
num_snapshots: 10 # max number of snapshots to keep while training
|
||||
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
|
||||
save_interval_steps: 1000 # Interval steps to save checkpoint.
|
||||
eval_interval_steps: 250 # Interval steps to evaluate the network.
|
||||
seed: 777 # random seed number
|
@ -0,0 +1,79 @@
|
||||
#!/bin/bash
|
||||
|
||||
stage=0
|
||||
stop_stage=100
|
||||
|
||||
config_path=$1
|
||||
add_blank=$2
|
||||
ge2e_ckpt_path=$3
|
||||
|
||||
# gen speaker embedding
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
python3 ${MAIN_ROOT}/paddlespeech/vector/exps/ge2e/inference.py \
|
||||
--input=~/datasets/data_aishell3/train/wav/ \
|
||||
--output=dump/embed \
|
||||
--checkpoint_path=${ge2e_ckpt_path}
|
||||
fi
|
||||
|
||||
# copy from tts3/preprocess
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
# get durations from MFA's result
|
||||
echo "Generate durations.txt from MFA results ..."
|
||||
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
|
||||
--inputdir=./aishell3_alignment_tone \
|
||||
--output durations.txt \
|
||||
--config=${config_path}
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
# extract features
|
||||
echo "Extract features ..."
|
||||
python3 ${BIN_DIR}/preprocess.py \
|
||||
--dataset=aishell3 \
|
||||
--rootdir=~/datasets/data_aishell3/ \
|
||||
--dumpdir=dump \
|
||||
--dur-file=durations.txt \
|
||||
--config=${config_path} \
|
||||
--num-cpu=20 \
|
||||
--cut-sil=True \
|
||||
--spk_emb_dir=dump/embed
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
# get features' stats (mean and std)
|
||||
echo "Get features' stats ..."
|
||||
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
|
||||
--metadata=dump/train/raw/metadata.jsonl \
|
||||
--field-name="feats"
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
|
||||
# normalize and convert phone/speaker to id; dev and test should use train's stats
|
||||
echo "Normalize ..."
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/train/raw/metadata.jsonl \
|
||||
--dumpdir=dump/train/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/dev/raw/metadata.jsonl \
|
||||
--dumpdir=dump/dev/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/test/raw/metadata.jsonl \
|
||||
--dumpdir=dump/test/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
fi
|
@ -0,0 +1,19 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
ckpt_name=$3
|
||||
stage=0
|
||||
stop_stage=0
|
||||
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
FLAGS_allocator_strategy=naive_best_fit \
|
||||
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
|
||||
python3 ${BIN_DIR}/synthesize.py \
|
||||
--config=${config_path} \
|
||||
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
|
||||
--phones_dict=dump/phone_id_map.txt \
|
||||
--test_metadata=dump/test/norm/metadata.jsonl \
|
||||
--output_dir=${train_output_path}/test \
|
||||
--voice-cloning=True
|
||||
fi
|
@ -0,0 +1,18 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
|
||||
# install monotonic_align
|
||||
cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
|
||||
python3 setup.py build_ext --inplace
|
||||
cd -
|
||||
|
||||
python3 ${BIN_DIR}/train.py \
|
||||
--train-metadata=dump/train/norm/metadata.jsonl \
|
||||
--dev-metadata=dump/dev/norm/metadata.jsonl \
|
||||
--config=${config_path} \
|
||||
--output-dir=${train_output_path} \
|
||||
--ngpu=4 \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--voice-cloning=True
|
@ -0,0 +1,22 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
ckpt_name=$3
|
||||
ge2e_params_path=$4
|
||||
add_blank=$5
|
||||
ref_audio_dir=$6
|
||||
src_audio_path=$7
|
||||
|
||||
FLAGS_allocator_strategy=naive_best_fit \
|
||||
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
|
||||
python3 ${BIN_DIR}/voice_cloning.py \
|
||||
--config=${config_path} \
|
||||
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
|
||||
--ge2e_params_path=${ge2e_params_path} \
|
||||
--phones_dict=dump/phone_id_map.txt \
|
||||
--text="凯莫瑞安联合体的经济崩溃迫在眉睫。" \
|
||||
--audio-path=${src_audio_path} \
|
||||
--input-dir=${ref_audio_dir} \
|
||||
--output-dir=${train_output_path}/vc_syn \
|
||||
--add-blank=${add_blank}
|
@ -0,0 +1,13 @@
|
||||
#!/bin/bash
|
||||
export MAIN_ROOT=`realpath ${PWD}/../../../`
|
||||
|
||||
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
|
||||
export LC_ALL=C
|
||||
|
||||
export PYTHONDONTWRITEBYTECODE=1
|
||||
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
|
||||
export PYTHONIOENCODING=UTF-8
|
||||
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
|
||||
|
||||
MODEL=vits
|
||||
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
|
@ -0,0 +1,45 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
source path.sh
|
||||
|
||||
gpus=0,1,2,3
|
||||
stage=0
|
||||
stop_stage=100
|
||||
|
||||
conf_path=conf/default.yaml
|
||||
train_output_path=exp/default
|
||||
ckpt_name=snapshot_iter_153.pdz
|
||||
add_blank=true
|
||||
ref_audio_dir=ref_audio
|
||||
src_audio_path=''
|
||||
|
||||
# not include ".pdparams" here
|
||||
ge2e_ckpt_path=./ge2e_ckpt_0.3/step-3000000
|
||||
|
||||
# include ".pdparams" here
|
||||
ge2e_params_path=${ge2e_ckpt_path}.pdparams
|
||||
|
||||
# with the following command, you can choose the stage range you want to run
|
||||
# such as `./run.sh --stage 0 --stop-stage 0`
|
||||
# this cannot be mixed with positional arguments `$1`, `$2`, ...
|
||||
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
|
||||
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
# prepare data
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/preprocess.sh ${conf_path} ${add_blank} ${ge2e_ckpt_path} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
# train the model; all `ckpt` files are saved under the `train_output_path/checkpoints/` dir
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/voice_cloning.sh ${conf_path} ${train_output_path} ${ckpt_name} \
|
||||
${ge2e_params_path} ${add_blank} ${ref_audio_dir} ${src_audio_path} || exit -1
|
||||
fi
|
@ -0,0 +1,202 @@
|
||||
# VITS with AISHELL-3
|
||||
This example contains code used to train a [VITS](https://arxiv.org/abs/2106.06103) model with [AISHELL-3](http://www.aishelltech.com/aishell_3).
|
||||
|
||||
AISHELL-3 is a large-scale and high-fidelity multi-speaker Mandarin speech corpus that could be used to train multi-speaker Text-to-Speech (TTS) systems.
|
||||
|
||||
We use AISHELL-3 to train a multi-speaker VITS model here.
|
||||
## Dataset
|
||||
### Download and Extract
|
||||
Download AISHELL-3 from its [Official Website](http://www.aishelltech.com/aishell_3) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/data_aishell3`.
|
||||
|
||||
### Get MFA Result and Extract
|
||||
We use [MFA2.x](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get the phonemes for VITS; the durations produced by MFA are not needed here.
|
||||
You can download it from here: [aishell3_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/AISHELL-3/with_tone/aishell3_alignment_tone.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) (which uses MFA1.x for now) in our repo.
|
||||
|
||||
## Get Started
|
||||
Assume the path to the dataset is `~/datasets/data_aishell3`.
|
||||
Assume the path to the MFA result of AISHELL-3 is `./aishell3_alignment_tone`.
|
||||
Run the command below to
|
||||
1. **source path**.
|
||||
2. preprocess the dataset.
|
||||
3. train the model.
|
||||
4. synthesize wavs.
|
||||
- synthesize waveform from `metadata.jsonl`.
|
||||
- synthesize waveform from a text file.
|
||||
|
||||
```bash
|
||||
./run.sh
|
||||
```
|
||||
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
|
||||
```bash
|
||||
./run.sh --stage 0 --stop-stage 0
|
||||
```
|
||||
|
||||
### Data Preprocessing
|
||||
```bash
|
||||
./local/preprocess.sh ${conf_path}
|
||||
```
|
||||
When it is done, a `dump` folder is created in the current directory. Its structure is listed below.
|
||||
|
||||
```text
|
||||
dump
|
||||
├── dev
|
||||
│ ├── norm
|
||||
│ └── raw
|
||||
├── phone_id_map.txt
|
||||
├── speaker_id_map.txt
|
||||
├── test
|
||||
│ ├── norm
|
||||
│ └── raw
|
||||
└── train
|
||||
├── feats_stats.npy
|
||||
├── norm
|
||||
└── raw
|
||||
```
|
||||
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the wave and linear spectrogram of each utterance, while the norm folder contains normalized ones. The statistics used to normalize the features are computed from the training set and are stored in `dump/train/feats_stats.npy`.
|
||||
|
||||
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains utterance ids, phones, text_lengths, feats, feats_lengths, the paths of the linear spectrogram features and raw waves, and the speaker of each utterance.
|
||||
|
||||
### Model Training
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
|
||||
```
|
||||
`./local/train.sh` calls `${BIN_DIR}/train.py`.
|
||||
Here's the complete help message.
|
||||
```text
|
||||
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
|
||||
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
|
||||
[--ngpu NGPU] [--phones-dict PHONES_DICT]
|
||||
[--speaker-dict SPEAKER_DICT] [--voice-cloning VOICE_CLONING]
|
||||
|
||||
Train a VITS model.
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG config file to overwrite default config.
|
||||
--train-metadata TRAIN_METADATA
|
||||
training data.
|
||||
--dev-metadata DEV_METADATA
|
||||
dev data.
|
||||
--output-dir OUTPUT_DIR
|
||||
output dir.
|
||||
--ngpu NGPU if ngpu == 0, use cpu.
|
||||
--phones-dict PHONES_DICT
|
||||
phone vocabulary file.
|
||||
--speaker-dict SPEAKER_DICT
|
||||
speaker id map file for multiple speaker model.
|
||||
--voice-cloning VOICE_CLONING
|
||||
whether training voice cloning model.
|
||||
```
|
||||
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
|
||||
2. `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
|
||||
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
|
||||
4. `--ngpu` is the number of GPUs to use; if ngpu == 0, the CPU is used.
|
||||
5. `--phones-dict` is the path of the phone vocabulary file.
|
||||
6. `--speaker-dict` is the path of the speaker id map file when training a multi-speaker VITS.
|
||||
|
||||
### Synthesizing
|
||||
|
||||
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
|
||||
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
|
||||
```
|
||||
```text
|
||||
usage: synthesize.py [-h] [--config CONFIG] [--ckpt CKPT]
|
||||
[--phones_dict PHONES_DICT] [--speaker_dict SPEAKER_DICT]
|
||||
[--voice-cloning VOICE_CLONING] [--ngpu NGPU]
|
||||
[--test_metadata TEST_METADATA] [--output_dir OUTPUT_DIR]
|
||||
|
||||
Synthesize with VITS
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG Config of VITS.
|
||||
--ckpt CKPT Checkpoint file of VITS.
|
||||
--phones_dict PHONES_DICT
|
||||
phone vocabulary file.
|
||||
--speaker_dict SPEAKER_DICT
|
||||
speaker id map file.
|
||||
--voice-cloning VOICE_CLONING
|
||||
whether training voice cloning model.
|
||||
--ngpu NGPU if ngpu == 0, use cpu.
|
||||
--test_metadata TEST_METADATA
|
||||
test metadata.
|
||||
--output_dir OUTPUT_DIR
|
||||
output dir.
|
||||
```
|
||||
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveforms from a text file.
|
||||
```bash
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
|
||||
```
|
||||
```text
|
||||
usage: synthesize_e2e.py [-h] [--config CONFIG] [--ckpt CKPT]
|
||||
[--phones_dict PHONES_DICT]
|
||||
[--speaker_dict SPEAKER_DICT] [--spk_id SPK_ID]
|
||||
[--lang LANG]
|
||||
[--inference_dir INFERENCE_DIR] [--ngpu NGPU]
|
||||
[--text TEXT] [--output_dir OUTPUT_DIR]
|
||||
|
||||
Synthesize with VITS
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG Config of VITS.
|
||||
--ckpt CKPT Checkpoint file of VITS.
|
||||
--phones_dict PHONES_DICT
|
||||
phone vocabulary file.
|
||||
--speaker_dict SPEAKER_DICT
|
||||
speaker id map file.
|
||||
--spk_id SPK_ID spk id for multi speaker acoustic model
|
||||
--lang LANG Choose model language. zh or en
|
||||
--inference_dir INFERENCE_DIR
|
||||
dir to save inference models
|
||||
--ngpu NGPU if ngpu == 0, use cpu.
|
||||
--text TEXT text to synthesize, a 'utt_id sentence' pair per line.
|
||||
--output_dir OUTPUT_DIR
|
||||
output dir.
|
||||
```
|
||||
1. `--config`, `--ckpt`, `--phones_dict`, and `--speaker_dict` are arguments for the acoustic model, which correspond to the files in the pretrained VITS model.
|
||||
2. `--lang` is the model language, which can be `zh` or `en`.
|
||||
3. `--test_metadata` should be the metadata file in the normalized subfolder of `test` in the `dump` folder.
|
||||
4. `--text` is the text file, which contains the sentences to synthesize, one `utt_id sentence` pair per line (a sample is shown after this list).
|
||||
5. `--output_dir` is the directory to save synthesized audio files.
|
||||
6. `--ngpu` is the number of GPUs to use; if ngpu == 0, the CPU is used.
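For illustration, a text file in that `utt_id sentence` format might look like the following (the utterance ids and the second sentence are hypothetical examples, not the shipped `sentences.txt`):

```text
001 凯莫瑞安联合体的经济崩溃迫在眉睫。
002 大家好,欢迎使用语音合成系统。
```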
|
||||
|
||||
<!-- TODO display these after we trained the model -->
|
||||
<!--
|
||||
## Pretrained Model
|
||||
|
||||
The pretrained model can be downloaded here:
|
||||
|
||||
- [vits_aishell3_ckpt_1.1.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/vits/vits_aishell3_ckpt_1.1.0.zip) (add_blank=true)
|
||||
|
||||
VITS checkpoint contains files listed below.
|
||||
```text
|
||||
vits_aishell3_ckpt_1.1.0
|
||||
├── default.yaml # default config used to train vits
|
||||
├── phone_id_map.txt # phone vocabulary file when training vits
|
||||
├── speaker_id_map.txt # speaker id map file when training a multi-speaker vits
|
||||
└── snapshot_iter_333000.pdz # model parameters and optimizer states
|
||||
```
|
||||
|
||||
P.S. This checkpoint is not good enough yet; a better one is still being trained.
|
||||
|
||||
You can use the following script to synthesize speech for `${BIN_DIR}/../sentences.txt` using the pretrained VITS model.
|
||||
|
||||
```bash
|
||||
source path.sh
|
||||
add_blank=true
|
||||
|
||||
FLAGS_allocator_strategy=naive_best_fit \
|
||||
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
|
||||
python3 ${BIN_DIR}/synthesize_e2e.py \
|
||||
--config=vits_aishell3_ckpt_1.1.0/default.yaml \
|
||||
--ckpt=vits_aishell3_ckpt_1.1.0/snapshot_iter_333000.pdz \
|
||||
--phones_dict=vits_aishell3_ckpt_1.1.0/phone_id_map.txt \
|
||||
--speaker_dict=vits_aishell3_ckpt_1.1.0/speaker_id_map.txt \
|
||||
--output_dir=exp/default/test_e2e \
|
||||
--text=${BIN_DIR}/../sentences.txt \
|
||||
--add-blank=${add_blank}
|
||||
```
|
||||
-->
|
@ -0,0 +1,184 @@
|
||||
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
|
||||
# memory. It takes around 2 weeks to finish the training
|
||||
# but a 100k-iteration model should already generate reasonable results.
|
||||
###########################################################
|
||||
# FEATURE EXTRACTION SETTING #
|
||||
###########################################################
|
||||
|
||||
fs: 22050 # sr
|
||||
n_fft: 1024 # FFT size (samples).
|
||||
n_shift: 256 # Hop size (samples), about 11.6 ms at 22050 Hz.
|
||||
win_length: null # Window length (samples).
|
||||
# If set to null, it will be the same as fft_size.
|
||||
window: "hann" # Window function.
|
||||
|
||||
|
||||
##########################################################
|
||||
# TTS MODEL SETTING #
|
||||
##########################################################
|
||||
model:
|
||||
# generator related
|
||||
generator_type: vits_generator
|
||||
generator_params:
|
||||
hidden_channels: 192
|
||||
global_channels: 256
|
||||
segment_size: 32
|
||||
text_encoder_attention_heads: 2
|
||||
text_encoder_ffn_expand: 4
|
||||
text_encoder_blocks: 6
|
||||
text_encoder_positionwise_layer_type: "conv1d"
|
||||
text_encoder_positionwise_conv_kernel_size: 3
|
||||
text_encoder_positional_encoding_layer_type: "rel_pos"
|
||||
text_encoder_self_attention_layer_type: "rel_selfattn"
|
||||
text_encoder_activation_type: "swish"
|
||||
text_encoder_normalize_before: True
|
||||
text_encoder_dropout_rate: 0.1
|
||||
text_encoder_positional_dropout_rate: 0.0
|
||||
text_encoder_attention_dropout_rate: 0.1
|
||||
use_macaron_style_in_text_encoder: True
|
||||
use_conformer_conv_in_text_encoder: False
|
||||
text_encoder_conformer_kernel_size: -1
|
||||
decoder_kernel_size: 7
|
||||
decoder_channels: 512
|
||||
decoder_upsample_scales: [8, 8, 2, 2]
|
||||
decoder_upsample_kernel_sizes: [16, 16, 4, 4]
|
||||
decoder_resblock_kernel_sizes: [3, 7, 11]
|
||||
decoder_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
|
||||
use_weight_norm_in_decoder: True
|
||||
posterior_encoder_kernel_size: 5
|
||||
posterior_encoder_layers: 16
|
||||
posterior_encoder_stacks: 1
|
||||
posterior_encoder_base_dilation: 1
|
||||
posterior_encoder_dropout_rate: 0.0
|
||||
use_weight_norm_in_posterior_encoder: True
|
||||
flow_flows: 4
|
||||
flow_kernel_size: 5
|
||||
flow_base_dilation: 1
|
||||
flow_layers: 4
|
||||
flow_dropout_rate: 0.0
|
||||
use_weight_norm_in_flow: True
|
||||
use_only_mean_in_flow: True
|
||||
stochastic_duration_predictor_kernel_size: 3
|
||||
stochastic_duration_predictor_dropout_rate: 0.5
|
||||
stochastic_duration_predictor_flows: 4
|
||||
stochastic_duration_predictor_dds_conv_layers: 3
|
||||
# discriminator related
|
||||
discriminator_type: hifigan_multi_scale_multi_period_discriminator
|
||||
discriminator_params:
|
||||
scales: 1
|
||||
scale_downsample_pooling: "AvgPool1D"
|
||||
scale_downsample_pooling_params:
|
||||
kernel_size: 4
|
||||
stride: 2
|
||||
padding: 2
|
||||
scale_discriminator_params:
|
||||
in_channels: 1
|
||||
out_channels: 1
|
||||
kernel_sizes: [15, 41, 5, 3]
|
||||
channels: 128
|
||||
max_downsample_channels: 1024
|
||||
max_groups: 16
|
||||
bias: True
|
||||
downsample_scales: [2, 2, 4, 4, 1]
|
||||
nonlinear_activation: "leakyrelu"
|
||||
nonlinear_activation_params:
|
||||
negative_slope: 0.1
|
||||
use_weight_norm: True
|
||||
use_spectral_norm: False
|
||||
follow_official_norm: False
|
||||
periods: [2, 3, 5, 7, 11]
|
||||
period_discriminator_params:
|
||||
in_channels: 1
|
||||
out_channels: 1
|
||||
kernel_sizes: [5, 3]
|
||||
channels: 32
|
||||
downsample_scales: [3, 3, 3, 3, 1]
|
||||
max_downsample_channels: 1024
|
||||
bias: True
|
||||
nonlinear_activation: "leakyrelu"
|
||||
nonlinear_activation_params:
|
||||
negative_slope: 0.1
|
||||
use_weight_norm: True
|
||||
use_spectral_norm: False
|
||||
# others
|
||||
sampling_rate: 22050 # needed in the inference for saving wav
|
||||
cache_generator_outputs: True # whether to cache generator outputs in the training
|
||||
|
||||
###########################################################
|
||||
# LOSS SETTING #
|
||||
###########################################################
|
||||
# loss function related
|
||||
generator_adv_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
loss_type: mse # loss type, "mse" or "hinge"
|
||||
discriminator_adv_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
loss_type: mse # loss type, "mse" or "hinge"
|
||||
feat_match_loss_params:
|
||||
average_by_discriminators: False # whether to average loss value by #discriminators
|
||||
average_by_layers: False # whether to average loss value by #layers of each discriminator
|
||||
include_final_outputs: True # whether to include final outputs for loss calculation
|
||||
mel_loss_params:
|
||||
fs: 22050 # must be the same as the training data
|
||||
fft_size: 1024 # fft points
|
||||
hop_size: 256 # hop size
|
||||
win_length: null # window length
|
||||
window: hann # window type
|
||||
num_mels: 80 # number of Mel basis
|
||||
fmin: 0 # minimum frequency for Mel basis
|
||||
fmax: null # maximum frequency for Mel basis
|
||||
log_base: null # null represent natural log
|
||||
|
||||
###########################################################
|
||||
# ADVERSARIAL LOSS SETTING #
|
||||
###########################################################
|
||||
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
|
||||
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
|
||||
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
|
||||
lambda_dur: 1.0 # loss scaling coefficient for duration loss
|
||||
lambda_kl: 1.0 # loss scaling coefficient for KL divergence loss
|
||||
# others
|
||||
sampling_rate: 22050 # needed in the inference for saving wav
|
||||
cache_generator_outputs: True # whether to cache generator outputs in the training
|
||||
|
||||
|
||||
###########################################################
|
||||
# DATA LOADER SETTING #
|
||||
###########################################################
|
||||
batch_size: 50 # Batch size.
|
||||
num_workers: 4 # Number of workers in DataLoader.
|
||||
|
||||
##########################################################
|
||||
# OPTIMIZER & SCHEDULER SETTING #
|
||||
##########################################################
|
||||
# optimizer setting for generator
|
||||
generator_optimizer_params:
|
||||
beta1: 0.8
|
||||
beta2: 0.99
|
||||
epsilon: 1.0e-9
|
||||
weight_decay: 0.0
|
||||
generator_scheduler: exponential_decay
|
||||
generator_scheduler_params:
|
||||
learning_rate: 2.0e-4
|
||||
gamma: 0.999875
|
||||
|
||||
# optimizer setting for discriminator
|
||||
discriminator_optimizer_params:
|
||||
beta1: 0.8
|
||||
beta2: 0.99
|
||||
epsilon: 1.0e-9
|
||||
weight_decay: 0.0
|
||||
discriminator_scheduler: exponential_decay
|
||||
discriminator_scheduler_params:
|
||||
learning_rate: 2.0e-4
|
||||
gamma: 0.999875
|
||||
generator_first: False # whether to start updating generator first
|
||||
|
||||
##########################################################
|
||||
# OTHER TRAINING SETTING #
|
||||
##########################################################
|
||||
num_snapshots: 10 # max number of snapshots to keep while training
|
||||
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
|
||||
save_interval_steps: 1000 # Interval steps to save checkpoint.
|
||||
eval_interval_steps: 250 # Interval steps to evaluate the network.
|
||||
seed: 777 # random seed number
|
@ -0,0 +1,69 @@
|
||||
#!/bin/bash
|
||||
|
||||
stage=0
|
||||
stop_stage=100
|
||||
|
||||
config_path=$1
|
||||
add_blank=$2
|
||||
|
||||
# copy from tts3/preprocess
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
# get durations from MFA's result
|
||||
echo "Generate durations.txt from MFA results ..."
|
||||
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
|
||||
--inputdir=./aishell3_alignment_tone \
|
||||
--output durations.txt \
|
||||
--config=${config_path}
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
# extract features
|
||||
echo "Extract features ..."
|
||||
python3 ${BIN_DIR}/preprocess.py \
|
||||
--dataset=aishell3 \
|
||||
--rootdir=~/datasets/data_aishell3/ \
|
||||
--dumpdir=dump \
|
||||
--dur-file=durations.txt \
|
||||
--config=${config_path} \
|
||||
--num-cpu=20 \
|
||||
--cut-sil=True
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
# get features' stats (mean and std)
|
||||
echo "Get features' stats ..."
|
||||
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
|
||||
--metadata=dump/train/raw/metadata.jsonl \
|
||||
--field-name="feats"
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
# normalize and convert phone/speaker to id; dev and test should use train's stats
|
||||
echo "Normalize ..."
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/train/raw/metadata.jsonl \
|
||||
--dumpdir=dump/train/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/dev/raw/metadata.jsonl \
|
||||
--dumpdir=dump/dev/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
|
||||
python3 ${BIN_DIR}/normalize.py \
|
||||
--metadata=dump/test/raw/metadata.jsonl \
|
||||
--dumpdir=dump/test/norm \
|
||||
--feats-stats=dump/train/feats_stats.npy \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt \
|
||||
--add-blank=${add_blank} \
|
||||
--skip-wav-copy
|
||||
fi
|
@ -0,0 +1,19 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
ckpt_name=$3
|
||||
stage=0
|
||||
stop_stage=0
|
||||
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
FLAGS_allocator_strategy=naive_best_fit \
|
||||
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
|
||||
python3 ${BIN_DIR}/synthesize.py \
|
||||
--config=${config_path} \
|
||||
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
|
||||
--phones_dict=dump/phone_id_map.txt \
|
||||
--speaker_dict=dump/speaker_id_map.txt \
|
||||
--test_metadata=dump/test/norm/metadata.jsonl \
|
||||
--output_dir=${train_output_path}/test
|
||||
fi
|
@ -0,0 +1,24 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
ckpt_name=$3
|
||||
add_blank=$4
|
||||
|
||||
stage=0
|
||||
stop_stage=0
|
||||
|
||||
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
FLAGS_allocator_strategy=naive_best_fit \
|
||||
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
|
||||
python3 ${BIN_DIR}/synthesize_e2e.py \
|
||||
--config=${config_path} \
|
||||
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
|
||||
--phones_dict=dump/phone_id_map.txt \
|
||||
--speaker_dict=dump/speaker_id_map.txt \
|
||||
--spk_id=0 \
|
||||
--output_dir=${train_output_path}/test_e2e \
|
||||
--text=${BIN_DIR}/../sentences.txt \
|
||||
--add-blank=${add_blank}
|
||||
fi
|
@ -0,0 +1,18 @@
|
||||
#!/bin/bash
|
||||
|
||||
config_path=$1
|
||||
train_output_path=$2
|
||||
|
||||
# install monotonic_align
|
||||
cd ${MAIN_ROOT}/paddlespeech/t2s/models/vits/monotonic_align
|
||||
python3 setup.py build_ext --inplace
|
||||
cd -
|
||||
|
||||
python3 ${BIN_DIR}/train.py \
|
||||
--train-metadata=dump/train/norm/metadata.jsonl \
|
||||
--dev-metadata=dump/dev/norm/metadata.jsonl \
|
||||
--config=${config_path} \
|
||||
--output-dir=${train_output_path} \
|
||||
--ngpu=4 \
|
||||
--phones-dict=dump/phone_id_map.txt \
|
||||
--speaker-dict=dump/speaker_id_map.txt
|
@ -0,0 +1,13 @@
|
||||
#!/bin/bash
|
||||
export MAIN_ROOT=`realpath ${PWD}/../../../`
|
||||
|
||||
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
|
||||
export LC_ALL=C
|
||||
|
||||
export PYTHONDONTWRITEBYTECODE=1
|
||||
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
|
||||
export PYTHONIOENCODING=UTF-8
|
||||
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
|
||||
|
||||
MODEL=vits
|
||||
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
|
@ -0,0 +1,36 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
source path.sh
|
||||
|
||||
gpus=0,1,2,3
|
||||
stage=0
|
||||
stop_stage=100
|
||||
|
||||
conf_path=conf/default.yaml
|
||||
train_output_path=exp/default
|
||||
ckpt_name=snapshot_iter_153.pdz
|
||||
add_blank=true
|
||||
|
||||
# with the following command, you can choose the stage range you want to run
|
||||
# such as `./run.sh --stage 0 --stop-stage 0`
|
||||
# this cannot be mixed with positional arguments `$1`, `$2`, ...
|
||||
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
|
||||
|
||||
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
|
||||
# prepare data
|
||||
./local/preprocess.sh ${conf_path} ${add_blank} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
|
||||
# train the model; all `ckpt` files are saved under the `train_output_path/checkpoints/` dir
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
|
||||
fi
|
||||
|
||||
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
|
||||
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} ${add_blank} || exit -1
|
||||
fi
|
@ -1,27 +1,29 @@
import argparse


def process_sentence(line):
    if line == '':
        return ''
    res = line[0]
    for i in range(1, len(line)):
        res += (' ' + line[i])
    return res


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Input filename")
    parser.add_argument('-input_file')
    parser.add_argument('-output_file')
    sentence_cnt = 0
    args = parser.parse_args()
    with open(args.input_file, 'r') as f:
        with open(args.output_file, 'w') as write_f:
            while True:
                line = f.readline()
                if line:
                    sentence_cnt += 1
                    write_f.write(process_sentence(line))
                else:
                    break
    print('preprocess over')
    print('total sentences number:', sentence_cnt)