[TTS] [Hackathon] Add JETS (#3109)

pull/3175/head
ljhzxc 2 years ago committed by GitHub
parent bd0d69ca74
commit dc56c3a10e

@ -0,0 +1,98 @@
# JETS with CSMSC
This example contains code used to train a [JETS](https://arxiv.org/abs/2203.16852v1) model with [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).
## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/source).
### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes and durations for JETS.
You can download it from here: [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by referring to the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.
## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
- synthesize waveform from `metadata.jsonl`.
- synthesize waveform from a text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage, for example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the `dump` folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── energy_stats.npy
    ├── feats_stats.npy
    ├── norm
    ├── pitch_stats.npy
    └── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The `raw` folder contains the wave, mel spectrogram (feats), pitch, and energy features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize features are computed from the training set and are located in `dump/train/feats_stats.npy`, `dump/train/pitch_stats.npy`, and `dump/train/energy_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, the path of feats, feats_lengths, the path of pitch features, the path of energy features, the path of raw waves, speaker, and the id of each utterance.
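A quick way to sanity-check these files is to load them with `jsonlines`, the same package the training scripts use. The sketch below assumes preprocessing has already produced `dump/train/norm/metadata.jsonl`; the exact set of keys is the one written by `normalize.py`.
```python
import jsonlines

# minimal sketch: inspect the normalized training metadata produced above
with jsonlines.open("dump/train/norm/metadata.jsonl", "r") as reader:
    records = list(reader)

print("number of utterances:", len(records))
first = records[0]
# each record holds phone ids, lengths, and the paths of the dumped feature files
for key in ("utt_id", "text_lengths", "feats_lengths", "feats", "pitch", "energy", "wave"):
    print(key, ":", first.get(key))
```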
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
[--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
[--ngpu NGPU] [--phones-dict PHONES_DICT]
Train a JETS model.
optional arguments:
-h, --help show this help message and exit
--config CONFIG config file to overwrite default config.
--train-metadata TRAIN_METADATA
training data.
--dev-metadata DEV_METADATA
dev data.
--output-dir OUTPUT_DIR
output dir.
--ngpu NGPU if ngpu == 0, use cpu.
--phones-dict PHONES_DICT
phone vocabulary file.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml` (see the loading sketch after this list).
2. `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.
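The `--config` file above is plain YAML, so it can also be inspected programmatically. A minimal sketch, assuming `yaml` and `yacs` are installed (both are dependencies of the training scripts) and that `conf/default.yaml` from this example is used:
```python
import yaml
from yacs.config import CfgNode

# minimal sketch: load the example config the same way train.py does
with open("conf/default.yaml") as f:
    config = CfgNode(yaml.safe_load(f))

# a few fields from this example's default config
print("batch_size:", config.batch_size)
print("n_mels:", config.n_mels)
print("attention dim:", config.model.generator_params.adim)
```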
### Synthesizing
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
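For programmatic use outside the shell scripts, the steps that `synthesize_e2e.py` performs can be condensed into a short snippet. The sketch below is only an illustration: the checkpoint path and the input sentence are placeholders, and it assumes a single-speaker CSMSC model trained with `conf/default.yaml`.
```python
import paddle
import soundfile as sf
import yaml
from yacs.config import CfgNode

from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.models.jets import JETS, JETSInference

config_path = "conf/default.yaml"
ckpt_path = "exp/default/checkpoints/snapshot_iter_350000.pdz"  # placeholder checkpoint
phones_dict = "dump/phone_id_map.txt"

with open(config_path) as f:
    config = CfgNode(yaml.safe_load(f))
with open(phones_dict) as f:
    vocab_size = len([line for line in f if line.strip()])

# Chinese text frontend that maps a sentence to phone ids
frontend = get_frontend(lang="zh", phones_dict=phones_dict)

# build the model the same way synthesize_e2e.py does (single speaker: spks=None)
config["model"]["generator_params"]["spks"] = None
jets = JETS(idim=vocab_size, odim=config.n_fft // 2 + 1, **config["model"])
jets.set_state_dict(paddle.load(ckpt_path)["main_params"])
jets.eval()
jets_inference = JETSInference(jets)

sentence = "今天天气很好。"  # placeholder input text
phone_ids = frontend.get_input_ids(sentence, merge_sentences=True)["phone_ids"][0]
with paddle.no_grad():
    wav = jets_inference(phone_ids)
sf.write("demo.wav", wav.numpy(), samplerate=config.fs)
```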

@ -0,0 +1,224 @@
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iteration model should already generate reasonable results.
###########################################################
# FEATURE EXTRACTION SETTING #
###########################################################
n_mels: 80
fs: 22050 # sr
n_fft: 1024 # FFT size (samples).
n_shift: 256 # Hop size (samples), ~11.6 ms at fs=22050.
win_length: null # Window length (samples).
# If set to null, it will be the same as fft_size (1024 samples, ~46.4 ms).
window: "hann" # Window function.
fmin: 0 # minimum frequency for Mel basis
fmax: null # maximum frequency for Mel basis
f0min: 80 # Minimum f0 for pitch extraction.
f0max: 400 # Maximum f0 for pitch extraction.
##########################################################
# TTS MODEL SETTING #
##########################################################
model:
# generator related
generator_type: jets_generator
generator_params:
adim: 256 # attention dimension
aheads: 2 # number of attention heads
elayers: 4 # number of encoder layers
eunits: 1024 # number of encoder ff units
dlayers: 4 # number of decoder layers
dunits: 1024 # number of decoder ff units
positionwise_layer_type: conv1d # type of position-wise layer
positionwise_conv_kernel_size: 3 # kernel size of position wise conv layer
duration_predictor_layers: 2 # number of layers of duration predictor
duration_predictor_chans: 256 # number of channels of duration predictor
duration_predictor_kernel_size: 3 # filter size of duration predictor
use_masking: True # whether to apply masking for padded part in loss calculation
encoder_normalize_before: True # whether to perform layer normalization before encoder blocks (pre-LN)
decoder_normalize_before: True # whether to perform layer normalization before decoder blocks (pre-LN)
encoder_type: transformer # encoder type
decoder_type: transformer # decoder type
conformer_rel_pos_type: latest # relative positional encoding type
conformer_pos_enc_layer_type: rel_pos # conformer positional encoding type
conformer_self_attn_layer_type: rel_selfattn # conformer self-attention type
conformer_activation_type: swish # conformer activation type
use_macaron_style_in_conformer: true # whether to use macaron style in conformer
use_cnn_in_conformer: true # whether to use CNN in conformer
conformer_enc_kernel_size: 7 # kernel size in CNN module of conformer-based encoder
conformer_dec_kernel_size: 31 # kernel size in CNN module of conformer-based decoder
init_type: xavier_uniform # initialization type
init_enc_alpha: 1.0 # initial value of alpha for encoder
init_dec_alpha: 1.0 # initial value of alpha for decoder
transformer_enc_dropout_rate: 0.2 # dropout rate for transformer encoder layer
transformer_enc_positional_dropout_rate: 0.2 # dropout rate for transformer encoder positional encoding
transformer_enc_attn_dropout_rate: 0.2 # dropout rate for transformer encoder attention layer
transformer_dec_dropout_rate: 0.2 # dropout rate for transformer decoder layer
transformer_dec_positional_dropout_rate: 0.2 # dropout rate for transformer decoder positional encoding
transformer_dec_attn_dropout_rate: 0.2 # dropout rate for transformer decoder attention layer
pitch_predictor_layers: 5 # number of conv layers in pitch predictor
pitch_predictor_chans: 256 # number of channels of conv layers in pitch predictor
pitch_predictor_kernel_size: 5 # kernel size of conv layers in pitch predictor
pitch_predictor_dropout: 0.5 # dropout rate in pitch predictor
pitch_embed_kernel_size: 1 # kernel size of conv embedding layer for pitch
pitch_embed_dropout: 0.0 # dropout rate after conv embedding layer for pitch
stop_gradient_from_pitch_predictor: true # whether to stop the gradient from pitch predictor to encoder
energy_predictor_layers: 2 # number of conv layers in energy predictor
energy_predictor_chans: 256 # number of channels of conv layers in energy predictor
energy_predictor_kernel_size: 3 # kernel size of conv layers in energy predictor
energy_predictor_dropout: 0.5 # dropout rate in energy predictor
energy_embed_kernel_size: 1 # kernel size of conv embedding layer for energy
energy_embed_dropout: 0.0 # dropout rate after conv embedding layer for energy
stop_gradient_from_energy_predictor: false # whether to stop the gradient from energy predictor to encoder
generator_out_channels: 1
generator_channels: 512
generator_global_channels: -1
generator_kernel_size: 7
generator_upsample_scales: [8, 8, 2, 2]
generator_upsample_kernel_sizes: [16, 16, 4, 4]
generator_resblock_kernel_sizes: [3, 7, 11]
generator_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
generator_use_additional_convs: true
generator_bias: true
generator_nonlinear_activation: "leakyrelu"
generator_nonlinear_activation_params:
negative_slope: 0.1
generator_use_weight_norm: true
segment_size: 64 # segment size for random windowed discriminator
# discriminator related
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: "AvgPool1D"
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [15, 41, 5, 3]
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: True
downsample_scales: [2, 2, 4, 4, 1]
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
follow_official_norm: False
periods: [2, 3, 5, 7, 11]
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes: [5, 3]
channels: 32
downsample_scales: [3, 3, 3, 3, 1]
max_downsample_channels: 1024
bias: True
nonlinear_activation: "leakyrelu"
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: True
use_spectral_norm: False
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
use_alignment_module: False # whether to use alignment module
###########################################################
# LOSS SETTING #
###########################################################
# loss function related
generator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
loss_type: mse # loss type, "mse" or "hinge"
feat_match_loss_params:
average_by_discriminators: False # whether to average loss value by #discriminators
average_by_layers: False # whether to average loss value by #layers of each discriminator
include_final_outputs: True # whether to include final outputs for loss calculation
mel_loss_params:
fs: 22050 # must be the same as the training data
fft_size: 1024 # fft points
hop_size: 256 # hop size
win_length: null # window length
window: hann # window type
num_mels: 80 # number of Mel basis
fmin: 0 # minimum frequency for Mel basis
fmax: null # maximum frequency for Mel basis
log_base: null # null represents natural log
###########################################################
# ADVERSARIAL LOSS SETTING #
###########################################################
lambda_adv: 1.0 # loss scaling coefficient for adversarial loss
lambda_mel: 45.0 # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_var: 1.0 # loss scaling coefficient for variance (duration/pitch/energy) loss
lambda_align: 2.0 # loss scaling coefficient for alignment (forward-sum) loss
# others
sampling_rate: 22050 # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training
# extra module for additional inputs
pitch_extract: dio # pitch extractor type
pitch_extract_conf:
reduction_factor: 1
use_token_averaged_f0: false
pitch_normalize: global_mvn # normalizer for the pitch feature
energy_extract: energy # energy extractor type
energy_extract_conf:
reduction_factor: 1
use_token_averaged_energy: false
energy_normalize: global_mvn # normalizer for the energy feature
###########################################################
# DATA LOADER SETTING #
###########################################################
batch_size: 32 # Batch size.
num_workers: 4 # Number of workers in DataLoader.
##########################################################
# OPTIMIZER & SCHEDULER SETTING #
##########################################################
# optimizer setting for generator
generator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
# optimizer setting for discriminator
discriminator_optimizer_params:
beta1: 0.8
beta2: 0.99
epsilon: 1.0e-9
weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
learning_rate: 2.0e-4
gamma: 0.999875
generator_first: True # whether to start updating generator first
##########################################################
# OTHER TRAINING SETTING #
##########################################################
num_snapshots: 10 # max number of snapshots to keep while training
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
save_interval_steps: 1000 # Interval steps to save checkpoint.
eval_interval_steps: 250 # Interval steps to evaluate the network.
seed: 777 # random seed number
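For orientation, the optimizer/scheduler block above likely maps onto Paddle's built-in `ExponentialDecay` schedule and `AdamW` optimizer; the actual wiring lives in the JETS training code. A minimal, self-contained sketch with a stand-in module:
```python
import paddle

# stand-in parameters; in the real training loop these would be the JETS
# generator (and, analogously, the discriminator) parameters
model = paddle.nn.Linear(4, 4)

# generator_scheduler: exponential_decay with the values above
lr_schedule = paddle.optimizer.lr.ExponentialDecay(
    learning_rate=2.0e-4, gamma=0.999875)

# generator_optimizer_params map onto AdamW
optimizer = paddle.optimizer.AdamW(
    learning_rate=lr_schedule,
    beta1=0.8,
    beta2=0.99,
    epsilon=1.0e-9,
    weight_decay=0.0,
    parameters=model.parameters())

# one dummy training step followed by one lr decay step
loss = model(paddle.randn([2, 4])).mean()
loss.backward()
optimizer.step()
optimizer.clear_grad()
lr_schedule.step()
print("lr after one step:", lr_schedule.get_lr())
```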

@ -0,0 +1,15 @@
#!/bin/bash
train_output_path=$1
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
python3 ${BIN_DIR}/inference.py \
--inference_dir=${train_output_path}/inference \
--am=jets_csmsc \
--text=${BIN_DIR}/../sentences.txt \
--output_dir=${train_output_path}/pd_infer_out \
--phones_dict=dump/phone_id_map.txt
fi

@ -0,0 +1,77 @@
#!/bin/bash
set -e
stage=0
stop_stage=100
config_path=$1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# get durations from MFA's result
echo "Generate durations.txt from MFA results ..."
python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
--inputdir=./baker_alignment_tone \
--output=durations.txt \
--config=${config_path}
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# extract features
echo "Extract features ..."
python3 ${BIN_DIR}/preprocess.py \
--dataset=baker \
--rootdir=~/datasets/BZNSYP/ \
--dumpdir=dump \
--dur-file=durations.txt \
--config=${config_path} \
--num-cpu=20 \
--cut-sil=True \
--token_average=True
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
# get features' stats (mean and std)
echo "Get features' stats ..."
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="feats"
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="pitch"
python3 ${MAIN_ROOT}/utils/compute_statistics.py \
--metadata=dump/train/raw/metadata.jsonl \
--field-name="energy"
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# normalize and convert phone/speaker to id, dev and test should use train's stats
echo "Normalize ..."
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/train/raw/metadata.jsonl \
--dumpdir=dump/train/norm \
--feats-stats=dump/train/feats_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/dev/raw/metadata.jsonl \
--dumpdir=dump/dev/norm \
--feats-stats=dump/train/feats_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
python3 ${BIN_DIR}/normalize.py \
--metadata=dump/test/raw/metadata.jsonl \
--dumpdir=dump/test/norm \
--feats-stats=dump/train/feats_stats.npy \
--pitch-stats=dump/train/pitch_stats.npy \
--energy-stats=dump/train/energy_stats.npy \
--phones-dict=dump/phone_id_map.txt \
--speaker-dict=dump/speaker_id_map.txt
fi
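The `*_stats.npy` files computed in stage 2 are what stage 3 feeds into normalization. A minimal sketch of their layout, matching how `normalize.py` restores its `StandardScaler` (row 0 is the mean, row 1 is the scale):
```python
import numpy as np

# minimal sketch: inspect the feature statistics computed by compute_statistics.py
stats = np.load("dump/train/feats_stats.npy")
mean, scale = stats[0], stats[1]
print("mean shape:", mean.shape)    # e.g. (80,) for 80 mel bins
print("scale shape:", scale.shape)
```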

@ -0,0 +1,18 @@
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize.py \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--phones_dict=dump/phone_id_map.txt \
--test_metadata=dump/test/norm/metadata.jsonl \
--output_dir=${train_output_path}/test
fi

@ -0,0 +1,22 @@
#!/bin/bash
config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--am=jets_csmsc \
--config=${config_path} \
--ckpt=${train_output_path}/checkpoints/${ckpt_name} \
--phones_dict=dump/phone_id_map.txt \
--output_dir=${train_output_path}/test_e2e \
--text=${BIN_DIR}/../sentences.txt \
--inference_dir=${train_output_path}/inference
fi

@ -0,0 +1,12 @@
#!/bin/bash
config_path=$1
train_output_path=$2
python3 ${BIN_DIR}/train.py \
--train-metadata=dump/train/norm/metadata.jsonl \
--dev-metadata=dump/dev/norm/metadata.jsonl \
--config=${config_path} \
--output-dir=${train_output_path} \
--ngpu=1 \
--phones-dict=dump/phone_id_map.txt

@ -0,0 +1,13 @@
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=jets
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}

@ -0,0 +1,41 @@
#!/bin/bash
set -e
source path.sh
gpus=0
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_150000.pdz
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be mixed with positional arguments `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
# prepare data
./local/preprocess.sh ${conf_path} || exit -1
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
# train the model; all checkpoints are saved under the `train_output_path/checkpoints/` dir
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
# synthesize_e2e
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${train_output_path} || exit -1
fi

@ -669,6 +669,142 @@ def vits_multi_spk_batch_fn(examples):
return batch
def jets_single_spk_batch_fn(examples):
"""
Returns:
Dict[str, Any]:
- text (Tensor): Text index tensor (B, T_text).
- text_lengths (Tensor): Text length tensor (B,).
- feats (Tensor): Feature tensor (B, T_feats, aux_channels).
- feats_lengths (Tensor): Feature length tensor (B,).
- durations (Tensor): Feature tensor (B, T_text,).
- durations_lengths (Tensor): Durations length tensor (B,).
- pitch (Tensor): Feature tensor (B, pitch_length,).
- energy (Tensor): Feature tensor (B, energy_length,).
- speech (Tensor): Speech waveform tensor (B, T_wav).
"""
# fields = ["text", "text_lengths", "feats", "feats_lengths", "durations", "pitch", "energy", "speech"]
text = [np.array(item["text"], dtype=np.int64) for item in examples]
feats = [np.array(item["feats"], dtype=np.float32) for item in examples]
durations = [
np.array(item["durations"], dtype=np.int64) for item in examples
]
pitch = [np.array(item["pitch"], dtype=np.float32) for item in examples]
energy = [np.array(item["energy"], dtype=np.float32) for item in examples]
speech = [np.array(item["wave"], dtype=np.float32) for item in examples]
text_lengths = [
np.array(item["text_lengths"], dtype=np.int64) for item in examples
]
feats_lengths = [
np.array(item["feats_lengths"], dtype=np.int64) for item in examples
]
text = batch_sequences(text)
feats = batch_sequences(feats)
durations = batch_sequences(durations)
pitch = batch_sequences(pitch)
energy = batch_sequences(energy)
speech = batch_sequences(speech)
# convert each batch to paddle.Tensor
text = paddle.to_tensor(text)
feats = paddle.to_tensor(feats)
durations = paddle.to_tensor(durations)
pitch = paddle.to_tensor(pitch)
energy = paddle.to_tensor(energy)
text_lengths = paddle.to_tensor(text_lengths)
feats_lengths = paddle.to_tensor(feats_lengths)
batch = {
"text": text,
"text_lengths": text_lengths,
"feats": feats,
"feats_lengths": feats_lengths,
"durations": durations,
"durations_lengths": text_lengths,
"pitch": pitch,
"energy": energy,
"speech": speech,
}
return batch
def jets_multi_spk_batch_fn(examples):
"""
Returns:
Dict[str, Any]:
- text (Tensor): Text index tensor (B, T_text).
- text_lengths (Tensor): Text length tensor (B,).
- feats (Tensor): Feature tensor (B, T_feats, aux_channels).
- feats_lengths (Tensor): Feature length tensor (B,).
- durations (Tensor): Feature tensor (B, T_text,).
- durations_lengths (Tensor): Durations length tensor (B,).
- pitch (Tensor): Feature tensor (B, pitch_length,).
- energy (Tensor): Feature tensor (B, energy_length,).
- speech (Tensor): Speech waveform tensor (B, T_wav).
- spk_id (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
- spk_emb (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
"""
# fields = ["text", "text_lengths", "feats", "feats_lengths", "durations", "pitch", "energy", "speech", "spk_id"/"spk_emb"]
text = [np.array(item["text"], dtype=np.int64) for item in examples]
feats = [np.array(item["feats"], dtype=np.float32) for item in examples]
durations = [
np.array(item["durations"], dtype=np.int64) for item in examples
]
pitch = [np.array(item["pitch"], dtype=np.float32) for item in examples]
energy = [np.array(item["energy"], dtype=np.float32) for item in examples]
speech = [np.array(item["wave"], dtype=np.float32) for item in examples]
text_lengths = [
np.array(item["text_lengths"], dtype=np.int64) for item in examples
]
feats_lengths = [
np.array(item["feats_lengths"], dtype=np.int64) for item in examples
]
text = batch_sequences(text)
feats = batch_sequences(feats)
durations = batch_sequences(durations)
pitch = batch_sequences(pitch)
energy = batch_sequences(energy)
speech = batch_sequences(speech)
# convert each batch to paddle.Tensor
text = paddle.to_tensor(text)
feats = paddle.to_tensor(feats)
durations = paddle.to_tensor(durations)
pitch = paddle.to_tensor(pitch)
energy = paddle.to_tensor(energy)
text_lengths = paddle.to_tensor(text_lengths)
feats_lengths = paddle.to_tensor(feats_lengths)
batch = {
"text": text,
"text_lengths": text_lengths,
"feats": feats,
"feats_lengths": feats_lengths,
"durations": durations,
"durations_lengths": text_lengths,
"pitch": pitch,
"energy": energy,
"speech": speech,
}
# spk_emb has a higher priority than spk_id
if "spk_emb" in examples[0]:
spk_emb = [
np.array(item["spk_emb"], dtype=np.float32) for item in examples
]
spk_emb = batch_sequences(spk_emb)
spk_emb = paddle.to_tensor(spk_emb)
batch["spk_emb"] = spk_emb
elif "spk_id" in examples[0]:
spk_id = [np.array(item["spk_id"], dtype=np.int64) for item in examples]
spk_id = paddle.to_tensor(spk_id)
batch["spk_id"] = spk_id
return batch
# an extra builder function is needed because parameters have to be passed in
def build_starganv2_vc_collate_fn(latent_dim: int=16, max_mel_length: int=192):

@ -0,0 +1,13 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@ -0,0 +1,172 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import paddle
import soundfile as sf
from timer import timer
from paddlespeech.t2s.exps.syn_utils import get_am_output
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_predictor
from paddlespeech.t2s.exps.syn_utils import get_sentences
from paddlespeech.t2s.utils import str2bool
def parse_args():
parser = argparse.ArgumentParser(
description="Paddle Inference with acoustic model & vocoder.")
# acoustic model
parser.add_argument(
'--am',
type=str,
default='jets_csmsc',
choices=['jets_csmsc', 'jets_aishell3'],
help='Choose acoustic model type of tts task.')
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
parser.add_argument(
'--spk_id',
type=int,
default=0,
help='spk id for multi speaker acoustic model')
# other
parser.add_argument(
'--lang',
type=str,
default='zh',
help='Choose model language. zh or en or mix')
parser.add_argument(
"--text",
type=str,
help="text to synthesize, a 'utt_id sentence' pair per line")
parser.add_argument(
"--add-blank",
type=str2bool,
default=True,
help="whether to add blank between phones")
parser.add_argument(
"--inference_dir", type=str, help="dir to save inference models")
parser.add_argument("--output_dir", type=str, help="output dir")
# inference
parser.add_argument(
"--use_trt",
type=str2bool,
default=False,
help="whether to use TensorRT or not in GPU", )
parser.add_argument(
"--use_mkldnn",
type=str2bool,
default=False,
help="whether to use MKLDNN or not in CPU.", )
parser.add_argument(
"--precision",
type=str,
default='fp32',
choices=['fp32', 'fp16', 'bf16', 'int8'],
help="mode of running")
parser.add_argument(
"--device",
default="gpu",
choices=["gpu", "cpu"],
help="Device selected for inference.", )
parser.add_argument('--cpu_threads', type=int, default=1)
args, _ = parser.parse_known_args()
return args
# only inference for models trained with csmsc now
def main():
args = parse_args()
paddle.set_device(args.device)
# frontend
frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)
# am_predictor
am_predictor = get_predictor(
model_dir=args.inference_dir,
model_file=args.am + ".pdmodel",
params_file=args.am + ".pdiparams",
device=args.device,
use_trt=args.use_trt,
use_mkldnn=args.use_mkldnn,
cpu_threads=args.cpu_threads,
precision=args.precision)
# model: {model_name}_{dataset}
am_dataset = args.am[args.am.rindex('_') + 1:]
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
sentences = get_sentences(text_file=args.text, lang=args.lang)
merge_sentences = True
add_blank = args.add_blank
# jets's fs is 22050
fs = 22050
# warmup
for utt_id, sentence in sentences[:3]:
with timer() as t:
wav = get_am_output(
input=sentence,
am_predictor=am_predictor,
am=args.am,
frontend=frontend,
lang=args.lang,
merge_sentences=merge_sentences,
speaker_dict=args.speaker_dict,
spk_id=args.spk_id, )
speed = wav.size / t.elapse
rtf = fs / speed
print(
f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
)
print("warm up done!")
N = 0
T = 0
for utt_id, sentence in sentences:
with timer() as t:
wav = get_am_output(
input=sentence,
am_predictor=am_predictor,
am=args.am,
frontend=frontend,
lang=args.lang,
merge_sentences=merge_sentences,
speaker_dict=args.speaker_dict,
spk_id=args.spk_id, )
N += wav.size
T += t.elapse
speed = wav.size / t.elapse
rtf = fs / speed
sf.write(output_dir / (utt_id + ".wav"), wav, samplerate=fs)
print(
f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
)
print(f"{utt_id} done!")
print(f"generation speed: {N / T}Hz, RTF: {fs / (N / T) }")
if __name__ == "__main__":
main()

@ -0,0 +1,163 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Normalize feature files and dump them."""
import argparse
import logging
from operator import itemgetter
from pathlib import Path
import jsonlines
import numpy as np
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm
from paddlespeech.t2s.datasets.data_table import DataTable
def main():
"""Run preprocessing process."""
parser = argparse.ArgumentParser(
description="Normalize dumped raw features (See detail in parallel_wavegan/bin/normalize.py)."
)
parser.add_argument(
"--metadata",
type=str,
required=True,
help="path to the metadata.jsonl that lists the raw feature files to be normalized.")
parser.add_argument(
"--dumpdir",
type=str,
required=True,
help="directory to dump normalized feature files.")
parser.add_argument(
"--feats-stats", type=str, required=True, help="feats statistics file.")
parser.add_argument(
"--pitch-stats", type=str, required=True, help="pitch statistics file.")
parser.add_argument(
"--energy-stats",
type=str,
required=True,
help="energy statistics file.")
parser.add_argument(
"--phones-dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker-dict", type=str, default=None, help="speaker id map file.")
args = parser.parse_args()
dumpdir = Path(args.dumpdir).expanduser()
# use absolute path
dumpdir = dumpdir.resolve()
dumpdir.mkdir(parents=True, exist_ok=True)
# get dataset
with jsonlines.open(args.metadata, 'r') as reader:
metadata = list(reader)
dataset = DataTable(
metadata,
converters={
"feats": np.load,
"pitch": np.load,
"energy": np.load,
"wave": str,
})
logging.info(f"The number of files = {len(dataset)}.")
# restore scaler
feats_scaler = StandardScaler()
feats_scaler.mean_ = np.load(args.feats_stats)[0]
feats_scaler.scale_ = np.load(args.feats_stats)[1]
feats_scaler.n_features_in_ = feats_scaler.mean_.shape[0]
pitch_scaler = StandardScaler()
pitch_scaler.mean_ = np.load(args.pitch_stats)[0]
pitch_scaler.scale_ = np.load(args.pitch_stats)[1]
pitch_scaler.n_features_in_ = pitch_scaler.mean_.shape[0]
energy_scaler = StandardScaler()
energy_scaler.mean_ = np.load(args.energy_stats)[0]
energy_scaler.scale_ = np.load(args.energy_stats)[1]
energy_scaler.n_features_in_ = energy_scaler.mean_.shape[0]
vocab_phones = {}
with open(args.phones_dict, 'rt') as f:
phn_id = [line.strip().split() for line in f.readlines()]
for phn, id in phn_id:
vocab_phones[phn] = int(id)
vocab_speaker = {}
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
for spk, id in spk_id:
vocab_speaker[spk] = int(id)
# process each file
output_metadata = []
for item in tqdm(dataset):
utt_id = item['utt_id']
feats = item['feats']
pitch = item['pitch']
energy = item['energy']
wave_path = item['wave']
# normalize
feats = feats_scaler.transform(feats)
feats_dir = dumpdir / "data_feats"
feats_dir.mkdir(parents=True, exist_ok=True)
feats_path = feats_dir / f"{utt_id}_feats.npy"
np.save(feats_path, feats.astype(np.float32), allow_pickle=False)
pitch = pitch_scaler.transform(pitch)
pitch_dir = dumpdir / "data_pitch"
pitch_dir.mkdir(parents=True, exist_ok=True)
pitch_path = pitch_dir / f"{utt_id}_pitch.npy"
np.save(pitch_path, pitch.astype(np.float32), allow_pickle=False)
energy = energy_scaler.transform(energy)
energy_dir = dumpdir / "data_energy"
energy_dir.mkdir(parents=True, exist_ok=True)
energy_path = energy_dir / f"{utt_id}_energy.npy"
np.save(energy_path, energy.astype(np.float32), allow_pickle=False)
phone_ids = [vocab_phones[p] for p in item['phones']]
spk_id = vocab_speaker[item["speaker"]]
record = {
"utt_id": item['utt_id'],
"spk_id": spk_id,
"text": phone_ids,
"text_lengths": item['text_lengths'],
"feats_lengths": item['feats_lengths'],
"durations": item['durations'],
"feats": str(feats_path),
"pitch": str(pitch_path),
"energy": str(energy_path),
"wave": str(wave_path),
}
# add spk_emb for voice cloning
if "spk_emb" in item:
record["spk_emb"] = str(item["spk_emb"])
output_metadata.append(record)
output_metadata.sort(key=itemgetter('utt_id'))
output_metadata_path = Path(args.dumpdir) / "metadata.jsonl"
with jsonlines.open(output_metadata_path, 'w') as writer:
for item in output_metadata:
writer.write(item)
logging.info(f"metadata dumped into {output_metadata_path}")
if __name__ == "__main__":
main()

@ -0,0 +1,451 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
from concurrent.futures import ThreadPoolExecutor
from operator import itemgetter
from pathlib import Path
from typing import Any
from typing import Dict
from typing import List
import jsonlines
import librosa
import numpy as np
import tqdm
import yaml
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.get_feats import Energy
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.datasets.get_feats import Pitch
from paddlespeech.t2s.datasets.preprocess_utils import compare_duration_and_mel_length
from paddlespeech.t2s.datasets.preprocess_utils import get_input_token
from paddlespeech.t2s.datasets.preprocess_utils import get_phn_dur
from paddlespeech.t2s.datasets.preprocess_utils import get_spk_id_map
from paddlespeech.t2s.datasets.preprocess_utils import merge_silence
from paddlespeech.t2s.utils import str2bool
def process_sentence(config: Dict[str, Any],
fp: Path,
sentences: Dict,
output_dir: Path,
mel_extractor=None,
pitch_extractor=None,
energy_extractor=None,
cut_sil: bool=True,
spk_emb_dir: Path=None,
token_average: bool=True):
utt_id = fp.stem
# for vctk
if utt_id.endswith("_mic2"):
utt_id = utt_id[:-5]
record = None
if utt_id in sentences:
# reading, resampling may occur
wav, _ = librosa.load(
str(fp), sr=config.fs,
mono=False) if "canton" in str(fp) else librosa.load(
str(fp), sr=config.fs)
if len(wav.shape) == 2 and "canton" in str(fp):
# Note that Cantonese datasets should be placed in ~/datasets/canton_all. Otherwise, it may cause problems.
wav = wav[0]
wav = np.ascontiguousarray(wav)
elif len(wav.shape) != 1:
return record
max_value = np.abs(wav).max()
if max_value > 1.0:
wav = wav / max_value
assert len(wav.shape) == 1, f"{utt_id} is not a mono-channel audio."
assert np.abs(wav).max(
) <= 1.0, f"{utt_id} seems to be different from 16 bit PCM."
phones = sentences[utt_id][0]
durations = sentences[utt_id][1]
speaker = sentences[utt_id][2]
d_cumsum = np.pad(np.array(durations).cumsum(0), (1, 0), 'constant')
# slightly less precise than using *.TextGrid directly
times = librosa.frames_to_time(
d_cumsum, sr=config.fs, hop_length=config.n_shift)
if cut_sil:
start = 0
end = d_cumsum[-1]
if phones[0] == "sil" and len(durations) > 1:
start = times[1]
durations = durations[1:]
phones = phones[1:]
if phones[-1] == 'sil' and len(durations) > 1:
end = times[-2]
durations = durations[:-1]
phones = phones[:-1]
sentences[utt_id][0] = phones
sentences[utt_id][1] = durations
start, end = librosa.time_to_samples([start, end], sr=config.fs)
wav = wav[start:end]
# extract mel feats
logmel = mel_extractor.get_log_mel_fbank(wav)
# change duration according to mel_length
compare_duration_and_mel_length(sentences, utt_id, logmel)
# utt_id may be popped in compare_duration_and_mel_length
if utt_id not in sentences:
return None
phones = sentences[utt_id][0]
durations = sentences[utt_id][1]
num_frames = logmel.shape[0]
assert sum(durations) == num_frames
mel_dir = output_dir / "data_feats"
mel_dir.mkdir(parents=True, exist_ok=True)
mel_path = mel_dir / (utt_id + "_feats.npy")
np.save(mel_path, logmel)
if wav.size < num_frames * config.n_shift:
wav = np.pad(
wav, (0, num_frames * config.n_shift - wav.size),
mode="reflect")
else:
wav = wav[:num_frames * config.n_shift]
wave_dir = output_dir / "data_wave"
wave_dir.mkdir(parents=True, exist_ok=True)
wav_path = wave_dir / (utt_id + "_wave.npy")
# (num_samples, )
np.save(wav_path, wav)
# extract pitch and energy
if token_average == True:
f0 = pitch_extractor.get_pitch(
wav,
duration=np.array(durations),
use_token_averaged_f0=token_average)
if (f0 == 0).all():
return None
assert f0.shape[0] == len(durations)
else:
f0 = pitch_extractor.get_pitch(
wav, use_token_averaged_f0=token_average)
if (f0 == 0).all():
return None
f0 = f0[:num_frames]
assert f0.shape[0] == num_frames
f0_dir = output_dir / "data_pitch"
f0_dir.mkdir(parents=True, exist_ok=True)
f0_path = f0_dir / (utt_id + "_pitch.npy")
np.save(f0_path, f0)
if token_average == True:
energy = energy_extractor.get_energy(
wav,
duration=np.array(durations),
use_token_averaged_energy=token_average)
assert energy.shape[0] == len(durations)
else:
energy = energy_extractor.get_energy(
wav, use_token_averaged_energy=token_average)
energy = energy[:num_frames]
assert energy.shape[0] == num_frames
energy_dir = output_dir / "data_energy"
energy_dir.mkdir(parents=True, exist_ok=True)
energy_path = energy_dir / (utt_id + "_energy.npy")
np.save(energy_path, energy)
record = {
"utt_id": utt_id,
"phones": phones,
"text_lengths": len(phones),
"feats_lengths": num_frames,
"durations": durations,
"feats": str(mel_path),
"pitch": str(f0_path),
"energy": str(energy_path),
"wave": str(wav_path),
"speaker": speaker
}
if spk_emb_dir:
if speaker in os.listdir(spk_emb_dir):
embed_name = utt_id + ".npy"
embed_path = spk_emb_dir / speaker / embed_name
if embed_path.is_file():
record["spk_emb"] = str(embed_path)
else:
return None
return record
def process_sentences(config,
fps: List[Path],
sentences: Dict,
output_dir: Path,
mel_extractor=None,
pitch_extractor=None,
energy_extractor=None,
nprocs: int=1,
cut_sil: bool=True,
spk_emb_dir: Path=None,
write_metadata_method: str='w',
token_average: bool=True):
if nprocs == 1:
results = []
for fp in tqdm.tqdm(fps, total=len(fps)):
record = process_sentence(
config=config,
fp=fp,
sentences=sentences,
output_dir=output_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
cut_sil=cut_sil,
spk_emb_dir=spk_emb_dir,
token_average=token_average)
if record:
results.append(record)
else:
with ThreadPoolExecutor(nprocs) as pool:
futures = []
with tqdm.tqdm(total=len(fps)) as progress:
for fp in fps:
future = pool.submit(process_sentence, config, fp,
sentences, output_dir, mel_extractor,
pitch_extractor, energy_extractor,
cut_sil, spk_emb_dir, token_average)
future.add_done_callback(lambda p: progress.update())
futures.append(future)
results = []
for ft in futures:
record = ft.result()
if record:
results.append(record)
results.sort(key=itemgetter("utt_id"))
with jsonlines.open(output_dir / "metadata.jsonl",
write_metadata_method) as writer:
for item in results:
writer.write(item)
print("Done")
def main():
# parse config and args
parser = argparse.ArgumentParser(
description="Preprocess audio and then extract features.")
parser.add_argument(
"--dataset",
default="baker",
type=str,
help="name of dataset, should be in {baker, aishell3, canton, ljspeech, vctk} now")
parser.add_argument(
"--rootdir", default=None, type=str, help="directory to dataset.")
parser.add_argument(
"--dumpdir",
type=str,
required=True,
help="directory to dump feature files.")
parser.add_argument(
"--dur-file", default=None, type=str, help="path to durations.txt.")
parser.add_argument("--config", type=str, help="jets config file.")
parser.add_argument(
"--num-cpu", type=int, default=1, help="number of process.")
parser.add_argument(
"--cut-sil",
type=str2bool,
default=True,
help="whether cut sil in the edge of audio")
parser.add_argument(
"--spk_emb_dir",
default=None,
type=str,
help="directory to speaker embedding files.")
parser.add_argument(
"--write_metadata_method",
default="w",
type=str,
choices=["w", "a"],
help="How the metadata.jsonl file is written.")
parser.add_argument(
"--token_average",
type=str2bool,
default=False,
help="Average the energy and pitch according to durations")
args = parser.parse_args()
rootdir = Path(args.rootdir).expanduser()
dumpdir = Path(args.dumpdir).expanduser()
# use absolute path
dumpdir = dumpdir.resolve()
dumpdir.mkdir(parents=True, exist_ok=True)
dur_file = Path(args.dur_file).expanduser()
if args.spk_emb_dir:
spk_emb_dir = Path(args.spk_emb_dir).expanduser().resolve()
else:
spk_emb_dir = None
assert rootdir.is_dir()
assert dur_file.is_file()
with open(args.config, 'rt') as f:
config = CfgNode(yaml.safe_load(f))
sentences, speaker_set = get_phn_dur(dur_file)
merge_silence(sentences)
phone_id_map_path = dumpdir / "phone_id_map.txt"
speaker_id_map_path = dumpdir / "speaker_id_map.txt"
get_input_token(sentences, phone_id_map_path, args.dataset)
get_spk_id_map(speaker_set, speaker_id_map_path)
if args.dataset == "baker":
wav_files = sorted(list((rootdir / "Wave").rglob("*.wav")))
# split data into 3 sections
num_train = 9800
num_dev = 100
train_wav_files = wav_files[:num_train]
dev_wav_files = wav_files[num_train:num_train + num_dev]
test_wav_files = wav_files[num_train + num_dev:]
elif args.dataset == "aishell3":
sub_num_dev = 5
wav_dir = rootdir / "train" / "wav"
train_wav_files = []
dev_wav_files = []
test_wav_files = []
for speaker in os.listdir(wav_dir):
wav_files = sorted(list((wav_dir / speaker).rglob("*.wav")))
if len(wav_files) > 100:
train_wav_files += wav_files[:-sub_num_dev * 2]
dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
test_wav_files += wav_files[-sub_num_dev:]
else:
train_wav_files += wav_files
elif args.dataset == "canton":
sub_num_dev = 5
wav_dir = rootdir / "WAV"
train_wav_files = []
dev_wav_files = []
test_wav_files = []
for speaker in os.listdir(wav_dir):
wav_files = sorted(list((wav_dir / speaker).rglob("*.wav")))
if len(wav_files) > 100:
train_wav_files += wav_files[:-sub_num_dev * 2]
dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
test_wav_files += wav_files[-sub_num_dev:]
else:
train_wav_files += wav_files
elif args.dataset == "ljspeech":
wav_files = sorted(list((rootdir / "wavs").rglob("*.wav")))
# split data into 3 sections
num_train = 12900
num_dev = 100
train_wav_files = wav_files[:num_train]
dev_wav_files = wav_files[num_train:num_train + num_dev]
test_wav_files = wav_files[num_train + num_dev:]
elif args.dataset == "vctk":
sub_num_dev = 5
wav_dir = rootdir / "wav48_silence_trimmed"
train_wav_files = []
dev_wav_files = []
test_wav_files = []
for speaker in os.listdir(wav_dir):
wav_files = sorted(list((wav_dir / speaker).rglob("*_mic2.flac")))
if len(wav_files) > 100:
train_wav_files += wav_files[:-sub_num_dev * 2]
dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
test_wav_files += wav_files[-sub_num_dev:]
else:
train_wav_files += wav_files
else:
print("dataset should be in {baker, aishell3, canton, ljspeech, vctk} now!")
train_dump_dir = dumpdir / "train" / "raw"
train_dump_dir.mkdir(parents=True, exist_ok=True)
dev_dump_dir = dumpdir / "dev" / "raw"
dev_dump_dir.mkdir(parents=True, exist_ok=True)
test_dump_dir = dumpdir / "test" / "raw"
test_dump_dir.mkdir(parents=True, exist_ok=True)
# Extractor
mel_extractor = LogMelFBank(
sr=config.fs,
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window,
n_mels=config.n_mels,
fmin=config.fmin,
fmax=config.fmax)
pitch_extractor = Pitch(
sr=config.fs,
hop_length=config.n_shift,
f0min=config.f0min,
f0max=config.f0max)
energy_extractor = Energy(
n_fft=config.n_fft,
hop_length=config.n_shift,
win_length=config.win_length,
window=config.window)
# process for the 3 sections
if train_wav_files:
process_sentences(
config=config,
fps=train_wav_files,
sentences=sentences,
output_dir=train_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=args.num_cpu,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method,
token_average=args.token_average)
if dev_wav_files:
process_sentences(
config=config,
fps=dev_wav_files,
sentences=sentences,
output_dir=dev_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=args.num_cpu,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method,
token_average=args.token_average)
if test_wav_files:
process_sentences(
config=config,
fps=test_wav_files,
sentences=sentences,
output_dir=test_dump_dir,
mel_extractor=mel_extractor,
pitch_extractor=pitch_extractor,
energy_extractor=energy_extractor,
nprocs=args.num_cpu,
cut_sil=args.cut_sil,
spk_emb_dir=spk_emb_dir,
write_metadata_method=args.write_metadata_method,
token_average=args.token_average)
if __name__ == "__main__":
main()

@ -0,0 +1,153 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import jsonlines
import numpy as np
import paddle
import soundfile as sf
import yaml
from timer import timer
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.utils import str2bool
def evaluate(args):
# construct dataset for evaluation
with jsonlines.open(args.test_metadata, 'r') as reader:
test_metadata = list(reader)
# Init body.
with open(args.config) as f:
config = CfgNode(yaml.safe_load(f))
print("========Args========")
print(yaml.safe_dump(vars(args)))
print("========Config========")
print(config)
fields = ["utt_id", "text"]
converters = {}
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker jets!")
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Evaluating voice cloning!")
fields += ["spk_emb"]
else:
print("single speaker jets!")
print("spk_num:", spk_num)
test_dataset = DataTable(
data=test_metadata,
fields=fields,
converters=converters, )
with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1
config["model"]["generator_params"]["spks"] = spk_num
jets = JETS(idim=vocab_size, odim=odim, **config["model"])
jets.set_state_dict(paddle.load(args.ckpt)["main_params"])
jets.eval()
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
N = 0
T = 0
for datum in test_dataset:
utt_id = datum["utt_id"]
phone_ids = paddle.to_tensor(datum["text"])
with timer() as t:
with paddle.no_grad():
spk_emb = None
spk_id = None
# multi speaker
if args.voice_cloning and "spk_emb" in datum:
spk_emb = paddle.to_tensor(np.load(datum["spk_emb"]))
elif "spk_id" in datum:
spk_id = paddle.to_tensor(datum["spk_id"])
out = jets.inference(
text=phone_ids, sids=spk_id, spembs=spk_emb)
wav = out["wav"]
wav = wav.numpy()
N += wav.size
T += t.elapse
speed = wav.size / t.elapse
rtf = config.fs / speed
print(
f"{utt_id}, wave: {wav.size}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
)
sf.write(str(output_dir / (utt_id + ".wav")), wav, samplerate=config.fs)
print(f"{utt_id} done!")
print(f"generation speed: {N / T}Hz, RTF: {config.fs / (N / T) }")
def parse_args():
# parse args and config
parser = argparse.ArgumentParser(description="Synthesize with JETS")
# model
parser.add_argument(
'--config', type=str, default=None, help='Config of JETS.')
parser.add_argument(
'--ckpt', type=str, default=None, help='Checkpoint file of JETS.')
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
parser.add_argument(
"--voice-cloning",
type=str2bool,
default=False,
help="whether training voice cloning model.")
# other
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument("--test_metadata", type=str, help="test metadata.")
parser.add_argument("--output_dir", type=str, help="output dir.")
args = parser.parse_args()
return args
def main():
args = parse_args()
if args.ngpu == 0:
paddle.set_device("cpu")
elif args.ngpu > 0:
paddle.set_device("gpu")
else:
print("ngpu should be >= 0!")
evaluate(args)
if __name__ == "__main__":
main()

@ -0,0 +1,189 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from pathlib import Path
import paddle
import soundfile as sf
import yaml
from timer import timer
from yacs.config import CfgNode
from paddlespeech.t2s.exps.syn_utils import am_to_static
from paddlespeech.t2s.exps.syn_utils import get_frontend
from paddlespeech.t2s.exps.syn_utils import get_sentences
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.models.jets import JETSInference
from paddlespeech.t2s.utils import str2bool
def evaluate(args):
# Init body.
with open(args.config) as f:
config = CfgNode(yaml.safe_load(f))
print("========Args========")
print(yaml.safe_dump(vars(args)))
print("========Config========")
print(config)
sentences = get_sentences(text_file=args.text, lang=args.lang)
# frontend
frontend = get_frontend(lang=args.lang, phones_dict=args.phones_dict)
# acoustic model
am_name = args.am[:args.am.rindex('_')]
am_dataset = args.am[args.am.rindex('_') + 1:]
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker jets!")
with open(args.speaker_dict, 'rt') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
else:
print("single speaker jets!")
print("spk_num:", spk_num)
with open(args.phones_dict, "r") as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_fft // 2 + 1
config["model"]["generator_params"]["spks"] = spk_num
jets = JETS(idim=vocab_size, odim=odim, **config["model"])
jets.set_state_dict(paddle.load(args.ckpt)["main_params"])
jets.eval()
jets_inference = JETSInference(jets)
# whether dygraph to static
if args.inference_dir:
jets_inference = am_to_static(
am_inference=jets_inference,
am=args.am,
inference_dir=args.inference_dir,
speaker_dict=args.speaker_dict)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
merge_sentences = False
N = 0
T = 0
for utt_id, sentence in sentences:
with timer() as t:
if args.lang == 'zh':
input_ids = frontend.get_input_ids(
sentence, merge_sentences=merge_sentences)
phone_ids = input_ids["phone_ids"]
elif args.lang == 'en':
input_ids = frontend.get_input_ids(
sentence, merge_sentences=merge_sentences)
phone_ids = input_ids["phone_ids"]
else:
print("lang should be in {'zh', 'en'}!")
with paddle.no_grad():
flags = 0
for i in range(len(phone_ids)):
part_phone_ids = phone_ids[i]
spk_id = None
if am_dataset in {"aishell3",
"vctk"} and spk_num is not None:
spk_id = paddle.to_tensor(args.spk_id)
wav = jets_inference(part_phone_ids, spk_id)
else:
wav = jets_inference(part_phone_ids)
if flags == 0:
wav_all = wav
flags = 1
else:
wav_all = paddle.concat([wav_all, wav])
wav = wav_all.numpy()
N += wav.size
T += t.elapse
speed = wav.size / t.elapse
rtf = config.fs / speed
print(
f"{utt_id}, wave: {wav.shape}, time: {t.elapse}s, Hz: {speed}, RTF: {rtf}."
)
sf.write(str(output_dir / (utt_id + ".wav")), wav, samplerate=config.fs)
print(f"{utt_id} done!")
print(f"generation speed: {N / T}Hz, RTF: {config.fs / (N / T) }")
def parse_args():
# parse args and config
parser = argparse.ArgumentParser(description="Synthesize with JETS")
# model
parser.add_argument(
'--config', type=str, default=None, help='Config of JETS.')
parser.add_argument(
'--ckpt', type=str, default=None, help='Checkpoint file of JETS.')
parser.add_argument(
"--phones_dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker_dict", type=str, default=None, help="speaker id map file.")
parser.add_argument(
'--spk_id',
type=int,
default=0,
help='spk id for multi speaker acoustic model')
# other
parser.add_argument(
'--lang',
type=str,
default='zh',
help='Choose model language. zh or en')
parser.add_argument(
"--inference_dir",
type=str,
default=None,
help="dir to save inference models")
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument(
"--text",
type=str,
help="text to synthesize, a 'utt_id sentence' pair per line.")
parser.add_argument("--output_dir", type=str, help="output dir.")
parser.add_argument(
'--am',
type=str,
default='jets_csmsc',
help='Choose acoustic model type of tts task.')
args = parser.parse_args()
return args
def main():
args = parse_args()
if args.ngpu == 0:
paddle.set_device("cpu")
elif args.ngpu > 0:
paddle.set_device("gpu")
else:
print("ngpu should be >= 0!")
evaluate(args)
if __name__ == "__main__":
main()

@ -0,0 +1,305 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import shutil
from pathlib import Path
import jsonlines
import numpy as np
import paddle
import yaml
from paddle import DataParallel
from paddle import distributed as dist
from paddle.io import DataLoader
from paddle.optimizer import AdamW
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.am_batch_fn import jets_multi_spk_batch_fn
from paddlespeech.t2s.datasets.am_batch_fn import jets_single_spk_batch_fn
from paddlespeech.t2s.datasets.data_table import DataTable
from paddlespeech.t2s.datasets.sampler import ErnieSATSampler
from paddlespeech.t2s.models.jets import JETS
from paddlespeech.t2s.models.jets import JETSEvaluator
from paddlespeech.t2s.models.jets import JETSUpdater
from paddlespeech.t2s.modules.losses import DiscriminatorAdversarialLoss
from paddlespeech.t2s.modules.losses import FeatureMatchLoss
from paddlespeech.t2s.modules.losses import ForwardSumLoss
from paddlespeech.t2s.modules.losses import GeneratorAdversarialLoss
from paddlespeech.t2s.modules.losses import MelSpectrogramLoss
from paddlespeech.t2s.modules.losses import VarianceLoss
from paddlespeech.t2s.training.extensions.snapshot import Snapshot
from paddlespeech.t2s.training.extensions.visualizer import VisualDL
from paddlespeech.t2s.training.optimizer import scheduler_classes
from paddlespeech.t2s.training.seeding import seed_everything
from paddlespeech.t2s.training.trainer import Trainer
from paddlespeech.t2s.utils import str2bool
def train_sp(args, config):
# decides device type and whether to run in parallel
# setup running environment correctly
world_size = paddle.distributed.get_world_size()
if (not paddle.is_compiled_with_cuda()) or args.ngpu == 0:
paddle.set_device("cpu")
else:
paddle.set_device("gpu")
if world_size > 1:
paddle.distributed.init_parallel_env()
# set the random seed, it is a must for multiprocess training
seed_everything(config.seed)
print(
f"rank: {dist.get_rank()}, pid: {os.getpid()}, parent_pid: {os.getppid()}",
)
# the DataLoader logger is too verbose
logging.getLogger("DataLoader").disabled = True
fields = [
"text", "text_lengths", "feats", "feats_lengths", "wave", "durations",
"pitch", "energy"
]
converters = {
"wave": np.load,
"feats": np.load,
"pitch": np.load,
"energy": np.load,
}
spk_num = None
if args.speaker_dict is not None:
print("multiple speaker jets!")
collate_fn = jets_multi_spk_batch_fn
with open(args.speaker_dict, 'rt', encoding='utf-8') as f:
spk_id = [line.strip().split() for line in f.readlines()]
spk_num = len(spk_id)
fields += ["spk_id"]
elif args.voice_cloning:
print("Training voice cloning!")
collate_fn = jets_multi_spk_batch_fn
fields += ["spk_emb"]
converters["spk_emb"] = np.load
else:
print("single speaker jets!")
collate_fn = jets_single_spk_batch_fn
print("spk_num:", spk_num)
# construct dataset for training and validation
with jsonlines.open(args.train_metadata, 'r') as reader:
train_metadata = list(reader)
train_dataset = DataTable(
data=train_metadata,
fields=fields,
converters=converters, )
with jsonlines.open(args.dev_metadata, 'r') as reader:
dev_metadata = list(reader)
dev_dataset = DataTable(
data=dev_metadata,
fields=fields,
converters=converters, )
# collate function and dataloader
train_sampler = ErnieSATSampler(
train_dataset,
batch_size=config.batch_size,
shuffle=False,
drop_last=True)
dev_sampler = ErnieSATSampler(
dev_dataset,
batch_size=config.batch_size,
shuffle=False,
drop_last=False)
print("samplers done!")
train_dataloader = DataLoader(
train_dataset,
batch_sampler=train_sampler,
collate_fn=collate_fn,
num_workers=config.num_workers)
dev_dataloader = DataLoader(
dev_dataset,
batch_sampler=dev_sampler,
collate_fn=collate_fn,
num_workers=config.num_workers)
print("dataloaders done!")
with open(args.phones_dict, 'rt', encoding='utf-8') as f:
phn_id = [line.strip().split() for line in f.readlines()]
vocab_size = len(phn_id)
print("vocab_size:", vocab_size)
odim = config.n_mels
config["model"]["generator_params"]["spks"] = spk_num
model = JETS(idim=vocab_size, odim=odim, **config["model"])
gen_parameters = model.generator.parameters()
dis_parameters = model.discriminator.parameters()
if world_size > 1:
model = DataParallel(model)
gen_parameters = model._layers.generator.parameters()
dis_parameters = model._layers.discriminator.parameters()
print("model done!")
# loss
criterion_mel = MelSpectrogramLoss(
**config["mel_loss_params"], )
criterion_feat_match = FeatureMatchLoss(
**config["feat_match_loss_params"], )
criterion_gen_adv = GeneratorAdversarialLoss(
**config["generator_adv_loss_params"], )
criterion_dis_adv = DiscriminatorAdversarialLoss(
**config["discriminator_adv_loss_params"], )
criterion_var = VarianceLoss()
criterion_forwardsum = ForwardSumLoss()
print("criterions done!")
lr_schedule_g = scheduler_classes[config["generator_scheduler"]](
**config["generator_scheduler_params"])
optimizer_g = AdamW(
learning_rate=lr_schedule_g,
parameters=gen_parameters,
**config["generator_optimizer_params"])
lr_schedule_d = scheduler_classes[config["discriminator_scheduler"]](
**config["discriminator_scheduler_params"])
optimizer_d = AdamW(
learning_rate=lr_schedule_d,
parameters=dis_parameters,
**config["discriminator_optimizer_params"])
print("optimizers done!")
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
if dist.get_rank() == 0:
config_name = args.config.split("/")[-1]
# copy conf to output_dir
shutil.copyfile(args.config, output_dir / config_name)
updater = JETSUpdater(
model=model,
optimizers={
"generator": optimizer_g,
"discriminator": optimizer_d,
},
criterions={
"mel": criterion_mel,
"feat_match": criterion_feat_match,
"gen_adv": criterion_gen_adv,
"dis_adv": criterion_dis_adv,
"var": criterion_var,
"forwardsum": criterion_forwardsum,
},
schedulers={
"generator": lr_schedule_g,
"discriminator": lr_schedule_d,
},
dataloader=train_dataloader,
lambda_adv=config.lambda_adv,
lambda_mel=config.lambda_mel,
lambda_feat_match=config.lambda_feat_match,
lambda_var=config.lambda_var,
lambda_align=config.lambda_align,
generator_first=config.generator_first,
use_alignment_module=config.use_alignment_module,
output_dir=output_dir)
evaluator = JETSEvaluator(
model=model,
criterions={
"mel": criterion_mel,
"feat_match": criterion_feat_match,
"gen_adv": criterion_gen_adv,
"dis_adv": criterion_dis_adv,
"var": criterion_var,
"forwardsum": criterion_forwardsum,
},
dataloader=dev_dataloader,
lambda_adv=config.lambda_adv,
lambda_mel=config.lambda_mel,
lambda_feat_match=config.lambda_feat_match,
lambda_var=config.lambda_var,
lambda_align=config.lambda_align,
generator_first=config.generator_first,
use_alignment_module=config.use_alignment_module,
output_dir=output_dir)
trainer = Trainer(
updater,
stop_trigger=(config.train_max_steps, "iteration"),
out=output_dir)
if dist.get_rank() == 0:
trainer.extend(
evaluator, trigger=(config.eval_interval_steps, 'iteration'))
trainer.extend(VisualDL(output_dir), trigger=(1, 'iteration'))
trainer.extend(
Snapshot(max_size=config.num_snapshots),
trigger=(config.save_interval_steps, 'iteration'))
print("Trainer Done!")
trainer.run()
def main():
# parse args and config and redirect to train_sp
parser = argparse.ArgumentParser(description="Train a JETS model.")
parser.add_argument("--config", type=str, help="JETS config file")
parser.add_argument("--train-metadata", type=str, help="training data.")
parser.add_argument("--dev-metadata", type=str, help="dev data.")
parser.add_argument("--output-dir", type=str, help="output dir.")
parser.add_argument(
"--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
parser.add_argument(
"--phones-dict", type=str, default=None, help="phone vocabulary file.")
parser.add_argument(
"--speaker-dict",
type=str,
default=None,
help="speaker id map file for multiple speaker model.")
parser.add_argument(
"--voice-cloning",
type=str2bool,
default=False,
help="whether training voice cloning model.")
args = parser.parse_args()
with open(args.config, 'rt') as f:
config = CfgNode(yaml.safe_load(f))
print("========Args========")
print(yaml.safe_dump(vars(args)))
print("========Config========")
print(config)
print(
f"master see the word size: {dist.get_world_size()}, from pid: {os.getpid()}"
)
# dispatch
if args.ngpu > 1:
dist.spawn(train_sp, (args, config), nprocs=args.ngpu)
else:
train_sp(args, config)
if __name__ == "__main__":
main()
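For quick reference, a hypothetical single-GPU invocation of the training entry point above; all paths are placeholders, and the flags mirror the argparse definition in `main()`:

```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --phones-dict=dump/phone_id_map.txt \
    --ngpu=1
```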

@ -506,7 +506,7 @@ def am_to_static(am_inference,
am_inference = jit.to_static(
am_inference, input_spec=[InputSpec([-1], dtype=paddle.int64)])
-elif am_name == 'vits':
+elif am_name == 'vits' or am_name == 'jets':
if am_dataset in {"aishell3", "vctk"} and speaker_dict is not None:
am_inference = jit.to_static(
am_inference,

@ -0,0 +1,15 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .jets import *
from .jets_updater import *

@ -0,0 +1,182 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import numpy as np
import paddle
import paddle.nn.functional as F
from numba import jit
from paddle import nn
from paddlespeech.t2s.modules.masked_fill import masked_fill
class AlignmentModule(nn.Layer):
"""Alignment Learning Framework proposed for parallel TTS models in:
https://arxiv.org/abs/2108.10447
"""
def __init__(self, adim, odim):
super().__init__()
self.t_conv1 = nn.Conv1D(adim, adim, kernel_size=3, padding=1)
self.t_conv2 = nn.Conv1D(adim, adim, kernel_size=1, padding=0)
self.f_conv1 = nn.Conv1D(odim, adim, kernel_size=3, padding=1)
self.f_conv2 = nn.Conv1D(adim, adim, kernel_size=3, padding=1)
self.f_conv3 = nn.Conv1D(adim, adim, kernel_size=1, padding=0)
def forward(self, text, feats, x_masks=None):
"""
Args:
text (Tensor): Batched text embedding (B, T_text, adim)
feats (Tensor): Batched acoustic feature (B, T_feats, odim)
x_masks (Tensor): Mask tensor (B, T_text)
Returns:
Tensor: Log probability of attention matrix (B, T_feats, T_text).
Tensor: Unnormalized attention score (B, T_feats, T_text).
"""
text = text.transpose((0, 2, 1))
text = F.relu(self.t_conv1(text))
text = self.t_conv2(text)
text = text.transpose((0, 2, 1))
feats = feats.transpose((0, 2, 1))
feats = F.relu(self.f_conv1(feats))
feats = F.relu(self.f_conv2(feats))
feats = self.f_conv3(feats)
feats = feats.transpose((0, 2, 1))
dist = feats.unsqueeze(2) - text.unsqueeze(1)
dist = paddle.linalg.norm(dist, p=2, axis=3)
score = -dist
if x_masks is not None:
x_masks = x_masks.unsqueeze(-2)
score = masked_fill(score, x_masks, -np.inf)
log_p_attn = F.log_softmax(score, axis=-1)
return log_p_attn, score
@jit(nopython=True)
def _monotonic_alignment_search(log_p_attn):
# https://arxiv.org/abs/2005.11129
T_mel = log_p_attn.shape[0]
T_inp = log_p_attn.shape[1]
Q = np.full((T_inp, T_mel), fill_value=-np.inf)
log_prob = log_p_attn.transpose(1, 0) # -> (T_inp,T_mel)
# 1. Q <- init first row for all j
for j in range(T_mel):
Q[0, j] = log_prob[0, :j + 1].sum()
# 2.
for j in range(1, T_mel):
for i in range(1, min(j + 1, T_inp)):
Q[i, j] = max(Q[i - 1, j - 1], Q[i, j - 1]) + log_prob[i, j]
# 3.
A = np.full((T_mel, ), fill_value=T_inp - 1)
for j in range(T_mel - 2, -1, -1): # T_mel-2, ..., 0
# 'i' in {A[j+1]-1, A[j+1]}
i_a = A[j + 1] - 1
i_b = A[j + 1]
if i_b == 0:
argmax_i = 0
elif Q[i_a, j] >= Q[i_b, j]:
argmax_i = i_a
else:
argmax_i = i_b
A[j] = argmax_i
return A
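# Toy illustration of the search above: with T_inp = 2 tokens and T_mel = 3
# frames, the DP fills Q row by row and the backtracking step assigns every
# frame to one token, e.g. A = [0, 0, 1] means frames 0-1 align to token 0 and
# frame 2 to token 1; np.bincount(A) in viterbi_decode below then turns this
# monotonic alignment into per-token durations [2, 1].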
def viterbi_decode(log_p_attn, text_lengths, feats_lengths):
"""
Args:
log_p_attn (Tensor):
Batched log probability of attention matrix (B, T_feats, T_text)
text_lengths (Tensor):
Text length tensor (B,)
feats_lengths (Tensor):
Feature length tensor (B,)
Returns:
Tensor:
Batched token duration extracted from `log_p_attn` (B,T_text)
Tensor:
binarization loss tensor ()
"""
B = log_p_attn.shape[0]
T_text = log_p_attn.shape[2]
device = log_p_attn.place
bin_loss = 0
ds = paddle.zeros((B, T_text), dtype="int32")
for b in range(B):
cur_log_p_attn = log_p_attn[b, :feats_lengths[b], :text_lengths[b]]
viterbi = _monotonic_alignment_search(cur_log_p_attn.numpy())
_ds = np.bincount(viterbi)
ds[b, :len(_ds)] = paddle.to_tensor(
_ds, place=device, dtype="int32")
t_idx = paddle.arange(feats_lengths[b])
bin_loss = bin_loss - cur_log_p_attn[t_idx, viterbi].mean()
bin_loss = bin_loss / B
return ds, bin_loss
@jit(nopython=True)
def _average_by_duration(ds, xs, text_lengths, feats_lengths):
B = ds.shape[0]
# xs_avg = np.zeros_like(ds)
xs_avg = np.zeros(shape=ds.shape, dtype=np.float32)
ds = ds.astype(np.int32)
for b in range(B):
t_text = text_lengths[b]
t_feats = feats_lengths[b]
d = ds[b, :t_text]
d_cumsum = d.cumsum()
d_cumsum = [0] + list(d_cumsum)
x = xs[b, :t_feats]
for n, (start, end) in enumerate(zip(d_cumsum[:-1], d_cumsum[1:])):
if len(x[start:end]) != 0:
xs_avg[b, n] = x[start:end].mean()
else:
xs_avg[b, n] = 0
return xs_avg
def average_by_duration(ds, xs, text_lengths, feats_lengths):
"""
Args:
ds (Tensor):
Batched token duration (B,T_text)
xs (Tensor):
Batched feature sequences to be averaged (B,T_feats)
text_lengths (Tensor):
Text length tensor (B,)
feats_lengths (Tensor):
Feature length tensor (B,)
Returns:
Tensor: Batched feature averaged according to the token duration (B, T_text)
"""
device = ds.place
args = [ds, xs, text_lengths, feats_lengths]
args = [arg.numpy() for arg in args]
xs_avg = _average_by_duration(*args)
xs_avg = paddle.to_tensor(xs_avg, place=device)
return xs_avg
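A minimal sketch of how the alignment utilities above fit together, assuming the module is importable as `paddlespeech.t2s.models.jets.alignments`; shapes and values are illustrative only:

```python
import paddle
import paddle.nn.functional as F

from paddlespeech.t2s.models.jets.alignments import average_by_duration
from paddlespeech.t2s.models.jets.alignments import viterbi_decode

B, T_feats, T_text = 2, 10, 4
# a toy log-probability attention matrix (B, T_feats, T_text)
log_p_attn = F.log_softmax(paddle.randn([B, T_feats, T_text]), axis=-1)
text_lengths = paddle.to_tensor([4, 3], dtype="int64")
feats_lengths = paddle.to_tensor([10, 8], dtype="int64")

# hard monotonic alignment -> per-token durations and binarization loss
ds, bin_loss = viterbi_decode(log_p_attn, text_lengths, feats_lengths)
# ds has shape (B, T_text) and ds[b] sums to feats_lengths[b]

# average a frame-level pitch track down to token level with those durations
pitch = paddle.randn([B, T_feats])
ps = average_by_duration(ds, pitch, text_lengths, feats_lengths)  # (B, T_text)
```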

@ -0,0 +1,897 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import logging
import math
from typing import Any
from typing import Dict
from typing import List
from typing import Optional
from typing import Sequence
from typing import Tuple
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle import nn
from typeguard import check_argument_types
from paddlespeech.t2s.models.hifigan import HiFiGANGenerator
from paddlespeech.t2s.models.jets.alignments import AlignmentModule
from paddlespeech.t2s.models.jets.alignments import average_by_duration
from paddlespeech.t2s.models.jets.alignments import viterbi_decode
from paddlespeech.t2s.models.jets.length_regulator import GaussianUpsampling
from paddlespeech.t2s.modules.nets_utils import get_random_segments
from paddlespeech.t2s.modules.nets_utils import initialize
from paddlespeech.t2s.modules.nets_utils import make_non_pad_mask
from paddlespeech.t2s.modules.nets_utils import make_pad_mask
from paddlespeech.t2s.modules.predictor.duration_predictor import DurationPredictor
from paddlespeech.t2s.modules.predictor.length_regulator import LengthRegulator
from paddlespeech.t2s.modules.predictor.variance_predictor import VariancePredictor
from paddlespeech.t2s.modules.style_encoder import StyleEncoder
from paddlespeech.t2s.modules.transformer.embedding import PositionalEncoding
from paddlespeech.t2s.modules.transformer.embedding import ScaledPositionalEncoding
from paddlespeech.t2s.modules.transformer.encoder import ConformerEncoder
from paddlespeech.t2s.modules.transformer.encoder import TransformerEncoder
class JETSGenerator(nn.Layer):
"""Generator module in JETS.
"""
def __init__(
self,
idim: int,
odim: int,
adim: int=256,
aheads: int=2,
elayers: int=4,
eunits: int=1024,
dlayers: int=4,
dunits: int=1024,
positionwise_layer_type: str="conv1d",
positionwise_conv_kernel_size: int=1,
use_scaled_pos_enc: bool=True,
use_batch_norm: bool=True,
encoder_normalize_before: bool=True,
decoder_normalize_before: bool=True,
encoder_concat_after: bool=False,
decoder_concat_after: bool=False,
reduction_factor: int=1,
encoder_type: str="transformer",
decoder_type: str="transformer",
transformer_enc_dropout_rate: float=0.1,
transformer_enc_positional_dropout_rate: float=0.1,
transformer_enc_attn_dropout_rate: float=0.1,
transformer_dec_dropout_rate: float=0.1,
transformer_dec_positional_dropout_rate: float=0.1,
transformer_dec_attn_dropout_rate: float=0.1,
transformer_activation_type: str="relu",
# only for conformer
conformer_rel_pos_type: str="legacy",
conformer_pos_enc_layer_type: str="rel_pos",
conformer_self_attn_layer_type: str="rel_selfattn",
conformer_activation_type: str="swish",
use_macaron_style_in_conformer: bool=True,
use_cnn_in_conformer: bool=True,
zero_triu: bool=False,
conformer_enc_kernel_size: int=7,
conformer_dec_kernel_size: int=31,
# duration predictor
duration_predictor_layers: int=2,
duration_predictor_chans: int=384,
duration_predictor_kernel_size: int=3,
duration_predictor_dropout_rate: float=0.1,
# energy predictor
energy_predictor_layers: int=2,
energy_predictor_chans: int=384,
energy_predictor_kernel_size: int=3,
energy_predictor_dropout: float=0.5,
energy_embed_kernel_size: int=9,
energy_embed_dropout: float=0.5,
stop_gradient_from_energy_predictor: bool=False,
# pitch predictor
pitch_predictor_layers: int=2,
pitch_predictor_chans: int=384,
pitch_predictor_kernel_size: int=3,
pitch_predictor_dropout: float=0.5,
pitch_embed_kernel_size: int=9,
pitch_embed_dropout: float=0.5,
stop_gradient_from_pitch_predictor: bool=False,
# extra embedding related
spks: Optional[int]=None,
langs: Optional[int]=None,
spk_embed_dim: Optional[int]=None,
spk_embed_integration_type: str="add",
use_gst: bool=False,
gst_tokens: int=10,
gst_heads: int=4,
gst_conv_layers: int=6,
gst_conv_chans_list: Sequence[int]=(32, 32, 64, 64, 128, 128),
gst_conv_kernel_size: int=3,
gst_conv_stride: int=2,
gst_gru_layers: int=1,
gst_gru_units: int=128,
# training related
init_type: str="xavier_uniform",
init_enc_alpha: float=1.0,
init_dec_alpha: float=1.0,
use_masking: bool=False,
use_weighted_masking: bool=False,
segment_size: int=64,
# hifigan generator
generator_out_channels: int=1,
generator_channels: int=512,
generator_global_channels: int=-1,
generator_kernel_size: int=7,
generator_upsample_scales: List[int]=[8, 8, 2, 2],
generator_upsample_kernel_sizes: List[int]=[16, 16, 4, 4],
generator_resblock_kernel_sizes: List[int]=[3, 7, 11],
generator_resblock_dilations: List[List[int]]=[[1, 3, 5], [1, 3, 5],
[1, 3, 5]],
generator_use_additional_convs: bool=True,
generator_bias: bool=True,
generator_nonlinear_activation: str="LeakyReLU",
generator_nonlinear_activation_params: Dict[
str, Any]={"negative_slope": 0.1},
generator_use_weight_norm: bool=True, ):
"""Initialize JETS generator module.
Args:
idim (int):
Dimension of the inputs.
odim (int):
Dimension of the outputs.
adim (int):
Attention dimension.
aheads (int):
Number of attention heads.
elayers (int):
Number of encoder layers.
eunits (int):
Number of encoder hidden units.
dlayers (int):
Number of decoder layers.
dunits (int):
Number of decoder hidden units.
use_scaled_pos_enc (bool):
Whether to use trainable scaled pos encoding.
use_batch_norm (bool):
Whether to use batch normalization in encoder prenet.
encoder_normalize_before (bool):
Whether to apply layernorm layer before encoder block.
decoder_normalize_before (bool):
Whether to apply layernorm layer before decoder block.
encoder_concat_after (bool):
Whether to concatenate attention layer's input and output in encoder.
decoder_concat_after (bool):
Whether to concatenate attention layer's input and output in decoder.
reduction_factor (int):
Reduction factor.
encoder_type (str):
Encoder type ("transformer" or "conformer").
decoder_type (str):
Decoder type ("transformer" or "conformer").
transformer_enc_dropout_rate (float):
Dropout rate in encoder except attention and positional encoding.
transformer_enc_positional_dropout_rate (float):
Dropout rate after encoder positional encoding.
transformer_enc_attn_dropout_rate (float):
Dropout rate in encoder self-attention module.
transformer_dec_dropout_rate (float):
Dropout rate in decoder except attention & positional encoding.
transformer_dec_positional_dropout_rate (float):
Dropout rate after decoder positional encoding.
transformer_dec_attn_dropout_rate (float):
Dropout rate in decoder self-attention module.
conformer_rel_pos_type (str):
Relative pos encoding type in conformer.
conformer_pos_enc_layer_type (str):
Pos encoding layer type in conformer.
conformer_self_attn_layer_type (str):
Self-attention layer type in conformer
conformer_activation_type (str):
Activation function type in conformer.
use_macaron_style_in_conformer:
Whether to use macaron style FFN.
use_cnn_in_conformer:
Whether to use CNN in conformer.
zero_triu:
Whether to use zero triu in relative self-attention module.
conformer_enc_kernel_size:
Kernel size of encoder conformer.
conformer_dec_kernel_size:
Kernel size of decoder conformer.
duration_predictor_layers (int):
Number of duration predictor layers.
duration_predictor_chans (int):
Number of duration predictor channels.
duration_predictor_kernel_size (int):
Kernel size of duration predictor.
duration_predictor_dropout_rate (float):
Dropout rate in duration predictor.
pitch_predictor_layers (int):
Number of pitch predictor layers.
pitch_predictor_chans (int):
Number of pitch predictor channels.
pitch_predictor_kernel_size (int):
Kernel size of pitch predictor.
pitch_predictor_dropout_rate (float):
Dropout rate in pitch predictor.
pitch_embed_kernel_size (float):
Kernel size of pitch embedding.
pitch_embed_dropout_rate (float):
Dropout rate for pitch embedding.
stop_gradient_from_pitch_predictor:
Whether to stop gradient from pitch predictor to encoder.
energy_predictor_layers (int):
Number of energy predictor layers.
energy_predictor_chans (int):
Number of energy predictor channels.
energy_predictor_kernel_size (int):
Kernel size of energy predictor.
energy_predictor_dropout_rate (float):
Dropout rate in energy predictor.
energy_embed_kernel_size (float):
Kernel size of energy embedding.
energy_embed_dropout_rate (float):
Dropout rate for energy embedding.
stop_gradient_from_energy_predictor:
Whether to stop gradient from energy predictor to encoder.
spks (Optional[int]):
Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.
langs (Optional[int]):
Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.
spk_embed_dim (Optional[int]):
Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.
spk_embed_integration_type:
How to integrate speaker embedding.
use_gst (str):
Whether to use global style token.
gst_tokens (int):
The number of GST embeddings.
gst_heads (int):
The number of heads in GST multihead attention.
gst_conv_layers (int):
The number of conv layers in GST.
gst_conv_chans_list: (Sequence[int]):
List of the number of channels of conv layers in GST.
gst_conv_kernel_size (int):
Kernel size of conv layers in GST.
gst_conv_stride (int):
Stride size of conv layers in GST.
gst_gru_layers (int):
The number of GRU layers in GST.
gst_gru_units (int):
The number of GRU units in GST.
init_type (str):
How to initialize transformer parameters.
init_enc_alpha (float):
Initial value of alpha in scaled pos encoding of the encoder.
init_dec_alpha (float):
Initial value of alpha in scaled pos encoding of the decoder.
use_masking (bool):
Whether to apply masking for padded part in loss calculation.
use_weighted_masking (bool):
Whether to apply weighted masking in loss calculation.
segment_size (int):
Segment size for random windowed discriminator
generator_out_channels (int):
Number of output channels.
generator_channels (int):
Number of hidden representation channels.
generator_global_channels (int):
Number of global conditioning channels.
generator_kernel_size (int):
Kernel size of initial and final conv layer.
generator_upsample_scales (List[int]):
List of upsampling scales.
generator_upsample_kernel_sizes (List[int]):
List of kernel sizes for upsample layers.
generator_resblock_kernel_sizes (List[int]):
List of kernel sizes for residual blocks.
generator_resblock_dilations (List[List[int]]):
List of list of dilations for residual blocks.
generator_use_additional_convs (bool):
Whether to use additional conv layers in residual blocks.
generator_bias (bool):
Whether to add bias parameter in convolution layers.
generator_nonlinear_activation (str):
Activation function module name.
generator_nonlinear_activation_params (Dict[str, Any]):
Hyperparameters for activation function.
generator_use_weight_norm (bool):
Whether to use weight norm. If set to true, it will be applied to all of the conv layers.
"""
super().__init__()
self.segment_size = segment_size
self.upsample_factor = int(np.prod(generator_upsample_scales))
self.idim = idim
self.odim = odim
self.reduction_factor = reduction_factor
self.encoder_type = encoder_type
self.decoder_type = decoder_type
self.stop_gradient_from_pitch_predictor = stop_gradient_from_pitch_predictor
self.stop_gradient_from_energy_predictor = stop_gradient_from_energy_predictor
self.use_scaled_pos_enc = use_scaled_pos_enc
self.use_gst = use_gst
# use idx 0 as padding idx
self.padding_idx = 0
# get positional encoding layer type
transformer_pos_enc_layer_type = "scaled_abs_pos" if self.use_scaled_pos_enc else "abs_pos"
# check relative positional encoding compatibility
if "conformer" in [encoder_type, decoder_type]:
if conformer_rel_pos_type == "legacy":
if conformer_pos_enc_layer_type == "rel_pos":
conformer_pos_enc_layer_type = "legacy_rel_pos"
logging.warning(
"Fallback to conformer_pos_enc_layer_type = 'legacy_rel_pos' "
"due to the compatibility. If you want to use the new one, "
"please use conformer_pos_enc_layer_type = 'latest'.")
if conformer_self_attn_layer_type == "rel_selfattn":
conformer_self_attn_layer_type = "legacy_rel_selfattn"
logging.warning(
"Fallback to "
"conformer_self_attn_layer_type = 'legacy_rel_selfattn' "
"due to the compatibility. If you want to use the new one, "
"please use conformer_pos_enc_layer_type = 'latest'.")
elif conformer_rel_pos_type == "latest":
assert conformer_pos_enc_layer_type != "legacy_rel_pos"
assert conformer_self_attn_layer_type != "legacy_rel_selfattn"
else:
raise ValueError(
f"Unknown rel_pos_type: {conformer_rel_pos_type}")
# define encoder
encoder_input_layer = nn.Embedding(
num_embeddings=idim,
embedding_dim=adim,
padding_idx=self.padding_idx)
if encoder_type == "transformer":
self.encoder = TransformerEncoder(
idim=idim,
attention_dim=adim,
attention_heads=aheads,
linear_units=eunits,
num_blocks=elayers,
input_layer=encoder_input_layer,
dropout_rate=transformer_enc_dropout_rate,
positional_dropout_rate=transformer_enc_positional_dropout_rate,
attention_dropout_rate=transformer_enc_attn_dropout_rate,
pos_enc_layer_type=transformer_pos_enc_layer_type,
normalize_before=encoder_normalize_before,
concat_after=encoder_concat_after,
positionwise_layer_type=positionwise_layer_type,
positionwise_conv_kernel_size=positionwise_conv_kernel_size,
activation_type=transformer_activation_type)
elif encoder_type == "conformer":
self.encoder = ConformerEncoder(
idim=idim,
attention_dim=adim,
attention_heads=aheads,
linear_units=eunits,
num_blocks=elayers,
input_layer=encoder_input_layer,
dropout_rate=transformer_enc_dropout_rate,
positional_dropout_rate=transformer_enc_positional_dropout_rate,
attention_dropout_rate=transformer_enc_attn_dropout_rate,
normalize_before=encoder_normalize_before,
concat_after=encoder_concat_after,
positionwise_layer_type=positionwise_layer_type,
positionwise_conv_kernel_size=positionwise_conv_kernel_size,
macaron_style=use_macaron_style_in_conformer,
pos_enc_layer_type=conformer_pos_enc_layer_type,
selfattention_layer_type=conformer_self_attn_layer_type,
activation_type=conformer_activation_type,
use_cnn_module=use_cnn_in_conformer,
cnn_module_kernel=conformer_enc_kernel_size,
zero_triu=zero_triu, )
else:
raise ValueError(f"{encoder_type} is not supported.")
# define GST
if self.use_gst:
self.gst = StyleEncoder(
idim=odim, # the input is mel-spectrogram
gst_tokens=gst_tokens,
gst_token_dim=adim,
gst_heads=gst_heads,
conv_layers=gst_conv_layers,
conv_chans_list=gst_conv_chans_list,
conv_kernel_size=gst_conv_kernel_size,
conv_stride=gst_conv_stride,
gru_layers=gst_gru_layers,
gru_units=gst_gru_units, )
# define spk and lang embedding
self.spks = None
if spks is not None and spks > 1:
self.spks = spks
self.sid_emb = nn.Embedding(spks, adim)
self.langs = None
if langs is not None and langs > 1:
self.langs = langs
self.lid_emb = nn.Embedding(langs, adim)
# define additional projection for speaker embedding
self.spk_embed_dim = None
if spk_embed_dim is not None and spk_embed_dim > 0:
self.spk_embed_dim = spk_embed_dim
self.spk_embed_integration_type = spk_embed_integration_type
if self.spk_embed_dim is not None:
if self.spk_embed_integration_type == "add":
self.projection = nn.Linear(self.spk_embed_dim, adim)
else:
self.projection = nn.Linear(adim + self.spk_embed_dim, adim)
# define duration predictor
self.duration_predictor = DurationPredictor(
idim=adim,
n_layers=duration_predictor_layers,
n_chans=duration_predictor_chans,
kernel_size=duration_predictor_kernel_size,
dropout_rate=duration_predictor_dropout_rate, )
# define pitch predictor
self.pitch_predictor = VariancePredictor(
idim=adim,
n_layers=pitch_predictor_layers,
n_chans=pitch_predictor_chans,
kernel_size=pitch_predictor_kernel_size,
dropout_rate=pitch_predictor_dropout, )
# NOTE(kan-bayashi): We use continuous pitch + FastPitch style avg
self.pitch_embed = nn.Sequential(
nn.Conv1D(
in_channels=1,
out_channels=adim,
kernel_size=pitch_embed_kernel_size,
padding=(pitch_embed_kernel_size - 1) // 2, ),
nn.Dropout(pitch_embed_dropout), )
# define energy predictor
self.energy_predictor = VariancePredictor(
idim=adim,
n_layers=energy_predictor_layers,
n_chans=energy_predictor_chans,
kernel_size=energy_predictor_kernel_size,
dropout_rate=energy_predictor_dropout, )
# NOTE(kan-bayashi): We use continuous energy + FastPitch style avg
self.energy_embed = nn.Sequential(
nn.Conv1D(
in_channels=1,
out_channels=adim,
kernel_size=energy_embed_kernel_size,
padding=(energy_embed_kernel_size - 1) // 2, ),
nn.Dropout(energy_embed_dropout), )
# define length regulator
self.length_regulator = GaussianUpsampling()
# define decoder
# NOTE: we use encoder as decoder
# because fastspeech's decoder is the same as encoder
if decoder_type == "transformer":
self.decoder = TransformerEncoder(
idim=0,
attention_dim=adim,
attention_heads=aheads,
linear_units=dunits,
num_blocks=dlayers,
# in decoder, don't need layer before pos_enc_class (we use embedding here in encoder)
input_layer=None,
dropout_rate=transformer_dec_dropout_rate,
positional_dropout_rate=transformer_dec_positional_dropout_rate,
attention_dropout_rate=transformer_dec_attn_dropout_rate,
pos_enc_layer_type=transformer_pos_enc_layer_type,
normalize_before=decoder_normalize_before,
concat_after=decoder_concat_after,
positionwise_layer_type=positionwise_layer_type,
positionwise_conv_kernel_size=positionwise_conv_kernel_size,
activation_type=conformer_activation_type, )
elif decoder_type == "conformer":
self.decoder = ConformerEncoder(
idim=0,
attention_dim=adim,
attention_heads=aheads,
linear_units=dunits,
num_blocks=dlayers,
input_layer=None,
dropout_rate=transformer_dec_dropout_rate,
positional_dropout_rate=transformer_dec_positional_dropout_rate,
attention_dropout_rate=transformer_dec_attn_dropout_rate,
normalize_before=decoder_normalize_before,
concat_after=decoder_concat_after,
positionwise_layer_type=positionwise_layer_type,
positionwise_conv_kernel_size=positionwise_conv_kernel_size,
macaron_style=use_macaron_style_in_conformer,
pos_enc_layer_type=conformer_pos_enc_layer_type,
selfattention_layer_type=conformer_self_attn_layer_type,
activation_type=conformer_activation_type,
use_cnn_module=use_cnn_in_conformer,
cnn_module_kernel=conformer_dec_kernel_size, )
else:
raise ValueError(f"{decoder_type} is not supported.")
self.generator = HiFiGANGenerator(
in_channels=adim,
out_channels=generator_out_channels,
channels=generator_channels,
global_channels=generator_global_channels,
kernel_size=generator_kernel_size,
upsample_scales=generator_upsample_scales,
upsample_kernel_sizes=generator_upsample_kernel_sizes,
resblock_kernel_sizes=generator_resblock_kernel_sizes,
resblock_dilations=generator_resblock_dilations,
use_additional_convs=generator_use_additional_convs,
bias=generator_bias,
nonlinear_activation=generator_nonlinear_activation,
nonlinear_activation_params=generator_nonlinear_activation_params,
use_weight_norm=generator_use_weight_norm, )
self.alignment_module = AlignmentModule(adim, odim)
# initialize parameters
self._reset_parameters(
init_type=init_type,
init_enc_alpha=init_enc_alpha,
init_dec_alpha=init_dec_alpha, )
def forward(
self,
text: paddle.Tensor,
text_lengths: paddle.Tensor,
feats: paddle.Tensor,
feats_lengths: paddle.Tensor,
durations: paddle.Tensor,
durations_lengths: paddle.Tensor,
pitch: paddle.Tensor,
energy: paddle.Tensor,
sids: Optional[paddle.Tensor]=None,
spembs: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None,
use_alignment_module: bool=False,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor,
paddle.Tensor, paddle.Tensor,
Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor, paddle.Tensor,
paddle.Tensor, paddle.Tensor, ], ]:
"""Calculate forward propagation.
Args:
text (Tensor):
Text index tensor (B, T_text).
text_lengths (Tensor):
Text length tensor (B,).
feats (Tensor):
Feature tensor (B, aux_channels, T_feats).
feats_lengths (Tensor):
Feature length tensor (B,).
pitch (Tensor):
Batch of padded token-averaged pitch (B, T_text, 1).
energy (Tensor):
Batch of padded token-averaged energy (B, T_text, 1).
sids (Optional[Tensor]):
Speaker index tensor (B,) or (B, 1).
spembs (Optional[Tensor]):
Speaker embedding tensor (B, spk_embed_dim).
lids (Optional[Tensor]):
Language index tensor (B,) or (B, 1).
use_alignment_module (bool):
Whether to use alignment module.
Returns:
Tensor:
Waveform tensor (B, 1, segment_size * upsample_factor).
Tensor:
binarization loss ()
Tensor:
log probability attention matrix (B,T_feats,T_text)
Tensor:
Segments start index tensor (B,).
Tensor:
predicted duration (B,T_text)
Tensor:
ground-truth duration obtained from an alignment module (B,T_text)
Tensor:
predicted pitch (B,T_text,1)
Tensor:
ground-truth averaged pitch (B,T_text,1)
Tensor:
predicted energy (B,T_text,1)
Tensor:
ground-truth averaged energy (B,T_text,1)
"""
if use_alignment_module:
text = text[:, :text_lengths.max()] # for data-parallel
feats = feats[:, :feats_lengths.max()] # for data-parallel
pitch = pitch[:, :durations_lengths.max()] # for data-parallel
energy = energy[:, :durations_lengths.max()] # for data-parallel
else:
text = text[:, :text_lengths.max()] # for data-parallel
feats = feats[:, :feats_lengths.max()] # for data-parallel
pitch = pitch[:, :feats_lengths.max()] # for data-parallel
energy = energy[:, :feats_lengths.max()] # for data-parallel
# forward encoder
x_masks = self._source_mask(text_lengths)
hs, _ = self.encoder(text, x_masks) # (B, T_text, adim)
# integrate with GST
if self.use_gst:
style_embs = self.gst(feats)
hs = hs + style_embs.unsqueeze(1)
# integrate with SID and LID embeddings
if self.spks is not None:
sid_embs = self.sid_emb(sids.reshape([-1]))
hs = hs + sid_embs.unsqueeze(1)
if self.langs is not None:
lid_embs = self.lid_emb(lids.reshape([-1]))
hs = hs + lid_embs.unsqueeze(1)
# integrate speaker embedding
if self.spk_embed_dim is not None:
hs = self._integrate_with_spk_embed(hs, spembs)
# forward alignment module and obtain duration, averaged pitch, energy
h_masks = make_pad_mask(text_lengths)
if use_alignment_module:
log_p_attn, attn = self.alignment_module(hs, feats, h_masks)
ds, bin_loss = viterbi_decode(log_p_attn, text_lengths,
feats_lengths)
ps = average_by_duration(ds,
pitch.squeeze(-1), text_lengths,
feats_lengths).unsqueeze(-1)
es = average_by_duration(ds,
energy.squeeze(-1), text_lengths,
feats_lengths).unsqueeze(-1)
else:
ds = durations
ps = pitch
es = energy
log_p_attn = attn = bin_loss = None
# forward duration predictor and variance predictors
if self.stop_gradient_from_pitch_predictor:
p_outs = self.pitch_predictor(hs.detach(), h_masks.unsqueeze(-1))
else:
p_outs = self.pitch_predictor(hs, h_masks.unsqueeze(-1))
if self.stop_gradient_from_energy_predictor:
e_outs = self.energy_predictor(hs.detach(), h_masks.unsqueeze(-1))
else:
e_outs = self.energy_predictor(hs, h_masks.unsqueeze(-1))
d_outs = self.duration_predictor(hs, h_masks)
# use groundtruth in training
p_embs = self.pitch_embed(ps.transpose([0, 2, 1])).transpose([0, 2, 1])
e_embs = self.energy_embed(es.transpose([0, 2, 1])).transpose([0, 2, 1])
hs = hs + e_embs + p_embs
# upsampling
h_masks = make_non_pad_mask(feats_lengths)
# d_masks = make_non_pad_mask(text_lengths).to(ds.device)
d_masks = make_non_pad_mask(text_lengths)
hs = self.length_regulator(hs, ds, h_masks,
d_masks) # (B, T_feats, adim)
# forward decoder
h_masks = self._source_mask(feats_lengths)
zs, _ = self.decoder(hs, h_masks) # (B, T_feats, adim)
# get random segments
z_segments, z_start_idxs = get_random_segments(
zs.transpose([0, 2, 1]),
feats_lengths,
self.segment_size, )
# forward generator
wav = self.generator(z_segments)
if use_alignment_module:
return wav, bin_loss, log_p_attn, z_start_idxs, d_outs, ds, p_outs, ps, e_outs, es
else:
return wav, None, None, z_start_idxs, d_outs, ds, p_outs, ps, e_outs, es
def inference(
self,
text: paddle.Tensor,
text_lengths: paddle.Tensor,
feats: Optional[paddle.Tensor]=None,
feats_lengths: Optional[paddle.Tensor]=None,
pitch: Optional[paddle.Tensor]=None,
energy: Optional[paddle.Tensor]=None,
sids: Optional[paddle.Tensor]=None,
spembs: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None,
use_alignment_module: bool=False,
) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
"""Run inference.
Args:
text (Tensor): Input text index tensor (B, T_text,).
text_lengths (Tensor): Text length tensor (B,).
feats (Tensor): Feature tensor (B, T_feats, aux_channels).
feats_lengths (Tensor): Feature length tensor (B,).
pitch (Tensor): Pitch tensor (B, T_feats, 1)
energy (Tensor): Energy tensor (B, T_feats, 1)
sids (Optional[Tensor]): Speaker index tensor (B,) or (B, 1).
spembs (Optional[Tensor]): Speaker embedding tensor (B, spk_embed_dim).
lids (Optional[Tensor]): Language index tensor (B,) or (B, 1).
use_alignment_module (bool): Whether to use alignment module.
Returns:
Tensor: Generated waveform tensor (B, T_wav).
Tensor: Duration tensor (B, T_text).
"""
# forward encoder
x_masks = self._source_mask(text_lengths)
hs, _ = self.encoder(text, x_masks) # (B, T_text, adim)
# integrate with GST
if self.use_gst:
style_embs = self.gst(feats)
hs = hs + style_embs.unsqueeze(1)
# integrate with SID and LID embeddings
if self.spks is not None:
sid_embs = self.sid_emb(sids.view(-1))
hs = hs + sid_embs.unsqueeze(1)
if self.langs is not None:
lid_embs = self.lid_emb(lids.view(-1))
hs = hs + lid_embs.unsqueeze(1)
# integrate speaker embedding
if self.spk_embed_dim is not None:
hs = self._integrate_with_spk_embed(hs, spembs)
h_masks = make_pad_mask(text_lengths)
if use_alignment_module:
# forward alignment module and obtain duration, averaged pitch, energy
log_p_attn, attn = self.alignment_module(hs, feats, h_masks)
d_outs, _ = viterbi_decode(log_p_attn, text_lengths, feats_lengths)
p_outs = average_by_duration(d_outs,
pitch.squeeze(-1), text_lengths,
feats_lengths).unsqueeze(-1)
e_outs = average_by_duration(d_outs,
energy.squeeze(-1), text_lengths,
feats_lengths).unsqueeze(-1)
else:
# forward duration predictor and variance predictors
p_outs = self.pitch_predictor(hs, h_masks.unsqueeze(-1))
e_outs = self.energy_predictor(hs, h_masks.unsqueeze(-1))
d_outs = self.duration_predictor.inference(hs, h_masks)
p_embs = self.pitch_embed(p_outs.transpose([0, 2, 1])).transpose(
[0, 2, 1])
e_embs = self.energy_embed(e_outs.transpose([0, 2, 1])).transpose(
[0, 2, 1])
hs = hs + e_embs + p_embs
# upsampling
if feats_lengths is not None:
h_masks = make_non_pad_mask(feats_lengths)
else:
h_masks = None
d_masks = make_non_pad_mask(text_lengths)
hs = self.length_regulator(hs, d_outs, h_masks,
d_masks) # (B, T_feats, adim)
# forward decoder
if feats_lengths is not None:
h_masks = self._source_mask(feats_lengths)
else:
h_masks = None
zs, _ = self.decoder(hs, h_masks) # (B, T_feats, adim)
# forward generator
wav = self.generator(zs.transpose([0, 2, 1]))
return wav.squeeze(1), d_outs
def _integrate_with_spk_embed(self,
hs: paddle.Tensor,
spembs: paddle.Tensor) -> paddle.Tensor:
"""Integrate speaker embedding with hidden states.
Args:
hs (Tensor): Batch of hidden state sequences (B, T_text, adim).
spembs (Tensor): Batch of speaker embeddings (B, spk_embed_dim).
Returns:
Tensor: Batch of integrated hidden state sequences (B, T_text, adim).
"""
if self.spk_embed_integration_type == "add":
# apply projection and then add to hidden states
spembs = self.projection(F.normalize(spembs))
hs = hs + spembs.unsqueeze(1)
elif self.spk_embed_integration_type == "concat":
# concat hidden states with spk embeds and then apply projection
spembs = F.normalize(spembs).unsqueeze(1).expand(
[-1, hs.shape[1], -1])
hs = self.projection(paddle.concat([hs, spembs], axis=-1))
else:
raise NotImplementedError("support only add or concat.")
return hs
def _generate_path(self, dur: paddle.Tensor,
mask: paddle.Tensor) -> paddle.Tensor:
"""Generate path a.k.a. monotonic attention.
Args:
dur (Tensor):
Duration tensor (B, 1, T_text).
mask (Tensor):
Attention mask tensor (B, 1, T_feats, T_text).
Returns:
Tensor:
Path tensor (B, 1, T_feats, T_text).
"""
b, _, t_y, t_x = paddle.shape(mask)
cum_dur = paddle.cumsum(dur, -1)
cum_dur_flat = paddle.reshape(cum_dur, [b * t_x])
path = paddle.arange(t_y, dtype=dur.dtype)
path = path.unsqueeze(0) < cum_dur_flat.unsqueeze(1)
path = paddle.reshape(path, [b, t_x, t_y])
'''
path will be like (t_x = 3, t_y = 5):
[[[1., 1., 0., 0., 0.], [[[1., 1., 0., 0., 0.],
[1., 1., 1., 1., 0.], --> [0., 0., 1., 1., 0.],
[1., 1., 1., 1., 1.]]] [0., 0., 0., 0., 1.]]]
'''
path = paddle.cast(path, dtype='float32')
pad_tmp = self.pad1d(path)[:, :-1]
path = path - pad_tmp
return path.unsqueeze(1).transpose([0, 1, 3, 2]) * mask
def _source_mask(self, ilens: paddle.Tensor) -> paddle.Tensor:
"""Make masks for self-attention.
Args:
ilens (Tensor): Batch of lengths (B,).
Returns:
Tensor: Mask tensor for self-attention.
dtype=paddle.uint8
Examples:
>>> ilens = [5, 3]
>>> self._source_mask(ilens)
tensor([[[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0]]], dtype=paddle.uint8)
"""
x_masks = paddle.to_tensor(make_non_pad_mask(ilens))
return x_masks.unsqueeze(-2)
def _reset_parameters(self,
init_type: str,
init_enc_alpha: float,
init_dec_alpha: float):
# initialize parameters
initialize(self, init_type)
# initialize alpha in scaled positional encoding
if self.encoder_type == "transformer" and self.use_scaled_pos_enc:
self.encoder.embed[-1].alpha.data = paddle.to_tensor(init_enc_alpha)
if self.decoder_type == "transformer" and self.use_scaled_pos_enc:
self.decoder.embed[-1].alpha.data = paddle.to_tensor(init_dec_alpha)

@ -0,0 +1,582 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
"""JETS module"""
import math
from typing import Any
from typing import Dict
from typing import Optional
import paddle
from paddle import nn
from typeguard import check_argument_types
from paddlespeech.t2s.models.hifigan import HiFiGANMultiPeriodDiscriminator
from paddlespeech.t2s.models.hifigan import HiFiGANMultiScaleDiscriminator
from paddlespeech.t2s.models.hifigan import HiFiGANMultiScaleMultiPeriodDiscriminator
from paddlespeech.t2s.models.hifigan import HiFiGANPeriodDiscriminator
from paddlespeech.t2s.models.hifigan import HiFiGANScaleDiscriminator
from paddlespeech.t2s.models.jets.generator import JETSGenerator
from paddlespeech.utils.initialize import _calculate_fan_in_and_fan_out
from paddlespeech.utils.initialize import kaiming_uniform_
from paddlespeech.utils.initialize import normal_
from paddlespeech.utils.initialize import ones_
from paddlespeech.utils.initialize import uniform_
from paddlespeech.utils.initialize import zeros_
AVAILABLE_GENERATERS = {
"jets_generator": JETSGenerator,
}
AVAILABLE_DISCRIMINATORS = {
"hifigan_period_discriminator":
HiFiGANPeriodDiscriminator,
"hifigan_scale_discriminator":
HiFiGANScaleDiscriminator,
"hifigan_multi_period_discriminator":
HiFiGANMultiPeriodDiscriminator,
"hifigan_multi_scale_discriminator":
HiFiGANMultiScaleDiscriminator,
"hifigan_multi_scale_multi_period_discriminator":
HiFiGANMultiScaleMultiPeriodDiscriminator,
}
class JETS(nn.Layer):
"""JETS module (generator + discriminator).
This is a module of JETS described in `JETS: Jointly Training FastSpeech2
and HiFi-GAN for End to End Text to Speech`_.
.. _`JETS: Jointly Training FastSpeech2 and HiFi-GAN for End to End Text to Speech`:
https://arxiv.org/abs/2203.16852v1
"""
def __init__(
self,
# generator related
idim: int,
odim: int,
sampling_rate: int=22050,
generator_type: str="jets_generator",
generator_params: Dict[str, Any]={
"adim": 256,
"aheads": 2,
"elayers": 4,
"eunits": 1024,
"dlayers": 4,
"dunits": 1024,
"positionwise_layer_type": "conv1d",
"positionwise_conv_kernel_size": 1,
"use_scaled_pos_enc": True,
"use_batch_norm": True,
"encoder_normalize_before": True,
"decoder_normalize_before": True,
"encoder_concat_after": False,
"decoder_concat_after": False,
"reduction_factor": 1,
"encoder_type": "transformer",
"decoder_type": "transformer",
"transformer_enc_dropout_rate": 0.1,
"transformer_enc_positional_dropout_rate": 0.1,
"transformer_enc_attn_dropout_rate": 0.1,
"transformer_dec_dropout_rate": 0.1,
"transformer_dec_positional_dropout_rate": 0.1,
"transformer_dec_attn_dropout_rate": 0.1,
"conformer_rel_pos_type": "latest",
"conformer_pos_enc_layer_type": "rel_pos",
"conformer_self_attn_layer_type": "rel_selfattn",
"conformer_activation_type": "swish",
"use_macaron_style_in_conformer": True,
"use_cnn_in_conformer": True,
"zero_triu": False,
"conformer_enc_kernel_size": 7,
"conformer_dec_kernel_size": 31,
"duration_predictor_layers": 2,
"duration_predictor_chans": 384,
"duration_predictor_kernel_size": 3,
"duration_predictor_dropout_rate": 0.1,
"energy_predictor_layers": 2,
"energy_predictor_chans": 384,
"energy_predictor_kernel_size": 3,
"energy_predictor_dropout": 0.5,
"energy_embed_kernel_size": 1,
"energy_embed_dropout": 0.5,
"stop_gradient_from_energy_predictor": False,
"pitch_predictor_layers": 5,
"pitch_predictor_chans": 384,
"pitch_predictor_kernel_size": 5,
"pitch_predictor_dropout": 0.5,
"pitch_embed_kernel_size": 1,
"pitch_embed_dropout": 0.5,
"stop_gradient_from_pitch_predictor": True,
"generator_out_channels": 1,
"generator_channels": 512,
"generator_global_channels": -1,
"generator_kernel_size": 7,
"generator_upsample_scales": [8, 8, 2, 2],
"generator_upsample_kernel_sizes": [16, 16, 4, 4],
"generator_resblock_kernel_sizes": [3, 7, 11],
"generator_resblock_dilations":
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
"generator_use_additional_convs": True,
"generator_bias": True,
"generator_nonlinear_activation": "LeakyReLU",
"generator_nonlinear_activation_params": {
"negative_slope": 0.1
},
"generator_use_weight_norm": True,
"segment_size": 64,
"spks": -1,
"langs": -1,
"spk_embed_dim": None,
"spk_embed_integration_type": "add",
"use_gst": False,
"gst_tokens": 10,
"gst_heads": 4,
"gst_conv_layers": 6,
"gst_conv_chans_list": [32, 32, 64, 64, 128, 128],
"gst_conv_kernel_size": 3,
"gst_conv_stride": 2,
"gst_gru_layers": 1,
"gst_gru_units": 128,
"init_type": "xavier_uniform",
"init_enc_alpha": 1.0,
"init_dec_alpha": 1.0,
"use_masking": False,
"use_weighted_masking": False,
},
# discriminator related
discriminator_type: str="hifigan_multi_scale_multi_period_discriminator",
discriminator_params: Dict[str, Any]={
"scales": 1,
"scale_downsample_pooling": "AvgPool1D",
"scale_downsample_pooling_params": {
"kernel_size": 4,
"stride": 2,
"padding": 2,
},
"scale_discriminator_params": {
"in_channels": 1,
"out_channels": 1,
"kernel_sizes": [15, 41, 5, 3],
"channels": 128,
"max_downsample_channels": 1024,
"max_groups": 16,
"bias": True,
"downsample_scales": [2, 2, 4, 4, 1],
"nonlinear_activation": "leakyrelu",
"nonlinear_activation_params": {
"negative_slope": 0.1
},
"use_weight_norm": True,
"use_spectral_norm": False,
},
"follow_official_norm": False,
"periods": [2, 3, 5, 7, 11],
"period_discriminator_params": {
"in_channels": 1,
"out_channels": 1,
"kernel_sizes": [5, 3],
"channels": 32,
"downsample_scales": [3, 3, 3, 3, 1],
"max_downsample_channels": 1024,
"bias": True,
"nonlinear_activation": "leakyrelu",
"nonlinear_activation_params": {
"negative_slope": 0.1
},
"use_weight_norm": True,
"use_spectral_norm": False,
},
},
cache_generator_outputs: bool=True, ):
"""Initialize JETS module.
Args:
idim (int):
Input vocabulary size.
odim (int):
Acoustic feature dimension. The actual output channels will
be 1, since JETS is an end-to-end text-to-wave model, but odim is
kept to indicate the acoustic feature dimension for compatibility.
sampling_rate (int):
Sampling rate. Not used for training, but referred to
when saving waveforms during inference.
generator_type (str):
Generator type.
generator_params (Dict[str, Any]):
Parameter dict for generator.
discriminator_type (str):
Discriminator type.
discriminator_params (Dict[str, Any]):
Parameter dict for discriminator.
cache_generator_outputs (bool):
Whether to cache generator outputs.
"""
assert check_argument_types()
super().__init__()
# define modules
generator_class = AVAILABLE_GENERATERS[generator_type]
if generator_type == "jets_generator":
# NOTE: Update parameters for compatibility.
# The idim and odim are automatically decided from input data,
# where idim represents the vocabulary size and odim represents
# the input acoustic feature dimension.
generator_params.update(idim=idim, odim=odim)
self.generator = generator_class(
**generator_params, )
discriminator_class = AVAILABLE_DISCRIMINATORS[discriminator_type]
self.discriminator = discriminator_class(
**discriminator_params, )
# cache
self.cache_generator_outputs = cache_generator_outputs
self._cache = None
# store sampling rate for saving wav file
# (not used for the training)
self.fs = sampling_rate
# store parameters for test compatibility
self.spks = self.generator.spks
self.langs = self.generator.langs
self.spk_embed_dim = self.generator.spk_embed_dim
self.reuse_cache_gen = True
self.reuse_cache_dis = True
self.reset_parameters()
self.generator._reset_parameters(
init_type=generator_params["init_type"],
init_enc_alpha=generator_params["init_enc_alpha"],
init_dec_alpha=generator_params["init_dec_alpha"], )
def forward(
self,
text: paddle.Tensor,
text_lengths: paddle.Tensor,
feats: paddle.Tensor,
feats_lengths: paddle.Tensor,
durations: paddle.Tensor,
durations_lengths: paddle.Tensor,
pitch: paddle.Tensor,
energy: paddle.Tensor,
sids: Optional[paddle.Tensor]=None,
spembs: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None,
forward_generator: bool=True,
use_alignment_module: bool=False,
**kwargs,
) -> Dict[str, Any]:
"""Perform generator forward.
Args:
text (Tensor):
Text index tensor (B, T_text).
text_lengths (Tensor):
Text length tensor (B,).
feats (Tensor):
Feature tensor (B, T_feats, aux_channels).
feats_lengths (Tensor):
Feature length tensor (B,).
durations(Tensor(int64)):
Batch of padded durations (B, Tmax).
durations_lengths (Tensor):
durations length tensor (B,).
pitch(Tensor):
Batch of padded token-averaged pitch (B, Tmax, 1).
energy(Tensor):
Batch of padded token-averaged energy (B, Tmax, 1).
sids (Optional[Tensor]):
Speaker index tensor (B,) or (B, 1).
spembs (Optional[Tensor]):
Speaker embedding tensor (B, spk_embed_dim).
lids (Optional[Tensor]):
Language index tensor (B,) or (B, 1).
forward_generator (bool):
Whether to forward generator.
use_alignment_module (bool):
Whether to use alignment module.
Returns:
"""
if forward_generator:
return self._forward_generator(
text=text,
text_lengths=text_lengths,
feats=feats,
feats_lengths=feats_lengths,
durations=durations,
durations_lengths=durations_lengths,
pitch=pitch,
energy=energy,
sids=sids,
spembs=spembs,
lids=lids,
use_alignment_module=use_alignment_module, )
else:
return self._forward_discrminator(
text=text,
text_lengths=text_lengths,
feats=feats,
feats_lengths=feats_lengths,
durations=durations,
durations_lengths=durations_lengths,
pitch=pitch,
energy=energy,
sids=sids,
spembs=spembs,
lids=lids,
use_alignment_module=use_alignment_module, )
def _forward_generator(
self,
text: paddle.Tensor,
text_lengths: paddle.Tensor,
feats: paddle.Tensor,
feats_lengths: paddle.Tensor,
durations: paddle.Tensor,
durations_lengths: paddle.Tensor,
pitch: paddle.Tensor,
energy: paddle.Tensor,
sids: Optional[paddle.Tensor]=None,
spembs: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None,
use_alignment_module: bool=False,
**kwargs, ) -> Dict[str, Any]:
"""Perform generator forward.
Args:
text (Tensor):
Text index tensor (B, T_text).
text_lengths (Tensor):
Text length tensor (B,).
feats (Tensor):
Feature tensor (B, T_feats, aux_channels).
feats_lengths (Tensor):
Feature length tensor (B,).
durations(Tensor(int64)):
Batch of padded durations (B, Tmax).
durations_lengths (Tensor):
durations length tensor (B,).
pitch(Tensor):
Batch of padded token-averaged pitch (B, Tmax, 1).
energy(Tensor):
Batch of padded token-averaged energy (B, Tmax, 1).
sids (Optional[Tensor]):
Speaker index tensor (B,) or (B, 1).
spembs (Optional[Tensor]):
Speaker embedding tensor (B, spk_embed_dim).
lids (Optional[Tensor]):
Language index tensor (B,) or (B, 1).
use_alignment_module (bool):
Whether to use alignment module.
Returns:
"""
# setup
# calculate generator outputs
self.reuse_cache_gen = True
if not self.cache_generator_outputs or self._cache is None:
self.reuse_cache_gen = False
outs = self.generator(
text=text,
text_lengths=text_lengths,
feats=feats,
feats_lengths=feats_lengths,
durations=durations,
durations_lengths=durations_lengths,
pitch=pitch,
energy=energy,
sids=sids,
spembs=spembs,
lids=lids,
use_alignment_module=use_alignment_module, )
else:
outs = self._cache
# store cache
if self.training and self.cache_generator_outputs and not self.reuse_cache_gen:
self._cache = outs
return outs
def _forward_discrminator(
self,
text: paddle.Tensor,
text_lengths: paddle.Tensor,
feats: paddle.Tensor,
feats_lengths: paddle.Tensor,
durations: paddle.Tensor,
durations_lengths: paddle.Tensor,
pitch: paddle.Tensor,
energy: paddle.Tensor,
sids: Optional[paddle.Tensor]=None,
spembs: Optional[paddle.Tensor]=None,
lids: Optional[paddle.Tensor]=None,
use_alignment_module: bool=False,
**kwargs, ) -> Dict[str, Any]:
"""Perform discriminator forward.
Args:
text (Tensor):
Text index tensor (B, T_text).
text_lengths (Tensor):
Text length tensor (B,).
feats (Tensor):
Feature tensor (B, T_feats, aux_channels).
feats_lengths (Tensor):
Feature length tensor (B,).
durations(Tensor(int64)):
Batch of padded durations (B, Tmax).
durations_lengths (Tensor):
durations length tensor (B,).
pitch(Tensor):
Batch of padded token-averaged pitch (B, Tmax, 1).
energy(Tensor):
Batch of padded token-averaged energy (B, Tmax, 1).
sids (Optional[Tensor]):
Speaker index tensor (B,) or (B, 1).
spembs (Optional[Tensor]):
Speaker embedding tensor (B, spk_embed_dim).
lids (Optional[Tensor]):
Language index tensor (B,) or (B, 1).
use_alignment_module (bool):
Whether to use alignment module.
        Returns:
            Dict[str, Any]: Generator outputs used to compute the discriminator losses.
        """
# setup
# calculate generator outputs
self.reuse_cache_dis = True
if not self.cache_generator_outputs or self._cache is None:
self.reuse_cache_dis = False
outs = self.generator(
text=text,
text_lengths=text_lengths,
feats=feats,
feats_lengths=feats_lengths,
durations=durations,
durations_lengths=durations_lengths,
pitch=pitch,
energy=energy,
sids=sids,
spembs=spembs,
lids=lids,
use_alignment_module=use_alignment_module,
**kwargs, )
else:
outs = self._cache
# store cache
if self.cache_generator_outputs and not self.reuse_cache_dis:
self._cache = outs
return outs
def inference(self,
text: paddle.Tensor,
feats: Optional[paddle.Tensor]=None,
pitch: Optional[paddle.Tensor]=None,
energy: Optional[paddle.Tensor]=None,
use_alignment_module: bool=False,
**kwargs) -> Dict[str, paddle.Tensor]:
"""Run inference.
Args:
text (Tensor):
Input text index tensor (T_text,).
feats (Tensor):
Feature tensor (T_feats, aux_channels).
pitch (Tensor):
Pitch tensor (T_feats, 1).
energy (Tensor):
Energy tensor (T_feats, 1).
use_alignment_module (bool):
Whether to use alignment module.
Returns:
Dict[str, Tensor]:
* wav (Tensor):
Generated waveform tensor (T_wav,).
* duration (Tensor):
Predicted duration tensor (T_text,).
"""
# setup
text = text[None]
text_lengths = paddle.to_tensor(paddle.shape(text)[1])
# inference
if use_alignment_module:
assert feats is not None
feats = feats[None]
feats_lengths = paddle.to_tensor(paddle.shape(feats)[1])
pitch = pitch[None]
energy = energy[None]
wav, dur = self.generator.inference(
text=text,
text_lengths=text_lengths,
feats=feats,
feats_lengths=feats_lengths,
pitch=pitch,
energy=energy,
use_alignment_module=use_alignment_module,
**kwargs)
else:
wav, dur = self.generator.inference(
text=text,
text_lengths=text_lengths,
**kwargs, )
return dict(wav=paddle.reshape(wav, [-1]), duration=dur[0])
def reset_parameters(self):
def _reset_parameters(module):
if isinstance(
module,
(nn.Conv1D, nn.Conv1DTranspose, nn.Conv2D, nn.Conv2DTranspose)):
kaiming_uniform_(module.weight, a=math.sqrt(5))
if module.bias is not None:
fan_in, _ = _calculate_fan_in_and_fan_out(module.weight)
if fan_in != 0:
bound = 1 / math.sqrt(fan_in)
uniform_(module.bias, -bound, bound)
if isinstance(
module,
(nn.BatchNorm1D, nn.BatchNorm2D, nn.GroupNorm, nn.LayerNorm)):
ones_(module.weight)
zeros_(module.bias)
if isinstance(module, nn.Linear):
kaiming_uniform_(module.weight, a=math.sqrt(5))
if module.bias is not None:
fan_in, _ = _calculate_fan_in_and_fan_out(module.weight)
bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
uniform_(module.bias, -bound, bound)
if isinstance(module, nn.Embedding):
normal_(module.weight)
if module._padding_idx is not None:
with paddle.no_grad():
module.weight[module._padding_idx] = 0
self.apply(_reset_parameters)
class JETSInference(nn.Layer):
def __init__(self, model):
super().__init__()
self.acoustic_model = model
def forward(self, text, sids=None):
out = self.acoustic_model.inference(text)
wav = out['wav']
return wav
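For orientation, a minimal inference sketch (hypothetical variable names: it assumes a trained `JETS` instance and a phone-id tensor built with the example's `phone_id_map.txt`):

```python
import paddle

# `model` is a trained JETS instance and `phone_ids` a 1-D int64 tensor of phone
# ids for one utterance; both names are placeholders for this sketch.
model.eval()
phone_ids = paddle.to_tensor([2, 15, 7, 33], dtype="int64")  # toy ids, not a real sentence
with paddle.no_grad():
    out = model.inference(text=phone_ids)  # text-only path, alignment module disabled
wav = out["wav"]        # (T_wav,) generated waveform
dur = out["duration"]   # (T_text,) predicted per-phone durations
# equivalently, through the export-friendly wrapper defined above:
wav = JETSInference(model)(phone_ids)
```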
@ -0,0 +1,437 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import logging
from typing import Dict
import paddle
from paddle import distributed as dist
from paddle.io import DataLoader
from paddle.nn import Layer
from paddle.optimizer import Optimizer
from paddle.optimizer.lr import LRScheduler
from paddlespeech.t2s.modules.nets_utils import get_segments
from paddlespeech.t2s.training.extensions.evaluator import StandardEvaluator
from paddlespeech.t2s.training.reporter import report
from paddlespeech.t2s.training.updaters.standard_updater import StandardUpdater
from paddlespeech.t2s.training.updaters.standard_updater import UpdaterState
logging.basicConfig(
format='%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s',
datefmt='[%Y-%m-%d %H:%M:%S]')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
class JETSUpdater(StandardUpdater):
def __init__(self,
model: Layer,
optimizers: Dict[str, Optimizer],
criterions: Dict[str, Layer],
schedulers: Dict[str, LRScheduler],
dataloader: DataLoader,
generator_train_start_steps: int=0,
discriminator_train_start_steps: int=100000,
lambda_adv: float=1.0,
lambda_mel: float=45.0,
lambda_feat_match: float=2.0,
lambda_var: float=1.0,
lambda_align: float=2.0,
generator_first: bool=False,
use_alignment_module: bool=False,
output_dir=None):
# it is designed to hold multiple models
        # a single model is passed in but the parent class's init() is not used, so this setup is re-implemented here
models = {"main": model}
self.models: Dict[str, Layer] = models
# self.model = model
self.model = model._layers if isinstance(model,
paddle.DataParallel) else model
self.optimizers = optimizers
self.optimizer_g: Optimizer = optimizers['generator']
self.optimizer_d: Optimizer = optimizers['discriminator']
self.criterions = criterions
self.criterion_mel = criterions['mel']
self.criterion_feat_match = criterions['feat_match']
self.criterion_gen_adv = criterions["gen_adv"]
self.criterion_dis_adv = criterions["dis_adv"]
self.criterion_var = criterions["var"]
self.criterion_forwardsum = criterions["forwardsum"]
self.schedulers = schedulers
self.scheduler_g = schedulers['generator']
self.scheduler_d = schedulers['discriminator']
self.dataloader = dataloader
self.generator_train_start_steps = generator_train_start_steps
self.discriminator_train_start_steps = discriminator_train_start_steps
self.lambda_adv = lambda_adv
self.lambda_mel = lambda_mel
self.lambda_feat_match = lambda_feat_match
self.lambda_var = lambda_var
self.lambda_align = lambda_align
self.use_alignment_module = use_alignment_module
if generator_first:
self.turns = ["generator", "discriminator"]
else:
self.turns = ["discriminator", "generator"]
self.state = UpdaterState(iteration=0, epoch=0)
self.train_iterator = iter(self.dataloader)
log_file = output_dir / 'worker_{}.log'.format(dist.get_rank())
self.filehandler = logging.FileHandler(str(log_file))
logger.addHandler(self.filehandler)
self.logger = logger
self.msg = ""
def update_core(self, batch):
self.msg = "Rank: {}, ".format(dist.get_rank())
losses_dict = {}
for turn in self.turns:
speech = batch["speech"]
speech = speech.unsqueeze(1)
text_lengths = batch["text_lengths"]
feats_lengths = batch["feats_lengths"]
outs = self.model(
text=batch["text"],
text_lengths=batch["text_lengths"],
feats=batch["feats"],
feats_lengths=batch["feats_lengths"],
durations=batch["durations"],
durations_lengths=batch["durations_lengths"],
pitch=batch["pitch"],
energy=batch["energy"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator",
use_alignment_module=self.use_alignment_module)
# Generator
if turn == "generator":
# parse outputs
speech_hat_, bin_loss, log_p_attn, start_idxs, d_outs, ds, p_outs, ps, e_outs, es = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_)
with paddle.no_grad():
# do not store discriminator gradient in generator turn
p = self.model.discriminator(speech_)
# calculate losses
mel_loss = self.criterion_mel(speech_hat_, speech_)
adv_loss = self.criterion_gen_adv(p_hat)
feat_match_loss = self.criterion_feat_match(p_hat, p)
dur_loss, pitch_loss, energy_loss = self.criterion_var(
d_outs, ds, p_outs, ps, e_outs, es, text_lengths)
mel_loss = mel_loss * self.lambda_mel
adv_loss = adv_loss * self.lambda_adv
feat_match_loss = feat_match_loss * self.lambda_feat_match
g_loss = mel_loss + adv_loss + feat_match_loss
var_loss = (
dur_loss + pitch_loss + energy_loss) * self.lambda_var
gen_loss = g_loss + var_loss #+ align_loss
report("train/generator_loss", float(gen_loss))
report("train/generator_generator_loss", float(g_loss))
report("train/generator_variance_loss", float(var_loss))
report("train/generator_generator_mel_loss", float(mel_loss))
report("train/generator_generator_adv_loss", float(adv_loss))
report("train/generator_generator_feat_match_loss",
float(feat_match_loss))
report("train/generator_variance_dur_loss", float(dur_loss))
report("train/generator_variance_pitch_loss", float(pitch_loss))
report("train/generator_variance_energy_loss",
float(energy_loss))
losses_dict["generator_loss"] = float(gen_loss)
losses_dict["generator_generator_loss"] = float(g_loss)
losses_dict["generator_variance_loss"] = float(var_loss)
losses_dict["generator_generator_mel_loss"] = float(mel_loss)
losses_dict["generator_generator_adv_loss"] = float(adv_loss)
losses_dict["generator_generator_feat_match_loss"] = float(
feat_match_loss)
losses_dict["generator_variance_dur_loss"] = float(dur_loss)
losses_dict["generator_variance_pitch_loss"] = float(pitch_loss)
losses_dict["generator_variance_energy_loss"] = float(
energy_loss)
                if self.use_alignment_module:
forwardsum_loss = self.criterion_forwardsum(
log_p_attn, text_lengths, feats_lengths)
align_loss = (
forwardsum_loss + bin_loss) * self.lambda_align
report("train/generator_alignment_loss", float(align_loss))
report("train/generator_alignment_forwardsum_loss",
float(forwardsum_loss))
report("train/generator_alignment_bin_loss",
float(bin_loss))
losses_dict["generator_alignment_loss"] = float(align_loss)
losses_dict["generator_alignment_forwardsum_loss"] = float(
forwardsum_loss)
losses_dict["generator_alignment_bin_loss"] = float(
bin_loss)
self.optimizer_g.clear_grad()
gen_loss.backward()
self.optimizer_g.step()
self.scheduler_g.step()
# reset cache
if self.model.reuse_cache_gen or not self.model.training:
self.model._cache = None
            # Discriminator
elif turn == "discriminator":
# parse outputs
speech_hat_, _, _, start_idxs, *_ = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_.detach())
p = self.model.discriminator(speech_)
# calculate losses
real_loss, fake_loss = self.criterion_dis_adv(p_hat, p)
dis_loss = real_loss + fake_loss
report("train/real_loss", float(real_loss))
report("train/fake_loss", float(fake_loss))
report("train/discriminator_loss", float(dis_loss))
losses_dict["real_loss"] = float(real_loss)
losses_dict["fake_loss"] = float(fake_loss)
losses_dict["discriminator_loss"] = float(dis_loss)
self.optimizer_d.clear_grad()
dis_loss.backward()
self.optimizer_d.step()
self.scheduler_d.step()
# reset cache
if self.model.reuse_cache_dis or not self.model.training:
self.model._cache = None
self.msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in losses_dict.items())
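For reference, the objectives assembled in `update_core` above can be summarized as follows (weights are the `lambda_*` arguments; the alignment terms are computed only when `use_alignment_module` is enabled and, as in the code, are reported but not added to the back-propagated generator loss):

$$
L_G = \lambda_{mel} L_{mel} + \lambda_{adv} L_{adv} + \lambda_{fm} L_{fm} + \lambda_{var}\,(L_{dur} + L_{pitch} + L_{energy}),
\qquad
L_D = L_{real} + L_{fake}
$$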
class JETSEvaluator(StandardEvaluator):
def __init__(self,
model,
criterions: Dict[str, Layer],
dataloader: DataLoader,
lambda_adv: float=1.0,
lambda_mel: float=45.0,
lambda_feat_match: float=2.0,
lambda_var: float=1.0,
lambda_align: float=2.0,
generator_first: bool=False,
use_alignment_module: bool=False,
output_dir=None):
        # a single model is passed in but the parent class's init() is not used, so this setup is re-implemented here
models = {"main": model}
self.models: Dict[str, Layer] = models
# self.model = model
self.model = model._layers if isinstance(model,
paddle.DataParallel) else model
self.criterions = criterions
self.criterion_mel = criterions['mel']
self.criterion_feat_match = criterions['feat_match']
self.criterion_gen_adv = criterions["gen_adv"]
self.criterion_dis_adv = criterions["dis_adv"]
self.criterion_var = criterions["var"]
self.criterion_forwardsum = criterions["forwardsum"]
self.dataloader = dataloader
self.lambda_adv = lambda_adv
self.lambda_mel = lambda_mel
self.lambda_feat_match = lambda_feat_match
self.lambda_var = lambda_var
self.lambda_align = lambda_align
self.use_alignment_module = use_alignment_module
if generator_first:
self.turns = ["generator", "discriminator"]
else:
self.turns = ["discriminator", "generator"]
log_file = output_dir / 'worker_{}.log'.format(dist.get_rank())
self.filehandler = logging.FileHandler(str(log_file))
logger.addHandler(self.filehandler)
self.logger = logger
self.msg = ""
def evaluate_core(self, batch):
# logging.debug("Evaluate: ")
self.msg = "Evaluate: "
losses_dict = {}
for turn in self.turns:
speech = batch["speech"]
speech = speech.unsqueeze(1)
text_lengths = batch["text_lengths"]
feats_lengths = batch["feats_lengths"]
outs = self.model(
text=batch["text"],
text_lengths=batch["text_lengths"],
feats=batch["feats"],
feats_lengths=batch["feats_lengths"],
durations=batch["durations"],
durations_lengths=batch["durations_lengths"],
pitch=batch["pitch"],
energy=batch["energy"],
sids=batch.get("spk_id", None),
spembs=batch.get("spk_emb", None),
forward_generator=turn == "generator",
use_alignment_module=self.use_alignment_module)
# Generator
if turn == "generator":
# parse outputs
speech_hat_, bin_loss, log_p_attn, start_idxs, d_outs, ds, p_outs, ps, e_outs, es = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_)
with paddle.no_grad():
# do not store discriminator gradient in generator turn
p = self.model.discriminator(speech_)
# calculate losses
mel_loss = self.criterion_mel(speech_hat_, speech_)
adv_loss = self.criterion_gen_adv(p_hat)
feat_match_loss = self.criterion_feat_match(p_hat, p)
dur_loss, pitch_loss, energy_loss = self.criterion_var(
d_outs, ds, p_outs, ps, e_outs, es, text_lengths)
mel_loss = mel_loss * self.lambda_mel
adv_loss = adv_loss * self.lambda_adv
feat_match_loss = feat_match_loss * self.lambda_feat_match
g_loss = mel_loss + adv_loss + feat_match_loss
var_loss = (
dur_loss + pitch_loss + energy_loss) * self.lambda_var
gen_loss = g_loss + var_loss #+ align_loss
report("eval/generator_loss", float(gen_loss))
report("eval/generator_generator_loss", float(g_loss))
report("eval/generator_variance_loss", float(var_loss))
report("eval/generator_generator_mel_loss", float(mel_loss))
report("eval/generator_generator_adv_loss", float(adv_loss))
report("eval/generator_generator_feat_match_loss",
float(feat_match_loss))
report("eval/generator_variance_dur_loss", float(dur_loss))
report("eval/generator_variance_pitch_loss", float(pitch_loss))
report("eval/generator_variance_energy_loss",
float(energy_loss))
losses_dict["generator_loss"] = float(gen_loss)
losses_dict["generator_generator_loss"] = float(g_loss)
losses_dict["generator_variance_loss"] = float(var_loss)
losses_dict["generator_generator_mel_loss"] = float(mel_loss)
losses_dict["generator_generator_adv_loss"] = float(adv_loss)
losses_dict["generator_generator_feat_match_loss"] = float(
feat_match_loss)
losses_dict["generator_variance_dur_loss"] = float(dur_loss)
losses_dict["generator_variance_pitch_loss"] = float(pitch_loss)
losses_dict["generator_variance_energy_loss"] = float(
energy_loss)
                if self.use_alignment_module:
forwardsum_loss = self.criterion_forwardsum(
log_p_attn, text_lengths, feats_lengths)
align_loss = (
forwardsum_loss + bin_loss) * self.lambda_align
report("eval/generator_alignment_loss", float(align_loss))
report("eval/generator_alignment_forwardsum_loss",
float(forwardsum_loss))
report("eval/generator_alignment_bin_loss", float(bin_loss))
losses_dict["generator_alignment_loss"] = float(align_loss)
losses_dict["generator_alignment_forwardsum_loss"] = float(
forwardsum_loss)
losses_dict["generator_alignment_bin_loss"] = float(
bin_loss)
# reset cache
if self.model.reuse_cache_gen or not self.model.training:
self.model._cache = None
            # Discriminator
elif turn == "discriminator":
# parse outputs
speech_hat_, _, _, start_idxs, *_ = outs
speech_ = get_segments(
x=speech,
start_idxs=start_idxs *
self.model.generator.upsample_factor,
segment_size=self.model.generator.segment_size *
self.model.generator.upsample_factor, )
# calculate discriminator outputs
p_hat = self.model.discriminator(speech_hat_.detach())
p = self.model.discriminator(speech_)
# calculate losses
real_loss, fake_loss = self.criterion_dis_adv(p_hat, p)
dis_loss = real_loss + fake_loss
report("eval/real_loss", float(real_loss))
report("eval/fake_loss", float(fake_loss))
report("eval/discriminator_loss", float(dis_loss))
losses_dict["real_loss"] = float(real_loss)
losses_dict["fake_loss"] = float(fake_loss)
losses_dict["discriminator_loss"] = float(dis_loss)
# reset cache
if self.model.reuse_cache_dis or not self.model.training:
self.model._cache = None
self.msg += ', '.join('{}: {:>.6f}'.format(k, v)
for k, v in losses_dict.items())
self.logger.info(self.msg)
@ -0,0 +1,67 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generator module in JETS.
This code is based on https://github.com/imdanboy/jets.
"""
import paddle
import paddle.nn.functional as F
from paddle import nn
from paddlespeech.t2s.modules.masked_fill import masked_fill
class GaussianUpsampling(nn.Layer):
"""
Gaussian upsampling with fixed temperature as in:
https://arxiv.org/abs/2010.04301
"""
def __init__(self, delta=0.1):
super().__init__()
self.delta = delta
def forward(self, hs, ds, h_masks=None, d_masks=None):
"""
Args:
hs (Tensor): Batched hidden state to be expanded (B, T_text, adim)
ds (Tensor): Batched token duration (B, T_text)
h_masks (Tensor): Mask tensor (B,T_feats)
d_masks (Tensor): Mask tensor (B,T_text)
Returns:
            Tensor: Expanded hidden state (B, T_feats, adim)
"""
B = ds.shape[0]
if h_masks is None:
T_feats = paddle.to_tensor(ds.sum(), dtype="int32")
else:
T_feats = h_masks.shape[-1]
t = paddle.to_tensor(
paddle.arange(0, T_feats).unsqueeze(0).tile([B, 1]),
dtype="float32")
if h_masks is not None:
t = t * paddle.to_tensor(h_masks, dtype="float32")
c = ds.cumsum(axis=-1) - ds / 2
energy = -1 * self.delta * (t.unsqueeze(-1) - c.unsqueeze(1))**2
if d_masks is not None:
d_masks = ~(d_masks.unsqueeze(1))
d_masks.stop_gradient = True
d_masks = d_masks.tile([1, T_feats, 1])
energy = masked_fill(energy, d_masks, -float("inf"))
p_attn = F.softmax(energy, axis=2) # (B, T_feats, T_text)
hs = paddle.matmul(p_attn, hs)
return hs
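A minimal shape check for the module above (toy values; it assumes `GaussianUpsampling` from this file is in scope):

```python
import paddle

# expand 3 token-level hidden states into 6 frames according to per-token durations
upsampler = GaussianUpsampling(delta=0.1)
hs = paddle.randn([1, 3, 4])            # (B, T_text, adim)
ds = paddle.to_tensor([[2., 3., 1.]])   # (B, T_text) durations in frames
out = upsampler(hs, ds)                 # (B, T_feats, adim), T_feats = 2 + 3 + 1 = 6
print(out.shape)                        # [1, 6, 4]
```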
@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from typing import Tuple
import librosa
import numpy as np
@ -19,8 +20,13 @@ import paddle
from paddle import nn
from paddle.nn import functional as F
from scipy import signal
from scipy.stats import betabinom
from typeguard import check_argument_types
from paddlespeech.t2s.modules.nets_utils import make_non_pad_mask
from paddlespeech.t2s.modules.predictor.duration_predictor import (
    DurationPredictorLoss,  # noqa: H301
)
# Losses for WaveRNN
@ -1126,3 +1132,195 @@ class MLMLoss(nn.Layer):
    text_masked_pos_reshape) / paddle.sum((text_masked_pos) + 1e-10)
    return mlm_loss, text_mlm_loss
class VarianceLoss(nn.Layer):
def __init__(self, use_masking: bool=True,
use_weighted_masking: bool=False):
"""Initialize JETS variance loss module.
Args:
use_masking (bool): Whether to apply masking for padded part in loss
calculation.
            use_weighted_masking (bool): Whether to apply weighted masking in loss
                calculation.
"""
assert check_argument_types()
super().__init__()
assert (use_masking != use_weighted_masking) or not use_masking
self.use_masking = use_masking
self.use_weighted_masking = use_weighted_masking
# define criterions
reduction = "none" if self.use_weighted_masking else "mean"
self.mse_criterion = nn.MSELoss(reduction=reduction)
self.duration_criterion = DurationPredictorLoss(reduction=reduction)
def forward(
self,
d_outs: paddle.Tensor,
ds: paddle.Tensor,
p_outs: paddle.Tensor,
ps: paddle.Tensor,
e_outs: paddle.Tensor,
es: paddle.Tensor,
ilens: paddle.Tensor,
            ) -> Tuple[paddle.Tensor, paddle.Tensor, paddle.Tensor]:
"""Calculate forward propagation.
Args:
d_outs (LongTensor): Batch of outputs of duration predictor (B, T_text).
ds (LongTensor): Batch of durations (B, T_text).
p_outs (Tensor): Batch of outputs of pitch predictor (B, T_text, 1).
ps (Tensor): Batch of target token-averaged pitch (B, T_text, 1).
e_outs (Tensor): Batch of outputs of energy predictor (B, T_text, 1).
es (Tensor): Batch of target token-averaged energy (B, T_text, 1).
ilens (LongTensor): Batch of the lengths of each input (B,).
Returns:
Tensor: Duration predictor loss value.
Tensor: Pitch predictor loss value.
Tensor: Energy predictor loss value.
"""
# apply mask to remove padded part
if self.use_masking:
duration_masks = paddle.to_tensor(
make_non_pad_mask(ilens), place=ds.place)
d_outs = d_outs.masked_select(duration_masks)
ds = ds.masked_select(duration_masks)
pitch_masks = paddle.to_tensor(
make_non_pad_mask(ilens).unsqueeze(-1), place=ds.place)
p_outs = p_outs.masked_select(pitch_masks)
e_outs = e_outs.masked_select(pitch_masks)
ps = ps.masked_select(pitch_masks)
es = es.masked_select(pitch_masks)
# calculate loss
duration_loss = self.duration_criterion(d_outs, ds)
pitch_loss = self.mse_criterion(p_outs, ps)
energy_loss = self.mse_criterion(e_outs, es)
# make weighted mask and apply it
if self.use_weighted_masking:
duration_masks = paddle.to_tensor(
make_non_pad_mask(ilens), place=ds.place)
            duration_weights = (duration_masks.astype("float32") /
                                duration_masks.sum(axis=1, keepdim=True).astype("float32"))
            duration_weights /= ds.shape[0]
# apply weight
duration_loss = (duration_loss.mul(duration_weights).masked_select(
duration_masks).sum())
pitch_masks = duration_masks.unsqueeze(-1)
pitch_weights = duration_weights.unsqueeze(-1)
pitch_loss = pitch_loss.mul(pitch_weights).masked_select(
pitch_masks).sum()
energy_loss = (
energy_loss.mul(pitch_weights).masked_select(pitch_masks).sum())
return duration_loss, pitch_loss, energy_loss
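A toy smoke test for `VarianceLoss` (hypothetical values: two utterances with 4 and 3 phones, padded to T_text = 4):

```python
import paddle

criterion = VarianceLoss(use_masking=True)
B, T_text = 2, 4
d_outs = paddle.randn([B, T_text])      # log-domain duration predictions
ds = paddle.to_tensor([[2, 3, 1, 4], [1, 2, 2, 0]], dtype="int64")  # target durations
p_outs = paddle.randn([B, T_text, 1])   # predicted token-averaged pitch
ps = paddle.randn([B, T_text, 1])       # target token-averaged pitch
e_outs = paddle.randn([B, T_text, 1])   # predicted token-averaged energy
es = paddle.randn([B, T_text, 1])       # target token-averaged energy
ilens = paddle.to_tensor([4, 3], dtype="int64")  # phone counts per utterance
dur_loss, pitch_loss, energy_loss = criterion(
    d_outs, ds, p_outs, ps, e_outs, es, ilens)
```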
class ForwardSumLoss(nn.Layer):
"""
https://openreview.net/forum?id=0NQwnnwAORi
"""
def __init__(self, cache_prior: bool=True):
"""
Args:
cache_prior (bool): Whether to cache beta-binomial prior
"""
super().__init__()
self.cache_prior = cache_prior
self._cache = {}
def forward(
self,
log_p_attn: paddle.Tensor,
ilens: paddle.Tensor,
olens: paddle.Tensor,
blank_prob: float=np.e**-1, ) -> paddle.Tensor:
"""
Args:
log_p_attn (Tensor): Batch of log probability of attention matrix (B, T_feats, T_text).
ilens (Tensor): Batch of the lengths of each input (B,).
olens (Tensor): Batch of the lengths of each target (B,).
blank_prob (float): Blank symbol probability
Returns:
Tensor: forwardsum loss value.
"""
B = log_p_attn.shape[0]
# add beta-binomial prior
bb_prior = self._generate_prior(ilens, olens)
bb_prior = paddle.to_tensor(
bb_prior, dtype=log_p_attn.dtype, place=log_p_attn.place)
log_p_attn = log_p_attn + bb_prior
        # an extra entry for the CTC blank token is added along the text axis
# (B,T_feats,T_text+1)
log_p_attn_pd = F.pad(
log_p_attn, (0, 0, 0, 0, 1, 0), value=np.log(blank_prob))
loss = 0
for bidx in range(B):
            # construct target sequence.
            # Every text token is mapped to a unique sequence number.
target_seq = paddle.arange(
1, ilens[bidx] + 1, dtype="int32").unsqueeze(0)
cur_log_p_attn_pd = log_p_attn_pd[bidx, :olens[bidx], :ilens[
bidx] + 1].unsqueeze(1) # (T_feats,1,T_text+1)
            # The inputs of the ctc_loss API need to be fixed
loss += F.ctc_loss(
log_probs=cur_log_p_attn_pd,
labels=target_seq,
input_lengths=olens[bidx:bidx + 1],
label_lengths=ilens[bidx:bidx + 1])
loss = loss / B
return loss
def _generate_prior(self, text_lengths, feats_lengths,
w=1) -> paddle.Tensor:
"""Generate alignment prior formulated as beta-binomial distribution
Args:
text_lengths (Tensor): Batch of the lengths of each input (B,).
feats_lengths (Tensor): Batch of the lengths of each target (B,).
            w (float): Scaling factor; a smaller value gives a wider prior.
Returns:
Tensor: Batched 2d static prior matrix (B, T_feats, T_text)
"""
B = len(text_lengths)
T_text = text_lengths.max()
T_feats = feats_lengths.max()
bb_prior = paddle.full((B, T_feats, T_text), fill_value=-np.inf)
for bidx in range(B):
T = feats_lengths[bidx].item()
N = text_lengths[bidx].item()
key = str(T) + ',' + str(N)
if self.cache_prior and key in self._cache:
prob = self._cache[key]
else:
alpha = w * np.arange(1, T + 1, dtype=float) # (T,)
beta = w * np.array([T - t + 1 for t in alpha])
k = np.arange(N)
batched_k = k[..., None] # (N,1)
prob = betabinom.pmf(batched_k, N, alpha, beta) # (N,T)
# store cache
if self.cache_prior and key not in self._cache:
self._cache[key] = prob
prob = paddle.to_tensor(
prob, place=text_lengths.place, dtype="float32").transpose(
(1, 0)) # -> (T,N)
bb_prior[bidx, :T, :N] = prob
return bb_prior
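The beta-binomial prior built in `_generate_prior` can be inspected on its own; a standalone sketch for one utterance with T = 6 frames and N = 3 tokens, mirroring the loop body above:

```python
import numpy as np
from scipy.stats import betabinom

T, N, w = 6, 3, 1                                # frames, tokens, width scale
alpha = w * np.arange(1, T + 1, dtype=float)     # (T,)
beta = w * np.array([T - t + 1 for t in alpha])  # (T,)
k = np.arange(N)[..., None]                      # (N, 1) token indices
prob = betabinom.pmf(k, N, alpha, beta)          # (N, T) prior, one column per frame
print(prob.T)                                    # (T, N): rows follow frames, as stored in bb_prior
```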