Trained a TransformerTTS acoustic model that can be paired with other vocoders

pull/2449/head
吕志轩 3 years ago
parent 923ae61e7e
commit 4ea647c50d

@@ -0,0 +1,198 @@
# TransformerTTS with CSMSC
## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/TNtts/) and extract it to `~/datasets`. Then the dataset is in the directory `~/datasets/BZNSYP`.
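The exact archive name and format depend on what the Data-Baker site serves after registration; a rough sketch of the extraction step (the `BZNSYP.rar` name below is only a placeholder):
```bash
mkdir -p ~/datasets
# BZNSYP.rar is a placeholder name; use whatever archive you actually downloaded,
# and make sure the extracted data ends up under ~/datasets/BZNSYP
unrar x BZNSYP.rar ~/datasets/
```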
## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage. For example, running the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
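`run.sh` defines `gpus`, `conf_path`, `train_output_path`, and `ckpt_name` before sourcing the Kaldi-style `parse_options.sh` (see the script further down this page), so, assuming the usual `parse_options.sh` behavior, those variables can be overridden from the command line in the same way as `stage` and `stop-stage`, for example:
```bash
# run only the training stage, on GPU 0, with the default config
./run.sh --stage 1 --stop-stage 1 --gpus 0 --conf_path conf/default.yaml
```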
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── norm
    ├── raw
    └── speech_stats.npy
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the speech feature of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize features are computed from the training set and stored in `dump/train/speech_stats.npy`.
Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, speech_lengths, the path of speech features, speaker, and id of each utterance.
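To sanity-check the preprocessing output, you can peek at one metadata entry and at the normalization statistics. A small sketch (as described above, `speech_stats.npy` holds the mean and standard deviation of the speech features computed on the training set):
```bash
# first normalized training example (one JSON object per line)
head -n 1 dump/train/norm/metadata.jsonl
# shape of the mean/std statistics computed on the training set
python3 -c "import numpy as np; print(np.load('dump/train/speech_stats.npy').shape)"
```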
### Model Training
`./local/train.sh` calls `${BIN_DIR}/train.py`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--phones-dict PHONES_DICT]

Train a TransformerTTS model with LJSpeech TTS dataset.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       TransformerTTS config file.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata file in the normalized subfolder of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.
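Putting these options together, `./local/train.sh` boils down to a call roughly like the one below. This is a sketch assuming the default `dump/` layout and the `exp/default` output directory used by `run.sh`; run `source path.sh` first so that `BIN_DIR` is set, and note that your local `train.sh` wrapper may differ in detail.
```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
```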
## Synthesizing
We use [waveflow](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/ljspeech/voc0) as the neural vocoder.
Download the pretrained WaveFlow model with residual channel equal to 128 from [waveflow_ljspeech_ckpt_0.3.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/waveflow/waveflow_ljspeech_ckpt_0.3.zip) and unzip it.
```bash
unzip waveflow_ljspeech_ckpt_0.3.zip
```
The WaveFlow checkpoint contains the files listed below.
```text
waveflow_ljspeech_ckpt_0.3
├── config.yaml # default config used to train waveflow
└── step-2000000.pdparams # model parameters of waveflow
```
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize.py [-h] [--transformer-tts-config TRANSFORMER_TTS_CONFIG]
                     [--transformer-tts-checkpoint TRANSFORMER_TTS_CHECKPOINT]
                     [--transformer-tts-stat TRANSFORMER_TTS_STAT]
                     [--voc-config WAVEFLOW_CONFIG]
                     [--voc-checkpoint WAVEFLOW_CHECKPOINT]
                     [--phones-dict PHONES_DICT]
                     [--test-metadata TEST_METADATA] [--output-dir OUTPUT_DIR]
                     [--ngpu NGPU]

Synthesize with transformer tts & waveflow.

optional arguments:
  -h, --help            show this help message and exit
  --transformer-tts-config TRANSFORMER_TTS_CONFIG
                        transformer tts config file.
  --transformer-tts-checkpoint TRANSFORMER_TTS_CHECKPOINT
                        transformer tts checkpoint to load.
  --transformer-tts-stat TRANSFORMER_TTS_STAT
                        mean and standard deviation used to normalize
                        spectrogram when training transformer tts.
  --voc-config WAVEFLOW_CONFIG
                        waveflow config file.
  --voc-checkpoint WAVEFLOW_CHECKPOINT
                        waveflow checkpoint to load.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
  --test-metadata TEST_METADATA
                        test metadata.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
```
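For reference, `./local/synthesize.sh` expands to a call along the following lines. This is only a sketch: it uses the option names from the help message above together with the unzipped WaveFlow files, assumes `source path.sh` has been run and `${ckpt_name}` is one of your saved checkpoints, and the output directory name is arbitrary; depending on the script version you may also need to select the vocoder with a `--voc` flag.
```bash
python3 ${BIN_DIR}/synthesize.py \
    --transformer-tts-config=conf/default.yaml \
    --transformer-tts-checkpoint=exp/default/checkpoints/${ckpt_name} \
    --transformer-tts-stat=dump/train/speech_stats.npy \
    --voc-config=waveflow_ljspeech_ckpt_0.3/config.yaml \
    --voc-checkpoint=waveflow_ljspeech_ckpt_0.3/step-2000000.pdparams \
    --test-metadata=dump/test/norm/metadata.jsonl \
    --output-dir=exp/default/test \
    --phones-dict=dump/phone_id_map.txt
```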
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
```text
usage: synthesize_e2e.py [-h]
                         [--transformer-tts-config TRANSFORMER_TTS_CONFIG]
                         [--transformer-tts-checkpoint TRANSFORMER_TTS_CHECKPOINT]
                         [--transformer-tts-stat TRANSFORMER_TTS_STAT]
                         [--voc-config WAVEFLOW_CONFIG]
                         [--voc-ckpt WAVEFLOW_CHECKPOINT]
                         [--phones-dict PHONES_DICT] [--text TEXT]
                         [--output-dir OUTPUT_DIR] [--ngpu NGPU]

Synthesize with transformer tts & waveflow.

optional arguments:
  -h, --help            show this help message and exit
  --transformer-tts-config TRANSFORMER_TTS_CONFIG
                        transformer tts config file.
  --transformer-tts-checkpoint TRANSFORMER_TTS_CHECKPOINT
                        transformer tts checkpoint to load.
  --transformer-tts-stat TRANSFORMER_TTS_STAT
                        mean and standard deviation used to normalize
                        spectrogram when training transformer tts.
  --voc-config WAVEFLOW_CONFIG
                        waveflow config file.
  --voc-ckpt WAVEFLOW_CHECKPOINT
                        waveflow checkpoint to load.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
  --text TEXT           text to synthesize, a 'utt_id sentence' pair per line.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
```
1. `--transformer-tts-config`, `--transformer-tts-checkpoint`, `--transformer-tts-stat` and `--phones-dict` are arguments for transformer_tts, which correspond to the 4 files in the transformer_tts pretrained model.
2. `--voc-config` and `--voc-ckpt` (`--voc-checkpoint` for `synthesize.py`) are arguments for the vocoder, which correspond to the 2 files in the waveflow pretrained model.
3. `--test-metadata` should be the metadata file in the normalized subfolder of `test` in the `dump` folder.
4. `--text` is the text file, which contains sentences to synthesize.
5. `--output-dir` is the directory to save synthesized audio files.
6. `--ngpu` is the number of gpus to use, if ngpu == 0, use cpu.
## Pretrained Model
The pretrained model can be downloaded here:
- [transformer_tts_csmsc_ckpt.zip](https://pan.baidu.com/s/1-6uvjQDxS0-6c9XZPBYqBQ?pwd=jjc3)
The TransformerTTS checkpoint contains the files listed below.
```text
transformer_tts_csmsc_ckpt
├── default.yaml # default config used to train transformer_tts
├── phone_id_map.txt # phone vocabulary file when training transformer_tts
├── snapshot_iter_675000.pdz # model parameters and optimizer states
└── speech_stats.npy # statistics used to normalize spectrogram when training transformer_tts
```
You can use the following scripts to synthesize for `${BIN_DIR}/../sentences.txt` using pretrained transformer_tts and waveflow models.
```bash
source path.sh
FLAGS_allocator_strategy=naive_best_fit \
FLAGS_fraction_of_gpu_memory_to_use=0.01 \
python3 ${BIN_DIR}/synthesize_e2e.py \
--transformer-tts-config=transformer_tts_csmsc_ckpt/default.yaml \
  --transformer-tts-checkpoint=transformer_tts_csmsc_ckpt/snapshot_iter_675000.pdz \
--transformer-tts-stat=transformer_tts_csmsc_ckpt/speech_stats.npy \
--voc-config=waveflow_ljspeech_ckpt_0.3/config.yaml \
--voc-ckpt=waveflow_ljspeech_ckpt_0.3/step-2000000.pdparams \
--text=${BIN_DIR}/../sentences.txt \
--output-dir=exp/default/test_e2e \
--phones-dict=transformer_tts_csmsc_ckpt/phone_id_map.txt
```
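To synthesize your own sentences instead of the default `sentences.txt`, prepare a text file with one `utt_id sentence` pair per line (CSMSC is a Mandarin corpus, so the sentences should be Chinese) and point `--text` at it. A sketch with a made-up file name:
```bash
cat > my_sentences.txt <<EOF
001 欢迎使用飞桨语音合成。
002 今天天气真不错。
EOF
# then re-run the command above with --text=my_sentences.txt
```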

@@ -0,0 +1,91 @@
fs : 24000 # Hz, sample rate
n_fft : 2048 # FFT size (samples).
win_length : 1200       # Window length (samples). 50 ms
n_shift : 300           # Hop size (samples). 12.5 ms
fmin : 80 # Hz, min frequency when converting to mel
fmax : 7600 # Hz, max frequency when converting to mel
n_mels : 80 # mel bands
window: "hann" # Window function.
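# note: at fs = 24000, win_length = 1200 samples is 50 ms and n_shift = 300 samples
# is 12.5 ms, i.e. the model sees 80 mel frames per second (24000 / 300).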
###########################################################
# DATA SETTING #
###########################################################
batch_size: 4
num_workers: 2
##########################################################
# TTS MODEL SETTING #
##########################################################
tts: transformertts # model architecture
model: # keyword arguments for the selected model
    embed_dim: 0               # embedding dimension in encoder prenet
    eprenet_conv_layers: 0     # number of conv layers in encoder prenet
                               # if set to 0, no encoder prenet will be used
    eprenet_conv_filts: 0      # filter size of conv layers in encoder prenet
    eprenet_conv_chans: 0      # number of channels of conv layers in encoder prenet
    dprenet_layers: 2          # number of layers in decoder prenet
    dprenet_units: 256         # number of units in decoder prenet
    adim: 512                  # attention dimension
    aheads: 8                  # number of attention heads
    elayers: 6                 # number of encoder layers
    eunits: 1024               # number of encoder ff units
    dlayers: 6                 # number of decoder layers
    dunits: 1024               # number of decoder ff units
    positionwise_layer_type: conv1d   # type of position-wise layer
    positionwise_conv_kernel_size: 1  # kernel size of position wise conv layer
    postnet_layers: 5          # number of layers of postnet
    postnet_filts: 5           # filter size of conv layers in postnet
    postnet_chans: 256         # number of channels of conv layers in postnet
    use_scaled_pos_enc: True   # whether to use scaled positional encoding
    encoder_normalize_before: True   # whether to perform layer normalization before the input
    decoder_normalize_before: True   # whether to perform layer normalization before the input
    reduction_factor: 1        # reduction factor
    init_type: xavier_uniform  # initialization type
    init_enc_alpha: 1.0        # initial value of alpha of encoder scaled position encoding
    init_dec_alpha: 1.0        # initial value of alpha of decoder scaled position encoding
    eprenet_dropout_rate: 0.0  # dropout rate for encoder prenet
    dprenet_dropout_rate: 0.5  # dropout rate for decoder prenet
    postnet_dropout_rate: 0.5  # dropout rate for postnet
    transformer_enc_dropout_rate: 0.1             # dropout rate for transformer encoder layer
    transformer_enc_positional_dropout_rate: 0.1  # dropout rate for transformer encoder positional encoding
    transformer_enc_attn_dropout_rate: 0.1        # dropout rate for transformer encoder attention layer
    transformer_dec_dropout_rate: 0.1             # dropout rate for transformer decoder layer
    transformer_dec_positional_dropout_rate: 0.1  # dropout rate for transformer decoder positional encoding
    transformer_dec_attn_dropout_rate: 0.1        # dropout rate for transformer decoder attention layer
    transformer_enc_dec_attn_dropout_rate: 0.1    # dropout rate for transformer encoder-decoder attention layer
    num_heads_applied_guided_attn: 2              # number of heads to apply guided attention loss
    num_layers_applied_guided_attn: 2             # number of layers to apply guided attention loss
###########################################################
# UPDATER SETTING #
###########################################################
updater:
    use_masking: True               # whether to apply masking for padded part in loss calculation
    loss_type: L1
    use_guided_attn_loss: True      # whether to use guided attention loss
    guided_attn_loss_sigma: 0.4     # sigma in guided attention loss
    guided_attn_loss_lambda: 10.0   # lambda in guided attention loss
    modules_applied_guided_attn: ["encoder-decoder"]  # modules to apply guided attention loss
    bce_pos_weight: 5.0             # weight of positive sample in binary cross entropy calculation
##########################################################
# OPTIMIZER & SCHEDULER SETTING #
##########################################################
optimizer:
    optim: adam            # optimizer type
    learning_rate: 0.001   # learning rate
###########################################################
# TRAINING SETTING #
###########################################################
max_epoch: 300
num_snapshots: 5
###########################################################
# OTHER SETTING #
###########################################################
seed: 10086

@@ -0,0 +1,13 @@
#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`
export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C
export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}
MODEL=transformer_tts
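# BIN_DIR points at the shared transformer_tts entry points (train.py, synthesize.py,
# synthesize_e2e.py) that the scripts under ./local call.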
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}

@@ -0,0 +1,37 @@
#!/bin/bash
set -e
source path.sh
gpus=0,1
stage=0
stop_stage=100
conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_403.pdz
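# ckpt_name is the acoustic-model checkpoint used by the synthesis stages below;
# after training, pick one of the files saved under ${train_output_path}/checkpoints/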
# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be mixed with positional arguments `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # prepare data
    ./local/preprocess.sh ${conf_path} || exit -1
fi

if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # train model, all `ckpt` files are saved under `train_output_path/checkpoints/`
    CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1
fi

if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    # synthesize, vocoder is pwgan
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi

if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # synthesize_e2e, vocoder is pwgan
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1
fi

@@ -0,0 +1,206 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
from pathlib import Path
import yaml
from yacs.config import CfgNode
from paddlespeech.t2s.datasets.get_feats import LogMelFBank
from paddlespeech.t2s.datasets.preprocess_utils import get_input_token
from paddlespeech.t2s.datasets.preprocess_utils import get_phn_dur
from paddlespeech.t2s.datasets.preprocess_utils import get_spk_id_map
from paddlespeech.t2s.datasets.preprocess_utils import merge_silence
from paddlespeech.t2s.utils import str2bool
#from concurrent.futures import ThreadPoolExecutor
#from operator import itemgetter
#from typing import Any
#from typing import Dict
#from typing import List
#import jsonlines
#import librosa
#import numpy as np
#import tqdm
#from paddlespeech.t2s.datasets.preprocess_utils import compare_duration_and_mel_length


def main():
    # parse config and args
    parser = argparse.ArgumentParser(
        description="Preprocess audio and then extract features.")
    parser.add_argument(
        "--dataset",
        default="baker",
        type=str,
        help="name of dataset, should in {baker, aishell3, ljspeech, vctk} now")
    parser.add_argument(
        "--rootdir", default=None, type=str, help="directory to dataset.")
    parser.add_argument(
        "--dumpdir",
        type=str,
        required=True,
        help="directory to dump feature files.")
    parser.add_argument(
        "--dur-file", default=None, type=str, help="path to durations.txt.")
    parser.add_argument("--config", type=str, help="transformer config file.")
    parser.add_argument(
        "--num-cpu", type=int, default=1, help="number of process.")
    parser.add_argument(
        "--cut-sil",
        type=str2bool,
        default=True,
        help="whether cut sil in the edge of audio")
    parser.add_argument(
        "--spk_emb_dir",
        default=None,
        type=str,
        help="directory to speaker embedding files.")
    args = parser.parse_args()

    rootdir = Path(args.rootdir).expanduser()
    dumpdir = Path(args.dumpdir).expanduser()
    # use absolute path
    dumpdir = dumpdir.resolve()
    dumpdir.mkdir(parents=True, exist_ok=True)
    dur_file = Path(args.dur_file).expanduser()

    if args.spk_emb_dir:
        spk_emb_dir = Path(args.spk_emb_dir).expanduser().resolve()
    else:
        spk_emb_dir = None

    assert rootdir.is_dir()
    assert dur_file.is_file()

    with open(args.config, 'rt') as f:
        config = CfgNode(yaml.safe_load(f))

    sentences, speaker_set = get_phn_dur(dur_file)
    merge_silence(sentences)
    phone_id_map_path = dumpdir / "phone_id_map.txt"
    speaker_id_map_path = dumpdir / "speaker_id_map.txt"
    get_input_token(sentences, phone_id_map_path, args.dataset)
    get_spk_id_map(speaker_set, speaker_id_map_path)

    if args.dataset == "baker":
        wav_files = sorted(list((rootdir / "Wave").rglob("*.wav")))
        # split data into 3 sections
        num_train = 9800
        num_dev = 100
        train_wav_files = wav_files[:num_train]
        dev_wav_files = wav_files[num_train:num_train + num_dev]
        test_wav_files = wav_files[num_train + num_dev:]
    elif args.dataset == "aishell3":
        sub_num_dev = 5
        wav_dir = rootdir / "train" / "wav"
        train_wav_files = []
        dev_wav_files = []
        test_wav_files = []
        for speaker in os.listdir(wav_dir):
            wav_files = sorted(list((wav_dir / speaker).rglob("*.wav")))
            if len(wav_files) > 100:
                train_wav_files += wav_files[:-sub_num_dev * 2]
                dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
                test_wav_files += wav_files[-sub_num_dev:]
            else:
                train_wav_files += wav_files
    elif args.dataset == "ljspeech":
        wav_files = sorted(list((rootdir / "wavs").rglob("*.wav")))
        # split data into 3 sections
        num_train = 12900
        num_dev = 100
        train_wav_files = wav_files[:num_train]
        dev_wav_files = wav_files[num_train:num_train + num_dev]
        test_wav_files = wav_files[num_train + num_dev:]
    elif args.dataset == "vctk":
        sub_num_dev = 5
        wav_dir = rootdir / "wav48_silence_trimmed"
        train_wav_files = []
        dev_wav_files = []
        test_wav_files = []
        for speaker in os.listdir(wav_dir):
            wav_files = sorted(list((wav_dir / speaker).rglob("*_mic2.flac")))
            if len(wav_files) > 100:
                train_wav_files += wav_files[:-sub_num_dev * 2]
                dev_wav_files += wav_files[-sub_num_dev * 2:-sub_num_dev]
                test_wav_files += wav_files[-sub_num_dev:]
            else:
                train_wav_files += wav_files
    else:
        print("dataset should in {baker, aishell3, ljspeech, vctk} now!")

    train_dump_dir = dumpdir / "train" / "raw"
    train_dump_dir.mkdir(parents=True, exist_ok=True)
    dev_dump_dir = dumpdir / "dev" / "raw"
    dev_dump_dir.mkdir(parents=True, exist_ok=True)
    test_dump_dir = dumpdir / "test" / "raw"
    test_dump_dir.mkdir(parents=True, exist_ok=True)

    # Extractor
    mel_extractor = LogMelFBank(
        sr=config.fs,
        n_fft=config.n_fft,
        hop_length=config.n_shift,
        win_length=config.win_length,
        window=config.window,
        n_mels=config.n_mels,
        fmin=config.fmin,
        fmax=config.fmax)

    # process for the 3 sections
    if train_wav_files:
        process_sentences(
            config=config,
            fps=train_wav_files,
            sentences=sentences,
            output_dir=train_dump_dir,
            mel_extractor=mel_extractor,
            nprocs=args.num_cpu,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir)
    if dev_wav_files:
        process_sentences(
            config=config,
            fps=dev_wav_files,
            sentences=sentences,
            output_dir=dev_dump_dir,
            mel_extractor=mel_extractor,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir)
    if test_wav_files:
        process_sentences(
            config=config,
            fps=test_wav_files,
            sentences=sentences,
            output_dir=test_dump_dir,
            mel_extractor=mel_extractor,
            nprocs=args.num_cpu,
            cut_sil=args.cut_sil,
            spk_emb_dir=spk_emb_dir)


if __name__ == "__main__":
    main()

@@ -25,10 +25,11 @@ from yacs.config import CfgNode
 from paddlespeech.t2s.datasets.data_table import DataTable
 from paddlespeech.t2s.models.transformer_tts import TransformerTTS
 from paddlespeech.t2s.models.transformer_tts import TransformerTTSInference
-from paddlespeech.t2s.models.waveflow import ConditionalWaveFlow
 from paddlespeech.t2s.modules.normalizer import ZScore
 from paddlespeech.t2s.utils import layer_tools
+#from paddlespeech.t2s.models.waveflow import ConditionalWaveFlow


 def evaluate(args, acoustic_model_config, vocoder_config):
     # dataloader has been too verbose
@@ -50,11 +51,14 @@ def evaluate(args, acoustic_model_config, vocoder_config):
     model.set_state_dict(
         paddle.load(args.transformer_tts_checkpoint)["main_params"])
     model.eval()

     # remove ".pdparams" in waveflow_checkpoint
-    vocoder_checkpoint_path = args.waveflow_checkpoint[:-9] if args.waveflow_checkpoint.endswith(
-        ".pdparams") else args.waveflow_checkpoint
-    vocoder = ConditionalWaveFlow.from_pretrained(vocoder_config,
-                                                  vocoder_checkpoint_path)
+    vocoder = get_voc_inference(
+        voc=args.voc,
+        voc_config=vocoder_config,
+        voc_ckpt=args.voc_ckpt,
+        voc_stat=args.voc_stat)
     layer_tools.recursively_remove_weight_norm(vocoder)
     vocoder.eval()
     print("model done!")
@@ -78,9 +82,8 @@ def evaluate(args, acoustic_model_config, vocoder_config):
         with paddle.no_grad():
             mel = transformer_tts_inference(text)
             # mel shape is (T, feats) and waveflow's input shape is (batch, feats, T)
-            mel = mel.unsqueeze(0).transpose([0, 2, 1])
             # wavflow's output shape is (B, T)
-            wav = vocoder.infer(mel)[0]
+            wav = vocoder(mel)

         sf.write(
             str(output_dir / (utt_id + ".wav")),
@@ -106,18 +109,34 @@ def main():
         type=str,
         help="mean and standard deviation used to normalize spectrogram when training transformer tts."
     )
+    # vocoder
     parser.add_argument(
-        "--waveflow-config", type=str, help="waveflow config file.")
-    # not normalize when training waveflow
+        '--voc',
+        type=str,
+        default='pwgan_csmsc',
+        choices=[
+            'pwgan_csmsc', 'pwgan_ljspeech', 'pwgan_aishell3', 'pwgan_vctk',
+            'mb_melgan_csmsc', 'wavernn_csmsc', 'hifigan_csmsc',
+            'hifigan_ljspeech', 'hifigan_aishell3', 'hifigan_vctk',
+            'style_melgan_csmsc'
+        ],
+        help='Choose vocoder type of tts task.')
     parser.add_argument(
-        "--waveflow-checkpoint", type=str, help="waveflow checkpoint to load.")
+        '--voc_config', type=str, default=None, help='Config of voc.')
     parser.add_argument(
-        "--phones-dict", type=str, default=None, help="phone vocabulary file.")
-    parser.add_argument("--test-metadata", type=str, help="test metadata.")
-    parser.add_argument("--output-dir", type=str, help="output dir.")
+        '--voc_ckpt', type=str, default=None, help='Checkpoint file of voc.')
+    parser.add_argument(
+        "--voc_stat",
+        type=str,
+        default=None,
+        help="mean and standard deviation used to normalize spectrogram when training voc."
+    )
+    # other
     parser.add_argument(
         "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
+    parser.add_argument("--test_metadata", type=str, help="test metadata.")
+    parser.add_argument("--output_dir", type=str, help="output dir.")

     args = parser.parse_args()
@@ -130,16 +149,16 @@ def main():
     with open(args.transformer_tts_config) as f:
         transformer_tts_config = CfgNode(yaml.safe_load(f))
-    with open(args.waveflow_config) as f:
-        waveflow_config = CfgNode(yaml.safe_load(f))
+    with open(args.voc_config) as f:
+        voc_config = CfgNode(yaml.safe_load(f))

     print("========Args========")
     print(yaml.safe_dump(vars(args)))
     print("========Config========")
     print(transformer_tts_config)
-    print(waveflow_config)
+    print(voc_config)

-    evaluate(args, transformer_tts_config, waveflow_config)
+    evaluate(args, transformer_tts_config, voc_config)


 if __name__ == "__main__":

@@ -21,12 +21,15 @@ import soundfile as sf
 import yaml
 from yacs.config import CfgNode
-from paddlespeech.t2s.frontend import English
+from paddlespeech.t2s.frontend import Chinese
 from paddlespeech.t2s.models.transformer_tts import TransformerTTS
 from paddlespeech.t2s.models.transformer_tts import TransformerTTSInference
-from paddlespeech.t2s.models.waveflow import ConditionalWaveFlow
 from paddlespeech.t2s.modules.normalizer import ZScore
-from paddlespeech.t2s.utils import layer_tools
+#from paddlespeech.t2s.frontend import English
+#from paddlespeech.t2s.models.waveflow import ConditionalWaveFlow
+#from paddlespeech.t2s.utils import layer_tools


 def evaluate(args, acoustic_model_config, vocoder_config):
@@ -59,15 +62,15 @@ def evaluate(args, acoustic_model_config, vocoder_config):
     model.eval()

     # remove ".pdparams" in waveflow_checkpoint
-    vocoder_checkpoint_path = args.waveflow_checkpoint[:-9] if args.waveflow_checkpoint.endswith(
-        ".pdparams") else args.waveflow_checkpoint
-    vocoder = ConditionalWaveFlow.from_pretrained(vocoder_config,
-                                                  vocoder_checkpoint_path)
-    layer_tools.recursively_remove_weight_norm(vocoder)
+    vocoder = get_voc_inference(
+        voc=args.voc,
+        voc_config=vocoder_config,
+        voc_ckpt=args.voc_ckpt,
+        voc_stat=args.voc_stat)
     vocoder.eval()
     print("model done!")

-    frontend = English()
+    frontend = Chinese()
     print("frontend done!")

     stat = np.load(args.transformer_tts_stat)
@@ -90,11 +93,10 @@ def evaluate(args, acoustic_model_config, vocoder_config):
         phones = [phn if phn in phone_id_map else "," for phn in phones]
         phone_ids = [phone_id_map[phn] for phn in phones]
         with paddle.no_grad():
-            mel = transformer_tts_inference(paddle.to_tensor(phone_ids))
-            # mel shape is (T, feats) and waveflow's input shape is (batch, feats, T)
-            mel = mel.unsqueeze(0).transpose([0, 2, 1])
-            # wavflow's output shape is (B, T)
-            wav = vocoder.infer(mel)[0]
+            tensor_phone_ids = paddle.to_tensor(phone_ids)
+            mel = transformer_tts_inference(tensor_phone_ids)
+            wav = vocoder(mel)

         sf.write(
             str(output_dir / (utt_id + ".wav")),
@@ -120,23 +122,51 @@ def main():
         type=str,
         help="mean and standard deviation used to normalize spectrogram when training transformer tts."
     )
+    # vocoder
+    parser.add_argument(
+        '--voc',
+        type=str,
+        default='pwgan_csmsc',
+        choices=[
+            'pwgan_csmsc',
+            'pwgan_ljspeech',
+            'pwgan_aishell3',
+            'pwgan_vctk',
+            'mb_melgan_csmsc',
+            'style_melgan_csmsc',
+            'hifigan_csmsc',
+            'hifigan_ljspeech',
+            'hifigan_aishell3',
+            'hifigan_vctk',
+            'wavernn_csmsc',
+        ],
+        help='Choose vocoder type of tts task.')
+    parser.add_argument(
+        '--voc_config', type=str, default=None, help='Config of voc.')
     parser.add_argument(
-        "--waveflow-config", type=str, help="waveflow config file.")
-    # not normalize when training waveflow
+        '--voc_ckpt', type=str, default=None, help='Checkpoint file of voc.')
     parser.add_argument(
-        "--waveflow-checkpoint", type=str, help="waveflow checkpoint to load.")
+        "--voc_stat",
+        type=str,
+        default=None,
+        help="mean and standard deviation used to normalize spectrogram when training voc."
+    )
+    # other
     parser.add_argument(
-        "--phones-dict",
-        type=str,
-        default="phone_id_map.txt",
-        help="phone vocabulary file.")
+        "--phones_dict", type=str, default=None, help="phone vocabulary file.")
+    parser.add_argument(
+        '--lang',
+        type=str,
+        default='zh',
+        help='Choose model language. zh or en or mix')
+    parser.add_argument(
+        "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
     parser.add_argument(
         "--text",
         type=str,
         help="text to synthesize, a 'utt_id sentence' pair per line.")
-    parser.add_argument("--output-dir", type=str, help="output dir.")
-    parser.add_argument(
-        "--ngpu", type=int, default=1, help="if ngpu == 0, use cpu.")
+    parser.add_argument("--output_dir", type=str, help="output dir.")

     args = parser.parse_args()
@@ -149,16 +179,16 @@ def main():
     with open(args.transformer_tts_config) as f:
         transformer_tts_config = CfgNode(yaml.safe_load(f))
-    with open(args.waveflow_config) as f:
-        waveflow_config = CfgNode(yaml.safe_load(f))
+    with open(args.voc_config) as f:
+        voc_config = CfgNode(yaml.safe_load(f))

     print("========Args========")
     print(yaml.safe_dump(vars(args)))
     print("========Config========")
     print(transformer_tts_config)
-    print(waveflow_config)
+    print(voc_config)

-    evaluate(args, transformer_tts_config, waveflow_config)
+    evaluate(args, transformer_tts_config, voc_config)


 if __name__ == "__main__":
