# Speaker Encoder
This experiment trains a speaker encoder with speaker verification as its task. It is part of the experiment on transfer learning from speaker verification to multispeaker text-to-speech synthesis, which can be found at `examples/aishell3/vc0`. The trained speaker encoder is used to extract utterance embeddings from utterances.
## Model
The model used in this experiment is the speaker encoder trained on a text-independent speaker verification task, as described in [Generalized End-to-End Loss for Speaker Verification](https://arxiv.org/abs/1710.10467). The GE2E-softmax loss is used.
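For reference, here is a sketch of the GE2E softmax loss from that paper. Let $\mathbf{e}_{ji}$ be the embedding of the $i$-th utterance of speaker $j$, let $\mathbf{c}_k$ be the centroid of speaker $k$'s embeddings, and let $S_{ji,k} = w \cdot \cos(\mathbf{e}_{ji}, \mathbf{c}_k) + b$ be the scaled cosine similarity, with learned scalars $w > 0$ and $b$. Each utterance is pulled toward its own speaker's centroid and pushed away from all other centroids in the batch:

$$
L(\mathbf{e}_{ji}) = -S_{ji,j} + \log \sum_{k=1}^{N} \exp(S_{ji,k})
$$

where $N$ is the number of speakers per batch. In the paper, the centroid $\mathbf{c}_j$ excludes $\mathbf{e}_{ji}$ itself when computing $S_{ji,j}$, which stabilizes training.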
## Download Datasets
Currently supported datasets are Librispeech train-other-500, VoxCeleb1, VoxCeleb2, Aidatatang-200zh and magicdata, each of which can be downloaded from its corresponding webpage.

- Librispeech/train-other-500: an English multispeaker dataset (URL); only the `train-other-500` subset is used.
- VoxCeleb1: an English multispeaker dataset (URL); Audio Files from Dev A to Dev D should be downloaded, combined, and extracted.
- VoxCeleb2: an English multispeaker dataset (URL); Audio Files from Dev A to Dev H should be downloaded, combined, and extracted.
- Aidatatang-200zh: a Mandarin Chinese multispeaker dataset (URL).
- magicdata: a Mandarin Chinese multispeaker dataset (URL).
If you want to use other datasets, you can also download and preprocess them, as long as they meet the requirements described below.
## Get Started
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to run only one stage. For example, running the following command will only preprocess the dataset.

```bash
./run.sh --stage 0 --stop-stage 0
```
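Similarly, you can run several consecutive stages in one go. A minimal sketch, assuming stage 1 is the training stage as in other examples in this repository:

```bash
# run preprocessing (stage 0) through training (stage 1, an assumption)
./run.sh --stage 0 --stop-stage 1
```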
## Data Preprocessing
`./local/preprocess.sh` calls `${BIN_DIR}/preprocess.py`.

```bash
./local/preprocess.sh ${datasets_root} ${preprocess_path} ${dataset_names}
```
Assume `datasets_root` is `~/datasets/GE2E`, and it has the following structure (we only use `train-other-500` for simplicity):

```text
GE2E
├── LibriSpeech
└── (other datasets)
```
Multispeaker datasets are used as training data, though the transcriptions are not used. To enlarge the amount of data used for training, several multispeaker datasets are combined. The preprocessed datasets are organized in the file structure described below. The mel spectrogram of each utterance is saved in `.npy` format. The dataset is 2-stratified (speaker-utterance). Since multiple datasets are combined, to avoid conflicts in speaker ids, the dataset name is prepended to the speaker ids.
```text
dataset_root
├── dataset01_speaker01/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
├── dataset01_speaker02/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
├── dataset02_speaker01/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
└── dataset02_speaker02/
    ├── utterance01.npy
    ├── utterance02.npy
    └── utterance03.npy
```
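As a quick sanity check after preprocessing, you can list a few of the generated files to confirm that the speaker directories carry the dataset-name prefix (a hypothetical spot check; `${preprocess_path}` is the output directory from the command above):

```bash
# list the first few preprocessed mel spectrograms
find ${preprocess_path} -name "*.npy" | head -n 5
```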
In `${BIN_DIR}/preprocess.py`:

- `--datasets_root` is the directory that contains several extracted datasets.
- `--output_dir` is the directory to save the preprocessed dataset.
- `--dataset_names` is the dataset to preprocess. If there are multiple datasets in `--datasets_root` to preprocess, the names can be joined with commas (see the sketch after this list). Currently supported dataset names are `librispeech_other`, `voxceleb1`, `voxceleb2`, `aidatatang_200zh` and `magicdata`.
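For example, a minimal sketch of preprocessing two datasets in one run, assuming only the flags listed above; the paths are placeholders:

```bash
# preprocess Librispeech train-other-500 and VoxCeleb1 together;
# dataset names are joined with a comma
python ${BIN_DIR}/preprocess.py \
    --datasets_root ~/datasets/GE2E \
    --output_dir ~/datasets/GE2E_preprocessed \
    --dataset_names librispeech_other,voxceleb1
```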
## Model Training
`./local/train.sh` calls `${BIN_DIR}/train.py`.

```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${preprocess_path} ${train_output_path}
```
In `${BIN_DIR}/train.py`:

- `--data` is the path to the preprocessed dataset.
- `--output` is the directory to save results, usually a subdirectory of `runs`. It contains visualdl log files, text log files, a config file and a `checkpoints` directory, which contains parameter files and optimizer state files. If `--output` already has some training results in it, the most recent parameter file and optimizer state file are loaded before training.
- `--ngpu` is the number of gpus to use; if ngpu == 0, use cpu.
- `CUDA_VISIBLE_DEVICES` can be used to specify visible devices with cuda.
Other options are described below.
- `--config` is a `.yaml` config file used to override the default config (which is coded in `config.py`).
- `--opts` are command line options used to further override config files. They should be the last command line options passed, given as multiple key-value pairs separated by spaces (see the sketch after this list).
- `--checkpoint_path` specifies the checkpoint to load before training; the extension is not included. A parameter file (`.pdparams`) and an optimizer state file (`.pdopt`) with the same name are used. This option has a higher priority than auto-resuming from the `--output` directory.
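A minimal sketch of combining these options; the checkpoint name and the config keys passed to `--opts` are hypothetical examples, not values confirmed to exist in `config.py`:

```bash
# resume from an explicit checkpoint and override two (hypothetical) config keys
python ${BIN_DIR}/train.py \
    --data ~/datasets/GE2E_preprocessed \
    --output runs/ge2e \
    --config conf/custom.yaml \
    --checkpoint_path runs/ge2e/checkpoints/step-100000 \
    --opts data.batch_size 64 training.max_iteration 200000
```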
## Inferencing
When training is done, run the command below to generate an utterance embedding for each utterance in a dataset.
`./local/inference.sh` calls `${BIN_DIR}/inference.py`.

```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${infer_input} ${infer_output} ${train_output_path} ${ckpt_name}
```
In `${BIN_DIR}/inference.py`:

- `--input` is the path of the dataset used for inference.
- `--output` is the directory to save the processed results. It has the same file structure as the input dataset. Each utterance in the dataset has a corresponding utterance embedding file in `*.npy` format (see the sketch after this list).
- `--checkpoint_path` is the path of the checkpoint to use; the extension is not included.
- `--pattern` is the wildcard pattern used to filter audio files for inference; it defaults to `*.wav`.
- `--ngpu` is the number of gpus to use; if ngpu == 0, use cpu.
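Once inference has run, you can spot-check a generated embedding from the shell. A quick sketch, assuming numpy is installed; the speaker and utterance names are placeholders for one of the generated `*.npy` files:

```bash
# print the shape of one utterance embedding
python -c "import numpy as np; e = np.load('${infer_output}/speaker01/utterance01.npy'); print(e.shape)"
```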
## Pretrained Model
The pretrained model is first trained for 1560k steps on Librispeech-other-500 and VoxCeleb1, then trained on aidatatang_200zh and magicdata up to 3000k steps.
Download URL: ge2e_ckpt_0.3.zip.