This experiment trains a speaker encoder on a speaker verification task. It is part of an experiment on transfer learning from speaker verification to multispeaker text-to-speech synthesis, which can be found at [examples/aishell3/vc0](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3/vc0). The trained speaker encoder is used to extract utterance embeddings from utterances.
The model used in this experiment is the speaker encoder for the text-independent speaker verification task from [Generalized End-to-End Loss for Speaker Verification](https://arxiv.org/pdf/1710.10467.pdf). The GE2E softmax loss is used.
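For reference, the GE2E softmax loss can be sketched as follows. This is a minimal NumPy-only illustration of the loss from the paper, not the implementation used in this repo; the function name, default scaling parameters `w` and `b`, and tensor shapes are assumptions for the example.

```python
import numpy as np

def ge2e_softmax_loss(embeds, w=10.0, b=-5.0):
    """GE2E softmax loss sketch.

    embeds: array of shape (n_speakers, n_utterances, dim),
    one embedding per utterance; w and b are the learnable
    scale and bias from the paper (fixed here for illustration).
    """
    n_spk, n_utt, _ = embeds.shape
    # L2-normalize the embeddings.
    embeds = embeds / np.linalg.norm(embeds, axis=-1, keepdims=True)
    # Inclusive centroid per speaker.
    centroids = embeds.mean(axis=1)
    centroids = centroids / np.linalg.norm(centroids, axis=-1, keepdims=True)
    loss = 0.0
    for j in range(n_spk):
        for i in range(n_utt):
            e = embeds[j, i]
            # Exclusive centroid: leave utterance i out of speaker j's mean.
            c_excl = (embeds[j].sum(axis=0) - e) / (n_utt - 1)
            c_excl = c_excl / np.linalg.norm(c_excl)
            # Scaled cosine similarities to every speaker's centroid.
            sims = w * (centroids @ e) + b
            sims[j] = w * (c_excl @ e) + b
            # Softmax loss: -S(j) + log sum_k exp(S(k)).
            loss += -sims[j] + np.log(np.exp(sims).sum())
    return loss / (n_spk * n_utt)
```

The loss pushes each utterance embedding toward its own speaker's centroid and away from all other centroids; it is non-negative and approaches zero as speakers become well separated.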
Currently supported datasets are Librispeech-other-500, VoxCeleb1, VoxCeleb2, aidatatang_200zh, and magicdata, which can be downloaded from their corresponding webpages.
An English multispeaker dataset, [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html). The audio files from Dev A to Dev D should be downloaded, combined, and extracted.
An English multispeaker dataset, [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html). The audio files from Dev A to Dev H should be downloaded, combined, and extracted.
You can choose a range of stages to run, or set `stage` equal to `stop-stage` to run only one stage; for example, the following command only preprocesses the dataset.
Multispeaker datasets are used as training data, though the transcriptions are not used. To enlarge the amount of training data, several multispeaker datasets are combined. The preprocessed datasets are organized in the file structure described below. The mel spectrogram of each utterance is saved in `.npy` format. The dataset is organized in two levels (speaker, then utterance). Since multiple datasets are combined, the dataset name is prepended to each speaker id to avoid conflicts.
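The two-level layout above can be iterated over with a short sketch like the one below. The directory and file names shown in the comment, and the separator between dataset name and speaker id, are illustrative assumptions, not the exact naming used by the preprocessing script.

```python
from pathlib import Path

# Hypothetical preprocessed layout (names are illustrative):
#
#   <processed_root>/
#       librispeech_other-14/      # "<dataset_name>-<speaker_id>"
#           utterance_0001.npy     # mel spectrogram of one utterance
#           utterance_0002.npy
#       voxceleb1-id10001/
#           ...
def speaker_dirs(processed_root):
    """Yield (prefixed speaker id, list of mel files) per speaker."""
    for spk_dir in sorted(Path(processed_root).iterdir()):
        if spk_dir.is_dir():
            yield spk_dir.name, sorted(spk_dir.glob("*.npy"))
```

Because the dataset name is part of the speaker id, two datasets that both contain a speaker `14` remain distinct (e.g. `librispeech_other-14` vs. `magicdata-14`).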
3. `--dataset_names` is the dataset to preprocess. If there are multiple datasets in `--datasets_root` to preprocess, the names can be joined with commas. Currently supported dataset names are librispeech_other, voxceleb1, voxceleb2, aidatatang_200zh, and magicdata.
2. `--output` is the directory to save results, usually a subdirectory of `runs`. It contains VisualDL log files, text log files, config files, and a `checkpoints` directory, which holds parameter files and optimizer state files. If `--output` already contains some training results, the most recent parameter file and optimizer state file are loaded before training.
- `--opts` is a command-line option to further override config values. It should be the last command-line option, passed as multiple key-value pairs separated by spaces.
- `--checkpoint_path` specifies the checkpoint to load before training; the extension is not included. A parameter file (`.pdparams`) and an optimizer state file (`.pdopt`) with the same name are used. This option has a higher priority than auto-resuming from the `--output` directory.
2. `--output` is the directory to save the processed results. It has the same file structure as the input dataset. Each utterance in the dataset has a corresponding utterance embedding file in `*.npy` format.
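Once extracted, the embeddings can be compared directly; a common check is cosine similarity between two utterance embeddings. The snippet below is a minimal sketch, and the commented-out file paths are hypothetical examples rather than paths produced by this recipe.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each inference output is one embedding vector per utterance, e.g.:
# emb_a = np.load("output/speaker1/utt1.npy")
# emb_b = np.load("output/speaker1/utt2.npy")
# Same-speaker pairs should score higher than different-speaker pairs:
# cosine_similarity(emb_a, emb_b)
```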
The pretrained model was first trained for 1560k steps on Librispeech-other-500 and VoxCeleb1, then trained on aidatatang_200zh and magicdata up to 3000k steps.