commit
bc365cbb52
@@ -1,3 +0,0 @@

# [Aishell1](http://openslr.elda.org/33/)

This Open Source Mandarin Speech Corpus, AISHELL-ASR0009-OS1, is 178 hours long. It is a part of AISHELL-ASR0009, whose utterances cover 11 domains, including smart home, autonomous driving, and industrial production. All recordings were made in a quiet indoor environment using 3 different devices simultaneously: a high-fidelity microphone (44.1 kHz, 16-bit), an Android mobile phone (16 kHz, 16-bit), and an iOS mobile phone (16 kHz, 16-bit). The high-fidelity audio was re-sampled to 16 kHz to build AISHELL-ASR0009-OS1. 400 speakers from different accent areas in China were invited to participate in the recording. The manual transcription accuracy is above 95%, achieved through professional speech annotation and strict quality inspection. The corpus is divided into training, development, and testing sets. (This database is free for academic research; commercial use requires permission.)
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1,108 @@

# JETS with CSMSC

This example contains code used to train a [JETS](https://arxiv.org/abs/2203.16852v1) model with the [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).

## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/source).

### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes and durations for JETS.
You can download [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model following the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.

## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from a text file.
```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to use only one stage; for example, the following command will only preprocess the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```
### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.
```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── feats_stats.npy
    ├── norm
    └── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The raw folder contains the wave, mel spectrogram, speech, pitch, and energy features of each utterance, while the norm folder contains the normalized ones. The statistics used to normalize features are computed from the training set, which is located in `dump/train/feats_stats.npy`.

Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, the path of feats, feats_lengths, the path of pitch features, the path of energy features, the path of raw waves, speaker, and the id of each utterance.
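Since `metadata.jsonl` stores one JSON record per line, it can be read with nothing but the standard library. The sketch below is illustrative, not the project's actual loader, and the field names used in the test are assumptions based on the description above:

```python
import json


def load_metadata(path):
    """Read a jsonl metadata file: one JSON record per non-empty line."""
    records = []
    with open(path, "r", encoding="utf8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                records.append(json.loads(line))
    return records
```

Each returned record is a plain `dict`, so fields such as the utterance id or feature paths can be accessed by key.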

### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--phones-dict PHONES_DICT]

Train a JETS model.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file to overwrite default config.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata files in the normalized subfolders of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of gpus to use; if ngpu == 0, use cpu.
5. `--phones-dict` is the path of the phone vocabulary file.

### Synthesizing
`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveform from `metadata.jsonl`.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveform from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```

## Pretrained Model
The pretrained model can be downloaded here:
- [jets_csmsc_ckpt_1.5.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/jets_csmsc_ckpt_1.5.0.zip)

The static model can be downloaded here:
- [jets_csmsc_static_1.5.0.zip](https://paddlespeech.bj.bcebos.com/Parakeet/jets_csmsc_static_1.5.0.zip)
@@ -0,0 +1,224 @@

# This configuration was tested on 4 GPUs (V100) with 32GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iters model should already generate reasonable results.
###########################################################
#                FEATURE EXTRACTION SETTING               #
###########################################################
n_mels: 80
fs: 22050        # Sampling rate.
n_fft: 1024      # FFT size (samples).
n_shift: 256     # Hop size (samples). ~11.6ms
win_length: null # Window length (samples).
                 # If set to null, it will be the same as fft_size.
window: "hann"   # Window function.
fmin: 0          # Minimum frequency for Mel basis.
fmax: null       # Maximum frequency for Mel basis.
f0min: 80        # Minimum f0 for pitch extraction.
f0max: 400       # Maximum f0 for pitch extraction.
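As a sanity check on the timing comments, the frame shift and effective window length implied by these values can be computed directly (note that 256 samples at 22.05 kHz is about 11.6 ms, not the 12.5 ms that a 24 kHz / 300-sample setup would give):

```python
fs = 22050    # sampling rate from the config above
n_fft = 1024  # FFT size; also the window size since win_length is null
n_shift = 256 # hop size

hop_ms = n_shift / fs * 1000  # frame shift in milliseconds (~11.6)
win_ms = n_fft / fs * 1000    # effective window length in milliseconds (~46.4)
```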


##########################################################
#                    TTS MODEL SETTING                   #
##########################################################
model:
    # generator related
    generator_type: jets_generator
    generator_params:
        adim: 256                                    # attention dimension
        aheads: 2                                    # number of attention heads
        elayers: 4                                   # number of encoder layers
        eunits: 1024                                 # number of encoder ff units
        dlayers: 4                                   # number of decoder layers
        dunits: 1024                                 # number of decoder ff units
        positionwise_layer_type: conv1d              # type of position-wise layer
        positionwise_conv_kernel_size: 3             # kernel size of position wise conv layer
        duration_predictor_layers: 2                 # number of layers of duration predictor
        duration_predictor_chans: 256                # number of channels of duration predictor
        duration_predictor_kernel_size: 3            # filter size of duration predictor
        use_masking: True                            # whether to apply masking for padded part in loss calculation
        encoder_normalize_before: True               # whether to perform layer normalization before the input
        decoder_normalize_before: True               # whether to perform layer normalization before the input
        encoder_type: transformer                    # encoder type
        decoder_type: transformer                    # decoder type
        conformer_rel_pos_type: latest               # relative positional encoding type
        conformer_pos_enc_layer_type: rel_pos        # conformer positional encoding type
        conformer_self_attn_layer_type: rel_selfattn # conformer self-attention type
        conformer_activation_type: swish             # conformer activation type
        use_macaron_style_in_conformer: true         # whether to use macaron style in conformer
        use_cnn_in_conformer: true                   # whether to use CNN in conformer
        conformer_enc_kernel_size: 7                 # kernel size in CNN module of conformer-based encoder
        conformer_dec_kernel_size: 31                # kernel size in CNN module of conformer-based decoder
        init_type: xavier_uniform                    # initialization type
        init_enc_alpha: 1.0                          # initial value of alpha for encoder
        init_dec_alpha: 1.0                          # initial value of alpha for decoder
        transformer_enc_dropout_rate: 0.2            # dropout rate for transformer encoder layer
        transformer_enc_positional_dropout_rate: 0.2 # dropout rate for transformer encoder positional encoding
        transformer_enc_attn_dropout_rate: 0.2       # dropout rate for transformer encoder attention layer
        transformer_dec_dropout_rate: 0.2            # dropout rate for transformer decoder layer
        transformer_dec_positional_dropout_rate: 0.2 # dropout rate for transformer decoder positional encoding
        transformer_dec_attn_dropout_rate: 0.2       # dropout rate for transformer decoder attention layer
        pitch_predictor_layers: 5                    # number of conv layers in pitch predictor
        pitch_predictor_chans: 256                   # number of channels of conv layers in pitch predictor
        pitch_predictor_kernel_size: 5               # kernel size of conv layers in pitch predictor
        pitch_predictor_dropout: 0.5                 # dropout rate in pitch predictor
        pitch_embed_kernel_size: 1                   # kernel size of conv embedding layer for pitch
        pitch_embed_dropout: 0.0                     # dropout rate after conv embedding layer for pitch
        stop_gradient_from_pitch_predictor: true     # whether to stop the gradient from pitch predictor to encoder
        energy_predictor_layers: 2                   # number of conv layers in energy predictor
        energy_predictor_chans: 256                  # number of channels of conv layers in energy predictor
        energy_predictor_kernel_size: 3              # kernel size of conv layers in energy predictor
        energy_predictor_dropout: 0.5                # dropout rate in energy predictor
        energy_embed_kernel_size: 1                  # kernel size of conv embedding layer for energy
        energy_embed_dropout: 0.0                    # dropout rate after conv embedding layer for energy
        stop_gradient_from_energy_predictor: false   # whether to stop the gradient from energy predictor to encoder
        generator_out_channels: 1
        generator_channels: 512
        generator_global_channels: -1
        generator_kernel_size: 7
        generator_upsample_scales: [8, 8, 2, 2]
        generator_upsample_kernel_sizes: [16, 16, 4, 4]
        generator_resblock_kernel_sizes: [3, 7, 11]
        generator_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
        generator_use_additional_convs: true
        generator_bias: true
        generator_nonlinear_activation: "leakyrelu"
        generator_nonlinear_activation_params:
            negative_slope: 0.1
        generator_use_weight_norm: true
        segment_size: 64                             # segment size for random windowed discriminator
    # discriminator related
    discriminator_type: hifigan_multi_scale_multi_period_discriminator
    discriminator_params:
        scales: 1
        scale_downsample_pooling: "AvgPool1D"
        scale_downsample_pooling_params:
            kernel_size: 4
            stride: 2
            padding: 2
        scale_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [15, 41, 5, 3]
            channels: 128
            max_downsample_channels: 1024
            max_groups: 16
            bias: True
            downsample_scales: [2, 2, 4, 4, 1]
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
        follow_official_norm: False
        periods: [2, 3, 5, 7, 11]
        period_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [5, 3]
            channels: 32
            downsample_scales: [3, 3, 3, 3, 1]
            max_downsample_channels: 1024
            bias: True
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
    # others
    sampling_rate: 22050          # needed in the inference for saving wav
    cache_generator_outputs: True # whether to cache generator outputs in the training
    use_alignment_module: False   # whether to use alignment module

###########################################################
#                      LOSS SETTING                       #
###########################################################
# loss function related
generator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse                   # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse                   # loss type, "mse" or "hinge"
feat_match_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    average_by_layers: False         # whether to average loss value by #layers of each discriminator
    include_final_outputs: True      # whether to include final outputs for loss calculation
mel_loss_params:
    fs: 22050        # must be the same as the training data
    fft_size: 1024   # fft points
    hop_size: 256    # hop size
    win_length: null # window length
    window: hann     # window type
    num_mels: 80     # number of Mel basis
    fmin: 0          # minimum frequency for Mel basis
    fmax: null       # maximum frequency for Mel basis
    log_base: null   # null represents natural log

###########################################################
#                 ADVERSARIAL LOSS SETTING                #
###########################################################
lambda_adv: 1.0        # loss scaling coefficient for adversarial loss
lambda_mel: 45.0       # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_var: 1.0        # loss scaling coefficient for duration loss
lambda_align: 2.0      # loss scaling coefficient for KL divergence loss
# others
sampling_rate: 22050          # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training

# extra module for additional inputs
pitch_extract: dio          # pitch extractor type
pitch_extract_conf:
    reduction_factor: 1
    use_token_averaged_f0: false
pitch_normalize: global_mvn # normalizer for the pitch feature
energy_extract: energy      # energy extractor type
energy_extract_conf:
    reduction_factor: 1
    use_token_averaged_energy: false
energy_normalize: global_mvn # normalizer for the energy feature

###########################################################
#                   DATA LOADER SETTING                   #
###########################################################
batch_size: 32  # Batch size.
num_workers: 4  # Number of workers in DataLoader.

##########################################################
#            OPTIMIZER & SCHEDULER SETTING               #
##########################################################
# optimizer setting for generator
generator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875

# optimizer setting for discriminator
discriminator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875
generator_first: True # whether to start updating generator first
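To see what the `exponential_decay` schedule with these values implies, the learning rate after `n` updates is simply `learning_rate * gamma**n`. The helper below is a sketch of that rule, not the framework's actual scheduler:

```python
def exponential_decay(base_lr, gamma, step):
    """Learning rate after `step` optimizer updates: base_lr * gamma**step."""
    return base_lr * gamma ** step


# With learning_rate=2.0e-4 and gamma=0.999875 (the values above),
# the learning rate halves roughly every 5500 steps.
lr_at_start = exponential_decay(2.0e-4, 0.999875, 0)
```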
##########################################################
|
||||
# OTHER TRAINING SETTING #
|
||||
##########################################################
|
||||
num_snapshots: 10 # max number of snapshots to keep while training
|
||||
train_max_steps: 350000 # Number of training steps. == total_iters / ngpus, total_iters = 1000000
|
||||
save_interval_steps: 1000 # Interval steps to save checkpoint.
|
||||
eval_interval_steps: 250 # Interval steps to evaluate the network.
|
||||
seed: 777 # random seed number
|
@@ -0,0 +1,15 @@

#!/bin/bash

train_output_path=$1

stage=0
stop_stage=0

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    python3 ${BIN_DIR}/inference.py \
        --inference_dir=${train_output_path}/inference \
        --am=jets_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/pd_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi
@@ -0,0 +1,77 @@

#!/bin/bash
set -e
stage=0
stop_stage=100

config_path=$1

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # get durations from MFA's result
    echo "Generate durations.txt from MFA results ..."
    python3 ${MAIN_ROOT}/utils/gen_duration_from_textgrid.py \
        --inputdir=./baker_alignment_tone \
        --output=durations.txt \
        --config=${config_path}
fi

if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # extract features
    echo "Extract features ..."
    python3 ${BIN_DIR}/preprocess.py \
        --dataset=baker \
        --rootdir=~/datasets/BZNSYP/ \
        --dumpdir=dump \
        --dur-file=durations.txt \
        --config=${config_path} \
        --num-cpu=20 \
        --cut-sil=True \
        --token_average=True
fi

if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    # get features' stats (mean and std)
    echo "Get features' stats ..."
    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="feats"

    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="pitch"

    python3 ${MAIN_ROOT}/utils/compute_statistics.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --field-name="energy"
fi

if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # normalize and convert phone/speaker to id; dev and test should use train's stats
    echo "Normalize ..."
    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/train/raw/metadata.jsonl \
        --dumpdir=dump/train/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt

    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/dev/raw/metadata.jsonl \
        --dumpdir=dump/dev/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt

    python3 ${BIN_DIR}/normalize.py \
        --metadata=dump/test/raw/metadata.jsonl \
        --dumpdir=dump/test/norm \
        --feats-stats=dump/train/feats_stats.npy \
        --pitch-stats=dump/train/pitch_stats.npy \
        --energy-stats=dump/train/energy_stats.npy \
        --phones-dict=dump/phone_id_map.txt \
        --speaker-dict=dump/speaker_id_map.txt
fi
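Conceptually, the normalize stage applies global mean-variance normalization (the `global_mvn` named in the config) using statistics computed on the train split only. The helper below is a minimal illustrative sketch of that idea, not the actual `normalize.py`:

```python
from statistics import mean, pstdev


def global_mvn(values, mu=None, sigma=None):
    """Global mean-variance normalization: z = (x - mu) / sigma.

    If mu/sigma are not given they are computed from `values`; in the
    pipeline they come from the train split's *_stats.npy files, and the
    same stats are reused for dev and test.
    """
    if mu is None:
        mu = mean(values)
    if sigma is None:
        sigma = pstdev(values)
    return [(v - mu) / sigma for v in values]
```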
@@ -0,0 +1,18 @@

#!/bin/bash

config_path=$1
train_output_path=$2
ckpt_name=$3
stage=0
stop_stage=0

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    FLAGS_allocator_strategy=naive_best_fit \
    FLAGS_fraction_of_gpu_memory_to_use=0.01 \
    python3 ${BIN_DIR}/synthesize.py \
        --config=${config_path} \
        --ckpt=${train_output_path}/checkpoints/${ckpt_name} \
        --phones_dict=dump/phone_id_map.txt \
        --test_metadata=dump/test/norm/metadata.jsonl \
        --output_dir=${train_output_path}/test
fi
@@ -0,0 +1,22 @@

#!/bin/bash

config_path=$1
train_output_path=$2
ckpt_name=$3

stage=0
stop_stage=0

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    FLAGS_allocator_strategy=naive_best_fit \
    FLAGS_fraction_of_gpu_memory_to_use=0.01 \
    python3 ${BIN_DIR}/synthesize_e2e.py \
        --am=jets_csmsc \
        --config=${config_path} \
        --ckpt=${train_output_path}/checkpoints/${ckpt_name} \
        --phones_dict=dump/phone_id_map.txt \
        --output_dir=${train_output_path}/test_e2e \
        --text=${BIN_DIR}/../sentences.txt \
        --inference_dir=${train_output_path}/inference
fi
@@ -0,0 +1,12 @@

#!/bin/bash

config_path=$1
train_output_path=$2

python3 ${BIN_DIR}/train.py \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --config=${config_path} \
    --output-dir=${train_output_path} \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
@@ -0,0 +1,13 @@

#!/bin/bash
export MAIN_ROOT=`realpath ${PWD}/../../../`

export PATH=${MAIN_ROOT}:${MAIN_ROOT}/utils:${PATH}
export LC_ALL=C

export PYTHONDONTWRITEBYTECODE=1
# Use UTF-8 in Python to avoid UnicodeDecodeError when LC_ALL=C
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=${MAIN_ROOT}:${PYTHONPATH}

MODEL=jets
export BIN_DIR=${MAIN_ROOT}/paddlespeech/t2s/exps/${MODEL}
@@ -0,0 +1,41 @@

#!/bin/bash

set -e
source path.sh

gpus=0
stage=0
stop_stage=100

conf_path=conf/default.yaml
train_output_path=exp/default
ckpt_name=snapshot_iter_150000.pdz

# with the following command, you can choose the stage range you want to run
# such as `./run.sh --stage 0 --stop-stage 0`
# this cannot be mixed with positional arguments `$1`, `$2` ...
source ${MAIN_ROOT}/utils/parse_options.sh || exit 1

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # prepare data
    ./local/preprocess.sh ${conf_path} || exit 1
fi

if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # train model; all `ckpt` files are saved under the `train_output_path/checkpoints/` dir
    CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit 1
fi

if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    # synthesize from metadata.jsonl
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit 1
fi

if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # synthesize_e2e, synthesize from a text file
    CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit 1
fi

if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
    CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${train_output_path} || exit 1
fi

@@ -0,0 +1,174 @@

# This is the configuration file for the CSMSC dataset.
# This configuration is based on HiFiGAN V1, which is an official configuration.
# But I found that the optimizer setting does not work well with my implementation.
# So I changed the optimizer settings as follows:
#   - AdamW -> Adam
#   - betas: [0.8, 0.99] -> betas: [0.5, 0.9]
#   - Scheduler: ExponentialLR -> MultiStepLR
# To match the shift size difference, the upsample scales are also modified from the original 256-shift setting.

###########################################################
#                FEATURE EXTRACTION SETTING               #
###########################################################
fs: 24000        # Sampling rate.
n_fft: 2048      # FFT size (samples).
n_shift: 300     # Hop size (samples). 12.5ms
win_length: 1200 # Window length (samples). 50ms
                 # If set to null, it will be the same as fft_size.
window: "hann"   # Window function.
n_mels: 80       # Number of mel basis.
fmin: 80         # Minimum freq in mel basis calculation. (Hz)
fmax: 7600       # Maximum frequency in mel basis calculation. (Hz)
###########################################################
#         GENERATOR NETWORK ARCHITECTURE SETTING          #
###########################################################
generator_params:
    use_istft: True                       # Use iSTFTNet.
    istft_layer_id: 2                     # Use istft after istft_layer_id layers of upsample layer if use_istft=True.
    n_fft: 2048                           # FFT size (samples) in feature extraction.
    win_length: 1200                      # Window length (samples) in feature extraction.
    in_channels: 80                       # Number of input channels.
    out_channels: 1                       # Number of output channels.
    channels: 512                         # Number of initial channels.
    kernel_size: 7                        # Kernel size of initial and final conv layers.
    upsample_scales: [5, 5, 4, 3]         # Upsampling scales.
    upsample_kernel_sizes: [10, 10, 8, 6] # Kernel size for upsampling layers.
    resblock_kernel_sizes: [3, 7, 11]     # Kernel size for residual blocks.
    resblock_dilations:                   # Dilations for residual blocks.
        - [1, 3, 5]
        - [1, 3, 5]
        - [1, 3, 5]
    use_additional_convs: True            # Whether to use additional conv layers in residual blocks.
    bias: True                            # Whether to use bias parameter in conv.
    nonlinear_activation: "leakyrelu"     # Nonlinear activation type.
    nonlinear_activation_params:          # Nonlinear activation parameters.
        negative_slope: 0.1
    use_weight_norm: True                 # Whether to apply weight normalization.

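For HiFiGAN-style generators (setting aside the iSTFT branch above), one mel frame must be upsampled to exactly `n_shift` waveform samples, so the product of `upsample_scales` equals the hop size. A quick check with the values above:

```python
import math

upsample_scales = [5, 5, 4, 3]  # from the config above
n_shift = 300                   # hop size from the feature extraction settings

# 5 * 5 * 4 * 3 == 300: each frame expands to exactly one hop of audio.
assert math.prod(upsample_scales) == n_shift
```

This is why the header comment says the upsample scales were modified from the original 256-shift HiFiGAN setting: CSMSC here uses a 300-sample hop.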
###########################################################
#       DISCRIMINATOR NETWORK ARCHITECTURE SETTING        #
###########################################################
discriminator_params:
    scales: 3                              # Number of multi-scale discriminators.
    scale_downsample_pooling: "AvgPool1D"  # Pooling operation for scale discriminator.
    scale_downsample_pooling_params:
        kernel_size: 4                     # Pooling kernel size.
        stride: 2                          # Pooling stride.
        padding: 2                         # Padding size.
    scale_discriminator_params:
        in_channels: 1                     # Number of input channels.
        out_channels: 1                    # Number of output channels.
        kernel_sizes: [15, 41, 5, 3]       # List of kernel sizes.
        channels: 128                      # Initial number of channels.
        max_downsample_channels: 1024      # Maximum number of channels in downsampling conv layers.
        max_groups: 16                     # Maximum number of groups in downsampling conv layers.
        bias: True
        downsample_scales: [4, 4, 4, 4, 1] # Downsampling scales.
        nonlinear_activation: "leakyrelu"  # Nonlinear activation.
        nonlinear_activation_params:
            negative_slope: 0.1
    follow_official_norm: True             # Whether to follow the official norm setting.
    periods: [2, 3, 5, 7, 11]              # List of periods for the multi-period discriminator.
    period_discriminator_params:
        in_channels: 1                     # Number of input channels.
        out_channels: 1                    # Number of output channels.
        kernel_sizes: [5, 3]               # List of kernel sizes.
        channels: 32                       # Initial number of channels.
        downsample_scales: [3, 3, 3, 3, 1] # Downsampling scales.
        max_downsample_channels: 1024      # Maximum number of channels in downsampling conv layers.
        bias: True                         # Whether to use bias parameter in conv layer.
        nonlinear_activation: "leakyrelu"  # Nonlinear activation.
        nonlinear_activation_params:       # Nonlinear activation parameters.
            negative_slope: 0.1
        use_weight_norm: True              # Whether to apply weight normalization.
        use_spectral_norm: False           # Whether to apply spectral normalization.

###########################################################
#                   STFT LOSS SETTING                     #
###########################################################
use_stft_loss: False # Whether to use multi-resolution STFT loss.
use_mel_loss: True   # Whether to use Mel-spectrogram loss.
mel_loss_params:
    fs: 24000
    fft_size: 2048
    hop_size: 300
    win_length: 1200
    window: "hann"
    num_mels: 80
    fmin: 0
    fmax: 12000
    log_base: null
generator_adv_loss_params:
    average_by_discriminators: False # Whether to average loss by #discriminators.
discriminator_adv_loss_params:
    average_by_discriminators: False # Whether to average loss by #discriminators.
use_feat_match_loss: True
feat_match_loss_params:
    average_by_discriminators: False # Whether to average loss by #discriminators.
    average_by_layers: False         # Whether to average loss by #layers in each discriminator.
    include_final_outputs: False     # Whether to include final outputs in feat match loss calculation.

###########################################################
#               ADVERSARIAL LOSS SETTING                  #
###########################################################
lambda_aux: 45.0       # Loss balancing coefficient for STFT loss.
lambda_adv: 1.0        # Loss balancing coefficient for adversarial loss.
lambda_feat_match: 2.0 # Loss balancing coefficient for feat match loss.

###########################################################
#                  DATA LOADER SETTING                    #
###########################################################
batch_size: 16        # Batch size.
batch_max_steps: 8400 # Length of each audio in batch. Make sure it is divisible by hop_size.
num_workers: 2        # Number of workers in DataLoader.

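The divisibility requirement on `batch_max_steps` exists so that each training clip covers a whole number of mel frames. With the values above:

```python
batch_max_steps = 8400  # samples per training clip, from the config above
n_shift = 300           # hop size from the feature extraction settings

# 8400 / 300 = 28, so each clip corresponds to exactly 28 frames.
assert batch_max_steps % n_shift == 0
frames_per_clip = batch_max_steps // n_shift
```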
###########################################################
#             OPTIMIZER & SCHEDULER SETTING               #
###########################################################
generator_optimizer_params:
    beta1: 0.5
    beta2: 0.9
    weight_decay: 0.0         # Generator's weight decay coefficient.
generator_scheduler_params:
    learning_rate: 2.0e-4     # Generator's learning rate.
    gamma: 0.5                # Generator's scheduler gamma.
    milestones:               # At each milestone, lr will be multiplied by gamma.
        - 200000
        - 400000
        - 600000
        - 800000
generator_grad_norm: -1       # Generator's gradient norm.
discriminator_optimizer_params:
    beta1: 0.5
    beta2: 0.9
    weight_decay: 0.0         # Discriminator's weight decay coefficient.
discriminator_scheduler_params:
    learning_rate: 2.0e-4     # Discriminator's learning rate.
    gamma: 0.5                # Discriminator's scheduler gamma.
    milestones:               # At each milestone, lr will be multiplied by gamma.
        - 200000
        - 400000
        - 600000
        - 800000
discriminator_grad_norm: -1   # Discriminator's gradient norm.

###########################################################
|
||||
# INTERVAL SETTING #
|
||||
###########################################################
|
||||
generator_train_start_steps: 1 # Number of steps to start to train discriminator.
|
||||
discriminator_train_start_steps: 0 # Number of steps to start to train discriminator.
|
||||
train_max_steps: 2500000 # Number of training steps.
|
||||
save_interval_steps: 5000 # Interval steps to save checkpoint.
|
||||
eval_interval_steps: 1000 # Interval steps to evaluate the network.
|
||||
|
||||
###########################################################
|
||||
# OTHER SETTING #
|
||||
###########################################################
|
||||
num_snapshots: 10 # max number of snapshots to keep while training
|
||||
seed: 42 # random seed for paddle, random, and np.random
|
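The milestone schedule above halves the learning rate each time training passes one of the listed steps. A minimal sketch of that rule in plain Python (the helper name `milestone_lr` is hypothetical, not part of the config or PaddleSpeech API):

```python
def milestone_lr(step, base_lr=2.0e-4, gamma=0.5,
                 milestones=(200000, 400000, 600000, 800000)):
    """Learning rate after `step` steps: base_lr multiplied by
    gamma once for every milestone already passed."""
    passed = sum(1 for m in milestones if step >= m)
    return base_lr * (gamma ** passed)
```

With the values from this config, the rate is 2.0e-4 until step 200000, 1.0e-4 until 400000, and so on down to 1.25e-5 after the last milestone.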
@ -1,22 +1,135 @@
###########################################################
#                FEATURE EXTRACTION SETTING               #
###########################################################
# The source code loads audio at 24k but extracts mel at 16k;
# both loading and mel extraction should later be unified to 24k.
fs: 16000
n_fft: 2048
n_shift: 300
win_length: 1200 # Window length (in samples), 50 ms.
# If set to null, it will be the same as fft_size.
window: "hann" # Window function.

fmin: 0 # Minimum frequency of Mel basis.
fmax: 8000 # Maximum frequency of Mel basis. sr // 2
n_mels: 80
# only for StarGANv2 VC
norm: # None here
htk: True
power: 2.0


###########################################################
#                      MODEL SETTING                      #
###########################################################
generator_params:
    dim_in: 64
    style_dim: 64
    max_conv_dim: 512
    w_hpf: 0
    F0_channel: 256
mapping_network_params:
    num_domains: 20 # num of speakers in StarGANv2
    latent_dim: 16
    style_dim: 64 # same as style_dim in generator_params
    hidden_dim: 512 # same as max_conv_dim in generator_params
style_encoder_params:
    dim_in: 64 # same as dim_in in generator_params
    style_dim: 64 # same as style_dim in generator_params
    num_domains: 20 # same as num_domains in mapping_network_params
    max_conv_dim: 512 # same as max_conv_dim in generator_params
discriminator_params:
    dim_in: 64 # same as dim_in in generator_params
    num_domains: 20 # same as num_domains in mapping_network_params
    max_conv_dim: 512 # same as max_conv_dim in generator_params
    repeat_num: 4
asr_params:
    input_dim: 80
    hidden_dim: 256
    n_token: 80
    token_embedding_dim: 256

###########################################################
#                 ADVERSARIAL LOSS SETTING                #
###########################################################
loss_params:
    g_loss:
        lambda_sty: 1.
        lambda_cyc: 5.
        lambda_ds: 1.
        lambda_norm: 1.
        lambda_asr: 10.
        lambda_f0: 5.
        lambda_f0_sty: 0.1
        lambda_adv: 2.
        lambda_adv_cls: 0.5
        norm_bias: 0.5
    d_loss:
        lambda_reg: 1.
        lambda_adv_cls: 0.1
        lambda_con_reg: 10.

    adv_cls_epoch: 50
    con_reg_epoch: 30


###########################################################
#                   DATA LOADER SETTING                   #
###########################################################
batch_size: 5 # Batch size.
num_workers: 2 # Number of workers in DataLoader.
max_mel_length: 192

###########################################################
#             OPTIMIZER & SCHEDULER SETTING               #
###########################################################
generator_optimizer_params:
    beta1: 0.0
    beta2: 0.99
    weight_decay: 1.0e-4
    epsilon: 1.0e-9
generator_scheduler_params:
    max_learning_rate: 2.0e-4
    phase_pct: 0.0
    divide_factor: 1
    total_steps: 200000 # train_max_steps
    end_learning_rate: 2.0e-4
style_encoder_optimizer_params:
    beta1: 0.0
    beta2: 0.99
    weight_decay: 1.0e-4
    epsilon: 1.0e-9
style_encoder_scheduler_params:
    max_learning_rate: 2.0e-4
    phase_pct: 0.0
    divide_factor: 1
    total_steps: 200000 # train_max_steps
    end_learning_rate: 2.0e-4
mapping_network_optimizer_params:
    beta1: 0.0
    beta2: 0.99
    weight_decay: 1.0e-4
    epsilon: 1.0e-9
mapping_network_scheduler_params:
    max_learning_rate: 2.0e-6
    phase_pct: 0.0
    divide_factor: 1
    total_steps: 200000 # train_max_steps
    end_learning_rate: 2.0e-6
discriminator_optimizer_params:
    beta1: 0.0
    beta2: 0.99
    weight_decay: 1.0e-4
    epsilon: 1.0e-9
discriminator_scheduler_params:
    max_learning_rate: 2.0e-4
    phase_pct: 0.0
    divide_factor: 1
    total_steps: 200000 # train_max_steps
    end_learning_rate: 2.0e-4

###########################################################
#                     TRAINING SETTING                    #
###########################################################
max_epoch: 150
num_snapshots: 5
seed: 1
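The `g_loss` section above is a set of balancing coefficients: the generator objective is a weighted sum of its individual loss terms. A minimal sketch of that weighting (names and values illustrative, taken from the config; `generator_loss` is a hypothetical helper, not the PaddleSpeech implementation):

```python
def generator_loss(terms, weights):
    """Weighted sum of generator loss terms, e.g. adversarial,
    style reconstruction, and cycle consistency in StarGANv2-VC."""
    return sum(weights[name] * value for name, value in terms.items())


# Illustrative raw loss values combined with the lambda_* coefficients.
total = generator_loss(
    terms={'adv': 0.7, 'sty': 0.2, 'cyc': 0.1},
    weights={'adv': 2.0, 'sty': 1.0, 'cyc': 5.0})
```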
@ -0,0 +1,14 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .aidatatang_200zh import main as aidatatang_200zh_main
@ -0,0 +1,158 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Prepare aidatatang_200zh mandarin dataset

Download, unpack and create manifest files.
Manifest file is a json-format file with each line containing the
meta data (i.e. audio filepath, transcript and audio duration)
of each audio file in the data set.
"""
import argparse
import codecs
import json
import os
from pathlib import Path

import soundfile

from paddlespeech.dataset.download import download
from paddlespeech.dataset.download import unpack
from paddlespeech.utils.argparse import print_arguments

DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')

URL_ROOT = 'http://www.openslr.org/resources/62'
# URL_ROOT = 'https://openslr.magicdatatech.com/resources/62'
DATA_URL = URL_ROOT + '/aidatatang_200zh.tgz'
MD5_DATA = '6e0f4f39cd5f667a7ee53c397c8d0949'

parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
    "--target_dir",
    default=DATA_HOME + "/aidatatang_200zh",
    type=str,
    help="Directory to save the dataset. (default: %(default)s)")
parser.add_argument(
    "--manifest_prefix",
    default="manifest",
    type=str,
    help="Filepath prefix for output manifests. (default: %(default)s)")
args = parser.parse_args()


def create_manifest(data_dir, manifest_path_prefix):
    print("Creating manifest %s ..." % manifest_path_prefix)
    json_lines = []
    transcript_path = os.path.join(data_dir, 'transcript',
                                   'aidatatang_200_zh_transcript.txt')
    transcript_dict = {}
    for line in codecs.open(transcript_path, 'r', 'utf-8'):
        line = line.strip()
        if line == '':
            continue
        audio_id, text = line.split(' ', 1)
        # remove whitespace to get character-level text
        text = ''.join(text.split())
        transcript_dict[audio_id] = text

    data_types = ['train', 'dev', 'test']
    for dtype in data_types:
        del json_lines[:]
        total_sec = 0.0
        total_text = 0.0
        total_num = 0

        audio_dir = os.path.join(data_dir, 'corpus/', dtype)
        for subfolder, _, filelist in sorted(os.walk(audio_dir)):
            for fname in filelist:
                if not fname.endswith('.wav'):
                    continue

                audio_path = os.path.abspath(os.path.join(subfolder, fname))
                audio_id = os.path.basename(fname)[:-4]
                utt2spk = Path(audio_path).parent.name

                audio_data, samplerate = soundfile.read(audio_path)
                duration = float(len(audio_data) / samplerate)
                text = transcript_dict[audio_id]
                json_lines.append(
                    json.dumps(
                        {
                            'utt': audio_id,
                            'utt2spk': str(utt2spk),
                            'feat': audio_path,
                            'feat_shape': (duration, ),  # second
                            'text': text,
                        },
                        ensure_ascii=False))

                total_sec += duration
                total_text += len(text)
                total_num += 1

        manifest_path = manifest_path_prefix + '.' + dtype
        with codecs.open(manifest_path, 'w', 'utf-8') as fout:
            for line in json_lines:
                fout.write(line + '\n')

        manifest_dir = os.path.dirname(manifest_path_prefix)
        meta_path = os.path.join(manifest_dir, dtype) + '.meta'
        with open(meta_path, 'w') as f:
            print(f"{dtype}:", file=f)
            print(f"{total_num} utts", file=f)
            print(f"{total_sec / (60*60)} h", file=f)
            print(f"{total_text} text", file=f)
            print(f"{total_text / total_sec} text/sec", file=f)
            print(f"{total_sec / total_num} sec/utt", file=f)


def prepare_dataset(url, md5sum, target_dir, manifest_path, subset):
    """Download, unpack and create manifest file."""
    data_dir = os.path.join(target_dir, subset)
    if not os.path.exists(data_dir):
        filepath = download(url, md5sum, target_dir)
        unpack(filepath, target_dir)
        # unpack all audio tar files
        audio_dir = os.path.join(data_dir, 'corpus')
        for subfolder, dirlist, filelist in sorted(os.walk(audio_dir)):
            for sub in dirlist:
                print(f"unpack dir {sub}...")
                for folder, _, filelist in sorted(
                        os.walk(os.path.join(subfolder, sub))):
                    for ftar in filelist:
                        unpack(os.path.join(folder, ftar), folder, True)
    else:
        print("Skip downloading and unpacking. Data already exists in %s." %
              target_dir)

    create_manifest(data_dir, manifest_path)


def main():
    print_arguments(args, globals())
    if args.target_dir.startswith('~'):
        args.target_dir = os.path.expanduser(args.target_dir)

    prepare_dataset(
        url=DATA_URL,
        md5sum=MD5_DATA,
        target_dir=args.target_dir,
        manifest_path=args.manifest_prefix,
        subset='aidatatang_200zh')

    print("Data download and manifest prepare done!")


if __name__ == '__main__':
    main()
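Each manifest line produced by `create_manifest` is one standalone JSON object. A sketch of writing and reading one back (field values are illustrative; note that `json` round-trips the `feat_shape` tuple as a list):

```python
import json

# One manifest entry shaped like the dicts built above (values illustrative).
entry = {
    'utt': 'T0055G0013S0001',
    'utt2spk': 'G0013',
    'feat': '/data/corpus/train/G0013/T0055G0013S0001.wav',
    'feat_shape': (2.58, ),  # duration in seconds
    'text': '今天天气怎么样',
}
line = json.dumps(entry, ensure_ascii=False)  # one manifest line
parsed = json.loads(line)                     # read it back
```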
@ -0,0 +1,58 @@
# [Aishell1](http://openslr.elda.org/33/)

This Open Source Mandarin Speech Corpus, AISHELL-ASR0009-OS1, is 178 hours long. It is part of AISHELL-ASR0009, whose utterances cover 11 domains, including smart home, autonomous driving, and industrial production. All recordings were made in a quiet indoor environment using 3 devices simultaneously: a high-fidelity microphone (44.1 kHz, 16-bit), an Android mobile phone (16 kHz, 16-bit), and an iOS mobile phone (16 kHz, 16-bit). The high-fidelity audio was re-sampled to 16 kHz to build AISHELL-ASR0009-OS1. 400 speakers from different accent areas in China were invited to participate in the recording. Through professional speech annotation and strict quality inspection, the manual transcription accuracy is above 95%. The corpus is divided into training, development, and test sets. (This database is free for academic research; commercial use requires permission.)


## Dataset Architecture

```bash
data_aishell
├── transcript                            # transcript dir
└── wav                                   # wav dir
    ├── dev                               # dev dir
    │   ├── S0724                         # speaker dir
    │   ├── S0725
    │   ├── S0726
    ├── train
    │   ├── S0724
    │   ├── S0725
    │   ├── S0726
    ├── test
    │   ├── S0724
    │   ├── S0725
    │   ├── S0726


data_aishell
├── transcript
│   └── aishell_transcript_v0.8.txt       # transcript file
└── wav
    ├── dev
    │   ├── S0724
    │   │   ├── BAC009S0724W0121.wav      # audio of speaker S0724
    │   │   ├── BAC009S0724W0122.wav
    │   │   ├── BAC009S0724W0123.wav
    ├── test
    │   ├── S0724
    │   │   ├── BAC009S0724W0121.wav
    │   │   ├── BAC009S0724W0122.wav
    │   │   ├── BAC009S0724W0123.wav
    ├── train
    │   ├── S0724
    │   │   ├── BAC009S0724W0121.wav
    │   │   ├── BAC009S0724W0122.wav
    │   │   ├── BAC009S0724W0123.wav

Transcript file format: <utt> <tokens>
> head data_aishell/transcript/aishell_transcript_v0.8.txt
BAC009S0002W0122 而 对 楼市 成交 抑制 作用 最 大 的 限 购
BAC009S0002W0123 也 成为 地方 政府 的 眼中 钉
BAC009S0002W0124 自 六月 底 呼和浩特 市 率先 宣布 取消 限 购 后
BAC009S0002W0125 各地 政府 便 纷纷 跟进
BAC009S0002W0126 仅 一 个 多 月 的 时间 里
BAC009S0002W0127 除了 北京 上海 广州 深圳 四 个 一 线 城市 和 三亚 之外
BAC009S0002W0128 四十六 个 限 购 城市 当中
BAC009S0002W0129 四十一 个 已 正式 取消 或 变相 放松 了 限 购
BAC009S0002W0130 财政 金融 政策 紧随 其后 而来
BAC009S0002W0131 显示 出 了 极 强 的 威力
```
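Each transcript line splits into an utterance id and space-separated tokens; the prep scripts join the tokens back into character-level text. A minimal sketch of that parsing step:

```python
# One line from aishell_transcript_v0.8.txt.
line = "BAC009S0002W0122 而 对 楼市 成交 抑制 作用 最 大 的 限 购"

# Split off the utterance id, then drop all whitespace from the tokens
# to obtain character-level text (as create_manifest does).
audio_id, text = line.split(' ', 1)
text = ''.join(text.split())
```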
@ -0,0 +1,18 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .aishell import check_dataset
from .aishell import create_manifest
from .aishell import download_dataset
from .aishell import main as aishell_main
from .aishell import prepare_dataset
@ -0,0 +1,230 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Prepare Aishell mandarin dataset

Download, unpack and create manifest files.
Manifest file is a json-format file with each line containing the
meta data (i.e. audio filepath, transcript and audio duration)
of each audio file in the data set.
"""
import argparse
import codecs
import json
import os
from pathlib import Path

import soundfile

from paddlespeech.dataset.download import download
from paddlespeech.dataset.download import unpack
from paddlespeech.utils.argparse import print_arguments

DATA_HOME = os.path.expanduser('~/.cache/paddle/dataset/speech')

URL_ROOT = 'http://openslr.elda.org/resources/33'
# URL_ROOT = 'https://openslr.magicdatatech.com/resources/33'
DATA_URL = URL_ROOT + '/data_aishell.tgz'
MD5_DATA = '2f494334227864a8a8fec932999db9d8'
RESOURCE_URL = URL_ROOT + '/resource_aishell.tgz'
MD5_RESOURCE = '957d480a0fcac85fc18e550756f624e5'

parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
    "--target_dir",
    default=DATA_HOME + "/Aishell",
    type=str,
    help="Directory to save the dataset. (default: %(default)s)")
parser.add_argument(
    "--manifest_prefix",
    default="manifest",
    type=str,
    help="Filepath prefix for output manifests. (default: %(default)s)")
args = parser.parse_args()


def create_manifest(data_dir, manifest_path_prefix):
    print("Creating manifest %s ..." % os.path.join(data_dir,
                                                    manifest_path_prefix))
    json_lines = []
    transcript_path = os.path.join(data_dir, 'transcript',
                                   'aishell_transcript_v0.8.txt')
    transcript_dict = {}
    for line in codecs.open(transcript_path, 'r', 'utf-8'):
        line = line.strip()
        if line == '':
            continue
        audio_id, text = line.split(' ', 1)
        # remove whitespace to get character-level text
        text = ''.join(text.split())
        transcript_dict[audio_id] = text

    data_metas = dict()
    data_types = ['train', 'dev', 'test']
    for dtype in data_types:
        del json_lines[:]
        total_sec = 0.0
        total_text = 0.0
        total_num = 0

        audio_dir = os.path.join(data_dir, 'wav', dtype)
        for subfolder, _, filelist in sorted(os.walk(audio_dir)):
            for fname in filelist:
                audio_path = os.path.abspath(os.path.join(subfolder, fname))
                audio_id = os.path.basename(fname)[:-4]
                # skip audio without a transcription
                if audio_id not in transcript_dict:
                    continue

                utt2spk = Path(audio_path).parent.name
                audio_data, samplerate = soundfile.read(audio_path)
                duration = float(len(audio_data) / samplerate)
                text = transcript_dict[audio_id]
                json_lines.append(
                    json.dumps(
                        {
                            'utt': audio_id,
                            'utt2spk': str(utt2spk),
                            'feat': audio_path,
                            'feat_shape': (duration, ),  # second
                            'text': text
                        },
                        ensure_ascii=False))

                total_sec += duration
                total_text += len(text)
                total_num += 1

        manifest_path = manifest_path_prefix + '.' + dtype
        with codecs.open(manifest_path, 'w', 'utf-8') as fout:
            for line in json_lines:
                fout.write(line + '\n')

        meta = dict()
        meta["dtype"] = dtype  # train, dev, test
        meta["utts"] = total_num
        meta["hours"] = total_sec / (60 * 60)
        meta["text"] = total_text
        meta["text/sec"] = total_text / total_sec
        meta["sec/utt"] = total_sec / total_num
        data_metas[dtype] = meta

        manifest_dir = os.path.dirname(manifest_path_prefix)
        meta_path = os.path.join(manifest_dir, dtype) + '.meta'
        with open(meta_path, 'w') as f:
            for key, val in meta.items():
                print(f"{key}: {val}", file=f)

    return data_metas


def download_dataset(url, md5sum, target_dir):
    """Download, unpack and create manifest file."""
    data_dir = os.path.join(target_dir, 'data_aishell')
    if not os.path.exists(data_dir):
        filepath = download(url, md5sum, target_dir)
        unpack(filepath, target_dir)
        # unpack all audio tar files
        audio_dir = os.path.join(data_dir, 'wav')
        for subfolder, _, filelist in sorted(os.walk(audio_dir)):
            for ftar in filelist:
                unpack(os.path.join(subfolder, ftar), subfolder, True)
    else:
        print("Skip downloading and unpacking. Data already exists in %s." %
              os.path.abspath(target_dir))
    return os.path.abspath(data_dir)


def check_dataset(data_dir):
    print(f"check dataset {os.path.abspath(data_dir)} ...")

    transcript_path = os.path.join(data_dir, 'transcript',
                                   'aishell_transcript_v0.8.txt')
    if not os.path.exists(transcript_path):
        raise FileNotFoundError(f"no transcript file found in {data_dir}.")

    transcript_dict = {}
    for line in codecs.open(transcript_path, 'r', 'utf-8'):
        line = line.strip()
        if line == '':
            continue
        audio_id, text = line.split(' ', 1)
        # remove whitespace to get character-level text
        text = ''.join(text.split())
        transcript_dict[audio_id] = text

    no_label = 0
    data_types = ['train', 'dev', 'test']
    for dtype in data_types:
        audio_dir = os.path.join(data_dir, 'wav', dtype)
        if not os.path.exists(audio_dir):
            raise IOError(f"{audio_dir} does not exist.")

        for subfolder, _, filelist in sorted(os.walk(audio_dir)):
            for fname in filelist:
                audio_path = os.path.abspath(os.path.join(subfolder, fname))
                audio_id = os.path.basename(fname)[:-4]
                # skip audio without a transcription
                if audio_id not in transcript_dict:
                    print(f"Warning: {audio_id} has no transcript.")
                    no_label += 1
                    continue

                utt2spk = Path(audio_path).parent.name
                audio_data, samplerate = soundfile.read(audio_path)
                assert samplerate == 16000, f"{audio_path} sample rate is {samplerate}, not 16k, please check."

        print(f"Warning: {dtype} has {no_label} audios without transcript.")


def prepare_dataset(url, md5sum, target_dir, manifest_path=None, check=False):
    """Download, unpack and create manifest file."""
    data_dir = download_dataset(url, md5sum, target_dir)

    if check:
        try:
            check_dataset(data_dir)
        except Exception as e:
            raise ValueError(
                f"{data_dir} dataset format not right, please check it.") from e

    meta = None
    if manifest_path:
        meta = create_manifest(data_dir, manifest_path)

    return data_dir, meta


def main():
    print_arguments(args, globals())
    if args.target_dir.startswith('~'):
        args.target_dir = os.path.expanduser(args.target_dir)

    data_dir, meta = prepare_dataset(
        url=DATA_URL,
        md5sum=MD5_DATA,
        target_dir=args.target_dir,
        manifest_path=args.manifest_prefix,
        check=True)

    resource_dir, _ = prepare_dataset(
        url=RESOURCE_URL,
        md5sum=MD5_RESOURCE,
        target_dir=args.target_dir,
        manifest_path=None)

    print("Data download and manifest prepare done!")


if __name__ == '__main__':
    main()
@ -0,0 +1,20 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# s2t utils binaries.
from .avg_model import main as avg_ckpts_main
from .build_vocab import main as build_vocab_main
from .compute_mean_std import main as compute_mean_std_main
from .compute_wer import main as compute_wer_main
from .format_data import main as format_data_main
from .format_rsl import main as format_rsl_main
@ -0,0 +1,125 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import glob
import json
import os

import numpy as np
import paddle


def define_argparse():
    parser = argparse.ArgumentParser(description='average model')
    parser.add_argument(
        '--dst_model', required=True, help='path to save the averaged model')
    parser.add_argument(
        '--ckpt_dir', required=True, help='ckpt model dir for average')
    parser.add_argument(
        '--val_best',
        action="store_true",
        help='average the checkpoints with the best validation loss')
    parser.add_argument(
        '--num', default=5, type=int, help='number of checkpoints to average')
    parser.add_argument(
        '--min_epoch',
        default=0,
        type=int,
        help='min epoch used for averaging model')
    parser.add_argument(
        '--max_epoch',
        default=65536,  # Big enough
        type=int,
        help='max epoch used for averaging model')

    args = parser.parse_args()
    return args


def average_checkpoints(dst_model="",
                        ckpt_dir="",
                        val_best=True,
                        num=5,
                        min_epoch=0,
                        max_epoch=65536):
    paddle.set_device('cpu')

    val_scores = []
    jsons = glob.glob(f'{ckpt_dir}/[!train]*.json')
    jsons = sorted(jsons, key=os.path.getmtime, reverse=True)
    for y in jsons:
        with open(y, 'r') as f:
            dic_json = json.load(f)
        loss = dic_json['val_loss']
        epoch = dic_json['epoch']
        if epoch >= min_epoch and epoch <= max_epoch:
            val_scores.append((epoch, loss))
    assert val_scores, f"No valid checkpoints found in {ckpt_dir}"
    val_scores = np.array(val_scores)

    if val_best:
        sort_idx = np.argsort(val_scores[:, 1])
        sorted_val_scores = val_scores[sort_idx]
    else:
        sorted_val_scores = val_scores

    best_val_scores = sorted_val_scores[:num, 1]
    selected_epochs = sorted_val_scores[:num, 0].astype(np.int64)
    avg_val_score = np.mean(best_val_scores)
    print("selected val scores = " + str(best_val_scores))
    print("selected epochs = " + str(selected_epochs))
    print("averaged val score = " + str(avg_val_score))

    path_list = [
        ckpt_dir + '/{}.pdparams'.format(int(epoch))
        for epoch in sorted_val_scores[:num, 0]
    ]
    print(path_list)

    avg = None
    assert num == len(path_list)
    for path in path_list:
        print(f'Processing {path}')
        states = paddle.load(path)
        if avg is None:
            avg = states
        else:
            for k in avg.keys():
                avg[k] += states[k]
    # average
    for k in avg.keys():
        if avg[k] is not None:
            avg[k] /= num

    paddle.save(avg, dst_model)
    print(f'Saving to {dst_model}')

    meta_path = os.path.splitext(dst_model)[0] + '.avg.json'
    with open(meta_path, 'w') as f:
        data = json.dumps({
            "mode": 'val_best' if val_best else 'latest',
            "avg_ckpt": dst_model,
            "val_loss_mean": avg_val_score,
            "ckpts": path_list,
            "epochs": selected_epochs.tolist(),
            "val_losses": best_val_scores.tolist(),
        })
        f.write(data + "\n")


def main():
    args = define_argparse()
    average_checkpoints(
        dst_model=args.dst_model,
        ckpt_dir=args.ckpt_dir,
        val_best=args.val_best,
        num=args.num,
        min_epoch=args.min_epoch,
        max_epoch=args.max_epoch)


if __name__ == '__main__':
    main()
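The core of `average_checkpoints` is an element-wise mean over parameter dicts. A minimal numpy stand-in for the paddle state dicts (the helper name `average_states` is illustrative, not part of the script above):

```python
import numpy as np


def average_states(state_dicts):
    """Element-wise mean of a list of parameter dicts that share
    the same keys and shapes (numpy stand-in for paddle states)."""
    avg = {k: v.astype(np.float64).copy()
           for k, v in state_dicts[0].items()}
    for states in state_dicts[1:]:
        for k in avg:
            avg[k] += states[k]
    for k in avg:
        avg[k] /= len(state_dicts)
    return avg


ckpt_a = {'w': np.array([1.0, 2.0])}
ckpt_b = {'w': np.array([3.0, 4.0])}
avg = average_states([ckpt_a, ckpt_b])
```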
@ -0,0 +1,166 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Build vocabulary from manifest files.
Each item in vocabulary file is a character.
"""
import argparse
import functools
import os
import tempfile
from collections import Counter

import jsonlines

from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.frontend.utility import BLANK
from paddlespeech.s2t.frontend.utility import SOS
from paddlespeech.s2t.frontend.utility import SPACE
from paddlespeech.s2t.frontend.utility import UNK
from paddlespeech.utils.argparse import add_arguments
from paddlespeech.utils.argparse import print_arguments


def count_manifest(counter, text_feature, manifest_path):
    manifest_jsons = []
    with jsonlines.open(manifest_path, 'r') as reader:
        for json_data in reader:
            manifest_jsons.append(json_data)

    for line_json in manifest_jsons:
        if isinstance(line_json['text'], str):
            tokens = text_feature.tokenize(
                line_json['text'], replace_space=False)
            counter.update(tokens)
        else:
            assert isinstance(line_json['text'], list)
            for text in line_json['text']:
                tokens = text_feature.tokenize(text, replace_space=False)
                counter.update(tokens)


def dump_text_manifest(fileobj, manifest_path, key='text'):
    manifest_jsons = []
    with jsonlines.open(manifest_path, 'r') as reader:
        for json_data in reader:
            manifest_jsons.append(json_data)

    for line_json in manifest_jsons:
        if isinstance(line_json[key], str):
            fileobj.write(line_json[key] + "\n")
        else:
            assert isinstance(line_json[key], list)
            for line in line_json[key]:
                fileobj.write(line + "\n")


def build_vocab(manifest_paths="",
                vocab_path="examples/librispeech/data/vocab.txt",
                unit_type="char",
                count_threshold=0,
                text_keys='text',
                spm_mode="unigram",
                spm_vocab_size=0,
                spm_model_prefix="",
                spm_character_coverage=0.9995):
    fout = open(vocab_path, 'w', encoding='utf-8')
    fout.write(BLANK + "\n")  # 0 will be used for "blank" in CTC
    fout.write(UNK + '\n')  # <unk> must be 1

    if unit_type == 'spm':
        # tools/spm_train --input=$wave_data/lang_char/input.txt
        # --vocab_size=${nbpe} --model_type=${bpemode}
        # --model_prefix=${bpemodel} --input_sentence_size=100000000
        import sentencepiece as spm

        fp = tempfile.NamedTemporaryFile(mode='w', delete=False)
        for manifest_path in manifest_paths:
            _text_keys = [text_keys] if type(
                text_keys) is not list else text_keys
            for text_key in _text_keys:
                dump_text_manifest(fp, manifest_path, key=text_key)
        fp.close()
        # train
        spm.SentencePieceTrainer.Train(
            input=fp.name,
            vocab_size=spm_vocab_size,
            model_type=spm_mode,
            model_prefix=spm_model_prefix,
            input_sentence_size=100000000,
            character_coverage=spm_character_coverage)
        os.unlink(fp.name)

    # encode
    text_feature = TextFeaturizer(unit_type, "", spm_model_prefix)
    counter = Counter()

    for manifest_path in manifest_paths:
        count_manifest(counter, text_feature, manifest_path)

    count_sorted = sorted(counter.items(), key=lambda x: x[1], reverse=True)
    tokens = []
    for token, count in count_sorted:
        if count < count_threshold:
            break
        # replace space by `<space>`
        token = SPACE if token == ' ' else token
        tokens.append(token)

    tokens = sorted(tokens)
    for token in tokens:
        fout.write(token + '\n')

    fout.write(SOS + "\n")  # <sos/eos>
    fout.close()


def define_argparse():
    parser = argparse.ArgumentParser(description=__doc__)
    add_arg = functools.partial(add_arguments, argparser=parser)

    # yapf: disable
    add_arg('unit_type', str, "char", "Unit type, e.g. char, word, spm")
    add_arg('count_threshold', int, 0,
            "Truncation threshold for char/word counts. Default 0, no truncate.")
|
||||
add_arg('vocab_path', str,
|
||||
'examples/librispeech/data/vocab.txt',
|
||||
"Filepath to write the vocabulary.")
|
||||
add_arg('manifest_paths', str,
|
||||
None,
|
||||
"Filepaths of manifests for building vocabulary. "
|
||||
"You can provide multiple manifest files.",
|
||||
nargs='+',
|
||||
required=True)
|
||||
add_arg('text_keys', str,
|
||||
'text',
|
||||
"keys of the text in manifest for building vocabulary. "
|
||||
"You can provide multiple k.",
|
||||
nargs='+')
|
||||
# bpe
|
||||
add_arg('spm_vocab_size', int, 0, "Vocab size for spm.")
|
||||
add_arg('spm_mode', str, 'unigram', "spm model type, e.g. unigram, spm, char, word. only need when `unit_type` is spm")
|
||||
add_arg('spm_model_prefix', str, "", "spm_model_%(spm_mode)_%(count_threshold), spm model prefix, only need when `unit_type` is spm")
|
||||
add_arg('spm_character_coverage', float, 0.9995, "character coverage to determine the minimum symbols")
|
||||
# yapf: disable
|
||||
|
||||
args = parser.parse_args()
|
||||
return args
|
||||
|
||||
def main():
|
||||
args = define_argparse()
|
||||
print_arguments(args, globals())
|
||||
build_vocab(**vars(args))
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
@ -0,0 +1,106 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Compute mean and std for feature normalizer, and save to file."""
import argparse
import functools

from paddlespeech.s2t.frontend.augmentor.augmentation import AugmentationPipeline
from paddlespeech.s2t.frontend.featurizer.audio_featurizer import AudioFeaturizer
from paddlespeech.s2t.frontend.normalizer import FeatureNormalizer
from paddlespeech.utils.argparse import add_arguments
from paddlespeech.utils.argparse import print_arguments


def compute_cmvn(manifest_path="data/librispeech/manifest.train",
                 output_path="data/librispeech/mean_std.npz",
                 num_samples=2000,
                 num_workers=0,
                 spectrum_type="linear",
                 feat_dim=13,
                 delta_delta=False,
                 stride_ms=10,
                 window_ms=20,
                 sample_rate=16000,
                 use_dB_normalization=True,
                 target_dB=-20):

    augmentation_pipeline = AugmentationPipeline('{}')
    audio_featurizer = AudioFeaturizer(
        spectrum_type=spectrum_type,
        feat_dim=feat_dim,
        delta_delta=delta_delta,
        stride_ms=float(stride_ms),
        window_ms=float(window_ms),
        n_fft=None,
        max_freq=None,
        target_sample_rate=sample_rate,
        use_dB_normalization=use_dB_normalization,
        target_dB=target_dB,
        dither=0.0)

    def augment_and_featurize(audio_segment):
        augmentation_pipeline.transform_audio(audio_segment)
        return audio_featurizer.featurize(audio_segment)

    normalizer = FeatureNormalizer(
        mean_std_filepath=None,
        manifest_path=manifest_path,
        featurize_func=augment_and_featurize,
        num_samples=num_samples,
        num_workers=num_workers)
    normalizer.write_to_file(output_path)


def define_argparse():
    parser = argparse.ArgumentParser(description=__doc__)
    add_arg = functools.partial(add_arguments, argparser=parser)

    # yapf: disable
    add_arg('manifest_path', str,
            'data/librispeech/manifest.train',
            "Filepath of manifest to compute normalizer's mean and stddev.")

    add_arg('output_path', str,
            'data/librispeech/mean_std.npz',
            "Filepath to write mean and stddev to (.npz).")
    add_arg('num_samples', int, 2000, "Number of samples used to compute the statistics.")
    add_arg('num_workers',
            default=0,
            type=int,
            help='Number of subprocess workers for processing.')

    add_arg('spectrum_type', str,
            'linear',
            "Audio feature type. Options: linear, mfcc, fbank.",
            choices=['linear', 'mfcc', 'fbank'])
    add_arg('feat_dim', int, 13, "Audio feature dim.")
    add_arg('delta_delta', bool, False, "Audio feature with delta-delta.")
    add_arg('stride_ms', int, 10, "Stride length in ms.")
    add_arg('window_ms', int, 20, "Window length in ms.")
    add_arg('sample_rate', int, 16000, "Target sample rate.")
    add_arg('use_dB_normalization', bool, True, "Do dB normalization.")
    add_arg('target_dB', int, -20, "Target dB.")
    # yapf: enable

    args = parser.parse_args()
    return args


def main():
    args = define_argparse()
    print_arguments(args, globals())
    compute_cmvn(**vars(args))


if __name__ == '__main__':
    main()
@ -0,0 +1,154 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Format manifest with more metadata."""
import argparse
import functools
import json

import jsonlines

from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer
from paddlespeech.s2t.frontend.utility import load_cmvn
from paddlespeech.s2t.io.utility import feat_type
from paddlespeech.utils.argparse import add_arguments
from paddlespeech.utils.argparse import print_arguments


def define_argparse():
    parser = argparse.ArgumentParser(description=__doc__)
    add_arg = functools.partial(add_arguments, argparser=parser)
    # yapf: disable
    add_arg('manifest_paths', str,
            None,
            "Filepaths of manifests for building vocabulary. "
            "You can provide multiple manifest files.",
            nargs='+',
            required=True)
    add_arg('output_path', str, None, "Filepath of the formatted manifest.", required=True)
    add_arg('cmvn_path', str,
            'examples/librispeech/data/mean_std.json',
            "Filepath of cmvn.")
    add_arg('unit_type', str, "char", "Unit type, e.g. char, word, spm.")
    add_arg('vocab_path', str,
            'examples/librispeech/data/vocab.txt',
            "Filepath of the vocabulary.")
    # bpe
    add_arg('spm_model_prefix', str, None,
            "spm model prefix, spm_model_%(bpe_mode)_%(count_threshold). Only needed when `unit_type` is spm.")

    # yapf: enable
    args = parser.parse_args()
    return args


def format_data(
        manifest_paths="",
        output_path="",
        cmvn_path="examples/librispeech/data/mean_std.json",
        unit_type="char",
        vocab_path="examples/librispeech/data/vocab.txt",
        spm_model_prefix=""):

    fout = open(output_path, 'w', encoding='utf-8')

    # get feat dim
    filetype = cmvn_path.split(".")[-1]
    mean, istd = load_cmvn(cmvn_path, filetype=filetype)
    feat_dim = mean.shape[0]  # (D)
    print(f"Feature dim: {feat_dim}")

    text_feature = TextFeaturizer(unit_type, vocab_path, spm_model_prefix)
    vocab_size = text_feature.vocab_size
    print(f"Vocab size: {vocab_size}")

    # Each output jsonline looks like this:
    # {
    #     "input": [{"name": "input1", "shape": (100, 83), "feat": "xxx.ark:123"}],
    #     "output": [{"name": "target1", "shape": (40, 5002), "text": "a b c de"}],
    #     "utt2spk": "111-2222",
    #     "utt": "111-2222-333"
    # }
    count = 0
    for manifest_path in manifest_paths:
        with jsonlines.open(str(manifest_path), 'r') as reader:
            manifest_jsons = list(reader)

        for line_json in manifest_jsons:
            output_json = {
                "input": [],
                "output": [],
                'utt': line_json['utt'],
                'utt2spk': line_json.get('utt2spk', 'global'),
            }

            # output
            line = line_json['text']
            if isinstance(line, str):
                # only one target
                tokens = text_feature.tokenize(line)
                tokenids = text_feature.featurize(line)
                output_json['output'].append({
                    'name': 'target1',
                    'shape': (len(tokenids), vocab_size),
                    'text': line,
                    'token': ' '.join(tokens),
                    'tokenid': ' '.join(map(str, tokenids)),
                })
            else:
                # isinstance(line, list): multiple targets in one vocab
                for i, item in enumerate(line, 1):
                    tokens = text_feature.tokenize(item)
                    tokenids = text_feature.featurize(item)
                    output_json['output'].append({
                        'name': f'target{i}',
                        'shape': (len(tokenids), vocab_size),
                        'text': item,
                        'token': ' '.join(tokens),
                        'tokenid': ' '.join(map(str, tokenids)),
                    })

            # input
            line = line_json['feat']
            if isinstance(line, str):
                # only one input
                feat_shape = line_json['feat_shape']
                assert isinstance(feat_shape, (list, tuple)), type(feat_shape)
                filetype = feat_type(line)
                if filetype == 'sound':
                    feat_shape.append(feat_dim)
                else:  # kaldi
                    raise NotImplementedError('kaldi feat is not supported yet!')

                output_json['input'].append({
                    "name": "input1",
                    "shape": feat_shape,
                    "feat": line,
                    "filetype": filetype,
                })
            else:
                # isinstance(line, list): multiple inputs
                raise NotImplementedError("multiple inputs are not supported yet!")

            fout.write(json.dumps(output_json) + '\n')
            count += 1

    print(f"{manifest_paths} Examples number: {count}")
    fout.close()


def main():
    args = define_argparse()
    print_arguments(args, globals())
    format_data(**vars(args))


if __name__ == '__main__':
    main()
@ -0,0 +1,143 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Format ref/hyp files into the `utt text` format to compute CER/WER/MER.

norm:
BAC009S0764W0196 明确了发展目标和重点任务
BAC009S0764W0186 实现我国房地产市场的平稳运行

sclite:
加大对结构机械化环境和收集谈控机制力度(BAC009S0906W0240.wav)
河南省新乡市丰秋县刘光镇政府东五零左右(BAC009S0770W0441.wav)
"""
import argparse

import jsonlines

from paddlespeech.utils.argparse import print_arguments


def transform_hyp(origin, trans, trans_sclite):
    """
    Args:
        origin: The input jsonlines file which contains the model output.
        trans: The output file for calculating CER/WER.
        trans_sclite: The output file for calculating CER/WER using sclite.
    """
    input_dict = {}

    with open(origin, "r+", encoding="utf8") as f:
        for item in jsonlines.Reader(f):
            input_dict[item["utt"]] = item["hyps"][0]

    if trans:
        with open(trans, "w+", encoding="utf8") as f:
            for key in input_dict.keys():
                f.write(key + " " + input_dict[key] + "\n")
        print(f"transform_hyp output: {trans}")

    if trans_sclite:
        with open(trans_sclite, "w+") as f:
            for key in input_dict.keys():
                line = input_dict[key] + "(" + key + ".wav" + ")" + "\n"
                f.write(line)
        print(f"transform_hyp output: {trans_sclite}")


def transform_ref(origin, trans, trans_sclite):
    """
    Args:
        origin: The input jsonlines file which contains the reference text.
        trans: The output file for calculating CER/WER.
        trans_sclite: The output file for calculating CER/WER using sclite.
    """
    input_dict = {}

    with open(origin, "r", encoding="utf8") as f:
        for item in jsonlines.Reader(f):
            input_dict[item["utt"]] = item["text"]

    if trans:
        with open(trans, "w", encoding="utf8") as f:
            for key in input_dict.keys():
                f.write(key + " " + input_dict[key] + "\n")
        print(f"transform_ref output: {trans}")

    if trans_sclite:
        with open(trans_sclite, "w") as f:
            for key in input_dict.keys():
                line = input_dict[key] + "(" + key + ".wav" + ")" + "\n"
                f.write(line)
        print(f"transform_ref output: {trans_sclite}")


def define_argparse():
    parser = argparse.ArgumentParser(
        prog='format ref/hyp file for computing CER/WER', add_help=True)
    parser.add_argument(
        '--origin_hyp', type=str, default="", help='origin hyp file')
    parser.add_argument(
        '--trans_hyp',
        type=str,
        default="",
        help='hyp file for calculating CER/WER')
    parser.add_argument(
        '--trans_hyp_sclite',
        type=str,
        default="",
        help='hyp file for calculating CER/WER by sclite')

    parser.add_argument(
        '--origin_ref', type=str, default="", help='origin ref file')
    parser.add_argument(
        '--trans_ref',
        type=str,
        default="",
        help='ref file for calculating CER/WER')
    parser.add_argument(
        '--trans_ref_sclite',
        type=str,
        default="",
        help='ref file for calculating CER/WER by sclite')
    parser_args = parser.parse_args()
    return parser_args


def format_result(origin_hyp="",
                  trans_hyp="",
                  trans_hyp_sclite="",
                  origin_ref="",
                  trans_ref="",
                  trans_ref_sclite=""):

    if origin_hyp:
        transform_hyp(
            origin=origin_hyp, trans=trans_hyp, trans_sclite=trans_hyp_sclite)

    if origin_ref:
        transform_ref(
            origin=origin_ref, trans=trans_ref, trans_sclite=trans_ref_sclite)


def main():
    args = define_argparse()
    print_arguments(args, globals())

    format_result(**vars(args))


if __name__ == "__main__":
    main()