# Transformer/Conformer ASR with Aishell

This example contains code used to train a Transformer or [Conformer](http://arxiv.org/abs/2008.03802) model with the [Aishell dataset](http://www.openslr.org/resources/33).

## Overview

All the scripts you need are in `run.sh`. There are several stages in `run.sh`, and each stage has its own function.

| Stage | Function |
| :---- | :----------------------------------------------------------- |
| 0 | Process data. It includes: <br> (1) Download the dataset <br> (2) Calculate the CMVN of the train dataset <br> (3) Get the vocabulary file <br> (4) Get the manifest files of the train, development, and test datasets |
| 1 | Train the model |
| 2 | Get the final model by averaging the top-k models (setting k = 1 means choosing the best model) |
| 3 | Test the final model performance |
| 4 | Get the CTC alignment of the test data using the final model |
| 5 | Infer a single audio file |

You can choose to run a range of stages by setting `stage` and `stop_stage`. For example, if you want to execute the code in stage 2 and stage 3, you can run this script:
```bash
bash run.sh --stage 2 --stop_stage 3
```
Or you can set `stage` equal to `stop_stage` to run only one stage. For example, if you only want to run stage 0, you can use the script below:
```bash
bash run.sh --stage 0 --stop_stage 0
```
The document below describes the scripts in `run.sh` in detail.

## The Environment Variables

The `path.sh` script contains the environment variables.
```bash
source path.sh
```
This script needs to be run first. Another script is also needed:
```bash
source ${MAIN_ROOT}/utils/parse_options.sh
```
It enables passing `--variable value` options to the shell scripts.

## The Local Variables

Some local variables are set in `run.sh`:
- `gpus` denotes the GPU numbers you want to use. If you set `gpus=`, only the CPU is used.
- `stage` denotes the number of the stage you want to start from in the experiment.
- `stop_stage` denotes the number of the stage you want to end at in the experiment.
- `conf_path` denotes the config path of the model.
- `avg_num` denotes the number k of the top-k models you want to average to get the final model.
- `audio_file` denotes the file path of the single file you want to infer in stage 5.
- `ckpt` denotes the checkpoint prefix of the model, e.g. "conformer".

You can set the local variables (except `ckpt`) when you use `run.sh`. For example, you can set `gpus` and `avg_num` on the command line:
```bash
bash run.sh --gpus 0,1 --avg_num 20
```

## Stage 0: Data Processing

To use this example, you need to process the data first, and you can use stage 0 in `run.sh` to do this. The code is shown below:
```bash
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # prepare data
    bash ./local/data.sh || exit -1
fi
```
If you only want to process the data, you can run:
```bash
bash run.sh --stage 0 --stop_stage 0
```
You can also just run these scripts on your command line:
```bash
source path.sh
bash ./local/data.sh
```
After processing the data, the `data` directory will look like this:
```bash
data/
|-- dev.meta
|-- lang_char
|   `-- vocab.txt
|-- manifest.dev
|-- manifest.dev.raw
|-- manifest.test
|-- manifest.test.raw
|-- manifest.train
|-- manifest.train.raw
|-- mean_std.json
|-- test.meta
`-- train.meta
```
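If stage 0 finishes successfully, a few quick checks can confirm the artifacts are in place. This is an optional sketch, assuming the `data` layout shown above and that the manifests are line-oriented (one record per utterance):
```bash
# count utterances in each processed manifest
wc -l data/manifest.train data/manifest.dev data/manifest.test
# peek at a single manifest record
head -n 1 data/manifest.train
# vocabulary size used for training
wc -l data/lang_char/vocab.txt
```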
## Stage 1: Model Training

If you want to train the model, you can use stage 1 in `run.sh`. The code is shown below:
```bash
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then
    # train model, all `ckpt` under `exp` dir
    CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${ckpt}
fi
```
If you want to train the model, you can use the script below to execute stage 0 and stage 1:
```bash
bash run.sh --stage 0 --stop_stage 1
```
Or you can run these scripts on the command line (using only the CPU):
```bash
source path.sh
bash ./local/data.sh
CUDA_VISIBLE_DEVICES= ./local/train.sh conf/conformer.yaml conformer
```

## Stage 2: Top-k Models Averaging

After training the model, we need to get the final model for testing and inference. In every epoch, the model checkpoint is saved, so we can either choose the best model from them based on the validation loss, or sort them and average the parameters of the top-k models to get the final model. We can use stage 2 to do this, and the code is shown below:
```bash
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
    # avg n best model
    avg.sh best exp/${ckpt}/checkpoints ${avg_num}
fi
```
The `avg.sh` script is in `../../../utils/`, which is defined in `path.sh`. If you want to get the final model, you can use the script below to execute stage 0, stage 1, and stage 2:
```bash
bash run.sh --stage 0 --stop_stage 2
```
Or you can run these scripts on the command line (using only the CPU):
```bash
source path.sh
bash ./local/data.sh
CUDA_VISIBLE_DEVICES= ./local/train.sh conf/conformer.yaml conformer
avg.sh best exp/conformer/checkpoints 20
```
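As the overview table notes, setting k = 1 selects the single best checkpoint rather than an average. A minimal sketch of that variant, assuming the resulting model is named `avg_1` following the `avg_${avg_num}` naming pattern used elsewhere in this example:
```bash
# keep only the best checkpoint (k = 1) instead of averaging the top 20
avg.sh best exp/conformer/checkpoints 1
# later stages would then point at exp/conformer/checkpoints/avg_1
```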
## Stage 3: Model Testing

The test stage evaluates the model's performance. The code of this stage is shown below:
```bash
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
    # test ckpt avg_n
    CUDA_VISIBLE_DEVICES=0 ./local/test.sh ${conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} || exit -1
fi
```
If you want to train a model and test it, you can use the script below to execute stage 0, stage 1, stage 2, and stage 3:
```bash
bash run.sh --stage 0 --stop_stage 3
```
Or you can run these scripts on the command line (using only the CPU):
```bash
source path.sh
bash ./local/data.sh
CUDA_VISIBLE_DEVICES= ./local/train.sh conf/conformer.yaml conformer
avg.sh best exp/conformer/checkpoints 20
CUDA_VISIBLE_DEVICES= ./local/test.sh conf/conformer.yaml exp/conformer/checkpoints/avg_20
```

## Pretrained Model

You can get the pretrained transformer or conformer models using the scripts below:
```bash
# Conformer:
wget https://deepspeech.bj.bcebos.com/release2.1/aishell/s1/aishell.release.tar.gz
# Chunk Conformer:
wget https://deepspeech.bj.bcebos.com/release2.1/aishell/s1/aishell.chunk.release.tar.gz
# Transformer:
wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz
```
Use `tar` to unpack the model, and then you can test it. For example:
```bash
wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz
tar xzvf transformer.model.tar.gz
source path.sh
# If you have already processed the data and gotten the manifest files, you can skip the following 2 steps
bash local/data.sh --stage -1 --stop_stage -1
bash local/data.sh --stage 2 --stop_stage 2
CUDA_VISIBLE_DEVICES= ./local/test.sh conf/transformer.yaml exp/transformer/checkpoints/avg_20
```
The performance of the released models is shown below:

### Conformer

| Model | Params | Config | Augmentation | Test set | Decode method | Loss | CER |
| --------- | ------ | ------------------- | ---------------- | -------- | ---------------------- | ---- | -------- |
| conformer | 47.07M | conf/conformer.yaml | spec_aug + shift | test | attention | - | 0.059858 |
| conformer | 47.07M | conf/conformer.yaml | spec_aug + shift | test | ctc_greedy_search | - | 0.062311 |
| conformer | 47.07M | conf/conformer.yaml | spec_aug + shift | test | ctc_prefix_beam_search | - | 0.062196 |
| conformer | 47.07M | conf/conformer.yaml | spec_aug + shift | test | attention_rescoring | - | 0.054694 |

### Chunk Conformer

You need to set `decoding.decoding_chunk_size=16` when decoding.

| Model | Params | Config | Augmentation | Test set | Decode method | Chunk Size & Left Chunks | Loss | CER |
| --------- | ------ | ------------------------- | ---------------- | -------- | ---------------------- | ------------------------ | ---- | -------- |
| conformer | 47.06M | conf/chunk_conformer.yaml | spec_aug + shift | test | attention | 16, -1 | - | 0.061939 |
| conformer | 47.06M | conf/chunk_conformer.yaml | spec_aug + shift | test | ctc_greedy_search | 16, -1 | - | 0.070806 |
| conformer | 47.06M | conf/chunk_conformer.yaml | spec_aug + shift | test | ctc_prefix_beam_search | 16, -1 | - | 0.070739 |
| conformer | 47.06M | conf/chunk_conformer.yaml | spec_aug + shift | test | attention_rescoring | 16, -1 | - | 0.059400 |

### Transformer

| Model | Params | Config | Augmentation | Test set | Decode method | Loss | CER |
| ----------- | ------ | --------------------- | ------------ | -------- | ---------------------- | ----------------- | -------- |
| transformer | 31.95M | conf/transformer.yaml | spec_aug | test | attention | 3.858648955821991 | 0.057293 |
| transformer | 31.95M | conf/transformer.yaml | spec_aug | test | ctc_greedy_search | 3.858648955821991 | 0.061837 |
| transformer | 31.95M | conf/transformer.yaml | spec_aug | test | ctc_prefix_beam_search | 3.858648955821991 | 0.061685 |
| transformer | 31.95M | conf/transformer.yaml | spec_aug | test | attention_rescoring | 3.858648955821991 | 0.053844 |

## Stage 4: CTC Alignment

If you want to get the alignment between the audio and the text, you can use CTC alignment. The code of this stage is shown below:
```bash
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
    # ctc alignment of test data
    CUDA_VISIBLE_DEVICES=0 ./local/align.sh ${conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} || exit -1
fi
```
If you want to train the model, test it, and do the alignment, you can use the script below to execute stage 0 through stage 4:
```bash
bash run.sh --stage 0 --stop_stage 4
```
Or, if you only need to train a model and do the alignment, you can use these scripts to skip stage 3 (the test stage):
```bash
bash run.sh --stage 0 --stop_stage 2
bash run.sh --stage 4 --stop_stage 4
```
Or you can run these scripts on the command line (using only the CPU):
```bash
source path.sh
bash ./local/data.sh
CUDA_VISIBLE_DEVICES= ./local/train.sh conf/conformer.yaml conformer
avg.sh best exp/conformer/checkpoints 20
# test stage is optional
CUDA_VISIBLE_DEVICES= ./local/test.sh conf/conformer.yaml exp/conformer/checkpoints/avg_20
CUDA_VISIBLE_DEVICES= ./local/align.sh conf/conformer.yaml exp/conformer/checkpoints/avg_20
```
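Alignment follows the same `config + checkpoint` argument pattern as testing, so it should also work with a downloaded checkpoint instead of one you trained yourself. A sketch, assuming the pretrained transformer tarball from the "Pretrained Model" section unpacks into `exp/transformer` as in the test example above:
```bash
wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz
tar xzvf transformer.model.tar.gz
# align the test data with the pretrained checkpoint
CUDA_VISIBLE_DEVICES= ./local/align.sh conf/transformer.yaml exp/transformer/checkpoints/avg_20
```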
## Stage 5: Single Audio File Inference

In some situations, you may want to use the trained model to do inference on a single audio file. You can use stage 5 for this. The code is shown below:
```bash
if [ ${stage} -le 5 ] && [ ${stop_stage} -ge 5 ]; then
    # test a single .wav file
    CUDA_VISIBLE_DEVICES=0 ./local/test_wav.sh ${conf_path} exp/${ckpt}/checkpoints/${avg_ckpt} ${audio_file} || exit -1
fi
```
You can train the model by yourself using `bash run.sh --stage 0 --stop_stage 3`, or you can download the pretrained model with the script below:
```bash
wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz
tar xzvf transformer.model.tar.gz
```
You can download the audio demo:
```bash
wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P data/
```
You need to prepare an audio file or use the audio demo above; please confirm that the sample rate of the audio is 16 kHz. You can get the result by running the script below:
```bash
CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/transformer.yaml exp/transformer/checkpoints/avg_20 data/demo_01_03.wav
```
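Since the model expects 16 kHz audio, it is worth checking your own file before inference. A small sketch using SoX (an assumption; any audio tool works, and `my_audio.wav` is a hypothetical file name):
```bash
# print the sample rate of the file; expect 16000 for this model
soxi -r my_audio.wav
# resample to 16 kHz if necessary, then run test_wav.sh on the result
sox my_audio.wav -r 16000 my_audio_16k.wav
```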