# Tiny Example
- `source path.sh`
  - Set `CUDA_VISIBLE_DEVICES` as you need (see the setup sketch below).
- The demo script is `bash run.sh`. You can also run the commands in it separately as needed.
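A minimal setup sketch, assuming a single-GPU machine; the device id `0` is an assumption, adjust it to your hardware:

```bash
# Load the project environment variables (e.g. MAIN_ROOT) defined in path.sh.
source path.sh

# Restrict training/inference to the first GPU; the id is an assumption.
export CUDA_VISIBLE_DEVICES=0

# Run the whole demo end to end.
bash run.sh
```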
## Steps
- Prepare the data

  ```bash
  bash local/data.sh
  ```

  `data.sh` will download the dataset, generate manifests, collect the normalizer's statistics and build the vocabulary. Once the data preparation is done, you will find the data (only part of LibriSpeech) downloaded in `${MAIN_ROOT}/dataset/librispeech` and the corresponding manifest files generated in `${PWD}/data`, as well as a mean/stddev file and a vocabulary file. This step only has to be run the very first time you use this dataset; its outputs are reusable for all further experiments.
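  As a quick sanity check you can list the generated artifacts; the exact file names are not spelled out in this README, so treat the output as informative rather than fixed:

  ```bash
  # Downloaded LibriSpeech subset (path from the step above).
  ls ${MAIN_ROOT}/dataset/librispeech

  # Manifests, mean/stddev statistics and vocabulary produced by data.sh.
  ls ${PWD}/data
  ```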
- Train your own ASR model

  ```bash
  bash local/train.sh
  ```

  `train.sh` will start a training job, with training logs printed to stdout and a model checkpoint saved to `${PWD}/checkpoints` after every pass/epoch. These checkpoints can be used for resuming training, inference, evaluation and deployment.
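  Because the training logs go to stdout, a common pattern is to keep a copy on disk; the GPU ids and log file name below are assumptions:

  ```bash
  # Train on two GPUs (ids are an assumption) and keep a copy of the log.
  CUDA_VISIBLE_DEVICES=0,1 bash local/train.sh 2>&1 | tee train.log
  ```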
- Case inference with an existing model
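  The inference command is not included in this snippet; assuming the script follows the same naming convention as the other steps, the call would look like the sketch below (the path `local/infer.sh` is a hypothetical name):

  ```bash
  # Hypothetical script name: run case inference with a trained checkpoint.
  bash local/infer.sh
  ```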
- Evaluate an existing model

  ```bash
  bash local/test.sh
  ```

  `test.sh` will evaluate the model using Word Error Rate (or Character Error Rate). Similarly, you can download a well-trained model and test its performance (see the sketch below).
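  A minimal sketch of evaluating a downloaded model; the URL placeholder, archive layout and unpack location are all assumptions:

  ```bash
  # Hypothetical: fetch and unpack a pre-trained model, then evaluate it.
  wget -O pretrained.tar.gz "<url-of-pretrained-model>"
  mkdir -p checkpoints && tar -xzf pretrained.tar.gz -C checkpoints
  bash local/test.sh
  ```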
- Export jit model

  ```bash
  bash local/export.sh ckpt_path saved_jit_model_path
  ```
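  A usage example with concrete, hypothetical paths for the two positional arguments:

  ```bash
  # Hypothetical paths: export a trained checkpoint to a jit model for deployment.
  bash local/export.sh checkpoints/step_final exp/model.jit
  ```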