From 81c142c25c44aa5d43907e2a0e067e182609bf8b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E6=98=A5=E4=B9=94?= <83450930+Liyulingyue@users.noreply.github.com>
Date: Sun, 10 Nov 2024 08:05:18 +0800
Subject: [PATCH] Update README.md

---
 examples/aishell/asr0/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/aishell/asr0/README.md b/examples/aishell/asr0/README.md
index 131de36e3..015b85a91 100644
--- a/examples/aishell/asr0/README.md
+++ b/examples/aishell/asr0/README.md
@@ -178,7 +178,7 @@ This stage is to transform dygraph to static graph.
 If you already have a dynamic graph model, you can run this script:
 ```bash
 source path.sh
-./local/export.sh deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1 exp/deepspeech2/checkpoints/avg_1.jit offline
+./local/export.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1 exp/deepspeech2/checkpoints/avg_1.jit
 ```
 ## Stage 5: Static graph Model Testing
 Similar to stage 3, the static graph model can also be tested.
@@ -190,7 +190,7 @@ Similar to stage 3, the static graph model can also be tested.
 ```
 If you already have exported the static graph, you can run this script:
 ```bash
-CUDA_VISIBLE_DEVICES= ./local/test_export.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1.jit offline
+CUDA_VISIBLE_DEVICES= ./local/test_export.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_1.jit
 ```
 ## Stage 6: Single Audio File Inference
 In some situations, you want to use the trained model to do the inference for the single audio file. You can use stage 5. The code is shown below