From 0b1a45e3aebb93d859f223653e56813e19e3b6fb Mon Sep 17 00:00:00 2001
From: Your Name
Date: Fri, 25 Nov 2022 14:34:26 +0800
Subject: [PATCH] update readme in speechx examples,test=doc

---
 speechx/examples/ds2_ol/aishell/README.md   | 11 ++++++++++
 speechx/examples/ds2_ol/onnx/README.md      | 15 ++++++++++++-
 speechx/examples/ds2_ol/websocket/README.md | 22 +++++++++++++++++++
 .../examples/u2pp_ol/wenetspeech/README.md  | 17 ++++++++++++++
 4 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 speechx/examples/ds2_ol/websocket/README.md

diff --git a/speechx/examples/ds2_ol/aishell/README.md b/speechx/examples/ds2_ol/aishell/README.md
index 3e7af9244..9710edd6f 100644
--- a/speechx/examples/ds2_ol/aishell/README.md
+++ b/speechx/examples/ds2_ol/aishell/README.md
@@ -1,5 +1,16 @@
 # Aishell - Deepspeech2 Streaming
 
+## Tutorial
+
+## How to use in your own project
+
+> Users want to learn how to use this example in their own projects, not just reproduce the benchmark result. If users want to use this code in a project of their own, how should they do so? For example, given some audio files that need to be recognized, how do they obtain the results? Please show this in the README.
+
+> What are the requirements for the audio format?
+
+> For the binaries built by speechx (such as `recognizer_main`), how does one find the entry-point source code? How are these programs used? What are the meanings of their parameters, and what are their valid ranges? Please document this in the README, not just in `run.sh`.
+
+
 ## How to run
 
 ```

diff --git a/speechx/examples/ds2_ol/onnx/README.md b/speechx/examples/ds2_ol/onnx/README.md
index e6ab953c8..db0e05f23 100644
--- a/speechx/examples/ds2_ol/onnx/README.md
+++ b/speechx/examples/ds2_ol/onnx/README.md
@@ -1,5 +1,7 @@
 # DeepSpeech2 to ONNX model
 
+## Introduction
+
 1. convert deepspeech2 model to ONNX, using Paddle2ONNX.
 2. check paddleinference and onnxruntime output equal.
 3. optimize onnx model
@@ -31,7 +33,18 @@ onnxruntime 1.11.0
 
 bash run.sh --stage 0 --stop_stage 5
 ```
 
-For more details please see `run.sh`.
+**For more details, please see `run.sh`.**
+
+> Write more detail in the README: what is the meaning of each stage?
+
+## How to use in your own project
+
+> Users want to learn how to use this example in their own projects, not just reproduce the benchmark result. If users want to use this code in a project of their own, how should they do so? For example, given some audio files that need to be recognized, how do they obtain the results? Please show this in the README.
+
+> What are the requirements for the audio format?
+
+> For the binaries built by speechx (such as `recognizer_main`), how does one find the entry-point source code? How are these programs used? What are the meanings of their parameters, and what are their valid ranges? Please document this in the README, not just in `run.sh`.
+
 ## Outputs
 The optimized onnx model is `exp/model.opt.onnx`, quanted model is `$exp/model.optset11.quant.onnx`.

diff --git a/speechx/examples/ds2_ol/websocket/README.md b/speechx/examples/ds2_ol/websocket/README.md
new file mode 100644
index 000000000..623d6bdfc
--- /dev/null
+++ b/speechx/examples/ds2_ol/websocket/README.md
@@ -0,0 +1,22 @@
+# Deepspeech2 Websocket Demo
+
+## Introduction
+
+## Tutorial
+
+### How to start the server
+
+> For the binaries built by speechx (such as `recognizer_main`), how does one find the entry-point source code?
+
+### How to start the client
+
+> What are the requirements for the audio format?
+> What are the meanings of the parameters?
+> For the binaries built by speechx (such as `recognizer_main`), how does one find the entry-point source code?
+
+### How to write a client in another language
+
+> 1. What are the websocket protocol requirements?
+> 2. What are the meanings of the parameters?
+> 3. What are the valid ranges of the parameters?
+> 4. How does one write a client?

diff --git a/speechx/examples/u2pp_ol/wenetspeech/README.md b/speechx/examples/u2pp_ol/wenetspeech/README.md
index b90b8e201..b4093b972 100644
--- a/speechx/examples/u2pp_ol/wenetspeech/README.md
+++ b/speechx/examples/u2pp_ol/wenetspeech/README.md
@@ -1,5 +1,18 @@
 # u2/u2pp Streaming ASR
 
+## Introduction
+
+## Tutorial
+
+### How to use in your own project
+
+> Users want to learn how to use this example in their own projects, not just reproduce the benchmark result. If users want to use this code in a project of their own, how should they do so? For example, given some audio files that need to be recognized, how do they obtain the results? Please show this in the README.
+
+> What are the requirements for the audio format?
+
+> For the binaries built by speechx (such as `recognizer_main`), how does one find the entry-point source code? How are these programs used? What are the meanings of their parameters, and what are their valid ranges? Please document this in the README, not just in `run.sh`.
+
+
 ## Testing with Aishell Test Data
 
 ### Download wav and model
@@ -25,3 +38,7 @@
 ```
 ./run.sh --stage 3 --stop_stage 3
 ```
+
+### Result
+
+> Show the test results in the README.