@ -0,0 +1,355 @@
|
||||
([简体中文](./README_cn.md)|English)
|
||||
|
||||
# Speech Server
|
||||
|
||||
## Introduction
|
||||
This demo shows how to start the streaming speech service and access it. This can be done with a single command using `paddlespeech_server` and `paddlespeech_client`, or with a few lines of Python code.
|
||||
|
||||
|
||||
## Usage
|
||||
### 1. Installation
|
||||
See [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md).
|
||||
|
||||
It is recommended to use **paddlepaddle 2.2.1** or above.
|
||||
You can choose either the **medium** or **hard** way to install paddlespeech.
|
||||
|
||||
### 2. Prepare config File
|
||||
The configuration files can be found in `conf/ws_application.yaml` and `conf/ws_conformer_application.yaml`.
|
||||
|
||||
At present, the service integrates two ASR models: DeepSpeech2 and Conformer.
|
||||
|
||||
|
||||
The input of the ASR client demo should be a WAV file (`.wav`), and its sample rate must match that of the model.
|
||||
|
||||
Here is a sample file for this ASR client demo that can be downloaded:
|
||||
```bash
|
||||
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
|
||||
```
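If you want to test with your own recording, you may want to verify its sample rate first, since the default streaming models expect 16 kHz audio. Below is a minimal sketch of such a check, assuming `soundfile` and `librosa` are available in your environment (neither is required by this demo itself):

```python
import soundfile as sf
import librosa

wav_path = "./zh.wav"   # replace with your own mono recording
target_sr = 16000       # must match the model's sample rate

samples, sr = sf.read(wav_path, dtype="float32")
if sr != target_sr:
    # resample so that the server receives audio at the expected rate
    samples = librosa.resample(samples, orig_sr=sr, target_sr=target_sr)
    sf.write("./zh_16k.wav", samples, target_sr)
    print(f"resampled from {sr} Hz to {target_sr} Hz, saved to ./zh_16k.wav")
else:
    print(f"{wav_path} is already at {target_sr} Hz")
```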
|
||||
|
||||
### 3. Server Usage
|
||||
- Command Line (Recommended)
|
||||
|
||||
```bash
|
||||
# start the service
|
||||
paddlespeech_server start --config_file ./conf/ws_conformer_application.yaml
|
||||
```
|
||||
|
||||
Usage:
|
||||
|
||||
```bash
|
||||
paddlespeech_server start --help
|
||||
```
|
||||
Arguments:
|
||||
- `config_file`: yaml configuration file of the application. Default: ./conf/ws_conformer_application.yaml
|
||||
- `log_file`: log file. Default: ./log/paddlespeech.log
|
||||
|
||||
Output:
|
||||
```bash
|
||||
[2022-04-21 15:52:18,126] [ INFO] - create the online asr engine instance
|
||||
[2022-04-21 15:52:18,127] [ INFO] - paddlespeech_server set the device: cpu
|
||||
[2022-04-21 15:52:18,128] [ INFO] - Load the pretrained model, tag = conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,128] [ INFO] - File /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/asr1_chunk_conformer_multi_cn_ckpt_0.2.3.model.tar.gz md5 checking...
|
||||
[2022-04-21 15:52:18,727] [ INFO] - Use pretrained model stored in: /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/model.yaml
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/exp/chunk_conformer/checkpoints/multi_cn.pdparams
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/exp/chunk_conformer/checkpoints/multi_cn.pdparams
|
||||
[2022-04-21 15:52:19,446] [ INFO] - start to create the stream conformer asr engine
|
||||
[2022-04-21 15:52:19,473] [ INFO] - model name: conformer_online
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
[2022-04-21 15:52:21,731] [ INFO] - create the transformer like model success
|
||||
[2022-04-21 15:52:21,733] [ INFO] - Initialize ASR server engine successfully.
|
||||
INFO: Started server process [11173]
|
||||
[2022-04-21 15:52:21] [INFO] [server.py:75] Started server process [11173]
|
||||
INFO: Waiting for application startup.
|
||||
[2022-04-21 15:52:21] [INFO] [on.py:45] Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
[2022-04-21 15:52:21] [INFO] [on.py:59] Application startup complete.
|
||||
/home/users/xiongxinlei/.conda/envs/paddlespeech/lib/python3.9/asyncio/base_events.py:1460: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
|
||||
infos = await tasks.gather(*fs, loop=self)
|
||||
/home/users/xiongxinlei/.conda/envs/paddlespeech/lib/python3.9/asyncio/base_events.py:1518: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
|
||||
await tasks.sleep(0, loop=self)
|
||||
INFO: Uvicorn running on http://0.0.0.0:8090 (Press CTRL+C to quit)
|
||||
[2022-04-21 15:52:21] [INFO] [server.py:206] Uvicorn running on http://0.0.0.0:8090 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
- Python API
|
||||
```python
|
||||
from paddlespeech.server.bin.paddlespeech_server import ServerExecutor
|
||||
|
||||
server_executor = ServerExecutor()
|
||||
server_executor(
|
||||
config_file="./conf/ws_conformer_application.yaml",
|
||||
log_file="./log/paddlespeech.log")
|
||||
```
|
||||
|
||||
Output:
|
||||
```bash
|
||||
[2022-04-21 15:52:18,126] [ INFO] - create the online asr engine instance
|
||||
[2022-04-21 15:52:18,127] [ INFO] - paddlespeech_server set the device: cpu
|
||||
[2022-04-21 15:52:18,128] [ INFO] - Load the pretrained model, tag = conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,128] [ INFO] - File /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/asr1_chunk_conformer_multi_cn_ckpt_0.2.3.model.tar.gz md5 checking...
|
||||
[2022-04-21 15:52:18,727] [ INFO] - Use pretrained model stored in: /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/model.yaml
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/exp/chunk_conformer/checkpoints/multi_cn.pdparams
|
||||
[2022-04-21 15:52:18,727] [ INFO] - /home/users/xiongxinlei/.paddlespeech/models/conformer_online_multicn-zh-16k/exp/chunk_conformer/checkpoints/multi_cn.pdparams
|
||||
[2022-04-21 15:52:19,446] [ INFO] - start to create the stream conformer asr engine
|
||||
[2022-04-21 15:52:19,473] [ INFO] - model name: conformer_online
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
set kaiming_uniform
|
||||
[2022-04-21 15:52:21,731] [ INFO] - create the transformer like model success
|
||||
[2022-04-21 15:52:21,733] [ INFO] - Initialize ASR server engine successfully.
|
||||
INFO: Started server process [11173]
|
||||
[2022-04-21 15:52:21] [INFO] [server.py:75] Started server process [11173]
|
||||
INFO: Waiting for application startup.
|
||||
[2022-04-21 15:52:21] [INFO] [on.py:45] Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
[2022-04-21 15:52:21] [INFO] [on.py:59] Application startup complete.
|
||||
/home/users/xiongxinlei/.conda/envs/paddlespeech/lib/python3.9/asyncio/base_events.py:1460: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
|
||||
infos = await tasks.gather(*fs, loop=self)
|
||||
/home/users/xiongxinlei/.conda/envs/paddlespeech/lib/python3.9/asyncio/base_events.py:1518: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
|
||||
await tasks.sleep(0, loop=self)
|
||||
INFO: Uvicorn running on http://0.0.0.0:8090 (Press CTRL+C to quit)
|
||||
[2022-04-21 15:52:21] [INFO] [server.py:206] Uvicorn running on http://0.0.0.0:8090 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
|
||||
### 4. ASR Client Usage
|
||||
**Note:** The response time will be slightly longer when using the client for the first time.
|
||||
- Command Line (Recommended)
|
||||
```bash
|
||||
paddlespeech_client asr_online --server_ip 127.0.0.1 --port 8090 --input ./zh.wav
|
||||
```
|
||||
|
||||
Usage:
|
||||
|
||||
```bash
|
||||
paddlespeech_client asr_online --help
|
||||
```
|
||||
Arguments:
|
||||
- `server_ip`: server ip. Default: 127.0.0.1
|
||||
- `port`: server port. Default: 8090
|
||||
- `input` (required): Audio file to be recognized.
|
||||
- `sample_rate`: Audio sampling rate. Default: 16000.
|
||||
- `lang`: Language. Default: "zh_cn".
|
||||
- `audio_format`: Audio format. Default: "wav".
|
||||
|
||||
Output:
|
||||
```bash
|
||||
[2022-04-21 15:59:03,904] [ INFO] - receive msg={"status": "ok", "signal": "server_ready"}
|
||||
[2022-04-21 15:59:03,960] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:03,973] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:03,987] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,000] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,012] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,024] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,036] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,047] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,607] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,620] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,633] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,645] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,657] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,669] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,680] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:05,176] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,185] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,192] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,200] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,208] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,216] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,224] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,232] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,724] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,732] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,740] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,747] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,755] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,763] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,770] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:06,271] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,279] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,287] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,294] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,302] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,310] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,318] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,326] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,833] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,842] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,850] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,858] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,866] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,874] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,882] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:07,400] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,408] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,416] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,424] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,432] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,440] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,447] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,455] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,984] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:07,992] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,001] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,008] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,016] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,024] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:12,883] [ INFO] - final receive msg={'status': 'ok', 'signal': 'finished', 'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:12,884] [ INFO] - 我认为跑步最重要的就是给我带来了身体健康
|
||||
[2022-04-21 15:59:12,884] [ INFO] - Response time 9.051567 s.
|
||||
|
||||
```
|
||||
|
||||
- Python API
|
||||
```python
|
||||
from paddlespeech.server.bin.paddlespeech_client import ASRClientExecutor
|
||||
import json
|
||||
|
||||
asrclient_executor = ASRClientExecutor()
|
||||
res = asrclient_executor(
|
||||
input="./zh.wav",
|
||||
server_ip="127.0.0.1",
|
||||
port=8090,
|
||||
sample_rate=16000,
|
||||
lang="zh_cn",
|
||||
audio_format="wav")
|
||||
print(res.json())
|
||||
```
|
||||
|
||||
Output:
|
||||
```bash
|
||||
[2022-04-21 15:59:03,904] [ INFO] - receive msg={"status": "ok", "signal": "server_ready"}
|
||||
[2022-04-21 15:59:03,960] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:03,973] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:03,987] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,000] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,012] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,024] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,036] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,047] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,607] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,620] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,633] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,645] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,657] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,669] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:04,680] [ INFO] - receive msg={'asr_results': ''}
|
||||
[2022-04-21 15:59:05,176] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,185] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,192] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,200] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,208] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,216] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,224] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,232] [ INFO] - receive msg={'asr_results': '我认为跑'}
|
||||
[2022-04-21 15:59:05,724] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,732] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,740] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,747] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,755] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,763] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:05,770] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的'}
|
||||
[2022-04-21 15:59:06,271] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,279] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,287] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,294] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,302] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,310] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,318] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,326] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是'}
|
||||
[2022-04-21 15:59:06,833] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,842] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,850] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,858] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,866] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,874] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:06,882] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给'}
|
||||
[2022-04-21 15:59:07,400] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,408] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,416] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,424] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,432] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,440] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,447] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,455] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了'}
|
||||
[2022-04-21 15:59:07,984] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:07,992] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,001] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,008] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,016] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:08,024] [ INFO] - receive msg={'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:12,883] [ INFO] - final receive msg={'status': 'ok', 'signal': 'finished', 'asr_results': '我认为跑步最重要的就是给我带来了身体健康'}
|
||||
[2022-04-21 15:59:12,884] [ INFO] - 我认为跑步最重要的就是给我带来了身体健康
|
||||
```
|
@ -0,0 +1,47 @@
|
||||
# This is the parameter configuration file for PaddleSpeech Serving.
|
||||
|
||||
#################################################################################
|
||||
# SERVER SETTING #
|
||||
#################################################################################
|
||||
host: 0.0.0.0
|
||||
port: 8090
|
||||
|
||||
# The task format in the engine_list is: <speech task>_<engine type>
|
||||
# task choices = ['asr_online', 'tts_online']
|
||||
# protocol = ['websocket', 'http'] (only one can be selected).
|
||||
# websocket only supports online engine types.
|
||||
protocol: 'websocket'
|
||||
engine_list: ['asr_online']
|
||||
|
||||
|
||||
#################################################################################
|
||||
# ENGINE CONFIG #
|
||||
#################################################################################
|
||||
|
||||
################################### ASR #########################################
|
||||
################### speech task: asr; engine_type: online #######################
|
||||
asr_online:
|
||||
model_type: 'deepspeech2online_aishell'
|
||||
am_model: # the pdmodel file of am static model [optional]
|
||||
am_params: # the pdiparams file of am static model [optional]
|
||||
lang: 'zh'
|
||||
sample_rate: 16000
|
||||
cfg_path:
|
||||
decode_method:
|
||||
force_yes: True
|
||||
|
||||
am_predictor_conf:
|
||||
device: # set 'gpu:id' or 'cpu'
|
||||
switch_ir_optim: True
|
||||
glog_info: False # True -> print glog
|
||||
summary: True # False -> do not show predictor config
|
||||
|
||||
chunk_buffer_conf:
|
||||
frame_duration_ms: 80
|
||||
shift_ms: 40
|
||||
sample_rate: 16000
|
||||
sample_width: 2
|
||||
window_n: 7 # frame
|
||||
shift_n: 4 # frame
|
||||
window_ms: 20 # ms
|
||||
shift_ms: 10 # ms
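For reference, `frame_duration_ms`, `sample_rate` and `sample_width` in `chunk_buffer_conf` determine how much audio is buffered per websocket message. A quick sanity-check sketch using the numbers from this config:

```python
# values taken from chunk_buffer_conf above
frame_duration_ms = 80   # ms of audio per chunk
sample_rate = 16000      # Hz
sample_width = 2         # bytes per sample (16-bit PCM)

samples_per_chunk = sample_rate * frame_duration_ms // 1000  # 1280 samples
bytes_per_chunk = samples_per_chunk * sample_width           # 2560 bytes
print(samples_per_chunk, bytes_per_chunk)
```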
|
@ -0,0 +1,45 @@
|
||||
# This is the parameter configuration file for PaddleSpeech Serving.
|
||||
|
||||
#################################################################################
|
||||
# SERVER SETTING #
|
||||
#################################################################################
|
||||
host: 0.0.0.0
|
||||
port: 8090
|
||||
|
||||
# The task format in the engine_list is: <speech task>_<engine type>
|
||||
# task choices = ['asr_online', 'tts_online']
|
||||
# protocol = ['websocket', 'http'] (only one can be selected).
|
||||
# websocket only supports online engine types.
|
||||
protocol: 'websocket'
|
||||
engine_list: ['asr_online']
|
||||
|
||||
|
||||
#################################################################################
|
||||
# ENGINE CONFIG #
|
||||
#################################################################################
|
||||
|
||||
################################### ASR #########################################
|
||||
################### speech task: asr; engine_type: online #######################
|
||||
asr_online:
|
||||
model_type: 'conformer_online_multicn'
|
||||
am_model: # the pdmodel file of am static model [optional]
|
||||
am_params: # the pdiparams file of am static model [optional]
|
||||
lang: 'zh'
|
||||
sample_rate: 16000
|
||||
cfg_path:
|
||||
decode_method:
|
||||
force_yes: True
|
||||
device: # cpu or gpu:id
|
||||
am_predictor_conf:
|
||||
device: # set 'gpu:id' or 'cpu'
|
||||
switch_ir_optim: True
|
||||
glog_info: False # True -> print glog
|
||||
summary: True # False -> do not show predictor config
|
||||
|
||||
chunk_buffer_conf:
|
||||
window_n: 7 # frame
|
||||
shift_n: 4 # frame
|
||||
window_ms: 25 # ms
|
||||
shift_ms: 10 # ms
|
||||
sample_rate: 16000
|
||||
sample_width: 2
|
@ -0,0 +1,2 @@
|
||||
# start the streaming asr service
|
||||
paddlespeech_server start --config_file ./conf/ws_conformer_application.yaml
|
@ -0,0 +1,5 @@
|
||||
# download the test wav
|
||||
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
|
||||
|
||||
# read the wav and pass it to the service
|
||||
python3 websocket_client.py --wavfile ./zh.wav
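
# (optional) batch mode: websocket_client.py also accepts a Kaldi-style wav.scp
# via --wavscp, where each line is "<utt_name> <wav_path>"; the recognized text
# is written to result.txt. Example, assuming you prepared ./wav.scp yourself:
# python3 websocket_client.py --wavscp ./wav.scp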
|
@ -0,0 +1,62 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#!/usr/bin/python
|
||||
# -*- coding: UTF-8 -*-
|
||||
import argparse
|
||||
import asyncio
|
||||
import codecs
|
||||
import logging
|
||||
import os
|
||||
|
||||
from paddlespeech.cli.log import logger
|
||||
from paddlespeech.server.utils.audio_handler import ASRAudioHandler
|
||||
|
||||
|
||||
def main(args):
|
||||
logger.info("asr websocket client start")
|
||||
handler = ASRAudioHandler("127.0.0.1", 8090)
|
||||
loop = asyncio.get_event_loop()
|
||||
|
||||
# support to process single audio file
|
||||
if args.wavfile and os.path.exists(args.wavfile):
|
||||
logger.info(f"start to process the wavscp: {args.wavfile}")
|
||||
result = loop.run_until_complete(handler.run(args.wavfile))
|
||||
result = result["asr_results"]
|
||||
logger.info(f"asr websocket client finished : {result}")
|
||||
|
||||
# support to process batch audios from wav.scp
|
||||
if args.wavscp and os.path.exists(args.wavscp):
|
||||
logging.info(f"start to process the wavscp: {args.wavscp}")
|
||||
with codecs.open(args.wavscp, 'r', encoding='utf-8') as f,\
|
||||
codecs.open("result.txt", 'w', encoding='utf-8') as w:
|
||||
for line in f:
|
||||
utt_name, utt_path = line.strip().split()
|
||||
result = loop.run_until_complete(handler.run(utt_path))
|
||||
result = result["asr_results"]
|
||||
w.write(f"{utt_name} {result}\n")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
logger.info("Start to do streaming asr client")
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"--wavfile",
|
||||
action="store",
|
||||
help="wav file path ",
|
||||
default="./16_audio.wav")
|
||||
parser.add_argument(
|
||||
"--wavscp", type=str, default=None, help="The batch audios dict text")
|
||||
args = parser.parse_args()
|
||||
|
||||
main(args)
|
@ -0,0 +1,45 @@
|
||||
# This is the parameter configuration file for PaddleSpeech Serving.
|
||||
|
||||
#################################################################################
|
||||
# SERVER SETTING #
|
||||
#################################################################################
|
||||
host: 0.0.0.0
|
||||
port: 8090
|
||||
|
||||
# The task format in the engine_list is: <speech task>_<engine type>
|
||||
# task choices = ['asr_online', 'tts_online']
|
||||
# protocol = ['websocket', 'http'] (only one can be selected).
|
||||
# websocket only supports online engine types.
|
||||
protocol: 'websocket'
|
||||
engine_list: ['asr_online']
|
||||
|
||||
|
||||
#################################################################################
|
||||
# ENGINE CONFIG #
|
||||
#################################################################################
|
||||
|
||||
################################### ASR #########################################
|
||||
################### speech task: asr; engine_type: online #######################
|
||||
asr_online:
|
||||
model_type: 'conformer_online_multicn'
|
||||
am_model: # the pdmodel file of am static model [optional]
|
||||
am_params: # the pdiparams file of am static model [optional]
|
||||
lang: 'zh'
|
||||
sample_rate: 16000
|
||||
cfg_path:
|
||||
decode_method:
|
||||
force_yes: True
|
||||
device: # cpu or gpu:id
|
||||
am_predictor_conf:
|
||||
device: # set 'gpu:id' or 'cpu'
|
||||
switch_ir_optim: True
|
||||
glog_info: False # True -> print glog
|
||||
summary: True # False -> do not show predictor config
|
||||
|
||||
chunk_buffer_conf:
|
||||
window_n: 7 # frame
|
||||
shift_n: 4 # frame
|
||||
window_ms: 25 # ms
|
||||
shift_ms: 10 # ms
|
||||
sample_rate: 16000
|
||||
sample_width: 2
|
@ -0,0 +1,130 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from collections import defaultdict
|
||||
|
||||
import paddle
|
||||
|
||||
from paddlespeech.cli.log import logger
|
||||
from paddlespeech.s2t.utils.utility import log_add
|
||||
|
||||
__all__ = ['CTCPrefixBeamSearch']
|
||||
|
||||
|
||||
class CTCPrefixBeamSearch:
|
||||
def __init__(self, config):
|
||||
"""Implement the ctc prefix beam search
|
||||
|
||||
Args:
|
||||
config (yacs.config.CfgNode): the decoding configuration, which provides the beam_size used by the search
|
||||
"""
|
||||
self.config = config
|
||||
self.reset()
|
||||
|
||||
@paddle.no_grad()
|
||||
def search(self, ctc_probs, device, blank_id=0):
|
||||
"""ctc prefix beam search method decode a chunk feature
|
||||
|
||||
Args:
|
||||
|
||||
ctc_probs (paddle.Tensor): the ctc probability of all the tokens
|
||||
device (paddle.fluid.core_avx.Place): the feature host device, such as CUDAPlace(0).
|
||||
blank_id (int, optional): the blank id in the vocab. Defaults to 0.
|
||||
|
||||
Returns:
|
||||
list: the search result
|
||||
"""
|
||||
# decode
|
||||
logger.info("start to ctc prefix search")
|
||||
|
||||
batch_size = 1
|
||||
beam_size = self.config.beam_size
|
||||
maxlen = ctc_probs.shape[0]
|
||||
|
||||
assert len(ctc_probs.shape) == 2
|
||||
|
||||
# cur_hyps: (prefix, (blank_ending_score, none_blank_ending_score))
|
||||
# blank_ending_score and none_blank_ending_score in ln domain
|
||||
if self.cur_hyps is None:
|
||||
self.cur_hyps = [(tuple(), (0.0, -float('inf')))]
|
||||
# 2. CTC beam search step by step
|
||||
for t in range(0, maxlen):
|
||||
logp = ctc_probs[t] # (vocab_size,)
|
||||
# key: prefix, value (pb, pnb), default value(-inf, -inf)
|
||||
next_hyps = defaultdict(lambda: (-float('inf'), -float('inf')))
|
||||
|
||||
# 2.1 First beam prune: select topk best
|
||||
# do token passing process
|
||||
top_k_logp, top_k_index = logp.topk(beam_size) # (beam_size,)
|
||||
for s in top_k_index:
|
||||
s = s.item()
|
||||
ps = logp[s].item()
|
||||
for prefix, (pb, pnb) in self.cur_hyps:
|
||||
last = prefix[-1] if len(prefix) > 0 else None
|
||||
if s == blank_id: # blank
|
||||
n_pb, n_pnb = next_hyps[prefix]
|
||||
n_pb = log_add([n_pb, pb + ps, pnb + ps])
|
||||
next_hyps[prefix] = (n_pb, n_pnb)
|
||||
elif s == last:
|
||||
# Update *ss -> *s;
|
||||
n_pb, n_pnb = next_hyps[prefix]
|
||||
n_pnb = log_add([n_pnb, pnb + ps])
|
||||
next_hyps[prefix] = (n_pb, n_pnb)
|
||||
# Update *s-s -> *ss, - is for blank
|
||||
n_prefix = prefix + (s, )
|
||||
n_pb, n_pnb = next_hyps[n_prefix]
|
||||
n_pnb = log_add([n_pnb, pb + ps])
|
||||
next_hyps[n_prefix] = (n_pb, n_pnb)
|
||||
else:
|
||||
n_prefix = prefix + (s, )
|
||||
n_pb, n_pnb = next_hyps[n_prefix]
|
||||
n_pnb = log_add([n_pnb, pb + ps, pnb + ps])
|
||||
next_hyps[n_prefix] = (n_pb, n_pnb)
|
||||
|
||||
# 2.2 Second beam prune
|
||||
next_hyps = sorted(
|
||||
next_hyps.items(),
|
||||
key=lambda x: log_add(list(x[1])),
|
||||
reverse=True)
|
||||
self.cur_hyps = next_hyps[:beam_size]
|
||||
|
||||
self.hyps = [(y[0], log_add([y[1][0], y[1][1]])) for y in self.cur_hyps]
|
||||
logger.info("ctc prefix search success")
|
||||
return self.hyps
|
||||
|
||||
def get_one_best_hyps(self):
|
||||
"""Return the one best result
|
||||
|
||||
Returns:
|
||||
list: the one best result
|
||||
"""
|
||||
return [self.hyps[0][0]]
|
||||
|
||||
def get_hyps(self):
|
||||
"""Return the search hyps
|
||||
|
||||
Returns:
|
||||
list: return the search hyps
|
||||
"""
|
||||
return self.hyps
|
||||
|
||||
def reset(self):
|
||||
"""Rest the search cache value
|
||||
"""
|
||||
self.cur_hyps = None
|
||||
self.hyps = None
|
||||
|
||||
def finalize_search(self):
|
||||
"""do nothing in ctc_prefix_beam_search
|
||||
"""
|
||||
pass
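

# Illustrative usage sketch (not part of the original file): feed the searcher
# one chunk of CTC log-probabilities at a time; `cur_hyps` carries the prefix
# state across chunks, and `reset()` clears it for the next utterance.
if __name__ == "__main__":
    from types import SimpleNamespace

    searcher = CTCPrefixBeamSearch(SimpleNamespace(beam_size=5))
    for _ in range(2):  # pretend we received two streaming chunks
        # hypothetical chunk: 10 frames over a 20-token vocabulary
        chunk_log_probs = paddle.nn.functional.log_softmax(
            paddle.randn([10, 20]), axis=-1)
        searcher.search(chunk_log_probs, device=None, blank_id=0)
    print(searcher.get_one_best_hyps())
    searcher.reset()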
|
@ -0,0 +1,13 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
@ -0,0 +1,13 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
@ -0,0 +1,13 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
@ -0,0 +1,35 @@
|
||||
([简体中文](./README_cn.md)|English)
|
||||
|
||||
# Speech Service
|
||||
|
||||
## Introduction
|
||||
|
||||
This document introduces a microphone client for the streaming ASR service.
|
||||
|
||||
|
||||
## Usage
|
||||
### 1. Install
|
||||
See [installation](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md).
|
||||
|
||||
|
||||
It is recommended to use **paddlepaddle 2.2.1** or above.
|
||||
You can choose either the **medium** or **hard** way to install paddlespeech.
|
||||
|
||||
|
||||
### 2. Prepare config File
|
||||
|
||||
|
||||
The input of the ASR client demo should be a WAV file (`.wav`), and its sample rate must match that of the model.
|
||||
|
||||
Here is a sample file for this ASR client demo that can be downloaded:
|
||||
```bash
|
||||
wget -c https://paddlespeech.bj.bcebos.com/PaddleAudio/zh.wav
|
||||
```
|
||||
|
||||
### 3. Streaming ASR Client Usage
|
||||
|
||||
- microphone
|
||||
```bash
|
||||
python microphone_client.py
|
||||
|
||||
```
|
@ -1,136 +0,0 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#!/usr/bin/python
|
||||
# -*- coding: UTF-8 -*-
|
||||
import argparse
|
||||
import asyncio
|
||||
import codecs
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
|
||||
import numpy as np
|
||||
import soundfile
|
||||
import websockets
|
||||
|
||||
|
||||
class ASRAudioHandler:
|
||||
def __init__(self, url="127.0.0.1", port=8090):
|
||||
self.url = url
|
||||
self.port = port
|
||||
self.url = "ws://" + self.url + ":" + str(self.port) + "/ws/asr"
|
||||
|
||||
def read_wave(self, wavfile_path: str):
|
||||
samples, sample_rate = soundfile.read(wavfile_path, dtype='int16')
|
||||
x_len = len(samples)
|
||||
# chunk_stride = 40 * 16 #40ms, sample_rate = 16kHz
|
||||
chunk_size = 80 * 16 #80ms, sample_rate = 16kHz
|
||||
|
||||
if x_len % chunk_size != 0:
|
||||
padding_len_x = chunk_size - x_len % chunk_size
|
||||
else:
|
||||
padding_len_x = 0
|
||||
|
||||
padding = np.zeros((padding_len_x), dtype=samples.dtype)
|
||||
padded_x = np.concatenate([samples, padding], axis=0)
|
||||
|
||||
assert (x_len + padding_len_x) % chunk_size == 0
|
||||
num_chunk = (x_len + padding_len_x) / chunk_size
|
||||
num_chunk = int(num_chunk)
|
||||
|
||||
for i in range(0, num_chunk):
|
||||
start = i * chunk_size
|
||||
end = start + chunk_size
|
||||
x_chunk = padded_x[start:end]
|
||||
yield x_chunk
|
||||
|
||||
async def run(self, wavfile_path: str):
|
||||
logging.info("send a message to the server")
|
||||
async with websockets.connect(self.url) as ws:
|
||||
audio_info = json.dumps(
|
||||
{
|
||||
"name": "test.wav",
|
||||
"signal": "start",
|
||||
"nbest": 5
|
||||
},
|
||||
sort_keys=True,
|
||||
indent=4,
|
||||
separators=(',', ': '))
|
||||
await ws.send(audio_info)
|
||||
msg = await ws.recv()
|
||||
logging.info("receive msg={}".format(msg))
|
||||
|
||||
# send chunk audio data to engine
|
||||
for chunk_data in self.read_wave(wavfile_path):
|
||||
await ws.send(chunk_data.tobytes())
|
||||
msg = await ws.recv()
|
||||
msg = json.loads(msg)
|
||||
logging.info("receive msg={}".format(msg))
|
||||
|
||||
result = msg
|
||||
# finished
|
||||
audio_info = json.dumps(
|
||||
{
|
||||
"name": "test.wav",
|
||||
"signal": "end",
|
||||
"nbest": 5
|
||||
},
|
||||
sort_keys=True,
|
||||
indent=4,
|
||||
separators=(',', ': '))
|
||||
await ws.send(audio_info)
|
||||
msg = await ws.recv()
|
||||
msg = json.loads(msg)
|
||||
logging.info("receive msg={}".format(msg))
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def main(args):
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logging.info("asr websocket client start")
|
||||
handler = ASRAudioHandler("127.0.0.1", 8090)
|
||||
loop = asyncio.get_event_loop()
|
||||
|
||||
# support to process single audio file
|
||||
if args.wavfile and os.path.exists(args.wavfile):
|
||||
logging.info(f"start to process the wavscp: {args.wavfile}")
|
||||
result = loop.run_until_complete(handler.run(args.wavfile))
|
||||
result = result["asr_results"]
|
||||
logging.info(f"asr websocket client finished : {result}")
|
||||
|
||||
# support to process batch audios from wav.scp
|
||||
if args.wavscp and os.path.exists(args.wavscp):
|
||||
logging.info(f"start to process the wavscp: {args.wavscp}")
|
||||
with codecs.open(args.wavscp, 'r', encoding='utf-8') as f,\
|
||||
codecs.open("result.txt", 'w', encoding='utf-8') as w:
|
||||
for line in f:
|
||||
utt_name, utt_path = line.strip().split()
|
||||
result = loop.run_until_complete(handler.run(utt_path))
|
||||
result = result["asr_results"]
|
||||
w.write(f"{utt_name} {result}\n")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"--wavfile",
|
||||
action="store",
|
||||
help="wav file path ",
|
||||
default="./16_audio.wav")
|
||||
parser.add_argument(
|
||||
"--wavscp", type=str, default=None, help="The batch audios dict text")
|
||||
args = parser.parse_args()
|
||||
|
||||
main(args)
|
@ -0,0 +1,76 @@
|
||||
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import paddle
|
||||
import paddle.nn as nn
|
||||
from paddle.autograd import PyLayer
|
||||
|
||||
|
||||
class GradientReversalFunction(PyLayer):
|
||||
"""Gradient Reversal Layer from:
|
||||
Unsupervised Domain Adaptation by Backpropagation (Ganin & Lempitsky, 2015)
|
||||
|
||||
Forward pass is the identity function. In the backward pass,
|
||||
the upstream gradients are multiplied by -lambda (i.e. gradient is reversed)
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
def forward(ctx, x, lambda_=1):
|
||||
"""Forward in networks
|
||||
"""
|
||||
ctx.save_for_backward(lambda_)
|
||||
return x.clone()
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, grads):
|
||||
"""Backward in networks
|
||||
"""
|
||||
lambda_, = ctx.saved_tensor()
|
||||
dx = -lambda_ * grads
|
||||
return dx
|
||||
|
||||
|
||||
class GradientReversalLayer(nn.Layer):
|
||||
"""Gradient Reversal Layer from:
|
||||
Unsupervised Domain Adaptation by Backpropagation (Ganin & Lempitsky, 2015)
|
||||
|
||||
Forward pass is the identity function. In the backward pass,
|
||||
the upstream gradients are multiplied by -lambda (i.e. gradient is reversed)
|
||||
"""
|
||||
|
||||
def __init__(self, lambda_=1):
|
||||
super(GradientReversalLayer, self).__init__()
|
||||
self.lambda_ = lambda_
|
||||
|
||||
def forward(self, x):
|
||||
"""Forward in networks
|
||||
"""
|
||||
return GradientReversalFunction.apply(x, self.lambda_)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
paddle.set_device("cpu")
|
||||
|
||||
data = paddle.randn([2, 3], dtype="float64")
|
||||
data.stop_gradient = False
|
||||
grl = GradientReversalLayer(1)
|
||||
out = grl(data)
|
||||
out.mean().backward()
|
||||
print(data.grad)
|
||||
|
||||
data = paddle.randn([2, 3], dtype="float64")
|
||||
data.stop_gradient = False
|
||||
grl = GradientReversalLayer(-1)
|
||||
out = grl(data)
|
||||
out.mean().backward()
|
||||
print(data.grad)
|
@ -0,0 +1,88 @@
|
||||
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
#include "decoder/recognizer.h"
|
||||
#include "decoder/param.h"
|
||||
#include "kaldi/feat/wave-reader.h"
|
||||
#include "kaldi/util/table-types.h"
|
||||
|
||||
DEFINE_string(wav_rspecifier, "", "test feature rspecifier");
|
||||
DEFINE_string(result_wspecifier, "", "test result wspecifier");
|
||||
|
||||
int main(int argc, char* argv[]) {
|
||||
gflags::ParseCommandLineFlags(&argc, &argv, false);
|
||||
google::InitGoogleLogging(argv[0]);
|
||||
|
||||
ppspeech::RecognizerResource resource = ppspeech::InitRecognizerResoure();
|
||||
ppspeech::Recognizer recognizer(resource);
|
||||
|
||||
kaldi::SequentialTableReader<kaldi::WaveHolder> wav_reader(
|
||||
FLAGS_wav_rspecifier);
|
||||
kaldi::TokenWriter result_writer(FLAGS_result_wspecifier);
|
||||
int sample_rate = 16000;
|
||||
float streaming_chunk = FLAGS_streaming_chunk;
|
||||
int chunk_sample_size = streaming_chunk * sample_rate;
|
||||
LOG(INFO) << "sr: " << sample_rate;
|
||||
LOG(INFO) << "chunk size (s): " << streaming_chunk;
|
||||
LOG(INFO) << "chunk size (sample): " << chunk_sample_size;
|
||||
|
||||
int32 num_done = 0, num_err = 0;
|
||||
|
||||
for (; !wav_reader.Done(); wav_reader.Next()) {
|
||||
std::string utt = wav_reader.Key();
|
||||
const kaldi::WaveData& wave_data = wav_reader.Value();
|
||||
|
||||
int32 this_channel = 0;
|
||||
kaldi::SubVector<kaldi::BaseFloat> waveform(wave_data.Data(),
|
||||
this_channel);
|
||||
int tot_samples = waveform.Dim();
|
||||
LOG(INFO) << "wav len (sample): " << tot_samples;
|
||||
|
||||
int sample_offset = 0;
|
||||
std::vector<kaldi::Vector<BaseFloat>> feats;
|
||||
int feature_rows = 0;
|
||||
while (sample_offset < tot_samples) {
|
||||
int cur_chunk_size =
|
||||
std::min(chunk_sample_size, tot_samples - sample_offset);
|
||||
|
||||
kaldi::Vector<kaldi::BaseFloat> wav_chunk(cur_chunk_size);
|
||||
for (int i = 0; i < cur_chunk_size; ++i) {
|
||||
wav_chunk(i) = waveform(sample_offset + i);
|
||||
}
|
||||
// wav_chunk = waveform.Range(sample_offset + i, cur_chunk_size);
|
||||
|
||||
recognizer.Accept(wav_chunk);
|
||||
if (cur_chunk_size < chunk_sample_size) {
|
||||
recognizer.SetFinished();
|
||||
}
|
||||
recognizer.Decode();
|
||||
|
||||
// no overlap
|
||||
sample_offset += cur_chunk_size;
|
||||
}
|
||||
|
||||
std::string result;
|
||||
result = recognizer.GetFinalResult();
|
||||
recognizer.Reset();
|
||||
if (result.empty()) {
|
||||
// the TokenWriter can not write empty string.
|
||||
++num_err;
|
||||
KALDI_LOG << " the result of " << utt << " is empty";
|
||||
continue;
|
||||
}
|
||||
KALDI_LOG << " the result of " << utt << " is " << result;
|
||||
result_writer.Write(utt, result);
|
||||
++num_done;
|
||||
}
|
||||
}
|
@ -0,0 +1,2 @@
|
||||
data
|
||||
exp
|
@ -0,0 +1,9 @@
|
||||
cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
|
||||
|
||||
add_executable(websocket_server_main ${CMAKE_CURRENT_SOURCE_DIR}/websocket_server_main.cc)
|
||||
target_include_directories(websocket_server_main PRIVATE ${SPEECHX_ROOT} ${SPEECHX_ROOT}/kaldi)
|
||||
target_link_libraries(websocket_server_main PUBLIC frontend kaldi-feat-common nnet decoder fst utils gflags glog kaldi-base kaldi-matrix kaldi-util kaldi-decoder websocket ${DEPS})
|
||||
|
||||
add_executable(websocket_client_main ${CMAKE_CURRENT_SOURCE_DIR}/websocket_client_main.cc)
|
||||
target_include_directories(websocket_client_main PRIVATE ${SPEECHX_ROOT} ${SPEECHX_ROOT}/kaldi)
|
||||
target_link_libraries(websocket_client_main PUBLIC frontend kaldi-feat-common nnet decoder fst utils gflags glog kaldi-base kaldi-matrix kaldi-util kaldi-decoder websocket ${DEPS})
|
@ -0,0 +1,14 @@
|
||||
# This file contains the locations of the binaries built for running the examples.
|
||||
|
||||
SPEECHX_ROOT=$PWD/../../..
|
||||
SPEECHX_EXAMPLES=$SPEECHX_ROOT/build/examples
|
||||
|
||||
SPEECHX_TOOLS=$SPEECHX_ROOT/tools
|
||||
TOOLS_BIN=$SPEECHX_TOOLS/valgrind/install/bin
|
||||
|
||||
[ -d $SPEECHX_EXAMPLES ] || { echo "Error: 'build/examples' directory not found. Please ensure that the project was built successfully."; }
|
||||
|
||||
export LC_ALL=C
|
||||
|
||||
SPEECHX_BIN=$SPEECHX_EXAMPLES/ds2_ol/websocket:$SPEECHX_EXAMPLES/ds2_ol/feat
|
||||
export PATH=$PATH:$SPEECHX_BIN:$TOOLS_BIN
|
@ -0,0 +1,37 @@
|
||||
#!/bin/bash
|
||||
set +x
|
||||
set -e
|
||||
|
||||
. path.sh
|
||||
|
||||
# 1. compile
|
||||
if [ ! -d ${SPEECHX_EXAMPLES} ]; then
|
||||
pushd ${SPEECHX_ROOT}
|
||||
bash build.sh
|
||||
popd
|
||||
fi
|
||||
|
||||
# input
|
||||
mkdir -p data
|
||||
data=$PWD/data
|
||||
ckpt_dir=$data/model
|
||||
model_dir=$ckpt_dir/exp/deepspeech2_online/checkpoints/
|
||||
vocb_dir=$ckpt_dir/data/lang_char
|
||||
# output
|
||||
aishell_wav_scp=aishell_test.scp
|
||||
if [ ! -d $data/test ]; then
|
||||
pushd $data
|
||||
wget -c https://paddlespeech.bj.bcebos.com/s2t/paddle_asr_online/aishell_test.zip
|
||||
unzip aishell_test.zip
|
||||
popd
|
||||
|
||||
realpath $data/test/*/*.wav > $data/wavlist
|
||||
awk -F '/' '{ print $(NF) }' $data/wavlist | awk -F '.' '{ print $1 }' > $data/utt_id
|
||||
paste $data/utt_id $data/wavlist > $data/$aishell_wav_scp
|
||||
fi
|
||||
|
||||
export GLOG_logtostderr=1
|
||||
|
||||
# websocket client
|
||||
websocket_client_main \
|
||||
--wav_rspecifier=scp:$data/$aishell_wav_scp --streaming_chunk=0.36
|
@ -0,0 +1,82 @@
|
||||
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
#include "websocket/websocket_client.h"
|
||||
#include "kaldi/feat/wave-reader.h"
|
||||
#include "kaldi/util/kaldi-io.h"
|
||||
#include "kaldi/util/table-types.h"
|
||||
|
||||
DEFINE_string(host, "127.0.0.1", "host of websocket server");
|
||||
DEFINE_int32(port, 8082, "port of websocket server");
|
||||
DEFINE_string(wav_rspecifier, "", "test wav scp path");
|
||||
DEFINE_double(streaming_chunk, 0.1, "streaming feature chunk size");
|
||||
|
||||
using kaldi::int16;
|
||||
int main(int argc, char* argv[]) {
|
||||
gflags::ParseCommandLineFlags(&argc, &argv, false);
|
||||
google::InitGoogleLogging(argv[0]);
|
||||
|
||||
kaldi::SequentialTableReader<kaldi::WaveHolder> wav_reader(
|
||||
FLAGS_wav_rspecifier);
|
||||
|
||||
const int sample_rate = 16000;
|
||||
const float streaming_chunk = FLAGS_streaming_chunk;
|
||||
const int chunk_sample_size = streaming_chunk * sample_rate;
|
||||
|
||||
for (; !wav_reader.Done(); wav_reader.Next()) {
|
||||
ppspeech::WebSocketClient client(FLAGS_host, FLAGS_port);
|
||||
|
||||
client.SendStartSignal();
|
||||
std::string utt = wav_reader.Key();
|
||||
const kaldi::WaveData& wave_data = wav_reader.Value();
|
||||
CHECK_EQ(wave_data.SampFreq(), sample_rate);
|
||||
|
||||
int32 this_channel = 0;
|
||||
kaldi::SubVector<kaldi::BaseFloat> waveform(wave_data.Data(),
|
||||
this_channel);
|
||||
const int tot_samples = waveform.Dim();
|
||||
int sample_offset = 0;
|
||||
|
||||
while (sample_offset < tot_samples) {
|
||||
int cur_chunk_size =
|
||||
std::min(chunk_sample_size, tot_samples - sample_offset);
|
||||
|
||||
std::vector<int16> wav_chunk(cur_chunk_size);
|
||||
for (int i = 0; i < cur_chunk_size; ++i) {
|
||||
wav_chunk[i] = static_cast<int16>(waveform(sample_offset + i));
|
||||
}
|
||||
client.SendBinaryData(wav_chunk.data(),
|
||||
wav_chunk.size() * sizeof(int16));
|
||||
|
||||
|
||||
sample_offset += cur_chunk_size;
|
||||
LOG(INFO) << "Send " << cur_chunk_size << " samples";
|
||||
std::this_thread::sleep_for(
|
||||
std::chrono::milliseconds(static_cast<int>(1 * 1000)));
|
||||
|
||||
if (cur_chunk_size < chunk_sample_size) {
|
||||
client.SendEndSignal();
|
||||
}
|
||||
}
|
||||
|
||||
while (!client.Done()) {
|
||||
}
|
||||
std::string result = client.GetResult();
|
||||
LOG(INFO) << "utt: " << utt << " " << result;
|
||||
|
||||
client.Join();
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
@ -0,0 +1,57 @@
|
||||
#!/bin/bash
|
||||
set +x
|
||||
set -e
|
||||
|
||||
. path.sh
|
||||
|
||||
|
||||
# 1. compile
|
||||
if [ ! -d ${SPEECHX_EXAMPLES} ]; then
|
||||
pushd ${SPEECHX_ROOT}
|
||||
bash build.sh
|
||||
popd
|
||||
fi
|
||||
|
||||
# input
|
||||
mkdir -p data
|
||||
data=$PWD/data
|
||||
ckpt_dir=$data/model
|
||||
model_dir=$ckpt_dir/exp/deepspeech2_online/checkpoints/
|
||||
vocb_dir=$ckpt_dir/data/lang_char/
|
||||
|
||||
if [ ! -f $ckpt_dir/data/mean_std.json ]; then
|
||||
mkdir -p $ckpt_dir
|
||||
pushd $ckpt_dir
|
||||
wget -c https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_online_aishell_ckpt_0.2.0.model.tar.gz
|
||||
tar xzfv asr0_deepspeech2_online_aishell_ckpt_0.2.0.model.tar.gz
|
||||
popd
|
||||
fi
|
||||
|
||||
export GLOG_logtostderr=1
|
||||
|
||||
# 3. gen cmvn
|
||||
cmvn=$data/cmvn.ark
|
||||
cmvn-json2kaldi --json_file=$ckpt_dir/data/mean_std.json --cmvn_write_path=$cmvn
|
||||
|
||||
|
||||
wfst=$data/wfst/
|
||||
mkdir -p $wfst
|
||||
if [ ! -f $wfst/aishell_graph.zip ]; then
|
||||
pushd $wfst
|
||||
wget -c https://paddlespeech.bj.bcebos.com/s2t/paddle_asr_online/aishell_graph.zip
|
||||
unzip aishell_graph.zip
|
||||
mv aishell_graph/* $wfst
|
||||
popd
|
||||
fi
|
||||
|
||||
# 5. test websocket server
|
||||
websocket_server_main \
|
||||
--cmvn_file=$cmvn \
|
||||
--model_path=$model_dir/avg_1.jit.pdmodel \
|
||||
--streaming_chunk=0.1 \
|
||||
--convert2PCM32=true \
|
||||
--param_path=$model_dir/avg_1.jit.pdiparams \
|
||||
--word_symbol_table=$data/wfst/words.txt \
|
||||
--model_output_names=softmax_0.tmp_0,tmp_5,concat_0.tmp_0,concat_1.tmp_0 \
|
||||
--graph_path=$data/wfst/TLG.fst --max_active=7500 \
|
||||
--acoustic_scale=1.2
|
@ -0,0 +1,30 @@
|
||||
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
#include "websocket/websocket_server.h"
|
||||
#include "decoder/param.h"
|
||||
|
||||
DEFINE_int32(port, 8082, "websocket listening port");
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
gflags::ParseCommandLineFlags(&argc, &argv, false);
|
||||
google::InitGoogleLogging(argv[0]);
|
||||
|
||||
ppspeech::RecognizerResource resource = ppspeech::InitRecognizerResoure();
|
||||
|
||||
ppspeech::WebSocketServer server(FLAGS_port, resource);
|
||||
LOG(INFO) << "Listening at port " << FLAGS_port;
|
||||
server.Start();
|
||||
return 0;
|
||||
}
|
@ -0,0 +1,30 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -eo pipefail
|
||||
|
||||
data=$1
|
||||
scp=$2
|
||||
split_name=$3
|
||||
numsplit=$4
|
||||
|
||||
# save in $data/split{n}
|
||||
# $scp to split
|
||||
#
|
||||
|
||||
if [[ ! $numsplit -gt 0 ]]; then
|
||||
echo "Invalid num-split argument";
|
||||
exit 1;
|
||||
fi
|
||||
|
||||
directories=$(for n in `seq $numsplit`; do echo $data/split${numsplit}/$n; done)
|
||||
scp_splits=$(for n in `seq $numsplit`; do echo $data/split${numsplit}/$n/${split_name}; done)
|
||||
|
||||
# if this mkdir fails due to argument-list being too long, iterate.
|
||||
if ! mkdir -p $directories >&/dev/null; then
|
||||
for n in `seq $numsplit`; do
|
||||
mkdir -p $data/split${numsplit}/$n
|
||||
done
|
||||
fi
|
||||
|
||||
echo "utils/split_scp.pl $scp $scp_splits"
|
||||
utils/split_scp.pl $scp $scp_splits
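
# Example invocation (hypothetical paths; assumes this script is saved as
# local/split_data.sh): split data/aishell_test.scp into 8 pieces under
# data/split8/{1..8}/aishell_test.scp for parallel decoding:
#   local/split_data.sh data data/aishell_test.scp aishell_test.scp 8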
|