fix code style and readme

pull/2528/head
HexToString 3 years ago
parent 7cf94a3693
commit 05e57d8d10

@@ -7,54 +7,61 @@ This demo is an implementation of starting the streaming speech synthesis service
`Server` must be started inside the docker container, while `Client` does not have to be.
**The `streaming_tts_serving` directory under this article's path (`$PWD`) contains the model's configuration and code, which must be mapped into the docker container for use.**
## Usage
### 1. Server
#### 1.1 Docker
```bash
docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker exec -it -u root fastdeploy bash
```
#### 1.2 Installation (inside the docker)
```bash
apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
pip3 install paddlespeech
export LC_ALL="zh_CN.UTF-8"
export LANG="zh_CN.UTF-8"
export LANGUAGE="zh_CN:zh:en_US:en"
```
#### 1.3 Download models (inside the docker)
```bash
cd /models/streaming_tts_serving/1
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
unzip mb_melgan_csmsc_onnx_0.2.0.zip
```
**For convenience, we recommend using the `-v` option of `docker run` (as in step 1.1) to map `$PWD` (`streaming_tts_serving`, together with the model configuration and code it contains) to the path `/models` inside the docker container. You may use another method, but whichever you choose, the final model directory and structure inside the container must match the figure below.**
<p align="center">
<img src="./tree.png" />
</p>
#### 1.4 Start the server (inside the docker)
```bash
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving
```
**The default ports are 8000 (http), 8001 (grpc), and 8002 (metrics). To change them, append `--http-port 9000 --grpc-port 9001 --metrics-port 9002` to the command above.**
Arguments:
- `model-repository` (required): Path where the models are stored.
- `model-control-mode` (required): How models are loaded. At present, use `explicit`.
- `load-model` (required): Name of the model to be loaded.
- `http-port` (optional): Port for the http service. Default: `8000`. Not used in this example.
- `grpc-port` (optional): Port for the grpc service. Default: `8001`.
- `metrics-port` (optional): Port for the metrics service. Default: `8002`. Not used in this example.
### 2. Client
#### 2.1 Installation
```bash
pip3 install tritonclient[all]
```
#### 2.2 Send request
```bash
python3 /models/streaming_tts_serving/stream_client.py
```
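`stream_client.py` drives a streaming gRPC request: the client registers a callback, the server pushes partial results (an audio chunk plus a status flag), and the main thread drains them until the final chunk arrives. A minimal sketch of that consumer pattern, with a plain `queue.Queue` and a fake producer thread standing in for the gRPC stream (chunk values and field names are illustrative, not the real server protocol):

```python
import queue
import threading

# Stands in for the result queue that the real client's gRPC callback fills.
result_queue = queue.Queue()

def callback(result, error):
    # In the real client, tritonclient invokes this for every partial
    # response; here we simply enqueue whatever the producer hands us.
    result_queue.put(result if error is None else error)

def fake_server():
    # Pretend the server streams three audio chunks; status 1 marks the last.
    for i, status in enumerate([0, 0, 1]):
        callback({"sub_wav": [0.1 * i], "status": status}, None)

threading.Thread(target=fake_server).start()

chunks = []
while True:
    item = result_queue.get()     # block until the next partial result
    chunks.append(item["sub_wav"])
    if item["status"] == 1:       # final chunk received: stop waiting
        break

print(len(chunks))
```

The real client follows the same shape, with `tritonclient`'s streaming API producing the items instead of a local thread.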

@@ -3,58 +3,65 @@
# Streaming speech synthesis service
## Introduction
This article describes how to build a streaming speech synthesis service with FastDeploy.
The `Server` must be started inside the docker container, while the `Client` does not have to be.
**The `streaming_tts_serving` directory under this article's path (`$PWD`) contains the model's configuration and code (which the server loads to start the service) and must be mapped into the docker container for use.**
## Usage
### 1. Server
#### 1.1 Docker
```bash
docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker exec -it -u root fastdeploy bash
```
#### 1.2 Installation (inside the docker)
```bash
apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
pip3 install paddlespeech
export LC_ALL="zh_CN.UTF-8"
export LANG="zh_CN.UTF-8"
export LANGUAGE="zh_CN:zh:en_US:en"
```
#### 1.3 Download models (inside the docker)
```bash
cd /models/streaming_tts_serving/1
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
unzip mb_melgan_csmsc_onnx_0.2.0.zip
```
**For convenience, we recommend using the `-v` option of `docker run` from step 1.1 to map `$PWD` (`streaming_tts_serving` and the model configuration and code it contains) to the path `/models` inside the docker container. You may use another method, but whichever you choose, the final model directory and structure inside the container must match the figure below.**
<p align="center">
<img src="./tree.png" />
</p>
#### 1.4 Start the server (inside the docker)
```bash
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving
```
**The default ports are 8000 (http), 8001 (grpc), and 8002 (metrics). To change them, append `--http-port 9000 --grpc-port 9001 --metrics-port 9002` to the command above.**
Arguments:
- `model-repository` (required): Path where the whole streaming_tts_serving model is stored.
- `model-control-mode` (required): How models are loaded. At present, use `explicit`.
- `load-model` (required): Name of the model to be loaded.
- `http-port` (optional): Port for the HTTP service. Default: `8000`. Not used in this example.
- `grpc-port` (optional): Port for the gRPC service. Default: `8001`.
- `metrics-port` (optional): Port for the server metrics. Default: `8002`. Not used in this example.
### 2. Client
#### 2.1 Installation
```bash
pip3 install tritonclient[all]
```
#### 2.2 Send request
```bash
python3 /models/streaming_tts_serving/stream_client.py
```

@@ -20,7 +20,7 @@ am_pad = 12
```python
voc_upsample = 300
# model path
dir_name = "/models/streaming_tts_serving/1/"
phones_dict = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/phone_id_map.txt"
am_stat_path = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/speech_stats.npy"
```
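The trailing slash added to `dir_name` in this hunk matters because the file paths are built by plain string concatenation; without it, the directory name would fuse with the next path component. A sketch (with a shortened, illustrative file name) showing the failure mode and how `os.path.join` sidesteps it:

```python
import os

dir_name = "/models/streaming_tts_serving/1"  # note: no trailing slash

# Plain concatenation silently produces a wrong path ("...1subdir/..."):
broken = dir_name + "subdir/phone_id_map.txt"

# os.path.join inserts the separator regardless of any trailing slash:
fixed = os.path.join(dir_name, "subdir", "phone_id_map.txt")

print(broken)
print(fixed)
```

Using `os.path.join` (or `pathlib.Path`) would make the code robust whether or not `dir_name` ends in a slash.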

@@ -74,7 +74,6 @@ if __name__ == '__main__':
```python
values = ["哈哈哈哈"]
request_id = "0"
string_result0_list = []
```
@@ -111,7 +110,7 @@ if __name__ == '__main__':
```python
status = data_item.as_numpy('status')
print('sub_wav = ', sub_wav, "subwav.shape = ", sub_wav.shape)
print('status = ', status)
if status[0] == 1:
    break
recv_count += 1
```
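The change from `if status[0] is True:` to `if status[0] == 1:` is a bug fix, not a style tweak: `status` comes from `as_numpy`, so `status[0]` is a NumPy scalar, which is never the same object as the singleton `True`, and the identity test never fires. The same pitfall can be shown without NumPy, using a plain integer flag (names here are illustrative):

```python
# Any truthy value that is not the bool singleton, such as a NumPy scalar
# or a plain int, trips up an identity check against True.
flag = 1

identity_check = flag is True   # compares object identity: False here
equality_check = flag == 1      # compares values: True

print(identity_check, equality_check)
```

This is why linters flag `is True` comparisons: `is` tests identity, and only the literal `True` object passes it.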
