This demo shows how to start the streaming speech synthesis service.
`Server` must be started in the docker, while `Client` does not have to be in the docker.
We assume that the absolute path of your model and code (which will be loaded by the `Server`) on the host is `$PWD`, and that the absolute path of the model inside the docker is `/models`.
**The `streaming_tts_serving` directory under this article's path (`$PWD`) contains the model's configuration and code, which must be mapped into the docker to be used.**
**For convenience, we recommend mapping `$PWD` (the `streaming_tts_serving` directory and the model configuration and code it contains) to the docker path `/models` using the `-v` option of `docker run`. You may use another method, but whichever you choose, the final model directory and structure inside the docker must match the following figure.**
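A minimal sketch of this mapping is shown below; the container name `tts_serving` and the image name `<serving_image>` are placeholders, so substitute the serving image you actually use:

```
# Map the host directory (which contains streaming_tts_serving) to /models inside the docker.
# <serving_image> is a placeholder for the serving image you actually use.
docker run -dit --net=host --name tts_serving -v $PWD:/models <serving_image> bash
# Enter the running container.
docker exec -it tts_serving bash
```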
**The default ports are 8000 (http), 8001 (grpc), and 8002 (metrics). To change them, add the arguments `--http-port 9000 --grpc-port 9001 --metrics-port 9002` when starting the server.**
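A representative start command inside the docker, assuming the server binary is named `fastdeployserver` (substitute the binary your image actually provides), combines the arguments described below:

```
# Start the server with the model repository mapped to /models,
# explicitly loading the streaming_tts_serving model.
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving
```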
Arguments:
- `model-repository`(required): Path of model storage.
- `model-control-mode`(required): The mode of loading the model. At present, you can use 'explicit'.
- `load-model`(required): Name of the model to be loaded.
- `http-port`(optional): Port for http service. Default: `8000`. This is not used in our example.
- `grpc-port`(optional): Port for grpc service. Default: `8001`.
- `metrics-port`(optional): Port for metrics service. Default: `8002`. This is not used in our example.
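Once the model has loaded, you can sanity-check the service from the host. The sketch below assumes the server exposes the standard KServe/Triton-style HTTP health and Prometheus metrics endpoints on the default ports:

```
# Returns HTTP 200 once the server is ready to serve requests (assumed /v2/health/ready endpoint).
curl -v localhost:8000/v2/health/ready
# Dump Prometheus-style metrics from the metrics port.
curl localhost:8002/metrics
```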