# SpeechX -- All in One Speech Task Inference

## Environment

We develop under:

* docker - registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.2-cudnn7
* os - Ubuntu 16.04.7 LTS
* gcc/g++/gfortran - 8.2.0
* cmake - 3.16.0

We make sure everything works well under Docker, and we recommend using it for development and deployment.

## Build

1. First, launch the Docker container.

```bash
nvidia-docker run --privileged --net=host --ipc=host -it --rm -v $PWD:/workspace --name=dev registry.baidubce.com/paddlepaddle/paddle:2.1.1-gpu-cuda10.2-cudnn7 /bin/bash
```
* You can find more Paddle Docker images here.

* If you only want to work on CPU, please download the corresponding image and use `docker` instead of `nvidia-docker`.

2. Build speechx and examples.

Do not source venv.

```bash
pushd /path/to/speechx
./build.sh
```
3. Go to `examples` to have fun.

For more details, please see the README.md under `examples`.

## Valgrind (Optional)

If using Docker, please make sure `--privileged` is set when running `docker run`.

* If you see "Fatal error at startup: a function redirection which is mandatory for this platform-tool combination cannot be set up", install the libc debug symbols:

```bash
apt-get install libc6-dbg
```
* Install valgrind:

```bash
pushd tools
./setup_valgrind.sh
popd
```

## TODO

* DecibelNormalizer: there is a small difference between the offline and online db norm results. The online db norm computes over feature chunks read one at a time, so the number of samples it sees differs from the offline db norm. In `normalizer.cc:73`, `samples.size()` is different, which causes the difference in the results.
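
To illustrate why this happens, here is a minimal, self-contained sketch (not the actual SpeechX code; the toy signal, chunk size, and `DbOfSamples` helper are hypothetical). It applies the same mean-square/dB formula once over a whole signal and then chunk by chunk, showing that a statistic which depends on `samples.size()` cannot match exactly between the two paths:

```cpp
// Hypothetical illustration: dB level computed over the full utterance vs. per chunk.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// RMS level in dB of whatever samples we currently hold.
double DbOfSamples(const std::vector<double>& samples) {
  double sum_sq = 0.0;
  for (double s : samples) sum_sq += s * s;
  double mean_sq = sum_sq / samples.size();  // depends on samples.size()
  return 10.0 * std::log10(mean_sq + 1e-20);
}

int main() {
  // A toy signal whose energy is not uniform over time.
  std::vector<double> signal;
  for (size_t i = 0; i < 800; ++i) {
    signal.push_back(std::sin(0.01 * i) * (i < 400 ? 0.1 : 0.9));
  }

  // Offline: one dB value computed over the whole utterance.
  std::cout << "offline db: " << DbOfSamples(signal) << "\n";

  // Online: the same formula applied chunk by chunk sees a different
  // samples.size() and a different energy, so each chunk's dB value
  // differs from the offline one.
  const size_t kChunk = 200;
  for (size_t start = 0; start < signal.size(); start += kChunk) {
    std::vector<double> chunk(
        signal.begin() + start,
        signal.begin() + std::min(start + kChunk, signal.size()));
    std::cout << "chunk db:   " << DbOfSamples(chunk) << "\n";
  }
  return 0;
}
```

Unless the online normalizer accumulates its statistics across chunks (or is fed the same window of samples as the offline path), the two results will stay slightly different.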