PaddlePaddle Speech to Any toolkit
DeepSpeech is an open-source implementation of an end-to-end Automatic Speech Recognition (ASR) engine, built on the PaddlePaddle platform. Our vision is to empower both industrial applications and academic research in speech recognition through an easy-to-use, efficient, smaller, and scalable implementation, covering training, inference and testing modules, and deployment.
Features
See feature list for more information.
Setup
All tested under:
- Ubuntu 16.04
- python>=3.7
- paddlepaddle>=2.2.0rc
Please see install.
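Once the environment is prepared, a quick sanity check can confirm that the tested constraints above are met. This is a minimal sketch, assuming PaddlePaddle has already been installed via the steps in the install docs:

```python
import sys

import paddle

# Confirm the interpreter meets the tested constraint (python>=3.7).
assert sys.version_info >= (3, 7), "Python 3.7 or newer is required"

# Confirm PaddlePaddle is importable and report its version
# (the tested constraint is paddlepaddle>=2.2.0rc).
print("PaddlePaddle version:", paddle.__version__)

# Built-in sanity check that the Paddle installation (CPU or GPU) works.
paddle.utils.run_check()
```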
Getting Started
Please see Getting Started and the tiny egs example.
More Information
Questions and Help
You are welcome to submit questions in GitHub Discussions and bug reports in GitHub Issues. Contributions to this project are also welcome.
License
DeepSpeech is provided under the Apache-2.0 License.
Acknowledgement
We depend on many open-source repositories. See References for more information.