diff --git a/README.md b/README.md
index 809ffe6d..4a1df8d0 100644
--- a/README.md
+++ b/README.md
@@ -1,48 +1,356 @@
-# PaddlePaddle Speech toolkit
+English | [简体中文](README_ch.md)
+
+Quick Start | Tutorials | Models List
+
+------------------------------------------------------------------------------------
+
 ![License](https://img.shields.io/badge/license-Apache%202-red.svg)
 ![python version](https://img.shields.io/badge/python-3.7+-orange.svg)
 ![support os](https://img.shields.io/badge/os-linux-yellow.svg)
-*DeepSpeech* is an open-source implementation of end-to-end Automatic Speech Recognition engine, with [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform. Our vision is to empower both industrial application and academic research on speech recognition, via an easy-to-use, efficient, samller and scalable implementation, including training, inference & testing module, and deployment.
+
+**PaddleSpeech** is an open-source toolkit on the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) platform for a variety of critical speech tasks, with state-of-the-art and influential models.
+
+##### Speech-To-Text
+
+| Input Audio | Recognition Result |
+| :---: | :--- |
+| (audio sample) | Life was like a box of chocolates, you never know what you will get. |
+| (audio sample) | 早上好,今天是2020/10/29,最低温度是-3°C。 |
+
+##### Text-To-Speech
+
+| Input Text | Synthetic Audio |
+| :--- | :---: |
+| Life was like a box of chocolates, you never know what you're gonna get. | (audio sample) |
+| 早上好,今天是2020/10/29,最低温度是-3°C。 | (audio sample) |
+
+Via an easy-to-use, efficient, flexible and scalable implementation, our vision is to empower both industrial application and academic research, covering training, inference & testing modules and the deployment process. More specifically, this toolkit features:
+- **Fast and Light-weight**: we provide high-speed and ultra-lightweight models that are convenient for industrial deployment.
+- **Rule-based Chinese frontend**: our frontend covers Text Normalization and Grapheme-to-Phoneme (G2P, including Polyphone and Tone Sandhi). Moreover, we use self-defined linguistic rules to adapt to Chinese context.
+- **Varieties of Functions that Vitalize both Industry and Academia**:
+  - *Implementation of critical audio tasks*: this toolkit contains audio functions like Speech Translation, Automatic Speech Recognition, Text-To-Speech Synthesis, Voice Cloning, etc.
+  - *Integration of mainstream models and datasets*: the toolkit implements modules that participate in the whole pipeline of the speech tasks, and uses mainstream datasets like LibriSpeech, LJSpeech, AIShell, CSMSC, etc. See also [model lists](#models-list) for more details.
+  - *Cascaded models application*: as an extension of the traditional audio tasks, we combine their workflows with other fields such as Natural Language Processing (NLP), e.g. Punctuation Restoration.
+
+Please refer to [our PaddleSpeech demo page](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html) for more examples.
+
+# Community
+
+You are warmly welcome to submit questions in [discussions](https://github.com/PaddlePaddle/DeepSpeech/discussions) and bug reports in [issues](https://github.com/PaddlePaddle/DeepSpeech/issues)! We also highly appreciate any contributions to this project!
+
+If you are from China, we strongly recommend you join our PaddleSpeech WeChat group. Scan the following WeChat QR code to get in touch with the other developers in this community!
+
+(WeChat QR code image)
+
+# Alternative Installation
+
+The base environment used in this page is:
+- Ubuntu 16.04
+- python>=3.7
+- paddlepaddle==2.1.2
+
+If you want to set up PaddleSpeech in another environment, please see the [installation](./docs/installation.md) documents for all the alternatives.
+
+# Quick Start
+
+For a quick test of our functions, try [English Speech-To-Text]() and [English Text-To-Speech]() by typing a message or uploading your own audio file.
+
+Developers can try our models with only a few lines of code.
+
+A tiny DeepSpeech2 *Speech-To-Text* example on a toy subset of LibriSpeech:
+
+```shell
+cd examples/tiny/s0/
+# source the environment
+source path.sh
+# prepare the librispeech dataset
+bash local/data.sh
+# evaluate the model with a checkpoint file (replace ckptfile with your checkpoint path)
+bash local/test.sh conf/deepspeech2.yaml ckptfile offline
+```
+
+For *Text-To-Speech*, try FastSpeech2 on LJSpeech:
+- Download LJSpeech-1.1 from the [ljspeech official website](https://keithito.com/LJ-Speech-Dataset/) and our prepared durations for FastSpeech2, [ljspeech_alignment](https://paddlespeech.bj.bcebos.com/MFA/LJSpeech-1.1/ljspeech_alignment.tar.gz).
+- The pretrained models are separated into two parts: [fastspeech2_nosil_ljspeech_ckpt](https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech2_nosil_ljspeech_ckpt_0.5.zip) and [pwg_ljspeech_ckpt](https://paddlespeech.bj.bcebos.com/Parakeet/pwg_ljspeech_ckpt_0.5.zip). Please download and unzip them to `./model/fastspeech2` and `./model/pwg`, respectively.
+- Assuming your dataset and alignment paths are `~/datasets/LJSpeech-1.1` and `./ljspeech_alignment` respectively, preprocess your data and then use our pretrained model to synthesize:
+```shell
+cd examples/csmsc/tts3
+# download the pretrained models and unzip them
+wget https://paddlespeech.bj.bcebos.com/Parakeet/pwg_baker_ckpt_0.4.zip
+unzip pwg_baker_ckpt_0.4.zip
+wget https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech2_nosil_baker_ckpt_0.4.zip
+unzip fastspeech2_nosil_baker_ckpt_0.4.zip
+# source the environment
+source path.sh
+# run end-to-end synthesis
+FLAGS_allocator_strategy=naive_best_fit \
+FLAGS_fraction_of_gpu_memory_to_use=0.01 \
+python3 ${BIN_DIR}/synthesize_e2e.py \
+  --fastspeech2-config=fastspeech2_nosil_baker_ckpt_0.4/default.yaml \
+  --fastspeech2-checkpoint=fastspeech2_nosil_baker_ckpt_0.4/snapshot_iter_76000.pdz \
+  --fastspeech2-stat=fastspeech2_nosil_baker_ckpt_0.4/speech_stats.npy \
+  --pwg-config=pwg_baker_ckpt_0.4/pwg_default.yaml \
+  --pwg-checkpoint=pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz \
+  --pwg-stat=pwg_baker_ckpt_0.4/pwg_stats.npy \
+  --text=${BIN_DIR}/../sentences.txt \
+  --output-dir=exp/default/test_e2e \
+  --inference-dir=exp/default/inference \
+  --device="gpu" \
+  --phones-dict=fastspeech2_nosil_baker_ckpt_0.4/phone_id_map.txt
+```
+
-## Features
+If you want to try more functions such as training and tuning, please see [Speech-To-Text getting started](./docs/source/asr/getting_started.md) and [Text-To-Speech Basic Use](./docs/source/tts/basic_usage.md).
- See [feature list](docs/source/asr/feature_list.md) for more information.
+
+# Models List
-## Setup
+
+PaddleSpeech supports a series of the most popular models, summarized in the released models lists for [Speech-To-Text](./docs/source/asr/released_model.md) and [Text-To-Speech](./docs/source/tts/released_models.md), together with available pretrained models.
-All tested under:
-* Ubuntu 16.04
-* python>=3.7
-* paddlepaddle==2.1.2
+
+The Speech-To-Text module contains an *Acoustic Model* and a *Language Model*, with the following details:
-Please see [install](docs/source/asr/install.md).
+
-## Getting Started
+> Note: The `Link` should be a code path rather than a download link.
-Please see [Getting Started](docs/source/asr/getting_started.md) and [tiny egs](examples/tiny/s0/README.md).
+
+| Speech-To-Text Module Type | Dataset | Model Type | Link |
+| :--- | :--- | :--- | :--- |
+| Acoustic Model | Aishell | DeepSpeech2 RNN + Conv based Models | Online / Offline |
+| Acoustic Model | Aishell | Transformer based Attention Models | Non-CTC Loss / CTC Loss |
+| Acoustic Model | Librispeech | Transformer based Attention Models | Conformer / Transformer Decoder |
+| Language Model | CommonCrawl(en.00) | English Language Model | English Language Model |
+| Language Model | Baidu Internal Corpus | Mandarin Language Model Small | Mandarin Language Model Small / Large |
+
-## More Information
-* [Data Prepration](docs/source/asr/data_preparation.md)
-* [Data Augmentation](docs/source/asr/augmentation.md)
-* [Ngram LM](docs/source/asr/ngram_lm.md)
-* [Benchmark](docs/source/asr/benchmark.md)
-* [Relased Model](docs/source/asr/released_model.md)
+PaddleSpeech Text-To-Speech mainly contains three modules: *Text Frontend*, *Acoustic Model* and *Vocoder*. Acoustic Model and Vocoder models are listed as follows:
+
+| Text-To-Speech Module Type | Model Type | Dataset | Link |
+| :--- | :--- | :--- | :--- |
+| Text Frontend | | | chinese-frontend |
+| Acoustic Model | Tacotron2 | LJSpeech | tacotron2-ljspeech |
+| Acoustic Model | TransformerTTS | LJSpeech | transformer-ljspeech |
+| Acoustic Model | SpeedySpeech | CSMSC | speedyspeech-csmsc |
+| Acoustic Model | FastSpeech2 | AISHELL-3 / VCTK / LJSpeech / CSMSC | fastspeech2-aishell3 / fastspeech2-vctk / fastspeech2-ljspeech / fastspeech2-csmsc |
+| Vocoder | WaveFlow | LJSpeech | waveflow-ljspeech |
+| Vocoder | Parallel WaveGAN | LJSpeech / VCTK / CSMSC | PWGAN-ljspeech / PWGAN-vctk / PWGAN-csmsc |
+| Voice Cloning | GE2E | AISHELL-3, etc. | ge2e |
+| Voice Cloning | GE2E + Tacotron2 | AISHELL-3 | ge2e-tacotron2-aishell3 |
+
-## Questions and Help
-You are welcome to submit questions in [Github Discussions](https://github.com/PaddlePaddle/DeepSpeech/discussions) and bug reports in [Github Issues](https://github.com/PaddlePaddle/DeepSpeech/issues). You are also welcome to contribute to this project.
+# Tutorials
+
+[Speech SoTA](https://paperswithcode.com/area/speech) gives an overview of the hot academic topics in speech. To focus on the tasks in PaddleSpeech, you will find the following guidelines helpful for grasping the core ideas.
+- [Overview](./docs/source/introduction.md)
+- Quick Start
+  - [Dependencies](./docs/source/dependencies.md) and [Installation](./docs/source/install.md)
+  - [Quick Start of Speech-To-Text](./docs/source/asr/quick_start.md)
+  - [Quick Start of Text-To-Speech](./docs/source/tts/quick_start.md)
+- Speech-To-Text
+  - [Models Introduction](./docs/source/asr/models_introduction.md)
+  - [Data Preparation](./docs/source/asr/data_preparation.md)
+  - [Data Augmentation Pipeline](./docs/source/asr/augmentation.md)
+  - [Features](./docs/source/asr/feature_list.md)
+  - [Ngram LM](./docs/source/asr/ngram_lm.md)
+- Text-To-Speech
+  - [Introduction](./docs/source/tts/models_introduction.md)
+  - [Advanced Usage](./docs/source/tts/advanced_usage.md)
+  - [Chinese Rule Based Text Frontend](./docs/source/tts/zh_text_frontend.md)
+  - [Test Audio Samples](https://paddlespeech.readthedocs.io/en/latest/tts/demo.html) and [PaddleSpeech VS. Espnet](https://paddlespeech.readthedocs.io/en/latest/tts/demo_2.html)
+- [Released Models](./docs/source/released_model.md)
-## License
+# License and Acknowledgement
-DeepSpeech is provided under the [Apache-2.0 License](./LICENSE).
+PaddleSpeech is provided under the [Apache-2.0 License](./LICENSE).
-## Acknowledgement
+PaddleSpeech depends on many open source repositories. See [references](./docs/source/asr/reference.md) for more information.
-We depends on many open source repos. See [References](docs/source/asr/reference.md) for more information.
+
+# Citation
+
+To cite PaddleSpeech for research, please use the following format.
+```tex
+@misc{ppspeech2021,
+title={PaddleSpeech, a toolkit for audio processing based on PaddlePaddle.},
+author={PaddlePaddle Authors},
+howpublished = {\url{https://github.com/PaddlePaddle/DeepSpeech}},
+year={2021}
+}
+```
diff --git a/docs/images/PaddleSpeech_log.png b/docs/images/PaddleSpeech_log.png
new file mode 100644
index 00000000..fb252775
Binary files /dev/null and b/docs/images/PaddleSpeech_log.png differ
diff --git a/docs/images/QQ-code.JPG b/docs/images/QQ-code.JPG
new file mode 100644
index 00000000..6d681833
Binary files /dev/null and b/docs/images/QQ-code.JPG differ
diff --git a/docs/images/wechat-code-speech.png b/docs/images/wechat-code-speech.png
new file mode 100644
index 00000000..47251519
Binary files /dev/null and b/docs/images/wechat-code-speech.png differ
diff --git a/docs/images/wechat-code.jpeg b/docs/images/wechat-code.jpeg
new file mode 100644
index 00000000..6e466cc1
Binary files /dev/null and b/docs/images/wechat-code.jpeg differ
diff --git a/docs/images/wechat-group.png b/docs/images/wechat-group.png
new file mode 100644
index 00000000..5c61cd2f
Binary files /dev/null and b/docs/images/wechat-group.png differ
diff --git a/docs/source/tts/install.md b/docs/source/tts/install.md
index 24e44b17..c4249a18 100644
--- a/docs/source/tts/install.md
+++ b/docs/source/tts/install.md
@@ -10,13 +10,13 @@
 Example instruction to install paddlepaddle via pip is listed below.
 ### PaddlePaddle with GPU
 ```python
-# CUDA10.1 的 PaddlePaddle
+# PaddlePaddle for CUDA10.1
 python -m pip install paddlepaddle-gpu==2.1.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
-# CUDA10.2 的 PaddlePaddle
+# PaddlePaddle for CUDA10.2
 python -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
-# CUDA11.0 的 PaddlePaddle
+# PaddlePaddle for CUDA11.0
 python -m pip install paddlepaddle-gpu==2.1.2.post110 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
-# CUDA11.2 的 PaddlePaddle
+# PaddlePaddle for CUDA11.2
 python -m pip install paddlepaddle-gpu==2.1.2.post112 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
 ```
 ### PaddlePaddle with CPU
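Whichever build you install, a quick way to confirm the setup works is PaddlePaddle's built-in check. A minimal sketch, assuming a PaddlePaddle 2.x installation where `paddle.utils.run_check()` is available:

```shell
# verify that PaddlePaddle was installed correctly and can run a simple program
python -c "import paddle; paddle.utils.run_check()"
# print the installed version
python -c "import paddle; print(paddle.__version__)"
```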