Deep Speech 2 on PaddlePaddle

Installation

Prerequisites

  • Only Python 2.7 is supported;
  • cuDNN >= 6.0 is required to utilize the NVIDIA GPU platform in the installation of PaddlePaddle, together with a CUDA toolkit version compatible with that cuDNN release. cuDNN versions below 6.0 are known to cause a fatal error in batch normalization when handling long utterances during inference.
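
A quick way to sanity-check the installed CUDA and cuDNN versions (the cuDNN header path below is a common default and may differ on your system):

nvcc --version
cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2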

Setup for Training & Evaluation

sh setup.sh
export LD_LIBRARY_PATH=$PADDLE_INSTALL_DIR/Paddle/third_party/install/warpctc/lib:$LD_LIBRARY_PATH

Please replace $PADDLE_INSTALL_DIR with your own PaddlePaddle installation directory.
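
For example, if PaddlePaddle were installed under /opt/paddle (a placeholder path, not a default):

export LD_LIBRARY_PATH=/opt/paddle/Paddle/third_party/install/warpctc/lib:$LD_LIBRARY_PATH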

Setup for Demo

Please perform the following extra installation before running demo_client.py to try the real-time ASR demo. There is no need to install these packages on the machine running the demo's server end (demo_server.py). For details on running the ASR demo, please refer to the section Playing with the ASR Demo.

For example, on Mac OS X:

brew install portaudio
pip install pyaudio
pip install pynput

Usage

Preparing Data

cd datasets
sh run_all.sh
cd ..

sh run_all.sh prepares all ASR datasets (currently, only LibriSpeech is available). After it finishes, several manifest files summarizing the datasets are generated in JSON format.

A manifest file summarizes a speech dataset: each line is a JSON object containing the metadata (i.e. audio file path, transcript text and audio duration) of one audio file in the dataset. The manifest file serves as an interface telling the system where the speech samples are located and what to read from them.
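
For illustration, a single manifest line may look like the following (the path and values are made up, and the field names merely reflect the metadata listed above; consult a generated manifest file for the authoritative format):

{"audio_filepath": "/data/LibriSpeech/dev-clean/xxxx.flac", "duration": 3.25, "text": "an example transcription"}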

More help for arguments:

python datasets/librispeech/librispeech.py --help

Preparing for Training

python compute_mean_std.py

It computes the mean and standard deviation of the audio features and saves them to a file, by default ./mean_std.npz. This file is used in both training and inference. The default audio feature is the power spectrum; the MFCC feature is also supported. To train and infer with MFCC features, please generate this file by

python compute_mean_std.py --specgram_type mfcc

and specify --specgram_type mfcc when running train.py, infer.py, evaluate.py or tune.py.
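
For example, to use MFCC features consistently end to end (both commands below reuse only flags already described in this section):

python compute_mean_std.py --specgram_type mfcc
python train.py --specgram_type mfcc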

More help for arguments:

python compute_mean_std.py --help

Training

For GPU Training:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py

For CPU Training:

python train.py --use_gpu False

More help for arguments:

python train.py --help

Preparing the Language Model

The following steps (inference, parameter tuning and evaluation) require a language model during decoding. A compressed language model is provided and can be downloaded by

cd ./lm
sh run.sh
cd ..

Inference

For GPU inference:

CUDA_VISIBLE_DEVICES=0 python infer.py

For CPU inference:

python infer.py --use_gpu=False

More help for arguments:

python infer.py --help

Evaluating

CUDA_VISIBLE_DEVICES=0 python evaluate.py

More help for arguments:

python evaluate.py --help

Parameter Tuning

Usually, the parameters \alpha and \beta of the CTC prefix beam search decoder need to be re-tuned after the acoustic model is retrained.
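
As background (this is the scoring form used in the original Deep Speech 2 paper, given here for orientation rather than as a statement of this implementation's exact code), the beam search ranks each candidate transcription y by

Q(y) = \log p_{\text{ctc}}(y \mid x) + \alpha \log p_{\text{lm}}(y) + \beta \, \text{wc}(y)

where x is the audio input and wc(y) is the word count of y, so \alpha weights the language model and \beta rewards longer transcriptions.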

For GPU tuning:

CUDA_VISIBLE_DEVICES=0 python tune.py

For CPU tuning:

python tune.py --use_gpu=False

More help for arguments:

python tune.py --help

Then reset the parameters to the tuned values before running inference or evaluation.
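
For instance, supposing the tuned values were \alpha = 0.36 and \beta = 0.25 (made-up numbers), they would be passed to the decoder when inferring, assuming infer.py exposes --alpha and --beta arguments (an assumption; check python infer.py --help for the actual argument names):

python infer.py --alpha 0.36 --beta 0.25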

Playing with the ASR Demo

A real-time ASR demo, consisting of demo_server.py and demo_client.py, is provided for users to try out the ASR model with their own voice. Once the acoustic model and the language model are prepared, first start the demo server:

CUDA_VISIBLE_DEVICES=0 python demo_server.py

Then, in another console, start the client:

python demo_client.py

On the client console, press and hold the space key to start talking, and release it when you finish speaking. The decoding result (the inferred transcription) will then be displayed.

If you would like to run the server and the client on two separate machines, please use --host_ip and --host_port to specify the actual IP address and port, for both demo_server.py and demo_client.py.
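
For example, with a server reachable at 192.168.1.10 on port 8086 (placeholder values):

CUDA_VISIBLE_DEVICES=0 python demo_server.py --host_ip 192.168.1.10 --host_port 8086
python demo_client.py --host_ip 192.168.1.10 --host_port 8086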

Notice that demo_client.py must be started on your local computer with microphone hardware, while demo_server.py can be started either on a remote server or on the same local computer. The IP address and port must be set consistently for server-client communication.

Before running demo_client.py, please first finish the extra installation steps described in Setup for Demo.