Merge pull request #55 from kuke/update_benchmark

Retune hyper-parameters and update benchmark results for the English models, following #50
Commit 907898a48c · Yibing Liu (committed via GitHub)

@@ -505,13 +505,13 @@ Language Model | Training Data | Token-based | Size | Descriptions
 Test Set | LibriSpeech Model | BaiduEN8K Model
 :--------------------- | ---------------: | -------------------:
-LibriSpeech Test-Clean | 7.73 | 6.63
-LibriSpeech Test-Other | 23.15 | 16.59
-VoxForge American-Canadian | 12.30 | 7.46
-VoxForge Commonwealth | 20.03 | 16.23
-VoxForge European | 30.31 | 20.47
-VoxForge Indian | 55.47 | 28.15
-Baidu Internal Testset | 44.71 | 8.92
+LibriSpeech Test-Clean | 6.85 | 5.73
+LibriSpeech Test-Other | 21.18 | 14.47
+VoxForge American-Canadian | 12.12 | 7.37
+VoxForge Commonwealth | 19.82 | 15.58
+VoxForge European | 30.15 | 19.44
+VoxForge Indian | 53.73 | 26.15
+Baidu Internal Testset | 40.75 | 8.82
For reproducing the benchmark results on VoxForge data, we provide a script that downloads the data and generates the VoxForge dialect manifest files. Go to ```data/voxforge``` and execute ```sh run_data.sh```, as shown below. Note that the VoxForge dataset keeps growing, so the generated manifest files may differ from the ones we evaluated on.
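Concretely, the reproduction steps are just the following (a sketch assuming you start from the repository root):

```bash
cd data/voxforge
# Downloads the VoxForge audio and generates the per-dialect manifest
# files used for the benchmark rows above.
sh run_data.sh
```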

@@ -21,8 +21,8 @@ python -u infer.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \
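For context on what the two retuned flags control: in the usual DeepSpeech2-style beam search with an external language model (an assumption here, the diff itself does not restate the formula), `alpha` weights the LM score and `beta` adds a per-word insertion bonus on top of the CTC acoustic score. A minimal sketch, with `combined_score` as a hypothetical helper:

```python
def combined_score(log_p_ctc, log_p_lm, num_words, alpha=2.5, beta=0.3):
    # Hypothetical helper mirroring the common CTC + external-LM scoring:
    # the acoustic log-probability, plus alpha times the LM log-probability,
    # plus a word-insertion bonus of beta per decoded word.
    return log_p_ctc + alpha * log_p_lm + beta * num_words

# A larger alpha leans harder on the language model; a larger beta
# counteracts the LM's bias toward shorter transcriptions.
print(combined_score(log_p_ctc=-12.0, log_p_lm=-3.0, num_words=5))
```

The same alpha/beta change is applied uniformly across the remaining infer and test scripts below.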

@@ -30,8 +30,8 @@ python -u infer.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -22,8 +22,8 @@ python -u test.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -31,8 +31,8 @@ python -u test.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -21,8 +21,8 @@ python -u infer.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -30,8 +30,8 @@ python -u infer.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -22,8 +22,8 @@ python -u test.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \

@@ -31,8 +31,8 @@ python -u test.py \
 --num_conv_layers=2 \
 --num_rnn_layers=3 \
 --rnn_layer_size=2048 \
---alpha=2.15 \
---beta=0.35 \
+--alpha=2.5 \
+--beta=0.3 \
 --cutoff_prob=1.0 \
 --cutoff_top_n=40 \
 --use_gru=False \
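As for where values like 2.5/0.3 come from: decoder weights of this kind are typically retuned by a grid search that minimizes WER on a held-out dev set. A hypothetical sketch, where `dev_set_wer` stands in for a real decode-and-score run:

```python
import itertools

def dev_set_wer(alpha, beta):
    # Placeholder: a real run would decode the dev set with these weights
    # and return the measured WER. This toy surrogate simply bottoms out
    # at the values adopted in this commit.
    return abs(alpha - 2.5) + abs(beta - 0.3)

grid = itertools.product(
    [round(2.0 + 0.1 * i, 2) for i in range(11)],  # alpha: 2.0 .. 3.0
    [round(0.1 * i, 2) for i in range(7)],         # beta: 0.0 .. 0.6
)
best_alpha, best_beta = min(grid, key=lambda ab: dev_set_wer(*ab))
print(best_alpha, best_beta)  # 2.5 0.3 under the toy surrogate
```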
