Since the loss shrinks when it is divided by the batch size, a smaller LM alpha can work better.

pull/567/head
Hui Zhang 5 years ago
parent e520077652
commit 02e63b060c

@@ -3,5 +3,5 @@
 ## CTC
 | Model | Config | Test Set | CER | Valid Loss |
 | --- | --- | --- | --- | --- |
-| DeepSpeech2 | conf/deepspeech2.yaml | test | 0.078786 | 7.036566 |
+| DeepSpeech2 | conf/deepspeech2.yaml | test | 0.078242 | 7.036566 |
 | DeepSpeech2 | release 1.8.5 | test | 0.080447 | - |

@@ -39,7 +39,7 @@ decoding:
 error_rate_type: cer
 decoding_method: ctc_beam_search
 lang_model_path: data/lm/zh_giga.no_cna_cmn.prune01244.klm
-alpha: 2.6
+alpha: 2.5
 beta: 5.0
 beam_size: 300
 cutoff_prob: 0.99
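For context, a minimal sketch of how `alpha` and `beta` typically enter DeepSpeech2-style CTC beam-search rescoring: the hypothesis score combines the acoustic (CTC) log-probability, the language-model log-probability weighted by `alpha`, and a word-count bonus weighted by `beta`. The function and numbers below are illustrative assumptions, not the repo's actual scorer.

```python
def rescore(ctc_log_prob: float, lm_log_prob: float, word_count: int,
            alpha: float, beta: float) -> float:
    """Combine acoustic and LM scores for one beam hypothesis
    (illustrative form of the DeepSpeech2-style external scorer)."""
    return ctc_log_prob + alpha * lm_log_prob + beta * word_count

# If the CTC loss (and hence the acoustic log-probabilities) is scaled down
# by the batch size, the acoustic term shrinks relative to the LM term, so a
# slightly smaller alpha rebalances the two -- the rationale for 2.6 -> 2.5.
old = rescore(-10.0, -4.0, 3, alpha=2.6, beta=5.0)  # -5.4
new = rescore(-10.0, -4.0, 3, alpha=2.5, beta=5.0)  # -5.0
print(old, new)
```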
