* When the CTC loss is divided by the batch size, retuning the learning rate and training for more epochs lets the loss decrease further and gives a lower CER than before (see the CTC-loss sketch after this list).
* Since the loss decreases more when it is divided by the batch size, a smaller LM alpha can work better.
* Lowering the LM alpha reduced the CER further:
  * alpha = 2.2: CER 0.077478
  * alpha = 1.9: CER 0.077249
* Use a larger LibriSpeech learning rate with the batch-averaged CTC loss.
* Since the loss decreases and the model's outputs become more confident, the LM alpha should be reduced accordingly (see the decoding sketch below).
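
The notes above contrast summing the CTC loss over the batch with dividing that sum by the batch size. A minimal PyTorch sketch of the batch-averaged variant (function names are my own, not from this repo):

```python
import torch
import torch.nn as nn

# Sum the per-utterance CTC losses, then divide by the batch size.
# Note: PyTorch's reduction="mean" instead divides each utterance's loss
# by its target length before averaging, which is a different normalization.
ctc_sum = nn.CTCLoss(blank=0, reduction="sum", zero_infinity=True)

def ctc_loss_batch_average(log_probs, targets, input_lengths, target_lengths):
    """Batch-averaged CTC loss.

    log_probs: (T, N, C) log-softmax outputs, where N is the batch size.
    """
    batch_size = log_probs.size(1)
    return ctc_sum(log_probs, targets, input_lengths, target_lengths) / batch_size
```

Because dividing by the batch size scales the gradients down relative to the summed loss, the learning rate usually has to be raised to keep a comparable effective step size, which matches the note above about using a larger LibriSpeech learning rate.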
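
For the alpha results above, this is the standard shallow-fusion scoring used in CTC beam-search decoding; the formula is generic and the names here are hypothetical:

```python
def fused_score(am_log_prob: float, lm_log_prob: float, word_count: int,
                alpha: float, beta: float) -> float:
    """score = log P_am(y|x) + alpha * log P_lm(y) + beta * |words(y)|.

    alpha weights the external language model; beta is a word-insertion bonus.
    """
    return am_log_prob + alpha * lm_log_prob + beta * word_count

# As the acoustic model's posteriors become sharper (more confident),
# am_log_prob carries more reliable information on its own, so a smaller
# alpha (e.g. 1.9 instead of 2.2, as measured above) re-balances the terms.
```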