@@ -8,7 +8,7 @@ Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER |
 [Conformer Online Aishell Model ](https://deepspeech.bj.bcebos.com/release2.1/aishell/s1/aishell.chunk.release.tar.gz ) | Aishell Dataset | Char-based | 283 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention + CTC | 0.0594 |-| 151 h
 [Conformer Offline Aishell Model ](https://deepspeech.bj.bcebos.com/release2.1/aishell/s1/aishell.release.tar.gz ) | Aishell Dataset | Char-based | 284 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention | 0.0547 |-| 151 h
 [Conformer Librispeech Model ](https://deepspeech.bj.bcebos.com/release2.1/librispeech/s1/conformer.release.tar.gz ) | Librispeech Dataset | Word-based | 287 MB | Encoder:Conformer, Decoder:Transformer, Decoding method: Attention |-| 0.0325 | 960 h
-[Transformer Librispeech Model ](https://deepspeech.bj.bcebos.com/release2.1/librispeech/s1/transformer.release.tar.gz ) | Librispeech Dataset | Word-based | 195 MB | Encoder:Con former, Decoder:Transformer, Decoding method: Attention |-| 0.0544 | 960 h
+[Transformer Librispeech Model ](https://deepspeech.bj.bcebos.com/release2.1/librispeech/s1/transformer.release.tar.gz ) | Librispeech Dataset | Word-based | 195 MB | Encoder:Transformer, Decoder:Transformer, Decoding method: Attention |-| 0.0544 | 960 h
 ## Acoustic Model Transformed from paddle 1.8
 Acoustic Model | Training Data | Token-based | Size | Descriptions | CER | WER | Hours of speech