{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\"Fork\n", "\n", "\n", "# 语音识别——DeepSpeech2\n", " \n", "# 0. 视频理解与字幕" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 下载demo视频\n", "!test -f work/source/subtitle_demo1.mp4 || wget https://paddlespeech.bj.bcebos.com/demos/asr_demos/subtitle_demo1.mp4 -P work/source/" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import IPython.display as dp\n", "from IPython.display import HTML\n", "html_str = '''\n", "\n", "'''.format(\"work/source/subtitle_demo1.mp4 \")\n", "dp.display(HTML(html_str))\n", "print (\"ASR结果为:当我说我可以把三十年的经验变成一个准确的算法他们说不可能当我说我们十个人就能实现对十九个城市变电站七乘二十四小时的实时监管他们说不可能\")" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "> Demo实现:[Dhttps://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "# 1. 前言" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "## 1.1 背景知识\n", "语音识别(Automatic Speech Recognition, ASR) 是一项从一段音频中提取出语言文字内容的任务。\n", "\n", "
\n", "\n", "
\n", "(出处:DLHLP 李宏毅 语音识别课程PPT)\n", "
\n", "\n", "目前该技术已经广泛应用于我们的工作和生活当中,包括生活中使用手机的语音转写,工作上使用的会议记录等等。" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "## 1.2 发展历史\n", "\n", "\n", "* 早期,生成模型流行阶段:GMM-HMM (上世纪90年代)\n", "* 深度学习爆发初期: DNN,CTC[1] (2006)\n", "* RNN流行,Attention提出初期: RNN-T[2](2013), DeepSpeech[3] (2014), DeepSpeech2 [4] (2016), LAS[5](2016)\n", "* Attetion is all you need提出开始[6]: Transformer[6](2017),Transformer-transducer[7](2020) Conformer[8] (2020\n", "\n", "
\n", "\n", "
\n", "\n", "Deepspeech2模型包含了CNN,RNN,CTC等深度学习语音识别的基本技术,因此本教程采用了Deepspeech2作为讲解深度学习语音识别的开篇内容。" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "# 2. 实战:使用 DeepSpeech2 进行语音识别的流程\n", "\n", "Deepspeech2 模型,其主要分为3个部分:\n", "1. 特征提取模块:此处使用 linear 特征,也就是将音频信息由时域转到频域后的信息。\n", "2. Encoder:多层神经网络,用于对特征进行编码。\n", "3. CTC Decoder: 采用了 CTC 损失函数训练;使用 CTC 解码得到结果。\n", "\n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "## 2.1 Deepspeech2 模型结构\n", "\n", "### 2.1.1 Encoder\n", "\n", "\n", "Encoder 主要采用了 2 层降采样的 CNN(subsampling Convolution layer)和多层 RNN(Recurrent Neural Network)层组成。\n", "\n", "其中降采样的 CNN 主要用途在提取局部特征,减少模型输入的帧数,降低计算量,并易于模型收敛。\n", "\n", "\n", " \n", "#### 2.1.1.1 CNN: Receptive field\n", "\n", "假如以 $F_j$ 代表 $L_j$ 的 cnn 滤波器大小, $S_i$ 代表 $L_i$ 的CNN滤波器跳跃长度,并设定 $S_0 = 1$。那么 $L_k$ 的感受野大小可以由以下公式计算:\n", "\n", "$$\\boxed{R_k = 1 + \\sum_{j=1}^{k} [(F_j - 1) \\prod_{i=0}^{j-1} S_i]}$$\n", "在下面的例子中, $F_1 = F_2 = 3$ 并且 $S_1 = S_2 = 2$, 因此可以得到 $R_2 = 1 + 2\\cdot 1 + 2\\cdot 2 = 7$\n", "\n", "
\n", "\n", "
\n", "\n", " \n", "#### 2.1.1.2 RNN\n", "\n", " 而多层 RNN 的作用在于获取语音的上下文信息,这样可以获得更加准确的信息,并一定程度上进行语义消歧。\n", " \n", "Deepspeech2 的模型中 RNNCell 可以选用 GRU 或者 LSTM。\n", " \n", "\n", "#### 2.1.1.3 Softmax\n", "而最后 softmax 层将特征向量映射到为一个字表长度的向量,向量中存储了当前 step 结果预测为字表中每个字的概率。" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "### 2.1.2 Decoder\n", "Decoder 的作用主要是将 Encoder 输出的概率解码为最终的文字结果。\n", "\n", "对于 CTC 的解码主要有3种方式:\n", "\n", "* CTC greedy search \n", "\n", "* CTC beam search \n", "\n", "* CTC Prefix beam search\n", "\n", "#### 2.1.2.1 CTC Greedy Search\n", "\n", "在每个时间点选择后验概率最大的 label 加入候选序列中,最后对候选序列进行后处理,就得到解码结果。\n", "\n", "
\n", "\n", "
\n", "\n", "\n", "#### 2.1.2.2 CTC Beam Search\n", "\n", "CTC Beam Search 的方式是有 beam size 个候选序列,并在每个时间点生成新的最好的 beam size 个候选序列。\n", "最后在 beam size 个候选序列中选择概率最高的序列生成最终结果。\n", "\n", "\n", "
\n", "\n", "
\n", " 引用自[9]\n", "
\n", "\n", "#### 2.1.2.3 CTC Prefix Beam Search\n", "\n", "CTC prefix beam search和 CTC beam search 的主要区别在于:\n", "\n", "CTC beam search 在解码过程中产生的候选有可能产生重复项,而这些重复项在 CTC beam search 的计算过程中是各自独立的,占用了 beam 数,降低解码的多样性和鲁棒性。\n", "\n", "而 CTC prefix beam search 在解码过程中合并了重复项的概率,提升解码的鲁棒性和多样性。\n", "\n", "
\n", "\n", "
\n", " 引用自[9]\n", "
\n", "\n", "CTC prefix beam search 计算过程如下图所示:\n", "\n", "
\n", "\n", "
\n", " 引用自[10]\n", "
\n", "\n", "\n", "> [CTCLoss](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/topic/ctc/) 相关介绍参看 [Topic](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/topic/) 内容。\n", "\n", "#### 2.1.2.4 使用 N-gram 语言模型\n", "\n", "对于解码的候选结果的打分,除了有声学模型的分数外,还会有额外的语言模型分以及长度惩罚分。\n", "\n", "\n", "设定 $W$ 为解码结果,$X$ 为输入语音, $\\alpha$ 和 $\\beta$ 为设定的超参数。\n", "则最终分数的计算公式为:\n", "$$\n", "score = P_{am}(W \\mid X) \\cdot P_{lm}(W) ^ \\alpha \\cdot |W|^\\beta\n", "$$\n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "## 2.2 准备工作\n", " " ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "### 2.2.1 安装 paddlespeech\n", " \n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!pip install --upgrade pip && pip install paddlespeech==0.1.0" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.2.2 准备工作目录\n", "\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!mkdir -p ./work/workspace_asr_ds2\n", "%cd ./work/workspace_asr_ds2" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "### 2.2.3 获取预训练模型和相关文件\n", " \n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [], "source": [ "!test -f ds2.model.tar.gz || wget -nc https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/ds2.model.tar.gz\n", "!tar xzvf ds2.model.tar.gz" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 构建一个数据增强的配置文件,由于预测不需要数据增强,因此文件为空即可\n", "!touch conf/augmentation.json\n", "# 下载语言模型\n", "!mkdir -p data/lm\n", "!test -f ./data/lm/zh_giga.no_cna_cmn.prune01244.klm || wget -nc https://deepspeech.bj.bcebos.com/zh_lm/zh_giga.no_cna_cmn.prune01244.klm -P data/lm\n", "# 获取用于预测的音频文件\n", "!test -f ./data/demo_01_03.wav || wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P ./data/" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import IPython\n", "IPython.display.Audio('./data/demo_01_03.wav')" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 快速体验识别结果\n", "!paddlespeech asr --input ./data/demo_01_03.wav" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "\n", "### 2.2.4 导入python包\n", " \n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import paddle\n", "import warnings\n", "warnings.filterwarnings('ignore')\n", "\n", "from yacs.config import CfgNode\n", "\n", "from paddlespeech.s2t.frontend.speech import SpeechSegment\n", "from paddlespeech.s2t.frontend.normalizer import FeatureNormalizer\n", "from paddlespeech.s2t.frontend.featurizer.audio_featurizer import AudioFeaturizer\n", "from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer\n", "\n", "from paddlespeech.s2t.io.collator import SpeechCollator\n", "\n", "from paddlespeech.s2t.models.ds2 import DeepSpeech2Model\n", "\n", "from matplotlib import pyplot as plt\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "\n", "### 2.2.5 设置预训练模型的路径\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [], "source": 
[ "config_path = \"conf/deepspeech2.yaml\" \n", "checkpoint_path = \"./exp/deepspeech/checkpoints/avg_1.pdparams\"\n", "audio_file = \"data/demo_01_03.wav\"\n", "\n", "\n", "# 读取 conf 文件并结构化\n", "ds2_config = CfgNode(new_allowed=True)\n", "ds2_config.merge_from_file(config_path)\n", "print(ds2_config)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "\n", "## 2.3 获取特征\n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "### 2.3.1 语音特征介绍\n", " \n", "#### 2.3.1.1 语音特征提取整体流程图\n", "\n", "
\n", "\n", "
\n", "由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得\n", "
\n", "\n", "#### 2.3.1.2 fbank 提取过程简化图\n", "\n", "\n", "fbank 特征提取大致可以分为3个步骤:\n", "\n", "1. 语音时域信号经过增强,然后进行分帧。\n", "\n", "2. 每一帧数据加窗后经过离散傅立叶变换(DFT)得到频谱图。\n", "\n", "3. 将频谱图的特征经过 Mel 滤波器得到 logmel fbank 特征。\n", "\n", "
\n", "\n", "
\n", "由\"DLHLP 李宏毅 语音识别课程PPT\" 修改得\n", "
\n", "\n", "#### 2.3.1.3 CMVN 计算过程\n", "\n", "对于所有获取的特征,模型在使用前会使用 CMVN 的方式进行归一化\n", "\n", "
\n", " \n", "
\n", "\n", " " ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.3.2 构建音频特征提取对象\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [], "source": [ "feat_config = ds2_config.collator\n", "audio_featurizer = AudioFeaturizer(\n", " spectrum_type=feat_config.spectrum_type,\n", " feat_dim=feat_config.feat_dim,\n", " delta_delta=feat_config.delta_delta,\n", " stride_ms=feat_config.stride_ms,\n", " window_ms=feat_config.window_ms,\n", " n_fft=feat_config.n_fft,\n", " max_freq=feat_config.max_freq,\n", " target_sample_rate=feat_config.target_sample_rate,\n", " use_dB_normalization=feat_config.use_dB_normalization,\n", " target_dB=feat_config.target_dB,\n", " dither=feat_config.dither)\n", "feature_normalizer = FeatureNormalizer(feat_config.mean_std_filepath) if feat_config.mean_std_filepath else None" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.3.3 提取音频的特征\n", "\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# 'None' 只是一个占位符,因为预测的时候不需要reference\n", "speech_segment = SpeechSegment.from_file(audio_file, \"None\")\n", "audio_feature = audio_featurizer.featurize(speech_segment)\n", "audio_feature_i = feature_normalizer.apply(audio_feature)\n", "\n", "audio_len = audio_feature_i.shape[0]\n", "audio_len = paddle.to_tensor(audio_len)\n", "audio_feature = paddle.to_tensor(audio_feature_i, dtype='float32')\n", "audio_feature = paddle.unsqueeze(audio_feature, axis=0)\n", "print(f\"shape: {audio_feature.shape}\")\n", "\n", "plt.figure()\n", "plt.imshow(audio_feature_i.T, origin='lower')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "## 2.4 使用模型获得结果\n", " \n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.4.1 构建Deepspeech2模型\n", " \n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_conf = ds2_config.model\n", "# input dim is feature size\n", "model_conf.input_dim = 161\n", "# output_dim is vocab size\n", "model_conf.output_dim = 4301\n", "model = DeepSpeech2Model.from_config(model_conf)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.4.2 加载预训练的模型\n", " \n" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_dict = paddle.load(checkpoint_path)\n", "model.set_state_dict(model_dict)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", " \n", "### 2.4.3 进行预测\n" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [], "source": [ "decoding_config = ds2_config.decoding\n", "print (decoding_config)\n", "text_feature = TextFeaturizer(unit_type='char',\n", " vocab=ds2_config.collator.vocab_filepath)\n", "\n", "\n", "result_transcripts = model.decode(\n", " audio_feature,\n", " audio_len,\n", " text_feature.vocab_list,\n", " decoding_method=decoding_config.decoding_method,\n", " lang_model_path=decoding_config.lang_model_path,\n", " beam_alpha=decoding_config.alpha,\n", " beam_beta=decoding_config.beta,\n", " beam_size=decoding_config.beam_size,\n", " cutoff_prob=decoding_config.cutoff_prob,\n", " cutoff_top_n=decoding_config.cutoff_top_n,\n", " num_processes=decoding_config.num_proc_bsearch)\n", "\n" ] }, { 
"cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print (\"预测结果为:\")\n", "print (result_transcripts[0])" ] }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "\n", "# 3. 总结\n", "\n", "* CTC 帮助模型学习语音和 label 之间的 alignment。\n", "* CTC 可以做到帧同步解码,非常适合做流式模型。\n", "* CTC 的输出是之间是独立的,相对于 Seq2Seq 其建模能力差,一般需要外挂 LM 才能得到好的结果。\n", "\n", "\n", "# 4. 作业 \n", "1. 使用开发模式安装 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) \n", "环境要求:docker, Ubuntu 16.04,root user。 \n", "参考安装方法:[使用Docker安装paddlespeech](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md#hard-get-the-full-funciton-on-your-mechine)\n", "\n", "2. 跑通 example/aishell/asr1 中的 conformer 模型,完成训练和预测。 \n", "\n", "3. 按照 example 的格式使用自己的数据集训练 ASR 模型。 \n", "\n", "\n", "# 5. 关注 PaddleSpeech\n", "\n", "请关注我们的 [Github Repo](https://github.com/PaddlePaddle/PaddleSpeech/),非常欢迎加入以下微信群参与讨论:\n", "- 扫描二维码\n", "- 添加运营小姐姐微信\n", "- 通过后回复【语音】\n", "- 系统自动邀请加入技术群\n", "\n", "\n", "
\n", "\n", "\n", "# 5. 参考文献\n", "\n", "[1] Graves A, Fernández S, Gomez F, et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks[C]//Proceedings of the 23rd international conference on Machine learning. 2006: 369-376.\n", "\n", "[2] Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks[C]//2013 IEEE international conference on acoustics, speech and signal processing. Ieee, 2013: 6645-6649.\n", "\n", "[3] Hannun A, Case C, Casper J, et al. Deep speech: Scaling up end-to-end speech recognition[J]. arXiv preprint arXiv:1412.5567, 2014.\n", "\n", "[4] Amodei D, Ananthanarayanan S, Anubhai R, et al. Deep speech 2: End-to-end speech recognition in english and mandarin[C]//International conference on machine learning. PMLR, 2016: 173-182.\n", "\n", "[5] Chan W, Jaitly N, Le Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016: 4960-4964.\n", "\n", "[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in neural information processing systems. 2017: 5998-6008.\n", "\n", "[7] Zhang Q, Lu H, Sak H, et al. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 7829-7833.\n", "\n", "[8] Gulati A, Qin J, Chiu C C, et al. Conformer: Convolution-augmented transformer for speech recognition[J]. arXiv preprint arXiv:2005.08100, 2020.\n", "\n", "[9] Retrieved 2021-12-6,from \"Sequence Modeling With CTC\": https://distill.pub/2017/ctc/#inference\n", "\n", "[10] Hannun A Y, Maas A L, Jurafsky D, et al. First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns[J]. arXiv preprint arXiv:1408.2873, 2014." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "py35-paddle1.2.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 }