From 758fd666952c12cab2c61317ef2c765f289bc497 Mon Sep 17 00:00:00 2001
From: Jackwaterveg <87408988+Jackwaterveg@users.noreply.github.com>
Date: Fri, 17 Dec 2021 17:42:43 +0800
Subject: [PATCH] [Doc]Tutorial:asr+cls (#1161)
* test=doc_fix
* test=doc_fix
* test=doc_fix
* test=doc_fix
---
docs/tutorial/asr/tutorial_deepspeech2.ipynb | 751 +++++++++++++++++++
docs/tutorial/asr/tutorial_transformer.ipynb | 681 +++++++++++++++++
docs/tutorial/cls/cls_tutorial.ipynb | 703 +++++++++++++++++
3 files changed, 2135 insertions(+)
create mode 100644 docs/tutorial/asr/tutorial_deepspeech2.ipynb
create mode 100644 docs/tutorial/asr/tutorial_transformer.ipynb
create mode 100644 docs/tutorial/cls/cls_tutorial.ipynb
diff --git a/docs/tutorial/asr/tutorial_deepspeech2.ipynb b/docs/tutorial/asr/tutorial_deepspeech2.ipynb
new file mode 100644
index 00000000..86790473
--- /dev/null
+++ b/docs/tutorial/asr/tutorial_deepspeech2.ipynb
@@ -0,0 +1,751 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "\n",
+ "# 语音识别——DeepSpeech2\n",
+ " \n",
+ "# 0. 视频理解与字幕"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 下载demo视频\n",
+ "!test -f work/source/subtitle_demo1.mp4 || wget https://paddlespeech.bj.bcebos.com/demos/asr_demos/subtitle_demo1.mp4 -P work/source/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import IPython.display as dp\n",
+ "from IPython.display import HTML\n",
+ "html_str = '''\n",
+ "\n",
+ "'''.format(\"work/source/subtitle_demo1.mp4 \")\n",
+ "dp.display(HTML(html_str))\n",
+ "print (\"ASR结果为:当我说我可以把三十年的经验变成一个准确的算法他们说不可能当我说我们十个人就能实现对十九个城市变电站七乘二十四小时的实时监管他们说不可能\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "> Demo实现:[Dhttps://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "# 1. 前言"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 1.1 背景知识\n",
+ "语音识别(Automatic Speech Recognition, ASR) 是一项从一段音频中提取出语言文字内容的任务。\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "(出处:DLHLP 李宏毅 语音识别课程PPT)\n",
+ "
\n",
+ "\n",
+ "目前该技术已经广泛应用于我们的工作和生活当中,包括生活中使用手机的语音转写,工作上使用的会议记录等等。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 1.2 发展历史\n",
+ "\n",
+ "\n",
+ "* 早期,生成模型流行阶段:GMM-HMM (上世纪90年代)\n",
+ "* 深度学习爆发初期: DNN,CTC[1] (2006)\n",
+ "* RNN流行,Attention提出初期: RNN-T[2](2013), DeepSpeech[3] (2014), DeepSpeech2 [4] (2016), LAS[5](2016)\n",
+ "* Attetion is all you need提出开始[6]: Transformer[6](2017),Transformer-transducer[7](2020) Conformer[8] (2020\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "Deepspeech2模型包含了CNN,RNN,CTC等深度学习语音识别的基本技术,因此本教程采用了Deepspeech2作为讲解深度学习语音识别的开篇内容。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 2. 实战:使用 DeepSpeech2 进行语音识别的流程\n",
+ "\n",
+ "Deepspeech2 模型,其主要分为3个部分:\n",
+ "1. 特征提取模块:此处使用 linear 特征,也就是将音频信息由时域转到频域后的信息。\n",
+ "2. Encoder:多层神经网络,用于对特征进行编码。\n",
+ "3. CTC Decoder: 采用了 CTC 损失函数训练;使用 CTC 解码得到结果。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.1 Deepspeech2 模型结构\n",
+ "\n",
+ "### 2.1.1 Encoder\n",
+ "\n",
+ "\n",
+ "Encoder 主要采用了 2 层降采样的 CNN(subsampling Convolution layer)和多层 RNN(Recurrent Neural Network)层组成。\n",
+ "\n",
+ "其中降采样的 CNN 主要用途在提取局部特征,减少模型输入的帧数,降低计算量,并易于模型收敛。\n",
+ "\n",
+ "\n",
+ " \n",
+ "#### 2.1.1.1 CNN: Receptive field\n",
+ "\n",
+ "假如以 $F_j$ 代表 $L_j$ 的 cnn 滤波器大小, $S_i$ 代表 $L_i$ 的CNN滤波器跳跃长度,并设定 $S_0 = 1$。那么 $L_k$ 的感受野大小可以由以下公式计算:\n",
+ "\n",
+ "$$\\boxed{R_k = 1 + \\sum_{j=1}^{k} [(F_j - 1) \\prod_{i=0}^{j-1} S_i]}$$\n",
+ "在下面的例子中, $F_1 = F_2 = 3$ 并且 $S_1 = S_2 = 2$, 因此可以得到 $R_2 = 1 + 2\\cdot 1 + 2\\cdot 2 = 7$\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ "#### 2.1.1.2 RNN\n",
+ "\n",
+ " 而多层 RNN 的作用在于获取语音的上下文信息,这样可以获得更加准确的信息,并一定程度上进行语义消歧。\n",
+ " \n",
+ "Deepspeech2 的模型中 RNNCell 可以选用 GRU 或者 LSTM。\n",
+ " \n",
+ "\n",
+ "#### 2.1.1.3 Softmax\n",
+ "而最后 softmax 层将特征向量映射到为一个字表长度的向量,向量中存储了当前 step 结果预测为字表中每个字的概率。"
+ ]
+ },
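+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 按 2.1.1.1 节的公式做一个简单的数值验证(仅为示意代码,与 Deepspeech2 的具体实现无关)\n",
+ "# R_k = 1 + sum_{j=1..k} (F_j - 1) * prod_{i=0..j-1} S_i,其中 S_0 = 1\n",
+ "def receptive_field(filter_sizes, strides):\n",
+ "    R = 1\n",
+ "    jump = 1  # 对应公式中的 prod_{i=0}^{j-1} S_i\n",
+ "    for F, S in zip(filter_sizes, strides):\n",
+ "        R += (F - 1) * jump\n",
+ "        jump *= S\n",
+ "    return R\n",
+ "\n",
+ "# 文中的例子:F_1 = F_2 = 3,S_1 = S_2 = 2,期望得到 R_2 = 7\n",
+ "print(receptive_field([3, 3], [2, 2]))\n"
+ ]
+ },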
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "### 2.1.2 Decoder\n",
+ "Decoder 的作用主要是将 Encoder 输出的概率解码为最终的文字结果。\n",
+ "\n",
+ "对于 CTC 的解码主要有3种方式:\n",
+ "\n",
+ "* CTC greedy search \n",
+ "\n",
+ "* CTC beam search \n",
+ "\n",
+ "* CTC Prefix beam search\n",
+ "\n",
+ "#### 2.1.2.1 CTC Greedy Search\n",
+ "\n",
+ "在每个时间点选择后验概率最大的 label 加入候选序列中,最后对候选序列进行后处理,就得到解码结果。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "\n",
+ "#### 2.1.2.2 CTC Beam Search\n",
+ "\n",
+ "CTC Beam Search 的方式是有 beam size 个候选序列,并在每个时间点生成新的最好的 beam size 个候选序列。\n",
+ "最后在 beam size 个候选序列中选择概率最高的序列生成最终结果。\n",
+ "\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " 引用自[9]\n",
+ "
\n",
+ "\n",
+ "#### 2.1.2.3 CTC Prefix Beam Search\n",
+ "\n",
+ "CTC prefix beam search和 CTC beam search 的主要区别在于:\n",
+ "\n",
+ "CTC beam search 在解码过程中产生的候选有可能产生重复项,而这些重复项在 CTC beam search 的计算过程中是各自独立的,占用了 beam 数,降低解码的多样性和鲁棒性。\n",
+ "\n",
+ "而 CTC prefix beam search 在解码过程中合并了重复项的概率,提升解码的鲁棒性和多样性。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " 引用自[9]\n",
+ "
\n",
+ "\n",
+ "CTC prefix beam search 计算过程如下图所示:\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ " 引用自[10]\n",
+ "
\n",
+ "\n",
+ "\n",
+ "> [CTCLoss](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/topic/ctc/) 相关介绍参看 [Topic](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/topic/) 内容。\n",
+ "\n",
+ "#### 2.1.2.4 使用 N-gram 语言模型\n",
+ "\n",
+ "对于解码的候选结果的打分,除了有声学模型的分数外,还会有额外的语言模型分以及长度惩罚分。\n",
+ "\n",
+ "\n",
+ "设定 $W$ 为解码结果,$X$ 为输入语音, $\\alpha$ 和 $\\beta$ 为设定的超参数。\n",
+ "则最终分数的计算公式为:\n",
+ "$$\n",
+ "score = P_{am}(W \\mid X) \\cdot P_{lm}(W) ^ \\alpha \\cdot |W|^\\beta\n",
+ "$$\n"
+ ]
+ },
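+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 用一个极简的例子示意 2.1.2.1 节 CTC greedy search 的后处理过程\n",
+ "# (纯 numpy 的示意代码,字表和概率均为假设值,与 Deepspeech2 的实际解码实现无关)\n",
+ "import numpy as np\n",
+ "\n",
+ "vocab = ['<blank>', '你', '好']\n",
+ "# 假设的每帧后验概率,shape 为 [T, vocab_size]\n",
+ "probs = np.array([[0.1, 0.8, 0.1],\n",
+ "                  [0.2, 0.7, 0.1],\n",
+ "                  [0.7, 0.2, 0.1],\n",
+ "                  [0.1, 0.1, 0.8],\n",
+ "                  [0.1, 0.1, 0.8]])\n",
+ "\n",
+ "best_path = probs.argmax(axis=1)  # 每个时间步选后验概率最大的 label\n",
+ "collapsed = [t for i, t in enumerate(best_path)\n",
+ "             if t != 0 and (i == 0 or t != best_path[i - 1])]  # 合并相邻重复并去掉 blank\n",
+ "print(''.join(vocab[t] for t in collapsed))\n"
+ ]
+ },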
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "## 2.2 准备工作\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "### 2.2.1 安装 paddlespeech\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!pip install --upgrade pip && pip install paddlespeech"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.2.2 准备工作目录\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!mkdir -p ./work/workspace_asr_ds2\n",
+ "%cd ./work/workspace_asr_ds2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "### 2.2.3 获取预训练模型和相关文件\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!test -f ds2.model.tar.gz || wget -nc https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/ds2.model.tar.gz\n",
+ "!tar xzvf ds2.model.tar.gz"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 构建一个数据增强的配置文件,由于预测不需要数据增强,因此文件为空即可\n",
+ "!touch conf/augmentation.json\n",
+ "# 下载语言模型\n",
+ "!mkdir -p data/lm\n",
+ "!test -f ./data/lm/zh_giga.no_cna_cmn.prune01244.klm || wget -nc https://deepspeech.bj.bcebos.com/zh_lm/zh_giga.no_cna_cmn.prune01244.klm -P data/lm\n",
+ "# 获取用于预测的音频文件\n",
+ "!test -f ./data/demo_01_03.wav || wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P ./data/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import IPython\n",
+ "IPython.display.Audio('./data/demo_01_03.wav')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 快速体验识别结果\n",
+ "!paddlespeech asr --input ./data/demo_01_03.wav"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "### 2.2.4 导入python包\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "import warnings\n",
+ "warnings.filterwarnings('ignore')\n",
+ "\n",
+ "from yacs.config import CfgNode\n",
+ "\n",
+ "from paddlespeech.s2t.frontend.speech import SpeechSegment\n",
+ "from paddlespeech.s2t.frontend.normalizer import FeatureNormalizer\n",
+ "from paddlespeech.s2t.frontend.featurizer.audio_featurizer import AudioFeaturizer\n",
+ "from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer\n",
+ "\n",
+ "from paddlespeech.s2t.io.collator import SpeechCollator\n",
+ "\n",
+ "from paddlespeech.s2t.models.ds2 import DeepSpeech2Model\n",
+ "\n",
+ "from matplotlib import pyplot as plt\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "### 2.2.5 设置预训练模型的路径\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "config_path = \"conf/deepspeech2.yaml\" \n",
+ "checkpoint_path = \"./exp/deepspeech/checkpoints/avg_1.pdparams\"\n",
+ "audio_file = \"data/demo_01_03.wav\"\n",
+ "\n",
+ "\n",
+ "# 读取 conf 文件并结构化\n",
+ "ds2_config = CfgNode(new_allowed=True)\n",
+ "ds2_config.merge_from_file(config_path)\n",
+ "print(ds2_config)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "## 2.3 获取特征\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.1 语音特征介绍\n",
+ " \n",
+ "#### 2.3.1.1 语音特征提取整体流程图\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得\n",
+ "
\n",
+ "\n",
+ "#### 2.3.1.2 fbank 提取过程简化图\n",
+ "\n",
+ "\n",
+ "fbank 特征提取大致可以分为3个步骤:\n",
+ "\n",
+ "1. 语音时域信号经过增强,然后进行分帧。\n",
+ "\n",
+ "2. 每一帧数据加窗后经过离散傅立叶变换(DFT)得到频谱图。\n",
+ "\n",
+ "3. 将频谱图的特征经过 Mel 滤波器得到 logmel fbank 特征。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "由\"DLHLP 李宏毅 语音识别课程PPT\" 修改得\n",
+ "
\n",
+ "\n",
+ "#### 2.3.1.3 CMVN 计算过程\n",
+ "\n",
+ "对于所有获取的特征,模型在使用前会使用 CMVN 的方式进行归一化\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ " "
+ ]
+ },
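+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# CMVN(均值方差归一化)的最简示意:对每一维特征减去均值、除以标准差\n",
+ "# (此处用随机数据示意;实际训练中均值和方差统计自整个训练集,保存在预训练模型的 mean_std 文件中)\n",
+ "import numpy as np\n",
+ "\n",
+ "feat = np.random.randn(100, 161) * 5.0 + 3.0  # 假设的特征,shape 为 [帧数, 特征维度]\n",
+ "feat_cmvn = (feat - feat.mean(axis=0)) / (feat.std(axis=0) + 1e-9)\n",
+ "# 归一化后每一维的均值接近 0、标准差接近 1\n",
+ "print(feat_cmvn.mean(axis=0)[:3], feat_cmvn.std(axis=0)[:3])\n"
+ ]
+ },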
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.3.2 构建音频特征提取对象\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "feat_config = ds2_config.collator\n",
+ "audio_featurizer = AudioFeaturizer(\n",
+ " spectrum_type=feat_config.spectrum_type,\n",
+ " feat_dim=feat_config.feat_dim,\n",
+ " delta_delta=feat_config.delta_delta,\n",
+ " stride_ms=feat_config.stride_ms,\n",
+ " window_ms=feat_config.window_ms,\n",
+ " n_fft=feat_config.n_fft,\n",
+ " max_freq=feat_config.max_freq,\n",
+ " target_sample_rate=feat_config.target_sample_rate,\n",
+ " use_dB_normalization=feat_config.use_dB_normalization,\n",
+ " target_dB=feat_config.target_dB,\n",
+ " dither=feat_config.dither)\n",
+ "feature_normalizer = FeatureNormalizer(feat_config.mean_std_filepath) if feat_config.mean_std_filepath else None"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.3.3 提取音频的特征\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 'None' 只是一个占位符,因为预测的时候不需要reference\n",
+ "speech_segment = SpeechSegment.from_file(audio_file, \"None\")\n",
+ "audio_feature = audio_featurizer.featurize(speech_segment)\n",
+ "audio_feature_i = feature_normalizer.apply(audio_feature)\n",
+ "\n",
+ "audio_len = audio_feature_i.shape[0]\n",
+ "audio_len = paddle.to_tensor(audio_len)\n",
+ "audio_feature = paddle.to_tensor(audio_feature_i, dtype='float32')\n",
+ "audio_feature = paddle.unsqueeze(audio_feature, axis=0)\n",
+ "print(f\"shape: {audio_feature.shape}\")\n",
+ "\n",
+ "plt.figure()\n",
+ "plt.imshow(audio_feature_i.T, origin='lower')\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "## 2.4 使用模型获得结果\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.4.1 构建Deepspeech2模型\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "model_conf = ds2_config.model\n",
+ "# input dim is feature size\n",
+ "model_conf.input_dim = 161\n",
+ "# output_dim is vocab size\n",
+ "model_conf.output_dim = 4301\n",
+ "model = DeepSpeech2Model.from_config(model_conf)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.4.2 加载预训练的模型\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "model_dict = paddle.load(checkpoint_path)\n",
+ "model.set_state_dict(model_dict)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.4.3 进行预测\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "decoding_config = ds2_config.decoding\n",
+ "print (decoding_config)\n",
+ "text_feature = TextFeaturizer(unit_type='char',\n",
+ " vocab=ds2_config.collator.vocab_filepath)\n",
+ "\n",
+ "\n",
+ "result_transcripts = model.decode(\n",
+ " audio_feature,\n",
+ " audio_len,\n",
+ " text_feature.vocab_list,\n",
+ " decoding_method=decoding_config.decoding_method,\n",
+ " lang_model_path=decoding_config.lang_model_path,\n",
+ " beam_alpha=decoding_config.alpha,\n",
+ " beam_beta=decoding_config.beta,\n",
+ " beam_size=decoding_config.beam_size,\n",
+ " cutoff_prob=decoding_config.cutoff_prob,\n",
+ " cutoff_top_n=decoding_config.cutoff_top_n,\n",
+ " num_processes=decoding_config.num_proc_bsearch)\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "print (\"预测结果为:\")\n",
+ "print (result_transcripts[0])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "# 3. 总结\n",
+ "\n",
+ "* CTC 帮助模型学习语音和 label 之间的 alignment。\n",
+ "* CTC 可以做到帧同步解码,非常适合做流式模型。\n",
+ "* CTC 的输出是之间是独立的,相对于 Seq2Seq 其建模能力差,一般需要外挂 LM 才能得到好的结果。\n",
+ "\n",
+ "\n",
+ "# 4. 作业 \n",
+ "1. 使用开发模式安装 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) \n",
+ "环境要求:docker, Ubuntu 16.04,root user。 \n",
+ "参考安装方法:[使用Docker安装paddlespeech](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md#hard-get-the-full-funciton-on-your-mechine)\n",
+ "\n",
+ "2. 跑通 example/aishell/asr1 中的 conformer 模型,完成训练和预测。 \n",
+ "\n",
+ "3. 按照 example 的格式使用自己的数据集训练 ASR 模型。 \n",
+ "\n",
+ "\n",
+ "# 5. 关注 PaddleSpeech\n",
+ "\n",
+ "请关注我们的 [Github Repo](https://github.com/PaddlePaddle/PaddleSpeech/),非常欢迎加入以下微信群参与讨论:\n",
+ "- 扫描二维码\n",
+ "- 添加运营小姐姐微信\n",
+ "- 通过后回复【语音】\n",
+ "- 系统自动邀请加入技术群\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "# 5. 参考文献\n",
+ "\n",
+ "[1] Graves A, Fernández S, Gomez F, et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks[C]//Proceedings of the 23rd international conference on Machine learning. 2006: 369-376.\n",
+ "\n",
+ "[2] Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks[C]//2013 IEEE international conference on acoustics, speech and signal processing. Ieee, 2013: 6645-6649.\n",
+ "\n",
+ "[3] Hannun A, Case C, Casper J, et al. Deep speech: Scaling up end-to-end speech recognition[J]. arXiv preprint arXiv:1412.5567, 2014.\n",
+ "\n",
+ "[4] Amodei D, Ananthanarayanan S, Anubhai R, et al. Deep speech 2: End-to-end speech recognition in english and mandarin[C]//International conference on machine learning. PMLR, 2016: 173-182.\n",
+ "\n",
+ "[5] Chan W, Jaitly N, Le Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016: 4960-4964.\n",
+ "\n",
+ "[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in neural information processing systems. 2017: 5998-6008.\n",
+ "\n",
+ "[7] Zhang Q, Lu H, Sak H, et al. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 7829-7833.\n",
+ "\n",
+ "[8] Gulati A, Qin J, Chiu C C, et al. Conformer: Convolution-augmented transformer for speech recognition[J]. arXiv preprint arXiv:2005.08100, 2020.\n",
+ "\n",
+ "[9] Retrieved 2021-12-6,from \"Sequence Modeling With CTC\": https://distill.pub/2017/ctc/#inference\n",
+ "\n",
+ "[10] Hannun A Y, Maas A L, Jurafsky D, et al. First-pass large vocabulary continuous speech recognition using bi-directional recurrent dnns[J]. arXiv preprint arXiv:1408.2873, 2014."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/tutorial/asr/tutorial_transformer.ipynb b/docs/tutorial/asr/tutorial_transformer.ipynb
new file mode 100644
index 00000000..c9eb5ebb
--- /dev/null
+++ b/docs/tutorial/asr/tutorial_transformer.ipynb
@@ -0,0 +1,681 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "# 使用 Transformer 进行语音识别\n",
+ "\n",
+ "# 0. 视频理解与字幕"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 下载demo视频\n",
+ "!test -f work/source/subtitle_demo1.mp4 || wget -c https://paddlespeech.bj.bcebos.com/demos/asr_demos/subtitle_demo1.mp4 -P work/source/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import IPython.display as dp\n",
+ "from IPython.display import HTML\n",
+ "html_str = '''\n",
+ "\n",
+ "'''.format(\"work/source/subtitle_demo1.mp4 \")\n",
+ "dp.display(HTML(html_str))\n",
+ "print (\"ASR结果为:当我说我可以把三十年的经验变成一个准确的算法他们说不可能当我说我们十个人就能实现对十九个城市变电站七乘二十四小时的实时监管他们说不可能\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "> Demo实现:[https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 1. 前言\n",
+ "\n",
+ "## 1.1 背景知识\n",
+ "语音识别(Automatic Speech Recognition, ASR) 是一项从一段音频中提取出语言文字内容的任务。 \n",
+ "目前该技术已经广泛应用于我们的工作和生活当中,包括生活中使用手机的语音转写,工作上使用的会议记录等等。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "(出处:DLHLP 李宏毅 语音识别课程PPT)\n",
+ "
\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 1.2 发展历史\n",
+ "\n",
+ "\n",
+ "* 早期,生成模型流行阶段:GMM-HMM (上世纪90年代)\n",
+ "* 深度学习爆发初期: DNN,CTC[1] (2006)\n",
+ "* RNN 流行,Attention 提出初期: RNN-T[2](2013), DeepSpeech[3] (2014), DeepSpeech2 [4] (2016), LAS[5](2016)\n",
+ "* Attetion is all you need 提出开始[6]: Transformer[6](2017),Transformer-transducer[7](2020) Conformer[8] (2020)\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "目前 Transformer 和 Conformer 是语音识别领域的主流模型,因此本教程采用了 Transformer 作为讲解的主要内容,并在课后作业中步骤了 Conformer 的相关练习。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "# 2. 实战:使用Transformer进行语音识别的流程\n",
+ "\n",
+ "CTC 的输出相互独立,使得每一帧利用上下文的信息的能力不足。\n",
+ "\n",
+ "而 seq2seq(Transformer,Conformer) 的模型采用自回归的解码方式,所以其建模能力更强,但不便于支持流式。\n",
+ "\n",
+ "对于Transformer模型,它的Encoder可以有效对语音特征的上下文进行建模。而它的Decoder具有语言模型的能力,能够将语言模型融合进整个模型中,是真正意义上的端到端模型。\n",
+ "\n",
+ "\n",
+ "下面简单介绍下 Transformer 语音识别模型,其主要分为 2 个部分:\n",
+ "\n",
+ "\t- Encoder:声学特征会首先进入 Encoder,产生高层特征编码。\n",
+ "\n",
+ " - Decoder:Decoder 利用 Encoder 产生的特征编码解码得到预测结果。\n",
+ " \n",
+ "\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.1 准备工作\n",
+ "\n",
+ "### 2.1.1 安装 paddlespeech"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!pip install --upgrade pip && pip install paddlespeech"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.1.2 准备工作目录"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!mkdir -p ./work/workspace_asr\n",
+ "%cd ./work/workspace_asr"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "### 2.1.3 获取预训练模型和音频文件\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 获取模型\n",
+ "!test -f transformer.model.tar.gz || wget -nc https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz\n",
+ "!tar xzvf transformer.model.tar.gz\n",
+ "\n",
+ "# 获取用于预测的音频文件\n",
+ "!test -f ./data/demo_01_03.wav || wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P ./data/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import IPython\n",
+ "IPython.display.Audio('./data/demo_01_03.wav')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 快速体验识别结果\n",
+ "!paddlespeech asr --input ./data/demo_01_03.wav"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.1.4 导入python包"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "import soundfile\n",
+ "\n",
+ "import warnings\n",
+ "warnings.filterwarnings('ignore')\n",
+ "\n",
+ "from yacs.config import CfgNode\n",
+ "from paddlespeech.s2t.transform.spectrogram import LogMelSpectrogramKaldi\n",
+ "from paddlespeech.s2t.transform.cmvn import GlobalCMVN\n",
+ "from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer\n",
+ "from paddlespeech.s2t.models.u2 import U2Model\n",
+ "\n",
+ "from matplotlib import pyplot as plt\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.1.5 设置预训练模型的路径"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "config_path = \"conf/transformer.yaml\" \n",
+ "checkpoint_path = \"./exp/transformer/checkpoints/avg_20.pdparams\"\n",
+ "decoding_method = \"attention\"\n",
+ "audio_file = \"data/demo_01_03.wav\"\n",
+ "\n",
+ "# 读取 conf 文件并结构化\n",
+ "transformer_config = CfgNode(new_allowed=True)\n",
+ "transformer_config.merge_from_file(config_path)\n",
+ "transformer_config.decoding.decoding_method = decoding_method\n",
+ "print(transformer_config)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.2 获取特征\n",
+ "\n",
+ "### 2.2.1 音频特征 logfbank\n",
+ "\n",
+ "#### 2.2.1.1 语音特征提取整体流程图\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得\n",
+ "
\n",
+ "\n",
+ "#### 2.2.1.2 logfbank 提取过程简化图\n",
+ "\n",
+ "logfbank 特征提取大致可以分为 3 个步骤:\n",
+ "\n",
+ "1. 语音时域信号经过预加重(信号高频分量补偿),然后进行分帧。\n",
+ "\n",
+ "2. 每一帧数据加窗后经过离散傅立叶变换(DFT)得到频谱图。\n",
+ "\n",
+ "3. 将频谱图的特征经过 Mel 滤波器得到 logmel fbank 特征。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "由\"DLHLP 李宏毅 语音识别课程 PPT\" 修改得\n",
+ "
\n",
+ "\n",
+ "#### 2.2.1.3 CMVN 计算过程\n",
+ "\n",
+ "对于所有获取的特征,模型在使用前会使用 CMVN 的方式进行归一化\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.2.2 构建音频特征提取对象"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 构建 logmel 特征\n",
+ "logmel_kaldi= LogMelSpectrogramKaldi(\n",
+ " fs= 16000,\n",
+ " n_mels= 80,\n",
+ " n_shift= 160,\n",
+ " win_length= 400,\n",
+ " dither= True)\n",
+ "\n",
+ "# 特征减均值除以方差\n",
+ "cmvn = GlobalCMVN(\n",
+ " cmvn_path=\"data/mean_std.json\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.2.3 提取音频的特征"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "array, _ = soundfile.read(audio_file, dtype=\"int16\")\n",
+ "array = logmel_kaldi(array, train=False)\n",
+ "audio_feature_i = cmvn(array)\n",
+ "audio_len = audio_feature_i.shape[0]\n",
+ "\n",
+ "audio_len = paddle.to_tensor(audio_len)\n",
+ "audio_feature = paddle.to_tensor(audio_feature_i, dtype='float32')\n",
+ "audio_feature = paddle.unsqueeze(audio_feature, axis=0)\n",
+ "print (audio_feature.shape)\n",
+ "\n",
+ "plt.figure()\n",
+ "plt.imshow(audio_feature_i.T, origin='lower')\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.3 使用模型获得结果\n",
+ "\n",
+ "### 2.3.1 Transofomer 语音识别模型的结构\n",
+ "\n",
+ "\n",
+ "Transformer 模型主要由 2 个部分组成,包括 Transformer Encoder 和 Transformer Decoder。 \n",
+ "\n",
+ "\n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.2 Transformer Encoder\n",
+ "\n",
+ "Transformer encoder 主要是对音频的原始特征(这里原始特征使用的是 80 维 logfbank)进行特征编码,其输入是 logfbank,输出是特征编码。包含:\n",
+ "\n",
+ "* 位置编码(position encoding)\n",
+ "* 降采样模块(subsampling embedding): 由2层降采样的 CNN 构成。\n",
+ "* Transformer Encoder Layer : \n",
+ " * self-attention: 主要特点是Q(query), K(key)和V(value)都是用了相同的值\n",
+ " * Feed forward Layer: 由两层全连接层构建,其特点是保持了输入和输出的特征维度是一致的。\n",
+ "\n",
+ "\n",
+ "#### 2.3.2.1 Self-Attention\n",
+ "\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "其主要步骤可以分为三步:\n",
+ "\n",
+ "1. `Q` 和 `K` 的向量通过求内积的方式计算相似度,经过 scale 和 softmax 后,获得每个 `Q` 和所有`K` 之间的 score。\n",
+ "\n",
+ "2. 将每个 `Q` 和所有 `K` 之间的 score 和 `V` 进行相乘,再将相乘后的结果求和,得到 self-attetion 的输出向量。\n",
+ "\n",
+ "3. 使用多个 Attetion 模块均进行第一步和第二步,并将最后的输出向量进行合并,得到最终 Multi-Head Self-Attention 的输出。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n"
+ ]
+ },
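+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 用 numpy 示意 scaled dot-product self-attention 的前两步\n",
+ "# (仅为示意代码,矩阵均为随机值,并非 PaddleSpeech 中 Transformer 的实现)\n",
+ "import numpy as np\n",
+ "\n",
+ "def softmax(x, axis=-1):\n",
+ "    e = np.exp(x - x.max(axis=axis, keepdims=True))\n",
+ "    return e / e.sum(axis=axis, keepdims=True)\n",
+ "\n",
+ "T, d = 5, 8                       # 帧数 T,特征维度 d\n",
+ "x = np.random.randn(T, d)         # 某一层的输入特征\n",
+ "Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))\n",
+ "Q, K, V = x @ Wq, x @ Wk, x @ Wv  # Q、K、V 来自同一个输入序列\n",
+ "\n",
+ "score = softmax(Q @ K.T / np.sqrt(d))  # [T, T]:每个 Q 对所有 K 的注意力权重\n",
+ "out = score @ V                        # [T, d]:加权求和得到 self-attention 输出\n",
+ "print(score.shape, out.shape)\n"
+ ]
+ },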
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.3 Transformer Decoder\n",
+ "\n",
+ "Transformer 的 Decoder 用于获取最后的输出结果。其结构和 Encoder 有一定的相似性,也具有 Attention 模块和 Feed forward layer。\n",
+ "主要的不同点有 2 个:\n",
+ "1. Decoder 采用的是一种自回归的方式进行解码。\n",
+ "2. Decoder 在 Multi-head self-attention 和 Feed forward layer 模块之间增加了一层 Multi-head cross-attention 层用于获取 Encoder 得到的特征编码。\n",
+ "\n",
+ "\n",
+ "#### 2.3.3.1 Masked Multi-head Self-Attention\n",
+ "细心的同学可能发现了,Decoder 的一个 Multi-head self-attention 前面有一个 mask 。增加了这个 mask 的原因在于进行 Decoder 训练的时候,Decoder 的输入是一句完整的句子,而不是像预测这样一步步输入句子的前缀。\n",
+ "\n",
+ "为了模拟预测的过程,Decoder 训练的时候需要用 mask 遮住句子。 例如 `T=1` 时,就要 mask 输入中除第一个字符以外其他的字符,`T=2` 的时候则需要 mask 除前两个字符以外的其余字符。\n",
+ "\n",
+ "#### 2.3.3.2 Cross Attention\n",
+ "\n",
+ "Decoder 在每一步的解码过程中,都会利用 Encoder 的输出的特征编码进行 cross-attention。\n",
+ "\n",
+ "其中Decoder会将自回结果的编码作为 Attention 中的 `Q` ,而 Encoder 输出的特征编码作为 `K` 和 `V` 来完成 attetion 计算,从而利用 Encoder 提取的音频信息。\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "#### 2.3.3.3 Decoder的自回归解码 \n",
+ "\n",
+ "其采用了一种自回归的结构,即 Decoder 的上一个时间点的输出会作为下一个时间点的输入。\n",
+ "\n",
+ "另外,计算的过程中,Decoder 会利用 Encoder 的输出信息。\n",
+ "\n",
+ "如果使用贪心(greedy)的方式,Decoder 的解码过程如下:\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n",
+ "使用 greedy 模式解码比较简单,但是很有可能会在解码过程中丢失整体上效果更好的解码结果。\n",
+ "\n",
+ "因此我们实际使用的是 beam search 方式的解码,beam search 模式下的 decoder 的解码过程如下:\n",
+ "\n",
+ "\n",
+ "
\n",
+ "
\n",
+ "\n"
+ ]
+ },
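+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 示意 masked self-attention 中使用的下三角(因果)mask:第 t 步只允许看到前 t 个位置\n",
+ "# (仅为示意代码,并非 U2Model/Transformer Decoder 的具体实现)\n",
+ "import paddle\n",
+ "\n",
+ "T = 4\n",
+ "causal_mask = paddle.tril(paddle.ones([T, T]))  # 1 表示可见,0 表示被 mask\n",
+ "print(causal_mask.numpy())\n",
+ "\n",
+ "# 常见用法:把被 mask 位置的 score 置为 -inf,再做 softmax\n",
+ "scores = paddle.randn([T, T])\n",
+ "scores = paddle.where(causal_mask > 0, scores, paddle.full_like(scores, float('-inf')))\n",
+ "attn = paddle.nn.functional.softmax(scores, axis=-1)\n",
+ "print(attn.numpy().round(2))\n"
+ ]
+ },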
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.4 模型训练\n",
+ "\n",
+ "模型训练同时使用了 CTC 损失和 cross entropy 交叉熵损失进行损失函数的计算。\n",
+ "\n",
+ "其中 Encoder 输出的特征直接进入 CTC Decoder 得到 CTC 损失。\n",
+ "\n",
+ "而 Decoder 的输出使用 cross entropy 损失。\n",
+ " \n",
+ "\n",
+ "
\n",
+ "
\n",
+ " (由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得)\n",
+ "
"
+ ]
+ },
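+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 示意 CTC 损失与交叉熵(attention)损失联合训练时常见的加权方式\n",
+ "# (权重名与数值均为假设,实际的权重请以 conf/transformer.yaml 中的配置和 U2Model 的实现为准)\n",
+ "ctc_weight = 0.3                # 假设的 CTC 损失权重\n",
+ "loss_ctc, loss_att = 12.3, 8.7  # 假设的两个损失值\n",
+ "loss = ctc_weight * loss_ctc + (1 - ctc_weight) * loss_att\n",
+ "print(loss)\n"
+ ]
+ },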
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "### 2.3.5 构建Transformer模型\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "model_conf = transformer_config.model\n",
+ "# input_dim 存储的是特征的纬度\n",
+ "model_conf.input_dim = 80\n",
+ "# output_dim 存储的字表的长度\n",
+ "model_conf.output_dim = 4233 \n",
+ "print (\"model_conf\", model_conf)\n",
+ "model = U2Model.from_config(model_conf)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ " \n",
+ "### 2.3.6 加载预训练的模型\n",
+ " \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "model_dict = paddle.load(checkpoint_path)\n",
+ "model.set_state_dict(model_dict)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "### 2.3.7 进行预测\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "decoding_config = transformer_config.decoding\n",
+ "text_feature = TextFeaturizer(unit_type='char',\n",
+ " vocab=transformer_config.collator.vocab_filepath)\n",
+ "\n",
+ "\n",
+ "result_transcripts = model.decode(\n",
+ " audio_feature,\n",
+ " audio_len,\n",
+ " text_feature=text_feature,\n",
+ " decoding_method=decoding_config.decoding_method,\n",
+ " beam_size=decoding_config.beam_size,\n",
+ " ctc_weight=decoding_config.ctc_weight,\n",
+ " decoding_chunk_size=decoding_config.decoding_chunk_size,\n",
+ " num_decoding_left_chunks=decoding_config.num_decoding_left_chunks,\n",
+ " simulate_streaming=decoding_config.simulate_streaming)\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "print (\"预测结果对应的token id为:\")\n",
+ "print (result_transcripts[1][0])\n",
+ "print (\"预测结果为:\")\n",
+ "print (result_transcripts[0][0])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 3. 作业 \n",
+ "1. 使用开发模式安装 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) \n",
+ "环境要求:docker, Ubuntu 16.04,root user。 \n",
+ "参考安装方法:[使用Docker安装paddlespeech](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md#hard-get-the-full-funciton-on-your-mechine)\n",
+ "\n",
+ "2. 跑通 example/aishell/asr1 中的 conformer 模型,完成训练和预测。 \n",
+ "\n",
+ "3. 按照 example 的格式使用自己的数据集训练 ASR 模型。 \n",
+ "\n",
+ "# 4. 关注 PaddleSpeech\n",
+ "\n",
+ "请关注我们的 [Github Repo](https://github.com/PaddlePaddle/PaddleSpeech/),非常欢迎加入以下微信群参与讨论:\n",
+ "- 扫描二维码\n",
+ "- 添加运营小姐姐微信\n",
+ "- 通过后回复【语音】\n",
+ "- 系统自动邀请加入技术群\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "# 5. 参考文献\n",
+ "\n",
+ "[1] Graves A, Fernández S, Gomez F, et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks[C]//Proceedings of the 23rd international conference on Machine learning. 2006: 369-376.\n",
+ "\n",
+ "[2] Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks[C]//2013 IEEE international conference on acoustics, speech and signal processing. Ieee, 2013: 6645-6649.\n",
+ "\n",
+ "[3] Hannun A, Case C, Casper J, et al. Deep speech: Scaling up end-to-end speech recognition[J]. arXiv preprint arXiv:1412.5567, 2014.\n",
+ "\n",
+ "[4] Amodei D, Ananthanarayanan S, Anubhai R, et al. Deep speech 2: End-to-end speech recognition in english and mandarin[C]//International conference on machine learning. PMLR, 2016: 173-182.\n",
+ "\n",
+ "[5] Chan W, Jaitly N, Le Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016: 4960-4964.\n",
+ "\n",
+ "[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in neural information processing systems. 2017: 5998-6008.\n",
+ "\n",
+ "[7] Zhang Q, Lu H, Sak H, et al. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 7829-7833.\n",
+ "\n",
+ "[8] Gulati A, Qin J, Chiu C C, et al. Conformer: Convolution-augmented transformer for speech recognition[J]. arXiv preprint arXiv:2005.08100, 2020."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/tutorial/cls/cls_tutorial.ipynb b/docs/tutorial/cls/cls_tutorial.ipynb
new file mode 100644
index 00000000..9b8bfc11
--- /dev/null
+++ b/docs/tutorial/cls/cls_tutorial.ipynb
@@ -0,0 +1,703 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "# 1. 识别声音\n",
+ " \n",
+ " 通过听取声音,人的大脑会获取到大量的信息,其中的一个场景是识别和归类,如:识别熟悉的亲人或朋友的声音、识别不同乐器发出的声音和识别不同环境产生的声音,等等。\n",
+ "\n",
+ " 我们可以根据不同声音的特征(频率,音色等)进行区分,这种区分行为的本质,就是对声音进行分类。\n",
+ "\n",
+ "声音分类根据用途还可以继续细分:\n",
+ "\n",
+ "* 副语言识别:说话人识别(Speaker Recognition), 情绪识别(Speech Emotion Recognition),性别分类(Speaker gender classification)\n",
+ "* 音乐识别:音乐流派分类(Music Genre Classification)\n",
+ "* 场景识别:环境声音分类(Environmental Sound Classification)\n",
+ "* 声音事件检测:各个环境中的声学事件检测\n",
+ " \n",
+ "\n",
+ "\n",
+ "图片来源:http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Speaker%20(v3).pdf\n",
+ "\n",
+ "## 1.1 Audio Tagging\n",
+ "使用 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) 的预训练模型对一段音频做实时的声音检测,结果如下视频所示。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "%%HTML\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 2. 音频和特征提取"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 环境准备:安装paddlespeech和paddleaudio\n",
+ "!pip install --upgrade pip && pip install paddlespeech paddleaudio -U"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import warnings\n",
+ "warnings.filterwarnings(\"ignore\")\n",
+ "import IPython\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "\n",
+ "\n",
+ "## 2.1 数字音频\n",
+ "\n",
+ "### 2.1.1 声音信号和音频文件\n",
+ " \n",
+ "下面通过一个例子观察音频文件的波形,直观地了解数字音频文件的包含的内容。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 获取示例音频\n",
+ "!test -f ./dog.wav || wget https://paddlespeech.bj.bcebos.com/PaddleAudio/dog.wav\n",
+ "IPython.display.Audio('./dog.wav')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from paddleaudio import load\n",
+ "data, sr = load(file='./dog.wav', mono=True, dtype='float32') # 单通道,float32音频样本点\n",
+ "print('wav shape: {}'.format(data.shape))\n",
+ "print('sample rate: {}'.format(sr))\n",
+ "\n",
+ "# 展示音频波形\n",
+ "plt.figure()\n",
+ "plt.plot(data)\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "!paddlespeech cls --input ./dog.wav"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.2 音频特征提取\n",
+ "\n",
+ "### 2.2.1 短时傅里叶变换\n",
+ "\n",
+ " 对于一段音频,一般会将整段音频进行分帧,每一帧含有一定长度的信号数据,一般使用 `25ms`,帧与帧之间的移动距离称为帧移,一般使用 `10ms`,然后对每一帧的信号数据加窗后,进行短时傅立叶变换(STFT)得到时频谱。\n",
+ " \n",
+ "通过按照上面的对一段音频进行分帧后,我们可以用傅里叶变换来分析每一帧信号的频率特性。将每一帧的频率信息拼接后,可以获得该音频不同时刻的频率特征——Spectrogram,也称作为语谱图。\n",
+ "\n",
+ "\n",
+ "图片参考:DLHLP 李宏毅 语音识别课程PPT;https://www.shong.win/2016/04/09/fft/\n",
+ "\n",
+ "
\n",
+ "下面例子采用 `paddle.signal.stft` 演示如何提取示例音频的频谱特征,并进行可视化:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle\n",
+ "import numpy as np\n",
+ "\n",
+ "x = paddle.to_tensor(data)\n",
+ "n_fft = 1024\n",
+ "win_length = 1024\n",
+ "hop_length = 512\n",
+ "\n",
+ "# [D, T]\n",
+ "spectrogram = paddle.signal.stft(x, n_fft=1024, win_length=1024, hop_length=512, onesided=True) \n",
+ "print('spectrogram.shape: {}'.format(spectrogram.shape))\n",
+ "print('spectrogram.dtype: {}'.format(spectrogram.dtype))\n",
+ "\n",
+ "\n",
+ "spec = np.log(np.abs(spectrogram.numpy())**2)\n",
+ "plt.figure()\n",
+ "plt.title(\"Log Power Spectrogram\")\n",
+ "plt.imshow(spec[:100, :], origin='lower')\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.2.2 LogFBank\n",
+ "\n",
+ "研究表明,人类对声音的感知是非线性的,随着声音频率的增加,人对更高频率的声音的区分度会不断下降。\n",
+ "\n",
+ "例如同样是相差 500Hz 的频率,一般人可以轻松分辨出声音中 500Hz 和 1,000Hz 之间的差异,但是很难分辨出 10,000Hz 和 10,500Hz 之间的差异。\n",
+ "\n",
+ "因此,学者提出了梅尔频率,在该频率计量方式下,人耳对相同数值的频率变化的感知程度是一样的。\n",
+ "\n",
+ "\n",
+ "图片来源:https://www.researchgate.net/figure/Curve-relationship-between-frequency-signal-with-its-mel-frequency-scale-Algorithm-1_fig3_221910348\n",
+ "\n",
+ "关于梅尔频率的计算,其会对原始频率的低频的部分进行较多的采样,从而对应更多的频率,而对高频的声音进行较少的采样,从而对应较少的频率。使得人耳对梅尔频率的低频和高频的区分性一致。\n",
+ "\n",
+ "图片来源:https://ww2.mathworks.cn/help/audio/ref/mfcc.html\n",
+ "\n",
+ "Mel Fbank 的计算过程如下,而我们一般都是使用 LogFBank 作为识别特征:\n",
+ "\n",
+ "图片来源:https://ww2.mathworks.cn/help/audio/ref/mfcc.html\n",
+ "\n",
+ "
\n",
+ "下面例子采用 `paddleaudio.features.LogMelSpectrogram` 演示如何提取示例音频的 LogFBank:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from paddleaudio.features import LogMelSpectrogram\n",
+ "\n",
+ "# - sr: 音频文件的采样率。\n",
+ "# - n_fft: FFT样本点个数。\n",
+ "# - hop_length: 音频帧之间的间隔。\n",
+ "# - win_length: 窗函数的长度。\n",
+ "# - window: 窗函数种类。\n",
+ "# - n_mels: 梅尔刻度数量。\n",
+ "feature_extractor2 = LogMelSpectrogram(\n",
+ " sr=sr, \n",
+ " n_fft=n_fft, \n",
+ " hop_length=hop_length, \n",
+ " win_length=win_length, \n",
+ " window='hann', \n",
+ " n_mels=64)\n",
+ "\n",
+ "x = paddle.to_tensor(data).unsqueeze(0) # [B, L]\n",
+ "log_fbank = feature_extractor2(x) # [B, D, T]\n",
+ "log_fbank = log_fbank.squeeze(0) # [D, T]\n",
+ "print('log_fbank.shape: {}'.format(log_fbank.shape))\n",
+ "\n",
+ "plt.figure()\n",
+ "plt.imshow(log_fbank.numpy(), origin='lower')\n",
+ "plt.show()"
+ ]
+ },
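+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 用常见的 HTK 式梅尔刻度公式 mel = 2595 * log10(1 + f / 700) 直观感受赫兹与梅尔频率的非线性关系\n",
+ "# (该公式是业界常用写法之一,仅用于示意,具体的梅尔滤波器实现以 paddleaudio 为准)\n",
+ "import numpy as np\n",
+ "\n",
+ "def hz_to_mel(f):\n",
+ "    return 2595.0 * np.log10(1.0 + f / 700.0)\n",
+ "\n",
+ "for f in [500, 1000, 10000, 10500]:\n",
+ "    print('{} Hz -> {:.1f} mel'.format(f, hz_to_mel(f)))\n",
+ "# 500Hz 与 1000Hz 之间的梅尔差远大于 10000Hz 与 10500Hz 之间的梅尔差,\n",
+ "# 与上文关于人耳对低频变化更敏感的描述一致\n"
+ ]
+ },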
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 2.3 声音分类方法\n",
+ "\n",
+ "### 2.3.1 传统机器学习方法\n",
+ "在传统的声音和信号的研究领域中,声音特征是一类包含丰富先验知识的手工特征,如频谱图、梅尔频谱和梅尔频率倒谱系数等。\n",
+ " \n",
+ "因此在一些分类的应用上,可以采用传统的机器学习方法例如决策树、svm和随机森林等方法。\n",
+ " \n",
+ "一个典型的应用案例是:男声和女声分类。\n",
+ "\n",
+ "\n",
+ "图片来源:https://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0179403.g001"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.2 深度学习方法\n",
+ "传统机器学习方法可以捕捉声音特征的差异(例如男声和女声的声音在音高上往往差异较大)并实现分类任务。\n",
+ " \n",
+ "而深度学习方法则可以突破特征的限制,更灵活的组网方式和更深的网络层次,可以更好地提取声音的高层特征,从而获得更好的分类指标。\n",
+ "\n",
+ "随着深度学习算法的快速发展和在分类任务上的优异表现,当下流行的声音分类模型无一不是采用深度学习网络搭建而成的,如 [AudioCLIP[1]](https://arxiv.org/pdf/2106.13043v1.pdf)、[PANNs[2]](https://arxiv.org/pdf/1912.10211v5.pdf) 和 [Audio Spectrogram Transformer[3]](https://arxiv.org/pdf/2104.01778v3.pdf) 等。\n",
+ "\n",
+ "图片来源:https://towardsdatascience.com/audio-deep-learning-made-simple-sound-classification-step-by-step-cebc936bbe5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 2.3.3 Pretrain + Finetune\n",
+ "\n",
+ "\n",
+ "在声音分类和声音检测的场景中(如环境声音分类、情绪识别和音乐流派分类等)由于可获取的据集有限,且语音数据标注的成本高,用户可以收集到的数据集体量往往较小,这种数据量稀少的情况对于模型训练是非常不利的。\n",
+ "\n",
+ "预训练模型能够减少领域数据的需求量,并达到较高的识别准确率。在CV和NLP领域中,有诸如 MobileNet、VGG19、YOLO、BERT 和 ERNIE 等开源的预训练模型,在图像检测、图像分类、文本分类和文本生成等各自领域内的任务中,使用预训练模型在下游任务的数据集上进行 finetune ,往往可以更快和更容易获得较好的效果和指标。\n",
+ "\n",
+ "相较于 CV 领域的 ImageNet 数据集,谷歌在 2017 年开放了一个大规模的音频数据集 [AudioSet[4]](https://ieeexplore.ieee.org/document/7952261),它是目前最大的用于音频分类任务的数据集。该数据集包含了 632 类的音频类别以及 2084320 条人工标记的每段 10 秒长度的声音剪辑片段(包括 527 个标签),数据总时长为 5,800 小时。\n",
+ "\n",
+ "\n",
+ "图片来源:https://research.google.com/audioset/ontology/index.html\n",
+ " \n",
+ "`PANNs`([PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition[2]](https://arxiv.org/pdf/1912.10211.pdf))是基于 AudioSet 数据集训练的声音分类/识别的模型,其中`PANNs-CNN14`在测试集上取得了较好的效果:mAP 为 0.431,AUC 为 0.973,d-prime 为 2.732,经过预训练后,该模型可以用于提取音频的 embbedding ,适合用于声音分类和声音检测等下游任务。本示例将使用 `PANNs` 的预训练模型 Finetune 完成声音分类的任务。\n",
+ "\n",
+ "\n",
+ " \n",
+ "本教程选取 `PANNs` 中的预训练模型 `cnn14` 作为 backbone,用于提取声音的深层特征,`SoundClassifer`创建下游的分类网络,实现对输入音频的分类。\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 3. 实践:环境声音分类\n",
+ "\n",
+ "## 3.1 数据集准备\n",
+ "\n",
+ "此课程选取了[ESC-50: Dataset for Environmental Sound Classification[5]](https://github.com/karolpiczak/ESC-50) 数据集作为示例。\n",
+ " \n",
+ "ESC-50是一个包含有 2000 个带标签的环境声音样本,音频样本采样率为 44,100Hz 的单通道音频文件,所有样本根据标签被划分为 50 个类别,每个类别有 40 个样本。\n",
+ "\n",
+ "音频样本可分为 5 个主要类别:\n",
+ " - 动物声音(Animals)\n",
+ " - 自然界产生的声音和水声(Natural soundscapes & water sounds)\n",
+ " - 人类发出的非语言声音(Human, non-speech sounds)\n",
+ " - 室内声音(Interior/domestic sounds)\n",
+ " - 室外声音和一般噪声(Exterior/urban noises)。\n",
+ "\n",
+ "\n",
+ "ESC-50 数据集中的提供的 `meta/esc50.csv` 文件包含的部分信息如下:\n",
+ "```\n",
+ " filename,fold,target,category,esc10,src_file,take\n",
+ " 1-100038-A-14.wav,1,14,chirping_birds,False,100038,A\n",
+ " 1-100210-A-36.wav,1,36,vacuum_cleaner,False,100210,A\n",
+ " 1-101296-A-19.wav,1,19,thunderstorm,False,101296,A\n",
+ " ...\n",
+ "```\n",
+ "\n",
+ " - filename: 音频文件名字。 \n",
+ " - fold: 数据集自身提供的N-Fold验证信息,用于切分训练集和验证集。\n",
+ " - target: 标签数值。\n",
+ " - category: 标签文本信息。\n",
+ " - esc10: 文件是否为ESC-10的数据集子集。\n",
+ " - src_file: 原始音频文件前缀。\n",
+ " - take: 原始文件的截取段落信息。\n",
+ " \n",
+ "在此声音分类的任务中,我们将`target`作为训练过程的分类标签。\n",
+ "\n",
+ "### 3.1.1 数据集初始化\n",
+ "调用以下代码自动下载并读取数据集音频文件,创建训练集和验证集。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from paddleaudio.datasets import ESC50\n",
+ "\n",
+ "train_ds = ESC50(mode='train')\n",
+ "dev_ds = ESC50(mode='dev')"
+ ]
+ },
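+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# 补充示意:如果想手动解析 meta/esc50.csv 并按 fold 字段切分训练集/验证集,大致可以这样做\n",
+ "# (csv 路径为假设值,ESC50 类内部已经封装了类似逻辑,此处仅帮助理解上面介绍的数据格式)\n",
+ "import pandas as pd\n",
+ "\n",
+ "meta = pd.read_csv('meta/esc50.csv')  # 假设的本地路径\n",
+ "dev_meta = meta[meta['fold'] == 1]    # 取第 1 折作为验证集\n",
+ "train_meta = meta[meta['fold'] != 1]  # 其余折作为训练集\n",
+ "print(len(train_meta), len(dev_meta))\n",
+ "print(train_meta[['filename', 'target', 'category']].head())\n"
+ ]
+ },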
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.1.2 特征提取\n",
+ "通过下列代码,用 `paddleaudio.features.LogMelSpectrogram` 初始化一个音频特征提取器,在训练过程中实时提取音频的 LogFBank 特征,其中主要的参数如下: "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "feature_extractor = LogMelSpectrogram(sr=44100, n_fft=n_fft, hop_length=hop_length, win_length=win_length, window='hann', n_mels=64)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 3.2 模型\n",
+ "\n",
+ "### 3.2.1 选取预训练模型\n",
+ "\n",
+ "选取`cnn14`作为 backbone,用于提取音频的特征:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from paddlespeech.cls.models import cnn14\n",
+ "backbone = cnn14(pretrained=True, extract_embedding=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "### 3.2.2 构建分类模型\n",
+ "\n",
+ "`SoundClassifer`接收`cnn14`作为backbone模型,并创建下游的分类网络:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "import paddle.nn as nn\n",
+ "\n",
+ "\n",
+ "class SoundClassifier(nn.Layer):\n",
+ "\n",
+ " def __init__(self, backbone, num_class, dropout=0.1):\n",
+ " super().__init__()\n",
+ " self.backbone = backbone\n",
+ " self.dropout = nn.Dropout(dropout)\n",
+ " self.fc = nn.Linear(self.backbone.emb_size, num_class)\n",
+ "\n",
+ " def forward(self, x):\n",
+ " x = x.unsqueeze(1)\n",
+ " x = self.backbone(x)\n",
+ " x = self.dropout(x)\n",
+ " logits = self.fc(x)\n",
+ "\n",
+ " return logits\n",
+ "\n",
+ "model = SoundClassifier(backbone, num_class=len(ESC50.label_list))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 3.3 Finetune"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "1. 创建 DataLoader "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "batch_size = 16\n",
+ "train_loader = paddle.io.DataLoader(train_ds, batch_size=batch_size, shuffle=True)\n",
+ "dev_loader = paddle.io.DataLoader(dev_ds, batch_size=batch_size,)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "2. 定义优化器和 Loss"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "optimizer = paddle.optimizer.Adam(learning_rate=1e-4, parameters=model.parameters())\n",
+ "criterion = paddle.nn.loss.CrossEntropyLoss()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "3. 启动模型训练 "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "from paddleaudio.utils import logger\n",
+ "\n",
+ "epochs = 20\n",
+ "steps_per_epoch = len(train_loader)\n",
+ "log_freq = 10\n",
+ "eval_freq = 10\n",
+ "\n",
+ "for epoch in range(1, epochs + 1):\n",
+ " model.train()\n",
+ "\n",
+ " avg_loss = 0\n",
+ " num_corrects = 0\n",
+ " num_samples = 0\n",
+ " for batch_idx, batch in enumerate(train_loader):\n",
+ " waveforms, labels = batch\n",
+ " feats = feature_extractor(waveforms)\n",
+ " feats = paddle.transpose(feats, [0, 2, 1]) # [B, N, T] -> [B, T, N]\n",
+ " logits = model(feats)\n",
+ "\n",
+ " loss = criterion(logits, labels)\n",
+ " loss.backward()\n",
+ " optimizer.step()\n",
+ " if isinstance(optimizer._learning_rate,\n",
+ " paddle.optimizer.lr.LRScheduler):\n",
+ " optimizer._learning_rate.step()\n",
+ " optimizer.clear_grad()\n",
+ "\n",
+ " # Calculate loss\n",
+ " avg_loss += loss.numpy()[0]\n",
+ "\n",
+ " # Calculate metrics\n",
+ " preds = paddle.argmax(logits, axis=1)\n",
+ " num_corrects += (preds == labels).numpy().sum()\n",
+ " num_samples += feats.shape[0]\n",
+ "\n",
+ " if (batch_idx + 1) % log_freq == 0:\n",
+ " lr = optimizer.get_lr()\n",
+ " avg_loss /= log_freq\n",
+ " avg_acc = num_corrects / num_samples\n",
+ "\n",
+ " print_msg = 'Epoch={}/{}, Step={}/{}'.format(\n",
+ " epoch, epochs, batch_idx + 1, steps_per_epoch)\n",
+ " print_msg += ' loss={:.4f}'.format(avg_loss)\n",
+ " print_msg += ' acc={:.4f}'.format(avg_acc)\n",
+ " print_msg += ' lr={:.6f}'.format(lr)\n",
+ " logger.train(print_msg)\n",
+ "\n",
+ " avg_loss = 0\n",
+ " num_corrects = 0\n",
+ " num_samples = 0\n",
+ "\n",
+ " if epoch % eval_freq == 0 and batch_idx + 1 == steps_per_epoch:\n",
+ " model.eval()\n",
+ " num_corrects = 0\n",
+ " num_samples = 0\n",
+ " with logger.processing('Evaluation on validation dataset'):\n",
+ " for batch_idx, batch in enumerate(dev_loader):\n",
+ " waveforms, labels = batch\n",
+ " feats = feature_extractor(waveforms)\n",
+ " feats = paddle.transpose(feats, [0, 2, 1])\n",
+ " \n",
+ " logits = model(feats)\n",
+ "\n",
+ " preds = paddle.argmax(logits, axis=1)\n",
+ " num_corrects += (preds == labels).numpy().sum()\n",
+ " num_samples += feats.shape[0]\n",
+ "\n",
+ " print_msg = '[Evaluation result]'\n",
+ " print_msg += ' dev_acc={:.4f}'.format(num_corrects / num_samples)\n",
+ "\n",
+ " logger.eval(print_msg)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "## 3.4 音频预测\n",
+ "\n",
+ "执行预测,获取 Top K 分类结果:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "top_k = 10\n",
+ "wav_file = './dog.wav'\n",
+ "\n",
+ "waveform, sr = load(wav_file)\n",
+ "feature_extractor = LogMelSpectrogram(sr=sr, n_fft=n_fft, hop_length=hop_length, win_length=win_length, window='hann', n_mels=64)\n",
+ "feats = feature_extractor(paddle.to_tensor(paddle.to_tensor(waveform).unsqueeze(0)))\n",
+ "feats = paddle.transpose(feats, [0, 2, 1]) # [B, N, T] -> [B, T, N]\n",
+ "print(feats.shape)\n",
+ "\n",
+ "logits = model(feats)\n",
+ "probs = nn.functional.softmax(logits, axis=1).numpy()\n",
+ "\n",
+ "sorted_indices = probs[0].argsort()\n",
+ "\n",
+ "msg = f'[{wav_file}]\\n'\n",
+ "for idx in sorted_indices[-top_k:]:\n",
+ " msg += f'{ESC50.label_list[idx]}: {probs[0][idx]:.5f}\\n'\n",
+ "print(msg)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "# 4. 作业\n",
+ "1. 使用开发模式安装 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) \n",
+ "环境要求:docker, Ubuntu 16.04,root user。 \n",
+ "参考安装方法:[使用Docker安装paddlespeech](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md#hard-get-the-full-funciton-on-your-mechine)\n",
+ "1. 在 [MusicSpeech](http://marsyas.info/downloads/datasets.html) 数据集上完成 music/speech 二分类。 \n",
+ "2. 在 [GTZAN Genre Collection](http://marsyas.info/downloads/datasets.html) 音乐分类数据集上利用 PANNs 预训练模型实现音乐类别十分类。\n",
+ "\n",
+ "\n",
+ "# 5. 关注 PaddleSpeech\n",
+ "\n",
+ "请关注我们的 [Github Repo](https://github.com/PaddlePaddle/PaddleSpeech/),非常欢迎加入以下微信群参与讨论:\n",
+ "- 扫描二维码\n",
+ "- 添加运营小姐姐微信\n",
+ "- 通过后回复【语音】\n",
+ "- 系统自动邀请加入技术群\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "# 6. 参考文献\n",
+ "\n",
+ "[1] Guzhov, A., Raue, F., Hees, J., & Dengel, A.R. (2021). AudioCLIP: Extending CLIP to Image, Text and Audio. ArXiv, abs/2106.13043.\n",
+ " \n",
+ "[2] Kong, Q., Cao, Y., Iqbal, T., Wang, Y., Wang, W., & Plumbley, M.D. (2020). PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 2880-2894.\n",
+ " \n",
+ "[3] Gong, Y., Chung, Y., & Glass, J.R. (2021). AST: Audio Spectrogram Transformer. ArXiv, abs/2104.01778.\n",
+ " \n",
+ "[4] Gemmeke, J.F., Ellis, D.P., Freedman, D., Jansen, A., Lawrence, W., Moore, R.C., Plakal, M., & Ritter, M. (2017). Audio Set: An ontology and human-labeled dataset for audio events. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 776-780.\n",
+ "\n",
+ "[5] Piczak, K.J. (2015). ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd ACM international conference on Multimedia.\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "py35-paddle1.2.0"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}