{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"<a href=\"https://github.com/PaddlePaddle/PaddleSpeech\"><img style=\"position: absolute; z-index: 999; top: 0; right: 0; border: 0; width: 128px; height: 128px;\" src=\"https://nosir.github.io/cleave.js/images/right-graphite@2x.png\" alt=\"Fork me on GitHub\"></a>\n",
" \n",
"# 使用 Transformer 进行语音识别\n",
"\n",
"# 0. 视频理解与字幕"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 下载demo视频\n",
"!test -f work/source/subtitle_demo1.mp4 || wget -c https://paddlespeech.bj.bcebos.com/demos/asr_demos/subtitle_demo1.mp4 -P work/source/"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import IPython.display as dp\n",
"from IPython.display import HTML\n",
"html_str = '''\n",
"<video controls width=\"600\" height=\"360\" src=\"{}\">animation</video>\n",
"'''.format(\"work/source/subtitle_demo1.mp4 \")\n",
"dp.display(HTML(html_str))\n",
"print (\"ASR结果为当我说我可以把三十年的经验变成一个准确的算法他们说不可能当我说我们十个人就能实现对十九个城市变电站七乘二十四小时的实时监管他们说不可能\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"> Demo实现[https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/demos/automatic_video_subtitiles/)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# 1. 前言\n",
"\n",
"## 1.1 背景知识\n",
"语音识别(Automatic Speech Recognition, ASR) 是一项从一段音频中提取出语言文字内容的任务。 \n",
"目前该技术已经广泛应用于我们的工作和生活当中,包括生活中使用手机的语音转写,工作上使用的会议记录等等。\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/0231a71b0617485d85586d232f65db6379115befdf014068bd90fb15c5786c94\"/>\n",
"<br>\n",
"(出处DLHLP 李宏毅 语音识别课程PPT)\n",
"</div>\n",
"<br></br>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 1.2 发展历史\n",
"\n",
"\n",
"* 早期生成模型流行阶段GMM-HMM (上世纪90年代)\n",
"* 深度学习爆发初期: DNNCTC[1] 2006\n",
"* RNN 流行Attention 提出初期: RNN-T[2]2013, DeepSpeech[3] (2014) DeepSpeech2 [4] (2016) LAS[5]2016\n",
"* Attetion is all you need 提出开始[6]: Transformer[6]2017Transformer-transducer[7]2020 Conformer[8] 2020\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/d6060426bba341a187422803c0f8ac2e2162c5c5422e4070a3425c09f7801379\" height=1300, width=1000 />\n",
"</div>\n",
"\n",
"目前 Transformer 和 Conformer 是语音识别领域的主流模型,因此本教程采用了 Transformer 作为讲解的主要内容,并在课后作业中步骤了 Conformer 的相关练习。"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"# 2. 实战使用Transformer进行语音识别的流程\n",
"\n",
"CTC 的输出相互独立,使得每一帧利用上下文的信息的能力不足。\n",
"\n",
"而 seq2seqTransformerConformer 的模型采用自回归的解码方式,所以其建模能力更强,但不便于支持流式。\n",
"\n",
"对于Transformer模型它的Encoder可以有效对语音特征的上下文进行建模。而它的Decoder具有语言模型的能力能够将语言模型融合进整个模型中是真正意义上的端到端模型。\n",
"\n",
"\n",
"下面简单介绍下 Transformer 语音识别模型,其主要分为 2 个部分:\n",
"\n",
"\t- Encoder声学特征会首先进入 Encoder产生高层特征编码。\n",
"\n",
" - DecoderDecoder 利用 Encoder 产生的特征编码解码得到预测结果。\n",
" \n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/13bec64ab9544a3a91205a9633d9f015f2ddb0c3586d49ffb39307daed0229a0\" height=40%, width=50%/>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 2.1 准备工作\n",
"\n",
"### 2.1.1 安装 paddlespeech"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"!pip install --upgrade pip && pip install paddlespeech"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.1.2 准备工作目录"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"!mkdir -p ./work/workspace_asr\n",
"%cd ./work/workspace_asr"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"### 2.1.3 获取预训练模型和音频文件\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 获取模型\n",
"!test -f transformer.model.tar.gz || wget -nc https://paddlespeech.bj.bcebos.com/s2t/aishell/asr1/transformer.model.tar.gz\n",
"!tar xzvf transformer.model.tar.gz\n",
"\n",
"# 获取用于预测的音频文件\n",
"!test -f ./data/demo_01_03.wav || wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P ./data/"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import IPython\n",
"IPython.display.Audio('./data/demo_01_03.wav')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 快速体验识别结果\n",
"!paddlespeech asr --input ./data/demo_01_03.wav"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.1.4 导入python包"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import paddle\n",
"import soundfile\n",
"\n",
"import warnings\n",
"warnings.filterwarnings('ignore')\n",
"\n",
"from yacs.config import CfgNode\n",
"from paddlespeech.s2t.transform.spectrogram import LogMelSpectrogramKaldi\n",
"from paddlespeech.s2t.transform.cmvn import GlobalCMVN\n",
"from paddlespeech.s2t.frontend.featurizer.text_featurizer import TextFeaturizer\n",
"from paddlespeech.s2t.models.u2 import U2Model\n",
"\n",
"from matplotlib import pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.1.5 设置预训练模型的路径"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"config_path = \"conf/transformer.yaml\" \n",
"checkpoint_path = \"./exp/transformer/checkpoints/avg_20.pdparams\"\n",
"decoding_method = \"attention\"\n",
"audio_file = \"data/demo_01_03.wav\"\n",
"\n",
"# 读取 conf 文件并结构化\n",
"transformer_config = CfgNode(new_allowed=True)\n",
"transformer_config.merge_from_file(config_path)\n",
"transformer_config.decoding.decoding_method = decoding_method\n",
"print(transformer_config)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 2.2 获取特征\n",
"\n",
"### 2.2.1 音频特征 logfbank\n",
"\n",
"#### 2.2.1.1 语音特征提取整体流程图\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/54aefbc16dbf4487a7abe38b0210e5dbf1bb0c74fbe4459f94880a06950269f9\" height=1200, width=800 />\n",
"<br>\n",
"由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得\n",
"</div>\n",
"\n",
"#### 2.2.1.2 logfbank 提取过程简化图\n",
"\n",
"logfbank 特征提取大致可以分为 3 个步骤:\n",
"\n",
"1. 语音时域信号经过预加重(信号高频分量补偿),然后进行分帧。\n",
"\n",
"2. 每一帧数据加窗后经过离散傅立叶变换DFT得到频谱图。\n",
"\n",
"3. 将频谱图的特征经过 Mel 滤波器得到 logmel fbank 特征。\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/08f7ccecc848495599c350aa2c440071b818ba0465734dd29701a2ff149f0a8c\"/>\n",
"<br>\n",
"由\"DLHLP 李宏毅 语音识别课程 PPT\" 修改得\n",
"</div>\n",
"\n",
"#### 2.2.1.3 CMVN 计算过程\n",
"\n",
"对于所有获取的特征,模型在使用前会使用 CMVN 的方式进行归一化\n",
"\n",
"<div align=center>\n",
" <img src=\"https://ai-studio-static-online.cdn.bcebos.com/46df63199d88481d9a2713a45ce63d00220e8ac42f9940e886282017758b54bf\"/>\n",
"</div>"
]
},
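{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"The next cell is a minimal NumPy-only sketch of the three logfbank steps and of CMVN described above, using the same parameter values as the `LogMelSpectrogramKaldi` call below (16 kHz, 80 Mel bins, 400-sample window, 160-sample shift). It is for illustration only and is not the `LogMelSpectrogramKaldi` / `GlobalCMVN` implementation used in this tutorial; the helper names, the filter-bank construction, and the per-utterance (rather than global) CMVN statistics are simplifying assumptions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustrative NumPy sketch of logfbank extraction + CMVN (not the PaddleSpeech implementation).\n",
"import numpy as np\n",
"\n",
"def hz_to_mel(hz):\n",
"    return 2595.0 * np.log10(1.0 + hz / 700.0)\n",
"\n",
"def mel_to_hz(mel):\n",
"    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)\n",
"\n",
"def logfbank(signal, fs=16000, n_mels=80, win_length=400, n_shift=160, n_fft=512, preemph=0.97):\n",
"    # step 1: pre-emphasis (boost high frequencies), then split into overlapping frames\n",
"    signal = np.append(signal[0], signal[1:] - preemph * signal[:-1])\n",
"    n_frames = 1 + max(0, (len(signal) - win_length) // n_shift)\n",
"    idx = np.arange(win_length)[None, :] + n_shift * np.arange(n_frames)[:, None]\n",
"    frames = signal[idx] * np.hamming(win_length)\n",
"    # step 2: DFT of every windowed frame -> power spectrum\n",
"    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2\n",
"    # step 3: triangular Mel filter bank, then log\n",
"    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2)\n",
"    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)\n",
"    fbank = np.zeros((n_mels, n_fft // 2 + 1))\n",
"    for m in range(1, n_mels + 1):\n",
"        left, center, right = bins[m - 1], bins[m], bins[m + 1]\n",
"        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)\n",
"        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)\n",
"    return np.log(power @ fbank.T + 1e-10)  # shape: (n_frames, n_mels)\n",
"\n",
"def cmvn(feat):\n",
"    # CMVN: per-dimension mean subtraction and variance normalization\n",
"    # (the tutorial uses *global* statistics loaded from data/mean_std.json instead)\n",
"    return (feat - feat.mean(axis=0)) / (feat.std(axis=0) + 1e-10)\n",
"\n",
"fake_audio = np.random.randn(16000).astype(np.float32)  # 1 second of fake 16 kHz audio\n",
"print(cmvn(logfbank(fake_audio)).shape)  # -> (frames, 80)"
]
},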
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.2.2 构建音频特征提取对象"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# 构建 logmel 特征\n",
"logmel_kaldi= LogMelSpectrogramKaldi(\n",
" fs= 16000,\n",
" n_mels= 80,\n",
" n_shift= 160,\n",
" win_length= 400,\n",
" dither= True)\n",
"\n",
"# 特征减均值除以方差\n",
"cmvn = GlobalCMVN(\n",
" cmvn_path=\"data/mean_std.json\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.2.3 提取音频的特征"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"array, _ = soundfile.read(audio_file, dtype=\"int16\")\n",
"array = logmel_kaldi(array, train=False)\n",
"audio_feature_i = cmvn(array)\n",
"audio_len = audio_feature_i.shape[0]\n",
"\n",
"audio_len = paddle.to_tensor(audio_len)\n",
"audio_feature = paddle.to_tensor(audio_feature_i, dtype='float32')\n",
"audio_feature = paddle.unsqueeze(audio_feature, axis=0)\n",
"print (audio_feature.shape)\n",
"\n",
"plt.figure()\n",
"plt.imshow(audio_feature_i.T, origin='lower')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 2.3 使用模型获得结果\n",
"\n",
"### 2.3.1 Transofomer 语音识别模型的结构\n",
"\n",
"\n",
"Transformer 模型主要由 2 个部分组成,包括 Transformer Encoder 和 Transformer Decoder。 \n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/1edcd4ef683c4ef981b375ab8df388b40e3afc5f439f47f1a6f2f230908b63b1\" height=50%, width=50% />\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.3.2 Transformer Encoder\n",
"\n",
"Transformer encoder 主要是对音频的原始特征(这里原始特征使用的是 80 维 logfbank进行特征编码其输入是 logfbank输出是特征编码。包含\n",
"\n",
"* 位置编码position encoding\n",
"* 降采样模块(subsampling embedding) 由2层降采样的 CNN 构成。\n",
"* Transformer Encoder Layer \n",
" * self-attention 主要特点是Q(query), K(key)和V(value)都是用了相同的值\n",
" * Feed forward Layer 由两层全连接层构建,其特点是保持了输入和输出的特征维度是一致的。\n",
"\n",
"\n",
"#### 2.3.2.1 Self-Attention\n",
"\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/72ffd9016d3841149723be2dde2a48c495ce8a95358946bca3736053812c788c\" height=50%, width=50% />\n",
"</div>\n",
"\n",
"其主要步骤可以分为三步:\n",
"\n",
"1. `Q` 和 `K` 的向量通过求内积的方式计算相似度,经过 scale 和 softmax 后,获得每个 `Q` 和所有`K` 之间的 score。\n",
"\n",
"2. 将每个 `Q` 和所有 `K` 之间的 score 和 `V` 进行相乘,再将相乘后的结果求和,得到 self-attetion 的输出向量。\n",
"\n",
"3. 使用多个 Attetion 模块均进行第一步和第二步,并将最后的输出向量进行合并,得到最终 Multi-Head Self-Attention 的输出。\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/fcdef1992e6d4c909403d603062d09e4d5adaff0226e4367b35d27aea2da1303\" height=30%, width=30% />\n",
"</div>\n",
"\n"
]
},
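{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Below is a minimal NumPy sketch of multi-head self-attention that follows the three steps above. It is only an illustration under simplifying assumptions: the learned linear projections for `Q`, `K`, `V` and for the output are omitted, and it is not PaddleSpeech's attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustrative NumPy sketch of (multi-head) scaled dot-product self-attention.\n",
"import numpy as np\n",
"\n",
"def softmax(x, axis=-1):\n",
"    x = x - x.max(axis=axis, keepdims=True)\n",
"    e = np.exp(x)\n",
"    return e / e.sum(axis=axis, keepdims=True)\n",
"\n",
"def self_attention(q, k, v):\n",
"    # step 1: dot-product similarity, scaling and softmax -> scores\n",
"    d_k = q.shape[-1]\n",
"    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_k))  # (heads, T, T)\n",
"    # step 2: use the scores to take a weighted sum of the V vectors\n",
"    return scores @ v  # (heads, T, d_k)\n",
"\n",
"def multi_head_self_attention(x, n_heads=4):\n",
"    # step 3: run several heads in parallel and concatenate their outputs\n",
"    # (the learned Q/K/V/output projections are omitted in this sketch)\n",
"    T, d_model = x.shape\n",
"    d_k = d_model // n_heads\n",
"    heads = x.reshape(T, n_heads, d_k).transpose(1, 0, 2)  # Q = K = V = x\n",
"    out = self_attention(heads, heads, heads)\n",
"    return out.transpose(1, 0, 2).reshape(T, d_model)  # (T, d_model)\n",
"\n",
"x = np.random.randn(10, 256).astype(np.float32)  # 10 frames of 256-dim features\n",
"print(multi_head_self_attention(x).shape)  # -> (10, 256)"
]
},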
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.3.3 Transformer Decoder\n",
"\n",
"Transformer 的 Decoder 用于获取最后的输出结果。其结构和 Encoder 有一定的相似性,也具有 Attention 模块和 Feed forward layer。\n",
"主要的不同点有 2 个:\n",
"1. Decoder 采用的是一种自回归的方式进行解码。\n",
"2. Decoder 在 Multi-head self-attention 和 Feed forward layer 模块之间增加了一层 Multi-head cross-attention 层用于获取 Encoder 得到的特征编码。\n",
"\n",
"\n",
"#### 2.3.3.1 Masked Multi-head Self-Attention\n",
"细心的同学可能发现了Decoder 的一个 Multi-head self-attention 前面有一个 mask 。增加了这个 mask 的原因在于进行 Decoder 训练的时候Decoder 的输入是一句完整的句子,而不是像预测这样一步步输入句子的前缀。\n",
"\n",
"为了模拟预测的过程Decoder 训练的时候需要用 mask 遮住句子。 例如 `T=1` 时,就要 mask 输入中除第一个字符以外其他的字符,`T=2` 的时候则需要 mask 除前两个字符以外的其余字符。\n",
"\n",
"#### 2.3.3.2 Cross Attention\n",
"\n",
"Decoder 在每一步的解码过程中,都会利用 Encoder 的输出的特征编码进行 cross-attention。\n",
"\n",
"其中Decoder会将自回结果的编码作为 Attention 中的 `Q` ,而 Encoder 输出的特征编码作为 `K` 和 `V` 来完成 attetion 计算,从而利用 Encoder 提取的音频信息。\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/8e93122eb65344ea885a8af9014de4569b7c9c9f55aa45f7ac17ba2d0b0af260\" hegith=30%, width=30% />\n",
"</div>\n",
"\n",
"#### 2.3.3.3 Decoder的自回归解码 \n",
"\n",
"其采用了一种自回归的结构,即 Decoder 的上一个时间点的输出会作为下一个时间点的输入。\n",
"\n",
"另外计算的过程中Decoder 会利用 Encoder 的输出信息。\n",
"\n",
"如果使用贪心greedy的方式Decoder 的解码过程如下:\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/0acaf9f243304120832018b83a4b7c67b8d578f710ce4eeba6062ab9661ef9e7\" hegith=50%, width=50% />\n",
"</div>\n",
"\n",
"使用 greedy 模式解码比较简单,但是很有可能会在解码过程中丢失整体上效果更好的解码结果。\n",
"\n",
"因此我们实际使用的是 beam search 方式的解码beam search 模式下的 decoder 的解码过程如下:\n",
"\n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/367f8f7cd4b4451ab45dd883045c500d941f0d235fca4ad2a3ccb925ec59aea2\" hegith=50%, width=50%/>\n",
"</div>\n",
"\n"
]
},
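{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"The next cell illustrates two ideas from this section with toy NumPy code: the lower-triangular (causal) mask used by the masked self-attention, and a greedy autoregressive decoding loop. `toy_decoder_step` is a hypothetical stand-in for the real Decoder forward pass, not the PaddleSpeech API; beam search would keep the `beam_size` best prefixes at every step instead of a single one."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Toy NumPy illustrations of the causal mask and of greedy autoregressive decoding.\n",
"import numpy as np\n",
"\n",
"# (1) causal mask: position t may only attend to positions <= t\n",
"T = 5\n",
"causal_mask = np.tril(np.ones((T, T), dtype=bool))\n",
"print(causal_mask.astype(int))  # row t keeps only the first t tokens\n",
"\n",
"# (2) greedy decoding: feed the previous outputs back in at every step\n",
"SOS, EOS, VOCAB = 0, 1, 10\n",
"\n",
"def toy_decoder_step(encoder_out, prefix):\n",
"    # stand-in: a real Decoder would attend to `encoder_out` (cross-attention)\n",
"    # and to `prefix` (masked self-attention) and return next-token logits\n",
"    rng = np.random.default_rng(len(prefix))\n",
"    return rng.standard_normal(VOCAB)\n",
"\n",
"encoder_out = np.random.randn(100, 256)  # fake Encoder feature encodings\n",
"hyp = [SOS]\n",
"for _ in range(20):\n",
"    logits = toy_decoder_step(encoder_out, hyp)\n",
"    next_token = int(np.argmax(logits))  # greedy: always pick the single best token\n",
"    hyp.append(next_token)\n",
"    if next_token == EOS:\n",
"        break\n",
"print(hyp)"
]
},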
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### 2.3.4 模型训练\n",
"\n",
"模型训练同时使用了 CTC 损失和 cross entropy 交叉熵损失进行损失函数的计算。\n",
"\n",
"其中 Encoder 输出的特征直接进入 CTC Decoder 得到 CTC 损失。\n",
"\n",
"而 Decoder 的输出使用 cross entropy 损失。\n",
" \n",
"<div align=center>\n",
"<img src=\"https://ai-studio-static-online.cdn.bcebos.com/fe1d3864f18f4df0a9ab3df8dc4e361a693250b387344273952315ca14d30732\"/>\n",
" <br>\n",
" (由\"莊永松、柯上優 DLHLP - HW1 End-to-end Speech Recognition PPT\" 修改得)\n",
"</div>"
]
},
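{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"The next cell sketches how the two losses are combined during training: a CTC loss computed from the Encoder output and a cross-entropy loss computed from the Decoder output are mixed with a weight. The cross-entropy below is a toy NumPy implementation, the CTC term is a placeholder scalar, and the weight value is an assumption; this only illustrates the weighted combination, not the actual PaddleSpeech training code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Illustrative sketch of the joint objective: L = w * L_CTC + (1 - w) * L_CE.\n",
"import numpy as np\n",
"\n",
"def cross_entropy(logits, targets):\n",
"    # toy attention-branch loss: softmax cross-entropy over the Decoder outputs\n",
"    logits = logits - logits.max(axis=-1, keepdims=True)\n",
"    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))\n",
"    return -np.mean(log_probs[np.arange(len(targets)), targets])\n",
"\n",
"vocab_size, target_len = 4233, 6\n",
"decoder_logits = np.random.randn(target_len, vocab_size)  # fake Decoder outputs\n",
"targets = np.random.randint(0, vocab_size, size=target_len)  # fake reference tokens\n",
"\n",
"loss_ce = cross_entropy(decoder_logits, targets)\n",
"loss_ctc = 3.2  # placeholder: the CTC branch would score the Encoder output + CTC head\n",
"\n",
"ctc_weight = 0.3  # mixing weight (a hyperparameter; 0.3 is used here as an assumed example)\n",
"loss = ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_ce\n",
"print(loss)"
]
},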
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"\n",
"### 2.3.5 构建Transformer模型\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model_conf = transformer_config.model\n",
"# input_dim 存储的是特征的纬度\n",
"model_conf.input_dim = 80\n",
"# output_dim 存储的字表的长度\n",
"model_conf.output_dim = 4233 \n",
"print (\"model_conf\", model_conf)\n",
"model = U2Model.from_config(model_conf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
" \n",
"### 2.3.6 加载预训练的模型\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"model_dict = paddle.load(checkpoint_path)\n",
"model.set_state_dict(model_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"### 2.3.7 进行预测\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"decoding_config = transformer_config.decoding\n",
"text_feature = TextFeaturizer(unit_type='char',\n",
" vocab=transformer_config.collator.vocab_filepath)\n",
"\n",
"\n",
"result_transcripts = model.decode(\n",
" audio_feature,\n",
" audio_len,\n",
" text_feature=text_feature,\n",
" decoding_method=decoding_config.decoding_method,\n",
" beam_size=decoding_config.beam_size,\n",
" ctc_weight=decoding_config.ctc_weight,\n",
" decoding_chunk_size=decoding_config.decoding_chunk_size,\n",
" num_decoding_left_chunks=decoding_config.num_decoding_left_chunks,\n",
" simulate_streaming=decoding_config.simulate_streaming)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print (\"预测结果对应的token id为:\")\n",
"print (result_transcripts[1][0])\n",
"print (\"预测结果为:\")\n",
"print (result_transcripts[0][0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# 3. 作业 \n",
"1. 使用开发模式安装 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech) \n",
"环境要求docker, Ubuntu 16.04root user。 \n",
"参考安装方法:[使用Docker安装paddlespeech](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md#hard-get-the-full-funciton-on-your-mechine)\n",
"\n",
"2. 跑通 example/aishell/asr1 中的 conformer 模型,完成训练和预测。 \n",
"\n",
"3. 按照 example 的格式使用自己的数据集训练 ASR 模型。 \n",
"\n",
"# 4. 关注 PaddleSpeech\n",
"\n",
"请关注我们的 [Github Repo](https://github.com/PaddlePaddle/PaddleSpeech/),非常欢迎加入以下微信群参与讨论:\n",
"- 扫描二维码\n",
"- 添加运营小姐姐微信\n",
"- 通过后回复【语音】\n",
"- 系统自动邀请加入技术群\n",
"\n",
"\n",
"<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/87bc7da42bcc401bae41d697f13d8b362bfdfd7198f14096b6d46b4004f09613\" width=\"300\" height=\"300\" ></center>\n",
"\n",
"# 5. 参考文献\n",
"\n",
"[1] Graves A, Fernández S, Gomez F, et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks[C]//Proceedings of the 23rd international conference on Machine learning. 2006: 369-376.\n",
"\n",
"[2] Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks[C]//2013 IEEE international conference on acoustics, speech and signal processing. Ieee, 2013: 6645-6649.\n",
"\n",
"[3] Hannun A, Case C, Casper J, et al. Deep speech: Scaling up end-to-end speech recognition[J]. arXiv preprint arXiv:1412.5567, 2014.\n",
"\n",
"[4] Amodei D, Ananthanarayanan S, Anubhai R, et al. Deep speech 2: End-to-end speech recognition in english and mandarin[C]//International conference on machine learning. PMLR, 2016: 173-182.\n",
"\n",
"[5] Chan W, Jaitly N, Le Q, et al. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition[C]//2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016: 4960-4964.\n",
"\n",
"[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in neural information processing systems. 2017: 5998-6008.\n",
"\n",
"[7] Zhang Q, Lu H, Sak H, et al. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020: 7829-7833.\n",
"\n",
"[8] Gulati A, Qin J, Chiu C C, et al. Conformer: Convolution-augmented transformer for speech recognition[J]. arXiv preprint arXiv:2005.08100, 2020."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}