PaddleSpeech/docs/tutorial/tts/tts_tutorial.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 『听』和『说』\n",
"人类通过听觉获取的信息大约占所有感知信息的 20% ~ 30%。声音存储了丰富的语义以及时序信息,由专门负责听觉的器官接收信号,产生一系列连锁刺激后,在人类大脑的皮层听区进行处理分析,获取语义和知识。近年来,随着深度学习算法上的进步以及不断丰厚的硬件资源条件,**文本转语音Text-to-Speech, TTS** 技术在移动、虚拟娱乐等领域得到了广泛的应用。</font>\n",
"## \"听\"书\n",
"使用 [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR) 直接获取书籍上的文字。"
3 years ago
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABLAAAAHOCAIAAAAHU2eOAAEAAElEQVR4nOz9ebxlWVkfjH+fZ621hzPdqerW3NVz0wzdgIAaHPMmzrMiIhJfNQ44jyRGowlRE5VoRMUQX3EKKEZBVAxiHFFQFGiapumpurrmqjufaU9rref5/bHPuXWruhuFwPv+ovf7OZ+qc8/Zw9prr73O+j7D9yE/ep/38Wu+9hu2tqZv+ZM/92WwaZ/YAQwAChAAQBQASEAGHxHoE3/svTpHIIgIKDARIABCDETEbAkMWAAqUAV/qM1RAFCFCAAQPfERVLU9L+FxX9PuO9nz7+71fLAGqUgIgZmNtbN2EH2Q7Z/oEDxvhgARCFABqVQVk4F14AQwu80IhOtOwHveX/NVewU6BjPA0Pbf+Qgggkj0jYgQqbFEDBBF7uOJbmaQYIiZmAECJIIAYniqq6p2NnfWRYEBmAWoA6aMwJB2qCkIaomYZQyuFXVEEtAL6Cg8sJVjFDBNpMd0AOWO7rw96J+7zmnUdyHrIu3WNi+NU3aAd7HsxIrH2ygqiEWp2K6G21VTa16cKetQRxaTBqSTmsZVKBstvaysHFxY6DPDsObOijZ1WYx5JXhf17Wvyqaqo69j8BJ8ltiymPTyfHFhQYJnyPLy8uLi4rhPxlhyCcg0UZrax8ZL0Gra/Mkf/8XSyslbnvrcu5/3aZKsrN58V1UFl/UAZjAAmnWrB5XQKaSIfuzrMsZoTZJ2enAZQgeWABIJjSixc9xTJIBtB7kx7R2vymoECp3Lf1bunCv9lXxASZLUI4QiSbUXxqM0qc+euzdJcfbCxnjaD/HwZ3/nD6F7BGQBpzQb6AYeGMXpeYMxUE6uXLI2yTqLGvknfvKnv+3lv+sjkgxKGI2L2Iwunn3o6bcf37p0+uH33dPLs9Smt93xdD5wBMEiyf/kz1537OiNJ0/ekfYPQfMYDIjZAoBAgMiz0zKBAEbNMBF6SpoH6uKcxsYlSy45CrMsriPWAawgwDEyQhphGGAVmg1TBljBSkIUCCUwRlhHuYl6A6GEPBI9QuOI+km6BNeFMQqlbgom2AymByyClqBdIJWyTwyy7VFHiomiIgSCAYyCoQmJARnMHoXXA17iNErFFIzxoADEuLVlsg5GZdwpHnvw7HJ3SRvxdWimIDJVUe9sDZsmpFmWZJkne+Ntd+bLR/Kjt+Lg7bCrkEVoH4Yio525PBpCmaIx2PbFBRd2UG6gGUEjiKAEGJAJvYGxKdkebB/og3qgDtABpyC3+zQrgiAqotEKmmtMWBvRC7F5KFZnUUwyzQADjuDSW9J02aY3c3oEoQvOYBdq6kcYAlJ4DluQjft+7We5uNzNdLC0IFnWO3gUSVb1ummaO5uRJKpERKpRYuESQqxCU8TQIIKUHaVEvB2aLMvyLIEBtBFfxxgEmiSJkiVOYBLiTMmCHMDqltgUGjcoC6j9//r9dz7v475wsPQMZEdBiARFIEQDA7XQa6fLfexjH/vYxz7+z4eqEhEAEWHmGKNzzqpqCOHWW2/94z/+SzQNwDFGy+7vPNxHCdbuMhhhYkCjxLIssyzbs5UADHpSVvl3ggjGXCU7j4M86aGflL592K2RD3H7+QqlbbwqVAFVVYVCBBoJBDYzHvfEOz+OCqqiHSIqAMCtOaDlzQQiEIHZJImJMUavqgQCkUYFQERCINrtAUmZRUW8F4CUiYiJoBxDHZqaRBM7SHhmbAihJo5KQSgylMEAg9pLqyFKlDA5RjAYK7yiVhiDlFXgL6O86MMGcQJZhevAZjApwxgYAUEVIRbjcS8KGt9sbfgCFrlaLouqQR6cRuc46bh04VB3+Zal1e5gpRGKMVZV1VQTgqSJMdAQm+rCuNMzSZIY4hiacjIeD0dVMRkNd4Ka0keeVpaRpulwWq1tnVm67VCSZalx5DgBk0UEwcCXzeXLl7uDQ8xcVtN+7xAg/PdafQogIJmNUut8PR
VomnYyZsAolAAvykTOAEAMTQgFIRiDyZVLXqu0k+ZdBzBbcWme2r40tWjt0swmsrKywpw89tgWqil61wyT3WHjfdPUk8zU1jFBm3KaZP077rjNWpDB5laxvNIZ9Dvf+s3f8aqf+QktN72Pa2sb08x9/PM+tqqKjjPb586++X/+wWe+8J92O4O0250Nyz1j6INDVUEf6rNzze4KIRLWABFogKpKoBg1QMQYI4C0nI7abp/ZwmZ/gxjCnLU9ExQNhEhzpg5ms60SRVANMwGmQKVoIEORoOKBBtpEaSANB2+SRIYT1FpMm8Tlly6u+8KPdsbL/SMxal3Wk0kdGl9MPcw0gDZHhe0uLh5bO3FrdeDGu9HPwCk0abxaxwKoBCVRUoBdlmEKBQNMpCCCAiCw0fZamGYfEqAM4vlUwbsTVDvBEdFsQgARiJmFCEQqSlCIKKmIqkBnU9Pjer4dwOPRZDKxVWFAZmrrqgmuYzs91+/RtYOgbYGEAPUgISKQEEhEiJBlmbUWUESJ0ccQACUmlWtMc0SzNld1bW0DDa7x0Xvvvari7/fg7WMf+9jHPvbxDxXWGBNjPHny5HT61qZpjOvQE/46Ej3hr/uHjychUK3DbE5eFYCIhBA+UqeNEdwueK6eC6rCs6sWzNyDbWvkSTx+8rg3j//qCSGAqM4sz6pC9L+xFtFZDyEqKQSRQrvIEyASGQVgBNeauRVKiqueSVURERFVJVGiyEqkEapQankiEc3WTMStc0VVWl8lUU1ETNf4WRVKIENgEhExxCBSCepjlCrUlYmRkwzGAQLEUBdJSlEqaIAh2ARgaEAETAVJoSkTW6oMlRE1g404EkbYkvJsUz+quplSB3EA1wWn0BRqjBIQET3KOodBVIgC7GNT+MJHK6ndrHtZr9dbXM67y3A9Srtu8RAWVxI122tr68UVL5mzlJBxRm3aXzjUBQCNqsrRZTZN+wsQXbtyqS6m5bTYHO3UZTXo93q9XtNE2trKuz0VSjMTgRgCokCo1+muXcKhw+Nulopvunmm1STJBhGztfhsTTwjYW3nCxHRzGfe+ocFgEtzIKqG2vumidZ288wkzCLUNB7q08xYa4G0nG73Fgd1HSLVddEE0WIK9paTZlo3sZ74IONiGDxFr5PhJrQBGCDQVT9426IkSao6QjXrdLTxRVUndvE5z3k2AMNYWeko8PXf8E0/+RM/ATCZ5OiREydvvmmytXHm7Ok7n/aMrYffd+r0+c/9/E8bLA8IFjFCIxhs5p68q7j2ASHsTkTtc8rtI/AhgURbPzR0ZlJRVRFV1Tj7XzWCFCRgASlIZz7uWYSCUVjAEp9RkMKSWGhK2oWmIJAHTA07BsbAFcFFjzXBxDYTQIiFOZIGCXWoK/Uh7y7sbA+t2OmwSly3LIbacFXQ2eF6XfnYBIZhIokxahBD1dZo4i+Yxy6ub02fGnD8ZkLPA87xAVIBKbFVhIhKtLFxZsABkSEGIEwgQ8YosbKhdtS1oQFXZ6Trp6aZiWk2CGn2riWEqoSZZWrXbqCqRAZk9h6KFBBFkiTOOHX9Xtbp9Ric53nW7zcwpExk2pvM2nrpKPoa1N4bIUA1RhBFSdMukQIaQ/DBq0RmYuLZ2ef/7p6bWJlJhaAqPoQQIE9ACJ/UVLiPfexjH/vYxz9EWCKqqmo8HjdNU5ZlCpc6an2I1287M7J+5NAeTJ/ow5kpWgEw2zRNiYgeF636YZApYiiAPaGaNFuCztaUiog5KYUCiI87RBtK+viuaHd7krVpexqVOQmcr/efbPu/GwKNkAAVaASElBVRJRjDNF/T0DVrZZ21UvVq34kgRoi0jSOjUGVVQEUV2i7+mKQ9oyoETAQWVRJhNASar/mojTIlIY2RjCFWQ4B4qIhvmqaxVp1GRwKaNRvqLZRjkNiINCTGEMCESIgKIwBDDCBsKtDIwEO68AkkoF6vqvujnMkTx+YY4gqSEjEBDISI1MQAX6OoVIqdtUta1ZlNSx
/OXlr34pYOHIpLJ2gwsMur3F1ooq0ajErNUHf7/XVvtiQxmcsslTFwDI7N4mJe13U1LYpiIjEmxuZZL0vTVZcHX/u
"text/plain": [
"<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x462 at 0x7F7CC46D2710>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from PIL import Image\n",
"img_path = 'source/ocr_result.jpg'\n",
"im = Image.open(img_path)\n",
"im.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"使用 [PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech),阅读上一步识别出来的文字。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"data:audio/x-wav;base64,UklGRmR+BQBXQVZFZm10IBAAAAABAAEAwF0AAIC7AAACABAAZGF0YUB+BQDj/9//3//h/+D/5P/k/+T/5v/m/+b/5//r/+z/6//r/+r/7P/s/+7/7//v/+//7//v//H/8f/x//D/8P/x//P/9f/1//T/9P/2//f/9//3//j/9//3//n/+P/4//j/+P/6//r/+v/6//v/+v/6//n/+v/6//r/+//6//n/+P/6//3//P/7//v//P/8//v//P/7//r/+P/5//r/+f/6//r/+f/7//v/+v/7//z/+v/7//v//P/7//v/+//8//v/+//8//z//P/9//z//P/9//3//f/8//7//v/+//7/AAD//wAAAAABAAAAAQABAAIAAgAAAAIAAQACAAEAAgABAAEAAgABAAEAAQACAAIAAgABAAEAAQAAAAEAAAABAAAAAAD//wAA//8AAAAA//8AAP//AAD/////AAD///7////+//3//f/8//3//f/9//3//v/9//3//f/9//3//v/8//3//f/9//7//f/9//z//f/8//3//P/9//z//P/8//z//P/7//v/+v/6//r/+v/6//r/+v/6//n/+v/5//n/+f/5//j/+f/5//n/+f/5//n/+f/5//n/+f/5//n/+f/5//n/+f/5//n/+v/5//r/+v/6//r/+v/6//r/+//7//v/+//6//v/+v/7//v/+v/7//v/+v/7//v/+//7//r/+//7//3//f/+///////+/////v/+//7//v///////v///////////wAAAAD//wAAAAD//wAAAAAAAAAAAAABAAAAAAAAAAAAAAABAAAAAAAAAP//AAAAAAEAAAABAAAAAQAAAAAAAAAAAAAAAAABAAAAAQABAAEAAAABAAAA//8AAAAA//8AAAAAAQAAAAEA//8BAAAA//8AAAAAAgAAAAEAAAABAAEAAAABAAEAAQABAAAAAQABAAEAAQACAAIAAQABAAEAAQABAAIAAQABAAIAAQABAAAAAQACAAIAAgACAAEAAQABAAEAAgACAAMAAgACAAMAAgACAAIAAgACAAMABAACAAIAAgACAAIAAwADAAIAAwADAAIAAwADAAMAAwADAAIAAgADAAIAAwADAAMAAwAEAAQABAADAAQABAADAAMAAwADAAMAAQACAAMAAQACAAIAAgABAAIAAgABAAIAAQACAAIAAwACAAMAAgADAAMAAwACAAIAAgACAAMAAwADAAMAAwADAAQABAADAAMAAwACAAIAAgACAAIAAQACAAIAAQABAAEAAgABAAEAAgABAAIAAgABAAEAAAAAAAEAAgACAAMAAwADAAMAAgACAAIAAgACAAEAAgAEAAQAAgAEAAMAAwACAAEAAQABAAIAAgADAAIAAwABAAIAAwAEAAMAAwAEAAMAAwADAAQAAwAFAAMAAwACAAIAAQABAAEAAAACAAEAAQABAAEAAAABAAEAAAAAAAAAAAAAAAEAAQAAAP7//v///wEAAAACAAIAAAAAAP///////wAAAQAAAAEAAQAAAAAAAAAAAP/////+//7////+//7////+//3//f/9//7//f/9//7//v/8//3//v/9//z//f/8//z//f/9//7//f/9//3//v/9/////v/+//7//v/+//7//v/9//7//v////7////9/////v/9/////v/+//3//f/9//z//f/8//7//f/9//7//P/8//z//P/8//z//P/8//3//P/8/////v/+//7//v/+//3//P/+/////////wAA//////7//f/+//7//v/+/////v/+//7//v/+//7//v/9//3//v/+//7//v/+//3//f/9//3//v////z//f/9//z/+//+//z/+//7//r//P/8//z//f/8//v/+//7//v//P/7//v/+//7//v//P/8//z/+//7//r/
+v/6//z//P/9//3/+//7//r/+//8//z/+//6//z/+v/8//v/+//6//r/+f/6//r/+v/5//n/+//7//v/+v/7//n/+P/4//j/+f/3//f/+P/5//v/+v/7//r/+v/6//r/+v/6//v//P/9//7//f////7//f/9//7//v/+/////v/9//////////7/AQABAAAA//////3//v/+//7//v/+/////v/////////////////+//3//v/+//3//////wAAAQADAAIAAAABAP///v8AAAIABAADAAMAAAAAAP//AAABAAIAAgABAAEAAAABAP/////+//7//////wAAAQAAAP///f/9//7//////wAA/////////v/+//3//f/9//7/AAABAAAAAAD+//3///8BAAEAAQACAAEAAAABAAAAAQAAAP/////+//7///8BAP//AQABAAIAAQABAAAAAAD/////AAD///7/////////AQACAAIAAAD///7//v///wAAAwADAAIAAgABAAAAAAABAAAA///9////AAABAAEAAgABAAAAAQABAAAAAQD///7///8AAAIAAwADAAIAAQABAAAAAAAAAAAAAQABAAQAAQACAP///////wAAAAABAAIABAAEAAQAAwADAAIAAQAFAAUABQADAAEAAQACAAMABgAFAAMAAwADAAQABQAGAAQABAADAAMABQAIAAgACQAIAAUABAACAAIAAwAEAAYABwAIAAkACgAJAAcAAwAAAP//AgAHAAoACgAKAAgABgAFAAQAAwABAAUACwANAAsACAAFAAUABgAHAAYAAgD7//z/AgAGAAcACwAJAP//+//9/wIABwALAAoACwAHAAEAAQAEAAYACAAJAAkACgALAA0ADAALAAsABgD/////BQAOABgAGQAUAAsAAAD6//z/AgAKABAAFAASABEACwADAPz//f/+//v/AgAOABAADgAOAAsABgAFAAcABgAMABAADQAHAAQAAwAFAA4ADgABAAAA/v/1//b///8VACIAKAAkABsACwD1/+z/6P/h/+f/CQAlADwAQwAyAAgA3v/R/+n/FAAmACwAJQALAAAAAgASABYAFAAIAP//AgAIAA4AFgARAAYABwAQAAoAAwALABcAKwAmABAA+//w/+H/8f8TADkAUgBUADsAAwDM/7v/0P/u/wIAJABbAGcARgAVAOj/v/+q/8D/+f9OAHIAXQBHABgA1v+w/8T/2f/z/zEAYwBkADwA9/+3/5v/tv/0/yoAdwCDADEA/f8FAPr/nf92/+L/IwAvAEAAVAAsAPD/4f/2/+//3P/l//z/BgDz/yMAXQBJAAMAzf+d/6P/7v8+AFYAMQAPAPv/2f+k/7X/9v8fAB8ARwBnABsA+v8EAA8AAgDu//b/CQDy/7b/lf/O/x4AQQBSAFMAUADn/5b/v/8IAPX/zv/k/yUAOAAmACsAHAADAAwAVQAtAH7/V/+4//f/NACTALQAhQAhAND/pf+Z//v/SgAIAPD/MABEABEAEQA9AB8A2//N/woAMQAoADcANQAKAB0AIADi/47/bv/M/20A/wDwAGUAuf8A/+v+zf8jAcMBVQEAAF3+9f3i/l8AlAHKAfQA3P9C/2z/4/8BAMD/of8LAKIAFQHVANT/Dv8w/+T/eQC4ANwAtwDO/9n+5P7c/60A0wDxALwAHgC//8f/XwAzAEL/PP+m/6//ZQAxASABuADQ/+f++P7e/90ABQF4AHz/D//Q/2MAswCqABwAg/9D/7L/4wAxAYAAGgDE/0H/Qv/0/2wAjgD7AO0A4/8X/1L/UwDAADcAqf/O/y4AGQBFAOYAbgCI/7j//v+R/2L/owAtAQYAwf84AFwA6v9g/zb/4f80AXYBOwBJ//f+Sf89ANYAjgEgAXf/a/4G/9z/SAAHAU4BwwC//yj/c/9kAOMA3P/N/h7/YgCdAZcBJwHIAOH/Bv9S/sv9x/7fAKAC9QKTAZf/
ef5+/vb+kP8OAMUAjwFqAbYA+v/E//r/2f9b/+3+2f9cAUABFQASAA4APv9F/1oATgGHA
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import IPython.display as dp\n",
"dp.Audio(\"source/ocr.wav\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"具体实现代码详见 [Story Talker](https://github.com/DeepSpeech/demos/story_talker/run.sh)"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 偶像开口说话\n",
"*元宇宙来袭,构造你的虚拟人!* 看看 [PaddleGAN](https://github.com/PaddlePaddle/PaddleGAN) 怎样合成唇形让WiFi之母——海蒂·拉玛说话。"
3 years ago
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"\n",
"<video controls width=\"600\" height=\"360\" src=\"source/tts_lips.mp4\">animation</video>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import HTML\n",
"html_str = '''\n",
"<video controls width=\"600\" height=\"360\" src=\"{}\">animation</video>\n",
"'''.format(\"source/tts_lips.mp4\")\n",
"dp.display(HTML(html_str))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"具体实现代码请参考 [Metaverse](https://github.com/DeepSpeech/demos/metaverse/run.sh)。\n",
"\n",
"下面让我们来系统地学习语音方面的知识,看看怎样使用 **PaddleSpeech** 实现基本的语音功能以及怎样结合光学字符识别Optical Character RecognitionOCR、自然语言处理Natural Language ProcessingNLP等技术“听”书、让名人开口说话。"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 前言\n",
"\n",
"## 背景知识\n",
"为了更好地了解文本转语音任务的要素,我们先简要地回顾一下文本转语音的发展历史。如果你对此已经有所了解,或希望能尽快使用代码实现,请直接跳至第四章[实践](#实践)。\n",
3 years ago
"### 定义\n",
"<!----\n",
"Note: \n",
"1.此句抄自 [李沐Dive into Dive Learning](https://zh-v2.d2l.ai/chapter_introduction/index.html)\n",
"2.修改参考A survey on Neural Speech Sysnthesis.\n",
"---> \n",
"文本转语音又称语音合成Speech Sysnthesis指的是将一段文本按照一定需求转化成对应的音频这种特性决定了的输出数据比输入输入长得多。文本转语音是一项包含了语义学、声学、数字信号处理以及机器学习的等多项学科的交叉任务。虽然辨识低质量音频文件的内容对人类来说很容易但这对计算机来说并非易事。\n",
3 years ago
"\n",
"按照不同的应用需求,更广义的语音合成研究包括:*语音转换*,例如说话人转换、语音到歌唱转换、语音情感转换、口音转换等;*歌唱合成*,例如歌词到歌唱转换、可视语音合成等。\n",
3 years ago
"\n",
"### 发展历史\n",
"\n",
3 years ago
"<!--\n",
"以下摘自维基百科 https://en.wikipedia.org/wiki/Speech_synthesis\n",
"--->\n",
"\n",
"在第二次工业革命之前语音的合成主要以机械式的音素合成为主。1779年德裔丹麦科学家 Christian Gottlieb Kratzenstein 建造了人类的声道模型使其可以产生五个长元音。1791年 Wolfgang von Kempelen 添加了唇和舌的模型使其能够发出辅音和元音。贝尔实验室于20世纪30年代发明了声码器Vocoder将语音自动分解为音调和共振此项技术由 Homer Dudley 改进为键盘式合成器并于 1939年纽约世界博览会展出。\n",
"\n",
"第一台基于计算机的语音合成系统起源于20世纪50年代。1961年IBM 的 John Larry Kelly以及 Louis Gerstman 使用 IBM 704 计算机合成语音成为贝尔实验室最著名的成就之一。1975年第一代语音合成系统之一 —— MUSAMUltichannel Speaking Automation问世其由一个独立的硬件和配套的软件组成。1978年发行的第二个版本也可以进行无伴奏演唱。90 年代的主流是采用 MIT 和贝尔实验室的系统,并结合自然语言处理模型。\n",
"![语音合成技术的发展历史](./source/tts-timeline.png)\n",
"\n",
"### 主流方法\n",
"\n",
"当前的主流方法分为**基于统计参数的语音合成**、**波形拼接语音合成**、**混合方法**以及**端到端神经网络语音合成**。基于参数的语音合成包含隐马尔可夫模型Hidden Markov Model,HMM以及深度学习网络Deep Neural NetworkDNN。端到端的方法保函声学模型+声码器以及“完全”端到端方法。\n",
"\n",
"\n",
"## Deep-Learning-Based Speech Synthesis\n",
"\n",
"### Speech Synthesis Basics\n",
"\n",
"![Signal processing pipeline](source/signal_pipeline.png)\n",
"\n",
"语音合成流水线包含 <font color=\"#ff0000\">**文本前端Text Frontend**</font> 、<font color=\"#ff0000\">**声学模型Acoustic Model**</font> 和 <font color=\"#ff0000\">**声码器Vocoder**</font> 三个主要模块:\n",
"- 通过文本前端模块将原始文本转换为字符/音素。\n",
"- 通过声学模型将字符/音素转换为声学特征如线性频谱图、mel 频谱图、LPC 特征等。\n",
"- 通过声码器将声学特征转换为波形。\n",
"\n",
3 years ago
"<img style=\"float: center;\" src=\"source/tts_pipeline.png\" width=\"85%\"/>"
]
},
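The three-stage pipeline above can be sketched with toy placeholder components. This is a self-contained illustration of the data flow and shapes only; the real frontend, acoustic model, and vocoder are the PaddleSpeech classes used later in this tutorial, and every function and dimension below is a made-up stand-in.

```python
import numpy as np

# Toy text frontend: map each character to an integer id
# (a stand-in for text normalization + G2P + phone-id lookup).
def text_frontend(text):
    vocab = {ch: i + 1 for i, ch in enumerate(sorted(set(text)))}
    return [vocab[ch] for ch in text]

# Toy acoustic model: each phone id expands to a few frames of 80-dim "mel" features.
def acoustic_model(phone_ids, frames_per_phone=5, n_mels=80):
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(phone_ids) * frames_per_phone, n_mels))

# Toy vocoder: upsample each frame into hop_length waveform samples.
def vocoder(mel, hop_length=300):
    return np.repeat(mel.mean(axis=1), hop_length)

phone_ids = text_frontend("你好")
mel = acoustic_model(phone_ids)
wav = vocoder(mel)
print(mel.shape, wav.shape)  # (10, 80) (3000,)
```

Note how the sequence length grows at each stage: 2 characters become 10 feature frames, which become 3000 waveform samples.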
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 实践\n",
"<br></br>\n",
"环境安装请参考 [install](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/docs/source/install.md)安装教程。 \n",
"下面使用 **PaddleSpeech** 提供的预训练模型合成中文语音。"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 数据及模型准备"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get the PaddlePaddle Pretrained Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!mkdir download\n",
"!wget -P download https://paddlespeech.bj.bcebos.com/Parakeet/released_models/pwgan/pwg_baker_ckpt_0.4.zip\n",
"!unzip -d download download/pwg_baker_ckpt_0.4.zip\n",
"!wget -P download https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_nosil_baker_ckpt_0.4.zip\n",
"!unzip -d download download/fastspeech2_nosil_baker_ckpt_0.4.zip"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[01;34mdownload/pwg_baker_ckpt_0.4\u001b[00m\n",
"|-- pwg_default.yaml\n",
"|-- pwg_snapshot_iter_400000.pdz\n",
"`-- pwg_stats.npy\n",
"\n",
"0 directories, 3 files\n",
"\u001b[01;34mdownload/fastspeech2_nosil_baker_ckpt_0.4\u001b[00m\n",
"|-- default.yaml\n",
"|-- energy_stats.npy\n",
"|-- phone_id_map.txt\n",
"|-- pitch_stats.npy\n",
"|-- snapshot_iter_76000.pdz\n",
"`-- speech_stats.npy\n",
"\n",
"0 directories, 6 files\n"
]
}
],
"source": [
"!tree download/pwg_baker_ckpt_0.4\n",
"!tree download/fastspeech2_nosil_baker_ckpt_0.4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 导入 Python 包"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The autoreload extension is already loaded. To reload it, use:\n",
" %reload_ext autoreload\n"
]
}
],
"source": [
"%load_ext autoreload\n",
"%autoreload 2\n",
"\n",
"# 设置 gpu 环境\n",
"%env CUDA_VISIBLE_DEVICES=0\n",
"\n",
3 years ago
"import logging\n",
"import sys\n",
"import warnings\n",
"warnings.filterwarnings('ignore')\n",
"\n",
"# 需要将PaddleSpeech项目的根目录放到Python路径中\n",
3 years ago
"sys.path.insert(0,\"../../../\")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import argparse\n",
"import os\n",
"from pathlib import Path\n",
"import IPython.display as dp\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import paddle\n",
"import soundfile as sf\n",
"import yaml\n",
"from paddlespeech.t2s.frontend.zh_frontend import Frontend\n",
"from paddlespeech.t2s.models.fastspeech2 import FastSpeech2\n",
"from paddlespeech.t2s.models.fastspeech2 import FastSpeech2Inference\n",
"from paddlespeech.t2s.models.parallel_wavegan import PWGGenerator\n",
"from paddlespeech.t2s.models.parallel_wavegan import PWGInference\n",
"from paddlespeech.t2s.modules.normalizer import ZScore\n",
"from yacs.config import CfgNode"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 设置预训练模型的路径"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"========Config========\n",
"batch_size: 64\n",
"f0max: 400\n",
"f0min: 80\n",
"fmax: 7600\n",
"fmin: 80\n",
"fs: 24000\n",
"max_epoch: 1000\n",
"model:\n",
" adim: 384\n",
" aheads: 2\n",
" decoder_normalize_before: True\n",
" dlayers: 4\n",
" dunits: 1536\n",
" duration_predictor_chans: 256\n",
" duration_predictor_kernel_size: 3\n",
" duration_predictor_layers: 2\n",
" elayers: 4\n",
" encoder_normalize_before: True\n",
" energy_embed_dropout: 0.0\n",
" energy_embed_kernel_size: 1\n",
" energy_predictor_chans: 256\n",
" energy_predictor_dropout: 0.5\n",
" energy_predictor_kernel_size: 3\n",
" energy_predictor_layers: 2\n",
" eunits: 1536\n",
" init_dec_alpha: 1.0\n",
" init_enc_alpha: 1.0\n",
" init_type: xavier_uniform\n",
" pitch_embed_dropout: 0.0\n",
" pitch_embed_kernel_size: 1\n",
" pitch_predictor_chans: 256\n",
" pitch_predictor_dropout: 0.5\n",
" pitch_predictor_kernel_size: 5\n",
" pitch_predictor_layers: 5\n",
" positionwise_conv_kernel_size: 3\n",
" positionwise_layer_type: conv1d\n",
" postnet_chans: 256\n",
" postnet_filts: 5\n",
" postnet_layers: 5\n",
" reduction_factor: 1\n",
" stop_gradient_from_energy_predictor: False\n",
" stop_gradient_from_pitch_predictor: True\n",
" transformer_dec_attn_dropout_rate: 0.2\n",
" transformer_dec_dropout_rate: 0.2\n",
" transformer_dec_positional_dropout_rate: 0.2\n",
" transformer_enc_attn_dropout_rate: 0.2\n",
" transformer_enc_dropout_rate: 0.2\n",
" transformer_enc_positional_dropout_rate: 0.2\n",
" use_masking: True\n",
" use_scaled_pos_enc: True\n",
"n_fft: 2048\n",
"n_mels: 80\n",
"n_shift: 300\n",
"num_snapshots: 5\n",
"num_workers: 4\n",
"optimizer:\n",
" learning_rate: 0.001\n",
" optim: adam\n",
"seed: 10086\n",
"updater:\n",
" use_masking: True\n",
"win_length: 1200\n",
"window: hann\n",
"---------------------\n",
"allow_cache: True\n",
"batch_max_steps: 25500\n",
"batch_size: 6\n",
"discriminator_grad_norm: 1\n",
"discriminator_optimizer_params:\n",
" epsilon: 1e-06\n",
" weight_decay: 0.0\n",
"discriminator_params:\n",
" bias: True\n",
" conv_channels: 64\n",
" in_channels: 1\n",
" kernel_size: 3\n",
" layers: 10\n",
" nonlinear_activation: LeakyReLU\n",
" nonlinear_activation_params:\n",
" negative_slope: 0.2\n",
" out_channels: 1\n",
" use_weight_norm: True\n",
"discriminator_scheduler_params:\n",
" gamma: 0.5\n",
" learning_rate: 5e-05\n",
" step_size: 200000\n",
"discriminator_train_start_steps: 100000\n",
"eval_interval_steps: 1000\n",
"fmax: 7600\n",
"fmin: 80\n",
"fs: 24000\n",
"generator_grad_norm: 10\n",
"generator_optimizer_params:\n",
" epsilon: 1e-06\n",
" weight_decay: 0.0\n",
"generator_params:\n",
" aux_channels: 80\n",
" aux_context_window: 2\n",
" bias: True\n",
" dropout: 0.0\n",
" freq_axis_kernel_size: 1\n",
" gate_channels: 128\n",
" in_channels: 1\n",
" interpolate_mode: nearest\n",
" kernel_size: 3\n",
" layers: 30\n",
" nonlinear_activation: None\n",
" nonlinear_activation_params:\n",
" \n",
" out_channels: 1\n",
" residual_channels: 64\n",
" skip_channels: 64\n",
" stacks: 3\n",
" upsample_scales: [4, 5, 3, 5]\n",
" use_causal_conv: False\n",
" use_weight_norm: True\n",
"generator_scheduler_params:\n",
" gamma: 0.5\n",
" learning_rate: 0.0001\n",
" step_size: 200000\n",
"lambda_adv: 4.0\n",
"n_fft: 2048\n",
"n_mels: 80\n",
"n_shift: 300\n",
"num_save_intermediate_results: 4\n",
"num_snapshots: 10\n",
"num_workers: 4\n",
"pin_memory: True\n",
"remove_short_samples: True\n",
"save_interval_steps: 5000\n",
"seed: 42\n",
"stft_loss_params:\n",
" fft_sizes: [1024, 2048, 512]\n",
" hop_sizes: [120, 240, 50]\n",
" win_lengths: [600, 1200, 240]\n",
" window: hann\n",
"top_db: 60\n",
"train_max_steps: 400000\n",
"trim_frame_length: 2048\n",
"trim_hop_length: 512\n",
"trim_silence: False\n",
"win_length: 1200\n",
"window: hann\n"
]
}
],
"source": [
"fastspeech2_config = \"download/fastspeech2_nosil_baker_ckpt_0.4/default.yaml\"\n",
"fastspeech2_checkpoint = \"download/fastspeech2_nosil_baker_ckpt_0.4/snapshot_iter_76000.pdz\"\n",
"fastspeech2_stat = \"download/fastspeech2_nosil_baker_ckpt_0.4/speech_stats.npy\"\n",
"pwg_config = \"download/pwg_baker_ckpt_0.4/pwg_default.yaml\"\n",
"pwg_checkpoint = \"download/pwg_baker_ckpt_0.4/pwg_snapshot_iter_400000.pdz\"\n",
"pwg_stat = \"download/pwg_baker_ckpt_0.4/pwg_stats.npy\"\n",
"phones_dict = \"download/fastspeech2_nosil_baker_ckpt_0.4/phone_id_map.txt\"\n",
"# 读取 conf 配置文件并结构化\n",
3 years ago
"with open(fastspeech2_config) as f:\n",
" fastspeech2_config = CfgNode(yaml.safe_load(f))\n",
"with open(pwg_config) as f:\n",
" pwg_config = CfgNode(yaml.safe_load(f))\n",
"print(\"========Config========\")\n",
"print(fastspeech2_config)\n",
"print(\"---------------------\")\n",
"print(pwg_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 文本前端Text Frontend\n",
3 years ago
"\n",
"\n",
"一个文本前端模块主要包含:\n",
"- 分段Text Segmentation\n",
"- 文本正则化Text Normalization, TN\n",
"- 分词Word Segmentation, 主要是在中文中)\n",
"- 词性标注Part-of-Speech, PoS\n",
"- 韵律预测Prosody\n",
"- 字音转换Grapheme-to-PhonemeG2P\n",
3 years ago
"<br></br>\n",
"<font size=2>Grapheme: **语言**书写系统的最小有意义单位; Phoneme: 区分单词的最小**语音**单位)</font>\n",
" - 多音字Polyphone\n",
" - 变调Tone Sandhi\n",
" - “一”、“不”变调\n",
" - 三声变调\n",
" - 轻声变调\n",
" - 儿化音\n",
" - 方言\n",
3 years ago
"- ...\n",
"<br></br>\n",
"\n",
"(输入给声学模型之前,还需要把音素序列转换为 id\n",
3 years ago
"\n",
"\n",
"其中最重要的模块是<font color=\"#ff0000\"> 文本正则化 </font>模块和<font color=\"#ff0000\"> 字音转换TTS 中更常用 G2P 代指) </font>模块。\n",
"\n",
3 years ago
"\n",
"各模块输出示例:\n",
3 years ago
"```text\n",
"• Text: 全国一共有112所211高校\n",
"• Text Normalization: 全国一共有一百一十二所二一一高校\n",
"• Word Segmentation: 全国/一共/有/一百一十二/所/二一一/高校/\n",
"• G2P注意此句中“一”的读音:\n",
" quan2 guo2 yi2 gong4 you3 yi4 bai3 yi1 shi2 er4 suo3 er4 yao1 yao1 gao1 xiao4\n",
" (可以进一步把声母和韵母分开)\n",
" q uan2 g uo2 y i2 g ong4 y ou3 y i4 b ai3 y i1 sh i2 er4 s uo3 er4 y ao1 y ao1 g ao1 x iao4\n",
" (把音调和声韵母分开)\n",
" q uan g uo y i g ong y ou y i b ai y i sh i er s uo er y ao y ao g ao x iao\n",
" 0 2 0 2 0 2 0 4 0 3 ...\n",
"• Prosody (prosodic words #1, prosodic phrases #2, intonation phrases #3, sentence #4):\n",
" 全国#2一共有#2一百#1一十二所#2二一一#1高校#4\n",
" (分词的结果一般是固定的,但是不同人习惯不同,可能有不同的韵律)\n",
"```\n",
"\n",
"<br></br>\n",
"文本前端模块的设计需要结合很多专业的语义学知识和经验。人类在读文本的时候可以自然而然地读出正确的发音,但是这些先验知识计算机并不知晓。\n",
"例如,对于一个句子的分词:\n",
3 years ago
"\n",
"```text\n",
"我也想过过过儿过过的生活\n",
"我也想/过过/过儿/过过的/生活\n",
"\n",
"货拉拉拉不拉拉布拉多\n",
"货拉拉/拉不拉/拉布拉多\n",
"\n",
"南京市长江大桥\n",
"南京市长/江大桥\n",
"南京市/长江大桥\n",
"```\n",
"或者是词的变调和儿化音:\n",
3 years ago
"```\n",
"你要不要和我们一起出去玩?\n",
"你要不2声要和我们一4声起出去玩\n",
"\n",
"不好,我要一个人出去。\n",
"不4声我要一2声个人出去。\n",
"\n",
"(以下每个词的所有字都是三声的,请你读一读,体会一下在读的时候,是否每个字都被读成了三声?)\n",
"纸老虎、虎骨酒、展览馆、岂有此理、手表厂有五种好产品\n",
"```\n",
"又或是多音字,这类情况通常需要先正确分词:\n",
3 years ago
"```text\n",
"人要行,干一行行一行,一行行行行行;\n",
"人要是不行,干一行不行一行,一行不行行行不行。\n",
"\n",
"佟大为妻子产下一女\n",
"\n",
"海水朝朝朝朝朝朝朝落\n",
"浮云长长长长长长长消\n",
"```\n",
"\n",
"PaddleSpeech Text-to-Speech的文本前端解决方案:\n",
"- [文本正则](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/tn)\n",
"- [G2P](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/g2p):\n",
" - 多音字模块: pypinyin/g2pM\n",
" - 变调模块: 用分词 + 规则\n",
"<br></br>"
3 years ago
]
},
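The "word segmentation + rules" approach to tone sandhi can be illustrated with a minimal, self-contained sketch of the "一"/"不" rules on numbered pinyin. This is a toy illustration only: PaddleSpeech's actual rules live in its frontend and cover many more cases (third-tone sandhi, neutral tone, erhua, and exceptions such as "一" read as a plain digit).

```python
def tone_of(syllable):
    # The last character of a numbered-pinyin syllable is its tone digit.
    return int(syllable[-1])

def apply_yi_bu_sandhi(syllables):
    """Simplified '一'/'不' sandhi rules:
      - '不' (bu4) becomes bu2 before a 4th-tone syllable;
      - '一' (yi1) becomes yi2 before a 4th tone and yi4 before tones 1-3,
        staying yi1 when final (or when read as a digit, not handled here).
    """
    out = list(syllables)
    for i, syl in enumerate(out[:-1]):
        nxt_tone = tone_of(out[i + 1])
        if syl == "bu4" and nxt_tone == 4:
            out[i] = "bu2"
        elif syl == "yi1":
            out[i] = "yi2" if nxt_tone == 4 else "yi4"
    return out

# 要不要 -> yao4 bu2 yao4; 一起 -> yi4 qi3; 一个 -> yi2 ge4
print(apply_yi_bu_sandhi(["yao4", "bu4", "yao4"]))  # ['yao4', 'bu2', 'yao4']
print(apply_yi_bu_sandhi(["yi1", "qi3"]))           # ['yi4', 'qi3']
print(apply_yi_bu_sandhi(["yi1", "ge4"]))           # ['yi2', 'ge4']
```

This matches the examples above: "要不要" reads 不 with the 2nd tone before 要 (4th tone), and "一起" reads 一 with the 4th tone before 起 (3rd tone).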
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 构造文本前端对象\n",
"<font size=4>传入`phones_dict`,把相应的`phones`转换成`phone_ids`。</font>"
3 years ago
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# 传入 phones_dict 会把相应的 phones 转换成 phone_ids\n",
"frontend = Frontend(phone_vocab_path=phones_dict)\n",
"print(\"Frontend done!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 调用文本前端\n",
"\n",
"文本前端对输入的数据进行正则化的时候会进行分句,你也可以将以下`input`替换成`\"我每天中午12:00起床\"`或者`\"我出生于2005/11/08那天的最低气温达到-10°C\"`。\n",
"若`merge_sentences`设置为`False`,则多个子句并行调用声学模型和声码器提升合成速度;若`merge_sentences`设置为`True``input_ids[\"phone_ids\"][0]`则表示整句的`phone_ids`。"
3 years ago
]
},
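Text normalization rewrites non-standard words such as digits into their spoken form, e.g. "112" → "一百一十二" in the example above. A minimal sketch for small integers follows (a toy illustration; PaddleSpeech's real TN rules are far more extensive, and context matters: "211" in "211高校" is instead read digit by digit as "二一一").

```python
DIGITS = "零一二三四五六七八九"

def num_to_zh(n):
    """Read an integer in 0..9999 as Mandarin, e.g. 112 -> '一百一十二'."""
    if n == 0:
        return "零"
    units = ["", "十", "百", "千"]
    out, need_zero = [], False
    for power in range(3, -1, -1):
        d, n = divmod(n, 10 ** power)
        if d:
            if need_zero:
                out.append("零")     # insert 零 for skipped places, e.g. 105
            out.append(DIGITS[d] + units[power])
            need_zero = False
        elif out:
            need_zero = True
    return "".join(out)

print(num_to_zh(112))  # 一百一十二
print(num_to_zh(105))  # 一百零五
```

A full TN module must also handle dates, times, temperatures, and so on, choosing the right reading from context, which is exactly what makes the "12:00" and "-10°C" inputs above interesting test cases.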
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Building prefix dict from the default dictionary ...\n",
"DEBUG:jieba:Building prefix dict from the default dictionary ...\n",
"Loading model from cache /tmp/jieba.cache\n",
"DEBUG:jieba:Loading model from cache /tmp/jieba.cache\n",
"Loading model cost 5.331 seconds.\n",
"DEBUG:jieba:Loading model cost 5.331 seconds.\n",
"Prefix dict has been built successfully.\n",
"DEBUG:jieba:Prefix dict has been built successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"----------------------------\n",
"text norm results:\n",
"['你好,', '欢迎使用百度飞桨框架进行深度学习研究!']\n",
"----------------------------\n",
"g2p results:\n",
"[['n', 'i2', 'h', 'ao3', 'sp', 'h', 'uan1', 'ing2', 'sh', 'iii3', 'iong4', 'b', 'ai3', 'd', 'u4', 'f', 'ei1', 'j', 'iang3', 'k', 'uang1', 'j', 'ia4', 'j', 'in4', 'x', 'ing2', 'sh', 'en1', 'd', 'u4', 'x', 've2', 'x', 'i2', 'ian2', 'j', 'iou1', 'sp']]\n",
"----------------------------\n",
"phone_ids:Tensor(shape=[39], dtype=int64, place=CUDAPlace(0), stop_gradient=True,\n",
" [155, 73 , 71 , 29 , 179, 71 , 199, 126, 177, 115, 138, 37 , 9 , 40 ,\n",
" 186, 69 , 46 , 151, 89 , 152, 204, 151, 80 , 151, 123, 260, 126, 177,\n",
" 51 , 40 , 186, 260, 251, 260, 73 , 83 , 151, 140, 179])\n"
]
}
],
"source": [
"input = \"你好,欢迎使用百度飞桨框架进行深度学习研究!\"\n",
"input_ids = frontend.get_input_ids(input, merge_sentences=True, print_info=True)\n",
"phone_ids = input_ids[\"phone_ids\"][0]\n",
"print(\"phone_ids:%s\"%phone_ids)"
]
},
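The mapping from phones to `phone_ids` comes from `phone_id_map.txt` in the checkpoint directory. A minimal sketch of the lookup follows, assuming a one-"phone id"-pair-per-line file format; the toy entries below are chosen to match the g2p results and the start of the `phone_ids` tensor printed above, and the `Frontend` class does all of this internally.

```python
# Toy stand-in for download/fastspeech2_nosil_baker_ckpt_0.4/phone_id_map.txt
# (one "phone id" pair per line; format and entries assumed for illustration).
phone_id_map = """<pad> 0
h 71
ao3 29
n 155
i2 73"""

vocab = {}
for line in phone_id_map.splitlines():
    phone, idx = line.split()
    vocab[phone] = int(idx)

def phones_to_ids(phones):
    # Look up each phone's integer id in the vocabulary.
    return [vocab[p] for p in phones]

# "你好" -> n i2 h ao3, matching the first four ids in the output above.
print(phones_to_ids(["n", "i2", "h", "ao3"]))  # [155, 73, 71, 29]
```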
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 用深度学习实现文本前端\n",
3 years ago
"<img style=\"float: center;\" src=\"source/text_frontend_struct.png\" width=\"100%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 声学模型Acoustic Model\n",
"\n",
"声学模型将字符/音素转换为声学特征如线性频谱图、mel 频谱图、LPC 特征等,声学特征以 “帧” 为单位,一般一帧是 10ms 左右,一个音素一般对应 5~20 帧左右, 声学模型需要解决的是 <font color=\"#ff0000\">“不等长序列间的映射问题”</font>,“不等长”是指,同一个人发不同音素的持续时间不同,同一个人在不同时刻说同一句话的语速可能不同,对应各个音素的持续时间不同,不同人说话的特色不同,对应各个音素的持续时间不同。这是一个困难的“一对多”问题。\n",
3 years ago
"```\n",
"# 卡尔普陪外孙玩滑梯\n",
"000001|baker_corpus|sil 20 k 12 a2 4 er2 10 p 12 u3 12 p 9 ei2 9 uai4 15 s 11 uen1 12 uan2 14 h 10 ua2 11 t 15 i1 16 sil 20\n",
"```\n",
"\n",
"声学模型主要分为自回归模型和非自回归模型,其中自回归模型在 `t` 时刻的预测需要依赖 `t-1` 时刻的输出作为输入,预测时间长,但是音质相对较好,非自回归模型不存在预测上的依赖关系,预测时间快,音质相对较差。\n",
3 years ago
"\n",
"<br></br>\n",
"主流声学模型发展的脉络:\n",
"- 自回归模型:\n",
" - Tacotron\n",
" - Tacotron2\n",
" - Transformer TTS\n",
"- 非自回归模型:\n",
" - FastSpeech\n",
" - SpeedySpeech\n",
" - FastPitch\n",
" - FastSpeech2\n",
" - ...\n",
" \n",
"<br></br>\n",
"In this tutorial, we use `FastSpeech2` as the acoustic model.\n",
"![FastSpeech2](source/fastspeech2.png)\n",
"The FastSpeech2 implemented in PaddleSpeech TTS differs from the paper in that we use phone-level `pitch` and `energy` (similar to FastPitch).\n",
"![FastPitch](source/fastpitch.png)\n",
"Read more about the [development and improvement of acoustic models](https://paddlespeech.readthedocs.io/en/latest/tts/models_introduction.html)."
]
},
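FastSpeech2 bridges the unequal-length gap with a length regulator: each phoneme's encoder output is repeated according to its predicted duration in frames. A minimal NumPy sketch follows (toy shapes; inside PaddleSpeech this happens in the model, and the real encoding dimension here is `adim: 384`). Note that with this checkpoint's `n_shift: 300` at `fs: 24000`, one frame covers 300 / 24000 = 12.5 ms of audio.

```python
import numpy as np

def length_regulator(phone_encodings, durations):
    """Repeat each phone's encoding vector by its duration in frames."""
    return np.repeat(phone_encodings, durations, axis=0)

# 3 phones with 4-dim encodings (toy sizes for illustration).
enc = np.arange(12, dtype=float).reshape(3, 4)
durations = [2, 5, 3]            # frames per phone, e.g. from the duration predictor
frames = length_regulator(enc, durations)
print(frames.shape)              # (10, 4): 2 + 5 + 3 frames

# At fs=24000 and n_shift=300, each frame covers 12.5 ms.
print(300 / 24000 * 1000)        # 12.5
```

After the regulator, the sequence is in the frame domain, so the pitch/energy predictors and the decoder can map it one-to-one onto mel-spectrogram frames.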
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 初始化声学模型 FastSpeech2"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vocab_size: 268\n",
"None\n",
"FastSpeech2Inference(\n",
" (normalizer): ZScore()\n",
" (acoustic_model): FastSpeech2(\n",
" (encoder): Encoder(\n",
" (embed): Sequential(\n",
" (0): Embedding(268, 384, padding_idx=0, sparse=False)\n",
" (1): ScaledPositionalEncoding(\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (encoders): MultiSequential(\n",
" (0): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (2): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (3): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (after_norm): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" )\n",
" (duration_predictor): DurationPredictor(\n",
" (conv): LayerList(\n",
" (0): Sequential(\n",
" (0): Conv1D(384, 256, kernel_size=[3], padding=1, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.1, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[3], padding=1, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.1, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (linear): Linear(in_features=256, out_features=1, dtype=float32)\n",
" )\n",
" (pitch_predictor): VariancePredictor(\n",
" (conv): LayerList(\n",
" (0): Sequential(\n",
" (0): Conv1D(384, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (2): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (3): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (4): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (linear): Linear(in_features=256, out_features=1, dtype=float32)\n",
" )\n",
" (pitch_embed): Sequential(\n",
" (0): Conv1D(1, 384, kernel_size=[1], data_format=NCL)\n",
" (1): Dropout(p=0.0, axis=None, mode=upscale_in_train)\n",
" )\n",
" (energy_predictor): VariancePredictor(\n",
" (conv): LayerList(\n",
" (0): Sequential(\n",
" (0): Conv1D(384, 256, kernel_size=[3], padding=1, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[3], padding=1, data_format=NCL)\n",
" (1): ReLU()\n",
" (2): LayerNorm(normalized_shape=[256], epsilon=1e-05)\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (linear): Linear(in_features=256, out_features=1, dtype=float32)\n",
" )\n",
" (energy_embed): Sequential(\n",
" (0): Conv1D(1, 384, kernel_size=[1], data_format=NCL)\n",
" (1): Dropout(p=0.0, axis=None, mode=upscale_in_train)\n",
" )\n",
" (length_regulator): LengthRegulator()\n",
" (decoder): Encoder(\n",
" (embed): Sequential(\n",
" (0): ScaledPositionalEncoding(\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (encoders): MultiSequential(\n",
" (0): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (2): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (3): EncoderLayer(\n",
" (self_attn): MultiHeadedAttention(\n",
" (linear_q): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_k): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_v): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (linear_out): Linear(in_features=384, out_features=384, dtype=float32)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" (feed_forward): MultiLayeredConv1d(\n",
" (w_1): Conv1D(384, 1536, kernel_size=[3], padding=1, data_format=NCL)\n",
" (w_2): Conv1D(1536, 384, kernel_size=[3], padding=1, data_format=NCL)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" (relu): ReLU()\n",
" )\n",
" (norm1): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (norm2): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" (dropout): Dropout(p=0.2, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" (after_norm): LayerNorm(normalized_shape=[384], epsilon=1e-05)\n",
" )\n",
" (feat_out): Linear(in_features=384, out_features=80, dtype=float32)\n",
" (postnet): Postnet(\n",
" (postnet): LayerList(\n",
" (0): Sequential(\n",
" (0): Conv1D(80, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): BatchNorm1D(num_features=256, momentum=0.9, epsilon=1e-05, data_format=NCL)\n",
" (2): Tanh()\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (1): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): BatchNorm1D(num_features=256, momentum=0.9, epsilon=1e-05, data_format=NCL)\n",
" (2): Tanh()\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (2): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): BatchNorm1D(num_features=256, momentum=0.9, epsilon=1e-05, data_format=NCL)\n",
" (2): Tanh()\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (3): Sequential(\n",
" (0): Conv1D(256, 256, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): BatchNorm1D(num_features=256, momentum=0.9, epsilon=1e-05, data_format=NCL)\n",
" (2): Tanh()\n",
" (3): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" (4): Sequential(\n",
" (0): Conv1D(256, 80, kernel_size=[5], padding=2, data_format=NCL)\n",
" (1): BatchNorm1D(num_features=80, momentum=0.9, epsilon=1e-05, data_format=NCL)\n",
" (2): Dropout(p=0.5, axis=None, mode=upscale_in_train)\n",
" )\n",
" )\n",
" )\n",
" )\n",
")\n",
"FastSpeech2 done!\n"
]
}
],
"source": [
"with open(phones_dict, \"r\") as f:\n",
" phn_id = [line.strip().split() for line in f.readlines()]\n",
"vocab_size = len(phn_id)\n",
"print(\"vocab_size:\", vocab_size)\n",
"odim = fastspeech2_config.n_mels\n",
"model = FastSpeech2(\n",
" idim=vocab_size, odim=odim, **fastspeech2_config[\"model\"])\n",
"\n",
"model.set_state_dict(paddle.load(fastspeech2_checkpoint)[\"main_params\"]) # 加载预训练模型参数\n",
"model.eval() # 推理阶段不启用 batch norm 和 dropout\n",
3 years ago
"stat = np.load(fastspeech2_stat)\n",
"mu, std = stat # 读取数据预处理阶段数据集的均值和标准差\n",
"mu, std = paddle.to_tensor(mu), paddle.to_tensor(std)\n",
"fastspeech2_normalizer = ZScore(mu, std) # 构造归一化的新模型\n",
3 years ago
"fastspeech2_inference = FastSpeech2Inference(fastspeech2_normalizer, model)\n",
"fastspeech2_inference.eval()\n",
"print(fastspeech2_inference)\n",
3 years ago
"print(\"FastSpeech2 done!\")"
]
},
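{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `ZScore` wrapper above holds the training-set statistics so that inference can invert the feature normalization applied during training. Conceptually (a numpy sketch with made-up statistics, not the actual implementation):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"mu, std = 2.0, 0.5                       # dataset statistics (illustrative)\n",
"normalized = np.array([-1.0, 0.0, 1.0])  # the model works in normalized space\n",
"denormalized = normalized * std + mu     # inverse of (x - mu) / std\n",
"print(denormalized)  # [1.5 2.  2.5]\n",
"```"
]
},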
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 调用声学模型"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"shepe of mel (n_frames x n_mels):\n",
"[347, 80]\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAk8AAAGoCAYAAABfbgHJAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAEAAElEQVR4nOz9e9BtXZcXBv3GnGs/5/3e7+vur+luOt1NE64SKhiicolVVgQhFUKstKG0Q+IFCZEYy1hRq4QkiEii1VqWAcuUsW1FiAZpiRFSYkgFpaKWJDQkQC5FGS6dbuhu+v69t/M8e805/GNc51xr7Wc/5/Ke89B71NlnP3vvteaa1zF+4zLHJGbGjW50oxvd6EY3utGNrqPyritwoxvd6EY3utGNbvSc6AaebnSjG93oRje60Y2eQDfwdKMb3ehGN7rRjW70BLqBpxvd6EY3utGNbnSjJ9ANPN3oRje60Y1udKMbPYFu4OlGN7rRjW50oxvd6Al0A083utFfZ0REv4KIvv9d1+NGN7rRjf56pRt4utGN3iMior9MRA9E9PXT9/8WETER/aw38IxvI6J/m4i+QkQ/QkT/DyL62a9b7iPPZCL6eW/zGTe60Y1u9HnRDTzd6EbvH/0lAH+/fSCi/yiAD99EwQpgfh+A/y6ArwHwswH8swDamyj/Neq1vOHy6pss70Y3utGNMt3A041u9P7RPw/gv5I+/wYI4HEiohdE9D8jov+QiH6IiP45IvrCFWX/rQD+EjP/MRb6iJn/RWb+D7Xc30FEf5CI/gARfUREf5qIfnF67jcT0b9IRD9MRH+JiP5b6bdKRP8EEf0FvfdPEdG3EtG/rpf8GSL6mIj+PnMtEtFvIaIfBPB7tE2/i4j+qr5+FxG9SOX/94joB/S3fyhbs4jof09E/ysi+iNE9AmAX0lEf7da7L5CRN9HRL8jlfWz9P7fqL/9OBH914nolxLRnyWinyCi/+V1w3WjG93opxrdwNONbvT+0Z8A8NVE9AvVgvLrAfwfpmu+A8B/BAKGfh6AbwHw268o+08D+JuI6J8hol9JRF/auebbAPyfAfw0AP8CgP8rEZ2IqAD4lwH8GX3erwLwjxHR36n3/XcgFrNfC+CrAfyDAD5l5r9df//FzPwlZv4D+vlv0Gf8jQB+M4B/EsDfpm36xQB+GYDfBgBE9Gu0/F+t7f0VO/X+BwD8jwB8FYD/N4BPICD0ywD+bgD/CBH956Z7fjmAnw/g7wPwu7QOvxrA3wzg24noP73znBvd6EY/xekGnm50o/eTzPr0dwD49wH8FfuBiAgCNv7bzPxjzPwRgP8xBGRdJGb+ixDg8S0AvhvAj6jVJoOoP8XMf5CZzwD+5wA+gICaXwrgG5j5dzLzg5b1v0nP/YcA/DZm/vNq1fozzPyjF6rTAfwPmPmemT8D8F8E8DuZ+a8x8w8D+B8C+C/rtd8O4Pcw87/LzJ8C+B075f0hZv7/MHNn5pfM/MeZ+c/p5z8L4PcDmMHQP6XX/qsQsPX79fl/BcD/C8B/7GKH3uhGN/opSW80zuBGN7rRG6N/HsC/DolJ+n3Tb98AiYH6U4KjAAAE4Ko4H2b+ExAwAiL6pQD+AMTi8o/rJd+Xru26c++bATCAbyain0jFVQjIAIBvBfAXrqmD0g8z88v0+ZsBfG/6/L36nf32Pem378OWhu+I6JdDLHS/CMAdgBcQi1qmH0p/f7bzec8yd6Mb3einON0sTze60XtIzPy9kMDxXwvg/zL9/CMQwf43M/OX9fU1zPxkQc/Mf1LL/0Xp62+1P9RV9zMA/FUIOPlL6ZlfZuavYuZfq5d/H4Cf+5THT5//KsSFZ/Qz9TsA+AGtx6aOF8r7FwD8YQDfysxfA+Cfg4DMG93oRjd6LbqBpxvd6P2l3wTgP8PMn+QvmblD3GX/DBH9dAAgom9JsUeHRET/KSL6r6X7/iYAfw8kzsroP0FEv053wP1jAO71938TwE
ca5P0FDRD/RWq9AoDvAvBPEdHPJ6G/hYi+Tn/7IQA/55Hq/X4Av42IvkFTNfx2RKzXdwP4jRoH9iGA//5jbYXEPv0YM78kol8GiYm60Y1udKPXpht4utGN3lNi5r/AzN9z8PNvAfAfAPgTRPQVAP8agF9wRbE/AQFLf46IPgbwrwD4lwD8T9M1fwgSQP3jkJijX8fMZ2ZuAP6z0B17EAvYd0FSHgASH/XdAP5VAF8B8L8FYDsAfweA36u72L79oG7/NMQ192cB/DlIcPs/DQDM/H8H8L8A8P+0dus99xfa+t8A8DuJ6CMIEPvuC9fe6EY3utHVRMyzpftGN7rRT1XS7fw/j5n/S++6LpeIiH4hgH8HwAtmXt91fW50oxv91KKb5elGN7rRsyAi+ns1F9TXAvifAPiXb8DpRje60bugG3i60Y1u9FzoHwbw1yA7+hqAf+TdVudGN7rRT1W6ue1udKMb3ehGN7rRjZ5AN8vTjW50oxvd6EY3utET6FkkybyjF/wBfQlEBJQCLBW8FPRTQT8BvQIoABMkiwupNc0+A5IBhiEXmbHNrp3v2yO7L9/rv+kX+jv5s+Lv+Tv/vjOoA+gM6gz0DnQGOF7ZOuh9UAp4qeh3BK4ELgCXqBen+s3N4twve9/tZcLh6e/cZ3M/z9cj+py6/lkALB21dhRiFK0kTbd2JjATOhP6WkArgRpALYqe2+uPfKQ93i89TRmePndO48Y+ZlwJ7UVBu5vaPj8zj3kfy0Z+JgPloYNaB8A4f/UJ/S6Nabpv09bc7DTPAMS6yHWi9PBOUi/Eb8P1+dqpYfWuYykNlRgM4H5d0NcCNPLxGeZ/j36gVec76/vQCEK/K+gLoZ8Arvs1GNp52CFxLbHMG+vHdgfgg46lNiylg5nw0Cp6K8CZ4v5iBaTnAtFRnH62cbZ2aru5An0B+MRY7hqW2lHQwSCce0VrBdwI6DT1ua4dYqAAy6nhVBoWknXTmdBBaL1I3dciY5qPec7zbC66AlwZtEgH+dCTsR8CNwKtif9NKretR+rWdh54zvoFQl8AFI72eIVSH6ZxsrXElYHKKEVf2tHMwht6L1G/vLZy84v0e1k6ltqxUIPllmUmNJb+a1oW2lSnS5nB0rhv5uIkI0rmW5D5x3eMeup4UVeQjufaC9ZWwavOh0vl5jpM/eiX577teawY1GIdfvbwE3hon33uedD+zl/5Rf7RH3v9c8n/1J+9/6PM/GveQJWuomcBnj7AF/G3vfi7QMsC+vBD4Bu+Fudv+CI++/o7fPJNFfdfBtoXGO0Fo98xeBFGg8rAopKhE3AuKC+LL7K+APyiA5VBpw5auiyqvAKZwB3o5yoM1UBUGa+hVRZdWYFyJpR7+ZtWoN4D9UEERjnL3/XMKGfG8rJj+bShfraifHYGffIS9PIBfP8AnB/AD2dwi4lFd3egDz4AffELaN/wNfj0W76A+6+qOH8JOH9RhA1Ihe4knIm1+hXgBeDCDpr6It+jsAvsAYD5gpP2c2XwAvS7Dtz1YKgMYT5db+4AsTK2lbB8SuhVxou+7h5f/ppP8MW7M14sK2oSCKxC4bPzCffrgk/vT/j0Rz/E6ccWnH6ScPcVOCPnonWP4UC/izbxzOxZxoWaMLTyAJQzUHx82D8vLzvKmWXsWke5byifPmD9mi/gJ3/OB/jobyRwjTJ5AXplBy3UgLISygNw+hio91Y2UBp739YHxhe//1OUn/wUtDb81V/zzfjkZzLWDztQgfpJccZrAs/GJ4C4AKFyr8KTgH7HAhJ0XFnXBev8rZ8W1JdyHxPAJ+tPFoG3MDitB1LhzpXx5W/6Cr7xqz7Cl0736Ez4Cz/29fjKj3wR9cdPOH1E0b9nXQP3jHoPnD7tuPuJFcunK8pnK8r9GWgdKKIY8Knis2/+Ij79hgWffSPh/ssxJzMQpSZzysFfFnY2f9M41Hvg7icZ9UEu+8rPAfjnfopv+NqP8I0ffoSX7YS/8pNfg49+8g
uoP/gCYOk/Np6S0WsHSOc5NV3zvvaB5SX0nbF8xrj/MuHl1xE++xkrvuFbfxw//Ysf48PlAQ9twQ9+8lX4sa98iPP
"text/plain": [
"<Figure size 648x432 with 2 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"with paddle.no_grad():\n",
" mel = fastspeech2_inference(phone_ids)\n",
"print(\"shepe of mel (n_frames x n_mels):\")\n",
"print(mel.shape)\n",
"# 绘制声学模型输出的 mel 频谱\n",
"fig, ax = plt.subplots(figsize=(9, 6))\n",
"im = ax.imshow(mel.T, aspect='auto',origin='lower')\n",
"fig.colorbar(im, ax=ax)\n",
"plt.title('Mel Spectrogram')\n",
"plt.xlabel('Time')\n",
"plt.ylabel('Frequency')\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 声码器Vocoder\n",
3 years ago
"<br></br>\n",
"声码器将声学特征转换为波形。声码器需要解决的是 <font color=\"#ff0000\">“信息缺失的补全问题”</font>。信息缺失是指,在音频波形转换为频谱图的时候,存在**相位信息**的缺失,在频谱图转换为 mel 频谱图的时候,存在**频域压缩**导致的信息缺失假设音频的采样率是16kHZ, 一帧的音频有 10ms也就是说1s 的音频有 16000 个采样点,而 1s 中包含 100 帧,每一帧有 160 个采样点,声码器的作用就是将一个频谱帧变成音频波形的 160 个采样点,所以声码器中一般会包含**上采样**模块。\n",
" \n",
"<br></br>\n",
"Like acoustic models, vocoders come in autoregressive and non-autoregressive flavors; a more detailed taxonomy:\n",
"\n",
"- Autoregressive\n",
"    - WaveNet\n",
"    - WaveRNN\n",
"    - LPCNet\n",
"- Flow\n",
"    - WaveFlow\n",
"    - WaveGlow\n",
"    - FloWaveNet\n",
"    - Parallel WaveNet\n",
"- GAN\n",
"    - WaveGAN\n",
"    - Parallel WaveGAN\n",
"    - MelGAN\n",
"    - HiFi-GAN\n",
"- VAE\n",
"    - Wave-VAE\n",
"- Diffusion\n",
"    - WaveGrad\n",
"    - DiffWave\n",
"\n",
"<br></br>\n",
"PaddleSpeech TTS implements Baidu's `WaveFlow` and several mainstream GAN vocoders; in this tutorial we use `Parallel WaveGAN` as the vocoder.\n",
"\n",
"<br></br> \n",
"<img style=\"float: center;\" src=\"source/pwgan.png\" width=\"75%\"/> \n",
"\n",
"<br></br>\n",
"The table below compares the generator and discriminator losses of these GAN vocoders:\n",
" \n",
"Model | Generator Loss | Discriminator Loss\n",
":-------------:| :------------:| :-----\n",
"Parallel WaveGAN | adversarial loss <br> multi-resolution STFT loss | adversarial loss (single discriminator) |\n",
"MelGAN | adversarial loss <br> feature matching | adversarial loss (multi-scale discriminator) |\n",
"Multi-Band MelGAN | adversarial loss <br> full-band multi-resolution STFT loss <br> sub-band multi-resolution STFT loss | multi-scale discriminator |\n",
"HiFi-GAN | adversarial loss <br> feature matching <br> mel-spectrogram loss | multi-scale discriminator <br> multi-period discriminator |\n"
]
},
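{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a concrete illustration of one entry in the table, a single-resolution STFT loss (spectral convergence plus log-magnitude distance) can be sketched with numpy and then averaged over several FFT sizes, as Parallel WaveGAN does. This is a simplified sketch, not the PaddleSpeech implementation:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def stft_mag(x, n_fft, hop):\n",
"    # naive magnitude STFT, enough for illustration\n",
"    frames = [x[i:i + n_fft] * np.hanning(n_fft)\n",
"              for i in range(0, len(x) - n_fft + 1, hop)]\n",
"    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))\n",
"\n",
"def stft_loss(y, y_hat, n_fft, hop):\n",
"    s, s_hat = stft_mag(y, n_fft, hop), stft_mag(y_hat, n_fft, hop)\n",
"    sc = np.linalg.norm(s - s_hat) / np.linalg.norm(s)  # spectral convergence\n",
"    mag = np.mean(np.abs(np.log(s + 1e-7) - np.log(s_hat + 1e-7)))\n",
"    return sc + mag\n",
"\n",
"y = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)\n",
"y_hat = y + 0.01 * np.random.RandomState(0).randn(16000)\n",
"# multi-resolution: average the loss over several FFT sizes\n",
"loss = np.mean([stft_loss(y, y_hat, n, n // 4) for n in (512, 1024, 2048)])\n",
"print(loss)\n",
"```"
]
},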
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 初始化声码器 Parallel WaveGAN"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parallel WaveGAN done!\n"
]
}
],
"source": [
"vocoder = PWGGenerator(**pwg_config[\"generator_params\"])\n",
"\n",
"vocoder.set_state_dict(paddle.load(pwg_checkpoint)[\"generator_params\"]) # 模型加载预训练参数\n",
3 years ago
"vocoder.remove_weight_norm()\n",
"vocoder.eval() # 推理阶段不启用 batch norm 和 dropout\n",
"\n",
"stat = np.load(pwg_stat) # 读取数据预处理阶段数据集的均值和标准差\n",
3 years ago
"mu, std = stat\n",
"mu, std = paddle.to_tensor(mu), paddle.to_tensor(std)\n",
3 years ago
"pwg_normalizer = ZScore(mu, std)\n",
"pwg_inference = PWGInference(pwg_normalizer, vocoder) # 构建归一化的模型\n",
3 years ago
"pwg_inference.eval()\n",
"print(\"Parallel WaveGAN done!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 调用声码器"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"shepe of wav (time x n_channels):[104100, 1]\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAoAAAAGoCAYAAADW2lTlAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAABfRUlEQVR4nO3dd3hUddYH8O9J6F2qSAtVRJQWKaLYUEFUXHvvi6vvumtbF3tdxWUta1lX7G3tDQVFQAQU6SAd6VKlSO9JzvvHzMBkMuXemdvv9/M8echMbuYeMsnMub9yjqgqiIiIiCg88twOgIiIiIicxQSQiIiIKGSYABIRERGFDBNAIiIiopBhAkhEREQUMkwAiYiIiEKGCSARkc1EpIGIjBOR7SLypNvxEBExASSiQBORu0Tk64T7FqW472KbwhgAYCOAGqp6u03nICIyjAkgEQXdOADHikg+AIhIQwDlAXRKuK9V9Fg7NAMwT7OovC8i5WyIh4hCjgkgEQXdFEQSvo7R28cDGANgYcJ9SwCcLiLzo1O1S0XkhtiDRO8/M+52ORHZICKdo7e7i8gEEdkiIj+LyInR+98AcBWAO0Vkh4j0FpGKIvKMiKyJfjwjIhWjx58oIqtE5O8isg7A6yLyoIh8JCLvRGObLSJtoqOb60VkpYicZtPPj4gCiAkgEQWaqu4DMAlAr+hdvQCMB/BDwn3jAKwHcCaAGgCuAfB0LMED8B6AS+Ie+nQAG1V1uog0AjAMwKMAagO4A8AnIlJPVa8G8C6Af6pqNVUdBeAeAN0RSUA7AOgK4N64xz40+jjNEJk+BoCzALwN4BAAMwCMQOQ1vBGAhwG8lNUPiIhCiQkgEYXBWBxM9o5HJAEcn3DfWFUdpqpLNGIsgG+jXwOA/wE4W0SqRG9fikhSCACXAxiuqsNVtURVRwKYCuCMFPFcBuBhVV2vqhsAPATgirivlwB4QFX3quru6H3jVXWEqhYB+AhAPQCDVHU/gPcBFIhILbM/GCIKJyaARBQG4wAcJyK1AdRT1UUAJiCyNrA2gPYAxolIXxGZKCK/i8gWRBK4ugCgqosBzAdwVjQJPBuRpBCIjNRdEJ3+3RL93uMANEwRz2EAVsTdXhG9L2aDqu5J+J7f4j7fjcjoY3HcbQColukHQUQEAFxcTERh8BOAmgD+COBHAFDVbSKyJnrfmujHXABXAvhCVfeLyOcAJO5xYtPAeYhs6lgcvX8lgLdV9Y8G41mDSNI4N3q7afS+GNObRYiIzOAIIBEFXnQadSqA2xCZ+o35IXrfOAAVAFQEsAFAkYj0BZC4seL96H034uDoHwC8g8jI4Okiki8ilaKbORqnCOk9APeKSD0RqQvg/uhjEBE5ggkgEYXFWAD1EUn6YsZH7xunqtsB/AXAhwA2I7LGb2j8A6jqWkRGE48F8EHc/SsB9AdwNyIJ5EoAf0Pq19hHEUlIZwGYDWB69D4iIkdIFmWpiIiIiMjHOAJIREREFDJMAImIiIhChgkgERERUcgwASQiIiIKmcDVAaxbt64WFBS4HQYRERGR66ZNm7ZRVesl3h+4BLCgoABTp051OwwiIiIi14nIimT3cwqYiIiIKGSYABIRERGFDBNAIiIiopBhAkhEREQUMkwAiYiIiEKGCSARERFRyDABJCIiIgoZJoBEREREIcMEkIiIiChkmAASERERhQwTQCIiIqKQYQJIREREFDJMAImIiIhChgkgERERUcgwASSirAz8ZBY+nrbK7TCIiCgLTACJKCvvT1mJOz762e0wiIgoC0wAiYiIiEKGCSAR5WTWqi1uh0BERCYxASSinIyYu87tEIiIyCQmgEREREQhwwSQiIiIKGSYABIRERGFDBNAIiIiopBhAkhEREQUMkwAiYiIiEKGCSARERFRyDABJCIiIgoZJoBEREREIcMEkIhyoup2BEREZBYTQCIybefeIrdDICKiHDABJCLTrnh1ktshEBFRDp
gAEpFp03/d4nYIRESUAyaARERERCHDBJCIcvKf75fg+jenuB0GERGZwASQiHI2av56t0MgIiITmAASERERhQwTQCIiIqKQYQIYYEXFJXjjx2XYX1zidihERETkIUwAA+zdSb/iwS/n4dUflrkdChEREXkIE8AA275nf6l/iYiIiAAmgEREREShwwQwBF4Ys8TtEChA9hYVux0CERHlyNUEUET6iMhCEVksIgNTHHOhiMwTkbki8j+nYySi0g6/9xu3QyAiohyVc+vEIpIP4AUApwJYBWCKiAxV1Xlxx7QGcBeAnqq6WUTquxMtERERUXC4OQLYFcBiVV2qqvsAvA+gf8IxfwTwgqpuBgBVZbsBIiIiohy5mQA2ArAy7vaq6H3x2gBoIyI/ishEEemT7IFEZICITBWRqRs2bLApXH/jTmAiIiKK8fomkHIAWgM4EcAlAF4WkVqJB6nqEFUtVNXCevXqORuhT9z4znS3QyAiIiKPcDMBXA2gSdztxtH74q0CMFRV96vqMgC/IJIQkkmzVm1xOwQiIiLyCDcTwCkAWotIcxGpAOBiAEMTjvkckdE/iEhdRKaElzoYIxEREVHguJYAqmoRgD8DGAFgPoAPVXWuiDwsImdHDxsBYJOIzAMwBsDfVHWTOxH7z3PfLXY7BAqRkhJ1OwQiIjLItTIwAKCqwwEMT7jv/rjPFcBt0Q8yaW9RyYHPt+0pcjESCoNXfliKAb1auh0GEREZ4PVNIGShJRt2uB0CBdj8tdvdDoGIiAxiAhgie/eXZD6IiIiIAo8JIBEREVHIMAEkIiIiChkmgEREREQhwwSQiAzbsH2v2yEQEZEFmAASkWG3f/Sz2yEQEZEFmAASkWG797GeJBFREDABDJFpK353OwQiIiLyACaAIXLfF3PdDoECLNK4h4iI/IAJIBEREVHIMAEkIiIiChkmgEREREQhwwQwoL6Zsy7p/fuL2Q+YiIgo7JgABtRrPy5Lev8/v1ngcCRERETkNUwAQ2b26q1uh0BEREQuYwJIREREFDJMAInIMIG4HQIREVmACSD5zvy127Btz363w6AEv/6+y+0QiIjIICaA5Dt9/z0el708ye0wKMH0X7dgz/5it8MgIiIDmACSL3EzizftY5khIiJfYAJIREREFDJMAANq8rLf3Q6BiIiIPIoJIBEREVHIMAEkIiIiChkmgEREREQhwwQwZCYu/R37uVOTiIgo1JgABlBJiab9+t4i/yaAP6/c4nYIREREvscEMICe/W6R2yHYZsj4pW6HEG4ZOsFp+msPIiLyCCaAATRs1lq3QyAiIiIPYwJIvvLb1j1uh0BEROR7TABDaOS8dW6HkJVd+4owdcVmt8MIN07xEhEFAhPAELr1g5/dDiEru/cVux1C6E1ezg4zRERBwASQiIiIKGSYAIbUR1NXuh0CBdDi9TvcDoGIiAxgAhhSQ39e43YIFEDnvTjB7RCIiMgAJoBEREREIcMEkIiIiChkmAASERERhQwTQPINkQx9yIiIiMgQJoBEREREIcMEMKTGL9qIEXP92RGEiIiIcsMEMMRueHua2yHkpLiEfcmIiIiy4WoCKCJ9RGShiCwWkYFpjjtPRFRECp2Mj7xly659pW63vHu4S5EQERH5m2sJoIjkA3gBQF8A7QBcIiLtkhxXHcBfAUxyNkLymts+9GcPYyIiIq9xcwSwK4DFqrpUVfcBeB9A/yTHPQLgCQB7nAwuLFT9M436+859mQ8iIiKijNxMABsBiG9Iuyp63wEi0hlAE1Udlu6BRGSAiEwVkakbNmywPtIAe/67xW6HYBirwBAREVnDs5tARCQPwFMAbs90rKoOUdVCVS2sV6+e/cEFyOczV7sdAvnEnv3FbodAREQWcTMBXA2gSdztxtH7YqoDaA/gexFZDqA7gKHcCBJePpqtDqRrXp/idghERGQRNxPAKQBai0hzEakA4GIAQ2NfVNWtqlpXVQtUtQDARABnq+pUd8IlCr
eflm4ydJyf1pUSEYWVawmgqhYB+DOAEQDmA/hQVeeKyMMicrZbcQXBovU73A6BQuyhL+e5HQIREWVQzs2Tq+pwAMM
"text/plain": [
"<Figure size 648x432 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"with paddle.no_grad():\n",
" wav = pwg_inference(mel)\n",
"print(\"shepe of wav (time x n_channels):%s\"%wav.shape)\n",
"\n",
3 years ago
"# 绘制声码器输出的波形图\n",
"wave_data = wav.numpy().T\n",
"time = np.arange(0, wave_data.shape[1]) * (1.0 / fastspeech2_config.fs)\n",
"fig, ax = plt.subplots(figsize=(9, 6))\n",
"plt.plot(time, wave_data[0])\n",
"plt.title('Waveform')\n",
"plt.xlabel('Time (seconds)')\n",
"plt.ylabel('Amplitude (normed)')\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 播放音频"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"dp.Audio(wav.numpy().T, rate=fastspeech2_config.fs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 保存音频"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!mkdir output\n",
3 years ago
"sf.write(\n",
" \"output/output.wav\",\n",
" wav.numpy(),\n",
" samplerate=fastspeech2_config.fs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 进阶 —— 个性化调节\n",
3 years ago
"<br></br>\n",
"FastSpeech2 模型可以个性化地调节音素时长、音调和能量,通过一些简单的调节就可以获得一些有意思的效果。例如对于以下的原始音频`\"不要听信别人的谗言,我不是什么克隆人\"`。"
]
},
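{
"cell_type": "markdown",
"metadata": {},
"source": [
"The controls demonstrated below can be sketched as simple scaling of FastSpeech2's predicted variances before the length regulator (illustrative numbers only; the names `durations` and `pitch` here are hypothetical, not the PaddleSpeech API):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"durations = np.array([3.0, 5.0, 4.0, 6.0])      # predicted frames per phoneme\n",
"pitch = np.array([200.0, 210.0, 190.0, 205.0])  # predicted pitch per phoneme (Hz)\n",
"\n",
"speed = 1.2                        # 1.2x faster speech -> fewer frames\n",
"new_durations = durations / speed\n",
"new_pitch = pitch * 1.3            # raising pitch gives a child-like voice\n",
"\n",
"print(new_durations.sum() / durations.sum())\n",
"```\n",
"\n",
"Flattening the pitch contour to a constant instead is one way to get a robot-like voice."
]
},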
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"原始音频\n"
]
},
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x1_001.wav\" type=\"audio/x-wav\" />\n",
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"speed x 1.2\n"
]
},
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x1.2_001.wav\" type=\"audio/x-wav\" />\n",
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"speed x 0.8\n"
]
},
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x0.8_001.wav\" type=\"audio/x-wav\" />\n",
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"pitch x 1.3(童声)\n"
]
},
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/child_voice/001.wav\" type=\"audio/x-wav\" />\n",
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"robot\n"
]
},
{
"data": {
"text/html": [
"\n",
" <audio controls=\"controls\" >\n",
" <source src=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/robot/001.wav\" type=\"audio/x-wav\" />\n",
" Your browser does not support the audio element.\n",
" </audio>\n",
" "
],
"text/plain": [
"<IPython.lib.display.Audio object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"print(\"原始音频\")\n",
"dp.display(dp.Audio(url=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x1_001.wav\"))\n",
3 years ago
"print(\"speed x 1.2\")\n",
"dp.display(dp.Audio(url=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x1.2_001.wav\"))\n",
3 years ago
"print(\"speed x 0.8\")\n",
"dp.display(dp.Audio(url=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/speed/x0.8_001.wav\"))\n",
"print(\"pitch x 1.3(童声)\")\n",
"dp.display(dp.Audio(url=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/child_voice/001.wav\"))\n",
"print(\"robot\")\n",
"dp.display(dp.Audio(url=\"https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/robot/001.wav\"))"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"具体实现代码请参考 [Style Fs2](https://github.com/DeepSpeech/demos/style_fs2/run.sh)。"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<br></br>\n",
"## 用 PaddleSpeech 训练 TTS 模型\n",
3 years ago
"<br></br>\n",
"PaddleSpeech 的 examples 是按照 数据集/模型 的结构安排的:\n",
3 years ago
"```text\n",
"examples \n",
"├── aishell3\n",
"│ ├── README.md\n",
"│ ├── tts3\n",
"│ └── vc0\n",
"├── csmsc\n",
"│ ├── README.md\n",
"│ ├── tts2\n",
"│ ├── tts3\n",
"│ ├── voc1\n",
"│ └── voc3\n",
"├── ...\n",
"└── ...\n",
3 years ago
"```\n",
"我们在每个数据集的 README.md 介绍了子目录和模型的对应关系, 在 TTS 中有如下对应关系:\n",
"```text\n",
"tts0 - Tacotron2\n",
"tts1 - TransformerTTS\n",
"tts2 - SpeedySpeech\n",
"tts3 - FastSpeech2\n",
"voc0 - WaveFlow\n",
"voc1 - Parallel WaveGAN\n",
"voc2 - MelGAN\n",
"voc3 - MultiBand MelGAN\n",
"```\n",
"<br></br>\n",
"### Training FastSpeech2 on the CSMSC dataset\n",
"```bash\n",
"git clone https://github.com/PaddlePaddle/PaddleSpeech.git\n",
"cd examples/csmsc/tts3\n",
"```\n",
"Following the README.md, download the CSMSC dataset and its forced-alignment files and put them in the expected locations, then run:\n",
"```bash\n",
"./run.sh\n",
"```\n",
"`run.sh` covers preprocessing, training, synthesis and static-graph inference:\n",
"\n",
"```bash\n",
"#!/bin/bash\n",
"set -e\n",
"source path.sh\n",
"gpus=0,1\n",
"stage=0\n",
"stop_stage=100\n",
"conf_path=conf/default.yaml\n",
"train_output_path=exp/default\n",
"ckpt_name=snapshot_iter_153.pdz\n",
"\n",
"# with the following command, you can choice the stage range you want to run\n",
"# such as `./run.sh --stage 0 --stop-stage 0`\n",
"# this can not be mixed use with `$1`, `$2` ...\n",
"source ${MAIN_ROOT}/utils/parse_options.sh || exit 1\n",
"\n",
"if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then\n",
" # prepare data\n",
" bash ./local/preprocess.sh ${conf_path} || exit -1\n",
"fi\n",
"if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then\n",
" # train model, all `ckpt` under `train_output_path/checkpoints/` dir\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1\n",
"fi\n",
"if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then\n",
" # synthesize, vocoder is pwgan\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1\n",
"fi\n",
"if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then\n",
" # synthesize_e2e, vocoder is pwgan\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1\n",
"fi\n",
"if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then\n",
" # inference with static model\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/inference.sh ${train_output_path} || exit -1\n",
"fi\n",
"```\n",
"\n",
"### 基于 CSMCS 数据集训练 Parallel WaveGAN 模型\n",
3 years ago
"```bash\n",
"git clone https://github.com/PaddlePaddle/PaddleSpeech.git\n",
3 years ago
"cd examples/csmsc/voc1\n",
"```\n",
"根据 README.md, 下载 CSMCS 数据集和其对应的强制对齐文件, 并放置在对应的位置\n",
3 years ago
"```bash\n",
"./run.sh\n",
"```\n",
"`run.sh` 中包含预处理、训练、合成等步骤:\n",
3 years ago
"```bash\n",
"#!/bin/bash\n",
"set -e\n",
"source path.sh\n",
"gpus=0,1\n",
"stage=0\n",
"stop_stage=100\n",
"conf_path=conf/default.yaml\n",
"train_output_path=exp/default\n",
"ckpt_name=snapshot_iter_5000.pdz\n",
"\n",
"# with the following command, you can choice the stage range you want to run\n",
"# such as `./run.sh --stage 0 --stop-stage 0`\n",
"# this can not be mixed use with `$1`, `$2` ...\n",
"source ${MAIN_ROOT}/utils/parse_options.sh || exit 1\n",
"\n",
"if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then\n",
" # prepare data\n",
" ./local/preprocess.sh ${conf_path} || exit -1\n",
"fi\n",
"if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ]; then\n",
" # train model, all `ckpt` under `train_output_path/checkpoints/` dir\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path} || exit -1\n",
"fi\n",
"if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then\n",
" # synthesize\n",
" CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name} || exit -1\n",
"fi\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# FAQ\n",
"\n",
"- 需要注意的问题\n",
"- 经验与分享\n",
"- 用户的其他问题"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 作业\n",
"在 CSMSC 数据集上利用 FastSpeech2 和 Parallel WaveGAN 实现一个中文 TTS 系统。"
3 years ago
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 关注 PaddleSpeech\n",
3 years ago
"我们的 [Github地址](https://github.com/PaddlePaddle/PaddleSpeech/),欢迎加入以下微信群参与讨论:\n",
" \n",
"<img src=\"./source/wechat-group.png\" width=\"20%\" />"
3 years ago
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.0 64-bit ('yt_py37_develop': venv)",
"language": "python",
"name": "python37064bitytpy37developvenv88cd689abeac41d886f9210a708a170b"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {
"height": "607px",
"left": "93px",
"top": "66.1333px",
"width": "200.594px"
},
"toc_section_display": true,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 4
}