# Chinese Syllable
## Syllable
* [List of Syllables in Pinyin](https://resources.allsetlearning.com/chinese/pronunciation/syllable)
A syllable is a unit of a word, composed of an (optional) initial, a final, and a tone.
The word "syllable" is 音节 (yīnjié) in Chinese.
Most spoken syllables in Mandarin Chinese correspond to one written Chinese character.
There are a total of 410 common pinyin syllables.
* [Rare syllable](https://resources.allsetlearning.com/chinese/pronunciation/Rare_syllable)
* [Chinese Pronunciation: The Complete Guide for Beginners](https://www.digmandarin.com/chinese-pronunciation-guide.html)
* [Mandarin Chinese Phonetics](http://www.zein.se/patrick/chinen8p.html)
* [Chinese Phonetics](https://www.easymandarin.cn/online-chinese-lessons/chinese-phonetics/)
Chinese Characters, called “Hanzi”, are the writing symbols of the Chinese language.
Pinyin is the romanized phonetic notation for Chinese characters.
Each syllable is composed of three parts: initials, finals, and tones.
In the Pinyin system there are 23 initials, 24 finals, 4 tones and a neutral tone.
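As a quick illustration of the initial/final/tone decomposition, here is a minimal sketch using the third-party `pypinyin` package (an assumption; it is not referenced above):
```python
# pip install pypinyin  (third-party package, assumed for this illustration)
from pypinyin import pinyin, Style

word = "中国"
print(pinyin(word, style=Style.INITIALS))      # initials: [['zh'], ['g']]
print(pinyin(word, style=Style.FINALS_TONE3))  # finals with tone digits: [['ong1'], ['uo2']]
```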
## Pinyin
* [Pinyin](https://en.wikipedia.org/wiki/Pinyin)
* [Pinyin quick start guide](https://resources.allsetlearning.com/chinese/pronunciation/Pinyin_quick_start_guide)
* [Pinyin Table](https://en.wikipedia.org/wiki/Pinyin_table)
* [Pinyin Chart](https://resources.allsetlearning.com/chinese/pronunciation/Pinyin_chart)
* [Mandarin Chinese Pinyin Table](https://www.archchinese.com/chinese_pinyin.html)
* [Chinese Pinyin Table](http://www.quickmandarin.com/chinesepinyintable/)
## Tones
* [Four tones](https://resources.allsetlearning.com/chinese/pronunciation/Four_tones)
* [Neutral tone](https://resources.allsetlearning.com/chinese/pronunciation/Neutral_tone)
* [Where do the tone marks go?](http://www.pinyin.info/rules/where.html)
* [声调符号标在哪儿?](http://www.hwjyw.com/resource/content/2010/06/04/8183.shtml)
## Zhuyin
* [Bopomofo](https://en.wikipedia.org/wiki/Bopomofo)
* [Zhuyin table](https://en.wikipedia.org/wiki/Zhuyin_table)

# Dataset
## Text
* [Tatoeba](https://tatoeba.org/cmn)
**Tatoeba is a collection of sentences and translations.** It's collaborative, open, free, and even addictive: an open data initiative useful for translation and speech recognition.
## Speech
* [Tatoeba](https://tatoeba.org/cmn)
Tatoeba (see the Text section above) also provides audio recordings for many of its sentences.
### ASR Noise
* [asr-noises](https://github.com/speechio/asr-noises)

# Praat and TextGrid
* [**Praat: doing phonetics by computer**](https://www.fon.hum.uva.nl/praat/)
* [TextGrid](https://github.com/kylebgorman/textgrid)
## Praat
**Praat** (in full, **Praat: doing phonetics by computer**) is a cross-platform, multi-purpose phonetics application, mainly used to analyze, annotate, process, and synthesize digitized speech signals, and to produce various spectrograms and text reports.
<img src="../images/Praat-output-showing-forced-alignment-for-Mae-hen-wlad-fy-nhadau-the-first-line-of-the.png" width=650>
## TextGrid
### TextGrid File Structure
```text
The first line is fixed: File type = "ooTextFile"
The second line is also fixed: Object class = "TextGrid"
(one blank line)
xmin = xxxx.xxxx  # start time
xmax = xxxx.xxxx  # end time
tiers? <exists>  # this line is fixed
size = 4  # the number of items in this file; items are also called tiers ("layers")
item []:
    item [1]:
        class = "IntervalTier"
        name = "phone"
        xmin = 1358.8925
        xmax = 1422.5525
        intervals: size = 104
        intervals [1]:
            xmin = 1358.8925
            xmax = 1361.8925
            text = "sil"
        intervals [2]:
            xmin = 1361.8925
            xmax = 1362.0125
            text = "R"
        intervals [3]:
            ...
        intervals [104]:
            xmin = 1422.2325
            xmax = 1422.5525
            text = "sil"
    item [2]:
        class = "IntervalTier"
        name = "word"
        xmin = 1358.8925
        xmax = 1422.5525
        intervals: size = 3
        intervals [1]:
            xmin = 1358.8925
            xmax = 1361.8925
            text = "sp"
```
In a TextGrid file, the value of `size` gives the number of items. Each item contains key-value pairs for `class`, `name`, `xmin`, `xmax`, and `intervals`. Within an item, `intervals: size` gives the number of intervals, and each interval has three keys: `xmin`, `xmax`, and `text`. The value of `xmax - xmin` is the same for every item.
### Installation
```bash
pip3 install textgrid
```
### Usage
1. Read a TextGrid file
```python
import textgrid
tg = textgrid.TextGrid()
tg.read('file.TextGrid') # 'file.TextGrid' is the file name
```
The `tg.tiers` attribute holds all the items in the file; `print(tg.tiers)` outputs:
```text
[IntervalTier(
phone, [
Interval(1358.89250, 1361.89250, sil),
Interval(1361.89250, 1362.01250, R),
Interval(1362.01250, 1362.13250, AY1),
Interval(1362.13250, 1362.16250, T),
...
]
)
]
```
In addition, `tg.tiers[0]` is the first IntervalTier; you can keep indexing it with brackets and access attributes with `.`.
For example:
```text
tg.tiers[0][0].mark --> 'sil'
tg.tiers[0].name --> 'phone'
tg.tiers[0][0].minTime --> 1358.8925
tg.tiers[0].intervals --> [Interval(1358.89250, 1361.89250, sil), ..., Interval(1422.23250, 1422.55250, sil)]
tg.tiers[0].maxTime --> 1422.55250
```
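Putting this together, here is a minimal sketch that walks every tier of a file and prints the intervals (the file name is a placeholder):
```python
import textgrid

tg = textgrid.TextGrid()
tg.read('file.TextGrid')  # placeholder file name

for tier in tg.tiers:
    print(tier.name, tier.minTime, tier.maxTime)
    if isinstance(tier, textgrid.IntervalTier):
        for interval in tier.intervals:
            print(interval.minTime, interval.maxTime, interval.mark)
```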
The textgrid module contains four kinds of objects:
```
PointTier     a collection of marks (points)
IntervalTier  a collection of durations (intervals)
Point         a single mark
Interval      a single duration
```
2. Objects in the textgrid library
**IntervalTier** object:
Methods
```
add(minTime, maxTime, mark): add an interval, given its start time, end time, and mark name.
addInterval(interval): add an Interval object, which already encapsulates the start and end times.
remove(minTime, maxTime, mark): remove an Interval
removeInterval(interval): remove an Interval
indexContaining(time): given a time or Point object, return the index of the Interval containing that time
For example:
print(tg[0].indexContaining(1362)) --> 1
meaning the Interval in tg[0] that contains time 1362 has index 1
intervalContaining(time): given a time or Point object, return the Interval containing that time
For example:
print(tg[0].intervalContaining(1362)) --> Interval(1361.89250, 1362.01250, R)
read(f): read a TextGrid file; f is a file object
write(f): write a TextGrid file; f is a file object
fromFile(f_path): read from a file; f_path is a file path
bounds(): returns a tuple (minTime, maxTime)
```
Attributes
```
intervals --> the list of all intervals
maxTime --> a number (decimal.Decimal), the end time
minTime --> a number (decimal.Decimal), the start time
name --> a string
strict --> a bool, whether the strict TextGrid format is enforced
```
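A minimal sketch of these methods on a hand-built tier (all times and labels here are made up):
```python
import textgrid

tier = textgrid.IntervalTier(name='phone', minTime=0.0, maxTime=3.0)
tier.add(0.0, 1.0, 'sil')            # start time, end time, mark
tier.add(1.0, 2.5, 'AY1')
print(tier.indexContaining(1.2))     # --> 1, index of the interval covering t=1.2
print(tier.intervalContaining(1.2))  # --> Interval(1.0, 2.5, AY1)
print(tier.bounds())                 # --> (0.0, 3.0)
```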
**PointTier** object:
Methods
```
add(time, mark): add a mark at the given time (a point has a single time, not a start and end).
addPoint(point): add a Point object, which already encapsulates a time and a mark.
remove(time, mark): remove a point, given its time and mark
removePoint(point): remove a point, given the Point object
read(f): read; f is a file object
write(f): write; f is a file object
fromFile(f_path): read from a file; f_path is a file path
bounds(): returns a tuple (minTime, maxTime)
```
Attributes
```
points   the list of all points
maxTime  the end time, as in IntervalTier
minTime  the start time, as in IntervalTier
name     the tier name
```
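A minimal sketch of a hand-built PointTier (times and labels are made up):
```python
import textgrid

tier = textgrid.PointTier(name='peaks', minTime=0.0, maxTime=2.0)
tier.add(0.5, 'H*')   # a point takes a single time and a mark
tier.add(1.5, 'L%')
print(tier.points)    # --> [Point(0.5, H*), Point(1.5, L%)]
print(tier.bounds())  # --> (0.0, 2.0)
```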
**Point** object:
Supports comparison and addition/subtraction.
Attributes:
```
mark: the label
time: the time
```
**Interval** object:
Supports comparison and addition/subtraction.
Supports the in and not in operators.
Methods:
```
duration(): returns a number, the duration of this Interval
bounds(): returns a tuple (minTime, maxTime)
overlaps(interval): returns a bool, True if this Interval's time span overlaps the given Interval's
```
Attributes:
```
mark
maxTime
minTime
strict: --> a bool, whether the strict TextGrid format is enforced
```
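A minimal sketch of Interval operations (values are made up):
```python
import textgrid

a = textgrid.Interval(0.0, 1.0, 'sil')
b = textgrid.Interval(0.8, 2.0, 'AY1')
print(a.duration())   # --> 1.0
print(a.overlaps(b))  # --> True, the spans share [0.8, 1.0]
print(0.5 in a)       # --> True, `in` tests whether a time falls inside the span
```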
**TextGrid** object:
Supports list-style indexing, copying, iteration, len(), and the append, extend, and pop methods.
Methods:
```
getFirst(tierName)  returns the first tier named tierName
getList(tierName)   returns the list of tiers named tierName
getNames()          returns the list of all tier names
append(tier)        adds one tier as an element
extend(tiers)       adds multiple tiers as elements
pop(tier)           removes a tier
read(f)             f is a file object
write(f)            f is a file object
fromFile(f_path)    f_path is a file path
```
Attributes:
```
maxTime
minTime
name
strict
tiers   the list of all tiers
```
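A minimal sketch that builds a TextGrid from scratch and writes it out (names and times are made up):
```python
import textgrid

tg = textgrid.TextGrid(name='demo', minTime=0.0, maxTime=3.0)
tier = textgrid.IntervalTier(name='word', minTime=0.0, maxTime=3.0)
tier.add(0.0, 1.0, 'hello')
tier.add(1.0, 3.0, 'world')
tg.append(tier)       # add the tier to the grid
print(tg.getNames())  # --> ['word']
with open('demo.TextGrid', 'w') as f:
    tg.write(f)       # write() takes a file object
```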
**MLF** object
MLF('xxx.mlf')
where 'xxx.mlf' is a file in MLF format;
reads an htk.mlf file produced by HVite -o SM and converts it into a list of TextGrids
Methods:
```
read(f)           f is a file object
write(prefix='')  prefix is an optional prefix for the output paths
```
Attributes:
```
grids: --> the list of TextGrids that were read
```
## Reference
* https://zh.wikipedia.org/wiki/Praat%E8%AF%AD%E9%9F%B3%E5%AD%A6%E8%BD%AF%E4%BB%B6
* https://blog.csdn.net/duxin_csdn/article/details/88966295

# Useful Tools
* [Regex visualization and common regular expressions](https://wangwl.net/static/projects/visualRegex/#)

# Text Front End
## Text Segmentation
There are various libraries, including some of the most popular ones like NLTK, spaCy, and Stanford CoreNLP, that provide excellent, easy-to-use functions for sentence segmentation.
* https://github.com/bminixhofer/nnsplit
* [DeepSegment](https://github.com/notAI-tech/deepsegment) [blog](http://bpraneeth.com/projects/deepsegment) [1](https://praneethbedapudi.medium.com/deepcorrection-1-sentence-segmentation-of-unpunctuated-text-a1dbc0db4e98) [2](https://praneethbedapudi.medium.com/deepcorrection2-automatic-punctuation-restoration-ac4a837d92d9) [3](https://praneethbedapudi.medium.com/deepcorrection-3-spell-correction-and-simple-grammar-correction-d033a52bc11d) [4](https://praneethbedapudi.medium.com/deepsegment-2-0-multilingual-text-segmentation-with-vector-alignment-fd76ce62194f)
## Text Normalization (文本正则)
English NLP involves **basic preprocessing steps** such as data cleaning, stemming/lemmatization, tokenization, and stop-word removal. **Not all of these steps are necessary for Chinese text data!**
### Lexicon Normalization
There's a concept similar to stems in Chinese: radicals. **Radicals are basically the building blocks of Chinese characters.** All Chinese characters are made up of a finite number of components, put together in different orders and combinations. The radical is usually the leftmost part of a character. There are around 200 radicals in Chinese, and they are used to index and categorize characters.
Therefore, procedures like stemming and lemmatization are not useful for Chinese text data, because separating the radicals would **change the word's meaning entirely**.
### Tokenization
**Tokenizing breaks up text data into shorter pre-set strings**, which help build context and meaning for the machine learning model.
These “tags” label the parts of speech. There are 24 part-of-speech tags and 4 proper-name category labels in the **jieba** package's built-in dictionary.
<img src="../images/jieba_tags.png" width=650>
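A minimal POS-tagging sketch with jieba, following its README (the tags in the comment are what jieba typically produces for this input):
```python
# pip install jieba
import jieba.posseg as pseg

for word, flag in pseg.cut("我爱北京天安门"):
    print(word, flag)  # 我 r / 爱 v / 北京 ns / 天安门 ns
```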
### Stop Words
In NLP, **stop words are “meaningless” words** that make the data too noisy or ambiguous.
Instead of removing them manually, you can import the **stopwordsiso** package for a full list of Chinese stop words. More information can be found [here](https://pypi.org/project/stopwordsiso/). With this, we can easily filter out the stop words in large text data, as the sketch after the next code block shows.
```python
!pip install stopwordsiso  # notebook syntax; from a shell, drop the leading '!'
import stopwordsiso
from stopwordsiso import stopwords
stopwords(["zh"]) # Chinese
```
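And here is the promised filtering sketch, combining stopwordsiso with jieba for tokenization (the sample sentence is made up):
```python
import jieba
from stopwordsiso import stopwords

zh_stops = stopwords(["zh"])  # the set of Chinese stop words
tokens = jieba.lcut("这是一个关于语音合成的简单例子")
print([t for t in tokens if t not in zh_stops])
```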
Text normalization mainly converts non-standard words (NSWs), for example:
* Numbers, phone numbers: 10086 -> 一千零八十六 / 幺零零八六
* Times, scores: 23:20 -> 二十三点二十分 / 二十三比二十
* Fractions, decimals, percentages: 3/4 -> 四分之三, 3.14 -> 三点一四, 15% -> 百分之十五
* Symbols, units: ¥ -> 元, kg -> 千克
* URLs, file suffixes: www. -> 三W点
A toy rule-based sketch follows the tool list below.
* https://github.com/google/re2
* https://github.com/speechio/chinese_text_normalization
* [vinorm](https://github.com/NoahDrisort/vinorm) [cpp_version](https://github.com/NoahDrisort/vinorm_cpp_version)
A Python package for text normalization, used in the front end of text-to-speech research.
* https://github.com/candlewill/CNTN
This is a ChiNese Text Normalization (CNTN) tool for text-to-speech systems, based on [sparrowhawk](https://github.com/google/sparrowhawk).
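As the toy sketch promised above, here is the percentage rule only, with a naive number reader limited to 0-99; a real normalizer needs far more rules and a general number reader:
```python
import re

DIGITS = "零一二三四五六七八九"

def read_number(n: int) -> str:
    """Read an integer in 0..99 as Chinese; real systems handle arbitrary numbers."""
    if n < 10:
        return DIGITS[n]
    tens, ones = divmod(n, 10)
    return ("" if tens == 1 else DIGITS[tens]) + "十" + (DIGITS[ones] if ones else "")

def normalize_percent(text: str) -> str:
    # 15% -> 百分之十五
    return re.sub(r"(\d{1,2})%", lambda m: "百分之" + read_number(int(m.group(1))), text)

print(normalize_percent("利润增长了15%"))  # -> 利润增长了百分之十五
```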
## Word Segmentation (分词)
Why segmentation matters can be seen from this example:
广州市长隆马戏欢迎你 -> 广州市 长隆 马戏 欢迎你
A segmentation error makes the sentence meaning completely wrong:
广州 市长 隆马戏 欢迎你
Common approaches are maximum forward matching (dictionary-based) and CRF-based segmentation. The CRF approach recasts the task as sequence labeling; compared with dictionary-based methods it handles ambiguity and out-of-vocabulary words much better, but bugs cannot be fixed quickly, and its performance is slightly below a dictionary's. A jieba example follows the tool list below.
Common tools for Chinese word segmentation:
* https://github.com/lancopku/PKUSeg-python
* https://github.com/thunlp/THULAC-Python
* https://github.com/fxsjy/jieba
* CRF++
* https://github.com/isnowfy/snownlp
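The promised jieba check of the example sentence (the output comment is what the default dictionary is expected to produce; it can vary across versions):
```python
import jieba

print("/".join(jieba.cut("广州市长隆马戏欢迎你")))
# expected: 广州市/长隆/马戏/欢迎/你
```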
### MMSEG
* [MMSEG: A Word Identification System for Mandarin Chinese Text Based on Two Variants of the Maximum Matching Algorithm](http://technology.chtsai.org/mmseg/)
* [`中文分词`简单高效的MMSeg](https://www.cnblogs.com/en-heng/p/5872308.html)
* [mmseg分词算法及实现](https://blog.csdn.net/daniel_ustc/article/details/50488040)
* [Mmseg算法](https://www.jianshu.com/p/e4ae8d194487)
* [浅谈中文分词](http://www.isnowfy.com/introduction-to-chinese-segmentation/)
* [pymmseg-cpp](https://github.com/pluskid/pymmseg-cpp.git)
* [ustcdane/mmseg](https://github.com/ustcdane/mmseg)
* [jkom-cloud/mmseg](https://github.com/jkom-cloud/mmseg)
### CScanner
* [CScanner - A Chinese Lexical Scanner](http://technology.chtsai.org/cscanner/)
## Part of Speech (词性预测)
Tag glossary:
n/noun np/person name ns/place name ni/organization name nz/other proper noun
m/numeral q/measure word mq/numeral+measure t/time word f/locative s/place word
v/verb a/adjective d/adverb h/prefix component k/suffix component
i/idiom j/abbreviation r/pronoun c/conjunction p/preposition u/particle y/modal particle
e/interjection o/onomatopoeia g/morpheme w/punctuation x/other
## G2P (注音)
G2P converts each word into its pronunciation; for Chinese this means converting to pinyin, e.g. 绿色 -> (lv4 se4), where the digits denote tones.
The traditional approach uses a dictionary, but that struggles with out-of-vocabulary words. A model-based approach is [Phonetisaurus](https://github.com/AdolfVonKleist/Phonetisaurus); see the paper "WFST-based Grapheme-to-Phoneme Conversion: Open Source Tools for Alignment, Model-Building and Decoding".
The problem can also be treated as sequence labeling, and both CRFs and neural models work. A neural-network-based tool: [g2pM](https://github.com/kakaobrain/g2pM), sketched below.
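A minimal g2pM sketch following its README (treat the exact call signature and output as an assumption):
```python
# pip install g2pM
from g2pM import G2pM

model = G2pM()
print(model("绿色", tone=True))  # expected: ['lv4', 'se4']
```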
## Prosody (韵律预测)
ToBI (an abbreviation of tones and break indices) is a set of conventions for transcribing and annotating the prosody of speech. For Chinese, the main concern is breaks.
Prosodic hierarchy:
phone -> syllable -> prosodic word (PW) -> prosodic phrase (PPH) -> intonational phrase (IPH) -> sub-sentence -> main sentence -> paragraph -> discourse
LP -> LO -> L1(#1) -> L2(#2) -> L3(#3) -> L4(#4) -> L5 -> L6 -> L7
The focus is mainly on PW, PPH, and IPH.
| | Pause duration | Surrounding pitch |
| --- | ----------| --- |
| Prosodic word (PW) boundary | No pause, or no perceptible pause | None |
| Prosodic phrase (PPH) boundary | A perceptible pause, but no obvious silent segment | Pitch does not decline, or declines slightly; the phrase end cannot serve as a sentence end |
| Intonational phrase (IPH) boundary | A longer pause | Pitch declines fairly completely; the phrase end can serve as a sentence end |
A common approach is a cascade of CRFs: first predict whether a boundary is a PW; if so, predict whether it is also a PPH, and then whether it is an IPH.
<img src="../images/prosody.jpeg" width=450>
Paper: 2015, Ding et al., Automatic Prosody Prediction for Chinese Speech Synthesis Using BLSTM-RNN and Embedding Features.
## Polyphone (多音字)
## Linguistic Features (语言学特征)
## Neural-Network-Based Text Front-End Models
In the last couple of years these have mostly been BERT-based, so here are the relevant papers:
- G2P: 2019, Sevinj et al., Transformer based Grapheme-to-Phoneme Conversion
- Segmentation: 2019, Huang et al., Toward Fast and Accurate Neural Chinese Word Segmentation with Multi-Criteria Learning
- Prosody: 2020, Zhang et al., Chinese Prosodic Structure Prediction Based on a Pretrained Language Representation Model
Besides these, BLSTM + CRF remains a mainstream choice.
## Summary
To summarize, the methods used by each text-analysis module:
* TN: rule-based methods
* Segmentation: dictionary / CRF / BLSTM+CRF / BERT
* G2P: n-gram / CRF / BLSTM / seq2seq
* Prosody: CRF / BLSTM+CRF / BERT
Since segmentation, G2P, and prosody are all sequence-labeling tasks, in theory a single model could handle them all.
## Reference
* [Text Front End](https://slyne.github.io/%E5%85%AC%E5%BC%80%E8%AF%BE/2020/10/03/TTS1/)
* [Chinese Natural Language (Pre)processing: An Introduction](https://towardsdatascience.com/chinese-natural-language-pre-processing-an-introduction-995d16c2705f)
* [Beginners Guide to Sentiment Analysis for Simplified Chinese using SnowNLP](https://towardsdatascience.com/beginners-guide-to-sentiment-analysis-for-simplified-chinese-using-snownlp-ce88a8407efb)