diff --git a/assets/image-20240426165150933.png b/assets/image-20240426165150933.png
new file mode 100644
index 0000000..0fa4ba0
Binary files /dev/null and b/assets/image-20240426165150933.png differ
diff --git a/人人都能看懂的Transformer/第二章——文字向量化.md b/人人都能看懂的Transformer/第二章——文字向量化.md
index 1f642f0..7ddf27a 100644
--- a/人人都能看懂的Transformer/第二章——文字向量化.md
+++ b/人人都能看懂的Transformer/第二章——文字向量化.md
@@ -48,4 +48,17 @@ print(tokenizer.convert_ids_to_tokens(inputs['input_ids'][0]))
Why is it split into two tokens? The reason is that even English has hundreds of thousands of words or more, and most of them can be built from subwords; for example, "input" can be composed of "in" and "put". If every distinct word were given its own vocabulary entry, it would be hugely wasteful, and in the end each word might have to take dot products with hundreds of thousands of vectors. To make better use of resources and avoid an explosion in data dimensionality, we limit the vocabulary size; for example, GPT2Tokenizer has a vocabulary of 50257 tokens. The code is as follows:
-
\ No newline at end of file
+~~~python
+from transformers import GPT2Tokenizer
+
+# Initialize the tokenizer
+tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
+# Get the size of the vocabulary
+vocab_size = len(tokenizer)
+print(f"The vocabulary size of GPT2Tokenizer is: {vocab_size}")
+"""out:
+The vocabulary size of GPT2Tokenizer is: 50257
+"""
+~~~
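+
+A quick way to see subword splitting in practice is to tokenize a few sample words with the same tokenizer. The snippet below is only a minimal sketch; which pieces a given word is split into depends on GPT-2's learned BPE merges, and the sample words are chosen purely for illustration, so the actual splits may differ.
+
+~~~python
+from transformers import GPT2Tokenizer
+
+# Reuse the same GPT-2 tokenizer as above
+tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
+
+# Common words are often kept as a single token, while rarer or
+# made-up words tend to be split into several subword pieces.
+for word in ["input", "transformer", "untokenizable"]:
+    pieces = tokenizer.tokenize(word)
+    print(word, "->", pieces)
+~~~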
+
+
\ No newline at end of file