diff --git a/.gitignore b/.gitignore
index f780d5761..9c2ead20b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -361,3 +361,6 @@ MigrationBackup/
 .Rhistory
 ML-For-Beginners.Rproj
+.env
+.venv
+venv
\ No newline at end of file
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.es.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.es.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.es.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.hi.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.hi.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.hi.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.it.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.it.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.it.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ja.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ja.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ja.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ka.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ka.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ka.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ko.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ko.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ko.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ms.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ms.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ms.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.sw.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.sw.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.sw.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ta.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ta.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.ta.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.tr.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.tr.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.tr.png differ
diff --git a/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.zh.png b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.zh.png
new file mode 100644
index 000000000..21ac354d7
Binary files /dev/null and b/translated_images/9-feature-importance.cd3193b4bba3fd4bccd415f566c2437fb3298c4824a3dabbcab15270d783606e.zh.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.es.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.es.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.es.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.hi.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.hi.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.hi.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.it.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.it.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.it.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ja.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ja.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ja.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ka.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ka.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ka.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ko.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ko.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ko.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ms.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ms.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ms.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.sw.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.sw.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.sw.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ta.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ta.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.ta.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.tr.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.tr.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.tr.png differ
diff --git a/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.zh.png b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.zh.png
new file mode 100644
index 000000000..c4dbb1e9c
Binary files /dev/null and b/translated_images/9-features-influence.3ead3d3f68a84029f1e40d3eba82107445d3d3b6975d4682b23d8acc905da6d0.zh.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.es.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.es.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.es.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.hi.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.hi.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.hi.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.it.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.it.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.it.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ja.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ja.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ja.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ka.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ka.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ka.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ko.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ko.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ko.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ms.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ms.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ms.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.sw.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.sw.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.sw.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ta.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ta.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.ta.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.tr.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.tr.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.tr.png differ
diff --git a/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.zh.png b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.zh.png
new file mode 100644
index 000000000..3117bbc41
Binary files /dev/null and b/translated_images/ROC.167a70519c5bf8983f04e959942bb550de0fa37c220ff12c0f272d1af16e764a.zh.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.es.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.es.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.es.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.hi.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.hi.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.hi.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.it.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.it.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.it.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ja.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ja.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ja.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ka.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ka.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ka.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ko.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ko.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ko.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ms.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ms.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ms.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.sw.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.sw.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.sw.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ta.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ta.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.ta.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.tr.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.tr.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.tr.png differ
diff --git a/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.zh.png b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.zh.png
new file mode 100644
index 000000000..b5ee41043
Binary files /dev/null and b/translated_images/ROC_2.777f20cdfc4988ca683ade6850ac832cb70c96c12f1b910d294f270ef36e1a1c.zh.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.es.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.es.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.es.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.hi.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.hi.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.hi.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.it.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.it.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.it.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ja.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ja.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ja.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ka.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ka.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ka.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ko.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ko.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ko.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ms.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ms.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ms.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.sw.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.sw.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.sw.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ta.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ta.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.ta.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.tr.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.tr.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.tr.png differ
diff --git a/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.zh.png b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.zh.png
new file mode 100644
index 000000000..aa9d4d053
Binary files /dev/null and b/translated_images/accessibility.c1be5ce816eaea652fe1879bbaf74d97ef15d895ee852a7b0e3542a77b735137.zh.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.es.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.es.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.es.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.hi.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.hi.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.hi.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.it.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.it.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.it.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ja.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ja.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ja.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ka.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ka.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ka.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ko.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ko.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ko.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ms.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ms.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ms.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.sw.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.sw.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.sw.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ta.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ta.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.ta.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.tr.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.tr.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.tr.png differ
diff --git a/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.zh.png b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.zh.png
new file mode 100644
index 000000000..591e7c695
Binary files /dev/null and b/translated_images/accountability.41d8c0f4b85b6231301d97f17a450a805b7a07aaeb56b34015d71c757cad142e.zh.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.es.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.es.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.es.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.hi.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.hi.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.hi.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.it.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.it.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.it.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ja.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ja.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ja.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ka.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ka.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ka.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ko.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ko.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ko.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ms.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ms.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ms.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.sw.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.sw.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.sw.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ta.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ta.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.ta.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.tr.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.tr.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.tr.png differ
diff --git a/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.zh.png b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.zh.png
new file mode 100644
index 000000000..4fa08b2cc
Binary files /dev/null and b/translated_images/accuracy.2c47fe1bf15f44b3656651c84d5e2ba9b37cd929cd2aa8ab6cc3073f50570f4e.zh.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.es.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.es.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.es.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.hi.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.hi.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.hi.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.it.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.it.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.it.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ja.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ja.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ja.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ka.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ka.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ka.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ko.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ko.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ko.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ms.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ms.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ms.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.sw.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.sw.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.sw.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ta.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ta.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.ta.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.tr.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.tr.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.tr.png differ
diff --git a/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.zh.png b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.zh.png
new file mode 100644
index 000000000..8ab82c166
Binary files /dev/null and b/translated_images/ai-ml-ds.537ea441b124ebf69c144a52c0eb13a7af63c4355c2f92f440979380a2fb08b8.zh.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.es.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.es.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.es.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.hi.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.hi.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.hi.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.it.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.it.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.it.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ja.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ja.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ja.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ka.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ka.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ka.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ko.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ko.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ko.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ms.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ms.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ms.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.sw.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.sw.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.sw.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ta.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ta.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.ta.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.tr.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.tr.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.tr.png differ
diff --git a/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.zh.png b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.zh.png
new file mode 100644
index 000000000..88f5c2e88
Binary files /dev/null and b/translated_images/all-genres.1d56ef06cefbfcd61183023834ed3cb891a5ee638a3ba5c924b3151bf80208d7.zh.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.es.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.es.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.es.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.hi.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.hi.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.hi.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.it.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.it.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.it.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ja.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ja.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ja.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ka.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ka.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ka.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ko.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ko.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ko.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ms.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ms.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ms.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.sw.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.sw.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.sw.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ta.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ta.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.ta.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.tr.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.tr.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.tr.png differ
diff --git a/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.zh.png b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.zh.png
new file mode 100644
index 000000000..a2f8cd88e
Binary files /dev/null and b/translated_images/apple.c81c8d5965e5e5daab4a5f6d6aa08162915f2118ce0e46f2867f1a46335e874c.zh.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.es.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.es.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.es.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.hi.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.hi.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.hi.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.it.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.it.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.it.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ja.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ja.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ja.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ka.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ka.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ka.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ko.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ko.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ko.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ms.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ms.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ms.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.sw.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.sw.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.sw.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ta.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ta.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.ta.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.tr.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.tr.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.tr.png differ
diff --git a/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.zh.png b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.zh.png
new file mode 100644
index 000000000..1689371ad
Binary files /dev/null and b/translated_images/barchart.a833ea9194346d769c77a3a870f7d8aee51574cd1138ca902e5500830a41cbce.zh.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.es.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.es.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.es.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.hi.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.hi.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.hi.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.it.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.it.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.it.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ja.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ja.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ja.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ka.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ka.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ka.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ko.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ko.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ko.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ms.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ms.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ms.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.sw.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.sw.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.sw.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ta.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ta.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.ta.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.tr.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.tr.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.tr.png differ
diff --git a/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.zh.png b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.zh.png
new file mode 100644
index 000000000..d51931923
Binary files /dev/null and b/translated_images/bellman-equation.7c0c4c722e5a6b7c208071a0bae51664965050848e4f8a84bb377cd18bdd838b.zh.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.es.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.es.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.es.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.hi.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.hi.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.hi.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.it.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.it.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.it.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ja.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ja.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ja.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ka.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ka.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ka.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ko.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ko.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ko.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ms.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ms.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ms.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.sw.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.sw.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.sw.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ta.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ta.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.ta.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.tr.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.tr.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.tr.png differ
diff --git a/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.zh.png b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.zh.png
new file mode 100644
index 000000000..0a61a2ffc
Binary files /dev/null and b/translated_images/binary-multiclass.b56d0c86c81105a697dddd82242c1d11e4d78b7afefea07a44627a0f1111c1a9.zh.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.es.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.es.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.es.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.hi.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.hi.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.hi.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.it.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.it.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.it.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ja.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ja.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ja.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ka.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ka.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ka.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ko.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ko.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ko.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ms.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ms.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ms.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.sw.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.sw.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.sw.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ta.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ta.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.ta.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.tr.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.tr.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.tr.png differ
diff --git a/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.zh.png b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.zh.png
new file mode 100644
index 000000000..14982b3aa
Binary files /dev/null and b/translated_images/boxplots.8228c29dabd0f29227dd38624231a175f411f1d8d4d7c012cb770e00e4fdf8b6.zh.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.es.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.es.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.es.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.hi.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.hi.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.hi.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.it.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.it.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.it.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ja.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ja.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ja.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ka.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ka.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ka.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ko.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ko.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ko.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ms.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ms.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ms.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.sw.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.sw.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.sw.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ta.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ta.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.ta.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.tr.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.tr.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.tr.png differ
diff --git a/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.zh.png b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.zh.png
new file mode 100644
index 000000000..df42204e5
Binary files /dev/null and b/translated_images/calculation.a209813050a1ddb141cdc4bc56f3af31e67157ed499e16a2ecf9837542704c94.zh.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.es.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.es.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.es.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.hi.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.hi.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.hi.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.it.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.it.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.it.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ja.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ja.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ja.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ka.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ka.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ka.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ko.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ko.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ko.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ms.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ms.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ms.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.sw.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.sw.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.sw.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ta.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ta.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.ta.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.tr.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.tr.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.tr.png differ
diff --git a/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.zh.png b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.zh.png
new file mode 100644
index 000000000..76b66c47c
Binary files /dev/null and b/translated_images/cartpole.b5609cc0494a14f75d121299495ae24fd8f1c30465e7b40961af94ecda2e1cd0.zh.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.es.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.es.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.es.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.hi.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.hi.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.hi.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.it.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.it.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.it.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ja.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ja.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ja.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ka.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ka.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ka.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ko.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ko.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ko.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ms.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ms.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ms.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.sw.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.sw.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.sw.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ta.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ta.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.ta.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.tr.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.tr.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.tr.png differ
diff --git a/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.zh.png b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.zh.png
new file mode 100644
index 000000000..81c85893c
Binary files /dev/null and b/translated_images/centroid.097fde836cf6c9187d0b2033e9f94441829f9d86f4f0b1604dd4b3d1931aee34.zh.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.es.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.es.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.es.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.hi.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.hi.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.hi.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.it.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.it.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.it.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ja.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ja.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ja.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ka.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ka.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ka.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ko.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ko.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ko.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ms.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ms.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ms.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.sw.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.sw.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.sw.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ta.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ta.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.ta.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.tr.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.tr.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.tr.png differ
diff --git a/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.zh.png b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.zh.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.3de5d092ce8d2753d22b48605c1d936a1477081c0646c006a07e9c80a2249fe4.zh.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.es.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.es.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.es.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.hi.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.hi.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.hi.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.it.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.it.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.it.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ja.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ja.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ja.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ka.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ka.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ka.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ko.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ko.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ko.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ms.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ms.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ms.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.sw.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.sw.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.sw.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ta.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ta.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.ta.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.tr.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.tr.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.tr.png differ
diff --git a/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.zh.png b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.zh.png
new file mode 100644
index 000000000..bbb373a47
Binary files /dev/null and b/translated_images/ceos.7a9a67871424a6c07986e7c22ddae062ac660c469f6a54435196e0ae73a1c4da.zh.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.es.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.es.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.es.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.hi.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.hi.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.hi.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.it.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.it.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.it.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ja.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ja.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ja.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ka.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ka.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ka.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ko.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ko.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ko.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ms.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ms.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ms.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.sw.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.sw.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.sw.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ta.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ta.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.ta.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.tr.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.tr.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.tr.png differ
diff --git a/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.zh.png b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.zh.png
new file mode 100644
index 000000000..126b60378
Binary files /dev/null and b/translated_images/cf-what-if-features.5a92a6924da3e9b58b654c974d7560bfbfc067c123b73e98ab4935448b3f70d5.zh.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.es.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.es.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.es.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.hi.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.hi.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.hi.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.it.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.it.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.it.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ja.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ja.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ja.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ka.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ka.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ka.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ko.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ko.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ko.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ms.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ms.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ms.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.sw.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.sw.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.sw.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ta.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ta.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.ta.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.tr.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.tr.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.tr.png differ
diff --git a/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.zh.png b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.zh.png
new file mode 100644
index 000000000..685bef623
Binary files /dev/null and b/translated_images/cheatsheet.07a475ea444d22234cb8907a3826df5bdd1953efec94bd18e4496f36ff60624a.zh.png differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.es.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.es.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.es.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.hi.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.hi.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.hi.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.it.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.it.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.it.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ja.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ja.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ja.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ka.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ka.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ka.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ko.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ko.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ko.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ms.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ms.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ms.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.sw.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.sw.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.sw.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ta.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ta.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.ta.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.tr.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.tr.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.tr.jpg differ
diff --git a/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.zh.jpg b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.zh.jpg
new file mode 100644
index 000000000..afef99916
Binary files /dev/null and b/translated_images/chess.e704a268781bdad85d1876b6c2295742fa0d856e7dcf3659147052df9d3db205.zh.jpg differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.es.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.es.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.es.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.hi.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.hi.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.hi.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.it.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.it.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.it.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ja.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ja.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ja.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ka.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ka.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ka.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ko.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ko.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ko.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ms.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ms.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ms.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.sw.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.sw.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.sw.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ta.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ta.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.ta.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.tr.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.tr.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.tr.png differ
diff --git a/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.zh.png b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.zh.png
new file mode 100644
index 000000000..13cc9b4ed
Binary files /dev/null and b/translated_images/chinese.e62cafa5309f111afd1b54490336daf4e927ce32bed837069a0b7ce481dfae8d.zh.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.es.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.es.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.es.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.hi.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.hi.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.hi.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.it.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.it.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.it.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ja.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ja.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ja.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ka.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ka.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ka.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ko.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ko.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ko.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ms.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ms.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ms.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.sw.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.sw.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.sw.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ta.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ta.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.ta.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.tr.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.tr.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.tr.png differ
diff --git a/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.zh.png b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.zh.png
new file mode 100644
index 000000000..5f991e289
Binary files /dev/null and b/translated_images/clusters.b635354640d8e4fd4a49ef545495518e7be76172c97c13bd748f5b79f171f69a.zh.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.es.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.es.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.es.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.hi.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.hi.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.hi.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.it.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.it.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.it.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ja.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ja.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ja.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ka.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ka.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ka.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ko.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ko.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ko.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ms.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ms.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ms.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.sw.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.sw.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.sw.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ta.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ta.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.ta.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.tr.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.tr.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.tr.png differ
diff --git a/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.zh.png b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.zh.png
new file mode 100644
index 000000000..a64781283
Binary files /dev/null and b/translated_images/comparison.edfab56193a85e7fdecbeaa1b1f8c99e94adbf7178bed0de902090cf93d6734f.zh.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.es.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.es.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.es.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.hi.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.hi.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.hi.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.it.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.it.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.it.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ja.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ja.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ja.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ka.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ka.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ka.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ko.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ko.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ko.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ms.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ms.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ms.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.sw.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.sw.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.sw.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ta.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ta.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.ta.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.tr.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.tr.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.tr.png differ
diff --git a/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.zh.png b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.zh.png
new file mode 100644
index 000000000..aabba0910
Binary files /dev/null and b/translated_images/comprehension.619708fc5959b0f6a24ebffba2ad7b0625391a476141df65b43b59de24e45c6f.zh.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.es.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.es.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.es.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.hi.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.hi.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.hi.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.it.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.it.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.it.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ja.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ja.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ja.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ka.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ka.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ka.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ko.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ko.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ko.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ms.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ms.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ms.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.sw.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.sw.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.sw.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ta.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ta.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.ta.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.tr.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.tr.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.tr.png differ
diff --git a/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.zh.png b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.zh.png
new file mode 100644
index 000000000..5dae1c7d5
Binary files /dev/null and b/translated_images/confusion-matrix.3cc5496a1a37c3e4311e74790f15a1426e03e27af7e611aaabda56bc0a802aaf.zh.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.es.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.es.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.es.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.hi.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.hi.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.hi.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.it.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.it.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.it.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ja.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ja.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ja.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ka.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ka.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ka.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ko.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ko.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ko.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ms.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ms.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ms.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.sw.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.sw.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.sw.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ta.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ta.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.ta.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.tr.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.tr.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.tr.png differ
diff --git a/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.zh.png b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.zh.png
new file mode 100644
index 000000000..fa4dd0b42
Binary files /dev/null and b/translated_images/correlation.a9356bb798f5eea51f47185968e1ebac5c078c92fce9931e28ccf0d7fab71c2b.zh.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.es.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.es.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.es.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.hi.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.hi.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.hi.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.it.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.it.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.it.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ja.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ja.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ja.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ka.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ka.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ka.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ko.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ko.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ko.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ms.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ms.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ms.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.sw.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.sw.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.sw.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ta.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ta.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.ta.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.tr.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.tr.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.tr.png differ
diff --git a/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.zh.png b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.zh.png
new file mode 100644
index 000000000..40dd5206e
Binary files /dev/null and b/translated_images/counterfactuals-examples.b38a50a504ee0a9fc6087aba050a212a5f838adc5b0d76c5c656f8b1ccaab822.zh.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.es.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.es.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.es.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.hi.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.hi.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.hi.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.it.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.it.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.it.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ja.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ja.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ja.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ka.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ka.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ka.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ko.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ko.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ko.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ms.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ms.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ms.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.sw.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.sw.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.sw.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ta.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ta.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.ta.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.tr.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.tr.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.tr.png differ
diff --git a/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.zh.png b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.zh.png
new file mode 100644
index 000000000..97b45b02a
Binary files /dev/null and b/translated_images/cuisine-dist.d0cc2d551abe5c25f83d73a5f560927e4a061e9a4560bac1e97d35682ef3ca6d.zh.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.es.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.es.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.es.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.hi.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.hi.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.hi.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.it.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.it.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.it.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ja.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ja.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ja.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ka.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ka.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ka.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ko.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ko.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ko.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ms.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ms.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ms.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.sw.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.sw.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.sw.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ta.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ta.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.ta.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.tr.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.tr.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.tr.png differ
diff --git a/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.zh.png b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.zh.png
new file mode 100644
index 000000000..0f56c6528
Binary files /dev/null and b/translated_images/currency.e7429812bfc8c6087b2d4c410faaa4aaa11b2fcaabf6f09549b8249c9fbdb641.zh.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.es.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.es.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.es.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.hi.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.hi.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.hi.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.it.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.it.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.it.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ja.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ja.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ja.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ka.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ka.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ka.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ko.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ko.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ko.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ms.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ms.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ms.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.sw.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.sw.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.sw.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ta.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ta.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.ta.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.tr.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.tr.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.tr.png differ
diff --git a/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.zh.png b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.zh.png
new file mode 100644
index 000000000..76a1c12ad
Binary files /dev/null and b/translated_images/data-visualization.54e56dded7c1a804d00d027543f2881cb32da73aeadda2d4a4f10f3497526114.zh.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.es.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.es.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.es.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.hi.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.hi.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.hi.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.it.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.it.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.it.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ja.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ja.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ja.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ka.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ka.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ka.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ko.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ko.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ko.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ms.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ms.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ms.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.sw.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.sw.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.sw.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ta.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ta.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.ta.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.tr.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.tr.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.tr.png differ
diff --git a/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.zh.png b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.zh.png
new file mode 100644
index 000000000..6568a1d64
Binary files /dev/null and b/translated_images/dataanalysis-cover.8d6d0683a70a5c1e274e5a94b27a71137e3d0a3b707761d7170eb340dd07f11d.zh.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.es.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.es.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.es.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.hi.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.hi.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.hi.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.it.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.it.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.it.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ja.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ja.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ja.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ka.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ka.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ka.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ko.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ko.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ko.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ms.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ms.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ms.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.sw.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.sw.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.sw.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ta.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ta.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.ta.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.tr.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.tr.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.tr.png differ
diff --git a/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.zh.png b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.zh.png
new file mode 100644
index 000000000..86c6b1f1f
Binary files /dev/null and b/translated_images/datapoints.aaf6815cd5d873541b61b73b9a6ee6a53914b5d62ed2cbbedaa2e1d9a414c5c1.zh.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.es.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.es.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.es.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.hi.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.hi.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.hi.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.it.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.it.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.it.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ja.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ja.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ja.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ka.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ka.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ka.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ko.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ko.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ko.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ms.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ms.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ms.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.sw.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.sw.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.sw.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ta.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ta.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.ta.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.tr.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.tr.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.tr.png differ
diff --git a/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.zh.png b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.zh.png
new file mode 100644
index 000000000..102756eae
Binary files /dev/null and b/translated_images/distribution.9be11df42356ca958dc8e06e87865e09d77cab78f94fe4fea8a1e6796c64dc4b.zh.png differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.es.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.es.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.es.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.hi.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.hi.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.hi.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.it.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.it.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.it.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ja.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ja.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ja.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ka.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ka.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ka.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ko.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ko.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ko.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ms.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ms.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ms.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.sw.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.sw.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.sw.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ta.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ta.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.ta.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.tr.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.tr.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.tr.jpg differ
diff --git a/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.zh.jpg b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.zh.jpg
new file mode 100644
index 000000000..a0790a754
Binary files /dev/null and b/translated_images/dplyr_filter.b480b264b03439ff7051232a8de1df9a8fd4df723db316feb4f9f5e990db4318.zh.jpg differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.es.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.es.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.es.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.hi.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.hi.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.hi.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.it.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.it.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.it.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ja.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ja.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ja.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ka.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ka.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ka.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ko.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ko.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ko.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ms.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ms.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ms.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.sw.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.sw.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.sw.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ta.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ta.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.ta.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.tr.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.tr.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.tr.png differ
diff --git a/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.zh.png b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.zh.png
new file mode 100644
index 000000000..6f6a9f3c7
Binary files /dev/null and b/translated_images/dplyr_wrangling.f5f99c64fd4580f1377fee3ea428b6f8fd073845ec0f8409d483cfe148f0984e.zh.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.es.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.es.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.es.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.hi.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.hi.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.hi.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.it.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.it.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.it.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ja.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ja.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ja.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ka.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ka.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ka.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ko.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ko.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ko.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ms.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ms.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ms.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.sw.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.sw.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.sw.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ta.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ta.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.ta.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.tr.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.tr.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.tr.png differ
diff --git a/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.zh.png b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.zh.png
new file mode 100644
index 000000000..1f3d2840b
Binary files /dev/null and b/translated_images/ea-error-cohort.6886209ea5d438c4daa8bfbf5ce3a7042586364dd3eccda4a4e3d05623ac702a.zh.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.es.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.es.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.es.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.hi.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.hi.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.hi.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.it.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.it.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.it.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ja.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ja.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ja.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ka.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ka.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ka.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ko.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ko.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ko.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ms.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ms.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ms.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.sw.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.sw.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.sw.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ta.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ta.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.ta.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.tr.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.tr.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.tr.png differ
diff --git a/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.zh.png b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.zh.png
new file mode 100644
index 000000000..ddf5156e2
Binary files /dev/null and b/translated_images/ea-error-distribution.117452e1177c1dd84fab2369967a68bcde787c76c6ea7fdb92fcf15d1fce8206.zh.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.es.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.es.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.es.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.hi.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.hi.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.hi.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.it.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.it.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.it.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ja.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ja.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ja.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ka.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ka.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ka.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ko.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ko.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ko.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ms.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ms.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ms.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.sw.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.sw.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.sw.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ta.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ta.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.ta.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.tr.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.tr.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.tr.png differ
diff --git a/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.zh.png b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.zh.png
new file mode 100644
index 000000000..ab9ecf66d
Binary files /dev/null and b/translated_images/ea-heatmap.8d27185e28cee3830c85e1b2e9df9d2d5e5c8c940f41678efdb68753f2f7e56c.zh.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.es.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.es.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.es.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.hi.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.hi.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.hi.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.it.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.it.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.it.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ja.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ja.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ja.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ka.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ka.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ka.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ko.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ko.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ko.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ms.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ms.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ms.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.sw.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.sw.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.sw.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ta.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ta.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.ta.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.tr.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.tr.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.tr.png differ
diff --git a/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.zh.png b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.zh.png
new file mode 100644
index 000000000..1528be449
Binary files /dev/null and b/translated_images/elbow.72676169eed744ff03677e71334a16c6b8f751e9e716e3d7f40dd7cdef674cca.zh.png differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.es.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.es.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.es.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.hi.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.hi.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.hi.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.it.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.it.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.it.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ja.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ja.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ja.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ka.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ka.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ka.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ko.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ko.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ko.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ms.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ms.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ms.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.sw.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.sw.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.sw.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ta.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ta.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.ta.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.tr.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.tr.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.tr.jpg differ
diff --git a/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.zh.jpg b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.zh.jpg
new file mode 100644
index 000000000..9616118e7
Binary files /dev/null and b/translated_images/electric-grid.0c21d5214db09ffae93c06a87ca2abbb9ba7475ef815129c5b423d7f9a7cf136.zh.jpg differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.es.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.es.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.es.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.hi.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.hi.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.hi.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.it.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.it.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.it.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ja.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ja.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ja.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ka.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ka.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ka.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ko.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ko.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ko.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ms.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ms.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ms.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.sw.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.sw.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.sw.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ta.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ta.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.ta.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.tr.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.tr.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.tr.png differ
diff --git a/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.zh.png b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.zh.png
new file mode 100644
index 000000000..334858f4b
Binary files /dev/null and b/translated_images/eliza.84397454cda9559bb5ec296b5b8fff067571c0cccc5405f9c1ab1c3f105c075c.zh.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.es.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.es.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.es.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.hi.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.hi.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.hi.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.it.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.it.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.it.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ja.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ja.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ja.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ka.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ka.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ka.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ko.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ko.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ko.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ms.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ms.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ms.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.sw.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.sw.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.sw.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ta.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ta.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.ta.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.tr.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.tr.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.tr.png differ
diff --git a/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.zh.png b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.zh.png
new file mode 100644
index 000000000..459dcf565
Binary files /dev/null and b/translated_images/embedding.2cf8953c4b3101d188c2f61a5de5b6f53caaa5ad4ed99236d42bc3b6bd6a1fe2.zh.png differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.es.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.es.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.es.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.hi.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.hi.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.hi.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.it.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.it.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.it.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ja.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ja.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ja.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ka.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ka.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ka.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ko.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ko.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ko.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ms.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ms.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ms.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.sw.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.sw.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.sw.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ta.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ta.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.ta.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.tr.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.tr.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.tr.jpg differ
diff --git a/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.zh.jpg b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.zh.jpg
new file mode 100644
index 000000000..7e685a520
Binary files /dev/null and b/translated_images/encouRage.e75d5fe0367fb9136b78104baf4e2032a7622bc42a2bc34c0ad36c294eeb83f5.zh.jpg differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.es.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.es.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.es.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.hi.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.hi.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.hi.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.it.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.it.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.it.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ja.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ja.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ja.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ka.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ka.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ka.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ko.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ko.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ko.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ms.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ms.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ms.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.sw.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.sw.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.sw.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ta.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ta.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.ta.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.tr.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.tr.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.tr.png differ
diff --git a/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.zh.png b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.zh.png
new file mode 100644
index 000000000..9826bdb57
Binary files /dev/null and b/translated_images/energy-plot.5fdac3f397a910bc6070602e9e45bea8860d4c239354813fa8fc3c9d556f5bad.zh.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.es.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.es.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.es.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.hi.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.hi.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.hi.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.it.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.it.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.it.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ja.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ja.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ja.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ka.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ka.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ka.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ko.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ko.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ko.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ms.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ms.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ms.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.sw.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.sw.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.sw.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ta.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ta.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.ta.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.tr.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.tr.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.tr.png differ
diff --git a/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.zh.png b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.zh.png
new file mode 100644
index 000000000..15f84e910
Binary files /dev/null and b/translated_images/env_init.04e8f26d2d60089e128f21d22e5fef57d580e559f0d5937b06c689e5e7cdd438.zh.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.es.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.es.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.es.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.hi.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.hi.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.hi.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.it.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.it.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.it.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ja.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ja.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ja.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ka.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ka.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ka.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ko.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ko.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ko.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ms.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ms.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ms.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.sw.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.sw.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.sw.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ta.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ta.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.ta.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.tr.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.tr.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.tr.png differ
diff --git a/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.zh.png b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.zh.png
new file mode 100644
index 000000000..f340cf46f
Binary files /dev/null and b/translated_images/environment.40ba3cb66256c93fa7e92f6f7214e1d1f588aafa97d266c11d108c5c5d101b6c.zh.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.es.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.es.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.es.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.hi.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.hi.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.hi.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.it.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.it.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.it.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ja.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ja.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ja.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ka.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ka.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ka.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ko.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ko.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ko.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ms.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ms.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ms.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.sw.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.sw.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.sw.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ta.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ta.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.ta.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.tr.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.tr.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.tr.png differ
diff --git a/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.zh.png b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.zh.png
new file mode 100644
index 000000000..891e0750e
Binary files /dev/null and b/translated_images/escape.18862db9930337e3fce23a9b6a76a06445f229dadea2268e12a6f0a1fde12115.zh.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.es.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.es.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.es.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.hi.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.hi.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.hi.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.it.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.it.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.it.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ja.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ja.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ja.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ka.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ka.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ka.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ko.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ko.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ko.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ms.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ms.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ms.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.sw.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.sw.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.sw.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ta.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ta.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.ta.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.tr.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.tr.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.tr.png differ
diff --git a/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.zh.png b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.zh.png
new file mode 100644
index 000000000..e9871d987
Binary files /dev/null and b/translated_images/facetgrid.9b2e65ce707eba1f983b7cdfed5d952e60f385947afa3011df6e3cc7d200eb5b.zh.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.es.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.es.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.es.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.hi.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.hi.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.hi.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.it.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.it.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.it.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ja.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ja.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ja.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ka.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ka.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ka.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ko.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ko.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ko.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ms.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ms.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ms.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.sw.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.sw.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.sw.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ta.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ta.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.ta.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.tr.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.tr.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.tr.png differ
diff --git a/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.zh.png b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.zh.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.25d7c8ce9817272d25dd0e2b42a6addf7d3b8241cb6c3088fa9fc3eb7227781d.zh.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.es.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.es.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.es.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.hi.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.hi.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.hi.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.it.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.it.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.it.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ja.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ja.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ja.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ka.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ka.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ka.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ko.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ko.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ko.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ms.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ms.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ms.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.sw.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.sw.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.sw.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ta.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ta.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.ta.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.tr.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.tr.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.tr.png differ
diff --git a/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.zh.png b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.zh.png
new file mode 100644
index 000000000..9a9d55f1d
Binary files /dev/null and b/translated_images/fairness.b9f9893a4e3dc28bec350a714555c3be39040c3fe7e0aa4da10bb8e3c54a1cc9.zh.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.es.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.es.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.es.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.hi.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.hi.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.hi.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.it.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.it.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.it.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ja.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ja.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ja.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ka.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ka.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ka.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ko.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ko.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ko.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ms.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ms.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ms.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.sw.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.sw.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.sw.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ta.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ta.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.ta.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.tr.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.tr.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.tr.png differ
diff --git a/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.zh.png b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.zh.png
new file mode 100644
index 000000000..26e0ae439
Binary files /dev/null and b/translated_images/favicon.37b561214b36d454f9fd1f725d77f310fe256eb88f2a0ae08b9cb18aeb30650c.zh.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.es.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.es.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.es.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.hi.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.hi.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.hi.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.it.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.it.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.it.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ja.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ja.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ja.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ka.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ka.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ka.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ko.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ko.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ko.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ms.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ms.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ms.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.sw.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.sw.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.sw.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ta.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ta.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.ta.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.tr.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.tr.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.tr.png differ
diff --git a/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.zh.png b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.zh.png
new file mode 100644
index 000000000..a1648bf77
Binary files /dev/null and b/translated_images/flat-nonflat.d1c8c6e2a96110c1d57fa0b72913f6aab3c245478524d25baf7f4a18efcde224.zh.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.es.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.es.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.es.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.hi.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.hi.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.hi.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.it.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.it.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.it.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ja.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ja.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ja.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ka.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ka.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ka.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ko.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ko.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ko.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ms.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ms.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ms.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.sw.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.sw.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.sw.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ta.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ta.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.ta.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.tr.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.tr.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.tr.png differ
diff --git a/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.zh.png b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.zh.png
new file mode 100644
index 000000000..23d0506f5
Binary files /dev/null and b/translated_images/full-data-predict.4f0fed16a131c8f3bcc57a3060039dc7f2f714a05b07b68c513e0fe7fb3d8964.zh.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.es.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.es.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.es.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.hi.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.hi.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.hi.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.it.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.it.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.it.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ja.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ja.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ja.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ka.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ka.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ka.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ko.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ko.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ko.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ms.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ms.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ms.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.sw.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.sw.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.sw.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ta.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ta.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.ta.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.tr.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.tr.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.tr.png differ
diff --git a/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.zh.png b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.zh.png
new file mode 100644
index 000000000..c1029876e
Binary files /dev/null and b/translated_images/full-data.a82ec9957e580e976f651a4fc38f280b9229c6efdbe3cfe7c60abaa9486d2cbe.zh.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.es.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.es.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.es.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.hi.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.hi.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.hi.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.it.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.it.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.it.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ja.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ja.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ja.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ka.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ka.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ka.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ko.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ko.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ko.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ms.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ms.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ms.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.sw.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.sw.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.sw.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ta.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ta.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.ta.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.tr.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.tr.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.tr.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.zh.png b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.zh.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.bfd87c45da23c08526ec072e397d571d96b6051c8b538600b1ada80289d6ac58.zh.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.es.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.es.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.es.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.hi.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.hi.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.hi.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.it.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.it.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.it.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ja.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ja.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ja.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ka.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ka.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ka.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ko.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ko.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ko.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ms.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ms.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ms.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.sw.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.sw.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.sw.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ta.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ta.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.ta.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.tr.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.tr.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.tr.png differ
diff --git a/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.zh.png b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.zh.png
new file mode 100644
index 000000000..253ddfd62
Binary files /dev/null and b/translated_images/gender-bias-translate-en-tr.f185fd8822c2d4372912f2b690f6aaddd306ffbb49d795ad8d12a4bf141e7af0.zh.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.es.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.es.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.es.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.hi.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.hi.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.hi.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.it.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.it.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.it.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ja.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ja.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ja.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ka.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ka.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ka.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ko.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ko.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ko.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ms.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ms.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ms.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.sw.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.sw.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.sw.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ta.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ta.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.ta.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.tr.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.tr.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.tr.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.zh.png b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.zh.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.1f97568ba9e40e20eb5b40e8538fc38994b794597d2e446f8e43cf40a4baced9.zh.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.es.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.es.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.es.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.hi.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.hi.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.hi.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.it.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.it.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.it.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ja.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ja.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ja.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ka.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ka.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ka.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ko.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ko.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ko.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ms.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ms.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ms.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.sw.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.sw.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.sw.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ta.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ta.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.ta.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.tr.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.tr.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.tr.png differ
diff --git a/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.zh.png b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.zh.png
new file mode 100644
index 000000000..c0847d32d
Binary files /dev/null and b/translated_images/gender-bias-translate-tr-en.4eee7e3cecb8c70e13a8abbc379209bc8032714169e585bdeac75af09b1752aa.zh.png differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.es.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.es.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.es.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.hi.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.hi.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.hi.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.it.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.it.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.it.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ja.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ja.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ja.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ka.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ka.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ka.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ko.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ko.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ko.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ms.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ms.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ms.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.sw.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.sw.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.sw.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ta.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ta.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.ta.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.tr.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.tr.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.tr.jpg differ
diff --git a/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.zh.jpg b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.zh.jpg
new file mode 100644
index 000000000..31ba4b334
Binary files /dev/null and b/translated_images/globe.59f26379ceb40428672b4d9a568044618a2bf6292ecd53a5c481b90e3fa805eb.zh.jpg differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.es.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.es.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.es.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.hi.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.hi.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.hi.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.it.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.it.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.it.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ja.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ja.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ja.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ka.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ka.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ka.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ko.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ko.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ko.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ms.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ms.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ms.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.sw.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.sw.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.sw.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ta.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ta.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.ta.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.tr.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.tr.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.tr.png differ
diff --git a/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.zh.png b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.zh.png
new file mode 100644
index 000000000..bde6517f7
Binary files /dev/null and b/translated_images/grid.464370ad00f3696ce81c7488a963158b69d3b1cfd3f020c58a28360e5cf4239c.zh.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.es.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.es.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.es.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.hi.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.hi.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.hi.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.it.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.it.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.it.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ja.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ja.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ja.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ka.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ka.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ka.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ko.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ko.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ko.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ms.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ms.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ms.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.sw.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.sw.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.sw.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ta.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ta.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.ta.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.tr.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.tr.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.tr.png differ
diff --git a/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.zh.png b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.zh.png
new file mode 100644
index 000000000..bc2472735
Binary files /dev/null and b/translated_images/heatmap.39952045da50b4eb206764735021552f31cff773a79997ece7481fe614897a25.zh.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.es.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.es.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.es.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.hi.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.hi.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.hi.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.it.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.it.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.it.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ja.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ja.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ja.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ka.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ka.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ka.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ko.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ko.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ko.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ms.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ms.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ms.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.sw.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.sw.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.sw.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ta.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ta.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.ta.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.tr.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.tr.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.tr.png differ
diff --git a/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.zh.png b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.zh.png
new file mode 100644
index 000000000..eb84fc9a7
Binary files /dev/null and b/translated_images/hierarchical.bf59403aa43c8c47493bfdf1cc25230f26e45f4e38a3d62e8769cd324129ac15.zh.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.es.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.es.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.es.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.hi.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.hi.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.hi.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.it.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.it.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.it.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ja.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ja.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ja.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ka.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ka.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ka.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ko.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ko.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ko.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ms.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ms.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ms.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.sw.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.sw.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.sw.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ta.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ta.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.ta.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.tr.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.tr.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.tr.png differ
diff --git a/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.zh.png b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.zh.png
new file mode 100644
index 000000000..3070781f6
Binary files /dev/null and b/translated_images/human.e3840390a2ab76901f465c17f568637801ab0df39d7c3fdcb6a112b0c74c6288.zh.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.es.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.es.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.es.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.hi.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.hi.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.hi.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.it.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.it.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.it.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ja.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ja.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ja.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ka.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ka.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ka.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ko.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ko.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ko.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ms.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ms.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ms.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.sw.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.sw.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.sw.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ta.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ta.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.ta.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.tr.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.tr.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.tr.png differ
diff --git a/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.zh.png b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.zh.png
new file mode 100644
index 000000000..d0e31337f
Binary files /dev/null and b/translated_images/hype.07183d711a17aafe70915909a0e45aa286ede136ee9424d418026ab00fec344c.zh.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.es.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.es.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.es.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.hi.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.hi.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.hi.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.it.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.it.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.it.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ja.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ja.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ja.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ka.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ka.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ka.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ko.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ko.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ko.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ms.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ms.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ms.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.sw.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.sw.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.sw.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ta.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ta.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.ta.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.tr.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.tr.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.tr.png differ
diff --git a/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.zh.png b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.zh.png
new file mode 100644
index 000000000..ce4e65402
Binary files /dev/null and b/translated_images/indian.2c4292002af1a1f97a4a24fec6b1459ee8ff616c3822ae56bb62b9903e192af6.zh.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.es.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.es.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.es.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.hi.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.hi.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.hi.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.it.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.it.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.it.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ja.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ja.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ja.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ka.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ka.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ka.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ko.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ko.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ko.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ms.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ms.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ms.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.sw.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.sw.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.sw.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ta.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ta.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.ta.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.tr.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.tr.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.tr.png differ
diff --git a/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.zh.png b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.zh.png
new file mode 100644
index 000000000..17be6837a
Binary files /dev/null and b/translated_images/individual-causal-what-if.00e7b86b52a083cea6344c73c76463e9d41e0fe44fecd6f48671cb2a2d280d81.zh.png differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.es.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.es.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.es.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.hi.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.hi.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.hi.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.it.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.it.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.it.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ja.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ja.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ja.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ka.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ka.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ka.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ko.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ko.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ko.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ms.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ms.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ms.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.sw.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.sw.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.sw.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ta.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ta.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.ta.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.tr.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.tr.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.tr.jpg differ
diff --git a/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.zh.jpg b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.zh.jpg
new file mode 100644
index 000000000..97b3aa950
Binary files /dev/null and b/translated_images/jack-o-lanterns.181c661a9212457d7756f37219f660f1358af27554d856e5a991f16b4e15337c.zh.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.es.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.es.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.es.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.hi.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.hi.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.hi.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.it.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.it.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.it.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ja.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ja.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ja.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ka.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ka.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ka.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ko.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ko.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ko.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ms.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ms.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ms.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.sw.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.sw.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.sw.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ta.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ta.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.ta.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.tr.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.tr.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.tr.jpg differ
diff --git a/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.zh.jpg b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.zh.jpg
new file mode 100644
index 000000000..cccf08c43
Binary files /dev/null and b/translated_images/janitor.e4a77dd3d3e6a32e25327090b8a9c00dc7cf459c44fa9f184c5ecb0d48ce3794.zh.jpg differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.es.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.es.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.es.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.hi.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.hi.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.hi.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.it.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.it.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.it.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ja.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ja.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ja.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ka.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ka.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ka.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ko.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ko.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ko.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ms.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ms.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ms.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.sw.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.sw.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.sw.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ta.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ta.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.ta.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.tr.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.tr.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.tr.png differ
diff --git a/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.zh.png b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.zh.png
new file mode 100644
index 000000000..cfdf5122d
Binary files /dev/null and b/translated_images/japanese.30260486f2a05c463c8faa62ebe7b38f0961ed293bd9a6db8eef5d3f0cf17155.zh.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.es.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.es.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.es.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.hi.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.hi.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.hi.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.it.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.it.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.it.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ja.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ja.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ja.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ka.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ka.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ka.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ko.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ko.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ko.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ms.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ms.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ms.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.sw.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.sw.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.sw.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ta.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ta.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.ta.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.tr.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.tr.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.tr.png differ
diff --git a/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.zh.png b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.zh.png
new file mode 100644
index 000000000..fe1c23bf3
Binary files /dev/null and b/translated_images/july-2014.9e1f7c318ec6d5b30b0d7e1e20be3643501f64a53f3d426d7c7d7b62addb335e.zh.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.es.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.es.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.es.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.hi.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.hi.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.hi.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.it.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.it.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.it.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ja.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ja.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ja.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ka.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ka.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ka.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ko.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ko.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ko.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ms.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ms.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ms.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.sw.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.sw.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.sw.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ta.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ta.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.ta.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.tr.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.tr.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.tr.png differ
diff --git a/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.zh.png b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.zh.png
new file mode 100644
index 000000000..1cdc61d30
Binary files /dev/null and b/translated_images/korean.4a4f0274f3d9805a65e61f05597eeaad8620b03be23a2c0a705c023f65fad2c0.zh.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.es.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.es.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.es.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.hi.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.hi.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.hi.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.it.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.it.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.it.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ja.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ja.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ja.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ka.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ka.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ka.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ko.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ko.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ko.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ms.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ms.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ms.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.sw.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.sw.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.sw.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ta.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ta.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.ta.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.tr.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.tr.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.tr.png differ
diff --git a/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.zh.png b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.zh.png
new file mode 100644
index 000000000..0fc22438a
Binary files /dev/null and b/translated_images/learned.ed28bcd8484b5287a31925c96c43b43e2c2bb876b8ca41a0e1e754f77bb3db20.zh.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.es.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.es.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.es.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.hi.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.hi.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.hi.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.it.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.it.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.it.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ja.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ja.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ja.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ka.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ka.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ka.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ko.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ko.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ko.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ms.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ms.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ms.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.sw.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.sw.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.sw.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ta.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ta.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.ta.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.tr.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.tr.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.tr.png differ
diff --git a/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.zh.png b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.zh.png
new file mode 100644
index 000000000..7e7da0343
Binary files /dev/null and b/translated_images/linear-polynomial.5523c7cb6576ccab0fecbd0e3505986eb2d191d9378e785f82befcf3a578a6e7.zh.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.es.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.es.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.es.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.hi.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.hi.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.hi.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.it.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.it.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.it.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ja.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ja.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ja.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ka.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ka.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ka.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ko.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ko.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ko.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ms.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ms.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ms.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.sw.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.sw.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.sw.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ta.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ta.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.ta.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.tr.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.tr.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.tr.png differ
diff --git a/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.zh.png b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.zh.png
new file mode 100644
index 000000000..2b636a72e
Binary files /dev/null and b/translated_images/linear-results.f7c3552c85b0ed1ce2808276c870656733f6878c8fd37ec220812ee77686c3ef.zh.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.es.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.es.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.es.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.hi.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.hi.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.hi.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.it.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.it.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.it.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ja.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ja.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ja.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ka.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ka.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ka.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ko.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ko.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ko.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ms.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ms.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ms.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.sw.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.sw.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.sw.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ta.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ta.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.ta.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.tr.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.tr.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.tr.png differ
diff --git a/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.zh.png b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.zh.png
new file mode 100644
index 000000000..cae05d6af
Binary files /dev/null and b/translated_images/linear-vs-logistic.ba180bf95e7ee66721ba10ebf2dac2666acbd64a88b003c83928712433a13c7d.zh.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.es.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.es.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.es.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.hi.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.hi.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.hi.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.it.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.it.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.it.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ja.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ja.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ja.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ka.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ka.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ka.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ko.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ko.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ko.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ms.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ms.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ms.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.sw.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.sw.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.sw.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ta.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ta.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.ta.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.tr.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.tr.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.tr.png differ
diff --git a/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.zh.png b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.zh.png
new file mode 100644
index 000000000..fd72fdf81
Binary files /dev/null and b/translated_images/linear.a1b0760a56132551947c85988ff1753b2bccea6c29097394744d3f8a986ac3bf.zh.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.es.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.es.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.es.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.hi.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.hi.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.hi.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.it.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.it.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.it.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ja.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ja.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ja.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ka.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ka.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ka.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ko.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ko.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ko.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ms.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ms.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ms.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.sw.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.sw.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.sw.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ta.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ta.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.ta.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.tr.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.tr.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.tr.png differ
diff --git a/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.zh.png b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.zh.png
new file mode 100644
index 000000000..73ca29080
Binary files /dev/null and b/translated_images/lobe.2fa0806408ef9923ad81b63f5094b5d832a2e52227c4f0abb9fef6e1132fde15.zh.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.es.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.es.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.es.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.hi.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.hi.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.hi.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.it.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.it.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.it.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ja.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ja.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ja.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ka.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ka.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ka.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ko.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ko.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ko.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ms.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ms.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ms.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.sw.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.sw.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.sw.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ta.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ta.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.ta.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.tr.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.tr.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.tr.png differ
diff --git a/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.zh.png b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.zh.png
new file mode 100644
index 000000000..46e4c00ec
Binary files /dev/null and b/translated_images/logistic-linear.0f2f6bb73b3134c1b1463fb22452aefe74b21b7c357ddccac31831a836dcce73.zh.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.es.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.es.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.es.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.hi.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.hi.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.hi.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.it.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.it.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.it.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ja.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ja.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ja.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ka.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ka.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ka.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ko.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ko.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ko.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ms.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ms.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ms.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.sw.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.sw.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.sw.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ta.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ta.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.ta.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.tr.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.tr.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.tr.png differ
diff --git a/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.zh.png b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.zh.png
new file mode 100644
index 000000000..d745453cd
Binary files /dev/null and b/translated_images/logistic.b0cba6b7db4d57899f5a6ae74876bd34a0bd5dc492458b80b3293e948fa46a2d.zh.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.es.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.es.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.es.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.hi.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.hi.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.hi.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.it.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.it.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.it.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ja.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ja.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ja.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ka.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ka.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ka.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ko.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ko.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ko.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ms.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ms.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ms.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.sw.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.sw.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.sw.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ta.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ta.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.ta.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.tr.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.tr.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.tr.png differ
diff --git a/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.zh.png b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.zh.png
new file mode 100644
index 000000000..3d6401011
Binary files /dev/null and b/translated_images/lpathlen.94f211521ed609400dc64c3d8423b9effc5406f33d2648d0002c14c04ba820c1.zh.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.es.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.es.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.es.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.hi.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.hi.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.hi.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.it.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.it.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.it.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ja.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ja.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ja.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ka.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ka.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ka.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ko.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ko.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ko.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ms.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ms.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ms.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.sw.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.sw.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.sw.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ta.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ta.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.ta.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.tr.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.tr.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.tr.png differ
diff --git a/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.zh.png b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.zh.png
new file mode 100644
index 000000000..ca5ced7c0
Binary files /dev/null and b/translated_images/lpathlen1.0534784add58d4ebf25c21d4a1da9bceab4f96743a35817f1b49ab963c64c572.zh.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.es.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.es.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.es.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.hi.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.hi.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.hi.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.it.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.it.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.it.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ja.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ja.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ja.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ka.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ka.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ka.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ko.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ko.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ko.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ms.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ms.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ms.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.sw.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.sw.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.sw.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ta.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ta.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.ta.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.tr.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.tr.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.tr.png differ
diff --git a/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.zh.png b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.zh.png
new file mode 100644
index 000000000..484731132
Binary files /dev/null and b/translated_images/map.e963a6a51349425ab107b38f6c7307eb4c0d0c7ccdd2e81a5e1919292bab9ac7.zh.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.es.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.es.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.es.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.hi.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.hi.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.hi.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.it.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.it.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.it.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ja.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ja.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ja.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ka.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ka.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ka.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ko.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ko.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ko.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ms.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ms.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ms.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.sw.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.sw.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.sw.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ta.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ta.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.ta.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.tr.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.tr.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.tr.png differ
diff --git a/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.zh.png b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.zh.png
new file mode 100644
index 000000000..6a061bf4d
Binary files /dev/null and b/translated_images/mape.fd87bbaf4d346846df6af88b26bf6f0926bf9a5027816d5e23e1200866e3e8a4.zh.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.es.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.es.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.es.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.hi.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.hi.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.hi.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.it.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.it.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.it.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ja.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ja.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ja.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ka.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ka.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ka.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ko.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ko.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ko.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ms.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ms.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ms.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.sw.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.sw.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.sw.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ta.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ta.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.ta.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.tr.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.tr.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.tr.png differ
diff --git a/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.zh.png b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.zh.png
new file mode 100644
index 000000000..713b55034
Binary files /dev/null and b/translated_images/ml-fairness.ef296ebec6afc98a44566d7b6c1ed18dc2bf1115c13ec679bb626028e852fa1d.zh.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.es.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.es.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.es.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.hi.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.hi.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.hi.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.it.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.it.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.it.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ja.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ja.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ja.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ka.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ka.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ka.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ko.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ko.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ko.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ms.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ms.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ms.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.sw.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.sw.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.sw.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ta.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ta.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.ta.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.tr.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.tr.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.tr.png differ
diff --git a/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.zh.png b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.zh.png
new file mode 100644
index 000000000..966c1b8c0
Binary files /dev/null and b/translated_images/ml-for-beginners-video-banner.279f2a268d2130758668f4044f8c252d42f7c0a141c2cb56294c1ccc157cdd1c.zh.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.es.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.es.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.es.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.hi.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.hi.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.hi.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.it.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.it.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.it.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ja.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ja.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ja.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ka.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ka.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ka.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ko.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ko.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ko.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ms.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ms.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ms.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.sw.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.sw.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.sw.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ta.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ta.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.ta.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.tr.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.tr.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.tr.png differ
diff --git a/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.zh.png b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.zh.png
new file mode 100644
index 000000000..2b2904a4b
Binary files /dev/null and b/translated_images/ml-for-beginners.7b65fdd1f4f4159800d88d4e11fac859dd5eb2dc500be72f788085b38ab1bccb.zh.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.es.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.es.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.es.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.hi.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.hi.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.hi.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.it.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.it.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.it.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ja.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ja.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ja.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ka.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ka.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ka.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ko.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ko.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ko.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ms.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ms.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ms.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.sw.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.sw.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.sw.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ta.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ta.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.ta.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.tr.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.tr.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.tr.png differ
diff --git a/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.zh.png b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.zh.png
new file mode 100644
index 000000000..b79ba265c
Binary files /dev/null and b/translated_images/ml-history.a1bdfd4ce1f464d9a0502f38d355ffda384c95cd5278297a46c9a391b5053bc4.zh.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.es.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.es.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.es.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.hi.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.hi.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.hi.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.it.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.it.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.it.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ja.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ja.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ja.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ka.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ka.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ka.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ko.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ko.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ko.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ms.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ms.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ms.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.sw.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.sw.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.sw.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ta.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ta.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.ta.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.tr.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.tr.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.tr.png differ
diff --git a/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.zh.png b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.zh.png
new file mode 100644
index 000000000..9bd65dae9
Binary files /dev/null and b/translated_images/ml-realworld.26ee2746716155771f8076598b6145e6533fe4a9e2e465ea745f46648cbf1b84.zh.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.es.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.es.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.es.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.hi.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.hi.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.hi.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.it.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.it.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.it.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ja.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ja.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ja.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ka.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ka.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ka.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ko.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ko.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ko.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ms.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ms.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ms.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.sw.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.sw.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.sw.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ta.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ta.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.ta.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.tr.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.tr.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.tr.png differ
diff --git a/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.zh.png b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.zh.png
new file mode 100644
index 000000000..a6a94731a
Binary files /dev/null and b/translated_images/ml-regression.4e4f70e3b3ed446e3ace348dec973e133fa5d3680fbc8412b61879507369b98d.zh.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.es.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.es.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.es.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.hi.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.hi.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.hi.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.it.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.it.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.it.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ja.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ja.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ja.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ka.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ka.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ka.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ko.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ko.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ko.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ms.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ms.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ms.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.sw.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.sw.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.sw.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ta.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ta.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.ta.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.tr.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.tr.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.tr.png differ
diff --git a/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.zh.png b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.zh.png
new file mode 100644
index 000000000..a6dfbc88c
Binary files /dev/null and b/translated_images/ml-reinforcement.94024374d63348dbb3571c343ca7ddabef72adac0b8086d47164b769ba3a8a1d.zh.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.es.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.es.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.es.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.hi.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.hi.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.hi.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.it.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.it.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.it.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ja.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ja.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ja.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ka.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ka.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ka.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ko.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ko.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ko.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ms.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ms.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ms.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.sw.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.sw.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.sw.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ta.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ta.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.ta.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.tr.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.tr.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.tr.png differ
diff --git a/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.zh.png b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.zh.png
new file mode 100644
index 000000000..d4259d60d
Binary files /dev/null and b/translated_images/ml-timeseries.fb98d25f1013fc0c59090030080b5d1911ff336427bec31dbaf1ad08193812e9.zh.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.es.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.es.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.es.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.hi.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.hi.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.hi.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.it.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.it.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.it.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ja.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ja.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ja.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ka.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ka.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ka.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ko.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ko.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ko.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ms.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ms.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ms.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.sw.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.sw.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.sw.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ta.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ta.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.ta.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.tr.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.tr.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.tr.png differ
diff --git a/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.zh.png b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.zh.png
new file mode 100644
index 000000000..673aeaca4
Binary files /dev/null and b/translated_images/model-overview-dataset-cohorts.dfa463fb527a35a0afc01b7b012fc87bf2cad756763f3652bbd810cac5d6cf33.zh.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.es.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.es.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.es.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.hi.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.hi.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.hi.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.it.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.it.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.it.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ja.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ja.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ja.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ka.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ka.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ka.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ko.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ko.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ko.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ms.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ms.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ms.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.sw.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.sw.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.sw.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ta.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ta.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.ta.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.tr.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.tr.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.tr.png differ
diff --git a/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.zh.png b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.zh.png
new file mode 100644
index 000000000..c8f514d41
Binary files /dev/null and b/translated_images/model-overview-feature-cohorts.c5104d575ffd0c80b7ad8ede7703fab6166bfc6f9125dd395dcc4ace2f522f70.zh.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.es.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.es.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.es.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.hi.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.hi.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.hi.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.it.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.it.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.it.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ja.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ja.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ja.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ka.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ka.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ka.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ko.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ko.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ko.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ms.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ms.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ms.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.sw.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.sw.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.sw.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ta.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ta.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.ta.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.tr.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.tr.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.tr.png differ
diff --git a/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.zh.png b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.zh.png
new file mode 100644
index 000000000..b8c9c5044
Binary files /dev/null and b/translated_images/monnaie.606c5fa8369d5c3b3031ef0713e2069485c87985dd475cd9056bdf4c76c1f4b8.zh.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.es.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.es.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.es.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.hi.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.hi.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.hi.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.it.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.it.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.it.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ja.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ja.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ja.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ka.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ka.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ka.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ko.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ko.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ko.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ms.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ms.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ms.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.sw.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.sw.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.sw.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ta.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ta.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.ta.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.tr.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.tr.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.tr.png differ
diff --git a/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.zh.png b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.zh.png
new file mode 100644
index 000000000..f84dfd1a0
Binary files /dev/null and b/translated_images/mountaincar.43d56e588ce581c2d035f28cf038a9af112bec043b2ef8da40ac86119b1e3a93.zh.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.es.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.es.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.es.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.hi.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.hi.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.hi.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.it.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.it.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.it.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ja.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ja.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ja.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ka.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ka.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ka.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ko.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ko.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ko.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ms.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ms.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ms.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.sw.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.sw.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.sw.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ta.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ta.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.ta.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.tr.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.tr.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.tr.png differ
diff --git a/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.zh.png b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.zh.png
new file mode 100644
index 000000000..a9d28008e
Binary files /dev/null and b/translated_images/multinomial-ordinal.944fe02295fd6cdffa68facf540d0534c6f428a5d906edc40507cda4356950ee.zh.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.es.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.es.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.es.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.hi.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.hi.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.hi.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.it.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.it.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.it.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ja.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ja.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ja.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ka.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ka.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ka.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ko.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ko.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ko.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ms.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ms.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ms.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.sw.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.sw.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.sw.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ta.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ta.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.ta.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.tr.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.tr.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.tr.png differ
diff --git a/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.zh.png b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.zh.png
new file mode 100644
index 000000000..8e9246960
Binary files /dev/null and b/translated_images/multinomial-vs-ordinal.36701b4850e37d86c9dd49f7bef93a2f94dbdb8fe03443eb68f0542f97f28f29.zh.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.es.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.es.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.es.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.hi.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.hi.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.hi.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.it.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.it.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.it.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ja.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ja.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ja.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ka.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ka.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ka.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ko.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ko.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ko.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ms.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ms.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ms.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.sw.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.sw.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.sw.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ta.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ta.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.ta.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.tr.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.tr.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.tr.png differ
diff --git a/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.zh.png b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.zh.png
new file mode 100644
index 000000000..29e55a863
Binary files /dev/null and b/translated_images/netron.a05f39410211915e0f95e2c0e8b88f41e7d13d725faf660188f3802ba5c9e831.zh.png differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.es.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.es.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.es.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.hi.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.hi.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.hi.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.it.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.it.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.it.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ja.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ja.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ja.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ka.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ka.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ka.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ko.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ko.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ko.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ms.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ms.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ms.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.sw.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.sw.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.sw.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ta.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ta.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.ta.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.tr.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.tr.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.tr.jpg differ
diff --git a/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.zh.jpg b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.zh.jpg
new file mode 100644
index 000000000..d1e4f52d6
Binary files /dev/null and b/translated_images/notebook.4a3ee31f396b88325607afda33cadcc6368de98040ff33942424260aa84d75f2.zh.jpg differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.es.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.es.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.es.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.hi.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.hi.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.hi.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.it.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.it.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.it.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ja.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ja.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ja.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ka.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ka.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ka.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ko.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ko.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ko.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ms.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ms.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ms.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.sw.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.sw.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.sw.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ta.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ta.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.ta.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.tr.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.tr.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.tr.png differ
diff --git a/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.zh.png b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.zh.png
new file mode 100644
index 000000000..14649e72f
Binary files /dev/null and b/translated_images/original.b2b15efe0ce92b8745918f071dceec2231661bf49c8db6918e3ff4b3b0b183c2.zh.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.es.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.es.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.es.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.hi.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.hi.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.hi.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.it.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.it.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.it.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ja.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ja.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ja.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ka.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ka.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ka.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ko.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ko.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ko.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ms.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ms.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ms.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.sw.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.sw.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.sw.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ta.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ta.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.ta.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.tr.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.tr.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.tr.png differ
diff --git a/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.zh.png b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.zh.png
new file mode 100644
index 000000000..65dc241c5
Binary files /dev/null and b/translated_images/overfitting.1c132d92bfd93cb63240baf63ebdf82c30e30a0a44e1ad49861b82ff600c2b5c.zh.png differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.es.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.es.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.es.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.hi.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.hi.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.hi.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.it.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.it.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.it.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ja.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ja.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ja.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ka.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ka.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ka.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ko.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ko.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ko.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ms.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ms.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ms.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.sw.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.sw.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.sw.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ta.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ta.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.ta.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.tr.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.tr.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.tr.jpg differ
diff --git a/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.zh.jpg b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.zh.jpg
new file mode 100644
index 000000000..81cca6e4c
Binary files /dev/null and b/translated_images/p&p.279f1c49ecd889419e4ce6206525e9aa30d32a976955cd24daa636c361c6391f.zh.jpg differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.es.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.es.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.es.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.hi.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.hi.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.hi.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.it.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.it.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.it.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ja.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ja.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ja.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ka.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ka.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ka.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ko.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ko.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ko.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ms.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ms.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ms.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.sw.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.sw.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.sw.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ta.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ta.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.ta.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.tr.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.tr.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.tr.png differ
diff --git a/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.zh.png b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.zh.png
new file mode 100644
index 000000000..f78d1dc36
Binary files /dev/null and b/translated_images/parse.d0c5bbe1106eae8fe7d60a183cd1736c8b6cec907f38000366535f84f3036101.zh.png differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.es.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.es.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.es.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.hi.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.hi.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.hi.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.it.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.it.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.it.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ja.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ja.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ja.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ka.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ka.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ka.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ko.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ko.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ko.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ms.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ms.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ms.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.sw.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.sw.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.sw.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ta.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ta.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.ta.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.tr.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.tr.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.tr.jpg differ
diff --git a/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.zh.jpg b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.zh.jpg
new file mode 100644
index 000000000..f25b446ec
Binary files /dev/null and b/translated_images/parsnip.cd2ce92622976502a80714e69ce67e3f2da3274a9ef5ac484c1308c5f3cb0f4a.zh.jpg differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.es.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.es.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.es.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.hi.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.hi.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.hi.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.it.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.it.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.it.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ja.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ja.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ja.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ka.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ka.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ka.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ko.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ko.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ko.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ms.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ms.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ms.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.sw.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.sw.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.sw.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ta.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ta.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.ta.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.tr.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.tr.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.tr.png differ
diff --git a/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.zh.png b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.zh.png
new file mode 100644
index 000000000..43581e5f3
Binary files /dev/null and b/translated_images/peter.779730f9ba3a8a8d9290600dcf55f2e491c0640c785af7ac0d64f583c49b8864.zh.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.es.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.es.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.es.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.hi.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.hi.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.hi.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.it.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.it.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.it.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ja.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ja.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ja.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ka.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ka.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ka.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ko.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ko.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ko.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ms.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ms.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ms.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.sw.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.sw.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.sw.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ta.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ta.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.ta.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.tr.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.tr.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.tr.png differ
diff --git a/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.zh.png b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.zh.png
new file mode 100644
index 000000000..06ab9e06b
Binary files /dev/null and b/translated_images/pie-pumpkins-scatter.d14f9804a53f927e7fe39aa072486f4ed1bdd7f31c8bb08f476855f4b02350c3.zh.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.es.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.es.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.es.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.hi.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.hi.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.hi.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.it.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.it.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.it.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ja.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ja.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ja.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ka.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ka.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ka.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ko.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ko.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ko.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ms.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ms.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ms.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.sw.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.sw.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.sw.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ta.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ta.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.ta.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.tr.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.tr.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.tr.png differ
diff --git a/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.zh.png b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.zh.png
new file mode 100644
index 000000000..e66b95652
Binary files /dev/null and b/translated_images/pinch.1b035ec9ba7e0d408313b551b60c721c9c290b2dd2094115bc87e6ddacd114c9.zh.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.es.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.es.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.es.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.hi.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.hi.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.hi.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.it.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.it.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.it.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ja.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ja.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ja.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ka.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ka.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ka.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ko.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ko.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ko.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ms.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ms.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ms.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.sw.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.sw.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.sw.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ta.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ta.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.ta.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.tr.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.tr.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.tr.png differ
diff --git a/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.zh.png b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.zh.png
new file mode 100644
index 000000000..f6ee37ec1
Binary files /dev/null and b/translated_images/poly-results.ee587348f0f1f60bd16c471321b0b2f2457d0eaa99d99ec0ced4affc900fa96c.zh.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.es.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.es.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.es.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.hi.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.hi.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.hi.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.it.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.it.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.it.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ja.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ja.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ja.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ka.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ka.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ka.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ko.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ko.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ko.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ms.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ms.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ms.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.sw.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.sw.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.sw.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ta.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ta.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.ta.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.tr.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.tr.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.tr.png differ
diff --git a/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.zh.png b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.zh.png
new file mode 100644
index 000000000..bdf4f7ffe
Binary files /dev/null and b/translated_images/polynomial.8fce4663e7283dfb9864eef62255b57cc2799e187c6d0a6dbfcf29fec6e52faa.zh.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.es.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.es.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.es.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.hi.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.hi.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.hi.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.it.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.it.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.it.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ja.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ja.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ja.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ka.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ka.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ka.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ko.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ko.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ko.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ms.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ms.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ms.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.sw.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.sw.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.sw.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ta.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ta.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.ta.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.tr.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.tr.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.tr.png differ
diff --git a/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.zh.png b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.zh.png
new file mode 100644
index 000000000..384e2d9e3
Binary files /dev/null and b/translated_images/popular.9c48d84b3386705f98bf44e26e9655bee9eb7c849d73be65195e37895bfedb5d.zh.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.es.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.es.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.es.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.hi.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.hi.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.hi.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.it.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.it.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.it.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ja.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ja.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ja.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ka.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ka.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ka.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ko.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ko.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ko.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ms.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ms.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ms.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.sw.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.sw.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.sw.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ta.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ta.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.ta.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.tr.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.tr.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.tr.png differ
diff --git a/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.zh.png b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.zh.png
new file mode 100644
index 000000000..11efe5d61
Binary files /dev/null and b/translated_images/price-by-variety.744a2f9925d9bcb43a9a8c69469ce2520c9524fabfa270b1b2422cc2450d6d11.zh.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.es.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.es.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.es.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.hi.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.hi.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.hi.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.it.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.it.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.it.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ja.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ja.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ja.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ka.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ka.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ka.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ko.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ko.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ko.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ms.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ms.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ms.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.sw.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.sw.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.sw.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ta.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ta.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.ta.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.tr.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.tr.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.tr.png differ
diff --git a/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.zh.png b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.zh.png
new file mode 100644
index 000000000..55a81a2f3
Binary files /dev/null and b/translated_images/problems.f7fb539ccd80608e1f35c319cf5e3ad1809faa3c08537aead8018c6b5ba2e33a.zh.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.es.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.es.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.es.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.hi.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.hi.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.hi.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.it.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.it.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.it.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ja.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ja.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ja.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ka.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ka.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ka.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ko.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ko.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ko.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ms.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ms.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ms.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.sw.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.sw.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.sw.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ta.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ta.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.ta.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.tr.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.tr.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.tr.png differ
diff --git a/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.zh.png b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.zh.png
new file mode 100644
index 000000000..3e303968f
Binary files /dev/null and b/translated_images/pumpkin-classifier.562771f104ad5436b87d1c67bca02a42a17841133556559325c0a0e348e5b774.zh.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.es.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.es.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.es.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.hi.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.hi.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.hi.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.it.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.it.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.it.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ja.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ja.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ja.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ka.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ka.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ka.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ko.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ko.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ko.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ms.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ms.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ms.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.sw.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.sw.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.sw.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ta.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ta.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.ta.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.tr.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.tr.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.tr.png differ
diff --git a/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.zh.png b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.zh.png
new file mode 100644
index 000000000..01a2dc518
Binary files /dev/null and b/translated_images/pumpkins_catplot_1.c55c409b71fea2ecc01921e64b91970542101f90bcccfa4aa3a205db8936f48b.zh.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.es.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.es.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.es.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.hi.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.hi.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.hi.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.it.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.it.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.it.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ja.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ja.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ja.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ka.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ka.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ka.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ko.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ko.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ko.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ms.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ms.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ms.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.sw.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.sw.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.sw.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ta.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ta.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.ta.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.tr.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.tr.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.tr.png differ
diff --git a/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.zh.png b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.zh.png
new file mode 100644
index 000000000..7de8e90f5
Binary files /dev/null and b/translated_images/pumpkins_catplot_2.87a354447880b3889278155957f8f60dd63db4598de5a6d0fda91c334d31f9f1.zh.png differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.es.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.es.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.es.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.hi.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.hi.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.hi.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.it.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.it.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.it.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ja.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ja.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ja.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ka.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ka.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ka.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ko.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ko.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ko.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ms.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ms.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ms.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.sw.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.sw.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.sw.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ta.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ta.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.ta.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.tr.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.tr.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.tr.jpeg differ
diff --git a/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.zh.jpeg b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.zh.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.cd14eb3581a9f28d32086cc042ee8c46f621a5b4e0d59c75f7c642d891327043.zh.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.es.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.es.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.es.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.hi.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.hi.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.hi.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.it.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.it.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.it.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ja.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ja.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ja.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ka.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ka.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ka.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ko.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ko.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ko.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ms.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ms.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ms.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.sw.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.sw.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.sw.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ta.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ta.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.ta.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.tr.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.tr.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.tr.jpeg differ
diff --git a/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.zh.jpeg b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.zh.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e25fa9c205b3a3f98d66476321637b48f61d9c23526309ce82d0a43e88b90f66.zh.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.es.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.es.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.es.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.hi.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.hi.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.hi.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.it.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.it.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.it.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ja.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ja.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ja.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ka.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ka.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ka.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ko.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ko.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ko.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ms.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ms.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ms.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.sw.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.sw.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.sw.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ta.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ta.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.ta.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.tr.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.tr.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.tr.jpeg differ
diff --git a/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.zh.jpeg b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.zh.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.e4a71b113ffbedfe727048ec69741a9295954195d8761c35c46f20277de5f684.zh.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.es.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.es.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.es.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.hi.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.hi.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.hi.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.it.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.it.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.it.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ja.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ja.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ja.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ka.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ka.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ka.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ko.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ko.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ko.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ms.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ms.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ms.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.sw.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.sw.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.sw.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ta.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ta.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.ta.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.tr.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.tr.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.tr.jpeg differ
diff --git a/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.zh.jpeg b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.zh.jpeg
new file mode 100644
index 000000000..2d42e2f24
Binary files /dev/null and b/translated_images/r_learners_sm.f9199f76f1e2e49304b19155ebcfb8bad375aface4625be7e95404486a48d332.zh.jpeg differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.es.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.es.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.es.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.hi.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.hi.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.hi.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.it.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.it.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.it.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ja.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ja.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ja.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ka.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ka.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ka.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ko.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ko.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ko.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ms.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ms.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ms.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.sw.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.sw.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.sw.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ta.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ta.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.ta.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.tr.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.tr.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.tr.png differ
diff --git a/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.zh.png b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.zh.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.186acfa8ed2e8f0059ce17ef22c9452d7b25e7e1e4b044573bacec9a18e040d2.zh.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.es.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.es.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.es.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.hi.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.hi.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.hi.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.it.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.it.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.it.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ja.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ja.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ja.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ka.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ka.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ka.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ko.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ko.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ko.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ms.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ms.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ms.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.sw.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.sw.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.sw.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ta.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ta.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.ta.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.tr.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.tr.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.tr.png differ
diff --git a/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.zh.png b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.zh.png
new file mode 100644
index 000000000..75cc4826e
Binary files /dev/null and b/translated_images/recipes.9ad10d8a4056bf89413fc33644924e0bd29d7c12fb2154e03a1ca3d2d6ea9323.zh.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.es.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.es.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.es.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.hi.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.hi.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.hi.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.it.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.it.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.it.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ja.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ja.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ja.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ka.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ka.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ka.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ko.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ko.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ko.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ms.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ms.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ms.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.sw.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.sw.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.sw.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ta.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ta.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.ta.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.tr.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.tr.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.tr.png differ
diff --git a/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.zh.png b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.zh.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.91897dfbaa26ca4a5f45c99aaabe79b1f1bcd1237f8124c20c0510df482e9f49.zh.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.es.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.es.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.es.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.hi.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.hi.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.hi.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.it.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.it.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.it.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ja.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ja.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ja.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ka.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ka.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ka.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ko.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ko.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ko.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ms.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ms.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ms.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.sw.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.sw.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.sw.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ta.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ta.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.ta.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.tr.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.tr.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.tr.png differ
diff --git a/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.zh.png b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.zh.png
new file mode 100644
index 000000000..a11e46fae
Binary files /dev/null and b/translated_images/scaled.e35258ca5cd3d43f86d5175e584ba96b38d51501f234abf52e11f4fe2631e45f.zh.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.es.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.es.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.es.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.hi.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.hi.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.hi.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.it.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.it.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.it.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ja.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ja.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ja.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ka.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ka.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ka.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ko.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ko.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ko.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ms.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ms.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ms.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.sw.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.sw.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.sw.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ta.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ta.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.ta.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.tr.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.tr.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.tr.png differ
diff --git a/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.zh.png b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.zh.png
new file mode 100644
index 000000000..be0bf88f9
Binary files /dev/null and b/translated_images/scatter-dayofyear-color.65790faefbb9d54fb8f6223c566c445b9fac58a1c15f41f8641c3842af9d548b.zh.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.es.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.es.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.es.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.hi.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.hi.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.hi.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.it.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.it.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.it.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ja.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ja.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ja.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ka.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ka.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ka.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ko.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ko.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ko.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ms.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ms.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ms.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.sw.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.sw.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.sw.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ta.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ta.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.ta.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.tr.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.tr.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.tr.png differ
diff --git a/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.zh.png b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.zh.png
new file mode 100644
index 000000000..38538e271
Binary files /dev/null and b/translated_images/scatter-dayofyear.bc171c189c9fd553fe93030180b9c00ed123148a577640e4d7481c4c01811972.zh.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.es.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.es.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.es.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.hi.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.hi.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.hi.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.it.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.it.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.it.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ja.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ja.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ja.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ka.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ka.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ka.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ko.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ko.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ko.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ms.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ms.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ms.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.sw.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.sw.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.sw.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ta.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ta.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.ta.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.tr.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.tr.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.tr.png differ
diff --git a/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.zh.png b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.zh.png
new file mode 100644
index 000000000..0b61bb73f
Binary files /dev/null and b/translated_images/scatterplot.ad8b356bcbb33be68d54050e09b9b7bfc03e94fde7371f2609ae43f4c563b2d7.zh.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.es.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.es.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.es.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.hi.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.hi.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.hi.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.it.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.it.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.it.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ja.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ja.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ja.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ka.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ka.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ka.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ko.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ko.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ko.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ms.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ms.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ms.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.sw.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.sw.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.sw.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ta.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ta.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.ta.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.tr.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.tr.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.tr.png differ
diff --git a/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.zh.png b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.zh.png
new file mode 100644
index 000000000..1dce2f9c3
Binary files /dev/null and b/translated_images/scatterplot.b6868f44cbd2051c6680ccdbb1510697d06a3ff6cd4abda656f5009c0ed4e3fc.zh.png differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.es.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.es.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.es.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.hi.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.hi.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.hi.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.it.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.it.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.it.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ja.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ja.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ja.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ka.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ka.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ka.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ko.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ko.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ko.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ms.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ms.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ms.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.sw.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.sw.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.sw.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ta.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ta.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.ta.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.tr.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.tr.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.tr.jpg differ
diff --git a/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.zh.jpg b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.zh.jpg
new file mode 100644
index 000000000..cfaa90905
Binary files /dev/null and b/translated_images/shakey.4dc17819c447c05bf4b52f76da0bdd28817d056fdb906252ec20124dd4cfa55e.zh.jpg differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.es.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.es.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.es.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.hi.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.hi.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.hi.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.it.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.it.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.it.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ja.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ja.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ja.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ka.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ka.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ka.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ko.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ko.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ko.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ms.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ms.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ms.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.sw.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.sw.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.sw.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ta.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ta.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.ta.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.tr.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.tr.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.tr.png differ
diff --git a/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.zh.png b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.zh.png
new file mode 100644
index 000000000..41dbdc339
Binary files /dev/null and b/translated_images/sigmoid.8b7ba9d095c789cf72780675d0d1d44980c3736617329abfc392dfc859799704.zh.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.es.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.es.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.es.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.hi.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.hi.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.hi.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.it.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.it.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.it.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ja.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ja.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ja.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ka.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ka.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ka.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ko.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ko.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ko.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ms.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ms.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ms.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.sw.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.sw.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.sw.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ta.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ta.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.ta.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.tr.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.tr.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.tr.png differ
diff --git a/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.zh.png b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.zh.png
new file mode 100644
index 000000000..024fcdc3f
Binary files /dev/null and b/translated_images/slope.f3c9d5910ddbfcf9096eb5564254ba22c9a32d7acd7694cab905d29ad8261db3.zh.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.es.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.es.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.es.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.hi.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.hi.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.hi.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.it.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.it.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.it.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ja.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ja.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ja.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ka.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ka.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ka.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ko.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ko.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ko.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ms.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ms.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ms.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.sw.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.sw.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.sw.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ta.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ta.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.ta.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.tr.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.tr.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.tr.png differ
diff --git a/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.zh.png b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.zh.png
new file mode 100644
index 000000000..9a25f6739
Binary files /dev/null and b/translated_images/solvers.5fc648618529e627dfac29b917b3ccabda4b45ee8ed41b0acb1ce1441e8d1ef1.zh.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.es.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.es.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.es.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.hi.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.hi.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.hi.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.it.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.it.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.it.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ja.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ja.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ja.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ka.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ka.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ka.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ko.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ko.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ko.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ms.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ms.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ms.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.sw.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.sw.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.sw.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ta.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ta.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.ta.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.tr.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.tr.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.tr.png differ
diff --git a/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.zh.png b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.zh.png
new file mode 100644
index 000000000..636c5f258
Binary files /dev/null and b/translated_images/svm.621ae7b516d678e08ed23af77ff1750b5fe392976917f0606861567b779e8862.zh.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.es.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.es.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.es.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.hi.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.hi.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.hi.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.it.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.it.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.it.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ja.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ja.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ja.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ka.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ka.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ka.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ko.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ko.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ko.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ms.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ms.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ms.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.sw.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.sw.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.sw.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ta.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ta.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.ta.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.tr.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.tr.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.tr.png differ
diff --git a/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.zh.png b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.zh.png
new file mode 100644
index 000000000..13510aa1f
Binary files /dev/null and b/translated_images/swarm.56d253ae80a2c0f5940dec8ed3c02e57161891ff44cc0dce5c3cb2f65a4233e7.zh.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.es.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.es.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.es.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.hi.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.hi.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.hi.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.it.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.it.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.it.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ja.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ja.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ja.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ka.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ka.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ka.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ko.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ko.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ko.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ms.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ms.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ms.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.sw.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.sw.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.sw.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ta.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ta.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.ta.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.tr.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.tr.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.tr.png differ
diff --git a/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.zh.png b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.zh.png
new file mode 100644
index 000000000..425f08cd7
Binary files /dev/null and b/translated_images/swarm_2.efeacfca536c2b577dc7b5f8891f28926663fbf62d893ab5e1278ae734ca104e.zh.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.es.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.es.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.es.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.hi.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.hi.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.hi.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.it.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.it.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.it.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ja.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ja.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ja.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ka.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ka.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ka.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ko.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ko.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ko.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ms.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ms.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ms.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.sw.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.sw.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.sw.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ta.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ta.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.ta.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.tr.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.tr.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.tr.png differ
diff --git a/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.zh.png b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.zh.png
new file mode 100644
index 000000000..1aa2d71ab
Binary files /dev/null and b/translated_images/test-data-predict.8afc47ee7e52874f514ebdda4a798647e9ecf44a97cc927c535246fcf7a28aa9.zh.png differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.es.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.es.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.es.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.hi.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.hi.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.hi.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.it.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.it.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.it.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ja.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ja.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ja.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ka.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ka.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ka.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ko.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ko.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ko.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ms.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ms.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ms.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.sw.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.sw.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.sw.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ta.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ta.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.ta.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.tr.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.tr.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.tr.jpg differ
diff --git a/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.zh.jpg b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.zh.jpg
new file mode 100644
index 000000000..d9f7349ab
Binary files /dev/null and b/translated_images/thai-food.c47a7a7f9f05c21892a1f9dc7bf30669e6d18dfda420c5c7ebb4153f6a304edd.zh.jpg differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.es.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.es.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.es.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.hi.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.hi.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.hi.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.it.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.it.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.it.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ja.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ja.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ja.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ka.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ka.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ka.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ko.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ko.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ko.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ms.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ms.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ms.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.sw.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.sw.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.sw.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ta.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ta.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.ta.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.tr.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.tr.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.tr.png differ
diff --git a/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.zh.png b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.zh.png
new file mode 100644
index 000000000..d680b55de
Binary files /dev/null and b/translated_images/thai.0269dbab2e78bd38a132067759fe980008bdb80b6d778e5313448dbe12bed846.zh.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.es.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.es.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.es.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.hi.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.hi.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.hi.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.it.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.it.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.it.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ja.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ja.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ja.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ka.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ka.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ka.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ko.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ko.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ko.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ms.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ms.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ms.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.sw.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.sw.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.sw.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ta.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ta.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.ta.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.tr.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.tr.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.tr.png differ
diff --git a/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.zh.png b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.zh.png
new file mode 100644
index 000000000..990cacf94
Binary files /dev/null and b/translated_images/tokenization.1641a160c66cd2d93d4524e8114e93158a9ce0eba3ecf117bae318e8a6ad3487.zh.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.es.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.es.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.es.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.hi.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.hi.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.hi.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.it.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.it.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.it.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ja.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ja.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ja.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ka.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ka.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ka.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ko.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ko.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ko.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ms.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ms.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ms.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.sw.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.sw.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.sw.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ta.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ta.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.ta.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.tr.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.tr.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.tr.png differ
diff --git a/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.zh.png b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.zh.png
new file mode 100644
index 000000000..253c13709
Binary files /dev/null and b/translated_images/train-data-predict.3c4ef4e78553104ffdd53d47a4c06414007947ea328e9261ddf48d3eafdefbbf.zh.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.es.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.es.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.es.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.hi.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.hi.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.hi.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.it.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.it.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.it.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ja.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ja.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ja.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ka.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ka.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ka.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ko.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ko.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ko.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ms.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ms.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ms.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.sw.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.sw.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.sw.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ta.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ta.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.ta.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.tr.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.tr.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.tr.png differ
diff --git a/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.zh.png b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.zh.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.8928d14e5b91fc942f0ca9201b2d36c890ea7e98f7619fd94f75de3a4c2bacb9.zh.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.es.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.es.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.es.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.hi.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.hi.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.hi.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.it.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.it.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.it.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ja.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ja.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ja.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ka.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ka.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ka.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ko.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ko.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ko.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ms.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ms.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ms.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.sw.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.sw.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.sw.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ta.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ta.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.ta.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.tr.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.tr.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.tr.png differ
diff --git a/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.zh.png b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.zh.png
new file mode 100644
index 000000000..1149b1644
Binary files /dev/null and b/translated_images/train-test.ead0cecbfc341921d4875eccf25fed5eefbb860cdbb69cabcc2276c49e4b33e5.zh.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.es.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.es.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.es.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.hi.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.hi.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.hi.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.it.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.it.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.it.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ja.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ja.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ja.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ka.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ka.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ka.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ko.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ko.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ko.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ms.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ms.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ms.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.sw.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.sw.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.sw.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ta.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ta.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.ta.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.tr.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.tr.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.tr.png differ
diff --git a/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.zh.png b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.zh.png
new file mode 100644
index 000000000..b995b24b9
Binary files /dev/null and b/translated_images/train_progress_raw.2adfdf2daea09c596fc786fa347a23e9aceffe1b463e2257d20a9505794823ec.zh.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.es.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.es.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.es.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.hi.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.hi.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.hi.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.it.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.it.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.it.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ja.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ja.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ja.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ka.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ka.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ka.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ko.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ko.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ko.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ms.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ms.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ms.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.sw.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.sw.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.sw.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ta.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ta.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.ta.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.tr.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.tr.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.tr.png differ
diff --git a/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.zh.png b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.zh.png
new file mode 100644
index 000000000..f5527d750
Binary files /dev/null and b/translated_images/train_progress_runav.c71694a8fa9ab35935aff6f109e5ecdfdbdf1b0ae265da49479a81b5fae8f0aa.zh.png differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.es.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.es.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.es.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.hi.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.hi.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.hi.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.it.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.it.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.it.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ja.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ja.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ja.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ka.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ka.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ka.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ko.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ko.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ko.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ms.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ms.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ms.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.sw.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.sw.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.sw.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ta.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ta.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.ta.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.tr.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.tr.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.tr.jpg differ
diff --git a/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.zh.jpg b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.zh.jpg
new file mode 100644
index 000000000..631db5fad
Binary files /dev/null and b/translated_images/turntable.f2b86b13c53302dc106aa741de9dc96ac372864cf458dd6f879119857aab01da.zh.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.es.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.es.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.es.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.hi.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.hi.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.hi.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.it.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.it.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.it.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ja.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ja.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ja.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ka.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ka.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ka.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ko.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ko.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ko.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ms.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ms.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ms.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.sw.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.sw.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.sw.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ta.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ta.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.ta.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.tr.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.tr.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.tr.jpg differ
diff --git a/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.zh.jpg b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.zh.jpg
new file mode 100644
index 000000000..0db92ac89
Binary files /dev/null and b/translated_images/ufo.9e787f5161da9d4d1dafc537e1da09be8210f2ee996cb638aa5cee1d92867a04.zh.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.es.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.es.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.es.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.hi.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.hi.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.hi.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.it.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.it.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.it.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ja.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ja.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ja.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ka.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ka.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ka.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ko.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ko.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ko.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ms.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ms.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ms.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.sw.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.sw.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.sw.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ta.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ta.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.ta.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.tr.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.tr.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.tr.jpg differ
diff --git a/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.zh.jpg b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.zh.jpg
new file mode 100644
index 000000000..d09c41c8c
Binary files /dev/null and b/translated_images/unruly_data.0eedc7ced92d2d919cf5ea197bfe0fe9a30780c4bf7cdcf14ff4e9dc5a4c7267.zh.jpg differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.es.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.es.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.es.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.hi.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.hi.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.hi.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.it.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.it.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.it.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ja.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ja.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ja.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ka.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ka.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ka.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ko.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ko.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ko.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ms.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ms.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ms.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.sw.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.sw.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.sw.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ta.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ta.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.ta.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.tr.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.tr.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.tr.png differ
diff --git a/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.zh.png b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.zh.png
new file mode 100644
index 000000000..935f6530f
Binary files /dev/null and b/translated_images/violin.ffceb68923177011dc8f1ae08f78297c69f2b868d82fa4e754cc923b185d4f7d.zh.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.es.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.es.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.es.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.hi.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.hi.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.hi.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.it.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.it.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.it.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ja.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ja.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ja.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ka.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ka.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ka.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ko.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ko.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ko.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ms.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ms.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ms.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.sw.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.sw.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.sw.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ta.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ta.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.ta.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.tr.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.tr.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.tr.png differ
diff --git a/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.zh.png b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.zh.png
new file mode 100644
index 000000000..e4db5db46
Binary files /dev/null and b/translated_images/voronoi.1dc1613fb0439b9564615eca8df47a4bcd1ce06217e7e72325d2406ef2180795.zh.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.es.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.es.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.es.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.hi.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.hi.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.hi.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.it.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.it.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.it.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ja.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ja.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ja.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ka.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ka.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ka.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ko.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ko.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ko.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ms.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ms.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ms.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.sw.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.sw.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.sw.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ta.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ta.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.ta.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.tr.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.tr.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.tr.png differ
diff --git a/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.zh.png b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.zh.png
new file mode 100644
index 000000000..ebb533ea9
Binary files /dev/null and b/translated_images/web-app.4c76450cabe20036f8ec6d5e05ccc0c1c064f0d8f2fe3304d3bcc0198f7dc139.zh.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.es.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.es.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.es.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.hi.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.hi.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.hi.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.it.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.it.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.it.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ja.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ja.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ja.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ka.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ka.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ka.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ko.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ko.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ko.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ms.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ms.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ms.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.sw.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.sw.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.sw.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ta.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ta.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.ta.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.tr.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.tr.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.tr.png differ
diff --git a/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.zh.png b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.zh.png
new file mode 100644
index 000000000..a7f831a76
Binary files /dev/null and b/translated_images/wolf.a56d3d4070ca0c79007b28aa2203a1801ebd496f242525381225992ece6c369d.zh.png differ
diff --git a/translations/es/1-Introduction/1-intro-to-ML/README.md b/translations/es/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..338ba6ba3
--- /dev/null
+++ b/translations/es/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# Introduction to machine learning
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "ML for beginners - Introduction to Machine Learning for Beginners")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+Welcome to this course on classical machine learning for beginners! Whether you're completely new to this topic or an experienced ML practitioner looking to brush up on an area, we're happy to have you join us! We want to create a friendly launching spot for your ML study and would be glad to evaluate, respond to, and incorporate your [feedback](https://github.com/microsoft/ML-For-Beginners/discussions).
+
+[](https://youtu.be/h0e2HAPTGF4 "Intro to ML")
+
+> 🎥 Click the image above for a video: MIT's John Guttag introduces machine learning
+
+---
+## Getting started with machine learning
+
+Before starting this curriculum, you need to have your computer set up and ready to run notebooks locally.
+
+- **Configure your machine with these videos**. Use the following links to learn [how to install Python](https://youtu.be/CXZYvNRIAKM) on your system and [set up a text editor](https://youtu.be/EU8eayHWoZg) for development.
+- **Learn Python**. A basic understanding of [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), a programming language useful for data scientists that we use in this course, is also recommended.
+- **Learn Node.js and JavaScript**. We also use JavaScript a few times in this course when building web apps, so you will need [node](https://nodejs.org) and [npm](https://www.npmjs.com/) installed, as well as [Visual Studio Code](https://code.visualstudio.com/) available for both Python and JavaScript development.
+- **Create a GitHub account**. Since you found us here on [GitHub](https://github.com), you might already have an account, but if not, create one and then fork this curriculum to use on your own. (Feel free to give us a star, too 😊)
+- **Explore Scikit-learn**. Familiarize yourself with [Scikit-learn](https://scikit-learn.org/stable/user_guide.html), a set of ML libraries that we reference in these lessons; a short sketch of its basic workflow follows this list.
+
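+If you want to confirm that your environment is ready, here is a minimal sketch of the load-split-fit-score loop that recurs throughout these lessons. The built-in iris dataset and the decision tree are illustrative choices for this check, not part of any particular lesson:
+
+```python
+# Minimal scikit-learn workflow: load data, split it, fit a model, score it.
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+from sklearn.tree import DecisionTreeClassifier
+
+X, y = load_iris(return_X_y=True)  # features and labels of a built-in toy dataset
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=0)  # hold out 20% of rows for evaluation
+
+model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
+print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
+```
+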
+---
+## What is machine learning?
+
+The term 'machine learning' is one of the most popular and frequently used terms of today. There is a nontrivial possibility that you have heard this term at least once if you have any familiarity with technology, no matter what domain you work in. The mechanics of machine learning, however, are a mystery to most people. For a machine learning beginner, the subject can sometimes feel overwhelming. Therefore, it is important to understand what machine learning actually is, and to learn about it step by step, through practical examples.
+
+---
+## The hype curve
+
+
+
+> Google Trends shows the recent 'hype curve' of the term 'machine learning'
+
+---
+## A mysterious universe
+
+We live in a universe full of fascinating mysteries. Great scientists such as Stephen Hawking, Albert Einstein, and many more have devoted their lives to searching for meaningful information that uncovers the mysteries of the world around us. This is the human condition of learning: a human child learns new things and uncovers the structure of their world year by year as they grow to adulthood.
+
+---
+## A child's brain
+
+A child's brain and senses perceive the facts of their surroundings and gradually learn the hidden patterns of life, which help the child craft logical rules to identify learned patterns. The learning process of the human brain makes humans the most sophisticated living creatures of this world. Learning continuously by discovering hidden patterns and then innovating on those patterns enables us to keep improving throughout our lifetime. This learning capacity and ability to evolve is related to a concept called [brain plasticity](https://www.simplypsychology.org/brain-plasticity.html). Superficially, we can draw some motivational similarities between the learning process of the human brain and the concepts of machine learning.
+
+---
+## El cerebro humano
+
+El [cerebro humano](https://www.livescience.com/29365-human-brain.html) percibe cosas del mundo real, procesa la información percibida, toma decisiones racionales y realiza ciertas acciones según las circunstancias. Esto es lo que llamamos comportarse inteligentemente. Cuando programamos una réplica del proceso de comportamiento inteligente en una máquina, se llama inteligencia artificial (IA).
+
+---
+## Algo de terminología
+
+Aunque los términos pueden confundirse, el aprendizaje automático (ML) es un subconjunto importante de la inteligencia artificial. **ML se preocupa por usar algoritmos especializados para descubrir información significativa y encontrar patrones ocultos a partir de datos percibidos para corroborar el proceso de toma de decisiones racionales**.
+
+---
+## IA, ML, Aprendizaje Profundo
+
+
+
+> Un diagrama que muestra las relaciones entre IA, ML, aprendizaje profundo y ciencia de datos. Infografía de [Jen Looper](https://twitter.com/jenlooper) inspirada por [esta gráfica](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)
+
+---
+## Conceptos a cubrir
+
+En este plan de estudios, vamos a cubrir solo los conceptos básicos del aprendizaje automático que un principiante debe conocer. Cubrimos lo que llamamos 'aprendizaje automático clásico' utilizando principalmente Scikit-learn, una excelente biblioteca que muchos estudiantes usan para aprender lo básico. Para entender conceptos más amplios de inteligencia artificial o aprendizaje profundo, es indispensable un conocimiento fundamental sólido del aprendizaje automático, y por eso queremos ofrecerlo aquí.
+
+---
+## En este curso aprenderás:
+
+- conceptos básicos del aprendizaje automático
+- la historia del ML
+- ML y equidad
+- técnicas de regresión en ML
+- técnicas de clasificación en ML
+- técnicas de clustering en ML
+- técnicas de procesamiento de lenguaje natural en ML
+- técnicas de pronóstico de series temporales en ML
+- aprendizaje por refuerzo
+- aplicaciones del mundo real para ML
+
+---
+## Lo que no cubriremos
+
+- aprendizaje profundo
+- redes neuronales
+- IA
+
+Para hacer una mejor experiencia de aprendizaje, evitaremos las complejidades de las redes neuronales, el 'aprendizaje profundo' - construcción de modelos de muchas capas usando redes neuronales - y la IA, que discutiremos en un plan de estudios diferente. También ofreceremos un próximo plan de estudios de ciencia de datos para centrarnos en ese aspecto de este campo más amplio.
+
+---
+## ¿Por qué estudiar aprendizaje automático?
+
+El aprendizaje automático, desde una perspectiva de sistemas, se define como la creación de sistemas automatizados que pueden aprender patrones ocultos a partir de datos para ayudar a tomar decisiones inteligentes.
+
+Esta motivación está libremente inspirada en cómo el cerebro humano aprende ciertas cosas basadas en los datos que percibe del mundo exterior.
+
+✅ Piensa por un minuto por qué una empresa querría intentar usar estrategias de aprendizaje automático en lugar de crear un motor basado en reglas codificadas.
+
+---
+## Aplicaciones del aprendizaje automático
+
+Las aplicaciones del aprendizaje automático están ahora casi en todas partes, y son tan ubicuas como los datos que fluyen en nuestras sociedades, generados por nuestros teléfonos inteligentes, dispositivos conectados y otros sistemas. Considerando el inmenso potencial de los algoritmos de aprendizaje automático de última generación, los investigadores han estado explorando su capacidad para resolver problemas de la vida real multidimensionales y multidisciplinarios con grandes resultados positivos.
+
+---
+## Ejemplos de ML aplicado
+
+**Puedes usar el aprendizaje automático de muchas maneras**:
+
+- Para predecir la probabilidad de enfermedad a partir del historial médico o informes de un paciente.
+- Para aprovechar los datos meteorológicos y predecir eventos climáticos.
+- Para entender el sentimiento de un texto.
+- Para detectar noticias falsas y detener la propagación de propaganda.
+
+Finanzas, economía, ciencias de la tierra, exploración espacial, ingeniería biomédica, ciencia cognitiva e incluso campos en las humanidades han adaptado el aprendizaje automático para resolver los arduos problemas de procesamiento de datos de su dominio.
+
+---
+## Conclusión
+
+El aprendizaje automático automatiza el proceso de descubrimiento de patrones al encontrar ideas significativas a partir de datos del mundo real o generados. Ha demostrado ser altamente valioso en aplicaciones comerciales, de salud y financieras, entre otras.
+
+En un futuro cercano, entender los conceptos básicos del aprendizaje automático será una necesidad para personas de cualquier dominio debido a su adopción generalizada.
+
+---
+# 🚀 Desafío
+
+Dibuja, en papel o utilizando una aplicación en línea como [Excalidraw](https://excalidraw.com/), tu comprensión de las diferencias entre IA, ML, aprendizaje profundo y ciencia de datos. Agrega algunas ideas de problemas que cada una de estas técnicas es buena para resolver.
+
+# [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# Revisión y Autoestudio
+
+Para aprender más sobre cómo puedes trabajar con algoritmos de ML en la nube, sigue este [Camino de Aprendizaje](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott).
+
+Toma un [Camino de Aprendizaje](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) sobre los conceptos básicos de ML.
+
+---
+# Tarea
+
+[Ponte en marcha](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/1-intro-to-ML/assignment.md b/translations/es/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..59f1f8891
--- /dev/null
+++ b/translations/es/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# Get up and running
+
+## Instructions
+
+In this ungraded assignment, you should brush up on Python and get your environment up and running, able to run notebooks.
+
+Take this [Python Learning Path](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), and then get your systems set up by going through these introductory videos:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/2-history-of-ML/README.md b/translations/es/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..d105545cc
--- /dev/null
+++ b/translations/es/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# History of machine learning
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[](https://youtu.be/N6wxM4wZ7V0 "ML for beginners - History of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+In this lesson, we will walk through the major milestones in the history of machine learning and artificial intelligence.
+
+The history of artificial intelligence (AI) as a field is intertwined with the history of machine learning, as the algorithms and computational advances that underpin ML fed into the development of AI. It is useful to remember that, while these fields as distinct areas of inquiry began to crystallize in the 1950s, important [algorithmic, statistical, mathematical, computational, and technical discoveries](https://wikipedia.org/wiki/Timeline_of_machine_learning) predated and overlapped this era. In fact, people have been thinking about these questions for [hundreds of years](https://wikipedia.org/wiki/History_of_artificial_intelligence): this article discusses the historical intellectual underpinnings of the idea of a 'thinking machine'.
+
+---
+## Notable discoveries
+
+- 1763, 1812 [Bayes' Theorem](https://wikipedia.org/wiki/Bayes%27_theorem) and its predecessors. This theorem and its applications underlie inference, describing the probability of an event occurring based on prior knowledge (rendered as a formula after this list).
+- 1805 [Least Squares Theory](https://wikipedia.org/wiki/Least_squares) by French mathematician Adrien-Marie Legendre. This theory, which you will learn about in our Regression unit, helps in data fitting.
+- 1913 [Markov Chains](https://wikipedia.org/wiki/Markov_chain), named after Russian mathematician Andrey Markov, are used to describe a sequence of possible events based on a previous state.
+- 1957 [Perceptron](https://wikipedia.org/wiki/Perceptron) is a type of linear classifier invented by American psychologist Frank Rosenblatt that underlies advances in deep learning.
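+
+As a concrete reference point for the first entry above, Bayes' theorem in its standard textbook form (this rendering follows the linked article, not anything specific to this curriculum) is:
+
+$$
+P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
+$$
+
+Here $P(A \mid B)$ is the probability of event $A$ given that $B$ has occurred, computed from the prior probability $P(A)$, which is exactly the "probability based on prior knowledge" described above.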
+
+---
+
+- 1967 [Nearest Neighbor](https://wikipedia.org/wiki/Nearest_neighbor) is an algorithm originally designed to map routes. In an ML context it is used to detect patterns.
+- 1970 [Backpropagation](https://wikipedia.org/wiki/Backpropagation) is used to train [feedforward neural networks](https://wikipedia.org/wiki/Feedforward_neural_network).
+- 1982 [Recurrent Neural Networks](https://wikipedia.org/wiki/Recurrent_neural_network) are artificial neural networks derived from feedforward neural networks that create temporal graphs.
+
+✅ Do a little research. What other dates stand out as pivotal in the history of ML and AI?
+
+---
+## 1950: Machines that think
+
+Alan Turing, a truly remarkable person who was voted [by the public in 2019](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) as the greatest scientist of the 20th century, is credited with helping to lay the foundation for the concept of a 'machine that can think'. He grappled with naysayers and with his own need for empirical evidence of this concept, in part by creating the [Turing Test](https://www.bbc.com/news/technology-18475646), which you will explore in our NLP lessons.
+
+---
+## 1956: Dartmouth Summer Research Project
+
+"The Dartmouth Summer Research Project on artificial intelligence was a seminal event for artificial intelligence as a field," and it was here that the term 'artificial intelligence' was coined ([source](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
+
+---
+
+The lead researcher, mathematics professor John McCarthy, hoped "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." The participants included another luminary in the field, Marvin Minsky.
+
+The workshop is credited with having initiated and encouraged several discussions, including "the rise of symbolic methods, systems focused on limited domains (early expert systems), and deductive systems versus inductive systems." ([source](https://wikipedia.org/wiki/Dartmouth_workshop))
+
+---
+## 1956 - 1974: "The golden years"
+
+From the 1950s through the mid-'70s, optimism ran high in the hope that AI could solve many problems. In 1967, Marvin Minsky stated confidently that "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
+
+Natural language processing research flourished, search was refined and made more powerful, and the concept of 'micro-worlds' was created, where simple tasks were completed using plain-language instructions.
+
+---
+
+Research was well funded by government agencies, advances were made in computation and algorithms, and prototypes of intelligent machines were built. Some of these machines include:
+
+* [Shakey the robot](https://wikipedia.org/wiki/Shakey_the_robot), who could maneuver and decide how to perform tasks 'intelligently'.
+
+  > Shakey in 1972
+
+---
+
+* Eliza, an early 'chatterbot', could converse with people and act as a primitive 'therapist'. You'll learn more about Eliza in the NLP lessons.
+
+  > A version of Eliza, a chatbot
+
+---
+
+* "Blocks world" was an example of a micro-world where blocks could be stacked and sorted, and experiments in teaching machines to make decisions could be tested. Advances built with libraries such as [SHRDLU](https://wikipedia.org/wiki/SHRDLU) helped propel language processing forward.
+
+  [](https://www.youtube.com/watch?v=QAJz4YKUwqw "blocks world with SHRDLU")
+
+  > 🎥 Click the image above for a video: Blocks world with SHRDLU
+
+---
+## 1974 - 1980: "AI Winter"
+
+By the mid-1970s, it had become apparent that the complexity of making 'intelligent machines' had been understated and that its promise, given the available compute power, had been overblown. Funding dried up and confidence in the field slowed. Some issues that impacted confidence included:
+---
+- **Limitations**. Compute power was too limited.
+- **Combinatorial explosion**. The number of parameters that needed to be trained grew exponentially as more was asked of computers, without a parallel evolution of compute power and capability.
+- **Paucity of data**. There was a paucity of data that hindered the process of testing, developing, and refining algorithms.
+- **Are we asking the right questions?**. The very questions that were being asked began to be questioned. Researchers began to field criticism about their approaches:
+  - Turing tests came into question by means of, among other ideas, the 'Chinese room theory' which posited that "programming a digital computer may make it appear to understand language but could not produce real understanding." ([source](https://plato.stanford.edu/entries/chinese-room/))
+  - The ethics of introducing artificial intelligences such as the "therapist" ELIZA into society was challenged.
+
+---
+
+At the same time, various schools of AI thought began to form. A dichotomy was established between ["scruffy" vs. "neat AI"](https://wikipedia.org/wiki/Neats_and_scruffies) practices. _Scruffy_ labs tweaked programs for hours until they got the desired results. _Neat_ labs "focused on logic and formal problem solving". ELIZA and SHRDLU were well-known _scruffy_ systems. In the 1980s, as demand emerged to make ML systems reproducible, the _neat_ approach gradually took the forefront, as its results are more explainable.
+
+---
+## 1980s: Expert systems
+
+As the field grew, its benefit to business became clearer, and in the 1980s so did the proliferation of 'expert systems'. "Expert systems were among the first truly successful forms of artificial intelligence (AI) software." ([source](https://wikipedia.org/wiki/Expert_system))
+
+This type of system is actually _hybrid_, consisting partially of a rules engine defining business requirements, and an inference engine that leveraged the rules system to deduce new facts.
+
+This era also saw increasing attention paid to neural networks.
+
+---
+## 1987 - 1993: AI 'chill'
+
+The proliferation of specialized expert-system hardware had the unfortunate effect of becoming too specialized. The rise of personal computers also competed with these large, specialized, centralized systems. The democratization of computing had begun, and it eventually paved the way for the modern explosion of big data.
+
+---
+## 1993 - 2011
+
+This epoch saw a new era for ML and AI, able to solve some of the problems that had earlier been caused by the lack of data and compute power. The amount of data began to rapidly increase and become more widely available, for better and for worse, especially with the advent of the smartphone around 2007. Compute power expanded exponentially, and algorithms evolved alongside. The field began to gain maturity as the freewheeling days of the past began to crystallize into a true discipline.
+
+---
+## Now
+
+Today machine learning and AI touch almost every part of our lives. This era calls for careful understanding of the risks and potential effects of these algorithms on human lives. As Microsoft's Brad Smith has stated, "Information technology raises issues that go to the heart of fundamental human-rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses" ([source](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+It remains to be seen what the future holds, but it is important to understand these computer systems and the software and algorithms that they run. We hope that this curriculum will help you to gain a better understanding so that you can decide for yourself.
+
+[](https://www.youtube.com/watch?v=mTtDfKgLm54 "The history of deep learning")
+> 🎥 Click the image above for a video: Yann LeCun discusses the history of deep learning in this lecture
+
+---
+## 🚀 Challenge
+
+Dig into one of these historical moments and learn more about the people behind them. There are fascinating characters, and no scientific discovery was ever created in a cultural vacuum. What do you discover?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## Review & Self Study
+
+Here are items to watch and listen to:
+
+[This podcast where Amy Boyd discusses the evolution of AI](http://runasradio.com/Shows/Show/739)
+[](https://www.youtube.com/watch?v=EJt3_bFYKss "The history of AI by Amy Boyd")
+
+---
+
+## Assignment
+
+[Create a timeline](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/2-history-of-ML/assignment.md b/translations/es/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..e14285a77
--- /dev/null
+++ b/translations/es/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Create a timeline
+
+## Instructions
+
+Using [this repo](https://github.com/Digital-Humanities-Toolkit/timeline-builder), create a timeline of some aspect of the history of algorithms, mathematics, statistics, AI, or ML, or a combination of these. You can focus on one person, one idea, or a long timespan of thought. Make sure to add multimedia elements.
+
+## Rubric
+
+| Criteria | Exemplary                                         | Adequate                                | Needs Improvement                                                  |
+| -------- | ------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------ |
+|          | A deployed timeline is presented as a GitHub page | The code is incomplete and not deployed | The timeline is incomplete, not well researched, and not deployed |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/3-fairness/README.md b/translations/es/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..ce728b440
--- /dev/null
+++ b/translations/es/1-Introduction/3-fairness/README.md
@@ -0,0 +1,159 @@
+# Building machine learning solutions with responsible AI
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+In this curriculum, you will start to discover how machine learning can and is impacting our everyday lives. Even now, systems and models are involved in daily decision-making tasks, such as health-care diagnoses, loan approvals, or detecting fraud. It is therefore important that these models work well to provide trustworthy outcomes. Just as with any software application, AI systems are going to miss expectations or have an undesirable outcome at times. That is why it is essential to be able to understand and explain the behavior of an AI model.
+
+Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion, or disproportionately represents such demographics. What about when the model's output is interpreted to favor some demographic? What is the consequence for the application? In addition, what happens when the model has an adverse outcome and is harmful to people? Who is accountable for the AI systems' behavior? These are some of the questions we will explore in this curriculum.
+
+In this lesson, you will:
+
+- Raise your awareness of the importance of fairness in machine learning and fairness-related harms.
+- Become familiar with the practice of exploring outliers and unusual scenarios to ensure reliability and safety.
+- Gain understanding of the need to empower everyone by designing inclusive systems.
+- Explore how vital it is to protect the privacy and security of data and people.
+- See the importance of having a glass-box approach to explain the behavior of AI models.
+- Be mindful of how accountability is essential to building trust in AI systems.
+
+## Prerequisite
+
+As a prerequisite, please take the "Responsible AI Principles" learning path and watch the video below on the topic:
+
+Learn more about Responsible AI by following this [Learning Path](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)
+
+[](https://youtu.be/dnC8-uUZXSc "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Microsoft's Approach to Responsible AI
+
+## Fairness
+
+AI systems should treat everyone fairly and avoid affecting similar groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications. Each of us as humans carries around inherited biases that affect our decisions and actions. These biases can be evident in the data that we use to train AI systems. Such manipulation can sometimes happen unintentionally. It is often difficult to consciously know when you are introducing bias into data.
+
+**"Unfairness"** encompasses negative impacts, or "harms", for a group of people, such as those defined in terms of race, gender, age, or disability status. The main fairness-related harms can be classified as:
+
+- **Allocation**, if, for example, a gender or ethnicity is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but reality is much more complex, it leads to a poorly performing service. For instance, a hand-soap dispenser that could not seem to sense people with dark skin. [Reference](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **Denigration**. To unfairly criticize and label something or someone. For example, an image-labeling technology infamously mislabeled images of dark-skinned people as gorillas.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that is contributing to harm.
+- **Stereotyping**. Associating a given group with pre-assigned attributes. For example, a language-translation system between English and Turkish may have inaccuracies due to words with stereotypical gender associations.
+
+> translation to Turkish
+
+> translation back to English
+
+When designing and testing AI systems, we need to ensure that AI is fair and not programmed to make biased or discriminatory decisions, which human beings are also prohibited from making. Guaranteeing fairness in AI and machine learning remains a complex sociotechnical challenge.
+
+### Reliability and safety
+
+To build trust, AI systems need to be reliable, safe, and consistent under normal and unexpected conditions. It is important to know how AI systems will behave in a variety of situations, especially in edge cases. When building AI solutions, a substantial amount of focus needs to be placed on how to handle the wide variety of circumstances that the AI solutions would encounter. For example, a self-driving car needs to put people's safety as a top priority. As a result, the AI powering the car needs to consider all the possible scenarios that the car could come across, such as night, thunderstorms or blizzards, kids running across the street, pets, road construction, etc. How well an AI system can handle a wide range of conditions reliably and safely reflects the level of anticipation that the data scientist or AI developer considered during the design or testing of the system.
+
+> [🎥 Click here for a video:](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### Inclusiveness
+
+AI systems should be designed to engage and empower everyone. When designing and implementing AI systems, data scientists and AI developers identify and address potential barriers in the system that could unintentionally exclude people. For example, there are around one billion people with disabilities around the world. With the advancement of AI, they can access a wide range of information and opportunities more easily in their daily lives. By addressing the barriers, it creates opportunities to innovate and develop AI products with better experiences that benefit everyone.
+
+> [🎥 Click here for a video: inclusiveness in AI](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### Security and privacy
+
+AI systems should be safe and respect people's privacy. People have less trust in systems that put their privacy, information, or lives at risk. When training machine learning models, we rely on data to produce the best results. In doing so, the origin of the data and its integrity must be considered. For example, was the data user-submitted or publicly available? Next, while working with the data, it is crucial to develop AI systems that can protect confidential information and resist attacks. As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. Privacy and data-security issues require especially close attention for AI because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.
+
+> [🎥 Click here for a video: security in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- As an industry, we have made significant advances in privacy and security, fueled significantly by regulations like the GDPR (General Data Protection Regulation).
+- Yet with AI systems we must acknowledge the tension between the need for more personal data to make systems more personal and effective, and privacy.
+- Just as with the birth of connected computers with the internet, we are also seeing a huge uptick in the number of security issues related to AI.
+- At the same time, we have seen AI being used to improve security. As an example, most modern anti-virus scanners are driven by AI heuristics today.
+- We need to ensure that our data science processes blend harmoniously with the latest privacy and security practices.
+
+### Transparency
+
+AI systems should be understandable. A crucial part of transparency is explaining the behavior of AI systems and their components. Improving the understanding of AI systems requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that those who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them, as well as the limitations of the systems they use. For example, if a bank uses an AI system to support its consumer lending decisions, it is important to examine the outcomes and understand which data influences the system's recommendations. Governments are starting to regulate AI across industries, so data scientists and organizations must explain whether an AI system meets regulatory requirements, especially when there is an undesirable outcome.
+
+> [🎥 Click here for a video: transparency in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Because AI systems are so complex, it is hard to understand how they work and to interpret the results.
+- This lack of understanding affects the way these systems are managed, operationalized, and documented.
+- More importantly, this lack of understanding affects the decisions made using the results that these systems produce.
+
+### Accountability
+
+The people who design and deploy AI systems must be accountable for how their systems operate. The need for accountability is particularly crucial with sensitive-use technologies like facial recognition. Recently, there has been a growing demand for facial-recognition technology, especially from law-enforcement organizations who see the potential of the technology in uses such as finding missing children. However, these technologies could potentially be used by a government to put its citizens' fundamental freedoms at risk by, for example, enabling continuous surveillance of specific individuals. Hence, data scientists and organizations need to be responsible for how their AI system impacts individuals or society.
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Warnings of mass surveillance through facial recognition
+
+Ultimately, one of the biggest questions for our generation, as the first generation that is bringing AI to society, is how to ensure that computers will remain accountable to people and how to ensure that the people who design computers remain accountable to everyone else.
+
+## Impact assessment
+
+Before training a machine learning model, it is important to conduct an impact assessment to understand the purpose of the AI system; what the intended use is; where it will be deployed; and who will be interacting with the system. These are helpful for the reviewer(s) or testers evaluating the system to know what factors to take into consideration when identifying potential risks and expected consequences.
+
+The following are areas of focus when conducting an impact assessment:
+
+* **Adverse impact on individuals**. Being aware of any restriction or requirement, unsupported use, or any known limitations hindering the system's performance is vital to ensure that the system is not used in a way that could cause harm to individuals.
+* **Data requirements**. Gaining an understanding of how and where the system will use data enables reviewers to explore any data requirements you would need to be mindful of (e.g., GDPR or HIPAA data regulations). In addition, examine whether the source or quantity of data is substantial for training.
+* **Summary of impact**. Gather a list of potential harms that could arise from using the system. Throughout the ML lifecycle, review if the issues identified are mitigated or addressed.
+* **Applicable goals** for each of the six core principles. Assess if the goals from each of the principles are met and if there are any gaps.
+
+## Debugging with responsible AI
+
+Like debugging a software application, debugging an AI system is a necessary process of identifying and resolving issues in the system. There are many factors that would affect a model not performing as expected or responsibly. Most traditional model-performance metrics are quantitative aggregates of a model's performance, which are not sufficient to analyze how a model violates the responsible AI principles. Furthermore, a machine learning model is a black box, which makes it hard to understand what drives its outcome or to provide an explanation when it makes a mistake. Later in this course, we will learn how to use the Responsible AI dashboard to help debug AI systems. The dashboard provides a holistic tool for data scientists and AI developers to perform the following (a small cohort-analysis sketch follows this list):
+
+* **Error analysis**. To identify the error distribution of the model that can affect the fairness or reliability of the system.
+* **Model overview**. To discover where there are disparities in the model's performance across data cohorts.
+* **Data analysis**. To understand the data distribution and identify any potential bias in the data that could lead to fairness, inclusiveness, and reliability issues.
+* **Model interpretability**. To understand what affects or influences the model's predictions. This helps in explaining the model's behavior, which is important for transparency and accountability.
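+
+To make the 'performance across cohorts' idea concrete, here is a minimal sketch that computes a model's accuracy separately for each value of a sensitive feature, using only plain scikit-learn, pandas, and NumPy. The synthetic data, the `group` column, and the model choice are illustrative assumptions for this sketch, not part of the Responsible AI dashboard itself:
+
+```python
+# Hand-rolled cohort comparison: a wide accuracy gap between groups is the
+# kind of disparity that tools like the RAI dashboard surface automatically.
+# All data and column names below are hypothetical.
+import numpy as np
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+
+rng = np.random.default_rng(0)
+n = 1000
+df = pd.DataFrame({
+    "feature_1": rng.normal(size=n),
+    "feature_2": rng.normal(size=n),
+    "group": rng.choice(["A", "B"], size=n),   # a sensitive attribute
+})
+df["label"] = (df["feature_1"] + rng.normal(scale=0.5, size=n) > 0).astype(int)
+
+X = df[["feature_1", "feature_2"]]
+train_X, test_X, train_y, test_y, _, test_group = train_test_split(
+    X, df["label"], df["group"], test_size=0.3, random_state=0)
+
+model = LogisticRegression().fit(train_X, train_y)
+pred = model.predict(test_X)
+
+# Accuracy per cohort: large gaps hint at fairness or reliability issues.
+for g in ["A", "B"]:
+    mask = (test_group == g).to_numpy()
+    acc = (pred[mask] == test_y[mask].to_numpy()).mean()
+    print(f"cohort {g}: accuracy {acc:.2f}")
+```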
+
+## 🚀 Challenge
+
+To prevent harms from being introduced in the first place, we should:
+
+- have a diversity of backgrounds and perspectives among the people working on systems
+- invest in datasets that reflect the diversity of our society
+- develop better methods throughout the machine learning lifecycle for detecting and correcting responsible-AI issues when they occur
+
+Think about real-life scenarios where a model's untrustworthiness is evident in model building and usage. What else should we consider?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## Review & Self Study
+
+In this lesson, you have learned some basic concepts of fairness and unfairness in machine learning.
+
+Watch this workshop to dive deeper into the topics:
+
+- In pursuit of responsible AI: Bringing principles to practice, by Besmira Nushi, Mehrnoosh Sameki and Amit Sharma
+
+[](https://www.youtube.com/watch?v=tGgJCrA-MZU "RAI Toolbox: An open-source framework for building responsible AI")
+
+> 🎥 Click the image above for a video: RAI Toolbox: An open-source framework for building responsible AI, by Besmira Nushi, Mehrnoosh Sameki and Amit Sharma
+
+Also, read:
+
+- Microsoft's RAI resource center: [Responsible AI Resources – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- Microsoft's FATE research group: [FATE: Fairness, Accountability, Transparency, and Ethics in AI - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+RAI Toolbox:
+
+- [Responsible AI Toolbox GitHub repository](https://github.com/microsoft/responsible-ai-toolbox)
+
+Read about Azure Machine Learning's tools to ensure fairness:
+
+- [Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml?WT.mc_id=academic-77952-leestott)
+
+## Assignment
+
+[Explore the RAI Toolbox](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/3-fairness/assignment.md b/translations/es/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..e2328b7e3
--- /dev/null
+++ b/translations/es/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI Toolbox
+
+## Instructions
+
+In this lesson you learned about the Responsible AI Toolbox, an "open-source, community-driven project to help data scientists analyze and improve AI systems." For this assignment, explore one of the RAI Toolbox [notebooks](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | A paper or PowerPoint presentation is presented discussing Fairlearn's systems, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/4-techniques-of-ML/README.md b/translations/es/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..91ce103db
--- /dev/null
+++ b/translations/es/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Techniques of Machine Learning
+
+The process of building, using, and maintaining machine learning models and the data they use is a very different process from many other development workflows. In this lesson, we will demystify the process and outline the main techniques you need to know. You will:
+
+- Understand the processes underpinning machine learning at a high level.
+- Explore base concepts such as 'models', 'predictions', and 'training data'.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "ML for beginners - Techniques of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+## Introduction
+
+On a high level, the craft of creating machine learning (ML) processes is comprised of a number of steps (sketched in code after this list):
+
+1. **Decide on the question**. Most ML processes start by asking a question that cannot be answered by a simple conditional program or rules-based engine. These questions often revolve around predictions based on a collection of data.
+2. **Collect and prepare data**. To be able to answer your question, you need data. The quality and, sometimes, quantity of your data will determine how well you can answer your initial question. Visualizing data is an important aspect of this phase. This phase also includes splitting the data into a training and testing group to build a model.
+3. **Choose a training method**. Depending on your question and the nature of your data, you need to choose how you want to train a model to best reflect your data and make accurate predictions against it. This is the part of your ML process that requires specific expertise and, often, a considerable amount of experimentation.
+4. **Train the model**. Using your training data, you'll use various algorithms to train a model to recognize patterns in the data. The model might leverage internal weights that can be adjusted to privilege certain parts of the data over others to build a better model.
+5. **Evaluate the model**. You use never-before-seen data (your testing data) from your collected set to see how the model is performing.
+6. **Parameter tuning**. Based on the performance of your model, you can redo the process using different parameters, or variables, that control the behavior of the algorithms used to train the model.
+7. **Predict**. Use new inputs to test the accuracy of your model.
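+
+The seven steps above map almost one-to-one onto code. Below is a minimal sketch of the whole loop in scikit-learn; the diabetes toy dataset and the linear-regression model are stand-ins chosen for brevity, not the datasets used later in this course:
+
+```python
+# Steps 2-7 of the ML process, compressed into a few lines of scikit-learn.
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import LinearRegression
+from sklearn.model_selection import train_test_split
+
+# 2. Collect and prepare data (here: a built-in toy dataset), then split it.
+X, y = load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=42)
+
+# 3-4. Choose a training method and train the model.
+model = LinearRegression()
+model.fit(X_train, y_train)
+
+# 5. Evaluate on data the model has never seen (R^2 score for regression).
+print("test R^2:", model.score(X_test, y_test))
+
+# 7. Predict against new input (here: the first held-out sample).
+print("prediction:", model.predict(X_test[:1]))
+```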
+
+## What question to ask
+
+Computers are particularly skilled at discovering hidden patterns in data. This utility is very helpful for researchers who have questions about a given domain that cannot be easily answered by creating a conditionally-based rules engine. Given an actuarial task, for example, a data scientist might be able to construct handcrafted rules around the mortality of smokers vs. non-smokers.
+
+When many other variables are brought into the equation, however, an ML model might prove more efficient to predict future mortality rates based on past health history. A more cheerful example might be making weather predictions for the month of April in a given location based on data that includes latitude, longitude, climate change, proximity to the ocean, patterns of the jet stream, and more.
+
+✅ This [slide deck](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) on weather models offers a historical perspective on using ML in weather analysis.
+
+## Pre-building tasks
+
+Before starting to build your model, there are several tasks you need to complete. To test your question and form a hypothesis based on a model's predictions, you need to identify and configure several elements.
+
+### Data
+
+To be able to answer your question with any kind of certainty, you need a good amount of data of the right type. There are two things you need to do at this point:
+
+- **Collect data**. Keeping in mind the previous lesson on fairness in data analysis, collect your data with care. Be aware of the sources of this data, any inherent biases it might have, and document its origin.
+- **Prepare data**. There are several steps in the data-preparation process. You might need to collate data and normalize it if it comes from diverse sources. You can improve the data's quality and quantity through various methods such as converting strings to numbers (as we do in [Clustering](../../5-Clustering/1-Visualize/README.md), and as sketched right after this list). You might also generate new data based on the original (as we do in [Classification](../../4-Classification/1-Introduction/README.md)). You can clean and edit the data (as we do prior to the [Web App](../../3-Web-App/README.md) lesson). Finally, you might also need to randomize and shuffle it, depending on your training techniques.
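+
+For instance, converting strings to numbers can be as small as the following sketch; the tiny DataFrame and its column names are made up purely for illustration:
+
+```python
+# Turning string categories into numbers with scikit-learn's LabelEncoder.
+import pandas as pd
+from sklearn.preprocessing import LabelEncoder
+
+df = pd.DataFrame({"color": ["orange", "white", "orange", "green"]})
+encoder = LabelEncoder()
+df["color_encoded"] = encoder.fit_transform(df["color"])
+
+print(df)                     # each color now has an integer code
+print(list(encoder.classes_)) # mapping back from code to original string
+```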
+
+✅ After collecting and processing your data, take a moment to see if its shape will allow you to address your intended question. It may be that the data will not perform well in your given task, as we discover in our [Clustering](../../5-Clustering/1-Visualize/README.md) lessons!
+
+### Features and target
+
+A [feature](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection) is a measurable property of your data. In many datasets it is expressed as a column heading like 'date', 'size', or 'color'. Your feature variable, usually represented as `X` in code, represents the input variable which will be used to train the model.
+
+A target is the thing you are trying to predict. The target, usually represented as `y` in code, represents the answer to the question you are trying to ask of your data: in December, what **color** pumpkins will be cheapest? In San Francisco, what neighborhoods will have the best real-estate **price**? Sometimes the target is also referred to as the label attribute.
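+
+In pandas code, carving `X` and `y` out of a dataset often looks something like the following sketch (the pumpkin-style column names here are hypothetical placeholders):
+
+```python
+# Separating features (X) from the target (y) in a pandas DataFrame.
+import pandas as pd
+
+df = pd.DataFrame({
+    "month":   [9, 10, 12, 12],
+    "variety": [0, 1, 0, 2],      # categories already encoded as numbers
+    "price":   [14.0, 16.5, 9.0, 11.2],
+})
+X = df[["month", "variety"]]  # input columns used for training
+y = df["price"]               # the value we want to predict
+```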
+
+### Selecting your feature variable
+
+🎓 **Feature selection and feature extraction** How do you know which variable to choose when building a model? You will probably go through a process of feature selection or feature extraction to choose the right variables for the most performant model. They are not the same thing, however: "Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features." ([source](https://wikipedia.org/wiki/Feature_selection))
+
+### Visualize your data
+
+An important aspect of the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data (as we discover in [Classification](../../4-Classification/2-Classifiers-1/README.md)).
+
+### Split your dataset
+
+Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well (a sketch follows this list).
+
+- **Training**. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
+- **Testing**. A test dataset is an independent group of data, often gathered from the original data, that you use to confirm the performance of the built model.
+- **Validation**. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set (as we note in [Time Series Forecasting](../../7-TimeSeries/1-Introduction/README.md)).
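+
+One common way to produce all three sets is to call `train_test_split` twice; the 60/20/20 proportions below are an arbitrary illustrative choice:
+
+```python
+# Splitting one dataset into train / validation / test subsets.
+from sklearn.datasets import load_diabetes
+from sklearn.model_selection import train_test_split
+
+X, y = load_diabetes(return_X_y=True)
+
+# First split off 20% as the final test set...
+X_rest, X_test, y_rest, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=0)
+# ...then split the remainder 75/25 into train and validation (60/20 overall).
+X_train, X_val, y_train, y_val = train_test_split(
+    X_rest, y_rest, test_size=0.25, random_state=0)
+
+print(len(X_train), len(X_val), len(X_test))
+```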
+
+## Building a model
+
+Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to **train** it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.
+
+### Decide on a training method
+
+Depending on your question and the nature of your data, you will choose a method to train it. Stepping through [Scikit-learn's documentation](https://scikit-learn.org/stable/user_guide.html) - which we use in this course - you can explore many ways to train a model. Depending on your experience, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
+
+### Train a model
+
+Armed with your training data, you are ready to 'fit' it to create a model. You will notice that in many ML libraries you will find the code 'model.fit' - it is at this point that you send in your feature variable as an array of values (usually 'X') and a target variable (usually 'y').
+
+### Evaluate the model
+
+Once the training process is complete (it can take many iterations, or 'epochs', to train a large model), you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.
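+
+For a classifier, that 'table of metrics' is often scikit-learn's `classification_report`; this sketch reuses a toy dataset purely to show the call:
+
+```python
+# Printing a table of quality metrics (precision, recall, F1) for a classifier.
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import classification_report
+from sklearn.model_selection import train_test_split
+
+X, y = load_iris(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.3, random_state=0)
+
+model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
+print(classification_report(y_test, model.predict(X_test)))
+```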
+
+🎓 **Model fitting**
+
+In the context of machine learning, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
+
+🎓 **Underfitting** and **overfitting** are common problems that degrade the quality of the model, as the model fits either not well enough or too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate, as it can neither accurately analyze its training data nor data it has not yet 'seen'.
+
+
+> Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+## Parameter tuning
+
+Once your initial training is complete, observe the quality of the model and consider improving it by tweaking its 'hyperparameters'. Read more about the process [in the documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott).
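+
+In scikit-learn, one common way to search hyperparameters is `GridSearchCV`; the parameter grid below is an arbitrary example for a k-nearest-neighbors classifier:
+
+```python
+# Exhaustive hyperparameter search with cross-validation.
+from sklearn.datasets import load_iris
+from sklearn.model_selection import GridSearchCV
+from sklearn.neighbors import KNeighborsClassifier
+
+X, y = load_iris(return_X_y=True)
+param_grid = {"n_neighbors": [3, 5, 7], "weights": ["uniform", "distance"]}
+
+search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
+search.fit(X, y)
+print(search.best_params_, search.best_score_)
+```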
+
+## Prediction
+
+This is the moment where you can use completely new data to test your model's accuracy. In an 'applied' ML setting, where you are building web assets to use the model in production, this process might involve gathering user input (a button press, for example) to set a variable and send it to the model for inference, or evaluation.
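+
+At its smallest, inference is a single `predict` call on a fitted model; the two-feature input row here is a hypothetical stand-in for whatever a user might submit:
+
+```python
+# Inference: feeding brand-new input to an already-fitted model.
+from sklearn.linear_model import LinearRegression
+
+model = LinearRegression().fit([[1, 1], [2, 2], [3, 3]], [2, 4, 6])
+new_input = [[4, 4]]            # e.g., values gathered from a web form
+print(model.predict(new_input)) # -> approximately [8.]
+```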
+
+In these lessons, you will discover how to use these steps to prepare, build, test, evaluate, and predict - all the gestures of a data scientist, and more, as you progress in your journey to become a 'full stack' ML engineer.
+
+---
+
+## 🚀 Challenge
+
+Draw a flow chart reflecting the steps of an ML practitioner. Where do you see yourself right now in the process? Where do you predict you will find difficulty? What seems easy to you?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Review & Self Study
+
+Search online for interviews with data scientists who discuss their daily work. Here is [one](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Assignment
+
+[Interview a data scientist](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/4-techniques-of-ML/assignment.md b/translations/es/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..067e22f0d
--- /dev/null
+++ b/translations/es/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Interview a data scientist
+
+## Instructions
+
+In your company, in a user group, or among your friends or fellow students, talk to someone who works professionally as a data scientist. Write a short paper (500 words) about their daily occupations. Are they specialists, or do they work 'full stack'?
+
+## Rubric
+
+| Criteria | Exemplary                                                                            | Adequate                                                           | Needs Improvement     |
+| -------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | --------------------- |
+|          | An essay of the correct length, with attributed sources, is presented as a .doc file | The essay is poorly attributed or shorter than the required length | No essay is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/1-Introduction/README.md b/translations/es/1-Introduction/README.md
new file mode 100644
index 000000000..473b90984
--- /dev/null
+++ b/translations/es/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# Introducción al aprendizaje automático
+
+En esta sección del plan de estudios se te presentarán los conceptos básicos que subyacen al campo del aprendizaje automático: qué es, cuál es su historia y qué técnicas utilizan los investigadores para trabajar con él. ¡Vamos a explorar juntos este nuevo mundo del aprendizaje automático!
+
+
+> Foto de Bill Oxford en Unsplash
+
+### Lecciones
+
+1. [Introducción al aprendizaje automático](1-intro-to-ML/README.md)
+1. [La historia del aprendizaje automático y la IA](2-history-of-ML/README.md)
+1. [Equidad y aprendizaje automático](3-fairness/README.md)
+1. [Técnicas del aprendizaje automático](4-techniques-of-ML/README.md)
+
+### Créditos
+
+"Introducción al Aprendizaje Automático" fue escrito con ♥️ por un equipo de personas que incluyen a [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) y [Jen Looper](https://twitter.com/jenlooper)
+
+"La Historia del Aprendizaje Automático" fue escrito con ♥️ por [Jen Looper](https://twitter.com/jenlooper) y [Amy Boyd](https://twitter.com/AmyKateNicho)
+
+"Equidad y Aprendizaje Automático" fue escrito con ♥️ por [Tomomi Imura](https://twitter.com/girliemac)
+
+"Técnicas del Aprendizaje Automático" fue escrito con ♥️ por [Jen Looper](https://twitter.com/jenlooper) y [Chris Noring](https://twitter.com/softchris)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/1-Tools/README.md b/translations/es/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..89293ad59
--- /dev/null
+++ b/translations/es/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Comienza con Python y Scikit-learn para modelos de regresión
+
+
+
+> Sketchnote por [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Cuestionario antes de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [¡Esta lección está disponible en R!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Introducción
+
+En estas cuatro lecciones, descubrirás cómo construir modelos de regresión. Hablaremos de para qué sirven en breve. ¡Pero antes de hacer nada, asegúrate de tener las herramientas adecuadas para comenzar el proceso!
+
+En esta lección, aprenderás a:
+
+- Configurar tu computadora para tareas locales de aprendizaje automático.
+- Trabajar con Jupyter notebooks.
+- Usar Scikit-learn, incluida la instalación.
+- Explorar la regresión lineal con un ejercicio práctico.
+
+## Instalaciones y configuraciones
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML para principiantes - Configura tus herramientas para construir modelos de Machine Learning")
+
+> 🎥 Haz clic en la imagen de arriba para ver un video corto sobre cómo configurar tu computadora para ML.
+
+1. **Instala Python**. Asegúrate de que [Python](https://www.python.org/downloads/) esté instalado en tu computadora. Usarás Python para muchas tareas de ciencia de datos y aprendizaje automático. La mayoría de los sistemas informáticos ya incluyen una instalación de Python. También hay [Paquetes de Codificación de Python](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) útiles para facilitar la configuración a algunos usuarios.
+
+ Sin embargo, algunos usos de Python requieren una versión del software, mientras que otros requieren una versión diferente. Por esta razón, es útil trabajar dentro de un [entorno virtual](https://docs.python.org/3/library/venv.html).
+
+2. **Instala Visual Studio Code**. Asegúrate de tener Visual Studio Code instalado en tu computadora. Sigue estas instrucciones para [instalar Visual Studio Code](https://code.visualstudio.com/) para la instalación básica. Vas a usar Python en Visual Studio Code en este curso, por lo que podría ser útil repasar cómo [configurar Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) para el desarrollo en Python.
+
+ > Familiarízate con Python trabajando a través de esta colección de [Módulos de aprendizaje](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)
+ >
+ > [](https://youtu.be/yyQM70vi7V8 "Configura Python con Visual Studio Code")
+ >
+ > 🎥 Haz clic en la imagen de arriba para ver un video: usando Python dentro de VS Code.
+
+3. **Instala Scikit-learn**, siguiendo [estas instrucciones](https://scikit-learn.org/stable/install.html). Dado que necesitas asegurarte de usar Python 3, se recomienda que uses un entorno virtual. Nota, si estás instalando esta biblioteca en una Mac M1, hay instrucciones especiales en la página enlazada arriba.
+
+4. **Instala Jupyter Notebook**. Necesitarás [instalar el paquete Jupyter](https://pypi.org/project/jupyter/).
+
+## Tu entorno de autoría de ML
+
+Vas a usar **notebooks** para desarrollar tu código Python y crear modelos de aprendizaje automático. Este tipo de archivo es una herramienta común para los científicos de datos, y puede identificarse por su sufijo o extensión `.ipynb`.
+
+Los notebooks son un entorno interactivo que permite al desarrollador tanto codificar como agregar notas y escribir documentación alrededor del código, lo cual es bastante útil para proyectos experimentales o orientados a la investigación.
+
+[](https://youtu.be/7E-jC8FLA2E "ML para principiantes - Configura Jupyter Notebooks para comenzar a construir modelos de regresión")
+
+> 🎥 Haz clic en la imagen de arriba para ver un video corto sobre este ejercicio.
+
+### Ejercicio - trabajar con un notebook
+
+En esta carpeta, encontrarás el archivo _notebook.ipynb_.
+
+1. Abre _notebook.ipynb_ en Visual Studio Code.
+
+   Se iniciará un servidor Jupyter con Python 3+. Encontrarás áreas del notebook que se pueden ejecutar (`run`): los bloques de código. Puedes ejecutar un bloque de código seleccionando el ícono que parece un botón de reproducción.
+
+2. Selecciona el ícono `md` para agregar una celda de markdown y escribe el siguiente texto: **# Bienvenido a tu notebook**.
+
+ Luego, agrega algo de código Python.
+
+3. Escribe **print('hello notebook')** en el bloque de código.
+4. Selecciona la flecha para ejecutar el código.
+
+ Deberías ver la declaración impresa:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+Puedes intercalar tu código con comentarios para auto-documentar el notebook.
+
+✅ Piensa por un minuto cuán diferente es el entorno de trabajo de un desarrollador web en comparación con el de un científico de datos.
+
+## Puesta en marcha con Scikit-learn
+
+Ahora que Python está configurado en tu entorno local, y te sientes cómodo con los Jupyter notebooks, vamos a familiarizarnos con Scikit-learn (se pronuncia `sci` como en `science`). Scikit-learn proporciona una [extensa API](https://scikit-learn.org/stable/modules/classes.html#api-ref) para ayudarte a realizar tareas de ML.
+
+Según su [sitio web](https://scikit-learn.org/stable/getting_started.html), "Scikit-learn es una biblioteca de aprendizaje automático de código abierto que admite el aprendizaje supervisado y no supervisado. También proporciona varias herramientas para el ajuste de modelos, el preprocesamiento de datos, la selección y evaluación de modelos, y muchas otras utilidades."
+
+En este curso, usarás Scikit-learn y otras herramientas para construir modelos de aprendizaje automático para realizar lo que llamamos tareas de 'aprendizaje automático tradicional'. Hemos evitado deliberadamente las redes neuronales y el aprendizaje profundo, ya que están mejor cubiertos en nuestro próximo plan de estudios 'AI for Beginners'.
+
+Scikit-learn hace que sea sencillo construir modelos y evaluarlos para su uso. Se centra principalmente en el uso de datos numéricos y contiene varios conjuntos de datos listos para usar como herramientas de aprendizaje. También incluye modelos pre-construidos para que los estudiantes los prueben. Vamos a explorar el proceso de cargar datos preempaquetados y usar un estimador incorporado para el primer modelo de ML con Scikit-learn con algunos datos básicos.
+
+## Ejercicio - tu primer notebook de Scikit-learn
+
+> Este tutorial fue inspirado por el [ejemplo de regresión lineal](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) en el sitio web de Scikit-learn.
+
+[](https://youtu.be/2xkXL5EUpS0 "ML para principiantes - Tu primer proyecto de regresión lineal en Python")
+
+> 🎥 Haz clic en la imagen de arriba para ver un video corto sobre este ejercicio.
+
+En el archivo _notebook.ipynb_ asociado a esta lección, elimina todas las celdas presionando el ícono de la 'papelera'.
+
+En esta sección, trabajarás con un pequeño conjunto de datos sobre la diabetes que está incorporado en Scikit-learn para fines de aprendizaje. Imagina que quieres probar un tratamiento para pacientes diabéticos. Los modelos de aprendizaje automático podrían ayudarte a determinar qué pacientes responderían mejor al tratamiento, en función de combinaciones de variables. Incluso un modelo de regresión muy básico, cuando se visualiza, podría mostrar información sobre variables que te ayudarían a organizar tus ensayos clínicos teóricos.
+
+✅ Hay muchos tipos de métodos de regresión, y cuál elijas depende de la respuesta que estés buscando. Si deseas predecir la altura probable de una persona de una edad determinada, usarías la regresión lineal, ya que estás buscando un **valor numérico**. Si estás interesado en descubrir si un tipo de cocina debe considerarse vegana o no, estás buscando una **asignación de categoría**, por lo que usarías la regresión logística. Aprenderás más sobre la regresión logística más adelante. Piensa un poco en algunas preguntas que puedes hacer a los datos, y cuál de estos métodos sería más apropiado.
+
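+Como referencia, un esbozo (con datos inventados) de cómo se elegirían los estimadores correspondientes en Scikit-learn:
+
+```python
+import numpy as np
+from sklearn.linear_model import LinearRegression, LogisticRegression
+
+x = np.array([[5], [10], [15]])  # una sola característica, datos inventados
+
+# Pregunta por un valor numérico: regresión lineal
+y_numerico = np.array([110.0, 140.0, 165.0])
+reg = LinearRegression().fit(x, y_numerico)
+
+# Pregunta por una categoría (0 o 1): regresión logística
+y_categoria = np.array([0, 0, 1])
+clf = LogisticRegression().fit(x, y_categoria)
+
+print(reg.predict([[12]]), clf.predict([[12]]))
+```
+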
+Vamos a empezar con esta tarea.
+
+### Importar bibliotecas
+
+Para esta tarea, importaremos algunas bibliotecas:
+
+- **matplotlib**. Es una herramienta útil para [gráficos](https://matplotlib.org/) y la usaremos para crear un gráfico de líneas.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) es una biblioteca útil para manejar datos numéricos en Python.
+- **sklearn**. Esta es la biblioteca [Scikit-learn](https://scikit-learn.org/stable/user_guide.html).
+
+Importa algunas bibliotecas para ayudarte con tus tareas.
+
+1. Agrega las importaciones escribiendo el siguiente código:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Arriba estás importando `matplotlib` y `numpy`, y también `datasets`, `linear_model` y `model_selection` de `sklearn`. `model_selection` se usa para dividir los datos en conjuntos de entrenamiento y prueba.
+
+### El conjunto de datos de diabetes
+
+El [conjunto de datos de diabetes](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) incorporado incluye 442 muestras de datos sobre la diabetes, con 10 variables de características, algunas de las cuales incluyen:
+
+- age: edad en años
+- bmi: índice de masa corporal
+- bp: presión arterial promedio
+- s1 tc: células T (un tipo de glóbulo blanco)
+
+✅ Este conjunto de datos incluye el concepto de 'sexo' como variable de característica importante para la investigación sobre la diabetes. Muchos conjuntos de datos médicos incluyen este tipo de clasificación binaria. Piensa un poco en cómo categorizaciones como esta podrían excluir a ciertas partes de una población de los tratamientos.
+
+Ahora, carga los datos X e y.
+
+> 🎓 Recuerda, esto es aprendizaje supervisado, y necesitamos un objetivo 'y' con nombre.
+
+1. En una nueva celda de código, carga el conjunto de datos de diabetes llamando a `load_diabetes()`. El parámetro `return_X_y=True` indica que `X` será una matriz de datos e `y` será el objetivo de la regresión.
+
+2. Agrega algunos comandos print para mostrar la forma de la matriz de datos y su primer elemento:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   Lo que estás obteniendo como respuesta es una tupla. Lo que estás haciendo es asignar los dos primeros valores de la tupla a `X` e `y`, respectivamente. Aprende más [sobre tuplas](https://wikipedia.org/wiki/Tuple).
+
+ Puedes ver que estos datos tienen 442 elementos organizados en matrices de 10 elementos:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+ ✅ Piensa un poco sobre la relación entre los datos y el objetivo de la regresión. La regresión lineal predice relaciones entre la característica X y la variable objetivo y. ¿Puedes encontrar el [objetivo](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) para el conjunto de datos de diabetes en la documentación? ¿Qué está demostrando este conjunto de datos, dado ese objetivo?
+
+3. A continuación, selecciona una parte de este conjunto de datos para graficar, eligiendo la tercera columna del conjunto de datos. Puedes hacerlo usando el operador `:` para seleccionar todas las filas y luego seleccionando la tercera columna mediante el índice (2). También puedes cambiar la forma de los datos a una matriz 2D - como se requiere para graficar - usando `reshape(n_rows, n_columns)`. Si uno de los parámetros es -1, la dimensión correspondiente se calcula automáticamente.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+ ✅ En cualquier momento, imprime los datos para verificar su forma.
+
+4. Ahora que tienes los datos listos para ser graficados, puedes ver si una máquina puede ayudar a determinar una división lógica entre los números en este conjunto de datos. Para hacer esto, necesitas dividir tanto los datos (X) como el objetivo (y) en conjuntos de prueba y entrenamiento. Scikit-learn tiene una manera sencilla de hacer esto; puedes dividir tus datos de prueba en un punto dado.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+5. ¡Ahora estás listo para entrenar tu modelo! Carga el modelo de regresión lineal y entrénalo con tus conjuntos de entrenamiento X e y usando `model.fit()`:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()` es una función que verás en muchas bibliotecas de ML como TensorFlow
+
+6. Luego, crea una predicción usando los datos de prueba, con la función `predict()`. Esto se usará para dibujar la línea entre los grupos de datos del modelo.
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+7. Ahora es el momento de mostrar los datos en un gráfico. Matplotlib es una herramienta muy útil para esta tarea. Crea un diagrama de dispersión de todos los datos de prueba X e y, y usa la predicción para dibujar una línea en el lugar más apropiado, entre los agrupamientos de datos del modelo.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+ ✅ Piensa un poco en lo que está pasando aquí. Una línea recta atraviesa muchos pequeños puntos de datos, pero ¿qué está haciendo exactamente? ¿Puedes ver cómo deberías poder usar esta línea para predecir dónde debería encajar un nuevo punto de datos no visto en relación con el eje y del gráfico? Intenta poner en palabras el uso práctico de este modelo.
+
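+Por ejemplo, un esbozo que continúa con el `model` entrenado arriba (el valor de BMI escalado es hipotético):
+
+```python
+import numpy as np
+
+# Un valor nuevo (hipotético) de BMI escalado, no visto durante el entrenamiento
+nuevo_bmi = np.array([[0.05]])
+print(model.predict(nuevo_bmi))  # posición estimada sobre el eje y del gráfico
+```
+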
+¡Felicidades, construiste tu primer modelo de regresión lineal, creaste una predicción con él y la mostraste en un gráfico!
+
+---
+## 🚀Desafío
+
+Grafica una variable diferente de este conjunto de datos. Pista: edita esta línea: `X = X[:,2]`. Dado el objetivo de este conjunto de datos, ¿qué puedes descubrir sobre la progresión de la diabetes como enfermedad?
+
+## [Cuestionario después de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Revisión y autoestudio
+
+En este tutorial, trabajaste con regresión lineal simple, en lugar de regresión univariada o múltiple. Lee un poco sobre las diferencias entre estos métodos, o echa un vistazo a [este video](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef).
+
+Lee más sobre el concepto de regresión y piensa en qué tipo de preguntas pueden ser respondidas por esta técnica. Toma este [tutorial](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) para profundizar tu comprensión.
+
+## Tarea
+
+[Un conjunto de datos diferente](assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/1-Tools/assignment.md b/translations/es/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..f3a4deb3a
--- /dev/null
+++ b/translations/es/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Regresión con Scikit-learn
+
+## Instrucciones
+
+Echa un vistazo al [dataset de Linnerud](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) en Scikit-learn. Este dataset tiene múltiples [objetivos](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset): 'Consiste en tres variables de ejercicio (datos) y tres variables fisiológicas (objetivo) recolectadas de veinte hombres de mediana edad en un club de fitness'.
+
+En tus propias palabras, describe cómo crear un modelo de Regresión que trace la relación entre la circunferencia de la cintura y cuántos abdominales se logran. Haz lo mismo para los otros puntos de datos en este conjunto de datos.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| ------------------------------ | ----------------------------------- | ----------------------------- | -------------------------- |
+| Enviar un párrafo descriptivo | Se envía un párrafo bien escrito | Se envían algunas oraciones | No se proporciona descripción |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/1-Tools/solution/Julia/README.md b/translations/es/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..157e605aa
--- /dev/null
+++ b/translations/es/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/2-Data/README.md b/translations/es/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..73def6aa1
--- /dev/null
+++ b/translations/es/2-Regression/2-Data/README.md
@@ -0,0 +1,214 @@
+# Construir un modelo de regresión usando Scikit-learn: preparar y visualizar datos
+
+
+
+Infografía por [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [¡Esta lección está disponible en R!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Introducción
+
+Ahora que tienes las herramientas necesarias para comenzar a construir modelos de aprendizaje automático con Scikit-learn, estás listo para empezar a hacer preguntas a tus datos. A medida que trabajas con datos y aplicas soluciones de ML, es muy importante entender cómo hacer la pregunta correcta para desbloquear adecuadamente el potencial de tu conjunto de datos.
+
+En esta lección, aprenderás:
+
+- Cómo preparar tus datos para la construcción de modelos.
+- Cómo usar Matplotlib para la visualización de datos.
+
+## Hacer la pregunta correcta a tus datos
+
+La pregunta que necesitas responder determinará qué tipo de algoritmos de ML utilizarás. Y la calidad de la respuesta que obtengas dependerá en gran medida de la naturaleza de tus datos.
+
+Echa un vistazo a los [datos](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) proporcionados para esta lección. Puedes abrir este archivo .csv en VS Code. Una rápida ojeada muestra inmediatamente que hay espacios en blanco y una mezcla de cadenas y datos numéricos. También hay una columna extraña llamada 'Package' donde los datos son una mezcla entre 'sacks', 'bins' y otros valores. Los datos, de hecho, son un poco desordenados.
+
+[](https://youtu.be/5qGjczWTrDQ "ML para principiantes - Cómo analizar y limpiar un conjunto de datos")
+
+> 🎥 Haz clic en la imagen de arriba para ver un breve video sobre cómo preparar los datos para esta lección.
+
+De hecho, no es muy común recibir un conjunto de datos completamente listo para usar y crear un modelo de ML de inmediato. En esta lección, aprenderás cómo preparar un conjunto de datos crudo utilizando bibliotecas estándar de Python. También aprenderás varias técnicas para visualizar los datos.
+
+## Estudio de caso: 'el mercado de calabazas'
+
+En esta carpeta encontrarás un archivo .csv en la carpeta raíz `data` llamado [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) que incluye 1757 líneas de datos sobre el mercado de calabazas, ordenados en grupos por ciudad. Estos son datos crudos extraídos de los [Informes Estándar de Mercados Terminales de Cultivos Especiales](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distribuidos por el Departamento de Agricultura de los Estados Unidos.
+
+### Preparando los datos
+
+Estos datos están en el dominio público. Se pueden descargar en muchos archivos separados, por ciudad, desde el sitio web del USDA. Para evitar demasiados archivos separados, hemos concatenado todos los datos de las ciudades en una sola hoja de cálculo, por lo tanto, ya hemos _preparado_ un poco los datos. A continuación, echemos un vistazo más de cerca a los datos.
+
+### Los datos de calabazas - primeras conclusiones
+
+¿Qué notas sobre estos datos? Ya viste que hay una mezcla de cadenas, números, espacios en blanco y valores extraños que necesitas interpretar.
+
+¿Qué pregunta puedes hacer a estos datos, utilizando una técnica de regresión? ¿Qué tal "Predecir el precio de una calabaza en venta durante un mes determinado"? Mirando nuevamente los datos, hay algunos cambios que necesitas hacer para crear la estructura de datos necesaria para la tarea.
+
+## Ejercicio - analizar los datos de calabazas
+
+Vamos a usar [Pandas](https://pandas.pydata.org/) (el nombre significa `Python Data Analysis`), una herramienta muy útil para dar forma a los datos, para analizar y preparar estos datos de calabazas.
+
+### Primero, verifica si hay fechas faltantes
+
+Primero necesitarás tomar medidas para verificar si hay fechas faltantes:
+
+1. Convierte las fechas a un formato de mes (estas son fechas de EE.UU., por lo que el formato es `MM/DD/YYYY`).
+2. Extrae el mes a una nueva columna.
+
+Abre el archivo _notebook.ipynb_ en Visual Studio Code e importa la hoja de cálculo en un nuevo dataframe de Pandas.
+
+1. Usa la función `head()` para ver las primeras cinco filas.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+ ✅ ¿Qué función usarías para ver las últimas cinco filas?
+
+1. Verifica si hay datos faltantes en el dataframe actual:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+ Hay datos faltantes, pero tal vez no importen para la tarea en cuestión.
+
+1. Para hacer que tu dataframe sea más fácil de trabajar, selecciona solo las columnas que necesitas, usando la función `loc`, que extrae del dataframe original un grupo de filas (pasado como primer parámetro) y de columnas (pasado como segundo parámetro). La expresión `:` en el caso a continuación significa "todas las filas".
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### Segundo, determina el precio promedio de la calabaza
+
+Piensa en cómo determinar el precio promedio de una calabaza en un mes dado. ¿Qué columnas elegirías para esta tarea? Pista: necesitarás 3 columnas.
+
+Solución: toma el promedio de las columnas `Low Price` y `High Price` para llenar la nueva columna Price, y convierte la columna Date para mostrar solo el mes. Afortunadamente, según la verificación anterior, no hay datos faltantes para fechas o precios.
+
+1. Para calcular el promedio, agrega el siguiente código:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+ ✅ Siéntete libre de imprimir cualquier dato que desees verificar usando `print(month)`.
+
+2. Ahora, copia tus datos convertidos en un nuevo dataframe de Pandas:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+ Imprimir tu dataframe mostrará un conjunto de datos limpio y ordenado sobre el cual puedes construir tu nuevo modelo de regresión.
+
+### Pero espera, hay algo extraño aquí
+
+Si miras la columna `Package`, verás que las calabazas se venden en muchas configuraciones diferentes. Algunas se venden en medidas de '1 1/9 bushel', otras en medidas de '1/2 bushel', algunas por calabaza, algunas por libra y algunas en grandes cajas de anchos variables.
+
+> Parece muy difícil pesar las calabazas de manera consistente
+
+Investigando los datos originales, es interesante que cualquier cosa con `Unit of Sale` igual a 'EACH' o 'PER BIN' también tiene el tipo de `Package` por pulgada, por bin o 'each'. Las calabazas parecen ser muy difíciles de pesar de manera consistente, así que filtrémoslas seleccionando solo las calabazas que contienen la cadena 'bushel' en su columna `Package`.
+
+1. Agrega un filtro en la parte superior del archivo, debajo de la importación inicial del .csv:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+ Si imprimes los datos ahora, puedes ver que solo estás obteniendo alrededor de 415 filas de datos que contienen calabazas por bushel.
+
+### Pero espera, hay una cosa más por hacer
+
+¿Notaste que la cantidad de bushels varía por fila? Necesitas normalizar los precios para que muestres el precio por bushel, así que haz algunos cálculos para estandarizarlo.
+
+1. Agrega estas líneas después del bloque que crea el dataframe new_pumpkins:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ Según [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), el peso de un bushel depende del tipo de producto, ya que es una medida de volumen. "Un bushel de tomates, por ejemplo, debe pesar 56 libras... Las hojas y verduras ocupan más espacio con menos peso, por lo que un bushel de espinacas pesa solo 20 libras." ¡Es todo bastante complicado! No nos molestemos en hacer una conversión de bushel a libra, y en su lugar, fijemos el precio por bushel. ¡Todo este estudio de bushels de calabazas, sin embargo, muestra lo importante que es entender la naturaleza de tus datos!
+
+Ahora, puedes analizar los precios por unidad basándote en su medida de bushel. Si imprimes los datos una vez más, puedes ver cómo está estandarizado.
+
+✅ ¿Notaste que las calabazas vendidas por medio bushel son muy caras? ¿Puedes averiguar por qué? Pista: las calabazas pequeñas son mucho más caras que las grandes, probablemente porque hay muchas más por bushel, dado el espacio no utilizado que ocupa una gran calabaza hueca para pastel.
+
+## Estrategias de Visualización
+
+Parte del rol del científico de datos es demostrar la calidad y naturaleza de los datos con los que están trabajando. Para hacer esto, a menudo crean visualizaciones interesantes, o gráficos, diagramas y tablas, que muestran diferentes aspectos de los datos. De esta manera, pueden mostrar visualmente relaciones y brechas que de otra manera serían difíciles de descubrir.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML para principiantes - Cómo visualizar datos con Matplotlib")
+
+> 🎥 Haz clic en la imagen de arriba para ver un breve video sobre cómo visualizar los datos para esta lección.
+
+Las visualizaciones también pueden ayudar a determinar la técnica de aprendizaje automático más adecuada para los datos. Un diagrama de dispersión que parece seguir una línea, por ejemplo, indica que los datos son un buen candidato para un ejercicio de regresión lineal.
+
+Una biblioteca de visualización de datos que funciona bien en cuadernos de Jupyter es [Matplotlib](https://matplotlib.org/) (que también viste en la lección anterior).
+
+> Obtén más experiencia con la visualización de datos en [estos tutoriales](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott).
+
+## Ejercicio - experimenta con Matplotlib
+
+Intenta crear algunos gráficos básicos para mostrar el nuevo dataframe que acabas de crear. ¿Qué mostraría un gráfico de líneas básico?
+
+1. Importa Matplotlib en la parte superior del archivo, debajo de la importación de Pandas:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Vuelve a ejecutar todo el cuaderno para actualizar.
+1. Al final del cuaderno, agrega una celda para graficar los datos:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ ¿Es este un gráfico útil? ¿Hay algo que te sorprenda?
+
+ No es particularmente útil, ya que solo muestra tus datos como una dispersión de puntos en un mes dado.
+
+### Hazlo útil
+
+Para que los gráficos muestren datos útiles, generalmente necesitas agrupar los datos de alguna manera. Intentemos crear un gráfico donde el eje y muestre los meses y los datos demuestren la distribución de los datos.
+
+1. Agrega una celda para crear un gráfico de barras agrupado:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ ¡Esta es una visualización de datos más útil! Parece indicar que el precio más alto para las calabazas ocurre en septiembre y octubre. ¿Cumple eso con tus expectativas? ¿Por qué o por qué no?
+
+---
+
+## 🚀Desafío
+
+Explora los diferentes tipos de visualización que ofrece Matplotlib. ¿Cuáles son los más apropiados para problemas de regresión?
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## Revisión y Autoestudio
+
+Echa un vistazo a las muchas formas de visualizar datos. Haz una lista de las diversas bibliotecas disponibles y nota cuáles son las mejores para ciertos tipos de tareas, por ejemplo, visualizaciones 2D vs. visualizaciones 3D. ¿Qué descubres?
+
+## Tarea
+
+[Explorando la visualización](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/2-Data/assignment.md b/translations/es/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..5fc7e6750
--- /dev/null
+++ b/translations/es/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Explorando Visualizaciones
+
+Hay varias bibliotecas diferentes disponibles para la visualización de datos. Crea algunas visualizaciones utilizando los datos de Pumpkin en esta lección con matplotlib y seaborn en un cuaderno de muestra. ¿Qué bibliotecas son más fáciles de usar?
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | -------- | -------- | ---------------- |
+| | Se envía un cuaderno con dos exploraciones/visualizaciones | Se envía un cuaderno con una exploración/visualización | No se envía un cuaderno |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/2-Data/solution/Julia/README.md b/translations/es/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..05d50a82d
--- /dev/null
+++ b/translations/es/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional realizada por humanos. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/2-Regression/3-Linear/README.md b/translations/es/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..0b5de2c3e
--- /dev/null
+++ b/translations/es/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Construye un modelo de regresión usando Scikit-learn: regresión de cuatro maneras
+
+
+> Infografía por [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [¡Esta lección está disponible en R!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+
+### Introducción
+
+Hasta ahora has explorado qué es la regresión con datos de muestra recopilados del conjunto de datos de precios de calabazas que utilizaremos a lo largo de esta lección. También lo has visualizado usando Matplotlib.
+
+Ahora estás listo para profundizar en la regresión para ML. Mientras que la visualización te permite entender los datos, el verdadero poder del Aprendizaje Automático proviene del _entrenamiento de modelos_. Los modelos se entrenan con datos históricos para capturar automáticamente las dependencias de los datos, y te permiten predecir resultados para nuevos datos, que el modelo no ha visto antes.
+
+En esta lección, aprenderás más sobre dos tipos de regresión: _regresión lineal básica_ y _regresión polinómica_, junto con algunas de las matemáticas subyacentes a estas técnicas. Esos modelos nos permitirán predecir los precios de las calabazas dependiendo de diferentes datos de entrada.
+
+[](https://youtu.be/CRxFT8oTDMg "ML para principiantes - Entendiendo la Regresión Lineal")
+
+> 🎥 Haz clic en la imagen de arriba para una breve visión general de la regresión lineal.
+
+> A lo largo de este plan de estudios, asumimos un conocimiento mínimo de matemáticas y buscamos hacerlo accesible para estudiantes que vienen de otros campos, así que mantente atento a las notas, 🧮 los recuadros destacados, los diagramas y otras herramientas de aprendizaje para ayudar a la comprensión.
+
+### Prerrequisitos
+
+Deberías estar familiarizado con la estructura de los datos de calabazas que estamos examinando. Los encontrarás precargados y pre-limpiados en el archivo _notebook.ipynb_ de esta lección. En el archivo, el precio de la calabaza se muestra por bushel en un nuevo data frame. Asegúrate de poder ejecutar estos cuadernos en kernels de Visual Studio Code.
+
+### Preparación
+
+Como recordatorio, estás cargando estos datos para hacerles preguntas.
+
+- ¿Cuándo es el mejor momento para comprar calabazas?
+- ¿Qué precio puedo esperar de una caja de calabazas miniatura?
+- ¿Debería comprarlas en cestas de medio bushel o en cajas de 1 1/9 bushel?
+
+Sigamos profundizando en estos datos.
+
+En la lección anterior, creaste un data frame de Pandas y lo llenaste con parte del conjunto de datos original, estandarizando los precios por bushel. Sin embargo, al hacerlo, solo pudiste reunir alrededor de 400 puntos de datos y solo para los meses de otoño.
+
+Echa un vistazo a los datos que precargamos en el cuaderno acompañante de esta lección. Los datos están precargados y se ha graficado un gráfico de dispersión inicial para mostrar los datos por mes. Tal vez podamos obtener un poco más de detalle sobre la naturaleza de los datos limpiándolos más.
+
+## Una línea de regresión lineal
+
+Como aprendiste en la Lección 1, el objetivo de un ejercicio de regresión lineal es poder trazar una línea para:
+
+- **Mostrar relaciones entre variables**. Mostrar la relación entre variables.
+- **Hacer predicciones**. Hacer predicciones precisas sobre dónde caería un nuevo punto de datos en relación con esa línea.
+
+Es típico de la **Regresión de Mínimos Cuadrados** dibujar este tipo de línea. El término 'mínimos cuadrados' significa que todos los puntos de datos que rodean la línea de regresión se elevan al cuadrado y luego se suman. Idealmente, esa suma final es lo más pequeña posible, porque queremos un número bajo de errores, o `least-squares`.
+
+Hacemos esto ya que queremos modelar una línea que tenga la menor distancia acumulada de todos nuestros puntos de datos. También elevamos al cuadrado los términos antes de sumarlos, ya que nos preocupa su magnitud más que su dirección.
+
+> **🧮 Muéstrame las matemáticas**
+>
+> Esta línea, llamada la _línea de mejor ajuste_ puede expresarse por [una ecuación](https://es.wikipedia.org/wiki/Regresión_lineal_simple):
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` es la 'variable explicativa'. `Y` es la 'variable dependiente'. La pendiente de la línea es `b` y `a` es la intersección con el eje y, que se refiere al valor de `Y` cuando `X = 0`.
+>
+>
+>
+> Primero, calcula la pendiente `b`. Infografía por [Jen Looper](https://twitter.com/jenlooper)
+>
+> En otras palabras, y refiriéndonos a la pregunta original de nuestros datos de calabazas: "predecir el precio de una calabaza por bushel según el mes", `X` se referiría al precio e `Y` al mes de venta.
+>
+>
+>
+> Calcula el valor de Y. Si estás pagando alrededor de $4, ¡debe ser abril! Infografía por [Jen Looper](https://twitter.com/jenlooper)
+>
+> Las matemáticas que calculan la línea deben demostrar la pendiente de la línea, que también depende de la intersección, es decir, dónde se sitúa `Y` cuando `X = 0`.
+>
+> Puedes observar el método de cálculo de estos valores en el sitio web [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html). Visita también [esta calculadora de mínimos cuadrados](https://www.mathsisfun.com/data/least-squares-calculator.html) para ver cómo los valores de los números afectan la línea.
+
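+Como esbozo ilustrativo, la pendiente `b` y la intersección `a` de mínimos cuadrados pueden calcularse directamente con numpy usando las fórmulas clásicas b = Σ(x−x̄)(y−ȳ) / Σ(x−x̄)² y a = ȳ − b·x̄ (los datos de abajo son inventados):
+
+```python
+import numpy as np
+
+# Datos inventados solo para ilustrar el cálculo
+x = np.array([1.0, 2.0, 3.0, 4.0])
+y = np.array([2.1, 3.9, 6.2, 7.8])
+
+b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)  # pendiente
+a = y.mean() - b * x.mean()                                                # intersección
+print(f'Y = {a:.2f} + {b:.2f}X')
+```
+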
+## Correlación
+
+Un término más que hay que entender es el **coeficiente de correlación** entre unas variables X e Y dadas. Usando un gráfico de dispersión, puedes visualizar rápidamente este coeficiente. Un gráfico con puntos de datos dispersos formando una línea ordenada tiene alta correlación, pero un gráfico con puntos de datos dispersos por todas partes entre X e Y tiene baja correlación.
+
+Un buen modelo de regresión lineal será aquel que tenga un coeficiente de correlación alto (más cercano a 1 que a 0) usando el método de regresión de mínimos cuadrados con una línea de regresión.
+
+✅ Ejecuta el cuaderno que acompaña esta lección y mira el gráfico de dispersión de Mes contra Precio. Según tu interpretación visual del gráfico de dispersión, ¿los datos que asocian el Mes con el Precio de venta de calabazas parecen tener una correlación alta o baja? ¿Cambia eso si usas una medida más detallada en lugar de `Month`, por ejemplo, el *día del año* (es decir, el número de días desde el comienzo del año)?
+
+En el código a continuación, asumiremos que hemos limpiado los datos y obtenido un data frame llamado `new_pumpkins`, similar al siguiente:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> El código para limpiar los datos está disponible en [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). Hemos realizado los mismos pasos de limpieza que en la lección anterior y hemos calculado la columna `DayOfYear` usando la siguiente expresión:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Ahora que tienes una comprensión de las matemáticas detrás de la regresión lineal, vamos a crear un modelo de Regresión para ver si podemos predecir qué paquete de calabazas tendrá los mejores precios de calabaza. Alguien que compra calabazas para un huerto de calabazas festivo podría querer esta información para poder optimizar sus compras de paquetes de calabazas para el huerto.
+
+## Buscando correlación
+
+[](https://youtu.be/uoRq-lW2eQo "ML para principiantes - Buscando correlación: La clave para la regresión lineal")
+
+> 🎥 Haz clic en la imagen de arriba para una breve visión general de la correlación.
+
+De la lección anterior, probablemente hayas visto que el precio promedio para diferentes meses se ve así:
+
+
+
+Esto sugiere que debería haber alguna correlación, y podemos intentar entrenar un modelo de regresión lineal para predecir la relación entre `Month` y `Price`, o entre `DayOfYear` y `Price`. Aquí está el gráfico de dispersión que muestra esta última relación:
+
+
+
+Veamos si hay una correlación usando la función `corr`:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+Parece que la correlación es bastante pequeña: -0.15 por `Month` y -0.17 por `DayOfYear`, pero podría haber otra relación importante. Parece que hay diferentes grupos de precios que corresponden a diferentes variedades de calabaza. Para confirmar esta hipótesis, grafiquemos cada categoría de calabaza usando un color diferente. Al pasar un parámetro `ax` a la función de trazado `scatter` podemos trazar todos los puntos en el mismo gráfico:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+Nuestra investigación sugiere que la variedad tiene más efecto en el precio general que la fecha de venta real. Podemos ver esto con un gráfico de barras:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Centrémonos por el momento solo en una variedad de calabaza, el 'tipo para tarta', y veamos qué efecto tiene la fecha en el precio:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+Si ahora calculamos la correlación entre `Price` y `DayOfYear` usando la función `corr`, obtendremos algo como `-0.27`, lo que significa que tiene sentido entrenar un modelo predictivo.
+
+> Antes de entrenar un modelo de regresión lineal, es importante asegurarse de que nuestros datos estén limpios. La regresión lineal no funciona bien con valores faltantes, por lo que tiene sentido deshacerse de todas las celdas vacías:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Otro enfoque sería llenar esos valores vacíos con valores medios de la columna correspondiente.
+
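+Un esbozo de ese otro enfoque, continuando con el dataframe `pie_pumpkins` de arriba:
+
+```python
+# Esbozo del enfoque alternativo: imputar los valores faltantes con la media de la columna
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```
+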
+## Regresión Lineal Simple
+
+[](https://youtu.be/e4c_UP2fSjg "ML para principiantes - Regresión Lineal y Polinómica usando Scikit-learn")
+
+> 🎥 Haz clic en la imagen de arriba para una breve visión general de la regresión lineal y polinómica.
+
+Para entrenar nuestro modelo de Regresión Lineal, utilizaremos la biblioteca **Scikit-learn**.
+
+```python
+import numpy as np                # usado más adelante para calcular el error
+import matplotlib.pyplot as plt   # usado más adelante para graficar
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+Comenzamos separando los valores de entrada (características) y la salida esperada (etiqueta) en matrices numpy separadas:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Nota que tuvimos que realizar `reshape` en los datos de entrada para que el paquete de Regresión Lineal los entienda correctamente. La Regresión Lineal espera una matriz 2D como entrada, donde cada fila de la matriz corresponde a un vector de características de entrada. En nuestro caso, como solo tenemos una entrada, necesitamos una matriz con forma N×1, donde N es el tamaño del conjunto de datos.
+
+Luego, necesitamos dividir los datos en conjuntos de entrenamiento y prueba, para que podamos validar nuestro modelo después del entrenamiento:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Finalmente, entrenar el modelo de Regresión Lineal real toma solo dos líneas de código. Definimos el objeto `LinearRegression` y lo ajustamos a nuestros datos usando el método `fit`:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+El objeto `LinearRegression`, después de ejecutar `fit`, contiene todos los coeficientes de la regresión, a los que se puede acceder mediante la propiedad `.coef_`. En nuestro caso hay solo un coeficiente, que debería estar alrededor de `-0.017`. Esto significa que los precios parecen bajar un poco con el tiempo, pero no demasiado: alrededor de 2 centavos por día. También podemos acceder al punto de intersección de la regresión con el eje Y usando `lin_reg.intercept_`; en nuestro caso será alrededor de `21`, lo que indica el precio al comienzo del año.
+
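+Por ejemplo, para inspeccionar esos valores con el `lin_reg` entrenado arriba:
+
+```python
+# Pendiente e intersección con el eje Y del modelo entrenado
+print('Coeficiente:', lin_reg.coef_)
+print('Intersección:', lin_reg.intercept_)
+```
+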
+Para ver qué tan preciso es nuestro modelo, podemos predecir precios en un conjunto de datos de prueba y luego medir qué tan cerca están nuestras predicciones de los valores esperados. Esto se puede hacer usando la métrica de error cuadrático medio (MSE), que es la media de todas las diferencias al cuadrado entre el valor esperado y el valor predicho (en el código de abajo tomamos su raíz cuadrada, para expresar el error en las mismas unidades que el precio).
+
+```python
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Nuestro error parece ser de alrededor de 2 puntos, lo que es ~17%. No es muy bueno. Otro indicador de la calidad del modelo es el **coeficiente de determinación**, que se puede obtener así:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+Si el valor es 0, significa que el modelo no tiene en cuenta los datos de entrada y actúa como el *peor predictor lineal*, que es simplemente el valor medio del resultado. Un valor de 1 significa que podemos predecir perfectamente todos los resultados esperados. En nuestro caso, el coeficiente es alrededor de 0.06, lo cual es bastante bajo.
+
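+Como referencia, el coeficiente de determinación se define como R² = 1 − SS_res / SS_tot; aquí un esbozo del cálculo manual con valores inventados:
+
+```python
+import numpy as np
+
+# Valores inventados solo para ilustrar la fórmula
+y_real = np.array([3.0, 5.0, 7.0, 9.0])
+y_pred = np.array([2.8, 5.3, 6.9, 9.2])
+
+ss_res = np.sum((y_real - y_pred) ** 2)         # suma de residuos al cuadrado
+ss_tot = np.sum((y_real - y_real.mean()) ** 2)  # variación total respecto a la media
+print('R2:', 1 - ss_res / ss_tot)
+```
+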
+También podemos graficar los datos de prueba junto con la línea de regresión para ver mejor cómo funciona la regresión en nuestro caso:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Regresión Polinómica
+
+Otro tipo de Regresión Lineal es la Regresión Polinómica. Aunque a veces hay una relación lineal entre variables, como que cuanto mayor es el volumen de la calabaza, mayor es el precio, a veces estas relaciones no se pueden trazar como un plano o una línea recta.
+
+✅ Aquí hay [algunos ejemplos más](https://online.stat.psu.edu/stat501/lesson/9/9.8) de datos que podrían usar Regresión Polinómica
+
+Mira nuevamente la relación entre Fecha y Precio. ¿Parece que este gráfico de dispersión debería necesariamente ser analizado por una línea recta? ¿No pueden fluctuar los precios? En este caso, puedes intentar la regresión polinómica.
+
+✅ Los polinomios son expresiones matemáticas que pueden consistir en una o más variables y coeficientes
+
+La regresión polinómica crea una línea curva para ajustar mejor los datos no lineales. En nuestro caso, si incluimos una variable `DayOfYear` al cuadrado en los datos de entrada, deberíamos poder ajustar nuestros datos con una curva parabólica, que tendrá un mínimo en un cierto punto dentro del año.
+
+Scikit-learn incluye una útil [API de pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) para combinar diferentes pasos de procesamiento de datos juntos. Un **pipeline** es una cadena de **estimadores**. En nuestro caso, crearemos un pipeline que primero agregue características polinómicas a nuestro modelo, y luego entrene la regresión:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+Usar `PolynomialFeatures(2)` significa que incluiremos todos los polinomios de segundo grado de los datos de entrada. En nuestro caso solo significará `DayOfYear`², pero dadas dos variables de entrada X e Y, esto agregará X², XY e Y². También podemos usar polinomios de grado superior si queremos.
+
+Los pipelines se pueden usar de la misma manera que el objeto `LinearRegression` original, es decir, podemos hacer `fit` del pipeline y luego usar `predict` para obtener los resultados de predicción. Aquí está el gráfico que muestra los datos de prueba y la curva de aproximación:
+
+
+
+Usando la Regresión Polinómica podemos obtener un MSE ligeramente menor y una determinación mayor, pero no de forma significativa. ¡Necesitamos tener en cuenta otras características!
+
+> Puedes ver que los precios mínimos de las calabazas se observan en algún momento alrededor de Halloween. ¿Cómo puedes explicarlo?
+
+🎃 ¡Felicidades! Acabas de crear un modelo que puede ayudar a predecir el precio de las calabazas para tarta. Probablemente podrías repetir el mismo procedimiento para todos los tipos de calabaza, pero eso sería tedioso. ¡Aprendamos ahora cómo tener en cuenta la variedad de calabaza en nuestro modelo!
+
+## Características categóricas
+
+En un mundo ideal, queremos poder predecir los precios de diferentes variedades de calabaza usando el mismo modelo. Sin embargo, la columna `Variety` es algo diferente de columnas como `Month`, porque contiene valores no numéricos. Tales columnas se llaman **categóricas**.
+
+[](https://youtu.be/DYGliioIAE0 "ML para principiantes - Predicciones de características categóricas con Regresión Lineal")
+
+> 🎥 Haz clic en la imagen de arriba para una breve visión general del uso de características categóricas.
+
+Aquí puedes ver cómo el precio promedio depende de la variedad:
+
+
+
+Para tener en cuenta la variedad, primero necesitamos convertirla a forma numérica, es decir, **codificarla**. Hay varias maneras de hacerlo:
+
+* La **codificación numérica** simple construirá una tabla de las diferentes variedades y luego reemplazará el nombre de la variedad por un índice en esa tabla (ver el esbozo después de esta lista). Esta no es la mejor idea para la regresión lineal, porque la regresión lineal toma el valor numérico real del índice y lo suma al resultado, multiplicándolo por algún coeficiente. En nuestro caso, la relación entre el número de índice y el precio es claramente no lineal, incluso si nos aseguramos de que los índices estén ordenados de alguna manera específica.
+* La **codificación one-hot** reemplazará la columna `Variety` por 4 columnas diferentes, una para cada variedad. Cada columna contendrá `1` si la fila correspondiente es de una variedad dada y `0` de lo contrario. Esto significa que habrá cuatro coeficientes en la regresión lineal, uno para cada variedad de calabaza, responsable del "precio inicial" (o más bien del "precio adicional") para esa variedad en particular.
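+
+Un esbozo de esa codificación numérica simple con pandas, solo a modo de comparación (no la usaremos para el modelo):
+
+```python
+# Codificación numérica simple: cada variedad se reemplaza por un índice entero
+codigos = new_pumpkins['Variety'].astype('category').cat.codes
+print(codigos.head())
+```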
+
+El código a continuación muestra cómo podemos aplicar la codificación one-hot a la variedad:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+Para entrenar la regresión lineal usando la variedad codificada con one-hot como entrada, solo necesitamos inicializar los datos `X` e `y` correctamente:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+El resto del código es el mismo que usamos arriba para entrenar la Regresión Lineal. Si lo pruebas, verás que el error cuadrático medio es aproximadamente el mismo, pero obtenemos un coeficiente de determinación mucho más alto (~77%). Para obtener predicciones aún más precisas, podemos tener en cuenta más características categóricas, así como características numéricas, como `Month` o `DayOfYear`. Para obtener una gran matriz de características, podemos usar `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Aquí también tenemos en cuenta `City` y el tipo de `Package`, ¡lo que nos da un MSE de 2.84 (10%) y una determinación de 0.94!
+
+## Poniéndolo todo junto
+
+Para hacer el mejor modelo, podemos usar los datos combinados (categóricos codificados con one-hot + numéricos) del ejemplo anterior junto con la Regresión Polinómica. Aquí está el código completo para tu conveniencia:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+This should give us the best coefficient of determination, almost 97%, and MSE=2.23 (~8% prediction error).
+
+| Model | MSE | Determination |
+|-------|-----|---------------|
+| `DayOfYear` Linear | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` Polynomial | 2.73 (17.0%) | 0.08 |
+| `Variety` Linear | 5.24 (19.7%) | 0.77 |
+| All features Linear | 2.84 (10.5%) | 0.94 |
+| All features Polynomial | 2.23 (8.25%) | 0.97 |
+
+🏆 Well done! You created four regression models in one lesson, and improved the model quality to 97%. In the final section on regression, you will learn about logistic regression to determine categories.
+
+---
+## 🚀Challenge
+
+Test several different variables in this notebook to see how correlation corresponds to model accuracy.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## Review & Self Study
+
+In this lesson we learned about linear regression. There are other important types of regression. Read about Stepwise, Ridge, Lasso and Elasticnet techniques. A good course to study to learn more is the [Stanford Statistical Learning course](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning)
+
+## Assignment
+
+[Build a Model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/3-Linear/assignment.md b/translations/es/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..576d55608
--- /dev/null
+++ b/translations/es/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Create a Regression Model
+
+## Instructions
+
+In this lesson you were shown how to build a model using both linear and polynomial regression. Using this knowledge, find a dataset or use one of Scikit-learn's built-in sets to build a fresh model. Explain in your notebook why you chose the technique you did, and demonstrate your model's accuracy. If it is not accurate, explain why.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------ | -------------------------- | ------------------------------- |
+| | presents a complete notebook with a well-documented solution | the solution is incomplete | the solution is flawed or buggy |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/3-Linear/solution/Julia/README.md b/translations/es/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..4da585735
--- /dev/null
+++ b/translations/es/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+ **Disclaimer**:
+ This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/4-Logistic/README.md b/translations/es/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..73d72e34d
--- /dev/null
+++ b/translations/es/2-Regression/4-Logistic/README.md
@@ -0,0 +1,381 @@
+# Logistic regression to predict categories
+
+
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Introduction
+
+In this final lesson on Regression, one of the basic _classic_ ML techniques, we will take a look at Logistic Regression. You would use this technique to discover patterns to predict binary categories. Is this candy chocolate or not? Is this disease contagious or not? Will this customer choose this product or not?
+
+In this lesson, you will learn:
+
+- A new library for data visualization
+- Techniques for logistic regression
+
+✅ Deepen your understanding of working with this type of regression in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)
+
+## Prerequisite
+
+Having worked with the pumpkin data, we are now familiar enough with it to realize that there's one binary category we can work with: `Color`.
+
+Let's build a logistic regression model to predict, given some variables, _what color a given pumpkin is likely to be_ (orange 🎃 or white 👻).
+
+> Why are we talking about binary classification in a lesson grouping about regression? Only for linguistic convenience, as logistic regression is [really a classification method](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), albeit a linear-based one. Learn about other ways to classify data in the next lesson group.
+
+## Define the question
+
+For our purposes, we will express this as a binary: 'White' or 'Not White'. There is also a 'striped' category in our dataset, but there are few instances of it, so we will not use it. It disappears once we remove null values from the dataset, anyway.
+
+> 🎃 Fun fact: we sometimes call white pumpkins 'ghost' pumpkins. They aren't very easy to carve, so they aren't as popular as the orange ones, but they are cool looking! So we could also reformulate our question as: 'Ghost' or 'Not Ghost'. 👻
+
+## About logistic regression
+
+Logistic regression differs from linear regression, which you learned about previously, in a few important ways.
+
+[](https://youtu.be/KpeCT6nEpBY "ML for beginners - Understanding Logistic Regression for Machine Learning Classification")
+
+> 🎥 Click the image above for a short video overview of logistic regression.
+
+### Binary classification
+
+Logistic regression does not offer the same features as linear regression. The former offers a prediction about a binary category ("white or not white"), whereas the latter is capable of predicting continual values, for example, given the origin of a pumpkin and the time of harvest, _how much its price will rise_.
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### Other classifications
+
+There are other types of logistic regression, including multinomial and ordinal:
+
+- **Multinomial**, which involves having more than one category - "Orange, White, and Striped".
+- **Ordinal**, which involves ordered categories, useful if we wanted to order our outcomes logically, like our pumpkins that are ordered by a finite number of sizes (mini, small, medium, large, xl, xxl).
+
+
+
+### Variables DO NOT have to correlate
+
+Remember how linear regression worked better with more correlated variables? Logistic regression is the opposite - the variables don't have to align. That works for this data, which has somewhat weak correlations.
+
+### You need a lot of clean data
+
+Logistic regression will give more accurate results if you use more data; our small dataset is not optimal for this task, so keep that in mind.
+
+[](https://youtu.be/B2X4H9vcXTs "ML for beginners - Data Analysis and Preparation for Logistic Regression")
+
+> 🎥 Click the image above for a short video overview of preparing data for logistic regression
+
+✅ Think about the types of data that would lend themselves well to logistic regression
+
+## Exercise - tidy the data
+
+First, clean the data a bit, dropping null values and selecting only some of the columns:
+
+1. Add the following code:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+    You can always take a peek at your new dataframe:
+
+    ```python
+    pumpkins.info()
+    ```
+
+### Visualization - categorical plot
+
+By now you have loaded up the [starter notebook](../../../../2-Regression/4-Logistic/notebook.ipynb) with pumpkin data once again and cleaned it so as to preserve a dataset containing a few variables, including `Color`. Let's visualize the dataframe in the notebook using a different library: [Seaborn](https://seaborn.pydata.org/index.html), which is built on Matplotlib, which we used earlier.
+
+Seaborn offers some neat ways to visualize your data. For example, you can compare distributions of the data for each `Variety` and `Color` in a categorical plot.
+
+1. Create such a plot by using the `catplot` function, using our pumpkin data `pumpkins`, and specifying a color mapping for each pumpkin category (orange or white):
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+    By observing the data, you can see how the Color data relates to Variety.
+
+    ✅ Given this categorical plot, what are some interesting explorations you can envision?
+
+### Data pre-processing: feature and label encoding
+
+Our pumpkins dataset contains string values for all its columns. Working with categorical data is intuitive for humans but not for machines. Machine learning algorithms work well with numbers. That's why encoding is a very important step in the data pre-processing phase, since it enables us to turn categorical data into numerical data without losing any information. Good encoding leads to building a good model.
+
+For feature encoding there are two main types of encoders:
+
+1. Ordinal encoder: it suits ordinal variables well, which are categorical variables whose data follows a logical ordering, like the `Item Size` column in our dataset. It creates a mapping such that each category is represented by a number, which is the order of the category in the column.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Categorical encoder: it suits nominal variables well, which are categorical variables whose data does not follow a logical ordering, like all the features other than `Item Size` in our dataset. It is a one-hot encoding, which means that each category is represented by a binary column: the encoded variable is equal to 1 if the pumpkin belongs to that variety and 0 otherwise.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+
+Then, `ColumnTransformer` is used to combine multiple encoders into a single step and apply them to the appropriate columns.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+
+On the other hand, to encode the label, we use scikit-learn's `LabelEncoder` class, which is a utility class to help normalize labels such that they contain only values between 0 and n_classes-1 (here, 0 and 1).
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+
+Once we have encoded the features and the label, we can merge them into a new dataframe `encoded_pumpkins`.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+
+✅ What are the advantages of using an ordinal encoder for the `Item Size` column?
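+
+To see the trade-off concretely, here is a minimal sketch (reusing the `item_size_categories` list and the `pumpkins` dataframe from above) that contrasts the two encodings of `Item Size`:
+
+```python
+from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder
+
+# Ordinal: a single column whose values preserve the size order (0..6)
+ordinal = OrdinalEncoder(categories=item_size_categories).fit_transform(pumpkins[['Item Size']])
+
+# One-hot: seven unordered binary columns; the size ordering is lost
+onehot = OneHotEncoder(sparse_output=False).fit_transform(pumpkins[['Item Size']])
+
+print(ordinal.shape, onehot.shape)  # (n, 1) vs (n, 7)
+```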
+
+### Analyse relationships between variables
+
+Now that we have pre-processed our data, we can analyse the relationships between the features and the label to grasp an idea of how well the model will be able to predict the label given the features.
+The best way to perform this kind of analysis is plotting the data. We'll be using the Seaborn `catplot` function again, to visualize the relationships between `Item Size`, `Variety` and `Color` in a categorical plot. To better plot the data, we'll use the encoded `Item Size` column and the unencoded `Variety` column.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Use a swarm plot
+
+Since Color is a binary category (White or Not), it needs 'a [specialized approach](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar) to visualization'. There are other ways to visualize the relationship of this category with other variables.
+
+You can visualize variables side-by-side with Seaborn plots.
+
+1. Try a 'swarm' plot to show the distribution of values:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Watch out**: the code above might generate a warning, since seaborn fails to represent such an amount of datapoints in a swarm plot. A possible solution is decreasing the size of the marker, by using the 'size' parameter. However, be aware that this affects the readability of the plot.
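+
+As a sketch of that workaround, here is the same plot with smaller markers (the marker size is in points, and 2 is an arbitrary choice):
+
+```python
+# Smaller markers reduce overlap warnings, at some cost to readability
+sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins,
+              palette=palette, size=2)
+```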
+
+> **🧮 Show Me The Math**
+>
+> Logistic regression relies on the concept of 'maximum likelihood' using [sigmoid functions](https://wikipedia.org/wiki/Sigmoid_function). A 'Sigmoid Function' on a plot looks like an 'S' shape. It takes a value and maps it to somewhere between 0 and 1. Its curve is also called a 'logistic curve'. Its formula looks like this:
+>
+> 
+>
+> where the sigmoid's midpoint finds itself at x's 0 point, L is the curve's maximum value, and k is the steepness of the curve. If the outcome of the function is more than 0.5, the label in question will be given the class '1' of the binary choice. If not, it will be classified as '0'.
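+
+As a small numeric illustration of that formula, with the common choices L = 1, k = 1 and midpoint x0 = 0 it reduces to f(x) = 1 / (1 + e^(-x)); a minimal sketch:
+
+```python
+import numpy as np
+
+def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+    """Logistic function: L is the maximum value, k the steepness, x0 the midpoint."""
+    return L / (1 + np.exp(-k * (x - x0)))
+
+for x in [-4, -1, 0, 1, 4]:
+    p = sigmoid(x)
+    print(f"sigmoid({x:+d}) = {p:.3f} -> class {1 if p > 0.5 else 0}")
+```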
+
+## Build your model
+
+Building a model to find these binary classifications is surprisingly straightforward in Scikit-learn.
+
+[](https://youtu.be/MmZS2otPrQ8 "ML for beginners - Logistic Regression for classification of data")
+
+> 🎥 Click the image above for a short video overview of building a logistic regression model
+
+1. Select the variables you want to use in your classification model and split the training and test sets by calling `train_test_split()`:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Now you can train your model, by calling `fit()` with your training data, and print out its result:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+    Take a look at your model's scoreboard. It's not bad, considering you have only about 1000 rows of data:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Better comprehension via a confusion matrix
+
+While you can get a scoreboard report of [terms](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) by printing out the items above, you might be able to understand your model more easily by using a [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix) to help us understand how the model is performing.
+
+> 🎓 A '[confusion matrix](https://wikipedia.org/wiki/Confusion_matrix)' (or 'error matrix') is a table that expresses your model's true vs. false positives and negatives, thus gauging the accuracy of predictions.
+
+1. To use a confusion matrix, call `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+    Take a look at your model's confusion matrix:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+In Scikit-learn, the rows (axis 0) of a confusion matrix are actual labels and the columns (axis 1) are predicted labels.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+What's going on here? Let's say our model is asked to classify pumpkins between two binary categories, category 'white' and category 'not-white'.
+
+- If your model predicts a pumpkin as not white and it belongs to category 'not-white' in reality, we call it a true negative, shown by the top left number.
+- If your model predicts a pumpkin as white and it belongs to category 'not-white' in reality, we call it a false positive, shown by the top right number.
+- If your model predicts a pumpkin as not white and it belongs to category 'white' in reality, we call it a false negative, shown by the bottom left number.
+- If your model predicts a pumpkin as white and it belongs to category 'white' in reality, we call it a true positive, shown by the bottom right number.
+
+As you might have guessed, it's preferable to have a larger number of true positives and true negatives and a lower number of false positives and false negatives, which implies that the model performs better.
+
+How does the confusion matrix relate to precision and recall? Remember, the classification report printed above showed precision (0.85) and recall (0.67).
+
+Precision = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+Recall = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
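+
+You can verify these hand calculations with scikit-learn, reusing `y_test` and `predictions` from above; a quick check:
+
+```python
+from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score
+
+# Unpack the matrix in scikit-learn's layout (rows = actual, columns = predicted)
+tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
+print(tn, fp, fn, tp)  # 162 4 11 22
+
+print('Precision:', precision_score(y_test, predictions))  # 22 / (22 + 4)
+print('Recall:   ', recall_score(y_test, predictions))     # 22 / (22 + 11)
+print('Accuracy: ', accuracy_score(y_test, predictions))   # (162 + 22) / 199
+```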
+
+✅ Q: According to the confusion matrix, how did the model do? A: Not bad; there are a good number of true negatives but also a few false negatives.
+
+Let's revisit the terms we saw earlier with the help of the confusion matrix's mapping of TP/TN and FP/FN:
+
+🎓 Precision: TP/(TP + FP) The fraction of relevant instances among the retrieved instances (e.g. which labels were well-labeled)
+
+🎓 Recall: TP/(TP + FN) The fraction of relevant instances that were retrieved, whether well-labeled or not
+
+🎓 f1-score: (2 * precision * recall)/(precision + recall) A weighted average of the precision and recall, with best being 1 and worst being 0
+
+🎓 Support: The number of occurrences of each label retrieved
+
+🎓 Accuracy: (TP + TN)/(TP + TN + FP + FN) The percentage of labels predicted accurately for a sample.
+
+🎓 Macro Avg: The calculation of the unweighted mean metrics for each label, not taking label imbalance into account.
+
+🎓 Weighted Avg: The calculation of the mean metrics for each label, taking label imbalance into account by weighting them by their support (the number of true instances for each label).
+
+✅ Can you think of which metric you should watch if you want your model to reduce the number of false negatives?
+
+## Visualize the ROC curve of this model
+
+[](https://youtu.be/GApO575jTA0 "ML for beginners - Analyzing Logistic Regression Performance with ROC Curves")
+
+> 🎥 Click the image above for a short video overview of ROC curves
+
+Let's do one more visualization to see the so-called 'ROC' curve:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+Using Matplotlib, plot the model's [Receiving Operating Characteristic](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html?highlight=roc) or ROC. ROC curves are often used to get a view of the output of a classifier in terms of its true vs. false positives. "ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis." Thus, the steepness of the curve and the space between the midpoint line and the curve matter: you want a curve that quickly heads up and over the line. In our case, there are false positives to start with, and then the line heads up and over properly:
+
+
+
+Finally, use Scikit-learn's [`roc_auc_score` API](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html?highlight=roc_auc#sklearn.metrics.roc_auc_score) to compute the actual 'Area Under the Curve' (AUC):
+
+```python
+auc = roc_auc_score(y_test,y_scores[:,1])
+print(auc)
+```
+
+The result is `0.9749908725812341`. Given that the AUC ranges from 0 to 1, you want a big score, since a model that is 100% correct in its predictions will have an AUC of 1; in this case, the model is _pretty good_.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/4-Logistic/assignment.md b/translations/es/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..6148b0b02
--- /dev/null
+++ b/translations/es/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,13 @@
+# Retrying some Regression
+
+## Instructions
+
+In the lesson, you used a subset of the pumpkin data. Now, go back to the original data and try to use all of it, cleaned and standardized, to build a logistic regression model.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------------------- |
+| | A notebook is presented with a well-explained and well-performing model | A notebook is presented with a model that performs minimally | A notebook is presented with a sub-performing model or none |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/4-Logistic/solution/Julia/README.md b/translations/es/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..f2f4f1eaf
--- /dev/null
+++ b/translations/es/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/2-Regression/README.md b/translations/es/2-Regression/README.md
new file mode 100644
index 000000000..4459454be
--- /dev/null
+++ b/translations/es/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Regression models for machine learning
+## Regional topic: Regression models for pumpkin prices in North America 🎃
+
+In North America, pumpkins are often carved into scary faces for Halloween. Let's discover more about these fascinating vegetables!
+
+
+> Photo by Beth Teutschmann on Unsplash
+
+## What you will learn
+
+[](https://youtu.be/5QnJtDad4iQ "Regression Introduction video - Click to Watch!")
+> 🎥 Click the image above for a quick introduction video to this lesson
+
+The lessons in this section cover types of regression in the context of machine learning. Regression models can help determine the _relationship_ between variables. This type of model can predict values such as length, temperature, or age, thus uncovering relationships between variables as it analyzes data points.
+
+In this series of lessons, you'll discover the differences between linear and logistic regression, and when you should prefer one over the other.
+
+[](https://youtu.be/XA3OaoW86R8 "ML for beginners - Introduction to Regression models for Machine Learning")
+
+> 🎥 Click the image above for a short video introducing regression models.
+
+In this group of lessons, you will get set up to begin machine learning tasks, including configuring Visual Studio Code to manage notebooks, the common environment for data scientists. You will discover Scikit-learn, a library for machine learning, and you will build your first models, focusing on regression models in this chapter.
+
+> There are useful low-code tools that can help you learn about working with regression models. Try [Azure ML for this task](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Lessons
+
+1. [Tools of the trade](1-Tools/README.md)
+2. [Managing data](2-Data/README.md)
+3. [Linear and polynomial regression](3-Linear/README.md)
+4. [Logistic regression](4-Logistic/README.md)
+
+---
+### Credits
+
+"ML with regression" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper)
+
+♥️ Quiz contributors include: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) and [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+The pumpkin dataset is suggested by [this project on Kaggle](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) and its data is sourced from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture. We have added some points around color based on variety to normalize the distribution. This data is in the public domain.
+
+ **Disclaimer**:
+ This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/3-Web-App/1-Web-App/README.md b/translations/es/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..2eb4c4c65
--- /dev/null
+++ b/translations/es/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Build a Web App to use a ML Model
+
+In this lesson, you will train an ML model on a data set that's out of this world: _UFO sightings over the past century_, sourced from NUFORC's database.
+
+You will learn:
+
+- How to 'pickle' a trained model
+- How to use that model in a Flask app
+
+We will continue our use of notebooks to clean data and train our model, but you can take the process one step further by exploring the use of a model 'in the wild', so to speak: in a web app.
+
+To do this, you need to build a web app using Flask.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Building an app
+
+There are several ways to build web apps to consume machine learning models. Your web architecture may influence the way your model is trained. Imagine that you are working in a business where the data science group has trained a model that they want you to use in an app.
+
+### Considerations
+
+There are many questions you need to ask:
+
+- **Is it a web app or a mobile app?** If you are building a mobile app or need to use the model in an IoT context, you could use [TensorFlow Lite](https://www.tensorflow.org/lite/) and use the model in an Android or iOS app.
+- **Where will the model reside?** In the cloud or locally?
+- **Offline support.** Does the app have to work offline?
+- **What technology was used to train the model?** The chosen technology may influence the tooling you need to use.
+    - **Using TensorFlow.** If you are training a model using TensorFlow, for example, that ecosystem provides the ability to convert a TensorFlow model for use in a web app by using [TensorFlow.js](https://www.tensorflow.org/js/).
+    - **Using PyTorch.** If you are building a model using a library such as [PyTorch](https://pytorch.org/), you have the option to export it in [ONNX](https://onnx.ai/) (Open Neural Network Exchange) format for use in JavaScript web apps that can use the [Onnx Runtime](https://www.onnxruntime.ai/). This option will be explored in a future lesson for a Scikit-learn-trained model.
+    - **Using Lobe.ai or Azure Custom Vision.** If you are using an ML SaaS (Software as a Service) system such as [Lobe.ai](https://lobe.ai/) or [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) to train a model, this type of software provides ways to export the model for many platforms, including building a bespoke API to be queried in the cloud by your online application.
+
+You also have the opportunity to build an entire Flask web app that would be able to train the model itself in a web browser. This can also be done using TensorFlow.js in a JavaScript context.
+
+For our purposes, since we have been working with Python-based notebooks, let's explore the steps you need to take to export a trained model from such a notebook to a format readable by a Python-built web app.
+
+## Tool
+
+For this task, you need two tools: Flask and Pickle, both of which run on Python.
+
+✅ What's [Flask](https://palletsprojects.com/p/flask/)? Defined as a 'micro-framework' by its creators, Flask provides the basic features of web frameworks using Python and a templating engine to build web pages. Take a look at [this Learn module](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) to practice building with Flask.
+
+✅ What's [Pickle](https://docs.python.org/3/library/pickle.html)? Pickle 🥒 is a Python module that serializes and de-serializes a Python object structure. When you 'pickle' a model, you serialize or flatten its structure for use on the web. Be careful: pickle is not intrinsically secure, so be careful if prompted to 'un-pickle' a file. A pickled file has the suffix `.pkl`.
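+
+As a minimal illustration of that round trip (any picklable Python object works the same way as a trained model):
+
+```python
+import pickle
+
+data = {'name': 'demo-model', 'weights': [0.1, 0.2, 0.3]}
+
+# Serialize ('pickle') the object to a file...
+with open('demo.pkl', 'wb') as f:
+    pickle.dump(data, f)
+
+# ...and deserialize ('un-pickle') it back. Only un-pickle files you trust!
+with open('demo.pkl', 'rb') as f:
+    restored = pickle.load(f)
+
+print(restored == data)  # True
+```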
+
+## Exercise - clean your data
+
+In this lesson you'll use data from 80,000 UFO sightings, gathered by [NUFORC](https://nuforc.org) (The National UFO Reporting Center). This data has some interesting descriptions of UFO sightings, for example:
+
+- **Long example description.** "A man emerges from a beam of light that shines on a grassy field at night and he runs towards the Texas Instruments parking lot".
+- **Short example description.** "the lights chased us".
+
+The [ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) spreadsheet includes columns about the `city`, `state` and `country` where the sighting occurred, the object's `shape` and its `latitude` and `longitude`.
+
+In the blank [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) included in this lesson:
+
+1. import `pandas`, `matplotlib`, and `numpy` as you did in previous lessons and import the ufos spreadsheet. You can take a look at a sample data set:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. Convert the ufos data to a small dataframe with fresh titles. Check the unique values in the `Country` field.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Now, you can reduce the amount of data we need to deal with by dropping any null values and only importing sightings between 1-60 seconds:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Import Scikit-learn's `LabelEncoder` library to convert the text values for countries to a number:
+
+    ✅ LabelEncoder encodes data alphabetically
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+    Your data should look like this:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
+
+## Exercise - build your model
+
+Now you can get ready to train a model by dividing the data into the training and testing group.
+
+1. Select the three features you want to train on as your X vector; the y vector will be the `Country`. You want to be able to input `Seconds`, `Latitude` and `Longitude` and get a country id in return.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Train your model using logistic regression:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+The accuracy isn't bad **(around 95%)**, unsurprisingly, since `Country` and `Latitude/Longitude` correlate.
+
+The model you created isn't very revolutionary, as you should be able to infer a `Country` from its `Latitude` and `Longitude`, but it's a good exercise to train from raw data that you cleaned and exported, and then to use this model in a web app.
+
+## Exercise - 'pickle' your model
+
+Now, it's time to _pickle_ your model! You can do that in a few lines of code. Once it's _pickled_, load your pickled model and test it against a sample data array containing values for seconds, latitude and longitude:
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+The model returns **'3'**, which is the country code for the UK. Wild! 👽
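+
+Why '3'? `LabelEncoder` assigns codes in sorted (alphabetical) order, so assuming the country column holds the codes 'au', 'ca', 'de', 'gb' and 'us' after cleaning, 'gb' (the UK) lands on 3. A quick sketch of that mapping:
+
+```python
+from sklearn.preprocessing import LabelEncoder
+
+# Codes are assigned alphabetically: 'au' -> 0, 'ca' -> 1, 'de' -> 2, 'gb' -> 3, 'us' -> 4
+encoder = LabelEncoder().fit(['us', 'gb', 'ca', 'au', 'de'])
+print(dict(zip(encoder.classes_, encoder.transform(encoder.classes_))))
+```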
+
+## Exercise - build a Flask app
+
+Now you can build a Flask app to call your model and return similar results, but in a more visually pleasing way.
+
+1. Start by creating a folder called **web-app** next to the _notebook.ipynb_ file where your _ufo-model.pkl_ file resides.
+
+1. In that folder create three more folders: **static**, with a folder **css** inside it, and **templates**. You should now have the following files and directories:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+    ✅ Refer to the solution folder for a view of the finished app
+
+1. The first file to create in the _web-app_ folder is the **requirements.txt** file. Like _package.json_ in a JavaScript app, this file lists dependencies required by the app. In **requirements.txt** add the lines:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Now, run this file by navigating to _web-app_:
+
+ ```bash
+ cd web-app
+ ```
+
+1. In your terminal type `pip install`, to install the libraries listed in _requirements.txt_:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Now, you're ready to create three more files to finish the app:
+
+    1. Create **app.py** in the root.
+    2. Create **index.html** in the _templates_ directory.
+    3. Create **styles.css** in the _static/css_ directory.
+
+1. Build out the _styles.css_ file with a few styles:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Next, build the _index.html_ file:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+      <div class="grid">
+
+        <div class="box">
+
+          <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+          <form action="{{ url_for('predict') }}" method="post">
+            <input type="number" name="seconds" placeholder="Seconds" required="required" />
+            <input type="text" name="latitude" placeholder="Latitude" required="required" />
+            <input type="text" name="longitude" placeholder="Longitude" required="required" />
+            <button type="submit" class="btn">Predict country where the UFO is seen</button>
+          </form>
+
+          <p>{{ prediction_text }}</p>
+
+        </div>
+      </div>
+
+    </body>
+    </html>
+    ```
+
+    Take a look at the templating in this file. Notice the 'mustache' syntax around variables that will be provided by the app, like the prediction text: `{{}}`. There's also a form that posts a prediction to the `/predict` route.
+
+    Finally, you're ready to build the Python file that drives the consumption of the model and the display of predictions:
+
+1. In `app.py` add:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+    > 💡 Tip: when you add [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) while running the web app using Flask, any changes you make to your application will be reflected immediately without the need to restart the server. Beware! Don't enable this mode in a production app.
+
+If you run `python app.py` or `python3 app.py`, your web server starts up locally, and you can fill out a short form to get an answer to your burning question about where UFOs have been sighted!
+
+Before doing that, take a look at the parts of `app.py`:
+
+1. First, dependencies are loaded and the app starts.
+1. Then, the model is imported.
+1. Then, index.html is rendered on the home route.
+
+On the `/predict` route, several things happen when the form is posted:
+
+1. The form variables are gathered and converted to a numpy array. They are then sent to the model and a prediction is returned.
+2. The countries we want displayed are re-rendered as readable text from their predicted country code, and that value is sent back to index.html to be rendered in the template.
+
+Using a model this way, with Flask and a pickled model, is relatively straightforward. The hardest thing is to understand what shape the data is that must be sent to the model to get a prediction. That all depends on how the model was trained. This one has three data points to be input in order to get a prediction.
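+
+For this particular model, that means a 2-D array with a single row of three values; a quick sketch, reusing the `model` loaded earlier:
+
+```python
+import numpy as np
+
+# One prediction = one row of [seconds, latitude, longitude]
+sample = np.array([50, 44, -12]).reshape(1, -1)  # shape (1, 3)
+print(model.predict(sample))
+```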
+
+In a professional setting, you can see how good communication is necessary between the folks who train the model and those who consume it in a web or mobile app. In our case, it's only one person, you!
+
+---
+
+## 🚀 Challenge
+
+Instead of working in a notebook and importing the model to the Flask app, you could train the model right within the Flask app! Try converting your Python code in the notebook, perhaps after your data is cleaned, to train the model from within the app on a route called `train`. What are the pros and cons of pursuing this method?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Review & Self Study
+
+There are many ways to build a web app to consume ML models. Make a list of the ways you could use JavaScript or Python to build a web app that leverages machine learning. Consider architecture: should the model stay in the app or live in the cloud? If the latter, how would you access it? Draw out an architectural model for an applied ML web solution.
+
+## Assignment
+
+[Try a different model](assignment.md)
+
+ **Disclaimer**:
+ This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/3-Web-App/1-Web-App/assignment.md b/translations/es/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..8ff2b4bb3
--- /dev/null
+++ b/translations/es/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Try a different model
+
+## Instructions
+
+Now that you've built one web app using a trained regression model, use one of the models from an earlier regression lesson to redo this web app. You can keep the style or design it differently to reflect the pumpkin data. Be careful to change the inputs to reflect your model's training method.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------------------------------------------------------- | --------------------------------------------------------- | --------------------------------------- |
+| | The web app runs as expected and is deployed to the cloud | The web app contains flaws or exhibits unexpected results | The web app does not function properly |
+
+ **Disclaimer**:
+ This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/3-Web-App/README.md b/translations/es/3-Web-App/README.md
new file mode 100644
index 000000000..d2a059738
--- /dev/null
+++ b/translations/es/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# Build a web app to use your ML model
+
+In this section of the curriculum, you will be introduced to an applied ML topic: how to save your Scikit-learn model as a file that can be used to make predictions within a web application. Once the model is saved, you'll learn how to use it in a web app built in Flask. You'll first create a model using some data that's all about UFO sightings. Then, you'll build a web app that will allow you to input a number of seconds with a latitude and a longitude value to predict which country reported seeing a UFO.
+
+
+
+Photo by Michael Herren on Unsplash
+
+## Lessons
+
+1. [Build a Web App](1-Web-App/README.md)
+
+## Credits
+
+"Build a Web App" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper).
+
+♥️ The quizzes were written by Rohan Raj.
+
+The dataset is sourced from [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings).
+
+The web app architecture was suggested in part by [this article](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) and [this repo](https://github.com/abhinavsagar/machine-learning-deployment) by Abhinav Sagar.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/4-Classification/1-Introduction/README.md b/translations/es/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..1c516ef83
--- /dev/null
+++ b/translations/es/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# Introduction to classification
+
+In these four lessons, you will explore a fundamental focus of classic machine learning - _classification_. We will walk through using various classification algorithms with a dataset about all the brilliant cuisines of Asia and India. Hope you're hungry!
+
+
+
+> Celebrate pan-Asian cuisines in these lessons! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+Classification is a form of [supervised learning](https://wikipedia.org/wiki/Supervised_learning) that bears a lot in common with regression techniques. If machine learning is all about predicting values or names of things by using datasets, then classification generally falls into two groups: _binary classification_ and _multiclass classification_.
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+> 🎥 Click the image above for a video: MIT's John Guttag introduces classification
+
+Remember:
+
+- **Linear regression** helped you predict relationships between variables and make accurate predictions on where a new datapoint would fall in relationship to that line. So, you could predict _what price a pumpkin would be in September vs. December_, for example.
+- **Logistic regression** helped you discover "binary categories": at this price point, _is this pumpkin orange or not-orange_?
+
+Classification uses various algorithms to determine other ways of assigning a data point's label or class. Let's work with this cuisine data to see whether, by observing a group of ingredients, we can determine its cuisine of origin.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [This lesson is available in R!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### Introduction
+
+Classification is one of the fundamental activities of the machine learning researcher and data scientist. From basic classification of a binary value ("is this email spam or not?") to complex image classification and segmentation using computer vision, it's always useful to be able to sort data into classes and ask questions of it.
+
+To state the process in a more scientific way, your classification method creates a predictive model that enables you to map the relationship between input variables and output variables.
+
+
+
+> Binary vs. multiclass problems for classification algorithms to handle. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Before starting the process of cleaning our data, visualizing it, and prepping it for our ML tasks, let's learn a bit about the various ways machine learning can be leveraged to classify data.
+
+Derived from [statistics](https://wikipedia.org/wiki/Statistical_classification), classification using classic machine learning uses features, such as `smoker`, `weight`, and `age`, to determine the _likelihood of developing X disease_. As a supervised learning technique similar to the regression exercises you performed earlier, your data is labeled, and the ML algorithms use those labels to classify and predict classes (or 'features') of a dataset and assign them to a group or outcome.
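+
+As a minimal sketch of that idea with scikit-learn (the feature values and labels below are invented purely for illustration):
+
+```python
+from sklearn.linear_model import LogisticRegression
+
+# Toy rows of [smoker (0/1), weight (kg), age]; labels: 1 = developed the disease
+X = [[1, 90, 60], [0, 70, 30], [1, 85, 55], [0, 60, 25],
+     [1, 95, 65], [0, 75, 35], [0, 68, 40], [1, 88, 58]]
+y = [1, 0, 1, 0, 1, 0, 0, 1]
+
+clf = LogisticRegression().fit(X, y)
+# Probability of each class for a new, unseen person
+print(clf.predict_proba([[1, 80, 50]]))
+```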
+
+✅ Take a moment to imagine a dataset about cuisines. What would a multiclass model be able to answer? What would a binary model be able to answer? What if you wanted to determine whether a given cuisine was likely to use fenugreek? What if you wanted to see if, given a gift of a grocery bag full of star anise, artichokes, cauliflower, and horseradish, you could create a typical Indian dish?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 Click the image above for a video. The whole premise of the show 'Chopped' is the 'mystery basket' where chefs have to make some dish out of a random choice of ingredients. Surely an ML model would have helped!
+
+## Hello 'classifier'
+
+The question we want to ask of this cuisine dataset is actually a **multiclass question**, as we have several potential national cuisines to work with. Given a batch of ingredients, which of these many classes will the data fit?
+
+Scikit-learn offers several different algorithms to use to classify data, depending on the kind of problem you want to solve. In the next two lessons, you'll learn about several of these algorithms.
+
+## Exercise - clean and balance your data
+
+The first task at hand, before starting this project, is to clean and **balance** your data to get better results. Start with the blank _notebook.ipynb_ file in the root of this folder.
+
+The first thing to install is [imblearn](https://imbalanced-learn.org/stable/). This is a Scikit-learn package that will allow you to better balance the data (you will learn more about this task in a minute).
+
+1. To install `imblearn`, run `pip install`, like so:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. Import the packages you need to import your data and visualize it, and also import `SMOTE` from `imblearn`.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+    Now you are set up to read in the data.
+
+1. The next task will be to import the data:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+    Using `read_csv()` will read the content of the csv file _cuisines.csv_ and place it in the variable `df`.
+
+1. Check the data's shape:
+
+ ```python
+ df.head()
+ ```
+
+    The first five rows look like this:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. Obtén información sobre estos datos llamando a `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ Tu salida se parece a:
+
+ ```output
+    <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## Ejercicio - aprendiendo sobre cocinas
+
+Ahora el trabajo empieza a volverse más interesante. Descubramos la distribución de los datos, por cocina.
+
+1. Grafica los datos como barras llamando a `barh()`:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ Hay un número finito de cocinas, pero la distribución de los datos es desigual. ¡Puedes arreglar eso! Antes de hacerlo, explora un poco más.
+
+1. Descubre cuántos datos hay disponibles por cocina e imprímelos:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+    La salida se ve así:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## Descubriendo ingredientes
+
+Ahora puedes profundizar en los datos y aprender cuáles son los ingredientes típicos de cada cocina. Conviene eliminar los datos recurrentes que crean confusión entre cocinas, así que aprendamos sobre este problema.
+
+1. Crea una función `create_ingredient_df()` en Python para crear un dataframe de ingredientes. Esta función comenzará eliminando una columna no útil y ordenando los ingredientes por su frecuencia:
+
+ ```python
+    def create_ingredient_df(df):
+        # transpone para que cada fila sea un ingrediente y suma sus apariciones
+        ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+        # conserva solo los ingredientes que aparecen al menos una vez
+        ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+        # ordena de mayor a menor frecuencia
+        ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+                                                  inplace=False)
+        return ingredient_df
+ ```
+
+ Ahora puedes usar esa función para obtener una idea de los diez ingredientes más populares por cocina.
+
+1. Llama a `create_ingredient_df()` y grafica el resultado llamando a `barh()`:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Haz lo mismo para los datos japoneses:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Ahora para los ingredientes chinos:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Grafica los ingredientes indios:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Finalmente, grafica los ingredientes coreanos:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Ahora, elimina los ingredientes más comunes que crean confusión entre cocinas distintas, llamando a `drop()`:
+
+ ¡A todos les encanta el arroz, el ajo y el jengibre!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## Equilibrar el conjunto de datos
+
+Ahora que has limpiado los datos, usa [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html), la "Técnica de Sobremuestreo Sintético de Minorías", para equilibrarlos.
+
+1. Llama a `fit_resample()`; esta estrategia genera nuevas muestras mediante interpolación.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ Al equilibrar tus datos, tendrás mejores resultados al clasificarlos. Piensa en una clasificación binaria. Si la mayoría de tus datos son de una clase, un modelo de aprendizaje automático va a predecir esa clase con más frecuencia, simplemente porque hay más datos para ella. El equilibrado de los datos toma cualquier dato sesgado y ayuda a eliminar este desequilibrio.
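+
+    Como ilustración rápida (suposición: `feature_df` y `labels_df` del paso anterior siguen en memoria; `DummyClassifier` es un añadido de este esbozo, no parte de la lección), un clasificador que siempre predice la clase mayoritaria muestra ese sesgo:
+
+    ```python
+    from sklearn.dummy import DummyClassifier
+
+    # siempre predice la clase más frecuente ('korean', 799 de 2448 filas)
+    dummy = DummyClassifier(strategy='most_frequent').fit(feature_df, labels_df)
+    print(dummy.score(feature_df, labels_df))  # ≈ 0.33 sin mirar los ingredientes
+    ```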
+
+1. Ahora puedes revisar el número de etiquetas por cocina:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ Tu salida se ve así:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+    ¡Los datos están limpios, equilibrados y muy deliciosos!
+
+1. El último paso es guardar tus datos equilibrados, incluyendo etiquetas y características, en un nuevo dataframe que se pueda exportar a un archivo:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. Puedes echar un último vistazo a los datos usando `transformed_df.head()` y `transformed_df.info()`. Guarda una copia de estos datos para usarlos en futuras lecciones:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ Este nuevo CSV se puede encontrar ahora en la carpeta de datos raíz.
+
+---
+
+## 🚀Desafío
+
+Este plan de estudios contiene varios conjuntos de datos interesantes. Explora las carpetas `data` y ve si alguna contiene conjuntos de datos que serían apropiados para clasificación binaria o multiclase. ¿Qué preguntas harías sobre este conjunto de datos?
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## Revisión y autoestudio
+
+Explora la API de SMOTE. ¿Para qué casos de uso es más adecuado? ¿Qué problemas resuelve?
+
+## Tarea
+
+[Explora métodos de clasificación](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/1-Introduction/assignment.md b/translations/es/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..3904e9e01
--- /dev/null
+++ b/translations/es/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Explora métodos de clasificación
+
+## Instrucciones
+
+En la [documentación de Scikit-learn](https://scikit-learn.org/stable/supervised_learning.html) encontrarás una gran lista de formas de clasificar datos. Haz una pequeña búsqueda del tesoro en estos documentos: tu objetivo es buscar métodos de clasificación y emparejar un conjunto de datos en este plan de estudios, una pregunta que puedas hacer sobre él y una técnica de clasificación. Crea una hoja de cálculo o tabla en un archivo .doc y explica cómo funcionaría el conjunto de datos con el algoritmo de clasificación.
+
+## Rúbrica
+
+| Criterio | Ejemplar | Adecuado | Necesita Mejora |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | se presenta un documento que resume 5 algoritmos junto con una técnica de clasificación. El resumen está bien explicado y detallado. | se presenta un documento que resume 3 algoritmos junto con una técnica de clasificación. El resumen está bien explicado y detallado. | se presenta un documento que resume menos de tres algoritmos junto con una técnica de clasificación y el resumen no está bien explicado ni detallado. |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/1-Introduction/solution/Julia/README.md b/translations/es/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..a22a164cc
--- /dev/null
+++ b/translations/es/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/2-Classifiers-1/README.md b/translations/es/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..f90111e42
--- /dev/null
+++ b/translations/es/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,77 @@
+# Clasificadores de Cocina 1
+
+En esta lección, utilizarás el conjunto de datos que guardaste de la última lección, lleno de datos equilibrados y limpios sobre cocinas.
+
+Utilizarás este conjunto de datos con una variedad de clasificadores para _predecir una cocina nacional dada un grupo de ingredientes_. Mientras lo haces, aprenderás más sobre algunas de las formas en que los algoritmos pueden ser aprovechados para tareas de clasificación.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+## Preparación
+
+Asumiendo que completaste [Lección 1](../1-Introduction/README.md), asegúrate de que exista un archivo _cleaned_cuisines.csv_ en la carpeta raíz `/data` para estas cuatro lecciones.
+
+## Ejercicio - predecir una cocina nacional
+
+1. Trabajando en el archivo _notebook.ipynb_ de esta lección, importa ese archivo junto con la biblioteca Pandas:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ Los datos se ven así:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. Ahora, importa varias bibliotecas más:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Divide las coordenadas X e y en dos dataframes para el entrenamiento. `cuisine` puede ser el dataframe de etiquetas:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ Se verá así:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. Elimina la columna `Unnamed: 0` y la columna `cuisine` llamando a `drop()`. Guarda el resto de los datos como características entrenables:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ Tus características se verán así:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/2-Classifiers-1/assignment.md b/translations/es/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..05a64b0dc
--- /dev/null
+++ b/translations/es/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# Estudiar los solucionadores
+## Instrucciones
+
+En esta lección aprendiste sobre los diversos solucionadores que combinan algoritmos con un proceso de aprendizaje automático para crear un modelo preciso. Revisa los solucionadores listados en la lección y elige dos. Con tus propias palabras, compara y contrasta estos dos solucionadores. ¿Qué tipo de problema abordan? ¿Cómo trabajan con diversas estructuras de datos? ¿Por qué elegirías uno sobre el otro?
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejora |
+| --------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | Se presenta un archivo .doc con dos párrafos, uno sobre cada solucionador, comparándolos cuidadosamente. | Se presenta un archivo .doc con solo un párrafo | La tarea está incompleta |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/es/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..9da98d90d
--- /dev/null
+++ b/translations/es/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/3-Classifiers-2/README.md b/translations/es/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..9acedd864
--- /dev/null
+++ b/translations/es/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# Clasificadores de cocina 2
+
+En esta segunda lección de clasificación, explorarás más formas de clasificar datos numéricos. También aprenderás sobre las implicaciones de elegir un clasificador sobre otro.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### Requisito previo
+
+Asumimos que has completado las lecciones anteriores y tienes un conjunto de datos limpiado en tu carpeta `data` llamado _cleaned_cuisines.csv_ en la raíz de esta carpeta de 4 lecciones.
+
+### Preparación
+
+Hemos cargado tu archivo _notebook.ipynb_ con el conjunto de datos limpiado y lo hemos dividido en dataframes X e y, listos para el proceso de construcción del modelo.
+
+## Un mapa de clasificación
+
+Anteriormente, aprendiste sobre las diversas opciones que tienes al clasificar datos usando la hoja de trucos de Microsoft. Scikit-learn ofrece una hoja de trucos similar, pero más granular, que puede ayudarte a reducir aún más tus estimadores (otro término para clasificadores):
+
+
+> Tip: [visita este mapa en línea](https://scikit-learn.org/stable/tutorial/machine_learning_map/) y haz clic a lo largo del camino para leer la documentación.
+
+### El plan
+
+Este mapa es muy útil una vez que tienes un claro entendimiento de tus datos, ya que puedes 'caminar' por sus caminos hacia una decisión:
+
+- Tenemos >50 muestras
+- Queremos predecir una categoría
+- Tenemos datos etiquetados
+- Tenemos menos de 100K muestras
+- ✨ Podemos elegir un Linear SVC
+- Si eso no funciona, ya que tenemos datos numéricos
+ - Podemos intentar un ✨ KNeighbors Classifier
+ - Si eso no funciona, prueba con ✨ SVC y ✨ Ensemble Classifiers
+
+Este es un camino muy útil a seguir.
+
+## Ejercicio - dividir los datos
+
+Siguiendo este camino, deberíamos comenzar importando algunas bibliotecas para usar.
+
+1. Importa las bibliotecas necesarias:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. Divide tus datos de entrenamiento y prueba:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Clasificador Linear SVC
+
+La clasificación por vectores de soporte (SVC) es un miembro de la familia de técnicas de ML de las máquinas de vectores de soporte (aprende más sobre ellas a continuación). En este método, puedes elegir un 'kernel' para decidir cómo agrupar las etiquetas. El parámetro 'C' controla la 'regularización': la fuerza de la regularización es inversamente proporcional a su valor. El kernel puede ser uno de [varios](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC); aquí lo configuramos en 'linear' para asegurarnos de aprovechar el SVC lineal. La probabilidad por defecto es 'false'; aquí la configuramos en 'true' para obtener estimaciones de probabilidad. Configuramos el estado aleatorio en '0' para que el mezclado de los datos usado en esas estimaciones sea reproducible.
+
+### Ejercicio - aplicar un Linear SVC
+
+Comienza creando un diccionario de clasificadores. Irás agregando entradas progresivamente a medida que pruebas.
+
+1. Comienza con un Linear SVC:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Entrena tu modelo usando el Linear SVC e imprime un informe:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ El resultado es bastante bueno:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## Clasificador K-Neighbors
+
+K-Neighbors es parte de la familia de métodos de ML "neighbors", que pueden usarse tanto para aprendizaje supervisado como no supervisado. En este método, la etiqueta de un punto nuevo se predice a partir de las etiquetas de sus 'k' vecinos más cercanos en los datos de entrenamiento.
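+
+Como esbozo mínimo de esa votación por vecinos (con puntos inventados, ajenos al conjunto de datos de cocinas):
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+
+# dos pequeños grupos de puntos etiquetados
+X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
+y = ['a', 'a', 'a', 'b', 'b', 'b']
+
+knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
+# cada punto nuevo toma la etiqueta mayoritaria de sus 3 vecinos más cercanos
+print(knn.predict([[1, 1], [5, 4]]))  # -> ['a' 'b']
+```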
+
+### Ejercicio - aplicar el clasificador K-Neighbors
+
+El clasificador anterior fue bueno y funcionó bien con los datos, pero tal vez podamos obtener mejor precisión. Prueba con un clasificador K-Neighbors.
+
+1. Agrega una entrada a tu diccionario de clasificadores (agrega una coma después del elemento Linear SVC):
+
+ ```python
+    'KNN classifier': KNeighborsClassifier(C),  # reutiliza C (=10) como número de vecinos
+ ```
+
+ El resultado es un poco peor:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ Aprende sobre [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)
+
+## Clasificador de vectores de soporte
+
+Los clasificadores de vectores de soporte son parte de la familia de métodos de ML [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) que se usan para tareas de clasificación y regresión. Los SVM "mapean ejemplos de entrenamiento a puntos en el espacio" para maximizar la distancia entre dos categorías. Los datos subsecuentes se mapean en este espacio para que se pueda predecir su categoría.
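+
+Como esbozo mínimo de esa idea (con puntos inventados): un SVC lineal busca el límite que maximiza el margen entre dos clases.
+
+```python
+from sklearn.svm import SVC
+
+# dos clases linealmente separables en el plano
+X = [[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]]
+y = [0, 0, 0, 1, 1, 1]
+
+svc = SVC(kernel='linear').fit(X, y)
+print(svc.support_vectors_)           # los puntos que definen el margen
+print(svc.predict([[2, 2], [5, 4]]))  # lados opuestos del límite -> [0 1]
+```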
+
+### Ejercicio - aplicar un clasificador de vectores de soporte
+
+Vamos a intentar obtener una mejor precisión con un clasificador de vectores de soporte.
+
+1. Agrega una coma después del elemento K-Neighbors, y luego agrega esta línea:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ ¡El resultado es bastante bueno!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ Aprende sobre [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm)
+
+## Clasificadores Ensemble
+
+Sigamos el camino hasta el final, aunque la prueba anterior fue bastante buena. Probemos algunos 'Clasificadores Ensemble', específicamente Random Forest y AdaBoost:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+El resultado es muy bueno, especialmente para Random Forest:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ Aprende sobre [Clasificadores Ensemble](https://scikit-learn.org/stable/modules/ensemble.html)
+
+Este método de Machine Learning "combina las predicciones de varios estimadores base" para mejorar la calidad del modelo. En nuestro ejemplo, usamos Random Forest y AdaBoost.
+
+- [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#forest), un método de promediado, construye un 'bosque' de 'árboles de decisión' infundidos con aleatoriedad para evitar el sobreajuste. El parámetro n_estimators se establece en el número de árboles (ver el esbozo después de esta lista).
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) ajusta un clasificador a un conjunto de datos y luego ajusta copias de ese clasificador al mismo conjunto de datos. Se enfoca en los pesos de los elementos clasificados incorrectamente para que el siguiente clasificador corrija esos errores.
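+
+Como esbozo mínimo (suposición: `cuisines_feature_df` y `cuisines_label_df` de esta lección siguen en memoria), podrías comparar el efecto de `n_estimators` así:
+
+```python
+import numpy as np
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+
+# compara el número de árboles según la exactitud con validación cruzada
+for n in [10, 50, 100, 200]:
+    rf = RandomForestClassifier(n_estimators=n, random_state=0)
+    puntajes = cross_val_score(rf, cuisines_feature_df, np.ravel(cuisines_label_df), cv=3)
+    print(f"n_estimators={n}: exactitud media = {puntajes.mean():.3f}")
+```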
+
+---
+
+## 🚀Desafío
+
+Cada una de estas técnicas tiene una gran cantidad de parámetros que puedes ajustar. Investiga los parámetros predeterminados de cada uno y piensa en lo que significaría ajustar estos parámetros para la calidad del modelo.
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## Revisión y autoestudio
+
+Hay mucho argot en estas lecciones, así que tómate un minuto para revisar [esta lista](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) de terminología útil.
+
+## Tarea
+
+[Juego de parámetros](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/3-Classifiers-2/assignment.md b/translations/es/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..62029c80e
--- /dev/null
+++ b/translations/es/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Juego de Parámetros
+
+## Instrucciones
+
+Hay muchos parámetros que se establecen por defecto cuando se trabaja con estos clasificadores. Intellisense en VS Code puede ayudarte a profundizar en ellos. Adopta una de las Técnicas de Clasificación de ML en esta lección y vuelve a entrenar los modelos ajustando varios valores de parámetros. Crea un cuaderno explicando por qué algunos cambios mejoran la calidad del modelo mientras que otros la degradan. Sé detallado en tu respuesta.
+
+## Rúbrica
+
+| Criterio | Ejemplar | Adecuado | Necesita Mejorar |
+| -------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------- |
+| | Se presenta un cuaderno con un clasificador completamente desarrollado y sus parámetros ajustados y cambios explicados en cuadros de texto | Se presenta un cuaderno parcialmente o mal explicado | El cuaderno tiene errores o fallos |
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/es/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..25e7f2a63
--- /dev/null
+++ b/translations/es/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/4-Applied/README.md b/translations/es/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..e3e1ffdac
--- /dev/null
+++ b/translations/es/4-Classification/4-Applied/README.md
@@ -0,0 +1,318 @@
+# Construir una Aplicación Web de Recomendación de Cocina
+
+En esta lección, construirás un modelo de clasificación utilizando algunas de las técnicas que has aprendido en lecciones anteriores y con el delicioso conjunto de datos de cocina utilizado a lo largo de esta serie. Además, construirás una pequeña aplicación web para usar un modelo guardado, aprovechando el runtime web de Onnx.
+
+Uno de los usos prácticos más útiles del aprendizaje automático es construir sistemas de recomendación, ¡y puedes dar el primer paso en esa dirección hoy!
+
+[Applied ML](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 Haz clic en el enlace de arriba para ver un video: Jen Looper construye una aplicación web usando datos de cocina clasificados
+
+## [Cuestionario antes de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+En esta lección aprenderás:
+
+- Cómo construir un modelo y guardarlo como un modelo Onnx
+- Cómo usar Netron para inspeccionar el modelo
+- Cómo usar tu modelo en una aplicación web para inferencia
+
+## Construye tu modelo
+
+Construir sistemas de ML aplicados es una parte importante de aprovechar estas tecnologías para tus sistemas empresariales. Puedes usar modelos dentro de tus aplicaciones web (y así usarlos en un contexto offline si es necesario) utilizando Onnx.
+
+En una [lección anterior](../../3-Web-App/1-Web-App/README.md), construiste un modelo de Regresión sobre avistamientos de OVNIs, lo "encurtiste" y lo usaste en una aplicación Flask. Si bien esta arquitectura es muy útil de conocer, es una aplicación Python de pila completa, y tus requisitos pueden incluir el uso de una aplicación JavaScript.
+
+En esta lección, puedes construir un sistema básico basado en JavaScript para la inferencia. Primero, sin embargo, necesitas entrenar un modelo y convertirlo para usarlo con Onnx.
+
+## Ejercicio - entrenar modelo de clasificación
+
+Primero, entrena un modelo de clasificación utilizando el conjunto de datos de cocinas limpiado que utilizamos.
+
+1. Comienza importando bibliotecas útiles:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ Necesitas '[skl2onnx](https://onnx.ai/sklearn-onnx/)' para ayudar a convertir tu modelo de Scikit-learn al formato Onnx.
+
+1. Luego, trabaja con tus datos de la misma manera que lo hiciste en lecciones anteriores, leyendo un archivo CSV usando `read_csv()`:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. Elimina las dos primeras columnas innecesarias y guarda los datos restantes como 'X':
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. Guarda las etiquetas como 'y':
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### Comienza la rutina de entrenamiento
+
+Usaremos el clasificador 'SVC', que tiene buena exactitud.
+
+1. Importa las bibliotecas apropiadas de Scikit-learn:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. Separa los conjuntos de entrenamiento y prueba:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. Construye un modelo de Clasificación SVC como lo hiciste en la lección anterior:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. Ahora, prueba tu modelo, llamando a `predict()`:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. Imprime un informe de clasificación para verificar la calidad del modelo:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ Como vimos antes, la precisión es buena:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### Convierte tu modelo a Onnx
+
+Asegúrate de hacer la conversión con la dimensión de tensor adecuada. Este conjunto de datos tiene 380 ingredientes listados, por lo que necesitas anotar ese número en `FloatTensorType`:
+
+1. Convierte usando una dimensión de tensor de 380.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. Crea el archivo onx y guárdalo como **model.onnx**:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > Nota: puedes pasar [opciones](https://onnx.ai/sklearn-onnx/parameterized.html) en tu script de conversión. En este caso, pasamos 'nocl' como True y 'zipmap' como False. Dado que este es un modelo de clasificación, tienes la opción de eliminar ZipMap, que produce una lista de diccionarios (no es necesario). `nocl` se refiere a que la información de clases se incluya en el modelo. Reduce el tamaño de tu modelo configurando `nocl` en 'True'.
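+
+Como comprobación opcional (suposición: el paquete `onnx` está disponible en tu entorno; se instala como dependencia de `skl2onnx`), puedes verificar los nombres de entrada y salida del modelo convertido:
+
+```python
+import onnx
+
+modelo = onnx.load("./model.onnx")
+print([entrada.name for entrada in modelo.graph.input])   # debería mostrar ['float_input']
+print([salida.name for salida in modelo.graph.output])    # con zipmap=False: ['label', 'probabilities']
+```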
+
+Al ejecutar el cuaderno completo se construirá ahora un modelo Onnx y se guardará en esta carpeta.
+
+## Visualiza tu modelo
+
+Los modelos Onnx no son muy visibles en Visual Studio Code, pero hay un software gratuito muy bueno que muchos investigadores usan para visualizar el modelo y asegurarse de que esté construido correctamente. Descarga [Netron](https://github.com/lutzroeder/Netron) y abre tu archivo model.onnx. Puedes ver tu modelo simple visualizado, con sus 380 entradas y el clasificador listados:
+
+
+
+Netron es una herramienta útil para ver tus modelos.
+
+Ahora estás listo para usar este estupendo modelo en una aplicación web. Construyamos una aplicación que resulte útil cuando mires tu refrigerador e intentes averiguar qué combinación de tus ingredientes sobrantes puedes usar para cocinar una determinada cocina, según lo determine tu modelo.
+
+## Construye una aplicación web de recomendación
+
+Puedes usar tu modelo directamente en una aplicación web. Esta arquitectura también permite ejecutarlo localmente e incluso sin conexión si es necesario. Comienza creando un archivo `index.html` en la misma carpeta donde guardaste tu archivo `model.onnx`.
+
+1. En este archivo _index.html_, agrega el siguiente marcado:
+
+ ```html
+    <!DOCTYPE html>
+    <html>
+        <head>
+            <title>Cuisine Matcher</title>
+        </head>
+        <body>
+            ...
+        </body>
+    </html>
+ ```
+
+1. Ahora, trabajando dentro de las etiquetas `body`, agrega un poco de marcado para mostrar una lista de casillas de verificación que reflejan algunos ingredientes:
+
+ ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <!-- Reconstrucción abreviada: la lección lista una casilla por ingrediente;
+             aquí solo se muestra 'apple' (índice 4). El 'value' de cada casilla es
+             el índice del ingrediente en ingredient_indexes.csv. -->
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+        <!-- ...más casillas de verificación, una por ingrediente... -->
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+ ```
+
+ Nota que a cada casilla de verificación se le da un valor. Esto refleja el índice donde se encuentra el ingrediente según el conjunto de datos. La manzana, por ejemplo, en esta lista alfabética, ocupa la quinta columna, por lo que su valor es '4' ya que comenzamos a contar desde 0. Puedes consultar la [hoja de cálculo de ingredientes](../../../../4-Classification/data/ingredient_indexes.csv) para descubrir el índice de un ingrediente dado.
+
+    Continuando tu trabajo en el archivo index.html, agrega un bloque de script donde se llama al modelo después del cierre final `</div>`.
+
+1. Primero, importa el [Runtime de Onnx](https://www.onnxruntime.ai/):
+
+ ```html
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
+ ```
+
+    > Onnx Runtime se usa para permitir ejecutar tus modelos Onnx en una amplia gama de plataformas de hardware, con optimizaciones y una API para utilizarlo.
+
+1. Una vez que el Runtime esté en su lugar, puedes llamarlo:
+
+ ```html
+    <script>
+        // Esbozo reconstruido a partir de la descripción que sigue a este bloque.
+        const ingredients = Array(380).fill(0);
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // init: se llama al arrancar la aplicación y conecta las casillas
+        function init() {
+            checks.forEach((check) => {
+                check.addEventListener('change', function () {
+                    // el 'value' de cada casilla es el índice del ingrediente
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        // ¿hay al menos una casilla marcada?
+        function testCheckboxes() {
+            return checks.some((check) => check.checked);
+        }
+
+        async function startInference() {
+            if (!testCheckboxes()) {
+                alert('Please select at least one ingredient.');
+                return;
+            }
+            try {
+                // carga asíncrona del modelo
+                const session = await ort.InferenceSession.create('./model.onnx');
+                // estructura Tensor (1 x 380) para enviar al modelo
+                const input = new ort.Tensor('float32', new Float32Array(ingredients), [1, 380]);
+                // 'feeds' refleja la entrada 'float_input' creada al entrenar
+                const feeds = { float_input: input };
+                // envía los 'feeds' al modelo y espera la respuesta
+                const results = await session.run(feeds);
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } catch (e) {
+                console.log('failed to inference ONNX model: ' + e);
+            }
+        }
+
+        init();
+    </script>
+ ```
+
+En este código, están ocurriendo varias cosas:
+
+1. Creaste un arreglo de 380 valores posibles (1 o 0) que se configuran y se envían al modelo para la inferencia, dependiendo de si la casilla de verificación de un ingrediente está marcada.
+2. Creaste un arreglo de casillas de verificación y una forma de determinar si están marcadas en una función `init` que se llama cuando arranca la aplicación. Cuando una casilla está marcada, el arreglo `ingredients` se modifica para reflejar el ingrediente elegido.
+3. Creaste una función `testCheckboxes` que comprueba si alguna casilla fue marcada.
+4. Usas la función `startInference` cuando se presiona el botón y, si alguna casilla está marcada, comienzas la inferencia.
+5. La rutina de inferencia incluye:
+   1. Configurar una carga asíncrona del modelo
+   2. Crear una estructura Tensor para enviar al modelo
+   3. Crear 'feeds' que reflejan la entrada `float_input` que creaste al entrenar tu modelo (puedes usar Netron para verificar ese nombre)
+   4. Enviar estos 'feeds' al modelo y esperar una respuesta
+
+## Prueba tu aplicación
+
+Abre una sesión de terminal en Visual Studio Code en la carpeta donde reside tu archivo index.html. Asegúrate de tener [http-server](https://www.npmjs.com/package/http-server) instalado globalmente y escribe `http-server` en el indicador. Debería abrirse un localhost donde puedes ver tu aplicación web. Verifica qué cocina se recomienda según varios ingredientes:
+
+
+
+¡Felicidades, has creado una aplicación web de 'recomendación' con unos pocos campos! Tómate un tiempo para desarrollar este sistema.
+
+## 🚀Desafío
+
+Tu aplicación web es muy mínima, así que continúa desarrollándola utilizando ingredientes y sus índices de los datos de [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv). ¿Qué combinaciones de sabores funcionan para crear un plato nacional dado?
+
+## [Cuestionario después de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## Revisión y Autoestudio
+
+Si bien esta lección solo tocó la utilidad de crear un sistema de recomendación para ingredientes alimentarios, esta área de aplicaciones de ML es muy rica en ejemplos. Lee más sobre cómo se construyen estos sistemas:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## Tarea
+
+[Construye un nuevo recomendador](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/4-Applied/assignment.md b/translations/es/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..69f19af88
--- /dev/null
+++ b/translations/es/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# Construye un recomendador
+
+## Instrucciones
+
+Dado tus ejercicios en esta lección, ahora sabes cómo construir una aplicación web basada en JavaScript utilizando Onnx Runtime y un modelo convertido a Onnx. Experimenta construyendo un nuevo recomendador usando datos de estas lecciones o de otras fuentes (da crédito, por favor). Podrías crear un recomendador de mascotas dadas varias características de personalidad, o un recomendador de géneros musicales basado en el estado de ánimo de una persona. ¡Sé creativo!
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | --------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+| | Se presenta una aplicación web y un cuaderno, ambos bien documentados y en funcionamiento | Falta uno de esos dos o tiene errores | Ambos faltan o tienen errores |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/4-Classification/README.md b/translations/es/4-Classification/README.md
new file mode 100644
index 000000000..d93c7f88a
--- /dev/null
+++ b/translations/es/4-Classification/README.md
@@ -0,0 +1,30 @@
+# Empezando con la clasificación
+
+## Tema regional: Deliciosas cocinas asiáticas e indias 🍜
+
+En Asia e India, las tradiciones culinarias son extremadamente diversas y ¡muy deliciosas! Veamos datos sobre las cocinas regionales para tratar de entender sus ingredientes.
+
+
+> Foto de Lisheng Chang en Unsplash
+
+## Lo que aprenderás
+
+En esta sección, profundizarás en tu estudio anterior de la regresión y aprenderás sobre otros clasificadores que puedes usar para comprender mejor los datos.
+
+> Existen herramientas útiles de bajo código que pueden ayudarte a aprender a trabajar con modelos de clasificación. Prueba [Azure ML para esta tarea](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Lecciones
+
+1. [Introducción a la clasificación](1-Introduction/README.md)
+2. [Más clasificadores](2-Classifiers-1/README.md)
+3. [Otros clasificadores](3-Classifiers-2/README.md)
+4. [ML aplicado: construir una aplicación web](4-Applied/README.md)
+
+## Créditos
+
+"Empezando con la clasificación" fue escrito con ♥️ por [Cassie Breviu](https://www.twitter.com/cassiebreviu) y [Jen Looper](https://www.twitter.com/jenlooper)
+
+El conjunto de datos de las deliciosas cocinas fue obtenido de [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines).
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/1-Visualize/README.md b/translations/es/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..be722bc86
--- /dev/null
+++ b/translations/es/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,219 @@
+# Introducción a la agrupación
+
+La agrupación es un tipo de [Aprendizaje No Supervisado](https://wikipedia.org/wiki/Unsupervised_learning) que supone que un conjunto de datos no está etiquetado o que sus entradas no están emparejadas con salidas predefinidas. Utiliza varios algoritmos para clasificar datos no etiquetados y proporcionar agrupaciones según los patrones que discierne en los datos.
+
+[No One Like You por PSquare](https://youtu.be/ty2advRiWJM "No One Like You by PSquare")
+
+> 🎥 Haz clic en el enlace de arriba para ver un video. Mientras estudias el aprendizaje automático con agrupación, disfruta de algunas pistas de Dance Hall nigeriano: esta es una canción muy valorada de 2014 de PSquare.
+
+## [Cuestionario antes de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+### Introducción
+
+[La agrupación](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) es muy útil para la exploración de datos. Veamos si puede ayudar a descubrir tendencias y patrones en la forma en que el público nigeriano consume música.
+
+✅ Tómate un minuto para pensar en los usos de la agrupación. En la vida real, la agrupación ocurre cada vez que tienes una pila de ropa y necesitas separar la ropa de los miembros de tu familia 🧦👕👖🩲. En ciencia de datos, la agrupación ocurre cuando se intenta analizar las preferencias de un usuario o determinar las características de cualquier conjunto de datos no etiquetado. La agrupación, de alguna manera, ayuda a darle sentido al caos, como un cajón de calcetines.
+
+[Introduction to Clustering](https://youtu.be/esmzYhuFnds "Introduction to Clustering")
+
+> 🎥 Haz clic en el enlace de arriba para ver un video: John Guttag del MIT introduce la agrupación
+
+En un entorno profesional, la agrupación puede usarse para determinar cosas como la segmentación del mercado, determinando qué grupos de edad compran qué artículos, por ejemplo. Otro uso sería la detección de anomalías, tal vez para detectar fraudes en un conjunto de datos de transacciones con tarjetas de crédito. O podrías usar la agrupación para determinar tumores en un lote de escaneos médicos.
+
+✅ Piensa un minuto sobre cómo podrías haber encontrado la agrupación 'en la naturaleza', en un entorno bancario, de comercio electrónico o de negocios.
+
+> 🎓 Curiosamente, el análisis de agrupaciones se originó en los campos de la Antropología y la Psicología en la década de 1930. ¿Puedes imaginar cómo podría haberse utilizado?
+
+Alternativamente, podrías usarlo para agrupar resultados de búsqueda: por enlaces de compras, imágenes o reseñas, por ejemplo. La agrupación es útil cuando tienes un conjunto de datos grande que deseas reducir y sobre el cual deseas realizar un análisis más granular, por lo que la técnica puede usarse para aprender sobre los datos antes de que se construyan otros modelos.
+
+✅ Una vez que tus datos estén organizados en grupos, les asignas un Id de grupo; esta técnica puede ser útil para preservar la privacidad de un conjunto de datos: puedes referirte a un punto de datos por su id de grupo en lugar de por datos más reveladores e identificables. ¿Puedes pensar en otras razones por las que preferirías un Id de grupo en lugar de otros elementos del grupo para identificarlo?
+
+Profundiza tu comprensión de las técnicas de agrupación en este [módulo de aprendizaje](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)
+## Empezando con la agrupación
+
+[Scikit-learn ofrece una amplia gama](https://scikit-learn.org/stable/modules/clustering.html) de métodos para realizar agrupaciones. El tipo que elijas dependerá de tu caso de uso. Según la documentación, cada método tiene varios beneficios. Aquí hay una tabla simplificada de los métodos compatibles con Scikit-learn y sus casos de uso apropiados:
+
+| Nombre del método | Caso de uso |
+| :--------------------------- | :------------------------------------------------------------------------- |
+| K-Means | propósito general, inductivo |
+| Affinity propagation | muchos, grupos desiguales, inductivo |
+| Mean-shift | muchos, grupos desiguales, inductivo |
+| Spectral clustering | pocos, grupos uniformes, transductivo |
+| Ward hierarchical clustering | muchos, grupos restringidos, transductivo |
+| Agglomerative clustering | muchos, restringidos, distancias no euclidianas, transductivo |
+| DBSCAN | geometría no plana, grupos desiguales, transductivo |
+| OPTICS | geometría no plana, grupos desiguales con densidad variable, transductivo |
+| Gaussian mixtures | geometría plana, inductivo |
+| BIRCH | conjunto de datos grande con valores atípicos, inductivo |
+
+> 🎓 Cómo creamos grupos tiene mucho que ver con cómo reunimos los puntos de datos en grupos. Desempaquemos algo de vocabulario:
+>
+> 🎓 ['Transductivo' vs. 'inductivo'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> La inferencia transductiva se deriva de casos de entrenamiento observados que se asignan a casos de prueba específicos. La inferencia inductiva se deriva de casos de entrenamiento que se asignan a reglas generales que solo entonces se aplican a casos de prueba.
+>
+> Un ejemplo: Imagina que tienes un conjunto de datos que solo está parcialmente etiquetado. Algunas cosas son 'discos', algunas 'CDs' y algunas están en blanco. Tu trabajo es etiquetar los elementos en blanco. Si eliges un enfoque inductivo, entrenarías un modelo buscando 'discos' y 'CDs' y aplicarías esas etiquetas a tus datos no etiquetados. Este enfoque tendrá problemas para clasificar cosas que en realidad son 'cassettes'. Un enfoque transductivo, en cambio, maneja estos datos desconocidos de manera más eficaz, ya que agrupa elementos similares y luego aplica una etiqueta al grupo. En este caso, los grupos podrían reflejar 'cosas musicales redondas' y 'cosas musicales cuadradas'.
+>
+> 🎓 ['Geometría no plana' vs. 'plana'](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> Derivado de la terminología matemática, la geometría no plana vs. plana se refiere a la medida de distancias entre puntos mediante métodos geométricos 'planos' ([Euclidianos](https://wikipedia.org/wiki/Euclidean_geometry)) o 'no planos' (no euclidianos).
+>
+>'Plana' en este contexto se refiere a la geometría euclidiana (partes de la cual se enseñan como geometría 'plana'), y no plana se refiere a la geometría no euclidiana. ¿Qué tiene que ver la geometría con el aprendizaje automático? Bueno, como dos campos que están arraigados en las matemáticas, debe haber una forma común de medir las distancias entre puntos en grupos, y eso se puede hacer de manera 'plana' o 'no plana', dependiendo de la naturaleza de los datos. Las distancias [euclidianas](https://wikipedia.org/wiki/Euclidean_distance) se miden como la longitud de un segmento de línea entre dos puntos. Las distancias [no euclidianas](https://wikipedia.org/wiki/Non-Euclidean_geometry) se miden a lo largo de una curva. Si tus datos, visualizados, parecen no existir en un plano, es posible que necesites usar un algoritmo especializado para manejarlos.
+>
+
+> Infografía por [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['Distancias'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> Los grupos se definen por su matriz de distancias, es decir, las distancias entre puntos. Esta distancia se puede medir de varias maneras. Los grupos euclidianos se definen por el promedio de los valores de los puntos y contienen un 'centroide' o punto central. Las distancias, por lo tanto, se miden por la distancia a ese centroide. Las distancias no euclidianas se refieren a 'clustroides', el punto más cercano a otros puntos. Los clustroides, a su vez, pueden definirse de varias maneras.
+>
+> 🎓 ['Restringido'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [La Agrupación Restringida](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduce el aprendizaje 'semi-supervisado' en este método no supervisado. Las relaciones entre puntos se marcan como 'no pueden enlazarse' o 'deben enlazarse', por lo que se imponen algunas reglas al conjunto de datos.
+>
+>Un ejemplo: Si un algoritmo se libera en un lote de datos no etiquetados o semi-etiquetados, los grupos que produce pueden ser de baja calidad. En el ejemplo anterior, los grupos podrían agrupar 'cosas musicales redondas' y 'cosas musicales cuadradas' y 'cosas triangulares' y 'galletas'. Si se le dan algunas restricciones o reglas a seguir ("el ítem debe estar hecho de plástico", "el ítem debe poder producir música"), esto puede ayudar a 'restringir' el algoritmo para tomar mejores decisiones.
+>
+> 🎓 'Densidad'
+>
+> Los datos que son 'ruidosos' se consideran 'densos'. Las distancias entre puntos en cada uno de sus grupos pueden demostrar, al examinarlas, ser más o menos densas, o 'abarrotadas', y por lo tanto, estos datos deben analizarse con el método de agrupación apropiado. [Este artículo](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demuestra la diferencia entre usar el algoritmo de agrupación K-Means vs. HDBSCAN para explorar un conjunto de datos ruidosos con densidad de grupo desigual.
+
+## Algoritmos de agrupación
+
+Existen más de 100 algoritmos de agrupación, y su uso depende de la naturaleza de los datos disponibles. Hablemos de algunos de los principales:
+
+- **Agrupación jerárquica**. Si un objeto se clasifica por su proximidad a un objeto cercano, en lugar de a uno más lejano, los grupos se forman en función de la distancia de sus miembros a y desde otros objetos. La agrupación aglomerativa de Scikit-learn es jerárquica.
+
+ 
+ > Infografía por [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Agrupación de centroides**. Este popular algoritmo requiere elegir 'k', el número de grupos a formar, tras lo cual el algoritmo determina el punto central de cada grupo y reúne los datos alrededor de ese punto (ver el esbozo después de esta lista). [La agrupación K-means](https://wikipedia.org/wiki/K-means_clustering) es una versión popular de la agrupación de centroides. El centro se determina por la media más cercana, de ahí el nombre. La distancia cuadrática al grupo se minimiza.
+
+ 
+ > Infografía por [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Agrupación basada en distribución**. Basada en el modelado estadístico, la agrupación basada en distribución se centra en determinar la probabilidad de que un punto de datos pertenezca a un grupo y asignarlo en consecuencia. Los métodos de mezcla gaussiana pertenecen a este tipo.
+
+- **Agrupación basada en densidad**. Los puntos de datos se asignan a grupos en función de su densidad o su agrupación entre sí. Los puntos de datos alejados del grupo se consideran valores atípicos o ruido. DBSCAN, Mean-shift y OPTICS pertenecen a este tipo de agrupación.
+
+- **Agrupación basada en cuadrícula**. Para conjuntos de datos multidimensionales, se crea una cuadrícula y los datos se dividen entre las celdas de la cuadrícula, creando así grupos.
+
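+Como orientación, el siguiente boceto (ilustrativo, no parte de la lección original; los parámetros son solo valores de ejemplo) muestra cómo se invocarían tres de estos enfoques con Scikit-learn sobre datos sintéticos:
+
+```python
+from sklearn.datasets import make_blobs
+from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
+
+# synthetic data with 3 well-separated blobs
+X, _ = make_blobs(n_samples=200, centers=3, random_state=42)
+
+# centroid-based: requires choosing 'k' up front
+kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
+
+# hierarchical (agglomerative): merges nearby points bottom-up
+agglo_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
+
+# density-based: finds dense regions and marks stragglers as noise (-1)
+dbscan_labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)
+```
+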
+## Ejercicio - agrupa tus datos
+
+La técnica de agrupación se ve muy beneficiada por la visualización adecuada, así que comencemos visualizando nuestros datos musicales. Este ejercicio nos ayudará a decidir cuál de los métodos de agrupación deberíamos usar más eficazmente para la naturaleza de estos datos.
+
+1. Abre el archivo [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) en esta carpeta.
+
+1. Instala e importa el paquete `Seaborn` para una buena visualización de datos.
+
+ ```python
+ !pip install seaborn
+ import seaborn as sns
+ ```
+
+1. Agrega los datos de las canciones desde [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv). Carga un dataframe con algunos datos sobre las canciones. Prepárate para explorar estos datos importando las bibliotecas y descargando los datos:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ Revisa las primeras líneas de datos:
+
+ | | nombre | álbum | artista | género_principal_artista | fecha_lanzamiento | duración | popularidad | bailabilidad | acusticidad | energía | instrumentalidad | vivacidad | volumen | locuacidad | tempo | compás |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ------------------------ | ----------------- | -------- | ----------- | ------------ | ------------ | ------- | ----------------- | --------- | ------- | ----------- | ------- | ------ |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. Obtén información sobre el dataframe, llamando a `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ La salida se verá así:
+
+ ```output
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. Verifica nuevamente si hay valores nulos, llamando a `isnull()` y verificando que la suma sea 0:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ Se ve bien:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. Describe los datos:
+
+ ```python
+ df.describe()
+ ```
+
+ | | fecha_lanzamiento | duración | popularidad | bailabilidad | acusticidad | energía | instrumentalidad | vivacidad | volumen | locuacidad | tempo | compás |
+ | ----- | ----------------- | ---------- | ----------- | ------------ | ------------ | --------- | ----------------- | --------- | --------- | ----------- | ---------- | ------- |
+ | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+ | mean | 2015.390566 | 222298.1698| 17.507547 | 0.741619 | 0.265412 | 0.760623 | 0.016305 | 0.147308 | -4.953011 | 0.130748 | 116.487864 | 3.986792|
+ | std | 3.131688 | 39696.82226| 18.992212 | 0.117522 | 0.208342 | 0.148533 | 0.090321 | 0.123588 | 2.464186 | 0.092939 | 23.518601 | 0.333701|
+ | min | 1998 | 89488 | 0 | 0.255 | 0.000665 | 0.111 | 0 | 0.0283 | -19.362 | 0.0278 | 61.695 | 3 |
+ | 25% | 2014 | 199305 | 0 | 0.681 | 0.089525 | 0.669 | 0 | … | … | … | … | … |
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## Revisión y Autoestudio
+
+Antes de aplicar algoritmos de clustering, como hemos aprendido, es una buena idea entender la naturaleza de tu conjunto de datos. Lee más sobre este tema [aquí](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html).
+
+[Este artículo útil](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/) te guía a través de las diferentes formas en que se comportan varios algoritmos de clustering, dadas diferentes formas de datos.
+
+## Tarea
+
+[Investiga otras visualizaciones para clustering](assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/1-Visualize/assignment.md b/translations/es/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..472250e0b
--- /dev/null
+++ b/translations/es/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# Investigar otras visualizaciones para clustering
+
+## Instrucciones
+
+En esta lección, has trabajado con algunas técnicas de visualización para comprender cómo graficar tus datos en preparación para agruparlos. Los gráficos de dispersión, en particular, son útiles para encontrar grupos de objetos. Investiga diferentes maneras y diferentes bibliotecas para crear gráficos de dispersión y documenta tu trabajo en un cuaderno. Puedes usar los datos de esta lección, de otras lecciones, o datos que obtengas por ti mismo (sin embargo, por favor acredita su fuente en tu cuaderno). Grafica algunos datos usando gráficos de dispersión y explica lo que descubres.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------- | ---------------------------------- |
+| | Se presenta un cuaderno con cinco gráficos de dispersión bien documentados | Se presenta un cuaderno con menos de cinco gráficos de dispersión y está menos bien documentado | Se presenta un cuaderno incompleto |
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/es/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..668ffc76b
--- /dev/null
+++ b/translations/es/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/2-K-Means/README.md b/translations/es/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..869889432
--- /dev/null
+++ b/translations/es/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# Agrupamiento K-Means
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+En esta lección, aprenderás a crear grupos utilizando Scikit-learn y el conjunto de datos de música nigeriana que importaste anteriormente. Cubriremos los conceptos básicos de K-Means para el agrupamiento. Ten en cuenta que, como aprendiste en la lección anterior, hay muchas formas de trabajar con grupos y el método que utilices depende de tus datos. Intentaremos K-Means ya que es la técnica de agrupamiento más común. ¡Vamos a empezar!
+
+Términos que aprenderás:
+
+- Puntuación de Silhouette
+- Método del codo
+- Inercia
+- Varianza
+
+## Introducción
+
+[El agrupamiento K-Means](https://wikipedia.org/wiki/K-means_clustering) es un método derivado del procesamiento de señales. Se utiliza para dividir y particionar grupos de datos en 'k' grupos a partir de una serie de observaciones. El algoritmo asigna cada observación al grupo cuya 'media' (el punto central del grupo) esté más cerca del punto de datos.
+
+Los grupos se pueden visualizar como [diagramas de Voronoi](https://wikipedia.org/wiki/Voronoi_diagram), que incluyen un punto (o 'semilla') y su región correspondiente.
+
+
+
+> infografía por [Jen Looper](https://twitter.com/jenlooper)
+
+El proceso de agrupamiento K-Means [se ejecuta en un proceso de tres pasos](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. El algoritmo selecciona un número k de puntos centrales muestreando del conjunto de datos. Después de esto, se repite:
+ 1. Asigna cada muestra al centroide más cercano.
+ 2. Crea nuevos centroides tomando el valor medio de todas las muestras asignadas a los centroides anteriores.
+ 3. Luego, calcula la diferencia entre los nuevos y antiguos centroides y repite hasta que los centroides se estabilicen.
+
+Una desventaja de usar K-Means es que necesitarás establecer 'k', es decir, el número de centroides. Afortunadamente, el 'método del codo' ayuda a estimar un buen valor inicial para 'k'. Lo probarás en un momento.
+
+## Prerrequisitos
+
+Trabajarás en el archivo [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) de esta lección que incluye la importación de datos y la limpieza preliminar que hiciste en la última lección.
+
+## Ejercicio - preparación
+
+Comienza echando otro vistazo a los datos de las canciones.
+
+1. Crea un diagrama de caja, llamando a `boxplot()` para cada columna:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import seaborn as sns  # assumes df was loaded earlier in the notebook
+
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ Estos datos son un poco ruidosos: al observar cada columna como un diagrama de caja, puedes ver valores atípicos.
+
+ 
+
+Podrías recorrer el conjunto de datos y eliminar estos valores atípicos, pero eso haría que los datos fueran bastante mínimos.
+
+1. Por ahora, elige qué columnas usarás para tu ejercicio de agrupamiento. Escoge aquellas con rangos similares y codifica la columna `artist_top_genre` como datos numéricos:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. Ahora necesitas decidir a cuántos grupos apuntar. Sabes que hay 3 géneros de canciones que extrajimos del conjunto de datos, así que intentemos con 3:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+Verás un array impreso con los grupos predichos (0, 1 o 2) para cada fila del dataframe.
+
+1. Usa este array para calcular una 'puntuación de Silhouette':
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Puntuación de Silhouette
+
+Busca una puntuación de Silhouette más cercana a 1. Esta puntuación varía de -1 a 1, y si la puntuación es 1, el grupo es denso y está bien separado de otros grupos. Un valor cercano a 0 representa grupos superpuestos con muestras muy cercanas al límite de decisión de los grupos vecinos. [(Fuente)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+Nuestra puntuación es **.53**, así que justo en el medio. Esto indica que nuestros datos no son particularmente adecuados para este tipo de agrupamiento, pero sigamos adelante.
+
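+Antes de continuar, puedes comparar la puntuación de Silhouette para distintos valores de 'k'; es un complemento del método del codo que verás a continuación. Un boceto ilustrativo, asumiendo el `X` definido arriba:
+
+```python
+from sklearn.cluster import KMeans
+from sklearn.metrics import silhouette_score
+
+# compare silhouette scores for several candidate values of 'k'
+for k in range(2, 7):
+    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
+    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
+```
+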
+### Ejercicio - construir un modelo
+
+1. Importa `KMeans` y comienza el proceso de agrupamiento.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ Hay algunas partes aquí que merecen una explicación.
+
+ > 🎓 range: Estas son las iteraciones del proceso de agrupamiento.
+
+ > 🎓 random_state: "Determina la generación de números aleatorios para la inicialización del centroide." [Fuente](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "suma de cuadrados dentro del grupo" mide la distancia promedio al cuadrado de todos los puntos dentro de un grupo al centroide del grupo. [Fuente](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 Inercia: Los algoritmos K-Means intentan elegir centroides para minimizar la 'inercia', "una medida de cuán coherentes son internamente los grupos." [Fuente](https://scikit-learn.org/stable/modules/clustering.html). El valor se agrega a la variable wcss en cada iteración.
+
+ > 🎓 k-means++: En [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) puedes usar la optimización 'k-means++', que "inicializa los centroides para que estén (generalmente) distantes entre sí, lo que lleva a probablemente mejores resultados que la inicialización aleatoria".
+
+### Método del codo
+
+Anteriormente, dedujiste que, debido a que has apuntado a 3 géneros de canciones, deberías elegir 3 grupos. ¿Pero es ese el caso?
+
+1. Usa el 'método del codo' para asegurarte.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ Usa la variable `wcss` que construiste en el paso anterior para crear un gráfico que muestre dónde está el 'doblez' en el codo, lo que indica el número óptimo de grupos. ¡Quizás sí sean 3!
+
+ 
+
+## Ejercicio - mostrar los grupos
+
+1. Intenta el proceso nuevamente, esta vez estableciendo tres grupos, y muestra los grupos como un gráfico de dispersión:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. Verifica la precisión del modelo:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ La precisión de este modelo no es muy buena, y la forma de los grupos te da una pista del porqué.
+
+ 
+
+ Estos datos están demasiado desequilibrados, poco correlacionados y hay demasiada varianza entre los valores de las columnas para agrupar bien. De hecho, los grupos que se forman probablemente estén fuertemente influenciados o sesgados por las tres categorías de género que definimos anteriormente. ¡Eso fue un proceso de aprendizaje!
+
+ En la documentación de Scikit-learn, puedes ver que un modelo como este, con grupos no muy bien demarcados, tiene un problema de 'varianza':
+
+ 
+ > Infografía de Scikit-learn
+
+## Varianza
+
+La varianza se define como "el promedio de las diferencias al cuadrado desde la media" [(Fuente)](https://www.mathsisfun.com/data/standard-deviation.html). En el contexto de este problema de agrupamiento, se refiere a datos donde los números de nuestro conjunto de datos tienden a divergir demasiado de la media.
+
+✅ Este es un buen momento para pensar en todas las formas en que podrías corregir este problema. ¿Ajustar un poco más los datos? ¿Usar diferentes columnas? ¿Usar un algoritmo diferente? Pista: Intenta [escalar tus datos](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) para normalizarlos y probar otras columnas.
+
+> Prueba esta '[calculadora de varianza](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' para entender un poco más el concepto.
+
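+Un boceto rápido (ilustrativo, con datos inventados) de cómo calcular la varianza a mano y de cómo escalar las columnas antes de agrupar, como sugiere la pista anterior:
+
+```python
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+
+# variance by hand: the average of the squared differences from the mean
+values = np.array([0.2, 0.4, 0.6, 0.8, 55.0])  # one value dominates!
+variance = ((values - values.mean()) ** 2).mean()
+print(variance)
+
+# scaling gives every column a comparable weight before clustering
+X_scaled = StandardScaler().fit_transform(values.reshape(-1, 1))
+print(X_scaled.var())  # variance is 1.0 after standard scaling
+```
+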
+---
+
+## 🚀Desafío
+
+Pasa un tiempo con este cuaderno, ajustando parámetros. ¿Puedes mejorar la precisión del modelo limpiando más los datos (eliminando valores atípicos, por ejemplo)? Puedes usar ponderaciones para dar más importancia a ciertas muestras de datos. ¿Qué más puedes hacer para crear mejores grupos?
+
+Pista: Intenta escalar tus datos. Hay código comentado en el cuaderno que agrega escalado estándar para hacer que las columnas de datos se parezcan más en términos de rango. Verás que, aunque la puntuación de Silhouette baja, el 'doblez' en el gráfico del codo se suaviza. Esto se debe a que dejar los datos sin escalar permite que los datos con menos varianza tengan más peso. Lee un poco más sobre este problema [aquí](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## Revisión y autoestudio
+
+Echa un vistazo a un simulador de K-Means [como este](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). Puedes usar esta herramienta para visualizar puntos de datos de muestra y determinar sus centroides. Puedes editar la aleatoriedad de los datos, el número de grupos y el número de centroides. ¿Esto te ayuda a tener una idea de cómo se pueden agrupar los datos?
+
+También, echa un vistazo a [este folleto sobre K-Means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) de Stanford.
+
+## Tarea
+
+[Prueba diferentes métodos de agrupamiento](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/2-K-Means/assignment.md b/translations/es/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..4bb135b06
--- /dev/null
+++ b/translations/es/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# Prueba diferentes métodos de clustering
+
+## Instrucciones
+
+En esta lección aprendiste sobre el clustering de K-Means. A veces, K-Means no es apropiado para tus datos. Crea un cuaderno usando datos ya sea de estas lecciones o de algún otro lugar (da crédito a tu fuente) y muestra un método de clustering diferente que NO use K-Means. ¿Qué aprendiste?
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | ---------------------------------------------------------------- | -------------------------------------------------------------------- | ---------------------------- |
+| | Se presenta un cuaderno con un modelo de clustering bien documentado | Se presenta un cuaderno sin buena documentación y/o incompleto | Se presenta trabajo incompleto |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/es/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..df385a25e
--- /dev/null
+++ b/translations/es/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/5-Clustering/README.md b/translations/es/5-Clustering/README.md
new file mode 100644
index 000000000..9d5a2da3a
--- /dev/null
+++ b/translations/es/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# Modelos de clustering para aprendizaje automático
+
+El clustering es una tarea de aprendizaje automático que busca encontrar objetos que se asemejan entre sí y agruparlos en grupos llamados clústeres. Lo que diferencia al clustering de otros enfoques en el aprendizaje automático es que las cosas suceden automáticamente, de hecho, es justo decir que es lo opuesto al aprendizaje supervisado.
+
+## Tema regional: modelos de clustering para el gusto musical de una audiencia nigeriana 🎧
+
+La diversa audiencia de Nigeria tiene gustos musicales variados. Utilizando datos extraídos de Spotify (inspirados por [este artículo](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)), veamos algo de la música popular de Nigeria. Este conjunto de datos incluye información sobre varias canciones como la puntuación de 'danceability', 'acousticness', volumen, 'speechiness', popularidad y energía. ¡Será interesante descubrir patrones en estos datos!
+
+
+
+> Foto por Marcela Laskoski en Unsplash
+
+En esta serie de lecciones, descubrirás nuevas formas de analizar datos utilizando técnicas de clustering. El clustering es particularmente útil cuando tu conjunto de datos carece de etiquetas. Si tiene etiquetas, entonces las técnicas de clasificación como las que aprendiste en lecciones anteriores podrían ser más útiles. Pero en casos donde buscas agrupar datos sin etiquetar, el clustering es una excelente manera de descubrir patrones.
+
+> Hay herramientas de bajo código útiles que pueden ayudarte a aprender a trabajar con modelos de clustering. Prueba [Azure ML para esta tarea](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Lecciones
+
+1. [Introducción al clustering](1-Visualize/README.md)
+2. [Clustering K-Means](2-K-Means/README.md)
+
+## Créditos
+
+Estas lecciones fueron escritas con 🎶 por [Jen Looper](https://www.twitter.com/jenlooper) con revisiones útiles de [Rishit Dagli](https://rishit_dagli) y [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan).
+
+El conjunto de datos [Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) fue obtenido de Kaggle y extraído de Spotify.
+
+Ejemplos útiles de K-Means que ayudaron en la creación de esta lección incluyen esta [exploración de iris](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), este [cuaderno introductorio](https://www.kaggle.com/prashant111/k-means-clustering-with-python), y este [ejemplo hipotético de ONG](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering).
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automatizada por inteligencia artificial. Si bien nos esforzamos por la precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/1-Introduction-to-NLP/README.md b/translations/es/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..50abb4723
--- /dev/null
+++ b/translations/es/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# Introducción al procesamiento de lenguaje natural
+
+Esta lección cubre una breve historia y conceptos importantes del *procesamiento de lenguaje natural*, un subcampo de la *lingüística computacional*.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## Introducción
+
+El procesamiento de lenguaje natural (NLP, por sus siglas en inglés), como se le conoce comúnmente, es una de las áreas más conocidas donde se ha aplicado el aprendizaje automático y se utiliza en software de producción.
+
+✅ ¿Puedes pensar en algún software que uses todos los días que probablemente tenga algo de NLP integrado? ¿Qué hay de tus programas de procesamiento de texto o aplicaciones móviles que usas regularmente?
+
+Aprenderás sobre:
+
+- **La idea de los lenguajes**. Cómo se desarrollaron los lenguajes y cuáles han sido las principales áreas de estudio.
+- **Definición y conceptos**. También aprenderás definiciones y conceptos sobre cómo las computadoras procesan texto, incluyendo análisis sintáctico, gramática e identificación de sustantivos y verbos. Hay algunas tareas de codificación en esta lección, y se introducen varios conceptos importantes que aprenderás a codificar más adelante en las próximas lecciones.
+
+## Lingüística computacional
+
+La lingüística computacional es un área de investigación y desarrollo de muchas décadas que estudia cómo las computadoras pueden trabajar con, e incluso entender, traducir y comunicarse con los lenguajes. El procesamiento de lenguaje natural (NLP) es un campo relacionado enfocado en cómo las computadoras pueden procesar lenguajes 'naturales', o humanos.
+
+### Ejemplo - dictado por teléfono
+
+Si alguna vez has dictado a tu teléfono en lugar de escribir o has hecho una pregunta a un asistente virtual, tu habla se convirtió en una forma de texto y luego se procesó o *analizó* a partir del idioma que hablaste. Las palabras clave detectadas se procesaron luego en un formato que el teléfono o el asistente pudiera entender y sobre el que pudiera actuar.
+
+
+> ¡La comprensión lingüística real es difícil! Imagen de [Jen Looper](https://twitter.com/jenlooper)
+
+### ¿Cómo es posible esta tecnología?
+
+Esto es posible porque alguien escribió un programa de computadora para hacerlo. Hace unas décadas, algunos escritores de ciencia ficción predijeron que la gente hablaría principalmente con sus computadoras, y las computadoras siempre entenderían exactamente lo que querían decir. Lamentablemente, resultó ser un problema más difícil de lo que muchos imaginaron, y aunque hoy en día es un problema mucho mejor comprendido, existen desafíos significativos para lograr un procesamiento de lenguaje natural 'perfecto' cuando se trata de entender el significado de una oración. Esto es particularmente difícil cuando se trata de entender el humor o detectar emociones como el sarcasmo en una oración.
+
+En este punto, puede que estés recordando las clases escolares donde el maestro cubría las partes de la gramática en una oración. En algunos países, se enseña gramática y lingüística como una asignatura dedicada, pero en muchos, estos temas se incluyen como parte del aprendizaje de un idioma: ya sea tu primer idioma en la escuela primaria (aprender a leer y escribir) y tal vez un segundo idioma en la escuela secundaria. ¡No te preocupes si no eres un experto en diferenciar sustantivos de verbos o adverbios de adjetivos!
+
+Si te cuesta distinguir entre el *presente simple* y el *presente progresivo*, no estás solo. Esto es un desafío para muchas personas, incluso para hablantes nativos de un idioma. La buena noticia es que las computadoras son realmente buenas para aplicar reglas formales, y aprenderás a escribir código que pueda *analizar* una oración tan bien como un humano. El mayor desafío que examinarás más adelante es entender el *significado* y el *sentimiento* de una oración.
+
+## Requisitos previos
+
+Para esta lección, el requisito principal es poder leer y entender el idioma de esta lección. No hay problemas matemáticos ni ecuaciones que resolver. Aunque el autor original escribió esta lección en inglés, también está traducida a otros idiomas, por lo que podrías estar leyendo una traducción. Hay ejemplos donde se usan varios idiomas diferentes (para comparar las diferentes reglas gramaticales de diferentes idiomas). Estos *no* están traducidos, pero el texto explicativo sí, por lo que el significado debería ser claro.
+
+Para las tareas de codificación, usarás Python y los ejemplos están usando Python 3.8.
+
+En esta sección, necesitarás y usarás:
+
+- **Comprensión de Python 3**. Comprensión del lenguaje de programación en Python 3, esta lección usa entrada, bucles, lectura de archivos, matrices.
+- **Visual Studio Code + extensión**. Usaremos Visual Studio Code y su extensión de Python. También puedes usar un IDE de Python de tu elección.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob) es una biblioteca simplificada de procesamiento de texto para Python. Sigue las instrucciones en el sitio de TextBlob para instalarlo en tu sistema (instala también los corpora, como se muestra a continuación):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 Consejo: Puedes ejecutar Python directamente en entornos de VS Code. Consulta los [documentos](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) para obtener más información.
+
+## Hablando con máquinas
+
+La historia de intentar que las computadoras entiendan el lenguaje humano se remonta a décadas, y uno de los primeros científicos en considerar el procesamiento de lenguaje natural fue *Alan Turing*.
+
+### La 'prueba de Turing'
+
+Cuando Turing estaba investigando la *inteligencia artificial* en la década de 1950, consideró si se podría dar una prueba conversacional a un humano y una computadora (a través de correspondencia escrita) donde el humano en la conversación no estuviera seguro si estaba conversando con otro humano o una computadora.
+
+Si, después de una cierta duración de la conversación, el humano no podía determinar si las respuestas provenían de una computadora o no, ¿se podría decir que la computadora estaba *pensando*?
+
+### La inspiración - 'el juego de imitación'
+
+La idea para esto vino de un juego de fiesta llamado *El Juego de Imitación*, donde un interrogador está solo en una habitación y tiene la tarea de determinar cuál de dos personas (en otra habitación) es hombre y cuál es mujer. El interrogador puede enviar notas y debe tratar de pensar en preguntas cuyas respuestas escritas revelen el género de la persona misteriosa. Por supuesto, los jugadores en la otra habitación intentan engañar al interrogador respondiendo preguntas de manera que lo despisten o confundan, mientras dan la apariencia de responder honestamente.
+
+### Desarrollando Eliza
+
+En la década de 1960, un científico del MIT llamado *Joseph Weizenbaum* desarrolló [*Eliza*](https://wikipedia.org/wiki/ELIZA), un 'terapeuta' de computadora que haría preguntas al humano y daría la apariencia de entender sus respuestas. Sin embargo, aunque Eliza podía analizar una oración e identificar ciertos constructos gramaticales y palabras clave para dar una respuesta razonable, no se podía decir que *entendiera* la oración. Si a Eliza se le presentaba una oración siguiendo el formato "**Yo estoy** triste", podría reorganizar y sustituir palabras en la oración para formar la respuesta "¿Cuánto tiempo has **estado** triste?".
+
+Esto daba la impresión de que Eliza entendía la declaración y estaba haciendo una pregunta de seguimiento, mientras que en realidad, estaba cambiando el tiempo verbal y agregando algunas palabras. Si Eliza no podía identificar una palabra clave para la cual tenía una respuesta, en su lugar daría una respuesta aleatoria que podría ser aplicable a muchas declaraciones diferentes. Eliza podría ser fácilmente engañada, por ejemplo, si un usuario escribía "**Tú eres** una bicicleta", podría responder "¿Cuánto tiempo he **sido** una bicicleta?", en lugar de una respuesta más razonada.
+
+[](https://youtu.be/RMK9AphfLco "Chateando con Eliza")
+
+> 🎥 Haz clic en la imagen de arriba para ver un video sobre el programa original ELIZA
+
+> Nota: Puedes leer la descripción original de [Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) publicada en 1966 si tienes una cuenta de ACM. Alternativamente, lee sobre Eliza en [wikipedia](https://wikipedia.org/wiki/ELIZA)
+
+## Ejercicio - codificando un bot conversacional básico
+
+Un bot conversacional, como Eliza, es un programa que obtiene la entrada del usuario y parece entender y responder inteligentemente. A diferencia de Eliza, nuestro bot no tendrá varias reglas que le den la apariencia de tener una conversación inteligente. En su lugar, nuestro bot tendrá una única habilidad: mantener la conversación con respuestas aleatorias que podrían funcionar en casi cualquier conversación trivial.
+
+### El plan
+
+Tus pasos al construir un bot conversacional:
+
+1. Imprime instrucciones que aconsejen al usuario cómo interactuar con el bot
+2. Inicia un bucle
+ 1. Acepta la entrada del usuario
+ 2. Si el usuario ha pedido salir, entonces salir
+ 3. Procesa la entrada del usuario y determina la respuesta (en este caso, la respuesta es una elección aleatoria de una lista de posibles respuestas genéricas)
+ 4. Imprime la respuesta
+3. Vuelve al paso 2
+
+### Construyendo el bot
+
+Vamos a crear el bot a continuación. Comenzaremos definiendo algunas frases.
+
+1. Crea este bot tú mismo en Python con las siguientes respuestas aleatorias:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ Aquí tienes una salida de muestra para guiarte (la entrada del usuario está en las líneas que comienzan con `>`):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
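+ Como referencia, un boceto mínimo del bucle descrito en el plan podría verse así (la solución enlazada a continuación puede diferir en los detalles):
+
+ ```python
+ import random
+
+ random_responses = ["That is quite interesting, please tell me more.",
+                     "I see. Do go on.",
+                     "Why do you say that?",
+                     "Funny weather we've been having, isn't it?",
+                     "Let's change the subject.",
+                     "Did you catch the game last night?"]
+
+ print("Hello, I am Marvin, the simple robot.")
+ print("You can end this conversation at any time by typing 'bye'")
+ print("After typing each answer, press 'enter'")
+ print("How are you today?")
+
+ while True:
+     user_input = input("> ")
+     if user_input.lower() == "bye":
+         break
+     # pick a random canned response, regardless of what was said
+     print(random.choice(random_responses))
+
+ print("It was nice talking to you, goodbye!")
+ ```
+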
+ Una posible solución a la tarea está [aquí](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)
+
+ ✅ Detente y considera
+
+ 1. ¿Crees que las respuestas aleatorias 'engañarían' a alguien para que piense que el bot realmente los entendió?
+ 2. ¿Qué características necesitaría el bot para ser más efectivo?
+ 3. Si un bot realmente pudiera 'entender' el significado de una oración, ¿necesitaría 'recordar' el significado de oraciones anteriores en una conversación también?
+
+---
+
+## 🚀Desafío
+
+Elige uno de los elementos de "detente y considera" anteriores e intenta implementarlo en código o escribe una solución en papel usando pseudocódigo.
+
+En la próxima lección, aprenderás sobre varias otras aproximaciones para analizar el lenguaje natural y el aprendizaje automático.
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## Revisión y autoestudio
+
+Echa un vistazo a las referencias a continuación como oportunidades de lectura adicional.
+
+### Referencias
+
+1. Schubert, Lenhart, "Lingüística Computacional", *The Stanford Encyclopedia of Philosophy* (Edición de Primavera 2020), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. Princeton University "Acerca de WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## Tarea
+
+[Busca un bot](assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/es/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..293e8f819
--- /dev/null
+++ b/translations/es/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# Buscar un bot
+
+## Instrucciones
+
+Los bots están en todas partes. Tu tarea: ¡encuentra uno y adóptalo! Puedes encontrarlos en sitios web, en aplicaciones bancarias y por teléfono, por ejemplo, cuando llamas a empresas de servicios financieros para obtener asesoramiento o información sobre cuentas. Analiza el bot y ve si puedes confundirlo. Si puedes confundir al bot, ¿por qué crees que ocurrió eso? Escribe un breve informe sobre tu experiencia.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejora |
+| --------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------------- | --------------------- |
+| | Se escribe un informe de una página completa, explicando la arquitectura presumida del bot y describiendo tu experiencia con él | El informe está incompleto o no está bien investigado | No se presenta un informe |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/2-Tasks/README.md b/translations/es/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..c8625a33b
--- /dev/null
+++ b/translations/es/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# Tareas y técnicas comunes de procesamiento de lenguaje natural
+
+Para la mayoría de las tareas de *procesamiento de lenguaje natural*, el texto a procesar debe descomponerse, examinarse y los resultados deben almacenarse o cruzarse con reglas y conjuntos de datos. Estas tareas permiten al programador derivar el _significado_ o _intención_ o solo la _frecuencia_ de términos y palabras en un texto.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+Vamos a descubrir técnicas comunes utilizadas en el procesamiento de texto. Combinadas con el aprendizaje automático, estas técnicas te ayudan a analizar grandes cantidades de texto de manera eficiente. Sin embargo, antes de aplicar ML a estas tareas, entendamos los problemas que enfrenta un especialista en PLN.
+
+## Tareas comunes en PLN
+
+Existen diferentes formas de analizar un texto en el que estás trabajando. Hay tareas que puedes realizar y a través de estas tareas puedes comprender el texto y sacar conclusiones. Normalmente, llevas a cabo estas tareas en una secuencia.
+
+### Tokenización
+
+Probablemente, lo primero que la mayoría de los algoritmos de PLN deben hacer es dividir el texto en tokens o palabras. Aunque esto suena simple, tener en cuenta la puntuación y los delimitadores de palabras y oraciones en diferentes idiomas puede complicarlo. Es posible que debas usar varios métodos para determinar las demarcaciones.
+
+
+> Tokenizando una oración de **Orgullo y Prejuicio**. Infografía por [Jen Looper](https://twitter.com/jenlooper)
+
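+A modo de ilustración (un boceto, no parte de la lección original), TextBlob puede dividir un texto en tokens de palabra y en oraciones:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("The quick red fox jumped over the lazy brown dog. It was quite a sight.")
+print(blob.words)      # word tokens
+print(blob.sentences)  # sentence segmentation
+```
+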
+### Embeddings
+
+[Embeddings de palabras](https://wikipedia.org/wiki/Word_embedding) son una forma de representar numéricamente tus datos de texto. Los embeddings se calculan de manera que las palabras con significados similares o que se usan juntas queden agrupadas.
+
+
+> "Tengo el mayor respeto por tus nervios, son mis viejos amigos." - Embeddings de palabras para una oración en **Orgullo y Prejuicio**. Infografía por [Jen Looper](https://twitter.com/jenlooper)
+
+✅ Prueba [esta herramienta interesante](https://projector.tensorflow.org/) para experimentar con embeddings de palabras. Hacer clic en una palabra muestra grupos de palabras similares: 'juguete' se agrupa con 'disney', 'lego', 'playstation' y 'consola'.
+
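+Como ilustración (asumiendo la biblioteca `gensim`, que no forma parte de esta lección), así se entrenaría un embedding diminuto sobre un corpus de juguete; un corpus real necesita muchísimo más texto:
+
+```python
+from gensim.models import Word2Vec
+
+# toy corpus: each sentence is a list of tokens
+corpus = [["the", "quick", "red", "fox"],
+          ["the", "lazy", "brown", "dog"],
+          ["the", "fox", "jumped", "over", "the", "dog"]]
+
+# train a tiny embedding model
+model = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=1, seed=42)
+print(model.wv["fox"])               # the learned vector for 'fox'
+print(model.wv.most_similar("fox"))  # nearest words in the embedding space
+```
+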
+### Análisis gramatical y etiquetado de partes del discurso
+
+Cada palabra que ha sido tokenizada puede etiquetarse como una parte del discurso: un sustantivo, verbo o adjetivo. La oración `the quick red fox jumped over the lazy brown dog` podría etiquetarse como: fox = sustantivo, jumped = verbo.
+
+
+
+> Analizando una oración de **Orgullo y Prejuicio**. Infografía por [Jen Looper](https://twitter.com/jenlooper)
+
+El análisis gramatical es reconocer qué palabras están relacionadas entre sí en una oración. Por ejemplo, `the quick red fox jumped` es una secuencia adjetivo-sustantivo-verbo que está separada de la secuencia `lazy brown dog`.
+
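+Con TextBlob, el etiquetado de partes del discurso es directo (un boceto; las etiquetas siguen la convención de Penn Treebank):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.tags)
+# e.g. [('the', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'), ('jumped', 'VBD'), ...]
+```
+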
+### Frecuencia de palabras y frases
+
+Un procedimiento útil al analizar un gran cuerpo de texto es construir un diccionario de cada palabra o frase de interés y cuántas veces aparece. La frase `the quick red fox jumped over the lazy brown dog` tiene una frecuencia de palabras de 2 para "the".
+
+Veamos un ejemplo de texto donde contamos la frecuencia de palabras. El poema de Rudyard Kipling "Los Ganadores" contiene el siguiente verso:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+Como las frecuencias de frases pueden calcularse siendo sensibles o insensibles a mayúsculas según sea necesario, en este verso la frase `a friend` tiene una frecuencia de 2, `the` tiene una frecuencia de 6 y `travels` tiene una frecuencia de 2.
+
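+Un boceto con TextBlob (asumiendo su API actual) que cuenta frecuencias de palabras en ese verso; `word_counts` normaliza a minúsculas, es decir, es insensible a mayúsculas:
+
+```python
+from textblob import TextBlob
+
+verse = TextBlob("""What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.""")
+
+print(verse.word_counts["the"])      # word frequency, case-insensitive
+print(verse.word_counts["travels"])
+```
+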
+### N-gramas
+
+Un texto puede dividirse en secuencias de palabras de una longitud establecida, una sola palabra (unigrama), dos palabras (bigramas), tres palabras (trigramas) o cualquier número de palabras (n-gramas).
+
+Por ejemplo, `the quick red fox jumped over the lazy brown dog` con n-gramas de longitud 2 produce los siguientes n-gramas:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+Podría ser más fácil visualizarlo como una ventana deslizante sobre la oración. Aquí está para n-gramas de 3 palabras; el n-grama está en negrita en cada oración:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> Valor de n-grama de 3: Infografía por [Jen Looper](https://twitter.com/jenlooper)
+
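+TextBlob incluye un método para esto; un boceto ilustrativo:
+
+```python
+from textblob import TextBlob
+
+sentence = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(sentence.ngrams(n=2))  # bigrams
+print(sentence.ngrams(n=3))  # trigrams
+```
+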
+### Extracción de frases nominales
+
+En la mayoría de las oraciones, hay un sustantivo que es el sujeto u objeto de la oración. En inglés, a menudo se identifica por tener 'a', 'an' o 'the' antes de él. Identificar el sujeto u objeto de una oración extrayendo la frase nominal es una tarea común en PLN cuando se intenta entender el significado de una oración.
+
+✅ En la oración "No puedo fijar la hora, ni el lugar, ni la mirada ni las palabras, que sentaron las bases. Hace demasiado tiempo. Estaba en medio antes de saber que había comenzado.", ¿puedes identificar las frases nominales?
+
+En la oración `the quick red fox jumped over the lazy brown dog` hay 2 frases nominales: **quick red fox** y **lazy brown dog**.
+
+### Análisis de sentimiento
+
+Una oración o texto puede analizarse para determinar su sentimiento, o cuán *positivo* o *negativo* es. El sentimiento se mide con una *polaridad* y una *objetividad/subjetividad*. La polaridad se mide de -1.0 a 1.0 (de negativo a positivo) y la subjetividad de 0.0 a 1.0 (de más objetivo a más subjetivo).
+
+✅ Más adelante aprenderás que hay diferentes formas de determinar el sentimiento utilizando el aprendizaje automático, pero una forma es tener una lista de palabras y frases categorizadas como positivas o negativas por un experto humano y aplicar ese modelo al texto para calcular una puntuación de polaridad. ¿Puedes ver cómo esto funcionaría en algunas circunstancias y menos en otras?
+
+### Flexión
+
+La flexión te permite tomar una palabra y obtener su singular o su plural.
+
+### Lematización
+
+Un *lema* es la raíz o palabra principal de un conjunto de palabras, por ejemplo, *flew*, *flies*, *flying* tienen como lema el verbo *fly*.
+
+También hay bases de datos útiles disponibles para el investigador de PLN, notablemente:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/) es una base de datos de palabras, sinónimos, antónimos y muchos otros detalles para cada palabra en muchos idiomas diferentes. Es increíblemente útil al intentar construir traducciones, correctores ortográficos o herramientas de lenguaje de cualquier tipo.
+
+## Bibliotecas de PLN
+
+Afortunadamente, no tienes que construir todas estas técnicas tú mismo, ya que hay excelentes bibliotecas de Python disponibles que lo hacen mucho más accesible para los desarrolladores que no están especializados en procesamiento de lenguaje natural o aprendizaje automático. Las próximas lecciones incluyen más ejemplos de estas, pero aquí aprenderás algunos ejemplos útiles para ayudarte con la siguiente tarea.
+
+### Ejercicio - usando la biblioteca `TextBlob`
+
+Usemos una biblioteca llamada TextBlob, ya que contiene APIs útiles para abordar este tipo de tareas. TextBlob "se apoya en los hombros de gigantes como [NLTK](https://nltk.org) y [pattern](https://github.com/clips/pattern), y se integra bien con ambos." Tiene una cantidad considerable de ML incorporado en su API.
+
+> Nota: Hay disponible una útil guía de [inicio rápido](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) para TextBlob, recomendada para desarrolladores de Python con experiencia
+
+Al intentar identificar *frases nominales*, TextBlob ofrece varias opciones de extractores para encontrarlas.
+
+1. Echa un vistazo a `ConllExtractor`.
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > ¿Qué está pasando aquí? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) es "Un extractor de frases nominales que utiliza análisis por fragmentos entrenados con el corpus de entrenamiento ConLL-2000." ConLL-2000 se refiere a la Conferencia de 2000 sobre Aprendizaje Computacional del Lenguaje Natural. Cada año la conferencia organizaba un taller para abordar un problema espinoso de PLN, y en 2000 fue la fragmentación de sustantivos. Se entrenó un modelo en el Wall Street Journal, con "las secciones 15-18 como datos de entrenamiento (211727 tokens) y la sección 20 como datos de prueba (47377 tokens)". Puedes ver los procedimientos utilizados [aquí](https://www.clips.uantwerpen.be/conll2000/chunking/) y los [resultados](https://ifarm.nl/erikt/research/np-chunking.html).
+
+### Desafío - mejorando tu bot con PLN
+
+En la lección anterior construiste un bot de preguntas y respuestas muy simple. Ahora, harás que Marvin sea un poco más simpático analizando tu entrada para detectar el sentimiento y mostrando una respuesta que coincida con el sentimiento. También necesitarás identificar una `noun_phrase` y preguntar sobre ella.
+
+Tus pasos al construir un mejor bot conversacional:
+
+1. Imprime instrucciones aconsejando al usuario cómo interactuar con el bot
+2. Inicia el bucle
+ 1. Acepta la entrada del usuario
+ 2. Si el usuario ha pedido salir, entonces salir
+ 3. Procesa la entrada del usuario y determina la respuesta de sentimiento adecuada
+ 4. Si se detecta una frase nominal en el sentimiento, pluralízala y pide más información sobre ese tema
+ 5. Imprime la respuesta
+3. Vuelve al paso 2
+
+Aquí está el fragmento de código para determinar el sentimiento usando TextBlob. Nota que solo hay cuatro *gradientes* de respuesta de sentimiento (podrías tener más si lo deseas):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
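+Y para la frase nominal, un boceto (ilustrativo; requiere los corpora descargados en la lección anterior y los resultados exactos pueden variar) de cómo detectarla y pluralizar su última palabra:
+
+```python
+from textblob import TextBlob
+
+user_input_blob = TextBlob("I went for a walk and saw a lovely cat")
+print(user_input_blob.polarity)  # sentiment polarity in [-1.0, 1.0]
+
+for phrase in user_input_blob.noun_phrases:
+    # pluralize the last word: 'lovely cat' -> 'lovely cats'
+    words = phrase.split()
+    plural = TextBlob(words[-1]).words[0].pluralize()
+    print(" ".join(words[:-1] + [plural]))
+```
+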
+Aquí hay un ejemplo de salida para guiarte (la entrada del usuario está en las líneas que comienzan con >):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+Una posible solución a la tarea está [aquí](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)
+
+✅ Verificación de conocimiento
+
+1. ¿Crees que las respuestas simpáticas podrían 'engañar' a alguien haciéndole pensar que el bot realmente los entendía?
+2. ¿Hace que el bot sea más 'creíble' identificar la frase nominal?
+3. ¿Por qué sería útil extraer una 'frase nominal' de una oración?
+
+---
+
+## 🚀Desafío
+
+Toma una tarea en la verificación de conocimiento previa e intenta implementarla. Prueba el bot con un amigo. ¿Puede engañarlo? ¿Puedes hacer que tu bot sea más 'creíble'?
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## Revisión y autoestudio
+
+En las próximas lecciones aprenderás más sobre el análisis de sentimiento. Investiga esta interesante técnica en artículos como estos en [KDNuggets](https://www.kdnuggets.com/tag/nlp)
+
+## Tarea
+
+[Haz que un bot responda](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizados por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/2-Tasks/assignment.md b/translations/es/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..19f553b86
--- /dev/null
+++ b/translations/es/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# Haz que un Bot responda
+
+## Instrucciones
+
+En las lecciones anteriores, programaste un bot básico con el cual chatear. Este bot da respuestas aleatorias hasta que digas 'adiós'. ¿Puedes hacer que las respuestas sean un poco menos aleatorias y que se activen si dices cosas específicas, como 'por qué' o 'cómo'? Piensa un poco en cómo el aprendizaje automático podría hacer este tipo de trabajo menos manual a medida que amplías tu bot. Puedes usar las bibliotecas NLTK o TextBlob para facilitar tus tareas.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | ---------------------------------------------| ------------------------------------------------ | ----------------------- |
+| | Se presenta y documenta un nuevo archivo bot.py | Se presenta un nuevo archivo bot pero contiene errores | No se presenta un archivo |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/3-Translation-Sentiment/README.md b/translations/es/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..b7b693398
--- /dev/null
+++ b/translations/es/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# Traducción y análisis de sentimientos con ML
+
+En las lecciones anteriores aprendiste a construir un bot básico usando `TextBlob`, una biblioteca que incorpora ML detrás de escena para realizar tareas básicas de NLP como la extracción de frases nominales. Otro desafío importante en la lingüística computacional es la _traducción_ precisa de una oración de un idioma hablado o escrito a otro.
+
+## [Cuestionario previo a la clase](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+La traducción es un problema muy difícil, dado que hay miles de idiomas y cada uno puede tener reglas gramaticales muy diferentes. Un enfoque es convertir las reglas gramaticales formales de un idioma, como el inglés, en una estructura independiente del idioma, y luego traducir convirtiéndola de nuevo a otro idioma. Este enfoque implica que sigas estos pasos:
+
+1. **Identificación**. Identificar o etiquetar las palabras en el idioma de entrada como sustantivos, verbos, etc.
+2. **Crear traducción**. Producir una traducción directa de cada palabra en el formato del idioma objetivo.
+
+### Oración de ejemplo, inglés a irlandés
+
+En 'inglés', la oración _I feel happy_ tiene tres palabras en el orden:
+
+- **sujeto** (I)
+- **verbo** (feel)
+- **adjetivo** (happy)
+
+Sin embargo, en el idioma 'irlandés', la misma oración tiene una estructura gramatical muy diferente: las emociones como "*happy*" o "*sad*" se expresan como algo que está *sobre* ti.
+
+La frase en inglés `I feel happy` en irlandés sería `Tá athas orm`. Una traducción *literal* sería `Happy is upon me`.
+
+Un hablante de irlandés que traduce al inglés diría `I feel happy`, no `Happy is upon me`, porque entiende el significado de la oración, aunque las palabras y la estructura de la oración sean diferentes.
+
+El orden formal para la oración en irlandés es:
+
+- **verbo** (Tá o is)
+- **adjetivo** (athas, o happy)
+- **sujeto** (orm, o upon me)
+
+## Traducción
+
+Un programa de traducción ingenuo podría traducir solo palabras, ignorando la estructura de la oración.
+
+✅ Si has aprendido un segundo (o tercer o más) idioma como adulto, es posible que hayas comenzado pensando en tu idioma nativo, traduciendo un concepto palabra por palabra en tu cabeza al segundo idioma, y luego hablando tu traducción. Esto es similar a lo que hacen los programas de traducción ingenuos. ¡Es importante superar esta fase para alcanzar la fluidez!
+
+La traducción ingenua produce malas (y a veces hilarantes) traducciones: `I feel happy` se traduce literalmente como `Mise bhraitheann athas` en irlandés. Eso significa (literalmente) `me feel happy` y no es una oración válida en irlandés. Aunque el inglés y el irlandés se hablan en dos islas vecinas muy cercanas, son idiomas muy diferentes con estructuras gramaticales distintas.
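+
+Para ilustrarlo, aquí hay un esbozo mínimo de un 'traductor' ingenuo palabra por palabra; el diccionario `naive_dictionary` es hipotético y minúsculo, solo para este ejemplo:
+
+```python
+# A naive word-for-word 'translator': it looks up each word in a tiny
+# hypothetical English->Irish dictionary, ignoring grammar and word order
+naive_dictionary = {"i": "mise", "feel": "bhraitheann", "happy": "athas"}
+
+def naive_translate(sentence: str) -> str:
+    words = sentence.lower().split()
+    # Keep the original word when it is missing from the dictionary
+    return " ".join(naive_dictionary.get(word, word) for word in words)
+
+print(naive_translate("I feel happy"))  # 'mise bhraitheann athas' - not valid Irish!
+```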
+
+> Puedes ver algunos videos sobre las tradiciones lingüísticas irlandesas como [este](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### Enfoques de aprendizaje automático
+
+Hasta ahora, has aprendido sobre el enfoque de reglas formales para el procesamiento del lenguaje natural. Otro enfoque es ignorar el significado de las palabras y _en su lugar usar el aprendizaje automático para detectar patrones_. Esto puede funcionar en la traducción si tienes mucho texto (un *corpus*) o muchos textos (*corpora*) tanto en el idioma de origen como en el de destino.
+
+Por ejemplo, considera el caso de *Orgullo y Prejuicio*, una novela inglesa bien conocida escrita por Jane Austen en 1813. Si consultas el libro en inglés y una traducción humana del libro en *francés*, podrías detectar frases en uno que se traducen _idiomáticamente_ en el otro. Lo harás en un momento.
+
+Por ejemplo, cuando una frase en inglés como `I have no money` se traduce literalmente al francés, podría convertirse en `Je n'ai pas de monnaie`. "Monnaie" es un falso amigo francés complicado, ya que 'money' y 'monnaie' no son sinónimos. Una mejor traducción que un humano podría hacer sería `Je n'ai pas d'argent`, porque transmite mejor el significado de que no tienes dinero (en lugar de 'cambio suelto' que es el significado de 'monnaie').
+
+
+
+> Imagen de [Jen Looper](https://twitter.com/jenlooper)
+
+Si un modelo de ML tiene suficientes traducciones humanas para construir un modelo, puede mejorar la precisión de las traducciones identificando patrones comunes en textos que han sido previamente traducidos por hablantes humanos expertos de ambos idiomas.
+
+### Ejercicio - traducción
+
+Puedes usar `TextBlob` para traducir oraciones. Prueba la famosa primera línea de **Orgullo y Prejuicio**:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` hace un buen trabajo con la traducción: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+Se puede argumentar que la traducción de TextBlob es mucho más exacta, de hecho, que la traducción francesa de 1932 del libro por V. Leconte y Ch. Pressoir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+En este caso, la traducción informada por ML hace un mejor trabajo que el traductor humano, quien pone palabras innecesarias en boca del autor original en aras de la 'claridad'.
+
+> ¿Qué está pasando aquí? y ¿por qué TextBlob es tan bueno en la traducción? Bueno, detrás de escena, está usando Google Translate, una IA sofisticada capaz de analizar millones de frases para predecir las mejores cadenas para la tarea en cuestión. No hay nada manual en esto y necesitas una conexión a internet para usar `blob.translate`.
+
+✅ Prueba algunas oraciones más. ¿Cuál es mejor, la traducción por ML o la humana? ¿En qué casos?
+
+## Análisis de sentimiento
+
+Otra área donde el aprendizaje automático puede funcionar muy bien es el análisis de sentimiento. Un enfoque sin ML consiste en identificar las palabras y frases que son 'positivas' y 'negativas'. Luego, dado un nuevo texto, calcular el valor total de las palabras positivas, negativas y neutras para identificar el sentimiento general.
+
+Este enfoque se engaña fácilmente, como habrás visto en la tarea de Marvin: la oración `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` es una oración sarcástica y de sentimiento negativo, pero el algoritmo simple detecta 'great', 'wonderful', 'glad' como positivas y 'waste', 'lost' y 'dark' como negativas. El sentimiento general se ve influenciado por estas palabras contradictorias.
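+
+Un esbozo mínimo de ese algoritmo simple, con listas de palabras hipotéticas solo para ilustrar el problema:
+
+```python
+# A minimal lexicon-based sentiment scorer: it counts 'positive' and 'negative'
+# words (hypothetical tiny word lists) without understanding context or sarcasm
+positive_words = {"great", "wonderful", "glad"}
+negative_words = {"waste", "lost", "dark"}
+
+def naive_sentiment(text: str) -> int:
+    words = text.lower().replace(",", "").split()
+    return sum(w in positive_words for w in words) - sum(w in negative_words for w in words)
+
+sentence = "Great, that was a wonderful waste of time, I'm glad we are lost on this dark road"
+print(naive_sentiment(sentence))  # 0 - the sarcasm 'cancels out', hiding the negative meaning
+```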
+
+✅ Detente un segundo y piensa en cómo transmitimos sarcasmo como hablantes humanos. La inflexión del tono juega un papel importante. Intenta decir la frase "Well, that film was awesome" de diferentes maneras para descubrir cómo tu voz transmite significado.
+
+### Enfoques de ML
+
+El enfoque de ML sería recopilar manualmente conjuntos de textos negativos y positivos: tweets, reseñas de películas, o cualquier cosa donde el humano haya dado una puntuación *y* una opinión escrita. Luego, se pueden aplicar técnicas de NLP a las opiniones y puntuaciones, de modo que surjan patrones (por ejemplo, las reseñas de películas positivas tienden a contener la frase 'Oscar worthy' más que las negativas, o las reseñas de restaurantes positivas dicen 'gourmet' mucho más que 'disgusting').
+
+> ⚖️ **Ejemplo**: Si trabajas en la oficina de un político y se está debatiendo una nueva ley, los constituyentes podrían escribir a la oficina con correos electrónicos a favor o en contra de la nueva ley en particular. Supongamos que te encargan leer los correos electrónicos y clasificarlos en 2 montones, *a favor* y *en contra*. Si hubiera muchos correos electrónicos, podrías sentirte abrumado intentando leerlos todos. ¿No sería genial si un bot pudiera leerlos todos por ti, entenderlos y decirte en qué montón pertenece cada correo electrónico?
+>
+> Una forma de lograr eso es usar Machine Learning. Entrenarías el modelo con una porción de los correos electrónicos *en contra* y una porción de los correos electrónicos *a favor*. El modelo tendería a asociar frases y palabras con el lado en contra y el lado a favor, *pero no entendería ninguno de los contenidos*, solo que ciertas palabras y patrones eran más probables de aparecer en un correo electrónico *en contra* o *a favor*. Podrías probarlo con algunos correos electrónicos que no usaste para entrenar el modelo y ver si llegaba a la misma conclusión que tú. Luego, una vez que estuvieras satisfecho con la precisión del modelo, podrías procesar futuros correos electrónicos sin tener que leer cada uno.
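+
+Un esbozo mínimo de esa idea con scikit-learn; los correos de entrenamiento de abajo son inventados, y un modelo real necesitaría muchos más datos:
+
+```python
+# Sketch: train a tiny bag-of-words classifier on made-up 'for'/'against' emails
+from sklearn.feature_extraction.text import CountVectorizer
+from sklearn.naive_bayes import MultinomialNB
+from sklearn.pipeline import make_pipeline
+
+emails = [
+    "I fully support this new law, please vote for it",
+    "This law will help our community, I am in favor",
+    "I am against this law, it will hurt small businesses",
+    "Please vote no, this law is a terrible idea",
+]
+labels = ["for", "for", "against", "against"]
+
+# The model learns word patterns, not meaning
+model = make_pipeline(CountVectorizer(), MultinomialNB())
+model.fit(emails, labels)
+
+print(model.predict(["I am in favor of this law"]))     # likely ['for']
+print(model.predict(["vote no on this terrible law"]))  # likely ['against']
+```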
+
+✅ ¿Este proceso te suena similar a los procesos que has utilizado en lecciones anteriores?
+
+## Ejercicio - oraciones sentimentales
+
+El sentimiento se mide con una *polaridad* de -1 a 1, lo que significa que -1 es el sentimiento más negativo y 1 es el más positivo. El sentimiento también se mide con una puntuación de 0 a 1 para objetividad (0) y subjetividad (1).
+
+Echa otro vistazo a *Orgullo y Prejuicio* de Jane Austen. El texto está disponible aquí en [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm). El siguiente ejemplo muestra un programa corto que analiza el sentimiento de las primeras y últimas oraciones del libro y muestra su polaridad de sentimiento y puntuación de subjetividad/objetividad.
+
+Deberías usar la biblioteca `TextBlob` (descrita arriba) para determinar el `sentiment` (no tienes que escribir tu propia calculadora de sentimiento) en la siguiente tarea.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+Verás la siguiente salida:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Desafío - comprobar la polaridad del sentimiento
+
+Tu tarea es determinar, utilizando la polaridad del sentimiento, si *Orgullo y Prejuicio* tiene más oraciones absolutamente positivas que absolutamente negativas. Para esta tarea, puedes asumir que una puntuación de polaridad de 1 o -1 es absolutamente positiva o negativa respectivamente.
+
+**Pasos:**
+
+1. Descarga una [copia de Orgullo y Prejuicio](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) de Project Gutenberg como un archivo .txt. Elimina los metadatos al principio y al final del archivo, dejando solo el texto original.
+2. Abre el archivo en Python y extrae el contenido como una cadena.
+3. Crea un TextBlob usando la cadena del libro.
+4. Analiza cada oración en el libro en un bucle.
+   1. Si la polaridad es 1 o -1, almacena la oración en un arreglo o lista de mensajes positivos o negativos.
+5. Al final, imprime todas las oraciones positivas y negativas (por separado) y el número de cada una.
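+
+Como referencia, un esbozo mínimo de los pasos 2 a 5 (el nombre de archivo `orgullo.txt` es hipotético; usa el de tu copia limpia del libro):
+
+```python
+from textblob import TextBlob
+
+# Steps 2-3: read the book into a string and create a TextBlob
+# ('orgullo.txt' is a hypothetical filename for your cleaned copy of the book)
+with open("orgullo.txt", encoding="utf-8") as f:
+    book = f.read()
+blob = TextBlob(book)
+
+# Step 4: store absolutely positive/negative sentences in separate lists
+positive, negative = [], []
+for sentence in blob.sentences:
+    if sentence.sentiment.polarity == 1:
+        positive.append(str(sentence))
+    elif sentence.sentiment.polarity == -1:
+        negative.append(str(sentence))
+
+# Step 5: print the sentences and how many there are of each
+print("\n".join(positive))
+print("\n".join(negative))
+print("Positivas:", len(positive), "Negativas:", len(negative))
+```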
+
+Aquí hay una [solución de ejemplo](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
+
+✅ Comprobación de Conocimientos
+
+1. El sentimiento se basa en las palabras utilizadas en la oración, pero ¿entiende el código *las palabras*?
+2. ¿Crees que la polaridad del sentimiento es precisa o, en otras palabras, estás *de acuerdo* con las puntuaciones?
+ 1. En particular, ¿estás de acuerdo o en desacuerdo con la polaridad **positiva** absoluta de las siguientes oraciones?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. Las siguientes 3 oraciones fueron puntuadas con un sentimiento absolutamente positivo, pero al leerlas detenidamente, no son oraciones positivas. ¿Por qué el análisis de sentimiento pensó que eran oraciones positivas?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. ¿Estás de acuerdo o en desacuerdo con la polaridad **negativa** absoluta de las siguientes oraciones?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ Cualquier aficionado a Jane Austen entenderá que ella a menudo usa sus libros para criticar los aspectos más ridículos de la sociedad de la Regencia inglesa. Elizabeth Bennet, el personaje principal de *Orgullo y Prejuicio*, es una observadora social perspicaz (como la autora) y su lenguaje a menudo está muy matizado. Incluso Mr. Darcy (el interés amoroso de la historia) nota el uso juguetón y burlón del lenguaje de Elizabeth: "He tenido el placer de conocerte lo suficiente como para saber que disfrutas mucho ocasionalmente profesando opiniones que de hecho no son tuyas."
+
+---
+
+## 🚀Desafío
+
+¿Puedes mejorar a Marvin aún más extrayendo otras características de la entrada del usuario?
+
+## [Cuestionario posterior a la clase](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Revisión y Autoestudio
+
+Hay muchas maneras de extraer sentimientos de un texto. Piensa en las aplicaciones comerciales que podrían hacer uso de esta técnica. Piensa en cómo puede salir mal. Lee más sobre sistemas empresariales sofisticados que analizan sentimientos como [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Prueba algunas de las oraciones de Orgullo y Prejuicio anteriores y ve si puede detectar matices.
+
+## Asignación
+
+[Licencia poética](assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/3-Translation-Sentiment/assignment.md b/translations/es/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..9bae1a8fa
--- /dev/null
+++ b/translations/es/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# Licencia poética
+
+## Instrucciones
+
+En [este cuaderno](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency) puedes encontrar más de 500 poemas de Emily Dickinson previamente analizados para determinar su sentimiento usando Azure text analytics. Utilizando este conjunto de datos, analízalo utilizando las técnicas descritas en la lección. ¿El sentimiento sugerido de un poema coincide con la decisión del servicio más sofisticado de Azure? ¿Por qué o por qué no, en tu opinión? ¿Hay algo que te sorprenda?
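+
+Un posible punto de partida (solo un esbozo, no la solución): el nombre del archivo y los nombres de las columnas de abajo son hipotéticos; ajústalos a los del conjunto de datos real.
+
+```python
+import pandas as pd
+from textblob import TextBlob
+
+# Hypothetical filename and column names - adjust them to the real dataset
+df = pd.read_csv("emily_dickinson.csv")
+
+# Compute a TextBlob polarity per poem to compare against the Azure sentiment label
+df["textblob_polarity"] = df["poem_text"].apply(lambda t: TextBlob(str(t)).sentiment.polarity)
+print(df[["poem_text", "azure_sentiment", "textblob_polarity"]].head())
+```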
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | -------------------------------------------------------------------------- | ------------------------------------------------------- | ------------------------ |
+| | Se presenta un cuaderno con un análisis sólido de una muestra del autor | El cuaderno está incompleto o no realiza análisis | No se presenta cuaderno |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/es/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..7a9955f54
--- /dev/null
+++ b/translations/es/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizados por inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/es/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..98b1e6b71
--- /dev/null
+++ b/translations/es/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/4-Hotel-Reviews-1/README.md b/translations/es/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..1cc6d7781
--- /dev/null
+++ b/translations/es/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,303 @@
+# Análisis de sentimiento con reseñas de hoteles - procesando los datos
+
+En esta sección, usarás las técnicas de las lecciones anteriores para realizar un análisis exploratorio de datos de un conjunto de datos grande. Una vez que tengas una buena comprensión de la utilidad de las diversas columnas, aprenderás:
+
+- cómo eliminar las columnas innecesarias
+- cómo calcular algunos datos nuevos basados en las columnas existentes
+- cómo guardar el conjunto de datos resultante para usarlo en el desafío final
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Introducción
+
+Hasta ahora has aprendido que los datos de texto son bastante diferentes a los datos numéricos. Si es un texto escrito o hablado por un humano, puede analizarse para encontrar patrones y frecuencias, sentimiento y significado. Esta lección te lleva a un conjunto de datos real con un desafío real: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)** e incluye una [licencia CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/). Fue recopilado de Booking.com de fuentes públicas. El creador del conjunto de datos fue Jiashen Liu.
+
+### Preparación
+
+Necesitarás:
+
+* La capacidad de ejecutar notebooks .ipynb usando Python 3
+* pandas
+* NLTK, [que deberías instalar localmente](https://www.nltk.org/install.html)
+* El conjunto de datos que está disponible en Kaggle [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). Pesa alrededor de 230 MB descomprimido. Descárgalo en la carpeta raíz `/data` asociada con estas lecciones de PLN.
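+
+Si aún no tienes NLTK, una forma habitual de prepararlo (se asume que lo instalas con `pip install nltk`; los paquetes de datos exactos que necesites pueden variar) es descargar sus datos desde Python:
+
+```python
+# Run once to fetch NLTK data used in these NLP lessons
+# (the exact packages you need may vary)
+import nltk
+
+nltk.download("stopwords")
+nltk.download("vader_lexicon")
+nltk.download("punkt")
+```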
+
+## Análisis exploratorio de datos
+
+Este desafío asume que estás construyendo un bot de recomendaciones de hoteles utilizando análisis de sentimiento y puntuaciones de reseñas de huéspedes. El conjunto de datos que utilizarás incluye reseñas de 1493 hoteles diferentes en 6 ciudades.
+
+Usando Python, un conjunto de datos de reseñas de hoteles, y el análisis de sentimiento de NLTK podrías descubrir:
+
+* ¿Cuáles son las palabras y frases más frecuentemente utilizadas en las reseñas?
+* ¿Los *tags* oficiales que describen un hotel se correlacionan con las puntuaciones de las reseñas (por ejemplo, hay más reseñas negativas para un hotel particular por *Familia con niños pequeños* que por *Viajero solo*, tal vez indicando que es mejor para *Viajeros solos*)?
+* ¿Las puntuaciones de sentimiento de NLTK 'coinciden' con la puntuación numérica del revisor del hotel?
+
+#### Conjunto de datos
+
+Vamos a explorar el conjunto de datos que has descargado y guardado localmente. Abre el archivo en un editor como VS Code o incluso Excel.
+
+Los encabezados en el conjunto de datos son los siguientes:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+Aquí están agrupados de una manera que podría ser más fácil de examinar:
+##### Columnas del hotel
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (latitud), `lng` (longitud)
+ * Usando *lat* y *lng* podrías trazar un mapa con Python mostrando las ubicaciones de los hoteles (quizás codificado por colores para reseñas negativas y positivas)
+ * Hotel_Address no es obviamente útil para nosotros, y probablemente lo reemplazaremos con un país para facilitar la clasificación y búsqueda
+
+**Columnas de meta-reseña del hotel**
+
+* `Average_Score`
+ * Según el creador del conjunto de datos, esta columna es el *Puntaje promedio del hotel, calculado en base al último comentario en el último año*. Esto parece una forma inusual de calcular el puntaje, pero es el dato recopilado, así que lo tomaremos como válido por ahora.
+
+ ✅ Basado en las otras columnas en estos datos, ¿puedes pensar en otra manera de calcular el puntaje promedio?
+
+* `Total_Number_of_Reviews`
+ * El número total de reseñas que ha recibido este hotel - no está claro (sin escribir algo de código) si esto se refiere a las reseñas en el conjunto de datos.
+* `Additional_Number_of_Scoring`
+ * Esto significa que se dio un puntaje de reseña pero el revisor no escribió una reseña positiva o negativa
+
+**Columnas de reseñas**
+
+- `Reviewer_Score`
+ - Este es un valor numérico con hasta 1 decimal entre los valores mínimos y máximos 2.5 y 10
+ - No se explica por qué 2.5 es el puntaje más bajo posible
+- `Negative_Review`
+ - Si un revisor no escribió nada, este campo tendrá "**No Negative**"
+ - Ten en cuenta que un revisor puede escribir una reseña positiva en la columna de reseña negativa (por ejemplo, "no hay nada malo en este hotel")
+- `Review_Total_Negative_Word_Counts`
+  - Un mayor conteo de palabras negativas indica un puntaje más bajo (sin comprobar el sentimiento)
+- `Positive_Review`
+ - Si un revisor no escribió nada, este campo tendrá "**No Positive**"
+ - Ten en cuenta que un revisor puede escribir una reseña negativa en la columna de reseña positiva (por ejemplo, "no hay nada bueno en este hotel en absoluto")
+- `Review_Total_Positive_Word_Counts`
+  - Un mayor conteo de palabras positivas indica un puntaje más alto (sin comprobar el sentimiento)
+- `Review_Date` y `days_since_review`
+ - Se podría aplicar una medida de frescura o antigüedad a una reseña (las reseñas más antiguas podrían no ser tan precisas como las más nuevas porque la administración del hotel cambió, o se realizaron renovaciones, o se agregó una piscina, etc.)
+- `Tags`
+ - Son descriptores cortos que un revisor puede seleccionar para describir el tipo de huésped que eran (por ejemplo, solo o familia), el tipo de habitación que tenían, la duración de la estancia y cómo se envió la reseña.
+ - Desafortunadamente, usar estos tags es problemático, revisa la sección a continuación que discute su utilidad
+
+**Columnas del revisor**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - Esto podría ser un factor en un modelo de recomendación, por ejemplo, si pudieras determinar que los revisores más prolíficos con cientos de reseñas eran más propensos a ser negativos en lugar de positivos. Sin embargo, el revisor de cualquier reseña en particular no está identificado con un código único, y por lo tanto no puede vincularse a un conjunto de reseñas. Hay 30 revisores con 100 o más reseñas, pero es difícil ver cómo esto puede ayudar al modelo de recomendación.
+- `Reviewer_Nationality`
+ - Algunas personas podrían pensar que ciertas nacionalidades son más propensas a dar una reseña positiva o negativa debido a una inclinación nacional. Ten cuidado al construir tales opiniones anecdóticas en tus modelos. Estos son estereotipos nacionales (y a veces raciales), y cada revisor fue un individuo que escribió una reseña basada en su experiencia. Puede haber sido filtrado a través de muchas lentes como sus estancias anteriores en hoteles, la distancia viajada, y su temperamento personal. Pensar que su nacionalidad fue la razón de una puntuación de reseña es difícil de justificar.
+
+##### Ejemplos
+
+| Puntaje Promedio | Número Total de Reseñas | Puntaje del Revisor | Reseña Negativa | Reseña Positiva | Tags |
+| ---------------- | ----------------------- | ------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | Este no es actualmente un hotel sino un sitio de construcción. Fui aterrorizado desde temprano en la mañana y todo el día con ruidos de construcción inaceptables mientras descansaba después de un largo viaje y trabajaba en la habitación. La gente trabajaba todo el día, es decir, con martillos neumáticos en las habitaciones contiguas. Pedí un cambio de habitación pero no había una habitación silenciosa disponible. Para empeorar las cosas, me cobraron de más. Me fui en la noche ya que tenía que salir muy temprano en vuelo y recibí una factura apropiada. Un día después, el hotel hizo otro cargo sin mi consentimiento en exceso del precio reservado. Es un lugar terrible. No te castigues reservando aquí. | Nada. Lugar terrible. Aléjate. | Viaje de negocios. Pareja. Habitación Doble Estándar. Se hospedó 2 noches. |
+
+Como puedes ver, este huésped no tuvo una estancia feliz en este hotel. El hotel tiene un buen puntaje promedio de 7.8 y 1945 reseñas, pero este revisor le dio 2.5 y escribió 115 palabras sobre lo negativa que fue su estancia. Si no escribió nada en la columna de Reseña Positiva, podrías suponer que no había nada positivo, pero escribió 7 palabras de advertencia. Si solo contáramos palabras en lugar del significado o sentimiento de las palabras, podríamos tener una visión sesgada de la intención del revisor. Curiosamente, su puntaje de 2.5 es confuso, porque si esa estancia en el hotel fue tan mala, ¿por qué darle algún punto? Investigando el conjunto de datos de cerca, verás que el puntaje más bajo posible es 2.5, no 0. El puntaje más alto posible es 10.
+
+##### Tags
+
+Como se mencionó anteriormente, a primera vista, la idea de usar `Tags` para categorizar los datos tiene sentido. Desafortunadamente, estos tags no están estandarizados, lo que significa que en un hotel dado, las opciones podrían ser *Single room*, *Twin room*, y *Double room*, pero en el siguiente hotel, son *Deluxe Single Room*, *Classic Queen Room*, y *Executive King Room*. Estos podrían ser las mismas cosas, pero hay tantas variaciones que la elección se convierte en:
+
+1. Intentar cambiar todos los términos a un estándar único, lo cual es muy difícil, porque no está claro cuál sería el camino de conversión en cada caso (por ejemplo, *Classic single room* se mapea a *Single room* pero *Superior Queen Room with Courtyard Garden or City View* es mucho más difícil de mapear)
+
+1. Podemos tomar un enfoque de PLN y medir la frecuencia de ciertos términos como *Solo*, *Business Traveller*, o *Family with young kids* a medida que se aplican a cada hotel, y factorizar eso en la recomendación
+
+Los tags son usualmente (pero no siempre) un solo campo que contiene una lista de 5 a 6 valores separados por comas alineados a *Tipo de viaje*, *Tipo de huéspedes*, *Tipo de habitación*, *Número de noches*, y *Tipo de dispositivo en el que se envió la reseña*. Sin embargo, debido a que algunos revisores no completan cada campo (pueden dejar uno en blanco), los valores no siempre están en el mismo orden.
+
+Como ejemplo, toma *Tipo de grupo*. Hay 1025 posibilidades únicas en este campo en la columna `Tags`, y desafortunadamente solo algunos de ellos se refieren a un grupo (algunos son el tipo de habitación, etc.). Si filtras solo los que mencionan familia, los resultados contienen muchos resultados del tipo *Family room*. Si incluyes el término *with*, es decir, cuentas los valores de *Family with*, los resultados son mejores, con más de 80,000 de los 515,000 resultados que contienen la frase "Family with young children" o "Family with older children".
+
+Esto significa que la columna de tags no es completamente inútil para nosotros, pero tomará algo de trabajo hacerla útil.
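+
+Por ejemplo, un esbozo rápido de ese conteo (asumiendo que `df` es el dataframe que se carga más adelante en esta lección):
+
+```python
+# Count reviews whose Tags field mentions each 'Family with...' phrase
+# (simple substring match over the raw Tags text)
+family_young = df["Tags"].str.contains("Family with young children").sum()
+family_older = df["Tags"].str.contains("Family with older children").sum()
+print("Family with young children:", family_young)
+print("Family with older children:", family_older)
+```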
+
+##### Puntaje promedio del hotel
+
+Hay una serie de rarezas o discrepancias con el conjunto de datos que no puedo descifrar, pero están ilustradas aquí para que estés al tanto de ellas al construir tus modelos. Si las descifras, por favor háznoslo saber en la sección de discusión.
+
+El conjunto de datos tiene las siguientes columnas relacionadas con el puntaje promedio y el número de reseñas:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+El hotel con más reseñas en este conjunto de datos es el *Britannia International Hotel Canary Wharf*, con 4789 reseñas de las 515,000. Pero si miramos el valor de `Total_Number_of_Reviews` para este hotel, es 9086. Podrías suponer que hay muchas más puntuaciones sin reseñas, así que tal vez deberíamos agregar el valor de la columna `Additional_Number_of_Scoring`. Ese valor es 2682, y sumándolo a 4789 nos da 7471, lo cual sigue estando 1615 por debajo de `Total_Number_of_Reviews`.
+
+Si tomas las columnas `Average_Score`, podrías suponer que es el promedio de las reseñas en el conjunto de datos, pero la descripción de Kaggle es "*Puntaje Promedio del hotel, calculado en base al último comentario en el último año*". Eso no parece tan útil, pero podemos calcular nuestro propio promedio basado en las puntuaciones de las reseñas en el conjunto de datos. Usando el mismo hotel como ejemplo, el puntaje promedio del hotel se da como 7.1 pero el puntaje calculado (promedio de las puntuaciones de los revisores *en* el conjunto de datos) es 6.8. Esto es cercano, pero no el mismo valor, y solo podemos suponer que las puntuaciones dadas en las reseñas `Additional_Number_of_Scoring` aumentaron el promedio a 7.1. Desafortunadamente, sin una forma de probar o demostrar esa afirmación, es difícil usar o confiar en `Average_Score`, `Additional_Number_of_Scoring` y `Total_Number_of_Reviews` cuando se basan en, o se refieren a, datos que no tenemos.
+
+Para complicar las cosas aún más, el hotel con el segundo mayor número de reseñas tiene un puntaje promedio calculado de 8.12 y el `Average_Score` del conjunto de datos es 8.1. ¿Es esta coincidencia del puntaje correcto o es el primer hotel una discrepancia?
+
+Ante la posibilidad de que estos hoteles sean casos atípicos, y de que tal vez la mayoría de los valores coincidan (pero algunos no, por alguna razón), a continuación escribiremos un programa corto para explorar los valores del conjunto de datos y determinar el uso correcto (o no uso) de los valores.
+
+> 🚨 Una nota de precaución
+>
+> Al trabajar con este conjunto de datos, escribirás código que calcule algo a partir del texto sin tener que leer o analizar el texto tú mismo. Esta es la esencia del PLN, interpretar el significado o sentimiento sin que un humano tenga que hacerlo. Sin embargo, es posible que leas algunas de las reseñas negativas. Te insto a no hacerlo, porque no tienes que hacerlo. Algunas de ellas son tontas o irrelevantes, como "El clima no fue bueno", algo fuera del control del hotel, o de hecho, de cualquiera. Pero hay un lado oscuro en algunas reseñas también. A veces las reseñas negativas son racistas, sexistas, o discriminatorias por edad. Esto es desafortunado pero de esperarse en un conjunto de datos recopilado de un sitio web público. Algunos revisores dejan reseñas que encontrarías de mal gusto, incómodas, o molestas. Es mejor dejar que el código mida el sentimiento que leerlas tú mismo y molestarte. Dicho esto, es una minoría la que escribe tales cosas, pero existen de todas formas.
+
+## Ejercicio - Exploración de datos
+### Cargar los datos
+
+Ya basta de examinar los datos visualmente: ¡ahora escribirás algo de código y obtendrás algunas respuestas! Esta sección usa la biblioteca pandas. Tu primera tarea es asegurarte de que puedes cargar y leer los datos CSV. La biblioteca pandas tiene un cargador de CSV rápido, y el resultado se coloca en un dataframe, como en lecciones anteriores. El CSV que estamos cargando tiene más de medio millón de filas, pero solo 17 columnas. Pandas te da muchas formas poderosas de interactuar con un dataframe, incluyendo la capacidad de realizar operaciones en cada fila.
+
+De aquí en adelante en esta lección, habrá fragmentos de código, explicaciones del código y discusiones sobre lo que significan los resultados. Usa el _notebook.ipynb_ incluido para tu código.
+
+Comencemos cargando el archivo de datos que estarás usando:
+
+```python
+# Load the hotel reviews from CSV
+import pandas as pd
+import time
+# importing time so the start and end time can be used to calculate file loading time
+print("Loading data file now, this could take a while depending on file size")
+start = time.time()
+# df is 'DataFrame' - make sure you downloaded the file to the data folder
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```
+
+Ahora que los datos están cargados, podemos realizar algunas operaciones sobre ellos. Mantén este código en la parte superior de tu programa para la siguiente parte.
+
+## Explorar los datos
+
+En este caso, los datos ya están *limpios*, eso significa que están listos para trabajar, y no tienen caracteres en otros idiomas que puedan hacer tropezar a los algoritmos que esperan solo caracteres en inglés.
+
+✅ Puede que tengas que trabajar con datos que requieran algún procesamiento inicial para formatearlos antes de aplicar técnicas de PLN, pero no esta vez. Si tuvieras que hacerlo, ¿cómo manejarías los caracteres no ingleses?
+
+Tómate un momento para asegurarte de que, una vez cargados los datos, puedas explorarlos con código. Es muy fácil querer enfocarse en las columnas `Negative_Review` y `Positive_Review`. Están llenas de texto natural para que tus algoritmos de PLN lo procesen. ¡Pero espera! Antes de saltar al PLN y el sentimiento, deberías seguir el código a continuación para verificar si los valores dados en el conjunto de datos coinciden con los valores que calculas con pandas.
+
+## Operaciones con dataframes
+
+La primera tarea en esta lección es verificar si las siguientes afirmaciones son correctas escribiendo algo de código que examine el dataframe (sin cambiarlo).
+
+> Como muchas tareas de programación, hay varias formas de completarlas, pero un buen consejo es hacerlo de la manera más simple y fácil posible, especialmente si así será más fácil de entender cuando vuelvas a este código en el futuro. Con dataframes, hay una API completa que a menudo tendrá una manera de hacer lo que deseas de forma eficiente.
+
+Trata las siguientes preguntas como tareas de codificación e intenta responderlas sin mirar la solución.
+
+1. Imprime la *forma* del dataframe que acabas de cargar (la forma es el número de filas y columnas)
+2. Calcula el conteo de frecuencia para las nacionalidades de los revisores:
+   1. ¿Cuántos valores distintos hay para la columna `Reviewer_Nationality` y cuáles son?
+   2. ¿Qué nacionalidad de revisor es la más común en el conjunto de datos (imprime el país y el número de reseñas)?
+3. ¿Cuáles son las siguientes 10 nacionalidades más frecuentemente encontradas, y su conteo de frecuencia?
+4. ¿Cuál fue el hotel más frecuentemente reseñado para cada una de las 10 nacionalidades de revisores más comunes?
+5. ¿Cuántas reseñas hay por hotel (conteo de frecuencia de hotel) en el conjunto de datos?
+6. Aunque hay una columna `Average_Score` para cada hotel en el conjunto de datos, también puedes calcular un puntaje promedio (obteniendo el promedio de todas las puntuaciones de los revisores en el conjunto de datos para cada hotel). Agrega una nueva columna a tu dataframe con el encabezado `Calc_Average_Score` que contenga ese promedio calculado.
+7. Calcula e imprime cuántas filas tienen valores de columna `Negative_Review` de "No Negative"
+8. Calcula e imprime cuántas filas tienen valores de columna `Positive_Review` de "No Positive"
+9. Calcula e imprime cuántas filas tienen valores de columna `Positive_Review` de "No Positive" **y** valores de columna `Negative_Review` de "No Negative"
+
+### Respuestas de código
+
+1. Imprime la *forma* del dataframe que acabas de cargar (la forma es el número de filas y columnas)
+
+   ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+   ```
+
+2. Calcula el conteo de frecuencia para las nacionalidades de los revisores:
+
+   1. ¿Cuántos valores distintos hay para la columna `Reviewer_Nationality` y cuáles son?
+   2. ¿Qué nacionalidad de revisor es la más común en el conjunto de datos (imprime el país y el número de reseñas)?
+
+   ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+   ```
+
+3. ¿Cuáles son las siguientes 10 nacionalidades más frecuentemente encontradas, y su conteo de frecuencia?
+
+   ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+   ```
+
+4. ¿Cuál fue el hotel más frecuentemente reseñado para cada una de las 10 nacionalidades de revisores más comunes?
+
+   ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+   ```
+
+5. ¿Cuántas reseñas hay por hotel (conteo de frecuencia de hotel) en el conjunto de datos?
+
+   ```python
+   # First create a new dataframe based on the old one, removing the unneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+   ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   Puedes notar que los resultados *contados en el conjunto de datos* no coinciden con el valor en `Total_Number_of_Reviews`. No está claro si este valor representaba el número total de reseñas que tenía el hotel (pero no todas fueron extraídas) o algún otro cálculo. `Total_Number_of_Reviews` no se usa en el modelo debido a esta falta de claridad.
+
+6. Aunque hay una columna `Average_Score` para cada hotel en el conjunto de datos, también puedes calcular un puntaje promedio (obteniendo el promedio de todas las puntuaciones de los revisores en el conjunto de datos para cada hotel). Agrega una nueva columna a tu dataframe con el encabezado `Calc_Average_Score` que contenga ese promedio calculado. Imprime las columnas `Hotel_Name`, `Average_Score`, y `Calc_Average_Score`.
+
+   ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+   ```
+
+   También puedes preguntarte sobre el valor de `Average_Score` y por qué a veces es diferente del puntaje promedio calculado. Como no podemos saber por qué algunos valores coinciden pero otros difieren, en este caso es más seguro usar las puntuaciones de las reseñas que tenemos para calcular el promedio nosotros mismos. Dicho esto, las diferencias suelen ser muy pequeñas; aquí están los hoteles con la mayor desviación entre el promedio del conjunto de datos y el promedio calculado:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   Con solo 1 hotel con una diferencia de puntaje mayor a 1, probablemente podamos ignorar la diferencia y usar el puntaje promedio calculado.
+
+7. Calcula e imprime cuántas filas tienen valores de columna `Negative_Review` de "No Negative"
+
+8. Calcula e imprime cuántas filas tienen valores de columna `Positive_Review` de "No Positive"
+
+9. Calcula e imprime cuántas filas tienen valores de columna `Positive_Review` de "No Positive" **y** valores de columna `Negative_Review` de "No Negative"
+
+   ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+   ```
+
+## Otra manera
+
+Otra manera de contar ítems, sin lambdas, es usar `sum` para contar las filas:
+
+```python
+# without lambdas (using a mixture of notations to show you can use both)
+start = time.time()
+no_negative_reviews = sum(df.Negative_Review == "No Negative")
+print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+end = time.time()
+print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+Number of No Negative reviews: 127890
+Number of No Positive reviews: 35946
+Number of both No Negative and No Positive reviews: 127
+Sum took 0.19 seconds
+```
+
+Puedes haber notado que hay 127 filas que tienen valores "No Negative" y "No Positive" en las columnas `Negative_Review` y `Positive_Review` respectivamente. Eso significa que el revisor dio al hotel una puntuación numérica, pero se negó a escribir una reseña positiva o negativa. Afortunadamente, se trata de una pequeña cantidad de filas (127 de 515738, o 0.02%), por lo que probablemente no sesgará nuestro modelo ni los resultados en ninguna dirección particular, pero quizá no esperabas que un conjunto de datos de reseñas tuviera filas sin reseñas, por lo que vale la pena explorar los datos para descubrir filas como esta.
+
+Ahora que has explorado el conjunto de datos, en la próxima lección filtrarás los datos y agregarás algo de análisis de sentimiento.
+
+---
+
+## 🚀Desafío
+
+Esta lección demuestra, como vimos en lecciones anteriores, lo críticamente importante que es entender tus datos y sus peculiaridades antes de realizar operaciones sobre ellos. Los datos basados en texto, en particular, requieren un escrutinio cuidadoso. Profundiza en varios conjuntos de datos con mucho texto y ve si puedes descubrir áreas que podrían introducir sesgo o sentimiento distorsionado en un modelo.
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Revisión y autoestudio
+
+Toma [esta ruta de aprendizaje sobre PLN](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) para descubrir herramientas que puedes probar al construir modelos basados en voz y texto.
+
+## Asignación
+
+[NLTK](assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/es/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..cfb78fce8
--- /dev/null
+++ b/translations/es/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instrucciones
+
+NLTK es una biblioteca bien conocida para su uso en lingüística computacional y PLN. Aprovecha esta oportunidad para leer el '[libro de NLTK](https://www.nltk.org/book/)' y probar sus ejercicios. En esta tarea no calificada, conocerás esta biblioteca más a fondo.
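+
+Para abrir el apetito, un primer contacto mínimo con NLTK (se asume que ya descargaste los datos `punkt` y `averaged_perceptron_tagger` con `nltk.download`):
+
+```python
+# A tiny taste of NLTK: tokenize a sentence and tag each token's part of speech
+import nltk
+
+tokens = nltk.word_tokenize("NLTK makes computational linguistics approachable.")
+print(nltk.pos_tag(tokens))
+```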
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional realizada por humanos. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/es/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..a367ddc26
--- /dev/null
+++ b/translations/es/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/es/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..21d8685e7
--- /dev/null
+++ b/translations/es/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/6-NLP/5-Hotel-Reviews-2/README.md b/translations/es/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..4e06976b5
--- /dev/null
+++ b/translations/es/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Análisis de sentimiento con reseñas de hoteles
+
+Ahora que has explorado el conjunto de datos en detalle, es hora de filtrar las columnas y luego usar técnicas de PLN en el conjunto de datos para obtener nuevas perspectivas sobre los hoteles.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Operaciones de filtrado y análisis de sentimiento
+
+Como probablemente has notado, el conjunto de datos tiene algunos problemas. Algunas columnas están llenas de información inútil, otras parecen incorrectas. Si son correctas, no está claro cómo fueron calculadas, y las respuestas no pueden ser verificadas independientemente por tus propios cálculos.
+
+## Ejercicio: un poco más de procesamiento de datos
+
+Limpia los datos un poco más. Agrega columnas que serán útiles más adelante, cambia los valores en otras columnas y elimina ciertas columnas por completo.
+
+1. Procesamiento inicial de columnas
+
+ 1. Elimina `lat` y `lng`
+
+ 2. Reemplaza los valores de `Hotel_Address` con los siguientes valores (si la dirección contiene el nombre de la ciudad y el país, cámbialo solo por la ciudad y el país).
+
+ Estas son las únicas ciudades y países en el conjunto de datos:
+
+ Ámsterdam, Países Bajos
+
+ Barcelona, España
+
+ Londres, Reino Unido
+
+ Milán, Italia
+
+ París, Francia
+
+ Viena, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+   Now you can query country-level data:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | Hotel_Address | Hotel_Name |
+ | :--------------------- | :--------: |
+ | Amsterdam, Netherlands | 105 |
+ | Barcelona, Spain | 211 |
+ | London, United Kingdom | 400 |
+ | Milan, Italy | 162 |
+ | Paris, France | 458 |
+ | Vienna, Austria | 158 |
+
+2. Process the Hotel Meta-review columns
+
+   1. Drop `Additional_Number_of_Scoring`
+
+   1. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+   1. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+   # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+   # (select a single column inside the groupby so that transform('count') returns a Series, not a DataFrame)
+   df.Total_Number_of_Reviews = df.groupby('Hotel_Name')['Reviewer_Score'].transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. Process the review columns
+
+   1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+   2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are
+
+   3. Keep `Tags` for now
+
+      - We'll be doing some additional filtering operations on the tags in the next section, and then the tags will be dropped
+
+4. Process the reviewer columns
+
+   1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+   2. Keep `Reviewer_Nationality`
+
+### Tag columns
+
+The `Tag` column is problematic, as it is a list (in text form) stored in the column. Unfortunately the order and number of sub sections in this column are not always the same. It's hard for a human to identify the right phrases to be interested in, because there are 515,000 rows and 1,427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines. You can scan the text and find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6,762,646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful: because you've seen a sample of the tags such as `[' Business trip ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is, but first you need to follow a few steps to ascertain the tags of interest.
+
+### Filtering tags
+
+Remember that the goal of the dataset is to add sentiment and columns that will help you choose the best hotel (for yourself, or maybe for a client tasking you to make a hotel recommendation bot). You need to ask yourself whether the tags are useful or not in the final dataset. Here is one interpretation (if you needed the dataset for other reasons, different tags might stay in or come out of the selection):
+
+1. The type of trip is relevant, and should stay
+2. The type of guest group is important, and should stay
+3. The type of room, suite, or studio the guest stayed in is irrelevant (all hotels have basically the same rooms)
+4. The device the review was submitted from is irrelevant
+5. The number of nights the reviewer stayed *could* be relevant if you attributed longer stays to them liking the hotel more, but it's a stretch, and probably irrelevant
+
+In summary, **keep 2 kinds of tags and remove the others**.
+
+First, you don't want to count the tags until they are in a better format, which means removing the brackets and quotes. You can do this in several ways, but you want the fastest, as processing a lot of data could take a long time. Luckily, pandas has an easy way to do each of these steps.
+
+```python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Each tag becomes something like: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 columns, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in a different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to your advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted into the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column. Printing that out, you'll see there were 2,428 unique tags. Here is a small sample:
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
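+A minimal sketch of that split-and-count approach (a hedged illustration, not necessarily the solution notebook's exact code), assuming `df.Tags` has already been cleaned into comma separated strings as shown above:
+
+```python
+# Split each Tags string into up to 6 temporary columns,
+# merge them into one long column, and count the unique tags
+tag_df = df.Tags.str.split(",", expand=True)
+tag_df.columns = ["tag_" + str(i) for i in range(tag_df.shape[1])]
+all_tags = tag_df.melt(value_name="tag")["tag"].str.strip()
+print(all_tags.value_counts())
+```
+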
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be smart to remove them before counting phrase occurrences, but it is such a fast operation that you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1; it slightly reduces the total number of tags to be considered. Note that you do not remove them from the dataset, you just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There is a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration.
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is more or less the same as `Group`, and it would be fair to combine the two, as above. The code for identifying the correct tags is in [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1; if not, add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business vs leisure, or to bring a pet along, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
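+
+To sanity-check the new columns, you could compare their sums to the tag counts in the table above (a quick, optional check; the `Group` sum will be a little higher than the earlier `Group` count because it also includes `Travelers with friends`):
+
+```python
+# Optional check: each column's sum should roughly match the tag counts found earlier
+tag_columns = ["Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group",
+               "Family_with_young_children", "Family_with_older_children", "With_a_pet"]
+print(df[tag_columns].sum())
+```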
+
+### Save your file
+
+Finally, save the dataset as it is now with a new name.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Sentiment analysis operations
+
+In this final section, you will apply sentiment analysis to the review columns and save the results in a dataset.
+
+## Exercise: load and save the filtered data
+
+Note that you are now loading the filtered dataset that was saved in the previous section, **not** the original dataset.
+
+```python
+import time
+import pandas as pd
+import nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Removing stop words
+
+If you were to run sentiment analysis on the negative and positive review columns, it could take a long time. Tested on a powerful test laptop with a fast CPU, it took 12 - 14 minutes depending on which sentiment library was used. That's a (relatively) long time, so it's worth investigating whether it can be sped up.
+
+Removing stop words, or common English words that do not change the sentiment of a sentence, is the first step. By removing them, the sentiment analysis should run faster, without being less accurate (since the stop words do not affect the sentiment, but they do slow down the analysis).
+
+The longest negative review was 395 words, but after removing the stop words, it is 195 words.
+
+Removing the stop words is also a fast operation; removing the stop words from 2 review columns over 515,000 rows took 3.3 seconds on the test device. It could take slightly more or less time for you depending on your CPU speed, RAM, whether you have an SSD, and some other factors. The relative shortness of the operation means that if it improves the sentiment analysis time, then it's worth doing.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+# Report elapsed time (completes the timing started above with start = time.time())
+end = time.time()
+print("Removing stop words took " + str(round(end - start, 2)) + " seconds")
+```
+
+### Performing sentiment analysis
+
+Now you should calculate the sentiment analysis for both the negative and positive review columns, and store the result in 2 new columns. The test of the sentiment will be to compare it to the reviewer's score for the same review. For instance, if the sentiment analysis thinks the negative review had a sentiment of 1 (extremely positive sentiment) and the positive review a sentiment of 1, but the reviewer gave the hotel the lowest score possible, then either the review text doesn't match the score, or the sentiment analyser could not recognize the sentiment correctly. You should expect some sentiment scores to be completely wrong, and often that will be explainable, e.g. the review could be extremely sarcastic "Of course I LOVED sleeping in a room with no heating" and the sentiment analyser thinks that's positive sentiment, even though a human reading it would know it was sarcasm.
+
+NLTK supplies different sentiment analysers to learn with, and you can substitute them and see if the sentiment is more or less accurate. The VADER sentiment analysis is used here.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+Later in your program, when you are ready to calculate sentiment, you can apply it to each review as follows:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+This takes approximately 120 seconds on my computer, but it will vary on each computer. If you want to print the results and see if the sentiment matches the review:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
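+
+One quick way to quantify how well the sentiment matches the scores (an optional check, not part of the original lesson) is to look at the correlation between the new sentiment columns and `Reviewer_Score`:
+
+```python
+# A positive correlation is expected for Positive_Sentiment;
+# low reviewer scores should coincide with more negative Negative_Sentiment
+print(df.Reviewer_Score.corr(df.Positive_Sentiment))
+print(df.Reviewer_Score.corr(df.Negative_Sentiment))
+```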
+
+The very last thing to do with the file before using it in the challenge is to save it. You should also consider reordering all your new columns so they are easy to work with (for a human, it's a cosmetic change).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+You should run the entire code for [the analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) (after you've run [your filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+
+To review, the steps are:
+
+1. The original dataset file **Hotel_Reviews.csv** is explored in the previous lesson with [the explorer notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb), resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb), resulting in **Hotel_Reviews_NLP.csv**
+4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
+
+### Conclusion
+
+When you started, you had a dataset with columns and data, but not all of it could be verified or used. You've explored the data, filtered out what you don't need, converted tags into something useful, calculated your own averages, added some sentiment columns and, hopefully, learned some interesting things about processing natural text.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Challenge
+
+Now that you have your dataset analyzed for sentiment, see if you can use strategies you've learned in this curriculum (clustering, perhaps?) to determine patterns around sentiment.
+
+## Review & Self Study
+
+Take [this Learn module](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) to learn more and use different tools to explore sentiment in text.
+
+## Assignment
+
+[Try a different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/es/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..5c9c826b5
--- /dev/null
+++ b/translations/es/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Try a different dataset
+
+## Instructions
+
+Now that you've learned how to use NLTK to assign sentiment to text, try a different dataset. You'll probably need to do some data processing around it, so create a notebook and document your thought process. What do you discover?
+
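+A minimal starter sketch (a hypothetical example using NLTK's VADER analyser from the lesson; the dataset and the texts are up to you):
+
+```python
+import nltk
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+nltk.download('vader_lexicon')
+vader = SentimentIntensityAnalyzer()
+
+# Replace these samples with texts from the dataset you choose
+samples = ["The product arrived on time and works perfectly.",
+           "Terrible service, I will never order again."]
+for text in samples:
+    print(text, "->", vader.polarity_scores(text)["compound"])
+```
+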
+## Rubric
+
+| Criteria | Exemplary                                                                                                          | Adequate                                  | Needs Improvement      |
+| -------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------ | ----------------------- |
+|          | A complete notebook and dataset are presented, with well-documented cells explaining how the sentiment is assigned | The notebook is missing good explanations | The notebook is flawed |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/es/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..d10af765f
--- /dev/null
+++ b/translations/es/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/es/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..964d9a93d
--- /dev/null
+++ b/translations/es/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/6-NLP/README.md b/translations/es/6-NLP/README.md
new file mode 100644
index 000000000..1fc7fe8c8
--- /dev/null
+++ b/translations/es/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Getting started with natural language processing
+
+Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written, referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics. The whole field is directed at helping machines understand and process human language. This can then be used to perform tasks like spell check or machine translation. It has a variety of real-world applications in a number of fields, including medical research, search engines and business intelligence.
+
+## Regional topic: European languages and literature and romantic hotels of Europe ❤️
+
+In this section of the curriculum, you will be introduced to one of the most widespread uses of machine learning: natural language processing (NLP). Derived from computational linguistics, this category of artificial intelligence is the bridge between humans and machines via voice or textual communication.
+
+In these lessons we'll learn the basics of NLP by building small conversational bots to learn how machine learning aids in making these conversations more and more 'smart'. You'll travel back in time, chatting with Elizabeth Bennett and Mr. Darcy from Jane Austen's classic novel, **Pride and Prejudice**, published in 1813. Then, you'll further your knowledge by learning about sentiment analysis via hotel reviews in Europe.
+
+
+> Photo by Elaine Howlin on Unsplash
+
+## Lessons
+
+1. [Introduction to natural language processing](1-Introduction-to-NLP/README.md)
+2. [Common NLP tasks and techniques](2-Tasks/README.md)
+3. [Translation and sentiment analysis with machine learning](3-Translation-Sentiment/README.md)
+4. [Preparing your data](4-Hotel-Reviews-1/README.md)
+5. [NLTK for sentiment analysis](5-Hotel-Reviews-2/README.md)
+
+## Credits
+
+These natural language processing lessons were written with ☕ by [Stephen Howell](https://twitter.com/Howell_MSFT)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/6-NLP/data/README.md b/translations/es/6-NLP/data/README.md
new file mode 100644
index 000000000..f32f259a0
--- /dev/null
+++ b/translations/es/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/1-Introduction/README.md b/translations/es/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..37f3cf5b5
--- /dev/null
+++ b/translations/es/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Introduction to time series forecasting
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this lesson and the following one, you will learn a bit about time series forecasting, an interesting and valuable part of an ML scientist's repertoire that is a bit less well known than other topics. Time series forecasting is a sort of 'crystal ball': based on the past performance of a variable such as price, you can predict its potential future value.
+
+[](https://youtu.be/cBojo1hsHiI "Introduction to time series forecasting")
+
+> 🎥 Click the image above for a video about time series forecasting
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+It's a useful and interesting field with real value to business, given its direct application to problems of pricing, inventory, and supply chain issues. While deep learning techniques have started to be used to gain more insights and to better predict future performance, time series forecasting remains a field greatly informed by classic ML techniques.
+
+> Penn State's useful time series curriculum can be found [here](https://online.stat.psu.edu/stat510/lesson/1)
+
+## Introduction
+
+Suppose you maintain an array of smart parking meters that provide data about how often they are used and for how long over time.
+
+> What if you could predict, based on the meter's past performance, its future value according to the laws of supply and demand?
+
+Accurately predicting when to act so as to achieve your goal is a challenge that could be tackled by time series forecasting. It wouldn't make folks happy to be charged more in busy times when they're looking for a parking spot, but it would be a sure way to generate revenue to clean the streets!
+
+Let's explore some of the types of time series algorithms and start a notebook to clean and prepare some data. The data you will analyze is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. Given the historical patterns of electricity load and temperature, you can predict future values of electricity load.
+
+In this example, you'll learn how to forecast one time step ahead, using historical load data only. Before starting, however, it's useful to understand what's going on behind the scenes.
+
+## Some definitions
+
+When encountering the term 'time series' you need to understand its use in several different contexts.
+
+🎓 **Time series**
+
+In mathematics, "a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time." An example of a time series is the daily closing value of the [Dow Jones Industrial Average](https://wikipedia.org/wiki/Time_series). The use of time series plots and statistical modeling is frequently encountered in signal processing, weather forecasting, earthquake prediction, and other fields where events occur and data points can be plotted over time.
+
+🎓 **Time series analysis**
+
+Time series analysis is the analysis of the above mentioned time series data. Time series data can take distinct forms, including 'interrupted time series' which detects patterns in a time series' evolution before and after an interrupting event. The type of analysis needed for the time series depends on the nature of the data. Time series data itself can take the form of series of numbers or characters.
+
+The analysis to be performed uses a variety of methods, including frequency-domain and time-domain, linear and nonlinear, and more. [Learn more](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm) about the many ways to analyze this type of data.
+
+🎓 **Time series forecasting**
+
+Time series forecasting is the use of a model to predict future values based on patterns displayed by previously gathered data as it occurred in the past. While it is possible to use regression models to explore time series data, with time indices as x variables on a plot, such data is best analyzed using special types of models.
+
+Time series data is a list of ordered observations, unlike data that can be analyzed by linear regression. The most common model is ARIMA, an acronym that stands for "Autoregressive Integrated Moving Average".
+
+[ARIMA models](https://online.stat.psu.edu/stat510/lesson/1/1.1) "relate the present value of a series to past values and past prediction errors." They are most appropriate for analyzing time-domain data, where data is ordered over time.
+
+> There are several types of ARIMA models, which you can learn about [here](https://people.duke.edu/~rnau/411arim.htm) and which you will touch on in the next lesson.
+
+In the next lesson, you will build an ARIMA model using [Univariate Time Series](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm), which focuses on one variable that changes its value over time. An example of this type of data is [this dataset](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm) that records the monthly CO2 concentration at the Mauna Loa Observatory:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Identify the variable that changes over time in this dataset
+
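+A small, hypothetical snippet (the values are copied from the first rows above) showing how such a univariate series can be held and plotted with pandas:
+
+```python
+import pandas as pd
+
+# Monthly CO2 concentration: a single variable, indexed by time
+co2 = pd.Series([330.62, 331.40, 331.87, 333.18, 333.92, 333.43],
+                index=pd.period_range("1975-01", periods=6, freq="M"),
+                name="CO2")
+co2.plot()
+```
+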
+## Time series data characteristics to consider
+
+When looking at time series data, you might notice that it has [certain characteristics](https://online.stat.psu.edu/stat510/lesson/1/1.1) that you need to take into account and mitigate to better understand its patterns. If you consider time series data as potentially providing a 'signal' that you want to analyze, these characteristics can be thought of as 'noise'. You will often need to reduce this 'noise' by offsetting some of these characteristics using some statistical techniques.
+
+Here are some concepts you should know to be able to work with time series:
+
+🎓 **Trends**
+
+Trends are defined as measurable increases and decreases over time. [Read more](https://machinelearningmastery.com/time-series-trends-in-python). In the context of time series, it's about how to use and, if necessary, remove trends from your time series.
+
+🎓 **[Seasonality](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Seasonality is defined as periodic fluctuations, such as holiday shopping seasons that might affect sales, for example. [Take a look](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm) at how different types of plots display seasonality in data.
+
+🎓 **Outliers**
+
+Outliers are far away from the standard data variance.
+
+🎓 **Long-run cycle**
+
+Independent of seasonality, data might display a long-run cycle such as an economic downturn that lasts longer than a year.
+
+🎓 **Constant variance**
+
+Over time, some data display constant fluctuations, such as energy usage per day and night.
+
+🎓 **Abrupt changes**
+
+The data might display an abrupt change that might need further analysis. The abrupt shuttering of businesses due to COVID, for example, caused changes in the data.
+
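+One common way to separate the 'signal' from the trend and seasonality described above is a seasonal decomposition. A minimal sketch using `statsmodels` (assuming `series` is a pandas Series with a datetime index; `period` would be 12 for monthly data, or 24 for hourly data with a daily pattern):
+
+```python
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# Split the series into trend, seasonal, and residual ('noise') components
+result = seasonal_decompose(series, model="additive", period=12)
+result.plot()
+```
+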
+✅ Here is a [sample time series plot](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python) showing daily in-game currency spent over a few years. Can you identify any of the characteristics listed above in this data?
+
+
+
+## Exercise - getting started with power usage data
+
+Let's get started creating a time series model to predict future power usage given past usage.
+
+> The data in this example is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. In the `working` folder of this lesson, open the _notebook.ipynb_ file. Start by adding libraries that will help you load and visualize data
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+   Note, you are using the files from the included `common` folder, which set up your environment and handle downloading the data.
+
+2. Next, examine the data as a dataframe by calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+   You can see that there are two columns representing date and load:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Now, plot the data by calling `plot()`:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Now, plot the first week of July 2014, by providing it as input to `energy` in the `[from date]: [to date]` pattern:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   A beautiful plot! Take a look at these plots and see if you can determine any of the characteristics listed above. What can we surmise by visualizing the data?
+
+In the next lesson, you will create an ARIMA model to make some forecasts.
+
+---
+
+## 🚀Challenge
+
+Make a list of all the industries and areas of inquiry you can think of that would benefit from time series forecasting. Can you think of an application of these techniques in the arts? In Econometrics? Ecology? Retail? Industry? Finance? Where else?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Review & Self Study
+
+Although we won't cover them here, neural networks are sometimes used to enhance classic methods of time series forecasting. Read more about them [in this article](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## Assignment
+
+[Visualize some more time series](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/1-Introduction/assignment.md b/translations/es/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..edf4be858
--- /dev/null
+++ b/translations/es/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Visualize some more time series
+
+## Instructions
+
+You've begun to learn about time series forecasting by looking at the type of data that requires this special modeling. You've visualized some data around energy. Now, look around for some other data that would benefit from time series forecasting. Find three examples (try [Kaggle](https://kaggle.com) and [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) and create a notebook to visualize them. Note any special characteristics they have (seasonality, abrupt changes, or other trends) in the notebook.
+
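+A minimal, hypothetical starter sketch for each dataset (the file name and column names will depend on the data you pick):
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# Load the dataset, parse its timestamp column, and plot the series over time
+df = pd.read_csv("my_timeseries.csv", parse_dates=["date"], index_col="date")
+df.plot(figsize=(15, 6))
+plt.show()
+```
+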
+## Rubric
+
+| Criteria | Exemplary                                               | Adequate                                              | Needs Improvement                                                                          |
+| -------- | ------------------------------------------------------- | ----------------------------------------------------- | ------------------------------------------------------------------------------------------ |
+|          | Three datasets are plotted and explained in a notebook | Two datasets are plotted and explained in a notebook | Few datasets are plotted or explained in a notebook or the data presented is insufficient |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/es/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..5bad49ff7
--- /dev/null
+++ b/translations/es/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/es/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..c76b8497d
--- /dev/null
+++ b/translations/es/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstanding or misinterpretation arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/2-ARIMA/README.md b/translations/es/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..e529b3514
--- /dev/null
+++ b/translations/es/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,396 @@
+# Time series forecasting with ARIMA
+
+In the previous lesson, you learned a bit about time series forecasting and loaded a dataset showing the fluctuations of electrical load over a time period.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Click the image above for a video: A brief introduction to ARIMA models. The example is done in R, but the concepts are universal.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average). ARIMA models are especially suited to fit data that shows [non-stationarity](https://wikipedia.org/wiki/Stationary_process).
+
+## General concepts
+
+To be able to work with ARIMA, there are some concepts you need to know about:
+
+- 🎓 **Stationarity**. From a statistical context, stationarity refers to data whose distribution does not change when shifted in time. Non-stationary data, then, shows fluctuations due to trends that must be transformed to be analyzed. Seasonality, for example, can introduce fluctuations in data and can be eliminated by a process of 'seasonal-differencing'.
+
+- 🎓 **[Differencing](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. Differencing data, again from a statistical context, refers to the process of transforming non-stationary data to make it stationary by removing its non-constant trend. "Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series." [Paper by Shixiong et al](https://arxiv.org/abs/1904.07632) (a short sketch follows this list)
+
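+A minimal sketch of differencing plus a stationarity check, using pandas and the Augmented Dickey-Fuller test from `statsmodels` (assuming `series` is a pandas Series such as the hourly load data used below):
+
+```python
+from statsmodels.tsa.stattools import adfuller
+
+# First-order differencing: subtract each value from the one that follows it
+differenced = series.diff().dropna()
+
+# ADF test: a small p-value (e.g. < 0.05) suggests the differenced series is stationary
+stat, p_value = adfuller(differenced)[:2]
+print(f"ADF statistic: {stat:.2f}, p-value: {p_value:.4f}")
+```
+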
+## ARIMA in the context of time series
+
+Let's unpack the parts of ARIMA to better understand how it helps us model time series and make predictions against it.
+
+- **AR - for AutoRegressive**. Autoregressive models, as the name implies, look 'back' in time to analyze previous values in your data and make assumptions about them. These previous values are called 'lags'. An example would be data that shows monthly sales of pencils. Each month's sales total would be considered an 'evolving variable' in the dataset. This model is built as the "evolving variable of interest is regressed on its own lagged (i.e., prior) values." [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - for Integrated**. As opposed to the similar 'ARMA' models, the 'I' in ARIMA refers to its *[integrated](https://wikipedia.org/wiki/Order_of_integration)* aspect. The data is 'integrated' when differencing steps are applied so as to eliminate non-stationarity.
+
+- **MA - for Moving Average**. The [moving-average](https://wikipedia.org/wiki/Moving-average_model) aspect of this model refers to the output variable that is determined by observing the current and past values of lags.
+
+Bottom line: ARIMA is used to make a model fit the special form of time series data as closely as possible.
+
+## Exercise - build an ARIMA model
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) file.
+
+1. Run the notebook to load the `statsmodels` Python library; you will need this for ARIMA models.
+
+1. Load the necessary libraries
+
+1. Now, load up several more libraries useful for plotting data:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Plot all the available energy data from January 2012 to December 2014. There should be no surprises, as we saw this data in the last lesson:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+   Now, let's build a model!
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. You'll train your model on the train set. As usual, after the model has finished training, you'll evaluate its accuracy using the test set. You need to ensure that the test set covers a later period in time from the training set, to ensure that the model does not gain information from future time periods.
+
+1. Allocate the period from November 1 through December 29, 2014 to the training set; the test set will cover the two days of December 30-31, 2014 (matching the start dates defined below):
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+   Since this data reflects the daily consumption of energy, there is a strong seasonal pattern, but the consumption is most similar to the consumption in more recent days.
+
+1. Visualize the differences:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   Therefore, using a relatively small window of time for training the data should be sufficient.
+
+   > Note: Since the function we use to fit the ARIMA model uses in-sample validation during fitting, we will omit validation data.
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale to ensure the data is projected in the interval (0, 1).
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only including the needed column 'load' plus the date:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+   You can see the shape of the data:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
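+
+   (Quick check: the training window spans November 1 to December 29, i.e. 59 days × 24 hours = 1,416 rows, and the test window spans the 2 days of December 30-31, i.e. 2 × 24 = 48 rows.)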
+
+1. Scale the data to be in the range (0, 1).
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Visualize the original vs. scaled data:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   > The original data
+
+ 
+
+   > The scaled data
+
+1. Now that you have calibrated the scaled data, you can scale the test data:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### Implement ARIMA
+
+It's time to implement ARIMA! You'll now use the `statsmodels` library that you installed earlier.
+
+Now you need to follow several steps:
+
+1. Define the model by calling `SARIMAX()` and passing in the model parameters: the p, d, and q parameters, and the P, D, and Q parameters.
+2. Prepare the model for the training data by calling the `fit()` function.
+3. Make predictions by calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+- `p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+- `d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+- `q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect - which this one does - we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q`, which describe the same associations as `p`, `d`, and `q` but correspond to the seasonal components of the model.
+
+1. Start by setting your preferred horizon value. Let's try 3 hours:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+   Selecting the best values for an ARIMA model's parameters can be challenging, as it's somewhat subjective and time intensive. You might consider using the `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html) instead (see the sketch after these steps).
+
+1. For now, try some manual selections to find a good model.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+   A table of results is printed.
+
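+As mentioned above, a hedged sketch of automating that parameter search (`pyramid` is distributed today as `pmdarima`; this assumes it is installed via `pip install pmdarima`):
+
+```python
+import pmdarima as pm
+
+# Search for reasonable (p, d, q)(P, D, Q) values automatically;
+# m=24 reflects the daily (24-hour) seasonality of the load data
+auto_model = pm.auto_arima(train["load"], seasonal=True, m=24, suppress_warnings=True)
+print(auto_model.summary())
+```
+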
+You've built your first model! Now we need to find a way to evaluate it.
+
+### Evaluate your model
+
+To evaluate your model, you can perform so-called `walk forward` validation. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step.
+
+Starting at the beginning of the time series using this technique, train the model on the train data set. Then make a prediction on the next time step. The prediction is evaluated against the known value. The training set is then expanded to include the known value, and the process is repeated.
+
+> Note: You should keep the training set window fixed for more efficient training, so that every time you add a new observation to the training set, you remove the observation from the beginning of the set.
+
+This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computational cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale.
+
+Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
+
+1. First, create a test data point for each HORIZON step.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+    | fecha      | hora     | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ Los datos se desplazan horizontalmente según su punto de horizonte.
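+
+   Para ver qué hace `shift(-t)`, un pequeño ejemplo de juguete, independiente de los datos de la lección:
+
+    ```python
+    import pandas as pd
+
+    s = pd.Series([10, 20, 30, 40])
+    # shift(-1) moves each value one position earlier; the tail becomes NaN
+    print(s.shift(-1).tolist())  # [20.0, 30.0, 40.0, nan]
+    ```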
+
+1. Haz predicciones en tus datos de prueba usando este enfoque de ventana deslizante, en un bucle que recorre la longitud de los datos de prueba:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+   Puedes ver el entrenamiento en marcha:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Compara las predicciones con la carga real:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+   Salida
+
+   |   | fecha      | hora     | h   | prediction | actual   |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+ Observa la predicción de los datos horarios, en comparación con la carga real. ¿Qué tan precisa es?
+
+### Verificar la precisión del modelo
+
+Verifica la precisión de tu modelo probando su error porcentual absoluto medio (MAPE) sobre todas las predicciones.
+
+> **🧮 Muéstrame las matemáticas**
+>
+> $$\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|$$
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) se utiliza para mostrar la precisión de la predicción como una proporción definida por la fórmula anterior, donde $A_t$ es el valor real y $F_t$ el pronóstico en el tiempo $t$. La diferencia entre ambos se divide por el valor real. "El valor absoluto en este cálculo se suma para cada punto de pronóstico en el tiempo y se divide por el número de puntos ajustados $n$." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
+
+1. Expresa la ecuación en código:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Calcula el MAPE de un paso:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+    ```output
+    One step forecast MAPE:  0.5570581332313952 %
+    ```
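+
+   La función `mape` proviene de `common.utils` del repositorio. Si no la tienes a mano, una implementación mínima equivalente podría verse así (un esbozo, no el código exacto del repositorio):
+
+    ```python
+    import numpy as np
+
+    def mape(predictions, actuals):
+        # Mean absolute percentage error, as a fraction (multiply by 100 for %)
+        predictions, actuals = np.array(predictions), np.array(actuals)
+        return np.mean(np.abs(predictions - actuals) / np.abs(actuals))
+    ```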
+
+1. Imprime el MAPE de pronóstico de múltiples pasos:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+   Un número bajo es mejor: considera que un pronóstico con un MAPE de 10 está desviado en un 10% (por ejemplo, predecir 110 cuando el valor real es 100).
+
+1. Pero como siempre, es más fácil ver este tipo de medición de precisión visualmente, así que vamos a graficarlo:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+        ax = fig.add_subplot(111)
+        ax.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0, label='actual')
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+            ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t), label='t+'+str(t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 Una gráfica muy bonita, que muestra un modelo con buena precisión. ¡Bien hecho!
+
+---
+
+## 🚀Desafío
+
+Investiga las formas de probar la precisión de un modelo de series temporales. Tocamos el MAPE en esta lección, pero ¿hay otros métodos que podrías usar? Investígalos y anótalos. Un documento útil se puede encontrar [aquí](https://otexts.com/fpp2/accuracy.html)
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Repaso y autoestudio
+
+Esta lección toca solo lo básico del pronóstico de series temporales con ARIMA. Tómate un tiempo para profundizar tu conocimiento explorando [este repositorio](https://microsoft.github.io/forecasting/) y sus diversos tipos de modelos para aprender otras formas de construir modelos de series temporales.
+
+## Tarea
+
+[Un nuevo modelo ARIMA](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/2-ARIMA/assignment.md b/translations/es/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..3ddebebad
--- /dev/null
+++ b/translations/es/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# Un nuevo modelo ARIMA
+
+## Instrucciones
+
+Ahora que has construido un modelo ARIMA, construye uno nuevo con datos frescos (prueba uno de [estos conjuntos de datos de Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Anota tu trabajo en un cuaderno, visualiza los datos y tu modelo, y prueba su precisión utilizando MAPE.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | ------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------- | ----------------------------------- |
+| | Se presenta un cuaderno con un nuevo modelo ARIMA construido, probado y explicado con visualizaciones y precisión indicada. | El cuaderno presentado no está anotado o contiene errores | Se presenta un cuaderno incompleto |
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/es/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..85529d6f5
--- /dev/null
+++ b/translations/es/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/es/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..c138f1459
--- /dev/null
+++ b/translations/es/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/3-SVR/README.md b/translations/es/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..910ad7b0a
--- /dev/null
+++ b/translations/es/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,382 @@
+# Pronóstico de Series Temporales con el Regresor de Vectores de Soporte
+
+En la lección anterior, aprendiste a usar el modelo ARIMA para hacer predicciones de series temporales. Ahora veremos el modelo de Regresor de Vectores de Soporte, un modelo de regresión que se usa para predecir datos continuos.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Introducción
+
+En esta lección, descubrirás una forma específica de construir modelos con [**SVM**: **M**áquina de **V**ectores de **S**oporte](https://en.wikipedia.org/wiki/Support-vector_machine) para regresión, o **SVR: Regresor de Vectores de Soporte**.
+
+### SVR en el contexto de series temporales [^1]
+
+Antes de entender la importancia del SVR en la predicción de series temporales, aquí hay algunos conceptos importantes que necesitas conocer:
+
+- **Regresión:** Técnica de aprendizaje supervisado para predecir valores continuos a partir de un conjunto dado de entradas. La idea es ajustar una curva (o línea) en el espacio de características que tenga el mayor número de puntos de datos. [Haz clic aquí](https://en.wikipedia.org/wiki/Regression_analysis) para más información.
+- **Máquina de Vectores de Soporte (SVM):** Un tipo de modelo de aprendizaje supervisado utilizado para clasificación, regresión y detección de valores atípicos. El modelo es un hiperplano en el espacio de características, que en el caso de la clasificación actúa como una frontera, y en el caso de la regresión actúa como la línea de mejor ajuste. En SVM, generalmente se usa una función Kernel para transformar el conjunto de datos a un espacio de mayor número de dimensiones, para que puedan ser fácilmente separables. [Haz clic aquí](https://en.wikipedia.org/wiki/Support-vector_machine) para más información sobre las SVM.
+- **Regresor de Vectores de Soporte (SVR):** Un tipo de SVM que busca la línea de mejor ajuste (que en el caso de SVM es un hiperplano) que contiene el mayor número de puntos de datos dentro de su margen de tolerancia.
+
+### ¿Por qué SVR? [^1]
+
+En la última lección aprendiste sobre ARIMA, un método estadístico lineal muy exitoso para pronosticar datos de series temporales. Sin embargo, en muchos casos los datos de series temporales presentan *no linealidad*, que los modelos lineales no pueden capturar. En tales casos, la capacidad de las SVM de tener en cuenta la no linealidad de los datos en tareas de regresión hace que el SVR tenga éxito en el pronóstico de series temporales.
+
+## Ejercicio - construir un modelo SVR
+
+Los primeros pasos para la preparación de datos son los mismos que en la lección anterior sobre [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA).
+
+Abre la carpeta [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) en esta lección y encuentra el archivo [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb).[^2]
+
+1. Ejecuta el notebook e importa las bibliotecas necesarias: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Carga los datos del archivo `/data/energy.csv` en un dataframe de Pandas y échale un vistazo: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Grafica todos los datos de energía disponibles desde enero de 2012 hasta diciembre de 2014: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Ahora, construyamos nuestro modelo SVR.
+
+### Crear conjuntos de datos de entrenamiento y prueba
+
+Ahora que tus datos están cargados, puedes separarlos en conjuntos de entrenamiento y prueba. Luego, remodelarás los datos para crear el conjunto de datos basado en pasos de tiempo que necesita el SVR. Entrenarás tu modelo en el conjunto de entrenamiento. Después de que el modelo haya terminado de entrenar, evaluarás su precisión en el conjunto de entrenamiento, en el conjunto de prueba y luego en el conjunto de datos completo para ver el rendimiento general. Debes asegurarte de que el conjunto de prueba cubra un período posterior al del conjunto de entrenamiento, para garantizar que el modelo no obtenga información de períodos de tiempo futuros [^2] (una situación conocida como *fuga de datos*, que inflaría artificialmente la precisión).
+
+1. Asigna el período del 1 de noviembre al 29 de diciembre de 2014 al conjunto de entrenamiento. El conjunto de prueba incluirá los dos días restantes: 30 y 31 de diciembre de 2014 (observa las fechas definidas en el código): [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Visualiza las diferencias: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### Preparar los datos para el entrenamiento
+
+Ahora, necesitas preparar los datos para el entrenamiento filtrándolos y escalándolos. Filtra el conjunto de datos para incluir solo los períodos de tiempo y las columnas que necesitas, y escálalo para asegurar que los datos se proyecten en el intervalo (0, 1).
+
+1. Filtra el conjunto de datos original para incluir solo los períodos de tiempo mencionados por conjunto e incluyendo solo la columna necesaria 'load' más la fecha: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Escala los datos de entrenamiento para que estén en el rango (0, 1): [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. Ahora, escala los datos de prueba: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Crear datos con pasos de tiempo [^1]
+
+Para el SVR, transformas los datos de entrada para que tengan la forma `[batch, timesteps]`. Así que remodelas los `train_data` y `test_data` existentes de manera que haya una nueva dimensión que se refiere a los pasos de tiempo.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+Para este ejemplo, tomamos `timesteps = 5`. Así que, las entradas al modelo son los datos para los primeros 4 pasos de tiempo, y la salida serán los datos para el quinto paso de tiempo.
+
+```python
+timesteps=5
+```
+
+Convirtiendo los datos de entrenamiento a tensor 2D usando comprensión de listas anidadas:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
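+
+Para ver qué construye la comprensión de listas, un pequeño ejemplo de juguete, independiente de los datos de la lección:
+
+```python
+import numpy as np
+
+toy = np.arange(1, 8).reshape(-1, 1)  # a column of values 1..7, like train_data
+ts = 3                                # a smaller window than the lesson's 5
+windows = np.array([[j for j in toy[i:i+ts]] for i in range(0, len(toy)-ts+1)])[:, :, 0]
+print(windows)
+# [[1 2 3]
+#  [2 3 4]
+#  [3 4 5]
+#  [4 5 6]
+#  [5 6 7]]
+```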
+
+Convirtiendo los datos de prueba a tensor 2D:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+Seleccionando entradas y salidas de los datos de entrenamiento y prueba:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### Implementar SVR [^1]
+
+Ahora, es momento de implementar SVR. Para leer más sobre esta implementación, puedes consultar [esta documentación](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html). Para nuestra implementación, seguimos estos pasos:
+
+ 1. Define el modelo llamando a `SVR()` y pasando los hiperparámetros del modelo: kernel, gamma, C y epsilon
+ 2. Prepara el modelo con los datos de entrenamiento llamando a la función `fit()`
+ 3. Haz predicciones llamando a la función `predict()`
+
+Ahora creamos un modelo SVR. Aquí usamos el [kernel RBF](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel), y establecemos los hiperparámetros gamma, C y epsilon en 0.5, 10 y 0.05 respectivamente.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### Ajustar el modelo en los datos de entrenamiento [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### Hacer predicciones con el modelo [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+¡Has construido tu SVR! Ahora necesitamos evaluarlo.
+
+### Evaluar tu modelo [^1]
+
+Para la evaluación, primero revertiremos el escalado de los datos para volver a la escala original. Luego, para verificar el rendimiento, graficaremos la serie temporal original y la predicha, y también imprimiremos el resultado del MAPE.
+
+Revierte el escalado de la salida predicha y de la original:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### Verificar el rendimiento del modelo en los datos de entrenamiento y prueba [^1]
+
+Extraemos las marcas de tiempo del conjunto de datos para mostrarlas en el eje x de nuestro gráfico. Observa que estamos usando los primeros `timesteps-1` valores como entrada para la primera salida, por lo que las marcas de tiempo de la salida comenzarán después de eso.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+Graficar las predicciones para los datos de entrenamiento:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+Imprimir MAPE para los datos de entrenamiento
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+Graficar las predicciones para los datos de prueba
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+Imprimir MAPE para los datos de prueba
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 ¡Tienes un muy buen resultado en el conjunto de datos de prueba!
+
+### Verificar el rendimiento del modelo en el conjunto de datos completo [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 Muy buenos gráficos, mostrando un modelo con buena precisión. ¡Bien hecho!
+
+---
+
+## 🚀Desafío
+
+- Intenta ajustar los hiperparámetros (gamma, C, epsilon) al crear el modelo y evalúalo en los datos para ver qué conjunto de hiperparámetros da los mejores resultados en los datos de prueba (ver el esbozo después de esta lista). Para saber más sobre estos hiperparámetros, puedes consultar el documento [aquí](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel).
+- Intenta usar diferentes funciones kernel para el modelo y analiza su rendimiento en el conjunto de datos. Un documento útil se puede encontrar [aquí](https://scikit-learn.org/stable/modules/svm.html#kernel-functions).
+- Intenta usar diferentes valores para `timesteps` para que el modelo mire hacia atrás para hacer la predicción.
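+
+Como punto de partida para el primer punto del desafío, aquí hay un esbozo de búsqueda en malla con validación cruzada apta para series temporales. Supone que `x_train` e `y_train` existen como arriba y que tu versión de scikit-learn incluye el scorer `neg_mean_absolute_percentage_error` (disponible desde scikit-learn 0.24); los valores de la malla son solo ilustrativos:
+
+```python
+from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
+from sklearn.svm import SVR
+
+# Candidate hyperparameter values (illustrative, not exhaustive)
+param_grid = {
+    'gamma': [0.1, 0.5, 1.0],
+    'C': [1, 10, 100],
+    'epsilon': [0.01, 0.05, 0.1],
+}
+
+# TimeSeriesSplit keeps each validation fold after its training fold
+search = GridSearchCV(
+    SVR(kernel='rbf'),
+    param_grid,
+    cv=TimeSeriesSplit(n_splits=5),
+    scoring='neg_mean_absolute_percentage_error',
+)
+search.fit(x_train, y_train[:, 0])
+print(search.best_params_)
+```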
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## Revisión y autoestudio
+
+Esta lección fue para introducir la aplicación de SVR para el pronóstico de series temporales. Para leer más sobre SVR, puedes consultar [este blog](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/). Esta [documentación en scikit-learn](https://scikit-learn.org/stable/modules/svm.html) proporciona una explicación más completa sobre las SVM en general, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression) y también otros detalles de implementación como las diferentes [funciones kernel](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) que se pueden usar, y sus parámetros.
+
+## Tarea
+
+[Un nuevo modelo SVR](assignment.md)
+
+## Créditos
+
+[^1]: El texto, código y salida en esta sección fueron contribuidos por [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)
+[^2]: El texto, código y salida en esta sección fueron tomados de [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/3-SVR/assignment.md b/translations/es/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..93821ea64
--- /dev/null
+++ b/translations/es/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# Un nuevo modelo SVR
+
+## Instrucciones [^1]
+
+Ahora que has construido un modelo SVR, construye uno nuevo con datos frescos (prueba uno de [estos conjuntos de datos de Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Anota tu trabajo en un cuaderno, visualiza los datos y tu modelo, y prueba su precisión utilizando gráficos apropiados y MAPE. También intenta ajustar los diferentes hiperparámetros y usar diferentes valores para los pasos de tiempo.
+
+## Rúbrica [^1]
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejora |
+| --------- | ------------------------------------------------------------ | --------------------------------------------------------- | ----------------------------------- |
+| | Se presenta un cuaderno con un modelo SVR construido, probado y explicado con visualizaciones y precisión indicada. | El cuaderno presentado no está anotado o contiene errores. | Se presenta un cuaderno incompleto |
+
+[^1]: El texto en esta sección se basó en la [asignación de ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/7-TimeSeries/README.md b/translations/es/7-TimeSeries/README.md
new file mode 100644
index 000000000..80b081ec6
--- /dev/null
+++ b/translations/es/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# Introducción a la predicción de series temporales
+
+¿Qué es la predicción de series temporales? Se trata de predecir eventos futuros analizando las tendencias del pasado.
+
+## Tema regional: uso de electricidad a nivel mundial ✨
+
+En estas dos lecciones, se te presentará la predicción de series temporales, un área algo menos conocida del aprendizaje automático que, no obstante, es extremadamente valiosa para aplicaciones industriales y comerciales, entre otros campos. Aunque las redes neuronales pueden usarse para mejorar la utilidad de estos modelos, los estudiaremos en el contexto del aprendizaje automático clásico, ya que los modelos ayudan a predecir el rendimiento futuro basándose en el pasado.
+
+Nuestro enfoque regional es el uso eléctrico en el mundo, un conjunto de datos interesante para aprender sobre la predicción del uso futuro de energía basado en patrones de carga pasados. Puedes ver cómo este tipo de predicción puede ser extremadamente útil en un entorno empresarial.
+
+
+
+Foto por [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) de torres eléctricas en una carretera en Rajasthan en [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## Lecciones
+
+1. [Introducción a la predicción de series temporales](1-Introduction/README.md)
+2. [Construcción de modelos de series temporales ARIMA](2-ARIMA/README.md)
+3. [Construcción de un Regresor de Vectores de Soporte para la predicción de series temporales](3-SVR/README.md)
+
+## Créditos
+
+"La introducción a la predicción de series temporales" fue escrita con ⚡️ por [Francesca Lazzeri](https://twitter.com/frlazzeri) y [Jen Looper](https://twitter.com/jenlooper). Los notebooks aparecieron por primera vez en línea en el [repositorio "Deep Learning For Time Series" de Azure](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) originalmente escrito por Francesca Lazzeri. La lección de SVR fue escrita por [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional realizada por humanos. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/1-QLearning/README.md b/translations/es/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..92f08a67f
--- /dev/null
+++ b/translations/es/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,320 @@
+## Introducción al Aprendizaje por Refuerzo y Q-Learning
+
+
+> Sketchnote por [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+El aprendizaje por refuerzo implica tres conceptos importantes: el agente, algunos estados y un conjunto de acciones por estado. Al ejecutar una acción en un estado especificado, el agente recibe una recompensa. Imagina de nuevo el videojuego Super Mario. Eres Mario, estás en un nivel del juego, parado al borde de un acantilado. Encima de ti hay una moneda. Tú, siendo Mario, en un nivel del juego, en una posición específica... ese es tu estado. Mover un paso a la derecha (una acción) te haría caer por el borde, y eso te daría una puntuación numérica baja. Sin embargo, presionar el botón de salto te permitiría obtener un punto y seguir vivo. Ese es un resultado positivo y debería otorgarte una puntuación numérica positiva.
+
+Usando el aprendizaje por refuerzo y un simulador (el juego), puedes aprender a jugar el juego para maximizar la recompensa, que es mantenerte vivo y obtener tantos puntos como sea posible.
+
+[](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 Haz clic en la imagen de arriba para escuchar a Dmitry hablar sobre el Aprendizaje por Refuerzo
+
+## [Cuestionario antes de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## Requisitos y Configuración
+
+En esta lección, experimentaremos con algo de código en Python. Deberías poder ejecutar el código del Jupyter Notebook de esta lección, ya sea en tu computadora o en algún lugar en la nube.
+
+Puedes abrir [el notebook de la lección](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb) y seguir esta lección para construir.
+
+> **Nota:** Si estás abriendo este código desde la nube, también necesitas obtener el archivo [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py), que se usa en el código del notebook. Agrégalo al mismo directorio que el notebook.
+
+## Introducción
+
+En esta lección, exploraremos el mundo de **[Pedro y el lobo](https://es.wikipedia.org/wiki/Pedro_y_el_lobo)**, inspirado en el cuento sinfónico del compositor ruso [Sergei Prokofiev](https://es.wikipedia.org/wiki/Serguéi_Prokófiev). Usaremos **Aprendizaje por Refuerzo** para permitir que Pedro explore su entorno, recoja manzanas sabrosas y evite encontrarse con el lobo.
+
+El **Aprendizaje por Refuerzo** (RL) es una técnica de aprendizaje que nos permite aprender un comportamiento óptimo de un **agente** en algún **entorno** mediante la ejecución de muchos experimentos. Un agente en este entorno debe tener algún **objetivo**, definido por una **función de recompensa**.
+
+## El entorno
+
+Para simplificar, consideremos que el mundo de Pedro es un tablero cuadrado de tamaño `width` x `height`, como este:
+
+
+
+Cada celda en este tablero puede ser:
+
+* **suelo**, sobre el cual Pedro y otras criaturas pueden caminar.
+* **agua**, sobre la cual obviamente no se puede caminar.
+* un **árbol** o **hierba**, un lugar donde puedes descansar.
+* una **manzana**, que representa algo que Pedro estaría encantado de encontrar para alimentarse.
+* un **lobo**, que es peligroso y debe evitarse.
+
+Hay un módulo de Python separado, [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py), que contiene el código para trabajar con este entorno. Debido a que este código no es importante para entender nuestros conceptos, importaremos el módulo y lo usaremos para crear el tablero de muestra (bloque de código 1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+Este código debería imprimir una imagen del entorno similar a la anterior.
+
+## Acciones y política
+
+En nuestro ejemplo, el objetivo de Pedro sería encontrar una manzana, evitando al lobo y otros obstáculos. Para hacer esto, esencialmente puede caminar hasta encontrar una manzana.
+
+Por lo tanto, en cualquier posición, puede elegir entre una de las siguientes acciones: arriba, abajo, izquierda y derecha.
+
+Definiremos esas acciones como un diccionario y las mapearemos a los pares de cambios de coordenadas correspondientes. Por ejemplo, moverse a la derecha (`R`) correspondería al par `(1,0)`. (bloque de código 2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
+
+Para resumir, la estrategia y el objetivo de este escenario son los siguientes:
+
+- **La estrategia** de nuestro agente (Pedro) está definida por una llamada **política**. Una política es una función que devuelve la acción en cualquier estado dado. En nuestro caso, el estado del problema está representado por el tablero, incluyendo la posición actual del jugador.
+
+- **El objetivo** del aprendizaje por refuerzo es eventualmente aprender una buena política que nos permita resolver el problema de manera eficiente. Sin embargo, como línea base, consideremos la política más simple llamada **caminar aleatorio**.
+
+## Caminar aleatorio
+
+Primero resolvamos nuestro problema implementando una estrategia de caminar aleatorio. Con caminar aleatorio, elegiremos aleatoriamente la siguiente acción de las acciones permitidas, hasta que lleguemos a la manzana (bloque de código 3).
+
+1. Implementa el caminar aleatorio con el siguiente código:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+ La llamada a `walk` debería devolver la longitud del camino correspondiente, que puede variar de una ejecución a otra.
+
+1. Ejecuta el experimento de caminar varias veces (digamos, 100) e imprime las estadísticas resultantes (bloque de código 4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+   Nota que la longitud promedio de un camino es de alrededor de 30-40 pasos, lo cual es bastante largo, dado que la distancia promedio a la manzana más cercana es de alrededor de 5-6 pasos.
+
+ También puedes ver cómo se ve el movimiento de Pedro durante el caminar aleatorio:
+
+ 
+
+## Función de recompensa
+
+Para hacer nuestra política más inteligente, necesitamos entender qué movimientos son "mejores" que otros. Para hacer esto, necesitamos definir nuestro objetivo.
+
+El objetivo puede definirse en términos de una **función de recompensa**, que devolverá algún valor de puntuación para cada estado. Cuanto más alto el número, mejor la recompensa. (bloque de código 5)
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+Una cosa interesante sobre las funciones de recompensa es que en la mayoría de los casos, *solo se nos da una recompensa sustancial al final del juego*. Esto significa que nuestro algoritmo debe recordar de alguna manera los pasos "buenos" que conducen a una recompensa positiva al final, y aumentar su importancia. De manera similar, todos los movimientos que conducen a malos resultados deben desalentarse.
+
+## Q-Learning
+
+Un algoritmo que discutiremos aquí se llama **Q-Learning**. En este algoritmo, la política está definida por una función (o una estructura de datos) llamada **Q-Table**. Registra la "bondad" de cada una de las acciones en un estado dado.
+
+Se llama Q-Table porque a menudo es conveniente representarla como una tabla, o matriz multidimensional. Dado que nuestro tablero tiene dimensiones `width` x `height`, podemos representar la Q-Table usando una matriz numpy con forma `width` x `height` x `len(actions)`: (bloque de código 6)
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=np.float64)*1.0/len(actions)
+```
+
+Nota que inicializamos todos los valores de la Q-Table con un valor igual, en nuestro caso 0.25. Esto corresponde a la política de "caminar aleatorio", porque todos los movimientos en cada estado son igualmente buenos. Podemos pasar la Q-Table a la función `plot` para visualizar la tabla sobre el tablero: `m.plot(Q)`.
+
+
+
+En el centro de cada celda hay una "flecha" que indica la dirección preferida del movimiento. Como todas las direcciones son iguales, se muestra un punto.
+
+Ahora necesitamos ejecutar la simulación, explorar nuestro entorno y aprender una mejor distribución de los valores de la Q-Table, que nos permitirá encontrar el camino hacia la manzana mucho más rápido.
+
+## La esencia de Q-Learning: la ecuación de Bellman
+
+Una vez que comenzamos a movernos, cada acción tendrá una recompensa correspondiente, es decir, en teoría podríamos seleccionar la siguiente acción basándonos en la recompensa inmediata más alta. Sin embargo, en la mayoría de los estados el movimiento no logrará nuestro objetivo de llegar a la manzana, por lo que no podemos decidir de inmediato qué dirección es mejor.
+
+> Recuerda que lo que importa no es el resultado inmediato, sino el resultado final, que obtendremos al terminar la simulación.
+
+Para tener en cuenta esta recompensa diferida, necesitamos usar los principios de la **[programación dinámica](https://en.wikipedia.org/wiki/Dynamic_programming)**, que nos permiten pensar en nuestro problema de forma recursiva.
+
+Supongamos que ahora estamos en el estado *s* y queremos movernos al siguiente estado *s'*. Al hacerlo, recibiremos la recompensa inmediata *r(s,a)*, definida por la función de recompensa, más alguna recompensa futura. Si suponemos que nuestra Q-Table refleja correctamente el "atractivo" de cada acción, entonces en el estado *s'* elegiremos la acción *a'* que corresponda al valor máximo de *Q(s',a')*. Así, la mejor recompensa futura posible que podríamos obtener en el estado *s* se define como max<sub>a'</sub> *Q(s',a')* (el máximo se calcula sobre todas las acciones posibles *a'* en el estado *s'*).
+
+Esto da la **fórmula de Bellman** para calcular el valor de la Q-Table en el estado *s*, dada la acción *a*:
+
+
+
+Aquí γ es el llamado **factor de descuento**, que determina en qué medida debes preferir la recompensa actual sobre la recompensa futura y viceversa. Por ejemplo, con γ = 0.5, una recompensa que llega dos pasos más adelante vale solo 0.25 de lo que valdría recibida de inmediato.
+
+## Algoritmo de aprendizaje
+
+Dada la ecuación anterior, ahora podemos escribir el pseudocódigo de nuestro algoritmo de aprendizaje:
+
+* Inicializa la Q-Table Q con números iguales para todos los estados y acciones
+* Establece la tasa de aprendizaje α ← 1
+* Repite la simulación muchas veces
+   1. Comienza en una posición aleatoria
+   1. Repite
+      1. Selecciona una acción *a* en el estado *s*
+      2. Ejecuta la acción moviéndote a un nuevo estado *s'*
+      3. Si encontramos la condición de fin de juego, o la recompensa total es demasiado pequeña, sal de la simulación
+      4. Calcula la recompensa *r* en el nuevo estado
+      5. Actualiza la Q-Function según la ecuación de Bellman: *Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ max<sub>a'</sub>Q(s',a'))*
+      6. *s* ← *s'*
+      7. Actualiza la recompensa total y disminuye α.
+
+## Explotar vs. explorar
+
+En el algoritmo anterior no especificamos cómo exactamente deberíamos elegir la acción en el paso 2.1. Si elegimos la acción al azar, **exploraremos** el entorno de forma aleatoria, y es bastante probable que muramos a menudo y que exploremos áreas a las que normalmente no iríamos. Un enfoque alternativo sería **explotar** los valores de la Q-Table que ya conocemos y, por tanto, elegir la mejor acción (con el valor más alto de la Q-Table) en el estado *s*. Esto, sin embargo, nos impedirá explorar otros estados y es probable que no encontremos la solución óptima.
+
+Por lo tanto, el mejor enfoque es encontrar un equilibrio entre exploración y explotación. Esto puede hacerse eligiendo la acción en el estado *s* con probabilidades proporcionales a los valores de la Q-Table. Al principio, cuando todos los valores de la Q-Table son iguales, esto correspondería a una selección aleatoria; pero a medida que aprendamos más sobre el entorno, seremos más propensos a seguir la ruta óptima, permitiendo al agente elegir de vez en cuando un camino inexplorado.
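+
+Una alternativa habitual a la selección proporcional es la estrategia *epsilon-greedy*. No es la que usa esta lección, pero a modo de esbozo ilustrativo (supone la misma Q-Table de forma `width` x `height` x número de acciones):
+
+```python
+import random
+import numpy as np
+
+def epsilon_greedy(Q, state, eps=0.1):
+    x, y = state
+    # With probability eps, explore: pick a random action index
+    if random.random() < eps:
+        return random.randrange(Q.shape[-1])
+    # Otherwise, exploit: pick the action with the highest Q-value
+    return int(np.argmax(Q[x, y]))
+```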
+
+## Implementación en Python
+
+Ahora estamos listos para implementar el algoritmo de aprendizaje. Antes de hacerlo, también necesitamos una función que convierta los números arbitrarios de la Q-Table en un vector de probabilidades para las acciones correspondientes.
+
+1. Crea una función `probs()`:
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+ ```
+
+ Agregamos algunos `eps` al vector original para evitar la división por 0 en el caso inicial, cuando todos los componentes del vector son idénticos.
+
+Ejecuta el algoritmo de aprendizaje a lo largo de 5000 experimentos, también llamados **épocas**: (bloque de código 8)
+
+```python
+lpath = []  # track the path length of each episode
+
+for epoch in range(5000):
+
+    # Pick initial point
+    m.random_start()
+
+    # Start travelling
+    n=0
+    cum_reward = 0
+    while True:
+        x,y = m.human
+        v = probs(Q[x,y])
+        a = random.choices(list(actions),weights=v)[0]
+        dpos = actions[a]
+        m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+        r = reward(m)
+        cum_reward += r
+        if r==end_reward or cum_reward < -1000:
+            lpath.append(n)
+            break
+        alpha = np.exp(-n / 10e5)
+        gamma = 0.5
+        ai = action_idx[a]
+        Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+        n+=1
+```
+
+Después de ejecutar este algoritmo, la Q-Table debería actualizarse con valores que definen el atractivo de las diferentes acciones en cada paso. Podemos intentar visualizar la Q-Table dibujando en cada celda un vector que apunte en la dirección deseada del movimiento. Para simplificar, dibujamos un pequeño círculo en lugar de una punta de flecha.
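+
+Un esbozo mínimo, reutilizando el mismo método `plot` que usamos antes:
+
+```python
+# Draw the board with the learned Q-Table overlaid
+m.plot(Q)
+```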
+
+## Verificando la política
+
+Dado que la Q-Table enumera el "atractivo" de cada acción en cada estado, es bastante fácil usarla para definir la navegación eficiente en nuestro mundo. En el caso más simple, podemos seleccionar la acción correspondiente al valor más alto de la Q-Table: (bloque de código 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> Si intentas el código anterior varias veces, puedes notar que a veces se "cuelga", y necesitas presionar el botón de DETENER en el notebook para interrumpirlo. Esto sucede porque podría haber situaciones en las que dos estados "señalen" entre sí en términos de Q-Value óptimo, en cuyo caso el agente termina moviéndose entre esos estados indefinidamente.
+
+## 🚀Desafío
+
+> **Tarea 1:** Modifica la función `walk` para limitar la longitud máxima del camino a un cierto número de pasos (por ejemplo, 100), y observa cómo el código anterior devuelve ese valor de vez en cuando.
+
+> **Tarea 2:** Modifica la función `walk` para que no vuelva a los lugares donde ya ha estado antes. Esto evitará que `walk` entre en bucle; sin embargo, el agente todavía puede terminar "atrapado" en una ubicación de la que no pueda escapar.
+
+## Navegación
+
+Una mejor política de navegación sería la que usamos durante el entrenamiento, que combina explotación y exploración. En esta política, seleccionaremos cada acción con cierta probabilidad, proporcional a los valores de la Q-Table. Esta estrategia aún puede hacer que el agente regrese a una posición que ya ha explorado pero, como puedes ver en el código de abajo, resulta en un camino promedio muy corto hacia la ubicación deseada (recuerda que `print_statistics` ejecuta la simulación 100 veces): (bloque de código 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+Después de ejecutar este código, deberías obtener una longitud promedio de camino mucho menor que antes, en el rango de 3-6.
+
+## Investigando el proceso de aprendizaje
+
+Como hemos mencionado, el proceso de aprendizaje es un equilibrio entre la exploración y la explotación del conocimiento adquirido sobre la estructura del espacio del problema. Hemos visto que los resultados del aprendizaje (la capacidad de ayudar a un agente a encontrar un camino corto hacia el objetivo) han mejorado, pero también es interesante observar cómo se comporta la longitud promedio del camino durante el proceso de aprendizaje:
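+
+Un esbozo para graficarla, suponiendo que recopilaste la lista `lpath` en el bucle de entrenamiento anterior:
+
+```python
+import matplotlib.pyplot as plt
+
+# Path length recorded for each training episode
+plt.plot(lpath)
+plt.xlabel('época')
+plt.ylabel('longitud del camino')
+plt.show()
+```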
+
+Las conclusiones se pueden resumir como:
+
+- **La longitud promedio del camino aumenta**. Lo que vemos aquí es que al principio, la longitud promedio del camino aumenta. Esto probablemente se deba al hecho de que cuando no sabemos nada sobre el entorno, es probable que quedemos atrapados en estados malos, agua o lobo. A medida que aprendemos más y comenzamos a usar este conocimiento, podemos explorar el entorno por más tiempo, pero aún no sabemos muy bien dónde están las manzanas.
+
+- **La longitud del camino disminuye, a medida que aprendemos más**. Una vez que aprendemos lo suficiente, se vuelve más fácil para el agente lograr el objetivo, y la longitud del camino comienza a disminuir. Sin embargo, todavía estamos abiertos a la exploración, por lo que a menudo nos desviamos del mejor camino y exploramos nuevas opciones, haciendo que el camino sea más largo de lo óptimo.
+
+- **La longitud aumenta abruptamente**. Lo que también observamos en este gráfico es que en algún momento, la longitud aumentó abruptamente. Esto indica la naturaleza estocástica del proceso, y que en algún momento podemos "estropear" los coeficientes de la Q-Table sobrescribiéndolos con nuevos valores. Esto idealmente debería minimizarse disminuyendo la tasa de aprendizaje (por ejemplo, hacia el final del entrenamiento, solo ajustamos los valores de la Q-Table por un pequeño valor).
+
+En general, es importante recordar que el éxito y la calidad del proceso de aprendizaje dependen significativamente de los parámetros, como la tasa de aprendizaje, la disminución de la tasa de aprendizaje y el factor de descuento. Estos a menudo se llaman **hiperparámetros**, para distinguirlos de **parámetros**, que optimizamos durante el entrenamiento (por ejemplo, coeficientes de la Q-Table). El proceso de encontrar los mejores valores de hiperparámetros se llama **optimización de hiperparámetros**, y merece un tema aparte.
+
+## [Cuestionario después de la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## Asignación
+[Un Mundo Más Realista](assignment.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/1-QLearning/assignment.md b/translations/es/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..67cdc23ff
--- /dev/null
+++ b/translations/es/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# Un Mundo Más Realista
+
+En nuestra situación, Peter podía moverse casi sin cansarse ni tener hambre. En un mundo más realista, tiene que sentarse y descansar de vez en cuando, y también alimentarse. Hagamos nuestro mundo más realista implementando las siguientes reglas:
+
+1. Al moverse de un lugar a otro, Peter pierde **energía** y gana algo de **fatiga**.
+2. Peter puede ganar más energía comiendo manzanas.
+3. Peter puede deshacerse de la fatiga descansando bajo el árbol o en el césped (es decir, caminando hacia una ubicación en el tablero con un árbol o césped - campo verde).
+4. Peter necesita encontrar y matar al lobo.
+5. Para matar al lobo, Peter necesita tener ciertos niveles de energía y fatiga, de lo contrario pierde la batalla.
+
+## Instrucciones
+
+Usa el cuaderno original [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) como punto de partida para tu solución.
+
+Modifica la función de recompensa anterior de acuerdo con las reglas del juego, ejecuta el algoritmo de aprendizaje por refuerzo para aprender la mejor estrategia para ganar el juego y compara los resultados del paseo aleatorio con tu algoritmo en términos de número de juegos ganados y perdidos.
+
+> **Nota**: En tu nuevo mundo, el estado es más complejo y, además de la posición del humano, también incluye los niveles de fatiga y energía. Puedes elegir representar el estado como una tupla (Board, energy, fatigue), o definir una clase para el estado (también puedes derivarla de `Board`), o incluso modificar la clase original `Board` dentro de [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py).
+
+En tu solución, por favor mantén el código responsable de la estrategia de paseo aleatorio y compara los resultados de tu algoritmo con el paseo aleatorio al final.
+
+> **Nota**: Es posible que necesites ajustar los hiperparámetros para que funcione, especialmente el número de épocas. Debido a que el éxito del juego (luchar contra el lobo) es un evento raro, puedes esperar un tiempo de entrenamiento mucho más largo.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| | Se presenta un cuaderno con la definición de las nuevas reglas del mundo, el algoritmo de Q-Learning y algunas explicaciones textuales. Q-Learning es capaz de mejorar significativamente los resultados en comparación con el paseo aleatorio. | Se presenta un cuaderno, se implementa Q-Learning y mejora los resultados en comparación con el paseo aleatorio, pero no significativamente; o el cuaderno está mal documentado y el código no está bien estructurado. | Se hace algún intento de redefinir las reglas del mundo, pero el algoritmo de Q-Learning no funciona, o la función de recompensa no está completamente definida. |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/es/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..d42dd8762
--- /dev/null
+++ b/translations/es/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional realizada por humanos. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/es/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..2920cc9c5
--- /dev/null
+++ b/translations/es/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/2-Gym/README.md b/translations/es/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..3b3c4ad01
--- /dev/null
+++ b/translations/es/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,342 @@
+## Patinaje con CartPole
+
+El problema que hemos estado resolviendo en la lección anterior podría parecer un problema de juguete, no realmente aplicable a escenarios de la vida real. Este no es el caso, porque muchos problemas del mundo real también comparten este escenario, incluyendo jugar al Ajedrez o al Go. Son similares, porque también tenemos un tablero con reglas dadas y un **estado discreto**.
+
+## [Cuestionario previo a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## Introducción
+
+En esta lección aplicaremos los mismos principios de Q-Learning a un problema con **estado continuo**, es decir, un estado que se da por uno o más números reales. Abordaremos el siguiente problema:
+
+> **Problema**: Si Peter quiere escapar del lobo, necesita poder moverse más rápido. Veremos cómo Peter puede aprender a patinar, en particular, a mantener el equilibrio, utilizando Q-Learning.
+
+
+
+> ¡Peter y sus amigos se ponen creativos para escapar del lobo! Imagen de [Jen Looper](https://twitter.com/jenlooper)
+
+Usaremos una versión simplificada del equilibrio conocida como el problema **CartPole**. En el mundo de CartPole, tenemos un deslizador horizontal que puede moverse hacia la izquierda o hacia la derecha, y el objetivo es equilibrar un poste vertical sobre el deslizador.
+
+## Requisitos previos
+
+En esta lección, utilizaremos una biblioteca llamada **OpenAI Gym** para simular diferentes **entornos**. Puedes ejecutar el código de esta lección localmente (por ejemplo, desde Visual Studio Code), en cuyo caso la simulación se abrirá en una nueva ventana. Al ejecutar el código en línea, es posible que necesites hacer algunos ajustes en el código, como se describe [aquí](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
+
+## OpenAI Gym
+
+En la lección anterior, las reglas del juego y el estado fueron dados por la clase `Board` que definimos nosotros mismos. Aquí utilizaremos un **entorno de simulación** especial, que simulará la física detrás del poste en equilibrio. Uno de los entornos de simulación más populares para entrenar algoritmos de aprendizaje por refuerzo se llama [Gym](https://gym.openai.com/), que es mantenido por [OpenAI](https://openai.com/). Usando este gym, podemos crear diferentes **entornos** desde una simulación de CartPole hasta juegos de Atari.
+
+> **Nota**: Puedes ver otros entornos disponibles en OpenAI Gym [aquí](https://gym.openai.com/envs/#classic_control).
+
+Primero, instalemos el gym e importemos las bibliotecas requeridas (bloque de código 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## Exercise - initialize a CartPole environment
+
+To work with the CartPole balancing problem, we need to initialize the corresponding environment. Each environment is associated with:
+
+- An **observation space** that defines the structure of the information we receive from the environment. For the CartPole problem, we receive the position of the pole, velocity and some other values.
+
+- An **action space** that defines the possible actions. In our case the action space is discrete, and consists of two actions - **left** and **right**. (code block 2)
+
+1. To initialize, type the following code:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`.
+
+1. Run the code below and see what it leads to.
+
+    ✅ Remember that it is preferable to run this code on a local Python installation. (code block 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+    You should be seeing something similar to this image:
+
+ 
+
+1. During the simulation, we need to get observations in order to decide how to act. In fact, the step function returns the current observations, a reward function, and a done flag that indicates whether it makes sense to continue the simulation or not: (code block 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+    You will end up seeing something like this in the notebook output:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+    The observation vector that is returned at each step of the simulation contains the following values:
+    - Position of the cart
+    - Velocity of the cart
+    - Angle of the pole
+    - Rotation rate of the pole
+
+1. Get the min and max values of those numbers: (code block 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+    You may also notice that the reward value at each simulation step is always 1. This is because our goal is to survive as long as possible, i.e. keep the pole in a reasonably vertical position for the longest period of time.
+
+    ✅ In fact, the CartPole simulation is considered solved if we manage to get an average reward of 195 over 100 consecutive trials.
+
+## State discretization
+
+In Q-Learning, we need to build a Q-Table that defines what to do at each state. To be able to do this, the state needs to be **discrete**; more precisely, it should contain a finite number of discrete values. Thus, we need somehow to **discretize** our observations, mapping them to a finite set of states.
+
+There are a few ways we can do this:
+
+- **Divide into bins**. If we know the interval of a certain value, we can divide this interval into a number of **bins**, and then replace the value by the number of the bin that it belongs to. This can be done using the numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) method. In this case, we will know the state size precisely, because it will depend on the number of bins we select for digitalization.
+
+✅ We can use linear interpolation to bring values to some finite interval (say, from -20 to 20), and then convert the numbers to integers by rounding them. This gives us a bit less control over the size of the state, especially if we do not know the exact ranges of the input values. For example, in our case 2 of the 4 values do not have upper/lower bounds, which may result in an infinite number of states.
+
+In our example, we will go with the second approach. As you may notice later, despite the undefined upper/lower bounds, those values rarely take values outside of certain finite intervals, so those states with extreme values will be very rare.
+
+1. Here is the function that will take the observation from our model and produce a tuple of 4 integer values: (code block 6)
+
+ ```python
+ def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. Let's also explore another discretization method using bins: (code block 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. Let's now run a short simulation and observe those discrete environment values. Feel free to try both `discretize` and `discretize_bins` and see if there is a difference.
+
+    ✅ discretize_bins returns the bin number, which is 0-based. Thus for values of the input variable around 0 it returns the number from the middle of the interval (10). In discretize, we did not care about the range of output values, allowing them to be negative, so the state values are not shifted, and 0 corresponds to 0. (code block 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+    ✅ Uncomment the line starting with env.render if you want to see how the environment executes. Otherwise you can execute it in the background, which is faster. We will use this "invisible" execution during our Q-Learning process.
+
+## The Q-Table structure
+
+In our previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent the Q-Table by a numpy tensor with a shape of 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent the state by an array of shape 20x20x10x10x2 (here 2 is the dimension of the action space, and the first dimensions correspond to the number of bins we have selected to use for each of the parameters in the observation space).
+
+However, sometimes the precise dimensions of the observation space are not known. In the case of the `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bounded. Thus, we will use a slightly different approach and represent the Q-Table by a dictionary.
+
+1. Use the pair *(state, action)* as the dictionary key, and the value would correspond to the Q-Table entry value. (code block 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+    Here we also define a function `qvalues()`, which returns a list of Q-Table values for a given state that corresponds to all possible actions. If the entry is not present in the Q-Table, we will return 0 as the default.
+
+## Let's start Q-Learning
+
+Now we are ready to teach Peter to balance!
+
+1. First, let's set some hyperparameters: (code block 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+    Here, `alpha` is the **learning rate** that defines to which extent we should adjust the current values of the Q-Table at each step. In the previous lesson we started with 1, and then decreased `alpha` to lower values during training. In this example we will keep it constant just for simplicity, and you can experiment with adjusting `alpha` values later.
+
+    `gamma` is the **discount factor** that shows to which extent we should prioritize future reward over current reward.
+
+    `epsilon` is the **exploration/exploitation factor** that determines whether we should prefer exploration to exploitation or vice versa. In our algorithm, we will in `epsilon` percent of the cases select the next action according to Q-Table values, and in the remaining number of cases we will execute a random action. This will allow us to explore areas of the search space that we have never seen before.
+
+    ✅ In terms of balancing - choosing a random action (exploration) would act as a random punch in the wrong direction, and the pole would have to learn how to recover its balance from those "mistakes".
+
+### Improve the algorithm
+
+We can also make two improvements to our algorithm from the previous lesson:
+
+- **Calculate average cumulative reward** over a number of simulations. We will print the progress every 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 points, we can consider the problem solved, with even higher quality than required.
+
+- **Calculate the maximum average cumulative result**, `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of the Q-Table that correspond to the best model observed during training.
+
+1. Collect all cumulative rewards at each simulation in the `rewards` vector for further plotting. (code block 11)
+
+    ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+
+    Qmax = 0
+    cum_rewards = []
+    rewards = []
+    for epoch in range(100000):
+        obs = env.reset()
+        done = False
+        cum_reward=0
+        # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - choose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - choose a random action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            # Bellman update for the (state, action) pair we just took
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == periodically print results and compute average reward ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q.copy()   # keep a snapshot (copy, not alias) of the best Q-Table
+            cum_rewards=[]
+    ```
+
+What you may notice from those results:
+
+- **Close to our goal**. We are very close to achieving the goal of getting 195 cumulative rewards over 100+ consecutive runs of the simulation, or we may have actually achieved it! Even if we get smaller numbers, we still do not know, because we average over 5000 runs, and only 100 runs are required by the formal criteria.
+
+- **Reward starts to drop**. Sometimes the reward starts to drop, which means that we can "destroy" already learnt values in the Q-Table with ones that make the situation worse.
+
+This observation is more clearly visible if we plot the training progress.
+
+## Plotting training progress
+
+During training, we have collected the cumulative reward value at each of the iterations into the `rewards` vector. Here is how it looks when we plot it against the iteration number:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+From this graph, it is not possible to tell anything, because due to the nature of the stochastic training process the length of training sessions varies greatly. To make more sense of this graph, we can calculate the **running average** over a series of experiments, let's say 100. This can be done conveniently using `np.convolve`: (code block 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## Varying hyperparameters
+
+To make learning more stable, it makes sense to adjust some of our hyperparameters during training. In particular:
+
+- **For the learning rate**, `alpha`, we may start with values close to 1, and then keep decreasing the parameter. With time, we will be getting good probability values in the Q-Table, and thus we should be adjusting them slightly, and not overwriting them completely with new values.
+
+- **Increase epsilon**. We may want to increase `epsilon` slowly, in order to explore less and exploit more. It probably makes sense to start with a lower value of `epsilon`, and move up to almost 1. A minimal scheduling sketch is shown below.
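+
+As an illustration only (the decay factors and bounds below are assumptions to experiment with, not values from the notebook), such a schedule could be wired into the training loop from code block 11:
+
+```python
+# Illustrative hyperparameter schedule - tune the constants experimentally.
+alpha = 1.0     # start with a high learning rate...
+epsilon = 0.3   # ...and a mostly-exploring policy (epsilon = probability of exploiting)
+
+for epoch in range(100000):
+    # ... run one simulation episode as in code block 11 ...
+    alpha = max(0.05, alpha * 0.9999)       # slowly decrease the learning rate
+    epsilon = min(0.99, epsilon * 1.0001)   # slowly shift from exploration to exploitation
+```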
+
+> **Task 1**: Play with the hyperparameter values and see if you can achieve a higher cumulative reward. Are you getting above 195?
+
+> **Task 2**: To formally solve the problem, you need to get an average reward of 195 across 100 consecutive runs. Measure that during training and make sure that you have formally solved the problem!
+
+## Seeing the result in action
+
+It would be interesting to see how the trained model behaves. Let's run the simulation and follow the same action selection strategy as during training, sampling according to the probability distribution in the Q-Table: (code block 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+You should see something like this:
+
+
+
+---
+
+## 🚀Challenge
+
+> **Task 3**: Here, we were using the final copy of the Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table in the `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.
+
+> **Task 4**: Here we were not selecting the best action on each step, but rather sampling with the corresponding probability distribution. Would it make more sense to always select the best action, with the highest Q-Table value? This can be done by using `np.argmax` to find the action number corresponding to the highest Q-Table value. Implement this strategy and see if it improves the balancing. A possible starting point is sketched below.
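+
+As a hedged starting point for Task 4 (one possible implementation, not the lesson's official solution), the sampling lines in code block 13 could be replaced with a greedy choice:
+
+```python
+# Greedy policy: always take the action with the highest Q-value.
+# qvalues(s) is the helper defined in code block 9.
+a = int(np.argmax(qvalues(s)))
+```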
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## Assignment
+[Train a Mountain Car](assignment.md)
+
+## Conclusion
+
+We have now learned how to train agents to achieve good results just by providing them with a reward function that defines the desired state of the game, and by giving them an opportunity to intelligently explore the search space. We have successfully applied the Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions.
+
+It's important to also study situations where the action space is also continuous, and when the observation space is much more complex, such as the image from the Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of our forthcoming, more advanced AI course.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/2-Gym/assignment.md b/translations/es/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..e240f5c4b
--- /dev/null
+++ b/translations/es/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# Train Mountain Car
+
+[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
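+
+As a small illustration of that shared API (using the same old-style Gym calls as the lesson notebook), a random-agent loop runs unchanged on either environment:
+
+```python
+import gym
+
+# Swap the environment id - the rest of the loop stays the same.
+env = gym.make("MountainCar-v0")   # or "CartPole-v1"
+obs = env.reset()
+done = False
+while not done:
+    obs, rew, done, info = env.step(env.action_space.sample())
+env.close()
+```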
+
+## A Mountain Car Environment
+
+The [Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley.
+The goal is to get out of the valley and capture the flag, by taking at each step one of the following actions:
+
+| Value | Meaning |
+|---|---|
+| 0 | Accelerate to the left |
+| 1 | Do not accelerate |
+| 2 | Accelerate to the right |
+
+The main trick of this problem is, however, that the car's engine is not strong enough to climb the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum.
+
+The observation space consists of just two values:
+
+| Num | Observation | Min | Max |
+|-----|-------------|-------|------|
+| 0 | Car position | -1.2 | 0.6 |
+| 1 | Car velocity | -0.07 | 0.07 |
+
+The reward system for the mountain car is rather tricky:
+
+ * A reward of 0 is awarded if the agent reaches the flag (position = 0.5) on top of the mountain.
+ * A reward of -1 is awarded if the position of the agent is less than 0.5.
+
+The episode terminates if the car position is more than 0.5, or if the episode length is greater than 200.
+
+## Instructions
+
+Adapt our reinforcement learning algorithm to solve the mountain car problem. Start with the existing [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) code, substitute the new environment, change the state discretization functions, and try to make the existing algorithm train with minimal code modifications. Optimize the result by adjusting the hyperparameters.
+
+> **Note**: Hyperparameter adjustment is likely to be needed to make the algorithm converge. One possible adaptation of the discretization is sketched below.
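+
+As an assumption about one possible adaptation (a sketch, not the official solution), the 4-value CartPole `discretize` function could be swapped for a 2-value version that scales position and velocity:
+
+```python
+import numpy as np
+
+# Illustrative scaling factors for the two MountainCar observations
+# (position in [-1.2, 0.6], velocity in [-0.07, 0.07]); tune as needed.
+def discretize(x):
+    return tuple((x / np.array([0.1, 0.01])).astype(int))
+```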
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | The Q-Learning algorithm is successfully adapted from the CartPole example, with minimal code modifications, and is able to solve the problem of capturing the flag in under 200 steps. | A new Q-Learning algorithm has been adopted from the Internet, but is well documented; or the existing algorithm has been adopted, but does not reach the desired results. | The student was not able to successfully adopt any algorithm, but has made substantial steps towards the solution (implemented state discretization, Q-Table data structure, etc.) |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/es/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..c00997438
--- /dev/null
+++ b/translations/es/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/2-Gym/solution/R/README.md b/translations/es/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..f2cb375d3
--- /dev/null
+++ b/translations/es/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/8-Reinforcement/README.md b/translations/es/8-Reinforcement/README.md
new file mode 100644
index 000000000..b0b4ed220
--- /dev/null
+++ b/translations/es/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# Introduction to reinforcement learning
+
+Reinforcement learning, RL, is seen as one of the basic machine learning paradigms, next to supervised learning and unsupervised learning. RL is all about decisions: delivering the right decisions, or at least learning from them.
+
+Imagine you have a simulated environment like the stock market. What happens if you impose a given regulation? Does it have a positive or negative effect? If something negative happens, you need to take this _negative reinforcement_, learn from it, and change course. If it is a positive outcome, you need to build on that _positive reinforcement_.
+
+
+
+> Peter and his friends need to escape the hungry wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+## Regional topic: Peter and the Wolf (Russia)
+
+[Peter and the Wolf](https://es.wikipedia.org/wiki/Pedro_y_el_lobo) is a musical fairy tale written by the Russian composer [Sergei Prokofiev](https://es.wikipedia.org/wiki/Serguéi_Prokófiev). It is a story about the young pioneer Peter, who bravely goes out of his house to the forest clearing to chase the wolf. In this section, we will train machine learning algorithms that will help Peter:
+
+- **Explore** the surrounding area and build an optimal navigation map.
+- **Learn** how to use a skateboard and balance on it, in order to move around faster.
+
+[Peter and the Wolf](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 Click the link above to listen to Peter and the Wolf by Prokofiev
+
+## Reinforcement learning
+
+In previous sections, you have seen two examples of machine learning problems:
+
+- **Supervised**, where we have datasets that suggest sample solutions to the problem we want to solve. [Classification](../4-Classification/README.md) and [regression](../2-Regression/README.md) are supervised learning tasks.
+- **Unsupervised**, in which we do not have labeled training data. The main example of unsupervised learning is [Clustering](../5-Clustering/README.md).
+
+In this section, we will introduce you to a new type of learning problem that does not require labeled training data. There are several types of such problems:
+
+- **[Semi-supervised learning](https://es.wikipedia.org/wiki/Aprendizaje_semi-supervisado)**, where we have a lot of unlabeled data that can be used to pre-train the model.
+- **[Reinforcement learning](https://es.wikipedia.org/wiki/Aprendizaje_por_refuerzo)**, in which an agent learns how to behave by performing experiments in some simulated environment.
+
+### Example - computer game
+
+Suppose you want to teach a computer to play a game, such as chess, or [Super Mario](https://es.wikipedia.org/wiki/Super_Mario). For the computer to play a game, we need it to predict which move to make in each of the game states. While this may seem like a classification problem, it is not - because we do not have a dataset with states and corresponding actions. While we may have some data, like existing chess matches or recordings of players playing Super Mario, it is likely that that data will not sufficiently cover a large enough number of possible states.
+
+Instead of looking for existing game data, **Reinforcement Learning** (RL) is based on the idea of *making the computer play* many times and observing the result. Thus, to apply Reinforcement Learning, we need two things:
+
+- **An environment** and **a simulator** that allow us to play a game many times. This simulator would define all the game rules as well as the possible states and actions.
+
+- **A reward function**, which would tell us how well we did during each move or game.
+
+The main difference between other types of machine learning and RL is that in RL we typically do not know whether we win or lose until we finish the game. Thus, we cannot say whether a certain move alone is good or not - we only receive a reward at the end of the game. And our goal is to design algorithms that will allow us to train a model under uncertain conditions. We will learn about one RL algorithm called **Q-learning**.
+
+## Lessons
+
+1. [Introduction to reinforcement learning and Q-Learning](1-QLearning/README.md)
+2. [Using a gym simulation environment](2-Gym/README.md)
+
+## Credits
+
+"Introduction to Reinforcement Learning" was written with ♥️ by [Dmitry Soshnikov](http://soshnikov.com)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/9-Real-World/1-Applications/README.md b/translations/es/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..853cf7b92
--- /dev/null
+++ b/translations/es/9-Real-World/1-Applications/README.md
@@ -0,0 +1,150 @@
+# Postscript: Machine learning in the real world
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this curriculum, you have learned many ways to prepare data for training and create machine learning models. You built a series of classic regression, clustering, classification, natural language processing, and time series models. Congratulations! Now, you might be wondering what it's all for... what are the real-world applications for these models?
+
+While a lot of interest in industry has been garnered by AI, which usually leverages deep learning, there are still valuable applications for classical machine learning models. You might even use some of these applications today! In this lesson, you'll explore how eight different industries and subject-matter domains use these types of models to make their applications more performant, reliable, intelligent, and valuable to users.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 Finance
+
+The finance sector offers many opportunities for machine learning. Many problems in this area lend themselves to being modeled and solved using ML.
+
+### Credit card fraud detection
+
+We learned about [k-means clustering](../../5-Clustering/2-K-Means/README.md) earlier in the course, but how can it be used to solve problems related to credit card fraud?
+
+K-means clustering comes in handy during a credit card fraud detection technique called **outlier detection**. Outliers, or deviations in observations about a set of data, can tell us if a credit card is being used in a normal capacity or if something unusual is going on. As shown in the paper linked below, you can sort credit card data using a k-means clustering algorithm and assign each transaction to a cluster based on how much of an outlier it appears to be. Then, you can evaluate the riskiest clusters for fraudulent versus legitimate transactions. A minimal sketch of this approach follows.
+[Reference](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
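+
+As a minimal illustration of this idea (a sketch assuming a generic, already-scaled numeric transaction matrix, not the paper's actual pipeline), transactions can be clustered with scikit-learn and the points farthest from their cluster centroid flagged for review:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# Hypothetical transaction features, e.g. [amount, hour_of_day] after scaling.
+rng = np.random.default_rng(0)
+X = rng.random((1000, 2))
+
+kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
+
+# Distance from each transaction to its assigned cluster centroid.
+dists = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)
+
+# Flag the 1% most distant transactions as candidate outliers for review.
+suspicious = np.where(dists > np.quantile(dists, 0.99))[0]
+print(f"{len(suspicious)} transactions flagged for review")
+```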
+
+### Wealth management
+
+In wealth management, an individual or firm handles investments on behalf of their clients. Their job is to sustain and grow wealth in the long term, so it is essential to choose investments that perform well.
+
+One way to evaluate how a particular investment performs is through statistical regression. [Linear regression](../../2-Regression/1-Tools/README.md) is a valuable tool for understanding how a fund performs relative to some benchmark. We can also deduce whether or not the results of the regression are statistically significant, or how much they would affect a client's investments. You could even further expand your analysis using multiple regression, where additional risk factors can be taken into account. For an example of how this would work for a specific fund, check out the paper below on evaluating fund performance using regression.
+[Reference](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 Education
+
+The education sector is also a very interesting area where ML can be applied. There are interesting problems to be tackled, such as detecting cheating on tests or essays, or managing bias, unintentional or not, in the grading process.
+
+### Predicting student behavior
+
+[Coursera](https://coursera.com), an online open course provider, has a great tech blog where they discuss many engineering decisions. In this case study, they plotted a regression line to try to explore any correlation between a low NPS (Net Promoter Score) rating and course retention or drop-off.
+[Reference](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### Mitigating bias
+
+[Grammarly](https://grammarly.com), a writing assistant that checks for spelling and grammar errors, uses sophisticated [natural language processing systems](../../6-NLP/README.md) throughout its products. They published an interesting case study in their tech blog about how they dealt with gender bias in machine learning, which you learned about in our [introductory fairness lesson](../../1-Introduction/3-fairness/README.md).
+[Reference](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 Retail
+
+The retail sector can definitely benefit from the use of ML, with everything from creating a better customer journey to stocking inventory in an optimal way.
+
+### Personalizing the customer journey
+
+At Wayfair, a company that sells home goods like furniture, helping customers find the right products for their taste and needs is paramount. In this article, engineers from the company describe how they use ML and NLP to "surface the right results for customers". Notably, their Query Intent Engine has been built to use entity extraction, classifier training, asset and opinion extraction, and sentiment tagging on customer reviews. This is a classic use case of how NLP works in online retail.
+[Reference](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### Inventory management
+
+Innovative, nimble companies like [StitchFix](https://stitchfix.com), a box service that ships clothing to consumers, rely heavily on ML for recommendations and inventory management. Their styling teams work together with their merchandising teams, in fact: "one of our data scientists tinkered with a genetic algorithm and applied it to apparel to predict what would be a successful piece of clothing that doesn't exist today. We brought that to the merchandise team and now they can use that as a tool."
+[Reference](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 Health Care
+
+The health care sector can leverage ML to optimize research tasks and also logistic problems like readmitting patients or stopping diseases from spreading.
+
+### Managing clinical trials
+
+Toxicity in clinical trials is a major concern to drug makers. How much toxicity is tolerable? In this study, analyzing various clinical trial methods led to the development of a new approach for predicting the odds of clinical trial outcomes. Specifically, they were able to use random forest to produce a [classifier](../../4-Classification/README.md) that is able to distinguish between groups of drugs.
+[Reference](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### Hospital readmission management
+
+Hospital care is costly, especially when patients have to be readmitted. This paper discusses a company that uses ML to predict readmission potential using [clustering](../../5-Clustering/README.md) algorithms. These clusters help analysts to "discover groups of readmissions that may share a common cause".
+[Reference](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### Disease management
+
+The recent pandemic has shone a bright light on the ways that machine learning can aid in stopping the spread of disease. In this article, you'll recognize the use of ARIMA, logistic curves, linear regression, and SARIMA. "This work is an attempt to calculate the rate of spread of this virus and thus to predict the deaths, recoveries, and confirmed cases, so that it may help us to prepare better and survive."
+[Reference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 Ecology and Green Tech
+
+Nature and ecology consist of many sensitive systems where the interplay between animals and nature comes into focus. It's important to be able to measure these systems accurately and act appropriately if something happens, like a forest fire or a drop in the animal population.
+
+### Forest management
+
+You learned about [Reinforcement Learning](../../8-Reinforcement/README.md) in previous lessons. It can be very useful when trying to predict patterns in nature. In particular, it can be used to track ecological problems like forest fires and the spread of invasive species. In Canada, a group of researchers used Reinforcement Learning to build forest wildfire dynamics models from satellite images. Using an innovative "spatially spreading process (SSP)", they envisioned a forest fire as "the agent at any cell in the landscape". "The set of actions the fire can take from a location at any point in time includes spreading north, south, east, or west or not spreading.
+
+This approach inverts the usual RL setup, since the dynamics of the corresponding Markov Decision Process (MDP) is a known function for immediate wildfire spread." Read more about the classic algorithms used by this group at the link below.
+[Reference](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### Motion sensing of animals
+
+While deep learning has created a revolution in visually tracking animal movements (you can build your own [polar bear tracker](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) here), classic ML still has a place in this task.
+
+Sensors to track the movements of farm animals and IoT make use of this type of visual processing, but more basic ML techniques are useful to preprocess data. For example, in this paper, sheep postures were monitored and analyzed using various classifier algorithms. You might recognize the ROC curve on page 335.
+[Reference](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ Energy Management
+
+In our lessons on [time series forecasting](../../7-TimeSeries/README.md), we invoked the concept of smart parking meters to generate revenue for a town based on understanding supply and demand. This article discusses in detail how clustering, regression, and time series forecasting combined helped predict future energy use in Ireland, based on smart metering.
+[Reference](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 Insurance
+
+The insurance sector is another sector that uses ML to construct and optimize viable financial and actuarial models.
+
+### Volatility Management
+
+MetLife, a life insurance provider, is forthcoming with the way they analyze and mitigate volatility in their financial models. In this article you will notice binary and ordinal classification visualizations. You will also discover forecasting visualizations.
+[Reference](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 Arts, Culture, and Literature
+
+In the arts, for example in journalism, there are many interesting problems. Detecting fake news is a huge problem, as it has been proven to influence people's opinions and even to topple democracies. Museums can also benefit from using ML in everything from finding links between artifacts to resource planning.
+
+### Fake news detection
+
+Detecting fake news has become a game of cat and mouse in today's media. In this article, researchers suggest that a system combining several of the ML techniques we have studied can be tested and the best model deployed: "This system is based on natural language processing to extract features from the data and then these features are used for the training of machine learning classifiers such as Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), and Logistic Regression (LR)."
+[Reference](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
+This article shows how combining different ML domains can produce interesting results that can help stop fake news from spreading and creating real damage; in this case, the impetus was the spread of rumors about COVID treatments that incited mob violence.
+
+### Museum ML
+
+Museums are at the cusp of an AI revolution in which cataloging and digitizing collections and finding links between artifacts is becoming easier as technology advances. Projects such as [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) are helping unlock the mysteries of inaccessible collections such as the Vatican Archives. But the business aspect of museums benefits from ML models as well.
+
+For example, the Art Institute of Chicago built models to predict what audiences are interested in and when they will attend expositions. The goal is to create individualized and optimized visitor experiences each time the user visits the museum. "During fiscal 2017, the model predicted attendance and admissions within 1 percent of accuracy, says Andrew Simnick, senior vice president at the Art Institute."
+
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 Marketing
+
+### Customer segmentation
+
+The most effective marketing strategies target customers in different ways based on various groupings. In this article, the uses of Clustering algorithms are discussed to support differentiated marketing. Differentiated marketing helps companies improve brand recognition, reach more customers, and make more money.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 Challenge
+
+Identify another sector that benefits from some of the techniques you learned in this curriculum, and discover how it uses ML.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## Review & Self Study
+
+The Wayfair data science team has several interesting videos on how they use ML at their company. It's worth [taking a look](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## Assignment
+
+[A ML scavenger hunt](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/9-Real-World/1-Applications/assignment.md b/translations/es/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..00f9beacc
--- /dev/null
+++ b/translations/es/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# A ML Scavenger Hunt
+
+## Instructions
+
+In this lesson, you learned about many real-life use cases that were solved using classical ML. While the use of deep learning, new techniques and tools in AI, and leveraging neural networks has helped speed up the production of tools to help these sectors, classic ML using the techniques in this curriculum still holds great value.
+
+In this assignment, imagine that you are participating in a hackathon. Use what you learned in the curriculum to propose a solution using classic ML to solve a problem in one of the sectors discussed in this lesson. Create a presentation where you discuss how you will implement your idea. Bonus points if you can gather sample data and build an ML model to support your concept!
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| --------- | -------------------------------------------------------------------- | -------------------------------------------------- | ---------------------- |
+| | A PowerPoint presentation is presented - with a bonus for building a model | A non-innovative, basic presentation is presented | The work is incomplete |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/es/9-Real-World/2-Debugging-ML-Models/README.md b/translations/es/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..0432b2605
--- /dev/null
+++ b/translations/es/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,158 @@
+# Postscript: Model Debugging in Machine Learning using Responsible AI dashboard components
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+Machine learning impacts our everyday lives. AI is finding its way into some of the most important systems that affect us as individuals as well as our society, from healthcare, finance, education, and employment. For instance, systems and models are involved in daily decision-making tasks, such as health care diagnoses or detecting fraud. Consequently, the advancements in AI along with the accelerated adoption are being met with evolving societal expectations and growing regulation in response. We constantly see areas where AI systems continue to miss expectations; they expose new challenges; and governments are starting to regulate AI solutions. So, it is important that these models are analyzed to provide fair, reliable, inclusive, transparent, and accountable outcomes for everyone.
+
+In this curriculum, we will look at practical tools that can be used to assess whether a model has responsible AI issues. Traditional machine learning debugging techniques tend to be based on quantitative calculations such as aggregated accuracy or average error loss. Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion, or disproportionally represents such demographics. What about when the model's output is interpreted to favor some demographic? This can introduce an over- or under-representation of these sensitive feature groups, resulting in fairness, inclusiveness, or reliability issues from the model. Another factor is that machine learning models are considered black boxes, which makes it hard to understand and explain what drives a model's prediction. All of these are challenges data scientists and AI developers face when they do not have adequate tools to debug and assess a model's fairness or trustworthiness.
+
+In this lesson, you will learn about debugging your models using:
+
+- **Error Analysis**: identify where in your data distribution the model has high error rates.
+- **Model Overview**: perform comparative analysis across different data cohorts to discover disparities in your model's performance metrics.
+- **Data Analysis**: investigate where there could be over- or under-representation of your data that can skew your model to favor one data demographic over another.
+- **Feature Importance**: understand which features are driving your model's predictions on a global or local level.
+
+## Prerequisite
+
+As a prerequisite, please review [Responsible AI tools for developers](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+
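+For orientation, here is a minimal sketch of how such a dashboard is typically instantiated with the open-source `responsibleai` and `raiwidgets` packages (variable names are illustrative; consult the package documentation for the exact API):
+
+```python
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# Assumptions: model is a trained sklearn-style estimator; train and test
+# are pandas DataFrames that include the target column "target".
+rai_insights = RAIInsights(model, train, test,
+                           target_column="target",
+                           task_type="classification")
+
+rai_insights.error_analysis.add()   # Error Analysis component
+rai_insights.explainer.add()        # Feature Importance component
+rai_insights.compute()
+
+ResponsibleAIDashboard(rai_insights)
+```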
+
+## Error Analysis
+
+Traditional model performance metrics used for measuring accuracy are mostly calculations based on correct vs incorrect predictions. For example, determining that a model is accurate 89% of the time with an error loss of 0.001 can be considered good performance. However, errors are often not distributed uniformly in your underlying dataset. You may get an 89% model accuracy score but discover that there are different regions of your data for which the model is failing 42% of the time. The consequence of these failure patterns with certain data groups can lead to fairness or reliability issues. It is essential to understand the areas where the model is performing well or not. The data regions where there is a high number of inaccuracies in your model may turn out to be an important data demographic.
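+
+As a tiny illustration of this point (hypothetical data, not dashboard output), aggregate accuracy can hide a badly-served cohort:
+
+```python
+import pandas as pd
+
+# Hypothetical predictions, with a sensitive feature "cohort".
+df = pd.DataFrame({
+    "cohort":  ["A"] * 90 + ["B"] * 10,
+    "correct": [True] * 85 + [False] * 5 + [True] * 4 + [False] * 6,
+})
+
+print("aggregate accuracy:", df["correct"].mean())   # 0.89 overall...
+print(df.groupby("cohort")["correct"].mean())        # ...but A: 0.944, B: 0.40
+```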
+
+
+
+The Error Analysis component on the RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. This is useful in identifying features or areas where there is a high error rate in your dataset. By seeing where most of the model's inaccuracies are coming from, you can start investigating the root cause. You can also create cohorts of data to perform analysis on. These data cohorts help in the debugging process to determine why the model performance is good in one cohort but erroneous in another.
+
+
+
+The visual indicators on the tree map help in locating the problem areas more quickly. For instance, the darker the shade of red on a tree node, the higher the error rate.
+
+The heat map is another visualization functionality that users can use to investigate the error rate using one or two features, in order to find a contributor to the model errors across the entire dataset or cohorts.
+
+
+
+Use error analysis when you need to:
+
+* Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.
+* Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.
+
+## Model Overview
+
+Evaluating the performance of a machine learning model requires getting a holistic understanding of its behavior. This can be achieved by reviewing more than one metric, such as error rate, accuracy, recall, precision, or MAE (Mean Absolute Error), to find disparities among performance metrics. One performance metric may look great, but inaccuracies can be exposed in another metric. In addition, comparing the metrics for disparities across the entire dataset or across cohorts helps shed light on where the model is performing well or not. This is especially important in seeing the model's performance among sensitive vs insensitive features (e.g., patient race, gender, or age) to uncover potential unfairness the model may have. For example, discovering that the model is more erroneous in a cohort that has sensitive features can reveal potential unfairness the model may have.
+
+The Model Overview component of the RAI dashboard helps not just in analyzing the performance metrics of the data representation in a cohort, but it gives users the ability to compare the model's behavior across different cohorts.
+
+
+
+The component's feature-based analysis functionality allows users to narrow down data subgroups within a particular feature to identify anomalies on a granular level. For example, the dashboard has built-in intelligence to automatically generate cohorts for a user-selected feature (e.g., *"time_in_hospital < 3"* or *"time_in_hospital >= 7"*). This enables a user to isolate a particular feature from a larger data group to see if it is a key influencer of the model's erroneous outcomes.
+
+
+
+The Model Overview component supports two classes of disparity metrics:
+
+**Disparity in model performance**: These sets of metrics calculate the disparity (difference) in the values of the selected performance metric across subgroups of data. Here are a few examples:
+
+* Disparity in accuracy rate
+* Disparity in error rate
+* Disparity in precision
+* Disparity in recall
+* Disparity in mean absolute error (MAE)
+
+**Disparity in selection rate**: This metric contains the difference in selection rate (favorable prediction) among subgroups. An example of this is the disparity in loan approval rates. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or the distribution of prediction values (in regression). A small numeric sketch follows.
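+
+To make the definition concrete, here is a small sketch (with illustrative column names) computing the selection rate per group and its disparity for a binary classifier:
+
+```python
+import pandas as pd
+
+# Illustrative data: predicted loan approvals (1 = approved) per applicant group.
+df = pd.DataFrame({
+    "group":      ["A", "A", "A", "B", "B", "B", "B"],
+    "prediction": [ 1,   0,   1,   0,   0,   1,   0 ],
+})
+
+# Selection rate = fraction of points predicted as the favorable class (1).
+selection_rates = df.groupby("group")["prediction"].mean()
+print(selection_rates)                                    # A: 0.667, B: 0.25
+print("disparity:", selection_rates.max() - selection_rates.min())
+```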
+
+## Análisis de Datos
+
+> "Si torturas los datos lo suficiente, confesarán cualquier cosa" - Ronald Coase
+
+Esta declaración suena extrema, pero es cierto que los datos pueden ser manipulados para apoyar cualquier conclusión. Tal manipulación a veces puede ocurrir de manera no intencional. Como humanos, todos tenemos sesgos, y a menudo es difícil saber conscientemente cuándo estás introduciendo sesgo en los datos. Garantizar la equidad en la IA y el aprendizaje automático sigue siendo un desafío complejo.
+
+Los datos son un gran punto ciego para las métricas tradicionales de rendimiento del modelo. Puedes tener altas puntuaciones de exactitud, pero estas no siempre reflejan el sesgo subyacente de tus datos. Por ejemplo, si en una empresa un conjunto de datos de empleados tiene un 27% de mujeres en puestos ejecutivos y un 73% de hombres en el mismo nivel, un modelo de IA de publicidad de empleo entrenado con estos datos puede dirigirse principalmente a una audiencia masculina para los puestos de alto nivel. Este desequilibrio en los datos sesga la predicción del modelo a favor de un género, lo que revela un problema de equidad: un sesgo de género en el modelo de IA.
+
+El componente de Análisis de Datos del tablero de RAI ayuda a identificar áreas donde hay sobre- o subrepresentación en el conjunto de datos. Ayuda a los usuarios a diagnosticar la causa raíz de errores y problemas de equidad introducidos por desequilibrios de datos o por la falta de representación de un grupo de datos particular. Esto da a los usuarios la capacidad de visualizar conjuntos de datos según resultados predichos y reales, grupos de errores y características específicas. A veces, descubrir un grupo de datos subrepresentado también puede revelar que el modelo no está aprendiendo bien, de ahí las altas inexactitudes. Un modelo con sesgo de datos no es solo un problema de equidad: muestra que el modelo no es inclusivo ni confiable.
+
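+Un esbozo mínimo con pandas de cómo detectar este tipo de desequilibrio antes de entrenar; el archivo `train.csv` y las columnas `gender` y `job_level` son hipotéticos:
+
+```python
+import pandas as pd
+
+train = pd.read_csv("train.csv")  # ruta hipotética
+
+# Distribución global de una característica sensible
+print(train["gender"].value_counts(normalize=True))
+
+# Distribución cruzada: proporción de cada género por nivel de puesto,
+# útil para detectar el desequilibrio 27% / 73% del ejemplo anterior
+print(pd.crosstab(train["job_level"], train["gender"], normalize="index"))
+```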
+
+
+Usa el análisis de datos cuando necesites:
+
+* Explorar las estadísticas de tu conjunto de datos seleccionando diferentes filtros para dividir tus datos en diferentes dimensiones (también conocidas como cohortes).
+* Entender la distribución de tu conjunto de datos a través de diferentes cohortes y grupos de características.
+* Determinar si tus hallazgos relacionados con la equidad, el análisis de errores y la causalidad (derivados de otros componentes del tablero) son el resultado de la distribución de tu conjunto de datos.
+* Decidir en qué áreas recolectar más datos para mitigar errores que provienen de problemas de representación, ruido de etiquetas, ruido de características, sesgo de etiquetas y factores similares.
+
+## Interpretabilidad del Modelo
+
+Los modelos de aprendizaje automático tienden a ser cajas negras. Entender qué características clave de los datos impulsan la predicción de un modelo puede ser un desafío. Es importante proporcionar transparencia sobre por qué un modelo hace una cierta predicción. Por ejemplo, si un sistema de IA predice que un paciente diabético está en riesgo de ser readmitido en un hospital en menos de 30 días, debería poder proporcionar los datos de apoyo que llevaron a su predicción. Tener indicadores de datos de apoyo aporta transparencia para ayudar a los clínicos u hospitales a tomar decisiones bien informadas. Además, poder explicar por qué un modelo hizo una predicción para un paciente individual facilita la rendición de cuentas ante las regulaciones sanitarias. Cuando utilizas modelos de aprendizaje automático de maneras que afectan la vida de las personas, es crucial entender y explicar qué influye en el comportamiento de un modelo. La explicabilidad e interpretabilidad del modelo ayudan a responder preguntas en escenarios como:
+
+* Depuración del modelo: ¿Por qué mi modelo cometió este error? ¿Cómo puedo mejorar mi modelo?
+* Colaboración humano-IA: ¿Cómo puedo entender y confiar en las decisiones del modelo?
+* Cumplimiento regulatorio: ¿Cumple mi modelo con los requisitos legales?
+
+El componente de Importancia de Características del tablero de RAI te ayuda a depurar y obtener una comprensión integral de cómo un modelo hace predicciones. También es una herramienta útil para que los profesionales del aprendizaje automático y los tomadores de decisiones expliquen y muestren evidencia de las características que influyen en el comportamiento de un modelo, de cara al cumplimiento regulatorio. En él, los usuarios pueden explorar tanto explicaciones globales como locales para validar qué características impulsan la predicción de un modelo. Las explicaciones globales enumeran las principales características que afectaron la predicción general de un modelo. Las explicaciones locales muestran qué características llevaron a la predicción de un modelo para un caso individual. La capacidad de evaluar explicaciones locales también es útil para depurar o auditar un caso específico, para comprender e interpretar mejor por qué un modelo hizo una predicción precisa o inexacta.
+
+
+
+* Explicaciones globales: Por ejemplo, ¿qué características afectan el comportamiento general de un modelo de readmisión hospitalaria para diabetes?
+* Explicaciones locales: Por ejemplo, ¿por qué se predijo que un paciente diabético mayor de 60 años con hospitalizaciones previas sería o no readmitido en un hospital dentro de los 30 días?
+
+En el proceso de depuración, al examinar el rendimiento de un modelo a través de diferentes cohortes, la Importancia de Características muestra qué nivel de impacto tiene una característica en cada cohorte. Ayuda a revelar anomalías al comparar el nivel de influencia de la característica en las predicciones erróneas del modelo. El componente de Importancia de Características puede mostrar qué valores de una característica influyeron positiva o negativamente en el resultado del modelo. Por ejemplo, si un modelo hizo una predicción inexacta, el componente te da la capacidad de profundizar y señalar qué características o valores de características impulsaron esa predicción. Este nivel de detalle no solo ayuda en la depuración, sino que aporta transparencia y rendición de cuentas en situaciones de auditoría. Finalmente, el componente puede ayudarte a identificar problemas de equidad. Para ilustrar, si una característica sensible como la etnia o el género es altamente influyente en la predicción de un modelo, esto podría ser un indicio de sesgo étnico o de género en el modelo.
+
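+A modo de esbozo, así se activaría el componente de explicaciones sobre el objeto `rai` del primer ejemplo, contrastando además la importancia global con una técnica independiente de scikit-learn (la importancia por permutación); `model`, `X_test` e `y_test` se reutilizan de los esbozos anteriores:
+
+```python
+from sklearn.inspection import permutation_importance
+
+# Habilita explicaciones globales y locales en el tablero
+rai.explainer.add()
+rai.compute()
+
+# Verificación independiente de la importancia global de características
+r = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
+top = sorted(zip(X_test.columns, r.importances_mean), key=lambda p: -p[1])[:5]
+for nombre, valor in top:
+    print(nombre, round(valor, 4))
+```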
+
+
+Usa la interpretabilidad cuando necesites:
+
+* Determinar cuán confiables son las predicciones de tu sistema de IA al entender qué características son más importantes para las predicciones.
+* Abordar la depuración de tu modelo entendiéndolo primero e identificando si está utilizando características legítimas o meras correlaciones espurias.
+* Descubrir posibles fuentes de injusticia al entender si el modelo está basando predicciones en características sensibles o en características que están altamente correlacionadas con ellas.
+* Construir confianza en las decisiones de tu modelo generando explicaciones locales para ilustrar sus resultados.
+* Completar una auditoría regulatoria de un sistema de IA para validar modelos y monitorear el impacto de las decisiones del modelo en los humanos.
+
+## Conclusión
+
+Todos los componentes del tablero de RAI son herramientas prácticas para ayudarte a construir modelos de aprendizaje automático menos dañinos y más confiables para la sociedad. Ayudan a prevenir amenazas a los derechos humanos; la discriminación o exclusión de ciertos grupos de oportunidades de vida; y el riesgo de daño físico o psicológico. Algunos de los posibles daños se pueden clasificar como:
+
+- **Asignación**: cuando, por ejemplo, se favorece a un género o etnia sobre otro.
+- **Calidad del servicio**: si se entrena el modelo con datos de un escenario específico pero la realidad es mucho más compleja, el resultado es un servicio de bajo rendimiento.
+- **Estereotipado**: asociar a un grupo dado con atributos preasignados.
+- **Denigración**: criticar y etiquetar injustamente algo o a alguien.
+- **Sobre- o subrepresentación**: la idea de que cierto grupo no se ve en cierta profesión, y cualquier servicio o función que siga promoviéndolo está contribuyendo al daño.
+
+### Tablero de RAI de Azure
+
+[El tablero de RAI de Azure](https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu) está construido sobre herramientas de código abierto desarrolladas por las principales instituciones académicas y organizaciones, incluida Microsoft. Estas herramientas son fundamentales para que los científicos de datos y desarrolladores de IA comprendan mejor el comportamiento del modelo, y descubran y mitiguen los problemas indeseables de los modelos de IA.
+
+- Aprende a usar los diferentes componentes consultando la [documentación del tablero de RAI.](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu)
+
+- Consulta algunos [cuadernos de muestra del tablero de RAI](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) para depurar más escenarios de IA responsable en Azure Machine Learning.
+
+---
+## 🚀 Desafío
+
+Para evitar que se introduzcan sesgos estadísticos o de datos en primer lugar, debemos:
+
+- tener una diversidad de antecedentes y perspectivas entre las personas que trabajan en los sistemas
+- invertir en conjuntos de datos que reflejen la diversidad de nuestra sociedad
+- desarrollar mejores métodos para detectar y corregir el sesgo cuando ocurra
+
+Piensa en escenarios de la vida real donde la injusticia es evidente en la construcción y uso de modelos. ¿Qué más deberíamos considerar?
+
+## [Cuestionario posterior a la lección](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## Revisión y Autoestudio
+
+En esta lección, has aprendido algunas de las herramientas prácticas para incorporar IA responsable en el aprendizaje automático.
+
+Mira este taller para profundizar en los temas:
+
+- Tablero de IA Responsable: Un centro integral para operacionalizar la RAI en la práctica por Besmira Nushi y Mehrnoosh Sameki
+
+[Tablero de IA Responsable: Un centro integral para operacionalizar la RAI en la práctica](https://www.youtube.com/watch?v=f1oaDNl3djg)
+
+> 🎥 Haz clic en el enlace de arriba para ver el video: Tablero de IA Responsable: Un centro integral para operacionalizar la RAI en la práctica, por Besmira Nushi y Mehrnoosh Sameki
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/es/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..3e00d7b65
--- /dev/null
+++ b/translations/es/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Explorar el tablero de IA Responsable (RAI)
+
+## Instrucciones
+
+En esta lección aprendiste sobre el tablero de RAI, un conjunto de componentes construidos sobre herramientas de código abierto para ayudar a los científicos de datos a realizar análisis de errores, exploración de datos, evaluación de equidad, interpretabilidad de modelos, evaluaciones contrafactuales/what-if y análisis causal en sistemas de IA. Para esta tarea, explora algunos de los [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) de muestra del tablero de RAI y reporta tus hallazgos en un documento o presentación.
+
+## Rúbrica
+
+| Criterios | Ejemplar | Adecuado | Necesita Mejorar |
+| --------- | -------- | -------- | ---------------- |
+| | Se presenta un documento o una presentación en PowerPoint discutiendo los componentes del tablero de RAI, el notebook que se ejecutó y las conclusiones obtenidas al ejecutarlo | Se presenta un documento sin conclusiones | No se presenta ningún documento |
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/9-Real-World/README.md b/translations/es/9-Real-World/README.md
new file mode 100644
index 000000000..13a60b51b
--- /dev/null
+++ b/translations/es/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# Postscript: Aplicaciones del mundo real del aprendizaje automático clásico
+
+En esta sección del currículo, se te presentarán algunas aplicaciones del mundo real del aprendizaje automático clásico. Hemos recorrido internet para encontrar documentos técnicos y artículos sobre aplicaciones que han utilizado estas estrategias, evitando redes neuronales, aprendizaje profundo e inteligencia artificial tanto como sea posible. Aprende cómo se utiliza el aprendizaje automático en sistemas empresariales, aplicaciones ecológicas, finanzas, artes y cultura, y más.
+
+
+
+> Foto de Alexis Fauvet en Unsplash
+
+## Lección
+
+1. [Aplicaciones del mundo real para el aprendizaje automático](1-Applications/README.md)
+2. [Depuración de modelos en el aprendizaje automático usando componentes del tablero de IA Responsable](2-Debugging-ML-Models/README.md)
+
+## Créditos
+
+"Aplicaciones del mundo real" fue escrito por un equipo de personas, incluyendo a [Jen Looper](https://twitter.com/jenlooper) y [Ornella Altunyan](https://twitter.com/ornelladotcom).
+
+"Depuración de modelos en el aprendizaje automático usando componentes del tablero de IA Responsable" fue escrito por [Ruth Yakubu](https://twitter.com/ruthieyakubu)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/CODE_OF_CONDUCT.md b/translations/es/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..f07208e25
--- /dev/null
+++ b/translations/es/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Código de Conducta de Código Abierto de Microsoft
+
+Este proyecto ha adoptado el [Código de Conducta de Código Abierto de Microsoft](https://opensource.microsoft.com/codeofconduct/).
+
+Recursos:
+
+- [Código de Conducta de Código Abierto de Microsoft](https://opensource.microsoft.com/codeofconduct/)
+- [Preguntas Frecuentes sobre el Código de Conducta de Microsoft](https://opensource.microsoft.com/codeofconduct/faq/)
+- Contacta a [opencode@microsoft.com](mailto:opencode@microsoft.com) para preguntas o inquietudes
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/CONTRIBUTING.md b/translations/es/CONTRIBUTING.md
new file mode 100644
index 000000000..1659d59bb
--- /dev/null
+++ b/translations/es/CONTRIBUTING.md
@@ -0,0 +1,14 @@
+# Contribuyendo
+
+Este proyecto da la bienvenida a contribuciones y sugerencias. La mayoría de las contribuciones requieren que aceptes un Acuerdo de Licencia de Contribuidor (CLA) declarando que tienes el derecho de, y efectivamente, nos otorgas los derechos para usar tu contribución. Para más detalles, visita https://cla.microsoft.com.
+
+> Importante: cuando traduzcas texto en este repositorio, asegúrate de no usar traducción automática. Verificaremos las traducciones a través de la comunidad, así que solo ofrécete como voluntario para traducciones en idiomas en los que seas competente.
+
+Cuando envíes una solicitud de extracción (pull request), un bot de CLA determinará automáticamente si necesitas proporcionar un CLA y decorará el PR apropiadamente (por ejemplo, etiqueta, comentario). Simplemente sigue las instrucciones proporcionadas por el bot. Solo necesitarás hacer esto una vez en todos los repositorios que usen nuestro CLA.
+
+Este proyecto ha adoptado el [Código de Conducta de Código Abierto de Microsoft](https://opensource.microsoft.com/codeofconduct/).
+Para más información, consulta las [Preguntas Frecuentes del Código de Conducta](https://opensource.microsoft.com/codeofconduct/faq/)
+o contacta a [opencode@microsoft.com](mailto:opencode@microsoft.com) con cualquier pregunta o comentario adicional.
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/README.md b/translations/es/README.md
new file mode 100644
index 000000000..f61e34df0
--- /dev/null
+++ b/translations/es/README.md
@@ -0,0 +1,155 @@
+[Licencia de GitHub](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[Contribuidores de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[Issues de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[Pull requests de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[Se aceptan PRs](http://makeapullrequest.com)
+
+[Observadores de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[Forks de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[Estrellas de GitHub](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[Chat de Discord](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# Aprendizaje Automático para Principiantes - Un Currículo
+
+> 🌍 Viaja por el mundo mientras exploramos el Aprendizaje Automático a través de las culturas del mundo 🌍
+
+Los Cloud Advocates en Microsoft están encantados de ofrecer un currículo de 12 semanas y 26 lecciones sobre **Aprendizaje Automático**. En este currículo, aprenderás sobre lo que a veces se llama **aprendizaje automático clásico**, utilizando principalmente Scikit-learn como biblioteca y evitando el aprendizaje profundo, que se cubre en nuestro [currículo de AI para Principiantes](https://aka.ms/ai4beginners). ¡Combina estas lecciones con nuestro [currículo de Ciencia de Datos para Principiantes](https://aka.ms/ds4beginners), también!
+
+Viaja con nosotros alrededor del mundo mientras aplicamos estas técnicas clásicas a datos de muchas áreas del mundo. Cada lección incluye cuestionarios antes y después de la lección, instrucciones escritas para completar la lección, una solución, una tarea y más. Nuestra pedagogía basada en proyectos te permite aprender mientras construyes, una manera comprobada para que las nuevas habilidades 'se queden'.
+
+**✍️ Agradecimientos de corazón a nuestros autores** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu y Amy Boyd
+
+**🎨 Gracias también a nuestros ilustradores** Tomomi Imura, Dasani Madipalli y Jen Looper
+
+**🙏 Agradecimientos especiales 🙏 a nuestros autores, revisores y colaboradores de contenido Microsoft Student Ambassador**, notablemente Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila y Snigdha Agarwal
+
+**🤩 Extra gratitud a los Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi y Vidushi Gupta por nuestras lecciones de R!**
+
+# Empezando
+
+Sigue estos pasos:
+1. **Haz un Fork del Repositorio**: Haz clic en el botón "Fork" en la esquina superior derecha de esta página.
+2. **Clona el Repositorio**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [encuentra todos los recursos adicionales para este curso en nuestra colección de Microsoft Learn](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[Estudiantes](https://aka.ms/student-page)**, para usar este currículo, haz un fork del repositorio completo a tu propia cuenta de GitHub y completa los ejercicios por tu cuenta o en grupo:
+
+- Empieza con un cuestionario previo a la lección.
+- Lee la lección y completa las actividades, haciendo pausas y reflexionando en cada verificación de conocimiento.
+- Trata de crear los proyectos comprendiendo las lecciones en lugar de ejecutar el código de solución; sin embargo, ese código está disponible en las carpetas `/solution` en cada lección orientada a proyectos.
+- Realiza el cuestionario posterior a la lección.
+- Completa el desafío.
+- Completa la tarea.
+- Después de completar un grupo de lecciones, visita el [Tablero de Discusión](https://github.com/microsoft/ML-For-Beginners/discussions) y "aprende en voz alta" llenando la rúbrica PAT apropiada. Un 'PAT' (Progress Assessment Tool, Herramienta de Evaluación de Progreso) es una rúbrica que completas para avanzar en tu aprendizaje. También puedes reaccionar a otros PATs para que podamos aprender juntos.
+
+> Para un estudio adicional, recomendamos seguir estos módulos y rutas de aprendizaje de [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott).
+
+**Profesores**, hemos [incluido algunas sugerencias](for-teachers.md) sobre cómo usar este currículo.
+
+---
+
+## Recorridos en video
+
+Algunas de las lecciones están disponibles en formato de video corto. Puedes encontrar todos estos videos en las lecciones, o en la [lista de reproducción de ML para Principiantes en el canal de YouTube de Microsoft Developer](https://aka.ms/ml-beginners-videos) haciendo clic en la imagen a continuación.
+
+[Lista de reproducción de ML para Principiantes](https://aka.ms/ml-beginners-videos)
+
+---
+
+## Conoce al Equipo
+
+[Video promocional](https://youtu.be/Tj1XWrDSYJU)
+
+**Gif por** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 ¡Haz clic en la imagen de arriba para ver un video sobre el proyecto y las personas que lo crearon!
+
+---
+
+## Pedagogía
+
+Hemos elegido dos principios pedagógicos al construir este currículo: asegurar que sea **basado en proyectos** y que incluya **cuestionarios frecuentes**. Además, este currículo tiene un **tema común** para darle cohesión.
+
+Al asegurar que el contenido se alinee con los proyectos, el proceso se hace más atractivo para los estudiantes y se aumenta la retención de conceptos. Además, un cuestionario de baja presión antes de la clase establece la intención del estudiante hacia el aprendizaje de un tema, mientras que un segundo cuestionario después de la clase asegura una mayor retención. Este currículo fue diseñado para ser flexible y divertido y puede tomarse en su totalidad o en parte. Los proyectos comienzan pequeños y se vuelven cada vez más complejos al final del ciclo de 12 semanas. Este currículo también incluye un postscript sobre aplicaciones del mundo real del aprendizaje automático, que puede usarse como crédito adicional o como base para la discusión.
+
+> Encuentra nuestro [Código de Conducta](CODE_OF_CONDUCT.md), [Contribución](CONTRIBUTING.md) y [Pautas de Traducción](TRANSLATIONS.md). ¡Agradecemos tus comentarios constructivos!
+
+## Cada lección incluye
+
+- sketchnote opcional
+- video complementario opcional
+- recorrido en video (algunas lecciones solamente)
+- cuestionario de calentamiento previo a la lección
+- lección escrita
+- para lecciones basadas en proyectos, guías paso a paso sobre cómo construir el proyecto
+- verificaciones de conocimiento
+- un desafío
+- lectura complementaria
+- tarea
+- cuestionario posterior a la lección
+
+> **Una nota sobre los idiomas**: Estas lecciones están escritas principalmente en Python, pero muchas también están disponibles en R. Para completar una lección en R, ve a la carpeta `/solution` y busca lecciones en R. Incluyen una extensión .rmd que representa un archivo **R Markdown** que puede definirse simplemente como una incrustación de `code chunks` (de R u otros lenguajes) y una `YAML header` (que guía cómo formatear las salidas como PDF) en un `Markdown document`. Como tal, sirve como un marco de autoría ejemplar para la ciencia de datos, ya que te permite combinar tu código, su salida y tus pensamientos al permitirte escribirlos en Markdown. Además, los documentos R Markdown pueden renderizarse a formatos de salida como PDF, HTML o Word.
+
+> **Una nota sobre los cuestionarios**: Todos los cuestionarios están contenidos en la [carpeta de la aplicación de cuestionarios](../../quiz-app), para un total de 52 cuestionarios de tres preguntas cada uno. Están vinculados desde dentro de las lecciones, pero la aplicación de cuestionarios puede ejecutarse localmente; sigue las instrucciones en la carpeta `quiz-app` para alojarla localmente o desplegarla en Azure.
+
+| Número de Lección | Tema | Agrupación de Lección | Objetivos de Aprendizaje | Lección Vinculada | Autor |
+| :---------------: | :------------------------------------------------------------: | :-------------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | Introducción al aprendizaje automático | [Introducción](1-Introduction/README.md) | Aprende los conceptos básicos detrás del aprendizaje automático | [Lección](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | La Historia del aprendizaje automático | [Introducción](1-Introduction/README.md) | Aprende la historia subyacente a este campo | [Lección](1-Introduction/2-history-of-ML/README.md) | Jen y Amy |
+| 03 | Equidad y aprendizaje automático | [Introducción](1-Introduction/README.md) | ¿Cuáles son los problemas filosóficos importantes sobre la equidad que los estudiantes deben considerar al construir y aplicar modelos de aprendizaje automático? | [Lección](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | Técnicas para el aprendizaje automático | [Introducción](1-Introduction/README.md) | ¿Qué técnicas utilizan los investigadores de ML para construir modelos de ML? | [Lección](1-Introduction/4-techniques-of-ML/README.md) | Chris y Jen |
+| 05 | Introducción a la regresión | [Regression](2-Regression/README.md) | Comienza con Python y Scikit-learn para modelos de regresión | | |
+| 09 | Una aplicación web 🔌 | [Web App](3-Web-App/README.md) | Construye una aplicación web para usar tu modelo entrenado | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | Introducción a la clasificación | [Classification](4-Classification/README.md) | Limpia, prepara y visualiza tus datos; introducción a la clasificación | | |
+| 13 | Deliciosas cocinas asiáticas e indias 🍜 | [Classification](4-Classification/README.md) | Construye una aplicación web de recomendación usando tu modelo | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | Introducción al clustering | [Clustering](5-Clustering/README.md) | Limpia, prepara y visualiza tus datos; introducción al clustering | | |
+| 16 | Introducción al procesamiento de lenguaje natural ☕️ | [Natural language processing](6-NLP/README.md) | Aprende los conceptos básicos sobre NLP construyendo un bot sencillo | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | Tareas comunes de NLP ☕️ | [Natural language processing](6-NLP/README.md) | Profundiza tu conocimiento de NLP entendiendo las tareas comunes requeridas al tratar con estructuras de lenguaje | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | Traducción y análisis de sentimiento ♥️ | [Natural language processing](6-NLP/README.md) | Traducción y análisis de sentimiento con Jane Austen | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | Hoteles románticos de Europa ♥️ | [Natural language processing](6-NLP/README.md) | Análisis de sentimiento con reseñas de hoteles 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | Hoteles románticos de Europa ♥️ | [Natural language processing](6-NLP/README.md) | Análisis de sentimiento con reseñas de hoteles 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | Introducción a la predicción de series temporales | [Time series](7-TimeSeries/README.md) | Introducción a la predicción de series temporales | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ Uso de energía mundial ⚡️ - predicción de series temporales con ARIMA | [Time series](7-TimeSeries/README.md) | Predicción de series temporales con ARIMA | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ Uso de energía mundial ⚡️ - predicción de series temporales con SVR | [Time series](7-TimeSeries/README.md) | Predicción de series temporales con Support Vector Regressor | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | Introducción al aprendizaje por refuerzo | [Reinforcement learning](8-Reinforcement/README.md) | Introducción al aprendizaje por refuerzo con Q-Learning | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | ¡Ayuda a Peter a evitar al lobo! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | Gimnasio de aprendizaje por refuerzo | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | Escenarios y aplicaciones de ML en el mundo real | [ML in the Wild](9-Real-World/README.md) | Aplicaciones interesantes y reveladoras de ML clásico en el mundo real | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | Depuración de modelos en ML usando el panel de control de RAI | [ML in the Wild](9-Real-World/README.md) | Depuración de modelos en Machine Learning usando componentes del panel de control de IA responsable | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [encuentra todos los recursos adicionales para este curso en nuestra colección de Microsoft Learn](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## Acceso sin conexión
+
+Puedes ejecutar esta documentación sin conexión utilizando [Docsify](https://docsify.js.org/#/). Haz un fork de este repositorio, [instala Docsify](https://docsify.js.org/#/quickstart) en tu máquina local, y luego en la carpeta raíz de este repositorio, escribe `docsify serve`. El sitio web se servirá en el puerto 3000 en tu localhost: `localhost:3000`.
+
+## PDFs
+Encuentra un PDF del currículo con enlaces [aquí](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf).
+
+## Se Busca Ayuda
+
+¿Te gustaría contribuir con una traducción? Por favor, lee nuestras [directrices de traducción](TRANSLATIONS.md) y añade un issue con plantilla para gestionar la carga de trabajo [aquí](https://github.com/microsoft/ML-For-Beginners/issues).
+
+## Otros Currículos
+
+¡Nuestro equipo produce otros currículos! Echa un vistazo a:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**Nueva Versión 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NUEVO** Cybersecurity for Beginners](https://github.com/microsoft/Security-101??WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/SECURITY.md b/translations/es/SECURITY.md
new file mode 100644
index 000000000..a02353620
--- /dev/null
+++ b/translations/es/SECURITY.md
@@ -0,0 +1,40 @@
+## Seguridad
+
+Microsoft toma en serio la seguridad de nuestros productos y servicios de software, lo cual incluye todos los repositorios de código fuente gestionados a través de nuestras organizaciones en GitHub, que incluyen [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin) y [nuestras organizaciones en GitHub](https://opensource.microsoft.com/).
+
+Si crees que has encontrado una vulnerabilidad de seguridad en algún repositorio propiedad de Microsoft que cumpla con [la definición de vulnerabilidad de seguridad de Microsoft](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott), por favor repórtalo como se describe a continuación.
+
+## Reporte de Problemas de Seguridad
+
+**Por favor, no reportes vulnerabilidades de seguridad a través de issues públicos en GitHub.**
+
+En su lugar, repórtalas al Centro de Respuesta de Seguridad de Microsoft (MSRC) en [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
+
+Si prefieres enviar el reporte sin iniciar sesión, envía un correo electrónico a [secure@microsoft.com](mailto:secure@microsoft.com). Si es posible, encripta tu mensaje con nuestra clave PGP; por favor descárgala desde la [página de Clave PGP del Centro de Respuesta de Seguridad de Microsoft](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
+
+Deberías recibir una respuesta dentro de 24 horas. Si por alguna razón no la recibes, por favor haz un seguimiento vía correo electrónico para asegurarte de que recibimos tu mensaje original. Información adicional puede encontrarse en [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+Por favor, incluye la información solicitada a continuación (tanto como puedas proporcionar) para ayudarnos a entender mejor la naturaleza y el alcance del posible problema:
+
+ * Tipo de problema (por ejemplo, desbordamiento de búfer, inyección SQL, scripting entre sitios, etc.)
+ * Rutas completas de los archivos fuente relacionados con la manifestación del problema
+ * La ubicación del código fuente afectado (etiqueta/rama/commit o URL directa)
+ * Cualquier configuración especial requerida para reproducir el problema
+ * Instrucciones paso a paso para reproducir el problema
+ * Código de prueba de concepto o de explotación (si es posible)
+ * Impacto del problema, incluyendo cómo un atacante podría explotar el problema
+
+Esta información nos ayudará a priorizar tu reporte más rápidamente.
+
+Si estás reportando para una recompensa por errores, reportes más completos pueden contribuir a una mayor recompensa. Por favor visita nuestra página del [Programa de Recompensas por Errores de Microsoft](https://microsoft.com/msrc/bounty) para más detalles sobre nuestros programas activos.
+
+## Idiomas Preferidos
+
+Preferimos que todas las comunicaciones sean en inglés.
+
+## Política
+
+Microsoft sigue el principio de [Divulgación Coordinada de Vulnerabilidades](https://www.microsoft.com/en-us/msrc/cvd).
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/SUPPORT.md b/translations/es/SUPPORT.md
new file mode 100644
index 000000000..8854f5285
--- /dev/null
+++ b/translations/es/SUPPORT.md
@@ -0,0 +1,15 @@
+# Soporte
+## Cómo reportar problemas y obtener ayuda
+
+Este proyecto utiliza GitHub Issues para rastrear errores y solicitudes de características. Por favor, busca en los
+problemas existentes antes de reportar nuevos problemas para evitar duplicados. Para nuevos problemas, reporta tu error o
+solicitud de característica como un nuevo Issue.
+
+Para obtener ayuda y preguntas sobre el uso de este proyecto, reporta un issue.
+
+## Política de Soporte de Microsoft
+
+El soporte para este repositorio se limita a los recursos listados arriba.
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/TRANSLATIONS.md b/translations/es/TRANSLATIONS.md
new file mode 100644
index 000000000..813e1503d
--- /dev/null
+++ b/translations/es/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# Contribuye traduciendo lecciones
+
+¡Agradecemos las traducciones para las lecciones en este currículo!
+## Pautas
+
+En cada carpeta de lección y en cada carpeta de introducción a la lección hay carpetas que contienen los archivos markdown traducidos.
+
+> Nota, por favor no traduzcas ningún código en los archivos de ejemplos de código; las únicas cosas que deben traducirse son README, tareas y los cuestionarios. ¡Gracias!
+
+Los archivos traducidos deben seguir esta convención de nombres:
+
+**README._[language]_.md**
+
+donde _[language]_ es una abreviatura de dos letras del idioma siguiendo el estándar ISO 639-1 (por ejemplo, `README.es.md` para español y `README.nl.md` para neerlandés).
+
+**assignment._[language]_.md**
+
+Similar a los Readme, por favor traduce también las tareas.
+
+> Importante: cuando traduzcas texto en este repositorio, por favor asegúrate de no usar traducción automática. Verificaremos las traducciones a través de la comunidad, así que por favor solo ofrécete para traducir en idiomas en los que seas competente.
+
+**Cuestionarios**
+
+1. Agrega tu traducción a la aplicación de cuestionarios añadiendo un archivo aquí: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, con la convención de nombres adecuada (en.json, fr.json). **Por favor no localices las palabras 'true' o 'false'. ¡Gracias!**
+
+2. Agrega tu código de idioma al menú desplegable en el archivo App.vue de la aplicación de cuestionarios.
+
+3. Edita el [archivo index.js de traducciones](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) de la aplicación de cuestionarios para agregar tu idioma.
+
+4. Finalmente, edita TODOS los enlaces de los cuestionarios en tus archivos README.md traducidos para que apunten directamente a tu cuestionario traducido: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 se convierte en https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id
+
+**GRACIAS**
+
+¡Realmente apreciamos tus esfuerzos!
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automatizada por IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción profesional humana. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/docs/_sidebar.md b/translations/es/docs/_sidebar.md
new file mode 100644
index 000000000..89a96b24c
--- /dev/null
+++ b/translations/es/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- Introducción
+ - [Introducción al Aprendizaje Automático](../1-Introduction/1-intro-to-ML/README.md)
+ - [Historia del Aprendizaje Automático](../1-Introduction/2-history-of-ML/README.md)
+ - [AA y Equidad](../1-Introduction/3-fairness/README.md)
+ - [Técnicas de AA](../1-Introduction/4-techniques-of-ML/README.md)
+
+- Regresión
+ - [Herramientas del Oficio](../2-Regression/1-Tools/README.md)
+ - [Datos](../2-Regression/2-Data/README.md)
+ - [Regresión Lineal](../2-Regression/3-Linear/README.md)
+ - [Regresión Logística](../2-Regression/4-Logistic/README.md)
+
+- Construir una Aplicación Web
+ - [Aplicación Web](../3-Web-App/1-Web-App/README.md)
+
+- Clasificación
+ - [Introducción a la Clasificación](../4-Classification/1-Introduction/README.md)
+ - [Clasificadores 1](../4-Classification/2-Classifiers-1/README.md)
+ - [Clasificadores 2](../4-Classification/3-Classifiers-2/README.md)
+ - [AA Aplicado](../4-Classification/4-Applied/README.md)
+
+- Agrupamiento
+ - [Visualiza tus Datos](../5-Clustering/1-Visualize/README.md)
+ - [K-Means](../5-Clustering/2-K-Means/README.md)
+
+- PLN
+ - [Introducción al PLN](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [Tareas de PLN](../6-NLP/2-Tasks/README.md)
+ - [Traducción y Sentimiento](../6-NLP/3-Translation-Sentiment/README.md)
+ - [Reseñas de Hoteles 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [Reseñas de Hoteles 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- Pronóstico de Series Temporales
+ - [Introducción al Pronóstico de Series Temporales](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- Aprendizaje por Refuerzo
+ - [Q-Learning](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- AA en el Mundo Real
+ - [Aplicaciones](../9-Real-World/1-Applications/README.md)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de cualquier malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/for-teachers.md b/translations/es/for-teachers.md
new file mode 100644
index 000000000..defc0aa98
--- /dev/null
+++ b/translations/es/for-teachers.md
@@ -0,0 +1,26 @@
+## Para Educadores
+
+¿Te gustaría usar este plan de estudios en tu aula? ¡Adelante!
+
+De hecho, puedes usarlo dentro de GitHub mismo utilizando GitHub Classroom.
+
+Para hacerlo, haz un fork de este repositorio. Necesitarás crear un repositorio para cada lección, extrayendo cada carpeta en un repositorio separado. De esa manera, [GitHub Classroom](https://classroom.github.com/classrooms) puede recoger cada lección por separado.
+
+Estas [instrucciones completas](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) te darán una idea de cómo configurar tu aula.
+
+## Usar el repositorio tal como está
+
+Si te gustaría usar este repositorio tal como está, sin usar GitHub Classroom, también se puede hacer. Necesitarías comunicarte con tus estudiantes sobre cuál lección trabajar juntos.
+
+En un formato en línea (Zoom, Teams, u otro) podrías formar salas de trabajo para los cuestionarios, y mentorear a los estudiantes para ayudarles a prepararse para aprender. Luego, invitar a los estudiantes a realizar los cuestionarios y enviar sus respuestas como 'issues' a una hora determinada. Podrías hacer lo mismo con las tareas, si quieres que los estudiantes trabajen colaborativamente en público.
+
+Si prefieres un formato más privado, pide a tus estudiantes que hagan fork del plan de estudios, lección por lección, a sus propios repositorios de GitHub como repositorios privados, y te den acceso. Luego pueden completar cuestionarios y tareas de manera privada y enviártelos a través de issues en tu repositorio de aula.
+
+Hay muchas maneras de hacer que esto funcione en un formato de aula en línea. ¡Por favor, háznos saber qué funciona mejor para ti!
+
+## ¡Por favor danos tu opinión!
+
+Queremos que este plan de estudios funcione para ti y tus estudiantes. Por favor, danos tu [opinión](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u).
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática por IA. Aunque nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/quiz-app/README.md b/translations/es/quiz-app/README.md
new file mode 100644
index 000000000..87948bf89
--- /dev/null
+++ b/translations/es/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Cuestionarios
+
+Estos son los cuestionarios previos y posteriores a cada lección del currículo de ML en https://aka.ms/ml-beginners
+
+## Configuración del proyecto
+
+```
+npm install
+```
+
+### Compila y recarga en caliente para desarrollo
+
+```
+npm run serve
+```
+
+### Compila y minimiza para producción
+
+```
+npm run build
+```
+
+### Lint y corrige archivos
+
+```
+npm run lint
+```
+
+### Personaliza la configuración
+
+Consulta la [Referencia de Configuración](https://cli.vuejs.org/config/).
+
+Créditos: Gracias a la versión original de esta aplicación de cuestionarios: https://github.com/arpan45/simple-quiz-vue
+
+## Desplegando en Azure
+
+Aquí tienes una guía paso a paso para ayudarte a empezar:
+
+1. Haz un fork del Repositorio de GitHub
+Asegúrate de que el código de tu aplicación web estática está en tu repositorio de GitHub. Haz un fork de este repositorio.
+
+2. Crea una Aplicación Web Estática de Azure
+- Crea una [cuenta de Azure](http://azure.microsoft.com)
+- Ve al [portal de Azure](https://portal.azure.com)
+- Haz clic en “Crear un recurso” y busca “Aplicación Web Estática”.
+- Haz clic en “Crear”.
+
+3. Configura la Aplicación Web Estática
+- Básicos: Suscripción: Selecciona tu suscripción de Azure.
+- Grupo de Recursos: Crea un nuevo grupo de recursos o usa uno existente.
+- Nombre: Proporciona un nombre para tu aplicación web estática.
+- Región: Elige la región más cercana a tus usuarios.
+
+- #### Detalles de Despliegue:
+- Fuente: Selecciona “GitHub”.
+- Cuenta de GitHub: Autoriza a Azure para acceder a tu cuenta de GitHub.
+- Organización: Selecciona tu organización de GitHub.
+- Repositorio: Elige el repositorio que contiene tu aplicación web estática.
+- Rama: Selecciona la rama desde la que quieres desplegar.
+
+- #### Detalles de Construcción:
+- Presets de Construcción: Elige el framework con el que está construida tu aplicación (por ejemplo, React, Angular, Vue, etc.).
+- Ubicación de la Aplicación: Especifica la carpeta que contiene el código de tu aplicación (por ejemplo, / si está en la raíz).
+- Ubicación de la API: Si tienes una API, especifica su ubicación (opcional).
+- Ubicación de Salida: Especifica la carpeta donde se genera la salida de la construcción (por ejemplo, build o dist).
+
+4. Revisa y Crea
+Revisa tus configuraciones y haz clic en “Crear”. Azure configurará los recursos necesarios y creará un flujo de trabajo de GitHub Actions en tu repositorio.
+
+5. Flujo de Trabajo de GitHub Actions
+Azure creará automáticamente un archivo de flujo de trabajo de GitHub Actions en tu repositorio (.github/workflows/azure-static-web-apps-.yml). Este flujo de trabajo manejará el proceso de construcción y despliegue.
+
+6. Monitorea el Despliegue
+Ve a la pestaña “Actions” en tu repositorio de GitHub.
+Deberías ver un flujo de trabajo en ejecución. Este flujo de trabajo construirá y desplegará tu aplicación web estática en Azure.
+Una vez que el flujo de trabajo se complete, tu aplicación estará en vivo en la URL proporcionada por Azure.
+
+### Archivo de Flujo de Trabajo de Ejemplo
+
+Aquí tienes un ejemplo de cómo podría verse el archivo de flujo de trabajo de GitHub Actions:
+```
+name: Azure Static Web Apps CI/CD
+on:
+ push:
+ branches:
+ - main
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ runs-on: ubuntu-latest
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v1
+ with:
+ azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+ repo_token: ${{ secrets.GITHUB_TOKEN }}
+ action: "upload"
+ app_location: "/quiz-app" # App source code path
+          api_location: ""  # API source code path - optional
+          output_location: "dist"  # Built app content directory - optional
+```
+
+### Recursos Adicionales
+- [Documentación de Aplicaciones Web Estáticas de Azure](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [Documentación de GitHub Actions](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/sketchnotes/LICENSE.md b/translations/es/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..a83b61da1
--- /dev/null
+++ b/translations/es/sketchnotes/LICENSE.md
@@ -0,0 +1,172 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") no es un bufete de abogados y no proporciona servicios legales ni asesoramiento legal. La distribución de licencias públicas de Creative Commons no crea una relación abogado-cliente u otra relación. Creative Commons pone a disposición sus licencias y la información relacionada "tal cual". Creative Commons no ofrece garantías respecto a sus licencias, cualquier material licenciado bajo sus términos y condiciones, o cualquier información relacionada. Creative Commons no se hace responsable de los daños resultantes de su uso en la máxima medida posible.
+
+Uso de licencias públicas de Creative Commons
+
+Las licencias públicas de Creative Commons proporcionan un conjunto estándar de términos y condiciones que los creadores y otros titulares de derechos pueden utilizar para compartir obras originales sujetas a derechos de autor y ciertos otros derechos especificados en la licencia pública a continuación. Las siguientes consideraciones son solo para fines informativos, no son exhaustivas y no forman parte de nuestras licencias.
+
+ Consideraciones para los licenciadores: Nuestras licencias públicas están destinadas a ser utilizadas por aquellos autorizados a otorgar al público permiso para utilizar material de maneras que de otro modo estarían restringidas por los derechos de autor y ciertos otros derechos. Nuestras licencias son irrevocables. Los licenciadores deben leer y comprender los términos y condiciones de la licencia que elijan antes de aplicarla. Los licenciadores también deben asegurar todos los derechos necesarios antes de aplicar nuestras licencias para que el público pueda reutilizar el material según lo esperado. Los licenciadores deben marcar claramente cualquier material no sujeto a la licencia. Esto incluye otro material licenciado por CC o material utilizado bajo una excepción o limitación a los derechos de autor. Más consideraciones para los licenciadores: wiki.creativecommons.org/Considerations_for_licensors
+
+ Consideraciones para el público: Al utilizar una de nuestras licencias públicas, un licenciante otorga al público permiso para usar el material licenciado bajo los términos y condiciones especificados. Si el permiso del licenciante no es necesario por alguna razón, por ejemplo, debido a una excepción o limitación aplicable a los derechos de autor, entonces ese uso no está regulado por la licencia. Nuestras licencias solo otorgan permisos bajo derechos de autor y ciertos otros derechos que un licenciante tiene autoridad para otorgar. El uso del material licenciado aún puede estar restringido por otras razones, incluyendo porque otros tienen derechos de autor u otros derechos en el material. Un licenciante puede hacer solicitudes especiales, como pedir que todos los cambios se marquen o describan. Aunque no es requerido por nuestras licencias, se le anima a respetar esas solicitudes cuando sean razonables. Más consideraciones para el público: wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Licencia Pública Internacional de Creative Commons Attribution-ShareAlike 4.0
+
+Al ejercer los Derechos Licenciados (definidos a continuación), Usted acepta y se compromete a cumplir con los términos y condiciones de esta Licencia Pública Internacional de Creative Commons Attribution-ShareAlike 4.0 ("Licencia Pública"). En la medida en que esta Licencia Pública pueda interpretarse como un contrato, se le otorgan los Derechos Licenciados en consideración a su aceptación de estos términos y condiciones, y el Licenciante le otorga dichos derechos en consideración a los beneficios que el Licenciante recibe al poner el Material Licenciado a disposición bajo estos términos y condiciones.
+
+Sección 1 -- Definiciones.
+
+ a. Material Adaptado significa material sujeto a Derechos de Autor y Derechos Similares que se deriva o basa en el Material Licenciado y en el que el Material Licenciado se traduce, altera, arregla, transforma o modifica de otro modo de una manera que requiere permiso bajo los Derechos de Autor y Derechos Similares que posee el Licenciante. A los efectos de esta Licencia Pública, cuando el Material Licenciado es una obra musical, interpretación o grabación sonora, siempre se produce Material Adaptado cuando el Material Licenciado se sincroniza en relación temporal con una imagen en movimiento.
+
+ b. Licencia del Adaptador significa la licencia que Usted aplica a sus Derechos de Autor y Derechos Similares en sus contribuciones al Material Adaptado de acuerdo con los términos y condiciones de esta Licencia Pública.
+
+ c. Licencia Compatible BY-SA significa una licencia enumerada en creativecommons.org/compatiblelicenses, aprobada por Creative Commons como esencialmente equivalente a esta Licencia Pública.
+
+ d. Derechos de Autor y Derechos Similares significa derechos de autor y/o derechos similares estrechamente relacionados con los derechos de autor, incluidos, entre otros, interpretación, radiodifusión, grabación sonora y Derechos de Bases de Datos Sui Generis, sin importar cómo se etiqueten o categoricen los derechos. A los efectos de esta Licencia Pública, los derechos especificados en la Sección 2(b)(1)-(2) no son Derechos de Autor y Derechos Similares.
+
+ e. Medidas Tecnológicas Efectivas significa aquellas medidas que, en ausencia de la autoridad adecuada, no pueden ser eludidas bajo leyes que cumplen con las obligaciones bajo el Artículo 11 del Tratado de la OMPI sobre Derechos de Autor adoptado el 20 de diciembre de 1996, y/o acuerdos internacionales similares.
+
+ f. Excepciones y Limitaciones significa uso justo, trato justo y/o cualquier otra excepción o limitación a los Derechos de Autor y Derechos Similares que se aplica a su uso del Material Licenciado.
+
+ g. Elementos de la Licencia significa los atributos de la licencia enumerados en el nombre de una Licencia Pública de Creative Commons. Los Elementos de la Licencia de esta Licencia Pública son Atribución y CompartirIgual.
+
+ h. Material Licenciado significa la obra artística o literaria, base de datos u otro material al que el Licenciante aplicó esta Licencia Pública.
+
+ i. Derechos Licenciados significa los derechos que se le otorgan a Usted sujeto a los términos y condiciones de esta Licencia Pública, que se limitan a todos los Derechos de Autor y Derechos Similares que se aplican a su uso del Material Licenciado y que el Licenciante tiene autoridad para licenciar.
+
+ j. Licenciante significa la(s) persona(s) o entidad(es) que otorgan derechos bajo esta Licencia Pública.
+
+ k. Compartir significa proporcionar material al público por cualquier medio o proceso que requiera permiso bajo los Derechos Licenciados, como reproducción, exhibición pública, interpretación pública, distribución, difusión, comunicación o importación, y poner material a disposición del público, incluyendo de maneras que los miembros del público puedan acceder al material desde un lugar y en un momento elegido individualmente por ellos.
+
+ l. Derechos de Bases de Datos Sui Generis significa derechos distintos de los derechos de autor resultantes de la Directiva 96/9/CE del Parlamento Europeo y del Consejo de 11 de marzo de 1996 sobre la protección jurídica de las bases de datos, en su versión modificada y/o sucedida, así como otros derechos esencialmente equivalentes en cualquier parte del mundo.
+
+ m. Usted significa la persona o entidad que ejerce los Derechos Licenciados bajo esta Licencia Pública. Su tiene un significado correspondiente.
+
+Sección 2 -- Alcance.
+
+ a. Concesión de la licencia.
+
+ 1. Sujeto a los términos y condiciones de esta Licencia Pública, el Licenciante le otorga por la presente una licencia mundial, libre de regalías, no sublicenciable, no exclusiva e irrevocable para ejercer los Derechos Licenciados en el Material Licenciado para:
+
+ a. reproducir y Compartir el Material Licenciado, en su totalidad o en parte; y
+
+ b. producir, reproducir y Compartir Material Adaptado.
+
+ 2. Excepciones y Limitaciones. Para evitar dudas, cuando se apliquen Excepciones y Limitaciones a su uso, esta Licencia Pública no se aplica y Usted no necesita cumplir con sus términos y condiciones.
+
+ 3. Plazo. El plazo de esta Licencia Pública se especifica en la Sección 6(a).
+
+ 4. Medios y formatos; modificaciones técnicas permitidas. El Licenciante le autoriza a ejercer los Derechos Licenciados en todos los medios y formatos, ya sean conocidos ahora o creados en el futuro, y a realizar las modificaciones técnicas necesarias para hacerlo. El Licenciante renuncia y/o acuerda no hacer valer ningún derecho o autoridad para prohibirle realizar las modificaciones técnicas necesarias para ejercer los Derechos Licenciados, incluidas las modificaciones técnicas necesarias para eludir las Medidas Tecnológicas Efectivas. A los efectos de esta Licencia Pública, simplemente realizar modificaciones autorizadas por esta Sección 2(a)(4) nunca produce Material Adaptado.
+
+ 5. Destinatarios posteriores.
+
+ a. Oferta del Licenciante -- Material Licenciado. Cada destinatario del Material Licenciado recibe automáticamente una oferta del Licenciante para ejercer los Derechos Licenciados bajo los términos y condiciones de esta Licencia Pública.
+
+ b. Oferta adicional del Licenciante -- Material Adaptado. Cada destinatario de Material Adaptado de Usted recibe automáticamente una oferta del Licenciante para ejercer los Derechos Licenciados en el Material Adaptado bajo las condiciones de la Licencia del Adaptador que Usted aplique.
+
+ c. Sin restricciones posteriores. Usted no puede ofrecer ni imponer ningún término o condición adicional o diferente, ni aplicar ninguna Medida Tecnológica Efectiva al Material Licenciado si hacerlo restringe el ejercicio de los Derechos Licenciados por parte de cualquier destinatario del Material Licenciado.
+
+ 6. Sin respaldo. Nada en esta Licencia Pública constituye o puede interpretarse como permiso para afirmar o implicar que Usted está, o que su uso del Material Licenciado está, conectado con, o patrocinado, respaldado, o con estatus oficial otorgado por el Licenciante u otros designados para recibir atribución como se indica en la Sección 3(a)(1)(A)(i).
+
+ b. Otros derechos.
+
+ 1. Los derechos morales, como el derecho a la integridad, no están licenciados bajo esta Licencia Pública, ni los derechos de publicidad, privacidad y/o otros derechos de personalidad similares; sin embargo, en la medida de lo posible, el Licenciante renuncia y/o acuerda no hacer valer tales derechos que posea el Licenciante en la medida limitada necesaria para permitirle ejercer los Derechos Licenciados, pero no de otra manera.
+
+ 2. Los derechos de patente y marca registrada no están licenciados bajo esta Licencia Pública.
+
+ 3. En la medida de lo posible, el Licenciante renuncia a cualquier derecho a cobrar regalías de Usted por el ejercicio de los Derechos Licenciados, ya sea directamente o a través de una sociedad de gestión colectiva bajo cualquier esquema de licencia voluntaria o renunciable legal o obligatoria. En todos los demás casos, el Licenciante se reserva expresamente cualquier derecho a cobrar tales regalías.
+
+Sección 3 -- Condiciones de la Licencia.
+
+Su ejercicio de los Derechos Licenciados está expresamente sujeto a las siguientes condiciones.
+
+ a. Atribución.
+
+ 1. Si Usted comparte el Material Licenciado (incluyendo en forma modificada), debe:
+
+ a. conservar lo siguiente si es proporcionado por el Licenciante con el Material Licenciado:
+
+ i. identificación del creador(es) del Material Licenciado y cualquier otro designado para recibir atribución, de cualquier manera razonable solicitada por el Licenciante (incluyendo por seudónimo si se designa);
+
+ ii. un aviso de derechos de autor;
+
+ iii. un aviso que haga referencia a esta Licencia Pública;
+
+ iv. un aviso que haga referencia a la exención de garantías;
+
+ v. un URI o hipervínculo al Material Licenciado en la medida razonablemente practicable;
+
+ b. indicar si Usted modificó el Material Licenciado y conservar una indicación de cualquier modificación previa; y
+
+ c. indicar que el Material Licenciado está licenciado bajo esta Licencia Pública, e incluir el texto de, o el URI o hipervínculo a, esta Licencia Pública.
+
+ 2. Usted puede cumplir con las condiciones en la Sección 3(a)(1) de cualquier manera razonable según el medio, los medios y el contexto en el que Usted comparta el Material Licenciado. Por ejemplo, puede ser razonable cumplir con las condiciones proporcionando un URI o hipervínculo a un recurso que incluya la información requerida.
+
+ 3. Si el Licenciante lo solicita, Usted debe eliminar cualquier información requerida por la Sección 3(a)(1)(A) en la medida razonablemente practicable.
+
+ b. CompartirIgual.
+
+ Además de las condiciones en la Sección 3(a), si Usted comparte Material Adaptado que produce, también se aplican las siguientes condiciones.
+
+ 1. La Licencia del Adaptador que Usted aplique debe ser una licencia de Creative Commons con los mismos Elementos de la Licencia, esta versión o posterior, o una Licencia Compatible BY-SA.
+
+ 2. Usted debe incluir el texto de, o el URI o hipervínculo a, la Licencia del Adaptador que aplique. Usted puede cumplir con esta condición de cualquier manera razonable según el medio, los medios y el contexto en el que Usted comparta el Material Adaptado.
+
+ 3. Usted no puede ofrecer ni imponer ningún término o condición adicional o diferente, ni aplicar ninguna Medida Tecnológica Efectiva al Material Adaptado que restrinja el ejercicio de los derechos otorgados bajo la Licencia del Adaptador que aplique.
+
+Sección 4 -- Derechos de Bases de Datos Sui Generis.
+
+Cuando los Derechos Licenciados incluyen Derechos de Bases de Datos Sui Generis que se aplican a su uso del Material Licenciado:
+
+ a. para evitar dudas, la Sección 2(a)(1) le otorga el derecho a extraer, reutilizar, reproducir y compartir todo o una parte sustancial del contenido de la base de datos;
+
+ b. si Usted incluye todo o una parte sustancial del contenido de la base de datos en una base de datos en la que Usted tiene Derechos de Bases de Datos Sui Generis, entonces la base de datos en la que Usted tiene Derechos de Bases de Datos Sui Generis (pero no su contenido individual) es Material Adaptado, incluyendo para los propósitos de la Sección 3(b); y
+
+ c. Usted debe cumplir con las condiciones en la Sección 3(a) si comparte todo o una parte sustancial del contenido de la base de datos.
+
+Para evitar dudas, esta Sección 4 complementa y no reemplaza sus obligaciones bajo esta Licencia Pública cuando los Derechos Licenciados incluyen otros Derechos de Autor y Derechos Similares.
+
+Sección 5 -- Renuncia de Garantías y Limitación de Responsabilidad.
+
+ a. A MENOS QUE EL LICENCIANTE LO HAYA ASUMIDO SEPARADAMENTE, EN LA MEDIDA DE LO POSIBLE, EL LICENCIANTE OFRECE EL MATERIAL LICENCIADO "TAL CUAL" Y "SEGÚN DISPONIBILIDAD", Y NO HACE REPRESENTACIONES NI GARANTÍAS DE NINGÚN TIPO RESPECTO AL MATERIAL LICENCIADO, YA SEA EXPRESAS, IMPLÍCITAS, ESTATUTARIAS U OTRAS. ESTO INCLUYE, SIN LIMITACIÓN, GARANTÍAS DE TÍTULO, COMERCIABILIDAD, IDONEIDAD PARA UN PROPÓSITO PARTICULAR, NO INFRACCIÓN, AUSENCIA DE DEFECTOS LATENTES U OTROS, PRECISIÓN, O LA PRESENCIA O AUSENCIA DE ERRORES, YA SEAN CONOCIDOS O DESCUBRIBLES. DONDE LAS RENUNCIAS DE GARANTÍAS NO ESTÉN PERMITIDAS EN SU TOTALIDAD O EN PARTE, ESTA RENUNCIA PUEDE NO APLICARSE A USTED.
+
+ b. EN LA MEDIDA DE LO POSIBLE, EN NINGÚN CASO EL LICENCIANTE SERÁ RESPONSABLE ANTE USTED POR NINGUNA TEORÍA LEGAL (INCLUYENDO, SIN LIMITACIÓN, NEGLIGENCIA) O DE OTRO TIPO POR CUALQUIER PÉRDIDA, COSTO, GASTO O DAÑO DIRECTO, ESPECIAL, INDIRECTO, INCIDENTAL, CONSECUENTE, PUNITIVO, EJEMPLAR U OTRO QUE SURJA DE ESTA LICENCIA PÚBLICA O DEL USO DEL MATERIAL LICENCIADO, INCLUSO SI EL LICENCIANTE HA SIDO ADVERTIDO DE LA POSIBILIDAD DE TALES PÉRDIDAS, COSTOS, GASTOS O DAÑOS. DONDE UNA LIMITACIÓN DE RESPONSABILIDAD NO ESTÉ PERMITIDA EN SU TOTALIDAD O EN PARTE, ESTA LIMITACIÓN PUEDE NO APLICARSE A USTED.
+
+ c. La renuncia de garantías y la limitación de responsabilidad proporcionadas anteriormente se interpretarán de una manera que, en la medida de lo posible, se aproxime más a una renuncia absoluta y exención de toda responsabilidad.
+
+Sección 6 -- Plazo y Terminación.
+
+ a. Esta Licencia Pública se aplica por el plazo de los Derechos de Autor y Derechos Similares licenciados aquí. Sin embargo, si Usted no cumple con esta Licencia Pública, sus derechos bajo esta Licencia Pública terminan automáticamente.
+
+  b. Cuando su derecho a usar el Material Licenciado haya terminado bajo la Sección 6(a), se restablece:
+
+ 1. automáticamente a partir de la fecha en que se subsana la violación, siempre que se subsane dentro de los 30 días posteriores a su descubrimiento de la violación; o
+
+         2. mediante restablecimiento expreso por parte del Licenciante.
+
+ Para evitar dudas, esta Sección 6(b) no afecta ningún derecho que el Licenciante pueda tener para buscar remedios por sus violaciones de esta Licencia Pública.
+
+ c. Para evitar dudas, el Licenciante también puede ofrecer el Material Licenciado bajo términos o condiciones separados o dejar de distribuir el Material Licenciado en cualquier momento; sin embargo, hacerlo no terminará esta Licencia Pública.
+
+ d. Las Secciones 1, 5, 6, 7 y 8 sobreviven a la terminación de esta Licencia Pública.
+
+Sección 7 -- Otros Términos y Condiciones.
+
+ a. El Licenciante no estará obligado por ningún término o condición adicional o diferente comunicado por Usted a menos que se acuerde expresamente.
+
+ b. Cualquier arreglo, entendimiento o acuerdo con respecto al Material Licenciado no mencionado aquí es separado e independiente de los términos y condiciones de esta Licencia Pública.
+
+Sección 8 -- Interpretación.
+
+ a. Para evitar dudas, esta Licencia Pública no reduce, limita, restringe ni impone condiciones sobre ningún uso del Material Licenciado que pudiera hacerse legalmente sin permiso bajo esta Licencia Pública.
+
+ b. En la medida de lo posible, si alguna disposición de esta Licencia Pública se considera inaplicable, se reformará automáticamente en la medida mínima necesaria para hacerla aplicable. Si la disposición no puede reformarse, se separará de esta Licencia Pública sin afectar la aplicabilidad de los términos y condiciones restantes.
+
+  c. Ningún término o condición de esta Licencia Pública será renunciado y ningún incumplimiento será consentido a menos que el Licenciante lo acuerde expresamente.
+
+ **Descargo de responsabilidad**:
+ Este documento ha sido traducido utilizando servicios de traducción automática basados en IA. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción humana profesional. No somos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/es/sketchnotes/README.md b/translations/es/sketchnotes/README.md
new file mode 100644
index 000000000..e48a0654f
--- /dev/null
+++ b/translations/es/sketchnotes/README.md
@@ -0,0 +1,10 @@
+Todo el material de sketchnotes del currículo se puede descargar aquí.
+
+🖨 Para imprimir en alta resolución, las versiones TIFF están disponibles en [este repositorio](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff).
+
+🎨 Creado por: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**Descargo de responsabilidad**:
+Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automáticas pueden contener errores o inexactitudes. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda la traducción profesional humana. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/1-intro-to-ML/README.md b/translations/hi/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..3349ecea0
--- /dev/null
+++ b/translations/hi/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# मशीन लर्निंग का परिचय
+
+## [प्रारंभिक क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "शुरुआत के लिए मशीन लर्निंग - शुरुआत के लिए मशीन लर्निंग का परिचय")
+
+> 🎥 इस पाठ के माध्यम से काम करने के लिए ऊपर दी गई छवि पर क्लिक करें।
+
+क्लासिकल मशीन लर्निंग के इस कोर्स में आपका स्वागत है! चाहे आप इस विषय में बिल्कुल नए हों, या एक अनुभवी एमएल प्रैक्टिशनर हों जो किसी क्षेत्र में सुधार करना चाहते हैं, हमें आपके शामिल होने की खुशी है! हम आपके एमएल अध्ययन के लिए एक दोस्ताना लॉन्चिंग स्थान बनाना चाहते हैं और आपकी [प्रतिक्रिया](https://github.com/microsoft/ML-For-Beginners/discussions) का मूल्यांकन करने, उसका उत्तर देने और उसे शामिल करने में हमें खुशी होगी।
+
+[](https://youtu.be/h0e2HAPTGF4 "मशीन लर्निंग का परिचय")
+
+> 🎥 एक वीडियो के लिए ऊपर दी गई छवि पर क्लिक करें: एमआईटी के जॉन गुट्टाग मशीन लर्निंग का परिचय देते हैं
+
+---
+## मशीन लर्निंग के साथ शुरुआत
+
+इस पाठ्यक्रम को शुरू करने से पहले, आपको अपने कंप्यूटर को सेट अप और नोटबुक्स को लोकली चलाने के लिए तैयार करना होगा।
+
+- **इन वीडियो के साथ अपनी मशीन को कॉन्फ़िगर करें**। अपने सिस्टम में [Python को इंस्टॉल कैसे करें](https://youtu.be/CXZYvNRIAKM) और विकास के लिए [टेक्स्ट एडिटर सेटअप कैसे करें](https://youtu.be/EU8eayHWoZg) सीखने के लिए इन लिंक का उपयोग करें।
+- **Python सीखें**। यह भी अनुशंसित है कि आप [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott) की एक बुनियादी समझ रखें, एक प्रोग्रामिंग भाषा जो डेटा वैज्ञानिकों के लिए उपयोगी है और जिसे हम इस कोर्स में उपयोग करते हैं।
+- **Node.js और JavaScript सीखें**। हम इस कोर्स में कुछ बार वेब ऐप्स बनाते समय JavaScript का भी उपयोग करते हैं, इसलिए आपको [node](https://nodejs.org) और [npm](https://www.npmjs.com/) को इंस्टॉल करने की आवश्यकता होगी, साथ ही [Visual Studio Code](https://code.visualstudio.com/) को Python और JavaScript विकास दोनों के लिए उपलब्ध रखना होगा।
+- **GitHub खाता बनाएं**। चूंकि आपने हमें [GitHub](https://github.com) पर पाया है, आपके पास पहले से एक खाता हो सकता है, लेकिन अगर नहीं है, तो एक खाता बनाएं और फिर इस पाठ्यक्रम को अपने उपयोग के लिए फोर्क करें। (हमें एक स्टार देने में भी संकोच न करें 😊)
+- **Scikit-learn को एक्सप्लोर करें**। [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) से परिचित हों, जो एमएल लाइब्रेरीज़ का एक सेट है जिसे हम इन पाठों में संदर्भित करते हैं।
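+
+ऊपर दिए गए सेटअप की एक त्वरित जांच के लिए नीचे एक छोटा स्केच है (यह मानते हुए कि Python और Scikit-learn पहले से इंस्टॉल हैं):
+
+```python
+# जांचें कि पर्यावरण नोटबुक्स चलाने के लिए तैयार है
+import sys
+import sklearn
+
+print("Python:", sys.version)
+print("scikit-learn:", sklearn.__version__)
+```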
+
+---
+## मशीन लर्निंग क्या है?
+
+'मशीन लर्निंग' शब्द आज के सबसे लोकप्रिय और अक्सर उपयोग किए जाने वाले शब्दों में से एक है। यदि आप किसी भी प्रकार की प्रौद्योगिकी से परिचित हैं, तो इस बात की प्रबल संभावना है कि आपने यह शब्द कम से कम एक बार सुना होगा, चाहे आप किसी भी क्षेत्र में काम करते हों। हालांकि, मशीन लर्निंग की यांत्रिकी अधिकांश लोगों के लिए एक रहस्य है। एक मशीन लर्निंग शुरुआती के लिए यह विषय कभी-कभी भारी महसूस हो सकता है। इसलिए, यह समझना महत्वपूर्ण है कि मशीन लर्निंग वास्तव में क्या है, और इसे व्यावहारिक उदाहरणों के माध्यम से चरण-दर-चरण सीखना चाहिए।
+
+---
+## प्रचार वक्र
+
+
+
+> गूगल ट्रेंड्स 'मशीन लर्निंग' शब्द के हाल के 'प्रचार वक्र' को दिखाता है
+
+---
+## एक रहस्यमय ब्रह्मांड
+
+हम एक ऐसे ब्रह्मांड में रहते हैं जो आकर्षक रहस्यों से भरा हुआ है। महान वैज्ञानिक जैसे स्टीफन हॉकिंग, अल्बर्ट आइंस्टीन, और कई अन्य ने हमारे चारों ओर की दुनिया के रहस्यों को उजागर करने वाली महत्वपूर्ण जानकारी खोजने के लिए अपना जीवन समर्पित कर दिया है। यह सीखने की मानवीय स्थिति है: एक मानव बच्चा नई चीजें सीखता है और जैसे-जैसे वह वयस्कता की ओर बढ़ता है, अपने विश्व की संरचना को उजागर करता है।
+
+---
+## बच्चे का मस्तिष्क
+
+एक बच्चे का मस्तिष्क और इंद्रियां अपने परिवेश के तथ्यों को समझते हैं और धीरे-धीरे जीवन के छिपे हुए पैटर्न को सीखते हैं जो बच्चे को सीखे गए पैटर्न की पहचान करने के लिए तार्किक नियम बनाने में मदद करते हैं। मानव मस्तिष्क की सीखने की प्रक्रिया मनुष्यों को इस दुनिया का सबसे परिष्कृत जीवित प्राणी बनाती है। छिपे हुए पैटर्न की खोज करके लगातार सीखना और फिर उन पैटर्न पर नवाचार करना हमें हमारे जीवनकाल में खुद को बेहतर बनाने में सक्षम बनाता है। यह सीखने की क्षमता और विकसित होने की क्षमता एक अवधारणा से संबंधित है जिसे [मस्तिष्क की प्लास्टिसिटी](https://www.simplypsychology.org/brain-plasticity.html) कहा जाता है। सतही तौर पर, हम मानव मस्तिष्क की सीखने की प्रक्रिया और मशीन लर्निंग की अवधारणाओं के बीच कुछ प्रेरणादायक समानताएं खींच सकते हैं।
+
+---
+## मानव मस्तिष्क
+
+[मानव मस्तिष्क](https://www.livescience.com/29365-human-brain.html) वास्तविक दुनिया से चीजों को समझता है, समझी गई जानकारी को संसाधित करता है, तार्किक निर्णय लेता है, और परिस्थितियों के आधार पर कुछ कार्य करता है। इसे हम बुद्धिमानी से व्यवहार करना कहते हैं। जब हम बुद्धिमान व्यवहार प्रक्रिया की नकल को एक मशीन में प्रोग्राम करते हैं, तो इसे कृत्रिम बुद्धिमत्ता (AI) कहा जाता है।
+
+---
+## कुछ शब्दावली
+
+हालांकि ये शब्द आपस में भ्रमित किए जा सकते हैं, मशीन लर्निंग (ML) कृत्रिम बुद्धिमत्ता का एक महत्वपूर्ण उपसमूह है। **एमएल का संबंध विशेष एल्गोरिदम का उपयोग करके सार्थक जानकारी उजागर करने और प्राप्त डेटा में छिपे हुए पैटर्न खोजने से है, ताकि तार्किक निर्णय लेने की प्रक्रिया को समर्थन मिल सके**।
+
+---
+## एआई, एमएल, डीप लर्निंग
+
+
+
+> एआई, एमएल, डीप लर्निंग, और डेटा साइंस के बीच संबंधों को दिखाने वाला एक आरेख। [Jen Looper](https://twitter.com/jenlooper) द्वारा बनाया गया इन्फोग्राफिक, जो [इस ग्राफिक](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining) से प्रेरित है
+
+---
+## कवर करने के लिए अवधारणाएँ
+
+इस पाठ्यक्रम में, हम केवल मशीन लर्निंग की मुख्य अवधारणाओं को कवर करेंगे जो एक शुरुआती को जानना चाहिए। हम मुख्य रूप से Scikit-learn का उपयोग करके 'क्लासिकल मशीन लर्निंग' को कवर करेंगे, एक उत्कृष्ट लाइब्रेरी जिसका उपयोग कई छात्र बुनियादी बातों को सीखने के लिए करते हैं। कृत्रिम बुद्धिमत्ता या डीप लर्निंग की व्यापक अवधारणाओं को समझने के लिए, मशीन लर्निंग का एक मजबूत बुनियादी ज्ञान अपरिहार्य है, और इसलिए हम इसे यहां पेश करना चाहेंगे।
+
+---
+## इस कोर्स में आप सीखेंगे:
+
+- मशीन लर्निंग की मुख्य अवधारणाएँ
+- एमएल का इतिहास
+- एमएल और निष्पक्षता
+- प्रतिगमन एमएल तकनीकें
+- वर्गीकरण एमएल तकनीकें
+- क्लस्टरिंग एमएल तकनीकें
+- प्राकृतिक भाषा प्रसंस्करण एमएल तकनीकें
+- समय श्रृंखला पूर्वानुमान एमएल तकनीकें
+- सुदृढीकरण सीखना
+- एमएल के वास्तविक दुनिया के अनुप्रयोग
+
+---
+## हम क्या कवर नहीं करेंगे
+
+- डीप लर्निंग
+- न्यूरल नेटवर्क्स
+- एआई
+
+बेहतर सीखने के अनुभव के लिए, हम न्यूरल नेटवर्क्स, 'डीप लर्निंग' - न्यूरल नेटवर्क्स का उपयोग करके कई-स्तरीय मॉडल-निर्माण - और एआई की जटिलताओं से बचेंगे, जिसे हम एक अलग पाठ्यक्रम में चर्चा करेंगे। हम एक आगामी डेटा साइंस पाठ्यक्रम भी पेश करेंगे जो इस बड़े क्षेत्र के उस पहलू पर ध्यान केंद्रित करेगा।
+
+---
+## मशीन लर्निंग क्यों पढ़ें?
+
+सिस्टम के दृष्टिकोण से, मशीन लर्निंग को ऐसे स्वचालित सिस्टम बनाने के रूप में परिभाषित किया जाता है जो बुद्धिमान निर्णय लेने में मदद के लिए डेटा से छिपे हुए पैटर्न सीखते हैं।
+
+यह दृष्टिकोण कुछ हद तक इस बात से प्रेरित है कि मानव मस्तिष्क बाहरी दुनिया से प्राप्त डेटा के आधार पर कुछ चीजें कैसे सीखता है।
+
+✅ एक मिनट के लिए सोचें कि एक व्यवसाय हार्ड-कोडेड नियम-आधारित इंजन बनाने के बजाय मशीन लर्निंग रणनीतियों का उपयोग क्यों करना चाहेगा।
+
+---
+## मशीन लर्निंग के अनुप्रयोग
+
+मशीन लर्निंग के अनुप्रयोग अब लगभग हर जगह हैं, और हमारे स्मार्टफोन, कनेक्टेड उपकरणों और अन्य प्रणालियों द्वारा उत्पन्न होकर हमारे समाजों में बहने वाले डेटा की तरह ही सर्वव्यापी हैं। अत्याधुनिक मशीन लर्निंग एल्गोरिदम की अपार संभावनाओं को देखते हुए, शोधकर्ता इस बात की खोज कर रहे हैं कि ये एल्गोरिदम बहुआयामी और बहु-विषयक वास्तविक जीवन की समस्याओं को सकारात्मक परिणामों के साथ किस हद तक हल कर सकते हैं।
+
+---
+## लागू एमएल के उदाहरण
+
+**आप कई तरीकों से मशीन लर्निंग का उपयोग कर सकते हैं**:
+
+- एक रोगी के चिकित्सा इतिहास या रिपोर्ट से बीमारी की संभावना की भविष्यवाणी करने के लिए।
+- मौसम की घटनाओं की भविष्यवाणी करने के लिए मौसम डेटा का लाभ उठाने के लिए।
+- एक पाठ की भावना को समझने के लिए।
+- प्रचार के प्रसार को रोकने के लिए नकली समाचारों का पता लगाने के लिए।
+
+वित्त, अर्थशास्त्र, पृथ्वी विज्ञान, अंतरिक्ष अन्वेषण, बायोमेडिकल इंजीनियरिंग, संज्ञानात्मक विज्ञान, और यहां तक कि मानविकी के क्षेत्र भी अपने डोमेन की कठिन, डेटा-प्रसंस्करण भारी समस्याओं को हल करने के लिए मशीन लर्निंग को अनुकूलित कर चुके हैं।
+
+---
+## निष्कर्ष
+
+मशीन लर्निंग वास्तविक दुनिया के या उत्पन्न किए गए डेटा से सार्थक अंतर्दृष्टि खोजकर पैटर्न-खोज की प्रक्रिया को स्वचालित करती है। इसने व्यापार, स्वास्थ्य और वित्त सहित कई क्षेत्रों में खुद को अत्यधिक मूल्यवान साबित किया है।
+
+निकट भविष्य में, किसी भी डोमेन के लोगों के लिए मशीन लर्निंग की बुनियादी बातों को समझना एक आवश्यकता बनने जा रहा है क्योंकि इसका व्यापक रूप से अपनाया जा रहा है।
+
+---
+# 🚀 चुनौती
+
+कागज पर या [Excalidraw](https://excalidraw.com/) जैसे ऑनलाइन ऐप का उपयोग करके, एआई, एमएल, डीप लर्निंग, और डेटा साइंस के बीच अंतर की अपनी समझ का स्केच बनाएं। साथ ही कुछ ऐसी समस्याओं के उदाहरण जोड़ें जिन्हें हल करने में इनमें से प्रत्येक तकनीक अच्छी है।
+
+# [पाठ के बाद का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# समीक्षा और आत्म अध्ययन
+
+क्लाउड में एमएल एल्गोरिदम के साथ कैसे काम कर सकते हैं, यह जानने के लिए इस [लर्निंग पाथ](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott) का पालन करें।
+
+एमएल की मूल बातें के बारे में सीखने के लिए इस [लर्निंग पाथ](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) को लें।
+
+---
+# असाइनमेंट
+
+[शुरू करें](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/1-intro-to-ML/assignment.md b/translations/hi/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..28d944c57
--- /dev/null
+++ b/translations/hi/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# शुरुआत करें और चलाएं
+
+## निर्देश
+
+इस गैर-ग्रेडेड असाइनमेंट में, आपको Python का अभ्यास दोहराना चाहिए और अपने पर्यावरण को इस तरह सेट अप करना चाहिए कि आप उसमें नोटबुक्स चला सकें।
+
+इस [Python Learning Path](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott) को लें, और फिर इन प्रारंभिक वीडियो को देखकर अपने सिस्टम्स को सेटअप करें:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/2-history-of-ML/README.md b/translations/hi/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..dafb61f8d
--- /dev/null
+++ b/translations/hi/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# मशीन लर्निंग का इतिहास
+
+
+> स्केच नोट [Tomomi Imura](https://www.twitter.com/girlie_mac) द्वारा
+
+## [व्याख्यान से पहले का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[](https://youtu.be/N6wxM4wZ7V0 "शुरुआती लोगों के लिए मशीन लर्निंग - मशीन लर्निंग का इतिहास")
+
+> 🎥 इस पाठ के माध्यम से काम करने के लिए ऊपर दी गई छवि पर क्लिक करें।
+
+इस पाठ में, हम मशीन लर्निंग और कृत्रिम बुद्धिमत्ता के इतिहास में प्रमुख मील के पत्थरों पर चर्चा करेंगे।
+
+कृत्रिम बुद्धिमत्ता (AI) के क्षेत्र का इतिहास मशीन लर्निंग के इतिहास से जुड़ा हुआ है, क्योंकि ML को आधार देने वाले एल्गोरिदम और कम्प्यूटेशनल प्रगति ने AI के विकास में योगदान दिया। यह याद रखना उपयोगी है कि, जबकि ये क्षेत्र 1950 के दशक में विशिष्ट अनुसंधान क्षेत्रों के रूप में क्रिस्टलाइज होने लगे थे, महत्वपूर्ण [एल्गोरिदमिक, सांख्यिकीय, गणितीय, कम्प्यूटेशनल और तकनीकी खोजें](https://wikipedia.org/wiki/Timeline_of_machine_learning) इस युग से पहले और साथ ही साथ हुई थीं। वास्तव में, लोग इन प्रश्नों के बारे में [सैकड़ों वर्षों](https://wikipedia.org/wiki/History_of_artificial_intelligence) से सोचते आ रहे हैं: यह लेख 'सोचने वाली मशीन' के विचार की ऐतिहासिक बौद्धिक नींव पर चर्चा करता है।
+
+---
+## उल्लेखनीय खोजें
+
+- 1763, 1812 [बेयस प्रमेय](https://wikipedia.org/wiki/Bayes%27_theorem) और इसके पूर्ववर्ती। यह प्रमेय और इसके अनुप्रयोग अनुमान का आधार बनाते हैं, जो पूर्व ज्ञान के आधार पर किसी घटना के घटित होने की संभावना का वर्णन करते हैं।
+- 1805 [लीस्ट स्क्वेयर थ्योरी](https://wikipedia.org/wiki/Least_squares) फ्रांसीसी गणितज्ञ एड्रियन-मैरी लेजेंड्रे द्वारा। यह सिद्धांत, जिसके बारे में आप हमारे रिग्रेशन यूनिट में जानेंगे, डेटा फिटिंग में मदद करता है।
+- 1913 [मार्कोव चेन](https://wikipedia.org/wiki/Markov_chain), जिसका नाम रूसी गणितज्ञ आंद्रे मार्कोव के नाम पर रखा गया है, एक पूर्व स्थिति के आधार पर संभावित घटनाओं की अनुक्रम का वर्णन करने के लिए उपयोग किया जाता है।
+- 1957 [परसेप्ट्रॉन](https://wikipedia.org/wiki/Perceptron) एक प्रकार का रैखिक वर्गीकरणकर्ता है जिसका आविष्कार अमेरिकी मनोवैज्ञानिक फ्रैंक रोसेनब्लाट ने किया था और जो गहन शिक्षा में प्रगति का आधार बनता है।
+
+---
+
+- 1967 [निकटतम पड़ोसी](https://wikipedia.org/wiki/Nearest_neighbor) एक एल्गोरिदम है जिसे मूल रूप से मार्गों को मैप करने के लिए डिज़ाइन किया गया था। एमएल संदर्भ में इसका उपयोग पैटर्न का पता लगाने के लिए किया जाता है।
+- 1970 [बैकप्रोपेगेशन](https://wikipedia.org/wiki/Backpropagation) का उपयोग [फीडफॉरवर्ड न्यूरल नेटवर्क](https://wikipedia.org/wiki/Feedforward_neural_network) को प्रशिक्षित करने के लिए किया जाता है।
+- 1982 [रिकरंट न्यूरल नेटवर्क](https://wikipedia.org/wiki/Recurrent_neural_network) कृत्रिम न्यूरल नेटवर्क हैं जो फीडफॉरवर्ड न्यूरल नेटवर्क से प्राप्त होते हैं और अस्थायी ग्राफ़ बनाते हैं।
+
+✅ थोड़ा शोध करें। एमएल और एआई के इतिहास में कौन सी अन्य तिथियाँ महत्वपूर्ण हैं?
+
+---
+## 1950: सोचने वाली मशीनें
+
+एलन ट्यूरिंग, एक वास्तव में उल्लेखनीय व्यक्ति जिन्हें [2019 में जनता द्वारा](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) 20वीं सदी के सबसे महान वैज्ञानिक के रूप में वोट दिया गया, को 'सोचने वाली मशीन' की अवधारणा की नींव रखने में मदद करने का श्रेय दिया जाता है। आलोचकों का सामना करने और इस अवधारणा के अनुभवजन्य प्रमाण की अपनी आवश्यकता को पूरा करने के लिए, उन्होंने आंशिक रूप से [ट्यूरिंग टेस्ट](https://www.bbc.com/news/technology-18475646) बनाया, जिसे आप हमारे एनएलपी पाठों में एक्सप्लोर करेंगे।
+
+---
+## 1956: डार्टमाउथ समर रिसर्च प्रोजेक्ट
+
+"कृत्रिम बुद्धिमत्ता पर डार्टमाउथ समर रिसर्च प्रोजेक्ट कृत्रिम बुद्धिमत्ता के क्षेत्र के लिए एक महत्वपूर्ण घटना थी," और यहीं पर 'कृत्रिम बुद्धिमत्ता' शब्द गढ़ा गया ([स्रोत](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth))।
+
+> सीखने या बुद्धिमत्ता की किसी भी अन्य विशेषता के हर पहलू को सैद्धांतिक रूप से इतनी सटीकता से वर्णित किया जा सकता है कि एक मशीन को इसे अनुकरण करने के लिए बनाया जा सकता है।
+
+---
+
+मुख्य शोधकर्ता, गणित के प्रोफेसर जॉन मैकार्थी ने "इस अनुमान के आधार पर आगे बढ़ने की उम्मीद की कि सीखने या बुद्धिमत्ता की किसी भी अन्य विशेषता के हर पहलू को सैद्धांतिक रूप से इतनी सटीकता से वर्णित किया जा सकता है कि एक मशीन को इसे अनुकरण करने के लिए बनाया जा सकता है।" प्रतिभागियों में इस क्षेत्र के एक अन्य प्रमुख व्यक्ति, मार्विन मिंस्की भी शामिल थे।
+
+इस कार्यशाला का श्रेय कई चर्चाओं को शुरू करने और प्रोत्साहित करने के लिए दिया जाता है, जिसमें "प्रतीकात्मक तरीकों का उदय, सीमित डोमेन पर केंद्रित सिस्टम (प्रारंभिक विशेषज्ञ सिस्टम), और डिडक्टिव सिस्टम बनाम इंडक्टिव सिस्टम" शामिल हैं। ([स्रोत](https://wikipedia.org/wiki/Dartmouth_workshop))।
+
+---
+## 1956 - 1974: "स्वर्णिम वर्ष"
+
+1950 के दशक से लेकर 70 के दशक के मध्य तक, एआई से कई समस्याओं को हल करने की उम्मीद में बहुत आशावाद था। 1967 में, मार्विन मिंस्की ने आत्मविश्वास से कहा कि "एक पीढ़ी के भीतर ... 'कृत्रिम बुद्धिमत्ता' बनाने की समस्या काफी हद तक हल हो जाएगी।" (मिंस्की, मार्विन (1967), कम्प्यूटेशन: फाइनाइट और इनफिनाइट मशीनें, एंगलवुड क्लिफ्स, एन.जे.: प्रेंटिस-हॉल)
+
+प्राकृतिक भाषा प्रसंस्करण अनुसंधान फल-फूल रहा था, खोज को परिष्कृत और अधिक शक्तिशाली बनाया गया, और 'माइक्रो-वर्ल्ड्स' की अवधारणा बनाई गई, जहां साधारण कार्यों को सादा भाषा निर्देशों का उपयोग करके पूरा किया गया।
+
+---
+
+अनुसंधान को सरकारी एजेंसियों द्वारा अच्छी तरह से वित्त पोषित किया गया था, कम्प्यूटेशन और एल्गोरिदम में प्रगति हुई, और बुद्धिमान मशीनों के प्रोटोटाइप बनाए गए। इनमें से कुछ मशीनें शामिल हैं:
+
+* [शेकी द रोबोट](https://wikipedia.org/wiki/Shakey_the_robot), जो कार्यों को 'बुद्धिमानी से' करने के लिए निर्णय ले सकता था।
+
+ 
+ > 1972 में शेकी
+
+---
+
+* एलिजा, एक प्रारंभिक 'चैटरबॉट', लोगों से बात कर सकता था और एक आदिम 'चिकित्सक' के रूप में कार्य कर सकता था। आप एनएलपी पाठों में एलिजा के बारे में और जानेंगे।
+
+ 
+ > एलिजा का एक संस्करण, एक चैटबॉट
+
+---
+
+* "ब्लॉक्स वर्ल्ड" एक माइक्रो-वर्ल्ड का उदाहरण था जहां ब्लॉक्स को स्टैक और सॉर्ट किया जा सकता था, और मशीनों को निर्णय लेना सिखाने के प्रयोग किए जा सकते थे। [SHRDLU](https://wikipedia.org/wiki/SHRDLU) जैसी लाइब्रेरी के साथ निर्मित प्रगति ने भाषा प्रसंस्करण को आगे बढ़ाने में मदद की।
+
+ [](https://www.youtube.com/watch?v=QAJz4YKUwqw "SHRDLU के साथ ब्लॉक्स वर्ल्ड")
+
+ > 🎥 वीडियो के लिए ऊपर दी गई छवि पर क्लिक करें: SHRDLU के साथ ब्लॉक्स वर्ल्ड
+
+---
+## 1974 - 1980: "एआई विंटर"
+
+1970 के दशक के मध्य तक, यह स्पष्ट हो गया था कि 'बुद्धिमान मशीनें' बनाने की जटिलता को कम करके आंका गया था और उपलब्ध कम्प्यूट पावर को देखते हुए इसकी संभावनाओं को बढ़ा-चढ़ा कर पेश किया गया था। वित्त पोषण सूख गया और क्षेत्र में विश्वास धीमा हो गया। कुछ मुद्दे जिन्होंने विश्वास को प्रभावित किया उनमें शामिल थे:
+---
+- **सीमाएँ**. कम्प्यूट पावर बहुत सीमित थी।
+- **कॉम्बिनेटोरियल विस्फोट**. जैसे-जैसे कंप्यूटरों से अधिक अपेक्षाएं की गईं, प्रशिक्षित किए जाने वाले मापदंडों की संख्या तेजी से बढ़ती गई, जबकि कम्प्यूट पावर और क्षमता का समानांतर विकास नहीं हुआ।
+- **डेटा की कमी**. परीक्षण, विकास और एल्गोरिदम को परिष्कृत करने की प्रक्रिया में डेटा की कमी बाधा उत्पन्न कर रही थी।
+- **क्या हम सही प्रश्न पूछ रहे हैं?**. जिन प्रश्नों को पूछा जा रहा था, वे ही सवालों के घेरे में आने लगे। शोधकर्ताओं ने अपने दृष्टिकोणों के बारे में आलोचना का सामना करना शुरू किया:
+ - ट्यूरिंग परीक्षणों पर 'चीनी कक्ष सिद्धांत' के माध्यम से प्रश्न उठाए गए, जिसमें कहा गया था कि, "एक डिजिटल कंप्यूटर को प्रोग्राम करना भाषा को समझने का आभास दे सकता है, लेकिन वास्तविक समझ उत्पन्न नहीं कर सकता।" ([स्रोत](https://plato.stanford.edu/entries/chinese-room/))
+ - समाज में "चिकित्सक" एलिजा जैसी कृत्रिम बुद्धिमत्ता को पेश करने के नैतिकता पर सवाल उठाया गया।
+
+---
+
+इसी समय, विभिन्न एआई विचारधाराओं का गठन होने लगा। ["स्क्रफी" बनाम "नीट एआई"](https://wikipedia.org/wiki/Neats_and_scruffies) प्रथाओं के बीच एक द्वैतता स्थापित की गई। _स्क्रफी_ प्रयोगशालाएं घंटों तक प्रोग्रामों को तब तक समायोजित करती रहीं जब तक उन्हें वांछित परिणाम नहीं मिल गए। _नीट_ प्रयोगशालाएं "तर्क और औपचारिक समस्या समाधान पर केंद्रित थीं"। एलिजा और SHRDLU प्रसिद्ध _स्क्रफी_ सिस्टम थे। 1980 के दशक में, जब एमएल सिस्टम को पुनरुत्पादन योग्य बनाने की मांग उभरी, तो _नीट_ दृष्टिकोण ने धीरे-धीरे प्रमुखता प्राप्त की क्योंकि इसके परिणाम अधिक व्याख्यात्मक हैं।
+
+---
+## 1980 के दशक के विशेषज्ञ सिस्टम
+
+जैसे-जैसे क्षेत्र बढ़ता गया, इसका व्यवसाय के लिए लाभ स्पष्ट होता गया, और 1980 के दशक में 'विशेषज्ञ सिस्टम' का प्रसार भी हुआ। "विशेषज्ञ सिस्टम कृत्रिम बुद्धिमत्ता (एआई) सॉफ़्टवेयर के पहले वास्तव में सफल रूपों में से थे।" ([स्रोत](https://wikipedia.org/wiki/Expert_system))।
+
+इस प्रकार की प्रणाली वास्तव में _हाइब्रिड_ है, जिसमें आंशिक रूप से व्यापार आवश्यकताओं को परिभाषित करने वाला एक नियम इंजन, और एक अनुमान इंजन शामिल है जो नए तथ्यों को निष्कर्षित करने के लिए नियम प्रणाली का लाभ उठाता है।
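+
+नियम इंजन और अनुमान इंजन के इस विचार को स्पष्ट करने के लिए नीचे एक अत्यंत सरल, काल्पनिक स्केच दिया गया है (तथ्य और नियम केवल उदाहरण हैं; वास्तविक विशेषज्ञ सिस्टम इससे कहीं अधिक जटिल होते थे):
+
+```python
+# एक न्यूनतम forward-chaining अनुमान इंजन का स्केच
+facts = {"bird", "has_wings"}               # ज्ञात तथ्य
+rules = [
+    ({"bird"}, "can_fly"),                  # यदि bird है, तो can_fly निष्कर्षित करें
+    ({"can_fly", "has_wings"}, "airborne"),
+]
+
+changed = True
+while changed:                              # तब तक दोहराएं जब तक नए तथ्य निकलते रहें
+    changed = False
+    for premises, conclusion in rules:
+        if premises <= facts and conclusion not in facts:
+            facts.add(conclusion)           # नियम से नया तथ्य निष्कर्षित हुआ
+            changed = True
+
+print(facts)  # अपेक्षित सदस्य: bird, has_wings, can_fly, airborne
+```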
+
+इस युग में न्यूरल नेटवर्क पर भी बढ़ता हुआ ध्यान दिया गया।
+
+---
+## 1987 - 1993: एआई 'चिल'
+
+विशेषीकृत विशेषज्ञ सिस्टम हार्डवेयर का प्रसार, दुर्भाग्य से, अत्यधिक विशेषीकृत हो जाने के कारण उल्टा पड़ गया। व्यक्तिगत कंप्यूटरों का उदय भी इन बड़े, विशिष्ट, केंद्रीकृत प्रणालियों से प्रतिस्पर्धा कर रहा था। कंप्यूटिंग का लोकतंत्रीकरण शुरू हो चुका था, और इसने अंततः बिग डेटा के आधुनिक विस्फोट का मार्ग प्रशस्त किया।
+
+---
+## 1993 - 2011
+
+इस अवधि में एमएल और एआई के लिए एक नया युग शुरू हुआ, जिसमें वे उन समस्याओं को हल कर पाए जो पहले डेटा और कम्प्यूट पावर की कमी के कारण बनी रहती थीं। डेटा की मात्रा तेजी से बढ़ने लगी और अधिक व्यापक रूप से उपलब्ध होने लगी, अच्छे और बुरे दोनों परिणामों के साथ, विशेष रूप से 2007 के आसपास स्मार्टफोन के आगमन के साथ। कम्प्यूट पावर तेजी से बढ़ी, और एल्गोरिदम उसके साथ-साथ विकसित हुए। क्षेत्र परिपक्व होने लगा, क्योंकि पिछले दिनों की स्वच्छंदता एक सच्चे अनुशासन में क्रिस्टलाइज होने लगी।
+
+---
+## अब
+
+आज मशीन लर्निंग और एआई हमारे जीवन के लगभग हर हिस्से को छूते हैं। यह युग इन एल्गोरिदम के मानव जीवन पर संभावित प्रभावों और जोखिमों की सावधानीपूर्वक समझ की मांग करता है। जैसा कि माइक्रोसॉफ्ट के ब्रैड स्मिथ ने कहा है, "सूचना प्रौद्योगिकी ऐसे मुद्दे उठाती है जो गोपनीयता और अभिव्यक्ति की स्वतंत्रता जैसे मौलिक मानवाधिकारों के संरक्षण के मूल में जाते हैं। ये मुद्दे उन तकनीकी कंपनियों के लिए जिम्मेदारी बढ़ाते हैं जो इन उत्पादों का निर्माण करती हैं। हमारे विचार में, वे विचारशील सरकारी विनियमन और स्वीकार्य उपयोगों के आसपास मानदंडों के विकास के लिए भी कहते हैं" ([स्रोत](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/))।
+
+---
+
+यह देखना बाकी है कि भविष्य क्या रखता है, लेकिन इन कंप्यूटर सिस्टम और वे सॉफ़्टवेयर और एल्गोरिदम को समझना महत्वपूर्ण है जो वे चलाते हैं। हमें उम्मीद है कि यह पाठ्यक्रम आपको बेहतर समझ हासिल करने में मदद करेगा ताकि आप स्वयं निर्णय ले सकें।
+
+[](https://www.youtube.com/watch?v=mTtDfKgLm54 "डीप लर्निंग का इतिहास")
+> 🎥 वीडियो के लिए ऊपर दी गई छवि पर क्लिक करें: यान लेकुन इस व्याख्यान में डीप लर्निंग के इतिहास पर चर्चा करते हैं
+
+---
+## 🚀चुनौती
+
+इन ऐतिहासिक क्षणों में से किसी एक में गहराई से अध्ययन करें और उनके पीछे के लोगों के बारे में अधिक जानें। वहाँ दिलचस्प पात्र हैं, और कोई भी वैज्ञानिक खोज कभी भी सांस्कृतिक निर्वात में नहीं बनाई गई थी। आप क्या खोजते हैं?
+
+## [व्याख्यान के बाद का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## समीक्षा और स्व-अध्ययन
+
+यहां देखने और सुनने के लिए आइटम हैं:
+
+[यह पॉडकास्ट जहां एमी बॉयड एआई के विकास पर चर्चा करती हैं](http://runasradio.com/Shows/Show/739)
+[](https://www.youtube.com/watch?v=EJt3_bFYKss "The history of AI by Amy Boyd")
+
+---
+
+## असाइनमेंट
+
+[एक टाइमलाइन बनाएं](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/2-history-of-ML/assignment.md b/translations/hi/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..8673c31cf
--- /dev/null
+++ b/translations/hi/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# एक टाइमलाइन बनाएँ
+
+## निर्देश
+
+[इस रिपॉजिटरी](https://github.com/Digital-Humanities-Toolkit/timeline-builder) का उपयोग करके, एल्गोरिदम, गणित, सांख्यिकी, एआई, या एमएल के इतिहास के किसी पहलू की एक टाइमलाइन बनाएँ, या इनका संयोजन करें। आप एक व्यक्ति, एक विचार, या विचारों के लंबे समयावधि पर ध्यान केंद्रित कर सकते हैं। सुनिश्चित करें कि आप मल्टीमीडिया तत्व जोड़ें।
+
+## मूल्यांकन मापदंड
+
+| मापदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता है |
+| -------- | ------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------ |
+| | एक प्रकाशित टाइमलाइन GitHub पेज के रूप में प्रस्तुत की गई है | कोड अधूरा है और प्रकाशित नहीं है | टाइमलाइन अधूरी है, अच्छी तरह से शोध नहीं की गई है और प्रकाशित नहीं है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में अधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/3-fairness/README.md b/translations/hi/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..a2330cff2
--- /dev/null
+++ b/translations/hi/1-Introduction/3-fairness/README.md
@@ -0,0 +1,131 @@
+# जिम्मेदार AI के साथ मशीन लर्निंग समाधान बनाना
+
+
+> स्केच नोट द्वारा [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [पूर्व-व्याख्यान क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## परिचय
+
+इस पाठ्यक्रम में, आप जानेंगे कि मशीन लर्निंग कैसे और किस प्रकार हमारे दैनिक जीवन को प्रभावित कर रही है। आज भी, सिस्टम और मॉडल स्वास्थ्य देखभाल निदान, ऋण स्वीकृति या धोखाधड़ी का पता लगाने जैसे दैनिक निर्णय लेने के कार्यों में शामिल हैं। इसलिए, यह महत्वपूर्ण है कि ये मॉडल अच्छे से काम करें और विश्वसनीय परिणाम प्रदान करें। जैसे किसी भी सॉफ्टवेयर एप्लिकेशन में, AI सिस्टम भी उम्मीदों को पूरा करने में असफल हो सकते हैं या अवांछित परिणाम दे सकते हैं। यही कारण है कि AI मॉडल के व्यवहार को समझना और समझाना आवश्यक है।
+
+कल्पना करें कि जब आप इन मॉडलों को बनाने के लिए जिस डेटा का उपयोग कर रहे हैं उसमें कुछ जनसांख्यिकी की कमी हो, जैसे कि जाति, लिंग, राजनीतिक दृष्टिकोण, धर्म, या अनुपातहीन रूप से कुछ जनसांख्यिकी का प्रतिनिधित्व करता हो। क्या होगा जब मॉडल के आउटपुट को किसी जनसांख्यिकी के पक्ष में व्याख्या की जाती है? एप्लिकेशन के लिए इसका परिणाम क्या होगा? इसके अलावा, जब मॉडल का प्रतिकूल परिणाम होता है और यह लोगों के लिए हानिकारक होता है, तो क्या होता है? AI सिस्टम के व्यवहार के लिए कौन जिम्मेदार है? ये कुछ सवाल हैं जिन्हें हम इस पाठ्यक्रम में खोजेंगे।
+
+इस पाठ में, आप:
+
+- मशीन लर्निंग में निष्पक्षता और निष्पक्षता-संबंधित हानियों के महत्व के बारे में जागरूकता बढ़ाएंगे।
+- विश्वसनीयता और सुरक्षा सुनिश्चित करने के लिए अपवादों और असामान्य परिदृश्यों की जांच करने के अभ्यास से परिचित होंगे।
+- समावेशी सिस्टम डिज़ाइन करके सभी को सशक्त बनाने की आवश्यकता को समझेंगे।
+- डेटा और लोगों की गोपनीयता और सुरक्षा की रक्षा करने के महत्व का अन्वेषण करेंगे।
+- AI मॉडल के व्यवहार को समझाने के लिए एक ग्लास बॉक्स दृष्टिकोण के महत्व को देखेंगे।
+- यह समझेंगे कि AI सिस्टम में विश्वास बनाने के लिए जवाबदेही कैसे महत्वपूर्ण है।
+
+## आवश्यकताएं
+
+पूर्व-आवश्यकता के रूप में, कृपया "जिम्मेदार AI सिद्धांत" लर्निंग पाथ को पूरा करें और नीचे दिया गया वीडियो देखें:
+
+जिम्मेदार AI के बारे में अधिक जानने के लिए इस [लर्निंग पाथ](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott) का अनुसरण करें।
+
+[](https://youtu.be/dnC8-uUZXSc "Microsoft का जिम्मेदार AI के प्रति दृष्टिकोण")
+
+> 🎥 ऊपर की छवि पर क्लिक करें एक वीडियो के लिए: Microsoft का जिम्मेदार AI के प्रति दृष्टिकोण
+
+## निष्पक्षता
+
+AI सिस्टम को सभी के साथ निष्पक्षता से पेश आना चाहिए और समान समूहों के लोगों को अलग-अलग तरीके से प्रभावित करने से बचना चाहिए। उदाहरण के लिए, जब AI सिस्टम चिकित्सा उपचार, ऋण आवेदन या रोजगार पर मार्गदर्शन प्रदान करते हैं, तो उन्हें समान लक्षणों, वित्तीय परिस्थितियों या पेशेवर योग्यताओं वाले सभी लोगों को समान सिफारिशें देनी चाहिए। हम में से प्रत्येक व्यक्ति में विरासत में मिले पूर्वाग्रह होते हैं जो हमारे निर्णयों और कार्यों को प्रभावित करते हैं। ये पूर्वाग्रह उस डेटा में झलक सकते हैं जिसका उपयोग हम AI सिस्टम को प्रशिक्षित करने के लिए करते हैं। पूर्वाग्रह कभी-कभी अनजाने में भी शामिल हो जाता है। यह सचेत रूप से जान पाना अक्सर कठिन होता है कि आप डेटा में पूर्वाग्रह कब शामिल कर रहे हैं।
+
+**“अन्याय”** में एक समूह के लोगों के लिए नकारात्मक प्रभाव, या “हानियां” शामिल हैं, जैसे कि जाति, लिंग, आयु या विकलांगता स्थिति के संदर्भ में परिभाषित। मुख्य निष्पक्षता-संबंधित हानियों को निम्नलिखित प्रकारों में वर्गीकृत किया जा सकता है:
+
+- **आवंटन**, यदि उदाहरण के लिए एक लिंग या जातीयता को दूसरे पर प्राथमिकता दी जाती है।
+- **सेवा की गुणवत्ता**। यदि आप डेटा को एक विशिष्ट परिदृश्य के लिए प्रशिक्षित करते हैं लेकिन वास्तविकता बहुत अधिक जटिल है, तो यह खराब प्रदर्शन करने वाली सेवा की ओर ले जाता है। उदाहरण के लिए, एक हाथ साबुन डिस्पेंसर जो गहरे रंग की त्वचा वाले लोगों को पहचानने में सक्षम नहीं लगता था। [संदर्भ](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **निंदा**। किसी चीज़ या किसी को अनुचित तरीके से आलोचना और लेबल करना। उदाहरण के लिए, एक छवि लेबलिंग तकनीक ने कुख्यात रूप से गहरे रंग की त्वचा वाले लोगों की छवियों को गोरिल्ला के रूप में गलत लेबल किया।
+- **अधिक या कम प्रतिनिधित्व**। विचार यह है कि एक निश्चित समूह को एक निश्चित पेशे में नहीं देखा जाता है, और कोई भी सेवा या कार्य जो इसे बढ़ावा देना जारी रखता है, वह हानि में योगदान दे रहा है।
+- **रूढ़िवादिता**। एक दिए गए समूह को पूर्व-निर्धारित गुणों के साथ जोड़ना। उदाहरण के लिए, अंग्रेजी और तुर्की के बीच एक भाषा अनुवाद प्रणाली में लिंग से जुड़े शब्दों के कारण गलतियाँ हो सकती हैं।
+
+
+> तुर्की में अनुवाद
+
+
+> अंग्रेजी में अनुवाद
+
+AI सिस्टम को डिजाइन और परीक्षण करते समय, हमें यह सुनिश्चित करने की आवश्यकता है कि AI निष्पक्ष है और पूर्वाग्रहपूर्ण या भेदभावपूर्ण निर्णय लेने के लिए प्रोग्राम नहीं किया गया है, जिन्हें मनुष्यों को भी लेने की अनुमति नहीं है। AI और मशीन लर्निंग में निष्पक्षता सुनिश्चित करना एक जटिल सामाजिक-तकनीकी चुनौती बनी हुई है।
+
+### विश्वसनीयता और सुरक्षा
+
+विश्वास बनाने के लिए, AI सिस्टम को सामान्य और अप्रत्याशित परिस्थितियों में विश्वसनीय, सुरक्षित और सुसंगत होना चाहिए। यह जानना महत्वपूर्ण है कि AI सिस्टम विभिन्न स्थितियों में कैसे व्यवहार करेंगे, विशेष रूप से जब वे अपवाद होते हैं। AI समाधान बनाते समय, उन परिस्थितियों की एक विस्तृत विविधता को संभालने पर ध्यान केंद्रित करने की आवश्यकता होती है जिनका AI समाधान सामना करेगा। उदाहरण के लिए, एक स्व-चालित कार को लोगों की सुरक्षा को शीर्ष प्राथमिकता के रूप में रखना चाहिए। परिणामस्वरूप, कार को शक्ति देने वाले AI को सभी संभावित परिदृश्यों पर विचार करना चाहिए जिनका कार सामना कर सकती है, जैसे रात, तूफान या बर्फबारी, सड़क पर दौड़ते हुए बच्चे, पालतू जानवर, सड़क निर्माण आदि। AI सिस्टम कितनी अच्छी तरह से एक विस्तृत श्रृंखला की स्थितियों को विश्वसनीय और सुरक्षित तरीके से संभाल सकता है, यह उस स्तर को दर्शाता है जिसे डेटा वैज्ञानिक या AI डेवलपर ने सिस्टम के डिजाइन या परीक्षण के दौरान विचार किया था।
+
+> [🎥 वीडियो के लिए यहां क्लिक करें: ](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### समावेशिता
+
+AI सिस्टम को सभी को संलग्न और सशक्त बनाने के लिए डिज़ाइन किया जाना चाहिए। AI सिस्टम को डिजाइन और कार्यान्वित करते समय, डेटा वैज्ञानिक और AI डेवलपर्स सिस्टम में ऐसी संभावित बाधाओं की पहचान करते हैं और उन्हें संबोधित करते हैं जो अनजाने में लोगों को बाहर कर सकती हैं। उदाहरण के लिए, दुनिया भर में लगभग 1 बिलियन लोग विकलांगता से ग्रस्त हैं। AI की प्रगति के साथ, वे अपनी दैनिक जिंदगी में अधिक आसानी से विभिन्न प्रकार की जानकारी और अवसरों तक पहुंच सकते हैं। बाधाओं को संबोधित करने से ऐसे नवाचार और बेहतर अनुभवों वाले AI उत्पादों के विकास के अवसर पैदा होते हैं जो सभी के लिए लाभकारी होते हैं।
+
+> [🎥 वीडियो के लिए यहां क्लिक करें: AI में समावेशिता](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### सुरक्षा और गोपनीयता
+
+AI सिस्टम को सुरक्षित और लोगों की गोपनीयता का सम्मान करना चाहिए। लोग उन सिस्टमों पर कम विश्वास करते हैं जो उनकी गोपनीयता, जानकारी या जीवन को जोखिम में डालते हैं। मशीन लर्निंग मॉडल को प्रशिक्षित करते समय, हम सबसे अच्छे परिणाम प्राप्त करने के लिए डेटा पर निर्भर होते हैं। ऐसा करते समय, डेटा की उत्पत्ति और अखंडता पर विचार करना आवश्यक है। उदाहरण के लिए, क्या डेटा उपयोगकर्ता द्वारा प्रस्तुत किया गया था या सार्वजनिक रूप से उपलब्ध था? इसके बाद, डेटा के साथ काम करते समय, यह महत्वपूर्ण है कि AI सिस्टम को गोपनीय जानकारी की रक्षा करने और हमलों का प्रतिरोध करने के लिए विकसित किया जाए। जैसे-जैसे AI अधिक प्रचलित हो रहा है, गोपनीयता की रक्षा और महत्वपूर्ण व्यक्तिगत और व्यावसायिक जानकारी की सुरक्षा अधिक महत्वपूर्ण और जटिल हो रही है। गोपनीयता और डेटा सुरक्षा के मुद्दों को AI के लिए विशेष रूप से करीब ध्यान देने की आवश्यकता होती है क्योंकि डेटा तक पहुंच AI सिस्टम के लिए लोगों के बारे में सटीक और सूचित पूर्वानुमान और निर्णय लेने के लिए आवश्यक होती है।
+
+> [🎥 वीडियो के लिए यहां क्लिक करें: AI में सुरक्षा](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- उद्योग के रूप में हमने गोपनीयता और सुरक्षा में महत्वपूर्ण प्रगति की है, जिसे GDPR (जनरल डेटा प्रोटेक्शन रेगुलेशन) जैसी विनियमों द्वारा काफी हद तक प्रेरित किया गया है।
+- फिर भी AI सिस्टम के साथ हमें उस तनाव को स्वीकार करना चाहिए जो सिस्टम को अधिक व्यक्तिगत और प्रभावी बनाने के लिए अधिक व्यक्तिगत डेटा की आवश्यकता, और दूसरी ओर गोपनीयता, के बीच मौजूद है।
+- जैसे इंटरनेट के साथ जुड़े कंप्यूटरों के जन्म के साथ, हम AI से संबंधित सुरक्षा मुद्दों की संख्या में भी भारी वृद्धि देख रहे हैं।
+- साथ ही, हमने देखा है कि सुरक्षा में सुधार के लिए AI का उपयोग किया जा रहा है। उदाहरण के लिए, अधिकांश आधुनिक एंटी-वायरस स्कैनर आज AI ह्यूरिस्टिक्स द्वारा संचालित होते हैं।
+- हमें यह सुनिश्चित करने की आवश्यकता है कि हमारे डेटा विज्ञान प्रक्रियाएं नवीनतम गोपनीयता और सुरक्षा प्रथाओं के साथ सामंजस्यपूर्ण रूप से मिश्रित हों।
+
+### पारदर्शिता
+
+AI सिस्टम को समझने योग्य होना चाहिए। पारदर्शिता का एक महत्वपूर्ण हिस्सा AI सिस्टम और उनके घटकों के व्यवहार को समझाना है। AI सिस्टम की समझ में सुधार के लिए यह आवश्यक है कि हितधारक यह समझें कि वे कैसे और क्यों काम करते हैं ताकि वे संभावित प्रदर्शन मुद्दों, सुरक्षा और गोपनीयता चिंताओं, पूर्वाग्रहों, बहिष्करण प्रथाओं, या अनपेक्षित परिणामों की पहचान कर सकें। हम यह भी मानते हैं कि जो लोग AI सिस्टम का उपयोग करते हैं उन्हें यह ईमानदारी और स्पष्टता से बताना चाहिए कि वे कब, क्यों, और कैसे उनका उपयोग करते हैं। साथ ही उन सिस्टमों की सीमाओं को भी स्पष्ट करना चाहिए जो वे उपयोग करते हैं। उदाहरण के लिए, यदि एक बैंक अपने उपभोक्ता ऋण निर्णयों का समर्थन करने के लिए AI सिस्टम का उपयोग करता है, तो यह महत्वपूर्ण है कि परिणामों की जांच की जाए और यह समझा जाए कि कौन सा डेटा सिस्टम की सिफारिशों को प्रभावित करता है। सरकारें उद्योगों में AI को विनियमित करना शुरू कर रही हैं, इसलिए डेटा वैज्ञानिकों और संगठनों को यह बताना चाहिए कि क्या AI सिस्टम नियामक आवश्यकताओं को पूरा करता है, विशेष रूप से जब एक अवांछनीय परिणाम होता है।
+
+> [🎥 वीडियो के लिए यहां क्लिक करें: AI में पारदर्शिता](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- क्योंकि AI सिस्टम इतने जटिल हैं, यह समझना मुश्किल है कि वे कैसे काम करते हैं और परिणामों की व्याख्या कैसे करें।
+- इस समझ की कमी इन सिस्टमों के प्रबंधन, संचालन, और दस्तावेज़ीकरण को प्रभावित करती है।
+- इस समझ की कमी सबसे महत्वपूर्ण रूप से उन निर्णयों को प्रभावित करती है जो इन सिस्टमों द्वारा उत्पन्न परिणामों का उपयोग करके किए जाते हैं।
+
+### जवाबदेही
+
+AI सिस्टम को डिजाइन और तैनात करने वाले लोगों को अपने सिस्टम के संचालन के लिए जवाबदेह होना चाहिए। जवाबदेही की आवश्यकता विशेष रूप से संवेदनशील उपयोग तकनीकों जैसे चेहरे की पहचान के साथ महत्वपूर्ण है। हाल ही में, चेहरे की पहचान तकनीक की मांग बढ़ रही है, विशेष रूप से कानून प्रवर्तन संगठनों से जो इस तकनीक की संभावनाओं को लापता बच्चों को खोजने जैसे उपयोगों में देखते हैं। हालांकि, ये तकनीकें संभावित रूप से एक सरकार द्वारा अपने नागरिकों की मौलिक स्वतंत्रताओं को खतरे में डाल सकती हैं, जैसे कि विशिष्ट व्यक्तियों की निरंतर निगरानी को सक्षम करना। इसलिए, डेटा वैज्ञानिकों और संगठनों को अपने AI सिस्टम के व्यक्तियों या समाज पर प्रभाव के लिए जिम्मेदार होना चाहिए।
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft का जिम्मेदार AI के प्रति दृष्टिकोण")
+
+> 🎥 ऊपर की छवि पर क्लिक करें एक वीडियो के लिए: चेहरे की पहचान के माध्यम से बड़े पैमाने पर निगरानी की चेतावनी
+
+अंततः हमारी पीढ़ी के लिए सबसे बड़े सवालों में से एक, जो AI को समाज में ला रही है, यह है कि कैसे यह सुनिश्चित किया जाए कि कंप्यूटर लोगों के प्रति जवाबदेह बने रहेंगे और यह सुनिश्चित किया जाए कि कंप्यूटर डिजाइन करने वाले लोग सभी के प्रति जवाबदेह बने रहें।
+
+## प्रभाव मूल्यांकन
+
+मशीन लर्निंग मॉडल को प्रशिक्षित करने से पहले, यह समझने के लिए एक प्रभाव मूल्यांकन करना महत्वपूर्ण है कि AI सिस्टम का उद्देश्य क्या है; इसका अभीष्ट उपयोग क्या है; इसे कहां तैनात किया जाएगा; और सिस्टम के साथ कौन बातचीत करेगा। ये बातें सिस्टम का मूल्यांकन करने वाले समीक्षकों या परीक्षकों के लिए सहायक होती हैं, क्योंकि इनसे उन्हें संभावित जोखिमों और अपेक्षित परिणामों की पहचान करते समय विचार करने योग्य कारकों का पता चलता है।
+
+प्रभाव मूल्यांकन करते समय निम्नलिखित क्षेत्रों पर ध्यान केंद्रित करें:
+
+* **व्यक्तियों पर प्रतिकूल प्रभाव**। किसी भी प्रतिबंध या आवश्यकताओं, असमर्थित उपयोग या किसी भी ज्ञात सीमाओं से अवगत होना जो सिस्टम के प्रदर्शन को बाधित कर सकते हैं, यह सुनिश्चित करने के लिए महत्वपूर्ण है कि सिस्टम का उपयोग इस तरह से नहीं किया जाए जिससे व्यक्तियों को नुकसान हो।
+* **डेटा आवश्यकताएं**। सिस्टम डेटा का उपयोग कैसे और कहां करेगा, इसे समझने से समीक्षकों को उन डेटा आवश्यकताओं का पता लगाने में मदद मिलती है जिनके प्रति आपको सावधान रहना चाहिए (जैसे, GDPR या HIPAA डेटा विनियम)। इसके अलावा, यह जांचें कि प्रशिक्षण के लिए डेटा का स्रोत या मात्रा पर्याप्त है या नहीं।
+* **प्रभाव का सारांश**। सिस्टम का उपयोग करने से उत्पन्न होने वाली संभावित हानियों की सूची एकत्र करें। ML जीवनचक्र के दौरान, यह समीक्षा करें कि पहचानी गई समस्याओं को कम किया गया है या संबोधित किया गया है।
+* **छह मुख्य सिद्धांतों के लिए लागू लक्ष्य**। प्रत्येक सिद्धांत के लक्ष्यों का आकलन करें कि क्या वे मिले हैं और यदि कोई अंतराल हैं।
+
+## जिम्मेदार AI के साथ डीबगिंग
+
+किसी सॉफ्टवेयर एप्लिकेशन को डीबग करने की तरह, AI सिस्टम को डीबग करना सिस्टम में समस्याओं की पहचान करने और उन्हें हल करने की एक आवश्यक प्रक्रिया है। कई कारक हो सकते हैं जो मॉडल के अपेक्षित या जिम्मेदार तरीके से प्रदर्शन न करने को प्रभावित करते हैं। अधिकांश पारंपरिक मॉडल प्रदर्शन मेट्रिक्स मॉडल के प्रदर्शन के मात्रात्मक समुच्चय होते हैं, जो जिम्मेदार AI सिद्धांतों का उल्लंघन कैसे करते हैं, इसका विश्लेषण करने के लिए पर्याप्त नहीं होते। इसके अलावा, एक मशीन लर्निंग मॉडल एक ब्लैक बॉक्स है जो यह समझना मुश्किल बनाता है कि इसके परिणामों को क्या प्रेरित करता है या जब यह गलती करता है तो स्पष्टीकरण प्रदान करता है। इस पाठ्यक्रम में आगे, हम जिम्मेदार AI डैशबोर्ड का उपयोग करके AI सिस्टम को डीबग करना सीखेंगे। डैशबोर्ड डेटा वैज्ञानिकों और AI डेवलपर्स के लिए एक समग्र उपकरण प्रदान करता है:
+
+* **त्रुटि विश्लेषण**। मॉडल की त्रुटि वितरण की पहचान करने के लिए जो सिस्टम की निष्पक्षता या विश्वसनीयता को प्रभावित कर सकता है।
+* **मॉडल अवलोकन**। यह पता लगाने के लिए कि डेटा समूहों के प्रदर्शन में कहां असमानताएं हैं।
+* **डेटा विश्लेषण**। डेटा वितरण को समझने के लिए और डेटा में किसी भी संभावित पूर्वाग्रह की पहचान करने के लिए जो निष्पक्षता, समावेशिता और विश्वसनीयता मुद्दों का कारण बन सकता है।
+* **मॉडल व्याख्या**। यह समझने के लिए कि मॉडल की भविष्यवाणियों को क्या प्रभावित करता है। यह मॉडल के व्यवहार को समझाने में मदद करता है, जो पारदर्शिता और जवाबदेही के लिए महत्वपूर्ण है।
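+
+नीचे एक अनुमानित स्केच है कि इस डैशबोर्ड को कोड से कैसे जोड़ा जा सकता है। यह `responsibleai` और `raiwidgets` पैकेजों के getting-started नोटबुक की शैली पर आधारित एक उदाहरण मात्र है; API विवरण लाइब्रेरी के संस्करण के अनुसार बदल सकते हैं, और डेटासेट तथा मॉडल केवल उदाहरण के लिए चुने गए हैं:
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# एक छोटा उदाहरण डेटासेट और मॉडल तैयार करें
+df = load_iris(as_frame=True).frame          # 'target' कॉलम सहित DataFrame
+train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)
+model = RandomForestClassifier().fit(
+    train_df.drop(columns="target"), train_df["target"])
+
+# डैशबोर्ड के लिए अंतर्दृष्टि की गणना करें
+rai_insights = RAIInsights(model, train_df, test_df,
+                           target_column="target",
+                           task_type="classification")
+rai_insights.explainer.add()        # मॉडल व्याख्या
+rai_insights.error_analysis.add()   # त्रुटि विश्लेषण
+rai_insights.compute()
+
+ResponsibleAIDashboard(rai_insights)  # नोटबुक में इंटरैक्टिव डैशबोर्ड खोलें
+```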
+
+## 🚀 चुनौती
+
+हानियों को शुरुआत में ही उत्पन्न होने से रोकने के लिए, हमें:
+
+- सिस्टम पर काम करने वाले लोगों के बीच विविध पृष्ठभूमि और दृष्टिकोण होने चाहिए।
+- हमारे समाज की विविधता को दर्शाने वाले डेटा सेटों में निवेश करना चाहिए।
+- मशीन लर्निंग जीवनचक्र के दौरान जिम्मेदार AI का पता लगाने और सुधारने के लिए बेहतर तरीकों का विकास करना चाहिए।
+
+वास्तविक जीवन के परिदृश्यों के बारे में सोचें जहां मॉडल की अविश्वसनीयता मॉडल-निर्माण और उपयोग में स्पष्ट है। और क्या विचार करना चाहिए?
+
+## [पोस्ट-व्याख्यान क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## समीक्षा और स्व-अध्ययन
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/3-fairness/assignment.md b/translations/hi/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..4eb3725d5
--- /dev/null
+++ b/translations/hi/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# जिम्मेदार AI टूलबॉक्स का अन्वेषण करें
+
+## निर्देश
+
+इस पाठ में आपने जिम्मेदार AI टूलबॉक्स के बारे में सीखा, जो "डेटा वैज्ञानिकों को AI सिस्टम का विश्लेषण और सुधार करने में मदद करने के लिए एक ओपन-सोर्स, समुदाय-संचालित परियोजना है।" इस असाइनमेंट के लिए, RAI टूलबॉक्स के [notebooks](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) में से एक का अन्वेषण करें और अपने निष्कर्षों को एक पेपर या प्रस्तुति में रिपोर्ट करें।
+
+## मूल्यांकन
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| ------ | -------- | -------- | ----------------- |
+| | एक पेपर या पॉवरपॉइंट प्रस्तुति प्रस्तुत की जाती है जिसमें Fairlearn के सिस्टम, चलाए गए नोटबुक, और इसे चलाने से प्राप्त निष्कर्षों पर चर्चा की जाती है | निष्कर्षों के बिना एक पेपर प्रस्तुत किया जाता है | कोई पेपर प्रस्तुत नहीं किया गया है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल भाषा में दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/4-techniques-of-ML/README.md b/translations/hi/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..09d445769
--- /dev/null
+++ b/translations/hi/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Techniques of Machine Learning
+
+The process of building, using, and maintaining machine learning models and the data they use is a very different process from many other development workflows. In this lesson, we will demystify the process and outline the main techniques you need to know. You will:
+
+- Understand the processes underpinning machine learning at a high level.
+- Explore base concepts such as 'models', 'predictions', and 'training data'.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "ML for beginners - Techniques of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+## Introduction
+
+At a high level, the craft of creating machine learning (ML) processes is comprised of a number of steps; a minimal code sketch follows the list:
+
+1. **Decide on the question**. Most ML processes start by asking a question that cannot be answered by a simple conditional program or rules-based engine. These questions often revolve around predictions based on a collection of data.
+2. **Collect and prepare data**. To be able to answer your question, you need data. The quality and, sometimes, quantity of your data will determine how well you can answer your initial question. Visualizing data is an important aspect of this phase. This phase also includes splitting the data into a training and testing group to build a model.
+3. **Choose a training method**. Depending on your question and the nature of your data, you need to choose how you want to train a model to best reflect your data and make accurate predictions against it. This is the part of your ML process that requires specific expertise and, often, a considerable amount of experimentation.
+4. **Train the model**. Using your training data, you'll use various algorithms to train a model to recognize patterns in the data. The model might leverage internal weights that can be adjusted to privilege certain parts of the data over others to build a better model.
+5. **Evaluate the model**. You use never before seen data (your testing data) from your collected set to see how the model is performing.
+6. **Parameter tuning**. Based on the performance of your model, you can redo the process using different parameters, or variables, that control the behavior of the algorithms used to train the model.
+7. **Predict**. Use new inputs to test the accuracy of your model.
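+
+The sketch below is a minimal, hypothetical walk-through of how these steps can map onto code with Scikit-learn, the library used later in this curriculum (the dataset and estimator are illustrative choices, not a prescription):
+
+```python
+from sklearn import datasets, linear_model, metrics, model_selection
+
+# 2. Collect and prepare data (here, a built-in learning dataset)
+X, y = datasets.load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)
+
+# 3-4. Choose a training method and train the model
+model = linear_model.LinearRegression()
+model.fit(X_train, y_train)
+
+# 5. Evaluate the model on data it has never seen
+y_pred = model.predict(X_test)
+print(metrics.mean_squared_error(y_test, y_pred))
+```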
+
+## What question to ask
+
+Computers are particularly skilled at discovering hidden patterns in data. This utility is very helpful for researchers who have questions about a given domain that cannot be easily answered by creating a conditionally-based rules engine. Given an actuarial task, for example, a data scientist might be able to construct handcrafted rules around the mortality of smokers vs non-smokers.
+
+When many other variables are brought into the equation, however, an ML model might prove more efficient at predicting future mortality rates based on past health history. A more cheerful example might be making weather predictions for the month of April in a given location based on data that includes latitude, longitude, climate change, proximity to the ocean, patterns of the jet stream, and more.
+
+✅ This [slide deck](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) on weather models offers a historical perspective for using ML in weather analysis.
+
+## Pre-building tasks
+
+Before starting to build your model, there are several tasks you need to complete. To test your question and form a hypothesis based on a model's predictions, you need to identify and configure several elements.
+
+### Data
+
+To be able to answer your question with any kind of certainty, you need a good amount of data of the right type. There are two things you need to do at this point:
+
+- **Collect data**. Keeping in mind the previous lesson on fairness in data analysis, collect your data with care. Be aware of the sources of this data, any inherent biases it might have, and document its origin.
+- **Prepare data**. There are several steps in the data preparation process. You might need to collate data and normalize it if it comes from diverse sources. You can improve the data's quality and quantity through various methods such as converting strings to numbers (as we do in [Clustering](../../5-Clustering/1-Visualize/README.md)). You might also generate new data, based on the original (as we do in [Classification](../../4-Classification/1-Introduction/README.md)). You can clean and edit the data (as we will prior to the [Web App](../../3-Web-App/README.md) lesson). Finally, you might also need to randomize it and shuffle it, depending on your training techniques.
+
+✅ After collecting and processing your data, take a moment to see if its shape will allow you to address your intended question. It may be that the data will not perform well in your given task, as we discover in our [Clustering](../../5-Clustering/1-Visualize/README.md) lessons!
+
+### Features and Target
+
+A [feature](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection) is a measurable property of your data. In many datasets it is expressed as a column heading like 'date', 'size' or 'color'. Your feature variable, usually represented as `X` in code, represents the input variable which will be used to train the model.
+
+A target is the thing you are trying to predict. The target, usually represented as `y` in code, represents the answer to the question you are trying to ask of your data: in December, what **color** pumpkins will be cheapest? In San Francisco, what neighborhoods will have the best real estate **price**? Sometimes the target is also referred to as the label attribute.
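+
+As a small illustration (the dataframe and column names here are invented for the example, not taken from this lesson), selecting features `X` and a target `y` might look like:
+
+```python
+import pandas as pd
+
+# Hypothetical pumpkin sales data
+df = pd.DataFrame({
+    'color': ['orange', 'white', 'orange'],
+    'size': [5, 7, 9],
+    'price': [3.5, 4.0, 5.25],
+})
+
+X = df[['color', 'size']]  # feature variables: measurable input properties
+y = df['price']            # target variable: the answer we want to predict
+```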
+
+### Selecting your feature variable
+
+🎓 **Feature Selection and Feature Extraction** How do you know which variable to choose when building a model? You'll probably go through a process of feature selection or feature extraction to choose the right variables for the most performant model. They're not the same thing, however: "Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features." ([source](https://wikipedia.org/wiki/Feature_selection))
+
+### Visualize your data
+
+An important aspect of the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data (as we discover in [Classification](../../4-Classification/2-Classifiers-1/README.md)).
+
+### Split your dataset
+
+Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well.
+
+- **Training**. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
+- **Testing**. A test dataset is an independent group of data, often gathered from the original data, that you use to confirm the performance of the built model.
+- **Validating**. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set (as we note in [Time Series Forecasting](../../7-TimeSeries/1-Introduction/README.md)). A sketch of one way to make these splits follows this list.
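+
+A minimal sketch of one common way to produce all three splits with Scikit-learn (the proportions are illustrative, and `X` and `y` are assumed to hold your features and target):
+
+```python
+from sklearn.model_selection import train_test_split
+
+# First carve out a held-back test set, then split the rest into train/validation
+X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2)
+X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25)
+# Result: roughly 60% training, 20% validation, 20% testing
+```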
+
+## Building a model
+
+Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to **train** it. Training a model exposes it to data and allows it to make assumptions about the patterns it discovers, validates, and accepts or rejects.
+
+### Decide on a training method
+
+Depending on your question and the nature of your data, you will choose a method to train it. Stepping through [Scikit-learn's documentation](https://scikit-learn.org/stable/user_guide.html) - which we use in this course - you can explore many ways to train a model. Depending on your experience, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
+
+### Train a model
+
+Armed with your training data, you are ready to 'fit' it to create a model. You will notice that in many ML libraries you will find the code 'model.fit' - it is at this time that you send in your feature variable as an array of values (usually 'X') and a target variable (usually 'y').
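+
+In Scikit-learn, for example, that call might look like this (a sketch assuming `X_train` and `y_train` splits such as those described above):
+
+```python
+from sklearn.linear_model import LinearRegression
+
+model = LinearRegression()
+model.fit(X_train, y_train)  # the model adjusts its internal weights to the training data
+```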
+
+### Evaluate the model
+
+Once the training process is complete (it can take many iterations, or 'epochs', to train a large model), you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality, for example as in the sketch below.
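+
+A minimal sketch of such a metrics printout for a regression model (assuming the `model`, `X_test`, and `y_test` names from the sketches above):
+
+```python
+from sklearn.metrics import mean_squared_error, r2_score
+
+y_pred = model.predict(X_test)  # data the model has never analyzed
+print('MSE:', mean_squared_error(y_test, y_pred))
+print('R2: ', r2_score(y_test, y_pred))
+```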
+
+🎓 **Model fitting**
+
+In the context of machine learning, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
+
+🎓 **Underfitting** and **overfitting** are common problems that degrade the quality of the model, as the model fits either not well enough or too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'.
+
+
+> Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+## Parameter tuning
+
+Once your initial training is complete, observe the quality of the model and consider improving it by tweaking its 'hyperparameters'. Read more about the process [in the documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott).
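+
+One common technique is a grid search over candidate hyperparameter values; here is a minimal sketch with Scikit-learn (the estimator and the `alpha` values are illustrative assumptions, not from this lesson):
+
+```python
+from sklearn.linear_model import Ridge
+from sklearn.model_selection import GridSearchCV
+
+# Try each candidate alpha with 5-fold cross-validation and keep the best
+search = GridSearchCV(Ridge(), {'alpha': [0.1, 1.0, 10.0]}, cv=5)
+search.fit(X_train, y_train)
+print(search.best_params_)
+```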
+
+## Prediction
+
+This is the moment where you can use completely new data to test your model's accuracy. In an 'applied' ML setting, where you are building web assets to use the model in production, this process might involve gathering user input (a button press, for example) to set a variable and send it to the model for inference, or evaluation.
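+
+In code, inference on a completely new input might look like this sketch (the feature values are invented, and their number must match the features the model was trained on):
+
+```python
+new_input = [[0.05, 0.05, 0.06, 0.02, -0.04, -0.03, -0.04, -0.002, 0.02, -0.02]]
+print(model.predict(new_input))  # the model's prediction for unseen data
+```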
+
+In these lessons, you will discover how to use these steps to prepare, build, test, evaluate, and predict - all the gestures of a data scientist and more, as you progress in your journey to become a 'full stack' ML engineer.
+
+---
+
+## 🚀Challenge
+
+Draw a flow chart reflecting the steps of an ML practitioner. Where do you see yourself right now in the process? Where do you predict you will find difficulty? What seems easy to you?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Review & Self Study
+
+Search online for interviews with data scientists who discuss their daily work. Here is [one](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Assignment
+
+[Interview a data scientist](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/4-techniques-of-ML/assignment.md b/translations/hi/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..f15b66a07
--- /dev/null
+++ b/translations/hi/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Interview a data scientist
+
+## Instructions
+
+In your company, in a user group, or among your friends or fellow students, talk to someone who works professionally as a data scientist. Write a short paper (500 words) about their daily occupations. Are they specialists, or do they work 'full stack'?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | --------------------- |
+| | An essay of the correct length, with attributed sources, is presented as a .doc file | The essay is poorly attributed or shorter than the required length | No essay is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/1-Introduction/README.md b/translations/hi/1-Introduction/README.md
new file mode 100644
index 000000000..5b1de7d30
--- /dev/null
+++ b/translations/hi/1-Introduction/README.md
@@ -0,0 +1,26 @@
+# Introduction to machine learning
+
+In this section of the curriculum, you will be introduced to the base concepts underlying the field of machine learning, what it is, and learn about its history and the techniques researchers use to work with it. Let's explore this new world of ML together!
+
+
+> Photo by Bill Oxford on Unsplash
+
+### Lessons
+
+1. [Introduction to machine learning](1-intro-to-ML/README.md)
+1. [The History of machine learning and AI](2-history-of-ML/README.md)
+1. [Fairness and machine learning](3-fairness/README.md)
+1. [Techniques of machine learning](4-techniques-of-ML/README.md)
+
+### Credits
+
+"Introduction to Machine Learning" was written with ♥️ by a team of folks including [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) and [Jen Looper](https://twitter.com/jenlooper)
+
+"The History of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Amy Boyd](https://twitter.com/AmyKateNicho)
+
+"Fairness and Machine Learning" was written with ♥️ by [Tomomi Imura](https://twitter.com/girliemac)
+
+"Techniques of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Chris Noring](https://twitter.com/softchris)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/1-Tools/README.md b/translations/hi/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..8259ba233
--- /dev/null
+++ b/translations/hi/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Get started with Python and Scikit-learn for regression models
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Introduction
+
+In these four lessons, you will discover how to build regression models. We will discuss what these are for shortly. But before you do anything, make sure you have the right tools in place to start the process!
+
+In this lesson, you will learn how to:
+
+- Configure your computer for local machine learning tasks.
+- Work with Jupyter notebooks.
+- Use Scikit-learn, including installation.
+- Explore linear regression with a hands-on exercise.
+
+## Installations and configurations
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML for beginners - Setup your tools ready to build Machine Learning models")
+
+> 🎥 Click the image above for a short video working through configuring your computer for ML.
+
+1. **Install Python**. Make sure [Python](https://www.python.org/downloads/) is installed on your computer. You will use Python for many data science and machine learning tasks. Most computer systems already include a Python installation. There are useful [Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) available as well, to ease the setup for some users.
+
+   Some usages of Python, however, require one version of the software, whereas others require a different version. For this reason, it's useful to work within a [virtual environment](https://docs.python.org/3/library/venv.html).
+
+2. **Install Visual Studio Code**. Make sure you have Visual Studio Code installed on your computer. Follow these instructions to [install Visual Studio Code](https://code.visualstudio.com/) for the basic installation. You are going to use Python in Visual Studio Code in this course, so you might want to brush up on how to [configure Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) for Python development.
+
+   > Get comfortable with Python by working through this collection of [Learn modules](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)
+   >
+   > [](https://youtu.be/yyQM70vi7V8 "Setup Python with Visual Studio Code")
+   >
+   > 🎥 Click the image above for a video: using Python within VS Code.
+
+3. **Install Scikit-learn**, by following [these instructions](https://scikit-learn.org/stable/install.html). Since you need to ensure that you use Python 3, it's recommended that you use a virtual environment. Note, if you are installing this library on a M1 Mac, there are special instructions on the page linked above.
+
+1. **Install Jupyter Notebook**. You will need to [install the Jupyter package](https://pypi.org/project/jupyter/).
+
+## Your ML authoring environment
+
+You are going to use **notebooks** to develop your Python code and create machine learning models. This type of file is a common tool for data scientists, and they can be identified by their suffix or extension `.ipynb`.
+
+Notebooks are an interactive environment that allow the developer to both code and add notes and write documentation around the code, which is quite helpful for experimental or research-oriented projects.
+
+[](https://youtu.be/7E-jC8FLA2E "ML for beginners - Set up Jupyter Notebooks to start building regression models")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+### Exercise - work with a notebook
+
+In this folder, you will find the file _notebook.ipynb_.
+
+1. Open _notebook.ipynb_ in Visual Studio Code.
+
+   A Jupyter server will start with Python 3+. You will find areas of the notebook that can be `run` - pieces of code. You can run a code block by selecting the icon that looks like a play button.
+
+1. Select the `md` icon and add a bit of markdown, and the following text **# Welcome to your notebook**.
+
+   Next, add some Python code.
+
+1. Type **print('hello notebook')** in the code block.
+1. Select the arrow to run the code.
+
+   You should see the printed statement:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+You can interleaf your code with comments to self-document the notebook.
+
+✅ Think for a minute how different a web developer's working environment is versus that of a data scientist.
+
+## Up and running with Scikit-learn
+
+Now that Python is set up in your local environment, and you are comfortable with Jupyter notebooks, let's get equally comfortable with Scikit-learn (pronounce it `sci` as in `science`). Scikit-learn provides an [extensive API](https://scikit-learn.org/stable/modules/classes.html#api-ref) to help you perform ML tasks.
+
+According to their [website](https://scikit-learn.org/stable/getting_started.html), "Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities."
+
+In this course, you will use Scikit-learn and other tools to build machine learning models to perform what we call 'traditional machine learning' tasks. We have deliberately avoided neural networks and deep learning, as they are better covered in our forthcoming 'AI for Beginners' curriculum.
+
+Scikit-learn makes it straightforward to build models and evaluate them for use. It is primarily focused on using numeric data and contains several ready-made datasets for use as learning tools. It also includes pre-built models for students to try. Let's explore the process of loading prepackaged data and using a built-in estimator to create a first ML model with Scikit-learn with some basic data.
+
+## Exercise - your first Scikit-learn notebook
+
+> This tutorial was inspired by the [linear regression example](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) on Scikit-learn's web site.
+
+[](https://youtu.be/2xkXL5EUpS0 "ML for beginners - Your First Linear Regression Project in Python")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+In the _notebook.ipynb_ file associated to this lesson, clear out all the cells by pressing the 'trash can' icon.
+
+In this section, you will work with a small dataset about diabetes that is built into Scikit-learn for learning purposes. Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
+
+✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use linear regression, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment**, so you would use logistic regression. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
+
+Let's get started on this task.
+
+### Import libraries
+
+For this task we will import some libraries:
+
+- **matplotlib**. It's a useful [graphing tool](https://matplotlib.org/) and we will use it to create a line plot.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) is a useful library for handling numeric data in Python.
+- **sklearn**. This is the [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) library.
+
+Import some libraries to help with your tasks.
+
+1. Add imports by typing the following code:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Above you are importing `matplotlib` and `numpy`, and you are importing `datasets`, `linear_model` and `model_selection` from `sklearn`. `model_selection` is used for splitting data into training and test sets.
+
+### The diabetes dataset
+
+The built-in [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) includes 442 samples of data around diabetes, with 10 feature variables, some of which include:
+
+- age: age in years
+- bmi: body mass index
+- bp: average blood pressure
+- s1 tc: T-Cells (a type of white blood cells)
+
+✅ This dataset includes the concept of 'sex' as a feature variable important to research around diabetes. Many medical datasets include this type of binary classification. Think a bit about how categorizations such as this might exclude certain parts of a population from treatments.
+
+Now, load up the X and y data.
+
+> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
+
+In a new code cell, load the diabetes dataset by calling `load_diabetes()`. The input `return_X_y=True` signals that `X` will be a data matrix, and `y` will be the regression target.
+
+1. Add some print commands to show the shape of the data matrix and its first element:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   What you are getting back as a response is a tuple. What you are doing is assigning the first two values of the tuple to `X` and `y` respectively. Learn more [about tuples](https://wikipedia.org/wiki/Tuple).
+
+   You can see that this data has 442 items shaped in arrays of 10 elements:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+   ✅ Think a bit about the relationship between the data and the regression target. Linear regression predicts relationships between feature X and target variable y. Can you find the [target](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) for the diabetes dataset in the documentation? What is this dataset demonstrating, given that target?
+
+2. Next, select a portion of this dataset to plot by selecting the 3rd column of the dataset. You can do this by using the `:` operator to select all rows, and then selecting the 3rd column using the index (2). You can also reshape the data to be a 2D array - as required for plotting - by using `reshape(n_rows, n_columns)`. If one of the parameters is -1, the corresponding dimension is calculated automatically.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+   ✅ At any time, print out the data to check its shape.
+
+3. Now that you have data ready to be plotted, you can see if a machine can help determine a logical split between the numbers in this dataset. To do this, you need to split both the data (X) and the target (y) into test and training sets. Scikit-learn has a straightforward way to do this; you can split your test data at a given point.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. Now you are ready to train your model! Load the linear regression model and train it with your X and y training sets using `model.fit()`:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+ ✅ `model.fit()` is a function you'll see in many ML libraries such as TensorFlow
+
+5. Then, create a prediction using test data, using the function `predict()`. This will be used to draw the line between data groups
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. Now it's time to show the data in a plot. Matplotlib is a very useful tool for this task. Create a scatterplot of all the X and y test data, and use the prediction to draw a line in the most appropriate place, between the model's data groupings.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+   ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.
+
+Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
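+
+As an optional numeric check (a sketch assuming the `model`, `X_test`, and `y_test` variables from the steps above), you can complement the plot with the model's R² score:
+
+```python
+# R², the coefficient of determination: 1.0 is a perfect fit, values near 0 mean little predictive power
+print(model.score(X_test, y_test))
+```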
+
+---
+## 🚀Challenge
+
+Plot a different variable from this dataset. Hint: edit this line: `X = X[:,2]`. Given this dataset's target, what are you able to discover about the progression of diabetes as a disease?
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Review & Self Study
+
+In this tutorial, you worked with simple linear regression, rather than univariate or multiple linear regression. Read a little about the differences between these methods, or take a look at [this video](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)
+
+Read more about the concept of regression and think about what kinds of questions can be answered by this technique. Take this [tutorial](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) to deepen your understanding.
+
+## Assignment
+
+[A different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/1-Tools/assignment.md b/translations/hi/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..f56134fca
--- /dev/null
+++ b/translations/hi/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Regression with Scikit-learn
+
+## Instructions
+
+Take a look at the [Linnerud dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) in Scikit-learn. This dataset has multiple [targets](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset): 'It consists of three exercise (data) and three physiological (target) variables collected from twenty middle-aged men in a fitness club.'
+
+In your own words, describe how to create a regression model that would plot the relationship between the waistline and how many situps are accomplished. Do the same for the other datapoints in this dataset.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------------------------------ | ------------------------------------- | ----------------------------- | -------------------------- |
+| Submit a descriptive paragraph | A well-written paragraph is submitted | A few sentences are submitted | No description is supplied |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/1-Tools/solution/Julia/README.md b/translations/hi/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..633bb77e0
--- /dev/null
+++ b/translations/hi/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/2-Data/README.md b/translations/hi/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..7cd770f6e
--- /dev/null
+++ b/translations/hi/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Build a regression model using Scikit-learn: prepare and visualize data
+
+
+
+Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Introduction
+
+Now that you are set up with the tools you need to start tackling machine learning model building with Scikit-learn, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right question to properly unlock the potential of your dataset.
+
+In this lesson, you will learn:
+
+- How to prepare your data for model-building.
+- How to use Matplotlib for data visualization.
+
+## Asking the right question of your data
+
+The question you need answered will determine what type of ML algorithms you will leverage. And the quality of the answer you get back will be heavily dependent on the nature of your data.
+
+Take a look at the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson. You can open this .csv file in VS Code. A quick skim immediately shows that there are blanks and a mix of strings and numeric data. There's also a strange column called 'Package' where the data is a mix between 'sacks', 'bins' and other values. The data, in fact, is a bit of a mess.
+
+[](https://youtu.be/5qGjczWTrDQ "ML for beginners - How to Analyze and Clean a Dataset")
+
+> 🎥 Click the image above for a short video working through preparing the data for this lesson.
+
+In fact, it is not very common to be gifted a dataset that is completely ready to use to create an ML model out of the box. In this lesson, you will learn how to prepare a raw dataset using standard Python libraries. You will also learn various techniques to visualize the data.
+
+## Case study: 'the pumpkin market'
+
+In this folder you will find a .csv file in the root `data` folder called [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) which includes 1757 lines of data about the market for pumpkins, sorted into groupings by city. This is raw data extracted from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture.
+
+### Preparing data
+
+This data is in the public domain. It can be downloaded in many separate files, per city, from the USDA web site. To avoid too many separate files, we have concatenated all the city data into one spreadsheet, thus we have already _prepared_ the data a bit. Next, let's take a closer look at the data.
+
+### The pumpkin data - early conclusions
+
+What do you notice about this data? You already saw that there is a mix of strings, numbers, blanks and strange values that you need to make sense of.
+
+What question can you ask of this data, using a regression technique? What about "Predict the price of a pumpkin for sale during a given month"? Looking again at the data, there are some changes you need to make to create the data structure necessary for the task.
+
+## Exercise - analyze the pumpkin data
+
+Let's use [Pandas](https://pandas.pydata.org/) (the name stands for `Python Data Analysis`), a tool very useful for shaping data, to analyze and prepare this pumpkin data.
+
+### First, check for missing dates
+
+You will first need to take steps to check for missing dates:
+
+1. Convert the dates to a month format (these are US dates, so the format is `MM/DD/YYYY`).
+2. Extract the month to a new column.
+
+Open the _notebook.ipynb_ file in Visual Studio Code and import the spreadsheet into a new Pandas dataframe.
+
+1. Use the `head()` function to view the first five rows.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+   ✅ What function would you use to view the last five rows?
+
+1. Check if there is missing data in the current dataframe:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+   There is missing data, but maybe it won't matter for the task at hand.
+
+1. To make your dataframe easier to work with, select only the columns that you need, using the `loc` function, which extracts from the original dataframe a group of rows (passed as first parameter) and columns (passed as second parameter). The expression `:` in the case below means "all rows".
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### Second, determine average price of pumpkin
+
+Think about how to determine the average price of a pumpkin in a given month. What columns would you pick for this task? Hint: you'll need 3 columns.
+
+Solution: take the average of the `Low Price` and `High Price` columns to populate the new Price column, and convert the Date column to only show the month. Fortunately, according to the check above, there is no missing data for dates or prices.
+
+1. To calculate the average, add the following code:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+   ✅ Feel free to print any data you'd like to check using `print(month)`.
+
+2. Now, copy your converted data into a fresh Pandas dataframe:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+   Printing out your dataframe will show you a clean, tidy dataset on which you can build your new regression model.
+
+### But wait! There's something odd here
+
+If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in '1 1/9 bushel' measures, and some in '1/2 bushel' measures, some per pumpkin, some per pound, and some in big boxes with varying widths.
+
+> Pumpkins seem very hard to weigh consistently
+
+Digging into the original data, it's interesting that anything with `Unit of Sale` equalling 'EACH' or 'PER BIN' also have the `Package` type per inch, per bin, or 'each'. Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string 'bushel' in their `Package` column.
+
+1. Add a filter at the top of the file, under the initial .csv import:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+   If you print the data now, you can see that you are only getting the 415 or so rows of data containing pumpkins by the bushel.
+
+### But wait! There's one more thing to do
+
+Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, so do some math to standardize it.
+
+1. Add these lines after the block creating the new_pumpkins dataframe:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. "A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds." It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel. All this study of bushels of pumpkins, however, goes to show how very important it is to understand the nature of your data!
+
+Now, you can analyze the pricing per unit based on their bushel measurement. If you print out the data one more time, you can see how it's standardized.
+
+✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are way pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken by one big hollow pie pumpkin.
+
+## Visualization Strategies
+
+Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of data. In this way, they are able to visually show relationships and gaps that would otherwise be hard to uncover.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML for beginners - How to Visualize Data with Matplotlib")
+
+> 🎥 Click the image above for a short video working through visualizing the data for this lesson.
+
+Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.
+
+One data visualization library that works well in Jupyter notebooks is [Matplotlib](https://matplotlib.org/) (which you also saw in the previous lesson).
+
+> Get more experience with data visualization in [these tutorials](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott).
+
+## Exercise - experiment with Matplotlib
+
+Try to create some basic plots to display the new dataframe you just created. What would a basic line plot show?
+
+1. Import Matplotlib at the top of the file, under the Pandas import:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Rerun the entire notebook to refresh.
+1. At the bottom of the notebook, add a cell to plot the data as a box:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+   Is this a useful plot? Does anything about it surprise you?
+
+   It's not particularly useful as all it does is display your data as a spread of points in a given month.
+
+### Make it useful
+
+To get charts to display useful data, you usually need to group the data somehow. Let's try creating a plot where the y axis shows the months and the data demonstrates the distribution of data.
+
+1. Add a cell to create a grouped bar chart:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+   This is a more useful data visualization! It seems to indicate that the highest price for pumpkins occurs in September and October. Does that meet your expectation? Why or why not?
+
+---
+
+## 🚀Challenge
+
+Explore the different types of visualization that Matplotlib offers. Which types are most appropriate for regression problems?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## Review & Self Study
+
+Take a look at the many ways to visualize data. Make a list of the various libraries available and note which are best for given types of tasks, for example 2D visualizations vs. 3D visualizations. What do you discover?
+
+## Assignment
+
+[Exploring visualization](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/2-Data/assignment.md b/translations/hi/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..489d8ba4b
--- /dev/null
+++ b/translations/hi/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Exploring Visualizations
+
+There are several different libraries that are available for data visualization. Create some visualizations using the pumpkin data in this lesson with matplotlib and seaborn in a sample notebook. Which libraries are easier to work with?
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | A notebook is submitted with two explorations/visualizations | A notebook is submitted with one exploration/visualization | A notebook is not submitted |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/2-Data/solution/Julia/README.md b/translations/hi/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..065a31c72
--- /dev/null
+++ b/translations/hi/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/3-Linear/README.md b/translations/hi/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..9212f5202
--- /dev/null
+++ b/translations/hi/2-Regression/3-Linear/README.md
@@ -0,0 +1,349 @@
+# Build a regression model using Scikit-learn: regression four ways
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### Introduction
+
+So far you have explored what regression is with sample data gathered from the pumpkin pricing dataset that we will use throughout this lesson. You have also visualized it using Matplotlib.
+
+Now you are ready to dive deeper into regression for ML. While visualization allows you to make sense of data, the real power of Machine Learning comes from _training models_. Models are trained on historic data to automatically capture data dependencies, and they allow you to predict outcomes for new data, which the model has not seen before.
+
+In this lesson, you will learn more about two types of regression: _basic linear regression_ and _polynomial regression_, along with some of the math underlying these techniques. Those models will allow us to predict pumpkin prices depending on different input data.
+
+[](https://youtu.be/CRxFT8oTDMg "ML for beginners - Understanding Linear Regression")
+
+> 🎥 Click the image above for a short video overview of linear regression.
+
+> Throughout this curriculum, we assume minimal knowledge of math, and seek to make it accessible for students coming from other fields, so watch for notes, 🧮 callouts, diagrams, and other learning tools to aid comprehension.
+
+### Prerequisite
+
+You should be familiar by now with the structure of the pumpkin data that we are examining. You can find it preloaded and pre-cleaned in this lesson's _notebook.ipynb_ file. In the file, the pumpkin price is displayed per bushel in a new dataframe. Make sure you can run these notebooks in kernels in Visual Studio Code.
+
+### Preparation
+
+As a reminder, you are loading this data so as to ask questions of it.
+
+- When is the best time to buy pumpkins?
+- What price can I expect for a case of miniature pumpkins?
+- Should I buy them in half-bushel baskets or by the 1 1/9 bushel box?
+Let's keep digging into this data.
+
+In the previous lesson, you created a Pandas dataframe and populated it with part of the original dataset, standardizing the pricing by the bushel. By doing that, however, you were only able to gather about 400 datapoints and only for the fall months.
+
+Take a look at the data that we preloaded in this lesson's accompanying notebook. The data is preloaded and an initial scatterplot is charted to show month data. Maybe we can get a little more detail about the nature of the data by cleaning it more.
+
+## A linear regression line
+
+As you learned in Lesson 1, the goal of a linear regression exercise is to be able to plot a line to:
+
+- **Show variable relationships**. Show the relationship between variables
+- **Make predictions**. Make accurate predictions on where a new datapoint would fall in relationship to that line.
+
+It is typical of **Least-Squares Regression** to draw this type of line. The term 'least-squares' means that all the datapoints surrounding the regression line are squared and then added up. Ideally, that final sum is as small as possible, because we want a low number of errors, or `least-squares`.
+
+We do so since we want to model a line that has the least cumulative distance from all of our data points. We also square the terms before adding them since we are concerned with the magnitude rather than the direction of the errors.
+
+> **🧮 Show me the math**
+>
+> इस रेखा को, जिसे _सबसे अच्छा फिट_ कहा जाता है, [एक समीकरण](https://en.wikipedia.org/wiki/Simple_linear_regression) द्वारा व्यक्त किया जा सकता है:
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` is the 'explanatory variable'. `Y` is the 'dependent variable'. The slope of the line is `b` and `a` is the y-intercept, which refers to the value of `Y` when `X = 0`.
+>
+>
+>
+> First, calculate the slope `b`. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> In other words, and referring to our pumpkin data's original question: "predict the price of a pumpkin per bushel by month", `X` would refer to the price and `Y` would refer to the month of sale.
+>
+>
+>
+> Calculate the value of Y. If you're paying around $4, it must be April! Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> The math that calculates the line must demonstrate the slope of the line, which is also dependent on the intercept, or where `Y` is situated when `X = 0`.
+>
+> You can observe the method of calculation for these values on the [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) web site. Also visit [this Least-squares calculator](https://www.mathsisfun.com/data/least-squares-calculator.html) to watch how the numbers' values impact the line.
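+>
+> As a worked sketch (these are the standard least-squares formulas, stated here as background rather than taken from this lesson's notebook), the slope and intercept of the line of best fit can be computed from the data's means:
+>
+> ```
+> b = Σ (xᵢ - x̄)(yᵢ - ȳ) / Σ (xᵢ - x̄)²
+> a = ȳ - b·x̄
+> ```
+>
+> where x̄ and ȳ are the means of the `X` and `Y` values.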
+
+## Correlation
+
+One more term to understand is the **Correlation Coefficient** between given X and Y variables. Using a scatterplot, you can quickly visualize this coefficient. A plot with datapoints scattered in a neat line has high correlation, but a plot with datapoints scattered everywhere between X and Y has a low correlation.
+
+A good linear regression model will be one that has a high (nearer to 1 than 0) Correlation Coefficient using the Least-Squares Regression method with a line of regression.
+
+✅ Run the notebook accompanying this lesson and look at the Month to Price scatterplot. Does the data associating Month to Price for pumpkin sales seem to have high or low correlation, according to your visual interpretation of the scatterplot? Does that change if you use a more fine-grained measure instead of `Month`, e.g. *day of the year* (i.e. number of days since the beginning of the year)?
+
+In the code below, we will assume that we have cleaned up the data, and obtained a data frame called `new_pumpkins`, similar to the following:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> The code to clean the data is available in [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). We have performed the same cleaning steps as in the previous lesson, and have calculated the `DayOfYear` column using the following expression:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Now that you have an understanding of the math behind linear regression, let's create a regression model to see if we can predict which package of pumpkins will have the best pumpkin prices. Someone buying pumpkins for a holiday pumpkin patch might want this information to be able to optimize their purchases of pumpkin packages for the patch.
+
+## Looking for Correlation
+
+[](https://youtu.be/uoRq-lW2eQo "ML for beginners - Looking for Correlation: The Key to Linear Regression")
+
+> 🎥 Click the image above for a short video overview of correlation.
+
+From the previous lesson you have probably seen that the average price for different months looks like this:
+
+
+
+This suggests that there should be some correlation, and we can try training a linear regression model to predict the relationship between `Month` and `Price`, or between `DayOfYear` and `Price`. Here is the scatter plot that shows the latter relationship:
+
+
+
+Let's see if there is a correlation using the `corr` function:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+It looks like the correlation is pretty small, -0.15 by `Month` and -0.17 by `DayOfYear`, but there could be another important relationship. It looks like there are different clusters of prices corresponding to different pumpkin varieties. To confirm this hypothesis, let's plot each pumpkin category using a different color. By passing an `ax` parameter to the `scatter` plotting function we can plot all points on the same graph:
+
+```python
+# Plot each variety in its own color, reusing the same axes object
+ax = None
+colors = ['red', 'blue', 'green', 'yellow']
+for i, var in enumerate(new_pumpkins['Variety'].unique()):
+    df = new_pumpkins[new_pumpkins['Variety'] == var]
+    ax = df.plot.scatter('DayOfYear', 'Price', ax=ax, c=colors[i], label=var)
+```
+
+
+
+Our investigation suggests that variety has more effect on the overall price than the actual selling date. We can see this with a bar graph:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Let's focus for the moment only on one pumpkin variety, the 'pie type', and see what effect the date has on the price:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+If we now calculate the correlation between `Price` and `DayOfYear` using the `corr` function, we will get something like `-0.27` - which means that training a predictive model makes sense.
+
+> Before training a linear regression model, it is important to make sure that our data is clean. Linear regression does not work well with missing values, thus it makes sense to get rid of all empty cells:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Another approach would be to fill those empty values with mean values from the corresponding column.
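+
+A minimal sketch of that alternative, assuming `pie_pumpkins` still contains missing values at this point:
+
+```python
+# Impute missing prices with the column mean instead of dropping rows
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```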
+
+## Simple Linear Regression
+
+[](https://youtu.be/e4c_UP2fSjg "ML for beginners - Linear and Polynomial Regression using Scikit-learn")
+
+> 🎥 Click the image above for a short video overview of linear and polynomial regression.
+
+To train our linear regression model, we will use the **Scikit-learn** library.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+We begin by separating input values (features) and the expected output (label) into separate numpy arrays:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Note that we had to perform a `reshape` on the input data in order for the linear regression package to understand it correctly. Linear regression expects a 2D-array as input, where each row of the array corresponds to a vector of input features. In our case, since we have only one input - we need an array of shape N×1, where N is the dataset size.
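+
+A quick sketch of what `reshape` does here, using a tiny made-up vector:
+
+```python
+import numpy as np
+
+v = np.array([10, 20, 30])
+print(v.shape)                 # (3,) - a flat vector
+print(v.reshape(-1, 1).shape)  # (3, 1) - N rows, one feature per row
+```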
+
+Then, we need to split the data into train and test datasets, so that we can validate our model after training:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Finally, training the actual linear regression model takes only two lines of code. We define the `LinearRegression` object, and fit it to our data using the `fit` method:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+The `LinearRegression` object after `fit`-ting contains all the coefficients of the regression, which can be accessed using the `.coef_` property. In our case, there is just one coefficient, which should be around `-0.017`. It means that prices seem to drop a bit with time, but not too much, around 2 cents per day. We can also access the intersection point of the regression with the Y-axis using `lin_reg.intercept_` - it will be around `21` in our case, indicating the price at the beginning of the year.
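+
+You can inspect those learned parameters directly; the numbers in the comments are the approximate values quoted above:
+
+```python
+print(lin_reg.coef_)       # e.g. [-0.017] - price change per day
+print(lin_reg.intercept_)  # e.g. ~21 - predicted price at the start of the year
+```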
+
+To see how accurate our model is, we can predict prices on a test dataset, and then measure how close our predictions are to the expected values. This can be done using the mean squared error (MSE) metric, which is the mean of all squared differences between expected and predicted values. (Note that the code below actually takes the square root of the MSE - the RMSE - so that the error is expressed in the same units as the price.)
+
+```python
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Our error seems to be around 2 points, which is ~17%. Another indicator of model quality is the **coefficient of determination**, which can be obtained like this:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+If the value is 0, it means that the model does not take input data into account, and acts as the *worst linear predictor*, which is simply a mean value of the result. A value of 1 means that we can perfectly predict all expected outputs. In our case, the coefficient is around 0.06, which is quite low.
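+
+Note that `score` above is computed on the training data. As a sketch, the same metric can be computed on the test set with `r2_score`, using the predictions from earlier:
+
+```python
+from sklearn.metrics import r2_score
+
+# Coefficient of determination on unseen data
+print(r2_score(y_test, pred))
+```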
+
+We can also plot the test data together with the regression line to better see how regression works in our case:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Polynomial Regression
+
+Another type of linear regression is polynomial regression. While sometimes there's a linear relationship between variables - the bigger the pumpkin in volume, the higher the price - sometimes these relationships can't be plotted as a plane or straight line.
+
+✅ Here are [some more examples](https://online.stat.psu.edu/stat501/lesson/9/9.8) of data that could use polynomial regression.
+
+Take another look at the relationship between date and price. Does this scatterplot seem like it should necessarily be analyzed by a straight line? Can't prices fluctuate? In this case, you can try polynomial regression.
+
+✅ Polynomials are mathematical expressions that might consist of one or more variables and coefficients
+
+Polynomial regression creates a curved line to better fit nonlinear data. In our case, if we include a squared `DayOfYear` variable in the input data, we should be able to fit our data with a parabolic curve, which will have a minimum at a certain point within the year.
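+
+The idea can be hand-rolled before reaching for the pipeline below (a sketch only; the pipeline in the next code block builds these features for us automatically):
+
+```python
+import numpy as np
+
+# Stack the original feature with its square: columns DayOfYear, DayOfYear^2
+X_poly = np.hstack([X, X**2])
+```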
+
+Scikit-learn includes a helpful [pipeline API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) to combine different steps of data processing together. A **pipeline** is a chain of **estimators**. In our case, we will create a pipeline that first adds polynomial features to our model, and then trains the regression:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+`PolynomialFeatures(2)` means that we will include all second-degree polynomials from the input data. In our case it will just mean `DayOfYear`<sup>2</sup>, but given two input variables X and Y, this will add X<sup>2</sup>, XY and Y<sup>2</sup>. We may also use higher degree polynomials if we want.
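+
+If you want to see exactly which features are generated, recent versions of Scikit-learn can list them (a small sketch with two illustrative inputs named X and Y):
+
+```python
+import numpy as np
+from sklearn.preprocessing import PolynomialFeatures
+
+pf = PolynomialFeatures(2)
+pf.fit(np.array([[2, 3]]))
+print(pf.get_feature_names_out(['X', 'Y']))
+# ['1' 'X' 'Y' 'X^2' 'X Y' 'Y^2'] - including the constant bias term
+```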
+
+Pipelines can be used in the same manner as the original `LinearRegression` object, i.e. we can `fit` the pipeline, and then use `predict` to get the prediction results. Here is the graph showing test data, and the approximation curve:
+
+
+
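+Evaluating the pipeline works exactly like evaluating the plain model earlier, for example:
+
+```python
+# Same RMSE-style metric as before, now on the polynomial pipeline
+pred = pipeline.predict(X_test)
+mse = np.sqrt(mean_squared_error(y_test, pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+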
+Using Polynomial Regression, we can get slightly lower MSE and higher determination, but not significantly. We need to take into account other features!
+
+> You can see that the minimal pumpkin prices are observed somewhere around Halloween. How can you explain this?
+
+🎃 Congratulations, you just created a model that can help predict the price of pie pumpkins. You can probably repeat the same procedure for all pumpkin types, but that would be tedious. Let's learn now how to take pumpkin variety into account in our model!
+
+## Categorical Features
+
+In the ideal world, we want to be able to predict prices for different pumpkin varieties using the same model. However, the `Variety` column is somewhat different from columns like `Month`, because it contains non-numeric values. Such columns are called **categorical**.
+
+[](https://youtu.be/DYGliioIAE0 "ML for beginners - Categorical Feature Predictions with Linear Regression")
+
+> 🎥 Click the image above for a short video overview of using categorical features.
+
+Here you can see how average price depends on variety:
+
+
+
+To take variety into account, we first need to convert it to numeric form, or **encode** it. There are several ways we can do it:
+
+* Simple **numeric encoding** will build a table of different varieties, and then replace the variety name by an index in that table. This is not the best idea for linear regression, because linear regression takes the actual numeric value of the index, and adds it to the result, multiplying by some coefficient. In our case, the relationship between the index number and the price is clearly non-linear, even if we make sure that indices are ordered in some specific way. (A short sketch of this encoding follows the list below.)
+* **One-hot encoding** will replace the `Variety` column by 4 different columns, one for each variety. Each column will contain `1` if the corresponding row is of a given variety, and `0` otherwise. This means that there will be four coefficients in linear regression, one for each pumpkin variety, responsible for the "starting price" (or rather "additional price") for that particular variety.
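+
+For contrast, here is what that index-based numeric encoding would look like (a sketch only; it is not used in the rest of the lesson):
+
+```python
+# pd.factorize builds the lookup table and replaces each variety by its index
+codes, uniques = pd.factorize(new_pumpkins['Variety'])
+print(uniques)    # the table of distinct varieties
+print(codes[:5])  # the first few rows encoded as indices
+```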
+
+The code below shows how we can one-hot encode a variety:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+To train linear regression using the one-hot encoded variety as input, we just need to initialize the `X` and `y` data correctly:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+The rest of the code is the same as what we used above to train linear regression. If you try it, you will see that the mean squared error is about the same, but we get a much higher coefficient of determination (~77%). To get even more accurate predictions, we can take more categorical features into account, as well as numeric features, such as `Month` or `DayOfYear`. To get one large array of features, we can use `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Here we also take into account `City` and `Package` type, which gives us MSE 2.84 (10%) and determination 0.94!
+
+## Putting it all together
+
+To make the best model, we can use combined (one-hot encoded categorical + numeric) data from the above example together with polynomial regression. Here is the complete code for your convenience:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+This should give us the best determination coefficient of almost 97%, and MSE=2.23 (~8% prediction error).
+
+| Model | MSE | Determination |
+|-------|-----|---------------|
+| `DayOfYear` Linear | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` Polynomial | 2.73 (17.0%) | 0.08 |
+| `Variety` Linear | 5.24 (19.7%) | 0.77 |
+| All features Linear | 2.84 (10.5%) | 0.94 |
+| All features Polynomial | 2.23 (8.25%) | 0.97 |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/3-Linear/assignment.md b/translations/hi/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..c20edc514
--- /dev/null
+++ b/translations/hi/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Create a regression model
+
+## Instructions
+
+In this lesson you were shown how to build a model using both linear and polynomial regression. Using this knowledge, find a dataset or use one of Scikit-learn's built-in sets to build a fresh model. Explain in your notebook why you chose the technique you did, and demonstrate your model's accuracy. If it is not accurate, explain why.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------- | ----------------------------------------------------------- | ------------------------- | ------------------------------- |
+| | presents a complete notebook with a well-documented solution | the solution is incomplete | the solution is flawed or buggy |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/3-Linear/solution/Julia/README.md b/translations/hi/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..ccba6ad62
--- /dev/null
+++ b/translations/hi/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/4-Logistic/README.md b/translations/hi/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..41325c82a
--- /dev/null
+++ b/translations/hi/2-Regression/4-Logistic/README.md
@@ -0,0 +1,314 @@
+# Logistic regression to predict categories
+
+
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Introduction
+
+In this final lesson on regression, one of the basic _classic_ ML techniques, we will take a look at logistic regression. You would use this technique to discover patterns to predict binary categories. Is this candy chocolate or not? Is this disease contagious or not? Will this customer choose this product or not?
+
+In this lesson, you will learn:
+
+- A new library for data visualization
+- Techniques for logistic regression
+
+✅ Deepen your understanding of working with this type of regression in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)
+
+## Prerequisite
+
+Having worked with the pumpkin data, we are now familiar enough with it to realize that there's one binary category that we can work with: `Color`.
+
+Let's build a logistic regression model to predict, given some variables, _what color a given pumpkin is likely to be_ (orange 🎃 or white 👻).
+
+> Why are we talking about binary classification in a lesson grouping about regression? Only for linguistic convenience, as logistic regression is [really a classification method](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), albeit a linear-based one. Learn about other ways to classify data in the next lesson group.
+
+## Define the question
+
+For our purposes, we will express this as a binary: 'White' or 'Not White'. There is also a 'striped' category in our dataset, but there are few instances of it, so we will not use it. It disappears once we remove null values from the dataset, anyway.
+
+> 🎃 Fun fact, we sometimes call white pumpkins 'ghost' pumpkins. They aren't very easy to carve, so they aren't as popular as the orange ones, but they look really cool! So we could also reformulate our question as: 'Ghost' or 'Not Ghost'. 👻
+
+## About logistic regression
+
+Logistic regression differs from linear regression, which you learned about previously, in a few important ways.
+
+[](https://youtu.be/KpeCT6nEpBY "ML for beginners - Understanding Logistic Regression for Machine Learning Classification")
+
+> 🎥 Click the image above for a short video overview of logistic regression.
+
+### Binary classification
+
+Logistic regression does not offer the same features as linear regression. The former offers a prediction about a binary category ("white or not white") whereas the latter is capable of predicting continual values, for example given the origin of a pumpkin and the time of harvest, _how much its price will rise_.
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### Other classifications
+
+There are other types of logistic regression, including multinomial and ordinal:
+
+- **Multinomial**, which involves having more than one category - "Orange, White, and Striped".
+- **Ordinal**, which involves ordered categories, useful if we wanted to order our outcomes logically, like our pumpkins that are ordered by a finite number of sizes (mini, sm, med, lg, xl, xxl).
+
+
+
+### Variables do not have to correlate
+
+Remember how linear regression worked better with more correlated variables? Logistic regression is the opposite - the variables don't have to correlate. That works for this data, which has somewhat weak correlations.
+
+### You need a lot of clean data
+
+Logistic regression will give more accurate results if you use more data; our small dataset is not optimal for this task, so keep that in mind.
+
+[](https://youtu.be/B2X4H9vcXTs "ML for beginners - Data Analysis and Preparation for Logistic Regression")
+
+> 🎥 Click the image above for a short video overview of preparing data for logistic regression
+
+✅ Think about the types of data that would lend themselves well to logistic regression
+
+## Exercise - tidy the data
+
+First, clean the data a bit, dropping null values and selecting only some of the columns:
+
+1. Add the following code:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+    You can always take a peek at your new dataframe:
+
+    ```python
+    pumpkins.info()
+    ```
+
+### Visualization - categorical plot
+
+By now you have loaded up the [starter notebook](../../../../2-Regression/4-Logistic/notebook.ipynb) with pumpkin data once again and cleaned it so as to preserve a dataset containing a few variables, including `Color`. Let's visualize the dataframe in the notebook using a different library: [Seaborn](https://seaborn.pydata.org/index.html), which is built on Matplotlib, which we used earlier.
+
+Seaborn offers some neat ways to visualize your data. For example, you can compare distributions of the data for each `Variety` and `Color` in a categorical plot.
+
+1. Create a categorical plot using the `catplot` function, using our pumpkin data `pumpkins`, and specifying a color mapping for each pumpkin category (orange or white):
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+    By observing the data, you can see how the Color data relates to Variety.
+
+    ✅ Given this categorical plot, what are some interesting explorations you can envision?
+
+### Data pre-processing: feature and label encoding
+Our pumpkins dataset contains string values for all its columns. Working with categorical data is intuitive for humans but not for machines. Machine learning algorithms work well with numbers. That's why encoding is a very important step in the data pre-processing phase, since it enables us to turn categorical data into numerical data, without losing any information. Good encoding leads to building a good model.
+
+For feature encoding there are two main types of encoders:
+
+1. Ordinal encoder: it suits well for ordinal variables, which are categorical variables where their data follows a logical ordering, like the `Item Size` column in our dataset. It creates a mapping such that each category is represented by a number, which is the order of the category in the column.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Categorical encoder: it suits well for nominal variables, which are categorical variables where their data does not follow a logical ordering, like all the features different from `Item Size` in our dataset. It is a one-hot encoding, which means that each category is represented by a binary column: the encoded variable is equal to 1 if the pumpkin belongs to that Variety and 0 otherwise.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+Then, `ColumnTransformer` is used to combine several encoders into a single step and apply them to the appropriate columns.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+On the other hand, to encode the label, we use the scikit-learn `LabelEncoder` class, which is a utility class to help normalize labels such that they contain only values between 0 and n_classes-1 (here, 0 and 1).
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+Once we have encoded the features and the label, we can merge them into a new dataframe `encoded_pumpkins`.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+✅ What are the advantages of using an ordinal encoder for the `Item Size` column?
+
+### Analyse relationships between variables
+
+Now that we have pre-processed our data, we can analyse the relationships between the features and the label to grasp an idea of how well the model will be able to predict the label given the features.
+The best way to perform this kind of analysis is plotting the data. We'll be using again the Seaborn `catplot` function, to visualize the relationships between `Item Size`, `Variety` and `Color` in a categorical plot. To better plot the data we'll be using the encoded `Item Size` column and the unencoded `Variety` column.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Use a swarm plot
+
+Since Color is a binary category (White or Not), it needs 'a [specialized approach](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar) to visualization'. There are other ways to visualize the relationship of this category with other variables.
+
+You can visualize variables side-by-side with Seaborn plots.
+
+1. Try a 'swarm' plot to show the distribution of values:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Watch out**: the code above might generate a warning, since seaborn may fail to represent such an amount of datapoints in a swarm plot. A possible solution is decreasing the size of the marker, by using the 'size' parameter. However, be aware that this affects the readability of the plot.
+
+> **🧮 मुझे गणित दिखाओ**
+>
+> लॉजिस्टिक रिग्रेशन 'अधिकतम संभावना' की अवधारणा पर निर्भर करता है, [सिग्मॉइड फंक्शन्स](https://wikipedia.org/wiki/Sigmoid_function) का उपयोग करके। एक 'सिग्मॉइड फंक्शन' एक प्लॉट पर 'S' आकार की तरह दिखता है। यह एक मान लेता है और इसे 0 और 1 के बीच कहीं मैप करता है। इसका कर्व भी 'लॉजिस्टिक कर्व' कहलाता है। इसका सूत्र इस प्रकार दिखता है:
+>
+> 
+>
+> जहां सिग्मॉइड का मध्यबिंदु x के 0 बिंदु पर होता है, L कर्व का अधिकतम मान होता है, और k कर्व की तीव्रता होती है। यदि फंक्शन का परिणाम 0.5 से अधिक होता है, तो संबंधित लेबल को बाइनरी विकल्प के '1' वर्ग में दिया जाएगा। यदि नहीं, तो इसे '0' के रूप में वर्गीकृत किया जाएगा।
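+
+Translated into code, the curve described above looks like this (a sketch with L = 1, k = 1 and midpoint x₀ = 0):
+
+```python
+import numpy as np
+
+def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+    return L / (1 + np.exp(-k * (x - x0)))
+
+print(sigmoid(0.0))   # 0.5 - the midpoint
+print(sigmoid(4.0))   # ~0.98 - would be classified as '1'
+print(sigmoid(-4.0))  # ~0.02 - would be classified as '0'
+```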
+
+## Build your model
+
+Building a model to find these binary classifications is surprisingly straightforward in Scikit-learn.
+
+[](https://youtu.be/MmZS2otPrQ8 "ML for beginners - Logistic Regression for classification of data")
+
+> 🎥 Click the image above for a short video overview of building a logistic regression model
+
+1. Select the variables you want to use in your classification model and split the training and test sets calling `train_test_split()`:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Now you can train your model, calling `fit()` with your training data, and print out its result:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+    Take a look at your model's scoreboard. It's not bad, considering you have only about 1000 rows of data:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Better comprehension via a confusion matrix
+
+While you can get a scoreboard report of [terms](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) by printing out the items above, you might be able to understand your model more easily by using a [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix) to help us understand how the model is performing.
+
+> 🎓 A '[confusion matrix](https://wikipedia.org/wiki/Confusion_matrix)' (or 'error matrix') is a table that expresses your model's true vs. false positives and negatives, thus gauging the accuracy of predictions.
+
+1. To use a confusion matrix, call `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+    Take a look at your model's confusion matrix:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+In Scikit-learn, the rows (axis 0) of the confusion matrix are the actual labels and the columns (axis 1) are the predicted labels.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+What's going on here? Let's say our model is asked to classify pumpkins between two binary categories, category 'white' and category 'not-white'.
+
+- If your model predicts a pumpkin as not white and it belongs to category 'not-white' in reality, we call it a true negative, shown by the top left number.
+- If your model predicts a pumpkin as white but it belongs to category 'not-white' in reality, we call it a false positive, shown by the top right number.
+- If your model predicts a pumpkin as not white but it belongs to category 'white' in reality, we call it a false negative, shown by the bottom left number.
+- If your model predicts a pumpkin as white and it belongs to category 'white' in reality, we call it a true positive, shown by the bottom right number.
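+
+Tying this back to the matrix above: there are 162 true negatives, 4 false positives, 11 false negatives and 22 true positives. As a quick sanity check against the classification report, precision for the 'white' class is TP / (TP + FP) = 22 / 26 ≈ 0.85 and recall is TP / (TP + FN) = 22 / 33 ≈ 0.67 - the same scores printed earlier.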
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/4-Logistic/assignment.md b/translations/hi/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..c26bdf426
--- /dev/null
+++ b/translations/hi/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# Retrying some regression
+
+## Instructions
+
+In the lesson, you used a subset of the pumpkin data. Now, go back to the original data and try to use all of it, cleaned and standardized, to build a logistic regression model.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------------------- |
+| | A notebook is presented with a well-explained and well-performing model | A notebook is presented with a model that performs minimally | A notebook is presented with a sub-performing model or none |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/4-Logistic/solution/Julia/README.md b/translations/hi/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..4ce20f476
--- /dev/null
+++ b/translations/hi/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/2-Regression/README.md b/translations/hi/2-Regression/README.md
new file mode 100644
index 000000000..0623a3442
--- /dev/null
+++ b/translations/hi/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Regression models for machine learning
+## Regional topic: Regression models for pumpkin prices in North America 🎃
+
+In North America, pumpkins are often carved into scary faces for Halloween. Let's discover more about these fascinating vegetables!
+
+
+> Photo by Beth Teutschmann on Unsplash
+
+## What you will learn
+
+[](https://youtu.be/5QnJtDad4iQ "Regression Introduction video - Click to Watch!")
+> 🎥 Click the image above for a quick introduction video to this section
+
+The lessons in this section cover types of regression in the context of machine learning. Regression models can help determine the _relationship_ between variables. This type of model can predict values such as length, temperature, or age, thus uncovering relationships between variables as it analyzes data points.
+
+In this series of lessons, you'll discover the differences between linear and logistic regression, and when you should prefer one over the other.
+
+[](https://youtu.be/XA3OaoW86R8 "ML for beginners - Introduction to Regression models for Machine Learning")
+
+> 🎥 Click the image above for a short video introducing regression models.
+
+In this group of lessons, you will get set up to begin machine learning tasks, including configuring Visual Studio Code to manage notebooks, the common environment for data scientists. You will discover Scikit-learn, a library for machine learning, and you will build your first models, focusing on regression models in this chapter.
+
+> There are useful low-code tools that can help you learn about working with regression models. Try [Azure ML for this task](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Lessons
+
+1. [Tools of the trade](1-Tools/README.md)
+2. [Managing data](2-Data/README.md)
+3. [Linear and polynomial regression](3-Linear/README.md)
+4. [Logistic regression](4-Logistic/README.md)
+
+---
+### Credits
+
+"ML with regression" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper)
+
+♥️ Quiz contributors include: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) and [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+The pumpkin dataset is suggested by [this project on Kaggle](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) and its data is sourced from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture. We have added some points around color based on variety to normalize the distribution. This data is in the public domain.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/3-Web-App/1-Web-App/README.md b/translations/hi/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..198bfc727
--- /dev/null
+++ b/translations/hi/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Build a web app to use an ML model
+
+In this lesson, you will train an ML model on a data set that's out of this world: _UFO sightings over the past century_, sourced from NUFORC's database.
+
+You will learn:
+
+- How to 'pickle' a trained model
+- How to use that model in a Flask app
+
+We will continue our use of notebooks to clean data and train our model, but you can take the process one step further by exploring the use of a model in a web app.
+
+To do this, you need to build a web app using Flask.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Building an app
+
+There are several ways to build web apps to consume machine learning models. Your web architecture may influence the way your model is trained. Imagine that you are working in a business where the data science group has trained a model that they want you to use in an app.
+
+### Considerations
+
+There are many questions you need to ask:
+
+- **Is it a web app or a mobile app?** If you are building a mobile app or need to use the model in an IoT context, you could use [TensorFlow Lite](https://www.tensorflow.org/lite/) and use the model in an Android or iOS app.
+- **Where will the model reside?** In the cloud or locally?
+- **Offline support.** Does the app have to work offline?
+- **What technology was used to train the model?** The chosen technology may influence the tooling you need to use.
+    - **Using TensorFlow.** If you are training a model using TensorFlow, for example, that ecosystem provides the ability to convert a TensorFlow model for use in a web app by using [TensorFlow.js](https://www.tensorflow.org/js/).
+    - **Using PyTorch.** If you are building a model using a library such as [PyTorch](https://pytorch.org/), you have the option to export it in [ONNX](https://onnx.ai/) (Open Neural Network Exchange) format for use in JavaScript web apps that can use the [Onnx Runtime](https://www.onnxruntime.ai/). This option will be explored in a future lesson for a Scikit-learn-trained model.
+    - **Using Lobe.ai or Azure Custom Vision.** If you are using an ML SaaS (Software as a Service) system such as [Lobe.ai](https://lobe.ai/) or [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) to train a model, this type of software provides ways to export the model for many platforms, including building a bespoke API to be queried in the cloud by your online application.
+
+You also have the opportunity to build an entire Flask web app that would be able to train the model itself in a web browser. This can also be done using TensorFlow.js in a JavaScript context.
+
+For our purposes, since we have been working with Python-based notebooks, let's explore the steps you need to take to export a trained model from such a notebook to a format readable by a Python-built web app.
+
+## Tools
+
+For this task, you need two tools: Flask and Pickle, both of which run on Python.
+
+✅ What's [Flask](https://palletsprojects.com/p/flask/)? Defined as a 'micro-framework' by its creators, Flask provides the basic features of web frameworks using Python and a templating engine to build web pages. Take a look at [this Learn module](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) to practice building with Flask.
+
+✅ What's [Pickle](https://docs.python.org/3/library/pickle.html)? Pickle 🥒 is a Python module that serializes and de-serializes a Python object structure. When you 'pickle' a model, you serialize or flatten its structure for use on the web. Be careful: pickle is not intrinsically secure, so be careful if prompted to 'un-pickle' a file. A pickled file has the suffix `.pkl`.
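+
+A minimal sketch of the round trip, with a throwaway object and file name:
+
+```python
+import pickle
+
+data = {'greeting': 'hello'}
+with open('example.pkl', 'wb') as f:
+    pickle.dump(data, f)       # serialize to disk
+with open('example.pkl', 'rb') as f:
+    print(pickle.load(f))      # {'greeting': 'hello'}
+```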
+
+## Exercise - clean your data
+
+In this lesson you'll use data from 80,000 UFO sightings, gathered by [NUFORC](https://nuforc.org) (The National UFO Reporting Center). This data has some interesting descriptions of UFO sightings, for example:
+
+- **Long example description.** "A man emerges from a beam of light that shines on a grassy field at night and he runs towards the Texas Instruments parking lot".
+- **Short example description.** "the lights chased us".
+
+The [ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) spreadsheet includes columns about the `city`, `state` and `country` where the sighting occurred, the object's `shape` and its `latitude` and `longitude`.
+
+In the blank [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) included in this lesson:
+
+1. Import `pandas`, `matplotlib`, and `numpy` as you did in previous lessons and import the ufos spreadsheet. You can take a look at a sample data set:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. Convert the ufos data to a small dataframe with fresh titles. Check the unique values in the `Country` field.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Now, you can reduce the amount of data we need to deal with by dropping any null values and only importing sightings between 1-60 seconds:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Import Scikit-learn's `LabelEncoder` library to convert the text values for countries to a number:
+
+    ✅ LabelEncoder encodes data alphabetically
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+    Your data should look like this:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
+
+## Exercise - build your model
+
+Now you can get ready to train a model by dividing the data into the training and testing group.
+
+1. Select the three features you want to train on as your X vector, and the y vector will be the `Country`. You want to be able to input `Seconds`, `Latitude` and `Longitude` and get a country id to return.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Train your model using logistic regression:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+The accuracy isn't bad **(around 95%)**, unsurprisingly, as `Country` and `Latitude/Longitude` correlate.
+
+The model you created isn't very revolutionary, as you should be able to infer a `Country` from its `Latitude` and `Longitude`, but it's a good exercise to try to train from raw data that you cleaned and exported, and then to use this model in a web app.
+
+## Exercise - 'pickle' your model
+
+Now, it's time to _pickle_ your model! You can do that in a few lines of code. Once it's _pickled_, load your pickled model and test it against a sample data array containing values for seconds, latitude and longitude,
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+The model returns **'3'**, which is the country code for the UK. Wild! 👽
+
+## Exercise - build a Flask app
+
+Now you can build a Flask app to call your model and return similar results, but in a more visually pleasing way.
+
+1. Start by creating a folder called **web-app** next to the _notebook.ipynb_ file where your _ufo-model.pkl_ file resides.
+
+1. In that folder create three more folders: **static**, with a folder **css** inside it, and **templates**. You should now have the following files and directories:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+    ✅ Refer to the solution folder for a view of the finished app
+
+1. The first file to create in the _web-app_ folder is the **requirements.txt** file. Like _package.json_ in a JavaScript app, this file lists dependencies required by the app. In **requirements.txt** add the lines:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Now, run this file by navigating to _web-app_:
+
+ ```bash
+ cd web-app
+ ```
+
+1. In your terminal type `pip install`, to install the libraries listed in _requirements.txt_:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Now, you're ready to create three more files to finish the app:
+
+    1. Create **app.py** in the root.
+    2. Create **index.html** in the _templates_ directory.
+    3. Create **styles.css** in the _static/css_ directory.
+
+1. Build out the _styles.css_ file with a few styles:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Next, build out the _index.html_ file:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+      <div class="grid">
+
+        <div class="box">
+
+          <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+          <form action="{{ url_for('predict')}}" method="post">
+            <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+            <input type="text" name="latitude" placeholder="Latitude" required="required" />
+            <input type="text" name="longitude" placeholder="Longitude" required="required" />
+            <button type="submit" class="btn">Predict country where the UFO is seen</button>
+          </form>
+
+          <p>{{ prediction_text }}</p>
+
+        </div>
+
+      </div>
+
+    </body>
+    </html>
+    ```
+
+    Take a look at the templating in this file. Notice the 'mustache' syntax around variables that will be provided by the app, like the prediction text: `{{}}`. There's also a form that posts a prediction to the `/predict` route.
+
+    Finally, you're ready to build the Python file that drives the consumption of the model and the display of predictions:
+
+1. In `app.py` add:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+    > 💡 Tip: when you include [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) while running the web app using Flask, any changes you make to your application will be reflected immediately without the need to restart the server. Beware! Don't enable this mode in a production app.
+
+If you run `python app.py` or `python3 app.py` - your web server starts up, locally, and you can fill out a short form to get an answer to your burning question about where UFOs have been sighted!
+
+Before doing that, take a look at the parts of `app.py`:
+
+1. First, dependencies are loaded and the app starts.
+1. Then, the model is imported.
+1. Then, index.html is rendered on the home route.
+
+On the `/predict` route, several things happen when the form is posted:
+
+1. The form variables are gathered and converted to a numpy array. They are then sent to the model and a prediction is returned.
+2. The Countries that we want displayed are re-rendered as readable text from their predicted country code, and that value is sent back to index.html to be rendered in the template.
+
+Using a model this way, with Flask and a pickled model, is relatively straightforward. The hardest thing is to understand what shape the data is that must be sent to the model to get a prediction. That all depends on how the model was trained. This one has three data points to be input in order to get a prediction.
+
+In a professional setting, you can see how good communication is necessary between the folks who train the model and those who consume it in a web or mobile app. In our case, it's only one person, you!
+
+---
+
+## 🚀 Challenge
+
+Instead of working in a notebook and importing the model to the Flask app, you could train the model right within the Flask app! Try converting your Python code in the notebook, perhaps after your data is cleaned, to train the model from within the app on a route called `train`. What are the pros and cons of pursuing this method?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Review & Self Study
+
+There are many ways to build a web app to consume ML models. Make a list of the ways you could use JavaScript or Python to build a web app to leverage machine learning. Consider architecture: should the model stay in the app or live in the cloud? If the latter, how would you access it? Draw out an architectural model for an applied ML web solution.
+
+## Assignment
+
+[Try a different model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/3-Web-App/1-Web-App/assignment.md b/translations/hi/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..23e3eb175
--- /dev/null
+++ b/translations/hi/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Try a different model
+
+## Instructions
+
+Now that you've built one web app using a trained regression model, use one of the models from an earlier regression lesson to redo this web app. You can keep the style or design it differently to reflect the pumpkin data. Be careful to change the inputs to reflect your model's training method.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------------------------- | --------------------------------------------------------- | --------------------------------------------------------- | -------------------------------------- |
+| | The web app runs as expected and is deployed to the cloud | The web app contains flaws or exhibits unexpected results | The web app does not function properly |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/3-Web-App/README.md b/translations/hi/3-Web-App/README.md
new file mode 100644
index 000000000..501a1bac2
--- /dev/null
+++ b/translations/hi/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# Build a web app to use your ML model
+
+In this section of the curriculum, you will be introduced to an applied ML topic: how to save your Scikit-learn model as a file that can be used to make predictions within a web application. Once the model is saved, you'll learn how to use it in a web app built in Flask. You'll first create a model using some data that's all about UFO sightings! Then, you'll build a web app that will allow you to input a number of seconds with a latitude and a longitude value to predict which country reported seeing a UFO.
+
+
+
+Photo by Michael Herren on Unsplash
+
+## Lessons
+
+1. [Build a Web App](1-Web-App/README.md)
+
+## Credits
+
+"Build a Web App" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper).
+
+♥️ The quizzes were written by Rohan Raj.
+
+The dataset is sourced from [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings).
+
+The web app architecture was suggested in part by [this article](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) and [this repo](https://github.com/abhinavsagar/machine-learning-deployment) by Abhinav Sagar.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/4-Classification/1-Introduction/README.md b/translations/hi/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..086ad8eab
--- /dev/null
+++ b/translations/hi/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# वर्गीकरण का परिचय
+
+इन चार पाठों में, आप क्लासिक मशीन लर्निंग के एक मौलिक फोकस - _वर्गीकरण_ का अन्वेषण करेंगे। हम एशिया और भारत के सभी शानदार व्यंजनों के बारे में एक डेटासेट का उपयोग करके विभिन्न वर्गीकरण एल्गोरिदम का उपयोग करेंगे। आशा है कि आप भूखे हैं!
+
+
+
+> इन पाठों में पैन-एशियाई व्यंजनों का जश्न मनाएं! छवि [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+वर्गीकरण [सुपरवाइज्ड लर्निंग](https://wikipedia.org/wiki/Supervised_learning) का एक रूप है जो प्रतिगमन तकनीकों के साथ बहुत कुछ समानता रखता है। यदि मशीन लर्निंग का सार यह है कि यह डेटा सेट्स का उपयोग करके चीजों के मान या नामों की भविष्यवाणी करता है, तो वर्गीकरण आमतौर पर दो समूहों में आता है: _बाइनरी वर्गीकरण_ और _मल्टीक्लास वर्गीकरण_।
+
+[](https://youtu.be/eg8DJYwdMyg "वर्गीकरण का परिचय")
+
+> 🎥 ऊपर की छवि पर क्लिक करें: MIT के John Guttag वर्गीकरण का परिचय देते हैं
+
+याद रखें:
+
+- **लिनियर रिग्रेशन** ने आपको चर के बीच संबंधों की भविष्यवाणी करने और यह सटीक भविष्यवाणी करने में मदद की कि एक नया डेटा पॉइंट उस रेखा के संबंध में कहाँ गिरेगा। तो, आप भविष्यवाणी कर सकते थे कि _सितंबर और दिसंबर में कद्दू की कीमत क्या होगी_।
+- **लॉजिस्टिक रिग्रेशन** ने आपको "बाइनरी श्रेणियाँ" खोजने में मदद की: इस मूल्य बिंदु पर, _क्या यह कद्दू नारंगी है या नहीं_?
+
+वर्गीकरण विभिन्न एल्गोरिदम का उपयोग करके यह निर्धारित करता है कि किसी डेटा पॉइंट का लेबल या वर्ग क्या हो सकता है। आइए इस व्यंजन डेटा के साथ काम करें और देखें कि क्या हम सामग्री के समूह का अवलोकन करके इसके मूल व्यंजन का निर्धारण कर सकते हैं।
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [यह पाठ R में भी उपलब्ध है!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### परिचय
+
+वर्गीकरण मशीन लर्निंग शोधकर्ता और डेटा वैज्ञानिक की मौलिक गतिविधियों में से एक है। एक बाइनरी मूल्य का बुनियादी वर्गीकरण ("क्या यह ईमेल स्पैम है या नहीं?") से लेकर जटिल छवि वर्गीकरण और कंप्यूटर दृष्टि का उपयोग करके विभाजन तक, डेटा को वर्गों में क्रमबद्ध करने और उससे प्रश्न पूछने में सक्षम होना हमेशा उपयोगी होता है।
+
+इस प्रक्रिया को अधिक वैज्ञानिक तरीके से बताने के लिए, आपकी वर्गीकरण विधि एक भविष्यवाणी मॉडल बनाती है जो आपको इनपुट चर और आउटपुट चर के बीच संबंध को मैप करने में सक्षम बनाती है।
+
+
+
+> वर्गीकरण एल्गोरिदम को संभालने के लिए बाइनरी बनाम मल्टीक्लास समस्याएं। इन्फोग्राफिक [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+हमारे डेटा को साफ करने, उसे विज़ुअलाइज़ करने और हमारे एमएल कार्यों के लिए तैयार करने की प्रक्रिया शुरू करने से पहले, आइए जानें कि मशीन लर्निंग का उपयोग करके डेटा को वर्गीकृत करने के विभिन्न तरीके क्या हैं।
+
+[सांख्यिकी](https://wikipedia.org/wiki/Statistical_classification) से व्युत्पन्न, क्लासिक मशीन लर्निंग में वर्गीकरण `smoker`, `weight` और `age` जैसी विशेषताओं (फीचर्स) का उपयोग करके यह निर्धारित करता है कि _X बीमारी के विकसित होने की कितनी संभावना है_। एक सुपरवाइज्ड लर्निंग तकनीक के रूप में, जो आपके पहले किए गए रिग्रेशन अभ्यासों के समान है, इसमें आपका डेटा लेबल किया हुआ होता है और एमएल एल्गोरिदम उन लेबलों का उपयोग डेटासेट के वर्गों की भविष्यवाणी करने और उन्हें एक समूह या परिणाम में असाइन करने के लिए करते हैं।
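+
+इस विचार का एक छोटा, काल्पनिक स्केच (फीचर्स `smoker`, `weight`, `age` और 'disease' लेबल वाला खिलौना डेटा केवल उदाहरण के लिए गढ़ा गया है):
+
+```python
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+
+# खिलौना लेबल्ड डेटा: फीचर्स और एक 'disease' लेबल
+df = pd.DataFrame({
+    'smoker': [1, 0, 1, 0, 1, 0],
+    'weight': [90, 60, 85, 70, 95, 65],
+    'age':    [50, 30, 60, 25, 55, 35],
+    'disease': [1, 0, 1, 0, 1, 0]
+})
+
+X = df[['smoker', 'weight', 'age']]
+y = df['disease']
+
+# लेबल्ड डेटा पर एक सुपरवाइज्ड क्लासिफायर प्रशिक्षित करें
+model = LogisticRegression().fit(X, y)
+
+# एक नए (काल्पनिक) व्यक्ति के लिए भविष्यवाणी
+new_person = pd.DataFrame({'smoker': [1], 'weight': [80], 'age': [45]})
+print(model.predict(new_person))
+```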
+
+✅ कल्पना करें कि आपके पास व्यंजनों के बारे में एक डेटासेट है। एक मल्टीक्लास मॉडल किस प्रकार के प्रश्नों का उत्तर दे सकता है? एक बाइनरी मॉडल किस प्रकार के प्रश्नों का उत्तर दे सकता है? यदि आप यह निर्धारित करना चाहते हैं कि कोई विशेष व्यंजन मेथी का उपयोग करता है या नहीं? या यदि आप यह देखना चाहते हैं कि, एक उपहार में स्टार ऐनीज़, आर्टिचोक, फूलगोभी, और हॉर्सरैडिश से भरी एक किराने की थैली प्राप्त करने पर, आप एक विशिष्ट भारतीय व्यंजन बना सकते हैं?
+
+[![पागल रहस्यमय बास्केट](https://img.youtube.com/vi/GuTeDbaNoEU/0.jpg)](https://youtu.be/GuTeDbaNoEU "पागल रहस्यमय बास्केट")
+
+> 🎥 ऊपर की छवि पर क्लिक करें। शो 'चॉप्ड' का पूरा आधार 'मिस्ट्री बास्केट' है जहां शेफ को कुछ यादृच्छिक सामग्री से कोई डिश बनानी होती है। निश्चित रूप से एक एमएल मॉडल ने मदद की होगी!
+
+## हैलो 'क्लासिफायर'
+
+इस व्यंजन डेटासेट से हम जो प्रश्न पूछना चाहते हैं वह वास्तव में एक **मल्टीक्लास प्रश्न** है, क्योंकि हमारे पास काम करने के लिए कई संभावित राष्ट्रीय व्यंजन हैं। सामग्री के एक बैच को देखते हुए, इन कई वर्गों में से कौन सा डेटा फिट होगा?
+
+स्किट-लर्न विभिन्न प्रकार की समस्याओं को हल करने के लिए डेटा को वर्गीकृत करने के लिए कई अलग-अलग एल्गोरिदम प्रदान करता है। अगले दो पाठों में, आप इनमें से कई एल्गोरिदम के बारे में जानेंगे।
+
+## अभ्यास - अपने डेटा को साफ और संतुलित करें
+
+इस प्रोजेक्ट को शुरू करने से पहले पहला कार्य अपने डेटा को साफ और **संतुलित** करना है ताकि बेहतर परिणाम मिल सकें। इस फ़ोल्डर की रूट में खाली _notebook.ipynb_ फ़ाइल से प्रारंभ करें।
+
+सबसे पहले जो इंस्टॉल करना है वह है [imblearn](https://imbalanced-learn.org/stable/)। यह एक स्किट-लर्न पैकेज है जो आपको डेटा को बेहतर संतुलित करने की अनुमति देगा (आप इस कार्य के बारे में एक मिनट में और जानेंगे)।
+
+1. `imblearn` को इंस्टॉल करने के लिए, `pip install` चलाएँ, जैसे:
+
+ ```python
+   !pip install imblearn
+ ```
+
+1. अपने डेटा को आयात करने और उसे विज़ुअलाइज़ करने के लिए आवश्यक पैकेज आयात करें, साथ ही `imblearn` से `SMOTE` भी आयात करें।
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ अब आप डेटा आयात करने के लिए तैयार हैं।
+
+1. अगला कार्य डेटा आयात करना होगा:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+   `read_csv()` csv फ़ाइल _cuisines.csv_ की सामग्री को पढ़कर उसे `df` वेरिएबल में रख देगा।
+
+1. डेटा के आकार की जाँच करें:
+
+ ```python
+ df.head()
+ ```
+
+ पहली पाँच पंक्तियाँ इस प्रकार दिखती हैं:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. `info()` कॉल करके इस डेटा के बारे में जानकारी प्राप्त करें:
+
+ ```python
+ df.info()
+ ```
+
+ आपका आउटपुट इस प्रकार दिखता है:
+
+ ```output
+   <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## अभ्यास - व्यंजनों के बारे में सीखना
+
+अब काम अधिक दिलचस्प हो जाता है। आइए देखें कि डेटा का वितरण प्रत्येक व्यंजन के अनुसार कैसा है।
+
+1. `barh()` कॉल करके डेटा को बार के रूप में प्लॉट करें:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ व्यंजनों की संख्या सीमित है, लेकिन डेटा का वितरण असमान है। आप इसे ठीक कर सकते हैं! ऐसा करने से पहले, थोड़ा और अन्वेषण करें।
+
+1. पता करें कि प्रत्येक व्यंजन के लिए कितना डेटा उपलब्ध है और इसे प्रिंट करें:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ आउटपुट इस प्रकार दिखता है:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## सामग्री की खोज
+
+अब आप डेटा में गहराई से जाकर जान सकते हैं कि प्रत्येक व्यंजन के लिए विशिष्ट सामग्री क्या हैं। आपको वह बार-बार दोहराया जाने वाला डेटा साफ करना चाहिए जो व्यंजनों के बीच भ्रम पैदा करता है, तो आइए इस समस्या के बारे में जानें।
+
+1. एक फ़ंक्शन `create_ingredient_df()` बनाएं जो सामग्री का डेटा फ्रेम बनाए। यह फ़ंक्शन एक अनुपयोगी कॉलम को हटाकर और सामग्री को उनकी गिनती के अनुसार क्रमबद्ध करके शुरू होगा:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ अब आप इस फ़ंक्शन का उपयोग करके प्रत्येक व्यंजन के लिए शीर्ष दस सबसे लोकप्रिय सामग्री का अंदाजा लगा सकते हैं।
+
+1. अब `create_ingredient_df()` को कॉल करें और `barh()` कॉल करके परिणाम प्लॉट करें:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. जापानी डेटा के लिए भी ऐसा ही करें:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. अब चीनी सामग्री के लिए:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. भारतीय सामग्री को प्लॉट करें:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. अंत में, कोरियाई सामग्री को प्लॉट करें:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. अब, `drop()` कॉल करके उन सामान्य सामग्रियों को हटा दें जो विभिन्न व्यंजनों के बीच भ्रम पैदा करती हैं:
+
+ हर कोई चावल, लहसुन और अदरक से प्यार करता है!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## डेटासेट को संतुलित करें
+
+अब जब आपने डेटा को साफ कर लिया है, तो इसे संतुलित करने के लिए [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - "सिंथेटिक माइनॉरिटी ओवर-सैंपलिंग तकनीक" का उपयोग करें।
+
+1. `fit_resample()` कॉल करें, यह रणनीति इंटरपोलेशन द्वारा नए नमूने उत्पन्न करती है।
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+   अपने डेटा को संतुलित करके, आपको इसे वर्गीकृत करने में बेहतर परिणाम मिलेंगे। एक बाइनरी वर्गीकरण के बारे में सोचें। यदि आपका अधिकांश डेटा एक ही वर्ग का है, तो एक एमएल मॉडल उसी वर्ग की अधिक बार भविष्यवाणी करेगा, सिर्फ इसलिए कि उसके लिए अधिक डेटा है। डेटा को संतुलित करना इस विषमता को दूर करने में मदद करता है।
+
+1. अब आप प्रति लेबल नमूनों की गिनती जाँच सकते हैं:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ आपका आउटपुट इस प्रकार दिखता है:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ डेटा अच्छा और साफ है, संतुलित है, और बहुत स्वादिष्ट है!
+
+1. अंतिम चरण आपके संतुलित डेटा को, जिसमें लेबल और फीचर्स शामिल हैं, एक नए डेटा फ्रेम में सहेजना है जिसे एक फ़ाइल में निर्यात किया जा सकता है:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. आप `transformed_df.head()` और `transformed_df.info()` का उपयोग करके डेटा पर एक और नज़र डाल सकते हैं। भविष्य के पाठों में उपयोग के लिए इस डेटा की एक प्रति सहेजें:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ यह ताज़ा CSV अब रूट डेटा फ़ोल्डर में पाई जा सकती है।
+
+---
+
+## 🚀चुनौती
+
+इस पाठ्यक्रम में कई दिलचस्प डेटासेट शामिल हैं। `data` फ़ोल्डरों के माध्यम से खुदाई करें और देखें कि क्या कोई ऐसा डेटासेट है जो बाइनरी या मल्टी-क्लास वर्गीकरण के लिए उपयुक्त होगा? आप इस डेटासेट से कौन से प्रश्न पूछेंगे?
+
+## [पोस्ट-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## समीक्षा और आत्म-अध्ययन
+
+SMOTE के API का अन्वेषण करें। इसे किस प्रकार के उपयोग मामलों के लिए सबसे अच्छा उपयोग किया जाता है? यह किन समस्याओं का समाधान करता है?
+
+## असाइनमेंट
+
+[वर्गीकरण विधियों का अन्वेषण करें](assignment.md)
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/1-Introduction/assignment.md b/translations/hi/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..c9a305669
--- /dev/null
+++ b/translations/hi/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# वर्गीकरण विधियों का अन्वेषण करें
+
+## निर्देश
+
+[Scikit-learn दस्तावेज़](https://scikit-learn.org/stable/supervised_learning.html) में आपको डेटा को वर्गीकृत करने के कई तरीके मिलेंगे। इन दस्तावेज़ों में थोड़ा सा खोज करें: आपका लक्ष्य वर्गीकरण विधियों को खोजना और इस पाठ्यक्रम में एक डेटासेट, एक प्रश्न जिसे आप इससे पूछ सकते हैं, और एक वर्गीकरण तकनीक को मिलाना है। एक स्प्रेडशीट या तालिका एक .doc फ़ाइल में बनाएं और समझाएं कि डेटासेट वर्गीकरण एल्गोरिदम के साथ कैसे काम करेगा।
+
+## मूल्यांकन
+
+| मापदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+|--------|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| | एक दस्तावेज़ प्रस्तुत किया गया है जिसमें 5 एल्गोरिदम के साथ एक वर्गीकरण तकनीक का अवलोकन है। अवलोकन अच्छी तरह से समझाया गया है और विस्तृत है। | एक दस्तावेज़ प्रस्तुत किया गया है जिसमें 3 एल्गोरिदम के साथ एक वर्गीकरण तकनीक का अवलोकन है। अवलोकन अच्छी तरह से समझाया गया है और विस्तृत है। | एक दस्तावेज़ प्रस्तुत किया गया है जिसमें तीन से कम एल्गोरिदम के साथ एक वर्गीकरण तकनीक का अवलोकन है और अवलोकन न तो अच्छी तरह से समझाया गया है और न ही विस्तृत है। |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/1-Introduction/solution/Julia/README.md b/translations/hi/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..9acee712b
--- /dev/null
+++ b/translations/hi/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/2-Classifiers-1/README.md b/translations/hi/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..b4433125d
--- /dev/null
+++ b/translations/hi/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,77 @@
+# व्यंजन वर्गीकरणकर्ता 1
+
+इस पाठ में, आप पिछले पाठ से सहेजे गए संतुलित और साफ डेटा से भरे डेटासेट का उपयोग करेंगे, जो सभी व्यंजनों के बारे में है।
+
+आप इस डेटासेट का उपयोग विभिन्न वर्गीकरणकर्ताओं के साथ करेंगे ताकि _सामग्री के एक समूह के आधार पर किसी दिए गए राष्ट्रीय व्यंजन की भविष्यवाणी की जा सके_। ऐसा करते समय, आप उन तरीकों के बारे में अधिक जानेंगे जिनसे एल्गोरिदम को वर्गीकरण कार्यों के लिए उपयोग किया जा सकता है।
+
+## [पाठ पूर्व क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+# तैयारी
+
+[पाठ 1](../1-Introduction/README.md) पूरा करने के बाद, सुनिश्चित करें कि इन चार पाठों के लिए रूट `/data` फ़ोल्डर में एक _cleaned_cuisines.csv_ फ़ाइल मौजूद है।
+
+## अभ्यास - एक राष्ट्रीय व्यंजन की भविष्यवाणी करें
+
+1. इस पाठ की _notebook.ipynb_ फ़ाइल में काम करते हुए, उस फ़ाइल को पांडस लाइब्रेरी के साथ आयात करें:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ डेटा इस प्रकार दिखता है:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. अब, कई और लाइब्रेरी आयात करें:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. प्रशिक्षण के लिए X और y निर्देशांक को दो डेटा फ्रेम में विभाजित करें। `cuisine` लेबल डेटा फ्रेम हो सकता है:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ यह इस प्रकार दिखेगा:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. `drop()` को कॉल करके `Unnamed: 0` कॉलम और `cuisine` कॉलम को हटा दें। शेष डेटा को प्रशिक्षण योग्य फीचर्स के रूप में सहेजें:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ आपके फीचर्स इस प्रकार दिखते हैं:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित AI अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या गलतियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/2-Classifiers-1/assignment.md b/translations/hi/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..5ae52e095
--- /dev/null
+++ b/translations/hi/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# सॉल्वर्स का अध्ययन करें
+
+## निर्देश
+
+इस पाठ में आपने विभिन्न सॉल्वर्स के बारे में सीखा जो एल्गोरिदम को मशीन लर्निंग प्रक्रिया के साथ जोड़ते हैं ताकि एक सटीक मॉडल बनाया जा सके। पाठ में सूचीबद्ध सॉल्वर्स का अवलोकन करें और उनमें से दो का चयन करें। अपने शब्दों में, इन दो सॉल्वर्स की तुलना और विरोधाभास करें। वे किस प्रकार की समस्या का समाधान करते हैं? वे विभिन्न डेटा संरचनाओं के साथ कैसे काम करते हैं? आप एक को दूसरे के ऊपर क्यों चुनेंगे?
+
+## मूल्यांकन मानदंड
+
+| मानदंड | उत्कृष्टता | पर्याप्तता | सुधार की आवश्यकता |
+| -------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | एक .doc फ़ाइल प्रस्तुत की गई है जिसमें प्रत्येक सॉल्वर पर विचारशील तुलना के साथ दो पैराग्राफ हैं। | एक .doc फ़ाइल प्रस्तुत की गई है जिसमें केवल एक पैराग्राफ है | असाइनमेंट अधूरा है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/hi/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..1f1876a24
--- /dev/null
+++ b/translations/hi/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित AI अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/3-Classifiers-2/README.md b/translations/hi/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..5c815d65e
--- /dev/null
+++ b/translations/hi/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# व्यंजन वर्गीकरणकर्ता 2
+
+इस दूसरे वर्गीकरण पाठ में, आप संख्यात्मक डेटा को वर्गीकृत करने के और भी तरीके जानेंगे। आप यह भी सीखेंगे कि एक वर्गीकरणकर्ता को दूसरे पर चुनने के परिणाम क्या हो सकते हैं।
+
+## [पाठ-पूर्व क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### पूर्वापेक्षा
+
+हम मानते हैं कि आपने पिछले पाठ पूरे कर लिए हैं और आपके `data` फ़ोल्डर में, जो इस 4-पाठ वाले फ़ोल्डर के रूट में है, _cleaned_cuisines.csv_ नामक एक साफ़ किया हुआ डेटासेट मौजूद है।
+
+### तैयारी
+
+हमने आपके _notebook.ipynb_ फ़ाइल को साफ़ किए गए डेटासेट के साथ लोड कर दिया है और इसे X और y डेटा फ्रेम में विभाजित कर दिया है, जो मॉडल निर्माण प्रक्रिया के लिए तैयार हैं।
+
+## एक वर्गीकरण मानचित्र
+
+पहले, आपने माइक्रोसॉफ्ट की चीट शीट का उपयोग करके डेटा को वर्गीकृत करने के विभिन्न विकल्पों के बारे में सीखा था। Scikit-learn एक समान, लेकिन अधिक विस्तृत चीट शीट प्रदान करता है जो आपके एस्टिमेटर्स (वर्गीकरणकर्ताओं के लिए एक और शब्द) के विकल्पों को और संकीर्ण करने में मदद कर सकती है:
+
+
+> टिप: [इस मानचित्र को ऑनलाइन देखें](https://scikit-learn.org/stable/tutorial/machine_learning_map/) और दस्तावेज़ पढ़ने के लिए रास्ते पर क्लिक करें।
+
+### योजना
+
+यह मानचित्र आपके डेटा की स्पष्ट समझ होने के बाद बहुत सहायक होता है, क्योंकि आप इसके रास्तों पर 'चल' सकते हैं और निर्णय ले सकते हैं:
+
+- हमारे पास >50 नमूने हैं
+- हम एक श्रेणी की भविष्यवाणी करना चाहते हैं
+- हमारे पास लेबल किया हुआ डेटा है
+- हमारे पास 100K से कम नमूने हैं
+- ✨ हम एक Linear SVC चुन सकते हैं
+- यदि यह काम नहीं करता है, क्योंकि हमारे पास संख्यात्मक डेटा है
+ - हम ✨ KNeighbors Classifier आज़मा सकते हैं
+ - यदि यह काम नहीं करता है, तो ✨ SVC और ✨ Ensemble Classifiers आज़माएं
+
+यह अनुसरण करने के लिए एक बहुत ही सहायक मार्ग है।
+
+## अभ्यास - डेटा विभाजित करें
+
+इस मार्ग का अनुसरण करते हुए, हमें कुछ पुस्तकालयों को आयात करके शुरू करना चाहिए।
+
+1. आवश्यक पुस्तकालयों को आयात करें:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. अपने प्रशिक्षण और परीक्षण डेटा को विभाजित करें:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC वर्गीकरणकर्ता
+
+सपोर्ट-वेक्टर क्लस्टरिंग (SVC) सपोर्ट-वेक्टर मशीनों के परिवार का एक हिस्सा है (इनके बारे में नीचे और जानें)। इस विधि में, आप लेबल्स को क्लस्टर करने के लिए एक 'कर्नेल' चुन सकते हैं। 'C' पैरामीटर 'रेगुलराइजेशन' को संदर्भित करता है जो पैरामीटरों के प्रभाव को नियंत्रित करता है। कर्नेल [कई](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) में से एक हो सकता है; यहाँ हम इसे 'linear' पर सेट करते हैं ताकि हम linear SVC का लाभ उठा सकें। Probability डिफ़ॉल्ट रूप से 'false' है; यहाँ हम इसे 'true' पर सेट करते हैं ताकि संभाव्यता अनुमान प्राप्त कर सकें। हम रैंडम स्टेट को '0' पर सेट करते हैं ताकि डेटा को शफल किया जा सके और संभाव्यताएँ प्राप्त की जा सकें।
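+
+नीचे एक छोटा, स्वतंत्र स्केच है (पाठ के वास्तविक डेटा के बजाय गढ़े गए खिलौना डेटा के साथ) जो दिखाता है कि `probability=True` के साथ `SVC` संभाव्यता अनुमान कैसे देता है:
+
+```python
+from sklearn.svm import SVC
+import numpy as np
+
+# खिलौना डेटा: दो स्पष्ट समूहों में 20 बिंदु
+rng = np.random.RandomState(0)
+X = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(1, 0.2, (10, 2))])
+y = np.array([0] * 10 + [1] * 10)
+
+# linear कर्नेल, C=10, संभाव्यता अनुमान सक्षम, random_state=0
+model = SVC(kernel='linear', C=10, probability=True, random_state=0)
+model.fit(X, y)
+
+print(model.predict([[0.8, 0.9]]))        # अनुमानित वर्ग
+print(model.predict_proba([[0.8, 0.9]]))  # प्रत्येक वर्ग की संभाव्यता
+```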
+
+### अभ्यास - एक linear SVC लागू करें
+
+क्लासिफ़ायरों की एक array बनाकर शुरू करें। हम परीक्षण करते समय इस array में क्रमशः जोड़ते जाएंगे।
+
+1. एक Linear SVC के साथ शुरू करें:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. अपने मॉडल को Linear SVC का उपयोग करके प्रशिक्षित करें और एक रिपोर्ट प्रिंट करें:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ परिणाम काफी अच्छा है:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors वर्गीकरणकर्ता
+
+K-Neighbors "पड़ोसियों" परिवार का हिस्सा है, जिसका उपयोग पर्यवेक्षित और बिना पर्यवेक्षण दोनों प्रकार के शिक्षण के लिए किया जा सकता है। इस विधि में, एक पूर्वनिर्धारित संख्या के बिंदु बनाए जाते हैं और डेटा को इन बिंदुओं के चारों ओर एकत्र किया जाता है ताकि डेटा के लिए सामान्यीकृत लेबल की भविष्यवाणी की जा सके।
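+
+एक छोटा, काल्पनिक स्केच: `KNeighborsClassifier` का पहला पैरामीटर `n_neighbors` (पड़ोसियों की संख्या) है, इसलिए इसे नाम देकर स्पष्ट रूप से सेट करना अच्छा अभ्यास है:
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+import numpy as np
+
+# खिलौना डेटा: दो समूह
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
+              [5, 5], [5, 6], [6, 5], [6, 6]])
+y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
+
+# n_neighbors को स्पष्ट रूप से नाम देकर सेट करें
+knn = KNeighborsClassifier(n_neighbors=3)
+knn.fit(X, y)
+
+print(knn.predict([[5.5, 5.5]]))  # निकटतम 3 पड़ोसियों के बहुमत से: वर्ग 1
+```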
+
+### अभ्यास - K-Neighbors वर्गीकरणकर्ता लागू करें
+
+पिछला वर्गीकरणकर्ता अच्छा था और डेटा के साथ अच्छी तरह से काम किया, लेकिन शायद हम बेहतर सटीकता प्राप्त कर सकते हैं। एक K-Neighbors वर्गीकरणकर्ता आज़माएं।
+
+1. अपने क्लासिफ़ायर array में एक लाइन जोड़ें (Linear SVC आइटम के बाद एक कॉमा जोड़ें):
+
+ ```python
+   'KNN classifier': KNeighborsClassifier(C),  # note: the positional argument is n_neighbors, so C (=10) becomes the neighbor count
+ ```
+
+ परिणाम थोड़ा खराब है:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors) के बारे में जानें
+
+## सपोर्ट वेक्टर क्लासिफायर
+
+सपोर्ट-वेक्टर क्लासिफायर [सपोर्ट-वेक्टर मशीन](https://wikipedia.org/wiki/Support-vector_machine) परिवार का हिस्सा हैं, जिनका उपयोग वर्गीकरण और प्रतिगमन कार्यों के लिए किया जाता है। SVMs "प्रशिक्षण उदाहरणों को स्थान में बिंदुओं पर मैप करते हैं" ताकि दो श्रेणियों के बीच की दूरी को अधिकतम किया जा सके। इसके बाद का डेटा इस स्थान में मैप किया जाता है ताकि उनकी श्रेणी की भविष्यवाणी की जा सके।
+
+### अभ्यास - सपोर्ट वेक्टर क्लासिफायर लागू करें
+
+आइए सपोर्ट वेक्टर क्लासिफायर के साथ थोड़ी बेहतर सटीकता प्राप्त करने का प्रयास करें।
+
+1. K-Neighbors आइटम के बाद एक कॉमा जोड़ें, और फिर इस लाइन को जोड़ें:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ परिणाम काफी अच्छा है!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+   ✅ [सपोर्ट-वेक्टर](https://scikit-learn.org/stable/modules/svm.html#svm) के बारे में जानें
+
+## एन्सेम्बल क्लासिफायर
+
+आइए इस रास्ते के अंत तक जाएं, भले ही पिछला परीक्षण काफी अच्छा था। आइए कुछ 'एन्सेम्बल क्लासिफायर', विशेष रूप से रैंडम फॉरेस्ट और AdaBoost आज़माएं:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+परिणाम बहुत अच्छा है, विशेष रूप से रैंडम फॉरेस्ट के लिए:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ [एन्सेम्बल क्लासिफायर](https://scikit-learn.org/stable/modules/ensemble.html) के बारे में जानें
+
+इस मशीन लर्निंग विधि में "कई बेस एस्टिमेटर्स की भविष्यवाणियों को मिलाया जाता है" ताकि मॉडल की गुणवत्ता में सुधार हो सके। हमारे उदाहरण में, हमने रैंडम ट्री और AdaBoost का उपयोग किया।
+
+- [रैंडम फॉरेस्ट](https://scikit-learn.org/stable/modules/ensemble.html#forest), एक एवरेजिंग विधि, एक 'फॉरेस्ट' बनाता है जिसमें 'डिसीजन ट्री' होते हैं, जो ओवरफिटिंग से बचने के लिए रैंडमनेस से भरे होते हैं। n_estimators पैरामीटर पेड़ों की संख्या पर सेट होता है।
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) एक क्लासिफायर को डेटासेट पर फिट करता है और फिर उसी डेटासेट पर उस क्लासिफायर की प्रतियों को फिट करता है। यह गलत तरीके से वर्गीकृत आइटमों के वज़न पर ध्यान केंद्रित करता है और अगले क्लासिफायर के लिए फिट को समायोजित करता है ताकि उसे सही किया जा सके।
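+
+एक छोटा, काल्पनिक स्केच (खिलौना डेटा पर; वास्तविक प्रभाव आपके डेटा पर निर्भर करेगा) जो दिखाता है कि `n_estimators` बदलने पर क्रॉस-वैलिडेशन स्कोर कैसे बदल सकता है:
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+from sklearn.datasets import make_classification
+
+# खिलौना वर्गीकरण डेटासेट
+X, y = make_classification(n_samples=500, n_features=20, random_state=0)
+
+# पेड़ों की संख्या बदलकर औसत सटीकता की तुलना करें
+for n in [10, 50, 100, 200]:
+    rf = RandomForestClassifier(n_estimators=n, random_state=0)
+    scores = cross_val_score(rf, X, y, cv=5)
+    print(f"n_estimators={n}: mean accuracy={scores.mean():.3f}")
+```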
+
+---
+
+## 🚀चुनौती
+
+इनमें से प्रत्येक तकनीक में बहुत सारे पैरामीटर होते हैं जिन्हें आप समायोजित कर सकते हैं। प्रत्येक के डिफ़ॉल्ट पैरामीटरों के बारे में शोध करें और सोचें कि इन पैरामीटरों को समायोजित करने से मॉडल की गुणवत्ता के लिए क्या मतलब होगा।
+
+## [पाठ-उत्तर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## समीक्षा और स्व-अध्ययन
+
+इन पाठों में बहुत सारा शब्दजाल है, इसलिए एक मिनट लें और [इस सूची](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) को देखें, जिसमें उपयोगी शब्दावली है!
+
+## असाइनमेंट
+
+[पैरामीटर खेल](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/3-Classifiers-2/assignment.md b/translations/hi/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..f5b6b3170
--- /dev/null
+++ b/translations/hi/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# पैरामीटर प्ले
+
+## निर्देश
+
+जब आप इन क्लासिफायर के साथ काम करते हैं तो कई पैरामीटर डिफ़ॉल्ट रूप से सेट होते हैं। VS Code में Intellisense आपकी मदद कर सकता है उन्हें गहराई से समझने में। इस पाठ में से किसी एक ML Classification Technique को अपनाएं और विभिन्न पैरामीटर मानों को बदलते हुए मॉडल्स को फिर से प्रशिक्षित करें। एक नोटबुक बनाएं जिसमें बताया जाए कि कुछ बदलाव मॉडल की गुणवत्ता को कैसे सुधारते हैं जबकि अन्य इसे खराब करते हैं। अपने उत्तर में विस्तृत रहें।
+
+## मूल्यांकन मापदंड
+
+| मापदंड | उत्कृष्टता | पर्याप्तता | सुधार की आवश्यकता |
+| ------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------- |
+| | एक नोटबुक प्रस्तुत की गई है जिसमें एक क्लासिफायर पूरी तरह से निर्मित है और उसके पैरामीटर समायोजित किए गए हैं और टेक्स्टबॉक्स में बदलावों की व्याख्या की गई है | एक नोटबुक आंशिक रूप से प्रस्तुत की गई है या खराब तरीके से समझाई गई है | एक नोटबुक बग्गी या दोषपूर्ण है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/hi/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..b1eca0f2d
--- /dev/null
+++ b/translations/hi/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियां या गलतियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/4-Applied/README.md b/translations/hi/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..87dd7b3e7
--- /dev/null
+++ b/translations/hi/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# एक व्यंजन सिफारिश वेब ऐप बनाएं
+
+इस पाठ में, आप कुछ तकनीकों का उपयोग करके एक वर्गीकरण मॉडल बनाएंगे, जिन्हें आपने पिछले पाठों में सीखा है और इस श्रृंखला में उपयोग किए गए स्वादिष्ट व्यंजन डेटासेट के साथ। इसके अतिरिक्त, आप एक छोटा वेब ऐप बनाएंगे जो एक सहेजे गए मॉडल का उपयोग करेगा, Onnx के वेब रनटाइम का लाभ उठाते हुए।
+
+मशीन लर्निंग का सबसे उपयोगी व्यावहारिक उपयोग सिफारिश प्रणाली बनाना है, और आप आज उस दिशा में पहला कदम उठा सकते हैं!
+
+[![Applied ML](https://img.youtube.com/vi/17wdM9AHMfg/0.jpg)](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 वीडियो के लिए ऊपर की छवि पर क्लिक करें: Jen Looper वर्गीकृत व्यंजन डेटा का उपयोग करके एक वेब ऐप बनाती हैं
+
+## [पाठ पूर्व क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+इस पाठ में आप सीखेंगे:
+
+- एक मॉडल कैसे बनाएं और इसे एक Onnx मॉडल के रूप में सहेजें
+- Netron का उपयोग करके मॉडल का निरीक्षण कैसे करें
+- अपने मॉडल को एक वेब ऐप में अनुमान के लिए कैसे उपयोग करें
+
+## अपना मॉडल बनाएं
+
+अपने व्यावसायिक सिस्टम के लिए इन तकनीकों का लाभ उठाने में व्यावहारिक (लागू) एमएल सिस्टम बनाना एक महत्वपूर्ण हिस्सा है। आप Onnx का उपयोग करके अपने वेब एप्लिकेशन में मॉडलों का उपयोग कर सकते हैं (और इस प्रकार आवश्यकता पड़ने पर उन्हें ऑफ़लाइन संदर्भ में भी उपयोग कर सकते हैं)।
+
+एक [पिछले पाठ](../../3-Web-App/1-Web-App/README.md) में, आपने यूएफओ देखे जाने के बारे में एक प्रतिगमन मॉडल बनाया, इसे "पिकल्ड" किया, और इसे एक Flask ऐप में उपयोग किया। जबकि यह आर्किटेक्चर जानने के लिए बहुत उपयोगी है, यह एक फुल-स्टैक पायथन ऐप है, और आपकी आवश्यकताओं में एक जावास्क्रिप्ट एप्लिकेशन का उपयोग शामिल हो सकता है।
+
+इस पाठ में, आप अनुमान के लिए एक बुनियादी जावास्क्रिप्ट-आधारित सिस्टम बना सकते हैं। हालांकि, सबसे पहले, आपको एक मॉडल को प्रशिक्षित करना होगा और इसे Onnx के साथ उपयोग के लिए परिवर्तित करना होगा।
+
+## अभ्यास - वर्गीकरण मॉडल को प्रशिक्षित करें
+
+सबसे पहले, उस साफ किए गए व्यंजन डेटासेट का उपयोग करके एक वर्गीकरण मॉडल को प्रशिक्षित करें जिसे हमने उपयोग किया था।
+
+1. उपयोगी पुस्तकालयों को आयात करके प्रारंभ करें:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ आपको अपने Scikit-learn मॉडल को Onnx प्रारूप में बदलने में मदद करने के लिए '[skl2onnx](https://onnx.ai/sklearn-onnx/)' की आवश्यकता है।
+
+1. फिर, अपने डेटा के साथ उसी तरह काम करें जैसे आपने पिछले पाठों में किया था, `read_csv()` का उपयोग करके एक CSV फ़ाइल पढ़कर:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. पहले दो अनावश्यक स्तंभों को हटा दें और शेष डेटा को 'X' के रूप में सहेजें:
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. लेबल को 'y' के रूप में सहेजें:
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### प्रशिक्षण दिनचर्या शुरू करें
+
+हम 'SVC' पुस्तकालय का उपयोग करेंगे जिसमें अच्छी सटीकता है।
+
+1. Scikit-learn से उपयुक्त पुस्तकालयों को आयात करें:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. प्रशिक्षण और परीक्षण सेट अलग करें:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. पिछले पाठ में जैसा आपने किया था, एक SVC वर्गीकरण मॉडल बनाएं:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. अब, अपने मॉडल का परीक्षण करें, `predict()` को कॉल करें:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. मॉडल की गुणवत्ता की जांच करने के लिए एक वर्गीकरण रिपोर्ट प्रिंट करें:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ जैसा कि हमने पहले देखा था, सटीकता अच्छी है:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### अपने मॉडल को Onnx में बदलें
+
+सुनिश्चित करें कि उचित टेंसर संख्या के साथ रूपांतरण करें। इस डेटासेट में 380 अवयव सूचीबद्ध हैं, इसलिए आपको `FloatTensorType` में उस संख्या को नोट करना होगा:
+
+1. 380 की टेंसर संख्या का उपयोग करके परिवर्तित करें।
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. onx बनाएं और इसे **model.onnx** के रूप में फ़ाइल के रूप में सहेजें:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+   > नोट: आप अपनी रूपांतरण स्क्रिप्ट में [विकल्प](https://onnx.ai/sklearn-onnx/parameterized.html) पास कर सकते हैं। इस मामले में, हमने 'nocl' को True और 'zipmap' को False पास किया। चूंकि यह एक वर्गीकरण मॉडल है, आपके पास ZipMap को हटाने का विकल्प है, जो शब्दकोशों की एक सूची उत्पन्न करता है (आवश्यक नहीं)। `nocl` मॉडल में वर्ग (class) जानकारी शामिल किए जाने को संदर्भित करता है; `nocl` को 'True' पर सेट करके अपने मॉडल का आकार घटाया जा सकता है।
+
+पूरी नोटबुक चलाने पर अब एक Onnx मॉडल बनेगा और इसी फ़ोल्डर में सहेज दिया जाएगा।
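+
+सहेजे गए मॉडल की त्वरित जाँच के लिए, नीचे एक छोटा, काल्पनिक Python स्केच है (यह मानते हुए कि `onnxruntime` पैकेज इंस्टॉल है) जो model.onnx को लोड करके एक डमी इनपुट पर चलाता है; आउटपुट की सटीक संरचना आपके रूपांतरण विकल्पों पर निर्भर हो सकती है:
+
+```python
+import numpy as np
+import onnxruntime as rt
+
+# सहेजे गए मॉडल से एक अनुमान सत्र बनाएं
+sess = rt.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])
+
+# 380 अवयवों वाला एक डमी इनपुट (सभी शून्य)
+dummy = np.zeros((1, 380), dtype=np.float32)
+
+# 'float_input' वही नाम है जो रूपांतरण के समय initial_type में दिया गया था
+outputs = sess.run(None, {"float_input": dummy})
+print(outputs[0])  # अनुमानित लेबल
+```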
+
+## अपना मॉडल देखें
+
+Onnx मॉडल Visual Studio Code में बहुत दृश्यमान नहीं होते, लेकिन एक बहुत अच्छा मुफ़्त सॉफ़्टवेयर है जिसका उपयोग कई शोधकर्ता यह सुनिश्चित करने के लिए करते हैं कि मॉडल ठीक से बना है। [Netron](https://github.com/lutzroeder/Netron) डाउनलोड करें और अपनी model.onnx फ़ाइल खोलें। आप अपने सरल मॉडल को उसके 380 इनपुट और क्लासिफायर के साथ विज़ुअलाइज़ किया हुआ देख सकते हैं:
+
+
+
+Netron आपके मॉडलों को देखने के लिए एक सहायक टूल है।
+
+अब आप इस बढ़िया मॉडल को एक वेब ऐप में उपयोग करने के लिए तैयार हैं। आइए एक ऐसा ऐप बनाएं जो तब काम आएगा जब आप अपने रेफ़्रिजरेटर में झाँककर यह पता लगाने की कोशिश करें कि बचे हुए अवयवों के किस संयोजन से आप अपने मॉडल द्वारा निर्धारित कोई विशेष व्यंजन बना सकते हैं।
+
+## एक सिफारिशकर्ता वेब एप्लिकेशन बनाएं
+
+आप अपने मॉडल का उपयोग सीधे एक वेब ऐप में कर सकते हैं। यह आर्किटेक्चर आपको इसे स्थानीय रूप से और आवश्यकता पड़ने पर ऑफ़लाइन भी चलाने देता है। उसी फ़ोल्डर में, जहाँ आपने अपनी `model.onnx` फ़ाइल संग्रहीत की है, एक `index.html` फ़ाइल बनाकर शुरू करें।
+
+1. इस फ़ाइल _index.html_ में, निम्नलिखित मार्कअप जोड़ें:
+
+   ```html
+   <!DOCTYPE html>
+   <html>
+       <head>
+           <title>Cuisine Matcher</title>
+       </head>
+       <body>
+           ...
+       </body>
+   </html>
+   ```
+
+1. अब, `body` टैग के भीतर काम करते हुए, कुछ अवयवों को दर्शाने वाले चेकबॉक्स की एक सूची दिखाने के लिए थोड़ा मार्कअप जोड़ें:
+
+   ```html
+   <h1>Check your refrigerator. What can you create?</h1>
+   <div id="wrapper">
+       <div class="boxCont">
+           <input type="checkbox" value="4" class="checkbox">
+           <label>apple</label>
+       </div>
+       <!-- शेष अवयवों के लिए इसी पैटर्न में चेकबॉक्स जोड़ते जाएं; प्रत्येक का value
+            डेटासेट में उस अवयव का इंडेक्स है (नीचे स्पष्टीकरण देखें) -->
+   </div>
+   <div style="padding-top:10px">
+       <!-- बटन नीचे परिभाषित startInference() को कॉल करता है -->
+       <button onClick="startInference()">What kind of cuisine can you make?</button>
+   </div>
+   ```
+
+ ध्यान दें कि प्रत्येक चेकबॉक्स को एक मान दिया गया है। यह उस इंडेक्स को दर्शाता है जहां अवयव डेटासेट के अनुसार पाए जाते हैं। उदाहरण के लिए, इस वर्णमाला सूची में, Apple पांचवें स्तंभ पर है, इसलिए इसका मान '4' है क्योंकि हम 0 से गिनती शुरू करते हैं। आप [अवयव स्प्रेडशीट](../../../../4-Classification/data/ingredient_indexes.csv) को परामर्श कर सकते हैं ताकि किसी दिए गए अवयव का इंडेक्स खोजा जा सके।
+
+   _index.html_ फ़ाइल में अपना काम जारी रखते हुए, एक स्क्रिप्ट ब्लॉक जोड़ें जिसमें अंतिम समापन `</div>` के बाद मॉडल को कॉल किया जाता है।
+
+1. सबसे पहले, [Onnx Runtime](https://www.onnxruntime.ai/) को आयात करें:
+
+   ```html
+   <!-- Onnx Runtime Web को CDN से लोड करें -->
+   <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+   ```
+
+ > Onnx Runtime का उपयोग आपके Onnx मॉडलों को हार्डवेयर प्लेटफार्मों की एक विस्तृत श्रृंखला में चलाने के लिए किया जाता है, जिसमें अनुकूलन और उपयोग के लिए एक एपीआई शामिल है।
+
+1. एक बार Runtime सेट हो जाने के बाद, आप इसे कॉल कर सकते हैं:
+
+   ```html
+   <script>
+       // 380 संभावित अवयवों के लिए 0/1 मानों की सरणी
+       const ingredients = Array(380).fill(0);
+
+       const checks = [...document.querySelectorAll('.checkbox')];
+
+       // एप्लिकेशन शुरू होने पर चेकबॉक्स पर लिसनर जोड़ें
+       function init() {
+           checks.forEach(check => {
+               check.addEventListener('change', function () {
+                   // चेकबॉक्स का value डेटासेट में अवयव का इंडेक्स है
+                   ingredients[check.value] = check.checked ? 1 : 0;
+               });
+           });
+       }
+
+       // जाँचें कि कम से कम एक चेकबॉक्स चेक किया गया है या नहीं
+       function testCheckboxes() {
+           return checks.some(check => check.checked);
+       }
+
+       async function startInference() {
+           if (!testCheckboxes()) {
+               alert('कृपया कम से कम एक अवयव चुनें');
+               return;
+           }
+           try {
+               // मॉडल का असिंक्रोनस लोड सेट करें
+               const session = await ort.InferenceSession.create('./model.onnx');
+
+               // मॉडल को भेजने के लिए Tensor संरचना बनाएं
+               const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+
+               // प्रशिक्षण के समय बनाए गए 'float_input' इनपुट से मेल खाती 'feeds'
+               const feeds = { float_input: input };
+
+               // 'feeds' मॉडल को भेजें और प्रतिक्रिया की प्रतीक्षा करें
+               const results = await session.run(feeds);
+
+               alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+           } catch (e) {
+               console.error(e);
+           }
+       }
+
+       init();
+   </script>
+   ```
+
+इस कोड में, कई चीजें हो रही हैं:
+
+1. आपने 380 संभावित मानों (1 या 0) का एक सरणी बनाया है जिसे सेट किया जाएगा और मॉडल को अनुमान के लिए भेजा जाएगा, इस पर निर्भर करता है कि कोई अवयव चेकबॉक्स चेक किया गया है या नहीं।
+2. आपने चेकबॉक्स की एक सरणी बनाई और, एप्लिकेशन शुरू होने पर कॉल होने वाले `init` फ़ंक्शन में, यह निर्धारित करने का तरीका बनाया कि वे चेक किए गए हैं या नहीं। जब कोई चेकबॉक्स चेक किया जाता है, तो चुने गए अवयव को दर्शाने के लिए `ingredients` सरणी बदल दी जाती है।
+3. आपने एक `testCheckboxes` फ़ंक्शन बनाया जो जाँचता है कि कोई चेकबॉक्स चेक किया गया है या नहीं।
+4. बटन दबाए जाने पर आप `startInference` फ़ंक्शन का उपयोग करते हैं और, यदि कोई चेकबॉक्स चेक किया गया है, तो अनुमान (inference) शुरू करते हैं।
+5. अनुमान की दिनचर्या में शामिल है:
+   1. मॉडल का असिंक्रोनस लोड सेट करना
+   2. मॉडल को भेजने के लिए एक Tensor संरचना बनाना
+   3. 'feeds' बनाना जो उस `float_input` इनपुट को दर्शाता है जिसे आपने अपने मॉडल को प्रशिक्षित करते समय बनाया था (आप Netron से उस नाम की पुष्टि कर सकते हैं)
+   4. ये 'feeds' मॉडल को भेजना और प्रतिक्रिया की प्रतीक्षा करना
+
+## अपने एप्लिकेशन का परीक्षण करें
+
+Visual Studio Code में उस फ़ोल्डर में एक टर्मिनल सत्र खोलें जहाँ आपकी index.html फ़ाइल है। सुनिश्चित करें कि [http-server](https://www.npmjs.com/package/http-server) ग्लोबली इंस्टॉल है, और प्रॉम्प्ट पर `http-server` टाइप करें। एक लोकलहोस्ट खुल जाना चाहिए और आप अपना वेब ऐप देख सकते हैं। जाँचें कि विभिन्न अवयवों के आधार पर कौन सा व्यंजन अनुशंसित किया जाता है:
+
+
+
+बधाई हो, आपने कुछ फ़ील्ड के साथ एक 'सिफारिश' वेब ऐप बनाया है। इस सिस्टम का विस्तार करने के लिए कुछ समय निकालें!
+
+## 🚀चुनौती
+
+आपका वेब ऐप बहुत न्यूनतम है, इसलिए [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv) डेटा से अवयवों और उनके इंडेक्स का उपयोग करके इसे बनाना जारी रखें। कौन से स्वाद संयोजन एक दिए गए राष्ट्रीय व्यंजन को बनाने के लिए काम करते हैं?
+
+## [पाठ के बाद क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## समीक्षा और स्व-अध्ययन
+
+जबकि इस पाठ ने खाद्य अवयवों के लिए एक सिफारिश प्रणाली बनाने की उपयोगिता को छुआ, एमएल अनुप्रयोगों का यह क्षेत्र उदाहरणों में बहुत समृद्ध है। पढ़ें कि इन प्रणालियों को कैसे बनाया जाता है:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## असाइनमेंट
+
+[एक नई सिफारिशकर्ता बनाएं](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया अवगत रहें कि स्वचालित अनुवाद में त्रुटियां या अशुद्धियां हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/4-Applied/assignment.md b/translations/hi/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..30907c531
--- /dev/null
+++ b/translations/hi/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# एक सिफारिशकर्ता बनाएं
+
+## निर्देश
+
+इस पाठ में अपने अभ्यासों को देखते हुए, अब आप Onnx Runtime और एक परिवर्तित Onnx मॉडल का उपयोग करके जावास्क्रिप्ट-आधारित वेब ऐप बनाना जानते हैं। इन पाठों से या अन्य स्रोतों से डेटा का उपयोग करके एक नया सिफारिशकर्ता बनाने के साथ प्रयोग करें (कृपया श्रेय दें)। आप विभिन्न व्यक्तित्व विशेषताओं के आधार पर एक पालतू सिफारिशकर्ता बना सकते हैं, या किसी व्यक्ति के मूड के आधार पर संगीत शैली सिफारिशकर्ता बना सकते हैं। रचनात्मक बनें!
+
+## मूल्यांकन मानदंड
+
+| मापदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | ---------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+| | एक वेब ऐप और नोटबुक प्रस्तुत किए गए हैं, दोनों अच्छी तरह से प्रलेखित और चल रहे हैं | इनमें से एक गायब है या त्रुटिपूर्ण है | दोनों या तो गायब हैं या त्रुटिपूर्ण हैं |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/4-Classification/README.md b/translations/hi/4-Classification/README.md
new file mode 100644
index 000000000..9474860c3
--- /dev/null
+++ b/translations/hi/4-Classification/README.md
@@ -0,0 +1,30 @@
+# वर्गीकरण के साथ शुरुआत करना
+
+## क्षेत्रीय विषय: स्वादिष्ट एशियाई और भारतीय व्यंजन 🍜
+
+एशिया और भारत में, खाद्य परंपराएँ अत्यधिक विविध और बहुत स्वादिष्ट हैं! चलिए क्षेत्रीय व्यंजनों के बारे में डेटा देखते हैं और उनके अवयवों को समझने की कोशिश करते हैं।
+
+
+> फोटो लिशेंग चांग द्वारा अनस्प्लैश पर
+
+## आप क्या सीखेंगे
+
+इस अनुभाग में, आप अपने पिछले अध्ययन को रिग्रेशन पर आधारित करेंगे और अन्य क्लासिफायर्स के बारे में जानेंगे जो आपको डेटा को बेहतर समझने में मदद कर सकते हैं।
+
+> ऐसे उपयोगी लो-कोड टूल्स हैं जो आपको वर्गीकरण मॉडल्स के साथ काम करने के बारे में सीखने में मदद कर सकते हैं। इस कार्य के लिए [Azure ML आज़माएं](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## पाठ
+
+1. [वर्गीकरण का परिचय](1-Introduction/README.md)
+2. [अधिक क्लासिफायर्स](2-Classifiers-1/README.md)
+3. [अन्य क्लासिफायर्स](3-Classifiers-2/README.md)
+4. [लागू एमएल: एक वेब ऐप बनाएं](4-Applied/README.md)
+
+## श्रेय
+
+"वर्गीकरण के साथ शुरुआत करना" को ♥️ के साथ [कैसी ब्रेवियु](https://www.twitter.com/cassiebreviu) और [जेन लूपर](https://www.twitter.com/jenlooper) द्वारा लिखा गया था।
+
+स्वादिष्ट व्यंजनों का डेटासेट [कागल](https://www.kaggle.com/hoandan/asian-and-indian-cuisines) से लिया गया था।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/1-Visualize/README.md b/translations/hi/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..8bd9a461b
--- /dev/null
+++ b/translations/hi/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,140 @@
+# क्लस्टरिंग का परिचय
+
+क्लस्टरिंग एक प्रकार का [अनसुपरवाइज्ड लर्निंग](https://wikipedia.org/wiki/Unsupervised_learning) है जो मानता है कि एक डेटासेट लेबल्ड नहीं है या उसके इनपुट्स प्री-डिफाइन्ड आउटपुट्स से मेल नहीं खाते। यह विभिन्न एल्गोरिदम का उपयोग करके अनलेबल्ड डेटा को छांटता है और डेटा में पाए गए पैटर्न के अनुसार समूह प्रदान करता है।
+
+[![No One Like You by PSquare](https://img.youtube.com/vi/ty2advRiWJM/0.jpg)](https://youtu.be/ty2advRiWJM "No One Like You by PSquare")
+
+> 🎥 वीडियो के लिए ऊपर की छवि पर क्लिक करें। क्लस्टरिंग के साथ मशीन लर्निंग का अध्ययन करते समय, कुछ नाइजीरियाई डांस हॉल ट्रैक्स का आनंद लें - यह PSquare का 2014 का एक उच्च रेटेड गीत है।
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+
+### परिचय
+
+[क्लस्टरिंग](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) डेटा एक्सप्लोरेशन के लिए बहुत उपयोगी है। आइए देखें कि क्या यह नाइजीरियाई दर्शकों के संगीत उपभोग के तरीकों में रुझानों और पैटर्न की खोज करने में मदद कर सकता है।
+
+✅ क्लस्टरिंग के उपयोग के बारे में एक मिनट सोचें। वास्तविक जीवन में, क्लस्टरिंग तब होती है जब आपके पास कपड़ों का ढेर होता है और आपको अपने परिवार के सदस्यों के कपड़े छांटने होते हैं 🧦👕👖🩲। डेटा साइंस में, क्लस्टरिंग तब होती है जब किसी उपयोगकर्ता की प्राथमिकताओं का विश्लेषण करने या किसी अनलेबल्ड डेटासेट की विशेषताओं का निर्धारण करने की कोशिश की जाती है। क्लस्टरिंग, एक तरह से, अराजकता को समझने में मदद करती है, जैसे एक मोजे के दराज को।
+
+[![Introduction to Clustering](https://img.youtube.com/vi/esmzYhuFnds/0.jpg)](https://youtu.be/esmzYhuFnds "Introduction to Clustering")
+
+> 🎥 वीडियो के लिए ऊपर की छवि पर क्लिक करें: MIT के John Guttag क्लस्टरिंग का परिचय देते हैं
+
+पेशेवर सेटिंग में, क्लस्टरिंग का उपयोग बाजार विभाजन जैसे चीजों का निर्धारण करने के लिए किया जा सकता है, उदाहरण के लिए, कौन से आयु समूह कौन सी वस्तुएं खरीदते हैं। एक और उपयोग हो सकता है विसंगति का पता लगाना, शायद क्रेडिट कार्ड लेनदेन के डेटासेट से धोखाधड़ी का पता लगाने के लिए। या आप क्लस्टरिंग का उपयोग चिकित्सा स्कैन के बैच में ट्यूमर का निर्धारण करने के लिए कर सकते हैं।
+
+✅ एक मिनट के लिए सोचें कि आपने बैंकिंग, ई-कॉमर्स, या व्यवसाय सेटिंग में 'जंगली' में क्लस्टरिंग का सामना कैसे किया होगा।
+
+> 🎓 दिलचस्प बात यह है कि क्लस्टर विश्लेषण का उद्भव 1930 के दशक में मानवशास्त्र और मनोविज्ञान के क्षेत्रों में हुआ था। क्या आप कल्पना कर सकते हैं कि इसका उपयोग कैसे किया गया होगा?
+
+वैकल्पिक रूप से, आप इसका उपयोग खोज परिणामों को समूहबद्ध करने के लिए कर सकते हैं - उदाहरण के लिए, शॉपिंग लिंक, छवियों, या समीक्षाओं द्वारा। क्लस्टरिंग तब उपयोगी होती है जब आपके पास एक बड़ा डेटासेट होता है जिसे आप कम करना चाहते हैं और जिस पर आप अधिक विस्तृत विश्लेषण करना चाहते हैं, इसलिए इस तकनीक का उपयोग अन्य मॉडलों के निर्माण से पहले डेटा के बारे में जानने के लिए किया जा सकता है।
+
+✅ एक बार जब आपका डेटा क्लस्टरों में संगठित हो जाता है, तो आप उसे एक क्लस्टर आईडी असाइन करते हैं, और यह तकनीक डेटासेट की गोपनीयता बनाए रखने में उपयोगी हो सकती है; आप किसी डेटा बिंदु को अधिक प्रकट करने वाले पहचान योग्य डेटा के बजाय उसके क्लस्टर आईडी से संदर्भित कर सकते हैं। क्या आप अन्य कारण सोच सकते हैं कि किसी क्लस्टर की पहचान के लिए आप उसके अन्य तत्वों के बजाय क्लस्टर आईडी का ही उपयोग क्यों करेंगे?
+
+इस [लर्न मॉड्यूल](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott) में क्लस्टरिंग तकनीकों की अपनी समझ को गहरा करें
+
+## क्लस्टरिंग के साथ शुरुआत
+
+क्लस्टरिंग करने के लिए [Scikit-learn विधियों की एक बड़ी श्रृंखला](https://scikit-learn.org/stable/modules/clustering.html) प्रदान करता है। आप जो प्रकार चुनते हैं वह आपके उपयोग के मामले पर निर्भर करेगा। प्रलेखन के अनुसार, प्रत्येक विधि के विभिन्न लाभ हैं। यहाँ Scikit-learn द्वारा समर्थित विधियों और उनके उपयुक्त उपयोग मामलों की एक सरलीकृत तालिका है:
+
+| विधि का नाम | उपयोग का मामला |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | सामान्य प्रयोजन, प्रेरक |
+| Affinity propagation | कई, असमान क्लस्टर, प्रेरक |
+| Mean-shift | कई, असमान क्लस्टर, प्रेरक |
+| Spectral clustering | कुछ, समान क्लस्टर, ट्रांसडक्टिव |
+| Ward hierarchical clustering | कई, बाधित क्लस्टर, ट्रांसडक्टिव |
+| Agglomerative clustering | कई, बाधित, गैर-यूक्लिडियन दूरी, ट्रांसडक्टिव |
+| DBSCAN | गैर-फ्लैट ज्योमेट्री, असमान क्लस्टर, ट्रांसडक्टिव |
+| OPTICS | गैर-फ्लैट ज्योमेट्री, असमान क्लस्टर के साथ परिवर्तनशील घनत्व, ट्रांसडक्टिव |
+| Gaussian mixtures | फ्लैट ज्योमेट्री, प्रेरक |
+| BIRCH | आउटलेर्स के साथ बड़ा डेटासेट, प्रेरक |
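+
+इन विधियों में सबसे सामान्य, K-Means, का उपयोग कैसा दिखता है, इसका एक छोटा, काल्पनिक उदाहरण (खिलौना बिंदुओं पर):
+
+```python
+from sklearn.cluster import KMeans
+import numpy as np
+
+# खिलौना डेटा: दो स्पष्ट समूह
+X = np.array([[1, 2], [1, 4], [1, 0],
+              [10, 2], [10, 4], [10, 0]])
+
+# k=2 क्लस्टर: K-Means सेंट्रोइड खोजता है और बिंदुओं को उनके चारों ओर असाइन करता है
+kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
+
+print(kmeans.labels_)           # प्रत्येक बिंदु का क्लस्टर
+print(kmeans.cluster_centers_)  # सेंट्रोइड
+```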
+
+> 🎓 हम क्लस्टर कैसे बनाते हैं इसका बहुत कुछ इस पर निर्भर करता है कि हम डेटा बिंदुओं को समूहों में कैसे इकट्ठा करते हैं। आइए कुछ शब्दावली को समझें:
+>
+> 🎓 ['ट्रांसडक्टिव' बनाम 'प्रेरक'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> ट्रांसडक्टिव इंफरेंस उन देखे गए प्रशिक्षण मामलों से प्राप्त होता है जो विशिष्ट परीक्षण मामलों से मेल खाते हैं। प्रेरक इंफरेंस उन प्रशिक्षण मामलों से प्राप्त होता है जो सामान्य नियमों से मेल खाते हैं जो केवल बाद में परीक्षण मामलों पर लागू होते हैं।
+>
+> एक उदाहरण: कल्पना करें कि आपके पास एक डेटासेट है जो केवल आंशिक रूप से लेबल्ड है। कुछ चीजें 'रिकॉर्ड्स' हैं, कुछ 'सीडी' हैं, और कुछ खाली हैं। आपका काम खाली जगहों के लिए लेबल प्रदान करना है। यदि आप प्रेरक दृष्टिकोण चुनते हैं, तो आप 'रिकॉर्ड्स' और 'सीडी' की खोज के लिए एक मॉडल प्रशिक्षण देंगे, और उन लेबल्स को आपके अनलेबल्ड डेटा पर लागू करेंगे। इस दृष्टिकोण को उन चीजों को वर्गीकृत करने में कठिनाई होगी जो वास्तव में 'कैसेट्स' हैं। दूसरी ओर, एक ट्रांसडक्टिव दृष्टिकोण इस अज्ञात डेटा को अधिक प्रभावी ढंग से संभालता है क्योंकि यह समान वस्तुओं को एक साथ समूहित करने का काम करता है और फिर एक समूह को एक लेबल लागू करता है। इस मामले में, क्लस्टर 'गोल संगीत चीजें' और 'वर्ग संगीत चीजें' को प्रतिबिंबित कर सकते हैं।
+>
+> 🎓 ['गैर-फ्लैट' बनाम 'फ्लैट' ज्योमेट्री](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> गणितीय शब्दावली से व्युत्पन्न, गैर-फ्लैट बनाम फ्लैट ज्योमेट्री का तात्पर्य बिंदुओं के बीच की दूरी को मापने से है, या तो 'फ्लैट' ([यूक्लिडियन](https://wikipedia.org/wiki/Euclidean_geometry)) या 'गैर-फ्लैट' (गैर-यूक्लिडियन) ज्यामितीय विधियों द्वारा।
+>
+>'फ्लैट' इस संदर्भ में यूक्लिडियन ज्योमेट्री को संदर्भित करता है (जिसका कुछ भाग 'प्लेन' ज्योमेट्री के रूप में सिखाया जाता है), और गैर-फ्लैट गैर-यूक्लिडियन ज्योमेट्री को संदर्भित करता है। ज्योमेट्री का मशीन लर्निंग से क्या लेना-देना है? खैर, दो क्षेत्र जो गणित में निहित हैं, बिंदुओं के बीच की दूरी को मापने का एक सामान्य तरीका होना चाहिए, और इसे 'फ्लैट' या 'गैर-फ्लैट' तरीके से किया जा सकता है, डेटा की प्रकृति के आधार पर। [यूक्लिडियन दूरी](https://wikipedia.org/wiki/Euclidean_distance) को दो बिंदुओं के बीच एक रेखा खंड की लंबाई के रूप में मापा जाता है। [गैर-यूक्लिडियन दूरी](https://wikipedia.org/wiki/Non-Euclidean_geometry) को एक वक्र के साथ मापा जाता है। यदि आपका डेटा, विज़ुअलाइज़ किया गया, एक प्लेन पर मौजूद नहीं लगता है, तो आपको इसे संभालने के लिए एक विशेष एल्गोरिदम का उपयोग करने की आवश्यकता हो सकती है।
+>
+
+> इंफोग्राफिक द्वारा [दसानी मदीपल्ली](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['दूरी'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> क्लस्टर उनके दूरी मैट्रिक्स द्वारा परिभाषित किए जाते हैं, जैसे बिंदुओं के बीच की दूरी। इस दूरी को कुछ तरीकों से मापा जा सकता है। यूक्लिडियन क्लस्टर बिंदु मानों के औसत द्वारा परिभाषित होते हैं, और इसमें एक 'सेंट्रोइड' या केंद्र बिंदु होता है। दूरी को उस सेंट्रोइड से दूरी के रूप में मापा जाता है। गैर-यूक्लिडियन दूरी 'क्लस्ट्रोइड्स' को संदर्भित करती है, जो अन्य बिंदुओं के सबसे करीब होता है। क्लस्ट्रोइड्स को विभिन्न तरीकों से परिभाषित किया जा सकता है।
+>
+> 🎓 ['बाधित'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [बाधित क्लस्टरिंग](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) इस अनसुपरवाइज्ड विधि में 'अर्ध-सुपरवाइज्ड' लर्निंग को प्रस्तुत करती है। बिंदुओं के बीच के संबंधों को 'लिंक नहीं कर सकते' या 'लिंक करना आवश्यक है' के रूप में चिह्नित किया जाता है, ताकि डेटासेट पर कुछ नियम लागू किए जा सकें।
+>
+>एक उदाहरण: यदि एक एल्गोरिदम को एक बैच के अनलेबल्ड या अर्ध-लेबल्ड डेटा पर स्वतंत्र रूप से सेट किया जाता है, तो यह जो क्लस्टर उत्पन्न करता है, वे खराब गुणवत्ता के हो सकते हैं। ऊपर के उदाहरण में, क्लस्टर 'गोल संगीत चीजें' और 'वर्ग संगीत चीजें' और 'त्रिकोणीय चीजें' और 'कुकीज़' समूहित कर सकते हैं। यदि कुछ बाधाओं, या नियमों का पालन करने के लिए दिया जाता है ("वस्तु प्लास्टिक से बनी होनी चाहिए", "वस्तु संगीत उत्पन्न करने में सक्षम होनी चाहिए") तो यह एल्गोरिदम को बेहतर विकल्प बनाने में मदद कर सकता है।
+>
+> 🎓 'घनत्व'
+>
+> डेटा जो 'शोर' है उसे 'घना' माना जाता है। इसके प्रत्येक क्लस्टर में बिंदुओं के बीच की दूरी, जांच पर, अधिक या कम घनी, या 'भीड़' हो सकती है और इसलिए इस डेटा को उपयुक्त क्लस्टरिंग विधि के साथ विश्लेषण करने की आवश्यकता है। [यह लेख](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) शोर वाले डेटासेट के साथ असमान क्लस्टर घनत्व का पता लगाने के लिए K-Means क्लस्टरिंग बनाम HDBSCAN एल्गोरिदम का उपयोग करने के बीच का अंतर प्रदर्शित करता है।
+
+## क्लस्टरिंग एल्गोरिदम
+
+100 से अधिक क्लस्टरिंग एल्गोरिदम हैं, और उनका उपयोग डेटा की प्रकृति पर निर्भर करता है। आइए कुछ प्रमुखों पर चर्चा करें:
+
+- **हाइरार्किकल क्लस्टरिंग**। यदि किसी वस्तु को उसकी निकटता से एक निकट वस्तु के साथ वर्गीकृत किया जाता है, बजाय इसके कि वह किसी दूर की वस्तु के साथ, क्लस्टर उनके सदस्यों की अन्य वस्तुओं से दूरी के आधार पर बनते हैं। Scikit-learn की एग्लोमरेटिव क्लस्टरिंग हाइरार्किकल है।
+
+ 
+ > इंफोग्राफिक द्वारा [दसानी मदीपल्ली](https://twitter.com/dasani_decoded)
+
+- **सेंट्रोइड क्लस्टरिंग**। इस लोकप्रिय एल्गोरिदम में 'k', यानी बनाए जाने वाले क्लस्टरों की संख्या, चुनने की आवश्यकता होती है; इसके बाद एल्गोरिदम क्लस्टर का केंद्र बिंदु निर्धारित करता है और उस बिंदु के चारों ओर डेटा एकत्र करता है। [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) सेंट्रोइड क्लस्टरिंग का एक लोकप्रिय संस्करण है। केंद्र निकटतम औसत (mean) द्वारा निर्धारित होता है, इसीलिए यह नाम है। क्लस्टर से वर्गित (squared) दूरी को न्यूनतम किया जाता है।
+
+ 
+ > इंफोग्राफिक द्वारा [दसानी मदीपल्ली](https://twitter.com/dasani_decoded)
+
+- **वितरण-आधारित क्लस्टरिंग**। सांख्यिकीय मॉडलिंग पर आधारित, वितरण-आधारित क्लस्टरिंग एक डेटा बिंदु के क्लस्टर से संबंधित होने की संभावना को निर्धारित करने और तदनुसार असाइन करने पर केंद्रित होती है। Gaussian मिश्रण विधियाँ इस प्रकार से संबंधित हैं।
+
+- **घनत्व-आधारित क्लस्टरिंग**। डेटा बिंदुओं को उनके घनत्व, या उनके आसपास के समूहों के आधार पर क्लस्टर में असाइन किया जाता है। समूह से दूर डेटा बिंदुओं को आउटलेर्स या शोर माना जाता है। DBSCAN, Mean-shift और OPTICS इस प्रकार की क्लस्टरिंग से संबंधित हैं।
+
+- **ग्रिड-आधारित क्लस्टरिंग**। बहु-आयामी डेटासेट के लिए, एक ग्रिड बनाया जाता है और डेटा को ग्रिड की कोशिकाओं के बीच विभाजित किया जाता है, इस प्रकार क्लस्टर बनते हैं।
+
+## अभ्यास - अपने डेटा को क्लस्टर करें
+
+एक तकनीक के रूप में क्लस्टरिंग को उचित विज़ुअलाइज़ेशन से बहुत मदद मिलती है, तो चलिए अपने संगीत डेटा को विज़ुअलाइज़ करके शुरू करते हैं। यह अभ्यास हमें यह तय करने में मदद करेगा कि इस डेटा की प्रकृति के लिए कौन सी क्लस्टरिंग विधि सबसे प्रभावी रहेगी।
+
+1. इस फ़ोल्डर में [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) फ़ाइल खोलें।
+
+1. अच्छे डेटा विज़ुअलाइज़ेशन के लिए `Seaborn` पैकेज आयात करें।
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv) से गाने का डेटा जोड़ें। गानों के बारे में कुछ डेटा के साथ एक डेटा फ्रेम लोड करें। लाइब्रेरी आयात करके और डेटा को डंप करके इस डेटा का पता लगाने के लिए तैयार हो जाएं:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ डेटा की पहली कुछ पंक्तियों की जांच करें:
+
+ | | नाम | एल्बम | कलाकार | कलाकार का शीर्ष शैली | रिलीज़ की तारीख | लंबाई | लोकप्रियता | नृत्ययोग्यता | ध्वनिकता | ऊर्जा | वाद्य यंत्र | जीवंतता | जोर | बोलना | गति | समय हस्ताक्षर |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r
+## [व्याख्यान के बाद का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## समीक्षा और आत्म-अध्ययन
+
+जैसा कि हमने सीखा है, क्लस्टरिंग एल्गोरिदम लागू करने से पहले अपने डेटा सेट की प्रकृति को समझना एक अच्छा विचार है। इस विषय पर अधिक पढ़ें [यहाँ](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
+
+[यह सहायक लेख](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/) बताता है कि विभिन्न क्लस्टरिंग एल्गोरिदम अलग-अलग आकृतियों वाले डेटा पर कैसे व्यवहार करते हैं।
+
+## असाइनमेंट
+
+[क्लस्टरिंग के लिए अन्य विज़ुअलाइज़ेशन का शोध करें](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में अधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/1-Visualize/assignment.md b/translations/hi/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..22f0736a3
--- /dev/null
+++ b/translations/hi/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# क्लस्टरिंग के लिए अन्य विज़ुअलाइज़ेशन पर शोध
+
+## निर्देश
+
+इस पाठ में, आपने अपने डेटा को क्लस्टरिंग के लिए तैयार करने के लिए कुछ विज़ुअलाइज़ेशन तकनीकों के साथ काम किया है। विशेष रूप से, स्कैटरप्लॉट्स वस्तुओं के समूहों को खोजने के लिए उपयोगी होते हैं। विभिन्न तरीकों और विभिन्न लाइब्रेरियों का उपयोग करके स्कैटरप्लॉट्स बनाने पर शोध करें और अपने कार्य को एक नोटबुक में दस्तावेज़ करें। आप इस पाठ के डेटा, अन्य पाठों के डेटा, या स्वयं के स्रोत से डेटा का उपयोग कर सकते हैं (कृपया इसके स्रोत का अपनी नोटबुक में उल्लेख करें)। स्कैटरप्लॉट्स का उपयोग करके कुछ डेटा प्लॉट करें और बताएं कि आपने क्या खोजा।
+
+## मूल्यांकन
+
+| मापदंड | उत्कृष्टता | पर्याप्तता | सुधार की आवश्यकता |
+| -------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------- |
+| | पांच अच्छी तरह से दस्तावेज़ किए गए स्कैटरप्लॉट्स के साथ एक नोटबुक प्रस्तुत की गई है | पांच से कम स्कैटरप्लॉट्स के साथ एक नोटबुक प्रस्तुत की गई है और यह कम अच्छी तरह से दस्तावेज़ है | एक अधूरी नोटबुक प्रस्तुत की गई है |
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/hi/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..087d1758f
--- /dev/null
+++ b/translations/hi/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियां या अशुद्धियां हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/2-K-Means/README.md b/translations/hi/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..831472ec6
--- /dev/null
+++ b/translations/hi/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means क्लस्टरिंग
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+इस पाठ में, आप सीखेंगे कि Scikit-learn और पहले से आयात किए गए नाइजीरियाई संगीत डेटा सेट का उपयोग करके क्लस्टर कैसे बनाएं। हम क्लस्टरिंग के लिए K-Means की मूल बातें कवर करेंगे। ध्यान रखें कि, जैसा कि आपने पिछले पाठ में सीखा था, क्लस्टरों के साथ काम करने के कई तरीके हैं और यह आपके डेटा पर निर्भर करता है कि आप कौन सा तरीका उपयोग करें। हम K-Means आज़माएंगे क्योंकि यह सबसे सामान्य क्लस्टरिंग तकनीक है। चलिए शुरू करते हैं!
+
+इस पाठ में आप इन पदों (terms) के बारे में जानेंगे:
+
+- सिल्हूट स्कोरिंग
+- एल्बो विधि
+- जड़ता
+- वैरिएंस
+
+## परिचय
+
+[K-Means क्लस्टरिंग](https://wikipedia.org/wiki/K-means_clustering) सिग्नल प्रोसेसिंग के क्षेत्र से व्युत्पन्न एक विधि है। इसका उपयोग डेटा के समूहों को 'k' क्लस्टरों में बाँटने और विभाजित करने के लिए किया जाता है। प्रत्येक अवलोकन (observation) किसी दिए गए डेटा बिंदु को उसके निकटतम 'मीन', यानी किसी क्लस्टर के केंद्र बिंदु, वाले समूह में रखने का काम करता है।
+
+क्लस्टरों को [वोरोनोई आरेख](https://wikipedia.org/wiki/Voronoi_diagram) के रूप में देखा जा सकता है, जिसमें एक बिंदु (या 'बीज') और उसका संबंधित क्षेत्र शामिल होता है।
+
+
+
+> [जेन लूपर](https://twitter.com/jenlooper) द्वारा इन्फोग्राफिक
+
+K-Means क्लस्टरिंग प्रक्रिया [तीन-चरण प्रक्रिया में निष्पादित होती है](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. एल्गोरिदम डेटा सेट से सैंपलिंग करके k-संख्या के केंद्र बिंदु का चयन करता है। इसके बाद, यह लूप करता है:
+ 1. यह प्रत्येक सैंपल को निकटतम सेंट्रॉइड को असाइन करता है।
+ 2. यह पिछले सेंट्रॉइड को असाइन किए गए सभी सैंपल के औसत मूल्य को लेकर नए सेंट्रॉइड बनाता है।
+ 3. फिर, यह नए और पुराने सेंट्रॉइड के बीच के अंतर की गणना करता है और सेंट्रॉइड स्थिर होने तक दोहराता है।
+
+K-Means का उपयोग करने में एक कमी यह है कि आपको 'k' स्थापित करने की आवश्यकता होगी, यानी सेंट्रॉइड की संख्या। सौभाग्य से, 'एल्बो विधि' 'k' के लिए एक अच्छा प्रारंभिक मूल्य अनुमान लगाने में मदद करती है। आप इसे अभी आजमाएंगे।
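+
+ऊपर बताए गए तीन चरणों को स्पष्ट करने के लिए, नीचे शुद्ध NumPy में एक न्यूनतम, अनुमानित स्केच दिया गया है। फ़ंक्शन का नाम `kmeans_sketch` केवल उदाहरण के लिए है; व्यवहार में आप Scikit-learn का ही उपयोग करेंगे, जैसा कि इस पाठ में आगे दिखाया गया है:
+
+```python
+import numpy as np
+
+def kmeans_sketch(X, k, n_iter=100, seed=0):
+    rng = np.random.default_rng(seed)
+    # डेटा सेट से सैंपलिंग करके k प्रारंभिक सेंट्रॉइड चुनें
+    centroids = X[rng.choice(len(X), size=k, replace=False)]
+    for _ in range(n_iter):
+        # 1. प्रत्येक सैंपल को निकटतम सेंट्रॉइड असाइन करें
+        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
+        labels = distances.argmin(axis=1)
+        # 2. असाइन किए गए सैंपल के औसत मूल्य से नए सेंट्रॉइड बनाएं
+        # (सरलता के लिए खाली क्लस्टर की स्थिति यहाँ संभाली नहीं गई है)
+        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
+        # 3. सेंट्रॉइड स्थिर हो जाने पर रुकें
+        if np.allclose(new_centroids, centroids):
+            break
+        centroids = new_centroids
+    return labels, centroids
+```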
+
+## पूर्वापेक्षा
+
+आप इस पाठ के [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) फ़ाइल में काम करेंगे जिसमें पिछले पाठ में किए गए डेटा आयात और प्रारंभिक सफाई शामिल है।
+
+## अभ्यास - तैयारी
+
+गानों के डेटा पर एक और नज़र डालकर शुरू करें।
+
+1. प्रत्येक कॉलम के लिए `boxplot()` कॉल करते हुए एक बॉक्सप्लॉट बनाएं:
+
+ ```python
+   # यदि पहले आयात नहीं किए गए हों
+   import matplotlib.pyplot as plt
+   import seaborn as sns
+
+   plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ यह डेटा थोड़ा शोरयुक्त है: प्रत्येक कॉलम को एक बॉक्सप्लॉट के रूप में देखकर, आप बाहरी मान देख सकते हैं।
+
+ 
+
+आप डेटा सेट में जाकर इन बाहरी मानों (outliers) को हटा सकते हैं, लेकिन इससे डेटा काफी कम रह जाएगा।
+
+1. अभी के लिए, चुनें कि आप अपने क्लस्टरिंग अभ्यास के लिए कौन से कॉलम का उपयोग करेंगे। समान रेंज वाले कॉलम चुनें और `artist_top_genre` कॉलम को संख्यात्मक डेटा के रूप में एन्कोड करें:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. अब आपको यह चुनना होगा कि कितने क्लस्टरों को लक्षित करना है। आप जानते हैं कि डेटा सेट से हमने 3 गाना शैलियों को अलग किया है, तो चलिए 3 को आजमाते हैं:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+आपको डेटा फ्रेम की प्रत्येक पंक्ति के लिए भविष्यवाणी किए गए क्लस्टरों (0, 1, या 2) की एक सरणी (array) प्रिंट हुई दिखाई देगी।
+
+1. इस सरणी का उपयोग 'सिल्हूट स्कोर' की गणना के लिए करें:
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## सिल्हूट स्कोर
+
+1 के करीब सिल्हूट स्कोर की तलाश करें। यह स्कोर -1 से 1 तक भिन्न होता है: यदि स्कोर 1 है, तो क्लस्टर घना और अन्य क्लस्टरों से अच्छी तरह अलग होता है। 0 के पास का मान ओवरलैपिंग क्लस्टरों को दर्शाता है, जिनमें नमूने पड़ोसी क्लस्टरों की निर्णय सीमा के बहुत करीब होते हैं। [(स्रोत)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+हमारा स्कोर **.53** है, इसलिए बीच में है। यह इंगित करता है कि हमारा डेटा इस प्रकार की क्लस्टरिंग के लिए विशेष रूप से उपयुक्त नहीं है, लेकिन चलिए जारी रखते हैं।
+
+### अभ्यास - एक मॉडल बनाएं
+
+1. `KMeans` आयात करें और क्लस्टरिंग प्रक्रिया शुरू करें।
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ यहां कुछ भाग हैं जिन्हें समझाने की आवश्यकता है।
+
+ > 🎓 range: ये क्लस्टरिंग प्रक्रिया के पुनरावृत्तियां हैं
+
+ > 🎓 random_state: "सेंट्रॉइड प्रारंभिककरण के लिए यादृच्छिक संख्या पीढ़ी निर्धारित करता है।" [स्रोत](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+   > 🎓 WCSS: "within-cluster sums of squares" क्लस्टर सेंट्रॉइड से क्लस्टर के भीतर सभी बिंदुओं की वर्ग औसत दूरी को मापता है। [स्रोत](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce)।
+
+ > 🎓 जड़ता: K-Means एल्गोरिदम 'जड़ता' को कम करने के लिए सेंट्रॉइड को चुनने का प्रयास करते हैं, "एक उपाय कि क्लस्टर कितने आंतरिक रूप से सुसंगत हैं।" [स्रोत](https://scikit-learn.org/stable/modules/clustering.html)। मान को प्रत्येक पुनरावृत्ति पर wcss चर में जोड़ा जाता है।
+
+ > 🎓 k-means++: [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) में आप 'k-means++' अनुकूलन का उपयोग कर सकते हैं, जो "सेंट्रॉइड को (सामान्यतः) एक-दूसरे से दूर प्रारंभिक करता है, जिससे यादृच्छिक प्रारंभिककरण की तुलना में संभवतः बेहतर परिणाम मिलते हैं।"
+
+### एल्बो विधि
+
+पहले, आपने अनुमान लगाया था कि, चूंकि आपने 3 गाना शैलियों को लक्षित किया है, आपको 3 क्लस्टर चुनने चाहिए। लेकिन क्या वाकई ऐसा है?
+
+1. सुनिश्चित करने के लिए 'एल्बो विधि' का उपयोग करें।
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ 'wcss' चर का उपयोग करें जिसे आपने पिछले चरण में बनाया था ताकि एक चार्ट बनाया जा सके जहां 'एल्बो' में मोड़ हो, जो क्लस्टरों की इष्टतम संख्या को इंगित करता है। शायद यह **वास्तव में** 3 है!
+
+ 
+
+## अभ्यास - क्लस्टरों को प्रदर्शित करें
+
+1. प्रक्रिया को फिर से आजमाएं, इस बार तीन क्लस्टर सेट करें, और क्लस्टरों को एक स्कैटरप्लॉट के रूप में प्रदर्शित करें:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. मॉडल की सटीकता की जांच करें:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ इस मॉडल की सटीकता बहुत अच्छी नहीं है, और क्लस्टरों का आकार आपको एक संकेत देता है कि क्यों।
+
+ 
+
+   यह डेटा बहुत असंतुलित है, बहुत कम सहसंबद्ध है, और कॉलम मानों के बीच इतना अधिक वैरिएंस है कि इसे अच्छी तरह क्लस्टर नहीं किया जा सकता। वास्तव में, जो क्लस्टर बनते हैं, वे शायद ऊपर परिभाषित की गई तीन शैली श्रेणियों से भारी रूप से प्रभावित या तिरछे हैं। यह एक सीखने की प्रक्रिया थी!
+
+ Scikit-learn के दस्तावेज़ों में, आप देख सकते हैं कि इस तरह के मॉडल, जिसमें क्लस्टर बहुत अच्छी तरह से चिह्नित नहीं होते हैं, में 'वैरिएंस' की समस्या होती है:
+
+ 
+   > Scikit-learn से इन्फोग्राफिक
+
+## वैरिएंस
+
+वैरिएंस को "मीन से वर्ग अंतरों का औसत" के रूप में परिभाषित किया गया है [(स्रोत)](https://www.mathsisfun.com/data/standard-deviation.html)। इस क्लस्टरिंग समस्या के संदर्भ में, इसका अर्थ है कि हमारे डेटा सेट की संख्याएँ मीन से कुछ अधिक भटकती हैं।
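+
+अवधारणा को स्पष्ट करने के लिए, कुछ काल्पनिक संख्याओं के साथ एक छोटा उदाहरण:
+
+```python
+import numpy as np
+
+values = np.array([4, 8, 15, 16, 23, 42])
+mean = values.mean()
+variance = ((values - mean) ** 2).mean()  # मीन से वर्ग अंतरों का औसत
+print(variance, np.var(values))           # दोनों समान हैं
+```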
+
+✅ यह एक शानदार क्षण है यह सोचने का कि आप इस मुद्दे को ठीक करने के लिए सभी तरीके क्या कर सकते हैं। डेटा को थोड़ा और ट्वीक करें? विभिन्न कॉलम का उपयोग करें? एक अलग एल्गोरिदम का उपयोग करें? संकेत: अपने डेटा को सामान्यीकृत करने के लिए [स्केलिंग](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) आज़माएं और अन्य कॉलम का परीक्षण करें।
+
+> इस '[वैरिएंस कैलकुलेटर](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' को आज़माएं ताकि अवधारणा को थोड़ा और समझ सकें।
+
+---
+
+## 🚀चुनौती
+
+इस नोटबुक के साथ कुछ समय बिताएं, पैरामीटर ट्वीक करें। क्या आप डेटा को और अधिक साफ करके (उदाहरण के लिए बाहरी मानों को हटाकर) मॉडल की सटीकता में सुधार कर सकते हैं? आप दिए गए डेटा नमूनों को अधिक भार देने के लिए वज़न का उपयोग कर सकते हैं। बेहतर क्लस्टर बनाने के लिए आप और क्या कर सकते हैं?
+
+संकेत: अपने डेटा को स्केल करने का प्रयास करें। नोटबुक में टिप्पणी किया हुआ (commented) कोड है जो मानक स्केलिंग जोड़ता है ताकि डेटा कॉलम रेंज के संदर्भ में एक-दूसरे के अधिक समान हो जाएं। आप पाएंगे कि जहाँ सिल्हूट स्कोर नीचे चला जाता है, वहीं एल्बो ग्राफ का 'किंक' चिकना हो जाता है। ऐसा इसलिए है क्योंकि डेटा को बिना स्केल किए छोड़ने से कम वैरिएंस वाले डेटा को अधिक भार मिल जाता है। इस समस्या के बारे में और पढ़ें [यहां](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226)।
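+
+इस संकेत का एक अनुमानित स्केच, यह मानते हुए कि `X` वही फीचर मैट्रिक्स है जो ऊपर बनाया गया था (नोटबुक का वास्तविक कोड थोड़ा अलग हो सकता है):
+
+```python
+from sklearn.cluster import KMeans
+from sklearn.preprocessing import StandardScaler
+
+# हर कॉलम को स्केल करें ताकि उसका मीन 0 और मानक विचलन 1 हो
+scaler = StandardScaler()
+X_scaled = scaler.fit_transform(X)
+
+kmeans = KMeans(n_clusters=3, random_state=42)
+labels = kmeans.fit_predict(X_scaled)
+```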
+
+## [पोस्ट-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## समीक्षा और स्व-अध्ययन
+
+K-Means सिम्युलेटर [जैसे इस एक](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/) को देखें। आप इस टूल का उपयोग नमूना डेटा बिंदुओं को विज़ुअलाइज़ करने और इसके सेंट्रॉइड निर्धारित करने के लिए कर सकते हैं। आप डेटा की यादृच्छिकता, क्लस्टरों की संख्या और सेंट्रॉइड की संख्या को संपादित कर सकते हैं। क्या इससे आपको यह समझने में मदद मिलती है कि डेटा को कैसे समूहित किया जा सकता है?
+
+इसके अलावा, स्टैनफोर्ड से [K-Means पर इस हैंडआउट](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) को देखें।
+
+## असाइनमेंट
+
+[विभिन्न क्लस्टरिंग विधियों का प्रयास करें](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/2-K-Means/assignment.md b/translations/hi/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..48831b99c
--- /dev/null
+++ b/translations/hi/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,14 @@
+# विभिन्न क्लस्टरिंग विधियों को आजमाएं
+
+## निर्देश
+
+इस पाठ में आपने K-Means क्लस्टरिंग के बारे में सीखा। कभी-कभी K-Means आपके डेटा के लिए उपयुक्त नहीं होता है। इन पाठों से या कहीं और से डेटा का उपयोग करके एक नोटबुक बनाएं (अपने स्रोत को श्रेय दें) और K-Means का उपयोग किए बिना एक अलग क्लस्टरिंग विधि दिखाएं। आपने क्या सीखा?
+
+## मूल्यांकन
+
+| मापदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | --------------------------------------------------------------- | -------------------------------------------------------------------- | ---------------------------- |
+| | एक अच्छी तरह से प्रलेखित क्लस्टरिंग मॉडल के साथ एक नोटबुक प्रस्तुत की जाती है | बिना अच्छे प्रलेखन और/या अधूरी नोटबुक प्रस्तुत की जाती है | अधूरा काम प्रस्तुत किया जाता है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/hi/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..a0f39397f
--- /dev/null
+++ b/translations/hi/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियां या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/5-Clustering/README.md b/translations/hi/5-Clustering/README.md
new file mode 100644
index 000000000..f09a1c12f
--- /dev/null
+++ b/translations/hi/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# मशीन लर्निंग के लिए क्लस्टरिंग मॉडल
+
+क्लस्टरिंग एक मशीन लर्निंग कार्य है जिसमें उन वस्तुओं को खोजा जाता है जो एक-दूसरे से मिलती-जुलती हैं, और इन्हें 'क्लस्टर' कहे जाने वाले समूहों में बाँटा जाता है। क्लस्टरिंग को मशीन लर्निंग के अन्य दृष्टिकोणों से जो बात अलग करती है, वह यह है कि यह सब स्वचालित रूप से होता है; वास्तव में, यह कहना उचित होगा कि यह सुपरवाइज्ड लर्निंग के विपरीत है।
+
+## क्षेत्रीय विषय: नाइजीरियाई दर्शकों के संगीत स्वाद के लिए क्लस्टरिंग मॉडल 🎧
+
+नाइजीरिया के विविध दर्शकों के विविध संगीत स्वाद हैं। स्पॉटिफाई से स्क्रैप किए गए डेटा का उपयोग करते हुए (इस [लेख](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421) से प्रेरित होकर), आइए नाइजीरिया में कुछ लोकप्रिय संगीत देखें। इस डेटासेट में विभिन्न गीतों के 'डांसबिलिटी' स्कोर, 'एकॉस्टिकनेस', लाउडनेस, 'स्पीचनेस', लोकप्रियता और ऊर्जा के बारे में डेटा शामिल है। इस डेटा में पैटर्न की खोज करना दिलचस्प होगा!
+
+
+
+> फोटो मार्सेला लास्कोस्की द्वारा अनस्प्लैश पर
+
+इस पाठ्यक्रम की श्रृंखला में, आप क्लस्टरिंग तकनीकों का उपयोग करके डेटा का विश्लेषण करने के नए तरीके खोजेंगे। क्लस्टरिंग विशेष रूप से तब उपयोगी होती है जब आपके डेटासेट में लेबल की कमी होती है। यदि इसमें लेबल होते हैं, तो पिछले पाठों में आपने जो वर्गीकरण तकनीकें सीखी हैं, वे अधिक उपयोगी हो सकती हैं। लेकिन उन मामलों में जहां आप बिना लेबल वाले डेटा को समूहित करना चाहते हैं, क्लस्टरिंग पैटर्न की खोज के लिए एक शानदार तरीका है।
+
+> कुछ उपयोगी लो-कोड टूल हैं जो आपको क्लस्टरिंग मॉडल के साथ काम करने के बारे में जानने में मदद कर सकते हैं। इस कार्य के लिए [Azure ML का प्रयास करें](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## पाठ
+
+1. [क्लस्टरिंग का परिचय](1-Visualize/README.md)
+2. [K-Means क्लस्टरिंग](2-K-Means/README.md)
+
+## क्रेडिट्स
+
+ये पाठ [Jen Looper](https://www.twitter.com/jenlooper) द्वारा 🎶 के साथ लिखे गए थे, जिनकी सहायक समीक्षाएँ [Rishit Dagli](https://rishit_dagli) और [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) ने की थीं।
+
+[Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) डेटासेट Kaggle से लिया गया था, जिसे Spotify से स्क्रैप किया गया था।
+
+उपयोगी K-Means उदाहरण जिन्होंने इस पाठ को बनाने में सहायता की, उनमें यह [iris exploration](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), यह [introductory notebook](https://www.kaggle.com/prashant111/k-means-clustering-with-python), और यह [hypothetical NGO example](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering) शामिल हैं।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/6-NLP/1-Introduction-to-NLP/README.md b/translations/hi/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..4b2f58192
--- /dev/null
+++ b/translations/hi/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# प्राकृतिक भाषा प्रसंस्करण का परिचय
+
+यह पाठ *प्राकृतिक भाषा प्रसंस्करण*, जो *कंप्यूटेशनल लिंग्विस्टिक्स* का एक उपक्षेत्र है, के संक्षिप्त इतिहास और महत्वपूर्ण अवधारणाओं को कवर करता है।
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## परिचय
+
+NLP, जैसा कि आमतौर पर जाना जाता है, उन सबसे प्रसिद्ध क्षेत्रों में से एक है जहाँ मशीन लर्निंग को लागू किया गया है और उत्पादन सॉफ्टवेयर में उपयोग किया गया है।
+
+✅ क्या आप किसी सॉफ्टवेयर के बारे में सोच सकते हैं जिसका आप हर दिन उपयोग करते हैं जिसमें शायद कुछ NLP एम्बेडेड है? आपके वर्ड प्रोसेसिंग प्रोग्राम या मोबाइल ऐप्स के बारे में क्या जो आप नियमित रूप से उपयोग करते हैं?
+
+आप निम्नलिखित के बारे में जानेंगे:
+
+- **भाषाओं का विचार**. भाषाएँ कैसे विकसित हुईं और अध्ययन के प्रमुख क्षेत्र क्या रहे हैं।
+- **परिभाषा और अवधारणाएँ**. आप यह भी सीखेंगे कि कंप्यूटर टेक्स्ट को कैसे प्रोसेस करते हैं, जिसमें पार्सिंग, ग्रामर, और संज्ञा और क्रियाओं की पहचान शामिल है। इस पाठ में कुछ कोडिंग कार्य हैं, और कई महत्वपूर्ण अवधारणाओं को पेश किया गया है जिन्हें आप अगले पाठों में कोड करना सीखेंगे।
+
+## कंप्यूटेशनल लिंग्विस्टिक्स
+
+कंप्यूटेशनल लिंग्विस्टिक्स एक अनुसंधान और विकास का क्षेत्र है जो कई दशकों से यह अध्ययन करता है कि कंप्यूटर कैसे भाषाओं के साथ काम कर सकते हैं, और यहां तक कि समझ सकते हैं, अनुवाद कर सकते हैं, और संवाद कर सकते हैं। प्राकृतिक भाषा प्रसंस्करण (NLP) एक संबंधित क्षेत्र है जो इस बात पर केंद्रित है कि कंप्यूटर 'प्राकृतिक', या मानव, भाषाओं को कैसे प्रोसेस कर सकते हैं।
+
+### उदाहरण - फोन डिक्टेशन
+
+यदि आपने कभी टाइपिंग के बजाय अपने फोन पर बोला है या एक वर्चुअल असिस्टेंट से प्रश्न पूछा है, तो आपकी आवाज़ को टेक्स्ट में परिवर्तित किया गया और फिर उस भाषा से प्रोसेस या *पार्स* किया गया जिसे आपने बोला। फिर पहचाने गए कीवर्ड को एक फॉर्मेट में प्रोसेस किया गया जिसे फोन या असिस्टेंट समझ सके और उस पर कार्य कर सके।
+
+
+> वास्तविक भाषाई समझना कठिन है! छवि [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+### यह तकनीक कैसे संभव है?
+
+यह संभव है क्योंकि किसी ने ऐसा कंप्यूटर प्रोग्राम लिखा है। कुछ दशक पहले, कुछ विज्ञान कथा लेखकों ने भविष्यवाणी की थी कि लोग ज्यादातर अपने कंप्यूटरों से बात करेंगे, और कंप्यूटर हमेशा यह समझेंगे कि वे क्या मतलब रखते हैं। दुख की बात है कि यह समस्या उतनी आसान नहीं थी जितनी कई लोगों ने कल्पना की थी, और जबकि यह आज एक बेहतर समझी जाने वाली समस्या है, वाक्य के अर्थ को समझने में 'सही' प्राकृतिक भाषा प्रसंस्करण प्राप्त करने में महत्वपूर्ण चुनौतियाँ हैं। यह विशेष रूप से हास्य या वाक्य में व्यंग्य जैसी भावनाओं को समझने में कठिन समस्या है।
+
+इस बिंदु पर, आप स्कूल की कक्षाओं को याद कर सकते हैं जहाँ शिक्षक ने वाक्य में ग्रामर के भागों को कवर किया था। कुछ देशों में, छात्रों को ग्रामर और लिंग्विस्टिक्स एक समर्पित विषय के रूप में पढ़ाया जाता है, लेकिन कई में, ये विषय एक भाषा सीखने का हिस्सा होते हैं: या तो प्राथमिक विद्यालय में आपकी पहली भाषा (पढ़ना और लिखना सीखना) और शायद एक दूसरी भाषा पोस्ट-प्राइमरी, या हाई स्कूल में। चिंता न करें यदि आप संज्ञाओं और क्रियाओं या क्रिया विशेषणों और विशेषणों के बीच अंतर करने में विशेषज्ञ नहीं हैं!
+
+यदि आप *साधारण वर्तमान* और *वर्तमान प्रगतिशील* के बीच के अंतर से संघर्ष करते हैं, तो आप अकेले नहीं हैं। यह कई लोगों के लिए एक चुनौतीपूर्ण चीज है, यहां तक कि एक भाषा के मूल वक्ताओं के लिए भी। अच्छी खबर यह है कि कंप्यूटर औपचारिक नियमों को लागू करने में वास्तव में अच्छे हैं, और आप ऐसा कोड लिखना सीखेंगे जो मानव की तरह एक वाक्य को *पार्स* कर सके। बाद में आप जिस बड़ी चुनौती की जांच करेंगे वह है एक वाक्य के *अर्थ* और *भावना* को समझना।
+
+## पूर्वापेक्षाएँ
+
+इस पाठ के लिए, मुख्य पूर्वापेक्षा इस पाठ की भाषा को पढ़ने और समझने में सक्षम होना है। कोई गणितीय समस्याएं या समीकरण हल करने की आवश्यकता नहीं है। जबकि मूल लेखक ने इस पाठ को अंग्रेजी में लिखा था, इसे अन्य भाषाओं में भी अनुवादित किया गया है, इसलिए आप एक अनुवाद पढ़ सकते हैं। ऐसे उदाहरण हैं जहां कई विभिन्न भाषाओं का उपयोग किया गया है (विभिन्न भाषाओं के विभिन्न ग्रामर नियमों की तुलना करने के लिए)। ये *अनुवादित नहीं* हैं, लेकिन व्याख्यात्मक पाठ अनुवादित है, इसलिए अर्थ स्पष्ट होना चाहिए।
+
+कोडिंग कार्यों के लिए, आप Python का उपयोग करेंगे और उदाहरण Python 3.8 का उपयोग कर रहे हैं।
+
+इस अनुभाग में, आपको आवश्यकता होगी, और उपयोग करेंगे:
+
+- **Python 3 समझ**. Python 3 में प्रोग्रामिंग भाषा समझ, यह पाठ इनपुट, लूप, फ़ाइल पढ़ना, ऐरे का उपयोग करता है।
+- **Visual Studio Code + एक्सटेंशन**. हम Visual Studio Code और इसके Python एक्सटेंशन का उपयोग करेंगे। आप अपने पसंदीदा Python IDE का भी उपयोग कर सकते हैं।
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob) Python के लिए एक सरलीकृत टेक्स्ट प्रोसेसिंग लाइब्रेरी है। इसे अपने सिस्टम पर स्थापित करने के लिए TextBlob साइट पर दिए गए निर्देशों का पालन करें (नीचे दिखाए अनुसार कॉर्पोरा को भी इंस्टॉल करें):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 टिप: आप सीधे VS Code वातावरण में Python चला सकते हैं। अधिक जानकारी के लिए [docs](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) देखें।
+
+## मशीनों से बात करना
+
+मानव भाषा को समझने के लिए कंप्यूटर बनाने का इतिहास दशकों पुराना है, और सबसे पहले वैज्ञानिकों में से एक जिन्होंने प्राकृतिक भाषा प्रसंस्करण पर विचार किया था, वे *Alan Turing* थे।
+
+### 'ट्यूरिंग टेस्ट'
+
+जब ट्यूरिंग 1950 के दशक में *कृत्रिम बुद्धिमत्ता* पर शोध कर रहे थे, तो उन्होंने सोचा कि क्या एक संवादात्मक परीक्षण दिया जा सकता है जिसमें एक मानव और एक कंप्यूटर (टाइप किए गए संवाद के माध्यम से) शामिल हों जहां बातचीत में मानव यह सुनिश्चित न कर सके कि वे किसी अन्य मानव या कंप्यूटर से बात कर रहे हैं।
+
+यदि, एक निश्चित अवधि की बातचीत के बाद, मानव यह निर्धारित नहीं कर सकता कि उत्तर कंप्यूटर से आ रहे हैं या नहीं, तो क्या कहा जा सकता है कि कंप्यूटर *सोच* रहा है?
+
+### प्रेरणा - 'द इमिटेशन गेम'
+
+इस विचार की प्रेरणा एक पार्टी गेम *द इमिटेशन गेम* से आई थी जिसमें एक पूछताछकर्ता अकेले एक कमरे में होता है और उसे यह निर्धारित करना होता है कि दो लोगों (दूसरे कमरे में) में से कौन पुरुष और कौन महिला है। पूछताछकर्ता नोट भेज सकता है, और उसे ऐसे प्रश्न सोचने होते हैं जिनके लिखित उत्तर रहस्यमय व्यक्ति के लिंग का खुलासा करें। बेशक, दूसरे कमरे में खिलाड़ी पूछताछकर्ता को भ्रमित या गुमराह करने की कोशिश कर रहे होते हैं, जबकि ईमानदारी से उत्तर देने की कोशिश कर रहे होते हैं।
+
+### Eliza का विकास
+
+1960 के दशक में MIT के एक वैज्ञानिक *Joseph Weizenbaum* ने [*Eliza*](https://wikipedia.org/wiki/ELIZA) नामक एक कंप्यूटर 'थेरेपिस्ट' विकसित किया जो मानव से प्रश्न पूछता था और उनके उत्तरों को समझने का आभास देता था। हालांकि, जबकि Eliza एक वाक्य को पार्स कर सकता था और कुछ व्याकरणिक संरचनाओं और कीवर्ड की पहचान कर सकता था ताकि एक उचित उत्तर दे सके, इसे वाक्य को *समझने* के रूप में नहीं कहा जा सकता था। यदि Eliza को एक वाक्य इस प्रारूप में प्रस्तुत किया गया "**I am** sad", तो यह वाक्य को पुनर्व्यवस्थित और प्रतिस्थापित कर सकता है और प्रतिक्रिया में "How long have **you been** sad" बना सकता है।
+
+इससे ऐसा आभास होता था कि Eliza ने कथन को समझा है और एक अनुवर्ती प्रश्न पूछ रहा है, जबकि वास्तव में, यह केवल काल (tense) बदल रहा था और कुछ शब्द जोड़ रहा था। यदि Eliza को कोई ऐसा कीवर्ड नहीं मिलता जिसके लिए उसके पास उत्तर हो, तो यह इसके बजाय एक रैंडम उत्तर देता जो कई अलग-अलग कथनों पर लागू हो सकता था। Eliza को आसानी से मूर्ख बनाया जा सकता था; उदाहरण के लिए, यदि उपयोगकर्ता लिखता "**You are** a bicycle", तो यह अधिक तर्कसंगत प्रतिक्रिया के बजाय "How long have **I been** a bicycle?" प्रतिक्रिया देता।
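+
+नीचे एक काल्पनिक, न्यूनतम स्केच है कि इस तरह का पुनर्व्यवस्थापन कोड में कैसा दिख सकता है (यह ELIZA का वास्तविक कार्यान्वयन नहीं है, केवल विचार का चित्रण है):
+
+```python
+import re
+
+def eliza_like_response(statement):
+    # "I am X" जैसे कथन को "How long have you been X?" में बदलें
+    match = re.match(r"[Ii] am (.+)", statement)
+    if match:
+        return f"How long have you been {match.group(1)}?"
+    # कोई पैटर्न न मिलने पर एक सामान्य उत्तर दें
+    return "Please, go on."
+
+print(eliza_like_response("I am sad"))  # How long have you been sad?
+```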
+
+[](https://youtu.be/RMK9AphfLco "Eliza से बातचीत")
+
+> 🎥 ऊपर की छवि पर क्लिक करें मूल ELIZA प्रोग्राम के बारे में वीडियो के लिए
+
+> नोट: आप [Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) के 1966 में प्रकाशित मूल विवरण को पढ़ सकते हैं यदि आपके पास ACM खाता है। वैकल्पिक रूप से, [wikipedia](https://wikipedia.org/wiki/ELIZA) पर Eliza के बारे में पढ़ें।
+
+## व्यायाम - एक बुनियादी संवादात्मक बॉट को कोड करना
+
+एक संवादात्मक बॉट, जैसे Eliza, एक प्रोग्राम है जो उपयोगकर्ता इनपुट को प्राप्त करता है और समझने और बुद्धिमानी से प्रतिक्रिया देने का आभास देता है। Eliza के विपरीत, हमारे बॉट के पास कई नियम नहीं होंगे जो इसे एक बुद्धिमान बातचीत का आभास देंगे। इसके बजाय, हमारे बॉट में केवल एक ही क्षमता होगी, जो कि बातचीत को यादृच्छिक प्रतिक्रियाओं के साथ जारी रखना है जो लगभग किसी भी सामान्य बातचीत में काम कर सकती हैं।
+
+### योजना
+
+आपके संवादात्मक बॉट बनाने के चरण:
+
+1. उपयोगकर्ता को बॉट के साथ कैसे इंटरैक्ट करना है, इसके निर्देश प्रिंट करें
+2. एक लूप शुरू करें
+ 1. उपयोगकर्ता इनपुट स्वीकार करें
+ 2. यदि उपयोगकर्ता ने बाहर निकलने के लिए कहा है, तो बाहर निकलें
+ 3. उपयोगकर्ता इनपुट को प्रोसेस करें और प्रतिक्रिया निर्धारित करें (इस मामले में, प्रतिक्रिया संभावित सामान्य प्रतिक्रियाओं की सूची से एक रैंडम चयन है)
+ 4. प्रतिक्रिया प्रिंट करें
+3. चरण 2 पर वापस जाएं
+
+### बॉट बनाना
+
+आइए बॉट को बनाना शुरू करें। हम कुछ वाक्यांशों को परिभाषित करके शुरू करेंगे।
+
+1. निम्नलिखित रैंडम प्रतिक्रियाओं के साथ Python में इस बॉट को स्वयं बनाएं:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ यहां कुछ नमूना आउटपुट है जो आपको मार्गदर्शन करेगा (उपयोगकर्ता इनपुट `>` से शुरू होने वाली लाइनों पर है):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
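+   इस योजना को लागू करने का एक न्यूनतम स्केच (कई संभावित तरीकों में से एक) कुछ इस तरह दिख सकता है:
+
+   ```python
+   import random
+
+   random_responses = ["That is quite interesting, please tell me more.",
+                       "I see. Do go on.",
+                       "Why do you say that?",
+                       "Funny weather we've been having, isn't it?",
+                       "Let's change the subject.",
+                       "Did you catch the game last night?"]
+
+   print("Hello, I am Marvin, the simple robot.")
+   print("You can end this conversation at any time by typing 'bye'")
+   print("After typing each answer, press 'enter'")
+   print("How are you today?")
+
+   while True:
+       user_input = input("> ")
+       # उपयोगकर्ता के बाहर निकलने पर लूप समाप्त करें
+       if user_input.lower() == "bye":
+           break
+       # संभावित सामान्य प्रतिक्रियाओं की सूची से एक रैंडम चयन
+       print(random.choice(random_responses))
+
+   print("It was nice talking to you, goodbye!")
+   ```
+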
+ कार्य का एक संभावित समाधान [यहां](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py) है
+
+ ✅ रुकें और विचार करें
+
+ 1. क्या आपको लगता है कि रैंडम प्रतिक्रियाएं किसी को यह सोचने में 'धोखा' देंगी कि बॉट वास्तव में उन्हें समझता है?
+ 2. बॉट को अधिक प्रभावी बनाने के लिए उसे किन सुविधाओं की आवश्यकता होगी?
+ 3. यदि बॉट वास्तव में एक वाक्य के अर्थ को 'समझ' सकता है, तो क्या उसे बातचीत में पिछले वाक्यों के अर्थ को 'याद' करने की आवश्यकता होगी?
+
+---
+
+## 🚀चुनौती
+
+ऊपर दिए गए "रुकें और विचार करें" तत्वों में से एक को चुनें और या तो इसे कोड में लागू करने का प्रयास करें या पेपर पर छद्मकोड का उपयोग करके एक समाधान लिखें।
+
+अगले पाठ में, आप प्राकृतिक भाषा को पार्स करने और मशीन लर्निंग के कई अन्य दृष्टिकोणों के बारे में जानेंगे।
+
+## [पोस्ट-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## समीक्षा और आत्म-अध्ययन
+
+नीचे दिए गए संदर्भों को आगे पढ़ने के अवसरों के रूप में देखें।
+
+### संदर्भ
+
+1. शुबर्ट, लेनहार्ट, "कंप्यूटेशनल लिंग्विस्टिक्स", *द स्टैनफोर्ड एन्साइक्लोपीडिया ऑफ फिलॉसफी* (स्प्रिंग 2020 संस्करण), एडवर्ड एन. ज़ाल्टा (संपादक), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. प्रिंसटन यूनिवर्सिटी "वर्डनेट के बारे में।" [WordNet](https://wordnet.princeton.edu/). प्रिंसटन यूनिवर्सिटी। 2010।
+
+## असाइनमेंट
+
+[एक बॉट खोजें](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/hi/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..fb6484135
--- /dev/null
+++ b/translations/hi/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# बॉट की खोज
+
+## निर्देश
+
+बॉट हर जगह होते हैं। आपका कार्य: एक बॉट ढूंढें और उसे अपनाएं! आप उन्हें वेबसाइटों, बैंकिंग एप्लिकेशनों, और फोन पर पा सकते हैं, उदाहरण के लिए जब आप वित्तीय सेवाओं की कंपनियों को सलाह या खाता जानकारी के लिए कॉल करते हैं। बॉट का विश्लेषण करें और देखें कि क्या आप उसे भ्रमित कर सकते हैं। अगर आप बॉट को भ्रमित कर सकते हैं, तो आपको क्यों लगता है कि ऐसा हुआ? अपने अनुभव के बारे में एक छोटा पेपर लिखें।
+
+## मूल्यांकन मानदंड
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | ------------------------------------------------------------------------------------------------------------- | -------------------------------------------- | ------------------------ |
+| | एक पूरा पेज का पेपर लिखा गया है, जिसमें अनुमानित बॉट आर्किटेक्चर को समझाया गया है और आपके अनुभव को रेखांकित किया गया है | पेपर अधूरा है या अच्छी तरह से शोधित नहीं है | कोई पेपर जमा नहीं किया गया |
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या गलतियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/6-NLP/2-Tasks/README.md b/translations/hi/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..d25a06b2c
--- /dev/null
+++ b/translations/hi/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# सामान्य प्राकृतिक भाषा प्रसंस्करण कार्य और तकनीकें
+
+अधिकांश *प्राकृतिक भाषा प्रसंस्करण* कार्यों के लिए, प्रसंस्करण के लिए पाठ को तोड़ना, जांचना और परिणामों को संग्रहीत या नियमों और डेटा सेट के साथ क्रॉस रेफरेंस करना आवश्यक होता है। ये कार्य प्रोग्रामर को एक पाठ में शब्दों और शब्दों की _आवृत्ति_ या _अर्थ_ या केवल _इरादा_ प्राप्त करने की अनुमति देते हैं।
+
+## [पूर्व-व्याख्यान क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+आइए उन सामान्य तकनीकों की खोज करें जो पाठ प्रसंस्करण में उपयोग की जाती हैं। मशीन लर्निंग के साथ मिलकर, ये तकनीकें आपको बड़े पैमाने पर पाठ को कुशलता से विश्लेषण करने में मदद करती हैं। हालांकि, इन कार्यों पर एमएल लागू करने से पहले, आइए समझें कि एक एनएलपी विशेषज्ञ द्वारा सामना की जाने वाली समस्याएं क्या हैं।
+
+## एनएलपी के सामान्य कार्य
+
+आप जिस पाठ पर काम कर रहे हैं, उसका विश्लेषण करने के विभिन्न तरीके हैं। ऐसे कार्य हैं जिन्हें आप कर सकते हैं और इन कार्यों के माध्यम से आप पाठ की समझ का अनुमान लगा सकते हैं और निष्कर्ष निकाल सकते हैं। आप आमतौर पर इन कार्यों को क्रम में करते हैं।
+
+### टोकनाइजेशन
+
+शायद सबसे पहली चीज जो अधिकांश एनएलपी एल्गोरिदम को करनी होती है वह है पाठ को टोकन या शब्दों में विभाजित करना। जबकि यह सरल लगता है, विराम चिह्नों और विभिन्न भाषाओं के शब्द और वाक्य सीमांककों को ध्यान में रखना इसे जटिल बना सकता है। आपको सीमांकन निर्धारित करने के लिए विभिन्न तरीकों का उपयोग करना पड़ सकता है।
+
+
+> **Pride and Prejudice** से एक वाक्य को टोकनाइज़ करना। [Jen Looper](https://twitter.com/jenlooper) द्वारा इन्फोग्राफिक
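+
+उदाहरण के लिए, `TextBlob` (जिससे आप इस पाठ में आगे मिलेंगे) के साथ टोकनाइजेशन कुछ इस तरह दिखता है:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, that a single man "
+                "in possession of a good fortune, must be in want of a wife!")
+print(blob.words)      # विराम चिह्नों के बिना शब्द-टोकन
+print(blob.sentences)  # वाक्य-स्तर के टोकन
+```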
+
+### एम्बेडिंग्स
+
+[शब्द एम्बेडिंग्स](https://wikipedia.org/wiki/Word_embedding) आपके पाठ डेटा को संख्यात्मक रूप में बदलने का एक तरीका है। एम्बेडिंग्स इस तरह से की जाती हैं ताकि समान अर्थ वाले या एक साथ उपयोग किए जाने वाले शब्द एक साथ समूहित हो जाएं।
+
+
+> "I have the highest respect for your nerves, they are my old friends." - **Pride and Prejudice** में एक वाक्य के लिए शब्द एम्बेडिंग्स। [Jen Looper](https://twitter.com/jenlooper) द्वारा इन्फोग्राफिक
+
+✅ शब्द एम्बेडिंग्स के साथ प्रयोग करने के लिए [यह दिलचस्प उपकरण](https://projector.tensorflow.org/) आज़माएं। एक शब्द पर क्लिक करने से समान शब्दों के समूह दिखते हैं: 'toy' 'disney', 'lego', 'playstation', और 'console' के साथ समूहित होता है।
+
+### पार्सिंग और पार्ट-ऑफ-स्पीच टैगिंग
+
+प्रत्येक टोकनाइज़ किए गए शब्द को पार्ट-ऑफ-स्पीच के रूप में टैग किया जा सकता है - संज्ञा, क्रिया, या विशेषण। वाक्य `the quick red fox jumped over the lazy brown dog` को पीओएस टैग किया जा सकता है, जैसे fox = noun, jumped = verb।
+
+
+
+> **Pride and Prejudice** से एक वाक्य पार्स करना। [Jen Looper](https://twitter.com/jenlooper) द्वारा इन्फोग्राफिक
+
+पार्सिंग यह पहचानना है कि एक वाक्य में कौन से शब्द एक-दूसरे से संबंधित हैं - उदाहरण के लिए `the quick red fox jumped` एक विशेषण-संज्ञा-क्रिया अनुक्रम है जो `lazy brown dog` अनुक्रम से अलग है।
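+
+`TextBlob` के साथ पीओएस टैगिंग का एक छोटा उदाहरण (सटीक टैग प्रयुक्त टैगर पर निर्भर करते हैं):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.tags)
+# कुछ इस तरह: [('the', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'), ('jumped', 'VBD'), ...]
+```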
+
+### शब्द और वाक्यांश आवृत्तियाँ
+
+एक बड़े पाठ का विश्लेषण करते समय एक उपयोगी प्रक्रिया यह है कि प्रत्येक रुचिकर शब्द या वाक्यांश की एक डिक्शनरी बनाई जाए और यह कितनी बार प्रकट होता है। वाक्यांश `the quick red fox jumped over the lazy brown dog` में the के लिए शब्द आवृत्ति 2 है।
+
+आइए एक उदाहरण पाठ देखें जहां हम शब्दों की आवृत्ति की गणना करते हैं। रुडयार्ड किपलिंग की कविता द विनर्स में निम्नलिखित श्लोक है:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+चूंकि शब्द और वाक्यांश आवृत्तियाँ आवश्यकतानुसार केस-सेंसिटिव या केस-इंसेंसिटिव हो सकती हैं, वाक्यांश `a friend` की आवृत्ति 2 है, `the` की आवृत्ति 6 है, और `travels` की आवृत्ति 2 है।
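+
+ऊपर दिए गए श्लोक पर शब्द आवृत्तियाँ गिनने का एक सरल (केस-इंसेंसिटिव) तरीका, मानक लाइब्रेरी के `collections.Counter` के साथ:
+
+```python
+from collections import Counter
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# विराम चिह्न हटाएँ, लोअरकेस करें, और शब्दों में विभाजित करें
+words = verse.lower().replace("?", " ").replace(".", " ").replace(",", " ").split()
+freq = Counter(words)
+print(freq["the"], freq["travels"])  # 6 2
+```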
+
+### एन-ग्राम्स
+
+एक पाठ को एक सेट लंबाई के शब्दों के अनुक्रम में विभाजित किया जा सकता है, एकल शब्द (यूनिग्राम), दो शब्द (बिग्राम्स), तीन शब्द (ट्रिग्राम्स) या किसी भी संख्या के शब्द (एन-ग्राम्स)।
+
+उदाहरण के लिए `the quick red fox jumped over the lazy brown dog` के साथ 2 के एन-ग्राम स्कोर के साथ निम्नलिखित एन-ग्राम्स उत्पन्न होते हैं:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+इसे वाक्य के ऊपर एक स्लाइडिंग बॉक्स के रूप में देखना आसान हो सकता है। यहां यह 3 शब्दों के एन-ग्राम्स के लिए है, प्रत्येक वाक्य में एन-ग्राम बोल्ड में है:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> एन-ग्राम मान 3: [Jen Looper](https://twitter.com/jenlooper) द्वारा इन्फोग्राफिक
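+
+`TextBlob` का `ngrams()` मेथड यह काम सीधे कर देता है:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.ngrams(n=2))  # बिग्राम्स: [WordList(['the', 'quick']), WordList(['quick', 'red']), ...]
+print(blob.ngrams(n=3))  # ट्रिग्राम्स: [WordList(['the', 'quick', 'red']), ...]
+```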
+
+### संज्ञा वाक्यांश निष्कर्षण
+
+अधिकांश वाक्यों में एक संज्ञा होती है जो वाक्य का विषय या वस्तु होती है। अंग्रेजी में, इसे अक्सर 'a' या 'an' या 'the' के पहले होने के रूप में पहचाना जा सकता है। वाक्य के अर्थ को समझने का प्रयास करते समय एनएलपी में 'संज्ञा वाक्यांश को निकालकर' वाक्य के विषय या वस्तु की पहचान करना एक सामान्य कार्य है।
+
+✅ वाक्य "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun." में, क्या आप संज्ञा वाक्यांशों की पहचान कर सकते हैं?
+
+वाक्य `the quick red fox jumped over the lazy brown dog` में 2 संज्ञा वाक्यांश हैं: **quick red fox** और **lazy brown dog**।
+
+### भावना विश्लेषण
+
+एक वाक्य या पाठ का भावना के लिए विश्लेषण किया जा सकता है, यानी यह कितना *सकारात्मक* या *नकारात्मक* है। भावना को *ध्रुवीयता* और *वस्तुनिष्ठता/आत्मनिष्ठता* से मापा जाता है। ध्रुवीयता को -1.0 से 1.0 (नकारात्मक से सकारात्मक) पर और आत्मनिष्ठता को 0.0 से 1.0 (सबसे वस्तुनिष्ठ से सबसे आत्मनिष्ठ) पर मापा जाता है।
+
+✅ बाद में आप सीखेंगे कि मशीन लर्निंग का उपयोग करके भावना निर्धारित करने के विभिन्न तरीके हैं, लेकिन एक तरीका यह है कि किसी मानव विशेषज्ञ द्वारा सकारात्मक या नकारात्मक के रूप में वर्गीकृत किए गए शब्दों और वाक्यांशों की एक सूची हो और उस मॉडल को पाठ पर लागू करें ताकि ध्रुवीयता स्कोर की गणना की जा सके। क्या आप देख सकते हैं कि यह कुछ परिस्थितियों में कैसे काम करेगा और दूसरों में कम काम करेगा?
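+
+`TextBlob` के साथ एक त्वरित उदाहरण - `sentiment` प्रॉपर्टी ध्रुवीयता और आत्मनिष्ठता दोनों लौटाती है (वास्तविक मान लाइब्रेरी के संस्करण पर निर्भर कर सकते हैं):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("What a wonderful, happy day!")
+print(blob.sentiment)               # Sentiment(polarity=..., subjectivity=...)
+print(blob.sentiment.polarity)      # -1.0 (नकारात्मक) से 1.0 (सकारात्मक)
+print(blob.sentiment.subjectivity)  # 0.0 (वस्तुनिष्ठ) से 1.0 (आत्मनिष्ठ)
+```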
+
+### इंफ्लेक्शन
+
+इंफ्लेक्शन आपको एक शब्द लेने और उस शब्द के एकवचन या बहुवचन को प्राप्त करने में सक्षम बनाता है।
+
+### लेमाटाइजेशन
+
+एक *लेम्मा* एक सेट के शब्दों के लिए मूल या हेडवर्ड है, उदाहरण के लिए *flew*, *flies*, *flying* का लेम्मा क्रिया *fly* है।
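+
+इंफ्लेक्शन और लेमाटाइजेशन दोनों `TextBlob` के `Word` ऑब्जेक्ट के साथ आज़माए जा सकते हैं:
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())       # foxes - इंफ्लेक्शन: एकवचन से बहुवचन
+print(Word("dogs").singularize())    # dog   - बहुवचन से एकवचन
+print(Word("flew").lemmatize("v"))   # fly   - क्रिया के रूप में लेम्मा
+print(Word("flies").lemmatize("v"))  # fly
+```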
+
+एनएलपी शोधकर्ता के लिए कुछ उपयोगी डेटाबेस भी उपलब्ध हैं, विशेष रूप से:
+
+### वर्डनेट
+
+[वर्डनेट](https://wordnet.princeton.edu/) शब्दों, पर्यायवाची, विलोम और कई अन्य विवरणों का डेटाबेस है जो विभिन्न भाषाओं में हर शब्द के लिए है। अनुवाद, वर्तनी परीक्षक, या किसी भी प्रकार के भाषा उपकरण बनाने का प्रयास करते समय यह अविश्वसनीय रूप से उपयोगी है।
+
+## एनएलपी लाइब्रेरीज़
+
+सौभाग्य से, आपको इन सभी तकनीकों को स्वयं बनाने की आवश्यकता नहीं है, क्योंकि उत्कृष्ट पायथन लाइब्रेरीज़ उपलब्ध हैं जो इसे उन डेवलपर्स के लिए अधिक सुलभ बनाती हैं जो प्राकृतिक भाषा प्रसंस्करण या मशीन लर्निंग में विशेषज्ञ नहीं हैं। अगला पाठ इनमें से अधिक उदाहरण शामिल करता है, लेकिन यहां आप अगले कार्य में मदद करने के लिए कुछ उपयोगी उदाहरण सीखेंगे।
+
+### व्यायाम - `TextBlob` लाइब्रेरी
+
+आइए TextBlob नामक एक लाइब्रेरी का उपयोग करें, क्योंकि इसमें इस प्रकार के कार्यों से निपटने के लिए सहायक एपीआई हैं। TextBlob "[NLTK](https://nltk.org) और [pattern](https://github.com/clips/pattern) के विशाल कंधों पर खड़ी है, और दोनों के साथ अच्छी तरह काम करती है।" इसके एपीआई में काफी मात्रा में एमएल एम्बेडेड है।
+
+> नोट: TextBlob के लिए एक उपयोगी [Quick Start](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) गाइड उपलब्ध है, जो अनुभवी Python डेवलपर्स के लिए अनुशंसित है।
+
+*संज्ञा वाक्यांशों* की पहचान करने का प्रयास करते समय, TextBlob संज्ञा वाक्यांश खोजने के लिए एक्सट्रैक्टर्स के कई विकल्प प्रदान करता है।
+
+1. `ConllExtractor` के उपयोग पर एक नज़र डालें:
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > यहाँ क्या हो रहा है? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) "एक संज्ञा वाक्यांश निष्कर्षण है जो ConLL-2000 प्रशिक्षण कॉर्पस के साथ प्रशिक्षित चंक पार्सिंग का उपयोग करता है।" ConLL-2000 कंप्यूटेशनल नेचुरल लैंग्वेज लर्निंग पर 2000 सम्मेलन को संदर्भित करता है। हर साल सम्मेलन ने एक कांटेदार एनएलपी समस्या को हल करने के लिए एक कार्यशाला की मेजबानी की, और 2000 में यह संज्ञा चंकिंग थी। एक मॉडल को वॉल स्ट्रीट जर्नल पर प्रशिक्षित किया गया था, "अनुभाग 15-18 को प्रशिक्षण डेटा (211727 टोकन) और अनुभाग 20 को परीक्षण डेटा (47377 टोकन) के रूप में।" आप उपयोग की गई प्रक्रियाओं को [यहां](https://www.clips.uantwerpen.be/conll2000/chunking/) और [परिणाम](https://ifarm.nl/erikt/research/np-chunking.html) देख सकते हैं।
+
+### चुनौती - अपने बॉट को एनएलपी के साथ सुधारना
+
+पिछले पाठ में आपने एक बहुत ही सरल प्रश्नोत्तर बॉट बनाया था। अब, आप अपने इनपुट का भावना-विश्लेषण करके और भावना से मेल खाती प्रतिक्रिया प्रिंट करके मार्विन को थोड़ा और सहानुभूतिपूर्ण बनाएंगे। आपको एक `noun_phrase` की पहचान भी करनी होगी और उसके बारे में अधिक इनपुट माँगना होगा।
+
+एक बेहतर संवादात्मक बॉट बनाते समय आपके कदम:
+
+1. उपयोगकर्ता को बॉट के साथ बातचीत कैसे करें, इसके निर्देश प्रिंट करें
+2. लूप शुरू करें
+ 1. उपयोगकर्ता इनपुट स्वीकार करें
+ 2. यदि उपयोगकर्ता ने बाहर निकलने के लिए कहा है, तो बाहर निकलें
+ 3. उपयोगकर्ता इनपुट को संसाधित करें और उपयुक्त भावना प्रतिक्रिया निर्धारित करें
+   4. यदि इनपुट में कोई संज्ञा वाक्यांश मिलता है, तो उसे बहुवचन बनाएं और उस विषय पर अधिक इनपुट माँगें (नीचे दिए गए कोड स्निपेट्स देखें)
+ 5. प्रतिक्रिया प्रिंट करें
+3. चरण 2 पर वापस लूप करें
+
+यहां TextBlob का उपयोग करके भावना निर्धारित करने के लिए कोड स्निपेट है। ध्यान दें कि भावना प्रतिक्रिया के केवल चार *ग्रेडिएंट्स* हैं (यदि आप चाहें तो अधिक हो सकते हैं):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
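+
+चरण 4 (संज्ञा वाक्यांश को बहुवचन बनाना) के लिए एक अनुमानित स्केच - यहाँ माना गया है कि `user_input_blob` वही TextBlob ऑब्जेक्ट है और `response` ऊपर निर्धारित भावना-प्रतिक्रिया है; अंतिम शब्द को बहुवचन बनाना केवल एक संभावित तरीका है:
+
+```python
+from textblob import Word
+
+if user_input_blob.noun_phrases:
+    # पहले संज्ञा वाक्यांश का अंतिम शब्द बहुवचन बनाएं: 'lovely cat' -> 'lovely cats'
+    words = user_input_blob.noun_phrases[0].split()
+    words[-1] = Word(words[-1]).pluralize()
+    response = response + "Can you tell me more about " + " ".join(words) + "?"
+else:
+    response = response + "Can you tell me more?"
+```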
+
+यहां कुछ नमूना आउटपुट है जो आपका मार्गदर्शन करेगा (उपयोगकर्ता इनपुट उन पंक्तियों पर है जो > से शुरू होती हैं):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+कार्य का एक संभावित समाधान [यहां](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py) है
+
+✅ ज्ञान की जाँच
+
+1. क्या आपको लगता है कि सहानुभूतिपूर्ण प्रतिक्रियाएँ किसी को यह सोचने के लिए 'धोखा' दे सकती हैं कि बॉट वास्तव में उन्हें समझता है?
+2. क्या संज्ञा वाक्यांश की पहचान करने से बॉट अधिक 'विश्वसनीय' बनता है?
+3. एक वाक्य से 'संज्ञा वाक्यांश' निकालना उपयोगी क्यों होगा?
+
+---
+
+## 🚀चुनौती
+
+पिछले ज्ञान की जांच में एक कार्य लें और इसे लागू करने का प्रयास करें। बॉट को एक मित्र पर परीक्षण करें। क्या यह उन्हें धोखा दे सकता है? क्या आप अपने बॉट को अधिक 'विश्वसनीय' बना सकते हैं?
+
+## [व्याख्यान के बाद क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## समीक्षा और स्व-अध्ययन
+
+अगले कुछ पाठों में आप भावना विश्लेषण के बारे में अधिक जानेंगे। [KDNuggets](https://www.kdnuggets.com/tag/nlp) पर मिलने वाले लेखों जैसे लेखों को पढ़कर इस दिलचस्प तकनीक पर शोध करें।
+
+## असाइनमेंट
+
+[बॉट को बात करना सिखाएं](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियां या अशुद्धियां हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/6-NLP/2-Tasks/assignment.md b/translations/hi/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..1a135cbb5
--- /dev/null
+++ b/translations/hi/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# एक बॉट को जवाब देना सिखाएं
+
+## निर्देश
+
+पिछले कुछ पाठों में, आपने एक बेसिक बॉट प्रोग्राम किया था जिससे आप चैट कर सकते हैं। यह बॉट रैंडम उत्तर देता है जब तक आप 'बाय' नहीं कहते। क्या आप उत्तरों को थोड़ा कम रैंडम बना सकते हैं, और विशेष चीजें कहने पर उत्तर ट्रिगर कर सकते हैं, जैसे 'क्यों' या 'कैसे'? सोचें कि मशीन लर्निंग कैसे इस प्रकार के काम को कम मैनुअल बना सकता है जब आप अपने बॉट को विस्तारित करेंगे। आप अपने कार्यों को आसान बनाने के लिए NLTK या TextBlob लाइब्रेरी का उपयोग कर सकते हैं।
+
+## मूल्यांकन
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | --------------------------------------------- | ------------------------------------------------ | ----------------------- |
+| | एक नया bot.py फ़ाइल प्रस्तुत की गई है और दस्तावेज़ित है | एक नई बॉट फ़ाइल प्रस्तुत की गई है लेकिन इसमें बग्स हैं | एक फ़ाइल प्रस्तुत नहीं की गई |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/6-NLP/3-Translation-Sentiment/README.md b/translations/hi/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..b3a657527
--- /dev/null
+++ b/translations/hi/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# मशीन लर्निंग के साथ अनुवाद और भाव विश्लेषण
+
+पिछले पाठों में आपने सीखा कि कैसे एक बेसिक बॉट बनाना है, जो `TextBlob` का उपयोग करता है, जो कि एक लाइब्रेरी है जो बेसिक एनएलपी कार्यों जैसे कि संज्ञा वाक्यांश निष्कर्षण को पूरा करने के लिए पर्दे के पीछे एमएल को एम्बेड करती है। कम्प्यूटेशनल लिंग्विस्टिक्स में एक और महत्वपूर्ण चुनौती एक भाषा से दूसरी भाषा में वाक्य का सटीक _अनुवाद_ करना है।
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+अनुवाद एक बहुत ही कठिन समस्या है क्योंकि दुनिया में हजारों भाषाएँ हैं और प्रत्येक के व्याकरणिक नियम बहुत अलग हो सकते हैं। एक दृष्टिकोण यह है कि एक भाषा, जैसे अंग्रेज़ी, के औपचारिक व्याकरण नियमों को एक गैर-भाषा-निर्भर संरचना में परिवर्तित किया जाए, और फिर इसे लक्षित भाषा में परिवर्तित करके अनुवाद किया जाए। इस दृष्टिकोण में आप निम्नलिखित चरणों का पालन करेंगे:
+
+1. **पहचान**। इनपुट भाषा के शब्दों को संज्ञा, क्रिया आदि में पहचानें या टैग करें।
+2. **अनुवाद बनाएं**। प्रत्येक शब्द का लक्षित भाषा प्रारूप में सीधा अनुवाद तैयार करें।
+
+### उदाहरण वाक्य, अंग्रेजी से आयरिश
+
+'अंग्रेजी' में, वाक्य _I feel happy_ तीन शब्दों में है:
+
+- **विषय** (I)
+- **क्रिया** (feel)
+- **विशेषण** (happy)
+
+हालांकि, 'आयरिश' भाषा में, वही वाक्य बहुत अलग व्याकरणिक संरचना में होता है - भावनाओं जैसे "*happy*" या "*sad*" को *आप पर* होने के रूप में व्यक्त किया जाता है।
+
+आयरिश में अंग्रेजी वाक्य `I feel happy` होगा `Tá athas orm`। एक *शाब्दिक* अनुवाद होगा `Happy is upon me`।
+
+एक आयरिश वक्ता जो अंग्रेजी में अनुवाद कर रहा है, वह कहेगा `I feel happy`, न कि `Happy is upon me`, क्योंकि वे वाक्य का अर्थ समझते हैं, भले ही शब्द और वाक्य संरचना अलग हों।
+
+आयरिश में वाक्य के औपचारिक क्रम हैं:
+
+- **क्रिया** (Tá या is)
+- **विशेषण** (athas, या happy)
+- **विषय** (orm, या upon me)
+
+## अनुवाद
+
+एक नासमझ अनुवाद कार्यक्रम केवल शब्दों का अनुवाद कर सकता है, वाक्य संरचना को नजरअंदाज करते हुए।
+
+✅ यदि आपने वयस्क के रूप में दूसरी (या तीसरी या अधिक) भाषा सीखी है, तो आपने अपनी मातृभाषा में सोचने से शुरुआत की होगी, एक अवधारणा को शब्द दर शब्द दूसरी भाषा में अनुवाद किया होगा, और फिर अपने अनुवाद को बोलने की कोशिश की होगी। यह वही है जो नासमझ अनुवाद कंप्यूटर प्रोग्राम कर रहे हैं। यह चरण पार करना महत्वपूर्ण है ताकि आप भाषा में प्रवीणता प्राप्त कर सकें!
+
+नासमझ अनुवाद खराब (और कभी-कभी हास्यास्पद) गलत अनुवाद की ओर ले जाता है: `I feel happy` का शाब्दिक अनुवाद `Mise bhraitheann athas` में आयरिश में होता है। इसका मतलब (शाब्दिक रूप से) है `me feel happy` और यह एक मान्य आयरिश वाक्य नहीं है। भले ही अंग्रेजी और आयरिश दो निकटवर्ती द्वीपों पर बोली जाने वाली भाषाएं हैं, वे बहुत अलग भाषाएं हैं जिनकी व्याकरणिक संरचनाएं अलग हैं।
+
+> आप आयरिश भाषाई परंपराओं के बारे में कुछ वीडियो देख सकते हैं जैसे [यह एक](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### मशीन लर्निंग दृष्टिकोण
+
+अब तक, आपने प्राकृतिक भाषा प्रसंस्करण के औपचारिक-नियम दृष्टिकोण के बारे में सीखा है। एक और दृष्टिकोण यह है कि शब्दों के अर्थ को नजरअंदाज कर दिया जाए और _इसके बजाय पैटर्न का पता लगाने के लिए मशीन लर्निंग का उपयोग किया जाए_। यदि आपके पास मूल और लक्षित दोनों भाषाओं में बहुत सारा टेक्स्ट (एक *कॉर्पस*) या कई टेक्स्ट (*कॉर्पोरा*) हैं, तो यह अनुवाद में काम कर सकता है।
+
+उदाहरण के लिए, *Pride and Prejudice* के मामले पर विचार करें, जो 1813 में जेन ऑस्टेन द्वारा लिखा गया एक प्रसिद्ध अंग्रेजी उपन्यास है। यदि आप पुस्तक को अंग्रेजी में और पुस्तक का मानव अनुवाद *फ्रेंच* में देखें, तो आप एक में ऐसे वाक्यांशों का पता लगा सकते हैं जो दूसरे में _मुहावरेदार_ रूप से अनुवादित हैं। आप इसे थोड़ी देर में करेंगे।
+
+उदाहरण के लिए, जब `I have no money` जैसे अंग्रेजी वाक्यांश का शाब्दिक रूप से फ्रेंच में अनुवाद किया जाता है, तो यह `Je n'ai pas de monnaie` बन सकता है। "Monnaie" एक पेचीदा फ्रेंच 'false cognate' (झूठा सजाति) है, क्योंकि 'money' और 'monnaie' पर्यायवाची नहीं हैं। एक बेहतर अनुवाद, जो एक मानव कर सकता है, `Je n'ai pas d'argent` होगा, क्योंकि यह इस अर्थ को बेहतर व्यक्त करता है कि आपके पास पैसे नहीं हैं (न कि 'चिल्लर' या 'खुले पैसे', जो 'monnaie' का अर्थ है)।
+
+
+
+> छवि [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+यदि एक एमएल मॉडल के पास पर्याप्त मानव अनुवाद हैं जिन पर एक मॉडल बनाया जा सके, तो यह उन सामान्य पैटर्नों की पहचान करके अनुवाद की सटीकता को सुधार सकता है जो पहले दोनों भाषाओं के विशेषज्ञ मानव वक्ताओं द्वारा अनुवादित टेक्स्ट में पाए गए हैं।
+
+### अभ्यास - अनुवाद
+
+आप वाक्यों का अनुवाद करने के लिए `TextBlob` का उपयोग कर सकते हैं। **Pride and Prejudice** की प्रसिद्ध पहली पंक्ति को आजमाएं:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` does a pretty good job at the translation: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+It can be argued that TextBlob's translation is far more exact, in fact, than the 1932 French translation of the book by V. Leconte and Ch. Pressoir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+In this case, the translation informed by ML does a better job than the human translator who is unnecessarily putting words in the original author's mouth for 'clarity'.
+
+> What's going on here? and why is TextBlob so good at translation? Well, behind the scenes, it's using Google Translate, a sophisticated AI able to parse millions of phrases to predict the best strings for the task at hand. There's nothing manual going on here and you need an internet connection to use `blob.translate`.
+
+✅ Try some more sentences. Which is better, ML or human translation? In which cases?
+
+## Sentiment analysis
+
+Another area where machine learning can work very well is sentiment analysis. A non-ML approach to sentiment is to identify words and phrases which are 'positive' and 'negative'. Then, given a new piece of text, calculate the total value of the positive, negative and neutral words to identify the overall sentiment.
+
+This approach is easily tricked as you may have seen in the Marvin task - the sentence `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` is a sarcastic, negative sentiment sentence, but the simple algorithm detects 'great', 'wonderful', 'glad' as positive and 'waste', 'lost' and 'dark' as negative. The overall sentiment is swayed by these conflicting words.
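+
+To make the failure mode concrete, here is a minimal sketch of that naive word-counting approach (the tiny positive/negative lexicons are hypothetical, just for illustration):
+
+```python
+# A deliberately naive lexicon-based sentiment scorer (illustrative only)
+POSITIVE = {"great", "wonderful", "glad"}
+NEGATIVE = {"waste", "lost", "dark"}
+
+def naive_sentiment(text):
+    words = [w.strip(",.!?'\"").lower() for w in text.split()]
+    # +1 for each positive word, -1 for each negative word
+    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
+
+sentence = "Great, that was a wonderful waste of time, I'm glad we are lost on this dark road"
+print(naive_sentiment(sentence))  # 0: the conflicting words cancel out and the sarcasm is invisible
+```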
+
+✅ Stop a second and think about how you convey sarcasm as a human speaker. Tone inflection plays a big role. Try to say the phrase "Well, that film was awesome" in different ways to discover how your voice conveys meaning.
+
+### ML approaches
+
+The ML approach would be to manually gather negative and positive bodies of text - tweets, or movie reviews, or anything where the human has given a score *and* a written opinion. Then NLP techniques can be applied to the opinions and scores, so that patterns emerge (e.g., positive movie reviews tend to have the phrase 'Oscar worthy' more than negative movie reviews, or positive restaurant reviews say 'gourmet' much more than 'disgusting').
+
+> ⚖️ **Example**: Suppose you worked in a politician's office and some new law was being debated. Constituents might write to the office with emails supporting or opposing the particular new law. Let's say you are tasked with reading the emails and sorting them into 2 piles, *for* and *against*. If there were a lot of emails, you might be overwhelmed attempting to read them all. Wouldn't it be nice if a bot could read them all for you, understand them and tell you in which pile each email belonged?
+>
+> One way to achieve that is to use machine learning. You would train the model with a portion of the *against* emails and a portion of the *for* emails. The model would tend to associate phrases and words with the against side and the for side, *but it would not understand any of the content*, only that certain words and patterns were more likely to appear in an *against* or a *for* email. You could test it with some emails that you had not used to train the model, and see if it came to the same conclusion as you did. Then, once you were happy with the accuracy of the model, you could process future emails without having to read each one.
+
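+As a sketch of that workflow, with hypothetical toy emails and assuming scikit-learn is available, a bag-of-words representation plus a Naive Bayes classifier is enough to illustrate the idea:
+
+```python
+from sklearn.feature_extraction.text import CountVectorizer
+from sklearn.naive_bayes import MultinomialNB
+
+# Hypothetical toy training data: 1 = 'for' the law, 0 = 'against' it
+emails = [
+    "I fully support this bill, please vote yes",
+    "This law will help our community, I am for it",
+    "I am strongly against this proposal, vote no",
+    "This bill is a mistake and will hurt us, oppose it",
+]
+labels = [1, 1, 0, 0]
+
+# The model only learns word/label co-occurrence; it understands nothing
+vectorizer = CountVectorizer()
+X = vectorizer.fit_transform(emails)
+model = MultinomialNB().fit(X, labels)
+
+# Sort an unseen email into a pile
+new_email = ["please vote yes on this new bill"]
+print(model.predict(vectorizer.transform(new_email)))  # expected: [1], the 'for' pile
+```
+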
+✅ Does this process sound like processes you have used in previous lessons?
+
+## Exercise - sentimental sentences
+
+Sentiment is measured with a *polarity* of -1 to 1, where -1 is the most negative sentiment, and 1 the most positive. Sentiment is also measured with a 0 - 1 score for objectivity (0) and subjectivity (1).
+
+Take another look at Jane Austen's *Pride and Prejudice*. The text is available here at [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm). The sample below shows a short program which analyses the sentiment of the first and last sentences from the book and displays their sentiment polarity and subjectivity/objectivity scores.
+
+You should use the `TextBlob` library (described above) to determine `sentiment` (you do not have to write your own sentiment calculator) in the following task.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+You see the following output:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Challenge - check sentiment polarity
+
+Your task is to determine, using sentiment polarity, whether *Pride and Prejudice* has more absolutely positive sentences than absolutely negative ones. For this task, you may assume that a polarity score of 1 or -1 is absolutely positive or negative respectively.
+
+**Steps:**
+
+1. Download a [copy of Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) from Project Gutenberg as a .txt file. Remove the metadata at the start and end of the file, leaving only the original text
+2. Open the file in Python and extract the contents as a string
+3. Create a TextBlob using the book string
+4. Analyse each sentence in the book in a loop
+   1. If the polarity is 1 or -1, store the sentence in an array or list of positive or negative messages
+5. At the end, print out all the positive sentences and negative sentences (separately) and the number of each (a rough sketch of these steps follows below)
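+
+Here is a minimal sketch of those steps, assuming the Gutenberg text (metadata stripped) has been saved locally as `pride.txt` (a hypothetical filename); the sample solution linked below gives the full treatment:
+
+```python
+from textblob import TextBlob
+
+# Load the book text into a TextBlob
+with open("pride.txt", encoding="utf-8") as f:
+    book = TextBlob(f.read())
+
+positive_sentences, negative_sentences = [], []
+for sentence in book.sentences:
+    polarity = sentence.sentiment.polarity
+    if polarity == 1:
+        positive_sentences.append(str(sentence))
+    elif polarity == -1:
+        negative_sentences.append(str(sentence))
+
+print("Absolutely positive sentences:", len(positive_sentences))
+print("Absolutely negative sentences:", len(negative_sentences))
+```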
+
+Here is a sample [solution](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
+
+✅ Knowledge Check
+
+1. The sentiment is based on words used in the sentence, but does the code *understand* the words?
+2. Do you think the sentiment polarity is accurate, or in other words, do you *agree* with the scores?
+   1. In particular, do you agree or disagree with the absolute **positive** polarity of the following sentences?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+   2. The next 3 sentences were scored with an absolute positive sentiment, but on close reading, they are not positive sentences. Why did the sentiment analysis think they were positive sentences?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+   3. Do you agree or disagree with the absolute **negative** polarity of the following sentences?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ Any aficionado of Jane Austen will understand that she often uses her books to critique the more ridiculous aspects of English Regency society. Elizabeth Bennett, the main character in *Pride and Prejudice*, is a keen social observer (like the author) and her language is often heavily nuanced. Even Mr. Darcy (the love interest in the story) notes Elizabeth's playful and teasing use of language: "I have had the pleasure of your acquaintance long enough to know that you find great enjoyment in occasionally professing opinions which in fact are not your own."
+
+---
+
+## 🚀Challenge
+
+Can you make Marvin even better by extracting other features from the user input?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Review & Self Study
+
+There are many ways to extract sentiment from text. Think of the business applications that might make use of this technique. Think about how it can go awry. Read more about sophisticated enterprise-ready systems that analyze sentiment such as [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Test some of the Pride and Prejudice sentences above and see if it can detect nuance.
+
+## Assignment
+
+[Poetic license](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/3-Translation-Sentiment/assignment.md b/translations/hi/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..c83a7883a
--- /dev/null
+++ b/translations/hi/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# Poetic license
+
+## Instructions
+
+In [this notebook](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency) you can find over 500 Emily Dickinson poems previously analyzed for sentiment using Azure text analytics. Using this dataset, analyze it using the techniques described in the lesson. Does the suggested sentiment of a poem match the more sophisticated Azure service's decision? Why or why not, in your opinion? Does anything surprise you?
+
+## Rubric
+
+| Criteria | Exemplary                                                                   | Adequate                                               | Needs Improvement        |
+| -------- | --------------------------------------------------------------------------- | ------------------------------------------------------ | ------------------------ |
+|          | A notebook is presented with a solid analysis of the author's sample output | The notebook is incomplete or doesn't perform analysis | No notebook is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/hi/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..e402e584e
--- /dev/null
+++ b/translations/hi/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/hi/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..01b9baf62
--- /dev/null
+++ b/translations/hi/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/4-Hotel-Reviews-1/README.md b/translations/hi/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..208eb177b
--- /dev/null
+++ b/translations/hi/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,264 @@
+# Sentiment analysis with hotel reviews - processing the data
+
+In this section you will use the techniques in the previous lessons to do some exploratory data analysis of a large dataset. Once you have a good understanding of the usefulness of the various columns, you will learn:
+
+- how to remove the unnecessary columns
+- how to calculate some new data based on the existing columns
+- how to save the resulting dataset for use in the final challenge
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Introduction
+
+So far you've learned how text data is quite unlike numerical types of data. If it's text that was written or spoken by a human, it can be analyzed to find patterns and frequencies, sentiment and meaning. This lesson takes you into a real dataset with a real challenge: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**, which includes a [CC0: Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/). It was scraped from Booking.com from public sources. The creator of the dataset was Jiashen Liu.
+
+### Preparation
+
+You will need:
+
+* The ability to run .ipynb notebooks using Python 3
+* pandas
+* NLTK, [which you should install locally](https://www.nltk.org/install.html)
+* The dataset, which is available on Kaggle: [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). It is around 230 MB unzipped. Download it to the root `/data` folder associated with these NLP lessons.
+
+## Exploratory data analysis
+
+This challenge assumes that you are building a hotel recommendation bot using sentiment analysis and guest review scores. The dataset you will be using includes reviews of 1493 different hotels in 6 cities.
+
+Using Python, a dataset of hotel reviews, and NLTK's sentiment analysis you could find out:
+
+* What are the most frequently used words and phrases in reviews?
+* Do the official *tags* describing a hotel correlate with review scores (e.g. are there more negative reviews for a particular hotel from *Family with young children* than from *Solo traveler*, perhaps indicating it is better for *solo travelers*?)
+* Do the NLTK sentiment scores 'agree' with the numerical score of the hotel reviewer?
+
+#### Dataset
+
+Let's explore the dataset you've downloaded and saved locally. Open the file in an editor like VS Code or even Excel.
+
+The headers in the dataset are as follows:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+Here they are grouped in a way that might be easier to examine:
+
+##### Hotel columns
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (latitude), `lng` (longitude)
+  * Using *lat* and *lng* you could plot a map with Python showing the hotel locations (perhaps color coded for negative and positive reviews); a quick sketch follows below
+  * Hotel_Address is not obviously useful to us, and we'll probably replace that with a country for easier sorting & searching
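+
+For instance, a quick plotting sketch might look like this (matplotlib is assumed to be installed; the CSV path matches the download step above):
+
+```python
+import matplotlib.pyplot as plt
+import pandas as pd
+
+# Sketch: plot every hotel's location from the lat/lng columns
+df = pd.read_csv("../../data/Hotel_Reviews.csv")
+plt.scatter(df["lng"], df["lat"], s=1, alpha=0.3)
+plt.xlabel("Longitude")
+plt.ylabel("Latitude")
+plt.title("Hotel locations in the dataset")
+plt.show()
+```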
+
+**Hotel Meta-review columns**
+
+* `Average_Score`
+  * According to the dataset creator, this column is the *Average Score of the hotel, calculated based on the latest comment in the last year*. This seems like an unusual way to calculate the score, but it is the data scraped, so we may take it at face value for now.
+
+  ✅ Based on the other columns in this data, can you think of another way to calculate the average score?
+
+* `Total_Number_of_Reviews`
+  * The total number of reviews this hotel has received - it is not clear (without writing some code) if this refers to the reviews in the dataset.
+* `Additional_Number_of_Scoring`
+  * This means a review score was given but no positive or negative review was written by the reviewer
+
+**Review columns**
+
+- `Reviewer_Score`
+  - This is a numerical value with at most 1 decimal place between the min and max values 2.5 and 10
+  - It is not explained why 2.5 is the lowest possible score
+- `Negative_Review`
+  - If a reviewer wrote nothing, this field will have "**No Negative**"
+  - Note that a reviewer may write a positive review in the Negative review column (e.g. "there is nothing bad about this hotel")
+- `Review_Total_Negative_Word_Counts`
+  - Higher negative word counts indicate a lower score (without checking the sentimentality)
+- `Positive_Review`
+  - If a reviewer wrote nothing, this field will have "**No Positive**"
+  - Note that a reviewer may write a negative review in the Positive review column (e.g. "there is nothing good about this hotel at all")
+- `Review_Total_Positive_Word_Counts`
+  - Higher positive word counts indicate a higher score (without checking the sentimentality)
+- `Review_Date` and `days_since_review`
+  - A freshness or staleness measure could be applied to a review (older reviews might not be as accurate as newer ones because hotel management changed, renovations were done, a pool was added, etc.)
+- `Tags`
+  - These are short descriptors that a reviewer may select to describe the type of guest they were (e.g. solo or family), the type of room they had, the length of stay and how the review was submitted.
+  - Unfortunately, using these tags is problematic; see the section below which discusses their usefulness
+
+**Reviewer columns**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+  - This might be a factor in a recommendation model, for instance if you could determine that more prolific reviewers with hundreds of reviews were more likely to be negative rather than positive. However, the reviewer of any particular review is not identified with a unique code, and therefore cannot be linked to a set of reviews. There are 30 reviewers with 100 or more reviews, but it's hard to see how this can aid the recommendation model.
+- `Reviewer_Nationality`
+  - Some people might think that certain nationalities are more likely to give a positive or negative review because of a national inclination. Be careful building such anecdotal views into your models. These are national (and sometimes racial) stereotypes, and each reviewer was an individual who wrote a review based on their experience. It may have been filtered through many lenses such as their previous hotel stays, the distance travelled, and their personal temperament. Thinking that their nationality was the reason for a review score is hard to justify.
+
+##### Examples
+
+| Average Score | Total Number Reviews | Reviewer Score | Negative Review | Positive Review | Tags |
+| ------------- | -------------------- | -------------- | :-------------- | --------------- | ---- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day i e with jackhammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave a very early flight and received an appropriate bill A day later the hotel made another charge without my consent in excess of the booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place Stay away | Business trip Couple Standard Double Room Stayed 2 nights |
+
+As you can see, this guest did not have a happy stay at this hotel. The hotel has a good average score of 7.8 and 1945 reviews, but this reviewer gave it 2.5 and wrote 115 words about how negative their stay was. If they wrote nothing at all in the Positive_Review column, you might surmise there was nothing positive, but alas they wrote 7 words of warning. If we just counted words instead of the meaning or sentiment of the words, we might have a skewed view of the reviewer's intent. Strangely, their score of 2.5 is confusing, because if that hotel stay was so bad, why give it any points at all? Investigating the dataset closely, you'll see that the lowest possible score is 2.5, not 0. The highest possible score is 10.
+
+##### Tags
+
+As mentioned above, at first glance, the idea of using `Tags` to categorize the data makes sense. Unfortunately these tags are not standardized, which means that in a given hotel, the options might be *Single room*, *Twin room*, and *Double room*, but in the next hotel, they are *Deluxe Single Room*, *Classic Queen Room*, and *Executive King Room*. These might be the same things, but there are so many variations that the choice becomes:
+
+1. Attempt to change all terms to a single standard, which is very difficult, because it is not clear what the conversion path would be in each case (e.g. *Classic single room* maps to *Single room* but *Superior Queen Room with Courtyard Garden or City View* is much harder to map)
+
+1. We can take an NLP approach and measure the frequency of certain terms like *Solo*, *Business Traveller*, or *Family with young kids* as they apply to each hotel, and factor that into the recommendation
+
+Tags are usually (but not always) a single field containing a list of 5 to 6 comma separated values aligning to *Type of trip*, *Type of guests*, *Type of room*, *Number of nights*, and *Type of device the review was submitted on*. However, because some reviewers don't fill in each field (they might leave one blank), the values are not always in the same order.
+
+As an example, take *Type of group*. There are 1025 unique possibilities in this field in the `Tags` column, and unfortunately only some of them refer to a group (some are the type of room etc.). If you filter only the ones that mention family, the results contain many *Family room* type results. If you include the term *with*, i.e. count the *Family with* values, the results are better, with over 80,000 of the 515,000 results containing the phrase "Family with young children" or "Family with older children".
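+
+A small sketch of that count, assuming the dataset has been loaded into a pandas dataframe `df` (as in the code answers below):
+
+```python
+# Count reviews whose Tags value mentions the phrase "Family with"
+family_with = df.Tags.str.contains("Family with").sum()
+print(str(family_with) + " of " + str(len(df)) + " reviews mention 'Family with'")
+```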
+
+This means the tags column is not completely useless to us, but it will take some work to make it useful.
+
+##### Average hotel score
+
+There are a number of oddities or discrepancies with the dataset that I can't figure out, but they are illustrated here so you are aware of them when building your models. If you figure them out, please let us know in the discussion section!
+
+The dataset has the following columns relating to the average score and number of reviews:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+The single hotel with the most reviews in this dataset is *Britannia International Hotel Canary Wharf* with 4789 reviews out of 515,000. But if we look at the `Total_Number_of_Reviews` value for this hotel, it is 9086. You might surmise that there are many more scores without reviews, so perhaps we should add in the `Additional_Number_of_Scoring` column value. That value is 2682, and adding it to 4789 gets us 7,471 which is still 1,615 short of the `Total_Number_of_Reviews`.
+
+If you take the `Average_Score` column, you might surmise it is the average of the reviews in the dataset, but the description from Kaggle is "*Average Score of the hotel, calculated based on the latest comment in the last year*". That doesn't seem so useful, but we can calculate our own average based on the review scores in the dataset. Using the same hotel as an example, the average hotel score is given as 7.1 but the calculated score (average reviewer score in the dataset) is 6.8. This is close, but not the same value, and we can only guess that the scores given in the `Additional_Number_of_Scoring` reviews increased the average to 7.1. Unfortunately, with no way to test or prove that assertion, it is difficult to use or trust `Average_Score`, `Additional_Number_of_Scoring` and `Total_Number_of_Reviews` when they are based on, or refer to, data we do not have.
+
+To complicate things further, the hotel with the second highest number of reviews has a calculated average score of 8.12 and the dataset `Average_Score` is 8.1. Is this correct score a coincidence or is the first hotel a discrepancy?
+
+On the possibility that these hotels might be outliers, and that maybe most of the values tally up (but some do not for some reason), we will write a short program next to explore the values in the dataset and determine the correct usage (or non-usage) of the values.
+
+> 🚨 A caution note
+>
+> When working with this dataset you will write code that calculates something from the text without having to read or analyse the text yourself. This is the essence of NLP, interpreting meaning or sentiment without having a human do it. However, it is possible that you will read some of the negative reviews. I would urge you not to, because you don't have to. Some of them are silly, or irrelevant negative hotel reviews, such as "The weather wasn't great", something beyond the control of the hotel, or indeed, anyone. But there is a dark side to some reviews too. Sometimes the negative reviews are racist, sexist, or ageist. This is unfortunate but to be expected in a dataset scraped from a public website. Some reviewers leave reviews that you would find distasteful, uncomfortable, or upsetting. Better to let the code measure the sentiment than read them yourself and be upset. That said, it is a minority that write such things, but they exist all the same.
+
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative"
+
+### Code answers
+
+1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns)
+
+   ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+   ```
+
+2. Calculate the frequency count for reviewer nationalities:
+
+   1. How many distinct values are there for the column `Reviewer_Nationality` and what are they?
+   2. What reviewer nationality is the most common in the dataset (print the country and the number of reviews)?
+
+   ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+   ```
+
+3. What are the next 10 most frequently found nationalities, and their frequency count?
+
+   ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+   ```
+
+4. What was the most frequently reviewed hotel for each of the top 10 reviewer nationalities?
+
+   ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+   ```
+
+5. How many reviews are there per hotel (frequency count of hotel) in the dataset?
+
+   ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+   ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   You may notice that the *counted in the dataset* results do not match the value in `Total_Number_of_Reviews`. It is not clear if this value in the dataset represented the total number of reviews the hotel had, of which not all were scraped, or some other calculation. `Total_Number_of_Reviews` is not used in the model because of this unclarity.
+
+6. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average. Print out the columns `Hotel_Name`, `Average_Score`, and `Calc_Average_Score`.
+
+   ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+   ```
+
+   You may also wonder about the `Average_Score` value and why it is sometimes different from the calculated average score. As we can't know why some of the values match, but others have a difference, it's safest in this case to use the review scores that we have to calculate the average ourselves. That said, the differences are usually very small; here are the hotels with the greatest deviation between the dataset average and the calculated average:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   With only 1 hotel having a score difference greater than 1, it means we can probably ignore the difference and use the calculated average score.
+
+7. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+
+9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative"
+
+   ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+   ```
+
+## Another way
+
+Another way to count the items without lambdas is to use sum to count the rows:
+
+   ```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+   ```
+
+You may have noticed that there are 127 rows that have both "No Negative" and "No Positive" values for the columns `Negative_Review` and `Positive_Review` respectively. That means that the reviewer gave the hotel a numerical score, but declined to write either a positive or negative review. Luckily this is a small number of rows (127 out of 515738, or 0.02%), so it probably won't skew our model or results in any particular direction, but you might not have expected a dataset of reviews to have rows with no reviews, so it's worth exploring the data to discover rows like this.
+
+Now that you have explored the dataset, in the next lesson you will filter the data and add some sentiment analysis.
+
+---
+
+## 🚀Challenge
+
+This lesson demonstrates, as we saw in previous lessons, how critically important it is to understand your data and its foibles before performing operations on it. Text-based data, in particular, bears careful scrutiny. Dig through various text-heavy datasets and see if you can discover areas that could introduce bias or skewed sentiment into a model.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Review & Self Study
+
+Take [this Learning Path on NLP](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) to discover tools to try when building speech and text-heavy models.
+
+## Assignment
+
+[NLTK](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/hi/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..58bf4dd44
--- /dev/null
+++ b/translations/hi/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instructions
+
+NLTK is a well-known library for use in computational linguistics and NLP. Take this opportunity to read through the '[NLTK book](https://www.nltk.org/book/)' and try out its exercises. In this ungraded assignment, you will get to know the library more deeply.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..f8ed09fd4
--- /dev/null
+++ b/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..9acee712b
--- /dev/null
+++ b/translations/hi/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/5-Hotel-Reviews-2/README.md b/translations/hi/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..1850399ae
--- /dev/null
+++ b/translations/hi/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Sentiment analysis with hotel reviews
+
+Now that you have explored the dataset in detail, it's time to filter the columns and then use NLP techniques on the dataset to gain new insights about the hotels.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Filtering & sentiment analysis operations
+
+As you've probably noticed, the dataset has a few issues. Some columns are filled with useless information, others seem incorrect. If they are correct, it's unclear how they were calculated, and the answers cannot be independently verified by your own calculations.
+
+## Exercise: a bit more data processing
+
+Clean the data just a bit more. Add columns that will be useful later, change the values in other columns, and drop certain columns completely.
+
+1. Initial column processing
+
+   1. Drop `lat` and `lng`
+
+   2. Replace the `Hotel_Address` values with the following values (if the address contains the city and the country, change it to just the city and the country).
+
+      These are the only cities and countries in the dataset:
+
+      Amsterdam, Netherlands
+
+      Barcelona, Spain
+
+      London, United Kingdom
+
+      Milan, Italy
+
+      Paris, France
+
+      Vienna, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+   Now you can query country level data:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+   | Hotel_Address          | Hotel_Name |
+   | :--------------------- | :--------: |
+   | Amsterdam, Netherlands |    105     |
+   | Barcelona, Spain       |    211     |
+   | London, United Kingdom |    400     |
+   | Milan, Italy           |    162     |
+   | Paris, France          |    458     |
+   | Vienna, Austria        |    158     |
+
+2. Process hotel meta-review columns
+
+   1. Drop `Additional_Number_of_Scoring`
+
+   1. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+   1. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+ df.Total_Number_of_Reviews = df.groupby('Hotel_Name').transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. Process review columns
+
+   1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+ 2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are,
+
+ 3. Keep `Tags` for now
+
+ - We'll be doing some additional filtering operations on the tags in the next section and then tags will be dropped
+
+4. Process reviewer columns
+
+ 1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+ 2. Keep `Reviewer_Nationality`
+
+### Tag columns
+
+The `Tag` column is problematic as it is a list (in text form) stored in the column. Unfortunately the order and number of sub sections in this column are not always the same. It's hard for a human to identify the correct phrases to be interested in, because there are 515,000 rows, and 1427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines. You can scan the text and find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6762646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful, because you've seen a sample of the tags such as `[' Business trip ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, and you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is - but first you need to follow a few steps to ascertain the tags of interest.
+
+### Filtering tags
+
+Remember that the goal of the dataset is to add sentiment and columns that will help you choose the best hotel (for yourself or maybe a client tasking you to make a hotel recommendation bot). You need to ask yourself if the tags are useful or not in the final dataset. Here is one interpretation (if you needed the dataset for other reasons, different tags might stay in/out of the selection):
+
+1. The type of trip is relevant, and that should stay
+2. The type of guest group is important, and that should stay
+3. The type of room, suite, or studio that the guest stayed in is irrelevant (all hotels have basically the same rooms)
+4. The device the review was submitted on is irrelevant
+5. The number of nights the reviewer stayed for *could* be relevant if you attributed longer stays with them liking the hotel more, but it's a stretch, and probably irrelevant
+
+In summary, **keep 2 kinds of tags and remove the others**.
+
+First, you don't want to count the tags until they are in a better format, which means removing the square brackets and quotes. You can do this several ways, but you want the fastest, as processing a lot of data can take a long time. Luckily, pandas has an easy way to do each of these steps.
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Each tag becomes something like: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 columns, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to our advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted in to the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column. Printing that out, you'll see there were 2428 unique tags. Here is a small sample:
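+
+A sketch of that idea in pandas (it assumes `df.Tags` already holds the cleaned, comma separated strings from the step above):
+
+```python
+# Split each cleaned Tags string into temporary columns, one per comma separated value
+tag_columns = df.Tags.str.split(",", expand=True)
+# Stack the temporary columns into one long Series, trim whitespace, and count each phrase
+tag_counts = tag_columns.stack().str.strip().value_counts()
+print(tag_counts)
+```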
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be a smart thing to remove them before counting phrase occurrence, but it is such a fast operation you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1, it reduces the total number of tags to be considered slightly. Note you do not remove them from the dataset, just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There are a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration.
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is the same as `Group` more or less, and that would be fair to combine the two as above. The code for identifying the correct tags is [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1, if not, add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business versus leisure, or to bring a pet to, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### Save your file
+
+Finally, save the dataset as it is now with a new name.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Sentiment analysis operations
+
+In this final section, you will apply sentiment analysis to the review columns and save the results in a dataset.
+
+## Exercise: load and save the filtered data
+
+Note that now you are loading the filtered dataset that was saved in the previous section, **not** the original dataset.
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Removing stop words
+
+If you were to run sentiment analysis on the Negative and Positive review columns, it could take a long time. Tested on a powerful test laptop with a fast CPU, it took 12 - 14 minutes depending on which sentiment library was used. That's a (relatively) long time, so it's worth investigating whether it can be speeded up.
+
+Removing stop words, or common English words that do not change the sentiment of a sentence, is the first step. By removing them, the sentiment analysis should run faster, but not be less accurate (as the stop words do not affect the sentiment, but they do slow down the analysis).
+
+The longest negative review was 395 words, but after removing the stop words, it is 195 words.
+
+Removing the stop words is also a fast operation: removing them from 2 review columns over 515,000 rows took 3.3 seconds on the test device. It could take slightly more or less time for you depending on your device's CPU speed, RAM, whether you have an SSD or not, and some other factors. The relative shortness of the operation means that if it improves the sentiment analysis time, then it is worth doing.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+```
+
+### Performing sentiment analysis
+
+Now you should calculate the sentiment analysis for both the negative and positive review columns, and store the result in 2 new columns. The test of the sentiment will be to compare it to the reviewer's score for the same review. For instance, if the sentiment thinks the negative review had a sentiment of 1 (extremely positive sentiment) and a positive review sentiment of 1, but the reviewer gave the hotel the lowest score possible, then either the review text doesn't match the score, or the sentiment analyser could not recognize the sentiment correctly. You should expect some sentiment scores to be completely wrong, and often that will be explainable, e.g. the review could be extremely sarcastic "Of course I LOVED sleeping in a room with no heating" and the sentiment analyser thinks that's positive sentiment, even though a human reading it would know it was sarcasm.
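+
+To make that comparison concrete (a minimal sketch, not part of the original lesson: it assumes the `Negative_Sentiment` and `Positive_Sentiment` columns computed further below have already been added to `df`), you could check how well sentiment tracks the reviewer's score:
+
+```python
+# Hypothetical sanity check: sentiment should broadly correlate with the reviewer's score.
+# Assumes the sentiment columns added later in this section already exist in df.
+print("Positive sentiment vs score:", df["Positive_Sentiment"].corr(df["Reviewer_Score"]))
+print("Negative sentiment vs score:", df["Negative_Sentiment"].corr(df["Reviewer_Score"]))
+```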
+
+NLTK supplies different sentiment analyzers to learn with, and you can substitute them and see if the sentiment is more or less accurate. The VADER sentiment analysis is used here.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+Later, in your program, when you are ready to calculate sentiment, you can apply it to each review as follows:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+This takes approximately 120 seconds on my computer, but it will vary on each computer. If you want to print the results and see if the sentiment matches the review:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+The very last thing to do with the file before using it in the challenge is to save it! You should also consider reordering all your new columns so they are easy to work with (for a human, it's a cosmetic change).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+You should run the entire code for [the analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) (after you've run [your filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+
+To review, the steps are:
+
+1. The original dataset file **Hotel_Reviews.csv** was explored in the previous lesson with [the explorer notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb), resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb), resulting in **Hotel_Reviews_NLP.csv**
+4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
+
+### Conclusion
+
+When you started, you had a dataset with columns and data, but not all of it could be verified or used. You've explored the data, filtered out what you didn't need, converted tags into something useful, calculated your own averages, added some sentiment columns and, hopefully, learned some interesting things about processing natural text.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Challenge
+
+Now that you have your dataset analyzed for sentiment, see if you can use strategies you've learned in this curriculum (clustering, perhaps?) to determine patterns around sentiment. A minimal sketch is given below.
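+
+As a starting point (a minimal sketch, not part of the original lesson: it assumes the Hotel_Reviews_NLP.csv file saved above and the column names created in this lesson; the choice of 3 clusters is arbitrary), you could group hotels by their average sentiment with scikit-learn's KMeans:
+
+```python
+import pandas as pd
+from sklearn.cluster import KMeans
+
+df = pd.read_csv("../data/Hotel_Reviews_NLP.csv")
+
+# Aggregate mean sentiment per hotel, then cluster the hotels into 3 groups
+hotel_sentiment = df.groupby("Hotel_Name")[["Negative_Sentiment", "Positive_Sentiment"]].mean()
+kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
+hotel_sentiment["cluster"] = kmeans.fit_predict(hotel_sentiment)
+print(hotel_sentiment.sort_values("cluster").head())
+```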
+
+## Review & Self Study
+
+Take [this Learn module](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) to learn more and use different tools to explore sentiment in text.
+
+## Assignment
+
+[Try a different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/hi/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..c7567d897
--- /dev/null
+++ b/translations/hi/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Try a different dataset
+
+## Instructions
+
+Now that you've learned about using NLTK to assign sentiment to text, try a different dataset. You'll probably need to do some data processing around it, so create a notebook and document your thought process. What do you discover?
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                          | Adequate                                  | Needs Improvement       |
+| -------- | ------------------------------------------------------------------------------------------------------------------ | ----------------------------------------- | ----------------------- |
+|          | A complete notebook and dataset are presented, with well-documented cells explaining how the sentiment is assigned | The notebook is missing good explanations | The notebook is flawed |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..69521982c
--- /dev/null
+++ b/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..01b9baf62
--- /dev/null
+++ b/translations/hi/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/README.md b/translations/hi/6-NLP/README.md
new file mode 100644
index 000000000..d6baa5ae2
--- /dev/null
+++ b/translations/hi/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Getting started with natural language processing
+
+Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written -- referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics. The whole field is directed at helping machines understand and process human language. This can then be used to perform tasks like spell check or machine translation. It has a variety of real-world applications in a number of fields, including medical research, search engines, and business intelligence.
+
+## Regional topic: European languages and literature and romantic hotels of Europe ❤️
+
+In this section of the curriculum, you will be introduced to one of the most widespread uses of machine learning: natural language processing (NLP). Derived from computational linguistics, this category of artificial intelligence is the bridge between humans and machines via voice or textual communication.
+
+In these lessons we'll learn the basics of NLP by building small conversational bots to learn how machine learning aids in making these conversations more and more 'smart'. You'll travel back in time, chatting with Elizabeth Bennett and Mr. Darcy from Jane Austen's classic novel, **Pride and Prejudice**, published in 1813. Then, you'll further your knowledge by learning about sentiment analysis via hotel reviews.
+
+
+> Photo by Elaine Howlin on Unsplash
+
+## Lessons
+
+1. [Introduction to natural language processing](1-Introduction-to-NLP/README.md)
+2. [Common NLP tasks and techniques](2-Tasks/README.md)
+3. [Translation and sentiment analysis with machine learning](3-Translation-Sentiment/README.md)
+4. [Preparing your data](4-Hotel-Reviews-1/README.md)
+5. [NLTK for sentiment analysis](5-Hotel-Reviews-2/README.md)
+
+## Credits
+
+These natural language processing lessons were written with ☕ by [Stephen Howell](https://twitter.com/Howell_MSFT)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/6-NLP/data/README.md b/translations/hi/6-NLP/data/README.md
new file mode 100644
index 000000000..2d89e051f
--- /dev/null
+++ b/translations/hi/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/1-Introduction/README.md b/translations/hi/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..4e73881c7
--- /dev/null
+++ b/translations/hi/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Introduction to time series forecasting
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this lesson and the following one, you will learn a bit about time series forecasting, an interesting and valuable part of a ML scientist's repertoire that is a bit less known than other topics. Time series forecasting is a sort of 'crystal ball': based on past performance of a variable such as price, you can predict its future potential value.
+
+[](https://youtu.be/cBojo1hsHiI "Introduction to time series forecasting")
+
+> 🎥 Click the image above for a video about time series forecasting
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+It's a useful and interesting field with real value to business, given its direct application to problems of pricing, inventory, and supply chain issues. While deep learning techniques have started to be used to gain more insights to better predict future performance, time series forecasting remains a field greatly informed by classic ML techniques.
+
+> Penn State's useful time series curriculum can be found [here](https://online.stat.psu.edu/stat510/lesson/1)
+
+## Introduction
+
+Suppose you maintain an array of smart parking meters that provide data about how often they are used and for how long over time.
+
+> What if you could predict, based on the meter's past performance, its future value according to the laws of supply and demand?
+
+Accurately predicting when to act so as to achieve your goal is a challenge that could be tackled by time series forecasting. It wouldn't make folks happy to be charged more in busy times when they're looking for a parking spot, but it would be a sure way to generate revenue to clean the streets!
+
+Let's explore some of the types of time series algorithms and start a notebook to clean and prepare some data. The data you will analyze is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. Given the historical patterns of electricity load and temperature, you can predict future values of electricity load.
+
+In this example, you'll learn how to forecast one time step ahead, using historical load data only. Before starting, however, it's useful to understand what's going on behind the scenes.
+
+## Some definitions
+
+When encountering the term 'time series' you need to understand its use in several different contexts.
+
+🎓 **Time series**
+
+In mathematics, "a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time." An example of a time series is the daily closing value of the [Dow Jones Industrial Average](https://wikipedia.org/wiki/Time_series). The use of time series plots and statistical modeling is frequently encountered in signal processing, weather forecasting, earthquake prediction, and other fields where events occur and data points can be plotted over time.
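+
+As a minimal illustration (a sketch, not from the original lesson), here is what such a series looks like in code, as a pandas Series indexed by equally spaced timestamps:
+
+```python
+import pandas as pd
+
+# A tiny hypothetical time series: one value per day, indexed in time order
+index = pd.date_range("2024-01-01", periods=5, freq="D")
+series = pd.Series([101.2, 102.5, 101.9, 103.4, 104.0], index=index)
+print(series)
+```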
+
+🎓 **Time series analysis**
+
+Time series analysis is the analysis of the above mentioned time series data. Time series data can take distinct forms, including 'interrupted time series' which detects patterns in a time series' evolution before and after an interrupting event. The type of analysis needed depends on the nature of the data. Time series data itself can take the form of series of numbers or characters.
+
+The analysis to be performed uses a variety of methods, including frequency-domain and time-domain, linear and nonlinear, and more. [Learn more](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm) about the many ways to analyze this type of data.
+
+🎓 **Time series forecasting**
+
+Time series forecasting is the use of a model to predict future values based on patterns displayed by previously gathered data as it occurred in the past. While it is possible to use regression models to explore time series data, with time indices as x variables on a plot, such data is best analyzed using special types of models, since time series data is a list of ordered observations, unlike data that can be analyzed by plain linear regression. The most common one is ARIMA, an acronym that stands for "Autoregressive Integrated Moving Average".
+
+[ARIMA models](https://online.stat.psu.edu/stat510/lesson/1/1.1) "relate the present value of a series to past values and past prediction errors." They are most appropriate for analyzing time-domain data, where data is ordered over time.
+
+> There are several types of ARIMA models, which you can learn about [here](https://people.duke.edu/~rnau/411arim.htm) and which you will touch on in the next lesson.
+
+In the next lesson, you will build an ARIMA model using [Univariate Time Series](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm), which focuses on one variable that changes its value over time. An example of this type of data is [this dataset](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm) that records the monthly CO2 concentration at the Mauna Loa Observatory:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Identify the variable that changes over time in this dataset
+
+## Time series data characteristics to consider
+
+When looking at time series data, you might notice that it has [certain characteristics](https://online.stat.psu.edu/stat510/lesson/1/1.1) that you need to take into account and mitigate to better understand its patterns. If you consider time series data as potentially providing a 'signal' that you want to analyze, these characteristics can be thought of as 'noise'. You will often need to reduce this 'noise' by offsetting some of these characteristics using some statistical techniques.
+
+Here are some concepts you should know to be able to work with time series:
+
+🎓 **Trends**
+
+Trends are defined as measurable increases and decreases over time. [Read more](https://machinelearningmastery.com/time-series-trends-in-python). In the context of time series, it's about how to use and, if necessary, remove trends from your time series; a short sketch of visualizing one follows below.
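+
+For instance (a minimal sketch, assuming the hourly `energy` DataFrame loaded in the exercise below), overlaying a rolling mean on the raw series can make a trend visible:
+
+```python
+import matplotlib.pyplot as plt
+
+# Assumes `energy` is the hourly load DataFrame loaded in the exercise below
+energy["load"].plot(alpha=0.4, figsize=(15, 6), label="hourly load")
+energy["load"].rolling(window=168).mean().plot(label="7-day rolling mean")  # 168 hours = 7 days
+plt.legend()
+plt.show()
+```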
+
+🎓 **[Seasonality](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Seasonality is defined as periodic fluctuations, such as holiday rushes that might affect sales, for example. [Take a look](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm) at how different types of plots display seasonality in data.
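+
+To isolate seasonality programmatically, a classical decomposition is one option (a sketch, again assuming the hourly `energy` DataFrame from the exercise below; `period=24` assumes a daily cycle):
+
+```python
+import matplotlib.pyplot as plt
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# Decompose one month of hourly load into trend, daily seasonal, and residual parts
+result = seasonal_decompose(energy.loc["2014-07", "load"], model="additive", period=24)
+result.plot()
+plt.show()
+```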
+
+🎓 **Outliers**
+
+Outliers are far away from the standard data variance.
+
+🎓 **Long-run cycle**
+
+Independent of seasonality, data might display a long-run cycle such as an economic downturn that lasts longer than a year.
+
+🎓 **Constant variance**
+
+Over time, some data display constant fluctuations, such as energy usage per day and night.
+
+🎓 **Abrupt changes**
+
+The data might display an abrupt change that might need further analysis. The abrupt shuttering of businesses due to COVID, for example, caused changes in the data.
+
+✅ Here is a [sample time series plot](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python) showing daily in-game currency spent over a few years. Can you identify any of the characteristics listed above in this data?
+
+
+
+## Exercise - getting started with power usage data
+
+Let's get started creating a time series model to predict future power usage given past usage.
+
+> The data in this example is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. In the `working` folder of this lesson, open the _notebook.ipynb_ file. Start by adding libraries that will help you load and visualize the data
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+ Note, you are using the files from the included `common` folder which set up your environment and handle downloading the data.
+
+2. Next, examine the data as a dataframe by calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ You can see that there are two columns representing date and load:
+
+ |                     |  load  |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Now, plot the data by calling `plot()`:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Now, plot the first week of July 2014, by providing it to `energy` as input in the `[from date]: [to date]` pattern:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ A beautiful plot! Take a look at these plots and see if you can determine any of the characteristics listed above. What can we surmise by visualizing the data?
+
+In the next lesson, you will create an ARIMA model to create some forecasts.
+
+---
+
+## 🚀Challenge
+
+Make a list of all the industries and areas of inquiry you can think of that would benefit from time series forecasting. Can you think of an application of these techniques in the arts? In econometrics? Ecology? Retail? Industry? Finance? Where else?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Review & Self Study
+
+Although we won't cover them here, neural networks are sometimes used to enhance classic methods of time series forecasting. Read more about them [in this article](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## Assignment
+
+[Visualize some more time series](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/1-Introduction/assignment.md b/translations/hi/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..0d2134cc6
--- /dev/null
+++ b/translations/hi/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Visualize some more time series
+
+## Instructions
+
+You've begun to learn about time series forecasting by looking at the type of data that requires this special modeling. You've visualized some data around energy. Now, look around for some other data that would benefit from time series forecasting. Find three examples (try [Kaggle](https://kaggle.com) and [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) and create a notebook to visualize them. Note any special characteristics they have (seasonality, abrupt changes, or other trends) in the notebook.
+
+## Rubric
+
+| Criteria | Exemplary                                              | Adequate                                             | Needs Improvement                                                                         |
+| -------- | ------------------------------------------------------ | ---------------------------------------------------- | ----------------------------------------------------------------------------------------- |
+|          | Three datasets are plotted and explained in a notebook | Two datasets are plotted and explained in a notebook | Few datasets are plotted or explained in a notebook or the data presented is insufficient |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/hi/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..a4c41c9c2
--- /dev/null
+++ b/translations/hi/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/hi/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..80f2072eb
--- /dev/null
+++ b/translations/hi/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/2-ARIMA/README.md b/translations/hi/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..cf1a3ebf4
--- /dev/null
+++ b/translations/hi/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,397 @@
+# Time series forecasting with ARIMA
+
+In the previous lesson, you learned a bit about time series forecasting and loaded a dataset showing the fluctuations of electrical load over a time period.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Click the image above for a video: A brief introduction to ARIMA models. The example is done in R, but the concepts are universal.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average). ARIMA models are especially suited to fit data that shows [non-stationarity](https://wikipedia.org/wiki/Stationary_process).
+
+## General concepts
+
+To be able to work with ARIMA, there are some concepts you need to know about:
+
+- 🎓 **Stationarity**. From a statistical context, stationarity refers to data whose distribution does not change when shifted in time. Non-stationary data, then, shows fluctuations due to trends that must be transformed to be analyzed. Seasonality, for example, can introduce fluctuations in data and can be eliminated by a process of 'seasonal-differencing'.
+
+- 🎓 **[Differencing](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. Differencing data, again from a statistical context, refers to the process of transforming non-stationary data to make it stationary. "Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series." [Paper by Shixiong et al](https://arxiv.org/abs/1904.07632). A short code sketch of both ideas follows this list.
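+
+As an illustration of both ideas (a minimal sketch on hypothetical data, not from the original lesson), you can check stationarity with the Augmented Dickey-Fuller test from `statsmodels` and apply first-order differencing with pandas:
+
+```python
+import numpy as np
+import pandas as pd
+from statsmodels.tsa.stattools import adfuller
+
+# Hypothetical trending series: a linear trend plus small noise (non-stationary)
+rng = np.random.default_rng(0)
+series = pd.Series(np.arange(60, dtype=float) + rng.normal(0, 1, 60))
+
+# ADF test: a high p-value means we cannot reject the null of non-stationarity
+print("p-value before differencing:", adfuller(series)[1])
+
+# First-order differencing: each value minus the previous one
+differenced = series.diff().dropna()
+print("p-value after differencing:", adfuller(differenced)[1])
+```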
+
+## ARIMA in the context of time series
+
+Let's unpack the parts of ARIMA to better understand how it helps us model time series and make predictions against it.
+
+- **AR - for AutoRegressive**. Autoregressive models, as the name implies, look 'back' in time to analyze previous values in your data and make assumptions about them. These previous values are called 'lags'. An example would be data that shows monthly sales of pencils. Each month's sales total would be considered an 'evolving variable' in the dataset. This model is built as the "evolving variable of interest is regressed on its own lagged (i.e., prior) values." [Wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - for Integrated**. As opposed to the similar 'ARMA' models, the 'I' in ARIMA refers to its *[integrated](https://wikipedia.org/wiki/Order_of_integration)* aspect. The data is 'integrated' when differencing steps are applied so as to eliminate non-stationarity.
+
+- **MA - for Moving Average**. The [moving-average](https://wikipedia.org/wiki/Moving-average_model) aspect of this model refers to the output variable that is determined by observing the current and past values of lags.
+
+Bottom line: ARIMA is used to make a model fit the special form of time series data as closely as possible.
+
+## Exercise - build an ARIMA model
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) file.
+
+1. Run the notebook to load the `statsmodels` Python library; you will need this for ARIMA models.
+
+1. Load necessary libraries
+
+1. Now, load up several more libraries useful for plotting the data:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Plot all the available energy data from January 2012 to December 2014. There should be no surprises as we saw this data in the last lesson:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ Now, let's build a model!
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. You'll train your model on the train set. As usual, after the model has finished training, you'll evaluate its accuracy using the test set. You need to ensure that the test set covers a later period in time from the training set, to ensure that the model does not gain information from future time periods.
+
+1. Set the training window to start on November 1, 2014 and the test window to start on December 30, 2014, matching the dates defined in the code below:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ Since this data reflects the daily consumption of energy, there is a strong seasonal pattern, but the consumption is most similar to the consumption in more recent days.
+
+1. Visualize the differences:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Therefore, using a relatively small window of time for training the data should be sufficient.
+
+ > Note: Since the function we use to fit the ARIMA model uses in-sample validation during fitting, we will omit validation data.
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale it to ensure the data is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only the needed column 'load' plus the date:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ You can see the shape of the data:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. Scale the data to be in the range (0, 1).
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Visualize the original vs. scaled data:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > The original data
+
+ 
+
+ > The scaled data
+
+1. Now that you have calibrated the scaled data, you can scale the test data:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### Implement ARIMA
+
+It's time to implement ARIMA! You'll now use the `statsmodels` library that you installed earlier.
+
+Now you need to follow several steps
+
+ 1. Define the model by calling `SARIMAX()` and passing in the model parameters: p, d, and q parameters, and P, D, and Q parameters.
+ 2. Prepare the model for the training data by calling the fit() function.
+ 3. Make predictions calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+`p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+`d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+`q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect - which this one does - , we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q` which describe the same associations as `p`, `d`, and `q`, but correspond to the seasonal components of the model.
+
+1. Start by setting your preferred horizon value. Let's try 3 hours:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+ Selecting the best values for the ARIMA model's parameters can be challenging as it's somewhat subjective and time intensive. You might consider using the `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html).
+
+1. For now, try some manual selections to find a good model.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ A table of results is printed.
+
+You've built your first model! Now we need to find a way to evaluate it.
+
+### Evaluate your model
+
+To evaluate your model, you can perform the so-called `walk forward` validation. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step.
+
+Starting at the beginning of the time series using this technique, train the model on the train data set. Then make a prediction on the next time step. The prediction is evaluated against the known value. The training set is then expanded to include the known value and the process is repeated.
+
+> Note: You should keep the training set window fixed for more efficient training so that every time you add a new observation to the training set, you remove the observation from the beginning of the set.
+
+This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computation cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale.
+
+Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
+
+1. First, create a test data point for each HORIZON step.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ |            |          | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ The data is shifted horizontally according to its horizon point.
+
+1. Make predictions on your test data using this sliding window approach in a loop the size of the test data length:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ You can watch the training occurring:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Compare the predictions to the actual load:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ Output
+ | | timestamp | | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+
+ Observe the hourly data's prediction, compared to the actual load. How accurate is this?
+
+### Check model accuracy
+
+Check the accuracy of your model by testing its mean absolute percentage error (MAPE) over all the predictions.
+
+> **🧮 Show me the math**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) is used to show prediction accuracy as a ratio defined by the formula above. The difference between actual_t and predicted_t is divided by the actual_t. "The absolute value in this calculation is summed for every forecasted point in time and divided by the number of fitted points n." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
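+
+For reference (a minimal sketch; the lesson itself uses the `mape` helper imported from `common.utils`, whose exact implementation may differ), the formula translates to code roughly like this:
+
+```python
+import numpy as np
+
+def mape_sketch(predictions, actuals):
+    """Mean absolute percentage error as a fraction (multiply by 100 for a percentage)."""
+    predictions, actuals = np.array(predictions), np.array(actuals)
+    return np.mean(np.abs((predictions - actuals) / actuals))
+
+# Example: a forecast that is off by about 2% on average
+print(mape_sketch([102.0, 98.0], [100.0, 100.0]))  # 0.02
+```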
+
+1. Express the equation in code:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Calculate one step MAPE:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+ One step forecast MAPE:  0.5570581332313952 %
+
+1. Print the multi-step forecast MAPE:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+ A nice low number is best: consider that a forecast that has a MAPE of 10 is off by 10%.
+
+1. But as always, it's easier to see this kind of accuracy measurement visually, so let's plot it:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 A very nice plot, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+Dig into the ways to test the accuracy of a time series model. We touch on MAPE in this lesson, but are there other methods you could use? Research them and annotate them. A helpful document can be found [here](https://otexts.com/fpp2/accuracy.html)
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Review & Self Study
+
+This lesson touches on only the basics of time series forecasting with ARIMA. Take some time to deepen your knowledge by digging into [this repository](https://microsoft.github.io/forecasting/) and its various model types to learn other ways to build time series models.
+
+## Assignment
+
+[A new ARIMA model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/2-ARIMA/assignment.md b/translations/hi/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..a1d47ae04
--- /dev/null
+++ b/translations/hi/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# A new ARIMA model
+
+## Instructions
+
+Now that you have built an ARIMA model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using MAPE.
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                          | Adequate                                                 | Needs Improvement                   |
+| -------- | -------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ----------------------------------- |
+|          | A notebook is presented with a new ARIMA model built, tested and explained with visualizations and accuracy stated | The notebook presented is not annotated or contains bugs | An incomplete notebook is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/hi/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..69521982c
--- /dev/null
+++ b/translations/hi/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/hi/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..9acee712b
--- /dev/null
+++ b/translations/hi/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/3-SVR/README.md b/translations/hi/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..2ba4db7bc
--- /dev/null
+++ b/translations/hi/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,382 @@
+# Time series forecasting with Support Vector Regressor
+
+In the previous lesson, you learned how to use the ARIMA model to make time series predictions. Now you'll be looking at the Support Vector Regressor model, which is a regressor model used to predict continuous data.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine) for regression, or **SVR: Support Vector Regressor**.
+
+### SVR in the context of time series [^1]
+
+Before understanding the importance of SVR in time series prediction, here are some of the important concepts that you need to know:
+
+- **Regression:** Supervised learning technique to predict continuous values from a given set of inputs. The idea is to fit a curve (or line) in the feature space that has the maximum number of data points. [Click here](https://en.wikipedia.org/wiki/Regression_analysis) for more information.
+- **Support Vector Machine (SVM):** A type of supervised machine learning model used for classification, regression and outliers detection. The model is a hyperplane in the feature space, which in the case of classification acts as a boundary, and in the case of regression acts as the best-fit line. In SVM, a kernel function is generally used to transform the dataset to a space of a higher number of dimensions, so that they can be easily separable. [Click here](https://en.wikipedia.org/wiki/Support-vector_machine) for more information on SVMs.
+- **Support Vector Regressor (SVR):** A type of SVM, used to find the best fit line (which in the case of SVM is a hyperplane) that has the maximum number of data points. A toy sketch follows this list.
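+
+As a quick illustration (a minimal sketch on synthetic data, separate from the energy exercise below), here is an SVR fit with scikit-learn:
+
+```python
+import numpy as np
+from sklearn.svm import SVR
+
+# Hypothetical 1-D regression problem: y is a noisy sine of x
+rng = np.random.default_rng(0)
+X = np.sort(rng.uniform(0, 5, 40)).reshape(-1, 1)
+y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)
+
+# Fit an RBF-kernel SVR and predict on the first few training inputs
+model = SVR(kernel="rbf", C=10, gamma=0.5, epsilon=0.05)
+model.fit(X, y)
+print(model.predict(X[:5]))
+```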
+
+### Why SVR? [^1]
+
+In the last lesson you learned about ARIMA, which is a very successful statistical linear method to forecast time series data. However, in many cases, time series data have *non-linearity*, which cannot be mapped by linear models. In such cases, the ability of SVM to consider non-linearity in the data for regression tasks makes SVR successful in time series forecasting.
+
+## Exercise - build an SVR model
+
+The first few steps for data preparation are the same as that of the previous lesson on [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA).
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) file.[^2]
+
+1. Run the notebook and import the necessary libraries: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Plot all the available energy data from January 2012 to December 2014: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Now, let's build our SVR model.
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. Then you'll reshape the data to create a timestep-based dataset, which will be needed for the SVR. You'll train your model on the train set. After the model has finished training, you'll evaluate its accuracy on the training set, testing set and then the full dataset to see the overall performance. You need to ensure that the test set covers a later period in time from the training set, to ensure that the model does not gain information from future time periods [^2] (which would amount to *data leakage* and produce overly optimistic results).
+
+1. Set the training window to start on November 1, 2014 and the test window to start on December 30, 2014, matching the dates defined in the code below: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Visualize the differences: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale it to ensure the data is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only the needed column 'load' plus the date: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Scale the train data to be in the range (0, 1): [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. Now, scale the testing data: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Create data with time-steps [^1]
+
+For the SVR, you transform the input data to be of the form `[batch, timesteps]`. So, you reshape the existing `train_data` and `test_data` such that there is a new dimension which refers to the timesteps.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+For this example, we take `timesteps = 5`. So, the inputs to the model are the data for the first 4 timesteps, and the output will be the data for the 5th timestep.
+
+```python
+timesteps=5
+```
+
+Converting the training data to a 2D tensor using nested list comprehension:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+Converting the testing data to a 2D tensor:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+Selecting inputs and outputs from training and testing data:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### SVR लागू करें [^1]
+
+अब, SVR को लागू करने का समय है। इस कार्यान्वयन के बारे में अधिक पढ़ने के लिए, आप [इस दस्तावेज़](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) को संदर्भित कर सकते हैं। हमारे कार्यान्वयन के लिए, हम इन चरणों का पालन करते हैं:
+
+ 1. `SVR()` को कॉल करके और मॉडल हाइपरपैरामीटर (kernel, gamma, C और epsilon) पास करके मॉडल को परिभाषित करें
+ 2. `fit()` फ़ंक्शन को कॉल करके मॉडल को प्रशिक्षण डेटा पर तैयार करें
+ 3. `predict()` फ़ंक्शन को कॉल करके भविष्यवाणियाँ करें
+
+अब हम एक SVR मॉडल बनाते हैं। यहाँ हम [RBF कर्नल](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel) का उपयोग करते हैं, और हाइपरपैरामीटर गामा, C और एप्सिलॉन को क्रमशः 0.5, 10 और 0.05 के रूप में सेट करते हैं।
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### प्रशिक्षण डेटा पर मॉडल फिट करें [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### मॉडल प्रेडिक्शन बनाएं [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+आपने अपना SVR बना लिया है! अब हमें इसका मूल्यांकन करना है।
+
+### अपने मॉडल का मूल्यांकन करें [^1]
+
+मूल्यांकन के लिए, पहले हम डेटा को हमारे मूल पैमाने पर वापस स्केल करेंगे। फिर, प्रदर्शन की जांच करने के लिए, हम मूल और भविष्यवाणी किए गए समय श्रृंखला प्लॉट को प्लॉट करेंगे, और MAPE परिणाम भी प्रिंट करेंगे।
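+
+नीचे `mape` हेल्पर का उपयोग होता है (मूल नोटबुक में यह एक साझा यूटिलिटी से इम्पोर्ट होता है)। यदि यह आपके परिवेश में परिभाषित नहीं है, तो MAPE की मानक परिभाषा पर आधारित एक न्यूनतम स्केच:
+
+```python
+import numpy as np
+
+def mape(y_pred, y_true):
+    # Mean Absolute Percentage Error (standard definition; assumes y_true contains no zeros)
+    y_pred, y_true = np.array(y_pred), np.array(y_true)
+    return np.mean(np.abs((y_pred - y_true) / y_true))
+```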
+
+भविष्यवाणी और मूल आउटपुट को स्केल करें:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### प्रशिक्षण और परीक्षण डेटा पर मॉडल प्रदर्शन की जांच करें [^1]
+
+हम अपने प्लॉट के x-अक्ष पर दिखाने के लिए डेटासेट से टाइमस्टैम्प निकालते हैं। ध्यान दें कि हम पहले `timesteps-1` मानों का उपयोग पहले आउटपुट के इनपुट के रूप में कर रहे हैं, इसलिए आउटपुट के टाइमस्टैम्प उसके बाद शुरू होंगे।
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+प्रशिक्षण डेटा के लिए भविष्यवाणियों को प्लॉट करें:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+प्रशिक्षण डेटा के लिए MAPE प्रिंट करें
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+परीक्षण डेटा के लिए भविष्यवाणियों को प्लॉट करें
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+परीक्षण डेटा के लिए MAPE प्रिंट करें
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 आपके पास परीक्षण डेटासेट पर बहुत अच्छा परिणाम है!
+
+### पूर्ण डेटासेट पर मॉडल प्रदर्शन की जांच करें [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 बहुत अच्छे प्लॉट्स, जो अच्छी सटीकता वाले मॉडल को दर्शाते हैं। बहुत बढ़िया!
+
+---
+
+## 🚀चुनौती
+
+- मॉडल बनाते समय हाइपरपैरामीटर (gamma, C, epsilon) को बदलकर देखें और परीक्षण डेटा पर मूल्यांकन करें कि कौन सा हाइपरपैरामीटर सेट सर्वोत्तम परिणाम देता है (इस सूची के बाद एक छोटा स्केच दिया गया है)। इन हाइपरपैरामीटर के बारे में अधिक जानने के लिए, आप [यहाँ](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel) दस्तावेज़ देख सकते हैं।
+- मॉडल के लिए विभिन्न कर्नल फंक्शन का उपयोग करने का प्रयास करें और उनके प्रदर्शन का विश्लेषण करें। एक सहायक दस्तावेज़ [यहाँ](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) पाया जा सकता है।
+- भविष्यवाणी करते समय मॉडल कितने समय-चरण पीछे देखे, इसे बदलने के लिए `timesteps` के विभिन्न मानों का उपयोग करके देखें।
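+
+एक न्यूनतम स्केच, यह मानते हुए कि `train_data_timesteps`, `test_data_timesteps`, `timesteps`, `scaler` और `mape` ऊपर के पाठ के अनुसार पहले से परिभाषित हैं; उम्मीदवार मान केवल उदाहरण हैं:
+
+```python
+from itertools import product
+from sklearn.svm import SVR
+
+# Re-derive scaled inputs/outputs (y_train/y_test were inverse-scaled above)
+xtr, ytr = train_data_timesteps[:,:timesteps-1], train_data_timesteps[:,timesteps-1]
+xte, yte = test_data_timesteps[:,:timesteps-1], test_data_timesteps[:,[timesteps-1]]
+yte_orig = scaler.inverse_transform(yte)
+
+# Compare test MAPE over a few candidate hyperparameter combinations
+for gamma, C, eps in product([0.1, 0.5, 1.0], [1, 10, 100], [0.01, 0.05]):
+    m = SVR(kernel='rbf', gamma=gamma, C=C, epsilon=eps)
+    m.fit(xtr, ytr)
+    pred = scaler.inverse_transform(m.predict(xte).reshape(-1,1))
+    print(f"gamma={gamma}, C={C}, epsilon={eps}: MAPE={mape(pred, yte_orig)*100:.3f}%")
+```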
+
+## [व्याख्यान के बाद का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## समीक्षा और स्व-अध्ययन
+
+यह पाठ टाइम सीरीज फोरकास्टिंग के लिए SVR के अनुप्रयोग को प्रस्तुत करने के लिए था। SVR के बारे में अधिक पढ़ने के लिए, आप [इस ब्लॉग](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/) को संदर्भित कर सकते हैं। यह [scikit-learn पर दस्तावेज़](https://scikit-learn.org/stable/modules/svm.html) सामान्य रूप से SVMs के बारे में अधिक व्यापक स्पष्टीकरण प्रदान करता है, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression) और अन्य कार्यान्वयन विवरण जैसे कि विभिन्न [कर्नल फंक्शन](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) जो उपयोग किए जा सकते हैं, और उनके पैरामीटर।
+
+## असाइनमेंट
+
+[एक नया SVR मॉडल](assignment.md)
+
+## क्रेडिट्स
+
+[^1]: इस अनुभाग में पाठ, कोड और आउटपुट [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD) द्वारा योगदान किया गया था
+[^2]: इस अनुभाग में पाठ, कोड और आउटपुट [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA) से लिया गया था
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/3-SVR/assignment.md b/translations/hi/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..70ccd7135
--- /dev/null
+++ b/translations/hi/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# एक नया SVR मॉडल
+
+## निर्देश [^1]
+
+अब जब आपने एक SVR मॉडल बना लिया है, तो नए डेटा के साथ एक नया मॉडल बनाएं (इनमें से किसी एक [Duke के डेटासेट्स](http://www2.stat.duke.edu/~mw/ts_data_sets.html) को आज़माएं)। अपने काम को एक नोटबुक में एनोटेट करें, डेटा और अपने मॉडल को विज़ुअलाइज़ करें, और उचित प्लॉट्स और MAPE का उपयोग करके इसकी सटीकता का परीक्षण करें। विभिन्न हाइपरपैरामीटर्स को ट्वीक करने और टाइमस्टेप्स के लिए अलग-अलग मानों का उपयोग करने की भी कोशिश करें।
+
+## मूल्यांकन [^1]
+
+| मानदंड | उत्कृष्टता | पर्याप्त | सुधार की आवश्यकता |
+| ------- | -------------------------------------------------------------- | -------------------------------------------------------- | ----------------------------------- |
+| | एक नोटबुक प्रस्तुत की गई है जिसमें SVR मॉडल बनाया गया है, परीक्षण किया गया है और विज़ुअलाइज़ेशन और सटीकता के साथ समझाया गया है। | प्रस्तुत की गई नोटबुक एनोटेट नहीं है या इसमें बग्स हैं। | एक अधूरी नोटबुक प्रस्तुत की गई है |
+
+[^1]:इस अनुभाग का पाठ [ARIMA से असाइनमेंट](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md) पर आधारित है।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया अवगत रहें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/7-TimeSeries/README.md b/translations/hi/7-TimeSeries/README.md
new file mode 100644
index 000000000..a2233676b
--- /dev/null
+++ b/translations/hi/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# समय श्रृंखला पूर्वानुमान का परिचय
+
+समय श्रृंखला पूर्वानुमान क्या है? यह अतीत के रुझानों का विश्लेषण करके भविष्य की घटनाओं की भविष्यवाणी करने के बारे में है।
+
+## क्षेत्रीय विषय: वैश्विक बिजली उपयोग ✨
+
+इन पाठों में, आपको समय श्रृंखला पूर्वानुमान से परिचित कराया जाएगा - यह मशीन लर्निंग का एक अपेक्षाकृत कम-ज्ञात क्षेत्र है, फिर भी उद्योग और व्यापार सहित कई क्षेत्रों के अनुप्रयोगों के लिए बेहद मूल्यवान है। जबकि न्यूरल नेटवर्क का उपयोग इन मॉडलों की उपयोगिता बढ़ाने के लिए किया जा सकता है, हम इनका अध्ययन पारंपरिक मशीन लर्निंग के संदर्भ में करेंगे, क्योंकि ये मॉडल अतीत के आधार पर भविष्य के प्रदर्शन की भविष्यवाणी करने में मदद करते हैं।
+
+हमारा क्षेत्रीय फोकस दुनिया में विद्युत उपयोग पर है - अतीत के लोड पैटर्न से सीखकर भविष्य के विद्युत उपयोग की भविष्यवाणी करने के लिए यह एक दिलचस्प डेटासेट है। आप देख सकते हैं कि इस प्रकार का पूर्वानुमान व्यावसायिक वातावरण में कितना सहायक हो सकता है।
+
+
+
+[Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) द्वारा राजस्थान में सड़क पर विद्युत टावरों की तस्वीर [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) पर।
+
+## पाठ
+
+1. [समय श्रृंखला पूर्वानुमान का परिचय](1-Introduction/README.md)
+2. [ARIMA समय श्रृंखला मॉडल बनाना](2-ARIMA/README.md)
+3. [समय श्रृंखला पूर्वानुमान के लिए सपोर्ट वेक्टर रिग्रेसर बनाना](3-SVR/README.md)
+
+## श्रेय
+
+"समय श्रृंखला पूर्वानुमान का परिचय" ⚡️ के साथ [Francesca Lazzeri](https://twitter.com/frlazzeri) और [Jen Looper](https://twitter.com/jenlooper) द्वारा लिखा गया था। नोटबुक्स पहली बार ऑनलाइन [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) में दिखाई दिए, जो मूल रूप से Francesca Lazzeri द्वारा लिखे गए थे। SVR पाठ [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD) द्वारा लिखा गया था।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियां या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/1-QLearning/README.md b/translations/hi/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..2fe964fb1
--- /dev/null
+++ b/translations/hi/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,59 @@
+## नीति की जाँच करना
+
+चूंकि Q-Table प्रत्येक स्थिति में प्रत्येक क्रिया की "आकर्षण" सूचीबद्ध करता है, इसलिए इसका उपयोग हमारे विश्व में कुशल नेविगेशन को परिभाषित करने के लिए काफी आसान है। सबसे सरल मामले में, हम उस क्रिया का चयन कर सकते हैं जो Q-Table के उच्चतम मूल्य से मेल खाती है: (कोड ब्लॉक 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> यदि आप ऊपर दिए गए कोड को कई बार आजमाते हैं, तो आप देख सकते हैं कि कभी-कभी यह "लटक" जाता है, और आपको इसे बाधित करने के लिए नोटबुक में STOP बटन दबाने की आवश्यकता होती है। ऐसा इसलिए होता है क्योंकि ऐसी स्थितियाँ हो सकती हैं जब दो राज्य इष्टतम Q-Value के संदर्भ में एक-दूसरे की ओर "इशारा" करते हैं, इस स्थिति में एजेंट उन राज्यों के बीच अनिश्चितकाल तक चलता रहता है।
+
+## 🚀चुनौती
+
+> **कार्य 1:** `walk` फ़ंक्शन को संशोधित करें ताकि पथ की अधिकतम लंबाई चरणों की एक निश्चित संख्या (जैसे, 100) तक सीमित रहे, और देखें कि ऊपर दिया गया कोड समय-समय पर यही मान लौटाता है।
+
+> **कार्य 2:** `walk` फ़ंक्शन को संशोधित करें ताकि वह उन स्थानों पर वापस न जाए जहाँ वह पहले जा चुका है। इससे `walk` लूप में फँसने से बचेगा, हालाँकि एजेंट फिर भी किसी ऐसे स्थान में "फँस" सकता है जहाँ से वह निकल नहीं सकता।
+
+## नेविगेशन
+
+एक बेहतर नेविगेशन नीति वही होगी जिसका उपयोग हमने प्रशिक्षण के दौरान किया था, जो exploitation और exploration को जोड़ती है। इस नीति में, हम प्रत्येक क्रिया को एक निश्चित संभावना के साथ चुनेंगे, जो Q-Table के मानों के अनुपात में होगी। यह रणनीति फिर भी एजेंट को पहले से खोजी जा चुकी स्थिति में वापस ले जा सकती है, लेकिन, जैसा कि आप नीचे दिए गए कोड से देख सकते हैं, यह वांछित स्थान तक बहुत छोटा औसत पथ देती है (याद रखें कि `print_statistics` सिमुलेशन को 100 बार चलाता है): (कोड ब्लॉक 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+इस कोड को चलाने के बाद, आपको पहले की तुलना में औसत पथ लंबाई बहुत कम मिलनी चाहिए, जो 3-6 की सीमा में होगी।
+
+## सीखने की प्रक्रिया की जांच
+
+जैसा कि हमने उल्लेख किया है, सीखने की प्रक्रिया समस्या स्थान की संरचना के बारे में प्राप्त ज्ञान की खोज और अन्वेषण के बीच एक संतुलन है। हमने देखा है कि सीखने के परिणाम (एक एजेंट को लक्ष्य तक पहुँचने के लिए एक छोटा रास्ता खोजने की क्षमता) में सुधार हुआ है, लेकिन यह भी देखना दिलचस्प है कि सीखने की प्रक्रिया के दौरान औसत पथ लंबाई कैसे व्यवहार करती है:
+
+## सीखने को इस प्रकार संक्षेपित किया जा सकता है:
+
+- **औसत पथ लंबाई बढ़ जाती है**। यहां हम देखते हैं कि पहले औसत पथ लंबाई बढ़ जाती है। ऐसा शायद इसलिए होता है क्योंकि जब हमें पर्यावरण के बारे में कुछ नहीं पता होता है, तो हम खराब राज्यों, पानी या भेड़िये में फंसने की संभावना रखते हैं। जैसे-जैसे हम अधिक सीखते हैं और इस ज्ञान का उपयोग करना शुरू करते हैं, हम लंबे समय तक पर्यावरण का पता लगा सकते हैं, लेकिन हमें अभी भी सेब कहाँ हैं इसके बारे में अच्छी तरह से पता नहीं होता है।
+
+- **जैसे-जैसे हम अधिक सीखते हैं, पथ की लंबाई कम हो जाती है**। एक बार जब हम पर्याप्त सीख लेते हैं, तो एजेंट के लिए लक्ष्य प्राप्त करना आसान हो जाता है, और पथ लंबाई कम होने लगती है। हालांकि, हम अभी भी अन्वेषण के लिए खुले हैं, इसलिए हम अक्सर सबसे अच्छे रास्ते से भटकते हैं, और नए विकल्पों का पता लगाते हैं, जिससे रास्ता इष्टतम से लंबा हो जाता है।
+
+- **लंबाई अचानक बढ़ जाती है**। इस ग्राफ पर हम यह भी देखते हैं कि किसी बिंदु पर, लंबाई अचानक बढ़ जाती है। यह प्रक्रिया की यादृच्छिक प्रकृति को इंगित करता है, और यह कि हम किसी बिंदु पर Q-Table गुणांक को नए मानों के साथ अधिलेखित करके "बिगाड़" सकते हैं। आदर्श रूप से इसे सीखने की दर को कम करके कम किया जाना चाहिए (उदाहरण के लिए, प्रशिक्षण के अंत की ओर, हम Q-Table मानों को एक छोटे मान से ही समायोजित करते हैं)।
+
+कुल मिलाकर, यह याद रखना महत्वपूर्ण है कि सीखने की प्रक्रिया की सफलता और गुणवत्ता काफी हद तक मापदंडों पर निर्भर करती है, जैसे सीखने की दर, सीखने की दर में कमी, और छूट कारक। इन्हें अक्सर **हाइपरपैरामीटर्स** कहा जाता है, ताकि उन्हें **पैरामीटर्स** से अलग किया जा सके, जिन्हें हम प्रशिक्षण के दौरान अनुकूलित करते हैं (उदाहरण के लिए, Q-Table गुणांक)। सर्वोत्तम हाइपरपैरामीटर मानों को खोजने की प्रक्रिया को **हाइपरपैरामीटर ऑप्टिमाइज़ेशन** कहा जाता है, और यह एक अलग विषय का हकदार है।
+
+## [पोस्ट-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## असाइनमेंट
+[एक अधिक यथार्थवादी दुनिया](assignment.md)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/1-QLearning/assignment.md b/translations/hi/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..462cbabe6
--- /dev/null
+++ b/translations/hi/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# एक अधिक यथार्थवादी दुनिया
+
+हमारी स्थिति में, पीटर लगभग बिना थके या भूखे हुए इधर-उधर घूम सकता था। एक अधिक यथार्थवादी दुनिया में, उसे समय-समय पर बैठकर आराम करना पड़ता है और खुद को खिलाना भी पड़ता है। आइए निम्नलिखित नियमों को लागू करके हमारी दुनिया को अधिक यथार्थवादी बनाएं:
+
+1. एक स्थान से दूसरे स्थान पर जाने से पीटर की **ऊर्जा** कम हो जाती है और कुछ **थकान** बढ़ जाती है।
+2. पीटर सेब खाकर अधिक ऊर्जा प्राप्त कर सकता है।
+3. पीटर पेड़ के नीचे या घास पर आराम करके थकान से छुटकारा पा सकता है (यानी, बोर्ड के किसी स्थान पर जाकर जहां पेड़ या घास हो - हरा क्षेत्र)
+4. पीटर को भेड़िये को खोजना और मारना होगा।
+5. भेड़िये को मारने के लिए, पीटर के पास ऊर्जा और थकान के निश्चित स्तर होने चाहिए, अन्यथा वह लड़ाई हार जाएगा।
+
+## निर्देश
+
+अपने समाधान के लिए मूल [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) नोटबुक का उपयोग प्रारंभिक बिंदु के रूप में करें।
+
+ऊपर दिए गए गेम के नियमों के अनुसार इनाम फ़ंक्शन को संशोधित करें, गेम जीतने की सर्वोत्तम रणनीति सीखने के लिए पुनर्बलन सीखने के एल्गोरिदम को चलाएं, और रैंडम वॉक के परिणामों की तुलना अपने एल्गोरिदम के साथ करें कि कितने गेम जीते और हारे गए।
+
+> **Note**: आपकी नई दुनिया में, स्थिति अधिक जटिल है, और मानव स्थिति के अलावा थकान और ऊर्जा स्तर भी शामिल हैं। आप स्थिति का प्रतिनिधित्व एक ट्यूपल (Board, energy, fatigue) के रूप में कर सकते हैं, या स्थिति के लिए एक क्लास परिभाषित कर सकते हैं (आप इसे `Board` से भी व्युत्पन्न कर सकते हैं), या यहां तक कि मूल `Board` क्लास को [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py) के अंदर संशोधित कर सकते हैं।
+
+अपने समाधान में, कृपया रैंडम वॉक रणनीति के लिए जिम्मेदार कोड को बनाए रखें, और अंत में अपने एल्गोरिदम के परिणामों की तुलना रैंडम वॉक से करें।
+
+> **Note**: आपको इसे काम करने के लिए हाइपरपैरामीटर को समायोजित करने की आवश्यकता हो सकती है, विशेष रूप से युगों की संख्या। क्योंकि गेम की सफलता (भेड़िये से लड़ाई) एक दुर्लभ घटना है, आप बहुत अधिक प्रशिक्षण समय की उम्मीद कर सकते हैं।
+
+## मूल्यांकन मानदंड
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| | एक नोटबुक प्रस्तुत की गई है जिसमें नई दुनिया के नियमों की परिभाषा, Q-लर्निंग एल्गोरिदम और कुछ पाठ्य विवरण शामिल हैं। Q-लर्निंग रैंडम वॉक की तुलना में परिणामों को महत्वपूर्ण रूप से सुधारने में सक्षम है। | नोटबुक प्रस्तुत की गई है, Q-लर्निंग लागू किया गया है और रैंडम वॉक की तुलना में परिणामों में सुधार करता है, लेकिन महत्वपूर्ण रूप से नहीं; या नोटबुक खराब तरीके से प्रलेखित है और कोड अच्छी तरह से संरचित नहीं है | दुनिया के नियमों को फिर से परिभाषित करने का कुछ प्रयास किया गया है, लेकिन Q-लर्निंग एल्गोरिदम काम नहीं करता, या इनाम फ़ंक्शन पूरी तरह से परिभाषित नहीं है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/hi/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..89b5e8f2a
--- /dev/null
+++ b/translations/hi/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित AI अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/hi/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..64cc9fa55
--- /dev/null
+++ b/translations/hi/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/2-Gym/README.md b/translations/hi/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..6255e576f
--- /dev/null
+++ b/translations/hi/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,342 @@
+# CartPole स्केटिंग
+
+पिछले पाठ में हमने जिस समस्या को हल किया था, वह एक खिलौना समस्या की तरह लग सकती है, जो वास्तव में वास्तविक जीवन परिदृश्यों के लिए लागू नहीं होती है। ऐसा नहीं है, क्योंकि कई वास्तविक दुनिया की समस्याएं भी इस परिदृश्य को साझा करती हैं - जिसमें शतरंज या गो खेलना भी शामिल है। वे समान हैं, क्योंकि हमारे पास दिए गए नियमों के साथ एक बोर्ड भी है और एक **डिस्क्रीट स्टेट**।
+
+## [प्री-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## परिचय
+
+इस पाठ में हम Q-लर्निंग के समान सिद्धांतों को एक समस्या पर लागू करेंगे जिसमें **कंटीन्यूअस स्टेट** होता है, यानी एक स्टेट जो एक या अधिक वास्तविक संख्याओं द्वारा दी जाती है। हम निम्नलिखित समस्या से निपटेंगे:
+
+> **समस्या**: अगर पीटर को भेड़िये से बचना है, तो उसे तेजी से चलने में सक्षम होना चाहिए। हम देखेंगे कि पीटर कैसे स्केट करना सीख सकता है, विशेष रूप से संतुलन बनाए रखना, Q-लर्निंग का उपयोग करके।
+
+
+
+> पीटर और उसके दोस्त भेड़िये से बचने के लिए रचनात्मक हो जाते हैं! चित्र [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+हम संतुलन की समस्या का एक सरलीकृत संस्करण उपयोग करेंगे, जिसे **CartPole** समस्या कहा जाता है। CartPole की दुनिया में, हमारे पास एक क्षैतिज स्लाइडर है जो बाएं या दाएं चल सकता है, और लक्ष्य है स्लाइडर के ऊपर एक ऊर्ध्वाधर पोल को संतुलित रखना।
+
+## आवश्यकताएँ
+
+इस पाठ में, हम विभिन्न **एनवायरनमेंट्स** का अनुकरण करने के लिए **OpenAI Gym** नामक एक लाइब्रेरी का उपयोग करेंगे। आप इस पाठ का कोड स्थानीय रूप से (जैसे Visual Studio Code से) चला सकते हैं, जिस स्थिति में सिमुलेशन एक नई विंडो में खुलेगा। ऑनलाइन कोड चलाते समय, आपको कोड में कुछ समायोजन करने पड़ सकते हैं, जैसा कि [यहाँ](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7) वर्णित है।
+
+## OpenAI Gym
+
+पिछले पाठ में, खेल के नियम और स्टेट `Board` क्लास द्वारा दिए गए थे जिसे हमने स्वयं परिभाषित किया था। यहाँ हम एक विशेष **सिमुलेशन एनवायरनमेंट** का उपयोग करेंगे, जो संतुलन पोल के पीछे के भौतिकी का अनुकरण करेगा। सबसे लोकप्रिय सिमुलेशन एनवायरनमेंट्स में से एक जिसे रिइनफोर्समेंट लर्निंग एल्गोरिदम के प्रशिक्षण के लिए उपयोग किया जाता है, उसे [Gym](https://gym.openai.com/) कहा जाता है, जो [OpenAI](https://openai.com/) द्वारा बनाए रखा जाता है। इस जिम का उपयोग करके हम कार्टपोल सिमुलेशन से लेकर अटारी गेम्स तक विभिन्न **एनवायरनमेंट्स** बना सकते हैं।
+
+> **नोट**: आप OpenAI Gym से उपलब्ध अन्य एनवायरनमेंट्स को [यहाँ](https://gym.openai.com/envs/#classic_control) देख सकते हैं।
+
+पहले, जिम को इंस्टॉल करें और आवश्यक लाइब्रेरीज़ को इम्पोर्ट करें (कोड ब्लॉक 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## व्यायाम - एक कार्टपोल एनवायरनमेंट को प्रारंभ करें
+
+कार्टपोल संतुलन समस्या के साथ काम करने के लिए, हमें संबंधित एनवायरनमेंट को प्रारंभ करना होगा। प्रत्येक एनवायरनमेंट के साथ जुड़े होते हैं:
+
+- **ऑब्जर्वेशन स्पेस**, जो उस जानकारी की संरचना को परिभाषित करता है जो हमें एनवायरनमेंट से प्राप्त होती है। कार्टपोल समस्या के लिए, हमें पोल की स्थिति, वेग और कुछ अन्य मान प्राप्त होते हैं।
+
+- **एक्शन स्पेस**, जो संभावित क्रियाओं को परिभाषित करता है। हमारे मामले में एक्शन स्पेस डिस्क्रीट है, और इसमें दो क्रियाएं शामिल हैं - **बाएं** और **दाएं**। (कोड ब्लॉक 2)
+
+1. प्रारंभ करने के लिए, निम्नलिखित कोड टाइप करें:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+एनवायरनमेंट कैसे काम करता है यह देखने के लिए, चलिए 100 चरणों के लिए एक छोटी सिमुलेशन चलाते हैं। प्रत्येक चरण में, हम एक कार्य प्रदान करते हैं - इस सिमुलेशन में हम बस `action_space` से एक कार्य को यादृच्छिक रूप से चुनते हैं।
+
+1. नीचे दिए गए कोड को चलाएं और देखें कि इसका परिणाम क्या है।
+
+ ✅ याद रखें कि इस कोड को स्थानीय Python इंस्टॉलेशन पर चलाना बेहतर है! (कोड ब्लॉक 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+ आपको इस छवि के समान कुछ देखना चाहिए:
+
+ 
+
+1. सिमुलेशन के दौरान, यह तय करने के लिए कि कैसे कार्य करना है, हमें ऑब्जर्वेशन प्राप्त करने की आवश्यकता होती है। वास्तव में, step फ़ंक्शन वर्तमान ऑब्जर्वेशन, एक रिवार्ड मान और done फ्लैग लौटाता है, जो इंगित करता है कि सिमुलेशन जारी रखने का कोई मतलब है या नहीं: (कोड ब्लॉक 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+ आपको नोटबुक आउटपुट में कुछ ऐसा ही देखना चाहिए:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+ सिमुलेशन के प्रत्येक चरण में लौटाए गए ऑब्जर्वेशन वेक्टर में निम्नलिखित मान शामिल होते हैं:
+ - कार्ट की स्थिति
+ - कार्ट का वेग
+ - पोल का कोण
+ - पोल की रोटेशन दर
+
+1. उन संख्याओं का न्यूनतम और अधिकतम मान प्राप्त करें: (कोड ब्लॉक 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+ आप यह भी देख सकते हैं कि प्रत्येक सिमुलेशन चरण पर रिवार्ड मान हमेशा 1 होता है। इसका कारण यह है कि हमारा लक्ष्य जितना संभव हो सके जीवित रहना है, यानी पोल को यथासंभव लंबवत स्थिति में रखना है।
+
+ ✅ वास्तव में, यदि हम 100 लगातार परीक्षणों में 195 का औसत रिवार्ड प्राप्त करने में सफल होते हैं तो CartPole सिमुलेशन को हल किया जाता है।
+
+## स्टेट का डिस्क्रीटाइजेशन
+
+Q-लर्निंग में, हमें Q-टेबल बनाना होता है जो परिभाषित करता है कि प्रत्येक स्टेट पर क्या करना है। ऐसा करने में सक्षम होने के लिए, हमें स्टेट को **डिस्क्रीट** बनाना होगा, अधिक सटीक रूप से, इसमें सीमित संख्या में डिस्क्रीट मान शामिल होने चाहिए। इस प्रकार, हमें किसी प्रकार से अपने ऑब्जर्वेशन को **डिस्क्रीटाइज** करना होगा, उन्हें सीमित स्टेट सेट में मैप करना होगा।
+
+इसे करने के कुछ तरीके हैं:
+
+- **बिन्स में विभाजित करें**। यदि हमें किसी मान का अंतराल पता है, तो हम इस अंतराल को कई **बिन्स** में विभाजित कर सकते हैं, और फिर उस मान को बिन नंबर से बदल सकते हैं जिसमें यह आता है। यह numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) विधि का उपयोग करके किया जा सकता है। इस मामले में, हम स्टेट आकार को ठीक से जानेंगे, क्योंकि यह उन बिन्स की संख्या पर निर्भर करेगा जिन्हें हम डिजिटलीकरण के लिए चुनते हैं।
+
+✅ हम मूल्यों को किसी सीमित अंतराल (कहें, -20 से 20 तक) में लाने के लिए रैखिक इंटरपोलेशन का उपयोग कर सकते हैं, और फिर उन्हें गोल करके पूर्णांकों में बदल सकते हैं। इससे हमें स्टेट के आकार पर थोड़ा कम नियंत्रण मिलता है, विशेष रूप से यदि हमें इनपुट मूल्यों की सटीक रेंज नहीं पता है। उदाहरण के लिए, हमारे मामले में 4 में से 2 मानों की कोई ऊपरी/निचली सीमा नहीं है, जिससे असीमित संख्या में स्टेट हो सकते हैं।
+
+हमारे उदाहरण में, हम दूसरे दृष्टिकोण के साथ जाएंगे। जैसा कि आप बाद में देख सकते हैं, अपरिभाषित ऊपरी/निचली सीमाओं के बावजूद, वे मान शायद ही कभी कुछ सीमित अंतरालों के बाहर मान लेते हैं, इस प्रकार उन स्टेट्स के साथ चरम मान बहुत दुर्लभ होंगे।
+
+1. यहाँ वह फ़ंक्शन है जो हमारे मॉडल से ऑब्जर्वेशन लेगा और 4 पूर्णांक मानों का एक ट्यूपल उत्पन्न करेगा: (कोड ब्लॉक 6)
+
+ ```python
+ def discretize(x):
+     return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. चलिए बिन्स का उपयोग करके एक और डिस्क्रीटाइजेशन विधि का भी अन्वेषण करते हैं: (कोड ब्लॉक 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. अब चलिए एक छोटी सिमुलेशन चलाते हैं और उन डिस्क्रीट एनवायरनमेंट मानों का अवलोकन करते हैं। `discretize` और `discretize_bins` दोनों का उपयोग करके देखें कि क्या कोई अंतर है।
+
+ ✅ `discretize_bins` बिन नंबर लौटाता है, जो 0-आधारित होता है। इस प्रकार इनपुट वेरिएबल के 0 के आस-पास मानों के लिए यह अंतराल के मध्य की संख्या (10) लौटाता है। `discretize` में, हमने आउटपुट मानों की रेंज की परवाह नहीं की, उन्हें नकारात्मक होने दिया, इसलिए स्टेट मान स्थानांतरित नहीं होते, और 0 का मतलब 0 होता है। (कोड ब्लॉक 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+ ✅ यदि आप देखना चाहते हैं कि एनवायरनमेंट कैसे निष्पादित होता है तो env.render से शुरू होने वाली पंक्ति को अनकमेंट करें। अन्यथा आप इसे पृष्ठभूमि में निष्पादित कर सकते हैं, जो तेज है। हम इस "अदृश्य" निष्पादन का उपयोग Q-लर्निंग प्रक्रिया के दौरान करेंगे।
+
+## Q-टेबल संरचना
+
+हमारे पिछले पाठ में, स्टेट एक साधारण संख्या जोड़ी थी 0 से 8 तक, और इस प्रकार Q-टेबल को 8x8x2 आकार के numpy टेंसर द्वारा प्रस्तुत करना सुविधाजनक था। यदि हम बिन्स डिस्क्रीटाइजेशन का उपयोग करते हैं, तो हमारे स्टेट वेक्टर का आकार भी ज्ञात होता है, इसलिए हम उसी दृष्टिकोण का उपयोग कर सकते हैं और स्टेट को 20x20x10x10x2 आकार के एक सरणी द्वारा प्रस्तुत कर सकते हैं (यहाँ 2 एक्शन स्पेस का आयाम है, और पहले आयाम ऑब्जर्वेशन स्पेस में प्रत्येक पैरामीटर के लिए उपयोग किए गए बिन्स की संख्या से मेल खाते हैं)।
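+
+एक न्यूनतम स्केच, यह मानते हुए कि हम बिन्स-आधारित डिस्क्रीटाइजेशन चुनते हैं (पाठ आगे डिक्शनरी दृष्टिकोण अपनाता है, इसलिए यह केवल तुलना के लिए है):
+
+```python
+import numpy as np
+
+nbins = [20, 20, 10, 10]                 # bins per observation parameter
+n_actions = 2                            # left / right
+Q_array = np.zeros(nbins + [n_actions])  # array-based Q-Table
+print(Q_array.shape)                     # (20, 20, 10, 10, 2)
+```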
+
+हालांकि, कभी-कभी ऑब्जर्वेशन स्पेस के सटीक आयाम ज्ञात नहीं होते हैं। `discretize` फ़ंक्शन के मामले में, हम कभी भी यह सुनिश्चित नहीं कर सकते कि हमारा स्टेट निश्चित सीमाओं के भीतर रहता है, क्योंकि कुछ मूल मान बाउंड नहीं होते। इस प्रकार, हम एक अलग दृष्टिकोण का उपयोग करेंगे और Q-टेबल को एक डिक्शनरी द्वारा प्रस्तुत करेंगे।
+
+1. *(state,action)* जोड़ी का उपयोग डिक्शनरी की कुंजी के रूप में करें, और मान Q-टेबल प्रविष्टि मान से मेल खाता होगा। (कोड ब्लॉक 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+ यहाँ हम एक फ़ंक्शन `qvalues()` भी परिभाषित करते हैं, जो एक दिए गए स्टेट के लिए सभी संभावित कार्यों से मेल खाने वाले Q-टेबल मानों की एक सूची लौटाता है। यदि प्रविष्टि Q-टेबल में मौजूद नहीं है, तो हम डिफ़ॉल्ट रूप से 0 लौटाएंगे।
+
+## चलिए Q-लर्निंग शुरू करते हैं
+
+अब हम पीटर को संतुलन सिखाने के लिए तैयार हैं!
+
+1. पहले, कुछ हाइपरपैरामीटर्स सेट करें: (कोड ब्लॉक 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+ यहाँ, `alpha` **लर्निंग रेट** है, जो यह परिभाषित करता है कि हमें प्रत्येक चरण पर Q-Table के वर्तमान मानों को किस हद तक समायोजित करना चाहिए। पिछले पाठ में हमने 1 से शुरू किया था, और फिर प्रशिक्षण के दौरान `alpha` को घटाकर कम मानों तक लाए थे। इस उदाहरण में सरलता के लिए हम इसे स्थिर रखेंगे, और आप बाद में `alpha` मानों को समायोजित करने के साथ प्रयोग कर सकते हैं।
+
+ `gamma` **डिस्काउंट फैक्टर** है, जो दर्शाता है कि हमें वर्तमान रिवार्ड की तुलना में भविष्य के रिवार्ड को किस हद तक प्राथमिकता देनी चाहिए।
+
+ `epsilon` **exploration/exploitation फैक्टर** है, जो निर्धारित करता है कि हमें exploration को exploitation पर प्राथमिकता देनी चाहिए या इसके विपरीत। हमारे एल्गोरिदम में, हम `epsilon` प्रतिशत मामलों में अगली क्रिया Q-Table मानों के अनुसार चुनेंगे, और शेष मामलों में एक यादृच्छिक क्रिया निष्पादित करेंगे। इससे हम खोज स्थान के उन क्षेत्रों का अन्वेषण कर सकेंगे जिन्हें हमने पहले कभी नहीं देखा।
+
+ ✅ संतुलन के संदर्भ में - यादृच्छिक क्रिया चुनना (exploration) गलत दिशा में एक यादृच्छिक धक्के की तरह काम करेगा, और पोल को उन "गलतियों" से संतुलन वापस पाना सीखना होगा
+
+### एल्गोरिदम में सुधार
+
+हम पिछले पाठ के अपने एल्गोरिदम में दो सुधार भी कर सकते हैं:
+
+- **कई सिमुलेशनों पर औसत संचयी रिवार्ड की गणना करें**। हम प्रत्येक 5000 पुनरावृत्तियों पर प्रगति प्रिंट करेंगे, और उस अवधि के संचयी रिवार्ड का औसत निकालेंगे। इसका मतलब है कि यदि हमें 195 से अधिक अंक मिलते हैं, तो हम समस्या को हल मान सकते हैं, वह भी आवश्यक से अधिक गुणवत्ता के साथ।
+
+- **अधिकतम औसत संचयी परिणाम की गणना करें**, `Qmax`, और हम उस परिणाम से संबंधित Q-Table को संग्रहीत करेंगे। जब आप प्रशिक्षण चलाएंगे, तो आप देखेंगे कि कभी-कभी औसत संचयी परिणाम गिरने लगता है, और हम Q-Table के उन मानों को बनाए रखना चाहते हैं जो प्रशिक्षण के दौरान देखे गए सर्वश्रेष्ठ मॉडल से संबंधित हैं।
+
+1. प्रत्येक सिमुलेशन पर सभी संचयी रिवार्ड को आगे की प्लॉटिंग के लिए `rewards` वेक्टर में एकत्र करें। (कोड ब्लॉक 11)
+
+ ```python
+ def probs(v,eps=1e-4):
+     v = v-v.min()+eps
+     v = v/v.sum()
+     return v
+
+ Qmax = 0
+ cum_rewards = []
+ rewards = []
+ for epoch in range(100000):
+     obs = env.reset()
+     done = False
+     cum_reward=0
+     # == do the simulation ==
+     while not done:
+         s = discretize(obs)
+         if random.random()<epsilon:
+             # exploitation - choose the action according to Q-Table probabilities
+             v = probs(np.array(qvalues(s)))
+             a = random.choices(actions,weights=v)[0]
+         else:
+             # exploration - randomly choose an action
+             a = np.random.randint(env.action_space.n)
+
+         obs, rew, done, info = env.step(a)
+         cum_reward+=rew
+         ns = discretize(obs)
+         Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+     cum_rewards.append(cum_reward)
+     rewards.append(cum_reward)
+     # == periodically print results and calculate average reward ==
+     if epoch%5000==0:
+         print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+         if np.average(cum_rewards) > Qmax:
+             Qmax = np.average(cum_rewards)
+             Qbest = Q
+         cum_rewards=[]
+ ```
+
+आप उन परिणामों से क्या नोटिस कर सकते हैं:
+
+- **हमारे लक्ष्य के करीब**। हम 100+ लगातार सिमुलेशन रन के दौरान 195 संचयी रिवार्ड प्राप्त करने के लक्ष्य को प्राप्त करने के बहुत करीब हैं, या हम वास्तव में इसे प्राप्त कर चुके हो सकते हैं! भले ही हमें छोटे नंबर मिलें, हम अभी भी नहीं जानते, क्योंकि हम 5000 रन के औसत पर जा रहे हैं, और केवल 100 रन औपचारिक मानदंड में आवश्यक हैं।
+
+- **रिवार्ड गिरना शुरू होता है**। कभी-कभी रिवार्ड गिरना शुरू हो जाता है, जिसका मतलब है कि हम Q-टेबल में पहले से सीखे गए मानों को उन मानों से "नष्ट" कर सकते हैं जो स्थिति को बदतर बनाते हैं।
+
+यह अवलोकन अधिक स्पष्ट रूप से दिखाई देता है यदि हम प्रशिक्षण प्रगति का ग्राफ़ बनाते हैं।
+
+## प्रशिक्षण प्रगति का ग्राफ़ बनाना
+
+प्रशिक्षण के दौरान, हमने प्रत्येक पुनरावृत्ति में संचयी रिवार्ड मान को `rewards` वेक्टर में एकत्र किया है। यहाँ यह कैसा दिखता है जब हम इसे पुनरावृत्ति संख्या के खिलाफ प्लॉट करते हैं:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+इस ग्राफ़ से, कुछ भी बताना संभव नहीं है, क्योंकि स्टोचैस्टिक प्रशिक्षण प्रक्रिया की प्रकृति के कारण प्रशिक्षण सत्रों की लंबाई बहुत भिन्न होती है। इस ग्राफ़ को अधिक समझने योग्य बनाने के लिए, हम प्रयोगों की एक श्रृंखला पर **रनिंग एवरेज** की गणना कर सकते हैं, कहें 100। इसे `np.convolve` का उपयोग करके आसानी से किया जा सकता है: (कोड ब्लॉक 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## हाइपरपैरामीटर्स को बदलना
+
+प्रशिक्षण को अधिक स्थिर बनाने के लिए, यह समझ में आता है कि हमारे कुछ हाइपरपैरामीटर्स को प्रशिक्षण के दौरान समायोजित किया जाए। विशेष रूप से:
+
+- **लर्निंग रेट** `alpha` के लिए, हम 1 के करीब मानों से शुरू कर सकते हैं, और फिर पैरामीटर को घटाते रह सकते हैं। समय के साथ, हमें Q-Table में अच्छे संभाव्यता मान मिलने लगेंगे, और इसलिए हमें उन्हें थोड़ा-थोड़ा ही समायोजित करना चाहिए, न कि नए मानों से पूरी तरह अधिलेखित करना चाहिए।
+
+- **epsilon बढ़ाएं**। हम `epsilon` को धीरे-धीरे बढ़ाना चाह सकते हैं, ताकि exploration कम और exploitation अधिक हो। संभवतः `epsilon` के कम मान से शुरू करना और लगभग 1 तक बढ़ाना उचित होगा। (इस सूची के बाद एक छोटा स्केच दिया गया है।)
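+
+एक न्यूनतम स्केच (मान केवल उदाहरण हैं), यह दिखाने के लिए कि प्रशिक्षण लूप के अंदर ऐसे शेड्यूल कैसे दिख सकते हैं:
+
+```python
+alpha = 1.0      # start with a high learning rate
+epsilon = 0.3    # start with little exploitation
+
+for epoch in range(100000):
+    # ... simulation and Q-Table update as in code block 11 ...
+    alpha = max(0.05, alpha * 0.99995)    # slowly decrease the learning rate
+    epsilon = min(0.99, epsilon + 1e-5)   # slowly shift towards exploitation
+```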
+
+> **कार्य 1**: हाइपरपैरामीटर मानों के साथ खेलें और देखें कि क्या आप उच्च संचयी रिवार्ड प्राप्त कर सकते हैं। क्या आप 195 से ऊपर जा रहे हैं?
+
+> **कार्य 2**: समस्या को औपचारिक रूप से हल करने के लिए, आपको 100 लगातार रन के दौरान 195 औसत रिवार्ड प्राप्त करने की आवश्यकता है। प्रशिक्षण के दौरान इसे मापें और सुनिश्चित करें कि आपने समस्या को औपचारिक रूप से हल कर लिया है!
+
+## परिणाम को क्रियान्वित में देखना
+
+यह देखना दिलचस्प होगा कि प्रशिक्षित मॉडल वास्तव में कैसे व्यवहार करता है। चलिए सिमुलेशन चलाते हैं और Q-टेबल में संभाव्यता वितरण के अनुसार कार्य चयन रणनीति का पालन करते हैं: (कोड ब्लॉक 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+आपको कुछ ऐसा दिखना चाहिए:
+
+
+
+---
+
+## 🚀चुनौती
+
+> **कार्य 3**: यहाँ, हम Q-टेबल की अंतिम प्रति का उपयोग कर रहे थे, जो जरूरी नहीं कि सबसे अच्छी हो। याद रखें कि हमने सबसे अच्छा प्रदर्शन करने वाले Q-टेबल को `Qbest` वेरिएबल में संग्रहीत किया था! `Qbest` को `Q` में कॉपी करके उसी उदाहरण को सर्वश्रेष्ठ प्रदर्शन वाले Q-टेबल के साथ आज़माएं और देखें कि क्या आपको कोई अंतर दिखता है।
+
+> **कार्य 4**: यहाँ हम प्रत्येक चरण पर सबसे अच्छी क्रिया नहीं चुन रहे थे, बल्कि संबंधित संभाव्यता वितरण से सैंपलिंग कर रहे थे। क्या हमेशा उच्चतम Q-टेबल मान वाली सबसे अच्छी क्रिया चुनना अधिक उचित होगा? यह `np.argmax` फ़ंक्शन से किया जा सकता है, जो उच्चतम Q-टेबल मान से मेल खाने वाली क्रिया संख्या खोजता है। इस रणनीति को लागू करें (नीचे एक स्केच दिया गया है) और देखें कि क्या इससे संतुलन में सुधार होता है।
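+
+एक न्यूनतम स्केच (कार्य 4 के लिए), यह मानते हुए कि `env`, `discretize`, `qvalues` और `actions` ऊपर के पाठ के अनुसार परिभाषित हैं:
+
+```python
+import numpy as np
+
+obs = env.reset()
+done = False
+while not done:
+    s = discretize(obs)
+    a = actions[np.argmax(qvalues(s))]   # always pick the action with the highest Q-value
+    obs, _, done, _ = env.step(a)
+env.close()
+```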
+
+## [पोस्ट-लेक्चर क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## असाइनमेंट
+[ट्रेन माउंटेन कार](assignment.md)
+
+## निष्कर्ष
+
+अब हमने सीखा है कि एजेंटों को अच्छे परिणाम प्राप्त करने के लिए कैसे प्रशिक्षित किया जाए, बस उन्हें एक रिवार्ड फ़ंक्शन प्रदान करके जो खेल की वांछित स्थिति को परिभाषित करता है, और उन्हें खोज स्थान को बुद्धिमानी से अन्वेषण करने का अवसर देकर। हमने डिस्क्रीट और कंटीन्यूअस एनवायरनमेंट्स के मामलों में Q-लर्निंग एल्गोरिदम को सफलतापूर्वक लागू किया है, लेकिन डिस्क्रीट एक्शन्स के साथ।
+
+यह अध्ययन करना भी महत्वपूर्ण है कि जब एक्शन स्टेट भी कंटीन्यूअस होता है, और जब ऑब्जर्वेशन स्पेस बहुत अधिक जटिल होता है, जैसे कि अटारी गेम स्क्रीन से छवि। उन समस्याओं में हमें अक्सर अच्छे परिणाम प्राप्त करने के लिए अधिक शक्तिशाली मशीन लर्निंग तकनीकों, जैसे कि न्यूरल नेटवर्क्स, का उपयोग करने की आवश्यकता होती है। ये अधिक उन्नत विषय हमारे आगामी अधिक उन्नत एआई कोर्स के विषय हैं।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/2-Gym/assignment.md b/translations/hi/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..559b3b05c
--- /dev/null
+++ b/translations/hi/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# ट्रेन माउंटेन कार
+
+[OpenAI Gym](http://gym.openai.com) को इस तरह से डिज़ाइन किया गया है कि सभी परिवेश एक ही API प्रदान करते हैं - अर्थात् एक ही तरीके `reset`, `step` और `render`, और **क्रिया स्थान** और **अवलोकन स्थान** के समान अमूर्तता। इस प्रकार, न्यूनतम कोड परिवर्तनों के साथ विभिन्न परिवेशों के लिए समान सुदृढीकरण शिक्षण एल्गोरिदम को अनुकूलित करना संभव होना चाहिए।
+
+## एक माउंटेन कार पर्यावरण
+
+[Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) में एक कार एक घाटी में फंसी होती है:
+लक्ष्य है घाटी से बाहर निकलना और झंडा पकड़ना, प्रत्येक कदम पर निम्नलिखित क्रियाओं में से एक करके:
+
+| मूल्य | अर्थ |
+|---|---|
+| 0 | बाईं ओर तेजी लाएं |
+| 1 | तेजी न लाएं |
+| 2 | दाईं ओर तेजी लाएं |
+
+हालांकि, इस समस्या की मुख्य चाल यह है कि कार का इंजन एक ही पास में पहाड़ पर चढ़ने के लिए पर्याप्त मजबूत नहीं है। इसलिए, सफल होने का एकमात्र तरीका है गति बढ़ाने के लिए आगे-पीछे चलाना।
+
+अवलोकन स्थान में केवल दो मान होते हैं:
+
+| संख्या | अवलोकन | न्यूनतम | अधिकतम |
+|-----|--------------|-----|-----|
+| 0 | कार की स्थिति | -1.2| 0.6 |
+| 1 | कार का वेग | -0.07 | 0.07 |
+
+माउंटेन कार के लिए इनाम प्रणाली काफी पेचीदा है:
+
+ * यदि एजेंट ने पहाड़ के ऊपर झंडे तक पहुंच (स्थिति = 0.5) प्राप्त कर लिया है तो 0 का इनाम दिया जाता है।
+ * यदि एजेंट की स्थिति 0.5 से कम है तो -1 का इनाम दिया जाता है।
+
+एपिसोड समाप्त हो जाता है यदि कार की स्थिति 0.5 से अधिक है, या एपिसोड की लंबाई 200 से अधिक है।
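+
+एक न्यूनतम स्केच, पर्यावरण को प्रारंभ करने और एक यादृच्छिक एपिसोड चलाने के लिए (यह मानते हुए कि `gym` इंस्टॉल है; टिप्पणियाँ ऊपर की तालिकाओं पर आधारित हैं):
+
+```python
+import gym
+
+env = gym.make("MountainCar-v0")
+print(env.action_space)        # Discrete(3): left, none, right
+print(env.observation_space)   # Box: [position, velocity]
+
+obs = env.reset()
+done = False
+while not done:
+    obs, rew, done, info = env.step(env.action_space.sample())  # random action
+env.close()
+```
+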
+## निर्देश
+
+हमारे सुदृढीकरण शिक्षण एल्गोरिदम को माउंटेन कार समस्या को हल करने के लिए अनुकूलित करें। मौजूदा [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) कोड से शुरू करें, नया पर्यावरण प्रतिस्थापित करें, स्टेट डिस्क्रीटाइजेशन फ़ंक्शनों को बदलें, और मौजूदा एल्गोरिदम को न्यूनतम कोड संशोधनों के साथ प्रशिक्षित करने का प्रयास करें। हाइपरपैरामीटर समायोजित करके परिणाम का अनुकूलन करें।
+
+> **Note**: एल्गोरिदम को अभिसरित होने के लिए हाइपरपैरामीटर समायोजन की आवश्यकता हो सकती है।
+
+## रूब्रिक
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | --------- | -------- | ----------------- |
+| | Q-Learning एल्गोरिदम को CartPole उदाहरण से न्यूनतम कोड संशोधनों के साथ सफलतापूर्वक अनुकूलित किया गया है, जो 200 कदमों के भीतर झंडा पकड़ने की समस्या को हल कर लेता है। | इंटरनेट से एक नया Q-Learning एल्गोरिदम अपनाया गया है, लेकिन अच्छी तरह प्रलेखित है; या मौजूदा एल्गोरिदम अपनाया गया है, लेकिन वांछित परिणाम नहीं देता | छात्र कोई भी एल्गोरिदम सफलतापूर्वक अपनाने में सक्षम नहीं था, लेकिन समाधान की ओर महत्वपूर्ण कदम उठाए हैं (स्टेट डिस्क्रीटाइजेशन, Q-Table डेटा संरचना, आदि लागू किए हैं) |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/hi/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..0cff07e4c
--- /dev/null
+++ b/translations/hi/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियां या अशुद्धियां हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/2-Gym/solution/R/README.md b/translations/hi/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..01805e4da
--- /dev/null
+++ b/translations/hi/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/8-Reinforcement/README.md b/translations/hi/8-Reinforcement/README.md
new file mode 100644
index 000000000..96451f05f
--- /dev/null
+++ b/translations/hi/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# परिचय: सुदृढीकरण शिक्षण
+
+सुदृढीकरण शिक्षण, RL, को पर्यवेक्षित शिक्षण और अप्रशिक्षित शिक्षण के साथ एक बुनियादी मशीन लर्निंग प्रतिमान के रूप में देखा जाता है। RL निर्णयों के बारे में है: सही निर्णय देना या कम से कम उनसे सीखना।
+
+कल्पना कीजिए कि आपके पास एक सिम्युलेटेड वातावरण है जैसे कि शेयर बाजार। अगर आप एक विशेष नियम लागू करते हैं तो क्या होता है? क्या इसका सकारात्मक या नकारात्मक प्रभाव होता है? यदि कुछ नकारात्मक होता है, तो आपको इस _नकारात्मक सुदृढीकरण_ को लेना होगा, उससे सीखना होगा, और दिशा बदलनी होगी। यदि यह एक सकारात्मक परिणाम है, तो आपको उस _सकारात्मक सुदृढीकरण_ पर निर्माण करना होगा।
+
+
+
+> पीटर और उसके दोस्तों को भूखे भेड़िये से बचना है! चित्र [Jen Looper](https://twitter.com/jenlooper) द्वारा
+
+## क्षेत्रीय विषय: पीटर और भेड़िया (रूस)
+
+[पीटर और भेड़िया](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) एक संगीत परी कथा है जिसे रूसी संगीतकार [सर्गेई प्रोकोफिएव](https://en.wikipedia.org/wiki/Sergei_Prokofiev) ने लिखा है। यह एक युवा अग्रणी पीटर की कहानी है, जो बहादुरी से अपने घर से बाहर जंगल की साफ़ जगह पर भेड़िये का पीछा करने के लिए जाता है। इस अनुभाग में, हम मशीन लर्निंग एल्गोरिदम को प्रशिक्षित करेंगे जो पीटर की मदद करेंगे:
+
+- **आसपास के क्षेत्र का अन्वेषण करें** और एक इष्टतम नेविगेशन मानचित्र बनाएं
+- **सीखें** कि स्केटबोर्ड का उपयोग कैसे करें और उस पर संतुलन बनाए रखें, ताकि तेजी से घूम सकें।
+
+[पीटर और भेड़िया](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 प्रोकोफिएव द्वारा "पीटर और भेड़िया" सुनने के लिए ऊपर दिए गए लिंक पर क्लिक करें
+
+## सुदृढीकरण शिक्षण
+
+पिछले अनुभागों में, आपने मशीन लर्निंग समस्याओं के दो उदाहरण देखे हैं:
+
+- **पर्यवेक्षित**, जहाँ हमारे पास डेटा सेट होते हैं जो उस समस्या का समाधान सुझाते हैं जिसे हम हल करना चाहते हैं। [वर्गीकरण](../4-Classification/README.md) और [प्रतिगमन](../2-Regression/README.md) पर्यवेक्षित शिक्षण कार्य हैं।
+- **अप्रशिक्षित**, जिसमें हमारे पास लेबल किया हुआ प्रशिक्षण डेटा नहीं होता है। अप्रशिक्षित शिक्षण का मुख्य उदाहरण [क्लस्टरिंग](../5-Clustering/README.md) है।
+
+इस अनुभाग में, हम आपको एक नए प्रकार की शिक्षण समस्या से परिचित कराएंगे जिसके लिए लेबल किया हुआ प्रशिक्षण डेटा आवश्यक नहीं है। ऐसे कई प्रकार की समस्याएं हैं:
+
+- **[अर्ध-पर्यवेक्षित शिक्षण](https://wikipedia.org/wiki/Semi-supervised_learning)**, जिसमें हमारे पास बहुत सारा बिना लेबल का डेटा होता है जिसका उपयोग मॉडल को पूर्व-प्रशिक्षित करने के लिए किया जा सकता है।
+- **[सुदृढीकरण शिक्षण](https://wikipedia.org/wiki/Reinforcement_learning)**, जिसमें एक एजेंट कुछ सिम्युलेटेड वातावरण में प्रयोग करके व्यवहार करना सीखता है।
+
+### उदाहरण - कंप्यूटर गेम
+
+मान लीजिए आप कंप्यूटर को कोई गेम खेलना सिखाना चाहते हैं, जैसे शतरंज, या [सुपर मारियो](https://wikipedia.org/wiki/Super_Mario)। कंप्यूटर को गेम खेलने के लिए, हमें उसे यह अनुमान लगाना होगा कि प्रत्येक गेम स्थिति में कौन सा कदम उठाना है। जबकि यह एक वर्गीकरण समस्या की तरह लग सकता है, ऐसा नहीं है - क्योंकि हमारे पास स्थिति और संबंधित क्रियाओं के साथ एक डेटा सेट नहीं है। हमारे पास कुछ डेटा हो सकता है जैसे मौजूदा शतरंज मैच या खिलाड़ी सुपर मारियो खेलते हुए, लेकिन संभावना है कि वह डेटा पर्याप्त रूप से बड़ी संख्या में संभावित स्थितियों को कवर नहीं करेगा।
+
+मौजूदा गेम डेटा की तलाश करने के बजाय, **सुदृढीकरण शिक्षण** (RL) *कंप्यूटर को कई बार खेल खेलने और परिणाम का अवलोकन करने* के विचार पर आधारित है। इस प्रकार, सुदृढीकरण शिक्षण को लागू करने के लिए, हमें दो चीजों की आवश्यकता होती है:
+
+- **एक वातावरण** और **एक सिम्युलेटर** जो हमें कई बार गेम खेलने की अनुमति देता है। यह सिम्युलेटर सभी गेम नियमों के साथ-साथ संभावित स्थितियों और क्रियाओं को परिभाषित करेगा।
+
+- **एक पुरस्कार फ़ंक्शन**, जो हमें यह बताएगा कि प्रत्येक चाल या गेम के दौरान हमने कितना अच्छा किया।
+
+अन्य प्रकार की मशीन लर्निंग और RL के बीच मुख्य अंतर यह है कि RL में हम आमतौर पर यह नहीं जानते कि हम जीतेंगे या हारेंगे जब तक कि हम गेम समाप्त नहीं करते। इस प्रकार, हम यह नहीं कह सकते कि एक निश्चित चाल अकेले अच्छी है या नहीं - हमें केवल गेम के अंत में एक पुरस्कार प्राप्त होता है। और हमारा लक्ष्य ऐसे एल्गोरिदम डिजाइन करना है जो हमें अनिश्चित परिस्थितियों में एक मॉडल को प्रशिक्षित करने की अनुमति देंगे। हम एक RL एल्गोरिदम के बारे में जानेंगे जिसे **Q-लर्निंग** कहा जाता है।
+
+## पाठ
+
+1. [सुदृढीकरण शिक्षण और Q-लर्निंग का परिचय](1-QLearning/README.md)
+2. [जिम सिम्युलेशन वातावरण का उपयोग करना](2-Gym/README.md)
+
+## श्रेय
+
+"सुदृढीकरण शिक्षण का परिचय" ♥️ के साथ [Dmitry Soshnikov](http://soshnikov.com) द्वारा लिखा गया था
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/9-Real-World/1-Applications/README.md b/translations/hi/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..6542ba148
--- /dev/null
+++ b/translations/hi/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# परिशिष्ट: वास्तविक दुनिया में मशीन लर्निंग
+
+
+> स्केच नोट [Tomomi Imura](https://www.twitter.com/girlie_mac) द्वारा
+
+इस पाठ्यक्रम में, आपने डेटा को प्रशिक्षण के लिए तैयार करने और मशीन लर्निंग मॉडल बनाने के कई तरीके सीखे हैं। आपने शास्त्रीय रिग्रेशन, क्लस्टरिंग, क्लासिफिकेशन, नेचुरल लैंग्वेज प्रोसेसिंग, और टाइम सीरीज मॉडल की एक श्रृंखला बनाई। बधाई हो! अब, आप सोच रहे होंगे कि यह सब किस लिए है... इन मॉडलों के वास्तविक दुनिया में क्या अनुप्रयोग हैं?
+
+जबकि उद्योग में एआई में बहुत रुचि है, जो आमतौर पर डीप लर्निंग का उपयोग करता है, फिर भी क्लासिकल मशीन लर्निंग मॉडलों के लिए मूल्यवान अनुप्रयोग हैं। आप आज इनमें से कुछ अनुप्रयोगों का भी उपयोग कर सकते हैं! इस पाठ में, आप जानेंगे कि आठ विभिन्न उद्योग और विषय-विशेषज्ञता डोमेन इन प्रकार के मॉडलों का उपयोग कैसे करते हैं ताकि उनके अनुप्रयोग अधिक प्रभावी, विश्वसनीय, बुद्धिमान, और उपयोगकर्ताओं के लिए मूल्यवान बन सकें।
+
+## [पूर्व-व्याख्यान क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 वित्त
+
+वित्त क्षेत्र में मशीन लर्निंग के कई अवसर हैं। इस क्षेत्र की कई समस्याओं को एमएल का उपयोग करके मॉडल किया और हल किया जा सकता है।
+
+### क्रेडिट कार्ड धोखाधड़ी का पता लगाना
+
+हमने पाठ्यक्रम में पहले [k-means क्लस्टरिंग](../../5-Clustering/2-K-Means/README.md) के बारे में सीखा, लेकिन इसे क्रेडिट कार्ड धोखाधड़ी से संबंधित समस्याओं को हल करने के लिए कैसे उपयोग किया जा सकता है?
+
+K-means क्लस्टरिंग क्रेडिट कार्ड धोखाधड़ी का पता लगाने की उस तकनीक में सहायक होती है जिसे **आउटलायर डिटेक्शन** कहा जाता है। आउटलायर, यानी डेटा सेट के अवलोकनों में विचलन, हमें बता सकते हैं कि क्रेडिट कार्ड सामान्य रूप से उपयोग हो रहा है या कुछ असामान्य हो रहा है। नीचे लिंक किए गए पेपर में दिखाए अनुसार, आप k-means क्लस्टरिंग एल्गोरिदम से क्रेडिट कार्ड डेटा को क्लस्टरों में बाँट सकते हैं और प्रत्येक लेनदेन को इस आधार पर परख सकते हैं कि वह कितना बड़ा आउटलायर प्रतीत होता है (संदर्भ के बाद एक छोटा स्केच दिया गया है)। फिर, आप धोखाधड़ी बनाम वैध लेनदेन के लिए सबसे जोखिम भरे क्लस्टरों का मूल्यांकन कर सकते हैं।
+[संदर्भ](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
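+
+एक न्यूनतम स्केच, काल्पनिक डेटा पर, यह दिखाने के लिए कि क्लस्टर-केंद्र से दूरी को आउटलायर स्कोर के रूप में कैसे उपयोग किया जा सकता है:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# Hypothetical example data: each row is a transaction (amount, hour, etc.)
+rng = np.random.default_rng(0)
+X = rng.normal(size=(1000, 3))
+
+kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
+# Distance of each transaction to its cluster center serves as an outlier score
+dist = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)
+suspects = np.argsort(dist)[-10:]   # transactions with the largest deviation
+print(suspects)
+```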
+
+### धन प्रबंधन
+
+धन प्रबंधन में, एक व्यक्ति या फर्म अपने ग्राहकों की ओर से निवेश संभालती है। उनका काम दीर्घकालिक में धन को बनाए रखना और बढ़ाना है, इसलिए यह आवश्यक है कि वे निवेशों का चयन करें जो अच्छा प्रदर्शन करें।
+
+किसी विशेष निवेश के प्रदर्शन का मूल्यांकन करने का एक तरीका सांख्यिकीय रिग्रेशन के माध्यम से है। [लिनियर रिग्रेशन](../../2-Regression/1-Tools/README.md) यह समझने के लिए एक मूल्यवान उपकरण है कि एक फंड कुछ बेंचमार्क के सापेक्ष कैसा प्रदर्शन करता है। हम यह भी अनुमान लगा सकते हैं कि रिग्रेशन के परिणाम सांख्यिकीय रूप से महत्वपूर्ण हैं या नहीं, या वे ग्राहक के निवेश को कितना प्रभावित करेंगे। आप अपने विश्लेषण को मल्टीपल रिग्रेशन का उपयोग करके और भी विस्तार कर सकते हैं, जहां अतिरिक्त जोखिम कारकों को ध्यान में रखा जा सकता है। एक विशिष्ट फंड के लिए यह कैसे काम करेगा, इसका एक उदाहरण देखने के लिए नीचे दिए गए पेपर को देखें।
+[संदर्भ](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
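+
+एक न्यूनतम स्केच, काल्पनिक मासिक रिटर्न पर: बेंचमार्क के सापेक्ष फंड के प्रदर्शन का रिग्रेशन (वित्त में ढलान और इंटरसेप्ट को beta/alpha कहा जाता है):
+
+```python
+import numpy as np
+from sklearn.linear_model import LinearRegression
+
+# Hypothetical monthly returns for a benchmark and a fund
+rng = np.random.default_rng(1)
+benchmark = rng.normal(0.01, 0.04, size=(60, 1))
+fund = 0.9 * benchmark + rng.normal(0.002, 0.01, size=(60, 1))
+
+reg = LinearRegression().fit(benchmark, fund)
+print("beta:", reg.coef_[0][0])      # sensitivity to the benchmark
+print("alpha:", reg.intercept_[0])   # excess return relative to the benchmark
+```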
+
+## 🎓 शिक्षा
+
+शैक्षिक क्षेत्र भी एक बहुत ही दिलचस्प क्षेत्र है जहां एमएल लागू किया जा सकता है। यहां कई दिलचस्प समस्याएं हैं जैसे परीक्षाओं या निबंधों में धोखाधड़ी का पता लगाना या सुधार प्रक्रिया में पूर्वाग्रह का प्रबंधन करना।
+
+### छात्र व्यवहार की भविष्यवाणी
+
+[Coursera](https://coursera.com), एक ऑनलाइन ओपन कोर्स प्रोवाइडर, के पास एक शानदार टेक ब्लॉग है जहां वे कई इंजीनियरिंग निर्णयों पर चर्चा करते हैं। इस केस स्टडी में, उन्होंने एक रिग्रेशन लाइन को प्लॉट किया ताकि यह पता लगाया जा सके कि कम एनपीएस (नेट प्रमोटर स्कोर) रेटिंग और कोर्स रिटेंशन या ड्रॉप-ऑफ के बीच कोई संबंध है या नहीं।
+[संदर्भ](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### पूर्वाग्रह को कम करना
+
+[Grammarly](https://grammarly.com), एक लेखन सहायक जो वर्तनी और व्याकरण की त्रुटियों की जांच करता है, अपने उत्पादों में उन्नत [प्राकृतिक भाषा प्रसंस्करण प्रणालियों](../../6-NLP/README.md) का उपयोग करता है। उन्होंने अपने टेक ब्लॉग में एक दिलचस्प केस स्टडी प्रकाशित की है कि उन्होंने मशीन लर्निंग में जेंडर पूर्वाग्रह से कैसे निपटा, जिसके बारे में आपने हमारे [प्रारंभिक निष्पक्षता पाठ](../../1-Introduction/3-fairness/README.md) में सीखा।
+[संदर्भ](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 रिटेल
+
+रिटेल सेक्टर निश्चित रूप से एमएल के उपयोग से लाभ उठा सकता है, बेहतर ग्राहक यात्रा बनाने से लेकर इन्वेंटरी को इष्टतम तरीके से स्टॉक करने तक।
+
+### ग्राहक यात्रा को व्यक्तिगत बनाना
+
+Wayfair, एक कंपनी जो फर्नीचर जैसी घरेलू वस्तुएं बेचती है, के लिए ग्राहकों को उनके स्वाद और आवश्यकताओं के अनुसार सही उत्पाद खोजने में मदद करना महत्वपूर्ण है। इस लेख में, कंपनी के इंजीनियर बताते हैं कि वे एमएल और एनएलपी का उपयोग कैसे करते हैं ताकि "ग्राहकों के लिए सही परिणाम सामने आएं"। विशेष रूप से, उनका क्वेरी इंटेंट इंजन इकाई निष्कर्षण, क्लासिफायर प्रशिक्षण, एसेट और राय निष्कर्षण, और ग्राहक समीक्षाओं पर भावना टैगिंग का उपयोग करने के लिए बनाया गया है। यह ऑनलाइन रिटेल में एनएलपी के काम करने का एक क्लासिक उपयोग मामला है।
+[संदर्भ](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### इन्वेंटरी प्रबंधन
+
+[StitchFix](https://stitchfix.com) जैसी अभिनव, फुर्तीली कंपनियां, जो उपभोक्ताओं को कपड़े भेजने वाली एक बॉक्स सेवा है, अनुशंसाओं और इन्वेंटरी प्रबंधन के लिए बड़े पैमाने पर एमएल पर निर्भर करती हैं। उनकी स्टाइलिंग टीमें उनके मर्चेंडाइजिंग टीमों के साथ मिलकर काम करती हैं, वास्तव में: "हमारे एक डेटा वैज्ञानिक ने एक जेनेटिक एल्गोरिदम के साथ छेड़छाड़ की और इसे परिधान पर लागू किया ताकि यह अनुमान लगाया जा सके कि कौन सा कपड़ा आज सफल होगा। हमने इसे मर्चेंडाइज टीम के पास लाया और अब वे इसे एक उपकरण के रूप में उपयोग कर सकते हैं।"
+[संदर्भ](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 स्वास्थ्य देखभाल
+
+स्वास्थ्य देखभाल क्षेत्र अनुसंधान कार्यों को अनुकूलित करने और साथ ही रोगियों को फिर से भर्ती करने या रोगों के फैलने से रोकने जैसी लॉजिस्टिक समस्याओं को हल करने के लिए एमएल का लाभ उठा सकता है।
+
+### क्लिनिकल ट्रायल का प्रबंधन
+
+क्लिनिकल ट्रायल में विषाक्तता दवा निर्माताओं के लिए एक प्रमुख चिंता है। कितनी विषाक्तता सहनशील है? इस अध्ययन में, विभिन्न क्लिनिकल ट्रायल विधियों का विश्लेषण करने से क्लिनिकल ट्रायल परिणामों की संभावनाओं की भविष्यवाणी के लिए एक नया दृष्टिकोण विकसित हुआ। विशेष रूप से, वे एक [क्लासिफायर](../../4-Classification/README.md) का उपयोग करने में सक्षम थे जो दवाओं के समूहों के बीच अंतर कर सकता है।
+[संदर्भ](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### अस्पताल पुन: प्रवेश प्रबंधन
+
+अस्पताल देखभाल महंगी है, विशेष रूप से जब मरीजों को फिर से भर्ती करना पड़ता है। इस पेपर में एक कंपनी पर चर्चा की गई है जो [क्लस्टरिंग](../../5-Clustering/README.md) एल्गोरिदम का उपयोग करके पुन: प्रवेश की संभावना की भविष्यवाणी करने के लिए एमएल का उपयोग करती है। ये क्लस्टर विश्लेषकों को "पुन: प्रवेश के समूहों की खोज करने में मदद करते हैं जिनके पास एक सामान्य कारण हो सकता है"।
+[संदर्भ](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
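+
+अवधारणा को स्पष्ट करने के लिए नीचे Scikit-learn के एक क्लस्टरिंग एल्गोरिदम का काल्पनिक स्केच है; फीचर नाम और संख्याएँ केवल उदाहरण हैं, लेख की वास्तविक विधि नहीं।
+
+```python
+# एक काल्पनिक स्केच: पुन: प्रवेश रिकॉर्ड को क्लस्टर करके संभावित
+# सामान्य कारणों वाले समूह खोजना; डेटा और फीचर पूरी तरह से बनावटी हैं
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import AgglomerativeClustering
+
+# काल्पनिक फीचर: [आयु, भर्ती की अवधि (दिन), पिछले दौरे, निदान-कोड संख्या]
+X = np.array([
+    [72, 8, 5, 3], [68, 7, 4, 3], [35, 2, 0, 1],
+    [40, 3, 1, 1], [75, 9, 6, 4], [30, 1, 0, 1],
+])
+
+X_scaled = StandardScaler().fit_transform(X)  # फीचर स्केल समान करें
+labels = AgglomerativeClustering(n_clusters=2).fit_predict(X_scaled)
+print("क्लस्टर लेबल:", labels)
+```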
+
+### रोग प्रबंधन
+
+हालिया महामारी ने यह उजागर किया है कि मशीन लर्निंग रोग के फैलाव को रोकने में कैसे मदद कर सकती है। इस लेख में, आप ARIMA, लॉजिस्टिक कर्व्स, लिनियर रिग्रेशन, और SARIMA के उपयोग को पहचानेंगे। "यह काम इस वायरस के फैलाव की दर की गणना करने और इस प्रकार मौतों, रिकवरी, और पुष्टि किए गए मामलों की भविष्यवाणी करने का एक प्रयास है, ताकि यह हमें बेहतर तैयार करने और जीवित रहने में मदद कर सके।"
+[संदर्भ](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
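+
+ऐसे पूर्वानुमान का एक न्यूनतम स्केच नीचे है, जिसमें `statsmodels` के SARIMAX मॉडल को एक पूरी तरह से काल्पनिक मामलों की श्रृंखला पर लागू किया गया है; मॉडल ऑर्डर केवल उदाहरण के लिए चुना गया है।
+
+```python
+# एक काल्पनिक स्केच: ARIMA-परिवार के मॉडल से पुष्टि किए गए मामलों की
+# बनावटी समय श्रृंखला का पूर्वानुमान (statsmodels आवश्यक है)
+import pandas as pd
+from statsmodels.tsa.statespace.sarimax import SARIMAX
+
+# काल्पनिक दैनिक मामले (वास्तविक डेटा का स्थान लेते हैं)
+cases = pd.Series(
+    [10, 12, 15, 20, 26, 33, 41, 52, 64, 79, 95, 114, 135, 160],
+    index=pd.date_range("2020-03-01", periods=14, freq="D"),
+)
+
+# order=(p, d, q) यहाँ केवल उदाहरण के लिए चुना गया है
+model = SARIMAX(cases, order=(1, 1, 1)).fit(disp=False)
+print(model.forecast(steps=7))  # अगले 7 दिनों का पूर्वानुमान
+```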
+
+## 🌲 पारिस्थितिकी और ग्रीन टेक
+
+प्रकृति और पारिस्थितिकी में कई संवेदनशील प्रणालियाँ होती हैं जहाँ जानवरों और प्रकृति के बीच की अंतःक्रिया ध्यान में आती है। इन प्रणालियों को सटीक रूप से मापना और कुछ होने पर उचित कार्रवाई करना महत्वपूर्ण है, जैसे कि जंगल की आग या जानवरों की आबादी में गिरावट।
+
+### वन प्रबंधन
+
+आपने पिछले पाठों में [रिइनफोर्समेंट लर्निंग](../../8-Reinforcement/README.md) के बारे में सीखा। यह प्रकृति में पैटर्न की भविष्यवाणी करने में बहुत उपयोगी हो सकता है। विशेष रूप से, इसका उपयोग जंगल की आग और आक्रामक प्रजातियों के फैलाव जैसी पारिस्थितिक समस्याओं को ट्रैक करने के लिए किया जा सकता है। कनाडा में, एक समूह ने उपग्रह चित्रों से जंगल की आग की गतिशीलता का मॉडल बनाने के लिए रिइनफोर्समेंट लर्निंग का उपयोग किया। एक अभिनव "स्थानिक रूप से फैलने वाली प्रक्रिया (SSP)" का उपयोग करके, उन्होंने जंगल की आग को "परिदृश्य में किसी भी सेल में एक एजेंट" के रूप में देखा। "किसी स्थान से किसी भी समय पर आग द्वारा लिए जा सकने वाली क्रियाओं के सेट में उत्तर, दक्षिण, पूर्व या पश्चिम में फैलना या न फैलना शामिल है।
+
+इस दृष्टिकोण ने सामान्य RL सेटअप को उलट दिया क्योंकि संबंधित मार्कोव निर्णय प्रक्रिया (MDP) की गतिशीलता एक ज्ञात फलन है।" इस समूह द्वारा उपयोग किए गए क्लासिक एल्गोरिदम के बारे में अधिक पढ़ने के लिए नीचे दिए गए लिंक पर जाएं।
+[संदर्भ](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### जानवरों की गति का पता लगाना
+
+जबकि डीप लर्निंग ने जानवरों की गति को दृश्य रूप से ट्रैक करने में क्रांति ला दी है (आप अपना खुद का [ध्रुवीय भालू ट्रैकर](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) यहाँ बना सकते हैं), क्लासिकल एमएल का इस कार्य में अभी भी एक स्थान है।
+
+फार्म जानवरों की गति को ट्रैक करने वाले आईओटी सेंसर इस प्रकार की दृश्य प्रसंस्करण का उपयोग करते हैं, लेकिन अधिक बुनियादी एमएल तकनीकें डेटा के पूर्व-प्रसंस्करण में उपयोगी हैं। उदाहरण के लिए, इस पेपर में विभिन्न क्लासिफायर एल्गोरिदम का उपयोग करके भेड़ों की मुद्राओं की निगरानी और विश्लेषण किया गया। आप पृष्ठ 335 पर आरओसी कर्व को पहचान सकते हैं।
+[संदर्भ](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
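+
+पेपर में उल्लिखित आरओसी कर्व की अवधारणा दिखाने के लिए नीचे एक काल्पनिक स्केच है: बनावटी सेंसर डेटा पर एक सरल क्लासिफायर प्रशिक्षित कर `roc_curve` से कर्व के बिंदु निकाले गए हैं; यह पेपर की वास्तविक विधि नहीं है।
+
+```python
+# एक काल्पनिक स्केच: सेंसर डेटा से मुद्रा-वर्गीकरण और आरओसी कर्व;
+# डेटा बनावटी है
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import roc_curve, roc_auc_score
+from sklearn.model_selection import train_test_split
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(200, 4))             # काल्पनिक सेंसर फीचर
+y = (X[:, 0] + X[:, 1] > 0).astype(int)   # काल्पनिक मुद्रा लेबल
+
+X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
+clf = LogisticRegression().fit(X_tr, y_tr)
+
+scores = clf.predict_proba(X_te)[:, 1]
+fpr, tpr, _ = roc_curve(y_te, scores)     # कर्व के बिंदु
+print("AUC:", roc_auc_score(y_te, scores))
+```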
+
+### ⚡️ ऊर्जा प्रबंधन
+
+[टाइम सीरीज फोरकास्टिंग](../../7-TimeSeries/README.md) पर हमारे पाठों में, हमने एक शहर के लिए आपूर्ति और मांग को समझकर राजस्व उत्पन्न करने हेतु स्मार्ट पार्किंग मीटर की अवधारणा का उपयोग किया। इस लेख में विस्तार से बताया गया है कि स्मार्ट मीटरिंग पर आधारित डेटा के साथ क्लस्टरिंग, रिग्रेशन और टाइम सीरीज फोरकास्टिंग को मिलाकर आयरलैंड में भविष्य की ऊर्जा खपत की भविष्यवाणी करने में कैसे मदद मिली।
+[संदर्भ](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 बीमा
+
+बीमा क्षेत्र एक और ऐसा क्षेत्र है जो व्यवहार्य वित्तीय और बीमांकिक (actuarial) मॉडल बनाने और अनुकूलित करने के लिए एमएल का उपयोग करता है।
+
+### अस्थिरता प्रबंधन
+
+MetLife, एक जीवन बीमा प्रदाता, अपने वित्तीय मॉडलों में अस्थिरता का विश्लेषण और शमन करने के तरीके के बारे में खुलकर बताता है। इस लेख में आप बाइनरी और ऑर्डिनल क्लासिफिकेशन विज़ुअलाइज़ेशन देखेंगे। आप फोरकास्टिंग विज़ुअलाइज़ेशन भी खोजेंगे।
+[संदर्भ](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 कला, संस्कृति, और साहित्य
+
+कला में, उदाहरण के लिए पत्रकारिता में, कई दिलचस्प समस्याएं हैं। फेक न्यूज़ का पता लगाना एक बड़ी समस्या है क्योंकि यह साबित हो चुका है कि यह लोगों की राय को प्रभावित कर सकता है और यहां तक कि लोकतंत्रों को भी गिरा सकता है। संग्रहालय भी एमएल का उपयोग करके लाभ उठा सकते हैं, जैसे कि कलाकृतियों के बीच लिंक खोजने से लेकर संसाधन योजना तक।
+
+### फेक न्यूज़ का पता लगाना
+
+आज के मीडिया में फेक न्यूज़ का पता लगाना बिल्ली और चूहे का खेल बन गया है। इस लेख में, शोधकर्ता एक ऐसी प्रणाली का सुझाव देते हैं जो हमारे द्वारा अध्ययन की गई कई एमएल तकनीकों को मिलाती है और सबसे अच्छे मॉडल को तैनात करती है: "यह प्रणाली डेटा से विशेषताएं निकालने के लिए प्राकृतिक भाषा प्रसंस्करण पर आधारित है और फिर इन विशेषताओं का उपयोग मशीन लर्निंग क्लासिफायर जैसे Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), और Logistic Regression (LR) के प्रशिक्षण के लिए किया जाता है।"
+[संदर्भ](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
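+
+उद्धृत पाइपलाइन की भावना में नीचे एक छोटा सा काल्पनिक स्केच है: TF-IDF फीचर पर वही पाँच क्लासिफायर प्रशिक्षित किए गए हैं; टेक्स्ट और लेबल पूरी तरह से बनावटी हैं।
+
+```python
+# एक काल्पनिक स्केच: TF-IDF फीचर + कई क्लासिफायर की तुलना
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.naive_bayes import MultinomialNB
+from sklearn.svm import LinearSVC
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.linear_model import SGDClassifier, LogisticRegression
+
+texts = ["miracle cure found", "officials confirm new policy",
+         "secret trick doctors hate", "court publishes ruling today"]
+labels = [1, 0, 1, 0]   # 1 = फेक, 0 = वास्तविक (केवल उदाहरण)
+
+X = TfidfVectorizer().fit_transform(texts)  # टेक्स्ट से फीचर निकालें
+for clf in [MultinomialNB(), LinearSVC(), RandomForestClassifier(),
+            SGDClassifier(), LogisticRegression()]:
+    clf.fit(X, labels)
+    print(type(clf).__name__, clf.predict(X))
+```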
+
+यह लेख दिखाता है कि विभिन्न एमएल डोमेन को मिलाकर दिलचस्प परिणाम प्राप्त किए जा सकते हैं जो फेक न्यूज़ को फैलने और वास्तविक नुकसान से बचाने में मदद कर सकते हैं; इस मामले में, प्रेरणा COVID उपचारों के बारे में अफवाहों के फैलाव से हुई हिंसा थी।
+
+### संग्रहालय एमएल
+
+संग्रहालय एआई क्रांति के कगार पर हैं, जिसमें तकनीक की उन्नति के साथ संग्रहों को सूचीबद्ध और डिजिटाइज़ करना तथा कलाकृतियों के बीच लिंक खोजना आसान होता जा रहा है। [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) जैसे प्रोजेक्ट वेटिकन आर्काइव्स जैसे दुर्गम संग्रहों के रहस्यों को उजागर करने में मदद कर रहे हैं। लेकिन, संग्रहालयों के व्यावसायिक पहलू भी एमएल मॉडल से लाभान्वित होते हैं।
+
+उदाहरण के लिए, शिकागो का आर्ट इंस्टीट्यूट ऐसे मॉडल बनाता है जिनसे यह अनुमान लगाया जा सके कि दर्शक किसमें रुचि रखते हैं और वे कब प्रदर्शनियों में आएंगे। लक्ष्य यह है कि उपयोगकर्ता जब भी संग्रहालय का दौरा करे, उसे व्यक्तिगत और अनुकूलित आगंतुक अनुभव मिले। "वित्तीय वर्ष 2017 के दौरान, मॉडल ने उपस्थिति और प्रवेश की भविष्यवाणी 1 प्रतिशत सटीकता के भीतर की," आर्ट इंस्टीट्यूट के वरिष्ठ उपाध्यक्ष एंड्रयू सिमनिक कहते हैं।
+[संदर्भ](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 मार्केटिंग
+
+### ग्राहक विभाजन
+
+सबसे प्रभावी मार्केटिंग रणनीतियाँ विभिन्न समूहों के आधार पर ग्राहकों को अलग-अलग तरीकों से लक्षित करती हैं। इस लेख में, विभेदित मार्केटिंग का समर्थन करने के लिए क्लस्टरिंग एल्गोरिदम के उपयोगों पर चर्चा की गई है। विभेदित मार्केटिंग कंपनियों को ब्रांड पहचान में सुधार करने, अधिक ग्राहकों तक पहुँचने और अधिक पैसा कमाने में मदद करती है।
+[संदर्भ](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
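+
+क्लस्टरिंग-आधारित ग्राहक विभाजन का एक काल्पनिक स्केच नीचे है, जिसमें K-Means को बनावटी ग्राहक फीचर पर लागू किया गया है।
+
+```python
+# एक काल्पनिक स्केच: ग्राहकों को KMeans से विभाजित करना;
+# फीचर और संख्याएँ केवल उदाहरण हैं
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+# काल्पनिक फीचर: [वार्षिक खर्च, विज़िट/माह, औसत ऑर्डर मूल्य]
+X = np.array([[500, 2, 40], [5200, 12, 95], [480, 1, 35],
+              [4900, 10, 110], [1500, 5, 60], [1600, 4, 55]])
+
+kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
+segments = kmeans.fit_predict(StandardScaler().fit_transform(X))
+print("ग्राहक सेगमेंट:", segments)
+```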
+
+## 🚀 चुनौती
+
+उस अन्य क्षेत्र की पहचान करें जो इस पाठ्यक्रम में सीखी गई कुछ तकनीकों से लाभान्वित होता है, और खोजें कि यह ML का उपयोग कैसे करता है।
+
+## [व्याख्यान के बाद का क्विज़](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## समीक्षा और आत्म-अध्ययन
+
+Wayfair डेटा साइंस टीम के पास कई दिलचस्प वीडियो हैं कि वे अपनी कंपनी में ML का उपयोग कैसे करते हैं। यह [देखने लायक](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos) है!
+
+## असाइनमेंट
+
+[एक एमएल स्कैवेंजर हंट](assignment.md)
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में अधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/9-Real-World/1-Applications/assignment.md b/translations/hi/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..04699ff51
--- /dev/null
+++ b/translations/hi/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# एक एमएल स्कैवेंजर हंट
+
+## निर्देश
+
+इस पाठ में, आपने कई वास्तविक जीवन के ऐसे उपयोग मामलों के बारे में सीखा जिन्हें क्लासिकल एमएल का उपयोग करके हल किया गया था। जबकि डीप लर्निंग, एआई की नई तकनीकों और उपकरणों, और न्यूरल नेटवर्क के उपयोग ने इन क्षेत्रों में मदद करने वाले उपकरणों के निर्माण में तेजी लाई है, इस पाठ्यक्रम की तकनीकों वाला क्लासिक एमएल अभी भी बहुत मूल्य रखता है।
+
+इस असाइनमेंट में, कल्पना करें कि आप एक हैकाथॉन में भाग ले रहे हैं। पाठ्यक्रम में आपने जो सीखा है उसका उपयोग करके इस पाठ में चर्चा किए गए किसी एक क्षेत्र में समस्या को हल करने के लिए क्लासिक एमएल का उपयोग करके एक समाधान प्रस्तावित करें। एक प्रस्तुति बनाएं जिसमें आप अपने विचार को लागू करने के तरीके पर चर्चा करें। बोनस अंक यदि आप नमूना डेटा एकत्र कर सकते हैं और अपनी अवधारणा का समर्थन करने के लिए एक एमएल मॉडल बना सकते हैं!
+
+## मूल्यांकन
+
+| मानदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| -------- | ------------------------------------------------------------------- | ------------------------------------------------- | ---------------------- |
+| | एक पावरपॉइंट प्रस्तुति प्रस्तुत की गई है - मॉडल बनाने के लिए बोनस | एक गैर-नवीन, बुनियादी प्रस्तुति प्रस्तुत की गई है | काम अधूरा है |
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/9-Real-World/2-Debugging-ML-Models/README.md b/translations/hi/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..d2c385081
--- /dev/null
+++ b/translations/hi/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,114 @@
+# पोस्टस्क्रिप्ट: जिम्मेदार एआई डैशबोर्ड घटकों का उपयोग करके मशीन लर्निंग में मॉडल डिबगिंग
+
+## [प्री-लेक्चर क्विज](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## परिचय
+
+मशीन लर्निंग हमारे रोजमर्रा के जीवन को प्रभावित करती है। एआई हमारे समाज के सबसे महत्वपूर्ण सिस्टमों में अपनी जगह बना रहा है, जैसे कि स्वास्थ्य सेवा, वित्त, शिक्षा और रोजगार। उदाहरण के लिए, सिस्टम और मॉडल दैनिक निर्णय लेने के कार्यों में शामिल होते हैं, जैसे कि स्वास्थ्य देखभाल निदान या धोखाधड़ी का पता लगाना। परिणामस्वरूप, एआई में प्रगति और उसके तेजी से अपनाए जाने के साथ-साथ सामाजिक अपेक्षाएं और विनियमन भी बदल रहे हैं। हम लगातार देखते हैं कि एआई सिस्टम कहाँ अपेक्षाओं को पूरा नहीं करते और नई चुनौतियां उजागर करते हैं, और सरकारें एआई समाधानों को विनियमित करना शुरू कर रही हैं। इसलिए, यह महत्वपूर्ण है कि इन मॉडलों का विश्लेषण किया जाए ताकि सभी के लिए निष्पक्ष, विश्वसनीय, समावेशी, पारदर्शी और जवाबदेह परिणाम प्रदान किए जा सकें।
+
+इस पाठ्यक्रम में, हम व्यावहारिक उपकरणों को देखेंगे जिनका उपयोग यह आकलन करने के लिए किया जा सकता है कि किसी मॉडल में जिम्मेदार एआई से जुड़े मुद्दे हैं या नहीं। पारंपरिक मशीन लर्निंग डिबगिंग तकनीकें आमतौर पर मात्रात्मक गणनाओं पर आधारित होती हैं, जैसे कि समग्र सटीकता या औसत त्रुटि हानि। कल्पना करें कि जिस डेटा से आप ये मॉडल बना रहे हैं उसमें नस्ल, लिंग, राजनीतिक दृष्टिकोण या धर्म जैसी कुछ जनसांख्यिकी की कमी हो, या वे असंतुलित रूप से प्रतिनिधित्व करती हों। और जब मॉडल का आउटपुट किसी एक जनसांख्यिकी के पक्ष में झुका हो, तब क्या होगा? इससे संवेदनशील विशेषता समूहों के अधिक या कम प्रतिनिधित्व का जोखिम पैदा होता है, जिससे मॉडल में निष्पक्षता, समावेशिता या विश्वसनीयता के मुद्दे उत्पन्न हो सकते हैं। एक और चुनौती यह है कि मशीन लर्निंग मॉडल को ब्लैक बॉक्स माना जाता है, जिससे यह समझना और समझाना मुश्किल हो जाता है कि मॉडल की भविष्यवाणी को क्या प्रेरित करता है। इन सभी चुनौतियों का सामना डेटा वैज्ञानिकों और एआई डेवलपर्स को तब करना पड़ता है जब उनके पास मॉडल की निष्पक्षता या विश्वसनीयता को डिबग और आकलन करने के लिए पर्याप्त उपकरण नहीं होते।
+
+इस पाठ में, आप अपने मॉडलों को डिबग करने के बारे में जानेंगे:
+
+- **त्रुटि विश्लेषण**: यह पहचानें कि आपके डेटा वितरण में मॉडल की उच्च त्रुटि दर कहाँ है।
+- **मॉडल ओवरव्यू**: विभिन्न डेटा समूहों के बीच तुलनात्मक विश्लेषण करें ताकि आपके मॉडल के प्रदर्शन मेट्रिक्स में असमानताएं खोजी जा सकें।
+- **डेटा विश्लेषण**: यह जांचें कि आपके डेटा में कहाँ अधिक या कम प्रतिनिधित्व हो सकता है जो आपके मॉडल को एक डेटा जनसांख्यिकी के पक्ष में कर सकता है।
+- **फीचर महत्व**: समझें कि कौन सी विशेषताएं आपके मॉडल की भविष्यवाणियों को वैश्विक स्तर या स्थानीय स्तर पर प्रेरित कर रही हैं।
+
+## पूर्वापेक्षा
+
+पूर्वापेक्षा के रूप में, कृपया [डेवलपर्स के लिए जिम्मेदार एआई उपकरण](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard) की समीक्षा करें।
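+
+संदर्भ के लिए, नीचे एक अनुमानित स्केच है कि `responsibleai` और `raiwidgets` पैकेजों से यह डैशबोर्ड आमतौर पर कैसे सेट किया जाता है; डेटा, मॉडल और `readmitted` कॉलम यहाँ काल्पनिक हैं, और सटीक API के लिए ऊपर दिए गए दस्तावेज़ देखें।
+
+```python
+# एक अनुमानित स्केच (मान्यताओं के साथ): RAI डैशबोर्ड का सेटअप;
+# डेटा और कॉलम नाम काल्पनिक हैं
+import pandas as pd
+from sklearn.ensemble import RandomForestClassifier
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# काल्पनिक प्रशिक्षण/परीक्षण डेटा, लक्ष्य कॉलम 'readmitted' के साथ
+train_df = pd.DataFrame({
+    "age": [70, 30, 65, 25, 80, 40, 55, 35],
+    "time_in_hospital": [8, 2, 7, 1, 9, 3, 6, 2],
+    "readmitted": [1, 0, 1, 0, 1, 0, 1, 0],
+})
+test_df = train_df.copy()
+
+model = RandomForestClassifier(random_state=0).fit(
+    train_df.drop(columns="readmitted"), train_df["readmitted"])
+
+rai_insights = RAIInsights(model, train_df, test_df,
+                           target_column="readmitted",
+                           task_type="classification")
+rai_insights.error_analysis.add()   # त्रुटि विश्लेषण घटक
+rai_insights.explainer.add()        # व्याख्यात्मकता घटक
+rai_insights.compute()
+
+ResponsibleAIDashboard(rai_insights)  # नोटबुक में डैशबोर्ड दिखाता है
+```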
+
+
+## त्रुटि विश्लेषण
+
+सटीकता मापने के लिए पारंपरिक मॉडल प्रदर्शन मेट्रिक्स आमतौर पर सही और गलत भविष्यवाणियों पर आधारित गणनाएं होती हैं। उदाहरण के लिए, यह निर्धारित करना कि एक मॉडल 89% समय सटीक है और त्रुटि हानि 0.001 है, एक अच्छा प्रदर्शन माना जा सकता है। त्रुटियां अक्सर आपके आधारभूत डेटा सेट में समान रूप से वितरित नहीं होती हैं। आपको 89% मॉडल सटीकता स्कोर मिल सकता है लेकिन पता चलता है कि आपके डेटा के विभिन्न क्षेत्रों में मॉडल 42% समय विफल हो रहा है। इन विफलता पैटर्न के परिणामस्वरूप कुछ डेटा समूहों के साथ निष्पक्षता या विश्वसनीयता के मुद्दे हो सकते हैं। यह समझना आवश्यक है कि मॉडल कहाँ अच्छा प्रदर्शन कर रहा है और कहाँ नहीं। डेटा क्षेत्रों में जहाँ आपके मॉडल में बड़ी संख्या में गलतियाँ हैं, वे एक महत्वपूर्ण डेटा जनसांख्यिकी हो सकते हैं।
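+
+इस तरह की जांच को pandas से सरलता से दर्शाया जा सकता है; नीचे एक काल्पनिक स्केच है जिसमें समग्र त्रुटि दर मध्यम दिखती है, लेकिन एक समूह (cohort) में त्रुटि दर कहीं अधिक है।
+
+```python
+# एक काल्पनिक स्केच: समूह-वार त्रुटि दर की जांच (डेटा बनावटी है)
+import pandas as pd
+
+df = pd.DataFrame({
+    "cohort": ["A", "A", "A", "B", "B", "B", "B", "B"],
+    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
+    "y_pred": [1, 0, 1, 0, 1, 0, 0, 0],
+})
+df["error"] = (df["y_true"] != df["y_pred"]).astype(int)
+
+print("समग्र त्रुटि दर:", df["error"].mean())
+print(df.groupby("cohort")["error"].mean())  # समूह-वार त्रुटि दर
+```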
+
+
+
+RAI डैशबोर्ड पर त्रुटि विश्लेषण घटक विभिन्न समूहों के बीच मॉडल विफलता को एक पेड़ दृश्य के साथ दिखाता है। यह आपके डेटा सेट में उच्च त्रुटि दर वाले क्षेत्रों या विशेषताओं की पहचान करने में सहायक होता है। यह देखकर कि अधिकांश मॉडल की गलतियाँ कहाँ से आ रही हैं, आप मूल कारण की जांच शुरू कर सकते हैं। आप डेटा के समूह भी बना सकते हैं जिन पर विश्लेषण किया जा सके। ये डेटा समूह डिबगिंग प्रक्रिया में मदद करते हैं यह निर्धारित करने में कि एक समूह में मॉडल का प्रदर्शन अच्छा क्यों है और दूसरे में गलत क्यों है।
+
+
+
+पेड़ मानचित्र पर दृश्य संकेतक समस्या क्षेत्रों को तेजी से खोजने में मदद करते हैं। उदाहरण के लिए, पेड़ के किसी नोड का लाल रंग जितना गहरा होता है, त्रुटि दर उतनी ही अधिक होती है।
+
+हीट मैप एक और दृश्य कार्यक्षमता है जिसका उपयोग उपयोगकर्ता एक या दो विशेषताओं पर त्रुटि दर की जांच करने के लिए कर सकते हैं, ताकि मॉडल की त्रुटियों में योगदान देने वाले कारकों का पता लगाया जा सके।
+
+
+
+त्रुटि विश्लेषण का उपयोग करें जब आपको आवश्यकता हो:
+
+* यह गहराई से समझें कि मॉडल विफलताएँ डेटा सेट और कई इनपुट और फीचर आयामों में कैसे वितरित होती हैं।
+* समग्र प्रदर्शन मेट्रिक्स को तोड़ें ताकि त्रुटिपूर्ण समूहों की स्वचालित रूप से खोज की जा सके और लक्षित सुधारात्मक कदमों की जानकारी प्राप्त हो सके।
+
+## मॉडल ओवरव्यू
+
+एक मशीन लर्निंग मॉडल के प्रदर्शन का मूल्यांकन करने के लिए उसके व्यवहार की समग्र समझ प्राप्त करना आवश्यक है। यह त्रुटि दर, सटीकता, रिकॉल, प्रिसिजन, या MAE (मीन एब्सोल्यूट एरर) जैसे एक से अधिक मेट्रिक्स की समीक्षा करके प्राप्त किया जा सकता है ताकि प्रदर्शन मेट्रिक्स में असमानताएं पाई जा सकें। एक प्रदर्शन मेट्रिक बहुत अच्छा लग सकता है, लेकिन कोई अन्य मेट्रिक गलतियों को उजागर कर सकता है। इसके अलावा, पूरे डेटा सेट या समूहों में मेट्रिक्स की तुलना करने से यह पता चलता है कि मॉडल कहाँ अच्छा प्रदर्शन कर रहा है और कहाँ नहीं। संवेदनशील और असंवेदनशील विशेषताओं (जैसे, रोगी की नस्ल, लिंग या आयु) के आधार पर मॉडल के प्रदर्शन को देखना विशेष रूप से महत्वपूर्ण है, ताकि मॉडल में हो सकने वाली संभावित अनुचितता को उजागर किया जा सके। उदाहरण के लिए, यह पता लगाना कि मॉडल संवेदनशील विशेषताओं वाले किसी समूह में अधिक गलत है, मॉडल में संभावित अनुचितता को उजागर कर सकता है।
+
+RAI डैशबोर्ड के मॉडल ओवरव्यू घटक न केवल समूह में डेटा प्रतिनिधित्व के प्रदर्शन मेट्रिक्स का विश्लेषण करने में मदद करते हैं, बल्कि यह उपयोगकर्ताओं को विभिन्न समूहों के बीच मॉडल के व्यवहार की तुलना करने की क्षमता भी देता है।
+
+
+
+घटक की फीचर-आधारित विश्लेषण कार्यक्षमता उपयोगकर्ताओं को एक विशेष फीचर के भीतर डेटा उपसमूहों को संकीर्ण करने की अनुमति देती है ताकि सूक्ष्म स्तर पर विसंगतियों की पहचान की जा सके। उदाहरण के लिए, डैशबोर्ड में उपयोगकर्ता-चयनित फीचर (जैसे, *"time_in_hospital < 3"* या *"time_in_hospital >= 7"*) के लिए स्वचालित रूप से समूह उत्पन्न करने के लिए अंतर्निहित बुद्धिमत्ता है। यह उपयोगकर्ता को बड़े डेटा समूह से एक विशेष फीचर को अलग करने की अनुमति देता है ताकि यह देखा जा सके कि क्या यह मॉडल के गलत परिणामों का प्रमुख प्रभावक है।
+
+
+
+मॉडल ओवरव्यू घटक दो प्रकार के असमानता मेट्रिक्स का समर्थन करता है:
+
+**मॉडल प्रदर्शन में असमानता**: ये मेट्रिक्स का सेट डेटा के उपसमूहों में चयनित प्रदर्शन मेट्रिक के मानों में असमानता (अंतर) की गणना करता है। यहाँ कुछ उदाहरण हैं:
+
+* सटीकता दर में असमानता
+* त्रुटि दर में असमानता
+* प्रिसिजन में असमानता
+* रिकॉल में असमानता
+* मीन एब्सोल्यूट एरर (MAE) में असमानता
+
+**चयन दर में असमानता**: यह मेट्रिक उपसमूहों के बीच चयन दर (अनुकूल भविष्यवाणी) में अंतर को दर्शाता है। इसका एक उदाहरण ऋण स्वीकृति दरों में असमानता है। चयन दर का अर्थ है प्रत्येक वर्ग में डेटा बिंदुओं का वह अंश जिसे 1 के रूप में वर्गीकृत किया गया है (बाइनरी वर्गीकरण में) या भविष्यवाणी मानों का वितरण (रिग्रेशन में)।
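+
+इन असमानता मेट्रिक्स की गणना का एक काल्पनिक स्केच नीचे है: बनावटी डेटा पर समूह-वार सटीकता और चयन दर, तथा उनके बीच का अंतर।
+
+```python
+# एक काल्पनिक स्केच: उपसमूहों के बीच सटीकता और चयन दर की असमानता
+import pandas as pd
+
+df = pd.DataFrame({
+    "group":  ["M", "M", "M", "F", "F", "F"],
+    "y_true": [1, 0, 1, 1, 0, 0],
+    "y_pred": [1, 0, 1, 0, 0, 1],
+})
+
+correct = (df["y_true"] == df["y_pred"])
+accuracy = correct.groupby(df["group"]).mean()         # समूह-वार सटीकता
+selection_rate = df.groupby("group")["y_pred"].mean()  # 1 की भविष्यवाणी का अंश
+
+print("सटीकता में असमानता:", accuracy.max() - accuracy.min())
+print("चयन दर में असमानता:", selection_rate.max() - selection_rate.min())
+```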
+
+## डेटा विश्लेषण
+
+> "यदि आप डेटा को लंबे समय तक प्रताड़ित करेंगे, तो यह किसी भी चीज़ को स्वीकार कर लेगा" - रोनाल्ड कोस
+
+यह कथन अत्यधिक लगता है, लेकिन यह सच है कि डेटा को किसी भी निष्कर्ष का समर्थन करने के लिए हेरफेर किया जा सकता है। ऐसी हेरफेर कभी-कभी अनजाने में हो सकती है। हम सभी मनुष्य हैं, और हमारे पास पूर्वाग्रह होते हैं, और यह अक्सर कठिन होता है यह जानना कि कब आप डेटा में पूर्वाग्रह ला रहे हैं। एआई और मशीन लर्निंग में निष्पक्षता सुनिश्चित करना एक जटिल चुनौती बनी हुई है।
+
+डेटा पारंपरिक मॉडल प्रदर्शन मेट्रिक्स के लिए एक बड़ा अंधा स्थान है। आपके पास उच्च सटीकता स्कोर हो सकते हैं, लेकिन यह हमेशा आपके डेटा सेट में मौजूद अंतर्निहित डेटा पूर्वाग्रह को प्रतिबिंबित नहीं करता है। उदाहरण के लिए, यदि किसी कंपनी में कार्यकारी पदों पर 27% महिलाएं और 73% पुरुष हैं, तो एक नौकरी विज्ञापन एआई मॉडल जो इस डेटा पर प्रशिक्षित है, वरिष्ठ स्तर की नौकरी पदों के लिए मुख्य रूप से पुरुष दर्शकों को लक्षित कर सकता है। इस डेटा में असंतुलन ने मॉडल की भविष्यवाणी को एक लिंग के पक्ष में झुका दिया। यह एक निष्पक्षता मुद्दा प्रकट करता है जहाँ एआई मॉडल में लिंग पूर्वाग्रह है।
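+
+इस तरह के असंतुलन की जांच और एक सरल उपाय (अल्पसंख्यक समूह का अपसैंपलिंग) का काल्पनिक स्केच नीचे है; संख्याएँ ऊपर दिए गए अनुपात से मेल खाती हैं।
+
+```python
+# एक काल्पनिक स्केच: प्रशिक्षण डेटा में प्रतिनिधित्व की जांच और
+# अल्प-प्रतिनिधित्व वाले समूह का अपसैंपलिंग
+import pandas as pd
+from sklearn.utils import resample
+
+df = pd.DataFrame({"gender": ["M"] * 73 + ["F"] * 27})
+print(df["gender"].value_counts(normalize=True))  # असंतुलन उजागर करें
+
+minority = df[df["gender"] == "F"]
+upsampled = resample(minority, n_samples=73 - 27, replace=True,
+                     random_state=0)
+balanced = pd.concat([df, upsampled])
+print(balanced["gender"].value_counts())  # अब दोनों समूह 73-73
+```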
+
+RAI डैशबोर्ड पर डेटा विश्लेषण घटक उन क्षेत्रों की पहचान करने में मदद करता है जहाँ डेटा सेट में अधिक या कम प्रतिनिधित्व है। यह उपयोगकर्ताओं को डेटा असंतुलन या किसी विशेष डेटा समूह की कमी से उत्पन्न त्रुटियों और निष्पक्षता मुद्दों के मूल कारण का निदान करने में मदद करता है। यह उपयोगकर्ताओं को पूर्वानुमानित और वास्तविक परिणामों, त्रुटि समूहों, और विशिष्ट विशेषताओं के आधार पर डेटा सेट को देखने की क्षमता देता है। कभी-कभी एक कम प्रतिनिधित्व वाले डेटा समूह की खोज यह भी प्रकट कर सकती है कि मॉडल अच्छी तरह से नहीं सीख रहा है, इसलिए उच्च त्रुटियां हैं। एक मॉडल में डेटा पूर्वाग्रह का होना न केवल एक निष्पक्षता मुद्दा है बल्कि यह दिखाता है कि मॉडल समावेशी या विश्वसनीय नहीं है।
+
+
+
+डेटा विश्लेषण का उपयोग करें जब आपको आवश्यकता हो:
+
+* विभिन्न फिल्टर का चयन करके अपने डेटा सेट के आंकड़ों का अन्वेषण करें ताकि अपने डेटा को विभिन्न आयामों (जिसे समूह भी कहा जाता है) में विभाजित किया जा सके।
+* विभिन्न समूहों और फीचर समूहों के बीच अपने डेटा सेट का वितरण समझें।
+* यह निर्धारित करें कि निष्पक्षता, त्रुटि विश्लेषण और कारणता से संबंधित आपके निष्कर्ष (जो अन्य डैशबोर्ड घटकों से प्राप्त हुए हैं) कहीं आपके डेटा सेट के वितरण का परिणाम तो नहीं हैं।
+* यह तय करें कि प्रतिनिधित्व मुद्दों, लेबल शोर, फीचर शोर, लेबल पूर्वाग्रह, और समान कारकों से उत्पन्न त्रुटियों को कम करने के लिए किन क्षेत्रों में अधिक डेटा एकत्र करना है।
+
+## मॉडल व्याख्यात्मकता
+
+मशीन लर्निंग मॉडल अक्सर ब्लैक बॉक्स होते हैं। यह समझना चुनौतीपूर्ण हो सकता है कि कौन सी प्रमुख डेटा विशेषताएँ मॉडल की भविष्यवाणी को प्रेरित करती हैं। इस बात में पारदर्शिता होना महत्वपूर्ण है कि मॉडल कोई निश्चित भविष्यवाणी क्यों करता है। उदाहरण के लिए, यदि एक एआई सिस्टम भविष्यवाणी करता है कि एक मधुमेह रोगी के 30 दिनों से कम समय में अस्पताल में फिर से भर्ती होने का जोखिम है, तो उसे अपनी भविष्यवाणी का समर्थन करने वाला डेटा प्रदान करने में सक्षम होना चाहिए। सहायक डेटा संकेतक पारदर्शिता लाते हैं ताकि चिकित्सक या अस्पताल सूचित निर्णय ले सकें। इसके अलावा, किसी व्यक्तिगत रोगी के लिए मॉडल ने कोई भविष्यवाणी क्यों की, इसे समझा पाना स्वास्थ्य विनियमों के प्रति जवाबदेही सक्षम करता है। जब आप मशीन लर्निंग मॉडलों का उपयोग ऐसे तरीकों से करते हैं जो लोगों के जीवन को प्रभावित करते हैं, तो यह समझना और समझाना महत्वपूर्ण है कि मॉडल के व्यवहार को क्या प्रेरित करता है। मॉडल की व्याख्यात्मकता और स्पष्टीकरणीयता निम्नलिखित परिदृश्यों में प्रश्नों का उत्तर देने में मदद करती है:
+
+* मॉडल डिबगिंग: मेरे मॉडल ने यह गलती क्यों की? मैं अपने मॉडल को कैसे सुधार सकता हूँ?
+* मानव-एआई सहयोग: मैं मॉडल के निर्णयों को कैसे समझ सकता हूँ और उन पर विश्वास कर सकता हूँ?
+* नियामक अनुपालन: क्या मेरा मॉडल कानूनी आवश्यकताओं को पूरा करता है?
+
+RAI डैशबोर्ड का फीचर महत्व घटक आपको डिबग करने और यह समझने में मदद करता है कि एक मॉडल भविष्यवाणियाँ कैसे करता है। यह मशीन लर्निंग पेशेवरों और निर्णयकर्ताओं के लिए एक उपयोगी उपकरण भी है ताकि वे यह समझा सकें और दिखा सकें कि कौन सी विशेषताएँ मॉडल के व्यवहार को प्रभावित कर रही हैं, ताकि नियामक अनुपालन के लिए सबूत प्रदान किया जा सके। इसके बाद, उपयोगकर्ता वैश्विक और स्थानीय व्याख्याओं दोनों का अन्वेषण कर सकते हैं ताकि यह सत्यापित किया जा सके कि कौन सी विशेषताएँ मॉडल की भविष्यवाणी को प्रेरित करती हैं। वैश्विक व्याख्याएँ शीर्ष विशेषताओं को सूचीबद्ध करती हैं जिन्होंने मॉडल की समग्र भविष्यवाणी को प्रभावित किया। स्थानीय व्याख्याएँ यह दिखाती हैं कि कौन सी विशेषताएँ एक व्यक्तिगत मामले के लिए मॉडल की भविष्यवाणी को प्रेरित करती हैं। स्थानीय व्याख्याओं का मूल्यांकन करने की क्षमता एक विशिष्ट मामले को डिबग या ऑडिट करने में भी सहायक होती है ताकि यह बेहतर समझा जा सके और व्याख्या की जा सके कि मॉडल ने एक सटीक या गलत भविष्यवाणी क्यों की।
+
+
+
+* वैश्विक व्याख्याएँ: उदाहरण के लिए, कौन सी विशेषताएँ मधुमेह अस्पताल पुनः भर्ती मॉडल के समग्र व्यवहार को प्रभावित करती हैं?
+* स्थानीय व्याख्याएँ: उदाहरण के लिए, एक 60 वर्ष से अधिक आयु के मधुमेह रोगी के साथ पूर्व अस्पताल में भर्ती होने वाले को 30 दिनों के भीतर अस्पताल में पुनः भर्ती होने की भविष्यवाणी क्यों की गई?
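+
+इन वैश्विक व्याख्याओं के पीछे की एक सामान्य तकनीक का काल्पनिक स्केच नीचे है: Scikit-learn की `permutation_importance`। डेटा और फीचर नाम बनावटी हैं, और यह RAI डैशबोर्ड की आंतरिक विधि होने का दावा नहीं है।
+
+```python
+# एक काल्पनिक स्केच: वैश्विक फीचर महत्व के लिए permutation importance
+import numpy as np
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.inspection import permutation_importance
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(300, 3))                  # [age, prior_visits, noise]
+y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # काल्पनिक लक्ष्य
+
+clf = RandomForestClassifier(random_state=0).fit(X, y)
+result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
+
+for name, score in zip(["age", "prior_visits", "noise"],
+                       result.importances_mean):
+    print(f"{name}: {score:.3f}")
+```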
+
+विभिन्न समूहों के बीच मॉडल के प्रदर्शन की जांच करने की प्रक्रिया में, फीचर महत्व यह दिखाता है कि समूहों में किसी फीचर का कितना प्रभाव है। यह तुलना करते समय विसंगतियों को प्रकट करने में मदद करता है कि मॉडल की गलत भविष्यवाणियों को प्रेरित करने में फीचर का कितना प्रभाव है। फीचर महत्व घटक यह दिखा सकता है कि किसी फीचर के कौन से मान मॉडल के परिणाम को सकारात्मक या नकारात्मक रूप से प्रभावित करते हैं। उदाहरण के लिए, यदि मॉडल ने एक गलत भविष्यवाणी की, तो घटक आपको गहराई में जाकर यह पता लगाने की क्षमता देता है कि भविष्यवाणी को कौन सी विशेषताएँ या विशेषता मान प्रेरित करते हैं। इस स्तर का विवरण न केवल डिबगिंग में मदद करता है बल्कि ऑडिटिंग स्थितियों में पारदर्शिता और जवाबदेही भी प्रदान करता है। अंत में, घटक आपको निष्पक्षता मुद्दों की पहचान करने में मदद कर सकता है। उदाहरण के लिए, यदि जातीयता या लिंग जैसी कोई संवेदनशील विशेषता मॉडल की भविष्यवाणी को प्रेरित करने में अत्यधिक प्रभावशाली है, तो यह मॉडल में नस्ल या लिंग पूर्वाग्रह का संकेत हो सकता है।
+
+
+
+व्याख्यात्मकता का उपयोग करें जब आपको आवश्यकता हो:
+
+* यह समझकर कि भविष्यवाणियों के लिए कौन सी विशेषताएँ सबसे महत्वपूर्ण हैं, यह निर्धारित करें कि आपके एआई सिस्टम की भविष्यवाणियाँ कितनी भरोसेमंद हैं।
+* पहले अपने मॉडल को समझकर और यह पहचानकर कि वह स्वस्थ विशेषताओं का उपयोग कर रहा है या केवल झूठे सहसंबंधों का, अपने मॉडल को डिबग करने का दृष्टिकोण अपनाएँ।
+* यह समझकर संभावित अनुचितता के स्रोतों का पता लगाएँ कि क्या मॉडल संवेदनशील विशेषताओं या उनके साथ अत्यधिक सहसंबद्ध विशेषताओं के आधार पर भविष्यवाणियाँ कर रहा है।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियां या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/hi/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..95e1de423
--- /dev/null
+++ b/translations/hi/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Responsible AI (RAI) डैशबोर्ड का अन्वेषण करें
+
+## निर्देश
+
+इस पाठ में आपने RAI डैशबोर्ड के बारे में सीखा, जो "ओपन-सोर्स" टूल्स पर आधारित घटकों का एक समूह है जो डेटा वैज्ञानिकों को त्रुटि विश्लेषण, डेटा अन्वेषण, निष्पक्षता मूल्यांकन, मॉडल व्याख्यात्मकता, काउंटरफैक्चुअल/व्हाट-इफ आकलन और AI सिस्टम पर कारण विश्लेषण करने में मदद करता है। इस असाइनमेंट के लिए, RAI डैशबोर्ड के कुछ नमूना [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) का अन्वेषण करें और अपने निष्कर्षों को एक पेपर या प्रस्तुति में रिपोर्ट करें।
+
+## मूल्यांकन
+
+| मापदंड | उत्कृष्ट | पर्याप्त | सुधार की आवश्यकता |
+| ------- | --------- | -------- | ----------------- |
+| | एक पेपर या पॉवरपॉइंट प्रस्तुति प्रस्तुत की जाती है जिसमें RAI डैशबोर्ड के घटकों, चलाए गए नोटबुक और उससे निकाले गए निष्कर्षों पर चर्चा की जाती है | निष्कर्षों के बिना एक पेपर प्रस्तुत किया जाता है | कोई पेपर प्रस्तुत नहीं किया गया है |
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया अवगत रहें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/9-Real-World/README.md b/translations/hi/9-Real-World/README.md
new file mode 100644
index 000000000..5884b6d64
--- /dev/null
+++ b/translations/hi/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# पोस्टस्क्रिप्ट: क्लासिक मशीन लर्निंग के वास्तविक दुनिया के अनुप्रयोग
+
+इस पाठ्यक्रम के इस भाग में, आपको क्लासिकल एमएल के कुछ वास्तविक दुनिया के अनुप्रयोगों से परिचित कराया जाएगा। हमने इंटरनेट को खंगाला है ताकि ऐसे अनुप्रयोगों के बारे में श्वेतपत्र और लेख मिल सकें जिन्होंने इन रणनीतियों का उपयोग किया है, न्यूरल नेटवर्क, डीप लर्निंग और एआई से यथासंभव बचते हुए। जानें कि कैसे एमएल का उपयोग व्यावसायिक प्रणालियों, पारिस्थितिक अनुप्रयोगों, वित्त, कला और संस्कृति, और अधिक में किया जाता है।
+
+
+
+> फोटो एलेक्सिस फाउवेट द्वारा अनस्प्लैश पर
+
+## पाठ
+
+1. [एमएल के वास्तविक दुनिया के अनुप्रयोग](1-Applications/README.md)
+2. [जिम्मेदार एआई डैशबोर्ड घटकों का उपयोग करके मशीन लर्निंग में मॉडल डीबगिंग](2-Debugging-ML-Models/README.md)
+
+## श्रेय
+
+"वास्तविक दुनिया के अनुप्रयोग" को [Jen Looper](https://twitter.com/jenlooper) और [Ornella Altunyan](https://twitter.com/ornelladotcom) सहित कई लोगों की एक टीम द्वारा लिखा गया था।
+
+"जिम्मेदार एआई डैशबोर्ड घटकों का उपयोग करके मशीन लर्निंग में मॉडल डीबगिंग" को [Ruth Yakubu](https://twitter.com/ruthieyakubu) द्वारा लिखा गया था।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/CODE_OF_CONDUCT.md b/translations/hi/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..95b3c7d9c
--- /dev/null
+++ b/translations/hi/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft ओपन सोर्स कोड ऑफ कंडक्ट
+
+इस प्रोजेक्ट ने [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) को अपनाया है।
+
+संसाधन:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- प्रश्नों या चिंताओं के लिए [opencode@microsoft.com](mailto:opencode@microsoft.com) से संपर्क करें
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/CONTRIBUTING.md b/translations/hi/CONTRIBUTING.md
new file mode 100644
index 000000000..ec659d3d9
--- /dev/null
+++ b/translations/hi/CONTRIBUTING.md
@@ -0,0 +1,14 @@
+# योगदान
+
+यह प्रोजेक्ट योगदान और सुझावों का स्वागत करता है। अधिकांश योगदानों के लिए आपको एक योगदानकर्ता लाइसेंस समझौते (CLA) से सहमत होना आवश्यक है जो यह घोषित करता है कि आपके पास अधिकार हैं, और वास्तव में, हमें आपके योगदान का उपयोग करने के अधिकार प्रदान करते हैं। विवरण के लिए, https://cla.microsoft.com पर जाएं।
+
+> महत्वपूर्ण: इस रिपो में टेक्स्ट का अनुवाद करते समय, कृपया सुनिश्चित करें कि आप मशीन अनुवाद का उपयोग न करें। हम अनुवादों को समुदाय के माध्यम से सत्यापित करेंगे, इसलिए कृपया केवल उन भाषाओं में अनुवाद के लिए स्वयंसेवक बनें जिनमें आप प्रवीण हैं।
+
+जब आप एक पुल अनुरोध सबमिट करते हैं, तो एक CLA-बॉट स्वचालित रूप से यह निर्धारित करेगा कि क्या आपको CLA प्रदान करने की आवश्यकता है और PR को उपयुक्त रूप से सजाएगा (जैसे, लेबल, टिप्पणी)। बस बॉट द्वारा प्रदान किए गए निर्देशों का पालन करें। आपको हमारे CLA का उपयोग करने वाले सभी रिपोजिटरी में यह केवल एक बार करने की आवश्यकता होगी।
+
+इस प्रोजेक्ट ने [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) को अपनाया है।
+अधिक जानकारी के लिए [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) देखें
+या किसी भी अतिरिक्त प्रश्न या टिप्पणियों के साथ [opencode@microsoft.com](mailto:opencode@microsoft.com) से संपर्क करें।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियां या अशुद्धियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/README.md b/translations/hi/README.md
new file mode 100644
index 000000000..212fcd150
--- /dev/null
+++ b/translations/hi/README.md
@@ -0,0 +1,156 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# शुरुआती लोगों के लिए मशीन लर्निंग - एक पाठ्यक्रम
+
+> 🌍 दुनिया की संस्कृतियों के माध्यम से मशीन लर्निंग का अन्वेषण करते हुए हमारे साथ दुनिया की यात्रा करें 🌍
+
+Microsoft के क्लाउड एडवोकेट्स एक 12-सप्ताह, 26-पाठ का पाठ्यक्रम प्रदान करते हुए प्रसन्न हैं जो पूरी तरह से **मशीन लर्निंग** के बारे में है। इस पाठ्यक्रम में, आप वह सीखेंगे जिसे कभी-कभी **क्लासिक मशीन लर्निंग** कहा जाता है, जिसमें मुख्य रूप से Scikit-learn का एक लाइब्रेरी के रूप में उपयोग किया जाएगा और डीप लर्निंग से बचा जाएगा, जिसे हमारे ['AI for Beginners' पाठ्यक्रम](https://aka.ms/ai4beginners) में कवर किया गया है। इन पाठों को हमारे ['Data Science for Beginners' पाठ्यक्रम](https://aka.ms/ds4beginners) के साथ भी जोड़ें!
+
+हमारे साथ दुनिया भर की यात्रा करें क्योंकि हम इन क्लासिक तकनीकों को दुनिया के विभिन्न क्षेत्रों के डेटा पर लागू करते हैं। प्रत्येक पाठ में पाठ से पहले और बाद के क्विज़, पाठ को पूरा करने के लिखित निर्देश, एक समाधान, एक असाइनमेंट, और बहुत कुछ शामिल है। हमारा प्रोजेक्ट-आधारित शिक्षाशास्त्र आपको निर्माण करते हुए सीखने देता है, जो नए कौशल को स्थायी बनाने का एक सिद्ध तरीका है।
+
+**✍️ हमारे लेखकों का हार्दिक धन्यवाद** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu और Amy Boyd
+
+**🎨 हमारे चित्रकारों का भी धन्यवाद** Tomomi Imura, Dasani Madipalli, और Jen Looper
+
+**🙏 विशेष धन्यवाद 🙏 हमारे Microsoft Student Ambassador लेखकों, समीक्षकों, और सामग्री योगदानकर्ताओं को**, विशेष रूप से Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, और Snigdha Agarwal
+
+**🤩 हमारे R पाठों के लिए Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, और Vidushi Gupta को अतिरिक्त आभार!**
+
+# शुरुआत करना
+
+इन चरणों का पालन करें:
+1. **रिपॉजिटरी को फोर्क करें**: इस पृष्ठ के ऊपर-दाईं ओर "Fork" बटन पर क्लिक करें।
+2. **रिपॉजिटरी को क्लोन करें**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [इस कोर्स के लिए सभी अतिरिक्त संसाधनों को हमारे Microsoft Learn संग्रह में खोजें](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+
+**[विद्यार्थियों](https://aka.ms/student-page)**, इस पाठ्यक्रम का उपयोग करने के लिए, पूरी रिपॉजिटरी को अपने GitHub खाते में फोर्क करें और अभ्यासों को स्वयं या समूह के साथ पूरा करें:
+
+- लेक्चर से पहले का क्विज़ शुरू करें।
+- लेक्चर पढ़ें और गतिविधियों को पूरा करें, प्रत्येक ज्ञान जांच पर रुकें और विचार करें।
+- पाठों को समझकर प्रोजेक्ट बनाने का प्रयास करें बजाय समाधान कोड चलाने के; हालांकि वह कोड प्रत्येक प्रोजेक्ट-उन्मुख पाठ में `/solution` फ़ोल्डरों में उपलब्ध है।
+- लेक्चर के बाद का क्विज़ लें।
+- चुनौती को पूरा करें।
+- असाइनमेंट को पूरा करें।
+- एक पाठ समूह को पूरा करने के बाद, [चर्चा बोर्ड](https://github.com/microsoft/ML-For-Beginners/discussions) पर जाएं और उचित PAT रूब्रिक को भरकर "जोर से सीखें"। 'PAT' एक प्रगति मूल्यांकन उपकरण है जो एक रूब्रिक है जिसे आप अपने सीखने को आगे बढ़ाने के लिए भरते हैं। आप अन्य PATs पर प्रतिक्रिया भी दे सकते हैं ताकि हम एक साथ सीख सकें।
+
+> आगे की पढ़ाई के लिए, हम इन [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott) मॉड्यूल और लर्निंग पाथ का पालन करने की सलाह देते हैं।
+
+**शिक्षकों**, हमने इस पाठ्यक्रम का उपयोग कैसे करें पर [कुछ सुझाव शामिल किए हैं](for-teachers.md)।
+
+---
+
+## वीडियो वॉकथ्रू
+
+कुछ पाठ छोटे वीडियो के रूप में उपलब्ध हैं। आप इन सभी को पाठों में इन-लाइन पा सकते हैं, या Microsoft Developer YouTube चैनल पर [ML for Beginners प्लेलिस्ट](https://aka.ms/ml-beginners-videos) पर क्लिक करके देख सकते हैं।
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## टीम से मिलें
+
+[](https://youtu.be/Tj1XWrDSYJU "प्रोमो वीडियो")
+
+**Gif द्वारा** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 परियोजना और इसे बनाने वाले लोगों के बारे में वीडियो के लिए ऊपर की छवि पर क्लिक करें!
+
+---
+
+## शिक्षाशास्त्र
+
+हमने इस पाठ्यक्रम का निर्माण करते समय दो शैक्षिक सिद्धांतों को चुना है: यह सुनिश्चित करना कि यह व्यावहारिक **प्रोजेक्ट-आधारित** है और इसमें **बार-बार क्विज़** शामिल हैं। इसके अलावा, इस पाठ्यक्रम में इसे एकजुटता देने के लिए एक सामान्य **थीम** है।
+
+यह सुनिश्चित करने से कि सामग्री परियोजनाओं के साथ संरेखित है, प्रक्रिया छात्रों के लिए अधिक आकर्षक बन जाती है और अवधारणाओं का प्रतिधारण बढ़ता है। इसके अलावा, कक्षा से पहले एक कम-दांव वाला क्विज़ छात्र का ध्यान विषय की ओर केंद्रित करता है, जबकि कक्षा के बाद का दूसरा क्विज़ आगे के प्रतिधारण को सुनिश्चित करता है। यह पाठ्यक्रम लचीला और मजेदार बनाने के लिए डिज़ाइन किया गया है और इसे पूरे या आंशिक रूप से लिया जा सकता है। परियोजनाएं छोटी शुरू होती हैं और 12-सप्ताह के चक्र के अंत तक उत्तरोत्तर जटिल होती जाती हैं। इस पाठ्यक्रम में एमएल के वास्तविक दुनिया के अनुप्रयोगों पर एक परिशिष्ट भी शामिल है, जिसका उपयोग अतिरिक्त क्रेडिट या चर्चा के आधार के रूप में किया जा सकता है।
+
+> हमारे [आचार संहिता](CODE_OF_CONDUCT.md), [योगदान](CONTRIBUTING.md), और [अनुवाद](TRANSLATIONS.md) दिशानिर्देश खोजें। हम आपके रचनात्मक प्रतिक्रिया का स्वागत करते हैं!
+
+## प्रत्येक पाठ में शामिल हैं
+
+- वैकल्पिक स्केच नोट
+- वैकल्पिक पूरक वीडियो
+- वीडियो वॉकथ्रू (कुछ पाठों में ही)
+- लेक्चर से पहले का वार्मअप क्विज़
+- लिखित पाठ
+- प्रोजेक्ट-आधारित पाठों के लिए, प्रोजेक्ट कैसे बनाएं इसके चरण-दर-चरण मार्गदर्शिका
+- ज्ञान जांच
+- एक चुनौती
+- पूरक पठन
+- असाइनमेंट
+- लेक्चर के बाद का क्विज़
+
+> **भाषाओं के बारे में एक नोट**: ये पाठ मुख्य रूप से Python में लिखे गए हैं, लेकिन कई R में भी उपलब्ध हैं। एक R पाठ को पूरा करने के लिए, `/solution` फ़ोल्डर में जाएं और R पाठ खोजें। इनका एक्सटेंशन .rmd होता है, जो एक **R मार्कडाउन** फ़ाइल का प्रतिनिधित्व करता है; इसे सरल रूप में एक `Markdown document` के भीतर `code chunks` (R या अन्य भाषाओं के) और एक `YAML header` (जो PDF जैसे आउटपुट के प्रारूपण का मार्गदर्शन करता है) के समावेश के रूप में परिभाषित किया जा सकता है। इस प्रकार, यह डेटा विज्ञान के लिए एक उत्कृष्ट लेखन ढांचे के रूप में कार्य करता है क्योंकि यह आपको अपना कोड, उसका आउटपुट और अपने विचार एक साथ लिखने की अनुमति देता है। इसके अलावा, R मार्कडाउन दस्तावेज़ों को PDF, HTML या Word जैसे आउटपुट प्रारूपों में प्रस्तुत किया जा सकता है।
+
+> **क्विज़ के बारे में एक नोट**: सभी क्विज़ [क्विज़ ऐप फ़ोल्डर](../../quiz-app) में निहित हैं, प्रत्येक में तीन प्रश्नों के कुल 52 क्विज़ हैं। वे पाठों के भीतर से जुड़े हुए हैं लेकिन क्विज़ ऐप को स्थानीय रूप से चलाया जा सकता है; स्थानीय रूप से होस्ट करने या Azure पर तैनात करने के निर्देश `quiz-app` फ़ोल्डर में पाए जा सकते हैं।
+
+| पाठ संख्या | विषय | पाठ समूह | सीखने के उद्देश्य | लिंक्ड पाठ | लेखक |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | मशीन लर्निंग का परिचय | [परिचय](1-Introduction/README.md) | मशीन लर्निंग के पीछे के मूलभूत अवधारणाओं को जानें | [पाठ](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | मशीन लर्निंग का इतिहास | [परिचय](1-Introduction/README.md) | इस क्षेत्र के पीछे के इतिहास को जानें | [पाठ](1-Introduction/2-history-of-ML/README.md) | Jen और Amy |
+| 03 | निष्पक्षता और मशीन लर्निंग | [परिचय](1-Introduction/README.md) | मशीन लर्निंग मॉडल बनाते और लागू करते समय छात्रों को निष्पक्षता से जुड़े किन महत्वपूर्ण दार्शनिक मुद्दों पर विचार करना चाहिए? | [पाठ](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | मशीन लर्निंग के लिए तकनीकें | [Introduction](1-Introduction/README.md) | एमएल शोधकर्ता एमएल मॉडल बनाने के लिए किन तकनीकों का उपयोग करते हैं? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | क्रिस और जेन |
+| 05 | प्रतिगमन का परिचय | [Regression](2-Regression/README.md) | प्रतिगमन मॉडल के लिए पाइथन और स्किकिट-लर्न के साथ शुरुआत करें | | |
+| 09 | एक वेब ऐप 🔌 | [Web App](3-Web-App/README.md) | अपने प्रशिक्षित मॉडल का उपयोग करने के लिए एक वेब ऐप बनाएं | [Python](3-Web-App/1-Web-App/README.md) | जेन |
+| 10 | वर्गीकरण का परिचय | [Classification](4-Classification/README.md) | अपने डेटा को साफ करें, तैयार करें, और विज़ुअलाइज़ करें; वर्गीकरण का परिचय | | |
+| 13 | स्वादिष्ट एशियाई और भारतीय व्यंजन 🍜 | [Classification](4-Classification/README.md) | अपने मॉडल का उपयोग करके एक अनुशंसा वेब ऐप बनाएं | [Python](4-Classification/4-Applied/README.md) | जेन |
+| 14 | क्लस्टरिंग का परिचय | [Clustering](5-Clustering/README.md) | अपने डेटा को साफ करें, तैयार करें, और विज़ुअलाइज़ करें; क्लस्टरिंग का परिचय | | |
+| 16 | प्राकृतिक भाषा प्रसंस्करण का परिचय ☕️ | [Natural language processing](6-NLP/README.md) | एक साधारण बॉट बनाकर NLP के बारे में बुनियादी बातें सीखें | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | सामान्य NLP कार्य ☕️ | [Natural language processing](6-NLP/README.md) | भाषा संरचनाओं के साथ काम करते समय आवश्यक सामान्य कार्यों को समझकर अपने NLP ज्ञान को गहरा करें | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | अनुवाद और भावना विश्लेषण ♥️ | [Natural language processing](6-NLP/README.md) | जेन ऑस्टेन के साथ अनुवाद और भावना विश्लेषण | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | यूरोप के रोमांटिक होटल ♥️ | [Natural language processing](6-NLP/README.md) | होटल समीक्षाओं के साथ भावना विश्लेषण 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | यूरोप के रोमांटिक होटल ♥️ | [Natural language processing](6-NLP/README.md) | होटल समीक्षाओं के साथ भावना विश्लेषण 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | समय श्रृंखला पूर्वानुमान का परिचय | [Time series](7-TimeSeries/README.md) | समय श्रृंखला पूर्वानुमान का परिचय | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ विश्व पावर उपयोग ⚡️ - ARIMA के साथ समय श्रृंखला पूर्वानुमान | [Time series](7-TimeSeries/README.md) | ARIMA के साथ समय श्रृंखला पूर्वानुमान | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ विश्व पावर उपयोग ⚡️ - SVR के साथ समय श्रृंखला पूर्वानुमान | [Time series](7-TimeSeries/README.md) | सपोर्ट वेक्टर रेग्रेसर के साथ समय श्रृंखला पूर्वानुमान | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | सुदृढीकरण अधिगम का परिचय | [Reinforcement learning](8-Reinforcement/README.md) | Q-लर्निंग के साथ सुदृढीकरण अधिगम का परिचय | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | पीटर को भेड़िये से बचाएं! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | सुदृढीकरण अधिगम जिम | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | वास्तविक दुनिया के ML परिदृश्य और अनुप्रयोग | [ML in the Wild](9-Real-World/README.md) | शास्त्रीय ML के दिलचस्प और प्रकट करने वाले वास्तविक दुनिया के अनुप्रयोग | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | RAI डैशबोर्ड का उपयोग करके ML में मॉडल डिबगिंग | [ML in the Wild](9-Real-World/README.md) | जिम्मेदार AI डैशबोर्ड घटकों का उपयोग करके मशीन लर्निंग में मॉडल डिबगिंग | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [इस कोर्स के लिए सभी अतिरिक्त संसाधनों को हमारे Microsoft Learn संग्रह में खोजें](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## ऑफलाइन एक्सेस
+
+आप [Docsify](https://docsify.js.org/#/) का उपयोग करके इस दस्तावेज़ को ऑफ़लाइन चला सकते हैं। इस रिपॉजिटरी को फोर्क करें, अपनी स्थानीय मशीन पर [Docsify इंस्टॉल करें](https://docsify.js.org/#/quickstart), और फिर इस रिपॉजिटरी के रूट फ़ोल्डर में `docsify serve` टाइप करें। वेबसाइट आपके localhost पर पोर्ट 3000 पर सर्व होगी: `localhost:3000`।
+
+## PDFs
+
+पाठ्यक्रम की लिंक सहित PDF [यहाँ](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf) पाएं।
+
+## सहायता की आवश्यकता
+
+क्या आप अनुवाद में योगदान देना चाहेंगे? कृपया हमारे [अनुवाद दिशानिर्देश](TRANSLATIONS.md) पढ़ें और कार्यभार प्रबंधन के लिए एक टेम्पलेटेड मुद्दा [यहां](https://github.com/microsoft/ML-For-Beginners/issues) जोड़ें।
+
+## अन्य पाठ्यक्रम
+
+हमारी टीम अन्य पाठ्यक्रम भी तैयार करती है! इन्हें देखें:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101??WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+ **अस्वीकरण**:
+ इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/SECURITY.md b/translations/hi/SECURITY.md
new file mode 100644
index 000000000..b0a66a063
--- /dev/null
+++ b/translations/hi/SECURITY.md
@@ -0,0 +1,40 @@
+## सुरक्षा
+
+Microsoft हमारे सॉफ़्टवेयर उत्पादों और सेवाओं की सुरक्षा को गंभीरता से लेता है, जिसमें हमारे GitHub संगठनों के माध्यम से प्रबंधित सभी स्रोत कोड रिपॉज़िटरी शामिल हैं, जिनमें [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), और [हमारे GitHub संगठन](https://opensource.microsoft.com/) शामिल हैं।
+
+यदि आपको किसी Microsoft-स्वामित्व वाली रिपॉज़िटरी में कोई सुरक्षा भेद्यता मिली है जो [Microsoft की सुरक्षा भेद्यता की परिभाषा](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott) को पूरा करती है, तो कृपया नीचे वर्णित के अनुसार हमें रिपोर्ट करें।
+
+## सुरक्षा मुद्दों की रिपोर्टिंग
+
+**कृपया सार्वजनिक GitHub मुद्दों के माध्यम से सुरक्षा भेद्यताओं की रिपोर्ट न करें।**
+
+इसके बजाय, कृपया उन्हें Microsoft Security Response Center (MSRC) पर [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report) पर रिपोर्ट करें।
+
+यदि आप लॉग इन किए बिना सबमिट करना पसंद करते हैं, तो [secure@microsoft.com](mailto:secure@microsoft.com) पर ईमेल भेजें। यदि संभव हो, तो हमारे PGP कुंजी के साथ अपने संदेश को एन्क्रिप्ट करें; कृपया इसे [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc) से डाउनलोड करें।
+
+आपको 24 घंटों के भीतर एक प्रतिक्रिया प्राप्त होनी चाहिए। यदि किसी कारणवश आपको प्रतिक्रिया नहीं मिलती है, तो कृपया सुनिश्चित करने के लिए ईमेल के माध्यम से फॉलो अप करें कि हमें आपका मूल संदेश प्राप्त हुआ है। अतिरिक्त जानकारी [microsoft.com/msrc](https://www.microsoft.com/msrc) पर पाई जा सकती है।
+
+कृपया नीचे सूचीबद्ध अनुरोधित जानकारी (जितना आप प्रदान कर सकते हैं) शामिल करें ताकि हमें संभावित मुद्दे की प्रकृति और दायरे को बेहतर ढंग से समझने में मदद मिल सके:
+
+ * मुद्दे का प्रकार (जैसे बफर ओवरफ्लो, SQL इंजेक्शन, क्रॉस-साइट स्क्रिप्टिंग, आदि)
+ * मुद्दे के प्रकट होने से संबंधित स्रोत फ़ाइल(फ़ाइलों) के पूर्ण पथ
+ * प्रभावित स्रोत कोड का स्थान (टैग/ब्रांच/कमिट या डायरेक्ट URL)
+ * मुद्दे को पुन: उत्पन्न करने के लिए आवश्यक कोई विशेष कॉन्फ़िगरेशन
+ * मुद्दे को पुन: उत्पन्न करने के लिए चरण-दर-चरण निर्देश
+ * प्रूफ-ऑफ-कॉन्सेप्ट या एक्सप्लॉइट कोड (यदि संभव हो)
+ * मुद्दे का प्रभाव, जिसमें हमलावर द्वारा मुद्दे का उपयोग कैसे किया जा सकता है
+
+यह जानकारी हमें आपकी रिपोर्ट को अधिक तेजी से ट्रायज करने में मदद करेगी।
+
+यदि आप बग बाउंटी के लिए रिपोर्ट कर रहे हैं, तो अधिक पूर्ण रिपोर्टें उच्च बाउंटी पुरस्कार में योगदान कर सकती हैं। हमारे सक्रिय कार्यक्रमों के बारे में अधिक विवरण के लिए कृपया हमारे [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) पृष्ठ पर जाएं।
+
+## पसंदीदा भाषाएँ
+
+हम सभी संचार अंग्रेजी में करने को प्राथमिकता देते हैं।
+
+## नीति
+
+Microsoft [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd) के सिद्धांत का पालन करता है।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। इसकी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/SUPPORT.md b/translations/hi/SUPPORT.md
new file mode 100644
index 000000000..d04aafb39
--- /dev/null
+++ b/translations/hi/SUPPORT.md
@@ -0,0 +1,13 @@
+# समर्थन
+## समस्याओं को दर्ज करने और सहायता प्राप्त करने का तरीका
+
+यह परियोजना बग और फीचर अनुरोधों को ट्रैक करने के लिए GitHub Issues का उपयोग करती है। कृपया नए मुद्दों को दर्ज करने से पहले मौजूदा मुद्दों की खोज करें ताकि डुप्लिकेट से बचा जा सके। नए मुद्दों के लिए, अपने बग या फीचर अनुरोध को एक नए Issue के रूप में दर्ज करें।
+
+इस परियोजना का उपयोग करने के बारे में सहायता और प्रश्नों के लिए, एक Issue दर्ज करें।
+
+## Microsoft समर्थन नीति
+
+इस रिपॉजिटरी के लिए समर्थन ऊपर सूचीबद्ध संसाधनों तक सीमित है।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/TRANSLATIONS.md b/translations/hi/TRANSLATIONS.md
new file mode 100644
index 000000000..c0a329195
--- /dev/null
+++ b/translations/hi/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# पाठों का अनुवाद करके योगदान दें
+
+हम इस पाठ्यक्रम में पाठों के लिए अनुवादों का स्वागत करते हैं!
+## दिशानिर्देश
+
+प्रत्येक पाठ फ़ोल्डर और पाठ परिचय फ़ोल्डर में फ़ोल्डर होते हैं जिनमें अनुवादित मार्कडाउन फ़ाइलें होती हैं।
+
+> नोट, कृपया कोड नमूना फ़ाइलों में किसी भी कोड का अनुवाद न करें; अनुवाद करने के लिए केवल README, असाइनमेंट और क्विज़ हैं। धन्यवाद!
+
+अनुवादित फ़ाइलों को इस नामकरण परंपरा का पालन करना चाहिए:
+
+**README._[language]_.md**
+
+जहां _[language]_ ISO 639-1 मानक का पालन करने वाला दो अक्षरों का भाषा संक्षेपण है (उदा. `README.es.md` स्पेनिश के लिए और `README.nl.md` डच के लिए)।
+
+**assignment._[language]_.md**
+
+README फ़ाइलों की तरह, कृपया असाइनमेंट का भी अनुवाद करें।
+
+> महत्वपूर्ण: इस रेपो में पाठ का अनुवाद करते समय, कृपया सुनिश्चित करें कि आप मशीन अनुवाद का उपयोग न करें। हम समुदाय के माध्यम से अनुवादों को सत्यापित करेंगे, इसलिए कृपया केवल उन भाषाओं में अनुवाद के लिए स्वयंसेवक बनें जिनमें आप प्रवीण हैं।
+
+**क्विज़**
+
+1. क्विज़-ऐप में अपना अनुवाद यहां एक फ़ाइल जोड़कर जोड़ें: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, उचित नामकरण परंपरा के साथ (en.json, fr.json)। **कृपया 'true' या 'false' शब्दों का स्थानीयकरण न करें। धन्यवाद!**
+
+2. क्विज़-ऐप के App.vue फ़ाइल में ड्रॉपडाउन में अपना भाषा कोड जोड़ें।
+
+3. क्विज़-ऐप के [translations index.js file](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) को संपादित करें और अपनी भाषा जोड़ें।
+
+4. अंत में, अपनी अनुवादित README.md फ़ाइलों में सभी क्विज़ लिंक संपादित करें ताकि वे सीधे आपके अनुवादित क्विज़ की ओर इंगित करें: उदाहरण के लिए, https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 को https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id में बदलें।
+
+**धन्यवाद**
+
+हम वास्तव में आपके प्रयासों की सराहना करते हैं!
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया अवगत रहें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को आधिकारिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/docs/_sidebar.md b/translations/hi/docs/_sidebar.md
new file mode 100644
index 000000000..a89f8bea6
--- /dev/null
+++ b/translations/hi/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- परिचय
+ - [मशीन लर्निंग का परिचय](../1-Introduction/1-intro-to-ML/README.md)
+ - [मशीन लर्निंग का इतिहास](../1-Introduction/2-history-of-ML/README.md)
+ - [एमएल और निष्पक्षता](../1-Introduction/3-fairness/README.md)
+ - [एमएल की तकनीकें](../1-Introduction/4-techniques-of-ML/README.md)
+
+- प्रतिगमन
+ - [टूल्स ऑफ द ट्रेड](../2-Regression/1-Tools/README.md)
+ - [डेटा](../2-Regression/2-Data/README.md)
+ - [रेखीय प्रतिगमन](../2-Regression/3-Linear/README.md)
+ - [लॉजिस्टिक प्रतिगमन](../2-Regression/4-Logistic/README.md)
+
+- एक वेब ऐप बनाएं
+ - [वेब ऐप](../3-Web-App/1-Web-App/README.md)
+
+- वर्गीकरण
+ - [वर्गीकरण का परिचय](../4-Classification/1-Introduction/README.md)
+ - [वर्गीकरणकर्ता 1](../4-Classification/2-Classifiers-1/README.md)
+ - [वर्गीकरणकर्ता 2](../4-Classification/3-Classifiers-2/README.md)
+ - [लागू एमएल](../4-Classification/4-Applied/README.md)
+
+- क्लस्टरिंग
+ - [अपने डेटा को विज़ुअलाइज़ करें](../5-Clustering/1-Visualize/README.md)
+ - [के-मीन](../5-Clustering/2-K-Means/README.md)
+
+- एनएलपी
+ - [एनएलपी का परिचय](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [एनएलपी कार्य](../6-NLP/2-Tasks/README.md)
+ - [अनुवाद और भावना](../6-NLP/3-Translation-Sentiment/README.md)
+ - [होटल समीक्षाएँ 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [होटल समीक्षाएँ 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- समय श्रृंखला पूर्वानुमान
+ - [समय श्रृंखला पूर्वानुमान का परिचय](../7-TimeSeries/1-Introduction/README.md)
+ - [एआरआईएमए](../7-TimeSeries/2-ARIMA/README.md)
+ - [एसवीआर](../7-TimeSeries/3-SVR/README.md)
+
+- सुदृढीकरण शिक्षण
+ - [क्यू-लर्निंग](../8-Reinforcement/1-QLearning/README.md)
+ - [जिम](../8-Reinforcement/2-Gym/README.md)
+
+- वास्तविक दुनिया का एमएल
+ - [अनुप्रयोग](../9-Real-World/1-Applications/README.md)
+
+**अस्वीकरण**:
+इस दस्तावेज़ का अनुवाद मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल भाषा में मूल दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/for-teachers.md b/translations/hi/for-teachers.md
new file mode 100644
index 000000000..e6e44b548
--- /dev/null
+++ b/translations/hi/for-teachers.md
@@ -0,0 +1,26 @@
+## शिक्षकों के लिए
+
+क्या आप इस पाठ्यक्रम का उपयोग अपनी कक्षा में करना चाहेंगे? कृपया बेहिचक इसका उपयोग करें!
+
+वास्तव में, आप इसे GitHub Classroom का उपयोग करके स्वयं GitHub के भीतर उपयोग कर सकते हैं।
+
+ऐसा करने के लिए, इस रिपॉजिटरी को फोर्क करें। आपको प्रत्येक पाठ के लिए एक रिपॉजिटरी बनाने की आवश्यकता होगी, इसलिए आपको प्रत्येक फ़ोल्डर को एक अलग रिपॉजिटरी में निकालना होगा। इस तरह, [GitHub Classroom](https://classroom.github.com/classrooms) प्रत्येक पाठ को अलग से उठा सकेगा।
+
+ये [पूर्ण निर्देश](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) आपको बताएंगे कि अपनी कक्षा को कैसे सेट अप करें।
+
+## रिपॉजिटरी का वर्तमान रूप में उपयोग करना
+
+यदि आप इस रिपॉजिटरी का उपयोग इसके वर्तमान रूप में करना चाहते हैं, बिना GitHub Classroom का उपयोग किए, तो वह भी किया जा सकता है। आपको अपने छात्रों के साथ संवाद करना होगा कि किस पाठ को एक साथ पढ़ना है।
+
+एक ऑनलाइन प्रारूप (Zoom, Teams, या अन्य) में आप क्विज़ के लिए ब्रेकआउट रूम बना सकते हैं, और छात्रों को सीखने के लिए तैयार करने में मदद कर सकते हैं। फिर छात्रों को क्विज़ के लिए आमंत्रित करें और निश्चित समय पर उनके उत्तर 'issues' के रूप में सबमिट करें। आप असाइनमेंट्स के साथ भी ऐसा कर सकते हैं, यदि आप चाहते हैं कि छात्र खुले में सहयोगात्मक रूप से काम करें।
+
+यदि आप एक अधिक निजी प्रारूप पसंद करते हैं, तो अपने छात्रों को पाठ्यक्रम को फोर्क करने के लिए कहें, पाठ दर पाठ, उनके अपने GitHub रिपॉजिटरीज़ में निजी रिपॉजिटरीज़ के रूप में, और आपको एक्सेस दें। फिर वे निजी तौर पर क्विज़ और असाइनमेंट्स को पूरा कर सकते हैं और उन्हें आपके कक्षा रिपॉजिटरी पर issues के माध्यम से आपको सबमिट कर सकते हैं।
+
+एक ऑनलाइन कक्षा प्रारूप में इसे काम करने के कई तरीके हैं। कृपया हमें बताएं कि आपके लिए सबसे अच्छा क्या काम करता है!
+
+## कृपया हमें अपने विचार बताएं!
+
+हम इस पाठ्यक्रम को आपके और आपके छात्रों के लिए उपयोगी बनाना चाहते हैं। कृपया हमें [प्रतिक्रिया दें](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u)।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम जिम्मेदार नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/quiz-app/README.md b/translations/hi/quiz-app/README.md
new file mode 100644
index 000000000..40c1b9a50
--- /dev/null
+++ b/translations/hi/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Quizzes
+
+ये क्विज़ https://aka.ms/ml-beginners पर ML पाठ्यक्रम के लिए प्री- और पोस्ट-लेक्चर क्विज़ हैं।
+
+## प्रोजेक्ट सेटअप
+
+```
+npm install
+```
+
+### विकास के लिए संकलन और हॉट-रीलोड
+
+```
+npm run serve
+```
+
+### उत्पादन के लिए संकलन और मिनिफिकेशन
+
+```
+npm run build
+```
+
+### फाइलों को लिंट और ठीक करता है
+
+```
+npm run lint
+```
+
+### कॉन्फ़िगरेशन को कस्टमाइज़ करें
+
+[Configuration Reference](https://cli.vuejs.org/config/) देखें।
+
+श्रेय: इस क्विज़ ऐप के मूल संस्करण के लिए धन्यवाद: https://github.com/arpan45/simple-quiz-vue
+
+## Azure पर तैनाती
+
+यहां एक चरण-दर-चरण मार्गदर्शिका है जो आपको आरंभ करने में मदद करेगी:
+
+1. GitHub रिपॉजिटरी को फोर्क करें
+सुनिश्चित करें कि आपका स्थिर वेब ऐप कोड आपकी GitHub रिपॉजिटरी में है। इस रिपॉजिटरी को फोर्क करें।
+
+2. एक Azure Static Web App बनाएं
+- एक [Azure खाता](http://azure.microsoft.com) बनाएं
+- [Azure पोर्टल](https://portal.azure.com) पर जाएं
+- “Create a resource” पर क्लिक करें और “Static Web App” खोजें।
+- “Create” पर क्लिक करें।
+
+3. Static Web App को कॉन्फ़िगर करें
+- मूलभूत जानकारी: Subscription: अपनी Azure सब्सक्रिप्शन चुनें।
+- Resource Group: एक नया संसाधन समूह बनाएं या मौजूदा का उपयोग करें।
+- नाम: अपने स्थिर वेब ऐप के लिए एक नाम प्रदान करें।
+- क्षेत्र: अपने उपयोगकर्ताओं के निकटतम क्षेत्र चुनें।
+
+- #### तैनाती विवरण:
+- स्रोत: “GitHub” चुनें।
+- GitHub खाता: Azure को आपके GitHub खाते तक पहुंचने के लिए अधिकृत करें।
+- संगठन: अपना GitHub संगठन चुनें।
+- रिपॉजिटरी: वह रिपॉजिटरी चुनें जिसमें आपका स्थिर वेब ऐप है।
+- शाखा: वह शाखा चुनें जिससे आप तैनात करना चाहते हैं।
+
+- #### निर्माण विवरण:
+- निर्माण प्रीसेट: उस फ्रेमवर्क को चुनें जिससे आपका ऐप बनाया गया है (उदा., React, Angular, Vue, आदि)।
+- ऐप स्थान: उस फ़ोल्डर को निर्दिष्ट करें जिसमें आपका ऐप कोड है (उदा., / यदि यह रूट में है)।
+- API स्थान: यदि आपके पास API है, तो उसका स्थान निर्दिष्ट करें (वैकल्पिक)।
+- आउटपुट स्थान: उस फ़ोल्डर को निर्दिष्ट करें जहां निर्माण आउटपुट उत्पन्न होता है (उदा., build या dist)।
+
+4. समीक्षा और निर्माण
+अपनी सेटिंग्स की समीक्षा करें और “Create” पर क्लिक करें। Azure आवश्यक संसाधनों को सेटअप करेगा और आपके रिपॉजिटरी में एक GitHub Actions वर्कफ़्लो बनाएगा।
+
+5. GitHub Actions वर्कफ़्लो
+Azure स्वचालित रूप से आपके रिपॉजिटरी में एक GitHub Actions वर्कफ़्लो फ़ाइल बनाएगा (.github/workflows/azure-static-web-apps-.yml)। यह वर्कफ़्लो निर्माण और तैनाती प्रक्रिया को संभालेगा।
+
+6. तैनाती की निगरानी करें
+अपने GitHub रिपॉजिटरी में “Actions” टैब पर जाएं।
+आपको एक वर्कफ़्लो चलता हुआ दिखाई देगा। यह वर्कफ़्लो आपके स्थिर वेब ऐप का निर्माण करेगा और उसे Azure पर तैनात करेगा।
+एक बार वर्कफ़्लो पूरा हो जाने पर, आपका ऐप प्रदान किए गए Azure URL पर लाइव हो जाएगा।
+
+### उदाहरण वर्कफ़्लो फ़ाइल
+
+यहां एक उदाहरण है कि GitHub Actions वर्कफ़्लो फ़ाइल कैसी दिख सकती है:
+```
+name: Azure Static Web Apps CI/CD
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - main
+
+jobs:
+  build_and_deploy_job:
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v1
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }}
+          action: "upload"
+          app_location: "/quiz-app" # App source code path
+          api_location: "" # API source code path - optional
+          output_location: "dist" # Built app content directory - optional
+```
+
+### अतिरिक्त संसाधन
+- [Azure Static Web Apps Documentation](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [GitHub Actions Documentation](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन-आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवाद में त्रुटियाँ या गलतियाँ हो सकती हैं। अपनी मूल भाषा में मूल दस्तावेज़ को प्राधिकृत स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/sketchnotes/LICENSE.md b/translations/hi/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..1f07a327c
--- /dev/null
+++ b/translations/hi/sketchnotes/LICENSE.md
@@ -0,0 +1,139 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") कोई कानून फर्म नहीं है और कानूनी सेवाएं या कानूनी सलाह नहीं प्रदान करता है। Creative Commons सार्वजनिक लाइसेंस का वितरण वकील-ग्राहक या अन्य संबंध नहीं बनाता है। Creative Commons अपने लाइसेंस और संबंधित जानकारी "जैसा है" के आधार पर उपलब्ध कराता है। Creative Commons अपने लाइसेंस, उनके नियमों और शर्तों के तहत लाइसेंस प्राप्त किसी भी सामग्री, या किसी भी संबंधित जानकारी के संबंध में कोई वारंटी नहीं देता है। Creative Commons उनके उपयोग से होने वाले नुकसान के लिए पूरी हद तक किसी भी जिम्मेदारी से इनकार करता है।
+
+Creative Commons सार्वजनिक लाइसेंस का उपयोग करना
+
+Creative Commons सार्वजनिक लाइसेंस रचनाकारों और अन्य अधिकार धारकों द्वारा मौलिक लेखन कार्यों और अन्य सामग्री को साझा करने के लिए उपयोग किए जाने वाले नियमों और शर्तों का एक मानक सेट प्रदान करते हैं, जो कॉपीराइट और नीचे निर्दिष्ट सार्वजनिक लाइसेंस में निर्दिष्ट कुछ अन्य अधिकारों के अधीन हैं। निम्नलिखित विचार केवल जानकारी के लिए हैं, पूर्ण नहीं हैं, और हमारे लाइसेंस का हिस्सा नहीं हैं।
+
+ लाइसेंसदाताओं के लिए विचार: हमारे सार्वजनिक लाइसेंस उन लोगों द्वारा उपयोग के लिए अभिप्रेत हैं जिन्हें सामग्री को सार्वजनिक रूप से उपयोग करने की अनुमति देने के लिए अधिकृत किया गया है, जो अन्यथा कॉपीराइट और कुछ अन्य अधिकारों द्वारा प्रतिबंधित है। हमारे लाइसेंस अपरिवर्तनीय हैं। लाइसेंसदाताओं को लाइसेंस लागू करने से पहले चुने गए लाइसेंस के नियमों और शर्तों को पढ़ना और समझना चाहिए। लाइसेंसदाताओं को लाइसेंस लागू करने से पहले सभी आवश्यक अधिकार सुरक्षित करने चाहिए ताकि जनता अपेक्षित रूप से सामग्री का पुन: उपयोग कर सके। लाइसेंसदाताओं को स्पष्ट रूप से किसी भी सामग्री को चिह्नित करना चाहिए जो लाइसेंस के अधीन नहीं है। इसमें अन्य CC-लाइसेंस प्राप्त सामग्री या कॉपीराइट के अपवाद या सीमा के तहत उपयोग की गई सामग्री शामिल है। लाइसेंसदाताओं के लिए अधिक विचार: wiki.creativecommons.org/Considerations_for_licensors
+
+ जनता के लिए विचार: हमारे सार्वजनिक लाइसेंस में से एक का उपयोग करके, एक लाइसेंसदाता जनता को निर्दिष्ट नियमों और शर्तों के तहत लाइसेंस प्राप्त सामग्री का उपयोग करने की अनुमति देता है। यदि किसी कारण से लाइसेंसदाता की अनुमति आवश्यक नहीं है - उदाहरण के लिए, किसी लागू अपवाद या कॉपीराइट की सीमा के कारण - तो उस उपयोग को लाइसेंस द्वारा विनियमित नहीं किया जाता है। हमारे लाइसेंस केवल कॉपीराइट और कुछ अन्य अधिकारों के तहत अनुमति देते हैं जिन्हें एक लाइसेंसदाता देने का अधिकार रखता है। लाइसेंस प्राप्त सामग्री का उपयोग अभी भी अन्य कारणों से प्रतिबंधित हो सकता है, क्योंकि दूसरों के पास सामग्री में कॉपीराइट या अन्य अधिकार हो सकते हैं। एक लाइसेंसदाता विशेष अनुरोध कर सकता है, जैसे कि सभी परिवर्तनों को चिह्नित या वर्णित किया जाए। यद्यपि हमारे लाइसेंस द्वारा आवश्यक नहीं है, आपसे अनुरोधों का सम्मान करने के लिए प्रोत्साहित किया जाता है जहां उचित हो। जनता के लिए अधिक विचार: wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-ShareAlike 4.0 International Public License
+
+लाइसेंस प्राप्त अधिकारों (नीचे परिभाषित) का उपयोग करके, आप इस Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License") के नियमों और शर्तों से बंधे होने के लिए सहमति देते हैं। जहां तक इस सार्वजनिक लाइसेंस को एक अनुबंध के रूप में व्याख्यायित किया जा सकता है, आपको इन नियमों और शर्तों की स्वीकृति के बदले में लाइसेंस प्राप्त अधिकार दिए जाते हैं, और लाइसेंसदाता आपको इन नियमों और शर्तों के तहत उपलब्ध कराई गई सामग्री से प्राप्त लाभों के बदले में ऐसे अधिकार प्रदान करता है।
+
+अनुभाग 1 -- परिभाषाएं।
+
+ a. अनुकूलित सामग्री का अर्थ है कॉपीराइट और समान अधिकारों के अधीन सामग्री जो लाइसेंस प्राप्त सामग्री से व्युत्पन्न है या उस पर आधारित है और जिसमें लाइसेंस प्राप्त सामग्री का अनुवाद, परिवर्तन, व्यवस्था, रूपांतरण, या अन्यथा संशोधन किया गया है, जिसके लिए लाइसेंसदाता द्वारा रखे गए कॉपीराइट और समान अधिकारों के तहत अनुमति की आवश्यकता होती है। इस सार्वजनिक लाइसेंस के प्रयोजनों के लिए, जहां लाइसेंस प्राप्त सामग्री एक संगीत कार्य, प्रदर्शन, या ध्वनि रिकॉर्डिंग है, अनुकूलित सामग्री हमेशा उत्पन्न होती है जहां लाइसेंस प्राप्त सामग्री को एक चलती छवि के साथ समयबद्ध संबंध में समकालीन किया जाता है।
+
+ b. अनुकूलक का लाइसेंस का अर्थ है वह लाइसेंस जिसे आप इस सार्वजनिक लाइसेंस के नियमों और शर्तों के अनुसार, अनुकूलित सामग्री में अपने योगदान में अपने कॉपीराइट और समान अधिकारों पर लागू करते हैं।
+
+ c. BY-SA संगत लाइसेंस का अर्थ है एक लाइसेंस जो creativecommons.org/compatiblelicenses पर सूचीबद्ध है, जिसे Creative Commons द्वारा इस सार्वजनिक लाइसेंस के समकक्ष के रूप में स्वीकृत किया गया है।
+
+ d. कॉपीराइट और समान अधिकार का अर्थ है कॉपीराइट और/या समान अधिकार जो कॉपीराइट से निकटता से संबंधित हैं, जिसमें प्रदर्शन, प्रसारण, ध्वनि रिकॉर्डिंग, और सूई जेनेरिस डेटाबेस अधिकार शामिल हैं, बिना यह ध्यान दिए कि अधिकारों को कैसे लेबल किया गया है या वर्गीकृत किया गया है। इस सार्वजनिक लाइसेंस के प्रयोजनों के लिए, अनुभाग 2(b)(1)-(2) में निर्दिष्ट अधिकार कॉपीराइट और समान अधिकार नहीं हैं।
+
+ e. प्रभावी तकनीकी उपाय का अर्थ है वे उपाय जो, उचित प्राधिकरण के अभाव में, 20 दिसंबर 1996 को अपनाई गई WIPO कॉपीराइट संधि के अनुच्छेद 11 के तहत दायित्वों को पूरा करने वाले कानूनों के तहत दरकिनार नहीं किए जा सकते हैं, और/या समान अंतर्राष्ट्रीय समझौतों के तहत।
+
+ f. अपवाद और सीमाएं का अर्थ है उचित उपयोग, उचित व्यवहार, और/या कोई अन्य अपवाद या सीमा जो कॉपीराइट और समान अधिकारों पर लागू होती है जो आपके द्वारा लाइसेंस प्राप्त सामग्री के उपयोग पर लागू होती है।
+
+ g. लाइसेंस तत्वों का अर्थ है एक Creative Commons सार्वजनिक लाइसेंस के नाम में सूचीबद्ध लाइसेंस गुण। इस सार्वजनिक लाइसेंस के लाइसेंस तत्व हैं Attribution और ShareAlike।
+
+ h. लाइसेंस प्राप्त सामग्री का अर्थ है वह कलात्मक या साहित्यिक कार्य, डेटाबेस, या अन्य सामग्री जिस पर लाइसेंसदाता ने इस सार्वजनिक लाइसेंस को लागू किया है।
+
+ i. लाइसेंस प्राप्त अधिकार का अर्थ है इस सार्वजनिक लाइसेंस के नियमों और शर्तों के अधीन आपको दिए गए अधिकार, जो आपके द्वारा लाइसेंस प्राप्त सामग्री के उपयोग पर लागू सभी कॉपीराइट और समान अधिकारों तक सीमित हैं और जिन्हें लाइसेंस देने का अधिकार लाइसेंसदाता के पास है।
+
+ j. लाइसेंसदाता का अर्थ है वह व्यक्ति(यों) या इकाई(यों) जो इस सार्वजनिक लाइसेंस के तहत अधिकार प्रदान कर रहे हैं।
+
+ k. साझा करना का अर्थ है किसी भी माध्यम या प्रक्रिया द्वारा जनता को सामग्री प्रदान करना जिसके लिए लाइसेंस प्राप्त अधिकारों के तहत अनुमति की आवश्यकता होती है, जैसे पुनरुत्पादन, सार्वजनिक प्रदर्शनी, सार्वजनिक प्रदर्शन, वितरण, प्रसार, संचार, या आयात, और जनता को सामग्री इस तरह उपलब्ध कराना कि जनता के सदस्य अपने द्वारा व्यक्तिगत रूप से चुने गए स्थान और समय पर सामग्री तक पहुँच सकें।
+
+ l. सूई जेनेरिस डेटाबेस अधिकार का अर्थ है कॉपीराइट के अलावा अन्य अधिकार जो 11 मार्च 1996 को यूरोपीय संसद और परिषद के निर्देश 96/9/EC से उत्पन्न होते हैं, जैसा कि संशोधित और/या सफल हुआ है, साथ ही दुनिया में कहीं भी अन्य समान अधिकार।
+
+ m. आप का अर्थ है वह व्यक्ति या इकाई जो इस सार्वजनिक लाइसेंस के तहत लाइसेंस प्राप्त अधिकारों का उपयोग कर रहा है। आपका का एक संबंधित अर्थ है।
+
+अनुभाग 2 -- दायरा।
+
+ a. लाइसेंस अनुदान।
+
+ 1. इस सार्वजनिक लाइसेंस के नियमों और शर्तों के अधीन, लाइसेंसदाता आपको एक विश्वव्यापी, रॉयल्टी-मुक्त, गैर-सब-लाइसेंस योग्य, गैर-अनन्य, अपरिवर्तनीय लाइसेंस प्रदान करता है ताकि लाइसेंस प्राप्त सामग्री में लाइसेंस प्राप्त अधिकारों का उपयोग किया जा सके:
+
+ a. लाइसेंस प्राप्त सामग्री का पुनरुत्पादन और साझा करना, पूरे या आंशिक रूप में; और
+
+ b. अनुकूलित सामग्री का उत्पादन, पुनरुत्पादन, और साझा करना।
+
+ 2. अपवाद और सीमाएं। संदेह से बचने के लिए, जहां अपवाद और सीमाएं आपके उपयोग पर लागू होती हैं, यह सार्वजनिक लाइसेंस लागू नहीं होता है, और आपको इसके नियमों और शर्तों का पालन करने की आवश्यकता नहीं है।
+
+ 3. अवधि। इस सार्वजनिक लाइसेंस की अवधि अनुभाग 6(a) में निर्दिष्ट है।
+
+ 4. मीडिया और प्रारूप; तकनीकी संशोधनों की अनुमति। लाइसेंसदाता आपको सभी मीडिया और प्रारूपों में लाइसेंस प्राप्त अधिकारों का उपयोग करने के लिए अधिकृत करता है, चाहे अब ज्ञात हो या बाद में बनाया गया हो, और ऐसा करने के लिए आवश्यक तकनीकी संशोधन करने के लिए। लाइसेंसदाता किसी भी अधिकार या प्राधिकरण को छोड़ देता है और/या सहमत होता है कि आपको लाइसेंस प्राप्त अधिकारों का उपयोग करने के लिए आवश्यक तकनीकी संशोधन करने से मना नहीं करेगा, जिसमें प्रभावी तकनीकी उपायों को दरकिनार करने के लिए आवश्यक तकनीकी संशोधन शामिल हैं। इस सार्वजनिक लाइसेंस के प्रयोजनों के लिए, इस अनुभाग 2(a)(4) द्वारा अधिकृत संशोधन करने से कभी भी अनुकूलित सामग्री उत्पन्न नहीं होती है।
+
+ 5. डाउनस्ट्रीम प्राप्तकर्ता।
+
+ a. लाइसेंसदाता से प्रस्ताव -- लाइसेंस प्राप्त सामग्री। लाइसेंस प्राप्त सामग्री के हर प्राप्तकर्ता को स्वचालित रूप से लाइसेंसदाता से इस सार्वजनिक लाइसेंस के नियमों और शर्तों के तहत लाइसेंस प्राप्त अधिकारों का उपयोग करने का प्रस्ताव मिलता है।
+
+ b. लाइसेंसदाता से अतिरिक्त प्रस्ताव -- अनुकूलित सामग्री। आपसे अनुकूलित सामग्री के हर प्राप्तकर्ता को स्वचालित रूप से लाइसेंसदाता से अनुकूलित सामग्री में लाइसेंस प्राप्त अधिकारों का उपयोग करने का प्रस्ताव मिलता है, उन शर्तों के अनुसार जो आपके द्वारा लागू किए गए अनुकूलक के लाइसेंस में निर्दिष्ट हैं।
+
+ c. डाउनस्ट्रीम प्रतिबंध नहीं। आप लाइसेंस प्राप्त सामग्री पर कोई अतिरिक्त या भिन्न नियम और शर्तें पेश या लागू नहीं कर सकते हैं, या लाइसेंस प्राप्त सामग्री पर कोई प्रभावी तकनीकी उपाय लागू नहीं कर सकते हैं, यदि ऐसा करने से लाइसेंस प्राप्त सामग्री के किसी भी प्राप्तकर्ता द्वारा लाइसेंस प्राप्त अधिकारों का उपयोग प्रतिबंधित होता है।
+
+ 6. अनुमोदन नहीं। इस सार्वजनिक लाइसेंस में कुछ भी ऐसा नहीं है जिसे यह अनुमति माना जाए या जिसकी ऐसी व्याख्या की जाए कि आप, या लाइसेंस प्राप्त सामग्री का आपका उपयोग, लाइसेंसदाता या अनुभाग 3(a)(1)(A)(i) में श्रेय पाने के लिए निर्दिष्ट अन्य लोगों से जुड़ा है, या उनके द्वारा प्रायोजित, अनुमोदित, या आधिकारिक दर्जा प्राप्त है।
+
+ b. अन्य अधिकार।
+
+ 1. नैतिक अधिकार, जैसे अखंडता का अधिकार, इस सार्वजनिक लाइसेंस के तहत लाइसेंस प्राप्त नहीं हैं, और न ही प्रचार, गोपनीयता, और/या अन्य समान व्यक्तित्व अधिकार; हालांकि, जहां तक संभव हो, लाइसेंसदाता अपने पास रखे ऐसे किसी भी अधिकार को उस सीमित हद तक छोड़ देता है और/या उन पर दावा न करने के लिए सहमत होता है, जितना आपको लाइसेंस प्राप्त अधिकारों का उपयोग करने की अनुमति देने के लिए आवश्यक है, लेकिन अन्यथा नहीं।
+
+ 2. पेटेंट और ट्रेडमार्क अधिकार इस सार्वजनिक लाइसेंस के तहत लाइसेंस प्राप्त नहीं हैं।
+
+ 3. जहां तक संभव हो, लाइसेंसदाता किसी भी स्वैच्छिक या छोड़े जाने योग्य सांविधिक या अनिवार्य लाइसेंसिंग योजना के तहत सीधे या किसी संग्रहण समाज के माध्यम से लाइसेंस प्राप्त अधिकारों के उपयोग के लिए रॉयल्टी एकत्र करने के किसी भी अधिकार को छोड़ देता है। अन्य सभी मामलों में, लाइसेंसदाता स्पष्ट रूप से ऐसे रॉयल्टी एकत्र करने के किसी भी अधिकार को सुरक्षित रखता है।
+
+अनुभाग 3 -- लाइसेंस शर्तें।
+
+आपके द्वारा लाइसेंस प्राप्त अधिकारों का उपयोग विशेष रूप से निम्नलिखित शर्तों के अधीन किया गया है।
+
+ a. श्रेय।
+
+ 1. यदि आप लाइसेंस प्राप्त सामग्री को साझा करते हैं (संशोधित रूप में भी), तो आपको निम्नलिखित को बनाए रखना चाहिए यदि यह लाइसेंसदाता द्वारा लाइसेंस प्राप्त सामग्री के साथ प्रदान किया गया है:
+
+ i. लाइसेंस प्राप्त सामग्री के निर्माता(ओं) की पहचान और श्रेय पाने के लिए निर्दिष्ट किसी भी अन्य की पहचान, किसी भी उचित तरीके से जैसा लाइसेंसदाता द्वारा अनुरोध किया गया हो (यदि निर्दिष्ट हो तो छद्म नाम से भी);
+
+ ii. एक कॉपीराइट नोटिस;
+
+ iii. एक नोटिस जो इस सार्वजनिक लाइसेंस का उल्लेख करता है;
+
+ iv. एक नोटिस जो वारंटी के अस्वीकरण का उल्लेख करता है;
+
+ v. जहां तक व्यावहारिक रूप से संभव हो, लाइसेंस प्राप्त सामग्री का एक URI या हाइपरलिंक;
+
+ b. यदि आपने लाइसेंस प्राप्त सामग्री को संशोधित किया है तो संकेत दें और किसी भी पिछले संशोधनों का संकेत बनाए रखें; और
+
+ c. संकेत दें कि लाइसेंस प्राप्त सामग्री इस सार्वजनिक लाइसेंस के तहत लाइसेंस प्राप्त है, और इस सार्वजनिक लाइसेंस के पाठ, या URI या हाइपरलिंक को शामिल करें।
+
+ 2. आप अनुभाग 3(a)(1) में शर्तों को किसी भी उचित तरीके से पूरा कर सकते हैं, जिस माध्यम, साधन, और संदर्भ के आधार पर जिसमें आप लाइसेंस प्राप्त सामग्री साझा करते हैं। उदाहरण के लिए, यह शर्तों को पूरा करने के लिए एक संसाधन के URI या हाइपरलिंक प्रदान करके उचित हो सकता है जिसमें आवश्यक जानकारी शामिल है।
+
+ 3. यदि लाइसेंसदाता द्वारा अनुरोध किया गया है, तो आपको अनुभाग 3(a)(1)(A) द्वारा आवश्यक किसी भी जानकारी को हटाना होगा जहां तक सम्भव हो।
+
+ b. ShareAlike।
+
+ अनुभाग 3(a) में शर्तों के अलावा, यदि आप अनुकूलित सामग्री साझा करते हैं जिसे आपने उत्पादित किया है, तो निम्नलिखित शर्तें भी लागू होती हैं।
+
+ 1. अनुकूलक का लाइसेंस जिसे आप लागू करते हैं वह Creative Commons लाइसेंस होना चाहिए जिसमें वही लाइसेंस तत्व हों, यह संस्करण या बाद का, या एक BY-SA संगत लाइसेंस।
+
+ 2. आपको अनुकूलक का लाइसेंस जिसे आप लागू करते हैं, के पाठ, या URI या हाइपरलिंक को शामिल करना होगा। आप इस शर्त को किसी भी उचित तरीके से पूरा कर सकते हैं, जिस माध्यम, साधन, और संदर्भ के आधार पर जिसमें आप अनुकूलित सामग्री साझा करते हैं।
+
+ 3. आप अनुकूलित सामग्री पर कोई अतिरिक्त या भिन्न नियम या शर्तें पेश या लागू नहीं कर सकते हैं, या कोई प्रभावी तकनीकी उपाय लागू नहीं कर सकते हैं, जो आपके द्वारा लागू किए गए अनुकूलक के लाइसेंस के तहत दिए गए अधिकारों के उपयोग को प्रतिबंधित करते हैं।
+
+अनुभाग 4 -- सूई जेनेरिस डेटाबेस अधिकार।
+
+जहां लाइसेंस प्राप्त अधिकारों में सूई जेनेरिस डेटाबेस अधिकार शामिल हैं जो आपके द्वारा लाइसेंस प्राप्त सामग्री के उपयोग पर लागू होते हैं:
+
+ a. संदेह से बचने के लिए, अनुभाग 2(a)(1) आपको डेटाबेस की सामग्री का निष्कर्षण, पुन: उपयोग, पुनरुत्पादन, और साझा करने का अधिकार देता है;
+
+ b. यदि आप डेटाबेस की सभी या एक महत्वपूर्ण हिस्से की सामग्री को एक ऐसे डेटाबेस में शामिल करते हैं जिसमें आपके पास सूई जेनेरिस डेटाबेस अधिकार हैं, तो वह डेटाबेस जिसमें आपके सूई जेनेरिस डेटाबेस अधिकार हैं (लेकिन उसकी व्यक्तिगत सामग्री नहीं), अनुभाग 3(b) के प्रयोजनों के लिए अनुकूलित सामग्री माना जाता है; और
+
+ c. आपको अनुभाग 3(a) में शर्तों का पालन करना होगा यदि आप डेटाबेस की सभी या एक महत्वपूर्ण हिस्से की सामग्री को साझा करते हैं।
+
+संदेह से बचने के लिए, यह अनुभाग 4 आपके दायित्वों को इस सार्वजनिक लाइसेंस के तहत पूरक करता है जहां लाइसेंस प्राप्त अधिकारों में अन्य कॉपीराइट और समान अधिकार शामिल हैं।
+
+अनुभाग 5 -- वारंटी अस्वीकरण और देयता की सीमा।
+
+ a. जब तक लाइसेंसदाता द्वारा अलग से नहीं किया गया है, जहां तक संभव हो, लाइसेंसदाता लाइसेंस प्राप्त सामग्री को "जैसा है" और "जहां उपलब्ध है" के आधार पर प्रदान करता है, और लाइसेंस प्राप्त सामग्री के संबंध में किसी भी प्रकार की कोई प्रतिनिधित्व या वारंटी नहीं देता है, चाहे वह स्पष्ट हो, निहित हो, सांविधिक हो, या अन्य। इसमें, बिना सीमा के, शीर्षक की वारंटी, व्यापारिकता, किसी विशेष उद्देश्य के लिए उपयुक्तता, गैर-उल्लंघन, छिपे हुए या अन्य दोषों की अनुपस्थिति, सटीकता, या त्रुटियों की उपस्थिति या अनुपस्थिति, चाहे ज्ञात हो या खोज योग्य हो। जहां वारंटी अस्वीकरण पूरी तरह से या आंशिक रूप से अनुमति नहीं है, यह अस्वीकरण आप पर लागू नहीं हो सकता है।
+
+ b. जहां तक संभव हो, किसी भी स्थिति में लाइसेंसदाता आपके प्रति किसी भी कानूनी सिद्धांत (जिसमें, बिना सीमा के, लापरवाही शामिल है) के तहत या अन्यथा, इस सार्वजनिक लाइसेंस या लाइसेंस प्राप्त सामग्री के उपयोग से उत्पन्न किसी भी प्रत्यक्ष, विशेष, अप्रत्यक्ष, आकस्मिक, परिणामी, दंडात्मक, अनुकरणीय, या अन्य हानि, लागत, व्यय, या क्षति के लिए उत्तरदायी नहीं होगा, भले ही लाइसेंसदाता को ऐसी हानि, लागत, व्यय, या क्षति की संभावना के बारे में सूचित किया गया हो। जहां देयता की सीमा पूरी तरह से या आंशिक रूप से अनुमत नहीं है, यह सीमा आप पर लागू नहीं हो सकती है।
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयास करते हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या अशुद्धियाँ हो सकती हैं। मूल दस्तावेज़ को उसकी मूल भाषा में प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/hi/sketchnotes/README.md b/translations/hi/sketchnotes/README.md
new file mode 100644
index 000000000..a4b53d9c4
--- /dev/null
+++ b/translations/hi/sketchnotes/README.md
@@ -0,0 +1,10 @@
+सभी पाठ्यक्रम की स्केच नोट्स यहाँ से डाउनलोड की जा सकती हैं।
+
+🖨 उच्च-रिज़ॉल्यूशन में प्रिंटिंग के लिए, TIFF संस्करण [इस रिपॉज़िटरी](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff) पर उपलब्ध हैं।
+
+🎨 बनाया गया: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**अस्वीकरण**:
+यह दस्तावेज़ मशीन आधारित एआई अनुवाद सेवाओं का उपयोग करके अनुवादित किया गया है। जबकि हम सटीकता के लिए प्रयासरत हैं, कृपया ध्यान दें कि स्वचालित अनुवादों में त्रुटियाँ या गलतियाँ हो सकती हैं। मूल भाषा में दस्तावेज़ को प्रामाणिक स्रोत माना जाना चाहिए। महत्वपूर्ण जानकारी के लिए, पेशेवर मानव अनुवाद की सिफारिश की जाती है। इस अनुवाद के उपयोग से उत्पन्न किसी भी गलतफहमी या गलत व्याख्या के लिए हम उत्तरदायी नहीं हैं।
\ No newline at end of file
diff --git a/translations/it/1-Introduction/1-intro-to-ML/README.md b/translations/it/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..0941b011d
--- /dev/null
+++ b/translations/it/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# Introduzione al machine learning
+
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "ML per principianti - Introduzione al Machine Learning per Principianti")
+
+> 🎥 Clicca sull'immagine sopra per un breve video che esplora questa lezione.
+
+Benvenuto in questo corso sul machine learning classico per principianti! Che tu sia completamente nuovo a questo argomento o un praticante esperto di ML che vuole rinfrescare alcune conoscenze, siamo felici che tu ti unisca a noi! Vogliamo creare un punto di partenza accogliente per il tuo studio del ML e saremmo felici di accogliere i tuoi [feedback](https://github.com/microsoft/ML-For-Beginners/discussions), rispondervi e incorporarli.
+
+[](https://youtu.be/h0e2HAPTGF4 "Introduzione al ML")
+
+> 🎥 Clicca sull'immagine sopra per un video: John Guttag del MIT introduce il machine learning
+
+---
+## Iniziare con il machine learning
+
+Prima di iniziare con questo curriculum, è necessario configurare il computer e prepararlo per eseguire notebook localmente.
+
+- **Configura il tuo computer con questi video**. Utilizza i seguenti link per imparare [come installare Python](https://youtu.be/CXZYvNRIAKM) nel tuo sistema e [configurare un editor di testo](https://youtu.be/EU8eayHWoZg) per lo sviluppo.
+- **Impara Python**. È anche raccomandato avere una comprensione di base di [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), un linguaggio di programmazione utile per i data scientist che utilizziamo in questo corso.
+- **Impara Node.js e JavaScript**. Utilizziamo anche JavaScript alcune volte in questo corso quando costruiamo applicazioni web, quindi sarà necessario avere [node](https://nodejs.org) e [npm](https://www.npmjs.com/) installati, oltre a [Visual Studio Code](https://code.visualstudio.com/) disponibile per lo sviluppo sia in Python che in JavaScript.
+- **Crea un account GitHub**. Dato che ci hai trovato qui su [GitHub](https://github.com), potresti già avere un account, ma se non lo hai, creane uno e poi fai un fork di questo curriculum per utilizzarlo per conto tuo. (Sentiti libero di lasciarci una stella, 😊)
+- **Esplora Scikit-learn**. Familiarizza con [Scikit-learn](https://scikit-learn.org/stable/user_guide.html), un insieme di librerie ML che citiamo in queste lezioni. Un assaggio minimo è riportato subito sotto questo elenco.
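+
+A titolo di assaggio, ecco uno schizzo minimo (ipotetico, non parte del materiale originale) di come appare un modello Scikit-learn; i dettagli verranno spiegati nelle lezioni:
+
+```python
+# Schizzo minimo: addestrare e valutare un classificatore k-NN con Scikit-learn
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+from sklearn.neighbors import KNeighborsClassifier
+
+X, y = load_iris(return_X_y=True)  # piccolo dataset di esempio incluso nella libreria
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)  # addestramento
+print(f"Accuratezza sul test set: {model.score(X_test, y_test):.2f}")  # valutazione
+```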
+
+---
+## Cos'è il machine learning?
+
+Il termine 'machine learning' è uno dei più popolari e frequentemente utilizzati al giorno d'oggi. È molto probabile che tu abbia sentito questo termine almeno una volta se hai una certa familiarità con la tecnologia, indipendentemente dal settore in cui lavori. La meccanica del machine learning, tuttavia, è un mistero per la maggior parte delle persone. Per un principiante del machine learning, l'argomento può a volte sembrare travolgente. Pertanto, è importante capire cosa sia realmente il machine learning e impararlo passo dopo passo, attraverso esempi pratici.
+
+---
+## La curva dell'hype
+
+
+
+> Google Trends mostra la recente 'curva dell'hype' del termine 'machine learning'
+
+---
+## Un universo misterioso
+
+Viviamo in un universo pieno di misteri affascinanti. Grandi scienziati come Stephen Hawking, Albert Einstein e molti altri hanno dedicato la loro vita alla ricerca di informazioni significative che svelino i misteri del mondo che ci circonda. Questa è la condizione umana dell'apprendimento: un bambino umano impara nuove cose e scopre la struttura del suo mondo anno dopo anno mentre cresce fino all'età adulta.
+
+---
+## Il cervello del bambino
+
+Il cervello e i sensi di un bambino percepiscono i fatti del suo ambiente e gradualmente apprendono i modelli nascosti della vita che aiutano il bambino a creare regole logiche per identificare i modelli appresi. Il processo di apprendimento del cervello umano rende gli esseri umani la creatura vivente più sofisticata di questo mondo. Imparare continuamente scoprendo modelli nascosti e poi innovando su quei modelli ci permette di migliorare sempre di più nel corso della nostra vita. Questa capacità di apprendere ed evolvere è correlata a un concetto chiamato [plasticità cerebrale](https://www.simplypsychology.org/brain-plasticity.html). Superficialmente, possiamo tracciare alcune somiglianze motivazionali tra il processo di apprendimento del cervello umano e i concetti del machine learning.
+
+---
+## Il cervello umano
+
+Il [cervello umano](https://www.livescience.com/29365-human-brain.html) percepisce le cose dal mondo reale, elabora le informazioni percepite, prende decisioni razionali e compie determinate azioni in base alle circostanze. Questo è ciò che chiamiamo comportarsi in modo intelligente. Quando programmiamo una copia del processo comportamentale intelligente su una macchina, viene chiamata intelligenza artificiale (AI).
+
+---
+## Alcuni termini
+
+Sebbene i termini possano essere confusi, il machine learning (ML) è un sottoinsieme importante dell'intelligenza artificiale. **ML si occupa di utilizzare algoritmi specializzati per scoprire informazioni significative e trovare modelli nascosti dai dati percepiti per corroborare il processo decisionale razionale**.
+
+---
+## AI, ML, Deep Learning
+
+
+
+> Un diagramma che mostra le relazioni tra AI, ML, deep learning e data science. Infografica di [Jen Looper](https://twitter.com/jenlooper) ispirata da [questa grafica](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)
+
+---
+## Concetti da coprire
+
+In questo curriculum, copriremo solo i concetti fondamentali del machine learning che un principiante deve conoscere. Copriamo ciò che chiamiamo 'machine learning classico' utilizzando principalmente Scikit-learn, un'ottima libreria che molti studenti utilizzano per imparare le basi. Per comprendere concetti più ampi di intelligenza artificiale o deep learning, è indispensabile una solida conoscenza fondamentale del machine learning, e quindi vorremmo offrirla qui.
+
+---
+## In questo corso imparerai:
+
+- concetti fondamentali del machine learning
+- la storia del ML
+- ML ed equità
+- tecniche di regressione ML
+- tecniche di classificazione ML
+- tecniche di clustering ML
+- tecniche di elaborazione del linguaggio naturale ML
+- tecniche di previsione delle serie temporali ML
+- apprendimento per rinforzo
+- applicazioni reali per ML
+
+---
+## Cosa non copriremo
+
+- deep learning
+- reti neurali
+- AI
+
+Per migliorare l'esperienza di apprendimento, eviteremo le complessità delle reti neurali, del 'deep learning' - costruzione di modelli a più strati utilizzando reti neurali - e dell'AI, che discuteremo in un curriculum diverso. Offriremo anche un prossimo curriculum di data science per concentrarci su quell'aspetto di questo campo più ampio.
+
+---
+## Perché studiare il machine learning?
+
+Il machine learning, da una prospettiva di sistemi, è definito come la creazione di sistemi automatizzati che possono apprendere modelli nascosti dai dati per aiutare a prendere decisioni intelligenti.
+
+Questa motivazione è vagamente ispirata da come il cervello umano apprende certe cose basate sui dati che percepisce dal mondo esterno.
+
+✅ Pensa per un minuto perché un'azienda vorrebbe provare a utilizzare strategie di machine learning rispetto a creare un motore basato su regole codificate.
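+
+Per rendere concreta la riflessione, ecco uno schizzo ipotetico (dati e soglie inventati, non parte del materiale originale) che confronta una regola codificata a mano con un modello che apprende il confine decisionale dai dati:
+
+```python
+# Regola fissa vs modello appreso: esempio illustrativo con dati fittizi
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+
+def regola_fissa(importo):
+    return importo > 100  # soglia scelta a mano: va riscritta se il mondo cambia
+
+# Un modello impara invece il confine dagli esempi etichettati
+importi = np.array([[10], [50], [90], [120], [200], [500]])
+sospette = np.array([0, 0, 0, 1, 1, 1])
+modello = LogisticRegression().fit(importi, sospette)
+
+# La regola resta fissa; il modello si riaddestra quando arrivano nuovi dati
+print(regola_fissa(150), modello.predict([[150]])[0])
+```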
+
+---
+## Applicazioni del machine learning
+
+Le applicazioni del machine learning sono ora quasi ovunque e sono tanto onnipresenti quanto i dati che fluiscono nelle nostre società, generati dai nostri smartphone, dispositivi connessi e altri sistemi. Considerando l'immenso potenziale degli algoritmi di machine learning all'avanguardia, i ricercatori hanno esplorato la loro capacità di risolvere problemi reali multidimensionali e multidisciplinari con grandi risultati positivi.
+
+---
+## Esempi di ML applicato
+
+**Puoi utilizzare il machine learning in molti modi**:
+
+- Per prevedere la probabilità di una malattia dalla storia medica di un paziente o dai rapporti.
+- Per sfruttare i dati meteorologici per prevedere eventi meteorologici.
+- Per comprendere il sentimento di un testo.
+- Per rilevare notizie false per fermare la diffusione della propaganda.
+
+Finanza, economia, scienze della terra, esplorazione spaziale, ingegneria biomedica, scienze cognitive e persino campi nelle scienze umane hanno adattato il machine learning per risolvere i problemi ardui e pesanti di elaborazione dei dati del loro settore.
+
+---
+## Conclusione
+
+Il machine learning automatizza il processo di scoperta dei modelli trovando intuizioni significative dai dati reali o generati. Si è dimostrato altamente prezioso in applicazioni aziendali, sanitarie e finanziarie, tra le altre.
+
+Nel prossimo futuro, comprendere le basi del machine learning sarà un must per le persone di qualsiasi settore a causa della sua adozione diffusa.
+
+---
+# 🚀 Sfida
+
+Disegna, su carta o utilizzando un'app online come [Excalidraw](https://excalidraw.com/), la tua comprensione delle differenze tra AI, ML, deep learning e data science. Aggiungi alcune idee sui problemi che ciascuna di queste tecniche è brava a risolvere.
+
+# [Quiz post-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# Revisione & Studio Autonomo
+
+Per saperne di più su come puoi lavorare con gli algoritmi ML nel cloud, segui questo [Percorso di Apprendimento](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott).
+
+Segui un [Percorso di Apprendimento](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) sui fondamenti del ML.
+
+---
+# Compito
+
+[Inizia a lavorare](assignment.md)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatizzata basati su intelligenza artificiale. Sebbene ci impegniamo per l'accuratezza, si prega di notare che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/1-intro-to-ML/assignment.md b/translations/it/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..6af698e00
--- /dev/null
+++ b/translations/it/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# Iniziare a Lavorare
+
+## Istruzioni
+
+In questo compito non valutato, dovresti ripassare Python e configurare il tuo ambiente per eseguire i notebook.
+
+Segui questo [Percorso di Apprendimento Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), e poi configura i tuoi sistemi guardando questi video introduttivi:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Sebbene ci sforziamo di garantire l'accuratezza, si prega di notare che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali fraintendimenti o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/2-history-of-ML/README.md b/translations/it/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..7f4c5a2af
--- /dev/null
+++ b/translations/it/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# Storia del machine learning
+
+
+> Schizzo di [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[](https://youtu.be/N6wxM4wZ7V0 "ML per principianti - Storia del Machine Learning")
+
+> 🎥 Clicca sull'immagine qui sopra per un breve video su questa lezione.
+
+In questa lezione, passeremo in rassegna i principali traguardi nella storia del machine learning e dell'intelligenza artificiale.
+
+La storia dell'intelligenza artificiale (AI) come campo è intrecciata con la storia del machine learning, poiché gli algoritmi e i progressi computazionali che sostengono il ML hanno alimentato lo sviluppo dell'AI. È utile ricordare che, sebbene questi campi come aree di indagine distinte abbiano iniziato a cristallizzarsi negli anni '50, importanti [scoperte algoritmiche, statistiche, matematiche, computazionali e tecniche](https://wikipedia.org/wiki/Timeline_of_machine_learning) hanno preceduto e si sono sovrapposte a questa era. In effetti, le persone hanno pensato a queste domande per [centinaia di anni](https://wikipedia.org/wiki/History_of_artificial_intelligence): questo articolo discute le basi intellettuali storiche dell'idea di una 'macchina pensante'.
+
+---
+## Scoperte notevoli
+
+- 1763, 1812 [Teorema di Bayes](https://wikipedia.org/wiki/Bayes%27_theorem) e i suoi predecessori. Questo teorema e le sue applicazioni sono alla base dell'inferenza, descrivendo la probabilità che un evento si verifichi basandosi su conoscenze pregresse (la formula è riportata dopo questo elenco).
+- 1805 [Teoria dei minimi quadrati](https://wikipedia.org/wiki/Least_squares) del matematico francese Adrien-Marie Legendre. Questa teoria, che imparerai nella nostra unità sulla Regressione, aiuta nell'adattamento dei dati.
+- 1913 [Catene di Markov](https://wikipedia.org/wiki/Markov_chain), chiamate così in onore del matematico russo Andrey Markov, vengono utilizzate per descrivere una sequenza di eventi possibili basati su uno stato precedente.
+- 1957 [Perceptron](https://wikipedia.org/wiki/Perceptron) è un tipo di classificatore lineare inventato dallo psicologo americano Frank Rosenblatt che è alla base dei progressi nel deep learning.
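+
+Per riferimento (formula aggiunta qui solo a titolo illustrativo), il teorema di Bayes nella sua forma moderna si scrive:
+
+$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$
+
+dove $P(A \mid B)$ è la probabilità dell'evento $A$ date le conoscenze pregresse $B$.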
+
+---
+
+- 1967 [Nearest Neighbor](https://wikipedia.org/wiki/Nearest_neighbor) è un algoritmo originariamente progettato per mappare percorsi. In un contesto ML viene utilizzato per rilevare modelli.
+- 1970 [Backpropagation](https://wikipedia.org/wiki/Backpropagation) viene utilizzato per addestrare le [reti neurali feedforward](https://wikipedia.org/wiki/Feedforward_neural_network).
+- 1982 [Reti Neurali Ricorrenti](https://wikipedia.org/wiki/Recurrent_neural_network) sono reti neurali artificiali derivate dalle reti neurali feedforward che creano grafici temporali.
+
+✅ Fai una piccola ricerca. Quali altre date si distinguono come fondamentali nella storia del ML e dell'AI?
+
+---
+## 1950: Macchine che pensano
+
+Alan Turing, una persona davvero straordinaria che è stata votata [dal pubblico nel 2019](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) come il più grande scienziato del XX secolo, è accreditato di aver contribuito a gettare le basi per il concetto di 'macchina che può pensare'. Ha lottato con i detrattori e con il proprio bisogno di prove empiriche di questo concetto in parte creando il [Test di Turing](https://www.bbc.com/news/technology-18475646), che esplorerai nelle nostre lezioni di NLP.
+
+---
+## 1956: Progetto di ricerca estivo di Dartmouth
+
+"Il Progetto di ricerca estivo di Dartmouth sull'intelligenza artificiale è stato un evento fondamentale per l'intelligenza artificiale come campo," ed è stato qui che il termine 'intelligenza artificiale' è stato coniato ([fonte](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> Ogni aspetto dell'apprendimento o di qualsiasi altra caratteristica dell'intelligenza può, in linea di principio, essere descritto così precisamente che una macchina può essere costruita per simularlo.
+
+---
+
+Il ricercatore principale, il professore di matematica John McCarthy, sperava "di procedere sulla base dell'ipotesi che ogni aspetto dell'apprendimento o di qualsiasi altra caratteristica dell'intelligenza possa, in linea di principio, essere descritto così precisamente che una macchina può essere costruita per simularlo." I partecipanti includevano un'altra luminare nel campo, Marvin Minsky.
+
+Il workshop è accreditato di aver avviato e incoraggiato diverse discussioni, tra cui "l'ascesa dei metodi simbolici, sistemi focalizzati su domini limitati (primi sistemi esperti) e sistemi deduttivi contro sistemi induttivi." ([fonte](https://wikipedia.org/wiki/Dartmouth_workshop)).
+
+---
+## 1956 - 1974: "Gli anni d'oro"
+
+Dagli anni '50 fino alla metà degli anni '70, l'ottimismo era alto nella speranza che l'AI potesse risolvere molti problemi. Nel 1967, Marvin Minsky affermava con sicurezza che "Entro una generazione ... il problema di creare 'intelligenza artificiale' sarà sostanzialmente risolto." (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
+
+La ricerca sull'elaborazione del linguaggio naturale fiorì, la funzionalità di ricerca (search) fu perfezionata e resa più potente, e fu creato il concetto di 'micro-mondi', in cui compiti semplici venivano completati usando istruzioni in linguaggio naturale.
+
+---
+
+La ricerca era ben finanziata da agenzie governative, si fecero progressi nel calcolo e negli algoritmi, e furono costruiti prototipi di macchine intelligenti. Alcune di queste macchine includono:
+
+* [Shakey il robot](https://wikipedia.org/wiki/Shakey_the_robot), che poteva manovrare e decidere come eseguire i compiti 'intelligentemente'.
+
+ 
+ > Shakey nel 1972
+
+---
+
+* Eliza, un primo 'chatterbot', poteva conversare con le persone e agire come un primitivo 'terapeuta'. Imparerai di più su Eliza nelle lezioni di NLP.
+
+ 
+ > Una versione di Eliza, un chatbot
+
+---
+
+* "Blocks world" era un esempio di micro-mondo dove i blocchi potevano essere impilati e ordinati, e si potevano testare esperimenti nell'insegnare alle macchine a prendere decisioni. I progressi costruiti con librerie come [SHRDLU](https://wikipedia.org/wiki/SHRDLU) hanno aiutato a spingere avanti l'elaborazione del linguaggio.
+
+ [](https://www.youtube.com/watch?v=QAJz4YKUwqw "blocks world con SHRDLU")
+
+ > 🎥 Clicca sull'immagine qui sopra per un video: Blocks world con SHRDLU
+
+---
+## 1974 - 1980: "AI Winter"
+
+Entro la metà degli anni '70, era diventato evidente che la complessità di creare 'macchine intelligenti' era stata sottovalutata e che la sua promessa, data la potenza di calcolo disponibile, era stata sopravvalutata. I finanziamenti si prosciugarono e la fiducia nel campo rallentò. Alcuni problemi che hanno influenzato la fiducia includevano:
+
+---
+- **Limitazioni**. La potenza di calcolo era troppo limitata.
+- **Esplosione combinatoria**. La quantità di parametri che dovevano essere addestrati cresceva esponenzialmente man mano che si chiedeva di più ai computer, senza un'evoluzione parallela della potenza di calcolo e delle capacità.
+- **Scarsità di dati**. C'era una scarsità di dati che ostacolava il processo di test, sviluppo e raffinamento degli algoritmi.
+- **Stiamo facendo le domande giuste?**. Le stesse domande che venivano poste cominciarono a essere messe in discussione. I ricercatori iniziarono a ricevere critiche sui loro approcci:
+ - I test di Turing furono messi in discussione tramite, tra le altre idee, la 'teoria della stanza cinese' che postulava che, "programmare un computer digitale può farlo apparire come se comprendesse il linguaggio ma non potrebbe produrre una vera comprensione." ([fonte](https://plato.stanford.edu/entries/chinese-room/))
+ - L'etica dell'introduzione di intelligenze artificiali come il "terapeuta" ELIZA nella società fu messa in discussione.
+
+---
+
+Allo stesso tempo, varie scuole di pensiero sull'AI cominciarono a formarsi. Si stabilì una dicotomia tra le pratiche ["scruffy" e "neat AI"](https://wikipedia.org/wiki/Neats_and_scruffies). I laboratori _scruffy_ modificavano i programmi per ore fino a ottenere i risultati desiderati. I laboratori _neat_ "si concentravano sulla logica e sulla risoluzione formale dei problemi". ELIZA e SHRDLU erano sistemi _scruffy_ ben noti. Negli anni '80, con l'emergere della domanda di rendere i sistemi ML riproducibili, l'approccio _neat_ prese gradualmente il sopravvento poiché i suoi risultati sono più spiegabili.
+
+---
+## Sistemi esperti degli anni '80
+
+Man mano che il campo cresceva, il suo beneficio per il business diventava più chiaro, e negli anni '80 proliferarono i 'sistemi esperti'. "I sistemi esperti furono tra le prime forme di software di intelligenza artificiale (AI) veramente di successo." ([fonte](https://wikipedia.org/wiki/Expert_system)).
+
+Questo tipo di sistema è in realtà _ibrido_, costituito in parte da un motore di regole che definisce i requisiti aziendali e in parte da un motore di inferenza che sfrutta il sistema di regole per dedurre nuovi fatti.
+
+Quest'era vide anche una crescente attenzione rivolta alle reti neurali.
+
+---
+## 1987 - 1993: "Chill" dell'AI
+
+La proliferazione di hardware specializzato per i sistemi esperti ebbe lo sfortunato effetto di renderlo troppo specializzato. L'ascesa dei personal computer, inoltre, competeva con questi sistemi grandi, specializzati e centralizzati. La democratizzazione del calcolo era iniziata e, alla fine, spianò la strada alla moderna esplosione dei big data.
+
+---
+## 1993 - 2011
+
+Quest'epoca vide una nuova era per il ML e l'AI, in grado di risolvere alcuni dei problemi causati in precedenza dalla mancanza di dati e di potenza di calcolo. La quantità di dati iniziò a crescere rapidamente e a diventare più ampiamente disponibile, nel bene e nel male, soprattutto con l'avvento dello smartphone intorno al 2007. La potenza di calcolo si espanse esponenzialmente e gli algoritmi si evolsero di pari passo. Il campo iniziò a maturare man mano che lo spirito pionieristico del passato si cristallizzava in una vera disciplina.
+
+---
+## Oggi
+
+Oggi il machine learning e l'AI toccano quasi ogni parte delle nostre vite. Quest'era richiede una comprensione attenta dei rischi e degli effetti potenziali di questi algoritmi sulle vite umane. Come ha affermato Brad Smith di Microsoft, "La tecnologia dell'informazione solleva questioni che vanno al cuore delle protezioni fondamentali dei diritti umani come la privacy e la libertà di espressione. Queste questioni aumentano la responsabilità delle aziende tecnologiche che creano questi prodotti. A nostro avviso, richiedono anche una regolamentazione governativa ponderata e lo sviluppo di norme sull'uso accettabile" ([fonte](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+Resta da vedere cosa riserva il futuro, ma è importante comprendere questi sistemi informatici e il software e gli algoritmi che eseguono. Speriamo che questo curriculum ti aiuti a ottenere una migliore comprensione in modo che tu possa decidere da solo.
+
+[](https://www.youtube.com/watch?v=mTtDfKgLm54 "La storia del deep learning")
+> 🎥 Clicca sull'immagine qui sopra per un video: Yann LeCun discute la storia del deep learning in questa lezione
+
+---
+## 🚀Sfida
+
+Approfondisci uno di questi momenti storici e scopri di più sulle persone dietro di essi. Ci sono personaggi affascinanti, e nessuna scoperta scientifica è mai stata creata in un vuoto culturale. Cosa scopri?
+
+## [Quiz post-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## Revisione e studio autonomo
+
+Ecco alcuni elementi da guardare e ascoltare:
+
+[Questo podcast dove Amy Boyd discute l'evoluzione dell'AI](http://runasradio.com/Shows/Show/739)
+[](https://www.youtube.com/watch?v=EJt3_bFYKss "La storia dell'IA di Amy Boyd")
+
+---
+
+## Compito
+
+[Crea una timeline](assignment.md)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Sebbene ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/2-history-of-ML/assignment.md b/translations/it/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..252537062
--- /dev/null
+++ b/translations/it/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Creare una timeline
+
+## Istruzioni
+
+Usando [questo repo](https://github.com/Digital-Humanities-Toolkit/timeline-builder), crea una timeline di qualche aspetto della storia degli algoritmi, della matematica, della statistica, dell'AI o del ML, o una combinazione di questi. Puoi concentrarti su una persona, un'idea o un lungo periodo di pensiero. Assicurati di aggiungere elementi multimediali.
+
+## Rubrica
+
+| Criteri | Esemplare | Adeguato | Necessita Miglioramenti |
+| -------- | -------------------------------------------------- | --------------------------------------- | ---------------------------------------------------------------- |
+| | Una timeline pubblicata è presentata come pagina GitHub | Il codice è incompleto e non pubblicato | La timeline è incompleta, non ben ricercata e non pubblicata |
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su intelligenza artificiale. Anche se ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua madre dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/3-fairness/README.md b/translations/it/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..08ddc1abc
--- /dev/null
+++ b/translations/it/1-Introduction/3-fairness/README.md
@@ -0,0 +1,151 @@
+# Costruire soluzioni di Machine Learning con AI responsabile
+
+
+> Sketchnote di [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduzione
+
+In questo curriculum, inizierai a scoprire come il machine learning può influenzare e sta influenzando le nostre vite quotidiane. Anche ora, sistemi e modelli sono coinvolti in compiti decisionali quotidiani, come diagnosi sanitarie, approvazioni di prestiti o rilevamento di frodi. Pertanto, è importante che questi modelli funzionino bene per fornire risultati affidabili. Come qualsiasi applicazione software, i sistemi AI possono non soddisfare le aspettative o avere un esito indesiderato. Ecco perché è essenziale essere in grado di comprendere e spiegare il comportamento di un modello AI.
+
+Immagina cosa può accadere quando i dati che usi per costruire questi modelli mancano di determinate demografie, come razza, genere, opinione politica, religione, o rappresentano queste demografie in modo sproporzionato. E se l'output del modello fosse interpretato in modo da favorire alcune demografie? Quali sono le conseguenze per l'applicazione? Inoltre, cosa succede quando il modello ha un esito negativo e danneggia le persone? Chi è responsabile del comportamento dei sistemi AI? Queste sono alcune delle domande che esploreremo in questo curriculum.
+
+In questa lezione, tu:
+
+- Aumenterai la consapevolezza dell'importanza dell'equità nel machine learning e dei danni correlati all'equità.
+- Acquisirai familiarità con la pratica di esplorare casi anomali e scenari insoliti per garantire affidabilità e sicurezza.
+- Capirai la necessità di potenziare tutti progettando sistemi inclusivi.
+- Esplorerai quanto sia vitale proteggere la privacy e la sicurezza dei dati e delle persone.
+- Vedrai l'importanza di avere un approccio a scatola trasparente per spiegare il comportamento dei modelli AI.
+- Sarai consapevole di come la responsabilità sia essenziale per costruire fiducia nei sistemi AI.
+
+## Prerequisito
+
+Come prerequisito, segui il percorso di apprendimento "Principi di AI Responsabile" e guarda il video qui sotto sull'argomento:
+
+Scopri di più sull'AI Responsabile seguendo questo [Percorso di Apprendimento](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)
+
+[](https://youtu.be/dnC8-uUZXSc "Approccio di Microsoft all'AI Responsabile")
+
+> 🎥 Clicca sull'immagine sopra per un video: Approccio di Microsoft all'AI Responsabile
+
+## Equità
+
+I sistemi AI dovrebbero trattare tutti in modo equo ed evitare di influenzare gruppi simili di persone in modi diversi. Ad esempio, quando i sistemi AI forniscono indicazioni su trattamenti medici, domande di prestito o occupazione, dovrebbero fare le stesse raccomandazioni a tutti con sintomi simili, circostanze finanziarie o qualifiche professionali. Ognuno di noi, in quanto essere umano, porta con sé pregiudizi ereditati che influenzano le nostre decisioni e azioni. Questi pregiudizi possono essere evidenti nei dati che usiamo per addestrare i sistemi AI, e la loro introduzione può a volte avvenire involontariamente. È spesso difficile sapere consapevolmente quando stai introducendo pregiudizi nei dati.
+
+**"Ingiustizia"** comprende impatti negativi, o "danni", per un gruppo di persone, come quelli definiti in termini di razza, genere, età o stato di disabilità. I principali danni correlati all'equità possono essere classificati come:
+
+- **Allocazione**, se un genere o un'etnia, ad esempio, è favorita rispetto a un'altra.
+- **Qualità del servizio**. Se addestri i dati per uno scenario specifico ma la realtà è molto più complessa, porta a un servizio di scarsa qualità. Ad esempio, un dispenser di sapone per le mani che non riesce a rilevare persone con pelle scura. [Riferimento](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **Denigrazione**. Criticare ed etichettare ingiustamente qualcosa o qualcuno. Ad esempio, una tecnologia di etichettatura delle immagini ha notoriamente etichettato in modo errato immagini di persone con pelle scura come gorilla.
+- **Sovra- o sotto-rappresentazione**. L'idea è che un certo gruppo non sia visto in una certa professione, e qualsiasi servizio o funzione che continua a promuovere ciò sta contribuendo al danno.
+- **Stereotipizzazione**. Associare un determinato gruppo con attributi preassegnati. Ad esempio, un sistema di traduzione linguistica tra inglese e turco può avere inesattezze dovute a parole con associazioni stereotipate al genere.
+
+
+> traduzione in turco
+
+
+> traduzione di ritorno in inglese
+
+Quando progettiamo e testiamo sistemi AI, dobbiamo garantire che l'AI sia equa e non programmata per prendere decisioni pregiudizievoli o discriminatorie, che anche gli esseri umani sono proibiti dal prendere. Garantire l'equità nell'AI e nel machine learning rimane una sfida socio-tecnica complessa.
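+
+A titolo puramente illustrativo (uno schizzo ipotetico con dati fittizi, non il metodo ufficiale di questo corso), librerie come [Fairlearn](https://fairlearn.org) permettono di confrontare le metriche di un modello tra gruppi sensibili:
+
+```python
+# Schizzo ipotetico: accuratezza per gruppo con Fairlearn (dati fittizi)
+import numpy as np
+from fairlearn.metrics import MetricFrame
+from sklearn.metrics import accuracy_score
+
+y_true = np.array([1, 0, 1, 1, 0, 1])              # etichette reali
+y_pred = np.array([1, 0, 0, 1, 0, 1])              # previsioni del modello
+genere = np.array(["F", "F", "F", "M", "M", "M"])  # attributo sensibile
+
+mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
+                 sensitive_features=genere)
+print(mf.by_group)      # accuratezza per ciascun gruppo
+print(mf.difference())  # divario massimo tra gruppi
+```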
+
+### Affidabilità e sicurezza
+
+Per costruire fiducia, i sistemi AI devono essere affidabili, sicuri e coerenti in condizioni normali e inattese. È importante sapere come si comporteranno i sistemi AI in una varietà di situazioni, specialmente in situazioni anomale. Quando si costruiscono soluzioni AI, è necessario concentrarsi notevolmente su come gestire una vasta gamma di circostanze che le soluzioni AI incontreranno. Ad esempio, un'auto a guida autonoma deve mettere la sicurezza delle persone come priorità assoluta. Di conseguenza, l'AI che alimenta l'auto deve considerare tutti gli scenari possibili che l'auto potrebbe incontrare, come notte, temporali o bufere di neve, bambini che attraversano la strada, animali domestici, lavori stradali ecc. Quanto bene un sistema AI può gestire una vasta gamma di condizioni in modo affidabile e sicuro riflette il livello di anticipazione che lo scienziato dei dati o lo sviluppatore AI ha considerato durante la progettazione o il test del sistema.
+
+> [🎥 Clicca qui per un video: affidabilità nell'AI](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### Inclusività
+
+I sistemi AI dovrebbero essere progettati per coinvolgere e potenziare tutti. Quando progettano e implementano sistemi AI, gli scienziati dei dati e gli sviluppatori AI identificano e affrontano potenziali barriere nel sistema che potrebbero escludere involontariamente le persone. Ad esempio, ci sono 1 miliardo di persone con disabilità in tutto il mondo. Con l'avanzamento dell'AI, possono accedere a una vasta gamma di informazioni e opportunità più facilmente nella loro vita quotidiana. Affrontando le barriere, si creano opportunità per innovare e sviluppare prodotti AI con migliori esperienze che beneficiano tutti.
+
+> [🎥 Clicca qui per un video: inclusività nell'AI](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### Sicurezza e privacy
+
+I sistemi AI devono essere sicuri e rispettare la privacy delle persone. Le persone hanno meno fiducia nei sistemi che mettono a rischio la loro privacy, le loro informazioni o le loro vite. Quando addestriamo modelli di machine learning, ci affidiamo ai dati per ottenere i migliori risultati. Facendo ciò, è necessario considerare l'origine dei dati e la loro integrità. Ad esempio, i dati sono stati inviati dagli utenti o sono pubblicamente disponibili? Inoltre, mentre si lavora con i dati, è cruciale sviluppare sistemi AI che possano proteggere le informazioni riservate e resistere agli attacchi. Con l'aumento della diffusione dell'AI, proteggere la privacy e garantire la sicurezza delle informazioni personali e aziendali sta diventando sempre più critico e complesso. Le questioni di privacy e sicurezza dei dati richiedono un'attenzione particolarmente attenta per l'AI perché l'accesso ai dati è essenziale per i sistemi AI per fare previsioni e decisioni accurate e informate sulle persone.
+
+> [🎥 Clicca qui per un video: sicurezza nell'AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Come industria, abbiamo fatto progressi significativi in materia di privacy e sicurezza, spinti in modo sostanziale da regolamenti come il GDPR (Regolamento Generale sulla Protezione dei Dati).
+- Tuttavia, con i sistemi AI dobbiamo riconoscere la tensione tra la necessità di più dati personali per rendere i sistemi più personali ed efficaci - e la privacy.
+- Proprio come con la nascita dei computer connessi a Internet, stiamo anche vedendo un enorme aumento del numero di problemi di sicurezza legati all'AI.
+- Allo stesso tempo, abbiamo visto l'AI essere utilizzata per migliorare la sicurezza. Ad esempio, la maggior parte degli scanner antivirus moderni sono guidati da euristiche AI.
+- Dobbiamo garantire che i nostri processi di Data Science si armonizzino con le più recenti pratiche di privacy e sicurezza.
+
+### Trasparenza
+
+I sistemi AI devono essere comprensibili. Una parte cruciale della trasparenza è spiegare il comportamento dei sistemi AI e dei loro componenti. Migliorare la comprensione dei sistemi AI richiede che gli stakeholder comprendano come e perché funzionano, in modo che possano identificare potenziali problemi di prestazione, preoccupazioni sulla sicurezza e sulla privacy, pregiudizi, pratiche esclusive o risultati indesiderati. Crediamo anche che coloro che usano i sistemi AI debbano essere onesti e trasparenti su quando, perché e come scelgono di utilizzarli, così come sui limiti dei sistemi che utilizzano. Ad esempio, se una banca utilizza un sistema AI per supportare le sue decisioni di prestito ai consumatori, è importante esaminare i risultati e capire quali dati influenzano le raccomandazioni del sistema. I governi stanno iniziando a regolamentare l'AI in vari settori, quindi gli scienziati dei dati e le organizzazioni devono spiegare se un sistema AI soddisfa i requisiti normativi, specialmente quando c'è un risultato indesiderato.
+
+> [🎥 Clicca qui per un video: trasparenza nell'AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Poiché i sistemi AI sono così complessi, è difficile capire come funzionano e interpretare i risultati.
+- Questa mancanza di comprensione influisce sul modo in cui questi sistemi sono gestiti, operazionalizzati e documentati.
+- Questa mancanza di comprensione influisce soprattutto sulle decisioni prese utilizzando i risultati prodotti da questi sistemi.
+
+### Responsabilità
+
+Le persone che progettano e implementano sistemi AI devono essere responsabili del modo in cui i loro sistemi operano. La necessità di responsabilità è particolarmente cruciale con tecnologie sensibili come il riconoscimento facciale. Recentemente, c'è stata una crescente domanda di tecnologia di riconoscimento facciale, soprattutto da parte delle organizzazioni di applicazione della legge che vedono il potenziale della tecnologia in usi come la ricerca di bambini scomparsi. Tuttavia, queste tecnologie potrebbero potenzialmente essere utilizzate da un governo per mettere a rischio le libertà fondamentali dei cittadini, ad esempio, consentendo la sorveglianza continua di individui specifici. Pertanto, gli scienziati dei dati e le organizzazioni devono essere responsabili di come il loro sistema AI impatta sugli individui o sulla società.
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Approccio di Microsoft all'AI Responsabile")
+
+> 🎥 Clicca sull'immagine sopra per un video: Avvertimenti sulla Sorveglianza di Massa Attraverso il Riconoscimento Facciale
+
+Alla fine, una delle domande più grandi per la nostra generazione, come la prima generazione che sta portando l'AI nella società, è come garantire che i computer rimangano responsabili verso le persone e come garantire che le persone che progettano i computer rimangano responsabili verso tutti gli altri.
+
+## Valutazione dell'impatto
+
+Prima di addestrare un modello di machine learning, è importante condurre una valutazione dell'impatto per comprendere lo scopo del sistema AI; qual è l'uso previsto; dove sarà implementato; e chi interagirà con il sistema. Questi sono utili per il revisore o i tester che valutano il sistema per sapere quali fattori considerare quando identificano potenziali rischi e conseguenze attese.
+
+Le seguenti sono aree di interesse quando si conduce una valutazione dell'impatto:
+
+* **Impatto negativo sugli individui**. Essere consapevoli di eventuali restrizioni o requisiti, uso non supportato o eventuali limitazioni note che ostacolano le prestazioni del sistema è vitale per garantire che il sistema non venga utilizzato in modo da causare danni agli individui.
+* **Requisiti dei dati**. Capire come e dove il sistema utilizzerà i dati consente ai revisori di esplorare eventuali requisiti dei dati di cui bisogna essere consapevoli (ad esempio, regolamenti sui dati GDPR o HIPPA). Inoltre, esaminare se la fonte o la quantità di dati è sufficiente per l'addestramento.
+* **Sintesi dell'impatto**. Raccogliere un elenco di potenziali danni che potrebbero derivare dall'uso del sistema. Durante tutto il ciclo di vita del ML, verificare se i problemi identificati sono mitigati o affrontati.
+* **Obiettivi applicabili** per ciascuno dei sei principi fondamentali. Valutare se gli obiettivi di ciascun principio sono soddisfatti e se ci sono eventuali lacune.
+
+## Debugging con AI responsabile
+
+Simile al debugging di un'applicazione software, il debugging di un sistema AI è un processo necessario per identificare e risolvere i problemi nel sistema. Ci sono molti fattori che potrebbero influenzare un modello che non performa come previsto o in modo responsabile. La maggior parte delle metriche di prestazione dei modelli tradizionali sono aggregati quantitativi delle prestazioni di un modello, che non sono sufficienti per analizzare come un modello viola i principi dell'AI responsabile. Inoltre, un modello di machine learning è una scatola nera che rende difficile capire cosa guida il suo risultato o fornire spiegazioni quando commette un errore. Più avanti in questo corso, impareremo come utilizzare la dashboard AI Responsabile per aiutare a fare debugging dei sistemi AI. La dashboard fornisce uno strumento olistico per gli scienziati dei dati e gli sviluppatori AI per eseguire:
+
+* **Analisi degli errori**. Per identificare la distribuzione degli errori del modello che può influenzare l'equità o l'affidabilità del sistema.
+* **Panoramica del modello**. Per scoprire dove ci sono disparità nelle prestazioni del modello tra i vari gruppi di dati.
+* **Analisi dei dati**. Per comprendere la distribuzione dei dati e identificare eventuali pregiudizi nei dati che potrebbero portare a problemi di equità, inclusività e affidabilità.
+* **Interpretabilità del modello**. Per capire cosa influenza o influenza le previsioni del modello. Questo aiuta a spiegare il comportamento del modello, che è importante per la trasparenza e la responsabilità.
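+
+The sketch below shows, as an illustration only, roughly how these analyses can be wired together in code. It assumes the `responsibleai` and `raiwidgets` packages are installed; the classifier and dataset are hypothetical stand-ins, not part of this lesson.
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+from raiwidgets import ResponsibleAIDashboard
+from responsibleai import RAIInsights
+
+# Train a small classifier to inspect (illustrative data only)
+data = load_iris(as_frame=True)
+df = data.frame.rename(columns={'target': 'species'})
+train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)
+model = RandomForestClassifier(random_state=0).fit(
+    train_df.drop(columns='species'), train_df['species'])
+
+# Assemble the insights; both dataframes must include the target column
+rai_insights = RAIInsights(model, train_df, test_df, 'species',
+                           task_type='classification')
+rai_insights.explainer.add()       # model interpretability
+rai_insights.error_analysis.add()  # error analysis
+rai_insights.compute()
+
+ResponsibleAIDashboard(rai_insights)  # launches the interactive dashboard
+```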
+
+## 🚀 Challenge
+
+To prevent harms from being introduced in the first place, we should:
+
+- have a diversity of backgrounds and perspectives among the people working on systems
+- invest in datasets that reflect the diversity of our society
+- develop better methods throughout the machine learning lifecycle for detecting and remediating responsible AI issues when they occur
+
+Think about real-life scenarios where a model's untrustworthiness is evident in model-building and usage. What else should we consider?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+
+## Review & Self Study
+
+In this lesson, you have learned some basics of the concepts of fairness and unfairness in machine learning.
+
+Watch this workshop to dive deeper into the topics:
+
+- In pursuit of responsible AI: Bringing principles to practice by Besmira Nushi, Mehrnoosh Sameki and Amit Sharma
+
+[](https://www.youtube.com/watch?v=tGgJCrA-MZU "RAI Toolbox: An open-source framework for building responsible AI")
+
+> 🎥 Click the image above for a video: RAI Toolbox: An open-source framework for building responsible AI by Besmira Nushi, Mehrnoosh Sameki, and Amit Sharma
+
+Also, read:
+
+- Microsoft's RAI resource center: [Responsible AI Resources – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- Microsoft's FATE research group: [FATE: Fairness, Accountability, Transparency, and Ethics in AI - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+RAI Toolbox:
+
+- [Responsible AI Toolbox GitHub repository](https://github.com/microsoft/responsible-ai-toolbox)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/3-fairness/assignment.md b/translations/it/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..dea4bd6e1
--- /dev/null
+++ b/translations/it/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI Toolbox
+
+## Instructions
+
+In this lesson you learned about the Responsible AI Toolbox, a "community-driven open-source project to help data scientists analyze and improve AI systems." For this assignment, explore one of the RAI Toolbox [notebooks](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------- | --------- | -------- | ------------- |
+| | A paper or PowerPoint presentation is presented discussing Fairlearn's systems, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/4-techniques-of-ML/README.md b/translations/it/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..d207da0fe
--- /dev/null
+++ b/translations/it/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Techniques of Machine Learning
+
+The process of building, using, and maintaining machine learning models and the data they use is a very different process from many other development workflows. In this lesson, we will demystify the process and outline the main techniques you need to know. You will learn to:
+
+- Understand the processes underpinning machine learning at a high level.
+- Explore base concepts such as 'models', 'predictions', and 'training data'.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "ML for beginners - Techniques of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+## Introduction
+
+At a high level, the craft of creating machine learning (ML) processes is comprised of a number of steps (a minimal end-to-end sketch in code follows the list):
+
+1. **Decide on the question**. Most ML processes start by asking a question that cannot be answered by a simple conditional program or rules-based engine. These questions often revolve around predictions based on a collection of data.
+2. **Collect and prepare data**. To be able to answer your question, you need data. The quality and, sometimes, quantity of your data will determine how well you can answer your initial question. Visualizing data is an important aspect of this phase. This phase also includes splitting the data into a training and testing group to build a model.
+3. **Choose a training method**. Depending on your question and the nature of your data, you need to choose how you want to train a model to best reflect your data and make accurate predictions from it. This is the part of your ML process that requires specific expertise and, often, a considerable amount of experimentation.
+4. **Train the model**. Using your training data, you'll use various algorithms to train a model to recognize patterns in the data. The model might leverage internal weights that can be adjusted to privilege certain parts of the data over others to build a better model.
+5. **Evaluate the model**. You use never-before-seen data (your testing data) from your collected set to see how the model is performing.
+6. **Parameter tuning**. Based on the performance of your model, you can redo the process using different parameters, or variables, that control the behavior of the algorithms used to train the model.
+7. **Predict**. Use new inputs to test the accuracy of your model.
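+
+Here is a minimal sketch of steps 2 through 7 in code, assuming scikit-learn and using its built-in diabetes toy dataset as a stand-in for real data:
+
+```python
+from sklearn import datasets, linear_model, metrics, model_selection
+
+# Step 2: collect and prepare data (a ready-made toy dataset here),
+# then split it into training and testing groups
+X, y = datasets.load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = model_selection.train_test_split(
+    X, y, test_size=0.2, random_state=0)
+
+# Steps 3-4: choose a training method and train the model
+model = linear_model.LinearRegression()
+model.fit(X_train, y_train)
+
+# Step 5: evaluate the model on data it has never seen
+predictions = model.predict(X_test)
+print("R^2 on test data:", metrics.r2_score(y_test, predictions))
+# Step 6 (parameter tuning) would loop back here with different settings
+
+# Step 7: predict for a brand-new input (one row with the same 10 features)
+print("Prediction:", model.predict(X_test[:1]))
+```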
+
+## What question to ask
+
+Computers are particularly skilled at discovering hidden patterns in data. This capability is very helpful for researchers who have questions about a given domain that cannot be easily answered by creating a conditionally-based rules engine. Given an actuarial task, for example, a data scientist might be able to construct handcrafted rules around the mortality of smokers vs. non-smokers.
+
+When many other variables are brought into the equation, however, an ML model might prove more efficient at predicting future mortality rates based on past health history. A more cheerful example might be making weather predictions for the month of April in a given location based on data that includes latitude, longitude, climate change, proximity to the ocean, jet stream patterns, and more.
+
+✅ This [slide deck](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) on weather models offers a historical perspective on using ML in weather analysis.
+
+## Pre-building tasks
+
+Before starting to build your model, there are several tasks you need to complete. To test your question and form a hypothesis based on a model's predictions, you need to identify and configure several elements.
+
+### Data
+
+To be able to answer your question with any kind of certainty, you need a good amount of data of the right type. There are two things you need to do at this point:
+
+- **Collect data**. Keeping in mind the previous lesson on fairness in data analysis, collect your data with care. Be aware of the sources of this data, any inherent biases it might have, and document its origin.
+- **Prepare data**. There are several steps in the data preparation process. You might need to collate data and normalize it if it comes from diverse sources. You can improve the data's quality and quantity through various methods such as converting strings to numbers (as we do in [Clustering](../../5-Clustering/1-Visualize/README.md)). You might also generate new data, based on the original (as we do in [Classification](../../4-Classification/1-Introduction/README.md)). You can clean and edit the data (as we will prior to the [Web App](../../3-Web-App/README.md) lesson). Finally, you might also need to randomize and shuffle it, depending on your training techniques; a sketch of a few of these techniques follows the note below.
+
+✅ After collecting and processing your data, take a moment to see if its shape will allow you to address your intended question. It may be that the data will not perform well in your given task, as we discover in our [Clustering](../../5-Clustering/1-Visualize/README.md) lessons!
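+
+A minimal sketch of some of these preparation steps, assuming pandas and scikit-learn; the tiny DataFrame and its column names are illustrative only:
+
+```python
+import pandas as pd
+from sklearn.preprocessing import LabelEncoder, MinMaxScaler
+from sklearn.utils import shuffle
+
+df = pd.DataFrame({
+    'color': ['orange', 'white', 'orange'],
+    'size': [10.0, 4.5, 7.2],
+})
+
+# Convert strings to numbers
+df['color'] = LabelEncoder().fit_transform(df['color'])
+
+# Normalize numeric values into the 0-1 range
+df[['size']] = MinMaxScaler().fit_transform(df[['size']])
+
+# Randomize the row order before training
+df = shuffle(df, random_state=0)
+print(df)
+```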
+
+### Features and Target
+
+A [feature](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection) is a measurable property of your data. In many datasets it is expressed as a column heading like 'date', 'size' or 'color'. Your feature variable, usually represented as `X` in code, represents the input variable which will be used to train the model.
+
+A target is the thing you are trying to predict. The target, usually represented as `y` in code, represents the answer to the question you are trying to ask of your data: in December, which **color** pumpkins will be cheapest? In San Francisco, which neighborhoods will have the best real estate **price**? Sometimes the target is also referred to as the label attribute.
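+
+As a tiny illustration (the DataFrame and column names below are hypothetical stand-ins, not from this lesson's data):
+
+```python
+import pandas as pd
+
+pumpkins = pd.DataFrame({
+    'Month': [9, 10, 12], 'Size': [7.0, 9.5, 4.0], 'Price': [14.0, 16.0, 9.0]
+})
+
+X = pumpkins[['Month', 'Size']]  # features: the inputs used for training
+y = pumpkins['Price']            # target: the value we want to predict
+```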
+
+### Selecting your feature variable
+
+🎓 **Feature Selection and Feature Extraction** How do you know which variable to choose when building a model? You'll probably go through a process of feature selection or feature extraction to choose the right variables for the most performant model. They're not the same thing, however: "Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features." ([source](https://wikipedia.org/wiki/Feature_selection))
+
+### Visualize your data
+
+An important aspect of the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or Matplotlib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data (as we discover in [Classification](../../4-Classification/2-Classifiers-1/README.md)).
+
+### Split your dataset
+
+Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well.
+
+- **Training**. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
+- **Testing**. A test dataset is an independent group of data, often gathered from the original data, that you use to confirm the performance of the built model.
+- **Validating**. A validation set is a smaller independent group of examples that you use to tune the model's parameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set (as we note in [Time Series Forecasting](../../7-TimeSeries/1-Introduction/README.md)); a sketch of one common splitting pattern follows this list.
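+
+A minimal sketch of a train/test/validation split, assuming scikit-learn; calling `train_test_split` twice is one common pattern:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.model_selection import train_test_split
+
+X, y = load_diabetes(return_X_y=True)
+
+# First carve off 20% as a held-out test set...
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+# ...then carve 25% of the remainder off as a validation set,
+# leaving 60% training / 20% validation / 20% test overall
+X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
+```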
+
+## Building a model
+
+Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to **train** it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.
+
+### Decide on a training method
+
+Depending on your question and the nature of your data, you will choose a method to train it. Stepping through [Scikit-learn's documentation](https://scikit-learn.org/stable/user_guide.html) - which we use in this course - you can explore many ways to train a model. Depending on your experience, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
+
+### Train a model
+
+Armed with your training data, you are ready to 'fit' it to create a model. You will notice that in many ML libraries you will find the code 'model.fit' - it is at this moment that you send in your feature variable as an array of values (usually 'X') and a target variable (usually 'y').
+
+### Evaluate the model
+
+Once the training process is complete (it can take many iterations, or 'epochs', to train a large model), you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.
+
+🎓 **Model fitting**
+
+In the machine learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
+
+🎓 **Underfitting** and **overfitting** are common problems that degrade the quality of the model, as the model fits either not well enough or too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'.
+
+
+> Infographic by [Jen Looper](https://twitter.com/jenlooper)
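+
+A small sketch that makes this concrete, assuming scikit-learn; the dataset is synthetic and the polynomial degrees are illustrative picks for an underfit, a reasonable fit, and an overfit:
+
+```python
+import numpy as np
+from sklearn.linear_model import LinearRegression
+from sklearn.model_selection import train_test_split
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import PolynomialFeatures
+
+# Noisy samples of a sine wave
+rng = np.random.RandomState(0)
+X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
+y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
+    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
+    model.fit(X_train, y_train)
+    print(f"degree {degree:2d}: train R^2 = {model.score(X_train, y_train):.2f}, "
+          f"test R^2 = {model.score(X_test, y_test):.2f}")
+```
+
+Typically the degree-1 model scores poorly on both sets (underfitting), while the degree-15 model scores very well on the training set but worse on the test set (overfitting).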
+
+## Parameter tuning
+
+Once your initial training is complete, observe the quality of the model and consider improving it by tweaking its 'hyperparameters'. Read more about the process [in the documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott).
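+
+A minimal sketch of hyperparameter tuning via a grid search, assuming scikit-learn; the estimator and parameter grid are illustrative choices:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.model_selection import GridSearchCV
+
+X, y = load_diabetes(return_X_y=True)
+
+# Try several values of the regularization strength with 5-fold cross-validation
+search = GridSearchCV(Ridge(), param_grid={'alpha': [0.01, 0.1, 1.0, 10.0]}, cv=5)
+search.fit(X, y)
+print("Best hyperparameters:", search.best_params_)
+print("Best cross-validated R^2:", search.best_score_)
+```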
+
+## Prediction
+
+This is the moment where you can use completely new data to test your model's accuracy. In an 'applied' ML setting, where you are building web assets to use the model in production, this process might involve gathering user input (a button press, for example) to set a variable and send it to the model for inference, or evaluation.
+
+In these lessons, you will discover how to use these steps to prepare, build, test, evaluate, and predict - all the gestures of a data scientist and more, as you progress in your journey to become a 'full stack' ML engineer.
+
+---
+
+## 🚀Challenge
+
+Draw a flow chart reflecting the steps of an ML practitioner. Where do you see yourself right now in the process? Where do you predict you will find difficulty? What seems easy to you?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Review & Self Study
+
+Search online for interviews with data scientists who discuss their daily work. Here is [one](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Assignment
+
+[Interview a data scientist](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/4-techniques-of-ML/assignment.md b/translations/it/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..2e437a124
--- /dev/null
+++ b/translations/it/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Interview a data scientist
+
+## Instructions
+
+In your company, in a user group, or among your friends or fellow students, talk to someone who works professionally as a data scientist. Write a short paper (500 words) on their daily occupations. Are they specialists, or do they work 'full stack'?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | --------------------- |
+| | An essay of the correct length, with attributed sources, is presented as a .doc file | The essay is poorly attributed or shorter than the required length | No essay is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/1-Introduction/README.md b/translations/it/1-Introduction/README.md
new file mode 100644
index 000000000..79666f902
--- /dev/null
+++ b/translations/it/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# Introduction to machine learning
+
+In this section of the curriculum, you will be introduced to the base concepts underlying the field of machine learning, what it is, and you will learn about its history and the techniques researchers use to work with it. Let's explore this new world of ML together!
+
+
+> Photo by Bill Oxford on Unsplash
+
+### Lessons
+
+1. [Introduction to machine learning](1-intro-to-ML/README.md)
+1. [The history of machine learning and AI](2-history-of-ML/README.md)
+1. [Fairness and machine learning](3-fairness/README.md)
+1. [Techniques of machine learning](4-techniques-of-ML/README.md)
+
+### Credits
+
+"Introduction to Machine Learning" was written with ♥️ by a team of folks including [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) and [Jen Looper](https://twitter.com/jenlooper)
+
+"The History of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Amy Boyd](https://twitter.com/AmyKateNicho)
+
+"Fairness and Machine Learning" was written with ♥️ by [Tomomi Imura](https://twitter.com/girliemac)
+
+"Techniques of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Chris Noring](https://twitter.com/softchris)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/2-Regression/1-Tools/README.md b/translations/it/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..ce4ada36b
--- /dev/null
+++ b/translations/it/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Get started with Python and Scikit-learn for regression models
+
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Introduction
+
+In these four lessons, you will discover how to build regression models. We will discuss what these are for shortly. But before you do anything, make sure you have the right tools in place to start the process!
+
+In this lesson, you will learn how to:
+
+- Configure your computer for local machine learning tasks.
+- Work with Jupyter notebooks.
+- Use Scikit-learn, including installation.
+- Explore linear regression with a hands-on exercise.
+
+## Installations and configurations
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML for beginners - Setup your tools to build Machine Learning models")
+
+> 🎥 Click the image above for a short video working through configuring your computer for ML.
+
+1. **Install Python**. Make sure that [Python](https://www.python.org/downloads/) is installed on your computer. You will use Python for many data science and machine learning tasks. Most computer systems already include a Python installation. There are useful [Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) available as well, to ease the setup for some users.
+
+   Some usages of Python, however, require one version of the software, whereas others require a different version. For this reason, it's useful to work within a [virtual environment](https://docs.python.org/3/library/venv.html).
+
+2. **Install Visual Studio Code**. Make sure you have Visual Studio Code installed on your computer. Follow these instructions to [install Visual Studio Code](https://code.visualstudio.com/) for the basic installation. You are going to use Python in Visual Studio Code in this course, so you might want to brush up on how to [configure Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) for Python development.
+
+   > Get comfortable with Python by working through this collection of [Learn modules](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)
+   >
+   > [](https://youtu.be/yyQM70vi7V8 "Set up Python with Visual Studio Code")
+   >
+   > 🎥 Click the image above for a video: using Python within VS Code.
+
+3. **Install Scikit-learn**, by following [these instructions](https://scikit-learn.org/stable/install.html). Since you need to ensure that you use Python 3, it's recommended that you use a virtual environment. Note, if you are installing this library on an M1 Mac, there are special instructions on the page linked above.
+
+4. **Install Jupyter Notebook**. You will need to [install the Jupyter package](https://pypi.org/project/jupyter/).
+
+## Your ML development environment
+
+You are going to use **notebooks** to develop your Python code and create machine learning models. This type of file is a common tool for data scientists, and it can be identified by its suffix or extension `.ipynb`.
+
+Notebooks are an interactive environment that allows the developer to both code and add notes and write documentation around the code, which is quite helpful for experimental or research-oriented projects.
+
+[](https://youtu.be/7E-jC8FLA2E "ML for beginners - Set up Jupyter Notebooks to start building regression models")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+### Exercise - work with a notebook
+
+In this folder, you will find the file _notebook.ipynb_.
+
+1. Open _notebook.ipynb_ in Visual Studio Code.
+
+   A Jupyter server will start with Python 3+. You will find areas of the notebook that can be run: pieces of code. You can run a code block by selecting the icon that looks like a play button.
+
+2. Select the `md` icon and add a bit of markdown, and the following text **# Welcome to your notebook**.
+
+   Next, add some Python code.
+
+3. Type **print('hello notebook')** in the code block.
+4. Select the arrow to run the code.
+
+   You should see the printed statement:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+You can interleave your code with comments to self-document the notebook.
+
+✅ Think for a minute how different a web developer's working environment is from that of a data scientist.
+
+## Up and running with Scikit-learn
+
+Now that Python is set up in your local environment, and you are comfortable with Jupyter notebooks, let's get equally comfortable with Scikit-learn (pronounce it `sci` as in `science`). Scikit-learn provides an [extensive API](https://scikit-learn.org/stable/modules/classes.html#api-ref) to help you perform ML tasks.
+
+According to their [website](https://scikit-learn.org/stable/getting_started.html), "Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities."
+
+In this course, you will use Scikit-learn and other tools to build machine learning models to perform what we call 'traditional machine learning' tasks. We have deliberately avoided neural networks and deep learning, as they are better covered in our forthcoming 'AI for Beginners' curriculum.
+
+Scikit-learn makes it straightforward to build models and evaluate them for use. It is primarily focused on using numeric data and contains several ready-made datasets for use as learning tools. It also includes pre-built models for students to try. Let's explore the process of loading prepackaged data and using a built-in estimator to create a first ML model with Scikit-learn with some basic data.
+
+## Exercise - your first Scikit-learn notebook
+
+> This tutorial was inspired by the [linear regression example](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) on Scikit-learn's web site.
+
+[](https://youtu.be/2xkXL5EUpS0 "ML for beginners - Your First Linear Regression Project in Python")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+In the _notebook.ipynb_ file associated with this lesson, clear out all the cells by pressing the 'trash can' icon.
+
+In this section, you will work with a small dataset about diabetes that is built into Scikit-learn for learning purposes. Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
+
+✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height of a person of a given age, you'd use linear regression, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment**, so you would use logistic regression. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
+
+Let's get started on this task.
+
+### Import libraries
+
+For this task we will import some libraries:
+
+- **matplotlib**. It's a useful [graphing tool](https://matplotlib.org/) and we will use it to create a line plot.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) is a useful library for handling numeric data in Python.
+- **sklearn**. This is the [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) library.
+
+Import some libraries to help with your tasks.
+
+1. Add the imports by typing the following code:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Above you are importing `matplotlib` and `numpy`, and you are importing `datasets`, `linear_model` and `model_selection` from `sklearn`. `model_selection` is used for splitting data into training and test sets.
+
+### The diabetes dataset
+
+The built-in [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) includes 442 samples of data around diabetes, with 10 feature variables, some of which include:
+
+- age: age in years
+- bmi: body mass index
+- bp: average blood pressure
+- s1 tc: T-Cells (a type of white blood cells)
+
+✅ This dataset includes the concept of 'sex' as a feature variable important to research around diabetes. Many medical datasets include this type of binary classification. Think a bit about how categorizations such as this might exclude certain parts of a population from treatments.
+
+Now, load up the X and y data.
+
+> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
+
+In a new code cell, load the diabetes dataset by calling `load_diabetes()`. The input `return_X_y=True` signals that `X` will be a data matrix, and `y` will be the regression target.
+
+2. Add some print commands to show the shape of the data matrix and its first element:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   What you get back as a response is a tuple. What you are doing is assigning the first two values of the tuple to `X` and `y` respectively. Learn more [about tuples](https://wikipedia.org/wiki/Tuple).
+
+   You can see that this data has 442 items shaped in arrays of 10 elements:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+   ✅ Think a bit about the relationship between the data and the regression target. Linear regression predicts relationships between feature X and target variable y. Can you find the [target](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) for the diabetes dataset in the documentation? What is this dataset demonstrating, given that target?
+
+3. Next, select a portion of this dataset to plot by selecting the 3rd column of the dataset. You can do this by using the `:` operator to select all rows, and then selecting the 3rd column using the index (2). You can also reshape the data to be a 2D array - as required for plotting - by using `reshape(n_rows, n_columns)`. If one of the parameters is -1, the corresponding dimension is calculated automatically.
+
+ ```python
+    X = X[:, 2]            # select the 3rd column (the BMI feature) for all rows
+    X = X.reshape((-1,1))  # reshape into a 2D array with one column, as plotting requires
+ ```
+
+   ✅ At any time, print out the data to check its shape.
+
+4. Now that you have data ready to be plotted, you can see if a machine can help determine a logical split between the numbers in this dataset. To do this, you need to split both the data (X) and the target (y) into test and training sets. Scikit-learn has a straightforward way to do this; you can split your test data at a given point.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+5. Now you are ready to train your model! Load the linear regression model and train it with your X and y training sets using `model.fit()`:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+ ✅ `model.fit()` is a function you'll see in many ML libraries such as TensorFlow
+
+6. Then, create a prediction using test data, using the function `predict()`. This will be used to draw a line between the data groups of the model.
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+7. Now it's time to show the data in a plot. Matplotlib is a very useful tool for this task. Create a scatterplot of all the X and y test data, and use the prediction to draw a line in the most appropriate place, between the model's data groupings.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+   ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.
+
+Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
+
+---
+## 🚀Challenge
+
+Plot a different variable from this dataset. Hint: edit this line: `X = X[:,2]`. Given this dataset's target, what are you able to discover about the progression of diabetes as a disease?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Review & Self Study
+
+In this tutorial, you worked with simple linear regression, rather than univariate or multiple linear regression. Read a little about the differences between these methods, or take a look at [this video](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)
+
+Read more about the concept of regression and think about what kinds of questions can be answered by this technique. Take this [tutorial](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) to deepen your understanding.
+
+## Assignment
+
+[A different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/2-Regression/1-Tools/assignment.md b/translations/it/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..45f446038
--- /dev/null
+++ b/translations/it/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Regression with Scikit-learn
+
+## Instructions
+
+Take a look at the [Linnerud dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) in Scikit-learn. This dataset has multiple [targets](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset): 'It consists of three exercise (data) and three physiological (target) variables collected from twenty middle-aged men in a fitness club'.
+
+In your own words, describe how to create a Regression model that would plot the relationship between the waistline and how many situps are accomplished. Do the same for the other datapoints in this dataset.
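+
+As a starting point only (the assignment asks for a written description, not code), here is a minimal sketch of how such a model could be set up, assuming scikit-learn's Linnerud column ordering:
+
+```python
+from sklearn.datasets import load_linnerud
+from sklearn.linear_model import LinearRegression
+
+data = load_linnerud()
+situps = data.data[:, 1].reshape(-1, 1)  # 'Situps' is the 2nd exercise variable
+waist = data.target[:, 1]                # 'Waist' is the 2nd physiological variable
+
+model = LinearRegression().fit(situps, waist)
+print("R^2:", model.score(situps, waist))
+```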
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------------------------------ | ------------------------------------ | ----------------------------- | -------------------------- |
+| Submit a descriptive paragraph | A well-written paragraph is submitted | A few sentences are submitted | No description is supplied |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/2-Regression/1-Tools/solution/Julia/README.md b/translations/it/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..b3b9d777c
--- /dev/null
+++ b/translations/it/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/2-Regression/2-Data/README.md b/translations/it/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..ce18148c2
--- /dev/null
+++ b/translations/it/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Build a regression model using Scikit-learn: prepare and visualize data
+
+
+
+
+Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Introduction
+
+Now that you have set up the tools you need to start tackling machine learning model building with Scikit-learn, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right question to properly unlock the potential of your dataset.
+
+In this lesson, you will learn:
+
+- How to prepare your data for model-building.
+- How to use Matplotlib for data visualization.
+
+## Asking the right question of your data
+
+The question you need answered will determine what type of ML algorithms you will use. And the quality of the answer you get back will be heavily dependent on the nature of your data.
+
+Take a look at the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson. You can open this .csv file in VS Code. A quick skim immediately shows that there are blanks and a mix of strings and numeric data. There's also a strange column called 'Package' where the data is a mix of 'sacks', 'bins' and other values. The data, in fact, is a bit of a mess.
+
+[](https://youtu.be/5qGjczWTrDQ "ML for beginners - How to Analyze and Clean a Dataset")
+
+> 🎥 Click the image above for a short video working through preparing the data for this lesson.
+
+In fact, it is not very common to be gifted a dataset that is completely ready to use to create an ML model out of the box. In this lesson, you will learn how to prepare a raw dataset using standard Python libraries. You will also learn various techniques to visualize the data.
+
+## Case study: 'the pumpkin market'
+
+In this folder you will find a .csv file in the root `data` folder called [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) which includes 1757 lines of data about the market for pumpkins, sorted into groupings by city. This is raw data extracted from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture.
+
+### Preparing data
+
+This data is in the public domain. It can be downloaded in many separate files, per city, from the USDA web site. To avoid too many separate files, we have concatenated all the city data into one spreadsheet, so we have already _prepared_ the data a bit. Next, let's take a closer look at the data.
+
+### The pumpkin data - early conclusions
+
+What do you notice about this data? You already saw that there is a mix of strings, numbers, blanks, and strange values that you need to make sense of.
+
+What question can you ask of this data, using a Regression technique? What about "Predict the price of a pumpkin for sale during a given month"? Looking again at the data, there are some changes you need to make to create the data structure necessary for the task.
+
+## Exercise - analyze the pumpkin data
+
+Let's use [Pandas](https://pandas.pydata.org/) (the name stands for `Python Data Analysis`), a very useful tool for shaping data, to analyze and prepare this pumpkin data.
+
+### First, check for missing dates
+
+You will first need to take steps to check for missing dates:
+
+1. Convert the dates to a month format (these are US dates, so the format is `MM/DD/YYYY`).
+2. Extract the month to a new column.
+
+Open the _notebook.ipynb_ file in Visual Studio Code and import the spreadsheet into a new Pandas dataframe.
+
+1. Use the `head()` function to view the first five rows.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+   ✅ What function would you use to view the last five rows?
+
+1. Check if there is missing data in the current dataframe:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+   There is missing data, but maybe it won't matter for the task at hand.
+
+1. To make your dataframe easier to work with, select only the columns you need, using the `loc` function, which extracts from the original dataframe a group of rows (passed as the first parameter) and columns (passed as the second parameter). The expression `:` in the case below means "all rows".
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### Second, determine the average price of a pumpkin
+
+Think about how to determine the average price of a pumpkin in a given month. What columns would you pick for this task? Hint: you'll need 3 columns.
+
+Solution: take the average of the `Low Price` and `High Price` columns to populate the new Price column, and convert the Date column to only show the month. Fortunately, according to the check above, there is no missing data for dates or prices.
+
+1. To calculate the average, add the following code:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+   ✅ Feel free to print any data you'd like to check using `print(month)`.
+
+2. Now, copy your converted data into a fresh Pandas dataframe:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+   Printing out your dataframe will show you a clean, tidy dataset on which you can build your new regression model.
+
+### But wait! There's something strange here
+
+If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in '1 1/9 bushel' measures, some in '1/2 bushel' measures, some per pumpkin, some per pound, and some in big boxes with varying widths.
+
+> Pumpkins seem very hard to weigh consistently
+
+Digging into the original data, it's interesting that anything with `Unit of Sale` equalling 'EACH' or 'PER BIN' also has the `Package` type per inch, per bin, or 'each'. Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string 'bushel' in their `Package` column.
+
+1. Add a filter at the top of the file, under the initial .csv import:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+   If you print the data now, you can see that you are only getting the roughly 415 rows of data containing pumpkins sold by the bushel.
+
+### But wait! There's one more thing to do
+
+Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, so do some math to standardize it.
+
+1. Add these lines after the block creating the new_pumpkins dataframe:
+
+ ```python
+    # For '1 1/9 bushel' packages, convert the price to a per-bushel price
+    new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+    # For '1/2 bushel' packages, likewise normalize to a per-bushel price
+    new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. "A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds." It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel. All this study of bushels of pumpkins, however, goes to show how very important it is to understand the nature of your data!
+
+Now, you can analyze the pricing per unit based on their bushel measurement. If you print out the data one more time, you can see how it's standardized.
+
+✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are way pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken up by one big hollow pie pumpkin.
+
+## Visualization strategies
+
+Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of data. In this way, they are able to visually show relationships and gaps that are otherwise hard to uncover.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML for beginners - How to Visualize Data with Matplotlib")
+
+> 🎥 Click the image above for a short video working through visualizing the data for this lesson.
+
+Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.
+
+One data visualization library that works well in Jupyter notebooks is [Matplotlib](https://matplotlib.org/) (which you also saw in the previous lesson).
+
+> Get more experience with data visualization in [these tutorials](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott).
+
+## Exercise - experiment with Matplotlib
+
+Try to create some basic plots to display the new dataframe you just created. What would a basic line plot show?
+
+1. Import Matplotlib at the top of the file, under the Pandas import:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Re-run the entire notebook to refresh.
+1. At the bottom of the notebook, add a cell to plot the data as a scatter chart:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+   Is this a useful plot? Does anything about it surprise you?
+
+   It's not particularly useful, as all it does is display your data as a spread of points in a given month.
+
+### Make it useful
+
+To get charts to display useful data, you usually need to group the data somehow. Let's try creating a chart where the x axis shows the months and the y axis shows the average pumpkin price.
+
+1. Add a cell to create a grouped bar chart:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+   This is a more useful data visualization! It seems to indicate that the highest price for pumpkins occurs in September and October. Does that meet your expectation? Why or why not?
+
+---
+
+## 🚀Challenge
+
+Explore the different types of visualization that Matplotlib offers. Which types are most appropriate for regression problems?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## Review & Self Study
+
+Take a look at the many ways to visualize data. Make a list of the various libraries available and note which are best for given types of tasks, for example 2D visualizations vs. 3D visualizations. What do you discover?
+
+## Assignment
+
+[Exploring visualization](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/2-Regression/2-Data/assignment.md b/translations/it/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..900782dda
--- /dev/null
+++ b/translations/it/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Esplorare le Visualizzazioni
+
+Ci sono diverse librerie disponibili per la visualizzazione dei dati. Crea alcune visualizzazioni utilizzando i dati delle Zucche in questa lezione con matplotlib e seaborn in un notebook di esempio. Quali librerie sono più facili da usare?
+## Rubrica
+
+| Criteri | Esemplare | Adeguato | Da Migliorare |
+| ------- | --------- | -------- | ------------- |
+| | Viene inviato un notebook con due esplorazioni/visualizzazioni | Viene inviato un notebook con una esplorazione/visualizzazione | Non viene inviato alcun notebook |
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su intelligenza artificiale. Sebbene ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua madre deve essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/2-Data/solution/Julia/README.md b/translations/it/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..a547fe188
--- /dev/null
+++ b/translations/it/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione basati su intelligenza artificiale. Sebbene ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/3-Linear/README.md b/translations/it/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..65dcf1366
--- /dev/null
+++ b/translations/it/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Costruire un modello di regressione usando Scikit-learn: quattro modi di fare regressione
+
+
+> Infografica di [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [Questa lezione è disponibile in R!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### Introduzione
+
+Finora hai esplorato cosa sia la regressione con dati di esempio raccolti dal dataset dei prezzi delle zucche che useremo durante questa lezione. Hai anche visualizzato questi dati usando Matplotlib.
+
+Ora sei pronto per approfondire la regressione per il Machine Learning. Mentre la visualizzazione ti permette di comprendere i dati, il vero potere del Machine Learning deriva dall'addestramento dei modelli. I modelli vengono addestrati su dati storici per catturare automaticamente le dipendenze dei dati e ti permettono di prevedere i risultati per nuovi dati che il modello non ha mai visto prima.
+
+In questa lezione, imparerai di più su due tipi di regressione: _regressione lineare di base_ e _regressione polinomiale_, insieme ad alcune delle matematiche sottostanti queste tecniche. Questi modelli ci permetteranno di prevedere i prezzi delle zucche in base a diversi dati di input.
+
+[](https://youtu.be/CRxFT8oTDMg "ML per principianti - Comprendere la Regressione Lineare")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla regressione lineare.
+
+> In tutto questo curriculum, assumiamo una conoscenza minima della matematica e cerchiamo di renderla accessibile per gli studenti provenienti da altri campi, quindi presta attenzione alle note, 🧮 callout, diagrammi e altri strumenti di apprendimento per aiutare nella comprensione.
+
+### Prerequisiti
+
+Dovresti ormai avere familiarità con la struttura dei dati delle zucche che stiamo esaminando. Li puoi trovare pre-caricati e pre-puliti nel file _notebook.ipynb_ di questa lezione. Nel file, il prezzo delle zucche è mostrato per bushel in un nuovo dataframe. Assicurati di poter eseguire questi notebook nei kernel di Visual Studio Code.
+
+### Preparazione
+
+Come promemoria, stai caricando questi dati per fare delle domande su di essi.
+
+- Qual è il momento migliore per comprare zucche?
+- Quale prezzo posso aspettarmi per una cassa di zucche in miniatura?
+- Dovrei comprarle in cesti da mezzo bushel o in scatole da 1 1/9 bushel?
+
+Continuiamo a scavare in questi dati.
+
+Nella lezione precedente, hai creato un dataframe Pandas e lo hai popolato con parte del dataset originale, standardizzando i prezzi per bushel. Facendo così, tuttavia, sei riuscito a raccogliere solo circa 400 punti dati e solo per i mesi autunnali.
+
+Dai un'occhiata ai dati che abbiamo pre-caricato nel notebook allegato a questa lezione. I dati sono pre-caricati e un primo scatterplot è tracciato per mostrare i dati mensili. Forse possiamo ottenere un po' più di dettaglio sulla natura dei dati pulendoli ulteriormente.
+
+## Una linea di regressione lineare
+
+Come hai appreso nella Lezione 1, l'obiettivo di un esercizio di regressione lineare è essere in grado di tracciare una linea per:
+
+- **Mostrare le relazioni tra le variabili**. Evidenziare come le variabili si relazionano tra loro.
+- **Fare previsioni**. Fare previsioni accurate su dove un nuovo punto dati cadrebbe in relazione a quella linea.
+
+È tipico della **Regressione dei Minimi Quadrati** tracciare questo tipo di linea. Il termine 'minimi quadrati' significa che le distanze di tutti i punti dati dalla linea di regressione vengono elevate al quadrato e poi sommate. Idealmente, quella somma finale è la più piccola possibile, perché vogliamo un numero basso di errori, o `least-squares`.
+
+Facciamo così perché vogliamo modellare una linea che abbia la minima distanza cumulativa da tutti i nostri punti dati. Inoltre, eleviamo i termini al quadrato prima di sommarli, poiché siamo interessati alla loro grandezza piuttosto che alla loro direzione.
+
+> **🧮 Mostrami la matematica**
+>
+> Questa linea, chiamata _linea di miglior adattamento_, può essere espressa da [un'equazione](https://en.wikipedia.org/wiki/Simple_linear_regression):
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` è la 'variabile esplicativa'. `Y` è la 'variabile dipendente'. La pendenza della linea è `b`, mentre `a` è l'intercetta, ovvero il valore di `Y` quando `X = 0`.
+>
+>
+>
+> Per prima cosa, calcola la pendenza `b`. Infografica di [Jen Looper](https://twitter.com/jenlooper)
+>
+> In altre parole, riferendoci alla domanda originale sui dati delle zucche, "prevedere il prezzo di una zucca per bushel in base al mese", `X` si riferirebbe al prezzo e `Y` al mese di vendita.
+>
+>
+>
+> Calcola il valore di Y. Se paghi circa 4 dollari, dev'essere aprile! Infografica di [Jen Looper](https://twitter.com/jenlooper)
+>
+> La matematica che calcola la linea deve dimostrare la pendenza della linea, che dipende anche dall'intercetta, cioè da dove si trova `Y` quando `X = 0`.
+>
+> Puoi osservare il metodo di calcolo di questi valori sul sito [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html). Visita anche [questo calcolatore dei minimi quadrati](https://www.mathsisfun.com/data/least-squares-calculator.html) per vedere come i valori dei numeri influenzano la linea.
+
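+A titolo puramente illustrativo, ecco uno schizzo minimale in NumPy che applica le formule dei minimi quadrati (i dati `x` e `y` sono valori di esempio ipotetici, non quelli della lezione):
+
+```python
+import numpy as np
+
+x = np.array([1, 2, 3, 4, 5], dtype=float)
+y = np.array([2, 4, 5, 4, 5], dtype=float)
+
+# b = somma((x - media_x) * (y - media_y)) / somma((x - media_x)^2);  a = media_y - b * media_x
+b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
+a = y.mean() - b * x.mean()
+print(f'Y = {a:.2f} + {b:.2f}X')  # Y = 2.20 + 0.60X
+```
+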
+## Correlazione
+
+Un altro termine da comprendere è il **coefficiente di correlazione** tra le variabili X e Y date. Con uno scatterplot puoi visualizzare rapidamente questo coefficiente. Un grafico con punti dati disposti lungo una linea ordinata ha una correlazione alta, mentre un grafico con punti dati sparsi ovunque tra X e Y ha una correlazione bassa.
+
+Un buon modello di regressione lineare sarà quello che ha un coefficiente di correlazione alto (più vicino a 1 che a 0) usando il metodo della Regressione dei Minimi Quadrati con una linea di regressione.
+
+✅ Esegui il notebook che accompagna questa lezione e osserva lo scatterplot Mese-Prezzo. Secondo la tua interpretazione visiva dello scatterplot, i dati che associano il mese al prezzo per le vendite di zucche sembrano avere una correlazione alta o bassa? Cambia qualcosa se usi una misura più fine di `Month`, ad esempio il *giorno dell'anno* (cioè il numero di giorni dall'inizio dell'anno)?
+
+Nel codice qui sotto assumeremo di aver pulito i dati e ottenuto un data frame chiamato `new_pumpkins`, simile al seguente:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> Il codice per pulire i dati è disponibile in [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). Abbiamo eseguito gli stessi passaggi di pulizia della lezione precedente e abbiamo calcolato la colonna `DayOfYear` usando la seguente espressione:
+
+```python
+import pandas as pd
+from datetime import datetime
+
+# Giorni trascorsi dall'inizio dell'anno per ogni data
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Ora che hai compreso la matematica dietro la regressione lineare, creiamo un modello di Regressione per vedere se possiamo prevedere quale confezione di zucche avrà i prezzi migliori. Chi acquista zucche per un campo di zucche per le festività potrebbe volere questa informazione per ottimizzare i propri acquisti di confezioni di zucche per il campo.
+
+## Cercare la Correlazione
+
+[](https://youtu.be/uoRq-lW2eQo "ML per principianti - Cercare la Correlazione: La Chiave per la Regressione Lineare")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla correlazione.
+
+Dalla lezione precedente hai probabilmente visto che il prezzo medio per i diversi mesi appare così:
+
+
+
+Questo suggerisce che dovrebbe esserci una certa correlazione, e possiamo provare ad addestrare un modello di regressione lineare per prevedere la relazione tra `Month` e `Price`, o tra `DayOfYear` e `Price`. Ecco lo scatterplot che mostra quest'ultima relazione:
+
+
+
+Vediamo se c'è una correlazione usando la funzione `corr`:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+Sembra che la correlazione sia piuttosto bassa: -0.15 per `Month` e -0.17 per `DayOfYear`. Potrebbe però esserci un'altra relazione importante: sembra che ci siano diversi cluster di prezzi corrispondenti a diverse varietà di zucca. Per confermare questa ipotesi, tracciamo ogni categoria di zucca con un colore diverso. Passando un parametro `ax` alla funzione di tracciamento `scatter` possiamo tracciare tutti i punti sullo stesso grafico:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+La nostra indagine suggerisce che la varietà ha più effetto sul prezzo complessivo rispetto alla data effettiva di vendita. Possiamo vedere questo con un grafico a barre:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Concentriamoci per il momento solo su una varietà di zucca, il 'tipo torta', e vediamo quale effetto ha la data sul prezzo:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+Se ora calcoliamo la correlazione tra `Price` e `DayOfYear` usando la funzione `corr`, otterremo qualcosa come `-0.27`: il che significa che addestrare un modello predittivo ha senso.
+
+> Prima di addestrare un modello di regressione lineare, è importante assicurarsi che i nostri dati siano puliti. La regressione lineare non funziona bene con valori mancanti, quindi ha senso eliminare tutte le celle vuote:
+
+```python
+# Creiamo una copia esplicita per evitare il SettingWithCopyWarning sulla selezione
+pie_pumpkins = pie_pumpkins.copy()
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Un altro approccio sarebbe riempire quei valori vuoti con valori medi dalla colonna corrispondente.
+
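+Ad esempio, uno schizzo minimale con `fillna` di Pandas (assumendo, a titolo illustrativo, di imputare la colonna `Price`):
+
+```python
+# Imputazione ipotetica: sostituisce i valori mancanti con la media della colonna
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```
+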
+## Regressione Lineare Semplice
+
+[](https://youtu.be/e4c_UP2fSjg "ML per principianti - Regressione Lineare e Polinomiale usando Scikit-learn")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla regressione lineare e polinomiale.
+
+Per addestrare il nostro modello di Regressione Lineare, useremo la libreria **Scikit-learn**.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+Iniziamo separando i valori di input (caratteristiche) e l'output atteso (etichetta) in array numpy separati:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Nota che abbiamo dovuto eseguire `reshape` sui dati di input affinché il pacchetto di Regressione Lineare li comprenda correttamente. La Regressione Lineare si aspetta un array 2D come input, dove ogni riga dell'array corrisponde a un vettore di caratteristiche di input. Nel nostro caso, poiché abbiamo solo un input, abbiamo bisogno di un array con forma N×1, dove N è la dimensione del dataset.
+
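+Ecco uno schizzo minimale, con valori di esempio, di cosa fa `reshape(-1, 1)`:
+
+```python
+import numpy as np
+
+v = np.array([10, 20, 30])   # forma (3,)
+X_demo = v.reshape(-1, 1)    # forma (3, 1): una riga per campione, una colonna per caratteristica
+print(X_demo.shape)          # (3, 1)
+```
+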
+Poi, dobbiamo dividere i dati in dataset di addestramento e di test, in modo da poter validare il nostro modello dopo l'addestramento:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Infine, l'addestramento del vero e proprio modello di Regressione Lineare richiede solo due righe di codice. Definiamo l'oggetto `LinearRegression` e lo adattiamo ai nostri dati usando il metodo `fit`:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+Dopo il `fit`, l'oggetto `LinearRegression` contiene tutti i coefficienti della regressione, accessibili tramite la proprietà `.coef_`. Nel nostro caso c'è un solo coefficiente, che dovrebbe essere intorno a `-0.017`. Significa che i prezzi sembrano calare leggermente col tempo, ma non troppo: circa 2 centesimi al giorno. Possiamo anche accedere al punto di intersezione della regressione con l'asse Y usando `lin_reg.intercept_`: nel nostro caso sarà intorno a `21`, a indicare il prezzo all'inizio dell'anno.
+
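+Per ispezionare questi valori, uno schizzo minimale (i numeri esatti dipendono dai dati):
+
+```python
+# Coefficiente (pendenza) e intercetta del modello addestrato
+print(lin_reg.coef_)       # es. circa [-0.017]
+print(lin_reg.intercept_)  # es. circa 21
+```
+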
+Per vedere quanto è accurato il nostro modello, possiamo prevedere i prezzi su un dataset di test e poi misurare quanto le nostre previsioni siano vicine ai valori attesi. Questo può essere fatto usando la metrica dell'errore quadratico medio (MSE), cioè la media di tutte le differenze al quadrato tra il valore atteso e quello previsto. Nel codice qui sotto ne calcoliamo la radice quadrata (RMSE), per esprimere l'errore nelle stesse unità del prezzo.
+
+```python
+import numpy as np
+
+pred = lin_reg.predict(X_test)
+
+# Radice dell'errore quadratico medio (RMSE)
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Il nostro errore sembra essere intorno ai 2 punti, che è ~17%. Non troppo buono. Un altro indicatore della qualità del modello è il **coefficiente di determinazione**, che può essere ottenuto così:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+Se il valore è 0, significa che il modello non tiene conto dei dati di input e agisce come il *peggior predittore lineare*, cioè semplicemente il valore medio del risultato. Un valore di 1 significa che possiamo prevedere perfettamente tutti gli output attesi. Nel nostro caso il coefficiente è intorno a 0.06, piuttosto basso.
+
+Possiamo anche tracciare i dati di test insieme alla linea di regressione per vedere meglio come funziona la regressione nel nostro caso:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Regressione Polinomiale
+
+Un altro tipo di Regressione Lineare è la Regressione Polinomiale. Mentre a volte c'è una relazione lineare tra le variabili - più grande è la zucca in volume, più alto è il prezzo - a volte queste relazioni non possono essere tracciate come un piano o una linea retta.
+
+✅ Ecco [alcuni esempi](https://online.stat.psu.edu/stat501/lesson/9/9.8) di dati che potrebbero usare la Regressione Polinomiale
+
+Dai un'altra occhiata alla relazione tra Data e Prezzo. Questo scatterplot deve necessariamente essere analizzato con una linea retta? I prezzi non potrebbero fluttuare? In questo caso, puoi provare la regressione polinomiale.
+
+✅ I polinomi sono espressioni matematiche che potrebbero consistere in una o più variabili e coefficienti
+
+La regressione polinomiale crea una linea curva per adattarsi meglio ai dati non lineari. Nel nostro caso, se includiamo il quadrato della variabile `DayOfYear` nei dati di input, dovremmo essere in grado di adattare i nostri dati con una curva parabolica, che avrà un minimo in un certo punto dell'anno.
+
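+Per intuire cosa significa, uno schizzo minimale che aggiunge a mano la colonna quadrata (assumendo `X` con forma (N, 1) come sopra):
+
+```python
+import numpy as np
+
+# Affianca a DayOfYear il suo quadrato: da (N, 1) a (N, 2)
+X_poly_manual = np.hstack([X, X ** 2])
+print(X_poly_manual.shape)
+```
+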
+Scikit-learn include una utile [API pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) per combinare diversi passaggi di elaborazione dei dati insieme. Una **pipeline** è una catena di **stimatori**. Nel nostro caso, creeremo una pipeline che prima aggiunge caratteristiche polinomiali al nostro modello, e poi addestra la regressione:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+Usare `PolynomialFeatures(2)` significa che includeremo tutti i polinomi di secondo grado dai dati di input. Nel nostro caso significa semplicemente `DayOfYear`², ma date due variabili di input X e Y, questo aggiungerebbe X², XY e Y². Se vogliamo, possiamo anche usare polinomi di grado superiore.
+
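+Uno schizzo minimale, con un input ipotetico a due variabili, di cosa produce `PolynomialFeatures(2)`:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+import numpy as np
+
+demo = np.array([[2, 3]])          # un campione con due variabili: X=2, Y=3
+poly = PolynomialFeatures(2)
+print(poly.fit_transform(demo))    # [[1. 2. 3. 4. 6. 9.]] -> 1, X, Y, X², XY, Y²
+```
+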
+Le pipeline possono essere usate nello stesso modo dell'oggetto `LinearRegression` originale: possiamo fare il `fit` della pipeline e poi usare `predict` per ottenere i risultati della previsione. Ecco il grafico che mostra i dati di test e la curva di approssimazione:
+
+
+
+Usando la Regressione Polinomiale possiamo ottenere un MSE leggermente più basso e una determinazione più alta, ma non in modo significativo. Dobbiamo tenere conto di altre caratteristiche!
+
+> Puoi notare che i prezzi minimi delle zucche si osservano da qualche parte intorno ad Halloween. Come lo spieghi?
+
+🎃 Congratulazioni, hai appena creato un modello che può aiutare a prevedere il prezzo delle zucche da torta. Probabilmente potresti ripetere la stessa procedura per tutti i tipi di zucca, ma sarebbe tedioso. Impariamo ora a tenere conto della varietà di zucca nel nostro modello!
+
+## Caratteristiche categoriche
+
+In un mondo ideale, vorremmo poter prevedere i prezzi per diverse varietà di zucca usando lo stesso modello. Tuttavia, la colonna `Variety` è un po' diversa da colonne come `Month`, perché contiene valori non numerici. Colonne di questo tipo sono chiamate **categoriche**.
+
+[](https://youtu.be/DYGliioIAE0 "ML per principianti - Previsioni con caratteristiche categoriche con la Regressione Lineare")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sull'uso delle caratteristiche categoriche.
+
+Qui puoi vedere come il prezzo medio dipende dalla varietà:
+
+
+
+Per tenere conto della varietà, dobbiamo prima convertirla in forma numerica, ovvero **codificarla**. Ci sono diversi modi per farlo:
+
+* La semplice **codifica numerica** costruisce una tabella delle diverse varietà e poi sostituisce il nome della varietà con un indice in quella tabella. Non è l'idea migliore per la regressione lineare, perché la regressione lineare prende il valore numerico effettivo dell'indice e lo aggiunge al risultato, moltiplicandolo per un coefficiente. Nel nostro caso, la relazione tra il numero dell'indice e il prezzo è chiaramente non lineare, anche se ci assicuriamo che gli indici siano ordinati in un modo specifico.
+* La **codifica one-hot** sostituisce la colonna `Variety` con 4 colonne diverse, una per ogni varietà. Ogni colonna contiene `1` se la riga corrispondente è di una data varietà e `0` altrimenti. Questo significa che ci saranno quattro coefficienti nella regressione lineare, uno per ogni varietà di zucca, responsabile del "prezzo iniziale" (o piuttosto "prezzo aggiuntivo") per quella particolare varietà.
+
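+Per confronto, uno schizzo minimale di codifica numerica semplice con `pd.factorize` (solo a scopo illustrativo, non raccomandata qui):
+
+```python
+# Sostituisce ogni varietà con un indice intero
+codes, uniques = pd.factorize(new_pumpkins['Variety'])
+print(uniques)    # le varietà distinte
+print(codes[:5])  # gli indici corrispondenti alle prime righe
+```
+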
+Il codice qui sotto mostra come possiamo codificare una varietà con one-hot encoding:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+Per addestrare la regressione lineare usando la varietà codificata con one-hot come input, dobbiamo solo inizializzare correttamente i dati `X` e `y`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+Il resto del codice è lo stesso di quello usato sopra per addestrare la Regressione Lineare. Se lo provi, vedrai che l'errore quadratico medio è più o meno lo stesso, ma otteniamo un coefficiente di determinazione molto più alto (~77%). Per ottenere previsioni ancora più accurate, possiamo tenere conto di più caratteristiche categoriche, così come di caratteristiche numeriche, come `Month` o `DayOfYear`. Per ottenere un unico grande array di caratteristiche, possiamo usare `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Qui teniamo conto anche di `City` e del tipo di `Package`, il che ci dà un MSE di 2.84 (10%) e una determinazione di 0.94!
+
+## Mettere tutto insieme
+
+Per fare il miglior modello, possiamo usare dati combinati (categorici codificati con one-hot + numerici) dall'esempio sopra insieme alla Regressione Polinomiale. Ecco il codice completo per tua comodità:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+Questo dovrebbe darci il miglior coefficiente di determinazione di quasi il 97%, e MSE=2.23 (~8% di errore di previsione).
+
+| Modello | MSE | Determinazione |
+|---------|-----|----------------|
+| `DayOfYear` Lineare | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` Polinomiale | 2.73 (17.0%) | 0.08 |
+| `Variety` Lineare | 5.24 (19.7%) | 0.77 |
+| Tutte le caratteristiche Lineare | 2.84 (10.5%) | 0.94 |
+| Tutte le caratteristiche Polinomiale | 2.23 (8.25%) | 0.97 |
+
+🏆 Ben fatto! Hai creato quattro modelli di Regressione in una lezione e hai migliorato la qualità del modello al 97%. Nell'ultima sezione sulla Regressione, imparerai la Regressione Logistica per determinare le categorie.
+
+---
+## 🚀Sfida
+
+Testa diverse variabili in questo notebook per vedere come la correlazione corrisponde alla precisione del modello.
+
+## [Quiz post-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## Revisione e Studio Autonomo
+
+In questa lezione abbiamo imparato la Regressione Lineare. Ci sono altri tipi importanti di regressione. Leggi delle tecniche Stepwise, Ridge, Lasso ed Elasticnet. Un buon corso per saperne di più è il [corso di Stanford Statistical Learning](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning).
+
+## Compito
+
+[Costruisci un Modello](assignment.md)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Sebbene ci sforziamo di garantire l'accuratezza, si prega di notare che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/3-Linear/assignment.md b/translations/it/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..579c6ec98
--- /dev/null
+++ b/translations/it/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Creare un Modello di Regressione
+
+## Istruzioni
+
+In questa lezione ti è stato mostrato come costruire un modello utilizzando sia la Regressione Lineare che quella Polinomiale. Utilizzando queste conoscenze, trova un dataset o usa uno dei set integrati di Scikit-learn per costruire un nuovo modello. Spiega nel tuo notebook perché hai scelto la tecnica che hai utilizzato e dimostra l'accuratezza del tuo modello. Se non è accurato, spiega il perché.
+
+## Rubrica
+
+| Criteri | Esemplare | Adeguato | Da Migliorare |
+| -------- | ------------------------------------------------------------ | --------------------------- | ------------------------------- |
+| | presenta un notebook completo con una soluzione ben documentata | la soluzione è incompleta | la soluzione è difettosa o presenta bug |
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Sebbene ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/3-Linear/solution/Julia/README.md b/translations/it/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..1a7bc4352
--- /dev/null
+++ b/translations/it/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Anche se ci sforziamo di ottenere la massima precisione, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda la traduzione professionale umana. Non siamo responsabili per eventuali fraintendimenti o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/4-Logistic/README.md b/translations/it/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..c291dc306
--- /dev/null
+++ b/translations/it/2-Regression/4-Logistic/README.md
@@ -0,0 +1,371 @@
+# Regressione logistica per predire categorie
+
+
+
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [Questa lezione è disponibile in R!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Introduzione
+
+In questa ultima lezione sulla Regressione, una delle tecniche di ML _classiche_ di base, esamineremo la Regressione Logistica. Utilizzeresti questa tecnica per scoprire schemi per prevedere categorie binarie. Questa caramella è cioccolato o no? Questa malattia è contagiosa o no? Questo cliente sceglierà questo prodotto o no?
+
+In questa lezione, imparerai:
+
+- Una nuova libreria per la visualizzazione dei dati
+- Tecniche per la regressione logistica
+
+✅ Approfondisci la tua comprensione del lavoro con questo tipo di regressione in questo [modulo di apprendimento](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)
+
+## Prerequisiti
+
+Avendo lavorato con i dati della zucca, ora siamo abbastanza familiari con essi da capire che c'è una categoria binaria con cui possiamo lavorare: `Color`.
+
+Costruiamo un modello di regressione logistica per prevedere, date alcune variabili, _di che colore è probabile che sia una data zucca_ (arancione 🎃 o bianca 👻).
+
+> Perché parliamo di classificazione binaria in un gruppo di lezioni sulla regressione? Solo per comodità linguistica, poiché la regressione logistica è [in realtà un metodo di classificazione](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), sebbene basato su un modello lineare. Imparerai altri modi per classificare i dati nel prossimo gruppo di lezioni.
+
+## Definire la domanda
+
+Per i nostri scopi, esprimeremo la questione in forma binaria: 'Bianco' o 'Non Bianco'. Nel nostro dataset c'è anche una categoria 'a strisce', ma i casi sono pochi, quindi non la useremo. Scompare comunque una volta rimossi i valori nulli dal dataset.
+
+> 🎃 Curiosità, a volte chiamiamo le zucche bianche 'zucche fantasma'. Non sono molto facili da intagliare, quindi non sono popolari come quelle arancioni, ma hanno un aspetto interessante! Quindi potremmo anche riformulare la nostra domanda come: 'Fantasma' o 'Non Fantasma'. 👻
+
+## Sulla regressione logistica
+
+La regressione logistica differisce dalla regressione lineare, di cui hai già appreso, in alcuni modi importanti.
+
+[](https://youtu.be/KpeCT6nEpBY "ML per principianti - Comprendere la Regressione Logistica per la Classificazione del Machine Learning")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla regressione logistica.
+
+### Classificazione binaria
+
+La regressione logistica non offre le stesse funzionalità della regressione lineare. La prima offre una previsione su una categoria binaria ("bianco o non bianco") mentre la seconda è in grado di prevedere valori continui, ad esempio data l'origine di una zucca e il tempo del raccolto, _quanto aumenterà il suo prezzo_.
+
+
+> Infografica di [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### Altre classificazioni
+
+Ci sono altri tipi di regressione logistica, inclusi multinomiale e ordinale:
+
+- **Multinomiale**, che implica avere più di una categoria - "Arancione, Bianco e a Strisce".
+- **Ordinale**, che implica categorie ordinate, utile se volessimo ordinare i nostri risultati in modo logico, come le nostre zucche ordinate per un numero finito di dimensioni (mini, sm, med, lg, xl, xxl).
+
+
+
+### Le variabili NON devono essere correlate
+
+Ricordi come la regressione lineare funzionava meglio con variabili più correlate? La regressione logistica è l'opposto: le variabili non devono necessariamente allinearsi. Questo si adatta a questi dati, che hanno correlazioni piuttosto deboli.
+
+### Hai bisogno di molti dati puliti
+
+La regressione logistica darà risultati più accurati se usi più dati; il nostro piccolo dataset non è ottimale per questo compito, quindi tienilo a mente.
+
+[](https://youtu.be/B2X4H9vcXTs "ML per principianti - Analisi e Preparazione dei Dati per la Regressione Logistica")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla preparazione dei dati per la regressione logistica
+
+✅ Pensa ai tipi di dati che si presterebbero bene alla regressione logistica
+
+## Esercizio - pulire i dati
+
+Per prima cosa, pulisci un po' i dati, eliminando i valori nulli e selezionando solo alcune delle colonne:
+
+1. Aggiungi il seguente codice:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ Puoi sempre dare un'occhiata al tuo nuovo dataframe:
+
+ ```python
+   pumpkins.info()
+ ```
+
+### Visualizzazione - grafico categorico
+
+A questo punto hai caricato il [notebook iniziale](../../../../2-Regression/4-Logistic/notebook.ipynb) con i dati delle zucche ancora una volta e li hai puliti in modo da preservare un dataset contenente alcune variabili, inclusa `Color`. Visualizziamo il dataframe nel notebook usando una libreria diversa: [Seaborn](https://seaborn.pydata.org/index.html), che è costruita su Matplotlib che abbiamo usato in precedenza.
+
+Seaborn offre modi interessanti per visualizzare i tuoi dati. Ad esempio, puoi confrontare le distribuzioni dei dati per ogni `Variety` e `Color` in un grafico categorico.
+
+1. Crea un grafico di questo tipo usando la funzione `catplot`, con i nostri dati delle zucche `pumpkins`, specificando una mappatura dei colori per ogni categoria di zucca (arancione o bianca):
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+   Osservando i dati, puoi vedere come i dati di `Color` si relazionano a `Variety`.
+
+   ✅ Dato questo grafico categorico, quali esplorazioni interessanti puoi immaginare?
+
+### Pre-elaborazione dei dati: codifica delle caratteristiche e delle etichette
+
+Il nostro dataset di zucche contiene valori stringa in tutte le colonne. Lavorare con dati categorici è intuitivo per gli esseri umani, ma non per le macchine: gli algoritmi di machine learning funzionano bene con i numeri. Ecco perché la codifica è un passaggio molto importante della pre-elaborazione dei dati: ci consente di trasformare i dati categorici in dati numerici senza perdere informazioni. Una buona codifica porta alla costruzione di un buon modello.
+
+Per la codifica delle caratteristiche ci sono due principali tipi di encoder:
+
+1. Encoder ordinale: si adatta bene alle variabili ordinali, che sono variabili categoriche in cui i loro dati seguono un ordine logico, come la colonna `Item Size` nel nostro dataset. Crea una mappatura tale che ogni categoria sia rappresentata da un numero, che è l'ordine della categoria nella colonna.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Encoder categorico: si adatta bene alle variabili nominali, che sono variabili categoriche in cui i loro dati non seguono un ordine logico, come tutte le caratteristiche diverse da `Item Size` nel nostro dataset. È una codifica one-hot, il che significa che ogni categoria è rappresentata da una colonna binaria: la variabile codificata è uguale a 1 se la zucca appartiene a quella Variety e 0 altrimenti.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+Quindi, `ColumnTransformer` viene utilizzato per combinare più encoder in un unico passaggio e applicarli alle colonne appropriate.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+
+D'altra parte, per codificare l'etichetta utilizziamo la classe `LabelEncoder` di scikit-learn, una classe di utilità che aiuta a normalizzare le etichette in modo che contengano solo valori tra 0 e n_classes-1 (qui, 0 e 1).
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+Una volta codificate le caratteristiche e l'etichetta, possiamo unirle in un nuovo dataframe `encoded_pumpkins`.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+
+✅ Quali sono i vantaggi dell'utilizzo di un encoder ordinale per la colonna `Item Size`?
+
+### Analizzare le relazioni tra le variabili
+
+Ora che abbiamo pre-elaborato i dati, possiamo analizzare le relazioni tra le caratteristiche e l'etichetta, per farci un'idea di quanto bene il modello riuscirà a prevedere l'etichetta date le caratteristiche.
+Il modo migliore per eseguire questo tipo di analisi è tracciare i dati. Useremo di nuovo la funzione `catplot` di Seaborn per visualizzare le relazioni tra `Item Size`, `Variety` e `Color` in un grafico categorico. Per rappresentare meglio i dati useremo la colonna codificata `Item Size` e la colonna non codificata `Variety`.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Utilizza un grafico a sciame
+
+Poiché Color è una categoria binaria (Bianco o Non Bianco), necessita di 'un [approccio specializzato](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar) per la visualizzazione'. Ci sono altri modi per visualizzare la relazione di questa categoria con altre variabili.
+
+Puoi visualizzare le variabili fianco a fianco con i grafici di Seaborn.
+
+1. Prova un grafico a 'sciame' per mostrare la distribuzione dei valori:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Attenzione**: il codice sopra potrebbe generare un avviso, poiché Seaborn non riesce a rappresentare una tale quantità di punti dati in un grafico a sciame. Una possibile soluzione è ridurre la dimensione del marcatore, utilizzando il parametro 'size'. Tuttavia, tieni presente che ciò influisce sulla leggibilità del grafico.
+
+
+> **🧮 Mostrami la Matematica**
+>
+> La regressione logistica si basa sul concetto di 'massima verosimiglianza' utilizzando [funzioni sigmoidi](https://wikipedia.org/wiki/Sigmoid_function). Una 'Funzione Sigmoide' su un grafico appare come una forma a 'S'. Prende un valore e lo mappa tra 0 e 1. La sua curva è anche chiamata 'curva logistica'. La sua formula appare così:
+>
+> 
+>
+> dove il punto medio della sigmoide si trova nel punto 0 di x, L è il valore massimo della curva, e k è la pendenza della curva. Se il risultato della funzione è superiore a 0.5, l'etichetta in questione verrà assegnata alla classe '1' della scelta binaria. In caso contrario, sarà classificata come '0'.
+
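+Come schizzo minimale, la funzione logistica generale descritta sopra può essere scritta così (parametri `L`, `k` e `x0` come nella formula):
+
+```python
+import numpy as np
+
+def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+    # Mappa x nell'intervallo (0, L); con i valori predefiniti è la sigmoide standard
+    return L / (1 + np.exp(-k * (x - x0)))
+
+print(sigmoid(0))        # 0.5 nel punto medio
+print(sigmoid(4) > 0.5)  # True -> classe '1'
+```
+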
+## Costruisci il tuo modello
+
+Costruire un modello per trovare queste classificazioni binarie è sorprendentemente semplice in Scikit-learn.
+
+[](https://youtu.be/MmZS2otPrQ8 "ML per principianti - Regressione Logistica per la classificazione dei dati")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulla costruzione di un modello di regressione logistica
+
+1. Seleziona le variabili che vuoi utilizzare nel tuo modello di classificazione e dividi i set di addestramento e test chiamando `train_test_split()`:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Ora puoi addestrare il tuo modello, chiamando `fit()` con i tuoi dati di addestramento, e stampare il suo risultato:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ Dai un'occhiata al punteggio del tuo modello. Non è male, considerando che hai solo circa 1000 righe di dati:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Migliore comprensione tramite una matrice di confusione
+
+Sebbene tu possa ottenere un report dei punteggi in [termini](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) stampando gli elementi sopra, potresti comprendere meglio il tuo modello usando una [matrice di confusione](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix), che ci aiuta a capire come sta performando il modello.
+
+> 🎓 Una '[matrice di confusione](https://wikipedia.org/wiki/Confusion_matrix)' (o 'matrice degli errori') è una tabella che esprime i veri vs. falsi positivi e negativi del tuo modello, valutando così l'accuratezza delle previsioni.
+
+1. Per utilizzare una matrice di confusione, chiama `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ Dai un'occhiata alla matrice di confusione del tuo modello:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+In Scikit-learn, le righe delle matrici di confusione (asse 0) sono etichette reali e le colonne (asse 1) sono etichette previste.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+Cosa sta succedendo qui? Supponiamo che il nostro modello sia chiamato a classificare le zucche tra due categorie binarie, categoria 'bianco' e categoria 'non-bianco'.
+
+- Se il tuo modello prevede una zucca come non bianca e appartiene alla categoria 'non-bianco' in realtà, la chiamiamo un vero negativo, mostrato dal numero in alto a sinistra.
+- Se il tuo modello prevede una zucca come bianca ma in realtà appartiene alla categoria 'non-bianco', la chiamiamo un falso positivo, mostrato dal numero in alto a destra.
+- Se il tuo modello prevede una zucca come non bianca ma in realtà appartiene alla categoria 'bianco', la chiamiamo un falso negativo, mostrato dal numero in basso a sinistra.
+- Se il tuo modello prevede una zucca come bianca e appartiene alla categoria 'bianco' in realtà, la chiamiamo un vero positivo, mostrato dal numero in basso a destra.
+
+Come avrai intuito, è preferibile avere un numero maggiore di veri positivi e veri negativi e un numero inferiore di falsi positivi e falsi negativi, il che implica che il modello performa meglio.
+
+Come si relaziona la matrice di confusione con la precisione e il richiamo? Ricorda, il rapporto di classificazione stampato sopra ha mostrato precisione (0.85) e richiamo (0.67).
+
+Precisione = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+Richiamo = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
+
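+A titolo di verifica, uno schizzo con le funzioni di scikit-learn (assumendo `y_test` e `predictions` delle celle precedenti):
+
+```python
+from sklearn.metrics import precision_score, recall_score
+
+print(precision_score(y_test, predictions))  # ~0.846
+print(recall_score(y_test, predictions))     # ~0.667
+```
+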
+✅ Q: Secondo la matrice di confusione, come ha fatto il modello? A: Non male; ci sono un buon numero di veri negativi ma anche alcuni falsi negativi.
+
+Rivediamo i termini che abbiamo visto in precedenza con l'aiuto della mappatura della matrice di confusione di TP/TN e FP/FN:
+
+🎓 Precisione: TP/(TP + FP) La frazione di istanze rilevanti tra le istanze recuperate (ad esempio quali etichette erano ben etichettate)
+
+🎓 Richiamo: TP/(TP + FN) La frazione di istanze rilevanti che sono state recuperate, che siano ben etichettate o meno
+
+🎓 f1-score: (2 * precisione * richiamo)/(precisione + richiamo) Una media ponderata di precisione e richiamo, dove il valore migliore è 1 e il peggiore è 0
+
+🎓 Supporto: Il numero di occorrenze di ciascuna etichetta recuperata
+
+🎓 Accuratezza: (TP + TN)/(TP + TN + FP + FN) La percentuale di etichette previste accuratamente per un campione.
+
+🎓 Macro Avg: Il calcolo delle metriche medie non ponderate per ciascuna etichetta, senza tenere conto dello squilibrio delle etichette.
+
+🎓 Weighted Avg: Il calcolo delle metriche medie per ciascuna etichetta, tenendo conto dello squilibrio delle etichette ponderandole in base al loro supporto (il numero di istanze vere per ciascuna etichetta).
+
+✅ Riesci a pensare a quale metrica dovresti guardare se vuoi che il tuo modello riduca il numero di falsi negativi?
+
+## Visualizza la curva ROC di questo modello
+
+[](https://youtu.be/GApO575jTA0 "ML per principianti - Analisi delle Prestazioni della Regressione Logistica con le Curve ROC")
+
+> 🎥 Clicca sull'immagine sopra per una breve panoramica sulle curve ROC
+
+Facciamo un'ultima visualizzazione per vedere la cosiddetta curva 'ROC':
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+Usando Matplotlib, traccia la [curva ROC (Receiver Operating Characteristic)](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html) del modello.
+
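+Come passo successivo, uno schizzo minimale per riassumere la curva in un singolo numero, l'AUC (area sotto la curva), usando la funzione `roc_auc_score` già importata sopra:
+
+```python
+auc = roc_auc_score(y_test, y_scores[:,1])
+print(auc)  # più è vicino a 1, meglio è
+```
+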
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatizzati basati su intelligenza artificiale. Sebbene ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatizzate possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda la traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/4-Logistic/assignment.md b/translations/it/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..b75f9bff7
--- /dev/null
+++ b/translations/it/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,13 @@
+# Riprovare una Regressione
+
+## Istruzioni
+
+Nella lezione, hai utilizzato un sottoinsieme dei dati sulle zucche. Ora, torna ai dati originali e prova a usarli tutti, puliti e standardizzati, per costruire un modello di Regressione Logistica.
+## Rubrica
+
+| Criteri | Esemplare | Adeguato | Da Migliorare |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------------------------------- | ----------------------------------------------------------- |
+| | Viene presentato un notebook con un modello ben spiegato e performante | Viene presentato un notebook con un modello che performa minimamente | Viene presentato un notebook con un modello sotto-performante o nessuno |
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Anche se ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali fraintendimenti o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/4-Logistic/solution/Julia/README.md b/translations/it/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..e97b5ec66
--- /dev/null
+++ b/translations/it/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Anche se ci impegniamo per l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua madre dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/2-Regression/README.md b/translations/it/2-Regression/README.md
new file mode 100644
index 000000000..4700801e7
--- /dev/null
+++ b/translations/it/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Modelli di regressione per il machine learning
+## Argomento regionale: Modelli di regressione per i prezzi delle zucche in Nord America 🎃
+
+In Nord America, le zucche vengono spesso intagliate in volti spaventosi per Halloween. Scopriamo di più su questi affascinanti ortaggi!
+
+
+> Foto di Beth Teutschmann su Unsplash
+
+## Cosa imparerai
+
+[](https://youtu.be/5QnJtDad4iQ "Video introduttivo sulla Regressione - Clicca per guardare!")
+> 🎥 Clicca sull'immagine sopra per un breve video introduttivo a questa lezione
+
+Le lezioni in questa sezione coprono i tipi di regressione nel contesto del machine learning. I modelli di regressione possono aiutare a determinare la _relazione_ tra variabili. Questo tipo di modello può prevedere valori come lunghezza, temperatura o età, rivelando così le relazioni tra le variabili mentre analizza i punti dati.
+
+In questa serie di lezioni, scoprirai le differenze tra la regressione lineare e quella logistica, e quando preferire l'una rispetto all'altra.
+
+[](https://youtu.be/XA3OaoW86R8 "ML per principianti - Introduzione ai modelli di regressione per il Machine Learning")
+
+> 🎥 Clicca sull'immagine sopra per un breve video che introduce i modelli di regressione.
+
+In questo gruppo di lezioni, ti preparerai per iniziare i compiti di machine learning, incluso configurare Visual Studio Code per gestire i notebook, l'ambiente comune per i data scientist. Scoprirai Scikit-learn, una libreria per il machine learning, e costruirai i tuoi primi modelli, concentrandoti sui modelli di regressione in questo capitolo.
+
+> Esistono utili strumenti low-code che possono aiutarti a imparare a lavorare con i modelli di regressione. Prova [Azure ML per questo compito](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Lezioni
+
+1. [Strumenti del mestiere](1-Tools/README.md)
+2. [Gestione dei dati](2-Data/README.md)
+3. [Regressione lineare e polinomiale](3-Linear/README.md)
+4. [Regressione logistica](4-Logistic/README.md)
+
+---
+### Crediti
+
+"ML con la regressione" è stato scritto con ♥️ da [Jen Looper](https://twitter.com/jenlooper)
+
+♥️ I contributori ai quiz includono: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) e [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+Il dataset delle zucche è suggerito da [questo progetto su Kaggle](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) e i suoi dati provengono dai [Rapporti standard dei mercati terminali delle colture speciali](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distribuiti dal Dipartimento dell'Agricoltura degli Stati Uniti. Abbiamo aggiunto alcuni punti relativi al colore in base alla varietà per normalizzare la distribuzione. Questi dati sono di dominio pubblico.
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatizzati basati su intelligenza artificiale. Sebbene ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatizzate possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/3-Web-App/1-Web-App/README.md b/translations/it/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..145537b58
--- /dev/null
+++ b/translations/it/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Costruisci un'app Web per utilizzare un modello ML
+
+In questa lezione, addestrerai un modello ML su un set di dati fuori dal comune: _avvistamenti UFO nell'ultimo secolo_, provenienti dal database di NUFORC.
+
+Imparerai:
+
+- Come serializzare ('pickle') un modello addestrato
+- Come utilizzare quel modello in un'app Flask
+
+Continueremo a usare i notebook per pulire i dati e addestrare il nostro modello, ma puoi fare un ulteriore passo avanti esplorando l'uso di un modello "nel mondo reale", per così dire: in un'app web.
+
+Per fare questo, è necessario costruire un'app web utilizzando Flask.
+
+## [Quiz pre-lezione](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Costruzione di un'app
+
+Ci sono diversi modi per costruire app web che consumano modelli di machine learning. La tua architettura web potrebbe influenzare il modo in cui il tuo modello viene addestrato. Immagina di lavorare in un'azienda in cui il gruppo di data science ha addestrato un modello che vogliono tu usi in un'app.
+
+### Considerazioni
+
+Ci sono molte domande che devi farti:
+
+- **È un'app web o un'app mobile?** Se stai costruendo un'app mobile o hai bisogno di utilizzare il modello in un contesto IoT, potresti usare [TensorFlow Lite](https://www.tensorflow.org/lite/) e utilizzare il modello in un'app Android o iOS.
+- **Dove risiederà il modello?** Nel cloud o localmente?
+- **Supporto offline.** L'app deve funzionare offline?
+- **Quale tecnologia è stata utilizzata per addestrare il modello?** La tecnologia scelta potrebbe influenzare gli strumenti che devi usare.
+ - **Usando TensorFlow.** Se stai addestrando un modello usando TensorFlow, ad esempio, quell'ecosistema offre la possibilità di convertire un modello TensorFlow per l'uso in un'app web utilizzando [TensorFlow.js](https://www.tensorflow.org/js/).
+ - **Usando PyTorch.** Se stai costruendo un modello usando una libreria come [PyTorch](https://pytorch.org/), hai l'opzione di esportarlo in formato [ONNX](https://onnx.ai/) (Open Neural Network Exchange) per l'uso in app web JavaScript che possono utilizzare [Onnx Runtime](https://www.onnxruntime.ai/). Questa opzione verrà esplorata in una lezione futura per un modello addestrato con Scikit-learn.
+ - **Usando Lobe.ai o Azure Custom Vision.** Se stai usando un sistema ML SaaS (Software as a Service) come [Lobe.ai](https://lobe.ai/) o [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) per addestrare un modello, questo tipo di software fornisce modi per esportare il modello per molte piattaforme, incluso la costruzione di un'API su misura da interrogare nel cloud dalla tua applicazione online.
+
+Hai anche l'opportunità di costruire un'intera app web Flask che sarebbe in grado di addestrare il modello stesso in un browser web. Questo può essere fatto anche utilizzando TensorFlow.js in un contesto JavaScript.
+
+Per i nostri scopi, poiché abbiamo lavorato con notebook basati su Python, esploriamo i passaggi necessari per esportare un modello addestrato da un tale notebook in un formato leggibile da un'app web costruita in Python.
+
+## Tools
+
+For this task, you need two tools: Flask and Pickle, both of which run on Python.
+
+✅ What's [Flask](https://palletsprojects.com/p/flask/)? Defined as a 'micro-framework' by its creators, Flask provides the basic features of web frameworks using Python and a templating engine to build web pages. Take a look at [this Learn module](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) to practice building with Flask.
+
+✅ What's [Pickle](https://docs.python.org/3/library/pickle.html)? Pickle 🥒 is a Python module that serializes and de-serializes a Python object structure. When you 'pickle' a model, you serialize or flatten its structure for use on the web. Be careful: pickle is not intrinsically secure, so be cautious if prompted to 'un-pickle' a file. A pickled file has the suffix `.pkl`.
+
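+To make the round trip concrete, here is a minimal sketch of serializing and restoring an object (the file name and object are placeholders, not part of the lesson's files):
+
+```python
+import pickle
+
+# Any Python object structure can be serialized to bytes...
+data = {'shape': 'disk', 'seconds': 20}
+with open('example.pkl', 'wb') as f:
+    pickle.dump(data, f)
+
+# ...and de-serialized later. Only un-pickle files you trust:
+# loading a pickle can execute arbitrary code.
+with open('example.pkl', 'rb') as f:
+    restored = pickle.load(f)
+print(restored)  # {'shape': 'disk', 'seconds': 20}
+```
+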
+## Exercise - clean your data
+
+In this lesson you'll use data from 80,000 UFO sightings, gathered by [NUFORC](https://nuforc.org) (The National UFO Reporting Center). This data has some interesting descriptions of UFO sightings, for example:
+
+- **Long example description.** "A man emerges from a beam of light that shines on a grassy field at night and he runs towards the Texas Instruments parking lot".
+- **Short example description.** "the lights chased us".
+
+The [ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) spreadsheet includes columns about the `city`, `state` and `country` where the sighting occurred, the object's `shape` and its `latitude` and `longitude`.
+
+In the [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) included in this lesson:
+
+1. Import `pandas`, `matplotlib`, and `numpy` as you did in previous lessons, and import the ufos spreadsheet. You can take a look at a sample data set:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. Convert the ufos data to a small dataframe with fresh titles. Check the unique values in the `Country` field.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Now, you can reduce the amount of data we need to deal with by dropping any null values and only importing sightings between 1-60 seconds:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Import Scikit-learn's `LabelEncoder` library to convert the text values for countries to a number:
+
+ ✅ LabelEncoder encodes data alphabetically (see the small sketch at the end of this step)
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+ Your data should look like this:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
+
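+ To see the alphabetical ordering concretely, here is a toy sketch with made-up country codes (not the dataset's actual values):
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ # Alphabetical order determines the integer codes:
+ # 'au' -> 0, 'ca' -> 1, 'us' -> 2
+ print(LabelEncoder().fit_transform(['us', 'au', 'ca', 'au']))  # [2 0 1 0]
+ ```
+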
+## Exercise - build your model
+
+Now you can get ready to train a model by dividing the data into the training and testing group.
+
+1. Select the three features you want to train on as your X vector, and the y vector will be `Country`. You want to be able to input `Seconds`, `Latitude` and `Longitude` and get a country id to return.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Train your model using logistic regression:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+The accuracy isn't bad **(around 95%)**; unsurprisingly, `Country` and `Latitude/Longitude` correlate.
+
+The model you created isn't very revolutionary, as you should be able to infer a `Country` from its `Latitude` and `Longitude`, but it's a good exercise to try to train from raw data that you cleaned and exported, and then to use this model in a web app.
+
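+If you want to see where the few misclassifications land, here is a quick sketch using the split from above:
+
+```python
+from sklearn.metrics import confusion_matrix
+
+# Rows are true country codes, columns are predicted ones.
+print(confusion_matrix(y_test, predictions))
+```
+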
+## Exercise - 'pickle' your model
+
+Now, it's time to _pickle_ your model! You can do that in a few lines of code. Once it's _pickled_, load your pickled model and test it against a sample data array containing values for seconds, latitude and longitude.
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+The model returns **'3'**, which is the country code for the UK. Wild! 👽
+
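+To turn that numeric code back into a readable country name, one option is to invert the label encoding. A sketch, assuming you rebuild the encoder on the same country codes that `ufos.Country.unique()` returned during cleaning ('au', 'ca', 'de', 'gb', 'us' in this dataset):
+
+```python
+from sklearn.preprocessing import LabelEncoder
+
+# Refit an encoder the same way as in the cleaning step,
+# then map the prediction back to its country string.
+encoder = LabelEncoder().fit(['au', 'ca', 'de', 'gb', 'us'])
+print(encoder.inverse_transform(model.predict([[50, 44, -12]])))  # ['gb']
+```
+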
+## Exercise - build a Flask app
+
+Now you can build a Flask app to call your model and return similar results, but in a more visually pleasing way.
+
+1. Start by creating a folder called **web-app** next to the _notebook.ipynb_ file where your _ufo-model.pkl_ file resides.
+
+1. In that folder create three more folders: **static**, with a folder **css** inside it, and **templates**. You should now have the following files and directories:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+ ✅ Refer to the solution folder for a view of the finished app
+
+1. The first file to create in the _web-app_ folder is the **requirements.txt** file. Like _package.json_ in a JavaScript app, this file lists the dependencies required by the app. In **requirements.txt** add the lines:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Now, navigate to the _web-app_ folder where this file lives:
+
+ ```bash
+ cd web-app
+ ```
+
+1. In your terminal type `pip install` to install the libraries listed in _requirements.txt_:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Now, you're ready to create three more files to finish the app:
+
+ 1. Create **app.py** in the root.
+ 2. Create **index.html** in the _templates_ directory.
+ 3. Create **styles.css** in the _static/css_ directory.
+
+1. Build out the _styles.css_ file with a few styles:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Next, build out the _index.html_ file:
+
+ ```html
+ <!DOCTYPE html>
+ <html>
+ <head>
+   <meta charset="UTF-8">
+   <title>🛸 UFO Appearance Prediction! 👽</title>
+   <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+ </head>
+
+ <body>
+  <div class="grid">
+
+   <div class="box">
+
+    <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+    <form action="{{ url_for('predict') }}" method="post">
+     <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+     <input type="text" name="latitude" placeholder="Latitude" required="required" />
+     <input type="text" name="longitude" placeholder="Longitude" required="required" />
+     <button type="submit" class="btn">Predict country where the UFO is seen</button>
+    </form>
+
+    <p>{{ prediction_text }}</p>
+
+   </div>
+
+  </div>
+
+ </body>
+ </html>
+ ```
+
+ Take a look at the templating in this file. Notice the 'mustache' syntax around variables that will be provided by the app, like the prediction text: `{{}}`. There's also a form that posts a prediction to the `/predict` route.
+
+ Finally, you're ready to build the Python file that drives the consumption of the model and the display of predictions:
+
+1. In `app.py`, add:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+ > 💡 Tip: when you add [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) while running the web app using Flask, any changes you make to your application will be reflected immediately without the need to restart the server. Beware! Don't enable this mode in a production app.
+
+If you run `python app.py` or `python3 app.py`, your web server starts up locally, and you can fill out a short form to get an answer to your burning question about where UFOs have been sighted!
+
+Before doing that, take a look at the parts of `app.py`:
+
+1. First, dependencies are loaded and the app starts.
+1. Then, the model is imported.
+1. Then, index.html is rendered on the home route.
+
+On the `/predict` route, several things happen when the form is posted:
+
+1. The form variables are gathered and converted to a numpy array. They are then sent to the model and a prediction is returned.
+2. The countries we want displayed are re-rendered as readable text from their predicted country code, and that value is sent back to index.html to be rendered in the template.
+
+Using a model this way, with Flask and a pickled model, is relatively straightforward. The hardest thing is to understand what shape the data is that must be sent to the model to get a prediction. That all depends on how the model was trained. This one has three data points to be input in order to get a prediction.
+
+In a professional setting, you can see how good communication is necessary between the folks who train the model and those who consume it in a web or mobile app. In our case, it's only one person, you!
+
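+If you'd rather test the route without the browser form, here is a quick sketch using the `requests` library. It assumes the app is running locally on Flask's default port 5000, and that the field names match the inputs in _index.html_; order matters because app.py reads `request.form.values()` positionally.
+
+```python
+import requests
+
+# Hypothetical smoke test for the /predict route.
+response = requests.post(
+    "http://127.0.0.1:5000/predict",
+    data={"seconds": 50, "latitude": 44, "longitude": -12},
+)
+print(response.status_code)  # 200 if the route rendered successfully
+```
+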
+---
+
+## 🚀 Challenge
+
+Instead of working in a notebook and importing the model to the Flask app, you could train the model right within the Flask app! Try converting your Python code in the notebook, perhaps after your data is cleaned, to train the model from within the app on a route called `train`. What are the pros and cons of pursuing this method?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Review & Self Study
+
+There are many ways to build a web app to consume ML models. Make a list of the ways you could use JavaScript or Python to build a web app to leverage machine learning. Consider architecture: should the model stay in the app or live in the cloud? If the latter, how would you access it? Draw out an architectural model for an applied ML web solution.
+
+## Assignment
+
+[Try a different model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/3-Web-App/1-Web-App/assignment.md b/translations/it/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..8699f1c6f
--- /dev/null
+++ b/translations/it/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Try a different model
+
+## Instructions
+
+Now that you've built a web app using a trained regression model, use one of the models from an earlier regression lesson to redo this web app. You can keep the style or design it differently to reflect the pumpkin data. Be careful to change the inputs to reflect your model's training method.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------------------------- | ------------------------------------------------------------- | ----------------------------------------------------------- | -------------------------------------- |
+| | The web app runs as expected and is deployed to the cloud | The web app contains flaws or exhibits unexpected results | The web app does not function properly |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/3-Web-App/README.md b/translations/it/3-Web-App/README.md
new file mode 100644
index 000000000..764358cd3
--- /dev/null
+++ b/translations/it/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# Build a web app to use your ML model
+
+In this section of the curriculum, you will be introduced to an applied ML topic: how to save your Scikit-learn model as a file that can be used to make predictions within a web application. Once the model is saved, you'll learn how to use it in a web app built in Flask. First you'll create a model using some data that's all about UFO sightings! Then, you'll build a web app that will allow you to input a number of seconds with a latitude and a longitude value to predict which country reported seeing a UFO.
+
+Photo by Michael Herren on Unsplash
+
+## Lessons
+
+1. [Build a Web App](1-Web-App/README.md)
+
+## Credits
+
+"Build a Web App" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper).
+
+♥️ The quizzes were written by Rohan Raj.
+
+The dataset is sourced from [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings).
+
+The web app architecture was suggested in part by [this article](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) and [this repo](https://github.com/abhinavsagar/machine-learning-deployment) by Abhinav Sagar.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/1-Introduction/README.md b/translations/it/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..8ea5da26d
--- /dev/null
+++ b/translations/it/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# Introduction to classification
+
+In these four lessons, you will explore a fundamental focus of classic machine learning - _classification_. We will walk through using various classification algorithms with a dataset about all the brilliant cuisines of Asia and India. Hope you're hungry!
+
+
+
+> Celebrate pan-Asian cuisines in these lessons! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+Classification is a form of [supervised learning](https://wikipedia.org/wiki/Supervised_learning) that bears a lot in common with regression techniques. If machine learning is all about predicting values or names to things by using datasets, then classification generally falls into two groups: _binary classification_ and _multiclass classification_.
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+> 🎥 Click the image above for a video: MIT's John Guttag introduces classification
+
+Remember:
+
+- **Linear regression** helped you predict relationships between variables and make accurate predictions on where a new datapoint would fall in relationship to that line. For example, you could predict _what price a pumpkin would be in September vs. December_.
+- **Logistic regression** helped you discover 'binary categories': at this price point, _is this pumpkin orange or not-orange_?
+
+Classification uses various algorithms to determine other ways of assigning a label or class to a data point. Let's work with this cuisine data to see whether, by observing a group of ingredients, we can determine its cuisine of origin.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [This lesson is available in R!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### Introduction
+
+Classification is one of the fundamental activities of the machine learning researcher and data scientist. From basic classification of a binary value ("is this email spam or not?") to complex image classification and segmentation using computer vision, it's always useful to be able to sort data into classes and ask questions of it.
+
+To state the process in a more scientific way, your classification method creates a predictive model that enables you to map the relationship between input variables and output variables.
+
+
+
+> Binary vs. multiclass problems for classification algorithms to handle. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Before starting the process of cleaning our data, visualizing it, and prepping it for our ML tasks, let's learn a bit about the various ways machine learning can be leveraged to classify data.
+
+Derived from [statistics](https://wikipedia.org/wiki/Statistical_classification), classification using classic machine learning uses features such as `smoker`, `weight` and `age` to determine the _likelihood of developing X disease_. As a supervised learning technique similar to the regression exercises you performed earlier, your data is labeled, and the ML algorithms use those labels to classify and predict classes (or 'features') of a dataset and assign them to a group or outcome.
+
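+As a toy illustration of that idea, here is a sketch with entirely made-up data, just to show the supervised shape of the task (feature values and labels are invented):
+
+```python
+from sklearn.linear_model import LogisticRegression
+
+# Hypothetical labeled rows: [smoker (0/1), weight (kg), age (years)]
+X = [[1, 90, 60], [0, 70, 30], [1, 85, 50], [0, 60, 25]]
+y = [1, 0, 1, 0]  # 1 = developed disease X, 0 = did not
+
+clf = LogisticRegression().fit(X, y)
+print(clf.predict_proba([[1, 80, 45]]))  # probability of each class
+```
+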
+✅ Take a moment to imagine a dataset about cuisines. What would a multiclass model be able to answer? What would a binary model be able to answer? What if you wanted to determine whether a given cuisine was likely to use fenugreek? What if you wanted to see whether, given a grocery bag full of star anise, artichokes, cauliflower, and horseradish, you could create a typical Indian dish?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 Click the image above for a video. The whole premise of the show 'Chopped' is the 'mystery basket' where chefs have to make some dish out of a random choice of ingredients. Surely an ML model would have helped!
+
+## Hello 'classifier'
+
+The question we want to ask of this cuisine dataset is actually a **multiclass question**, as we have several potential national cuisines to work with. Given a batch of ingredients, which of these many classes will the data fit?
+
+Scikit-learn offers several different algorithms to use to classify data, depending on the kind of problem you want to solve. In the next two lessons, you'll learn about several of these algorithms.
+
+## Exercise - clean and balance your data
+
+The first task at hand, before starting this project, is to clean and **balance** your data to get better results. Start with the blank _notebook.ipynb_ file in the root of this folder.
+
+The first thing to install is [imblearn](https://imbalanced-learn.org/stable/). This is a Scikit-learn package that will allow you to better balance the data (you will learn more about this task in a minute).
+
+1. To install `imblearn`, run `pip install`, like so:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. Import the packages you need to import your data and visualize it, and also import `SMOTE` from `imblearn`.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ Now you are set up to read in the data.
+
+1. The next task will be to import the data:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+ Using `read_csv()` will read the content of the csv file _cuisines.csv_ and place it in the variable `df`.
+
+1. Check the data's shape:
+
+ ```python
+ df.head()
+ ```
+
+ The first five rows look like this:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. Get info about this data by calling `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ Your output resembles:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## Exercise - learning about cuisines
+
+Now the work starts to become more interesting. Let's discover the distribution of data, per cuisine.
+
+1. Plot the data as bars by calling `barh()`:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ There are a finite number of cuisines, but the distribution of data is uneven. You can fix that! Before doing so, explore a little more.
+
+1. Find out how much data is available per cuisine and print it out:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ The output looks like this:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## Discovering ingredients
+
+Now you can dig deeper into the data and learn what the typical ingredients per cuisine are. You should clean out recurrent data that creates confusion between cuisines, so let's learn about this problem.
+
+1. Create a function `create_ingredient_df()` in Python to create an ingredient dataframe. This function will start by dropping an unhelpful column and sorting ingredients by their count:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ Now you can use that function to get an idea of the ten most popular ingredients per cuisine.
+
+1. Call `create_ingredient_df()` and plot the result by calling `barh()`:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Do the same for the Japanese data:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now for the Chinese ingredients:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Plot the Indian ingredients:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Finally, plot the Korean ingredients:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now, drop the most common ingredients that create confusion between distinct cuisines, by calling `drop()`:
+
+ Everyone loves rice, garlic and ginger!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## Balancing the dataset
+
+Now that you have cleaned the data, use [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - "Synthetic Minority Over-sampling Technique" - to balance it.
+
+1. Call `fit_resample()`; this strategy generates new samples by interpolation.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ By balancing your data, you'll have better results when classifying it. Think about a binary classification: if most of your data belongs to one class, an ML model is going to predict that class more frequently, just because there is more data for it. Balancing the data removes this skew.
+
+1. Now you can check the numbers of labels per cuisine:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ Your output looks like this:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ The data is nice and clean, balanced, and very delicious!
+
+1. The last step is to save your balanced data, including labels and features, into a new dataframe that can be exported to a file:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. You can take one more look at the data using `transformed_df.head()` and `transformed_df.info()`. Save a copy of this data for use in future lessons:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ This fresh CSV can now be found in the root data folder.
+
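+ As a quick sanity check (a sketch), you can reload the exported file and confirm that the shape and the balanced label counts survived the round trip:
+
+ ```python
+ check_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ print(check_df.shape)
+ print(check_df.cuisine.value_counts())
+ ```
+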
+---
+
+## 🚀Challenge
+
+This curriculum contains several interesting datasets. Dig through the `data` folders and see whether any contain datasets that would be appropriate for binary or multiclass classification. What questions would you ask of such a dataset?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## Review & Self Study
+
+Explore SMOTE's API. What use cases is it best suited for? What problems does it solve?
+
+## Assignment
+
+[Explore classification methods](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/1-Introduction/assignment.md b/translations/it/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..d5f01f430
--- /dev/null
+++ b/translations/it/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Explore classification methods
+
+## Instructions
+
+In the [Scikit-learn documentation](https://scikit-learn.org/stable/supervised_learning.html) you'll find a large list of ways to classify data. Do a little scavenger hunt in these docs: your goal is to look for classification methods and match each to a dataset in this curriculum, a question you can ask of it, and a technique of classification. Create a spreadsheet or table in a .doc file and explain how the dataset would work with the classification algorithm.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | A document is presented overviewing 5 algorithms alongside a classification technique. The overview is well-explained and detailed. | A document is presented overviewing 3 algorithms alongside a classification technique. The overview is well-explained and detailed. | A document is presented overviewing fewer than three algorithms alongside a classification technique, and the overview is neither well-explained nor detailed. |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/1-Introduction/solution/Julia/README.md b/translations/it/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..02bff495f
--- /dev/null
+++ b/translations/it/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/2-Classifiers-1/README.md b/translations/it/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..69fc323e2
--- /dev/null
+++ b/translations/it/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,77 @@
+# Cuisine classifiers 1
+
+In this lesson, you will use the dataset you saved from the last lesson, full of balanced, clean data all about cuisines.
+
+You will use this dataset with a variety of classifiers to _predict a given national cuisine based on a group of ingredients_. While doing so, you'll learn more about some of the ways that algorithms can be leveraged for classification tasks.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+## Preparation
+
+Assuming you completed [Lesson 1](../1-Introduction/README.md), make sure that a _cleaned_cuisines.csv_ file exists in the root `/data` folder for these four lessons.
+
+## Exercise - predict a national cuisine
+
+1. Working in this lesson's _notebook.ipynb_, import that file along with the Pandas library:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ The data looks like this:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. Now, import several more libraries:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Divide the X and y coordinates into two dataframes for training. `cuisine` can be the labels dataframe:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ It will look like this:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. Drop the `Unnamed: 0` column and the `cuisine` column by calling `drop()`. Save the rest of the data as trainable features:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ Your features look like this:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/2-Classifiers-1/assignment.md b/translations/it/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..32ddd2aee
--- /dev/null
+++ b/translations/it/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# Study the solvers
+
+## Instructions
+
+In this lesson you learned about the various solvers that pair algorithms with a machine learning process to create an accurate model. Walk through the solvers listed in the lesson and pick two. In your own words, compare and contrast these two solvers. What kind of problem do they address? How do they work with various data structures? Why would you pick one over the other?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------------------------------ | -------------------------------------------------- | ------------------------- |
+| | A .doc file is presented with two paragraphs, one on each solver, comparing them thoughtfully. | A .doc file is presented with only one paragraph | The assignment is incomplete |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/it/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..8b4a4d7ef
--- /dev/null
+++ b/translations/it/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/3-Classifiers-2/README.md b/translations/it/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..ec7f3297e
--- /dev/null
+++ b/translations/it/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# Cuisine classifiers 2
+
+In this second classification lesson, you will explore more ways to classify numeric data. You will also learn about the ramifications of choosing one classifier over another.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### Prerequisite
+
+We assume that you have completed the previous lessons and have a cleaned dataset called _cleaned_cuisines.csv_ in the `data` folder at the root of this 4-lesson folder.
+
+### Preparation
+
+We have loaded your _notebook.ipynb_ file with the cleaned dataset and have divided it into X and y dataframes, ready for the model building process.
+
+## A classification map
+
+Previously, you learned about the various options you have when classifying data using Microsoft's cheat sheet. Scikit-learn offers a similar, but more granular cheat sheet that can further help narrow down your estimators (another term for classifiers):
+
+
+> Tip: [visit this map online](https://scikit-learn.org/stable/tutorial/machine_learning_map/) and click along the path to read the documentation.
+
+### The plan
+
+This map is very helpful once you have a clear grasp of your data, as you can 'walk' along its paths to a decision:
+
+- We have >50 samples
+- We want to predict a category
+- We have labeled data
+- We have fewer than 100K samples
+- ✨ We can choose a Linear SVC
+- If that doesn't work, since we have numeric data
+ - We can try a ✨ KNeighbors Classifier
+ - If that doesn't work, try ✨ SVC and ✨ Ensemble Classifiers
+
+This is a very helpful trail to follow.
+
+## Exercise - split the data
+
+Following this path, we should start by importing some libraries to use.
+
+1. Import the needed libraries:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. Split your training and test data:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC classifier
+
+Support-Vector clustering (SVC) is a child of the Support-Vector machines (SVM) family of ML techniques (learn more about these below). In this method, you can choose a 'kernel' to decide how to cluster the labels. The 'C' parameter refers to 'regularization', which regulates the influence of parameters. The kernel can be one of [several](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC); here we set it to 'linear' to ensure that we leverage Linear SVC. Probability defaults to 'false'; here we set it to 'true' to gather probability estimates. We set the random state to '0' to shuffle the data before gathering probabilities.
+
+### Exercise - apply a Linear SVC
+
+Start by creating an array of classifiers. You will add to this array progressively as we test.
+
+1. Inizia con un Linear SVC:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Train your model using the Linear SVC and print out a report:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ The result is pretty good:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
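+ Because `probability=True` was set, you can also ask the fitted classifier for per-class probability estimates. A sketch, reusing the `classifier` variable left over from the loop above (at this point, the Linear SVC):
+
+ ```python
+ # Columns of predict_proba follow the order of classifier.classes_.
+ print(classifier.classes_)
+ print(classifier.predict_proba(X_test[:1]))
+ ```
+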
+## K-Neighbors classifier
+
+K-Neighbors is part of the "neighbors" family of ML methods, which can be used for both supervised and unsupervised learning. In this method, a predefined number of points is created, and data is gathered around these points such that generalized labels can be predicted for the data.
+
+### Exercise - apply the K-Neighbors classifier
+
+The previous classifier was good and worked well with the data, but maybe we can get better accuracy. Try a K-Neighbors classifier.
+
+1. Add a line to your classifier array (add a comma after the Linear SVC item):
+
+ ```python
+ # Note: the first positional argument of KNeighborsClassifier is
+ # n_neighbors, so this reuses C (10) as the number of neighbors.
+ 'KNN classifier': KNeighborsClassifier(C),
+ ```
+
+ The result is a little worse:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ Learn about [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)
+
+## Support Vector Classifier
+
+Support-Vector classifiers are part of the [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) family of ML methods that are used for classification and regression tasks. SVMs "map training examples to points in space" to maximize the distance between two categories. Subsequent data is mapped into this space so that its category can be predicted.
+
+### Exercise - apply a Support Vector Classifier
+
+Let's try for a little better accuracy with a Support Vector Classifier.
+
+1. Add a comma after the K-Neighbors item, and then add this line:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ The result is quite good!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ Learn about [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm)
+
+## Ensemble Classifiers
+
+Let's follow the path to the very end, even though the previous test was quite good. Let's try some 'Ensemble Classifiers', specifically Random Forest and AdaBoost:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+The result is very good, especially for Random Forest:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ Learn about [Ensemble Classifiers](https://scikit-learn.org/stable/modules/ensemble.html)
+
+This method of machine learning "combines the predictions of several base estimators" to improve the model's quality. In our example, we used Random Forest and AdaBoost.
+
+- [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#forest), an averaging method, builds a 'forest' of 'decision trees' infused with randomness to avoid overfitting. The n_estimators parameter is set to the number of trees.
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) fits a classifier to a dataset and then fits copies of that classifier to the same dataset. It focuses on the weights of incorrectly classified items and adjusts the fit for the next classifier to correct them.
+
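+The imports at the top of this lesson include `cross_val_score`, which the walkthrough never uses. As a sketch, you could use it to sanity-check any of these classifiers with k-fold cross-validation rather than a single train/test split; this assumes the feature and label dataframes prepared earlier, named `cuisines_feature_df` and `cuisines_label_df` as in the previous lesson:
+
+```python
+import numpy as np
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+
+# 5-fold cross-validated accuracy for one of the classifiers above.
+scores = cross_val_score(
+    RandomForestClassifier(n_estimators=100),
+    cuisines_feature_df,
+    np.ravel(cuisines_label_df),
+    cv=5,
+)
+print(scores.mean())
+```
+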
+---
+
+## 🚀Challenge
+
+Each of these techniques has a large number of parameters that you can tweak. Research each one's default parameters and think about what tweaking these parameters would mean for the model's quality.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## Review & Self Study
+
+There's a lot of jargon in these lessons, so take a minute to review [this list](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) of useful terminology!
+
+## Assignment
+
+[Parameter play](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/3-Classifiers-2/assignment.md b/translations/it/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..521ce9c6f
--- /dev/null
+++ b/translations/it/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Parameter play
+
+## Instructions
+
+There are a lot of parameters that are set by default when working with these classifiers. Intellisense in VS Code can help you dig into them. Adopt one of the ML classification techniques in this lesson and retrain models, tweaking various parameter values. Build a notebook explaining why some changes help the model quality while others degrade it. Be detailed in your answer.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | -------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------- | -------------------------------- |
+| | A notebook is presented with a classifier fully built up, its parameters tweaked, and changes explained in textboxes | A notebook is partially presented or poorly explained | The notebook is buggy or flawed |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/it/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..e38d44d30
--- /dev/null
+++ b/translations/it/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/4-Applied/README.md b/translations/it/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..ad3a46a33
--- /dev/null
+++ b/translations/it/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# Build a cuisine recommender web app
+
+In this lesson, you will build a classification model using some of the techniques you have learned in previous lessons and with the delicious cuisine dataset used throughout this series. In addition, you will build a small web app to use a saved model, leveraging Onnx's web runtime.
+
+One of the most useful practical uses of machine learning is building recommendation systems, and you can take the first step in that direction today!
+
+[](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 Click the image above for a video: Jen Looper builds a web app using classified cuisine data
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+In this lesson you will learn:
+
+- How to build a model and save it as an Onnx model
+- How to use Netron to inspect the model
+- How to use your model in a web app for inference
+
+## Build your model
+
+Building applied ML systems is an important part of leveraging these technologies for your business systems. You can use models within your web applications (and thus use them in an offline context if needed) by using Onnx.
+
+In a [previous lesson](../../3-Web-App/1-Web-App/README.md), you built a regression model about UFO sightings, "pickled" it, and used it in a Flask app. While this architecture is very useful to know, it is a full-stack Python app, and your requirements may include the use of a JavaScript application.
+
+In this lesson, you can build a basic JavaScript-based system for inference. First, however, you need to train a model and convert it for use with Onnx.
+
+## Exercise - train a classification model
+
+First, train a classification model using the cleaned cuisines dataset we used before.
+
+1. Start by importing useful libraries:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ You need '[skl2onnx](https://onnx.ai/sklearn-onnx/)' to help convert your Scikit-learn model to Onnx format.
+
+1. Then, work with your data in the same way you did in previous lessons, by reading a CSV file using `read_csv()`:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. Remove the first two unnecessary columns and save the remaining data as 'X':
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. Save the labels as 'y':
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### Begin the training routine
+
+We will use the 'SVC' library, which has good accuracy.
+
+1. Import the appropriate libraries from Scikit-learn:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. Separate the training and test sets:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. Build an SVC classification model as you did in the previous lesson:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. Now, test your model, calling `predict()`:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. Print out a classification report to check the model's quality:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ As we saw before, the accuracy is good:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### Convert your model to Onnx
+
+Make sure to do the conversion with the proper Tensor number. This dataset has 380 ingredients listed, so you need to note that number in `FloatTensorType`:
+
+1. Convert using a tensor number of 380.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. Create the onx file and save it as **model.onnx**:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > Note, you can pass [options](https://onnx.ai/sklearn-onnx/parameterized.html) into your conversion script. In this case, we passed 'nocl' to be True and 'zipmap' to be False. Since this is a classification model, you have the option to remove ZipMap which produces a list of dictionaries (not necessary). `nocl` refers to class information being included in the model. Reduce your model's size by setting `nocl` to 'True'.
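+
+    To sanity-check the saved file before wiring it into a web app, you can load it back with the `onnxruntime` Python package. This is a minimal sketch, assuming `onnxruntime` is installed (`pip install onnxruntime`) and that `X_test` from the training step is still in memory:
+
+    ```python
+    import numpy as np
+    import onnxruntime as rt
+
+    sess = rt.InferenceSession("./model.onnx")
+    input_name = sess.get_inputs()[0].name
+    print(input_name)  # should be 'float_input', matching the initial_type above
+
+    # run inference on one test row; Onnx expects float32 input
+    sample = X_test.values[:1].astype(np.float32)
+    pred = sess.run(None, {input_name: sample})
+    print(pred[0])  # the predicted cuisine label
+    ```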
+
+Running the entire notebook will now build an Onnx model and save it to this folder.
+
+## View your model
+
+Onnx models are not very visible in Visual Studio Code, but there's very good free software that many researchers use to visualize the model, to ensure that it is properly built. Download [Netron](https://github.com/lutzroeder/Netron) and open your model.onnx file. You can see your simple model visualized, with its 380 inputs and classifier listed:
+
+
+
+Netron is a helpful tool to view your models.
+
+Now you are ready to use this neat model in a web app. Let's build an app that will come in handy when you look in your refrigerator and try to figure out which combination of your leftover ingredients you can use to cook a given cuisine, as determined by your model.
+
+## Build a recommender web application
+
+You can use your model directly in a web app. This architecture also allows you to run it locally and even offline if needed. Start by creating an `index.html` file in the same folder where you stored your `model.onnx` file.
+
+1. In this _index.html_ file, add the following markup:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+        <head>
+            <title>Cuisine Matcher</title>
+        </head>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. Now, working within the `body` tags, add a little markup to show a list of checkboxes reflecting some ingredients:
+
+    ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <!-- each checkbox's value is that ingredient's column index in the dataset; -->
+        <!-- apple = 4 is confirmed below, the other values are illustrative - verify -->
+        <!-- them against ingredient_indexes.csv -->
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="247" class="checkbox">
+            <label>pear</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="77" class="checkbox">
+            <label>cherry</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="126" class="checkbox">
+            <label>fenugreek</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="302" class="checkbox">
+            <label>sake</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="327" class="checkbox">
+            <label>soy sauce</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="112" class="checkbox">
+            <label>cumin</label>
+        </div>
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+    Notice that each checkbox is given a value. This reflects the index where the ingredient is found according to the dataset. Apple, for example, in this alphabetic list, occupies the fifth column, so its value is '4' since we start counting at 0. You can consult the [ingredients spreadsheet](../../../../4-Classification/data/ingredient_indexes.csv) to discover a given ingredient's index.
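+
+    If you prefer to look indexes up programmatically, here is a small sketch (it assumes the cleaned CSV from the training exercise above, and that 'apple' is one of its column names):
+
+    ```python
+    import pandas as pd
+
+    data = pd.read_csv('../data/cleaned_cuisines.csv')
+    ingredients = list(data.columns[2:])   # the same 380 columns used to train X
+    print(ingredients.index('apple'))      # the value to use for the apple checkbox
+    ```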
+
+    Continuing your work in the index.html file, add a script block where the model is called after the final closing `</div>`.
+
+1. First, import the [Onnx Runtime](https://www.onnxruntime.ai/):
+
+    ```html
+    <!-- this CDN path/version is one known distribution; check onnxruntime.ai for the current one -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+    > Onnx Runtime is used to enable running your Onnx models across a wide range of hardware platforms, including optimizations and an API to use.
+
+1. Once the Runtime is in place, you can call it:
+
+    ```html
+    <script>
+        // a sketch reconstructing the stripped script, following the description below
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // init is called when the application starts
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    // toggle the ingredient's state (1 or 0) based on the checkbox's value
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        function testCheckboxes() {
+            // check whether any checkbox was checked
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            if (!testCheckboxes()) {
+                alert('Please select at least one ingredient.');
+                return;
+            }
+            try {
+                // set up an asynchronous load of the model
+                const session = await ort.InferenceSession.create('./model.onnx');
+
+                // create a Tensor structure to send to the model
+                const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+
+                // 'feeds' reflects the 'float_input' input created when training the model
+                const feeds = { float_input: input };
+
+                // send the feeds to the model and wait for a response
+                const results = await session.run(feeds);
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } catch (e) {
+                console.error('Failed to run inference on the ONNX model', e);
+            }
+        }
+
+        init();
+    </script>
+    ```
+
+In this code, there are several things happening:
+
+1. You created an array of 380 possible values (1 or 0) to be set and sent to the model for inference, depending on whether an ingredient checkbox is checked.
+2. You created an array of checkboxes and a way to determine whether they were checked in an `init` function that is called when the application starts. When a checkbox is checked, the `ingredients` array is altered to reflect the chosen ingredient.
+3. You created a `testCheckboxes` function that checks whether any checkbox was checked.
+4. You use the `startInference` function when the button is pressed and, if any checkbox is checked, you start inference.
+5. The inference routine includes:
+    1. Setting up an asynchronous load of the model
+    2. Creating a Tensor structure to send to the model
+    3. Creating 'feeds' that reflect the `float_input` input that you created when training your model (you can use Netron to verify that name)
+    4. Sending these 'feeds' to the model and waiting for a response
+
+## Test your application
+
+Open a terminal session in Visual Studio Code in the folder where your index.html file resides. Ensure that you have [http-server](https://www.npmjs.com/package/http-server) installed globally, and type `http-server` at the prompt. A localhost should open and you can view your web app. Check what cuisine is recommended based on various ingredients:
+
+
+
+Congratulations, you have created a 'recommendation' web app with a few fields. Take some time to build out this system!
+
+## 🚀Challenge
+
+Your web app is very minimal, so continue to build it out using ingredients and their indexes from the [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv) data. What flavor combinations work to create a given national dish?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## Review & Self Study
+
+While this lesson just touched on the utility of creating a recommendation system for food ingredients, this area of ML applications is very rich in examples. Read some more about how these systems are built:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## Assignment
+
+[Build a new recommender](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/4-Applied/assignment.md b/translations/it/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..bb69021b8
--- /dev/null
+++ b/translations/it/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# Build a recommender
+
+## Instructions
+
+Given your exercises in this lesson, you now know how to build a JavaScript-based web app using Onnx Runtime and a converted Onnx model. Experiment with building a new recommender using data from these lessons or sourced elsewhere (give credit, please). You might create a pet recommender given various personality attributes, or a music genre recommender based on a person's mood. Be creative!
+
+## Rubric
+
+| Criteria | Exemplary                                                               | Adequate                             | Needs Improvement                 |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+|          | A web app and a notebook are presented, both well documented and running | One of the two is missing or flawed   | Both are either missing or flawed |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/4-Classification/README.md b/translations/it/4-Classification/README.md
new file mode 100644
index 000000000..d8b83d720
--- /dev/null
+++ b/translations/it/4-Classification/README.md
@@ -0,0 +1,30 @@
+# Getting started with classification
+
+## Regional topic: Delicious Asian and Indian Cuisines 🍜
+
+In Asia and India, food traditions are extremely diverse, and very delicious! Let's look at data about regional cuisines to try to understand their ingredients.
+
+
+> Photo by Lisheng Chang on Unsplash
+
+## What you will learn
+
+In this section, you will build on your earlier study of Regression and learn about other classifiers that you can use to better understand data.
+
+> There are useful low-code tools that can help you learn about working with classification models. Try [Azure ML for this task](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Lessons
+
+1. [Introduction to classification](1-Introduction/README.md)
+2. [More classifiers](2-Classifiers-1/README.md)
+3. [Yet other classifiers](3-Classifiers-2/README.md)
+4. [Applied ML: build a web app](4-Applied/README.md)
+
+## Credits
+
+"Getting started with classification" was written with ♥️ by [Cassie Breviu](https://www.twitter.com/cassiebreviu) and [Jen Looper](https://www.twitter.com/jenlooper)
+
+The delicious cuisines dataset was sourced from [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines).
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/1-Visualize/README.md b/translations/it/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..ff60513b2
--- /dev/null
+++ b/translations/it/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,215 @@
+# Introduction to clustering
+
+Clustering is a type of [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) that presumes that a dataset is unlabelled or that its inputs are not matched with predefined outputs. It uses various algorithms to sort through unlabeled data and provide groupings according to patterns it discerns in the data.
+
+[No One Like You by PSquare](https://youtu.be/ty2advRiWJM)
+
+> 🎥 Click the link above for a video. While you're studying machine learning with clustering, enjoy some Nigerian Dance Hall tracks - this is a highly-rated song from 2014 by PSquare.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+
+### Introduction
+
+[Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) is very useful for data exploration. Let's see if it can help discover trends and patterns in the way Nigerian audiences consume music.
+
+✅ Take a minute to think about the uses of clustering. In real life, clustering happens whenever you have a pile of laundry and need to sort out your family members' clothes 🧦👕👖🩲. In data science, clustering happens when trying to analyze a user's preferences, or determine the characteristics of any unlabeled dataset. Clustering, in a way, helps make sense of chaos, like a sock drawer.
+
+[Introduction to Clustering](https://youtu.be/esmzYhuFnds)
+
+> 🎥 Click the link above for a video: MIT's John Guttag introduces clustering
+
+In a professional setting, clustering can be used to determine things like market segmentation, determining what age groups buy what items, for example. Another use would be anomaly detection, perhaps to detect fraud from a dataset of credit card transactions. Or you might use clustering to determine tumors in a batch of medical scans.
+
+✅ Think a minute about how you might have encountered clustering 'in the wild', in a banking, e-commerce, or business setting.
+
+> 🎓 Interestingly, cluster analysis originated in the fields of Anthropology and Psychology in the 1930s. Can you imagine how it might have been used?
+
+Alternately, you could use it for grouping search results - by shopping links, images, or reviews, for example. Clustering is useful when you have a large dataset that you want to reduce and on which you want to perform more granular analysis, so the technique can be used to learn about data before other models are constructed.
+
+✅ Once your data is organized in clusters, you assign it a cluster Id, and this technique can be useful when preserving a dataset's privacy; you can instead refer to a data point by its cluster Id, rather than by more revealing identifiable data. Can you think of other reasons why you'd refer to a cluster Id rather than to other elements of the cluster to identify it?
+
+Deepen your understanding of clustering techniques in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)
+
+## Getting started with clustering
+
+[Scikit-learn offers a large array](https://scikit-learn.org/stable/modules/clustering.html) of methods to perform clustering. The type you choose will depend on your use case. According to the documentation, each method has various benefits. Here is a simplified table of the methods supported by Scikit-learn and their appropriate use cases:
+
+| Method name                  | Use case                                                                |
+| :--------------------------- | :---------------------------------------------------------------------- |
+| K-Means                      | general purpose, inductive                                               |
+| Affinity propagation         | many, uneven clusters, inductive                                         |
+| Mean-shift                   | many, uneven clusters, inductive                                         |
+| Spectral clustering          | few, even clusters, transductive                                         |
+| Ward hierarchical clustering | many, constrained clusters, transductive                                 |
+| Agglomerative clustering     | many, constrained, non Euclidean distances, transductive                 |
+| DBSCAN                       | non-flat geometry, uneven clusters, transductive                         |
+| OPTICS                       | non-flat geometry, uneven clusters with variable density, transductive   |
+| Gaussian mixtures            | flat geometry, inductive                                                 |
+| BIRCH                        | large dataset with outliers, inductive                                   |
+
+> 🎓 How we create clusters has a lot to do with how we gather up the data points into groups. Let's unpack some vocabulary:
+>
+> 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> Transductive inference is derived from observed training cases that map to specific test cases. Inductive inference is derived from training cases that map to general rules which are only then applied to test cases.
+>
+> An example: Imagine you have a dataset that is only partially labelled. Some things are 'records', some 'cds', and some are blank. Your job is to provide labels for the blanks. If you choose an inductive approach, you'd train a model looking for 'records' and 'cds', and apply those labels to your unlabeled data. This approach will have trouble classifying things that are actually 'cassettes'. A transductive approach, on the other hand, handles this unknown data more effectively as it works to group similar items together and then applies a label to a group. In this case, clusters might reflect 'round musical things' and 'square musical things'.
+>
+> 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> Derived from mathematical terminology, non-flat vs. flat geometry refers to the measure of distances between points by either 'flat' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) or 'non-flat' (non-Euclidean) geometrical methods.
+>
+> 'Flat' in this context refers to Euclidean geometry (parts of which are taught as 'plane' geometry), and non-flat refers to non-Euclidean geometry. What does geometry have to do with machine learning? Well, as two fields that are rooted in mathematics, there must be a common way to measure distances between points in clusters, and that can be done in a 'flat' or 'non-flat' way, depending on the nature of the data. [Euclidean distances](https://wikipedia.org/wiki/Euclidean_distance) are measured as the length of a line segment between two points. [Non-Euclidean distances](https://wikipedia.org/wiki/Non-Euclidean_geometry) are measured along a curve. If your data, visualized, seems to not exist on a plane, you might need to use a specialized algorithm to handle it.
+>
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['Distances'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> Clusters are defined by their distance matrix, e.g. the distances between points. This distance can be measured a few ways. Euclidean clusters are defined by the average of the point values, and contain a 'centroid' or center point. Distances are thus measured by the distance to that centroid. Non-Euclidean distances refer to 'clustroids', the point nearest to other points. Clustroids in turn can be defined in various ways.
+>
+> 🎓 ['Constrained'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduces 'semi-supervised' learning into this unsupervised method. The relationships between points are flagged as 'cannot link' or 'must-link', so some rules are forced on the dataset.
+>
+> An example: If an algorithm is set free on a batch of unlabelled or semi-labelled data, the clusters it produces may be of poor quality. In the example above, the clusters might group 'round music things' and 'square music things' and 'triangular things' and 'cookies'. If given some constraints, or rules to follow ("the item must be made of plastic", "the item needs to be able to produce music"), this can help 'constrain' the algorithm to make better choices.
+>
+> 🎓 'Density'
+>
+> Data that is 'noisy' is considered to be 'dense'. The distances between points in each of its clusters may prove, on examination, to be more or less dense, or 'crowded', and thus this data needs to be analyzed with the appropriate clustering method. [This article](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demonstrates the difference between using K-Means clustering vs. HDBSCAN algorithms to explore a noisy dataset with uneven cluster density.
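+
+To make the 'distances' vocabulary concrete, here is a quick numeric check - a minimal sketch using numpy, which this lesson's notebook does not otherwise require:
+
+```python
+import numpy as np
+
+a, b = np.array([0, 0]), np.array([3, 4])
+print(np.linalg.norm(a - b))  # Euclidean distance between two points: 5.0
+
+points = np.array([[0, 0], [2, 2], [4, 0]])
+print(points.mean(axis=0))    # the centroid of a Euclidean cluster: [2. 0.667] (approx.)
+```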
+
+## Clustering algorithms
+
+There are over 100 clustering algorithms, and their use depends on the nature of the data at hand. Let's discuss some of the major ones (a short side-by-side sketch follows this list):
+
+- **Hierarchical clustering**. If an object is classified by its proximity to a nearby object, rather than to one farther away, clusters are formed based on their members' distance to and from other objects. Scikit-learn's agglomerative clustering is hierarchical.
+
+   
+   > Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Centroid clustering**. This popular algorithm requires the choice of 'k', or the number of clusters to form, after which the algorithm determines the center point of a cluster and gathers data around that point. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) is a popular version of centroid clustering. The center is determined by the nearest mean, thus the name. The squared distance from the cluster is minimized.
+
+   
+   > Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Distribution-based clustering**. Based in statistical modeling, distribution-based clustering centers on determining the probability that a data point belongs to a cluster, and assigning it accordingly. Gaussian mixture methods belong to this type.
+
+- **Density-based clustering**. Data points are assigned to clusters based on their density, or their grouping around each other. Data points far from the group are considered outliers or noise. DBSCAN, Mean-shift and OPTICS belong to this type of clustering.
+
+- **Grid-based clustering**. For multi-dimensional datasets, a grid is created and the data is divided amongst the grid's cells, thereby creating clusters.
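+
+To get a feel for a few of these families, you can run them side by side on the same points. Here is a minimal sketch (it assumes scikit-learn and numpy are installed; it is not part of the lesson's notebook):
+
+```python
+import numpy as np
+from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
+from sklearn.mixture import GaussianMixture
+
+X = np.random.default_rng(1).normal(size=(200, 2))
+
+print(KMeans(n_clusters=3, n_init=10).fit_predict(X)[:5])        # centroid-based
+print(AgglomerativeClustering(n_clusters=3).fit_predict(X)[:5])  # hierarchical
+print(DBSCAN(eps=0.5).fit_predict(X)[:5])                        # density-based; -1 marks noise
+print(GaussianMixture(n_components=3, random_state=1).fit_predict(X)[:5])  # distribution-based
+```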
+
+## Exercise - cluster your data
+
+Clustering as a technique is greatly aided by proper visualization, so let's get started by visualizing our music data. This exercise will help us decide which of the methods of clustering we should most effectively use for the nature of this data.
+
+1. Open the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) file in this folder.
+
+1. Import the `Seaborn` package for good data visualization.
+
+    ```python
+    !pip install seaborn
+    import seaborn as sns  # the import itself, so later plotting cells can use sns
+    ```
+
+1. Append the song data from [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv). Load up a dataframe with some data about the songs. Get ready to explore this data by importing the libraries and dumping out the data:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+    Check the first few lines of data:
+
+    |     | name                     | album                        | artist              | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo   | time_signature |
+    | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+    | 0   | Sparky                   | Mandy & The Jungle           | Cruel Santino       | alternative r&b  | 2019         | 144000 | 48         | 0.666        | 0.851        | 0.42   | 0.534            | 0.11     | -6.699   | 0.0829      | 133.015 | 5              |
+    | 1   | shuga rush               | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop          | 2020         | 89488  | 30         | 0.71         | 0.0822       | 0.683  | 0.000169         | 0.101    | -5.64    | 0.36        | 129.993 | 3              |
+    | 2   | LITT!                    | LITT!                        | AYLØ                | indie r&b        | 2018         | 207758 | 40         | 0.836        | 0.272        | 0.564  | 0.000537         | 0.11     | -7.127   | 0.0424      | 130.005 | 4              |
+    | 3   | Confident / Feeling Cool | Enjoy Your Life              | Lady Donli          | nigerian pop     | 2019         | 175135 | 14         | 0.894        | 0.798        | 0.611  | 0.000187         | 0.0964   | -4.961   | 0.113       | 111.087 | 4              |
+    | 4   | wanted you               | rare.                        | Odunsi (The Engine) | afropop          | 2018         | 152049 | 25         | 0.702        | 0.116        | 0.833  | 0.91             | 0.348    | -6.044   | 0.0447      | 105.115 | 4              |
+
+1. Get some information about the dataframe, calling `info()`:
+
+ ```python
+ df.info()
+ ```
+
+    The output looks like this:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. Double-check for null values, by calling `isnull()` and verifying the sum to be 0:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+    All looks good:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. Describe the data:
+
+ ```python
+ df.describe()
+ ```
+
+    |       | release_date | length      | popularity | danceability | acousticness | energy   | instrumentalness | liveness | loudness  | speechiness | tempo      | time_signature |
+    | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+    | count | 530          | 530         | 530        | 530          | 530          | 530      | 530              | 530      | 530       | 530         | 530        | 530            |
+    | mean  | 2015.390566  | 222298.1698 | 17.507547  | 0.741619     | 0.265412     | 0.760623 | 0.016305         | 0.147308 | -4.953011 | 0.130748    | 116.487864 |                |
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## Review & Self Study
+
+Before you apply clustering algorithms, as we have learned, it's a good idea to understand the nature of your dataset. Read more on this topic [here](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
+
+[This helpful article](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/) walks you through the different ways that various clustering algorithms behave, given different data shapes.
+
+## Assignment
+
+[Research other visualizations for clustering](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/1-Visualize/assignment.md b/translations/it/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..02c9d5d41
--- /dev/null
+++ b/translations/it/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# Research other visualizations for clustering
+
+## Instructions
+
+In this lesson, you have worked with some visualization techniques to get a grasp of plotting your data in preparation for clustering it. Scatterplots, in particular, are useful for finding groups of objects. Research different ways and different libraries to create scatterplots and document your work in a notebook. You can use the data from this lesson, other lessons, or data you procure yourself (please credit its source in your notebook, however). Plot some data using scatterplots and explain what you discover.
+
+## Rubric
+
+| Criteria | Exemplary                                                      | Adequate                                                                                  | Needs Improvement                   |
+| -------- | --------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | ----------------------------------- |
+|          | A notebook is presented with five well-documented scatterplots | A notebook is presented with fewer than five scatterplots and it is less well documented | An incomplete notebook is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/it/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..5c56a1b96
--- /dev/null
+++ b/translations/it/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/2-K-Means/README.md b/translations/it/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..ef9c4afa1
--- /dev/null
+++ b/translations/it/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means clustering
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+In this lesson, you will learn how to create clusters using Scikit-learn and the Nigerian music dataset you imported earlier. We will cover the basics of K-Means for clustering. Keep in mind that, as you learned in the earlier lesson, there are many ways to work with clusters and the method you use depends on your data. We will try K-Means as it's the most common clustering technique. Let's get started!
+
+Terms you will learn about:
+
+- Silhouette scoring
+- Elbow method
+- Inertia
+- Variance
+
+## Introduction
+
+[K-Means Clustering](https://wikipedia.org/wiki/K-means_clustering) is a method derived from the domain of signal processing. It is used to divide and partition groups of data into 'k' clusters using a series of observations. Each observation works to group a given data point closest to its nearest 'mean', or the center point of a cluster.
+
+The clusters can be visualized as [Voronoi diagrams](https://wikipedia.org/wiki/Voronoi_diagram), which include a point (or 'seed') and its corresponding region.
+
+
+
+> infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+The K-Means clustering process [executes in a three-step process](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. The algorithm selects k-number of center points by sampling from the dataset. After this, it loops:
+    1. It assigns each sample to the nearest centroid.
+    2. It creates new centroids by taking the mean value of all of the samples assigned to the previous centroids.
+    3. Then, it calculates the difference between the new and the old centroids and repeats until the centroids are stabilized.
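+
+A minimal sketch of that loop, using numpy (an illustration only; the exercise below uses Scikit-learn's implementation, and this sketch assumes no cluster ever ends up empty):
+
+```python
+import numpy as np
+
+def kmeans_step(X, centroids):
+    # 1. assign each sample to the nearest centroid
+    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
+    # 2. recompute each centroid as the mean of its assigned samples
+    new_centroids = np.array([X[labels == k].mean(axis=0) for k in range(len(centroids))])
+    return labels, new_centroids
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(100, 2))
+centroids = X[rng.choice(len(X), 3, replace=False)]  # k = 3 seeds sampled from the data
+
+for _ in range(10):
+    labels, new_centroids = kmeans_step(X, centroids)
+    if np.allclose(new_centroids, centroids):  # 3. stop once the centroids stabilize
+        break
+    centroids = new_centroids
+```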
+
+One drawback of using K-Means is the fact that you will need to establish 'k', that is the number of centroids. Fortunately, the 'elbow method' helps to estimate a good starting value for 'k'. You'll try it in a minute.
+
+## Prerequisite
+
+You will work in this lesson's [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) file that includes the data import and preliminary cleaning you did in the last lesson.
+
+## Exercise - preparation
+
+Start by taking another look at the songs data.
+
+1. Create a boxplot, calling `boxplot()` for each column:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+    This data is a little noisy: by observing each column as a boxplot, you can see outliers.
+
+ 
+
+You could go through the dataset and remove these outliers, but that would make the data pretty minimal.
+
+1. For now, choose which columns you will use for your clustering exercise. Pick ones with similar ranges and encode the `artist_top_genre` column as numeric data:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. Now you need to pick how many clusters to target. You know there are 3 song genres that we carved out of the dataset, so let's try 3:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+You see an array printed out with predicted clusters (0, 1, or 2) for each row of the dataframe.
+
+1. Use this array to calculate a 'silhouette score':
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Silhouette score
+
+Look for a silhouette score closer to 1. This score varies from -1 to 1, and, if the score is 1, the cluster is dense and well-separated from other clusters. A value near 0 represents overlapping clusters with samples very close to the decision boundary of the neighboring clusters. [(Source)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+Our score is **.53**, so right in the middle. This indicates that our data is not particularly well-suited to this type of clustering, but let's continue.
+
+### Exercise - build a model
+
+1. Import `KMeans` and start the clustering process.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+    There are a few parts here that warrant explaining.
+
+    > 🎓 range: These are the iterations of the clustering process
+
+    > 🎓 random_state: "Determines random number generation for centroid initialization." [Source](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+    > 🎓 WCSS: "within-cluster sums of squares" measures the squared average distance of all the points within a cluster to the cluster centroid. [Source](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+    > 🎓 Inertia: K-Means algorithms attempt to choose centroids to minimize 'inertia', "a measure of how internally coherent clusters are." [Source](https://scikit-learn.org/stable/modules/clustering.html). The value is appended to the wcss variable on each iteration.
+
+    > 🎓 k-means++: In [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) you can use the 'k-means++' optimization, which "initializes the centroids to be (generally) distant from each other, leading to probably better results than random initialization."
+
+### Elbow method
+
+Previously, you surmised that, because you targeted 3 song genres, you should choose 3 clusters. But is that the case?
+
+1. Use the 'elbow method' to make sure.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+    Use the `wcss` variable that you built in the previous step to create a chart showing where the 'bend' in the elbow is, which indicates the optimum number of clusters. Maybe it **is** 3!
+
+ 
+
+## Exercise - display the clusters
+
+1. Try the process again, this time setting three clusters, and display the clusters as a scatterplot:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. Check the model's accuracy:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+    This model's accuracy is not very good, and the shape of the clusters gives you a hint why.
+
+ 
+
+    This data is too imbalanced, too little correlated, and there is too much variance between the column values to cluster well. In fact, the clusters that form are probably heavily influenced or skewed by the three genre categories we defined above. That was a learning process!
+
+    In Scikit-learn's documentation, you can see that a model like this one, with clusters not very well demarcated, has a 'variance' problem:
+
+ 
+    > Infographic from Scikit-learn
+
+## Variance
+
+Variance is defined as "the average of the squared differences from the Mean" [(Source)](https://www.mathsisfun.com/data/standard-deviation.html). In the context of this clustering problem, it refers to data where the numbers of our dataset tend to diverge a bit too much from the mean.
+
+✅ This is a great moment to think about all the ways you could correct this issue. Tweak the data a bit more? Use different columns? Use a different algorithm? Hint: Try [scaling your data](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) to normalize it and test other columns.
+
+> Try this '[variance calculator](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' to understand the concept a bit more.
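+
+To make that definition concrete, here is a quick check in Python (a minimal sketch; numpy is assumed to be installed):
+
+```python
+import numpy as np
+
+x = np.array([2, 4, 4, 4, 5, 5, 7, 9])
+mean = x.mean()                       # 5.0
+variance = ((x - mean) ** 2).mean()   # the average of the squared differences from the mean
+print(variance, np.var(x))            # both print 4.0
+```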
+
+---
+
+## 🚀Challenge
+
+Spend some time with this notebook, tweaking parameters. Can you improve the accuracy of the model by cleaning the data more (removing outliers, for example)? You can use weights to give more weight to given data samples. What else can you do to create better clusters?
+
+Hint: Try to scale your data. There's commented code in the notebook that adds standard scaling to make the data columns resemble each other more closely in terms of range. You'll find that while the silhouette score goes down, the 'kink' in the elbow graph smooths out. This is because leaving the data unscaled allows data with less variance to carry more weight. Read a bit more on this problem [here](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
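+
+The commented-out scaling code mentioned above might look roughly like this minimal sketch (assuming `X` is the feature table built earlier and scikit-learn is installed):
+
+```python
+from sklearn.preprocessing import StandardScaler
+
+scaler = StandardScaler()
+X_scaled = scaler.fit_transform(X)  # every column now has mean 0 and unit variance
+```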
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## Review & Self Study
+
+Take a look at a K-Means simulator [such as this one](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). You can use this tool to visualize sample data points and determine its centroids. You can edit the data's randomness, numbers of clusters and numbers of centroids. Does this help you get an idea of how the data can be grouped?
+
+Also, take a look at [this handout on K-Means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) from Stanford.
+
+## Assignment
+
+[Try different clustering methods](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/2-K-Means/assignment.md b/translations/it/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..4609689cc
--- /dev/null
+++ b/translations/it/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# Try different clustering methods
+
+## Instructions
+
+In this lesson you learned about K-Means clustering. Sometimes K-Means is not appropriate for your data. Create a notebook using data either from these lessons or from somewhere else (credit your source) and show a different clustering method NOT using K-Means. What did you learn?
+
+## Rubric
+
+| Criteria | Exemplary                                                       | Adequate                                                              | Needs Improvement            |
+| -------- | ---------------------------------------------------------------- | ----------------------------------------------------------------------- | ---------------------------- |
+|          | A notebook is presented with a well-documented clustering model | A notebook is presented without good documentation and/or incomplete | Incomplete work is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/it/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..ef10d169a
--- /dev/null
+++ b/translations/it/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/5-Clustering/README.md b/translations/it/5-Clustering/README.md
new file mode 100644
index 000000000..7f576f962
--- /dev/null
+++ b/translations/it/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# Clustering models for machine learning
+
+Clustering is a machine learning task that seeks to find objects that resemble one another and group them into groups called clusters. What differentiates clustering from other approaches in machine learning is that things happen automatically; in fact, it's fair to say it's the opposite of supervised learning.
+
+## Regional topic: clustering models for a Nigerian audience's musical taste 🎧
+
+Nigeria's diverse audience has diverse musical tastes. Using data scraped from Spotify (inspired by [this article](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)), let's look at some music popular in Nigeria. This dataset includes data about various songs' 'danceability' score, 'acousticness', loudness, 'speechiness', popularity and energy. It will be interesting to discover patterns in this data!
+
+
+
+> Photo by Marcela Laskoski on Unsplash
+
+In this series of lessons, you will discover new ways to analyze data using clustering techniques. Clustering is particularly useful when your dataset lacks labels. If it does have labels, then classification techniques such as those you learned in previous lessons might be more useful. But in cases where you are looking to group unlabelled data, clustering is a great way to discover patterns.
+
+> There are useful low-code tools that can help you learn about working with clustering models. Try [Azure ML for this task](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Lessons
+
+1. [Introduction to clustering](1-Visualize/README.md)
+2. [K-Means clustering](2-K-Means/README.md)
+
+## Credits
+
+These lessons were written with 🎶 by [Jen Looper](https://www.twitter.com/jenlooper) with helpful reviews by [Rishit Dagli](https://rishit_dagli) and [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan).
+
+The [Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) dataset was sourced from Kaggle as scraped from Spotify.
+
+Useful K-Means examples that aided in creating this lesson include this [iris exploration](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), this [introductory notebook](https://www.kaggle.com/prashant111/k-means-clustering-with-python), and this [hypothetical NGO example](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering).
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/1-Introduction-to-NLP/README.md b/translations/it/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..59dcbb830
--- /dev/null
+++ b/translations/it/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# Introduction to natural language processing
+
+This lesson covers a brief history and important concepts of *natural language processing*, a subfield of *computational linguistics*.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## Introduction
+
+NLP, as it is commonly known, is one of the best-known areas where machine learning has been applied and used in production software.
+
+✅ Can you think of software that you use every day that probably has some NLP built in? What about your word processing programs or mobile apps that you use regularly?
+
+You will learn about:
+
+- **The idea of languages**. How languages developed and what the major areas of study have been.
+- **Definition and concepts**. You will also learn definitions and concepts about how computers process text, including parsing, grammar, and identifying nouns and verbs. There are some coding tasks in this lesson, and several important concepts are introduced that you will learn to code later on in the next lessons.
+
+## Computational linguistics
+
+Computational linguistics is an area of research and development over many decades that studies how computers can work with, and even understand, translate, and communicate with languages. Natural language processing (NLP) is a related field focused on how computers can process 'natural', or human, languages.
+
+### Example - phone dictation
+
+If you have ever dictated to your phone instead of typing or asked a virtual assistant a question, your speech was converted into a text form and then processed or *parsed* from the language you spoke. The detected keywords were then processed into a format that the phone or assistant could understand and act on.
+
+
+> Real linguistic comprehension is hard! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+### How is this technology made possible?
+
+This is possible because someone wrote a computer program to do it. A few decades ago, some science fiction writers predicted that people would be mostly speaking to their computers, and the computers would always understand exactly what they meant. Sadly, it turned out to be a harder problem than many imagined, and while it is a much better understood problem today, there are significant challenges in achieving 'perfect' natural language processing when it comes to understanding the meaning of a sentence. This is a particularly hard problem when it comes to understanding humor or detecting emotions such as sarcasm in a sentence.
+
+At this point, you may be remembering school classes where the teacher covered the parts of grammar in a sentence. In some countries, students are taught grammar and linguistics as a dedicated subject, but in many, these topics are included as part of learning a language: either your first language in primary school (learning to read and write) and perhaps a second language in post-primary, or high school. Don't worry if you are not an expert at differentiating nouns from verbs or adverbs from adjectives!
+
+If you struggle with the difference between the *simple present* and *present progressive*, you are not alone. This is a challenging thing for many people, even native speakers of a language. The good news is that computers are really good at applying formal rules, and you will learn to write code that can *parse* a sentence as well as a human. The greater challenge you will examine later is understanding the *meaning*, and *sentiment*, of a sentence.
+
+## Prerequisites
+
+For this lesson, the main prerequisite is being able to read and understand the language of this lesson. There are no math problems or equations to solve. While the original author wrote this lesson in English, it is also translated into other languages, so you could be reading a translation. There are examples where a number of different languages are used (to compare the different grammar rules of different languages). These are *not* translated, but the explanatory text is, so the meaning should be clear.
+
+For the coding tasks, you will use Python and the examples are using Python 3.8.
+
+In this section, you will need, and use:
+
+- **Python 3 comprehension**. Programming language comprehension in Python 3; this lesson uses input, loops, file reading, arrays.
+- **Visual Studio Code + extension**. We will use Visual Studio Code and its Python extension. You can also use a Python IDE of your choice.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob) is a simplified text processing library for Python. Follow the instructions on the TextBlob site to install it on your system (install the corpora as well, as shown below):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 Tip: You can run Python directly in VS Code environments. Check the [docs](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) for more information.
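+
+As a quick smoke test that the install (and the corpora download) worked, you can parse a sentence - a minimal sketch, with an arbitrary example sentence:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("Tom enjoys learning natural language processing.")
+print(blob.tags)          # part-of-speech tags, e.g. [('Tom', 'NNP'), ...]
+print(blob.noun_phrases)  # noun phrases detected in the sentence
+```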
+
+## Talking to machines
+
+The history of trying to make computers understand human language goes back decades, and one of the earliest scientists to consider natural language processing was *Alan Turing*.
+
+### The 'Turing test'
+
+When Turing was researching *artificial intelligence* in the 1950's, he considered if a conversational test could be given to a human and a computer (via typed correspondence) where the human in the conversation was not sure whether they were conversing with another human or a computer.
+
+If, after a certain length of conversation, the human could not determine whether the answers came from a computer or not, then could the computer be said to be *thinking*?
+
+### The inspiration - 'the imitation game'
+
+The idea for this came from a party game called *The Imitation Game* where an interrogator is alone in a room and tasked with determining which of two people (in another room) is male and which is female. The interrogator can send notes, and must try to think of questions where the written answers reveal the gender of the mystery person. Of course, the players in the other room are trying to trick the interrogator by answering questions in such a way as to mislead or confuse the interrogator, whilst also giving the appearance of answering honestly.
+
+### Developing Eliza
+
+In the 1960's an MIT scientist called *Joseph Weizenbaum* developed [*Eliza*](https://wikipedia.org/wiki/ELIZA), a computer 'therapist' that would ask the human questions and give the appearance of understanding their answers. However, while Eliza could parse a sentence and identify certain grammatical constructs and keywords so as to give a reasonable answer, it could not be said to *understand* the sentence. If Eliza was presented with a sentence following the format "**I am** sad" it might rearrange and substitute words in the sentence to form the response "How long have **you been** sad".
+
+This gave the impression that Eliza understood the statement and was asking a follow-on question, whereas in reality, it was changing the tense and adding some words. If Eliza could not identify a keyword that it had a response for, it would instead give a random response that could be applicable to many different statements. Eliza could be easily tricked, for instance if a user wrote "**You are** a bicycle" it might respond with "How long have **I been** a bicycle?", instead of a more reasoned response.
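+
+A minimal sketch of that rearrange-and-substitute trick (an illustration only, not Weizenbaum's actual implementation):
+
+```python
+import re
+
+def eliza_reply(text):
+    # match "I am <something>" and echo it back with the pronoun and tense flipped
+    match = re.match(r"i am (.*)", text.strip(), re.IGNORECASE)
+    if match:
+        return f"How long have you been {match.group(1)}?"
+    return "Please tell me more."  # fallback when no keyword pattern matches
+
+print(eliza_reply("I am sad"))  # How long have you been sad?
+```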
+
+[Chatting with Eliza](https://youtu.be/RMK9AphfLco)
+
+> 🎥 Click the link above for a video about the original ELIZA program
+
+> Note: You can read the original description of [Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) published in 1966 if you have an ACM account. Alternately, read about Eliza on [wikipedia](https://wikipedia.org/wiki/ELIZA)
+
+## Exercise - coding a basic conversational bot
+
+A conversational bot, like Eliza, is a program that elicits user input and seems to understand and respond intelligently. Unlike Eliza, our bot will not have several rules giving it the appearance of having an intelligent conversation. Instead, our bot will have one ability only: to keep the conversation going with random responses that might work in almost any trivial conversation.
+
+### The plan
+
+Your steps when building a conversational bot (a minimal sketch of this loop follows the list):
+
+1. Print instructions advising the user how to interact with the bot
+2. Start a loop
+    1. Accept user input
+    2. If the user has asked to exit, then exit
+    3. Process the user input and determine the response (in this case, the response is a random choice from a list of possible generic responses)
+    4. Print the response
+3. Loop back to step 2
+
+### Costruire il bot
+
+Creiamo il bot. Inizieremo definendo alcune frasi.
+
+1. Crea questo bot in Python con le seguenti risposte casuali:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ Here is some sample output to guide you (user input is on the lines starting with `>`):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ One possible solution to the task is [here](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)
+
+ ✅ Stop and consider
+
+ 1. Do you think the random responses would 'trick' someone into thinking that the bot actually understood them?
+ 2. What features would the bot need to be more effective?
+ 3. If a bot could really 'understand' the meaning of a sentence, would it need to 'remember' the meaning of previous sentences in a conversation too?
+
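+Here is a minimal sketch of the whole loop described in the plan above; it is only one possible structure, and the linked solution may organize things differently:
+
+```python
+import random
+
+random_responses = ["That is quite interesting, please tell me more.",
+                    "I see. Do go on.",
+                    "Why do you say that?",
+                    "Funny weather we've been having, isn't it?",
+                    "Let's change the subject.",
+                    "Did you catch the game last night?"]
+
+# step 1: print instructions for the user
+print("Hello, I am Marvin, the simple robot.")
+print("You can end this conversation at any time by typing 'bye'")
+print("After typing each answer, press 'enter'")
+print("How are you today?")
+
+# step 2: loop until the user asks to exit
+while True:
+    user_input = input("> ")
+    if user_input.lower() == "bye":
+        break
+    # no understanding here - just pick a canned reply at random
+    print(random.choice(random_responses))
+
+print("It was nice talking to you, goodbye!")
+```
+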
+---
+
+## 🚀Challenge
+
+Choose one of the "stop and consider" elements above and either try to implement it in code or write a solution on paper using pseudocode.
+
+In the next lesson, you'll learn about a number of other approaches to parsing natural language and machine learning.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## Review & Self Study
+
+Take a look at the references below as further reading opportunities.
+
+### References
+
+1. Schubert, Lenhart, "Computational Linguistics", *The Stanford Encyclopedia of Philosophy* (Spring 2020 Edition), Edward N. Zalta (ed.), URL = .
+2. Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## Assignment
+
+[Search for a bot](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/it/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..f4cab41a3
--- /dev/null
+++ b/translations/it/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# Search for a bot
+
+## Instructions
+
+Bots are everywhere. Your assignment: find one and adopt it! You can find them on web sites, in banking applications, and on the phone, for example when you call financial services companies for advice or account information. Analyze the bot and see if you can confuse it. If you can confuse the bot, why do you think that happened? Write a short paper about your experience.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | --------------------- |
+| | A full paper is written, explaining the presumed bot architecture and outlining your experience with it | A paper is incomplete or not well researched | No paper is submitted |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/2-Tasks/README.md b/translations/it/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..64a61db22
--- /dev/null
+++ b/translations/it/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# Common natural language processing tasks and techniques
+
+For most *natural language processing* tasks, the text to be processed must be broken down, examined, and the results stored or cross-referenced with rules and data sets. These tasks allow the programmer to derive the _meaning_ or _intent_ or only the _frequency_ of terms and words in a text.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+Let's discover common techniques used in processing text. Combined with machine learning, these techniques help you analyze large amounts of text efficiently. Before applying ML to these tasks, however, it's important to understand the problems an NLP specialist may encounter.
+
+## Tasks common to NLP
+
+There are different ways to analyze a text you are working on. There are tasks you can perform, and through these tasks you can gauge your understanding of the text and draw conclusions. You usually carry out these tasks in a sequence.
+
+### Tokenization
+
+Probably the first thing most NLP algorithms have to do is to split the text into tokens, or words. While this sounds simple, having to account for punctuation and different languages' word and sentence delimiters can make it tricky. You might have to use various methods to determine the demarcations.
+
+
+> Tokenizing a sentence from **Pride and Prejudice**. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
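+As a quick illustration, `TextBlob` (the library used later in this lesson) can tokenize a sentence into words in one line; a minimal sketch:
+
+```python
+from textblob import TextBlob
+
+# TextBlob splits text into word tokens, dropping most punctuation
+blob = TextBlob("It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.")
+print(blob.words[:6])
+# WordList(['It', 'is', 'a', 'truth', 'universally', 'acknowledged'])
+```
+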
+### Embeddings
+
+[Word embeddings](https://wikipedia.org/wiki/Word_embedding) are a way to convert your text data to numbers. Embeddings are done in a way so that words with a similar meaning or words used together cluster together.
+
+
+
+> "I have the highest respect for your nerves, they are my old friends." - Word embeddings for a sentence in **Pride and Prejudice**. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+✅ Try [this interesting tool](https://projector.tensorflow.org/) to experiment with word embeddings. Clicking on one word shows clusters of similar words: 'toy' clusters with 'disney', 'lego', 'playstation', and 'console'.
+
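+One common way to train such embeddings is the Word2Vec algorithm; here is a minimal sketch using the `gensim` library (an assumption - it is not used elsewhere in these lessons) on a toy corpus:
+
+```python
+# assumes gensim 4.x is installed: pip install gensim
+from gensim.models import Word2Vec
+
+sentences = [["i", "feel", "happy"],
+             ["i", "feel", "sad"],
+             ["the", "dog", "is", "happy"]]
+model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)
+
+print(model.wv["happy"])               # the 50-dimensional vector for 'happy'
+print(model.wv.most_similar("happy"))  # words whose vectors cluster nearby
+```
+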
+### Parsing & Part-of-speech Tagging
+
+Every word that has been tokenized can be tagged as a part of speech - a noun, verb, or adjective. The sentence `the quick red fox jumped over the lazy brown dog` might be POS tagged as fox = noun, jumped = verb.
+
+
+
+> Parsing a sentence from **Pride and Prejudice**. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Parsing is recognizing which words are related to each other in a sentence - for instance `the quick red fox jumped` is an adjective-noun-verb sequence that is separate from the `lazy brown dog` sequence.
+
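+With `TextBlob`, tagging the example sentence is a one-liner; a minimal sketch:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.tags)
+# e.g. [('the', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'), ('jumped', 'VBD'), ...]
+```
+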
+### Word and Phrase Frequencies
+
+A useful procedure when analyzing a large body of text is to build a dictionary of every word or phrase of interest and how often it appears. The phrase `the quick red fox jumped over the lazy brown dog` has a frequency of 2 for the word `the`.
+
+Let's look at an example text where we count the frequency of words. Rudyard Kipling's poem The Winners contains the following verse:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+As phrase frequencies can be case sensitive or case insensitive as required, the phrase `a friend` has a frequency of 2, `the` has a frequency of 6, and `travels` has a frequency of 2.
+
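+A minimal sketch of this count using only the Python standard library:
+
+```python
+from collections import Counter
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# lowercase and strip punctuation so 'The' and 'the' count together
+words = [w.strip('.,?') for w in verse.lower().split()]
+counts = Counter(words)
+print(counts['the'], counts['travels'])  # 6 2
+```
+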
+### N-grams
+
+A text can be split into sequences of words of a set length: a single word (unigram), two words (bigrams), three words (trigrams) or any number of words (n-grams).
+
+For instance `the quick red fox jumped over the lazy brown dog` with an n-gram length of 2 produces the following n-grams:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+It might be easier to visualize this as a sliding window over the sentence. Here it is for n-grams of 3 words, with the n-gram in bold in each sentence:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> N-gram value of 3: Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
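+`TextBlob` can produce these windows directly; a minimal sketch:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+for ngram in blob.ngrams(n=3):
+    print(ngram)
+# WordList(['the', 'quick', 'red'])
+# WordList(['quick', 'red', 'fox'])
+# ... and so on for each 3-word window
+```
+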
+### Noun phrase Extraction
+
+In most sentences, there is a noun that is the subject or object of the sentence. In English, it is often identifiable as being preceded by 'a', 'an', or 'the'. Identifying the subject or object of a sentence by extracting the noun phrase is a common task in NLP when attempting to understand the meaning of a sentence.
+
+✅ In the sentence "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun.", can you identify the noun phrases?
+
+In the sentence `the quick red fox jumped over the lazy brown dog` there are 2 noun phrases: **quick red fox** and **lazy brown dog**.
+
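+A minimal sketch using `TextBlob`'s default extractor (the exact phrases returned depend on the extractor, as discussed later in this lesson):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.noun_phrases)
+# expected: something like ['quick red fox', 'lazy brown dog']
+```
+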
+### Sentiment analysis
+
+A sentence or text can be analyzed for sentiment, or how *positive* or *negative* it is. Sentiment is measured in *polarity* and *objectivity/subjectivity*. Polarity is measured from -1.0 to 1.0 (negative to positive) and objectivity/subjectivity from 0.0 to 1.0 (most objective to most subjective).
+
+✅ Later you'll learn that there are different ways to determine sentiment using machine learning, but one way is to have a list of words and phrases that are categorized as positive or negative by a human expert and apply that model to text to calculate a polarity score. Can you see how this would work well in some circumstances and less well in others?
+
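+A minimal sketch with `TextBlob`, which returns both measures at once:
+
+```python
+from textblob import TextBlob
+
+print(TextBlob("What a wonderful day!").sentiment)
+# e.g. Sentiment(polarity=1.0, subjectivity=1.0)
+```
+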
+### Inflection
+
+Inflection enables you to take a word and get the singular or plural of the word.
+
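+A minimal sketch with `TextBlob`'s `Word`:
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())     # foxes
+print(Word("dogs").singularize())  # dog
+```
+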
+### Lemmatization
+
+A *lemma* is the root or headword for a set of words, for instance *flew*, *flies*, *flying* have the verb *fly* as their lemma.
+
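+A minimal sketch with `TextBlob`'s `Word` (the part-of-speech hint `"v"` asks for the verb lemma; this requires the NLTK WordNet corpus):
+
+```python
+from textblob import Word
+
+for w in ["flew", "flies", "flying"]:
+    print(Word(w).lemmatize("v"))  # fly, fly, fly
+```
+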
+There are also useful databases available for the NLP researcher, notably:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/) is a database of words, synonyms, antonyms and many other details for every word in many different languages. It is incredibly useful when attempting to build translations, spell checkers, or language tools of any type.
+
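+You can query WordNet from Python through NLTK; a minimal sketch (assumes `nltk.download('wordnet')` has been run once):
+
+```python
+from nltk.corpus import wordnet
+
+# list the first few senses of 'fly' with their definitions
+for synset in wordnet.synsets("fly")[:3]:
+    print(synset.name(), "-", synset.definition())
+```
+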
+## NLP Libraries
+
+Luckily, you don't have to build all of these techniques yourself, as there are excellent Python libraries available that make NLP much more accessible to developers who aren't specialized in natural language processing or machine learning. The next lessons include more examples of these, but here you will learn some useful examples to help you with the next task.
+
+### Exercise - using the `TextBlob` library
+
+Let's use a library called TextBlob as it contains helpful APIs for tackling these types of tasks. TextBlob "stands on the giant shoulders of [NLTK](https://nltk.org) and [pattern](https://github.com/clips/pattern), and plays nicely with both." It has a considerable amount of ML embedded in its API.
+
+> Note: A useful [Quick Start](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) guide is available for TextBlob that is recommended for experienced Python developers
+
+When attempting to identify *noun phrases*, TextBlob offers several options of extractors to find noun phrases.
+
+1. Take a look at `ConllExtractor`.
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > What's going on here? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) is "A noun phrase extractor that uses chunk parsing trained with the ConLL-2000 training corpus." ConLL-2000 refers to the 2000 Conference on Computational Natural Language Learning. Each year the conference hosted a workshop to tackle a thorny NLP problem, and in 2000 it was noun chunking. A model was trained on the Wall Street Journal, with "sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens)". You can look at the procedures used [here](https://www.clips.uantwerpen.be/conll2000/chunking/) and the [results](https://ifarm.nl/erikt/research/np-chunking.html).
+
+### Challenge - improving your bot with NLP
+
+In the previous lesson you built a very simple Q&A bot. Now, you'll make Marvin a bit more sympathetic by analyzing your input for sentiment and printing out a response to match the sentiment. You'll also need to identify a `noun_phrase` and ask about it.
+
+Your steps when building a better conversational bot:
+
+1. Print instructions advising the user how to interact with the bot
+2. Start loop
+ 1. Accept user input
+ 2. If the user has asked to exit, then exit
+ 3. Process the user input and determine an appropriate sentiment response
+ 4. If a noun phrase is detected in the sentiment, pluralize it and ask for more input on that topic
+ 5. Print the response
+3. Loop back to step 2
+
+Here's the code snippet for determining sentiment using TextBlob. Note there are only four *gradients* of sentiment response (you could have more if you like):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
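+For step 4 of the plan, here is a hedged sketch of how a detected noun phrase could be pluralized and turned into a follow-up question (the linked solution may do this differently):
+
+```python
+from textblob import TextBlob, Word
+from textblob.np_extractors import ConllExtractor
+
+extractor = ConllExtractor()
+user_input_blob = TextBlob("I went for a walk and saw a lovely cat",
+                           np_extractor=extractor)
+
+if user_input_blob.noun_phrases:
+    # pluralize the last word of the first noun phrase found
+    words = user_input_blob.noun_phrases[0].split()
+    words[-1] = Word(words[-1]).pluralize()
+    followup = "Can you tell me more about " + " ".join(words) + "?"
+else:
+    followup = "Can you tell me more?"
+print(followup)  # e.g. "Can you tell me more about lovely cats?"
+```
+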
+Here is some sample output to guide you (user input is on the lines starting with >):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+One possible solution to the task is [here](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)
+
+✅ Knowledge Check
+
+1. Do you think the sympathetic responses would 'trick' someone into thinking that the bot actually understood them?
+2. Does identifying the noun phrase make the bot more 'believable'?
+3. Why would extracting a 'noun phrase' from a sentence be a useful thing to do?
+
+---
+
+## 🚀Challenge
+
+Take a task in the knowledge check above and try to implement it. Test the bot on a friend. Can it trick them? Can you make your bot more 'believable'?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## Review & Self Study
+
+In the next few lessons you will learn more about sentiment analysis. Research this interesting technique in articles such as these on [KDNuggets](https://www.kdnuggets.com/tag/nlp)
+
+## Assignment
+
+[Make a bot talk back](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/2-Tasks/assignment.md b/translations/it/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..b81820b59
--- /dev/null
+++ b/translations/it/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# Make a Bot talk back
+
+## Instructions
+
+In the past few lessons, you programmed a basic bot with whom to chat. This bot gives random answers until you say 'bye'. Can you make the answers a little less random, and trigger answers if you say specific things, like 'why' or 'how'? Think a bit about how machine learning might make this type of work less manual as you extend your bot. You can use the NLTK or TextBlob libraries to make your tasks easier.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------ | ------------------------------------------------- | ----------------------- |
+| | A new bot.py file is presented and documented | A new bot file is presented but it contains bugs | A file is not presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/3-Translation-Sentiment/README.md b/translations/it/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..c7bcc162b
--- /dev/null
+++ b/translations/it/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# Translation and sentiment analysis with ML
+
+In the previous lessons you learned how to build a basic bot using `TextBlob`, a library that embeds ML behind-the-scenes to perform basic NLP tasks such as noun phrase extraction. Another important challenge in computational linguistics is accurate _translation_ of a sentence from one spoken or written language to another.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+Translation is a very hard problem compounded by the fact that there are thousands of languages, each with very different grammar rules. One approach is to convert the formal grammar rules of one language, such as English, into a non-language-dependent structure, and then translate it by converting back to another language. This approach means that you would take the following steps:
+
+1. **Identification**. Identify or tag the words in the input language as nouns, verbs etc.
+2. **Create translation**. Produce a direct translation of each word in the target language format.
+
+### Example sentence, English to Irish
+
+In 'English', the sentence _I feel happy_ is three words in the order:
+
+- **subject** (I)
+- **verb** (feel)
+- **adjective** (happy)
+
+However, in the 'Irish' language, the same sentence has a very different grammatical structure - emotions like "*happy*" or "*sad*" are expressed as being *upon* you.
+
+The English phrase `I feel happy` in Irish would be `Tá athas orm`. A *literal* translation would be `Happy is upon me`.
+
+An Irish speaker translating to English would say `I feel happy`, not `Happy is upon me`, because they understand the meaning of the sentence, even if the words and sentence structure are different.
+
+The formal order for the sentence in Irish is:
+
+- **verb** (Tá or is)
+- **adjective** (athas, or happy)
+- **subject** (orm, or upon me)
+
+## Translation
+
+A naive translation program might translate words only, ignoring the sentence structure.
+
+✅ If you've learned a second (or third or more) language as an adult, you might have started by thinking in your native language, translating a concept word by word in your head to the second language, and then speaking out your translation. This is similar to what naive translation computer programs do. It's important to get past this phase to attain fluency!
+
+Naive translation leads to bad (and sometimes hilarious) mistranslations: `I feel happy` translates literally to `Mise bhraitheann athas` in Irish. That means (literally) `me feel happy` and is not a valid Irish sentence. Even though English and Irish are languages spoken on two closely neighboring islands, they are very different languages with different grammar structures.
+
+> You can watch some videos about Irish linguistic traditions such as [this one](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### Machine learning approaches
+
+So far, you've learned about the formal rules approach to natural language processing. Another approach is to ignore the meaning of the words, and _instead use machine learning to detect patterns_. This can work in translation if you have lots of text (a *corpus*) or texts (*corpora*) in both the origin and target languages.
+
+For instance, consider the case of *Pride and Prejudice*, a well-known English novel written by Jane Austen in 1813. If you consult the book in English and a human translation of the book in *French*, you could detect phrases in one that are _idiomatically_ translated into the other. You'll do that in a minute.
+
+For instance, when an English phrase such as `I have no money` is translated literally to French, it might become `Je n'ai pas de monnaie`. "Monnaie" is a tricky French 'false cognate', as 'money' and 'monnaie' are not synonymous. A better translation that a human might make would be `Je n'ai pas d'argent`, because it better conveys the meaning that you have no money (rather than 'loose change', which is the meaning of 'monnaie').
+
+
+
+
+> Image by [Jen Looper](https://twitter.com/jenlooper)
+
+If an ML model has enough human translations to build a model on, it can improve the accuracy of translations by identifying common patterns in texts that have been previously translated by expert human speakers of both languages.
+
+### Exercise - translation
+
+You can use `TextBlob` to translate sentences. Try the famous first line of **Pride and Prejudice**:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` does a pretty good job at the translation: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+It can be argued that TextBlob's translation is far more exact, in fact, than the 1932 French translation of the book by V. Leconte and Ch. Pressoir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet égard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+In this case, the translation informed by ML does a better job than the human translator, who unnecessarily puts words in the original author's mouth for 'clarity'.
+
+> What's going on here? and why is TextBlob so good at translation? Well, behind the scenes, it's using Google Translate, a sophisticated AI able to parse millions of phrases to predict the best strings for the task at hand. There's nothing manual going on here and you need an internet connection to use `blob.translate`.
+
+✅ Try some more sentences. Which is better, ML or human translation? In which cases?
+
+## Sentiment analysis
+
+Another area where machine learning can work very well is sentiment analysis. A non-ML approach to sentiment is to identify words and phrases which are 'positive' and 'negative'. Then, given a new piece of text, calculate the total value of the positive, negative and neutral words to identify the overall sentiment.
+
+This approach is easily tricked as you may have seen in the Marvin task - the sentence `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` is a sarcastic, negative sentiment sentence, but the simple algorithm detects 'great', 'wonderful', 'glad' as positive and 'waste', 'lost' and 'dark' as negative. The overall sentiment is swayed by these conflicting words.
+
+✅ Stop a second and think about how we convey sarcasm as human speakers. Tone inflection plays a large role. Try to say the phrase "Well, that film was awesome" in different ways to discover how your voice conveys meaning.
+
+### ML approaches
+
+The ML approach would be to manually gather negative and positive bodies of text - tweets, or movie reviews, or anything where the human has given a score *and* a written opinion. Then NLP techniques can be applied to opinions and scores, so that patterns emerge (e.g., positive movie reviews tend to contain the phrase 'Oscar worthy' more than negative movie reviews, or positive restaurant reviews say 'gourmet' much more than 'disgusting').
+
+> ⚖️ **Example**: If you worked in a politician's office and there was some new law being debated, constituents might write to the office with emails supporting or opposing the particular new law. Let's say you were tasked with reading the emails and sorting them into 2 piles, *for* and *against*. If there were a lot of emails, you might be overwhelmed attempting to read them all. Wouldn't it be nice if a bot could read all of them for you, understand them and tell you in which pile each email belonged?
+>
+> One way to achieve that is to use Machine Learning. You would train the model with a portion of the *against* emails and a portion of the *for* emails. The model would tend to associate phrases and words with the against side and the for side, *but it would not understand any of the content*, only that certain words and patterns were more likely to appear in an *against* or a *for* email. You could test it with some emails that you had not used to train the model, and see if it came to the same conclusion as you did. Then, once you were happy with the accuracy of the model, you could process future emails without having to read each one.
+
+✅ Does this process sound like processes you have used in previous lessons?
+
+## Exercise - sentimental sentences
+
+Sentiment is measured with a *polarity* of -1 to 1, where -1 is the most negative sentiment and 1 is the most positive. Sentiment is also measured with a 0 - 1 score for objectivity (0) and subjectivity (1).
+
+Take another look at Jane Austen's *Pride and Prejudice*. The text is available here at [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm). The sample below shows a short program which analyses the sentiment of the first and last sentences of the book and displays their sentiment polarity and subjectivity/objectivity score.
+
+You should use the `TextBlob` library (described above) to determine `sentiment` (you do not have to write your own sentiment calculator) in the following task.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+You see the following output:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Challenge - check sentiment polarity
+
+Your task is to determine, using sentiment polarity, whether *Pride and Prejudice* has more absolutely positive sentences than absolutely negative ones. For this task, you may assume that a polarity score of 1 or -1 is absolutely positive or negative respectively.
+
+**Steps:**
+
+1. Download a [copy of Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) from Project Gutenberg as a .txt file. Remove the metadata at the start and end of the file, leaving only the original text
+2. Open the file in Python and extract the contents as a string
+3. Create a TextBlob using the book string
+4. Analyze each sentence in the book in a loop
+ 1. If the polarity is 1 or -1, store the sentence in an array or list of positive or negative messages
+5. At the end, print out all the positive sentences and negative sentences (separately) and the number of each.
+
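+A hedged sketch of these steps (assuming you saved the cleaned text as `pride_and_prejudice.txt`; the linked solution may differ):
+
+```python
+from textblob import TextBlob
+
+with open("pride_and_prejudice.txt", encoding="utf-8") as f:
+    book = f.read()
+
+blob = TextBlob(book)
+positive, negative = [], []
+
+# keep only the sentences with an absolute polarity score
+for sentence in blob.sentences:
+    if sentence.polarity == 1:
+        positive.append(sentence)
+    elif sentence.polarity == -1:
+        negative.append(sentence)
+
+print("Absolutely positive sentences:", len(positive))
+print("Absolutely negative sentences:", len(negative))
+```
+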
+Here is a sample [solution](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
+
+✅ Knowledge Check
+
+1. The sentiment is based on words used in the sentence, but does the code *understand* the words?
+2. Do you think the sentiment polarity is accurate, or in other words, do you *agree* with the scores?
+ 1. In particular, do you agree or disagree with the absolute **positive** polarity of the following sentences?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. The next 3 sentences were scored with an absolute positive sentiment, but on close reading, they are not positive sentences. Why did the sentiment analysis think they were positive sentences?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. Do you agree or disagree with the absolute **negative** polarity of the following sentences?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ Any Jane Austen aficionado will understand that she often uses her books to critique the more ridiculous aspects of English Regency society. Elizabeth Bennett, the main character in *Pride and Prejudice*, is a keen social observer (like the author) and her language is often heavily nuanced. Even Mr. Darcy (the love interest in the story) notes Elizabeth's playful and teasing use of language: "I have had the pleasure of your acquaintance long enough to know that you find great enjoyment in occasionally professing opinions which in fact are not your own."
+
+---
+
+## 🚀Challenge
+
+Can you make Marvin even better by extracting other features from the user input?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Review & Self Study
+
+There are many ways to extract sentiment from text. Think of the business applications that might make use of this technique. Think about how it can go awry. Read more about sophisticated enterprise-ready systems that analyze sentiment such as [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Test some of the Pride and Prejudice sentences above and see if it can detect nuance.
+
+## Assignment
+
+[Poetic license](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/3-Translation-Sentiment/assignment.md b/translations/it/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..921abff6e
--- /dev/null
+++ b/translations/it/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# Poetic license
+
+## Instructions
+
+In [this notebook](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency) you can find over 500 Emily Dickinson poems previously analyzed for sentiment using Azure text analytics. Using this dataset, analyze it using the techniques described in the lesson. Does the suggested sentiment of a poem match the more sophisticated Azure service's decision? Why or why not, in your opinion? Does anything surprise you?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------- | ------------------------------------------------------- | ------------------------ |
+| | A notebook is presented with a solid analysis of a sample of the author's work | The notebook is incomplete or does not perform the analysis | No notebook is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/it/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..572966368
--- /dev/null
+++ b/translations/it/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/it/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..646e38fab
--- /dev/null
+++ b/translations/it/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/4-Hotel-Reviews-1/README.md b/translations/it/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..74a65139b
--- /dev/null
+++ b/translations/it/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,294 @@
+# Sentiment analysis with hotel reviews - processing the data
+
+In this section you will use the techniques from the previous lessons to do some exploratory data analysis of a large dataset. Once you have a good understanding of the usefulness of the various columns, you will learn:
+
+- how to remove the unnecessary columns
+- how to calculate some new data based on the existing columns
+- how to save the resulting dataset for use in the final challenge
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Introduction
+
+So far you've learned how text data is quite unlike numerical types of data. If it's text that was written or spoken by a human, it can be analyzed to find patterns and frequencies, sentiment and meaning. This lesson takes you into a real dataset with a real challenge: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**, which includes a [CC0: Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/). It was scraped from Booking.com from public sources. The creator of the dataset is Jiashen Liu.
+
+### Preparation
+
+You will need:
+
+* The ability to run .ipynb notebooks using Python 3
+* pandas
+* NLTK, [which you should install locally](https://www.nltk.org/install.html)
+* The dataset, which is available on Kaggle: [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). It is around 230 MB unzipped. Download it to the root `/data` folder associated with these NLP lessons.
+
+## Exploratory data analysis
+
+This challenge assumes that you are building a hotel recommendation bot using sentiment analysis and guest review scores. The dataset you will be using includes reviews of 1493 different hotels in 6 cities.
+
+Using Python, a dataset of hotel reviews, and NLTK's sentiment analysis you could find out:
+
+* What are the most frequently used words and phrases in reviews?
+* Do the official *tags* describing a hotel correlate with review scores (e.g. are there more negative reviews for a particular hotel from *Families with young children* than from *Solo travellers*, perhaps indicating it is better for *Solo travellers*)?
+* Do the NLTK sentiment scores 'agree' with the hotel reviewer's numerical score?
+
+#### Dataset
+
+Let's explore the dataset that you've downloaded and saved locally. Open the file in an editor like VS Code or even Excel.
+
+The headers in the dataset are as follows:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+Here they are grouped in a way that might be easier to examine:
+##### Hotel columns
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (latitude), `lng` (longitude)
+ * Using *lat* and *lng* you could plot a map with Python showing the hotel locations (perhaps color coded for negative and positive reviews)
+ * Hotel_Address is not obviously useful to us, and we'll probably replace it with a country for easier sorting & searching
+
+**Hotel Meta-review columns**
+
+* `Average_Score`
+ * According to the dataset creator, this column is the *Average Score of the hotel, calculated based on the latest comment in the last year*. This seems like an unusual way to calculate the score, but it is the data scraped, so we may take it at face value for now.
+
+ ✅ Based on the other columns in this data, can you think of another way to calculate the average score?
+
+* `Total_Number_of_Reviews`
+ * The total number of reviews this hotel has received - it is not clear (without writing some code) if this refers to the reviews in the dataset.
+* `Additional_Number_of_Scoring`
+ * This means a review score was given but no positive or negative review was written by the reviewer
+
+**Review columns**
+
+- `Reviewer_Score`
+ - This is a numerical value with at most 1 decimal place between the min and max values of 2.5 and 10
+ - It is not explained why 2.5 is the lowest possible score
+- `Negative_Review`
+ - If a reviewer wrote nothing, this field will have "**No Negative**"
+ - Note that a reviewer may write a positive review in the Negative review column (e.g. "there is nothing bad about this hotel")
+- `Review_Total_Negative_Word_Counts`
+ - Higher negative word counts indicate a lower score (without checking the sentimentality)
+- `Positive_Review`
+ - If a reviewer wrote nothing, this field will have "**No Positive**"
+ - Note that a reviewer may write a negative review in the Positive review column (e.g. "there is nothing good about this hotel at all")
+- `Review_Total_Positive_Word_Counts`
+ - Higher positive word counts indicate a higher score (without checking the sentimentality)
+- `Review_Date` and `days_since_review`
+ - A freshness or staleness measure might be applied to a review (older reviews might not be as accurate as newer ones because hotel management has changed, renovations have been done, a pool was added, etc.)
+- `Tags`
+ - These are short descriptors that a reviewer may select to describe the type of guest they were (e.g. solo or family), the type of room they had, the length of stay and how the review was submitted.
+ - Unfortunately, using these tags is problematic; check the section below which discusses their usefulness
+
+**Reviewer columns**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - This might be a factor in a recommendation model, for instance, if you could determine that more prolific reviewers with hundreds of reviews were more likely to be negative rather than positive. However, the reviewer of any particular review is not identified with a unique code, and therefore cannot be linked to a set of reviews. There are 30 reviewers with 100 or more reviews, but it's hard to see how this can aid the recommendation model.
+- `Reviewer_Nationality`
+ - Some people might think that certain nationalities are more likely to give a positive or negative review because of a national inclination. Be careful about building such anecdotal views into your models. These are national (and sometimes racial) stereotypes, and each reviewer was an individual who wrote a review based on their experience. It may have been filtered through many lenses such as their previous hotel stays, the distance travelled, and their personal temperament. Thinking that their nationality was the reason for a review score is hard to justify.
+##### Examples
+
+| Average Score | Total Number Reviews | Reviewer Score | Negative Review | Positive Review | Tags |
+| ------------- | -------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------- | --------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day with jackhammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave very early and received an appropriate invoice A day later the hotel made another charge without my consent in excess of the booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place Stay away | Business trip Couple Standard Double Room Stayed 2 nights |
+
+As you can see, this guest did not have a happy stay at this hotel. The hotel has a good average score of 7.8 and 1945 reviews, but this reviewer gave it 2.5 and wrote 115 words about how negative their stay was. If they had written nothing at all in the Positive_Review column, you might surmise there was nothing positive, but alas they wrote 7 words of warning. If we just counted words instead of the meaning, or sentiment of the words, we might have a skewed view of the reviewer's intent. Strangely, their score of 2.5 is confusing, because if that hotel stay was so bad, why give it any points at all? Investigating the dataset closely, you'll see that the lowest possible score is 2.5, not 0. The highest possible score is 10.
+
+##### Tags
+
+As mentioned above, at first glance, the idea of using `Tags` to categorize the data makes sense. Unfortunately these tags are not standardized, which means that in a given hotel, the options might be *Single room*, *Twin room*, and *Double room*, but in the next hotel, they are *Deluxe Single Room*, *Classic Queen Room*, and *Executive King Room*. These might be the same things, but there are so many variations that the choice becomes:
+
+1. Attempt to change all terms to a single standard, which is very difficult, because it is not clear what the conversion path would be in each case (e.g. *Classic single room* maps to *Single room* but *Superior Queen Room with Courtyard Garden or City View* is much harder to map)
+
+1. We can take an NLP approach and measure the frequency of certain terms like *Solo*, *Business Traveller*, or *Family with young kids* as they apply to each hotel, and factor that into the recommendation
+
+Tags are usually (but not always) a single field containing a list of 5 to 6 comma separated values aligning to *Type of trip*, *Type of guests*, *Type of room*, *Number of nights*, and *Type of device the review was submitted on*. However, because some reviewers don't fill in each field (they might leave one blank), the values are not always in the same order.
+
+As an example, take *Type of group*. There are 1025 unique possibilities in this field in the `Tags` column, and unfortunately only some of them refer to a group (some are the type of room etc.). If you filter only the ones that mention family, the results contain many *Family room* type results. If you include the term *with*, i.e. count the *Family with* values, the results are better, with over 80,000 of the 515,000 results containing the phrase "Family with young children" or "Family with older children". A sketch of this counting approach follows below.
+
+This means the tags column is not completely useless to us, but it will take some work to make it useful.
+
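+A hedged sketch of the frequency approach from option 2 above, assuming the dataframe is loaded as in the exercise later in this lesson:
+
+```python
+import pandas as pd
+
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+
+# count reviews whose Tags value mentions a family travelling with children
+family = df[df.Tags.str.contains("Family with young children|Family with older children")]
+print(len(family))  # per the text above, 80,000+ of the 515,738 rows
+```
+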
+##### Average hotel score
+
+There are a number of oddities or discrepancies with the dataset that I can't figure out, but they are illustrated here so you are aware of them when building your models. If you figure them out, please let us know in the discussion section!
+
+The dataset has the following columns relating to the average score and number of reviews:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+The single hotel with the most reviews in this dataset is *Britannia International Hotel Canary Wharf* with 4789 reviews out of 515,000. But if we look at the `Total_Number_of_Reviews` value for this hotel, it is 9086. You might surmise that there are many more scores without reviews, so perhaps we should add in the `Additional_Number_of_Scoring` column value. That value is 2682, and adding it to 4789 gets us 7,471 which is still 1615 short of `Total_Number_of_Reviews`.
+
+If you take the `Average_Score` column, you might surmise it is the average of the reviews in the dataset, but the description from Kaggle is "*Average Score of the hotel, calculated based on the latest comment in the last year*". That doesn't seem very useful, but we can calculate our own average based on the review scores in the dataset. Using the same hotel as an example, the average hotel score is given as 7.1 but the calculated score (average reviewer score *in* the dataset) is 6.8. This is close, but not the same value, and we can only guess that the scores given in the `Additional_Number_of_Scoring` reviews increased the average to 7.1. Unfortunately, with no way to test or prove that assertion, it is difficult to use or trust `Average_Score`, `Additional_Number_of_Scoring` and `Total_Number_of_Reviews` when they are based on, or refer to, data we do not have. A sketch of this comparison follows below.
+
+To complicate things further, the hotel with the second highest number of reviews has a calculated average score of 8.12 and the dataset `Average_Score` is 8.1. Is this correct score a coincidence, or is the first hotel a discrepancy?
+
+On the possibility that these hotels might be outliers, and that maybe most of the values tally up (but some do not for some reason), we will write a short program next to explore the values in the dataset and determine the correct usage (or non-usage) of the values.
+
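+A hedged sketch of how you might start checking this yourself, assuming `df` has been loaded as in the exercise below:
+
+```python
+# compare the dataset's Average_Score with a mean calculated from
+# the Reviewer_Score values that are actually in the dataset
+calc_mean = df.groupby("Hotel_Name")["Reviewer_Score"].mean().round(1)
+hotel = "Britannia International Hotel Canary Wharf"
+print(df.loc[df.Hotel_Name == hotel, "Average_Score"].iloc[0])  # 7.1 per the text above
+print(calc_mean[hotel])                                         # about 6.8 per the text above
+```
+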
+> 🚨 A note of caution
+>
+> When working with this dataset you will write code that calculates something from the text without having to read or analyze the text yourself. This is the essence of NLP, interpreting meaning or sentiment without having a human do it. However, it is possible that you will read some of the negative reviews. I would urge you not to, because you don't have to. Some of them are silly, or irrelevant negative hotel reviews, such as "The weather wasn't great", something beyond the control of the hotel, or indeed, anyone. But there is a dark side to some reviews too. Sometimes the negative reviews are racist, sexist, or ageist. This is unfortunate but to be expected in a dataset scraped from a public website. Some reviewers leave reviews that you would find distasteful, uncomfortable, or upsetting. Better to let the code measure the sentiment than read them yourself and be upset. That said, it is a minority that write such things, but they exist all the same.
+
+## Exercise - Data exploration
+### Load the data
+
+That's enough examining the data visually, now you'll write some code and get some answers! This section uses the pandas library. Your very first task is to ensure you can load and read the CSV data. The pandas library has a fast CSV loader, and the result is placed in a dataframe, as in previous lessons. The CSV we are loading has over half a million rows, but only 17 columns. Pandas gives you lots of powerful ways to interact with a dataframe, including the ability to perform operations on every row.
+
+From here on in this lesson, there will be code snippets, some explanations of the code, and some discussion about what the results mean. Use the included _notebook.ipynb_ for your code.
+
+Let's start with loading the data file you will be using:
+
+```python
+# Load the hotel reviews from CSV
+import pandas as pd
+import time
+# importing time so the start and end time can be used to calculate file loading time
+print("Loading data file now, this could take a while depending on file size")
+start = time.time()
+# df is 'DataFrame' - make sure you downloaded the file to the data folder
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```
+
+Now that the data is loaded, we can perform some operations on it. Keep this code at the top of your program for the next part.
+
+## Explore the data
+
+In this case, the data is already *clean*: that means it is ready to work with and does not contain characters in other languages that might trip up algorithms expecting only English characters.
+
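+If you'd rather verify that than take it on trust, one quick check (a sketch, assuming the dataframe `df` loaded above) is to scan a review column for characters outside the ASCII range:
+
+```python
+# Count reviews containing any non-ASCII character
+non_ascii = df["Negative_Review"].str.contains(r"[^\x00-\x7F]", regex=True)
+print("Reviews with non-ASCII characters:", non_ascii.sum())
+```
+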
+✅ You may have to work with data that requires some initial processing to format it before applying NLP techniques, but not this time. If you had to, how would you handle non-English characters?
+
+Treat the following questions as coding tasks, and try to answer them without looking at the solutions below:
+
+1. Print out the *shape* of the dataframe you have just loaded (the shape is the number of rows and columns)
+2. Calculate the frequency count for reviewer nationalities:
+   1. How many distinct values are there for the column `Reviewer_Nationality` and what are they?
+   2. What reviewer nationality is the most common in the dataset (print the country and the number of reviews)?
+   3. What are the next 10 most frequently found nationalities, and their frequency count?
+3. What was the most frequently reviewed hotel for each of the top 10 reviewer nationalities?
+4. How many reviews are there per hotel (frequency count of hotel) in the dataset?
+5. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average.
+6. Do any hotels have the same (rounded to 1 decimal place) `Average_Score` and `Calc_Average_Score`?
+7. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** column `Negative_Review` values of "No Negative"
+
+### Code answers
+
+1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns)
+
+    ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+    ```
+
+2. Calculate the frequency count for reviewer nationalities:
+
+   1. How many distinct values are there for the column `Reviewer_Nationality` and what are they?
+   2. What reviewer nationality is the most common in the dataset (print the country and the number of reviews)?
+
+    ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+    ```
+
+   3. What are the next 10 most frequently found nationalities, and their frequency count?
+
+    ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+    ```
+
+3. What was the most frequently reviewed hotel for each of the top 10 reviewer nationalities?
+
+    ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+    ```
+
+4. How many reviews are there per hotel (frequency count of hotel) in the dataset?
+
+    ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+    ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   You may notice that the results *counted in the dataset* do not match the value in `Total_Number_of_Reviews`. It is unclear whether this value in the dataset represented the total number of reviews the hotel had, of which not all were scraped, or some other calculation. `Total_Number_of_Reviews` is not used in the model because of this unclarity.
+
+5. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average. Print out the columns `Hotel_Name`, `Average_Score`, and `Calc_Average_Score`.
+
+    ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+    ```
+
+   You may also wonder about the `Average_Score` value, and why it is sometimes different from the calculated average score. As we can't know why some of the values match but others differ, it's safest in this case to use the review scores that we have to calculate the average ourselves. That said, the differences are usually very small; here are the hotels with the greatest deviation between the dataset average and the calculated average:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   With only 1 hotel having a score difference greater than 1, it means we can probably ignore the difference and use the calculated average score.
+
+6. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+
+7. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** column `Negative_Review` values of "No Negative"
+
+    ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+    ```
+
+## Another way
+
+Another way to count the items without lambdas is to use sum to count the rows:
+
+    ```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+    ```
+
+You may have noticed that there are 127 rows with both "No Negative" and "No Positive" values for the columns `Negative_Review` and `Positive_Review` respectively. That means that the reviewer gave the hotel a numerical score, but declined to write either a positive or a negative review. Luckily this is a small number of rows (127 out of 515738, or 0.02%), so it probably won't skew our model or results in any particular direction, but you might not have expected a dataset of reviews to have rows with no reviews, so it's worth exploring the data to discover rows like this.
+
+Now that you've explored the dataset, in the next lesson you'll filter the data and add some sentiment analysis.
+
+---
+
+## 🚀Challenge
+
+This lesson demonstrates, as we saw in previous lessons, how critically important it is to understand your data and its foibles before performing operations on it. Text-based data in particular needs careful scrutiny. Dig through various text-heavy datasets and see if you can discover areas that could introduce bias or skewed sentiment into a model.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Review & Self Study
+
+Take [this learning path on NLP](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) to discover tools to try when building speech- and text-heavy models.
+
+## Assignment
+
+[NLTK](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/it/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..7153e692d
--- /dev/null
+++ b/translations/it/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instructions
+
+NLTK is a well-known library for use in computational linguistics and NLP. Take this opportunity to read through the '[NLTK book](https://www.nltk.org/book/)' and try its exercises. In this ungraded assignment, you will get to know this library more deeply.
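+
+As a taste of what the book covers early on, here is a minimal sketch of tokenizing a sentence and counting word frequencies with NLTK (the sample sentence is just an illustration):
+
+```python
+import nltk
+from nltk.tokenize import word_tokenize
+from nltk.probability import FreqDist
+
+nltk.download('punkt')  # tokenizer models, downloaded once
+
+text = "NLTK is a leading platform for building Python programs to work with human language data."
+tokens = word_tokenize(text.lower())
+
+# Count how often each token appears
+fdist = FreqDist(tokens)
+print(fdist.most_common(5))
+```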
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/it/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..11af8dcf2
--- /dev/null
+++ b/translations/it/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/it/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..d3081994b
--- /dev/null
+++ b/translations/it/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/5-Hotel-Reviews-2/README.md b/translations/it/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..93ae56a0e
--- /dev/null
+++ b/translations/it/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Sentiment analysis with hotel reviews
+
+Now that you've explored the dataset in detail, it's time to filter the columns and then use NLP techniques on the dataset to gain new insights about the hotels.
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Filtering & Sentiment Analysis Operations
+
+As you've probably noticed, the dataset has a few issues. Some columns are filled with useless information, others seem incorrect. Even if they are correct, it's unclear how they were calculated, and the answers cannot be independently verified by your own calculations.
+
+## Exercise: a bit more data processing
+
+Clean the data just a bit more. Add columns that will be useful later, change the values in other columns, and drop certain columns completely.
+
+1. Initial column processing
+
+   1. Drop `lat` and `lng`
+
+   2. Replace the `Hotel_Address` values with the following values (if the address contains the name of the city and the country, change it to just the city and the country).
+
+      These are the only cities and countries in the dataset:
+
+ Amsterdam, Netherlands
+
+ Barcelona, Spain
+
+ London, United Kingdom
+
+ Milan, Italy
+
+ Paris, France
+
+ Vienna, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+   Now you can query country-level data:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | Hotel_Address | Hotel_Name |
+ | :--------------------- | :--------: |
+ | Amsterdam, Netherlands | 105 |
+ | Barcelona, Spain | 211 |
+ | London, United Kingdom | 400 |
+ | Milan, Italy | 162 |
+ | Paris, France | 458 |
+ | Vienna, Austria | 158 |
+
+2. Process the Hotel Meta-review columns
+
+   1. Drop `Additional_Number_of_Scoring`
+
+   2. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+   3. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+    # Select a single column before transform so that a Series (not a whole DataFrame) is assigned
+    df.Total_Number_of_Reviews = df.groupby('Hotel_Name')['Reviewer_Score'].transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. Process the review columns
+
+   1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+   2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are
+
+   3. Keep `Tags` for now
+
+      - We'll be doing some additional filtering operations on the tags in the next section, and then the tags will be dropped
+
+4. Process reviewer columns
+
+   1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+   2. Keep `Reviewer_Nationality`
+
+### Tag columns
+
+The `Tag` column is problematic, as it is a list (in text form) stored in the column. Unfortunately the order and number of sub sections in this column are not always the same. It's hard for a human to identify the correct phrases to be interested in, because there are 515,000 rows and 1427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines: you can scan the text, find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but in multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6762646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful: because you've seen a sample of the tags such as `[' Business trip  ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is - but first you need to follow a few steps to ascertain the tags of interest.
+
+### Filtering tags
+
+Remember that the goal of the dataset is to add sentiment and columns that will help you choose the best hotel (for yourself, or maybe a client tasking you to make a hotel recommendation bot). You need to ask yourself if the tags are useful or not in the final dataset. Here is one interpretation (if you needed the dataset for other reasons, different tags might stay in/out of the selection):
+
+1. The type of trip is relevant, and should stay
+2. The type of guest group is important, and should stay
+3. The type of room, suite, or studio that the guest stayed in is irrelevant (all hotels have basically the same rooms)
+4. The device the review was submitted on is irrelevant
+5. The number of nights the reviewer stayed *could* be relevant if you attributed longer stays to them liking the hotel more, but it's a stretch, and probably irrelevant
+
+In summary, **keep 2 kinds of tags and remove the others**.
+
+First, you don't want to count the tags until they are in a better format, so that means removing the square brackets and quotes. You can do this several ways, but you want the fastest, as processing a lot of data can take a long time. Luckily, pandas has an easy way to do each of these steps.
+
+```python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Each tag becomes something like: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 tags, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in a different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to your advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted into the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column. Printing that out, you'll see there were 2428 unique tags. Here is a small sample:
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
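+
+A minimal sketch of the approach just described (assuming the cleaned `Tags` column from the previous step, with comma-separated tags) could look like this:
+
+```python
+# Split the comma-separated tags into temporary columns,
+# one per position in the original tag list
+tag_columns = df.Tags.str.split(",", expand=True)
+
+# Stack the columns into one long Series (stack() drops the empty cells),
+# strip stray whitespace, then count each unique tag
+tag_counts = tag_columns.stack().str.strip().value_counts()
+print(tag_counts.head(20))
+```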
+
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be a smart thing to remove them before counting phrase occurrence, but it is such a fast operation you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1; it slightly reduces the total number of tags to be considered. Note that you do not remove them from the dataset; you just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There is a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration.
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is more or less the same as `Group`, and it would be fair to combine the two as above. The code for identifying the correct tags is in [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1, otherwise add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business versus leisure, or to bring a pet to, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### Save your file
+
+Finally, save the dataset as it is now with a new name.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Sentiment Analysis Operations
+
+In this final section, you will apply sentiment analysis to the review columns and save the results in a dataset.
+
+## Exercise: load and save the filtered data
+
+Note that now you are loading the filtered dataset that was saved in the previous section, **not** the original dataset.
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Removing stop words
+
+If you were to run sentiment analysis on the negative and positive review columns, it could take a long time. Tested on a powerful test laptop with a fast CPU, it took 12 - 14 minutes depending on which sentiment library was used. That's a (relatively) long time, so it's worth investigating whether it can be sped up.
+
+Removing stop words, or common English words that do not change the sentiment of a sentence, is the first step. By removing them, the sentiment analysis should run faster without being less accurate (since the stop words do not affect the sentiment, but they do slow down the analysis).
+
+The longest negative review was 395 words, but after removing the stop words, it is 195 words.
+
+Removing the stop words is also a fast operation: removing them from 2 review columns over 515,000 rows took 3.3 seconds on the test device. It could take slightly more or less time for you depending on your device's CPU speed, RAM, whether you have an SSD or not, and some other factors. The relative shortness of the operation means that if it improves the sentiment analysis time, then it is worth doing.
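+
+If you want to reproduce those measurements on your own machine, a small sketch (assuming the dataframe `df` is loaded, and using the same NLTK stop word list as the block below) would be:
+
+```python
+from nltk.corpus import stopwords
+
+cache = set(stopwords.words("english"))
+
+def word_count_without_stopwords(review):
+    # Count the words left after dropping stop words
+    return len([word for word in review.split() if word not in cache])
+
+print("Longest negative review:", df.Negative_Review.str.split().str.len().max(), "words")
+print("After removing stop words:", df.Negative_Review.apply(word_count_without_stopwords).max(), "words")
+```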
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+```
+
+### Performing sentiment analysis
+
+Now you should calculate the sentiment analysis for both the negative and positive review columns, and store the result in 2 new columns. The test of the sentiment will be to compare it to the reviewer's score for the same review. For instance, if the sentiment thinks the negative review had a sentiment of 1 (extremely positive sentiment) and a positive review sentiment of 1, but the reviewer gave the hotel the lowest score possible, then either the review text doesn't match the score, or the sentiment analyser could not recognize the sentiment correctly. You should expect some sentiment scores to be completely wrong, and often that will be explainable, e.g. the review could be extremely sarcastic "Of course I LOVED sleeping in a room with no heating" and the sentiment analyser thinks that's positive sentiment, even though a human reading it would know it was sarcasm.
+
+NLTK supplies different sentiment analyzers to learn with, and you can substitute them and see if the sentiment is more or less accurate. The VADER sentiment analysis is used here.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+Later in your program, when you are ready to calculate sentiment, you can apply it to each review as follows:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+This takes approximately 120 seconds on my computer, but it will vary on each computer. If you want to print out the results and see if the sentiment matches the review:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
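+
+To spot the mismatches described above programmatically, one sketch is to look for rows where the 'negative' review text scored as clearly positive while the reviewer score was low (the 0.5 and 4 thresholds here are arbitrary illustrative choices, not part of the lesson):
+
+```python
+# Flag rows where the negative review text reads as positive
+# even though the reviewer gave the hotel a low score
+suspicious = df[(df.Negative_Sentiment > 0.5) & (df.Reviewer_Score < 4)]
+print(suspicious[["Negative_Review", "Negative_Sentiment", "Reviewer_Score"]].head())
+```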
+
+The very last thing to do with the file before using it in the challenge is to save it! You should also consider reordering all your new columns so they are easy to work with (for a human, it's a cosmetic change).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+You should run the entire code for [the analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) (after you've run [your filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+
+To review, the steps are:
+
+1. The original dataset file **Hotel_Reviews.csv** was explored in the previous lesson with [the explorer notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb), resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb), resulting in **Hotel_Reviews_NLP.csv**
+4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
+
+### Conclusion
+
+When you started, you had a dataset with columns and data, but not all of it could be verified or used. You've explored the data, filtered out what you don't need, converted tags into something useful, calculated your own averages, added some sentiment columns, and hopefully learned some interesting things about processing natural text.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Challenge
+
+Now that you have your dataset analyzed for sentiment, see if you can use the strategies you've learned in this curriculum (clustering, perhaps?) to determine patterns around sentiment.
+
+## Review & Self Study
+
+Take [this Learn module](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) to learn more and use different tools to explore sentiment in text.
+
+## Assignment
+
+[Try a different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/it/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..e725ffbaa
--- /dev/null
+++ b/translations/it/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Try a different dataset
+
+## Instructions
+
+Now that you've learned how to use NLTK to assign sentiment to text, try a different dataset. You'll probably need to do some data processing around it, so create a notebook and document your thought process. What do you discover?
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                          | Adequate                                   | Needs Improvement       |
+| -------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------ | ----------------------- |
+|          | A complete notebook and dataset are presented, with well-documented cells explaining how the sentiment is assigned | The notebook is missing good explanations  | The notebook is flawed  |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/it/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..0fccd2faa
--- /dev/null
+++ b/translations/it/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/it/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..b1ba65971
--- /dev/null
+++ b/translations/it/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/README.md b/translations/it/6-NLP/README.md
new file mode 100644
index 000000000..b409e9635
--- /dev/null
+++ b/translations/it/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Introduction to natural language processing
+
+Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written, referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics. The whole field is directed at helping machines understand and process human language. This can then be used to perform tasks like spell check or machine translation. It has a variety of real-world applications in a number of fields, including medical research, search engines, and business intelligence.
+
+## Regional topic: European languages and literature and romantic hotels of Europe ❤️
+
+In this section of the curriculum, you will be introduced to one of the most widespread uses of machine learning: natural language processing (NLP). Derived from computational linguistics, this category of artificial intelligence is the bridge between humans and machines via voice or textual communication.
+
+In these lessons we'll learn the basics of NLP by building small conversational bots to learn how machine learning aids in making these conversations more and more 'smart'. You'll travel back in time, chatting with Elizabeth Bennett and Mr. Darcy from Jane Austen's classic novel, **Pride and Prejudice**, published in 1813. Then, you'll further your knowledge by learning about sentiment analysis via hotel reviews in Europe.
+
+
+> Photo by Elaine Howlin on Unsplash
+
+## Lessons
+
+1. [Introduction to natural language processing](1-Introduction-to-NLP/README.md)
+2. [Common NLP tasks and techniques](2-Tasks/README.md)
+3. [Translation and sentiment analysis with machine learning](3-Translation-Sentiment/README.md)
+4. [Preparing your data](4-Hotel-Reviews-1/README.md)
+5. [NLTK for sentiment analysis](5-Hotel-Reviews-2/README.md)
+
+## Credits
+
+These natural language processing lessons were written with ☕ by [Stephen Howell](https://twitter.com/Howell_MSFT)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/6-NLP/data/README.md b/translations/it/6-NLP/data/README.md
new file mode 100644
index 000000000..d5833f295
--- /dev/null
+++ b/translations/it/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/1-Introduction/README.md b/translations/it/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..fdcaeaba3
--- /dev/null
+++ b/translations/it/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Introduction to time series forecasting
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this lesson and the following one, you will learn a bit about time series forecasting, an interesting and valuable part of a ML scientist's repertoire that is a bit less well known than other topics. Time series forecasting is a sort of 'crystal ball': based on the past performance of a variable such as price, you can predict its future potential value.
+
+[Introduction to time series forecasting](https://youtu.be/cBojo1hsHiI "Introduction to time series forecasting")
+
+> 🎥 Click the link above for a video about time series forecasting
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+It is a useful and interesting field with real value to business, given its direct application to problems of pricing, inventory, and supply chain issues. While deep learning techniques have started to be used to gain more insights and better predict future performance, time series forecasting remains a field greatly informed by classic ML techniques.
+
+> Penn State's useful time series curriculum can be found [here](https://online.stat.psu.edu/stat510/lesson/1)
+
+## Introduction
+
+Suppose you maintain an array of smart parking meters that provide data about how often they are used and for how long over time.
+
+> What if you could predict, based on the meter's past performance, its future value according to the laws of supply and demand?
+
+Accurately predicting when to act so as to achieve your goal is a challenge that could be tackled by time series forecasting. It wouldn't make folks happy to be charged more in busy times when they're looking for a parking spot, but it would be a sure way to generate revenue to clean the streets!
+
+Let's explore some of the types of time series algorithms and start a notebook to clean and prepare some data. The data you will analyze is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. Given the historical patterns of electricity load and temperature, you can predict future values of electricity load.
+
+In this example, you'll learn how to forecast one time step ahead, using historical load data only. Before starting, however, it's useful to understand what's going on behind the scenes.
+
+## Some definitions
+
+When encountering the term 'time series' you need to understand its use in several different contexts.
+
+🎓 **Time series**
+
+In mathematics, "a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time." An example of a time series is the daily closing value of the [Dow Jones Industrial Average](https://wikipedia.org/wiki/Time_series). The use of time series plots and statistical modeling is frequently encountered in signal processing, weather forecasting, earthquake prediction, and other fields where events occur and data points can be plotted over time.
+
+🎓 **Time series analysis**
+
+Time series analysis is the analysis of the above mentioned time series data. Time series data can take distinct forms, including 'interrupted time series' which detect patterns in a time series' evolution before and after an interrupting event. The type of analysis needed for the time series depends on the nature of the data. Time series data itself can take the form of series of numbers or characters.
+
+The analysis to be performed uses a variety of methods, including frequency-domain and time-domain, linear and nonlinear, and more. [Learn more](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm) about the many ways to analyze this type of data.
+
+🎓 **Time series forecasting**
+
+Time series forecasting is the use of a model to predict future values based on patterns displayed by previously gathered data as it occurred in the past. While it is possible to use regression models to explore time series data, with time indices as x variables on a plot, such data is best analyzed using special types of models.
+
+Time series data is a list of ordered observations, unlike data that can be analyzed by linear regression. The most common model is ARIMA, an acronym that stands for "Autoregressive Integrated Moving Average".
+
+[ARIMA models](https://online.stat.psu.edu/stat510/lesson/1/1.1) "relate the present value of a series to past values and past prediction errors." They are most appropriate for analyzing time-domain data, where data is ordered over time.
+
+> There are several types of ARIMA models, which you can learn about [here](https://people.duke.edu/~rnau/411arim.htm) and which you will touch on in the next lesson.
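+
+As a preview, fitting an ARIMA model in Python is commonly done with the `statsmodels` library. Here is a minimal sketch on a toy series, using the first few CO2 values from the table below (the (1, 1, 1) order is an arbitrary illustration, not a tuned choice):
+
+```python
+import pandas as pd
+from statsmodels.tsa.arima.model import ARIMA
+
+# A toy univariate monthly series (first CO2 values from the table below)
+series = pd.Series(
+    [330.62, 331.40, 331.87, 333.18, 333.92, 333.43, 331.85, 330.01],
+    index=pd.date_range("1975-01-01", periods=8, freq="MS"),
+)
+
+# ARIMA(1, 1, 1): 1 autoregressive term, 1 difference, 1 moving-average term
+model = ARIMA(series, order=(1, 1, 1)).fit()
+
+# Forecast one time step ahead
+print(model.forecast(steps=1))
+```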
+
+In the next lesson, you will build an ARIMA model using [Univariate Time Series](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm), which focuses on one variable that changes its value over time. An example of this type of data is [this dataset](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm) that records the monthly CO2 concentration at the Mauna Loa Observatory:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Identify the variable that changes over time in this dataset
+
+## Time series data characteristics to consider
+
+When looking at time series data, you might notice that it has [certain characteristics](https://online.stat.psu.edu/stat510/lesson/1/1.1) that you need to take into account and mitigate to better understand its patterns. If you consider time series data as potentially providing a 'signal' that you want to analyze, these characteristics can be thought of as 'noise'. You will often need to reduce this 'noise' by offsetting some of these characteristics using statistical techniques.
+
+Here are some concepts you should know to be able to work with time series:
+
+🎓 **Trends**
+
+Trends are defined as measurable increases and decreases over time. [Read more](https://machinelearningmastery.com/time-series-trends-in-python). In the context of time series, it's about how to use and, if necessary, remove trends from your time series.
+
+🎓 **[Seasonality](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Seasonality is defined as periodic fluctuations, such as holiday sales rushes, for example. [Take a look](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm) at how different types of plots display seasonality in data.
+
+🎓 **Outliers**
+
+Outliers are far away from the standard data variance.
+
+🎓 **Long-run cycle**
+
+Independent of seasonality, data might display a long-run cycle such as an economic downturn that lasts longer than a year.
+
+🎓 **Constant variance**
+
+Over time, some data display constant fluctuations, such as energy usage per day and night.
+
+🎓 **Abrupt changes**
+
+The data might display an abrupt change that might need further analysis. The abrupt shuttering of businesses due to COVID, for example, caused changes in data.
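+
+Some of these characteristics can be mitigated with simple transformations. Here is a sketch (assuming `series` is a pandas Series of observations ordered in time, such as monthly data):
+
+```python
+# series: a pandas Series of time-ordered observations (assumed to exist)
+
+# First-order differencing removes a linear trend
+detrended = series.diff().dropna()
+
+# A rolling mean smooths short-term fluctuations so a trend is easier to see
+trend = series.rolling(window=12).mean()
+
+# Seasonal differencing (lag 12 for monthly data) removes yearly seasonality
+deseasonalized = series.diff(12).dropna()
+```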
+
+✅ Here is a [sample time series plot](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python) showing daily in-game currency spent over a few years. Can you identify any of the characteristics listed above in this data?
+
+
+
+## Exercise - getting started with power usage data
+
+Let's start creating a time series model to predict future power usage given past usage.
+
+> The data in this example is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. In the `working` folder of this lesson, open the _notebook.ipynb_ file. Start by adding libraries that will help you load and visualize the data
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+   Note: you are using files from the included `common` folder, which sets up your environment and handles downloading the data.
+
+2. Next, examine the data as a dataframe by calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+   You can see that there are two columns representing date and load:
+
+   |                     |  load  |
+   | :-----------------: | :----: |
+   | 2012-01-01 00:00:00 | 2698.0 |
+   | 2012-01-01 01:00:00 | 2558.0 |
+   | 2012-01-01 02:00:00 | 2444.0 |
+   | 2012-01-01 03:00:00 | 2402.0 |
+   | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Now, plot the data by calling `plot()`:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Now, plot the first week of July 2014, by providing it as input to `energy` in the `[from date]: [to date]` pattern:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   A beautiful plot! Take a look at these plots and see if you can determine any of the characteristics listed above. What can we surmise by visualizing the data?
+
+In the next lesson, you will create an ARIMA model to make some forecasts.
+
+---
+
+## 🚀Challenge
+
+Make a list of all the industries and areas of inquiry you can think of that would benefit from time series forecasting. Can you think of an application of these techniques in the arts? In econometrics? Ecology? Retail? Industry? Finance? Where else?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Review & Self Study
+
+Although we won't cover them here, neural networks are sometimes used to enhance classic methods of time series forecasting. Read more about them [in this article](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## Assignment
+
+[Visualize some more time series](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/1-Introduction/assignment.md b/translations/it/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..5b93f96f8
--- /dev/null
+++ b/translations/it/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Visualize some more Time Series
+
+## Instructions
+
+You've begun to learn about time series forecasting by looking at the type of data that requires this special modeling. You've visualized some data around energy. Now, look around for some other data that would benefit from time series forecasting. Find three examples (try [Kaggle](https://kaggle.com) and [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) and create a notebook to visualize them. Note any special characteristics that they have (seasonality, abrupt changes, or other trends) in the notebook.
+
+## Rubric
+
+| Criteria | Exemplary                                              | Adequate                                             | Needs Improvement                                                                            |
+| -------- | ------------------------------------------------------ | ---------------------------------------------------- | -------------------------------------------------------------------------------------------- |
+|          | Three datasets are graphed and explained in a notebook | Two datasets are graphed and explained in a notebook | Few datasets are graphed or explained in a notebook or the data presented are insufficient |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/it/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..cdfe4eab9
--- /dev/null
+++ b/translations/it/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/it/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..cd23bc558
--- /dev/null
+++ b/translations/it/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/2-ARIMA/README.md b/translations/it/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..e9bebb7fc
--- /dev/null
+++ b/translations/it/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,396 @@
+# Time series forecasting with ARIMA
+
+In the previous lesson, you learned a bit about time series forecasting and loaded a dataset showing the fluctuations of electrical load over a time period.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Click the image above for a video: A brief introduction to ARIMA models. The example is done in R, but the concepts are universal.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average). ARIMA models are especially suited to fit data that shows [non-stationarity](https://wikipedia.org/wiki/Stationary_process).
+
+## General concepts
+
+To be able to work with ARIMA, there are some concepts you need to know about:
+
+- 🎓 **Stationarity**. In a statistical context, stationarity refers to data whose distribution does not change when shifted in time. Non-stationary data, then, shows fluctuations due to trends that must be transformed to be analyzed. Seasonality, for example, can introduce fluctuations in data and can be eliminated by a process of 'seasonal-differencing'.
+
+- 🎓 **[Differencing](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. Differencing data, again in a statistical context, refers to the process of transforming non-stationary data to make it stationary by removing its non-constant trend. "Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series." [Paper by Shixiong et al](https://arxiv.org/abs/1904.07632). A minimal pandas sketch of differencing follows below.
+
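+As promised above, here is a minimal pandas sketch of differencing (the `energy` dataframe is an assumption matching the exercise below; `diff()` subtracts the previous value, `diff(24)` subtracts the value one day earlier):
+
+```python
+# Sketch: first and seasonal differences of an hourly series.
+# Assumes a DataFrame `energy` with a 'load' column and an hourly DatetimeIndex.
+first_diff = energy['load'].diff()        # removes a slowly-moving trend
+seasonal_diff = energy['load'].diff(24)   # removes a daily (24-hour) seasonal pattern
+print(first_diff.dropna().head())
+print(seasonal_diff.dropna().head())
+```
+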
+## ARIMA in the context of time series
+
+Let's unpack the parts of ARIMA to better understand how it helps us model time series and make predictions.
+
+- **AR - for AutoRegressive**. Autoregressive models, as the name implies, look 'back' in time to analyze previous values in your data and make assumptions about them. These previous values are called 'lags'. An example would be data that shows monthly sales of pencils. Each month's sales total would be considered an 'evolving variable' in the dataset. This model is built as the "evolving variable of interest is regressed on its own lagged (i.e., prior) values." [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - for Integrated**. As opposed to the similar 'ARMA' models, the 'I' in ARIMA refers to its *[integrated](https://wikipedia.org/wiki/Order_of_integration)* aspect. The data is 'integrated' when differencing steps are applied so as to eliminate non-stationarity.
+
+- **MA - for Moving Average**. The [moving-average](https://wikipedia.org/wiki/Moving-average_model) aspect of this model refers to the output variable that is determined by observing the current and past values of lags.
+
+Bottom line: ARIMA is used to make a model fit the special form of time series data as closely as possible. Since the AR part works on lagged values, a sketch of how to inspect those lags follows below.
+
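+A common way to get a feel for how many lags matter is an autocorrelation plot. The lesson notebook already imports `autocorrelation_plot` from pandas; here is a minimal sketch of its use (the `energy` dataframe is loaded in the exercise below):
+
+```python
+# Sketch: visualize how strongly the series correlates with its own lags.
+from pandas.plotting import autocorrelation_plot
+import matplotlib.pyplot as plt
+
+autocorrelation_plot(energy['load'])
+plt.show()
+```
+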
+## Exercise - build an ARIMA model
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) file.
+
+1. Run the notebook to load the `statsmodels` Python library; you will need this for ARIMA models.
+
+1. Load necessary libraries
+
+1. Now, load up several more libraries useful for plotting data:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Plot all the available energy data from January 2012 to December 2014. There should be no surprises as we saw this data in the last lesson:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+   Now, let's build a model!
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. You'll train your model on the train set. As usual, after the model has finished training, you'll evaluate its accuracy using the test set. You need to ensure that the test set covers a later period in time from the training set to ensure that the model does not gain information from future time periods.
+
+1. Allocate the period from November 1 to December 29, 2014 to the training set. The test set will cover December 30 to December 31, 2014, matching the dates in the code below:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+   Since this data reflects the daily consumption of energy, there is a strong seasonal pattern, but the consumption is most similar to the consumption in more recent days.
+
+1. Visualize the differences:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   Therefore, using a relatively small window of time for training the data should be sufficient.
+
+   > Note: Since the function we use to fit the ARIMA model uses in-sample validation during fitting, we will omit validation data.
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale the data to ensure it is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only including the needed column 'load' plus the date:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+   You can see the shape of the data:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. Scale the data to be in the range (0, 1).
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Visualize the original vs. scaled data:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   > The original data
+
+ 
+
+   > The scaled data
+
+1. Now that you have calibrated the scaled data, you can scale the test data:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### Implement ARIMA
+
+It's time to implement ARIMA! You'll now use the `statsmodels` library that you installed earlier.
+
+Now you need to follow several steps:
+
+   1. Define the model by calling `SARIMAX()` and passing in the model parameters: the p, d, and q parameters, and the P, D, and Q parameters.
+   2. Prepare the model for the training data by calling the `fit()` function.
+   3. Make predictions by calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+`p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+`d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+`q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect - which this one does - we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q`, which describe the same associations as `p`, `d`, and `q` but correspond to the seasonal components of the model.
+
+1. Start by setting your preferred horizon value. Let's try 3 hours:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+   Selecting the best values for an ARIMA model's parameters can be challenging as it's somewhat subjective and time intensive. You might consider using the `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html); a sketch of this appears right after this exercise.
+
+1. For now, try some manual selections to find a good model.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+   A table of results is printed.
+
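+As mentioned in the exercise, instead of selecting orders manually you can let `auto_arima()` search them for you. The `pyramid` package linked above is distributed today as `pmdarima`; the code below is a hedged sketch following pmdarima's documented API (the seasonal period of 24 is an assumption matching the hourly data):
+
+```python
+# Sketch: automatic (p, d, q)(P, D, Q, m) order selection with pmdarima.
+# Install with `pip install pmdarima`; `train` is the scaled training frame from above.
+import pmdarima as pm
+
+auto_model = pm.auto_arima(train['load'],
+                           seasonal=True, m=24,   # assume a 24-hour seasonal cycle
+                           suppress_warnings=True)
+print(auto_model.summary())
+```
+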
+You've built your first model! Now we need to find a way to evaluate it.
+
+### Evaluate your model
+
+To evaluate your model, you can perform so-called `walk forward` validation. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step.
+
+Starting at the beginning of the time series using this technique, train the model on the train data set. Then make a prediction on the next time step. The prediction is evaluated against the known value. The training set is then expanded to include the known value and the process is repeated.
+
+> Note: You should keep the training set window fixed for more efficient training so that every time you add a new observation to the training set, you remove the observation from the beginning of the set.
+
+This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computation cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale.
+
+Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
+
+1. First, create a test data point for each HORIZON step.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ | | | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+   The data is shifted horizontally according to its horizon point.
+
+1. Make predictions on your test data using this sliding window approach in a loop the size of the test data length:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+   You can watch the training occurring:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Compare the predictions to the actual load:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ Output
+ | | | timestamp | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+   Observe the hourly data's prediction, compared to the actual load. How accurate is this?
+
+### Check model accuracy
+
+Check the accuracy of your model by testing its mean absolute percentage error (MAPE) over all the predictions.
+
+> **🧮 Show me the math**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) is used to show prediction accuracy as a ratio defined by the above formula. The difference between the actual and predicted value at time t is divided by the actual value. "The absolute value in this calculation is summed for every forecasted point in time and divided by the number of fitted points n." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
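+>
+> In symbols, with A<sub>t</sub> the actual and F<sub>t</sub> the forecasted value at time t, the formula in the image above reads:
+>
+> $$\text{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$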
+
+1. Express the equation in code:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Calculate one step MAPE:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+   One step forecast MAPE:  0.5570581332313952 %
+
+1. Print the multi-step forecast MAPE:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+   A low number is best: consider that a forecast that has a MAPE of 10 is off by 10%.
+
+1. But as always, it's easier to see this kind of accuracy measurement visually, so let's plot it:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 A very nice plot, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+Dig into the ways to test the accuracy of a time series model. We touch on MAPE in this lesson, but are there other methods you could use? Research them and annotate them. A helpful document can be found [here](https://otexts.com/fpp2/accuracy.html), and a sketch of two common alternatives follows below.
+
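+As a starting point for that research, here is a sketch of MAE and RMSE computed with scikit-learn (which this lesson already uses); `eval_df` is the evaluation frame built in the exercise above:
+
+```python
+# Sketch: mean absolute error and root mean squared error on the same predictions.
+import numpy as np
+from sklearn.metrics import mean_absolute_error, mean_squared_error
+
+mae = mean_absolute_error(eval_df['actual'], eval_df['prediction'])
+rmse = np.sqrt(mean_squared_error(eval_df['actual'], eval_df['prediction']))
+print(f'MAE: {mae:.2f}, RMSE: {rmse:.2f}')
+```
+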
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Review & Self Study
+
+This lesson touches on only the basics of time series forecasting with ARIMA. Take some time to deepen your knowledge by digging into [this repository](https://microsoft.github.io/forecasting/) and its various model types to learn other ways to build time series models.
+
+## Assignment
+
+[A new ARIMA model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/2-ARIMA/assignment.md b/translations/it/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..6c9d7f677
--- /dev/null
+++ b/translations/it/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# A new ARIMA model
+
+## Instructions
+
+Now that you have built an ARIMA model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using MAPE.
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                            | Adequate                                                 | Needs Improvement                   |
+| -------- | --------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ------------------------------------- |
+|          | A notebook is presented with a new ARIMA model built, tested and explained, with visualizations and accuracy stated. | The notebook presented is not annotated or contains bugs | An incomplete notebook is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/it/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..b20a093c6
--- /dev/null
+++ b/translations/it/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/it/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..9bc2be598
--- /dev/null
+++ b/translations/it/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/3-SVR/README.md b/translations/it/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..f9095cd83
--- /dev/null
+++ b/translations/it/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,389 @@
+# Time Series Forecasting with Support Vector Regressor
+
+In the previous lesson, you learned how to use the ARIMA model to make time series predictions. Now you'll be looking at the Support Vector Regressor model, which is a regressor model used to predict continuous data.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine) for regression, or **SVR: Support Vector Regressor**.
+
+### SVR in the context of time series [^1]
+
+Before understanding the importance of SVR in time series prediction, here are some of the important concepts that you need to know:
+
+- **Regression:** Supervised learning technique to predict continuous values from a given set of inputs. The idea is to fit a curve (or line) in the feature space that has the maximum number of data points. [Click here](https://en.wikipedia.org/wiki/Regression_analysis) for more information.
+- **Support Vector Machine (SVM):** A type of supervised machine learning model used for classification, regression and outlier detection. The model is a hyperplane in the feature space, which in the case of classification acts as a boundary, and in the case of regression acts as the best-fit line. In SVM, a kernel function is generally used to transform the dataset to a space of a higher number of dimensions, so that they can be easily separable. [Click here](https://en.wikipedia.org/wiki/Support-vector_machine) for more information on SVMs.
+- **Support Vector Regressor (SVR):** A type of SVM, to find the best-fit line (which in the case of SVM is a hyperplane) that has the maximum number of data points.
+
+### Why SVR? [^1]
+
+In the last lesson you learned about ARIMA, which is a very successful statistical linear method to forecast time series data. However, in many cases, time series data have *non-linearity*, which cannot be mapped by linear models. In such cases, the ability of SVM to consider non-linearity in the data for regression tasks makes SVR successful in time series forecasting.
+
+## Exercise - build an SVR model
+
+The first few steps for data preparation are the same as that of the previous lesson on [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA).
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) file.[^2]
+
+1. Run the notebook and import the necessary libraries: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Plot all the available energy data from January 2012 to December 2014: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   Now, let's build our SVR model.
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. Then you'll reshape the data to create a timestep-based dataset which will be needed for the SVR. You'll train your model on the train set. After the model has finished training, you'll evaluate its accuracy on the training set, testing set and then the full dataset to see the overall performance. You need to ensure that the test set covers a later period in time from the training set to ensure that the model does not gain information from future time periods [^2] (a situation known as *data leakage*).
+
+1. Allocate the period from November 1 to December 29, 2014 to the training set. The test set will cover December 30 to December 31, 2014, matching the dates in the code below: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Visualize the differences: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale the data to ensure it is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only including the needed column 'load' plus the date: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Scale the train data to be in the range (0, 1): [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. Now, scale the test data: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Create data with time-steps [^1]
+
+For the SVR, you transform the input data to be of the form `[batch, timesteps]`. So, you reshape the existing `train_data` and `test_data` such that there is a new dimension which refers to the timesteps.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+For this example, we take `timesteps = 5`. So, the inputs to the model are the data for the first 4 timesteps, and the output will be the data for the 5th timestep.
+
+```python
+timesteps=5
+```
+
+Convert the training data to a 2D tensor using nested list comprehension:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+Convert the testing data to a 2D tensor:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+Select inputs and outputs from the training and testing data:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### Implement SVR [^1]
+
+Now, it's time to implement SVR. To read more about this implementation, you can refer to [this documentation](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html). For our implementation, we follow these steps:
+
+   1. Define the model by calling `SVR()` and passing in the model hyperparameters: kernel, gamma, C and epsilon.
+   2. Prepare the model for the training data by calling the `fit()` function.
+   3. Make predictions by calling the `predict()` function.
+
+Now we create an SVR model. Here we use the [RBF kernel](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel), and set the hyperparameters gamma, C and epsilon to 0.5, 10 and 0.05 respectively.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### Fit the model on the training data [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### Make model predictions [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+You've built your SVR! Now we need to evaluate it.
+
+### Evaluate your model [^1]
+
+For evaluation, first we will scale back the data to our original scale. Then, to check the performance, we will plot the original and predicted time series, and also print the MAPE result.
+
+Scale the predicted and original output:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### Check model performance on training and testing data [^1]
+
+We extract the timestamps from the dataset to show on the x-axis of our plot. Note that we are using the first ```timesteps-1``` values as input for the first output, so the timestamps for the output will start after that.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+Plot the predictions for the training data:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+Print MAPE for the training data:
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+Plot the predictions for the testing data:
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+Print MAPE for the testing data:
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 You have a very good result on the testing dataset!
+
+### Check model performance on the full dataset [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+
+
+🏆 Very nice plots, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+- Try to tweak the hyperparameters (gamma, C, epsilon) while creating the model and evaluate on the data to see which set of hyperparameters gives the best results on the testing data; a sketch of a systematic search follows below. To know more about these hyperparameters, you can refer to the document [here](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel).
+- Try to use different kernel functions for the model and analyze their performance on the dataset. A helpful document can be found [here](https://scikit-learn.org/stable/modules/svm.html#kernel-functions).
+- Try using different values of `timesteps` for the model to look back to make the prediction.
+
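+For the first challenge item, here is a hedged sketch of a systematic search (the grid values are illustrative assumptions; `TimeSeriesSplit` keeps the cross-validation folds in chronological order, and the MAPE scorer requires a recent scikit-learn):
+
+```python
+# Sketch: hyperparameter search for the SVR with time-ordered cross-validation.
+# Assumes x_train and y_train from the lesson; grid values are illustrative.
+from sklearn.svm import SVR
+from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
+
+param_grid = {'gamma': [0.1, 0.5, 1.0], 'C': [1, 10, 100], 'epsilon': [0.01, 0.05, 0.1]}
+search = GridSearchCV(SVR(kernel='rbf'), param_grid,
+                      cv=TimeSeriesSplit(n_splits=3),
+                      scoring='neg_mean_absolute_percentage_error')
+search.fit(x_train, y_train[:, 0])
+print(search.best_params_)
+```
+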
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## Review & Self Study
+
+This lesson was to introduce the application of SVR for time series forecasting. To read more about SVR, you can refer to [this blog](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/). This [documentation on scikit-learn](https://scikit-learn.org/stable/modules/svm.html) provides a more comprehensive explanation about SVMs in general, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression) and also other implementation details such as the different [kernel functions](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) that can be used, and their parameters.
+
+## Assignment
+
+[A new SVR model](assignment.md)
+
+
+
+## Credits
+
+[^1]: The text, code and output in this section was contributed by [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)
+[^2]: The text, code and output in this section was taken from [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/3-SVR/assignment.md b/translations/it/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..2fa971bbe
--- /dev/null
+++ b/translations/it/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# A new SVR model
+
+## Instructions [^1]
+
+Now that you have built an SVR model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using appropriate plots and MAPE. Also try tweaking the different hyperparameters and using different values for the timesteps.
+
+## Rubric [^1]
+
+| Criteria | Exemplary                                                    | Adequate                                                  | Needs Improvement                     |
+| -------- | -------------------------------------------------------------- | ------------------------------------------------------------ | ---------------------------------------- |
+|          | A notebook is presented with an SVR model built, tested and explained with visualizations and accuracy stated. | The notebook presented is not annotated or contains bugs. | An incomplete notebook is presented. |
+
+[^1]: The text in this section was based on the [assignment from ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/7-TimeSeries/README.md b/translations/it/7-TimeSeries/README.md
new file mode 100644
index 000000000..9502c0c39
--- /dev/null
+++ b/translations/it/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# Introduction to time series forecasting
+
+What is time series forecasting? It's about predicting future events by analyzing trends of the past.
+
+## Regional topic: worldwide electricity usage ✨
+
+In these lessons, you will be introduced to time series forecasting, a somewhat lesser known area of machine learning that is nevertheless extremely valuable for industry and business applications, among other fields. While neural networks can be used to enhance the utility of these models, we will study them in the context of classical machine learning, as models help predict future performance based on the past.
+
+Our regional focus is electrical usage in the world, an interesting dataset to learn about forecasting future power usage based on patterns of past load. You can see how this kind of forecasting can be extremely helpful in a business environment.
+
+
+
+Photo by [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) of electrical towers on a road in Rajasthan on [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## Lessons
+
+1. [Introduction to time series forecasting](1-Introduction/README.md)
+2. [Building ARIMA time series models](2-ARIMA/README.md)
+3. [Building Support Vector Regressor for time series forecasting](3-SVR/README.md)
+
+## Credits
+
+"Introduction to time series forecasting" was written with ⚡️ by [Francesca Lazzeri](https://twitter.com/frlazzeri) and [Jen Looper](https://twitter.com/jenlooper). The notebooks first appeared online in the [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) originally written by Francesca Lazzeri. The SVR lesson was written by [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/1-QLearning/README.md b/translations/it/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..e5b89fedd
--- /dev/null
+++ b/translations/it/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,319 @@
+# Introduction to Reinforcement Learning and Q-Learning
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+Reinforcement learning involves three important concepts: the agent, some states, and a set of actions per state. By executing an action in a specified state, the agent is given a reward. Again imagine the computer game Super Mario. You are Mario, you are in a game level, standing next to a cliff edge. Above you is a coin. You being Mario, in a game level, at a specific position ... that's your state. Moving one step to the right (an action) will take you over the edge, and that would give you a low numerical score. However, pressing the jump button would let you score a point and stay alive. That is a positive outcome and that should award you a positive numerical score.
+
+By using reinforcement learning and a simulator (the game), you can learn how to play the game to maximize the reward, which is staying alive and scoring as many points as possible.
+
+[](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 Click the image above to hear Dmitry discuss Reinforcement Learning
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## Prerequisites and Setup
+
+In this lesson, we will be experimenting with some code in Python. You should be able to run the Jupyter Notebook code from this lesson, either on your computer or somewhere in the cloud.
+
+You can open [the lesson notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb) and walk through this lesson to build.
+
+> **Note:** If you are opening this code from the cloud, you also need to fetch the [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) file, which is used in the notebook code. Add it to the same directory as the notebook.
+
+## Introduction
+
+In this lesson, we will be exploring the world of **[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)**, inspired by a musical fairy tale by the Russian composer, [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). We will use **Reinforcement Learning** to let Peter explore his environment, collect tasty apples and avoid meeting the wolf.
+
+**Reinforcement Learning** (RL) is a learning technique that allows us to learn an optimal behavior of an **agent** in some **environment** by running many experiments. An agent in this environment should have some **goal**, defined by a **reward function**.
+
+## The environment
+
+For simplicity, let's consider Peter's world to be a square board of size `width` x `height`, like this:
+
+
+
+Each cell in this board can either be:
+
+* **ground**, on which Peter and other creatures can walk.
+* **water**, on which you obviously cannot walk.
+* a **tree** or **grass**, a place where you can rest.
+* an **apple**, which represents something Peter would be glad to find in order to feed himself.
+* a **wolf**, which is dangerous and should be avoided.
+
+There is a separate Python module, [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py), which contains the code to work with this environment. Because this code is not important for understanding our concepts, we will import the module and use it to create the sample board (code block 1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+This code should print a picture of the environment similar to the one above.
+
+## Actions and policy
+
+In our example, Peter's goal would be to find an apple, while avoiding the wolf and other obstacles. To do this, he can essentially walk around until he finds an apple.
+
+Therefore, at any position, he can choose between one of the following actions: up, down, left and right.
+
+We will define those actions as a dictionary, and map them to pairs of corresponding coordinate changes. For example, moving right (`R`) would correspond to the pair `(1,0)` (code block 2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
+
+To sum up, the strategy and goal of this scenario are as follows:
+
+- **The strategy** of our agent (Peter) is defined by a so-called **policy**. A policy is a function that returns the action at any given state. In our case, the state of the problem is represented by the board, including the current position of the player.
+
+- **The goal** of reinforcement learning is to eventually learn a good policy that will allow us to solve the problem efficiently. However, as a baseline, let's consider the simplest policy called **random walk**.
+
+## Random walk
+
+Let's first solve our problem by implementing a random walk strategy. With random walk, we will randomly choose the next action from the allowed actions, until we reach the apple (code block 3).
+
+1. Implement the random walk with the code below:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+   The call to `walk` should return the length of the corresponding path, which can vary from one run to another.
+
+1. Run the walk experiment a number of times (say, 100), and print the resulting statistics (code block 4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+   Note that the average length of a path is around 30-40 steps, which is quite a lot, given the fact that the average distance to the nearest apple is around 5-6 steps.
+
+   You can also see what Peter's movement looks like during the random walk:
+
+ 
+
+## Reward function
+
+To make our policy more intelligent, we need to understand which moves are "better" than others. To do this, we need to define our goal.
+
+The goal can be defined in terms of a **reward function**, which will return some score value for each state. The higher the number, the better the reward function. (code block 5)
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+An interesting thing about reward functions is that in most cases, *we are only given a substantial reward at the end of the game*. This means that our algorithm should somehow remember "good" steps that lead to a positive reward at the end, and increase their importance. Similarly, all moves that lead to bad results should be discouraged.
+
+## Q-Learning
+
+An algorithm that we will discuss here is called **Q-Learning**. In this algorithm, the policy is defined by a function (or a data structure) called a **Q-Table**. It records the "goodness" of each of the actions in a given state.
+
+It is called a Q-Table because it is often convenient to represent it as a table, or multi-dimensional array. Since our board has dimensions `width` x `height`, we can represent the Q-Table using a numpy array with shape `width` x `height` x `len(actions)`: (code block 6)
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=float)*1.0/len(actions)  # use built-in float: np.float was removed from NumPy
+```
+
+Notice that we initialize all the values of the Q-Table with an equal value, in our case - 0.25. This corresponds to the "random walk" policy, because all moves in each state are equally good. We can pass the Q-Table to the `plot` function in order to visualize the table on the board: `m.plot(Q)`.
+
+
+
+In the center of each cell there is an "arrow" that indicates the preferred direction of movement. Since all directions are equal, a dot is displayed.
+
+Now we need to run the simulation, explore our environment, and learn a better distribution of Q-Table values, which will allow us to find the path to the apple much faster.
+
+## Essence of Q-Learning: Bellman Equation
+
+Once we start moving, each action will have a corresponding reward, i.e. we can theoretically select the next action based on the highest immediate reward. However, in most states, the move will not achieve our goal of reaching the apple, and thus we cannot immediately decide which direction is better.
+
+> Remember that it is not the immediate result that matters, but rather the final result, which we will obtain at the end of the simulation.
+
+In order to account for this delayed reward, we need to use the principles of **[dynamic programming](https://en.wikipedia.org/wiki/Dynamic_programming)**, which allow us to think about our problem recursively.
+
+Suppose we are now at the state *s*, and we want to move to the next state *s'*. By doing so, we will receive the immediate reward *r(s,a)*, defined by the reward function, plus some future reward. If we suppose that our Q-Table correctly reflects the "attractiveness" of each action, then at state *s'* we will choose the action *a'* that corresponds to the maximum value of *Q(s',a')*. Thus, the best possible future reward we could get at state *s* will be defined as max<sub>a'</sub>*Q(s',a')* (the maximum here is computed over all possible actions *a'* at state *s'*).
+
+This gives the **Bellman formula** for calculating the value of the Q-Table at state *s*, given action *a*:
+
+
+
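+Written out in the notation introduced above, the formula in the image reads:
+
+```latex
+Q(s,a) = r(s,a) + \gamma \max_{a'} Q(s',a')
+```
+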
+Here γ is the so-called **discount factor** that determines to which extent you should prefer the current reward over the future reward and vice versa.
+
+## Learning Algorithm
+
+Given the equation above, we can now write pseudo-code for our learning algorithm:
+
+* Initialize Q-Table Q with equal numbers for all states and actions
+* Set learning rate α ← 1
+* Repeat simulation many times
+    1. Start at random position
+    1. Repeat
+        1. Select an action *a* at state *s*
+        2. Execute action by moving to a new state *s'*
+        3. If we encounter end-of-game condition, or total reward is too small - exit simulation
+        4. Compute reward *r* at the new state
+        5. Update Q-Function according to Bellman equation: *Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ max<sub>a'</sub>Q(s',a'))*
+        6. *s* ← *s'*
+        7. Update the total reward and decrease α.
+
+## Exploit vs. explore
+
+In the algorithm above, we did not specify how exactly we should choose an action at step 2.1. If we are choosing the action randomly, we will randomly **explore** the environment, and we are quite likely to die often as well as explore areas where we would not normally go. An alternative approach would be to **exploit** the Q-Table values that we already know, and thus to choose the best action (with higher Q-Table value) at state *s*. This, however, will prevent us from exploring other states, and it's likely we might not find the optimal solution.
+
+Thus, the best approach is to strike a balance between exploration and exploitation. This can be done by choosing the action at state *s* with probabilities proportional to values in the Q-Table. In the beginning, when Q-Table values are all the same, it would correspond to a random selection, but as we learn more about our environment, we would be more likely to follow the optimal route while allowing the agent to choose the unexplored path once in a while.
+
+## Python implementation
+
+We are now ready to implement the learning algorithm. Before we do that, we also need some function that will convert arbitrary numbers in the Q-Table into a vector of probabilities for corresponding actions.
+
+1. Create a function `probs()`:
+
+    ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+    ```
+
+    We add some small `eps` to the original vector in order to avoid division by 0 in the initial case, when all the components of the vector are identical.
+
+Run the learning algorithm through 5000 experiments, also called **epochs**: (code block 8)
+```python
+lpath = [] # path length at the end of each epoch, used to inspect learning later
+
+for epoch in range(5000):
+
+    # Pick initial point
+    m.random_start()
+
+    # Start travelling
+    n=0
+    cum_reward = 0
+    while True:
+        x,y = m.human
+        v = probs(Q[x,y])
+        a = random.choices(list(actions),weights=v)[0]
+        dpos = actions[a]
+        m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+        r = reward(m)
+        cum_reward += r
+        if r==end_reward or cum_reward < -1000:
+            lpath.append(n)
+            break
+        alpha = np.exp(-n / 10e5)
+        gamma = 0.5
+        ai = action_idx[a]
+        Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+        n+=1
+```
+
+After executing this algorithm, the Q-Table should be updated with values that define the attractiveness of different actions at each step. We can try to visualize the Q-Table by plotting a vector at each cell that points in the desired direction of movement. For simplicity, we draw a small circle instead of an arrow head.
+
+## Checking the policy
+
+Since the Q-Table lists the "attractiveness" of each action at each state, it is quite easy to use it to define efficient navigation in our world. In the simplest case, we can select the action corresponding to the highest Q-Table value: (code block 9)
+
+```python
+def qpolicy_strict(m):
+    x,y = m.human
+    v = probs(Q[x,y])
+    a = list(actions)[np.argmax(v)]
+    return a
+
+walk(m,qpolicy_strict)
+```
+
+> If you try the code above several times, you may notice that sometimes it "hangs", and you need to press the STOP button in the notebook to interrupt it. This happens because there could be situations when two states "point" to each other in terms of optimal Q-Value, in which case the agent ends up moving between those states indefinitely.
+
+## 🚀Challenge
+
+> **Task 1:** Modify the `walk` function to limit the maximum length of the path by a certain number of steps (say, 100), and watch the code above return this value from time to time (see the sketch below).
+
+> **Task 2:** Modify the `walk` function so that it does not go back to the places where it has already been previously. This will prevent `walk` from looping; however, the agent can still end up being "trapped" in a location from which it is unable to escape.
+
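+One way to approach Task 1 is a small variation of the `walk` function defined earlier. This is only a sketch, reusing the same `Board` API (`at`, `move_pos`, `is_valid`, `move`) from the code above:
+
+```python
+def walk_capped(m,policy,max_steps=100,start_position=None):
+    n = 0
+    if start_position:
+        m.human = start_position
+    else:
+        m.random_start()
+    while n < max_steps:
+        if m.at() == Board.Cell.apple:
+            return n # success!
+        if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+            return -1 # eaten by wolf or drowned
+        while True:
+            a = actions[policy(m)]
+            new_pos = m.move_pos(m.human,a)
+            if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+                m.move(a)
+                break
+        n+=1
+    return max_steps # path was cut off at the limit
+```
+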
+## Navigation
+
+A better navigation policy would be the one that we used during training, which combines exploitation and exploration. In this policy, we will select each action with a certain probability, proportional to the values in the Q-Table. This strategy may still result in the agent returning back to a position it has already explored, but, as you can see from the code below, it results in a very short average path to the desired location (remember that `print_statistics` runs the simulation 100 times): (code block 10)
+
+```python
+def qpolicy(m):
+    x,y = m.human
+    v = probs(Q[x,y])
+    a = random.choices(list(actions),weights=v)[0]
+    return a
+
+print_statistics(qpolicy)
+```
+
+After running this code, you should get a much smaller average path length than before, in the range of 3-6.
+
+## Investigating the learning process
+
+As we have mentioned, the learning process is a balance between exploration and exploiting the gained knowledge about the structure of the problem space. We have seen that the results of learning (the ability to help an agent find a short path to the goal) have improved, but it is also interesting to observe how the average path length behaves during the learning process:
+
+The learnings can be summarized as:
+
+- **Average path length increases**. What we see here is that at first, the average path length increases. This is probably due to the fact that when we know nothing about the environment, we are likely to get trapped in bad states, water or wolf. As we learn more and start using this knowledge, we can explore the environment for longer, but we still do not know very well where the apples are.
+
+- **Path length decreases, as we learn more**. Once we learn enough, it becomes easier for the agent to achieve the goal, and the path length starts to decrease. However, we are still open to exploration, so we often diverge away from the best path, and explore new options, making the path longer than optimal.
+
+- **Length increases abruptly**. What we also observe on this graph is that at some point, the length increased abruptly. This indicates the stochastic nature of the process, and that we can at some point "spoil" the Q-Table coefficients by overwriting them with new values. This should ideally be minimized by decreasing the learning rate (for example, towards the end of training, we only adjust Q-Table values by a small amount).
+
+Overall, it is important to remember that the success and quality of the learning process significantly depends on parameters such as the learning rate, learning rate decay, and discount factor. Those are often called **hyperparameters**, to distinguish them from **parameters**, which we optimize during training (for example, Q-Table coefficients). The process of finding the best hyperparameter values is called **hyperparameter optimization**, and it deserves a separate topic.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## Assignment
+[A More Realistic World](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/1-QLearning/assignment.md b/translations/it/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..d91102cf7
--- /dev/null
+++ b/translations/it/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# A More Realistic World
+
+In our situation, Peter was able to move around almost without getting tired or hungry. In a more realistic world, he has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic, by implementing the following rules:
+
+1. By moving from one place to another, Peter loses **energy** and gains some **fatigue**.
+2. Peter can gain more energy by eating apples.
+3. Peter can get rid of fatigue by resting under the tree or on the grass (i.e. walking into a board location with a tree or grass - green field)
+4. Peter needs to find and kill the wolf.
+5. In order to kill the wolf, Peter needs to have certain levels of energy and fatigue, otherwise he loses the battle.
+
+## Instructions
+
+Use the original [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) notebook as a starting point for your solution.
+
+Modify the reward function above according to the rules of the game, run the reinforcement learning algorithm to learn the best strategy for winning the game, and compare the results of the random walk with your algorithm in terms of the number of games won and lost.
+
+> **Note**: In your new world, the state is more complex, and in addition to the human position also includes fatigue and energy levels. You may choose to represent the state as a tuple (Board,energy,fatigue), or define a class for the state (you may also want to derive it from `Board`), or even modify the original `Board` class inside [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py).
+
+In your solution, please keep the code responsible for the random walk strategy, and compare the results of your algorithm with the random walk at the end.
+
+> **Note**: You may need to adjust hyperparameters to make it work, especially the number of epochs. Because the success of the game (fighting the wolf) is a rare event, you can expect a much longer training time.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | A notebook is presented with the definition of the new world rules, the Q-Learning algorithm, and some textual explanations. Q-Learning is able to significantly improve the results compared to the random walk. | The notebook is presented, Q-Learning is implemented and improves the results compared to the random walk, but not significantly; or the notebook is poorly documented and the code is not well-structured. | Some attempt to re-define the rules of the world was made, but the Q-Learning algorithm does not work, or the reward function is not fully defined. |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/it/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..434620a63
--- /dev/null
+++ b/translations/it/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/it/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..dd36ee3d8
--- /dev/null
+++ b/translations/it/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/2-Gym/README.md b/translations/it/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..49403b0c9
--- /dev/null
+++ b/translations/it/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,342 @@
+# CartPole Skating
+
+The problem we solved in the previous lesson might seem like a toy problem, not really applicable to real life scenarios. This is not the case, because many real world problems share this scenario - including playing Chess or Go. They are similar, because we also have a board with given rules and a **discrete state**.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## Introduction
+
+In this lesson we will apply the same principles of Q-Learning to a problem with a **continuous state**, i.e. a state that is given by one or more real numbers. We will deal with the following problem:
+
+> **Problem**: If Peter wants to escape from the wolf, he needs to be able to move faster. We will see how Peter can learn to skate, in particular, to keep balance, using Q-Learning.
+
+
+
+> Peter and his friends get creative to escape the wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+We will use a simplified version of balancing known as the **CartPole** problem. In the cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a vertical pole on top of the slider.
+
+## Prerequisites
+
+In this lesson, we will be using a library called **OpenAI Gym** to simulate different **environments**. You can run this lesson's code locally (e.g. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
+
+## OpenAI Gym
+
+In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is called [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using this gym we can create different **environments**, from a cartpole simulation to Atari games.
+
+> **Note**: You can see other environments available from OpenAI Gym [here](https://gym.openai.com/envs/#classic_control).
+
+First, let's install the gym and import the required libraries (code block 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## Exercise - initialize a cartpole environment
+
+To work with the cartpole balancing problem, we need to initialize the corresponding environment. Each environment is associated with:
+
+- An **Observation space** that defines the structure of the information that we receive from the environment. For the cartpole problem, we receive the position of the pole, velocity and some other values.
+
+- An **Action space** that defines the possible actions. In our case the action space is discrete, and consists of two actions - **left** and **right**. (code block 2)
+
+1. To initialize, type the following code:
+
+    ```python
+    env = gym.make("CartPole-v1")
+    print(env.action_space)
+    print(env.observation_space)
+    print(env.action_space.sample())
+    ```
+
+To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`.
+
+1. Run the code below and see what it leads to.
+
+    ✅ Remember that it is preferable to run this code on a local Python installation! (code block 3)
+
+    ```python
+    env.reset()
+
+    for i in range(100):
+        env.render()
+        env.step(env.action_space.sample())
+    env.close()
+    ```
+
+    You should be seeing something similar to this image:
+
+    
+
+1. During the simulation, we need to get observations in order to decide how to act. In fact, the step function returns the current observations, a reward value, and the done flag that indicates whether it makes sense to continue the simulation or not: (code block 4)
+
+    ```python
+    env.reset()
+
+    done = False
+    while not done:
+        env.render()
+        obs, rew, done, info = env.step(env.action_space.sample())
+        print(f"{obs} -> {rew}")
+    env.close()
+    ```
+
+    You will end up seeing something like this in the notebook output:
+
+    ```text
+    [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+    [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+    [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+    [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+    [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+    ...
+    [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+    [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+    ```
+
+    The observation vector that is returned at each step of the simulation contains the following values:
+    - Cart position
+    - Cart velocity
+    - Pole angle
+    - Pole rotation rate
+
+1. Get the min and max values of those numbers: (code block 5)
+
+    ```python
+    print(env.observation_space.low)
+    print(env.observation_space.high)
+    ```
+
+    You may also notice that the reward value at each simulation step is always 1. This is because our goal is to survive as long as possible, i.e. keep the pole in a reasonably vertical position for the longest period of time.
+
+    ✅ In fact, the CartPole simulation is considered solved if we manage to get an average reward of 195 over 100 consecutive trials.
+
+## State discretization
+
+In Q-Learning, we need to build a Q-Table that defines what to do at each state. To be able to do this, the state needs to be **discrete**; more precisely, it should contain a finite number of discrete values. Thus, we need to somehow **discretize** our observations, mapping them to a finite set of states.
+
+There are a few ways we can do this:
+
+- **Divide into bins**. If we know the interval of a certain value, we can divide this interval into a number of **bins**, and then replace the value with the number of the bin that it belongs to. This can be done using the numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) method. In this case, we will know the state size precisely, because it will depend on the number of bins we select for digitalization.
+
+✅ We can use linear interpolation to bring values to some finite interval (say, from -20 to 20), and then convert the numbers to integers by rounding them. This gives us a bit less control over the size of the state, especially if we do not know the exact ranges of the input values. For example, in our case 2 of the 4 values do not have upper/lower bounds, which may result in an infinite number of states.
+
+In our example, we will go with the second approach. As you may notice later, despite the undefined upper/lower bounds, those values rarely fall outside certain finite intervals, thus states with extreme values will be very rare.
+
+1. Here is the function that will take the observation from our model and produce a tuple of 4 integer values: (code block 6)
+
+    ```python
+    def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+    ```
+
+1. Let's also explore another discretization method using bins: (code block 7)
+
+    ```python
+    def create_bins(i,num):
+        return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+    print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+    ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+    nbins = [20,20,10,10] # number of bins for each parameter
+    bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+    def discretize_bins(x):
+        return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+    ```
+
+1. Let's now run a short simulation and observe those discrete environment values. Feel free to try both `discretize` and `discretize_bins` and see whether there is a difference.
+
+    ✅ discretize_bins returns the bin number, which is 0-based. Thus for values of the input variable around 0 it returns the number from the middle of the interval (10). In discretize, we did not care about the range of output values, allowing them to be negative, thus the state values are not shifted, and 0 corresponds to 0. (code block 8)
+
+    ```python
+    env.reset()
+
+    done = False
+    while not done:
+        #env.render()
+        obs, rew, done, info = env.step(env.action_space.sample())
+        #print(discretize_bins(obs))
+        print(discretize(obs))
+    env.close()
+    ```
+
+    ✅ Uncomment the line starting with env.render if you want to see how the environment executes. Otherwise you can execute it in the background, which is faster. We will use this "invisible" execution during our Q-Learning process.
+
+## The Q-Table structure
+
+In our previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent the Q-Table by a numpy tensor with a shape of 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent the state by an array of shape 20x20x10x10x2 (here 2 is the dimension of the action space, and the first dimensions correspond to the number of bins that we have selected for each of the parameters in the observation space).
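+
+As a sketch, that array-based alternative could be allocated like this (this is not the representation we will use below; `nbins` comes from code block 7):
+
+```python
+Q_array = np.zeros(nbins + [env.action_space.n]) # shape (20, 20, 10, 10, 2)
+```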
+
+However, sometimes the precise dimensions of the observation space are not known. In the case of the `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bounded. Thus, we will use a slightly different approach and represent the Q-Table by a dictionary.
+
+1. Use the pair *(state,action)* as the dictionary key, and the value would correspond to the Q-Table entry value. (code block 9)
+
+    ```python
+    Q = {}
+    actions = (0,1)
+
+    def qvalues(state):
+        return [Q.get((state,a),0) for a in actions]
+    ```
+
+    Here we also define a function `qvalues()`, which returns a list of Q-Table values for a given state that corresponds to all possible actions. If the entry is not present in the Q-Table, we will return 0 as the default.
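+
+    As a small illustration, reading and updating an entry might look like this (the state tuple here is hypothetical):
+
+    ```python
+    s = (1, -1, 0, 2)   # a hypothetical discretized state
+    print(qvalues(s))   # -> [0, 0] while the table is still empty
+    Q[(s,1)] = 0.5      # store a value for action 1 in state s
+    print(qvalues(s))   # -> [0, 0.5]
+    ```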
+
+## Let's start Q-Learning
+
+Now we are ready to teach Peter to balance!
+
+1. First, let's set some hyperparameters: (code block 10)
+
+    ```python
+    # hyperparameters
+    alpha = 0.3
+    gamma = 0.9
+    epsilon = 0.90
+    ```
+
+    Here, `alpha` is the **learning rate** that defines to which extent we should adjust the current values of the Q-Table at each step. In the previous lesson we started with 1, and then decreased `alpha` to lower values during training. In this example we will keep it constant just for simplicity, and you can experiment with adjusting `alpha` values later.
+
+    `gamma` is the **discount factor** that shows to which extent we should prioritize future reward over current reward.
+
+    `epsilon` is the **exploration/exploitation factor** that determines whether we should prefer exploration to exploitation or vice versa. In our algorithm, we will in `epsilon` percent of the cases select the next action according to Q-Table values, and in the remaining number of cases we will execute a random action. This will allow us to explore areas of the search space that we have never seen before.
+
+    ✅ In terms of balancing - choosing a random action (exploration) would act as a random punch in the wrong direction, and the pole would have to learn how to recover the balance from those "mistakes".
+
+### Improve the algorithm
+
+We can also make two improvements to our algorithm from the previous lesson:
+
+- **Calculate average cumulative reward**, over a number of simulations. We will print the progress each 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 points - we can consider the problem solved, with even higher quality than required.
+
+- **Calculate maximum average cumulative result**, `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of the Q-Table that correspond to the best model observed during training.
+
+1. Collect all cumulative rewards at each simulation in the `rewards` vector for further plotting. (code block 11)
+
+    ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+
+    Qmax = 0
+    cum_rewards = []
+    rewards = []
+    for epoch in range(100000):
+        obs = env.reset()
+        done = False
+        cum_reward=0
+        # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - choose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - randomly choose the action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == Periodically print results and calculate average reward ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+    ```
+
+Here is what you may notice from those results:
+
+- **Close to our goal**. We are very close to achieving the goal of getting 195 cumulative rewards over 100+ consecutive runs of the simulation, or we may have actually achieved it! Even if we get smaller numbers, we still do not know, because we average over 5000 runs, and only 100 runs are required in the formal criteria.
+
+- **Reward starts to drop**. Sometimes the reward starts to drop, which means that we can "destroy" already learnt values in the Q-Table with ones that make the situation worse.
+
+This observation is more clearly visible if we plot the training progress.
+
+## Plotting training progress
+
+During training, we have collected the cumulative reward value at each of the iterations into the `rewards` vector. Here is how it looks when plotted against the iteration number:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+From this graph, it is not possible to tell anything, because due to the nature of the stochastic training process the length of training sessions varies greatly. To make more sense of this graph, we can calculate the **running average** over a series of experiments, let's say 100. This can be done conveniently using `np.convolve`: (code block 12)
+
+```python
+def running_average(x,window):
+    return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## Varying hyperparameters
+
+To make learning more stable, it makes sense to adjust some of our hyperparameters during training. In particular:
+
+- **For the learning rate**, `alpha`, we may start with values close to 1, and then keep decreasing the parameter. With time, we will be getting good probability values in the Q-Table, and thus we should be adjusting them slightly, and not overwriting them completely with new values.
+
+- **Increase epsilon**. We may want to increase `epsilon` slowly, in order to explore less and exploit more. It probably makes sense to start with a lower value of `epsilon`, and move up to almost 1. A sketch of both schedules is shown below.
+
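+One possible way to implement such schedules is sketched here; the decay constants and bounds are illustrative assumptions, not tuned values:
+
+```python
+alpha = 1.0
+epsilon = 0.3
+for epoch in range(100000):
+    # ... run one simulation episode as in code block 11 ...
+    alpha = max(0.05, alpha * 0.9999)      # slowly decay the learning rate
+    epsilon = min(0.99, epsilon + 1e-5)    # slowly shift from exploration to exploitation
+```
+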
+> **Task 1**: Play with the hyperparameter values and see if you can achieve higher cumulative reward. Are you getting above 195?
+
+> **Task 2**: To formally solve the problem, you need to get a 195 average reward across 100 consecutive runs. Measure that during training and make sure that you have formally solved the problem!
+
+## Seeing the result in action
+
+It would be interesting to actually see how the trained model behaves. Let's run the simulation and follow the same action selection strategy as during training, sampling according to the probability distribution in the Q-Table: (code block 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+    s = discretize(obs)
+    env.render()
+    v = probs(np.array(qvalues(s)))
+    a = random.choices(actions,weights=v)[0]
+    obs,_,done,_ = env.step(a)
+env.close()
+```
+
+You should see something like this:
+
+
+
+---
+
+## 🚀Challenge
+
+> **Task 3**: Here, we were using the final copy of the Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table in the `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.
+
+> **Task 4**: Here we were not selecting the best action on each step, but rather sampling with the corresponding probability distribution. Would it make more sense to always select the best action, with the highest Q-Table value? This can be done by using the `np.argmax` function to find the action number corresponding to the highest Q-Table value. Implement this strategy and see if it improves the balancing.
+
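+A minimal sketch of the greedy strategy from Task 4, reusing the `qvalues` and `actions` definitions above, might look like this:
+
+```python
+def greedy_action(s):
+    return actions[np.argmax(qvalues(s))] # always pick the highest-valued action
+```
+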
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## Assignment
+[Train a Mountain Car](assignment.md)
+
+## Conclusion
+
+We have now learned how to train agents to achieve good results just by providing them a reward function that defines the desired state of the game, and by giving them an opportunity to intelligently explore the search space. We have successfully applied the Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions.
+
+It's important to also study situations where the action space is also continuous, and where the observation space is much more complex, such as the image from the Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of our upcoming more advanced AI course.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/2-Gym/assignment.md b/translations/it/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..6b2691aa3
--- /dev/null
+++ b/translations/it/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# Train a Mountain Car
+
+[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
+
+## A Mountain Car Environment
+
+The [Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley:
+The goal is to get out of the valley and capture the flag, by doing at each step one of the following actions:
+
+| Value | Meaning |
+|---|---|
+| 0 | Accelerate to the left |
+| 1 | Do not accelerate |
+| 2 | Accelerate to the right |
+
+The main trick of this problem is, however, that the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum.
+
+The observation space consists of just two values:
+
+| Num | Observation  | Min | Max |
+|-----|--------------|-----|-----|
+|  0  | Car Position | -1.2| 0.6 |
+|  1  | Car Velocity | -0.07 | 0.07 |
+
+The reward system for the mountain car is rather tricky:
+
+ * A reward of 0 is awarded if the agent reaches the flag (position = 0.5) on top of the mountain.
+ * A reward of -1 is awarded if the position of the agent is less than 0.5.
+
+The episode terminates if the car position is more than 0.5, or if the episode length is greater than 200.
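+
+As a starting sketch (the environment name is the standard Gym id; the discretization scale factors are illustrative assumptions you will want to tune):
+
+```python
+import gym
+import numpy as np
+
+env = gym.make("MountainCar-v0")
+print(env.observation_space.low, env.observation_space.high)
+
+def discretize(x):
+    # map (position, velocity) to a pair of integers; scale factors are guesses to tune
+    return tuple((x/np.array([0.1, 0.01])).astype(int))
+```
+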
+## Instructions
+
+Adapt our reinforcement learning algorithm to solve the mountain car problem. Start with the existing [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) code, substitute the new environment, change the state discretization functions, and try to make the existing algorithm train with minimal code modifications. Optimize the result by adjusting hyperparameters.
+
+> **Note**: Hyperparameter adjustment is likely to be needed to make the algorithm converge.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | The Q-Learning algorithm is successfully adapted from the CartPole example, with minimal code modifications, and is able to solve the problem of capturing the flag under 200 steps. | A new Q-Learning algorithm has been adopted from the Internet, but is well-documented; or the existing algorithm is adopted, but does not reach the desired results. | The student was not able to successfully adopt any algorithm, but has made substantial steps towards the solution (implemented state discretization, Q-Table data structure, etc.) |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/it/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..36dcdb371
--- /dev/null
+++ b/translations/it/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/2-Gym/solution/R/README.md b/translations/it/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..0d01ffd48
--- /dev/null
+++ b/translations/it/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/8-Reinforcement/README.md b/translations/it/8-Reinforcement/README.md
new file mode 100644
index 000000000..6096b76c8
--- /dev/null
+++ b/translations/it/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# Introduction to reinforcement learning
+
+Reinforcement learning, RL, is seen as one of the basic machine learning paradigms, next to supervised learning and unsupervised learning. RL is all about decisions: delivering the right decisions or at least learning from them.
+
+Imagine you have a simulated environment such as the stock market. What happens if you impose a given regulation? Does it have a positive or negative effect? If something negative happens, you need to take this _negative reinforcement_, learn from it, and change course. If it's a positive outcome, you need to build on that _positive reinforcement_.
+
+
+
+> Peter and his friends need to escape the hungry wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+## Regional topic: Peter and the Wolf (Russia)
+
+[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) is a musical fairy tale written by the Russian composer [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). It is a story about the young pioneer Peter, who bravely goes out of his house to the forest clearing to chase the wolf. In this section, we will train machine learning algorithms that will help Peter:
+
+- **Explore** the surrounding area and build an optimal navigation map
+- **Learn** how to use a skateboard and balance on it, in order to move around faster.
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 Click the image above to listen to Peter and the Wolf by Prokofiev
+
+## Reinforcement learning
+
+In previous sections, you have seen two examples of machine learning problems:
+
+- **Supervised**, where we have datasets that suggest sample solutions to the problem we want to solve. [Classification](../4-Classification/README.md) and [regression](../2-Regression/README.md) are supervised learning tasks.
+- **Unsupervised**, in which we do not have labeled training data. The main example of unsupervised learning is [Clustering](../5-Clustering/README.md).
+
+In this section, we will introduce you to a new type of learning problem that does not require labeled training data. There are several types of such problems:
+
+- **[Semi-supervised learning](https://wikipedia.org/wiki/Semi-supervised_learning)**, where we have a lot of unlabeled data that can be used to pre-train the model.
+- **[Reinforcement learning](https://wikipedia.org/wiki/Reinforcement_learning)**, in which an agent learns how to behave by performing experiments in some simulated environment.
+
+### Example - computer game
+
+Suppose you want to teach a computer to play a game, such as chess, or [Super Mario](https://wikipedia.org/wiki/Super_Mario). For the computer to play a game, we need it to predict which move to make in each of the game states. While this may seem like a classification problem, it is not - because we do not have a dataset with states and corresponding actions. While we may have some data, like recordings of existing chess matches or players playing Super Mario, it is likely that the data will not sufficiently cover a large enough number of possible states.
+
+Instead of looking for existing game data, **Reinforcement Learning** (RL) is based on the idea of *making the computer play* many times and observing the result. Thus, to apply Reinforcement Learning, we need two things:
+
+- **An environment** and **a simulator** which allow us to play a game many times. This simulator would define all the game rules as well as the possible states and actions.
+
+- **A reward function**, which tells us how well we did during each move or game.
+
+The main difference between other types of machine learning and RL is that in RL we typically do not know whether we win or lose until we finish the game. Thus, we cannot say whether a certain move alone is good or not - we only receive a reward at the end of the game. And our goal is to design algorithms that will allow us to train a model under uncertain conditions. We will learn about one RL algorithm called **Q-learning**.
+
+## Lessons
+
+1. [Introduction to reinforcement learning and Q-Learning](1-QLearning/README.md)
+2. [Using a gym simulation environment](2-Gym/README.md)
+
+## Credits
+
+"Introduction to Reinforcement Learning" was written with ♥️ by [Dmitry Soshnikov](http://soshnikov.com)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/9-Real-World/1-Applications/README.md b/translations/it/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..8785aa769
--- /dev/null
+++ b/translations/it/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# Postscript: Machine learning in the real world
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this curriculum, you have learned many ways to prepare data for training and create machine learning models. You built a series of classic regression, clustering, classification, natural language processing, and time series models. Congratulations! Now, you might be wondering what it's all for... what are the real world applications for these models?
+
+While a lot of interest in industry has been garnered by AI, which usually leverages deep learning, there are still valuable applications for classical machine learning models. You might even use some of these applications today! In this lesson, you'll explore how eight different industries and subject-matter domains use these types of models to make their applications more performant, reliable, intelligent, and valuable to users.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 Finance
+
+The finance sector offers many opportunities for machine learning. Many problems in this area lend themselves to being modeled and solved by using ML.
+
+### Credit card fraud detection
+
+We learned about [k-means clustering](../../5-Clustering/2-K-Means/README.md) earlier in the course, but how can it be used to solve problems related to credit card fraud?
+
+K-means clustering comes in handy during a credit card fraud detection technique called **outlier detection**. Outliers, or deviations in observations about a set of data, can tell us if a credit card is being used in a normal capacity or if something unusual is going on. As shown in the paper linked below, you can sort credit card data using a k-means clustering algorithm and assign each transaction to a cluster based on how much of an outlier it appears to be. Then, you can evaluate the riskiest clusters to distinguish fraudulent from legitimate transactions.
+[Reference](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
+
+### Wealth management
+
+In wealth management, an individual or firm handles investments on behalf of their clients. Their job is to sustain and grow wealth in the long term, so it is essential to choose investments that perform well.
+
+One way to evaluate how a particular investment performs is through statistical regression. [Linear regression](../../2-Regression/1-Tools/README.md) is a valuable tool for understanding how a fund performs relative to some benchmark. We can also deduce whether or not the results of the regression are statistically significant, or how much they would affect a client's investments. You could even further expand your analysis using multiple regression, where additional risk factors can be taken into account. For an example of how this would work for a specific fund, check out the paper below on evaluating fund performance using regression.
+[Reference](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 Education
+
+The education sector is also a very interesting area where ML can be applied. There are interesting problems to be tackled, such as detecting cheating on tests or essays, or managing bias, intentional or not, in the grading process.
+
+### Predicting student behavior
+
+[Coursera](https://coursera.com), an online open course provider, has a great tech blog where they discuss many engineering decisions. In this case study, they plotted a regression line to try to explore any correlation between a low NPS (Net Promoter Score) rating and course retention or drop-off.
+[Reference](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### Mitigating bias
+
+[Grammarly](https://grammarly.com), a writing assistant that checks for spelling and grammar errors, uses sophisticated [natural language processing systems](../../6-NLP/README.md) throughout its products. They published an interesting case study in their tech blog about how they dealt with gender bias in machine learning, which you learned about in our [introductory fairness lesson](../../1-Introduction/3-fairness/README.md).
+[Reference](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 Retail
+
+The retail sector can definitely benefit from the use of ML, with everything from creating a better customer journey to stocking inventory in an optimal way.
+
+### Personalizing the customer journey
+
+At Wayfair, a company that sells home goods like furniture, helping customers find the right products for their taste and needs is paramount. In this article, engineers from the company describe how they use ML and NLP to "surface the right results for customers". Notably, their Query Intent Engine has been built to use entity extraction, classifier training, asset and opinion extraction, and sentiment tagging on customer reviews. This is a classic use case of how NLP works in online retail.
+[Reference](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### Inventory management
+
+Innovative, nimble companies like [StitchFix](https://stitchfix.com), a box service that ships clothing to consumers, rely heavily on ML for recommendations and inventory management. Their styling teams work together with their merchandising teams, in fact: "one of our data scientists tinkered with a genetic algorithm and applied it to apparel to predict what would be a successful piece of clothing that doesn't exist today. We brought that to the merchandise team and now they can use that as a tool."
+[Reference](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 Sanità
+
+Il settore sanitario può sfruttare il ML per ottimizzare le attività di ricerca e anche problemi logistici come la riammissione dei pazienti o la prevenzione della diffusione delle malattie.
+
+### Gestione delle sperimentazioni cliniche
+
+La tossicità nelle sperimentazioni cliniche è una preoccupazione importante per i produttori di farmaci. Quanto è tollerabile la tossicità? In questo studio, l'analisi di vari metodi di sperimentazione clinica ha portato allo sviluppo di un nuovo approccio per prevedere le probabilità di esiti delle sperimentazioni cliniche. In particolare, sono stati in grado di utilizzare la foresta casuale per produrre un [classificatore](../../4-Classification/README.md) in grado di distinguere tra gruppi di farmaci.
+[Reference](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### Gestione della riammissione ospedaliera
+
+L'assistenza ospedaliera è costosa, specialmente quando i pazienti devono essere riammessi. Questo documento discute di un'azienda che utilizza il ML per prevedere il potenziale di riammissione utilizzando algoritmi di [clustering](../../5-Clustering/README.md). Questi cluster aiutano gli analisti a "scoprire gruppi di riammissioni che possono condividere una causa comune".
+[Reference](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### Gestione delle malattie
+
+La recente pandemia ha messo in luce i modi in cui il machine learning può aiutare a fermare la diffusione delle malattie. In questo articolo, riconoscerai l'uso di ARIMA, curve logistiche, regressione lineare e SARIMA. "Questo lavoro è un tentativo di calcolare il tasso di diffusione di questo virus e quindi di prevedere i decessi, le guarigioni e i casi confermati, in modo che possa aiutarci a prepararci meglio e sopravvivere."
+[Reference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 Ecologia e tecnologia verde
+
+La natura e l'ecologia consistono in molti sistemi sensibili dove l'interazione tra animali e natura è al centro dell'attenzione. È importante essere in grado di misurare accuratamente questi sistemi e agire in modo appropriato se succede qualcosa, come un incendio boschivo o una diminuzione della popolazione animale.
+
+### Gestione delle foreste
+
+Hai imparato riguardo al [Reinforcement Learning](../../8-Reinforcement/README.md) nelle lezioni precedenti. Può essere molto utile quando si cerca di prevedere i modelli in natura. In particolare, può essere utilizzato per monitorare problemi ecologici come gli incendi boschivi e la diffusione di specie invasive. In Canada, un gruppo di ricercatori ha utilizzato il Reinforcement Learning per costruire modelli di dinamiche degli incendi boschivi a partire dalle immagini satellitari. Utilizzando un innovativo "processo di diffusione spaziale (SSP)", hanno immaginato un incendio boschivo come "l'agente in qualsiasi cella nel paesaggio." "Il set di azioni che il fuoco può intraprendere da una posizione in qualsiasi momento include la diffusione verso nord, sud, est o ovest o non diffondersi.
+
+Questo approccio inverte la configurazione usuale del RL poiché le dinamiche del corrispondente Markov Decision Process (MDP) sono una funzione nota per la diffusione immediata degli incendi." Leggi di più sugli algoritmi classici utilizzati da questo gruppo al link di seguito.
+[Reference](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### Animal movement sensing
+
+While deep learning has created a revolution in visually tracking animal movements (you can build your own [polar bear tracker](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) here), classic ML still has a place in this task.
+
+Sensors to track the movements of farm animals and IoT make use of this type of visual processing, but more basic ML techniques are useful to preprocess data. For example, in this paper, sheep postures were monitored and analyzed using various classifier algorithms. You might recognize the ROC curve on page 335.
+[Reference](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
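+
+If ROC curves need a refresher, the hedged sketch below reproduces the idea on synthetic data: train a binary classifier on made-up features, then trace the true-positive rate against the false-positive rate across decision thresholds. The dataset and logistic-regression choice are assumptions; the paper compares several classifier algorithms.
+
+```python
+# Minimal ROC-curve sketch on synthetic data (illustrative, not the paper's).
+from sklearn.datasets import make_classification
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import auc, roc_curve
+from sklearn.model_selection import train_test_split
+
+X, y = make_classification(n_samples=500, n_features=6, random_state=0)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+clf = LogisticRegression().fit(X_train, y_train)
+scores = clf.predict_proba(X_test)[:, 1]  # probability of the positive class
+
+fpr, tpr, _ = roc_curve(y_test, scores)  # one (FPR, TPR) point per threshold
+print(f"AUC: {auc(fpr, tpr):.3f}")
+```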
+
+## ⚡️ Energy management
+
+In our lessons on [time series forecasting](../../7-TimeSeries/README.md), we invoked the concept of smart parking meters to generate revenue for a town based on understanding supply and demand. This article discusses in detail how clustering, regression, and time series forecasting combined to help predict future energy use in Ireland, based on smart metering.
+[Reference](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 Insurance
+
+Insurance is another sector that uses ML to construct and optimize viable financial and actuarial models.
+
+### Volatility management
+
+MetLife, a life insurance provider, is forthcoming about the way it analyzes and mitigates volatility in its financial models. In this article you will notice binary and ordinal classification visualizations. You will also discover forecasting visualizations.
+[Reference](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 Arts, culture, and literature
+
+In the arts, for example in journalism, there are many interesting problems. Detecting fake news is a huge one, as it has been proven to influence people's opinions and even to topple democracies. Museums can also benefit from using ML in everything from finding links between artifacts to resource planning.
+
+### Fake news detection
+
+Detecting fake news has become a game of cat and mouse in today's media. In this article, researchers suggest that a system combining several of the ML techniques we have studied can be tested and the best model deployed: "This system is based on natural language processing to extract features from the data and then these features are used for the training of machine learning classifiers such as Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), and Logistic Regression (LR)."
+[Reference](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
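+
+A hedged sketch of that recipe follows: extract TF-IDF features from text, then compare the five classifiers the quote names via cross-validation. The four-headline toy corpus and default hyperparameters are stand-ins; the paper's feature extraction and dataset are more elaborate.
+
+```python
+# Sketch of the quoted pipeline: NLP features + five classic classifiers (toy corpus).
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression, SGDClassifier
+from sklearn.model_selection import cross_val_score
+from sklearn.naive_bayes import MultinomialNB
+from sklearn.svm import LinearSVC
+
+texts = ["shocking cure doctors hate", "miracle drink stops virus",
+         "city council approves budget", "rainfall below average this year"] * 25
+labels = [1, 1, 0, 0] * 25  # 1 = fake, 0 = real (synthetic stand-in labels)
+
+X = TfidfVectorizer().fit_transform(texts)
+
+models = {
+    "NB": MultinomialNB(),
+    "SVM": LinearSVC(),
+    "RF": RandomForestClassifier(random_state=0),
+    "SGD": SGDClassifier(random_state=0),
+    "LR": LogisticRegression(),
+}
+for name, model in models.items():
+    print(name, cross_val_score(model, X, labels, cv=5).mean())
+```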
+
+This article shows how combining different ML domains can produce interesting results that can help stop fake news from spreading and causing real damage; in this case, the impetus was the spread of rumors about COVID treatments that incited mob violence.
+
+### ML in museums
+
+Museums are at the cusp of an AI revolution in which cataloging and digitizing collections and finding links between artifacts is becoming easier as technology advances. Projects such as [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) are helping unlock the mysteries of inaccessible collections such as the Vatican Archives. But the business side of museums benefits from ML models as well.
+
+For example, the Art Institute of Chicago built models to predict what audiences are interested in and when they will attend expositions. The goal is to create individualized and optimized visitor experiences each time the user visits the museum. "During fiscal 2017, the model predicted attendance and admissions within 1 percent of accuracy, says Andrew Simnick, senior vice president at the Art Institute."
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 Marketing
+
+### Customer segmentation
+
+The most effective marketing strategies target customers in different ways based on various groupings. In this article, the uses of clustering algorithms to support differentiated marketing are discussed. Differentiated marketing helps companies improve brand recognition, reach more customers, and earn more money.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
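+
+As a hedged sketch of clustering-driven segmentation: group customers by two invented features (annual spend and visit frequency) with k-means, then profile each segment. The features and k=3 are illustrative assumptions, not a recipe from the article.
+
+```python
+# K-means customer segmentation sketch (invented features, illustrative k).
+import numpy as np
+import pandas as pd
+from sklearn.cluster import KMeans
+from sklearn.preprocessing import StandardScaler
+
+rng = np.random.default_rng(1)
+customers = pd.DataFrame({
+    "annual_spend": rng.gamma(2.0, 500.0, size=300),
+    "visits_per_year": rng.poisson(6, size=300),
+})
+
+X = StandardScaler().fit_transform(customers)  # scale so both features count equally
+customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
+
+# Per-segment averages to hand to the marketing team.
+print(customers.groupby("segment").mean())
+```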
+
+## 🚀 Challenge
+
+Identify another sector that benefits from some of the techniques you learned in this curriculum, and discover how it uses ML.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## Review & Self Study
+
+The Wayfair data science team has several interesting videos on how they use ML at their company. It's worth [taking a look](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## Assignment
+
+[An ML scavenger hunt](assignment.md)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/9-Real-World/1-Applications/assignment.md b/translations/it/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..479cbc032
--- /dev/null
+++ b/translations/it/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# An ML Scavenger Hunt
+
+## Instructions
+
+In this lesson, you learned about many real-life use cases that were solved using classical ML. While the use of deep learning, new techniques and tools in AI, and leveraging neural networks has helped accelerate the production of tools to help in these sectors, classic ML using the techniques in this curriculum still holds great value.
+
+In this assignment, imagine that you are participating in a hackathon. Use what you learned in the curriculum to propose a solution using classic ML to solve a problem in one of the sectors discussed in this lesson. Create a presentation where you discuss how you will implement your idea. Bonus points if you can gather sample data and build an ML model to support your concept!
+
+## Rubric
+
+| Criteria | Exemplary                                                            | Adequate                                           | Needs Improvement      |
+| -------- | -------------------------------------------------------------------- | -------------------------------------------------- | ---------------------- |
+|          | A PowerPoint presentation is presented - bonus for building a model  | A non-innovative, basic presentation is presented  | The work is incomplete |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/9-Real-World/2-Debugging-ML-Models/README.md b/translations/it/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..972495162
--- /dev/null
+++ b/translations/it/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,130 @@
+# Postscript: Model Debugging in Machine Learning using Responsible AI dashboard components
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+Machine learning impacts our everyday lives. AI is finding its way into some of the most important systems that affect us as individuals as well as our society, from healthcare, finance, education, and employment. For instance, systems and models are involved in daily decision-making tasks, such as health care diagnoses or detecting fraud. Consequently, the advancements in AI along with its accelerated adoption are being met with evolving societal expectations and growing regulation. We constantly see areas where AI systems miss expectations and expose new challenges, and governments are starting to regulate AI solutions. So, it is important that these models are analyzed to provide fair, reliable, inclusive, transparent, and accountable outcomes for everyone.
+
+In this curriculum, we will look at practical tools that can be used to assess whether a model has responsible AI issues. Traditional machine learning debugging techniques tend to be based on quantitative calculations such as aggregated accuracy or average error loss. Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion, or disproportionately represents such demographics. What about when the model's output is interpreted to favor some demographic? This can introduce an over- or under-representation of these sensitive feature groups, resulting in fairness, inclusiveness, or reliability issues from the model. Another factor is that machine learning models are considered black boxes, which makes it hard to understand and explain what drives a model's prediction. All of these are challenges data scientists and AI developers face when they do not have adequate tools to debug and assess a model's fairness or trustworthiness.
+
+In this lesson, you will learn about debugging your models using:
+
+- **Error Analysis**: identify where in your data distribution the model has high error rates.
+- **Model Overview**: perform comparative analysis across different data cohorts to discover disparities in your model's performance metrics.
+- **Data Analysis**: investigate where there could be over- or under-representation of your data that can skew your model to favor one data demographic over another.
+- **Feature Importance**: understand which features are driving your model's predictions on a global or local level.
+
+## Prerequisite
+
+As a prerequisite, please review [Responsible AI tools for developers](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+
+## Error Analysis
+
+Traditional model performance metrics used for measuring accuracy are mostly calculations based on correct vs incorrect predictions. For example, determining that a model is accurate 89% of the time with an error loss of 0.001 can be considered good performance. However, errors are often not distributed uniformly in your underlying dataset. You may get an 89% model accuracy score but discover that there are different regions of your data for which the model is failing 42% of the time. The consequence of these failure patterns with certain data groups can lead to fairness or reliability issues. It is essential to understand areas where the model is performing well or not. The data regions with a high number of inaccuracies may turn out to be an important data demographic.
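+
+To see why aggregate accuracy can mislead, here is a hedged pandas sketch on synthetic predictions with an invented 'age_band' cohort feature: overall error looks low, but grouping reveals one cohort failing about 40% of the time, the kind of pattern the error tree surfaces automatically.
+
+```python
+# Per-cohort error rates: aggregate accuracy hides pockets of failure (toy data).
+import pandas as pd
+
+df = pd.DataFrame({
+    "age_band": ["<30"] * 40 + ["30-60"] * 40 + [">60"] * 20,
+    "y_true": [1] * 100,
+    "y_pred": [1] * 38 + [0] * 2 + [1] * 39 + [0] * 1 + [1] * 12 + [0] * 8,
+})
+
+df["error"] = (df["y_true"] != df["y_pred"]).astype(int)
+print("overall error rate:", df["error"].mean())   # ~0.11: looks fine in aggregate
+print(df.groupby("age_band")["error"].mean())      # one cohort fails 40% of the time
+```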
+
+
+
+The Error Analysis component on the RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. This is useful in identifying features or areas where there is a high error rate in your dataset. By seeing where most of the model's inaccuracies come from, you can start investigating the root cause. You can also create cohorts of data to perform analysis on. These data cohorts help in the debugging process to determine why model performance is good in one cohort but erroneous in another.
+
+
+
+The visual indicators on the tree map help locate the problem areas more quickly. For instance, the darker the shade of red of a tree node, the higher the error rate.
+
+The heat map is another visualization functionality that users can use to investigate the error rate using one or two features, to find a contributor to the model errors across an entire dataset or cohorts.
+
+
+
+Use error analysis when you need to:
+
+* Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.
+* Break down the aggregate performance metrics to automatically discover erroneous cohorts and inform your targeted mitigation steps.
+
+## Model Overview
+
+Evaluating the performance of a machine learning model requires a holistic understanding of its behavior. This can be achieved by reviewing more than one metric, such as error rate, accuracy, recall, precision, or MAE (Mean Absolute Error), to find disparities among performance metrics. One performance metric may look great while inaccuracies are exposed in another. In addition, comparing the metrics for disparities across the entire dataset or across cohorts helps shed light on where the model is performing well or not. This is especially important for seeing the model's performance among sensitive vs insensitive features (e.g., a patient's race, gender, or age): discovering that the model is more erroneous in a cohort with sensitive features can reveal potential unfairness the model may have.
+
+The Model Overview component of the RAI dashboard not only helps in analyzing the performance metrics of the data representation in a cohort, it also gives users the ability to compare the model's behavior across different cohorts.
+
+
+
+The component's feature-based analysis functionality allows users to narrow down data subgroups within a particular feature to identify anomalies on a granular level. For example, the dashboard has built-in intelligence to automatically generate cohorts for a user-selected feature (e.g., *"time_in_hospital < 3"* or *"time_in_hospital >= 7"*). This enables a user to isolate a particular feature from a larger data group to see if it is a key influencer of the model's erroneous outcomes.
+
+
+
+The Model Overview component supports two classes of disparity metrics:
+
+**Disparity in model performance**: These metrics calculate the disparity (difference) in the values of the selected performance metric across subgroups of data. Here are a few examples:
+
+* Disparity in accuracy rate
+* Disparity in error rate
+* Disparity in precision
+* Disparity in recall
+* Disparity in mean absolute error (MAE)
+
+**Disparity in selection rate**: This metric contains the difference in selection rate (favorable prediction) among subgroups. An example of this is the disparity in loan approval rates. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or the distribution of prediction values (in regression).
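+
+Here is a hedged sketch of both disparity classes on synthetic loan-style data (the 'group' feature, labels, and predictions are invented): compute accuracy and selection rate per subgroup, then report the max-min gap as the disparity.
+
+```python
+# Performance and selection-rate disparity across subgroups (toy data).
+import pandas as pd
+
+df = pd.DataFrame({
+    "group": ["A"] * 50 + ["B"] * 50,   # assumed sensitive feature
+    "y_true": ([1] * 30 + [0] * 20) * 2,
+    "y_pred": [1] * 28 + [0] * 22 + [1] * 18 + [0] * 32,
+})
+
+stats = df.groupby("group").apply(lambda g: pd.Series({
+    "accuracy": (g["y_true"] == g["y_pred"]).mean(),
+    "selection_rate": (g["y_pred"] == 1).mean(),  # fraction predicted favorable
+}))
+print(stats)
+print("accuracy disparity:", stats["accuracy"].max() - stats["accuracy"].min())
+print("selection rate disparity:",
+      stats["selection_rate"].max() - stats["selection_rate"].min())
+```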
+
+## Data Analysis
+
+> "If you torture the data long enough, it will confess to anything" - Ronald Coase
+
+This statement sounds extreme, but it is true that data can be manipulated to support any conclusion. Such manipulation can sometimes happen unintentionally. As humans, we all have biases, and it is often difficult to consciously know when we are introducing bias into data. Guaranteeing fairness in AI and machine learning remains a complex challenge.
+
+Data is a big blind spot for traditional model performance metrics. You may have high accuracy scores, but this does not always reflect the underlying data bias that could be in your dataset. For example, if a dataset of employees has 27% women in executive positions in a company and 73% men at the same level, a job-advertising AI model trained on this data may target a mostly male audience for senior-level job positions. Having this imbalance in the data skewed the model's prediction to favor one gender, revealing a fairness issue with gender bias in the AI model.
+
+The Data Analysis component on the RAI dashboard helps to identify areas of over- and under-representation in the dataset. It helps users diagnose the root cause of errors and fairness issues introduced by data imbalances or by the lack of representation of a particular data group. This gives users the ability to visualize datasets based on predicted and actual outcomes, error groups, and specific features. Sometimes discovering an underrepresented data group can also reveal that the model is not learning well, hence the high inaccuracies. Having a model with data bias is not just a fairness issue but shows that the model is not inclusive or reliable.
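+
+Before reaching for a dashboard, a quick hedged check like the one below can already expose such imbalances: cross-tabulate a sensitive feature against another column. The column names and counts are invented to mirror the 27%/73% executive example above.
+
+```python
+# Quick representation check on a sensitive feature (synthetic counts that
+# mirror the 27% / 73% executive example above).
+import pandas as pd
+
+df = pd.DataFrame({
+    "gender": ["F"] * 27 + ["M"] * 73 + ["F"] * 120 + ["M"] * 80,
+    "level": ["executive"] * 100 + ["staff"] * 200,
+})
+
+# Share of each gender within each job level; a large skew here can bias
+# any model trained downstream on this data.
+print(pd.crosstab(df["level"], df["gender"], normalize="index"))
+```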
+
+
+
+Use data analysis when you need to:
+
+* Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts).
+* Understand the distribution of your dataset across different cohorts and feature groups.
+* Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution.
+* Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors.
+
+## Model Interpretability
+
+Machine learning models tend to be black boxes. Understanding which key data features drive a model's prediction can be challenging. It is important to provide transparency as to why a model makes a certain prediction. For example, if an AI system predicts that a diabetic patient is at risk of being readmitted to a hospital within 30 days, it should be able to provide the supporting data that led to its prediction. Having supporting data indicators brings transparency and helps clinicians or hospitals make well-informed decisions. In addition, being able to explain why a model made a prediction for an individual patient enables accountability with health regulations. When you use machine learning models in ways that affect people's lives, it is crucial to understand and explain what influences a model's behavior. Model explainability and interpretability help answer questions in scenarios such as:
+
+* Model debugging: Why did my model make this mistake? How can I improve my model?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
+* Regulatory compliance: Does my model satisfy legal requirements?
+
+The Feature Importance component of the RAI dashboard helps you debug and get a comprehensive understanding of how a model makes predictions. It is also a useful tool for machine learning professionals and decision-makers to explain and show evidence of the features influencing a model's behavior for regulatory compliance. Users can explore both global and local explanations to validate which features drive a model's predictions. Global explanations list the top features that affected the model's overall prediction. Local explanations display which features led to a model's prediction for an individual case. The ability to evaluate local explanations is also helpful in debugging or auditing a specific case in order to better understand and interpret why a model made an accurate or inaccurate prediction.
+
+
+
+* Global explanations: For example, what features affect the overall behavior of a diabetes hospital-readmission model?
+* Local explanations: For example, why was a diabetic patient over 60 years old with prior hospitalizations predicted to be readmitted or not readmitted to a hospital within 30 days?
+
+When examining a model's performance across different cohorts, Feature Importance shows what level of impact a feature has across the cohorts. It helps reveal anomalies when comparing the level of influence the feature has in driving a model's erroneous predictions. The Feature Importance component can show which values in a feature positively or negatively influenced the model's outcome. For instance, if a model made an inaccurate prediction, the component gives you the ability to drill down and pinpoint which features or feature values drove the prediction. This level of detail helps not just in debugging but also provides transparency and accountability in auditing situations. Finally, the component can help you identify fairness issues. To illustrate, if a sensitive feature such as ethnicity or gender is highly influential in driving a model's predictions, this could be a sign of race or gender bias in the model.
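+
+For a hedged, library-agnostic taste of global feature importance, the sketch below uses scikit-learn's permutation importance on a synthetic classification task: shuffle one feature at a time and measure how much test accuracy drops. The dataset and model are invented, and this is not necessarily the explanation technique the RAI dashboard uses internally.
+
+```python
+# Global feature importance via permutation on synthetic data (illustrative).
+from sklearn.datasets import make_classification
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.inspection import permutation_importance
+from sklearn.model_selection import train_test_split
+
+X, y = make_classification(n_samples=600, n_features=5, n_informative=3,
+                           random_state=0)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
+
+# Shuffle each feature and record the mean drop in accuracy: the bigger the
+# drop, the more globally important the feature.
+result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
+for i, score in enumerate(result.importances_mean):
+    print(f"feature_{i}: {score:.3f}")
+```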
+
+
+
+Use interpretability when you need to:
+
+* Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
+* Approach debugging your model by understanding it first and identifying whether the model is using healthy features or merely false correlations.
+* Uncover potential sources of unfairness by understanding whether the model is basing predictions on sensitive features or on features that are highly correlated with them.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
+* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
+
+## Conclusion
+
+All of the RAI dashboard components are practical tools to help you build machine learning models that are less harmful and more trustworthy to society. They improve the prevention of threats to human rights, of discriminating against or excluding certain groups from life opportunities, and of the risk of physical or psychological injury. They also help build trust in your model's decisions by generating local explanations to illustrate their outcomes. Some of the potential harms can be classified as:
+
+- **Allocation**, if, for example, a gender or ethnicity is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but reality is much more complex, it leads to a poorly performing service.
+- **Stereotyping**. Associating a given group with pre-assigned attributes.
+- **Denigration**. Unfairly criticizing and labeling something or someone.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that is contributing to harm.
+
+### Azure RAI dashboard
+
+The [Azure RAI dashboard](https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu) is built on open-source tools developed by leading academic institutions and organizations, including Microsoft, and is instrumental for data scientists and AI developers to better understand model behavior and to discover and mitigate undesirable issues from AI models.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/it/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..9f524ce99
--- /dev/null
+++ b/translations/it/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI (RAI) dashboard
+
+## Instructions
+
+In this lesson you learned about the RAI dashboard, a suite of components built on "open-source" tools to help data scientists perform error analysis, data exploration, fairness assessment, model interpretability, counterfactual/what-if assessments, and causal analysis on AI systems. For this assignment, explore some of the RAI dashboard's sample [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | A paper or PowerPoint presentation is presented discussing the RAI dashboard's components, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/9-Real-World/README.md b/translations/it/9-Real-World/README.md
new file mode 100644
index 000000000..83f22408b
--- /dev/null
+++ b/translations/it/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# Postscript: Real-world applications of classic machine learning
+
+In this section of the curriculum, you will be introduced to some real-world applications of classical ML. We have scoured the internet to find whitepapers and articles about applications that have used these strategies, avoiding neural networks, deep learning, and AI as much as possible. Learn about how ML is used in business systems, ecological applications, finance, arts and culture, and more.
+
+> Photo by Alexis Fauvet on Unsplash
+
+## Lesson
+
+1. [Real-World Applications for ML](1-Applications/README.md)
+2. [Model Debugging in Machine Learning using Responsible AI dashboard components](2-Debugging-ML-Models/README.md)
+
+## Credits
+
+"Real-World Applications" was written by a team of folks, including [Jen Looper](https://twitter.com/jenlooper) and [Ornella Altunyan](https://twitter.com/ornelladotcom).
+
+"Model Debugging in Machine Learning using Responsible AI dashboard components" was written by [Ruth Yakubu](https://twitter.com/ruthieyakubu)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/CODE_OF_CONDUCT.md b/translations/it/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..ce2690e68
--- /dev/null
+++ b/translations/it/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft Open Source Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+Resources:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/CONTRIBUTING.md b/translations/it/CONTRIBUTING.md
new file mode 100644
index 000000000..1fb33e17f
--- /dev/null
+++ b/translations/it/CONTRIBUTING.md
@@ -0,0 +1,14 @@
+# Contributing
+
+This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
+
+> Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
+
+When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/README.md b/translations/it/README.md
new file mode 100644
index 000000000..57c19523b
--- /dev/null
+++ b/translations/it/README.md
@@ -0,0 +1,155 @@
+[License](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[Contributors](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[Issues](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[Pull requests](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[PRs welcome](http://makeapullrequest.com)
+
+[Watchers](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[Forks](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[Stars](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[Discord](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# Machine Learning for Beginners - A Curriculum
+
+> 🌍 Travel around the world as we explore Machine Learning through world cultures 🌍
+
+Advocates at Microsoft are pleased to offer a 12-week, 26-lesson curriculum all about **Machine Learning**. In this curriculum, you will learn what is sometimes called **classic machine learning**, using primarily Scikit-learn as a library and avoiding deep learning, which is covered in our [AI for Beginners curriculum](https://aka.ms/ai4beginners). Pair these lessons with our ['Data Science for Beginners' curriculum](https://aka.ms/ds4beginners) as well!
+
+Travel with us around the world as we apply these classic techniques to data from many areas of the world. Each lesson includes pre- and post-lesson quizzes, written instructions to complete the lesson, a solution, an assignment, and more. Our project-based pedagogy allows you to learn while building, a proven way to make new skills stick.
+
+**✍️ Hearty thanks to our authors** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu, and Amy Boyd
+
+**🎨 Thanks as well to our illustrators** Tomomi Imura, Dasani Madipalli, and Jen Looper
+
+**🙏 Special thanks 🙏 to our Microsoft Student Ambassador authors, reviewers, and content contributors**, notably Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, and Snigdha Agarwal
+
+**🤩 Extra gratitude to Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, and Vidushi Gupta for our R lessons!**
+
+# Getting Started
+
+Follow these steps:
+1. **Fork the Repository**: Click on the "Fork" button in the top-right corner of this page.
+2. **Clone the Repository**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [find all the additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[Students](https://aka.ms/student-page)**, to use this curriculum, fork the entire repo to your own GitHub account and complete the exercises on your own or with a group:
+
+- Start with a pre-lecture quiz.
+- Read the lecture and complete the activities, pausing and reflecting at each knowledge check.
+- Try to create the projects by comprehending the lessons rather than running the solution code; however, that code is available in the `/solution` folders in each project-oriented lesson.
+- Take the post-lecture quiz.
+- Complete the challenge.
+- Complete the assignment.
+- After completing a lesson group, visit the [Discussion Board](https://github.com/microsoft/ML-For-Beginners/discussions) and "learn out loud" by filling out the appropriate PAT rubric. A 'PAT' is a Progress Assessment Tool, a rubric you fill out to further your learning. You can also react to other PATs so we can learn together.
+
+> For further study, we recommend following these [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott) modules and learning paths.
+
+**Teachers**, we have [included some suggestions](for-teachers.md) on how to use this curriculum.
+
+---
+
+## Video walkthroughs
+
+Some of the lessons are available as short-form videos. You can find all of these in-line in the lessons, or on the [ML for Beginners playlist on the Microsoft Developer YouTube channel](https://aka.ms/ml-beginners-videos) by clicking the link below.
+
+[ML for Beginners video playlist](https://aka.ms/ml-beginners-videos)
+
+---
+
+## Meet the Team
+
+[Promo video](https://youtu.be/Tj1XWrDSYJU "Promo video")
+
+**Gif by** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 Click the link above for a video about the project and the folks who created it!
+
+---
+
+## Pedagogy
+
+We have chosen two pedagogical tenets while building this curriculum: ensuring that it is **project-based** and that it includes **frequent quizzes**. In addition, this curriculum has a common **theme** to give it cohesion.
+
+By ensuring that the content aligns with projects, the process is made more engaging for students and retention of concepts is augmented. In addition, a low-stakes quiz before a class sets the student's intention towards learning a topic, while a second quiz after class ensures further retention. This curriculum was designed to be flexible and fun and can be taken in whole or in part. The projects start small and become increasingly complex by the end of the 12-week cycle. This curriculum also includes a postscript on real-world applications of ML, which can be used as extra credit or as a basis for discussion.
+
+> Find our [Code of Conduct](CODE_OF_CONDUCT.md), [Contributing](CONTRIBUTING.md), and [Translation](TRANSLATIONS.md) guidelines. We welcome your constructive feedback!
+
+## Each lesson includes
+
+- optional sketchnote
+- optional supplemental video
+- video walkthrough (some lessons only)
+- pre-lecture warmup quiz
+- written lesson
+- for project-based lessons, step-by-step guides on how to build the project
+- knowledge checks
+- a challenge
+- supplemental reading
+- an assignment
+- a post-lecture quiz
+
+> **A note about languages**: These lessons are primarily written in Python, but many are also available in R. To complete an R lesson, go to the `/solution` folder and look for R lessons. They include an .rmd extension that represents an **R Markdown** file, which can be simply defined as an embedding of `code chunks` (of R or other languages) and a `YAML header` (that guides how to format outputs such as PDF) in a `Markdown document`. As such, it serves as an exemplary authoring framework for data science since it allows you to combine your code, its output, and your thoughts by writing them down in Markdown. Moreover, R Markdown documents can be rendered to output formats such as PDF, HTML, or Word.
+
+> **A note about quizzes**: All quizzes are contained in the [Quiz App folder](../../quiz-app), for 52 total quizzes of three questions each. They are linked from within the lessons, but the quiz app can be run locally; follow the instructions in the `quiz-app` folder to locally host or deploy to Azure.
+
+| Lesson Number | Topic | Lesson Grouping | Learning Objectives | Linked Lesson | Author |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | Introduction to machine learning | [Introduction](1-Introduction/README.md) | Learn the basic concepts behind machine learning | [Lesson](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | The History of machine learning | [Introduction](1-Introduction/README.md) | Learn the history underlying this field | [Lesson](1-Introduction/2-history-of-ML/README.md) | Jen and Amy |
+| 03 | Fairness and machine learning | [Introduction](1-Introduction/README.md) | What are the important philosophical issues around fairness that students should consider when building and applying ML models? | [Lesson](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | Techniques for machine learning | [Introduction](1-Introduction/README.md) | What techniques do ML researchers use to build ML models? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | Chris and Jen |
+| 05 | Introduction to regression | [Regression](2-Regression/README.md) | Get started with Python and Scikit-learn for regression models | | |
+| 09 | A Web App 🔌 | [Web App](3-Web-App/README.md) | Build a web app to use your trained model | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | Introduction to classification | [Classification](4-Classification/README.md) | Clean, prep, and visualize your data; introduction to classification | | |
+| 13 | Delicious Asian and Indian cuisines 🍜 | [Classification](4-Classification/README.md) | Build a recommender web app using your model | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | Introduction to clustering | [Clustering](5-Clustering/README.md) | Clean, prep, and visualize your data; introduction to clustering | | |
+| 16 | Introduction to natural language processing ☕️ | [Natural language processing](6-NLP/README.md) | Learn the basics of NLP by building a simple bot | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | Common NLP tasks ☕️ | [Natural language processing](6-NLP/README.md) | Deepen your NLP knowledge by understanding the common tasks required when dealing with language structures | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | Translation and sentiment analysis ♥️ | [Natural language processing](6-NLP/README.md) | Translation and sentiment analysis with Jane Austen | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | Introduction to time series forecasting | [Time series](7-TimeSeries/README.md) | Introduction to time series forecasting | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ World Power Usage ⚡️ - time series forecasting with ARIMA | [Time series](7-TimeSeries/README.md) | Time series forecasting with ARIMA | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ World Power Usage ⚡️ - time series forecasting with SVR | [Time series](7-TimeSeries/README.md) | Time series forecasting with Support Vector Regressor | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | Introduction to reinforcement learning | [Reinforcement learning](8-Reinforcement/README.md) | Introduction to reinforcement learning with Q-Learning | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | Help Peter avoid the wolf! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | Reinforcement learning Gym | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | Real-world ML scenarios and applications | [ML in the Wild](9-Real-World/README.md) | Interesting and revealing real-world applications of classical ML | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | Model debugging in ML using the RAI dashboard | [ML in the Wild](9-Real-World/README.md) | Model debugging in machine learning using Responsible AI dashboard components | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [find all the additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## Offline access
+
+You can run this documentation offline by using [Docsify](https://docsify.js.org/#/). Fork this repo, [install Docsify](https://docsify.js.org/#/quickstart) on your local machine, and then in the root folder of this repo, type `docsify serve`. The website will be served on port 3000 on your localhost: `localhost:3000`.
+
+## PDF
+Find a PDF of the curriculum with links [here](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf).
+
+## Help Wanted
+
+Would you like to contribute a translation? Please read our [translation guidelines](TRANSLATIONS.md) and add a templated issue to manage the workload [here](https://github.com/microsoft/ML-For-Beginners/issues).
+
+## Other Curricula
+
+Our team produces other curricula! Check out:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101??WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/SECURITY.md b/translations/it/SECURITY.md
new file mode 100644
index 000000000..5736f81b6
--- /dev/null
+++ b/translations/it/SECURITY.md
@@ -0,0 +1,40 @@
+## Security
+
+Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, including [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
+
+If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott), please report it to us as described below.
+
+## Reporting Security Issues
+
+**Please do not report security vulnerabilities through public GitHub issues.**
+
+Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
+
+If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
+
+You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+  * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
+  * Full paths of the source file(s) related to the manifestation of the issue
+  * The location of the affected source code (tag/branch/commit or direct URL)
+  * Any special configuration required to reproduce the issue
+  * Step-by-step instructions to reproduce the issue
+  * Proof-of-concept or exploit code (if possible)
+  * Impact of the issue, including how an attacker might exploit it
+
+This information will help us triage your report more quickly.
+
+If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
+
+## Preferred Languages
+
+We prefer all communications to be in English.
+
+## Policy
+
+Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/SUPPORT.md b/translations/it/SUPPORT.md
new file mode 100644
index 000000000..8c884ad15
--- /dev/null
+++ b/translations/it/SUPPORT.md
@@ -0,0 +1,13 @@
+# Support
+## How to file issues and get help
+
+This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new ones to avoid duplicates. For new issues, file your bug or feature request as a new Issue.
+
+For help and questions about using this project, please file an issue.
+
+## Microsoft Support Policy
+
+Support for this repository is limited to the resources listed above.
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/TRANSLATIONS.md b/translations/it/TRANSLATIONS.md
new file mode 100644
index 000000000..9e0e02aac
--- /dev/null
+++ b/translations/it/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# Contribute by translating lessons
+
+We welcome translations for the lessons in this curriculum!
+## Guidelines
+
+There are folders in each lesson folder and lesson-introduction folder that contain the translated markdown files.
+
+> Note: please do not translate any code in the code sample files; the only things to translate are the README, assignments, and quizzes. Thanks!
+
+Translated files should follow this naming convention:
+
+**README._[language]_.md**
+
+where _[language]_ is a two-letter language abbreviation following the ISO 639-1 standard (e.g. `README.es.md` for Spanish and `README.nl.md` for Dutch).
+
+**assignment._[language]_.md**
+
+Similar to Readmes, please translate the assignments as well.
+
+> Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
+
+**Quizzes**
+
+1. Add your translation to the quiz app by adding a file here: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, with the proper naming convention (en.json, fr.json). **Please don't localize the words 'true' or 'false', however. Thanks!**
+
+2. Add your language code to the dropdown in the quiz app's App.vue file.
+
+3. Edit the quiz app's [translations index.js file](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) to add your language.
+
+4. Finally, edit ALL the quiz links in your translated README.md files to point directly to your translated quiz: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 becomes https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id
+
+**THANK YOU**
+
+We truly appreciate your efforts!
+
+**Disclaimer**:
+This document has been translated using AI-based machine translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/it/docs/_sidebar.md b/translations/it/docs/_sidebar.md
new file mode 100644
index 000000000..7e07d1a0a
--- /dev/null
+++ b/translations/it/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- Introduzione
+ - [Introduzione al Machine Learning](../1-Introduction/1-intro-to-ML/README.md)
+ - [Storia del Machine Learning](../1-Introduction/2-history-of-ML/README.md)
+ - [ML e Giustizia](../1-Introduction/3-fairness/README.md)
+ - [Tecniche di ML](../1-Introduction/4-techniques-of-ML/README.md)
+
+- Regressione
+ - [Strumenti del mestiere](../2-Regression/1-Tools/README.md)
+ - [Dati](../2-Regression/2-Data/README.md)
+ - [Regressione Lineare](../2-Regression/3-Linear/README.md)
+ - [Regressione Logistica](../2-Regression/4-Logistic/README.md)
+
+- Creare un'App Web
+ - [App Web](../3-Web-App/1-Web-App/README.md)
+
+- Classificazione
+ - [Introduzione alla Classificazione](../4-Classification/1-Introduction/README.md)
+ - [Classificatori 1](../4-Classification/2-Classifiers-1/README.md)
+ - [Classificatori 2](../4-Classification/3-Classifiers-2/README.md)
+ - [ML Applicato](../4-Classification/4-Applied/README.md)
+
+- Clustering
+ - [Visualizza i tuoi Dati](../5-Clustering/1-Visualize/README.md)
+ - [K-Means](../5-Clustering/2-K-Means/README.md)
+
+- NLP
+ - [Introduzione al NLP](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [Compiti di NLP](../6-NLP/2-Tasks/README.md)
+ - [Traduzione e Sentimento](../6-NLP/3-Translation-Sentiment/README.md)
+ - [Recensioni di Hotel 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [Recensioni di Hotel 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- Previsioni di Serie Temporali
+ - [Introduzione alle Previsioni di Serie Temporali](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- Apprendimento per Rinforzo
+ - [Q-Learning](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- ML nel Mondo Reale
+ - [Applicazioni](../9-Real-World/1-Applications/README.md)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su intelligenza artificiale. Sebbene ci impegniamo per garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatizzate possono contenere errori o imprecisioni. Il documento originale nella sua lingua madre dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali incomprensioni o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/for-teachers.md b/translations/it/for-teachers.md
new file mode 100644
index 000000000..a5cdd346d
--- /dev/null
+++ b/translations/it/for-teachers.md
@@ -0,0 +1,26 @@
+## Per Educatori
+
+Ti piacerebbe usare questo curriculum nella tua classe? Sentiti libero di farlo!
+
+Infatti, puoi usarlo direttamente su GitHub utilizzando GitHub Classroom.
+
+Per farlo, fai un fork di questo repo. Avrai bisogno di creare un repo per ogni lezione, quindi dovrai estrarre ogni cartella in un repo separato. In questo modo, [GitHub Classroom](https://classroom.github.com/classrooms) potrà gestire ogni lezione separatamente.
+
+Queste [istruzioni complete](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) ti daranno un'idea su come configurare la tua classe.
+
+## Usare il repo così com'è
+
+Se desideri utilizzare questo repo nella sua forma attuale, senza usare GitHub Classroom, puoi farlo. Dovrai comunicare ai tuoi studenti quale lezione seguire insieme.
+
+In un formato online (Zoom, Teams o altri) potresti creare stanze separate per i quiz e fare da mentore agli studenti per aiutarli a prepararsi. Poi invita gli studenti a partecipare ai quiz e a inviare le loro risposte come 'issue' a un orario prestabilito. Potresti fare lo stesso con i compiti, se vuoi che gli studenti lavorino collaborativamente in modo aperto.
+
+Se preferisci un formato più privato, chiedi ai tuoi studenti di fare un fork del curriculum, lezione per lezione, nei loro repo GitHub privati e di darti accesso. In questo modo potranno completare quiz e compiti in privato e inviarteli tramite issue sul tuo repo di classe.
+
+Ci sono molti modi per far funzionare questo formato in una classe online. Facci sapere quale funziona meglio per te!
+
+## Per favore, dacci il tuo feedback!
+
+Vogliamo fare in modo che questo curriculum funzioni per te e i tuoi studenti. Per favore, dacci il tuo [feedback](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u).
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione basati su intelligenza artificiale. Sebbene ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/quiz-app/README.md b/translations/it/quiz-app/README.md
new file mode 100644
index 000000000..95e4540f4
--- /dev/null
+++ b/translations/it/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Quiz
+
+Questi quiz sono i quiz pre e post-lezione per il curriculum di ML su https://aka.ms/ml-beginners
+
+## Configurazione del progetto
+
+```
+npm install
+```
+
+### Compilazione e ricaricamento automatico per lo sviluppo
+
+```
+npm run serve
+```
+
+### Compilazione e minificazione per la produzione
+
+```
+npm run build
+```
+
+### Lint e correzione dei file
+
+```
+npm run lint
+```
+
+### Personalizza la configurazione
+
+Consulta [Configuration Reference](https://cli.vuejs.org/config/).
+
+Crediti: Grazie alla versione originale di questa app quiz: https://github.com/arpan45/simple-quiz-vue
+
+## Distribuzione su Azure
+
+Ecco una guida passo-passo per aiutarti a iniziare:
+
+1. Fai un fork del repository GitHub
+Assicurati che il codice della tua app web statica sia nel tuo repository GitHub. Fai un fork di questo repository.
+
+2. Crea una Azure Static Web App
+- Crea un [account Azure](http://azure.microsoft.com)
+- Vai al [portale di Azure](https://portal.azure.com)
+- Clicca su “Crea una risorsa” e cerca “Static Web App”.
+- Clicca su “Crea”.
+
+3. Configura la Static Web App
+- #### Informazioni di base:
+- Sottoscrizione: Seleziona la tua sottoscrizione Azure.
+- Gruppo di risorse: Crea un nuovo gruppo di risorse o usane uno esistente.
+- Nome: Fornisci un nome per la tua app web statica.
+- Regione: Scegli la regione più vicina ai tuoi utenti.
+
+- #### Dettagli di distribuzione:
+- Sorgente: Seleziona “GitHub”.
+- Account GitHub: Autorizza Azure ad accedere al tuo account GitHub.
+- Organizzazione: Seleziona la tua organizzazione GitHub.
+- Repository: Scegli il repository contenente la tua app web statica.
+- Branch: Seleziona il branch da cui vuoi distribuire.
+
+- #### Dettagli di build:
+- Preimpostazioni di build: Scegli il framework con cui è costruita la tua app (es. React, Angular, Vue, ecc.).
+- Posizione dell'app: Specifica la cartella contenente il codice della tua app (es. / se è nella radice).
+- Posizione API: Se hai un'API, specifica la sua posizione (opzionale).
+- Posizione output: Specifica la cartella in cui viene generato l'output della build (es. build o dist).
+
+4. Rivedi e crea
+Rivedi le tue impostazioni e clicca su “Crea”. Azure configurerà le risorse necessarie e creerà un workflow di GitHub Actions nel tuo repository.
+
+5. Workflow di GitHub Actions
+Azure creerà automaticamente un file di workflow di GitHub Actions nel tuo repository (.github/workflows/azure-static-web-apps-.yml). Questo workflow gestirà il processo di build e distribuzione.
+
+6. Monitora la distribuzione
+Vai alla scheda “Actions” nel tuo repository GitHub.
+Dovresti vedere un workflow in esecuzione. Questo workflow costruirà e distribuirà la tua app web statica su Azure.
+Una volta completato il workflow, la tua app sarà live sull'URL fornito da Azure.
+
+### Esempio di file Workflow
+
+Ecco un esempio di come potrebbe apparire il file di workflow di GitHub Actions:
+
+```
+name: Azure Static Web Apps CI/CD
+on:
+ push:
+ branches:
+ - main
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ runs-on: ubuntu-latest
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v1
+ with:
+ azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+ repo_token: ${{ secrets.GITHUB_TOKEN }}
+ action: "upload"
+          app_location: "/quiz-app" # App source code path
+          api_location: "" # API source code path - optional
+          output_location: "dist" # Built app content directory - optional
+```
+
+### Risorse aggiuntive
+- [Documentazione di Azure Static Web Apps](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [Documentazione di GitHub Actions](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Anche se ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o imprecisioni. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/sketchnotes/LICENSE.md b/translations/it/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..ec2180491
--- /dev/null
+++ b/translations/it/sketchnotes/LICENSE.md
@@ -0,0 +1,261 @@
+Attribuzione-Condividi allo stesso modo 4.0 Internazionale
+
+=======================================================================
+
+La Creative Commons Corporation ("Creative Commons") non è uno studio legale e non fornisce servizi legali o consulenza legale. La distribuzione delle licenze pubbliche Creative Commons non crea un rapporto avvocato-cliente o altro tipo di rapporto. Creative Commons rende disponibili le sue licenze e le informazioni correlate "così come sono". Creative Commons non offre garanzie riguardo alle sue licenze, qualsiasi materiale concesso in licenza secondo i loro termini e condizioni, o qualsiasi informazione correlata. Creative Commons declina ogni responsabilità per danni derivanti dal loro utilizzo nella misura massima consentita.
+
+Utilizzo delle licenze pubbliche Creative Commons
+
+Le licenze pubbliche Creative Commons forniscono un insieme standard di termini e condizioni che i creatori e altri titolari di diritti possono utilizzare per condividere opere originali e altro materiale soggetto a copyright e altri diritti specificati nella licenza pubblica di seguito. Le seguenti considerazioni sono solo a scopo informativo, non sono esaustive e non fanno parte delle nostre licenze.
+
+ Considerazioni per i licenzianti: Le nostre licenze pubbliche sono
+ destinate all'uso da parte di coloro che sono autorizzati a dare
+ al pubblico il permesso di utilizzare il materiale in modi altrimenti
+ limitati dal copyright e da certi altri diritti. Le nostre licenze sono
+ irrevocabili. I licenzianti dovrebbero leggere e comprendere i termini
+ e le condizioni della licenza che scelgono prima di applicarla.
+ I licenzianti dovrebbero anche assicurarsi di avere tutti i diritti
+ necessari prima di applicare le nostre licenze in modo che il pubblico
+ possa riutilizzare il materiale come previsto. I licenzianti dovrebbero
+ chiaramente contrassegnare qualsiasi materiale non soggetto alla licenza.
+ Questo include altro materiale concesso in licenza CC, o materiale
+ utilizzato in base a un'eccezione o limitazione al copyright. Maggiori
+ considerazioni per i licenzianti:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ Considerazioni per il pubblico: Utilizzando una delle nostre licenze
+ pubbliche, un licenziante concede al pubblico il permesso di utilizzare
+ il materiale concesso in licenza secondo i termini e le condizioni
+ specificati. Se il permesso del licenziante non è necessario per
+ qualsiasi motivo, ad esempio, a causa di un'eccezione o limitazione
+ applicabile al copyright, tale uso non è regolato dalla licenza. Le
+ nostre licenze concedono solo permessi sotto il copyright e certi
+ altri diritti che un licenziante ha l'autorità di concedere. L'uso del
+ materiale concesso in licenza può comunque essere limitato per altri
+ motivi, inclusi il fatto che altri abbiano diritti di copyright o altri
+ diritti sul materiale. Un licenziante può fare richieste speciali, come
+ chiedere che tutte le modifiche siano contrassegnate o descritte. Anche
+ se non richiesto dalle nostre licenze, si consiglia di rispettare tali
+ richieste quando ragionevoli. Maggiori considerazioni per il pubblico:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Licenza Pubblica Creative Commons Attribution-ShareAlike 4.0 Internazionale
+
+Esercitando i Diritti Concessi (definiti di seguito), accetti e concordi di essere vincolato dai termini e dalle condizioni di questa Licenza Pubblica Creative Commons Attribution-ShareAlike 4.0 Internazionale ("Licenza Pubblica"). Nella misura in cui questa Licenza Pubblica possa essere interpretata come un contratto, ti vengono concessi i Diritti Concessi in considerazione della tua accettazione di questi termini e condizioni, e il Licenziante ti concede tali diritti in considerazione dei benefici che il Licenziante riceve rendendo disponibile il Materiale Concesso in licenza secondo questi termini e condizioni.
+
+
+Sezione 1 -- Definizioni.
+
+ a. Materiale Adattato significa materiale soggetto a Copyright e Diritti Simili che è derivato o basato sul Materiale Concesso in licenza e nel quale il Materiale Concesso in licenza è tradotto, alterato, arrangiato, trasformato o altrimenti modificato in un modo che richiede permesso ai sensi del Copyright e dei Diritti Simili detenuti dal Licenziante. Ai fini di questa Licenza Pubblica, quando il Materiale Concesso in licenza è un'opera musicale, una performance o una registrazione sonora, il Materiale Adattato è sempre prodotto quando il Materiale Concesso in licenza è sincronizzato in relazione temporale con un'immagine in movimento.
+
+ b. Licenza dell'Adattatore significa la licenza che applichi ai tuoi Copyright e Diritti Simili nei tuoi contributi al Materiale Adattato in conformità con i termini e le condizioni di questa Licenza Pubblica.
+
+ c. Licenza Compatibile BY-SA significa una licenza elencata su creativecommons.org/compatiblelicenses, approvata da Creative Commons come essenzialmente equivalente a questa Licenza Pubblica.
+
+ d. Copyright e Diritti Simili significa copyright e/o diritti simili strettamente correlati al copyright inclusi, senza limitazione, performance, trasmissione, registrazione sonora e Diritti sui Database Sui Generis, senza riguardo a come i diritti sono etichettati o categorizzati. Ai fini di questa Licenza Pubblica, i diritti specificati nella Sezione 2(b)(1)-(2) non sono Copyright e Diritti Simili.
+
+ e. Misure Tecnologiche Efficaci significa quelle misure che, in assenza di adeguata autorizzazione, non possono essere aggirate ai sensi delle leggi che soddisfano gli obblighi ai sensi dell'Articolo 11 del Trattato sul Copyright dell'OMPI adottato il 20 dicembre 1996 e/o accordi internazionali simili.
+
+ f. Eccezioni e Limitazioni significa uso equo, trattativa equa, e/o qualsiasi altra eccezione o limitazione al Copyright e Diritti Simili che si applica al tuo utilizzo del Materiale Concesso in licenza.
+
+ g. Elementi della Licenza significa gli attributi della licenza elencati nel nome di una Licenza Pubblica Creative Commons. Gli Elementi della Licenza di questa Licenza Pubblica sono Attribuzione e Condividi allo stesso modo.
+
+ h. Materiale Concesso in licenza significa l'opera artistica o letteraria, il database o altro materiale a cui il Licenziante ha applicato questa Licenza Pubblica.
+
+ i. Diritti Concessi significa i diritti concessi a te soggetti ai termini e alle condizioni di questa Licenza Pubblica, che sono limitati a tutti i Copyright e Diritti Simili che si applicano al tuo utilizzo del Materiale Concesso in licenza e che il Licenziante ha l'autorità di concedere.
+
+ j. Licenziante significa l'individuo o gli individui o l'entità o le entità che concedono diritti ai sensi di questa Licenza Pubblica.
+
+ k. Condividere significa fornire materiale al pubblico con qualsiasi mezzo o processo che richiede permesso ai sensi dei Diritti Concessi, come la riproduzione, la visualizzazione pubblica, la performance pubblica, la distribuzione, la diffusione, la comunicazione o l'importazione, e rendere disponibile il materiale al pubblico anche in modi che i membri del pubblico possano accedere al materiale da un luogo e in un momento scelti individualmente da loro.
+
+ l. Diritti sui Database Sui Generis significa diritti diversi dal copyright risultanti dalla Direttiva 96/9/CE del Parlamento Europeo e del Consiglio dell'11 marzo 1996 sulla protezione giuridica dei database, come modificata e/o sostituita, nonché altri diritti essenzialmente equivalenti in qualsiasi parte del mondo.
+
+ m. Tu significa l'individuo o l'entità che esercita i Diritti Concessi ai sensi di questa Licenza Pubblica. Il tuo ha un significato corrispondente.
+
+
+Sezione 2 -- Ambito.
+
+ a. Concessione della licenza.
+
+ 1. Soggetto ai termini e alle condizioni di questa Licenza Pubblica,
+ il Licenziante ti concede una licenza mondiale, gratuita,
+ non sublicenziabile, non esclusiva, irrevocabile per
+ esercitare i Diritti Concessi nel Materiale Concesso in licenza per:
+
+ a. riprodurre e Condividere il Materiale Concesso in licenza,
+ in tutto o in parte; e
+
+ b. produrre, riprodurre e Condividere Materiale Adattato.
+
+ 2. Eccezioni e Limitazioni. Per evitare dubbi, laddove
+ Eccezioni e Limitazioni si applichino al tuo utilizzo, questa
+ Licenza Pubblica non si applica, e non è necessario che tu
+ rispetti i suoi termini e condizioni.
+
+ 3. Durata. La durata di questa Licenza Pubblica è specificata nella Sezione
+ 6(a).
+
+ 4. Media e formati; modifiche tecniche consentite. Il
+ Licenziante ti autorizza a esercitare i Diritti Concessi in
+ tutti i media e formati, sia ora conosciuti che creati in futuro,
+ e a fare le modifiche tecniche necessarie per farlo. Il
+ Licenziante rinuncia e/o si impegna a non far valere alcun diritto o
+ autorità per vietarti di fare modifiche tecniche necessarie per
+ esercitare i Diritti Concessi, comprese le modifiche tecniche
+ necessarie per aggirare le Misure Tecnologiche Efficaci. Ai fini di
+ questa Licenza Pubblica, semplicemente fare modifiche autorizzate
+ da questa Sezione 2(a)(4) non produce mai Materiale Adattato.
+
+ 5. Destinatari successivi.
+
+ a. Offerta del Licenziante -- Materiale Concesso in licenza. Ogni
+ destinatario del Materiale Concesso in licenza riceve automaticamente
+ un'offerta dal Licenziante per esercitare i Diritti Concessi
+ ai sensi dei termini e delle condizioni di questa Licenza Pubblica.
+
+ b. Offerta aggiuntiva del Licenziante -- Materiale Adattato.
+ Ogni destinatario del Materiale Adattato da te
+ riceve automaticamente un'offerta dal Licenziante per
+ esercitare i Diritti Concessi nel Materiale Adattato
+ secondo le condizioni della Licenza dell'Adattatore che applichi.
+
+ c. Nessuna restrizione a valle. Non puoi offrire o imporre
+ termini o condizioni aggiuntivi o diversi, né
+ applicare Misure Tecnologiche Efficaci al
+ Materiale Concesso in licenza se ciò limita l'esercizio dei
+ Diritti Concessi da parte di qualsiasi destinatario del Materiale
+ Concesso in licenza.
+
+ 6. Nessuna approvazione. Nulla in questa Licenza Pubblica costituisce o
+ può essere interpretato come un permesso di affermare o implicare che tu
+ sia, o che il tuo utilizzo del Materiale Concesso in licenza sia, collegato
+ con, o sponsorizzato, approvato, o concesso stato ufficiale dal,
+ Licenziante o altri designati per ricevere attribuzione come
+ previsto nella Sezione 3(a)(1)(A)(i).
+
+ b. Altri diritti.
+
+ 1. I diritti morali, come il diritto all'integrità, non sono
+ concessi ai sensi di questa Licenza Pubblica, né i diritti di pubblicità,
+ privacy, e/o altri diritti simili della personalità; tuttavia, nella
+ misura possibile, il Licenziante rinuncia e/o si impegna a non far valere
+ tali diritti detenuti dal Licenziante nella misura limitata necessaria
+ per consentirti di esercitare i Diritti Concessi, ma non oltre.
+
+ 2. I diritti di brevetto e marchio non sono concessi ai sensi di questa
+ Licenza Pubblica.
+
+ 3. Nella misura possibile, il Licenziante rinuncia a qualsiasi diritto di
+ raccogliere royalties da te per l'esercizio dei Diritti Concessi, sia
+ direttamente che tramite una società di gestione collettiva
+ ai sensi di qualsiasi schema di licenza volontaria o obbligatoria
+ che possa essere rinunciata. In tutti gli altri casi il Licenziante si riserva
+ espressamente qualsiasi diritto di raccogliere tali royalties.
+
+
+Sezione 3 -- Condizioni della Licenza.
+
+L'esercizio dei Diritti Concessi è espressamente soggetto alle
+seguenti condizioni.
+
+ a. Attribuzione.
+
+ 1. Se Condividi il Materiale Concesso in licenza (incluso in forma modificata),
+ devi:
+
+ a. mantenere quanto segue se fornito dal Licenziante
+ con il Materiale Concesso in licenza:
+
+ i. identificazione del creatore o dei creatori del Materiale
+ Concesso in licenza e di qualsiasi altro designato per ricevere
+ attribuzione, in qualsiasi modo ragionevole richiesto dal
+ Licenziante (incluso con pseudonimo se
+ designato);
+
+ ii. un avviso di copyright;
+
+ iii. un avviso che fa riferimento a questa Licenza Pubblica;
+
+ iv. un avviso che fa riferimento alla dichiarazione di
+ non responsabilità;
+
+ v. un URI o un collegamento ipertestuale al Materiale Concesso in licenza
+ nella misura ragionevolmente praticabile;
+
+ b. indicare se hai modificato il Materiale Concesso in licenza e
+ mantenere un'indicazione di eventuali modifiche precedenti; e
+
+ c. indicare che il Materiale Concesso in licenza è concesso in licenza ai sensi di questa
+ Licenza Pubblica, e includere il testo di, o l'URI o
+ il collegamento ipertestuale a, questa Licenza Pubblica.
+
+ 2. Puoi soddisfare le condizioni nella Sezione 3(a)(1) in qualsiasi
+ modo ragionevole basato sul mezzo, sui mezzi e sul contesto in
+ cui Condividi il Materiale Concesso in licenza. Ad esempio, può essere
+ ragionevole soddisfare le condizioni fornendo un URI o
+ un collegamento ipertestuale a una risorsa che include le informazioni
+ richieste.
+
+ 3. Se richiesto dal Licenziante, devi rimuovere qualsiasi delle
+ informazioni richieste dalla Sezione 3(a)(1)(A) nella misura
+ ragionevolmente praticabile.
+
+ b. Condividi allo stesso modo.
+
+ Oltre alle condizioni nella Sezione 3(a), se Condividi
+ Materiale Adattato che produci, si applicano anche le seguenti condizioni.
+
+ 1. La Licenza dell'Adattatore che applichi deve essere una licenza Creative Commons
+ con gli stessi Elementi della Licenza, questa versione o
+ successiva, o una Licenza Compatibile BY-SA.
+
+ 2. Devi includere il testo di, o l'URI o il collegamento ipertestuale a,
+ la Licenza dell'Adattatore che applichi. Puoi soddisfare questa condizione
+ in qualsiasi modo ragionevole basato sul mezzo, sui mezzi e sul contesto in
+ cui Condividi il Materiale Adattato.
+
+ 3. Non puoi offrire o imporre termini o condizioni aggiuntivi o diversi, né
+ applicare Misure Tecnologiche Efficaci al
+ Materiale Adattato che limitino l'esercizio dei
+ diritti concessi ai sensi della Licenza dell'Adattatore che applichi.
+
+
+Sezione 4 -- Diritti sui Database Sui Generis.
+
+Laddove i Diritti Concessi includano i Diritti sui Database Sui Generis che
+si applicano al tuo utilizzo del Materiale Concesso in licenza:
+
+ a. per evitare dubbi, la Sezione 2(a)(1) ti concede il diritto
+ di estrarre, riutilizzare, riprodurre e Condividere tutto o una parte
+ sostanziale del contenuto del database;
+
+  b. se includi tutto o una parte sostanziale del contenuto del database
+     in un database in cui hai Diritti sui Database Sui Generis, allora il
+     database in cui hai Diritti sui Database Sui Generis (ma non i suoi
+     contenuti individuali) è Materiale Adattato, incluso ai fini della
+     Sezione 3(b); e
+
+  c. devi rispettare le condizioni nella Sezione 3(a) se Condividi
+     tutto o una parte sostanziale del contenuto del database.
+
+Per evitare dubbi, questa Sezione 4 integra e non
+sostituisce i tuoi obblighi ai sensi di questa Licenza Pubblica laddove i Diritti
+Concessi includano altri Copyright e Diritti Simili.
+
+
+Sezione 5 -- Dichiarazione di Non Responsabilità e Limitazione di Responsabilità.
+
+ a. SALVO DIVERSAMENTE SPECIFICATO DAL LICENZIANTE, NELLA
+ MISURA POSSIBILE, IL LICENZIANTE OFFRE IL MATERIALE CONCESSO IN LICENZA
+ COSÌ COM'È E COME DISPONIBILE, E NON FA ALCUNA DICHIARAZIONE O GARANZIA DI
+ ALCUN TIPO RIGUARDO AL MATERIALE CONCESSO IN LICENZA, SIA ESPRESSA,
+ IMPLICITA, STATUTARIA, O ALTRO. QUESTO INCLUDE, SENZA LIMITAZIONE,
+ GARANZIE DI TITOLO, COMMERCIABILITÀ, IDONEITÀ A UNO SCOPO PARTICOLARE,
+ NON VIOLAZIONE, ASSENZA DI DIFETTI LATENTI O ALTRI DIFETTI,
+ ACCURATEZZA, O PRESENZA O ASSENZA DI ERRORI, SIA CONOSCIUTI CHE
+ SCOPRIBILI. LADDOVE LE DICHIARAZIONI DI NON RESPONSABILITÀ DELLE GARANZIE NON
+     SIANO CONSENTITE IN TUTTO O IN PARTE, QUESTA DICHIARAZIONE DI NON
+     RESPONSABILITÀ POTREBBE NON APPLICARSI A TE.
+
+**Avvertenza**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su intelligenza artificiale. Sebbene ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatizzate possono contenere errori o imprecisioni. Il documento originale nella sua lingua madre deve essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione umana professionale. Non siamo responsabili per eventuali fraintendimenti o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/it/sketchnotes/README.md b/translations/it/sketchnotes/README.md
new file mode 100644
index 000000000..54834ae06
--- /dev/null
+++ b/translations/it/sketchnotes/README.md
@@ -0,0 +1,10 @@
+Tutte le sketchnote del curriculum possono essere scaricate qui.
+
+🖨 Per la stampa in alta risoluzione, le versioni TIFF sono disponibili in [questo repo](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff).
+
+🎨 Creato da: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**Disclaimer**:
+Questo documento è stato tradotto utilizzando servizi di traduzione automatica basati su AI. Sebbene ci sforziamo di garantire l'accuratezza, si prega di essere consapevoli che le traduzioni automatiche possono contenere errori o inesattezze. Il documento originale nella sua lingua nativa dovrebbe essere considerato la fonte autorevole. Per informazioni critiche, si raccomanda una traduzione professionale umana. Non siamo responsabili per eventuali malintesi o interpretazioni errate derivanti dall'uso di questa traduzione.
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/1-intro-to-ML/README.md b/translations/ja/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..96b70f49c
--- /dev/null
+++ b/translations/ja/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# 機械学習の入門
+
+## [事前講義クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "初心者向け機械学習入門")
+
+> 🎥 上の画像をクリックすると、このレッスンの短い動画が再生されます。
+
+初心者向けの古典的な機械学習コースへようこそ!このトピックに全くの初心者であっても、または経験豊富なML実践者であっても、私たちはあなたを歓迎します!私たちは、あなたのML学習のための親しみやすい出発点を作りたいと考えており、あなたの[フィードバック](https://github.com/microsoft/ML-For-Beginners/discussions)を評価し、対応し、取り入れることを喜んで行います。
+
+[](https://youtu.be/h0e2HAPTGF4 "機械学習入門")
+
+> 🎥 上の画像をクリックすると、MITのJohn Guttagが機械学習を紹介する動画が再生されます。
+
+---
+## 機械学習の始め方
+
+このカリキュラムを始める前に、ノートブックをローカルで実行するためにコンピュータのセットアップを行う必要があります。
+
+- **これらのビデオでマシンを構成する**。システムに[Pythonをインストールする方法](https://youtu.be/CXZYvNRIAKM)と、開発のための[テキストエディタを設定する](https://youtu.be/EU8eayHWoZg)方法を学ぶために以下のリンクを使用してください。
+- **Pythonを学ぶ**。このコースで使用するデータサイエンティストに役立つプログラミング言語である[Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott)の基本的な理解も推奨されます。
+- **Node.jsとJavaScriptを学ぶ**。このコースでは、Webアプリを作成する際にJavaScriptも数回使用するため、[node](https://nodejs.org)と[npm](https://www.npmjs.com/)をインストールし、[Visual Studio Code](https://code.visualstudio.com/)をPythonとJavaScriptの開発に利用できるようにする必要があります。
+- **GitHubアカウントを作成する**。あなたが[GitHub](https://github.com)で私たちを見つけたのであれば、すでにアカウントを持っているかもしれませんが、そうでない場合はアカウントを作成し、このカリキュラムをフォークして自分で使用してください。(星を付けていただけると嬉しいです😊)
+- **Scikit-learnを探索する**。これらのレッスンで参照する一連のMLライブラリである[Scikit-learn](https://scikit-learn.org/stable/user_guide.html)に慣れてください。
+
+---
+## 機械学習とは何か?
+
+「機械学習」という用語は、今日最も人気があり頻繁に使用される用語の一つです。技術に多少なりとも親しみがあれば、どの分野で働いていても、この用語を少なくとも一度は聞いたことがある可能性があります。しかし、機械学習の仕組みは多くの人にとって謎です。機械学習の初心者にとって、このテーマは時に圧倒的に感じられることがあります。したがって、機械学習が実際に何であるかを理解し、実践的な例を通じて一歩一歩学んでいくことが重要です。
+
+---
+## ハイプカーブ
+
+
+
+> Google Trendsが示す最近の「機械学習」のハイプカーブ
+
+---
+## 謎に満ちた宇宙
+
+私たちは魅力的な謎に満ちた宇宙に住んでいます。スティーブン・ホーキングやアルバート・アインシュタインなどの偉大な科学者たちは、周囲の世界の謎を解き明かす有意義な情報を探求するために一生を捧げました。これは学習という人間の条件です。人間の子供は、新しいことを学び、成長するにつれて世界の構造を年々明らかにしていきます。
+
+---
+## 子供の脳
+
+子供の脳と感覚は周囲の事実を知覚し、徐々に生活の隠れたパターンを学び、それによって学んだパターンを識別するための論理的なルールを作成します。人間の脳の学習プロセスは、人間をこの世界で最も高度な生き物にします。隠れたパターンを発見し続け、それに基づいて革新することで、生涯を通じて自分自身をより良くしていくことができます。この学習能力と進化能力は、[脳の可塑性](https://www.simplypsychology.org/brain-plasticity.html)という概念に関連しています。表面的には、人間の脳の学習プロセスと機械学習の概念との間には、いくつかの動機的な類似点を引き出すことができます。
+
+---
+## 人間の脳
+
+[人間の脳](https://www.livescience.com/29365-human-brain.html)は現実世界から物事を知覚し、知覚した情報を処理し、合理的な決定を下し、状況に応じて特定の行動を取ります。これが知的に振る舞うと呼ばれるものです。知的な行動プロセスの模倣を機械にプログラムすることを人工知能(AI)と呼びます。
+
+---
+## 用語の説明
+
+用語が混同されることがありますが、機械学習(ML)は人工知能の重要なサブセットです。**MLは、知覚されたデータから有意義な情報を発見し、隠れたパターンを見つけて合理的な意思決定プロセスを裏付けるために特化したアルゴリズムを使用することに関心があります**。
+
+---
+## AI、ML、ディープラーニング
+
+
+
+> AI、ML、ディープラーニング、データサイエンスの関係を示す図。インフォグラフィックは[Jen Looper](https://twitter.com/jenlooper)によって、この[グラフィック](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)にインスパイアされて作成されました。
+
+---
+## カバーする概念
+
+このカリキュラムでは、初心者が知っておくべき機械学習の基本概念のみをカバーします。主にScikit-learnを使用して「古典的な機械学習」をカバーします。これは多くの学生が基本を学ぶために使用する優れたライブラリです。人工知能やディープラーニングの広範な概念を理解するためには、機械学習の強固な基礎知識が不可欠であり、ここでそれを提供したいと考えています。
+
+---
+## このコースで学ぶこと
+
+- 機械学習の基本概念
+- 機械学習の歴史
+- 機械学習と公平性
+- 回帰ML技術
+- 分類ML技術
+- クラスタリングML技術
+- 自然言語処理ML技術
+- 時系列予測ML技術
+- 強化学習
+- 機械学習の実世界での応用
+
+---
+## カバーしないこと
+
+- ディープラーニング
+- ニューラルネットワーク
+- AI
+
+より良い学習体験を提供するために、ニューラルネットワークの複雑さ、「ディープラーニング」 - ニューラルネットワークを使用した多層モデル構築 - およびAIを避けます。これらは別のカリキュラムで取り上げます。また、データサイエンスに焦点を当てたカリキュラムも今後提供する予定です。
+
+---
+## なぜ機械学習を学ぶのか?
+
+システムの観点から見ると、機械学習は、データから隠れたパターンを学習して知的な意思決定を支援する、自動化システムを作ることと定義できます。
+
+この動機は、人間の脳が外界から知覚したデータに基づいて特定のことを学習する方法に緩やかに触発されています。
+
+✅ ビジネスがハードコードされたルールベースのエンジンを作成するのではなく、機械学習戦略を使用しようとする理由について考えてみてください。
+
+---
+## 機械学習の応用
+
+機械学習の応用は今やほぼどこにでもあり、私たちの社会を流れるデータ、スマートフォン、接続されたデバイス、およびその他のシステムによって生成されるデータと同様に普及しています。最先端の機械学習アルゴリズムの膨大な可能性を考慮して、研究者たちはその能力を多次元および多分野の実生活の問題を解決するために探求し、素晴らしい成果を上げています。
+
+---
+## 機械学習の応用例
+
+**機械学習をさまざまな方法で使用できます**:
+
+- 患者の医療履歴やレポートから病気の可能性を予測する。
+- 天気データを利用して天気イベントを予測する。
+- テキストの感情を理解する。
+- 偽ニュースを検出してプロパガンダの拡散を防ぐ。
+
+金融、経済学、地球科学、宇宙探査、生物医学工学、認知科学、さらには人文学の分野でも、機械学習を適用して、その領域の厄介でデータ処理の重い問題を解決しています。
+
+---
+## 結論
+
+機械学習は、現実世界のデータや生成されたデータから有意義な洞察を見つけることでパターン発見のプロセスを自動化します。ビジネス、健康、金融などのアプリケーションにおいて非常に価値があることが証明されています。
+
+近い将来、機械学習の基本を理解することは、その広範な採用のため、どの分野の人々にとっても必須になるでしょう。
+
+---
+# 🚀 チャレンジ
+
+紙や[Excalidraw](https://excalidraw.com/)のようなオンラインアプリを使用して、AI、ML、ディープラーニング、データサイエンスの違いについての理解をスケッチしてください。これらの技術が得意とする問題のアイデアも追加してください。
+
+# [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# レビューと自主学習
+
+クラウドでMLアルゴリズムを使用する方法について詳しく学ぶには、この[学習パス](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott)を参照してください。
+
+MLの基本についての[学習パス](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott)を受講してください。
+
+---
+# 課題
+
+[セットアップと実行](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳にはエラーや不正確さが含まれる場合があることにご注意ください。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/1-intro-to-ML/assignment.md b/translations/ja/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..8851f4d63
--- /dev/null
+++ b/translations/ja/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# 起動と実行
+
+## 手順
+
+この成績に関係しない課題では、Pythonの復習をし、ノートブックを実行できる環境を整える必要があります。
+
+この[Python学習パス](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott)を取り、その後、以下の入門ビデオを見てシステムを設定してください:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努めておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で記載されたオリジナル文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当方は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/2-history-of-ML/README.md b/translations/ja/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..80fb2d342
--- /dev/null
+++ b/translations/ja/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# 機械学習の歴史
+
+
+> スケッチノート: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[](https://youtu.be/N6wxM4wZ7V0 "初心者向け機械学習 - 機械学習の歴史")
+
+> 🎥 上の画像をクリックすると、このレッスンの短い動画が再生されます。
+
+このレッスンでは、機械学習と人工知能の歴史における主要なマイルストーンを見ていきます。
+
+人工知能(AI)の歴史は機械学習の歴史と密接に関連しています。なぜなら、MLを支えるアルゴリズムや計算の進歩がAIの発展に寄与しているからです。これらの分野が1950年代に独立した研究領域として結晶化し始めた一方で、重要な[アルゴリズム、統計、数学、計算、および技術的発見](https://wikipedia.org/wiki/Timeline_of_machine_learning)はその前から存在し、重なり合っていました。実際、人々は[何百年も前から](https://wikipedia.org/wiki/History_of_artificial_intelligence)これらの問題について考えてきました。この記事では、「考える機械」のアイデアの歴史的な知的基盤について議論しています。
+
+---
+## 重要な発見
+
+- 1763年、1812年 [ベイズの定理](https://wikipedia.org/wiki/Bayes%27_theorem) とその前身。この定理とその応用は、事前の知識に基づいてイベントが発生する確率を説明します。
+- 1805年 [最小二乗法](https://wikipedia.org/wiki/Least_squares) フランスの数学者Adrien-Marie Legendreによる。この理論はデータフィッティングに役立ち、回帰ユニットで学びます。
+- 1913年 [マルコフ連鎖](https://wikipedia.org/wiki/Markov_chain)、ロシアの数学者Andrey Markovにちなんで名付けられたもので、前の状態に基づいて一連の可能なイベントを説明します。
+- 1957年 [パーセプトロン](https://wikipedia.org/wiki/Perceptron) アメリカの心理学者Frank Rosenblattによって発明された線形分類器の一種で、深層学習の進歩の基礎となっています。
+
+---
+
+- 1967年 [最近傍法](https://wikipedia.org/wiki/Nearest_neighbor) 元々はルートをマッピングするために設計されたアルゴリズムで、MLの文脈ではパターンを検出するために使用されます。
+- 1970年 [逆伝播](https://wikipedia.org/wiki/Backpropagation) [フィードフォワードニューラルネットワーク](https://wikipedia.org/wiki/Feedforward_neural_network) を訓練するために使用されます。
+- 1982年 [リカレントニューラルネットワーク](https://wikipedia.org/wiki/Recurrent_neural_network) フィードフォワードニューラルネットワークから派生した人工ニューラルネットワークで、時間的なグラフを作成します。
+
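+上で触れたベイズの定理は、一般的な記法では P(A|B) = P(B|A) P(A) / P(B) と表され、事前確率 P(A) と尤度 P(B|A) から事後確率 P(A|B) を求める式です。
+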
+✅ 少し調べてみましょう。MLとAIの歴史において他に重要な日付は何でしょうか?
+
+---
+## 1950年: 考える機械
+
+2019年に[一般投票で](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) 20世紀の偉大な科学者に選ばれたAlan Turingは、「考える機械」の概念の基礎を築いたとされています。彼は、この概念に対する反対意見や自身の経験的証拠の必要性に取り組むために[チューリングテスト](https://www.bbc.com/news/technology-18475646) を作成しました。このテストについてはNLPレッスンで探求します。
+
+---
+## 1956年: ダートマス夏季研究プロジェクト
+
+「ダートマス夏季研究プロジェクトは、人工知能という分野にとって画期的なイベントであり、ここで『人工知能』という用語が生まれました」([source](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth))。
+
+> 学習やその他の知能の特徴のすべては、原則として、機械がそれをシミュレートできるほど正確に記述することができます。
+
+---
+
+主任研究者である数学教授John McCarthyは、「学習やその他の知能の特徴のすべては、原則として、機械がそれをシミュレートできるほど正確に記述することができる」という仮定に基づいて進めたいと考えていました。参加者には、分野のもう一人の著名人であるMarvin Minskyも含まれていました。
+
+このワークショップは、「シンボリックメソッドの台頭、限定されたドメインに焦点を当てたシステム(初期のエキスパートシステム)、および帰納システムと演繹システムの議論」などの議論を始め、奨励したとされています ([source](https://wikipedia.org/wiki/Dartmouth_workshop))。
+
+---
+## 1956年 - 1974年: 「黄金時代」
+
+1950年代から1970年代半ばにかけて、AIが多くの問題を解決できるという希望に満ちていました。1967年、Marvin Minskyは「一世代のうちに…『人工知能』を作成する問題は実質的に解決されるだろう」と自信を持って述べました (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)。
+
+自然言語処理の研究が盛んになり、検索が洗練され強力になり、「マイクロワールド」という概念が生まれ、簡単なタスクが平易な言語指示で完了されました。
+
+---
+
+政府機関からの研究資金が豊富で、計算とアルゴリズムの進歩があり、インテリジェントな機械のプロトタイプが作られました。これらの機械のいくつかには次のようなものがあります:
+
+* [Shakey the robot](https://wikipedia.org/wiki/Shakey_the_robot)、タスクを「知的に」実行する方法を決定しながら移動できるロボット。
+
+ 
+ > 1972年のShakey
+
+---
+
+* 初期の「チャターボット」であるElizaは、人々と会話し、原始的な「セラピスト」として機能しました。ElizaについてはNLPレッスンでさらに学びます。
+
+ 
+ > チャットボットの一バージョン、Eliza
+
+---
+
+* 「ブロックワールド」は、ブロックを積み重ねたり分類したりするマイクロワールドの例で、機械に意思決定を教える実験が行われました。 [SHRDLU](https://wikipedia.org/wiki/SHRDLU) などのライブラリを使用して構築された進歩が言語処理を推進しました。
+
+ [](https://www.youtube.com/watch?v=QAJz4YKUwqw "SHRDLUを使ったブロックワールド")
+
+ > 🎥 上の画像をクリックすると、SHRDLUを使ったブロックワールドの動画が再生されます
+
+---
+## 1974年 - 1980年: 「AIの冬」
+
+1970年代半ばまでに、「知的な機械」を作ることの複雑さが過小評価されており、利用可能な計算能力に対するその約束が過大評価されていたことが明らかになりました。資金は枯渇し、分野への信頼は低下しました。信頼を損なった問題には次のようなものがあります:
+
+---
+
+- **限界**。計算能力が限られていました。
+- **組み合わせ爆発**。コンピュータに求めることが増えるにつれて、トレーニングする必要のあるパラメータの数が指数関数的に増加しましたが、計算能力と能力の進化が並行していませんでした。
+- **データの不足**。アルゴリズムのテスト、開発、改良を妨げるデータの不足がありました。
+- **正しい質問をしているのか?**。研究者が取り組んでいる質問自体が疑問視され始めました。研究者たちはそのアプローチに対して批判を受けるようになりました:
+ - チューリングテストは、「デジタルコンピュータをプログラムすることで言語を理解しているように見せることはできるが、実際の理解を生み出すことはできない」という『中国語の部屋理論』などのアイデアによって疑問視されました ([source](https://plato.stanford.edu/entries/chinese-room/))。
+ - 「セラピスト」ELIZAのような人工知能を社会に導入する倫理が問われました。
+
+---
+
+同時に、さまざまなAIの学派が形成され始めました。「[スクラフィー]と[ニートAI](https://wikipedia.org/wiki/Neats_and_scruffies)」の実践の間に二分化が確立されました。_スクラフィー_の研究室は、望む結果が得られるまでプログラムを微調整しました。_ニート_の研究室は「論理と形式的な問題解決」に焦点を当てました。ELIZAとSHRDLUはよく知られた_スクラフィー_システムでした。1980年代に入り、MLシステムを再現可能にする需要が高まると、その結果がより説明可能であるため、_ニート_アプローチが徐々に主流になりました。
+
+---
+## 1980年代 エキスパートシステム
+
+分野が成長するにつれて、そのビジネスへの利益が明確になり、1980年代には「エキスパートシステム」の普及が進みました。「エキスパートシステムは、最初に成功した人工知能(AI)ソフトウェアの一つです」([source](https://wikipedia.org/wiki/Expert_system))。
+
+このタイプのシステムは実際には_ハイブリッド_であり、ビジネス要件を定義するルールエンジンと、ルールシステムを活用して新しい事実を推論する推論エンジンから部分的に構成されています。
+
+この時代はまた、ニューラルネットワークに対する関心が高まった時期でもあります。
+
+---
+## 1987年 - 1993年: AIの冷却期
+
+専門化されたエキスパートシステムハードウェアの普及は、残念ながらあまりにも専門化されすぎました。パーソナルコンピュータの普及もこれらの大規模で専門化された集中システムと競合しました。コンピューティングの民主化が始まり、最終的にはビッグデータの現代の爆発の道を開きました。
+
+---
+## 1993年 - 2011年
+
+この時代は、データと計算能力の不足によって引き起こされた問題を解決するための新しい時代を迎えました。データの量は急速に増加し、より広く利用可能になり、特に2007年頃のスマートフォンの登場とともに、良くも悪くも広まりました。計算能力は指数関数的に拡大し、アルゴリズムもそれに伴って進化しました。この分野は、過去の自由奔放な日々から真の学問分野へと結晶化し始めました。
+
+---
+## 現在
+
+今日、機械学習とAIは私たちの生活のほぼすべての部分に触れています。この時代は、これらのアルゴリズムが人間の生活に与えるリスクと影響を慎重に理解することを求めています。MicrosoftのBrad Smithは、「情報技術はプライバシーや表現の自由などの基本的人権保護の核心に関わる問題を提起します。これらの問題は、これらの製品を作成する技術企業に対する責任を高めます。私たちの見解では、思慮深い政府の規制と許容される使用に関する規範の発展も必要です」([source](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/))と述べています。
+
+---
+
+将来がどうなるかはまだわかりませんが、これらのコンピュータシステムとそれらが実行するソフトウェアやアルゴリズムを理解することが重要です。このカリキュラムが、あなた自身が判断するためのより良い理解を得るのに役立つことを願っています。
+
+[](https://www.youtube.com/watch?v=mTtDfKgLm54 "ディープラーニングの歴史")
+> 🎥 上の画像をクリックすると、Yann LeCunがディープラーニングの歴史について講演する動画が再生されます
+
+---
+## 🚀チャレンジ
+
+これらの歴史的な瞬間の一つに掘り下げて、その背後にいる人々についてもっと学びましょう。魅力的なキャラクターがいて、どんな科学的発見も文化的な真空の中で生まれたわけではありません。何を発見しますか?
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## 復習と自己学習
+
+以下のアイテムを見たり聞いたりしましょう:
+
+[Amy BoydがAIの進化について話すこのポッドキャスト](http://runasradio.com/Shows/Show/739)
+[](https://www.youtube.com/watch?v=EJt3_bFYKss "The history of AI by Amy Boyd")
+
+---
+
+## 課題
+
+[タイムラインを作成する](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文がその言語での権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/2-history-of-ML/assignment.md b/translations/ja/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..636342739
--- /dev/null
+++ b/translations/ja/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# タイムラインを作成する
+
+## 指示
+
+[このリポジトリ](https://github.com/Digital-Humanities-Toolkit/timeline-builder)を使用して、アルゴリズム、数学、統計、AI、またはMLの歴史のある側面、またはこれらの組み合わせのタイムラインを作成します。一人の人物、一つのアイデア、または長い期間の思考に焦点を当てることができます。マルチメディア要素を追加することを忘れないでください。
+
+## ルーブリック
+
+| 基準 | 優秀 | 十分 | 改善が必要 |
+| -------- | ------------------------------------------------- | --------------------------------------- | ---------------------------------------------------------------- |
+| | GitHubページとして展開されたタイムラインが提示される | コードが不完全で展開されていない | タイムラインが不完全で、十分に調査されておらず、展開されていない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努力しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。原文の母国語の文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当方は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/3-fairness/README.md b/translations/ja/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..f89275236
--- /dev/null
+++ b/translations/ja/1-Introduction/3-fairness/README.md
@@ -0,0 +1,113 @@
+# 責任あるAIを用いた機械学習ソリューションの構築
+
+
+> スケッチノート by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [事前講義クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## はじめに
+
+このカリキュラムでは、機械学習がどのように私たちの日常生活に影響を与えているかを発見し始めます。現在でも、システムやモデルは、医療診断、ローンの承認、詐欺の検出など、日常の意思決定タスクに関与しています。したがって、これらのモデルが信頼できる結果を提供するためにうまく機能することが重要です。あらゆるソフトウェアアプリケーションと同様に、AIシステムも期待を裏切ったり、望ましくない結果をもたらすことがあります。そのため、AIモデルの挙動を理解し、説明することが不可欠です。
+
+これらのモデルを構築するために使用するデータに特定の人口統計(人種、性別、政治的見解、宗教など)が欠けている場合、またはこれらの人口統計が不均衡に表現されている場合、何が起こるかを想像してみてください。モデルの出力が特定の人口統計を優遇するように解釈された場合、アプリケーションにとっての結果はどうなるでしょうか。さらに、モデルが望ましくない結果をもたらし、人々に害を及ぼす場合はどうなるでしょうか。AIシステムの挙動に対する責任は誰が負うのでしょうか。これらは、このカリキュラムで探求するいくつかの質問です。
+
+このレッスンでは、次のことを学びます:
+
+- 機械学習における公平性の重要性と関連する害についての認識を高める
+- 信頼性と安全性を確保するために異常値や異常なシナリオを探る実践に慣れる
+- 包括的なシステムを設計することで全員をエンパワーメントする必要性を理解する
+- データと人々のプライバシーとセキュリティを保護することの重要性を探る
+- AIモデルの挙動を説明するためのガラスボックスアプローチの重要性を見る
+- AIシステムに対する信頼を構築するためにアカウンタビリティが不可欠であることを意識する
+
+## 前提条件
+
+前提条件として、「責任あるAIの原則」学習パスを受講し、以下のビデオを視聴してください:
+
+この[学習パス](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)を通じて責任あるAIについてさらに学びましょう。
+
+[](https://youtu.be/dnC8-uUZXSc "Microsoftの責任あるAIへのアプローチ")
+
+> 🎥 上の画像をクリックしてビデオを視聴: Microsoftの責任あるAIへのアプローチ
+
+## 公平性
+
+AIシステムはすべての人を公平に扱い、類似のグループに異なる影響を与えないようにする必要があります。例えば、AIシステムが医療処置、ローン申請、または雇用に関するガイダンスを提供する場合、類似の症状、財政状況、または専門資格を持つすべての人に同じ推奨を行うべきです。私たち人間は、意思決定や行動に影響を与える生来のバイアスを持っています。これらのバイアスは、AIシステムをトレーニングするために使用するデータにも現れることがあります。このような操作は時には意図せずに行われることがあります。データにバイアスを導入する際に意識的に気づくことはしばしば困難です。
+
+**「不公平」**とは、人種、性別、年齢、障害状態などの観点から定義される人々のグループに対するネガティブな影響、または「害」を指します。主な公平性に関連する害は次のように分類されます:
+
+- **割り当て**。例えば、性別や民族が他の性別や民族に対して優遇される場合。
+- **サービスの質**。特定のシナリオのためにデータをトレーニングしたが、現実ははるかに複雑である場合、パフォーマンスの低いサービスにつながります。例えば、暗い肌の人々を感知できないハンドソープディスペンサー。[参考](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **中傷**。何かまたは誰かを不公平に批判し、ラベル付けすること。例えば、画像ラベリング技術が暗い肌の人々の画像をゴリラと誤ってラベリングしたことで有名です。
+- **過剰または過小な表現**。特定の職業に特定のグループが見られないという考えであり、そのサービスや機能がそれを促進し続けることは害をもたらします。
+- **ステレオタイプ**。特定のグループを事前に割り当てられた属性と関連付けること。例えば、英語とトルコ語の間の言語翻訳システムが、性別に関するステレオタイプに関連する単語のために不正確さを持つ可能性があります。
+
+
+> トルコ語への翻訳
+
+
+> 英語への翻訳
+
+AIシステムの設計とテストを行う際には、AIが公平であり、バイアスや差別的な決定を下すようにプログラムされていないことを確認する必要があります。AIと機械学習における公平性を保証することは、依然として複雑な社会技術的課題です。
+
+### 信頼性と安全性
+
+信頼を築くためには、AIシステムは通常および予期せぬ条件下でも信頼性が高く、安全で一貫している必要があります。特に外れ値の場合に、AIシステムがどのように挙動するかを知ることが重要です。AIソリューションを構築する際には、AIソリューションが遭遇するさまざまな状況をどのように処理するかに多くの焦点を当てる必要があります。例えば、自動運転車は人々の安全を最優先に考える必要があります。その結果、車を駆動するAIは、夜間、雷雨、吹雪、道路を横切る子供、ペット、道路工事など、車が遭遇する可能性のあるすべてのシナリオを考慮する必要があります。AIシステムが幅広い条件をどれだけ信頼性が高く安全に処理できるかは、データサイエンティストやAI開発者がシステムの設計やテスト中にどれだけの予測を考慮したかを反映しています。
+
+> [🎥 こちらをクリックしてビデオを見る: AIにおける信頼性と安全性](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### 包括性
+
+AIシステムは、すべての人々をエンゲージし、エンパワーメントするように設計されるべきです。AIシステムを設計および実装する際に、データサイエンティストやAI開発者は、システムが意図せずに人々を排除する可能性のある障壁を特定し、対処します。例えば、世界中には10億人の障害者がいます。AIの進歩により、彼らは日常生活で幅広い情報や機会により簡単にアクセスできるようになります。障壁に対処することで、すべての人々に利益をもたらすより良い体験を提供するAI製品を革新し、開発する機会が生まれます。
+
+> [🎥 こちらをクリックしてビデオを見る: AIにおける包括性](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### セキュリティとプライバシー
+
+AIシステムは安全であり、人々のプライバシーを尊重するべきです。プライバシー、情報、または生活を危険にさらすシステムには、人々はあまり信頼を寄せません。機械学習モデルをトレーニングする際には、最良の結果を生み出すためにデータに依存します。そのため、データの出所と整合性を考慮する必要があります。例えば、データがユーザーによって提供されたものか、公開されているものかを確認します。次に、データを扱う際には、機密情報を保護し、攻撃に耐えるAIシステムを開発することが重要です。AIが普及するにつれて、プライバシーを保護し、重要な個人情報やビジネス情報を保護することがますます重要かつ複雑になっています。プライバシーとデータセキュリティの問題は、AIにとって特に注意が必要です。なぜなら、データへのアクセスは、AIシステムが人々について正確で情報に基づいた予測と決定を行うために不可欠だからです。
+
+> [🎥 こちらをクリックしてビデオを見る: AIにおけるセキュリティ](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- 業界として、GDPR(一般データ保護規則)などの規制により、プライバシーとセキュリティの分野で大きな進展を遂げました。
+- しかし、AIシステムにおいては、システムをより個人的で効果的にするためにより多くの個人データが必要であるという必要性とプライバシーの間の緊張関係を認識する必要があります。
+- インターネットを介した接続されたコンピュータの誕生と同様に、AIに関連するセキュリティ問題の数が急増しています。
+- 同時に、セキュリティを向上させるためにAIが使用されているのを目にしています。例えば、ほとんどの最新のアンチウイルススキャナーは、今日ではAIヒューリスティックによって駆動されています。
+- データサイエンスのプロセスが最新のプライバシーとセキュリティの実践と調和するようにする必要があります。
+
+### 透明性
+
+AIシステムは理解可能であるべきです。透明性の重要な部分は、AIシステムとそのコンポーネントの挙動を説明することです。AIシステムの理解を深めるためには、ステークホルダーがそれらがどのように機能し、なぜ機能するのかを理解し、潜在的なパフォーマンスの問題、安全性とプライバシーの懸念、バイアス、排除的な実践、または意図しない結果を特定できるようにする必要があります。また、AIシステムを使用する人々は、いつ、なぜ、どのようにそれらを展開することを選択するのかについて正直で率直であるべきだと考えています。使用するシステムの限界も含めて説明する必要があります。例えば、銀行が消費者向け融資決定を支援するためにAIシステムを使用する場合、その結果を調査し、システムの推奨をどのデータが影響しているのかを理解することが重要です。政府は産業全体でAIを規制し始めているため、データサイエンティストや組織は、AIシステムが規制要件を満たしているかどうかを説明する必要があります。特に望ましくない結果が生じた場合には、特に重要です。
+
+> [🎥 こちらをクリックしてビデオを見る: AIにおける透明性](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- AIシステムは非常に複雑であるため、それらがどのように機能し、結果を解釈するのかを理解するのは難しいです。
+- この理解の欠如は、これらのシステムがどのように管理され、運用され、文書化されるかに影響を与えます。
+- さらに重要なのは、これらのシステムが生成する結果を使用して行われる意思決定に影響を与えます。
+
+### アカウンタビリティ
+
+AIシステムを設計し、展開する人々は、そのシステムの動作に対して責任を負わなければなりません。アカウンタビリティの必要性は、特に顔認識のような敏感な技術において特に重要です。最近、特に失踪した子供を見つけるなどの用途で技術の可能性を見ている法執行機関から、顔認識技術に対する需要が高まっています。しかし、これらの技術は、特定の個人の継続的な監視を可能にすることによって、政府が市民の基本的な自由を危険にさらす可能性があります。したがって、データサイエンティストや組織は、自分たちのAIシステムが個人や社会に与える影響について責任を持つ必要があります。
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoftの責任あるAIへのアプローチ")
+
+> 🎥 上の画像をクリックしてビデオを視聴: 顔認識による大規模監視の警告
+
+最終的に、AIを社会に導入する最初の世代としての私たちの最大の課題の一つは、コンピュータが人々に対して責任を持ち続けることをどのように確保するか、そしてコンピュータを設計する人々が他のすべての人々に対して責任を持ち続けることをどのように確保するかです。
+
+## 影響評価
+
+機械学習モデルをトレーニングする前に、AIシステムの目的、意図された使用法、展開場所、およびシステムとやり取りする人々を理解するために影響評価を実施することが重要です。これらは、システムを評価するレビュアーやテスターに、潜在的なリスクや予期される結果を特定する際に考慮すべき要因を知らせるのに役立ちます。
+
+影響評価を行う際の重点領域は次のとおりです:
+
+* **個人への悪影響**。システムのパフォーマンスを妨げる制限や要件、サポートされていない使用法、既知の制限に注意を払うことは、システムが個人に害を与える方法で使用されないようにするために重要です。
+* **データ要件**。システムがデータをどのように使用し、どこで使用するかを理解することで、考慮すべきデータ要件(例:GDPRやHIPAAデータ規制)を探ることができます。さらに、トレーニングに十分なデータの出所や量を確認します。
+* **影響の概要**。システムの使用によって生じる可能性のある害のリストを収集します。MLライフサイクル全体で、特定された問題が軽減または対処されているかどうかを確認します。
+* **6つのコア原則の適用可能な目標**。各原則の目標が達成されているかどうか、およびギャップがあるかどうかを評価します。
+
+## 責任あるAIによるデバッグ
+
+ソフトウェアアプリケーションのデバッグと同様に、AIシステムのデバッグも、システム内の問題を特定し解決するために必要なプロセスです。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご理解ください。元の言語で書かれた原文が信頼できる情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤った解釈について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/3-fairness/assignment.md b/translations/ja/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..d67d8983c
--- /dev/null
+++ b/translations/ja/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Responsible AI Toolboxを探る
+
+## 指示
+
+このレッスンでは、データサイエンティストがAIシステムを分析・改善するための「オープンソースのコミュニティ主導のプロジェクト」であるResponsible AI Toolboxについて学びました。この課題では、RAI Toolboxの[ノートブック](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb)の一つを探り、その結果を論文やプレゼンテーションで報告してください。
+
+## 評価基準
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| -------- | --------- | -------- | ----------------- |
+| | Fairlearnのシステム、実行したノートブック、およびその結果について論じた論文またはパワーポイントプレゼンテーションが提示されている | 結論のない論文が提示されている | 論文が提示されていない |
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。権威ある情報源として、原文の言語の文書を参照してください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用により生じた誤解や誤認に対して、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/4-techniques-of-ML/README.md b/translations/ja/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..3601eaa94
--- /dev/null
+++ b/translations/ja/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# 機械学習のテクニック
+
+機械学習モデルを構築、使用、維持するプロセスと、それに使用するデータの管理は、他の多くの開発ワークフローとは大きく異なります。このレッスンでは、そのプロセスを解明し、知っておくべき主要なテクニックを概説します。以下のことを学びます:
+
+- 機械学習のプロセスを高いレベルで理解する。
+- 「モデル」、「予測」、「トレーニングデータ」といった基本概念を探る。
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "初心者向け機械学習 - 機械学習のテクニック")
+
+> 🎥 上の画像をクリックすると、このレッスンの短いビデオが再生されます。
+
+## はじめに
+
+高いレベルでは、機械学習(ML)プロセスを作成する技術は以下のいくつかのステップから成り立っています:
+
+1. **質問を決める**。ほとんどのMLプロセスは、単純な条件付きプログラムやルールベースのエンジンでは答えられない質問から始まります。これらの質問は、データのコレクションに基づく予測に関することが多いです。
+2. **データを収集し準備する**。質問に答えるためにはデータが必要です。データの質や量が、最初の質問にどれだけうまく答えられるかを決定します。このフェーズではデータの可視化も重要です。また、データをトレーニング用とテスト用に分割してモデルを構築することも含まれます。
+3. **トレーニング方法を選ぶ**。質問とデータの性質に応じて、モデルをどのようにトレーニングするかを選びます。これはMLプロセスの中で特定の専門知識が必要であり、多くの場合、かなりの実験が必要な部分です。
+4. **モデルをトレーニングする**。トレーニングデータを使用して、データのパターンを認識するモデルをトレーニングします。モデルは内部の重みを調整して、データの特定の部分を他よりも優先することで、より良いモデルを構築することができます。
+5. **モデルを評価する**。収集したデータセットから見たことのないデータ(テストデータ)を使用して、モデルのパフォーマンスを確認します。
+6. **パラメータ調整**。モデルのパフォーマンスに基づいて、異なるパラメータやアルゴリズムを使用してプロセスを再実行することができます。
+7. **予測する**。新しい入力を使用して、モデルの精度をテストします。
+
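+以下は、このプロセス全体をScikit-learnでなぞった最小限のスケッチです(Irisデータセットとロジスティック回帰の選択は、あくまで説明用の仮定です):
+
+```python
+# 上記ステップの最小スケッチ: データ準備 → 分割 → 学習 → 評価 → 予測
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+
+X, y = load_iris(return_X_y=True)          # 特徴量 X とターゲット y を準備する
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=42)  # トレーニング用とテスト用に分割する
+
+model = LogisticRegression(max_iter=200)   # トレーニング方法(モデル)を選ぶ
+model.fit(X_train, y_train)                # モデルをトレーニングする
+
+print(model.score(X_test, y_test))         # 未見のテストデータで評価する
+print(model.predict(X_test[:3]))           # 新しい入力に対して予測する
+```
+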
+## どの質問をするか
+
+コンピュータはデータの中に隠れたパターンを発見するのが得意です。この能力は、条件ベースのルールエンジンを作成するだけでは簡単に答えられない質問を持つ研究者にとって非常に役立ちます。例えば、アクチュアリアルなタスクを考えると、データサイエンティストは喫煙者と非喫煙者の死亡率に関する手作りのルールを構築できるかもしれません。
+
+しかし、他の多くの変数が関与すると、過去の健康履歴に基づいて将来の死亡率を予測するためにMLモデルがより効率的である可能性があります。より楽しい例として、緯度、経度、気候変動、海への近さ、ジェット気流のパターンなどのデータに基づいて、特定の場所の4月の天気予測をすることが挙げられます。
+
+✅ この[スライドデッキ](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf)は、天気予報モデルにおけるMLの使用に関する歴史的な視点を提供しています。
+
+## モデル構築前のタスク
+
+モデルを構築する前に、いくつかのタスクを完了する必要があります。質問をテストし、モデルの予測に基づいて仮説を形成するために、いくつかの要素を特定し、設定する必要があります。
+
+### データ
+
+質問に確実に答えるためには、適切な種類のデータが十分に必要です。この時点で行うべきことは2つあります:
+
+- **データを収集する**。前のレッスンでデータ分析の公平性について学んだことを念頭に置き、データを慎重に収集します。このデータのソース、その中に含まれるバイアス、その出所を文書化します。
+- **データを準備する**。データ準備プロセスにはいくつかのステップがあります。データを収集して正規化する必要があるかもしれません。データの質と量を向上させるために、文字列を数値に変換するなどの方法を使用することができます([クラスタリング](../../5-Clustering/1-Visualize/README.md)で行うように)。また、元のデータに基づいて新しいデータを生成することもできます([分類](../../4-Classification/1-Introduction/README.md)で行うように)。データをクリーンアップし、編集することもできます([Webアプリ](../../3-Web-App/README.md)のレッスンの前に行うように)。最後に、トレーニングテクニックに応じて、データをランダム化し、シャッフルする必要があるかもしれません。
+
+✅ データを収集し処理した後、その形状が意図した質問に答えるのに適しているか確認してください。データが与えられたタスクでうまく機能しないことがあるかもしれません([クラスタリング](../../5-Clustering/1-Visualize/README.md)のレッスンで発見するように)。
+
+### 特徴量とターゲット
+
+[特徴量](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection)はデータの測定可能な特性です。多くのデータセットでは、「日付」、「サイズ」、「色」などの列見出しとして表現されます。特徴量変数は通常コードでは`X`として表され、モデルをトレーニングするために使用される入力変数を表します。
+
+ターゲットは予測しようとしているものです。ターゲットは通常コードでは`y`として表され、データに対して尋ねようとしている質問の答えを表します:12月にはどの**色**のカボチャが最も安いのか?サンフランシスコではどの地区が最も良い不動産**価格**を持つのか?ターゲットはラベル属性とも呼ばれることがあります。
+
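+具体例として、pandasのDataFrameから特徴量 `X` とターゲット `y` を取り出す最小限のスケッチを示します(列名はすべて説明用の仮定です):
+
+```python
+import pandas as pd
+
+# 説明用の仮のデータ(列名 size, color, price は仮定)
+df = pd.DataFrame({
+    "size": [5, 7, 9],
+    "color": [0, 1, 1],       # 文字列の色を数値に変換済みと仮定
+    "price": [10.0, 15.0, 20.0],
+})
+
+X = df[["size", "color"]]     # モデルの入力となる特徴量変数
+y = df["price"]               # 予測したいターゲット(ラベル)
+```
+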
+### 特徴量変数の選択
+
+🎓 **特徴選択と特徴抽出** モデルを構築する際にどの変数を選択すべきかをどうやって知るのでしょうか?おそらく特徴選択や特徴抽出のプロセスを経て、最もパフォーマンスの良いモデルのための適切な変数を選ぶことになるでしょう。しかし、これらは同じことではありません:「特徴抽出は元の特徴の関数から新しい特徴を作成するのに対し、特徴選択は特徴のサブセットを返します。」([出典](https://wikipedia.org/wiki/Feature_selection))
+
+### データの可視化
+
+データサイエンティストのツールキットの重要な側面は、SeabornやMatPlotLibのような優れたライブラリを使用してデータを可視化する力です。データを視覚的に表現することで、活用できる隠れた相関関係を見つけることができます。また、バイアスや不均衡なデータを発見するのにも役立ちます([分類](../../4-Classification/2-Classifiers-1/README.md)で発見するように)。
+
+### データセットの分割
+
+トレーニングの前に、データセットを大きさの異なる2つ以上の部分に分割する必要があります。その際、各部分がデータ全体をよく表現していることが重要です。
+
+- **トレーニング**。データセットのこの部分は、モデルにフィットしてトレーニングします。このセットは元のデータセットの大部分を構成します。
+- **テスト**。テストデータセットは、元のデータから収集された独立したデータのグループで、構築されたモデルのパフォーマンスを確認するために使用します。
+- **検証**。検証セットは、モデルのハイパーパラメータやアーキテクチャを調整してモデルを改善するために使用する小さな独立したデータのグループです。データのサイズと質問に応じて、この3つ目のセットを構築する必要がないかもしれません([時系列予測](../../7-TimeSeries/1-Introduction/README.md)で注目するように)。
+
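+例えば `train_test_split` を2回適用すると、トレーニング・検証・テストの3つに分割できます(分割比率は説明用の仮定です):
+
+```python
+from sklearn.model_selection import train_test_split
+import numpy as np
+
+X = np.arange(100).reshape(50, 2)  # 仮の特徴量
+y = np.arange(50)                  # 仮のターゲット
+
+# まず全体を 80% / 20% でトレーニング用とテスト用に分割する
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=0)
+# トレーニング部分の 25%(全体の 20%)を検証用に確保する
+X_train, X_val, y_train, y_val = train_test_split(
+    X_train, y_train, test_size=0.25, random_state=0)
+```
+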
+## モデルの構築
+
+トレーニングデータを使用して、さまざまなアルゴリズムを使用してデータの統計的表現であるモデルを構築することが目標です。モデルをトレーニングすることで、データに対する仮定を行い、発見したパターンを検証し、受け入れたり拒否したりします。
+
+### トレーニング方法を決定する
+
+質問とデータの性質に応じて、トレーニング方法を選択します。このコースで使用する[Scikit-learnのドキュメント](https://scikit-learn.org/stable/user_guide.html)をステップバイステップで進めることで、モデルをトレーニングするための多くの方法を探ることができます。経験に応じて、最適なモデルを構築するためにいくつかの異なる方法を試す必要があるかもしれません。データサイエンティストが未見のデータをフィードしてモデルのパフォーマンスを評価し、精度、バイアス、その他の品質低下の問題をチェックし、タスクに最も適したトレーニング方法を選択するプロセスを経ることが多いです。
+
+### モデルをトレーニングする
+
+トレーニングデータを用意したら、モデルを「フィット」させる準備が整います。多くのMLライブラリでは、「model.fit」というコードを見つけることができます。このときに特徴量変数(通常は「X」)とターゲット変数(通常は「y」)を値の配列として送信します。
+
+### モデルを評価する
+
+トレーニングプロセスが完了すると(大規模なモデルをトレーニングするには多くの反復や「エポック」が必要な場合があります)、テストデータを使用してモデルのパフォーマンスを評価することができます。このデータは、モデルが以前に分析していない元のデータのサブセットです。モデルの品質に関するメトリクスのテーブルを印刷することができます。
+
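+品質メトリクスの表は、例えばScikit-learnの `classification_report` で表示できます(`model`、`X_test`、`y_test` は前のスケッチで用意済みと仮定しています):
+
+```python
+from sklearn.metrics import classification_report
+
+y_pred = model.predict(X_test)                # テストデータに対する予測
+print(classification_report(y_test, y_pred))  # 適合率・再現率などのメトリクスの表
+```
+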
+🎓 **モデルのフィッティング**
+
+機械学習の文脈では、モデルのフィッティングとは、モデルの基礎となる関数が未知のデータを分析しようとする際の精度を指します。
+
+🎓 **アンダーフィッティング**と**オーバーフィッティング**は、モデルの品質を低下させる一般的な問題です。モデルがトレーニングデータに対して適合しすぎるか、適合しなさすぎるかのいずれかです。オーバーフィットしたモデルは、データの詳細やノイズを学びすぎるため、トレーニングデータを非常によく予測します。アンダーフィットしたモデルは、トレーニングデータも未知のデータも正確に分析できないため、精度が低いです。
+
+
+> インフォグラフィック by [Jen Looper](https://twitter.com/jenlooper)
+
+## パラメータ調整
+
+初期トレーニングが完了したら、モデルの品質を観察し、「ハイパーパラメータ」を調整することで改善することを考えます。プロセスの詳細は[ドキュメント](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott)を参照してください。
+
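+例えば `GridSearchCV` を使うと、ハイパーパラメータの候補を交差検証で比較できます(モデルと候補値は説明用の仮定で、`X_train` と `y_train` は前のスケッチの続きとします):
+
+```python
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import GridSearchCV
+
+params = {"C": [0.1, 1.0, 10.0]}  # 試すハイパーパラメータの候補(仮定)
+search = GridSearchCV(LogisticRegression(max_iter=200), params, cv=5)
+search.fit(X_train, y_train)      # 候補ごとに交差検証でトレーニングする
+print(search.best_params_)        # 最もスコアの良かったパラメータ
+```
+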
+## 予測
+
+これは、完全に新しいデータを使用してモデルの精度をテストする瞬間です。「応用」ML設定では、モデルを実際に使用するためのWebアセットを構築する場合、このプロセスはユーザー入力(例えばボタンの押下)を収集し、変数を設定してモデルに推論または評価のために送信することを含むかもしれません。
+
+これらのレッスンでは、準備、構築、テスト、評価、予測のステップを使用して、データサイエンティストとしてのジェスチャーを学び、より「フルスタック」なMLエンジニアになるための旅を進めていきます。
+
+---
+
+## 🚀チャレンジ
+
+ML実践者のステップを反映したフローチャートを描いてください。現在のプロセスのどこにいると感じますか?どこで困難を感じると予測しますか?何が簡単に感じますか?
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## 復習と自主学習
+
+データサイエンティストが日常の仕事について話しているインタビューをオンラインで検索してください。ここに[一例](https://www.youtube.com/watch?v=Z3IjgbbCEfs)があります。
+
+## 課題
+
+[データサイエンティストにインタビューする](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すため努力しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご理解ください。原文はその言語での正式な文書と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当方は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/4-techniques-of-ML/assignment.md b/translations/ja/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..a73026dd4
--- /dev/null
+++ b/translations/ja/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# データサイエンティストにインタビューする
+
+## 指示
+
+あなたの会社、ユーザーグループ、友人や同級生の中で、データサイエンティストとして専門的に働いている人に話を聞いてください。彼らの日常業務について500文字の短いレポートを書いてください。彼らはスペシャリストですか、それとも「フルスタック」で働いていますか?
+
+## ルーブリック
+
+| 基準 | 模範的 | 十分 | 改善が必要 |
+| -------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | --------------------- |
+| | 正しい長さのエッセイで、出典が明示されており、.docファイルとして提出されている | エッセイの出典が不十分、または必要な長さに満たない | エッセイが提出されていない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。元の言語での原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/1-Introduction/README.md b/translations/ja/1-Introduction/README.md
new file mode 100644
index 000000000..02d7e7f61
--- /dev/null
+++ b/translations/ja/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# 機械学習の紹介
+
+このカリキュラムのセクションでは、機械学習の分野における基本概念、機械学習とは何か、その歴史や研究者が使用する技術について学びます。一緒にこの新しいMLの世界を探求しましょう!
+
+
+> 写真提供 Bill Oxford on Unsplash
+
+### レッスン
+
+1. [機械学習の紹介](1-intro-to-ML/README.md)
+1. [機械学習とAIの歴史](2-history-of-ML/README.md)
+1. [公平性と機械学習](3-fairness/README.md)
+1. [機械学習の技術](4-techniques-of-ML/README.md)
+### クレジット
+
+"Introduction to Machine Learning" は [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) および [Jen Looper](https://twitter.com/jenlooper) を含むチームによって♥️を込めて書かれました。
+
+"The History of Machine Learning" は [Jen Looper](https://twitter.com/jenlooper) および [Amy Boyd](https://twitter.com/AmyKateNicho) によって♥️を込めて書かれました。
+
+"Fairness and Machine Learning" は [Tomomi Imura](https://twitter.com/girliemac) によって♥️を込めて書かれました。
+
+"Techniques of Machine Learning" は [Jen Looper](https://twitter.com/jenlooper) および [Chris Noring](https://twitter.com/softchris) によって♥️を込めて書かれました。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文がある場合は、その原文を正式な情報源とみなしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/1-Tools/README.md b/translations/ja/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..ab0e494e0
--- /dev/null
+++ b/translations/ja/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# PythonとScikit-learnで回帰モデルを始めよう
+
+
+
+> スケッチノート by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [プレレクチャークイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [このレッスンはRでも利用可能です!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## はじめに
+
+これらの4つのレッスンで、回帰モデルの構築方法を学びます。これが何のためにあるのかについては、すぐに説明します。しかし、何かを始める前に、プロセスを開始するための適切なツールが揃っていることを確認してください!
+
+このレッスンでは、以下のことを学びます:
+
+- ローカル機械学習タスクのためにコンピュータを設定する方法
+- Jupyterノートブックの使用方法
+- Scikit-learnのインストールと使用方法
+- ハンズオンエクササイズで線形回帰を探求する方法
+
+## インストールと設定
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML for beginners -Setup your tools ready to build Machine Learning models")
+
+> 🎥 上の画像をクリックして、コンピュータをML用に設定する短いビデオをご覧ください。
+
+1. **Pythonをインストールする**。コンピュータに[Python](https://www.python.org/downloads/)がインストールされていることを確認してください。Pythonは多くのデータサイエンスや機械学習タスクで使用されます。ほとんどのコンピュータシステムには既にPythonがインストールされています。一部のユーザーにとってセットアップを簡単にするための便利な[Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott)もあります。
+
+ ただし、Pythonの使用法によっては、特定のバージョンが必要な場合があります。そのため、[仮想環境](https://docs.python.org/3/library/venv.html)で作業することが便利です。
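+
+   たとえば、標準ライブラリの`venv`モジュールを使うと、次のように仮想環境を作成できます(あくまで一例のスケッチです):
+
+   ```python
+   # カレントディレクトリに .venv という名前の仮想環境を作成する例
+   import venv
+   venv.create('.venv', with_pip=True)
+   ```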
+
+2. **Visual Studio Codeをインストールする**。コンピュータにVisual Studio Codeがインストールされていることを確認してください。基本的なインストール手順については、[Visual Studio Codeのインストール](https://code.visualstudio.com/)に従ってください。このコースではVisual Studio CodeでPythonを使用するので、[Visual Studio CodeのPython開発用の設定](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott)についても確認しておくと良いでしょう。
+
+ > このコレクションの[Learnモジュール](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)を通してPythonに慣れてください。
+ >
+ > [](https://youtu.be/yyQM70vi7V8 "Setup Python with Visual Studio Code")
+ >
+ > 🎥 上の画像をクリックして、VS Code内でPythonを使用するビデオをご覧ください。
+
+3. **Scikit-learnをインストールする**。詳細は[こちらの手順](https://scikit-learn.org/stable/install.html)に従ってください。Python 3を使用する必要があるため、仮想環境を使用することをお勧めします。M1 Macにこのライブラリをインストールする場合は、上記のページに特別な指示があります。
+
+4. **Jupyter Notebookをインストールする**。 [Jupyterパッケージ](https://pypi.org/project/jupyter/)をインストールする必要があります。
+
+## あなたのMLオーサリング環境
+
+**ノートブック**を使用してPythonコードを開発し、機械学習モデルを作成します。このタイプのファイルはデータサイエンティストにとって一般的なツールであり、拡張子`.ipynb`で識別できます。
+
+ノートブックは、開発者がコードを書くだけでなく、コードに関するメモやドキュメントを追加することができるインタラクティブな環境であり、実験的または研究指向のプロジェクトに非常に役立ちます。
+
+[](https://youtu.be/7E-jC8FLA2E "ML for beginners - Set up Jupyter Notebooks to start building regression models")
+
+> 🎥 上の画像をクリックして、このエクササイズを進める短いビデオをご覧ください。
+
+### エクササイズ - ノートブックを使う
+
+このフォルダには、_notebook.ipynb_というファイルがあります。
+
+1. _notebook.ipynb_をVisual Studio Codeで開きます。
+
+   JupyterサーバーがPython 3+で起動します。ノートブックには、実行(`run`)できるコードのセクションがあります。再生ボタンのようなアイコンを選択すると、コードブロックを実行できます。
+
+1. `md`アイコンを選択し、マークダウンを少し追加し、次のテキストを追加します **# Welcome to your notebook**。
+
+ 次に、Pythonコードを追加します。
+
+1. コードブロックに**print('hello notebook')**と入力します。
+1. 矢印を選択してコードを実行します。
+
+ 以下の出力が表示されるはずです:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+コードとコメントを交互に記述してノートブックを自己文書化できます。
+
+✅ ウェブ開発者の作業環境とデータサイエンティストの作業環境の違いについて少し考えてみてください。
+
+## Scikit-learnのセットアップ
+
+Pythonがローカル環境に設定され、Jupyterノートブックに慣れたところで、次はScikit-learnに慣れていきましょう(「science」の「sci」のように「サイ」と発音します)。Scikit-learnはMLタスクを実行するための[広範なAPI](https://scikit-learn.org/stable/modules/classes.html#api-ref)を提供しています。
+
+彼らの[ウェブサイト](https://scikit-learn.org/stable/getting_started.html)によると、「Scikit-learnは、教師あり学習と教師なし学習をサポートするオープンソースの機械学習ライブラリです。また、モデルフィッティング、データ前処理、モデル選択と評価、その他多くのユーティリティのためのさまざまなツールを提供します。」
+
+このコースでは、Scikit-learnやその他のツールを使用して、いわゆる「伝統的な機械学習」タスクを実行するための機械学習モデルを構築します。ニューラルネットワークやディープラーニングは含まれていませんが、それらについては今後の「AI for Beginners」カリキュラムで詳しく取り上げます。
+
+Scikit-learnはモデルの構築と評価を簡単に行えるようにします。主に数値データの使用に焦点を当てており、学習ツールとして使用できるいくつかの既成のデータセットも含まれています。また、学生が試すための事前構築されたモデルも含まれています。パッケージ化されたデータを読み込み、基本的なデータを使用してScikit-learnで最初のMLモデルを構築するプロセスを探りましょう。
+
+## エクササイズ - 初めてのScikit-learnノートブック
+
+> このチュートリアルは、Scikit-learnのウェブサイトにある[線形回帰の例](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py)に触発されました。
+
+[](https://youtu.be/2xkXL5EUpS0 "ML for beginners - Your First Linear Regression Project in Python")
+
+> 🎥 上の画像をクリックして、このエクササイズを進める短いビデオをご覧ください。
+
+このレッスンに関連する_notebook.ipynb_ファイル内のすべてのセルをゴミ箱アイコンを押してクリアします。
+
+このセクションでは、学習目的でScikit-learnに組み込まれている糖尿病に関する小さなデータセットを使用します。糖尿病患者の治療をテストしたいと考えているとします。機械学習モデルは、変数の組み合わせに基づいて、どの患者が治療に反応しやすいかを判断するのに役立つかもしれません。非常に基本的な回帰モデルでも、視覚化すると、理論的な臨床試験を整理するのに役立つ変数に関する情報を示すかもしれません。
+
+✅ 回帰方法には多くの種類があり、どの方法を選ぶかは求める答えによって異なります。ある年齢の人の予想身長を予測したい場合は、線形回帰を使用します。なぜなら、**数値の値**を求めているからです。ある料理がビーガンかどうかを調べたい場合は、**カテゴリの割り当て**を求めているので、ロジスティック回帰を使用します。ロジスティック回帰については後で詳しく学びます。データに対してどのような質問をすることができるか、そしてどの方法が適切かについて少し考えてみてください。
+
+このタスクを始めましょう。
+
+### ライブラリをインポートする
+
+このタスクのためにいくつかのライブラリをインポートします:
+
+- **matplotlib**。便利な[グラフツール](https://matplotlib.org/)で、ラインプロットを作成するために使用します。
+- **numpy**。 [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html)は、Pythonで数値データを扱うのに便利なライブラリです。
+- **sklearn**。これは[Scikit-learn](https://scikit-learn.org/stable/user_guide.html)ライブラリです。
+
+タスクを助けるためにいくつかのライブラリをインポートします。
+
+1. 次のコードを入力してインポートを追加します:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   上記では、`matplotlib`と`numpy`をインポートし、さらに`sklearn`から`datasets`、`linear_model`、`model_selection`をインポートしています。`model_selection`は、データをトレーニングセットとテストセットに分割するために使用されます。
+
+### 糖尿病データセット
+
+組み込みの[糖尿病データセット](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)には、糖尿病に関する442件のサンプルデータと10個の特徴変数が含まれており、その一部には次のものがあります:
+
+- age: 年齢(歳)
+- bmi: ボディマス指数
+- bp: 平均血圧
+- s1 tc: T細胞(白血球の一種)
+
+✅ このデータセットには、糖尿病の研究にとって重要な特徴変数として「性別」の概念が含まれています。多くの医療データセットには、このような二値分類が含まれています。こうした分類が、人口の特定の部分を治療から除外してしまう可能性について少し考えてみてください。
+
+次に、Xとyのデータを読み込みましょう。
+
+> 🎓 これは教師あり学習であり、名前付きの「y」ターゲットが必要であることを思い出してください。
+
+新しいコードセルで、`load_diabetes()`を呼び出して糖尿病データセットを読み込みます。入力`return_X_y=True`は、`X`がデータ行列になり、`y`が回帰ターゲットになることを示します。
+
+1. データマトリックスの形状とその最初の要素を表示するためにいくつかのprintコマンドを追加します:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   返される応答はタプルです。タプルの最初の2つの値をそれぞれ`X`と`y`に割り当てています。[タプルについて](https://wikipedia.org/wiki/Tuple)の詳細を学びましょう。
+
+ このデータは10個の要素で構成された配列の442アイテムがあることがわかります:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+ ✅ データと回帰ターゲットの関係について少し考えてみてください。線形回帰は特徴量Xとターゲット変数yの関係を予測します。このドキュメントで糖尿病データセットの[ターゲット](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)を見つけることができますか?ターゲットを考えると、このデータセットは何を示しているのでしょうか?
+
+2. 次に、プロットするためにこのデータセットの一部として3番目の列を選択します。これは、`:`演算子ですべての行を選択し、その後インデックス(2)を使って3番目の列を選択することで行えます。また、プロットに必要な2次元配列にするために、`reshape(n_rows, n_columns)`を使ってデータを再形成することもできます。パラメータの1つが-1の場合、対応する次元は自動的に計算されます。
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+ ✅ いつでもデータを印刷してその形状を確認してください。
+
+3. データがプロットする準備ができたら、このデータセットの数値間に論理的な分割を見つけるために機械を使用できるかどうかを確認できます。これを行うには、データ(X)とターゲット(y)の両方をテストセットとトレーニングセットに分割する必要があります。Scikit-learnにはこれを簡単に行う方法があります。指定したポイントでテストデータを分割できます。
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. モデルをトレーニングする準備ができました!線形回帰モデルをロードし、Xとyのトレーニングセットを使用して`model.fit()`でトレーニングします:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()`は、TensorFlowなど多くのMLライブラリで目にする関数です
+
+5. 次に、`predict()`関数を使用して、テストデータに対する予測を作成します。これはデータグループ間に線を引くために使用されます。
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. データをプロットに表示する時が来ました。Matplotlibはこのタスクに非常に便利なツールです。すべてのXとyテストデータの散布図を作成し、モデルのデータグループ間に最も適切な場所に線を引くために予測を使用します。
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+ ✅ ここで何が起こっているのか少し考えてみてください。多くの小さなデータ点の間に一直線が引かれていますが、具体的に何をしているのでしょうか?この線を使用して、新しい見えないデータポイントがプロットのy軸との関係でどこにフィットするべきかを予測できることがわかりますか?このモデルの実際の使用法を言葉で表現してみてください。
+
+おめでとうございます、最初の線形回帰モデルを構築し、それを使用して予測を作成し、プロットに表示しました!
+
+---
+## 🚀チャレンジ
+
+このデータセットから別の変数をプロットしてください。ヒント:この行を編集します:`X = X[:,2]`。このデータセットのターゲットを考えると、糖尿病の進行について何を発見できるでしょうか?
+## [ポストレクチャークイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## レビュー&自己学習
+
+このチュートリアルでは、単変量回帰や重回帰ではなく、単純な線形回帰を使用しました。これらの方法の違いについて少し読んでみるか、[このビデオ](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)を見てみてください。
+
+回帰の概念についてさらに読み、どのような質問にこの技術で答えることができるか考えてみてください。この[チュートリアル](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott)を受けて、理解を深めてください。
+
+## 課題
+
+[別のデータセット](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すために努力しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれた文書を権威ある情報源とみなすべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/1-Tools/assignment.md b/translations/ja/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..b4b0ed8bd
--- /dev/null
+++ b/translations/ja/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Scikit-learnを使った回帰
+
+## 説明
+
+Scikit-learnの[Linnerudデータセット](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud)を見てみましょう。このデータセットには複数の[ターゲット](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset)があります。'これはフィットネスクラブに通う20人の中年男性から収集された、3つの運動(データ)と3つの生理学的(ターゲット)変数で構成されています。'
+
+あなた自身の言葉で、ウエストラインと達成された腹筋運動の回数の関係をプロットする回帰モデルの作成方法を説明してください。このデータセットの他のデータポイントについても同様に行ってください。
+
+## ルーブリック
+
+| 基準 | 優秀 | 適切 | 改善の余地あり |
+| ------------------------------ | ----------------------------------- | ----------------------------- | -------------------------- |
+| 説明文の提出 | よく書かれた説明文が提出されている | 数文の説明文が提出されている | 説明文が提出されていない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。元の言語で書かれた原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/1-Tools/solution/Julia/README.md b/translations/ja/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..44b6c1888
--- /dev/null
+++ b/translations/ja/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確な点が含まれる場合があります。元の言語で書かれた原文が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。本翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/2-Data/README.md b/translations/ja/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..4b3123891
--- /dev/null
+++ b/translations/ja/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Scikit-learnで回帰モデルを構築する: データの準備と可視化
+
+
+
+インフォグラフィック作成者: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [このレッスンはRでも利用可能です!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## はじめに
+
+Scikit-learnを使って機械学習モデルを構築するためのツールをセットアップしたので、データに対して質問を始める準備が整いました。データを扱い、MLソリューションを適用する際には、データセットの可能性を正しく引き出すために適切な質問をすることが非常に重要です。
+
+このレッスンでは以下を学びます:
+
+- モデル構築のためのデータの準備方法
+- Matplotlibを使ったデータの可視化方法
+
+## データに対して適切な質問をする
+
+解決したい質問が、使用するMLアルゴリズムの種類を決定します。そして、得られる回答の質はデータの性質に大きく依存します。
+
+このレッスンで提供される[データ](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv)を見てみましょう。この.csvファイルをVS Codeで開くことができます。ざっと見ると、空白や文字列と数値データの混在があることがすぐにわかります。また、「Package」という奇妙な列があり、データは「sacks」、「bins」などの値が混在しています。実際、このデータは少し乱雑です。
+
+[](https://youtu.be/5qGjczWTrDQ "初心者向けML - データセットの分析とクリーニング方法")
+
+> 🎥 上の画像をクリックして、このレッスンのデータ準備を進める短いビデオを視聴してください。
+
+実際、完全に使える状態のデータセットをそのまま受け取ることはあまり一般的ではありません。このレッスンでは、標準的なPythonライブラリを使用して生データセットを準備する方法を学びます。また、データを可視化するためのさまざまな手法も学びます。
+
+## ケーススタディ: 'かぼちゃ市場'
+
+このフォルダには、ルート`data`フォルダに[US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv)という名前の.csvファイルがあります。これは、都市ごとに分類されたかぼちゃ市場に関する1757行のデータを含んでいます。これは、米国農務省が配布している[Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)から抽出された生データです。
+
+### データの準備
+
+このデータはパブリックドメインにあります。USDAのウェブサイトから都市ごとに多数の別々のファイルとしてダウンロードできます。あまりにも多くの別々のファイルを避けるために、すべての都市データを1つのスプレッドシートに連結しましたので、すでに少しデータを準備しています。次に、データを詳しく見てみましょう。
+
+### かぼちゃデータ - 初期の結論
+
+このデータについて何に気づきますか?すでに文字列、数値、空白、奇妙な値が混在していることがわかりました。
+
+回帰手法を使用してこのデータにどのような質問ができますか?例えば、「特定の月に販売されるかぼちゃの価格を予測する」といった質問はどうでしょうか。データを再度見てみると、このタスクに必要なデータ構造を作成するためにいくつかの変更が必要です。
+
+## 演習 - かぼちゃデータの分析
+
+[Pandas](https://pandas.pydata.org/)(名前は`Python Data Analysis`の略です)というデータの整形に非常に役立つツールを使用して、このかぼちゃデータを分析・準備しましょう。
+
+### まず、欠損している日付を確認する
+
+まず、欠損している日付を確認するための手順を踏む必要があります:
+
+1. 日付を月形式に変換します(これらは米国の日付なので、形式は`MM/DD/YYYY`です)。
+2. 新しい列に月を抽出します。
+
+Visual Studio Codeで_notebook.ipynb_ファイルを開き、スプレッドシートを新しいPandasデータフレームにインポートします。
+
+1. `head()`関数を使用して最初の5行を表示します。
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+ ✅ 最後の5行を表示するにはどの関数を使用しますか?
+
+1. 現在のデータフレームに欠損データがあるかどうかを確認します:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+ 欠損データがありますが、現在のタスクには影響がないかもしれません。
+
+1. データフレームを扱いやすくするために、以下の例のように`loc`関数を使って必要な列だけを選択します。`loc`は、元のデータフレームから行のグループ(第1パラメータ)と列のグループ(第2パラメータ)を抽出します。式`:`は「すべての行」を意味します。
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### 次に、かぼちゃの平均価格を決定する
+
+特定の月におけるかぼちゃの平均価格を決定する方法を考えてみましょう。このタスクにはどの列を選びますか?ヒント:3つの列が必要です。
+
+解決策:`Low Price`と`High Price`列の平均を取って新しいPrice列を作成し、Date列を月だけ表示するように変換します。幸い、上記のチェックによると、日付や価格に欠損データはありません。
+
+1. 平均を計算するために、次のコードを追加します:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+ ✅ `print(month)`を使用して、必要なデータをチェックすることができます。
+
+2. 変換したデータを新しいPandasデータフレームにコピーします:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+ データフレームを印刷すると、新しい回帰モデルを構築するためのきれいで整理されたデータセットが表示されます。
+
+### しかし、まだ奇妙な点があります
+
+`Package`列を見ると、かぼちゃは実にさまざまな形態で販売されています。「1 1/9ブッシェル」単位で売られるもの、「1/2ブッシェル」単位のもの、かぼちゃ1個単位、1ポンド単位、さまざまな幅の大きな箱単位のものなどがあります。
+
+> かぼちゃは一貫した方法で量るのがとても難しいようです
+
+元のデータを掘り下げてみると、`Unit of Sale`が「EACH」または「PER BIN」のものは、`Package`のタイプもインチ単位、ビン単位、または「each」になっていて興味深いところです。かぼちゃは一貫して量るのが非常に難しいようなので、`Package`列に文字列「bushel」を含むかぼちゃだけを選択してフィルタリングしましょう。
+
+1. 最初の.csvインポートの下にフィルタを追加します:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+   データを印刷すると、ブッシェル単位でかぼちゃを販売している約415行のデータのみが表示されます。
+
+### しかし、もう一つやるべきことがあります
+
+ブッシェルの量が行ごとに異なることに気づきましたか?価格を標準化するために、ブッシェルあたりの価格を表示するように計算しましょう。
+
+1. 新しい_pumpkinsデータフレームを作成するブロックの後に、次の行を追加します:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308)によると、ブッシェルは体積の単位であるため、その重量は産物の種類によって異なります。「例えば、トマト1ブッシェルは56ポンドの重さがあるとされています…葉物野菜はより多くの空間を占めるわりに軽いため、ほうれん草1ブッシェルはわずか20ポンドです。」これはかなり複雑です!ブッシェルからポンドへの変換はせずに、ブッシェルあたりの価格を表示することにしましょう。このかぼちゃのブッシェルに関する調査は、データの性質を理解することがいかに重要かを示しています!
+
+これで、ブッシェルの測定に基づいて単位あたりの価格を分析できます。データをもう一度印刷すると、標準化されたことがわかります。
+
+✅ 半ブッシェルで販売されているかぼちゃが非常に高価であることに気づきましたか?なぜか考えてみましょう。ヒント:小さなかぼちゃは大きなかぼちゃよりもはるかに高価です。大きな中空のパイ用かぼちゃ1個が占める未使用の空間を考えると、1ブッシェルに入る小さなかぼちゃの数がずっと多いためです。
+
+## 可視化戦略
+
+データサイエンティストの役割の一つは、彼らが扱っているデータの質と性質を示すことです。そのために、しばしば興味深い可視化、プロット、グラフ、チャートを作成し、データのさまざまな側面を示します。このようにして、視覚的に関係やギャップを示すことができ、それを発見するのが難しいことがあります。
+
+[](https://youtu.be/SbUkxH6IJo0 "初心者向けML - Matplotlibを使ったデータの可視化方法")
+
+> 🎥 上の画像をクリックして、このレッスンのデータ可視化を進める短いビデオを視聴してください。
+
+可視化は、データに最も適した機械学習手法を決定するのにも役立ちます。例えば、線に沿っているように見える散布図は、データが線形回帰演習に適していることを示します。
+
+Jupyterノートブックでよく機能するデータ可視化ライブラリの一つに[Matplotlib](https://matplotlib.org/)があります(前のレッスンでも見ました)。
+
+> データ可視化の経験をさらに積むために、[これらのチュートリアル](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott)を参照してください。
+
+## 演習 - Matplotlibを試してみる
+
+作成した新しいデータフレームを表示するために、いくつかの基本的なプロットを作成してみましょう。基本的な折れ線グラフはどのように見えるでしょうか?
+
+1. ファイルの上部に、Pandasのインポートの下にMatplotlibをインポートします:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. ノートブック全体を再実行してリフレッシュします。
+1. ノートブックの下部に、データを散布図としてプロットするセルを追加します:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ このプロットは役に立ちますか?何か驚くことがありますか?
+
+ 特に役に立ちません。データを月ごとの点の広がりとして表示するだけです。
+
+### 役に立つものにする
+
+有用なデータを表示するためには、通常データを何らかの方法でグループ化する必要があります。y軸に月を表示し、データが分布を示すプロットを作成してみましょう。
+
+1. グループ化された棒グラフを作成するセルを追加します:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ これはより有用なデータ可視化です!9月と10月にかぼちゃの最高価格が発生しているようです。それはあなたの期待に合っていますか?なぜそう思いますか?
+
+---
+
+## 🚀チャレンジ
+
+Matplotlibが提供するさまざまな種類の可視化を探ってみましょう。回帰問題に最も適した種類はどれですか?
+
+## [講義後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## レビューと自己学習
+
+データを可視化するさまざまな方法を見てみましょう。利用可能なさまざまなライブラリのリストを作成し、特定のタスクに最適なものをメモしてください。例えば、2D可視化と3D可視化の違いについて調べてみてください。何がわかりましたか?
+
+## 課題
+
+[可視化の探索](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の母国語の文書を権威ある情報源と見なしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤った解釈については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/2-Data/assignment.md b/translations/ja/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..f16b7e2a1
--- /dev/null
+++ b/translations/ja/2-Regression/2-Data/assignment.md
@@ -0,0 +1,12 @@
+# ビジュアライゼーションの探求
+
+データビジュアライゼーションにはいくつかの異なるライブラリが利用可能です。このレッスンで使用するカボチャのデータを使って、matplotlibとseabornを使用してサンプルノートブックでいくつかのビジュアライゼーションを作成します。どのライブラリが使いやすいですか?
+
+## ルーブリック
+
+| 基準 | 優秀 | 適切 | 改善の余地あり |
+| -------- | --------- | -------- | ----------------- |
+| | 2つの探求/ビジュアライゼーションを含むノートブックが提出される | 1つの探求/ビジュアライゼーションを含むノートブックが提出される | ノートブックが提出されない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。原文の言語による文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用によって生じる誤解や誤解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/2-Data/solution/Julia/README.md b/translations/ja/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..88ea5dacf
--- /dev/null
+++ b/translations/ja/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努めていますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語での原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用から生じる誤解や誤認については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/3-Linear/README.md b/translations/ja/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..4a56fb755
--- /dev/null
+++ b/translations/ja/2-Regression/3-Linear/README.md
@@ -0,0 +1,355 @@
+# Scikit-learnを使用して回帰モデルを構築する: 回帰の4つの方法
+
+
+> インフォグラフィック作成者 [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [このレッスンはRでも利用可能です!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### はじめに
+
+これまでに、かぼちゃの価格データセットから収集したサンプルデータを使用して、回帰が何であるかを調査しました。また、Matplotlibを使用してそれを視覚化しました。
+
+今度は、MLの回帰についてさらに深く掘り下げる準備ができました。視覚化はデータを理解するのに役立ちますが、機械学習の本当の力はモデルのトレーニングにあります。モデルは過去のデータに基づいてデータの依存関係を自動的にキャプチャし、モデルが以前に見たことのない新しいデータの結果を予測できるようにします。
+
+このレッスンでは、基本的な線形回帰と多項式回帰の2種類の回帰について学びます。これらのモデルを使用して、異なる入力データに基づいてかぼちゃの価格を予測できるようになります。
+
+[](https://youtu.be/CRxFT8oTDMg "初心者向けML - 線形回帰の理解")
+
+> 🎥 上の画像をクリックすると、線形回帰の概要を短いビデオで確認できます。
+
+> このカリキュラム全体を通じて、数学の知識が最小限であることを前提としており、他の分野から来る学生にも理解しやすいように、ノート、🧮 コールアウト、図表、その他の学習ツールを活用しています。
+
+### 前提条件
+
+これまでに、調査しているかぼちゃデータの構造に慣れているはずです。このレッスンの_notebook.ipynb_ファイルに、あらかじめ読み込まれ、前処理された状態でデータが含まれています。ファイルには、かぼちゃの価格が新しいデータフレームでブッシェルごとに表示されています。これらのノートブックをVisual Studio Codeのカーネルで実行できるようにしてください。
+
+### 準備
+
+データを読み込む目的を思い出してください。
+
+- かぼちゃを買うのに最適な時期はいつですか?
+- ミニチュアかぼちゃ1ケースの価格はどのくらいですか?
+- それらを半ブッシェルバスケットで買うべきですか、それとも1 1/9ブッシェルボックスで買うべきですか?
+
+このデータをさらに掘り下げてみましょう。
+
+前のレッスンでは、Pandasデータフレームを作成し、元のデータセットの一部を取り込み、ブッシェル単位で価格を標準化しました。しかし、その結果、約400のデータポイントしか収集できず、秋の数か月分のデータしかありませんでした。
+
+このレッスンのノートブックに事前に読み込まれたデータを見てみましょう。データは事前に読み込まれ、月ごとのデータを示す初期の散布図が描かれています。データをさらにクリーンアップすることで、データの性質についてもう少し詳細を得ることができるかもしれません。
+
+## 線形回帰のライン
+
+レッスン1で学んだように、線形回帰の目的は次のことができるようにラインをプロットすることです:
+
+- **変数の関係を示す**。変数間の関係を示す
+- **予測を行う**。新しいデータポイントがそのラインに対してどこに位置するかを正確に予測する
+
+このタイプのラインを描くのは、通常、**最小二乗法回帰**です。「最小二乗」とは、回帰線の周りのすべてのデータポイントを二乗してから合計することを意味します。理想的には、その最終的な合計ができるだけ小さいことが望まれます。なぜなら、エラーの数を少なくしたいからです。
+
+すべてのデータポイントからの累積距離が最小になるようにラインをモデル化したいからです。また、方向よりも大きさを重視するため、項を加算する前に二乗します。
+
+> **🧮 数学を見せて**
+>
+> このライン、最適フィットラインは[方程式](https://en.wikipedia.org/wiki/Simple_linear_regression)で表すことができます:
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X`は「説明変数」、`Y`は「従属変数」です。直線の傾きが`b`で、`a`はy切片、つまり`X = 0`のときの`Y`の値を指します。
+>
+>
+>
+> まず、傾き`b`を計算します。インフォグラフィック by [Jen Looper](https://twitter.com/jenlooper)
+>
+> 言い換えると、かぼちゃデータの元の質問「月ごとにブッシェルあたりのかぼちゃの価格を予測する」に当てはめると、`X`は価格を、`Y`は販売月を指します。
+>
+>
+>
+> Yの値を計算してみましょう。約4ドル払っているなら、それは4月に違いありません!インフォグラフィック by [Jen Looper](https://twitter.com/jenlooper)
+>
+> 直線を計算する数学では、直線の傾きを示す必要があります。これは切片、つまり`X = 0`のときの`Y`の位置にも依存します。
+>
+> これらの値の計算方法は、[Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html)のウェブサイトで確認できます。また、[この最小二乗法計算機](https://www.mathsisfun.com/data/least-squares-calculator.html)で、数値が直線にどのように影響するかも見てみてください。
+
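+上の式を、本文にはない仮の小さなデータで直接計算してみると、次のようになります(検算用のスケッチです):
+
+```python
+# 最小二乗法の傾き b と切片 a を定義どおりに計算する例(データは仮の値です)
+import numpy as np
+
+x = np.array([1, 2, 3, 4, 5], dtype=float)
+y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
+
+b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
+a = y.mean() - b * x.mean()
+print(f'Y = {a:.2f} + {b:.2f}X')  # おおよそ Y = 0.14 + 1.96X
+```
+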
+## 相関
+
+もう一つ理解しておくべき用語は、与えられたXとYの変数間の**相関係数**です。散布図を使うと、この係数をすばやく視覚化できます。データポイントがきれいな直線状に並んで散らばっているプロットは相関が高く、データポイントがXとYの間のあらゆる場所に散らばっているプロットは相関が低くなります。
+
+良い線形回帰モデルとは、最小二乗法による回帰直線を用いたときに、相関係数が高い(0より1に近い)モデルです。
+
+✅ このレッスンに付属するノートブックを実行し、「月と価格」の散布図を見てください。散布図を視覚的に解釈すると、かぼちゃの販売における月と価格を関連付けるデータの相関は高いでしょうか、低いでしょうか?`Month`の代わりに、より細かい尺度、たとえば*年間通算日*(年初からの日数)を使うと、それは変わりますか?
+
+以下のコードでは、データをクリーンアップして、次のような`new_pumpkins`というデータフレームが得られていると仮定します:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> データをクリーンアップするコードは[`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb)にあります。前のレッスンと同じクリーニング手順を実行し、次の式を使って`DayOfYear`列を計算しています:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+線形回帰の数学的背景を理解したので、回帰モデルを作成して、どのパッケージのかぼちゃが最も良い価格を持つかを予測してみましょう。ホリデーパンプキンパッチのためにかぼちゃを購入する人は、この情報を利用してかぼちゃパッケージの購入を最適化することができるでしょう。
+
+## 相関を探す
+
+[](https://youtu.be/uoRq-lW2eQo "初心者向けML - 相関を探す: 線形回帰の鍵")
+
+> 🎥 上の画像をクリックすると、相関の概要を短いビデオで確認できます。
+
+前のレッスンで、異なる月の平均価格が次のように見えることに気づいたかもしれません:
+
+
+
+これは、`Month`と`Price`の間、または`DayOfYear`と`Price`の間に、ある種の相関があることを示唆しています。後者の関係を示す散布図は次のとおりです:
+
+
+
+`corr`関数を使用して、相関があるかどうかを見てみましょう:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+相関は`Month`で-0.15、`DayOfYear`で-0.17とかなり小さいようですが、別の重要な関係があるかもしれません。かぼちゃの品種ごとに、異なる価格のクラスターがあるように見えます。この仮説を確認するために、各かぼちゃのカテゴリを異なる色でプロットしてみましょう。`scatter`プロット関数に`ax`パラメータを渡すことで、すべてのポイントを同じグラフにプロットできます:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+私たちの調査は、実際の販売日よりも品種が全体の価格に影響を与えることを示唆しています。これを棒グラフで確認できます:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+今のところ、'パイタイプ'のかぼちゃ品種にのみ焦点を当て、日付が価格に与える影響を見てみましょう:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+`corr`関数を使って`Price`と`DayOfYear`の相関を計算すると、`-0.27`程度の値が得られます。これは、予測モデルをトレーニングすることに意味があることを示しています。
+
+> 線形回帰モデルをトレーニングする前に、データがクリーンであることを確認することが重要です。線形回帰は欠損値に対してうまく機能しないため、すべての空のセルを取り除くことが理にかなっています:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+もう一つのアプローチは、それらの空の値を対応する列の平均値で埋めることです。
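+
+たとえば pandas では、次のように欠損値を平均値で埋められます(本文のコードには含まれない、一例のスケッチです):
+
+```python
+# 欠損している Price を列の平均値で埋める例(dropna の代わりの選択肢)
+# レッスンで作成した pie_pumpkins データフレームを前提としています
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```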
+
+## 単純な線形回帰
+
+[](https://youtu.be/e4c_UP2fSjg "初心者向けML - Scikit-learnを使用した線形および多項式回帰")
+
+> 🎥 上の画像をクリックすると、線形回帰と多項式回帰の概要を短いビデオで確認できます。
+
+線形回帰モデルをトレーニングするために、**Scikit-learn**ライブラリを使用します。
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+まず、入力値(特徴)と予想出力(ラベル)を別々のnumpy配列に分けます:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> 入力データに`reshape`を実行する必要があることに注意してください。線形回帰は2D配列を入力として期待し、配列の各行が入力特徴のベクトルに対応します。私たちの場合、入力が1つしかないため、形状がN×1の配列が必要です。Nはデータセットのサイズです。
+
+次に、データをトレーニングデータセットとテストデータセットに分割し、トレーニング後にモデルを検証できるようにします:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+最後に、実際の線形回帰モデルのトレーニングはわずか2行のコードで行えます。`LinearRegression`オブジェクトを作成し、`fit`メソッドを使ってデータに適合させます:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+`fit`した後の`LinearRegression`オブジェクトには回帰のすべての係数が含まれており、`.coef_`プロパティでアクセスできます。今回の場合、係数は1つだけで、およそ`-0.017`になるはずです。これは、価格が時間とともに少しずつ、1日あたり約2セント下がるように見えることを意味します。また、`lin_reg.intercept_`を使って回帰直線とY軸の交点を取得することもできます。これはおよそ`21`で、年初の価格を示しています。
+
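+係数と切片は、たとえば次のように出力して確認できます(値は本文の説明に基づく目安です):
+
+```python
+print('Slope:', lin_reg.coef_)          # およそ -0.017(1日あたりの価格変化)
+print('Intercept:', lin_reg.intercept_) # およそ 21(年初の価格)
+```
+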
+モデルの精度を確認するために、テストデータセットで価格を予測し、予測値と期待値の違いを測定します。これは、期待値と予測値のすべての二乗誤差の平均である平均二乗誤差(MSE)を使用して行うことができます。
+
+```python
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+誤差は約2ポイントで、約17%です。あまり良くありません。モデルの品質のもう一つの指標は**決定係数**であり、次のように取得できます:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+値が0の場合、モデルは入力データを考慮せず、*最悪の線形予測器*として機能し、単に結果の平均値を示します。値が1の場合、すべての期待出力を完全に予測できることを意味します。私たちの場合、決定係数は約0.06で、かなり低いです。
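+
+なお、上の`score`はトレーニングセットに対して計算しています。参考までに、テストセットに対する決定係数は`r2_score`で確認できます(一例のスケッチです):
+
+```python
+from sklearn.metrics import r2_score
+
+# テストセットに対する決定係数
+print('R2 on test set:', r2_score(y_test, pred))
+```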
+
+テストデータと回帰ラインを一緒にプロットして、回帰がどのように機能するかをよりよく見ることができます:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## 多項式回帰
+
+もう一つの線形回帰のタイプは多項式回帰です。変数間に線形関係がある場合もあります。たとえば、かぼちゃの体積が大きいほど価格が高くなる、というような関係です。しかし、こうした関係は平面や直線としてプロットできないこともあります。
+
+✅ 多項式回帰が使えるデータの[いくつかの例](https://online.stat.psu.edu/stat501/lesson/9/9.8)を紹介します。
+
+日付と価格の関係をもう一度見てみましょう。この散布図は直線で分析すべきだと思いますか?価格は変動する可能性がありますか?この場合、多項式回帰を試すことができます。
+
+✅ 多項式は、1つ以上の変数と係数で構成される数学的表現です。
+
+多項式回帰は、非線形データにより適合する曲線を作成します。私たちの場合、入力データに`DayOfYear`の二乗変数を含めると、年のある時点で最小値を持つ放物線をフィットさせることができます。
+
+Scikit-learnには、データ処理の異なるステップを組み合わせるための便利な[パイプラインAPI](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline)が含まれています。**パイプライン**は、**推定器**のチェーンです。私たちの場合、まずモデルに多項式特徴を追加し、その後回帰をトレーニングするパイプラインを作成します:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+`PolynomialFeatures(2)`は、入力データからすべての2次の多項式項を含めることを意味します。今回の場合は単に`DayOfYear`の2乗を意味しますが、XとYの2つの入力変数がある場合は、X²、XY、Y²の項が追加されます。必要であれば、より高次の多項式を使用することもできます。
+
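+この挙動は、本文にはない仮の小さな配列で確認できます(一例のスケッチです):
+
+```python
+import numpy as np
+from sklearn.preprocessing import PolynomialFeatures
+
+# 2つの入力変数 X, Y を持つ仮の1サンプル
+sample = np.array([[3, 5]])
+
+poly = PolynomialFeatures(2)
+print(poly.fit_transform(sample))
+# => [[ 1.  3.  5.  9. 15. 25.]] つまり 1, X, Y, X^2, XY, Y^2
+```
+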
+パイプラインは、元の`LinearRegression`オブジェクトと同じように使用できます。つまり、パイプラインを`fit`し、その後`predict`で予測結果を取得できます。テストデータと近似曲線を示すグラフは次のとおりです:
+
+
+
+多項式回帰を使用すると、MSEがわずかに低くなり、決定係数もやや高くなりますが、大きな差はありません。他の特徴量を考慮する必要があります!
+
+> かぼちゃの最低価格は、ハロウィンの前後あたりで観測されていることがわかります。これをどう説明できますか?
+
+🎃 おめでとうございます。パイ用かぼちゃの価格予測に役立つモデルを作成できました。おそらくすべてのかぼちゃのタイプに対して同じ手順を繰り返すこともできますが、それは面倒です。次は、モデルでかぼちゃの品種を考慮する方法を学びましょう!
+
+## カテゴリカル特徴量
+
+理想的には、同じモデルを使って異なるかぼちゃ品種の価格を予測できるようにしたいところです。しかし、`Variety`列は`Month`のような列とは少し異なり、数値ではない値を含んでいます。このような列は**カテゴリカル**と呼ばれます。
+
+[](https://youtu.be/DYGliioIAE0 "ML for beginners - Categorical Feature Predictions with Linear Regression")
+
+> 🎥 上の画像をクリックすると、カテゴリカル特徴量の使用に関する短い概要ビデオを確認できます。
+
+品種によって平均価格がどのように変わるかを、以下で確認できます:
+
+
+
+品種を考慮するには、まずそれを数値形式に変換する、つまり**エンコード**する必要があります。これにはいくつかの方法があります:
+
+* 単純な**数値エンコーディング**では、品種ごとの対応表を作成し、品種名をその表のインデックスに置き換えます(このリストの後に小さな例を示します)。これは線形回帰にとって最良の方法ではありません。線形回帰はインデックスの実際の数値をそのまま受け取り、係数を掛けて結果に加算するためです。今回の場合、インデックス番号と価格の関係は、インデックスを特定の順序に並べたとしても、明らかに非線形です。
+* **ワンホットエンコーディング**では、`Variety`列を品種ごとに1つずつ、4つの列に置き換えます。各列は、対応する行が指定の品種であれば`1`を、そうでなければ`0`を含みます。つまり、線形回帰には品種ごとに1つ、合計4つの係数が存在することになり、それぞれの係数がその品種の「開始価格」(あるいは「追加価格」)を表します。
+
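+参考までに、単純な数値エンコーディングは、たとえば次のように行えます(本文にはない仮のデータを使ったスケッチです):
+
+```python
+import pandas as pd
+
+# 品種名をカテゴリのインデックスに置き換える単純な数値エンコーディングの例
+varieties = pd.Series(['PIE TYPE', 'FAIRYTALE', 'PIE TYPE', 'MINIATURE'])
+print(varieties.astype('category').cat.codes.tolist())
+# => [2, 0, 2, 1](カテゴリはアルファベット順に番号付けされます)
+```
+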
+以下のコードは、品種をワンホットエンコードする方法を示しています:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+ワンホットエンコードされた品種を使用して線形回帰をトレーニングするには、`X`と`y`のデータを正しく初期化するだけです:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+他のコードは、上で線形回帰をトレーニングするために使用したコードと同じです。試してみると、平均二乗誤差はほぼ同じですが、決定係数は大幅に高くなります(約77%)。さらに正確な予測を行うには、より多くのカテゴリカル特徴量に加えて、`Month`や`DayOfYear`のような数値特徴量も考慮できます。特徴量を1つの大きな配列にまとめるには、`join`を使用します:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+ここでは、`City`と`Package`タイプも考慮しており、MSEは2.84(10%)、決定係数は0.94になります!
+
+## すべてをまとめる
+
+最良のモデルを作成するために、上記の例からの結合データ(ワンホットエンコードされたカテゴリカルデータ+数値データ)と多項式回帰を使用します。ここに完全なコードがあります:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+これにより、決定係数がほぼ97%、MSE=2.23(約8%の予測誤差)となります。
+
+| モデル | MSE | 決定係数 |
+|-------|-----|---------------|
+| `DayOfYear` 線形 | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` 多項式 | 2.73 (17.0%) | 0.08 |
+| `Variety` 線形 | 5.24 (19.7%) | 0.77 |
+| すべての特徴 線形 | 2.84 (10.5%) | 0.94 |
+| すべての特徴 多項式 | 2.23 (8.25%) | 0.97 |
+
+🏆 よくできました!1つのレッスンで4つの回帰モデルを作成し、モデルの品質を97%まで向上させました。回帰に関する最終セクションでは、カテゴリを判定するためのロジスティック回帰について学びます。
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語による文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/3-Linear/assignment.md b/translations/ja/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..fff2e1173
--- /dev/null
+++ b/translations/ja/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# 回帰モデルの作成
+
+## 手順
+
+このレッスンでは、線形回帰と多項式回帰の両方を使用してモデルを構築する方法が示されました。この知識を活用して、データセットを見つけるか、Scikit-learn の組み込みセットのいずれかを使用して新しいモデルを構築してください。ノートブックで、なぜその手法を選んだのかを説明し、モデルの精度を実証してください。もし精度が低い場合は、その理由を説明してください。
+
+## ルーブリック
+
+| 基準 | 模範的な | 適切な | 改善が必要な |
+| --------- | ------------------------------------------------------------ | -------------------------- | ------------------------------ |
+| | 完全にドキュメント化された解決策を含む完全なノートブックを提示する | 解決策が不完全である | 解決策に欠陥やバグがある |
+
+**免責事項**:
+この文書は、機械翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語で書かれた原文を公式な情報源と見なしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/3-Linear/solution/Julia/README.md b/translations/ja/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..08c7f1db5
--- /dev/null
+++ b/translations/ja/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じた誤解や誤った解釈について、当方は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/4-Logistic/README.md b/translations/ja/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..eeebca6ea
--- /dev/null
+++ b/translations/ja/2-Regression/4-Logistic/README.md
@@ -0,0 +1,328 @@
+# カテゴリー予測のためのロジスティック回帰
+
+
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [このレッスンはRでも利用可能です!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## はじめに
+
+この回帰に関する最終レッスンでは、基本的な _クラシック_ な機械学習技術の一つであるロジスティック回帰について見ていきます。この技術を使用して、二値のカテゴリーを予測するパターンを発見することができます。このキャンディーはチョコレートかどうか?この病気は伝染するかどうか?この顧客はこの製品を選ぶかどうか?
+
+このレッスンで学ぶこと:
+
+- データビジュアライゼーションのための新しいライブラリ
+- ロジスティック回帰の技術
+
+✅ このタイプの回帰を扱う理解を深めるために、この [Learnモジュール](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott) を参照してください。
+
+## 前提条件
+
+パンプキンデータを扱ってきたので、二値のカテゴリーが一つあることに気づくことができました:`Color`。
+
+いくつかの変数を与えられた場合に、_特定のカボチャの色がオレンジ 🎃 か白 👻 かを予測する_ ロジスティック回帰モデルを構築しましょう。
+
+> なぜ回帰に関するレッスンで二値分類について話すのでしょうか?それは言語上の都合によるものです。ロジスティック回帰は[実際には分類手法](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)ですが、線形に基づいているためです。次のレッスングループでは、データを分類する他の方法について学びます。
+
+## 質問を定義する
+
+私たちの目的のために、これを「白」または「白ではない」として表現します。データセットには「ストライプ」カテゴリもありますが、インスタンスが少ないため使用しません。データセットからnull値を削除すると、いずれにせよ消えます。
+
+> 🎃 面白い事実:白いカボチャを「ゴーストカボチャ」と呼ぶことがあります。彫るのが難しいので、オレンジのカボチャほど人気はありませんが、見た目はクールです!したがって、質問を「ゴースト」または「ゴーストではない」と再定義することもできます。👻
+
+## ロジスティック回帰について
+
+ロジスティック回帰は、以前に学んだ線形回帰とはいくつかの重要な点で異なります。
+
+[](https://youtu.be/KpeCT6nEpBY "ML初心者向け - 機械学習分類のためのロジスティック回帰の理解")
+
+> 🎥 上の画像をクリックして、ロジスティック回帰の概要を短いビデオで確認してください。
+
+### 二値分類
+
+ロジスティック回帰は、線形回帰と同じ機能を提供しません。前者は二値のカテゴリー(「白か白ではない」)についての予測を提供しますが、後者は連続的な値を予測することができます。たとえば、カボチャの産地と収穫時期を考慮して、_その価格がどの程度上昇するか_ を予測できます。
+
+
+> インフォグラフィック by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### 他の分類
+
+ロジスティック回帰には、他にも多項式や順序などの種類があります:
+
+- **多項式**:複数のカテゴリを持つもの - 「オレンジ、白、ストライプ」。
+- **順序**:順序付けられたカテゴリを持つもの。たとえば、カボチャのサイズ(ミニ、小、中、大、XL、XXL)で順序付ける場合など。
+
+
+
+### 変数が相関している必要はない
+
+線形回帰が、相関の高い変数ほどうまく機能することを覚えていますか?ロジスティック回帰はその逆で、変数同士が相関している必要はありません。やや弱い相関しかないこのデータでも機能します。
+
+### たくさんのクリーンなデータが必要
+
+ロジスティック回帰は、データが多いほど正確な結果をもたらします。私たちの小さなデータセットはこのタスクには最適ではないので、それを念頭に置いてください。
+
+[](https://youtu.be/B2X4H9vcXTs "ML初心者向け - ロジスティック回帰のためのデータ分析と準備")
+
+> 🎥 上の画像をクリックして、ロジスティック回帰のためのデータ準備の概要を短いビデオで確認してください。
+
+✅ ロジスティック回帰に適したデータの種類について考えてみてください。
+
+## 演習 - データの整頓
+
+まず、データを少しクリーンにし、null値を削除し、いくつかの列のみを選択します:
+
+1. 以下のコードを追加します:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ 新しいデータフレームを覗いてみることもできます:
+
+ ```python
+ pumpkins.info
+ ```
+
+### ビジュアライゼーション - カテゴリカルプロット
+
+ここまでで、かぼちゃデータを再度読み込み、いくつかの変数を含むデータセットを保持するようにクリーニングした[スターターノートブック](../../../../2-Regression/4-Logistic/notebook.ipynb)をロードしてきました。今度は、ノートブックでデータフレームを新しいライブラリを使って視覚化しましょう:[Seaborn](https://seaborn.pydata.org/index.html)は、以前使用したMatplotlibの上に構築されています。
+
+Seabornはデータをビジュアライズするための便利な方法を提供します。たとえば、`Variety`と`Color`のデータの分布をカテゴリカルプロットで比較することができます。
+
+1. かぼちゃデータ`pumpkins`を使用し、各かぼちゃカテゴリ(オレンジまたは白)の色マッピングを指定して、`catplot`関数でプロットを作成します:
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+ データを観察することで、色データがVarietyにどのように関連しているかがわかります。
+
+ ✅ このカテゴリカルプロットを見て、どのような興味深い探索ができるか考えてみてください。
+
+### データ前処理:特徴とラベルのエンコーディング
+私たちのカボチャデータセットには、すべての列に文字列値が含まれています。カテゴリカルデータを扱うことは人間にとっては直感的ですが、機械にはそうではありません。機械学習アルゴリズムは数値でうまく機能します。そのため、エンコーディングはデータ前処理フェーズで非常に重要なステップです。これにより、カテゴリカルデータを数値データに変換することができ、情報を失うことなく行えます。良いエンコーディングは良いモデルの構築につながります。
+
+特徴エンコーディングには2つの主要なエンコーダーがあります:
+
+1. 順序エンコーダー:これは順序変数に適しています。順序変数は、データが論理的な順序に従うカテゴリカル変数です。私たちのデータセットの`Item Size`列のようなものです。各カテゴリを数値で表し、その列の順序に従ってマッピングを作成します。
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. カテゴリカルエンコーダー:これは名義変数に適しています。名義変数は、データが論理的な順序に従わないカテゴリカル変数です。データセットの`Item Size`以外のすべての特徴がこれに該当します。これはワンホットエンコーディングであり、各カテゴリをバイナリ列で表します。エンコードされた変数が1の場合、そのカボチャがそのVarietyに属し、0の場合はそうではありません。
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+その後、`ColumnTransformer`を使用して、複数のエンコーダーを1つのステップに組み合わせて適切な列に適用します。
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+一方、ラベルをエンコードするために、scikit-learnの`LabelEncoder`クラスを使用します。これは、ラベルを0からn_classes-1(ここでは0と1)の間の値のみを含むように正規化するユーティリティクラスです。
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+特徴とラベルをエンコードしたら、新しいデータフレーム`encoded_pumpkins`にマージできます。
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+
+✅ `Item Size`列に順序エンコーダーを使用する利点は何でしょうか?
+
+### 変数間の関係を分析する
+
+データの前処理が完了したので、特徴とラベルの関係を分析し、特徴が与えられたときにモデルがどの程度うまくラベルを予測できそうかを把握しましょう。
+この種の分析を行う最良の方法は、データをプロットすることです。ここでも Seaborn の`catplot`関数を使用して、`Item Size`、`Variety`、`Color`の関係をカテゴリカルプロットで視覚化します。データをよりよくプロットするために、エンコードされた`Item Size`列と、エンコードされていない`Variety`列を使用します。
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### スウォームプロットを使用する
+
+色は二値のカテゴリ(白か白ではない)であるため、ビジュアライゼーションには「[特化したアプローチ](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar)」が必要です。このカテゴリと他の変数の関係をビジュアライズする他の方法もあります。
+
+Seabornプロットを使用して変数を並べて視覚化することができます。
+
+1. 値の分布を示すために「スウォーム」プロットを試してみてください:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**注意**:上記のコードは警告を生成する可能性があります。これはSeabornがスウォームプロットに多くのデータポイントを表示するのに失敗するためです。解決策として、マーカーのサイズを「size」パラメーターを使用して小さくすることが考えられます。ただし、これによりプロットの可読性が影響を受ける可能性があることに注意してください。
+
+> **🧮 数学を見せて**
+>
+> ロジスティック回帰は「最大尤度」の概念に依存しており、[シグモイド関数](https://wikipedia.org/wiki/Sigmoid_function)を使用します。プロット上の「シグモイド関数」は「S」字型に見えます。これは値を取り、それを0から1の間のどこかにマッピングします。この曲線は「ロジスティック曲線」とも呼ばれます。その公式は次のようになります:
+>
+> 
+>
+> ここで、シグモイドの中点はxの0点にあり、Lは曲線の最大値、kは曲線の急峻さです。関数の結果が0.5を超える場合、そのラベルは二値選択の「1」として分類されます。それ以外の場合は「0」として分類されます。
+
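+シグモイド曲線の形は、たとえば次のような小さなスケッチで確認できます(本文にはない仮の例です):
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+def sigmoid(x):
+    return 1 / (1 + np.exp(-x))
+
+x = np.linspace(-10, 10, 200)
+plt.plot(x, sigmoid(x))
+plt.axhline(0.5, linestyle='--', color='gray')  # 0.5 が二値分類のしきい値
+plt.title('Logistic (sigmoid) curve')
+plt.show()
+```
+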
+## モデルを構築する
+
+Scikit-learnでこの二値分類を見つけるモデルを構築するのは驚くほど簡単です。
+
+[](https://youtu.be/MmZS2otPrQ8 "ML初心者向け - データの分類のためのロジスティック回帰")
+
+> 🎥 上の画像をクリックして、ロジスティック回帰モデルの構築についての短いビデオを確認してください。
+
+1. 分類モデルで使用する変数を選択し、`train_test_split()`を呼び出してトレーニングセットとテストセットに分割します:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. 次に、トレーニングデータを使用してモデルをトレーニングし、結果を出力します:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ モデルのスコアボードを見てみましょう。約1000行のデータしかないことを考えると、悪くないです:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## 混同行列による理解の向上
+
+上記の項目を印刷してスコアボードレポートを取得することもできますが、[混同行列](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix)を使用してモデルのパフォーマンスを理解する方が簡単かもしれません。
+
+> 🎓 「[混同行列](https://wikipedia.org/wiki/Confusion_matrix)」または「エラーマトリックス」は、モデルの真陽性と偽陽性、および真陰性と偽陰性を表現する表であり、予測の正確性を測定します。
+
+1. 混同行列を使用するには、`confusion_matrix()`を呼び出します:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ モデルの混同行列を見てみましょう:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+Scikit-learnの混同行列では、行(軸0)は実際のラベルであり、列(軸1)は予測されたラベルです。
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+ここで何が起こっているのでしょうか?たとえば、モデルがカボチャを「白」と「白ではない」という二値カテゴリに分類するとしましょう。
+
+- モデルがカボチャを「白ではない」と予測し、実際に「白ではない」カテゴリに属する場合、これを真陰性(TN)と呼びます。これは左上の数字で示されます。
+- モデルがカボチャを「白ではない」と予測し、実際には「白」カテゴリに属する場合、これを偽陰性(FN)と呼びます。これは左下の数字で示されます。
+- モデルがカボチャを「白」と予測し、実際には「白ではない」カテゴリに属する場合、これを偽陽性(FP)と呼びます。これは右上の数字で示されます。
+- モデルがカボチャを「白」と予測し、実際に「白」カテゴリに属する場合、これを真陽性(TP)と呼びます。これは右下の数字で示されます。
+
+予想通り、真陽性と真陰性の数が多く、偽陽性と偽陰性の数が少ない方が、モデルのパフォーマンスが優れていることを示します。
+
+混同行列が精度と再現率にどのように関連しているかを見てみましょう。前述の分類レポートには精度(0.85)と再現率(0.67)が表示されました。
+
+精度 = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+再現率 = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
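+
+これらの値は、scikit-learn のメトリクス関数でも同じように計算できます(上の手計算との照合用スケッチです):
+
+```python
+from sklearn.metrics import precision_score, recall_score
+
+# レッスンで作成した y_test と predictions を前提としています
+print('Precision:', precision_score(y_test, predictions))  # およそ 0.846
+print('Recall:', recall_score(y_test, predictions))        # およそ 0.667
+```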
+
+✅ Q: 混同行列によると、モデルはどうでしたか? A: 悪くないです。真陰性の数が多いですが、偽陰性もいくつかあります。
+
+混同行列のTP/TNとFP/FNのマッピングを使用して、先ほど見た用語を再確認しましょう:
+
+🎓 精度:TP/(TP + FP)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文の言語による元の文書を信頼できる情報源とみなすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/4-Logistic/assignment.md b/translations/ja/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..13ed9aa89
--- /dev/null
+++ b/translations/ja/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# 回帰の再試行
+
+## 手順
+
+このレッスンでは、かぼちゃデータの一部を使用しました。今度は、元のデータに戻り、すべてのデータをクリーンアップして標準化し、ロジスティック回帰モデルを構築してみましょう。
+
+## 評価基準
+
+| 基準 | 優秀 | 適切 | 改善の余地あり |
+| -------- | -------------------------------------------------------------------- | ------------------------------------------------------- | -------------------------------------------------------- |
+| | よく説明され、パフォーマンスの良いモデルを持つノートブックが提出される | 最低限のパフォーマンスを持つモデルを持つノートブックが提出される | パフォーマンスの低いモデル、またはモデルがないノートブックが提出される |
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語で記載された原文を信頼できる情報源と見なしてください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤った解釈については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/4-Logistic/solution/Julia/README.md b/translations/ja/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..8ce1ec93f
--- /dev/null
+++ b/translations/ja/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語で記載された原文を信頼できる情報源と見なしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用によって生じたいかなる誤解や誤訳についても、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/2-Regression/README.md b/translations/ja/2-Regression/README.md
new file mode 100644
index 000000000..e1a749bf3
--- /dev/null
+++ b/translations/ja/2-Regression/README.md
@@ -0,0 +1,43 @@
+# 機械学習のための回帰モデル
+## 地域トピック: 北米のカボチャ価格に対する回帰モデル 🎃
+
+北米では、ハロウィンのためにカボチャがよく怖い顔に彫られます。この魅力的な野菜についてもっと発見してみましょう!
+
+
+> 写真提供: Beth Teutschmann on Unsplash
+
+## 学ぶこと
+
+[](https://youtu.be/5QnJtDad4iQ "回帰紹介ビデオ - クリックして視聴!")
+> 🎥 上の画像をクリックして、このレッスンの簡単な紹介ビデオを視聴してください
+
+このセクションのレッスンでは、機械学習の文脈での回帰の種類について説明します。回帰モデルは、変数間の_関係_を決定するのに役立ちます。このタイプのモデルは、長さ、温度、年齢などの値を予測することができ、データポイントを分析することで変数間の関係を明らかにします。
+
+この一連のレッスンでは、線形回帰とロジスティック回帰の違いを発見し、どちらを選ぶべきかを学びます。
+
+[](https://youtu.be/XA3OaoW86R8 "初心者向け機械学習 - 回帰モデルの紹介")
+
+> 🎥 上の画像をクリックして、回帰モデルの紹介ビデオを視聴してください
+
+この一連のレッスンでは、機械学習のタスクを開始するための準備を行います。これには、データサイエンティストの共通の環境であるノートブックを管理するためのVisual Studio Codeの設定が含まれます。Scikit-learnという機械学習のライブラリを発見し、この章では回帰モデルに焦点を当てて最初のモデルを構築します。
+
+> 回帰モデルの操作について学ぶのに役立つローコードツールがあります。このタスクには [Azure ML](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott) を試してみてください。
+
+### レッスン
+
+1. [ツールの紹介](1-Tools/README.md)
+2. [データの管理](2-Data/README.md)
+3. [線形および多項式回帰](3-Linear/README.md)
+4. [ロジスティック回帰](4-Logistic/README.md)
+
+---
+### クレジット
+
+「回帰を用いた機械学習」は [Jen Looper](https://twitter.com/jenlooper) によって♥️を込めて書かれました。
+
+♥️ クイズの貢献者には [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) と [Ornella Altunyan](https://twitter.com/ornelladotcom) が含まれます。
+
+カボチャのデータセットは [このKaggleプロジェクト](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) によって提案され、そのデータはアメリカ合衆国農務省が配布する [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) に由来しています。品種に基づく色に関するポイントを追加して分布を正規化しました。このデータはパブリックドメインです。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文の言語によるオリジナル文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/3-Web-App/1-Web-App/assignment.md b/translations/ja/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..a2355ad6f
--- /dev/null
+++ b/translations/ja/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# 別のモデルを試す
+
+## 手順
+
+既に訓練された回帰モデルを使用して1つのウェブアプリを構築したので、以前の回帰レッスンで使用したモデルの1つを使用して、このウェブアプリを再作成してください。スタイルを維持するか、かぼちゃデータを反映するように異なるデザインにすることができます。モデルの訓練方法を反映するように入力を変更することに注意してください。
+
+## ルーブリック
+
+| 基準 | 優秀 | 十分 | 改善が必要 |
+| -------------------------- | --------------------------------------------------------- | --------------------------------------------------------- | -------------------------------------- |
+| ウェブアプリの動作 | ウェブアプリが期待通りに動作し、クラウドにデプロイされている | ウェブアプリに欠陥があるか、予期しない結果を示す | ウェブアプリが正しく機能しない |
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。原文は権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/3-Web-App/README.md b/translations/ja/3-Web-App/README.md
new file mode 100644
index 000000000..179fcde7e
--- /dev/null
+++ b/translations/ja/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# あなたのMLモデルを使用するためのWebアプリを構築する
+
+このカリキュラムのセクションでは、応用されたMLのトピックに触れます。具体的には、Scikit-learnモデルをファイルとして保存し、それをWebアプリケーション内で予測に使用する方法について学びます。モデルを保存した後、そのモデルをFlaskで構築されたWebアプリで使用する方法を学びます。まず、UFO目撃情報に関するデータを使用してモデルを作成します。そして、緯度と経度の値を入力して、UFOを目撃した国を予測するWebアプリを構築します。
+
+
+
+Photo by Michael Herren on Unsplash
+
+## レッスン
+
+1. [Webアプリを構築する](1-Web-App/README.md)
+
+## クレジット
+
+"Build a Web App" は [Jen Looper](https://twitter.com/jenlooper) によって書かれました。
+
+クイズはRohan Rajによって書かれました。
+
+データセットは[Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings)から提供されています。
+
+Webアプリのアーキテクチャは、部分的に[この記事](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4)および[このリポジトリ](https://github.com/abhinavsagar/machine-learning-deployment)(Abhinav Sagarによる)を参考にしています。
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語での原文を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解釈について、当社は責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/1-Introduction/README.md b/translations/ja/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..341f7123d
--- /dev/null
+++ b/translations/ja/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# 分類の紹介
+
+この4つのレッスンでは、クラシックな機械学習の基本的な焦点である_分類_について探ります。アジアとインドの素晴らしい料理に関するデータセットを使用して、さまざまな分類アルゴリズムを使用する方法を説明します。お腹が空いてきましたか?
+
+
+
+> これらのレッスンでパンアジア料理を祝おう!画像提供:[Jen Looper](https://twitter.com/jenlooper)
+
+分類は、回帰技術と多くの共通点を持つ[教師あり学習](https://wikipedia.org/wiki/Supervised_learning)の一形態です。機械学習がデータセットを使用して値や名前を予測することに関するものであるならば、分類は一般的に2つのグループに分かれます:_二値分類_と_多クラス分類_です。
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+> 🎥 上の画像をクリックしてビデオを視聴:MITのJohn Guttagが分類を紹介
+
+覚えておいてください:
+
+- **線形回帰**は、変数間の関係を予測し、新しいデータポイントがその線とどのように関係するかを正確に予測するのに役立ちました。例えば、_9月と12月のカボチャの価格_を予測することができました。
+- **ロジスティック回帰**は、「二値カテゴリ」を発見するのに役立ちました:この価格帯では、_このカボチャはオレンジ色か、オレンジ色でないか_?
+
+分類は、データポイントのラベルやクラスを決定するためにさまざまなアルゴリズムを使用します。この料理データを使用して、材料のグループを観察することで、その料理の起源を特定できるかどうかを見てみましょう。
+
+## [事前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [このレッスンはRでも利用可能です!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### はじめに
+
+分類は、機械学習研究者やデータサイエンティストの基本的な活動の1つです。二値値の基本的な分類(「このメールはスパムかどうか」)から、コンピュータビジョンを使用した複雑な画像分類やセグメンテーションまで、データをクラスに分類し、それに質問する能力は常に役立ちます。
+
+プロセスをより科学的に述べると、分類法は入力変数と出力変数の関係をマッピングする予測モデルを作成します。
+
+
+
+> 分類アルゴリズムが処理する二値問題と多クラス問題。インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
+
+データのクリーニング、視覚化、およびMLタスクの準備を開始する前に、データを分類するために機械学習を使用するさまざまな方法について学びましょう。
+
+[統計学](https://wikipedia.org/wiki/Statistical_classification)に由来する、クラシックな機械学習を使用した分類は、`smoker`、`weight`、および`age`などの特徴を使用して、_X病を発症する可能性_を決定します。以前に行った回帰演習と同様の教師あり学習技術として、データはラベル付けされ、MLアルゴリズムはこれらのラベルを使用してデータセットのクラス(または「特徴」)を分類および予測し、それらをグループまたは結果に割り当てます。
+
+✅ 料理に関するデータセットを想像してみてください。多クラスモデルでは何が答えられるでしょうか?二値モデルでは何が答えられるでしょうか?ある料理がフェヌグリークを使用する可能性があるかどうかを判断したい場合はどうでしょうか?星アニス、アーティチョーク、カリフラワー、ホースラディッシュが入った食料袋をプレゼントされた場合、典型的なインド料理を作れるかどうかを確認したい場合はどうでしょうか?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 上の画像をクリックしてビデオを視聴。ショー「Chopped」の全体の前提は、「ミステリーバスケット」で、シェフがランダムな材料の選択から料理を作らなければならないというものです。確かにMLモデルが役立ったでしょう!
+
+## こんにちは「分類器」
+
+この料理データセットに関して私たちが尋ねたい質問は、実際には**多クラスの質問**です。私たちにはいくつかの潜在的な国の料理があるためです。一連の材料が与えられた場合、そのデータはこれらの多くのクラスのどれに適合するでしょうか?
+
+Scikit-learnは、解決したい問題の種類に応じて、データを分類するために使用できるさまざまなアルゴリズムを提供しています。次の2つのレッスンでは、これらのアルゴリズムのいくつかについて学びます。
+
+## 演習 - データをクリーンアップしてバランスを取る
+
+このプロジェクトを開始する前の最初のタスクは、データをクリーンアップして**バランスを取る**ことです。これにより、より良い結果が得られます。このフォルダーのルートにある空の_notebook.ipynb_ファイルから始めます。
+
+最初にインストールするものは[imblearn](https://imbalanced-learn.org/stable/)です。これはScikit-learnパッケージで、データのバランスをより良く取ることができます(このタスクについては後で詳しく学びます)。
+
+1. `imblearn`をインストールするには、次のように`pip install`を実行します:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. データをインポートして視覚化するために必要なパッケージをインポートし、`imblearn`から`SMOTE`もインポートします。
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ これで、次にデータをインポートする準備が整いました。
+
+1. 次のタスクはデータのインポートです:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+   `read_csv()`は、csvファイル _cuisines.csv_ の内容を読み込み、変数`df`に格納します。
+
+1. データの形状を確認します:
+
+ ```python
+ df.head()
+ ```
+
+ 最初の5行は次のようになります:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. `info()`を呼び出してこのデータに関する情報を取得します:
+
+ ```python
+ df.info()
+ ```
+
+ 出力は次のようになります:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## 演習 - 料理について学ぶ
+
+ここからが面白くなります。料理ごとのデータの分布を見てみましょう。
+
+1. `barh()`を呼び出してデータをバーとしてプロットします:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ 料理の数は限られていますが、データの分布は不均一です。これを修正できます!その前に、もう少し探ってみましょう。
+
+1. 料理ごとに利用可能なデータの量を調べて印刷します:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ 出力は次のようになります:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## 材料の発見
+
+次に、データをさらに深く掘り下げて、各料理の典型的な材料を学びます。料理間の判別を混乱させる、複数の料理に繰り返し現れる共通データをクリーニングする必要があるので、この問題について学びましょう。
+
+1. Pythonで`create_ingredient_df()`関数を作成して材料データフレームを作成します。この関数は、役に立たない列を削除し、カウントによって材料をソートすることから始めます:
+
+ ```python
+    def create_ingredient_df(df):
+        # 転置してラベル列を除き、材料ごとの出現回数を合計する
+        ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+        # 一度も使われていない材料(合計が0)の行を除外する
+        ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+        # 出現回数の多い順にソートする
+        ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+                                                  inplace=False)
+        return ingredient_df
+ ```
+
+ これで、この関数を使用して、料理ごとのトップ10の人気材料のアイデアを得ることができます。
+
+1. `create_ingredient_df()`を呼び出し、`barh()`でプロットします:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 日本のデータについても同じことを行います:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 次に、中国の材料についても同じことを行います:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. インドの材料をプロットします:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 最後に、韓国の材料をプロットします:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. `drop()`を呼び出して、異なる料理間で混乱を引き起こす最も一般的な材料を削除します:
+
+ みんなお米、にんにく、しょうがが大好きです!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## データセットのバランスを取る
+
+データをクリーンアップしたので、[SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - 「合成少数オーバーサンプリング技術」 - を使用してバランスを取ります。
+
+1. `fit_resample()`を呼び出します。この戦略は、補間によって新しいサンプルを生成します。
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ データのバランスを取ることで、分類時により良い結果が得られます。二値分類を考えてみてください。データのほとんどが1つのクラスである場合、MLモデルはそのクラスをより頻繁に予測します。なぜなら、そのクラスのデータが多いためです。データのバランスを取ることで、偏ったデータを取り除き、この不均衡を解消するのに役立ちます。
+
+1. 次に、材料ごとのラベルの数を確認します:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ 出力は次のようになります:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ データはきれいでバランスが取れており、とてもおいしそうです!
+
+1. 最後のステップは、ラベルと特徴を含むバランスの取れたデータを新しいデータフレームに保存し、ファイルにエクスポートできるようにすることです:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. `transformed_df.head()`と`transformed_df.info()`を使用してデータをもう一度確認できます。今後のレッスンで使用するためにこのデータのコピーを保存します:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ この新しいCSVはルートデータフォルダーにあります。
+
+---
+
+## 🚀チャレンジ
+
+このカリキュラムにはいくつかの興味深いデータセットが含まれています。`data`フォルダーを掘り下げて、二値または多クラス分類に適したデータセットが含まれているかどうかを確認してください。このデータセットにどのような質問をしますか?
+
+## [事後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## レビューと自己学習
+
+SMOTEのAPIを探ってみてください。どのようなユースケースに最適ですか?どのような問題を解決しますか?
+
+## 課題
+
+[分類方法を探る](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳にはエラーや不正確さが含まれる可能性があることをご了承ください。原文の言語で書かれたオリジナル文書を権威ある情報源とみなしてください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/1-Introduction/assignment.md b/translations/ja/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..a625920d9
--- /dev/null
+++ b/translations/ja/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# 分類方法を探る
+
+## 指示
+
+[Scikit-learn ドキュメント](https://scikit-learn.org/stable/supervised_learning.html)には、データを分類するための多くの方法が記載されています。これらのドキュメントで少し宝探しをしてみてください。あなたの目標は、分類方法を探し、このカリキュラムのデータセット、質問、分類技術をマッチさせることです。スプレッドシートまたは.docファイルで表を作成し、データセットが分類アルゴリズムとどのように連携するかを説明してください。
+
+## 評価基準
+
+| 基準 | 優秀 | 適切 | 改善の余地あり |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | 5つのアルゴリズムと分類技術の概要を示す文書が提示されており、その概要はよく説明され詳細です。 | 3つのアルゴリズムと分類技術の概要を示す文書が提示されており、その概要はよく説明され詳細です。 | 3つ未満のアルゴリズムと分類技術の概要を示す文書が提示されており、その概要は十分に説明されておらず詳細でもありません。 |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努力しておりますが、自動翻訳にはエラーや不正確さが含まれる可能性があります。元の言語での原文を権威ある情報源とみなすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/1-Introduction/solution/Julia/README.md b/translations/ja/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..7312036c1
--- /dev/null
+++ b/translations/ja/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語での原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/2-Classifiers-1/README.md b/translations/ja/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..386ea4583
--- /dev/null
+++ b/translations/ja/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,244 @@
+# 料理の分類器 1
+
+このレッスンでは、前回のレッスンで保存した、バランスの取れたきれいなデータセットを使って、料理に関する予測を行います。
+
+このデータセットを使って、さまざまな分類器を使用し、_材料のグループに基づいて特定の国の料理を予測_します。その過程で、アルゴリズムを分類タスクに利用する方法についてさらに学びます。
+
+## [事前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+# 準備
+
+[Lesson 1](../1-Introduction/README.md)を完了したと仮定して、_cleaned_cuisines.csv_ ファイルがルート `/data` フォルダーに存在することを確認してください。このファイルは4つのレッスンで使用されます。
+
+## 演習 - 国の料理を予測する
+
+1. このレッスンの _notebook.ipynb_ フォルダーで、そのファイルとPandasライブラリをインポートします:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ データは次のように見えます:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. 次に、いくつかのライブラリをインポートします:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Xとyの座標を2つのデータフレームに分けてトレーニングします。`cuisine`をラベルデータフレームにします:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ 次のように見えます:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. `drop()`を呼び出して、`Unnamed: 0`列と`cuisine`列をドロップします。残りのデータをトレーニング可能な特徴として保存します:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ あなたの特徴は次のように見えます:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
+
+これでモデルのトレーニングの準備が整いました!
+
+## 分類器の選択
+
+データがきれいになり、トレーニングの準備が整ったので、どのアルゴリズムを使用するかを決定する必要があります。
+
+Scikit-learnは分類を教師あり学習の下にグループ化しており、そのカテゴリーでは多くの分類方法があります。[この多様性](https://scikit-learn.org/stable/supervised_learning.html)は最初は非常に圧倒されるかもしれません。以下の方法にはすべて分類技術が含まれます:
+
+- 線形モデル
+- サポートベクターマシン
+- 確率的勾配降下法
+- 最近傍法
+- ガウス過程
+- 決定木
+- アンサンブル法(投票分類器)
+- マルチクラスおよびマルチアウトプットアルゴリズム(マルチクラスおよびマルチラベル分類、マルチクラス-マルチアウトプット分類)
+
+> データを分類するために[ニューラルネットワークを使用する](https://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification)こともできますが、これはこのレッスンの範囲外です。
+
+### どの分類器を選ぶべきか?
+
+では、どの分類器を選ぶべきでしょうか?多くの場合、いくつかの分類器を試して良い結果を探すのが一つの方法です。Scikit-learnは[KNeighbors、2種類のSVC、GaussianProcessClassifier、DecisionTreeClassifier、RandomForestClassifier、MLPClassifier、AdaBoostClassifier、GaussianNB、QuadraticDiscriminantAnalysis](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html)の比較を行い、結果を視覚化して表示しています:
+
+
+> Scikit-learnのドキュメントで生成されたプロット
+
+> AutoMLはこれらの比較をクラウドで実行し、データに最適なアルゴリズムを選択できるようにすることで、この問題をうまく解決します。[こちら](https://docs.microsoft.com/learn/modules/automate-model-selection-with-azure-automl/?WT.mc_id=academic-77952-leestott)から試してみてください。
+
+### より良いアプローチ
+
+しかし、無作為に推測するよりも、ダウンロード可能な[ML Cheat Sheet](https://docs.microsoft.com/azure/machine-learning/algorithm-cheat-sheet?WT.mc_id=academic-77952-leestott)のアイデアに従う方が良いです。ここでは、マルチクラスの問題に対していくつかの選択肢があることがわかります:
+
+
+> マイクロソフトのアルゴリズムチートシートの一部、マルチクラス分類オプションの詳細
+
+✅ このチートシートをダウンロードし、印刷して壁に貼りましょう!
+
+### 理由付け
+
+制約を考慮して、異なるアプローチを検討してみましょう:
+
+- **ニューラルネットワークは重すぎる**。きれいだが最小限のデータセットと、ノートブックを使ってローカルでトレーニングを実行するという事実を考えると、ニューラルネットワークはこのタスクには重すぎます。
+- **2クラス分類器は使わない**。今回は2クラス分類ではないため、one-vs-all(一対全)は除外されます。
+- **決定木やロジスティック回帰が使える**。決定木やマルチクラスデータのロジスティック回帰が使えるかもしれません。
+- **マルチクラスブーステッド決定木は異なる問題を解決する**。マルチクラスブーステッド決定木は非パラメトリックなタスク、例えばランキングを作成するタスクに最も適しているため、私たちには役立ちません。
+
+### Scikit-learnの使用
+
+データを分析するためにScikit-learnを使用します。しかし、Scikit-learnでロジスティック回帰を使用する方法はたくさんあります。[渡すパラメータ](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression)を確認してください。
+
+本質的に、Scikit-learnにロジスティック回帰を実行させる際に指定すべき重要なパラメータが2つあります - `multi_class`と`solver`です。`multi_class`の値は特定の動作を適用し、solverの値は使用するアルゴリズムを指定します。すべてのソルバーがすべての`multi_class`の値と組み合わせられるわけではありません。
+
+ドキュメントによると、マルチクラスの場合、トレーニングアルゴリズムは:
+
+- `multi_class`オプションが`ovr`に設定されている場合、**one-vs-rest(OvR)スキームを使用します**
+- `multi_class`オプションが`multinomial`に設定されている場合、**クロスエントロピー損失を使用します**。(現在、`multinomial`オプションは'lbfgs'、'sag'、'saga'、'newton-cg'ソルバーでのみサポートされています。)
+
+> 🎓 ここでの「スキーム」は'ovr'(one-vs-rest)または'multinomial'のいずれかです。ロジスティック回帰は本来二値分類をサポートするように設計されているため、これらのスキームを使うことでマルチクラス分類タスクをよりうまく処理できます。[出典](https://machinelearningmastery.com/one-vs-rest-and-one-vs-one-for-multi-class-classification/)
+
+> 🎓 「ソルバー」は「最適化問題で使用するアルゴリズム」と定義されています。[出典](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression)
+
+Scikit-learnは、さまざまな種類のデータ構造が提示する課題をソルバーがどのように処理するかを説明する表を提供しています:
+
+
+
+## 演習 - データを分割する
+
+前のレッスンでロジスティック回帰について学んだばかりなので、最初のトレーニングではロジスティック回帰に焦点を当てます。
+`train_test_split()`を呼び出して、データをトレーニンググループとテストグループに分割します:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+```
+
+## 演習 - ロジスティック回帰を適用する
+
+マルチクラスの場合、どのスキームを使用するか、どのソルバーを設定するかを選択する必要があります。マルチクラス設定と**liblinear**ソルバーを使用してロジスティック回帰をトレーニングします。
+
+1. `multi_class`を`ovr`に、ソルバーを`liblinear`に設定してロジスティック回帰を作成します:
+
+ ```python
+ lr = LogisticRegression(multi_class='ovr',solver='liblinear')
+ model = lr.fit(X_train, np.ravel(y_train))
+
+ accuracy = model.score(X_test, y_test)
+ print ("Accuracy is {}".format(accuracy))
+ ```
+
+    ✅ デフォルトとして設定されることが多い`lbfgs`など、別のソルバーも試してみてください(この演習の最後にあるスケッチを参照)。
+
+    > 注: 必要に応じて、Pandasの[`ravel`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ravel.html)関数を使用してデータをフラットにしてください。
+
+ 正確度は**80%**を超えています!
+
+1. データの1行(#50)をテストすることで、このモデルを実際に見ることができます:
+
+ ```python
+ print(f'ingredients: {X_test.iloc[50][X_test.iloc[50]!=0].keys()}')
+ print(f'cuisine: {y_test.iloc[50]}')
+ ```
+
+ 結果が表示されます:
+
+ ```output
+ ingredients: Index(['cilantro', 'onion', 'pea', 'potato', 'tomato', 'vegetable_oil'], dtype='object')
+ cuisine: indian
+ ```
+
+ ✅ 別の行番号を試して結果を確認してみてください
+
+1. この予測の正確性を確認するために、さらに掘り下げてみましょう:
+
+ ```python
+ test= X_test.iloc[50].values.reshape(-1, 1).T
+ proba = model.predict_proba(test)
+ classes = model.classes_
+ resultdf = pd.DataFrame(data=proba, columns=classes)
+
+ topPrediction = resultdf.T.sort_values(by=[0], ascending = [False])
+ topPrediction.head()
+ ```
+
+ 結果が表示されます - インド料理が最も高い確率で予測されています:
+
+ | | 0 |
+ | -------: | -------: |
+ | indian | 0.715851 |
+ | chinese | 0.229475 |
+ | japanese | 0.029763 |
+ | korean | 0.017277 |
+ | thai | 0.007634 |
+
+ ✅ なぜこのモデルがインド料理だと確信しているのか説明できますか?
+
+1. 回帰レッスンで行ったように、分類レポートを印刷して詳細を確認します:
+
+ ```python
+ y_pred = model.predict(X_test)
+ print(classification_report(y_test,y_pred))
+ ```
+
+ | | precision | recall | f1-score | support |
+ | ------------ | --------- | ------ | -------- | ------- |
+ | chinese | 0.73 | 0.71 | 0.72 | 229 |
+ | indian | 0.91 | 0.93 | 0.92 | 254 |
+ | japanese | 0.70 | 0.75 | 0.72 | 220 |
+ | korean | 0.86 | 0.76 | 0.81 | 242 |
+ | thai | 0.79 | 0.85 | 0.82 | 254 |
+    | accuracy     |           |        | 0.80     | 1199    |
+ | macro avg | 0.80 | 0.80 | 0.80 | 1199 |
+ | weighted avg | 0.80 | 0.80 | 0.80 | 1199 |
+
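+✅ のメモで触れたように、別のスキームとソルバーの組み合わせも簡単に試せます。以下は、`multi_class='multinomial'`と`lbfgs`ソルバーを使う場合の簡単なスケッチです(`X_train`などの変数は上の演習のものを前提とし、`max_iter`の値は収束のための仮の設定です):
+
+```python
+# multinomial スキーム + lbfgs ソルバーの組み合わせを試すスケッチ
+lr_multinomial = LogisticRegression(multi_class='multinomial', solver='lbfgs', max_iter=1000)
+model_multinomial = lr_multinomial.fit(X_train, np.ravel(y_train))
+print("Accuracy is {}".format(model_multinomial.score(X_test, y_test)))
+```
+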
+## 🚀チャレンジ
+
+このレッスンでは、クリーンなデータを使用して、材料の一連のリストに基づいて国の料理を予測する機械学習モデルを構築しました。Scikit-learnがデータを分類するために提供する多くのオプションを読み、さらに掘り下げてみてください。ソルバーの概念を深く理解し、背後で何が起こっているのかを理解してください。
+
+## [事後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/22/)
+
+## レビュー & 自己学習
+
+[このレッスン](https://people.eecs.berkeley.edu/~russell/classes/cs194/f11/lectures/CS194%20Fall%202011%20Lecture%2006.pdf)でロジスティック回帰の数学についてさらに掘り下げてみてください。
+
+## 課題
+
+[ソルバーを研究する](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることにご注意ください。元の言語での原文を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/2-Classifiers-1/assignment.md b/translations/ja/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..b07ee2ea4
--- /dev/null
+++ b/translations/ja/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,13 @@
+# ソルバーの研究
+## 手順
+
+このレッスンでは、アルゴリズムと機械学習プロセスを組み合わせて正確なモデルを作成するさまざまなソルバーについて学びました。レッスンで紹介されたソルバーを見直し、その中から2つを選んでください。自分の言葉でこれら2つのソルバーを比較・対照してください。彼らはどのような問題に対処しますか?さまざまなデータ構造とどのように連携しますか?なぜ一方を選ぶ理由があるのかを説明してください。
+
+## 評価基準
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | 各ソルバーについて比較を考慮した2つの段落がある.docファイルが提出されている。 | 1つの段落のみがある.docファイルが提出されている | 課題が未完成 |
+
+**免責事項**:
+この文書は、機械翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることにご注意ください。元の言語での文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認については、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/ja/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..a6ae1f617
--- /dev/null
+++ b/translations/ja/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることにご留意ください。原文の言語によるオリジナル文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/3-Classifiers-2/README.md b/translations/ja/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..86b3378c6
--- /dev/null
+++ b/translations/ja/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# 料理の分類器 2
+
+この第2の分類レッスンでは、数値データを分類するためのさまざまな方法を探求します。また、異なる分類器を選択することの影響についても学びます。
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### 前提条件
+
+前のレッスンを完了し、この4つのレッスンフォルダーのルートにある `data` フォルダーに _cleaned_cuisines.csv_ という名前のクリーンなデータセットがあることを前提としています。
+
+### 準備
+
+クリーンなデータセットを使用して _notebook.ipynb_ ファイルを読み込み、モデル構築プロセスに向けて X と y のデータフレームに分割しました。
+
+## 分類マップ
+
+以前、Microsoftのチートシートを使用してデータを分類する際のさまざまなオプションについて学びました。Scikit-learnも同様の、しかしより詳細なチートシートを提供しており、推定器(分類器の別名)をさらに絞り込むのに役立ちます:
+
+
+> Tip: [このマップをオンラインで見る](https://scikit-learn.org/stable/tutorial/machine_learning_map/)と、パスに沿ってクリックしてドキュメントを読むことができます。
+
+### 計画
+
+このマップはデータの理解が深まると非常に役立ちます。パスに沿って歩きながら決定を下すことができます:
+
+- 50以上のサンプルがあります
+- カテゴリを予測したい
+- ラベル付きデータがあります
+- 100K未満のサンプルがあります
+- ✨ Linear SVCを選択できます
+- それがうまくいかない場合、数値データがあるので
+ - ✨ KNeighbors Classifierを試すことができます
+ - それもうまくいかない場合、✨ SVCや✨ Ensemble Classifiersを試してみることができます
+
+これは非常に役立つルートです。
+
+## 演習 - データを分割する
+
+このパスに従って、使用するライブラリをインポートすることから始めましょう。
+
+1. 必要なライブラリをインポートします:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. トレーニングデータとテストデータを分割します:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC分類器
+
+サポートベクタークラスタリング(SVC)は、サポートベクターマシンのML技術ファミリーの一部です(詳細は以下を参照)。この方法では、ラベルをクラスタリングする方法を決定するための「カーネル」を選択できます。「C」パラメータは「正則化」を指し、パラメータの影響を調整します。カーネルは[いくつかの](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) から選択でき、ここでは線形SVCを利用するために「linear」に設定します。確率はデフォルトで「false」ですが、ここでは確率推定を収集するために「true」に設定します。ランダム状態を「0」に設定してデータをシャッフルし、確率を取得します。
+
+### 演習 - 線形SVCを適用する
+
+分類器の配列を作成することから始めます。テストを進めるにつれてこの配列に追加していきます。
+
+1. Linear SVCから始めます:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Linear SVCを使用してモデルをトレーニングし、レポートを出力します:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 結果はかなり良好です:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors分類器
+
+K-Neighborsは「neighbors」ファミリーのMLメソッドの一部で、これは教師あり学習と教師なし学習の両方に使用できます。この方法では、事前に定義されたポイントの周りにデータを集め、一般化されたラベルをデータに対して予測できるようにします。
+
+### 演習 - K-Neighbors分類器を適用する
+
+前の分類器は良好で、データにうまく適合しましたが、さらに精度を向上させることができるかもしれません。K-Neighbors分類器を試してみましょう。
+
+1. 分類器の配列に行を追加します(Linear SVC項目の後にカンマを追加):
+
+ ```python
+    'KNN classifier': KNeighborsClassifier(C),  # ここでは C(=10)が位置引数として n_neighbors に渡されます
+ ```
+
+ 結果は少し悪くなります:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors) について学びましょう
+
+## サポートベクター分類器
+
+サポートベクター分類器は、分類と回帰タスクに使用される[サポートベクターマシン](https://wikipedia.org/wiki/Support-vector_machine)ファミリーの一部です。SVMは「トレーニング例を空間のポイントにマップ」して、2つのカテゴリ間の距離を最大化します。その後のデータはこの空間にマップされ、カテゴリを予測できます。
+
+### 演習 - サポートベクター分類器を適用する
+
+サポートベクター分類器を使用して、さらに精度を向上させましょう。
+
+1. K-Neighbors項目の後にカンマを追加し、この行を追加します:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ 結果は非常に良好です!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ [サポートベクター](https://scikit-learn.org/stable/modules/svm.html#svm) について学びましょう
+
+## アンサンブル分類器
+
+前のテストは非常に良好でしたが、最後までパスをたどってみましょう。ランダムフォレストとAdaBoostのようなアンサンブル分類器を試してみましょう:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+結果は非常に良好で、特にランダムフォレストは優れています:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ [アンサンブル分類器](https://scikit-learn.org/stable/modules/ensemble.html) について学びましょう
+
+この機械学習の方法は、「いくつかの基本推定器の予測を組み合わせる」ことでモデルの質を向上させます。私たちの例では、ランダムツリーとAdaBoostを使用しました。
+
+- [ランダムフォレスト](https://scikit-learn.org/stable/modules/ensemble.html#forest)は、平均化法であり、過学習を避けるためにランダム性を取り入れた「決定木」の「森」を構築します。`n_estimators`パラメータは木の数を設定します(学習済みモデルの調べ方は、このリストの後のスケッチを参照)。
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) は、データセットに分類器を適合させ、その分類器のコピーを同じデータセットに適合させます。誤分類されたアイテムの重みを重視し、次の分類器の適合を調整して修正します。
+
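+ランダムフォレストがどの特徴(材料)を重視しているかは、学習済みモデルの`feature_importances_`属性で確認できます。以下は、上のループで学習させた`classifiers`辞書と`X_train`を前提とした簡単なスケッチです(`pandas`がインポート済みであることも前提としています):
+
+```python
+import pandas as pd  # ノートブックでインポート済みであれば不要です
+
+# 学習済みランダムフォレストから重要度の高い材料トップ10を確認するスケッチ
+rfst = classifiers['RFST']  # 上のループで fit 済みであることが前提です
+importances = pd.Series(rfst.feature_importances_, index=X_train.columns)
+print(importances.sort_values(ascending=False).head(10))
+```
+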
+---
+
+## 🚀チャレンジ
+
+これらの技術のそれぞれには、多くの調整可能なパラメータがあります。それぞれのデフォルトパラメータを調査し、これらのパラメータを調整することがモデルの質にどのような影響を与えるかを考えてみてください。
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## 復習と自己学習
+
+これらのレッスンには多くの専門用語が含まれているため、[この用語集](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) を見直してみましょう!
+
+## 課題
+
+[パラメータの調整](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すために努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語での文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/3-Classifiers-2/assignment.md b/translations/ja/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..2ba3f6212
--- /dev/null
+++ b/translations/ja/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# パラメータ プレイ
+
+## 指示
+
+これらの分類器を使用する際には、多くのパラメータがデフォルトで設定されています。VS Code の Intellisense を使ってそれらを掘り下げてみましょう。このレッスンで紹介された ML 分類技術の一つを採用し、さまざまなパラメータ値を調整してモデルを再訓練してください。なぜいくつかの変更がモデルの品質を向上させ、他の変更がそれを悪化させるのかを説明するノートブックを作成してください。回答には詳細を記載してください。
+
+## ルーブリック
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- | ----------------------------- |
+| | 分類器が完全に構築され、そのパラメータが調整され、変更点がテキストボックスで説明されたノートブックが提示されている | ノートブックが部分的に提示されているか、説明が不十分 | ノートブックにバグや欠陥がある |
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語の文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/ja/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..ac3249102
--- /dev/null
+++ b/translations/ja/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。権威ある情報源としては、原文の母国語の文書を考慮してください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/4-Applied/README.md b/translations/ja/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..49ea4b8f0
--- /dev/null
+++ b/translations/ja/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# 料理推薦Webアプリを作成しよう
+
+このレッスンでは、以前のレッスンで学んだ技術と、このシリーズ全体で使用されているおいしい料理データセットを使用して、分類モデルを構築します。さらに、保存したモデルを使用する小さなWebアプリを構築し、OnnxのWebランタイムを活用します。
+
+機械学習の最も実用的な用途の一つは推薦システムの構築であり、今日からその方向に一歩を踏み出すことができます!
+
+[](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 上の画像をクリックしてビデオを見る: Jen Looperが分類された料理データを使用してWebアプリを構築します
+
+## [レッスン前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+このレッスンで学ぶこと:
+
+- モデルを構築してOnnxモデルとして保存する方法
+- Netronを使用してモデルを検査する方法
+- 推論のためにWebアプリでモデルを使用する方法
+
+## モデルを構築しよう
+
+ビジネスシステムにこれらの技術を活用するためには、応用機械学習システムを構築することが重要です。Onnxを使用することで、Webアプリケーション内でモデルを使用し(必要に応じてオフラインコンテキストでも使用可能)、JavaScriptアプリケーションで使用することができます。
+
+[前のレッスン](../../3-Web-App/1-Web-App/README.md)では、UFO目撃情報についての回帰モデルを構築し、それをFlaskアプリで使用しました。このアーキテクチャは非常に有用ですが、フルスタックのPythonアプリであり、JavaScriptアプリケーションを使用する要件があるかもしれません。
+
+このレッスンでは、推論のための基本的なJavaScriptベースのシステムを構築できます。しかし、まずモデルを訓練し、Onnxで使用するために変換する必要があります。
+
+## 演習 - 分類モデルを訓練しよう
+
+まず、使用したクリーンな料理データセットを使用して分類モデルを訓練します。
+
+1. 有用なライブラリをインポートすることから始めます:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ Scikit-learnモデルをOnnx形式に変換するために '[skl2onnx](https://onnx.ai/sklearn-onnx/)' が必要です。
+
+1. 次に、前のレッスンと同じ方法でデータを処理し、`read_csv()`を使用してCSVファイルを読み込みます:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. 不要な最初の2列を削除し、残りのデータを 'X' として保存します:
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. ラベルを 'y' として保存します:
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### 訓練ルーチンを開始しよう
+
+'SVC' ライブラリを使用します。これは高い精度を持っています。
+
+1. Scikit-learnから適切なライブラリをインポートします:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. 訓練セットとテストセットを分けます:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. 前のレッスンで行ったように、SVC分類モデルを構築します:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. 次に、`predict()`を呼び出してモデルをテストします:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. モデルの品質を確認するために分類レポートを出力します:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 以前見たように、精度は良好です:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### モデルをOnnxに変換しよう
+
+適切なテンソル数で変換を行うことを確認してください。このデータセットには380の成分が記載されているため、`FloatTensorType`にその数を記載する必要があります:
+
+1. 380のテンソル数を使用して変換します。
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. onxを作成し、ファイル **model.onnx** として保存します:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > 注: 変換スクリプトに[オプション](https://onnx.ai/sklearn-onnx/parameterized.html)を渡すことができます。この場合、'nocl'をTrue、'zipmap'をFalseに設定しました。これは分類モデルなので、辞書のリストを生成するZipMap(必要ありません)を削除するオプションがあります。`nocl`は、モデルにクラス情報を含めるかどうかを指します。`nocl`を'True'に設定すると、モデルのサイズを削減できます。
+
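+保存したモデルは、ブラウザに持ち込む前にPython側でも動作確認できます。以下は、`onnxruntime`パッケージがインストール済みという前提の簡単なスケッチです(入力名`float_input`は上の変換コードで定義したものです):
+
+```python
+# 変換した Onnx モデルを Python 側で読み込み、テストデータ1件で推論してみるスケッチ
+# (事前に `pip install onnxruntime` が必要という前提です)
+import numpy as np
+import onnxruntime as rt
+
+sess = rt.InferenceSession("./model.onnx")
+input_name = sess.get_inputs()[0].name  # 'float_input' になっているはずです
+sample = X_test.iloc[:1].values.astype(np.float32)  # テストデータの先頭1行
+pred = sess.run(None, {input_name: sample})
+print(pred[0])  # 予測された料理ラベル
+```
+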
+ノートブック全体を実行すると、Onnxモデルが構築され、このフォルダーに保存されます。
+
+## モデルを確認する
+
+OnnxモデルはVisual Studio Codeではあまり見やすくありませんが、モデルが正しく構築されているかを確認するために多くの研究者が使っている、とても優れた無料ソフトウェアがあります。[Netron](https://github.com/lutzroeder/Netron)をダウンロードし、model.onnxファイルを開いてください。380の入力と分類器がリストされた、シンプルなモデルが視覚化されて表示されます:
+
+
+
+Netronはモデルを確認するのに役立つツールです。
+
+これで、この便利なモデルをWebアプリで使用する準備が整いました。冷蔵庫の中を見て、残り物の材料のどの組み合わせで特定の料理が作れるかをモデルに判断させる、そんなときに役立つアプリを構築しましょう。
+
+## 推薦Webアプリケーションを構築する
+
+モデルはWebアプリで直接使用できます。このアーキテクチャなら、必要に応じてローカルやオフラインでも実行できます。`model.onnx`ファイルを保存したのと同じフォルダーに`index.html`ファイルを作成することから始めます。
+
+1. このファイル _index.html_ に次のマークアップを追加します:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+        <header>
+            <title>Cuisine Matcher</title>
+        </header>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. 次に、`body`タグ内で、いくつかの成分を反映するチェックボックスのリストを表示するための少しのマークアップを追加します:
+
+    ```html
+    <!-- 元のマークアップには、データセットの各材料に対応するチェックボックスが並びます。ここでは一部のみ示します -->
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+
+        <!-- ...材料ごとに、値(列インデックス)を変えた同様の boxCont を追加します... -->
+
+    </div>
+    <div style="margin-top: 10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+ 各チェックボックスには値が与えられています。これは、データセットに従って成分が見つかるインデックスを反映しています。例えば、リンゴはこのアルファベット順のリストの5番目の列を占めるため、その値は'4'です(0から数え始めます)。[成分スプレッドシート](../../../../4-Classification/data/ingredient_indexes.csv)を参照して、特定の成分のインデックスを確認できます。
+
+    index.htmlファイルでの作業を続け、最終的な閉じタグ`</div>`の後にモデルを呼び出すスクリプトブロックを追加します。
+
+1. まず、[Onnxランタイム](https://www.onnxruntime.ai/)をインポートします:
+
+    ```html
+    <!-- Onnx Runtime Web を CDN から読み込みます(バージョンは一例です) -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+ > Onnxランタイムは、幅広いハードウェアプラットフォームでOnnxモデルを実行できるようにするために使用され、最適化と使用するためのAPIを提供します。
+
+1. ランタイムが設定されたら、それを呼び出します:
+
+    ```html
+    <script>
+        // アプリの起動時に呼ばれる init で、380個の材料値(1または0)の配列と
+        // チェックボックスの一覧を準備します
+        const ingredients = Array(380).fill(0);
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    // チェック状態に応じて、該当する材料のインデックスを 1/0 に切り替える
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        function testCheckboxes() {
+            // いずれかのチェックボックスがチェックされているかを確認する
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            let atLeastOneChecked = testCheckboxes();
+            if (atLeastOneChecked) {
+                // モデルを非同期に読み込んでセッションを作成する
+                const session = await ort.InferenceSession.create('./model.onnx');
+                // モデルに送る Tensor 構造を作成する
+                const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+                // トレーニング時に定義した 'float_input' 入力を反映する feeds を作成する
+                const feeds = { float_input: input };
+                // feeds をモデルに送信し、応答を待つ
+                const results = await session.run(feeds);
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } else alert("Please select at least one ingredient.");
+        }
+
+        init();
+    </script>
+    ```
+
+このコードでは、いくつかのことが行われています:
+
+1. モデルに送信するために、成分チェックボックスがチェックされているかどうかに応じて設定される380の可能な値(1または0)の配列を作成しました。
+2. チェックボックスの配列と、それらがチェックされているかどうかを判定する仕組みを、アプリケーションの起動時に呼び出される`init`関数の中で作成しました。チェックボックスがチェックされると、選択された材料を反映するように`ingredients`配列が変更されます。
+3. いずれかのチェックボックスがチェックされているかを確認する`testCheckboxes`関数を作成しました。
+4. ボタンが押されると`startInference`関数が呼ばれ、いずれかのチェックボックスがチェックされていれば推論を開始します。
+5. 推論ルーチンには以下が含まれます:
+   1. モデルの非同期読み込みの設定
+   2. モデルに送信するTensor構造の作成
+   3. モデルのトレーニング時に作成した`float_input`入力を反映する'feeds'の作成(Netronでその名前を確認できます)
+   4. これらの'feeds'をモデルに送信し、応答を待つこと
+
+## アプリケーションをテストする
+
+Visual Studio Codeで、index.htmlファイルがあるフォルダーのターミナルセッションを開きます。[http-server](https://www.npmjs.com/package/http-server)がグローバルにインストールされていることを確認し、プロンプトで`http-server`と入力します。ローカルホストが開き、Webアプリが表示されます。さまざまな材料に基づいてどの料理が推奨されるかを確認してください:
+
+
+
+おめでとうございます、いくつかのフィールドを持つ「推薦」Webアプリを作成しました。時間をかけてこのシステムを作り込んでみてください!
+
+## 🚀チャレンジ
+
+Webアプリは非常にミニマルなので、[ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv)データの成分とそのインデックスを使用して引き続き構築してください。どのような風味の組み合わせが特定の国の料理を作るのに役立ちますか?
+
+## [レッスン後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## レビューと自己学習
+
+このレッスンでは、食材の推薦システムの作成の有用性について簡単に触れましたが、この分野のMLアプリケーションには多くの例があります。これらのシステムがどのように構築されているかについてもっと読んでください:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## 課題
+
+[新しい推薦システムを構築する](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すために努力していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語での原文を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/4-Applied/assignment.md b/translations/ja/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..d337288de
--- /dev/null
+++ b/translations/ja/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# レコメンダーの構築
+
+## 指示
+
+このレッスンの演習を通じて、Onnx Runtimeと変換されたOnnxモデルを使用してJavaScriptベースのウェブアプリを構築する方法を学びました。これらのレッスンのデータや他のソースからのデータを使用して、新しいレコメンダーを構築する実験をしてみてください(クレジットを忘れずに)。例えば、さまざまな性格属性に基づいたペットレコメンダーや、人の気分に基づいた音楽ジャンルのレコメンダーを作成することができます。創造力を発揮してください!
+
+## ルブリック
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | ---------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+| | ウェブアプリとノートブックが提示されており、どちらもよく文書化されて動作している | そのうちの一つが欠けているか、または欠陥がある | 両方が欠けているか、または欠陥がある |
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の母国語で書かれた文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/4-Classification/README.md b/translations/ja/4-Classification/README.md
new file mode 100644
index 000000000..6c6f277a1
--- /dev/null
+++ b/translations/ja/4-Classification/README.md
@@ -0,0 +1,30 @@
+# 分類の始め方
+
+## 地域のトピック: 美味しいアジアとインドの料理 🍜
+
+アジアとインドでは、食文化が非常に多様であり、非常に美味しいです!地域の料理に関するデータを見て、その材料を理解してみましょう。
+
+
+> Lisheng Changによる写真 Unsplash
+
+## 学べること
+
+このセクションでは、回帰に関する以前の学習を基にして、データをよりよく理解するために使用できる他の分類器について学びます。
+
+> 分類モデルを扱うことを学ぶのに役立つ低コードツールがあります。このタスクには [Azure ML](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott) を試してみてください。
+
+## レッスン
+
+1. [分類の紹介](1-Introduction/README.md)
+2. [その他の分類器](2-Classifiers-1/README.md)
+3. [さらに別の分類器](3-Classifiers-2/README.md)
+4. [応用機械学習: ウェブアプリを作成する](4-Applied/README.md)
+
+## クレジット
+
+「分類の始め方」は [Cassie Breviu](https://www.twitter.com/cassiebreviu) と [Jen Looper](https://www.twitter.com/jenlooper) によって♥️で書かれました。
+
+美味しい料理のデータセットは [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines) から提供されました。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努力しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。原文の言語で書かれた元の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/1-Visualize/README.md b/translations/ja/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..c902881a9
--- /dev/null
+++ b/translations/ja/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,144 @@
+# クラスタリング入門
+
+クラスタリングは、データセットがラベル付けされていない、またはその入力が事前定義された出力と一致していないことを前提とする[教師なし学習](https://wikipedia.org/wiki/Unsupervised_learning)の一種です。さまざまなアルゴリズムを使用してラベルのないデータを整理し、データ内のパターンに従ってグループ化を提供します。
+
+[](https://youtu.be/ty2advRiWJM "No One Like You by PSquare")
+
+> 🎥 上の画像をクリックするとビデオが再生されます。クラスタリングを使った機械学習を勉強しながら、ナイジェリアのダンスホールトラックを楽しんでください。これはPSquareの2014年の高評価の曲です。
+
+## [プレ講義クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+### イントロダクション
+
+[クラスタリング](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124)はデータ探索に非常に役立ちます。ナイジェリアの聴衆が音楽を消費する方法に関するトレンドやパターンを発見できるかどうか見てみましょう。
+
+✅ クラスタリングの利用方法について少し考えてみてください。現実世界では、洗濯物を家族ごとに仕分けるときにクラスタリングが行われます🧦👕👖🩲。データサイエンスでは、ユーザーの好みを分析したり、ラベルのないデータセットの特性を決定したりする際にクラスタリングが行われます。クラスタリングは、混沌とした状況を整理するのに役立ちます。例えば、靴下の引き出しのように。
+
+[](https://youtu.be/esmzYhuFnds "Introduction to Clustering")
+
+> 🎥 上の画像をクリックするとビデオが再生されます。MITのジョン・グットタグがクラスタリングを紹介します。
+
+プロフェッショナルな設定では、クラスタリングは市場セグメンテーションや、どの年齢層がどの商品を購入するかの決定などに使用されます。別の使用例としては、クレジットカード取引のデータセットから詐欺を検出するための異常検出があります。また、医療スキャンのバッチから腫瘍を特定するためにクラスタリングを使用することもできます。
+
+✅ 銀行、eコマース、ビジネスの設定で「野生の中で」クラスタリングに遭遇した方法について考えてみてください。
+
+> 🎓 興味深いことに、クラスタ分析は1930年代に人類学と心理学の分野で始まりました。どのように使用されたか想像できますか?
+
+また、検索結果をグループ化するために使用することもできます。例えば、ショッピングリンク、画像、レビューなど。クラスタリングは、大規模なデータセットを減らし、より詳細な分析を行いたいときに役立ちます。この技術は、他のモデルを構築する前にデータを理解するために使用されます。
+
+✅ データがクラスタに整理されると、クラスタIDを割り当てます。この技術はデータセットのプライバシーを保護する際に役立ちます。クラスタIDでデータポイントを参照することで、より具体的な識別可能なデータを使用せずに済みます。他の要素ではなくクラスタIDを使用して識別する理由を考えてみてください。
+
+クラスタリング技術の理解を深めるために、この[Learnモジュール](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)を参照してください。
+
+## クラスタリングの始め方
+
+[Scikit-learnは](https://scikit-learn.org/stable/modules/clustering.html)クラスタリングを実行するための多くの方法を提供しています。選択するタイプは使用ケースに依存します。ドキュメントによると、各方法にはさまざまな利点があります。以下は、Scikit-learnでサポートされている方法とその適切な使用ケースの簡略化された表です:
+
+| メソッド名 | 使用ケース |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | 一般的な目的、帰納的 |
+| Affinity propagation | 多くの、不均一なクラスタ、帰納的 |
+| Mean-shift | 多くの、不均一なクラスタ、帰納的 |
+| Spectral clustering | 少数の、均一なクラスタ、推論的 |
+| Ward hierarchical clustering | 多くの、制約されたクラスタ、推論的 |
+| Agglomerative clustering | 多くの、制約された、非ユークリッド距離、推論的 |
+| DBSCAN | 非平坦な幾何学、不均一なクラスタ、推論的 |
+| OPTICS | 非平坦な幾何学、変動密度の不均一なクラスタ、推論的 |
+| Gaussian mixtures | 平坦な幾何学、帰納的 |
+| BIRCH | 外れ値のある大規模なデータセット、帰納的 |
+
+> 🎓 クラスタを作成する方法は、データポイントをグループにまとめる方法に大きく関係しています。いくつかの用語を解説しましょう:
+>
+> 🎓 ['推論的(transductive)' vs. '帰納的(inductive)'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> 推論的(トランスダクティブ)推論は、観察されたトレーニングケースから特定のテストケースへ直接マッピングして導かれます。帰納的(インダクティブ)推論は、トレーニングケースから一般的なルールを導き、その後でそれをテストケースに適用します。
+>
+> 例:部分的にラベル付けされたデータセットがあると想像してください。いくつかは「レコード」、いくつかは「CD」、いくつかは空白です。あなたの仕事は空白にラベルを付けることです。帰納的アプローチを選択すると、「レコード」と「CD」を探すモデルをトレーニングし、そのラベルをラベルのないデータに適用します。このアプローチは、実際には「カセット」であるものを分類するのに苦労します。一方、推論的アプローチは、似たアイテムをグループ化し、そのグループにラベルを適用することで、この未知のデータをより効果的に処理します。この場合、クラスタは「丸い音楽のもの」や「四角い音楽のもの」を反映するかもしれません。
+>
+> 🎓 ['非平坦' vs. '平坦'な幾何学](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> 数学的用語から派生した非平坦 vs. 平坦な幾何学は、ポイント間の距離を「平坦」([ユークリッド](https://wikipedia.org/wiki/Euclidean_geometry))または「非平坦」(非ユークリッド)な幾何学的方法で測定することを指します。
+>
+>'平坦'はユークリッド幾何学(部分的には「平面」幾何学として教えられる)を指し、非平坦は非ユークリッド幾何学を指します。幾何学が機械学習と何の関係があるのでしょうか?数学に根ざした2つの分野として、クラスタ内のポイント間の距離を測定する共通の方法が必要であり、それはデータの性質に応じて「平坦」または「非平坦」な方法で行うことができます。[ユークリッド距離](https://wikipedia.org/wiki/Euclidean_distance)は、2つのポイント間の線分の長さとして測定されます。[非ユークリッド距離](https://wikipedia.org/wiki/Non-Euclidean_geometry)は曲線に沿って測定されます。データが平面上に存在しないように見える場合、特殊なアルゴリズムを使用する必要があるかもしれません。
+>
+
+> インフォグラフィック: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['距離'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> クラスタは、その距離行列、つまりポイント間の距離によって定義されます。この距離は、いくつかの方法で測定できます。ユークリッドクラスタはポイント値の平均によって定義され、「重心」または中心点を含みます。距離はその重心までの距離によって測定されます。非ユークリッド距離は「クラストロイド」と呼ばれる最も近いポイントによって測定されます。クラストロイドはさまざまな方法で定義できます。ユークリッド距離の計算例は、この用語集の後の短いスケッチを参照してください。
+>
+> 🎓 ['制約された'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [制約付きクラスタリング](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf)は、この教師なし方法に「半教師あり」学習を導入します。ポイント間の関係は「リンクできない」または「リンクしなければならない」としてフラグが立てられ、データセットにいくつかのルールが適用されます。
+>
+>例:アルゴリズムがラベルのないまたは半ラベルのデータのバッチに自由に設定されると、生成されるクラスタは質が低い可能性があります。上記の例では、クラスタは「丸い音楽のもの」、「四角い音楽のもの」、「三角形のもの」、「クッキー」をグループ化するかもしれません。いくつかの制約、つまりフォローするルール(「アイテムはプラスチックでなければならない」、「アイテムは音楽を生成できる必要がある」)を与えると、アルゴリズムがより良い選択をするのに役立ちます。
+>
+> 🎓 '密度'
+>
+> 'ノイズ'の多いデータは「密度が高い」と見なされます。そのクラスタ内のポイント間の距離は、調査の結果、より密度が高い、または低い、つまり「混雑している」ことがわかるかもしれません。このデータは適切なクラスタリング方法で分析する必要があります。[この記事](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html)は、不均一なクラスタ密度を持つノイズの多いデータセットを探索するためにK-MeansクラスタリングとHDBSCANアルゴリズムを使用する違いを示しています。
+
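+上の「距離」の説明を補足する最小のスケッチです。2点間のユークリッド距離(線分の長さ)はNumPyで次のように計算できます(座標は説明用の仮の値です):
+
+```python
+# 2つのデータポイント間のユークリッド距離を計算するスケッチ
+import numpy as np
+
+a = np.array([1.0, 2.0])
+b = np.array([4.0, 6.0])
+
+distance = np.linalg.norm(a - b)  # sqrt((4-1)**2 + (6-2)**2)
+print(distance)  # 5.0
+```
+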
+## クラスタリングアルゴリズム
+
+クラスタリングアルゴリズムは100以上あり、その使用は手元のデータの性質に依存します。主要なものをいくつか紹介しましょう(リストの後に簡単な比較スケッチも示します):
+
+- **階層的クラスタリング**。オブジェクトが遠くのオブジェクトよりも近くのオブジェクトに基づいて分類される場合、クラスタはメンバーの他のオブジェクトとの距離に基づいて形成されます。Scikit-learnの凝集クラスタリングは階層的です。
+
+ 
+ > インフォグラフィック: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **重心クラスタリング**。この人気のあるアルゴリズムは、'k'、つまり形成するクラスタの数を選択する必要があります。その後、アルゴリズムはクラスタの中心点を決定し、その点の周りにデータを収集します。[K-meansクラスタリング](https://wikipedia.org/wiki/K-means_clustering)は重心クラスタリングの人気バージョンです。中心は最も近い平均によって決定されるため、この名前が付いています。クラスタからの二乗距離が最小化されます。
+
+ 
+ > インフォグラフィック: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **分布ベースのクラスタリング**。統計モデリングに基づいており、分布ベースのクラスタリングはデータポイントがクラスタに属する確率を決定し、それに応じて割り当てます。ガウス混合法はこのタイプに属します。
+
+- **密度ベースのクラスタリング**。データポイントはその密度、つまり互いの周りにグループ化されることに基づいてクラスタに割り当てられます。グループから遠く離れたデータポイントは外れ値またはノイズと見なされます。DBSCAN、Mean-shift、およびOPTICSはこのタイプのクラスタリングに属します。
+
+- **グリッドベースのクラスタリング**。多次元データセットの場合、グリッドが作成され、データはグリッドのセルに分割され、それによってクラスタが作成されます。
+
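+上のリストのいくつかのアルゴリズムは、Scikit-learnで簡単に比較できます。以下は、非平坦な形状の合成データ(半月形)に対して、重心ベースのK-Meansと密度ベースのDBSCANを適用する簡単なスケッチです(`eps`などのパラメータ値は説明用の仮定です):
+
+```python
+# 重心ベース(K-Means)と密度ベース(DBSCAN)の挙動の違いを合成データで確かめるスケッチ
+from sklearn.cluster import KMeans, DBSCAN
+from sklearn.datasets import make_moons
+
+X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
+
+kmeans_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)  # 非平坦な形状は苦手
+dbscan_labels = DBSCAN(eps=0.2).fit_predict(X)  # 密度に基づいて半月を分離できる
+
+print(set(kmeans_labels), set(dbscan_labels))
+```
+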
+## 演習 - データをクラスタリングする
+
+クラスタリング技術は適切な視覚化によって大いに助けられるので、音楽データを視覚化することから始めましょう。この演習は、このデータの性質に最も効果的なクラスタリング方法を決定するのに役立ちます。
+
+1. このフォルダ内の[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb)ファイルを開きます。
+
+1. 良いデータ視覚化のために`Seaborn`パッケージをインポートします。
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv)から曲データを追加します。曲に関するデータでデータフレームを読み込みます。ライブラリをインポートし、データをダンプしてこのデータを探索する準備をします:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ 最初の数行のデータを確認します:
+
+ | | name | album | artist | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+    | 4   | wanted you               | rare.                        | Odunsi (The Engine) | afropop          | 2018         | 152049 | 25         | ...          | ...          | ...    | ...              | ...      | ...      | ...         | ...     | ...            |
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## 復習と自主学習
+
+クラスタリングアルゴリズムを適用する前に、データセットの性質を理解することが重要です。このトピックについてもっと知りたい方は[こちら](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)をご覧ください。
+
+[この役立つ記事](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/)では、さまざまなデータ形状に応じた異なるクラスタリングアルゴリズムの挙動について説明しています。
+
+## 課題
+
+[クラスタリングの他の可視化方法を調査する](assignment.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳にはエラーや不正確さが含まれる場合がありますのでご注意ください。元の言語での原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/1-Visualize/assignment.md b/translations/ja/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..0bdc36b12
--- /dev/null
+++ b/translations/ja/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# クラスタリングのための他の視覚化手法の調査
+
+## 指示
+
+このレッスンでは、データをプロットしてクラスタリングの準備をするためのいくつかの視覚化手法について学びました。特に散布図は、オブジェクトのグループを見つけるのに役立ちます。散布図を作成するための異なる方法や異なるライブラリを調査し、その作業をノートブックに記録してください。このレッスンのデータ、他のレッスンのデータ、または自分で調達したデータを使用できます(ただし、ノートブックにはその出典を明記してください)。散布図を使用してデータをプロットし、発見したことを説明してください。
+
+## ルーブリック
+
+| 基準 | 模範的なもの | 適切なもの | 改善が必要なもの |
+| -------- | ---------------------------------------------------------- | -------------------------------------------------------------------------- | ----------------------------------- |
+| | 五つの十分に文書化された散布図を含むノートブックが提示される | 五つ未満の散布図を含むノートブックが提示され、文書化が不十分である | 不完全なノートブックが提示される |
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。権威ある情報源としては、原文の母国語の文書を考慮すべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/ja/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..92ef81a0b
--- /dev/null
+++ b/translations/ja/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確な点が含まれる場合があります。元の言語の文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/2-K-Means/README.md b/translations/ja/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..70347e105
--- /dev/null
+++ b/translations/ja/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means クラスタリング
+
+## [事前講義クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+このレッスンでは、以前にインポートしたナイジェリア音楽データセットを使用して、Scikit-learnを使ったクラスタリングの作成方法を学びます。K-Meansクラスタリングの基本について説明します。前のレッスンで学んだように、クラスタリングには多くの方法があり、使用する方法はデータに依存します。K-Meansは最も一般的なクラスタリング技術なので、これを試してみましょう!
+
+学ぶ用語:
+
+- シルエットスコア
+- エルボー法
+- 慣性
+- 分散
+
+## 導入
+
+[K-Meansクラスタリング](https://wikipedia.org/wiki/K-means_clustering)は、信号処理の分野から派生した手法です。データのグループを 'k' クラスターに分割し、観察を通じてクラスタリングを行います。各観察は、与えられたデータポイントを最も近い '平均'、つまりクラスターの中心点にグループ化するために機能します。
+
+クラスタは、点(または '種子')とその対応する領域を含む[ボロノイ図](https://wikipedia.org/wiki/Voronoi_diagram)として視覚化できます。
+
+
+
+> インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
+
+K-Meansクラスタリングのプロセスは[3ステップで実行されます](https://scikit-learn.org/stable/modules/clustering.html#k-means)(リストの後に簡単なスケッチも示します):
+
+1. アルゴリズムはデータセットからサンプリングしてk個の中心点を選択します。その後、以下をループします:
+ 1. 各サンプルを最も近いセントロイドに割り当てます。
+ 2. 前のセントロイドに割り当てられたすべてのサンプルの平均値を取って新しいセントロイドを作成します。
+ 3. その後、新旧のセントロイドの差を計算し、セントロイドが安定するまで繰り返します。
+
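+上の3つのステップは、NumPyだけでも次のようにスケッチできます(データと初期セントロイドは説明用の仮の値で、実際のレッスンではScikit-learnの実装を使います):
+
+```python
+# K-Means の反復(割り当て→セントロイド更新→収束判定)を示す最小のスケッチ
+import numpy as np
+
+X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5]])  # 仮のデータ
+centroids = X[:2].copy()  # k=2 個のセントロイドをサンプリングで初期化
+
+for _ in range(10):
+    # ステップ1: 各サンプルを最も近いセントロイドに割り当てる
+    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
+    assignments = distances.argmin(axis=1)
+    # ステップ2: 割り当てられたサンプルの平均値で新しいセントロイドを作る
+    new_centroids = np.array([X[assignments == k].mean(axis=0) for k in range(2)])
+    # ステップ3: セントロイドが安定したら終了する
+    if np.allclose(new_centroids, centroids):
+        break
+    centroids = new_centroids
+
+print(centroids)
+```
+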
+K-Meansの使用には、 'k' 、つまりセントロイドの数を設定する必要があるという欠点があります。幸いにも、 'エルボー法' は 'k' の適切な開始値を推定するのに役立ちます。すぐに試してみましょう。
+
+## 前提条件
+
+このレッスンの[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb)ファイルで作業します。このファイルには、前のレッスンで行ったデータのインポートと事前クリーニングが含まれています。
+
+## 演習 - 準備
+
+もう一度、曲のデータを見てみましょう。
+
+1. 各列に対して `boxplot()` を呼び出してボックスプロットを作成します:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ このデータは少しノイズが多いです: 各列をボックスプロットとして観察すると、外れ値が見えます。
+
+ 
+
+データセットを通じてこれらの外れ値を削除することもできますが、それではデータが非常に少なくなります。
+
+1. クラスタリング演習に使用する列を選択します。範囲が似ているものを選び、 `artist_top_genre` 列を数値データとしてエンコードします:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. 次に、ターゲットとするクラスタ数を選択する必要があります。データセットから3つの曲のジャンルがあることがわかっているので、3つを試してみましょう:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+データフレームの各行に対して予測されたクラスタ(0, 1, 2)が含まれた配列が表示されます。
+
+1. この配列を使用して 'シルエットスコア' を計算します:
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## シルエットスコア
+
+シルエットスコアは1に近いほど良いです。このスコアは-1から1まで変動し、スコアが1の場合、クラスターは密集しており他のクラスターとよく分離されています。0に近い値は、隣接するクラスターの決定境界に非常に近いサンプルを持つオーバーラップするクラスターを表します。 [(ソース)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+私たちのスコアは**.53**ですので、中間です。これは、このタイプのクラスタリングにデータが特に適していないことを示していますが、続けてみましょう。
+
+### 演習 - モデルの構築
+
+1. `KMeans` をインポートし、クラスタリングプロセスを開始します。
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ ここには説明する価値のあるいくつかの部分があります。
+
+ > 🎓 range: クラスタリングプロセスの反復回数です
+
+ > 🎓 random_state: "セントロイドの初期化のための乱数生成を決定します。" [ソース](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "within-cluster sums of squares" は、クラスター内のすべてのポイントのクラスターセントロイドへの平均距離の平方を測定します。 [ソース](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 Inertia: K-Meansアルゴリズムは 'inertia' を最小化するセントロイドを選択しようとします。これは、"クラスターが内部的にどれだけ一貫しているかを測定するものです。" [ソース](https://scikit-learn.org/stable/modules/clustering.html). この値は各反復でwcss変数に追加されます。
+
+    > 🎓 k-means++: [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means)では、'k-means++'最適化を使用できます。これは、「セントロイドをお互いに(一般的に)遠くに初期化し、ランダムな初期化よりもおそらく良い結果をもたらす」というものです。
+
+### エルボー法
+
+以前、3つの曲のジャンルをターゲットにしているため、3つのクラスターを選択するべきだと推測しました。しかし、それは本当でしょうか?
+
+1. 'エルボー法' を使用して確認します。
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ 前のステップで構築した `wcss` 変数を使用して、クラスターの最適な数を示す 'エルボー' の曲がりを示すチャートを作成します。おそらくそれは **3** です!
+
+ 
+
+## 演習 - クラスタを表示する
+
+1. 今度は3つのクラスターを設定してプロセスを再試行し、クラスタを散布図として表示します:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. モデルの精度を確認します:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ このモデルの精度はあまり良くなく、クラスタの形状がその理由を示しています。
+
+ 
+
+ このデータは不均衡で、相関が少なく、列の値の間に大きなばらつきがあり、うまくクラスタリングできません。実際、形成されるクラスタは、上で定義した3つのジャンルカテゴリに大きく影響されているか、偏っている可能性があります。これは学習プロセスでした!
+
+ Scikit-learnのドキュメントでは、このようにクラスタがあまりよく区別されていないモデルは '分散' の問題があるとされています:
+
+ 
+    > インフォグラフィック:Scikit-learnより
+
+## 分散
+
+分散は "平均からの平方の差の平均" と定義されます [(ソース)](https://www.mathsisfun.com/data/standard-deviation.html). このクラスタリング問題の文脈では、データセットの数値が平均から少し離れすぎることを指します。
+
+✅ これは、この問題を解決するためのすべての方法を考える良い機会です。データをもう少し調整しますか?他の列を使用しますか?別のアルゴリズムを使用しますか?ヒント: [データをスケーリング](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) して正規化し、他の列をテストしてみてください。
+
+> この '[分散計算機](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' を試して、概念をもう少し理解してください。
+
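+✅ のヒントにあるスケーリングは、例えば次のように試せます(`X`は上でクラスタリングに使った特徴量のデータフレームという前提の簡単なスケッチです):
+
+```python
+# 列ごとのスケールの違いを抑えるために標準化してから再クラスタリングするスケッチ
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+from sklearn import metrics
+
+X_scaled = StandardScaler().fit_transform(X)  # 各列を平均0・分散1に揃える
+
+kmeans = KMeans(n_clusters=3)
+labels_scaled = kmeans.fit_predict(X_scaled)
+print(metrics.silhouette_score(X_scaled, labels_scaled))
+```
+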
+---
+
+## 🚀チャレンジ
+
+このノートブックでパラメータを調整してみてください。データをさらにクリーニングすることでモデルの精度を向上させることができますか(たとえば、外れ値を削除するなど)?特定のデータサンプルに重みをつけることもできます。他にどのようにしてより良いクラスタを作成できますか?
+
+ヒント: データをスケーリングしてみてください。ノートブックには、データ列が範囲の点で互いにより似るように標準スケーリングを追加するコメント付きのコードがあります。シルエットスコアは下がりますが、エルボーグラフの '曲がり' が滑らかになります。これは、データをスケーリングせずに放置すると、分散が少ないデータがより多くの重みを持つことになるためです。この問題についてさらに読みたい場合は、[こちら](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226)を参照してください。
+
+## [事後講義クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## レビューと自主学習
+
+このK-Meansシミュレーター[こちら](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/)を見てみてください。このツールを使用して、サンプルデータポイントを視覚化し、そのセントロイドを決定できます。データのランダム性、クラスタの数、セントロイドの数を編集できます。これにより、データがどのようにグループ化されるかのアイデアが得られますか?
+
+また、スタンフォードの[K-Meansに関するハンドアウト](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html)も見てみてください。
+
+## 課題
+
+[さまざまなクラスタリング手法を試してみてください](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。原文が記載された母国語の文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じる誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/2-K-Means/assignment.md b/translations/ja/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..22351e672
--- /dev/null
+++ b/translations/ja/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# 異なるクラスタリング手法を試す
+
+## 指示
+
+このレッスンでは、K-Meansクラスタリングについて学びました。時には、K-Meansがあなたのデータに適していない場合もあります。これらのレッスンや他の場所からデータを使用して(ソースを明記してください)、K-Meansを使用しない別のクラスタリング手法を示すノートブックを作成してください。何を学びましたか?
+## 評価基準
+
+| 基準 | 優秀 | 適切 | 改善の余地あり |
+| -------- | ------------------------------------------------------------- | -------------------------------------------------------------------- | ---------------------------- |
+| | よく文書化されたクラスタリングモデルが提示されている | 十分な文書化がない、または不完全なノートブックが提示されている | 不完全な作業が提出されている |
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すために努力していますが、自動翻訳には誤りや不正確さが含まれる場合があります。権威ある情報源として、元の言語で書かれた原文を参照してください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/ja/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..8ff8ff839
--- /dev/null
+++ b/translations/ja/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語での文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/5-Clustering/README.md b/translations/ja/5-Clustering/README.md
new file mode 100644
index 000000000..63c955d8b
--- /dev/null
+++ b/translations/ja/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# 機械学習のためのクラスタリングモデル
+
+クラスタリングは、互いに似たオブジェクトを見つけ、それらをクラスタと呼ばれるグループにまとめる機械学習のタスクです。クラスタリングが他の機械学習のアプローチと異なるのは、すべてが自動的に行われる点です。実際、これは教師あり学習とは対極にあると言えます。
+
+## 地域トピック: ナイジェリアの聴衆の音楽の好みに基づくクラスタリングモデル 🎧
+
+ナイジェリアの多様な聴衆には、多様な音楽の好みがあります。Spotifyからスクレイピングされたデータを使用して([この記事](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)に触発されて)、ナイジェリアで人気のある音楽を見てみましょう。このデータセットには、さまざまな曲の「ダンスしやすさ」スコア、「アコースティック性」、音量、「スピーチ性」、人気度、エネルギーに関するデータが含まれています。このデータの中にどのようなパターンが見つかるか、興味深いところです!
+
+
+
+> 写真提供:Marcela Laskoski on Unsplash
+
+この一連のレッスンでは、クラスタリング技術を使用してデータを分析する新しい方法を発見します。クラスタリングは、データセットにラベルがない場合に特に有用です。ラベルがある場合は、前のレッスンで学んだような分類技術の方が役立つかもしれません。しかし、ラベルのないデータをグループ化しようとしている場合、クラスタリングはパターンを発見するための優れた方法です。
+
+> クラスタリングモデルを扱う学習に役立つローコードツールがあります。[このタスクにはAzure ML](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott) を試してみてください
+
+## レッスン
+
+1. [クラスタリングの紹介](1-Visualize/README.md)
+2. [K-Meansクラスタリング](2-K-Means/README.md)
+
+## クレジット
+
+これらのレッスンは、[Jen Looper](https://www.twitter.com/jenlooper) が🎶を込めて執筆し、[Rishit Dagli](https://rishit_dagli) と [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) が有益なレビューを行いました。
+
+[Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify)データセットは、SpotifyからスクレイピングされたものとしてKaggleから入手しました。
+
+このレッスンの作成に役立った有用なK-Meansの例には、この[iris exploration](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering)、この[introductory notebook](https://www.kaggle.com/prashant111/k-means-clustering-with-python)、およびこの[hypothetical NGO example](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering)が含まれています。
+
+**免責事項**:
+この文書は、機械翻訳AIサービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。元の言語の文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/1-Introduction-to-NLP/README.md b/translations/ja/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..522e1ae1f
--- /dev/null
+++ b/translations/ja/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# 自然言語処理の入門
+
+このレッスンでは、*計算言語学*の一分野である*自然言語処理*の簡単な歴史と重要な概念を紹介します。
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## はじめに
+
+NLP(自然言語処理)は、機械学習が適用され、実際のソフトウェアで使用されている最もよく知られた分野の一つです。
+
+✅ 日常的に使用しているソフトウェアの中で、NLPが組み込まれているものを考えてみてください。例えば、ワードプロセッサやモバイルアプリなど。
+
+このレッスンで学ぶこと:
+
+- **言語の概念**。言語がどのように発展し、主要な研究分野が何であったか。
+- **定義と概念**。コンピュータがテキストを処理する方法に関する定義と概念、例えばパーシング、文法、名詞と動詞の識別などを学びます。このレッスンにはいくつかのコーディングタスクが含まれており、次のレッスンでコーディングを学ぶための重要な概念がいくつか紹介されます。
+
+## 計算言語学
+
+計算言語学は、何十年にもわたる研究と開発の分野で、コンピュータが言語をどのように扱い、さらには理解し、翻訳し、コミュニケーションできるかを研究します。自然言語処理(NLP)は、コンピュータが「自然」な、つまり人間の言語をどのように処理できるかに焦点を当てた関連分野です。
+
+### 例 - 音声入力
+
+もし、タイピングの代わりに電話に向かって話したり、仮想アシスタントに質問をしたことがあるなら、その音声はテキスト形式に変換され、話した言語から処理または*パース*されました。検出されたキーワードは、その後、電話やアシスタントが理解し、実行できる形式に処理されました。
+
+
+> 本当の言語理解は難しい!画像提供:[Jen Looper](https://twitter.com/jenlooper)
+
+### この技術はどのように実現されているのか?
+
+これは、誰かがこのためのコンピュータプログラムを書いたからです。数十年前、一部のSF作家は、人々が主にコンピュータに話しかけ、コンピュータが常に彼らの意味を正確に理解するだろうと予測しました。残念ながら、これは多くの人が想像したよりも難しい問題であることが判明しました。今日では、問題ははるかに理解されていますが、文の意味を理解する際に「完璧な」自然言語処理を達成するには重大な課題があります。特に、ユーモアや皮肉などの感情を理解することは非常に難しい問題です。
+
+この時点で、学校の授業で先生が文の文法の部分を教えたことを思い出すかもしれません。いくつかの国では、学生は文法や言語学を専用の科目として教えられますが、多くの国では、これらのトピックは言語を学ぶ一環として含まれています:小学校での最初の言語(読み書きを学ぶ)や、中学校や高校での第二言語などです。名詞と動詞や副詞と形容詞を区別するのが苦手でも心配しないでください!
+
+*現在形*と*現在進行形*の違いに苦労している場合、それはあなた一人ではありません。これは多くの人、特にその言語のネイティブスピーカーにとっても難しいことです。良いニュースは、コンピュータは形式的なルールを適用するのが非常に得意であり、人間と同じように文を*パース*するコードを書くことを学ぶことができるということです。後で検討する大きな課題は、文の*意味*や*感情*を理解することです。
+
+## 前提条件
+
+このレッスンの主な前提条件は、このレッスンの言語を読んで理解できることです。数学の問題や方程式を解く必要はありません。元の著者はこのレッスンを英語で書きましたが、他の言語にも翻訳されているため、翻訳を読んでいる可能性もあります。異なる言語の文法ルールを比較するために、いくつかの異なる言語が使用される例があります。これらは翻訳されていませんが、説明文は翻訳されているため、意味は明確です。
+
+コーディングタスクでは、Pythonを使用し、例はPython 3.8を使用しています。
+
+このセクションで必要なのは:
+
+- **Python 3の理解**。Python 3でのプログラミング言語の理解。このレッスンでは、入力、ループ、ファイルの読み取り、配列を使用します。
+- **Visual Studio Code + 拡張機能**。Visual Studio CodeとそのPython拡張機能を使用します。または、お好みのPython IDEを使用することもできます。
+- **TextBlob**。 [TextBlob](https://github.com/sloria/TextBlob) はPythonの簡易テキスト処理ライブラリです。TextBlobのサイトの指示に従ってシステムにインストールしてください(以下のようにコーパスもインストールします):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 Tip: VS Code環境でPythonを直接実行できます。詳細は[docs](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott)を参照してください。
+
+## 機械との対話
+
+コンピュータが人間の言語を理解しようとする歴史は数十年にわたり、自然言語処理を考えた最初の科学者の一人は*アラン・チューリング*です。
+
+### 'チューリングテスト'
+
+チューリングが1950年代に*人工知能*を研究していたとき、彼は、人間とコンピュータの両方に(タイプ入力のやり取りを介して)会話テストを行い、会話している人間が、相手が別の人間なのかコンピュータなのか確信できないようにできるか、を考えました。
+
+一定の会話時間の後、人間が回答がコンピュータからのものであるかどうかを判断できなかった場合、そのコンピュータは*考えている*と言えるのでしょうか?
+
+### インスピレーション - '模倣ゲーム'
+
+このアイデアは、*模倣ゲーム*と呼ばれるパーティーゲームから来ました。このゲームでは、尋問者が一人部屋にいて、別の部屋にいる二人の人物(男性と女性)を識別する任務を負います。尋問者はメモを送ることができ、書かれた回答から謎の人物の性別を明らかにする質問を考えなければなりません。もちろん、別の部屋にいるプレイヤーたちは、正直に答えているように見せながら、尋問者を誤解させたり混乱させたりするような回答をすることで、尋問者を欺こうとします。
+
+### エリザの開発
+
+1960年代、MITの科学者*ジョセフ・ワイゼンバウム*は、[エリザ](https://wikipedia.org/wiki/ELIZA)というコンピュータ「セラピスト」を開発しました。エリザは人間に質問をし、彼らの回答を理解しているかのように見せかけました。しかし、エリザは文を解析して特定の文法構造やキーワードを識別し、それなりに筋の通った回答を返すことはできましたが、文を*理解*しているとは言えませんでした。例えば、「**私は** 悲しい」という形式の文を提示された場合、エリザは文中の単語を並べ替えたり置き換えたりして、「**あなたが** 悲しいのは、どれくらいの期間ですか?」という回答を作るかもしれません。
+
+これにより、エリザが文を理解して続きの質問をしているかのような印象が生まれましたが、実際には時制を変え、いくつかの単語を足していただけでした。エリザが回答の手がかりとなるキーワードを識別できない場合は、多くの異なる文に当てはまるランダムな回答を返します。例えば、ユーザーが「**あなたは** 自転車です」と書いた場合、エリザはより筋の通った回答ではなく、「**私が** 自転車なのは、どれくらいの期間ですか?」と答えてしまうかもしれません。
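+
+この「単語を並べ替え、人称や時制を置き換える」という振る舞いは、ごく短いコードで雰囲気をつかめます。以下は本物のELIZAの実装ではなく、考え方だけを示す仮のスケッチです:
+
+```python
+# ELIZA風の応答の考え方を示す仮のスケッチ
+pronoun_swaps = {"i": "you", "am": "are", "my": "your", "me": "you"}
+
+def reflect(sentence):
+    """入力文の人称を反転して返す"""
+    words = sentence.lower().split()
+    return " ".join(pronoun_swaps.get(w, w) for w in words)
+
+user_input = "I am sad"
+# 「you are sad」に変換し、続きを促す質問を付けて返す
+print("Why do you say " + reflect(user_input) + "? How long has that been true?")
+```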
+
+[](https://youtu.be/RMK9AphfLco "エリザとのチャット")
+
+> 🎥 上の画像をクリックして、オリジナルのELIZAプログラムについてのビデオをご覧ください
+
+> 注:1966年に公開された[エリザ](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract)のオリジナルの説明を読むことができます。ACMアカウントを持っている場合、または[wikipedia](https://wikipedia.org/wiki/ELIZA)でエリザについて読むこともできます。
+
+## 演習 - 基本的な会話ボットのコーディング
+
+エリザのような会話ボットは、ユーザー入力を引き出し、理解しているかのように見せかけて応答するプログラムです。エリザとは異なり、私たちのボットは知的な会話をしているように見せかけるためのいくつかのルールを持っていません。代わりに、私たちのボットは、ほぼどんな些細な会話にも使えるランダムな応答を持つだけです。
+
+### 計画
+
+会話ボットを作成する際のステップ:
+
+1. ユーザーにボットとの対話方法を指示するテキストを表示
+2. ループを開始
+ 1. ユーザー入力を受け取る
+ 2. ユーザーが終了を求めた場合は終了する
+ 3. ユーザー入力を処理し、応答を決定する(この場合、応答は可能な一般的な応答のリストからランダムに選ばれる)
+ 4. 応答を表示する
+3. ステップ2に戻る
+
+### ボットの作成
+
+次にボットを作成しましょう。まず、いくつかのフレーズを定義します。
+
+1. 次のランダムな応答を持つPythonでこのボットを作成します:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
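+
+ ボットの残りの部分の書き方は一通りではありませんが、上記の計画に沿った最小のスケッチを参考までに示します(挨拶や終了の文言は、下のサンプル出力に合わせたこの例での仮定です):
+
+ ```python
+ import random
+
+ # 上で定義した random_responses を使う前提の最小スケッチ
+ print("Hello, I am Marvin, the simple robot.")
+ print("You can end this conversation at any time by typing 'bye'")
+ print("After typing each answer, press 'enter'")
+ print("How are you today?")
+
+ while True:
+     user_input = input("> ")
+     if user_input.lower() == "bye":
+         break
+     # 入力は解析せず、リストからランダムに応答を選ぶだけ
+     print(random.choice(random_responses))
+
+ print("It was nice talking to you, goodbye!")
+ ```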
+
+ 以下は、参考のためのサンプル出力です(ユーザー入力は`>`で始まる行です):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ タスクの一つの可能な解決策は[こちら](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)です
+
+ ✅ 考えてみてください
+
+ 1. ランダムな応答がボットが実際に理解していると思わせることができると思いますか?
+ 2. ボットがより効果的になるためにはどんな機能が必要ですか?
+ 3. ボットが本当に文の意味を理解できるなら、会話の前の文の意味も「覚えて」おく必要があるでしょうか?
+
+---
+
+## 🚀チャレンジ
+
+上記の「考えてみてください」の要素の一つを選び、それをコードに実装するか、擬似コードを使用して紙に解決策を書いてみてください。
+
+次のレッスンでは、自然言語のパースと機械学習の他のアプローチについて学びます。
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## 復習と自己学習
+
+以下の参考文献をさらに読み進める機会としてご覧ください。
+
+### 参考文献
+
+1. Schubert, Lenhart, "Computational Linguistics", *The Stanford Encyclopedia of Philosophy* (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## 課題
+
+[ボットを探す](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。権威ある情報源としては、原文の言語の文書を参照してください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用により生じる誤解や誤訳については、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/ja/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..32d6ec0b0
--- /dev/null
+++ b/translations/ja/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# ボットを探す
+
+## 手順
+
+ボットはあらゆる場所に存在します。あなたの課題は、ボットを見つけて採用することです!ウェブサイト、銀行アプリケーション、そして電話で見つけることができます。例えば、金融サービス会社にアドバイスやアカウント情報を求めて電話をかけるときなどです。ボットを分析して、混乱させることができるか試してみてください。もしボットを混乱させることができたら、なぜそうなったのかを考えてみてください。その経験について短いレポートを書いてください。
+
+## 採点基準
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| ------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------- | --------------------- |
+| | ボットのアーキテクチャを推測し、その経験を詳細に説明した1ページのレポートが書かれている | レポートが不完全であったり、十分に調査されていない | レポートが提出されていない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語で書かれた原文を権威ある情報源とみなしてください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/2-Tasks/README.md b/translations/ja/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..7058d21d1
--- /dev/null
+++ b/translations/ja/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# 一般的な自然言語処理タスクと技術
+
+ほとんどの*自然言語処理*タスクでは、処理するテキストを分解し、調査し、その結果をルールやデータセットと照合する必要があります。これらのタスクにより、プログラマーはテキスト内の用語や単語の_意味_や_意図_、または単に_頻度_を導き出すことができます。
+
+## [事前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+テキスト処理に使用される一般的な技術を見てみましょう。これらの技術は機械学習と組み合わせることで、大量のテキストを効率的に分析するのに役立ちます。しかし、これらのタスクにMLを適用する前に、NLPの専門家が直面する問題を理解しておきましょう。
+
+## NLPに共通するタスク
+
+作業中のテキストを分析する方法はいくつかあります。これらのタスクを実行することで、テキストの理解を深め、結論を導き出すことができます。通常、これらのタスクは一連の手順で行います。
+
+### トークン化
+
+おそらくほとんどのNLPアルゴリズムが最初に行うことは、テキストをトークンや単語に分割することです。これが簡単に思えるかもしれませんが、句読点や異なる言語の単語や文の区切りを考慮する必要があるため、難しくなることがあります。区切りを判断するためにさまざまな方法を使用する必要があるかもしれません。
+
+
+> **高慢と偏見**からの文をトークン化する。インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
+
+### 埋め込み
+
+[単語の埋め込み](https://wikipedia.org/wiki/Word_embedding)は、テキストデータを数値的に変換する方法です。埋め込みは、意味が似ている単語や一緒に使われる単語がクラスターとして集まるように行われます。
+
+
+> "I have the highest respect for your nerves, they are my old friends." - **高慢と偏見**の文に対する単語の埋め込み。インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
+
+✅ [この興味深いツール](https://projector.tensorflow.org/)を試して、単語の埋め込みを実験してみてください。ある単語をクリックすると、似た単語のクラスターが表示されます:'toy'は'disney', 'lego', 'playstation', 'console'とクラスターを形成します。
+
+### 解析と品詞タグ付け
+
+トークン化されたすべての単語は、名詞、動詞、形容詞などの品詞としてタグ付けされます。文 `the quick red fox jumped over the lazy brown dog` は、fox = 名詞、jumped = 動詞として品詞タグ付けされるかもしれません。
+
+
+
+> **高慢と偏見**からの文を解析する。インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
+
+解析とは、文中のどの単語が互いに関連しているかを認識することです。例えば、`the quick red fox jumped` は形容詞-名詞-動詞のシーケンスであり、`lazy brown dog` シーケンスとは別です。
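+
+品詞タグ付けは、後のレッスンでも使うTextBlobで手軽に試せます(事前に本レッスンの手順でコーパスをインストールしておきます)。以下は動作確認用の小さな例です:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("The quick red fox jumped over the lazy brown dog")
+# .tags は (単語, 品詞タグ) のペアのリストを返す
+print(blob.tags)
+# 例: [('The', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'), ('jumped', 'VBD'), ...]
+```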
+
+### 単語とフレーズの頻度
+
+大規模なテキストを分析する際に有用な手法は、興味のあるすべての単語やフレーズの辞書を作成し、それがどのくらいの頻度で出現するかを記録することです。フレーズ `the quick red fox jumped over the lazy brown dog` は、the の単語頻度が2です。
+
+単語の頻度をカウントする例文を見てみましょう。ラドヤード・キップリングの詩「The Winners」には次の節があります:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+フレーズの頻度は、大文字小文字を区別して数えることも、区別せずに数えることもできます。例えば上の節では、フレーズ `a friend` の頻度は2、`the` の頻度は6、`travels` の頻度は2です。
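+
+このような頻度カウントは、Pythonの標準ライブラリだけでも簡単に実装できます。以下は大文字小文字を区別しない数え方の一例です:
+
+```python
+from collections import Counter
+import re
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# 小文字化してから単語だけを取り出す(句読点は除く)
+words = re.findall(r"[a-z']+", verse.lower())
+freq = Counter(words)
+print(freq["the"])      # 6
+print(freq["travels"])  # 2
+```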
+
+### N-gram
+
+テキストは、一定の長さの単語のシーケンスに分割できます。単一の単語(ユニグラム)、2つの単語(バイグラム)、3つの単語(トライグラム)または任意の数の単語(n-gram)です。
+
+例えば、`the quick red fox jumped over the lazy brown dog` を2-gram(バイグラム)に分割すると、次のn-gramが生成されます:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+これは文の上にスライドするボックスとして視覚化するのが簡単かもしれません。3単語のn-gramの場合、各文でn-gramが太字で示されています:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> N-gram値3: インフォグラフィック提供:[Jen Looper](https://twitter.com/jenlooper)
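+
+単純なn-gramへの分割は、数行の関数として書けます:
+
+```python
+def ngrams(text, n):
+    """テキストをn個の単語からなるn-gramのリストに分割する"""
+    words = text.split()
+    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
+
+sentence = "the quick red fox jumped over the lazy brown dog"
+print(ngrams(sentence, 2))  # ['the quick', 'quick red', 'red fox', ...]
+print(ngrams(sentence, 3))  # ['the quick red', 'quick red fox', ...]
+```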
+
+### 名詞句抽出
+
+ほとんどの文には、主語または目的語としての名詞があります。英語では、しばしば 'a' や 'an' または 'the' が前に置かれていることで識別できます。文の意味を理解しようとする際に、名詞句を抽出して文の主語または目的語を特定することは、NLPで一般的なタスクです。
+
+✅ "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun." という文で、名詞句を特定できますか?
+
+文 `the quick red fox jumped over the lazy brown dog` には、2つの名詞句があります:**quick red fox** と **lazy brown dog**。
+
+### 感情分析
+
+文やテキストは、その*ポジティブ*または*ネガティブ*な感情を分析することができます。感情は*極性*と*客観性/主観性*で測定されます。極性は -1.0 から 1.0(ネガティブからポジティブ)で、客観性/主観性は 0.0 から 1.0(最も客観的から最も主観的)で測定されます。
+
+✅ 後で、機械学習を使用して感情を判定するさまざまな方法を学びますが、一つの方法は、ポジティブまたはネガティブに分類された単語やフレーズのリストを人間の専門家が作成し、そのモデルをテキストに適用して極性スコアを計算することです。この方法がどのように機能するか、またはうまく機能しない場合があるかを理解できますか?
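+
+上の✅で述べた「分類済みの単語リストを当てはめる」アプローチは、最小のスケッチとして次のように書けます(単語リストとスコアの付け方はこの例での仮定です):
+
+```python
+import re
+
+# 仮の極性辞書(実際には専門家が作成した大規模なリストを使う)
+positive_words = {"great", "wonderful", "glad", "happy"}
+negative_words = {"waste", "lost", "dark", "sad"}
+
+def naive_polarity(text):
+    words = re.findall(r"[a-z']+", text.lower())
+    score = sum(w in positive_words for w in words) - sum(w in negative_words for w in words)
+    # 単語数で正規化して、おおまかな極性スコアにする
+    return score / max(len(words), 1)
+
+# 皮肉な文では肯定語が並ぶため、この単純な手法は簡単に騙される
+print(naive_polarity("Great, that was a wonderful waste of time"))  # 0.125
+```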
+
+### 屈折
+
+屈折は、単語を取り、その単語の単数形または複数形を取得することを可能にします。
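+
+TextBlobの `Word` を使うと、このような語形変化を簡単に試せます:
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())     # foxes
+print(Word("cats").singularize())  # cat
+```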
+
+### レンマ化
+
+*レンマ*は、一連の単語の語根、つまり見出し語です。例えば、*flew*、*flies*、*flying* のレンマは動詞 *fly* です。
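+
+これもTextBlobの `Word` で確認できます。`lemmatize` は既定で単語を名詞として扱うため、動詞には "v" を渡します:
+
+```python
+from textblob import Word
+
+print(Word("flies").lemmatize("v"))   # fly
+print(Word("flew").lemmatize("v"))    # fly
+print(Word("flying").lemmatize("v"))  # fly
+```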
+
+NLP研究者にとって有用なデータベースもあります。特に:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/)は、異なる言語のすべての単語の同義語、反義語、およびその他の詳細を含むデータベースです。翻訳、スペルチェッカー、または任意の種類の言語ツールを構築する際に非常に役立ちます。
+
+## NLPライブラリ
+
+幸いなことに、これらの技術をすべて自分で構築する必要はありません。自然言語処理や機械学習に特化していない開発者にもアクセスしやすくする優れたPythonライブラリがあります。次のレッスンではこれらの例をさらに紹介しますが、ここでは次のタスクに役立ついくつかの有用な例を学びます。
+
+### 演習 - `TextBlob` ライブラリ
+
+これらの種類のタスクに取り組むための便利なAPIを備えた、TextBlobというライブラリを使ってみましょう。TextBlobは「[NLTK](https://nltk.org) と [pattern](https://github.com/clips/pattern) という巨人の肩の上に立ち、両者とうまく連携します」とされています。そのAPIにはかなりの量の機械学習が組み込まれています。
+
+> 注: TextBlobには、経験豊富なPython開発者向けに推奨される便利な[クイックスタート](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart)ガイドがあります
+
+*名詞句*を特定しようとする場合、TextBlobは名詞句を見つけるための抽出器をいくつか提供しています。
+
+1. `ConllExtractor` を見てみましょう
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > ここで何が起こっているのか? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) は「ConLL-2000トレーニングコーパスで訓練されたチャンク解析を使用する名詞句抽出器」です。ConLL-2000は、2000年の計算言語学学習に関する会議を指します。毎年、この会議は難解なNLP問題に取り組むワークショップを開催しており、2000年には名詞チャンクに焦点を当てました。モデルはWall Street Journalで訓練され、「セクション15-18を訓練データ(211727トークン)として使用し、セクション20をテストデータ(47377トークン)として使用しました」。使用された手順は[こちら](https://www.clips.uantwerpen.be/conll2000/chunking/)で、[結果](https://ifarm.nl/erikt/research/np-chunking.html)を確認できます。
+
+### チャレンジ - NLPでボットを改良する
+
+前のレッスンでは、非常にシンプルなQ&Aボットを作成しました。今回は、入力の感情を分析して感情に応じた応答を出力することで、マーヴィンを少し共感的にします。また、`noun_phrase`を特定し、そのトピックについてさらに入力を求める必要があります。
+
+より良い会話ボットを構築する際の手順:
+
+1. ユーザーにボットとの対話方法を指示する
+2. ループを開始する
+ 1. ユーザー入力を受け取る
+ 2. ユーザーが終了を求めた場合は終了する
+ 3. ユーザー入力を処理し、適切な感情応答を決定する
+ 4. 感情に名詞句が含まれている場合、その名詞句を複数形にしてそのトピックについてさらに入力を求める
+ 5. 応答を出力する
+3. ステップ2に戻る
+
+TextBlobを使用して感情を判定するコードスニペットはこちらです。感情応答の*勾配*は4つしかありません(もっと増やしても構いません):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
+以下は参考となるサンプル出力です(ユーザー入力は>で始まる行です):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+タスクの一つの解決策は[こちら](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)です。
+
+✅ 知識チェック
+
+1. 共感的な応答が、ボットが実際に理解していると人を「騙す」ことができると思いますか?
+2. 名詞句を特定することで、ボットがより「信じられる」ものになりますか?
+3. 文から名詞句を抽出することが有用な理由は何ですか?
+
+---
+
+## 🚀チャレンジ
+
+前の知識チェックでのタスクを実装してみてください。ボットを友人にテストしてみてください。それが彼らを騙すことができますか?ボットをもっと「信じられる」ものにすることができますか?
+
+## [事後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## レビュー&自習
+
+次のいくつかのレッスンでは、感情分析についてさらに学びます。[KDNuggets](https://www.kdnuggets.com/tag/nlp) にあるような記事を読んで、この興味深い技術を調べてみてください。
+
+## 課題
+
+[Make a bot talk back](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご理解ください。権威ある情報源としては、元の言語で書かれた原文を参照してください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認については、当方は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/2-Tasks/assignment.md b/translations/ja/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..be8edba53
--- /dev/null
+++ b/translations/ja/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# ボットに返答させる
+
+## 手順
+
+過去のレッスンで、チャットするための基本的なボットをプログラムしました。このボットは「bye」と言うまでランダムな返答をします。返答を少しだけランダムでなくして、「why」や「how」のような特定の言葉に対して返答をトリガーするようにできますか?ボットを拡張しながら、機械学習によってこの種の作業の手作業がどのように減らせるかを少し考えてみてください。NLTKやTextBlobライブラリを使うと、タスクが簡単になるかもしれません。
+
+## 評価基準
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| -------- | --------------------------------------------- | ---------------------------------------------- | ----------------------- |
+| | 新しいbot.pyファイルが提示され、文書化されている | 新しいボットファイルが提示されているがバグが含まれている | ファイルが提示されていない |
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。原文の言語で書かれた文書が正式な情報源とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/3-Translation-Sentiment/README.md b/translations/ja/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..0157dba6a
--- /dev/null
+++ b/translations/ja/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,188 @@
+# 機械学習を使った翻訳と感情分析
+
+前回のレッスンでは、`TextBlob` を使って基本的なボットを構築する方法を学びました。このライブラリは、名詞句の抽出などの基本的な自然言語処理タスクを実行するために、裏で機械学習を利用しています。計算言語学におけるもう一つの重要な課題は、ある言語から別の言語への正確な _翻訳_ です。
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+翻訳は非常に難しい問題であり、これは数千もの言語が存在し、それぞれが非常に異なる文法規則を持つためです。一つのアプローチは、英語のような言語の正式な文法規則を非言語依存の構造に変換し、次に別の言語に変換することです。このアプローチでは、次のステップを踏むことになります:
+
+1. **識別**。入力言語の単語を名詞、動詞などにタグ付けする。
+2. **翻訳の作成**。ターゲット言語の形式で各単語の直接的な翻訳を生成する。
+
+### 英語からアイルランド語への例文
+
+英語では、_I feel happy_ という文は次の3つの単語で構成されています:
+
+- **主語** (I)
+- **動詞** (feel)
+- **形容詞** (happy)
+
+しかし、アイルランド語では、同じ文は非常に異なる文法構造を持っています。感情は「*あなたの上に*」あると表現されます。
+
+英語のフレーズ `I feel happy` をアイルランド語にすると `Tá athas orm` となります。*直訳* すると `Happy is upon me` です。
+
+アイルランド語の話者が英語に翻訳する場合、`I feel happy` と言うでしょう。`Happy is upon me` とは言いません。なぜなら、彼らは文の意味を理解しているからです。単語や文の構造が異なっていてもです。
+
+アイルランド語の文の正式な順序は:
+
+- **動詞** (Tá または is)
+- **形容詞** (athas または happy)
+- **主語** (orm または upon me)
+
+## 翻訳
+
+単純な翻訳プログラムは、文の構造を無視して単語だけを翻訳するかもしれません。
+
+✅ 第二言語(または第三言語以上)を大人になってから学んだことがある場合、最初は母国語で考え、概念を頭の中で一語ずつ第二言語に翻訳し、それを話すことから始めたかもしれません。これは単純な翻訳コンピュータープログラムが行っていることと似ています。この段階を超えて流暢さを達成することが重要です!
+
+単純な翻訳は、悪い(時には面白い)誤訳を引き起こします。`I feel happy` はアイルランド語では `Mise bhraitheann athas` と直訳されます。これは(直訳すると)`me feel happy` という意味で、有効なアイルランド語の文ではありません。英語とアイルランド語は隣接する島で話される言語ですが、非常に異なる文法構造を持っています。
+
+> アイルランドの言語伝統についてのビデオをいくつか見ることができます。例えば [こちら](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### 機械学習のアプローチ
+
+これまでのところ、自然言語処理の形式的な規則によるアプローチについて学びました。もう一つのアプローチは、単語の意味を無視し、_代わりに機械学習を使ってパターンを検出する_ ことです。これには、元の言語とターゲット言語の両方における大量のテキスト(単数は *corpus*、複数は *corpora* と呼ばれるコーパス)が必要です。
+
+例えば、1813年にジェーン・オースティンによって書かれた有名な英語の小説『高慢と偏見』を考えてみましょう。英語の原書とその*フランス語*の人間による翻訳を参照すると、一方のフレーズが他方へ*イディオム的に*翻訳されている箇所を特定できます。これはすぐ後で実際に試してみます。
+
+例えば、英語のフレーズ `I have no money` をフランス語に直訳すると、`Je n'ai pas de monnaie` になるかもしれません。"Monnaie" はフランス語の 'false cognate' で、'money' と 'monnaie' は同義ではありません。人間が行うより良い翻訳は `Je n'ai pas d'argent` で、これはお金がないという意味をよりよく伝えます('monnaie' の意味は '小銭' です)。
+
+
+
+> 画像提供 [Jen Looper](https://twitter.com/jenlooper)
+
+十分な人間による翻訳がある場合、MLモデルは以前に専門家の人間が翻訳したテキストの一般的なパターンを特定することにより、翻訳の精度を向上させることができます。
+
+### 演習 - 翻訳
+
+`TextBlob` を使用して文章を翻訳できます。**高慢と偏見**の有名な最初の一文を試してみてください:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` は翻訳でかなり良い仕事をします:"C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!"。
+
+`TextBlob` の翻訳は、実際には1932年にV. LeconteとCh. Pressoirによって行われた本のフランス語翻訳よりもはるかに正確であると言えます:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+この場合、機械学習による翻訳は、原作者の言葉を不必要に追加している人間の翻訳者よりも良い仕事をしています。
+
+> ここで何が起こっているのでしょうか?そしてなぜ`TextBlob`は翻訳がこんなに上手いのでしょうか?実は、背後ではGoogle翻訳を使用しており、何百万ものフレーズを解析して最適な文字列を予測する高度なAIが動作しています。ここでは手動の操作は一切行われておらず、`blob.translate` を使用するにはインターネット接続が必要です。
+
+## 感情分析
+
+次に、機械学習を使用してテキストの感情を分析する方法を見てみましょう。
+
+> **例**: "Great, that was a wonderful waste of time, I'm glad we are lost on this dark road" は皮肉で否定的な感情の文ですが、単純なアルゴリズムは 'great', 'wonderful', 'glad' を肯定的として検出し、'waste', 'lost' および 'dark' を否定的として検出します。全体の感情はこれらの相反する単語によって揺れ動きます。
+
+✅ 人間の話者として皮肉をどのように伝えるかについて少し考えてみてください。声のイントネーションが大きな役割を果たします。"Well, that film was awesome" というフレーズを異なる方法で言ってみて、声がどのように意味を伝えるかを発見してみてください。
+
+### 機械学習のアプローチ
+
+機械学習のアプローチは、否定的および肯定的なテキストのコーパスを手動で収集することです。ツイート、映画のレビュー、または人間がスコアと意見を書いたものなら何でも構いません。その後、意見とスコアにNLP技術を適用し、パターンを見つけます(例えば、肯定的な映画レビューには 'Oscar worthy' というフレーズが否定的な映画レビューよりも多く含まれる傾向がある、または肯定的なレストランレビューには 'gourmet' という言葉が 'disgusting' よりも多く含まれる)。
+
+> ⚖️ **例**: 政治家のオフィスで働いていて、新しい法律が議論されている場合、有権者はその特定の新しい法律を支持するメールや反対するメールをオフィスに送るかもしれません。あなたがそのメールを読んで、*賛成* と *反対* に分けるように任命されたとしましょう。メールがたくさんあれば、すべてを読むのは大変です。ボットがすべてのメールを読んで、理解し、どの山に属するかを教えてくれたら素晴らしいと思いませんか?
+>
+> これを実現する一つの方法は、機械学習を使用することです。モデルを*反対*のメールの一部と*賛成*のメールの一部で訓練します。モデルは、反対側または賛成側のメールに特定のフレーズや単語が現れる可能性が高いことを関連付ける傾向がありますが、*内容を理解することはありません*。モデルを訓練に使用していないメールでテストし、同じ結論に達するかどうかを確認できます。モデルの精度に満足したら、今後のメールを読むことなく処理できます。
+
+✅ 以前のレッスンで使用したプロセスと似ていると思いますか?
+
+## 演習 - 感情的な文章
+
+感情は -1 から 1 の*極性*で測定されます。-1 は最も否定的な感情を示し、1 は最も肯定的な感情を示します。また、感情は客観性 (0) と主観性 (1) のスコアで測定されます。
+
+ジェーン・オースティンの『高慢と偏見』をもう一度見てみましょう。テキストは [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) で利用可能です。以下のサンプルは、本の最初と最後の文章の感情を分析し、その感情の極性と主観性/客観性のスコアを表示する短いプログラムを示しています。
+
+以下のタスクでは、`sentiment` を決定するために `TextBlob` ライブラリ(上記で説明)を使用する必要があります(独自の感情計算機を書く必要はありません)。
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+次のような出力が表示されます:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## チャレンジ - 感情の極性をチェック
+
+あなたのタスクは、感情の極性を使用して、『高慢と偏見』が絶対的に肯定的な文章が絶対的に否定的な文章より多いかどうかを判断することです。このタスクでは、極性スコアが 1 または -1 である場合、それぞれ絶対的に肯定的または否定的であると仮定できます。
+
+**ステップ:**
+
+1. Project Gutenberg から [高慢と偏見のコピー](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) を .txt ファイルとしてダウンロードします。ファイルの最初と最後のメタデータを削除し、元のテキストのみを残します。
+2. Pythonでファイルを開き、内容を文字列として抽出します。
+3. 本の文字列を使用して TextBlob を作成します。
+4. 本の各文章をループで分析します。
+ 1. 極性が 1 または -1 の場合、文章を肯定的または否定的なメッセージの配列またはリストに保存します。
+5. 最後に、肯定的な文章と否定的な文章(別々に)およびそれぞれの数を出力します。
+
+サンプルの[解決策](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb)はこちらです。
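+
+参考までに、上の手順をほぼそのままコードにした最小のスケッチを示します(ファイル名 `pride_and_prejudice.txt` はこの例での仮定で、前後のメタデータは削除済みとします):
+
+```python
+from textblob import TextBlob
+
+# 手順1〜2: メタデータを削除したテキストを文字列として読み込む
+with open("pride_and_prejudice.txt", encoding="utf-8") as f:
+    book_text = f.read()
+
+# 手順3: 本全体のTextBlobを作成する
+book = TextBlob(book_text)
+
+# 手順4: 各文の極性を調べ、1または-1のものだけをリストに保存する
+positives = [s for s in book.sentences if s.sentiment.polarity == 1]
+negatives = [s for s in book.sentences if s.sentiment.polarity == -1]
+
+# 手順5: それぞれの文の数を出力する(文そのものも positives / negatives にある)
+print("絶対的に肯定的な文: " + str(len(positives)))
+print("絶対的に否定的な文: " + str(len(negatives)))
+```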
+
+✅ 知識チェック
+
+1. 感情は文中で使用される単語に基づいていますが、コードは単語を*理解*していますか?
+2. 感情の極性が正確だと思いますか、つまり、スコアに*同意*しますか?
+ 1. 特に、次の文章の絶対的な**肯定的**極性に同意しますか、それとも反対しますか?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. 次の3つの文章は絶対的に肯定的な感情でスコアリングされましたが、よく読むと肯定的な文章ではありません。なぜ感情分析はそれらを肯定的な文章だと思ったのでしょうか?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. 次の文章の絶対的な**否定的**極性に同意しますか、それとも反対しますか?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ ジェーン・オースティンのファンなら、彼女がしばしば自分の本を使ってイギリスのリージェンシー社会のより滑稽な側面を批判していることを理解しているでしょう。『高慢と偏見』の主人公であるエリザベス・ベネットは、鋭い社会観察者であり(著者と同様)、彼女の言葉はしばしば非常に微妙です。物語のラブインタレストであるダルシー氏でさえ、エリザベスの遊び心とからかいの言葉の使い方に気づいています。「あなたが時折、自分の意見ではないことを表明することを楽しんでいることを知っています。」
+
+---
+
+## 🚀チャレンジ
+
+ユーザー入力から他の特徴を抽出して、Marvinをさらに改善できますか?
+
+## [講義後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## 復習 & 自習
+
+テキストから感情を抽出する方法はたくさんあります。この技術を利用するビジネスアプリケーションについて考えてみてください。また、どのように誤って使用される可能性があるかについても考えてみてください。感情を分析する洗練されたエンタープライズ対応のシステムについてさらに詳しく読みましょう。例えば、[Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott) などです。上記の『高慢と偏見』の文章のいくつかをテストして、ニュアンスを検出できるかどうかを確認してみてください。
+
+## 課題
+
+[Poetic license](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語で書かれた原文を信頼できる情報源としてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じた誤解や誤認について、当方は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/3-Translation-Sentiment/assignment.md b/translations/ja/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..44e16ad90
--- /dev/null
+++ b/translations/ja/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# 詩的ライセンス
+
+## 指示
+
+[このノートブック](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency)では、Azureテキスト分析で感情分析された500以上のエミリー・ディキンソンの詩を見つけることができます。このデータセットを、レッスンで説明した技術を使って分析してください。詩ごとに提案されている感情は、より高度なAzureサービスの判断と一致しますか?あなたの意見では、なぜ一致する、あるいは一致しないのでしょうか?何か驚いたことはありますか?
+
+## ルーブリック
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | -------------------------------------------------------------------------- | --------------------------------------------------- | ------------------------ |
+| | 著者のサンプル出力のしっかりとした分析が含まれるノートブックが提示されている | ノートブックが不完全または分析を行っていない | ノートブックが提示されていない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語で書かれた文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/ja/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..f3f6013af
--- /dev/null
+++ b/translations/ja/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。原文の言語で作成された元の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/ja/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..424aaab1c
--- /dev/null
+++ b/translations/ja/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語の文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用により生じた誤解や誤認については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/4-Hotel-Reviews-1/README.md b/translations/ja/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..77df71ac7
--- /dev/null
+++ b/translations/ja/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,268 @@
+# ホテルレビューによる感情分析 - データの処理
+
+このセクションでは、前のレッスンで学んだ技術を使って、大規模なデータセットの探索的データ分析を行います。各列の有用性を十分に理解した後、次のことを学びます:
+
+- 不要な列の削除方法
+- 既存の列を基に新しいデータを計算する方法
+- 最終チャレンジで使用するために結果のデータセットを保存する方法
+
+## [事前レクチャークイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### はじめに
+
+これまで、テキストデータが数値データとは全く異なるものであることを学びました。人間が書いたり話したりしたテキストは、パターンや頻度、感情や意味を見つけるために分析することができます。このレッスンでは、実際のデータセットと実際のチャレンジに取り組みます: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**([CC0: Public Domainライセンス](https://creativecommons.org/publicdomain/zero/1.0/))。このデータは公開ソースであるBooking.comからスクレイピングされたもので、データセットの作成者はJiashen Liuです。
+
+### 準備
+
+必要なもの:
+
+* Python 3を使用して.ipynbノートブックを実行できる能力
+* pandas
+* NLTK、[ローカルにインストールする必要があります](https://www.nltk.org/install.html)
+* Kaggleで入手可能なデータセット [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)。解凍すると約230MBです。これをこれらのNLPレッスンに関連するルート `/data` フォルダーにダウンロードしてください。
+
+## 探索的データ分析
+
+このチャレンジでは、感情分析とゲストレビューのスコアを使用してホテル推薦ボットを構築することを前提としています。使用するデータセットには、6つの都市にある1493の異なるホテルのレビューが含まれています。
+
+Python、ホテルレビューのデータセット、およびNLTKの感情分析を使用して次のことがわかります:
+
+* レビューで最も頻繁に使用される単語やフレーズは何か?
+* ホテルを説明する公式の *タグ* はレビューのスコアと相関しているか?(例えば、*Family with young children* のレビューが *Solo traveller* よりもネガティブなレビューが多い場合、そのホテルは *Solo travellers* に向いているかもしれません)
+* NLTKの感情スコアはホテルレビューの数値スコアと一致するか?
+
+#### データセット
+
+ダウンロードしてローカルに保存したデータセットを探索してみましょう。VS CodeやExcelのようなエディタでファイルを開いてみてください。
+
+データセットのヘッダーは次の通りです:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+ここでは、検査しやすいようにグループ化しています:
+
+##### ホテル列
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (緯度), `lng` (経度)
+ * *lat* と *lng* を使用して、Pythonでホテルの場所を示す地図をプロットできます(ネガティブレビューとポジティブレビューの色分けをすることも可能です)
+ * Hotel_Address は私たちにとって明らかに有用とは言えないので、国に置き換えてソートや検索を容易にする予定です
+
+**ホテルメタレビュー列**
+
+* `Average_Score`
+ * データセット作成者によると、この列は「過去1年の最新コメントに基づいて計算されたホテルの平均スコア」です。この方法でスコアを計算するのは珍しいですが、今のところそのまま受け入れることにします。
+
+ ✅ 他の列に基づいて、平均スコアを計算する別の方法を考えられますか?
+
+* `Total_Number_of_Reviews`
+ * このホテルが受け取ったレビューの総数です。このデータセットのレビューに関するものかどうかは(コードを書かずに)明確ではありません。
+* `Additional_Number_of_Scoring`
+ * これはレビューのスコアが与えられたが、レビュアーによってポジティブまたはネガティブなレビューが書かれなかったことを意味します
+
+**レビュー列**
+
+- `Reviewer_Score`
+ - 最小値と最大値の間で小数点以下1桁までの数値です
+ - なぜ2.5が最低スコアなのかは説明されていません
+- `Negative_Review`
+ - レビュアーが何も書かなかった場合、このフィールドには「**No Negative**」と表示されます
+ - レビュアーがネガティブレビュー欄にポジティブレビューを書くこともあります(例:「このホテルには悪いところが何もありません」)
+- `Review_Total_Negative_Word_Counts`
+ - ネガティブな単語数が多いほど、スコアは低くなります(感情をチェックしない場合)
+- `Positive_Review`
+ - レビュアーが何も書かなかった場合、このフィールドには「**No Positive**」と表示されます
+ - レビュアーがポジティブレビュー欄にネガティブレビューを書くこともあります(例:「このホテルには全く良いところがありません」)
+- `Review_Total_Positive_Word_Counts`
+ - ポジティブな単語数が多いほど、スコアは高くなります(感情をチェックしない場合)
+- `Review_Date` と `days_since_review`
+ - レビューに新鮮さや古さの指標を適用することができます(古いレビューは、新しいレビューほど正確でないかもしれません。ホテルの管理が変わったり、改装が行われたり、プールが追加されたりするため)
+- `Tags`
+ - これはレビュアーが選択できる短い記述子で、ゲストの種類(例:一人旅や家族)、部屋の種類、滞在期間、レビューが提出されたデバイスの種類を示します。
+ - ただし、これらのタグを使用するのは問題がある場合があります。以下のセクションでその有用性について説明します
+
+**レビュアー列**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - 推奨モデルの要素になるかもしれません。例えば、数百のレビューを持つレビューアーがポジティブよりもネガティブなレビューを残す可能性が高いと判断できる場合。しかし、特定のレビューのレビュアーは一意のコードで識別されないため、一連のレビューにリンクすることはできません。100以上のレビューを持つレビュアーが30人いますが、これが推奨モデルにどのように役立つかは明確ではありません。
+- `Reviewer_Nationality`
+ - 一部の人々は、特定の国籍がポジティブまたはネガティブなレビューを残す傾向があると考えるかもしれませんが、これはモデルにそのような逸話的な見解を組み込む際には注意が必要です。これらは国や時には人種のステレオタイプであり、各レビュアーは彼らの経験に基づいてレビューを書いた個人です。彼らの国籍がレビューのスコアの理由であると考えるのは正当化するのが難しいです。
+
+##### 例
+
+| Average Score | Total Number Reviews | Reviewer Score | Negative Review | Positive Review | Tags |
+| -------------- | ---------------------- | ---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day i e with jackhammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave very early flight and received an appropriate bill A day later the hotel made another charge without my consent in excess of booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place Stay away | Business trip Couple Standard Double Room Stayed 2 nights |
+
+このゲストは、このホテルでの滞在に満足していなかったことがわかります。ホテルは7.8の良い平均スコアと1945のレビューを持っていますが、このレビュアーは2.5を与え、滞在がいかにネガティブだったかを115語で書いています。ポジティブレビュー欄に何も書いていなければ、ポジティブな点がなかったと推測できますが、7語の警告を書いています。単語の数だけを数える代わりに、単語の意味や感情を分析しないと、レビュアーの意図が歪んでしまうかもしれません。奇妙なことに、2.5のスコアは混乱を招きます。なぜなら、そのホテル滞在がそれほど悪かったなら、なぜポイントを与えるのでしょうか?データセットを詳しく調査すると、最低スコアが2.5であり、0ではないことがわかります。最高スコアは10です。
+
+##### タグ
+
+前述のように、`Tags` を使用してデータを分類するアイデアは一見理にかなっているように見えます。しかし、これらのタグは標準化されていないため、あるホテルでは *Single room*、*Twin room*、*Double room* というオプションがあり、次のホテルでは *Deluxe Single Room*、*Classic Queen Room*、*Executive King Room* というオプションがあります。これらは同じものかもしれませんが、バリエーションが非常に多いため、選択肢は次のようになります:
+
+1. すべての用語を単一の標準に変更する試み、これは非常に困難です。なぜなら、各ケースでの変換パスが明確ではないからです(例:*Classic single room* を *Single room* にマッピングするが、*Superior Queen Room with Courtyard Garden or City View* はマッピングが難しい)
+1. NLPアプローチを取り、各ホテルに適用される *Solo*、*Business Traveller*、*Family with young kids* などの用語の頻度を測定し、それを推奨に組み込む
+
+タグは通常(必ずしもそうではありませんが)、*Type of trip*、*Type of guests*、*Type of room*、*Number of nights*、*Type of device review was submitted on* に一致する5〜6のコンマ区切りの値のリストを含む単一のフィールドです。しかし、一部のレビュアーは各フィールドを埋めないことがあるため(空欄のままにすることがある)、値は常に同じ順序にはありません。
+
+例えば、*Type of group* を取り上げます。このフィールドには `Tags` 列に1025の一意の可能性がありますが、そのうちの一部だけがグループに関連しています(いくつかは部屋の種類など)。*Family* に関連するものだけをフィルタリングすると、多くの *Family room* タイプの結果が含まれます。*with* という用語を含めると、*Family with* の値を数えると、結果は良くなり、515,000の結果のうち80,000以上が「Family with young children」または「Family with older children」を含んでいます。
+
+これは、タグ列が完全に無用ではないが、役立つようにするには作業が必要であることを意味します。
+
+##### ホテルの平均スコア
+
+データセットにはいくつかの奇妙な点や不一致がありますが、モデルを構築する際にそれらに気付いているように、ここに示しています。もし解決方法がわかったら、ディスカッションセクションで教えてください!
+
+データセットには、平均スコアとレビュー数に関連する次の列があります:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+このデータセットで最もレビューが多いホテルは *Britannia International Hotel Canary Wharf* で、515,000のレビューのうち4789件のレビューがあります。しかし、このホテルの `Total_Number_of_Reviews` の値を見ると、9086です。多くのスコアがレビューなしであると推測できますので、`Additional_Number_of_Scoring` 列の値を追加する必要があります。その値は2682で、4789に追加すると7471になりますが、`Total_Number_of_Reviews` の値よりも1615少ないです。
+
+`Average_Score` 列を取ると、データセット内のレビューの平均であると推測できますが、Kaggleの説明には「*過去1年の最新コメントに基づいて計算されたホテルの平均スコア*」とあります。それはあまり役立たないように見えますが、データセット内のレビューのスコアに基づいて独自の平均を計算できます。同じホテルを例にとると、平均ホテルスコアは7.1とされていますが、データセット内のレビュアースコアの平均は6.8です。これは近いですが同じ値ではなく、`Additional_Number_of_Scoring` のレビューで与えられたスコアが平均を7.1に引き上げたと推測できますが、その主張をテストまたは証明する方法がないため、`Average_Score`、`Additional_Number_of_Scoring`、および `Total_Number_of_Reviews` の値を使用または信頼するのは難しいです。
+
+さらに複雑なのは、レビュー数が2番目に多いホテルの計算された平均スコアが8.12で、データセットの `Average_Score` は8.1です。この正しいスコアは偶然の一致でしょうか、それとも最初のホテルの不一致でしょうか?
+
+これらのホテルが外れ値である可能性があり、ほとんどの値が一致する(ただし、いくつかは何らかの理由で一致しない)ことを前提に、次にデータセットの値を探索し、値の正しい使用(または非使用)を決定するための短いプログラムを書きます。
+
+> 🚨 注意点
+>
+> このデータセットを使用する際には、テキストを自分で読んだり分析したりすることなく、テキストから何かを計算するコードを書きます。これがNLPの本質であり、人間がそれを行わずに意味や感情を解釈することです。しかし、ネガティブなレビューを読む可能性があります。必要ないのであれば、読まないようにしましょう。中には「天気が良くなかった」など、ホテルや誰にもコントロールできないことを理由にした馬鹿げたネガティブなレビューもありますが、一部のレビューには人種差別、性差別、年齢差別が含まれていることもあります。これは残念ですが、公開されたウェブサイトからスクレイピングされたデータセットでは予想されることです。一部のレビュアーは、あなたが不快に感じたり、気分を害したりするようなレビューを残します。コードで感情を測定する方が、実際に読んで気分を害するよりも良いでしょう。とはいえ、そのようなことを書く人は少数ですが、それでも存在します。
+
+## 演習 - データ探索
+
+### データの読み込み
+
+視覚的にデータを調べるのはここまでにして、コードを書いていくつかの答えを見つけましょう!このセクションでは、pandasライブラリを使用します。最初のタスクは、CSVデータを読み込んで表示できることを確認することです。pandasライブラリには高速なCSVローダーがあり、結果は前のレッスンのようにデータフレームに配置されます。読み込むCSVには50万行以上ありますが、列は17列だけです。pandasには、データフレームを操作するための強力な方法が数多くあり、すべての行に対して操作を実行することもできます。
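+
+以降のコードスニペットは、データが `df` というデータフレームに読み込み済みであることを前提としています。参考までに、最小の読み込み例を示します(CSVのパスはダウンロード先に合わせた一例で、`time` は後の計測でも使います):
+
+```python
+# CSVからホテルレビューを読み込む
+import pandas as pd
+import time
+
+start = time.time()
+# dfは'DataFrame'。ファイルをdataフォルダーにダウンロード済みであることが前提
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```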
+
+データが読み込めたら、データフレームを(変更せずに)調べるコードを書いて、次の質問に答えていきましょう。
+
+### コードによる解答
+
+1. 読み込んだデータフレームの *shape*(行数と列数)を表示する
+
+   ```python
+   print("The shape of the data (rows, cols) is " + str(df.shape))
+   > The shape of the data (rows, cols) is (515738, 17)
+   ```
+
+2. レビュアーの国籍の頻度を計算する:
+
+   1. `Reviewer_Nationality` 列には何種類の異なる値があり、それらは何か?
+   2. データセットで最も多いレビュアーの国籍はどれか(国名とレビュー数を表示)?
+
+   ```python
+   # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+   nationality_freq = df["Reviewer_Nationality"].value_counts()
+   print("There are " + str(nationality_freq.size) + " different nationalities")
+   # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+   print(nationality_freq)
+
+   There are 227 different nationalities
+   United Kingdom            245246
+   United States of America   35437
+   Australia                  21686
+   Ireland                    14827
+   United Arab Emirates       10235
+   ...
+   Comoros                        1
+   Palau                          1
+   Northern Mariana Islands       1
+   Cape Verde                     1
+   Guinea                         1
+   Name: Reviewer_Nationality, Length: 227, dtype: int64
+   ```
+
+   3. 次に多い上位10の国籍とその頻度は?
+
+   ```python
+   print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+   # Notice there is a leading space on the values, strip() removes that for printing
+   # What is the top 10 most common nationalities and their frequencies?
+   print("The next 10 highest frequency reviewer nationalities are:")
+   print(nationality_freq[1:11].to_string())
+
+   The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+   The next 10 highest frequency reviewer nationalities are:
+   United States of America   35437
+   Australia                  21686
+   Ireland                    14827
+   United Arab Emirates       10235
+   Saudi Arabia                8951
+   Netherlands                 8772
+   Switzerland                 8678
+   Germany                     7941
+   Canada                      7894
+   France                      7296
+   ```
+
+3. 上位10の国籍それぞれについて、最も多くレビューされたホテルはどこか?
+
+   ```python
+   # What was the most frequently reviewed hotel for the top 10 nationalities
+   # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+   for nat in nationality_freq[:10].index:
+       # First, extract all the rows that match the criteria into a new dataframe
+       nat_df = df[df["Reviewer_Nationality"] == nat]
+       # Now get the hotel freq
+       freq = nat_df["Hotel_Name"].value_counts()
+       print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+   The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+   The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+   The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+   The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+   The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+   The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+   The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+   The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+   The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+   The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+   ```
+
+4. データセット内のホテルごとのレビュー数(ホテルの頻度カウント)はいくつか?
+
+   ```python
+   # First create a new dataframe based on the old one, removing the uneeded columns
+   hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+   # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+   hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+   # Get rid of all the duplicated rows
+   hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+   display(hotel_freq_df)
+   ```
+
+   | Hotel_Name | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086 | 4789 |
+   | Park Plaza Westminster Bridge London | 12158 | 4169 |
+   | Copthorne Tara Hotel London Kensington | 7105 | 3578 |
+   | ... | ... | ... |
+   | Mercure Paris Porte d Orleans | 110 | 10 |
+   | Hotel Wagner | 135 | 10 |
+   | Hotel Gallitzinberg | 173 | 8 |
+
+   *データセット内でカウントした*結果が `Total_Number_of_Reviews` の値と一致しないことに気づくかもしれません。データセットのこの値がホテルの総レビュー数を表していて、その一部しかスクレイピングされなかったのか、それとも別の計算によるものなのかは不明です。このあいまいさのため、`Total_Number_of_Reviews` はモデルでは使用しません。
+
+5. データセットには各ホテルの `Average_Score` 列がありますが、各ホテルについてデータセット内のすべてのレビュアースコアの平均を取ることで、平均スコアを自分で計算することもできます。計算した平均を含む、列ヘッダー `Calc_Average_Score` の新しい列をデータフレームに追加し、`Hotel_Name`、`Average_Score`、`Calc_Average_Score` の各列を表示してください。
+
+   ```python
+   # define a function that takes a row and performs some calculation with it
+   def get_difference_review_avg(row):
+       return row["Average_Score"] - row["Calc_Average_Score"]
+
+   # 'mean' is mathematical word for 'average'
+   df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+   # Add a new column with the difference between the two average scores
+   df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+   # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+   review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+   # Sort the dataframe to find the lowest and highest average score difference
+   review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+   display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+   ```
+
+   `Average_Score` の値が、計算した平均スコアと異なることがあるのはなぜか、とも思うかもしれません。一部の値が一致し、他の値に差がある理由はわからないため、この場合は、手元にあるレビュースコアから自分で平均を計算するのが最も安全です。とはいえ、その差は通常ごく小さいものです。データセットの平均と計算した平均の乖離が最も大きいホテルは次の通りです:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8 | 7.7 | 8.5 | Best Western Hotel Astoria |
+   | -0.7 | 8.8 | 9.5 | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7 | 7.5 | 8.2 | Mercure Paris Porte d Orleans |
+   | -0.7 | 7.9 | 8.6 | Renaissance Paris Vendome Hotel |
+   | -0.5 | 7.0 | 7.5 | Hotel Royal Elys es |
+   | ... | ... | ... | ... |
+   | 0.7 | 7.5 | 6.8 | Mercure Paris Op ra Faubourg Montmartre |
+   | 0.8 | 7.1 | 6.3 | Holiday Inn Paris Montparnasse Pasteur |
+   | 0.9 | 6.8 | 5.9 | Villa Eugenie |
+   | 0.9 | 8.6 | 7.7 | MARQUIS Faubourg St Honor Relais Ch teaux |
+   | 1.3 | 7.2 | 5.9 | Kube Hotel Ice Bar |
+
+   スコアの差が1を超えるホテルは1つだけなので、おそらくこの差は無視して、計算した平均スコアを使用できます。
+
+6. `Negative_Review` 列の値が "No Negative" である行数を計算して表示する
+
+7. `Positive_Review` 列の値が "No Positive" である行数を計算して表示する
+
+8. `Positive_Review` 列の値が "No Positive" **かつ** `Negative_Review` 列の値が "No Negative" である行数を計算して表示する
+
+   ```python
+   # with lambdas:
+   start = time.time()
+   no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+   print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+   no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+   print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+   both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+   print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+   end = time.time()
+   print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+   Number of No Negative reviews: 127890
+   Number of No Positive reviews: 35946
+   Number of both No Negative and No Positive reviews: 127
+   Lambdas took 9.64 seconds
+   ```
+
+## 別の方法
+
+ラムダを使わずに項目を数えるもう一つの方法として、sum を使って行数をカウントできます:
+
+```python
+# without lambdas (using a mixture of notations to show you can use both)
+start = time.time()
+no_negative_reviews = sum(df.Negative_Review == "No Negative")
+print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+end = time.time()
+print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+Number of No Negative reviews: 127890
+Number of No Positive reviews: 35946
+Number of both No Negative and No Positive reviews: 127
+Sum took 0.19 seconds
+```
+
+`Negative_Review` 列と `Positive_Review` 列の値がそれぞれ "No Negative" と "No Positive" である行が127行あることに気づいたかもしれません。これは、レビュアーがホテルに数値スコアを付けたものの、ポジティブ・ネガティブいずれのレビューも書かなかったことを意味します。幸いこれはごく少数の行(515738行中127行、0.02%)なので、モデルや結果を特定の方向に歪めることはおそらくありません。ただ、レビューのデータセットにレビューのない行があるとは予想していなかったかもしれないので、こうした行を発見するためにもデータを探索する価値があります。
+
+データセットの探索が終わったので、次のレッスンではデータをフィルタリングし、感情分析を追加します。
+
+---
+
+## 🚀チャレンジ
+
+このレッスンは、これまでのレッスンでも見てきたように、データに対して操作を行う前に、データとその癖を理解することがいかに重要であるかを示しています。特にテキストベースのデータは、注意深く精査する必要があります。テキスト中心のさまざまなデータセットを掘り下げて、モデルにバイアスや歪んだ感情を持ち込みかねない箇所を発見できるか試してみてください。
+
+## [事後レクチャークイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## 復習と自己学習
+
+[NLPに関するこのラーニングパス](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott)を受講して、音声やテキスト中心のモデルを構築するときに試せるツールを見つけてください。
+
+## 課題
+
+[NLTK](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご理解ください。権威ある情報源としては、元の言語で書かれた原文を考慮すべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/ja/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..02ab74904
--- /dev/null
+++ b/translations/ja/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## 説明
+
+NLTKは計算言語学とNLPで使用される有名なライブラリです。この機会に '[NLTK book](https://www.nltk.org/book/)' を読み、その演習を試してみてください。この採点対象外の課題を通して、このライブラリをより深く知ることができます。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。権威ある情報源としては、元の言語で書かれた原文を参照してください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..0ab9d6327
--- /dev/null
+++ b/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれた文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..78e3bca8a
--- /dev/null
+++ b/translations/ja/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。原文の母国語での文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/5-Hotel-Reviews-2/README.md b/translations/ja/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..7181fb4ab
--- /dev/null
+++ b/translations/ja/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# ホテルレビューによる感情分析
+
+データセットを詳細に調査したので、次は列をフィルタリングし、NLP技術を使ってホテルに関する新しい洞察を得ましょう。
+
+## [事前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### フィルタリングと感情分析の操作
+
+おそらく気づいたと思いますが、このデータセットにはいくつか問題があります。一部の列には無意味な情報が含まれており、他の列は正しくないように見えます。正しいとしても、どのように計算されたのか不明であり、自分の計算で独自に検証することはできません。
+
+## 演習: もう少しデータ処理
+
+データをもう少しきれいにしましょう。後で役立つ列を追加し、他の列の値を変更し、特定の列を完全に削除します。
+
+1. 初期の列処理
+
+ 1. `lat` と `lng` を削除
+
+ 2. `Hotel_Address` の値を以下の値に置き換える(住所に都市名と国名が含まれている場合は、都市名と国名のみに変更)。
+
+ データセットに含まれている都市と国は次の通りです:
+
+ アムステルダム、オランダ
+
+ バルセロナ、スペイン
+
+ ロンドン、イギリス
+
+ ミラノ、イタリア
+
+ パリ、フランス
+
+ ウィーン、オーストリア
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+ これで国レベルのデータをクエリできます:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | ホテル住所 | ホテル名 |
+ | :--------------------- | :--------: |
+ | アムステルダム、オランダ | 105 |
+ | バルセロナ、スペイン | 211 |
+ | ロンドン、イギリス | 400 |
+ | ミラノ、イタリア | 162 |
+ | パリ、フランス | 458 |
+ | ウィーン、オーストリア | 158 |
+
+2. ホテルメタレビュー列の処理
+
+ 1. `Additional_Number_of_Scoring` を削除
+
+ 2. `Total_Number_of_Reviews` を、そのホテルについて実際にデータセットに含まれているレビューの総数で置き換え
+
+ 3. `Average_Score` を自分で計算したスコアで置き換え
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+ df.Total_Number_of_Reviews = df.groupby('Hotel_Name').transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. レビュー列の処理
+
+ 1. `Review_Total_Negative_Word_Counts`、`Review_Total_Positive_Word_Counts`、`Review_Date`、`days_since_review` を削除
+
+ 2. `Reviewer_Score`、`Negative_Review`、`Positive_Review` はそのまま保持
+
+ 3. `Tags` は今のところ保持
+
+    - 次のセクションでタグに対して追加のフィルタリング操作を行い、その後タグは削除します
+
+4. レビュアー列の処理
+
+ 1. `Total_Number_of_Reviews_Reviewer_Has_Given` を削除
+
+ 2. `Reviewer_Nationality` を保持
+
+### タグ列
+
+`Tag` 列は、リスト(テキスト形式)として列に格納されているため問題があります。残念ながら、この列のサブセクションの順序と数は常に同じとは限りません。515,000行、1427のホテルがあり、レビュアーが選択できるオプションもそれぞれ少しずつ異なるため、人間が注目すべき正しいフレーズを特定するのは困難です。ここでNLPが真価を発揮します。テキストをスキャンして最も一般的なフレーズを見つけ、カウントできるのです。
+
+残念ながら、私たちが興味を持っているのは単一の単語ではなく、複数単語のフレーズ(例:*Business trip*)です。これほど大量のデータ(6762646語)に対して複数単語の頻度分布アルゴリズムを実行すると、非常に時間がかかる可能性があります。しかしデータを見ない限り、それは避けられないコストのように思えてしまいます。ここで探索的データ分析が役立ちます。`[' Business trip ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']` のようなタグのサンプルをすでに見ているので、注目すべきタグを確認するためのいくつかのステップに従えば、処理を大幅に削減できないか検討を始められます。
+
+### タグのフィルタリング
+
+このデータセットの最終的な目的は、感情などの列を追加して最適なホテルを選ぶのに役立てることです(自分自身のため、あるいはホテル推薦ボットの作成を依頼してきたクライアントのため)。最終データセットでタグが役立つかどうかを自問する必要があります。以下は一つの解釈です(他の目的でデータセットが必要な場合は、残すタグの選択が変わるかもしれません):
+
+1. 旅行の種類は関連しており、保持すべき
+2. ゲストグループの種類は重要で、保持すべき
+3. ゲストが滞在した部屋、スイート、スタジオの種類は無関係(すべてのホテルに基本的に同じ部屋がある)
+4. レビューが提出されたデバイスは無関係
+5. レビュアーが滞在した夜数は、ホテルを気に入っている可能性があるため関連するかもしれないが、おそらく無関係
+
+要約すると、**2種類のタグを保持し、他のタグを削除する**。
+
+まず、タグがより良い形式になるまでカウントしたくないので、角括弧と引用符を削除する必要があります。これにはいくつかの方法がありますが、最速の方法を選びたいです。幸い、pandasにはこれらのステップを簡単に行う方法があります。
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+各タグは次のようになります: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+次に問題が見つかります。レビュー(行)によってタグが5個のものもあれば、3個や6個のものもあります。これはデータセットの作られ方に起因するもので、修正は困難です。各フレーズの頻度を数えたいのですが、レビューごとに順序が異なるため、カウントがずれて、本来付与されるべきタグがホテルに割り当てられない可能性があります。
+
+そこで、この順序の違いを逆に利用します。各タグは複数語ですが、カンマで区切られているからです!最も簡単な方法は、6つの一時的な列を作成し、タグの順序に対応する列に各タグを挿入することです。その後、6つの列を1つの大きな列に統合し、結果の列に対して `value_counts()` メソッドを実行します。出力すると、ユニークなタグが2,428個あることがわかります。以下はその一部です(この集計を再現するコード例を表の後に示します):
+
+| タグ | 件数 |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
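+上で説明した「カンマで分割して最大6列に展開し、積み上げて数える」手順は、たとえば次のようなスケッチで再現できます(変数名 `tag_columns`、`tag_counts` は説明用の仮のものです):
+
+```python
+# Split the comma separated Tags into at most 6 temporary columns,
+# stack them into one long Series, then count each unique phrase
+tag_columns = df.Tags.str.split(",", n=5, expand=True)
+tag_counts = tag_columns.stack().str.strip().value_counts()
+print("unique tags:", len(tag_counts))
+print(tag_counts.head(20))
+```
+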
+`Submitted from a mobile device` のような頻出タグの一部は役に立たないので、フレーズの出現回数を数える前に削除するのが賢明かもしれません。ただし非常に高速な操作なので、残したまま無視しても構いません。
+
+### 滞在期間タグの除去
+
+これらのタグの除去がステップ1です。これにより、考慮すべきタグの総数がわずかに減ります。なお、データセットから削除するわけではなく、レビューデータセットでカウント/保持する値の候補から外すだけです。
+
+| 滞在期間 | 件数 |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+部屋、スイート、スタジオ、アパートメントなど、部屋の種類は非常に多岐にわたります。これらはすべてほぼ同じ意味で、今回の目的には関係がないため、考慮対象から外します(表の後に、このフィルタリングのスケッチを示します)。
+
+| 部屋の種類 | 件数 |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
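+「考慮対象から外す」処理は、たとえば次のように集計結果の側でフィルタリングできます(`tag_counts` は前のスケッチで作った集計を指す仮の名前で、`Tags` 列そのものには手を加えません):
+
+```python
+import re
+
+# Drop the "Stayed n nights" and room-type style tags from the counts only -
+# the reviews themselves keep their full Tags text
+stay = re.compile(r"^Stayed \d+ night")
+room_words = ("room", "suite", "studio", "apartment")
+keep = [not (stay.match(t) or any(w in t.lower() for w in room_words))
+        for t in tag_counts.index]
+useful_tags = tag_counts[keep]
+print(useful_tags.head(10))
+```
+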
+最後に、そして喜ばしいことに(ほとんど処理を必要としなかったため)、以下の*有用な*タグが残ります:
+
+| タグ | 件数 |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+`Travellers with friends` は `Group` とほぼ同じだと言えるため、上記のように2つを統合するのは妥当でしょう。正しいタグを特定するためのコードは[タグノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)にあります。
+
+最後のステップは、これらのタグごとに新しい列を作成することです。各レビュー行について、`Tags` 列がその新しい列に一致する場合は1を、一致しない場合は0を追加します。最終結果は、ビジネスかレジャーか、ペット同伴かといった目的で(集計として)どれだけのレビュアーがこのホテルを選んだかを数えたものであり、ホテルを推薦する際に有用な情報です。
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
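+
+新しい0/1列が意図どおりに作られたかどうかは、たとえばホテルごとに合計して簡単に確認できます(説明用のスケッチです):
+
+```python
+# Sanity check: the new 0/1 tag columns can be aggregated per hotel
+print(df.groupby("Hotel_Name")[["Leisure_trip", "Business_trip", "With_a_pet"]].sum().head())
+```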
+
+### ファイルの保存
+
+最後に、現在のデータセットを新しい名前で保存します。
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## 感情分析の操作
+
+この最後のセクションでは、レビュー列に感情分析を適用し、結果をデータセットに保存します。
+
+## 演習: フィルタリングされたデータの読み込みと保存
+
+注意すべき点は、今は前のセクションで保存されたフィルタリングされたデータセットを読み込んでいることです。**元のデータセットではありません**。
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### ストップワードの削除
+
+ネガティブおよびポジティブレビュー列で感情分析を実行すると、時間がかかることがあります。高速なCPUを持つ強力なテストラップトップでテストしたところ、使用する感情ライブラリによって12〜14分かかりました。それは(比較的)長い時間なので、スピードアップできるかどうかを調査する価値があります。
+
+ストップワード、つまり文の感情を変えない一般的な英語の単語を削除することが最初のステップです。これらを削除することで、感情分析はより速く実行されるはずですが、精度は低下しません(ストップワードは感情に影響を与えませんが、分析を遅くします)。
+
+最も長いネガティブレビューは395単語でしたが、ストップワードを削除した後は195単語になりました。
+
+ストップワードの削除も高速な操作であり、515,000行の2つのレビュー列からストップワードを削除するのに3.3秒かかりました。デバイスのCPU速度、RAM、SSDの有無、その他の要因により、若干の時間の違いがあるかもしれません。この操作が感情分析の時間を改善するならば、それは価値があります。
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+
+# `start` was recorded above; report the elapsed time
+end = time.time()
+print("Removing stop words took " + str(round(end - start, 2)) + " seconds")
+```
+
+### 感情分析の実行
+
+次に、ネガティブおよびポジティブレビュー列の感情分析を計算し、その結果を2つの新しい列に保存する必要があります。同じレビューに対するレビュアーのスコアと比較して感情をテストします。たとえば、感情分析がネガティブレビューの感情を1(非常にポジティブな感情)と判断し、ポジティブレビューの感情も1と判断したが、レビュアーがホテルに最低のスコアを与えた場合、レビューのテキストがスコアと一致していないか、感情分析が感情を正しく認識できなかった可能性があります。一部の感情スコアが完全に間違っていることを期待するべきであり、その理由は説明可能であることがよくあります。たとえば、レビューが非常に皮肉である場合、「もちろん、暖房のない部屋で寝るのが大好きでした」といった場合、感情分析はそれがポジティブな感情であると考えますが、人間が読むとそれが皮肉であることがわかります。
+
+NLTKは学習に使用できるさまざまな感情分析ツールを提供しており、それらを入れ替えて試し、感情分析がより正確になるかどうかを確認できます。ここではVADER感情分析を使用します。
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
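+
+定義した `calc_sentiment` は、本格的に適用する前に、いくつかの例文で動作確認しておくとよいでしょう(例文は説明用で、実際のスコアは実行して確認してください):
+
+```python
+# Quick smoke test with illustrative strings
+print(calc_sentiment("No Negative"))  # 0 by definition
+print(calc_sentiment("The staff were friendly and the room was spotless"))
+print(calc_sentiment("Of course, I loved sleeping in a room with no heating"))
+```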
+
+プログラムの後半で感情を計算する準備ができたら、各レビューに次のように適用できます:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+これは私のコンピュータで約120秒かかりますが、各コンピュータで異なります。結果を印刷して感情がレビューと一致するか確認したい場合:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+ファイルを使用する前に最後に行うことは、それを保存することです!また、新しい列をすべて再配置して、作業しやすくすることも検討してください(人間にとっては、これは見た目の変更です)。
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+[分析ノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb)の全コードを実行する必要があります([フィルタリングノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)を実行してHotel_Reviews_Filtered.csvファイルを生成した後)。
+
+手順を振り返ると:
+
+1. 元のデータセットファイル **Hotel_Reviews.csv** は、[エクスプローラーノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)で前のレッスンで調査されました
+2. Hotel_Reviews.csv は [フィルタリングノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) によってフィルタリングされ、**Hotel_Reviews_Filtered.csv** になります
+3. Hotel_Reviews_Filtered.csv は [感情分析ノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) によって処理され、**Hotel_Reviews_NLP.csv** になります
+4. 以下のNLPチャレンジで Hotel_Reviews_NLP.csv を使用します
+
+### 結論
+
+最初は、列とデータが含まれているデータセットがありましたが、そのすべてを検証したり使用したりすることはできませんでした。データを調査し、不要なものをフィルタリングし、タグを有用なものに変換し、独自の平均値を計算し、いくつかの感情列を追加し、自然言語処理について興味深いことを学びました。
+
+## [事後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## チャレンジ
+
+感情のためにデータセットを分析したので、このカリキュラムで学んだ戦略(クラスター分析など)を使用して、感情に関するパターンを特定できるか試してみてください。
+
+## 復習と自己学習
+
+[このLearnモジュール](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott)を取って、テキストの感情を探索するためのさまざまなツールを使用してみてください。
+## 課題
+
+[別のデータセットを試してみてください](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご理解ください。原文はその言語での公式な文書とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じた誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/ja/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..cf76d6596
--- /dev/null
+++ b/translations/ja/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# 別のデータセットを試してみよう
+
+## 指示
+
+NLTKを使ってテキストに感情を割り当てる方法を学んだので、別のデータセットを試してみましょう。データ処理が必要になる可能性が高いので、ノートブックを作成し、思考プロセスを記録してください。何を発見しましたか?
+
+## 評価基準
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | ----------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | ---------------------- |
+| | 完全なノートブックとデータセットが提示され、感情がどのように割り当てられるかを説明するセルがよく文書化されている | ノートブックに十分な説明が欠けている | ノートブックに欠陥がある |
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の母国語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..d8c00f178
--- /dev/null
+++ b/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..a2b59800b
--- /dev/null
+++ b/translations/ja/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。元の言語で書かれた文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当方は一切責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/README.md b/translations/ja/6-NLP/README.md
new file mode 100644
index 000000000..c28f12371
--- /dev/null
+++ b/translations/ja/6-NLP/README.md
@@ -0,0 +1,27 @@
+# 自然言語処理の入門
+
+自然言語処理 (NLP) は、コンピュータープログラムが人間の言語を話し言葉や書き言葉として理解する能力のことです。これは人工知能 (AI) の一部です。NLP は50年以上前から存在し、言語学の分野に根ざしています。この分野全体は、機械が人間の言語を理解し処理するのを助けることを目的としています。これにより、スペルチェックや機械翻訳などのタスクを実行することができます。NLP は、医療研究、検索エンジン、ビジネスインテリジェンスなど、多くの分野で現実の応用があります。
+
+## 地域トピック: ヨーロッパの言語と文学、そしてロマンチックなヨーロッパのホテル ❤️
+
+このカリキュラムのセクションでは、機械学習の最も広範な使用例の1つである自然言語処理 (NLP) に紹介されます。計算言語学から派生したこの人工知能のカテゴリは、音声やテキストによるコミュニケーションを介して人間と機械をつなぐ橋渡しとなります。
+
+これらのレッスンでは、小さな会話ボットを作成することで NLP の基本を学び、機械学習がこれらの会話をますます「スマート」にするのをどのように支援するかを学びます。ジェーン・オースティンのクラシック小説『高慢と偏見』のエリザベス・ベネットやミスター・ダーシーとチャットして、1813年に出版された時代にタイムトラベルします。その後、ヨーロッパのホテルレビューを通じて感情分析について学び、知識を深めます。
+
+
+> 写真提供: Elaine Howlin on Unsplash
+
+## レッスン
+
+1. [自然言語処理の紹介](1-Introduction-to-NLP/README.md)
+2. [一般的な NLP のタスクと技術](2-Tasks/README.md)
+3. [機械学習による翻訳と感情分析](3-Translation-Sentiment/README.md)
+4. [データの準備](4-Hotel-Reviews-1/README.md)
+5. [感情分析のための NLTK](5-Hotel-Reviews-2/README.md)
+
+## クレジット
+
+これらの自然言語処理のレッスンは、[Stephen Howell](https://twitter.com/Howell_MSFT) によって ☕ を入れながら書かれました。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご承知おきください。権威ある情報源としては、原文の母国語文書を考慮すべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/6-NLP/data/README.md b/translations/ja/6-NLP/data/README.md
new file mode 100644
index 000000000..2baf82e1d
--- /dev/null
+++ b/translations/ja/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+ホテルレビューのデータをこのフォルダーにダウンロードしてください。
+
+**免責事項**:
+この文書は、機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文が書かれている言語の文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用によって生じる誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/1-Introduction/README.md b/translations/ja/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..381ecf0bf
--- /dev/null
+++ b/translations/ja/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# 時系列予測の入門
+
+
+
+> スケッチノート by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+このレッスンと次のレッスンでは、時系列予測について学びます。これは、価格などの変数の過去のパフォーマンスに基づいて、将来の潜在的な価値を予測することができる、ML科学者のレパートリーの一部であり、他のトピックほど知られていない興味深く価値のある分野です。
+
+[](https://youtu.be/cBojo1hsHiI "時系列予測の入門")
+
+> 🎥 上の画像をクリックして、時系列予測についてのビデオをご覧ください
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+価格設定、在庫、サプライチェーンの問題に直接応用できるため、ビジネスにとって非常に価値のある興味深い分野です。深層学習技術が将来のパフォーマンスをより良く予測するために使われ始めていますが、時系列予測は依然として古典的なML技術によって大いに支えられています。
+
+> ペンシルベニア州立大学の有用な時系列カリキュラムは[こちら](https://online.stat.psu.edu/stat510/lesson/1)にあります
+
+## はじめに
+
+あなたがスマート駐車メーターのデータを管理していて、その使用頻度や使用時間のデータを持っているとしましょう。
+
+> メーターの過去のパフォーマンスに基づいて、需要と供給の法則に従って将来の価値を予測できたらどうでしょう?
+
+目標を達成するための適切な行動タイミングを正確に予測することは、時系列予測で解決できる課題です。忙しい時に駐車スペースを探している人々に対して料金を上げるのは喜ばれないかもしれませんが、街を清掃するための収益を確保する確実な方法です!
+
+さまざまな時系列アルゴリズムを探り、データをクリーンアップして準備するためのノートブックを開始しましょう。この例では、GEFCom2014予測コンペティションから取得したデータを分析します。2012年から2014年までの3年間の毎時の電力負荷と温度の値が含まれています。電力負荷と温度の過去のパターンに基づいて、将来の電力負荷の値を予測することができます。
+
+この例では、過去の負荷データのみを使用して、1ステップ先を予測する方法を学びます。ただし、開始する前に、舞台裏で何が起こっているのかを理解することが有益です。
+
+## いくつかの定義
+
+'時系列'という用語に出会ったとき、さまざまな文脈での使用法を理解する必要があります。
+
+🎓 **時系列**
+
+数学では、「時系列は、時間順にインデックス付け(またはリスト化またはグラフ化)されたデータポイントの系列です。最も一般的には、時系列は等間隔の連続した時間ポイントで取得されたシーケンスです。」時系列の例として、[ダウ・ジョーンズ工業株平均](https://wikipedia.org/wiki/Time_series)の終値があります。時系列プロットや統計モデリングの使用は、信号処理、天気予報、地震予測などのイベントが発生し、データポイントが時間とともにプロットされる他の分野で頻繁に見られます。
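+
+pandas では、このような時系列は `DatetimeIndex` を持つ `Series` として表現できます。最小の例を示します:
+
+```python
+import pandas as pd
+
+# A tiny time series: observations indexed by equally spaced timestamps
+ts = pd.Series([2698.0, 2558.0, 2444.0, 2402.0],
+               index=pd.date_range("2012-01-01", periods=4, freq="H"))
+print(ts)
+```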
+
+🎓 **時系列分析**
+
+時系列分析は、上記の時系列データの分析です。時系列データは、'中断された時系列'のように、中断イベントの前後の時系列の進化パターンを検出するなど、さまざまな形式を取ることがあります。時系列の分析に必要なタイプは、データの性質によります。時系列データ自体は、数字や文字の系列の形式を取ることができます。
+
+実行される分析は、周波数領域と時間領域、線形と非線形など、さまざまな方法を使用します。この種のデータを分析する多くの方法については、[こちら](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm)を参照してください。
+
+🎓 **時系列予測**
+
+時系列予測は、過去に発生したデータによって表示されたパターンに基づいて将来の値を予測するためにモデルを使用することです。時間インデックスをプロット上のx変数として使用して時系列データを探索するために回帰モデルを使用することは可能ですが、そのようなデータは特別なタイプのモデルを使用して最もよく分析されます。
+
+時系列データは、線形回帰で分析できるデータとは異なり、順序付けられた観察値のリストです。最も一般的なものは、"自己回帰移動平均モデル"を意味するARIMAです。
+
+[ARIMAモデル](https://online.stat.psu.edu/stat510/lesson/1/1.1)は「現在のシリーズの値を過去の値および過去の予測誤差に関連付けます。」これらは、データが時間とともに順序付けられる時間領域データの分析に最も適しています。
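+
+参考までに、差分を d 回取った後の系列に当てはめる ARMA(p, q) 部分は、標準的には次のように書けます(y_t は系列の値、ε_t は誤差項、φ と θ が学習されるパラメータです):
+
+$$y_t = c + \varphi_1 y_{t-1} + \cdots + \varphi_p y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$$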
+
+> いくつかのタイプのARIMAモデルがあり、[こちら](https://people.duke.edu/~rnau/411arim.htm)で学ぶことができます。また、次のレッスンで触れます。
+
+次のレッスンでは、[単変量時系列](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm)を使用してARIMAモデルを構築します。これは、時間とともにその値が変化する1つの変数に焦点を当てます。この種のデータの例として、マウナ・ロア観測所で記録された月ごとのCO2濃度の[このデータセット](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm)があります:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ このデータセットで時間とともに変化する変数を特定してください
+
+## 時系列データの特性に注意する
+
+時系列データを見ると、そのパターンをよりよく理解するために考慮し、軽減する必要がある[特定の特性](https://online.stat.psu.edu/stat510/lesson/1/1.1)があることに気付くかもしれません。時系列データを分析したい'信号'として考えると、これらの特性は'ノイズ'と見なすことができます。これらの特性の一部を統計技術を使用して相殺することにより、この'ノイズ'を削減する必要があることがよくあります。
+
+時系列を扱うために知っておくべきいくつかの概念は次のとおりです:
+
+🎓 **トレンド**
+
+トレンドは、時間とともに測定可能な増減として定義されます。[詳細はこちら](https://machinelearningmastery.com/time-series-trends-in-python)。時系列の文脈では、トレンドを使用し、必要に応じて時系列からトレンドを削除する方法について説明します。
+
+🎓 **[季節性](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+季節性は、例えば、販売に影響を与える可能性のある休日のラッシュのような周期的な変動として定義されます。データにおける季節性を表示するさまざまなタイプのプロットについては、[こちら](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm)を参照してください。
+
+🎓 **外れ値**
+
+外れ値は、標準データの分散から遠く離れたものです。
+
+🎓 **長期サイクル**
+
+季節性とは無関係に、データは1年以上続く経済の低迷のような長期サイクルを示すことがあります。
+
+🎓 **一定の分散**
+
+時間とともに、いくつかのデータは、昼夜のエネルギー使用量のように一定の変動を示します。
+
+🎓 **急激な変化**
+
+データはさらなる分析が必要な急激な変化を示すことがあります。例えば、COVIDによる企業の突然の閉鎖は、データに変化をもたらしました。
+
+✅ [こちらのサンプル時系列プロット](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python)は、数年間にわたる日ごとのゲーム内通貨の支出を示しています。このデータに上記の特性のいくつかを特定できますか?
+
+
+
+## 演習 - 電力使用データの開始
+
+過去の使用量に基づいて将来の電力使用量を予測するための時系列モデルを作成しましょう。
+
+> この例のデータは、GEFCom2014予測コンペティションから取得したものです。2012年から2014年までの3年間の毎時の電力負荷と温度の値が含まれています。
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. このレッスンの `working` フォルダーで、_notebook.ipynb_ ファイルを開きます。データを読み込み、可視化するためのライブラリを追加することから始めましょう
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+   注: 環境をセットアップし、データのダウンロードを処理する、同梱の `common` フォルダーのファイルを使用しています。
+
+2. 次に、`load_data()` と `head()` を呼び出して、データをデータフレームとして確認します:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ 日付と負荷を表す2つの列があることがわかります:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. 次に、`plot()`を呼び出してデータをプロットします:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. 次に、`energy` に `[開始日]: [終了日]` のパターンで範囲を指定して、2014年7月の最初の週をプロットします:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 素晴らしいプロットです!これらのプロットを見て、上記の特性のいくつかを特定できますか?データを視覚化することによって何を推測できますか?
+
+次のレッスンでは、ARIMAモデルを作成していくつかの予測を行います。
+
+---
+
+## 🚀チャレンジ
+
+時系列予測から利益を得られる業界や研究分野をすべてリストアップしてみてください。これらの技術の芸術分野への応用を思いつきますか?計量経済学では?生態学では?小売業では?製造業では?金融では?他にはどこがあるでしょうか?
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## 復習と自主学習
+
+ここでは取り上げませんが、ニューラルネットワークは時系列予測の古典的な方法を強化するために時々使用されます。[この記事](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)で詳しく読むことができます。
+
+## 課題
+
+[さらに時系列を可視化する](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/1-Introduction/assignment.md b/translations/ja/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..9d2aaf9d9
--- /dev/null
+++ b/translations/ja/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# いくつかの時系列データをさらに可視化
+
+## 手順
+
+あなたは、特殊なモデリングが必要なデータの種類を見て、時系列予測について学び始めました。エネルギーに関するデータを可視化しました。次に、時系列予測の恩恵を受ける他のデータを探してみましょう。3つの例を見つけてください([Kaggle](https://kaggle.com) や [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott) を試してみてください)。それらを可視化するためのノートブックを作成してください。それらが持つ特別な特性(季節性、急激な変化、その他のトレンドなど)をノートブックに記録してください。
+
+## ルブリック
+
+| 基準 | 優秀な例 | 適切な例 | 改善が必要な例 |
+| ------ | ------------------------------------------------------ | ---------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| | 3つのデータセットがノートブックでプロットされ説明されている | 2つのデータセットがノートブックでプロットされ説明されている | 少数のデータセットがプロットまたは説明されているか、提示されたデータが不十分である |
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確な部分が含まれる可能性があります。権威ある情報源としては、元の言語で記載された文書を参照してください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用によって生じる誤解や誤訳については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/ja/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..91e95299c
--- /dev/null
+++ b/translations/ja/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる可能性があることにご注意ください。原文の言語で書かれたオリジナルの文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解釈については、一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/ja/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..07933750a
--- /dev/null
+++ b/translations/ja/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/2-ARIMA/README.md b/translations/ja/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..caa10aa2b
--- /dev/null
+++ b/translations/ja/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,397 @@
+# ARIMAを使った時系列予測
+
+前のレッスンでは、時系列予測について少し学び、ある期間における電力負荷の変動を示すデータセットを読み込みました。
+
+[](https://youtu.be/IUSk-YDau10 "ARIMAの紹介")
+
+> 🎥 上の画像をクリックしてビデオを見る: ARIMAモデルの簡単な紹介。例はRで行われていますが、概念は普遍的です。
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## はじめに
+
+このレッスンでは、[ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)を使ってモデルを構築する具体的な方法を学びます。ARIMAモデルは、[非定常性](https://wikipedia.org/wiki/Stationary_process)を示すデータに特に適しています。
+
+## 一般的な概念
+
+ARIMAを使うためには、いくつか知っておくべき概念があります:
+
+- 🎓 **定常性**。統計的な文脈では、定常性とは、時間をシフトしても分布が変わらないデータを指します。非定常データは、分析のために変換が必要な傾向による変動を示します。例えば、季節性はデータに変動をもたらし、「季節差分」というプロセスで除去できます。
+
+- 🎓 **[差分](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**。統計的な文脈でデータを差分化するとは、非定常データを定常化するためにその非一定の傾向を取り除くプロセスを指します。「差分は時系列のレベルの変化を取り除き、傾向と季節性を排除し、結果として時系列の平均を安定させます。」[Shixiongらの論文](https://arxiv.org/abs/1904.07632)(差分の簡単なコード例をこのリストの後に示します。)
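+
+差分の最小のスケッチを示します(`series` は観測値の pandas Series を指す仮の名前です)。定常性のおおまかな確認には、`statsmodels` の拡張ディッキー・フラー検定が使えます:
+
+```python
+from statsmodels.tsa.stattools import adfuller
+
+# `series` is a placeholder pandas Series of raw observations
+diff = series.diff().dropna()             # first-order differencing removes a trend
+seasonal_diff = series.diff(24).dropna()  # lag differencing removes a daily cycle in hourly data
+
+# Augmented Dickey-Fuller test: a small p-value suggests stationarity
+print("p-value after differencing:", adfuller(diff)[1])
+```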
+
+## 時系列におけるARIMA
+
+ARIMAの各部分を解説し、時系列をどのようにモデル化し、予測に役立てるかを理解しましょう。
+
+- **AR - 自己回帰**。自己回帰モデルは、その名前が示すように、過去のデータを分析して仮定を立てます。これらの過去の値は「ラグ」と呼ばれます。例えば、月ごとの鉛筆の販売データがあるとします。各月の販売総数はデータセットの「進化変数」とみなされます。このモデルは「興味のある進化変数がそのラグ(すなわち、前の値)に回帰される」として構築されます。[wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - 統合**。類似の'ARMA'モデルとは異なり、ARIMAの'I'はその*[統合](https://wikipedia.org/wiki/Order_of_integration)*側面を指します。データは非定常性を排除するために差分ステップが適用されると「統合」されます。
+
+- **MA - 移動平均**。このモデルの[移動平均](https://wikipedia.org/wiki/Moving-average_model)側面は、現在および過去のラグの値を観察することによって決定される出力変数を指します。
+
+結論: ARIMAは、時系列データの特殊な形式にできるだけ近づけるためにモデルを作成するために使用されます。
+
+## 演習 - ARIMAモデルの構築
+
+このレッスンの[_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working)フォルダーを開き、[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb)ファイルを見つけてください。
+
+1. ノートブックを実行して`statsmodels` Pythonライブラリを読み込みます。これはARIMAモデルに必要です。
+
+1. 必要なライブラリを読み込む
+
+1. データをプロットするために便利なライブラリをいくつか読み込みます:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. `/data/energy.csv`ファイルからデータをPandasデータフレームに読み込み、確認します:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. 2012年1月から2014年12月までの全てのエネルギーデータをプロットします。前のレッスンで見たデータなので驚くことはないでしょう:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ では、モデルを構築しましょう!
+
+### トレーニングとテストデータセットの作成
+
+データが読み込まれたので、トレーニングセットとテストセットに分けます。トレーニングセットでモデルをトレーニングします。通常、モデルのトレーニングが終了したら、テストセットを使用してその精度を評価します。モデルが将来の時間帯から情報を得ないようにするために、テストセットがトレーニングセットよりも後の期間をカバーしていることを確認する必要があります。
+
+1. 2014年11月1日から12月29日までの約2か月間をトレーニングセットに割り当てます。テストセットには2014年12月30日以降の残りの期間が含まれます(下のコードの日付がこれに対応します):
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ このデータは日々のエネルギー消費を反映しているため、強い季節的パターンがありますが、消費は最近の日々の消費に最も似ています。
+
+1. 差異を視覚化します:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ したがって、データのトレーニングには比較的小さな時間枠を使用することが適切です。
+
+ > 注: ARIMAモデルをフィットさせるために使用する関数はフィッティング中にインサンプル検証を行うため、検証データは省略します。
+
+### トレーニングのためのデータ準備
+
+次に、データをフィルタリングおよびスケーリングしてトレーニングの準備をします。必要な期間と列のみを含むようにデータセットをフィルタリングし、データが0から1の範囲に投影されるようにスケーリングします。
+
+1. 元のデータセットをフィルタリングし、前述の期間ごとのセットと、必要な列「load」と日付のみを含むようにします:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ データの形状を確認できます:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. データを(0, 1)の範囲にスケーリングします。
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. 元のデータとスケーリングされたデータを視覚化します:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > 元のデータ
+
+ 
+
+ > スケーリングされたデータ
+
+1. スケーリングされたデータをキャリブレーションしたので、テストデータをスケーリングします:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### ARIMAの実装
+
+いよいよARIMAの実装です!先ほどインストールした`statsmodels`ライブラリを使用します。
+
+次にいくつかのステップを実行する必要があります
+
+ 1. `SARIMAX()` を呼び出し、モデルパラメータ p、d、q および P、D、Q を渡してモデルを定義する。
+ 2. `fit()` 関数を呼び出して、トレーニングデータに対してモデルを準備する。
+ 3. `forecast()` 関数を呼び出し、予測するステップ数(`horizon`)を指定して予測を行う。
+
+> 🎓 これらのパラメータは何のためにあるのでしょうか?ARIMAモデルには、時系列の主要な側面である季節性、トレンド、ノイズのモデル化を助ける3つのパラメータがあります:
+
+`p`: モデルの自己回帰的な側面に関連するパラメータで、*過去の*値を取り込みます。
+`d`: モデルの統合(integrated)部分に関連するパラメータで、時系列に適用する*差分*(🎓 差分を覚えていますか?👆)の量に影響します。
+`q`: モデルの移動平均部分に関連するパラメータです。
+
+> 注: データに季節的な側面がある場合(このデータにはあります)、季節ARIMAモデル(SARIMA)を使用します。その場合は別のパラメータセット `P`、`D`、`Q` が必要です。これらは `p`、`d`、`q` と同じ関係を表しますが、モデルの季節成分に対応します。
+
+1. 好みのホライズン値を設定します。3時間を試してみましょう:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+   ARIMAモデルのパラメータの最適な値を選択するのは、主観的で時間がかかるため、難しい場合があります。[`pyramid` ライブラリ](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html)の `auto_arima()` 関数の使用を検討してもよいでしょう。
+
+1. まずは手動でいくつかの選択を試して、良いモデルを見つけます。
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ 結果の表が表示されます。
+
+最初のモデルを構築しました!次に、これを評価する方法を見つける必要があります。
+
+### モデルの評価
+
+モデルを評価するために、いわゆる`ウォークフォワード`検証を実行できます。実際には、新しいデータが利用可能になるたびに時系列モデルは再トレーニングされます。これにより、モデルは各時点で最適な予測を行うことができます。
+
+この技術を使用して時系列の最初から始め、トレーニングデータセットでモデルをトレーニングします。その後、次の時点で予測を行います。予測は既知の値と比較されます。トレーニングセットは既知の値を含むように拡張され、このプロセスが繰り返されます。
+
+> 注: トレーニングセットウィンドウを固定して効率的なトレーニングを行うために、新しい観測値をトレーニングセットに追加するたびに、セットの最初から観測値を削除します。
+
+このプロセスは、モデルが実際にどのように動作するかのより堅牢な推定を提供します。ただし、多くのモデルを作成する計算コストがかかります。データが小さい場合やモデルがシンプルな場合は許容範囲ですが、スケールが大きい場合は問題になる可能性があります。
+
+ウォークフォワード検証は時系列モデルの評価のゴールドスタンダードであり、独自のプロジェクトに推奨されます。
+
+1. まず、各ホライズンステップのテストデータポイントを作成します。
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ | | | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ データはホライズンポイントに従って水平方向にシフトされます。
+
+1. このスライディングウィンドウアプローチを使用して、テストデータで予測を行い、テストデータの長さのループで実行します:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ トレーニングが行われるのを観察できます:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. 予測と実際の負荷を比較します:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ 出力
+ | | | timestamp | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+
+ 時間ごとのデータの予測を観察し、実際の負荷と比較します。どれくらい正確ですか?
+
+### モデルの精度を確認する
+
+全ての予測に対して平均絶対誤差率(MAPE)をテストしてモデルの精度を確認します。
+
+> **🧮 数学を見せて**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/)は、上記の式で定義される比率として予測精度を示すために使用されます。実際値tと予測値tの差を実際値tで割ります。「この計算の絶対値は、予測されたすべての時点で合計され、フィットされたポイントの数nで割られます。」[wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
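+
+画像の式をテキストで書くと次のとおりです(A_t は実際値、F_t は予測値です):
+
+$$\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|$$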
+
+1. 式をコードで表現します:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. 1ステップのMAPEを計算します:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+ 1ステップ予測MAPE: 0.5570581332313952 %
+
+1. 複数ステップ予測のMAPEを表示します:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+ 低い数値が良い: 予測のMAPEが10であれば、10%の誤差があることを意味します。
+
+1. しかし、いつものように、このような精度の測定を視覚的に見る方が簡単ですので、プロットしてみましょう:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 非常に良いプロットで、良い精度のモデルを示しています。よくできました!
+
+---
+
+## 🚀チャレンジ
+
+時系列モデルの精度をテストする方法を掘り下げてみましょう。このレッスンではMAPEに触れましたが、他に使用できる方法はありますか?それらを調査して注釈を付けてください。役立つドキュメントは[こちら](https://otexts.com/fpp2/accuracy.html)にあります。
+
+## [講義後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## レビューと自習
+
+このレッスンでは、ARIMAを使った時系列予測の基本に触れました。時間をかけて[このリポジトリ](https://microsoft.github.io/forecasting/)とそのさまざまなモデルタイプを掘り下げ、他の時系列モデルの構築方法を学んでください。
+
+## 課題
+
+[新しいARIMAモデル](assignment.md)
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で記載された元の文書を権威ある情報源と見なしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/2-ARIMA/assignment.md b/translations/ja/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..5c73a6314
--- /dev/null
+++ b/translations/ja/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# 新しいARIMAモデル
+
+## 手順
+
+既にARIMAモデルを構築したので、新しいデータを使って新しいモデルを構築しましょう([Dukeのこれらのデータセット](http://www2.stat.duke.edu/~mw/ts_data_sets.html)の1つを試してみてください)。ノートブックに作業を注釈し、データとモデルを可視化し、MAPEを使用してその精度をテストしてください。
+
+## ルーブリック
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| ------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- | ---------------------------- |
+| | 新しいARIMAモデルが構築され、テストされ、可視化と精度が説明されたノートブックが提示されています。 | 提示されたノートブックには注釈がないか、バグが含まれています。 | 不完全なノートブックが提示されています。 |
+
+**免責事項**:
+この文書は、機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。元の言語で書かれた文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/ja/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..95cb54a78
--- /dev/null
+++ b/translations/ja/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる可能性がありますのでご注意ください。元の言語で書かれた原文が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/ja/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..bdf594aee
--- /dev/null
+++ b/translations/ja/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確な部分が含まれる場合があります。原文の言語で書かれた文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/3-SVR/README.md b/translations/ja/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..43f1a4846
--- /dev/null
+++ b/translations/ja/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,384 @@
+# サポートベクター回帰を用いた時系列予測
+
+前回のレッスンでは、ARIMAモデルを使用して時系列予測を行う方法を学びました。今回は、連続データを予測するために使用される回帰モデルであるサポートベクター回帰モデルについて見ていきます。
+
+## [事前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## はじめに
+
+このレッスンでは、回帰のための[**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine)、つまり**SVR: サポートベクター回帰**を用いてモデルを構築する具体的な方法を学びます。
+
+### 時系列におけるSVRの役割 [^1]
+
+時系列予測におけるSVRの重要性を理解する前に、知っておくべき重要な概念をいくつか紹介します:
+
+- **回帰:** 与えられた入力セットから連続値を予測するための教師あり学習技術です。アイデアは、特徴空間内の最大数のデータポイントを持つ曲線(または直線)にフィットさせることです。詳細は[こちら](https://en.wikipedia.org/wiki/Regression_analysis)をクリックしてください。
+- **サポートベクターマシン (SVM):** 分類、回帰、および外れ値検出のために使用される教師あり機械学習モデルの一種です。モデルは特徴空間内の超平面であり、分類の場合は境界として機能し、回帰の場合は最適なフィットラインとして機能します。SVMでは、カーネル関数を使用してデータセットをより高次元の空間に変換し、容易に分離可能にすることが一般的です。SVMの詳細は[こちら](https://en.wikipedia.org/wiki/Support-vector_machine)をクリックしてください。
+- **サポートベクター回帰 (SVR):** SVMの一種で、最大数のデータポイントを持つ最適なフィットライン(SVMの場合は超平面)を見つけるためのものです。
+
+### なぜSVRを使うのか? [^1]
+
+前回のレッスンでは、時系列データを予測するための非常に成功した統計的線形方法であるARIMAについて学びました。しかし、多くの場合、時系列データには線形モデルではマッピングできない*非線形性*が含まれています。このような場合、回帰タスクにおいてデータの非線形性を考慮できるSVMの能力が、時系列予測においてSVRを成功させる理由となります。
+
+## 演習 - SVRモデルの構築
+
+データ準備の最初のステップは、前回の[ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)のレッスンと同じです。
+
+このレッスンの[_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working)フォルダーを開き、[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb)ファイルを見つけてください。[^2]
+
+1. ノートブックを実行し、必要なライブラリをインポートします: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. `/data/energy.csv`ファイルからデータをPandasデータフレームに読み込み、確認します: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. 2012年1月から2014年12月までの利用可能なすべてのエネルギーデータをプロットします: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ それでは、SVRモデルを構築しましょう。
+
+### トレーニングデータとテストデータの作成
+
+データが読み込まれたので、トレーニングセットとテストセットに分割します。その後、SVRに必要なタイムステップベースのデータセットを作成するためにデータを再形成します。モデルはトレーニングセットでトレーニングされます。トレーニングが終了した後、トレーニングセット、テストセット、そして全データセットでその精度を評価し、全体的なパフォーマンスを確認します。テストセットがトレーニングセットより後の期間をカバーするようにして、モデルが将来の時間から情報を得ないようにする必要があります[^2](これは*過剰適合*と呼ばれる状況です)。
+
+1. トレーニングセットには2014年11月1日から12月29日までの約2ヶ月間を割り当てます。テストセットには2014年12月30日以降の残りの期間が含まれます: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. 違いを視覚化します: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### トレーニング用データの準備
+
+次に、データをフィルタリングおよびスケーリングしてトレーニング用に準備する必要があります。必要な期間と列のみを含むようにデータセットをフィルタリングし、データが0から1の範囲に投影されるようにスケーリングします。
+
+1. 元のデータセットをフィルタリングして、前述の期間ごとのセットと、必要な列「load」と日付のみを含むようにします: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. トレーニングデータを(0, 1)の範囲にスケーリングします: [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+4. 次に、テストデータをスケーリングします: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### タイムステップを持つデータの作成 [^1]
+
+SVRのために、入力データを `[batch, timesteps]` の形式に変換します。つまり、既存の `train_data` と `test_data` を、タイムステップを表す新しい次元が加わるように再形成します。
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+この例では、`timesteps = 5`とします。つまり、モデルへの入力は最初の4つのタイムステップのデータであり、出力は5番目のタイムステップのデータとなります。
+
+```python
+timesteps=5
+```
+
+ネストされたリスト内包表記を使用してトレーニングデータを2Dテンソルに変換します:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+テストデータを2Dテンソルに変換します:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+トレーニングデータとテストデータから入力と出力を選択します:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
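+
+同じスライディングウィンドウ変換は、NumPy の `sliding_window_view` でも書けます(NumPy 1.20 以降。戻り値は読み取り専用のビューなので、変更する場合は `copy()` してください):
+
+```python
+from numpy.lib.stride_tricks import sliding_window_view
+
+# Equivalent to the nested list comprehension above
+train_data_timesteps = sliding_window_view(train_data[:, 0], timesteps)
+print(train_data_timesteps.shape)   # (1412, 5)
+```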
+
+### SVRの実装 [^1]
+
+それでは、SVRを実装する時が来ました。この実装について詳しく知りたい場合は、[このドキュメント](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)を参照してください。私たちの実装では、以下のステップに従います:
+
+  1. `SVR()` を呼び出し、モデルのハイパーパラメータ kernel、gamma、C、epsilon を渡してモデルを定義する
+  2. `fit()` 関数を呼び出して、トレーニングデータに対してモデルを準備する
+  3. `predict()` 関数を呼び出して予測を行う
+
+次に、SVRモデルを作成します。ここでは[RBFカーネル](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel)を使用し、ハイパーパラメータgamma、C、およびepsilonをそれぞれ0.5、10、および0.05に設定します。
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### トレーニングデータにモデルを適合させる [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### モデル予測を行う [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+SVRを構築しました!次に、それを評価する必要があります。
+
+### モデルの評価 [^1]
+
+評価のために、まずデータを元のスケールに戻します。次に、パフォーマンスを確認するために、元の時系列プロットと予測された時系列プロットをプロットし、MAPE結果も出力します。
+
+予測された出力と元の出力をスケールバックします:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### トレーニングデータとテストデータでモデルのパフォーマンスを確認する [^1]
+
+データセットからタイムスタンプを抽出し、プロットのx軸に表示します。最初の `timesteps-1` 個の値を最初の出力の入力として使用しているため、出力のタイムスタンプはその後から始まります。
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+トレーニングデータの予測をプロットします:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+トレーニングデータのMAPEを出力します
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+テストデータの予測をプロットします
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+テストデータのMAPEを出力します
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 テストデータセットで非常に良い結果を得ました!
+
+### 全データセットでモデルのパフォーマンスを確認する [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 素晴らしいプロットで、精度の高いモデルを示しています。よくできました!
+
+---
+
+## 🚀チャレンジ
+
+- モデルを作成する際にハイパーパラメータ(gamma、C、epsilon)を調整して、テストデータでの最適な結果を得るセットを評価してみてください。これらのハイパーパラメータについて詳しく知りたい場合は、[こちらのドキュメント](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel)を参照してください。
+- モデルに対して異なるカーネル関数を使用し、データセットでのパフォーマンスを分析してみてください。役立つドキュメントは[こちら](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)にあります。
+- モデルが予測を行うために振り返る`timesteps`の値を異なるものにしてみてください。
+
+## [事後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## レビューと自己学習
+
+このレッスンでは、時系列予測のためのSVRの応用を紹介しました。SVRについてさらに詳しく知りたい場合は、[このブログ](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/)を参照してください。この[scikit-learnのドキュメント](https://scikit-learn.org/stable/modules/svm.html)では、SVM全般、[SVR](https://scikit-learn.org/stable/modules/svm.html#regression)、および異なる[カーネル関数](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)の使用方法やそのパラメータなど、他の実装の詳細についても包括的な説明が提供されています。
+
+## 課題
+
+[新しいSVRモデル](assignment.md)
+
+
+
+## クレジット
+
+
+[^1]: このセクションのテキスト、コード、および出力は[AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)によって提供されました。
+[^2]: このセクションのテキスト、コード、および出力は[ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)から引用されました。
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。元の言語で書かれた原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/3-SVR/assignment.md b/translations/ja/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..8ca1f48ab
--- /dev/null
+++ b/translations/ja/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,17 @@
+# 新しいSVRモデル
+
+## 手順 [^1]
+
+SVRモデルを構築したら、新しいデータを使って新しいモデルを構築してください([Dukeのこれらのデータセット](http://www2.stat.duke.edu/~mw/ts_data_sets.html)のいずれかを試してください)。作業をノートブックに注釈し、データとモデルを可視化し、適切なプロットとMAPEを使用してその精度をテストします。また、異なるハイパーパラメータを調整したり、異なるタイムステップの値を使用したりすることも試してください。
+
+## ルーブリック [^1]
+
+| 基準 | 模範的な内容 | 適切な内容 | 改善が必要な内容 |
+| -------- | ------------------------------------------------------------ | --------------------------------------------------------- | ----------------------------------- |
+| | SVRモデルが構築され、テストされ、可視化と精度が明示されたノートブックが提示される。 | 提示されたノートブックに注釈がなく、バグが含まれている。 | 不完全なノートブックが提示される。 |
+
+
+[^1]:このセクションのテキストは[ARIMAの課題](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)に基づいています。
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すために努力していますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語で書かれた文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用によって生じた誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/7-TimeSeries/README.md b/translations/ja/7-TimeSeries/README.md
new file mode 100644
index 000000000..521ccaf7e
--- /dev/null
+++ b/translations/ja/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# 時系列予測の紹介
+
+時系列予測とは何でしょうか?それは過去のトレンドを分析して将来の出来事を予測することです。
+
+## 地域トピック: 世界の電力使用量 ✨
+
+この2つのレッスンでは、時系列予測について紹介します。これは機械学習の中でもあまり知られていない分野ですが、産業やビジネスの応用などにおいて非常に価値があります。ニューラルネットワークを使ってこれらのモデルの有用性を高めることもできますが、ここでは過去のデータに基づいて将来のパフォーマンスを予測するための古典的な機械学習の文脈でこれらを学びます。
+
+地域的な焦点は世界の電力使用量にあります。この興味深いデータセットを通じて、過去の負荷パターンに基づいて将来の電力使用量を予測する方法を学びます。このような予測がビジネス環境で非常に役立つことがわかるでしょう。
+
+
+
+ラジャスタンの道路にある電柱の写真は、[Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) によるもので、[Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) に掲載されています。
+
+## レッスン
+
+1. [時系列予測の紹介](1-Introduction/README.md)
+2. [ARIMA時系列モデルの構築](2-ARIMA/README.md)
+3. [時系列予測のためのサポートベクターレグレッサの構築](3-SVR/README.md)
+
+## クレジット
+
+「時系列予測の紹介」は、[Francesca Lazzeri](https://twitter.com/frlazzeri) と [Jen Looper](https://twitter.com/jenlooper) によって⚡️を込めて書かれました。このノートブックは最初に [Azure "Deep Learning For Time Series" リポジトリ](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) にオンラインで公開され、Francesca Lazzeri によって書かれました。SVRレッスンは [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD) によって書かれました。
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期すために努力していますが、自動翻訳には誤りや不正確さが含まれる可能性があります。元の言語で記載された文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じる誤解や誤解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/1-QLearning/README.md b/translations/ja/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..95af5c12c
--- /dev/null
+++ b/translations/ja/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,320 @@
+# 強化学習とQ学習の紹介
+
+
+> スケッチノート: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+強化学習には、エージェント、状態、各状態ごとの一連のアクションという3つの重要な概念が含まれます。指定された状態でアクションを実行すると、エージェントに報酬が与えられます。コンピュータゲーム「スーパーマリオ」を想像してみてください。あなたはマリオで、崖の端に立っているゲームレベルにいます。上にはコインがあります。あなたがマリオで、特定の位置にいるゲームレベル...それがあなたの状態です。右に一歩進む(アクション)と崖から落ちてしまい、低い数値スコアが与えられます。しかし、ジャンプボタンを押すとポイントが得られ、生き残ることができます。これはポジティブな結果であり、ポジティブな数値スコアが与えられるべきです。
+
+強化学習とシミュレーター(ゲーム)を使用することで、ゲームをプレイして報酬を最大化する方法を学ぶことができます。報酬は生き残り、できるだけ多くのポイントを獲得することです。
+
+[](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 上の画像をクリックして、Dmitry が強化学習について話すのを聞いてみましょう
+
+## [講義前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## 前提条件とセットアップ
+
+このレッスンでは、Python でいくつかのコードを実験します。このレッスンの Jupyter Notebook コードを、自分のコンピュータ上またはクラウド上で実行できるようにしてください。
+
+[レッスンノートブック](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb)を開いて、このレッスンを進めながら構築していくことができます。
+
+> **Note:** クラウドからこのコードを開く場合、ノートブックコードで使用される [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) ファイルも取得する必要があります。同じディレクトリに追加してください。
+
+## はじめに
+
+このレッスンでは、ロシアの作曲家 [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev) による音楽童話に触発された **[ピーターと狼](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)** の世界を探ります。**強化学習** を使用して、ピーターが環境を探索し、美味しいリンゴを集め、狼に出会わないようにします。
+
+**強化学習** (RL) は、**エージェント** がいくつかの **環境** で最適な行動を学習するための技術です。エージェントはこの環境で **報酬関数** によって定義された **目標** を持つべきです。
+
+## 環境
+
+簡単にするために、ピーターの世界を次のような `width` x `height` のサイズの正方形のボードと考えます:
+
+
+
+このボードの各セルは次のいずれかです:
+
+* **地面**: ピーターや他の生き物が歩ける場所。
+* **水**: 明らかに歩けない場所。
+* **木** または **草**: 休む場所。
+* **リンゴ**: ピーターが見つけて食べたいもの。
+* **狼**: 危険で避けるべきもの。
+
+この環境で動作するコードを含む別の Python モジュール [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) があります。このコードは概念の理解には重要ではないため、モジュールをインポートしてサンプルボードを作成します(コードブロック 1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+このコードは、上記の環境に似た画像を出力します。
+
+## アクションとポリシー
+
+この例では、ピーターの目標は狼や他の障害物を避けながらリンゴを見つけることです。これを行うために、彼はリンゴを見つけるまで基本的に歩き回ることができます。
+
+したがって、任意の位置で、彼は次のアクションのいずれかを選択できます:上、下、左、右。
+
+これらのアクションを辞書として定義し、それらを対応する座標の変化のペアにマッピングします。例えば、右に移動する (`R`) は座標の変化のペア `(1,0)` に対応します(コードブロック 2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
+
+まとめると、このシナリオの戦略と目標は次のとおりです:
+
+- **戦略**: エージェント(ピーター)の戦略は **ポリシー** と呼ばれる関数によって定義されます。ポリシーは任意の状態でアクションを返す関数です。私たちの場合、問題の状態はプレイヤーの現在位置を含むボードによって表されます。
+
+- **目標**: 強化学習の目標は、問題を効率的に解決するための良いポリシーを最終的に学習することです。ただし、基準として、最も単純なポリシーである **ランダムウォーク** を考えます。
+
+## ランダムウォーク
+
+まず、ランダムウォーク戦略を実装して問題を解決しましょう。ランダムウォークでは、許可されたアクションから次のアクションをランダムに選択し、リンゴに到達するまで繰り返します(コードブロック 3)。
+
+1. 以下のコードでランダムウォークを実装します:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+ `walk` の呼び出しは、対応する経路の長さを返すべきです。これは実行ごとに異なる場合があります。
+
+1. ウォーク実験を何度か(例えば100回)実行し、結果の統計を出力します(コードブロック 4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+ 経路の平均長さが約30〜40ステップであることに注意してください。これは、最も近いリンゴまでの平均距離が約5〜6ステップであることを考えると、かなり多いです。
+
+ また、ランダムウォーク中のピーターの動きがどのように見えるかも確認できます:
+
+ 
+
+## 報酬関数
+
+ポリシーをより知的にするためには、どの移動が他の移動よりも「良い」かを理解する必要があります。これを行うためには、目標を定義する必要があります。
+
+目標は、各状態に対していくつかのスコア値を返す **報酬関数** の観点から定義できます。数値が高いほど、報酬関数が良いことを意味します(コードブロック 5)。
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+報酬関数について興味深い点は、ほとんどの場合、*ゲームの最後にのみ実質的な報酬が与えられる* ことです。これは、アルゴリズムがポジティブな報酬につながる「良い」ステップを記憶し、それらの重要性を高める必要があることを意味します。同様に、悪い結果につながるすべての移動は抑制されるべきです。
+
+## Q学習
+
+ここで議論するアルゴリズムは **Q学習** と呼ばれます。このアルゴリズムでは、ポリシーは **Qテーブル** と呼ばれる関数(またはデータ構造)によって定義されます。これは、特定の状態で各アクションの「良さ」を記録します。
+
+Qテーブルと呼ばれるのは、それを表形式や多次元配列として表現するのが便利なためです。ボードのサイズが `width` x `height` であるため、`width` x `height` x `len(actions)` の形状を持つ numpy 配列を使用して Qテーブルを表現できます(コードブロック 6)。
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=float)*1.0/len(actions)
+```
+
+Qテーブルのすべての値を等しい値(この場合は 0.25)で初期化することに注意してください。これは、すべての状態でのすべての移動が等しく良いことを意味する「ランダムウォーク」ポリシーに対応します。Qテーブルをボード上に可視化するには、`plot` 関数に渡します: `m.plot(Q)`。
+
+
+
+各セルの中央には、移動の優先方向を示す「矢印」が表示されます。すべての方向が等しいため、点が表示されます。
+
+次に、シミュレーションを実行して環境を探索し、Qテーブルの値のより良い分布を学習する必要があります。これにより、リンゴへの経路をはるかに速く見つけられるようになります。
+
+## Q学習の本質: ベルマン方程式
+
+移動を開始すると、各アクションには対応する報酬があります。つまり、理論的には最も高い即時報酬に基づいて次のアクションを選択できます。しかし、ほとんどの状態では、その移動はリンゴに到達するという目標を達成しないため、どの方向が良いかをすぐに判断することはできません。
+
+> 重要なのは即時の結果ではなく、シミュレーションの最後に得られる最終的な結果であることを覚えておいてください。
+
+この遅延報酬を考慮するためには、問題を再帰的に考えることを可能にする **[動的計画法](https://en.wikipedia.org/wiki/Dynamic_programming)** の原則を使用する必要があります。
+
+今、状態 *s* にいて、次の状態 *s'* に移動したいとします。そうすることで、報酬関数によって定義される即時報酬 *r(s,a)* と、いくらかの将来の報酬を受け取ります。Qテーブルが各アクションの「魅力」を正しく反映していると仮定すると、状態 *s'* では *Q(s',a')* の値が最大になるアクション *a'* を選択します。したがって、状態 *s* で得られる可能性のある最良の将来報酬は max<sub>a'</sub>*Q(s',a')* と定義されます(ここでの最大値は、状態 *s'* におけるすべての可能なアクション *a'* について計算されます)。
+
+これにより、アクション *a* が与えられたときの状態 *s* における Qテーブルの値を計算するための **ベルマンの式** が得られます:
+
+
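+本文の定義に基づいてこの式を書き下すと、次のように再構成できます(元の図に対応する式の再構成であり、表記は上の説明に従った仮定を含みます):
+
+```latex
+Q(s,a) = r(s,a) + \gamma \max_{a'} Q(s',a')
+```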
+
+ここで γ はいわゆる **割引係数** であり、現在の報酬を将来の報酬に対してどの程度優先すべきか(またはその逆)を決定します。
+
+## 学習アルゴリズム
+
+上記の式を踏まえて、学習アルゴリズムの擬似コードを書くことができます:
+
+* すべての状態とアクションについて、Qテーブル Q を等しい数値で初期化する
+* 学習率 α ← 1 を設定する
+* シミュレーションを何度も繰り返す
+   1. ランダムな位置から開始する
+   1. 繰り返す
+      1. 状態 *s* でアクション *a* を選択する
+      2. 新しい状態 *s'* に移動してアクションを実行する
+      3. ゲーム終了条件に遭遇した場合、または合計報酬が小さすぎる場合は、シミュレーションを終了する
+      4. 新しい状態での報酬 *r* を計算する
+      5. ベルマン方程式に従って Q関数を更新する: *Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ max<sub>a'</sub>Q(s',a'))*
+      6. *s* ← *s'*
+      7. 合計報酬を更新し、α を減少させる。
+
+## 活用と探索
+
+上記のアルゴリズムでは、ステップ 2.1 でアクションをどのように選択すべきかを指定しませんでした。アクションをランダムに選択する場合、環境をランダムに**探索**することになり、頻繁に死んでしまったり、通常は行かない領域を探索したりする可能性が高くなります。別のアプローチは、既に知っている Qテーブルの値を**活用**し、状態 *s* で最良のアクション(Qテーブルの値が高いもの)を選択することです。しかし、これでは他の状態を探索できなくなり、最適な解を見つけられない可能性があります。
+
+したがって、最良のアプローチは探索と活用のバランスを取ることです。これは、状態 *s* でのアクションを、Qテーブルの値に比例した確率で選択することで実現できます。最初は Qテーブルの値がすべて同じであるためランダムな選択に対応しますが、環境について学ぶにつれて最適な経路をたどる可能性が高くなり、同時にエージェントが時折未探索の経路を選択することも許容されます。
+
+## Python での実装
+
+これで学習アルゴリズムを実装する準備ができました。その前に、Qテーブル内の任意の数値を、対応するアクションの確率のベクトルに変換する関数も必要です。
+
+1. 関数 `probs()` を作成します:
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+ ```
+
+ 初期状態でベクトルのすべての成分が同一である場合に 0 で割ることを避けるために、元のベクトルにいくつかの `eps` を追加します。
+
+5000回の実験(エポック)を通じて学習アルゴリズムを実行します(コードブロック 8)。
+
+```python
+lpath = []  # assumed initialization: collects path lengths per epoch (defined earlier in the original notebook)
+for epoch in range(5000):
+
+ # Pick initial point
+ m.random_start()
+
+ # Start travelling
+ n=0
+ cum_reward = 0
+ while True:
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ dpos = actions[a]
+ m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+ r = reward(m)
+ cum_reward += r
+ if r==end_reward or cum_reward < -1000:
+ lpath.append(n)
+ break
+ alpha = np.exp(-n / 10e5)
+ gamma = 0.5
+ ai = action_idx[a]
+ Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+ n+=1
+```
+
+このアルゴリズムを実行した後、Qテーブルは各ステップでの異なるアクションの魅力を定義する値で更新されます。Qテーブルを視覚化して、各セルに小さな円を描くことで、移動の希望方向を示すベクトルをプロットすることができます。
+
+## ポリシーの確認
+
+Qテーブルは各状態での各アクションの「魅力」をリストしているため、効率的なナビゲーションを定義するのに簡単に使用できます。最も簡単な場合、Qテーブルの値が最も高いアクションを選択できます(コードブロック 9)。
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> 上記のコードを数回試してみると、時々「ハング」することがあり、ノートブックの STOP ボタンを押して中断する必要があることに気付くかもしれません。これは、最適な Q値の観点から2つの状態が互いに「指し示す」状況があり、その場合、エージェントが無限にその状態間を移動し続けるためです。
+
+## 🚀チャレンジ
+
+> **タスク 1:** `walk` 関数を修正して、経路の最大長を一定のステップ数(例えば 100)に制限し、上記のコードがこの値を時々返す様子を確認してください(この下に最小限のスケッチを示します)。
+
+> **タスク 2:** `walk` 関数を修正して、以前に訪れた場所に戻らないようにしてください。これにより `walk` のループは防げますが、それでもエージェントは脱出できない場所に「閉じ込められる」可能性があります。
+
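+タスク 1 の出発点として、ステップ数の上限を付けた `walk` の最小限のスケッチを示します(引数名 `max_steps` と上限に達したときの戻り値はここでの仮定です):
+
+```python
+def walk(m,policy,start_position=None,max_steps=100):
+    n = 0 # number of steps
+    # set initial position
+    if start_position:
+        m.human = start_position
+    else:
+        m.random_start()
+    while True:
+        if m.at() == Board.Cell.apple:
+            return n # success!
+        if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+            return -1 # eaten by wolf or drowned
+        if n >= max_steps:
+            return max_steps # assumed behavior: give up, the policy is probably looping
+        while True:
+            a = actions[policy(m)]
+            new_pos = m.move_pos(m.human,a)
+            if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+                m.move(a) # do the actual move
+                break
+        n+=1
+```
+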
+## ナビゲーション
+
+より良いナビゲーションポリシーは、トレーニング中に使用した、活用と探索を組み合わせたものです。このポリシーでは、Qテーブルの値に比例した確率で各アクションを選択します。この戦略ではエージェントが既に探索した位置に戻ってしまう可能性は残りますが、以下のコードからわかるように、目的の場所までの平均経路は非常に短くなります(`print_statistics` はシミュレーションを100回実行することを思い出してください)。(コードブロック 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+このコードを実行した後、以前よりも平均経路長がはるかに短くなり、3〜6の範囲になります。
+
+## 学習プロセスの調査
+
+学習プロセスは、探索と、問題空間の構造に関して獲得した知識の活用とのバランスです。学習の結果(エージェントが目標に到達するための短い経路を見つける能力)が向上したことがわかりましたが、学習プロセス中の平均経路長の変化を観察することも興味深いです。
+
+学習の要点をまとめると:
+
+- **平均経路長の増加**。最初は平均経路長が増加します。これは、環境について何も知らないときに、悪い状態(水や狼)に閉じ込められやすいことが原因です。より多くを学び、この知識を使い始めると、環境をより長く探索できますが、リンゴの位置についてはまだよくわかりません。
+
+- **学習が進むにつれて経路長が減少**。十分に学習すると、エージェントが目標を達成するのが簡単になり、経路長が減少し始めます。ただし、探索は続けているため、最適な経路から逸れ、新しいオプションを探索することがあり、経路が最適より長くなることがあります。
+
+- **突然の経路長の増加**。グラフで経路長が突然増加することもあります。これはプロセスの確率的な性質を示しており、新しい値で Qテーブルの係数を上書きすることで Qテーブルが「損なわれる」可能性があります。理想的には、学習率を低下させることでこれを最小限に抑えるべきです(例えば、学習の終わりに向かって、Qテーブルの値をわずかに調整する)。
+
+全体として、学習プロセスの成功と質は、学習率、学習率の減衰、割引率などのパラメータに大きく依存することを覚えておくことが重要です。これらは **ハイパーパラメータ** と呼ばれ、**パラメータ** とは区別されます。パラメータは学習中に最適化するものであり(例えば、Qテーブルの係数)、最適なハイパーパラメータ値を見つけるプロセスは **ハイパーパラメータ最適化** と呼ばれ、別のトピックとして取り上げる価値があります。
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## 課題
+[より現実的な世界](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。元の言語で記載された文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤った解釈について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/1-QLearning/assignment.md b/translations/ja/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..7f19b4629
--- /dev/null
+++ b/translations/ja/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# より現実的な世界
+
+私たちのシナリオでは、ピーターはほとんど疲れたりお腹が空いたりすることなく移動することができました。より現実的な世界では、ピーターは時々座って休憩し、食事をする必要があります。次のルールを実装して、私たちの世界をより現実的にしましょう。
+
+1. 一つの場所から別の場所に移動することで、ピーターは**エネルギー**を失い、少し**疲労**を得ます。
+2. ピーターはリンゴを食べることでエネルギーを増やすことができます。
+3. ピーターは木の下や草の上で休むことで疲労を取り除くことができます(つまり、木や草のあるボードの位置に移動すること - 緑のフィールド)。
+4. ピーターはオオカミを見つけて倒す必要があります。
+5. オオカミを倒すためには、ピーターは一定のエネルギーと疲労のレベルが必要で、それがないと戦いに負けてしまいます。
+
+## 手順
+
+解決策の出発点として、元の [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) ノートブックを使用してください。
+
+上記のルールに従って報酬関数を修正し、強化学習アルゴリズムを実行してゲームに勝つための最適な戦略を学び、ランダムウォークとの勝敗数を比較してください。
+
+> **Note**: 新しい世界では状態がより複雑で、人間の位置に加えて疲労とエネルギーのレベルも含まれます。状態をタプル (Board, energy, fatigue) として表現するか、状態のためのクラスを定義することを選ぶことができます(`Board` から派生させることもできます)、または元の `Board` クラスを [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py) 内で修正することもできます。
+
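+上記のノートで挙げた選択肢の一つ、つまり状態をクラスとして表現する場合の最小限のスケッチを示します(クラス名 `PeterState`、フィールド名、報酬の数値はすべてここでの仮定であり、課題の正式な解答ではありません。セル名は本文の説明に合わせていますが、実際の列挙名は rlboard.py を確認してください):
+
+```python
+from dataclasses import dataclass
+
+@dataclass(frozen=True)
+class PeterState:
+    pos: tuple      # (x, y) position on the board
+    energy: int     # rule 1/2: lost by moving, restored by apples
+    fatigue: int    # rule 1/3: gained by moving, removed by resting
+
+def extended_reward(m, state):
+    # sketch of a reward following rules 1-5; all numeric values are assumptions
+    x = m.at(state.pos)
+    if x == Board.Cell.wolf:
+        # rule 5: winning the fight needs enough energy and low fatigue
+        return 10 if state.energy > 5 and state.fatigue < 5 else -10
+    if x == Board.Cell.apple:
+        return 1    # rule 2: apples restore energy (update it in the transition)
+    return -0.1     # rule 1: every move costs energy and adds fatigue
+```
+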
+解決策では、ランダムウォーク戦略を担当するコードを保持し、最後にアルゴリズムの結果をランダムウォークと比較してください。
+
+> **Note**: ハイパーパラメータを調整して動作させる必要があるかもしれません。特にエポック数です。ゲームの成功(オオカミとの戦い)は稀なイベントであるため、訓練時間が長くなることが予想されます。
+
+## 評価基準
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| ---- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
+| | 新しい世界のルール、Q学習アルゴリズム、およびいくつかのテキスト説明が定義されたノートブックが提示されます。Q学習はランダムウォークと比較して結果を大幅に改善することができます。 | ノートブックが提示され、Q学習が実装され、ランダムウォークと比較して結果が改善されますが、大幅ではないか、ノートブックの文書が不十分でコードがよく構成されていない。 | 世界のルールを再定義しようとする試みがなされていますが、Q学習アルゴリズムが機能せず、報酬関数が完全に定義されていない。 |
+
+**免責事項**:
+この文書は機械翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれた元の文書を権威ある情報源とみなしてください。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/ja/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..327ab68ec
--- /dev/null
+++ b/translations/ja/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤訳については、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/ja/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..59121ca43
--- /dev/null
+++ b/translations/ja/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/2-Gym/README.md b/translations/ja/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..cda88ba74
--- /dev/null
+++ b/translations/ja/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,342 @@
+# カートポール スケート
+
+前回のレッスンで解決していた問題は、現実のシナリオには適用できないおもちゃの問題のように見えるかもしれません。しかし、実際には多くの現実の問題もこのシナリオを共有しています。例えば、チェスや囲碁のプレイも同様です。これらは、与えられたルールと**離散状態**を持つボードがあるため、似ています。
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## はじめに
+
+このレッスンでは、**連続状態**を持つ問題にQラーニングの原則を適用します。連続状態とは、1つ以上の実数で与えられる状態のことです。以下の問題に取り組みます:
+
+> **問題**: ピーターが狼から逃げるためには、もっと速く動けるようになる必要があります。Qラーニングを使用して、ピーターがスケートを学び、特にバランスを保つ方法を見てみましょう。
+
+
+
+> ピーターと彼の友達は、狼から逃れるために創造的になります!画像は[Jen Looper](https://twitter.com/jenlooper)によるものです。
+
+ここでは、**カートポール**問題として知られるバランスを取る方法の簡略版を使用します。カートポールの世界では、左右に動ける水平スライダーがあり、その上に垂直のポールをバランスさせることが目標です。
+
+## 前提条件
+
+このレッスンでは、**OpenAI Gym**というライブラリを使用して、さまざまな**環境**をシミュレートします。このレッスンのコードをローカル(例:Visual Studio Code)で実行する場合、シミュレーションは新しいウィンドウで開きます。オンラインでコードを実行する場合は、[こちら](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7)に記載されているように、コードにいくつかの調整が必要になるかもしれません。
+
+## OpenAI Gym
+
+前回のレッスンでは、ゲームのルールと状態は自分で定義した`Board`クラスによって与えられました。ここでは、バランスポールの物理をシミュレートする特別な**シミュレーション環境**を使用します。強化学習アルゴリズムをトレーニングするための最も人気のあるシミュレーション環境の1つは、[Gym](https://gym.openai.com/)と呼ばれ、[OpenAI](https://openai.com/)によって維持されています。このジムを使用することで、カートポールシミュレーションからアタリゲームまで、さまざまな**環境**を作成できます。
+
+> **Note**: OpenAI Gymで利用可能な他の環境は[こちら](https://gym.openai.com/envs/#classic_control)で確認できます。
+
+まず、ジムをインストールし、必要なライブラリをインポートしましょう(コードブロック1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## 演習 - カートポール環境の初期化
+
+カートポールのバランス問題に取り組むために、対応する環境を初期化する必要があります。各環境には次のものが関連付けられています:
+
+- **観察スペース**: 環境から受け取る情報の構造を定義します。カートポール問題では、ポールの位置、速度、およびその他の値を受け取ります。
+
+- **アクションスペース**: 可能なアクションを定義します。私たちの場合、アクションスペースは離散的で、**左**と**右**の2つのアクションから成ります。(コードブロック2)
+
+1. 初期化するために、次のコードを入力します:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+環境がどのように機能するかを見るために、100ステップの短いシミュレーションを実行してみましょう。各ステップで、取るべきアクションの1つを提供します。このシミュレーションでは、`action_space`からランダムにアクションを選択します。
+
+1. 以下のコードを実行して、その結果を確認してください。
+
+ ✅ このコードをローカルのPythonインストールで実行することが推奨されます!(コードブロック3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+ 次のような画像が表示されるはずです:
+
+ 
+
+1. シミュレーション中に、どのように行動するかを決定するために観察を取得する必要があります。実際、ステップ関数は現在の観察、報酬関数、およびシミュレーションを続行するかどうかを示す完了フラグを返します:(コードブロック4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+ ノートブックの出力に次のようなものが表示されるはずです:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+ シミュレーションの各ステップで返される観察ベクトルには次の値が含まれます:
+ - カートの位置
+ - カートの速度
+ - ポールの角度
+ - ポールの回転率
+
+1. これらの数値の最小値と最大値を取得します:(コードブロック5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+ また、各シミュレーションステップでの報酬値が常に1であることに気付くかもしれません。これは、私たちの目標ができるだけ長く生存し、ポールを垂直に保つことであるためです。
+
+ ✅ 実際、カートポールシミュレーションは、100回連続の試行で平均報酬が195に達した場合に解決されたと見なされます。
+
+## 状態の離散化
+
+Qラーニングでは、各状態で何をするかを定義するQテーブルを構築する必要があります。これを行うためには、状態が**離散的**である必要があります。つまり、有限の離散値を含む必要があります。したがって、観察を**離散化**し、有限の状態セットにマッピングする必要があります。
+
+これを行う方法はいくつかあります:
+
+- **ビンに分割する**。特定の値の範囲がわかっている場合、この範囲をいくつかの**ビン**に分割し、その値が属するビン番号で置き換えることができます。これはnumpyの[`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html)メソッドを使用して行うことができます。この場合、デジタル化に選択したビンの数に依存するため、状態サイズが正確にわかります。
+
+✅ 線形補間を使用して値をある有限の範囲(例えば、-20から20)に持ってきてから、四捨五入して整数に変換することもできます。これにより、入力値の正確な範囲がわからない場合でも、状態サイズに対する制御が少なくなります。例えば、私たちの場合、4つの値のうち2つは上限/下限がありません。これにより、無限の状態数が発生する可能性があります。
+
+この例では、2番目のアプローチを使用します。後で気づくかもしれませんが、定義されていない上限/下限にもかかわらず、これらの値は特定の有限の範囲外に出ることはめったにありません。そのため、極端な値を持つ状態は非常にまれです。
+
+1. 次の関数は、モデルからの観察を取り、4つの整数値のタプルを生成します:(コードブロック6)
+
+ ```python
+ def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. もう1つのビンを使用した離散化方法も探索しましょう:(コードブロック7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. 次に、短いシミュレーションを実行し、それらの離散環境値を観察しましょう。`discretize` と `discretize_bins` の両方を試してみて、違いがあるかどうかを確認してください。
+
+ ✅ discretize_binsはビン番号を返しますが、これは0ベースです。したがって、入力変数の値が0に近い場合、範囲の中央(10)の数を返します。discretizeでは、出力値の範囲を気にしなかったため、値はシフトされず、0は0に対応します。(コードブロック8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+ ✅ 環境が実行される様子を見たい場合は、env.renderで始まる行のコメントを外してください。そうでない場合は、バックグラウンドで実行することができ、これにより高速化されます。この「見えない」実行をQラーニングプロセス中に使用します。
+
+## Q-テーブルの構造
+
+前回のレッスンでは、状態は0から8までの単純な数字のペアであり、そのためQ-テーブルを8x8x2の形状のnumpyテンソルで表現するのが便利でした。ビンの離散化を使用する場合、状態ベクトルのサイズもわかっているため、同じアプローチを使用し、観察スペースの各パラメータに使用するビンの数に対応する形状の配列(20x20x10x10x2)で状態を表現できます。
+
+しかし、観察スペースの正確な次元がわからない場合もあります。`discretize`関数の場合、元の値の一部が制限されていないため、状態が特定の制限内に収まることを保証できません。そのため、Q-テーブルを辞書で表現するという異なるアプローチを使用します。
+
+1. 辞書キーとして*(state,action)*のペアを使用し、値はQ-テーブルのエントリ値に対応します。(コードブロック9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+ ここでは、特定の状態に対するすべての可能なアクションに対応するQ-テーブル値のリストを返す`qvalues()`関数も定義します。Q-テーブルにエントリが存在しない場合、デフォルトで0を返します。
+
+## Qラーニングを始めましょう
+
+さて、ピーターにバランスを取ることを教える準備ができました!
+
+1. まず、いくつかのハイパーパラメータを設定しましょう:(コードブロック10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+   ここで、`alpha` は **学習率** であり、各ステップで Qテーブルの現在の値をどの程度調整すべきかを定義します。前のレッスンでは 1 から始め、トレーニング中に `alpha` を低い値に減少させました。この例では簡単のために定数のままにしますが、後で `alpha` の値を調整して実験することができます。
+
+   `gamma` は **割引係数** であり、現在の報酬よりも将来の報酬をどの程度優先すべきかを示します。
+
+   `epsilon` は **探索/活用係数** であり、探索と活用のどちらを優先すべきかを決定します。このアルゴリズムでは、`epsilon` の割合のケースで Qテーブルの値に従って次のアクションを選択し、残りのケースではランダムなアクションを実行します。これにより、これまで見たことのない探索空間の領域を探索できます。
+
+   ✅ バランスを取るという観点では、ランダムなアクションの選択(探索)は間違った方向へのランダムなパンチのように作用し、ポールはそれらの「ミス」からバランスを回復する方法を学ばなければなりません。
+
+### アルゴリズムの改善
+
+前のレッスンのアルゴリズムに、2つの改善を加えることができます:
+
+- **平均累積報酬の計算**。多数のシミュレーションにわたって平均を取ります。5000回のイテレーションごとに進捗を出力し、その期間の累積報酬を平均します。195ポイント以上を獲得できれば、要求よりも高い品質で問題が解決されたと見なすことができます。
+
+- **最大平均累積結果の計算**。`Qmax` を計算し、その結果に対応する Qテーブルを保存します。トレーニングを実行すると、平均累積結果が低下し始めることがあることに気付くでしょう。トレーニング中に観察された最良のモデルに対応する Qテーブルの値を保持したいのです。
+
+1. 各シミュレーションでのすべての累積報酬を `rewards` ベクトルに収集します。これは後でプロットするために使用します。(コードブロック 11)
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+
+ Qmax = 0
+ cum_rewards = []
+ rewards = []
+ for epoch in range(100000):
+ obs = env.reset()
+ done = False
+ cum_reward=0
+ # == do the simulation ==
+ while not done:
+ s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - chose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - randomly chose the action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == Periodically print results and calculate average reward ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+ ```
+
+結果から次のことがわかります:
+
+- **目標に近い**。100回以上の連続シミュレーションで195の累積報酬を得るという目標に非常に近づいているか、実際に達成しているかもしれません!小さな数値を得ても、5000回の実行で平均しているため、正式な基準では100回の実行のみが必要です。
+
+- **報酬が低下し始める**。時々、報酬が低下し始めることがあります。これは、Q-テーブルに既に学習した値を破壊し、状況を悪化させる値を新たに追加していることを意味します。
+
+この観察は、トレーニングの進捗をプロットするとより明確に見えます。
+
+## トレーニングの進捗をプロットする
+
+トレーニング中、各イテレーションで累積報酬値を`rewards`ベクトルに収集しました。これをイテレーション番号に対してプロットすると次のようになります:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+このグラフからは何もわかりません。これは、確率的トレーニングプロセスの性質上、トレーニングセッションの長さが大きく異なるためです。このグラフをより理解しやすくするために、一連の実験(例えば100)の**移動平均**を計算できます。これは`np.convolve`を使用して便利に行うことができます:(コードブロック12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## ハイパーパラメータの変更
+
+学習をより安定させるために、トレーニング中にいくつかのハイパーパラメータを調整することが理にかなっています。特に:
+
+- **学習率の減少**。学習率 `alpha` は 1 に近い値から始め、徐々に減少させていくとよいでしょう。時間が経つにつれて Qテーブルには良い確率値が得られるようになるため、新しい値で完全に上書きするのではなく、わずかに調整すべきです。
+
+- **epsilon の増加**。探索を減らして活用を増やすために、`epsilon` をゆっくり増加させるとよいでしょう。`epsilon` は低い値から始めて、ほぼ 1 まで上げていくのが理にかなっているでしょう(このリストの後にスケッチを示します)。
+
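+この2つの調整を行う最小限のスケッチを示します(減衰率・下限・上限の数値はここでの仮定です):
+
+```python
+alpha = 1.0      # start by overwriting Q-values aggressively
+epsilon = 0.3    # start with more random exploration
+
+for epoch in range(100000):
+    # ... run one simulation episode as in code block 11 ...
+    alpha = max(0.1, alpha * 0.99995)    # decay toward a small constant learning rate
+    epsilon = min(0.99, epsilon + 1e-5)  # slowly shift toward almost-pure exploitation
+```
+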
+> **タスク 1**: ハイパーパラメータの値を変更して、より高い累積報酬を達成できるか試してみてください。195以上を達成していますか?
+
+> **タスク 2**: 問題を正式に解決するには、100回連続の実行で平均195の報酬を得る必要があります。トレーニング中にそれを測定し、問題を正式に解決したことを確認してください!
+
+## 結果を実際に見る
+
+トレーニングされたモデルがどのように動作するかを実際に見ることは興味深いでしょう。シミュレーションを実行し、トレーニング中と同じアクション選択戦略に従い、Q-テーブルの確率分布に基づいてサンプリングします:(コードブロック13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+次のようなものが表示されるはずです:
+
+
+
+---
+
+## 🚀チャレンジ
+
+> **タスク 3**: ここでは、Qテーブルの最終コピーを使用しましたが、これが最良のものとは限りません。最もパフォーマンスの良い Qテーブルは `Qbest` 変数に保存してあることを思い出してください!`Qbest` を `Q` にコピーして、最もパフォーマンスの良い Qテーブルで同じ例を試し、違いに気付くか確認してください。
+
+> **タスク 4**: ここでは各ステップで最良のアクションを選択するのではなく、対応する確率分布でサンプリングしていました。常に Qテーブルの値が最も高い最良のアクションを選択する方が理にかなっているでしょうか?これは `np.argmax` 関数を使用して、最も高い Qテーブル値に対応するアクション番号を見つけることで実現できます。この戦略を実装し、バランスが改善されるかどうかを確認してください。
+
+## [講義後クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## 課題
+[山登りカーのトレーニング](assignment.md)
+
+## 結論
+
+私たちは、エージェントに報酬関数を提供し、ゲームの望ましい状態を定義し、探索空間を知的に探索する機会を与えることで、良い結果を達成する方法を学びました。私たちは、離散的および連続的な環境でQラーニングアルゴリズムを成功裏に適用しましたが、アクションは離散的なままでした。
+
+アクション状態も連続している場合や、観察スペースがアタリゲーム画面の画像のように非常に複雑な場合もあります。これらの問題では、良い結果を得るために、ニューラルネットワークなどのより強力な機械学習技術を使用する必要があります。これらのより高度なトピックは、今後のより高度なAIコースの主題となります。
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語で書かれた原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/2-Gym/assignment.md b/translations/ja/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..c28e69cda
--- /dev/null
+++ b/translations/ja/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# マウンテンカーをトレーニングする
+
+[OpenAI Gym](http://gym.openai.com) は、すべての環境が同じAPI、つまり同じメソッド `reset`、`step`、`render` を提供し、**アクションスペース** と **オブザベーションスペース** の同じ抽象化を提供するように設計されています。そのため、最小限のコード変更で同じ強化学習アルゴリズムを異なる環境に適応させることが可能です。
+
+## マウンテンカー環境
+
+[Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) には、谷に閉じ込められた車が含まれています。
+目標は、次のいずれかのアクションを実行して谷から脱出し、旗をキャプチャすることです:
+
+| 値 | 意味 |
+|---|---|
+| 0 | 左に加速 |
+| 1 | 加速しない |
+| 2 | 右に加速 |
+
+この問題の主なトリックは、車のエンジンが一度に山を登るのに十分な強さを持っていないことです。そのため、成功する唯一の方法は、勢いをつけるために前後に運転することです。
+
+オブザベーションスペースは、次の2つの値で構成されています:
+
+| Num | 観察 | 最小 | 最大 |
+|-----|------|-----|-----|
+| 0 | 車の位置 | -1.2| 0.6 |
+| 1 | 車の速度 | -0.07 | 0.07 |
+
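+これら2つの連続値は、本レッスンの `create_bins` / `discretize_bins` のアプローチを再利用して離散化できます。上の表の最小値・最大値に基づく最小限のスケッチです(パラメータごとのビン数 `20` はここでの仮定です):
+
+```python
+import numpy as np
+
+def create_bins(i, num):
+    return np.arange(num + 1) * (i[1] - i[0]) / num + i[0]
+
+# intervals taken from the observation-space table above
+ints = [(-1.2, 0.6), (-0.07, 0.07)]   # car position, car velocity
+nbins = [20, 20]                      # assumed number of bins per parameter
+bins = [create_bins(ints[i], nbins[i]) for i in range(2)]
+
+def discretize(x):
+    return tuple(np.digitize(x[i], bins[i]) for i in range(2))
+```
+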
+マウンテンカーの報酬システムはかなりトリッキーです:
+
+ * 山の頂上で旗(位置 = 0.5)に到達した場合、報酬は0です。
+ * エージェントの位置が0.5未満の場合、報酬は-1です。
+
+車の位置が0.5を超えるか、エピソードの長さが200を超えるとエピソードが終了します。
+
+## 手順
+
+強化学習アルゴリズムを適応させて、マウンテンカーの問題を解決します。既存の [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) のコードから始め、新しい環境を置き換え、状態の離散化関数を変更し、最小限のコード変更で既存のアルゴリズムをトレーニングできるようにします。ハイパーパラメータを調整して結果を最適化します。
+
+> **Note**: アルゴリズムを収束させるためには、ハイパーパラメータの調整が必要になる可能性があります。
+
+## ルーブリック
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| -------- | --------- | -------- | ----------------- |
+| | カートポールの例からQ-ラーニングアルゴリズムが最小限のコード変更でうまく適応され、200ステップ以内に旗をキャプチャする問題を解決できる。 | インターネットから新しいQ-ラーニングアルゴリズムが採用されているが、十分に文書化されている。または既存のアルゴリズムが採用されているが、望ましい結果に達していない。 | 学生はアルゴリズムをうまく採用することができなかったが、解決に向けて重要なステップを踏んでいる(状態の離散化、Q-テーブルのデータ構造などを実装している)。 |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努力していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語で書かれた文書を権威ある情報源とみなすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/ja/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..f3da5d7e3
--- /dev/null
+++ b/translations/ja/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。元の言語で書かれた原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/2-Gym/solution/R/README.md b/translations/ja/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..00f7f952d
--- /dev/null
+++ b/translations/ja/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語の文書が信頼できる情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤った解釈については、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/8-Reinforcement/README.md b/translations/ja/8-Reinforcement/README.md
new file mode 100644
index 000000000..75773e4de
--- /dev/null
+++ b/translations/ja/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# 強化学習の紹介
+
+強化学習(RL)は、教師あり学習や教師なし学習と並んで、基本的な機械学習のパラダイムの一つと見なされています。RLは意思決定に関するもので、適切な決定を下すこと、または少なくともそこから学ぶことを目指しています。
+
+例えば、株式市場のようなシミュレーション環境があるとします。ある規制を導入するとどうなるでしょうか?それがポジティブな影響を与えるのか、ネガティブな影響を与えるのか?もしネガティブな結果が出た場合、それを _負の強化_ として学び、方針を変更する必要があります。ポジティブな結果が出た場合、その _正の強化_ を基にさらに進める必要があります。
+
+
+
+> ピーターとその友達は空腹の狼から逃げなければなりません!画像提供:[Jen Looper](https://twitter.com/jenlooper)
+
+## 地域トピック: ピーターと狼(ロシア)
+
+[ピーターと狼](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)は、ロシアの作曲家[セルゲイ・プロコフィエフ](https://en.wikipedia.org/wiki/Sergei_Prokofiev)によって書かれた音楽童話です。若きパイオニアのピーターが勇敢に家を出て、狼を追いかけるために森の空き地に向かう物語です。このセクションでは、ピーターを助けるための機械学習アルゴリズムを訓練します:
+
+- **周辺を探索**し、最適なナビゲーションマップを作成する
+- **スケートボードの使い方を学び**、バランスを取って素早く移動する
+
+[![ピーターと狼](https://img.youtube.com/vi/Fmi5zHg4QSM/0.jpg)](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 上の画像をクリックして、プロコフィエフの「ピーターと狼」を聴いてください
+
+## 強化学習
+
+前のセクションでは、機械学習の問題の二つの例を見ました:
+
+- **教師あり学習**では、解決したい問題に対するサンプルソリューションを示すデータセットが存在します。[分類](../4-Classification/README.md)や[回帰](../2-Regression/README.md)は教師あり学習のタスクです。
+- **教師なし学習**では、ラベル付きのトレーニングデータが存在しません。教師なし学習の主な例は[クラスタリング](../5-Clustering/README.md)です。
+
+このセクションでは、ラベル付きのトレーニングデータを必要としない新しいタイプの学習問題を紹介します。このような問題にはいくつかのタイプがあります:
+
+- **[半教師あり学習](https://wikipedia.org/wiki/Semi-supervised_learning)**では、多くのラベルなしデータを使ってモデルを事前に訓練することができます。
+- **[強化学習](https://wikipedia.org/wiki/Reinforcement_learning)**では、エージェントがシミュレーション環境で実験を行いながら行動を学びます。
+
+### 例 - コンピュータゲーム
+
+例えば、コンピュータにチェスや[スーパーマリオ](https://wikipedia.org/wiki/Super_Mario)のようなゲームを教えたいとします。コンピュータがゲームをプレイするためには、各ゲーム状態でどの手を打つかを予測する必要があります。これは分類問題のように見えるかもしれませんが、実際にはそうではありません。なぜなら、状態とそれに対応する行動のデータセットが存在しないからです。既存のチェスの試合やスーパーマリオのプレイヤーの記録のようなデータがあったとしても、そのデータが十分な数の状態をカバーしているとは限りません。
+
+既存のゲームデータを探す代わりに、**強化学習**(RL)は*コンピュータに何度もプレイさせる*というアイデアに基づいています。そしてその結果を観察します。したがって、強化学習を適用するためには二つのものが必要です:
+
+- **環境**と**シミュレーター**、これによって何度もゲームをプレイすることができます。このシミュレーターは、すべてのゲームルールや可能な状態と行動を定義します。
+
+- **報酬関数**、これによって各手やゲームの結果がどれだけ良かったかを教えてくれます。
+
+他の機械学習のタイプとRLの主な違いは、RLでは通常、ゲームが終了するまで勝ち負けがわからないことです。したがって、ある手が単独で良いかどうかを判断することはできず、ゲームの終了時にのみ報酬を受け取ります。そして、不確実な条件下でモデルを訓練するアルゴリズムを設計することが目標です。ここでは**Q学習**と呼ばれるRLアルゴリズムについて学びます。
+
+## レッスン
+
+1. [強化学習とQ学習の紹介](1-QLearning/README.md)
+2. [ジムシミュレーション環境の使用](2-Gym/README.md)
+
+## クレジット
+
+「強化学習の紹介」は[Dmitry Soshnikov](http://soshnikov.com)によって♥️を込めて書かれました
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語で書かれた文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/9-Real-World/1-Applications/README.md b/translations/ja/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..08eba9696
--- /dev/null
+++ b/translations/ja/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# 後書き: 実世界での機械学習
+
+
+> スケッチノート: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+このカリキュラムでは、データをトレーニングのために準備し、機械学習モデルを作成するための多くの方法を学びました。クラシックな回帰、クラスタリング、分類、自然言語処理、時系列の一連のモデルを構築しました。おめでとうございます!さて、これらのモデルが実際に何のために使われるのか気になっているかもしれません。実世界でのこれらのモデルの応用例とは何でしょうか?
+
+業界では深層学習を利用したAIが注目されていますが、クラシックな機械学習モデルにも価値のある応用例があります。実際に、今日でもこれらの応用例のいくつかを使っているかもしれません。このレッスンでは、8つの異なる業界や専門分野がこれらのモデルをどのように使ってアプリケーションをより高性能、信頼性、知能的、価値あるものにしているかを探ります。
+
+## [レクチャー前のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 ファイナンス
+
+ファイナンス分野は機械学習に多くの機会を提供します。この分野の多くの問題は、MLを使用してモデル化し解決することができます。
+
+### クレジットカード詐欺検出
+
+以前のコースで[k-meansクラスタリング](../../5-Clustering/2-K-Means/README.md)について学びましたが、これをクレジットカード詐欺に関連する問題解決にどのように使えるのでしょうか?
+
+k-meansクラスタリングは、**異常検出**と呼ばれるクレジットカード詐欺検出技術で役立ちます。データセットに関する観測の逸脱や異常は、クレジットカードが通常の方法で使用されているか、何か異常が起きているかを教えてくれます。以下のリンクされた論文に示されているように、k-meansクラスタリングアルゴリズムを使用してクレジットカードデータを分類し、各取引をどれだけ異常であるかに基づいてクラスターに割り当てることができます。その後、最もリスクの高いクラスターを評価し、詐欺と正当な取引を区別します。
+[参考](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
+
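+イメージをつかむために、クラスタ中心からの距離を異常度スコアとして使う k-means ベースのスコアリングの最小限のスケッチを示します(特徴量・しきい値は仮定のものであり、リンク先の論文の手法そのものではありません):
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# hypothetical transaction features, e.g. amount, hour of day, distance from home
+rng = np.random.default_rng(0)
+X = rng.random((1000, 3))
+
+km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
+
+# distance of each transaction to its own cluster center = anomaly score
+dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
+threshold = np.quantile(dists, 0.99)   # assumed cutoff: review the top 1%
+print(f"{(dists > threshold).sum()} transactions flagged for review")
+```
+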
+### 資産管理
+
+資産管理では、個人や企業がクライアントのために投資を扱います。彼らの仕事は長期的に資産を維持し増やすことなので、良いパフォーマンスを示す投資を選ぶことが重要です。
+
+特定の投資がどのようにパフォーマンスを発揮するかを評価する一つの方法は、統計的回帰です。[線形回帰](../../2-Regression/1-Tools/README.md)は、ファンドのパフォーマンスをベンチマークに対して理解するための価値あるツールです。また、回帰の結果が統計的に有意かどうか、つまりクライアントの投資にどれだけ影響を与えるかを推測することもできます。さらに、複数のリスク要因を考慮に入れて多重回帰を使用して分析を拡張することもできます。具体的なファンドのパフォーマンスを回帰を使って評価する方法については、以下の論文を参照してください。
+[参考](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
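+本文の考え方(ベンチマークに対する回帰と統計的有意性の確認)を示す最小限のスケッチです(データは乱数によるダミーであり、実際のファンド評価手法の再現ではありません):
+
+```python
+import numpy as np
+import statsmodels.api as sm
+
+rng = np.random.default_rng(0)
+benchmark = rng.normal(0.005, 0.04, 120)                    # 120 months of benchmark returns
+fund = 0.001 + 1.1 * benchmark + rng.normal(0, 0.01, 120)   # synthetic fund returns
+
+X = sm.add_constant(benchmark)   # intercept = "alpha", slope = "beta" vs the benchmark
+model = sm.OLS(fund, X).fit()
+print(model.params)              # estimated alpha and beta
+print(model.pvalues)             # statistical significance of each estimate
+```
+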
+## 🎓 教育
+
+教育分野もまた、MLが適用できる非常に興味深い分野です。試験やエッセイのカンニングを検出することや、意図的であれ無意識であれ、修正プロセスでのバイアスを管理することなど、興味深い問題があります。
+
+### 学生の行動予測
+
+オンラインオープンコースプロバイダーである[Coursera](https://coursera.com)には、多くのエンジニアリング決定について議論する素晴らしいテックブログがあります。このケーススタディでは、低いNPS(ネットプロモータースコア)評価とコースの継続率や中途退学との相関関係を探るために回帰線をプロットしました。
+[参考](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### バイアスの軽減
+
+スペルや文法エラーをチェックするライティングアシスタントである[Grammarly](https://grammarly.com)は、製品全体で高度な[自然言語処理システム](../../6-NLP/README.md)を使用しています。彼らは、機械学習におけるジェンダーバイアスにどのように対処したかについて、テックブログで興味深いケーススタディを公開しました。これは、私たちの[公平性に関する導入レッスン](../../1-Introduction/3-fairness/README.md)で学んだ内容です。
+[参考](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 小売
+
+小売業界は、顧客体験の向上から在庫の最適な管理まで、MLの利用によって大いに恩恵を受けることができます。
+
+### 顧客体験のパーソナライズ
+
+家具などの家庭用品を販売する会社であるWayfairでは、顧客が自分の好みやニーズに合った製品を見つけるのを助けることが最も重要です。この記事では、同社のエンジニアがMLとNLPをどのように使用して「顧客に最適な結果を提供するか」について説明しています。特に、彼らのQuery Intent Engineは、エンティティ抽出、分類器トレーニング、アセットおよび意見抽出、感情タグ付けを顧客レビューに対して行うように設計されています。これは、オンライン小売におけるNLPの典型的な使用例です。
+[参考](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### 在庫管理
+
+[StitchFix](https://stitchfix.com)のような革新的で敏捷な企業は、推奨と在庫管理にMLを大いに活用しています。彼らのスタイリングチームは、実際には商品チームと協力しています。「私たちのデータサイエンティストの一人が遺伝的アルゴリズムをいじり、それをアパレルに適用して、今日存在しない成功する可能性のある衣料品を予測しました。それを商品チームに持ち込み、彼らはそれをツールとして使用できるようになりました。」
+[参考](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 医療
+
+医療分野では、研究タスクの最適化や患者の再入院管理、病気の拡散防止などのロジスティックな問題にMLを活用できます。
+
+### 臨床試験の管理
+
+臨床試験における毒性は薬品メーカーにとって大きな懸念事項です。どの程度の毒性が許容されるのでしょうか?この研究では、さまざまな臨床試験方法を分析することで、臨床試験の結果を予測するための新しいアプローチが開発されました。具体的には、ランダムフォレストを使用して、薬剤のグループを区別する[分類器](../../4-Classification/README.md)を作成することができました。
+[参考](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### 病院の再入院管理
+
+病院でのケアは高コストであり、特に患者が再入院する場合はなおさらです。この論文では、[クラスタリング](../../5-Clustering/README.md)アルゴリズムを使用して再入院の可能性を予測する企業について議論しています。これらのクラスターは、「共通の原因を共有する可能性のある再入院のグループを発見する」ためにアナリストを支援します。
+[参考](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### 病気の管理
+
+最近のパンデミックは、機械学習が病気の拡散を防ぐためにどのように役立つかを明らかにしました。この記事では、ARIMA、ロジスティック曲線、線形回帰、SARIMAの使用が認識されます。「この研究は、ウイルスの拡散率を計算し、死亡者、回復者、確認されたケースを予測する試みであり、これによって私たちがより良く準備し、生き延びるのに役立つことを目指しています。」
+[参考](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 エコロジーとグリーンテック
+
+自然とエコロジーは、動物と自然の相互作用が注目される多くの繊細なシステムで構成されています。これらのシステムを正確に測定し、森林火災や動物の個体数減少などの問題が発生した場合に適切に対処することが重要です。
+
+### 森林管理
+
+以前のレッスンで[強化学習](../../8-Reinforcement/README.md)について学びました。これは自然のパターンを予測する際に非常に有用です。特に、森林火災や侵入種の拡散などの生態学的問題を追跡するために使用できます。カナダでは、研究者のグループが衛星画像から森林火災の動態モデルを構築するために強化学習を使用しました。革新的な「空間的拡散プロセス(SSP)」を使用して、森林火災を「景観内の任意のセルでのエージェント」として視覚化しました。「火災が任意の時点で特定の場所から取ることができる行動のセットには、北、南、東、西への拡散や拡散しないことが含まれます。」
+
+このアプローチは、対応するマルコフ決定プロセス(MDP)の動態が即時の火災拡散のための既知の関数であるため、通常のRLセットアップを逆転させます。以下のリンクで、このグループが使用したクラシックなアルゴリズムについて詳しく読むことができます。
+[参考](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### 動物の動きのモーションセンシング
+
+ディープラーニングは動物の動きを視覚的に追跡する革命をもたらしました(ここで自分自身の[ホッキョクグマ追跡器](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott)を作ることができます)が、クラシックなMLもこのタスクにおいて依然として役割を果たしています。
+
+農場の動物の動きを追跡するセンサーやIoTはこの種の視覚処理を利用しますが、より基本的なML技術はデータを前処理するのに役立ちます。例えば、この論文では、羊の姿勢がさまざまな分類器アルゴリズムを使用して監視および分析されました。ページ335のROC曲線を認識するかもしれません。
+[参考](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ エネルギー管理
+
+[時系列予測](../../7-TimeSeries/README.md)のレッスンでは、供給と需要を理解することに基づいて町の収益を生み出すスマート駐車メーターの概念を導入しました。この記事では、クラスタリング、回帰、および時系列予測を組み合わせて、スマートメーターを基にしたアイルランドの将来のエネルギー使用量を予測する方法について詳しく説明しています。
+[参考](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 保険
+
+保険業界は、実行可能な金融およびアクチュアリーモデルを構築および最適化するためにMLを使用するもう一つの分野です。
+
+### 変動性管理
+
+生命保険提供者であるMetLifeは、金融モデルの変動性を分析および軽減する方法について率直に説明しています。この記事では、バイナリおよび序数分類の視覚化が見られます。また、予測の視覚化も発見できます。
+[参考](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 芸術、文化、文学
+
+ジャーナリズムなどの芸術分野には多くの興味深い問題があります。フェイクニュースの検出は大きな問題であり、人々の意見に影響を与え、さらには民主主義を揺るがすことが証明されています。博物館も、アーティファクト間のリンクを見つけることからリソース計画に至るまで、MLの使用から恩恵を受けることができます。
+
+### フェイクニュース検出
+
+今日のメディアでフェイクニュースを検出することは、いたちごっこのようなものです。この記事では、研究者が複数のML技術を組み合わせたシステムをテストし、最適なモデルを展開することを提案しています。「このシステムは、データから特徴を抽出するための自然言語処理に基づいており、これらの特徴がナイーブベイズ、サポートベクターマシン(SVM)、ランダムフォレスト(RF)、確率的勾配降下(SGD)、ロジスティック回帰(LR)などの機械学習分類器のトレーニングに使用されます。」
+[参考](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
+この記事は、異なるMLドメインを組み合わせることで、フェイクニュースの拡散を防ぎ、実際の被害を防ぐために興味深い結果を生み出すことができることを示しています。この場合、COVID治療に関する噂の拡散が暴力を引き起こしたことが動機となりました。
+
+### 博物館ML
+
+博物館は、コレクションのカタログ化やデジタル化、アーティファクト間のリンクを見つけることが技術の進歩により容易になっているAI革命の最前線にいます。[In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.)などのプロジェクトは、バチカンのアーカイブのようなアクセス不可能なコレクションの謎を解き明かすのに役立っています。しかし、博物館のビジネス面もMLモデルから恩恵を受けています。
+
+例えば、シカゴ美術館は、観客が何に興味を持ち、いつ展示を訪れるかを予測するモデルを構築しました。目標は、ユーザーが博物館を訪れるたびに個別化され最適化された体験を提供することです。「2017年度中、モデルは1%の精度で入場者数と入場料を予測しました」とシカゴ美術館の上級副社長であるAndrew Simnick氏は述べています。
+[参考](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 マーケティング
+
+### 顧客セグメンテーション
+
+最も効果的なマーケティング戦略は、さまざまなグループに基づいて顧客を異なる方法でターゲットにします。この記事では、差別化されたマーケティングをサポートするためにクラスタリングアルゴリズムの使用について議論されています。差別化されたマーケティングは、企業がブランド認知を向上させ、より多くの顧客にリーチし、より多くの利益を上げるのに役立ちます。
+[参考](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 チャレンジ
+
+このカリキュラムで学んだ技術のいくつかが恩恵を受ける別のセクターを特定し、そのセクターがどのようにMLを使用しているかを調べてください。
+
+## [講義後のクイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## レビューと自己学習
+
+Wayfairのデータサイエンスチームは、彼らの会社でどのようにMLを使用しているかについての興味深いビデオをいくつか提供しています。ぜひ[チェック](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)してみてください!
+
+## 課題
+
+[MLスカベンジャーハント](assignment.md)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれたオリジナルの文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤解について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/9-Real-World/1-Applications/assignment.md b/translations/ja/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..2e4a28a3a
--- /dev/null
+++ b/translations/ja/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# ML スカベンジャーハント
+
+## 指示
+
+このレッスンでは、古典的な機械学習を使用して解決された多くの実際のユースケースについて学びました。深層学習、新しい技術やツール、ニューラルネットワークの活用がこれらの分野でのツールの生産を加速させるのに役立っていますが、このカリキュラムで学んだ技術を使用した古典的な機械学習も依然として大きな価値を持っています。
+
+この課題では、ハッカソンに参加していると想像してください。カリキュラムで学んだことを使って、このレッスンで議論された分野の一つの問題を古典的な機械学習を使用して解決する提案を行います。アイデアを実装する方法を議論するプレゼンテーションを作成してください。サンプルデータを収集し、コンセプトをサポートするための機械学習モデルを構築できれば、追加ポイントを獲得できます!
+
+## ルーブリック
+
+| 基準 | 優秀 | 適切 | 改善が必要 |
+| ------ | -------------------------------------------------------------- | --------------------------------------------- | ---------------------- |
+| | PowerPoint プレゼンテーションが提示されている - モデルを構築した場合はボーナス | 革新的でない基本的なプレゼンテーションが提示されている | 作業が不完全 |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努めていますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれた元の文書を権威ある情報源と見なしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/9-Real-World/2-Debugging-ML-Models/README.md b/translations/ja/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..2b75785f7
--- /dev/null
+++ b/translations/ja/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,107 @@
+# 後書き: 責任あるAIダッシュボードコンポーネントを用いた機械学習モデルのデバッグ
+
+## [講義前クイズ](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## はじめに
+
+機械学習は私たちの日常生活に大きな影響を与えています。AIは、医療、金融、教育、雇用など、私たち個人や社会に影響を与える最も重要なシステムの一部に浸透しています。例えば、システムやモデルは、医療診断や詐欺検出などの日常的な意思決定タスクに関与しています。その結果、AIの進歩と急速な採用に伴い、社会の期待も進化し、規制も増加しています。AIシステムが期待を裏切る場面が続出し、新たな課題が浮き彫りになり、政府もAIソリューションを規制し始めています。したがって、これらのモデルが公平で信頼性があり、包括的で透明性があり、責任ある結果を提供することを確認することが重要です。
+
+このカリキュラムでは、モデルに責任あるAIの問題があるかどうかを評価するために使用できる実践的なツールを見ていきます。従来の機械学習デバッグ技術は、集計された精度や平均誤差損失などの定量的な計算に基づく傾向があります。使用しているデータが人種、性別、政治的見解、宗教などの特定の人口統計を欠いている場合、またはこれらの人口統計を過剰に代表している場合に何が起こるかを想像してみてください。モデルの出力が特定の人口統計を優遇するように解釈される場合はどうでしょうか。これにより、モデルから公平性、包括性、信頼性の問題が発生する可能性があります。さらに、機械学習モデルはブラックボックスと見なされるため、モデルの予測を駆動するものを理解し説明することが難しいです。これらはすべて、データサイエンティストやAI開発者が、モデルの公平性や信頼性をデバッグおよび評価するための適切なツールを持っていない場合に直面する課題です。
+
+このレッスンでは、次の方法でモデルをデバッグする方法を学びます:
+
+- **エラー分析**: データ分布のどこでモデルが高いエラー率を持っているかを特定します。
+- **モデル概要**: 異なるデータコホート間で比較分析を行い、モデルのパフォーマンス指標の格差を発見します。
+- **データ分析**: データの過剰または過少表現がモデルの偏りを引き起こす可能性のある場所を調査します。
+- **特徴重要度**: モデルの予測をグローバルレベルまたはローカルレベルで駆動する特徴を理解します。
+
+## 前提条件
+
+前提条件として、[開発者向けの責任あるAIツール](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)をレビューしてください。
+
+> 
+
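+前提条件で紹介したツールボックス(`responsibleai` / `raiwidgets` パッケージ)を使う場合の最小限のスケッチです(引数名などの細部はバージョンによって異なる可能性があるため、正式な手順は公式のサンプルノートブックを参照してください):
+
+```python
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# model, train_df, test_df and the "target" column are assumed to already exist
+rai_insights = RAIInsights(model, train_df, test_df,
+                           target_column="target", task_type="classification")
+
+# register the components discussed in this lesson, then compute them
+rai_insights.error_analysis.add()
+rai_insights.explainer.add()   # feature importance (global / local explanations)
+rai_insights.compute()
+
+ResponsibleAIDashboard(rai_insights)   # opens the interactive dashboard
+```
+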
+## エラー分析
+
+従来のモデルパフォーマンス指標は、主に正しい予測と間違った予測に基づいた計算です。例えば、モデルが89%の精度を持ち、エラー損失が0.001であると判断される場合、それは良いパフォーマンスと見なされるかもしれません。エラーは基礎となるデータセットに均等に分布していないことがよくあります。89%のモデル精度スコアを得ることができても、モデルが42%の確率で失敗するデータの異なる領域があることがわかるかもしれません。これらの失敗パターンの結果として、特定のデータグループで公平性や信頼性の問題が発生する可能性があります。モデルがうまく機能しているかどうかを理解することが重要です。モデルに不正確なデータ領域が多い場合、それは重要なデータ人口統計である可能性があります。
+
+
+
+RAIダッシュボードのエラー分析コンポーネントは、ツリービジュアライゼーションを使用して、さまざまなコホートにわたるモデルの失敗がどのように分布しているかを示します。これは、データセットに高いエラー率がある特徴や領域を特定するのに役立ちます。モデルの不正確さがどこから来ているかを確認することで、根本原因を調査し始めることができます。データコホートを作成して分析を行うこともできます。これらのデータコホートは、あるコホートでモデルのパフォーマンスが良好である理由を特定し、別のコホートでエラーが発生する理由を特定するのに役立ちます。
+
+
+
+ツリーマップの視覚的なインジケーターは、問題領域をより迅速に特定するのに役立ちます。例えば、ツリーノードの赤の濃い色合いが濃いほど、エラー率が高いことを示しています。
+
+ヒートマップは、エラー率を調査するための別の視覚化機能であり、1つまたは2つの特徴を使用してデータセット全体またはコホート全体でモデルエラーの原因を見つけるのに役立ちます。
+
+
+
+エラー分析を使用する場面:
+
+* データセット全体および複数の入力および特徴次元にわたるモデルの失敗の分布を深く理解する必要がある場合。
+* 集計パフォーマンス指標を分解して、ターゲットとする緩和ステップを通知するために誤ったコホートを自動的に発見する場合。
+
+## モデル概要
+
+機械学習モデルのパフォーマンスを評価するには、その動作を全体的に理解する必要があります。これは、エラー率、精度、リコール、精度、MAE(平均絶対誤差)など、複数の指標を確認して、パフォーマンス指標の間に格差がないかを見つけることで達成できます。あるパフォーマンス指標は優れているように見えるかもしれませんが、別の指標では不正確さが露呈することがあります。さらに、データセット全体またはコホート間で指標を比較して格差を見つけることで、モデルがどこでうまく機能しているかを明らかにするのに役立ちます。特に、敏感な特徴(例:患者の人種、性別、年齢)と非敏感な特徴の間でモデルのパフォーマンスを見ることが重要です。例えば、敏感な特徴を持つコホートでモデルがより多くのエラーを発生させていることがわかると、モデルに潜在的な不公平があることが明らかになるかもしれません。
+
+RAIダッシュボードのモデル概要コンポーネントは、コホート内のデータ表現のパフォーマンス指標を分析するだけでなく、異なるコホート間でモデルの動作を比較する機能をユーザーに提供します。
+
+
+
+コンポーネントの特徴ベースの分析機能を使用すると、特定の特徴内のデータサブグループを絞り込んで、詳細レベルで異常を特定することができます。例えば、ダッシュボードにはユーザーが選択した特徴(例:「time_in_hospital < 3」または「time_in_hospital >= 7」)に基づいてコホートを自動的に生成するための組み込みのインテリジェンスがあります。これにより、ユーザーは大きなデータグループから特定の特徴を分離して、それがモデルの誤った結果を引き起こす主要な要因であるかどうかを確認できます。
+
+
+
+モデル概要コンポーネントは、次の2つのクラスの格差指標をサポートします:
+
+**モデルパフォーマンスの格差**: これらの指標セットは、データのサブグループ間で選択されたパフォーマンス指標の値の格差(差異)を計算します。いくつかの例を以下に示します:
+
+* 精度率の格差
+* エラー率の格差
+* 精度の格差
+* リコールの格差
+* 平均絶対誤差(MAE)の格差
+
+**選択率の格差**: この指標は、サブグループ間の選択率(好ましい予測)の違いを含みます。例えば、ローン承認率の格差です。選択率とは、各クラスで1と分類されたデータポイントの割合(バイナリ分類)または予測値の分布(回帰)を意味します。
+
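+ダッシュボードの外で同様の格差を簡単に確かめる場合の最小限のスケッチです(列名 `cohort`・`y_true`・`y_pred` はここでの仮定です):
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({
+    "cohort": ["A", "A", "B", "B", "B"],   # e.g. a sensitive feature
+    "y_true": [1, 0, 1, 1, 0],
+    "y_pred": [1, 0, 0, 1, 1],
+})
+
+acc = (df["y_true"] == df["y_pred"]).groupby(df["cohort"]).mean()
+print(acc)                    # per-cohort accuracy
+print(acc.max() - acc.min())  # accuracy disparity between cohorts
+```
+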
+## データ分析
+
+> 「データを十分に拷問すれば、何でも白状する」 - ロナルド・コース
+
+この言葉は極端に聞こえるかもしれませんが、データはどんな結論をも支持するように操作できることは事実です。このような操作は、時には意図せずに行われることもあります。私たち人間は皆、バイアスを持っており、データにバイアスを導入していることを意識的に知ることは難しいことがよくあります。AIと機械学習の公平性を保証することは依然として複雑な課題です。
+
+データは従来のモデルパフォーマンス指標にとって大きな盲点です。高い精度スコアを持っていても、データセットに潜在するバイアスを反映しているとは限りません。例えば、ある会社の幹部ポジションに女性が27%、男性が73%いるデータセットがある場合、このデータを基にトレーニングされた求人広告AIモデルは、主に男性向けのシニアポジションの求人をターゲットにする可能性があります。このようなデータの不均衡は、モデルの予測が一方の性別を優遇することにつながります。これにより、AIモデルに性別バイアスがあることが明らかになります。
+
+RAIダッシュボードのデータ分析コンポーネントは、データセットに過剰または過少表現がある場所を特定するのに役立ちます。データの不均衡や特定のデータグループの欠如から生じるエラーや公平性の問題の根本原因を診断するのに役立ちます。これにより、予測結果や実際の結果、エラーグループ、特定の特徴に基づいてデータセットを視覚化する機能を提供します。時には、過少表現されたデータグループを発見することで、モデルがうまく学習していないことが明らかになり、高い不正確さが生じることがあります。データバイアスを持つモデルは、公平性の問題だけでなく、包括性や信頼性の欠如を示しています。
+
+
+
+データ分析を使用する場面:
+
+* 異なるフィルターを選択してデータセット統計を探索し、データを異なる次元(コホートとしても知られる)に分割する必要がある場合。
+* 異なるコホートや特徴グループにわたるデータセットの分布を理解する必要がある場合。
+* データセットの分布に起因する公平性、エラー分析、因果関係(他のダッシュボードコンポーネントから導き出される)に関連する発見を判断する必要がある場合。
+* 表現の問題から生じるエラーを緩和するために、どの領域でより多くのデータを収集するかを決定する必要がある場合。
+
+## モデルの解釈可能性
+
+機械学習モデルはブラックボックスと見なされることが多いです。モデルの予測を駆動する主要なデータ特徴を理解することは難しいです。モデルが特定の予測を行う理由に透明性を提供することが重要です。例えば、AIシステムが糖尿病患者が30日以内に再入院するリスクがあると予測する場合、その予測の根拠となるデータを提供する必要があります。支持データの指標を持つことで、臨床医や病院が十分な情報に基づいた意思決定を行うのに役立ちます。さらに、個々の患者に対してモデルが予測を行った理由を説明することで、医療規制に対する説明責任が果たされます。人々の生活に影響を与える方法で機械学習モデルを使用する場合、モデルの行動に影響を与える要因を理解し説明することが重要です。モデルの解釈可能性と説明可能性は、次のシナリオで質問に答えるのに役立ちます:
+
+* モデルのデバッグ: モデルがこのミスを犯した理由は何ですか?モデルを改善する方法は何ですか?
+* 人間とAIの協力: モデルの決定を理解し信頼する方法は何ですか?
+* 規制遵守: モデルが法的要件を満たしているかどうか?
+
+RAIダッシュボードの特徴重要度コンポーネントは、モデルがどのように予測を行うかをデバッグし包括的に理解するのに役立ちます。また、機械学習の専門家や意思決定者がモデルの行動に影響を与える特徴を説明し、規制遵守のための証拠を示すのにも役立ちます。次に、ユーザーはグローバルおよびローカルの説明を探索して、モデルの予測を駆動する特徴を検証できます。グローバル説明は、モデルの全体的な予測に影響を与えた上位の特徴をリストします。ローカル説明は、個々のケースに対してモデルが予測を行った特徴を表示します。ローカル説明を評価する能力は、特定のケースをデバッグまたは監査して、モデルが正確または不正確な予測を行った理由をよりよく理解し解釈するのにも役立ちます。
+
+
+
+* グローバル説明: 例えば、糖尿病の病院再入院モデルの全体的な行動に影響を与える特徴は何ですか?
+* ローカル説明: 例えば、60歳以上の糖尿病患者が30日以内に再入院するかどうかを予測した理由は何ですか?
+
+異なるコホートにわたるモデルのパフォーマンスを調べるデバッグプロセスでは、特徴重要度がコホート全体でどの程度の影響を持っているかを示します。モデルの誤った予測を駆動する特徴の影響レベルを比較することで異常を明らかにするのに役立ちます。特徴重要度コンポーネントは、特徴の値がモデルの結果にどのようにプラスまたはマイナスの影響を与えたかを示すことができます。例えば、モデルが不正確な予測を行った場合、コンポーネントは予測を駆動した特徴や特徴値を特定するために詳細に掘り下げる能力を提供します。このレベルの詳細は、デバッグだけでなく、監査状況での透明性と説明責任を提供するのにも役立ちます。最後に、特徴重要度コンポーネントは公平性の問題を特定するのにも役立ちます。例えば、民族性や性別などの敏感な特徴がモデルの予測を駆動する際に大きな影響を与える場合、それはモデルに人種や性別のバイアスがある兆候かもしれません。
+
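+グローバル/ローカルな説明という考え方を手元で試す一例として、`shap` ライブラリを使った最小限のスケッチを示します(本レッスンのRAIダッシュボードの実装そのものではなく、同じ概念を示す代替例です):
+
+```python
+import shap
+from sklearn.datasets import load_breast_cancer
+from sklearn.ensemble import RandomForestClassifier
+
+X, y = load_breast_cancer(return_X_y=True, as_frame=True)
+model = RandomForestClassifier(random_state=0).fit(X, y)
+
+explainer = shap.Explainer(model.predict, X)   # model-agnostic explainer
+shap_values = explainer(X[:100])
+
+shap.plots.bar(shap_values)           # global: top features overall
+shap.plots.waterfall(shap_values[0])  # local: why the model predicted this one case
+```
+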
+**免責事項**:
+
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努めておりますが、自動翻訳にはエラーや不正確さが含まれる可能性があることをご了承ください。元の言語で書かれた原文を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用によって生じた誤解や誤訳について、当社は一切責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/ja/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..ac9082204
--- /dev/null
+++ b/translations/ja/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# 責任あるAI (RAI) ダッシュボードを探る
+
+## 指示
+
+このレッスンでは、RAIダッシュボードについて学びました。これは、データサイエンティストがエラー分析、データ探索、公平性評価、モデルの解釈可能性、反事実/what-if評価、および因果分析をAIシステムで行うのを助けるために「オープンソース」ツールに基づいて構築されたコンポーネントのスイートです。この課題では、RAIダッシュボードのサンプル[ノートブック](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks)のいくつかを探り、その発見を論文やプレゼンテーションで報告してください。
+
+## ルーブリック
+
+| 基準 | 模範的 | 適切 | 改善が必要 |
+| -------- | --------- | -------- | ----------------- |
+| | RAIダッシュボードのコンポーネント、実行したノートブック、およびそれから得られた結論を論じる論文またはパワーポイントプレゼンテーションが提示される | 結論なしの論文が提示される | 論文が提示されない |
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の母国語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/9-Real-World/README.md b/translations/ja/9-Real-World/README.md
new file mode 100644
index 000000000..15690cf35
--- /dev/null
+++ b/translations/ja/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# 後書き: 古典的な機械学習の実世界アプリケーション
+
+このカリキュラムのセクションでは、古典的な機械学習の実世界アプリケーションについて紹介します。インターネット上でホワイトペーパーや記事を探し、ニューラルネットワーク、ディープラーニング、AIをできるだけ避けた戦略を使用したアプリケーションを見つけました。機械学習がビジネスシステム、生態学的アプリケーション、金融、芸術と文化などでどのように使用されているかを学びましょう。
+
+
+
+> 写真提供: Unsplash の Alexis Fauvet
+
+## レッスン
+
+1. [機械学習の実世界アプリケーション](1-Applications/README.md)
+2. [Responsible AIダッシュボードコンポーネントを使用した機械学習モデルのデバッグ](2-Debugging-ML-Models/README.md)
+
+## クレジット
+
+「機械学習の実世界アプリケーション」は、[Jen Looper](https://twitter.com/jenlooper) と [Ornella Altunyan](https://twitter.com/ornelladotcom) を含むチームによって書かれました。
+
+「Responsible AIダッシュボードコンポーネントを使用した機械学習モデルのデバッグ」は [Ruth Yakubu](https://twitter.com/ruthieyakubu) によって書かれました。
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。原文はその言語で作成されたものを権威ある情報源とみなしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じた誤解や誤訳について、当社は一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/CODE_OF_CONDUCT.md b/translations/ja/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..3f2f16318
--- /dev/null
+++ b/translations/ja/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# マイクロソフト オープンソース 行動規範
+
+このプロジェクトは [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) を採用しています。
+
+リソース:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- 質問や懸念事項がある場合は [opencode@microsoft.com](mailto:opencode@microsoft.com) にお問い合わせください
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文の言語で書かれたオリジナル文書を権威ある情報源と見なすべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/CONTRIBUTING.md b/translations/ja/CONTRIBUTING.md
new file mode 100644
index 000000000..7d765853a
--- /dev/null
+++ b/translations/ja/CONTRIBUTING.md
@@ -0,0 +1,13 @@
+# 貢献について
+
+このプロジェクトでは、貢献や提案を歓迎します。ほとんどの貢献では、寄稿者ライセンス契約(CLA)に同意し、あなたの貢献を使用する権利を当社に付与する権利があり、実際に付与することを宣言する必要があります。詳細は https://cla.microsoft.com をご覧ください。
+
+> 重要: このリポジトリのテキストを翻訳する際は、機械翻訳を使用しないようにしてください。翻訳はコミュニティによって確認されますので、堪能な言語のみで翻訳を行ってください。
+
+プルリクエストを送信すると、CLAボットが自動的にCLAの提供が必要かどうかを判断し、適切にPRを装飾します(例:ラベル、コメント)。ボットの指示に従ってください。CLAを使用するすべてのリポジトリでこれを一度行うだけで済みます。
+
+このプロジェクトは [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) を採用しています。
+詳細については [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) を参照するか、追加の質問やコメントがある場合は [opencode@microsoft.com](mailto:opencode@microsoft.com) にお問い合わせください。
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。原文の言語で記載された文書が正式な情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/README.md b/translations/ja/README.md
new file mode 100644
index 000000000..24e6836ac
--- /dev/null
+++ b/translations/ja/README.md
@@ -0,0 +1,156 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# 初心者のための機械学習 - カリキュラム
+
+> 🌍 世界中を旅しながら、世界の文化を通じて機械学習を学びましょう 🌍
+
+MicrosoftのCloud Advocatesは、**機械学習**に関する12週間、26レッスンのカリキュラムを提供します。このカリキュラムでは、主にScikit-learnライブラリを使用して、**クラシックな機械学習**と呼ばれることもある技術を学びます。ディープラーニングは含まれていませんが、それについては[AI for Beginners' カリキュラム](https://aka.ms/ai4beginners)で学ぶことができます。このレッスンを['データサイエンス初心者向けカリキュラム'](https://aka.ms/ds4beginners)と組み合わせることもお勧めします。
+
+世界各地のデータを使って、クラシックな技術を適用しながら一緒に学びましょう。各レッスンには、事前・事後のクイズ、レッスンを完了するための書面による指示、解答、課題などが含まれています。プロジェクトベースの教育方法により、学びながら実践することで新しいスキルが定着しやすくなります。
+
+**✍️ 著者の皆さんに心からの感謝を** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu, Amy Boyd
+
+**🎨 イラストレーターの皆さんにも感謝を** Tomomi Imura, Dasani Madipalli, Jen Looper
+
+**🙏 特別な感謝 🙏** Microsoft Student Ambassador の著者、レビューアー、コンテンツ貢献者の皆さん、特に Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, Snigdha Agarwal に感謝します
+
+**🤩 Rレッスンに関して、Microsoft Student Ambassadorsの Eric Wanjau, Jasleen Sondhi, Vidushi Gupta に特別な感謝を!**
+
+# 始めに
+
+以下の手順に従ってください:
+1. **リポジトリをフォークする**: このページの右上にある「Fork」ボタンをクリックします。
+2. **リポジトリをクローンする**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [このコースの追加リソースはMicrosoft Learnコレクションにあります](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[学生の皆さん](https://aka.ms/student-page)**、このカリキュラムを使用するには、リポジトリ全体を自分のGitHubアカウントにフォークし、自分でまたはグループで演習を完了してください:
+
+- レクチャー前のクイズから始めます。
+- レクチャーを読み、各知識チェックで一時停止し、反省します。
+- 解答コードを実行するのではなく、レッスンを理解しながらプロジェクトを作成してみてください。ただし、そのコードは各プロジェクト指向のレッスンの`/solution`フォルダーにあります。
+- レクチャー後のクイズを受けます。
+- チャレンジを完了します。
+- 課題を完了します。
+- レッスングループを完了した後、[ディスカッションボード](https://github.com/microsoft/ML-For-Beginners/discussions)にアクセスし、適切なPATルーブリックを記入して「声に出して学ぶ」ことをお勧めします。PATは進捗評価ツールで、学習を深めるために記入するルーブリックです。他のPATに反応することもでき、一緒に学びましょう。
+
+> さらなる学習のために、これらの[Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott)モジュールと学習パスをお勧めします。
+
+**教師の皆さん**、このカリキュラムの使用方法について[いくつかの提案](for-teachers.md)を含めています。
+
+---
+
+## ビデオウォークスルー
+
+いくつかのレッスンは短い形式のビデオとして提供されています。これらはレッスン内にインラインで見つけることができます。または、Microsoft Developer YouTubeチャンネルの[ML for Beginnersプレイリスト](https://aka.ms/ml-beginners-videos)で画像をクリックして視聴できます。
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## チーム紹介
+
+[![Promo video](https://img.youtube.com/vi/Tj1XWrDSYJU/0.jpg)](https://youtu.be/Tj1XWrDSYJU "Promo video")
+
+**Gif by** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 上の画像をクリックして、プロジェクトと作成者についてのビデオをご覧ください!
+
+---
+
+## 教育方法
+
+このカリキュラムを作成する際に、2つの教育的な原則を選びました:それが実践的な**プロジェクトベース**であることと、**頻繁なクイズ**を含むことです。さらに、このカリキュラムには一貫性を持たせるための共通の**テーマ**があります。
+
+プロジェクトに合わせてコンテンツを整えることで、学習者にとってより魅力的なプロセスとなり、概念の定着が強化されます。また、クラス前の低リスクのクイズは、学習者の意図をトピックに向けさせ、クラス後の2回目のクイズはさらなる定着を確保します。このカリキュラムは柔軟で楽しいものとして設計されており、全体または一部を受講することができます。プロジェクトは小さなものから始まり、12週間のサイクルの終わりまでに徐々に複雑になります。また、このカリキュラムには、MLの実世界での応用に関する後書きが含まれており、追加のクレジットやディスカッションの基礎として使用できます。
+
+> [行動規範](CODE_OF_CONDUCT.md)、[貢献](CONTRIBUTING.md)、および[翻訳](TRANSLATIONS.md)ガイドラインをご覧ください。建設的なフィードバックを歓迎します!
+
+## 各レッスンには以下が含まれます
+
+- オプションのスケッチノート
+- オプションの補足ビデオ
+- ビデオウォークスルー(いくつかのレッスンのみ)
+- レクチャー前のウォームアップクイズ
+- 書面によるレッスン
+- プロジェクトベースのレッスンの場合、プロジェクトを構築するためのステップバイステップガイド
+- 知識チェック
+- チャレンジ
+- 補足読書
+- 課題
+- レクチャー後のクイズ
+
+> **言語に関する注意**: これらのレッスンは主にPythonで書かれていますが、多くはRでも利用可能です。Rレッスンを完了するには、`/solution`フォルダーに移動し、Rレッスンを探してください。それらには.rmd拡張子が含まれており、`code chunks`(Rまたは他の言語の)と`YAML header`(PDFなどの出力をフォーマットする方法を指示する)を`Markdown document`に埋め込んだものと簡単に定義できます。したがって、データサイエンスのための優れた著作フレームワークとして機能し、コード、その出力、およびMarkdownで書き留めることができる考えを組み合わせることができます。さらに、R MarkdownドキュメントはPDF、HTML、Wordなどの出力形式にレンダリングできます。
+
+> **クイズに関する注意**: すべてのクイズは[Quiz Appフォルダー](../../quiz-app)に含まれており、合計52のクイズがあり、それぞれ3つの質問が含まれています。それらはレッスン内からリンクされていますが、クイズアプリはローカルで実行できます。ローカルでホストするかAzureにデプロイする手順は`quiz-app`フォルダーに従ってください。
+
+| レッスン番号 | トピック | レッスングループ | 学習目標 | リンク先レッスン | 著者 |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | 機械学習の紹介 | [Introduction](1-Introduction/README.md) | 機械学習の基本的な概念を学ぶ | [レッスン](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | 機械学習の歴史 | [Introduction](1-Introduction/README.md) | この分野の歴史を学ぶ | [レッスン](1-Introduction/2-history-of-ML/README.md) | Jen and Amy |
+| 03 | 公平性と機械学習 | [Introduction](1-Introduction/README.md) | 学生がMLモデルを構築および適用する際に考慮すべき重要な哲学的問題とは? | [レッスン](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | 機械学習のテクニック | [Introduction](1-Introduction/README.md) | ML研究者がMLモデルを構築するために使用するテクニックとは? | [レッスン](1-Introduction/4-techniques-of-ML/README.md) | Chris and Jen |
+| 05 | 回帰の入門 | [Regression](2-Regression/README.md) | 回帰モデルのためのPythonとScikit-learnの入門 | [Python](2-Regression/1-Tools/README.md) | Jen |
+| 16 | 自然言語処理の紹介 ☕️ | [Natural language processing](6-NLP/README.md) | シンプルなボットを作成してNLPの基本を学ぶ | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | 一般的なNLPタスク ☕️ | [Natural language processing](6-NLP/README.md) | 言語構造を扱う際に必要な一般的なタスクを理解してNLPの知識を深める | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | 翻訳と感情分析 ♥️ | [Natural language processing](6-NLP/README.md) | ジェーン・オースティンを使った翻訳と感情分析 | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | ヨーロッパのロマンチックなホテル ♥️ | [Natural language processing](6-NLP/README.md) | ホテルレビューを使った感情分析 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | ヨーロッパのロマンチックなホテル ♥️ | [Natural language processing](6-NLP/README.md) | ホテルレビューを使った感情分析 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | 時系列予測の紹介 | [Time series](7-TimeSeries/README.md) | 時系列予測の紹介 | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ 世界の電力使用 ⚡️ - ARIMAによる時系列予測 | [Time series](7-TimeSeries/README.md) | ARIMAによる時系列予測 | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ 世界の電力使用 ⚡️ - SVRによる時系列予測 | [Time series](7-TimeSeries/README.md) | サポートベクターレグレッサーによる時系列予測 | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | 強化学習の紹介 | [Reinforcement learning](8-Reinforcement/README.md) | Q-Learningによる強化学習の紹介 | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | ピーターがオオカミを避けるのを助けよう! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | 強化学習ジム | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | 現実世界のMLシナリオとアプリケーション | [ML in the Wild](9-Real-World/README.md) | 古典的なMLの興味深く示唆に富む現実世界のアプリケーション | [レッスン](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | RAIダッシュボードを使用したMLのモデルデバッグ | [ML in the Wild](9-Real-World/README.md) | Responsible AIダッシュボードコンポーネントを使用した機械学習のモデルデバッグ | [レッスン](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [このコースの追加リソースはMicrosoft Learnのコレクションで見つけることができます](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## オフラインアクセス
+
+[Docsify](https://docsify.js.org/#/)を使用して、このドキュメントをオフラインで実行できます。このリポジトリをフォークし、ローカルマシンに[Docsifyをインストール](https://docsify.js.org/#/quickstart)した上で、リポジトリのルートフォルダで`docsify serve`を実行します。ウェブサイトはlocalhostのポート3000(`localhost:3000`)で提供されます。
+
+## PDF
+
+リンク付きのカリキュラム全体のPDFは、[こちら](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf)にあります。
+
+## ご協力のお願い
+
+翻訳に貢献したいですか?私たちの[翻訳ガイドライン](TRANSLATIONS.md)を読んで、作業負荷を管理するためのテンプレート化されたイシューを[こちら](https://github.com/microsoft/ML-For-Beginners/issues)に追加してください。
+
+## その他のカリキュラム
+
+私たちのチームは他のカリキュラムも制作しています!以下をご覧ください:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101?WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**免責事項**:
+
+この文書は機械翻訳サービスを使用して翻訳されています。正確性を期しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。原文が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用により生じた誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/SECURITY.md b/translations/ja/SECURITY.md
new file mode 100644
index 000000000..3d1d180af
--- /dev/null
+++ b/translations/ja/SECURITY.md
@@ -0,0 +1,40 @@
+## セキュリティ
+
+Microsoftは、ソフトウェア製品およびサービスのセキュリティを非常に重視しています。これには、[Microsoft](https://github.com/Microsoft)、[Azure](https://github.com/Azure)、[DotNet](https://github.com/dotnet)、[AspNet](https://github.com/aspnet)、[Xamarin](https://github.com/xamarin)、および[当社のGitHub組織](https://opensource.microsoft.com/)を通じて管理されるすべてのソースコードリポジトリが含まれます。
+
+もし、[Microsoftのセキュリティ脆弱性の定義](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott)に該当するセキュリティ脆弱性をMicrosoft所有のリポジトリで発見したと思われる場合は、以下の手順に従って報告してください。
+
+## セキュリティ問題の報告
+
+**セキュリティ脆弱性を公開のGitHubイシューで報告しないでください。**
+
+代わりに、Microsoft Security Response Center (MSRC)に報告してください:[https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report)。
+
+ログインせずに提出したい場合は、[secure@microsoft.com](mailto:secure@microsoft.com)にメールを送信してください。可能であれば、メッセージをPGPキーで暗号化してください。PGPキーは[Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc)からダウンロードできます。
+
+24時間以内に返信を受け取るはずです。万が一返信がない場合は、最初のメッセージが届いているか確認するためにメールでフォローアップしてください。追加情報は[microsoft.com/msrc](https://www.microsoft.com/msrc)で確認できます。
+
+可能な限り、以下の情報を含めてください。これにより、問題の性質と範囲をよりよく理解できます:
+
+ * 問題の種類(例:バッファオーバーフロー、SQLインジェクション、クロスサイトスクリプティングなど)
+ * 問題が発生するソースファイルのフルパス
+ * 影響を受けるソースコードの場所(タグ/ブランチ/コミットまたは直接のURL)
+ * 問題を再現するために必要な特別な構成
+ * 問題を再現するためのステップバイステップの手順
+ * 概念実証やエクスプロイトコード(可能な場合)
+ * 問題の影響、攻撃者がどのように問題を悪用するか
+
+この情報は、レポートの優先順位を迅速に決定するのに役立ちます。
+
+バグ報奨金のために報告する場合、より詳細なレポートはより高い報奨金に貢献することがあります。現在のプログラムの詳細については、[Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty)ページをご覧ください。
+
+## 優先言語
+
+すべてのコミュニケーションは英語で行うことを推奨します。
+
+## ポリシー
+
+Microsoftは[Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd)の原則に従います。
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すために努めていますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。元の言語で書かれた文書が信頼できる情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/SUPPORT.md b/translations/ja/SUPPORT.md
new file mode 100644
index 000000000..02b91291e
--- /dev/null
+++ b/translations/ja/SUPPORT.md
@@ -0,0 +1,13 @@
+# サポート
+## 問題の報告方法とヘルプの取得
+
+このプロジェクトでは、GitHub Issuesを使用してバグや機能リクエストを追跡しています。重複を避けるため、新しいIssueを作成する前に既存のIssueを検索してください。見つからなければ、バグや機能リクエストを新しいIssueとして報告してください。
+
+このプロジェクトの使用に関するヘルプや質問については、Issueを報告してください。
+
+## Microsoftサポートポリシー
+
+このリポジトリのサポートは、上記のリソースに限定されています。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性がありますのでご注意ください。原文の言語での文書が権威ある情報源とみなされるべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/TRANSLATIONS.md b/translations/ja/TRANSLATIONS.md
new file mode 100644
index 000000000..d502f55fb
--- /dev/null
+++ b/translations/ja/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# レッスンの翻訳に貢献する
+
+このカリキュラムのレッスンの翻訳を歓迎します!
+## ガイドライン
+
+各レッスンフォルダーおよびレッスン紹介フォルダーには、翻訳されたMarkdownファイルが含まれています。
+
+> 注意: コードサンプルファイル内のコードは翻訳しないでください。翻訳するのはREADME、課題、およびクイズのみです。ご協力ありがとうございます!
+
+翻訳されたファイルは次の命名規則に従う必要があります:
+
+**README._[language]_.md**
+
+ここで_[language]_はISO 639-1標準に従った2文字の言語略語です(例:`README.es.md`はスペイン語、`README.nl.md`はオランダ語)。
+
+**assignment._[language]_.md**
+
+READMEと同様に、課題も翻訳してください。
+
+> 重要: このリポジトリのテキストを翻訳する際には、機械翻訳を使用しないようにしてください。翻訳はコミュニティによって検証されるため、熟練している言語の翻訳にのみボランティアで参加してください。
+
+**クイズ**
+
+1. クイズアプリに翻訳を追加するには、適切な命名規則(en.json, fr.json)でここにファイルを追加してください:https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations **ただし、「true」や「false」という単語はローカライズしないでください。ありがとう!**
+
+2. クイズアプリのApp.vueファイルのドロップダウンに言語コードを追加してください。
+
+3. クイズアプリの[translations index.jsファイル](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js)を編集して、言語を追加してください。
+
+4. 最後に、翻訳されたREADME.mdファイル内のすべてのクイズリンクを直接翻訳されたクイズにポイントするように編集してください:https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 を https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id に変更します。
+
+**ありがとう**
+
+あなたの努力に本当に感謝します!
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すために努力しておりますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語で書かれた原文が権威ある情報源と見なされるべきです。重要な情報については、プロの人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤認については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/docs/_sidebar.md b/translations/ja/docs/_sidebar.md
new file mode 100644
index 000000000..c45081acc
--- /dev/null
+++ b/translations/ja/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- はじめに
+ - [機械学習の紹介](../1-Introduction/1-intro-to-ML/README.md)
+ - [機械学習の歴史](../1-Introduction/2-history-of-ML/README.md)
+ - [機械学習と公平性](../1-Introduction/3-fairness/README.md)
+ - [機械学習の技術](../1-Introduction/4-techniques-of-ML/README.md)
+
+- 回帰分析
+ - [ツールの紹介](../2-Regression/1-Tools/README.md)
+ - [データ](../2-Regression/2-Data/README.md)
+ - [線形回帰](../2-Regression/3-Linear/README.md)
+ - [ロジスティック回帰](../2-Regression/4-Logistic/README.md)
+
+- ウェブアプリの構築
+ - [ウェブアプリ](../3-Web-App/1-Web-App/README.md)
+
+- 分類
+ - [分類の紹介](../4-Classification/1-Introduction/README.md)
+ - [分類器 1](../4-Classification/2-Classifiers-1/README.md)
+ - [分類器 2](../4-Classification/3-Classifiers-2/README.md)
+ - [応用機械学習](../4-Classification/4-Applied/README.md)
+
+- クラスタリング
+ - [データの可視化](../5-Clustering/1-Visualize/README.md)
+ - [K-平均法](../5-Clustering/2-K-Means/README.md)
+
+- 自然言語処理 (NLP)
+ - [NLPの紹介](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [NLPのタスク](../6-NLP/2-Tasks/README.md)
+ - [翻訳と感情分析](../6-NLP/3-Translation-Sentiment/README.md)
+ - [ホテルレビュー 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [ホテルレビュー 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- 時系列予測
+ - [時系列予測の紹介](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- 強化学習
+ - [Q学習](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- 実世界の機械学習
+ - [アプリケーション](../9-Real-World/1-Applications/README.md)
+
+**免責事項**:
+この文書は、機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期すよう努めていますが、自動翻訳にはエラーや不正確さが含まれる場合があります。元の言語の文書を権威ある情報源とみなしてください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解について、当社は責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/for-teachers.md b/translations/ja/for-teachers.md
new file mode 100644
index 000000000..906b26c8d
--- /dev/null
+++ b/translations/ja/for-teachers.md
@@ -0,0 +1,26 @@
+## 教育者の皆さんへ
+
+このカリキュラムを教室で使いたいですか? ぜひご利用ください!
+
+実際、GitHub Classroomを使ってGitHub内で使用することもできます。
+
+そのためには、このリポジトリをフォークしてください。各レッスンごとにリポジトリを作成する必要があるので、各フォルダを別々のリポジトリに抽出する必要があります。そうすることで、[GitHub Classroom](https://classroom.github.com/classrooms)が各レッスンを個別に認識できます。
+
+こちらの[詳細な指示](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/)を参考にして、教室の設定方法を確認してください。
+
+## 現在のリポジトリをそのまま使用する場合
+
+GitHub Classroomを使用せずに、現在のリポジトリをそのまま使用したい場合も可能です。その場合、どのレッスンを一緒に進めるかを学生に伝える必要があります。
+
+オンライン形式(Zoom、Teamsなど)では、クイズ用のブレイクアウトルームを作成し、学生が学習の準備をするのを助けるメンターを配置することができます。その上で、特定の時間にクイズを実施し、回答を「issues」として提出するよう学生に呼びかけます。課題についても、オープンに協力して取り組む形にするなら、同じ方法が使えます。
+
+よりプライベートな形式を好む場合は、学生にカリキュラムをレッスンごとに自分のGitHubリポジトリにプライベートリポジトリとしてフォークさせ、あなたにアクセス権を与えるように指示します。そうすれば、クイズや課題をプライベートに完了し、教室のリポジトリのissuesを通じて提出することができます。
+
+オンライン教室形式でこれを機能させる方法はたくさんあります。あなたに最適な方法をぜひ教えてください!
+
+## ご意見をお聞かせください!
+
+このカリキュラムがあなたと学生にとって役立つものになるようにしたいと考えています。ぜひ[フィードバック](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u)をお寄せください。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確性を期していますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文の文書を信頼できる情報源として考慮すべきです。重要な情報については、専門の人間による翻訳を推奨します。この翻訳の使用に起因する誤解や誤解について、当社は一切責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/quiz-app/README.md b/translations/ja/quiz-app/README.md
new file mode 100644
index 000000000..07594c611
--- /dev/null
+++ b/translations/ja/quiz-app/README.md
@@ -0,0 +1,115 @@
+# クイズ
+
+これらのクイズは、https://aka.ms/ml-beginners にあるMLカリキュラムの前後のレクチャークイズです。
+
+## プロジェクトセットアップ
+
+```
+npm install
+```
+
+### 開発のためのコンパイルとホットリロード
+
+```
+npm run serve
+```
+
+### 本番用のコンパイルと最小化
+
+```
+npm run build
+```
+
+### ファイルのリントと修正
+
+```
+npm run lint
+```
+
+### 設定のカスタマイズ
+
+[Configuration Reference](https://cli.vuejs.org/config/) を参照してください。
+
+クレジット: このクイズアプリのオリジナルバージョンに感謝します: https://github.com/arpan45/simple-quiz-vue
+
+## Azureへのデプロイ
+
+以下は、始めるためのステップバイステップガイドです:
+
+1. GitHubリポジトリをフォークする
+静的WebアプリのコードをGitHubリポジトリに入れてください。このリポジトリをフォークします。
+
+2. Azure静的Webアプリを作成する
+- [Azureアカウント](http://azure.microsoft.com) を作成する
+- [Azureポータル](https://portal.azure.com) にアクセスする
+- 「リソースの作成」をクリックして、「Static Web App」を検索する。
+- 「作成」をクリックする。
+
+3. 静的Webアプリを設定する
+- 基本: サブスクリプション: Azureサブスクリプションを選択します。
+- リソースグループ: 新しいリソースグループを作成するか、既存のものを使用します。
+- 名前: 静的Webアプリの名前を入力します。
+- リージョン: ユーザーに最も近いリージョンを選択します。
+
+#### デプロイメントの詳細:
+- ソース: 「GitHub」を選択します。
+- GitHubアカウント: AzureがGitHubアカウントにアクセスすることを許可します。
+- 組織: GitHubの組織を選択します。
+- リポジトリ: 静的Webアプリを含むリポジトリを選択します。
+- ブランチ: デプロイするブランチを選択します。
+
+#### ビルドの詳細:
+- ビルドプリセット: アプリが構築されているフレームワークを選択します(例:React, Angular, Vueなど)。
+- アプリの場所: アプリコードを含むフォルダを指定します(例:ルートにある場合は/)。
+- APIの場所: APIがある場合、その場所を指定します(オプション)。
+- 出力場所: ビルド出力が生成されるフォルダを指定します(例:buildまたはdist)。
+
+4. レビューと作成
+設定を確認し、「作成」をクリックします。Azureは必要なリソースを設定し、リポジトリにGitHub Actionsワークフローを作成します。
+
+5. GitHub Actionsワークフロー
+Azureは自動的にリポジトリにGitHub Actionsワークフローファイル (.github/workflows/azure-static-web-apps-.yml) を作成します。このワークフローがビルドとデプロイのプロセスを処理します。
+
+6. デプロイの監視
+GitHubリポジトリの「Actions」タブに移動します。
+ワークフローが実行中であることが確認できます。このワークフローは静的WebアプリをAzureにビルドしてデプロイします。
+ワークフローが完了すると、アプリは提供されたAzure URLで公開されます。
+
+### ワークフローファイルの例
+
+以下は、GitHub Actionsワークフローファイルの例です:
+
+```
+name: Azure Static Web Apps CI/CD
+on:
+ push:
+ branches:
+ - main
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ runs-on: ubuntu-latest
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v1
+ with:
+ azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+ repo_token: ${{ secrets.GITHUB_TOKEN }}
+ action: "upload"
+ app_location: "/quiz-app" # App source code path
+ api_location: "" # API source code path - optional
+ output_location: "dist" # Built app content directory - optional
+```
+
+### 追加リソース
+- [Azure Static Web Apps Documentation](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [GitHub Actions Documentation](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**免責事項**:
+この文書は、機械翻訳AIサービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる場合があります。元の言語の文書が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳について、当社は一切の責任を負いません。
\ No newline at end of file
diff --git a/translations/ja/sketchnotes/LICENSE.md b/translations/ja/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..9a1933c0c
--- /dev/null
+++ b/translations/ja/sketchnotes/LICENSE.md
@@ -0,0 +1,277 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") は法律事務所ではなく、
+法的サービスや法的アドバイスを提供しません。Creative Commons 公共ライセンスの
+配布は、弁護士-クライアント関係やその他の関係を作成するものではありません。
+Creative Commons は、そのライセンスおよび関連情報を「現状のまま」提供します。
+Creative Commons は、そのライセンス、その条件下でライセンスされた
+いかなる資料、または関連情報について、いかなる保証も行いません。
+Creative Commons は、それらの使用に起因する損害に対するすべての責任を
+可能な限り最大限に否認します。
+
+Creative Commons 公共ライセンスの使用
+
+Creative Commons 公共ライセンスは、クリエイターやその他の権利者が著作権および
+以下の公共ライセンスに特定された特定の他の権利の対象となる
+オリジナルの著作物やその他の資料を共有するために使用できる
+標準的な条件を提供します。以下の考慮事項は情報提供のみを目的としており、
+包括的ではなく、ライセンスの一部を構成するものではありません。
+
+ ライセンサーのための考慮事項: 当社の公共ライセンスは、
+ 著作権および特定の他の権利によって制限される方法で資料を
+ 使用するために一般に許可を与える権限を持つ人々によって
+ 使用されることを意図しています。当社のライセンスは
+ 取消不能です。ライセンサーは、ライセンスを適用する前に
+ 選択したライセンスの条件を読み理解する必要があります。
+ また、ライセンサーは、ライセンスを適用する前に必要なすべての
+ 権利を確保し、一般の人々が期待通りに資料を再利用できるように
+ する必要があります。ライセンサーは、ライセンスの対象と
+ ならない資料を明確に表示する必要があります。これには、他の
+ CCライセンスされた資料や、著作権の例外や制限の下で使用される
+ 資料が含まれます。ライセンサーのためのその他の考慮事項:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ 一般のための考慮事項: 当社の公共ライセンスのいずれかを
+ 使用することにより、ライセンサーは、特定の条件の下で
+ ライセンスされた資料を使用する許可を一般に与えます。
+ ライセンサーの許可が不要な場合(例: 著作権の適用される
+ 例外や制限のため)、その使用はライセンスによって規制されません。
+ 当社のライセンスは、著作権およびライセンサーが許可する権限を
+ 持つ特定の他の権利の下でのみ許可を与えます。ライセンスされた
+ 資料の使用は、他の理由により依然として制限される場合があります。
+ たとえば、他者が資料に著作権や他の権利を持っている場合です。
+ ライセンサーは、すべての変更をマークまたは説明するように
+ 特別な要求をする場合があります。当社のライセンスでは
+ 必須ではありませんが、合理的な範囲内でそのような要求に
+ 敬意を払うことをお勧めします。一般のためのその他の考慮事項:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-ShareAlike 4.0 International Public
+License
+
+ライセンスされた権利を行使することにより、あなたはこの
+Creative Commons Attribution-ShareAlike 4.0 International Public License
+(「公共ライセンス」)の条件に拘束されることに同意します。
+この公共ライセンスが契約として解釈される範囲で、あなたは
+これらの条件を受け入れることによりライセンスされた権利を
+付与され、ライセンサーはこれらの条件の下でライセンスされた
+資料を提供することによる利益を考慮してあなたにその権利を
+付与します。
+
+セクション 1 -- 定義
+
+ a. 変更された資料とは、ライセンサーが保持する著作権および
+ 類似の権利の下で許可が必要な方法でライセンスされた資料が
+ 翻訳、変更、編成、変換、またはその他の方法で修正された
+ 資料を意味します。この公共ライセンスの目的のために、
+ ライセンスされた資料が音楽作品、パフォーマンス、または
+ 録音である場合、ライセンスされた資料が動画像とタイミング
+ 関係で同期される場合、変更された資料は常に生成されます。
+
+ b. 変更者のライセンスとは、この公共ライセンスの条件に
+ 従って、変更された資料に対するあなたの寄稿に対する
+ あなたの著作権および類似の権利に適用されるライセンスを意味します。
+
+ c. BY-SA 互換ライセンスとは、Creative Commons がこの公共ライセンスと
+ 本質的に同等であると承認したライセンスを意味します。
+ creativecommons.org/compatiblelicenses にリストされています。
+
+ d. 著作権および類似の権利とは、著作権および/または著作権に
+ 密接に関連する類似の権利を意味し、これには、パフォーマンス、
+ 放送、録音、および独自のデータベース権利が含まれます。
+ 権利がどのようにラベル付けされているか、または分類されているか
+ に関係なく。この公共ライセンスの目的のために、セクション
+ 2(b)(1)-(2) で指定された権利は著作権および類似の権利ではありません。
+
+ e. 有効な技術的措置とは、適切な権限がない場合に回避できない
+ 措置を意味し、1996 年 12 月 20 日に採択された WIPO 著作権
+ 条約の第 11 条の義務を満たす法律および/または類似の国際
+ 協定の下で回避できない措置を意味します。
+
+ f. 例外および制限とは、フェアユース、フェアディーリング、
+ および/またはライセンスされた資料の使用に適用される
+ 著作権および類似の権利に対するその他の例外または制限を意味します。
+
+ g. ライセンス要素とは、Creative Commons 公共ライセンスの名前に
+ 記載されているライセンス属性を意味します。この公共
+ ライセンスのライセンス要素は、表示(Attribution)と継承(ShareAlike)です。
+
+ h. ライセンスされた資料とは、ライセンサーがこの公共
+ ライセンスを適用した芸術作品または文学作品、データベース、
+ またはその他の資料を意味します。
+
+ i. ライセンスされた権利とは、この公共ライセンスの条件に
+ 従ってあなたに付与される権利を意味し、ライセンスされた
+ 資料の使用に適用されるすべての著作権および類似の権利に
+ 制限され、ライセンサーがライセンスする権限を持つものを
+ 意味します。
+
+ j. ライセンサーとは、この公共ライセンスの下で権利を付与する
+ 個人または団体を意味します。
+
+ k. 共有とは、ライセンスされた権利の下で許可が必要な方法または
+ プロセスによって資料を一般に提供することを意味し、
+ 複製、公開表示、公開パフォーマンス、配布、普及、通信、
+ または輸入を含み、資料を一般に提供することを意味します。
+ 公衆が資料を自分で選んだ場所と時間からアクセスできるように
+ する方法を含みます。
+
+ l. 独自のデータベース権利とは、1996 年 3 月 11 日の欧州議会および
+ 欧州理事会の指令 96/9/EC によって生じる著作権以外の権利を
+ 意味し、修正および/または後継されるもの、および世界中の
+ 他の本質的に同等の権利を意味します。
+
+ m. あなたとは、この公共ライセンスの下でライセンスされた権利を
+ 行使する個人または団体を意味します。あなたの意味は
+ 対応する意味を持ちます。
+
+セクション 2 -- 範囲
+
+ a. ライセンスの付与
+
+ 1. この公共ライセンスの条件に従い、ライセンサーは
+ ここに、ライセンスされた資料に対してライセンスされた
+ 権利を行使するための世界的、ロイヤリティフリー、
+ サブライセンス不可、非排他的、取消不能なライセンスを
+ あなたに付与します。
+
+ a. ライセンスされた資料を全体または一部複製および共有すること。
+
+ b. 変更された資料を生成、複製、および共有すること。
+
+ 2. 例外および制限。あなたの使用に例外および制限が
+ 適用される場合、この公共ライセンスは適用されず、
+ あなたはその条件に従う必要はありません。
+
+ 3. 期間。この公共ライセンスの期間はセクション 6(a) に
+ 指定されています。
+
+ 4. メディアおよびフォーマット; 技術的変更の許可。
+ ライセンサーは、現在知られているか将来作成される
+ すべてのメディアおよびフォーマットでライセンスされた
+ 権利を行使し、必要な技術的変更を行うことを許可します。
+ ライセンサーは、ライセンスされた権利を行使するために
+ 必要な技術的変更を行うことを禁止する権利または権限を
+ 放棄し、または主張しないことに同意します。この公共
+ ライセンスの目的のために、このセクション 2(a)(4) に
+ よって許可された変更を行うだけでは、変更された資料は
+ 生成されません。
+
+ 5. 下流の受領者。
+
+ a. ライセンサーからのオファー -- ライセンスされた資料。
+ ライセンスされた資料のすべての受領者は、自動的に
+ ライセンサーからこの公共ライセンスの条件に従って
+ ライセンスされた権利を行使するためのオファーを
+ 受け取ります。
+
+ b. ライセンサーからの追加オファー -- 変更された資料。
+ あなたからの変更された資料のすべての受領者は、
+ 自動的にライセンサーから変更された資料に対する
+ ライセンスされた権利を行使するためのオファーを
+ 受け取ります。
+
+ c. 下流の制限なし。あなたは、ライセンスされた資料の
+ 受領者がライセンスされた権利を行使することを
+ 制限する場合、ライセンスされた資料に追加または
+ 異なる条件を提供したり、適用したりすることは
+ できません。
+
+ 6. 推奨の不在。この公共ライセンスには、あなたがライセンサー
+ またはセクション 3(a)(1)(A)(i) に提供される属性を受け取る
+ 他の人と関連している、またはあなたのライセンスされた
+ 資料の使用が関連している、支援されている、または公式
+ ステータスを付与されていることを主張または暗示する
+ 許可は含まれていません。
+
+ b. その他の権利
+
+ 1. 道徳的権利、例えば完全性の権利は、この公共ライセンス
+ の下でライセンスされておらず、また、パブリシティ、
+ プライバシー、および/またはその他の類似のパーソナリティ
+ 権もライセンスされていません。ただし、可能な限り、
+ ライセンサーは、ライセンスされた権利を行使できるように
+ 必要な範囲で、ライセンサーが保持するそのような権利を
+ 放棄し、または主張しないことに同意しますが、それ以外の
+ 場合はありません。
+
+ 2. 特許および商標権は、この公共ライセンスの下でライセンス
+ されていません。
+
+ 3. 可能な限り、ライセンサーは、任意のまたは放棄可能な
+ 法定または強制的なライセンススキームの下で、収集社会
+ を通じて直接または間接的に、ライセンスされた権利の
+ 行使に対してあなたからロイヤリティを収集する権利を
+ 放棄します。他のすべての場合、ライセンサーはそのような
+ ロイヤリティを収集する権利を明示的に留保します。
+
+セクション 3 -- ライセンス条件
+
+ライセンスされた権利の行使は、明示的に以下の条件に従うことを
+条件とします。
+
+ a. 帰属
+
+ 1. ライセンスされた資料を共有する場合(変更された形を
+ 含む)、以下を保持する必要があります。
+
+ a. ライセンサーがライセンスされた資料に提供した場合、
+ 次の事項を保持する必要があります。
+
+ i. ライセンスされた資料の作成者および帰属を
+ 受けるよう指定された他の人の識別(仮名が
+ 指定されている場合は仮名を含む)。
+
+ ii. 著作権表示。
+
+ iii. この公共ライセンスに言及する通知。
+
+ iv. 保証の免責に言及する通知。
+
+ v. 実行可能な範囲でライセンスされた資料への URI
+ またはハイパーリンク。
+
+ b. ライセンスされた資料を変更した場合、その変更を
+ 示し、以前の変更の表示を保持する必要があります。
+
+ c. ライセンスされた資料がこの公共ライセンスの下で
+ ライセンスされていることを示し、この公共ライセンス
+ のテキスト、URI、またはハイパーリンクを含める
+ 必要があります。
+
+ 2. セクション 3(a)(1) の条件は、あなたがライセンスされた
+ 資料を共有するメディア、手段、およびコンテキストに
+ 基づいて合理的な方法で満たすことができます。
+ たとえば、必要な情報を含むリソースへの URI または
+ ハイパーリンクを提供することによって条件を満たすことが
+ 合理的である場合があります。
+
+ 3. ライセンサーの要求があった場合、合理的に実行可能な
+ 範囲でセクション 3(a)(1)(A) で必要とされる情報を削除
+ する必要があります。
+
+ b. 継承(ShareAlike)
+
+ セクション 3(a) の条件に加えて、あなたが生成する変更された
+ 資料を共有する場合、以下の条件も適用されます。
+
+ 1. あなたが適用する変更者のライセンスは、同じライセンス
+ 要素を持つ Creative Commons ライセンス、このバージョン
+ または後のバージョン、または BY-SA 互換ライセンスで
+ なければなりません。
+
+ 2. あなたが適用する変更者のライセンスのテキスト、URI
+ またはハイパーリンクを含める必要があります。
+ 変更された資料を共有するメディア、手段、および
+ コンテキストに基づいて、この条件を合理的な方法で
+ 満たすことができます。
+
+ 3. あなたは、変更者のライセンスの下で付与される権利の行使を
+ 制限するような追加または異なる条件を、変更された資料に
+ 提供または適用したり、変更された資料に有効な技術的措置を
+ 適用したりすることはできません。
+
+**免責事項**:
+この文書は機械ベースのAI翻訳サービスを使用して翻訳されています。正確さを期していますが、自動翻訳には誤りや不正確さが含まれる可能性があることをご了承ください。権威ある情報源としては、元の言語で書かれた原文を参照してください。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤訳については責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ja/sketchnotes/README.md b/translations/ja/sketchnotes/README.md
new file mode 100644
index 000000000..7d390a7cf
--- /dev/null
+++ b/translations/ja/sketchnotes/README.md
@@ -0,0 +1,10 @@
+すべてのカリキュラムのスケッチノートはここからダウンロードできます。
+
+🖨 高解像度で印刷するために、TIFFバージョンは[このリポジトリ](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff)で利用可能です。
+
+🎨 作成者: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[CC BY-SA 4.0 ライセンス](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**免責事項**:
+この文書は機械翻訳AIサービスを使用して翻訳されています。正確さを期すよう努めておりますが、自動翻訳には誤りや不正確さが含まれる場合がありますのでご注意ください。原文の言語によるオリジナル文書を信頼できる情報源とみなしてください。重要な情報については、専門の人間による翻訳を推奨します。本翻訳の使用に起因する誤解や誤訳については、一切の責任を負いかねます。
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/1-intro-to-ML/README.md b/translations/ko/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..58ce5f363
--- /dev/null
+++ b/translations/ko/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# 머신 러닝 소개
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[초보자를 위한 머신 러닝 소개](https://youtu.be/6mSx_KJxcHI "초보자를 위한 머신 러닝 - 초보자를 위한 머신 러닝 소개")
+
+> 🎥 위 링크를 클릭하면 이 강의를 다루는 짧은 비디오를 볼 수 있습니다.
+
+초보자를 위한 클래식 머신 러닝 강좌에 오신 것을 환영합니다! 이 주제가 처음이든, 특정 영역을 복습하고 싶은 경험 있는 ML 실무자든, 함께 하게 되어 기쁩니다! 여러분의 ML 학습을 위한 친근한 출발점을 만들고자 하며, [피드백](https://github.com/microsoft/ML-For-Beginners/discussions)을 평가하고, 응답하고, 반영하는 것을 기쁘게 생각합니다.
+
+[ML 소개](https://youtu.be/h0e2HAPTGF4 "ML 소개")
+
+> 🎥 위 링크를 클릭하면 MIT의 John Guttag이 머신 러닝을 소개하는 비디오를 볼 수 있습니다.
+
+---
+## 머신 러닝 시작하기
+
+이 커리큘럼을 시작하기 전에, 노트북을 로컬에서 실행할 수 있도록 컴퓨터를 설정해야 합니다.
+
+- **이 비디오들로 컴퓨터 설정하기**. 시스템에 [파이썬 설치 방법](https://youtu.be/CXZYvNRIAKM)과 개발을 위한 [텍스트 편집기 설정](https://youtu.be/EU8eayHWoZg)을 배우기 위해 다음 링크를 사용하세요.
+- **파이썬 배우기**. 데이터 과학자에게 유용한 프로그래밍 언어인 [파이썬](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott)에 대한 기본적인 이해를 갖추는 것이 좋습니다.
+- **Node.js와 자바스크립트 배우기**. 이 과정에서 웹 앱을 구축할 때 몇 번 자바스크립트를 사용하므로 [node](https://nodejs.org)와 [npm](https://www.npmjs.com/)을 설치하고, 파이썬과 자바스크립트 개발을 위한 [Visual Studio Code](https://code.visualstudio.com/)도 설치해야 합니다.
+- **GitHub 계정 만들기**. [GitHub](https://github.com)에서 우리를 찾았으니 이미 계정이 있을 수도 있지만, 그렇지 않다면 계정을 만들고 이 커리큘럼을 포크하여 사용하세요. (별표도 주시면 좋겠습니다 😊)
+- **Scikit-learn 탐색하기**. 이 강의에서 참조하는 ML 라이브러리 세트인 [Scikit-learn](https://scikit-learn.org/stable/user_guide.html)에 익숙해지세요.
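+
+환경이 제대로 준비되었는지는, 예를 들어 아래와 같은 짧은 Python 스크립트로 확인해 볼 수 있습니다. 출력되는 버전 번호는 각자의 환경에 따라 달라진다는 가정하의 간단한 점검 예시입니다:
+
+```python
+# 필수 라이브러리가 설치되어 있는지 확인하는 간단한 점검 스크립트
+import sys
+
+import sklearn
+
+print(f"Python 버전: {sys.version.split()[0]}")
+print(f"scikit-learn 버전: {sklearn.__version__}")
+```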
+
+---
+## 머신 러닝이란 무엇인가?
+
+'머신 러닝'이라는 용어는 오늘날 가장 인기 있고 자주 사용되는 용어 중 하나입니다. 어떤 분야에서 일하든 기술에 대해 어느 정도 익숙하다면 이 용어를 한 번쯤은 들어봤을 가능성이 큽니다. 그러나 머신 러닝의 메커니즘은 대부분의 사람들에게는 신비롭습니다. 머신 러닝 초보자에게는 이 주제가 때로는 벅차게 느껴질 수 있습니다. 따라서 머신 러닝이 실제로 무엇인지 이해하고, 실용적인 예제를 통해 단계적으로 배우는 것이 중요합니다.
+
+---
+## 과대광고 곡선
+
+
+
+> 구글 트렌드는 '머신 러닝' 용어의 최근 '과대광고 곡선'을 보여줍니다.
+
+---
+## 신비로운 우주
+
+우리는 매혹적인 신비로 가득 찬 우주에 살고 있습니다. Stephen Hawking, Albert Einstein 등 많은 위대한 과학자들은 우리 주변 세계의 신비를 밝혀내기 위해 평생을 바쳤습니다. 이것이 바로 배우는 존재로서의 인간의 조건입니다. 아이는 어른이 될 때까지 해마다 새로운 것을 배우며 자신이 사는 세계의 구조를 발견해 갑니다.
+
+---
+## 아이의 두뇌
+
+아이의 두뇌와 감각은 주변 환경의 사실을 인식하고 점차 삶의 숨겨진 패턴을 배우며, 이러한 패턴을 식별하는 논리적 규칙을 만들도록 도와줍니다. 인간 두뇌의 학습 과정은 인간을 이 세상의 가장 정교한 생명체로 만듭니다. 숨겨진 패턴을 발견하고 그 패턴을 혁신함으로써 끊임없이 배우는 것은 우리가 평생 동안 더 나아질 수 있게 합니다. 이 학습 능력과 진화 능력은 [뇌 가소성](https://www.simplypsychology.org/brain-plasticity.html)이라는 개념과 관련이 있습니다. 표면적으로, 인간 두뇌의 학습 과정과 머신 러닝 개념 사이에 몇 가지 동기적인 유사점을 그릴 수 있습니다.
+
+---
+## 인간 두뇌
+
+[인간 두뇌](https://www.livescience.com/29365-human-brain.html)는 실제 세계에서 정보를 인식하고, 인식된 정보를 처리하며, 합리적인 결정을 내리고, 상황에 따라 특정 행동을 수행합니다. 이를 지능적으로 행동한다고 합니다. 지능적 행동 과정을 기계에 프로그램하는 것을 인공지능(AI)이라고 합니다.
+
+---
+## 몇 가지 용어
+
+용어가 혼동될 수 있지만, 머신 러닝(ML)은 인공지능의 중요한 하위 집합입니다. **ML은 특수한 알고리즘을 사용하여 의미 있는 정보를 발견하고 인식된 데이터에서 숨겨진 패턴을 찾아 합리적 의사 결정 과정을 확립하는 것에 중점을 둡니다**.
+
+---
+## AI, ML, 딥 러닝
+
+
+
+> AI, ML, 딥 러닝, 데이터 과학 간의 관계를 보여주는 다이어그램. [Jen Looper](https://twitter.com/jenlooper)의 인포그래픽으로 [이 그래픽](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)에서 영감을 받았습니다.
+
+---
+## 다룰 개념
+
+이 커리큘럼에서는 초보자가 반드시 알아야 할 머신 러닝의 핵심 개념만 다룰 것입니다. 우리는 많은 학생들이 기초를 배우기 위해 사용하는 훌륭한 라이브러리인 Scikit-learn을 주로 사용하여 '클래식 머신 러닝'을 다룹니다. 인공지능이나 딥 러닝의 더 넓은 개념을 이해하려면 머신 러닝에 대한 강력한 기초 지식이 필수적이므로, 여기에서 이를 제공하고자 합니다.
+
+---
+## 이 강의에서 배우게 될 것:
+
+- 머신 러닝의 핵심 개념
+- ML의 역사
+- ML과 공정성
+- 회귀 ML 기술
+- 분류 ML 기술
+- 군집화 ML 기술
+- 자연어 처리 ML 기술
+- 시계열 예측 ML 기술
+- 강화 학습
+- ML의 실제 응용
+
+---
+## 다루지 않을 내용
+
+- 딥 러닝
+- 신경망
+- AI
+
+더 나은 학습 경험을 위해, 신경망, '딥 러닝' - 신경망을 사용한 다층 모델 구축 - 및 AI의 복잡성을 피할 것입니다. 이는 다른 커리큘럼에서 다룰 예정입니다. 또한 이 더 큰 분야에 집중하기 위해 다가오는 데이터 과학 커리큘럼도 제공할 것입니다.
+
+---
+## 왜 머신 러닝을 공부해야 하는가?
+
+시스템 관점에서 머신 러닝은 데이터에서 숨겨진 패턴을 학습하여 지능적인 결정을 내리는 자동화 시스템을 만드는 것으로 정의됩니다.
+
+이 동기는 인간 두뇌가 외부 세계에서 인식한 데이터를 기반으로 특정한 것을 배우는 방식에서 느슨하게 영감을 받았습니다.
+
+✅ 비즈니스가 하드 코딩된 규칙 기반 엔진을 만드는 대신 머신 러닝 전략을 사용하려는 이유를 잠시 생각해 보세요.
+
+---
+## 머신 러닝의 응용
+
+머신 러닝의 응용은 이제 거의 모든 곳에 있으며, 우리 사회에서 스마트폰, 연결된 장치 및 기타 시스템에 의해 생성되는 데이터만큼이나 널리 퍼져 있습니다. 최첨단 머신 러닝 알고리즘의 엄청난 잠재력을 고려할 때, 연구자들은 다차원적이고 다학문적인 실제 문제를 해결하는 데 그 능력을 탐구해 왔으며, 긍정적인 결과를 얻고 있습니다.
+
+---
+## 적용된 ML의 예
+
+**다양한 방식으로 머신 러닝을 사용할 수 있습니다**:
+
+- 환자의 병력이나 보고서에서 질병의 가능성을 예측합니다.
+- 기상 데이터를 활용하여 기상 이벤트를 예측합니다.
+- 텍스트의 감정을 이해합니다.
+- 가짜 뉴스를 감지하여 선전의 확산을 막습니다.
+
+금융, 경제학, 지구과학, 우주 탐사, 생물 의공학, 인지 과학, 심지어 인문학 분야에서도 머신 러닝을 적용하여 그들의 분야에서 어려운 데이터 처리 문제를 해결하고 있습니다.
+
+---
+## 결론
+
+머신 러닝은 실제 데이터나 생성된 데이터에서 의미 있는 통찰을 찾아 패턴 발견 과정을 자동화합니다. 비즈니스, 건강, 금융 응용 등에서 매우 가치가 높다는 것이 입증되었습니다.
+
+가까운 미래에는 머신 러닝의 기본을 이해하는 것이 그 광범위한 채택으로 인해 어느 분야에서든 필수가 될 것입니다.
+
+---
+# 🚀 도전
+
+[Excalidraw](https://excalidraw.com/)와 같은 온라인 앱이나 종이를 사용하여 AI, ML, 딥 러닝, 데이터 과학 간의 차이에 대한 이해를 스케치하세요. 각 기술이 해결하기에 좋은 문제에 대한 아이디어를 추가하세요.
+
+# [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# 복습 및 자기 학습
+
+클라우드에서 ML 알고리즘을 사용하는 방법에 대해 자세히 알아보려면 이 [학습 경로](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott)를 따라가세요.
+
+ML의 기본에 대한 [학습 경로](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott)를 따라가세요.
+
+---
+# 과제
+
+[시작하기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서가 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/1-intro-to-ML/assignment.md b/translations/ko/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..1e1002d12
--- /dev/null
+++ b/translations/ko/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# 시작하기
+
+## 지침
+
+채점하지 않는 이 과제에서는 Python을 복습하고, 노트북을 실행할 수 있도록 환경을 설정합니다.
+
+이 [Python 학습 경로](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott)를 따라가고, 아래의 입문 동영상을 통해 시스템을 설정하세요:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전을 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/2-history-of-ML/README.md b/translations/ko/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..635364a3f
--- /dev/null
+++ b/translations/ko/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# 머신 러닝의 역사
+
+
+> 스케치노트 by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[머신 러닝의 역사](https://youtu.be/N6wxM4wZ7V0 "초보자를 위한 머신 러닝 - 머신 러닝의 역사")
+
+> 🎥 위 링크를 클릭하여 이 강의를 다룬 짧은 영상을 보세요.
+
+이번 강의에서는 머신 러닝과 인공지능의 역사에서 중요한 이정표들을 살펴보겠습니다.
+
+인공지능(AI) 분야의 역사는 머신 러닝의 역사와 밀접하게 얽혀 있습니다. 머신 러닝을 뒷받침하는 알고리즘과 계산상의 진보가 AI의 발전에 기여했기 때문입니다. 이 분야들이 1950년대에 독립된 연구 영역으로 구체화되기 시작했지만, 중요한 [알고리즘적, 통계적, 수학적, 계산적 및 기술적 발견](https://wikipedia.org/wiki/Timeline_of_machine_learning)은 이 시기 이전에도 존재했으며, 이 시기와 겹칩니다. 사실, 사람들은 [수백 년 동안](https://wikipedia.org/wiki/History_of_artificial_intelligence) 이 질문들에 대해 생각해왔습니다. 이 글은 '생각하는 기계'라는 아이디어의 역사적 지적 기초에 대해 논의합니다.
+
+---
+## 주목할 만한 발견들
+
+- 1763, 1812 [베이즈 정리](https://wikipedia.org/wiki/Bayes%27_theorem)와 그 선행 이론들. 이 정리와 그 응용은 사전 지식을 기반으로 사건 발생 확률을 설명하는 추론의 기초가 됩니다.
+- 1805 [최소 제곱법 이론](https://wikipedia.org/wiki/Least_squares) 프랑스 수학자 Adrien-Marie Legendre에 의해. 이 이론은 데이터 피팅에 도움을 줍니다.
+- 1913 [마르코프 연쇄](https://wikipedia.org/wiki/Markov_chain), 러시아 수학자 Andrey Markov의 이름을 딴 것으로, 이전 상태를 기반으로 가능한 사건의 연속을 설명하는 데 사용됩니다.
+- 1957 [퍼셉트론](https://wikipedia.org/wiki/Perceptron)은 미국 심리학자 Frank Rosenblatt가 발명한 선형 분류기의 일종으로, 딥러닝의 발전에 기초가 됩니다.
+
+---
+
+- 1967 [최근접 이웃](https://wikipedia.org/wiki/Nearest_neighbor)은 원래 경로를 맵핑하기 위해 설계된 알고리즘입니다. 머신 러닝 맥락에서는 패턴을 감지하는 데 사용됩니다.
+- 1970 [역전파](https://wikipedia.org/wiki/Backpropagation)는 [순방향 신경망](https://wikipedia.org/wiki/Feedforward_neural_network)을 훈련시키는 데 사용됩니다.
+- 1982 [순환 신경망](https://wikipedia.org/wiki/Recurrent_neural_network)은 순방향 신경망에서 파생된 인공 신경망으로, 시간 그래프를 생성합니다.
+
+✅ 조금 더 연구해 보세요. 머신 러닝과 AI 역사에서 중요한 다른 날짜는 무엇인가요?
+
+---
+## 1950: 생각하는 기계
+
+Alan Turing은 2019년 [대중 투표](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century)에서 20세기 최고의 과학자로 선정된 정말 놀라운 인물로, '생각할 수 있는 기계' 개념의 기초를 마련하는 데 기여한 것으로 인정받고 있습니다. 그는 이 개념에 대한 경험적 증거의 필요성과 반대 의견을 극복하기 위해 [튜링 테스트](https://www.bbc.com/news/technology-18475646)를 만들었습니다. 이 테스트는 우리의 NLP 강의에서 다룰 것입니다.
+
+---
+## 1956: 다트머스 여름 연구 프로젝트
+
+"다트머스 여름 연구 프로젝트는 인공지능 분야의 중요한 사건이었으며," 여기서 '인공지능'이라는 용어가 만들어졌습니다 ([출처](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> 학습이나 지능의 다른 모든 측면은 원칙적으로 매우 정확하게 설명될 수 있으며, 따라서 기계가 이를 모방할 수 있습니다.
+
+---
+
+주 연구자인 수학 교수 John McCarthy는 "학습이나 지능의 다른 모든 측면은 원칙적으로 매우 정확하게 설명될 수 있으며, 따라서 기계가 이를 모방할 수 있다"는 가설을 바탕으로 진행하고자 했습니다. 참가자 중에는 이 분야의 또 다른 거장인 Marvin Minsky도 포함되어 있었습니다.
+
+이 워크숍은 "상징적 방법의 부상, 제한된 도메인에 초점을 맞춘 시스템(초기 전문가 시스템), 귀납적 시스템 대 연역적 시스템"과 같은 여러 논의를 시작하고 장려한 것으로 인정받고 있습니다. ([출처](https://wikipedia.org/wiki/Dartmouth_workshop)).
+
+---
+## 1956 - 1974: "황금기"
+
+1950년대부터 1970년대 중반까지, AI가 많은 문제를 해결할 수 있을 것이라는 낙관주의가 높았습니다. 1967년, Marvin Minsky는 "한 세대 안에... '인공지능' 문제는 상당히 해결될 것"이라고 자신 있게 말했습니다. (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
+
+자연어 처리 연구가 번성하고, 검색이 정교해지며 더 강력해졌으며, '마이크로 월드' 개념이 만들어져 단순한 작업이 평범한 언어 명령으로 완료될 수 있었습니다.
+
+---
+
+연구는 정부 기관의 풍부한 자금 지원을 받았고, 계산과 알고리즘에서 진보가 이루어졌으며, 지능형 기계의 프로토타입이 만들어졌습니다. 이러한 기계 중 일부는 다음과 같습니다:
+
+* [Shakey 로봇](https://wikipedia.org/wiki/Shakey_the_robot), 지능적으로 작업을 수행할 방법을 결정할 수 있는 로봇.
+
+ 
+ > 1972년의 Shakey
+
+---
+
+* Eliza, 초기 '채터봇'으로 사람들과 대화하고 원시적인 '치료사' 역할을 할 수 있었습니다. Eliza에 대해서는 NLP 강의에서 더 자세히 배울 것입니다.
+
+ 
+ > 채터봇 Eliza의 한 버전
+
+---
+
+* "블록 세계"는 블록을 쌓고 정렬할 수 있는 마이크로 월드의 예로, 기계가 결정을 내리는 실험이 테스트될 수 있었습니다. [SHRDLU](https://wikipedia.org/wiki/SHRDLU)와 같은 라이브러리를 통해 구축된 진보는 언어 처리를 발전시키는 데 도움을 주었습니다.
+
+ [SHRDLU와 함께 하는 블록 세계](https://www.youtube.com/watch?v=QAJz4YKUwqw "SHRDLU와 함께 하는 블록 세계")
+
+ > 🎥 위 링크를 클릭하여 비디오를 보세요: SHRDLU와 함께 하는 블록 세계
+
+---
+## 1974 - 1980: "AI 겨울"
+
+1970년대 중반까지 '지능형 기계'를 만드는 복잡성이 과소평가되었고, 주어진 계산 능력으로 약속된 성과가 과장되었다는 것이 분명해졌습니다. 자금 지원이 중단되고, 이 분야에 대한 신뢰가 감소했습니다. 신뢰에 영향을 미친 몇 가지 문제는 다음과 같습니다:
+---
+- **제한 사항**. 계산 능력이 너무 제한적이었습니다.
+- **조합 폭발**. 컴퓨터에 더 많은 요구가 있을수록 훈련해야 하는 매개변수의 양이 기하급수적으로 증가했지만, 계산 능력과 기능의 진화는 이에 따라오지 못했습니다.
+- **데이터 부족**. 테스트, 개발 및 알고리즘을 개선하는 과정을 방해하는 데이터 부족이 있었습니다.
+- **우리가 올바른 질문을 하고 있는가?**. 제기된 질문들 자체가 의문을 제기하기 시작했습니다. 연구자들은 접근 방식에 대한 비판에 직면하기 시작했습니다:
+ - 튜링 테스트는 '중국 방 이론'과 같은 아이디어를 통해 의문이 제기되었습니다. 이 이론은 "디지털 컴퓨터를 프로그래밍하면 언어를 이해하는 것처럼 보이게 할 수 있지만 실제 이해를 생성할 수는 없다"고 주장했습니다. ([출처](https://plato.stanford.edu/entries/chinese-room/))
+ - 사회에 인공지능을 도입하는 것의 윤리성, 예를 들어 "치료사" ELIZA와 같은 인공지능의 도입이 도전받았습니다.
+
+---
+
+동시에, 다양한 AI 학파가 형성되기 시작했습니다. ["깔끔한 AI" 대 "지저분한 AI"](https://wikipedia.org/wiki/Neats_and_scruffies) 관행 사이에 이분법이 형성되었습니다. _지저분한_ 연구실은 원하는 결과를 얻을 때까지 프로그램을 조정했습니다. _깔끔한_ 연구실은 "논리와 형식적인 문제 해결에 집중했습니다". ELIZA와 SHRDLU는 잘 알려진 _지저분한_ 시스템이었습니다. 1980년대에 ML 시스템의 재현 가능성을 요구하는 수요가 나타나면서, _깔끔한_ 접근 방식이 점차 앞서게 되었으며, 그 결과는 더 설명 가능했습니다.
+
+---
+## 1980년대 전문가 시스템
+
+이 분야가 성장하면서, 그 이점이 비즈니스에 더 명확해졌고, 1980년대에는 '전문가 시스템'의 확산이 이루어졌습니다. "전문가 시스템은 최초로 진정으로 성공적인 인공지능(AI) 소프트웨어 형태 중 하나였습니다." ([출처](https://wikipedia.org/wiki/Expert_system)).
+
+이러한 시스템은 실제로 _하이브리드_ 시스템으로, 비즈니스 요구 사항을 정의하는 규칙 엔진과 규칙 시스템을 활용하여 새로운 사실을 추론하는 추론 엔진으로 구성됩니다.
+
+이 시기에는 신경망에 대한 관심도 증가했습니다.
+
+---
+## 1987 - 1993: AI '냉각기'
+
+전문화된 전문가 시스템 하드웨어의 확산은 불행히도 지나치게 전문화되는 효과를 가져왔습니다. 개인용 컴퓨터의 등장도 이러한 대형, 전문화된 중앙 시스템과 경쟁했습니다. 컴퓨팅의 민주화가 시작되었으며, 이는 결국 빅 데이터의 현대적 폭발을 위한 길을 열었습니다.
+
+---
+## 1993 - 2011
+
+이 시기는 데이터와 계산 능력의 부족으로 인해 이전에 발생한 문제를 해결할 수 있는 새로운 시대를 열었습니다. 데이터의 양은 급격히 증가하고 더 널리 이용 가능해졌으며, 특히 2007년경 스마트폰의 등장으로 인해 더 그렇습니다. 계산 능력은 기하급수적으로 확장되었고, 알고리즘도 함께 발전했습니다. 이 분야는 과거의 자유분방한 날들이 진정한 학문으로 구체화되면서 성숙해지기 시작했습니다.
+
+---
+## 현재
+
+오늘날 머신 러닝과 AI는 우리 생활의 거의 모든 부분에 영향을 미칩니다. 이 시대는 이러한 알고리즘이 인간의 삶에 미칠 수 있는 위험과 잠재적 영향을 신중하게 이해할 필요가 있습니다. Microsoft의 Brad Smith는 "정보 기술은 프라이버시와 표현의 자유와 같은 기본적인 인권 보호의 핵심에 이르는 문제를 제기합니다. 이러한 문제는 이러한 제품을 만드는 기술 회사의 책임을 높이며, 우리의 견해로는 신중한 정부 규제와 허용 가능한 사용에 대한 규범의 개발을 요구합니다"라고 말했습니다. ([출처](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+미래가 어떻게 될지는 아직 알 수 없지만, 이러한 컴퓨터 시스템과 그들이 실행하는 소프트웨어 및 알고리즘을 이해하는 것이 중요합니다. 이 커리큘럼이 여러분이 더 나은 이해를 얻고 스스로 결정을 내리는 데 도움이 되기를 바랍니다.
+
+[딥러닝의 역사](https://www.youtube.com/watch?v=mTtDfKgLm54 "딥러닝의 역사")
+> 🎥 위 링크를 클릭하여 비디오를 보세요: Yann LeCun이 이 강의에서 딥러닝의 역사를 논의합니다
+
+---
+## 🚀도전
+
+이 역사적 순간 중 하나를 파고들어 그 뒤에 있는 사람들에 대해 더 알아보세요. 매혹적인 인물들이 있으며, 과학적 발견은 문화적 진공 상태에서 이루어진 것이 아닙니다. 무엇을 발견했나요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## 복습 및 자기 학습
+
+여기 시청하고 들을 항목들이 있습니다:
+
+[Amy Boyd가 AI의 진화를 논의하는 이 팟캐스트](http://runasradio.com/Shows/Show/739)
+[The history of AI by Amy Boyd](https://www.youtube.com/watch?v=EJt3_bFYKss "The history of AI by Amy Boyd")
+
+---
+
+## 과제
+
+[타임라인 만들기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/2-history-of-ML/assignment.md b/translations/ko/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..1c118b190
--- /dev/null
+++ b/translations/ko/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# 타임라인 만들기
+
+## 지침
+
+[이 레포지토리](https://github.com/Digital-Humanities-Toolkit/timeline-builder)를 사용하여 알고리즘, 수학, 통계, AI, 또는 ML의 역사 중 일부 측면에 대한 타임라인을 만드세요. 또는 이들의 조합을 사용할 수도 있습니다. 한 사람, 한 가지 아이디어 또는 오랜 기간의 사상에 초점을 맞출 수 있습니다. 멀티미디어 요소를 추가하는 것을 잊지 마세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | ----------------------------------------------- | ------------------------------------- | ---------------------------------------------------------- |
+| | 배포된 타임라인이 GitHub 페이지로 제시됨 | 코드가 불완전하고 배포되지 않음 | 타임라인이 불완전하고, 연구가 잘 이루어지지 않았으며 배포되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 신뢰할 수 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/3-fairness/README.md b/translations/ko/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..0d9e87229
--- /dev/null
+++ b/translations/ko/1-Introduction/3-fairness/README.md
@@ -0,0 +1,159 @@
+# 책임 있는 AI로 머신 러닝 솔루션 구축하기
+
+
+> 스케치노트 작성: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## 소개
+
+이 커리큘럼에서는 머신 러닝이 우리의 일상 생활에 어떻게 영향을 미치는지 알아보게 됩니다. 현재도 시스템과 모델은 의료 진단, 대출 승인, 사기 탐지 등 일상적인 의사 결정 작업에 관여하고 있습니다. 따라서 이러한 모델이 신뢰할 수 있는 결과를 제공하기 위해 잘 작동하는 것이 중요합니다. 모든 소프트웨어 애플리케이션과 마찬가지로, AI 시스템도 기대에 미치지 못하거나 바람직하지 않은 결과를 낼 수 있습니다. 그렇기 때문에 AI 모델의 동작을 이해하고 설명할 수 있는 것이 필수적입니다.
+
+여러분이 이러한 모델을 구축하는 데 사용하는 데이터에 인종, 성별, 정치적 견해, 종교 등의 특정 인구 통계가 부족하거나 불균형하게 대표되는 경우 어떤 일이 발생할 수 있을지 상상해보세요. 모델의 출력이 특정 인구 통계를 선호하도록 해석되는 경우는 어떨까요? 애플리케이션에 어떤 영향을 미칠까요? 또한 모델이 바람직하지 않은 결과를 내고 사람들에게 해를 끼치는 경우는 어떨까요? AI 시스템의 행동에 대한 책임은 누구에게 있을까요? 이 커리큘럼에서 이러한 질문들을 탐구할 것입니다.
+
+이번 강의에서는 다음을 배우게 됩니다:
+
+- 머신 러닝에서 공정성의 중요성과 공정성 관련 피해에 대한 인식을 높입니다.
+- 신뢰성과 안전성을 보장하기 위해 이상치와 비정상적인 시나리오를 탐구하는 실습을 익힙니다.
+- 모두를 포용하는 시스템을 설계하는 필요성을 이해합니다.
+- 데이터와 사람들의 프라이버시와 보안을 보호하는 것이 얼마나 중요한지 탐구합니다.
+- AI 모델의 동작을 설명하기 위한 투명성 접근의 중요성을 봅니다.
+- AI 시스템에 대한 신뢰를 구축하기 위해 책임이 얼마나 중요한지 인식합니다.
+
+## 사전 요구 사항
+
+사전 요구 사항으로 "책임 있는 AI 원칙" 학습 경로를 수강하고 아래 비디오를 시청하세요:
+
+책임 있는 AI에 대해 자세히 알아보려면 이 [학습 경로](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)를 따르세요.
+
+[Microsoft의 책임 있는 AI 접근](https://youtu.be/dnC8-uUZXSc "Microsoft의 책임 있는 AI 접근")
+
+> 🎥 위 링크를 클릭하면 비디오를 볼 수 있습니다: Microsoft의 책임 있는 AI 접근
+
+## 공정성
+
+AI 시스템은 모든 사람을 공정하게 대우하고 유사한 그룹의 사람들에게 다른 방식으로 영향을 미치는 것을 피해야 합니다. 예를 들어, AI 시스템이 의료 치료, 대출 신청 또는 고용에 대한 지침을 제공할 때 유사한 증상, 재정 상황 또는 전문 자격을 가진 모든 사람에게 동일한 권장 사항을 제공해야 합니다. 우리 각자는 의사 결정과 행동에 영향을 미치는 유전된 편견을 가지고 있습니다. 이러한 편견은 우리가 AI 시스템을 훈련시키는 데이터에서 나타날 수 있습니다. 이러한 조작은 때로는 의도치 않게 발생할 수 있습니다. 데이터를 편향되게 만드는 시점을 의식적으로 아는 것은 종종 어렵습니다.
+
+**“불공정성”**은 인종, 성별, 연령 또는 장애 상태 등으로 정의된 사람들 그룹에 대한 부정적인 영향 또는 “피해”를 포함합니다. 주요 공정성 관련 피해는 다음과 같이 분류할 수 있습니다:
+
+- **할당**: 예를 들어 성별 또는 민족이 다른 성별이나 민족보다 우대되는 경우.
+- **서비스 품질**: 특정 시나리오의 데이터로만 훈련했지만 현실은 훨씬 더 복잡한 경우, 성능이 저하된 서비스로 이어집니다. 예를 들어, 어두운 피부를 인식하지 못하는 비누 디스펜서가 있습니다. [참조](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **비방**: 불공정하게 비판하고 라벨링하는 것. 예를 들어, 이미지 라벨링 기술이 어두운 피부를 가진 사람들의 이미지를 고릴라로 잘못 라벨링한 경우.
+- **과소 또는 과대 대표**: 특정 직업에서 특정 그룹이 보이지 않는다는 아이디어와 그러한 서비스를 계속 홍보하는 모든 서비스 또는 기능이 피해를 초래합니다.
+- **고정관념**: 특정 그룹을 사전에 할당된 속성과 연관시키는 것. 예를 들어, 영어와 터키어 간의 언어 번역 시스템이 성별에 대한 고정관념 연관성으로 인해 부정확할 수 있습니다.
+
+
+> 터키어로 번역
+
+
+> 영어로 다시 번역
+
+AI 시스템을 설계하고 테스트할 때, AI가 공정하고 편향되거나 차별적인 결정을 내리지 않도록 프로그래밍되지 않도록 해야 합니다. AI와 머신 러닝에서 공정성을 보장하는 것은 여전히 복잡한 사회 기술적 과제입니다.
+
+### 신뢰성과 안전성
+
+신뢰를 구축하려면 AI 시스템이 정상적이거나 예상치 못한 조건에서도 신뢰할 수 있고 안전하며 일관되게 작동해야 합니다. 특히 이상치 상황에서 AI 시스템이 어떻게 동작할지 아는 것이 중요합니다. AI 솔루션을 구축할 때, AI 솔루션이 직면할 다양한 상황을 처리하는 방법에 상당한 주의를 기울여야 합니다. 예를 들어, 자율 주행차는 사람들의 안전을 최우선으로 고려해야 합니다. 따라서 자동차를 구동하는 AI는 밤, 뇌우, 눈보라, 도로를 가로지르는 아이들, 애완동물, 도로 공사 등 자동차가 마주칠 수 있는 모든 가능한 시나리오를 고려해야 합니다. AI 시스템이 다양한 조건을 얼마나 신뢰할 수 있고 안전하게 처리할 수 있는지는 데이터 과학자나 AI 개발자가 시스템 설계나 테스트 중에 고려한 예상 수준을 반영합니다.
+
+> [🎥 여기에서 비디오를 클릭하세요: AI에서의 신뢰성과 안전성](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### 포용성
+
+AI 시스템은 모두를 참여시키고 권한을 부여하도록 설계되어야 합니다. AI 시스템을 설계하고 구현할 때 데이터 과학자와 AI 개발자는 시스템에서 사람들을 의도치 않게 배제할 수 있는 잠재적 장벽을 식별하고 해결합니다. 예를 들어, 전 세계적으로 10억 명의 장애인이 있습니다. AI의 발전으로 인해 그들은 일상 생활에서 더 쉽게 다양한 정보와 기회에 접근할 수 있습니다. 장벽을 해결함으로써 더 나은 경험을 제공하는 AI 제품을 혁신하고 개발할 수 있는 기회를 창출합니다.
+
+> [🎥 여기에서 비디오를 클릭하세요: AI에서의 포용성](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### 보안 및 프라이버시
+
+AI 시스템은 안전하고 사람들의 프라이버시를 존중해야 합니다. 프라이버시, 정보 또는 생명을 위험에 빠뜨리는 시스템에 대한 신뢰는 낮아집니다. 머신 러닝 모델을 훈련할 때, 최고의 결과를 도출하기 위해 데이터에 의존합니다. 이 과정에서 데이터의 출처와 무결성을 고려해야 합니다. 예를 들어, 데이터가 사용자 제출 데이터인지 공공 데이터인지 확인해야 합니다. 다음으로, 데이터를 다룰 때 기밀 정보를 보호하고 공격에 저항할 수 있는 AI 시스템을 개발하는 것이 중요합니다. AI가 점점 더 보편화됨에 따라 프라이버시 보호와 중요한 개인 및 비즈니스 정보 보안이 더욱 중요하고 복잡해지고 있습니다. 프라이버시 및 데이터 보안 문제는 AI에서 특히 주의가 필요합니다. 데이터에 접근하는 것이 AI 시스템이 사람들에 대한 정확하고 정보에 입각한 예측과 결정을 내리는 데 필수적이기 때문입니다.
+
+> [🎥 여기에서 비디오를 클릭하세요: AI에서의 보안](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- 산업계에서는 GDPR(일반 데이터 보호 규정)과 같은 규정에 의해 크게 발전한 프라이버시 및 보안에서 상당한 진전을 이루었습니다.
+- 그러나 AI 시스템에서는 시스템을 더 개인화하고 효과적으로 만들기 위해 더 많은 개인 데이터가 필요하다는 점과 프라이버시 사이의 긴장을 인정해야 합니다.
+- 인터넷 연결된 컴퓨터의 탄생과 마찬가지로 AI와 관련된 보안 문제의 수가 급증하고 있습니다.
+- 동시에 AI가 보안을 개선하는 데 사용되고 있습니다. 예를 들어, 대부분의 최신 안티바이러스 스캐너는 오늘날 AI 휴리스틱에 의해 구동됩니다.
+- 데이터 과학 프로세스가 최신 프라이버시 및 보안 관행과 조화롭게 융합되도록 해야 합니다.
+
+### 투명성
+
+AI 시스템은 이해할 수 있어야 합니다. 투명성의 중요한 부분은 AI 시스템과 그 구성 요소의 동작을 설명하는 것입니다. AI 시스템의 이해를 개선하려면 이해 관계자가 시스템이 어떻게 작동하고 왜 그렇게 작동하는지 이해해야 잠재적인 성능 문제, 안전 및 프라이버시 문제, 편향, 배제 관행 또는 의도치 않은 결과를 식별할 수 있습니다. 또한 AI 시스템을 사용하는 사람들은 시스템을 언제, 왜, 어떻게 배포하기로 결정했는지에 대해 정직하고 솔직해야 한다고 믿습니다. 또한 사용 중인 시스템의 한계에 대해서도 정직해야 합니다. 예를 들어, 은행이 소비자 대출 결정을 지원하기 위해 AI 시스템을 사용하는 경우, 결과를 조사하고 어떤 데이터가 시스템의 권장 사항에 영향을 미치는지 이해하는 것이 중요합니다. 정부는 산업 전반에 걸쳐 AI를 규제하기 시작하고 있으므로 데이터 과학자와 조직은 AI 시스템이 규제 요구 사항을 충족하는지, 특히 바람직하지 않은 결과가 발생했을 때 설명해야 합니다.
+
+> [🎥 여기에서 비디오를 클릭하세요: AI에서의 투명성](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- AI 시스템이 매우 복잡하기 때문에 그 작동 방식과 결과를 해석하는 것이 어렵습니다.
+- 이러한 이해 부족은 시스템이 관리되고 운영되며 문서화되는 방식에 영향을 미칩니다.
+- 이러한 이해 부족은 더 중요한 것은 시스템이 생성한 결과를 사용하여 내리는 결정에 영향을 미칩니다.
+
+### 책임성
+
+AI 시스템을 설계하고 배포하는 사람들은 시스템의 작동 방식에 대해 책임을 져야 합니다. 책임의 필요성은 특히 얼굴 인식과 같은 민감한 기술에서 매우 중요합니다. 최근에는 실종 아동 찾기와 같은 용도로 기술의 잠재력을 인식한 법 집행 기관에서 얼굴 인식 기술에 대한 수요가 증가하고 있습니다. 그러나 이러한 기술은 특정 개인에 대한 지속적인 감시를 가능하게 하여 정부가 시민의 기본 자유를 위협하는 데 사용할 수 있습니다. 따라서 데이터 과학자와 조직은 AI 시스템이 개인이나 사회에 미치는 영향에 대해 책임을 져야 합니다.
+
+[Microsoft의 책임 있는 AI 접근](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft의 책임 있는 AI 접근")
+
+> 🎥 위 링크를 클릭하면 비디오를 볼 수 있습니다: 얼굴 인식을 통한 대규모 감시에 대한 경고
+
+궁극적으로, AI를 사회에 도입하는 첫 세대로서 우리 세대의 가장 큰 질문 중 하나는 컴퓨터가 사람들에게 계속 책임을 지게 할 방법과 컴퓨터를 설계하는 사람들이 다른 모든 사람들에게 계속 책임을 지게 할 방법을 찾는 것입니다.
+
+## 영향 평가
+
+머신 러닝 모델을 훈련하기 전에 AI 시스템의 목적, 의도된 사용, 배포 위치 및 시스템과 상호 작용할 사람들을 이해하기 위해 영향 평가를 수행하는 것이 중요합니다. 이는 시스템을 평가하는 리뷰어 또는 테스터에게 잠재적 위험과 예상 결과를 식별할 때 고려해야 할 요소를 알려줍니다.
+
+영향 평가를 수행할 때 집중해야 할 영역은 다음과 같습니다:
+
+* **개인에 대한 부정적인 영향**: 시스템 성능을 저해하는 제한 사항, 요구 사항, 지원되지 않는 사용 또는 알려진 제한 사항을 인식하는 것이 중요합니다. 이는 시스템이 개인에게 해를 끼칠 수 있는 방식으로 사용되지 않도록 보장합니다.
+* **데이터 요구 사항**: 시스템이 데이터를 사용하는 방법과 위치를 이해하면 리뷰어가 고려해야 할 데이터 요구 사항(GDPR 또는 HIPAA 데이터 규정 등)을 탐색할 수 있습니다. 또한, 훈련을 위한 데이터의 출처나 양이 충분한지 확인합니다.
+* **영향 요약**: 시스템 사용으로 인해 발생할 수 있는 잠재적 피해 목록을 수집합니다. ML 수명 주기 전반에 걸쳐 식별된 문제가 완화되었는지 또는 해결되었는지 검토합니다.
+* **핵심 원칙별 적용 목표**: 각 원칙의 목표가 충족되는지 평가하고 격차가 있는지 확인합니다.
+
+## 책임 있는 AI로 디버깅하기
+
+소프트웨어 애플리케이션 디버깅과 마찬가지로 AI 시스템 디버깅은 시스템의 문제를 식별하고 해결하는 필수 과정입니다. 모델이 예상대로 또는 책임 있게 작동하지 않는 데 영향을 미치는 많은 요인이 있습니다. 대부분의 전통적인 모델 성능 지표는 모델 성능의 정량적 집계로, 책임 있는 AI 원칙을 위반하는 모델을 분석하기에는 충분하지 않습니다. 또한, 머신 러닝 모델은 결과를 이해하거나 실수를 설명하기 어렵게 만드는 블랙 박스입니다. 이 과정에서 우리는 AI 시스템 디버깅을 돕는 책임 있는 AI 대시보드를 사용하는 방법을 배울 것입니다. 대시보드는 데이터 과학자와 AI 개발자가 다음을 수행할 수 있는 포괄적인 도구를 제공합니다:
+
+* **오류 분석**: 시스템의 공정성 또는 신뢰성에 영향을 미칠 수 있는 모델의 오류 분포를 식별합니다.
+* **모델 개요**: 데이터 코호트 전반에서 모델 성능의 차이를 발견합니다.
+* **데이터 분석**: 데이터 분포를 이해하고 공정성, 포용성 및 신뢰성 문제를 초래할 수 있는 데이터의 잠재적 편향을 식별합니다.
+* **모델 해석 가능성**: 모델의 예측에 영향을 미치는 요소를 이해합니다. 이는 모델의 동작을 설명하는 데 도움이 되며, 이는 투명성과 책임에 중요합니다.
+
+## 🚀 도전
+
+피해가 처음부터 도입되지 않도록 하기 위해 우리는 다음을 수행해야 합니다:
+
+- 시스템 작업에 참여하는 사람들의 배경과 관점을 다양화합니다.
+- 사회의 다양성을 반영하는 데이터 세트에 투자합니다.
+- 머신 러닝 수명 주기 전반에 걸쳐 책임 있는 AI를 감지하고 수정하는 더 나은 방법을 개발합니다.
+
+모델 구축 및 사용에서 모델의 신뢰할 수 없음이 명백한 실제 시나리오를 생각해보세요. 무엇을 더 고려해야 할까요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## 복습 및 자습
+
+이번 강의에서는 머신 러닝에서 공정성과 불공정성의 개념에 대해 기본적인 내용을 배웠습니다.
+
+다음 워크숍을 시청하여 주제에 대해 더 깊이 알아보세요:
+
+- 책임 있는 AI를 추구하며: 원칙을 실천으로 옮기기 - Besmira Nushi, Mehrnoosh Sameki, Amit Sharma
+
+[책임 있는 AI 도구 상자: 책임 있는 AI를 구축하기 위한 오픈 소스 프레임워크](https://www.youtube.com/watch?v=tGgJCrA-MZU "책임 있는 AI 도구 상자: 책임 있는 AI를 구축하기 위한 오픈 소스 프레임워크")
+
+> 🎥 위 링크를 클릭하면 비디오를 볼 수 있습니다: 책임 있는 AI 도구 상자: 책임 있는 AI를 구축하기 위한 오픈 소스 프레임워크 - Besmira Nushi, Mehrnoosh Sameki, Amit Sharma
+
+또한 읽어보세요:
+
+- Microsoft의 책임 있는 AI 리소스 센터: [책임 있는 AI 리소스 – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- Microsoft의 FATE 연구 그룹: [FATE: 공정성, 책임성, 투명성 및 윤리 - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+책임 있는 AI 도구 상자:
+
+- [책임 있는 AI 도구 상자 GitHub 저장소](https://github.com/microsoft/responsible-ai-toolbox)
+
+공정성을 보장하기 위한 Azure Machine Learning 도구에 대해 읽어보세요:
+
+- [Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml?WT.mc_id=academic-77952-leestott)
+
+## 과제
+
+[책임 있는 AI 도구 상자 탐색하기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/3-fairness/assignment.md b/translations/ko/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..d041aa32e
--- /dev/null
+++ b/translations/ko/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# 책임 있는 AI 도구 상자 탐색하기
+
+## 지침
+
+이 수업에서는 데이터 과학자들이 AI 시스템을 분석하고 개선하는 데 도움을 주기 위해 "오픈 소스, 커뮤니티 주도 프로젝트"인 책임 있는 AI 도구 상자에 대해 배웠습니다. 이번 과제에서는 RAI Toolbox의 [노트북](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) 중 하나를 탐색하고, 그 결과를 논문이나 프레젠테이션으로 보고하세요.
+
+## 평가 기준
+
+| 기준 | 탁월함 | 적절함 | 개선 필요 |
+| -------- | --------- | -------- | ----------------- |
+| | Fairlearn의 시스템, 실행한 노트북, 실행 결과에서 도출된 결론을 논의하는 논문 또는 파워포인트 프레젠테이션이 제출됨 | 결론 없이 논문만 제출됨 | 논문이 제출되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역을 사용하여 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/4-techniques-of-ML/README.md b/translations/ko/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..81c1e472c
--- /dev/null
+++ b/translations/ko/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# 머신 러닝 기법
+
+머신 러닝 모델을 구축하고 그 모델이 사용할 데이터를 다루는 과정은 다른 개발 워크플로우와는 매우 다릅니다. 이번 강의에서는 이 과정을 명확히 하고, 알아야 할 주요 기법들을 개괄합니다. 여러분은:
+
+- 머신 러닝을 뒷받침하는 과정을 높은 수준에서 이해할 것입니다.
+- '모델', '예측', '훈련 데이터'와 같은 기본 개념을 탐구할 것입니다.
+
+## [사전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[머신 러닝 기법](https://youtu.be/4NGM0U2ZSHU "초보자를 위한 머신 러닝 - 머신 러닝 기법")
+
+> 🎥 위 링크를 클릭하면 이 강의를 다루는 짧은 동영상을 볼 수 있습니다.
+
+## 소개
+
+높은 수준에서 보면, 머신 러닝(ML) 프로세스를 만드는 기술은 여러 단계로 구성됩니다:
+
+1. **질문 결정**. 대부분의 ML 프로세스는 간단한 조건 프로그램이나 규칙 기반 엔진으로는 답할 수 없는 질문을 하는 것에서 시작합니다. 이러한 질문들은 종종 데이터 모음에 기반한 예측을 중심으로 합니다.
+2. **데이터 수집 및 준비**. 질문에 답하려면 데이터가 필요합니다. 데이터의 품질과 양은 초기 질문에 얼마나 잘 답할 수 있는지를 결정합니다. 데이터를 시각화하는 것은 이 단계에서 중요한 측면입니다. 이 단계는 또한 모델을 구축하기 위해 데이터를 훈련과 테스트 그룹으로 나누는 것을 포함합니다.
+3. **훈련 방법 선택**. 질문과 데이터의 특성에 따라, 데이터를 가장 잘 반영하고 정확한 예측을 할 수 있도록 모델을 훈련시키는 방법을 선택해야 합니다. 이 단계는 특정 전문 지식이 필요하고, 종종 상당한 양의 실험을 요구합니다.
+4. **모델 훈련**. 훈련 데이터를 사용하여 다양한 알고리즘을 사용해 모델을 훈련시켜 데이터의 패턴을 인식하게 합니다. 모델은 데이터의 특정 부분을 더 잘 반영하기 위해 내부 가중치를 조정할 수 있습니다.
+5. **모델 평가**. 수집된 데이터에서 이전에 본 적 없는 데이터(테스트 데이터)를 사용하여 모델의 성능을 평가합니다.
+6. **매개변수 조정**. 모델의 성능을 기반으로 다른 매개변수나 변수를 사용하여 프로세스를 다시 실행할 수 있습니다.
+7. **예측**. 새로운 입력을 사용하여 모델의 정확성을 테스트합니다.
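+
+감을 잡기 위해, 위 단계들이 코드로는 어떻게 이어지는지 보여주는 최소한의 스케치는 다음과 같습니다. Scikit-learn에 내장된 당뇨병 데이터셋을 사용한다는 가정하의 예시이며, 실제 프로젝트에서는 각 단계가 훨씬 정교해집니다:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import r2_score
+from sklearn.model_selection import train_test_split
+
+# 2. 데이터 수집 및 준비: 내장 데이터셋을 불러와 훈련/테스트 그룹으로 분할
+X, y = load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# 3-4. 훈련 방법 선택 및 모델 훈련
+model = LinearRegression()
+model.fit(X_train, y_train)
+
+# 5. 모델 평가: 훈련에 쓰지 않은 테스트 데이터로 성능 확인
+print("R^2:", r2_score(y_test, model.predict(X_test)))
+
+# 7. 예측: 새로운 입력 한 건에 대한 예측
+print("예측값:", model.predict(X_test[:1]))
+```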
+
+## 어떤 질문을 할 것인가
+
+컴퓨터는 데이터에서 숨겨진 패턴을 발견하는 데 특히 능숙합니다. 이러한 능력은 조건 기반 규칙 엔진으로는 쉽게 답할 수 없는 특정 도메인의 질문을 가진 연구자들에게 매우 유용합니다. 예를 들어, 보험 계리(actuarial) 업무에서 데이터 과학자는 흡연자와 비흡연자의 사망률에 대한 규칙을 수작업으로 만들 수 있습니다.
+
+그러나 많은 다른 변수가 방정식에 들어오면, ML 모델은 과거 건강 기록을 기반으로 미래의 사망률을 예측하는 데 더 효율적일 수 있습니다. 더 즐거운 예로는, 특정 위치의 4월 날씨 예측을 위도, 경도, 기후 변화, 해양 근접성, 제트 스트림 패턴 등을 포함한 데이터를 기반으로 예측하는 것입니다.
+
+✅ 이 [슬라이드 자료](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf)는 날씨 분석에서 ML을 사용하는 역사적 관점을 제공합니다.
+
+## 모델 구축 전 작업
+
+모델을 구축하기 전에 완료해야 할 몇 가지 작업이 있습니다. 질문을 테스트하고 모델의 예측을 기반으로 가설을 형성하려면 여러 요소를 식별하고 구성해야 합니다.
+
+### 데이터
+
+질문에 확실히 답할 수 있으려면 적절한 유형의 충분한 데이터가 필요합니다. 이 시점에서 해야 할 두 가지 작업이 있습니다:
+
+- **데이터 수집**. 데이터 분석의 공정성에 대한 이전 강의를 염두에 두고, 데이터를 신중하게 수집하십시오. 이 데이터의 출처, 내재된 편향성, 출처를 문서화하십시오.
+- **데이터 준비**. 데이터 준비 과정에는 여러 단계가 있습니다. 다양한 출처에서 온 데이터를 통합하고 정규화해야 할 수도 있습니다. 문자열을 숫자로 변환하는 것과 같은 다양한 방법으로 데이터의 품질과 양을 개선할 수 있습니다([클러스터링](../../5-Clustering/1-Visualize/README.md)에서처럼). 원본 데이터를 기반으로 새로운 데이터를 생성할 수도 있습니다([분류](../../4-Classification/1-Introduction/README.md)에서처럼). 데이터를 정리하고 편집할 수도 있습니다([웹 앱](../../3-Web-App/README.md) 강의 전처럼). 마지막으로, 훈련 기술에 따라 데이터를 무작위로 섞어야 할 수도 있습니다.
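+
+예를 들어 문자열을 숫자로 변환하고 수치 열을 정규화하는 작업은 아래와 같은 형태가 될 수 있습니다. 열 이름과 값은 설명을 위해 임의로 가정한 것입니다:
+
+```python
+import pandas as pd
+from sklearn.preprocessing import MinMaxScaler
+
+df = pd.DataFrame({
+    "color": ["orange", "white", "orange"],  # 문자열(범주형) 특성
+    "size_cm": [30.0, 12.5, 45.0],           # 수치 특성
+})
+
+# 문자열 범주를 원-핫 인코딩으로 숫자화
+df = pd.get_dummies(df, columns=["color"])
+
+# 수치 열을 0~1 범위로 정규화
+df[["size_cm"]] = MinMaxScaler().fit_transform(df[["size_cm"]])
+print(df)
+```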
+
+✅ 데이터를 수집하고 처리한 후, 데이터의 형태가 의도한 질문에 답할 수 있는지 확인하십시오. [클러스터링](../../5-Clustering/1-Visualize/README.md) 강의에서 발견한 것처럼, 데이터가 주어진 작업에서 잘 작동하지 않을 수 있습니다!
+
+### 특성과 목표
+
+[특성](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection)은 데이터의 측정 가능한 속성입니다. 많은 데이터셋에서 '날짜', '크기', '색상'과 같은 열 머리글로 표현됩니다. 코드에서 보통 `X`로 표현되는 특성 변수는 모델을 훈련시키기 위해 사용되는 입력 변수를 나타냅니다.
+
+목표는 예측하려는 것입니다. 코드에서 보통 `y`로 표현되는 목표는 데이터에 대해 묻고자 하는 질문의 답을 나타냅니다: 12월에 가장 저렴한 호박의 **색상**은 무엇일까요? 샌프란시스코에서 가장 좋은 부동산 **가격**을 가진 지역은 어디일까요? 때로는 목표를 레이블 속성이라고도 합니다.
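+
+예를 들어 가상의 호박 가격 데이터라면, 특성과 목표를 다음과 같이 나눌 수 있습니다(열 이름과 값은 설명용 가정입니다):
+
+```python
+import pandas as pd
+
+pumpkins = pd.DataFrame({
+    "Month": [9, 10, 12],
+    "Variety": ["PIE TYPE", "PIE TYPE", "MINIATURE"],
+    "Price": [13.5, 15.0, 4.0],
+})
+
+X = pumpkins[["Month", "Variety"]]  # 모델 훈련에 쓰이는 입력 특성
+y = pumpkins["Price"]               # 예측하려는 목표(레이블)
+```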
+
+### 특성 변수 선택
+
+🎓 **특성 선택 및 특성 추출** 모델을 구축할 때 어떤 변수를 선택해야 할까요? 아마도 가장 성능이 좋은 모델을 위해 올바른 변수를 선택하는 특성 선택 또는 특성 추출 과정을 거치게 될 것입니다. 그러나 이들은 동일한 것이 아닙니다: "특성 추출은 원래 특성의 함수에서 새로운 특성을 생성하는 반면, 특성 선택은 특성의 하위 집합을 반환합니다." ([출처](https://wikipedia.org/wiki/Feature_selection))
+
+### 데이터 시각화
+
+데이터 과학자의 도구 키트에서 중요한 측면은 Seaborn이나 MatPlotLib과 같은 훌륭한 라이브러리를 사용하여 데이터를 시각화하는 능력입니다. 데이터를 시각적으로 표현하면 활용할 수 있는 숨겨진 상관관계를 발견할 수 있습니다. 시각화는 또한 편향이나 불균형 데이터를 발견하는 데 도움이 될 수 있습니다([분류](../../4-Classification/2-Classifiers-1/README.md)에서 발견한 것처럼).
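+
+예를 들어 Scikit-learn에 내장된 붓꽃(iris) 데이터셋이라면, 몇 줄만으로 특성 쌍 사이의 관계를 눈으로 확인해 볼 수 있습니다(데이터셋 선택은 설명을 위한 가정입니다):
+
+```python
+import matplotlib.pyplot as plt
+import seaborn as sns
+from sklearn.datasets import load_iris
+
+iris = load_iris(as_frame=True).frame  # 네트워크 없이 쓸 수 있는 내장 데이터셋
+sns.pairplot(iris, hue="target")       # 특성 쌍 간의 상관관계와 클래스 분포를 한눈에 확인
+plt.show()
+```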
+
+### 데이터셋 분할
+
+훈련 전에 데이터셋을 불균등한 크기로 나누어야 합니다. 그러나 여전히 데이터를 잘 대표해야 합니다.
+
+- **훈련**. 이 부분은 모델을 훈련시키기 위해 맞추는 데이터셋입니다. 이 세트는 원래 데이터셋의 대부분을 구성합니다.
+- **테스트**. 테스트 데이터셋은 원래 데이터에서 수집된 독립적인 데이터 그룹으로, 구축된 모델의 성능을 확인하는 데 사용됩니다.
+- **검증**. 검증 세트는 모델의 하이퍼파라미터나 아키텍처를 조정하여 모델을 개선하는 데 사용하는 더 작은 독립적인 예제 그룹입니다. 데이터의 크기와 질문에 따라, 이 세트를 구축할 필요가 없을 수도 있습니다([시계열 예측](../../7-TimeSeries/1-Introduction/README.md)에서 언급한 것처럼).
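+
+한 가지 가능한 분할 방법의 스케치는 다음과 같습니다. 훈련 60% / 검증 20% / 테스트 20%라는 비율은 임의의 가정이며, 데이터의 크기와 질문에 따라 달라질 수 있습니다:
+
+```python
+import numpy as np
+from sklearn.model_selection import train_test_split
+
+X = np.arange(100).reshape(50, 2)  # 예시용 가짜 특성 50건
+y = np.arange(50)                  # 예시용 가짜 목표값
+
+# 먼저 테스트용으로 20%를 떼어낸 뒤, 남은 데이터에서 25%(전체의 20%)를 검증용으로 떼어냅니다
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
+
+print(len(X_train), len(X_val), len(X_test))  # 30 10 10
+```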
+
+## 모델 구축
+
+훈련 데이터를 사용하여 다양한 알고리즘을 사용해 데이터를 **훈련**시켜 모델을 구축하는 것이 목표입니다. 모델을 훈련시키면 데이터를 노출하고, 발견된 패턴에 대해 가정을 하며, 이를 검증하고 수락하거나 거부합니다.
+
+### 훈련 방법 결정
+
+질문과 데이터의 특성에 따라 훈련 방법을 선택할 것입니다. 이 과정에서 [Scikit-learn의 문서](https://scikit-learn.org/stable/user_guide.html)를 탐색하며 다양한 방법을 살펴볼 수 있습니다. 경험에 따라 최적의 모델을 구축하기 위해 여러 방법을 시도해야 할 수도 있습니다. 데이터 과학자들이 모델의 성능을 평가하고, 정확도, 편향, 기타 품질 저하 문제를 확인하며, 적절한 훈련 방법을 선택하는 과정을 거칠 가능성이 높습니다.
+
+### 모델 훈련
+
+훈련 데이터가 준비되었으면 이제 모델을 '맞출(fit)' 차례입니다. 많은 ML 라이브러리에서 'model.fit'이라는 코드를 보게 되는데, 이때 특성 변수의 값 배열(보통 'X')과 목표 변수(보통 'y')를 전달합니다.
+
+### 모델 평가
+
+훈련 과정이 완료되면(큰 모델을 훈련시키는 데 여러 번의 반복 또는 '에포크'가 필요할 수 있음), 테스트 데이터를 사용하여 모델의 품질을 평가할 수 있습니다. 이 데이터는 모델이 이전에 분석하지 않은 원래 데이터의 하위 집합입니다. 모델의 품질에 대한 메트릭 테이블을 출력할 수 있습니다.
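+
+예를 들어 분류 모델이라면, 아래처럼 테스트 데이터에 대한 품질 지표 표를 출력해 볼 수 있습니다. 내장 붓꽃 데이터셋과 로지스틱 회귀를 사용한 가정의 예시입니다:
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import classification_report
+from sklearn.model_selection import train_test_split
+
+X, y = load_iris(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
+
+# 클래스별 precision / recall / f1-score를 표 형태로 출력
+print(classification_report(y_test, clf.predict(X_test)))
+```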
+
+🎓 **모델 피팅**
+
+머신 러닝의 맥락에서 모델 피팅은 모델의 기본 함수가 익숙하지 않은 데이터를 분석하려는 시도로서의 정확성을 의미합니다.
+
+🎓 **언더피팅**과 **오버피팅**은 모델의 품질을 저하시키는 일반적인 문제입니다. 모델이 충분히 잘 맞지 않거나 너무 잘 맞아 예측이 훈련 데이터와 너무 밀접하게 또는 너무 느슨하게 일치하는 경우가 있습니다. 오버피팅된 모델은 데이터의 세부 사항과 노이즈를 너무 잘 학습하여 훈련 데이터를 너무 잘 예측합니다. 언더피팅된 모델은 훈련 데이터나 아직 '보지 못한' 데이터를 정확히 분석하지 못하여 정확하지 않습니다.
+
+
+> [Jen Looper](https://twitter.com/jenlooper)의 인포그래픽
+
+## 매개변수 조정
+
+초기 훈련이 완료되면 모델의 품질을 관찰하고 '하이퍼파라미터'를 조정하여 개선할 수 있는지 고려하십시오. 자세한 내용은 [문서](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott)를 참조하십시오.
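+
+예를 들어 교차 검증과 그리드 탐색으로 하이퍼파라미터를 조정하는 과정은 다음과 같은 형태가 될 수 있습니다. 모델(k-최근접 이웃)과 탐색 범위는 임의의 가정입니다:
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.model_selection import GridSearchCV
+from sklearn.neighbors import KNeighborsClassifier
+
+X, y = load_iris(return_X_y=True)
+
+# n_neighbors 후보들을 5겹 교차 검증으로 비교
+search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7]}, cv=5)
+search.fit(X, y)
+
+print("최적 매개변수:", search.best_params_)
+print("교차 검증 점수:", round(search.best_score_, 3))
+```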
+
+## 예측
+
+이제 완전히 새로운 데이터를 사용하여 모델의 정확성을 테스트할 수 있습니다. 모델을 생산에 사용하기 위해 웹 자산을 구축하는 '응용' ML 설정에서는 사용자 입력(예: 버튼 클릭)을 수집하여 변수를 설정하고 모델에 추론 또는 평가를 위해 보낼 수 있습니다.
+
+이 강의에서는 데이터 과학자의 모든 제스처와 더불어, '풀 스택' ML 엔지니어가 되기 위한 여정을 진행하면서 준비, 구축, 테스트, 평가, 예측하는 방법을 발견할 것입니다.
+
+---
+
+## 🚀도전
+
+ML 실무자의 단계를 반영하는 흐름도를 그리세요. 지금 과정에서 자신이 어디에 있는지, 어디에서 어려움을 겪을 것 같은지, 무엇이 쉬워 보이는지 예측해 보세요.
+
+## [사후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## 복습 및 자기 학습
+
+데이터 과학자들이 자신의 일상 업무에 대해 이야기하는 인터뷰를 온라인에서 검색해 보세요. [여기](https://www.youtube.com/watch?v=Z3IjgbbCEfs)에 하나가 있습니다.
+
+## 과제
+
+[데이터 과학자 인터뷰](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/4-techniques-of-ML/assignment.md b/translations/ko/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..d0e27e5e7
--- /dev/null
+++ b/translations/ko/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# 데이터 과학자 인터뷰
+
+## 지침
+
+당신의 회사에서, 사용자 그룹에서, 친구들이나 동료 학생들 중에서, 전문적으로 데이터 과학자로 일하는 사람과 대화해 보세요. 그들의 일상 업무에 대해 짧은 글(500 단어)을 작성하세요. 그들은 전문가인가요, 아니면 '풀 스택'으로 일하나요?
+
+## 채점 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | --------------------- |
+| | 올바른 길이의 에세이가 출처를 명시하고 .doc 파일로 제출됨 | 에세이의 출처가 부정확하거나 요구된 길이보다 짧음 | 에세이가 제출되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인한 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/1-Introduction/README.md b/translations/ko/1-Introduction/README.md
new file mode 100644
index 000000000..75aaa455d
--- /dev/null
+++ b/translations/ko/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# 머신러닝 소개
+
+이 커리큘럼 섹션에서는 머신러닝 분야의 기본 개념, 그것이 무엇인지, 그리고 연구자들이 그것을 다루기 위해 사용하는 기술에 대해 배우게 됩니다. 함께 새로운 ML의 세계를 탐험해 봅시다!
+
+
+> 사진 출처: Bill Oxford on Unsplash
+
+### 강의 목록
+
+1. [머신러닝 소개](1-intro-to-ML/README.md)
+1. [머신러닝과 AI의 역사](2-history-of-ML/README.md)
+1. [공정성과 머신러닝](3-fairness/README.md)
+1. [머신러닝 기법](4-techniques-of-ML/README.md)
+### 저작권
+
+"Introduction to Machine Learning"는 [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) 그리고 [Jen Looper](https://twitter.com/jenlooper)를 포함한 여러 사람들의 ♥️로 작성되었습니다.
+
+"The History of Machine Learning"는 [Jen Looper](https://twitter.com/jenlooper)와 [Amy Boyd](https://twitter.com/AmyKateNicho)의 ♥️로 작성되었습니다.
+
+"Fairness and Machine Learning"는 [Tomomi Imura](https://twitter.com/girliemac)의 ♥️로 작성되었습니다.
+
+"Techniques of Machine Learning"는 [Jen Looper](https://twitter.com/jenlooper)와 [Chris Noring](https://twitter.com/softchris)의 ♥️로 작성되었습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 소스로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/1-Tools/README.md b/translations/ko/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..4d7f0a9f9
--- /dev/null
+++ b/translations/ko/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Python과 Scikit-learn으로 회귀 모델 시작하기
+
+
+
+> 스케치노트: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [이 강의는 R에서도 제공됩니다!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## 소개
+
+이 네 개의 강의에서, 회귀 모델을 구축하는 방법을 배우게 됩니다. 곧 이 모델들이 무엇을 하는지 설명할 것입니다. 하지만 그 전에, 과정을 시작하기 위해 필요한 도구들이 준비되었는지 확인하세요!
+
+이 강의에서는 다음을 배우게 됩니다:
+
+- 로컬 머신 러닝 작업을 위해 컴퓨터를 설정하는 방법.
+- Jupyter 노트북을 사용하는 방법.
+- Scikit-learn 설치 및 사용 방법.
+- 실습을 통해 선형 회귀를 탐구하는 방법.
+
+## 설치 및 설정
+
+[머신 러닝 모델을 구축할 도구 준비](https://youtu.be/-DfeD2k2Kj0 "초보자를 위한 머신 러닝 - 머신 러닝 모델을 구축할 도구 준비")
+
+> 🎥 위 링크를 클릭하여 머신 러닝을 위해 컴퓨터를 설정하는 짧은 비디오를 시청하세요.
+
+1. **Python 설치**. [Python](https://www.python.org/downloads/)이 컴퓨터에 설치되어 있는지 확인하세요. Python은 많은 데이터 과학 및 머신 러닝 작업에 사용됩니다. 대부분의 컴퓨터 시스템에는 이미 Python이 설치되어 있습니다. 일부 사용자에게 설정을 쉽게 해주는 [Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott)도 있습니다.
+
+ 하지만 Python의 일부 사용법은 특정 버전을 요구하는 경우가 있으므로, [가상 환경](https://docs.python.org/3/library/venv.html)에서 작업하는 것이 유용합니다.
+
+2. **Visual Studio Code 설치**. 컴퓨터에 Visual Studio Code가 설치되어 있는지 확인하세요. 기본 설치를 위해 [Visual Studio Code 설치](https://code.visualstudio.com/) 지침을 따르세요. 이 강의에서는 Visual Studio Code에서 Python을 사용할 것이므로, [Python 개발을 위한 Visual Studio Code 설정](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) 방법을 익혀두는 것이 좋습니다.
+
+ > 이 [Learn modules](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott) 컬렉션을 통해 Python에 익숙해지세요.
+ >
+ > [](https://youtu.be/yyQM70vi7V8 "Visual Studio Code로 Python 설정")
+ >
+ > 🎥 위 이미지를 클릭하여 Visual Studio Code 내에서 Python을 사용하는 비디오를 시청하세요.
+
+3. **Scikit-learn 설치**. [이 지침](https://scikit-learn.org/stable/install.html)을 따라 Scikit-learn을 설치하세요. Python 3을 사용해야 하므로 가상 환경을 사용하는 것이 좋습니다. M1 Mac에 이 라이브러리를 설치하는 경우, 위 링크된 페이지에 특별한 지침이 있습니다.
+
+4. **Jupyter Notebook 설치**. [Jupyter 패키지](https://pypi.org/project/jupyter/)를 설치해야 합니다.
+
+## ML 작성 환경
+
+**노트북**을 사용하여 Python 코드를 개발하고 머신 러닝 모델을 만들 것입니다. 이 파일 유형은 데이터 과학자들에게 흔히 사용되며, `.ipynb` 확장자로 식별할 수 있습니다.
+
+노트북은 개발자가 코드를 작성하고 코드 주위에 주석을 추가하고 문서를 작성할 수 있는 대화형 환경을 제공합니다. 이는 실험적이거나 연구 지향적인 프로젝트에 매우 유용합니다.
+
+[](https://youtu.be/7E-jC8FLA2E "초보자를 위한 머신 러닝 - 회귀 모델을 구축하기 위한 Jupyter 노트북 설정")
+
+> 🎥 위 이미지를 클릭하여 이 연습을 진행하는 짧은 비디오를 시청하세요.
+
+### 연습 - 노트북 사용하기
+
+이 폴더에서 _notebook.ipynb_ 파일을 찾을 수 있습니다.
+
+1. Visual Studio Code에서 _notebook.ipynb_ 파일을 엽니다.
+
+   Python 3+로 Jupyter 서버가 시작됩니다. 노트북의 여러 부분에 `run`으로 표시된 코드 조각이 있습니다. 재생 버튼 모양의 아이콘을 선택하여 코드 블록을 실행할 수 있습니다.
+
+2. `md` 아이콘을 선택하고 약간의 마크다운과 다음 텍스트 **# Welcome to your notebook**을 추가합니다.
+
+ 다음으로, Python 코드를 추가합니다.
+
+3. 코드 블록에 **print('hello notebook')**을 입력합니다.
+4. 화살표를 선택하여 코드를 실행합니다.
+
+ 출력된 문장을 볼 수 있어야 합니다:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+코드와 주석을 섞어서 노트북을 자체 문서화할 수 있습니다.
+
+✅ 웹 개발자의 작업 환경과 데이터 과학자의 작업 환경이 얼마나 다른지 잠시 생각해보세요.
+
+## Scikit-learn 시작하기
+
+이제 로컬 환경에 Python이 설정되었고 Jupyter 노트북에 익숙해졌으므로, Scikit-learn에도 익숙해져 봅시다. Scikit-learn은 머신 러닝 작업을 수행하는 데 도움을 주는 [광범위한 API](https://scikit-learn.org/stable/modules/classes.html#api-ref)를 제공합니다.
+
+그들의 [웹사이트](https://scikit-learn.org/stable/getting_started.html)에 따르면, "Scikit-learn은 지도 학습과 비지도 학습을 지원하는 오픈 소스 머신 러닝 라이브러리입니다. 또한 모델 적합, 데이터 전처리, 모델 선택 및 평가 등을 위한 다양한 도구를 제공합니다."
+
+이 강의에서는 Scikit-learn과 다른 도구들을 사용하여 '전통적인 머신 러닝' 작업을 수행할 머신 러닝 모델을 구축할 것입니다. 우리는 신경망과 딥러닝을 일부러 피했으며, 이는 곧 출시될 'AI for Beginners' 커리큘럼에서 다룰 예정입니다.
+
+Scikit-learn은 모델을 구축하고 평가하는 것을 간단하게 만듭니다. 주로 숫자 데이터를 사용하며 학습 도구로 사용할 수 있는 여러 가지 준비된 데이터셋을 포함하고 있습니다. 또한 학생들이 시도해볼 수 있는 사전 구축된 모델도 포함되어 있습니다. 기본 데이터를 사용하여 Scikit-learn으로 첫 번째 머신 러닝 모델을 구축하는 과정을 탐구해 봅시다.
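+
+예를 들어, 내장된 데이터셋 로더들을 다음과 같이 나열해 볼 수 있습니다. 아래는 최소한의 예시 스케치이며, 출력되는 목록은 설치된 Scikit-learn 버전에 따라 다를 수 있습니다:
+
+```python
+# 'load_'로 시작하는 이름을 나열합니다 (내장 토이 데이터셋 로더 포함)
+from sklearn import datasets
+
+loaders = [name for name in dir(datasets) if name.startswith('load_')]
+print(loaders)  # 예: load_diabetes, load_iris, load_wine 등
+```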
+
+## 연습 - 첫 번째 Scikit-learn 노트북
+
+> 이 튜토리얼은 Scikit-learn 웹사이트의 [선형 회귀 예제](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py)에서 영감을 받았습니다.
+
+[](https://youtu.be/2xkXL5EUpS0 "초보자를 위한 머신 러닝 - Python에서 첫 번째 선형 회귀 프로젝트")
+
+> 🎥 위 이미지를 클릭하여 이 연습을 진행하는 짧은 비디오를 시청하세요.
+
+이 강의와 관련된 _notebook.ipynb_ 파일에서 '쓰레기통' 아이콘을 눌러 모든 셀을 지웁니다.
+
+이 섹션에서는 학습 목적으로 Scikit-learn에 내장된 작은 당뇨병 데이터셋을 사용할 것입니다. 당뇨병 환자를 위한 치료를 테스트하려고 한다고 가정해 봅시다. 머신 러닝 모델은 변수 조합을 기반으로 어떤 환자가 치료에 더 잘 반응할지 결정하는 데 도움을 줄 수 있습니다. 시각화된 매우 기본적인 회귀 모델조차도 이론적인 임상 시험을 조직하는 데 도움이 될 수 있는 변수에 대한 정보를 보여줄 수 있습니다.
+
+✅ 다양한 회귀 방법이 있으며, 선택하는 방법은 찾고자 하는 답에 따라 다릅니다. 주어진 나이의 사람의 예상 키를 예측하려면, **숫자 값**을 찾고 있으므로 선형 회귀를 사용합니다. 특정 요리가 비건인지 아닌지를 알아내고 싶다면, **카테고리 할당**을 찾고 있으므로 로지스틱 회귀를 사용합니다. 로지스틱 회귀에 대해서는 나중에 더 배우게 될 것입니다. 데이터를 통해 물어볼 수 있는 질문들에 대해 생각해 보고, 어떤 방법이 더 적합할지 생각해 보세요.
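+
+아래는 이 차이를 보여주는 최소한의 스케치입니다. 여기서 사용한 숫자와 라벨은 설명을 위해 임의로 만든 가상의 데이터입니다:
+
+```python
+# 숫자 값 예측(선형 회귀) vs 카테고리 할당(로지스틱 회귀)을 비교하는 가상의 예시
+import numpy as np
+from sklearn.linear_model import LinearRegression, LogisticRegression
+
+ages = np.array([[5], [10], [15], [20]])   # 입력: 나이 (가상의 값)
+heights = np.array([110, 140, 165, 175])   # 숫자 대상: 키(cm)
+is_vegan = np.array([0, 1, 0, 1])          # 카테고리 대상: 0 또는 1
+
+print(LinearRegression().fit(ages, heights).predict([[12]]))     # 숫자 값을 반환
+print(LogisticRegression().fit(ages, is_vegan).predict([[12]]))  # 클래스(0/1)를 반환
+```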
+
+이 작업을 시작해 봅시다.
+
+### 라이브러리 가져오기
+
+이 작업을 위해 몇 가지 라이브러리를 가져오겠습니다:
+
+- **matplotlib**. [그래프 도구](https://matplotlib.org/)로 유용하며, 선 그래프를 만드는 데 사용할 것입니다.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html)는 Python에서 숫자 데이터를 처리하는 데 유용한 라이브러리입니다.
+- **sklearn**. 이는 [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) 라이브러리입니다.
+
+작업을 도와줄 라이브러리를 가져옵니다.
+
+1. 다음 코드를 입력하여 라이브러리를 가져옵니다:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   위에서는 `matplotlib`와 `numpy`를 가져오고, `sklearn`에서 `datasets`, `linear_model`, `model_selection`을 가져옵니다. `model_selection`은 데이터를 훈련 세트와 테스트 세트로 분할하는 데 사용됩니다.
+
+### The diabetes dataset
+
+The built-in [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) includes 442 samples of data around diabetes, with 10 feature variables, some of which include:
+
+- age: age in years
+- bmi: body mass index
+- bp: average blood pressure
+- s1 tc: T-Cells (a type of white blood cells)
+
+✅ This dataset includes the concept of 'sex' as a feature variable important to research around diabetes. Many medical datasets include this type of binary classification. Think a bit about how categorizations such as this might exclude certain parts of a population from treatments.
+
+Now, load up the X and y data.
+
+> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
+
+In a new code cell, load the diabetes dataset by calling `load_diabetes()`. The input `return_X_y=True` signals that `X` will be a data matrix, and `y` will be the regression target.
+
+1. 데이터 매트릭스의 모양과 첫 번째 요소를 보여주는 print 명령을 추가합니다:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   응답으로 받는 것은 튜플입니다. 튜플의 처음 두 값을 각각 `X`와 `y`에 할당하는 것입니다. [튜플에 대해 더 알아보기](https://wikipedia.org/wiki/Tuple).
+
+ 이 데이터는 442개의 항목으로 구성되어 있으며, 각 배열은 10개의 요소로 구성되어 있음을 알 수 있습니다:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+ ✅ 데이터와 회귀 대상 간의 관계에 대해 잠시 생각해 보세요. 선형 회귀는 특성 X와 대상 변수 y 간의 관계를 예측합니다. 문서에서 당뇨병 데이터셋의 [대상](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)을 찾을 수 있습니까? 이 데이터셋은 주어진 대상에 대해 무엇을 나타내고 있습니까?
+
+2. 다음으로, 플롯할 데이터셋의 일부를 선택합니다. `:` 연산자를 사용하여 모든 행을 선택한 다음, 인덱스 (2)로 3번째 열을 선택합니다. 또한 `reshape(n_rows, n_columns)`를 사용하여, 플롯에 필요한 2D 배열로 데이터의 모양을 바꿀 수 있습니다. 파라미터 중 하나가 -1이면, 해당 차원이 자동으로 계산됩니다.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+ ✅ 언제든지 데이터를 출력하여 모양을 확인하세요.
+
+3. 이제 플롯할 준비가 되었으므로, 머신이 이 데이터셋의 숫자 간의 논리적 분할을 결정하는 데 도움이 되는지 확인할 수 있습니다. 이를 위해 데이터(X)와 대상(y)을 테스트 및 훈련 세트로 분할해야 합니다. Scikit-learn에는 이를 수행하는 간단한 방법이 있습니다. 주어진 지점에서 테스트 데이터를 분할할 수 있습니다.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. 이제 모델을 훈련할 준비가 되었습니다! 선형 회귀 모델을 로드하고 `model.fit()`을 사용하여 X 및 y 훈련 세트로 훈련합니다:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()`은 TensorFlow와 같은 많은 ML 라이브러리에서 보게 될 함수입니다
+
+5. 그런 다음, `predict()` 함수를 사용하여 테스트 데이터에 대한 예측을 생성합니다. 이 예측은 데이터 그룹 사이에 선을 그리는 데 사용됩니다.
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. 이제 데이터를 플롯으로 표시할 시간입니다. Matplotlib은 이 작업에 매우 유용한 도구입니다. 모든 X 및 y 테스트 데이터를 산점도로 만들고, 모델의 데이터 그룹 사이에서 가장 적절한 위치에 선을 그리기 위해 예측을 사용합니다.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+ ✅ 여기서 무슨 일이 일어나고 있는지 잠시 생각해 보세요. 많은 작은 데이터 점들 사이에 직선이 그려져 있지만, 정확히 무엇을 하고 있습니까? 이 선을 사용하여 새로운, 보이지 않는 데이터 포인트가 플롯의 y축과 관련하여 어디에 맞아야 하는지 예측할 수 있는 방법을 볼 수 있습니까? 이 모델의 실용적인 사용을 말로 설명해 보세요.
+
+축하합니다, 첫 번째 선형 회귀 모델을 구축하고, 이를 사용하여 예측을 생성하고, 플롯에 표시했습니다!
+
+---
+## 🚀도전
+
+이 데이터셋의 다른 변수를 플롯해 보세요. 힌트: 이 줄을 편집하세요: `X = X[:,2]`. 이 데이터셋의 목표를 고려할 때, 당뇨병의 진행에 대해 무엇을 발견할 수 있습니까?
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## 복습 및 자습
+
+이 튜토리얼에서는 단순 선형 회귀를 사용했으며, 단변량 또는 다변량 선형 회귀를 사용하지 않았습니다. 이 방법들 간의 차이점에 대해 조금 읽어보거나, [이 비디오](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)를 시청해 보세요.
+
+회귀의 개념에 대해 더 읽어보고, 이 기법으로 어떤 종류의 질문에 답할 수 있는지 생각해 보세요. 이 [튜토리얼](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott)을 통해 이해를 깊게 하세요.
+
+## 과제
+
+[다른 데이터셋](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 원어를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/1-Tools/assignment.md b/translations/ko/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..b404180ca
--- /dev/null
+++ b/translations/ko/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Scikit-learn을 사용한 회귀 분석
+
+## 지침
+
+Scikit-learn의 [Linnerud 데이터셋](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud)을 살펴보세요. 이 데이터셋에는 여러 [타겟](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset)이 있습니다: '이 데이터셋은 피트니스 클럽에서 20명의 중년 남성으로부터 수집된 세 가지 운동 (데이터) 및 세 가지 생리학적 (타겟) 변수를 포함합니다.'
+
+자신의 말로 허리 둘레와 수행된 윗몸 일으키기 횟수 간의 관계를 나타내는 회귀 모델을 만드는 방법을 설명하세요. 이 데이터셋의 다른 데이터 포인트에 대해서도 동일하게 해보세요.
+
+## 채점 기준
+
+| 기준 | 우수한 수준 | 적절한 수준 | 개선이 필요함 |
+| ----------------------------- | ------------------------------------ | ----------------------------- | -------------------------- |
+| 설명 문단 제출 | 잘 작성된 문단이 제출됨 | 몇 문장만 제출됨 | 설명이 제공되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 원어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/1-Tools/solution/Julia/README.md b/translations/ko/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..09c854c61
--- /dev/null
+++ b/translations/ko/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있음을 유의하시기 바랍니다. 원본 문서는 해당 언어로 작성된 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/2-Data/README.md b/translations/ko/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..fc7979d31
--- /dev/null
+++ b/translations/ko/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Scikit-learn을 사용하여 회귀 모델 구축: 데이터 준비 및 시각화
+
+
+
+> 인포그래픽 제작: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [사전 강의 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [이 강의는 R로도 제공됩니다!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## 소개
+
+이제 Scikit-learn을 사용하여 머신러닝 모델을 구축하기 위한 도구를 설정했으므로 데이터를 통해 질문을 시작할 준비가 되었습니다. 데이터를 다루고 ML 솔루션을 적용할 때, 데이터셋의 잠재력을 올바르게 해제하기 위해 올바른 질문을 하는 방법을 이해하는 것이 매우 중요합니다.
+
+이 강의에서 배우게 될 내용:
+
+- 모델 구축을 위해 데이터를 준비하는 방법.
+- Matplotlib을 사용하여 데이터를 시각화하는 방법.
+
+## 데이터에 올바른 질문하기
+
+답을 얻고자 하는 질문은 어떤 유형의 ML 알고리즘을 사용할지를 결정합니다. 그리고 얻은 답변의 품질은 데이터의 특성에 크게 의존합니다.
+
+이 강의에서 제공된 [데이터](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv)를 살펴보세요. 이 .csv 파일을 VS Code에서 열 수 있습니다. 빠르게 훑어보면 공백과 문자열 및 숫자 데이터의 혼합이 있음을 바로 알 수 있습니다. 또한 'Package'라는 이상한 열이 있는데, 여기에는 'sacks', 'bins' 등의 값이 혼합되어 있습니다. 사실, 이 데이터는 약간 엉망입니다.
+
+[](https://youtu.be/5qGjczWTrDQ "ML for beginners - How to Analyze and Clean a Dataset")
+
+> 🎥 위 이미지를 클릭하면 이 강의를 위해 데이터를 준비하는 과정을 보여주는 짧은 비디오를 볼 수 있습니다.
+
+사실, 완전히 준비된 데이터셋을 즉시 사용할 수 있도록 제공받는 경우는 매우 드뭅니다. 이 강의에서는 표준 Python 라이브러리를 사용하여 원시 데이터셋을 준비하는 방법을 배울 것입니다. 또한 데이터를 시각화하는 다양한 기술을 배울 것입니다.
+
+## 사례 연구: '호박 시장'
+
+이 폴더의 루트 `data` 폴더에는 [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv)라는 .csv 파일이 있습니다. 이 파일에는 도시별로 그룹화된 호박 시장에 대한 1757개의 데이터가 포함되어 있습니다. 이 데이터는 미국 농무부에서 배포한 [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)에서 추출한 원시 데이터입니다.
+
+### 데이터 준비
+
+이 데이터는 공공 도메인에 있습니다. USDA 웹사이트에서 도시별로 여러 개의 파일로 다운로드할 수 있습니다. 너무 많은 파일을 피하기 위해, 모든 도시 데이터를 하나의 스프레드시트로 결합했습니다. 따라서 데이터를 약간 _준비_ 했습니다. 이제 데이터를 좀 더 자세히 살펴보겠습니다.
+
+### 호박 데이터 - 초기 결론
+
+이 데이터에 대해 무엇을 알 수 있나요? 이미 문자열, 숫자, 공백 및 이상한 값이 혼합되어 있음을 보았습니다.
+
+회귀 기법을 사용하여 이 데이터에 어떤 질문을 할 수 있을까요? 예를 들어 "주어진 달에 판매되는 호박의 가격을 예측하세요"라는 질문을 생각해볼 수 있습니다. 데이터를 다시 살펴보면, 이 작업에 필요한 데이터 구조를 만들기 위해 몇 가지 변경이 필요합니다.
+
+## 연습 - 호박 데이터 분석
+
+[판다스(Pandas)](https://pandas.pydata.org/)를 사용하여 이 호박 데이터를 분석하고 준비해 봅시다. Pandas는 데이터를 다루기에 매우 유용한 도구입니다.
+
+### 먼저, 누락된 날짜 확인
+
+먼저 누락된 날짜를 확인해야 합니다:
+
+1. 날짜를 월 형식으로 변환합니다(이 날짜는 미국 날짜 형식이므로 `MM/DD/YYYY` 형식입니다).
+2. 월을 새 열에 추출합니다.
+
+Visual Studio Code에서 _notebook.ipynb_ 파일을 열고 스프레드시트를 새로운 Pandas 데이터프레임에 가져옵니다.
+
+1. `head()` 함수를 사용하여 처음 다섯 개의 행을 봅니다.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+ ✅ 마지막 다섯 개의 행을 보려면 어떤 함수를 사용하시겠습니까?
+
+1. 현재 데이터프레임에 누락된 데이터가 있는지 확인합니다:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+ 누락된 데이터가 있지만, 현재 작업에는 중요하지 않을 수 있습니다.
+
+1. 데이터프레임을 더 쉽게 다루기 위해, `loc` 함수를 사용하여 필요한 열만 선택합니다. 이 함수는 원본 데이터프레임에서 행 그룹(첫 번째 파라미터로 전달)과 열 그룹(두 번째 파라미터로 전달)을 추출합니다. 아래의 경우 표현식 `:`는 "모든 행"을 의미합니다.
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### 두 번째, 호박의 평균 가격 결정
+
+주어진 달에 호박의 평균 가격을 결정하는 방법을 생각해 보세요. 이 작업을 위해 어떤 열을 선택하시겠습니까? 힌트: 세 개의 열이 필요합니다.
+
+해결책: `Low Price`와 `High Price` 열의 평균을 구하여 새 Price 열을 채우고, Date 열을 월만 표시하도록 변환합니다. 다행히도 위의 확인에 따르면 날짜나 가격에 누락된 데이터는 없습니다.
+
+1. 평균을 계산하려면 다음 코드를 추가합니다:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+ ✅ `print(month)`을 사용하여 원하는 데이터를 출력해 볼 수 있습니다.
+
+2. 변환된 데이터를 새로운 Pandas 데이터프레임에 복사합니다:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+ 데이터프레임을 출력하면 새로운 회귀 모델을 구축할 수 있는 깔끔하고 정돈된 데이터셋을 볼 수 있습니다.
+
+### 잠깐! 이상한 점이 있습니다
+
+`Package` 열을 살펴보면, 호박이 매우 다양한 구성으로 판매된다는 것을 알 수 있습니다. 일부는 '1 1/9 bushel' 단위로, 일부는 '1/2 bushel' 단위로, 일부는 호박 한 개 단위로, 일부는 파운드 단위로, 일부는 폭이 제각각인 큰 상자 단위로 판매됩니다.
+
+> 호박은 일관된 기준으로 무게를 재기가 매우 어려운 것 같습니다
+
+원본 데이터를 파고들어 보면, `Unit of Sale`이 'EACH' 또는 'PER BIN'인 항목은 `Package` 유형도 인치 단위, bin 단위 또는 'each'라는 점이 흥미롭습니다. 호박은 일관된 기준으로 무게를 재기가 매우 어려워 보이므로, `Package` 열에 'bushel' 문자열이 포함된 호박만 선택하여 필터링해 봅시다.
+
+1. 초기 .csv 가져오기 아래에 필터를 추가합니다:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+ 이제 데이터를 출력하면, 버셸 단위로 판매되는 호박의 약 415개의 행만 얻을 수 있습니다.
+
+### 잠깐! 할 일이 하나 더 있습니다
+
+버셸 양이 행마다 다르다는 것을 눈치채셨나요? 가격을 표준화하여 버셸 단위로 가격을 표시해야 합니다. 따라서 이를 표준화하기 위해 약간의 수학을 사용해야 합니다.
+
+1. new_pumpkins 데이터프레임을 생성한 블록 뒤에 다음 줄을 추가합니다:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308)에 따르면, 버셸의 무게는 생산물의 종류에 따라 다릅니다. 이는 부피 측정 단위입니다. "예를 들어, 토마토 한 버셸은 56파운드로 측정됩니다... 잎과 채소는 더 적은 무게로 더 많은 공간을 차지하므로, 시금치 한 버셸은 20파운드에 불과합니다." 이는 매우 복잡합니다! 버셸에서 파운드로의 변환을 신경 쓰지 말고, 대신 버셸 단위로 가격을 표시합시다. 그러나 호박의 버셸을 연구하면서 데이터의 특성을 이해하는 것이 얼마나 중요한지 알 수 있습니다!
+
+이제 버셸 측정 단위를 기준으로 단위당 가격을 분석할 수 있습니다. 데이터를 한 번 더 출력하면 표준화된 것을 볼 수 있습니다.
+
+✅ 반 버셸로 판매되는 호박이 매우 비싸다는 것을 눈치채셨나요? 그 이유를 알 수 있나요? 힌트: 작은 호박은 큰 호박보다 훨씬 비쌉니다. 이는 버셸당 많은 공간을 차지하지 않는 큰 파이 호박보다 작은 호박이 훨씬 더 많이 있기 때문입니다.
+
+## 시각화 전략
+
+데이터 과학자의 역할 중 하나는 자신이 작업하는 데이터의 품질과 특성을 보여주는 것입니다. 이를 위해 종종 흥미로운 시각화, 즉 플롯, 그래프, 차트를 생성하여 데이터의 다양한 측면을 보여줍니다. 이렇게 함으로써 시각적으로 관계와 격차를 쉽게 파악할 수 있습니다.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML for beginners - How to Visualize Data with Matplotlib")
+
+> 🎥 위 이미지를 클릭하면 이 강의를 위해 데이터를 시각화하는 과정을 보여주는 짧은 비디오를 볼 수 있습니다.
+
+시각화는 또한 데이터에 가장 적합한 머신러닝 기법을 결정하는 데 도움이 될 수 있습니다. 예를 들어, 선을 따르는 것처럼 보이는 산점도는 데이터가 선형 회귀 연습에 적합하다는 것을 나타냅니다.
+
+Jupyter 노트북에서 잘 작동하는 데이터 시각화 라이브러리 중 하나는 [Matplotlib](https://matplotlib.org/)입니다 (이전 강의에서도 보았습니다).
+
+> 데이터 시각화에 대한 더 많은 경험을 쌓으려면 [이 튜토리얼](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott)을 참조하세요.
+
+## 연습 - Matplotlib 실험
+
+방금 만든 새로운 데이터프레임을 표시하기 위해 몇 가지 기본 플롯을 만들어 보세요. 기본 선 그래프는 무엇을 보여줄까요?
+
+1. 파일 상단의 Pandas 가져오기 아래에 Matplotlib을 가져옵니다:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. 전체 노트북을 다시 실행하여 새로 고칩니다.
+1. 노트북 하단에 데이터를 박스로 플롯하는 셀을 추가합니다:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ 이 플롯이 유용한가요? 무엇이 놀랍나요?
+
+ 이 플롯은 주어진 달의 데이터 분포를 점으로 표시할 뿐이므로 특별히 유용하지 않습니다.
+
+### 유용하게 만들기
+
+유용한 데이터를 표시하려면 데이터를 그룹화해야 합니다. y축에 달을 표시하고 데이터 분포를 보여주는 플롯을 만들어 봅시다.
+
+1. 그룹화된 막대 차트를 생성하는 셀을 추가합니다:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ 이 데이터 시각화는 더 유용합니다! 9월과 10월에 호박의 가격이 가장 높다는 것을 나타내는 것 같습니다. 이는 예상과 일치하나요? 그 이유는 무엇인가요?
+
+---
+
+## 🚀도전 과제
+
+Matplotlib이 제공하는 다양한 시각화 유형을 탐색해 보세요. 회귀 문제에 가장 적합한 유형은 무엇인가요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## 복습 및 자습
+
+데이터를 시각화하는 다양한 방법을 살펴보세요. 사용할 수 있는 다양한 라이브러리를 목록화하고, 주어진 작업 유형에 가장 적합한 라이브러리를 기록하세요. 예를 들어 2D 시각화와 3D 시각화의 차이점을 조사해 보세요. 무엇을 발견하셨나요?
+
+## 과제
+
+[시각화 탐구](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 원어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/2-Data/assignment.md b/translations/ko/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..f8394f3a8
--- /dev/null
+++ b/translations/ko/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# 시각화 탐구
+
+데이터 시각화를 위한 여러 가지 라이브러리가 있습니다. 이 수업의 Pumpkin 데이터를 사용하여 샘플 노트북에서 matplotlib과 seaborn을 사용해 몇 가지 시각화를 만들어 보세요. 어떤 라이브러리가 더 사용하기 쉬운가요?
+## 평가 기준
+
+| 기준 | 모범적 | 적절한 | 개선 필요 |
+| -------- | --------- | -------- | ----------------- |
+| | 두 가지 탐구/시각화가 포함된 노트북이 제출됨 | 한 가지 탐구/시각화가 포함된 노트북이 제출됨 | 노트북이 제출되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확한 내용이 포함될 수 있습니다. 원본 문서의 원어를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인한 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/2-Data/solution/Julia/README.md b/translations/ko/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..a4b790306
--- /dev/null
+++ b/translations/ko/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서는 원어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/3-Linear/README.md b/translations/ko/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..2fc08454d
--- /dev/null
+++ b/translations/ko/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Scikit-learn을 사용한 회귀 모델 구축: 네 가지 회귀 방법
+
+
+> 인포그래픽 by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [이 강의는 R로도 제공됩니다!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### 소개
+
+지금까지 호박 가격 데이터셋을 사용하여 회귀가 무엇인지 탐구하고, Matplotlib을 사용하여 시각화하는 방법을 배웠습니다.
+
+이제 머신러닝을 위한 회귀에 대해 더 깊이 탐구할 준비가 되었습니다. 시각화는 데이터를 이해하는 데 도움이 되지만, 머신러닝의 진정한 힘은 _모델 훈련_에서 나옵니다. 모델은 과거 데이터를 바탕으로 데이터 종속성을 자동으로 캡처하며, 이를 통해 모델이 보지 못한 새로운 데이터의 결과를 예측할 수 있습니다.
+
+이번 강의에서는 _기본 선형 회귀_와 _다항 회귀_의 두 가지 회귀 유형과 이 기술들의 수학적 기초에 대해 배울 것입니다. 이 모델들은 다양한 입력 데이터를 바탕으로 호박 가격을 예측할 수 있게 해줍니다.
+
+[](https://youtu.be/CRxFT8oTDMg "초보자를 위한 머신러닝 - 선형 회귀 이해하기")
+
+> 🎥 위 이미지를 클릭하여 선형 회귀에 대한 짧은 비디오 개요를 확인하세요.
+
+> 이 커리큘럼 전반에 걸쳐 우리는 최소한의 수학 지식을 가정하며, 다른 분야에서 온 학생들도 접근할 수 있도록 노트, 🧮 콜아웃, 다이어그램 및 기타 학습 도구를 활용합니다.
+
+### 선행 조건
+
+현재까지 우리가 다루고 있는 호박 데이터의 구조에 익숙해야 합니다. 이 강의의 _notebook.ipynb_ 파일에 사전 로드 및 사전 정리된 데이터를 찾을 수 있습니다. 파일에서 호박 가격은 새로운 데이터 프레임에 부셸 단위로 표시됩니다. Visual Studio Code에서 이 노트북을 커널에서 실행할 수 있는지 확인하세요.
+
+### 준비
+
+상기하자면, 다음과 같은 질문에 답하기 위해 이 데이터를 로드하는 것입니다:
+
+- 호박을 사기에 가장 좋은 시기는 언제인가요?
+- 미니어처 호박 한 상자의 가격은 얼마일까요?
+- 반 부셸 바구니로 사야 하나요, 아니면 1 1/9 부셸 상자로 사야 하나요?
+
+이 데이터를 계속 탐구해 봅시다.
+
+이전 강의에서 Pandas 데이터 프레임을 생성하고 원래 데이터셋의 일부로 채워 부셸 단위로 가격을 표준화했습니다. 하지만 그렇게 함으로써 약 400개의 데이터 포인트만 수집할 수 있었고, 그것도 가을 달만 해당되었습니다.
+
+이번 강의의 노트북에 사전 로드된 데이터를 확인해 보세요. 데이터는 사전 로드되어 있으며 초기 산점도는 월별 데이터를 보여줍니다. 데이터를 더 정리하면 데이터의 특성에 대해 더 자세히 알 수 있을지도 모릅니다.
+
+## 선형 회귀선
+
+1강에서 배운 것처럼, 선형 회귀 연습의 목표는 다음을 위해 선을 그릴 수 있는 것입니다:
+
+- **변수 관계 표시**. 변수 간의 관계를 보여줍니다.
+- **예측 수행**. 새로운 데이터 포인트가 그 선과의 관계에서 어디에 위치할지 정확하게 예측합니다.
+
+이 유형의 선을 그리는 것은 **최소 제곱 회귀**에서 일반적입니다. '최소 제곱'이라는 용어는 회귀선 주변의 모든 데이터 포인트가 제곱되고 더해진다는 것을 의미합니다. 이상적으로는 최종 합계가 가능한 한 작아야 합니다. 왜냐하면 우리는 낮은 오류 수, 즉 `least-squares`을 원하기 때문입니다.
+
+우리는 모든 데이터 포인트로부터의 누적 거리가 가장 적은 선을 모델링하고자 합니다. 또한 방향보다는 크기에 관심이 있기 때문에 항을 더하기 전에 제곱합니다.
+
+> **🧮 수학을 보여줘**
+>
+> 이 선은 _최적 적합선_이라고 불리며, [방정식](https://en.wikipedia.org/wiki/Simple_linear_regression)으로 표현될 수 있습니다:
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` is the 'explanatory variable'. `Y` is the 'dependent variable'. The slope of the line is `b` and `a` is the y-intercept, which refers to the value of `Y` when `X = 0`.
+>
+>
+>
+> First, calculate the slope `b`. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> In other words, and referring to our pumpkin data's original question: "predict the price of a pumpkin per bushel by month", `X` would refer to the price and `Y` would refer to the month of sale.
+>
+>
+>
+> Calculate the value of Y. If you're paying around $4, it must be April! Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> The math that calculates the line must demonstrate the slope of the line, which is also dependent on the intercept, or where `Y` is situated when `X = 0`.
+>
+> You can observe the method of calculation for these values on the [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) web site. Also visit [this Least-squares calculator](https://www.mathsisfun.com/data/least-squares-calculator.html) to watch how the numbers' values impact the line.
+
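+As a rough sketch of the idea above (with made-up numbers), you can fit `Y = a + bX` by least squares using NumPy; note that `np.polyfit` returns the slope first, then the intercept:
+
+```python
+# Minimal least-squares sketch with invented data points
+import numpy as np
+
+X = np.array([1, 2, 3, 4, 5])            # explanatory variable
+Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # dependent variable
+
+b, a = np.polyfit(X, Y, deg=1)           # degree-1 fit: slope b and intercept a
+print(f"Y = {a:.2f} + {b:.2f} * X")
+```
+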
+## Correlation
+
+One more term to understand is the **Correlation Coefficient** between given X and Y variables. Using a scatterplot, you can quickly visualize this coefficient. A plot with datapoints scattered in a neat line have high correlation, but a plot with datapoints scattered everywhere between X and Y have a low correlation.
+
+A good linear regression model will be one that has a high (nearer to 1 than 0) Correlation Coefficient using the Least-Squares Regression method with a line of regression.
+
+✅ Run the notebook accompanying this lesson and look at the Month to Price scatterplot. Does the data associating Month to Price for pumpkin sales seem to have high or low correlation, according to your visual interpretation of the scatterplot? Does that change if you use more fine-grained measure instead of `Month`, eg. *day of the year* (i.e. number of days since the beginning of the year)?
+
+In the code below, we will assume that we have cleaned up the data, and obtained a data frame called `new_pumpkins`, similar to the following:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> The code to clean the data is available in [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). We have performed the same cleaning steps as in the previous lesson, and have calculated the `DayOfYear` column using the following expression:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+이제 선형 회귀의 수학적 배경을 이해했으니, 호박 패키지의 최적의 가격을 예측할 수 있는 회귀 모델을 만들어 봅시다. 휴일 호박 밭을 위해 호박을 사는 사람은 이 정보를 통해 호박 패키지 구매를 최적화할 수 있습니다.
+
+## 상관 관계 찾기
+
+[](https://youtu.be/uoRq-lW2eQo "초보자를 위한 머신러닝 - 상관 관계 찾기: 선형 회귀의 핵심")
+
+> 🎥 위 이미지를 클릭하여 상관 관계에 대한 짧은 비디오 개요를 확인하세요.
+
+이전 강의에서 다양한 달의 평균 가격이 다음과 같다는 것을 보았을 것입니다:
+
+
+
+이는 `Month`와 `Price` 사이, 또는 `DayOfYear`와 `Price` 사이에 어느 정도 상관 관계가 있을 수 있음을 시사합니다. 다음은 후자의 관계를 보여주는 산점도입니다:
+
+
+
+`corr` 함수를 사용하여 상관 관계가 있는지 확인해 봅시다:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+상관 관계는 `Month` 기준 -0.15, `DayOfYear` 기준 -0.17로 상당히 작아 보입니다. 하지만 또 다른 중요한 관계가 있을 수 있습니다. 서로 다른 호박 품종에 대응하는 서로 다른 가격 군집이 있는 것처럼 보입니다. 이 가설을 확인하기 위해, 각 호박 카테고리를 다른 색상으로 플롯해 봅시다. `scatter` 플로팅 함수에 `ax` 파라미터를 전달하면 모든 포인트를 동일한 그래프에 플롯할 수 있습니다:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+우리의 조사에 따르면 품종이 실제 판매 날짜보다 전체 가격에 더 큰 영향을 미치는 것으로 보입니다. 이는 막대 그래프로 확인할 수 있습니다:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+잠시 동안 '파이 타입'이라는 한 가지 호박 품종에만 집중하여 날짜가 가격에 미치는 영향을 확인해 봅시다:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+이제 `corr` 함수를 사용하여 `Price`와 `DayOfYear` 사이의 상관 관계를 계산하면 약 `-0.27`이라는 값을 얻게 되며, 이는 예측 모델을 훈련시키는 것이 의미가 있음을 뜻합니다.
+
+> 선형 회귀 모델을 훈련시키기 전에 데이터가 깨끗한지 확인하는 것이 중요합니다. 선형 회귀는 누락된 값과 잘 작동하지 않으므로 모든 빈 셀을 제거하는 것이 좋습니다:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+다른 접근 방식은 해당 열의 평균 값으로 빈 값을 채우는 것입니다.
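+
+예를 들어, 다음과 같은 최소한의 스케치로 `Price` 열의 빈 값을 평균으로 채울 수 있습니다(위에서 만든 `pie_pumpkins` 데이터프레임을 그대로 사용한다고 가정합니다):
+
+```python
+# 빈 값을 삭제하는 대신 해당 열의 평균으로 채우는 대안 스케치
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```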
+
+## 단순 선형 회귀
+
+[](https://youtu.be/e4c_UP2fSjg "초보자를 위한 머신러닝 - Scikit-learn을 사용한 선형 및 다항 회귀")
+
+> 🎥 위 이미지를 클릭하여 선형 및 다항 회귀에 대한 짧은 비디오 개요를 확인하세요.
+
+우리의 선형 회귀 모델을 훈련시키기 위해 **Scikit-learn** 라이브러리를 사용할 것입니다.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+먼저 입력 값(특징)과 예상 출력(레이블)을 별도의 numpy 배열로 분리합니다:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> 선형 회귀 패키지가 입력 데이터를 올바르게 이해할 수 있도록 입력 데이터에 `reshape`를 수행해야 했습니다. 선형 회귀는 각 배열 행이 입력 특징 벡터에 해당하는 2D 배열을 입력으로 기대합니다. 우리의 경우, 하나의 입력만 있기 때문에 N×1 형상의 배열이 필요합니다. 여기서 N은 데이터셋 크기입니다.
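+
+`reshape(-1, 1)`이 하는 일은 다음의 최소한의 스케치로 확인할 수 있습니다:
+
+```python
+# 1차원 배열을 N×1 2차원 배열로 바꾸는 예시
+import numpy as np
+
+v = np.array([3, 1, 4, 1, 5])
+print(v.shape)                 # (5,)  - 1차원 배열
+print(v.reshape(-1, 1).shape)  # (5, 1) - -1인 차원은 자동으로 계산됩니다
+```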
+
+그런 다음 데이터를 훈련 및 테스트 데이터셋으로 분할하여 훈련 후 모델을 검증할 수 있습니다:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+마지막으로, 실제 선형 회귀 모델을 훈련시키는 것은 단 두 줄의 코드로 가능합니다. 먼저 `LinearRegression` 객체를 정의하고, `fit` 메서드를 사용하여 데이터에 적합시킵니다:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+The `LinearRegression` object after `fit`-ting contains all the coefficients of the regression, which can be accessed using the `.coef_` property. In our case, there is just one coefficient, which should be around `-0.017`. It means that prices seem to drop a bit with time, but not too much, around 2 cents per day. We can also access the intersection point of the regression with the Y-axis using `lin_reg.intercept_` - it will be around `21` in our case, indicating the price at the beginning of the year.
+
+모델의 정확성을 확인하려면 테스트 데이터셋에서 가격을 예측한 다음, 예측 값과 예상 값이 얼마나 가까운지 측정할 수 있습니다. 이는 모든 예상 값과 예측 값의 제곱 차이의 평균인 평균 제곱 오차(MSE) 메트릭을 사용하여 수행할 수 있습니다.
+
+```python
+import numpy as np  # np.sqrt를 사용하기 위해 필요합니다
+
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))  # 제곱근을 취하므로 엄밀히는 RMSE입니다
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+우리의 오류는 약 2 포인트로, 약 17%입니다. 그다지 좋지 않습니다. 모델 품질의 또 다른 지표는 **결정 계수**로, 다음과 같이 얻을 수 있습니다:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+값이 0이면 모델이 입력 데이터를 고려하지 않고 *최악의 선형 예측기*로 작동하며, 이는 단순히 결과의 평균 값입니다. 값이 1이면 모든 예상 출력을 완벽하게 예측할 수 있음을 의미합니다. 우리의 경우 결정 계수는 약 0.06으로 상당히 낮습니다.
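+
+결정 계수가 뜻하는 바는 가상의 숫자를 사용한 다음의 최소한의 스케치로 확인할 수 있습니다:
+
+```python
+# 결정 계수(R^2) = 1 - (잔차 제곱합 / 전체 제곱합) - 가상의 값으로 계산해 보기
+import numpy as np
+
+y_true = np.array([10.0, 12.0, 14.0, 16.0])
+y_pred = np.array([11.0, 11.5, 14.5, 15.0])
+
+ss_res = np.sum((y_true - y_pred) ** 2)          # 잔차 제곱합
+ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # 평균 예측 기준의 전체 제곱합
+print(1 - ss_res / ss_tot)  # 1에 가까울수록 좋고, 0이면 평균 예측과 동일한 수준
+```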
+
+테스트 데이터와 회귀선을 함께 플로팅하여 우리의 경우 회귀가 어떻게 작동하는지 더 잘 볼 수 있습니다:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## 다항 회귀
+
+다른 유형의 선형 회귀는 다항 회귀입니다. 변수 간에 선형 관계가 있을 때가 있지만 - 호박의 부피가 클수록 가격이 높아지는 경우 - 때로는 이러한 관계를 평면이나 직선으로 그릴 수 없습니다.
+
+✅ [다항 회귀를 사용할 수 있는 데이터의 더 많은 예시](https://online.stat.psu.edu/stat501/lesson/9/9.8)를 확인해 보세요.
+
+날짜와 가격 간의 관계를 다시 한 번 살펴보세요. 이 산점도가 반드시 직선으로 분석되어야 할 것처럼 보이나요? 가격이 변동할 수 있지 않나요? 이 경우 다항 회귀를 시도해 볼 수 있습니다.
+
+✅ 다항식은 하나 이상의 변수와 계수로 구성될 수 있는 수학적 표현입니다.
+
+다항 회귀는 비선형 데이터를 더 잘 맞추기 위해 곡선을 만듭니다. 우리의 경우, 입력 데이터에 제곱 `DayOfYear` 변수를 포함하면, 연도의 특정 시점에 최소값을 가지는 포물선 곡선으로 데이터를 맞출 수 있습니다.
+
+Scikit-learn에는 데이터 처리의 다양한 단계를 함께 결합할 수 있는 유용한 [파이프라인 API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline)가 포함되어 있습니다. **파이프라인**은 **추정기**의 체인입니다. 우리의 경우, 모델에 다항 특징을 먼저 추가하고, 그 다음 회귀를 훈련시키는 파이프라인을 만들 것입니다:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+`PolynomialFeatures(2)` means that we will include all second-degree polynomials from the input data. In our case it will just mean `DayOfYear`<sup>2</sup>, but given two input variables X and Y, this will add X<sup>2</sup>, XY and Y<sup>2</sup>. We may also use higher degree polynomials if we want.
+
+Pipelines can be used in the same manner as the original `LinearRegression` object, i.e. we can `fit` the pipeline, and then use `predict` to get the prediction results. Here is the graph showing test data, and the approximation curve:
+
+
+
+Using Polynomial Regression, we can get slightly lower MSE and higher determination, but not significantly. We need to take into account other features!
+
+> You can see that the minimal pumpkin prices are observed somewhere around Halloween. How can you explain this?
+
+🎃 Congratulations, you just created a model that can help predict the price of pie pumpkins. You can probably repeat the same procedure for all pumpkin types, but that would be tedious. Let's learn now how to take pumpkin variety into account in our model!
+
+## Categorical Features
+
+In the ideal world, we want to be able to predict prices for different pumpkin varieties using the same model. However, the `Variety` column is somewhat different from columns like `Month`, because it contains non-numeric values. Such columns are called **categorical**.
+
+[](https://youtu.be/DYGliioIAE0 "ML for beginners - Categorical Feature Predictions with Linear Regression")
+
+> 🎥 Click the image above for a short video overview of using categorical features.
+
+Here you can see how average price depends on variety:
+
+
+
+To take variety into account, we first need to convert it to numeric form, or **encode** it. There are several way we can do it:
+
+* Simple **numeric encoding** will build a table of different varieties, and then replace the variety name by an index in that table. This is not the best idea for linear regression, because linear regression takes the actual numeric value of the index, and adds it to the result, multiplying by some coefficient. In our case, the relationship between the index number and the price is clearly non-linear, even if we make sure that indices are ordered in some specific way.
+* **One-hot encoding** will replace the `Variety` column by 4 different columns, one for each variety. Each column will contain `1` if the corresponding row is of a given variety, and `0` otherwise. This means that in linear regression there will be four coefficients, one per pumpkin variety, responsible for the "starting price" (or rather "additional price") for that particular variety.
+
+다음 코드는 품종을 원-핫 인코딩하는 방법을 보여줍니다:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+원-핫 인코딩된 품종을 사용하여 선형 회귀를 훈련시키려면 `X`와 `y` 데이터를 올바르게 초기화하기만 하면 됩니다:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+나머지 코드는 선형 회귀를 훈련시키는 데 사용한 것과 동일합니다. 시도해 보면 평균 제곱 오차는 비슷하지만 결정 계수는 훨씬 높아집니다 (~77%). 더 정확한 예측을 위해서는 `Month`나 `DayOfYear` 같은 숫자 특징뿐만 아니라 더 많은 범주형 특징을 고려할 수 있습니다. 하나의 큰 특징 배열을 얻기 위해 `join`을 사용할 수 있습니다:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+여기에서는 `City`와 `Package` 유형도 고려하여 MSE 2.84 (10%)와 결정 계수 0.94를 얻습니다!
+
+## 모든 것을 종합하여
+
+최고의 모델을 만들기 위해 위의 예제에서 사용한 결합된 (원-핫 인코딩된 범주형 + 숫자) 데이터를 사용하여 다항 회귀와 결합할 수 있습니다. 다음은 편의를 위해 전체 코드입니다:
+
+```python
+# 필요한 라이브러리 가져오기 (이 블록만 독립적으로 실행할 수 있도록 추가)
+import numpy as np
+import pandas as pd
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import PolynomialFeatures
+
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+이것은 거의 97%의 최고의 결정 계수를 제공하며, MSE=2.23 (~8% 예측 오류)을 제공합니다.
+
+| 모델 | MSE | 결정 계수 |
+|-------|-----|---------------|
+| `DayOfYear` 선형 | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` 다항 | 2.73 (17.0%) | 0.08 |
+| `Variety` 선형 | 5.24 (19.7%) | 0.77 |
+| 모든 특징 선형 | 2.84 (10.5%) | 0.94 |
+| 모든 특징 다항 | 2.23 (8.25%) | 0.97 |
+
+🏆 잘하셨습니다! 한 강의에서 네 가지 회귀 모델을 만들었으며, 모델 품질을 97%까지 향상시켰습니다. 회귀에 대한 마지막 섹션에서는 범주를 결정하기 위해 로지스틱 회귀에 대해 배울 것입니다.
+
+---
+## 🚀도전
+
+이 노트북에서 여러 다른 변수를 테스트하여 상관 관계가 모델 정확도와 어떻게 대응하는지 확인하세요.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## 복습 및 자습
+
+이번 강의에서는 선형 회귀에 대해 배웠습니다. 다른 중요한 회귀 유형도 있습니다. Stepwise, Ridge, Lasso 및 Elasticnet 기술에 대해 읽어보세요. 더 배우기 위해 좋은 과정은 [스탠포드 통계학 학습 과정](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning)입니다.
+
+## 과제
+
+[모델 구축](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/3-Linear/assignment.md b/translations/ko/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..d18b9b92c
--- /dev/null
+++ b/translations/ko/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# 회귀 모델 만들기
+
+## 지침
+
+이 수업에서는 선형 회귀와 다항 회귀를 사용하여 모델을 구축하는 방법을 배웠습니다. 이 지식을 사용하여 데이터셋을 찾거나 Scikit-learn의 내장된 세트를 사용하여 새로운 모델을 구축하세요. 왜 해당 기법을 선택했는지 노트북에 설명하고, 모델의 정확성을 보여주세요. 만약 정확하지 않다면, 그 이유를 설명하세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| ------- | --------------------------------------------------------- | -------------------------- | -------------------------------- |
+| | 완벽하게 문서화된 솔루션을 포함한 완전한 노트북을 제시함 | 솔루션이 불완전함 | 솔루션에 결함이나 버그가 있음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/3-Linear/solution/Julia/README.md b/translations/ko/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..545fb34d1
--- /dev/null
+++ b/translations/ko/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서는 해당 언어로 작성된 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역을 사용함으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/4-Logistic/README.md b/translations/ko/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..23e4dd65d
--- /dev/null
+++ b/translations/ko/2-Regression/4-Logistic/README.md
@@ -0,0 +1,370 @@
+# 카테고리 예측을 위한 로지스틱 회귀
+
+
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [이 강의는 R에서도 사용할 수 있습니다!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## 소개
+
+이제 고전적인 머신러닝 기법 중 하나인 회귀에 대한 마지막 강의로, 로지스틱 회귀를 살펴보겠습니다. 이 기법을 사용하여 이진 카테고리를 예측하는 패턴을 발견할 수 있습니다. 이 사탕이 초콜릿인가 아닌가? 이 질병이 전염성이 있는가 없는가? 이 고객이 이 제품을 선택할 것인가 아닌가?
+
+이 강의에서는 다음을 배우게 됩니다:
+
+- 데이터 시각화를 위한 새로운 라이브러리
+- 로지스틱 회귀 기법
+
+✅ 이 [학습 모듈](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)을 통해 이 회귀 기법에 대한 이해를 심화하세요.
+
+## 전제 조건
+
+호박 데이터를 다루면서, 이제 우리가 작업할 수 있는 이진 카테고리 하나가 있다는 것을 알게 되었습니다: `Color`.
+
+이제 몇 가지 변수를 통해 _주어진 호박의 색상이 무엇일지_ (주황색 🎃 또는 흰색 👻) 예측하는 로지스틱 회귀 모델을 만들어 보겠습니다.
+
+> 왜 회귀에 관한 강의에서 이진 분류에 대해 이야기하고 있을까요? 단지 언어적 편의를 위해서입니다. 로지스틱 회귀는 [실제로는 분류 방법](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)이지만, 선형 기반입니다. 다음 강의 그룹에서 데이터를 분류하는 다른 방법에 대해 배워보세요.
+
+## 질문 정의하기
+
+우리의 목적을 위해, 이를 '흰색' 또는 '흰색이 아님'으로 표현하겠습니다. 데이터셋에 '줄무늬' 카테고리도 있지만, 인스턴스가 거의 없어서 사용하지 않겠습니다. 어쨌든 데이터셋에서 null 값을 제거하면 사라집니다.
+
+> 🎃 재미있는 사실, 우리는 때때로 흰색 호박을 '유령' 호박이라고 부릅니다. 조각하기가 쉽지 않아서 주황색 호박만큼 인기가 많지 않지만, 멋지게 생겼습니다! 그래서 질문을 '유령' 또는 '유령이 아님'으로 다시 표현할 수도 있습니다. 👻
+
+## 로지스틱 회귀에 대하여
+
+로지스틱 회귀는 이전에 배운 선형 회귀와 몇 가지 중요한 면에서 다릅니다.
+
+[](https://youtu.be/KpeCT6nEpBY "초보자를 위한 머신러닝 - 로지스틱 회귀 이해하기")
+
+> 🎥 위 이미지를 클릭하여 로지스틱 회귀에 대한 짧은 비디오 개요를 확인하세요.
+
+### 이진 분류
+
+로지스틱 회귀는 선형 회귀와 동일한 기능을 제공하지 않습니다. 전자는 이진 카테고리("흰색 또는 흰색이 아님")에 대한 예측을 제공하는 반면, 후자는 예를 들어 호박의 출처와 수확 시기를 기준으로 _가격이 얼마나 오를지_와 같은 연속적인 값을 예측할 수 있습니다.
+
+
+> 인포그래픽 by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### 다른 분류들
+
+다른 유형의 로지스틱 회귀도 있습니다. 다항 로지스틱 회귀와 서열 로지스틱 회귀가 있습니다:
+
+- **다항 로지스틱 회귀**, 여러 카테고리가 있는 경우 - "주황색, 흰색, 줄무늬".
+- **서열 로지스틱 회귀**, 순서가 있는 카테고리로, 결과를 논리적으로 정렬해야 하는 경우 유용합니다. 예를 들어, 호박을 크기(미니, 소, 중, 대, 특대, 초대형)로 정렬하는 경우.
+
+
+
+### 변수들이 반드시 상관관계가 있을 필요는 없음
+
+선형 회귀가 상관관계가 더 높은 변수들과 더 잘 작동했던 것을 기억하시나요? 로지스틱 회귀는 반대입니다 - 변수들이 일치할 필요가 없습니다. 이는 상관관계가 약한 이 데이터에 적합합니다.
+
+### 많은 깨끗한 데이터가 필요함
+
+로지스틱 회귀는 더 많은 데이터를 사용할수록 더 정확한 결과를 제공합니다. 우리의 작은 데이터셋은 이 작업에 최적화되어 있지 않으므로 유의하세요.
+
+[](https://youtu.be/B2X4H9vcXTs "초보자를 위한 머신러닝 - 로지스틱 회귀를 위한 데이터 분석 및 준비")
+
+> 🎥 위 이미지를 클릭하여 로지스틱 회귀를 위한 데이터 준비에 대한 짧은 비디오 개요를 확인하세요.
+
+✅ 로지스틱 회귀에 적합한 데이터 유형을 생각해 보세요.
+
+## 연습 - 데이터 정리하기
+
+먼저 데이터를 조금 정리하고, null 값을 제거하고 일부 열만 선택합니다:
+
+1. 다음 코드를 추가하세요:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ 새로운 데이터프레임을 항상 확인할 수 있습니다:
+
+ ```python
+ pumpkins.info
+ ```
+
+### 시각화 - 카테고리 플롯
+
+이제 [시작 노트북](../../../../2-Regression/4-Logistic/notebook.ipynb)을 다시 열고 호박 데이터를 불러와서 몇 가지 변수를 포함하는 데이터셋을 유지하도록 정리했습니다. 이번에는 다른 라이브러리를 사용하여 데이터프레임을 시각화해 봅시다: [Seaborn](https://seaborn.pydata.org/index.html), 이는 이전에 사용한 Matplotlib 위에 구축되었습니다.
+
+Seaborn은 데이터를 시각화하는 멋진 방법을 제공합니다. 예를 들어, 각 `Variety`와 `Color`의 데이터 분포를 카테고리 플롯으로 비교할 수 있습니다.
+
+1. 호박 데이터 `pumpkins`를 사용하고 각 호박 카테고리(주황색 또는 흰색)에 대한 색상 매핑을 지정하여, `catplot` 함수로 플롯을 만드세요:
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+ 데이터를 관찰하여 Color 데이터가 Variety와 어떻게 관련되는지 확인할 수 있습니다.
+
+ ✅ 이 카테고리 플롯을 통해 어떤 흥미로운 탐색을 상상할 수 있습니까?
+
+### 데이터 전처리: 특성 및 라벨 인코딩
+호박 데이터셋의 모든 열은 문자열 값을 포함합니다. 카테고리 데이터를 다루는 것은 사람에게는 직관적이지만 기계에게는 그렇지 않습니다. 머신러닝 알고리즘은 숫자와 잘 작동합니다. 따라서 인코딩은 데이터 전처리 단계에서 매우 중요한 단계입니다. 이는 카테고리 데이터를 숫자 데이터로 변환하여 정보를 잃지 않도록 합니다. 좋은 인코딩은 좋은 모델을 구축하는 데 도움이 됩니다.
+
+특성 인코딩에는 두 가지 주요 유형의 인코더가 있습니다:
+
+1. 순서형 인코더: 이는 순서형 변수에 적합합니다. 순서형 변수는 데이터가 논리적 순서를 따르는 카테고리 변수입니다. 예를 들어, 데이터셋의 `Item Size` 열입니다. 각 카테고리가 열의 순서에 따라 숫자로 표현되도록 매핑을 만듭니다.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. 카테고리 인코더: 이는 명목 변수에 적합합니다. 명목 변수는 데이터가 논리적 순서를 따르지 않는 카테고리 변수입니다. 데이터셋에서 `Item Size`를 제외한 모든 특성입니다. 이는 원-핫 인코딩으로, 각 카테고리가 이진 열로 표현됩니다: 인코딩된 변수는 호박이 해당 Variety에 속하면 1, 그렇지 않으면 0입니다.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+그런 다음, `ColumnTransformer`를 사용하여 여러 인코더를 하나의 단계로 결합하고 적절한 열에 적용합니다.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+한편, 라벨을 인코딩하기 위해, scikit-learn의 `LabelEncoder` 클래스를 사용합니다. 이는 라벨을 0에서 n_classes-1(여기서는 0과 1) 사이의 값만 포함하도록 정규화하는 유틸리티 클래스입니다.
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+특성과 라벨을 인코딩한 후, 이를 새로운 데이터프레임 `encoded_pumpkins`에 병합할 수 있습니다.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+✅ `Item Size` 열에 순서형 인코더를 사용하면 어떤 장점이 있습니까?
+
+### 변수 간 관계 분석
+
+이제 데이터를 전처리했으므로, 특성과 라벨 간의 관계를 분석하여 주어진 특성으로 모델이 라벨을 얼마나 잘 예측할 수 있을지 감을 잡을 수 있습니다. 이런 종류의 분석을 수행하는 가장 좋은 방법은 데이터를 플롯하는 것입니다. 다시 Seaborn의 `catplot` 함수를 사용하여 `Item Size`, `Variety`, `Color` 간의 관계를 카테고리 플롯으로 시각화하겠습니다. 데이터를 더 잘 플롯하기 위해 인코딩된 `Item Size` 열과 인코딩되지 않은 `Variety` 열을 사용할 것입니다.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### 스웜 플롯 사용하기
+
+Color는 이진 카테고리(흰색 또는 흰색이 아님)이므로, 시각화를 위해 '특화된 접근 방식'이 필요합니다. 이 카테고리와 다른 변수 간의 관계를 시각화하는 다른 방법이 있습니다.
+
+Seaborn 플롯을 사용하여 변수를 나란히 시각화할 수 있습니다.
+
+1. 값의 분포를 보여주는 '스웜' 플롯을 시도해 보세요:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**주의**: 위의 코드는 경고를 발생시킬 수 있습니다. Seaborn이 스웜 플롯에 많은 데이터 포인트를 나타내지 못하기 때문입니다. 가능한 해결책은 'size' 매개변수를 사용하여 마커의 크기를 줄이는 것입니다. 하지만 이는 플롯의 가독성에 영향을 미칠 수 있습니다.
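+
+예를 들어 다음과 같이 'size' 매개변수로 마커 크기를 줄여볼 수 있습니다(위의 코드와 동일한 데이터를 사용한다고 가정한 스케치입니다):
+
+```python
+# 마커 크기를 줄여 경고를 완화하는 스케치
+sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins,
+              palette=palette, size=2)
+```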
+
+> **🧮 수학을 보여주세요**
+>
+> 로지스틱 회귀는 [시그모이드 함수](https://wikipedia.org/wiki/Sigmoid_function)를 사용하여 '최대 가능성' 개념에 의존합니다. 플롯에서 '시그모이드 함수'는 'S' 모양처럼 보입니다. 이는 값을 0과 1 사이로 매핑합니다. 곡선은 '로지스틱 곡선'이라고도 합니다. 공식은 다음과 같습니다:
+>
+> `f(x) = L / (1 + e^(-k(x - x0)))`
+>
+> 여기서 시그모이드의 중간점은 x의 0 지점에 위치하고, L은 곡선의 최대 값이며, k는 곡선의 가파름을 나타냅니다. 함수의 결과가 0.5보다 크면 해당 라벨은 이진 선택의 '1'로 분류됩니다. 그렇지 않으면 '0'으로 분류됩니다.
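+
+다음은 이 함수의 동작을 보여주는 최소한의 스케치입니다(기본값으로 L=1, k=1, 중간점 x0=0을 가정합니다):
+
+```python
+# 시그모이드(로지스틱) 함수: 값을 0과 1 사이로 매핑합니다
+import numpy as np
+
+def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+    # L: 곡선의 최대 값, k: 가파름, x0: 중간점
+    return L / (1 + np.exp(-k * (x - x0)))
+
+for x in [-4, 0, 4]:
+    p = sigmoid(x)
+    print(x, round(p, 3), '-> 라벨', int(p > 0.5))  # 0.5보다 크면 '1'로 분류
+```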
+
+## 모델 구축하기
+
+이진 분류를 찾기 위한 모델을 구축하는 것은 Scikit-learn에서 놀랍도록 간단합니다.
+
+[](https://youtu.be/MmZS2otPrQ8 "초보자를 위한 머신러닝 - 데이터 분류를 위한 로지스틱 회귀")
+
+> 🎥 위 이미지를 클릭하여 로지스틱 회귀 모델 구축에 대한 짧은 비디오 개요를 확인하세요.
+
+1. 분류 모델에 사용할 변수를 선택하고 `train_test_split()`을 호출하여 학습 및 테스트 세트를 분할하세요:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. 이제 학습 데이터를 사용하여 `fit()`을 호출하여 모델을 학습시키고 결과를 출력할 수 있습니다:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ 모델의 점수를 확인하세요. 약 1000개의 데이터 행만 있는 것을 고려하면 나쁘지 않습니다:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## 혼동 행렬을 통한 더 나은 이해
+
+위에서 출력한 항목을 통해 점수 보고서를 얻을 수 있지만, [혼동 행렬](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix)을 사용하여 모델의 성능을 더 쉽게 이해할 수 있습니다.
+
+> 🎓 '[혼동 행렬](https://wikipedia.org/wiki/Confusion_matrix)'(또는 '오류 행렬')은 모델의 실제 vs. 거짓 긍정 및 부정을 나타내어 예측의 정확성을 측정하는 테이블입니다.
+
+1. 혼동 행렬을 사용하려면 `confusion_matrix()`를 호출하세요:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ 모델의 혼동 행렬을 확인하세요:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+Scikit-learn에서 혼동 행렬의 행(축 0)은 실제 라벨이고 열(축 1)은 예측된 라벨입니다.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+여기서 무슨 일이 일어나고 있는지 봅시다. 모델이 두 개의 이진 카테고리, '흰색'과 '흰색이 아님' 사이에서 호박을 분류하도록 요청받았다고 가정해 봅시다.
+
+- 모델이 호박을 흰색이 아니라고 예측하고 실제로도 '흰색이 아님' 카테고리에 속하면 이를 참 부정이라고 부릅니다. 이는 왼쪽 상단 숫자로 표시됩니다.
+- 모델이 호박을 흰색이라고 예측했지만 실제로는 '흰색이 아님' 카테고리에 속하면 이를 거짓 긍정이라고 부릅니다. 이는 오른쪽 상단 숫자로 표시됩니다.
+- 모델이 호박을 흰색이 아니라고 예측했지만 실제로는 '흰색' 카테고리에 속하면 이를 거짓 부정이라고 부릅니다. 이는 왼쪽 하단 숫자로 표시됩니다.
+- 모델이 호박을 흰색이라고 예측하고 실제로도 '흰색' 카테고리에 속하면 이를 참 긍정이라고 부릅니다. 이는 오른쪽 하단 숫자로 표시됩니다.
+
+참 긍정과 참 부정의 숫자가 많고 거짓 긍정과 거짓 부정의 숫자가 적을수록 모델이 더 잘 작동한다고 할 수 있습니다.
+
+혼동 행렬이 정밀도(precision)와 재현율(recall)과 어떻게 관련이 있는지 봅시다. 위에서 출력된 분류 보고서는 정밀도(0.85)와 재현율(0.67)을 보여줍니다.
+
+정밀도 = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+재현율 = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
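+
+위 혼동 행렬의 값으로 이 계산을 직접 확인해 볼 수 있습니다(숫자는 위의 출력에서 가져온 것입니다):
+
+```python
+# 혼동 행렬 값으로 정밀도와 재현율을 직접 계산해 보는 스케치
+tn, fp, fn, tp = 162, 4, 11, 22
+
+precision = tp / (tp + fp)  # 22 / 26 = 약 0.846
+recall = tp / (tp + fn)     # 22 / 33 = 약 0.667
+print(precision, recall)
+```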
+
+✅ Q: 혼동 행렬에 따르면 모델의 성능은 어땠나요? A: 나쁘지 않습니다. 참 부정의 숫자가 많지만 거짓 부정도 몇 개 있습니다.
+
+혼동 행렬의 TP/TN과 FP/FN 매핑을 통해 앞서 봤던 용어를 다시 살펴봅시다:
+
+🎓 정밀도: TP/(TP + FP) 검색된 인스턴스 중 관련 인스턴스(예: 올바르게 라벨된 것)의 비율
+
+🎓 재현율: TP/(TP + FN) 검색된 관련 인스턴스의 비율, 잘 라벨된지 여부와 관계없이
+
+🎓 f1-score: (2 * 정밀도 * 재현율)/(정밀도 + 재현율) 정밀도와 재현율의 가중 평균, 최상은 1, 최악은 0
+
+🎓 지원: 검색된 각 라벨의 발생 수
+
+🎓 정확도: (TP + TN)/(TP + TN + FP + FN) 샘플에 대해 정확하게 예측된 라벨의 비율.
+
+🎓 매크로 평균: 라벨 불균형을 고려하지 않고 각 라벨에 대한 비가중 평균 메트릭 계산.
+
+🎓 가중 평균: 각 라벨에 대한 비율을 고려하여 라벨 불균형을 고려하여 지원에 따라 가중치를 부여한 평균 메트릭 계산.
+
+✅ 거짓 부정의 수를 줄이려면 어떤 메트릭을 주시해야 할지 생각해 보세요.
+
+## 이 모델의 ROC 곡선 시각화
+
+[](https://youtu.be/GApO575jTA0 "초보자를 위한 머신러닝 - ROC 곡선을 통한 로지스틱 회귀 성능 분석")
+
+> 🎥 위 이미지를 클릭하여 ROC 곡선에 대한 짧은 비디오 개요를 확인하세요.
+
+이제 'ROC' 곡선을 시각화해 봅시다:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+Matplotlib을 사용하여 모델의 [수신 운영 특성](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 해당 언어로 작성된 상태에서 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/4-Logistic/assignment.md b/translations/ko/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..540d7ed49
--- /dev/null
+++ b/translations/ko/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# 회귀 재시도
+
+## 지침
+
+강의에서 호박 데이터의 일부를 사용했습니다. 이제 원래 데이터를 다시 사용하여 모든 데이터를 정리하고 표준화하여 로지스틱 회귀 모델을 구축해 보세요.
+
+## 평가 기준
+
+| 기준 | 우수함 | 적절함 | 개선 필요 |
+| --------- | --------------------------------------------------------------------- | ----------------------------------------------------------- | -------------------------------------------------------- |
+| | 잘 설명되고 성능이 좋은 모델이 포함된 노트북을 제출함 | 최소한의 성능을 발휘하는 모델이 포함된 노트북을 제출함 | 성능이 떨어지는 모델이 포함된 노트북을 제출하거나 없음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/4-Logistic/solution/Julia/README.md b/translations/ko/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..d7e19ebfa
--- /dev/null
+++ b/translations/ko/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확한 내용이 포함될 수 있습니다. 원본 문서의 모국어 버전을 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/2-Regression/README.md b/translations/ko/2-Regression/README.md
new file mode 100644
index 000000000..15c6dd7f9
--- /dev/null
+++ b/translations/ko/2-Regression/README.md
@@ -0,0 +1,43 @@
+# 머신 러닝을 위한 회귀 모델
+## 지역 주제: 북미의 호박 가격을 위한 회귀 모델 🎃
+
+북미에서는 호박을 할로윈 때 무서운 얼굴로 조각하는 경우가 많습니다. 이 매력적인 채소에 대해 더 알아봅시다!
+
+
+> 사진 출처: Beth Teutschmann on Unsplash
+
+## 학습 내용
+
+[](https://youtu.be/5QnJtDad4iQ "Regression Introduction video - Click to Watch!")
+> 🎥 위 이미지를 클릭하면 이 강의의 간단한 소개 영상을 볼 수 있습니다
+
+이 섹션의 강의는 머신 러닝의 맥락에서 회귀의 유형을 다룹니다. 회귀 모델은 변수 간의 _관계_를 결정하는 데 도움을 줄 수 있습니다. 이 유형의 모델은 길이, 온도, 나이와 같은 값을 예측할 수 있으며, 데이터 포인트를 분석하면서 변수 간의 관계를 밝혀냅니다.
+
+이 시리즈의 강의에서는 선형 회귀와 로지스틱 회귀의 차이점과 어느 상황에서 어느 것을 선호해야 하는지 알아볼 것입니다.
+
+[](https://youtu.be/XA3OaoW86R8 "ML for beginners - Introduction to Regression models for Machine Learning")
+
+> 🎥 위 이미지를 클릭하면 회귀 모델을 소개하는 짧은 영상을 볼 수 있습니다.
+
+이 강의 그룹에서는 머신 러닝 작업을 시작하기 위한 설정 방법, 특히 데이터 과학자들이 주로 사용하는 노트북을 관리하기 위한 Visual Studio Code 설정 방법을 배웁니다. Scikit-learn이라는 머신 러닝 라이브러리를 발견하고, 이 장에서는 회귀 모델에 초점을 맞춘 첫 번째 모델을 구축할 것입니다.
+
+> 회귀 모델 작업을 배우는 데 도움이 되는 유용한 로우코드 도구들이 있습니다. [Azure ML을 사용해 보세요](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### 강의 목록
+
+1. [필수 도구들](1-Tools/README.md)
+2. [데이터 관리](2-Data/README.md)
+3. [선형 및 다항 회귀](3-Linear/README.md)
+4. [로지스틱 회귀](4-Logistic/README.md)
+
+---
+### 크레딧
+
+"회귀를 이용한 머신 러닝"은 [Jen Looper](https://twitter.com/jenlooper)가 ♥️를 담아 작성했습니다.
+
+♥️ 퀴즈 기여자들: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) 및 [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+호박 데이터셋은 [Kaggle의 이 프로젝트](https://www.kaggle.com/usda/a-year-of-pumpkin-prices)에서 제안되었으며, 데이터는 미국 농무부에서 배포한 [특수 작물 터미널 시장 표준 보고서](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)에서 가져왔습니다. 우리는 분포를 정규화하기 위해 품종에 따라 색상에 대한 몇 가지 포인트를 추가했습니다. 이 데이터는 공공 도메인에 있습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있음을 유의하시기 바랍니다. 원본 문서의 원어를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/3-Web-App/1-Web-App/README.md b/translations/ko/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..93e099be4
--- /dev/null
+++ b/translations/ko/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# 웹 앱을 만들어 ML 모델 사용하기
+
+이번 강의에서는 _지난 한 세기 동안의 UFO 목격_ 데이터 세트를 사용하여 ML 모델을 훈련할 것입니다. 이 데이터는 NUFORC의 데이터베이스에서 가져왔습니다.
+
+배울 내용:
+
+- 훈련된 모델을 '피클'하는 방법
+- Flask 앱에서 그 모델을 사용하는 방법
+
+노트북을 사용하여 데이터를 정리하고 모델을 훈련하는 방법을 계속 배우겠지만, 이를 한 단계 더 나아가 웹 앱에서 모델을 사용하는 방법도 탐구할 것입니다.
+
+이를 위해 Flask를 사용하여 웹 앱을 구축해야 합니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## 앱 구축하기
+
+머신 러닝 모델을 사용하는 웹 앱을 구축하는 방법에는 여러 가지가 있습니다. 웹 아키텍처는 모델 훈련 방식에 영향을 미칠 수 있습니다. 데이터 과학 그룹이 훈련한 모델을 앱에서 사용해야 하는 상황을 상상해 보세요.
+
+### 고려 사항
+
+다음과 같은 여러 질문을 해야 합니다:
+
+- **웹 앱인가 모바일 앱인가?** 모바일 앱을 구축하거나 IoT 환경에서 모델을 사용해야 하는 경우, [TensorFlow Lite](https://www.tensorflow.org/lite/)를 사용하여 Android 또는 iOS 앱에서 모델을 사용할 수 있습니다.
+- **모델은 어디에 위치할 것인가?** 클라우드에 있을 것인가, 로컬에 있을 것인가?
+- **오프라인 지원.** 앱이 오프라인에서도 작동해야 하는가?
+- **모델을 훈련하는 데 사용된 기술은 무엇인가?** 선택한 기술이 사용해야 할 도구에 영향을 미칠 수 있습니다.
+ - **TensorFlow 사용.** 예를 들어 TensorFlow를 사용하여 모델을 훈련하는 경우, [TensorFlow.js](https://www.tensorflow.org/js/)를 사용하여 웹 앱에서 사용할 수 있도록 TensorFlow 모델을 변환할 수 있습니다.
+ - **PyTorch 사용.** [PyTorch](https://pytorch.org/)와 같은 라이브러리를 사용하여 모델을 구축하는 경우, [ONNX](https://onnx.ai/) (Open Neural Network Exchange) 형식으로 내보내 JavaScript 웹 앱에서 사용할 수 있는 [Onnx Runtime](https://www.onnxruntime.ai/)을 사용할 수 있습니다. 이 옵션은 Scikit-learn으로 훈련된 모델에 대해 추후 강의에서 탐구할 것입니다.
+ - **Lobe.ai 또는 Azure Custom Vision 사용.** [Lobe.ai](https://lobe.ai/) 또는 [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott)과 같은 ML SaaS(Software as a Service) 시스템을 사용하여 모델을 훈련하는 경우, 이 소프트웨어는 모델을 다양한 플랫폼에 내보내는 방법을 제공합니다. 여기에는 온라인 애플리케이션에서 클라우드에서 쿼리할 수 있는 맞춤형 API를 구축하는 것도 포함됩니다.
+
+또한 웹 브라우저에서 직접 모델을 훈련할 수 있는 전체 Flask 웹 앱을 구축할 수도 있습니다. 이는 JavaScript 환경에서 TensorFlow.js를 사용하여 수행할 수 있습니다.
+
+우리의 목적을 위해, Python 기반 노트북을 사용하고 있으므로, 훈련된 모델을 이러한 노트북에서 Python으로 구축된 웹 앱에서 읽을 수 있는 형식으로 내보내는 단계를 탐구해 보겠습니다.
+
+## 도구
+
+이 작업을 위해 Flask와 Pickle 두 가지 도구가 필요합니다. 둘 다 Python에서 실행됩니다.
+
+✅ [Flask](https://palletsprojects.com/p/flask/)란? Flask는 그 창시자들에 의해 '마이크로 프레임워크'로 정의되며, Python을 사용하여 웹 페이지를 구축하는 템플릿 엔진을 포함한 웹 프레임워크의 기본 기능을 제공합니다. Flask로 구축하는 연습을 위해 [이 학습 모듈](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott)을 살펴보세요.
+
+✅ [Pickle](https://docs.python.org/3/library/pickle.html)이란? Pickle 🥒은 Python 객체 구조를 직렬화하고 역직렬화하는 Python 모듈입니다. 모델을 '피클'할 때, 구조를 웹에서 사용할 수 있도록 직렬화하거나 평탄화합니다. 주의하세요: 피클은 본질적으로 안전하지 않으므로 파일을 '언피클'하라는 메시지가 표시되면 주의해야 합니다. 피클된 파일의 접미사는 `.pkl`입니다.
+
+## 연습 - 데이터 정리하기
+
+이번 강의에서는 [NUFORC](https://nuforc.org) (National UFO Reporting Center)에서 수집한 80,000건의 UFO 목격 데이터를 사용합니다. 이 데이터에는 흥미로운 UFO 목격 설명이 포함되어 있습니다. 예를 들어:
+
+- **긴 설명 예시.** "한 남자가 밤에 풀밭에 빛나는 빔에서 나와 텍사스 인스트루먼트 주차장으로 달려갑니다".
+- **짧은 설명 예시.** "불빛이 우리를 쫓아왔습니다".
+
+[ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) 스프레드시트에는 목격이 발생한 `city`, `state`, `country`에 대한 열과 객체의 `shape`, `latitude`, `longitude`가 포함되어 있습니다.
+
+이번 강의에 포함된 빈 [노트북](../../../../3-Web-App/1-Web-App/notebook.ipynb)에서:
+
+1. 이전 강의에서 했던 것처럼 `pandas`, `matplotlib`, `numpy`를 가져오고 ufos 스프레드시트를 가져옵니다. 샘플 데이터 세트를 확인할 수 있습니다:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. ufos 데이터를 새로운 제목을 가진 작은 데이터프레임으로 변환합니다. `Country` 필드의 고유 값을 확인하세요.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. 이제 필요한 데이터 양을 줄이기 위해 null 값을 삭제하고 1-60초 사이의 목격만 가져옵니다:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Scikit-learn의 `LabelEncoder` 라이브러리를 가져와 국가의 텍스트 값을 숫자로 변환합니다:
+
+ ✅ LabelEncoder는 데이터를 알파벳 순서로 인코딩합니다
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+ 데이터는 다음과 같아야 합니다:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
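+
+   참고로, LabelEncoder가 라벨을 알파벳 순으로 정수에 대응시킨다는 점을 작은 스케치로 확인해 볼 수 있습니다(아래 값들은 설명용 가정 예시입니다):
+
+    ```python
+    from sklearn.preprocessing import LabelEncoder
+
+    # 알파벳 순 정렬: 'au'=0, 'ca'=1, 'gb'=2, 'us'=3
+    sample = ['us', 'au', 'gb', 'ca', 'au']
+    print(LabelEncoder().fit_transform(sample))  # [3 0 2 1 0]
+    ```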
+
+## 연습 - 모델 구축하기
+
+이제 데이터를 훈련 및 테스트 그룹으로 나누어 모델을 훈련할 준비를 할 수 있습니다.
+
+1. 훈련에 사용할 세 가지 특징을 X 벡터로 선택하고, y 벡터는 `Country`로 합니다. `Seconds`, `Latitude`, `Longitude`를 입력하면 국가 ID가 반환되도록 하려는 것입니다.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. 로지스틱 회귀를 사용하여 모델을 훈련합니다:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+정확도는 **약 95%**로 나쁘지 않습니다. `Country`와 `Latitude/Longitude`가 상관관계를 가지므로 놀랍지 않은 결과입니다.
+
+`Latitude`와 `Longitude`에서 `Country`를 유추할 수 있어야 하므로 이 모델이 아주 혁신적인 것은 아니지만, 직접 정리한 원시 데이터로 훈련하고 내보낸 다음 이 모델을 웹 앱에서 사용해 보는 것은 좋은 연습입니다.
+
+## 연습 - 모델 '피클'하기
+
+이제 모델을 _피클_할 시간입니다! 몇 줄의 코드로 이를 수행할 수 있습니다. _피클_된 후에는 피클된 모델을 로드하고 초, 위도, 경도 값을 포함하는 샘플 데이터 배열에 대해 테스트합니다.
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+모델은 **'3'**을 반환합니다. 이는 영국의 국가 코드입니다. 놀랍군요! 👽
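+
+참고로, 앞의 인코딩 단계에서 인코더 객체를 변수에 보관해 두면 예측된 숫자 코드를 원래 국가 라벨로 되돌릴 수 있습니다. 아래는 그런 변형을 보여주는 스케치입니다(출력되는 라벨 표기는 데이터셋의 원래 표기를 따른다고 가정합니다):
+
+```python
+from sklearn.preprocessing import LabelEncoder
+
+encoder = LabelEncoder()
+ufos['Country'] = encoder.fit_transform(ufos['Country'])
+
+# 나중에 예측된 숫자 코드를 원래 국가 표기로 되돌릴 수 있습니다
+print(encoder.inverse_transform([3]))  # 예: ['gb'] (영국)
+```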
+
+## 연습 - Flask 앱 구축하기
+
+이제 모델을 호출하고 유사한 결과를 반환하는 Flask 앱을 구축할 수 있습니다. 더 시각적으로 보기 좋게 만들 것입니다.
+
+1. _notebook.ipynb_ 파일과 _ufo-model.pkl_ 파일이 있는 위치에 **web-app** 폴더를 만듭니다.
+
+1. 그 폴더에 세 개의 폴더를 더 만듭니다: **static** 폴더와 그 안에 **css** 폴더, 그리고 **templates** 폴더. 이제 다음과 같은 파일과 디렉토리가 있어야 합니다:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+ ✅ 완성된 앱의 모습을 보려면 솔루션 폴더를 참조하세요
+
+1. _web-app_ 폴더에서 첫 번째로 생성할 파일은 **requirements.txt** 파일입니다. JavaScript 앱의 _package.json_과 같이 이 파일은 앱에 필요한 종속성을 나열합니다. **requirements.txt**에 다음 줄을 추가합니다:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. 이제 _web-app_ 폴더로 이동합니다:
+
+ ```bash
+ cd web-app
+ ```
+
+1. 터미널에서 `pip install`을 입력하여 _requirements.txt_에 나열된 라이브러리를 설치합니다:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. 이제 앱을 완성하기 위해 세 개의 파일을 더 생성할 준비가 되었습니다:
+
+ 1. 루트에 **app.py**를 생성합니다.
+ 2. _templates_ 디렉토리에 **index.html**을 생성합니다.
+ 3. _static/css_ 디렉토리에 **styles.css**를 생성합니다.
+
+1. _styles.css_ 파일에 몇 가지 스타일을 추가합니다:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. 다음으로 _index.html_ 파일을 작성합니다:
+
+ ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+      <div class="grid">
+
+        <div class="box">
+
+          <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+          <form action="{{ url_for('predict')}}" method="post">
+            <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+            <input type="text" name="latitude" placeholder="Latitude" required="required" />
+            <input type="text" name="longitude" placeholder="Longitude" required="required" />
+            <button type="submit" class="btn">Predict country where the UFO is seen</button>
+          </form>
+
+          <p>{{ prediction_text }}</p>
+
+        </div>
+
+      </div>
+
+    </body>
+    </html>
+ ```
+
+   이 파일의 템플릿 구문을 살펴보세요. 예측 텍스트처럼 앱이 제공하는 변수는 '머스태시(mustache)' 구문 `{{}}`로 감싸여 있습니다. 또한 `/predict` 라우트로 예측을 전송(post)하는 폼도 있습니다.
+
+   마지막으로, 모델을 사용하고 예측을 표시하는 일을 담당하는 Python 파일을 작성할 준비가 되었습니다:
+
+1. `app.py`에 다음을 추가합니다:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+   > 💡 팁: Flask로 웹 앱을 실행할 때 [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode)를 추가하면, 서버를 재시작하지 않아도 애플리케이션의 변경 사항이 즉시 반영됩니다. 주의하세요! 프로덕션 앱에서는 이 모드를 활성화하지 마세요.
+
+`python app.py` 또는 `python3 app.py`를 실행하면 웹 서버가 로컬에서 시작되고, UFO가 어디에서 목격되었을지에 대한 궁금증에 답을 얻을 수 있는 짧은 폼을 작성할 수 있습니다!
+
+그 전에 `app.py`의 구성 요소를 살펴보세요:
+
+1. 먼저 종속성이 로드되고 앱이 시작됩니다.
+1. 그다음 모델을 가져옵니다.
+1. 그다음 홈 라우트에서 index.html이 렌더링됩니다.
+
+`/predict` 라우트에서는 폼이 전송되면 여러 가지 일이 일어납니다:
+
+1. 폼 변수가 수집되어 numpy 배열로 변환됩니다. 이어서 모델에 전달되고 예측이 반환됩니다.
+2. 표시하려는 국가 목록이 예측된 국가 코드로부터 읽기 쉬운 텍스트로 다시 렌더링되고, 그 값이 index.html로 전달되어 템플릿에 렌더링됩니다.
+
+Flask와 피클된 모델을 사용해 이런 방식으로 모델을 활용하는 것은 비교적 간단합니다. 가장 어려운 부분은 예측을 얻기 위해 모델에 보내야 하는 데이터가 어떤 형태인지 이해하는 것입니다. 이는 모델이 어떻게 훈련되었는지에 달려 있습니다. 이 모델은 예측을 얻기 위해 세 개의 데이터 포인트를 입력해야 합니다.
+
+전문적인 환경에서는 모델을 훈련하는 사람들과 이를 웹이나 모바일 앱에서 사용하는 사람들 사이에 얼마나 좋은 소통이 필요한지 알 수 있습니다. 우리의 경우에는 단 한 사람, 바로 여러분입니다!
+
+---
+
+## 🚀 Challenge
+
+노트북에서 작업한 뒤 모델을 Flask 앱으로 가져오는 대신, Flask 앱 안에서 바로 모델을 훈련할 수도 있습니다! 데이터 정리 이후의 노트북 Python 코드를 변환하여, `train`이라는 라우트에서 앱 내부에서 모델을 훈련하도록 해보세요. 이 방법을 추구할 때의 장단점은 무엇인가요?
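+
+이 아이디어가 어떤 모습일지 감을 잡기 위한 최소한의 `/train` 라우트 스케치를 아래에 둡니다. 실제 솔루션 코드가 아니라 가정에 기반한 예시이며, app.py에 추가되어 전역 `model` 변수와 `./data/ufos.csv` 경로가 존재한다고 가정합니다:
+
+```python
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+from sklearn.preprocessing import LabelEncoder
+
+
+@app.route("/train")
+def train():
+    # 노트북의 데이터 정리 단계를 그대로 옮긴 것입니다
+    ufos = pd.read_csv("./data/ufos.csv")
+    ufos = pd.DataFrame({
+        "Seconds": ufos["duration (seconds)"],
+        "Country": ufos["country"],
+        "Latitude": ufos["latitude"],
+        "Longitude": ufos["longitude"],
+    }).dropna()
+    ufos = ufos[(ufos["Seconds"] >= 1) & (ufos["Seconds"] <= 60)]
+    ufos["Country"] = LabelEncoder().fit_transform(ufos["Country"])
+
+    global model
+    model = LogisticRegression().fit(
+        ufos[["Seconds", "Latitude", "Longitude"]], ufos["Country"]
+    )
+    return "Model retrained."
+```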
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## 복습 및 자습
+
+ML 모델을 사용하는 웹 앱을 구축하는 방법은 여러 가지가 있습니다. JavaScript 또는 Python을 사용하여 머신 러닝을 활용하는 웹 앱을 구축할 수 있는 방법을 목록으로 만들어 보세요. 아키텍처를 고려해 보세요: 모델이 앱에 있어야 하나요, 아니면 클라우드에 있어야 하나요? 후자의 경우, 어떻게 접근할 수 있나요? 적용된 ML 웹 솔루션의 아키텍처 모델을 그려보세요.
+
+## 과제
+
+[다른 모델 시도해보기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/3-Web-App/1-Web-App/assignment.md b/translations/ko/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..66e6bc91d
--- /dev/null
+++ b/translations/ko/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# 다른 모델 시도하기
+
+## 지침
+
+이제 훈련된 회귀 모델을 사용하여 하나의 웹 앱을 구축했으므로, 이전 회귀 수업에서 사용한 모델 중 하나를 사용하여 이 웹 앱을 다시 만들어보세요. 스타일을 유지하거나 호박 데이터를 반영하도록 다르게 디자인할 수 있습니다. 모델의 훈련 방법을 반영하도록 입력을 변경하는 것을 잊지 마세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------------------------- | ------------------------------------------------------ | ------------------------------------------------------ | -------------------------------------- |
+| 웹 앱 동작 여부 | 웹 앱이 예상대로 실행되고 클라우드에 배포됨 | 웹 앱에 결함이 있거나 예상치 못한 결과를 보임 | 웹 앱이 제대로 작동하지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 소스로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/3-Web-App/README.md b/translations/ko/3-Web-App/README.md
new file mode 100644
index 000000000..2b236db15
--- /dev/null
+++ b/translations/ko/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# 웹 앱을 구축하여 ML 모델 사용하기
+
+이 커리큘럼의 이 섹션에서는 적용된 머신러닝 주제에 대해 소개합니다: Scikit-learn 모델을 파일로 저장하여 웹 애플리케이션 내에서 예측에 사용할 수 있는 방법입니다. 모델을 저장한 후에는 Flask로 구축된 웹 앱에서 이를 사용하는 방법을 배우게 됩니다. 먼저 UFO 목격에 관한 데이터를 사용하여 모델을 생성합니다! 그런 다음, 위도와 경도 값을 입력하여 어느 나라에서 UFO를 목격했는지 예측할 수 있는 웹 앱을 구축합니다.
+
+
+
+Michael Herren의 사진, Unsplash에서 제공
+
+## 레슨
+
+1. [웹 앱 구축하기](1-Web-App/README.md)
+
+## 크레딧
+
+"웹 앱 구축하기"는 [Jen Looper](https://twitter.com/jenlooper)의 ♥️와 함께 작성되었습니다.
+
+♥️ 퀴즈는 Rohan Raj가 작성했습니다.
+
+데이터셋은 [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings)에서 제공되었습니다.
+
+웹 앱 아키텍처는 부분적으로 [이 기사](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4)와 Abhinav Sagar의 [이 저장소](https://github.com/abhinavsagar/machine-learning-deployment)의 제안을 참고했습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 소스로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/1-Introduction/README.md b/translations/ko/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..2890d9ba4
--- /dev/null
+++ b/translations/ko/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# 분류 소개
+
+이 네 가지 강의에서, 여러분은 고전적인 머신 러닝의 근본적인 초점인 _분류_를 탐구할 것입니다. 아시아와 인도의 모든 훌륭한 요리에 관한 데이터셋을 사용하여 다양한 분류 알고리즘을 다룰 것입니다. 배가 고프시길 바랍니다!
+
+
+
+> 이 강의에서 범아시아 요리를 축하하세요! 이미지 제공: [Jen Looper](https://twitter.com/jenlooper)
+
+분류는 회귀 기법과 많은 공통점을 가진 [지도 학습](https://wikipedia.org/wiki/Supervised_learning)의 한 형태입니다. 머신 러닝이 데이터셋을 사용하여 값이나 이름을 예측하는 것이라면, 분류는 일반적으로 _이진 분류_와 _다중 클래스 분류_의 두 그룹으로 나뉩니다.
+
+[](https://youtu.be/eg8DJYwdMyg "분류 소개")
+
+> 🎥 위 이미지를 클릭하면 MIT의 John Guttag이 분류를 소개하는 비디오로 이동합니다.
+
+기억하세요:
+
+- **선형 회귀**는 변수 간의 관계를 예측하고 새로운 데이터 포인트가 그 선과의 관계에서 어디에 위치할지 정확하게 예측하는 데 도움을 줍니다. 예를 들어, _9월과 12월에 호박 가격이 얼마일지_ 예측할 수 있습니다.
+- **로지스틱 회귀**는 "이진 카테고리"를 발견하는 데 도움을 줍니다: 이 가격대에서 _이 호박이 주황색인지 아닌지_?
+
+분류는 데이터 포인트의 레이블이나 클래스를 결정하는 다양한 알고리즘을 사용합니다. 이 요리 데이터를 사용하여 재료 그룹을 관찰함으로써 원산지 요리를 결정할 수 있는지 살펴보겠습니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [이 강의는 R로도 제공됩니다!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### 소개
+
+분류는 머신 러닝 연구자와 데이터 과학자의 기본 활동 중 하나입니다. 이진 값의 기본 분류("이 이메일이 스팸인지 아닌지")부터 컴퓨터 비전을 사용한 복잡한 이미지 분류 및 세분화까지, 데이터를 클래스로 분류하고 질문하는 것은 항상 유용합니다.
+
+과정을 더 과학적으로 설명하자면, 분류 방법은 입력 변수와 출력 변수 간의 관계를 매핑할 수 있는 예측 모델을 생성합니다.
+
+
+
+> 분류 알고리즘이 처리할 이진 vs. 다중 클래스 문제. 인포그래픽 제공: [Jen Looper](https://twitter.com/jenlooper)
+
+데이터를 정리하고 시각화하며 ML 작업을 준비하기 전에, 데이터를 분류하는 데 머신 러닝을 활용할 수 있는 다양한 방법에 대해 알아봅시다.
+
+고전적인 머신 러닝을 사용한 분류는 [통계학](https://wikipedia.org/wiki/Statistical_classification)에서 유래하며, `smoker`, `weight`, `age`와 같은 특징을 사용하여 _X 질병 발병 가능성_을 결정합니다. 이전에 수행한 회귀 연습과 유사한 지도 학습 기법으로, 데이터는 라벨이 지정되고 ML 알고리즘은 이러한 라벨을 사용하여 데이터셋의 클래스(또는 '특징')를 분류하고 예측하여 그룹이나 결과에 할당합니다.
+
+✅ 요리에 관한 데이터셋을 상상해보세요. 다중 클래스 모델은 어떤 질문에 답할 수 있을까요? 이진 모델은 어떤 질문에 답할 수 있을까요? 특정 요리가 호로파를 사용할 가능성이 있는지 결정하고 싶다면 어떻게 할까요? 팔각(스타 아니스), 아티초크, 콜리플라워, 고추냉이가 가득한 식료품 가방을 선물로 받았을 때 전형적인 인도 요리를 만들 수 있을까요?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 위 이미지를 클릭하면 'Chopped' 쇼의 '미스터리 바구니'에서 셰프들이 무작위로 선택된 재료로 요리를 만드는 전제의 비디오로 이동합니다. 분명히 ML 모델이 도움이 되었을 것입니다!
+
+## 안녕하세요 '분류기'
+
+이 요리 데이터셋에 대해 우리가 묻고 싶은 질문은 실제로 **다중 클래스 질문**입니다. 여러 잠재적인 국가 요리가 있기 때문입니다. 재료 묶음이 주어졌을 때, 이 많은 클래스 중 어디에 해당 데이터가 속할까요?
+
+Scikit-learn은 문제를 해결하고자 하는 유형에 따라 데이터를 분류하는 데 사용할 수 있는 여러 알고리즘을 제공합니다. 다음 두 강의에서는 이러한 알고리즘 중 몇 가지에 대해 배울 것입니다.
+
+## 연습 - 데이터 정리 및 균형 맞추기
+
+이 프로젝트를 시작하기 전에 첫 번째 작업은 데이터를 정리하고 **균형을 맞추는 것**입니다. 이 폴더의 루트에 있는 빈 _notebook.ipynb_ 파일로 시작하세요.
+
+첫 번째로 설치할 것은 [imblearn](https://imbalanced-learn.org/stable/)입니다. 이것은 데이터를 더 잘 균형 맞출 수 있게 해주는 Scikit-learn 패키지입니다(이 작업에 대해 곧 더 배울 것입니다).
+
+1. `imblearn`을 설치하려면 `pip install`을 실행하세요:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. 데이터를 가져오고 시각화하는 데 필요한 패키지를 가져오고, `imblearn`에서 `SMOTE`를 가져오세요.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ 이제 데이터를 가져올 준비가 되었습니다.
+
+1. 데이터를 가져오는 다음 작업을 수행하세요:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+   `read_csv()`를 호출하면 csv 파일 _cuisines.csv_ 의 내용을 읽어 변수 `df`에 저장합니다.
+
+1. 데이터의 모양을 확인하세요:
+
+ ```python
+ df.head()
+ ```
+
+ 처음 다섯 줄은 다음과 같습니다:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. `info()`를 호출하여 이 데이터에 대한 정보를 얻으세요:
+
+ ```python
+ df.info()
+ ```
+
+ 출력은 다음과 같습니다:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## 연습 - 요리에 대해 배우기
+
+이제 작업이 더 흥미로워집니다. 요리별 데이터 분포를 발견해 봅시다.
+
+1. `barh()`를 호출하여 데이터를 막대로 플로팅하세요:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ 요리의 수는 유한하지만 데이터의 분포는 고르지 않습니다. 이를 수정할 수 있습니다! 그 전에 조금 더 탐색해 보세요.
+
+1. 요리별로 사용 가능한 데이터 양을 찾아 출력하세요:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ 출력은 다음과 같습니다:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## 재료 발견하기
+
+이제 데이터를 더 깊이 파고들어 요리별 전형적인 재료가 무엇인지 알아볼 수 있습니다. 요리 간 혼동을 일으키는 반복 데이터를 정리해야 하므로, 이 문제에 대해 배워봅시다.
+
+1. 재료 데이터프레임을 생성하는 `create_ingredient_df()` 함수를 Python에서 만드세요. 이 함수는 도움이 되지 않는 열을 제거하고 재료를 개수에 따라 정렬하는 것으로 시작합니다:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ 이제 이 함수를 사용하여 요리별 상위 10개의 가장 인기 있는 재료에 대한 아이디어를 얻을 수 있습니다.
+
+1. `create_ingredient_df()`를 호출하고 반환된 데이터프레임에 `barh()`를 호출하여 플로팅하세요:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 일본 요리에 대해 동일하게 수행하세요:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 중국 요리에 대해 동일하게 수행하세요:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 인도 요리를 플로팅하세요:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 마지막으로 한국 요리를 플로팅하세요:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 이제 `drop()`을 호출하여 서로 다른 요리 간 혼동을 일으키는 가장 일반적인 재료를 제거하세요:
+
+ 모두가 쌀, 마늘, 생강을 좋아합니다!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## 데이터셋 균형 맞추기
+
+이제 데이터를 정리했으니, [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - "Synthetic Minority Over-sampling Technique" -를 사용하여 균형을 맞추세요.
+
+1. `fit_resample()`을 호출하세요. 이 전략은 보간을 통해 새로운 샘플을 생성합니다.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ 데이터를 균형 맞춤으로써 분류 시 더 나은 결과를 얻을 수 있습니다. 이진 분류를 생각해보세요. 대부분의 데이터가 하나의 클래스인 경우, ML 모델은 단순히 그 클래스에 대한 데이터가 더 많기 때문에 그 클래스를 더 자주 예측할 것입니다. 데이터를 균형 맞추면 왜곡된 데이터를 제거하는 데 도움이 됩니다.
+
+1. 이제 재료별 라벨 수를 확인할 수 있습니다:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ 출력은 다음과 같습니다:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ 데이터는 깔끔하고 균형이 맞으며 매우 맛있습니다!
+
+1. 마지막 단계는 균형 잡힌 데이터(라벨과 특징 포함)를 새로운 데이터프레임에 저장하여 파일로 내보내는 것입니다:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. `transformed_df.head()`와 `transformed_df.info()`를 사용하여 데이터를 한 번 더 확인하세요. 이 데이터를 저장하여 향후 강의에서 사용할 수 있습니다:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ 이 새로운 CSV는 이제 루트 데이터 폴더에서 찾을 수 있습니다.
+
+---
+
+## 🚀도전
+
+이 커리큘럼에는 여러 흥미로운 데이터셋이 포함되어 있습니다. `data` 폴더를 살펴보고 이진 또는 다중 클래스 분류에 적합한 데이터셋이 있는지 확인해 보세요. 이 데이터셋에 대해 어떤 질문을 할 수 있을까요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## 복습 및 자기 학습
+
+SMOTE의 API를 탐색해 보세요. 어떤 사용 사례에 가장 적합할까요? 어떤 문제를 해결할 수 있을까요?
+
+## 과제
+
+[분류 방법 탐색](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/1-Introduction/assignment.md b/translations/ko/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..fd8a00d64
--- /dev/null
+++ b/translations/ko/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# 분류 방법 탐색
+
+## 지침
+
+[Scikit-learn 문서](https://scikit-learn.org/stable/supervised_learning.html)에서 데이터를 분류하는 다양한 방법을 찾을 수 있습니다. 이 문서에서 작은 탐색을 해보세요: 목표는 분류 방법을 찾아 이 커리큘럼의 데이터셋, 물어볼 수 있는 질문, 그리고 분류 기법을 매칭하는 것입니다. 스프레드시트 또는 .doc 파일에 표를 만들어 데이터셋이 분류 알고리즘과 어떻게 작동하는지 설명하세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| ------- | --------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | 5개의 알고리즘과 분류 기법을 개요하는 문서가 제시됩니다. 개요는 잘 설명되고 자세합니다. | 3개의 알고리즘과 분류 기법을 개요하는 문서가 제시됩니다. 개요는 잘 설명되고 자세합니다. | 3개 미만의 알고리즘과 분류 기법을 개요하는 문서가 제시되고 개요는 잘 설명되지 않았거나 자세하지 않습니다. |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/1-Introduction/solution/Julia/README.md b/translations/ko/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..648e67cd5
--- /dev/null
+++ b/translations/ko/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서가 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/2-Classifiers-1/README.md b/translations/ko/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..e2f003a70
--- /dev/null
+++ b/translations/ko/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,78 @@
+# 요리 분류기 1
+
+이 강의에서는 지난 강의에서 저장한 균형 잡힌 깨끗한 요리 데이터셋을 사용하게 됩니다.
+
+이 데이터셋을 다양한 분류기와 함께 사용하여 _주어진 재료 그룹을 기반으로 특정 국가의 요리를 예측_합니다. 이 과정에서 분류 작업에 알고리즘을 활용할 수 있는 여러 가지 방법에 대해 더 배우게 될 것입니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+## 준비
+
+[1강](../1-Introduction/README.md)을 완료했다고 가정하고, 이 네 개의 강의를 위해 루트 `/data` 폴더에 _cleaned_cuisines.csv_ 파일이 있는지 확인하세요.
+
+## 연습 - 국가 요리 예측
+
+1. 이 강의의 _notebook.ipynb_ 파일에서 작업하면서, Pandas 라이브러리와 함께 해당 파일을 가져옵니다:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ 데이터는 다음과 같습니다:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. 이제 몇 가지 라이브러리를 더 가져옵니다:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. X와 y 좌표를 훈련을 위해 두 개의 데이터프레임으로 나눕니다. `cuisine`은 라벨 데이터프레임이 될 수 있습니다:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ 다음과 같이 보일 것입니다:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. `drop()`을 호출하여 `Unnamed: 0` 열과 `cuisine` 열을 제거합니다. 나머지 데이터를 훈련 가능한 특징으로 저장합니다:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ 당신의 특징은 다음과 같습니다:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 기하기 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 원어를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/2-Classifiers-1/assignment.md b/translations/ko/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..49e3fb5c4
--- /dev/null
+++ b/translations/ko/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# 솔버 연구
+## 지침
+
+이 수업에서는 알고리즘을 머신 러닝 프로세스와 결합하여 정확한 모델을 만드는 다양한 솔버에 대해 배웠습니다. 수업에서 언급된 솔버들을 살펴보고 두 가지를 선택하세요. 자신의 말로 이 두 솔버를 비교하고 대조하세요. 이 솔버들은 어떤 문제를 해결하나요? 다양한 데이터 구조와는 어떻게 작동하나요? 왜 하나를 다른 것보다 선택하게 되나요?
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | 두 개의 단락으로 각 솔버에 대해 신중하게 비교한 .doc 파일이 제출됨 | 하나의 단락만 있는 .doc 파일이 제출됨 | 과제가 불완전함 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/ko/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..b3a1bb4b2
--- /dev/null
+++ b/translations/ko/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서는 원어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/3-Classifiers-2/README.md b/translations/ko/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..ffac41e72
--- /dev/null
+++ b/translations/ko/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# 요리 분류기 2
+
+두 번째 분류 수업에서는 숫자 데이터를 분류하는 더 많은 방법을 탐구하게 됩니다. 또한, 하나의 분류기를 선택하는 것이 다른 분류기를 선택하는 것에 비해 어떤 영향을 미치는지에 대해 배우게 됩니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### 전제 조건
+
+이전 수업을 완료하고, 이 4-강의 폴더의 루트에 있는 `data` 폴더에 _cleaned_cuisines.csv_라는 정리된 데이터셋을 가지고 있다고 가정합니다.
+
+### 준비
+
+_notebook.ipynb_ 파일에 정리된 데이터셋을 로드하고, 모델 구축 과정에 사용할 수 있도록 X와 y 데이터프레임으로 나누세요.
+
+## 분류 지도
+
+이전에 Microsoft의 치트 시트를 사용하여 데이터를 분류할 때 사용할 수 있는 다양한 옵션에 대해 배웠습니다. Scikit-learn은 유사하지만 더 세분화된 치트 시트를 제공하여 분류기(다른 용어로는 추정기(estimator))의 선택 범위를 좁히는 데 도움을 줍니다:
+
+
+> 팁: [이 지도를 온라인에서 방문](https://scikit-learn.org/stable/tutorial/machine_learning_map/)하여 경로를 따라가며 관련 문서를 읽어보세요.
+
+### 계획
+
+데이터를 명확하게 이해한 후에는 이 지도가 매우 유용합니다. 경로를 따라가며 결정을 내릴 수 있습니다:
+
+- 샘플이 50개 이상
+- 카테고리를 예측하고자 함
+- 라벨이 있는 데이터
+- 샘플이 10만 개 미만
+- ✨ Linear SVC 선택 가능
+- 작동하지 않으면 숫자 데이터가 있으므로
+ - ✨ KNeighbors Classifier 시도 가능
+ - 이것도 작동하지 않으면 ✨ SVC와 ✨ Ensemble Classifiers 시도
+
+이 경로를 따르는 것이 매우 유용합니다.
+
+## 연습 - 데이터 분할
+
+이 경로를 따라가려면 사용할 라이브러리를 가져오는 것부터 시작해야 합니다.
+
+1. 필요한 라이브러리 가져오기:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. 훈련 데이터와 테스트 데이터를 분할하기:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC 분류기
+
+Support-Vector clustering(SVC)은 ML 기술의 Support-Vector machines 가족에 속합니다(아래에서 더 알아보세요). 이 방법에서는 '커널'을 선택하여 라벨을 클러스터링할 수 있습니다. 'C' 매개변수는 '정규화'를 의미하며 매개변수의 영향을 조절합니다. 커널은 [여러 가지](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) 중 하나일 수 있으며, 여기서는 'linear'로 설정하여 Linear SVC를 활용합니다. 확률은 기본적으로 'false'이지만, 여기서는 확률 추정을 수집하기 위해 'true'로 설정합니다. 데이터를 섞어 확률을 얻기 위해 무작위 상태(random state)를 '0'으로 설정합니다.
+
+### 연습 - Linear SVC 적용하기
+
+분류기 배열을 만들어 시작하세요. 테스트하면서 이 배열에 점진적으로 추가할 것입니다.
+
+1. Linear SVC로 시작하기:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Linear SVC를 사용하여 모델을 훈련하고 보고서를 출력하기:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 결과가 꽤 좋습니다:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors 분류기
+
+K-Neighbors는 ML 방법의 "이웃(neighbors)" 가족에 속하며, 지도 학습과 비지도 학습 모두에 사용할 수 있습니다. 이 방법에서는 미리 정의된 수의 포인트를 중심으로 데이터를 모아, 해당 데이터에 대해 일반화된 라벨을 예측할 수 있습니다.
+
+### 연습 - K-Neighbors 분류기 적용하기
+
+이전 분류기는 좋았고 데이터와 잘 맞았지만, 더 나은 정확도를 얻을 수 있을지도 모릅니다. K-Neighbors 분류기를 시도해 보세요.
+
+1. 분류기 배열에 줄을 추가하기 (Linear SVC 항목 뒤에 쉼표 추가):
+
+ ```python
+ 'KNN classifier': KNeighborsClassifier(C),
+ ```
+
+ 결과가 약간 나쁩니다:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)에 대해 알아보세요
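+
+참고로, 위 코드에서는 `C`(=10)가 `KNeighborsClassifier`의 첫 번째 위치 인자인 `n_neighbors`로 전달됩니다. 이웃 수를 바꾸면 결과가 달라질 수 있는데, 아래는 이를 실험해 보는 간단한 스케치입니다(가정: 앞에서 만든 X_train, X_test, y_train, y_test를 그대로 사용):
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+from sklearn.metrics import accuracy_score
+import numpy as np
+
+# 이웃 수(n_neighbors)에 따라 정확도가 어떻게 변하는지 비교
+for k in [3, 5, 10, 15]:
+    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, np.ravel(y_train))
+    print(k, accuracy_score(y_test, knn.predict(X_test)))
+```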
+
+## Support Vector 분류기
+
+Support-Vector 분류기는 ML 방법의 [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) 가족에 속하며 분류 및 회귀 작업에 사용됩니다. SVM은 "훈련 예제를 공간의 포인트로 매핑"하여 두 카테고리 간의 거리를 최대화합니다. 이후 데이터는 이 공간에 매핑되어 카테고리를 예측할 수 있습니다.
+
+### 연습 - Support Vector 분류기 적용하기
+
+Support Vector 분류기를 사용하여 조금 더 나은 정확도를 시도해 봅시다.
+
+1. K-Neighbors 항목 뒤에 쉼표를 추가한 후 이 줄을 추가하세요:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ 결과가 꽤 좋습니다!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm)에 대해 알아보세요
+
+## 앙상블 분류기
+
+이전 테스트가 꽤 좋았지만, 경로를 끝까지 따라가 봅시다. '앙상블 분류기', 특히 랜덤 포레스트와 AdaBoost를 시도해 봅시다:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+결과가 매우 좋습니다, 특히 랜덤 포레스트의 경우:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ [앙상블 분류기](https://scikit-learn.org/stable/modules/ensemble.html)에 대해 알아보세요
+
+이 기계 학습 방법은 "여러 기본 추정자의 예측을 결합"하여 모델의 품질을 향상시킵니다. 예제에서는 랜덤 트리와 AdaBoost를 사용했습니다.
+
+- [랜덤 포레스트](https://scikit-learn.org/stable/modules/ensemble.html#forest)는 평균화 방법으로, '결정 트리'의 '숲'을 생성하여 과적합을 피하기 위해 무작위성을 주입합니다. n_estimators 매개변수는 트리의 수를 설정합니다.
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html)는 데이터셋에 분류기를 맞추고, 동일한 데이터셋에 그 분류기의 복사본을 맞춥니다. 잘못 분류된 항목의 가중치에 집중하고 다음 분류기에 대한 적합을 조정하여 수정합니다.
+
+---
+
+## 🚀챌린지
+
+이 기술들 각각에는 조정할 수 있는 많은 매개변수가 있습니다. 각 기술의 기본 매개변수를 조사하고, 이러한 매개변수를 조정하는 것이 모델의 품질에 어떤 의미가 있는지 생각해 보세요.
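+
+시작점으로, 교차 검증 기반의 그리드 탐색을 사용하는 아래와 같은 스케치를 생각해 볼 수 있습니다(가정: 앞에서 만든 X_train, y_train을 사용하며, 매개변수 후보 값은 임의의 예시입니다):
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import GridSearchCV
+import numpy as np
+
+# 랜덤 포레스트의 주요 매개변수 몇 가지를 그리드 탐색
+grid = GridSearchCV(
+    RandomForestClassifier(random_state=0),
+    param_grid={'n_estimators': [50, 100, 200], 'max_depth': [None, 10, 20]},
+    cv=3)
+grid.fit(X_train, np.ravel(y_train))
+print(grid.best_params_, grid.best_score_)
+```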
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## 복습 및 자습
+
+이 수업에는 많은 용어가 있으므로, [이 목록](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott)의 유용한 용어를 검토하는 시간을 가져보세요!
+
+## 과제
+
+[매개변수 조정](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서를 해당 언어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/3-Classifiers-2/assignment.md b/translations/ko/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..b7d6db649
--- /dev/null
+++ b/translations/ko/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Parameter Play
+
+## 지침
+
+이 분류기들을 사용할 때 기본적으로 설정된 많은 매개변수가 있습니다. VS Code의 인텔리센스를 사용하면 이를 탐색할 수 있습니다. 이 수업에서 하나의 ML 분류 기법을 채택하고 다양한 매개변수 값을 조정하여 모델을 재훈련하세요. 일부 변경 사항이 모델 품질에 도움이 되는 이유와 다른 변경 사항이 모델 품질을 저하시킬 수 있는 이유를 설명하는 노트를 작성하세요. 답변은 상세히 작성하세요.
+
+## 평가 기준
+
+| 기준 | 모범 사례 | 적절함 | 개선 필요 |
+| -------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------- |
+| | 분류기가 완전히 구축되고 매개변수가 조정된 노트북이 텍스트 박스에 설명과 함께 제시됩니다 | 노트북이 부분적으로 제시되거나 설명이 부실합니다 | 노트북에 버그가 있거나 결함이 있습니다 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/ko/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..82fd0e6f1
--- /dev/null
+++ b/translations/ko/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 원어 버전을 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/4-Applied/README.md b/translations/ko/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..ea770a55e
--- /dev/null
+++ b/translations/ko/4-Classification/4-Applied/README.md
@@ -0,0 +1,318 @@
+# 음식 추천 웹 앱 만들기
+
+이 강의에서는 이전 강의에서 배운 몇 가지 기술과 이 시리즈에서 사용된 맛있는 음식 데이터셋을 활용하여 분류 모델을 구축할 것입니다. 또한, 저장된 모델을 사용하기 위해 Onnx의 웹 런타임을 활용한 작은 웹 앱을 만들 것입니다.
+
+머신 러닝의 가장 유용한 실제 활용 중 하나는 추천 시스템을 구축하는 것입니다. 오늘 그 방향으로 첫 걸음을 내딛을 수 있습니다!
+
+[](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 위 이미지를 클릭하면 비디오를 볼 수 있습니다: Jen Looper가 분류된 음식 데이터를 사용하여 웹 앱을 구축합니다
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+이 강의에서 배울 내용:
+
+- 모델을 구축하고 Onnx 모델로 저장하는 방법
+- Netron을 사용하여 모델을 검사하는 방법
+- 웹 앱에서 추론을 위해 모델을 사용하는 방법
+
+## 모델 구축하기
+
+적용된 ML 시스템을 구축하는 것은 비즈니스 시스템에서 이러한 기술을 활용하는 중요한 부분입니다. Onnx를 사용하여 웹 애플리케이션 내에서 모델을 사용할 수 있으며, 필요할 경우 오프라인에서도 사용할 수 있습니다.
+
+[이전 강의](../../3-Web-App/1-Web-App/README.md)에서 UFO 목격에 관한 회귀 모델을 만들고, 이를 "피클"하여 Flask 앱에서 사용했습니다. 이 아키텍처는 매우 유용하지만, 풀스택 파이썬 앱이며, JavaScript 애플리케이션을 사용해야 할 수도 있습니다.
+
+이 강의에서는 추론을 위한 기본 JavaScript 기반 시스템을 구축할 수 있습니다. 그러나 먼저 모델을 훈련하고 Onnx로 변환해야 합니다.
+
+## 실습 - 분류 모델 훈련
+
+먼저, 우리가 사용한 정제된 음식 데이터셋을 사용하여 분류 모델을 훈련합니다.
+
+1. 유용한 라이브러리 가져오기:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ Scikit-learn 모델을 Onnx 형식으로 변환하는 데 도움이 되는 '[skl2onnx](https://onnx.ai/sklearn-onnx/)'가 필요합니다.
+
+1. 이전 강의에서 했던 것처럼 `read_csv()`를 사용하여 CSV 파일을 읽어 데이터를 처리합니다:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. 첫 두 개의 불필요한 열을 제거하고 나머지 데이터를 'X'로 저장합니다:
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. 라벨을 'y'로 저장합니다:
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### 훈련 루틴 시작
+
+우리는 정확도가 좋은 'SVC' 라이브러리를 사용할 것입니다.
+
+1. Scikit-learn에서 적절한 라이브러리를 가져옵니다:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. 훈련 세트와 테스트 세트를 분리합니다:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. 이전 강의에서 했던 것처럼 SVC 분류 모델을 구축합니다:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. 이제 `predict()`를 호출하여 모델을 테스트합니다:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. 모델의 품질을 확인하기 위해 분류 보고서를 출력합니다:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 이전에 봤던 것처럼 정확도가 좋습니다:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### 모델을 Onnx로 변환하기
+
+올바른 텐서 수를 사용하여 변환을 수행해야 합니다. 이 데이터셋에는 380개의 재료가 나열되어 있으므로 `FloatTensorType`에 그 숫자를 표기해야 합니다:
+
+1. 380의 텐서 숫자를 사용하여 변환합니다.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. onx를 생성하고 **model.onnx** 파일로 저장합니다:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+   > 참고로, 변환 스크립트에서 [옵션](https://onnx.ai/sklearn-onnx/parameterized.html)을 전달할 수 있습니다. 이 경우 'nocl'을 True로, 'zipmap'을 False로 설정했습니다. 이것은 분류 모델이므로 ZipMap을 제거할 수 있는데, ZipMap은 필요 없는 딕셔너리 목록을 생성하기 때문입니다. `nocl`은 모델에 클래스 정보를 포함할지 여부를 나타내며, `nocl`을 'True'로 설정하면 모델 크기를 줄일 수 있습니다.
+
+이제 노트북 전체를 실행하면 Onnx 모델이 만들어져 이 폴더에 저장됩니다.
+
+## 모델 확인하기
+
+Onnx 모델은 Visual Studio Code에서는 잘 보이지 않지만, 많은 연구자가 모델이 제대로 만들어졌는지 확인하기 위해 사용하는 아주 좋은 무료 시각화 소프트웨어가 있습니다. [Netron](https://github.com/lutzroeder/Netron)을 다운로드하여 model.onnx 파일을 열어보세요. 380개의 입력과 분류기가 나열된 간단한 모델이 시각화된 것을 볼 수 있습니다:
+
+
+
+Netron은 모델을 확인하는 데 유용한 도구입니다.
+
+이제 이 깔끔한 모델을 웹 앱에서 사용할 준비가 되었습니다. 냉장고를 들여다보며, 남은 재료 조합으로 모델이 판단하는 특정 요리를 만들 수 있을지 알아볼 때 유용한 앱을 만들어 봅시다.
+
+## 추천 웹 애플리케이션 구축하기
+
+모델을 웹 앱에서 직접 사용할 수 있습니다. 이 아키텍처는 로컬에서, 필요하다면 오프라인에서도 실행할 수 있게 해줍니다. `model.onnx` 파일을 저장한 폴더에 `index.html` 파일을 생성하는 것으로 시작하세요.
+
+1. 이 파일 _index.html_에 다음 마크업을 추가합니다:
+
+ ```html
+    <!DOCTYPE html>
+    <html>
+        <header>
+            <title>Cuisine Matcher</title>
+        </header>
+        <body>
+            ...
+        </body>
+    </html>
+ ```
+
+1. 이제 `body` 태그 내에서 몇 가지 재료를 표시하는 체크박스 목록을 보여주는 약간의 마크업을 추가합니다:
+
+ ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <!-- 재료마다 체크박스 하나: value는 데이터셋에서 해당 재료의 열 인덱스입니다 (예: apple = 4) -->
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+
+        <!-- ...같은 패턴으로 나머지 재료의 체크박스를 계속 추가합니다... -->
+
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+ ```
+
+ 각 체크박스에 값이 부여된 것을 확인하세요. 이는 데이터셋에 따라 재료가 있는 인덱스를 반영합니다. 예를 들어, 사과는 이 알파벳 순서 목록에서 다섯 번째 열에 있으므로 값은 '4'입니다. 인덱스는 0부터 시작합니다. 주어진 재료의 인덱스를 확인하려면 [ingredients spreadsheet](../../../../4-Classification/data/ingredient_indexes.csv)를 참조하세요.
+
+   index.html 파일에서 작업을 계속하면서, 최종 닫는 `</div>` 뒤에 모델을 호출하는 스크립트 블록을 추가합니다.
+
+1. 먼저 [Onnx Runtime](https://www.onnxruntime.ai/)을 가져옵니다:
+
+ ```html
+    <!-- CDN 경로는 한 예시입니다. onnxruntime-web 패키지를 제공하는 CDN이면 됩니다. -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
+ ```
+
+ > Onnx Runtime은 다양한 하드웨어 플랫폼에서 Onnx 모델을 실행할 수 있도록 최적화와 API를 제공합니다.
+
+1. 런타임이 설정되면 호출할 수 있습니다:
+
+ ```html
+    <script>
+        // 380개 재료 각각에 대해 1(선택) 또는 0(미선택)을 담는 배열
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // 앱 시작 시 호출: 체크박스 변경을 ingredients 배열에 반영
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        // 하나라도 체크되었는지 확인
+        function testCheckboxes() {
+            return checks.some(check => check.checked);
+        }
+
+        // 버튼을 누르면 호출되어 추론을 시작
+        async function startInference() {
+            if (!testCheckboxes()) {
+                alert('Please select at least one ingredient.');
+                return;
+            }
+            try {
+                // 모델을 비동기로 로드해 세션 생성
+                const session = await ort.InferenceSession.create('./model.onnx');
+                // 모델에 보낼 텐서 구조 생성
+                const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+                // 훈련 시 지정한 입력 이름 float_input을 그대로 사용 (Netron으로 확인 가능)
+                const feeds = { float_input: input };
+                // 'feeds'를 모델에 보내고 응답을 기다림
+                const results = await session.run(feeds);
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } catch (e) {
+                console.error('failed to inference ONNX model', e);
+            }
+        }
+
+        init();
+    </script>
+ ```
+
+이 코드에서는 여러 가지 일이 일어납니다:
+
+1. 재료 체크박스가 선택되었는지에 따라 모델에 보낼 380개의 가능한 값(1 또는 0) 배열을 생성했습니다.
+2. 체크박스 배열과, 애플리케이션이 시작될 때 호출되는 `init` 함수에서 체크박스가 선택되었는지 판단하는 방법을 만들었습니다. 체크박스가 선택되면 선택한 재료를 반영하도록 `ingredients` 배열이 변경됩니다.
+3. 체크박스가 하나라도 선택되었는지 확인하는 `testCheckboxes` 함수를 만들었습니다.
+4. 버튼이 눌리면 `startInference` 함수를 사용하고, 체크박스가 하나라도 선택되어 있으면 추론을 시작합니다.
+5. 추론 루틴에는 다음이 포함됩니다:
+   1. 모델의 비동기 로드 설정
+   2. 모델에 보낼 텐서 구조 생성
+   3. 모델 훈련 시 만든 `float_input` 입력을 반영하는 'feeds' 생성 (Netron으로 이름을 확인할 수 있습니다)
+   4. 이 'feeds'를 모델에 보내고 응답 대기
+
+## 애플리케이션 테스트하기
+
+index.html 파일이 있는 폴더에서 Visual Studio Code의 터미널 세션을 엽니다. [http-server](https://www.npmjs.com/package/http-server)가 전역으로 설치되어 있는지 확인한 뒤 프롬프트에 `http-server`를 입력하세요. 로컬호스트가 열리면 웹 앱을 확인할 수 있습니다. 다양한 재료에 따라 어떤 요리가 추천되는지 확인하세요:
+
+
+
+축하합니다! 몇 가지 필드로 구성된 '추천' 웹 앱을 만들었습니다. 이 시스템을 확장하는 데 시간을 투자해 보세요!
+## 🚀도전 과제
+
+웹 앱이 매우 미니멀하므로 [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv) 데이터의 재료와 인덱스를 사용하여 계속 확장해 보세요. 어떤 맛 조합이 특정 국가 요리를 만드는 데 도움이 되는지 알아보세요.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## 복습 및 자습
+
+이 강의에서는 음식 재료를 위한 추천 시스템을 만드는 유용성에 대해 간단히 다루었지만, 이 ML 응용 분야는 예시가 매우 풍부합니다. 이러한 시스템이 어떻게 구축되는지에 대해 더 읽어보세요:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## 과제
+
+[새로운 추천 시스템 만들기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서가 해당 언어로 작성된 경우 이를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역을 사용하여 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/4-Applied/assignment.md b/translations/ko/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..6439222aa
--- /dev/null
+++ b/translations/ko/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# 추천 시스템 만들기
+
+## 지침
+
+이 강의에서 배운 내용을 바탕으로, Onnx Runtime과 변환된 Onnx 모델을 사용하여 JavaScript 기반의 웹 애플리케이션을 만드는 방법을 이제 알고 있습니다. 이 강의의 데이터나 다른 출처의 데이터를 사용하여 새로운 추천 시스템을 만들어 보세요 (출처를 명시해 주세요). 다양한 성격 특성을 고려한 반려동물 추천 시스템이나 사람의 기분에 따른 음악 장르 추천 시스템을 만들 수도 있습니다. 창의력을 발휘해 보세요!
+
+## 평가 기준
+
+| 기준 | 모범적 사례 | 적절한 사례 | 개선이 필요한 사례 |
+| -------- | ------------------------------------------------------------------------- | ------------------------------------- | ---------------------------------- |
+| | 웹 애플리케이션과 노트북이 모두 잘 문서화되고 실행됨 | 둘 중 하나가 없거나 결함이 있음 | 둘 다 없거나 결함이 있음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서가 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/4-Classification/README.md b/translations/ko/4-Classification/README.md
new file mode 100644
index 000000000..a7c26f637
--- /dev/null
+++ b/translations/ko/4-Classification/README.md
@@ -0,0 +1,30 @@
+# 분류 시작하기
+
+## 지역 주제: 맛있는 아시아 및 인도 요리 🍜
+
+아시아와 인도에서는 음식 전통이 매우 다양하고 정말 맛있습니다! 그들의 재료를 이해하기 위해 지역 요리에 대한 데이터를 살펴보겠습니다.
+
+
+> 사진 제공 Lisheng Chang on Unsplash
+
+## 배우게 될 내용
+
+이 섹션에서는 이전에 공부한 회귀 분석을 바탕으로 다른 분류기를 사용하여 데이터를 더 잘 이해하는 방법을 배울 것입니다.
+
+> 분류 모델 작업에 대해 배울 수 있는 유용한 저코드 도구가 있습니다. 이 작업을 위해 [Azure ML](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)을 시도해 보세요.
+
+## 강의
+
+1. [분류 소개](1-Introduction/README.md)
+2. [추가 분류기](2-Classifiers-1/README.md)
+3. [다른 분류기](3-Classifiers-2/README.md)
+4. [적용된 ML: 웹 앱 구축](4-Applied/README.md)
+
+## 저자 정보
+
+"분류 시작하기"는 [Cassie Breviu](https://www.twitter.com/cassiebreviu)와 [Jen Looper](https://www.twitter.com/jenlooper)의 ♥️로 작성되었습니다.
+
+맛있는 요리 데이터셋은 [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines)에서 제공되었습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서의 원어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/1-Visualize/README.md b/translations/ko/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..d620ee968
--- /dev/null
+++ b/translations/ko/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,216 @@
+# 클러스터링 소개
+
+클러스터링은 데이터셋이 라벨이 없거나 입력이 사전 정의된 출력과 일치하지 않는다고 가정하는 [비지도 학습](https://wikipedia.org/wiki/Unsupervised_learning)의 한 유형입니다. 다양한 알고리즘을 사용하여 라벨이 없는 데이터를 정리하고 데이터에서 인식된 패턴에 따라 그룹을 제공합니다.
+
+[](https://youtu.be/ty2advRiWJM "No One Like You by PSquare")
+
+> 🎥 위 이미지를 클릭하면 비디오로 이동합니다. 클러스터링으로 머신 러닝을 공부하는 동안, PSquare가 2014년 발표해 높은 평가를 받은 나이지리아 댄스홀 트랙을 즐겨보세요.
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+### 소개
+
+[클러스터링](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124)은 데이터 탐색에 매우 유용합니다. 나이지리아 관객들이 음악을 소비하는 방식에서 트렌드와 패턴을 발견하는 데 도움이 되는지 알아봅시다.
+
+✅ 클러스터링의 사용에 대해 잠시 생각해보세요. 실제 생활에서 클러스터링은 빨래 더미가 있을 때 가족 구성원의 옷을 정리해야 할 때 발생합니다 🧦👕👖🩲. 데이터 과학에서는 사용자의 선호도를 분석하거나 라벨이 없는 데이터셋의 특성을 결정하려고 할 때 클러스터링이 발생합니다. 클러스터링은 일종의 혼란 속에서 질서를 찾는 데 도움을 줍니다, 마치 양말 서랍처럼요.
+
+[](https://youtu.be/esmzYhuFnds "Introduction to Clustering")
+
+> 🎥 위 이미지를 클릭하면 비디오로 이동합니다: MIT의 John Guttag이 클러스터링을 소개합니다.
+
+전문 환경에서는 클러스터링을 사용하여 시장 세분화, 예를 들어 어떤 연령대가 어떤 품목을 구매하는지 등을 결정할 수 있습니다. 또 다른 용도는 데이터셋에서 사기를 감지하기 위해 클러스터링을 사용할 수 있습니다. 또는 의료 스캔에서 종양을 결정하는 데 클러스터링을 사용할 수 있습니다.
+
+✅ 은행, 전자 상거래 또는 비즈니스 환경에서 '야생'에서 클러스터링을 어떻게 접했는지 잠시 생각해보세요.
+
+> 🎓 흥미롭게도 클러스터 분석은 1930년대 인류학과 심리학 분야에서 시작되었습니다. 어떻게 사용되었을지 상상해보세요.
+
+또는 검색 결과를 쇼핑 링크, 이미지 또는 리뷰 등으로 그룹화하는 데 사용할 수도 있습니다. 클러스터링은 큰 데이터셋을 줄이고 보다 세밀한 분석을 수행하고자 할 때 유용합니다. 따라서 이 기술은 다른 모델을 구축하기 전에 데이터를 학습하는 데 사용할 수 있습니다.
+
+✅ 데이터가 클러스터로 조직되면 클러스터 ID를 할당할 수 있으며, 이 기술은 데이터셋의 프라이버시를 보호할 때 유용합니다. 더 잘 드러나고 식별 가능한 데이터 대신 클러스터 ID로 데이터 포인트를 참조할 수 있기 때문입니다. 클러스터를 식별할 때 클러스터의 다른 요소 대신 클러스터 ID를 사용할 또 다른 이유로는 무엇이 있을까요?
+
+클러스터링 기술에 대한 이해를 심화하려면 이 [학습 모듈](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)을 참조하세요.
+## 클러스터링 시작하기
+
+[Scikit-learn은 다양한](https://scikit-learn.org/stable/modules/clustering.html) 클러스터링 방법을 제공합니다. 선택하는 유형은 사용 사례에 따라 다릅니다. 문서에 따르면 각 방법에는 다양한 이점이 있습니다. Scikit-learn이 지원하는 방법과 적절한 사용 사례의 간단한 표는 다음과 같습니다:
+
+| 방법 이름 | 사용 사례 |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | 일반 목적, 귀납적 |
+| Affinity propagation | 많은, 고르지 않은 클러스터, 귀납적 |
+| Mean-shift | 많은, 고르지 않은 클러스터, 귀납적 |
+| Spectral clustering | 적은, 고른 클러스터, 전이적 |
+| Ward hierarchical clustering | 많은, 제약된 클러스터, 전이적 |
+| Agglomerative clustering | 많은, 제약된, 비유클리드 거리, 전이적 |
+| DBSCAN | 비평면 기하학, 고르지 않은 클러스터, 전이적 |
+| OPTICS | 비평면 기하학, 가변 밀도의 고르지 않은 클러스터, 전이적 |
+| Gaussian mixtures | 평면 기하학, 귀납적 |
+| BIRCH | 이상값이 있는 큰 데이터셋, 귀납적 |
+
+> 🎓 클러스터를 만드는 방법은 데이터 포인트를 그룹으로 모으는 방식과 관련이 많습니다. 일부 용어를 살펴봅시다:
+>
+> 🎓 ['전이적' vs. '귀납적'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> 전이적 추론은 특정 테스트 케이스에 매핑되는 관찰된 훈련 사례에서 파생됩니다. 귀납적 추론은 일반적인 규칙에 매핑되는 훈련 사례에서 파생되며, 그런 다음 테스트 케이스에 적용됩니다.
+>
+> 예를 들어, 라벨이 부분적으로 있는 데이터셋이 있다고 상상해보세요. 일부는 '레코드', 일부는 'CD', 일부는 비어 있습니다. 당신의 작업은 빈 곳에 라벨을 제공하는 것입니다. 귀납적 접근 방식을 선택하면 '레코드'와 'CD'를 찾는 모델을 훈련하고 라벨이 없는 데이터에 그 라벨을 적용합니다. 이 접근 방식은 실제로 '카세트'인 것을 분류하는 데 어려움을 겪을 것입니다. 전이적 접근 방식은 이 알려지지 않은 데이터를 보다 효과적으로 처리하며, 유사한 항목을 함께 그룹화한 다음 그룹에 라벨을 적용합니다. 이 경우 클러스터는 '둥근 음악 물건'과 '사각 음악 물건'을 반영할 수 있습니다.
+>
+> 🎓 ['비평면' vs. '평면' 기하학](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> 수학 용어에서 파생된 비평면 vs. 평면 기하학은 '평면'([유클리드](https://wikipedia.org/wiki/Euclidean_geometry)) 또는 '비평면'(비유클리드) 기하학적 방법으로 점 사이의 거리를 측정하는 것을 말합니다.
+>
+>'평면'은 유클리드 기하학을 의미하고, '비평면'은 비유클리드 기하학을 의미합니다. 기하학이 머신 러닝과 무슨 관련이 있을까요? 두 분야 모두 수학에 뿌리를 두고 있기 때문에 클러스터 내 점 사이의 거리를 측정하는 공통된 방법이 필요하며, 이는 데이터의 성격에 따라 '평면' 또는 '비평면' 방식으로 측정될 수 있습니다. [유클리드 거리](https://wikipedia.org/wiki/Euclidean_distance)는 두 점 사이의 선분 길이로 측정됩니다. [비유클리드 거리](https://wikipedia.org/wiki/Non-Euclidean_geometry)는 곡선을 따라 측정됩니다. 데이터가 시각화되었을 때 평면에 존재하지 않는 것처럼 보이면 이를 처리하기 위해 특수 알고리즘이 필요할 수 있습니다.
+>
+
+> 인포그래픽 by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['거리'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> 클러스터는 점 사이의 거리를 나타내는 거리 행렬로 정의됩니다. 이 거리는 몇 가지 방법으로 측정될 수 있습니다. 유클리드 클러스터는 점 값의 평균인 '센트로이드(centroid)', 즉 중심점으로 정의되며, 거리는 그 중심점까지의 거리로 측정됩니다. 비유클리드 거리는 다른 점들에 가장 가까운 점인 '클러스트로이드(clustroid)'를 기준으로 측정되며, 클러스트로이드는 여러 가지 방법으로 정의될 수 있습니다.
+>
+> 🎓 ['제약된'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [제약된 클러스터링](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf)은 이 비지도 방법에 '반지도 학습'을 도입합니다. 점 사이의 관계는 '연결할 수 없음' 또는 '연결해야 함'으로 표시되어 데이터셋에 일부 규칙이 강제됩니다.
+>
+>예를 들어, 알고리즘이 라벨이 없거나 반라벨이 있는 데이터 배치에서 자유롭게 작동하면 생성된 클러스터의 품질이 낮을 수 있습니다. 위 예에서 클러스터는 '둥근 음악 물건', '사각 음악 물건', '삼각형 물건' 및 '쿠키'를 그룹화할 수 있습니다. 일부 제약 조건이나 규칙("항목은 플라스틱으로 만들어져야 합니다", "항목은 음악을 재생할 수 있어야 합니다")이 주어지면 알고리즘이 더 나은 선택을 하도록 '제약'할 수 있습니다.
+>
+> 🎓 '밀도'
+>
+> '노이즈'가 많은 데이터는 '밀도가 높다'고 간주됩니다. 각 클러스터 내 점 사이의 거리는 더 밀집되거나 '혼잡한' 것으로 나타날 수 있으며, 따라서 이 데이터는 적절한 클러스터링 방법으로 분석해야 합니다. [이 기사](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html)는 불균일한 클러스터 밀도가 있는 노이즈 데이터셋을 탐색하기 위해 K-Means 클러스터링과 HDBSCAN 알고리즘을 사용하는 차이점을 설명합니다.
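+
+귀납적 접근과 전이적 접근의 차이를 코드 감각으로 보고 싶다면, 아래의 대략적인 스케치를 참고하세요(가정: Scikit-learn의 준지도 LabelSpreading을 전이적 접근의 예로, 임의로 만든 1차원 장난감 데이터를 사용):
+
+```python
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from sklearn.semi_supervised import LabelSpreading
+
+X = np.array([[0.0], [0.2], [1.0], [1.2], [0.1], [1.1]])
+y = np.array([0, 0, 1, 1, -1, -1])  # -1은 라벨 없음
+
+# 귀납적: 라벨 있는 데이터로 일반 규칙을 학습한 뒤 새 데이터에 적용
+clf = LogisticRegression().fit(X[y != -1], y[y != -1])
+print(clf.predict(X[y == -1]))
+
+# 전이적: 데이터 전체를 함께 보고 라벨 없는 점에 라벨을 전파
+ls = LabelSpreading().fit(X, y)
+print(ls.transduction_[y == -1])
+```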
+
+## 클러스터링 알고리즘
+
+클러스터링 알고리즘은 100개 이상 있으며, 그 사용은 데이터의 특성에 따라 달라집니다. 주요 알고리즘을 몇 가지 논의해 봅시다:
+
+- **계층적 클러스터링**. 객체가 먼 객체보다 가까운 객체와의 근접성에 따라 분류되면, 클러스터는 다른 객체와의 거리로 인해 형성됩니다. Scikit-learn의 응집형 클러스터링은 계층적입니다.
+
+ 
+ > 인포그래픽 by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **중심점 클러스터링**. 이 인기 있는 알고리즘은 'k' 또는 형성할 클러스터 수를 선택한 후, 알고리즘이 클러스터의 중심점을 결정하고 그 주위에 데이터를 모읍니다. [K-means 클러스터링](https://wikipedia.org/wiki/K-means_clustering)은 중심점 클러스터링의 인기 있는 버전입니다. 중심점은 가장 가까운 평균에 의해 결정되므로 이름이 붙여졌습니다. 클러스터에서의 제곱 거리가 최소화됩니다.
+
+ 
+ > 인포그래픽 by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **분포 기반 클러스터링**. 통계 모델링에 기반한 분포 기반 클러스터링은 데이터 포인트가 클러스터에 속할 확률을 결정하고 이에 따라 할당합니다. 가우시안 혼합 방법이 이 유형에 속합니다.
+
+- **밀도 기반 클러스터링**. 데이터 포인트는 밀도, 즉 서로 주위에 그룹화된 정도에 따라 클러스터에 할당됩니다. 그룹에서 멀리 떨어진 데이터 포인트는 이상치 또는 노이즈로 간주됩니다. DBSCAN, Mean-shift 및 OPTICS는 이 유형의 클러스터링에 속합니다.
+
+- **그리드 기반 클러스터링**. 다차원 데이터셋의 경우, 그리드가 생성되고 데이터는 그리드의 셀 사이에 분할되어 클러스터가 생성됩니다.
+
+## 실습 - 데이터 클러스터링
+
+클러스터링 기술은 적절한 시각화에 크게 도움을 받으므로, 음악 데이터를 시각화하는 것으로 시작합시다. 이 실습은 이 데이터의 특성에 가장 효과적으로 사용할 클러스터링 방법을 결정하는 데 도움이 됩니다.
+
+1. 이 폴더의 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) 파일을 엽니다.
+
+1. 좋은 데이터 시각화를 위해 `Seaborn` 패키지를 설치합니다.
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv)에서 노래 데이터를 추가합니다. 노래에 대한 일부 데이터를 포함하는 데이터프레임을 로드합니다. 라이브러리를 가져오고 데이터를 덤프하여 이 데이터를 탐색할 준비를 합니다:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ 데이터의 첫 몇 줄을 확인합니다:
+
+ | | name | album | artist | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. `info()`를 호출하여 데이터프레임에 대한 정보를 얻습니다:
+
+ ```python
+ df.info()
+ ```
+
+ 출력은 다음과 같습니다:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. `isnull()`을 호출하여 null 값을 이중 확인하고 합계가 0인지 확인합니다:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ 좋아 보입니다:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. 데이터를 설명합니다:
+
+ ```python
+ df.describe()
+ ```
+
+ | | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+ | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+ | mean | 2015.390566 | 222298.1698 | 17.507547 | 0.741619 | 0.265412 | 0.760623 | 0.016305 | 0.147308 | -4.953011 | 0.130748 | 116.487864 | 3.986792 |
+ | std | 3.131688 | 39696.82226 | 18.992212 | 0.117522 | 0.208342 | 0.148533 | 0.090321 | 0.123588 | 2.464186 | 0.092
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## 복습 및 자습
+
+클러스터링 알고리즘을 적용하기 전에, 우리가 배운 것처럼 데이터셋의 특성을 이해하는 것이 좋습니다. 이 주제에 대해 더 읽어보세요 [여기](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
+
+[이 유용한 기사](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/)는 다양한 클러스터링 알고리즘이 다양한 데이터 형태에서 어떻게 동작하는지 안내합니다.
+
+## 과제
+
+[클러스터링을 위한 다른 시각화 연구하기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/1-Visualize/assignment.md b/translations/ko/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..12ae1b988
--- /dev/null
+++ b/translations/ko/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# 클러스터링을 위한 다른 시각화 연구
+
+## 지침
+
+이 강의에서는 데이터를 클러스터링하기 위해 시각화 기법을 사용하는 방법을 배웠습니다. 특히 산점도는 객체 그룹을 찾는 데 유용합니다. 산점도를 만드는 다양한 방법과 라이브러리를 조사하고, 조사한 내용을 노트북에 문서화하세요. 이 강의의 데이터, 다른 강의의 데이터 또는 직접 수집한 데이터를 사용할 수 있습니다 (그러나 노트북에 출처를 명시해 주세요). 산점도를 사용하여 데이터를 플로팅하고 발견한 내용을 설명하세요.
+
+## 평가 기준
+
+| 기준 | 우수한 경우 | 적절한 경우 | 개선이 필요한 경우 |
+| ------- | -------------------------------------------------------------- | -------------------------------------------------------------------------------------- | ----------------------------------- |
+| | 다섯 개의 잘 문서화된 산점도가 포함된 노트북을 제출했습니다. | 다섯 개 미만의 산점도가 포함되고 문서화가 덜 된 노트북을 제출했습니다. | 불완전한 노트북을 제출했습니다. |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 원어를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/ko/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..8876a7631
--- /dev/null
+++ b/translations/ko/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서를 해당 언어로 작성된 상태로 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역을 사용하여 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/2-K-Means/README.md b/translations/ko/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..098f18a2f
--- /dev/null
+++ b/translations/ko/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means 클러스터링
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+이 강의에서는 Scikit-learn과 이전에 가져온 나이지리아 음악 데이터셋을 사용하여 클러스터를 만드는 방법을 배웁니다. 우리는 클러스터링을 위한 K-Means의 기본 사항을 다룰 것입니다. 이전 강의에서 배운 것처럼, 클러스터를 다루는 방법에는 여러 가지가 있으며 사용 방법은 데이터에 따라 다릅니다. 가장 일반적인 클러스터링 기술인 K-Means를 시도해 보겠습니다. 시작해볼까요!
+
+배울 용어들:
+
+- 실루엣 점수
+- 엘보우 방법
+- 관성
+- 분산
+
+## 소개
+
+[K-Means 클러스터링](https://wikipedia.org/wiki/K-means_clustering)은 신호 처리 분야에서 파생된 방법입니다. 일련의 관찰을 사용하여 데이터를 'k' 클러스터로 나누고 분할하는 데 사용됩니다. 각 관찰은 주어진 데이터 포인트를 가장 가까운 '평균' 또는 클러스터의 중심점에 그룹화하는 역할을 합니다.
+
+클러스터는 점(또는 '씨앗')과 해당 영역을 포함하는 [보로노이 다이어그램](https://wikipedia.org/wiki/Voronoi_diagram)으로 시각화할 수 있습니다.
+
+
+
+> 인포그래픽: [Jen Looper](https://twitter.com/jenlooper)
+
+K-Means 클러스터링 과정은 [세 단계로 실행됩니다](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. 알고리즘은 데이터셋에서 샘플링하여 k-개의 중심점을 선택합니다. 이후 반복합니다:
+ 1. 각 샘플을 가장 가까운 중심점에 할당합니다.
+ 2. 이전 중심점에 할당된 모든 샘플의 평균 값을 취하여 새로운 중심점을 만듭니다.
+ 3. 그런 다음 새로운 중심점과 이전 중심점의 차이를 계산하고 중심점이 안정화될 때까지 반복합니다.
+
+K-Means를 사용할 때의 한 가지 단점은 'k', 즉 중심점의 수를 설정해야 한다는 것입니다. 다행히도 '엘보우 방법'은 'k'의 좋은 시작 값을 추정하는 데 도움이 됩니다. 곧 시도해 보겠습니다.
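+
+위 세 단계를 감으로 익히기 위한 최소한의 NumPy 스케치는 다음과 같습니다. 2차원의 임의 예시 데이터를 가정한 것이며, 실제 수업에서는 아래처럼 직접 구현하는 대신 Scikit-learn의 `KMeans`를 사용합니다:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+X_demo = rng.random((60, 2))  # made-up 2D points (not the song data)
+k = 3
+
+# 1. pick k centroids by sampling from the dataset
+centroids = X_demo[rng.choice(len(X_demo), size=k, replace=False)]
+
+for _ in range(20):
+    # 1-1. assign every sample to its nearest centroid
+    distances = np.linalg.norm(X_demo[:, None, :] - centroids[None, :, :], axis=2)
+    labels = distances.argmin(axis=1)
+    # 1-2. new centroid = mean of all samples assigned to the old centroid
+    new_centroids = np.array([X_demo[labels == i].mean(axis=0) for i in range(k)])
+    # 1-3. repeat until the centroids stop moving
+    if np.allclose(new_centroids, centroids):
+        break
+    centroids = new_centroids
+# (real implementations also handle empty clusters, multiple restarts, etc.)
+```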
+
+## 전제 조건
+
+이 강의의 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) 파일에서 작업할 것입니다. 이 파일에는 이전 강의에서 수행한 데이터 가져오기 및 초기 정리가 포함되어 있습니다.
+
+## 연습 - 준비
+
+노래 데이터를 다시 한 번 살펴보세요.
+
+1. 각 열에 대해 `boxplot()`를 호출하여 박스플롯을 생성합니다:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ 이 데이터에는 약간의 노이즈가 있습니다: 각 열을 박스플롯으로 관찰하면 이상치를 확인할 수 있습니다.
+
+ 
+
+데이터셋에서 이러한 이상치를 제거할 수도 있지만, 그렇게 하면 남는 데이터가 매우 적어질 것입니다.
+
+1. 이제 클러스터링 연습에 사용할 열을 선택하세요. 유사한 범위를 가진 열을 선택하고 `artist_top_genre` 열을 숫자 데이터로 인코딩하세요:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. 이제 몇 개의 클러스터를 타겟으로 할지 선택해야 합니다. 데이터셋에서 3개의 노래 장르를 추출했으므로 3개를 시도해 보겠습니다:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+데이터프레임의 각 행에 대해 예측된 클러스터(0, 1 또는 2)가 포함된 배열이 출력됩니다.
+
+1. 이 배열을 사용하여 '실루엣 점수'를 계산합니다:
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## 실루엣 점수
+
+실루엣 점수가 1에 가까운지 확인하세요. 이 점수는 -1에서 1까지 변하며, 점수가 1이면 클러스터가 밀집되고 다른 클러스터와 잘 분리된 것을 의미합니다. 0에 가까운 값은 샘플이 이웃 클러스터의 결정 경계에 매우 가까운 중첩된 클러스터를 나타냅니다. [(출처)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+우리의 점수는 **.53**으로 중간에 위치합니다. 이는 데이터가 이 유형의 클러스터링에 특히 적합하지 않음을 나타내지만 계속 진행해 보겠습니다.
+
+### 연습 - 모델 구축
+
+1. `KMeans`를 가져오고 클러스터링 과정을 시작합니다.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ 여기에는 설명할 부분이 몇 가지 있습니다.
+
+ > 🎓 range: 클러스터링 과정의 반복 횟수
+
+ > 🎓 random_state: "중심점 초기화를 위한 난수 생성 결정." [출처](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "클러스터 내 제곱합"은 클러스터 내의 모든 점이 클러스터 중심점에서 평균적으로 얼마나 떨어져 있는지를 측정합니다. [출처](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 관성: K-Means 알고리즘은 클러스터가 내부적으로 얼마나 일관된지를 측정하는 '관성'을 최소화하기 위해 중심점을 선택하려고 합니다. [출처](https://scikit-learn.org/stable/modules/clustering.html). 값은 각 반복에서 wcss 변수에 추가됩니다.
+
+ > 🎓 k-means++: [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means)에서는 'k-means++' 최적화를 사용할 수 있습니다. 이는 중심점을 서로 멀리 떨어진 곳에 초기화하여 무작위 초기화보다 더 나은 결과를 얻을 수 있습니다.
+
+### 엘보우 방법
+
+이전에 3개의 노래 장르를 타겟으로 했기 때문에 3개의 클러스터를 선택해야 한다고 추측했습니다. 하지만 정말 그럴까요?
+
+1. '엘보우 방법'을 사용하여 확인해 보세요.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ 이전 단계에서 생성한 `wcss` 변수를 사용하여 엘보우의 '굽힘'을 나타내는 차트를 생성합니다. 이는 최적의 클러스터 수를 나타냅니다. 아마도 **3**일 것입니다!
+
+ 
+
+## 연습 - 클러스터 표시
+
+1. 이번에는 3개의 클러스터를 설정하고 클러스터를 산점도로 표시합니다:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. 모델의 정확도를 확인합니다:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ 이 모델의 정확도는 그다지 좋지 않으며, 클러스터의 모양이 그 이유를 암시합니다.
+
+ 
+
+ 이 데이터는 너무 불균형하고, 상관관계가 적으며, 열 값 간의 분산이 너무 커서 잘 클러스터링되지 않습니다. 사실, 형성된 클러스터는 아마도 우리가 위에서 정의한 세 가지 장르 카테고리에 의해 크게 영향을 받거나 왜곡되었을 것입니다. 이것은 학습 과정이었습니다!
+
+ Scikit-learn의 문서에서는 이 모델처럼 클러스터가 잘 구분되지 않는 모델은 '분산' 문제를 가지고 있다고 볼 수 있습니다:
+
+ 
+> 인포그래픽: Scikit-learn
+
+## 분산
+
+분산은 "평균에서 제곱된 차이의 평균"으로 정의됩니다 [(출처)](https://www.mathsisfun.com/data/standard-deviation.html). 이 클러스터링 문제의 맥락에서 이는 데이터셋의 숫자가 평균에서 너무 많이 벗어나는 경향이 있음을 의미합니다.
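+
+정의를 작은 예시 값으로 직접 계산해 보면 다음과 같습니다:
+
+```python
+data = [4, 8, 6, 2]
+mean = sum(data) / len(data)  # 5.0
+# variance = average of the squared differences from the mean
+variance = sum((x - mean) ** 2 for x in data) / len(data)
+print(variance)  # ((4-5)**2 + (8-5)**2 + (6-5)**2 + (2-5)**2) / 4 = 5.0
+```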
+
+✅ 이 문제를 해결할 수 있는 모든 방법을 생각해볼 좋은 순간입니다. 데이터를 조금 더 조정할까요? 다른 열을 사용할까요? 다른 알고리즘을 사용할까요? 힌트: 데이터를 [스케일링](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/)하여 정규화하고 다른 열을 테스트해 보세요.
+
+> 이 '[분산 계산기](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)'를 사용하여 개념을 조금 더 이해해 보세요.
+
+---
+
+## 🚀도전 과제
+
+이 노트북을 사용하여 매개변수를 조정하는 데 시간을 할애하세요. 데이터를 더 정리(예: 이상치 제거)하여 모델의 정확도를 향상시킬 수 있나요? 특정 데이터 샘플에 더 많은 가중치를 부여할 수 있습니다. 더 나은 클러스터를 만들기 위해 무엇을 할 수 있을까요?
+
+힌트: 데이터를 스케일링해 보세요. 노트북에 데이터 열이 범위 면에서 서로 더 비슷하게 보이도록 표준 스케일링을 추가하는 주석 코드가 있습니다. 실루엣 점수가 낮아지더라도 엘보우 그래프의 '굽힘'이 부드러워집니다. 이는 데이터가 스케일링되지 않은 상태에서는 분산이 적은 데이터가 더 많은 가중치를 가지게 되기 때문입니다. 이 문제에 대해 더 읽어보세요 [여기](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
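+
+노트북의 주석 코드와 비슷한 접근을 간단히 나타내면 다음과 같습니다 (위 연습에서 만든 `X`를 가정한 스케치입니다):
+
+```python
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+from sklearn import metrics
+
+# Scale each column to mean 0 / variance 1 so no single column dominates the distances
+X_scaled = StandardScaler().fit_transform(X)
+
+kmeans = KMeans(n_clusters=3, random_state=42)
+labels = kmeans.fit_predict(X_scaled)
+print(metrics.silhouette_score(X_scaled, labels))
+```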
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## 복습 및 자습
+
+[K-Means 시뮬레이터](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/)를 살펴보세요. 이 도구를 사용하여 샘플 데이터 포인트를 시각화하고 중심점을 결정할 수 있습니다. 데이터의 무작위성, 클러스터 수 및 중심점 수를 편집할 수 있습니다. 이 도구가 데이터가 어떻게 그룹화될 수 있는지 이해하는 데 도움이 되나요?
+
+또한 [스탠포드의 K-Means 핸드아웃](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html)을 살펴보세요.
+
+## 과제
+
+[다른 클러스터링 방법 시도](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 본래 언어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/2-K-Means/assignment.md b/translations/ko/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..bc6fab037
--- /dev/null
+++ b/translations/ko/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# 다양한 클러스터링 방법 시도해보기
+
+## 지침
+
+이 강의에서는 K-Means 클러스터링에 대해 배웠습니다. 때때로 K-Means는 데이터에 적합하지 않을 수 있습니다. 이 강의나 다른 출처의 데이터를 사용하여 노트북을 만들어, K-Means를 사용하지 않는 다른 클러스터링 방법을 보여주세요. 무엇을 배웠나요? 출처를 명시하세요.
+## 평가 기준
+
+| 기준 | 우수 | 적절 | 개선 필요 |
+| -------- | ------------------------------------------------------------- | ---------------------------------------------------------------- | -------------------------- |
+| | 잘 문서화된 클러스터링 모델이 있는 노트북이 제시됨 | 문서화가 잘 안 되었거나 불완전한 노트북이 제시됨 | 불완전한 작업이 제출됨 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서는 해당 언어로 작성된 것이 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/ko/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..9dff5f2c8
--- /dev/null
+++ b/translations/ko/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/5-Clustering/README.md b/translations/ko/5-Clustering/README.md
new file mode 100644
index 000000000..c5d04278f
--- /dev/null
+++ b/translations/ko/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# 머신 러닝을 위한 클러스터링 모델
+
+클러스터링은 서로 유사한 객체들을 찾아내어 클러스터라고 불리는 그룹으로 묶는 머신 러닝 작업입니다. 클러스터링이 머신 러닝의 다른 접근 방식과 다른 점은 모든 것이 자동으로 이루어진다는 것입니다. 사실, 이는 지도 학습과는 정반대라고 할 수 있습니다.
+
+## 지역 주제: 나이지리아 청중의 음악 취향을 위한 클러스터링 모델 🎧
+
+나이지리아의 다양한 청중은 다양한 음악 취향을 가지고 있습니다. [이 기사](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)에서 영감을 받아 Spotify에서 수집한 데이터를 사용하여 나이지리아에서 인기 있는 음악을 살펴보겠습니다. 이 데이터셋에는 다양한 노래의 '댄서빌리티' 점수, '어쿠스틱성', 음량, '스피치니스', 인기도 및 에너지에 대한 데이터가 포함되어 있습니다. 이 데이터에서 패턴을 발견하는 것은 흥미로울 것입니다!
+
+
+
+> 사진 출처: Marcela Laskoski on Unsplash
+
+이 일련의 수업에서 클러스터링 기법을 사용하여 데이터를 분석하는 새로운 방법을 발견할 것입니다. 클러스터링은 데이터셋에 레이블이 없는 경우 특히 유용합니다. 레이블이 있는 경우 이전 수업에서 배운 분류 기법이 더 유용할 수 있습니다. 그러나 레이블이 없는 데이터를 그룹화하려는 경우 클러스터링은 패턴을 발견하는 훌륭한 방법입니다.
+
+> 클러스터링 모델 작업에 대해 배우는 데 도움이 되는 유용한 로우 코드 도구가 있습니다. [Azure ML을 사용해 보세요](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## 수업
+
+1. [클러스터링 소개](1-Visualize/README.md)
+2. [K-Means 클러스터링](2-K-Means/README.md)
+
+## 크레딧
+
+이 수업은 [Jen Looper](https://www.twitter.com/jenlooper)가 작성하고, [Rishit Dagli](https://twitter.com/rishit_dagli)와 [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan)의 유익한 리뷰로 완성되었습니다.
+
+[Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) 데이터셋은 Kaggle에서 Spotify에서 수집된 데이터를 기반으로 한 것입니다.
+
+이 수업을 만드는 데 도움이 된 유용한 K-Means 예제에는 이 [iris exploration](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), 이 [introductory notebook](https://www.kaggle.com/prashant111/k-means-clustering-with-python), 그리고 이 [hypothetical NGO example](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering)이 포함됩니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/1-Introduction-to-NLP/README.md b/translations/ko/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..d5f820eac
--- /dev/null
+++ b/translations/ko/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# 자연어 처리 소개
+
+이 강의에서는 *계산 언어학*의 하위 분야인 *자연어 처리*의 간략한 역사와 중요한 개념을 다룹니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## 소개
+
+일반적으로 NLP라고 알려진 자연어 처리는 기계 학습이 적용되고 생산 소프트웨어에서 사용되는 가장 잘 알려진 분야 중 하나입니다.
+
+✅ 매일 사용하는 소프트웨어 중 일부에 NLP가 포함되어 있을 것 같은 소프트웨어를 생각해 보세요. 자주 사용하는 워드 프로세싱 프로그램이나 모바일 앱은 어떤가요?
+
+다음 내용을 배울 것입니다:
+
+- **언어의 개념**. 언어가 어떻게 발전했는지와 주요 연구 분야가 무엇인지.
+- **정의와 개념**. 컴퓨터가 텍스트를 처리하는 방법, 구문 분석, 문법, 명사와 동사를 식별하는 방법 등에 대한 정의와 개념을 배울 것입니다. 이 강의에는 몇 가지 코딩 과제가 있으며, 다음 강의에서 코딩을 배우게 될 중요한 개념들이 소개됩니다.
+
+## 계산 언어학
+
+계산 언어학은 수십 년에 걸쳐 연구되고 개발된 분야로, 컴퓨터가 언어와 함께 작업하고 심지어 이해하고 번역하고 의사소통할 수 있는 방법을 연구합니다. 자연어 처리(NLP)는 컴퓨터가 '자연어', 즉 인간 언어를 처리하는 방법에 중점을 둔 관련 분야입니다.
+
+### 예시 - 전화 받아쓰기
+
+전화 대신 음성을 받아쓰거나 가상 비서에게 질문한 적이 있다면, 당신의 음성은 텍스트 형식으로 변환된 후 당신이 말한 언어로부터 *구문 분석*되었을 것입니다. 감지된 키워드는 전화나 비서가 이해하고 실행할 수 있는 형식으로 처리되었습니다.
+
+
+> 실제 언어 이해는 어렵습니다! 이미지 출처: [Jen Looper](https://twitter.com/jenlooper)
+
+### 이 기술은 어떻게 가능할까요?
+
+이것은 누군가가 이를 수행하는 컴퓨터 프로그램을 작성했기 때문에 가능합니다. 몇 십 년 전, 몇몇 공상 과학 작가들은 사람들이 대부분 컴퓨터와 대화할 것이며 컴퓨터는 항상 그들의 의미를 정확히 이해할 것이라고 예측했습니다. 안타깝게도, 이는 많은 사람들이 상상한 것보다 더 어려운 문제로 밝혀졌으며, 오늘날에는 훨씬 더 잘 이해되는 문제이지만 문장의 의미를 이해하는 '완벽한' 자연어 처리를 달성하는 데에는 여전히 상당한 도전 과제가 있습니다. 특히 유머를 이해하거나 문장에서 풍자를 감지하는 것은 매우 어려운 문제입니다.
+
+이 시점에서 학교 수업에서 문장의 문법 부분을 다루었던 수업을 기억할 수도 있습니다. 일부 국가에서는 학생들이 문법과 언어학을 전용 과목으로 배우지만, 많은 국가에서는 이러한 주제가 언어를 배우는 과정의 일부로 포함됩니다: 초등학교에서 모국어를 배우거나(읽기와 쓰기 배우기) 중등학교나 고등학교에서 제2외국어를 배우는 과정에서. 명사와 동사 또는 부사와 형용사를 구분하는 전문가가 아니더라도 걱정하지 마세요!
+
+*단순 현재*와 *현재 진행형*의 차이점을 이해하는 데 어려움을 겪는다면, 당신은 혼자가 아닙니다. 이는 많은 사람들, 심지어 언어의 원어민에게도 도전적인 문제입니다. 좋은 소식은 컴퓨터가 공식 규칙을 적용하는 데 매우 능숙하다는 것이며, 인간처럼 문장을 *구문 분석*할 수 있는 코드를 작성하는 방법을 배우게 될 것입니다. 나중에 검토할 더 큰 도전 과제는 문장의 *의미*와 *감정*을 이해하는 것입니다.
+
+## 전제 조건
+
+이 강의를 위해 주된 전제 조건은 이 강의의 언어를 읽고 이해할 수 있는 것입니다. 수학 문제나 방정식을 풀 필요는 없습니다. 원래 저자가 이 강의를 영어로 작성했지만, 다른 언어로 번역되었을 수도 있으므로 번역본을 읽고 있을 수도 있습니다. 여러 언어의 문법 규칙을 비교하기 위해 여러 언어가 사용된 예가 있습니다. 이러한 예는 번역되지 않지만, 설명 텍스트는 번역되므로 의미가 명확해야 합니다.
+
+코딩 과제에서는 Python을 사용하며 예제는 Python 3.8을 사용합니다.
+
+이 섹션에서는 다음이 필요하고 사용됩니다:
+
+- **Python 3 이해**. Python 3의 프로그래밍 언어 이해, 이 강의에서는 입력, 루프, 파일 읽기, 배열을 사용합니다.
+- **Visual Studio Code + 확장**. Visual Studio Code와 Python 확장을 사용할 것입니다. 원하는 Python IDE를 사용할 수도 있습니다.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob)은 Python용 간단한 텍스트 처리 라이브러리입니다. TextBlob 사이트의 지침에 따라 시스템에 설치하세요(아래와 같이 말뭉치도 설치하세요):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 팁: VS Code 환경에서 Python을 직접 실행할 수 있습니다. 자세한 내용은 [문서](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott)를 확인하세요.
+
+## 기계와 대화하기
+
+컴퓨터가 인간의 언어를 이해하려는 역사는 수십 년 전으로 거슬러 올라가며, 자연어 처리를 처음으로 고려한 과학자 중 한 명은 *앨런 튜링*이었습니다.
+
+### '튜링 테스트'
+
+1950년대에 *인공지능*을 연구하던 튜링은, 인간과 컴퓨터에게 (타이핑을 통한) 대화 테스트를 제공하여, 대화에 참여한 인간이 자신이 다른 인간과 대화하고 있는지 컴퓨터와 대화하고 있는지 확신할 수 없게 만들 수 있을지를 고려했습니다.
+
+일정 길이의 대화 후에 인간이 컴퓨터의 답변인지 아닌지를 판단할 수 없다면, 컴퓨터가 *생각*한다고 할 수 있을까요?
+
+### 영감 - '모방 게임'
+
+이 아이디어는 *모방 게임*이라는 파티 게임에서 영감을 받았습니다. 이 게임에서 조사관은 방에 혼자 있고 다른 방에 있는 두 사람 중 각각이 남자인지 여자인지를 판단해야 합니다. 조사관은 메모를 보낼 수 있으며, 작성된 답변이 신비한 사람의 성별을 드러내는 질문을 생각해내야 합니다. 물론, 다른 방에 있는 플레이어들은 조사관을 혼란스럽게 하거나 오도하는 방식으로 질문에 답변하려고 하면서도 정직하게 답변하는 모습을 보이려고 합니다.
+
+### 엘리자 개발
+
+1960년대에 MIT 과학자인 *조셉 바이젠바움*은 [*엘리자*](https://wikipedia.org/wiki/ELIZA)라는 컴퓨터 '치료사'를 개발하여 인간에게 질문을 하고 그들의 답변을 이해하는 듯한 모습을 보였습니다. 그러나 엘리자는 문장을 구문 분석하고 특정 문법 구조와 키워드를 식별하여 합리적인 답변을 제공할 수 있었지만, 문장을 *이해*한다고 할 수는 없었습니다. 엘리자가 "**나는** 슬프다" 형식의 문장을 받으면 "얼마나 오랫동안 **당신은** 슬펐나요"라는 답변을 만들기 위해 문장을 재구성하고 단어를 대체할 수 있었습니다.
+
+이는 엘리자가 진술을 이해하고 후속 질문을 하고 있는 듯한 인상을 주었지만, 실제로는 시제를 변경하고 몇 가지 단어를 추가한 것이었습니다. 엘리자가 답변할 키워드를 식별할 수 없으면, 대신 여러 다른 진술에 적용될 수 있는 임의의 답변을 제공했습니다. 예를 들어 사용자가 "**당신은** 자전거입니다"라고 작성하면, 엘리자는 "얼마나 오랫동안 **나는** 자전거였나요?"라고 답할 수 있었는데, 이는 전혀 합리적인 답변이 아닙니다.
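+
+본문에서 설명한 패턴 치환 아이디어를 보여주는 아주 작은 엘리자 스타일 규칙의 스케치입니다 (실제 엘리자의 구현이 아닙니다):
+
+```python
+import re
+
+def eliza_reply(text: str) -> str:
+    # "I am X" -> "How long have you been X?" (tense shifted, pronoun swapped)
+    match = re.match(r"i am (.*)", text.strip().lower())
+    if match:
+        return f"How long have you been {match.group(1)}?"
+    # no keyword recognized: fall back to a generic response
+    return "Please go on."
+
+print(eliza_reply("I am sad"))  # How long have you been sad?
+```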
+
+[](https://youtu.be/RMK9AphfLco "엘리자와 대화하기")
+
+> 🎥 위 이미지를 클릭하면 원래 엘리자 프로그램에 대한 비디오를 볼 수 있습니다
+
+> 참고: 1966년에 발표된 [엘리자](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract)의 원본 설명을 ACM 계정이 있으면 읽을 수 있습니다. 또는 [위키백과](https://wikipedia.org/wiki/ELIZA)에서 엘리자에 대해 읽어보세요.
+
+## 실습 - 기본 대화형 봇 코딩
+
+엘리자와 같은 대화형 봇은 사용자 입력을 유도하고 이해하고 지능적으로 응답하는 것처럼 보이는 프로그램입니다. 엘리자와 달리, 우리의 봇은 지능적인 대화를 하는 것처럼 보이기 위해 여러 규칙을 가지지 않을 것입니다. 대신, 우리의 봇은 거의 모든 사소한 대화에서 작동할 수 있는 임의의 응답을 제공하여 대화를 계속할 수 있는 능력만을 가질 것입니다.
+
+### 계획
+
+대화형 봇을 만들 때의 단계:
+
+1. 사용자가 봇과 상호작용하는 방법을 알려주는 지침을 출력합니다.
+2. 루프를 시작합니다.
+ 1. 사용자 입력을 받습니다.
+ 2. 사용자가 종료를 요청했으면 종료합니다.
+ 3. 사용자 입력을 처리하고 응답을 결정합니다(이 경우 응답은 가능한 일반적인 응답 목록에서 임의로 선택됩니다).
+ 4. 응답을 출력합니다.
+3. 2단계로 다시 돌아갑니다.
+
+### 봇 만들기
+
+다음으로 봇을 만들어 보겠습니다. 몇 가지 문구를 정의하는 것부터 시작하겠습니다.
+
+1. 다음과 같은 임의의 응답을 사용하는 Python으로 직접 봇을 만드세요:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ 아래는 샘플 출력입니다(사용자 입력은 `>`로 시작하는 줄에 있습니다):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ 작업에 대한 가능한 해결책은 [여기](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)에서 확인할 수 있습니다.
+
+ ✅ 멈추고 고려해보세요
+
+ 1. 임의의 응답이 봇이 실제로 사용자를 이해한다고 생각하게 만들 수 있을까요?
+ 2. 봇이 더 효과적이기 위해 어떤 기능이 필요할까요?
+ 3. 봇이 문장의 의미를 실제로 이해할 수 있다면, 대화 중 이전 문장의 의미를 '기억'해야 할까요?
+
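+위 계획을 그대로 구현해 본 하나의 가능한 스케치입니다. `random_responses` 목록과 샘플 출력의 문구를 그대로 사용한다고 가정하며, 제공된 해결책과 세부 구현이 다를 수 있습니다:
+
+```python
+import random
+
+random_responses = ["That is quite interesting, please tell me more.",
+                    "I see. Do go on.",
+                    "Why do you say that?",
+                    "Funny weather we've been having, isn't it?",
+                    "Let's change the subject.",
+                    "Did you catch the game last night?"]
+
+# 1. print instructions telling the user how to interact with the bot
+print("Hello, I am Marvin, the simple robot.")
+print("You can end this conversation at any time by typing 'bye'")
+print("After typing each answer, press 'enter'")
+print("How are you today?")
+
+# 2. loop: accept input, exit on 'bye', otherwise answer at random
+while True:
+    user_input = input("> ")
+    if user_input.lower() == "bye":
+        break
+    print(random.choice(random_responses))
+
+print("It was nice talking to you, goodbye!")
+```
+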
+---
+
+## 🚀도전
+
+위의 "멈추고 고려해보세요" 요소 중 하나를 선택하고 이를 코드로 구현하거나 의사 코드로 종이에 해결책을 작성해보세요.
+
+다음 강의에서는 자연어를 구문 분석하고 기계 학습을 사용하는 여러 가지 접근 방식을 배울 것입니다.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## 복습 및 자습
+
+아래 참고 자료를 살펴보세요.
+
+### 참고 자료
+
+1. Schubert, Lenhart, "Computational Linguistics", *The Stanford Encyclopedia of Philosophy* (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## 과제
+
+[봇 검색](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서가 해당 언어로 작성된 경우, 이를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역을 사용하여 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/ko/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..4ab65f060
--- /dev/null
+++ b/translations/ko/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# 봇 찾기
+
+## 지침
+
+봇은 어디에나 있습니다. 여러분의 과제는 하나를 찾아 입양하는 것입니다! 웹사이트, 은행 애플리케이션, 그리고 금융 서비스 회사에 조언이나 계좌 정보를 요청할 때 전화로도 찾을 수 있습니다. 봇을 분석하고 혼란스럽게 만들 수 있는지 확인해 보세요. 만약 봇을 혼란스럽게 만들 수 있다면, 왜 그런 일이 발생했는지 생각해 보세요. 여러분의 경험에 대해 짧은 글을 작성하세요.
+
+## 평가 기준
+
+| 기준 | 모범적 사례 | 적절한 사례 | 개선 필요 |
+| ------- | -------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | ----------------------- |
+| | 전체 페이지 분량의 글이 작성되었으며, 추정되는 봇의 아키텍처와 경험이 설명되어 있음 | 글이 불완전하거나 충분히 연구되지 않음 | 글이 제출되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/2-Tasks/README.md b/translations/ko/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..adfe24ca2
--- /dev/null
+++ b/translations/ko/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# 일반적인 자연어 처리 작업 및 기법
+
+대부분의 *자연어 처리* 작업에서는 처리할 텍스트를 분해하고, 검토하며, 그 결과를 규칙 및 데이터 세트와 교차 참조하여 저장해야 합니다. 이러한 작업을 통해 프로그래머는 텍스트에서 _의미_나 _의도_, 또는 단순히 _용어와 단어의 빈도_를 도출할 수 있습니다.
+
+## [사전 강의 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+텍스트를 처리하는 데 사용되는 일반적인 기법을 알아봅시다. 이러한 기법을 기계 학습과 결합하면 대량의 텍스트를 효율적으로 분석할 수 있습니다. 그러나 이러한 작업에 ML을 적용하기 전에, NLP 전문가가 직면하는 문제를 이해해 봅시다.
+
+## NLP에서 공통적으로 수행하는 작업
+
+텍스트를 분석하는 방법에는 여러 가지가 있습니다. 수행할 수 있는 작업이 있으며, 이러한 작업을 통해 텍스트를 이해하고 결론을 도출할 수 있습니다. 보통 이러한 작업은 순서대로 수행됩니다.
+
+### 토큰화
+
+아마도 대부분의 NLP 알고리즘이 처음 해야 할 일은 텍스트를 토큰 또는 단어로 분할하는 것입니다. 이는 간단해 보이지만, 구두점과 다른 언어의 단어 및 문장 구분자를 고려해야 하기 때문에 까다로울 수 있습니다. 여러 가지 방법을 사용하여 경계를 결정해야 할 수도 있습니다.
+
+
+> **Pride and Prejudice**의 문장을 토큰화. 인포그래픽: [Jen Looper](https://twitter.com/jenlooper)
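+
+예를 들어 TextBlob으로 간단히 토큰화해 볼 수 있습니다 (하나의 방법일 뿐이며, 라이브러리마다 단어와 문장의 경계 규칙이 다릅니다):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.")
+print(blob.words)      # word tokens, punctuation stripped
+print(blob.sentences)  # sentence tokens
+```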
+
+### 임베딩
+
+[단어 임베딩](https://wikipedia.org/wiki/Word_embedding)은 텍스트 데이터를 수치적으로 변환하는 방법입니다. 임베딩은 유사한 의미를 가진 단어나 함께 사용되는 단어가 함께 클러스터링되도록 합니다.
+
+
+> "I have the highest respect for your nerves, they are my old friends." - **Pride and Prejudice**의 문장에 대한 단어 임베딩. 인포그래픽: [Jen Looper](https://twitter.com/jenlooper)
+
+✅ [이 흥미로운 도구](https://projector.tensorflow.org/)를 사용하여 단어 임베딩을 실험해보세요. 단어를 클릭하면 유사한 단어의 클러스터가 나타납니다: 'toy'는 'disney', 'lego', 'playstation', 'console'과 클러스터를 형성합니다.
+
+### 구문 분석 및 품사 태깅
+
+토큰화된 각 단어는 명사, 동사 또는 형용사 등 품사로 태깅될 수 있습니다. 문장 `the quick red fox jumped over the lazy brown dog`는 fox = 명사, jumped = 동사로 품사 태깅될 수 있습니다.
+
+
+
+> **Pride and Prejudice**의 문장을 구문 분석. 인포그래픽: [Jen Looper](https://twitter.com/jenlooper)
+
+구문 분석은 문장에서 어떤 단어들이 서로 관련되어 있는지 인식하는 것입니다 - 예를 들어 `the quick red fox jumped`는 형용사-명사-동사 시퀀스로 `lazy brown dog` 시퀀스와는 별개입니다.
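+
+본문의 예문을 TextBlob으로 품사 태깅해 보면 다음과 같습니다:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("The quick red fox jumped over the lazy brown dog")
+print(blob.tags)  # (word, part-of-speech) pairs, e.g. ('fox', 'NN'), ('jumped', 'VBD')
+```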
+
+### 단어 및 구문 빈도
+
+대량의 텍스트를 분석할 때 유용한 절차는 관심 있는 모든 단어 또는 구문의 사전을 작성하고 얼마나 자주 나타나는지를 기록하는 것입니다. 구문 `the quick red fox jumped over the lazy brown dog`는 the의 단어 빈도가 2입니다.
+
+단어의 빈도를 세는 예제 텍스트를 살펴봅시다. Rudyard Kipling의 시 The Winners에는 다음과 같은 구절이 있습니다:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+구문 빈도는 대소문자를 구분할 수도, 구분하지 않을 수도 있습니다. 대소문자를 구분하지 않으면 구문 `a friend`의 빈도는 2, `the`의 빈도는 6, `travels`의 빈도는 2입니다.
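+
+다음은 위 구절에서 대소문자를 구분하지 않고 단어 빈도를 세는 간단한 스케치입니다:
+
+```python
+from collections import Counter
+import re
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# lower-case everything, then count word tokens
+words = re.findall(r"[a-z']+", verse.lower())
+freq = Counter(words)
+print(freq["the"], freq["friend"], freq["travels"])  # 6 2 2
+```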
+
+### N-그램
+
+텍스트는 일정 길이의 단어 시퀀스로 분할될 수 있습니다. 단일 단어 (유니그램), 두 단어 (바이그램), 세 단어 (트라이그램) 또는 임의의 수의 단어 (N-그램)로 분할할 수 있습니다.
+
+예를 들어 `the quick red fox jumped over the lazy brown dog`를 길이 2의 n-그램(바이그램)으로 나누면 다음과 같은 n-그램이 생성됩니다:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+이것을 문장 위에서 움직이는 슬라이딩 박스로 시각화하면 더 쉽게 이해할 수 있습니다. 다음은 3단어 n-그램(트라이그램)의 경우로, 각 문장에서 해당 n-그램이 굵게 표시되어 있습니다:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> N-그램 값 3: 인포그래픽: [Jen Looper](https://twitter.com/jenlooper)
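+
+TextBlob의 `ngrams()`를 사용하면 같은 n-그램을 얻을 수 있습니다:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+for gram in blob.ngrams(n=2):  # use n=3 for the trigrams shown above
+    print(" ".join(gram))
+# the quick / quick red / red fox / ... / brown dog
+```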
+
+### 명사구 추출
+
+대부분의 문장에는 주어나 목적어 역할을 하는 명사가 있습니다. 영어에서는 종종 'a', 'an' 또는 'the'가 앞에 붙어 있는 것으로 식별할 수 있습니다. 문장의 의미를 이해하려고 할 때 명사구를 추출하여 주어나 목적어를 식별하는 것은 NLP에서 일반적인 작업입니다.
+
+✅ "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun." 문장에서 명사구를 식별할 수 있습니까?
+
+문장 `the quick red fox jumped over the lazy brown dog`에는 2개의 명사구가 있습니다: **quick red fox**와 **lazy brown dog**.
+
+### 감정 분석
+
+문장이나 텍스트는 얼마나 *긍정적*인지 또는 *부정적*인지 감정 분석을 할 수 있습니다. 감정은 *극성*과 *객관성/주관성*으로 측정됩니다. 극성은 -1.0에서 1.0 (부정적에서 긍정적)까지 측정되고, 객관성/주관성은 0.0에서 1.0 (가장 객관적인 것에서 가장 주관적인 것)까지 측정됩니다.
+
+✅ 나중에 기계 학습을 사용하여 감정을 결정하는 다양한 방법을 배우겠지만, 한 가지 방법은 인간 전문가가 긍정적 또는 부정적으로 분류한 단어와 구문의 목록을 가지고 텍스트에 그 모델을 적용하여 극성 점수를 계산하는 것입니다. 이러한 방식이 어떤 상황에서는 잘 작동하고 다른 상황에서는 덜 작동하는 이유를 알 수 있습니까?
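+
+TextBlob에서는 다음과 같이 한 문장의 극성과 주관성 점수를 얻을 수 있습니다:
+
+```python
+from textblob import TextBlob
+
+sentiment = TextBlob("I love this wonderful, friendly restaurant!").sentiment
+print(sentiment.polarity)      # polarity in [-1.0, 1.0]
+print(sentiment.subjectivity)  # subjectivity in [0.0, 1.0]
+```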
+
+### 굴절
+
+굴절은 단어를 취해 단어의 단수형 또는 복수형을 얻을 수 있게 합니다.
+
+### 표제어 추출
+
+*표제어*는 한 세트의 단어에 대한 기본 단어나 중심 단어입니다. 예를 들어 *flew*, *flies*, *flying*은 동사 *fly*의 표제어입니다.
+
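+TextBlob의 `Word`를 사용하면 굴절과 표제어 추출을 모두 시험해 볼 수 있습니다:
+
+```python
+from textblob import Word
+
+print(Word("dog").pluralize())      # dogs (inflection: singular -> plural)
+print(Word("cats").singularize())   # cat  (inflection: plural -> singular)
+print(Word("flies").lemmatize())    # fly  (lemma, noun by default)
+print(Word("flew").lemmatize("v"))  # fly  (lemma, verb)
+```
+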
+NLP 연구자에게 유용한 데이터베이스도 있습니다. 특히:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/)은 여러 언어의 모든 단어에 대한 동의어, 반의어 및 기타 많은 세부 정보를 포함하는 데이터베이스입니다. 번역기, 맞춤법 검사기 또는 언어 도구를 만들 때 매우 유용합니다.
+
+## NLP 라이브러리
+
+다행히도 이러한 기법을 직접 구축할 필요는 없습니다. 자연어 처리나 기계 학습에 전문적이지 않은 개발자도 접근할 수 있도록 해주는 훌륭한 Python 라이브러리가 많이 있습니다. 다음 수업에서는 이러한 라이브러리의 더 많은 예를 포함하고 있지만, 여기서는 다음 작업에 도움이 되는 몇 가지 유용한 예를 배웁니다.
+
+### 연습 - `TextBlob` 라이브러리 사용하기
+
+이러한 유형의 작업을 처리하는 데 유용한 API가 포함된 TextBlob 라이브러리를 사용해 봅시다. TextBlob은 "[NLTK](https://nltk.org)와 [pattern](https://github.com/clips/pattern)이라는 거인의 어깨 위에 서 있으며, 둘 모두와 잘 어울립니다." TextBlob의 API에는 상당한 양의 ML이 내장되어 있습니다.
+
+> 참고: 숙련된 Python 개발자에게 권장되는 유용한 TextBlob [Quick Start](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) 가이드가 있습니다.
+
+*명사구*를 식별하려고 할 때, TextBlob은 명사구를 찾기 위한 여러 추출기 옵션을 제공합니다.
+
+1. `ConllExtractor`를 살펴보세요.
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > 여기서 무슨 일이 벌어지고 있나요? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor)는 "ConLL-2000 학습 코퍼스로 훈련된 청크 파싱을 사용하는 명사구 추출기"입니다. ConLL-2000은 2000년 계산 언어 학습에 관한 회의를 의미합니다. 매년 이 회의에서는 까다로운 NLP 문제를 해결하기 위한 워크숍을 개최했으며, 2000년에는 명사 청킹이 주제였습니다. 모델은 Wall Street Journal을 기반으로 훈련되었으며, "섹션 15-18을 학습 데이터(211727 토큰)로 사용하고 섹션 20을 테스트 데이터(47377 토큰)로 사용"했습니다. 사용된 절차는 [여기](https://www.clips.uantwerpen.be/conll2000/chunking/)에서 확인할 수 있으며, [결과](https://ifarm.nl/erikt/research/np-chunking.html)는 여기에서 확인할 수 있습니다.
+
+### 도전 과제 - NLP로 봇 개선하기
+
+이전 수업에서 매우 간단한 Q&A 봇을 만들었습니다. 이제 입력된 텍스트의 감정을 분석하고 그에 맞는 응답을 출력하여 Marvin을 조금 더 공감할 수 있게 만드세요. 또한 `noun_phrase`를 식별하고 이에 대해 질문하세요.
+
+더 나은 대화형 봇을 구축할 때의 단계:
+
+1. 사용자가 봇과 상호작용하는 방법에 대한 지침 출력
+2. 루프 시작
+ 1. 사용자 입력 수락
+ 2. 사용자가 종료를 요청했으면 종료
+ 3. 사용자 입력을 처리하고 적절한 감정 응답 결정
+ 4. 감정에서 명사구가 감지되면 복수형으로 만들고 해당 주제에 대해 더 많은 입력 요청
+ 5. 응답 출력
+3. 단계 2로 돌아가기
+
+TextBlob을 사용하여 감정을 결정하는 코드 스니펫은 다음과 같습니다. 감정 응답의 *그라데이션*은 네 가지뿐입니다 (원하는 경우 더 추가할 수 있습니다):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
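+
+4단계에서 말한 명사구 복수형 만들기는, 예를 들어 명사구의 마지막 단어에 `Word.pluralize()`를 적용하는 방식으로 처리할 수 있습니다 (하나의 스케치일 뿐이며, 제공된 해결책의 방식과 다를 수 있습니다):
+
+```python
+from textblob import TextBlob, Word
+from textblob.np_extractors import ConllExtractor
+
+user_input_blob = TextBlob("I saw a lovely cat", np_extractor=ConllExtractor())
+if user_input_blob.noun_phrases:
+    # pluralize the last word of the first noun phrase: "lovely cat" -> "lovely cats"
+    np_words = user_input_blob.noun_phrases[0].split()
+    np_words[-1] = Word(np_words[-1]).pluralize()
+    print("Can you tell me more about " + " ".join(np_words) + "?")
+```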
+
+다음은 샘플 출력을 안내하는 예제입니다 (사용자 입력은 >로 시작하는 줄에 있습니다):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+이 작업에 대한 가능한 솔루션은 [여기](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)에 있습니다.
+
+✅ 지식 점검
+
+1. 공감하는 응답이 실제로 봇이 사용자를 이해하는 것처럼 '속일' 수 있다고 생각하십니까?
+2. 명사구를 식별하는 것이 봇을 더 '믿을 수 있게' 만듭니까?
+3. 문장에서 '명사구'를 추출하는 것이 유용한 이유는 무엇입니까?
+
+---
+
+이전 지식 점검에서 봇을 구현하고 친구에게 테스트해보세요. 친구를 속일 수 있나요? 봇을 더 '믿을 수 있게' 만들 수 있나요?
+
+## 🚀도전 과제
+
+이전 지식 점검에서 작업을 선택하고 이를 구현해보세요. 친구에게 봇을 테스트해보세요. 친구를 속일 수 있나요? 봇을 더 '믿을 수 있게' 만들 수 있나요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## 복습 및 자습
+
+다음 몇 강의에서는 감정 분석에 대해 더 배울 것입니다. [KDNuggets](https://www.kdnuggets.com/tag/nlp)의 기사와 같은 자료를 통해 이 흥미로운 기법을 연구해보세요.
+
+## 과제
+
+[봇이 대화하게 만들기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서가 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/2-Tasks/assignment.md b/translations/ko/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..3a795096d
--- /dev/null
+++ b/translations/ko/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# 봇에게 응답하게 만들기
+
+## 지침
+
+지난 몇 번의 수업에서 채팅할 수 있는 기본 봇을 프로그래밍했습니다. 이 봇은 당신이 'bye'라고 말할 때까지 무작위로 대답합니다. 이제 대답을 덜 무작위적으로 만들고, 'why'나 'how' 같은 특정 단어에는 정해진 방식으로 대답하도록 확장해 보세요. 봇을 확장하면서, 기계 학습이 이러한 작업을 어떻게 덜 수동적으로 만들 수 있을지 생각해 보세요. NLTK나 TextBlob 라이브러리를 사용하면 작업이 더 쉬워집니다.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | ------------------------------------------ | ----------------------------------------------- | ----------------------- |
+| | 새로운 bot.py 파일이 제시되고 문서화됨 | 새로운 봇 파일이 제시되었지만 버그가 있음 | 파일이 제시되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/3-Translation-Sentiment/README.md b/translations/ko/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..180c33d33
--- /dev/null
+++ b/translations/ko/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,182 @@
+# 번역 및 감정 분석을 위한 ML
+
+이전 수업에서 `TextBlob`을 사용하여 기본 봇을 만드는 방법을 배웠습니다. 이 라이브러리는 기본 NLP 작업(명사 구 추출 등)을 수행하기 위해 ML을 백그라운드에서 활용합니다. 컴퓨터 언어학에서 또 다른 중요한 과제는 한 언어로 말하거나 쓰인 문장을 다른 언어로 정확하게 _번역_하는 것입니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+번역은 수천 개의 언어가 존재하고 각 언어마다 매우 다른 문법 규칙이 있을 수 있기 때문에 매우 어려운 문제입니다. 한 가지 접근 방식은 영어와 같은 한 언어의 공식 문법 규칙을 비언어 종속 구조로 변환한 다음 이를 다시 다른 언어로 변환하는 것입니다. 이 접근 방식에서는 다음 단계를 따릅니다:
+
+1. **식별**. 입력 언어의 단어를 명사, 동사 등으로 식별하거나 태그를 지정합니다.
+2. **번역 생성**. 각 단어를 목표 언어 형식으로 직접 번역합니다.
+
+### 예문, 영어에서 아일랜드어로
+
+영어로 된 문장 _I feel happy_는 다음 순서로 세 단어입니다:
+
+- **주어** (I)
+- **동사** (feel)
+- **형용사** (happy)
+
+하지만 아일랜드어에서는 같은 문장이 매우 다른 문법 구조를 가지고 있습니다 - "*happy*" 또는 "*sad*"와 같은 감정은 *당신 위에* 있는 것으로 표현됩니다.
+
+영어 문장 `I feel happy`를 아일랜드어로 번역하면 `Tá athas orm`이 됩니다. 문자 그대로 번역하면 `Happy is upon me`가 됩니다.
+
+아일랜드어 화자가 영어로 번역할 때 `I feel happy`라고 말할 것입니다. 이는 문장의 의미를 이해하고 있기 때문이지, 단어와 문장 구조가 다르기 때문은 아닙니다.
+
+아일랜드어 문장의 공식 순서는 다음과 같습니다:
+
+- **동사** (Tá 또는 is)
+- **형용사** (athas, 또는 happy)
+- **주어** (orm, 또는 upon me)
+
+## 번역
+
+단순한 번역 프로그램은 문장 구조를 무시하고 단어만 번역할 수 있습니다.
+
+✅ 성인이 되어 제2 외국어(또는 제3 외국어 이상)를 배운 적이 있다면, 모국어로 생각하고 개념을 단어 단위로 머릿속에서 번역한 다음 번역을 말하기 시작했을 것입니다. 이는 단순한 번역 컴퓨터 프로그램이 하는 것과 유사합니다. 유창해지기 위해서는 이 단계를 넘어서는 것이 중요합니다!
+
+단순한 번역은 나쁜(때로는 웃긴) 오역을 초래할 수 있습니다: `I feel happy`를 아일랜드어로 문자 그대로 번역하면 `Mise bhraitheann athas`가 됩니다. 이는 문자 그대로 `me feel happy`를 의미하며 유효한 아일랜드어 문장이 아닙니다. 영어와 아일랜드어는 인접한 두 섬에서 사용되는 언어임에도 불구하고, 매우 다른 문법 구조를 가진 서로 다른 언어입니다.
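+
+단어 단위 번역이 왜 실패하는지 아주 작은 사전으로 흉내 내 보면 다음과 같습니다 (사전 항목은 본문 예시에서 가져온 것이고, 나머지는 가정입니다):
+
+```python
+# A tiny word-for-word "translator" - real dictionaries and grammar are not handled
+en_to_ga = {"i": "mise", "feel": "bhraitheann", "happy": "athas"}
+
+sentence = "I feel happy"
+literal = " ".join(en_to_ga.get(word.lower(), word) for word in sentence.split())
+print(literal)  # mise bhraitheann athas - not a valid Irish sentence (wrong word order)
+```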
+
+> 아일랜드어 언어 전통에 관한 비디오를 [여기](https://www.youtube.com/watch?v=mRIaLSdRMMs)에서 시청할 수 있습니다.
+
+### 머신러닝 접근 방식
+
+지금까지는 자연어 처리를 위한 공식 규칙 접근 방식에 대해 배웠습니다. 또 다른 접근 방식은 단어의 의미를 무시하고 _대신 머신러닝을 사용하여 패턴을 감지하는 것_입니다. 이는 원본 언어와 목표 언어 모두에서 많은 텍스트(코퍼스) 또는 텍스트들(코퍼라)이 있는 경우 번역에 유용할 수 있습니다.
+
+예를 들어, 1813년에 제인 오스틴이 쓴 잘 알려진 영어 소설 *Pride and Prejudice*를 생각해 보세요. 이 책을 영어와 *프랑스어*로 된 인간 번역본에서 상담하면, 한 언어에서 다른 언어로 *관용적으로* 번역된 구문을 감지할 수 있습니다. 잠시 후에 그렇게 할 것입니다.
+
+예를 들어, 영어 문구 `I have no money`가 프랑스어로 문자 그대로 번역되면 `Je n'ai pas de monnaie`가 될 수 있습니다. "Monnaie"는 까다로운 프랑스어 '거짓 동의어'로, 'money'와 'monnaie'는 동의어가 아닙니다. 인간이 더 나은 번역을 하면 `Je n'ai pas d'argent`가 될 수 있습니다. 이는 '잔돈'이라는 'monnaie'의 의미보다는 돈이 없다는 의미를 더 잘 전달하기 때문입니다.
+
+
+
+> 이미지 제공: [Jen Looper](https://twitter.com/jenlooper)
+
+충분한 인간 번역본이 있는 ML 모델이 있다면, 두 언어 모두의 전문가 인간 번역자가 이전에 번역한 텍스트에서 공통 패턴을 식별하여 번역의 정확성을 향상시킬 수 있습니다.
+
+### 연습 - 번역
+
+`TextBlob`을 사용하여 문장을 번역할 수 있습니다. **Pride and Prejudice**의 유명한 첫 문장을 시도해 보세요:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob`은 "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!"라는 번역을 꽤 잘 해냅니다.
+
+TextBlob의 번역이 사실 1932년 V. Leconte와 Ch. Pressoir의 프랑스어 번역본보다 훨씬 정확하다고 주장할 수 있습니다:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+이 경우, ML이 반영된 번역이 원저자가 하지 않은 말을 불필요하게 덧붙이는 인간 번역자보다 더 나은 작업을 수행합니다.
+
+> 여기서 무슨 일이 일어나고 있나요? 그리고 TextBlob이 번역에 뛰어난 이유는 무엇인가요? 사실, TextBlob은 백그라운드에서 Google 번역을 사용하고 있으며, 이는 수백만 개의 구문을 분석하여 작업에 가장 적합한 문자열을 예측할 수 있는 정교한 AI입니다. 여기에는 수동 작업이 없으며 `blob.translate`를 사용하려면 인터넷 연결이 필요합니다.
+
+---
+
+## 감정 분석
+
+감정 분석은 주어진 텍스트가 긍정적인지 부정적인지를 결정하는 것입니다. 이는 트윗, 영화 리뷰 또는 점수와 의견이 포함된 다른 텍스트에서 수동으로 긍정적 및 부정적 텍스트를 수집하여 수행할 수 있습니다. 그런 다음 NLP 기술을 사용하여 의견과 점수를 분석하여 패턴을 찾을 수 있습니다(예: 긍정적인 영화 리뷰는 부정적인 영화 리뷰보다 'Oscar worthy'라는 구문이 더 자주 나타나고, 긍정적인 레스토랑 리뷰는 'gourmet'라는 단어가 'disgusting'보다 더 자주 나타남).
+
+> ⚖️ **예시**: 정치인의 사무실에서 일하고 있고 새로운 법안이 논의 중이라면, 지지하거나 반대하는 이메일을 사무실에 보낼 수 있습니다. 이메일이 많다면 모두 읽으려면 압도될 수 있습니다. 봇이 모든 이메일을 읽고 이해하여 각 이메일이 어떤 쪽에 속하는지 알려준다면 좋지 않을까요?
+>
+> 이를 달성하는 한 가지 방법은 머신러닝을 사용하는 것입니다. 모델을 *반대* 이메일의 일부와 *찬성* 이메일의 일부로 훈련시킬 것입니다. 모델은 *반대* 또는 *찬성* 이메일에 더 자주 나타나는 특정 단어와 패턴을 연관시키겠지만, 내용은 이해하지 못하고 단지 특정 단어와 패턴이 *반대* 또는 *찬성* 이메일에 더 자주 나타난다는 것만 이해할 것입니다. 모델을 훈련에 사용하지 않은 이메일로 테스트하여 동일한 결론에 도달하는지 확인할 수 있습니다. 모델의 정확성에 만족하면 미래의 이메일을 각 이메일을 읽지 않고도 처리할 수 있습니다.
+
+✅ 이전 수업에서 사용한 프로세스와 유사한가요?
+
+## 연습 - 감정 문장
+
+감정은 -1에서 1까지의 *극성*으로 측정되며, -1은 가장 부정적인 감정이고 1은 가장 긍정적인 감정입니다. 감정은 또한 0(객관성)에서 1(주관성)까지의 점수로 측정됩니다.
+
+제인 오스틴의 *Pride and Prejudice*를 다시 한 번 살펴보세요. 텍스트는 [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm)에서 사용할 수 있습니다. 아래 샘플은 책의 첫 번째와 마지막 문장의 감정을 분석하고 감정 극성과 주관성/객관성 점수를 표시하는 짧은 프로그램을 보여줍니다.
+
+다음 작업에서는 `sentiment`를 결정하기 위해 `TextBlob` 라이브러리를 사용해야 합니다(자체 감정 계산기를 작성할 필요는 없습니다).
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+다음과 같은 출력을 볼 수 있습니다:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## 챌린지 - 감정 극성 확인
+
+*Pride and Prejudice*에서 절대적으로 긍정적인 문장이 절대적으로 부정적인 문장보다 더 많은지 감정 극성을 사용하여 결정하는 것이 과제입니다. 이 작업에서는 극성 점수가 1 또는 -1인 경우 절대적으로 긍정적 또는 부정적이라고 가정할 수 있습니다.
+
+**단계:**
+
+1. Project Gutenberg에서 [Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm)의 사본을 .txt 파일로 다운로드합니다. 파일 시작과 끝의 메타데이터를 제거하고 원본 텍스트만 남깁니다.
+2. 파일을 Python에서 열고 내용을 문자열로 추출합니다.
+3. 책 문자열을 사용하여 TextBlob을 만듭니다.
+4. 루프에서 책의 각 문장을 분석합니다.
+ 1. 극성이 1 또는 -1인 경우 해당 문장을 긍정적 또는 부정적 메시지의 배열 또는 목록에 저장합니다.
+5. 마지막으로 모든 긍정적 문장과 부정적 문장(별도로) 및 각 문장의 수를 출력합니다.
+
+여기 샘플 [솔루션](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb)이 있습니다.
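+
+위 단계를 구현한 최소한의 스케치는 다음과 같습니다. 파일 이름 `pride.txt`는 예시로 가정한 것이며, 메타데이터를 제거한 본문만 들어 있다고 가정합니다:
+
+```python
+from textblob import TextBlob
+
+with open("pride.txt", encoding="utf-8") as f:  # assumed filename
+    book = TextBlob(f.read())
+
+positive_sentences = []
+negative_sentences = []
+for sentence in book.sentences:
+    # keep only the absolutely positive (1) or absolutely negative (-1) sentences
+    if sentence.sentiment.polarity == 1:
+        positive_sentences.append(sentence)
+    elif sentence.sentiment.polarity == -1:
+        negative_sentences.append(sentence)
+
+print("Positive:", len(positive_sentences), "Negative:", len(negative_sentences))
+```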
+
+✅ 지식 점검
+
+1. 감정은 문장에서 사용된 단어를 기반으로 하지만, 코드가 단어를 *이해*하나요?
+2. 감정 극성이 정확하다고 생각하나요, 즉 점수에 *동의*하나요?
+ 1. 특히 다음 문장의 절대 **긍정적** 극성에 동의하나요 아니면 동의하지 않나요?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. 다음 3문장은 절대적으로 긍정적인 감정으로 평가되었지만, 자세히 읽어보면 긍정적인 문장이 아닙니다. 감정 분석이 왜 긍정적이라고 생각했을까요?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. 다음 문장의 절대 **부정적** 극성에 동의하나요 아니면 동의하지 않나요?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ 제인 오스틴의 열렬한 팬이라면 그녀가 종종 책을 통해 영국 리젠시 사회의 더 우스꽝스러운 측면을 비판하는 것을 이해할 것입니다. *Pride and Prejudice*의 주인공 엘리자베스 베넷은 예리한 사회 관찰자이며(저자와 마찬가지로) 그녀의 언어는 종종 미묘하게 표현됩니다. 심지어 이야기의 사랑 관심사인 미스터 다아시는 엘리자베스의 장난스럽고 놀리는 언어 사용을 지적합니다: "I have had the pleasure of your acquaintance long enough to know that you find great enjoyment in occasionally professing opinions which in fact are not your own."
+
+---
+
+## 🚀챌린지
+
+사용자 입력에서 다른 기능을 추출하여 Marvin을 더 개선할 수 있나요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## 복습 및 자습
+
+텍스트에서 감정을 추출하는 방법은 여러 가지가 있습니다. 이 기술을 사용할 수 있는 비즈니스 응용 프로그램을 생각해 보세요. 어떻게 잘못될 수 있는지 생각해 보세요. [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott)와 같은 정교한 기업용 시스템에 대해 자세히 읽어보세요. 위의 Pride and Prejudice 문장을 테스트하여 미묘함을 감지할 수 있는지 확인해 보세요.
+
+## 과제
+
+[Poetic license](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/3-Translation-Sentiment/assignment.md b/translations/ko/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..406a32de3
--- /dev/null
+++ b/translations/ko/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# 시적 허용
+
+## 지침
+
+[이 노트북](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency)에서 Azure 텍스트 분석을 사용하여 감정 분석된 500편 이상의 에밀리 디킨슨 시를 찾을 수 있습니다. 이 데이터셋을 사용하여 강의에서 설명한 기법을 사용해 분석하세요. 시의 제안된 감정이 더 정교한 Azure 서비스의 결정과 일치합니까? 왜 그렇다고 생각하나요? 혹은 왜 그렇지 않나요? 놀라운 점이 있나요?
+
+## 채점 기준
+
+| 기준 | 모범 사례 | 적절한 사례 | 개선이 필요한 사례 |
+| ------- | ------------------------------------------------------------------------------ | ------------------------------------------------------ | ------------------------ |
+| | 작가의 샘플 출력에 대한 철저한 분석이 포함된 노트북이 제시됨 | 노트북이 불완전하거나 분석을 수행하지 않음 | 노트북이 제시되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 신뢰할 수 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/ko/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..5acb9d1e8
--- /dev/null
+++ b/translations/ko/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서가 원어로 작성된 경우, 이를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/ko/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..a2b1972b1
--- /dev/null
+++ b/translations/ko/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/4-Hotel-Reviews-1/README.md b/translations/ko/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..d0c10e151
--- /dev/null
+++ b/translations/ko/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,302 @@
+# 호텔 리뷰를 통한 감정 분석 - 데이터 처리
+
+이 섹션에서는 이전 강의에서 배운 기술들을 사용하여 대규모 데이터셋을 탐색적으로 분석할 것입니다. 다양한 열의 유용성을 잘 이해한 후에는 다음을 배우게 됩니다:
+
+- 불필요한 열을 제거하는 방법
+- 기존 열을 기반으로 새로운 데이터를 계산하는 방법
+- 최종 도전에 사용할 결과 데이터셋을 저장하는 방법
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### 소개
+
+지금까지 텍스트 데이터가 숫자 데이터와는 상당히 다르다는 것을 배웠습니다. 사람이 작성하거나 말한 텍스트는 패턴과 빈도, 감정 및 의미를 분석할 수 있습니다. 이번 강의에서는 실제 데이터셋과 실제 도전을 다루게 됩니다: [CC0: Public Domain 라이선스](https://creativecommons.org/publicdomain/zero/1.0/)로 제공되는 **[유럽의 515K 호텔 리뷰 데이터](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**입니다. 이 데이터는 Booking.com에서 공개 소스를 통해 수집되었으며, 데이터셋의 작성자는 Jiashen Liu입니다.
+
+### 준비
+
+다음이 필요합니다:
+
+* Python 3을 사용하여 .ipynb 노트북을 실행할 수 있는 능력
+* pandas
+* NLTK, [로컬에 설치해야 합니다](https://www.nltk.org/install.html)
+* Kaggle에서 사용할 수 있는 데이터셋 [유럽의 515K 호텔 리뷰 데이터](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). 압축을 풀면 약 230 MB입니다. 이를 NLP 강의와 관련된 루트 `/data` 폴더에 다운로드하십시오.
+
+## 탐색적 데이터 분석
+
+이 도전 과제는 감정 분석과 손님 리뷰 점수를 사용하여 호텔 추천 봇을 구축하는 것을 가정합니다. 사용할 데이터셋에는 6개 도시의 1493개 호텔에 대한 리뷰가 포함되어 있습니다.
+
+Python, 호텔 리뷰 데이터셋 및 NLTK의 감정 분석을 사용하여 다음을 알아낼 수 있습니다:
+
+* 리뷰에서 가장 자주 사용되는 단어와 구는 무엇인가?
+* 호텔을 설명하는 공식 *태그*가 리뷰 점수와 상관관계가 있는가? (예: 특정 호텔에 대한 더 부정적인 리뷰가 *어린 자녀를 둔 가족*보다 *솔로 여행자*에게 더 많은가? 이는 *솔로 여행자*에게 더 적합하다는 것을 나타낼 수 있습니다.)
+* NLTK 감정 점수가 호텔 리뷰어의 숫자 점수와 '일치'하는가?
+
+#### 데이터셋
+
+다운로드하여 로컬에 저장한 데이터셋을 탐색해 봅시다. VS Code나 Excel과 같은 편집기에서 파일을 열어보세요.
+
+데이터셋의 헤더는 다음과 같습니다:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+검토하기 쉽도록 여기에서는 열을 다음과 같이 그룹화했습니다:
+
+##### 호텔 열
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (위도), `lng` (경도)
+ * *lat*과 *lng*를 사용하여 Python으로 호텔 위치를 보여주는 지도를 그릴 수 있습니다 (부정적 및 긍정적 리뷰에 따라 색상을 구분할 수 있음)
+ * Hotel_Address는 우리에게 명확히 유용하지 않으며, 더 쉬운 정렬 및 검색을 위해 국가로 대체할 가능성이 큽니다.
+
+**호텔 메타 리뷰 열**
+
+* `Average_Score`
+ * 데이터셋 작성자에 따르면, 이 열은 *지난 1년 동안의 최신 댓글을 기반으로 계산된 호텔의 평균 점수*입니다. 이는 점수를 계산하는 독특한 방법처럼 보이지만, 지금은 데이터를 액면 그대로 받아들여야 합니다.
+
+ ✅ 이 데이터의 다른 열을 기반으로 평균 점수를 계산할 다른 방법을 생각해 볼 수 있나요?
+
+* `Total_Number_of_Reviews`
+ * 이 호텔이 받은 총 리뷰 수 - 코드 작성 없이 이 데이터셋의 리뷰를 의미하는지 명확하지 않습니다.
+* `Additional_Number_of_Scoring`
+ * 이는 리뷰어가 긍정적 또는 부정적 리뷰를 작성하지 않았지만 점수를 부여한 것을 의미합니다.
+
+**리뷰 열**
+
+- `Reviewer_Score`
+  - 이는 소수점 한 자리까지 표시되는 숫자 값으로, 최솟값은 2.5, 최댓값은 10입니다.
+ - 2.5가 가능한 최저 점수인 이유는 설명되지 않았습니다.
+- `Negative_Review`
+ - 리뷰어가 아무것도 작성하지 않았다면, 이 필드는 "**No Negative**"로 표시됩니다.
+ - 리뷰어가 부정적 리뷰 칸에 긍정적 리뷰를 작성할 수도 있습니다 (예: "이 호텔에 나쁜 점이 없습니다").
+- `Review_Total_Negative_Word_Counts`
+ - 부정적 단어 수가 많을수록 점수가 낮아집니다 (감정을 확인하지 않은 경우).
+- `Positive_Review`
+ - 리뷰어가 아무것도 작성하지 않았다면, 이 필드는 "**No Positive**"로 표시됩니다.
+ - 리뷰어가 긍정적 리뷰 칸에 부정적 리뷰를 작성할 수도 있습니다 (예: "이 호텔에는 좋은 점이 전혀 없습니다").
+- `Review_Total_Positive_Word_Counts`
+ - 긍정적 단어 수가 많을수록 점수가 높아집니다 (감정을 확인하지 않은 경우).
+- `Review_Date` 및 `days_since_review`
+ - 신선도 또는 오래된 리뷰에 대한 측정을 적용할 수 있습니다 (호텔 관리가 변경되었거나, 리모델링이 이루어졌거나, 수영장이 추가되었기 때문에 오래된 리뷰는 정확하지 않을 수 있음).
+- `Tags`
+ - 이는 리뷰어가 자신이 어떤 유형의 손님이었는지, 어떤 유형의 방을 가졌는지, 체류 기간, 리뷰를 제출한 방법 등을 설명하기 위해 선택할 수 있는 짧은 설명입니다.
+ - 불행히도, 이러한 태그를 사용하는 것은 문제가 있으며, 아래 섹션에서 그 유용성에 대해 논의합니다.
+
+**리뷰어 열**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - 이는 추천 모델에서 요인이 될 수 있습니다. 예를 들어, 수백 개의 리뷰를 작성한 더 많은 리뷰어가 부정적 리뷰를 남길 가능성이 더 높다는 것을 알 수 있다면 유용할 수 있습니다. 그러나 특정 리뷰의 리뷰어는 고유 코드로 식별되지 않으므로 리뷰 세트와 연결할 수 없습니다. 100개 이상의 리뷰를 작성한 리뷰어가 30명 있지만, 이것이 추천 모델에 어떻게 도움이 되는지 보기 어렵습니다.
+- `Reviewer_Nationality`
+ - 일부 사람들은 특정 국적이 국가적 성향 때문에 긍정적 또는 부정적 리뷰를 남길 가능성이 더 높다고 생각할 수 있습니다. 이러한 일화적 견해를 모델에 포함하는 데 주의해야 합니다. 이는 국가적(때로는 인종적) 고정관념이며, 각 리뷰어는 자신의 경험을 바탕으로 리뷰를 작성한 개인입니다. 그들의 국적이 리뷰 점수의 이유였다고 생각하는 것은 정당화하기 어렵습니다.
+
+##### 예시
+
+| 평균 점수 | 총 리뷰 수 | 리뷰어 점수 | 부정적 리뷰 | 긍정적 리뷰 | 태그 |
+| -------------- | ---------------------- | ---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day i e with jackhammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave very early flight and received an appropriate bill A day later the hotel made another charge without my consent in excess of booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place Stay away | Business trip Couple Standard Double Room Stayed 2 nights |
+
+보시다시피, 이 손님은 이 호텔에서 행복한 체류를 하지 않았습니다. 이 호텔은 7.8의 좋은 평균 점수와 1945개의 리뷰를 가지고 있지만, 이 리뷰어는 2.5를 주고 그들의 체류가 얼마나 부정적이었는지에 대해 115단어를 작성했습니다. 긍정적 리뷰 칸에 아무것도 작성하지 않았다면 긍정적인 것이 없다고 추측할 수 있지만, 그들은 7단어로 경고를 작성했습니다. 단어의 의미나 감정을 고려하지 않고 단어 수만 센다면, 리뷰어의 의도를 왜곡할 수 있습니다. 이상하게도, 그들의 점수 2.5는 혼란스럽습니다. 호텔 체류가 그렇게 나빴다면 왜 점수를 주었을까요? 데이터셋을 자세히 조사하면 가능한 최저 점수가 2.5이며, 0이 아니라는 것을 알 수 있습니다. 가능한 최고 점수는 10입니다.
+
+##### 태그
+
+위에서 언급했듯이, 처음에는 `Tags`을 사용하여 데이터를 분류하는 아이디어가 타당해 보입니다. 불행히도 이러한 태그는 표준화되어 있지 않아서, 특정 호텔에서는 *싱글룸*, *트윈룸*, *더블룸* 옵션이 있을 수 있지만, 다음 호텔에서는 *디럭스 싱글룸*, *클래식 퀸룸*, *이그제큐티브 킹룸* 옵션이 있습니다. 이는 동일한 것일 수 있지만, 너무 많은 변형이 있어 선택이 어려워집니다:
+
+1. 모든 용어를 단일 표준으로 변경하려고 시도하는 것, 이는 각 경우에 변환 경로가 명확하지 않기 때문에 매우 어렵습니다 (예: *클래식 싱글룸*을 *싱글룸*으로 매핑하지만 *코트야드 가든 또는 시티 뷰가 있는 슈페리어 퀸룸*은 매핑하기가 훨씬 어렵습니다).
+
+1. NLP 접근 방식을 사용하여 각 호텔에 적용되는 *솔로*, *비즈니스 여행객*, 또는 *어린 아이가 있는 가족*과 같은 특정 용어의 빈도를 측정하고 이를 추천에 반영할 수 있습니다.
+
+태그는 일반적으로 (항상 그렇지는 않지만) *여행 유형*, *손님 유형*, *방 유형*, *숙박 기간*, 및 *리뷰를 제출한 장치 유형*에 맞추어진 5~6개의 쉼표로 구분된 값을 포함하는 단일 필드입니다. 그러나 일부 리뷰어가 각 필드를 채우지 않는 경우 (하나를 비워둘 수 있음), 값은 항상 같은 순서로 제공되지 않습니다.
+
+예를 들어 *그룹 유형*을 보세요. `Tags` 열에서 이 필드가 가질 수 있는 고유한 값은 1025개이며, 불행히도 그중 일부만 그룹을 가리킵니다(일부는 방 유형 등입니다). 가족을 언급하는 것만 필터링하면 많은 *가족 방* 유형의 결과가 포함됩니다. *with*라는 용어까지 포함하여, 즉 *Family with* 값을 세어 보면, 515,000개의 결과 중 80,000개 이상이 "어린 자녀를 둔 가족" 또는 "나이 든 자녀를 둔 가족" 문구를 포함하고 있습니다.
+
+이것은 태그 열이 완전히 쓸모없지는 않지만, 유용하게 만들기 위해서는 약간의 작업이 필요함을 의미합니다.
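+
+예를 들어 (뒤에서 로드할 데이터프레임 `df`를 가정하면) pandas로 다음과 같이 확인해 볼 수 있습니다:
+
+```python
+# Count how many Tags values mention a family group (a sketch, case-sensitive match)
+family_mask = df["Tags"].str.contains("Family with", na=False)
+print(family_mask.sum(), "of", len(df), "reviews have a 'Family with ...' tag")
+```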
+
+##### 호텔 평균 점수
+
+데이터셋에는 이해하기 어렵지만 모델을 구축할 때 인식해야 하는 몇 가지 이상 현상 또는 불일치가 있습니다. 이해하게 되면 토론 섹션에서 알려주세요!
+
+데이터셋에는 평균 점수 및 리뷰 수와 관련된 다음 열이 있습니다:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+이 데이터셋에서 가장 많은 리뷰를 가진 단일 호텔은 *Britannia International Hotel Canary Wharf*로 515,000개의 리뷰 중 4789개의 리뷰를 가지고 있습니다. 그러나 `Total_Number_of_Reviews` 값이 9086인 이 호텔을 보면, 리뷰 없이 점수만 있는 경우가 많이 있을 수 있다고 추측할 수 있습니다. `Additional_Number_of_Scoring` 열 값을 추가해야 할 수도 있습니다. 그 값은 2682이며, 4789에 추가하면 7471이 되어 `Total_Number_of_Reviews`에서 여전히 1615 부족합니다.
+
+`Average_Score` 열을 보면, 데이터셋의 리뷰 평균일 수 있다고 추측할 수 있지만, Kaggle의 설명은 "*지난 1년 동안의 최신 댓글을 기반으로 계산된 호텔의 평균 점수*"입니다. 이는 그다지 유용해 보이지 않지만, 데이터셋의 리뷰 점수를 기반으로 자체 평균을 계산할 수 있습니다. 동일한 호텔을 예로 들면, 호텔 평균 점수는 7.1로 주어졌지만, 데이터셋에서 계산된 점수 (리뷰어 점수의 평균)는 6.8입니다. 이는 비슷하지만 동일한 값은 아니며, `Additional_Number_of_Scoring` 리뷰에서 제공된 점수가 평균을 7.1로 올렸다고 추측할 수 있습니다. 그러나 이를 테스트하거나 증명할 방법이 없으므로, `Average_Score`, `Additional_Number_of_Scoring` 및 `Total_Number_of_Reviews`를 사용하는 것이 어렵습니다.
+
+더 복잡하게도, 두 번째로 많은 리뷰를 가진 호텔의 계산된 평균 점수는 8.12이고, 데이터셋 `Average_Score`는 8.1입니다. 이 정확한 점수는 우연의 일치인가요, 아니면 첫 번째 호텔의 불일치인가요?
+
+이 호텔이 특이치일 가능성을 고려하여, 대부분의 값이 일치할 수도 있지만 (어떤 이유로 일부는 그렇지 않음) 다음으로 데이터셋의 값을 탐색하고 값의 올바른 사용 (또는 비사용)을 결정하는 짧은 프로그램을 작성할 것입니다.
+
+> 🚨 주의 사항
+>
+> 이 데이터셋을 작업할 때 텍스트를 읽거나 분석하지 않고도 텍스트에서 무언가를 계산하는 코드를 작성하게 될 것입니다. 이는 NLP의 본질로, 사람이 하지 않고도 의미나 감정을 해석하는 것입니다. 그러나 부정적 리뷰를 읽을 가능성이 있습니다. 그렇게 하지 않기를 권장합니다. 일부는 어리석거나, 호텔의 통제 밖의 것들에 대한 부적절한 부정적 리뷰일 수 있습니다. 예를 들어 "날씨가 좋지 않았다"는 호텔이나 그 누구도 통제할 수 없는 것에 대한 리뷰입니다. 그러나 일부 리뷰는 인종차별적, 성차별적, 연령차별적일 수 있습니다. 이는 불행하지만 공개 웹사이트에서 수집된 데이터셋에서 예상할 수 있는 일입니다. 일부 리뷰어는 불쾌하거나 불편하거나 화나게 할 수 있는 리뷰를 남깁니다. 코드를 통해 감정을 측정하는 것이 좋습니다. 다만, 소수의 사람들이 그러한 리뷰를 남기지만, 여전히 존재합니다.
+
+## 연습 - 데이터 탐색
+
+### 데이터 로드
+
+데이터를 시각적으로 충분히 살펴보았으니 이제 코드를 작성하여 몇 가지 답을 얻어봅시다! 이 섹션에서는 pandas 라이브러리를 사용합니다. 첫 번째 작업은 CSV 데이터를 로드하고 읽을 수 있는지 확인하는 것입니다. pandas 라이브러리는 빠른 CSV 로더를 가지고 있으며, 결과는 이전 강의에서와 같이 데이터프레임에 배치됩니다. 로드할 CSV는 50만 개 이상의 행이 있지만, 열은 17개뿐입니다. pandas는 데이터프레임과 상호 작용할 수 있는 강력한 방법을 많이 제공합니다. 여기에는 모든 행에 대해 연산을 수행하는 기능도 포함됩니다.
+
+이 강의의 이후 부분에서는 코드 스니펫과 코드 설명, 결과에 대한 논의가 포함됩니다. 코드 작성에는 _notebook.ipynb_를 사용하세요.
+
+다음은 사용할 데이터 파일을 로드하는 코드입니다:
+
+```python
+# Load the hotel reviews from CSV
+import pandas as pd
+import time
+# importing time so the start and end time can be used to calculate file loading time
+print("Loading data file now, this could take a while depending on file size")
+start = time.time()
+# df is 'DataFrame' - make sure you downloaded the file to the data folder
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```
+
+이제 데이터가 로드되었으니 몇 가지 연산을 수행할 수 있습니다. 이 코드를 프로그램의 상단에 유지하세요.
+
+## 데이터 탐색
+
+이 경우 데이터는 이미 *깨끗*합니다. 즉, 작업할 준비가 되어 있으며, 알고리즘이 영어 문자만 기대하는 다른 언어의 문자가 포함되어 있지 않습니다.
+
+✅ NLP 기술을 적용하기 전에 데이터를 형식화하기 위한 초기 처리가 필요한 데이터를 다룰 수도 있습니다. 그렇다면 비영어 문자를 어떻게 처리하시겠습니까?
+
+데이터가 로드된 후 코드를 사용하여 탐색할 수 있는지 확인하세요. `Negative_Review` 및 `Positive_Review` 열에 집중하고 싶어질 수 있습니다. 이 열들은 NLP 알고리즘이 처리할 자연어 텍스트로 가득 차 있습니다. 그러나 기다리세요! NLP와 감정 분석에 뛰어들기 전에, 아래 코드를 따라 데이터셋에 주어진 값이 pandas로 계산한 값과 일치하는지 확인하세요.
+
+## 데이터프레임 연산
+
+이 강의의 첫 번째 작업은 데이터 프레임을 변경하지 않고 데이터를 검사하는 코드를 작성하여 다음 주장이 올바른지 확인하는 것입니다.
+
+> 많은 프로그래밍 작업과 마찬가지로, 이를 완료하는 여러 가지 방법이 있지만, 가장 간단하고 쉬운 방법으로 하는 것이 좋습니다. 특히 나중에 이 코드를 다시 보았을 때 이해하기 쉬운 방법이라면 더욱 그렇습니다. 데이터프레임에는 원하는 작업을 효율적으로 수행할 방법을 제공하는 포괄적인 API가 있습니다.
+
+다음 질문들을 코딩 과제로 생각하고, 해답을 보지 않고 답해 보세요.
+rows have column `Positive_Review` values of "No Positive" 9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative" ### Code answers 1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns) ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+ ``` 2. Calculate the frequency count for reviewer nationalities: 1. How many distinct values are there for the column `Reviewer_Nationality` and what are they? 2. What reviewer nationality is the most common in the dataset (print country and number of reviews)? ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+ ``` 3. What are the next top 10 most frequently found nationalities, and their frequency count? ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+ ``` 3. What was the most frequently reviewed hotel for each of the top 10 most reviewer nationalities? ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+ ``` 4. How many reviews are there per hotel (frequency count of hotel) in the dataset? ```python
+    # First create a new dataframe based on the old one, removing the unneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+    hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name')['Total_Number_of_Reviews'].transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+    ```
+
+    | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+    | :----------------------------------------: | :---------------------: | :-----------------: |
+    | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+    | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+    | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+    | ...                                        | ...                     | ...                 |
+    | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+    | Hotel Wagner                               | 135                     | 10                  |
+    | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+    You may notice that the *counted in the dataset* results do not match the value in `Total_Number_of_Reviews`. It is unclear whether this value represented the total number of reviews the hotel had (of which not all were scraped) or some other calculation. Because of this ambiguity, `Total_Number_of_Reviews` is not used in the model.
+
+6. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average. Print out the columns `Hotel_Name`, `Average_Score`, and `Calc_Average_Score`.
+
+    ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+    # 'mean' is the mathematical term for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+    ```
+
+    You may also wonder about the `Average_Score` value and why it is sometimes different from the calculated average score. As we can't know why some of the values match but others differ, it's safest in this case to use the review scores that we have to calculate the average ourselves. That said, the differences are usually very small; here are the hotels with the greatest deviation between the dataset average and the calculated average:
+
+    | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+    | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+    | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+    | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+    | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+    | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+    | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+    | ...                      | ...           | ...                | ...                                         |
+    | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+    | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+    | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+    | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+    | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+    With only 1 hotel having a score difference greater than 1, we can probably ignore the difference and use the calculated average score.
+
+7. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+
+9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative"
+
+    ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+    ```
+
+## Another way
+
+Another way to count the items is without lambdas, using `sum` to count the rows:
+
+```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+```
+
+You may have noticed that there are 127 rows that have both "No Negative" and "No Positive" values for the columns `Negative_Review` and `Positive_Review` respectively. That means that the reviewer gave the hotel a numerical score, but declined to write either a positive or negative review. Luckily this is a small number of rows (127 out of 515738, or 0.02%), so it probably won't skew our model or results in any particular direction. Still, you might not have expected a dataset of reviews to contain rows with no reviews, so it's worth exploring the data to discover rows like this.
+
+Now that you have explored the dataset, in the next lesson you will filter the data and add some sentiment analysis.
+
+---
+
+## 🚀Challenge
+
+This lesson demonstrates, as we saw in previous lessons, how critically important it is to understand your data and its foibles before performing operations on it. Text-based data, in particular, bears careful scrutiny. Dig through various text-heavy datasets and see if you can discover areas that could introduce bias or skewed sentiment into a model.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Review & Self Study
+
+Take [this Learning Path on NLP](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) to discover tools to try when building speech and text-heavy models.
+
+## Assignment
+
+[NLTK](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/ko/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..8679b69cb
--- /dev/null
+++ b/translations/ko/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instructions
+
+NLTK는 계산 언어학 및 NLP에 사용되는 잘 알려진 라이브러리입니다. '[NLTK book](https://www.nltk.org/book/)'을 읽고 그 연습 문제를 시도해보세요. 이 평가되지 않는 과제에서 이 라이브러리에 대해 더 깊이 알아갈 수 있을 것입니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의해 주십시오. 원어로 작성된 원본 문서가 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..ba6f9eafa
--- /dev/null
+++ b/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확한 내용이 포함될 수 있습니다. 원본 문서의 모국어 버전을 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..a49cd3195
--- /dev/null
+++ b/translations/ko/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서가 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역을 사용함으로써 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/5-Hotel-Reviews-2/README.md b/translations/ko/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..560667952
--- /dev/null
+++ b/translations/ko/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# 호텔 리뷰를 통한 감정 분석
+
+이제 데이터셋을 자세히 탐색했으니, 열을 필터링하고 NLP 기법을 사용하여 호텔에 대한 새로운 통찰을 얻을 때입니다.
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### 필터링 및 감정 분석 작업
+
+아마도 데이터셋에 몇 가지 문제가 있다는 것을 눈치챘을 것입니다. 일부 열은 쓸모없는 정보로 채워져 있고, 다른 열은 잘못된 것처럼 보입니다. 만약 그 열이 맞다고 하더라도, 그것들이 어떻게 계산되었는지 명확하지 않으며, 자신의 계산으로 독립적으로 검증할 수 없습니다.
+
+## 연습: 데이터 처리 조금 더 하기
+
+데이터를 조금 더 정리하세요. 나중에 유용할 열을 추가하고, 다른 열의 값을 변경하고, 특정 열을 완전히 삭제하세요.
+
+1. 초기 열 처리
+
+ 1. `lat` 및 `lng` 삭제
+
+ 2. `Hotel_Address` 값을 다음 값으로 대체하세요 (주소에 도시와 국가가 포함되어 있다면, 도시와 국가만 남기세요).
+
+ 데이터셋에 있는 유일한 도시와 국가는 다음과 같습니다:
+
+ 암스테르담, 네덜란드
+
+ 바르셀로나, 스페인
+
+ 런던, 영국
+
+ 밀라노, 이탈리아
+
+ 파리, 프랑스
+
+ 비엔나, 오스트리아
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+ 이제 국가 수준의 데이터를 쿼리할 수 있습니다:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | 호텔 주소 | 호텔 이름 |
+ | :--------------------- | :--------: |
+ | 암스테르담, 네덜란드 | 105 |
+ | 바르셀로나, 스페인 | 211 |
+ | 런던, 영국 | 400 |
+ | 밀라노, 이탈리아 | 162 |
+ | 파리, 프랑스 | 458 |
+ | 비엔나, 오스트리아 | 158 |
+
+2. 호텔 메타 리뷰 열 처리
+
+    1. Drop `Additional_Number_of_Scoring`
+
+    2. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+    3. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+    df.Total_Number_of_Reviews = df.groupby('Hotel_Name')['Reviewer_Score'].transform('count')  # count of rows per hotel
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. 리뷰 열 처리
+
+    1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+    2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are
+
+ 3. Keep `Tags` for now
+
+ - We'll be doing some additional filtering operations on the tags in the next section and then tags will be dropped
+
+4. Process reviewer columns
+
+ 1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+ 2. Keep `Reviewer_Nationality`
+
+### Tag columns
+
+The `Tag` column is problematic as it is a list (in text form) stored in the column. Unfortunately the order and number of sub sections in this column are not always the same. It's hard for a human to identify the correct phrases to be interested in, because there are 515,000 rows, and 1427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines. You can scan the text and find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6762646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful, because you've seen a sample of the tags such as `[' Business trip  ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is - but first you need to follow a few steps to ascertain the tags of interest.
+
+### 태그 필터링
+
+데이터셋의 목표는 감정을 추가하고 최종 데이터셋에서 유용한 열을 추가하여 최고의 호텔을 선택하는 데 도움을 주는 것입니다 (자신을 위해서나 호텔 추천 봇을 만들기 위해 클라이언트가 요청한 경우). 태그가 최종 데이터셋에서 유용한지 아닌지 스스로에게 물어봐야 합니다. 다음은 한 가지 해석입니다 (다른 이유로 데이터셋이 필요하다면 다른 태그가 선택에 남거나 제외될 수 있습니다):
+
+1. 여행 유형은 관련이 있으며, 유지해야 합니다.
+2. 게스트 그룹 유형은 중요하며, 유지해야 합니다.
+3. 게스트가 머문 방, 스위트룸, 스튜디오 유형은 무관합니다 (모든 호텔에 기본적으로 동일한 방이 있습니다).
+4. 리뷰가 제출된 장치는 무관합니다.
+5. 리뷰어가 머문 밤 수는 *관련*이 있을 수 있습니다. 더 긴 숙박이 호텔을 더 좋아하는 것과 관련이 있다고 가정할 수 있지만, 이는 다소 무관할 수 있습니다.
+
+요약하자면, **두 가지 종류의 태그를 유지하고 나머지는 제거하세요**.
+
+먼저, 태그가 더 나은 형식으로 변환되기 전에는 태그를 제대로 셀 수 없으므로, 대괄호와 따옴표를 제거해야 합니다. 여러 가지 방법이 있지만, 데이터 처리 시간이 오래 걸릴 수 있으므로 가장 빠른 방법을 원합니다. 다행히도, pandas는 이러한 각 단계를 쉽게 수행할 수 있는 방법을 제공합니다.
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+각 태그는 다음과 같이 됩니다: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 columns, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to our advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted into the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column. Printing that out, you'll see there were 2428 unique tags. A minimal sketch of that approach is below, followed by a small sample of the counts:
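+
+This sketch assumes the `Tags` column has already been cleaned as shown above; it is illustrative, not the lesson's notebook code:
+
+```python
+# Split the tag string on commas into temporary columns (one per tag position),
+# stack them into a single Series, trim whitespace, then count the values
+tag_columns = df["Tags"].str.split(",", expand=True)
+all_tags = tag_columns.stack().str.strip()
+tag_counts = all_tags.value_counts()
+print(len(tag_counts))     # expect roughly 2428 unique tags
+print(tag_counts.head(20))
+```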
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be a smart thing to remove them before counting phrase occurrence, but it is such a fast operation you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1; it slightly reduces the total number of tags to be considered. Note you do not remove them from the dataset, just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There are a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration (a sketch of this filtering follows the table below).
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
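+Continuing the earlier sketch (the assumed `tag_counts` Series from above; the patterns here are illustrative, not the lesson's notebook code):
+
+```python
+# Drop "Stayed N night(s)" tags and room-like tags from consideration;
+# what remains approximates the useful trip-type and group-type tags
+is_stay = tag_counts.index.str.match(r"Stayed \d+ nights?")
+is_room = tag_counts.index.str.contains(r"Room|Suite|Studio|Apartment", case=False)
+useful_tags = tag_counts[~(is_stay | is_room)]
+print(useful_tags.head(10))
+```
+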
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is the same as `Group` more or less, and it would be fair to combine the two as above. The code for identifying the correct tags is in [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1, otherwise add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business versus leisure, or to bring a pet to, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
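+
+As a quick sanity check (a sketch, not part of the lesson), the new 1/0 columns can be summed and compared against the tag counts in the table above:
+
+```python
+# Each column sum should be close to the corresponding tag count above
+print(df[["Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group",
+          "Family_with_young_children", "Family_with_older_children", "With_a_pet"]].sum())
+```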
+
+### 파일 저장
+
+마지막으로, 현재 상태의 데이터셋을 새 이름으로 저장하세요.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## 감정 분석 작업
+
+이 마지막 섹션에서는 리뷰 열에 감정 분석을 적용하고 결과를 데이터셋에 저장합니다.
+
+## 연습: 필터링된 데이터 로드 및 저장
+
+이제 이전 섹션에서 저장한 필터링된 데이터셋을 로드하고, **원본 데이터셋이 아님**을 주의하세요.
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### 불용어 제거
+
+부정적 리뷰와 긍정적 리뷰 열에서 감정 분석을 수행하려면 시간이 오래 걸릴 수 있습니다. 빠른 CPU를 가진 강력한 테스트 노트북에서 테스트한 결과, 사용된 감정 라이브러리에 따라 12-14분이 걸렸습니다. 이는 (상대적으로) 긴 시간이므로, 이를 빠르게 할 수 있는지 조사할 가치가 있습니다.
+
+불용어, 즉 문장의 감정을 바꾸지 않는 일반적인 영어 단어를 제거하는 것이 첫 번째 단계입니다. 불용어를 제거하면 감정 분석이 더 빨리 실행되지만, 정확도가 떨어지지 않습니다 (불용어는 감정에 영향을 미치지 않지만, 분석을 느리게 만듭니다).
+
+가장 긴 부정적 리뷰는 395단어였지만, 불용어를 제거한 후에는 195단어입니다.
+
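+다음은 이 수치를 직접 확인해 볼 수 있는 간단한 스케치입니다 (강의 노트북에 포함된 코드는 아니며, 위와 동일한 데이터프레임과 열 이름을 가정합니다):
+
+```python
+# 단어 수 기준으로 가장 긴 부정적 리뷰의 길이 확인
+longest = df.Negative_Review.str.split().str.len().max()
+print("Longest negative review:", longest, "words")
+```
+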
+불용어를 제거하는 작업도 빠른 작업이며, 515,000개의 행에서 2개의 리뷰 열에서 불용어를 제거하는 데 테스트 장치에서 3.3초가 걸렸습니다. 장치의 CPU 속도, RAM, SSD 여부 등 여러 요인에 따라 시간이 약간 더 걸리거나 덜 걸릴 수 있습니다. 작업이 상대적으로 짧으므로, 감정 분석 시간을 개선할 수 있다면 수행할 가치가 있습니다.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
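+
+# (illustrative addition, not in the original lesson) print how long the stop
+# word removal took, using the `start` timestamp captured above
+end = time.time()
+print("Removing stop words took " + str(round(end - start, 2)) + " seconds")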
+```
+
+### 감정 분석 수행
+
+이제 부정적 리뷰와 긍정적 리뷰 열에 대해 감정 분석을 계산하고 결과를 두 개의 새로운 열에 저장해야 합니다. 감정의 테스트는 같은 리뷰에 대해 리뷰어의 점수와 비교하는 것입니다. 예를 들어, 감정 분석이 부정적 리뷰에서 1 (매우 긍정적인 감정)을 나타내고 긍정적 리뷰에서 1을 나타내지만, 리뷰어가 호텔에 가능한 최저 점수를 준다면, 리뷰 텍스트가 점수와 일치하지 않거나 감정 분석기가 감정을 올바르게 인식하지 못했음을 나타냅니다. 일부 감정 점수가 완전히 잘못될 것으로 예상해야 하며, 종종 설명할 수 있습니다. 예를 들어, 리뷰가 매우 비꼬는 "물론 난 난방이 없는 방에서 자는 걸 정말 좋아했어요"와 같이 작성되었고, 감정 분석기가 이를 긍정적인 감정으로 인식할 수 있지만, 사람이 읽으면 비꼬는 것임을 알 수 있습니다.
+
+NLTK는 학습을 위한 다양한 감정 분석기를 제공하며, 이를 대체하여 감정이 더 정확한지 덜 정확한지 확인할 수 있습니다. 여기서는 VADER 감정 분석이 사용되었습니다.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
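+
+다음은 `calc_sentiment`가 기대대로 동작하는지 확인해 보는 간단한 예시입니다 (입력 문자열은 설명을 위해 임의로 만든 것입니다):
+
+```python
+print(calc_sentiment("No Negative"))                    # 0
+print(calc_sentiment("The room was dirty and noisy"))   # negative compound score expected
+print(calc_sentiment("Lovely staff, great location"))   # positive compound score expected
+```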
+
+프로그램에서 감정을 계산할 준비가 되었을 때, 다음과 같이 각 리뷰에 적용할 수 있습니다:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+이 작업은 제 컴퓨터에서 약 120초가 걸리지만, 각 컴퓨터마다 다를 수 있습니다. 결과를 출력하여 감정이 리뷰와 일치하는지 확인하려면:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+파일을 사용하기 전에 마지막으로 해야 할 일은 저장하는 것입니다! 또한 모든 새로운 열을 다시 정렬하여 작업하기 쉽게 만드세요 (사람에게는 미용적 변경입니다).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+전체 코드를 실행하여 [분석 노트북](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb)을 실행하세요 (필터링 노트북을 실행하여 Hotel_Reviews_Filtered.csv 파일을 생성한 후).
+
+복습을 위해, 단계는 다음과 같습니다:
+
+1. 원본 데이터셋 파일 **Hotel_Reviews.csv**는 이전 강의에서 [탐색 노트북](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)으로 탐색되었습니다.
+2. Hotel_Reviews.csv는 [필터링 노트북](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)에 의해 필터링되어 **Hotel_Reviews_Filtered.csv**가 생성되었습니다.
+3. Hotel_Reviews_Filtered.csv는 [감정 분석 노트북](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb)에 의해 처리되어 **Hotel_Reviews_NLP.csv**가 생성되었습니다.
+4. 아래의 NLP 챌린지에서 Hotel_Reviews_NLP.csv를 사용하세요.
+
+### 결론
+
+처음 시작할 때, 검증하거나 사용할 수 없는 열과 데이터가 포함된 데이터셋이 있었습니다. 데이터를 탐색하고, 필요 없는 것을 필터링하고, 태그를 유용한 것으로 변환하고, 자신의 평균을 계산하고, 감정 열을 추가하여 자연어 텍스트 처리에 대해 흥미로운 것을 배웠기를 바랍니다.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## 챌린지
+
+이제 데이터셋의 감정 분석이 완료되었으니, 이 커리큘럼에서 배운 전략(예를 들어 클러스터링)을 사용하여 감정 주위의 패턴을 결정할 수 있는지 확인하세요.
+
+## 복습 및 자습
+
+[이 Learn 모듈](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott)을 통해 더 많은 것을 배우고, 텍스트에서 감정을 탐색하는 데 다양한 도구를 사용해 보세요.
+
+## 과제
+
+[다른 데이터셋 시도해보기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/ko/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..f70ecea13
--- /dev/null
+++ b/translations/ko/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# 다른 데이터셋 시도해보기
+
+## 지침
+
+NLTK를 사용하여 텍스트에 감정을 할당하는 방법을 배웠으니, 이제 다른 데이터셋을 시도해보세요. 데이터를 처리하는 과정이 필요할 수 있으니, 노트북을 만들어서 생각 과정을 문서화하세요. 무엇을 발견했나요?
+
+## 평가 기준
+
+| 기준 | 모범 사례 | 적절한 사례 | 개선이 필요함 |
+| --------- | ------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | ---------------------- |
+| | 감정이 어떻게 할당되는지 설명하는 잘 문서화된 셀을 포함한 완전한 노트북과 데이터셋이 제시됨 | 노트북에 좋은 설명이 부족함 | 노트북에 결함이 있음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..1f08a2663
--- /dev/null
+++ b/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서의 원어를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..46c400fd7
--- /dev/null
+++ b/translations/ko/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서를 해당 언어로 작성된 상태로 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/README.md b/translations/ko/6-NLP/README.md
new file mode 100644
index 000000000..3e24e2cff
--- /dev/null
+++ b/translations/ko/6-NLP/README.md
@@ -0,0 +1,27 @@
+# 자연어 처리 시작하기
+
+자연어 처리(NLP)는 컴퓨터 프로그램이 인간 언어를 이해하는 능력을 의미합니다. 이는 인공지능(AI)의 한 구성 요소로, 50년 이상의 역사를 가지고 있으며 언어학 분야에 뿌리를 두고 있습니다. 이 전체 분야는 기계가 인간 언어를 이해하고 처리하도록 돕는 데 중점을 둡니다. 이를 통해 맞춤법 검사나 기계 번역과 같은 작업을 수행할 수 있습니다. 자연어 처리는 의료 연구, 검색 엔진, 비즈니스 인텔리전스 등 다양한 분야에서 실질적인 응용 프로그램을 가지고 있습니다.
+
+## 지역 주제: 유럽 언어와 문학 및 유럽의 로맨틱 호텔 ❤️
+
+이 커리큘럼 섹션에서는 기계 학습의 가장 널리 사용되는 사례 중 하나인 자연어 처리(NLP)에 대해 소개합니다. 컴퓨터 언어학에서 파생된 이 인공지능 카테고리는 음성이나 텍스트 커뮤니케이션을 통해 인간과 기계를 연결하는 다리 역할을 합니다.
+
+이 강의에서는 작은 대화형 봇을 만들어 NLP의 기본을 배우고, 기계 학습이 이러한 대화를 점점 더 '스마트'하게 만드는 데 어떻게 도움을 주는지 알아봅니다. 1813년에 출판된 제인 오스틴의 고전 소설 **오만과 편견**의 엘리자베스 베넷과 미스터 다아시와 대화를 나누며 시간을 거슬러 올라갑니다. 그런 다음, 유럽의 호텔 리뷰를 통해 감정 분석에 대해 배우며 지식을 확장합니다.
+
+
+> 사진 제공: Elaine Howlin on Unsplash
+
+## 강의 목록
+
+1. [자연어 처리 소개](1-Introduction-to-NLP/README.md)
+2. [일반적인 NLP 작업 및 기술](2-Tasks/README.md)
+3. [기계 학습을 통한 번역 및 감정 분석](3-Translation-Sentiment/README.md)
+4. [데이터 준비하기](4-Hotel-Reviews-1/README.md)
+5. [감정 분석을 위한 NLTK](5-Hotel-Reviews-2/README.md)
+
+## 크레딧
+
+이 자연어 처리 강의는 [Stephen Howell](https://twitter.com/Howell_MSFT)이 ☕와 함께 작성했습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서는 해당 언어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/6-NLP/data/README.md b/translations/ko/6-NLP/data/README.md
new file mode 100644
index 000000000..30460928a
--- /dev/null
+++ b/translations/ko/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+호텔 리뷰 데이터를 이 폴더에 다운로드하세요.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/1-Introduction/README.md b/translations/ko/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..312851df3
--- /dev/null
+++ b/translations/ko/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# 시계열 예측 소개
+
+
+
+> 스케치노트 by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+이번 강의와 다음 강의에서는 시계열 예측에 대해 배우게 됩니다. 시계열 예측은 가격과 같은 변수의 과거 성과를 기반으로 미래의 잠재적 가치를 예측할 수 있게 해 주는, ML 과학자의 레퍼토리 중에서도 흥미롭고 가치 있는 도구입니다.
+
+[](https://youtu.be/cBojo1hsHiI "시계열 예측 소개")
+
+> 🎥 위 이미지를 클릭하면 시계열 예측에 관한 비디오를 볼 수 있습니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+시계열 예측은 가격 책정, 재고 관리, 공급망 문제 등 실제 비즈니스 문제에 직접 적용할 수 있어 유용하고 흥미로운 분야입니다. 딥러닝 기법이 더 나은 예측 성능을 얻기 위해 사용되기 시작했지만, 시계열 예측은 여전히 고전적인 ML 기법에 의해 크게 영향을 받습니다.
+
+> Penn State의 유용한 시계열 커리큘럼은 [여기](https://online.stat.psu.edu/stat510/lesson/1)에서 찾을 수 있습니다.
+
+## 소개
+
+스마트 주차 미터기를 관리하면서 이들이 얼마나 자주 사용되고 얼마나 오랫동안 사용되는지에 대한 데이터를 제공한다고 가정해 봅시다.
+
+> 만약 미터기의 과거 성과를 기반으로 공급과 수요의 법칙에 따라 미래 가치를 예측할 수 있다면 어떨까요?
+
+목표를 달성하기 위해 언제 행동해야 할지를 정확하게 예측하는 것은 시계열 예측으로 해결할 수 있는 도전 과제입니다. 사람들이 주차 자리를 찾을 때 바쁜 시간에 더 많은 요금을 부과하는 것은 불쾌할 수 있지만, 이는 도로를 청소하기 위한 수익을 창출하는 확실한 방법이 될 것입니다!
+
+시계열 알고리즘의 종류를 탐구하고 데이터를 정리하고 준비하는 노트북을 시작해 봅시다. 분석할 데이터는 GEFCom2014 예측 대회에서 가져온 것입니다. 2012년부터 2014년까지 3년간의 시간별 전기 부하 및 온도 값을 포함하고 있습니다. 전기 부하 및 온도의 역사적 패턴을 바탕으로 미래의 전기 부하 값을 예측할 수 있습니다.
+
+이 예제에서는 역사적 부하 데이터만을 사용하여 한 단계 앞을 예측하는 방법을 배웁니다. 시작하기 전에, 무슨 일이 일어나고 있는지 이해하는 것이 유용합니다.
+
+## 몇 가지 정의
+
+'시계열'이라는 용어를 접할 때 여러 다른 맥락에서 사용되는 것을 이해해야 합니다.
+
+🎓 **시계열**
+
+수학에서 "시계열은 시간 순서로 색인화(또는 나열 또는 그래프화)된 데이터 포인트의 시리즈입니다. 가장 일반적으로, 시계열은 시간의 연속적으로 동일한 간격에서 취한 일련의 데이터입니다." 시계열의 예로는 [다우 존스 산업 평균](https://wikipedia.org/wiki/Time_series)의 일일 종가가 있습니다. 시계열 플롯 및 통계 모델링의 사용은 신호 처리, 날씨 예측, 지진 예측 등 이벤트가 발생하고 데이터 포인트가 시간에 따라 플로팅될 수 있는 분야에서 자주 접할 수 있습니다.
+
+🎓 **시계열 분석**
+
+시계열 분석은 위에서 언급한 시계열 데이터를 분석하는 것입니다. 시계열 데이터는 '중단된 시계열'과 같은 다양한 형태를 취할 수 있으며, 이는 중단 이벤트 전후의 시계열 진화를 감지합니다. 시계열에 필요한 분석 유형은 데이터의 특성에 따라 다릅니다. 시계열 데이터 자체는 숫자나 문자 시리즈 형태로 나타날 수 있습니다.
+
+수행할 분석은 주파수 도메인 및 시간 도메인, 선형 및 비선형 등을 포함한 다양한 방법을 사용합니다. 이러한 유형의 데이터를 분석하는 여러 방법에 대해 [자세히 알아보기](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm).
+
+🎓 **시계열 예측**
+
+시계열 예측은 과거에 발생한 데이터를 기반으로 미래 값을 예측하기 위해 모델을 사용하는 것입니다. 회귀 모델을 사용하여 시계열 데이터를 탐색할 수 있지만, 시간 지수를 x 변수로 플로팅하는 경우 이러한 데이터는 특별한 유형의 모델을 사용하여 분석하는 것이 가장 좋습니다.
+
+시계열 데이터는 선형 회귀로 분석할 수 있는 데이터와 달리 순서가 있는 관찰 목록입니다. 가장 일반적인 것은 "자기회귀 이동평균 통합"을 의미하는 ARIMA입니다.
+
+[ARIMA 모델](https://online.stat.psu.edu/stat510/lesson/1/1.1)은 "현재 시리즈 값을 과거 값 및 과거 예측 오류와 관련짓습니다." 이러한 모델은 시간이 지남에 따라 데이터가 순서대로 나열되는 시간 도메인 데이터를 분석하는 데 가장 적합합니다.
+
+> 여러 유형의 ARIMA 모델이 있으며, 이에 대해 [여기](https://people.duke.edu/~rnau/411arim.htm)에서 배울 수 있으며, 다음 강의에서 다룰 것입니다.
+
+다음 강의에서는 [단변량 시계열](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm)을 사용하여 ARIMA 모델을 구축할 것입니다. 단변량 시계열은 시간이 지남에 따라 값이 변하는 하나의 변수에 중점을 둡니다. 이 유형의 데이터의 예로는 Mauna Loa Observatory에서 월별 CO2 농도를 기록한 [이 데이터셋](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm)이 있습니다:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ 이 데이터셋에서 시간이 지남에 따라 변하는 변수를 식별하십시오.
+
+## 고려해야 할 시계열 데이터 특성
+
+시계열 데이터를 살펴볼 때, 이를 더 잘 이해하기 위해 고려해야 할 [특정 특성](https://online.stat.psu.edu/stat510/lesson/1/1.1)이 있음을 알 수 있습니다. 시계열 데이터를 분석하고자 하는 '신호'로 간주하면, 이러한 특성은 '노이즈'로 생각할 수 있습니다. 통계 기법을 사용하여 이러한 특성 중 일부를 상쇄하여 '노이즈'를 줄이는 것이 종종 필요합니다.
+
+시계열을 작업할 때 알아야 할 몇 가지 개념은 다음과 같습니다:
+
+🎓 **추세**
+
+추세는 시간에 따라 측정 가능한 증가 및 감소로 정의됩니다. [더 읽어보기](https://machinelearningmastery.com/time-series-trends-in-python). 시계열의 맥락에서 추세를 사용하고 필요하다면 추세를 제거하는 방법에 관한 것입니다.
+
+🎓 **[계절성](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+계절성은 예를 들어, 판매에 영향을 미칠 수 있는 휴가 시즌과 같은 주기적인 변동으로 정의됩니다. 데이터에서 계절성을 표시하는 다양한 유형의 플롯을 [살펴보십시오](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm).
+
+🎓 **이상치**
+
+이상치는 표준 데이터 분산에서 멀리 떨어져 있습니다.
+
+🎓 **장기 주기**
+
+계절성과는 독립적으로, 데이터는 1년 이상 지속되는 경제 침체와 같은 장기 주기를 표시할 수 있습니다.
+
+🎓 **일정한 분산**
+
+시간이 지남에 따라 일부 데이터는 낮과 밤의 에너지 사용량과 같은 일정한 변동을 표시합니다.
+
+🎓 **급격한 변화**
+
+데이터는 추가 분석이 필요한 급격한 변화를 표시할 수 있습니다. 예를 들어, COVID로 인한 사업체의 갑작스러운 폐쇄는 데이터에 변화를 일으켰습니다.
+
+✅ [샘플 시계열 플롯](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python)을 확인하여 몇 년 동안의 일일 인게임 통화 사용량을 보여줍니다. 이 데이터에서 위에 나열된 특성 중 일부를 식별할 수 있습니까?
+
+
+
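+이러한 특성 중 추세와 계절성은 시계열을 구성 요소로 분해해 보면 눈으로 확인하기 쉽습니다. 다음은 `statsmodels`의 `seasonal_decompose`를 사용한 간단한 스케치입니다 (강의 노트북에 포함된 코드는 아니며, 데이터는 설명을 위해 임의로 만든 것입니다):
+
+```python
+import numpy as np
+import pandas as pd
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# 24시간 주기의 계절성과 완만한 추세를 가진 가상의 시간별 시계열
+idx = pd.date_range("2014-01-01", periods=24 * 14, freq="H")
+values = np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + np.arange(len(idx)) * 0.01
+series = pd.Series(values, index=idx)
+
+# 추세(trend), 계절성(seasonal), 잔차(resid)로 분해하여 플로팅
+result = seasonal_decompose(series, model="additive", period=24)
+result.plot()
+```
+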
+## 연습 - 전력 사용량 데이터로 시작하기
+
+과거 사용량을 기반으로 미래 전력 사용량을 예측하는 시계열 모델을 만들어 봅시다.
+
+> 이 예제의 데이터는 GEFCom2014 예측 대회에서 가져온 것입니다. 2012년부터 2014년까지 3년간의 시간별 전기 부하 및 온도 값을 포함하고 있습니다.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli 및 Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. 이 강의의 `working` 폴더에서 _notebook.ipynb_ 파일을 엽니다. 데이터를 로드하고 시각화하는 데 도움이 될 라이브러리를 추가하는 것부터 시작하십시오.
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+    Note that you are also using files from the included `common` folder, which sets up your environment and handles downloading the data.
+
+2. Next, examine the data as a dataframe by calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ 날짜와 부하를 나타내는 두 개의 열이 있음을 알 수 있습니다:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. 이제 `plot()`을 호출하여 데이터를 플로팅합니다:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. 이제 `energy`에 `[from date]: [to date]` 패턴으로 기간을 지정하여 2014년 7월 첫 주를 플로팅합니다:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 멋진 플롯입니다! 이 플롯을 살펴보고 위에 나열된 특성 중 일부를 식별할 수 있는지 확인하십시오. 데이터를 시각화하여 무엇을 추측할 수 있을까요?
+
+다음 강의에서는 ARIMA 모델을 만들어 예측을 수행할 것입니다.
+
+---
+
+## 🚀도전 과제
+
+시계열 예측이 도움이 될 수 있는 모든 산업 및 연구 분야의 목록을 작성해 보세요. 예술, 경제학, 생태학, 소매업, 산업, 금융 등에서 이러한 기법의 응용을 생각할 수 있습니까? 어디에서 사용할 수 있을까요?
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## 복습 및 자기 학습
+
+여기서는 다루지 않겠지만, 신경망은 때때로 시계열 예측의 고전적인 방법을 강화하는 데 사용됩니다. [이 기사](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)에서 자세히 읽어보세요.
+
+## 과제
+
+[더 많은 시계열 시각화](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/1-Introduction/assignment.md b/translations/ko/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..bd685be77
--- /dev/null
+++ b/translations/ko/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# 몇 가지 추가 시계열 데이터 시각화
+
+## 지침
+
+특수한 모델링이 필요한 데이터 유형을 살펴보며 시계열 예측에 대해 배우기 시작했습니다. 에너지 관련 데이터를 시각화했습니다. 이제 시계열 예측에 도움이 될 만한 다른 데이터를 찾아보세요. 세 가지 예제를 찾아 ([Kaggle](https://kaggle.com)과 [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott) 참고) 노트북을 만들어 시각화하세요. 노트북에 그들이 가진 특별한 특성(계절성, 급격한 변화 또는 기타 트렌드)을 기록하세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절한 | 개선 필요 |
+| -------- | --------------------------------------------------- | ------------------------------------------------- | --------------------------------------------------------------------------------- |
+| | 세 개의 데이터셋이 노트북에 시각화되고 설명됨 | 두 개의 데이터셋이 노트북에 시각화되고 설명됨 | 몇 개의 데이터셋만 시각화되거나 설명되었거나 제시된 데이터가 불충분함 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/ko/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..1c06e776b
--- /dev/null
+++ b/translations/ko/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/ko/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..bd3a168bc
--- /dev/null
+++ b/translations/ko/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/2-ARIMA/README.md b/translations/ko/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..fbce297e4
--- /dev/null
+++ b/translations/ko/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,396 @@
+# ARIMA를 사용한 시계열 예측
+
+이전 강의에서는 시계열 예측에 대해 조금 배웠고, 일정 기간 동안 전력 부하의 변동을 보여주는 데이터를 로드했습니다.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 위 이미지를 클릭하면 ARIMA 모델에 대한 간략한 소개 영상을 볼 수 있습니다. 예제는 R로 작성되었지만, 개념은 보편적입니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## 소개
+
+이 강의에서는 [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average) 모델을 구축하는 특정 방법을 배웁니다. ARIMA 모델은 [비정상성](https://wikipedia.org/wiki/Stationary_process)을 보이는 데이터에 모델을 맞추기에 특히 적합합니다.
+
+## 일반 개념
+
+ARIMA를 사용하기 위해 알아야 할 몇 가지 개념이 있습니다:
+
+- 🎓 **정상성**. 통계적 맥락에서 정상성은 시간이 지나도 분포가 변하지 않는 데이터를 의미합니다. 비정상 데이터는 분석하기 위해 변환이 필요합니다. 예를 들어 계절성은 데이터에 변동을 일으킬 수 있으며, '계절 차분' 과정을 통해 제거할 수 있습니다.
+
+- 🎓 **[차분](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. 차분은 비정상 데이터를 정상 데이터로 변환하는 과정을 의미합니다. "차분은 시계열의 수준 변화를 제거하여 추세와 계절성을 제거하고 시계열의 평균을 안정화시킵니다." [Shixiong et al의 논문](https://arxiv.org/abs/1904.07632) (아래의 간단한 예시를 참조하세요.)
+
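+차분이 실제로 어떻게 동작하는지 보여주는 아주 간단한 예시입니다 (값은 설명을 위해 임의로 만든 장난감 데이터입니다):
+
+```python
+import pandas as pd
+
+s = pd.Series([3, 5, 9, 15, 23])   # 위로 증가하는(비정상) 시계열
+print(s.diff())                    # 1차 차분: NaN, 2, 4, 6, 8 (추세가 줄어듦)
+print(s.diff().diff())             # 2차 차분: NaN, NaN, 2, 2, 2 (상수가 되어 정상성에 가까워짐)
+```
+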
+## 시계열의 맥락에서 ARIMA
+
+ARIMA의 구성 요소를 분석하여 시계열 모델을 구축하고 예측하는 데 어떻게 도움이 되는지 알아보겠습니다.
+
+- **AR - 자기회귀**. 자기회귀 모델은 이름에서 알 수 있듯이 데이터를 '뒤돌아보며' 이전 값을 분석하고 가정을 합니다. 이러한 이전 값을 '시차'라고 합니다. 예를 들어 월별 연필 판매 데이터를 생각해보세요. 각 월의 판매 총액은 데이터셋에서 '진화 변수'로 간주됩니다. 이 모델은 "관심 있는 진화 변수가 자신의 시차(즉, 이전) 값에 대해 회귀되는" 방식으로 구축됩니다. [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - 통합**. 'ARMA' 모델과 달리 ARIMA의 'I'는 *[통합](https://wikipedia.org/wiki/Order_of_integration)* 측면을 의미합니다. 데이터를 비정상성을 제거하기 위해 차분 단계를 적용하여 '통합'합니다.
+
+- **MA - 이동 평균**. [이동 평균](https://wikipedia.org/wiki/Moving-average_model) 모델의 측면은 현재와 과거 시차 값을 관찰하여 결정되는 출력 변수를 의미합니다.
+
+결론: ARIMA는 시계열 데이터를 가능한 한 밀접하게 적합하게 만드는 모델을 구축하는 데 사용됩니다.
+
+## 실습 - ARIMA 모델 구축
+
+이 강의의 [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) 폴더를 열고 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) 파일을 찾으세요.
+
+1. 노트북을 실행하여 ARIMA 모델에 필요한 `statsmodels` Python 라이브러리를 로드하세요.
+
+1. 필요한 라이브러리를 로드하세요.
+
+1. 이제 데이터를 시각화하는 데 유용한 몇 가지 추가 라이브러리를 로드하세요:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. `/data/energy.csv` 파일에서 데이터를 Pandas 데이터프레임으로 로드하고 확인하세요:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. 2012년 1월부터 2014년 12월까지의 모든 에너지 데이터를 시각화하세요. 이전 강의에서 본 데이터이므로 놀라운 점은 없을 것입니다:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 이제 모델을 구축해봅시다!
+
+### 학습 및 테스트 데이터셋 생성
+
+이제 데이터를 로드했으므로 학습 세트와 테스트 세트로 분리할 수 있습니다. 학습 세트로 모델을 학습시킬 것입니다. 모델 학습이 완료되면 테스트 세트를 사용하여 정확도를 평가합니다. 모델이 미래 기간의 정보를 얻지 않도록 학습 세트와 테스트 세트가 시간적으로 구분되도록 해야 합니다.
+
+1. 아래 코드의 날짜에 맞추어, 2014년 11월 1일부터 12월 29일까지의 약 2개월 기간을 학습 세트로 할당합니다. 테스트 세트는 2014년 12월 30일부터 12월 31일까지의 기간을 포함합니다:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ 이 데이터는 일일 에너지 소비를 반영하므로 강한 계절 패턴이 있지만, 소비는 최근 일과 가장 유사합니다.
+
+1. 차이를 시각화하세요:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 따라서 데이터를 학습하는 데 상대적으로 작은 시간 창을 사용하는 것이 충분해야 합니다.
+
+ > Note: ARIMA 모델을 맞추기 위해 사용하는 함수는 맞추는 동안 샘플 내 검증을 사용하므로 검증 데이터를 생략합니다.
+
+### 학습을 위한 데이터 준비
+
+이제 데이터를 필터링하고 스케일링하여 학습을 위한 데이터를 준비해야 합니다. 필요한 기간과 열만 포함하도록 원본 데이터셋을 필터링하고 데이터를 0과 1 사이의 범위로 투영하도록 스케일링합니다.
+
+1. 위에서 언급한 기간별로 데이터셋을 필터링하고 필요한 'load' 열과 날짜만 포함하도록 필터링하세요:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ 데이터의 형태를 확인할 수 있습니다:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. 데이터를 (0, 1) 범위로 스케일링하세요.
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. 원본 데이터와 스케일링된 데이터를 시각화하세요:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > 원본 데이터
+
+ 
+
+ > 스케일링된 데이터
+
+1. 이제 스케일링된 데이터를 조정했으므로 테스트 데이터를 스케일링할 수 있습니다:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### ARIMA 구현
+
+이제 ARIMA를 구현할 시간입니다! 이전에 설치한 `statsmodels` 라이브러리를 사용할 것입니다.
+
+몇 가지 단계를 따라야 합니다:
+
+   1. Define the model by calling `SARIMAX()` and passing in the model parameters: p, d, and q parameters, and P, D, and Q parameters.
+   2. Prepare the model for the training data by calling the fit() function.
+   3. Make predictions calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+`p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+`d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+`q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect - which this one does - we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q`, which describe the same associations as `p`, `d`, and `q` but correspond to the seasonal components of the model.
+
+1. 선호하는 horizon 값을 설정하세요. 3시간을 시도해봅시다:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+    Selecting the best values for an ARIMA model's parameters can be subjective and time intensive. You might consider using the `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html).
+
+1. 지금은 좋은 모델을 찾기 위해 몇 가지 수동 선택을 시도해봅시다.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ 결과 테이블이 출력됩니다.
+
+첫 번째 모델을 구축했습니다! 이제 이를 평가할 방법을 찾아야 합니다.
+
+### 모델 평가
+
+모델을 평가하기 위해 이른바 `walk forward` 검증을 수행할 수 있습니다. 실제로 시계열 모델은 새로운 데이터가 제공될 때마다 다시 학습됩니다. 이를 통해 모델은 각 시간 단계에서 최상의 예측을 할 수 있습니다.
+
+이 기술을 사용하여 시계열의 시작부터 학습 데이터 세트로 모델을 학습합니다. 그런 다음 다음 시간 단계에 대한 예측을 수행합니다. 예측은 알려진 값과 비교됩니다. 그런 다음 학습 세트는 알려진 값을 포함하도록 확장되고 과정이 반복됩니다.
+
+> Note: 더 효율적인 학습을 위해 학습 세트 창을 고정하는 것이 좋습니다. 매번 학습 세트에 새로운 관찰값을 추가할 때, 세트의 시작 부분에서 관찰값을 제거합니다.
+
+이 과정은 모델이 실제로 어떻게 성능을 발휘할지에 대한 더 견고한 추정을 제공합니다. 그러나 많은 모델을 생성하는 계산 비용이 발생합니다. 데이터가 작거나 모델이 간단하면 허용되지만, 규모가 커지면 문제가 될 수 있습니다.
+
+Walk-forward 검증은 시계열 모델 평가의 황금 표준이며, 자신의 프로젝트에 권장됩니다.
+
+1. 먼저 각 HORIZON 단계에 대한 테스트 데이터 포인트를 생성합니다.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ | | | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ 데이터는 horizon 포인트에 따라 수평으로 이동됩니다.
+
+1. 테스트 데이터의 크기만큼의 루프에서 이 슬라이딩 윈도우 접근법을 사용하여 예측을 수행합니다:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ 학습 과정을 지켜볼 수 있습니다:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. 실제 부하와 예측을 비교합니다:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ 출력
+ | | | timestamp | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+ 시간별 데이터의 예측을 실제 부하와 비교해보세요. 얼마나 정확한가요?
+
+### 모델 정확도 확인
+
+모든 예측에 대해 평균 절대 백분율 오차(MAPE)를 테스트하여 모델의 정확도를 확인하세요.
+
+> **🧮 수학 보여줘**
+>
+> $$\text{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right| \times 100\%$$
+> (여기서 $A_t$는 실제값, $F_t$는 예측값, $n$은 예측한 시점의 수입니다)
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/)는 위의 공식으로 정의된 비율로 예측 정확도를 보여줍니다. 실제t와 예측t의 차이는 실제t로 나뉩니다. "이 계산의 절대값은 시간의 모든 예측 지점에 대해 합산되고 맞춘 지점 수 n으로 나뉩니다." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
+
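+노트북이 `common.utils`에서 가져오는 `mape()` 헬퍼는 대략 다음과 같은 계산을 수행한다고 가정할 수 있습니다 (실제 구현을 단순화한 스케치이며, 0~1 사이의 비율을 반환한다고 가정하므로 아래 단계에서 100을 곱해 퍼센트로 표시합니다):
+
+```python
+import numpy as np
+
+def mape_sketch(predictions, actuals):
+    # mean(|actual - predicted| / actual), returned as a fraction
+    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
+    return np.mean(np.abs((actuals - predictions) / actuals))
+```
+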
+1. 코드를 통해 공식을 표현하세요:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. 한 단계의 MAPE를 계산하세요:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+ 한 단계 예측 MAPE: 0.5570581332313952 %
+
+1. 다중 단계 예측 MAPE를 출력하세요:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+ 낮은 숫자가 좋습니다: 예측이 10의 MAPE를 가지면 10%만큼 벗어난 것입니다.
+
+1. 그러나 항상 그렇듯이, 이러한 정확도 측정을 시각적으로 보는 것이 더 쉽습니다. 따라서 이를 시각화해봅시다:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 매우 좋은 정확도를 보여주는 멋진 플롯입니다. 잘 했습니다!
+
+---
+
+## 🚀도전
+
+시계열 모델의 정확도를 테스트하는 방법을 조사해보세요. 이 강의에서는 MAPE를 다루지만, 사용할 수 있는 다른 방법이 있을까요? 이를 연구하고 주석을 달아보세요. [여기](https://otexts.com/fpp2/accuracy.html)에서 도움이 되는 문서를 찾을 수 있습니다.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## 복습 및 자습
+
+이 강의에서는 ARIMA를 사용한 시계열 예측의 기본 사항만 다룹니다. [이 저장소](https://microsoft.github.io/forecasting/)와 다양한 모델 유형을 탐색하여 시계열 모델을 구축하는 다른 방법을 배우는 데 시간을 할애하세요.
+
+## 과제
+
+[새로운 ARIMA 모델](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/2-ARIMA/assignment.md b/translations/ko/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..c16ec76ac
--- /dev/null
+++ b/translations/ko/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# 새로운 ARIMA 모델
+
+## 지침
+
+이제 ARIMA 모델을 구축했으니, 새로운 데이터로 새로운 모델을 만들어 보세요 (예를 들어 [Duke의 데이터셋](http://www2.stat.duke.edu/~mw/ts_data_sets.html) 중 하나를 사용해 보세요). 노트북에 작업을 주석으로 달고, 데이터를 시각화하고, 모델을 시각화하며, MAPE를 사용하여 정확도를 테스트하세요.
+
+## 평가 기준
+
+| 기준 | 뛰어남 | 적절함 | 개선 필요 |
+| -------- | ------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- | ----------------------------------- |
+| | 노트북이 새로운 ARIMA 모델을 구축, 테스트 및 시각화와 정확도를 명시하여 설명합니다. | 노트북이 주석이 없거나 버그가 포함되어 있습니다. | 불완전한 노트북이 제출되었습니다. |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/ko/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..195b207eb
--- /dev/null
+++ b/translations/ko/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/ko/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..7524bcc88
--- /dev/null
+++ b/translations/ko/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/3-SVR/README.md b/translations/ko/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..c47c704f4
--- /dev/null
+++ b/translations/ko/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,382 @@
+# 서포트 벡터 회귀 모델을 사용한 시계열 예측
+
+이전 강의에서 ARIMA 모델을 사용하여 시계열 예측을 하는 방법을 배웠습니다. 이번에는 연속 데이터를 예측하는 데 사용되는 회귀 모델인 서포트 벡터 회귀 모델을 살펴보겠습니다.
+
+## [사전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## 소개
+
+이번 강의에서는 회귀를 위한 [**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine), 즉 **SVR: Support Vector Regressor** 모델을 구축하는 방법을 알아보겠습니다.
+
+### 시계열에서의 SVR [^1]
+
+시계열 예측에서 SVR의 중요성을 이해하기 전에 알아야 할 몇 가지 중요한 개념이 있습니다:
+
+- **회귀:** 주어진 입력 집합에서 연속 값을 예측하는 지도 학습 기술입니다. 아이디어는 피처 공간에서 최대한 많은 데이터 포인트를 포함하는 곡선(또는 선)을 맞추는 것입니다. [여기를 클릭](https://en.wikipedia.org/wiki/Regression_analysis)하여 더 많은 정보를 확인하세요.
+- **서포트 벡터 머신 (SVM):** 분류, 회귀 및 이상치 감지에 사용되는 지도 학습 모델의 한 유형입니다. 모델은 피처 공간의 초평면이며, 분류의 경우 경계로 작용하고 회귀의 경우 최적의 선으로 작용합니다. SVM에서는 일반적으로 커널 함수를 사용하여 데이터셋을 더 높은 차원의 공간으로 변환하여 쉽게 분리할 수 있게 합니다. [여기를 클릭](https://en.wikipedia.org/wiki/Support-vector_machine)하여 SVM에 대한 더 많은 정보를 확인하세요.
+- **서포트 벡터 회귀 (SVR):** SVM의 한 유형으로, 최대한 많은 데이터 포인트를 포함하는 최적의 선(이 경우 SVM의 초평면)을 찾습니다.
+
+### 왜 SVR인가요? [^1]
+
+지난 강의에서 시계열 데이터를 예측하는 데 매우 성공적인 통계적 선형 방법인 ARIMA에 대해 배웠습니다. 그러나 많은 경우 시계열 데이터에는 선형 모델로 매핑할 수 없는 *비선형성*이 포함되어 있습니다. 이러한 경우, 회귀 작업에서 데이터의 비선형성을 고려할 수 있는 SVM의 능력이 시계열 예측에서 SVR을 성공적으로 만듭니다.
+
+## 실습 - SVR 모델 구축
+
+데이터 준비를 위한 첫 번째 몇 단계는 [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA) 강의와 동일합니다.
+
+이번 강의의 [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) 폴더를 열고 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) 파일을 찾으세요.[^2]
+
+1. 노트북을 실행하고 필요한 라이브러리를 가져옵니다: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. `/data/energy.csv` 파일에서 데이터를 Pandas 데이터프레임으로 로드하고 확인합니다: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. 2012년 1월부터 2014년 12월까지의 모든 에너지 데이터를 플로팅합니다: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 이제, SVR 모델을 구축해봅시다.
+
+### 학습 및 테스트 데이터셋 생성
+
+이제 데이터를 로드했으므로 학습 세트와 테스트 세트로 분리할 수 있습니다. 그런 다음 SVR에 필요한 시계열 기반 데이터셋을 만들기 위해 데이터를 재구성합니다. 모델을 학습 세트에 대해 학습시키고, 학습이 끝나면 학습 세트, 테스트 세트 및 전체 데이터셋에 대한 정확도를 평가하여 전체 성능을 확인합니다. 모델이 미래 시간대의 정보를 미리 얻지 못하도록(이런 상황을 *과적합*이라고 합니다) 테스트 세트가 학습 세트보다 이후의 기간을 포함하도록 해야 합니다[^2].
+
+1. 2014년 11월 1일부터 12월 30일까지의 두 달 기간을 학습 세트에 할당합니다. 테스트 세트는 그 이후 이틀(2014년 12월 30일부터 12월 31일까지)을 포함하며, 이는 아래 코드의 두 날짜 변수와 일치합니다: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. 차이점을 시각화합니다: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### 학습을 위한 데이터 준비
+
+이제 데이터를 필터링하고 스케일링하여 학습을 위한 데이터를 준비해야 합니다. 데이터셋이 필요한 기간과 열만 포함하도록 필터링하고, 값이 (0, 1) 구간에 들어오도록 스케일링합니다.
+
+1. 원본 데이터셋을 앞서 언급한 기간별로 필터링하고 필요한 열 'load'와 날짜만 포함합니다: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. 학습 데이터를 (0, 1) 범위로 스케일링합니다: [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. 이제 테스트 데이터를 스케일링합니다: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### 시계열 기반 데이터 생성 [^1]
+
+SVR을 위해 입력 데이터를 `[batch, timesteps]` 형태로 변환해야 합니다. 따라서 기존의 `train_data`와 `test_data`를 재구성하여 시계열을 나타내는 새로운 차원을 추가합니다.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+이 예제에서는 `timesteps = 5`를 사용합니다. 따라서 모델의 입력은 처음 4개의 시계열 데이터이며, 출력은 5번째 시계열 데이터가 됩니다.
+
+```python
+timesteps=5
+```
+
+중첩된 리스트 내포를 사용하여 학습 데이터를 2D 텐서로 변환합니다:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
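+
+참고로, 같은 슬라이딩 윈도우 변환을 조금 더 읽기 쉽게 수행할 수도 있습니다. NumPy 1.20 이상을 가정한 최소한의 스케치이며, 위의 리스트 내포와 동일한 결과를 만듭니다:
+
+```python
+from numpy.lib.stride_tricks import sliding_window_view
+
+# 1차원 시계열에서 (len-timesteps+1, timesteps) 모양의 윈도우 뷰를 생성합니다
+windows = sliding_window_view(train_data[:, 0], timesteps)
+print(windows.shape)                           # (1412, 5)
+assert (windows == train_data_timesteps).all()
+```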
+
+테스트 데이터를 2D 텐서로 변환합니다:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+학습 및 테스트 데이터에서 입력과 출력을 선택합니다:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### SVR 구현 [^1]
+
+이제 SVR을 구현할 시간입니다. 이 구현에 대해 더 알고 싶다면 [이 문서](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)를 참조하세요. 우리의 구현에서는 다음 단계를 따릅니다:
+
+ 1. `SVR()`을 호출하고 모델 하이퍼파라미터 kernel, gamma, C, epsilon을 전달하여 모델을 정의합니다
+ 2. `fit()` 함수를 호출하여 학습 데이터에 모델을 적합시킵니다
+ 3. `predict()` 함수를 호출하여 예측을 수행합니다
+
+이제 SVR 모델을 만듭니다. 여기서는 [RBF 커널](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel)을 사용하고, 하이퍼파라미터 gamma, C 및 epsilon을 각각 0.5, 10 및 0.05로 설정합니다.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
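+
+여기서 쓴 값들은 고정된 예시입니다. scikit-learn 0.24 이상을 가정하면, 시계열 순서를 보존하는 교차 검증으로 하이퍼파라미터를 탐색할 수도 있습니다. 아래는 최소한의 스케치이며, 탐색 범위는 예시용 가정입니다:
+
+```python
+from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
+
+# 가정: 예시용 탐색 범위 — 최적값은 데이터에 따라 다릅니다
+param_grid = {
+    'gamma': [0.1, 0.5, 1.0],
+    'C': [1, 10, 100],
+    'epsilon': [0.01, 0.05, 0.1],
+}
+search = GridSearchCV(
+    SVR(kernel='rbf'),
+    param_grid,
+    cv=TimeSeriesSplit(n_splits=3),            # 시간 순서를 보존하는 분할
+    scoring='neg_mean_absolute_percentage_error',
+)
+search.fit(x_train, y_train[:, 0])
+print(search.best_params_)
+```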
+
+#### 학습 데이터에 모델 적합 [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### 모델 예측 수행 [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+SVR을 구축했습니다! 이제 이를 평가해야 합니다.
+
+### 모델 평가 [^1]
+
+평가를 위해 먼저 데이터를 원래 스케일로 되돌립니다. 그런 다음 성능을 확인하기 위해 원본 및 예측된 시계열 플롯을 그리고, MAPE 결과도 출력합니다.
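+
+여기서 사용하는 `mape`는 노트북 앞부분에서 `common.utils`로부터 가져온 함수입니다. 참고로 MAPE(평균 절대 백분율 오차)가 대략 다음과 같이 계산된다고 가정한 최소한의 스케치입니다:
+
+```python
+import numpy as np
+
+def mape_sketch(predictions, actuals):
+    # MAPE = mean(|actual - predicted| / |actual|)
+    predictions, actuals = np.array(predictions), np.array(actuals)
+    return np.mean(np.abs(actuals - predictions) / np.abs(actuals))
+```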
+
+예측된 데이터와 원본 데이터를 원래 스케일로 되돌립니다:
+
+```python
+# Inverse-scale the predictions back to the original range
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Inverse-scale the original values back to the original range
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### 학습 및 테스트 데이터에서 모델 성능 확인 [^1]
+
+데이터셋에서 타임스탬프를 추출하여 플롯의 x축에 표시합니다. 처음 `timesteps-1`개의 값을 첫 번째 출력의 입력으로 사용하므로, 출력의 타임스탬프는 그 이후부터 시작됩니다.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+학습 데이터에 대한 예측을 플롯합니다:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+학습 데이터에 대한 MAPE 출력
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+테스트 데이터에 대한 예측을 플롯합니다
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+테스트 데이터에 대한 MAPE 출력
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 테스트 데이터셋에서 매우 좋은 결과를 얻었습니다!
+
+### 전체 데이터셋에서 모델 성능 확인 [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 매우 정확한 모델을 보여주는 멋진 플롯입니다. 잘했습니다!
+
+---
+
+## 🚀도전 과제
+
+- 모델을 생성할 때 하이퍼파라미터(gamma, C, epsilon)를 조정하고 데이터를 평가하여 테스트 데이터에서 가장 좋은 결과를 제공하는 하이퍼파라미터 세트를 찾아보세요. 이러한 하이퍼파라미터에 대해 더 알고 싶다면 [여기](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel) 문서를 참조하세요.
+- 모델에 대해 다른 커널 함수를 사용해보고 데이터셋에서 그 성능을 분석해보세요. 도움이 되는 문서는 [여기](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)에서 찾을 수 있습니다.
+- 모델이 예측을 위해 돌아보는 `timesteps` 값을 다르게 설정해보세요.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## 복습 및 자기 학습
+
+이번 강의에서는 시계열 예측을 위한 SVR의 적용을 소개했습니다. SVR에 대해 더 알고 싶다면 [이 블로그](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/)를 참조하세요. 이 [scikit-learn 문서](https://scikit-learn.org/stable/modules/svm.html)는 일반적인 SVM, [SVR](https://scikit-learn.org/stable/modules/svm.html#regression) 및 사용할 수 있는 다양한 [커널 함수](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)와 그 매개변수에 대한 보다 포괄적인 설명을 제공합니다.
+
+## 과제
+
+[새로운 SVR 모델](assignment.md)
+
+## 공로
+
+[^1]: 이 섹션의 텍스트, 코드 및 출력은 [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)가 기여했습니다.
+[^2]: 이 섹션의 텍스트, 코드 및 출력은 [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)에서 가져왔습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/3-SVR/assignment.md b/translations/ko/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..bc07d57d2
--- /dev/null
+++ b/translations/ko/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# 새로운 SVR 모델
+
+## 지침 [^1]
+
+이제 SVR 모델을 구축했으니, 새로운 데이터로 새로운 모델을 만들어 보세요 (예: [듀크 대학교의 데이터셋](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). 작업 내용을 노트북에 주석으로 달고, 데이터를 시각화하며 모델을 시각화하고, 적절한 플롯과 MAPE를 사용하여 정확도를 테스트하세요. 또한 다양한 하이퍼파라미터를 조정하고 타임스텝에 대한 다양한 값을 사용해 보세요.
+
+## 평가 기준 [^1]
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | --------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------- |
+| | 노트북에 구축된 SVR 모델이 시각화 및 정확도와 함께 설명됨 | 노트북이 주석 없이 제공되거나 버그가 포함됨 | 불완전한 노트북이 제공됨 |
+
+[^1]: 이 섹션의 텍스트는 [ARIMA 과제](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)를 기반으로 작성되었습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서의 원어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/7-TimeSeries/README.md b/translations/ko/7-TimeSeries/README.md
new file mode 100644
index 000000000..64d64710f
--- /dev/null
+++ b/translations/ko/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# 시계열 예측 소개
+
+시계열 예측이란 무엇인가요? 이는 과거의 추세를 분석하여 미래의 사건을 예측하는 것입니다.
+
+## 지역 주제: 전 세계 전기 사용량 ✨
+
+이 두 수업에서는 시계열 예측에 대해 소개합니다. 시계열 예측은 비교적 덜 알려진 기계 학습의 한 분야이지만, 산업 및 비즈니스 애플리케이션 등 여러 분야에서 매우 가치가 있습니다. 신경망을 사용하여 이러한 모델의 유용성을 향상시킬 수 있지만, 우리는 과거의 데이터를 기반으로 미래 성능을 예측하는 데 도움이 되는 고전적인 기계 학습 모델의 맥락에서 이를 공부할 것입니다.
+
+우리의 지역 초점은 전 세계의 전기 사용량입니다. 이는 과거의 부하 패턴을 기반으로 미래의 전력 사용량을 예측하는 방법을 배우기에 흥미로운 데이터셋입니다. 이러한 종류의 예측이 비즈니스 환경에서 매우 유용할 수 있음을 알 수 있습니다.
+
+
+
+라자스탄의 도로에 있는 전기 타워의 사진은 [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)의 [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) 제공입니다.
+
+## 수업
+
+1. [시계열 예측 소개](1-Introduction/README.md)
+2. [ARIMA 시계열 모델 구축](2-ARIMA/README.md)
+3. [시계열 예측을 위한 서포트 벡터 회귀 모델 구축](3-SVR/README.md)
+
+## 크레딧
+
+"시계열 예측 소개"는 [Francesca Lazzeri](https://twitter.com/frlazzeri)와 [Jen Looper](https://twitter.com/jenlooper)의 ⚡️로 작성되었습니다. 이 노트북은 처음에 [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting)에 Francesca Lazzeri에 의해 온라인에 게시되었습니다. SVR 수업은 [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)가 작성했습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/1-QLearning/README.md b/translations/ko/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..87ad9bc0b
--- /dev/null
+++ b/translations/ko/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,319 @@
+# 강화 학습과 Q-러닝 소개
+
+
+> 스케치노트: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+강화 학습에는 에이전트, 몇 가지 상태, 그리고 상태별로 실행할 수 있는 행동 집합이라는 세 가지 중요한 개념이 있습니다. 특정 상태에서 행동을 실행하면 에이전트는 보상을 받습니다. 다시 슈퍼 마리오 게임을 상상해 보세요. 당신은 마리오입니다. 게임 레벨에 있으며, 절벽 가장자리에 서 있습니다. 당신 위에는 동전이 있습니다. 당신이 마리오로서 특정 위치에 있는 상태가 바로 당신의 상태입니다. 오른쪽으로 한 걸음 이동하는 행동을 하면 절벽 아래로 떨어지게 되어 낮은 점수를 받게 됩니다. 그러나 점프 버튼을 누르면 점수를 얻고 살아남을 수 있습니다. 이것은 긍정적인 결과이며, 긍정적인 점수를 받을 수 있습니다.
+
+강화 학습과 시뮬레이터(게임)를 사용하여 게임을 최대한 오래 살아남고 최대한 많은 점수를 얻기 위해 게임을 하는 방법을 배울 수 있습니다.
+
+[](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 위 이미지를 클릭하여 Dmitry가 강화 학습에 대해 이야기하는 것을 들어보세요.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## 사전 준비 및 설정
+
+이번 강의에서는 파이썬 코드로 실험을 해보겠습니다. 이 강의의 Jupyter Notebook 코드를 컴퓨터나 클라우드에서 실행할 수 있어야 합니다.
+
+[강의 노트북](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb)을 열어 이 강의를 따라가며 빌드할 수 있습니다.
+
+> **참고:** 클라우드에서 이 코드를 열 경우, 노트북 코드에서 사용되는 [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) 파일도 가져와야 합니다. 이 파일을 노트북과 동일한 디렉토리에 추가하세요.
+
+## 소개
+
+이번 강의에서는 러시아 작곡가 [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev)의 음악 동화 **[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)**에서 영감을 받아, Peter가 환경을 탐험하고 맛있는 사과를 모으고 늑대를 피하는 방법을 **강화 학습**을 사용해 알아보겠습니다.
+
+**강화 학습**(RL)은 여러 실험을 통해 **에이전트**가 특정 **환경**에서 최적의 행동을 배우게 하는 학습 기술입니다. 이 환경에서 에이전트는 **보상 함수**로 정의된 **목표**를 가지고 있어야 합니다.
+
+## 환경
+
+간단하게 Peter의 세계를 `width` x `height` 크기의 정사각형 보드로 생각해봅시다:
+
+
+
+이 보드의 각 셀은 다음 중 하나일 수 있습니다:
+
+* **땅**, Peter와 다른 생물들이 걸을 수 있는 곳.
+* **물**, 당연히 걸을 수 없는 곳.
+* **나무** 또는 **풀**, 쉴 수 있는 곳.
+* **사과**, Peter가 먹기 위해 찾고 싶어하는 것.
+* **늑대**, 위험하므로 피해야 하는 것.
+
+이 환경과 작업하기 위한 코드를 포함하는 별도의 파이썬 모듈 [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py)이 있습니다. 이 코드는 개념 이해에 중요하지 않으므로 모듈을 가져와 샘플 보드를 생성하는 데 사용하겠습니다 (코드 블록 1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+이 코드는 위와 유사한 환경 그림을 출력해야 합니다.
+
+## 행동과 정책
+
+우리 예제에서 Peter의 목표는 늑대와 다른 장애물을 피하면서 사과를 찾는 것입니다. 이를 위해 그는 사과를 찾을 때까지 주위를 돌아다닐 수 있습니다.
+
+따라서 어떤 위치에서든 다음 행동 중 하나를 선택할 수 있습니다: 위, 아래, 왼쪽, 오른쪽.
+
+이 행동들을 사전으로 정의하고 해당 좌표 변경 쌍에 매핑하겠습니다. 예를 들어, 오른쪽으로 이동하는 행동(`R`)은 좌표 쌍 `(1,0)`에 해당합니다. (코드 블록 2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
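+
+예를 들어, 이 매핑으로 새 위치를 계산하면 다음과 같습니다:
+
+```python
+# 현재 위치 (3, 4)에서 'R'(오른쪽)을 선택하면 (4, 4)가 됩니다
+pos = (3, 4)
+dx, dy = actions["R"]
+new_pos = (pos[0] + dx, pos[1] + dy)   # (4, 4)
+```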
+
+요약하면, 이 시나리오의 전략과 목표는 다음과 같습니다:
+
+- **전략**은 이른바 **정책**으로 정의됩니다. 정책은 주어진 상태에서 행동을 반환하는 함수입니다. 우리의 경우 문제의 상태는 플레이어의 현재 위치를 포함한 보드로 나타납니다.
+
+- **목표**는 강화 학습을 통해 문제를 효율적으로 해결할 수 있는 좋은 정책을 배우는 것입니다. 그러나 기준선으로서 가장 간단한 정책인 **랜덤 워크**를 고려해보겠습니다.
+
+## 랜덤 워크
+
+먼저 랜덤 워크 전략을 구현하여 문제를 해결해 보겠습니다. 랜덤 워크에서는 허용된 행동 중에서 다음 행동을 무작위로 선택하여 사과에 도달할 때까지 진행합니다 (코드 블록 3).
+
+1. 아래 코드를 사용하여 랜덤 워크를 구현하세요:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+ `walk` 호출은 실행 경로의 길이를 반환해야 하며, 이는 실행마다 다를 수 있습니다.
+
+1. 여러 번(예: 100번) 걷기 실험을 실행하고 결과 통계를 출력하세요 (코드 블록 4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+ 경로의 평균 길이가 약 30-40 단계로, 이는 사과까지의 평균 거리가 약 5-6 단계임을 고려하면 꽤 많은 것입니다.
+
+ 또한 랜덤 워크 동안 Peter의 움직임이 어떻게 보이는지 확인할 수 있습니다:
+
+ 
+
+## 보상 함수
+
+정책을 더 지능적으로 만들기 위해 어떤 이동이 "더 나은지" 이해해야 합니다. 이를 위해 목표를 정의해야 합니다.
+
+목표는 **보상 함수**로 정의할 수 있으며, 각 상태에 대해 점수 값을 반환합니다. 값이 클수록 더 좋은 보상을 의미합니다. (코드 블록 5)
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+보상 함수의 흥미로운 점은 대부분의 경우, *게임이 끝날 때만 실질적인 보상을 받는다는 것*입니다. 이는 알고리즘이 "좋은" 단계들을 기억하고, 긍정적인 보상으로 이어지는 단계들의 중요성을 높여야 함을 의미합니다. 마찬가지로 나쁜 결과로 이어지는 모든 이동은 억제해야 합니다.
+
+## Q-러닝
+
+여기서 논의할 알고리즘은 **Q-러닝**입니다. 이 알고리즘에서 정책은 **Q-테이블**이라고 불리는 함수(또는 데이터 구조)로 정의됩니다. 이는 주어진 상태에서 각 행동의 "좋음"을 기록합니다.
+
+Q-테이블이라고 불리는 이유는 테이블 또는 다차원 배열로 표현하는 것이 편리하기 때문입니다. 우리의 보드가 `width` x `height` 크기이므로, Q-테이블을 `width` x `height` x `len(actions)` 형태의 numpy 배열로 표현할 수 있습니다: (코드 블록 6)
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=float)*1.0/len(actions)
+```
+
+모든 Q-테이블 값을 동일한 값(이 경우 0.25)으로 초기화합니다. 이는 모든 상태에서 모든 이동이 똑같이 좋다는 뜻이므로 "랜덤 워크" 정책에 해당합니다. 보드 위에 테이블을 시각화하려면 Q-테이블을 `plot` 함수에 전달하면 됩니다: `m.plot(Q)`.
+
+
+
+각 셀의 중앙에는 선호되는 이동 방향을 나타내는 "화살표"가 있습니다. 모든 방향이 동일하므로 점이 표시됩니다.
+
+이제 시뮬레이션을 실행하여 환경을 탐색하고, 더 나은 Q-테이블 값 분포를 학습해야 합니다. 그러면 사과로 가는 경로를 훨씬 빨리 찾을 수 있습니다.
+
+## Q-러닝의 핵심: 벨만 방정식
+
+움직이기 시작하면 각 행동에는 해당하는 보상이 있으므로, 이론적으로는 가장 높은 즉각적 보상을 기준으로 다음 행동을 선택할 수 있습니다. 그러나 대부분의 상태에서 그 이동은 사과에 도달한다는 목표를 달성하지 못하므로, 어느 방향이 더 나은지 즉시 결정할 수 없습니다.
+
+> 중요한 것은 즉각적인 결과가 아니라 시뮬레이션이 끝날 때 얻게 되는 최종 결과라는 점을 기억하세요.
+
+이 지연된 보상을 고려하기 위해서는 문제를 재귀적으로 생각할 수 있게 해주는 **[동적 계획법](https://en.wikipedia.org/wiki/Dynamic_programming)**의 원리를 사용해야 합니다.
+
+현재 상태 *s*에 있고 다음 상태 *s'*로 이동하려 한다고 가정합시다. 그렇게 하면 보상 함수로 정의된 즉각적 보상 *r(s,a)*와 미래 보상을 받게 됩니다. Q-테이블이 각 행동의 "매력도"를 올바르게 반영한다고 가정하면, 상태 *s'*에서 우리는 *Q(s',a')*가 최대가 되는 행동 *a'*를 선택할 것입니다. 따라서 상태 *s*에서 얻을 수 있는 최선의 미래 보상은 max<sub>a'</sub>*Q(s',a')*로 정의됩니다(최댓값은 상태 *s'*에서 가능한 모든 행동 *a'*에 대해 계산됩니다).
+
+이로부터 행동 *a*가 주어졌을 때 상태 *s*에서의 Q-테이블 값을 계산하는 **벨만 공식**을 얻습니다:
+
+*Q(s,a)* = *r(s,a)* + γ max<sub>a'</sub>*Q(s',a')*
+
+여기서 γ는 현재 보상과 미래 보상 중 어느 쪽을 어느 정도 선호할지 결정하는 **할인 계수**입니다. 예를 들어 γ = 0.5라면 두 단계 뒤에 받는 보상은 지금 받는 같은 보상의 4분의 1의 가치로 평가됩니다.
+
+## 학습 알고리즘
+
+위 방정식을 바탕으로 학습 알고리즘의 의사 코드를 작성할 수 있습니다:
+
+* 모든 상태와 행동에 대해 Q-테이블 Q를 동일한 값으로 초기화합니다
+* 학습률 α ← 1로 설정합니다
+* 시뮬레이션을 여러 번 반복합니다
+   1. 무작위 위치에서 시작합니다
+   1. 반복합니다
+      1. 상태 *s*에서 행동 *a*를 선택합니다
+      2. 새로운 상태 *s'*로 이동하여 행동을 실행합니다
+      3. 게임 종료 조건을 만나거나 총 보상이 너무 작으면 시뮬레이션을 종료합니다
+      4. 새로운 상태에서 보상 *r*을 계산합니다
+      5. 벨만 방정식에 따라 Q-함수를 업데이트합니다: *Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ max<sub>a'</sub>Q(s',a'))*
+      6. *s* ← *s'*
+      7. 총 보상을 업데이트하고 α를 감소시킵니다
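+
+의사 코드의 2.5단계를 코드로 옮기면 대략 다음과 같은 형태가 됩니다. 위에서 정의한 `Q`와 `reward`, 상태 좌표 `(x, y)`, 행동 인덱스 `ai`, 이동 벡터 `dpos`가 주어졌다고 가정한 최소한의 스케치이며, 아래 전체 학습 루프에서 같은 갱신식이 실제로 사용됩니다:
+
+```python
+# 벨만 갱신: Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma * max_a' Q(s',a'))
+# 가정: (x, y)는 이동 전 상태, dpos는 이동 벡터, r은 이동 후 받은 보상
+Q[x, y, ai] = (1 - alpha) * Q[x, y, ai] + alpha * (r + gamma * Q[x + dpos[0], y + dpos[1]].max())
+```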
+
+## 활용 대 탐색
+
+위 알고리즘에서는 2.1단계에서 행동을 정확히 어떻게 선택해야 하는지 명시하지 않았습니다. 행동을 무작위로 선택하면 환경을 무작위로 **탐색**하게 되며, 자주 죽을 뿐 아니라 평소라면 가지 않을 영역까지 살펴보게 될 것입니다. 다른 접근은 이미 알고 있는 Q-테이블 값을 **활용**하여 상태 *s*에서 (Q-테이블 값이 더 높은) 최선의 행동을 선택하는 것입니다. 그러나 이렇게 하면 다른 상태를 탐색하지 못하게 되어 최적의 해를 찾지 못할 가능성이 있습니다.
+
+따라서 최선의 접근은 탐색과 활용 사이의 균형을 맞추는 것입니다. 이는 상태 *s*에서 Q-테이블의 값에 비례하는 확률로 행동을 선택함으로써 달성할 수 있습니다. 처음에는 Q-테이블 값이 모두 같아서 무작위 선택에 해당하지만, 환경에 대해 더 많이 배울수록 에이전트가 가끔 미탐색 경로를 선택하도록 허용하면서도 최적 경로를 따를 가능성이 높아집니다.
+
+## 파이썬 구현
+
+이제 학습 알고리즘을 구현할 준비가 되었습니다. 그 전에 Q-테이블의 임의의 숫자들을 해당 행동에 대한 확률 벡터로 변환하는 함수가 필요합니다.
+
+1. `probs()` 함수를 만듭니다:
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+ ```
+
+   초기 상태처럼 모든 벡터 구성 요소가 동일할 때 0으로 나누는 것을 피하기 위해, 원래 벡터에 작은 값 `eps`를 더합니다.
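+
+   간단한 사용 예는 다음과 같습니다:
+
+   ```python
+   # 모든 값이 같으면 균등한 확률이, 값이 다르면 큰 값에 더 높은 확률이 부여됩니다
+   print(probs(np.array([0.25, 0.25, 0.25, 0.25])))   # [0.25 0.25 0.25 0.25]
+   print(probs(np.array([1.0, 2.0, 0.0, 1.0])))       # 대략 [0.25 0.5  0.   0.25]
+   ```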
+
+5000번의 실험을 통해 학습 알고리즘을 실행합니다. 이는 **에포크**라고도 합니다: (코드 블록 8)
+```python
+ lpath = []  # collect the path length of each epoch for later analysis
+ for epoch in range(5000):
+
+ # Pick initial point
+ m.random_start()
+
+ # Start travelling
+ n=0
+ cum_reward = 0
+ while True:
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ dpos = actions[a]
+ m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+ r = reward(m)
+ cum_reward += r
+ if r==end_reward or cum_reward < -1000:
+ lpath.append(n)
+ break
+ alpha = np.exp(-n / 10e5)
+ gamma = 0.5
+ ai = action_idx[a]
+ Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+ n+=1
+```
+
+이 알고리즘을 실행한 후 Q-테이블은 각 단계에서 다양한 행동의 매력을 정의하는 값으로 업데이트되어야 합니다. Q-테이블을 시각화하여 각 셀에서 이동 방향을 가리키는 벡터를 그려볼 수 있습니다. 간단히 화살표 머리 대신 작은 원을 그립니다.
+
+## 정책 확인
+
+Q-테이블은 각 상태에서 각 행동의 "매력도"를 나열하므로 이를 사용하여 우리 세계에서 효율적인 탐색을 정의하는 것이 매우 쉽습니다. 가장 간단한 경우, Q-테이블 값이 가장 높은 행동을 선택할 수 있습니다: (코드 블록 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> 위 코드를 여러 번 시도해보면 가끔 "멈추는" 것을 발견할 수 있으며, 이 경우 노트북에서 STOP 버튼을 눌러 중단해야 합니다. 이는 최적의 Q-값 측면에서 두 상태가 서로를 가리키는 상황이 발생할 수 있기 때문에 에이전트가 무한히 그 상태 사이를 이동하게 되는 경우 발생합니다.
+
+## 🚀챌린지
+
+> **과제 1:** `walk` 함수를 수정하여 경로의 최대 길이를 일정한 단계 수(예: 100)로 제한하고, 위 코드가 때때로 이 값을 반환하는지 지켜보세요.
+
+> **과제 2:** `walk` 함수가 이미 가본 장소로 되돌아가지 않도록 수정하세요. 이렇게 하면 `walk`가 무한 루프에 빠지는 것을 막을 수 있지만, 에이전트는 여전히 빠져나올 수 없는 위치에 "갇힐" 수 있습니다.
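+
+과제 1에 대한 한 가지 접근을 보여주는 최소한의 스케치입니다. 위의 `walk` 구현을 바탕으로 하며, `max_steps`라는 인자 이름과 제한 도달 시의 반환값은 예시용 가정입니다:
+
+```python
+def walk_limited(m, policy, max_steps=100, start_position=None):
+    n = 0
+    if start_position:
+        m.human = start_position
+    else:
+        m.random_start()
+    while n < max_steps:                      # 단계 수 제한
+        if m.at() == Board.Cell.apple:
+            return n                          # 성공
+        if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+            return -1                         # 늑대에게 잡히거나 익사
+        while True:
+            a = actions[policy(m)]
+            new_pos = m.move_pos(m.human, a)
+            if m.is_valid(new_pos) and m.at(new_pos) != Board.Cell.water:
+                m.move(a)
+                break
+        n += 1
+    return max_steps                          # 제한에 도달
+```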
+
+## 내비게이션
+
+더 나은 내비게이션 정책은 훈련 중에 사용했던, 활용과 탐색을 결합한 정책입니다. 이 정책에서는 Q-테이블의 값에 비례하는 확률로 각 행동을 선택합니다. 이 전략을 쓰면 에이전트가 이미 탐색한 위치로 되돌아갈 수도 있지만, 아래 코드에서 볼 수 있듯이 원하는 위치까지의 평균 경로가 매우 짧아집니다(`print_statistics`가 시뮬레이션을 100번 실행한다는 점을 기억하세요): (코드 블록 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+이 코드를 실행한 후에는 이전보다 훨씬 짧은 평균 경로 길이, 약 3-6 정도를 얻을 수 있어야 합니다.
+
+## 학습 과정 조사
+
+앞서 언급했듯이, 학습 과정은 문제 공간의 구조에 대해 획득한 지식을 활용하는 것과 새로운 영역을 탐색하는 것 사이의 균형입니다. 학습 결과(에이전트가 목표까지의 짧은 경로를 찾는 능력)가 향상되었음을 보았지만, 학습 과정에서 평균 경로 길이가 어떻게 변하는지 관찰하는 것도 흥미롭습니다:
+
+학습 내용을 요약하면 다음과 같습니다:
+
+- **평균 경로 길이 증가**. 처음에는 평균 경로 길이가 증가하는 것을 볼 수 있습니다. 이는 환경에 대해 아무것도 모를 때 나쁜 상태, 물 또는 늑대에 갇힐 가능성이 높기 때문입니다. 더 많이 배우고 이 지식을 사용하기 시작하면 환경을 더 오래 탐색할 수 있지만, 여전히 사과가 어디 있는지 잘 모릅니다.
+
+- **더 많이 배울수록 경로 길이 감소**. 충분히 배우면 에이전트가 목표를 달성하기 쉬워지고 경로 길이가 줄어들기 시작합니다. 그러나 여전히 탐색을 계속하고 있기 때문에 종종 최적의 경로에서 벗어나 새로운 옵션을 탐색하여 경로가 최적보다 길어집니다.
+
+- **길이가 갑자기 증가**. 이 그래프에서 볼 수 있듯이 어느 순간 경로 길이가 갑자기 증가합니다. 이는 과정의 확률적 특성을 나타내며, Q-테이블 계수를 새로운 값으로 덮어써서 "망칠" 수 있음을 나타냅니다. 이는 학습이 끝날 때쯤 학습률을 줄여 Q-테이블 값을 작은 값으로만 조정하는 방식으로 최소화해야 합니다.
+
+전체적으로 학습 과정의 성공과 품질은 학습률, 학습률 감소, 할인 계수와 같은 매개 변수에 크게 의존합니다. 이러한 매개 변수는 **하이퍼파라미터**라고 하며, **파라미터**와 구분됩니다. 파라미터는 훈련 중 최적화하는 값입니다(예: Q-테이블 계수). 최적의 하이퍼파라미터 값을 찾는 과정을 **하이퍼파라미터 최적화**라고 하며, 이는 별도의 주제를 다룰 가치가 있습니다.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## 과제
+[더 현실적인 세계](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/1-QLearning/assignment.md b/translations/ko/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..a749d6063
--- /dev/null
+++ b/translations/ko/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# 더 현실적인 세계
+
+우리의 상황에서 Peter는 거의 지치거나 배고프지 않은 상태로 이동할 수 있었습니다. 더 현실적인 세계에서는 Peter가 때때로 앉아서 쉬어야 하고, 스스로를 먹여야 합니다. 다음 규칙을 구현하여 우리의 세계를 더 현실적으로 만들어 봅시다:
+
+1. 한 장소에서 다른 장소로 이동할 때 Peter는 **에너지**를 잃고 약간의 **피로**를 얻습니다.
+2. Peter는 사과를 먹음으로써 더 많은 에너지를 얻을 수 있습니다.
+3. Peter는 나무 아래나 잔디 위에서 쉬면서 피로를 없앨 수 있습니다 (즉, 나무나 잔디가 있는 보드 위치로 걸어가면 됩니다 - 녹색 필드).
+4. Peter는 늑대를 찾아서 죽여야 합니다.
+5. 늑대를 죽이기 위해서는 Peter가 일정 수준의 에너지와 피로를 가지고 있어야 하며, 그렇지 않으면 전투에서 패배하게 됩니다.
+
+## 지침
+
+해결책의 시작점으로 원래의 [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) 노트북을 사용하세요.
+
+위의 보상 함수를 게임 규칙에 따라 수정하고, 강화 학습 알고리즘을 실행하여 게임에서 승리하기 위한 최적의 전략을 학습한 다음, 무작위 보행과 알고리즘의 결과를 게임에서 이긴 횟수와 패배한 횟수 측면에서 비교하세요.
+
+> **Note**: 새로운 세계에서는 상태가 더 복잡해지며, 인간의 위치 외에도 피로도와 에너지 수준을 포함합니다. 상태를 튜플 (Board,energy,fatigue)로 표현하거나 상태에 대한 클래스를 정의할 수 있습니다 (또는 `Board`에서 파생시킬 수도 있습니다). 또는 [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py) 내의 원래 `Board` 클래스를 수정할 수도 있습니다.
+
+해결책에서 무작위 보행 전략을 담당하는 코드를 유지하고, 알고리즘의 결과를 무작위 보행과 비교하세요.
+
+> **Note**: 특히 에포크 수를 조정해야 할 수도 있습니다. 게임의 성공(늑대와의 싸움)은 드문 사건이기 때문에 훨씬 더 긴 훈련 시간을 예상할 수 있습니다.
+
+## 평가 기준
+
+| 기준 | 우수 | 적절 | 개선 필요 |
+| ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
+| | 새로운 세계 규칙의 정의, Q-러닝 알고리즘 및 몇 가지 텍스트 설명이 포함된 노트북이 제공됩니다. Q-러닝은 무작위 보행과 비교하여 결과를 크게 향상시킬 수 있습니다. | 노트북이 제공되고, Q-러닝이 구현되어 무작위 보행과 비교하여 결과를 개선하지만 크게 향상되지는 않음; 또는 노트북이 잘 문서화되지 않았고 코드가 잘 구조화되지 않음 | 세계의 규칙을 재정의하려는 시도가 있지만, Q-러닝 알고리즘이 작동하지 않거나 보상 함수가 완전히 정의되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서가 해당 언어로 작성된 문서가 권위 있는 자료로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/ko/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..d255d06c5
--- /dev/null
+++ b/translations/ko/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/ko/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..0efb0d93d
--- /dev/null
+++ b/translations/ko/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/2-Gym/README.md b/translations/ko/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..1d66a9fee
--- /dev/null
+++ b/translations/ko/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,324 @@
+## 사전 요구사항
+
+이 강의에서는 다양한 **환경**을 시뮬레이션하기 위해 **OpenAI Gym**이라는 라이브러리를 사용할 것입니다. 이 강의의 코드를 로컬에서 실행할 수 있습니다(예: Visual Studio Code에서). 이 경우 시뮬레이션이 새 창에서 열립니다. 온라인으로 코드를 실행할 때는 [여기](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7) 설명된 대로 약간의 수정이 필요할 수 있습니다.
+
+## OpenAI Gym
+
+이전 강의에서는 우리가 직접 정의한 `Board` 클래스가 게임의 규칙과 상태를 제공했습니다. 여기서는 **시뮬레이션 환경**을 사용하여 균형 잡기 막대의 물리학을 시뮬레이션할 것입니다. 강화 학습 알고리즘을 훈련하기 위한 가장 인기 있는 시뮬레이션 환경 중 하나는 [Gym](https://gym.openai.com/)으로, [OpenAI](https://openai.com/)에서 유지 관리합니다. 이 Gym을 사용하여 카트폴 시뮬레이션부터 Atari 게임까지 다양한 **환경**을 만들 수 있습니다.
+
+> **참고**: OpenAI Gym에서 사용할 수 있는 다른 환경은 [여기](https://gym.openai.com/envs/#classic_control)에서 확인할 수 있습니다.
+
+먼저, gym을 설치하고 필요한 라이브러리를 가져옵니다(코드 블록 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## 연습 - 카트폴 환경 초기화
+
+카트폴 균형 문제를 다루기 위해 해당 환경을 초기화해야 합니다. 각 환경은 다음과 연관됩니다:
+
+- **관찰 공간**: 환경으로부터 받는 정보의 구조를 정의합니다. 카트폴 문제의 경우, 막대의 위치, 속도 및 기타 값을 받습니다.
+
+- **액션 공간**: 가능한 동작을 정의합니다. 우리의 경우, 액션 공간은 이산적이며, **왼쪽**과 **오른쪽**의 두 가지 동작으로 구성됩니다. (코드 블록 2)
+
+1. 초기화하려면 다음 코드를 입력하세요:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
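+
+   실행하면 대략 다음과 같은 출력이 보입니다(`Box`의 표시 형식은 gym 버전에 따라 다르고, 마지막 줄은 무작위 샘플이라 0 또는 1이 나온다는 점을 가정하세요):
+
+   ```output
+   Discrete(2)
+   Box([-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], [4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38], (4,), float32)
+   0
+   ```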
+
+환경이 어떻게 작동하는지 보기 위해 100단계의 짧은 시뮬레이션을 실행해 봅시다. 각 단계에서 `action_space`에서 무작위로 선택된 동작 중 하나를 제공합니다.
+
+1. 아래 코드를 실행하고 결과를 확인하세요.
+
+ ✅ 이 코드는 로컬 Python 설치에서 실행하는 것이 좋습니다! (코드 블록 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+ 다음과 비슷한 이미지를 볼 수 있어야 합니다:
+
+ 
+
+1. 시뮬레이션 중에 어떻게 행동할지 결정하기 위해 관찰 값을 얻어야 합니다. 사실, step 함수는 현재의 관찰 값, 보상 함수 및 시뮬레이션을 계속할 가치가 있는지 여부를 나타내는 완료 플래그를 반환합니다: (코드 블록 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+ 노트북 출력에서 다음과 같은 것을 보게 될 것입니다:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+ 시뮬레이션의 각 단계에서 반환되는 관찰 벡터는 다음 값을 포함합니다:
+ - 카트의 위치
+ - 카트의 속도
+ - 막대의 각도
+ - 막대의 회전 속도
+
+1. 이 숫자들의 최소값과 최대값을 가져옵니다: (코드 블록 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+ 각 시뮬레이션 단계에서 보상 값이 항상 1인 것을 알 수 있습니다. 이는 우리의 목표가 가능한 한 오래 생존하는 것, 즉 막대를 가능한 한 오랫동안 수직에 가깝게 유지하는 것이기 때문입니다.
+
+ ✅ 사실, 카트폴 시뮬레이션은 100번의 연속적인 시도에서 평균 보상이 195에 도달하면 해결된 것으로 간주됩니다.
+
+## 상태 이산화
+
+Q-Learning에서는 각 상태에서 무엇을 해야 할지 정의하는 Q-Table을 작성해야 합니다. 이를 위해서는 상태가 **이산적**이어야 하며, 더 정확하게는 유한한 수의 이산 값을 포함해야 합니다. 따라서 관찰 값을 **이산화**하여 유한한 상태 집합으로 매핑해야 합니다.
+
+이를 수행하는 방법에는 몇 가지가 있습니다:
+
+- **구간으로 나누기**. 특정 값의 범위를 알고 있는 경우, 이 범위를 여러 **구간**으로 나눌 수 있으며, 그런 다음 값을 해당하는 구간 번호로 대체할 수 있습니다. 이는 numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) 메서드를 사용하여 수행할 수 있습니다. 이 경우, 디지털화에 선택한 구간 수에 따라 상태 크기를 정확히 알 수 있습니다.
+
+✅ 값을 유한한 범위(예: -20에서 20)로 가져오기 위해 선형 보간을 사용할 수 있으며, 그런 다음 값을 반올림하여 정수로 변환할 수 있습니다. 이는 특히 입력 값의 정확한 범위를 모르는 경우 상태 크기에 대한 제어가 덜 됩니다. 예를 들어, 우리의 경우 4개의 값 중 2개는 상한/하한 값이 없으며, 이는 무한한 수의 상태를 초래할 수 있습니다.
+
+우리 예제에서는 두 번째 접근 방식을 사용할 것입니다. 나중에 알게 되겠지만, 정의되지 않은 상한/하한 값에도 불구하고, 이러한 값들은 특정 유한한 범위를 벗어나는 경우가 드뭅니다. 따라서 극단적인 값이 있는 상태는 매우 드뭅니다.
+
+1. 모델의 관찰 값을 받아 4개의 정수 값 튜플을 생성하는 함수는 다음과 같습니다: (코드 블록 6)
+
+    ```python
+    def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+    ```
+
+1. 구간을 사용하는 다른 이산화 방법을 탐색해 봅시다: (코드 블록 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. 짧은 시뮬레이션을 실행하고 이러한 이산 환경 값을 관찰해 봅시다. `discretize` and `discretize_bins` 둘 다 시도해 보고 차이가 있는지 확인하세요.
+
+ ✅ discretize_bins는 0 기반의 구간 번호를 반환합니다. 따라서 입력 변수 값이 0에 가까운 경우 구간의 중간 값(10)에서 번호를 반환합니다. discretize에서는 출력 값의 범위에 신경 쓰지 않았으므로, 값이 이동하지 않으며 0이 0에 해당합니다. (코드 블록 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+ ✅ 환경 실행을 보고 싶다면 env.render로 시작하는 줄의 주석을 해제하세요. 그렇지 않으면 백그라운드에서 실행할 수 있으며, 이는 더 빠릅니다. Q-Learning 과정 동안 이 "보이지 않는" 실행을 사용할 것입니다.
+
+## Q-Table 구조
+
+이전 강의에서는 상태가 0에서 8까지의 간단한 숫자 쌍이었기 때문에 Q-Table을 8x8x2 모양의 numpy 텐서로 표현하는 것이 편리했습니다. 구간 이산화를 사용하는 경우, 상태 벡터의 크기도 알려져 있으므로 동일한 접근 방식을 사용하여 상태를 20x20x10x10x2 모양의 배열로 표현할 수 있습니다(여기서 2는 액션 공간의 차원이며, 첫 번째 차원은 관찰 공간의 각 매개변수에 사용할 구간 수에 해당합니다).
+
+그러나 관찰 공간의 정확한 차원이 알려지지 않은 경우도 있습니다. `discretize` 함수의 경우, 일부 원래 값이 제한되지 않았기 때문에 상태가 특정 한계 내에 머무르는지 확신할 수 없습니다. 따라서 우리는 약간 다른 접근 방식을 사용하여 Q-Table을 사전으로 표현할 것입니다.
+
+1. *(state, action)* 쌍을 사전 키로 사용하고 값은 Q-Table 항목 값에 해당합니다. (코드 블록 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+ 여기서 `qvalues()` 함수를 정의하여 주어진 상태에 대한 Q-Table 값을 반환합니다. Q-Table에 항목이 없으면 기본값으로 0을 반환합니다.
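+
+   간단한 사용 예입니다(위에서 정의한 `discretize`와 `env`를 가정):
+
+   ```python
+   obs = env.reset()
+   s = discretize(obs)
+   print(qvalues(s))   # 아직 학습 전이므로 기본값 [0, 0]이 반환됩니다
+   ```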
+
+## Q-Learning 시작하기
+
+이제 Peter에게 균형을 잡는 법을 가르칠 준비가 되었습니다!
+
+1. 먼저 몇 가지 하이퍼파라미터를 설정해 봅시다: (코드 블록 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+   여기서 `alpha`는 매 단계에서 Q-테이블의 현재 값을 어느 정도 조정할지 정의하는 **학습률**입니다. 이전 강의에서는 1로 시작해 학습이 진행되는 동안 `alpha`를 더 낮은 값으로 줄였습니다. 이 예제에서는 단순화를 위해 상수로 유지하며, 나중에 `alpha` 값을 조정하는 실험을 해볼 수 있습니다.
+
+   `gamma`는 현재 보상보다 미래 보상을 어느 정도 우선시할지 나타내는 **할인 계수**입니다.
+
+   `epsilon`은 탐색과 활용 중 어느 쪽을 선호할지 결정하는 **탐색/활용 계수**입니다. 우리의 알고리즘에서는 전체의 `epsilon` 비율만큼은 Q-테이블 값에 따라 다음 행동을 선택하고, 나머지 경우에는 무작위 행동을 실행합니다. 이를 통해 한 번도 가보지 않은 탐색 공간 영역을 탐험할 수 있습니다.
+
+   ✅ 균형 잡기 관점에서 보면, 무작위 행동(탐색)은 잘못된 방향으로 가해지는 무작위 타격처럼 작용하며, 막대는 이러한 "실수"로부터 균형을 회복하는 법을 배워야 합니다.
+
+### 알고리즘 개선
+
+이전 강의의 알고리즘에 두 가지 개선을 추가할 수 있습니다:
+
+- **평균 누적 보상 계산**: 여러 번의 시뮬레이션에 걸쳐 평균을 냅니다. 5000번 반복마다 진행 상황을 출력하고, 그 기간 동안의 누적 보상을 평균합니다. 평균이 195점을 넘으면 요구 기준보다 더 높은 품질로 문제가 해결된 것으로 간주할 수 있습니다.
+
+- **최대 평균 누적 결과 계산**: `Qmax`를 계산하고, 그 결과에 해당하는 Q-테이블을 저장합니다. 훈련을 실행하다 보면 평균 누적 결과가 떨어지기 시작하는 경우가 있는데, 이는 훈련 중 관찰된 최고의 모델에 해당하는 Q-테이블 값을 보존하기 위함입니다.
+
+1. 나중에 플로팅할 수 있도록 각 시뮬레이션의 모든 누적 보상을 `rewards` 벡터에 수집합니다. (코드 블록 11)
+
+    ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+
+    Qmax = 0
+    cum_rewards = []
+    rewards = []
+    for epoch in range(100000):
+        obs = env.reset()
+        done = False
+        cum_reward=0
+        # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - choose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - choose a random action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            # Bellman update on the dictionary-based Q-Table
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == periodically print results and keep the best Q-Table ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q.copy()  # snapshot so later updates don't overwrite it
+            cum_rewards=[]
+    ```
+
+이 결과에서 알 수 있는 것:
+
+- **목표에 가까워짐**. 100회 연속 시뮬레이션에서 평균 누적 보상 195를 달성한다는 목표에 매우 가까워졌거나, 실제로 달성했을 수도 있습니다! 우리는 5000회 실행에 대해 평균을 내고 있으므로 더 작은 숫자가 나올 수 있지만, 공식 기준에는 100회 실행만 필요합니다.
+
+- **보상이 떨어지기 시작함**. 때때로 보상이 떨어지기 시작하여 Q-Table에 이미 학습된 값을 상황을 악화시키는 값으로 "파괴"할 수 있습니다.
+
+이 관찰은 학습 진행 상황을 플로팅할 때 더 명확하게 보입니다.
+
+## 학습 진행 상황 플로팅
+
+훈련 중에 각 반복에서 누적 보상 값을 `rewards` 벡터에 수집했습니다. 이를 반복 번호에 대해 플로팅하면 다음과 같습니다:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+이 그래프에서는 아무것도 알 수 없습니다. 확률적 학습 과정의 특성상 훈련 세션의 길이가 크게 다르기 때문입니다. 이 그래프를 더 이해하기 쉽게 만들기 위해 100회 실험에 대한 **이동 평균**을 계산할 수 있습니다. 이는 `np.convolve`를 사용하여 편리하게 수행할 수 있습니다: (코드 블록 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## 하이퍼파라미터 변경
+
+학습을 더 안정적으로 만들기 위해 훈련 중에 일부 하이퍼파라미터를 조정하는 것이 좋습니다. 특히:
+
+- **학습률 `alpha`의 경우**, 1에 가까운 값으로 시작한 다음 이 매개변수를 계속 줄여나갈 수 있습니다. 시간이 지나면 Q-테이블에 좋은 확률 값이 쌓이므로, 새로운 값으로 완전히 덮어쓰지 않고 약간씩만 조정해야 합니다.
+
+- **epsilon 증가**. 탐색을 줄이고 활용을 늘리기 위해 `epsilon`을 천천히 증가시킬 수 있습니다. 낮은 `epsilon` 값으로 시작하여 거의 1까지 올리는 것이 합리적일 것입니다.
+
+> **과제 1**: 하이퍼파라미터 값을 조정하여 더 높은 누적 보상을 달성할 수 있는지 확인하세요. 195를 초과하고 있나요?
+
+> **과제 2**: 문제를 공식적으로 해결하려면 100회 연속 실행에서 평균 195 보상을 달성해야 합니다. 훈련 중에 이를 측정하고 문제를 공식적으로 해결했는지 확인하세요!
+
+## 결과를 실제로 보기
+
+훈련된 모델이 어떻게 작동하는지 실제로 보는 것은 흥미로울 것입니다. 시뮬레이션을 실행하고 훈련 중과 동일한 동작 선택 전략을 따르며, Q-Table의 확률 분포에 따라 샘플링합니다: (코드 블록 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+다음과 같은 것을 볼 수 있어야 합니다:
+
+
+
+---
+
+## 🚀도전
+
+> **과제 3**: 여기서는 최종 Q-Table을 사용했는데, 이것이 최상의 것이 아닐 수 있습니다. 최고 성능의 Q-Table은 `Qbest` 변수에 저장해 두었다는 점을 기억하세요! `Qbest`를 `Q`에 복사하여 최고 성능의 Q-Table로 같은 예제를 실행해 보고 차이가 느껴지는지 확인하세요.
+
+> **과제 4**: 여기서는 각 단계에서 최선의 행동을 고르지 않고 해당 확률 분포에 따라 샘플링했습니다. 항상 Q-Table 값이 가장 높은 최선의 행동을 선택하는 것이 더 합리적일까요? `np.argmax` 함수를 사용하여 가장 높은 Q-Table 값에 해당하는 행동 번호를 찾는 전략을 구현하고, 이 전략이 균형 잡기를 개선하는지 확인하세요.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## 과제
+[Mountain Car 훈련하기](assignment.md)
+
+## 결론
+
+이제 우리는 보상 함수를 제공하고, 지능적으로 탐색할 기회를 제공하여 에이전트를 훈련시키는 방법을 배웠습니다. 이산 및 연속 환경에서 Q-Learning 알고리즘을 성공적으로 적용했지만, 이산 동작만을 사용했습니다.
+
+동작 상태도 연속적이고, 관찰 공간이 Atari 게임 화면 이미지처럼 훨씬 더 복잡한 상황을 연구하는 것이 중요합니다. 이러한 문제에서는 신경망과 같은 더 강력한 기계 학습 기술을 사용해야 좋은 결과를 얻을 수 있습니다. 이러한 더 고급 주제는 우리의 다가오는 더 고급 AI 과정의 주제입니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전을 권위 있는 소스로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 우리는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/2-Gym/assignment.md b/translations/ko/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..cb92ebda1
--- /dev/null
+++ b/translations/ko/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# 산악 자동차 훈련
+
+[OpenAI Gym](http://gym.openai.com)은 모든 환경이 동일한 API, 즉 동일한 메소드 `reset`, `step` 및 `render`를 제공하도록 설계되었습니다. 또한 **행동 공간**과 **관찰 공간**의 동일한 추상화를 제공합니다. 따라서 동일한 강화 학습 알고리즘을 최소한의 코드 변경으로 다양한 환경에 적용할 수 있어야 합니다.
+
+## 산악 자동차 환경
+
+[산악 자동차 환경](https://gym.openai.com/envs/MountainCar-v0/)에는 계곡에 갇힌 자동차가 있습니다:
+목표는 계곡을 빠져나가 깃발을 잡는 것입니다. 각 단계에서 다음 중 하나의 행동을 수행합니다:
+
+| 값 | 의미 |
+|---|---|
+| 0 | 왼쪽으로 가속 |
+| 1 | 가속하지 않음 |
+| 2 | 오른쪽으로 가속 |
+
+이 문제의 주요 트릭은 자동차 엔진이 한 번에 산을 오를 만큼 강력하지 않다는 것입니다. 따라서 성공하려면 앞뒤로 운전하여 모멘텀을 쌓는 것이 유일한 방법입니다.
+
+관찰 공간은 단 두 가지 값으로 구성됩니다:
+
+| 번호 | 관찰 항목 | 최소값 | 최대값 |
+|-----|--------------|-----|-----|
+| 0 | 자동차 위치 | -1.2| 0.6 |
+| 1 | 자동차 속도 | -0.07 | 0.07 |
+
+산악 자동차의 보상 시스템은 다소 까다롭습니다:
+
+ * 에이전트가 산 정상에 있는 깃발(위치 = 0.5)에 도달하면 보상 0이 주어집니다.
+ * 에이전트의 위치가 0.5 미만이면 보상 -1이 주어집니다.
+
+에피소드는 자동차 위치가 0.5를 초과하거나 에피소드 길이가 200을 초과하면 종료됩니다.
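+
+참고로, 새 환경을 초기화하고 위의 관찰 공간과 행동 공간을 확인하는 최소한의 스케치입니다(`gym`이 설치되어 있다고 가정):
+
+```python
+import gym
+
+env = gym.make("MountainCar-v0")
+print(env.observation_space.low, env.observation_space.high)   # [-1.2  -0.07] [0.6  0.07]
+print(env.action_space)                                        # Discrete(3)
+```
+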
+## 지침
+
+우리의 강화 학습 알고리즘을 산악 자동차 문제를 해결하도록 조정하십시오. 기존 [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) 코드로 시작하여 새 환경을 대체하고 상태 이산화 함수를 변경하고 최소한의 코드 수정으로 기존 알고리즘을 훈련시키십시오. 하이퍼파라미터를 조정하여 결과를 최적화하십시오.
+
+> **참고**: 알고리즘이 수렴하려면 하이퍼파라미터 조정이 필요할 수 있습니다.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절한 | 개선 필요 |
+| -------- | --------- | -------- | ----------------- |
+| | Q-러닝 알고리즘이 최소한의 코드 수정으로 CartPole 예제에서 성공적으로 조정되어 200단계 이내에 깃발을 잡는 문제를 해결할 수 있습니다. | 새로운 Q-러닝 알고리즘이 인터넷에서 채택되었지만 잘 문서화되어 있거나 기존 알고리즘이 채택되었지만 원하는 결과에 도달하지 못함 | 학생이 성공적으로 알고리즘을 채택하지 못했지만 솔루션을 향해 상당한 단계를 밟았음(상태 이산화, Q-테이블 데이터 구조 등 구현) |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확한 내용이 포함될 수 있습니다. 원본 문서의 원어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/ko/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..77a015528
--- /dev/null
+++ b/translations/ko/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/2-Gym/solution/R/README.md b/translations/ko/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..d36b909e2
--- /dev/null
+++ b/translations/ko/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/8-Reinforcement/README.md b/translations/ko/8-Reinforcement/README.md
new file mode 100644
index 000000000..ce2b284b3
--- /dev/null
+++ b/translations/ko/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# 강화 학습 소개
+
+강화 학습(RL)은 지도 학습과 비지도 학습과 함께 기본적인 기계 학습 패러다임 중 하나로 여겨집니다. RL은 올바른 결정을 내리거나 최소한 그 결정에서 배우는 것과 관련이 있습니다.
+
+예를 들어 주식 시장과 같은 시뮬레이션 환경이 있다고 상상해보세요. 특정 규제를 도입하면 어떤 일이 발생할까요? 긍정적인 효과가 있을까요, 부정적인 효과가 있을까요? 부정적인 일이 발생하면, 이 _부정적 강화_에서 배워서 방향을 바꿔야 합니다. 긍정적인 결과라면, 그 _긍정적 강화_를 바탕으로 더 나아가야 합니다.
+
+
+
+> 피터와 그의 친구들이 배고픈 늑대에게서 도망쳐야 해요! 이미지 제공: [Jen Looper](https://twitter.com/jenlooper)
+
+## 지역 주제: 피터와 늑대 (러시아)
+
+[피터와 늑대](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)는 러시아 작곡가 [세르게이 프로코피예프](https://en.wikipedia.org/wiki/Sergei_Prokofiev)가 쓴 음악 동화입니다. 이 이야기는 용감한 소년 피터가 집을 나와 숲 속 공터에서 늑대를 쫓는 이야기입니다. 이 섹션에서는 피터를 도울 기계 학습 알고리즘을 훈련할 것입니다:
+
+- 주변 지역을 **탐색**하고 최적의 내비게이션 지도를 작성합니다.
+- 더 빠르게 이동하기 위해 스케이트보드를 타고 균형을 잡는 법을 **배웁니다**.
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 위 이미지를 클릭하여 프로코피예프의 피터와 늑대를 들어보세요
+
+## 강화 학습
+
+이전 섹션에서는 두 가지 기계 학습 문제의 예를 보았습니다:
+
+- **지도 학습**은 우리가 해결하고자 하는 문제에 대한 샘플 솔루션을 제안하는 데이터셋을 가지고 있는 경우입니다. [분류](../4-Classification/README.md)와 [회귀](../2-Regression/README.md)는 지도 학습 과제입니다.
+- **비지도 학습**은 라벨이 지정된 학습 데이터가 없는 경우입니다. 비지도 학습의 주요 예는 [클러스터링](../5-Clustering/README.md)입니다.
+
+이 섹션에서는 라벨이 지정된 학습 데이터가 필요하지 않은 새로운 유형의 학습 문제를 소개할 것입니다. 이러한 문제에는 여러 유형이 있습니다:
+
+- **[반지도 학습](https://wikipedia.org/wiki/Semi-supervised_learning)**은 라벨이 지정되지 않은 많은 데이터를 사용하여 모델을 사전 훈련할 수 있는 경우입니다.
+- **[강화 학습](https://wikipedia.org/wiki/Reinforcement_learning)**은 에이전트가 시뮬레이션된 환경에서 실험을 수행하면서 행동하는 방법을 배우는 경우입니다.
+
+### 예제 - 컴퓨터 게임
+
+컴퓨터에게 체스나 [슈퍼 마리오](https://wikipedia.org/wiki/Super_Mario)와 같은 게임을 가르치고 싶다고 가정해보세요. 컴퓨터가 게임을 하려면 각 게임 상태에서 어떤 움직임을 취할지 예측해야 합니다. 이것은 분류 문제처럼 보일 수 있지만, 그렇지 않습니다. 상태와 해당 행동을 포함하는 데이터셋이 없기 때문입니다. 기존의 체스 경기 기록이나 슈퍼 마리오를 플레이하는 플레이어의 기록 같은 데이터가 있을 수 있지만, 그 데이터가 가능한 상태를 충분히 포괄하지 못할 가능성이 큽니다.
+
+기존의 게임 데이터를 찾는 대신, **강화 학습**(RL)은 *컴퓨터가 여러 번 게임을 하게 하고* 결과를 관찰하는 아이디어에 기반합니다. 따라서 강화 학습을 적용하려면 두 가지가 필요합니다:
+
+- **환경**과 **시뮬레이터**는 여러 번 게임을 할 수 있게 해줍니다. 이 시뮬레이터는 모든 게임 규칙뿐만 아니라 가능한 상태와 행동을 정의합니다.
+
+- **보상 함수**는 각 움직임이나 게임 동안 얼마나 잘했는지 알려줍니다.
+
+다른 유형의 기계 학습과 RL의 주요 차이점은 RL에서는 일반적으로 게임이 끝날 때까지 우리가 이겼는지 졌는지 알 수 없다는 것입니다. 따라서 특정 움직임이 좋거나 나쁜지 단독으로 판단할 수 없으며, 게임이 끝날 때 보상을 받습니다. 우리의 목표는 불확실한 조건에서 모델을 훈련할 수 있는 알고리즘을 설계하는 것입니다. 우리는 **Q-learning**이라는 RL 알고리즘에 대해 배울 것입니다.
+
+## 레슨
+
+1. [강화 학습과 Q-Learning 소개](1-QLearning/README.md)
+2. [Gym 시뮬레이션 환경 사용하기](2-Gym/README.md)
+
+## 크레딧
+
+"강화 학습 소개"는 [Dmitry Soshnikov](http://soshnikov.com) 가 ♥️를 담아 작성했습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/9-Real-World/1-Applications/README.md b/translations/ko/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..5728a3083
--- /dev/null
+++ b/translations/ko/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# 후기: 실제 세계의 머신 러닝
+
+
+> 스케치노트: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+이 커리큘럼에서는 데이터를 학습용으로 준비하고 머신 러닝 모델을 만드는 다양한 방법을 배웠습니다. 고전적인 회귀, 군집화, 분류, 자연어 처리, 시계열 모델을 연속적으로 구축했습니다. 축하합니다! 이제 이 모든 것이 무엇을 위한 것인지, 즉 이러한 모델들이 실제 세계에서 어떻게 사용되는지 궁금할 것입니다.
+
+산업계에서 AI, 특히 딥러닝을 활용하는 것에 많은 관심이 있지만, 고전적인 머신 러닝 모델도 여전히 가치 있는 응용 프로그램을 가지고 있습니다. 오늘날에도 이러한 응용 프로그램 중 일부를 사용하고 있을지도 모릅니다! 이 강의에서는 8개의 다양한 산업 및 주제 분야가 이러한 유형의 모델을 사용하여 애플리케이션을 더 성능 좋고, 신뢰할 수 있으며, 지능적이고, 사용자에게 가치 있게 만드는 방법을 탐구할 것입니다.
+
+## [강의 전 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 금융
+
+금융 부문은 머신 러닝을 활용할 수 있는 많은 기회를 제공합니다. 이 분야의 많은 문제는 ML을 사용하여 모델링하고 해결할 수 있습니다.
+
+### 신용 카드 사기 탐지
+
+이 과정에서 [k-means 군집화](../../5-Clustering/2-K-Means/README.md)에 대해 배웠지만, 이를 신용 카드 사기 문제 해결에 어떻게 사용할 수 있을까요?
+
+k-means 군집화는 **이상치 탐지**라는 신용 카드 사기 탐지 기법에서 유용합니다. 데이터 세트에 대한 관찰에서 벗어난 이상치 또는 편차는 신용 카드가 정상적으로 사용되고 있는지 아니면 비정상적인 일이 일어나고 있는지를 알려줄 수 있습니다. 아래 링크된 논문에서 보여지듯이, k-means 군집화 알고리즘을 사용하여 신용 카드 데이터를 정렬하고 각 거래를 얼마나 이상치로 보이는지에 따라 군집에 할당할 수 있습니다. 그런 다음 사기 거래와 합법 거래를 평가할 수 있습니다.
+[참고자료](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
+
+### 자산 관리
+
+자산 관리에서는 개인이나 회사가 고객을 대신하여 투자를 관리합니다. 그들의 일은 장기적으로 자산을 유지하고 성장시키는 것이므로 성과가 좋은 투자를 선택하는 것이 중요합니다.
+
+특정 투자가 어떻게 성과를 내는지 평가하는 한 가지 방법은 통계적 회귀를 사용하는 것입니다. [선형 회귀](../../2-Regression/1-Tools/README.md)는 펀드가 어떤 기준에 비해 어떻게 성과를 내는지 이해하는 데 유용한 도구입니다. 또한 회귀 결과가 통계적으로 유의미한지, 즉 고객의 투자에 얼마나 영향을 미치는지 추론할 수 있습니다. 추가 위험 요소를 고려하는 다중 회귀를 사용하여 분석을 확장할 수도 있습니다. 특정 펀드에 대해 회귀를 사용하여 성과를 평가하는 방법에 대한 예는 아래 논문을 참조하세요.
+[참고자료](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 교육
+
+교육 부문은 ML을 적용할 수 있는 매우 흥미로운 분야입니다. 시험이나 에세이에서 부정행위를 감지하거나 채점 과정에서 의도적이든 아니든 편향을 관리하는 등의 흥미로운 문제들이 있습니다.
+
+### 학생 행동 예측
+
+온라인 공개 강좌 제공자인 [Coursera](https://coursera.com)는 많은 엔지니어링 결정을 논의하는 훌륭한 기술 블로그를 운영하고 있습니다. 이 사례 연구에서, 그들은 낮은 NPS(순 추천 지수) 평가와 코스 유지율 또는 이탈 간의 상관 관계를 탐구하기 위해 회귀선을 그렸습니다.
+[참고자료](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### 편향 완화
+
+철자 및 문법 오류를 검사하는 쓰기 도우미인 [Grammarly](https://grammarly.com)는 제품 전반에 걸쳐 정교한 [자연어 처리 시스템](../../6-NLP/README.md)을 사용합니다. 그들은 기계 학습에서 성별 편향을 다루는 방법에 대한 흥미로운 사례 연구를 기술 블로그에 게시했습니다. 이는 우리의 [공정성 소개 강의](../../1-Introduction/3-fairness/README.md)에서 배운 내용과 관련이 있습니다.
+[참고자료](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 소매
+
+소매 부문은 고객 여정을 개선하고 재고를 최적화하는 등 다양한 측면에서 ML을 활용할 수 있습니다.
+
+### 고객 여정 개인화
+
+가구와 같은 홈 상품을 판매하는 Wayfair에서는 고객이 자신의 취향과 필요에 맞는 제품을 찾는 것이 중요합니다. 이 기사에서 회사의 엔지니어들은 ML과 NLP를 사용하여 "고객에게 적합한 결과를 제공"하는 방법을 설명합니다. 특히, 그들의 Query Intent Engine은 엔티티 추출, 분류기 학습, 자산 및 의견 추출, 고객 리뷰에 대한 감정 태그를 사용하여 구축되었습니다. 이는 온라인 소매에서 NLP가 어떻게 작동하는지에 대한 고전적인 사용 사례입니다.
+[참고자료](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### 재고 관리
+
+의류를 소비자에게 배송하는 박스 서비스인 [StitchFix](https://stitchfix.com)와 같은 혁신적이고 민첩한 회사는 추천 및 재고 관리를 위해 ML에 크게 의존합니다. 그들의 스타일링 팀은 실제로 상품 팀과 협력합니다: "우리의 데이터 과학자 중 한 명이 유전 알고리즘을 가지고 실험하여 오늘날 존재하지 않는 성공적인 의류를 예측했습니다. 우리는 그것을 상품 팀에 가져갔고 이제 그들은 그것을 도구로 사용할 수 있습니다."
+[참고자료](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 의료
+
+의료 부문은 연구 작업을 최적화하고 환자 재입원 또는 질병 확산 방지와 같은 물류 문제를 해결하기 위해 ML을 활용할 수 있습니다.
+
+### 임상 시험 관리
+
+임상 시험에서의 독성은 약 제조업체에게 주요한 관심사입니다. 얼마나 많은 독성이 허용될 수 있을까요? 이 연구에서는 다양한 임상 시험 방법을 분석하여 임상 시험 결과를 예측하는 새로운 접근 방식을 개발했습니다. 특히, 랜덤 포레스트를 사용하여 약물 그룹을 구별할 수 있는 [분류기](../../4-Classification/README.md)를 생성할 수 있었습니다.
+[참고자료](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### 병원 재입원 관리
+
+병원 치료는 비용이 많이 들며, 특히 환자를 재입원시켜야 할 때 더욱 그렇습니다. 이 논문에서는 [군집화](../../5-Clustering/README.md) 알고리즘을 사용하여 재입원 가능성을 예측하는 회사를 다룹니다. 이러한 군집은 분석가가 "공통 원인을 공유할 수 있는 재입원 그룹을 발견"하는 데 도움이 됩니다.
+[참고자료](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### 질병 관리
+
+최근의 팬데믹은 머신 러닝이 질병 확산을 막는 데 어떻게 도움이 될 수 있는지를 분명히 보여주었습니다. 이 기사에서는 ARIMA, 로지스틱 곡선, 선형 회귀 및 SARIMA의 사용을 인식할 수 있습니다. "이 작업은 이 바이러스의 확산 속도를 계산하고 사망, 회복 및 확인된 사례를 예측하여 더 잘 준비하고 생존할 수 있도록 돕기 위한 시도입니다."
+[참고자료](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 생태 및 그린 테크
+
+자연과 생태는 동물과 자연 간의 상호작용이 중요한 민감한 시스템으로 구성됩니다. 이러한 시스템을 정확하게 측정하고 산불이나 동물 개체수 감소와 같은 일이 발생할 때 적절하게 대응하는 것이 중요합니다.
+
+### 산림 관리
+
+이전 강의에서 [강화 학습](../../8-Reinforcement/README.md)에 대해 배웠습니다. 이는 자연에서 패턴을 예측하는 데 매우 유용할 수 있습니다. 특히, 산불 및 침입 종의 확산과 같은 생태 문제를 추적하는 데 사용할 수 있습니다. 캐나다에서는 연구자들이 위성 이미지를 사용하여 강화 학습을 통해 산불 역학 모델을 구축했습니다. 혁신적인 "공간 확산 과정(SSP)"을 사용하여 산불을 "경관의 모든 셀에서 에이전트로" 상상했습니다. "화재가 특정 위치에서 특정 시간에 취할 수 있는 행동 세트에는 북쪽, 남쪽, 동쪽, 서쪽으로 확산하거나 확산하지 않는 것이 포함됩니다."
+
+이 접근 방식은 해당 마르코프 결정 과정(MDP)의 동역학이 즉각적인 산불 확산에 대해 알려진 함수이기 때문에 일반적인 RL 설정을 역전시킵니다. 이 그룹이 사용한 고전 알고리즘에 대한 자세한 내용은 아래 링크에서 확인할 수 있습니다.
+[참고자료](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### 동물의 움직임 감지
+
+딥러닝은 동물의 움직임을 시각적으로 추적하는 데 혁신을 일으켰지만 (여기서 자신의 [북극곰 추적기](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott)를 구축할 수 있습니다), 고전적인 ML도 이 작업에서 여전히 중요한 역할을 합니다.
+
+농장 동물의 움직임을 추적하는 센서와 IoT는 이러한 유형의 시각 처리 기술을 사용하지만, 더 기본적인 ML 기술은 데이터를 전처리하는 데 유용합니다. 예를 들어, 이 논문에서는 다양한 분류기 알고리즘을 사용하여 양의 자세를 모니터링하고 분석했습니다. 335페이지에서 ROC 곡선을 확인할 수 있습니다.
+[참고자료](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ 에너지 관리
+
+[시계열 예측](../../7-TimeSeries/README.md) 수업에서 공급과 수요를 이해하여 마을의 수익을 창출하는 스마트 주차 미터 개념을 도입했습니다. 이 기사는 클러스터링, 회귀 및 시계열 예측을 결합하여 아일랜드의 스마트 미터링을 기반으로 미래 에너지 사용을 예측하는 방법을 자세히 설명합니다.
+[참고자료](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 보험
+
+보험 부문은 실행 가능하고 최적화된 금융 및 보험 모델을 구축하고 최적화하는 데 ML을 사용합니다.
+
+### 변동성 관리
+
+생명 보험 제공자인 MetLife는 그들의 금융 모델에서 변동성을 분석하고 완화하는 방법을 공개적으로 설명합니다. 이 기사에서는 이진 및 서열 분류 시각화를 확인할 수 있습니다. 또한 예측 시각화도 발견할 수 있습니다.
+[참고자료](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 예술, 문화, 문학
+
+예술, 예를 들어 저널리즘에서는 많은 흥미로운 문제들이 있습니다. 가짜 뉴스를 감지하는 것은 사람들의 의견에 영향을 미치고 심지어 민주주의를 무너뜨리는 것으로 입증되었기 때문에 큰 문제입니다. 박물관도 유물 간의 연결을 찾거나 자원 계획에서 ML을 사용하는 등 많은 이점을 얻을 수 있습니다.
+
+### 가짜 뉴스 감지
+
+오늘날의 미디어에서 가짜 뉴스를 감지하는 것은 고양이와 쥐의 게임이 되었습니다. 이 기사에서 연구자들은 우리가 공부한 여러 ML 기법을 결합한 시스템을 테스트하고 최상의 모델을 배포할 수 있다고 제안합니다: "이 시스템은 데이터를 통해 특징을 추출하기 위해 자연어 처리를 기반으로 하며, 그런 다음 이 특징들은 나이브 베이즈, 서포트 벡터 머신 (SVM), 랜덤 포레스트 (RF), 확률적 경사 하강법 (SGD), 로지스틱 회귀 (LR)와 같은 머신 러닝 분류기를 훈련하는 데 사용됩니다."
+[참고자료](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
+이 기사는 다양한 ML 도메인을 결합하여 가짜 뉴스의 확산을 막고 실제 피해를 방지할 수 있는 흥미로운 결과를 도출할 수 있음을 보여줍니다. 이 경우, COVID 치료에 대한 소문이 폭력적인 군중을 선동한 것이 동기가 되었습니다.
+
+### 박물관 ML
+
+박물관은 컬렉션을 카탈로그화하고 디지털화하며 유물 간의 연결을 찾는 것이 기술 발전과 함께 점점 더 쉬워지면서 AI 혁명의 최전선에 있습니다. [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.)와 같은 프로젝트는 바티칸 기록 보관소와 같은 접근 불가능한 컬렉션의 신비를 풀어주는 데 도움이 되고 있습니다. 하지만, 박물관의 비즈니스 측면도 ML 모델의 혜택을 받습니다.
+
+예를 들어, 시카고 아트 인스티튜트는 관객들이 무엇에 관심이 있고 언제 전시회를 방문할지 예측하는 모델을 구축했습니다. 목표는 사용자가 박물관을 방문할 때마다 개인화되고 최적화된 방문 경험을 제공하는 것입니다. "2017 회계 연도 동안, 모델은 출석률과 입장료를 1% 이내의 정확도로 예측했습니다."라고 시카고 아트 인스티튜트의 수석 부사장인 Andrew Simnick는 말합니다.
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 마케팅
+
+### 고객 세분화
+
+가장 효과적인 마케팅 전략은 다양한 그룹에 기반하여 고객을 다르게 타겟팅하는 것입니다. 이 기사에서는 클러스터링 알고리즘의 사용을 통해 차별화된 마케팅을 지원하는 방법을 다룹니다. 차별화된 마케팅은 기업이 브랜드 인지도를 높이고, 더 많은 고객에게 도달하며, 더 많은 수익을 창출하는 데 도움이 됩니다.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 도전 과제
+
+이 커리큘럼에서 배운 기술들 중 일부를 활용하는 또 다른 분야를 찾아보고, 그 분야가 어떻게 ML을 사용하는지 알아보세요.
+
+## [강의 후 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## 복습 및 자습
+
+Wayfair 데이터 과학 팀은 회사에서 ML을 어떻게 사용하는지에 대한 여러 흥미로운 비디오를 가지고 있습니다. [한번 살펴보는 것](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)도 좋습니다!
+
+## 과제
+
+[ML 보물찾기](assignment.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/9-Real-World/1-Applications/assignment.md b/translations/ko/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..896640b5b
--- /dev/null
+++ b/translations/ko/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# ML 스캐빈저 헌트
+
+## 지침
+
+이 수업에서는 고전적인 ML을 사용하여 해결된 많은 실제 사용 사례에 대해 배웠습니다. 딥 러닝, 새로운 기술 및 도구의 사용, 신경망을 활용하는 것이 이러한 부문에서 도구 생산을 가속화하는 데 도움이 되었지만, 이 커리큘럼에서 다루는 기술을 사용한 고전적인 ML은 여전히 큰 가치를 지니고 있습니다.
+
+이 과제에서는 해커톤에 참가한다고 상상해 보세요. 커리큘럼에서 배운 내용을 사용하여 이 수업에서 논의된 부문 중 하나에서 문제를 해결하기 위해 고전적인 ML을 사용하는 솔루션을 제안해 보세요. 아이디어를 구현하는 방법을 논의하는 프레젠테이션을 작성하세요. 샘플 데이터를 수집하고 개념을 뒷받침할 ML 모델을 구축할 수 있다면 추가 점수를 받을 수 있습니다!
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | ------------------------------------------------------------------- | ------------------------------------------------- | ---------------------- |
+| | 파워포인트 프레젠테이션이 제시됨 - 모델을 구축하면 보너스 점수 | 독창성이 없는 기본적인 프레젠테이션이 제시됨 | 작업이 미완성임 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서는 해당 언어로 작성된 것이 권위 있는 출처로 간주되어야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역을 사용하여 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/9-Real-World/2-Debugging-ML-Models/README.md b/translations/ko/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..d8406622b
--- /dev/null
+++ b/translations/ko/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,139 @@
+# 후기: 책임 있는 AI 대시보드 구성 요소를 사용한 머신러닝 모델 디버깅
+
+## [사전 강의 퀴즈](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## 소개
+
+머신러닝은 우리의 일상 생활에 큰 영향을 미칩니다. AI는 의료, 금융, 교육, 고용 등 개인과 사회에 영향을 미치는 중요한 시스템에 점점 더 많이 도입되고 있습니다. 예를 들어, 시스템과 모델은 의료 진단이나 사기 탐지와 같은 일상적인 의사 결정 작업에 관여하고 있습니다. 이에 따라 AI의 발전과 빠른 채택에 발맞추어 사회적 기대도 진화하고 규제도 늘어나고 있습니다. 우리는 AI 시스템이 기대에 미치지 못해 새로운 문제를 드러내는 사례를 계속 목격하고 있으며, 각국 정부도 AI 솔루션을 규제하기 시작했습니다. 따라서 모든 사람에게 공정하고 신뢰할 수 있으며 포용적이고 투명하며 책임 있는 결과를 제공하는지 이러한 모델을 분석하는 것이 중요합니다.
+
+이 교육 과정에서는 모델에 책임 있는 AI 문제가 있는지 평가할 수 있는 실용적인 도구를 살펴보겠습니다. 전통적인 머신러닝 디버깅 기술은 주로 집계된 정확도나 평균 오류 손실과 같은 정량적 계산에 기반합니다. 이러한 모델을 구축하는 데 사용하는 데이터에 인종, 성별, 정치적 견해, 종교 등 특정 인구 통계가 부족하거나 불균형적으로 대표되는 경우 어떤 일이 발생할 수 있는지 상상해보세요. 모델의 출력이 특정 인구 통계를 선호하도록 해석되면 어떻게 될까요? 이는 민감한 특성 그룹의 과잉 또는 과소 대표를 초래하여 모델의 공정성, 포괄성 또는 신뢰성 문제를 일으킬 수 있습니다. 또 다른 요소는 머신러닝 모델이 블랙박스로 간주되어 모델의 예측을 무엇이 주도하는지 이해하고 설명하기 어렵다는 점입니다. 이러한 모든 문제는 모델의 공정성이나 신뢰성을 디버깅하고 평가할 적절한 도구가 없을 때 데이터 과학자와 AI 개발자가 직면하는 도전 과제입니다.
+
+이 수업에서는 다음을 통해 모델을 디버깅하는 방법을 배울 것입니다:
+
+- **오류 분석**: 데이터 분포에서 모델의 오류율이 높은 위치를 식별합니다.
+- **모델 개요**: 다른 데이터 코호트 간의 비교 분석을 수행하여 모델의 성능 지표에서 차이를 발견합니다.
+- **데이터 분석**: 데이터가 과잉 또는 과소 대표되는 위치를 조사하여 모델이 한 데이터 인구 통계를 다른 데이터 인구 통계보다 선호하도록 왜곡될 수 있는지 조사합니다.
+- **특성 중요도**: 글로벌 수준 또는 로컬 수준에서 모델의 예측을 주도하는 특성을 이해합니다.
+
+## 전제 조건
+
+전제 조건으로, [개발자를 위한 책임 있는 AI 도구](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)를 검토해 주세요.
+
+## 오류 분석
+
+정확도를 측정하는 데 사용되는 전통적인 모델 성능 지표는 주로 올바른 예측과 잘못된 예측에 기반한 계산입니다. 예를 들어, 모델이 89%의 정확도를 가지고 오류 손실이 0.001인 경우 좋은 성능으로 간주될 수 있습니다. 하지만 오류는 기본 데이터 세트에서 균일하게 분포되지 않는 경우가 많습니다. 89%의 모델 정확도 점수를 얻고도 모델이 42%의 비율로 실패하는 다른 데이터 영역을 발견할 수 있습니다. 특정 데이터 그룹에서 나타나는 이러한 실패 패턴은 공정성 또는 신뢰성 문제로 이어질 수 있습니다. 모델이 잘 수행되는 영역과 그렇지 않은 영역을 이해하는 것이 중요합니다. 모델의 부정확성이 많은 데이터 영역이 중요한 데이터 인구 통계일 수 있기 때문입니다.
+
+
+
+RAI 대시보드의 오류 분석 구성 요소는 트리 시각화를 통해 다양한 코호트에서 모델 실패가 어떻게 분포되는지 보여줍니다. 이는 데이터 세트에서 높은 오류율을 가진 특성이나 영역을 식별하는 데 유용합니다. 모델의 부정확성이 대부분 어디에서 발생하는지 확인하여 근본 원인을 조사하기 시작할 수 있습니다. 또한 분석을 수행하기 위해 데이터 코호트를 생성할 수 있습니다. 이러한 데이터 코호트는 모델 성능이 한 코호트에서는 좋지만 다른 코호트에서는 오류가 발생하는 이유를 파악하는 데 도움이 됩니다.
+
+
+
+트리 맵의 시각적 지표는 문제 영역을 더 빨리 찾는 데 도움이 됩니다. 예를 들어, 트리 노드가 더 짙은 빨간색일수록 오류율이 높습니다.
+
+히트 맵은 사용자가 전체 데이터 세트 또는 코호트에서 모델 오류의 기여자를 찾기 위해 하나 또는 두 개의 특성을 사용하여 오류율을 조사하는 데 사용할 수 있는 또 다른 시각화 기능입니다.
+
+
+
+오류 분석을 사용해야 할 때:
+
+* 데이터 세트와 여러 입력 및 특성 차원에서 모델 실패가 어떻게 분포되는지 깊이 이해합니다.
+* 집계 성능 지표를 분해하여 목표로 하는 완화 단계를 알리기 위해 자동으로 오류 코호트를 발견합니다.
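+
+아래는 `responsibleai`와 `raiwidgets` 패키지로 오류 분석 구성 요소를 포함한 RAI 대시보드를 구성해 보는 대략적인 스케치입니다. 패키지 버전에 따라 API가 다를 수 있으므로 공식 문서를 함께 확인하세요. 데이터 세트와 모델은 설명을 위한 예시일 뿐, 이 수업의 특정 시나리오를 재현한 것은 아닙니다.
+
+```python
+# pip install raiwidgets responsibleai  (설치되어 있다고 가정한 스케치입니다)
+from sklearn.datasets import load_breast_cancer
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# 예시용 공개 데이터 세트로 간단한 분류 모델을 학습합니다
+data = load_breast_cancer(as_frame=True)
+df = data.frame  # 특징 열과 'target' 레이블 열을 포함한 DataFrame
+train_df, test_df = train_test_split(df, random_state=0)
+
+model = RandomForestClassifier(random_state=0)
+model.fit(train_df.drop(columns=["target"]), train_df["target"])
+
+# RAIInsights를 구성하고 오류 분석 구성 요소를 추가합니다
+rai_insights = RAIInsights(model, train_df, test_df, "target", task_type="classification")
+rai_insights.error_analysis.add()
+rai_insights.compute()
+
+# 트리 맵과 히트 맵이 포함된 대시보드를 엽니다
+ResponsibleAIDashboard(rai_insights)
+```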
+
+## 모델 개요
+
+머신러닝 모델의 성능을 평가하려면 모델의 행동에 대한 전체적인 이해가 필요합니다. 이를 위해 오류율, 정확도, 재현율, 정밀도 또는 MAE(Mean Absolute Error)와 같은 여러 지표를 검토하여 성능 지표 간의 차이를 찾을 수 있습니다. 하나의 성능 지표는 훌륭해 보일 수 있지만 다른 지표에서 부정확성이 드러날 수 있습니다. 또한 전체 데이터 세트 또는 코호트 간의 지표를 비교하면 모델이 잘 수행되는지 여부를 확인하는 데 도움이 됩니다. 이는 특히 민감한 특성(예: 환자의 인종, 성별 또는 나이)과 비민감한 특성 간의 모델 성능을 확인하여 모델이 가질 수 있는 잠재적인 불공정성을 발견하는 데 중요합니다. 예를 들어, 민감한 특성을 가진 코호트에서 모델이 더 많은 오류를 일으킨다는 것을 발견하면 모델이 가질 수 있는 잠재적인 불공정성을 드러낼 수 있습니다.
+
+RAI 대시보드의 모델 개요 구성 요소는 데이터 코호트의 성능 지표를 분석하는 데 도움이 될 뿐만 아니라 사용자가 다양한 코호트 간의 모델 행동을 비교할 수 있는 기능을 제공합니다.
+
+
+
+구성 요소의 특성 기반 분석 기능을 사용하면 특정 특성 내의 데이터 하위 그룹으로 범위를 좁혀 세부 수준에서 이상 현상을 식별할 수 있습니다. 예를 들어, 대시보드에는 사용자가 선택한 특성에 대해 자동으로 코호트를 생성하는 내장 인텔리전스가 있습니다(예: *"time_in_hospital < 3"* 또는 *"time_in_hospital >= 7"*). 이를 통해 사용자는 더 큰 데이터 그룹에서 특정 특성을 분리하여 그 특성이 모델의 잘못된 결과를 주도하는 핵심 요인인지 확인할 수 있습니다.
+
+
+
+모델 개요 구성 요소는 두 가지 클래스의 차이 지표를 지원합니다:
+
+**모델 성능의 차이**: 이 지표 세트는 데이터 하위 그룹 간에 선택한 성능 지표 값의 격차를 계산합니다. 몇 가지 예는 다음과 같습니다:
+
+* 정확도 비율의 차이
+* 오류율의 차이
+* 정밀도의 차이
+* 재현율의 차이
+* 평균 절대 오차(MAE)의 차이
+
+**선택 비율의 차이**: 이 지표는 하위 그룹 간의 선택 비율(유리한 예측의 비율)의 차이를 포함합니다. 예를 들어, 대출 승인 비율의 차이입니다. 선택 비율은 각 클래스에서 1로 분류된 데이터 포인트의 비율(이진 분류) 또는 예측 값의 분포(회귀)를 의미합니다.
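+
+차이 지표의 개념은 간단한 계산으로 직접 확인해 볼 수도 있습니다. 아래는 가상의 예측 결과를 사용해 두 코호트 간 정확도와 선택 비율의 차이를 pandas로 계산하는 스케치입니다(데이터는 설명을 위해 지어낸 값입니다).
+
+```python
+import pandas as pd
+
+# 가상의 예측 결과: 민감한 특성(sex), 실제 레이블(y_true), 모델 예측(y_pred)
+df = pd.DataFrame({
+    "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
+    "y_true": [1, 0, 1, 0, 1, 0, 1, 0],
+    "y_pred": [1, 0, 0, 0, 1, 1, 1, 0],
+})
+
+# 코호트별 정확도와 선택 비율(1로 예측된 비율)을 계산합니다
+summary = df.groupby("sex").apply(
+    lambda g: pd.Series({
+        "accuracy": (g["y_true"] == g["y_pred"]).mean(),
+        "selection_rate": (g["y_pred"] == 1).mean(),
+    })
+)
+print(summary)
+
+# 차이 지표: 코호트 간 최대값과 최소값의 차이
+print(summary.max() - summary.min())
+```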
+
+## 데이터 분석
+
+> "데이터를 충분히 고문하면 무엇이든 자백할 것입니다" - Ronald Coase
+
+이 말은 극단적으로 들리지만, 데이터는 어떤 결론을 뒷받침하기 위해 조작될 수 있다는 점에서 사실입니다. 이러한 조작은 때때로 의도하지 않게 발생할 수 있습니다. 인간으로서 우리는 모두 편견을 가지고 있으며 데이터를 다룰 때 편견을 도입하는 시점을 의식적으로 아는 것은 종종 어렵습니다. AI와 머신러닝에서 공정성을 보장하는 것은 여전히 복잡한 과제입니다.
+
+데이터는 전통적인 모델 성능 지표의 큰 맹점이 될 수 있습니다. 높은 정확도 점수를 얻었더라도, 그 점수가 데이터 세트에 숨어 있을 수 있는 근본적인 데이터 편향을 항상 반영하는 것은 아닙니다. 예를 들어, 회사의 임원직에 있는 여성 비율이 27%이고 남성 비율이 73%인 직원 데이터 세트로 학습한 채용 광고 AI 모델은 주로 남성 청중을 대상으로 고위직을 광고할 수 있습니다. 데이터의 이러한 불균형이 모델의 예측을 한 성별에 유리하게 왜곡한 것입니다. 이는 AI 모델에 성별 편향이 있다는 공정성 문제를 드러냅니다.
+
+RAI 대시보드의 데이터 분석 구성 요소는 데이터 세트에서 과잉 및 과소 대표되는 영역을 식별하는 데 도움이 됩니다. 이는 데이터 불균형이나 특정 데이터 그룹의 대표 부족으로 인해 발생한 오류와 공정성 문제의 근본 원인을 진단하는 데 도움이 됩니다. 이를 통해 사용자는 예측 및 실제 결과, 오류 그룹 및 특정 특성을 기반으로 데이터 세트를 시각화할 수 있습니다. 때로는 대표되지 않은 데이터 그룹을 발견하면 모델이 잘 학습하지 못하고 높은 부정확성을 초래한다는 것을 알 수 있습니다. 데이터 편향이 있는 모델은 공정성 문제일 뿐만 아니라 모델이 포괄적이거나 신뢰할 수 없음을 나타냅니다.
+
+
+
+데이터 분석을 사용해야 할 때:
+
+* 다양한 필터를 선택하여 데이터를 다양한 차원(코호트라고도 함)으로 분할하여 데이터 세트 통계를 탐색합니다.
+* 다양한 코호트 및 특성 그룹 간에 데이터 세트의 분포를 이해합니다.
+* 공정성, 오류 분석 및 인과 관계와 관련된 결과가 데이터 세트의 분포로 인한 것인지 여부를 결정합니다.
+* 대표 문제, 레이블 노이즈, 특성 노이즈, 레이블 편향 및 유사한 요인으로 인해 발생하는 오류를 완화하기 위해 데이터를 수집할 영역을 결정합니다.
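+
+예를 들어, 아래와 같이 코호트별 빈도를 교차표로 확인하면 과잉 또는 과소 대표 여부를 빠르게 점검할 수 있습니다. 직원 데이터는 설명을 위해 지어낸 가상의 분포입니다.
+
+```python
+import pandas as pd
+
+# 가상의 직원 데이터: 성별과 직급 (설명을 위한 임의의 분포)
+df = pd.DataFrame({
+    "gender": ["M"] * 73 + ["F"] * 27,
+    "role": ["executive"] * 50 + ["staff"] * 23 + ["executive"] * 9 + ["staff"] * 18,
+})
+
+# 직급별 성별 구성 비율: 임원직에서 한 성별이 과잉 대표되는지 드러납니다
+print(pd.crosstab(df["role"], df["gender"], normalize="index"))
+```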
+
+## 모델 해석 가능성
+
+머신러닝 모델은 블랙박스로 간주되는 경우가 많습니다. 모델의 예측을 주도하는 주요 데이터 특성을 이해하는 것은 어려울 수 있습니다. 모델이 특정 예측을 하는 이유에 대한 투명성을 제공하는 것이 중요합니다. 예를 들어, AI 시스템이 당뇨병 환자가 30일 이내에 병원에 재입원할 위험이 있다고 예측하는 경우, 그 예측을 뒷받침하는 데이터를 함께 제공해야 합니다. 뒷받침하는 데이터가 있으면 임상의나 병원이 충분한 정보에 기반한 결정을 내리는 데 도움이 됩니다. 또한, 개별 환자에 대한 모델의 예측 이유를 설명할 수 있으면 의료 규제에 대한 책임성도 확보할 수 있습니다. 사람들의 삶에 영향을 미치는 방식으로 머신러닝 모델을 사용할 때는 모델의 행동에 영향을 미치는 요소를 이해하고 설명하는 것이 매우 중요합니다. 모델의 해석 가능성과 설명 가능성은 다음과 같은 시나리오에서 질문에 답하는 데 도움이 됩니다:
+
+* 모델 디버깅: 내 모델이 왜 이 실수를 했을까? 내 모델을 어떻게 개선할 수 있을까?
+* 인간-AI 협업: 모델의 결정을 어떻게 이해하고 신뢰할 수 있을까?
+* 규제 준수: 내 모델이 법적 요구 사항을 충족하는가?
+
+RAI 대시보드의 특성 중요도 구성 요소는 모델이 예측을 어떻게 수행하는지 디버깅하고 포괄적으로 이해하는 데 도움이 됩니다. 또한 머신러닝 전문가와 의사 결정자가 모델의 행동에 영향을 미치는 특성을 설명하고 증거를 제공하는 데 유용한 도구입니다. 다음으로, 사용자는 글로벌 및 로컬 설명을 모두 탐색하여 모델의 예측을 주도하는 특성을 확인할 수 있습니다. 글로벌 설명은 모델의 전체 예측에 영향을 미친 주요 특성을 나열합니다. 로컬 설명은 개별 사례에 대한 모델의 예측을 이끈 특성을 표시합니다. 로컬 설명을 평가하는 기능은 특정 사례를 디버깅하거나 감사하는 데도 유용하여 모델이 정확하거나 부정확한 예측을 한 이유를 더 잘 이해하고 해석할 수 있습니다.
+
+
+
+* 글로벌 설명: 예를 들어, 당뇨병 병원 재입원 모델의 전체 행동에 영향을 미치는 특성은 무엇인가?
+* 로컬 설명: 예를 들어, 왜 60세 이상의 당뇨병 환자가 이전에 병원에 입원한 적이 있는 경우 30일 이내에 재입원하거나 재입원하지 않을 것으로 예측되었는가?
+
+다양한 코호트에서 모델의 성능을 검사하는 디버깅 과정에서 특성 중요도는 코호트 간의 특성의 영향 수준을 보여줍니다. 이는 모델의 오류 예측을 주도하는 특성의 영향 수준을 비교할 때 이상 현상을 드러내는 데 도움이 됩니다. 특성 중요도 구성 요소는 특성의 값이 모델의 결과에 긍정적이거나 부정적으로 영향을 미쳤는지 보여줄 수 있습니다. 예를 들어, 모델이 부정확한 예측을 한 경우, 구성 요소는 예측을 주도한 특성이나 특성 값을 자세히 조사하고 식별할 수 있는 기능을 제공합니다. 이러한 세부 수준은 디버깅뿐만 아니라 감사 상황에서 투명성과 책임성을 제공하는 데 도움이 됩니다. 마지막으로, 구성 요소는 공정성 문제를 식별하는 데 도움이 될 수 있습니다. 예를 들어, 민감한 특성(예: 인종 또는 성별)이 모델의 예측을 주도하는 데 매우 영향력이 있는 경우, 이는 모델에 인종 또는 성별 편향이 있을 수 있음을 나타낼 수 있습니다.
+
+
+
+해석 가능성을 사용해야 할 때:
+
+* 모델의 예측이 얼마나 신뢰할 수 있는지 이해하여 예측에 가장 중요한 특성을 결정합니다.
+* 모델을 먼저 이해하고 모델이 건강한 특성을 사용하는지 아니면 단순히 잘못된 상관 관계를 사용하는지 식별하여 모델을 디버깅합니다.
+* 모델이 민감한 특성이나 해당 특성과 높은 상관 관계가 있는 특성을 기반으로 예측하는지 이해하여 잠재적인 불공정성의 원인을 발견합니다.
+* 로컬 설명을 생성하여 모델의 결정을 이해하고 신뢰할 수 있도록 사용자 신뢰를 구축합니다.
+* AI 시스템의 규제 감사를 완료하여 모델을 검증하고 모델 결정이 사람들에게 미치는 영향을 모니터링합니다.
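+
+전역 특성 중요도의 기본 아이디어는 RAI 대시보드 없이도 scikit-learn의 permutation importance로 가늠해 볼 수 있습니다. 아래는 예시용 공개 데이터 세트에 대한 간단한 스케치이며, RAI 대시보드가 내부적으로 사용하는 설명 기법 자체를 보여주는 것은 아닙니다.
+
+```python
+from sklearn.datasets import load_breast_cancer
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.inspection import permutation_importance
+from sklearn.model_selection import train_test_split
+
+X, y = load_breast_cancer(return_X_y=True, as_frame=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
+
+# 각 특성을 무작위로 섞었을 때 성능이 얼마나 떨어지는지로 전역 중요도를 추정합니다
+result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
+ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
+for name, score in ranked[:5]:  # 모델 예측에 가장 큰 영향을 준 상위 5개 특성
+    print(f"{name}: {score:.3f}")
+```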
+
+## 결론
+
+RAI 대시보드의 모든 구성 요소는 사회에 덜 해롭고 더 신뢰할 수 있는 머신러닝 모델을 구축하는 데 도움이 되는 실용적인 도구입니다. 이는 인권 위협의 예방을 개선하고, 특정 그룹을 삶의 기회에서 차별하거나 배제하는 것을 방지하며, 신체적 또는 심리적 부상의 위험을 줄입니다. 또한 로컬 설명을 생성하여 모델의 결정을 시각화함으로써 모델의 결정에 대한 신뢰를 구축하는 데 도움이 됩니다. 잠재적인 해악은 다음과 같이 분류할 수 있습니다:
+
+- **할당**, 예를 들어 성별이나 인종이 다른 것보다 우대되는 경우.
+- **서비스 품질**. 하나의 특정 시나리오에 대해 데이터를 학습시키지만 실제 상황은 훨씬 더 복잡한 경우, 이는 성능이 저하된 서비스로 이어집니다.
+- **고정관념**. 특정 그룹을 사전 할당된 속성과 연결하는 것.
+- **비난**. 공정하지 않게 비판하고 라벨을 붙이는 것.
+- **과잉 또는 과소 대표**. 특정 그룹이 특정 직업에서 보이지 않으며, 이를 계속 촉진하는 서비스나 기능은 해악을 초래합니다.
+
+### Azure RAI 대시보드
+
+[Azure RAI 대시보드](https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu)는 Microsoft를 포함한 주요 학술 기관 및 조직에서 개발한 오픈 소스 도구를 기반으로 하여 데이터 과학자와 AI 개발자가 모델의 행동을 더 잘 이해하고 AI 모델에서 바람직하지 않은 문제를 발견하고 완화하는 데 중요한 역할을 합니다.
+
+- 다양한 구성 요소를 사용하는 방법을 알아보려면 RAI 대시보드 [문서](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu)를 확인하세요.
+
+- Azure 머신러닝에서 더 책임 있는 AI 시나리오를 디버깅하기 위한 RAI 대시보드 [샘플 노트북](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks)을 확인하세요.
+
+---
+## 🚀 도전 과제
+
+통계적 또는 데이터 편향이 처음부터 도입되지 않도록 하려면 다음과 같이 해야 합니다:
+
+- 시스템을 개발하는 사람들의 배경과 관점을 다양하게 구성합니다
+- 우리 사회의 다양성을 반영하는 데이터 세트에 투자합니다
+- 편향이 발생했을 때 이를 감지하고 수정하는 더 나은 방법을 개발합니다
+
+모델 구축과 사용에서 불공정성이 분명하게 드러나는 실제 시나리오를 생각해 보세요. 그 밖에 무엇을 고려해야 할까요?
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서는 원어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/ko/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..dfa45aa9e
--- /dev/null
+++ b/translations/ko/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# 책임 있는 AI (RAI) 대시보드 탐색
+
+## 지침
+
+이 강의에서 여러분은 데이터 과학자들이 오류 분석, 데이터 탐색, 공정성 평가, 모델 해석 가능성, 반사실/가정 분석 및 인과 분석을 수행할 수 있도록 돕는 "오픈 소스" 도구를 기반으로 한 구성 요소 모음인 RAI 대시보드에 대해 배웠습니다. 이번 과제에서는 RAI 대시보드의 샘플 [노트북](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) 중 일부를 탐색하고, 탐색 결과를 논문이나 프레젠테이션 형식으로 보고하세요.
+
+## 평가 기준
+
+| 기준 | 모범적 | 적절함 | 개선 필요 |
+| -------- | --------- | -------- | ----------------- |
+| | RAI 대시보드의 구성 요소, 실행된 노트북, 그리고 실행 결과로 도출된 결론을 논의하는 논문 또는 파워포인트 프레젠테이션이 제시됨 | 결론 없이 논문만 제시됨 | 논문이 제시되지 않음 |
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서는 해당 언어로 작성된 것이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/9-Real-World/README.md b/translations/ko/9-Real-World/README.md
new file mode 100644
index 000000000..4261ce922
--- /dev/null
+++ b/translations/ko/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# 후기: 고전적 머신 러닝의 실제 응용 사례
+
+이 커리큘럼의 이 섹션에서는 고전적 머신 러닝의 실제 응용 사례를 소개합니다. 우리는 인터넷을 뒤져서 이러한 전략을 사용한 응용 프로그램에 대한 백서와 기사를 찾아냈으며, 가능한 한 신경망, 딥 러닝 및 AI를 피했습니다. 비즈니스 시스템, 생태 응용 프로그램, 금융, 예술 및 문화 등에서 머신 러닝이 어떻게 사용되는지 알아보세요.
+
+
+
+> 사진 제공: Unsplash의 Alexis Fauvet
+
+## 강의
+
+1. [ML의 실제 응용 사례](1-Applications/README.md)
+2. [책임 있는 AI 대시보드 구성 요소를 사용한 머신 러닝 모델 디버깅](2-Debugging-ML-Models/README.md)
+
+## 크레딧
+
+"실제 응용 사례"는 [Jen Looper](https://twitter.com/jenlooper)와 [Ornella Altunyan](https://twitter.com/ornelladotcom)을 포함한 여러 팀원이 작성했습니다.
+
+"책임 있는 AI 대시보드 구성 요소를 사용한 머신 러닝 모델 디버깅"은 [Ruth Yakubu](https://twitter.com/ruthieyakubu)가 작성했습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/CODE_OF_CONDUCT.md b/translations/ko/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..6e4ea3d90
--- /dev/null
+++ b/translations/ko/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft 오픈 소스 행동 강령
+
+이 프로젝트는 [Microsoft 오픈 소스 행동 강령](https://opensource.microsoft.com/codeofconduct/)을 채택했습니다.
+
+리소스:
+
+- [Microsoft 오픈 소스 행동 강령](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft 행동 강령 FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- 질문이나 우려 사항이 있으면 [opencode@microsoft.com](mailto:opencode@microsoft.com)으로 연락하세요.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전을 권위 있는 출처로 간주해야 합니다. 중요한 정보의 경우 전문 인간 번역을 권장합니다. 이 번역의 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/CONTRIBUTING.md b/translations/ko/CONTRIBUTING.md
new file mode 100644
index 000000000..7e31c8f23
--- /dev/null
+++ b/translations/ko/CONTRIBUTING.md
@@ -0,0 +1,13 @@
+# 기여하기
+
+이 프로젝트는 기여와 제안을 환영합니다. 대부분의 기여는 Contributor License Agreement (CLA)에 동의해야 하며, 이는 귀하가 귀하의 기여를 사용할 수 있는 권리를 우리에게 부여할 권리가 있음을 선언하는 것입니다. 자세한 내용은 https://cla.microsoft.com을 참조하세요.
+
+> 중요: 이 저장소의 텍스트를 번역할 때는 기계 번역을 사용하지 않도록 주의해 주세요. 번역은 커뮤니티를 통해 검증될 예정이므로, 능숙한 언어로만 번역에 자원해 주세요.
+
+풀 리퀘스트를 제출하면 CLA-봇이 자동으로 CLA를 제공해야 하는지 여부를 결정하고 PR에 적절하게 표시(예: 레이블, 댓글)합니다. 봇이 제공하는 지침을 따르기만 하면 됩니다. CLA를 사용하는 모든 저장소에 대해 한 번만 이 작업을 수행하면 됩니다.
+
+이 프로젝트는 [Microsoft 오픈 소스 행동 강령](https://opensource.microsoft.com/codeofconduct/)을 채택했습니다.
+자세한 내용은 [행동 강령 FAQ](https://opensource.microsoft.com/codeofconduct/faq/)를 참조하거나 [opencode@microsoft.com](mailto:opencode@microsoft.com)으로 추가 질문이나 의견을 보내주세요.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/README.md b/translations/ko/README.md
new file mode 100644
index 000000000..62e6ad5cf
--- /dev/null
+++ b/translations/ko/README.md
@@ -0,0 +1,155 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# 초보자를 위한 머신 러닝 - 커리큘럼
+
+> 🌍 세계 문화를 통해 머신 러닝을 탐험하는 여행을 떠나보세요 🌍
+
+Microsoft의 클라우드 애드보킷(Cloud Advocates)들이 **머신 러닝**에 대한 12주, 26강 커리큘럼을 제공하게 되어 기쁩니다. 이 커리큘럼에서는 주로 Scikit-learn 라이브러리를 사용하는 **클래식 머신 러닝**을 배우며, 딥러닝은 [초보자를 위한 AI 커리큘럼](https://aka.ms/ai4beginners)에서 다룹니다. 이 강의와 함께 ['초보자를 위한 데이터 과학' 커리큘럼](https://aka.ms/ds4beginners)도 활용해보세요!
+
+세계 각지의 데이터를 이용하여 이 클래식 기술을 적용하면서 우리와 함께 세계를 여행하세요. 각 강의에는 강의 전후의 퀴즈, 강의 완성에 필요한 서면 지침, 솔루션, 과제 등이 포함되어 있습니다. 프로젝트 기반 학습법을 통해 새로운 기술을 습득하면서 배울 수 있습니다.
+
+**✍️ 저자들에게 깊은 감사의 인사를 전합니다** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu, 그리고 Amy Boyd
+
+**🎨 일러스트레이터들에게도 감사드립니다** Tomomi Imura, Dasani Madipalli, 그리고 Jen Looper
+
+**🙏 특별히 Microsoft Student Ambassador 저자, 리뷰어, 그리고 콘텐츠 기여자들에게 감사드립니다**, 특히 Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, 그리고 Snigdha Agarwal
+
+**🤩 R 강의에 도움을 준 Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, 그리고 Vidushi Gupta에게도 특별히 감사드립니다!**
+
+# 시작하기
+
+다음 단계를 따르세요:
+1. **저장소 포크하기**: 이 페이지 오른쪽 상단의 "Fork" 버튼을 클릭하세요.
+2. **저장소 클론하기**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [이 강좌에 대한 추가 리소스는 Microsoft Learn 컬렉션에서 확인하세요](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[학생들](https://aka.ms/student-page)**, 이 커리큘럼을 사용하려면 전체 저장소를 자신의 GitHub 계정으로 포크하고 혼자 또는 그룹과 함께 연습을 완료하세요:
+
+- 강의 전 퀴즈부터 시작하세요.
+- 강의를 읽고 활동을 완료하세요. 각 지식 점검에서 멈추고 생각해보세요.
+- 솔루션 코드를 실행하지 않고 강의를 이해하여 프로젝트를 시도하세요. 그러나 해당 코드는 각 프로젝트 기반 강의의 `/solution` 폴더에 있습니다.
+- 강의 후 퀴즈를 풀어보세요.
+- 도전을 완료하세요.
+- 과제를 완료하세요.
+- 강의 그룹을 완료한 후 [토론 게시판](https://github.com/microsoft/ML-For-Beginners/discussions)에 방문하여 해당 PAT 루브릭을 작성해 "소리 내어 학습"하세요. 'PAT'는 진행 평가 도구(Progress Assessment Tool)로, 학습을 더욱 발전시키기 위해 작성하는 루브릭입니다. 다른 PAT에 반응하며 함께 학습할 수도 있습니다.
+
+> 추가 학습을 위해 이 [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott) 모듈과 학습 경로를 따르기를 권장합니다.
+
+**교사들**, 이 커리큘럼을 사용하는 방법에 대한 [몇 가지 제안](for-teachers.md)을 포함시켰습니다.
+
+---
+
+## 비디오 워크스루
+
+일부 강의는 짧은 형식의 비디오로 제공됩니다. 이 비디오는 강의 내에 인라인으로 포함되어 있거나, [Microsoft Developer YouTube 채널의 초보자를 위한 머신 러닝 재생 목록](https://aka.ms/ml-beginners-videos)에서 이미지를 클릭하여 찾을 수 있습니다.
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## 팀 소개
+
+[](https://youtu.be/Tj1XWrDSYJU "홍보 비디오")
+
+**Gif by** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 위 이미지를 클릭하여 프로젝트와 창작자들에 대한 비디오를 확인하세요!
+
+---
+
+## 교육 방법론
+
+이 커리큘럼을 만들 때 두 가지 교육 원칙을 선택했습니다: **프로젝트 기반** 학습과 **빈번한 퀴즈**를 포함하는 것입니다. 또한, 이 커리큘럼에는 일관성을 위해 공통 **주제**가 있습니다.
+
+프로젝트와 연계된 콘텐츠를 보장함으로써 학습 과정은 학생들에게 더 흥미롭고 개념의 유지율이 높아집니다. 또한, 수업 전 낮은 부담의 퀴즈는 학생이 주제를 학습할 의도를 설정하고, 수업 후 두 번째 퀴즈는 추가적인 개념 유지율을 보장합니다. 이 커리큘럼은 유연하고 재미있게 설계되었으며 전체 또는 일부를 수강할 수 있습니다. 프로젝트는 작게 시작하여 12주 사이클이 끝날 때 점점 복잡해집니다. 이 커리큘럼에는 ML의 실제 응용에 대한 후기가 포함되어 있으며, 이는 추가 학점으로 사용하거나 토론의 기초로 사용할 수 있습니다.
+
+> 우리의 [행동 강령](CODE_OF_CONDUCT.md), [기여](CONTRIBUTING.md), 그리고 [번역](TRANSLATIONS.md) 지침을 확인하세요. 건설적인 피드백을 환영합니다!
+
+## 각 강의에는 다음이 포함됩니다
+
+- 선택적 스케치 노트
+- 선택적 보충 비디오
+- 비디오 워크스루 (일부 강의만 해당)
+- 강의 전 워밍업 퀴즈
+- 서면 강의
+- 프로젝트 기반 강의를 위한 단계별 가이드
+- 지식 점검
+- 도전 과제
+- 보충 읽기 자료
+- 과제
+- 강의 후 퀴즈
+
+> **언어에 대한 주의 사항**: 이 강의는 주로 Python으로 작성되었지만, 많은 강의가 R로도 제공됩니다. R 강의를 완료하려면 `/solution` 폴더로 이동하여 R 강의를 찾으세요. 이 파일들은 **R Markdown** 파일을 나타내는 .rmd 확장자를 가지고 있습니다. R Markdown은 코드 청크(R 또는 다른 언어), 출력 형식(PDF 등)을 지정하는 `YAML header`, 그리고 `Markdown document`를 하나의 문서에 담는 형식입니다. 코드와 그 출력, 그리고 생각을 Markdown으로 함께 기록할 수 있어 데이터 과학을 위한 훌륭한 저작 프레임워크가 됩니다. 게다가 R Markdown 문서는 PDF, HTML, Word와 같은 출력 형식으로 렌더링될 수 있습니다.
+
+> **퀴즈에 대한 주의 사항**: 모든 퀴즈는 [퀴즈 앱 폴더](../../quiz-app)에 포함되어 있으며, 각각 3개의 질문으로 구성된 총 52개의 퀴즈가 있습니다. 퀴즈는 강의 내에서 링크되어 있지만, 퀴즈 앱을 로컬에서 실행할 수도 있습니다. `quiz-app` 폴더의 지침을 따라 로컬에서 호스트하거나 Azure에 배포하세요.
+
+| 강의 번호 | 주제 | 강의 그룹 | 학습 목표 | 링크된 강의 | 저자 |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | 머신 러닝 소개 | [소개](1-Introduction/README.md) | 머신 러닝의 기본 개념을 배우세요 | [강의](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | 머신 러닝의 역사 | [소개](1-Introduction/README.md) | 이 분야의 역사를 배우세요 | [강의](1-Introduction/2-history-of-ML/README.md) | Jen and Amy |
+| 03 | 공정성과 머신 러닝 | [소개](1-Introduction/README.md) | 학생들이 ML 모델을 구축하고 적용할 때 고려해야 할 중요한 철학적 문제는 무엇입니까? | [강의](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | 기계 학습을 위한 기법들 | [Introduction](1-Introduction/README.md) | ML 연구자들이 ML 모델을 구축하기 위해 사용하는 기법은 무엇일까요? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | Chris and Jen |
+| 05 | 회귀 소개 | [Regression](2-Regression/README.md) | 회귀 모델을 위해 Python과 Scikit-learn을 시작해보세요 | | |
+| 09 | 웹 앱 🔌 | [Web App](3-Web-App/README.md) | 학습된 모델을 사용하는 웹 앱을 구축하세요 | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | 분류 소개 | [Classification](4-Classification/README.md) | 데이터를 정리하고 준비하고 시각화하세요; 분류 소개 | | |
+| 13 | 맛있는 아시아 및 인도 요리 🍜 | [Classification](4-Classification/README.md) | 모델을 사용하여 추천 웹 앱을 구축하세요 | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | 클러스터링 소개 | [Clustering](5-Clustering/README.md) | 데이터를 정리하고 준비하고 시각화하세요; 클러스터링 소개 | | |
+| 16 | 자연어 처리 소개 ☕️ | [Natural language processing](6-NLP/README.md) | 간단한 봇을 만들어보며 NLP의 기본 개념 배우기 | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | 일반적인 NLP 작업 ☕️ | [Natural language processing](6-NLP/README.md) | 언어 구조를 다룰 때 필요한 일반적인 작업을 이해하며 NLP 지식 심화 | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | 번역 및 감정 분석 ♥️ | [Natural language processing](6-NLP/README.md) | 제인 오스틴과 함께하는 번역 및 감정 분석 | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | 유럽의 로맨틱 호텔 ♥️ | [Natural language processing](6-NLP/README.md) | 호텔 리뷰를 통한 감정 분석 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | 유럽의 로맨틱 호텔 ♥️ | [Natural language processing](6-NLP/README.md) | 호텔 리뷰를 통한 감정 분석 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | 시계열 예측 소개 | [Time series](7-TimeSeries/README.md) | 시계열 예측 소개 | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ 세계 전력 사용량 ⚡️ - ARIMA를 이용한 시계열 예측 | [Time series](7-TimeSeries/README.md) | ARIMA를 이용한 시계열 예측 | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ 세계 전력 사용량 ⚡️ - SVR을 이용한 시계열 예측 | [Time series](7-TimeSeries/README.md) | 서포트 벡터 회귀를 이용한 시계열 예측 | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | 강화 학습 소개 | [Reinforcement learning](8-Reinforcement/README.md) | Q-Learning을 통한 강화 학습 소개 | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | 피터가 늑대를 피하도록 도와주세요! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | 강화 학습 Gym | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | 실제 세계의 ML 시나리오 및 응용 프로그램 | [ML in the Wild](9-Real-World/README.md) | 고전적인 ML의 흥미롭고 놀라운 실제 응용 프로그램 | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | RAI 대시보드를 사용한 ML 모델 디버깅 | [ML in the Wild](9-Real-World/README.md) | Responsible AI 대시보드 구성 요소를 사용한 머신 러닝 모델 디버깅 | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [이 과정의 추가 자료는 Microsoft Learn 컬렉션에서 확인하세요](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## 오프라인 접근
+
+[Docsify](https://docsify.js.org/#/)를 사용하여 이 문서를 오프라인으로 실행할 수 있습니다. 이 저장소를 포크하고, 로컬 머신에 [Docsify를 설치](https://docsify.js.org/#/quickstart)한 후, 이 저장소의 루트 폴더에서 `docsify serve`를 입력하세요. 웹사이트는 localhost의 포트 3000, 즉 `localhost:3000`에서 제공됩니다.
+
+## PDFs
+[여기](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf)에서 링크가 포함된 커리큘럼의 PDF를 찾을 수 있습니다.
+
+## 도움 요청
+
+번역에 기여하고 싶으신가요? [번역 가이드라인](TRANSLATIONS.md)을 읽고 작업량 관리를 위한 템플릿 이슈를 [여기](https://github.com/microsoft/ML-For-Beginners/issues)에 추가해주세요.
+
+## 다른 커리큘럼
+
+우리 팀은 다른 커리큘럼도 제작합니다! 확인해보세요:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101?WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서는 해당 언어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/SECURITY.md b/translations/ko/SECURITY.md
new file mode 100644
index 000000000..b5b095d2e
--- /dev/null
+++ b/translations/ko/SECURITY.md
@@ -0,0 +1,40 @@
+## 보안
+
+Microsoft는 소프트웨어 제품 및 서비스의 보안을 매우 중요하게 생각하며, 여기에는 [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin) 및 [그 외 GitHub 조직](https://opensource.microsoft.com/)을 통해 관리되는 모든 소스 코드 리포지토리가 포함됩니다.
+
+Microsoft 소유의 리포지토리에서 [Microsoft의 보안 취약성 정의](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott)에 부합하는 보안 취약성을 발견했다고 생각되시면, 아래 설명된 대로 보고해 주세요.
+
+## 보안 문제 보고
+
+**공개 GitHub 이슈를 통해 보안 취약성을 보고하지 마세요.**
+
+대신, Microsoft Security Response Center (MSRC)에서 [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report)로 보고해 주세요.
+
+로그인하지 않고 제출하려면 [secure@microsoft.com](mailto:secure@microsoft.com)으로 이메일을 보내주세요. 가능하다면, 메시지를 PGP 키로 암호화해 주세요. PGP 키는 [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc)에서 다운로드할 수 있습니다.
+
+24시간 내에 응답을 받을 수 있습니다. 만약 그렇지 않은 경우, 원래 메시지를 받았는지 확인하기 위해 이메일로 다시 연락해 주세요. 추가 정보는 [microsoft.com/msrc](https://www.microsoft.com/msrc)에서 찾을 수 있습니다.
+
+가능한 한 아래 나열된 정보를 포함해 주세요. 이는 문제의 성격과 범위를 더 잘 이해하는 데 도움이 됩니다:
+
+ * 문제 유형 (예: 버퍼 오버플로, SQL 인젝션, 크로스 사이트 스크립팅 등)
+ * 문제 발생과 관련된 소스 파일의 전체 경로
+ * 영향을 받는 소스 코드의 위치 (태그/브랜치/커밋 또는 직접 URL)
+ * 문제를 재현하기 위해 필요한 특별한 설정
+ * 문제를 재현하는 단계별 지침
+ * 개념 증명 또는 익스플로잇 코드 (가능한 경우)
+ * 문제의 영향, 공격자가 문제를 어떻게 악용할 수 있는지
+
+이 정보는 보고서를 더 신속하게 분류하는 데 도움이 됩니다.
+
+버그 바운티를 위해 보고하는 경우, 더 완전한 보고서는 더 높은 보상으로 이어질 수 있습니다. 활성 프로그램에 대한 자세한 내용은 [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) 페이지를 방문해 주세요.
+
+## 선호하는 언어
+
+모든 통신은 영어로 해주시기 바랍니다.
+
+## 정책
+
+Microsoft는 [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd) 원칙을 따릅니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서를 신뢰할 수 있는 출처로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해서는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/SUPPORT.md b/translations/ko/SUPPORT.md
new file mode 100644
index 000000000..64b7098a2
--- /dev/null
+++ b/translations/ko/SUPPORT.md
@@ -0,0 +1,13 @@
+# 지원
+## 문제 제기 및 도움 받는 방법
+
+이 프로젝트는 GitHub Issues를 사용하여 버그 및 기능 요청을 추적합니다. 중복을 피하기 위해 새로운 문제를 제기하기 전에 기존 문제를 검색해 주세요. 새로운 문제의 경우, 버그 또는 기능 요청을 새로운 Issue로 제출하세요.
+
+이 프로젝트 사용에 대한 도움이나 질문은 Issue를 제출하세요.
+
+## Microsoft 지원 정책
+
+이 리포지토리에 대한 지원은 위에 나열된 리소스들로 제한됩니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 모국어 버전이 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 오역에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/TRANSLATIONS.md b/translations/ko/TRANSLATIONS.md
new file mode 100644
index 000000000..aa90ab4fe
--- /dev/null
+++ b/translations/ko/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# 수업 번역으로 기여하기
+
+이 커리큘럼의 수업 번역을 환영합니다!
+## 가이드라인
+
+각 수업 폴더와 수업 소개 폴더에는 번역된 마크다운 파일이 들어 있습니다.
+
+> 참고: 코드 샘플 파일의 코드는 번역하지 마세요. 번역할 것은 README, 과제, 퀴즈뿐입니다. 감사합니다!
+
+번역된 파일은 다음과 같은 명명 규칙을 따라야 합니다:
+
+**README._[language]_.md**
+
+여기서 _[language]_는 ISO 639-1 표준을 따르는 두 글자의 언어 약어입니다 (예: 스페인어는 `README.es.md`, 네덜란드어는 `README.nl.md`).
+
+**assignment._[language]_.md**
+
+Readme와 마찬가지로 과제도 번역해 주세요.
+
+> 중요: 이 저장소의 텍스트를 번역할 때는 기계 번역을 사용하지 마세요. 커뮤니티를 통해 번역을 검증할 것이므로, 능숙한 언어에 대해서만 번역을 자원해 주세요.
+
+**퀴즈**
+
+1. 퀴즈 앱에 번역을 추가하려면 적절한 명명 규칙(en.json, fr.json)을 따라 https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations 에 파일을 추가하세요. **'true'나 'false'라는 단어는 로컬라이즈하지 마세요. 감사합니다!**
+
+2. 퀴즈 앱의 App.vue 파일에서 드롭다운에 언어 코드를 추가하세요.
+
+3. 퀴즈 앱의 [translations index.js 파일](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js)을 편집하여 언어를 추가하세요.
+
+4. 마지막으로, 번역된 README.md 파일의 모든 퀴즈 링크를 직접 번역된 퀴즈로 가리키도록 수정하세요: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1이 https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id로 변경됩니다.
+
+**감사합니다**
+
+정말로 여러분의 노고에 감사드립니다!
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 해당 언어로 작성된 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/docs/_sidebar.md b/translations/ko/docs/_sidebar.md
new file mode 100644
index 000000000..dad5a91f1
--- /dev/null
+++ b/translations/ko/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- 소개
+ - [머신러닝 소개](../1-Introduction/1-intro-to-ML/README.md)
+ - [머신러닝의 역사](../1-Introduction/2-history-of-ML/README.md)
+ - [머신러닝과 공정성](../1-Introduction/3-fairness/README.md)
+ - [머신러닝 기술](../1-Introduction/4-techniques-of-ML/README.md)
+
+- 회귀 분석
+ - [필수 도구](../2-Regression/1-Tools/README.md)
+ - [데이터](../2-Regression/2-Data/README.md)
+ - [선형 회귀](../2-Regression/3-Linear/README.md)
+ - [로지스틱 회귀](../2-Regression/4-Logistic/README.md)
+
+- 웹 앱 만들기
+ - [웹 앱](../3-Web-App/1-Web-App/README.md)
+
+- 분류
+ - [분류 소개](../4-Classification/1-Introduction/README.md)
+ - [분류기 1](../4-Classification/2-Classifiers-1/README.md)
+ - [분류기 2](../4-Classification/3-Classifiers-2/README.md)
+ - [응용 머신러닝](../4-Classification/4-Applied/README.md)
+
+- 클러스터링
+ - [데이터 시각화](../5-Clustering/1-Visualize/README.md)
+ - [K-평균](../5-Clustering/2-K-Means/README.md)
+
+- 자연어 처리
+ - [자연어 처리 소개](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [자연어 처리 작업](../6-NLP/2-Tasks/README.md)
+ - [번역과 감정 분석](../6-NLP/3-Translation-Sentiment/README.md)
+ - [호텔 리뷰 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [호텔 리뷰 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- 시계열 예측
+ - [시계열 예측 소개](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- 강화 학습
+ - [Q-러닝](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- 실전 머신러닝
+ - [응용](../9-Real-World/1-Applications/README.md)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서가 해당 언어로 작성된 것이 권위 있는 자료로 간주되어야 합니다. 중요한 정보의 경우, 전문 인간 번역을 권장합니다. 이 번역 사용으로 인한 오해나 잘못된 해석에 대해 우리는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/for-teachers.md b/translations/ko/for-teachers.md
new file mode 100644
index 000000000..558be4038
--- /dev/null
+++ b/translations/ko/for-teachers.md
@@ -0,0 +1,26 @@
+## 교육자를 위한 안내
+
+이 커리큘럼을 교실에서 사용하고 싶으신가요? 마음껏 사용하세요!
+
+사실, GitHub Classroom을 사용하여 GitHub 자체에서 이 커리큘럼을 사용할 수 있습니다.
+
+이를 위해, 이 저장소를 포크하세요. 각 수업마다 별도의 저장소가 필요하므로, 각 폴더를 별도의 저장소로 분리해야 합니다. 그렇게 하면 [GitHub Classroom](https://classroom.github.com/classrooms)에서 각 수업을 개별적으로 인식할 수 있습니다.
+
+이 [전체 지침](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/)은 교실을 설정하는 방법에 대한 아이디어를 제공합니다.
+
+## 현재 상태 그대로 저장소 사용하기
+
+GitHub Classroom을 사용하지 않고 현재 상태 그대로 이 저장소를 사용하고 싶다면, 그것도 가능합니다. 학생들과 함께 어떤 수업을 진행할지 소통할 필요가 있습니다.
+
+온라인 형식(ZOOM, Teams 또는 기타)을 사용한다면, 퀴즈를 위한 브레이크아웃 룸을 만들고 학생들이 학습을 준비할 수 있도록 멘토링할 수 있습니다. 그런 다음 학생들이 퀴즈에 참여하도록 초대하고, 정해진 시간에 '이슈' 형태로 답변을 제출하도록 요청하세요. 학생들이 공개적으로 협력하기를 원한다면 과제도 같은 방식으로 진행할 수 있습니다.
+
+보다 개인적인 형식을 선호한다면, 학생들에게 커리큘럼을 각자의 GitHub 저장소에 비공개로 포크하고, 당신에게 접근 권한을 부여하도록 요청하세요. 그런 다음 학생들은 비공개로 퀴즈와 과제를 완료하고, 당신의 교실 저장소의 이슈를 통해 제출할 수 있습니다.
+
+온라인 교실 형식에서 이를 구현하는 여러 가지 방법이 있습니다. 무엇이 가장 잘 작동하는지 알려주세요!
+
+## 의견을 주세요!
+
+이 커리큘럼이 당신과 학생들에게 잘 맞도록 만들고 싶습니다. [피드백](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u)을 주세요.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인한 오해나 잘못된 해석에 대해 우리는 책임지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/quiz-app/README.md b/translations/ko/quiz-app/README.md
new file mode 100644
index 000000000..b81fc539a
--- /dev/null
+++ b/translations/ko/quiz-app/README.md
@@ -0,0 +1,115 @@
+# 퀴즈
+
+이 퀴즈들은 https://aka.ms/ml-beginners 에 있는 ML 커리큘럼의 강의 전후 퀴즈입니다.
+
+## 프로젝트 설정
+
+```
+npm install
+```
+
+### 개발을 위한 컴파일 및 핫 리로드
+
+```
+npm run serve
+```
+
+### 프로덕션을 위한 컴파일 및 최소화
+
+```
+npm run build
+```
+
+### 파일 린트 및 수정
+
+```
+npm run lint
+```
+
+### 구성 사용자 정의
+
+[Configuration Reference](https://cli.vuejs.org/config/)를 참조하세요.
+
+크레딧: 원본 퀴즈 앱 버전에 감사드립니다: https://github.com/arpan45/simple-quiz-vue
+
+## Azure에 배포하기
+
+다음은 시작하는 데 도움이 되는 단계별 가이드입니다:
+
+1. GitHub 저장소 포크하기
+정적 웹 앱 코드가 GitHub 저장소에 있는지 확인하세요. 이 저장소를 포크하세요.
+
+2. Azure 정적 웹 앱 생성
+- [Azure 계정](http://azure.microsoft.com)을 생성하세요.
+- [Azure 포털](https://portal.azure.com)에 접속하세요.
+- "리소스 생성"을 클릭하고 "Static Web App"을 검색하세요.
+- "생성"을 클릭하세요.
+
+3. 정적 웹 앱 구성
+- 기본 설정: 구독: Azure 구독을 선택하세요.
+- 리소스 그룹: 새 리소스 그룹을 생성하거나 기존 리소스 그룹을 사용하세요.
+- 이름: 정적 웹 앱의 이름을 입력하세요.
+- 지역: 사용자와 가장 가까운 지역을 선택하세요.
+
+- #### 배포 세부 사항:
+- 소스: "GitHub"를 선택하세요.
+- GitHub 계정: Azure가 GitHub 계정에 접근할 수 있도록 권한을 부여하세요.
+- 조직: GitHub 조직을 선택하세요.
+- 저장소: 정적 웹 앱이 포함된 저장소를 선택하세요.
+- 브랜치: 배포할 브랜치를 선택하세요.
+
+- #### 빌드 세부 사항:
+- 빌드 프리셋: 앱이 구축된 프레임워크를 선택하세요 (예: React, Angular, Vue 등).
+- 앱 위치: 앱 코드가 포함된 폴더를 지정하세요 (예: 루트에 있는 경우 /).
+- API 위치: API가 있는 경우 위치를 지정하세요 (선택 사항).
+- 출력 위치: 빌드 출력이 생성되는 폴더를 지정하세요 (예: build 또는 dist).
+
+4. 검토 및 생성
+설정을 검토하고 "생성"을 클릭하세요. Azure는 필요한 리소스를 설정하고 GitHub 저장소에 GitHub Actions 워크플로우를 생성합니다.
+
+5. GitHub Actions 워크플로우
+Azure는 자동으로 저장소에 GitHub Actions 워크플로우 파일을 생성합니다 (.github/workflows/azure-static-web-apps-.yml). 이 워크플로우는 빌드 및 배포 프로세스를 처리합니다.
+
+6. 배포 모니터링
+GitHub 저장소의 "Actions" 탭으로 이동하세요.
+워크플로우가 실행 중인 것을 볼 수 있습니다. 이 워크플로우는 정적 웹 앱을 Azure에 빌드하고 배포할 것입니다.
+워크플로우가 완료되면 제공된 Azure URL에서 앱이 활성화됩니다.
+
+### 예제 워크플로우 파일
+
+다음은 GitHub Actions 워크플로우 파일의 예제입니다:
+```
+name: Azure Static Web Apps CI/CD
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - main
+
+jobs:
+  build_and_deploy_job:
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v1
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }}
+          action: "upload"
+          app_location: "/quiz-app" # App source code path
+          api_location: "" # API source code path - optional
+          output_location: "dist" # Built app content directory - optional
+```
+
+### 추가 리소스
+- [Azure Static Web Apps Documentation](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [GitHub Actions Documentation](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만, 자동 번역에는 오류나 부정확성이 있을 수 있음을 유의하시기 바랍니다. 원본 문서의 원어가 권위 있는 출처로 간주되어야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/sketchnotes/LICENSE.md b/translations/ko/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..366dc9485
--- /dev/null
+++ b/translations/ko/sketchnotes/LICENSE.md
@@ -0,0 +1,169 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons")는 법률 사무소가 아니며 법률 서비스나 법률 조언을 제공하지 않습니다. Creative Commons 공개 라이선스를 배포한다고 해서 변호사-의뢰인 관계나 기타 관계가 형성되는 것은 아닙니다. Creative Commons는 그 라이선스와 관련 정보를 있는 그대로 제공합니다. Creative Commons는 그 라이선스나 해당 조건 하에 라이선스된 자료, 또는 관련 정보에 대해 어떠한 보증도 하지 않습니다. Creative Commons는 그 사용으로 인해 발생하는 모든 손해에 대해 최대한의 범위 내에서 모든 책임을 부인합니다.
+
+Creative Commons 공용 라이선스 사용
+
+Creative Commons 공용 라이선스는 창작자 및 기타 권리 보유자가 저작권과 아래 공용 라이선스에 명시된 특정 권리에 따라 원본 저작물 및 기타 자료를 공유할 수 있는 표준 조건을 제공합니다. 다음 고려 사항은 정보 제공 목적으로만 제공되며, 포괄적이지 않으며, 우리의 라이선스의 일부를 형성하지 않습니다.
+
+ 라이선스 제공자를 위한 고려 사항: 우리의 공용 라이선스는 저작권 및 특정 기타 권리에 의해 제한된 방식으로 자료를 사용할 수 있는 공용 허가를 제공할 권한이 있는 사람들이 사용하도록 의도되었습니다. 우리의 라이선스는 취소할 수 없습니다. 라이선스 제공자는 라이선스를 적용하기 전에 선택한 라이선스의 조건을 읽고 이해해야 합니다. 또한 라이선스 제공자는 공용이 기대한 대로 자료를 재사용할 수 있도록 필요한 모든 권리를 확보해야 합니다. 라이선스 적용 대상이 아닌 자료를 명확히 표시해야 합니다. 여기에는 다른 CC-라이선스 자료나 저작권의 예외나 제한에 따라 사용된 자료가 포함됩니다. 라이선스 제공자를 위한 추가 고려 사항: wiki.creativecommons.org/Considerations_for_licensors
+
+ 공용을 위한 고려 사항: 우리의 공용 라이선스 중 하나를 사용함으로써, 라이선스 제공자는 지정된 조건에 따라 라이선스된 자료를 사용할 수 있는 허가를 공용에 부여합니다. 만약 라이선스 제공자의 허가가 어떤 이유로 필요하지 않다면 - 예를 들어, 저작권의 예외나 제한이 적용되는 경우 - 그 사용은 라이선스에 의해 규제되지 않습니다. 우리의 라이선스는 저작권과 라이선스 제공자가 부여할 권한이 있는 특정 기타 권리에 대한 허가만 부여합니다. 라이선스된 자료의 사용은 다른 이유로 여전히 제한될 수 있습니다. 예를 들어, 다른 사람들이 그 자료에 대해 저작권이나 기타 권리를 가지고 있는 경우가 있습니다. 라이선스 제공자는 모든 변경 사항을 표시하거나 설명하도록 요청할 수 있습니다. 우리의 라이선스에서 요구하지 않지만, 합리적인 경우 그러한 요청을 존중할 것을 권장합니다. 공용을 위한 추가 고려 사항: wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-ShareAlike 4.0 International Public License
+
+라이선스된 권리(아래 정의됨)를 행사함으로써, 귀하는 본 Creative Commons Attribution-ShareAlike 4.0 International Public License("공용 라이선스")의 조건에 동의하고 구속됩니다. 이 공용 라이선스가 계약으로 해석될 수 있는 범위 내에서, 귀하는 이러한 조건을 수락함으로써 라이선스된 권리를 부여받으며, 라이선스 제공자는 이러한 조건에 따라 라이선스된 자료를 제공함으로써 혜택을 받습니다.
+
+섹션 1 -- 정의.
+
+ a. 개작 자료는 라이선스 제공자가 보유한 저작권 및 유사 권리에 따라 라이선스된 자료를 번역, 변경, 배열, 변형 또는 기타 방식으로 수정하여 파생되거나 기반이 되는 자료를 의미합니다. 공용 라이선스의 목적상, 라이선스된 자료가 음악 작품, 공연, 또는 음반인 경우, 개작 자료는 항상 라이선스된 자료가 움직이는 이미지와 시간적으로 동기화된 경우에 생성됩니다.
+
+ b. 개작자 라이선스는 귀하가 본 공용 라이선스의 조건에 따라 귀하의 개작 자료에 대한 기여에 적용하는 라이선스를 의미합니다.
+
+ c. BY-SA 호환 라이선스는 Creative Commons가 본 공용 라이선스와 본질적으로 동등하다고 승인한 라이선스로, creativecommons.org/compatiblelicenses에 나열된 라이선스를 의미합니다.
+
+ d. 저작권 및 유사 권리는 저작권 및/또는 저작권과 밀접하게 관련된 유사 권리를 의미하며, 여기에는 공연, 방송, 음반, 데이터베이스 보호 권리가 포함되며, 권리의 라벨이나 분류 방식에 관계없이 포함됩니다. 공용 라이선스의 목적상, 섹션 2(b)(1)-(2)에 명시된 권리는 저작권 및 유사 권리가 아닙니다.
+
+ e. 유효한 기술적 조치는 1996년 12월 20일 채택된 WIPO 저작권 조약의 제11조의 의무를 이행하는 법률에 따라 적절한 권한 없이 회피할 수 없는 조치를 의미합니다.
+
+ f. 예외 및 제한은 공정 사용, 공정 거래, 및/또는 귀하의 라이선스된 자료 사용에 적용되는 기타 예외 또는 제한을 의미합니다.
+
+ g. 라이선스 요소는 Creative Commons 공용 라이선스의 이름에 나열된 라이선스 속성을 의미합니다. 본 공용 라이선스의 라이선스 요소는 저작자 표시와 동일 조건 변경 허락입니다.
+
+ h. 라이선스된 자료는 라이선스 제공자가 본 공용 라이선스를 적용한 예술 또는 문학 작품, 데이터베이스, 또는 기타 자료를 의미합니다.
+
+ i. 라이선스된 권리는 귀하가 라이선스된 자료를 사용할 때 적용되는 모든 저작권 및 유사 권리에 제한되며, 라이선스 제공자가 라이선스를 부여할 권한이 있는 권리를 의미합니다.
+
+ j. 라이선스 제공자는 본 공용 라이선스 하에 권리를 부여하는 개인 또는 단체를 의미합니다.
+
+ k. 공유는 복제, 공공 전시, 공공 공연, 배포, 전파, 통신, 또는 수입과 같은 라이선스된 권리 하에 허가가 필요한 방식으로 자료를 공공에 제공하는 것을 의미하며, 공공이 개별적으로 선택한 장소와 시간에 자료에 접근할 수 있도록 자료를 공공에 제공하는 것을 포함합니다.
+
+ l. Sui Generis 데이터베이스 권리는 1996년 3월 11일 유럽 의회 및 이사회 지침 96/9/EC에 따라 데이터베이스의 법적 보호로부터 발생하는 권리로, 수정 및/또는 후속된 권리뿐만 아니라 전 세계적으로 본질적으로 동등한 권리를 의미합니다.
+
+ m. 귀하는 본 공용 라이선스 하에 라이선스된 권리를 행사하는 개인 또는 단체를 의미합니다. 귀하의 의미는 이에 상응하는 의미를 갖습니다.
+
+섹션 2 -- 범위.
+
+ a. 라이선스 부여.
+
+ 1. 본 공용 라이선스의 조건에 따라, 라이선스 제공자는 귀하에게 라이선스된 자료의 라이선스된 권리를 행사할 수 있는 전 세계적, 로열티 무료, 서브라이선스 불가, 비독점적, 취소 불가능한 라이선스를 부여합니다:
+
+ a. 라이선스된 자료를 전부 또는 일부 복제 및 공유할 수 있습니다; 및
+
+ b. 개작 자료를 생성, 복제 및 공유할 수 있습니다.
+
+ 2. 예외 및 제한. 귀하의 사용에 예외 및 제한이 적용되는 경우, 본 공용 라이선스는 적용되지 않으며, 귀하는 그 조건을 준수할 필요가 없습니다.
+
+ 3. 기간. 본 공용 라이선스의 기간은 섹션 6(a)에 명시되어 있습니다.
+
+ 4. 매체 및 형식; 기술적 수정 허용. 라이선스 제공자는 귀하가 현재 알려진 또는 향후 생성될 모든 매체 및 형식에서 라이선스된 권리를 행사할 수 있도록 허용하며, 이를 위해 필요한 기술적 수정을 허용합니다. 라이선스 제공자는 라이선스된 권리를 행사하기 위해 필요한 기술적 수정을 금지할 권리 또는 권한을 주장하지 않으며, 유효한 기술적 조치를 회피하기 위해 필요한 기술적 수정을 금지하지 않습니다. 본 공용 라이선스의 목적상, 본 섹션 2(a)(4)에 의해 허가된 수정만으로는 개작 자료가 생성되지 않습니다.
+
+ 5. 하위 수령자.
+
+ a. 라이선스 제공자의 제공 -- 라이선스된 자료. 라이선스된 자료의 모든 수령자는 본 공용 라이선스의 조건에 따라 라이선스된 권리를 행사할 수 있는 라이선스 제공자의 제안을 자동으로 수락합니다.
+
+ b. 라이선스 제공자의 추가 제공 -- 개작 자료. 귀하로부터 개작 자료를 수령한 모든 사람은 개작 자료에서 개작자 라이선스를 적용한 조건에 따라 라이선스된 권리를 행사할 수 있는 라이선스 제공자의 제안을 자동으로 수락합니다.
+
+ c. 하위 제한 없음. 귀하는 라이선스된 자료의 수령자가 라이선스된 권리를 행사하는 것을 제한하는 추가 또는 다른 조건을 제안하거나 부과할 수 없으며, 유효한 기술적 조치를 적용할 수 없습니다.
+
+ 6. 승인 없음. 본 공용 라이선스의 어떠한 것도 귀하가 라이선스된 자료를 사용하는 것이 라이선스 제공자나 다른 사람들과 관련이 있거나, 후원, 승인, 또는 공식 지위를 부여받았음을 주장하거나 암시하는 허가로 해석될 수 없습니다.
+
+ b. 기타 권리.
+
+ 1. 무결성 권리와 같은 도덕적 권리는 본 공용 라이선스 하에 허가되지 않으며, 공개성, 프라이버시 및/또는 기타 유사한 인격권도 허가되지 않습니다; 그러나, 가능한 한, 라이선스 제공자는 귀하가 라이선스된 권리를 행사할 수 있도록 필요한 범위 내에서만 이러한 권리를 포기하고/또는 주장하지 않기로 동의합니다.
+
+ 2. 특허 및 상표권은 본 공용 라이선스 하에 허가되지 않습니다.
+
+ 3. 가능한 한, 라이선스 제공자는 자발적 또는 포기 가능한 법정 또는 의무적 라이선스 제도 하에서 직접 또는 저작권 단체를 통해 귀하로부터 라이선스된 권리의 행사에 대한 로열티를 수집할 권리를 포기합니다. 다른 모든 경우에 라이선스 제공자는 그러한 로열티를 수집할 권리를 명시적으로 보유합니다.
+
+섹션 3 -- 라이선스 조건.
+
+라이선스된 권리를 행사하는 귀하는 다음 조건에 명시적으로 구속됩니다.
+
+ a. 저작자 표시.
+
+ 1. 귀하가 라이선스된 자료를 공유하는 경우(수정된 형태로 포함), 다음을 유지해야 합니다:
+
+ a. 라이선스 제공자가 라이선스된 자료와 함께 제공한 경우 다음을 유지해야 합니다:
+
+ i. 라이선스된 자료의 창작자 및 저작자 표시를 받을 자격이 있는 다른 사람을 합리적인 방식으로 식별하는 정보(필명 포함);
+
+ ii. 저작권 고지;
+
+ iii. 본 공용 라이선스를 참조하는 고지;
+
+ iv. 보증 부인 고지;
+
+ v. 가능한 한 라이선스된 자료에 대한 URI 또는 하이퍼링크;
+
+ b. 귀하가 라이선스된 자료를 수정했음을 표시하고 이전 수정 사항을 유지해야 합니다; 및
+
+ c. 라이선스된 자료가 본 공용 라이선스 하에 라이선스되었음을 표시하고, 본 공용 라이선스의 텍스트 또는 URI 또는 하이퍼링크를 포함해야 합니다.
+
+ 2. 귀하는 귀하가 라이선스된 자료를 공유하는 매체, 수단 및 맥락에 따라 섹션 3(a)(1)의 조건을 합리적인 방식으로 충족할 수 있습니다. 예를 들어, 필요한 정보를 포함하는 리소스에 URI 또는 하이퍼링크를 제공함으로써 조건을 충족하는 것이 합리적일 수 있습니다.
+
+ 3. 라이선스 제공자가 요청하는 경우, 귀하는 섹션 3(a)(1)(A)에 필요한 정보를 가능한 한 제거해야 합니다.
+
+ b. 동일 조건 변경 허락.
+
+ 섹션 3(a)의 조건 외에도, 귀하가 개작 자료를 공유하는 경우 다음 조건이 적용됩니다.
+
+ 1. 귀하가 적용하는 개작자 라이선스는 동일한 라이선스 요소를 가진 Creative Commons 라이선스, 이 버전 또는 이후 버전, 또는 BY-SA 호환 라이선스여야 합니다.
+
+ 2. 귀하는 귀하가 적용하는 개작자 라이선스의 텍스트 또는 URI 또는 하이퍼링크를 포함해야 합니다. 귀하는 귀하가 개작 자료를 공유하는 매체, 수단 및 맥락에 따라 이 조건을 합리적인 방식으로 충족할 수 있습니다.
+
+ 3. 귀하는 귀하가 적용하는 개작자 라이선스에 따라 부여된 권리를 제한하는 추가 또는 다른 조건을 제안하거나 부과할 수 없으며, 유효한 기술적 조치를 적용할 수 없습니다.
+
+섹션 4 -- Sui Generis 데이터베이스 권리.
+
+라이선스된 권리에 귀하의 라이선스된 자료 사용에 적용되는 Sui Generis 데이터베이스 권리가 포함되는 경우:
+
+ a. 의심의 여지를 없애기 위해, 섹션 2(a)(1)는 귀하에게 데이터베이스의 모든 또는 상당 부분의 내용을 추출, 재사용, 복제 및 공유할 권리를 부여합니다;
+
+ b. 귀하가 Sui Generis 데이터베이스 권리를 가진 데이터베이스에 모든 또는 상당 부분의 데이터베이스 내용을 포함하는 경우, 귀하가 Sui Generis 데이터베이스 권리를 가진 데이터베이스(그러나 그 개별 내용은 아님)는 개작 자료입니다,
+
+ 섹션 3(b)의 목적을 포함하여; 및
+ c. 귀하가 데이터베이스의 모든 또는 상당 부분의 내용을 공유하는 경우 섹션 3(a)의 조건을 준수해야 합니다.
+
+의심의 여지를 없애기 위해, 본 섹션 4는 라이선스된 권리에 다른 저작권 및 유사 권리가 포함된 경우 본 공용 라이선스 하의 귀하의 의무를 대체하지 않습니다.
+
+섹션 5 -- 보증 부인 및 책임 제한.
+
+ a. 라이선스 제공자가 별도로 이행하지 않는 한, 가능한 한, 라이선스 제공자는 라이선스된 자료를 있는 그대로 제공하며, 라이선스된 자료에 대한 어떠한 종류의 진술이나 보증도 하지 않습니다. 여기에는 제목, 상품성, 특정 목적에의 적합성, 비침해, 잠재적 또는 기타 결함의 부재, 정확성, 또는 오류의 존재 여부에 대한 보증이 포함되며, 이는 알려지거나 발견될 수 있는지 여부에 관계없이 포함됩니다. 보증 부인이 전부 또는 일부 허용되지 않는 경우, 이 부인은 귀하에게 적용되지 않을 수 있습니다.
+
+ b. 가능한 한, 라이선스 제공자는 본 공용 라이선스 또는 라이선스된 자료 사용으로 인해 발생하는 직접적, 특별, 간접적, 부수적, 결과적, 징벌적, 모범적, 또는 기타 손실, 비용, 경비, 또는 손해에 대해 어떠한 법적 이론(과실 포함) 또는 기타 방식으로 귀하에게 책임을 지지 않습니다, 심지어 라이선스 제공자가 그러한 손실, 비용, 경비, 또는 손해의 가능성을 알고 있었더라도. 책임 제한이 전부 또는 일부 허용되지 않는 경우, 이 제한은 귀하에게 적용되지 않을 수 있습니다.
+
+ c. 위의 보증 부인 및 책임 제한은 가능한 한 절대적인 책임 포기 및 면제를 가장 가깝게 대체하는 방식으로 해석되어야 합니다.
+
+섹션 6 -- 기간 및 종료.
+
+ a. 본 공용 라이선스는 여기서 라이선스된 저작권 및 유사 권리의 기간 동안 적용됩니다. 그러나 귀하가 본 공용 라이선스를 준수하지 않는 경우, 본 공용 라이선스 하의 귀하의 권리는 자동으로 종료됩니다.
+
+ b. 섹션 6(a)에 따라 라이선스된 자료를 사용할 권리가 종료된 경우, 다음과 같이 재설정됩니다:
+
+ 1. 귀하의 위반 사항을 발견한 후 30일 이내에 위반 사항이 수정된 경우, 자동으로 재설정됩니다; 또는
+
+ 2. 라이선스 제공자가 명시적으로 재설정한 경우.
+
+ 의심의 여지를 없애기 위해, 본 섹션 6(b)는 귀하의 본 공용 라이선스 위반에 대해 라이선스 제공자가 구제를 추구할 수 있는 권리에 영향을 미치지 않습니다.
+
+ c. 의심의 여지를 없애기 위해, 라이선스 제공자는 언제든지 별도의 조건으로 라이선스된 자료를 제공하거나 배포를 중단할 수 있습니다; 그러나, 이는 본 공용 라이선스를 종료하지 않습니다.
+
+ d. 섹션 1, 5, 6, 7, 및 8은 본 공용 라이선스 종료 후에도 계속 적용됩니다.
+
+섹션 7 -- 기타 조건 및 조건.
+
+ a. 라이선스 제공자는 귀하가 전달한 추가 또는 다른 조건에 명시적으로 동의하지 않는 한 구속되지 않습니다.
+
+ b. 여기 명시되지 않은 라이선스된 자료에 관한 모든 합의, 이해 또는 계약은 본 공용 라이선스의 조건 및 조건과 별개로 독립적입니다.
+
+섹션 8 -- 해석.
+
+ a. 의심의 여지를 없애기 위해, 본 공용 라이선스는 귀하가 본 공용 라이선스 하에 허가 없이도 합법적으로 할 수 있는 라이선스된 자료의 사용을 축소, 제한, 한정하거나 그에 조건을 부과하지 않으며, 그렇게 해석되지도 않습니다.
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ko/sketchnotes/README.md b/translations/ko/sketchnotes/README.md
new file mode 100644
index 000000000..087a360b8
--- /dev/null
+++ b/translations/ko/sketchnotes/README.md
@@ -0,0 +1,10 @@
+모든 커리큘럼의 스케치노트를 여기에서 다운로드할 수 있습니다.
+
+🖨 고해상도로 인쇄하려면, TIFF 버전을 [이 저장소](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff)에서 사용할 수 있습니다.
+
+🎨 작성자: [Tomomi Imura](https://github.com/girliemac) (트위터: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**면책 조항**:
+이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 정확성을 위해 노력하고 있지만 자동 번역에는 오류나 부정확성이 있을 수 있습니다. 원어로 작성된 원본 문서를 권위 있는 출처로 간주해야 합니다. 중요한 정보에 대해서는 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/1-intro-to-ML/README.md b/translations/ms/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..9e7b6cc01
--- /dev/null
+++ b/translations/ms/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# Pengenalan kepada pembelajaran mesin
+
+## [Kuiz pra-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "ML untuk pemula - Pengenalan kepada Pembelajaran Mesin untuk Pemula")
+
+> 🎥 Klik gambar di atas untuk video pendek yang melalui pelajaran ini.
+
+Selamat datang ke kursus ini mengenai pembelajaran mesin klasik untuk pemula! Sama ada anda baru sahaja mengenali topik ini, atau seorang pengamal ML yang berpengalaman yang ingin mengasah kemahiran dalam bidang tertentu, kami gembira anda menyertai kami! Kami ingin mencipta tempat pelancaran yang mesra untuk kajian ML anda dan akan gembira untuk menilai, memberi maklum balas, dan memasukkan [maklum balas](https://github.com/microsoft/ML-For-Beginners/discussions) anda.
+
+[](https://youtu.be/h0e2HAPTGF4 "Pengenalan kepada ML")
+
+> 🎥 Klik gambar di atas untuk video: John Guttag dari MIT memperkenalkan pembelajaran mesin
+
+---
+## Memulakan dengan pembelajaran mesin
+
+Sebelum memulakan kurikulum ini, anda perlu menyediakan komputer anda dan bersedia untuk menjalankan notebook secara tempatan.
+
+- **Konfigurasi mesin anda dengan video-video ini**. Gunakan pautan berikut untuk belajar [cara memasang Python](https://youtu.be/CXZYvNRIAKM) dalam sistem anda dan [menyediakan penyunting teks](https://youtu.be/EU8eayHWoZg) untuk pembangunan.
+- **Belajar Python**. Disyorkan juga untuk mempunyai pemahaman asas tentang [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), bahasa pengaturcaraan yang berguna untuk saintis data yang kami gunakan dalam kursus ini.
+- **Belajar Node.js dan JavaScript**. Kami juga menggunakan JavaScript beberapa kali dalam kursus ini semasa membina aplikasi web, jadi anda perlu mempunyai [node](https://nodejs.org) dan [npm](https://www.npmjs.com/) dipasang, serta [Visual Studio Code](https://code.visualstudio.com/) tersedia untuk pembangunan Python dan JavaScript.
+- **Buat akaun GitHub**. Oleh kerana anda menemui kami di [GitHub](https://github.com), anda mungkin sudah mempunyai akaun, tetapi jika tidak, buat satu dan kemudian fork kurikulum ini untuk digunakan sendiri. (Jangan lupa beri kami bintang, juga 😊)
+- **Terokai Scikit-learn**. Biasakan diri dengan [Scikit-learn](https://scikit-learn.org/stable/user_guide.html), satu set perpustakaan ML yang kami rujuk dalam pelajaran ini.
+
+---
+## Apakah itu pembelajaran mesin?
+
+Istilah 'pembelajaran mesin' adalah salah satu istilah yang paling popular dan sering digunakan pada masa kini. Terdapat kemungkinan besar anda pernah mendengar istilah ini sekurang-kurangnya sekali jika anda mempunyai sedikit pengetahuan tentang teknologi, tidak kira bidang apa yang anda ceburi. Mekanik pembelajaran mesin, bagaimanapun, adalah misteri bagi kebanyakan orang. Bagi pemula pembelajaran mesin, subjek ini kadang-kadang boleh terasa menggentarkan. Oleh itu, adalah penting untuk memahami apa sebenarnya pembelajaran mesin, dan belajar mengenainya langkah demi langkah, melalui contoh praktikal.
+
+---
+## Lengkung hype
+
+
+
+> Google Trends menunjukkan 'lengkung hype' terbaru istilah 'pembelajaran mesin'
+
+---
+## Alam semesta yang misteri
+
+Kita hidup dalam alam semesta yang penuh dengan misteri yang menakjubkan. Saintis hebat seperti Stephen Hawking, Albert Einstein, dan ramai lagi telah mengabdikan hidup mereka untuk mencari maklumat bermakna yang membongkar misteri dunia di sekitar kita. Ini adalah keadaan manusia untuk belajar: seorang kanak-kanak manusia belajar perkara baru dan membongkar struktur dunia mereka tahun demi tahun semasa mereka membesar menjadi dewasa.
+
+---
+## Otak kanak-kanak
+
+Otak dan deria seorang kanak-kanak mengesan fakta persekitaran mereka dan secara beransur-ansur mempelajari corak tersembunyi kehidupan yang membantu kanak-kanak itu mencipta peraturan logik untuk mengenal pasti corak yang dipelajari. Proses pembelajaran otak manusia menjadikan manusia makhluk hidup yang paling canggih di dunia ini. Pembelajaran secara berterusan dengan menemui corak tersembunyi dan kemudian berinovasi pada corak tersebut membolehkan kita menjadi lebih baik dan lebih baik sepanjang hayat kita. Kapasiti pembelajaran dan keupayaan berkembang ini berkaitan dengan konsep yang dipanggil [keplastikan otak](https://www.simplypsychology.org/brain-plasticity.html). Secara cetek, kita boleh menarik beberapa persamaan motivasi antara proses pembelajaran otak manusia dan konsep pembelajaran mesin.
+
+---
+## Otak manusia
+
+Otak [manusia](https://www.livescience.com/29365-human-brain.html) mengesan perkara dari dunia nyata, memproses maklumat yang diterima, membuat keputusan yang rasional, dan melaksanakan tindakan tertentu berdasarkan keadaan. Ini adalah apa yang kita panggil berkelakuan secara bijak. Apabila kita memprogramkan tiruan proses tingkah laku bijak kepada mesin, ia dipanggil kecerdasan buatan (AI).
+
+---
+## Beberapa istilah
+
+Walaupun istilah-istilah ini boleh dikelirukan, pembelajaran mesin (ML) adalah subset penting kecerdasan buatan. **ML berkaitan dengan penggunaan algoritma khusus untuk mencari maklumat bermakna dan mencari corak tersembunyi dari data yang diterima untuk menyokong proses membuat keputusan yang rasional**.
+
+---
+## AI, ML, Pembelajaran Mendalam
+
+
+
+> Diagram yang menunjukkan hubungan antara AI, ML, pembelajaran mendalam, dan sains data. Infografik oleh [Jen Looper](https://twitter.com/jenlooper) yang diilhamkan oleh [grafik ini](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)
+
+---
+## Konsep yang akan diliputi
+
+Dalam kurikulum ini, kami akan meliputi hanya konsep asas pembelajaran mesin yang mesti diketahui oleh seorang pemula. Kami meliputi apa yang kami panggil 'pembelajaran mesin klasik' terutamanya menggunakan Scikit-learn, perpustakaan yang sangat baik yang digunakan oleh ramai pelajar untuk mempelajari asas-asas. Untuk memahami konsep yang lebih luas tentang kecerdasan buatan atau pembelajaran mendalam, pengetahuan asas yang kukuh tentang pembelajaran mesin adalah sangat diperlukan, dan oleh itu kami ingin menawarkannya di sini.
+
+---
+## Dalam kursus ini anda akan belajar:
+
+- konsep asas pembelajaran mesin
+- sejarah ML
+- ML dan keadilan
+- teknik ML regresi
+- teknik ML klasifikasi
+- teknik ML pengelompokan
+- teknik pemprosesan bahasa semula jadi ML
+- teknik ramalan siri masa ML
+- pembelajaran pengukuhan
+- aplikasi dunia sebenar untuk ML
+
+---
+## Apa yang tidak akan kami liputi
+
+- pembelajaran mendalam
+- rangkaian neural
+- AI
+
+Untuk pengalaman pembelajaran yang lebih baik, kami akan mengelakkan kerumitan rangkaian neural, 'pembelajaran mendalam' - pembinaan model berlapis-lapis menggunakan rangkaian neural - dan AI, yang akan kami bincangkan dalam kurikulum yang berbeza. Kami juga akan menawarkan kurikulum sains data yang akan datang untuk memberi tumpuan kepada aspek tersebut dalam bidang yang lebih besar ini.
+
+---
+## Mengapa belajar pembelajaran mesin?
+
+Pembelajaran mesin, dari perspektif sistem, ditakrifkan sebagai penciptaan sistem automatik yang boleh belajar corak tersembunyi dari data untuk membantu dalam membuat keputusan yang bijak.
+
+Motivasi ini secara longgar diilhamkan oleh cara otak manusia belajar perkara tertentu berdasarkan data yang diterima dari dunia luar.
+
+✅ Fikirkan sejenak mengapa perniagaan ingin mencuba menggunakan strategi pembelajaran mesin berbanding mencipta enjin berasaskan peraturan yang ditetapkan.
+
+---
+## Aplikasi pembelajaran mesin
+
+Aplikasi pembelajaran mesin kini hampir di mana-mana, dan sama banyak dengan data yang mengalir di sekitar masyarakat kita, yang dihasilkan oleh telefon pintar kita, peranti yang bersambung, dan sistem lain. Mengambil kira potensi besar algoritma pembelajaran mesin terkini, penyelidik telah meneroka keupayaannya untuk menyelesaikan masalah kehidupan sebenar yang multi-dimensi dan pelbagai disiplin dengan hasil yang sangat positif.
+
+---
+## Contoh ML yang diterapkan
+
+**Anda boleh menggunakan pembelajaran mesin dalam banyak cara**:
+
+- Untuk meramalkan kemungkinan penyakit dari sejarah perubatan atau laporan pesakit.
+- Untuk memanfaatkan data cuaca untuk meramalkan peristiwa cuaca.
+- Untuk memahami sentimen teks.
+- Untuk mengesan berita palsu untuk menghentikan penyebaran propaganda.
+
+Kewangan, ekonomi, sains bumi, penerokaan angkasa lepas, kejuruteraan bioperubatan, sains kognitif, dan bahkan bidang dalam kemanusiaan telah mengadaptasi pembelajaran mesin untuk menyelesaikan masalah berat pemprosesan data dalam bidang mereka.
+
+---
+## Kesimpulan
+
+Pembelajaran mesin mengautomatikkan proses penemuan corak dengan mencari pandangan bermakna dari data dunia nyata atau data yang dihasilkan. Ia telah terbukti sangat berharga dalam perniagaan, kesihatan, dan aplikasi kewangan, antara lain.
+
+Dalam masa terdekat, memahami asas pembelajaran mesin akan menjadi satu keperluan untuk orang dari mana-mana bidang kerana penerimaannya yang meluas.
+
+---
+# 🚀 Cabaran
+
+Lukis, di atas kertas atau menggunakan aplikasi dalam talian seperti [Excalidraw](https://excalidraw.com/), pemahaman anda tentang perbezaan antara AI, ML, pembelajaran mendalam, dan sains data. Tambahkan beberapa idea tentang masalah yang baik untuk diselesaikan oleh setiap teknik ini.
+
+# [Kuiz pasca-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# Kajian & Pembelajaran Kendiri
+
+Untuk mengetahui lebih lanjut tentang cara anda boleh bekerja dengan algoritma ML di awan, ikuti [Laluan Pembelajaran](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott) ini.
+
+Ikuti [Laluan Pembelajaran](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) tentang asas ML.
+
+---
+# Tugasan
+
+[Mula beroperasi](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/1-intro-to-ML/assignment.md b/translations/ms/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..f912108d9
--- /dev/null
+++ b/translations/ms/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# Mula Beroperasi
+
+## Arahan
+
+Dalam tugasan yang tidak dinilai ini, anda sepatutnya mengasah semula kemahiran Python anda dan memastikan persekitaran anda berfungsi dan mampu menjalankan notebook.
+
+Ikuti [Python Learning Path](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott) ini, dan kemudian pasang sistem anda dengan menonton video pengenalan ini:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/2-history-of-ML/README.md b/translations/ms/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..541d1532b
--- /dev/null
+++ b/translations/ms/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# Sejarah Pembelajaran Mesin
+
+
+> Sketchnote oleh [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Kuiz Pra-Kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[](https://youtu.be/N6wxM4wZ7V0 "ML untuk Pemula - Sejarah Pembelajaran Mesin")
+
+> 🎥 Klik gambar di atas untuk video pendek yang membahas pelajaran ini.
+
+Dalam pelajaran ini, kita akan meneliti pencapaian penting dalam sejarah pembelajaran mesin dan kecerdasan buatan.
+
+Sejarah kecerdasan buatan (AI) sebagai satu bidang berkait rapat dengan sejarah pembelajaran mesin, kerana algoritma dan kemajuan pengkomputeran yang mendasari ML menyumbang kepada pembangunan AI. Perlu diingat bahawa, walaupun bidang-bidang ini sebagai kawasan penyelidikan yang berasingan mula terbentuk pada tahun 1950-an, [penemuan algoritma, statistik, matematik, pengkomputeran dan teknikal yang penting](https://wikipedia.org/wiki/Timeline_of_machine_learning) telah mendahului dan bertindih dengan era ini. Malah, manusia telah memikirkan persoalan-persoalan ini selama [beratus-ratus tahun](https://wikipedia.org/wiki/History_of_artificial_intelligence): artikel tersebut membincangkan asas intelektual bersejarah bagi idea 'mesin yang berfikir.'
+
+---
+## Penemuan Penting
+
+- 1763, 1812 [Teorema Bayes](https://wikipedia.org/wiki/Bayes%27_theorem) dan pendahulunya. Teorema ini dan aplikasinya mendasari inferensi, menggambarkan probabilitas suatu peristiwa terjadi berdasarkan pengetahuan sebelumnya.
+- 1805 [Teori Kuadrat Terkecil](https://wikipedia.org/wiki/Least_squares) oleh matematikawan Prancis Adrien-Marie Legendre. Teori ini, yang akan Anda pelajari dalam unit Regresi kami, membantu dalam pemasangan data.
+- 1913 [Rantai Markov](https://wikipedia.org/wiki/Markov_chain), dinamai menurut matematikawan Rusia Andrey Markov, digunakan untuk menggambarkan urutan peristiwa yang mungkin terjadi berdasarkan keadaan sebelumnya.
+- 1957 [Perceptron](https://wikipedia.org/wiki/Perceptron) adalah jenis pengklasifikasi linear yang ditemukan oleh psikolog Amerika Frank Rosenblatt yang mendasari kemajuan dalam pembelajaran mendalam.
+
+---
+
+- 1967 [Tetangga Terdekat](https://wikipedia.org/wiki/Nearest_neighbor) adalah algoritma yang awalnya dirancang untuk memetakan rute. Dalam konteks ML, algoritma ini digunakan untuk mendeteksi pola.
+- 1970 [Backpropagation](https://wikipedia.org/wiki/Backpropagation) digunakan untuk melatih [jaringan saraf feedforward](https://wikipedia.org/wiki/Feedforward_neural_network).
+- 1982 [Jaringan Saraf Rekuren](https://wikipedia.org/wiki/Recurrent_neural_network) adalah jaringan saraf buatan yang berasal dari jaringan saraf feedforward yang membuat grafik temporal.
+
+✅ Lakukan sedikit penelitian. Tanggal-tanggal lain apa yang menonjol sebagai penting dalam sejarah ML dan AI?
+
+---
+## 1950: Mesin yang Berpikir
+
+Alan Turing, seorang yang sangat luar biasa yang dipilih [oleh publik pada tahun 2019](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) sebagai ilmuwan terbesar abad ke-20, dianggap membantu meletakkan dasar untuk konsep 'mesin yang dapat berpikir.' Dia berjuang dengan para penentang dan kebutuhannya sendiri untuk bukti empiris dari konsep ini sebagian dengan menciptakan [Tes Turing](https://www.bbc.com/news/technology-18475646), yang akan Anda jelajahi dalam pelajaran NLP kami.
+
+---
+## 1956: Proyek Penelitian Musim Panas Dartmouth
+
+"Proyek Penelitian Musim Panas Dartmouth tentang kecerdasan buatan adalah acara penting untuk kecerdasan buatan sebagai bidang," dan di sinilah istilah 'kecerdasan buatan' diciptakan ([sumber](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> Setiap aspek pembelajaran atau fitur kecerdasan lainnya pada prinsipnya dapat dijelaskan dengan sangat tepat sehingga mesin dapat dibuat untuk mensimulasikannya.
+
+---
+
+Peneliti utama, profesor matematika John McCarthy, berharap "untuk melanjutkan berdasarkan dugaan bahwa setiap aspek pembelajaran atau fitur kecerdasan lainnya pada prinsipnya dapat dijelaskan dengan sangat tepat sehingga mesin dapat dibuat untuk mensimulasikannya." Para peserta termasuk tokoh terkenal lainnya dalam bidang ini, Marvin Minsky.
+
+Lokakarya ini dikreditkan dengan memulai dan mendorong beberapa diskusi termasuk "peningkatan metode simbolik, sistem yang berfokus pada domain terbatas (sistem ahli awal), dan sistem deduktif versus sistem induktif." ([sumber](https://wikipedia.org/wiki/Dartmouth_workshop)).
+
+---
+## 1956 - 1974: "Tahun-tahun emas"
+
+Dari tahun 1950-an hingga pertengahan '70-an, optimisme tinggi bahwa AI dapat menyelesaikan banyak masalah. Pada tahun 1967, Marvin Minsky dengan yakin menyatakan bahwa "Dalam satu generasi ... masalah menciptakan 'kecerdasan buatan' akan secara substansial terpecahkan." (Minsky, Marvin (1967), Komputasi: Mesin Terbatas dan Tak Terbatas, Englewood Cliffs, N.J.: Prentice-Hall)
+
+penelitian pemrosesan bahasa alami berkembang pesat, pencarian diperhalus dan dibuat lebih kuat, dan konsep 'dunia mikro' diciptakan, di mana tugas-tugas sederhana diselesaikan menggunakan instruksi bahasa sederhana.
+
+---
+
+Penelitian didanai dengan baik oleh lembaga pemerintah, kemajuan dibuat dalam komputasi dan algoritma, dan prototipe mesin cerdas dibangun. Beberapa mesin ini termasuk:
+
+* [Shakey the robot](https://wikipedia.org/wiki/Shakey_the_robot), yang bisa bermanuver dan memutuskan cara melakukan tugas dengan 'cerdas'.
+
+ 
+ > Shakey pada tahun 1972
+
+---
+
+* Eliza, 'chatterbot' awal, bisa bercakap-cakap dengan orang dan bertindak sebagai 'terapis' primitif. Anda akan belajar lebih banyak tentang Eliza dalam pelajaran NLP.
+
+ 
+ > Versi Eliza, chatbot
+
+---
+
+* "Dunia blok" adalah contoh dunia mikro di mana blok dapat ditumpuk dan disortir, dan eksperimen dalam mengajar mesin untuk membuat keputusan dapat diuji. Kemajuan yang dibangun dengan pustaka seperti [SHRDLU](https://wikipedia.org/wiki/SHRDLU) membantu mendorong pemrosesan bahasa ke depan.
+
+ [](https://www.youtube.com/watch?v=QAJz4YKUwqw "dunia blok dengan SHRDLU")
+
+ > 🎥 Klik gambar di atas untuk video: Dunia blok dengan SHRDLU
+
+---
+## 1974 - 1980: "Musim Dingin AI"
+
+Pada pertengahan 1970-an, menjadi jelas bahwa kompleksitas membuat 'mesin cerdas' telah diremehkan dan janjinya, mengingat kekuatan komputasi yang tersedia, telah dilebih-lebihkan. Pendanaan mengering dan kepercayaan pada bidang ini melambat. Beberapa masalah yang mempengaruhi kepercayaan termasuk:
+---
+- **Keterbatasan**. Kekuatan komputasi terlalu terbatas.
+- **Ledakan kombinatorial**. Jumlah parameter yang perlu dilatih tumbuh secara eksponensial seiring dengan lebih banyak yang diminta dari komputer, tanpa evolusi paralel kekuatan dan kemampuan komputasi.
+- **Kekurangan data**. Ada kekurangan data yang menghambat proses pengujian, pengembangan, dan penyempurnaan algoritma.
+- **Apakah kita mengajukan pertanyaan yang tepat?**. Pertanyaan-pertanyaan yang diajukan mulai dipertanyakan. Peneliti mulai menghadapi kritik tentang pendekatan mereka:
+ - Tes Turing dipertanyakan melalui, antara lain, teori 'ruang cina' yang berpendapat bahwa, "pemrograman komputer digital dapat membuatnya tampak memahami bahasa tetapi tidak dapat menghasilkan pemahaman nyata." ([sumber](https://plato.stanford.edu/entries/chinese-room/))
+ - Etika memperkenalkan kecerdasan buatan seperti "terapis" ELIZA ke dalam masyarakat dipertanyakan.
+
+---
+
+Pada saat yang sama, berbagai aliran pemikiran AI mulai terbentuk. Sebuah dikotomi dibentuk antara praktik ["scruffy" vs. "neat AI"](https://wikipedia.org/wiki/Neats_and_scruffies). Laboratorium _scruffy_ mengubah program selama berjam-jam hingga mereka mendapatkan hasil yang diinginkan. Laboratorium _neat_ "berfokus pada logika dan pemecahan masalah formal". ELIZA dan SHRDLU adalah sistem _scruffy_ yang terkenal. Pada tahun 1980-an, ketika permintaan muncul untuk membuat sistem ML dapat direproduksi, pendekatan _neat_ secara bertahap menjadi yang terdepan karena hasilnya lebih dapat dijelaskan.
+
+---
+## Sistem Ahli 1980-an
+
+Seiring berkembangnya bidang ini, manfaatnya bagi bisnis menjadi lebih jelas, dan pada tahun 1980-an demikian pula proliferasi 'sistem ahli'. "Sistem ahli adalah salah satu bentuk perangkat lunak kecerdasan buatan (AI) yang benar-benar berhasil pertama." ([sumber](https://wikipedia.org/wiki/Expert_system)).
+
+Jenis sistem ini sebenarnya _hibrida_, terdiri sebagian dari mesin aturan yang mendefinisikan persyaratan bisnis, dan mesin inferensi yang memanfaatkan sistem aturan untuk menyimpulkan fakta baru.
+
+Era ini juga melihat perhatian yang meningkat pada jaringan saraf.
+
+---
+## 1987 - 1993: AI 'Chill'
+
+Proliferasi perangkat keras sistem ahli khusus memiliki efek yang tidak menguntungkan menjadi terlalu khusus. Munculnya komputer pribadi juga bersaing dengan sistem besar, khusus, dan terpusat ini. Demokratisasi komputasi telah dimulai, dan akhirnya membuka jalan bagi ledakan data besar modern.
+
+---
+## 1993 - 2011
+
+Epok ini melihat era baru untuk ML dan AI untuk dapat memecahkan beberapa masalah yang disebabkan sebelumnya oleh kurangnya data dan kekuatan komputasi. Jumlah data mulai meningkat pesat dan menjadi lebih tersedia secara luas, untuk lebih baik dan lebih buruk, terutama dengan munculnya smartphone sekitar tahun 2007. Kekuatan komputasi berkembang secara eksponensial, dan algoritma berkembang seiring. Bidang ini mulai matang saat hari-hari bebas masa lalu mulai mengkristal menjadi disiplin yang benar.
+
+---
+## Sekarang
+
+Saat ini pembelajaran mesin dan AI menyentuh hampir setiap bagian dari kehidupan kita. Era ini memerlukan pemahaman yang cermat tentang risiko dan potensi efek dari algoritma ini pada kehidupan manusia. Seperti yang dikatakan oleh Brad Smith dari Microsoft, "Teknologi informasi mengangkat isu-isu yang menyentuh inti perlindungan hak asasi manusia fundamental seperti privasi dan kebebasan berekspresi. Isu-isu ini meningkatkan tanggung jawab bagi perusahaan teknologi yang menciptakan produk-produk ini. Menurut pandangan kami, mereka juga memerlukan regulasi pemerintah yang bijaksana dan pengembangan norma-norma tentang penggunaan yang dapat diterima" ([sumber](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+Masih harus dilihat apa yang akan terjadi di masa depan, tetapi penting untuk memahami sistem komputer ini dan perangkat lunak serta algoritma yang mereka jalankan. Kami berharap kurikulum ini akan membantu Anda untuk mendapatkan pemahaman yang lebih baik sehingga Anda dapat memutuskan sendiri.
+
+[](https://www.youtube.com/watch?v=mTtDfKgLm54 "Sejarah pembelajaran mendalam")
+> 🎥 Klik gambar di atas untuk video: Yann LeCun membahas sejarah pembelajaran mendalam dalam kuliah ini
+
+---
+## 🚀Tantangan
+
+Gali salah satu momen sejarah ini dan pelajari lebih lanjut tentang orang-orang di baliknya. Ada karakter-karakter yang menarik, dan tidak ada penemuan ilmiah yang pernah diciptakan dalam kekosongan budaya. Apa yang Anda temukan?
+
+## [Kuiz Pasca-Kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## Tinjauan & Studi Mandiri
+
+Berikut adalah item untuk ditonton dan didengarkan:
+
+[Podcast ini di mana Amy Boyd membahas evolusi AI](http://runasradio.com/Shows/Show/739)
+[](https://www.youtube.com/watch?v=EJt3_bFYKss "Sejarah AI oleh Amy Boyd")
+
+---
+
+## Tugasan
+
+[Buat garis masa](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/2-history-of-ML/assignment.md b/translations/ms/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..24a67f7a6
--- /dev/null
+++ b/translations/ms/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Create a timeline
+
+## Instructions
+
+Using [this repo](https://github.com/Digital-Humanities-Toolkit/timeline-builder), create a timeline of some aspect of the history of algorithms, mathematics, statistics, AI, or ML, or a combination of these. You can focus on one person, one idea, or a long timespan of thought. Make sure to add multimedia elements.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------ |
+| | A deployed timeline is presented as a GitHub page | The code is incomplete and not deployed | The timeline is incomplete, not well researched, and not deployed |
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/3-fairness/README.md b/translations/ms/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..ab0191c1f
--- /dev/null
+++ b/translations/ms/1-Introduction/3-fairness/README.md
@@ -0,0 +1,159 @@
+# Building Machine Learning solutions with responsible AI
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+In this curriculum, you will start to discover how machine learning can and is impacting our everyday lives. Even now, systems and models are involved in daily decision-making tasks, such as health care diagnoses, loan approvals or detecting fraud. So it is important that these models work well in order to provide trustworthy outcomes. Just like any software application, AI systems are going to miss expectations or have undesirable outcomes. That is why it is essential to be able to understand and explain the behavior of AI models.
+
+Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, religion, or disproportionally represents such demographics. What about when the model's output is interpreted to favor some demographic? What is the consequence for the application? In addition, what happens when the model has an adverse outcome and is harmful to people? Who is accountable for the AI systems' behavior? These are some questions we will explore in this curriculum.
+
+In this lesson, you will:
+
+- Raise your awareness of the importance of fairness in machine learning and fairness-related harms.
+- Become familiar with the practice of exploring outliers and unusual scenarios to ensure reliability and safety.
+- Gain understanding of the need to empower everyone by designing inclusive systems.
+- Explore how vital it is to protect the privacy and security of data and people.
+- See the importance of having a glass box approach to explain the behavior of AI models.
+- Be mindful of how accountability is essential to build trust in AI systems.
+
+## Prerequisite
+
+As a prerequisite, please take the "Responsible AI Principles" Learn Path and watch the video below on the topic:
+
+Learn more about Responsible AI by following this [Learning Path](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)
+
+[](https://youtu.be/dnC8-uUZXSc "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Microsoft's Approach to Responsible AI
+
+## Fairness
+
+AI systems should treat everyone fairly and avoid affecting similar groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications. Each of us as humans carries around inherited biases that affect our decisions and actions. These biases can be evident in the data that we use to train AI systems. Such manipulation can sometimes happen unintentionally. It is often difficult to consciously know when you are introducing bias in data.
+
+**"Unfairness"** encompasses negative impacts, or "harms", for a group of people, such as those defined in terms of race, gender, age, or disability status. The main fairness-related harms can be classified as:
+
+- **Allocation**, if a gender or ethnicity, for example, is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but reality is much more complex, it leads to a poor performing service. For instance, a hand soap dispenser that could not seem to be able to sense people with dark skin. [Reference](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **Denigration**. To unfairly criticize and label something or someone. For example, an image labeling technology infamously mislabeled images of dark-skinned people as gorillas.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that is contributing to harm.
+- **Stereotyping**. Associating a given group with pre-assigned attributes. For example, a language translation system between English and Turkish may have inaccuracies due to words with stereotypical associations to gender.
+
+
+> translation to Turkish
+
+
+> translation back to English
+
+When designing and testing AI systems, we need to ensure that AI is fair and not programmed to make biased or discriminatory decisions, which human beings are also prohibited from making. Guaranteeing fairness in AI and machine learning remains a complex sociotechnical challenge.
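+
+To make fairness measurable in practice, disparities between groups can be quantified. As a rough sketch (one approach among several), the open-source Fairlearn library, which this lesson's assignment also mentions, can compare a metric across groups defined by a sensitive feature; the tiny dataset below is invented purely for illustration:
+
+```python
+from fairlearn.metrics import MetricFrame
+from sklearn.metrics import accuracy_score
+
+# Invented outputs of a hypothetical loan-approval model, grouped by a
+# sensitive feature ('sex') for evaluation.
+y_true = [1, 0, 1, 1, 0, 1, 0, 1]
+y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
+sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]
+
+mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
+                 sensitive_features=sex)
+print(mf.by_group)      # accuracy computed per group
+print(mf.difference())  # the gap between groups; a large gap hints at unfairness
+```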
+
+### Reliability and safety
+
+To build trust, AI systems need to be reliable, safe, and consistent under normal and unexpected conditions. It is important to know how AI systems will behave in a variety of situations, especially when they are outliers. When building AI solutions, there needs to be a substantial amount of focus on how to handle a wide variety of circumstances that the AI solutions would encounter. For example, a self-driving car needs to put people's safety as a top priority. As a result, the AI powering the car needs to consider all the possible scenarios that the car could come across, such as night, thunderstorms or blizzards, kids running across the street, pets, road construction, and so on. How well an AI system can handle a wide range of conditions reliably and safely reflects the level of anticipation the data scientist or AI developer considered during the design or testing of the system.
+
+> [🎥 Click here for a video: ](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### Inclusiveness
+
+AI systems should be designed to engage and empower everyone. When designing and implementing AI systems, data scientists and AI developers identify and address potential barriers in the system that could unintentionally exclude people. For example, there are 1 billion people with disabilities around the world. With the advancement of AI, they can access a wide range of information and opportunities more easily in their daily lives. By addressing the barriers, it creates opportunities to innovate and develop AI products with better experiences that benefit everyone.
+
+> [🎥 Click here for a video: inclusiveness in AI](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### Security and privacy
+
+AI systems should be safe and respect people's privacy. People have less trust in systems that put their privacy, information, or lives at risk. When training machine learning models, we rely on data to produce the best results. In doing so, the origin of the data and its integrity must be considered. For example, was the data user-submitted or publicly available? Next, while working with the data, it is crucial to develop AI systems that can protect confidential information and resist attacks. As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. Privacy and data security issues require especially close attention for AI because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.
+
+> [🎥 Click here for a video: security in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- As an industry we have made significant advancements in privacy & security, fueled significantly by regulations like the GDPR (General Data Protection Regulation).
+- Yet with AI systems we must acknowledge the tension between the need for more personal data to make systems more personal and effective - and privacy.
+- Just like with the birth of connected computers via the internet, we are also seeing a huge uptick in the number of security issues related to AI.
+- At the same time, we have seen AI being used to improve security. As an example, most modern anti-virus scanners are driven by AI heuristics today.
+- We need to ensure that our Data Science processes blend harmoniously with the latest privacy and security practices.
+
+### Transparency
+
+AI systems should be understandable. A crucial part of transparency is explaining the behavior of AI systems and their components. Improving the understanding of AI systems requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that those who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them, as well as the limitations of the systems they use. For example, if a bank uses an AI system to support its consumer lending decisions, it is important to examine the outcomes and understand which data influences the system's recommendations. Governments are starting to regulate AI across industries, so data scientists and organizations must explain whether an AI system meets regulatory requirements, especially when there is an undesired outcome.
+
+> [🎥 Click here for a video: transparency in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Because AI systems are so complex, it is hard to understand how they work and interpret the results.
+- This lack of understanding affects the way these systems are managed, operationalized, and documented.
+- This lack of understanding more importantly affects the decisions made using the results these systems produce.
+
+### Accountability
+
+The people who design and deploy AI systems must be accountable for how their systems operate. The need for accountability is particularly crucial with sensitive-use technologies like facial recognition. Recently, there has been a growing demand for facial recognition technology, especially from law enforcement organizations who see the potential of the technology in uses like finding missing children. However, these technologies could potentially be used by a government to put its citizens' fundamental freedoms at risk by, for example, enabling continuous surveillance of specific individuals. Hence, data scientists and organizations need to be responsible for how their AI systems impact individuals or society.
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Warnings of Mass Surveillance Through Facial Recognition
+
+Ultimately one of the biggest questions for our generation, as the first generation that is bringing AI to society, is how to ensure that computers will remain accountable to people and how to ensure that the people that design computers remain accountable to everyone else.
+
+## Impact assessment
+
+Before training a machine learning model, it is important to conduct an impact assessment to understand the purpose of the AI system; what the intended use is; where it will be deployed; and who will be interacting with the system. These are helpful for reviewer(s) or testers evaluating the system to know what factors to take into consideration when identifying potential risks and expected consequences.
+
+The following are areas of focus when conducting an impact assessment:
+
+* **Adverse impact on individuals**. Being aware of any restriction or requirements, unsupported use or any known limitations hindering the system's performance is vital to ensure that the system is not used in a way that could cause harm to individuals.
+* **Data requirements**. Gaining an understanding of how and where the system will use data enables reviewers to explore any data requirements you would need to be mindful of (e.g., GDPR or HIPAA data regulations). In addition, examine whether the source or quantity of data is substantial for training.
+* **Summary of impact**. Gather a list of potential harms that could occur from using the system. Throughout the ML lifecycle, review if the issues identified are mitigated or addressed.
+* **Applicable goals** for each of the six core principles. Assess if the goals from each of the principles are met and if there are any gaps.
+
+## Debugging with responsible AI
+
+Similar to debugging a software application, debugging an AI system is a necessary process of identifying and resolving issues in the system. There are many factors that can cause a model to not perform as expected or responsibly. Most traditional model performance metrics are quantitative aggregates of a model's performance, which are not sufficient to analyze how a model violates the responsible AI principles. Furthermore, a machine learning model is a black box, which makes it difficult to understand what drives its outcome or to provide an explanation when it makes a mistake. Later in this course, we will learn how to use the Responsible AI dashboard to help debug AI systems. The dashboard provides a holistic tool for data scientists and AI developers to perform the following (a rough usage sketch appears after this list):
+
+* **Error analysis**. To identify the error distribution of the model that can affect the system's fairness or reliability.
+* **Model overview**. To discover where there are disparities in the model's performance across data cohorts.
+* **Data analysis**. To understand the data distribution and identify any potential bias in the data that could lead to fairness, inclusiveness, and reliability issues.
+* **Model interpretability**. To understand what affects or influences the model's predictions. This helps in explaining the model's behavior, which is important for transparency and accountability.
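+
+As a rough sketch of how such a dashboard is typically assembled around a trained model - assuming the `responsibleai` and `raiwidgets` packages are installed, and with the exact calls to be checked against the toolbox's own documentation:
+
+```python
+import pandas as pd
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# A small stand-in model and dataset, just to have something to inspect.
+data = load_iris(as_frame=True).frame.rename(columns={"target": "label"})
+train_df, test_df = data.iloc[:120], data.iloc[120:]
+model = RandomForestClassifier().fit(train_df.drop(columns="label"), train_df["label"])
+
+insights = RAIInsights(model, train_df, test_df,
+                       target_column="label", task_type="classification")
+insights.explainer.add()       # model interpretability
+insights.error_analysis.add()  # error distribution across cohorts
+insights.compute()
+
+ResponsibleAIDashboard(insights)  # renders the dashboard in a notebook
+```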
+
+## 🚀 Challenge
+
+To prevent harms from being introduced in the first place, we should:
+
+- have a diversity of backgrounds and perspectives among the people working on systems
+- invest in datasets that reflect the diversity of our society
+- develop better methods throughout the machine learning lifecycle for detecting and correcting responsible AI issues when they occur
+
+Think about real-life scenarios where a model's untrustworthiness is evident in model-building and usage. What else should we consider?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## Review & Self Study
+
+In this lesson, you have learned some basics of the concepts of fairness and unfairness in machine learning.
+
+Watch this workshop to dive deeper into the topics:
+
+- In pursuit of responsible AI: Bringing principles to practice by Besmira Nushi, Mehrnoosh Sameki and Amit Sharma
+
+[](https://www.youtube.com/watch?v=tGgJCrA-MZU "RAI Toolbox: An open-source framework for building responsible AI by Besmira Nushi, Mehrnoosh Sameki, and Amit Sharma")
+
+> 🎥 Click the image above for a video: RAI Toolbox: An open-source framework for building responsible AI by Besmira Nushi, Mehrnoosh Sameki, and Amit Sharma
+
+Also, read:
+
+- Microsoft's RAI resource center: [Responsible AI Resources – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- Microsoft's FATE research group: [FATE: Fairness, Accountability, Transparency, and Ethics in AI - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+RAI Toolbox:
+
+- [Responsible AI Toolbox GitHub repository](https://github.com/microsoft/responsible-ai-toolbox)
+
+Read about Azure Machine Learning's tools to ensure fairness:
+
+- [Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml?WT.mc_id=academic-77952-leestott)
+
+## Assignment
+
+[Explore RAI Toolbox](assignment.md)
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/3-fairness/assignment.md b/translations/ms/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..a7b791a43
--- /dev/null
+++ b/translations/ms/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI Toolbox
+
+## Instructions
+
+In this lesson you learned about the Responsible AI Toolbox, a "community-driven, open-source project to aid data scientists to analyze and improve AI systems." For this assignment, explore one of the RAI Toolbox's [notebooks](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | A paper or powerpoint presentation is presented discussing Fairlearn's systems, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/4-techniques-of-ML/README.md b/translations/ms/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..571c79e51
--- /dev/null
+++ b/translations/ms/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Techniques of Machine Learning
+
+The process of building, using, and maintaining machine learning models and the data they use is a very different process from many other development workflows. In this lesson, we will demystify the process, and outline the main techniques you need to know. You will:
+
+- Understand the processes underpinning machine learning at a high level.
+- Explore base concepts such as 'models', 'predictions', and 'training data'.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "ML for beginners - Techniques of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+## Introduction
+
+On a high level, the craft of creating machine learning (ML) processes is comprised of a number of steps:
+
+1. **Decide on the question**. Most ML processes start by asking a question that cannot be answered by a simple conditional program or rules-based engine. These questions often revolve around predictions based on a collection of data.
+2. **Collect and prepare data**. To be able to answer your question, you need data. The quality and, sometimes, quantity of your data will determine how well you can answer your initial question. Visualizing data is an important aspect of this phase. This phase also includes splitting the data into a training and testing group to build a model.
+3. **Choose a training method**. Depending on your question and the nature of your data, you need to choose how you want to train a model to best reflect your data and make accurate predictions against it. This is the part of your ML process that requires specific expertise and, often, a considerable amount of experimentation.
+4. **Train the model**. Using your training data, you'll use various algorithms to train a model to recognize patterns in the data. The model might leverage internal weights that can be adjusted to privilege certain parts of the data over others to build a better model.
+5. **Evaluate the model**. You use never before seen data (your testing data) from your collected set to see how the model is performing.
+6. **Parameter tuning**. Based on the performance of your model, you can redo the process using different parameters, or variables, that control the behavior of the algorithms used to train the model.
+7. **Predict**. Use new inputs to test the accuracy of your model.
+
+## What question to ask
+
+Computers are particularly skilled at discovering hidden patterns in data. This utility is very helpful for researchers who have questions about a given domain that cannot be easily answered by creating a conditionally-based rules engine. Given an actuarial task, for example, a data scientist might be able to construct handcrafted rules around the mortality of smokers vs non-smokers.
+
+When many other variables are brought into the equation, however, an ML model might prove more efficient to predict future mortality rates based on past health history. A more cheerful example might be making weather predictions for the month of April in a given location based on data that includes latitude, longitude, climate change, proximity to the ocean, patterns of the jet stream, and more.
+
+✅ This [slide deck](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) on weather models offers a historical perspective for using ML in weather analysis.
+
+## Pre-building tasks
+
+Before starting to build your model, there are several tasks you need to complete. To test your question and form a hypothesis based on a model's predictions, you need to identify and configure several elements.
+
+### Data
+
+To be able to answer your question with any kind of certainty, you need a good amount of data of the right type. There are two things you need to do at this point:
+
+- **Collect data**. Keeping in mind the previous lesson on fairness in data analysis, collect your data with care. Be aware of the sources of this data, any inherent biases it might have, and document its origin.
+- **Prepare data**. There are several steps in the data preparation process. You might need to collate data and normalize it if it comes from diverse sources. You can improve the data's quality and quantity through various methods such as converting strings to numbers (as we do in [Clustering](../../5-Clustering/1-Visualize/README.md)). You might also generate new data, based on the original (as we do in [Classification](../../4-Classification/1-Introduction/README.md)). You can clean and edit the data (as we will prior to the [Web App](../../3-Web-App/README.md) lesson). Finally, you might also need to randomize and shuffle it, depending on your training techniques. A minimal preparation sketch follows the note below.
+
+✅ After collecting and processing your data, take a moment to see if its shape will allow you to address your intended question. It may be that the data will not perform well in your given task, as we discover in our [Clustering](../../5-Clustering/1-Visualize/README.md) lessons!
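+
+As that minimal sketch (the column names are invented, purely for illustration), here is one way to convert strings to numbers and normalize a numeric column using Scikit-learn's preprocessing helpers:
+
+```python
+import pandas as pd
+from sklearn.preprocessing import LabelEncoder, MinMaxScaler
+
+# Hypothetical raw data mixing strings and a differently-scaled number.
+df = pd.DataFrame({"color": ["orange", "white", "orange"],
+                   "weight_kg": [2.0, 5.5, 3.2]})
+
+df["color"] = LabelEncoder().fit_transform(df["color"])              # strings -> numbers
+df[["weight_kg"]] = MinMaxScaler().fit_transform(df[["weight_kg"]])  # scale to [0, 1]
+print(df)
+```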
+
+### Features and Target
+
+A [feature](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection) is a measurable property of your data. In many datasets it is expressed as a column heading like 'date', 'size' or 'color'. Your feature variable, usually represented as `X` in code, represents the input variable which will be used to train the model.
+
+A target is the thing you are trying to predict. Target, usually represented as `y` in code, represents the answer to the question you are trying to ask of your data: in December, what **color** pumpkins will be cheapest? in San Francisco, what neighborhoods will have the best **real estate** price? Sometimes target is also referred to as label attribute.
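+
+As a small illustration (the pumpkin numbers below are invented), this is how `X` and `y` are often carved out of a pandas DataFrame:
+
+```python
+import pandas as pd
+
+# Hypothetical pumpkin data: 'size' and 'color' are features, 'price' is the target.
+df = pd.DataFrame({
+    "size": [4, 7, 9, 12],
+    "color": [0, 1, 1, 0],   # already encoded as numbers
+    "price": [3.5, 6.0, 7.5, 10.0],
+})
+
+X = df[["size", "color"]]  # feature variables: inputs used to train the model
+y = df["price"]            # target variable: the answer we want to predict
+```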
+
+### Selecting your feature variable
+
+🎓 **Feature Selection and Feature Extraction** How do you know which variable to choose when building a model? You'll probably go through a process of feature selection or feature extraction to choose the right variables for the most performant model. They're not the same thing, however: "Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features." ([source](https://wikipedia.org/wiki/Feature_selection))
+
+### Visualize your data
+
+An important aspect of the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data (as we discover in [Classification](../../4-Classification/2-Classifiers-1/README.md)).
+
+### Split your dataset
+
+Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well.
+
+- **Training**. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
+- **Testing**. A test dataset is an independent group of data, often gathered from the original data, that you use to confirm the performance of the built model.
+- **Validation**. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set (as we note in [Time Series Forecasting](../../7-TimeSeries/1-Introduction/README.md)). A minimal split sketch follows this list.
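+
+Here is that sketch, using Scikit-learn's built-in diabetes dataset; the 60/20/20 proportions are just one common convention, not a rule:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.model_selection import train_test_split
+
+X, y = load_diabetes(return_X_y=True)
+
+# Hold back 20% of the rows as a test set; the rest is for training.
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# If a validation set is needed, split the training portion again.
+X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
+print(len(X_train), len(X_val), len(X_test))  # roughly 60/20/20
+```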
+
+## Building a model
+
+Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to **train** it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.
+
+### Decide on a training method
+
+Depending on your question and the nature of your data, you will choose a method to train it. Stepping through [Scikit-learn's documentation](https://scikit-learn.org/stable/user_guide.html) - which we use in this course - you can explore many ways to train a model. Depending on your experience, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
+
+### Train a model
+
+Armed with your training data, you are ready to 'fit' it to create a model. You will notice that in many ML libraries you will find the code 'model.fit' - it is at this time that you send in your feature variable as an array of values (usually 'X') and a target variable (usually 'y').
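+
+In Scikit-learn, for instance, that step looks like this minimal sketch (using the library's built-in diabetes dataset rather than any particular project data):
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import LinearRegression
+from sklearn.model_selection import train_test_split
+
+X, y = load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+model = LinearRegression()
+model.fit(X_train, y_train)  # X = array of feature values, y = target values
+```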
+
+### Evaluate the model
+
+Once the training process is complete (it might take many iterations, or 'epochs', to train a large model), you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.
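+
+A minimal sketch of such an evaluation, scoring the fitted model on held-back test data with two common regression metrics:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error, r2_score
+from sklearn.model_selection import train_test_split
+
+X, y = load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+model = LinearRegression().fit(X_train, y_train)
+predictions = model.predict(X_test)  # data the model has never analyzed
+
+print("MSE:", mean_squared_error(y_test, predictions))
+print("R^2:", r2_score(y_test, predictions))
+```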
+
+🎓 **Model fitting**
+
+In the machine learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
+
+🎓 **Underfitting** and **overfitting** are common problems that degrade the quality of the model, as the model fits either not well enough or too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'. One simple way to spot overfitting is sketched below the infographic.
+
+
+> Infographic by [Jen Looper](https://twitter.com/jenlooper)
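+
+A rough but practical check: compare a model's score on its own training data with its score on held-back test data. An unconstrained decision tree makes the gap easy to see:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.model_selection import train_test_split
+from sklearn.tree import DecisionTreeRegressor
+
+X, y = load_diabetes(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# With no depth limit, the tree can memorize its training data (overfitting):
+tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
+print(tree.score(X_train, y_train))  # near-perfect R^2 on training data ...
+print(tree.score(X_test, y_test))    # ... but a much worse score on unseen data
+```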
+
+## Parameter tuning
+
+Once your initial training is complete, observe the quality of the model and consider improving it by tweaking its 'hyperparameters'. Read more about the process [in the documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott).
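+
+As a small sketch of one common tuning approach, grid search with cross-validation (the candidate values below are arbitrary):
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.model_selection import GridSearchCV
+from sklearn.tree import DecisionTreeRegressor
+
+X, y = load_diabetes(return_X_y=True)
+
+# Try several values of the 'max_depth' hyperparameter and keep the best.
+search = GridSearchCV(DecisionTreeRegressor(random_state=0),
+                      param_grid={"max_depth": [2, 3, 5, 10]})
+search.fit(X, y)
+print(search.best_params_, search.best_score_)
+```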
+
+## Prediction
+
+This is the moment where you can use completely new data to test your model's accuracy. In an 'applied' ML setting, where you are building web assets to use the model in production, this process might involve gathering user input (a button press, for example) to set a variable and send it to the model for inference, or evaluation.
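+
+A minimal sketch of that inference step; in a real application the 'new' row would come from user input rather than being borrowed from the dataset as it is here:
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import LinearRegression
+
+X, y = load_diabetes(return_X_y=True)
+model = LinearRegression().fit(X, y)
+
+new_patient = [X[0]]               # stand-in for genuinely new input (one row, 10 features)
+print(model.predict(new_patient))  # the model's prediction for this input
+```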
+
+In these lessons, you will discover how to use these steps to prepare, build, test, evaluate, and predict - all the gestures of a data scientist and more, as you progress in your journey to become a 'full stack' ML engineer.
+
+---
+
+## 🚀Challenge
+
+Draw a flow chart reflecting the steps of an ML practitioner. Where do you see yourself right now in the process? Where do you predict you will find difficulty? What seems easy to you?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Review & Self Study
+
+Search online for interviews with data scientists who discuss their daily work. Here is [one](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Assignment
+
+[Interview a data scientist](assignment.md)
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/4-techniques-of-ML/assignment.md b/translations/ms/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..7cb62471c
--- /dev/null
+++ b/translations/ms/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Interview a data scientist
+
+## Instructions
+
+In your company, in a user group, or among your friends or fellow students, talk to someone who works professionally as a data scientist. Write a short paper (500 words) about their daily occupations. Are they specialists, or do they work 'full stack'?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------------------------------------------------------------------------------- | ------------------------------------------------------------------ | --------------------- |
+| | An essay of the correct length, with attributed sources, is presented as a .doc file | The essay is poorly attributed or shorter than the required length | No essay is presented |
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/1-Introduction/README.md b/translations/ms/1-Introduction/README.md
new file mode 100644
index 000000000..49ce4d052
--- /dev/null
+++ b/translations/ms/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# Introduction to machine learning
+
+In this section of the curriculum, you will be introduced to the base concepts underlying the field of machine learning, what it is, and learn about its history and the techniques researchers use to work with it. Let's explore this new world of ML together!
+
+
+> Photo by Bill Oxford on Unsplash
+
+### Lessons
+
+1. [Introduction to machine learning](1-intro-to-ML/README.md)
+1. [The history of machine learning and AI](2-history-of-ML/README.md)
+1. [Fairness and machine learning](3-fairness/README.md)
+1. [Techniques of machine learning](4-techniques-of-ML/README.md)
+
+### Credits
+
+"Introduction to machine learning" was written with ♥️ by a team of folks including [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) and [Jen Looper](https://twitter.com/jenlooper)
+
+"The History of machine learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Amy Boyd](https://twitter.com/AmyKateNicho)
+
+"Fairness and machine learning" was written with ♥️ by [Tomomi Imura](https://twitter.com/girliemac)
+
+"Techniques of machine learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Chris Noring](https://twitter.com/softchris)
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/1-Tools/README.md b/translations/ms/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..3ddaa053d
--- /dev/null
+++ b/translations/ms/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Get started with Python and Scikit-learn for regression models
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Introduction
+
+In these four lessons, you will discover how to build regression models. We will discuss what these are for shortly. But before you do anything, make sure you have the right tools in place to start the process!
+
+In this lesson, you will learn how to:
+
+- Configure your computer for local machine learning tasks.
+- Work with Jupyter notebooks.
+- Use Scikit-learn, including installation.
+- Explore linear regression with a hands-on exercise.
+
+## Installations and configurations
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML for beginners - Setup your tools ready to build Machine Learning models")
+
+> 🎥 Click the image above for a short video on configuring your computer for ML.
+
+1. **Install Python**. Ensure that [Python](https://www.python.org/downloads/) is installed on your computer. You will use Python for many data science and machine learning tasks. Most computer systems already include a Python installation. There are useful [Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) available as well, to ease the setup for some users.
+
+    Some usages of Python, however, require one version of the software, whereas others require a different version. For this reason, it's useful to work within a [virtual environment](https://docs.python.org/3/library/venv.html).
+
+2. **Install Visual Studio Code**. Make sure you have Visual Studio Code installed on your computer. Follow these instructions to [install Visual Studio Code](https://code.visualstudio.com/) for the basic installation. You are going to use Python in Visual Studio Code in this course, so you might want to brush up on how to [configure Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) for Python development.
+
+    > Get comfortable with Python by working through this collection of [Learn modules](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)
+    >
+    > [](https://youtu.be/yyQM70vi7V8 "Setup Python with Visual Studio Code")
+    >
+    > 🎥 Click the image above for a video: using Python within VS Code.
+
+3. **Install Scikit-learn**, by following [these instructions](https://scikit-learn.org/stable/install.html). Since you need to ensure that you use Python 3, it's recommended that you use a virtual environment. Note, if you are installing this library on an M1 Mac, there are special instructions on the page linked above.
+
+1. **Install Jupyter Notebook**. You will need to [install the Jupyter package](https://pypi.org/project/jupyter/).
+
+## Your ML authoring environment
+
+You are going to use **notebooks** to develop your Python code and create machine learning models. This type of file is a common tool for data scientists, and they can be identified by their suffix or extension `.ipynb`.
+
+Notebooks are an interactive environment that allow the developer to both code and add notes and write documentation around the code, which is quite helpful for experimental or research-oriented projects.
+
+[](https://youtu.be/7E-jC8FLA2E "ML for beginners - Set up Jupyter Notebooks to start building regression models")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+### Exercise - work with a notebook
+
+In this folder, you will find the file _notebook.ipynb_.
+
+1. Open _notebook.ipynb_ in Visual Studio Code.
+
+    A Jupyter server will start with Python 3+ started. You will find areas of the notebook that can be `run`, pieces of code. You can run a code block by selecting the icon that looks like a play button.
+
+1. Select the `md` icon and add a bit of markdown, and the following text **# Welcome to your notebook**.
+
+    Next, add some Python code.
+
+1. Type **print('hello notebook')** in the code block.
+1. Select the arrow to run the code.
+
+    You should see the printed statement:
+
+    ```output
+    hello notebook
+    ```
+
+
+
+You can intersperse your code with comments to self-document the notebook.
+
+✅ Think for a minute how different a web developer's working environment is versus a data scientist's.
+
+## Up and running with Scikit-learn
+
+Now that Python is set up in your local environment, and you are comfortable with Jupyter notebooks, let's get equally comfortable with Scikit-learn (pronounce it `sci` as in `science`). Scikit-learn provides an [extensive API](https://scikit-learn.org/stable/modules/classes.html#api-ref) to help you perform ML tasks.
+
+According to their [website](https://scikit-learn.org/stable/getting_started.html), "Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities."
+
+In this course, you will use Scikit-learn and other tools to build machine learning models to perform what we call 'traditional machine learning' tasks. We have deliberately avoided neural networks and deep learning, as they are better covered in our forthcoming 'AI for Beginners' curriculum.
+
+Scikit-learn makes it straightforward to build models and evaluate them for use. It is primarily focused on using numeric data and contains several ready-made datasets for use as learning tools. It also includes pre-built models for students to try. Let's explore the process of loading prepackaged data and using a built-in estimator for a first ML model with Scikit-learn with some basic data.
+
+## Exercise - your first Scikit-learn notebook
+
+> This tutorial was inspired by the [linear regression example](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) on Scikit-learn's web site.
+
+[](https://youtu.be/2xkXL5EUpS0 "ML for beginners - Your First Linear Regression Project in Python")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+In the _notebook.ipynb_ file associated with this lesson, clear out all the cells by pressing the 'trash can' icon.
+
+In this section, you will work with a small dataset about diabetes that is built into Scikit-learn for learning purposes. Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
+
+✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use linear regression, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment**, so you would use logistic regression. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate. A tiny sketch contrasting the two follows.
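+
+A tiny sketch of that distinction (the numbers are invented toy data): note how one model returns a numeric value and the other a category:
+
+```python
+from sklearn.linear_model import LinearRegression, LogisticRegression
+
+ages = [[5], [10], [15], [20]]   # one feature: age in years
+heights = [110, 138, 160, 172]   # numeric target -> linear regression
+categories = [0, 0, 1, 1]        # category target -> logistic regression
+
+print(LinearRegression().fit(ages, heights).predict([[12]]))      # a number
+print(LogisticRegression().fit(ages, categories).predict([[12]])) # a class label
+```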
+
+Mari kita mulakan tugas ini.
+
+### Import perpustakaan
+
+Untuk tugas ini, kita akan mengimport beberapa perpustakaan:
+
+- **matplotlib**. Ia adalah alat [grafik yang berguna](https://matplotlib.org/) dan kita akan menggunakannya untuk mencipta plot garis.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) adalah perpustakaan yang berguna untuk mengendalikan data berangka dalam Python.
+- **sklearn**. Ini adalah perpustakaan [Scikit-learn](https://scikit-learn.org/stable/user_guide.html).
+
+Import beberapa perpustakaan untuk membantu dengan tugas anda.
+
+1. Tambah import dengan menaip kod berikut:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Above you are importing `matplotlib` and `numpy`, and you are importing `datasets`, `linear_model` and `model_selection` from `sklearn`. `model_selection` is used for splitting data into training and test sets.
+
+### The diabetes dataset
+
+The built-in [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) includes 442 samples of data around diabetes, with 10 feature variables, some of which include:
+
+- age: age in years
+- bmi: body mass index
+- bp: average blood pressure
+- s1 tc: T-Cells (a type of white blood cell)
+
+✅ This dataset includes the concept of 'sex' as a feature variable important to research around diabetes. Many medical datasets include this type of binary classification. Think a bit about how categorizations such as this might exclude certain parts of a population from treatments.
+
+Now, load up the X and y data.
+
+> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
+
+In a new code cell, load the diabetes dataset by calling `load_diabetes()`. The input `return_X_y=True` signals that `X` will be a data matrix, and `y` will be the regression target.
+
+1. Add some print commands to show the shape of the data matrix and its first element:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   What you get back as a response is a tuple. What you are doing is assigning the first two values of the tuple to `X` and `y` respectively. Learn more [about tuples](https://wikipedia.org/wiki/Tuple).
+
+   You can see that this data has 442 items shaped in arrays of 10 elements:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+   ✅ Think a bit about the relationship between the data and the regression target. Linear regression predicts the relationship between feature X and target variable y. Can you find the [target](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) for the diabetes dataset in the documentation? What is this dataset demonstrating, given that target?
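+
+   If you'd rather check from within the notebook than the documentation, the following minimal sketch prints the dataset's description; `DESCR` is a standard attribute of Scikit-learn's toy dataset objects:
+
+    ```python
+    # Load the full dataset object (not just X and y) to read its description
+    diabetes = datasets.load_diabetes()
+    print(diabetes.DESCR)  # includes the feature list and the target definition
+    ```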
+
+2. Next, select a portion of this dataset to plot by selecting the 3rd column of the dataset. You can do this by using the `:` operator to select all rows, and then selecting the 3rd column using its index (2). You can also reshape the data to be a 2D array - as required for plotting - by using `reshape(n_rows, n_columns)`. If one of the parameters is -1, the corresponding dimension is calculated automatically.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+   ✅ At any point, print out the data to check its shape.
+
+3. Now that you have data ready to be plotted, you can see if a machine can help determine a logical split between the numbers in this dataset. To do this, you need to split both the data (X) and the target (y) into test and training sets. Scikit-learn has a straightforward way to do this; you can split your test data at a given point.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. Now you are ready to train your model! Load the linear regression model and train it with your X and y training sets using `model.fit()`:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()` is a function you'll see in many ML libraries such as TensorFlow.
+
+5. Then, create a prediction using test data, using the function `predict()`. This will be used to draw a line between the data groups.
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. Now it's time to show the data in a plot. Matplotlib is a very useful tool for this task. Create a scatterplot of all the X and y test data, and use the prediction to draw a line in the most appropriate place between the model's data groupings.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+   ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relation to the plot's y axis? Try to put into words the practical use of this model.
+
+Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
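+
+As an optional sanity check (not part of the original exercise), you could also quantify the fit with the model's R² score on the held-out data; `score()` is part of the standard Scikit-learn estimator API:
+
+```python
+# R² on the test split: 1.0 is a perfect fit, 0 matches a mean-only predictor
+r2 = model.score(X_test, y_test)
+print(f'R2 on test data: {r2:.3f}')
+```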
+
+---
+## 🚀Challenge
+
+Plot a different variable from this dataset. Hint: edit this line: `X = X[:,2]`. Given this dataset's target, what are you able to discover about the progression of diabetes as a disease?
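+
+For example, a minimal sketch selecting the average blood pressure column (index 3 in this dataset) instead of BMI; the rest of the exercise stays the same:
+
+```python
+# Plot disease progression against blood pressure instead of BMI
+X, y = datasets.load_diabetes(return_X_y=True)
+X = X[:, 3].reshape((-1, 1))
+```
+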
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Review & Self Study
+
+In this tutorial, you worked with simple linear regression, as opposed to univariate or multiple linear regression. Read a little about the differences between these methods, or take a look at [this video](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)
+
+Read more about the concept of regression and think about what kinds of questions can be answered by this technique. Take [this tutorial](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) to deepen your understanding.
+
+## Assignment
+
+[A different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/1-Tools/assignment.md b/translations/ms/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..08d4e521f
--- /dev/null
+++ b/translations/ms/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Regression with Scikit-learn
+
+## Instructions
+
+Take a look at the [Linnerud dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) in Scikit-learn. This dataset has multiple [targets](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset): 'It consists of three exercise variables and three physiological variables collected from twenty middle-aged men in a fitness club'.
+
+In your own words, describe how to create a regression model that would plot the relationship between the waistline and how many situps are accomplished. Do the same for the other datapoints in this dataset.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------------------------------ | ----------------------------------- | ----------------------------- | -------------------------- |
+| Submit a descriptive paragraph | A well-written paragraph is submitted | A few sentences are submitted | No description is supplied |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/1-Tools/solution/Julia/README.md b/translations/ms/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..1da118c7d
--- /dev/null
+++ b/translations/ms/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/2-Data/README.md b/translations/ms/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..9a9419981
--- /dev/null
+++ b/translations/ms/2-Regression/2-Data/README.md
@@ -0,0 +1,214 @@
+# Build a regression model using Scikit-learn: prepare and visualize data
+
+
+
+Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Introduction
+
+Now that you have the tools you need to start tackling machine learning model-building with Scikit-learn, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right question to properly unlock the potential of your dataset.
+
+In this lesson, you will learn:
+
+- How to prepare your data for model-building.
+- How to use Matplotlib for data visualization.
+
+## Asking the right question of your data
+
+The question you need answered will determine what type of ML algorithms you will leverage. And the quality of the answer you get back will be heavily dependent on the nature of your data.
+
+Take a look at the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson. You can open this .csv file in VS Code. A quick skim immediately shows that there are blanks, and a mix of string and numeric data. There's also a strange column called 'Package' where the data is a mix of 'sacks', 'bins' and other values. The data, in fact, is a bit of a mess.
+
+[](https://youtu.be/5qGjczWTrDQ "ML for beginners - How to Analyze and Clean a Dataset")
+
+> 🎥 Click the image above for a short video working through preparing the data for this lesson.
+
+In fact, it is not very common to be gifted a dataset that is completely ready to use to create an ML model out of the box. In this lesson, you will learn how to prepare a raw dataset using standard Python libraries. You will also learn various techniques to visualize the data.
+
+## Case study: 'the pumpkin market'
+
+In this folder you will find a .csv file in the root `data` folder called [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) which includes 1757 lines of data about the market for pumpkins, sorted into groupings by city. This is raw data extracted from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture.
+
+### Preparing data
+
+This data is in the public domain. It can be downloaded in many separate files, per city, from the USDA web site. To avoid too many separate files, we have concatenated all the city data into one spreadsheet, so we have already _prepared_ the data a bit. Next, let's take a closer look at the data.
+
+### The pumpkin data - early conclusions
+
+What do you notice about this data? You already saw that there is a mix of strings, numbers, blanks and strange values that you need to make sense of.
+
+What question can you ask of this data, using a regression technique? How about "Predict the price of a pumpkin for sale during a given month"? Looking again at the data, there are some changes you need to make to create the data structure necessary for the task.
+
+## Exercise - analyze the pumpkin data
+
+Let's use [Pandas](https://pandas.pydata.org/) (the name is short for `Python Data Analysis`), a tool very useful for shaping data, to analyze and prepare this pumpkin data.
+
+### First, check for missing dates
+
+You will first need to take steps to check for missing dates:
+
+1. Convert the dates to a month format (these are US dates, so the format is `MM/DD/YYYY`).
+2. Extract the month to a new column.
+
+Open the _notebook.ipynb_ file in Visual Studio Code and import the spreadsheet into a new Pandas dataframe.
+
+1. Use the `head()` function to view the first five rows.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+   ✅ What function would you use to view the last five rows?
+
+1. Check if there is missing data in the current dataframe:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+   There is missing data, but maybe it won't matter for the task at hand.
+
+1. To make your dataframe easier to work with, select only the columns you need, using the `loc` function, which extracts from the original dataframe a group of rows (passed as the first parameter) and columns (passed as the second parameter). The expression `:` in the case below means "all rows".
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### Second, determine the average price of a pumpkin
+
+Think about how to determine the average price of a pumpkin in a given month. What columns would you pick for this task? Hint: you'll need 3 columns.
+
+Solution: take the average of the `Low Price` and `High Price` columns to populate the new Price column, and convert the Date column to only show the month. Fortunately, according to the check above, there is no missing data for dates or prices.
+
+1. To calculate the average, add the following code:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+   ✅ Feel free to print any data you'd like to check using `print(month)`.
+
+2. Now, copy your converted data into a fresh Pandas dataframe:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+   Printing out your dataframe will show you a clean, tidy dataset on which you can build your new regression model.
+
+### But wait! There's something odd here
+
+If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in '1 1/9 bushel' measures, some in '1/2 bushel' measures, some per pumpkin, some per pound, and some in big boxes with varying widths.
+
+> Pumpkins seem very hard to weigh consistently
+
+Digging into the original data, it's interesting that anything with `Unit of Sale` equalling 'EACH' or 'PER BIN' also has the `Package` type per inch, per bin, or 'each'. Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string 'bushel' in their `Package` column.
+
+1. Add a filter at the top of the file, under the initial .csv import:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+   If you print the data now, you can see that you are only getting the 415 or so rows of data containing pumpkins by the bushel.
+
+### But wait! There's one more thing to do
+
+Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, so do some math to standardize it.
+
+1. Add these lines after the block creating the new_pumpkins dataframe:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. "A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds." It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel. All this study of bushels of pumpkins, however, goes to show how very important it is to understand the nature of your data!
+
+Now, you can analyze the pricing per unit based on their bushel measurement. If you print out the data one more time, you can see how it's standardized.
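+
+A quick, minimal way to eyeball the standardized prices (assuming the `new_pumpkins` dataframe built above):
+
+```python
+# Peek at the normalized per-bushel prices
+print(new_pumpkins.head())
+```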
+
+✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are far pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken up by one big hollow pie pumpkin.
+
+## Visualization Strategies
+
+Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of data. In this way, they are able to visually show relationships and gaps that are otherwise hard to uncover.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML for beginners - How to Visualize Data with Matplotlib")
+
+> 🎥 Click the image above for a short video working through visualizing the data for this lesson.
+
+Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.
+
+One data visualization library that works well in Jupyter notebooks is [Matplotlib](https://matplotlib.org/) (which you also saw in the previous lesson).
+
+> Get more experience with data visualization in [this tutorial](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott).
+
+## Exercise - experiment with Matplotlib
+
+Try to create some basic plots to display the new dataframe you just created. What would a basic line plot show?
+
+1. Import Matplotlib at the top of the file, under the Pandas import:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Rerun the entire notebook to refresh.
+1. At the bottom of the notebook, add a cell to plot the data as a box:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+   Is this a useful plot? Does anything about it surprise you?
+
+   It's not particularly useful, as all it does is display your data as a spread of points in a given month.
+
+### Make it useful
+
+To get charts to display useful data, you usually need to group the data somehow. Let's try creating a plot where the y axis shows the months and the data demonstrates the distribution of the data.
+
+1. Add a cell to create a grouped bar chart:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+   This is a more useful data visualization! It seems to indicate that the highest price for pumpkins occurs in September and October. Does that meet your expectation? Why or why not?
+
+---
+
+## 🚀Challenge
+
+Explore the different types of visualization that Matplotlib offers. Which types are most appropriate for regression problems?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## Review & Self Study
+
+Take a look at the many ways to visualize data. Make a list of the various libraries available and note which are best for given types of tasks, for example 2D visualizations vs. 3D visualizations. What do you discover?
+
+## Assignment
+
+[Exploring visualization](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/2-Data/assignment.md b/translations/ms/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..72c7f1ead
--- /dev/null
+++ b/translations/ms/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Exploring Visualizations
+
+There are several different libraries available for data visualization. Create some visualizations using the Pumpkin data in this lesson with matplotlib and seaborn in a sample notebook. Which libraries are easier to work with?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+| | A notebook is submitted with two explorations/visualizations | A notebook is submitted with one exploration/visualization | A notebook is not submitted |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/2-Data/solution/Julia/README.md b/translations/ms/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..aa1ddb2b9
--- /dev/null
+++ b/translations/ms/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/3-Linear/README.md b/translations/ms/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..96641faf7
--- /dev/null
+++ b/translations/ms/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Build a regression model using Scikit-learn: regression four ways
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### Introduction
+
+So far you have explored what regression is with sample data gathered from the pumpkin pricing dataset that we will use throughout this lesson. You have also visualized it using Matplotlib.
+
+Now you are ready to dive deeper into regression for ML. While visualization allows you to make sense of data, the real power of Machine Learning comes from _training models_. Models are trained on historic data to automatically capture data dependencies, and they allow you to predict outcomes for new data which the model has not seen before.
+
+In this lesson, you will learn more about two types of regression: _basic linear regression_ and _polynomial regression_, along with some of the math underlying these techniques. These models will allow us to predict pumpkin prices depending on different input data.
+
+[](https://youtu.be/CRxFT8oTDMg "ML for beginners - Understanding Linear Regression")
+
+> 🎥 Click the image above for a short video overview of linear regression.
+
+> Throughout this curriculum, we assume minimal knowledge of math, and seek to make it accessible for students coming from other fields, so watch for notes, 🧮 callouts, diagrams, and other learning tools to aid comprehension.
+
+### Prerequisite
+
+You should be familiar by now with the structure of the pumpkin data that we are examining. You can find it preloaded and pre-cleaned in this lesson's _notebook.ipynb_ file. In the file, the pumpkin price is displayed per bushel in a new dataframe. Make sure you can run these notebooks in kernels in Visual Studio Code.
+
+### Preparation
+
+As a reminder, you are loading this data so as to ask questions of it.
+
+- When is the best time to buy pumpkins?
+- What price can I expect for a case of miniature pumpkins?
+- Should I buy them in half-bushel baskets or by the 1 1/9 bushel box?
+
+Let's keep digging into this data.
+
+In the previous lesson, you created a Pandas dataframe and populated it with part of the original dataset, standardizing the pricing by the bushel. By doing that, however, you were only able to gather about 400 datapoints, and only for the fall months.
+
+Take a look at the data preloaded in this lesson's accompanying notebook. The data is preloaded and an initial scatterplot is charted to show month data. Maybe we can get a little more detail about the nature of the data by cleaning it more.
+
+## A linear regression line
+
+As you learned in Lesson 1, the goal of a linear regression exercise is to be able to plot a line to:
+
+- **Show variable relationships**. Show the relationship between variables.
+- **Make predictions**. Make accurate predictions on where a new datapoint would fall in relation to that line.
+
+It is typical of **Least-Squares Regression** to draw this type of line. The term 'least-squares' means that the distances from each datapoint to the regression line are squared and then added up. Ideally, that final sum is as small as possible, because we want a low number of errors, or `least-squares`.
+
+We do this since we want to model a line that has the least cumulative distance from all of our data points. We also square the terms before adding them, since we are concerned with magnitude rather than direction.
+
+> **🧮 Show me the math**
+>
+> This line, called the _line of best fit_, can be expressed by [an equation](https://en.wikipedia.org/wiki/Simple_linear_regression):
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` is the 'explanatory variable'. `Y` is the 'dependent variable'. The slope of the line is `b` and `a` is the y-intercept, which refers to the value of `Y` when `X = 0`.
+>
+>
+>
+> First, calculate the slope `b`. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> In other words, and referring to our pumpkin data's original question: "predict the price of a pumpkin per bushel by month", `X` would refer to the price and `Y` would refer to the month of sale.
+>
+>
+>
+> Calculate the value of Y. If you're paying around $4, it must be April! Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> The math that calculates the line must demonstrate the slope of the line, which is also dependent on the intercept, or where `Y` is situated when `X = 0`.
+>
+> You can observe the method of calculation for these values on the [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) web site. Also visit [this Least-squares calculator](https://www.mathsisfun.com/data/least-squares-calculator.html) to watch how the numbers' values impact the line.
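+
+As a quick illustration (not part of the original lesson), the slope `b` and intercept `a` of a least-squares line can be computed directly with NumPy's `polyfit`:
+
+```python
+import numpy as np
+
+# Made-up toy values purely for illustration: months on X, prices on Y
+X = np.array([9, 10, 11, 12])
+Y = np.array([15.0, 14.0, 13.5, 12.0])
+
+# Fit Y = a + bX by least squares; polyfit returns the highest-degree coefficient first
+b, a = np.polyfit(X, Y, deg=1)
+print(f'slope b = {b:.3f}, intercept a = {a:.3f}')
+```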
+
+## Correlation
+
+One more term to understand is the **Correlation Coefficient** between given X and Y variables. Using a scatterplot, you can quickly visualize this coefficient. A plot with datapoints scattered in a neat line has high correlation, but a plot with datapoints scattered everywhere between X and Y has low correlation.
+
+A good linear regression model will be one that has a high (nearer to 1 than 0) Correlation Coefficient using the Least-Squares Regression method with a line of regression.
+
+✅ Run the notebook accompanying this lesson and look at the Month to Price scatterplot. Does the data associating Month to Price for pumpkin sales seem to have high or low correlation, according to your visual interpretation of the scatterplot? Does that change if you use a more fine-grained measure instead of `Month`, e.g. *day of the year* (i.e. the number of days since the beginning of the year)?
+
+In the code below, we will assume that we have cleaned up the data, and obtained a data frame called `new_pumpkins`, similar to the following:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> The code to clean the data is available in [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). We have performed the same cleaning steps as in the previous lesson, and have calculated the `DayOfYear` column using the following expression:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Now that you have an understanding of the math behind linear regression, let's create a regression model to see if we can predict which package of pumpkins will have the best pumpkin prices. Someone buying pumpkins for a holiday pumpkin patch might want this information to be able to optimize their purchases of pumpkin packages for the patch.
+
+## Looking for Correlation
+
+[](https://youtu.be/uoRq-lW2eQo "ML for beginners - Looking for Correlation: The Key to Linear Regression")
+
+> 🎥 Click the image above for a short video overview of correlation.
+
+From the previous lesson you have probably seen that the average price for different months looks like this:
+
+
+
+This suggests that there should be some correlation, and we can try training a linear regression model to predict the relationship between `Month` and `Price`, or between `DayOfYear` and `Price`. Here is the scatter plot that shows the latter relationship:
+
+
+
+Let's see if there is a correlation using the `corr` function:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+It looks like the correlation is pretty small, -0.15 by `Month` and -0.17 by `DayOfYear`, but there could be another important relationship. It looks like there are different clusters of prices corresponding to different pumpkin varieties. To confirm this hypothesis, let's plot each pumpkin category using a different color. By passing an `ax` parameter to the `scatter` plotting function, we can plot all points on the same graph:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+Our investigation suggests that variety has more effect on the overall price than the actual selling date. We can see this with a bar graph:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Let's focus for the moment only on one pumpkin variety, the 'pie type', and see what effect the date has on the price:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+
+If we now calculate the correlation between `Price` and `DayOfYear` using the `corr` function, we will get something like `-0.27` - which means that training a predictive model makes sense.
+
+> Before training a linear regression model, it is important to make sure that our data is clean. Linear regression does not work well with missing values, thus it makes sense to get rid of all empty cells:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Another approach would be to fill those empty values with the mean value of the corresponding column.
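+
+A minimal sketch of that alternative, using pandas' `fillna` (assuming the same `pie_pumpkins` dataframe):
+
+```python
+# Impute missing prices with the column mean instead of dropping rows
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```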
+
+## Simple Linear Regression
+
+[](https://youtu.be/e4c_UP2fSjg "ML for beginners - Linear and Polynomial Regression using Scikit-learn")
+
+> 🎥 Click the image above for a short video overview of linear and polynomial regression.
+
+To train our Linear Regression model, we will use the **Scikit-learn** library.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+We start by separating input values (features) and the expected output (label) into separate numpy arrays:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Note that we had to perform a `reshape` on the input data in order for the Linear Regression package to understand it correctly. Linear Regression expects a 2D array as input, where each row of the array corresponds to a vector of input features. In our case, since we have only one input, we need an array of shape N×1, where N is the dataset size.
+
+Then, we need to split the data into training and test datasets, so that we can validate our model after training:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Finally, training the actual Linear Regression model takes only two lines of code. We define the `LinearRegression` object, and fit it to our data using the `fit` method:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+The `LinearRegression` object after `fit`-ting contains all the coefficients of the regression, which can be accessed using the `.coef_` property. In our case, there is just one coefficient, which should be around `-0.017`. It means that prices seem to drop a bit with time, but not by much: around 2 cents per day. We can also access the intersection point of the regression line with the Y-axis using `lin_reg.intercept_` - it will be around `21` in our case, indicating the price at the beginning of the year.
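+
+A minimal sketch to inspect those values on the fitted model (both are standard Scikit-learn attributes):
+
+```python
+# Slope (price change per day) and Y-intercept (modeled price at day 0)
+print('slope:', lin_reg.coef_)          # expect something around -0.017
+print('intercept:', lin_reg.intercept_) # expect something around 21
+```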
+
+To see how accurate our model is, we can predict prices on the test dataset, and then measure how close our predictions are to the expected values. This can be done using the mean square error (MSE) metric, which is the mean of all squared differences between the expected and predicted values.
+
+```python
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Our error seems to be around 2 points, which is ~17%. Not too good. Another indicator of model quality is the **coefficient of determination**, which can be obtained like this:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+If the value is 0, it means that the model does not take the input data into account, and acts as the *worst linear predictor*, which is simply the mean value of the result. A value of 1 means that we can perfectly predict all expected outputs. In our case, the coefficient is around 0.06, which is quite low.
+
+We can also plot the test data together with the regression line to better see how the regression works in our case:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Polynomial Regression
+
+Another type of Linear Regression is Polynomial Regression. While sometimes there's a linear relationship between variables - the bigger the pumpkin in volume, the higher the price - sometimes these relationships can't be plotted as a plane or straight line.
+
+✅ Here are [some more examples](https://online.stat.psu.edu/stat501/lesson/9/9.8) of data that could use Polynomial Regression
+
+Take another look at the relationship between Date and Price. Does this scatterplot seem like it should necessarily be analyzed by a straight line? Can't prices fluctuate? In this case, you can try polynomial regression.
+
+✅ Polynomials are mathematical expressions that might consist of one or more variables and coefficients
+
+Polynomial regression creates a curved line to better fit nonlinear data. In our case, if we include a squared `DayOfYear` variable in the input data, we should be able to fit our data with a parabolic curve, which will have a minimum at a certain point within the year.
+
+Scikit-learn includes a helpful [pipeline API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) to combine different steps of data processing together. A **pipeline** is a chain of **estimators**. In our case, we will create a pipeline that first adds polynomial features to our model, and then trains the regression:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+Using `PolynomialFeatures(2)` means that we will include all second-degree polynomials from the input data. In our case it will just mean `DayOfYear`², but given two input variables X and Y, this would add X², XY and Y². We may also use higher-degree polynomials if we want.
+
+Pipelines can be used in the same manner as the original `LinearRegression` object, i.e. we can `fit` the pipeline, and then use `predict` to get the prediction results. Here is the graph showing test data, and the approximation curve:
+
+
+
+Using Polynomial Regression, we can get slightly lower MSE and higher determination, but not significantly. We need to take into account other features!
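+
+A minimal sketch of that comparison, reusing the error metrics from above on the fitted pipeline (assuming the same train/test split):
+
+```python
+# Evaluate the polynomial pipeline the same way as the plain linear model
+pred = pipeline.predict(X_test)
+mse = np.sqrt(mean_squared_error(y_test, pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+print('Model determination:', pipeline.score(X_train, y_train))
+```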
+
+> You can see that the minimal pumpkin prices are observed somewhere around Halloween. How can you explain this?
+
+🎃 Congratulations, you just created a model that can help predict the price of pie pumpkins. You can probably repeat the same procedure for all pumpkin types, but that would be tedious. Let's learn now how to take pumpkin variety into account in our model!
+
+## Categorical Features
+
+In the ideal world, we want to be able to predict prices for different pumpkin varieties using the same model. However, the `Variety` column is somewhat different from columns like `Month`, because it contains non-numeric values. Such columns are called **categorical**.
+
+[](https://youtu.be/DYGliioIAE0 "ML for beginners - Categorical Feature Predictions with Linear Regression")
+
+> 🎥 Click the image above for a short video overview of using categorical features.
+
+Here you can see how average price depends on variety:
+
+
+
+To take variety into account, we first need to convert it to numeric form, or **encode** it. There are several ways we can do it:
+
+* Simple **numeric encoding** will build a table of different varieties, and then replace the variety name by an index in that table. This is not the best idea for linear regression, because linear regression takes the actual numeric value of the index, and adds it to the result, multiplying by some coefficient. In our case, the relationship between the index number and the price is clearly non-linear, even if we make sure that indices are ordered in some specific way.
+* **One-hot encoding** will replace the `Variety` column with 4 different columns, one for each variety. Each column will contain `1` if the corresponding row is of a given variety, and `0` otherwise. This means that there will be four coefficients in the linear regression, one for each pumpkin variety, responsible for the "starting price" (or rather the "additional price") for that particular variety.
+
+The code below shows how we can one-hot encode a variety:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+To train linear regression using the one-hot encoded variety as input, we just need to initialize the `X` and `y` data correctly:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+The rest of the code is the same as what we used above to train Linear Regression. If you try it, you will see that the mean squared error is about the same, but we get a much higher coefficient of determination (~77%). To get even more accurate predictions, we can take more categorical features into account, as well as numeric features, such as `Month` or `DayOfYear`. To get one large array of features, we can use `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Here we also take into account the `City` and `Package` type, which gives us MSE 2.84 (10%) and determination 0.94!
+
+## Putting it all together
+
+To make the best model, we can use the combined (one-hot encoded categorical + numeric) data from the example above together with Polynomial Regression. Here is the complete code for your convenience:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+This should give us the best determination coefficient of almost 97%, and MSE=2.23 (~8% prediction error).
+
+| Model | MSE | Determination |
+|-------|-----|---------------|
+| `DayOfYear` Linear | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` Polynomial | 2.73 (17.0%) | 0.08 |
+| `Variety` Linear | 5.24 (19.7%) | 0.77 |
+| All features Linear | 2.84 (10.5%) | 0.94 |
+| All features Polynomial | 2.23 (8.25%) | 0.97 |
+
+🏆 Well done! You created four regression models in one lesson, and improved the model quality to 97%. In the final section on Regression, you will learn about Logistic Regression to determine categories.
+
+---
+## 🚀Challenge
+
+Test several different variables in this notebook to see how correlation corresponds to model accuracy.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## Review & Self Study
+
+In this lesson we learned about Linear Regression. There are other important types of regression. Read about Stepwise, Ridge, Lasso and Elasticnet techniques. A good course to study to learn more is the [Stanford Statistical Learning course](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning)
+
+## Assignment
+
+[Build a Model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/3-Linear/assignment.md b/translations/ms/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..2dcc92bb9
--- /dev/null
+++ b/translations/ms/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Create a Regression Model
+
+## Instructions
+
+In this lesson you were shown how to build a model using both Linear and Polynomial Regression. Using this knowledge, find a dataset or use one of Scikit-learn's built-in sets to build a fresh model. Explain in your notebook why you chose the technique you did, and demonstrate your model's accuracy. If it is not accurate, explain why.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------- | -------------------------- | ------------------------------ |
+| | presents a complete notebook with a well-documented solution | the solution is incomplete | the solution is flawed or buggy |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/3-Linear/solution/Julia/README.md b/translations/ms/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..af182734c
--- /dev/null
+++ b/translations/ms/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/4-Logistic/README.md b/translations/ms/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..57d113c13
--- /dev/null
+++ b/translations/ms/2-Regression/4-Logistic/README.md
@@ -0,0 +1,392 @@
+# Logistic regression to predict categories
+
+
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Introduction
+
+In this final lesson on Regression, one of the basic _classic_ ML techniques, we will take a look at Logistic Regression. You would use this technique to discover patterns to predict binary categories. Is this candy chocolate or not? Is this disease contagious or not? Will this customer choose this product or not?
+
+In this lesson, you will learn:
+
+- A new library for data visualization
+- Techniques for logistic regression
+
+✅ Deepen your understanding of working with this type of regression in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)
+
+## Prerequisite
+
+Having worked with the pumpkin data, we are now familiar enough with it to realize that there's one binary category that we can work with: `Color`.
+
+Let's build a logistic regression model to predict, given some variables, _what color a given pumpkin is likely to be_ (orange 🎃 or white 👻).
+
+> Why are we talking about binary classification in a lesson grouping about regression? Only for linguistic convenience, as logistic regression is [really a classification method](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), albeit a linear-based one. Learn about other ways to classify data in the next lesson group.
+
+## Define the question
+
+For our purposes, we will express this as a binary: 'White' or 'Not White'. There is also a 'striped' category in our dataset, but there are few instances of it, so we will not use it. It disappears once we remove null values from the dataset, anyway.
+
+> 🎃 Fun fact: we sometimes call white pumpkins 'ghost' pumpkins. They aren't very easy to carve, so they aren't as popular as the orange ones, but they look cool! So we could also reformulate our question as: 'Ghost' or 'Not Ghost'. 👻
+
+## About logistic regression
+
+Logistic regression differs from linear regression, which you learned about previously, in a few important ways.
+
+[](https://youtu.be/KpeCT6nEpBY "ML for beginners - Understanding Logistic Regression for Machine Learning Classification")
+
+> 🎥 Click the image above for a short video overview of logistic regression.
+
+### Binary classification
+
+Logistic regression does not offer the same features as linear regression. The former offers a prediction about a binary category ("white or not white") whereas the latter is capable of predicting continual values, for example, given the origin of a pumpkin and the time of harvest, _how much its price will rise_.
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### Other classifications
+
+There are other types of logistic regression, including multinomial and ordinal:
+
+- **Multinomial**, which involves having more than one category - "Orange, White, and Striped".
+- **Ordinal**, which involves ordered categories, useful if we wanted to order our outcomes logically, like our pumpkins that are ordered by a finite number of sizes (mini, sm, med, lg, xl, xxl).
+
+
+
+### Variables DO NOT have to correlate
+
+Remember how linear regression worked better with more correlated variables? Logistic regression is the opposite - the variables don't have to align. That works for this data, which has somewhat weak correlations.
+
+### You need a lot of clean data
+
+Logistic regression will give more accurate results if you use more data; our small dataset is not optimal for this task, so keep that in mind.
+
+[](https://youtu.be/B2X4H9vcXTs "ML for beginners - Data Analysis and Preparation for Logistic Regression")
+
+> 🎥 Click the image above for a short video overview of preparing data for logistic regression
+
+✅ Think about the types of data that would lend themselves well to logistic regression
+
+## Exercise - tidy the data
+
+First, clean the data a bit, dropping null values and selecting only some of the columns:
+
+1. Add the following code:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+   You can always take a peek at your new dataframe:
+
+ ```python
+   pumpkins.info()
+ ```
+
+### Visualization - categorical plot
+
+By now you have loaded up the [starter notebook](../../../../2-Regression/4-Logistic/notebook.ipynb) with pumpkin data once again and cleaned it so as to preserve a dataset containing a few variables, including `Color`. Let's visualize the dataframe in the notebook using a different library: [Seaborn](https://seaborn.pydata.org/index.html), which is built on the Matplotlib we used earlier.
+
+Seaborn offers some neat ways to visualize your data. For example, you can compare distributions of the data for each `Variety` and `Color` in a categorical plot.
+
+1. Create such a plot by using the `catplot` function, using our pumpkin data `pumpkins`, and specifying a color mapping for each pumpkin category (orange or white):
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+   By observing the data, you can see how the Color data relates to Variety.
+
+   ✅ Given this categorical plot, what are some interesting explorations you can envision?
+
+### Data pre-processing: feature and label encoding
+
+Our pumpkins dataset contains string values for all its columns. Working with categorical data is intuitive for humans but not for machines. Machine learning algorithms work well with numbers. That's why encoding is a very important step in the data pre-processing phase, since it enables us to turn categorical data into numerical data without losing any information. Good encoding leads to building a good model.
+
+For feature encoding there are two main types of encoders:
+
+1. Ordinal encoder: it suits ordinal variables well, which are categorical variables whose data follows a logical ordering, like the `Item Size` column in our dataset. It creates a mapping such that each category is represented by a number, which is the order of the category in the column.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Categorical encoder: it suits nominal variables well, which are categorical variables whose data does not follow a logical ordering, like all the features other than `Item Size` in our dataset. It is a one-hot encoding, which means that each category is represented by a binary column: the encoded variable is equal to 1 if the pumpkin belongs to that Variety and 0 otherwise.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+
+Then, `ColumnTransformer` is used to combine multiple encoders into a single step and apply them to the appropriate columns.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+On the other hand, to encode the label we use the scikit-learn `LabelEncoder` class, which is a utility class to help normalize labels such that they contain only values between 0 and n_classes-1 (here, 0 and 1).
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
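+
+A quick way to inspect the mapping the encoder chose (a sketch; `classes_` is sorted alphabetically, so 'ORANGE' maps to 0 and 'WHITE' to 1 here):
+
+```python
+ print(dict(zip(label_encoder.classes_, label_encoder.transform(label_encoder.classes_))))
+ # {'ORANGE': 0, 'WHITE': 1}
+```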
+Once we have encoded the features and the label, we can merge them into a new dataframe `encoded_pumpkins`.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+✅ What are the advantages of using an ordinal encoder for the `Item Size` column?
+
+### Analyse relationships between variables
+
+Now that we have pre-processed our data, we can analyse the relationships between the features and the label to grasp an idea of how well the model will be able to predict the label given the features.
+The best way to perform this kind of analysis is plotting the data. We'll be using again the Seaborn `catplot` function, to visualize the relationships between `Item Size`, `Variety` and `Color` in a categorical plot. To better plot the data we'll be using the encoded `Item Size` column and the unencoded `Variety` column.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Use a swarm plot
+
+Since Color is a binary category (White or Not), it needs 'a [specialized approach](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar) to visualization'. There are other ways to visualize the relationship of this category with other variables.
+
+You can visualize variables side-by-side with Seaborn plots.
+
+1. Try a 'swarm' plot to show the distribution of values:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Watch out**: the code above might generate a warning, since seaborn fails to represent this amount of datapoints in a swarm plot. A possible solution is decreasing the size of the marker, by using the 'size' parameter. However, be aware that this affects the readability of the plot.
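+
+For example, a sketch with a smaller marker (the value 2 is an arbitrary choice; tune it for your plot):
+
+```python
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette, size=2)
+```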
+
+
+> **🧮 Show Me The Math**
+>
+> Logistic regression relies on the concept of 'maximum likelihood' using [sigmoid functions](https://wikipedia.org/wiki/Sigmoid_function). A 'Sigmoid Function' on a plot looks like an 'S' shape. It takes a value and maps it to somewhere between 0 and 1. Its curve is also called a 'logistic curve'. Its formula looks like this:
+>
+> f(x) = L / (1 + e^(-k(x - x₀)))
+>
+> where the sigmoid's midpoint finds itself at x's 0 point (x₀), L is the curve's maximum value, and k is the curve's steepness. If the outcome of the function is more than 0.5, the label in question will be given the class '1' of the binary choice. If not, it will be classified as '0'.
+
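+A minimal numeric sketch of that thresholding rule (with L = 1, k = 1, x₀ = 0, the standard logistic function):
+
+```python
+ import numpy as np
+
+ def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+     # General logistic function: maps any real x into (0, L)
+     return L / (1 + np.exp(-k * (x - x0)))
+
+ probs = sigmoid(np.array([-3.0, -0.5, 0.0, 2.0]))
+ print(probs.round(3))             # [0.047 0.378 0.5   0.881]
+ print((probs > 0.5).astype(int))  # [0 0 0 1]
+```
+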
+## Build your model
+
+Building a model to find these binary classifications is surprisingly straightforward in Scikit-learn.
+
+[](https://youtu.be/MmZS2otPrQ8 "ML for beginners - Logistic Regression for classification of data")
+
+> 🎥 Click the image above for a short video overview of building a logistic regression model
+
+1. Select the variables you want to use in your classification model and split the training and test sets calling `train_test_split()`:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Now you can train your model, calling `fit()` with your training data, and print out its result:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ Take a look at your model's scoreboard. It's not bad, considering you have only about 1000 rows of data:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Better comprehension via a confusion matrix
+
+While you can get a scoreboard report of [terms](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) by printing out the items above, you might be able to understand your model more easily by using a [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix) to help us understand how the model is performing.
+
+> 🎓 A '[confusion matrix](https://wikipedia.org/wiki/Confusion_matrix)' (or 'error matrix') is a table that expresses your model's true vs. false positives and negatives, thus gauging the accuracy of its predictions.
+
+1. To use a confusion matrix, call `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ Take a look at your model's confusion matrix:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+In Scikit-learn, confusion matrix rows (axis 0) are the actual labels and columns (axis 1) are the predicted labels.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+What's going on here? Let's say our model is asked to classify pumpkins between two binary categories, category 'white' and category 'not-white'.
+
+- If your model predicts a pumpkin as not white and it belongs to category 'not-white' in reality, we call it a true negative, shown by the top left number.
+- If your model predicts a pumpkin as white but it belongs to category 'not-white' in reality, we call it a false positive, shown by the top right number.
+- If your model predicts a pumpkin as not white but it belongs to category 'white' in reality, we call it a false negative, shown by the bottom left number.
+- If your model predicts a pumpkin as white and it belongs to category 'white' in reality, we call it a true positive, shown by the bottom right number.
+
+As you might have guessed, it's preferable to have a larger number of true positives and true negatives and a lower number of false positives and false negatives, which implies that the model performs better.
+
+How does the confusion matrix relate to precision and recall? Remember, the classification report printed above showed precision (0.85) and recall (0.67).
+
+Precision = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+Recall = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
+
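+The same numbers can be recovered directly from the matrix (a quick sketch reusing `confusion_matrix` from above; `ravel()` flattens the 2x2 matrix in the order tn, fp, fn, tp):
+
+```python
+ tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
+ precision = tp / (tp + fp)                          # 22 / 26 ≈ 0.846
+ recall = tp / (tp + fn)                             # 22 / 33 ≈ 0.667
+ f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.746, as printed above
+```
+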
+✅ Q: According to the confusion matrix, how did the model do? A: Not bad; there are a good number of true negatives but also a few false negatives.
+
+Let's revisit the terms we saw earlier with the help of the confusion matrix's mapping of TP/TN and FP/FN:
+
+🎓 Precision: TP/(TP + FP) The fraction of relevant instances among the retrieved instances (e.g. which labels were well-labeled)
+
+🎓 Recall: TP/(TP + FN) The fraction of relevant instances that were retrieved, whether well-labeled or not
+
+🎓 f1-score: (2 * precision * recall)/(precision + recall) A weighted average of the precision and recall, with best being 1 and worst being 0
+
+🎓 Support: The number of occurrences of each label retrieved
+
+🎓 Accuracy: (TP + TN)/(TP + TN + FP + FN) The percentage of labels predicted accurately for a sample.
+
+🎓 Macro Avg: The calculation of the unweighted mean metrics for each label, not taking label imbalance into account.
+
+🎓 Weighted Avg: The calculation of the mean metrics for each label, taking label imbalance into account by weighting them by their support (the number of true instances for each label).
+
+✅ Can you think which metric you should watch if you want your model to reduce the number of false negatives?
+
+## Visualize the ROC curve of this model
+
+[](https://youtu.be/GApO575jTA0 "ML for beginners - Analyzing Logistic Regression Performance with ROC Curves")
+
+> 🎥 Click the image above for a short video overview of ROC curves
+
+Let's do one more visualization to see the so-called 'ROC' curve:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+Using Matplotlib, plot the model's [Receiver Operating Characteristic](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html?highlight=roc) or ROC. ROC curves are often used to get a view of a classifier's output in terms of its true vs. false positives. "ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis." Thus the steepness of the curve and the space between the midpoint line and the curve matter: you want a curve that quickly heads up and over the line. In our case, there are false positives to start with, and then the line heads up and over properly:
+
+
+
+Finally, use Scikit-learn's [`roc_auc_score` API](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html?highlight=roc_auc#sklearn.metrics.roc_auc_score) to compute the actual 'Area Under the Curve' (AUC):
+
+```python
+auc = roc_auc_score(y_test,y_scores[:,1])
+print(auc)
+```
+The result is `0.9749908725812341`. Given that the AUC ranges from 0 to 1, you want a big score, since a model that is 100% correct in its predictions will have an AUC of 1; in this case, the model is _pretty good_.
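+
+As a sanity check (a sketch; `np.trapz`, renamed `np.trapezoid` in NumPy 2.x, integrates the ROC points by the trapezoidal rule, which is how the AUC is computed for a finite set of thresholds):
+
+```python
+import numpy as np
+print(np.trapz(tpr, fpr))  # should match roc_auc_score above
+```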
+
+In future lessons on classification, you will learn how to iterate to improve your model's scores. But for now, congratulations! You've completed these regression lessons!
+
+---
+## 🚀Challenge
+
+There's a lot more to unpack regarding logistic regression! But the best way to learn is to experiment. Find a dataset that lends itself to this type of analysis and build a model with it. What do you learn? Tip: try [Kaggle](https://www.kaggle.com/search?q=logistic+regression+datasets) for interesting datasets.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/16/)
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/4-Logistic/assignment.md b/translations/ms/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..756cd929c
--- /dev/null
+++ b/translations/ms/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# Retrying some Regression
+
+## Instructions
+
+In the lesson, you used a subset of the pumpkin data. Now, go back to the original data and try to use all of it, cleaned and standardized, to build a Logistic Regression model.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------- | ----------------------------------------------------------- | ----------------------------------------------------------- |
+| | A notebook is presented with a well-explained and well-performing model | A notebook is presented with a model that performs minimally | A notebook is presented with a sub-performing model or none |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/4-Logistic/solution/Julia/README.md b/translations/ms/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..84c037866
--- /dev/null
+++ b/translations/ms/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/2-Regression/README.md b/translations/ms/2-Regression/README.md
new file mode 100644
index 000000000..3305cda17
--- /dev/null
+++ b/translations/ms/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Regression models for machine learning
+## Regional topic: Regression models for pumpkin prices in North America 🎃
+
+In North America, pumpkins are often carved into scary faces for Halloween. Let's discover more about these fascinating vegetables!
+
+
+> Photo by Beth Teutschmann on Unsplash
+
+## What you will learn
+
+[](https://youtu.be/5QnJtDad4iQ "Regression Introduction video - Click to Watch!")
+> 🎥 Click the image above for a quick introduction video to this lesson
+
+The lessons in this section cover types of regression in the context of machine learning. Regression models can help determine the _relationship_ between variables. This type of model can predict values such as length, temperature, or age, thus uncovering relationships between variables as it analyzes data points.
+
+In this series of lessons, you'll discover the differences between linear and logistic regression, and when you should prefer one over the other.
+
+[](https://youtu.be/XA3OaoW86R8 "ML for beginners - Introduction to Regression models for Machine Learning")
+
+> 🎥 Click the image above for a short video introducing regression models.
+
+In this group of lessons, you will get set up to begin machine learning tasks, including configuring Visual Studio Code to manage notebooks, the common environment for data scientists. You will discover Scikit-learn, a library for machine learning, and you will build your first models, focusing on Regression models in this chapter.
+
+> There are useful low-code tools that can help you learn about working with regression models. Try [Azure ML for this task](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Lessons
+
+1. [Tools of the trade](1-Tools/README.md)
+2. [Managing data](2-Data/README.md)
+3. [Linear and polynomial regression](3-Linear/README.md)
+4. [Logistic regression](4-Logistic/README.md)
+
+---
+### Credits
+
+"ML with regression" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper)
+
+♥️ Quiz contributors include: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) and [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+The pumpkin dataset was suggested by [this project on Kaggle](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) and its data is sourced from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture. We have added some points around color based on variety to normalize the distribution. This data is in the public domain.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/3-Web-App/1-Web-App/README.md b/translations/ms/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..a9da95685
--- /dev/null
+++ b/translations/ms/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Build a Web App to use an ML Model
+
+In this lesson, you will train an ML model on a data set that's out of this world: _UFO sightings over the past century_, sourced from the NUFORC database.
+
+You will learn:
+
+- How to 'pickle' a trained model
+- How to use that model in a Flask app
+
+We will continue our use of notebooks to clean data and train our model, but you can take the process one step further by exploring using a model 'in the wild', so to speak: in a web app.
+
+To do this, you need to build a web app using Flask.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Building an app
+
+There are several ways to build web apps to consume machine learning models. Your web architecture may influence the way your model is trained. Imagine that you are working in a business where the data science group has trained a model that they want you to use in an app.
+
+### Considerations
+
+There are many questions you need to ask:
+
+- **Is it a web app or a mobile app?** If you are building a mobile app or need to use the model in an IoT context, you could use [TensorFlow Lite](https://www.tensorflow.org/lite/) and use the model in an Android or iOS app.
+- **Where will the model reside?** In the cloud or locally?
+- **Offline support.** Does the app have to work offline?
+- **What technology was used to train the model?** The chosen technology may influence the tooling you need to use.
+ - **Using TensorFlow.** If you are training a model using TensorFlow, for example, that ecosystem provides the ability to convert a TensorFlow model for use in a web app by using [TensorFlow.js](https://www.tensorflow.org/js/).
+ - **Using PyTorch.** If you are building a model using a library such as [PyTorch](https://pytorch.org/), you have the option to export it in [ONNX](https://onnx.ai/) (Open Neural Network Exchange) format for use in JavaScript web apps that can use the [Onnx Runtime](https://www.onnxruntime.ai/). This option will be explored in a future lesson for a Scikit-learn-trained model.
+ - **Using Lobe.ai or Azure Custom Vision.** If you are using an ML SaaS (Software as a Service) system such as [Lobe.ai](https://lobe.ai/) or [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) to train a model, this type of software provides ways to export the model for many platforms, including building a bespoke API to be queried in the cloud by your online application.
+
+You also have the opportunity to build an entire Flask web app that would be able to train the model itself in a web browser. This can also be done using TensorFlow.js in a JavaScript context.
+
+For our purposes, since we have been working with Python-based notebooks, let's explore the steps you need to take to export a trained model from such a notebook to a format readable by a Python-built web app.
+
+## Tools
+
+For this task, you need two tools: Flask and Pickle, both of which run on Python.
+
+✅ What's [Flask](https://palletsprojects.com/p/flask/)? Defined as a 'micro-framework' by its creators, Flask provides the basic features of web frameworks using Python and a templating engine to build web pages. Take a look at [this Learn module](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) to practice building with Flask.
+
+✅ What's [Pickle](https://docs.python.org/3/library/pickle.html)? Pickle 🥒 is a Python module that serializes and de-serializes a Python object structure. When you 'pickle' a model, you serialize or flatten its structure for use on the web. Be careful: pickle is not intrinsically secure, so be careful if prompted to 'un-pickle' a file. A pickled file has the suffix `.pkl`.
+
+## Exercise - clean your data
+
+In this lesson you'll use data from 80,000 UFO sightings, gathered by [NUFORC](https://nuforc.org) (The National UFO Reporting Center). This data has some interesting descriptions of UFO sightings, for example:
+
+- **Long example description.** "A man emerges from a beam of light that shines on a grassy field at night and he runs towards the Texas Instruments parking lot".
+- **Short example description.** "the lights chased us".
+
+The [ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) spreadsheet includes columns about the `city`, `state` and `country` where the sighting occurred, the object's `shape` and its `latitude` and `longitude`.
+
+In the blank [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) included in this lesson:
+
+1. import `pandas`, `matplotlib`, and `numpy` as you did in previous lessons and import the ufos spreadsheet. You can take a look at a sample data set:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. Convert the ufos data to a small dataframe with fresh titles. Check the unique values in the `Country` field.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Now, you can reduce the amount of data we need to deal with by dropping any null values and only importing sightings between 1-60 seconds:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Import Scikit-learn's `LabelEncoder` library to convert the text values for countries to a number:
+
+ ✅ LabelEncoder encodes data alphabetically
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+ Your data should look like this:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
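+
+ A quick illustration of that alphabetical mapping (a sketch using the country codes that appear in this dataset; it is why `3` in the output above corresponds to 'gb'):
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ enc = LabelEncoder().fit(['us', 'gb', 'ca', 'au', 'de'])
+ print(dict(zip(enc.classes_, enc.transform(enc.classes_))))
+ # {'au': 0, 'ca': 1, 'de': 2, 'gb': 3, 'us': 4}
+ ```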
+
+## Exercise - build your model
+
+Now you can get ready to train a model by dividing the data into the training and testing groups.
+
+1. Select the three features you want to train on as your X vector, and the y vector will be the `Country`. You want to be able to input `Seconds`, `Latitude` and `Longitude` and get a country id to return.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Train your model using logistic regression:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+The accuracy isn't bad **(around 95%)**, unsurprisingly, as `Country` and `Latitude/Longitude` correlate.
+
+The model you created isn't very revolutionary, as you should be able to infer a `Country` from its `Latitude` and `Longitude`, but it's a good exercise to try to train from raw data that you cleaned, exported, and then use this model in a web app.
+
+## Exercise - 'pickle' your model
+
+Now, it's time to _pickle_ your model! You can do that in a few lines of code. Once it's _pickled_, load your pickled model and test it against a sample data array containing values for seconds, latitude and longitude,
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+The model returns **'3'**, which is the country code for the UK. Wild! 👽
+
+## Exercise - build a Flask app
+
+Now you can build a Flask app to call your model and return similar results, but in a more visually pleasing way.
+
+1. Start by creating a folder called **web-app** next to the _notebook.ipynb_ file where your _ufo-model.pkl_ file resides.
+
+1. In that folder create three more folders: **static**, with a folder **css** inside it, and **templates**. You should now have the following files and directories:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+ ✅ Refer to the solution folder for a view of the finished app
+
+1. The first file to create in the _web-app_ folder is the **requirements.txt** file. Like _package.json_ in a JavaScript app, this file lists the dependencies required by the app. In **requirements.txt** add the lines:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Now, run this file by navigating to _web-app_:
+
+ ```bash
+ cd web-app
+ ```
+
+1. In your terminal type `pip install`, to install the libraries listed in _requirements.txt_:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Now, you're ready to create three more files to finish the app:
+
+ 1. Create **app.py** in the root.
+ 2. Create **index.html** in the _templates_ directory.
+ 3. Create **styles.css** in the _static/css_ directory.
+
+1. Build out the _styles.css_ file with a few styles:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Next, build out the _index.html_ file:
+
+ ```html
+ <!DOCTYPE html>
+ <html>
+ <head>
+   <meta charset="UTF-8">
+   <title>🛸 UFO Appearance Prediction! 👽</title>
+   <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+ </head>
+
+ <body>
+  <div class="grid">
+
+   <div class="box">
+
+    <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+    <form action="{{ url_for('predict') }}" method="post">
+     <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+     <input type="text" name="latitude" placeholder="Latitude" required="required" />
+     <input type="text" name="longitude" placeholder="Longitude" required="required" />
+     <button type="submit" class="btn">Predict country where the UFO is seen</button>
+    </form>
+
+    <p>{{ prediction_text }}</p>
+
+   </div>
+  </div>
+
+ </body>
+ </html>
+ ```
+
+ Take a look at the templating in this file. Notice the 'mustache' syntax around variables that will be provided by the app, like the prediction text: `{{}}`. There's also a form that posts a prediction to the `/predict` route.
+
+ Finally, you're ready to build the Python file that drives the consumption of the model and the display of predictions:
+
+1. In `app.py` add:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+ > 💡 Tip: when you add [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) while running the web app using Flask, any changes you make to your application will be reflected immediately without the need to restart the server. Beware! Don't enable this mode in a production app.
+
+If you run `python app.py` or `python3 app.py` - your web server starts up, locally, and you can fill out a short form to get an answer to your burning question about where UFOs have been sighted!
+
+Before doing that, take a look at the parts of `app.py`:
+
+1. First, dependencies are loaded and the app starts.
+1. Then, the model is imported.
+1. Then, index.html is rendered on the home route.
+
+On the `/predict` route, several things happen when the form is posted:
+
+1. The form variables are gathered and converted to a numpy array. They are then sent to the model and a prediction is returned.
+2. The countries that we want displayed are re-rendered as readable text from their predicted country code, and that value is sent back to index.html to be rendered in the template.
+
+Using a model this way, with Flask and a pickled model, is relatively straightforward. The hardest thing is to understand what shape the data is that must be sent to the model to get a prediction. That all depends on how the model was trained. This one has three data points to be input in order to get a prediction.
+
+In a professional setting, you can see how good communication is necessary between the folks who train the model and those who consume it in a web or mobile app. In our case, it's only one person, you!
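+
+A quick way to smoke-test the `/predict` route without the browser (a sketch; it assumes the app is running locally on Flask's default port 5000, that the form fields are named `seconds`, `latitude` and `longitude`, and that the `requests` package is installed):
+
+```python
+import requests
+
+resp = requests.post(
+    "http://127.0.0.1:5000/predict",
+    data={"seconds": 50, "latitude": 44, "longitude": -12},
+)
+print(resp.status_code)   # 200 if the template rendered
+print("UK" in resp.text)  # True if the model predicted country code 3
+```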
+
+---
+
+## 🚀 Challenge
+
+Instead of working in a notebook and importing the model to the Flask app, you could train the model right within the Flask app! Try converting your Python code in the notebook, perhaps after your data is cleaned, to train the model from within the app on a route called `train`. What are the pros and cons of pursuing this method?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Review & Self Study
+
+There are many ways to build a web app to consume ML models. Make a list of the ways you could use JavaScript or Python to build a web app that leverages machine learning. Consider architecture: should the model stay in the app or live in the cloud? If the latter, how would you access it? Draw out an architectural model for an applied ML web solution.
+
+## Assignment
+
+[Try a different model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/3-Web-App/1-Web-App/assignment.md b/translations/ms/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..841fc6d8d
--- /dev/null
+++ b/translations/ms/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Try a different model
+
+## Instructions
+
+Now that you've built one web app using a trained Regression model, use one of the models from an earlier Regression lesson to redo this web app. You can keep the style or design it differently to reflect the pumpkin data. Be careful to change the inputs to reflect your model's training method.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------------------------- | --------------------------------------------------------- | --------------------------------------------------------- | -------------------------------------- |
+| | The web app runs as expected and is deployed to the cloud | The web app contains flaws or exhibits unexpected results | The web app does not function properly |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/3-Web-App/README.md b/translations/ms/3-Web-App/README.md
new file mode 100644
index 000000000..4eb3dbf14
--- /dev/null
+++ b/translations/ms/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# Build a web app to use your ML model
+
+In this section of the curriculum, you will be introduced to an applied ML topic: how to save your Scikit-learn model as a file that can be used to make predictions within a web application. Once the model is saved, you'll learn how to use it in a web app built in Flask. You'll first create a model using some data that's all about UFO sightings! Then, you'll build a web app that will allow you to input a number of seconds with a latitude and a longitude value to predict which country reported seeing a UFO.
+
+
+
+Photo by Michael Herren on Unsplash
+
+## Lessons
+
+1. [Build a Web App](1-Web-App/README.md)
+
+## Credits
+
+"Build a Web App" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper).
+
+♥️ The quizzes were written by Rohan Raj.
+
+The dataset is sourced from [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings).
+
+The web app architecture was suggested in part by [this article](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) and [this repo](https://github.com/abhinavsagar/machine-learning-deployment) by Abhinav Sagar.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/1-Introduction/README.md b/translations/ms/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..facebe8b3
--- /dev/null
+++ b/translations/ms/4-Classification/1-Introduction/README.md
@@ -0,0 +1,300 @@
+# Introduction to classification
+
+In these four lessons, you will explore a fundamental focus of classic machine learning - _classification_. We will walk through using various classification algorithms with a dataset about all the brilliant cuisines of Asia and India. Hope you're hungry!
+
+
+
+
+> Celebrate pan-Asian cuisines in these lessons! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+Classification is a form of [supervised learning](https://wikipedia.org/wiki/Supervised_learning) that bears a lot in common with regression techniques. If machine learning is all about predicting values or names for things by using datasets, then classification generally falls into two groups: _binary classification_ and _multiclass classification_.
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+Remember:
+
+- **Linear regression** helped you predict relationships between variables and make accurate predictions on where a new datapoint would fall in relationship to that line. So, you could predict _what price a pumpkin would be in September vs. December_, for example.
+- **Logistic regression** helped you discover "binary categories": at this price point, _is this pumpkin orange or not-orange_?
+
+Classification uses various algorithms to determine other ways of assigning a data point's label or class. Let's work with this cuisine data to see whether, by observing a group of ingredients, we can determine its cuisine of origin.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [This lesson is available in R!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### Introduction
+
+Classification is one of the fundamental activities of the machine learning researcher and data scientist. From basic classification of a binary value ("is this email spam or not?") to complex image classification and segmentation using computer vision, it's always useful to be able to sort data into classes and ask questions of it.
+
+To state the process in a more scientific way, your classification method creates a predictive model that enables you to map the relationship between input variables and output variables.
+
+
+
+
+> Binary vs. multiclass problems for classification algorithms to handle. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Before starting the process of cleaning our data, visualizing it, and prepping it for our ML tasks, let's learn a bit about the various ways machine learning can be leveraged to classify data.
+
+Derived from [statistics](https://wikipedia.org/wiki/Statistical_classification), classification using classic machine learning uses features, such as `smoker`, `weight`, and `age` to determine the _likelihood of developing X disease_. As a supervised learning technique similar to the regression exercises you performed earlier, your data is labeled and the ML algorithms use those labels to classify and predict classes (or 'features') of a dataset and assign them to a group or outcome.
+
+✅ Take a moment to imagine a dataset about cuisines. What would a multiclass model be able to answer? What would a binary model be able to answer? What if you wanted to determine whether a given cuisine was likely to use fenugreek? What if you wanted to see if, given a grocery bag full of star anise, artichokes, cauliflower, and horseradish, you could create a typical Indian dish?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 Click the image above for a video. The whole premise of the show 'Chopped' is the 'mystery basket' where chefs have to make a dish out of a random choice of ingredients. Surely an ML model would have helped!
+
+## Hello 'classifier'
+
+The question we want to ask of this cuisine dataset is actually a **multiclass question**, as we have several potential national cuisines to work with. Given a batch of ingredients, which of these many classes will the data fit?
+
+Scikit-learn offers several different algorithms to use to classify data, depending on the kind of problem you want to solve. In the next two lessons, you'll learn about several of these algorithms.
+
+## Exercise - clean and balance your data
+
+The first task at hand, before starting this project, is to clean and **balance** your data to get better results. Start with the blank _notebook.ipynb_ file in the root of this folder.
+
+The first thing to install is [imblearn](https://imbalanced-learn.org/stable/). This is a Scikit-learn package that will allow you to better balance the data (you will learn more about this task in a minute).
+
+1. To install `imblearn`, run `pip install`, like so:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. Import the packages you need to import your data and visualize it, and also import `SMOTE` from `imblearn`.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ Now you are set up to import the data next.
+
+1. The next task is to import the data:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+ Using `read_csv()` will read the content of the csv file _cuisines.csv_ and place it in the variable `df`.
+
+1. Check the data's shape:
+
+ ```python
+ df.head()
+ ```
+
+ The first five rows look like this:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. Get info about this data by calling `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ Your output resembles:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## Exercise - learning about cuisines
+
+Now the work starts to become more interesting. Let's discover the distribution of data, per cuisine
+
+1. Plot the data as bars by calling `barh()`:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ There are a finite number of cuisines, but the distribution of data is uneven. You can fix that! Before doing so, explore a little more.
+
+1. Find out how much data is available per cuisine and print it out:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ the output looks like so:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## Discovering ingredients
+
+Now you can dig deeper into the data and learn what the typical ingredients per cuisine are. You should clean out recurrent data that creates confusion between cuisines, so let's learn about this problem.
+
+1. Create a function `create_ingredient_df()` in Python to create an ingredient dataframe. This function will start by dropping unhelpful columns and sorting the ingredients by their count:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ Now you can use that function to get an idea of the top ten most popular ingredients by cuisine.
+
+1. Call `create_ingredient_df()` and plot it by calling `barh()`:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Do the same for the Japanese data:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now for the Chinese ingredients:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Plot the Indian ingredients:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Finally, plot the Korean ingredients:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now, drop the most common ingredients that create confusion between distinct cuisines, by calling `drop()`:
+
+ Everyone loves rice, garlic and ginger!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## Balance the dataset
+
+Now that you have cleaned the data, use [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - "Synthetic Minority Over-sampling Technique" - to balance it.
+
+1. Call `fit_resample()`; this strategy generates new samples by interpolation.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ By balancing your data, you'll have better results when classifying it. Think about a binary classification. If most of your data is one class, an ML model is going to predict that class more frequently, just because there is more data for it. Balancing the data takes any skewed data and helps remove this imbalance. (A small sketch of this resampling on a toy dataset follows at the end of this exercise.)
+
+1. Now you can check the numbers of labels per cuisine:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ Your output looks like this:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ The data is nice and clean, balanced, and very delicious!
+
+1. The last step is to save your balanced data, including labels and features, into a new dataframe that can be exported into a file:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. You can take one more look at the data using `transformed_df.head()` and `transformed_df.info()`. Save a copy of this data for use in future lessons:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ This fresh CSV can now be found in the root data folder.
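+
+As promised above, here is a minimal sketch of what SMOTE's oversampling does, on a synthetic imbalanced dataset (`make_classification` and the 90/10 split are illustrative assumptions):
+
+```python
+from collections import Counter
+from imblearn.over_sampling import SMOTE
+from sklearn.datasets import make_classification
+
+# Build a deliberately imbalanced two-class dataset
+X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)
+print('before:', Counter(y))
+
+# SMOTE interpolates new minority samples until the classes are even
+X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
+print('after: ', Counter(y_res))
+```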
+
+---
+
+## 🚀Challenge
+
+This curriculum contains several interesting datasets. Dig through the `data` folders and see if any contain datasets that would be appropriate for binary or multi-class classification. What questions would you ask of these datasets?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## Review & Self Study
+
+Explore SMOTE's API. What use cases is it best used for? What problems does it solve?
+
+## Assignment
+
+[Explore classification methods](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/1-Introduction/assignment.md b/translations/ms/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..b2e1a1865
--- /dev/null
+++ b/translations/ms/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Explore classification methods
+
+## Instructions
+
+In the [Scikit-learn documentation](https://scikit-learn.org/stable/supervised_learning.html) you'll find a large list of ways to classify data. Do a little scavenger hunt in these docs: your goal is to look for classification methods and match a dataset in this curriculum, a question you can ask of it, and a technique of classification. Create a spreadsheet or table in a .doc file and explain how the dataset would work with the classification algorithm.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | a document is presented overviewing 5 algorithms alongside a classification technique. The overview is well-explained and detailed. | a document is presented overviewing 3 algorithms alongside a classification technique. The overview is well-explained and detailed. | a document is presented overviewing fewer than three algorithms alongside a classification technique and the overview is neither well-explained nor detailed. |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/1-Introduction/solution/Julia/README.md b/translations/ms/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..84c037866
--- /dev/null
+++ b/translations/ms/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/2-Classifiers-1/README.md b/translations/ms/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..52e9a2593
--- /dev/null
+++ b/translations/ms/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,77 @@
+# Cuisine classifiers 1
+
+In this lesson, you will use the dataset you saved from the last lesson, full of balanced, clean data all about cuisines.
+
+You will use this dataset with a variety of classifiers to _predict a given national cuisine based on a group of ingredients_. While doing so, you'll learn more about some of the ways that algorithms can be leveraged for classification tasks.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+# Preparation
+
+Assuming you completed [Lesson 1](../1-Introduction/README.md), make sure that a _cleaned_cuisines.csv_ file exists in the root `/data` folder for these four lessons.
+
+## Exercise - predict a national cuisine
+
+1. Working in this lesson's _notebook.ipynb_ folder, import that file along with the Pandas library:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ The data looks like this:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. Now, import several more libraries:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Bahagikan koordinat X dan y kepada dua dataframe untuk latihan. `cuisine` boleh menjadi dataframe label:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ Ia akan kelihatan seperti ini:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. Buang lajur `Unnamed: 0` dan lajur `cuisine` dengan memanggil `drop()`. Simpan baki data sebagai ciri-ciri yang boleh dilatih:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ Ciri-ciri anda kelihatan seperti ini:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/2-Classifiers-1/assignment.md b/translations/ms/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..a6b896b64
--- /dev/null
+++ b/translations/ms/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# Mengkaji Penyelesai
+## Arahan
+
+Dalam pelajaran ini, anda telah mempelajari pelbagai penyelesai yang memadankan algoritma dengan proses pembelajaran mesin untuk mencipta model yang tepat. Teliti penyelesai yang disenaraikan dalam pelajaran dan pilih dua. Dengan kata-kata anda sendiri, bandingkan dan bezakan kedua-dua penyelesai ini. Apakah jenis masalah yang diselesaikannya? Bagaimanakah ia berfungsi dengan pelbagai struktur data? Mengapakah anda memilih satu berbanding yang lain?
+## Rubrik
+
+| Kriteria | Cemerlang | Memadai | Perlu Peningkatan |
+| -------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | Fail .doc disampaikan dengan dua perenggan, satu untuk setiap penyelesai, membandingkan mereka dengan teliti. | Fail .doc disampaikan dengan hanya satu perenggan | Tugasan tidak lengkap |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat penting, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/ms/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..4173b3b96
--- /dev/null
+++ b/translations/ms/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/3-Classifiers-2/README.md b/translations/ms/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..c6124bb51
--- /dev/null
+++ b/translations/ms/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# Pengelasan Masakan 2
+
+Dalam pelajaran pengelasan kedua ini, anda akan meneroka lebih banyak cara untuk mengelaskan data numerik. Anda juga akan belajar tentang kesan memilih satu pengelas berbanding yang lain.
+
+## [Kuiz Pra-Kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### Prasyarat
+
+Kami menganggap bahawa anda telah menyelesaikan pelajaran sebelumnya dan mempunyai dataset yang telah dibersihkan bernama _cleaned_cuisines.csv_ dalam folder `data` anda, di akar folder empat pelajaran ini.
+
+### Persiapan
+
+Kami telah memuatkan fail _notebook.ipynb_ anda dengan dataset yang dibersihkan dan telah membahagikannya kepada dataframe X dan y, siap untuk proses pembinaan model.
+
+## Peta pengelasan
+
+Sebelumnya, anda telah belajar tentang pelbagai pilihan yang anda miliki ketika mengelaskan data menggunakan helaian cheat Microsoft. Scikit-learn menawarkan helaian cheat yang serupa tetapi lebih terperinci yang dapat membantu mempersempit pemilihan penganggar anda (istilah lain untuk pengelas):
+
+
+> Tip: [kunjungi peta ini secara online](https://scikit-learn.org/stable/tutorial/machine_learning_map/) dan klik sepanjang jalan untuk membaca dokumentasi.
+
+### Pelan
+
+Peta ini sangat membantu apabila anda mempunyai pemahaman yang jelas tentang data anda, kerana anda boleh 'berjalan' sepanjang jalannya untuk membuat keputusan:
+
+- Kami mempunyai >50 sampel
+- Kami ingin meramalkan kategori
+- Kami mempunyai data yang berlabel
+- Kami mempunyai kurang dari 100K sampel
+- ✨ Kami boleh memilih Linear SVC
+- Jika itu tidak berfungsi, kerana kami mempunyai data numerik
+ - Kami boleh mencuba ✨ KNeighbors Classifier
+ - Jika itu tidak berfungsi, cuba ✨ SVC dan ✨ Ensemble Classifiers
+
+Ini adalah jalan yang sangat membantu untuk diikuti.
+
+## Latihan - bahagikan data
+
+Mengikuti jalan ini, kita harus memulakan dengan mengimport beberapa perpustakaan untuk digunakan.
+
+1. Import perpustakaan yang diperlukan:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. Bahagikan data latihan dan ujian anda:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Pengelas Linear SVC
+
+Support-Vector clustering (SVC) ialah sebahagian daripada keluarga teknik ML Support-Vector Machines (ketahui lebih lanjut mengenainya di bawah). Dalam kaedah ini, anda boleh memilih 'kernel' untuk menentukan cara label dikelompokkan. Parameter 'C' merujuk kepada 'regularization' yang mengawal pengaruh parameter. Kernel boleh menjadi salah satu daripada [beberapa pilihan](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC); di sini kami menetapkannya kepada 'linear' untuk memastikan kami memanfaatkan linear SVC. Kebarangkalian secara lalai ialah 'false'; di sini kami menetapkannya kepada 'true' untuk mengutip anggaran kebarangkalian. Kami menetapkan keadaan rawak kepada '0' untuk mengocok data bagi mendapatkan kebarangkalian.
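+
+Sebagai lakaran ringkas sahaja (dengan andaian pemboleh ubah `X_train`, `y_train` dan `X_test` daripada langkah pembahagian data di atas), beginilah `probability=True` membolehkan anggaran kebarangkalian dikutip melalui `predict_proba()`:
+
+```python
+from sklearn.svm import SVC
+import numpy as np
+
+# probability=True enables predict_proba(); random_state=0 makes the
+# internal shuffling used for the probability estimates reproducible
+svc = SVC(kernel='linear', C=10, probability=True, random_state=0)
+svc.fit(X_train, np.ravel(y_train))
+
+# one probability per cuisine class; each row sums to 1
+print(svc.classes_)
+print(svc.predict_proba(X_test[:3]).round(2))
+```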
+
+### Latihan - gunakan linear SVC
+
+Mulakan dengan mencipta array pengelas. Anda akan menambah secara progresif ke dalam array ini semasa kita menguji.
+
+1. Mulakan dengan Linear SVC:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Latih model anda menggunakan Linear SVC dan cetak laporan:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ Hasilnya agak baik:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## Pengelas K-Neighbors
+
+K-Neighbors adalah sebahagian daripada keluarga kaedah ML "neighbors", yang boleh digunakan untuk pembelajaran yang diselia dan tidak diselia. Dalam kaedah ini, sejumlah titik yang telah ditetapkan dibuat dan data dikumpulkan di sekitar titik-titik ini supaya label yang digeneralisasi dapat diramalkan untuk data tersebut.
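+
+Sebagai lakaran ringkas (nilai `n_neighbors=10` hanyalah contoh, bukan daripada pelajaran asal), beginilah pengelas ini boleh dicuba secara berasingan dengan data latihan yang sama:
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+from sklearn.metrics import accuracy_score
+import numpy as np
+
+# each prediction is a vote among the 10 nearest training samples
+knn = KNeighborsClassifier(n_neighbors=10)
+knn.fit(X_train, np.ravel(y_train))
+print("Accuracy: %.3f" % accuracy_score(y_test, knn.predict(X_test)))
+```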
+
+### Latihan - gunakan pengelas K-Neighbors
+
+Pengelas sebelumnya bagus, dan berfungsi dengan baik dengan data, tetapi mungkin kita boleh mendapatkan ketepatan yang lebih baik. Cuba pengelas K-Neighbors.
+
+1. Tambahkan satu baris ke array pengelas anda (tambahkan koma selepas item Linear SVC):
+
+ ```python
+ 'KNN classifier': KNeighborsClassifier(C),
+ ```
+
+ Hasilnya sedikit lebih buruk:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ Pelajari tentang [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)
+
+## Pengelas Support Vector
+
+Pengelas Support-Vector adalah sebahagian daripada keluarga [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) kaedah ML yang digunakan untuk tugas pengelasan dan regresi. SVMs "memetakan contoh latihan kepada titik di ruang" untuk memaksimumkan jarak antara dua kategori. Data seterusnya dipetakan ke dalam ruang ini supaya kategori mereka dapat diramalkan.
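+
+Lakaran kecil berikut (andaian: data latihan yang sama seperti di atas) membandingkan dua kernel untuk menggambarkan bagaimana pemetaan ruang yang berbeza mengubah keputusan model:
+
+```python
+from sklearn.svm import SVC
+from sklearn.metrics import accuracy_score
+import numpy as np
+
+# 'linear' separates classes with a flat hyperplane;
+# 'rbf' maps samples into a higher-dimensional space first
+for kernel in ['linear', 'rbf']:
+    svm = SVC(kernel=kernel, C=10)
+    svm.fit(X_train, np.ravel(y_train))
+    print("kernel=%s: %.3f" % (kernel, accuracy_score(y_test, svm.predict(X_test))))
+```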
+
+### Latihan - gunakan pengelas Support Vector
+
+Mari cuba mendapatkan ketepatan yang sedikit lebih baik dengan pengelas Support Vector.
+
+1. Tambahkan koma selepas item K-Neighbors, dan kemudian tambahkan baris ini:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ Hasilnya sangat baik!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ Pelajari tentang [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm)
+
+## Pengelas Ensemble
+
+Mari ikuti jalan ini hingga ke akhir, walaupun ujian sebelumnya cukup baik. Mari cuba beberapa 'Pengelas Ensemble', khususnya Random Forest dan AdaBoost:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+Hasilnya sangat baik, terutama untuk Random Forest:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ Pelajari tentang [Pengelas Ensemble](https://scikit-learn.org/stable/modules/ensemble.html)
+
+Kaedah Pembelajaran Mesin ini "menggabungkan ramalan beberapa penganggar asas" untuk meningkatkan kualiti model. Dalam contoh kita, kita menggunakan Random Trees dan AdaBoost.
+
+- [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#forest), satu kaedah purata, membina 'hutan' daripada 'pokok keputusan' yang diserapkan unsur rawak untuk mengelakkan overfitting. Parameter n_estimators menetapkan bilangan pokok (lihat lakaran selepas senarai ini).
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) memadankan satu pengelas pada dataset dan kemudian memadankan salinan pengelas itu pada dataset yang sama. Ia memberi tumpuan kepada pemberat item yang salah dikelaskan dan melaraskan padanan untuk pengelas seterusnya bagi membetulkannya.
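+
+Sebagai lakaran sahaja (julat nilai di bawah hanyalah contoh), kesan `n_estimators` boleh dilihat dengan membandingkan beberapa nilai menggunakan `cross_val_score`:
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+import numpy as np
+
+# more trees usually lower variance, at the cost of training time
+for n in [10, 50, 100]:
+    rf = RandomForestClassifier(n_estimators=n, random_state=0)
+    scores = cross_val_score(rf, cuisines_feature_df, np.ravel(cuisines_label_df), cv=3)
+    print("n_estimators=%d: %.3f" % (n, scores.mean()))
+```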
+
+---
+
+## 🚀Cabaran
+
+Setiap teknik ini mempunyai sejumlah besar parameter yang boleh anda laraskan. Kaji parameter lalai setiap satu dan fikirkan apakah kesan pengubahan parameter ini terhadap kualiti model.
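+
+Satu cara sistematik untuk meneroka parameter (lakaran sahaja; grid di bawah hanyalah contoh kecil) ialah `GridSearchCV` daripada Scikit-learn:
+
+```python
+from sklearn.model_selection import GridSearchCV
+from sklearn.svm import SVC
+import numpy as np
+
+# try a small grid of C values and kernels; cv=3 keeps it quick
+grid = GridSearchCV(SVC(), {'C': [1, 10, 100], 'kernel': ['linear', 'rbf']}, cv=3)
+grid.fit(X_train, np.ravel(y_train))
+print(grid.best_params_, grid.best_score_)
+```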
+
+## [Kuiz Pasca-Kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## Ulasan & Kajian Sendiri
+
+Terdapat banyak jargon dalam pelajaran ini, jadi luangkan masa sebentar untuk menyemak [senarai ini](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) istilah yang berguna!
+
+## Tugasan
+
+[Parameter play](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/3-Classifiers-2/assignment.md b/translations/ms/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..df9f3892b
--- /dev/null
+++ b/translations/ms/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Parameter Play
+
+## Arahan
+
+Terdapat banyak parameter yang ditetapkan secara default apabila bekerja dengan pengklasifikasi ini. Intellisense dalam VS Code boleh membantu anda menggali ke dalamnya. Gunakan salah satu Teknik Pengelasan ML dalam pelajaran ini dan latih semula model dengan menyesuaikan pelbagai nilai parameter. Bina sebuah notebook yang menerangkan mengapa beberapa perubahan membantu kualiti model sementara yang lain merosakkannya. Berikan jawapan anda dengan terperinci.
+
+## Rubrik
+
+| Kriteria | Cemerlang | Memadai | Perlu Penambahbaikan |
+| -------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------- |
+| | Sebuah notebook dipersembahkan dengan pengklasifikasi yang dibina sepenuhnya dan parameternya disesuaikan serta perubahan dijelaskan dalam kotak teks | Sebuah notebook dipersembahkan sebahagian atau dijelaskan dengan lemah | Sebuah notebook yang bermasalah atau cacat |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesalahan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/ms/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..e1021ce8f
--- /dev/null
+++ b/translations/ms/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/4-Applied/README.md b/translations/ms/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..eba4217ce
--- /dev/null
+++ b/translations/ms/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# Bina Aplikasi Web Pencadang Masakan
+
+Dalam pelajaran ini, anda akan membina model klasifikasi menggunakan beberapa teknik yang telah anda pelajari dalam pelajaran sebelumnya dan dengan dataset masakan yang lazat yang digunakan sepanjang siri ini. Selain itu, anda akan membina aplikasi web kecil untuk menggunakan model yang disimpan, memanfaatkan runtime web Onnx.
+
+Salah satu kegunaan praktikal pembelajaran mesin yang paling berguna adalah membina sistem cadangan, dan anda boleh mengambil langkah pertama ke arah itu hari ini!
+
+[Applied ML](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 Klik gambar di atas untuk video: Jen Looper membina aplikasi web menggunakan data masakan yang diklasifikasikan
+
+## [Kuiz pra-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+Dalam pelajaran ini, anda akan belajar:
+
+- Cara membina model dan menyimpannya sebagai model Onnx
+- Cara menggunakan Netron untuk memeriksa model
+- Cara menggunakan model anda dalam aplikasi web untuk inferens
+
+## Bina model anda
+
+Membina sistem ML yang diterapkan adalah bahagian penting dalam memanfaatkan teknologi ini untuk sistem perniagaan anda. Anda boleh menggunakan model dalam aplikasi web anda (dan dengan itu menggunakannya dalam konteks luar talian jika diperlukan) dengan menggunakan Onnx.
+
+Dalam [pelajaran sebelumnya](../../3-Web-App/1-Web-App/README.md), anda telah membina model Regresi tentang penampakan UFO, "dipickle" dan menggunakannya dalam aplikasi Flask. Walaupun seni bina ini sangat berguna untuk diketahui, ia adalah aplikasi Python full-stack, dan keperluan anda mungkin termasuk penggunaan aplikasi JavaScript.
+
+Dalam pelajaran ini, anda boleh membina sistem asas berasaskan JavaScript untuk inferens. Pertama, bagaimanapun, anda perlu melatih model dan menukarnya untuk digunakan dengan Onnx.
+
+## Latihan - latih model klasifikasi
+
+Pertama, latih model klasifikasi menggunakan dataset masakan yang telah dibersihkan yang kita gunakan.
+
+1. Mulakan dengan mengimport perpustakaan yang berguna:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ Anda memerlukan '[skl2onnx](https://onnx.ai/sklearn-onnx/)' untuk membantu menukar model Scikit-learn anda kepada format Onnx.
+
+1. Kemudian, bekerja dengan data anda dengan cara yang sama seperti yang anda lakukan dalam pelajaran sebelumnya, dengan membaca fail CSV menggunakan `read_csv()`:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. Keluarkan dua lajur pertama yang tidak diperlukan dan simpan data yang tinggal sebagai 'X':
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. Simpan label sebagai 'y':
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### Mulakan rutin latihan
+
+Kami akan menggunakan perpustakaan 'SVC' yang mempunyai ketepatan yang baik.
+
+1. Import perpustakaan yang sesuai dari Scikit-learn:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. Pisahkan set latihan dan ujian:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. Bina model Klasifikasi SVC seperti yang anda lakukan dalam pelajaran sebelumnya:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. Sekarang, uji model anda, memanggil `predict()`:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. Cetak laporan klasifikasi untuk memeriksa kualiti model:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ Seperti yang kita lihat sebelum ini, ketepatan adalah baik:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### Tukar model anda kepada Onnx
+
+Pastikan untuk melakukan penukaran dengan nombor Tensor yang betul. Dataset ini mempunyai 380 bahan yang disenaraikan, jadi anda perlu mencatatkan nombor itu dalam `FloatTensorType`:
+
+1. Tukar menggunakan nombor tensor 380.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. Buat onx dan simpan sebagai fail **model.onnx**:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > Perhatikan, anda boleh memasukkan [pilihan](https://onnx.ai/sklearn-onnx/parameterized.html) dalam skrip penukaran anda. Dalam kes ini, kami menetapkan 'nocl' kepada True dan 'zipmap' kepada False. Oleh kerana ini adalah model klasifikasi, anda mempunyai pilihan untuk membuang ZipMap yang menghasilkan senarai kamus (tidak diperlukan). `nocl` merujuk kepada maklumat kelas yang disertakan dalam model; kurangkan saiz model anda dengan menetapkan `nocl` kepada 'True'.
+
+Menjalankan keseluruhan notebook kini akan membina model Onnx dan menyimpannya ke folder ini.
+
+## Lihat model anda
+
+Model Onnx tidak begitu kelihatan dalam Visual Studio Code, tetapi terdapat perisian percuma yang sangat baik yang digunakan oleh ramai penyelidik untuk memvisualisasikan model bagi memastikan ia dibina dengan betul. Muat turun [Netron](https://github.com/lutzroeder/Netron) dan buka fail model.onnx anda. Anda boleh melihat model ringkas anda divisualisasikan, dengan 380 input dan pengelasnya disenaraikan:
+
+
+
+Netron ialah alat yang berguna untuk melihat model anda.
+
+Sekarang anda bersedia untuk menggunakan model kemas ini dalam aplikasi web. Mari bina aplikasi yang akan berguna apabila anda melihat ke dalam peti sejuk dan cuba menentukan kombinasi bahan lebihan yang boleh digunakan untuk memasak sesuatu masakan, seperti yang ditentukan oleh model anda.
+
+## Bina aplikasi web pencadang
+
+Anda boleh menggunakan model anda secara terus dalam aplikasi web. Seni bina ini juga membolehkan anda menjalankannya secara tempatan dan juga luar talian jika perlu. Mulakan dengan mencipta fail `index.html` dalam folder yang sama di mana anda menyimpan fail `model.onnx` anda.
+
+1. Dalam fail ini _index.html_, tambahkan markup berikut:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+        <head>
+            <title>Cuisine Matcher</title>
+        </head>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. Sekarang, bekerja dalam tag `body`, tambahkan sedikit markup untuk menunjukkan senarai kotak semak yang mencerminkan beberapa bahan:
+
+    ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <!-- each checkbox 'value' is the ingredient's index in ingredient_indexes.csv -->
+        <!-- apple = 4 as explained below; the other ingredients/values here are illustrative -->
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+        <div class="boxCont">
+            <input type="checkbox" value="247" class="checkbox">
+            <label>pear</label>
+        </div>
+        <div class="boxCont">
+            <input type="checkbox" value="77" class="checkbox">
+            <label>cherry</label>
+        </div>
+        <div class="boxCont">
+            <input type="checkbox" value="327" class="checkbox">
+            <label>soy sauce</label>
+        </div>
+        <div class="boxCont">
+            <input type="checkbox" value="112" class="checkbox">
+            <label>cumin</label>
+        </div>
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+ Perhatikan bahawa setiap kotak semak diberi nilai. Ini mencerminkan indeks di mana bahan itu ditemui mengikut dataset. Epal, sebagai contoh, dalam senarai abjad ini, menduduki lajur kelima, jadi nilainya adalah '4' kerana kita mula mengira pada 0. Anda boleh merujuk kepada [lembaran ramuan](../../../../4-Classification/data/ingredient_indexes.csv) untuk mengetahui indeks bahan tertentu.
+
+    Meneruskan kerja anda dalam fail index.html, tambahkan blok skrip di mana model dipanggil selepas tag penutup `</div>` yang terakhir.
+
+1. Pertama, import [Onnx Runtime](https://www.onnxruntime.ai/):
+
+    ```html
+    <!-- pakej onnxruntime-web dari CDN; nombor versi di sini hanyalah contoh -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+ > Onnx Runtime digunakan untuk membolehkan menjalankan model Onnx anda di pelbagai platform perkakasan, termasuk pengoptimuman dan API untuk digunakan.
+
+1. Setelah Runtime ada, anda boleh memanggilnya:
+
+    ```html
+    <script>
+        // a sketch consistent with the steps described below
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // init() runs when the application starts and wires up the checkboxes
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    // the checkbox value is the ingredient's index in the 380-item array
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        function testCheckboxes() {
+            // true if at least one ingredient is checked
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            let checked = testCheckboxes();
+
+            if (checked) {
+                // create an inference session, loading the model asynchronously
+                const session = await ort.InferenceSession.create('./model.onnx');
+
+                // build a Tensor matching the model's expected input shape
+                const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+
+                // 'float_input' is the input name defined when converting the model
+                const feeds = { float_input: input };
+
+                // send the feeds to the model and wait for a response
+                const results = await session.run(feeds);
+
+                // read the predicted label (output name can be verified in Netron)
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } else alert("Please check an ingredient");
+        }
+
+        init();
+    </script>
+    ```
+
+Dalam kod ini, terdapat beberapa perkara yang berlaku:
+
+1. Anda mencipta array 380 nilai yang mungkin (1 atau 0) untuk ditetapkan dan dihantar ke model untuk inferens, bergantung pada sama ada kotak semak bahan dicentang.
+2. Anda mencipta array kotak semak dan cara untuk menentukan sama ada ia dicentang dalam fungsi `init` yang dipanggil semasa aplikasi bermula. Apabila kotak semak dicentang, array `ingredients` diubah untuk mencerminkan bahan yang dipilih.
+3. Anda mencipta fungsi `testCheckboxes` yang memeriksa sama ada mana-mana kotak semak dicentang.
+4. Anda menggunakan fungsi `startInference` apabila butang ditekan dan, jika mana-mana kotak semak dicentang, anda memulakan inferens.
+5. Rutin inferens merangkumi:
+   1. Menyediakan pemuatan model secara asinkron
+   2. Mencipta struktur Tensor untuk dihantar kepada model
+   3. Mencipta 'feeds' yang mencerminkan input `float_input` yang anda cipta semasa melatih model anda (anda boleh menggunakan Netron untuk mengesahkan nama itu)
+   4. Menghantar 'feeds' ini kepada model dan menunggu respons
+
+## Uji aplikasi anda
+
+Buka sesi terminal dalam Visual Studio Code dalam folder di mana fail index.html anda berada. Pastikan anda telah memasang [http-server](https://www.npmjs.com/package/http-server) secara global, dan taip `http-server` pada baris arahan. Satu localhost akan terbuka dan anda boleh melihat aplikasi web anda. Periksa masakan yang disyorkan berdasarkan pelbagai bahan:
+
+
+
+Tahniah, anda telah mencipta aplikasi web 'cadangan' dengan beberapa medan. Luangkan sedikit masa untuk membina sistem ini!
+## 🚀Cabaran
+
+Aplikasi web anda sangat minimal, jadi teruskan membina dengan menggunakan bahan dan indeks mereka dari data [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv). Kombinasi rasa apa yang berfungsi untuk mencipta hidangan kebangsaan tertentu?
+
+## [Kuiz pasca-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## Ulasan & Kajian Kendiri
+
+Walaupun pelajaran ini hanya menyentuh tentang kegunaan mencipta sistem cadangan untuk bahan makanan, kawasan aplikasi ML ini sangat kaya dengan contoh. Baca lebih lanjut tentang bagaimana sistem ini dibina:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## Tugasan
+
+[Bina cadangan baru](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber berwibawa. Untuk maklumat penting, terjemahan manusia profesional disarankan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/4-Applied/assignment.md b/translations/ms/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..817913550
--- /dev/null
+++ b/translations/ms/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# Bina Pencadang
+
+## Arahan
+
+Berdasarkan latihan dalam pelajaran ini, anda kini tahu cara membina aplikasi web berasaskan JavaScript menggunakan Onnx Runtime dan model Onnx yang ditukar. Cuba bina pencadang baru menggunakan data dari pelajaran ini atau sumber lain (sila berikan kredit). Anda mungkin mencipta pencadang haiwan peliharaan berdasarkan pelbagai atribut personaliti, atau pencadang genre muzik berdasarkan mood seseorang. Jadilah kreatif!
+
+## Rubrik
+
+| Kriteria | Cemerlang | Memadai | Perlu Penambahbaikan |
+| -------- | --------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+| | Aplikasi web dan buku nota disediakan, kedua-duanya didokumentasikan dengan baik dan berfungsi | Salah satu daripadanya hilang atau mempunyai kelemahan | Kedua-duanya sama ada hilang atau mempunyai kelemahan |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab ke atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/4-Classification/README.md b/translations/ms/4-Classification/README.md
new file mode 100644
index 000000000..1e73e7b93
--- /dev/null
+++ b/translations/ms/4-Classification/README.md
@@ -0,0 +1,30 @@
+# Memulakan dengan klasifikasi
+
+## Topik serantau: Masakan Asia dan India yang Lazat 🍜
+
+Di Asia dan India, tradisi makanan sangat pelbagai dan sangat lazat! Mari kita lihat data tentang masakan serantau untuk cuba memahami bahan-bahan mereka.
+
+
+> Foto oleh Lisheng Chang di Unsplash
+
+## Apa yang anda akan pelajari
+
+Dalam bahagian ini, anda akan membina dari kajian awal anda tentang Regresi dan belajar tentang pengklasifikasi lain yang boleh anda gunakan untuk memahami data dengan lebih baik.
+
+> Terdapat alat low-code yang berguna yang boleh membantu anda belajar tentang bekerja dengan model klasifikasi. Cuba [Azure ML untuk tugas ini](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Pelajaran
+
+1. [Pengenalan kepada klasifikasi](1-Introduction/README.md)
+2. [Lebih banyak pengklasifikasi](2-Classifiers-1/README.md)
+3. [Pengklasifikasi lain](3-Classifiers-2/README.md)
+4. [ML Terapan: bina aplikasi web](4-Applied/README.md)
+
+## Kredit
+
+"Memulakan dengan klasifikasi" ditulis dengan ♥️ oleh [Cassie Breviu](https://www.twitter.com/cassiebreviu) dan [Jen Looper](https://www.twitter.com/jenlooper)
+
+Dataset masakan lazat diperoleh dari [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines).
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab ke atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/1-Visualize/README.md b/translations/ms/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..7df181a74
--- /dev/null
+++ b/translations/ms/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,218 @@
+# Pengenalan kepada pengelompokan
+
+Pengelompokan adalah sejenis [Pembelajaran Tanpa Pengawasan](https://wikipedia.org/wiki/Unsupervised_learning) yang menganggap bahawa satu set data tidak dilabel atau inputnya tidak dipadankan dengan output yang telah ditetapkan. Ia menggunakan pelbagai algoritma untuk menyusun data yang tidak dilabel dan menyediakan pengelompokan mengikut corak yang ditemui dalam data tersebut.
+
+[No One Like You oleh PSquare](https://youtu.be/ty2advRiWJM "No One Like You oleh PSquare")
+
+> 🎥 Klik gambar di atas untuk video. Semasa anda mempelajari pembelajaran mesin dengan pengelompokan, nikmati beberapa lagu Dance Hall Nigeria - ini adalah lagu yang sangat dihargai dari tahun 2014 oleh PSquare.
+## [Kuiz sebelum kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+### Pengenalan
+
+[Pengelompokan](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) sangat berguna untuk penerokaan data. Mari kita lihat sama ada ia dapat membantu menemui trend dan corak dalam cara penonton Nigeria menikmati muzik.
+
+✅ Luangkan masa sebentar untuk memikirkan kegunaan pengelompokan. Dalam kehidupan sebenar, pengelompokan berlaku setiap kali anda mempunyai timbunan pakaian dan perlu menyusun pakaian ahli keluarga anda 🧦👕👖🩲. Dalam sains data, pengelompokan berlaku apabila cuba menganalisis pilihan pengguna, atau menentukan ciri-ciri mana-mana set data yang tidak dilabel. Pengelompokan, dalam cara tertentu, membantu memahami kekacauan, seperti laci stokin.
+
+[Pengenalan kepada Pengelompokan](https://youtu.be/esmzYhuFnds "Pengenalan kepada Pengelompokan")
+
+> 🎥 Klik gambar di atas untuk video: John Guttag dari MIT memperkenalkan pengelompokan
+
+Dalam suasana profesional, pengelompokan boleh digunakan untuk menentukan perkara seperti segmentasi pasaran, menentukan kumpulan umur yang membeli barangan tertentu, sebagai contoh. Kegunaan lain mungkin adalah pengesanan anomali, mungkin untuk mengesan penipuan daripada set data transaksi kad kredit. Atau anda mungkin menggunakan pengelompokan untuk menentukan tumor dalam sekumpulan imbasan perubatan.
+
+✅ Fikir sebentar tentang bagaimana anda mungkin pernah menemui pengelompokan 'di alam liar', dalam perbankan, e-dagang, atau perniagaan.
+
+> 🎓 Menariknya, analisis pengelompokan berasal dari bidang Antropologi dan Psikologi pada tahun 1930-an. Bolehkah anda bayangkan bagaimana ia mungkin digunakan?
+
+Sebagai alternatif, anda boleh menggunakannya untuk mengelompokkan hasil carian - melalui pautan membeli-belah, imej, atau ulasan, sebagai contoh. Pengelompokan berguna apabila anda mempunyai set data yang besar yang anda ingin kurangkan dan di mana anda ingin melakukan analisis yang lebih terperinci, jadi teknik ini boleh digunakan untuk mempelajari tentang data sebelum model lain dibina.
+
+✅ Setelah data anda disusun dalam kelompok, anda memberikan Id kelompok kepadanya, dan teknik ini boleh berguna apabila mengekalkan privasi set data; anda boleh merujuk kepada titik data dengan id kelompoknya, dan bukannya dengan data yang lebih mendedahkan. Bolehkah anda memikirkan sebab-sebab lain mengapa anda akan merujuk kepada Id kelompok dan bukannya elemen lain dalam kelompok untuk mengenalpastinya?
+
+Dalami pemahaman anda tentang teknik pengelompokan dalam [modul Pembelajaran](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott) ini.
+## Bermula dengan pengelompokan
+
+[Scikit-learn menawarkan pelbagai kaedah](https://scikit-learn.org/stable/modules/clustering.html) untuk melaksanakan pengelompokan. Jenis yang anda pilih akan bergantung pada kes penggunaan anda. Menurut dokumentasi, setiap kaedah mempunyai pelbagai manfaat. Berikut adalah jadual ringkas kaedah yang disokong oleh Scikit-learn dan kes penggunaan yang sesuai:
+
+| Nama kaedah | Kes penggunaan |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | tujuan umum, induktif |
+| Penyebaran afiniti | banyak, kelompok tidak sekata, induktif |
+| Mean-shift | banyak, kelompok tidak sekata, induktif |
+| Pengelompokan spektral | beberapa, kelompok sekata, transduktif |
+| Pengelompokan hierarki Ward | banyak, kelompok terhad, transduktif |
+| Pengelompokan aglomeratif | banyak, terhad, jarak bukan Euclidean, transduktif |
+| DBSCAN | geometri bukan rata, kelompok tidak sekata, transduktif |
+| OPTICS | geometri bukan rata, kelompok tidak sekata dengan ketumpatan berubah, transduktif |
+| Campuran Gaussian | geometri rata, induktif |
+| BIRCH | set data besar dengan outlier, induktif |
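+
+Sebagai ilustrasi (lakaran dengan data sintetik, bukan sebahagian daripada pelajaran asal), kebanyakan kaedah ini berkongsi API `fit_predict` yang sama dalam Scikit-learn, jadi mudah untuk bertukar antara satu sama lain:
+
+```python
+from sklearn.cluster import KMeans, DBSCAN
+from sklearn.datasets import make_blobs
+
+# synthetic data: 200 points around 3 centres
+X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
+
+print(KMeans(n_clusters=3, random_state=0).fit_predict(X)[:10])
+print(DBSCAN(eps=1.0).fit_predict(X)[:10])  # -1 marks noise points
+```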
+
+> 🎓 Bagaimana kita mencipta kelompok banyak bergantung pada bagaimana kita mengumpulkan titik data ke dalam kumpulan. Mari kita huraikan beberapa istilah:
+>
+> 🎓 ['Transduktif' vs. 'induktif'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> Inferens transduktif diperoleh daripada kes latihan yang diperhatikan yang memetakan kepada kes ujian tertentu. Inferens induktif diperoleh daripada kes latihan yang memetakan kepada peraturan umum yang hanya kemudian digunakan pada kes ujian.
+>
+> Contoh: Bayangkan anda mempunyai set data yang hanya dilabel sebahagiannya. Beberapa perkara adalah 'rekod', beberapa 'cd', dan beberapa kosong. Tugas anda adalah menyediakan label untuk yang kosong. Jika anda memilih pendekatan induktif, anda akan melatih model yang mencari 'rekod' dan 'cd', dan menggunakan label tersebut pada data yang tidak dilabel. Pendekatan ini akan menghadapi kesukaran untuk mengklasifikasikan perkara yang sebenarnya 'kaset'. Pendekatan transduktif, sebaliknya, menangani data yang tidak diketahui ini dengan lebih berkesan kerana ia berusaha untuk mengelompokkan item yang serupa bersama-sama dan kemudian memberikan label kepada kumpulan. Dalam kes ini, kelompok mungkin mencerminkan 'perkara muzik bulat' dan 'perkara muzik segi empat'.
+>
+> 🎓 ['Geometri bukan rata' vs. 'rata'](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> Berasal daripada istilah matematik, geometri bukan rata vs. rata merujuk kepada ukuran jarak antara titik dengan kaedah geometri 'rata' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) atau 'bukan rata' (bukan Euclidean).
+>
+> 'Rata' dalam konteks ini merujuk kepada geometri Euclidean (sebahagian daripadanya diajar sebagai geometri 'satah'), dan bukan rata merujuk kepada geometri bukan Euclidean. Apa kaitan geometri dengan pembelajaran mesin? Sebagai dua bidang yang berakar pada matematik, mesti ada cara umum untuk mengukur jarak antara titik dalam kelompok, dan itu boleh dilakukan dengan cara 'rata' atau 'bukan rata', bergantung pada sifat data. [Jarak Euclidean](https://wikipedia.org/wiki/Euclidean_distance) diukur sebagai panjang segmen garis antara dua titik. [Jarak bukan Euclidean](https://wikipedia.org/wiki/Non-Euclidean_geometry) diukur di sepanjang lengkung. Jika data anda, apabila divisualisasikan, kelihatan tidak wujud pada satu satah, anda mungkin perlu menggunakan algoritma khusus untuk menanganinya.
+>
+
+> Infografik oleh [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['Jarak'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> Kelompok ditakrifkan oleh matriks jaraknya, contohnya jarak antara titik. Jarak ini boleh diukur dalam beberapa cara. Kelompok Euclidean ditakrifkan oleh purata nilai titik, dan mengandungi 'centroid' atau titik pusat. Jarak diukur dengan jarak ke centroid tersebut. Jarak bukan Euclidean merujuk kepada 'clustroids', titik terdekat dengan titik lain. Clustroids seterusnya boleh ditakrifkan dalam pelbagai cara.
+>
+> 🎓 ['Terhad'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [Pengelompokan Terhad](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) memperkenalkan pembelajaran 'semi-supervised' ke dalam kaedah tanpa pengawasan ini. Hubungan antara titik ditandakan sebagai 'tidak boleh paut' atau 'mesti paut' jadi beberapa peraturan dipaksa ke atas set data.
+>
+>Contoh: Jika algoritma dilepaskan pada sekumpulan data yang tidak dilabel atau separa dilabel, kelompok yang dihasilkan mungkin berkualiti rendah. Dalam contoh di atas, kelompok mungkin mengelompokkan 'perkara muzik bulat' dan 'perkara muzik segi empat' dan 'perkara segi tiga' dan 'biskut'. Jika diberikan beberapa kekangan, atau peraturan untuk diikuti ("item mesti diperbuat daripada plastik", "item perlu dapat menghasilkan muzik") ini boleh membantu 'mengekang' algoritma untuk membuat pilihan yang lebih baik.
+>
+> 🎓 'Ketumpatan'
+>
+> Data yang 'berisik' dianggap 'padat'. Jarak antara titik dalam setiap kelompoknya mungkin terbukti, apabila diperiksa, lebih atau kurang padat, atau 'sesak' dan oleh itu data ini perlu dianalisis dengan kaedah pengelompokan yang sesuai. [Artikel ini](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) menunjukkan perbezaan antara menggunakan pengelompokan K-Means vs. algoritma HDBSCAN untuk meneroka set data yang berisik dengan ketumpatan kelompok yang tidak sekata.
+
+## Algoritma pengelompokan
+
+Terdapat lebih 100 algoritma pengelompokan, dan penggunaannya bergantung pada sifat data yang ada. Mari kita bincangkan beberapa yang utama:
+
+- **Pengelompokan hierarki**. Jika objek diklasifikasikan mengikut jaraknya dengan objek berdekatan, dan bukan dengan yang lebih jauh, kelompok dibentuk berdasarkan jarak anggotanya dengan objek lain. Pengelompokan aglomeratif Scikit-learn adalah hierarki.
+
+ 
+ > Infografik oleh [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Pengelompokan centroid**. Algoritma popular ini memerlukan pilihan 'k', atau bilangan kelompok untuk dibentuk, selepas itu algoritma menentukan titik pusat kelompok dan mengumpulkan data di sekitar titik tersebut. [Pengelompokan K-means](https://wikipedia.org/wiki/K-means_clustering) adalah versi pengelompokan centroid yang popular. Pusatnya ditentukan oleh min terdekat, oleh itu namanya. Jarak kuasa dua dari kelompok diminimumkan.
+
+ 
+ > Infografik oleh [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Pengelompokan berasaskan pengedaran**. Berdasarkan pemodelan statistik, pengelompokan berasaskan pengedaran berpusat pada menentukan kebarangkalian bahawa titik data tergolong dalam kelompok, dan menugaskannya dengan sewajarnya. Kaedah campuran Gaussian tergolong dalam jenis ini.
+
+- **Pengelompokan berasaskan ketumpatan**. Titik data ditugaskan kepada kelompok berdasarkan ketumpatannya, atau pengelompokan di sekitar satu sama lain. Titik data yang jauh dari kumpulan dianggap sebagai outlier atau bunyi. DBSCAN, Mean-shift dan OPTICS tergolong dalam jenis pengelompokan ini.
+
+- **Pengelompokan berasaskan grid**. Untuk set data berbilang dimensi, grid dicipta dan data dibahagikan di antara sel grid, dengan itu mencipta kelompok.
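+
+Lakaran kecil berikut (sekali lagi dengan data sintetik sahaja) menunjukkan dua daripada jenis ini: pengelompokan aglomeratif (hierarki) dan campuran Gaussian (berasaskan pengedaran):
+
+```python
+from sklearn.cluster import AgglomerativeClustering
+from sklearn.mixture import GaussianMixture
+from sklearn.datasets import make_blobs
+
+X, _ = make_blobs(n_samples=150, centers=3, random_state=42)
+
+# hierarchical: repeatedly merge the closest clusters until 3 remain
+print(AgglomerativeClustering(n_clusters=3).fit_predict(X)[:10])
+
+# distribution-based: probability that each point belongs to each Gaussian
+gm = GaussianMixture(n_components=3, random_state=42).fit(X)
+print(gm.predict_proba(X[:3]).round(2))
+```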
+
+## Latihan - mengelompokkan data anda
+
+Pengelompokan sebagai teknik sangat dibantu oleh visualisasi yang betul, jadi mari kita mulakan dengan memvisualisasikan data muzik kita. Latihan ini akan membantu kita memutuskan kaedah pengelompokan yang paling berkesan untuk digunakan berdasarkan sifat data ini.
+
+1. Buka fail [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) dalam folder ini.
+
+1. Import pakej `Seaborn` untuk visualisasi data yang baik.
+
+ ```python
+    !pip install seaborn
+    import seaborn as sns
+ ```
+
+1. Lampirkan data lagu dari [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv). Muatkan dataframe dengan beberapa data tentang lagu-lagu tersebut. Bersedia untuk meneroka data ini dengan mengimport perpustakaan dan mengeluarkan data:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ Semak beberapa baris pertama data:
+
+ | | nama | album | artis | genre_teratas_artis | tarikh_keluar | panjang | populariti | kebolehdansaan | keakustikan | tenaga | keinstrumentalan | kesegaran | kelantangan | keceritaan | tempo | tanda_waktu |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | r&b alternatif | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | pop Nigeria | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. Dapatkan beberapa maklumat tentang dataframe, dengan memanggil `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ Outputnya kelihatan seperti ini:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. Periksa semula untuk nilai null, dengan memanggil `isnull()` dan mengesahkan jumlahnya adalah 0:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ Kelihatan baik:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. Huraikan data:
+
+ ```python
+ df.describe()
+ ```
+
+ | | tarikh_keluar | panjang | populariti | kebolehdansaan | keakustikan | tenaga | keinstrumentalan | kesegaran | kelantangan | keceritaan | tempo | tanda_waktu |
+ | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+ | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+ | mean | 2015.390566 | 222298.1698 | 17.507547 | 0.741619 | 0.265412 | 0.760623 | 0.016305 | 0.147308 | -4.953011 | 0.130748 | 116.487864 | 3.986792 |
+ | std | 3.131688 | 39696.82226 | 18.992212 | 0.117522 | 0.208342 | 0.148533 | 0.090321 | 0.123588 | 2.464186 | 0.092939 | 23.518601 | 0.333701 |
+ | min | 1998 | 89488 | 0 | 0.255 | 0.000665 | 0.111 | 0 | 0.0283 | -19.362 | 0.0278 | 61.695 | 3 |
+ | 25%
+## [Kuiz selepas kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## Kajian & Pembelajaran Sendiri
+
+Sebelum anda menggunakan algoritma pengelompokan, seperti yang telah kita pelajari, adalah idea yang baik untuk memahami sifat dataset anda. Baca lebih lanjut mengenai topik ini [di sini](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
+
+[Artikel yang berguna ini](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/) membimbing anda melalui pelbagai cara yang berbeza bagaimana algoritma pengelompokan berkelakuan, diberikan bentuk data yang berbeza.
+
+## Tugasan
+
+[Selidiki visualisasi lain untuk pengelompokan](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab ke atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/1-Visualize/assignment.md b/translations/ms/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..d4e5eed55
--- /dev/null
+++ b/translations/ms/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# Menyiasat visualisasi lain untuk pengelompokan
+
+## Arahan
+
+Dalam pelajaran ini, anda telah bekerja dengan beberapa teknik visualisasi untuk memahami cara memplot data anda sebagai persiapan untuk pengelompokan. Scatterplot, khususnya, berguna untuk mencari kumpulan objek. Siasat pelbagai cara dan pelbagai perpustakaan untuk mencipta scatterplot dan dokumentasikan kerja anda dalam sebuah notebook. Anda boleh menggunakan data dari pelajaran ini, pelajaran lain, atau data yang anda cari sendiri (sila kreditkan sumbernya, bagaimanapun, dalam notebook anda). Plotkan beberapa data menggunakan scatterplot dan jelaskan apa yang anda temui.
+
+## Rubrik
+
+| Kriteria | Contoh Terbaik | Memadai | Perlu Penambahbaikan |
+| -------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------- |
+| | Sebuah notebook dipersembahkan dengan lima scatterplot yang didokumentasikan dengan baik | Sebuah notebook dipersembahkan dengan kurang dari lima scatterplot dan kurang didokumentasikan dengan baik | Sebuah notebook yang tidak lengkap dipersembahkan |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/ms/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..3a3ac99f6
--- /dev/null
+++ b/translations/ms/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/2-K-Means/README.md b/translations/ms/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..3552a3b67
--- /dev/null
+++ b/translations/ms/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# Pengelompokan K-Means
+
+## [Kuiz Pra-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+Dalam pelajaran ini, anda akan belajar cara mencipta kelompok menggunakan Scikit-learn dan dataset muzik Nigeria yang anda import sebelum ini. Kita akan membincangkan asas K-Means untuk pengelompokan. Ingat bahawa, seperti yang anda pelajari dalam pelajaran sebelumnya, terdapat banyak cara untuk bekerja dengan kelompok dan kaedah yang anda gunakan bergantung pada data anda. Kita akan mencuba K-Means kerana ia merupakan teknik pengelompokan yang paling biasa. Mari kita mulakan!
+
+Istilah-istilah yang akan anda pelajari:
+
+- Skor Silhouette
+- Kaedah Elbow
+- Inertia
+- Varians
+
+## Pengenalan
+
+[Pengelompokan K-Means](https://wikipedia.org/wiki/K-means_clustering) ialah kaedah yang berasal daripada domain pemprosesan isyarat. Ia digunakan untuk membahagi dan mengelompokkan data kepada 'k' kelompok menggunakan satu siri pemerhatian. Setiap pemerhatian berfungsi untuk mengelompokkan titik data ke 'min' terdekatnya, iaitu titik tengah sesebuah kelompok.
+
+Kelompok-kelompok ini boleh divisualisasikan sebagai [gambar rajah Voronoi](https://wikipedia.org/wiki/Voronoi_diagram), yang merangkumi satu titik (atau 'benih') dan kawasannya yang sepadan.
+
+
+
+> Infografik oleh [Jen Looper](https://twitter.com/jenlooper)
+
+Proses pengelompokan K-Means [dijalankan dalam tiga langkah](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. Algoritma memilih sejumlah k titik tengah dengan mengambil sampel daripada dataset. Selepas itu, ia berulang (lihat lakaran selepas langkah-langkah ini):
+    1. Menetapkan setiap sampel kepada centroid terdekat.
+    2. Mencipta centroid baharu dengan mengambil nilai purata semua sampel yang ditetapkan kepada centroid sebelumnya.
+    3. Kemudian, mengira perbezaan antara centroid baharu dan lama, dan mengulang sehingga centroid stabil.
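+
+Sebagai ilustrasi langkah-langkah di atas, berikut ialah lakaran numpy yang ringkas (ilustrasi sahaja, bukan pelaksanaan Scikit-learn):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(100, 2))   # toy data
+k = 3
+
+# step 1: sample k centroids from the dataset
+centroids = X[rng.choice(len(X), size=k, replace=False)]
+
+for _ in range(10):
+    # step 2a: assign every sample to its nearest centroid
+    labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
+    # step 2b: new centroid = mean of the samples assigned to it
+    new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
+    # step 2c: stop once the centroids no longer move
+    if np.allclose(new_centroids, centroids):
+        break
+    centroids = new_centroids
+```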
+
+Satu kelemahan menggunakan K-Means ialah anda perlu menetapkan 'k', iaitu bilangan centroid. Mujurlah, 'kaedah elbow' membantu menganggar nilai permulaan yang baik untuk 'k'. Anda akan mencubanya sebentar lagi.
+
+## Prasyarat
+
+Anda akan bekerja dalam fail [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) pelajaran ini, yang merangkumi import data dan pembersihan awal yang anda lakukan dalam pelajaran sebelumnya.
+
+## Latihan - persiapan
+
+Mulakan dengan melihat semula data lagu.
+
+1. Buat boxplot dengan memanggil `boxplot()` untuk setiap lajur:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+    Data ini agak bising: dengan memerhatikan setiap lajur sebagai boxplot, anda boleh melihat outlier.
+
+ 
+
+Anda boleh menyemak dataset dan membuang outlier ini, tetapi itu akan menjadikan data agak minimum.
+
+1. Buat masa ini, pilih lajur yang akan anda gunakan untuk latihan pengelompokan anda. Pilih lajur yang mempunyai julat serupa, dan enkodkan lajur `artist_top_genre` sebagai data berangka:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. Sekarang anda perlu memilih bilangan kelompok yang hendak disasarkan. Anda tahu terdapat 3 genre lagu yang kita ambil daripada dataset, jadi mari cuba 3:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+Anda akan melihat array dicetak dengan kelompok yang diramalkan (0, 1, atau 2) bagi setiap baris dataframe.
+
+1. Gunakan array ini untuk mengira 'skor silhouette':
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Skor Silhouette
+
+Carilah skor silhouette yang mendekati 1. Skor ini bervariasi dari -1 hingga 1, dan jika skornya 1, kluster tersebut padat dan terpisah dengan baik dari kluster lain. Nilai mendekati 0 mewakili kluster yang tumpang tindih dengan sampel yang sangat dekat dengan batas keputusan kluster tetangga. [(Sumber)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+Skor kita adalah **.53**, jadi tepat di tengah. Ini menunjukkan bahwa data kita tidak terlalu cocok untuk jenis pengelompokan ini, tetapi mari kita lanjutkan.
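+
+Sebagai sketsa tambahan (asumsi: `X` seperti yang dibangun pada latihan di atas), anda dapat membandingkan skor silhouette untuk beberapa nilai k:
+
+```python
+from sklearn.cluster import KMeans
+from sklearn.metrics import silhouette_score
+
+# Bandingkan skor silhouette untuk beberapa jumlah kluster
+for k in range(2, 7):
+    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
+    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
+```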
+
+### Latihan - membangun model
+
+1. Impor `KMeans` dan mulai proses pengelompokan.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ Ada beberapa bagian di sini yang perlu dijelaskan.
+
+ > 🎓 range: Ini adalah iterasi dari proses pengelompokan
+
+ > 🎓 random_state: "Menentukan penghasil angka acak untuk inisialisasi centroid." [Sumber](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "within-cluster sums of squares" mengukur jarak rata-rata kuadrat dari semua titik dalam sebuah kluster ke centroid kluster. [Sumber](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 Inertia: Algoritma K-Means berusaha memilih centroid untuk meminimalkan 'inertia', "ukuran seberapa koheren kluster secara internal." [Sumber](https://scikit-learn.org/stable/modules/clustering.html). Nilainya ditambahkan ke variabel wcss pada setiap iterasi.
+
+ > 🎓 k-means++: Dalam [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) anda dapat menggunakan optimasi 'k-means++', yang "menginisialisasi centroid agar (secara umum) jauh dari satu sama lain, menghasilkan kemungkinan hasil yang lebih baik daripada inisialisasi acak."
+
+### Metode Elbow
+
+Sebelumnya, anda menduga bahwa, karena anda menargetkan 3 genre lagu, anda harus memilih 3 kluster. Tetapi apakah itu benar?
+
+1. Gunakan 'metode elbow' untuk memastikannya.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ Gunakan variabel `wcss` yang anda buat pada langkah sebelumnya untuk membuat grafik yang menunjukkan di mana 'tikungan' pada elbow, yang menunjukkan jumlah kluster yang optimal. Mungkin memang **3**!
+
+ 
+
+## Latihan - menampilkan kluster
+
+1. Cobalah prosesnya lagi, kali ini menetapkan tiga kluster, dan tampilkan kluster sebagai scatterplot:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. Periksa akurasi model:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ Akurasi model ini tidak terlalu bagus, dan bentuk kluster memberi anda petunjuk mengapa.
+
+ 
+
+ Data ini terlalu tidak seimbang, terlalu sedikit berkorelasi dan ada terlalu banyak varians antara nilai kolom untuk dikelompokkan dengan baik. Faktanya, kluster yang terbentuk mungkin sangat dipengaruhi atau bias oleh tiga kategori genre yang kita definisikan di atas. Itu adalah proses pembelajaran!
+
+ Dalam dokumentasi Scikit-learn, anda dapat melihat bahwa model seperti ini, dengan kluster yang tidak terlalu jelas, memiliki masalah 'varians':
+
+ 
+ > Infografis dari Scikit-learn
+
+## Variansi
+
+Variansi didefinisikan sebagai "rata-rata dari perbedaan kuadrat dari Mean" [(Sumber)](https://www.mathsisfun.com/data/standard-deviation.html). Dalam konteks masalah pengelompokan ini, istilah itu mengacu pada fakta bahwa angka-angka dalam dataset kita cenderung menyimpang terlalu jauh dari mean.
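+
+Sebagai gambaran, definisi itu dapat dihitung langsung; angka-angka di bawah berasal dari contoh tinggi anjing pada sumber yang dikutip:
+
+```python
+import numpy as np
+
+data = np.array([600, 470, 170, 430, 300])  # contoh dari mathsisfun.com
+mean = data.mean()                          # 394.0
+variance = ((data - mean) ** 2).mean()      # rata-rata perbedaan kuadrat: 21704.0
+print(mean, variance)
+```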
+
+✅ Ini adalah saat yang tepat untuk memikirkan semua cara anda dapat memperbaiki masalah ini. Mengubah data sedikit lebih banyak? Menggunakan kolom yang berbeda? Menggunakan algoritma yang berbeda? Petunjuk: Cobalah [menyelaraskan data anda](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) untuk menormalkannya dan menguji kolom lain.
+
+> Cobalah '[kalkulator varians](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' ini untuk memahami konsep ini lebih lanjut.
+
+---
+
+## 🚀Tantangan
+
+Habiskan waktu dengan notebook ini, mengubah parameter. Bisakah anda meningkatkan akurasi model dengan membersihkan data lebih banyak (menghapus outlier, misalnya)? Anda dapat menggunakan bobot untuk memberikan bobot lebih pada sampel data tertentu. Apa lagi yang bisa anda lakukan untuk membuat kluster yang lebih baik?
+
+Petunjuk: Cobalah untuk menyelaraskan data anda. Ada kode yang dikomentari dalam notebook yang menambahkan penskalaan standar untuk membuat kolom data lebih mirip satu sama lain dalam hal rentang. Anda akan menemukan bahwa meskipun skor silhouette turun, 'tikungan' dalam grafik elbow menjadi lebih halus. Ini karena membiarkan data tidak diskalakan memungkinkan data dengan varians lebih sedikit untuk membawa lebih banyak bobot. Baca lebih lanjut tentang masalah ini [di sini](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
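+
+Sebagai sketsa (notebook pelajaran sudah menyertakan versi yang dikomentari; `X` diasumsikan seperti di atas):
+
+```python
+from sklearn.cluster import KMeans
+from sklearn.preprocessing import StandardScaler
+
+# Skalakan fitur agar setiap kolom memiliki mean 0 dan deviasi standar 1
+X_scaled = StandardScaler().fit_transform(X)
+kmeans = KMeans(n_clusters=3, random_state=0).fit(X_scaled)
+```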
+
+## [Kuiz Pasca-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## Tinjauan & Studi Mandiri
+
+Lihatlah Simulator K-Means [seperti yang ini](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). Anda dapat menggunakan alat ini untuk memvisualisasikan titik data sampel dan menentukan centroidnya. Anda dapat mengedit keacakan data, jumlah kluster, dan jumlah centroid. Apakah ini membantu anda mendapatkan gambaran tentang bagaimana data dapat dikelompokkan?
+
+Juga, lihat [handout ini tentang K-Means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) dari Stanford.
+
+## Tugas
+
+[Cobalah metode pengelompokan yang berbeda](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/2-K-Means/assignment.md b/translations/ms/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..574843d09
--- /dev/null
+++ b/translations/ms/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# Cuba kaedah pengelompokan yang berbeza
+
+## Arahan
+
+Dalam pelajaran ini, anda telah mempelajari tentang pengelompokan K-Means. Kadang-kadang K-Means tidak sesuai untuk data anda. Buatlah sebuah notebook menggunakan data sama ada dari pelajaran ini atau dari sumber lain (nyatakan sumber anda) dan tunjukkan kaedah pengelompokan yang berbeza TANPA menggunakan K-Means. Apa yang anda pelajari?
+## Rubrik
+
+| Kriteria | Cemerlang | Memadai | Perlu Penambahbaikan |
+| -------- | --------------------------------------------------------------- | -------------------------------------------------------------------- | ---------------------------- |
+| | Sebuah notebook disediakan dengan model pengelompokan yang didokumentasikan dengan baik | Sebuah notebook disediakan tanpa dokumentasi yang baik dan/atau tidak lengkap | Kerja yang tidak lengkap diserahkan |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/ms/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..d554defb7
--- /dev/null
+++ b/translations/ms/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab ke atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/5-Clustering/README.md b/translations/ms/5-Clustering/README.md
new file mode 100644
index 000000000..691ff2041
--- /dev/null
+++ b/translations/ms/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# Model Pengelompokan untuk Pembelajaran Mesin
+
+Pengelompokan adalah tugas pembelajaran mesin yang mencari objek-objek yang menyerupai satu sama lain dan mengelompokkannya ke dalam kumpulan yang disebut kluster. Yang membedakan pengelompokan dari pendekatan lain dalam pembelajaran mesin adalah bahwa semuanya terjadi secara otomatis; bahkan, bisa dikatakan ini adalah kebalikan dari pembelajaran terawasi.
+
+## Topik Regional: model pengelompokan untuk selera musik audiens Nigeria 🎧
+
+Audiens Nigeria yang beragam memiliki selera musik yang beragam. Menggunakan data yang diambil dari Spotify (terinspirasi oleh [artikel ini](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)), mari kita lihat beberapa musik yang populer di Nigeria. Dataset ini mencakup data tentang skor 'danceability', 'acousticness', 'loudness', 'speechiness', popularitas, dan energi dari berbagai lagu. Akan sangat menarik untuk menemukan pola dalam data ini!
+
+
+
+> Foto oleh Marcela Laskoski di Unsplash
+
+Dalam rangkaian pelajaran ini, Anda akan menemukan cara baru untuk menganalisis data menggunakan teknik pengelompokan. Pengelompokan sangat berguna ketika dataset Anda tidak memiliki label. Jika dataset memiliki label, maka teknik klasifikasi seperti yang Anda pelajari di pelajaran sebelumnya mungkin lebih berguna. Tetapi ketika Anda ingin mengelompokkan data tanpa label, pengelompokan adalah cara yang bagus untuk menemukan pola.
+
+> Ada alat low-code yang berguna yang dapat membantu Anda mempelajari tentang bekerja dengan model pengelompokan. Coba [Azure ML untuk tugas ini](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Pelajaran
+
+1. [Pengantar pengelompokan](1-Visualize/README.md)
+2. [Pengelompokan K-Means](2-K-Means/README.md)
+
+## Kredit
+
+Pelajaran ini ditulis dengan 🎶 oleh [Jen Looper](https://www.twitter.com/jenlooper) dengan ulasan yang bermanfaat oleh [Rishit Dagli](https://rishit_dagli) dan [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan).
+
+Dataset [Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) bersumber dari Kaggle sebagai hasil scraping dari Spotify.
+
+Contoh K-Means yang berguna yang membantu dalam membuat pelajaran ini termasuk [eksplorasi iris](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering) ini, [notebook pengantar](https://www.kaggle.com/prashant111/k-means-clustering-with-python) ini, dan [contoh hipotetis NGO](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering) ini.
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat penting, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/1-Introduction-to-NLP/README.md b/translations/ms/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..8cc2791d8
--- /dev/null
+++ b/translations/ms/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# Pengenalan kepada pemprosesan bahasa semulajadi
+
+Pelajaran ini merangkumi sejarah ringkas dan konsep penting dalam *pemprosesan bahasa semulajadi*, satu cabang daripada *linguistik komputasi*.
+
+## [Kuiz pra-ceramah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## Pengenalan
+
+NLP, seperti yang biasa dikenali, adalah salah satu bidang yang paling terkenal di mana pembelajaran mesin telah digunakan dan diterapkan dalam perisian pengeluaran.
+
+✅ Bolehkah anda memikirkan perisian yang anda gunakan setiap hari yang mungkin mempunyai beberapa NLP di dalamnya? Bagaimana dengan program pemprosesan kata atau aplikasi mudah alih yang anda gunakan secara kerap?
+
+Anda akan belajar tentang:
+
+- **Idea tentang bahasa**. Bagaimana bahasa berkembang dan apakah bidang kajian utama.
+- **Definisi dan konsep**. Anda juga akan belajar definisi dan konsep tentang bagaimana komputer memproses teks, termasuk penguraian, tatabahasa, dan mengenal pasti kata nama dan kata kerja. Terdapat beberapa tugas pengekodan dalam pelajaran ini, dan beberapa konsep penting diperkenalkan yang anda akan belajar untuk kod kemudian dalam pelajaran seterusnya.
+
+## Linguistik komputasi
+
+Linguistik komputasi adalah bidang penyelidikan dan pembangunan selama beberapa dekad yang mengkaji bagaimana komputer boleh bekerja dengan, dan bahkan memahami, menterjemah, dan berkomunikasi dengan bahasa. Pemprosesan bahasa semulajadi (NLP) adalah bidang berkaitan yang memberi tumpuan kepada bagaimana komputer boleh memproses bahasa 'semulajadi', atau bahasa manusia.
+
+### Contoh - pendiktean telefon
+
+Jika anda pernah mendikte kepada telefon anda daripada menaip atau bertanya kepada pembantu maya soalan, ucapan anda telah ditukar kepada bentuk teks dan kemudian diproses atau *diuraikan* dari bahasa yang anda gunakan. Kata kunci yang dikesan kemudian diproses ke dalam format yang telefon atau pembantu boleh faham dan bertindak balas.
+
+
+> Pemahaman linguistik sebenar adalah sukar! Imej oleh [Jen Looper](https://twitter.com/jenlooper)
+
+### Bagaimana teknologi ini dibuat mungkin?
+
+Ini mungkin kerana seseorang menulis program komputer untuk melakukannya. Beberapa dekad yang lalu, beberapa penulis fiksyen sains meramalkan bahawa orang akan kebanyakannya bercakap dengan komputer mereka, dan komputer akan sentiasa memahami dengan tepat apa yang mereka maksudkan. Malangnya, ia ternyata menjadi masalah yang lebih sukar daripada yang dibayangkan oleh ramai, dan walaupun ia adalah masalah yang lebih difahami hari ini, terdapat cabaran yang ketara dalam mencapai pemprosesan bahasa semulajadi yang 'sempurna' apabila ia berkaitan dengan memahami makna ayat. Ini adalah masalah yang sangat sukar apabila ia berkaitan dengan memahami humor atau mengesan emosi seperti sindiran dalam ayat.
+
+Pada ketika ini, anda mungkin mengingati kelas sekolah di mana guru meliputi bahagian tatabahasa dalam ayat. Di sesetengah negara, pelajar diajar tatabahasa dan linguistik sebagai subjek khusus, tetapi di banyak negara, topik-topik ini dimasukkan sebagai sebahagian daripada pembelajaran bahasa: sama ada bahasa pertama anda di sekolah rendah (belajar membaca dan menulis) dan mungkin bahasa kedua di sekolah menengah. Jangan risau jika anda bukan pakar dalam membezakan kata nama daripada kata kerja atau kata keterangan daripada kata sifat!
+
+Jika anda bergelut dengan perbezaan antara *masa kini mudah* dan *masa kini progresif*, anda tidak bersendirian. Ini adalah perkara yang mencabar bagi ramai orang, bahkan penutur asli bahasa. Berita baiknya adalah bahawa komputer sangat baik dalam menerapkan peraturan formal, dan anda akan belajar untuk menulis kod yang boleh *menguraikan* ayat serta manusia. Cabaran yang lebih besar yang anda akan kaji kemudian ialah memahami *makna*, dan *sentimen*, sesuatu ayat.
+
+## Prasyarat
+
+Untuk pelajaran ini, prasyarat utama adalah dapat membaca dan memahami bahasa pelajaran ini. Tiada masalah matematik atau persamaan untuk diselesaikan. Walaupun pengarang asal menulis pelajaran ini dalam bahasa Inggeris, ia juga diterjemahkan ke dalam bahasa lain, jadi anda mungkin sedang membaca terjemahan. Terdapat contoh di mana beberapa bahasa yang berbeza digunakan (untuk membandingkan peraturan tatabahasa yang berbeza dari bahasa yang berbeza). Ini *tidak* diterjemahkan, tetapi teks penjelasan diterjemahkan, jadi maknanya harus jelas.
+
+Untuk tugas pengekodan, anda akan menggunakan Python dan contoh-contohnya menggunakan Python 3.8.
+
+Dalam bahagian ini, anda akan memerlukan, dan menggunakan:
+
+- **Pemahaman Python 3**. Pemahaman bahasa pengaturcaraan dalam Python 3, pelajaran ini menggunakan input, gelung, pembacaan fail, array.
+- **Visual Studio Code + sambungan**. Kami akan menggunakan Visual Studio Code dan sambungan Python. Anda juga boleh menggunakan IDE Python pilihan anda.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob) adalah perpustakaan pemprosesan teks yang dipermudahkan untuk Python. Ikuti arahan di laman TextBlob untuk memasangnya pada sistem anda (pasang juga korpora, seperti yang ditunjukkan di bawah):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 Tip: Anda boleh menjalankan Python secara langsung dalam persekitaran VS Code. Semak [dokumentasi](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) untuk maklumat lanjut.
+
+## Bercakap dengan mesin
+
+Sejarah mencuba untuk membuat komputer memahami bahasa manusia kembali beberapa dekad, dan salah seorang saintis terawal yang mempertimbangkan pemprosesan bahasa semulajadi adalah *Alan Turing*.
+
+### Ujian 'Turing'
+
+Ketika Turing sedang meneliti *kecerdasan buatan* pada tahun 1950-an, dia mempertimbangkan jika ujian perbualan boleh diberikan kepada seorang manusia dan komputer (melalui surat-menyurat yang ditaip) di mana manusia dalam perbualan itu tidak pasti sama ada mereka sedang berbual dengan manusia lain atau komputer.
+
+Jika, selepas tempoh perbualan tertentu, manusia tidak dapat menentukan bahawa jawapan itu dari komputer atau tidak, maka bolehkah komputer dikatakan *berfikir*?
+
+### Inspirasi - 'permainan tiruan'
+
+Idea untuk ini datang dari permainan pesta yang dipanggil *The Imitation Game* di mana seorang penyiasat berada sendirian dalam bilik dan ditugaskan untuk menentukan siapa daripada dua orang (di bilik lain) adalah lelaki dan siapa wanita. Penyiasat boleh menghantar nota, dan mesti cuba memikirkan soalan yang jawapan bertulisnya mendedahkan jantina orang misteri itu. Sudah tentu, pemain di bilik lain cuba memperdaya penyiasat dengan menjawab soalan secara mengelirukan atau menyesatkan, sambil memberikan penampilan menjawab dengan jujur.
+
+### Membangunkan Eliza
+
+Pada tahun 1960-an, seorang saintis MIT bernama *Joseph Weizenbaum* membangunkan [*Eliza*](https://wikipedia.org/wiki/ELIZA), seorang 'terapis' komputer yang akan menanyakan soalan kepada manusia dan memberikan penampilan memahami jawapan mereka. Walau bagaimanapun, walaupun Eliza boleh menguraikan ayat dan mengenal pasti beberapa struktur tatabahasa dan kata kunci untuk memberikan jawapan yang munasabah, ia tidak boleh dikatakan *memahami* ayat itu. Jika Eliza diberi ayat yang mengikuti format "**I am** sad", ia mungkin menyusun semula dan menggantikan kata-kata dalam ayat itu untuk membentuk jawapan "How long have **you been** sad".
+
+Ini memberikan kesan bahawa Eliza memahami kenyataan itu dan menanyakan soalan susulan, sedangkan sebenarnya ia hanya mengubah kala kata kerja dan menambah beberapa perkataan. Jika Eliza tidak dapat mengenal pasti kata kunci yang ia mempunyai jawapan untuknya, ia akan memberikan jawapan rawak yang boleh digunakan untuk banyak kenyataan yang berbeza. Eliza mudah ditipu; contohnya, jika pengguna menulis "**You are** a bicycle", ia mungkin menjawab dengan "How long have **I been** a bicycle?", bukannya jawapan yang lebih beralasan.
+
+[Berbual dengan Eliza](https://youtu.be/RMK9AphfLco "Berbual dengan Eliza")
+
+> 🎥 Klik imej di atas untuk video tentang program ELIZA asal
+
+> Nota: Anda boleh membaca keterangan asal [Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) yang diterbitkan pada tahun 1966 jika anda mempunyai akaun ACM. Sebagai alternatif, baca tentang Eliza di [wikipedia](https://wikipedia.org/wiki/ELIZA)
+
+## Latihan - mengekod bot perbualan asas
+
+Bot perbualan, seperti Eliza, adalah program yang meminta input pengguna dan kelihatan memahami dan memberi respons dengan bijak. Tidak seperti Eliza, bot kita tidak akan mempunyai beberapa peraturan yang memberikan penampilan mempunyai perbualan pintar. Sebaliknya, bot kita hanya akan mempunyai satu kebolehan, iaitu untuk meneruskan perbualan dengan respons rawak yang mungkin berfungsi dalam hampir mana-mana perbualan remeh.
+
+### Rancangan
+
+Langkah-langkah anda semasa membina bot perbualan:
+
+1. Cetak arahan yang menasihati pengguna cara berinteraksi dengan bot
+2. Mulakan gelung
+ 1. Terima input pengguna
+ 2. Jika pengguna meminta untuk keluar, maka keluar
+ 3. Proses input pengguna dan tentukan respons (dalam kes ini, respons adalah pilihan rawak daripada senarai kemungkinan respons generik)
+ 4. Cetak respons
+3. kembali ke langkah 2
+
+### Membina bot
+
+Mari kita buat bot seterusnya. Kita akan mulakan dengan mendefinisikan beberapa frasa.
+
+1. Buat bot ini sendiri dalam Python dengan respons rawak berikut:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ Berikut adalah beberapa output contoh untuk panduan anda (input pengguna pada baris yang bermula dengan `>`):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ Satu penyelesaian yang mungkin untuk tugas ini adalah [di sini](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)
+
+ ✅ Berhenti dan pertimbangkan
+
+ 1. Adakah anda fikir respons rawak akan 'menipu' seseorang untuk berfikir bahawa bot sebenarnya memahami mereka?
+ 2. Apakah ciri-ciri yang perlu ada pada bot untuk menjadi lebih berkesan?
+ 3. Jika bot benar-benar boleh 'memahami' makna ayat, adakah ia perlu 'mengingati' makna ayat-ayat sebelumnya dalam perbualan juga?
+
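+Sebelum menyemak penyelesaian yang dipautkan di atas, berikut satu lakaran minimum gelung bot tersebut (lakaran ilustratif, bukan kod penyelesaian rasmi):
+
+```python
+import random
+
+random_responses = ["That is quite interesting, please tell me more.",
+                    "I see. Do go on.",
+                    "Why do you say that?",
+                    "Funny weather we've been having, isn't it?",
+                    "Let's change the subject.",
+                    "Did you catch the game last night?"]
+
+print("Hello, I am Marvin, the simple robot.")
+print("You can end this conversation at any time by typing 'bye'")
+print("After typing each answer, press 'enter'")
+print("How are you today?")
+
+while True:
+    user_input = input("> ")
+    if user_input.lower() == "bye":
+        break
+    # pilih satu respons generik secara rawak
+    print(random.choice(random_responses))
+
+print("It was nice talking to you, goodbye!")
+```
+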
+---
+
+## 🚀Cabaran
+
+Pilih salah satu elemen "berhenti dan pertimbangkan" di atas dan sama ada cuba melaksanakannya dalam kod atau tulis penyelesaian di atas kertas menggunakan pseudokod.
+
+Dalam pelajaran seterusnya, anda akan belajar tentang beberapa pendekatan lain untuk menguraikan bahasa semulajadi dan pembelajaran mesin.
+
+## [Kuiz pasca-ceramah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## Kajian Semula & Kajian Sendiri
+
+Lihat rujukan di bawah sebagai peluang bacaan lanjut.
+
+### Rujukan
+
+1. Schubert, Lenhart, "Linguistik Komputasi", *The Stanford Encyclopedia of Philosophy* (Edisi Musim Bunga 2020), Edward N. Zalta (ed.), URL = .
+2. Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## Tugasan
+
+[Cari bot](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/ms/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..e9217d326
--- /dev/null
+++ b/translations/ms/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# Cari Bot
+
+## Arahan
+
+Bot ada di mana-mana. Tugas anda: cari satu dan ambil ia sebagai milik anda! Anda boleh menemui mereka di laman web, dalam aplikasi perbankan, dan di telefon, contohnya apabila anda menghubungi syarikat perkhidmatan kewangan untuk nasihat atau maklumat akaun. Analisis bot tersebut dan lihat jika anda boleh mengelirukannya. Jika anda boleh mengelirukan bot tersebut, mengapa anda fikir itu berlaku? Tulis satu kertas pendek tentang pengalaman anda.
+
+## Rubrik
+
+| Kriteria | Contoh | Memadai | Perlu Penambahbaikan |
+| -------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------- | --------------------- |
+| | Satu kertas penuh ditulis, menerangkan andaian seni bina bot dan menggariskan pengalaman anda dengannya | Kertas tidak lengkap atau tidak cukup diselidik | Tiada kertas diserahkan |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/2-Tasks/README.md b/translations/ms/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..f83d08cad
--- /dev/null
+++ b/translations/ms/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# Tugas dan Teknik Pemprosesan Bahasa Semulajadi yang Biasa
+
+Untuk kebanyakan tugas *pemprosesan bahasa semulajadi*, teks yang hendak diproses mesti dipecahkan, diperiksa, dan hasilnya disimpan atau dirujuk silang dengan peraturan dan set data. Tugas-tugas ini membolehkan pengaturcara untuk mendapatkan _makna_ atau _niat_ atau hanya _kekerapan_ istilah dan kata dalam teks.
+
+## [Kuiz pra-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+Mari kita terokai teknik-teknik biasa yang digunakan dalam memproses teks. Digabungkan dengan pembelajaran mesin, teknik-teknik ini membantu anda menganalisis sejumlah besar teks dengan efisien. Sebelum menerapkan ML kepada tugas-tugas ini, mari kita fahami masalah yang dihadapi oleh pakar NLP.
+
+## Tugas biasa dalam NLP
+
+Terdapat pelbagai cara untuk menganalisis teks yang anda sedang kerjakan. Terdapat tugas-tugas yang boleh anda lakukan dan melalui tugas-tugas ini anda dapat memahami teks dan membuat kesimpulan. Biasanya anda menjalankan tugas-tugas ini secara berurutan.
+
+### Tokenisasi
+
+Mungkin perkara pertama yang perlu dilakukan oleh kebanyakan algoritma NLP adalah memecahkan teks kepada token, atau kata-kata. Walaupun ini kedengaran mudah, keperluan mengambil kira tanda baca serta pembatas kata dan ayat dalam pelbagai bahasa boleh menjadikannya rumit. Anda mungkin perlu menggunakan pelbagai kaedah untuk menentukan sempadan token.
+
+
+> Tokenisasi ayat dari **Pride and Prejudice**. Infografik oleh [Jen Looper](https://twitter.com/jenlooper)
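+
+Sebagai lakaran ringkas (menggunakan TextBlob yang diperkenalkan dalam pelajaran sebelumnya), tokenisasi boleh dicuba seperti berikut:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, "
+                "that a single man in possession of a good fortune, "
+                "must be in want of a wife.")
+print(blob.words)      # token kata (tanda baca digugurkan)
+print(blob.sentences)  # pembahagian ayat
+```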
+
+### Embeddings
+
+[Word embeddings](https://wikipedia.org/wiki/Word_embedding) adalah cara untuk menukar data teks anda secara numerik. Embeddings dilakukan dengan cara supaya kata-kata yang mempunyai makna serupa atau kata-kata yang digunakan bersama berkumpul bersama.
+
+
+> "I have the highest respect for your nerves, they are my old friends." - Word embeddings untuk ayat dalam **Pride and Prejudice**. Infografik oleh [Jen Looper](https://twitter.com/jenlooper)
+
+✅ Cuba [alat menarik ini](https://projector.tensorflow.org/) untuk bereksperimen dengan word embeddings. Mengklik pada satu kata menunjukkan kumpulan kata-kata yang serupa: 'toy' berkumpul dengan 'disney', 'lego', 'playstation', dan 'console'.
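+
+Sebagai lakaran (andaian: pustaka `gensim` dipasang; pustaka ini tidak digunakan dalam pelajaran), embeddings boleh dilatih pada korpus mainan:
+
+```python
+from gensim.models import Word2Vec
+
+# Korpus mainan: senarai ayat yang telah ditokenkan
+sentences = [["i", "love", "my", "cat"],
+             ["i", "love", "my", "dog"],
+             ["dogs", "and", "cats", "are", "pets"]]
+
+model = Word2Vec(sentences, vector_size=50, min_count=1, seed=0)
+print(model.wv.most_similar("cat", topn=3))  # kata terhampir dalam ruang vektor
+```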
+
+### Parsing & Tagging Bahagian Ucapan
+
+Setiap kata yang telah ditokenkan boleh ditandakan sebagai bahagian ucapan - kata nama, kata kerja, atau kata sifat. Ayat `the quick red fox jumped over the lazy brown dog` mungkin ditandakan POS sebagai fox = kata nama, jumped = kata kerja.
+
+
+
+> Parsing ayat dari **Pride and Prejudice**. Infografik oleh [Jen Looper](https://twitter.com/jenlooper)
+
+Parsing adalah mengenali kata-kata yang berkaitan antara satu sama lain dalam satu ayat - sebagai contoh `the quick red fox jumped` adalah urutan kata sifat-kata nama-kata kerja yang berasingan dari urutan `lazy brown dog`.
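+
+Sebagai lakaran, penandaan POS dengan TextBlob:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("The quick red fox jumped over the lazy brown dog")
+print(blob.tags)
+# cth. [('The', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'),
+#       ('jumped', 'VBD'), ...]
+```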
+
+### Kekerapan Kata dan Frasa
+
+Prosedur yang berguna apabila menganalisis sejumlah besar teks adalah membina kamus setiap kata atau frasa yang menarik berserta kekerapan kemunculannya. Frasa `the quick red fox jumped over the lazy brown dog` mempunyai kekerapan 2 untuk perkataan 'the'.
+
+Mari kita lihat contoh teks di mana kita mengira kekerapan kata. Puisi Rudyard Kipling The Winners mengandungi ayat berikut:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+Oleh kerana kekerapan frasa boleh dikira secara tidak sensitif atau sensitif huruf besar mengikut keperluan, frasa `a friend` mempunyai kekerapan 2, `the` mempunyai kekerapan 6, dan `travels` mempunyai kekerapan 2.
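+
+Sebagai lakaran, kiraan itu boleh disahkan dengan `collections.Counter`:
+
+```python
+import re
+from collections import Counter
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# Tokenkan secara tidak sensitif huruf besar dan gugurkan tanda baca
+words = re.findall(r"[a-z']+", verse.lower())
+counts = Counter(words)
+print(counts["the"], counts["friend"], counts["travels"])  # 6 2 2
+```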
+
+### N-grams
+
+Teks boleh dipecahkan kepada urutan kata dengan panjang yang ditetapkan, satu kata (unigram), dua kata (bigrams), tiga kata (trigrams) atau sebarang bilangan kata (n-grams).
+
+Sebagai contoh `the quick red fox jumped over the lazy brown dog` dengan skor n-gram 2 menghasilkan n-grams berikut:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+Ia mungkin lebih mudah untuk membayangkannya sebagai kotak gelongsor di atas ayat. Berikut adalah contoh untuk n-gram 3 kata; n-gram ditebalkan dalam setiap ayat:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> Nilai N-gram 3: Infografik oleh [Jen Looper](https://twitter.com/jenlooper)
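+
+Sebagai lakaran, 'kotak gelongsor' ini mudah ditulis dalam Python:
+
+```python
+def n_grams(text, n):
+    # tokenisasi naif dengan ruang putih
+    tokens = text.split()
+    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
+
+sentence = "the quick red fox jumped over the lazy brown dog"
+print(n_grams(sentence, 2))  # 9 bigram seperti senarai di atas
+print(n_grams(sentence, 3))  # 8 trigram seperti ilustrasi kotak gelongsor
+```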
+
+### Ekstraksi Frasa Kata Nama
+
+Dalam kebanyakan ayat, terdapat kata nama yang menjadi subjek, atau objek ayat. Dalam bahasa Inggeris, ia sering boleh dikenal pasti sebagai mempunyai 'a' atau 'an' atau 'the' sebelum ia. Mengenal pasti subjek atau objek ayat dengan 'mengekstrak frasa kata nama' adalah tugas biasa dalam NLP apabila cuba memahami makna ayat.
+
+✅ Dalam ayat "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun.", bolehkah anda mengenal pasti frasa kata nama?
+
+Dalam ayat `the quick red fox jumped over the lazy brown dog` terdapat 2 frasa kata nama: **quick red fox** dan **lazy brown dog**.
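+
+Sebagai lakaran (menggunakan pengekstrak lalai TextBlob; hasil sebenar mungkin berbeza sedikit mengikut pengekstrak yang dipilih):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.noun_phrases)  # dijangka lebih kurang: ['quick red fox', 'lazy brown dog']
+```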
+
+### Analisis Sentimen
+
+Satu ayat atau teks boleh dianalisis untuk sentimen, atau betapa *positif* atau *negatif* ia. Sentimen diukur dengan *polariti* dan *objektiviti/subjektiviti*. Polariti diukur dari -1.0 hingga 1.0 (negatif ke positif), manakala objektiviti/subjektiviti diukur dari 0.0 (paling objektif) hingga 1.0 (paling subjektif).
+
+✅ Kemudian anda akan belajar bahawa terdapat pelbagai cara untuk menentukan sentimen menggunakan pembelajaran mesin, tetapi satu cara adalah mempunyai senarai kata dan frasa yang dikategorikan sebagai positif atau negatif oleh pakar manusia dan menerapkan model itu kepada teks untuk mengira skor polariti. Bolehkah anda melihat bagaimana ini berfungsi dalam beberapa keadaan dan kurang baik dalam keadaan lain?
+
+### Infleksi
+
+Infleksi membolehkan anda mengambil satu kata dan mendapatkan bentuk tunggal atau jamak kata tersebut.
+
+### Lematisasi
+
+*Lema* adalah akar atau kata kepala untuk satu set kata, contohnya *flew*, *flies*, *flying* mempunyai lema kata kerja *fly*.
+
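+Kedua-dua operasi ini tersedia dalam TextBlob melalui kelas `Word` (lakaran; pengendalian bentuk tak sekata seperti 'geese' bergantung pada peraturan pustaka):
+
+```python
+from textblob import Word
+
+print(Word("cat").pluralize())       # infleksi: 'cats'
+print(Word("geese").singularize())   # 'goose' (bergantung pada peraturan pustaka)
+print(Word("flies").lemmatize("v"))  # lema kata kerja: 'fly'
+```
+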
+Terdapat juga pangkalan data yang berguna untuk penyelidik NLP, terutamanya:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/) adalah pangkalan data kata, sinonim, antonim dan banyak butiran lain untuk setiap kata dalam pelbagai bahasa. Ia sangat berguna apabila cuba membina terjemahan, pemeriksa ejaan, atau alat bahasa dari sebarang jenis.
+
+## Perpustakaan NLP
+
+Nasib baik, anda tidak perlu membina semua teknik ini sendiri, kerana terdapat perpustakaan Python yang sangat baik yang menjadikannya lebih mudah diakses oleh pembangun yang tidak pakar dalam pemprosesan bahasa semulajadi atau pembelajaran mesin. Pelajaran seterusnya termasuk lebih banyak contoh ini, tetapi di sini anda akan belajar beberapa contoh berguna untuk membantu anda dengan tugas seterusnya.
+
+### Latihan - menggunakan pustaka `TextBlob`
+
+Mari kita gunakan pustaka bernama TextBlob kerana ia mengandungi API yang berguna untuk menangani tugas-tugas seperti ini. TextBlob "berdiri di atas bahu gergasi [NLTK](https://nltk.org) dan [pattern](https://github.com/clips/pattern), dan berfungsi baik dengan kedua-duanya." Ia mempunyai sejumlah besar ML yang tertanam dalam API-nya.
+
+> Nota: Panduan [Quick Start](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) yang berguna tersedia untuk TextBlob dan disyorkan untuk pembangun Python yang berpengalaman
+
+Apabila cuba mengenal pasti *frasa kata nama*, TextBlob menawarkan beberapa pilihan pengekstrak untuk mencari frasa kata nama.
+
+1. Lihat `ConllExtractor`.
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > Apa yang berlaku di sini? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) adalah "Pengestrak frasa kata nama yang menggunakan parsing chunk yang dilatih dengan korpus latihan ConLL-2000." ConLL-2000 merujuk kepada Persidangan Pembelajaran Bahasa Semulajadi Komputasi tahun 2000. Setiap tahun persidangan tersebut mengadakan bengkel untuk menangani masalah NLP yang sukar, dan pada tahun 2000 ia adalah chunking kata nama. Model ini dilatih pada Wall Street Journal, dengan "bahagian 15-18 sebagai data latihan (211727 token) dan bahagian 20 sebagai data ujian (47377 token)". Anda boleh melihat prosedur yang digunakan [di sini](https://www.clips.uantwerpen.be/conll2000/chunking/) dan [hasilnya](https://ifarm.nl/erikt/research/np-chunking.html).
+
+### Cabaran - meningkatkan bot anda dengan NLP
+
+Dalam pelajaran sebelumnya anda membina bot Q&A yang sangat mudah. Sekarang, anda akan membuat Marvin lebih simpatik dengan menganalisis input anda untuk sentimen dan mencetak respons yang sesuai dengan sentimen tersebut. Anda juga perlu mengenal pasti `noun_phrase` dan bertanya mengenainya.
+
+Langkah-langkah anda semasa membina bot perbualan yang lebih baik:
+
+1. Cetak arahan yang memberi nasihat kepada pengguna cara berinteraksi dengan bot
+2. Mulakan gelung
+ 1. Terima input pengguna
+ 2. Jika pengguna meminta untuk keluar, maka keluar
+ 3. Proses input pengguna dan tentukan respons sentimen yang sesuai
+ 4. Jika frasa kata nama dikesan dalam input, jadikan ia jamak dan minta input lanjut mengenai topik tersebut (lihat lakaran selepas petikan kod di bawah)
+ 5. Cetak respons
+3. kembali ke langkah 2
+
+Berikut adalah snippet kod untuk menentukan sentimen menggunakan TextBlob. Perhatikan hanya terdapat empat *gradien* respons sentimen (anda boleh mempunyai lebih banyak jika anda suka):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
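+Untuk langkah 4 (frasa kata nama), satu pendekatan yang mungkin ialah (lakaran; bukan kod penyelesaian rasmi):
+
+```python
+from textblob import TextBlob, Word
+from textblob.np_extractors import ConllExtractor
+
+extractor = ConllExtractor()
+blob = TextBlob("I went for a walk and saw a lovely cat", np_extractor=extractor)
+
+response = "Well, that sounds positive. "
+if blob.noun_phrases:
+    # jadikan perkataan terakhir frasa itu jamak untuk soalan susulan
+    words = blob.noun_phrases[0].split()   # cth. ['lovely', 'cat']
+    words[-1] = Word(words[-1]).pluralize()
+    response += "Can you tell me more about " + " ".join(words) + "?"
+print(response)
+```
+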
+Berikut adalah beberapa output contoh untuk membimbing anda (input pengguna adalah pada baris yang bermula dengan >):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+Satu penyelesaian yang mungkin untuk tugas ini adalah [di sini](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)
+
+✅ Semak Pengetahuan
+
+1. Adakah anda fikir respons yang simpatik akan 'menipu' seseorang untuk berfikir bahawa bot sebenarnya memahami mereka?
+2. Adakah mengenal pasti frasa kata nama menjadikan bot lebih 'boleh dipercayai'?
+3. Mengapa mengekstrak 'frasa kata nama' dari ayat merupakan perkara yang berguna untuk dilakukan?
+
+---
+
+Laksanakan bot dalam semak pengetahuan sebelumnya dan uji pada seorang rakan. Bolehkah ia menipu mereka? Bolehkah anda menjadikan bot anda lebih 'boleh dipercayai'?
+
+## 🚀Cabaran
+
+Ambil satu tugas dalam semak pengetahuan sebelumnya dan cuba melaksanakannya. Uji bot pada seorang rakan. Bolehkah ia menipu mereka? Bolehkah anda menjadikan bot anda lebih 'boleh dipercayai'?
+
+## [Kuiz pasca-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## Kajian & Kajian Kendiri
+
+Dalam beberapa pelajaran berikutnya, anda akan belajar lebih lanjut mengenai analisis sentimen. Selidiki teknik menarik ini dalam artikel seperti ini di [KDNuggets](https://www.kdnuggets.com/tag/nlp)
+
+## Tugasan
+
+[Buat bot bercakap balik](assignment.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/2-Tasks/assignment.md b/translations/ms/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..fb144513e
--- /dev/null
+++ b/translations/ms/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# Buat Bot Berbicara Balik
+
+## Arahan
+
+Dalam beberapa pelajaran sebelumnya, anda telah memprogram sebuah bot dasar untuk berbicara. Bot ini memberikan jawapan rawak sehingga anda mengatakan 'bye'. Bolehkah anda membuat jawapan tersebut menjadi kurang rawak, dan mencetuskan jawapan jika anda mengatakan perkara tertentu, seperti 'why' atau 'how'? Fikirkan sedikit bagaimana pembelajaran mesin mungkin membuat jenis kerja ini kurang manual semasa anda memperluas bot anda. Anda boleh menggunakan perpustakaan NLTK atau TextBlob untuk memudahkan tugas anda.
+
+## Rubrik
+
+| Kriteria | Cemerlang | Memadai | Perlu Penambahbaikan |
+| -------- | ---------------------------------------------- | ------------------------------------------------- | ----------------------- |
+| | Fail bot.py baru disediakan dan didokumentasikan | Fail bot baru disediakan tetapi mengandungi bug | Fail tidak disediakan |
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/3-Translation-Sentiment/README.md b/translations/ms/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..317b0de4a
--- /dev/null
+++ b/translations/ms/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# Terjemahan dan analisis sentimen dengan ML
+
+Dalam pelajaran sebelumnya, anda telah belajar bagaimana membangun bot dasar menggunakan `TextBlob`, sebuah perpustakaan yang menyematkan ML di balik layar untuk melakukan tugas NLP dasar seperti ekstraksi frasa kata benda. Tantangan penting lainnya dalam linguistik komputasional adalah terjemahan _tepat_ dari sebuah kalimat dari satu bahasa lisan atau tulisan ke bahasa lain.
+
+## [Kuis pra-kuliah](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+Terjemahan adalah masalah yang sangat sulit karena ada ribuan bahasa dan masing-masing bisa memiliki aturan tata bahasa yang sangat berbeda. Salah satu pendekatan adalah mengubah aturan tata bahasa formal dari satu bahasa, seperti Bahasa Inggris, menjadi struktur yang tidak bergantung pada bahasa, dan kemudian menerjemahkannya dengan mengubah kembali ke bahasa lain. Pendekatan ini berarti anda akan mengambil langkah-langkah berikut:
+
+1. **Identifikasi**. Mengidentifikasi atau menandai kata-kata dalam bahasa input menjadi kata benda, kata kerja, dll.
+2. **Buat terjemahan**. Menghasilkan terjemahan langsung dari setiap kata dalam format bahasa target.
+
+### Contoh kalimat, Inggris ke Irlandia
+
+Dalam 'Bahasa Inggris', kalimat _I feel happy_ terdiri dari tiga kata dalam urutan:
+
+- **subjek** (I)
+- **kata kerja** (feel)
+- **kata sifat** (happy)
+
+Namun, dalam bahasa 'Irlandia', kalimat yang sama memiliki struktur tata bahasa yang sangat berbeda - emosi seperti "*happy*" atau "*sad*" diungkapkan sebagai berada *di atas* anda.
+
+Frasa Bahasa Inggris `I feel happy` dalam bahasa Irlandia akan menjadi `Tá athas orm`. Terjemahan *harfiah* akan menjadi `Happy is upon me`.
+
+Seorang penutur bahasa Irlandia yang menerjemahkan ke Bahasa Inggris akan mengatakan `I feel happy`, bukan `Happy is upon me`, karena mereka memahami makna kalimat tersebut, meskipun kata-kata dan struktur kalimatnya berbeda.
+
+Urutan formal untuk kalimat dalam bahasa Irlandia adalah:
+
+- **kata kerja** (Tá atau is)
+- **kata sifat** (athas, atau happy)
+- **subjek** (orm, atau upon me)
+
+## Terjemahan
+
+Program terjemahan naif mungkin hanya menerjemahkan kata-kata, mengabaikan struktur kalimat.
+
+✅ Jika anda telah belajar bahasa kedua (atau ketiga atau lebih) sebagai orang dewasa, anda mungkin mulai dengan berpikir dalam bahasa asli anda, menerjemahkan konsep kata demi kata dalam kepala anda ke bahasa kedua, dan kemudian mengucapkan terjemahan anda. Ini mirip dengan apa yang dilakukan program komputer terjemahan naif. Penting untuk melewati fase ini untuk mencapai kefasihan!
+
+Terjemahan naif mengarah pada terjemahan buruk (dan kadang-kadang lucu): `I feel happy` diterjemahkan secara harfiah menjadi `Mise bhraitheann athas` dalam bahasa Irlandia. Itu berarti (secara harfiah) `me feel happy` dan bukan kalimat yang valid dalam bahasa Irlandia. Meskipun Bahasa Inggris dan Irlandia adalah bahasa yang digunakan di dua pulau yang berdekatan, mereka adalah bahasa yang sangat berbeda dengan struktur tata bahasa yang berbeda.
+
+> Anda dapat menonton beberapa video tentang tradisi linguistik Irlandia seperti [yang ini](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### Pendekatan pembelajaran mesin
+
+Sejauh ini, anda telah belajar tentang pendekatan aturan formal untuk pemrosesan bahasa alami. Pendekatan lain adalah mengabaikan makna kata-kata, dan _sebaliknya menggunakan pembelajaran mesin untuk mendeteksi pola_. Ini dapat bekerja dalam terjemahan jika anda memiliki banyak teks (sebuah *corpus*) atau teks-teks (*corpora*) dalam bahasa asal dan target.
+
+Misalnya, pertimbangkan kasus *Pride and Prejudice*, novel terkenal berbahasa Inggris yang ditulis oleh Jane Austen pada tahun 1813. Jika anda membaca buku tersebut dalam bahasa Inggris dan terjemahan manusia dari buku tersebut dalam bahasa *Prancis*, anda dapat mendeteksi frasa dalam satu yang diterjemahkan secara _idiomatis_ ke dalam yang lain. Anda akan melakukannya sebentar lagi.
+
+Misalnya, ketika sebuah frasa bahasa Inggris seperti `I have no money` diterjemahkan secara harfiah ke dalam bahasa Prancis, itu mungkin menjadi `Je n'ai pas de monnaie`. "Monnaie" adalah 'false cognate' Prancis yang rumit, karena 'money' dan 'monnaie' tidak sinonim. Terjemahan yang lebih baik yang mungkin dibuat oleh manusia adalah `Je n'ai pas d'argent`, karena lebih baik menyampaikan makna bahwa anda tidak memiliki uang (daripada 'loose change' yang merupakan makna 'monnaie').
+
+
+
+> Gambar oleh [Jen Looper](https://twitter.com/jenlooper)
+
+Jika model ML memiliki cukup banyak terjemahan manusia untuk membangun model, model tersebut dapat meningkatkan akurasi terjemahan dengan mengidentifikasi pola umum dalam teks yang telah diterjemahkan sebelumnya oleh penutur manusia ahli dari kedua bahasa.
+
+### Latihan - terjemahan
+
+Anda dapat menggunakan `TextBlob` untuk menerjemahkan kalimat. Cobalah kalimat pertama yang terkenal dari **Pride and Prejudice**:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` melakukan pekerjaan yang cukup baik dalam terjemahan: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+Dapat dikatakan bahwa terjemahan TextBlob jauh lebih tepat, sebenarnya, daripada terjemahan Prancis tahun 1932 dari buku oleh V. Leconte dan Ch. Pressoir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+Dalam kasus ini, terjemahan yang diinformasikan oleh ML melakukan pekerjaan yang lebih baik daripada penerjemah manusia yang secara tidak perlu menambahkan kata-kata dalam mulut penulis asli untuk 'kejelasan'.
+
+> Apa yang terjadi di sini? dan mengapa TextBlob begitu bagus dalam terjemahan? Nah, di balik layar, itu menggunakan Google translate, sebuah AI canggih yang mampu menganalisis jutaan frasa untuk memprediksi rangkaian terbaik untuk tugas yang sedang dikerjakan. Tidak ada yang manual di sini dan anda memerlukan koneksi internet untuk menggunakan `blob.translate`.
+
+✅ Coba beberapa kalimat lagi. Mana yang lebih baik, terjemahan ML atau terjemahan manusia? Dalam kasus apa?
+
+## Analisis sentimen
+
+Area lain di mana pembelajaran mesin dapat bekerja dengan sangat baik adalah analisis sentimen. Pendekatan non-ML terhadap sentimen adalah mengidentifikasi kata dan frasa yang 'positif' dan 'negatif'. Kemudian, untuk sebuah teks baru, hitung nilai total kata positif, negatif, dan netral untuk mengidentifikasi sentimen keseluruhan.
+
+Pendekatan ini mudah dikelabui, seperti yang mungkin telah anda lihat dalam tugas Marvin - kalimat `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` adalah kalimat dengan sentimen sarkastik dan negatif, tetapi algoritma sederhana mendeteksi 'great', 'wonderful', 'glad' sebagai positif dan 'waste', 'lost' dan 'dark' sebagai negatif. Sentimen keseluruhan dipengaruhi oleh kata-kata yang bertentangan ini.
+
+✅ Berhenti sejenak dan pikirkan tentang bagaimana kita menyampaikan sarkasme sebagai penutur manusia. Intonasi memainkan peran besar. Cobalah mengatakan frasa "Well, that film was awesome" dengan berbagai cara untuk menemukan bagaimana suara anda menyampaikan makna.
+
+### Pendekatan ML
+
+Pendekatan ML adalah dengan mengumpulkan secara manual teks-teks negatif dan positif - tweet, atau ulasan film, atau apa pun di mana manusia telah memberikan skor *dan* pendapat tertulis. Kemudian teknik NLP dapat diterapkan pada pendapat dan skor, sehingga pola muncul (misalnya, ulasan film positif cenderung memiliki frasa 'Oscar worthy' lebih sering daripada ulasan film negatif, atau ulasan restoran positif mengatakan 'gourmet' jauh lebih sering daripada 'disgusting').
+
+> ⚖️ **Contoh**: Jika anda bekerja di kantor seorang politikus dan ada beberapa undang-undang baru yang sedang diperdebatkan, konstituen mungkin menulis ke kantor dengan email mendukung atau email menentang undang-undang baru tersebut. Misalkan anda ditugaskan untuk membaca email-email tersebut dan memilahnya menjadi 2 tumpukan, *untuk* dan *menentang*. Jika ada banyak email, anda mungkin kewalahan mencoba membaca semuanya. Bukankah akan lebih baik jika sebuah bot bisa membaca semuanya untuk anda, memahaminya dan memberi tahu anda di tumpukan mana setiap email harus ditempatkan?
+>
+> Salah satu cara untuk mencapai itu adalah dengan menggunakan Pembelajaran Mesin. Anda akan melatih model dengan sebagian email *menentang* dan sebagian email *untuk*. Model akan cenderung mengaitkan frasa dan kata-kata dengan sisi menentang dan sisi untuk, *tetapi tidak akan memahami konten apa pun*, hanya bahwa kata-kata dan pola tertentu lebih mungkin muncul dalam email *menentang* atau *untuk*. Anda bisa mengujinya dengan beberapa email yang tidak anda gunakan untuk melatih model, dan melihat apakah model tersebut sampai pada kesimpulan yang sama dengan anda. Kemudian, setelah anda puas dengan akurasi model, anda bisa memproses email-email di masa depan tanpa harus membaca masing-masing.
+
+✅ Apakah proses ini terdengar seperti proses yang telah anda gunakan dalam pelajaran sebelumnya?
+
+## Latihan - kalimat sentimental
+
+Sentimen diukur dengan *polaritas* -1 hingga 1, yang berarti -1 adalah sentimen paling negatif, dan 1 adalah yang paling positif. Sentimen juga diukur dengan skor 0 - 1 untuk objektivitas (0) dan subjektivitas (1).
+
+Lihat lagi pada *Pride and Prejudice* karya Jane Austen. Teks tersebut tersedia di sini di [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm). Contoh di bawah ini menunjukkan program pendek yang menganalisis sentimen dari kalimat pertama dan terakhir dari buku tersebut dan menampilkan polaritas sentimen dan skor subjektivitas/objektivitasnya.
+
+Anda harus menggunakan perpustakaan `TextBlob` (dijelaskan di atas) untuk menentukan `sentimen` (anda tidak perlu menulis kalkulator sentimen sendiri) dalam tugas berikut.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+Anda melihat output berikut:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Tantangan - periksa polaritas sentimen
+
+Tugas anda adalah menentukan, menggunakan polaritas sentimen, apakah *Pride and Prejudice* memiliki lebih banyak kalimat yang benar-benar positif daripada yang benar-benar negatif. Untuk tugas ini, anda dapat mengasumsikan bahwa skor polaritas 1 atau -1 adalah benar-benar positif atau negatif.
+
+**Langkah-langkah:**
+
+1. Unduh [salinan Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) dari Project Gutenberg sebagai file .txt. Hapus metadata di awal dan akhir file, hanya menyisakan teks asli
+2. Buka file dalam Python dan ekstrak isinya sebagai string
+3. Buat TextBlob menggunakan string buku
+4. Analisis setiap kalimat dalam buku dalam sebuah loop
+ 1. Jika polaritasnya 1 atau -1, simpan kalimat tersebut dalam array atau daftar pesan positif atau negatif
+5. Pada akhirnya, cetak semua kalimat positif dan kalimat negatif (secara terpisah) dan jumlah masing-masing (lihat sketsa di bawah).
+
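+Sebagai sketsa (nama file `pride.txt` hanyalah asumsi; metadata diasumsikan sudah dihapus):
+
+```python
+from textblob import TextBlob
+
+with open("pride.txt", encoding="utf-8") as f:  # nama file: asumsi
+    book = TextBlob(f.read())
+
+positive, negative = [], []
+for sentence in book.sentences:
+    polarity = sentence.sentiment.polarity
+    if polarity == 1:
+        positive.append(str(sentence))
+    elif polarity == -1:
+        negative.append(str(sentence))
+
+print(len(positive), "kalimat positif,", len(negative), "kalimat negatif")
+```
+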
+Berikut adalah [solusi contoh](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
+
+✅ Pemeriksaan Pengetahuan
+
+1. Sentimen didasarkan pada kata-kata yang digunakan dalam kalimat, tetapi apakah kode *memahami* kata-kata tersebut?
+2. Apakah anda berpikir bahwa polaritas sentimen akurat, atau dengan kata lain, apakah anda *setuju* dengan skor tersebut?
+ 1. Secara khusus, apakah anda setuju atau tidak setuju dengan polaritas **positif** absolut dari kalimat berikut?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. Tiga kalimat berikut diberi skor dengan sentimen positif absolut, tetapi setelah dibaca lebih dekat, mereka bukan kalimat positif. Mengapa analisis sentimen berpikir mereka adalah kalimat positif?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. Apakah anda setuju atau tidak setuju dengan polaritas **negatif** absolut dari kalimat berikut?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ Any aficionado of Jane Austen will understand that she often uses her books to critique the more ridiculous aspects of English Regency society. Elizabeth Bennett, the main character in *Pride and Prejudice*, is a keen social observer (like the author) and her language is often heavily nuanced. Even Mr. Darcy (the love interest in the story) notes Elizabeth's playful and teasing use of language: "I have had the pleasure of your acquaintance long enough to know that you find great enjoyment in occasionally professing opinions which in fact are not your own."
+
+---
+
+## 🚀Challenge
+
+Can you make Marvin even better by extracting other features from the user input?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Review & Self Study
+
+There are many ways to extract sentiment from text. Think of the business applications that might make use of this technique. Think about how it can go awry. Read more about sophisticated enterprise-ready systems that analyze sentiment such as [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Test some of the Pride and Prejudice sentences above and see if it can detect nuance; a hedged sketch follows below.
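+
+As a hedged sketch (not the lesson's own code), here is roughly how you might send one of those sentences to the service with the `azure-ai-textanalytics` package (v5.x); the endpoint and key are placeholders you would replace with values from your own Azure resource:
+
+```python
+from azure.ai.textanalytics import TextAnalyticsClient
+from azure.core.credentials import AzureKeyCredential
+
+# Placeholder credentials - supply your own endpoint and key
+client = TextAnalyticsClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"))
+
+docs = ["I have the greatest dislike in the world to that sort of thing."]
+for result in client.analyze_sentiment(documents=docs):
+    # Overall label plus per-class confidence scores
+    print(result.sentiment, result.confidence_scores)
+```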
+
+## Assignment
+
+[Poetic license](assignment.md)
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/3-Translation-Sentiment/assignment.md b/translations/ms/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..23ec4d46e
--- /dev/null
+++ b/translations/ms/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# Poetic license
+
+## Instructions
+
+In [this notebook](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency) you can find over 500 Emily Dickinson poems previously analyzed for sentiment using Azure text analytics. Using this dataset, analyze it using the techniques described in the lesson. Does the suggested sentiment of a poem match the more sophisticated Azure service's decision? Why or why not, in your opinion? Does anything surprise you? A hedged starter sketch follows below.
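+
+As a starting point, here is a hedged sketch; the file and column names (`emily_dickinson.csv`, `poem_text`, `azure_sentiment`) are hypothetical and must be adjusted to match the actual Kaggle dataset:
+
+```python
+import pandas as pd
+from textblob import TextBlob
+
+# Hypothetical file and column names - adjust to the real dataset
+poems = pd.read_csv("emily_dickinson.csv")
+
+# Score each poem with TextBlob polarity (-1.0 to 1.0)
+poems["textblob_polarity"] = poems["poem_text"].apply(
+    lambda text: TextBlob(text).sentiment.polarity)
+
+# Compare against the Azure sentiment already in the dataset
+print(poems[["azure_sentiment", "textblob_polarity"]].head())
+```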
+
+## Rubric
+
+| Criteria | Exemplary                                                                   | Adequate                                                  | Needs Improvement             |
+| -------- | --------------------------------------------------------------------------- | --------------------------------------------------------- | ----------------------------- |
+|          | A notebook is presented with a solid analysis of the author's sample output | The notebook is incomplete or doesn't perform an analysis | No notebook is presented      |
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/ms/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..a4de20222
--- /dev/null
+++ b/translations/ms/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/ms/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..84c037866
--- /dev/null
+++ b/translations/ms/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/4-Hotel-Reviews-1/README.md b/translations/ms/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..de53dcecb
--- /dev/null
+++ b/translations/ms/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,303 @@
+# Sentiment analysis with hotel reviews - processing the data
+
+In this section you will use the techniques from the previous lessons to do some exploratory data analysis of a large dataset. Once you have a good understanding of the usefulness of the various columns, you will learn:
+
+- how to remove the unnecessary columns
+- how to calculate some new data based on the existing columns
+- how to save the resulting dataset for use in the final challenge
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Introduction
+
+So far you've learned about how text data is quite unlike numerical types of data. If it's text that was written or spoken by a human, it can be analyzed to find patterns and frequencies, sentiment and meaning. This lesson takes you into a real dataset with a real challenge: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**, which includes a [CC0: Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/). It was scraped from Booking.com from public sources. The creator of the dataset was Jiashen Liu.
+
+### Preparation
+
+You will need:
+
+* The ability to run .ipynb notebooks using Python 3
+* pandas
+* NLTK, [which you should install locally](https://www.nltk.org/install.html); a short setup sketch follows this list
+* The dataset which is available on Kaggle [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). It is around 230 MB unzipped. Download it to the root `/data` folder associated with these NLP lessons.
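+
+As a minimal setup sketch (assuming the packages were installed with `pip install nltk pandas`), you can download the NLTK resources used across this pair of hotel-review lessons up front:
+
+```python
+import nltk
+
+# The stop word list and the VADER lexicon are both used in these lessons
+nltk.download("stopwords")
+nltk.download("vader_lexicon")
+```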
+
+## Exploratory data analysis
+
+The challenge assumes that you are building a hotel recommendation bot using sentiment analysis and guest review scores. The dataset you will be using includes reviews of 1493 different hotels in 6 cities.
+
+Using Python, a dataset of hotel reviews, and NLTK's sentiment analysis you could find out:
+
+* What are the most frequently used words and phrases in reviews?
+* Do the official *tags* describing a hotel correlate with review scores (e.g. are reviews more negative for a particular hotel from *Family with young children* than from *Solo traveler*, perhaps indicating it is better for *Solo travelers*?)
+* Do the NLTK sentiment scores 'agree' with the hotel reviewer's numerical score?
+
+#### Dataset
+
+Let's explore the dataset you've downloaded and saved locally. Open the file in an editor like VS Code or even Excel.
+
+The headers in the dataset are as follows:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+Here they are grouped in a way that might be easier to examine:
+##### Hotel columns
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (latitude), `lng` (longitude)
+  * Using *lat* and *lng* you could plot a map with Python showing the hotel locations (perhaps color coded for negative and positive reviews); a minimal plotting sketch follows below
+  * Hotel_Address is not obviously useful to us, and we'll probably replace it with a country for easier sorting & searching
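+
+As a minimal sketch (assuming the CSV has been downloaded to the `/data` folder used later in this lesson), plotting one point per hotel could look like this:
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+
+# One point per hotel, not one per review
+hotels = df.drop_duplicates(subset=["Hotel_Name"])
+plt.scatter(hotels["lng"], hotels["lat"], s=10)
+plt.xlabel("Longitude")
+plt.ylabel("Latitude")
+plt.title("Hotel locations")
+plt.show()
+```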
+
+**Hotel Meta-review columns**
+
+* `Average_Score`
+  * According to the dataset creator, this column is the *Average Score of the hotel, calculated based on the latest comment in the last year*. This seems like an unusual way to calculate the score, but it is the data as scraped, so we may take it at face value for now.
+
+  ✅ Based on the other columns in this data, can you think of another way to calculate the average score?
+
+* `Total_Number_of_Reviews`
+  * The total number of reviews this hotel has received - it is not clear (without writing some code) whether this refers to the reviews in the dataset.
+* `Additional_Number_of_Scoring`
+  * This means a review score was given but no positive or negative review was written by the reviewer
+
+**Review columns**
+
+- `Reviewer_Score`
+  - This is a numerical value with at most 1 decimal place between the min and max values 2.5 and 10
+  - It is not explained why 2.5 is the lowest score possible
+- `Negative_Review`
+  - If a reviewer wrote nothing, this field will have "**No Negative**"
+  - Note that a reviewer may write a positive review in the Negative review column (e.g. "there is nothing bad about this hotel")
+- `Review_Total_Negative_Word_Counts`
+  - Higher negative word counts indicate a lower score (without checking the sentimentality)
+- `Positive_Review`
+  - If a reviewer wrote nothing, this field will have "**No Positive**"
+  - Note that a reviewer may write a negative review in the Positive review column (e.g. "there is nothing good about this hotel at all")
+- `Review_Total_Positive_Word_Counts`
+  - Higher positive word counts indicate a higher score (without checking the sentimentality)
+- `Review_Date` and `days_since_review`
+  - A freshness or staleness measure might be applied to a review (older reviews might not be as accurate as newer ones because hotel management changed, renovations have been done, a pool was added, etc.)
+- `Tags`
+  - These are short descriptors that a reviewer may select to describe the type of guest they were (e.g. solo or family), the type of room they had, the length of stay and how the review was submitted.
+  - Unfortunately, using these tags is problematic; see the section below that discusses their usefulness
+
+**Reviewer columns**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+  - This might be a factor in a recommendation model, for instance, if you could determine that more prolific reviewers with hundreds of reviews were more likely to be negative rather than positive. However, the reviewer of any particular review is not identified with a unique code, and therefore cannot be linked to a set of reviews. There are 30 reviewers with 100 or more reviews, but it's hard to see how this can aid the recommendation model.
+- `Reviewer_Nationality`
+  - Some people might think that certain nationalities are more likely to give a positive or negative review because of a national inclination. Be careful building such anecdotal views into your models. These are national (and sometimes racial) stereotypes, and each reviewer was an individual who wrote a review based on their experience. It may have been filtered through many lenses such as their previous hotel stays, the distance travelled, and their personal temperament. Thinking that their nationality was the reason for a review score is hard to justify.
+
+##### Examples
+
+| Average Score | Total Number of Reviews | Reviewer Score | Negative Review | Positive Review | Tags |
+| ------------- | ----------------------- | -------------- | :-------------- | --------------- | ---- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day with hammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave very early flight and received an appropriate bill A day later the hotel made another charge without my consent in excess of booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place | Business trip Couple Standard Double Room Stayed 2 nights |
+
+As you can see, this guest did not have a happy stay at this hotel. The hotel has a good average score of 7.8 and 1945 reviews, but this reviewer gave it 2.5 and wrote 115 words about how negative their stay was. If they wrote nothing at all in the Positive_Review column, you might surmise there was nothing positive, but alas they wrote 7 words of warning. If we just counted words instead of the meaning, or sentiment, of the words, we might have a skewed view of the reviewer's intent. Strangely, their score of 2.5 is confusing, because if the hotel stay was that bad, why give it any points at all? Investigating the dataset closely, you'll see that the lowest possible score is 2.5, not 0. The highest possible score is 10.
+
+##### Tags
+
+As mentioned above, at first glance, the idea of using `Tags` to categorize the data makes sense. Unfortunately these tags are not standardized, which means that in a given hotel the options might be *Single room*, *Twin room*, and *Double room*, but in the next hotel they are *Deluxe Single Room*, *Classic Queen Room*, and *Executive King Room*. These might be the same things, but there are so many variations that the choice becomes:
+
+1. Attempt to change all terms to a single standard, which is very difficult, because it is not clear what the conversion path would be in each case (e.g. *Classic single room* maps to *Single room* but *Superior Queen Room with Courtyard Garden or City View* is much harder to map)
+
+1. We can take an NLP approach and measure the frequency of certain terms like *Solo*, *Business Traveller*, or *Family with young kids* as they apply to each hotel, and factor that into the recommendation
+
+Tags are usually (but not always) a single field containing a list of 5 to 6 comma separated values aligning to *Type of trip*, *Type of guests*, *Type of room*, *Number of nights*, and *Type of device the review was submitted on*. However, because some reviewers don't fill in each field (they might leave one blank), the values are not always in the same order.
+
+As an example, take *Type of group*. There are 1025 unique possibilities in this field in the `Tags` column, and unfortunately only some of them refer to a group (some are the type of room etc.). If you filter only the ones that mention family, the results contain many *Family room* type results. If you include the term *with*, i.e. count the *Family with* values, the results are better, with over 80,000 of the 515,000 results containing the phrase "Family with young children" or "Family with older children" (a small counting sketch follows below).
+
+This means the tags column is not completely useless to us, but it will take some work to make it useful.
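+
+As a rough sketch of that filtering (assuming `df` is the loaded dataframe), you could count the matching rows like this:
+
+```python
+# Rows whose Tags mention a family group at all
+family_rows = df[df["Tags"].str.contains("Family with", na=False)]
+print(len(family_rows), "of", len(df), "rows mention 'Family with'")
+
+# Split further into the two phrasings used in the dataset
+print(df["Tags"].str.contains("Family with young children", na=False).sum(), "with young children")
+print(df["Tags"].str.contains("Family with older children", na=False).sum(), "with older children")
+```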
+
+##### Average hotel score
+
+There are a number of oddities or discrepancies with the dataset that I can't figure out, but they are illustrated here so you are aware of them when building your models. If you figure it out, please let us know in the discussion section!
+
+The dataset has the following columns relating to the average score and number of reviews:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+The single hotel with the most reviews in this dataset is *Britannia International Hotel Canary Wharf* with 4789 reviews out of 515,000. But if we look at the `Total_Number_of_Reviews` value for this hotel, it is 9086. You might surmise that there are many more scores without reviews, so perhaps we should add in the `Additional_Number_of_Scoring` column value. That value is 2682, and adding it to 4789 gets us 7471, which is still 1615 short of `Total_Number_of_Reviews`.
+
+If you take the `Average_Score` column, you might surmise it is the average of the reviews in the dataset, but the description from Kaggle is "*Average Score of the hotel, calculated based on the latest comment in the last year*". That doesn't seem very useful, but we can calculate our own average based on the review scores in the dataset. Using the same hotel as an example, the average hotel score is given as 7.1 but the calculated score (the average reviewer score *in* the dataset) is 6.8. This is close, but not the same value, and we can only guess that the scores given in the `Additional_Number_of_Scoring` reviews increased the average to 7.1. Unfortunately, with no way to test or prove that assertion, it is difficult to use or trust `Average_Score`, `Additional_Number_of_Scoring` and `Total_Number_of_Reviews` when they are based on, or refer to, data we do not have.
+
+To complicate things further, the hotel with the second highest number of reviews has a calculated average score of 8.12 and the dataset `Average_Score` is 8.1. Is this correct score a coincidence, or is the first hotel a discrepancy?
+
+On the possibility that these hotels might be outliers, and that maybe most of the values tally up (but some do not, for some reason), we will write a short program next to explore the values in the dataset and determine the correct usage (or non-usage) of the values. A minimal sketch of the one-hotel check just described appears below.
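+
+As a minimal sketch (assuming `df` is the loaded dataframe), the one-hotel comparison above can be reproduced like this:
+
+```python
+hotel = "Britannia International Hotel Canary Wharf"
+rows = df[df["Hotel_Name"] == hotel]
+
+counted = len(rows)                                        # reviews actually present in the dataset
+stated = rows["Total_Number_of_Reviews"].iloc[0]           # what the dataset claims the hotel has
+additional = rows["Additional_Number_of_Scoring"].iloc[0]  # score-only submissions
+
+print(f"{hotel}: {counted} counted + {additional} score-only = {counted + additional}, dataset says {stated}")
+```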
+
+> 🚨 A note of caution
+>
+> When working with this dataset you will write code that calculates something from the text without having to read or analyze the text yourself. This is the essence of NLP, interpreting meaning or sentiment without having a human do it. However, it is possible that you will read some of the negative reviews. I would urge you not to, because you don't have to. Some of them are silly, or irrelevant negative hotel reviews, such as "The weather wasn't great", something beyond the control of the hotel, or indeed, anyone. But there is a dark side to some reviews too. Sometimes the negative reviews are racist, sexist, or ageist. This is unfortunate but to be expected in a dataset scraped off a public website. Some reviewers leave reviews that you would find distasteful, uncomfortable, or upsetting. Better to let the code measure the sentiment than read them yourself and be upset. That said, it is a minority that write such things, but they exist all the same.
+
+## Exercise - Data exploration
+### Load the data
+
+That's enough examining the data visually, now you'll write some code and get some answers! This section uses the pandas library. Your very first task is to ensure you can load and read the CSV data. The pandas library has a fast CSV loader, and the result is placed in a dataframe, as in previous lessons. The CSV we are loading has over half a million rows, but only 17 columns. Pandas gives you lots of powerful ways to interact with a dataframe, including the ability to perform operations on every row.
+
+From here on in this lesson, there will be code snippets, some explanations of the code, and some discussion about what the results mean. Use the included _notebook.ipynb_ for your code.
+
+Let's start with loading the data file you will be using:
+
+```python
+# Load the hotel reviews from CSV
+import pandas as pd
+import time
+# importing time so the start and end time can be used to calculate file loading time
+print("Loading data file now, this could take a while depending on file size")
+start = time.time()
+# df is 'DataFrame' - make sure you downloaded the file to the data folder
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```
+
+Now that the data is loaded, we can perform some operations on it. Keep this code at the top of your program for the next part.
+
+## Explore the data
+
+In this case, the data is already *clean*. That means that it is ready to work with, and does not have characters in other languages that might trip up algorithms expecting only English characters.
+
+✅ You might have to work with data that requires some initial processing to format it before applying NLP techniques, but not this time. If you had to, how would you handle non-English characters?
+
+Take a moment to ensure that once the data is loaded, you can explore it with code. It's very easy to want to focus on the `Negative_Review` and `Positive_Review` columns. They are filled with natural text for your NLP algorithms to process. But wait! Before you jump into the NLP and sentiment, you should follow the code below to ascertain if the values given in the dataset match the values you calculate with pandas.
+
+## Dataframe operations
+
+The first task in this lesson is to check if the following assertions are correct by writing some code that examines the data frame (without changing it).
+
+> Like many programming tasks, there are several ways to complete this, but good advice is to do it in the simplest, easiest way you can, especially if it will be easier to understand when you come back to this code in the future. With dataframes, there is a comprehensive API that will often have a way to do what you want efficiently.
+
+Treat the following questions as coding tasks and attempt to answer them without looking at the solution.
+
+1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns)
+2. Calculate the frequency count for reviewer nationalities:
+   1. How many distinct values are there for the column `Reviewer_Nationality` and what are they?
+   2. What reviewer nationality is the most common in the dataset (print country and number of reviews)?
+   3. What are the next top 10 most frequently found nationalities, and their frequency count?
+3. What was the most frequently reviewed hotel for each of the top 10 reviewer nationalities?
+4. How many reviews are there per hotel (frequency count of hotel) in the dataset?
+5. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average.
+6. Do any hotels have the same (rounded to 1 decimal place) `Average_Score` and `Calc_Average_Score`?
+7. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative"
+
+### Code answers
+
+1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns)
+
+   ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+   ```
+
+2. Calculate the frequency count for reviewer nationalities:
+
+   1. How many distinct values are there for the column `Reviewer_Nationality` and what are they?
+   2. What reviewer nationality is the most common in the dataset (print country and number of reviews)?
+
+   ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+   ```
+
+   3. What are the next top 10 most frequently found nationalities, and their frequency count?
+
+   ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+   ```
+
+3. What was the most frequently reviewed hotel for each of the top 10 reviewer nationalities?
+
+   ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+   ```
+
+4. How many reviews are there per hotel (frequency count of hotel) in the dataset?
+
+   ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+   ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   You may notice that the results *counted in the dataset* do not match the value in `Total_Number_of_Reviews`. It is unclear whether this value in the dataset represented the total number of reviews the hotel had, but not all were scraped, or some other calculation. `Total_Number_of_Reviews` is not used in the model because of this unclarity.
+
+5. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average. Print out the columns `Hotel_Name`, `Average_Score`, and `Calc_Average_Score`.
+
+   ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+   ```
+
+   You may also wonder about the `Average_Score` value and why it is sometimes different from the calculated average score. As we can't know why some of the values match but others have a difference, it's safest in this case to use the review scores that we have to calculate the average ourselves. That said, the differences are usually very small; here are the hotels with the greatest deviation between the dataset average and the calculated average:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   With only 1 hotel having a difference of score greater than 1, it means we can probably ignore the difference and use the calculated average score.
+
+6. Calculate and print out how many rows have column `Negative_Review` values of "No Negative"
+
+7. Calculate and print out how many rows have column `Positive_Review` values of "No Positive"
+
+8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative"
+
+   ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+   ```
+
+## Another way
+
+Another way to count the items without lambdas, using sum to count the rows:
+
+```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+```
+
+You may have noticed that there are 127 rows that have both "No Negative" and "No Positive" values for the columns `Negative_Review` and `Positive_Review` respectively. That means that the reviewer gave the hotel a numerical score, but declined to write either a positive or negative review. Luckily this is a small number of rows (127 out of 515738, or 0.02%), so it probably won't skew our model or results in any particular direction, but you might not have expected a dataset of reviews to have rows with no reviews, so it's worth exploring the data to discover rows like this.
+
+Now that you have explored the dataset, in the next lesson you will filter the data and add some sentiment analysis.
+
+---
+## 🚀Challenge
+
+This lesson demonstrates, as we saw in previous lessons, how critically important it is to understand your data and its foibles before performing operations on it. Text-based data, in particular, bears careful scrutiny. Dig through various text-heavy datasets and see if you can discover areas that could introduce bias or skewed sentiment into a model.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Review & Self Study
+
+Take [this Learning Path on NLP](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) to discover tools to try when building speech and text-heavy models.
+
+## Assignment
+
+[NLTK](assignment.md)
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/ms/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..de49623cd
--- /dev/null
+++ b/translations/ms/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instructions
+
+NLTK is a well-known library for use in computational linguistics and NLP. Take this opportunity to read through the '[NLTK book](https://www.nltk.org/book/)' and try out its exercises. In this ungraded assignment, you will get to know this library more deeply.
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..4173b3b96
--- /dev/null
+++ b/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using a machine-based AI translation service. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/5-Hotel-Reviews-2/README.md b/translations/ms/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..535454b55
--- /dev/null
+++ b/translations/ms/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Sentiment analysis with hotel reviews
+
+Now that you have explored the dataset in detail, it's time to filter the columns and then use NLP techniques on the dataset to gain new insights about the hotels.
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Filtering & Sentiment Analysis Operations
+
+As you've probably noticed, the dataset has a few issues. Some columns are filled with useless information, others seem incorrect. If they are correct, it's unclear how they were calculated, and the answers cannot be independently verified by your own calculations.
+
+## Exercise: a bit more data processing
+
+Clean the data just a bit more. Add columns that will be useful later, change the values in other columns, and drop certain columns completely.
+
+1. Initial column processing
+
+   1. Drop `lat` and `lng`
+
+   2. Replace `Hotel_Address` values with the following values (if the address contains the name of the city and the country, change it to just the city and the country).
+
+      These are the only cities and countries in the dataset:
+
+ Amsterdam, Netherlands
+
+ Barcelona, Spain
+
+ London, United Kingdom
+
+ Milan, Italy
+
+ Paris, France
+
+ Vienna, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+   Now you can query country level data:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | Hotel_Address | Hotel_Name |
+ | :--------------------- | :--------: |
+ | Amsterdam, Netherlands | 105 |
+ | Barcelona, Spain | 211 |
+ | London, United Kingdom | 400 |
+ | Milan, Italy | 162 |
+ | Paris, France | 458 |
+ | Vienna, Austria | 158 |
+
+2. Process Hotel Meta-review columns
+
+   1. Drop `Additional_Number_of_Scoring`
+
+   1. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+   1. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+   df.Total_Number_of_Reviews = df.groupby('Hotel_Name')['Reviewer_Score'].transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. Process review columns
+
+   1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+   2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are
+
+   3. Keep `Tags` for now
+
+      - We'll be doing some additional filtering operations on the tags in the next section and then tags will be dropped
+
+4. Process reviewer columns
+
+   1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+   2. Keep `Reviewer_Nationality` (a minimal sketch of the drops in steps 3 and 4 follows below)
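+
+As a minimal sketch of the column drops in steps 3 and 4 (assuming `df` is the dataframe being filtered); note that the lesson itself folds an equivalent drop into the "Save your file" step later, so if you run this here, skip the matching drop there:
+
+```python
+# Drop the review word-count/date columns (step 3) and the reviewer count column (step 4)
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts",
+         "Review_Date", "days_since_review",
+         "Total_Number_of_Reviews_Reviewer_Has_Given"], axis=1, inplace=True)
+```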
+
+### Tag columns
+
+The `Tag` column is problematic as it is a list (in text form) stored in the column. Unfortunately the order and number of sub sections in this column are not always the same. It's hard for a human to identify the correct phrases to be interested in, because there are 515,000 rows, and 1427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines. You can scan the text and find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6762646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful, because you've seen a sample of the tags such as `[' Business trip ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is - but first you need to follow a few steps to ascertain the tags of interest.
+
+### Filtering tags
+
+Remember that the goal of the dataset is to add sentiment and columns that will help you choose the best hotel (for yourself or maybe a client tasking you to make a hotel recommendation bot). You need to ask yourself if the tags are useful or not in the final dataset. Here is one interpretation (if you needed the dataset for other reasons, different tags might stay in/out of the selection):
+
+1. The type of trip is relevant, and that should stay
+2. The type of guest group is important, and that should stay
+3. The type of room, suite, or studio that the guest stayed in is irrelevant (all hotels have basically the same rooms)
+4. The device the review was submitted on is irrelevant
+5. The number of nights the reviewer stayed *could* be relevant if you attributed longer stays with them liking the hotel more, but it's a stretch, and probably irrelevant
+
+In summary, **keep 2 kinds of tags and remove the others**.
+
+First, you don't want to count the tags until they are in a better format, so that means removing the square brackets and quotes. You can do this several ways, but you want the fastest, as it could take a long time to process a lot of data. Luckily, pandas has an easy way to do each of these steps.
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Each tag becomes something like: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 columns, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to our advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted into the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column. Printing that out, you'll see there were 2428 unique tags (a sketch of this approach follows the sample below). Here is a small sample:
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
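+As a minimal sketch of that temporary-column approach (assuming `df.Tags` has already been cleaned into comma-separated strings as above):
+
+```python
+# Split each Tags string into its own columns (up to the widest row)
+tag_columns = df["Tags"].str.split(",", expand=True)
+
+# Stack the columns into one long Series, trim whitespace, and count phrases
+all_tags = tag_columns.stack().str.strip()
+tag_counts = all_tags.value_counts()
+
+print(len(tag_counts), "unique tags")
+print(tag_counts.head(10))
+```
+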
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be a smart thing to remove them before counting phrase occurrence, but it is such a fast operation you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1; it reduces the total number of tags to be considered slightly. Note you do not remove them from the dataset, you just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There are a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration.
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is the same as `Group` more or less, and that would be fair to combine the two as above. The code for identifying the correct tags is [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1, if not, add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business vs leisure, or to bring a pet to, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### Save your file
+
+Finally, save the dataset as it is now with a new name.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Sentiment Analysis Operations
+
+In this final section, you will apply sentiment analysis to the review columns and save the results in a dataset.
+
+## Exercise: load and save the filtered data
+
+Note that now you are loading the filtered dataset that was saved in the previous section, **not** the original dataset.
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Removing stop words
+
+If you were to run Sentiment Analysis on the Negative and Positive review columns, it could take a long time. Tested on a powerful test laptop with a fast CPU, it took 12 - 14 minutes depending on which sentiment library was used. That's a (relatively) long time, so it's worth investigating whether it can be sped up.
+
+Removing stop words, or common English words that do not change the sentiment of a sentence, is the first step. By removing them, the sentiment analysis should run faster, but not be less accurate (as the stop words do not affect the sentiment, but they do slow the analysis down).
+
+The longest negative review was 395 words, but after removing the stop words, it is 195 words.
+
+Removing the stop words is also a fast operation; removing the stop words from 2 review columns over 515,000 rows took 3.3 seconds on the test device. It could take slightly more or less time for you depending on your device's CPU speed, RAM, whether you have an SSD or not, and some other factors. The relative shortness of the operation means that if it improves the sentiment analysis time, then it is worth doing.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+```
+
+### Performing sentiment analysis
+
+Now you should calculate the sentiment analysis for both the negative and positive review columns, and store the result in 2 new columns. The test of the sentiment will be to compare it to the reviewer's score for the same review. For instance, if the sentiment thinks the negative review had a sentiment of 1 (extremely positive sentiment) and the positive review sentiment was 1, but the reviewer gave the hotel the lowest score possible, then either the review text doesn't match the score, or the sentiment analyser could not recognize the sentiment correctly. You should expect some sentiment scores to be completely wrong, and often that will be explainable, e.g. the review could be extremely sarcastic "Of course I LOVED sleeping in a room with no heating" and the sentiment analyser thinks that's positive sentiment, even though a human reading it would know it was sarcasm.
+
+NLTK supplies different sentiment analyzers to learn with, and you can substitute them and see if the sentiment is more or less accurate. The VADER sentiment analysis is used here.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
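+
+As a quick spot check (a hypothetical example, not part of the lesson's notebook), you can call the function on single strings first; the compound score ranges from -1 (most negative) to 1 (most positive):
+
+```python
+# Hypothetical reviews, purely to sanity-check calc_sentiment
+print(calc_sentiment("The room was dirty and the staff were rude"))  # negative, < 0
+print(calc_sentiment("Lovely location and very friendly staff"))     # positive, > 0
+print(calc_sentiment("No Negative"))                                 # returns 0
+```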
+
+Later in your program, when you are ready to calculate sentiment, you can apply it to each review as follows:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+This takes approximately 120 seconds on my computer, but it will vary on each computer. If you want to print out the results and see whether the sentiment matches the review:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+The very last thing to do with the file before using it in the challenge is to save it! You should also consider reordering all your new columns so they are easy to work with (for a human, it's a cosmetic change).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+You should run the entire code for [the analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) (after you've run [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+
+To review, the steps are:
+
+1. The original dataset file **Hotel_Reviews.csv** was explored in the previous lesson with [the explorer notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb), resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb), resulting in **Hotel_Reviews_NLP.csv**
+4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
+
+### Conclusion
+
+When you started, you had a dataset with columns and data, but not all of it could be verified or used. You've explored the data, filtered out what you don't need, converted tags into something useful, calculated your own averages, added some sentiment columns and, hopefully, learned some interesting things about processing natural text.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Challenge
+
+Now that you've analyzed your dataset for sentiment, see if you can use the strategies you've learned in this curriculum (clustering, perhaps?) to determine patterns around sentiment.
+
+## Review & Self Study
+
+Take [this Learn module](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) to learn more and to use different tools to explore sentiment in text.
+
+## Assignment
+
+[Try a different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/ms/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..d2734f220
--- /dev/null
+++ b/translations/ms/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Try a different dataset
+
+## Instructions
+
+Now that you've learned about using NLTK to assign sentiment to text, try a different dataset. You'll probably need to do some data processing around it, so create a notebook and document your thought process. What do you discover?
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                          | Adequate                                  | Needs Improvement       |
+| -------- | ------------------------------------------------------------------------------------------------------------------ | ----------------------------------------- | ----------------------- |
+|          | A complete notebook and dataset are presented with well-documented cells explaining how the sentiment is assigned | The notebook is missing good explanations | The notebook is flawed |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..84c037866
--- /dev/null
+++ b/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..6e0249da4
--- /dev/null
+++ b/translations/ms/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/README.md b/translations/ms/6-NLP/README.md
new file mode 100644
index 000000000..4523ca0fd
--- /dev/null
+++ b/translations/ms/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Getting started with natural language processing
+
+Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written -- referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics. The whole field is directed at helping machines understand and process human language. This can then be used to perform tasks like spell check or machine translation. It has a variety of real-world applications in a number of fields, including medical research, search engines and business intelligence.
+
+## Regional topic: European languages and literature and romantic hotels of Europe ❤️
+
+In this section of the curriculum, you will be introduced to one of the most widespread uses of machine learning: natural language processing (NLP). Derived from computational linguistics, this category of artificial intelligence is the bridge between humans and machines via voice or textual communication.
+
+In these lessons we'll learn the basics of NLP by building small conversational bots to learn how machine learning aids in making these conversations more and more 'smart'. You'll travel back in time, chatting with Elizabeth Bennett and Mr. Darcy from Jane Austen's classic novel, **Pride and Prejudice**, published in 1813. Then, you'll further your knowledge by learning about sentiment analysis via hotel reviews in Europe.
+
+
+> Photo by Elaine Howlin on Unsplash
+
+## Lessons
+
+1. [Introduction to natural language processing](1-Introduction-to-NLP/README.md)
+2. [Common NLP tasks and techniques](2-Tasks/README.md)
+3. [Translation and sentiment analysis with machine learning](3-Translation-Sentiment/README.md)
+4. [Preparing your data](4-Hotel-Reviews-1/README.md)
+5. [NLTK for Sentiment Analysis](5-Hotel-Reviews-2/README.md)
+
+## Credits
+
+These natural language processing lessons were written with ☕ by [Stephen Howell](https://twitter.com/Howell_MSFT)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/6-NLP/data/README.md b/translations/ms/6-NLP/data/README.md
new file mode 100644
index 000000000..4173b3b96
--- /dev/null
+++ b/translations/ms/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/1-Introduction/README.md b/translations/ms/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..b1d34b59f
--- /dev/null
+++ b/translations/ms/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Introduction to time series forecasting
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this lesson and the following one, you will learn a bit about time series forecasting, an interesting and valuable part of a ML scientist's repertoire that is a bit less known than other topics. Time series forecasting is a sort of 'crystal ball': based on past performance of a variable such as price, you can predict its future potential value.
+
+[](https://youtu.be/cBojo1hsHiI "Introduction to time series forecasting")
+
+> 🎥 Click the image above for a video about time series forecasting
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+It's a useful and interesting field with real value to business, given its direct application to problems of pricing, inventory, and supply chain issues. While deep learning techniques have started to be used to gain more insights to better predict future performance, time series forecasting remains a field greatly informed by classic ML techniques.
+
+> Penn State's useful time series curriculum can be found [here](https://online.stat.psu.edu/stat510/lesson/1)
+
+## Introduction
+
+Suppose you maintain an array of smart parking meters that provide data about how often they are used and for how long over time.
+
+> What if you could predict, based on the meter's past performance, its future value according to the laws of supply and demand?
+
+Accurately predicting when to act so as to achieve your goal is a challenge that could be tackled by time series forecasting. It wouldn't make folks happy to be charged more in busy times when they're looking for a parking spot, but it would be a sure way to generate revenue to clean the streets!
+
+Let's explore some of the types of time series algorithms and start a notebook to clean and prepare some data. The data you will analyze is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. Given the historical patterns of electricity load and temperature, you can predict future values of electricity load.
+
+In this example, you'll learn how to forecast one time step ahead, using historical load data only. Before starting, however, it's useful to understand what's going on behind the scenes.
+
+## Some definitions
+
+When encountering the term 'time series' you need to understand its use in several different contexts.
+
+🎓 **Time series**
+
+In mathematics, "a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time." An example of a time series is the daily closing value of the [Dow Jones Industrial Average](https://wikipedia.org/wiki/Time_series). The use of time series plots and statistical modeling is frequently encountered in signal processing, weather forecasting, earthquake prediction, and other fields where events occur and data points can be plotted over time.
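+
+As a minimal illustration (with hypothetical values, not data from this lesson), a time series in pandas is simply values indexed in time order:
+
+```python
+import pandas as pd
+
+# Hypothetical daily closing values, indexed by date
+closes = pd.Series(
+    [338.1, 339.5, 337.2, 340.8],
+    index=pd.date_range("2024-01-01", periods=4, freq="D"),
+)
+
+# Time-based operations come for free, e.g. resampling to 2-day means
+print(closes.resample("2D").mean())
+```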
+
+🎓 **Time series analysis**
+
+Time series analysis is the analysis of the above mentioned time series data. Time series data can take distinct forms, including 'interrupted time series' which detect patterns in a time series' evolution before and after an interrupting event. The type of analysis needed for the time series depends on the nature of the data. Time series data itself can take the form of series of numbers or characters.
+
+The analysis to be performed uses a variety of methods, including frequency-domain and time-domain, linear and nonlinear, and more. [Learn more](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm) about the many ways to analyze this type of data.
+
+🎓 **Time series forecasting**
+
+Time series forecasting is the use of a model to predict future values based on patterns displayed by previously gathered data as it occurred in the past. While it is possible to use regression models to explore time series data, with time indices as x variables on a plot, such data is best analyzed using special types of models.
+
+Time series data is a list of ordered observations, unlike data that can be analyzed by linear regression. The most common model is ARIMA, an acronym that stands for "Autoregressive Integrated Moving Average".
+
+[ARIMA models](https://online.stat.psu.edu/stat510/lesson/1/1.1) "relate the present value of a series to past values and past prediction errors." They are most appropriate for analyzing time-domain data, where data is ordered over time.
+
+> There are several types of ARIMA models, which you can learn about [here](https://people.duke.edu/~rnau/411arim.htm) and which you will touch on in the next lesson.
+
+In the next lesson, you will build an ARIMA model using [Univariate Time Series](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm), which focuses on one variable that changes its value over time. An example of this type of data is [this dataset](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm) that records the monthly CO2 concentration at the Mauna Loa Observatory:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Identify the variable that changes over time in this dataset
+
+## Time series data characteristics to consider
+
+When looking at time series data, you might notice that it has [certain characteristics](https://online.stat.psu.edu/stat510/lesson/1/1.1) that you need to take into account and mitigate to better understand its patterns. If you consider time series data as potentially providing a 'signal' that you want to analyze, these characteristics can be thought of as 'noise'. You often will need to reduce this 'noise' by offsetting some of these characteristics using statistical techniques.
+
+Here are some concepts you should know to be able to work with time series:
+
+🎓 **Trends**
+
+Trends are defined as measurable increases and decreases over time. [Read more](https://machinelearningmastery.com/time-series-trends-in-python). In the context of time series, it's about how to use and, if necessary, remove trends from your time series.
+
+🎓 **[Seasonality](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Seasonality is defined as periodic fluctuations, such as holiday rushes that might affect sales, for example. [Take a look](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm) at how different types of plots display seasonality in data.
+
+🎓 **Outliers**
+
+Outliers are far away from the standard data variance.
+
+🎓 **Long-run cycle**
+
+Independent of seasonality, data might display a long-run cycle such as an economic downturn that lasts longer than a year.
+
+🎓 **Constant variance**
+
+Over time, some data display constant fluctuations, such as energy usage per day and night.
+
+🎓 **Abrupt changes**
+
+The data might display an abrupt change that might need further analysis. The abrupt shuttering of businesses due to COVID, for example, caused changes in data.
+
+✅ Here is a [sample time series plot](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python) showing daily in-game currency spent over a few years. Can you identify any of the characteristics listed above in this data?
+
+
+
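+To make these characteristics concrete, here is a minimal sketch, using a synthetic series rather than this lesson's dataset, that splits a series into trend, seasonal, and residual parts with `seasonal_decompose` from `statsmodels`:
+
+```python
+import numpy as np
+import pandas as pd
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# Synthetic hourly series: slow upward trend + daily cycle + noise
+idx = pd.date_range("2014-01-01", periods=24 * 14, freq="H")
+rng = np.random.default_rng(0)
+values = (
+    np.linspace(100, 110, len(idx))           # trend
+    + 10 * np.sin(2 * np.pi * idx.hour / 24)  # seasonality (24-hour period)
+    + rng.normal(0, 1, len(idx))              # noise
+)
+series = pd.Series(values, index=idx)
+
+# Decompose and plot the trend, seasonal, and residual components
+result = seasonal_decompose(series, model="additive", period=24)
+result.plot()
+```
+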
+## Exercise - getting started with power usage data
+
+Let's get started creating a time series model to predict future power usage given past usage.
+
+> The data in this example is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. In the `working` folder of this lesson, open the _notebook.ipynb_ file. Start by adding libraries that will help you load and visualize the data
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+ Note, you are using files from the included `common` folder which set up your environment and handle downloading the data.
+
+2. Next, examine the data as a dataframe calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ You can see that there are two columns representing date and load:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Now, plot the data calling `plot()`:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Now, plot the first week of July 2014, by providing it as input to `energy` in the `[from date]: [to date]` pattern:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Beautiful plots! Take a look at these plots and see if you can determine any of the characteristics listed above. What can we surmise by visualizing the data?
+
+In the next lesson, you will create an ARIMA model to make some forecasts.
+
+---
+
+## 🚀Challenge
+
+Make a list of all the industries and areas of inquiry you can think of that would benefit from time series forecasting. Can you think of an application of these techniques in the arts? In Econometrics? Ecology? Retail? Industry? Finance? Where else?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Review & Self Study
+
+Although we won't cover them here, neural networks are sometimes used to enhance classic methods of time series forecasting. Read more about them [in this article](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## Assignment
+
+[Visualize some more time series](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/1-Introduction/assignment.md b/translations/ms/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..8bcfec973
--- /dev/null
+++ b/translations/ms/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Visualize Some More Time Series
+
+## Instructions
+
+You've begun to learn about Time Series Forecasting by looking at the type of data that requires this special modeling. You've visualized some data around energy. Now, look around for some other data that would benefit from Time Series Forecasting. Find three examples (try [Kaggle](https://kaggle.com) and [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) and create a notebook to visualize them. Note any special characteristics they have (seasonality, abrupt changes, or other trends) in the notebook.
+
+## Rubric
+
+| Criteria | Exemplary                                                   | Adequate                                             | Needs Improvement                                                                              |
+| -------- | ----------------------------------------------------------- | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
+|          | Three datasets are plotted and explained in a notebook      | Two datasets are plotted and explained in a notebook  | Few datasets are plotted or explained in a notebook or the data presented is insufficient       |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/ms/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/ms/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..d554defb7
--- /dev/null
+++ b/translations/ms/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/2-ARIMA/README.md b/translations/ms/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..de5b8df54
--- /dev/null
+++ b/translations/ms/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,396 @@
+# Time series forecasting with ARIMA
+
+In the previous lesson, you learned a bit about time series forecasting and loaded a dataset showing the fluctuations of electrical load over a time period.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Click the image above for a video: A brief introduction to ARIMA models. The example is done in R, but the concepts are universal.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average). ARIMA models are especially suited to fit data that shows [non-stationarity](https://wikipedia.org/wiki/Stationary_process).
+
+## General concepts
+
+To be able to work with ARIMA, there are some concepts you need to know about:
+
+- 🎓 **Stationarity**. From a statistical context, stationarity refers to data whose distribution does not change when shifted in time. Non-stationary data, then, shows fluctuations due to trends that must be transformed to be analyzed. Seasonality, for example, can introduce fluctuations in data and can be eliminated by a process of 'seasonal-differencing'.
+
+- 🎓 **[Differencing](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. Differencing data, again from a statistical context, refers to the process of transforming non-stationary data to make it stationary by removing its non-constant trend. "Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series." [Paper by Shixiong et al](https://arxiv.org/abs/1904.07632). A minimal code sketch follows below.
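+
+As a small sketch of differencing (on a synthetic trending series, not this lesson's data), `Series.diff()` performs the transformation and the Augmented Dickey-Fuller test from `statsmodels` gives a rough stationarity check:
+
+```python
+import numpy as np
+import pandas as pd
+from statsmodels.tsa.stattools import adfuller
+
+rng = np.random.default_rng(0)
+# A linear trend plus noise: clearly non-stationary
+series = pd.Series(0.5 * np.arange(200) + rng.normal(0, 1, 200))
+
+first_diff = series.diff().dropna()  # first-order differencing removes the trend
+# For seasonal differencing of hourly data with a daily cycle: series.diff(24)
+
+# ADF test: a p-value below ~0.05 suggests the series is stationary
+print("p-value before differencing:", adfuller(series)[1])
+print("p-value after differencing: ", adfuller(first_diff)[1])
+```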
+
+## ARIMA in the context of time series
+
+Let's unpack the parts of ARIMA to better understand how it helps us model time series and make predictions against it.
+
+- **AR - for AutoRegressive**. Autoregressive models, as the name implies, look 'back' in time to analyze previous values in your data and make assumptions about them. These previous values are called 'lags'. An example would be data that shows monthly sales of pencils. Each month's sales total would be considered an 'evolving variable' in the dataset. This model is built as the "evolving variable of interest is regressed on its own lagged (i.e., prior) values." [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - for Integrated**. As opposed to the similar 'ARMA' models, the 'I' in ARIMA refers to its *[integrated](https://wikipedia.org/wiki/Order_of_integration)* aspect. The data is 'integrated' when differencing steps are applied so as to eliminate non-stationarity.
+
+- **MA - for Moving Average**. The [moving-average](https://wikipedia.org/wiki/Moving-average_model) aspect of this model refers to the output variable that is determined by observing the current and past values of lags.
+
+Bottom line: ARIMA is used to make a model fit the special form of time series data as closely as possible.
+
+## Exercise - build an ARIMA model
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) file.
+
+1. Run the notebook to load the `statsmodels` Python library; you will need this for ARIMA models.
+
+1. Load necessary libraries
+
+1. Now, load up several more libraries useful for plotting data:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Plot all the available energy data from January 2012 to December 2014. There should be no surprises as we saw this data in the last lesson:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ Now, let's build a model!
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. You'll train your model on the train set. As usual, after the model has finished training, you'll evaluate its accuracy using the test set. You need to ensure that the test set covers a later period in time from the training set to ensure that the model does not gain information from future time periods.
+
+1. Allocate the period from November 1 to December 29, 2014 to the training set (matching the dates in the code below). The test set will cover the final two days, December 30 to 31, 2014:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ Since this data reflects the daily consumption of energy, there is a strong seasonal pattern, but the consumption is most similar to the consumption in more recent days.
+
+1. Visualize the differences:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Therefore, using a relatively small window of time for training the data should be sufficient.
+
+ > Note: Since the function we use to fit the ARIMA model uses in-sample validation during fitting, we will omit validation data.
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale it to ensure the data is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set and only the needed column 'load' plus the date:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ You can see the shape of the data:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. Scale the data to be in the range (0, 1).
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Visualize the original vs. scaled data:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > The original data
+
+ 
+
+ > The scaled data
+
+1. Now that you have calibrated the scaled data, you can scale the test data:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### Implement ARIMA
+
+It's time to implement ARIMA! You'll now use the `statsmodels` library that you installed earlier.
+
+Now you need to follow several steps:
+
+ 1. Define the model by calling `SARIMAX()` and passing in the model parameters: p, d, and q parameters, and P, D, and Q parameters.
+ 2. Prepare the model for the training data by calling the `fit()` function.
+ 3. Make predictions calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+`p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+`d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+`q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect - which this one does - we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q`, which describe the same associations as `p`, `d`, and `q` but correspond to the seasonal components of the model.
+
+1. Start by setting your preferred horizon value. Let's try 3 hours:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+ Selecting the best values for an ARIMA model's parameters can be challenging as it's somewhat subjective and time intensive. You might consider using an `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html); see the sketch after this exercise.
+
+1. For the moment, try some manual selections to find a good model.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ A table of results is printed.
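+
+As mentioned above, parameter selection can also be automated. This sketch assumes the `pmdarima` package (the maintained successor of the `pyramid` library mentioned earlier); it is not part of this lesson's notebook:
+
+```python
+# pip install pmdarima
+import pmdarima as pm
+
+# Search for (p, d, q) and seasonal (P, D, Q, 24) orders on the scaled training data
+auto_model = pm.auto_arima(train['load'], seasonal=True, m=24,
+                           suppress_warnings=True, trace=True)
+print(auto_model.order, auto_model.seasonal_order)
+```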
+
+You've built your first model! Now we need to find a way to evaluate it.
+
+### Evaluate your model
+
+To evaluate your model, you can perform the so-called `walk forward` validation. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step.
+
+Starting at the beginning of the time series using this technique, train the model on the train data set. Then make a prediction on the next time step. The prediction is evaluated against the known value. The training set is then expanded to include the known value and the process is repeated.
+
+> Note: You should keep the training set window fixed for more efficient training, so that every time you add a new observation to the training set, you remove the observation from the beginning of the set.
+
+This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computational cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale.
+
+Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
+
+1. First, create a test data point for each HORIZON step.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ | | | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ The data is shifted horizontally according to its horizon point.
+
+1. Make predictions on your test data using this sliding window approach, in a loop the size of the test data length:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ You can watch the training occurring:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Compare the predictions to the actual load:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ Output
+ | | | timestamp | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+ Observe the hourly data's prediction compared to the actual load. How accurate is it?
+
+### Check model accuracy
+
+Check the accuracy of your model by testing its mean absolute percentage error (MAPE) over all the predictions.
+
+> **🧮 Show me the math**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) is used to show prediction accuracy as a ratio defined by the above formula. The difference between actual<sub>t</sub> and predicted<sub>t</sub> is divided by actual<sub>t</sub>. "The absolute value in this calculation is summed for every forecasted point in time and divided by the number of fitted points n." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
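+>
+> Written out: MAPE = (1/n) · Σ<sub>t=1..n</sub> |actual<sub>t</sub> − predicted<sub>t</sub>| / actual<sub>t</sub>, usually multiplied by 100 to give a percentage.
+
+The `mape` helper imported from `common.utils` is used in the steps below; its source is not shown in this lesson, but a minimal equivalent (an assumption for illustration, not the repo's exact code) that returns the error as a fraction would be:
+
+```python
+import numpy as np
+
+def mape(predictions, actuals):
+    """Mean absolute percentage error, as a fraction (multiply by 100 for %)."""
+    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
+    return np.mean(np.abs((predictions - actuals) / actuals))
+```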
+
+1. Express the equation in code:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Calculate the one step MAPE:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+ One step forecast MAPE: 0.5570581332313952 %
+
+1. Print the multi-step forecast MAPE:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+ A nice low number is best: consider that a forecast with a MAPE of 10 is off by 10%.
+
+1. But, as always, it's easier to see this kind of accuracy measurement visually, so let's plot it:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 A very nice plot, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+Dig into the ways to test the accuracy of a Time Series Model. We touch on MAPE in this lesson, but are there other methods you could use? Research them and annotate them. A helpful document can be found [here](https://otexts.com/fpp2/accuracy.html)
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Review & Self Study
+
+This lesson touches on only the basics of Time Series Forecasting with ARIMA. Take some time to deepen your knowledge by digging into [this repository](https://microsoft.github.io/forecasting/) and its various model types to learn other ways to build Time Series models.
+
+## Assignment
+
+[A new ARIMA model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/2-ARIMA/assignment.md b/translations/ms/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..5319365b8
--- /dev/null
+++ b/translations/ms/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# A new ARIMA model
+
+## Instructions
+
+Now that you have built an ARIMA model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using MAPE.
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                             | Adequate                                                  | Needs Improvement                   |
+| -------- | --------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ------------------------------------ |
+|          | A notebook is presented with a new ARIMA model built, tested and explained with visualizations and accuracy stated    | The notebook presented is not annotated or contains bugs  | An incomplete notebook is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/ms/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..1ce7ed17d
--- /dev/null
+++ b/translations/ms/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/ms/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..88a76427d
--- /dev/null
+++ b/translations/ms/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/3-SVR/README.md b/translations/ms/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..819b96884
--- /dev/null
+++ b/translations/ms/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,389 @@
+# Time Series Forecasting with Support Vector Regressor
+
+In the previous lesson, you learned how to use the ARIMA model to make time series predictions. Now you'll be looking at the Support Vector Regressor model, which is a regressor model used to predict continuous data.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine) for regression, or **SVR: Support Vector Regressor**.
+
+### SVR in the context of time series [^1]
+
+Before understanding the importance of SVR in time series prediction, here are some of the important concepts you need to know:
+
+- **Regression:** A supervised learning technique to predict continuous values from a given set of inputs. The idea is to fit a curve (or line) in the feature space that passes close to the maximum number of data points. [Click here](https://en.wikipedia.org/wiki/Regression_analysis) for more information.
+- **Support Vector Machine (SVM):** A type of supervised machine learning model used for classification, regression and outlier detection. The model is a hyperplane in the feature space, which in the case of classification acts as a boundary, and in the case of regression acts as the best-fit line. In SVM, a Kernel function is generally used to transform the dataset to a space of a higher number of dimensions, so that the data can be easily separable. [Click here](https://en.wikipedia.org/wiki/Support-vector_machine) for more information on SVMs.
+- **Support Vector Regressor (SVR):** A type of SVM used to find the best-fit line (which in the case of SVM is a hyperplane) that passes close to the maximum number of data points.
+
+### Why SVR? [^1]
+
+In the last lesson you learned about ARIMA, which is a very successful statistical linear method to forecast time series data. However, in many cases, time series data exhibit *non-linearity*, which cannot be mapped by linear models. In such cases, the ability of SVM to account for non-linearity in the data for regression tasks makes SVR successful in time series forecasting.
+
+## Exercise - build an SVR model
+
+The first steps for data preparation are the same as those of the previous lesson on [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA).
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) file.[^2]
+
+1. Run the notebook and import the necessary libraries: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Plot all the available energy data from January 2012 to December 2014: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Now, let's build our SVR model.
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. Then you'll reshape the data to create a time-step based dataset which will be needed for the SVR. You'll train your model on the train set. After the model has finished training, you'll evaluate its accuracy on the training set, the testing set and then the full dataset to see the overall performance. You need to ensure that the test set covers a later period in time from the training set to ensure that the model does not gain information from future time periods [^2] (a situation known as *overfitting*).
+
+1. Allocate the period from November 1 to December 29, 2014 to the training set (matching the dates in the code below). The test set will cover the final two days, December 30 to 31, 2014: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Visualize the differences: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods and columns you need, and scale it to ensure the data is projected in the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set and only the needed column 'load' plus the date: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Scale the train data to be in the range (0, 1): [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. Now, scale the test data: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Create data with time-steps [^1]
+
+For the SVR, you transform the input data to be of the form `[batch, timesteps]`. So, you reshape the existing `train_data` and `test_data` such that there is a new dimension which refers to the timesteps.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+For this example, we take `timesteps = 5`. So, the inputs to the model are the data for the first 4 timesteps, and the output will be the data for the 5th timestep.
+
+```python
+timesteps=5
+```
+
+Converting the training data to a 2D tensor using a nested list comprehension:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+Converting the testing data to a 2D tensor:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+Selecting inputs and outputs from the training and testing data:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### Implement SVR [^1]
+
+Now, it's time to implement SVR. To read more about this implementation, you can refer to [this documentation](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html). For our implementation, we follow these steps:
+
+ 1. Define the model by calling `SVR()` and passing in the model hyperparameters: kernel, gamma, C and epsilon
+ 2. Prepare the model for the training data by calling the `fit()` function
+ 3. Make predictions by calling the `predict()` function
+
+Now we create the SVR model. Here we use the [RBF kernel](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel), and set the hyperparameters gamma, C and epsilon to 0.5, 10 and 0.05 respectively.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### Fit the model on training data [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### Make model predictions [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+You've built your SVR! Now we need to evaluate it.
+
+### Evaluate your model [^1]
+
+For evaluation, first we will scale back the data to our original scale. Then, to check the performance, we will plot the original and predicted time series, and also print the MAPE results.
+
+Scale the predicted and original output:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### Check model performance on training and testing data [^1]
+
+We extract the timestamps from the dataset to show on the x-axis of our plot. Note that we are using the first `timesteps-1` values as input for the first output, so the timestamps for the output will start after that.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+Plot the predictions for the training data:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+Print MAPE for the training data:
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
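+
+For reference, `mape` comes from the course's `common.utils` helper module. A minimal sketch of what such a helper might look like (an assumption for illustration; the repo's actual implementation may differ) is:
+
+```python
+import numpy as np
+
+def mape(y_pred, y_true):
+    # mean absolute percentage error as a fraction; multiply by 100 for a percentage
+    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
+    return np.mean(np.abs((y_pred - y_true) / y_true))
+```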
+
+Plot the predictions for the testing data:
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+Print MAPE for the testing data:
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 You have a very good result on the testing dataset!
+
+### Check model performance on the full dataset [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+
+
+🏆 Very nice plots, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+- Try tweaking the hyperparameters (gamma, C, epsilon) when creating the model, and evaluate on the data to see which set of hyperparameters gives the best results on the testing data; a minimal search sketch follows this list. To learn more about these hyperparameters, you can refer to the document [here](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel).
+- Try using different kernel functions for the model and analyze their performance on the dataset. A helpful document can be found [here](https://scikit-learn.org/stable/modules/svm.html#kernel-functions).
+- Try using different values of `timesteps` for the model to look back on when making the prediction.
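+
+As a starting point for the first challenge, here is a minimal sketch of a manual grid search over the RBF hyperparameters. It reuses the scaled `x_train`, `y_train`, `x_test`, `y_test` and the fitted `scaler` from the lesson (re-run the time-steps cells first if you have already inverse-transformed `y_train`/`y_test` in the evaluation step); the grid values are arbitrary illustrations, not recommendations:
+
+```python
+from itertools import product
+from sklearn.svm import SVR
+
+best = None
+for gamma, C, epsilon in product([0.1, 0.5, 1.0], [1, 10, 100], [0.01, 0.05, 0.1]):
+    candidate = SVR(kernel='rbf', gamma=gamma, C=C, epsilon=epsilon)
+    candidate.fit(x_train, y_train[:,0])
+    # predictions are on the (0, 1) scale, so map both back before computing MAPE
+    pred = scaler.inverse_transform(candidate.predict(x_test).reshape(-1,1))
+    score = mape(pred, scaler.inverse_transform(y_test))
+    if best is None or score < best[0]:
+        best = (score, gamma, C, epsilon)
+
+print('Best MAPE: {:.4f} with gamma={}, C={}, epsilon={}'.format(*best))
+```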
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## Review & Self Study
+
+This lesson was to introduce the application of SVR for Time Series Forecasting. To read more about SVR, you can refer to [this blog](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/). This [documentation on scikit-learn](https://scikit-learn.org/stable/modules/svm.html) provides a more comprehensive explanation about SVMs in general, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression), and other implementation details such as the different [kernel functions](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) that can be used, and their parameters.
+
+## Assignment
+
+[A new SVR model](assignment.md)
+
+
+
+## Credits
+
+
+[^1]: The text, code and output in this section were contributed by [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)
+[^2]: The text, code and output in this section were taken from [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/3-SVR/assignment.md b/translations/ms/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..554144f4b
--- /dev/null
+++ b/translations/ms/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,15 @@
+# A New SVR Model
+
+## Instructions [^1]
+
+Now that you have built an SVR model, build another one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)); a loading sketch follows below. Annotate your work in a notebook, visualize the data and your model, and test its accuracy using appropriate plots and MAPE. Also try tweaking the various hyperparameters and using different values for the timesteps.
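+
+A minimal sketch of how you might get started, assuming you have saved one of the Duke series locally as a CSV (the filename and column names here are placeholders, not files from the course repo):
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# hypothetical file exported from one of the Duke datasets
+series = pd.read_csv('my_duke_series.csv', parse_dates=['date'], index_col='date')
+
+# sanity-check and visualize before reusing the SVR pipeline from the lesson
+print(series.describe())
+series.plot(figsize=(12, 4))
+plt.show()
+```
+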
+## Rubric [^1]
+
+| Criteria | Exemplary                                                    | Adequate                                                  | Needs Improvement                   |
+| -------- | ------------------------------------------------------------ | --------------------------------------------------------- | ----------------------------------- |
+|          | A notebook is presented with an SVR model built, tested and explained, with visualizations and accuracy stated. | The notebook presented is not annotated or contains bugs. | An incomplete notebook is presented |
+
+[^1]: The text in this section is based on the [assignment from ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/7-TimeSeries/README.md b/translations/ms/7-TimeSeries/README.md
new file mode 100644
index 000000000..13b246d0f
--- /dev/null
+++ b/translations/ms/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# Introduction to time series forecasting
+
+What is time series forecasting? It's about predicting future events by analyzing trends of the past.
+
+## Regional topic: worldwide electricity usage ✨
+
+In these two lessons, you will be introduced to time series forecasting, a somewhat lesser known area of machine learning that is nevertheless extremely valuable for industry and business applications, among other fields. While neural networks can be used to enhance the utility of these models, we will study them in the context of classical machine learning, as these models help predict future performance based on the past.
+
+Our regional focus is electricity usage in the world, an interesting dataset to learn about forecasting future power usage based on patterns of past load. You can see how this kind of forecasting can be extremely helpful in a business environment.
+
+
+
+Photo by [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) of electrical towers on a road in Rajasthan on [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## Lessons
+
+1. [Introduction to time series forecasting](1-Introduction/README.md)
+2. [Building ARIMA time series models](2-ARIMA/README.md)
+3. [Building a Support Vector Regressor for time series forecasting](3-SVR/README.md)
+
+## Credits
+
+"Introduction to time series forecasting" was written with ⚡️ by [Francesca Lazzeri](https://twitter.com/frlazzeri) and [Jen Looper](https://twitter.com/jenlooper). The notebooks first appeared online in the [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) originally written by Francesca Lazzeri. The SVR lesson was written by [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/1-QLearning/README.md b/translations/ms/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..7fdee320e
--- /dev/null
+++ b/translations/ms/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,59 @@
+## Checking the policy
+
+Since the Q-Table lists the "attractiveness" of each action at each state, it is quite easy to use it to define efficient navigation in our world. In the simplest case, we can select the action corresponding to the highest Q-Table value: (code block 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> If you try the code above several times, you may notice that sometimes it "hangs", and you need to press the STOP button in the notebook to interrupt it. This happens because there could be situations when two states "point" to each other in terms of the optimal Q-Value, in which case the agent ends up moving between those states indefinitely.
+
+## 🚀Challenge
+
+> **Task 1:** Modify the `walk` function to limit the maximum length of the path by a certain number of steps (say, 100), and watch the code above return this value from time to time; a sketch of this follows below.
+
+> **Task 2:** Modify the `walk` function so that it does not go back to places it has already visited. This will prevent `walk` from looping, however, the agent can still end up being "trapped" in a location from which it is unable to escape.
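+
+One way to approach Task 1 is sketched below. It assumes the `walk` helper from the lesson's notebook, together with the `actions` dictionary and the `Board` helpers (`random_start`, `move_pos`, `is_valid`, `at`) from the lesson's `rlboard` module, none of which are shown in this excerpt:
+
+```python
+def walk(m, policy, start_position=None, max_steps=100):
+    n = 0  # number of steps taken so far
+    if start_position:
+        m.human = start_position
+    else:
+        m.random_start()
+    while n < max_steps:
+        if m.at_end():
+            return n
+        a = actions[policy(m)]            # move vector for the action chosen by the policy
+        new_pos = m.move_pos(m.human, a)
+        if m.is_valid(new_pos) and m.at(new_pos) != Board.Cell.water:
+            m.human = new_pos             # move to the new position
+        n += 1
+    return max_steps                      # the path was cut off at the step limit
+```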
+
+## Navigation
+
+A better navigation policy would be the one that we used during training, which combines exploitation and exploration. In this policy, we will select each action with a certain probability, proportional to the values in the Q-Table. This strategy may still result in the agent returning to a position it has already explored, but, as you can see from the code below, it results in a very short average path to the desired location (remember that `print_statistics` runs the simulation 100 times): (code block 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+After running this code, you should get a much smaller average path length than before, in the range of 3-6.
+
+## Investigating the learning process
+
+As we have mentioned, the learning process is a balance between exploration and exploitation of the gained knowledge about the structure of the problem space. We have seen that the results of learning (the ability to help an agent find a short path to the goal) have improved, but it is also interesting to observe how the average path length behaves during the learning process:
+
+## Summary of learnings
+
+- **Average path length increases**. What we see here is that at first, the average path length increases. This is probably due to the fact that when we know nothing about the environment, we are likely to get trapped in bad states, water or the wolf. As we learn more and start using this knowledge, we can explore the environment for longer, but we still do not know very well where the apples are.
+
+- **Path length decreases as we learn more**. Once we learn enough, it becomes easier for the agent to achieve the goal, and the path length starts to decrease. However, we are still open to exploration, so we often diverge from the best path and explore new options, making the path longer than optimal.
+
+- **Length increases abruptly**. What we also observe on this graph is that at some point, the length increases abruptly. This indicates the stochastic nature of the process, and that we can at some point "spoil" the Q-Table coefficients by overwriting them with new values. This should ideally be minimized by decreasing the learning rate (for example, towards the end of training, we only adjust Q-Table values by a small amount).
+
+Overall, it is important to remember that the success and quality of the learning process significantly depend on parameters such as the learning rate, learning rate decay, and discount factor. These are often called **hyperparameters**, to distinguish them from **parameters**, which we optimize during training (for example, the Q-Table coefficients). The process of finding the best hyperparameter values is called **hyperparameter optimization**, and it deserves a separate topic.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## Assignment
+
+[A More Realistic World](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/1-QLearning/assignment.md b/translations/ms/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..952402725
--- /dev/null
+++ b/translations/ms/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,30 @@
+# A More Realistic World
+
+In our situation, Peter was able to move around almost without getting tired or hungry. In a more realistic world, he has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic by implementing the following rules:
+
+1. By moving from one place to another, Peter loses **energy** and gains some **fatigue**.
+2. Peter can gain more energy by eating apples.
+3. Peter can get rid of fatigue by resting under a tree or on the grass (i.e. walking onto a board location with a tree or grass - a green field)
+4. Peter needs to find and kill the wolf
+5. In order to kill the wolf, Peter needs to have certain levels of energy and fatigue, otherwise he loses the battle.
+
+## Instructions
+
+Use the original [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) notebook as a starting point for your solution.
+
+Modify the reward function above according to the rules of the game, run the reinforcement learning algorithm to learn the best strategy for winning the game, and compare the results of the random walk with your algorithm in terms of the number of games won and lost.
+
+> **Note**: In your new world, the state is more complex, and in addition to the human position also includes fatigue and energy levels. You may choose to represent the state as a tuple (Board,energy,fatigue), or define a class for the state (you may also want to derive it from `Board`), or even modify the original `Board` class inside [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py). A sketch of a tuple-based state follows below.
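+
+For example, a minimal sketch of a tuple-based state (purely illustrative; the granularity of the energy and fatigue buckets, and the `actions` name reused from the lesson, are assumptions):
+
+```python
+# Illustrative only: bucket energy/fatigue so the set of states stays finite
+def make_state(m, energy, fatigue):
+    # m.human is the (x, y) position on the board, as in the lesson's rlboard module
+    return (m.human, round(energy, 1), round(fatigue, 1))
+
+# Q-Table keyed by (state, action) pairs, as in the lesson
+Q = {}
+def qvalues(state):
+    return [Q.get((state, a), 0) for a in actions]
+```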
+
+In your solution, please keep the code responsible for the random walk strategy, and compare the results of your algorithm with the random walk at the end.
+
+> **Note**: You may need to adjust hyperparameters to make it work, especially the number of epochs. Because success in the game (fighting the wolf) is a rare event, you can expect a much longer training time.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------- |
+|          | A notebook is presented with the definition of the new world rules, the Q-Learning algorithm and some textual explanations. Q-Learning is able to significantly improve the results compared to the random walk. | A notebook is presented, Q-Learning is implemented and improves results compared to the random walk, but not significantly; or the notebook is poorly documented and the code is not well-structured | Some attempt to redefine the rules of the world is made, but the Q-Learning algorithm does not work, or the reward function is not fully defined |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/ms/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/ms/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/2-Gym/README.md b/translations/ms/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..dbbe00e7e
--- /dev/null
+++ b/translations/ms/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,324 @@
+## Prerequisites
+
+In this lesson, we will be using a library called **OpenAI Gym** to simulate different **environments**. You can run this lesson's code locally (e.g. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
+
+## OpenAI Gym
+
+In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is called a [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using this gym we can create different **environments**, from a cartpole simulation to Atari games.
+
+> **Note**: You can see other environments available from OpenAI Gym [here](https://gym.openai.com/envs/#classic_control).
+
+First, let's install the gym and import the required libraries (code block 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## Exercise - initialize a cartpole environment
+
+To work with the cartpole balancing problem, we need to initialize the corresponding environment. Each environment is associated with an:
+
+- **Observation space** that defines the structure of the information that we receive from the environment. For the cartpole problem, we receive the position of the pole, its velocity and some other values.
+
+- **Action space** that defines the possible actions. In our case the action space is discrete, and consists of two actions - **left** and **right**. (code block 2)
+
+1. To initialize, type the following code:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`.
+
+1. Run the code below and see what it leads to.
+
+   ✅ Remember that it is preferable to run this code on a local Python installation! (code block 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+   You should be seeing something similar to this image:
+
+ 
+
+1. During the simulation, we need to get observations in order to decide how to act. In fact, the step function returns the current observations, a reward value, and a done flag that indicates whether it makes sense to continue the simulation or not: (code block 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+   You will end up seeing something like this in the notebook output:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+   The observation vector that is returned at each step of the simulation contains the following values:
+   - position of the cart
+   - velocity of the cart
+   - angle of the pole
+   - rotation rate of the pole
+
+1. Get the min and max values of those numbers: (code block 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+   You may also notice that the reward value on each simulation step is always 1. This is because our goal is to survive as long as possible, i.e. to keep the pole in a reasonably vertical position for the longest period of time.
+
+   ✅ In fact, the CartPole simulation is considered solved if we manage to get an average reward of 195 over 100 consecutive trials.
+
+## State discretization
+
+In Q-Learning, we need to build a Q-Table that defines what to do at each state. To be able to do this, we need the state to be **discrete**; more precisely, it should contain a finite number of discrete values. Thus, we need somehow to **discretize** our observations, mapping them to a finite set of states.
+
+There are a few ways we can do this:
+
+- **Divide into bins**. If we know the interval of a certain value, we can divide this interval into a number of **bins**, and then replace the value by the number of the bin that it belongs to. This can be done using the numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) method. In this case, we will precisely know the state size, because it will depend on the number of bins we select for digitalization.
+
+✅ We can use linear interpolation to bring values to some finite interval (say, from -20 to 20), and then convert numbers to integers by rounding them. This gives us a bit less control over the size of the state, especially if we do not know the exact ranges of the input values. For example, in our case 2 of the 4 values do not have upper/lower bounds, which may result in an infinite number of states.
+
+In our example, we will go with the second approach. As you may notice later, despite the undefined upper/lower bounds, those values rarely fall outside certain finite intervals, so states with extreme values will be very rare.
+
+1. Here is the function that will take the observation from our model and produce a tuple of 4 integer values: (code block 6)
+
+ ```python
+ def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. Let's also explore another discretization method using bins: (code block 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. Now let's run a short simulation and observe those discrete environment values. Feel free to try both `discretize` and `discretize_bins` and see if there is a difference.
+
+   ✅ discretize_bins returns the bin number, which is 0-based. Thus for values of the input variable around 0 it returns the number from the middle of the interval (10). In discretize, we did not care about the range of the output values, allowing them to be negative, so the state values are not shifted, and 0 corresponds to 0. (code block 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+   ✅ Uncomment the line starting with env.render if you want to see how the environment executes. Otherwise you can execute it in the background, which is faster. We will use this "invisible" execution during our Q-Learning process.
+
+## The Q-Table structure
+
+In our previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent the Q-Table by a numpy tensor with a shape of 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent the state by an array of shape 20x20x10x10x2 (here 2 is the dimension of the action space, and the first dimensions correspond to the number of bins we have selected to use for each of the parameters in the observation space).
+
+However, sometimes the precise dimensions of the observation space are not known. In the case of the `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bounded. Thus, we will use a slightly different approach and represent the Q-Table by a dictionary.
+
+1. Use the pair *(state,action)* as the dictionary key, and the value would correspond to the Q-Table entry value. (code block 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+   Here we also define a function `qvalues()`, which returns a list of Q-Table values for a given state that correspond to all possible actions. If an entry is not present in the Q-Table, we will return 0 as the default.
+
+## Let's start Q-Learning
+
+Now we are ready to teach Peter to balance!
+
+1. First, let's set some hyperparameters: (code block 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+   Here, `alpha` is the **learning rate** that defines to which extent we should adjust the current values of the Q-Table at each step. In the previous lesson we started with 1, and then decreased `alpha` to lower values during training. In this example we will keep it constant just for simplicity, and you can experiment with adjusting `alpha` values later.
+
+   `gamma` is the **discount factor** that shows to which extent we should prioritize future reward over current reward.
+
+   `epsilon` is the **exploration/exploitation factor** that determines whether we should prefer exploration to exploitation or vice versa. In our algorithm, we will in `epsilon` percent of the cases select the next action according to Q-Table values, and in the remaining number of cases we will execute a random action. This will allow us to explore areas of the search space that we have never seen before.
+
+   ✅ In terms of balancing - choosing a random action (exploration) would act as a random punch in the wrong direction, and the pole would have to learn how to recover its balance from those "mistakes"
+
+### Improve the algorithm
+
+We can also make two improvements to our algorithm from the previous lesson:
+
+- **Calculate average cumulative reward**, over a number of simulations. We will print the progress every 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 points - we can consider the problem solved, with even higher quality than required.
+
+- **Calculate maximum average cumulative result**, `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of Q-Table that correspond to the best model observed during training.
+
+1. Collect all cumulative rewards at each simulation in the `rewards` vector for further plotting. (code block 11)
+
+ ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+
+    Qmax = 0
+    cum_rewards = []
+    rewards = []
+    for epoch in range(100000):
+        obs = env.reset()
+        done = False
+        cum_reward=0
+        # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - choose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - randomly choose an action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            # Bellman update of the Q-Table entry for (state, action)
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == Periodically print results and calculate average reward ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+ ```
+
+What you may notice from those results:
+
+- **Close to our goal**. We are very close to achieving the goal of getting 195 cumulative rewards over 100+ consecutive runs of the simulation, or we may have actually achieved it! Even if we get smaller numbers, we still do not know for sure, because we average over 5000 runs, and only 100 runs are required by the formal criteria.
+
+- **Reward starts to drop**. Sometimes the reward starts to drop, which means that we can "destroy" already learnt values in the Q-Table with ones that make the situation worse.
+
+This observation is more clearly visible if we plot the training progress.
+
+## Plotting training progress
+
+During training, we have collected the cumulative reward value at each of the iterations into the `rewards` vector. Here is how it looks when we plot it against the iteration number:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+From this graph, it is not possible to tell anything, because due to the nature of the stochastic training process the length of the training sessions varies greatly. To make more sense of this graph, we can calculate the **running average** over a series of experiments, let's say 100. This can be done conveniently using `np.convolve`: (code block 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## Varying hyperparameters
+
+To make learning more stable, it makes sense to adjust some of our hyperparameters during training. In particular:
+
+- **For the learning rate**, `alpha`, we may start with values close to 1, and then keep decreasing the parameter. With time, we will be getting good probability values in the Q-Table, and thus we should be adjusting them slightly, and not overwriting them completely with new values.
+
+- **Increase epsilon**. We may want to increase `epsilon` slowly, in order to explore less and exploit more. It probably makes sense to start with a lower value of `epsilon`, and move up to almost 1; a minimal schedule sketch follows this list.
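+
+A minimal sketch of such a schedule inside the training loop (the decay factor, floor and ceiling here are arbitrary illustrations, not tuned values):
+
+```python
+alpha = 1.0
+epsilon = 0.3
+
+for epoch in range(100000):
+    # ... run one simulation episode and update the Q-Table as in the loop above ...
+    alpha = max(0.1, alpha * 0.9999)     # adjust Q-Table values less and less aggressively
+    epsilon = min(0.99, epsilon + 1e-5)  # exploit more and more as training proceeds
+```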
+
+> **Task 1**: Play with the hyperparameter values and see if you can achieve a higher cumulative reward. Are you getting above 195?
+
+> **Task 2**: To formally solve the problem, you need to get an average reward of 195 across 100 consecutive runs. Measure that during training and make sure that you have formally solved the problem!
+
+## Seeing the result in action
+
+It would be interesting to actually see how the trained model behaves. Let's run the simulation and follow the same action selection strategy as during training, sampling according to the probability distribution in the Q-Table: (code block 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+You should see something like this:
+
+
+
+---
+
+## 🚀Challenge
+
+> **Task 3**: Here, we were using the final copy of the Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table in the `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.
+
+> **Task 4**: Here we were not selecting the best action on each step, but rather sampling with the corresponding probability distribution. Would it make more sense to always select the best action, the one with the highest Q-Table value? This can be done by using the `np.argmax` function to find the action number corresponding to the highest Q-Table value. Implement this strategy and see if it improves the balancing; a sketch follows below.
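+
+A minimal sketch of this greedy variant for Task 4, reusing `env`, `discretize`, `qvalues` and `actions` from above:
+
+```python
+obs = env.reset()
+done = False
+while not done:
+    s = discretize(obs)
+    env.render()
+    # always take the action with the highest Q-Table value instead of sampling
+    a = actions[np.argmax(qvalues(s))]
+    obs, _, done, _ = env.step(a)
+env.close()
+```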
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## Assignment
+
+[Train a Mountain Car](assignment.md)
+
+## Conclusion
+
+We have now learned how to train agents to achieve good results just by providing them with a reward function that defines the desired state of the game, and by giving them an opportunity to intelligently explore the search space. We have successfully applied the Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions.
+
+It is important to also study situations where the action space is continuous as well, and when the observation space is much more complex, such as the image from the Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of our upcoming more advanced AI course.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/2-Gym/assignment.md b/translations/ms/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..803d742e5
--- /dev/null
+++ b/translations/ms/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# Train a Mountain Car
+
+[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
+
+## The Mountain Car environment
+
+The [Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley.
+
+The goal is to get out of the valley and capture the flag, by performing at each step one of the following actions:
+
+| Value | Meaning |
+|---|---|
+| 0 | Accelerate to the left |
+| 1 | Do not accelerate |
+| 2 | Accelerate to the right |
+
+The main trick of this problem, however, is that the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum.
+
+The observation space consists of just two values:
+
+| Num | Observation | Min | Max |
+|-----|--------------|-----|-----|
+| 0 | Car Position | -1.2| 0.6 |
+| 1 | Car Velocity | -0.07 | 0.07 |
+
+The reward system for the mountain car is rather tricky:
+
+ * A reward of 0 is awarded if the agent reaches the flag (position = 0.5) on top of the mountain.
+ * A reward of -1 is awarded if the position of the agent is less than 0.5.
+
+The episode terminates if the car position is more than 0.5, or if the episode length is greater than 200.
+
+## Instructions
+
+Adapt our reinforcement learning algorithm to solve the mountain car problem. Start with the existing [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) code, substitute the new environment, change the state discretization functions, and try to make the existing algorithm train with minimal code modifications. Optimize the result by adjusting the hyperparameters.
+
+> **Note**: Hyperparameter adjustment is likely to be needed to make the algorithm converge. A sketch of one possible state discretization follows.
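+
+For example, a minimal sketch of discretizing the two-dimensional Mountain Car observation (the bin counts are arbitrary starting points, not tuned values):
+
+```python
+import gym
+import numpy as np
+
+env = gym.make("MountainCar-v0")
+
+# scale position and velocity into integer bins; 20 bins each is an arbitrary choice
+nbins = np.array([20, 20])
+lo, hi = env.observation_space.low, env.observation_space.high
+
+def discretize(obs):
+    # map each observation component to a bin index in [0, nbins-1]
+    idx = ((obs - lo) / (hi - lo) * nbins).astype(int)
+    return tuple(np.clip(idx, 0, nbins - 1))
+```
+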
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | The Q-Learning algorithm is successfully adapted from the CartPole example with minimal code modifications, and is able to solve the problem of capturing the flag in under 200 steps. | A new Q-Learning algorithm has been adopted from the Internet, but is well-documented; or the existing algorithm was adopted but does not reach the desired results | The student was not able to successfully adopt any algorithm, but has made substantial steps towards the solution (implemented state discretization, the Q-Table data structure, etc.) |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/ms/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/2-Gym/solution/R/README.md b/translations/ms/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..7630a4570
--- /dev/null
+++ b/translations/ms/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/8-Reinforcement/README.md b/translations/ms/8-Reinforcement/README.md
new file mode 100644
index 000000000..4688f49c7
--- /dev/null
+++ b/translations/ms/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# Introduction to reinforcement learning
+
+Reinforcement learning, RL, is seen as one of the basic machine learning paradigms, next to supervised learning and unsupervised learning. RL is all about decisions: delivering the right decisions or at least learning from them.
+
+Imagine you have a simulated environment such as the stock market. What happens if you impose a given regulation? Does it have a positive or negative effect? If something negative happens, you need to take this _negative reinforcement_, learn from it, and change course. If it is a positive outcome, you need to build on that _positive reinforcement_.
+
+
+
+> Peter and his friends need to escape the hungry wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+## Regional topic: Peter and the Wolf (Russia)
+
+[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) is a musical fairy tale written by the Russian composer [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). It is a story about the young pioneer Peter, who bravely goes out of his house to the forest clearing to chase a wolf. In this section, we will train machine learning algorithms that will help Peter:
+
+- **Explore** the surrounding area and build an optimal navigation map
+- **Learn** how to use a skateboard and balance on it, in order to move around faster.
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 Click the image above to listen to Peter and the Wolf by Prokofiev
+
+## Reinforcement learning
+
+In previous sections, you have seen two examples of machine learning problems:
+
+- **Supervised**, where we have datasets that suggest sample solutions to the problem we want to solve. [Classification](../4-Classification/README.md) and [regression](../2-Regression/README.md) are supervised learning tasks.
+- **Unsupervised**, in which we do not have labeled training data. The main example of unsupervised learning is [Clustering](../5-Clustering/README.md).
+
+In this section, we will introduce you to a new type of learning problem that does not require labeled training data. There are several types of such problems:
+
+- **[Semi-supervised learning](https://wikipedia.org/wiki/Semi-supervised_learning)**, where we have a lot of unlabeled data that can be used to pre-train the model.
+- **[Reinforcement learning](https://wikipedia.org/wiki/Reinforcement_learning)**, in which an agent learns how to behave by performing experiments in some simulated environment.
+
+### Example - computer game
+
+Suppose you want to teach a computer to play a game, such as chess, or [Super Mario](https://wikipedia.org/wiki/Super_Mario). For the computer to play a game, we need it to predict which move to make in each of the game states. While this may seem like a classification problem, it is not - because we do not have a dataset with states and corresponding actions. While we may have some data, like records of existing chess matches or recordings of players playing Super Mario, it is likely that that data will not sufficiently cover a large enough number of possible states.
+
+Instead of looking for existing game data, **Reinforcement Learning** (RL) is based on the idea of *making the computer play* many times and observing the result. Thus, to apply Reinforcement Learning, we need two things:
+
+- **An environment** and **a simulator** which allow us to play a game many times. This simulator would define all the game rules as well as the possible states and actions.
+
+- **A reward function**, which tells us how well we did during each move or game.
+
+The main difference between other types of machine learning and RL is that in RL we typically do not know whether we win or lose until we finish the game. Thus, we cannot say whether a certain move alone is good or not - we only receive a reward at the end of the game. Our goal is to design algorithms that will allow us to train a model under uncertain conditions. We will learn about one RL algorithm called **Q-learning**.
+
+## Lessons
+
+1. [Introduction to reinforcement learning and Q-Learning](1-QLearning/README.md)
+2. [Using a gym simulation environment](2-Gym/README.md)
+
+## Credits
+
+"Introduction to Reinforcement Learning" was written with ♥️ by [Dmitry Soshnikov](http://soshnikov.com)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/9-Real-World/1-Applications/README.md b/translations/ms/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..70aaa66dc
--- /dev/null
+++ b/translations/ms/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# Postscript: Machine learning in the real world
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this curriculum, you have learned many ways to prepare data for training and create machine learning models. You built a series of classic regression, clustering, classification, natural language processing, and time series models. Congratulations! Now, you might be wondering what it is all for... what are the real world applications for these models?
+
+While a lot of interest in industry has been garnered by AI, which usually leverages deep learning, there are still valuable applications for classical machine learning models. You might even use some of these applications today! In this lesson, you'll explore how eight different industries and subject-matter domains use these types of models to make their applications more performant, reliable, intelligent, and valuable to users.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 Finance
+
+The finance sector offers many opportunities for machine learning. Many problems in this area lend themselves to being modeled and solved by using ML.
+
+### Credit card fraud detection
+
+We learned about [k-means clustering](../../5-Clustering/2-K-Means/README.md) earlier in the course, but how can it be used to solve problems related to credit card fraud?
+
+K-means clustering comes in handy in a credit card fraud detection technique called **outlier detection**. Outliers, or deviations in observations about a set of data, can tell us if a credit card is being used in a normal capacity or if something unusual is going on. As shown in the paper linked below, you can sort credit card data using a k-means clustering algorithm and assign each transaction to a cluster based on how much of an outlier it appears to be. Then, you can evaluate the riskiest clusters for fraudulent versus legitimate transactions.
+[Reference](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
+
+### Wealth management
+
+In wealth management, an individual or firm handles investments on behalf of their clients. Their job is to sustain and grow wealth in the long term, so it is essential to choose investments that perform well.
+
+One way to evaluate how a particular investment performs is through statistical regression. [Linear regression](../../2-Regression/1-Tools/README.md) is a valuable tool for understanding how a fund performs relative to some benchmark. We can also deduce whether or not the results of the regression are statistically significant, or how much they would affect a client's investments. You could even further expand your analysis using multiple regression, where additional risk factors can be taken into account. For an example of how this would work for a specific fund, check out the paper below on evaluating fund performance using regression.
+[Reference](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 Education
+
+The education sector is also a very interesting area where ML can be applied. There are interesting problems to be tackled, such as detecting cheating on tests or essays, or managing bias, unintentional or not, in the correction process.
+
+### Predicting student behavior
+
+[Coursera](https://coursera.com), an online open course provider, has a great tech blog where they discuss many engineering decisions. In this case study, they plotted a regression line to try to explore any correlation between a low NPS (Net Promoter Score) rating and course retention or drop-off.
+[Reference](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### Mitigating bias
+
+[Grammarly](https://grammarly.com), a writing assistant that checks for spelling and grammar errors, uses sophisticated [natural language processing systems](../../6-NLP/README.md) throughout its products. They published an interesting case study in their tech blog about how they dealt with gender bias in machine learning, which you learned about in our [introductory fairness lesson](../../1-Introduction/3-fairness/README.md).
+[Reference](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 Retail
+
+The retail sector can definitely benefit from the use of ML, with everything from creating a better customer journey to stocking inventory in an optimal way.
+
+### Personalizing the customer journey
+
+At Wayfair, a company that sells home goods like furniture, helping customers find the right products for their taste and needs is paramount. In this article, engineers from the company describe how they use ML and NLP to "surface the right results for customers". Notably, their Query Intent Engine has been built to use entity extraction, classifier training, asset and opinion extraction, and sentiment tagging on customer reviews. This is a classic use case of how NLP works in online retail.
+[Reference](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### Inventory management
+
+Innovative, nimble companies like [StitchFix](https://stitchfix.com), a box service that ships clothing to consumers, rely heavily on ML for recommendations and inventory management. In fact, their styling teams work together with their merchandising teams: "one of our data scientists tinkered with a genetic algorithm and applied it to apparel to predict what would be a successful piece of clothing that doesn't exist today. We brought that to the merchandise team and now they can use that as a tool."
+[Reference](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 Health Care
+
+The health care sector can leverage ML to optimize research tasks as well as logistic problems like readmitting patients or stopping diseases from spreading.
+
+### Managing clinical trials
+
+Toxicity in clinical trials is a major concern for drug makers. How much toxicity is tolerable? In this study, analyzing various clinical trial methods led to the development of a new approach for predicting the odds of clinical trial outcomes. Specifically, the researchers were able to use random forest to produce a [classifier](../../4-Classification/README.md) that can distinguish between groups of drugs.
+[Reference](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### Hospital readmission management
+
+Hospital care is costly, especially when patients have to be readmitted. This paper discusses a company that uses ML to predict readmission potential using [clustering](../../5-Clustering/README.md) algorithms. These clusters help analysts to "discover groups of readmissions that may share a common cause".
+[Reference](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### Disease management
+
+The recent pandemic has shone a bright light on the ways that machine learning can aid in stopping the spread of disease. In this article, you'll recognize the use of ARIMA, logistic curves, linear regression, and SARIMA. "This work is an attempt to calculate the rate of spread of this virus and thus to predict the deaths, recoveries, and confirmed cases, so that it may help us to prepare better and survive."
+[Reference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 Ecology and Green Tech
+
+Nature and ecology consist of many sensitive systems where the interplay between animals and nature comes into focus. It's important to be able to measure these systems accurately and act appropriately if something happens, like a forest fire or a drop in an animal population.
+
+### Forest management
+
+You learned about [Reinforcement Learning](../../8-Reinforcement/README.md) in previous lessons. It can be very useful when trying to predict patterns in nature. In particular, it can be used to track ecological problems like forest fires and the spread of invasive species. In Canada, a group of researchers used Reinforcement Learning to build forest wildfire dynamics models from satellite images. Using an innovative "spatially spreading process (SSP)", they envisioned a forest fire as "the agent at any cell in the landscape." "The set of actions the fire can take from a location at any point in time includes spreading north, south, east, or west or not spreading.
+
+This approach inverts the usual RL setup since the dynamics of the corresponding Markov Decision Process (MDP) is a known function for immediate wildfire spread." Read more about the classic algorithms used by this group at the link below.
+[Reference](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### Motion sensing of animals
+
+While deep learning has created a revolution in visually tracking animal movements (you can build your own [polar bear tracker](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) here), classic ML still has a place in this task.
+
+Sensors to track the movements of farm animals and IoT make use of this type of visual processing, but more basic ML techniques are useful to preprocess the data. For example, in this paper, sheep postures were monitored and analyzed using various classifier algorithms. You might recognize the ROC curve on page 335.
+[Reference](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ Energy Management
+
+In our lessons on [time series forecasting](../../7-TimeSeries/README.md), we invoked the concept of smart parking meters to generate revenue for a town based on an understanding of supply and demand. This article discusses in detail how clustering, regression, and time series forecasting were combined to help predict future energy use in Ireland, based on smart metering.
+[Reference](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 Insurance
+
+The insurance sector is another sector that uses ML to construct and optimize viable financial and actuarial models.
+
+### Volatility Management
+
+MetLife, a life insurance provider, is forthcoming about the way they analyze and mitigate volatility in their financial models. In this article you'll notice binary and ordinal classification visualizations. You'll also discover forecasting visualizations.
+[Reference](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 Arts, Culture, and Literature
+
+In the arts, for example in journalism, there are many interesting problems. Detecting fake news is a huge one, as it has been proven to influence people's opinions and even to topple democracies. Museums can also benefit from ML in everything from finding links between artifacts to resource planning.
+
+### Fake news detection
+
+Detecting fake news has become a game of cat and mouse in today's media. In this article, researchers suggest that a system combining several of the ML techniques we have studied can be tested and the best model deployed: "This system is based on natural language processing to extract features from the data and then these features are used for the training of machine learning classifiers such as Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), and Logistic Regression (LR)."
+[Reference](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
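+
+A bake-off of classifiers over NLP-derived features like this is easy to prototype with scikit-learn. The sketch below uses a tiny invented corpus just to show the mechanics; a real system would train on a large labeled dataset with much richer features.
+
+```python
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression, SGDClassifier
+from sklearn.naive_bayes import MultinomialNB
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.svm import LinearSVC
+from sklearn.model_selection import cross_val_score
+
+# Tiny illustrative corpus: 1 = fake, 0 = genuine
+texts = ["miracle cure found", "markets closed higher today",
+         "aliens endorse candidate", "city council approves budget"] * 25
+labels = [1, 0, 1, 0] * 25
+
+# NLP feature extraction step: turn raw text into TF-IDF vectors
+X = TfidfVectorizer().fit_transform(texts)
+
+candidates = {
+    "Naive Bayes": MultinomialNB(),
+    "SVM": LinearSVC(),
+    "Random Forest": RandomForestClassifier(n_estimators=100),
+    "SGD": SGDClassifier(),
+    "Logistic Regression": LogisticRegression(max_iter=1000),
+}
+# Compare the candidate classifiers and keep the best for deployment
+for name, clf in candidates.items():
+    score = cross_val_score(clf, X, labels, cv=5).mean()
+    print(f"{name:>20}: {score:.2f}")
+```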
+
+This article shows how combining different ML domains can produce interesting results that can help stop fake news from spreading and creating real damage; in this case, the impetus was the spread of rumors about COVID treatments that incited mob violence.
+
+### Museum ML
+
+Museums are at the cusp of an AI revolution in which cataloging and digitizing collections and finding links between artifacts is becoming easier as technology advances. Projects such as [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) are helping unlock the mysteries of inaccessible collections such as the Vatican Archives. But the business aspect of museums benefits from ML models as well.
+
+For example, the Art Institute of Chicago built models to predict what audiences are interested in and when they will attend expositions. The goal is to create individualized and optimized visitor experiences each time the user visits the museum. "During fiscal 2017, the model predicted attendance and admissions within 1 percent of accuracy, says Andrew Simnick, senior vice president at the Art Institute."
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 Marketing
+
+### Customer segmentation
+
+The most effective marketing strategies target customers in different ways based on various groupings. This article discusses the uses of clustering algorithms to support differentiated marketing. Differentiated marketing helps companies improve brand recognition, reach more customers, and make more money.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 Challenge
+
+Identify another sector that benefits from some of the techniques you learned in this curriculum, and discover how it uses ML.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## Review & Self Study
+
+The Wayfair data science team has several interesting videos on how they use ML at their company. It's worth [taking a look](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## Assignment
+
+[An ML scavenger hunt](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/9-Real-World/1-Applications/assignment.md b/translations/ms/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..b0fb7f78f
--- /dev/null
+++ b/translations/ms/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# An ML scavenger hunt
+
+## Instructions
+
+In this lesson, you learned about many real-life use cases that were solved using classical ML. While the use of deep learning, new techniques and tools in AI, and leveraging neural networks have helped speed up the production of tools to help these sectors, classic ML using the techniques in this curriculum still holds great value.
+
+In this assignment, imagine that you are participating in a hackathon. Use what you learned in the curriculum to propose a solution using classic ML to solve a problem in one of the sectors discussed in this lesson. Create a presentation where you discuss how you will implement your idea. Bonus points if you can gather sample data and build an ML model to support your concept!
+
+## Rubric
+
+| Criteria | Exemplary                                                            | Adequate                                          | Needs Improvement      |
+| -------- | -------------------------------------------------------------------- | ------------------------------------------------- | ---------------------- |
+|          | A PowerPoint presentation is presented - bonus for building a model | A basic, non-innovative presentation is presented | The work is incomplete |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/9-Real-World/2-Debugging-ML-Models/README.md b/translations/ms/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..28ff2c4b3
--- /dev/null
+++ b/translations/ms/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,141 @@
+# Postscript: Model Debugging in Machine Learning using Responsible AI dashboard components
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+Machine learning impacts our everyday lives. AI is finding its way into some of the most important systems that affect us as individuals as well as our society, from healthcare and finance to education and employment. For instance, systems and models are involved in daily decision-making tasks, such as health care diagnoses or detecting fraud. Consequently, the advancements in AI, along with accelerated adoption, are being met with evolving societal expectations and growing regulation in response. We constantly see areas where AI systems continue to miss expectations, expose new challenges, and prompt governments to start regulating AI solutions. So it is important that these models are analyzed to provide fair, reliable, inclusive, transparent, and accountable outcomes for everyone.
+
+In this curriculum, we will look at practical tools that can be used to assess whether a model has responsible AI issues. Traditional machine learning debugging techniques tend to be based on quantitative calculations such as aggregated accuracy or average error loss. Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion, or disproportionately represents such demographics. What about when the model's output is interpreted to favor some demographic? This can introduce an over- or under-representation of these sensitive feature groups, resulting in fairness, inclusiveness, or reliability issues with the model. Another factor is that machine learning models are considered black boxes, which makes it hard to understand and explain what drives a model's predictions. All of these are challenges data scientists and AI developers face when they lack adequate tools to debug and assess the fairness or trustworthiness of a model.
+
+In this lesson, you will learn about debugging your models using:
+
+- **Error Analysis**: identify where in your data distribution the model has high error rates.
+- **Model Overview**: perform comparative analysis across different data cohorts to discover disparities in your model's performance metrics.
+- **Data Analysis**: investigate where there could be over- or under-representation in your data that can skew your model to favor one data demographic over another.
+- **Feature Importance**: understand which features drive your model's predictions at a global or local level.
+
+## Prerequisite
+
+As a prerequisite, please review [Responsible AI tools for developers](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+
+
+## Error Analysis
+
+Traditional model performance metrics used for measuring accuracy are mostly calculations based on correct vs incorrect predictions. For example, determining that a model is accurate 89% of the time with an error loss of 0.001 can be considered good performance. But errors are often not distributed uniformly in your underlying dataset: you may get an 89% model accuracy score and then discover that there are regions of your data for which the model fails 42% of the time. The consequence of these failure patterns with certain data groups can lead to fairness or reliability issues. It is essential to understand the areas where the model performs well or poorly, since the data regions with a high number of inaccuracies may turn out to be an important data demographic.
+
+The Error Analysis component on the RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. This is useful in identifying features or areas where your dataset has a high error rate. By seeing where most of the model's inaccuracies come from, you can start investigating the root cause. You can also create cohorts of data to perform analysis on. These data cohorts help in the debugging process to determine why model performance is good in one cohort but erroneous in another.
+
+The visual indicators on the tree map help in locating the problem areas more quickly. For instance, the darker the shade of red on a tree node, the higher the error rate.
+
+A heat map is another visualization functionality that users can use to investigate the error rate, using one or two features to find a contributor to the model errors across an entire dataset or cohorts.
+
+Use error analysis when you need to:
+
+* Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.
+* Break down the aggregate performance metrics to automatically discover erroneous cohorts and inform your targeted mitigation steps (a minimal sketch of this per-cohort computation follows this list).
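+
+The dashboard automates this discovery, but the underlying computation, comparing error rates across data cohorts, is easy to sketch by hand with pandas. Everything below (the column names, the `time_in_hospital >= 7` split, and the error rates) is invented for illustration.
+
+```python
+import numpy as np
+import pandas as pd
+
+# Hypothetical evaluation results: one row per test example
+rng = np.random.default_rng(2)
+df = pd.DataFrame({
+    "time_in_hospital": rng.integers(1, 14, size=500),
+    "correct": rng.random(500) > 0.11,      # ~89% aggregate accuracy
+})
+# Make one cohort deliberately worse, as error analysis might uncover
+long_stay = df["time_in_hospital"] >= 7
+df.loc[long_stay, "correct"] = rng.random(long_stay.sum()) > 0.42
+
+print(f"overall error rate: {1 - df['correct'].mean():.2%}")
+accuracy_by_cohort = df.groupby(long_stay)["correct"].mean()
+print(f"error rate, time_in_hospital < 7 : {1 - accuracy_by_cohort[False]:.2%}")
+print(f"error rate, time_in_hospital >= 7: {1 - accuracy_by_cohort[True]:.2%}")
+```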
+
+## Model Overview
+
+Evaluating the performance of a machine learning model requires a holistic understanding of its behavior. This can be achieved by reviewing more than one metric, such as error rate, accuracy, recall, precision, or MAE (Mean Absolute Error), to find disparities among performance metrics. One performance metric may look great while inaccuracies are exposed in another. In addition, comparing the metrics for disparities across the entire dataset or across cohorts helps shed light on where the model performs well or poorly. This is especially important for examining the model's performance among sensitive vs insensitive features (e.g., a patient's race, gender, or age) to uncover potential unfairness in the model. For example, discovering that the model is more erroneous in a cohort that has sensitive features can reveal potential unfairness.
+
+The Model Overview component of the RAI dashboard helps not just in analyzing the performance metrics of the data representation in a cohort, but gives users the ability to compare the model's behavior across different cohorts.
+
+The component's feature-based analysis functionality allows users to narrow down data subgroups within a particular feature to identify anomalies at a granular level. For example, the dashboard has built-in intelligence to automatically generate cohorts for a user-selected feature (e.g., *"time_in_hospital < 3"* or *"time_in_hospital >= 7"*). This enables a user to isolate a particular feature from a larger data group to see whether it is a key influencer of the model's erroneous outcomes.
+
+The Model Overview component supports two classes of disparity metrics:
+
+**Disparity in model performance**: These metrics calculate the disparity (difference) in the values of the selected performance metric across subgroups of data. Here are a few examples:
+
+* Disparity in accuracy rate
+* Disparity in error rate
+* Disparity in precision
+* Disparity in recall
+* Disparity in mean absolute error (MAE)
+
+**Disparity in selection rate**: This metric contains the difference in selection rate (the favorable prediction) among subgroups. An example of this is the disparity in loan approval rates. Selection rate means the fraction of data points in each class classified as 1 (in binary classification), or the distribution of prediction values (in regression).
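+
+Selection rate is simple enough to check by hand outside the dashboard as well. The sketch below compares favorable-prediction rates across a hypothetical sensitive attribute; the group labels and approval rates are invented for the example.
+
+```python
+import numpy as np
+import pandas as pd
+
+# Hypothetical loan decisions: 1 = approved (the favorable prediction)
+rng = np.random.default_rng(3)
+group = rng.choice(["A", "B"], size=1000)
+approved = np.where(group == "A",
+                    rng.random(1000) < 0.62,    # group A approved ~62% of the time
+                    rng.random(1000) < 0.41)    # group B approved ~41% of the time
+df = pd.DataFrame({"group": group, "approved": approved.astype(int)})
+
+# Selection rate per subgroup, and the gap between them
+selection_rate = df.groupby("group")["approved"].mean()
+print(selection_rate)
+print("disparity:", round(selection_rate.max() - selection_rate.min(), 2))
+```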
+
+## Data Analysis
+
+> "If you torture the data long enough, it will confess to anything" - Ronald Coase
+
+This statement sounds extreme, but it is true that data can be manipulated to support any conclusion. Such manipulation can sometimes happen unintentionally. As humans, we all have biases, and it is often difficult to consciously know when you are introducing bias into data. Guaranteeing fairness in AI and machine learning remains a complex challenge.
+
+Data is a huge blind spot for traditional model performance metrics. You may have high accuracy scores, but this does not always reflect the underlying data bias that could be in your dataset. For example, if a dataset of employees has 27% women in executive positions at a company and 73% men at the same level, a job-advertising AI model trained on this data may target a mostly male audience for senior-level job positions. Having this imbalance in the data skews the model's predictions to favor one gender, which reveals a fairness issue: there is a gender bias in the AI model.
+
+The Data Analysis component on the RAI dashboard helps to identify areas of over- and under-representation in a dataset. It helps users diagnose the root cause of errors and fairness issues introduced by data imbalances or a lack of representation of a particular data group. It gives users the ability to visualize datasets based on predicted and actual outcomes, error groups, and specific features. Sometimes discovering an underrepresented data group can also reveal that the model is not learning well, hence the high inaccuracies. A model with data bias is not just a fairness problem; it shows that the model is not inclusive or reliable.
+
+Use data analysis when you need to:
+
+* Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts).
+* Understand the distribution of your dataset across different cohorts and feature groups.
+* Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution.
+* Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors (the sketch after this list shows the kind of representation check involved).
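+
+A first-pass representation check needs nothing more than grouped counts. The snippet below reproduces the employee-dataset example above; all of the numbers and column names are invented for illustration.
+
+```python
+import pandas as pd
+
+# Hypothetical employee records mirroring the 27%/73% example above
+df = pd.DataFrame({
+    "level":  ["executive"] * 100 + ["analyst"] * 200,
+    "gender": ["F"] * 27 + ["M"] * 73 + ["F"] * 96 + ["M"] * 104,
+})
+
+# Share of each gender within each job level
+representation = df.groupby("level")["gender"].value_counts(normalize=True)
+print(representation)
+# analyst:   F ~0.48, M ~0.52  -> balanced
+# executive: F  0.27, M  0.73  -> an imbalance worth flagging before training
+```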
+
+## Model Interpretability
+
+Machine learning models tend to be black boxes. Understanding which key data features drive a model's prediction can be challenging. It is important to provide transparency as to why a model makes a certain prediction. For example, if an AI system predicts that a diabetic patient is at risk of being readmitted to a hospital within 30 days, it should be able to provide the supporting data that led to its prediction. Having supporting data indicators brings transparency that helps clinicians or hospitals make well-informed decisions. In addition, being able to explain why a model made a prediction for an individual patient enables accountability with health regulations. When you use machine learning models in ways that affect people's lives, it is crucial to understand and explain what influences a model's behavior. Model explainability and interpretability help answer questions in scenarios such as:
+
+* Model debugging: Why did my model make this mistake? How can I improve my model?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
+* Regulatory compliance: Does my model satisfy legal requirements?
+
+The Feature Importance component of the RAI dashboard helps you debug and get a comprehensive understanding of how a model makes its predictions. It is also a useful tool for machine learning professionals and decision makers to explain and show evidence of the features influencing a model's behavior for regulatory compliance. Users can explore both global and local explanations to validate which features drive a model's predictions. Global explanations list the top features that affected the model's overall predictions. Local explanations display which features led to the model's prediction for an individual case. The ability to evaluate local explanations is also helpful in debugging or auditing a specific case to better understand and interpret why a model made an accurate or inaccurate prediction.
+
+* Global explanations: For example, what features affect the overall behavior of a diabetes hospital-readmission model?
+* Local explanations: For example, why was a diabetic patient over 60 years old with prior hospitalizations predicted to be readmitted or not readmitted to a hospital within 30 days?
+
+In the debugging process of examining a model's performance across different cohorts, Feature Importance shows what level of influence a feature has across the cohorts. It helps reveal anomalies when comparing the level of influence a feature has in driving the model's erroneous predictions. The Feature Importance component can show which values in a feature positively or negatively influenced the model's outcome. For instance, if a model made an inaccurate prediction, the component gives you the ability to drill down and pinpoint which features or feature values drove the prediction. This level of detail helps not just in debugging but also provides transparency and accountability in audit situations. Finally, the component can help you identify fairness issues: if a sensitive feature such as ethnicity or gender is highly influential in driving the model's predictions, this could be a sign of racial or gender bias in the model.
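+
+The dashboard computes its explanations with dedicated explainers, but a model-agnostic stand-in for a global explanation is scikit-learn's permutation importance, sketched here on synthetic data. The feature names and the data-generating rule are invented for the example.
+
+```python
+import numpy as np
+import pandas as pd
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.inspection import permutation_importance
+from sklearn.model_selection import train_test_split
+
+# Synthetic readmission-style data; only two of the features actually matter
+rng = np.random.default_rng(4)
+X = pd.DataFrame({
+    "prior_admissions": rng.integers(0, 6, 1000),
+    "age": rng.integers(20, 90, 1000),
+    "zip_code_digit": rng.integers(0, 10, 1000),   # pure noise feature
+})
+y = ((X["prior_admissions"] > 2) & (X["age"] > 60)).astype(int)
+
+X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
+model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
+
+# Global explanation: how much does shuffling each feature hurt performance?
+result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
+ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
+for name, score in ranked:
+    print(f"{name:>18}: {score:.3f}")
+```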
+
+
+
+Use interpretability when you need to:
+
+* Determine how trustworthy your AI system's predictions are by understanding which features matter most for the predictions.
+* Approach debugging your model by first understanding it and identifying whether the model is using healthy features or merely spurious correlations.
+* Uncover potential sources of unfairness by understanding whether the model bases its predictions on sensitive features or on features highly correlated with them.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
+* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
+
+## Conclusion
+
+All of the RAI dashboard components are practical tools to help you build machine learning models that are less harmful and more trustworthy to society. They improve the prevention of threats to human rights; of discriminating against or excluding certain groups from life opportunities; and of the risk of physical or psychological injury. They also help build trust in your model's decisions by generating local explanations to illustrate their outcomes. Some of the potential harms can be classified as:
+
+- **Allocation**, if a gender or an ethnicity, for example, is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but the reality is much more complex, it leads to a poorly performing service.
+- **Stereotyping**. Associating a given group with pre-assigned attributes.
+- **Denigration**. Unfairly criticizing and labeling something or someone.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that contributes to harm.
+
+### Azure RAI dashboard
+
+The [Azure RAI dashboard](https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu) is built on open-source tools developed by leading academic institutions and organizations, including Microsoft, that are instrumental for data scientists and AI developers to better understand model behavior and to discover and mitigate undesirable issues in AI models.
+
+- Learn how to use the different components by checking out the [RAI dashboard docs.](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu)
+
+- Check out some [RAI dashboard sample notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) for debugging more responsible AI scenarios in Azure Machine Learning.
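+
+For orientation, the open-source Responsible AI Toolbox typically assembles these components in a flow like the one below. This is an illustrative sketch, not a verified snippet: consult the docs linked above for the exact, current API, and note that `model`, the dataframes, and the column names are placeholders.
+
+```python
+# Illustrative construction flow for the open-source RAI dashboard
+# (placeholder names; check the linked docs for the current API)
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+rai_insights = RAIInsights(
+    model,                       # a fitted sklearn-compatible model
+    train_df, test_df,           # pandas DataFrames including the label column
+    target_column="readmitted",  # label to explain
+    task_type="classification",
+)
+rai_insights.error_analysis.add()   # the Error Analysis component
+rai_insights.explainer.add()        # the Feature Importance component
+rai_insights.compute()
+ResponsibleAIDashboard(rai_insights)
+```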
+
+---
+## 🚀 Challenge
+
+To prevent statistical or data biases from being introduced in the first place, we should:
+
+- have a diversity of backgrounds and perspectives among the people working on systems
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/ms/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..1b6f69dce
--- /dev/null
+++ b/translations/ms/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI (RAI) dashboard
+
+## Instructions
+
+In this lesson you learned about the RAI dashboard, a suite of components built on "open-source" tools to help data scientists perform error analysis, data exploration, fairness assessment, model interpretability, counterfactual/what-if assessments, and causal analysis on AI systems. For this assignment, explore some of the RAI dashboard's sample [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | A paper or PowerPoint presentation is presented discussing the RAI dashboard's components, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/9-Real-World/README.md b/translations/ms/9-Real-World/README.md
new file mode 100644
index 000000000..3b68d434a
--- /dev/null
+++ b/translations/ms/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# Postscript: Real-world applications of classical machine learning
+
+In this section of the curriculum, you will be introduced to some real-world applications of classical ML. We have scoured the internet to find whitepapers and articles about applications that have used these strategies, avoiding neural networks, deep learning, and AI as much as possible. Learn how ML is used in business systems, ecological applications, finance, arts and culture, and more.
+
+> Photo by Alexis Fauvet on Unsplash
+
+## Lessons
+
+1. [Real-World Applications for ML](1-Applications/README.md)
+2. [Model Debugging in Machine Learning using Responsible AI dashboard components](2-Debugging-ML-Models/README.md)
+
+## Credits
+
+"Real-World Applications" was written by a team of folks, including [Jen Looper](https://twitter.com/jenlooper) and [Ornella Altunyan](https://twitter.com/ornelladotcom).
+
+"Model Debugging in Machine Learning using Responsible AI dashboard components" was written by [Ruth Yakubu](https://twitter.com/ruthieyakubu)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/CODE_OF_CONDUCT.md b/translations/ms/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..1bb956aa6
--- /dev/null
+++ b/translations/ms/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft Open Source Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+Resources:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/CONTRIBUTING.md b/translations/ms/CONTRIBUTING.md
new file mode 100644
index 000000000..bd795f663
--- /dev/null
+++ b/translations/ms/CONTRIBUTING.md
@@ -0,0 +1,19 @@
+# Contributing
+
+This project welcomes contributions and suggestions. Most contributions require you to
+agree to a Contributor License Agreement (CLA) declaring that you have the right to,
+and actually do, grant us the rights to use your contribution. For details, visit
+https://cla.microsoft.com.
+
+> Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
+
+When you submit a pull request, a CLA-bot will automatically determine whether you need
+to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
+instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/README.md b/translations/ms/README.md
new file mode 100644
index 000000000..aff1dda4b
--- /dev/null
+++ b/translations/ms/README.md
@@ -0,0 +1,156 @@
+[License](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[Contributors](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[Issues](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[Pull requests](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[PRs welcome](http://makeapullrequest.com)
+
+[Watchers](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[Forks](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[Stars](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[Discord](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# Machine Learning for Beginners - A Curriculum
+
+> 🌍 Travel around the world as we explore Machine Learning by means of world cultures 🌍
+
+Cloud Advocates at Microsoft are pleased to offer a 12-week, 26-lesson curriculum all about **Machine Learning**. In this curriculum, you will learn about what is sometimes called **classic machine learning**, using primarily Scikit-learn as a library and avoiding deep learning, which is covered in our [AI for Beginners curriculum](https://aka.ms/ai4beginners). Pair these lessons with our ['Data Science for Beginners' curriculum](https://aka.ms/ds4beginners), as well!
+
+Travel with us around the world as we apply these classic techniques to data from many areas of the world. Each lesson includes pre- and post-lesson quizzes, written instructions to complete the lesson, a solution, an assignment, and more. Our project-based pedagogy allows you to learn while building, a proven way for new skills to 'stick'.
+
+**✍️ Hearty thanks to our authors** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu, and Amy Boyd
+
+**🎨 Thanks as well to our illustrators** Tomomi Imura, Dasani Madipalli, and Jen Looper
+
+**🙏 Special thanks 🙏 to our Microsoft Student Ambassador authors, reviewers, and content contributors**, notably Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, and Snigdha Agarwal
+
+**🤩 Extra gratitude to Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, and Vidushi Gupta for our R lessons!**
+
+# Getting Started
+
+Follow these steps:
+1. **Fork the Repository**: Click on the "Fork" button at the top-right corner of this page.
+2. **Clone the Repository**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [find all the additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+
+**[Students](https://aka.ms/student-page)**, to use this curriculum, fork the entire repo to your own GitHub account and complete the exercises on your own or with a group:
+
+- Start with a pre-lecture quiz.
+- Read the lecture and complete the activities, pausing and reflecting at each knowledge check.
+- Try to create the projects by comprehending the lessons rather than running the solution code; however, that code is available in the `/solution` folders in each project-oriented lesson.
+- Take the post-lecture quiz.
+- Complete the challenge.
+- Complete the assignment.
+- After completing a lesson group, visit the [Discussion Board](https://github.com/microsoft/ML-For-Beginners/discussions) and "learn out loud" by filling out the appropriate PAT rubric. A 'PAT' is a Progress Assessment Tool, a rubric you fill out to further your learning. You can also react to other PATs so we can learn together.
+
+> For further study, we recommend following these [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott) modules and learning paths.
+
+**Teachers**, we have [included some suggestions](for-teachers.md) on how to use this curriculum.
+
+---
+
+## Video walkthroughs
+
+Some of the lessons are available as short-form videos. You can find all of these in-line in the lessons, or on the [ML for Beginners playlist on the Microsoft Developer YouTube channel](https://aka.ms/ml-beginners-videos) via the link below.
+
+[ML for Beginners playlist](https://aka.ms/ml-beginners-videos)
+
+---
+
+## Meet the Team
+
+[Promo video](https://youtu.be/Tj1XWrDSYJU "Promo video")
+
+**Gif by** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 Click the link above for a video about the project and the folks who created it!
+
+---
+
+## Pedagogy
+
+We have chosen two pedagogical tenets while building this curriculum: ensuring that it is hands-on and **project-based** and that it includes **frequent quizzes**. In addition, this curriculum has a common **theme** to give it cohesion.
+
+By ensuring that the content aligns with projects, the process is made more engaging for students and retention of concepts is augmented. In addition, a low-stakes quiz before a class sets the intention of the student towards learning a topic, while a second quiz after class ensures further retention. This curriculum was designed to be flexible and fun and can be taken in whole or in part. The projects start small and become increasingly complex by the end of the 12-week cycle. This curriculum also includes a postscript on real-world applications of ML, which can be used as extra credit or as a basis for discussion.
+
+> Find our [Code of Conduct](CODE_OF_CONDUCT.md), [Contributing](CONTRIBUTING.md), and [Translation](TRANSLATIONS.md) guidelines. We welcome your constructive feedback!
+
+## Each lesson includes
+
+- an optional sketchnote
+- optional supplemental video
+- a video walkthrough (some lessons only)
+- a pre-lecture warmup quiz
+- a written lesson
+- for project-based lessons, step-by-step guides on how to build the project
+- knowledge checks
+- a challenge
+- supplemental reading
+- an assignment
+- a post-lecture quiz
+
+> **A note about languages**: These lessons are primarily written in Python, but many are also available in R. To complete an R lesson, go to the `/solution` folder and look for the R lessons. They include an .rmd extension, which represents an **R Markdown** file: simply put, an embedding of `code chunks` (of R or other languages) and a `YAML header` (that guides how to format outputs such as PDF) in a `Markdown document`. As such, it serves as an exemplary authoring framework for data science since it allows you to combine your code, its output, and your thoughts by letting you write them down in Markdown. Moreover, R Markdown documents can be rendered to output formats such as PDF, HTML, or Word.
+
+> **A note about quizzes**: All quizzes are contained in the [Quiz App folder](../../quiz-app), for 52 total quizzes of three questions each. They are linked from within the lessons, but the quiz app can also be run locally; follow the instructions in the `quiz-app` folder to host it locally or deploy it to Azure.
+
+| Lesson Number | Topic | Lesson Grouping | Learning Objectives | Linked Lesson | Author |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | Introduction to machine learning | [Introduction](1-Introduction/README.md) | Learn the basic concepts behind machine learning | [Lesson](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | The history of machine learning | [Introduction](1-Introduction/README.md) | Learn the history underlying this field | [Lesson](1-Introduction/2-history-of-ML/README.md) | Jen and Amy |
+| 03 | Fairness and machine learning | [Introduction](1-Introduction/README.md) | What are the important philosophical issues around fairness that students should consider when building and applying ML models? | [Lesson](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | Techniques for machine learning | [Introduction](1-Introduction/README.md) | What techniques do ML researchers use to build ML models? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | Chris and Jen |
+| 05 | Introduction to regression | [Regression](2-Regression/README.md) | Get started with Python and Scikit-learn for regression models | | |
+| 09 | A Web App 🔌 | [Web App](3-Web-App/README.md) | Build a web app to use your trained model | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | Introduction to classification | [Classification](4-Classification/README.md) | Clean, prep, and visualize your data; introduction to classification | | |
+| 13 | Delicious Asian and Indian cuisines 🍜 | [Classification](4-Classification/README.md) | Build a recommender web app using your model | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | Introduction to clustering | [Clustering](5-Clustering/README.md) | Clean, prep, and visualize your data; introduction to clustering | | |
+| 16 | Introduction to natural language processing ☕️ | [Natural language processing](6-NLP/README.md) | Learn the basics of NLP by building a simple bot | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | Common NLP tasks ☕️ | [Natural language processing](6-NLP/README.md) | Deepen your NLP knowledge by understanding the common tasks required when dealing with language structures | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | Translation and sentiment analysis ♥️ | [Natural language processing](6-NLP/README.md) | Translation and sentiment analysis with Jane Austen | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | Introduction to time series forecasting | [Time series](7-TimeSeries/README.md) | Introduction to time series forecasting | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ World Power Usage ⚡️ - time series forecasting with ARIMA | [Time series](7-TimeSeries/README.md) | Time series forecasting with ARIMA | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ World Power Usage ⚡️ - time series forecasting with SVR | [Time series](7-TimeSeries/README.md) | Time series forecasting with a Support Vector Regressor | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | Introduction to reinforcement learning | [Reinforcement learning](8-Reinforcement/README.md) | An introduction to reinforcement learning with Q-Learning | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | Help Peter avoid the wolf! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | Reinforcement learning Gym | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | Real-world ML scenarios and applications | [ML in the Wild](9-Real-World/README.md) | Interesting and revealing real-world applications of classical ML | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | Model Debugging in ML using the RAI dashboard | [ML in the Wild](9-Real-World/README.md) | Model debugging in machine learning using Responsible AI dashboard components | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [find all the additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## Offline access
+
+You can run this documentation offline by using [Docsify](https://docsify.js.org/#/). Fork this repo, [install Docsify](https://docsify.js.org/#/quickstart) on your local machine, and then in the root folder of this repo, type `docsify serve`. The website will be served on port 3000 on your localhost: `localhost:3000`.
+
+## PDF
+Find a PDF of the curriculum with links [here](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf).
+
+## Help Wanted
+
+Would you like to contribute a translation? Please read our [translation guidelines](TRANSLATIONS.md) and add a templated issue to manage the workload [here](https://github.com/microsoft/ML-For-Beginners/issues).
+
+## Other Curricula
+
+Our team produces other curricula! Check out:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101??WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/SECURITY.md b/translations/ms/SECURITY.md
new file mode 100644
index 000000000..1810e47e0
--- /dev/null
+++ b/translations/ms/SECURITY.md
@@ -0,0 +1,40 @@
+## Security
+
+Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
+
+If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott), please report it to us as described below.
+
+## Reporting Security Issues
+
+**Please do not report security vulnerabilities through public GitHub issues.**
+
+Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
+
+If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
+
+You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+  * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
+  * Full paths of the source file(s) related to the manifestation of the issue
+  * The location of the affected source code (tag/branch/commit or direct URL)
+  * Any special configuration required to reproduce the issue
+  * Step-by-step instructions to reproduce the issue
+  * Proof-of-concept or exploit code (if possible)
+  * Impact of the issue, including how an attacker might exploit it
+
+This information will help us triage your report more quickly.
+
+If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
+
+## Preferred Languages
+
+We prefer all communications to be in English.
+
+## Policy
+
+Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/SUPPORT.md b/translations/ms/SUPPORT.md
new file mode 100644
index 000000000..dc5e85b28
--- /dev/null
+++ b/translations/ms/SUPPORT.md
@@ -0,0 +1,13 @@
+# Support
+## How to file issues and get help
+
+This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue.
+
+For help and questions about using this project, file an issue.
+
+## Microsoft Support Policy
+
+Support for this repository is limited to the resources listed above.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/ms/TRANSLATIONS.md b/translations/ms/TRANSLATIONS.md
new file mode 100644
index 000000000..ea1318c11
--- /dev/null
+++ b/translations/ms/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# Menyumbang dengan menterjemahkan pelajaran
+
+Kami mengalu-alukan terjemahan untuk pelajaran dalam kurikulum ini!
+## Garis Panduan
+
+Terdapat folder dalam setiap folder pelajaran dan folder pengenalan pelajaran yang mengandungi fail markdown yang telah diterjemahkan.
+
+> Nota, sila jangan terjemahkan sebarang kod dalam fail sampel kod; perkara yang perlu diterjemahkan hanyalah README, tugasan, dan kuiz. Terima kasih!
+
+Fail yang diterjemahkan harus mengikuti konvensyen penamaan ini:
+
+**README._[language]_.md**
+
+di mana _[language]_ adalah singkatan dua huruf bahasa mengikut standard ISO 639-1 (contohnya `README.es.md` untuk bahasa Sepanyol dan `README.nl.md` untuk bahasa Belanda).
+
+**assignment._[language]_.md**
+
+Serupa dengan Readme, sila terjemahkan tugasan juga.
+
+> Penting: apabila menterjemahkan teks dalam repo ini, sila pastikan anda tidak menggunakan terjemahan mesin. Kami akan mengesahkan terjemahan melalui komuniti, jadi sila hanya menawarkan diri untuk terjemahan dalam bahasa yang anda mahir.
+
+**Kuiz**
+
+1. Tambahkan terjemahan anda kepada aplikasi kuiz dengan menambah fail di sini: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, dengan konvensyen penamaan yang betul (en.json, fr.json). **Walau bagaimanapun, sila jangan terjemahkan perkataan 'true' atau 'false'. Terima kasih!**
+
+2. Tambahkan kod bahasa anda ke dropdown dalam fail App.vue aplikasi kuiz.
+
+3. Edit fail [translations index.js](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) aplikasi kuiz untuk menambah bahasa anda (lihat lakaran selepas senarai ini).
+
+4. Akhir sekali, edit SEMUA pautan kuiz dalam fail README.md yang telah diterjemahkan untuk terus menuju ke kuiz yang telah diterjemahkan: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 menjadi https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id
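+
+Sebagai gambaran, berikut ialah lakaran JavaScript yang minimum bagi langkah 3 (hipotetikal sepenuhnya: nama fail `ms.json` dan bentuk eksport hanyalah andaian, bukan kod sebenar repo ini):
+
+```
+// Lakaran hipotetikal bagi fail index.js terjemahan (bukan fail sebenar repo ini).
+import en from './en.json';
+import fr from './fr.json';
+import ms from './ms.json'; // fail terjemahan baharu anda, dinamakan mengikut kod ISO 639-1
+
+// Eksport setiap bahasa mengikut kod bahasanya supaya aplikasi kuiz
+// boleh memilihnya, contohnya melalui parameter ?loc= pada pautan kuiz.
+export default {
+  en,
+  fr,
+  ms,
+};
+```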
+
+**TERIMA KASIH**
+
+Kami sangat menghargai usaha anda!
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat penting, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/docs/_sidebar.md b/translations/ms/docs/_sidebar.md
new file mode 100644
index 000000000..dc95f1e8a
--- /dev/null
+++ b/translations/ms/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- Pengenalan
+ - [Pengenalan kepada Pembelajaran Mesin](../1-Introduction/1-intro-to-ML/README.md)
+ - [Sejarah Pembelajaran Mesin](../1-Introduction/2-history-of-ML/README.md)
+ - [Pembelajaran Mesin dan Keadilan](../1-Introduction/3-fairness/README.md)
+ - [Teknik-teknik Pembelajaran Mesin](../1-Introduction/4-techniques-of-ML/README.md)
+
+- Regresi
+ - [Alat-alat yang Digunakan](../2-Regression/1-Tools/README.md)
+ - [Data](../2-Regression/2-Data/README.md)
+ - [Regresi Linear](../2-Regression/3-Linear/README.md)
+ - [Regresi Logistik](../2-Regression/4-Logistic/README.md)
+
+- Bina Aplikasi Web
+ - [Aplikasi Web](../3-Web-App/1-Web-App/README.md)
+
+- Klasifikasi
+ - [Pengenalan kepada Klasifikasi](../4-Classification/1-Introduction/README.md)
+ - [Pengelas 1](../4-Classification/2-Classifiers-1/README.md)
+ - [Pengelas 2](../4-Classification/3-Classifiers-2/README.md)
+ - [Pembelajaran Mesin Terapan](../4-Classification/4-Applied/README.md)
+
+- Pengelompokan
+ - [Visualisasikan Data Anda](../5-Clustering/1-Visualize/README.md)
+ - [K-Means](../5-Clustering/2-K-Means/README.md)
+
+- NLP
+ - [Pengenalan kepada NLP](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [Tugas-tugas NLP](../6-NLP/2-Tasks/README.md)
+ - [Terjemahan dan Sentimen](../6-NLP/3-Translation-Sentiment/README.md)
+ - [Ulasan Hotel 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [Ulasan Hotel 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- Ramalan Siri Masa
+ - [Pengenalan kepada Ramalan Siri Masa](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- Pembelajaran Pengukuhan
+ - [Q-Learning](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- Pembelajaran Mesin Dunia Nyata
+ - [Aplikasi](../9-Real-World/1-Applications/README.md)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/for-teachers.md b/translations/ms/for-teachers.md
new file mode 100644
index 000000000..16377111b
--- /dev/null
+++ b/translations/ms/for-teachers.md
@@ -0,0 +1,26 @@
+## Untuk Pendidik
+
+Adakah anda ingin menggunakan kurikulum ini di dalam kelas anda? Sila berbuat demikian!
+
+Malah, anda boleh menggunakannya dalam GitHub sendiri dengan menggunakan GitHub Classroom.
+
+Untuk melakukannya, fork repo ini. Anda akan perlu mencipta repo untuk setiap pelajaran, jadi anda perlu mengekstrak setiap folder ke dalam repo yang berasingan. Dengan cara itu, [GitHub Classroom](https://classroom.github.com/classrooms) boleh mengambil setiap pelajaran secara berasingan.
+
+[Arahan penuh](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) ini akan memberikan anda idea bagaimana untuk menubuhkan kelas anda.
+
+## Menggunakan repo seperti sediakala
+
+Jika anda ingin menggunakan repo ini seperti yang ada sekarang, tanpa menggunakan GitHub Classroom, itu juga boleh dilakukan. Anda perlu memaklumkan kepada pelajar anda pelajaran mana yang perlu diikuti bersama.
+
+Dalam format dalam talian (Zoom, Teams, atau lain-lain) anda mungkin membentuk bilik pecahan untuk kuiz, dan mentor pelajar untuk membantu mereka bersedia untuk belajar. Kemudian jemput pelajar untuk kuiz dan menghantar jawapan mereka sebagai 'issues' pada masa tertentu. Anda mungkin melakukan perkara yang sama dengan tugasan, jika anda mahu pelajar bekerja secara kolaboratif di tempat terbuka.
+
+Jika anda lebih suka format yang lebih peribadi, minta pelajar anda fork kurikulum ini, pelajaran demi pelajaran, ke dalam repo GitHub mereka sendiri sebagai repo peribadi, dan beri anda akses. Kemudian mereka boleh menyelesaikan kuiz dan tugasan secara peribadi dan menghantarnya kepada anda melalui issues pada repo kelas anda.
+
+Terdapat banyak cara untuk membuat ini berfungsi dalam format kelas dalam talian. Sila beritahu kami apa yang paling sesuai untuk anda!
+
+## Sila berikan pendapat anda!
+
+Kami ingin membuat kurikulum ini berfungsi untuk anda dan pelajar anda. Sila beri kami [maklum balas](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u).
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/quiz-app/README.md b/translations/ms/quiz-app/README.md
new file mode 100644
index 000000000..1c66fbcbd
--- /dev/null
+++ b/translations/ms/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Kuiz
+
+Kuiz-kuiz ini adalah kuiz sebelum dan selepas kuliah untuk kurikulum ML di https://aka.ms/ml-beginners
+
+## Persediaan Projek
+
+```
+npm install
+```
+
+### Kompil dan muat semula secara langsung untuk pembangunan
+
+```
+npm run serve
+```
+
+### Kompil dan kecilkan untuk produksi
+
+```
+npm run build
+```
+
+### Lint dan betulkan fail
+
+```
+npm run lint
+```
+
+### Sesuaikan konfigurasi
+
+Lihat [Rujukan Konfigurasi](https://cli.vuejs.org/config/).
+
+Kredit: Terima kasih kepada versi asal aplikasi kuiz ini: https://github.com/arpan45/simple-quiz-vue
+
+## Melancarkan ke Azure
+
+Berikut adalah panduan langkah demi langkah untuk membantu anda memulakan:
+
+1. Fork Repositori GitHub
+Pastikan kod aplikasi web statik anda berada dalam repositori GitHub anda. Fork repositori ini.
+
+2. Buat Aplikasi Web Statik Azure
+- Buat akaun [Azure](http://azure.microsoft.com)
+- Pergi ke [portal Azure](https://portal.azure.com)
+- Klik "Create a resource" dan cari "Static Web App".
+- Klik "Create".
+
+3. Konfigurasikan Aplikasi Web Statik
+- Asas: Langganan: Pilih langganan Azure anda.
+- Kumpulan Sumber: Buat kumpulan sumber baru atau gunakan yang sedia ada.
+- Nama: Berikan nama untuk aplikasi web statik anda.
+- Wilayah: Pilih wilayah yang paling dekat dengan pengguna anda.
+
+- #### Butiran Pelancaran:
+- Sumber: Pilih "GitHub".
+- Akaun GitHub: Benarkan Azure mengakses akaun GitHub anda.
+- Organisasi: Pilih organisasi GitHub anda.
+- Repositori: Pilih repositori yang mengandungi aplikasi web statik anda.
+- Cabang: Pilih cabang yang anda ingin lancarkan.
+
+- #### Butiran Pembinaan:
+- Pratetap Pembinaan: Pilih rangka kerja yang digunakan oleh aplikasi anda (contoh: React, Angular, Vue, dsb.).
+- Lokasi Aplikasi: Nyatakan folder yang mengandungi kod aplikasi anda (contoh: / jika berada di akar).
+- Lokasi API: Jika anda mempunyai API, nyatakan lokasinya (pilihan).
+- Lokasi Output: Nyatakan folder di mana output pembinaan dijana (contoh: build atau dist).
+
+4. Semak dan Buat
+Semak tetapan anda dan klik "Create". Azure akan menyediakan sumber yang diperlukan dan membuat aliran kerja GitHub Actions dalam repositori anda.
+
+5. Aliran Kerja GitHub Actions
+Azure akan secara automatik membuat fail aliran kerja GitHub Actions dalam repositori anda (.github/workflows/azure-static-web-apps-.yml). Aliran kerja ini akan mengendalikan proses pembinaan dan pelancaran.
+
+6. Pantau Pelancaran
+Pergi ke tab "Actions" dalam repositori GitHub anda.
+Anda sepatutnya melihat aliran kerja sedang berjalan. Aliran kerja ini akan membina dan melancarkan aplikasi web statik anda ke Azure.
+Setelah aliran kerja selesai, aplikasi anda akan hidup di URL Azure yang disediakan.
+
+### Fail Aliran Kerja Contoh
+
+Berikut adalah contoh bagaimana fail aliran kerja GitHub Actions mungkin kelihatan:
+```
+name: Azure Static Web Apps CI/CD
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - main
+
+jobs:
+  build_and_deploy_job:
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v1
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }}
+          action: "upload"
+          app_location: "/quiz-app" # App source code path
+          api_location: "" # API source code path - optional
+          output_location: "dist" # Built app content directory - optional
+```
+
+### Sumber Tambahan
+- [Dokumentasi Azure Static Web Apps](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [Dokumentasi GitHub Actions](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat penting, terjemahan manusia profesional disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/sketchnotes/LICENSE.md b/translations/ms/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..e486242c6
--- /dev/null
+++ b/translations/ms/sketchnotes/LICENSE.md
@@ -0,0 +1,146 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") bukan firma undang-undang dan tidak menyediakan perkhidmatan atau nasihat undang-undang. Pengedaran lesen awam Creative Commons tidak mewujudkan hubungan peguam-klien atau hubungan lain. Creative Commons menyediakan lesen dan maklumat berkaitan berdasarkan "as-is". Creative Commons tidak memberikan sebarang jaminan mengenai lesen mereka, bahan yang dilesenkan di bawah terma dan syarat mereka, atau maklumat berkaitan. Creative Commons menafikan semua tanggungjawab atas kerosakan yang diakibatkan oleh penggunaannya sejauh mana yang mungkin.
+
+Menggunakan Lesen Awam Creative Commons
+
+Lesen awam Creative Commons menyediakan satu set terma dan syarat standard yang boleh digunakan oleh pencipta dan pemegang hak lain untuk berkongsi karya asli dan bahan lain yang tertakluk kepada hak cipta dan hak tertentu lain yang dinyatakan dalam lesen awam di bawah. Pertimbangan berikut adalah untuk tujuan maklumat sahaja, tidak menyeluruh, dan bukan sebahagian daripada lesen kami.
+
+ Pertimbangan untuk pemberi lesen: Lesen awam kami ditujukan untuk digunakan oleh mereka yang diberi kuasa untuk memberikan kebenaran awam untuk menggunakan bahan dengan cara yang sebaliknya dihadkan oleh hak cipta dan hak tertentu lain. Lesen kami tidak boleh dibatalkan. Pemberi lesen harus membaca dan memahami terma dan syarat lesen yang mereka pilih sebelum menggunakannya. Pemberi lesen juga harus mendapatkan semua hak yang diperlukan sebelum mengaplikasikan lesen kami supaya awam boleh menggunakan bahan seperti yang diharapkan. Pemberi lesen harus dengan jelas menandakan sebarang bahan yang tidak tertakluk kepada lesen. Ini termasuk bahan lain yang dilesenkan oleh CC, atau bahan yang digunakan di bawah pengecualian atau had kepada hak cipta. Pertimbangan lebih lanjut untuk pemberi lesen: wiki.creativecommons.org/Considerations_for_licensors
+
+ Pertimbangan untuk awam: Dengan menggunakan salah satu lesen awam kami, pemberi lesen memberikan kebenaran awam untuk menggunakan bahan yang dilesenkan di bawah terma dan syarat tertentu. Jika kebenaran pemberi lesen tidak diperlukan atas sebarang sebab - contohnya, kerana pengecualian atau had yang berkenaan kepada hak cipta - maka penggunaan itu tidak diatur oleh lesen. Lesen kami hanya memberikan kebenaran di bawah hak cipta dan hak tertentu lain yang pemberi lesen mempunyai kuasa untuk memberikan. Penggunaan bahan yang dilesenkan mungkin masih dihadkan atas sebab lain, termasuk kerana orang lain mempunyai hak cipta atau hak lain dalam bahan tersebut. Pemberi lesen mungkin membuat permintaan khas, seperti meminta semua perubahan ditandakan atau diterangkan. Walaupun tidak diwajibkan oleh lesen kami, anda digalakkan untuk menghormati permintaan tersebut jika munasabah. Pertimbangan lebih lanjut untuk awam: wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-ShareAlike 4.0 International Public License
+
+Dengan menggunakan Hak Dilesenkan (ditakrifkan di bawah), Anda menerima dan bersetuju untuk terikat dengan terma dan syarat Creative Commons Attribution-ShareAlike 4.0 International Public License ("Lesen Awam"). Setakat mana Lesen Awam ini boleh ditafsirkan sebagai kontrak, Anda diberikan Hak Dilesenkan sebagai pertimbangan atas penerimaan Anda terhadap terma dan syarat ini, dan Pemberi Lesen memberikan Anda hak tersebut sebagai pertimbangan atas manfaat yang diterima Pemberi Lesen daripada membuat Bahan Dilesenkan tersedia di bawah terma dan syarat ini.
+
+Seksyen 1 -- Definisi.
+
+ a. Bahan Diadaptasi bermaksud bahan yang tertakluk kepada Hak Cipta dan Hak Serupa yang berasal dari atau berdasarkan Bahan Dilesenkan dan di mana Bahan Dilesenkan diterjemahkan, diubah, disusun, diubahsuai, atau diubah dalam cara yang memerlukan kebenaran di bawah Hak Cipta dan Hak Serupa yang dipegang oleh Pemberi Lesen. Untuk tujuan Lesen Awam ini, di mana Bahan Dilesenkan adalah karya muzik, persembahan, atau rakaman bunyi, Bahan Diadaptasi selalu dihasilkan di mana Bahan Dilesenkan disinkronkan dalam hubungan masa dengan imej bergerak.
+
+ b. Lesen Adapter bermaksud lesen yang Anda gunakan untuk Hak Cipta dan Hak Serupa Anda dalam sumbangan Anda kepada Bahan Diadaptasi mengikut terma dan syarat Lesen Awam ini.
+
+ c. Lesen Serasi BY-SA bermaksud lesen yang disenaraikan di creativecommons.org/compatiblelicenses, diluluskan oleh Creative Commons sebagai setara dengan Lesen Awam ini.
+
+ d. Hak Cipta dan Hak Serupa bermaksud hak cipta dan/atau hak serupa yang berkaitan rapat dengan hak cipta termasuk, tanpa had, persembahan, siaran, rakaman bunyi, dan Hak Pangkalan Data Sui Generis, tanpa mengira bagaimana hak tersebut dilabel atau dikategorikan. Untuk tujuan Lesen Awam ini, hak yang dinyatakan dalam Seksyen 2(b)(1)-(2) bukan Hak Cipta dan Hak Serupa.
+
+ e. Langkah Teknologi Berkesan bermaksud langkah-langkah yang, tanpa kuasa yang betul, tidak boleh dielakkan di bawah undang-undang yang memenuhi kewajipan di bawah Artikel 11 Perjanjian Hak Cipta WIPO yang diterima pakai pada 20 Disember 1996, dan/atau perjanjian antarabangsa serupa.
+
+ f. Pengecualian dan Had bermaksud penggunaan adil, perjanjian adil, dan/atau sebarang pengecualian atau had lain kepada Hak Cipta dan Hak Serupa yang berlaku kepada penggunaan Anda terhadap Bahan Dilesenkan.
+
+ g. Elemen Lesen bermaksud atribut lesen yang disenaraikan dalam nama Lesen Awam Creative Commons. Elemen Lesen bagi Lesen Awam ini adalah Atribusi dan KongsiSerupa.
+
+ h. Bahan Dilesenkan bermaksud karya seni atau sastera, pangkalan data, atau bahan lain yang Pemberi Lesen menggunakan Lesen Awam ini.
+
+ i. Hak Dilesenkan bermaksud hak yang diberikan kepada Anda tertakluk kepada terma dan syarat Lesen Awam ini, yang terhad kepada semua Hak Cipta dan Hak Serupa yang berlaku kepada penggunaan Anda terhadap Bahan Dilesenkan dan yang Pemberi Lesen mempunyai kuasa untuk melesenkan.
+
+ j. Pemberi Lesen bermaksud individu atau entiti yang memberikan hak di bawah Lesen Awam ini.
+
+ k. Kongsi bermaksud menyediakan bahan kepada awam dengan apa-apa cara atau proses yang memerlukan kebenaran di bawah Hak Dilesenkan, seperti penghasilan semula, paparan awam, persembahan awam, pengedaran, penyebaran, komunikasi, atau pengimportan, dan membuat bahan tersedia kepada awam termasuk dengan cara yang anggota awam boleh mengakses bahan dari tempat dan pada masa yang dipilih secara individu oleh mereka.
+
+ l. Hak Pangkalan Data Sui Generis bermaksud hak selain daripada hak cipta yang berpunca daripada Arahan 96/9/EC Parlimen Eropah dan Majlis pada 11 Mac 1996 mengenai perlindungan undang-undang pangkalan data, seperti yang dipinda dan/atau digantikan, serta hak setara di mana-mana sahaja di dunia.
+
+ m. Anda bermaksud individu atau entiti yang menggunakan Hak Dilesenkan di bawah Lesen Awam ini. Anda mempunyai makna yang sepadan.
+
+Seksyen 2 -- Skop.
+
+ a. Pemberian lesen.
+
+ 1. Tertakluk kepada terma dan syarat Lesen Awam ini, Pemberi Lesen dengan ini memberikan Anda lesen seluruh dunia, bebas royalti, tidak boleh disublesenkan, tidak eksklusif, tidak boleh dibatalkan untuk menggunakan Hak Dilesenkan dalam Bahan Dilesenkan untuk:
+
+ a. menghasilkan semula dan Berkongsi Bahan Dilesenkan, secara keseluruhan atau sebahagian; dan
+
+ b. menghasilkan, menghasilkan semula, dan Berkongsi Bahan Diadaptasi.
+
+ 2. Pengecualian dan Had. Untuk mengelakkan keraguan, di mana Pengecualian dan Had berlaku kepada penggunaan Anda, Lesen Awam ini tidak berlaku, dan Anda tidak perlu mematuhi terma dan syaratnya.
+
+ 3. Tempoh. Tempoh Lesen Awam ini dinyatakan dalam Seksyen 6(a).
+
+ 4. Media dan format; pengubahsuaian teknikal dibenarkan. Pemberi Lesen memberi kuasa kepada Anda untuk menggunakan Hak Dilesenkan dalam semua media dan format sama ada yang diketahui sekarang atau yang akan datang, dan untuk membuat pengubahsuaian teknikal yang diperlukan untuk melakukannya. Pemberi Lesen mengetepikan dan/atau bersetuju untuk tidak menegaskan sebarang hak atau kuasa untuk melarang Anda daripada membuat pengubahsuaian teknikal yang diperlukan untuk menggunakan Hak Dilesenkan, termasuk pengubahsuaian teknikal yang diperlukan untuk mengelakkan Langkah Teknologi Berkesan. Untuk tujuan Lesen Awam ini, hanya membuat pengubahsuaian yang dibenarkan oleh Seksyen 2(a)(4) tidak pernah menghasilkan Bahan Diadaptasi.
+
+ 5. Penerima hiliran.
+
+ a. Tawaran daripada Pemberi Lesen -- Bahan Dilesenkan. Setiap penerima Bahan Dilesenkan secara automatik menerima tawaran daripada Pemberi Lesen untuk menggunakan Hak Dilesenkan di bawah terma dan syarat Lesen Awam ini.
+
+ b. Tawaran tambahan daripada Pemberi Lesen -- Bahan Diadaptasi. Setiap penerima Bahan Diadaptasi daripada Anda secara automatik menerima tawaran daripada Pemberi Lesen untuk menggunakan Hak Dilesenkan dalam Bahan Diadaptasi di bawah syarat-syarat Lesen Adapter yang Anda gunakan.
+
+ c. Tiada sekatan hiliran. Anda tidak boleh menawarkan atau mengenakan sebarang terma atau syarat tambahan atau berbeza, atau menggunakan sebarang Langkah Teknologi Berkesan kepada, Bahan Dilesenkan jika melakukannya menghalang penggunaan Hak Dilesenkan oleh mana-mana penerima Bahan Dilesenkan.
+
+ 6. Tiada sokongan. Tiada apa-apa dalam Lesen Awam ini yang membentuk atau boleh ditafsirkan sebagai kebenaran untuk menyatakan atau mengimplikasikan bahawa Anda, atau penggunaan Anda terhadap Bahan Dilesenkan, dikaitkan dengan, atau disokong, disahkan, atau diberikan status rasmi oleh, Pemberi Lesen atau pihak lain yang ditetapkan untuk menerima atribusi seperti yang dinyatakan dalam Seksyen 3(a)(1)(A)(i).
+
+ b. Hak lain.
+
+ 1. Hak moral, seperti hak integriti, tidak dilesenkan di bawah Lesen Awam ini, begitu juga hak publisiti, privasi, dan/atau hak personaliti serupa lain; walau bagaimanapun, sejauh yang mungkin, Pemberi Lesen mengetepikan dan/atau bersetuju untuk tidak menegaskan hak tersebut yang dipegang oleh Pemberi Lesen sejauh yang diperlukan untuk membolehkan Anda menggunakan Hak Dilesenkan, tetapi tidak sebaliknya.
+
+ 2. Hak paten dan tanda dagangan tidak dilesenkan di bawah Lesen Awam ini.
+
+ 3. Sejauh yang mungkin, Pemberi Lesen mengetepikan sebarang hak untuk mengutip royalti daripada Anda untuk penggunaan Hak Dilesenkan, sama ada secara langsung atau melalui persatuan pengutipan di bawah sebarang skim pelesenan sukarela atau boleh diketepikan secara undang-undang atau wajib. Dalam semua kes lain, Pemberi Lesen secara jelas mengekalkan sebarang hak untuk mengutip royalti tersebut.
+
+Seksyen 3 -- Syarat Lesen.
+
+Penggunaan Anda terhadap Hak Dilesenkan secara jelas tertakluk kepada syarat-syarat berikut.
+
+ a. Atribusi.
+
+ 1. Jika Anda Berkongsi Bahan Dilesenkan (termasuk dalam bentuk yang diubah), Anda mesti:
+
+ a. mengekalkan yang berikut jika ia disediakan oleh Pemberi Lesen dengan Bahan Dilesenkan:
+
+ i. pengenalan pencipta Bahan Dilesenkan dan mana-mana pihak lain yang ditetapkan untuk menerima atribusi, dengan cara yang munasabah yang diminta oleh Pemberi Lesen (termasuk dengan nama samaran jika ditetapkan);
+
+ ii. notis hak cipta;
+
+ iii. notis yang merujuk kepada Lesen Awam ini;
+
+ iv. notis yang merujuk kepada penafian jaminan;
+
+ v. URI atau pautan hiperteks ke Bahan Dilesenkan sejauh mana yang munasabah;
+
+ b. menunjukkan jika Anda mengubah Bahan Dilesenkan dan mengekalkan petunjuk sebarang pengubahsuaian sebelumnya; dan
+
+ c. menunjukkan bahawa Bahan Dilesenkan adalah dilesenkan di bawah Lesen Awam ini, dan menyertakan teks, atau URI atau pautan hiperteks ke, Lesen Awam ini.
+
+ 2. Anda boleh memenuhi syarat dalam Seksyen 3(a)(1) dengan cara yang munasabah berdasarkan medium, cara, dan konteks di mana Anda Berkongsi Bahan Dilesenkan. Contohnya, mungkin munasabah untuk memenuhi syarat dengan menyediakan URI atau pautan hiperteks ke sumber yang mengandungi maklumat yang diperlukan.
+
+ 3. Jika diminta oleh Pemberi Lesen, Anda mesti mengeluarkan sebarang maklumat yang diperlukan oleh Seksyen 3(a)(1)(A) sejauh yang munasabah.
+
+ b. KongsiSerupa.
+
+ Sebagai tambahan kepada syarat dalam Seksyen 3(a), jika Anda Berkongsi Bahan Diadaptasi yang Anda hasilkan, syarat berikut juga terpakai.
+
+ 1. Lesen Adapter yang Anda gunakan mesti merupakan lesen Creative Commons dengan Elemen Lesen yang sama, versi ini atau yang lebih baru, atau Lesen Serasi BY-SA.
+
+ 2. Anda mesti menyertakan teks, atau URI atau pautan hiperteks ke, Lesen Adapter yang Anda gunakan. Anda boleh memenuhi syarat ini dengan cara yang munasabah berdasarkan medium, cara, dan konteks di mana Anda Berkongsi Bahan Diadaptasi.
+
+ 3. Anda tidak boleh menawarkan atau mengenakan sebarang terma atau syarat tambahan atau berbeza, atau menggunakan sebarang Langkah Teknologi Berkesan kepada, Bahan Diadaptasi yang menghalang penggunaan hak yang diberikan di bawah Lesen Adapter yang Anda gunakan.
+
+Seksyen 4 -- Hak Pangkalan Data Sui Generis.
+
+Di mana Hak Dilesenkan termasuk Hak Pangkalan Data Sui Generis yang berlaku kepada penggunaan Anda terhadap Bahan Dilesenkan:
+
+ a. untuk mengelakkan keraguan, Seksyen 2(a)(1) memberikan Anda hak untuk mengekstrak, menggunakan semula, menghasilkan semula, dan Berkongsi semua atau sebahagian besar kandungan pangkalan data;
+
+ b. jika Anda menyertakan semua atau sebahagian besar kandungan pangkalan data dalam pangkalan data yang Anda mempunyai Hak Pangkalan Data Sui Generis, maka pangkalan data yang Anda mempunyai Hak Pangkalan Data Sui Generis (tetapi bukan kandungan individunya) adalah Bahan Diadaptasi, termasuk untuk tujuan Seksyen 3(b); dan
+
+ c. Anda mesti mematuhi syarat dalam Seksyen 3(a) jika Anda Berkongsi semua atau sebahagian besar kandungan pangkalan data.
+
+Untuk mengelakkan keraguan, Seksyen 4 ini melengkapi dan tidak menggantikan kewajipan Anda di bawah Lesen Awam ini di mana Hak Dilesenkan termasuk Hak Cipta dan Hak Serupa lain.
+
+Seksyen 5 -- Penafian Jaminan dan Had Tanggungjawab.
+
+ a. KECUALI JIKA DISEBUTKAN SECARA BERASINGAN OLEH PEMBERI LESEN, SEJAUH YANG MUNGKIN, PEMBERI LESEN MENAWARKAN BAHAN DILESENKAN "AS-IS" DAN "AS-AVAILABLE", DAN TIDAK MEMBUAT SEBARANG PERNYATAAN ATAU JAMINAN JENIS APA PUN MENGENAI BAHAN DILESENKAN, SAMA ADA DINYATAKAN, TERSIRAT, BERKANUN, ATAU LAINNYA. INI TERMASUK, TANPA HAD, JAMINAN HAK MILIK, KESESUAIAN UNTUK TUJUAN TERTENTU, TIDAK MELANGGAR HAK, KETIADAAN KECACATAN TERSEMBUNYI ATAU LAIN, KETEPATAN, ATAU KEHADIRAN ATAU KETIADAAN KESALAHAN, SAMA ADA DIKETAHUI ATAU TIDAK DIKETAHUI ATAU DAPAT DIKESAN. DI MANA PENAFIAN JAMINAN TIDAK DIBENARKAN SEPENUHNYA ATAU SEBAHAGIAN, PENAFIAN INI MUNGKIN TIDAK TERPAKAI KEPADA ANDA.
+
+ b. SEJAUH YANG MUNGKIN, DALAM APA JUA KEADAAN PEMBERI LESEN TIDAK AKAN BERTANGGUNGJAWAB KEPADA ANDA ATAS SEBARANG TEORI UNDANG-UNDANG (TERMASUK, TANPA HAD, KECUAIAN) ATAU SEBALIKNYA UNTUK SEBARANG KERUGIAN LANGSUNG, KHAS, TIDAK LANGSUNG, INSIDENTAL, KONSEKUENSI, PUNITIF, CONTOH, ATAU LAINNYA, KOS, PERBELANJAAN, ATAU KEROSAKAN YANG TIMBUL DARIPADA LESEN AWAM INI ATAU PENGGUNAAN BAHAN DILESENKAN, WALAUPUN PEMBERI LESEN TELAH DINASIHATKAN TENTANG KEMUNGKINAN KERUGIAN, KOS, PERBELANJAAN, ATAU KEROSAKAN TERSEBUT. DI MANA HAD TANGGUNGJAWAB TIDAK DIBENARKAN SEPENUHNYA ATAU SEBAHAGIAN, HAD INI MUNGKIN TIDAK TERPAKAI KEPADA ANDA.
+
+ c. Penafian jaminan dan had tanggungjawab yang dinyatakan di atas hendaklah ditafsirkan dengan cara yang, sejauh yang mungkin, paling hampir dengan penafian dan pengecualian tanggungjawab sepenuhnya.
+
+Seksyen 6 -- Tempoh dan Penamatan.
+
+ a. Lesen Aw
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila maklum bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat penting, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/ms/sketchnotes/README.md b/translations/ms/sketchnotes/README.md
new file mode 100644
index 000000000..a5a1f23e5
--- /dev/null
+++ b/translations/ms/sketchnotes/README.md
@@ -0,0 +1,10 @@
+Semua sketchnotes kurikulum boleh dimuat turun di sini.
+
+🖨 Untuk cetakan dalam resolusi tinggi, versi TIFF boleh didapati di [repo ini](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff).
+
+🎨 Dicipta oleh: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**Penafian**:
+Dokumen ini telah diterjemahkan menggunakan perkhidmatan terjemahan AI berasaskan mesin. Walaupun kami berusaha untuk ketepatan, sila ambil perhatian bahawa terjemahan automatik mungkin mengandungi kesilapan atau ketidaktepatan. Dokumen asal dalam bahasa asalnya harus dianggap sebagai sumber yang berwibawa. Untuk maklumat kritikal, terjemahan manusia profesional adalah disyorkan. Kami tidak bertanggungjawab ke atas sebarang salah faham atau salah tafsir yang timbul daripada penggunaan terjemahan ini.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/1-intro-to-ML/README.md b/translations/sw/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..6cc83738a
--- /dev/null
+++ b/translations/sw/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# Utangulizi wa ujifunzaji wa mashine
+
+## [Jaribio la kabla ya muhadhara](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[ML kwa wanaoanza - Utangulizi wa Ujifunzaji wa Mashine kwa Wanaoanza](https://youtu.be/6mSx_KJxcHI)
+
+> 🎥 Bonyeza kiungo hapo juu kwa video fupi inayopitia somo hili.
+
+Karibu kwenye kozi hii ya ujifunzaji wa mashine ya kiasili kwa wanaoanza! Ikiwa wewe ni mgeni kabisa kwenye mada hii, au ni mtaalamu wa ML mwenye uzoefu anayetafuta kuboresha eneo fulani, tunafurahi kuwa na wewe! Tunataka kuunda sehemu rafiki ya kuanzisha masomo yako ya ML na tutafurahi kutathmini, kujibu, na kujumuisha [maoni yako](https://github.com/microsoft/ML-For-Beginners/discussions).
+
+[Utangulizi wa ML](https://youtu.be/h0e2HAPTGF4)
+
+> 🎥 Bonyeza kiungo hapo juu kwa video: John Guttag wa MIT anatambulisha ujifunzaji wa mashine
+
+---
+## Kuanza na ujifunzaji wa mashine
+
+Kabla ya kuanza na mtaala huu, unahitaji kuwa na kompyuta yako imewekwa na tayari kuendesha vitabu vya maelezo (notebooks) kwa ndani.
+
+- **Sanidi kompyuta yako na video hizi**. Tumia viungo vifuatavyo kujifunza [jinsi ya kusakinisha Python](https://youtu.be/CXZYvNRIAKM) kwenye mfumo wako na [kuweka mhariri wa maandishi](https://youtu.be/EU8eayHWoZg) kwa ajili ya maendeleo.
+- **Jifunze Python**. Pia inashauriwa kuwa na uelewa wa msingi wa [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), lugha ya programu inayofaa kwa wanasayansi wa data ambayo tunatumia katika kozi hii.
+- **Jifunze Node.js na JavaScript**. Pia tunatumia JavaScript mara kadhaa katika kozi hii tunapojenga programu za wavuti, kwa hivyo utahitaji kuwa na [node](https://nodejs.org) na [npm](https://www.npmjs.com/) zimesakinishwa, pamoja na [Visual Studio Code](https://code.visualstudio.com/) inayopatikana kwa ajili ya maendeleo ya Python na JavaScript.
+- **Unda akaunti ya GitHub**. Kwa kuwa umetuona hapa kwenye [GitHub](https://github.com), unaweza tayari kuwa na akaunti, lakini kama huna, unda moja na kisha nakili mtaala huu kutumia wewe mwenyewe. (Usisite kutupa nyota, pia 😊)
+- **Chunguza Scikit-learn**. Jijulishe na [Scikit-learn](https://scikit-learn.org/stable/user_guide.html), seti ya maktaba za ML ambazo tunazirejelea katika masomo haya.
+
+---
+## Ujifunzaji wa mashine ni nini?
+
+Neno 'ujifunzaji wa mashine' ni moja ya maneno maarufu na yanayotumika mara nyingi siku hizi. Kuna uwezekano mkubwa kwamba umesikia neno hili angalau mara moja ikiwa una aina fulani ya ufahamu wa teknolojia, bila kujali unafanya kazi katika uwanja gani. Mitambo ya ujifunzaji wa mashine, hata hivyo, ni fumbo kwa watu wengi. Kwa mwanzilishi wa ujifunzaji wa mashine, somo hili linaweza kuhisi kuwa gumu wakati mwingine. Kwa hiyo, ni muhimu kuelewa ujifunzaji wa mashine ni nini hasa, na kujifunza kuhusu hilo hatua kwa hatua, kupitia mifano ya vitendo.
+
+---
+## Mchoro wa hype
+
+
+
+> Google Trends inaonyesha 'mchoro wa hype' wa hivi karibuni wa neno 'ujifunzaji wa mashine'
+
+---
+## Ulimwengu wa fumbo
+
+Tunaishi katika ulimwengu uliojaa fumbo za kuvutia. Wanasayansi wakubwa kama Stephen Hawking, Albert Einstein, na wengine wengi wamejitolea maisha yao kutafuta habari za maana ambazo zinagundua fumbo za ulimwengu unaotuzunguka. Hii ni hali ya binadamu ya kujifunza: mtoto wa binadamu hujifunza mambo mapya na kugundua muundo wa ulimwengu wao mwaka baada ya mwaka wanapokua hadi utu uzima.
+
+---
+## Ubongo wa mtoto
+
+Ubongo wa mtoto na hisia zake hutambua ukweli wa mazingira yao na taratibu hujifunza mifumo iliyofichwa ya maisha ambayo husaidia mtoto kuunda sheria za kimantiki za kutambua mifumo iliyojifunza. Mchakato wa kujifunza wa ubongo wa binadamu hufanya wanadamu kuwa viumbe vyenye ustadi zaidi duniani. Kujifunza mfululizo kwa kugundua mifumo iliyofichwa na kisha kubuni kwenye mifumo hiyo hutuwezesha kujiboresha zaidi na zaidi katika maisha yetu yote. Uwezo huu wa kujifunza na uwezo wa kubadilika unahusiana na dhana inayoitwa [ubongo plastiki](https://www.simplypsychology.org/brain-plasticity.html). Kwa juu juu, tunaweza kuchora baadhi ya mfanano wa motisha kati ya mchakato wa kujifunza wa ubongo wa binadamu na dhana za ujifunzaji wa mashine.
+
+---
+## Ubongo wa binadamu
+
+[Ubongo wa binadamu](https://www.livescience.com/29365-human-brain.html) hutambua mambo kutoka ulimwengu wa kweli, huchakata habari iliyotambuliwa, hufanya maamuzi ya kimantiki, na hufanya vitendo fulani kulingana na hali. Hii ndio tunaita kujiendesha kwa akili. Tunapopanga nakala ya mchakato wa tabia ya akili kwa mashine, inaitwa akili ya bandia (AI).
+
+---
+## Baadhi ya istilahi
+
+Ingawa maneno yanaweza kuchanganywa, ujifunzaji wa mashine (ML) ni sehemu muhimu ya akili ya bandia. **ML inahusu kutumia algoriti maalum kugundua habari za maana na kupata mifumo iliyofichwa kutoka kwa data iliyotambuliwa ili kuthibitisha mchakato wa kufanya maamuzi ya kimantiki**.
+
+---
+## AI, ML, Kujifunza kwa kina
+
+
+
+> Mchoro unaoonyesha uhusiano kati ya AI, ML, kujifunza kwa kina, na sayansi ya data. Picha ya [Jen Looper](https://twitter.com/jenlooper) iliyochochewa na [picha hii](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)
+
+---
+## Dhana za kufunika
+
+Katika mtaala huu, tutaangazia tu dhana za msingi za ujifunzaji wa mashine ambazo mwanzilishi lazima azijue. Tunashughulikia kile tunachokiita 'ujifunzaji wa mashine wa kiasili' hasa kwa kutumia Scikit-learn, maktaba bora ambayo wanafunzi wengi hutumia kujifunza misingi. Ili kuelewa dhana pana za akili ya bandia au kujifunza kwa kina, maarifa thabiti ya msingi ya ujifunzaji wa mashine ni muhimu, na hivyo tungependa kuyatoa hapa.
+
+---
+## Katika kozi hii utajifunza:
+
+- dhana za msingi za ujifunzaji wa mashine
+- historia ya ML
+- ML na usawa
+- mbinu za ML za regression
+- mbinu za ML za uainishaji
+- mbinu za ML za clustering
+- mbinu za ML za usindikaji wa lugha asilia
+- mbinu za ML za utabiri wa mfululizo wa wakati
+- ujifunzaji wa kuimarisha
+- matumizi halisi ya ML
+
+---
+## Kile hatutafunika
+
+- kujifunza kwa kina
+- mitandao ya neva
+- AI
+
+Ili kufanya uzoefu wa kujifunza kuwa bora, tutakwepa ugumu wa mitandao ya neva, 'kujifunza kwa kina' - ujenzi wa mifano yenye tabaka nyingi kwa kutumia mitandao ya neva - na AI, ambayo tutajadili katika mtaala tofauti. Pia tutatoa mtaala ujao wa sayansi ya data ili kuzingatia kipengele hicho cha uwanja huu mkubwa.
+
+---
+## Kwa nini ujifunze ujifunzaji wa mashine?
+
+Ujifunzaji wa mashine, kutoka mtazamo wa mifumo, unafafanuliwa kama uundaji wa mifumo ya kiotomatiki inayoweza kujifunza mifumo iliyofichwa kutoka kwa data ili kusaidia kufanya maamuzi ya akili.
+
+Motisha hii imechochewa kwa kiasi fulani na jinsi ubongo wa binadamu unavyojifunza mambo fulani kulingana na data inayotambuliwa kutoka ulimwengu wa nje.
+
+✅ Fikiria kwa dakika moja kwa nini biashara ingependa kujaribu kutumia mikakati ya ujifunzaji wa mashine dhidi ya kuunda injini ya sheria zilizowekwa ngumu.
+
+---
+## Matumizi ya ujifunzaji wa mashine
+
+Matumizi ya ujifunzaji wa mashine sasa yako karibu kila mahali, na ni kama data inayoenea katika jamii zetu, inayozalishwa na simu zetu mahiri, vifaa vilivyounganishwa, na mifumo mingine. Kwa kuzingatia uwezo mkubwa wa algoriti za kisasa za ujifunzaji wa mashine, watafiti wamekuwa wakichunguza uwezo wao wa kutatua matatizo ya maisha ya kila siku yenye vipimo vingi na taaluma nyingi na matokeo mazuri.
+
+---
+## Mifano ya ML iliyotumika
+
+**Unaweza kutumia ujifunzaji wa mashine kwa njia nyingi**:
+
+- Kutabiri uwezekano wa ugonjwa kutoka historia ya matibabu ya mgonjwa au ripoti.
+- Kutumia data ya hali ya hewa kutabiri matukio ya hali ya hewa.
+- Kuelewa hisia ya maandishi.
+- Kugundua habari za uongo ili kuzuia kuenea kwa propaganda.
+
+Fedha, uchumi, sayansi ya dunia, uchunguzi wa anga, uhandisi wa biomedical, sayansi ya utambuzi, na hata nyanja za sayansi ya jamii zimechukua ujifunzaji wa mashine kutatua matatizo magumu ya uchakataji wa data katika maeneo yao.
+
+---
+## Hitimisho
+
+Ujifunzaji wa mashine unaotomatisha mchakato wa kugundua mifumo kwa kupata maarifa ya maana kutoka kwa data halisi au data iliyotengenezwa. Imejidhihirisha kuwa na thamani kubwa katika biashara, afya, na matumizi ya kifedha, kati ya mengine.
+
+Katika siku za usoni, kuelewa misingi ya ujifunzaji wa mashine kutakuwa lazima kwa watu kutoka fani yoyote kutokana na kuenea kwake.
+
+---
+## 🚀 Changamoto
+
+Chora, kwenye karatasi au kwa kutumia programu ya mtandaoni kama [Excalidraw](https://excalidraw.com/), uelewa wako wa tofauti kati ya AI, ML, kujifunza kwa kina, na sayansi ya data. Ongeza baadhi ya mawazo ya matatizo ambayo kila moja ya mbinu hizi ni nzuri katika kutatua.
+
+## [Jaribio la baada ya muhadhara](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+## Mapitio & Kujisomea
+
+Ili kujifunza zaidi kuhusu jinsi unavyoweza kufanya kazi na algoriti za ML kwenye wingu, fuata [Njia ya Kujifunza](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott).
+
+Chukua [Njia ya Kujifunza](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) kuhusu misingi ya ML.
+
+---
+## Kazi
+
+[Anza na kukimbia](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/1-intro-to-ML/assignment.md b/translations/sw/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..40265b7f2
--- /dev/null
+++ b/translations/sw/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# Anza na Kukimbia
+
+## Maelekezo
+
+Katika kazi hii isiyopimwa, unapaswa kujikumbusha Python na kuandaa mazingira yako ili yaweze kuendesha daftari.
+
+Chukua [Python Learning Path](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott), kisha andaa mifumo yako kwa kupitia video hizi za utangulizi:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokubaliana. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalam inapendekezwa. Hatutawajibika kwa kutokuelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/2-history-of-ML/README.md b/translations/sw/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..56840a381
--- /dev/null
+++ b/translations/sw/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# Historia ya ujifunzaji wa mashine
+
+
+> Sketchnote na [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Jaribio la kabla ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[ML kwa wanaoanza - Historia ya Ujifunzaji wa Mashine](https://youtu.be/N6wxM4wZ7V0)
+
+> 🎥 Bonyeza kiungo hapo juu kwa video fupi inayopitia somo hili.
+
+Katika somo hili, tutapitia hatua kuu katika historia ya ujifunzaji wa mashine na akili bandia.
+
+Historia ya akili bandia (AI) kama uwanja inaunganishwa na historia ya ujifunzaji wa mashine, kwani algorithmi na maendeleo ya kompyuta yanayounga mkono ML yaliingia katika maendeleo ya AI. Ni muhimu kukumbuka kwamba, ingawa nyanja hizi kama maeneo tofauti ya uchunguzi zilianza kuimarika miaka ya 1950, [uvumbuzi muhimu wa algorithmi, takwimu, hisabati, kompyuta na kiufundi](https://wikipedia.org/wiki/Timeline_of_machine_learning) ulitangulia na kuingiliana na enzi hii. Kwa kweli, watu wamekuwa wakifikiria kuhusu maswali haya kwa [mamia ya miaka](https://wikipedia.org/wiki/History_of_artificial_intelligence): makala hii inajadili misingi ya kihistoria ya wazo la 'mashine inayofikiria.'
+
+---
+## Uvumbuzi muhimu
+
+- 1763, 1812 [Nadharia ya Bayes](https://wikipedia.org/wiki/Bayes%27_theorem) na watangulizi wake. Nadharia hii na matumizi yake yanasaidia katika inferensi, ikielezea uwezekano wa tukio kutokea kwa msingi wa maarifa ya awali.
+- 1805 [Nadharia ya Mraba Mdogo](https://wikipedia.org/wiki/Least_squares) na mtaalamu wa hesabu wa Kifaransa Adrien-Marie Legendre. Nadharia hii, ambayo utajifunza katika kitengo chetu cha Regression, husaidia katika kufaa data.
+- 1913 [Minyororo ya Markov](https://wikipedia.org/wiki/Markov_chain), iliyotajwa baada ya mtaalamu wa hesabu wa Kirusi Andrey Markov, hutumika kuelezea mfululizo wa matukio yanayoweza kutokea kwa msingi wa hali ya awali.
+- 1957 [Perceptron](https://wikipedia.org/wiki/Perceptron) ni aina ya kigeuaji cha mstari kilichovumbuliwa na mwanasaikolojia wa Marekani Frank Rosenblatt ambayo ina msingi katika maendeleo ya ujifunzaji wa kina.
+
+---
+
+- 1967 [Jirani Karibu](https://wikipedia.org/wiki/Nearest_neighbor) ni algorithmi iliyoundwa awali kwa ajili ya kupanga njia. Katika muktadha wa ML hutumika kugundua mifumo.
+- 1970 [Kurudisha Nyuma](https://wikipedia.org/wiki/Backpropagation) hutumika kufundisha [mitandao ya neva ya kulisha mbele](https://wikipedia.org/wiki/Feedforward_neural_network).
+- 1982 [Mitandao ya Neva ya Kurudia](https://wikipedia.org/wiki/Recurrent_neural_network) ni mitandao ya neva bandia inayotokana na mitandao ya neva ya kulisha mbele ambayo huunda grafu za muda.
+
+✅ Fanya utafiti kidogo. Ni tarehe gani nyingine zinajitokeza kama muhimu katika historia ya ML na AI?
+
+---
+## 1950: Mashine zinazofikiria
+
+Alan Turing, mtu wa ajabu sana ambaye alichaguliwa [na umma mwaka 2019](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) kama mwanasayansi mkubwa wa karne ya 20, anahesabiwa kuwa alisaidia kuweka msingi wa wazo la 'mashine inayoweza kufikiria.' Alikabiliana na wapinzani na hitaji lake la ushahidi wa kimaumbile wa wazo hili kwa sehemu kwa kuunda [Mtihani wa Turing](https://www.bbc.com/news/technology-18475646), ambao utachunguza katika masomo yetu ya NLP.
+
+---
+## 1956: Mradi wa Utafiti wa Majira ya Joto wa Dartmouth
+
+"Mradi wa Utafiti wa Majira ya Joto wa Dartmouth juu ya akili bandia ulikuwa tukio muhimu kwa akili bandia kama uwanja," na ilikuwa hapa ambapo neno 'akili bandia' lilianzishwa ([chanzo](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> Kila kipengele cha kujifunza au kipengele kingine chochote cha akili kinaweza kuelezewa kwa usahihi kiasi kwamba mashine inaweza kutengenezwa ili kuiga.
+
+---
+
+Mtafiti mkuu, profesa wa hisabati John McCarthy, alitarajia "kuendelea kwa msingi wa nadharia kwamba kila kipengele cha kujifunza au kipengele kingine chochote cha akili kinaweza kuelezewa kwa usahihi kiasi kwamba mashine inaweza kutengenezwa ili kuiga." Washiriki walijumuisha mtu mwingine maarufu katika uwanja huo, Marvin Minsky.
+
+Warsha hiyo inahesabiwa kuwa ilianzisha na kuhimiza majadiliano kadhaa ikiwa ni pamoja na "kuongezeka kwa mbinu za alama, mifumo inayolenga maeneo madogo (mifumo ya wataalamu wa awali), na mifumo ya upunguzaji dhidi ya mifumo ya kuingiza." ([chanzo](https://wikipedia.org/wiki/Dartmouth_workshop)).
+
+---
+## 1956 - 1974: "Miaka ya dhahabu"
+
+Kuanzia miaka ya 1950 hadi katikati ya miaka ya '70, matumaini yalikuwa juu kwamba AI inaweza kutatua matatizo mengi. Mnamo 1967, Marvin Minsky alisema kwa kujiamini kwamba "Ndani ya kizazi ... tatizo la kuunda 'akili bandia' litatatuliwa kwa kiasi kikubwa." (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
+
+Utafiti wa usindikaji wa lugha asilia uliendelea, utafutaji uliimarishwa na kufanywa kuwa na nguvu zaidi, na dhana ya 'ulimwengu mdogo' iliundwa, ambapo majukumu rahisi yalifanywa kwa kutumia maagizo ya lugha rahisi.
+
+---
+
+Utafiti ulifadhiliwa vizuri na mashirika ya serikali, maendeleo yalifanywa katika kompyuta na algorithmi, na mifano ya mashine za akili ziliundwa. Baadhi ya mashine hizi ni pamoja na:
+
+* [Shakey roboti](https://wikipedia.org/wiki/Shakey_the_robot), ambaye angeweza kusonga na kuamua jinsi ya kufanya majukumu 'kwa akili'.
+
+ 
+ > Shakey mnamo 1972
+
+---
+
+* Eliza, 'chatterbot' wa awali, angeweza kuzungumza na watu na kutenda kama 'mshauri' wa msingi. Utajifunza zaidi kuhusu Eliza katika masomo ya NLP.
+
+ 
+ > Toleo la Eliza, chatbot
+
+---
+
+* "Ulimwengu wa Vitalu" ulikuwa mfano wa ulimwengu mdogo ambapo vitalu vinaweza kupangwa na kuchaguliwa, na majaribio katika kufundisha mashine kufanya maamuzi yanaweza kujaribiwa. Maendeleo yaliyofanywa na maktaba kama [SHRDLU](https://wikipedia.org/wiki/SHRDLU) yalisaidia kusukuma mbele usindikaji wa lugha.
+
+ [Ulimwengu wa vitalu na SHRDLU](https://www.youtube.com/watch?v=QAJz4YKUwqw)
+
+ > 🎥 Bonyeza kiungo hapo juu kwa video: Ulimwengu wa vitalu na SHRDLU
+
+---
+## 1974 - 1980: "Majira ya baridi ya AI"
+
+Kufikia katikati ya miaka ya 1970, ilionekana wazi kwamba ugumu wa kutengeneza 'mashine za akili' ulikuwa umepunguzwa na ahadi yake, kutokana na nguvu za kompyuta zilizopo, ilikuwa imezidishwa. Fedha zilikauka na imani katika uwanja huo ilipungua. Baadhi ya masuala yaliyopunguza imani ni pamoja na:
+---
+- **Mipaka**. Nguvu za kompyuta zilikuwa ndogo sana.
+- **Mlipo wa mchanganyiko**. Kiasi cha vigezo vinavyohitajika kufundishwa kiliongezeka kwa kasi kama zaidi ilivyotakiwa kutoka kwa kompyuta, bila maendeleo sambamba ya nguvu za kompyuta na uwezo.
+- **Upungufu wa data**. Kulikuwa na upungufu wa data ambao ulizuia mchakato wa kujaribu, kuendeleza, na kuboresha algorithmi.
+- **Je, tunauliza maswali sahihi?**. Maswali yenyewe yaliyoulizwa yalianza kutiliwa shaka. Watafiti walianza kukabiliana na ukosoaji kuhusu mbinu zao:
+ - Mitihani ya Turing ilianza kutiliwa shaka kwa njia mbalimbali, miongoni mwa mawazo mengine, nadharia ya 'chumba cha kichina' ambayo ilidai kwamba, "kuweka programu kwenye kompyuta ya kidigitali inaweza kuifanya ionekane kuelewa lugha lakini haiwezi kutoa uelewa wa kweli." ([chanzo](https://plato.stanford.edu/entries/chinese-room/))
+ - Maadili ya kuanzisha akili bandia kama vile "mshauri" ELIZA katika jamii yalipigwa msasa.
+
+---
+
+Wakati huo huo, shule mbalimbali za mawazo ya AI zilianza kuunda. Mgawanyiko ulianzishwa kati ya mazoea ya ["scruffy" vs. "neat AI"](https://wikipedia.org/wiki/Neats_and_scruffies). Maabara _Scruffy_ ilibadilisha programu kwa masaa hadi walipata matokeo yaliyotakiwa. Maabara _Neat_ "zililenga kwenye mantiki na utatuzi wa matatizo rasmi". ELIZA na SHRDLU zilikuwa mifumo inayojulikana ya _scruffy_. Katika miaka ya 1980, wakati mahitaji yalipoibuka ya kufanya mifumo ya ML iweze kurudiwa, mbinu ya _neat_ ilianza kuchukua nafasi ya mbele kwani matokeo yake yanaweza kuelezeka zaidi.
+
+---
+## Mifumo ya wataalamu ya miaka ya 1980
+
+Kadri uwanja ulivyokua, faida zake kwa biashara zilikuwa wazi zaidi, na katika miaka ya 1980 ndivyo ilivyokuwa na kuenea kwa 'mifumo ya wataalamu'. "Mifumo ya wataalamu ilikuwa miongoni mwa aina za kwanza za programu za akili bandia (AI) zilizofanikiwa kweli." ([chanzo](https://wikipedia.org/wiki/Expert_system)).
+
+Aina hii ya mfumo ni _mseto_, ikijumuisha sehemu ya injini ya sheria inayofafanua mahitaji ya biashara, na injini ya inferensi inayotumia mfumo wa sheria ili kutoa ukweli mpya.
+
+Enzi hii pia iliona kuongezeka kwa umakini kwa mitandao ya neva.
+
+---
+## 1987 - 1993: 'Baridi' ya AI
+
+Kuenea kwa vifaa maalum vya mifumo ya wataalamu kulikuwa na athari mbaya ya kuwa maalum sana. Kuongezeka kwa kompyuta binafsi pia kulishindana na mifumo hii mikubwa, maalum, ya kati. Udemokrasia wa kompyuta ulikuwa umeanza, na hatimaye ilifungua njia kwa mlipuko wa kisasa wa data kubwa.
+
+---
+## 1993 - 2011
+
+Enzi hii iliona kipindi kipya kwa ML na AI kuwa na uwezo wa kutatua baadhi ya matatizo ambayo yalisababishwa awali na ukosefu wa data na nguvu za kompyuta. Kiasi cha data kilianza kuongezeka kwa kasi na kupatikana zaidi, kwa bora na kwa mbaya, hasa na ujio wa simu mahiri karibu mwaka 2007. Nguvu za kompyuta ziliongezeka kwa kasi, na algorithmi ziliendelea pamoja. Uwanja ulianza kupata ukomavu huku siku za zamani zisizo na mpangilio zikianza kuimarika kuwa taaluma ya kweli.
+
+---
+## Sasa
+
+Leo ujifunzaji wa mashine na AI vinagusa karibu kila sehemu ya maisha yetu. Enzi hii inahitaji uelewa wa makini wa hatari na athari zinazowezekana za algorithmi hizi kwa maisha ya binadamu. Kama Brad Smith wa Microsoft alivyoeleza, "Teknolojia ya habari inazua masuala yanayokwenda kwenye msingi wa ulinzi wa haki za binadamu kama faragha na uhuru wa kujieleza. Masuala haya yanainua uwajibikaji kwa makampuni ya teknolojia yanayounda bidhaa hizi. Kwa maoni yetu, pia yanahitaji udhibiti wa serikali wenye busara na maendeleo ya kanuni kuhusu matumizi yanayokubalika" ([chanzo](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+Inabaki kuonekana nini kitatokea siku zijazo, lakini ni muhimu kuelewa mifumo hii ya kompyuta na programu na algorithmi wanazoendesha. Tunatumaini kwamba mtaala huu utakusaidia kupata uelewa bora ili uweze kuamua mwenyewe.
+
+[Historia ya ujifunzaji wa kina](https://www.youtube.com/watch?v=mTtDfKgLm54)
+> 🎥 Bonyeza kiungo hapo juu kwa video: Yann LeCun anajadili historia ya ujifunzaji wa kina katika mhadhara huu
+
+---
+## 🚀 Changamoto
+
+Chunguza moja ya matukio haya ya kihistoria na ujifunze zaidi kuhusu watu walio nyuma yake. Kuna wahusika wa kuvutia, na hakuna ugunduzi wa kisayansi uliowahi kuundwa katika utupu wa kitamaduni. Unagundua nini?
+
+## [Jaribio la baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## Mapitio na Kujisomea
+
+Hapa kuna vitu vya kutazama na kusikiliza:
+
+[Podcast hii ambapo Amy Boyd anajadili mabadiliko ya AI](http://runasradio.com/Shows/Show/739)
+[The history of AI by Amy Boyd](https://www.youtube.com/watch?v=EJt3_bFYKss)
+
+---
+
+## Kazi
+
+[Tengeneza ratiba](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutokuelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/2-history-of-ML/assignment.md b/translations/sw/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..5a3cfbab0
--- /dev/null
+++ b/translations/sw/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Unda ratiba ya matukio
+
+## Maelekezo
+
+Kutumia [repo hii](https://github.com/Digital-Humanities-Toolkit/timeline-builder), unda ratiba ya matukio ya kipengele fulani cha historia ya algorithms, hisabati, takwimu, AI, au ML, au mchanganyiko wa haya. Unaweza kuzingatia mtu mmoja, wazo moja, au kipindi kirefu cha mawazo. Hakikisha unaongeza vipengele vya multimedia.
+
+## Vigezo vya Tathmini
+
+| Kigezo | Kielelezo | Kinachokubalika | Kinachohitaji Kuboresha |
+| -------- | ------------------------------------------------- | --------------------------------------- | -------------------------------------------------------------- |
+| | Ratiba iliyotengenezwa imewasilishwa kama ukurasa wa GitHub | Msimbo haujakamilika na haujatumwa | Ratiba haijakamilika, haijafanyiwa utafiti vizuri na haijatumwa |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati asilia katika lugha yake ya awali inapaswa kuzingatiwa kama chanzo chenye mamlaka. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/3-fairness/README.md b/translations/sw/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..92bfde1bf
--- /dev/null
+++ b/translations/sw/1-Introduction/3-fairness/README.md
@@ -0,0 +1,133 @@
+# Building Machine Learning solutions with responsible AI
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+In this curriculum, you will start to discover how machine learning can, and already does, impact our everyday lives. Even now, systems and models are involved in daily decision-making tasks, such as health care diagnoses, loan approvals, or fraud detection. It is therefore important that these models work well in order to provide trustworthy outcomes. Just like any software application, AI systems are going to miss expectations or have an undesirable outcome. That is why it is essential to be able to understand and explain the behavior of an AI model.
+
+Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion, or represents those demographics disproportionately. What about when the model's output is interpreted to favor some demographic? What is the consequence for the application? In addition, what happens when the model has an adverse outcome and is harmful to people? Who is accountable for the AI systems' behavior? These are some of the questions we will explore in this curriculum.
+
+In this lesson, you will:
+
+- Raise your awareness of the importance of fairness in machine learning and fairness-related harms.
+- Become familiar with the practice of exploring outliers and unusual scenarios to ensure reliability and safety.
+- Gain an understanding of the need to empower everyone by designing inclusive systems.
+- Explore how vital it is to protect the privacy and security of data and people.
+- See the importance of having a glass-box approach to explain the behavior of AI models.
+- Be mindful of how accountability is essential to building trust in AI systems.
+
+## Prerequisite
+
+As a prerequisite, please take the "Responsible AI Principles" Learn Path and watch the video below on the topic:
+
+Learn more about Responsible AI by following this [Learning Path](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)
+
+[](https://youtu.be/dnC8-uUZXSc "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Microsoft's Approach to Responsible AI
+
+## Fairness
+
+AI systems should treat everyone fairly and avoid affecting similar groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications. Each of us as humans carries inherited biases that affect our decisions and actions. These biases can be evident in the data we use to train AI systems, and such contamination can happen unintentionally; it is often difficult to consciously know when you are introducing bias into data.
+
+**"Unfairness"** encompasses negative impacts, or "harms", for a group of people, such as those defined in terms of race, gender, age, or disability status. The main fairness-related harms can be classified as:
+
+- **Allocation**, if, for example, a gender or an ethnicity is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but reality is much more complex, it leads to a poorly performing service. For instance, a hand soap dispenser that could not seem to sense people with dark skin. [Reference](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **Denigration**. To unfairly criticize and label something or someone. For example, an image-labeling technology infamously mislabeled images of dark-skinned people as gorillas.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that is contributing to harm.
+- **Stereotyping**. Associating a given group with pre-assigned attributes. For example, a language translation system between English and Turkish may have inaccuracies due to words with stereotypical associations to gender.
+
+
+> translation to Turkish
+
+
+> translation back to English
+
+When designing and testing AI systems, we need to ensure that AI is fair and not programmed to make biased or discriminatory decisions, which human beings are also prohibited from making. Guaranteeing fairness in AI and machine learning remains a complex sociotechnical challenge.
+
+### Reliability and safety
+
+To build trust, AI systems need to be reliable, safe, and consistent under normal and unexpected conditions. It is important to know how AI systems will behave in a variety of situations, especially when they encounter outliers. When building AI solutions, there needs to be a substantial amount of focus on how to handle the wide variety of circumstances that the AI solutions will encounter. For example, a self-driving car needs to put people's safety as a top priority. As a result, the AI powering the car needs to consider all the possible scenarios the car could come across, such as night, thunderstorms or blizzards, kids running across the street, pets, road construction, and so on. How well an AI system can handle a wide range of conditions reliably and safely reflects the level of anticipation the data scientist or AI developer considered during the design or testing of the system.
+
+> [🎥 Click here for a video: ](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### Inclusiveness
+
+AI systems should be designed to engage and empower everyone. When designing and implementing AI systems, data scientists and AI developers identify and address potential barriers in the system that could unintentionally exclude people. For example, there are 1 billion people with disabilities around the world. With the advancement of AI, they can access a wide range of information and opportunities more easily in their daily lives. By addressing these barriers, it creates opportunities to innovate and develop AI products with better experiences that benefit everyone.
+
+> [🎥 Click here for a video: inclusiveness in AI](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### Security and privacy
+
+AI systems should be safe and respect people's privacy. People have less trust in systems that put their privacy, information, or lives at risk. When training machine learning models, we rely on data to produce the best results. In doing so, the origin and integrity of the data must be considered. For example, was the data user-submitted or publicly available? Next, while working with the data, it is crucial to develop AI systems that can protect confidential information and resist attacks. As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. Privacy and data security issues require especially close attention for AI, because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.
+
+> [🎥 Click here for a video: security in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- As an industry we have made significant advancements in privacy and security, fueled significantly by regulations like the GDPR (General Data Protection Regulation).
+- Yet with AI systems we must acknowledge the tension between the need for more personal data to make systems more personal and effective, and privacy.
+- Just as with the birth of connected computers with the internet, we are also seeing a huge uptick in the number of security issues related to AI.
+- At the same time, we have seen AI being used to improve security. As an example, most modern anti-virus scanners are driven by AI heuristics today.
+- We need to ensure that our data science processes blend harmoniously with the latest privacy and security practices.
+
+### Transparency
+
+AI systems should be understandable. A crucial part of transparency is explaining the behavior of AI systems and their components. Improving the understanding of AI systems requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that those who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them, as well as about the limitations of the systems they use. For example, if a bank uses an AI system to support its consumer lending decisions, it is important to examine the outcomes and understand which data influences the system's recommendations. Governments are starting to regulate AI across industries, so data scientists and organizations must explain whether an AI system meets regulatory requirements, especially when there is an undesirable outcome.
+
+> [🎥 Click here for a video: transparency in AI](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Because AI systems are so complex, it is hard to understand how they work and interpret their results.
+- This lack of understanding affects the way these systems are managed, operationalized, and documented.
+- This lack of understanding, more importantly, affects the decisions made using the results these systems produce.
+
+### Accountability
+
+The people who design and deploy AI systems must be accountable for how their systems operate. The need for accountability is particularly crucial with sensitive-use technologies like facial recognition. Recently, there has been a growing demand for facial recognition technology, especially from law enforcement organizations who see the potential of the technology in uses such as finding missing children. However, these technologies could potentially be used by a government to put its citizens' fundamental freedoms at risk by, for example, enabling continuous surveillance of specific individuals. Hence, data scientists and organizations need to be responsible for how their AI systems impact individuals or society.
+
+[](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft's Approach to Responsible AI")
+
+> 🎥 Click the image above for a video: Warnings of Mass Surveillance Through Facial Recognition
+
+Ultimately one of the biggest questions for our generation, as the first generation that is bringing AI to society, is how to ensure that computers will remain accountable to people and how to ensure that the people who design computers remain accountable to everyone else.
+
+## Impact assessment
+
+Before training a machine learning model, it is important to conduct an impact assessment to understand the purpose of the AI system; the intended use; where it will be deployed; and who will be interacting with the system. These are helpful for the reviewers or testers evaluating the system to know what factors to take into consideration when identifying potential risks and expected consequences.
+
+The following are areas of focus when conducting an impact assessment:
+
+* **Adverse impact on individuals**. Being aware of any restrictions or requirements, unsupported uses, or any known limitations hindering the system's performance is vital to ensure that the system is not used in a way that could cause harm to individuals.
+* **Data requirements**. Gaining an understanding of how and where the system will use data enables reviewers to explore any data requirements you would need to be mindful of (e.g., GDPR or HIPAA data regulations). In addition, examine whether the source or quantity of data is sufficient for training.
+* **Summary of impact**. Gather a list of potential harms that could arise from using the system. Throughout the ML lifecycle, review whether the issues identified have been mitigated or addressed.
+* **Applicable goals** for each of the six core principles. Assess if the goals from each of the principles are met and if there are any gaps.
+
+## Debugging with responsible AI
+
+Similar to debugging a software application, debugging an AI system is a necessary process of identifying and resolving issues in the system. There are many factors that can cause a model not to perform as expected or responsibly. Most traditional model performance metrics are quantitative aggregates of a model's performance, which are not sufficient to analyze how a model violates the responsible AI principles. Furthermore, a machine learning model is a black box, which makes it hard to understand what drives its outcome or to provide an explanation when it makes a mistake. Later in this course, we will learn how to use the Responsible AI dashboard to help debug AI systems (a minimal code sketch of assembling such a dashboard follows the list below). The dashboard provides a holistic tool for data scientists and AI developers to perform:
+
+* **Error analysis**. To identify the error distribution of the model that can affect the system's fairness or reliability.
+* **Model overview**. To discover where there are disparities in the model's performance across data cohorts.
+* **Data analysis**. To understand the data distribution and identify any potential bias in the data that could lead to fairness, inclusiveness, and reliability issues.
+* **Model interpretability**. To understand what affects or influences the model's predictions. This helps in explaining the model's behavior, which is important for transparency and accountability.
+
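+Such a dashboard can be assembled with the open-source `responsibleai` and `raiwidgets` packages. The snippet below is only a minimal sketch under stated assumptions, not the course's own code: the toy dataset, the classifier, and the `label` column name are placeholders chosen for illustration.
+
+```python
+# Minimal sketch: train a toy classifier, then build a Responsible AI dashboard.
+from sklearn.datasets import load_iris
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+data = load_iris(as_frame=True).frame.rename(columns={"target": "label"})
+train_df, test_df = train_test_split(data, test_size=0.3, random_state=0)
+
+model = RandomForestClassifier().fit(train_df.drop(columns="label"), train_df["label"])
+
+insights = RAIInsights(model, train_df, test_df,
+                       target_column="label", task_type="classification")
+insights.explainer.add()       # model interpretability
+insights.error_analysis.add()  # error distribution across cohorts
+insights.compute()
+
+ResponsibleAIDashboard(insights)  # renders the dashboard in a notebook
+```
+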
+## 🚀 Challenge
+
+To prevent harms from being introduced in the first place, we should:
+
+- have a diversity of backgrounds and perspectives among the people working on systems
+- invest in datasets that reflect the diversity of our society
+- develop better methods throughout the machine learning lifecycle for detecting and correcting responsible AI issues when they occur
+
+Think about real-life scenarios where a model's untrustworthiness is evident in model-building and usage. What else should we consider?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## Review & Self Study
+
+In this lesson, you have learned some basics of the concepts of fairness and unfairness in machine learning.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/3-fairness/assignment.md b/translations/sw/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..503dca2e8
--- /dev/null
+++ b/translations/sw/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI Toolbox
+
+## Instructions
+
+In this lesson you learned about the Responsible AI Toolbox, an "open-source, community-driven project to help data scientists analyze and improve AI systems." For this assignment, explore one of the RAI Toolbox's [notebooks](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | A paper or PowerPoint presentation is presented discussing Fairlearn's systems, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/4-techniques-of-ML/README.md b/translations/sw/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..12e0b8f56
--- /dev/null
+++ b/translations/sw/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Techniques of Machine Learning
+
+The process of building, using, and maintaining machine learning models and the data they use is a very different process from many other development workflows. In this lesson, we will demystify the process and outline the main techniques you need to know. You will:
+
+- Understand the processes underpinning machine learning at a high level.
+- Explore base concepts such as 'models', 'predictions', and 'training data'.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[](https://youtu.be/4NGM0U2ZSHU "ML for beginners - Techniques of Machine Learning")
+
+> 🎥 Click the image above for a short video working through this lesson.
+
+## Introduction
+
+On a high level, the craft of creating machine learning (ML) processes is comprised of a number of steps (a compact code sketch of these steps follows the list):
+
+1. **Decide on the question**. Most ML processes start by asking a question that cannot be answered by a simple conditional program or a rules-based engine. These questions often revolve around predictions based on a collection of data.
+2. **Collect and prepare data**. To be able to answer your question, you need data. The quality and, sometimes, quantity of your data will determine how well you can answer your initial question. Visualizing data is an important aspect of this phase. This phase also includes splitting the data into a training and testing group to build a model.
+3. **Choose a training method**. Depending on your question and the nature of your data, you need to choose how you want to train a model to best reflect your data and make accurate predictions against it. This is the part of your ML process that requires specific expertise and, often, a considerable amount of experimentation.
+4. **Train the model**. Using your training data, you'll use various algorithms to train a model to recognize patterns in the data. The model might leverage internal weights that can be adjusted to privilege certain parts of the data over others to build a better model.
+5. **Evaluate the model**. You use never-before-seen data (your testing data) from your collected set to see how the model is performing.
+6. **Parameter tuning**. Based on the performance of your model, you can redo the process using different parameters, or variables, that control the behavior of the algorithms used to train the model.
+7. **Predict**. Use new inputs to test the accuracy of your model.
+
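+To make these steps concrete, here is a minimal end-to-end sketch covering steps 2 through 5 and 7 with scikit-learn. The built-in iris dataset and logistic regression are illustrative assumptions chosen for brevity, not part of the lesson itself.
+
+```python
+# A toy end-to-end pass through the ML process described above.
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import accuracy_score
+from sklearn.model_selection import train_test_split
+
+X, y = load_iris(return_X_y=True)                  # step 2: collect data
+X_train, X_test, y_train, y_test = train_test_split(
+    X, y, test_size=0.2, random_state=0)           # step 2: split into train/test
+model = LogisticRegression(max_iter=1000)          # step 3: choose a training method
+model.fit(X_train, y_train)                        # step 4: train the model
+y_pred = model.predict(X_test)                     # step 7: predict on new inputs
+print(accuracy_score(y_test, y_pred))              # step 5: evaluate on unseen data
+```
+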
+## What question to ask
+
+Computers are particularly skilled at discovering hidden patterns in data. This capability is very helpful for researchers who have questions about a given domain that cannot be easily answered by creating a conditionally-based rules engine. Given an actuarial task, for example, a data scientist might be able to construct handcrafted rules around the mortality of smokers vs non-smokers.
+
+When many other variables are brought into the equation, however, an ML model might prove more efficient at predicting future mortality rates based on past health history. A more cheerful example might be making weather predictions for the month of April in a given location based on data that includes latitude, longitude, climate change, proximity to the ocean, patterns of the jet stream, and more.
+
+✅ This [slide deck](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) on weather models offers a historical perspective on using ML in weather analysis.
+
+## Pre-building tasks
+
+Before starting to build your model, there are several tasks you need to complete. To test your question and form a hypothesis based on a model's predictions, you need to identify and configure several elements.
+
+### Data
+
+To be able to answer your question with any kind of certainty, you need a good amount of data of the right type. There are two things you need to do at this point:
+
+- **Collect data**. Keeping in mind the previous lesson on fairness in data analysis, collect your data with care. Be aware of the sources of this data, any inherent biases it might have, and document its origin.
+- **Prepare data**. There are several steps in the data preparation process. You might need to collate data and normalize it if it comes from diverse sources. You can improve the data's quality and quantity through various methods such as converting strings to numbers (as we do in [Clustering](../../5-Clustering/1-Visualize/README.md)). You might also generate new data, based on the original (as we do in [Classification](../../4-Classification/1-Introduction/README.md)). You can clean and edit the data (as we will prior to the [Web App](../../3-Web-App/README.md) lesson). Finally, you might also need to randomize and shuffle it, depending on your training techniques.
+
+✅ After collecting and processing your data, take a moment to see if its shape will allow you to address your intended question. It may be that the data will not perform well in your given task, as we discover in our [Clustering](../../5-Clustering/1-Visualize/README.md) lessons!
+
+### Features and Target
+
+A [feature](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection) is a measurable property of your data. In many datasets it is expressed as a column heading like 'date', 'size' or 'color'. Your feature variable, usually represented as `X` in code, represents the input variable which will be used to train the model.
+
+A target is the thing you are trying to predict. The target, usually represented as `y` in code, represents the answer to the question you are trying to ask of your data: in December, what **color** pumpkins will be cheapest? in San Francisco, what neighborhoods will have the best real estate **price**? Sometimes the target is also referred to as the label attribute.
+
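+As a tiny illustration of the `X`/`y` convention, consider the sketch below. The pumpkin rows are made up for this example, and note that a categorical feature like 'Color' would need to be encoded as numbers before actual training.
+
+```python
+import pandas as pd
+
+# Hypothetical data, just to show features (X) versus target (y).
+pumpkins = pd.DataFrame({
+    "Month": [9, 10, 12],
+    "Color": ["orange", "white", "orange"],
+    "Price": [12.0, 15.5, 18.0],
+})
+X = pumpkins[["Month", "Color"]]  # features: measurable input properties
+y = pumpkins["Price"]             # target (label): the value to predict
+```
+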
+### Selecting your feature variable
+
+🎓 **Feature Selection and Feature Extraction** How do you know which variable to choose when building a model? You'll probably go through a process of feature selection or feature extraction to choose the right variables for the most performant model. They're not the same thing, however: "Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features." ([source](https://wikipedia.org/wiki/Feature_selection))
+
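+Here is a minimal sketch of the difference, using two scikit-learn utilities picked purely as examples (`SelectKBest` for selection, `PCA` for extraction):
+
+```python
+from sklearn.datasets import load_iris
+from sklearn.decomposition import PCA
+from sklearn.feature_selection import SelectKBest, f_classif
+
+X, y = load_iris(return_X_y=True)
+X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)  # keeps 2 of the original columns
+X_ext = PCA(n_components=2).fit_transform(X)             # builds 2 new combined columns
+print(X_sel.shape, X_ext.shape)                          # (150, 2) (150, 2)
+```
+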
+### Visualize your data
+
+An important aspect of the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data (as we discover in [Classification](../../4-Classification/2-Classifiers-1/README.md)).
+
+### Split your dataset
+
+Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well (one way to produce these splits is sketched in code after the list):
+
+- **Training**. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
+- **Testing**. A test dataset is an independent group of data, often gathered from the original data, that you use to confirm the performance of the built model.
+- **Validating**. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set (as we note in [Time Series Forecasting](../../7-TimeSeries/1-Introduction/README.md)).
+
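+One common way to produce all three splits, sketched here with synthetic placeholder data and an arbitrary 60/20/20 ratio, is to call scikit-learn's `train_test_split` twice:
+
+```python
+import numpy as np
+from sklearn.model_selection import train_test_split
+
+X, y = np.arange(100).reshape(50, 2), np.arange(50)   # synthetic placeholder data
+X_train, X_rest, y_train, y_rest = train_test_split(
+    X, y, test_size=0.4, random_state=0)              # 60% for training
+X_val, X_test, y_val, y_test = train_test_split(
+    X_rest, y_rest, test_size=0.5, random_state=0)    # 20% validation, 20% test
+print(len(X_train), len(X_val), len(X_test))          # 30 10 10
+```
+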
+## Building a model
+
+Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to train it. Training a model exposes it to data and allows it to make assumptions about patterns it discovers, which it then validates and accepts or rejects.
+
+### Decide on a training method
+
+Depending on your question and the nature of your data, you will choose a method to train it. Stepping through [Scikit-learn's documentation](https://scikit-learn.org/stable/user_guide.html) - which we use in this course - you can explore many ways to train a model. Depending on your experience, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
+
+### Train a model
+
+Armed with your training data, you are ready to 'fit' it to create a model. You will notice that in many ML libraries you will find the code 'model.fit' - it is at this time that you send in your feature variable as an array of values (usually 'X') and a target variable (usually 'y').
+
+### Evaluate the model
+
+Once the training process is complete (it can take many iterations, or 'epochs', to train a large model), you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality (a small sketch follows below).
+
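+For a regression model, such a table of metrics might look like the following sketch, which continues from the split example above; the estimator and the choice of metrics are illustrative assumptions.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_absolute_error, r2_score
+
+model = LinearRegression().fit(X_train, y_train)  # reuses the split from above
+y_pred = model.predict(X_test)                    # never-before-seen test data
+print("MAE:", mean_absolute_error(y_test, y_pred))
+print("R^2:", r2_score(y_test, y_pred))
+```
+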
+🎓 **Model fitting**
+
+In the machine learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
+
+🎓 **Underfitting** and **overfitting** are common problems that degrade the quality of a model, as the model fits either not well enough or too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'.
+
+
+> Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+## Parameter tuning
+
+Once your initial training is complete, observe the quality of the model and consider improving it by tweaking its 'hyperparameters'. Read more about the process [in the documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott).
+
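+One widely used approach, among several, is a cross-validated grid search. The estimator and grid below are illustrative assumptions, reusing the training data from the split sketch earlier in this lesson.
+
+```python
+from sklearn.linear_model import Ridge
+from sklearn.model_selection import GridSearchCV
+
+# Try a few values of the regularization hyperparameter and keep the best one.
+search = GridSearchCV(Ridge(), param_grid={"alpha": [0.1, 1.0, 10.0]}, cv=5)
+search.fit(X_train, y_train)
+print(search.best_params_)
+```
+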
+## Prediction
+
+This is the moment where you can use completely new data to test your model's accuracy. In an 'applied' ML setting, where you are building web assets to use the model in production, this process might involve gathering user input (a button press, for example) to set a variable and send it to the model for inference, or evaluation.
+
+In these lessons, you will discover how to use these steps to prepare, build, test, evaluate, and predict - all the gestures of a data scientist and more, as you progress in your journey to become a 'full stack' ML engineer.
+
+---
+
+## 🚀Challenge
+
+Draw a flow chart reflecting the steps of an ML practitioner. Where do you see yourself right now in the process? Where do you predict you will find difficulty? What seems easy to you?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Review & Self Study
+
+Search online for interviews with data scientists who discuss their daily work. Here is [one](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Assignment
+
+[Interview a data scientist](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/4-techniques-of-ML/assignment.md b/translations/sw/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..effa1c7bd
--- /dev/null
+++ b/translations/sw/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Interview a data scientist
+
+## Instructions
+
+In your company, in a user group, or among your friends or fellow students, talk to someone who works professionally as a data scientist. Write a short paper (500 words) about their daily occupations. Are they specialists, or do they work 'full stack'?
+
+## Rubric
+
+| Criteria | Exemplary                                                                             | Adequate                                                            | Needs Improvement     |
+| -------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------- | --------------------- |
+|          | An essay of the correct length, with attributed sources, is presented as a .doc file | The essay is poorly attributed or shorter than the required length | No essay is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/1-Introduction/README.md b/translations/sw/1-Introduction/README.md
new file mode 100644
index 000000000..d403c82bb
--- /dev/null
+++ b/translations/sw/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# Introduction to machine learning
+
+In this section of the curriculum, you will be introduced to the base concepts underlying the field of machine learning, what it is, and you will learn about its history and the techniques researchers use to work with it. Let's explore this new world of ML together!
+
+
+> Photo by Bill Oxford on Unsplash
+
+### Lessons
+
+1. [Introduction to machine learning](1-intro-to-ML/README.md)
+1. [The history of machine learning and AI](2-history-of-ML/README.md)
+1. [Fairness and machine learning](3-fairness/README.md)
+1. [Techniques of machine learning](4-techniques-of-ML/README.md)
+
+### Credits
+
+"Introduction to Machine Learning" was written with ♥️ by a team of folks including [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) and [Jen Looper](https://twitter.com/jenlooper)
+
+"The History of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Amy Boyd](https://twitter.com/AmyKateNicho)
+
+"Fairness and Machine Learning" was written with ♥️ by [Tomomi Imura](https://twitter.com/girliemac)
+
+"Techniques of Machine Learning" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper) and [Chris Noring](https://twitter.com/softchris)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/1-Tools/README.md b/translations/sw/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..e14fea821
--- /dev/null
+++ b/translations/sw/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Get started with Python and Scikit-learn for regression models
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Introduction
+
+In these four lessons, you will discover how to build regression models. We will discuss what these are for shortly. But before you do anything, make sure you have the right tools in place to start the process!
+
+In this lesson, you will learn how to:
+
+- Configure your computer for local machine learning tasks.
+- Work with Jupyter notebooks.
+- Use Scikit-learn, including installation.
+- Explore linear regression with a hands-on exercise.
+
+## Installations and configurations
+
+[](https://youtu.be/-DfeD2k2Kj0 "ML for beginners - Set up your tools ready to build Machine Learning models")
+
+> 🎥 Click the image above for a short video working through configuring your computer for ML.
+
+1. **Install Python**. Ensure that [Python](https://www.python.org/downloads/) is installed on your computer. You will use Python for many data science and machine learning tasks. Most computer systems already include a Python installation. There are useful [Python Coding Packs](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) available as well, to ease the setup for some users.
+
+   Some usages of Python, however, require one version of the software, whereas others require a different version. For this reason, it's useful to work within a [virtual environment](https://docs.python.org/3/library/venv.html).
+
+2. **Install Visual Studio Code**. Make sure you have Visual Studio Code installed on your computer. Follow these instructions to [install Visual Studio Code](https://code.visualstudio.com/) for the basic installation. You are going to use Python in Visual Studio Code in this course, so you might want to brush up on how to [configure Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) for Python development.
+
+   > Get comfortable with Python by working through this collection of [Learn modules](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott)
+   >
+   > [](https://youtu.be/yyQM70vi7V8 "Set up Python with Visual Studio Code")
+   >
+   > 🎥 Click the image above for a video: using Python within VS Code.
+
+3. **Install Scikit-learn**, by following [these instructions](https://scikit-learn.org/stable/install.html). Since you need to ensure that you use Python 3, it's recommended that you use a virtual environment. Note, if you are installing this library on an M1 Mac, there are special instructions on the page linked above.
+
+4. **Install Jupyter Notebook**. You will need to [install the Jupyter package](https://pypi.org/project/jupyter/).
+
+## Your ML authoring environment
+
+You are going to use **notebooks** to develop your Python code and create machine learning models. This type of file is a common tool for data scientists, and they can be identified by their suffix or extension `.ipynb`.
+
+Notebooks are an interactive environment that allow the developer to both code and add notes and documentation around the code, which is quite helpful for experimental or research-oriented projects.
+
+[](https://youtu.be/7E-jC8FLA2E "ML for beginners - Set up Jupyter Notebooks to start building regression models")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+### Exercise - work with a notebook
+
+In this folder, you will find the file _notebook.ipynb_.
+
+1. Open _notebook.ipynb_ in Visual Studio Code.
+
+   A Jupyter server will start with Python 3+. You will find areas of the notebook that can be `run`, pieces of code. You can run a code block by selecting the icon that looks like a play button.
+
+2. Select the `md` icon and add a bit of markdown, with the following text: **# Welcome to your notebook**.
+
+   Next, add some Python code.
+
+3. Type **print('hello notebook')** in the code block.
+4. Select the arrow to run the code.
+
+   You should see the printed statement:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+You can mix your code with comments to self-document the notebook.
+
+✅ Think for a minute how different a web developer's working environment is from that of a data scientist.
+
+## Getting started with Scikit-learn
+
+Now that Python is set up in your local environment, and you are comfortable with Jupyter notebooks, let's get equally comfortable with Scikit-learn (pronounce it `sci` as in `science`). Scikit-learn provides an [extensive API](https://scikit-learn.org/stable/modules/classes.html#api-ref) to help you perform ML tasks.
+
+According to their [website](https://scikit-learn.org/stable/getting_started.html), "Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection and evaluation, and many other utilities."
+
+In this course, you will use Scikit-learn and other tools to build machine learning models to perform what we call 'traditional machine learning' tasks. We have deliberately avoided neural networks and deep learning, as they are better covered in our forthcoming 'AI for Beginners' curriculum.
+
+Scikit-learn makes it straightforward to build models and evaluate them for use. It is primarily focused on using numeric data and contains several ready-made datasets for use as learning tools. It also includes pre-built models for students to try. Let's explore the process of loading prepackaged data and using a built-in estimator to build a first ML model with Scikit-learn, using some basic data.
+
+## Exercise - your first Scikit-learn notebook
+
+> This tutorial was inspired by the [linear regression example](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) on Scikit-learn's web site.
+
+[](https://youtu.be/2xkXL5EUpS0 "ML for beginners - Your First Linear Regression Project in Python")
+
+> 🎥 Click the image above for a short video working through this exercise.
+
+In the _notebook.ipynb_ file associated with this lesson, clear out all the cells by pressing the 'trashcan' icon.
+
+In this section, you will work with a small dataset about diabetes that is built into Scikit-learn for learning purposes. Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
+
+✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height of a person of a given age, you'd use linear regression, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment**, so you would use logistic regression. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
+
+Let's get started on this task.
+
+### Import libraries
+
+For this task we will import some libraries:
+
+- **matplotlib**. It's a useful [graphing tool](https://matplotlib.org/) and we will use it to create a line plot.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) is a useful library for handling numeric data in Python.
+- **sklearn**. This is the [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) library.
+
+Import some libraries to help with your tasks.
+
+1. Add imports by typing the following code:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Above, you are importing `matplotlib` and `numpy`, and you are importing `datasets`, `linear_model` and `model_selection` from `sklearn`. `model_selection` is used for splitting data into training and test sets.
+
+### The diabetes dataset
+
+The built-in [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) includes 442 samples of data around diabetes, with 10 feature variables, some of which include:
+
+- age: age in years
+- bmi: body mass index
+- bp: average blood pressure
+- s1 tc: T-Cells (a type of white blood cells)
+
+✅ This dataset includes the concept of 'sex' as a feature variable important to research around diabetes. Many medical datasets include this type of binary classification. Think a bit about how categorizations such as this might exclude certain parts of a population from treatments.
+
+Now, load up the X and y data.
+
+> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
+
+In a new code cell, load the diabetes dataset by calling `load_diabetes()`. The input `return_X_y=True` signals that `X` will be a data matrix, and `y` will be the regression target.
+
+2. Add some print commands to show the shape of the data matrix and its first element:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   What you get back as a response is a tuple. What you are doing is assigning the first two values of the tuple to `X` and `y` respectively. Learn more [about tuples](https://wikipedia.org/wiki/Tuple).
+
+   You can see that this data has 442 items shaped in arrays of 10 elements:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+   ✅ Think a bit about the relationship between the data and the regression target. Linear regression predicts the relationship between feature X and target variable y. Can you find the [target](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) for the diabetes dataset in the documentation? What is this dataset demonstrating, given that target?
+
+3. Next, select a portion of this dataset to plot by selecting the 3rd column of the dataset. You can do this by using the `:` operator to select all rows, and then selecting the 3rd column using the index (2). You can also reshape the data to be a 2D array - as required for plotting - by using `reshape(n_rows, n_columns)`. If one of the parameters is -1, the corresponding dimension is calculated automatically.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+   ✅ At any time, print out the data to check its shape.
+
+4. Now that you have data ready to be plotted, you can see if a machine can help determine a logical split between the numbers in this dataset. To do this, you need to split both the data (X) and the target (y) into test and training sets. Scikit-learn has a straightforward way to do this; you can split your test data at a given point.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+5. Now you are ready to train your model! Load the linear regression model and train it with your X and y training sets using `model.fit()`:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()` is a function you'll see in many ML libraries such as TensorFlow
+
+6. Then, create a prediction using test data, using the function `predict()`. This will be used to draw the line between the data groups.
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+7. Now it's time to display the data in a plot. Matplotlib is a very useful tool for this task. Create a scatterplot of all the X and y test data, and use the prediction to draw a line in the most appropriate place between the model's data groupings.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+   ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relation to the plot's y axis? Try to put into words the practical use of this model.
+
+Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
+
+---
+## 🚀Challenge
+
+Plot a different variable from this dataset. Hint: edit this line: `X = X[:,2]`. Given this dataset's target, what are you able to discover about the progression of diabetes as a disease?
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Review & Self Study
+
+In this tutorial, you worked with simple linear regression, rather than univariate or multiple linear regression. Read a little about the differences between these methods, or take a look at [this video](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)
+
+Read more about the concept of regression and think about what kinds of questions can be answered by this technique. Take this [tutorial](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) to deepen your understanding.
+
+## Assignment
+
+[A different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/1-Tools/assignment.md b/translations/sw/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..5b95de3a7
--- /dev/null
+++ b/translations/sw/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Regression with Scikit-learn
+
+## Instructions
+
+Take a look at the [Linnerud dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) in Scikit-learn. This dataset has multiple [targets](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset): 'It consists of three exercise (data) and three physiological (target) variables collected from twenty middle-aged men in a fitness club'.
+
+In your own words, describe how to create a Regression model that would plot the relationship between the waistline and how many situps are accomplished. Do the same for the other datapoints in this dataset.
+
+## Rubric
+
+| Criteria                      | Exemplary                             | Adequate                      | Needs Improvement           |
+| ----------------------------- | ------------------------------------- | ----------------------------- | --------------------------- |
+| Submit a descriptive paragraph | A well-written paragraph is submitted | A few sentences are submitted | No description is supplied  |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/1-Tools/solution/Julia/README.md b/translations/sw/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..a9ce3df1c
--- /dev/null
+++ b/translations/sw/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/2-Data/README.md b/translations/sw/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..0af3b9f0c
--- /dev/null
+++ b/translations/sw/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Build a regression model using Scikit-learn: prepare and visualize data
+
+
+
+Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [This lesson is available in R!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Introduction
+
+Now that you are set up with the tools you need to start tackling machine learning model-building with Scikit-learn, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right question to properly unlock the potential of your dataset.
+
+In this lesson, you will learn:
+
+- How to prepare your data for model-building.
+- How to use Matplotlib for data visualization.
+
+## Asking the right question of your data
+
+The question you need answered will determine what type of ML algorithms you will leverage. And the quality of the answer you get back will be heavily dependent on the nature of your data.
+
+Take a look at the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson. You can open this .csv file in VS Code. A quick skim immediately shows that there are blanks, and a mix of strings and numeric data. There's also a strange column called 'Package' where the data is a mix between 'sacks', 'bins' and other values. The data, in fact, is a bit of a mess.
+
+[](https://youtu.be/5qGjczWTrDQ "ML for beginners - How to Analyze and Clean a Dataset")
+
+> 🎥 Click the image above for a short video working through preparing the data for this lesson.
+
+In fact, it is not very common to be gifted a dataset that is completely ready to use to create an ML model out of the box. In this lesson, you will learn how to prepare a raw dataset using standard Python libraries. You will also learn various techniques to visualize the data.
+
+## Case study: 'the pumpkin market'
+
+In this folder you will find a .csv file in the root `data` folder called [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) which includes 1757 lines of data about the market for pumpkins, sorted into groupings by city. This is raw data extracted from the [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) distributed by the United States Department of Agriculture.
+
+### Preparing data
+
+This data is in the public domain. It can be downloaded in many separate files, per city, from the USDA web site. To avoid too many separate files, we have concatenated all the city data into one spreadsheet, so we have already _prepared_ the data a bit. Next, let's take a closer look at the data.
+
+### The pumpkin data - early conclusions
+
+What do you notice about this data? You already saw that there is a mix of strings, numbers, blanks and strange values that you need to make sense of.
+
+What question can you ask of this data, using a Regression technique? What about "Predict the price of a pumpkin for sale during a given month"? Looking again at the data, there are some changes you need to make to create the data structure necessary for the task.
+
+## Exercise - analyze the pumpkin data
+
+Let's use [Pandas](https://pandas.pydata.org/) (the name stands for `Python Data Analysis`), a tool very useful for shaping data, to analyze and prepare this pumpkin data.
+
+### First, check for missing dates
+
+You will first need to take steps to check for missing dates:
+
+1. Convert the dates to a month format (these are US dates, so the format is `MM/DD/YYYY`).
+2. Extract the month to a new column.
+
+Open the _notebook.ipynb_ file in Visual Studio Code and import the spreadsheet into a new Pandas dataframe.
+
+1. Use the `head()` function to view the first five rows.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+   ✅ What function would you use to view the last five rows?
+
+1. Check if there is missing data in the current dataframe:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+   There is missing data, but maybe it won't matter for the task at hand.
+
+1. To make your dataframe easier to work with, select only the columns you need, using the `loc` function which extracts from the original dataframe a group of rows (passed as first parameter) and columns (passed as second parameter). The expression `:` in the case below means "all rows".
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### Second, determine the average price of a pumpkin
+
+Think about how to determine the average price of a pumpkin in a given month. What columns would you pick for this task? Hint: you'll need 3 columns.
+
+Solution: take the average of the `Low Price` and `High Price` columns to populate the new Price column, and convert the Date column to only show the month. Fortunately, according to the check above, there is no missing data for dates or prices.
+
+1. To calculate the average, add the following code:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+   ✅ Feel free to print any data you'd like to check, using `print(month)`.
+
+2. Now, copy your converted data into a fresh Pandas dataframe:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+   Printing out your dataframe will show you a clean, tidy dataset on which you can build your new regression model.
+
+### But wait! There's something odd here
+
+If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in '1 1/9 bushel' measures, some in '1/2 bushel' measures, some per pumpkin, some per pound, and some in big boxes with varying widths.
+
+> Pumpkins seem very hard to weigh consistently
+
+Digging into the original data, it's interesting that anything with `Unit of Sale` equalling 'EACH' or 'PER BIN' also has the `Package` type per inch, per bin, or 'each'. Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string 'bushel' in their `Package` column.
+
+1. Add a filter at the top of the file, under the initial .csv import:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+   If you look at the data now, you can see that you are only getting about 415 or so rows of data containing pumpkins sold by the bushel.
+
+### But wait! There's one more thing to do
+
+Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, so do some math to standardize it.
+
+1. Add these lines after the block creating the new_pumpkins dataframe:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. "A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds." It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel. All this study of bushels of pumpkins, however, goes to show how very important it is to understand the nature of your data!
+
+Now, you can analyze the pricing per unit based on their bushel measurement. If you print out the data one more time, you can see how it's standardized.
+
+✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are way pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken by one big hollow pie pumpkin.
+
+## Visualization Strategies
+
+Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of the data. In this way, they are able to visually show relationships and gaps that are otherwise hard to uncover.
+
+[](https://youtu.be/SbUkxH6IJo0 "ML for beginners - How to Visualize Data with Matplotlib")
+
+> 🎥 Click the image above for a short video working through visualizing the data for this lesson.
+
+Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.
+
+One data visualization library that works well in Jupyter notebooks is [Matplotlib](https://matplotlib.org/) (which you also saw in the previous lesson).
+
+> Get more experience with data visualization in [these tutorials](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott).
+
+## Exercise - experiment with Matplotlib
+
+Try to create some basic plots to display the new dataframe you just created. What would a basic line plot show?
+
+1. Import Matplotlib at the top of the file, under the Pandas import:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Endesha upya daftari lote ili kusasisha.
+1. Chini ya daftari, ongeza seli kuonyesha data kama sanduku:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ Je, mchoro huu ni wa manufaa? Je, kuna kitu kinachokushangaza kuhusu mchoro huu?
+
+ Sio hasa wa manufaa kwani yote inayoonyesha ni data yako kama mchanganyiko wa pointi katika mwezi fulani.
+
+### Fanya iwe ya manufaa
+
+Ili michoro ionyeshe data yenye manufaa, kwa kawaida unahitaji kuunganisha data kwa namna fulani. Hebu jaribu kuunda mchoro ambapo mhimili wa y unaonyesha miezi na data inaonyesha usambazaji wa data.
+
+1. Ongeza seli kuunda mchoro wa bar ulio na makundi:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ Hii ni michoro ya data yenye manufaa zaidi! Inaonekana kuonyesha kuwa bei ya juu zaidi ya maboga inatokea Septemba na Oktoba. Je, hilo linakidhi matarajio yako? Kwa nini au kwa nini siyo?
+
+---
+
+## 🚀Changamoto
+
+Chunguza aina tofauti za kuonyesha data ambazo Matplotlib inatoa. Ni aina gani zinazofaa zaidi kwa matatizo ya regression?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## Mapitio na Kujisomea
+
+Angalia njia nyingi za kuonyesha data. Tengeneza orodha ya maktaba mbalimbali zinazopatikana na kumbuka ni zipi ni bora kwa aina fulani za kazi, kwa mfano michoro ya 2D dhidi ya michoro ya 3D. Umegundua nini?
+
+## Kazi
+
+[Kuonyesha data](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotegemea mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asilia katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/2-Data/assignment.md b/translations/sw/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..0a19c1534
--- /dev/null
+++ b/translations/sw/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Kuchunguza Michoro
+
+Kuna maktaba kadhaa tofauti zinazopatikana kwa ajili ya uchoraji wa data. Tengeneza michoro kadhaa ukitumia data za Pumpkin katika somo hili na matplotlib na seaborn katika daftari la sampuli. Ni maktaba zipi ni rahisi zaidi kufanya kazi nazo?
+## Rubric
+
+| Vigezo | Bora | Inaridhisha | Inahitaji Kuboresha |
+| ------- | ------- | ------- | ------- |
+| | Daftari linawasilishwa na lina uchunguzi/michoro miwili | Daftari linawasilishwa na lina uchunguzi/mchoro mmoja | Daftari halijawasilishwa |
+
+**Onyo**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asilia katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/2-Data/solution/Julia/README.md b/translations/sw/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..bca5bb809
--- /dev/null
+++ b/translations/sw/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kupata usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo chenye mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa maelewano mabaya au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/3-Linear/README.md b/translations/sw/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..a559b68cc
--- /dev/null
+++ b/translations/sw/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Jenga modeli ya regression kwa kutumia Scikit-learn: regression kwa njia nne
+
+
+> Picha ya taarifa na [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [Quiz kabla ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [Somo hili linapatikana katika R!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### Utangulizi
+
+Hadi sasa umechunguza regression ni nini kwa kutumia data za sampuli zilizokusanywa kutoka kwa seti ya data ya bei za malenge ambayo tutatumia katika somo hili lote. Pia umeweza kuiona kwa kutumia Matplotlib.
+
+Sasa uko tayari kuingia zaidi kwenye regression kwa ML. Wakati visualization inakuruhusu kuelewa data, nguvu halisi ya Machine Learning inatoka kwenye _kufundisha mifano_. Mifano inafundishwa kwenye data za kihistoria ili kunasa moja kwa moja utegemezi wa data, na hukuruhusu kutabiri matokeo kwa data mpya, ambayo mfano haujaiona kabla.
+
+Katika somo hili, utajifunza zaidi kuhusu aina mbili za regression: _basic linear regression_ na _polynomial regression_, pamoja na baadhi ya hisabati inayohusiana na mbinu hizi. Mifano hii itatuwezesha kutabiri bei za malenge kulingana na data tofauti za pembejeo.
+
+[](https://youtu.be/CRxFT8oTDMg "ML kwa wanaoanza - Kuelewa Linear Regression")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa video kuhusu linear regression.
+
+> Katika mtaala huu, tunadhania kuwa na maarifa ya chini ya hisabati, na tunalenga kuifanya ipatikane kwa wanafunzi wanaotoka katika nyanja nyingine, hivyo angalia maelezo, 🧮 callouts, michoro, na zana zingine za kujifunza kusaidia kuelewa.
+
+### Mahitaji
+
+Unapaswa kuwa na ufahamu sasa na muundo wa data ya malenge tunayochunguza. Unaweza kuipata ikiwa imepakiwa na kusafishwa katika faili ya _notebook.ipynb_ ya somo hili. Katika faili, bei ya malenge inaonyeshwa kwa bushel katika fremu mpya ya data. Hakikisha unaweza kuendesha hizi notebooks katika kernels katika Visual Studio Code.
+
+### Maandalizi
+
+Kama ukumbusho, unapakia data hii ili uweze kuuliza maswali yake.
+
+- Ni wakati gani mzuri wa kununua malenge?
+- Ninaweza kutarajia bei gani ya kasha la malenge madogo?
+- Je, ninunue kwa vikapu vya nusu bushel au kwa sanduku la bushel 1 1/9?
+
+Tuendelee kuchimba data hii.
+
+Katika somo lililopita, uliunda fremu ya data ya Pandas na kuijaza na sehemu ya seti ya data ya awali, ukistandardisha bei kwa bushel. Kwa kufanya hivyo, hata hivyo, uliweza tu kukusanya takriban pointi 400 za data na kwa miezi ya msimu wa vuli tu.
+
+Angalia data ambayo tulipakia katika notebook inayosindikiza somo hili. Data imepakiwa na scatterplot ya awali imechorwa kuonyesha data ya mwezi. Labda tunaweza kupata maelezo zaidi kuhusu asili ya data kwa kuisafisha zaidi.
+
+## Mstari wa regression ya mstari
+
+Kama ulivyojifunza katika Somo la 1, lengo la zoezi la regression ya mstari ni kuweza kuchora mstari ili:
+
+- **Kuonyesha uhusiano wa vigezo**. Kuonyesha uhusiano kati ya vigezo
+- **Kufanya utabiri**. Kufanya utabiri sahihi wa mahali ambapo pointi mpya ya data ingeingia kwa uhusiano na mstari huo.
+
+It is common for **Least-Squares Regression** to draw this type of line. The term 'least-squares' means that all the data points surrounding the regression line are squared and then added up. Ideally, that final sum is as small as possible, because we want a low number of errors, or `least-squares`.
+
+We do so since we want to model a line that has the least cumulative distance from all of our data points. We also square the terms before adding them, since we are concerned with magnitude rather than direction.
+
+> **🧮 Nionyeshe hisabati**
+>
+> This line, called the _line of best fit_, can be expressed by [an equation](https://en.wikipedia.org/wiki/Simple_linear_regression):
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` is the 'explanatory variable'. `Y` is the 'dependent variable'. The slope of the line is `b` and `a` is the y-intercept, which refers to the value of `Y` when `X = 0`.
+>
+>
+>
+> First, calculate the slope `b`. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> In other words, and referring to our pumpkin data's original question: "predict the price of a pumpkin per bushel by month", `X` would refer to the price and `Y` would refer to the month of sale.
+>
+>
+>
+> Calculate the value of Y. If you're paying around $4, it must be April! Infographic by [Jen Looper](https://twitter.com/jenlooper)
+>
+> The math that calculates the line must demonstrate the slope of the line, which is also dependent on the intercept, or where `Y` is situated when `X = 0`.
+>
+> You can observe the method of calculation for these values on the [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) web site. Also visit [this Least-squares calculator](https://www.mathsisfun.com/data/least-squares-calculator.html) to watch how the numbers' values impact the line.
+
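+As a small illustration (not part of the original lesson), here is a sketch of that calculation in Python on a tiny sample dataset, including the sum of squared residuals that least-squares minimizes:
+
+```python
+import numpy as np
+
+# sample data: X = explanatory variable, Y = dependent variable
+X = np.array([2.0, 3.0, 5.0, 7.0, 9.0])
+Y = np.array([4.0, 5.0, 7.0, 10.0, 15.0])
+
+# closed-form least-squares estimates: b = slope, a = y-intercept
+b = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
+a = Y.mean() - b * X.mean()
+
+# the quantity least-squares minimizes: the sum of squared residuals
+sse = ((Y - (a + b * X)) ** 2).sum()
+print(f'Y = {a:.2f} + {b:.2f}X, SSE = {sse:.2f}')
+```
+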
+## Correlation
+
+One more term to understand is the **Correlation Coefficient** between given X and Y variables. Using a scatterplot, you can quickly visualize this coefficient. A plot with datapoints scattered in a neat line have high correlation, but a plot with datapoints scattered everywhere between X and Y have a low correlation.
+
+A good linear regression model will be one that has a high (nearer to 1 than 0) Correlation Coefficient using the Least-Squares Regression method with a line of regression.
+
+✅ Run the notebook accompanying this lesson and look at the Month to Price scatterplot. Does the data associating Month to Price for pumpkin sales seem to have high or low correlation, according to your visual interpretation of the scatterplot? Does that change if you use more fine-grained measure instead of `Month`, eg. *day of the year* (i.e. number of days since the beginning of the year)?
+
+In the code below, we will assume that we have cleaned up the data, and obtained a data frame called `new_pumpkins`, similar to the following:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> The code to clean the data is available in [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb). We have performed the same cleaning steps as in the previous lesson, and have calculated `DayOfYear` using the following expression:
+
+```python
+from datetime import datetime
+
+# days elapsed since January 1st of that row's year
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Sasa kwa kuwa unaelewa hisabati nyuma ya regression ya mstari, hebu tuunde mfano wa Regression kuona kama tunaweza kutabiri ni kifurushi gani cha malenge kitakuwa na bei bora za malenge. Mtu anayenunua malenge kwa ajili ya shamba la malenge la likizo anaweza kutaka habari hii ili aweze kuboresha manunuzi yake ya vifurushi vya malenge kwa shamba hilo.
+
+## Kutafuta Uhusiano
+
+[](https://youtu.be/uoRq-lW2eQo "ML kwa wanaoanza - Kutafuta Uhusiano: Muhimu kwa Regression ya Mstari")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa video kuhusu uhusiano.
+
+Kutoka somo lililopita labda umeona kuwa bei ya wastani kwa miezi tofauti inaonekana kama hii:
+
+
+
+This suggests that there may be some correlation, and we can try training a linear regression model to predict the relationship between `Month` and `Price`, or between `DayOfYear` and `Price`. Here is the scatter plot that shows the latter relationship:
+
+
+
+Let's see if there is a correlation using the `corr` function:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+It looks like the correlation is pretty small, -0.15 for `Month` and -0.17 for `DayOfYear`, but there could be another important relationship. It looks like there are different clusters of prices corresponding to different pumpkin varieties. To confirm this hypothesis, let's plot each pumpkin category using a different color. By passing an `ax` parameter to the `scatter` plotting function we can plot all points on the same graph:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+Uchunguzi wetu unapendekeza kwamba aina ina athari zaidi kwenye bei ya jumla kuliko tarehe halisi ya kuuza. Tunaweza kuona hili kwa grafu ya bar:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Tujikite kwa sasa kwenye aina moja tu ya malenge, 'aina ya pie', na tuone athari ya tarehe kwenye bei:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+If we now compute the correlation between `Price` and `DayOfYear` using the `corr` function, we will get something like `-0.27` - which means that training a predictive model makes sense.
+
+> Kabla ya kufundisha mfano wa regression ya mstari, ni muhimu kuhakikisha kuwa data yetu ni safi. Regression ya mstari haifanyi kazi vizuri na thamani zilizokosekana, hivyo ina maana kuondoa seli zote tupu:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Njia nyingine itakuwa kujaza thamani hizo tupu na thamani za wastani kutoka kwenye safu inayolingana.
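+
+A minimal sketch of that alternative (replacing the `dropna` call above) could look like this:
+
+```python
+# impute missing prices with the column mean instead of dropping rows
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```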
+
+## Regression ya Mstari Rahisi
+
+[](https://youtu.be/e4c_UP2fSjg "ML kwa wanaoanza - Regression ya Mstari na Polynomial kwa kutumia Scikit-learn")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa video kuhusu regression ya mstari na polynomial.
+
+Ili kufundisha mfano wetu wa Regression ya Mstari, tutatumia maktaba ya **Scikit-learn**.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+We start by separating input values (features) and the expected output (label) into separate numpy arrays:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Kumbuka kwamba tulilazimika kufanya `reshape` kwenye data ya pembejeo ili kifurushi cha Regression ya Mstari kiielewe kwa usahihi. Regression ya Mstari inatarajia array ya 2D kama pembejeo, ambapo kila safu ya array inalingana na vector ya vipengele vya pembejeo. Katika kesi yetu, kwa kuwa tuna pembejeo moja tu - tunahitaji array yenye umbo N×1, ambapo N ni saizi ya seti ya data.
+
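+A quick sanity check of the shapes, continuing from the arrays above:
+
+```python
+# X must be 2D (rows = samples, columns = features); y stays 1D
+print(X.shape)  # (N, 1)
+print(y.shape)  # (N,)
+```
+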
+Kisha, tunahitaji kugawanya data katika seti za mafunzo na majaribio, ili tuweze kuthibitisha mfano wetu baada ya mafunzo:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Finally, training the actual Linear Regression model takes only two lines of code. We define a `LinearRegression` object and fit it to our data using the `fit` method:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+After `fit`-ting, the `LinearRegression` object contains all the coefficients of the regression, which can be accessed using the `.coef_` property. In our case, there is just one coefficient, which should be around `-0.017`. It means that prices seem to drop a bit with time, but not by much: around 2 cents per day. We can also access the intersection point of the regression with the Y-axis using `lin_reg.intercept_` - it will be around `21` in our case, indicating the price at the beginning of the year.
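+
+A quick way to inspect those values, continuing from the fitted `lin_reg` above:
+
+```python
+# slope (price change per day) and intercept (price at DayOfYear = 0)
+print('Coefficients: ', lin_reg.coef_)    # roughly [-0.017]
+print('Intercept: ', lin_reg.intercept_)  # roughly 21
+```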
+
+To see how accurate our model is, we can predict prices on a test dataset, and then measure how close our predictions are to the expected values. This can be done using the mean squared error (MSE) metric, which is the mean of all squared differences between the expected and predicted values. The code below actually reports the square root of that value (RMSE), so the error is expressed in the same units as the price.
+
+```python
+import numpy as np
+
+pred = lin_reg.predict(X_test)
+
+# RMSE: square root of the mean squared error, in the same units as Price
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Kosa letu linaonekana kuwa karibu na pointi 2, ambayo ni ~17%. Sio nzuri sana. Kiashiria kingine cha ubora wa mfano ni **coefficient of determination**, ambayo inaweza kupatikana kama hii:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+Ikiwa thamani ni 0, inamaanisha kwamba mfano hauzingatii data ya pembejeo, na hufanya kama *mtabiri mbaya zaidi wa mstari*, ambayo ni wastani wa thamani ya matokeo. Thamani ya 1 inamaanisha kwamba tunaweza kutabiri kwa usahihi matokeo yote yanayotarajiwa. Katika kesi yetu, coefficient ni karibu 0.06, ambayo ni ya chini kabisa.
+
+Tunaweza pia kuchora data ya majaribio pamoja na mstari wa regression ili kuona vizuri jinsi regression inavyofanya kazi katika kesi yetu:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Regression ya Polynomial
+
+Aina nyingine ya Regression ya Mstari ni Regression ya Polynomial. Wakati mwingine kuna uhusiano wa mstari kati ya vigezo - kadri malenge yanavyokuwa kubwa kwa ujazo, ndivyo bei inavyoongezeka - wakati mwingine uhusiano huu hauwezi kuchorwa kama ndege au mstari wa moja kwa moja.
+
+✅ Hapa kuna [mifano zaidi](https://online.stat.psu.edu/stat501/lesson/9/9.8) ya data ambayo inaweza kutumia Regression ya Polynomial
+
+Angalia tena uhusiano kati ya Tarehe na Bei. Je, scatterplot hii inaonekana kama inapaswa kuchambuliwa na mstari wa moja kwa moja? Je, bei haziwezi kubadilika? Katika kesi hii, unaweza kujaribu regression ya polynomial.
+
+✅ Polynomials ni misemo ya hisabati ambayo inaweza kuwa na moja au zaidi ya vigezo na coefficients
+
+Polynomial regression creates a curved line to better fit nonlinear data. In our case, if we include a squared `DayOfYear` variable in the input data, we should be able to fit our data with a parabolic curve, which will have a minimum at a certain point within the year.
+
+Scikit-learn inajumuisha [API ya pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) kusaidia kuunganisha hatua tofauti za usindikaji wa data pamoja. **Pipeline** ni mnyororo wa **estimators**. Katika kesi yetu, tutaunda pipeline ambayo kwanza inaongeza vipengele vya polynomial kwenye mfano wetu, na kisha kufundisha regression:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+Using `PolynomialFeatures(2)` means that we will include all second-degree polynomials from the input data. In our case it will just mean `DayOfYear`², but given two input variables X and Y, this would add X², XY and Y². We may also use higher degree polynomials if we want.
+
+Pipelines can be used in the same manner as the original `LinearRegression` object, i.e. we can `fit` the pipeline, and then use `predict` to get the prediction results. Here is the graph showing test data, and the approximation curve:
+
+
+
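+The plotting code itself is not shown in the lesson; a minimal sketch (reusing the `pipeline`, `X_test` and `y_test` from above) could look like this:
+
+```python
+import matplotlib.pyplot as plt
+
+pred = pipeline.predict(X_test)
+
+# sort by day of year so the fitted curve is drawn left to right
+order = X_test[:,0].argsort()
+plt.scatter(X_test, y_test)
+plt.plot(X_test[order], pred[order], color='red')
+plt.show()
+```
+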
+Using Polynomial Regression, we can get slightly lower MSE and higher determination, but not significantly. We need to take into account other features!
+
+> You can see that the minimal pumpkin prices are observed somewhere around Halloween. How can you explain this?
+
+🎃 Congratulations, you just created a model that can help predict the price of pie pumpkins. You can probably repeat the same procedure for all pumpkin types, but that would be tedious. Let's learn now how to take pumpkin variety into account in our model!
+
+## Categorical Features
+
+In the ideal world, we want to be able to predict prices for different pumpkin varieties using the same model. However, the `Variety` column is somewhat different from columns like `Month`, because it contains non-numeric values. Such columns are called **categorical**.
+
+[](https://youtu.be/DYGliioIAE0 "ML for beginners - Categorical Feature Predictions with Linear Regression")
+
+> 🎥 Click the image above for a short video overview of using categorical features.
+
+Here you can see how average price depends on variety:
+
+
+
+To take variety into account, we first need to convert it to numeric form, or **encode** it. There are several way we can do it:
+
+* Simple **numeric encoding** will build a table of different varieties, and then replace the variety name by an index in that table (see the sketch after this list). This is not the best idea for linear regression, because linear regression takes the actual numeric value of the index, and adds it to the result, multiplying by some coefficient. In our case, the relationship between the index number and the price is clearly non-linear, even if we make sure that indices are ordered in some specific way.
+* **One-hot encoding** will replace the `Variety` column by 4 different columns, one for each variety. Each column will contain `1` if the corresponding row is of a given variety, and `0` otherwise. This means that there will be four coefficients in the linear regression, one for each pumpkin variety, responsible for the "starting price" (or rather the "additional price") for that particular variety.
+
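+For contrast, here is a minimal sketch of the plain numeric encoding described above; `pd.factorize` is one way to build the lookup table:
+
+```python
+# each variety name is replaced by its index in the table of unique varieties
+variety_codes, variety_table = pd.factorize(new_pumpkins['Variety'])
+print(variety_table)      # the lookup table of variety names
+print(variety_codes[:5])  # the first few rows as indices
+```
+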
+The code below shows how we can one-hot encode a variety:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+To train a linear regression that uses the one-hot encoded variety as input, we just need to initialize the `X` and `y` data correctly:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+The rest of the code is the same as what we used above to train Linear Regression. If you try it, you will see that the mean squared error is about the same, but we get a much higher coefficient of determination (~77%). To get more accurate predictions, we can take more categorical features into account, as well as numeric features, such as `Month` or `DayOfYear`. To get one large array of features, we can use `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Here we also take the `City` and `Package` type into account, which gives us MSE 2.84 (10%) and determination 0.94!
+
+## Kuweka yote pamoja
+
+Ili kufanya mfano bora zaidi, tunaweza kutumia data iliyochanganywa (one-hot encoded categorical + numeric) kutoka mfano hapo juu pamoja na Regression ya Polynomial. Hapa kuna msimbo kamili kwa urahisi wako:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+This should give us the best determination coefficient of about 97%, and MSE=2.23 (~8% prediction error).
+
+| Model | MSE | Determination |
+|-------|-----|---------------|
+| `DayOfYear` Linear | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` Polynomial | 2.73 (17.0%) | 0.08 |
+| `Variety` Linear | 5.24 (19.7%) | 0.77 |
+| All features Linear | 2.84 (10.5%) | 0.94 |
+| All features Polynomial | 2.23 (8.25%) | 0.97 |
+
+🏆 Umefanya vizuri! Umeunda mifano minne ya Regression katika somo moja, na kuboresha ubora wa mfano hadi 97%. Katika sehemu ya mwisho ya Regression, utajifunza kuhusu Logistic Regression ili kubaini kategoria.
+
+---
+## 🚀Changamoto
+
+Jaribu vigezo tofauti kadhaa katika notebook hii kuona jinsi uhusiano unavyolingana na usahihi wa mfano.
+
+## [Quiz baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## Mapitio na Kujisomea
+
+Katika somo hili tulijifunza kuhusu Regression ya Mstari. Kuna aina nyingine muhimu za Regression. Soma kuhusu mbinu za Stepwise, Ridge, Lasso na Elasticnet. Kozi nzuri ya kusoma kujifunza zaidi ni [Kozi ya Stanford ya Statistical Learning](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning)
+
+## Kazi
+
+[Jenga Modeli](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/3-Linear/assignment.md b/translations/sw/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..51962a51e
--- /dev/null
+++ b/translations/sw/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Unda Mfano wa Regression
+
+## Maelekezo
+
+Katika somo hili umeonyeshwa jinsi ya kujenga mfano kwa kutumia Regression ya Linear na Polynomial. Kwa kutumia maarifa haya, tafuta seti ya data au tumia moja ya seti zilizojengwa ndani ya Scikit-learn kujenga mfano mpya. Eleza katika daftari lako kwa nini ulichagua mbinu uliyotumia, na onyesha usahihi wa mfano wako. Ikiwa sio sahihi, eleza kwa nini.
+
+## Rubric
+
+| Vigezo | Bora kabisa | Inatosha | Inahitaji Kuboresha |
+| -------- | ------------------------------------------------------------ | --------------------------- | ------------------------------ |
+| | inawasilisha daftari kamili na suluhisho lililoandikwa vizuri | suluhisho halijakamilika | suluhisho lina kasoro au ni bugu |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo rasmi. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/3-Linear/solution/Julia/README.md b/translations/sw/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..6ac3801a1
--- /dev/null
+++ b/translations/sw/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/4-Logistic/README.md b/translations/sw/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..fb7a9a16e
--- /dev/null
+++ b/translations/sw/2-Regression/4-Logistic/README.md
@@ -0,0 +1,380 @@
+# Utabiri wa Logistic kutabiri makundi
+
+
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [Somo hili linapatikana kwa R!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Utangulizi
+
+Katika somo hili la mwisho kuhusu Regression, mojawapo ya mbinu za msingi za ML _classic_, tutachunguza Logistic Regression. Ungetumia mbinu hii kugundua mifumo ya kutabiri makundi mawili. Je, hii pipi ina chokoleti au la? Je, ugonjwa huu unaambukiza au la? Je, mteja huyu atachagua bidhaa hii au la?
+
+Katika somo hili, utajifunza:
+
+- Maktaba mpya kwa ajili ya kuona data
+- Mbinu za logistic regression
+
+✅ Pata uelewa wa kina wa kufanya kazi na aina hii ya regression katika [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott)
+
+## Sharti
+
+Baada ya kufanya kazi na data za malenge, sasa tunajua kwamba kuna kundi moja la binary ambalo tunaweza kufanya kazi nalo: `Color`.
+
+Wacha tujenge mfano wa logistic regression kutabiri, kutokana na baadhi ya vigezo, _rangi ya malenge fulani inaweza kuwa_ (machungwa 🎃 au nyeupe 👻).
+
+> Kwa nini tunazungumzia binary classification katika somo kuhusu regression? Ni kwa urahisi wa lugha tu, kwani logistic regression ni [kweli ni mbinu ya classification](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), ingawa ni ya msingi wa linear. Jifunze kuhusu njia nyingine za kuainisha data katika kundi la somo linalofuata.
+
+## Eleza swali
+
+Kwa madhumuni yetu, tutasema hili kama binary: 'Nyeupe' au 'Sio Nyeupe'. Pia kuna kundi la 'striped' katika dataset yetu lakini kuna matukio machache tu ya hilo, kwa hivyo hatutalitumia. Linatoweka mara tu tunapoondoa thamani za null kutoka kwa dataset, hata hivyo.
+
+> 🎃 Fun fact, wakati mwingine tunaita malenge nyeupe 'ghost' pumpkins. Hayachongwi kwa urahisi, kwa hivyo hayapendwi kama yale ya machungwa lakini yanaonekana baridi! Kwa hivyo tunaweza pia kuweka upya swali letu kama: 'Ghost' au 'Sio Ghost'. 👻
+
+## Kuhusu logistic regression
+
+Logistic regression inatofautiana na linear regression, ambayo ulijifunza hapo awali, kwa njia kadhaa muhimu.
+
+[](https://youtu.be/KpeCT6nEpBY "ML for beginners - Kuelewa Logistic Regression kwa Machine Learning Classification")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa logistic regression.
+
+### Binary classification
+
+Logistic regression haitoi vipengele sawa na linear regression. Ya kwanza inatoa utabiri kuhusu kundi la binary ("nyeupe au sio nyeupe") ilhali ya pili inaweza kutabiri thamani zinazoendelea, kwa mfano kutokana na asili ya malenge na wakati wa mavuno, _bei yake itapanda kiasi gani_.
+
+
+> Infographic by [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+### Makundi mengine
+
+Kuna aina nyingine za logistic regression, ikiwa ni pamoja na multinomial na ordinal:
+
+- **Multinomial**, ambayo inahusisha kuwa na zaidi ya kundi moja - "Machungwa, Nyeupe, na Striped".
+- **Ordinal**, ambayo inahusisha makundi yaliyopangwa, muhimu ikiwa tunataka kupanga matokeo yetu kwa mantiki, kama malenge yetu ambayo yamepangwa kwa idadi finyu ya ukubwa (mini,sm,med,lg,xl,xxl).
+
+
+
+### Vigezo HAVIHITAJI kuhusiana
+
+Kumbuka jinsi linear regression ilifanya kazi vizuri zaidi na vigezo vilivyohusiana zaidi? Logistic regression ni kinyume - vigezo havihitaji kuendana. Hiyo inafanya kazi kwa data hii ambayo ina uhusiano dhaifu kiasi.
+
+### Unahitaji data nyingi safi
+
+Logistic regression itatoa matokeo sahihi zaidi ikiwa utatumia data nyingi; dataset yetu ndogo si bora kwa kazi hii, kwa hivyo kumbuka hilo.
+
+[](https://youtu.be/B2X4H9vcXTs "ML for beginners - Uchambuzi wa Data na Maandalizi kwa Logistic Regression")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa kuandaa data kwa logistic regression
+
+✅ Fikiria aina za data ambazo zingefaa kwa logistic regression
+
+## Mazoezi - safisha data
+
+Kwanza, safisha data kidogo, ukiondoa thamani za null na kuchagua baadhi tu ya safu:
+
+1. Ongeza msimbo ufuatao:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ Unaweza kila wakati kuangalia dataframe yako mpya:
+
+ ```python
+    pumpkins.info()
+ ```
+
+### Visualization - categorical plot
+
+By now you have loaded up the [starter notebook](../../../../2-Regression/4-Logistic/notebook.ipynb) with pumpkin data once again and cleaned it so as to preserve a dataset containing a few variables, including `Color`. Let's visualize the dataframe in the notebook using a different library: [Seaborn](https://seaborn.pydata.org/index.html), which is built on Matplotlib which we used earlier.
+
+Seaborn inatoa njia nzuri za kuona data yako. Kwa mfano, unaweza kulinganisha usambazaji wa data kwa kila `Variety` na `Color` katika categorical plot.
+
+1. Create such a plot by using the `catplot` function, using our pumpkin data `pumpkins`, and specifying a color mapping for each pumpkin category (orange or white):
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+ Kwa kuchunguza data, unaweza kuona jinsi data ya Rangi inavyohusiana na Aina.
+
+ ✅ Kwa kuzingatia plot ya kikundi, ni uchunguzi gani wa kuvutia unaweza kufikiria?
+
+### Data pre-processing: feature and label encoding
+Dataset yetu ya malenge ina thamani za string kwa safu zake zote. Kufanya kazi na data ya kikundi ni rahisi kwa wanadamu lakini si kwa mashine. Algorithimu za machine learning hufanya kazi vizuri na nambari. Ndiyo maana encoding ni hatua muhimu sana katika awamu ya pre-processing ya data, kwani inatuwezesha kubadilisha data ya kikundi kuwa data ya nambari, bila kupoteza habari yoyote. Encoding nzuri husababisha kujenga mfano mzuri.
+
+Kwa feature encoding kuna aina mbili kuu za encoders:
+
+1. Ordinal encoder: inafaa vizuri kwa vigezo vya ordinal, ambavyo ni vigezo vya kikundi ambapo data zao zinafuata mpangilio wa kimantiki, kama safu ya `Item Size` katika dataset yetu. Inaunda ramani ili kila kundi liwakilishwe na nambari, ambayo ni mpangilio wa kundi katika safu.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Categorical encoder: inafaa vizuri kwa vigezo vya nominal, ambavyo ni vigezo vya kikundi ambapo data zao hazifuati mpangilio wa kimantiki, kama vipengele vyote tofauti na `Item Size` katika dataset yetu. Ni one-hot encoding, ambayo ina maana kwamba kila kundi linawakilishwa na safu ya binary: kigezo kilichosimbwa ni sawa na 1 ikiwa malenge ni ya Aina hiyo na 0 vinginevyo.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+Kisha, `ColumnTransformer` hutumiwa kuchanganya encoders nyingi katika hatua moja na kuzitumia kwenye safu zinazofaa.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+Kwa upande mwingine, ili kusimba lebo, tunatumia darasa la scikit-learn `LabelEncoder`, ambalo ni darasa la matumizi kusaidia kuboresha lebo ili ziwe na thamani kati ya 0 na n_classes-1 (hapa, 0 na 1).
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+Once we have encoded the features and the label, we can merge them into a new dataframe `encoded_pumpkins`.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
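+
+To double-check which number was assigned to each color, you can inspect the fitted `label_encoder` (a small optional sketch):
+
+```python
+# map each original class to its encoded value, e.g. {'ORANGE': 0, 'WHITE': 1}
+print(dict(zip(label_encoder.classes_, label_encoder.transform(label_encoder.classes_))))
+```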
+✅ What are the advantages of using an ordinal encoder for the `Item Size` column?
+
+### Analyse relationships between variables
+
+Now that we have pre-processed our data, we can analyse the relationships between the features and the label to grasp an idea of how well the model will be able to predict the label given the features.
+The best way to perform this kind of analysis is plotting the data. We'll be using again the Seaborn `catplot` function, to visualize the relationships between `Item Size`, `Variety` and `Color` in a categorical plot. To better plot the data we'll be using the encoded `Item Size` column and the unencoded `Variety` column.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Tumia swarm plot
+
+Kwa kuwa Rangi ni kundi la binary (Nyeupe au Sio), inahitaji 'njia [maalum](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar) ya kuiona'. Kuna njia nyingine za kuona uhusiano wa kundi hili na vigezo vingine.
+
+Unaweza kuona vigezo kando kando na plots za Seaborn.
+
+1. Jaribu 'swarm' plot kuonyesha usambazaji wa thamani:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Angalizo**: msimbo hapo juu unaweza kutoa onyo, kwa kuwa seaborn inashindwa kuwakilisha kiasi hicho cha pointi za data katika swarm plot. Suluhisho linalowezekana ni kupunguza ukubwa wa alama, kwa kutumia kipengele cha 'size'. Hata hivyo, kuwa makini kwamba hili linaathiri usomaji wa plot.
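+
+A sketch of that workaround (one possible value for `size`):
+
+```python
+# smaller markers let seaborn place every point without the overflow warning
+sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins,
+              palette=palette, size=2)
+```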
+
+> **🧮 Nionyeshe Hisabati**
+>
+> Logistic regression inategemea dhana ya 'maximum likelihood' kwa kutumia [sigmoid functions](https://wikipedia.org/wiki/Sigmoid_function). 'Sigmoid Function' kwenye plot inaonekana kama umbo la 'S'. Inachukua thamani na kuipanga mahali fulani kati ya 0 na 1. Mchoro wake pia unaitwa 'logistic curve'. Mfumo wake unaonekana kama huu:
+>
+> 
+>
+> ambapo sehemu ya kati ya sigmoid inajipata kwenye sehemu ya 0 ya x, L ni thamani ya juu ya curve, na k ni mwinuko wa curve. Ikiwa matokeo ya kazi ni zaidi ya 0.5, lebo husika itapewa darasa '1' la chaguo la binary. Ikiwa sivyo, itatambulishwa kama '0'.
+
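+As an illustration (not part of the lesson), here is a small Python sketch of that function:
+
+```python
+import numpy as np
+
+def sigmoid(z, L=1.0, k=1.0, z0=0.0):
+    # L = curve maximum, k = steepness, z0 = x-value of the midpoint
+    return L / (1 + np.exp(-k * (z - z0)))
+
+# values below the midpoint map under 0.5 (class 0), above it over 0.5 (class 1)
+print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # approx. [0.119 0.5 0.881]
+```
+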
+## Jenga mfano wako
+
+Kujenga mfano wa kupata classification ya binary ni rahisi kushangaza katika Scikit-learn.
+
+[](https://youtu.be/MmZS2otPrQ8 "ML for beginners - Logistic Regression kwa classification ya data")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa kujenga mfano wa logistic regression
+
+1. Chagua vigezo unavyotaka kutumia katika mfano wako wa classification na gawanya seti za mafunzo na majaribio kwa kuita `train_test_split()`:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Sasa unaweza kufundisha mfano wako, kwa kuita `fit()` na data zako za mafunzo, na kuchapisha matokeo yake:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ Angalia scoreboard ya mfano wako. Sio mbaya, ukizingatia una karibu safu 1000 tu za data:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Uelewa bora kupitia confusion matrix
+
+While you can get a scoreboard report of [terms](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) by printing out the items above, you might be able to understand your model more easily by using a [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix) to help understand how the model is performing.
+
+> 🎓 '[confusion matrix](https://wikipedia.org/wiki/Confusion_matrix)' (au 'error matrix') ni jedwali linaloonyesha kweli vs. uongo wa positives na negatives za mfano wako, hivyo kupima usahihi wa utabiri.
+
+1. Ili kutumia confusion metrics, piga `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ Angalia confusion matrix ya mfano wako:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+In Scikit-learn, the rows (axis 0) of a confusion matrix are the actual labels and the columns (axis 1) are the predicted labels.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+Nini kinaendelea hapa? Tuseme mfano wetu umeombwa kuainisha malenge kati ya makundi mawili ya binary, kundi 'nyeupe' na kundi 'sio nyeupe'.
+
+- If your model predicts a pumpkin as not white and it belongs to category 'not-white' in reality, we call it a true negative, shown by the top left number.
+- If your model predicts a pumpkin as white and it belongs to category 'not-white' in reality, we call it a false positive, shown by the top right number.
+- If your model predicts a pumpkin as not white and it belongs to category 'white' in reality, we call it a false negative, shown by the bottom left number.
+- If your model predicts a pumpkin as white and it belongs to category 'white' in reality, we call it a true positive, shown by the bottom right number.
+
+Kama unavyoweza kudhani ni bora kuwa na idadi kubwa ya true positives na true negatives na idadi ndogo ya false positives na false negatives, ambayo inaonyesha kuwa mfano unafanya kazi vizuri.
+
+Je, confusion matrix inahusiana vipi na precision na recall? Kumbuka, ripoti ya classification iliyochapishwa hapo juu ilionyesha precision (0.85) na recall (0.67).
+
+Precision = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+Recall = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
+
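+You can verify these numbers directly from the matrix, reusing `confusion_matrix` from above (a small optional sketch):
+
+```python
+# unpack the 2x2 matrix: [[tn, fp], [fn, tp]]
+tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
+print('Precision:', tp / (tp + fp))  # 22 / 26 ≈ 0.846
+print('Recall:   ', tp / (tp + fn))  # 22 / 33 ≈ 0.667
+```
+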
+✅ Q: Kulingana na confusion matrix, mfano ulifanyaje? A: Sio mbaya; kuna idadi nzuri ya true negatives lakini pia kuna false negatives kadhaa.
+
+Wacha tutembelee tena maneno tuliyoyaona awali kwa msaada wa ramani ya TP/TN na FP/FN ya confusion matrix:
+
+🎓 Precision: TP/(TP + FP) Sehemu ya matukio muhimu kati ya matukio yaliyopatikana (mfano ni lebo zipi zilizoainishwa vizuri)
+
+🎓 Recall: TP/(TP + FN) Sehemu ya matukio muhimu yaliyopatikana, iwe yameainishwa vizuri au la
+
+🎓 f1-score: (2 * precision * recall)/(precision + recall) Wastani wa uzito wa precision na recall, bora ikiwa 1 na mbaya ikiwa 0
+
+🎓 Support: Idadi ya matukio ya kila lebo yaliyopatikana
+
+🎓 Accuracy: (TP + TN)/(TP + TN + FP + FN) Asilimia ya lebo zilizotabiriwa kwa usahihi kwa sampuli.
+
+🎓 Macro Avg: The calculation of the unweighted mean metrics for each label, not taking label imbalance into account.
+
+🎓 Weighted Avg: Hesabu ya wastani wa metrics kwa kila lebo, kwa kuzingatia kutofautiana kwa lebo kwa kuzipima kwa support zao (idadi ya matukio ya kweli kwa kila lebo).
+
+✅ Unaweza kufikiria ni kipimo gani unapaswa kuangalia ikiwa unataka mfano wako kupunguza idadi ya false negatives?
+
+## Onyesha ROC curve ya mfano huu
+
+[](https://youtu.be/GApO575jTA0 "ML for beginners - Kuchambua Utendaji wa Logistic Regression na ROC Curves")
+
+> 🎥 Bofya picha hapo juu kwa muhtasari mfupi wa ROC curves
+
+Wacha tufanye visualization moja zaidi kuona kinachoitwa 'ROC' curve:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+Kwa kutumia Matplotlib, chora [Receiver Operating Characteristic](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html?highlight=roc) au ROC ya mfano. ROC curves mara nyingi hutumiwa kupata mtazamo wa matokeo ya classifier kwa upande wa true vs. false positives. "ROC curves kawaida huonyesha true positive rate kwenye mhimili wa Y, na false positive rate kwenye mhimili wa X." Kwa hivyo, mwinuko wa curve na nafasi kati ya mstari wa kati na curve ni muhimu: unataka curve ambayo inaelekea juu na juu ya mstari haraka. Katika kesi yetu, kuna false positives kuanza na, na kisha mstari unaelekea juu na juu ipasavyo:
+
+
+
+Hatimaye, tumia [`roc_auc_score` API](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html?highlight=roc_auc#sklearn.metrics.roc_auc_score) ya Scikit-learn kuhesabu 'Area Under the Curve' (AUC):
+
+```python
+auc = roc_auc_score(y_test,y_scores[:,1])
+print(auc)
+```
+The result is `0.9749908725812341`. Since the AUC ranges from 0 to 1, you want a big score: a model that is 100% correct in its predictions will have an AUC of 1. In this case, the model is _pretty good_.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokamilika. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa maelewano au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/4-Logistic/assignment.md b/translations/sw/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..4fa89565e
--- /dev/null
+++ b/translations/sw/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,13 @@
+# Retrying some Regression
+
+## Maelekezo
+
+Katika somo, ulitumia sehemu ndogo ya data ya maboga. Sasa, rudi kwenye data ya asili na ujaribu kutumia yote, iliyosafishwa na kuwekwa viwango, kujenga mfano wa Logistic Regression.
+## Rubric
+
+| Kigezo | Bora Zaidi | Inayotosheleza | Inayohitaji Kuboresha |
+| ------- | ---------------------------------------------------------------------- | ---------------------------------------------------------- | ---------------------------------------------------------- |
+| | Daftari linaoneshwa na mfano ulioelezwa vizuri na kufanya kazi vizuri | Daftari linaoneshwa na mfano unaofanya kazi kwa kiwango cha chini | Daftari linaoneshwa na mfano usiofanya kazi vizuri au hakuna |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/4-Logistic/solution/Julia/README.md b/translations/sw/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..27c2c0510
--- /dev/null
+++ b/translations/sw/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au dosari. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/2-Regression/README.md b/translations/sw/2-Regression/README.md
new file mode 100644
index 000000000..75a6ce155
--- /dev/null
+++ b/translations/sw/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Mifano ya urejeleaji kwa ajili ya kujifunza mashine
+## Mada ya kieneo: Mifano ya urejeleaji kwa bei za maboga Amerika Kaskazini 🎃
+
+Katika Amerika Kaskazini, maboga mara nyingi hukatwa na kufanywa nyuso za kutisha kwa ajili ya Halloween. Hebu tujifunze zaidi kuhusu mboga hizi za kuvutia!
+
+
+> Picha na Beth Teutschmann kwenye Unsplash
+
+## Kile utakachojifunza
+
+[](https://youtu.be/5QnJtDad4iQ "Regression Introduction video - Click to Watch!")
+> 🎥 Bofya picha hapo juu kwa video ya utangulizi wa haraka kwa somo hili
+
+Masomo katika sehemu hii yanashughulikia aina za urejeleaji katika muktadha wa kujifunza mashine. Mifano ya urejeleaji inaweza kusaidia kubaini _uhusiano_ kati ya vigezo. Aina hii ya mfano inaweza kutabiri thamani kama urefu, joto, au umri, hivyo kufichua uhusiano kati ya vigezo inapochambua data.
+
+Katika mfululizo huu wa masomo, utagundua tofauti kati ya urejeleaji wa mstari na urejeleaji wa kimantiki, na wakati gani unapaswa kupendelea moja juu ya nyingine.
+
+[](https://youtu.be/XA3OaoW86R8 "ML for beginners - Introduction to Regression models for Machine Learning")
+
+> 🎥 Bofya picha hapo juu kwa video fupi ya utangulizi wa mifano ya urejeleaji.
+
+Katika kundi hili la masomo, utaandaliwa kuanza kazi za kujifunza mashine, ikiwa ni pamoja na kusanidi Visual Studio Code kusimamia vitabu, mazingira ya kawaida kwa wanasayansi wa data. Utagundua Scikit-learn, maktaba kwa ajili ya kujifunza mashine, na utajenga mifano yako ya kwanza, ukilenga mifano ya urejeleaji katika sura hii.
+
+> Kuna zana muhimu za chini ya msimbo ambazo zinaweza kukusaidia kujifunza kuhusu kufanya kazi na mifano ya urejeleaji. Jaribu [Azure ML kwa kazi hii](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Masomo
+
+1. [Zana za biashara](1-Tools/README.md)
+2. [Usimamizi wa data](2-Data/README.md)
+3. [Urejeleaji wa mstari na polinomiali](3-Linear/README.md)
+4. [Urejeleaji wa kimantiki](4-Logistic/README.md)
+
+---
+### Shukrani
+
+"ML na urejeleaji" iliandikwa kwa ♥️ na [Jen Looper](https://twitter.com/jenlooper)
+
+♥️ Wanaochangia maswali ni pamoja na: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) na [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+Seti ya data ya maboga imependekezwa na [mradi huu kwenye Kaggle](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) na data yake inatoka [Ripoti za Kawaida za Masoko ya Mazao Maalum](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) zinazotolewa na Idara ya Kilimo ya Marekani. Tumeongeza baadhi ya pointi kuhusu rangi kulingana na aina ili kusawazisha usambazaji. Data hii ipo katika umma.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotegemea mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upotofu. Hati asilia katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo chenye mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/3-Web-App/1-Web-App/README.md b/translations/sw/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..e36b73668
--- /dev/null
+++ b/translations/sw/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Jenga Tovuti Kutumia Mfano wa ML
+
+Katika somo hili, utafundisha mfano wa ML kwenye seti ya data ambayo ni ya kipekee: _matukio ya UFO katika karne iliyopita_, iliyotolewa kutoka kwenye hifadhidata ya NUFORC.
+
+Utajifunza:
+
+- Jinsi ya 'pickle' mfano uliofundishwa
+- Jinsi ya kutumia mfano huo katika programu ya Flask
+
+Tutaendelea kutumia daftari za maelezo kusafisha data na kufundisha mfano wetu, lakini unaweza kuchukua hatua moja zaidi kwa kuchunguza kutumia mfano 'katika mazingira halisi', kwa maneno mengine: katika programu ya wavuti.
+
+Ili kufanya hivi, unahitaji kujenga programu ya wavuti kwa kutumia Flask.
+
+## [Jaribio la kabla ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Kujenga programu
+
+Kuna njia kadhaa za kujenga programu za wavuti ili kutumia mifano ya kujifunza mashine. Muundo wako wa wavuti unaweza kuathiri jinsi mfano wako unavyofundishwa. Fikiria kuwa unafanya kazi katika biashara ambapo kikundi cha sayansi ya data kimefundisha mfano ambao wanataka utumie katika programu.
+
+### Mambo ya Kuzingatia
+
+Kuna maswali mengi unayohitaji kuuliza:
+
+- **Je, ni programu ya wavuti au programu ya simu?** Ikiwa unajenga programu ya simu au unahitaji kutumia mfano katika muktadha wa IoT, unaweza kutumia [TensorFlow Lite](https://www.tensorflow.org/lite/) na kutumia mfano katika programu ya Android au iOS.
+- **Where will the model live?** In the cloud, or locally?
+- **Msaada wa nje ya mtandao.** Je, programu inahitaji kufanya kazi nje ya mtandao?
+- **Teknolojia gani ilitumika kufundisha mfano?** Teknolojia iliyochaguliwa inaweza kuathiri zana unazohitaji kutumia.
+ - **Kutumia TensorFlow.** Ikiwa unafundisha mfano kwa kutumia TensorFlow, kwa mfano, mfumo huo unatoa uwezo wa kubadilisha mfano wa TensorFlow kwa matumizi katika programu ya wavuti kwa kutumia [TensorFlow.js](https://www.tensorflow.org/js/).
+ - **Kutumia PyTorch.** Ikiwa unajenga mfano kwa kutumia maktaba kama [PyTorch](https://pytorch.org/), una chaguo la kuuza nje katika muundo wa [ONNX](https://onnx.ai/) (Open Neural Network Exchange) kwa matumizi katika programu za wavuti za JavaScript zinazoweza kutumia [Onnx Runtime](https://www.onnxruntime.ai/). Chaguo hili litachunguzwa katika somo la baadaye kwa mfano uliofundishwa na Scikit-learn.
+ - **Kutumia Lobe.ai au Azure Custom Vision.** Ikiwa unatumia mfumo wa ML SaaS (Software as a Service) kama [Lobe.ai](https://lobe.ai/) au [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) kufundisha mfano, aina hii ya programu inatoa njia za kuuza nje mfano kwa majukwaa mengi, ikiwa ni pamoja na kujenga API maalum ya kuulizwa katika wingu na programu yako ya mtandaoni.
+
+Pia una nafasi ya kujenga programu kamili ya wavuti ya Flask ambayo ingeweza kufundisha mfano yenyewe katika kivinjari cha wavuti. Hii inaweza pia kufanywa kwa kutumia TensorFlow.js katika muktadha wa JavaScript.
+
+Kwa madhumuni yetu, kwa kuwa tumekuwa tukifanya kazi na daftari za maelezo za msingi wa Python, hebu tuchunguze hatua unazohitaji kuchukua ili kuuza nje mfano uliofundishwa kutoka daftari kama hilo kwa muundo unaosomeka na programu ya wavuti iliyojengwa kwa Python.
+
+## Zana
+
+Kwa kazi hii, unahitaji zana mbili: Flask na Pickle, zote zinaendesha kwenye Python.
+
+✅ Flask ni nini? [Flask](https://palletsprojects.com/p/flask/) ni mfumo wa 'micro-framework' kama ulivyoelezwa na waumbaji wake, Flask hutoa vipengele vya msingi vya mifumo ya wavuti kwa kutumia Python na injini ya templating kujenga kurasa za wavuti. Angalia [moduli hii ya kujifunza](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) ili kufanya mazoezi ya kujenga na Flask.
+
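+To make the 'micro' part concrete, here is a minimal sketch of a Flask app; the file name and route are illustrative and not part of this lesson's project:
+
+```python
+# app_hello.py - a minimal Flask sketch (illustrative only)
+from flask import Flask
+
+app = Flask(__name__)
+
+@app.route("/")
+def home():
+    # a route returns the response body; templating comes later in this lesson
+    return "Hello, Flask!"
+
+if __name__ == "__main__":
+    app.run(debug=True)
+```
+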
+✅ What's Pickle? [Pickle](https://docs.python.org/3/library/pickle.html) 🥒 is a Python module that serializes and de-serializes a Python object structure. When you 'pickle' a model, you serialize or flatten its structure for use on the web. Be careful: pickle is not intrinsically secure, so be careful if prompted to 'un-pickle' a file. A pickled file has the suffix `.pkl`.
+
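+As a quick illustration of that round trip, here is a minimal sketch that pickles and un-pickles an arbitrary Python object; the file name and contents are illustrative:
+
+```python
+import pickle
+
+scores = {"model": "logistic", "accuracy": 0.95}
+
+# serialize ('pickle') the object to a .pkl file
+with open("scores.pkl", "wb") as f:
+    pickle.dump(scores, f)
+
+# de-serialize ('un-pickle') it again - only do this with files you trust
+with open("scores.pkl", "rb") as f:
+    print(pickle.load(f))  # {'model': 'logistic', 'accuracy': 0.95}
+```
+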
+## Exercise - clean your data
+
+In this lesson you'll use data from 80,000 UFO sightings, gathered by [NUFORC](https://nuforc.org) (The National UFO Reporting Center). This data has some interesting descriptions of UFO sightings, for example:
+
+- **Long example description.** "A man emerges from a beam of light that shines on a grassy field at night and he runs towards the Texas Instruments parking lot".
+- **Short example description.** "the lights chased us".
+
+The [ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) spreadsheet includes columns about the `city`, `state` and `country` where the sighting occurred, the object's `shape` and its `latitude` and `longitude`.
+
+In the [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) included in this lesson:
+
+1. Import `pandas`, `matplotlib`, and `numpy` as you did in previous lessons and import the ufos spreadsheet. You can take a look at a sample data set:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. Convert the ufos data to a small dataframe with fresh titles. Check the unique values in the `Country` field.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Now, you can reduce the amount of data we need to deal with by dropping any null values and only importing sightings that lasted between 1-60 seconds:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Import Scikit-learn's `LabelEncoder` library to convert the text values for countries to a number:
+
+    ✅ LabelEncoder encodes data alphabetically
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+    Your data should look like this:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
+
+## Exercise - build your model
+
+Now you can get ready to train a model by dividing the data into training and testing groups.
+
+1. Select the three features you want to train on as your X vector; the y vector will be the `Country`. You want to be able to input `Seconds`, `Latitude` and `Longitude` and get a country id in return.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Train your model using logistic regression:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+The accuracy isn't bad **(around 95%)**, unsurprisingly, as `Country` and `Latitude/Longitude` correlate.
+
+The model you created isn't very revolutionary, as you should be able to infer a `Country` from its `Latitude` and `Longitude`, but it's a good exercise to try to train from raw data that you cleaned and exported, and then to use the resulting model in a web app.
+
+## Exercise - 'pickle' your model
+
+Now, it's time to _pickle_ your model! You can do that in a few lines of code. Once it's _pickled_, load your pickled model and test it against a sample data array containing values for seconds, latitude and longitude:
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+The model returns **'3'**, which is the country code for the UK. Wild! 👽
+
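+If you'd rather not keep track of numeric codes yourself, one option (a sketch, assuming you go back and keep a reference to the `LabelEncoder` used earlier instead of calling it inline) is to let the encoder map a prediction back to a country string:
+
+```python
+from sklearn.preprocessing import LabelEncoder
+
+# keep a reference to the encoder rather than calling LabelEncoder() inline
+country_encoder = LabelEncoder()
+ufos['Country'] = country_encoder.fit_transform(ufos['Country'])
+
+# ...train and pickle the model as above, then decode a prediction:
+pred = model.predict([[50, 44, -12]])
+print(country_encoder.inverse_transform(pred))  # e.g. ['gb'] for code 3
+```
+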
+## Exercise - build a Flask app
+
+Now you can build a Flask app to call your model and return similar results, but in a more visually pleasing way.
+
+1. Start by creating a folder called **web-app** next to the _notebook.ipynb_ file where your _ufo-model.pkl_ file resides.
+
+1. In that folder create three more folders: **static**, with a folder **css** inside it, and **templates**. You should now have the following files and directories:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+    ✅ Refer to the solution folder for a view of the finished app
+
+1. The first file to create in the _web-app_ folder is the **requirements.txt** file. Like _package.json_ in a JavaScript app, this file lists the dependencies required by the app. In **requirements.txt** add the lines:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Now, run this file by navigating to _web-app_:
+
+ ```bash
+ cd web-app
+ ```
+
+1. In your terminal, type `pip install` to install the libraries listed in _requirements.txt_:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Now, you're ready to create three more files to finish the app:
+
+    1. Create **app.py** in the root.
+    2. Create **index.html** in the _templates_ directory.
+    3. Create **styles.css** in the _static/css_ directory.
+
+1. Build out the _styles.css_ file with a few styles:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Next, build out the _index.html_ file:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+      <div class="grid">
+
+        <div class="box">
+
+          <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+          <form action="{{ url_for('predict')}}" method="post">
+            <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+            <input type="text" name="latitude" placeholder="Latitude" required="required" />
+            <input type="text" name="longitude" placeholder="Longitude" required="required" />
+            <button type="submit" class="btn">Predict country where the UFO is seen</button>
+          </form>
+
+          <p>{{ prediction_text }}</p>
+
+        </div>
+      </div>
+
+    </body>
+    </html>
+    ```
+
+    Take a look at the templating in this file. Notice the 'mustache' syntax around variables that will be provided by the app, like the prediction text: `{{}}`. There's also a form that posts a prediction to the `/predict` route.
+
+    Finally, you're ready to build the Python file that drives the consumption of the model and the display of predictions:
+
+1. In `app.py` add:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+    > 💡 Tip: when you add [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) while running the web app using Flask, any changes you make to your application will be reflected immediately without the need to restart the server. Beware! Don't enable this mode in a production app.
+
+If you run `python app.py` or `python3 app.py`, your web server starts up locally, and you can fill out a short form to get an answer to your burning question about where UFOs have been sighted!
+
+Before doing that, take a look at the parts of `app.py`:
+
+1. First, dependencies are loaded and the app starts.
+1. Then, the model is imported.
+1. Then, index.html is rendered on the home route.
+
+On the `/predict` route, several things happen when the form is posted:
+
+1. The form variables are gathered and converted to a numpy array. They are then sent to the model and a prediction is returned.
+2. The countries we want displayed are re-rendered as readable text from their predicted country code, and that value is sent back to index.html to be rendered in the template.
+
+Using a model this way, with Flask and a pickled model, is relatively straightforward. The hardest thing is to understand what shape the data is that must be sent to the model to get a prediction. That all depends on how the model was trained. This one has three data points to be input in order to get a prediction.
+
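+To see that shape concretely, here is a small sketch that posts the three form fields to the `/predict` route with Python's `requests` library (an assumption: the server is running locally on Flask's default port 5000, and `requests` is installed via `pip install requests`):
+
+```python
+import requests
+
+# the three form fields app.py expects: seconds, latitude, longitude
+form_data = {"seconds": 50, "latitude": 44, "longitude": -12}
+
+response = requests.post("http://127.0.0.1:5000/predict", data=form_data)
+print(response.status_code)                 # 200 if the route rendered
+print("Likely country:" in response.text)   # the rendered template contains the prediction
+```
+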
+In a professional setting, you can see how good communication is necessary between the folks who train the model and those who consume it in a web or mobile app. In our case, it's only one person, you!
+
+---
+
+## 🚀 Challenge
+
+Instead of working in a notebook and importing the model to the Flask app, you could train the model right within the Flask app! Try converting your Python code in the notebook, perhaps after your data is cleaned, to train the model from within the app on a route called `train`. What are the pros and cons of pursuing this method?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Review & Self Study
+
+There are many ways to build a web app to consume ML models. Make a list of the ways you could use JavaScript or Python to build a web app to leverage machine learning. Consider architecture: should the model stay in the app or live in the cloud? If the latter, how would you access it? Draw out an architectural model for an applied ML web solution.
+
+## Assignment
+
+[Try a different model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/3-Web-App/1-Web-App/assignment.md b/translations/sw/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..a4451ffe8
--- /dev/null
+++ b/translations/sw/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Try a different model
+
+## Instructions
+
+Now that you've built one web app using a trained Regression model, use one of the models from a previous Regression lesson to redo this web app. You can keep the style or design it differently to reflect the pumpkin data. Take care to change the inputs to reflect your model's training method.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------------- |
+|          | The web app runs as expected and is deployed to the cloud | The web app contains flaws or exhibits unexpected results | The web app does not function properly |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/3-Web-App/README.md b/translations/sw/3-Web-App/README.md
new file mode 100644
index 000000000..3c8947616
--- /dev/null
+++ b/translations/sw/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# Build a web app to use your ML model
+
+In this section of the curriculum, you will be introduced to an applied ML topic: how to save your Scikit-learn model as a file that can be used to make predictions within a web application. Once the model is saved, you'll learn how to use it in a web app built in Flask. You'll first create a model using some data that's all about UFO sightings! Then, you'll build a web app that will allow you to input a number of seconds with a latitude and a longitude value to predict which country reported seeing a UFO.
+
+
+
+Photo by Michael Herren on Unsplash
+
+## Lessons
+
+1. [Build a Web App](1-Web-App/README.md)
+
+## Credits
+
+"Build a Web App" was written with ♥️ by [Jen Looper](https://twitter.com/jenlooper).
+
+♥️ The quizzes were written by Rohan Raj.
+
+The dataset is sourced from [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings).
+
+The web app architecture was suggested in part by [this article](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) and [this repo](https://github.com/abhinavsagar/machine-learning-deployment) by Abhinav Sagar.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/1-Introduction/README.md b/translations/sw/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..10b66e587
--- /dev/null
+++ b/translations/sw/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# Introduction to classification
+
+In these four lessons, you will explore a fundamental focus of classic machine learning - _classification_. We will walk through using various classification algorithms with a dataset about all the brilliant cuisines of Asia and India. Hope you're hungry!
+
+
+
+> Celebrate pan-Asian cuisines in these lessons! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+Classification is a form of [supervised learning](https://wikipedia.org/wiki/Supervised_learning) that bears a lot in common with regression techniques. If machine learning is all about predicting values or names of things by using datasets, then classification generally falls into two groups: _binary classification_ and _multiclass classification_.
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+> 🎥 Click the image above for a video: MIT's John Guttag introduces classification
+
+Remember:
+
+- **Linear regression** helped you predict relationships between variables and make accurate predictions on where a new datapoint would fall in relationship to that line. So, you could predict _what price a pumpkin would be in September vs. December_, for example.
+- **Logistic regression** helped you discover "binary categories": at this price point, _is this pumpkin orange or not-orange_?
+
+Classification uses various algorithms to determine other ways of assigning a label or class to a data point. Let's work with this cuisine data to see whether, by observing a group of ingredients, we can determine its cuisine of origin.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [This lesson is available in R!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### Introduction
+
+Classification is one of the fundamental activities of the machine learning researcher and data scientist. From basic classification of a binary value ("is this email spam or not?") to complex image classification and segmentation using computer vision, it's always useful to be able to sort data into classes and ask questions of it.
+
+To state the process in a more scientific way, your classification method creates a predictive model that enables you to map the relationship between input variables and output variables.
+
+
+
+> Binary vs. multiclass problems for classification algorithms to handle. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Before starting the process of cleaning our data, visualizing it, and prepping it for our ML tasks, let's learn a bit about the various ways machine learning can be leveraged to classify data.
+
+Derived from [statistics](https://wikipedia.org/wiki/Statistical_classification), classification using classic machine learning uses features, such as `smoker`, `weight`, and `age`, to determine the _likelihood of developing X disease_. As a supervised learning technique similar to the regression exercises you performed earlier, your data is labeled and the ML algorithms use those labels to classify and predict classes (or 'features') of a dataset and assign them to a group or outcome.
+
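+As a minimal sketch of that idea (with made-up numbers, not a real medical dataset), labeled feature rows can train a classifier that then predicts the class of a new observation:
+
+```python
+from sklearn.linear_model import LogisticRegression
+
+# features: [smoker (1/0), weight (kg), age] - toy, invented values
+X = [[1, 90, 55], [0, 70, 30], [1, 85, 62], [0, 60, 25], [1, 95, 58], [0, 72, 33]]
+y = [1, 0, 1, 0, 1, 0]  # 1 = developed disease X, 0 = did not
+
+model = LogisticRegression().fit(X, y)
+print(model.predict([[1, 88, 60]]))  # predicted class for a new person
+```
+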
+✅ Take a moment to imagine a dataset about cuisines. What would a multiclass model be able to answer? What would a binary model be able to answer? What if you wanted to determine whether a given cuisine was likely to use fenugreek? What if you wanted to see whether, given a grocery bag full of star anise, artichokes, cauliflower, and horseradish, you could create a typical Indian dish?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 Click the image above for a video. The whole premise of the show 'Chopped' is the 'mystery basket' where chefs have to make a dish out of a random choice of ingredients. Surely an ML model would have helped!
+
+## Hello 'classifier'
+
+The question we want to ask of this cuisine dataset is actually a **multiclass question**, as we have several potential national cuisines to work with. Given a batch of ingredients, which of these many classes will the data fit?
+
+Scikit-learn offers several different algorithms to use to classify data, depending on the kind of problem you want to solve. In the next two lessons, you'll learn about several of these algorithms.
+
+## Exercise - clean and balance your data
+
+The first task at hand, before starting this project, is to clean and **balance** your data to get better results. Start with the blank _notebook.ipynb_ file in the root of this folder.
+
+The first thing to install is [imblearn](https://imbalanced-learn.org/stable/). This is a Scikit-learn package that will allow you to better balance the data (you will learn more about this task in a minute).
+
+1. To install `imblearn`, run `pip install`, like so:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. Import the packages you need to import your data and visualize it; also import `SMOTE` from `imblearn`.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+    Now you are set up to import the data next.
+
+1. The next task will be to import the data:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+    Using `read_csv()` will read the content of the csv file _cuisines.csv_ and place it in the variable `df`.
+
+1. Check the data's shape:
+
+ ```python
+ df.head()
+ ```
+
+    The first five rows look like this:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. Get info about this data by calling `info()`:
+
+ ```python
+ df.info()
+ ```
+
+    Your output resembles:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## Exercise - learning about cuisines
+
+Now the work starts to become more interesting. Let's discover the distribution of the data, per cuisine:
+
+1. Plot the data as bars by calling `barh()`:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+    There are a finite number of cuisines, but the distribution of data is uneven. You can fix that! Before doing so, explore a little more.
+
+1. Find out how much data is available per cuisine and print it out:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+    The output looks like this:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## Discovering ingredients
+
+Now you can dig deeper into the data and learn what the typical ingredients per cuisine are. You should clean out recurrent data that creates confusion between cuisines, so let's learn about this problem.
+
+1. Create a function `create_ingredient_df()` in Python to create an ingredient dataframe. This function starts by dropping an unhelpful column and then sorts the ingredients by their count:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+    Now you can use that function to get an idea of the ten most popular ingredients per cuisine.
+
+1. Call `create_ingredient_df()` and plot the result by calling `barh()`:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Do the same for the Japanese data:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now for the Chinese ingredients:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Plot the Indian ingredients:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Finally, plot the Korean ingredients:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Now, drop the most common ingredients that create confusion between distinct cuisines, by calling `drop()`:
+
+    Everyone loves rice, garlic and ginger!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## Balance the dataset
+
+Now that you have cleaned the data, use [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - the "Synthetic Minority Over-sampling Technique" - to balance it.
+
+1. Call `fit_resample()`; this strategy generates new samples by interpolation.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+    By balancing your data, you'll have better results when classifying it. Think about a binary classification: if most of your data is one class, an ML model is going to predict that class more frequently, just because there is more data for it. Balancing the data removes this skew (a small sketch after this exercise illustrates the effect).
+
+1. Now you can check the numbers of labels per cuisine:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+    Your output looks like this:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+    The data is nice and clean, balanced, and very delicious!
+
+1. The last step is to save your balanced data, including labels and features, into a new dataframe that can be exported to a file:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. You can take one more look at the data using `transformed_df.head()` and `transformed_df.info()`. Save a copy of this data for use in future lessons:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+    This fresh CSV can now be found in the root data folder.
+
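+To see why balancing matters (the sketch promised above), compare a majority-class baseline on the skewed labels with the balanced ones, assuming the dataframes from this exercise are still in memory:
+
+```python
+from sklearn.dummy import DummyClassifier
+
+# a classifier that always predicts the most frequent class
+baseline = DummyClassifier(strategy="most_frequent")
+
+# on the skewed data, 'always predict korean' already looks deceptively good
+baseline.fit(feature_df, labels_df)
+print(baseline.score(feature_df, labels_df))  # 799/2448, roughly 0.33
+
+# on the balanced data, the same trick drops to chance level
+baseline.fit(transformed_feature_df, transformed_label_df)
+print(baseline.score(transformed_feature_df, transformed_label_df))  # 0.20 (1 in 5)
+```
+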
+---
+
+## 🚀Challenge
+
+This curriculum contains several interesting datasets. Dig through the `data` folders and see whether any contain datasets that would be appropriate for binary or multiclass classification. What questions would you ask of such a dataset?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## Review & Self Study
+
+Explore SMOTE's API. What use cases is it best used for? What problems does it solve?
+
+## Assignment
+
+[Explore classification methods](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/1-Introduction/assignment.md b/translations/sw/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..c3e4643dc
--- /dev/null
+++ b/translations/sw/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Explore classification methods
+
+## Instructions
+
+In the [Scikit-learn documentation](https://scikit-learn.org/stable/supervised_learning.html) you'll find a large list of ways to classify data. Do a little scavenger hunt in these docs: your goal is to look for classification methods and match each with a dataset in this curriculum, a question you can ask of it, and a technique of classification. Create a spreadsheet or table in a .doc file and explain how the dataset would work with the classification algorithm.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
+|          | A document is presented overviewing 5 algorithms alongside a classification technique. The overview is well-explained and detailed. | A document is presented overviewing 3 algorithms alongside a classification technique. The overview is well-explained and detailed. | A document is presented overviewing fewer than three algorithms alongside a classification technique and the overview is neither well-explained nor detailed. |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/1-Introduction/solution/Julia/README.md b/translations/sw/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..59ab2b944
--- /dev/null
+++ b/translations/sw/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/2-Classifiers-1/README.md b/translations/sw/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..7b1df609e
--- /dev/null
+++ b/translations/sw/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,77 @@
+# Cuisine classifiers 1
+
+In this lesson, you will use the dataset you saved from the last lesson, full of balanced, clean data all about cuisines.
+
+You will use this dataset with a variety of classifiers to _predict a given national cuisine based on a group of ingredients_. While doing so, you'll learn more about some of the ways that algorithms can be leveraged for classification tasks.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+# Preparation
+
+Assuming you completed [Lesson 1](../1-Introduction/README.md), make sure that a _cleaned_cuisines.csv_ file exists in the root `/data` folder for these four lessons.
+
+## Exercise - predict a national cuisine
+
+1. Working in this lesson's _notebook.ipynb_ folder, import that file along with the Pandas library:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+    The data looks like this:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. Now, import several more libraries:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Divide the X and y coordinates into two dataframes for training. `cuisine` can be the labels dataframe:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+    It will look like this:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. Drop the `Unnamed: 0` column and the `cuisine` column, calling `drop()`. Save the rest of the data as trainable features:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+    Your features look like this:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/2-Classifiers-1/assignment.md b/translations/sw/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..02b18d75a
--- /dev/null
+++ b/translations/sw/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# Study the solvers
+
+## Instructions
+
+In this lesson you learned about the various solvers that pair algorithms with a machine learning process to create an accurate model. Walk through the solvers listed in the lesson and pick two. In your own words, compare and contrast these two solvers. What kind of problem do they address? How do they work with various data structures? Why would you pick one over the other?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------- | ----------------------------- |
+|          | A .doc file is presented with two paragraphs, one on each solver, comparing them thoughtfully. | A .doc file is presented with only one paragraph | The assignment is incomplete |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/sw/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..006a85db7
--- /dev/null
+++ b/translations/sw/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/3-Classifiers-2/README.md b/translations/sw/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..ef5c665b0
--- /dev/null
+++ b/translations/sw/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# Cuisine classifiers 2
+
+In this second classification lesson, you will explore more ways to classify numeric data. You will also learn about the ramifications of choosing one classifier over another.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### Prerequisite
+
+We assume that you have completed the previous lessons and have a cleaned dataset in your `data` folder called _cleaned_cuisines.csv_ in the root of this four-lesson folder.
+
+### Preparation
+
+We have loaded your _notebook.ipynb_ file with the cleaned dataset and divided it into X and y dataframes, ready for the model-building process.
+
+## A classification map
+
+Previously, you learned about the various options available when classifying data using Microsoft's cheat sheet. Scikit-learn offers a similar, but more granular, cheat sheet that can further help narrow down your estimators (another term for classifiers):
+
+
+> Tip: [visit this map online](https://scikit-learn.org/stable/tutorial/machine_learning_map/) and click along its paths to read the documentation.
+
+### The plan
+
+This map is very helpful once you have a clear grasp of your data, as you can 'walk' along its paths to a decision:
+
+- We have >50 samples
+- We want to predict a category
+- We have labeled data
+- We have fewer than 100K samples
+- ✨ We can choose a Linear SVC
+- If that doesn't work, since we have numeric data
+    - We can try a ✨ KNeighbors Classifier
+        - If that doesn't work, try ✨ SVC and ✨ Ensemble Classifiers
+
+This is a very helpful trail to follow.
+
+## Exercise - split the data
+
+Following this path, we should start by importing some libraries to use.
+
+1. Import the needed libraries:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. Split your training and test data:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC classifier
+
+Support-Vector clustering (SVC) is a child of the Support-Vector machines family of ML techniques (learn more about these below). In this method, you can choose a 'kernel' to decide how to cluster the labels. The 'C' parameter refers to 'regularization', which regulates the influence of parameters. The kernel can be one of [several](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC); here we set it to 'linear' to ensure that we leverage linear SVC. Probability defaults to 'false'; here we set it to 'true' to gather probability estimates. We set the random state to '0' to shuffle the data to get probabilities.
+
+### Exercise - apply a linear SVC
+
+Start by creating an array of classifiers. You will add progressively to this array as we test.
+
+1. Start with a Linear SVC:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Train your model using the Linear SVC and print out a report:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+    The result is pretty good:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors classifier
+
+K-Neighbors is part of the "neighbors" family of ML methods, which can be used for both supervised and unsupervised learning. In this method, a predefined number of points is created and data are gathered around these points such that generalized labels can be predicted for the data.
+
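+Here is a tiny sketch of the idea, with toy 2D points invented for illustration: the prediction for a new point comes from a vote among its `n_neighbors` nearest training points:
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+
+# toy 2D points: two clusters labeled 'a' and 'b'
+X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
+y = ['a', 'a', 'a', 'b', 'b', 'b']
+
+knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
+print(knn.predict([[0.5, 0.5], [5.5, 5.5]]))  # ['a' 'b'] - the nearest neighbors vote
+```
+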
+### Exercise - apply the K-Neighbors classifier
+
+The previous classifier was good, and worked well with the data, but maybe we can get better accuracy. Try a K-Neighbors classifier.
+
+1. Add a line to your classifier array (add a comma after the Linear SVC item):
+
+ ```python
+    'KNN classifier': KNeighborsClassifier(C),  # note: C (=10) is passed positionally as n_neighbors here
+ ```
+
+    The result is a little worse:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+    ✅ Learn about [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)
+
+## Support-Vector classifier
+
+Support-Vector classifiers are part of the [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) family of ML methods that are used for classification and regression tasks. SVMs "map training examples to points in space" to maximize the distance between two categories. Subsequent data is mapped into this space so that its category can be predicted.
+
+### Exercise - apply a Support Vector Classifier
+
+Let's try for a little better accuracy with a Support Vector Classifier.
+
+1. Add a comma after the K-Neighbors item, and then add this line:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+    The result is quite good!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+    ✅ Learn about [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm)
+
+## Ensemble Classifiers
+
+Let's follow the path to the very end, even though the previous test was quite good. Let's try some 'Ensemble Classifiers', specifically Random Forest and AdaBoost:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+The result is very good, especially for Random Forest:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ Learn about [Ensemble Classifiers](https://scikit-learn.org/stable/modules/ensemble.html)
+
+This method of Machine Learning "combines the predictions of several base estimators" to improve the model's quality. In our example, we used Random Forests and AdaBoost.
+
+- [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#forest), an averaging method, builds a 'forest' of 'decision trees' infused with randomness to avoid overfitting. The n_estimators parameter is set to the number of trees.
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) fits a classifier to a dataset and then fits copies of that classifier to the same dataset. It focuses on the weights of incorrectly classified items and adjusts the fit for the next classifier to correct them.
+
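+For instance, `n_estimators` is one knob worth probing. Here is a sketch (assuming the `cuisines_feature_df` and `cuisines_label_df` dataframes from the preparation step are in memory) that compares a few forest sizes with cross-validation:
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+import numpy as np
+
+# compare forest sizes; more trees usually helps, with diminishing returns
+for n in (10, 50, 100, 200):
+    forest = RandomForestClassifier(n_estimators=n, random_state=0)
+    scores = cross_val_score(forest, cuisines_feature_df, np.ravel(cuisines_label_df), cv=3)
+    print(f"n_estimators={n}: mean accuracy {scores.mean():.3f}")
+```
+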
+---
+
+## 🚀Challenge
+
+Each of these techniques has a large number of parameters that you can tweak. Research each one's default parameters and think about what tweaking them would mean for the model's quality.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## Review & Self Study
+
+There's a lot of jargon in these lessons, so take a minute to review [this list](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) of useful terminology!
+
+## Assignment
+
+[Parameter play](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/3-Classifiers-2/assignment.md b/translations/sw/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..637a01862
--- /dev/null
+++ b/translations/sw/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Parameter Play
+
+## Instructions
+
+There are a lot of parameters that are set by default when working with these classifiers. Intellisense in VS Code can help you dig into them. Adopt one of the ML classification techniques in this lesson and retrain models, tweaking various parameter values. Build a notebook explaining why some changes help the model quality while others degrade it. Be detailed in your answer.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ | ------------------------------- |
+|          | A notebook is presented with a classifier fully built up, its parameters tweaked, and the changes explained in text boxes | A notebook is partially presented or poorly explained | The notebook is buggy or flawed |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/sw/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..c92c8d7b9
--- /dev/null
+++ b/translations/sw/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/4-Applied/README.md b/translations/sw/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..d9a78c9ce
--- /dev/null
+++ b/translations/sw/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# Build a Cuisine Recommender Web App
+
+In this lesson, you will build a classification model using some of the techniques you have learned in previous lessons, with the delicious cuisine dataset used throughout this series. In addition, you will build a small web app to use a saved model, leveraging Onnx's web runtime.
+
+One of the most useful practical uses of machine learning is building recommendation systems, and you can take the first step in that direction today!
+
+[](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 Click the image above for a video: Jen Looper builds a web app using classified cuisine data
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+In this lesson you will learn:
+
+- How to build a model and save it as an Onnx model
+- How to use Netron to inspect the model
+- How to use your model in a web app for inference
+
+## Build your model
+
+Building applied ML systems is an important part of leveraging these technologies for your business systems. You can use models within your web applications (and thus use them in an offline context if needed) by using Onnx.
+
+In a [previous lesson](../../3-Web-App/1-Web-App/README.md), you built a Regression model about UFO sightings, "pickled" it, and used it in a Flask app. While this architecture is very useful to know, it is a full-stack Python app, and your requirements may include the use of a JavaScript application.
+
+In this lesson, you can build a basic JavaScript-based system for inference. First, however, you need to train a model and convert it for use with Onnx.
+
+## Exercise - train a classification model
+
+First, train a classification model using the cleaned cuisines dataset we used earlier.
+
+1. Start by importing useful libraries:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+    You need '[skl2onnx](https://onnx.ai/sklearn-onnx/)' to help convert your Scikit-learn model to Onnx format.
+
+1. Then, work with your data in the same way you did in previous lessons, by reading a CSV file using `read_csv()`:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. Remove the first two unnecessary columns and save the remaining data as 'X':
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. Save the labels as 'y':
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### Commence the training routine
+
+We will use the 'SVC' library, which has good accuracy.
+
+1. Import the appropriate libraries from Scikit-learn:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. Separate the training and test sets:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. Build an SVC classification model as you did in the previous lesson:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. Now, test your model by calling `predict()`:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. Print out a classification report to check the model's quality:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+    As we saw before, the accuracy is good:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### Convert your model to Onnx
+
+Make sure to do the conversion with the proper Tensor number. This dataset has 380 ingredients listed, so you need to notate that number in `FloatTensorType`:
+
+1. Convert using a tensor number of 380.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. Create the onx file and store it as **model.onnx**:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > Note, you can pass in [options](https://onnx.ai/sklearn-onnx/parameterized.html) in your conversion script. In this case, we passed 'nocl' as True and 'zipmap' as False. Since this is a classification model, you have the option to remove ZipMap, which produces a list of dictionaries (not necessary). `nocl` refers to class information being included in the model. Reduce your model's size by setting `nocl` to 'True'.
+
+Running the entire notebook will now build an Onnx model and save it to this folder.
+
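+Before wiring the file into a web page, you can sanity-check it from Python. Here is a sketch assuming the `onnxruntime` package is installed (`pip install onnxruntime`):
+
+```python
+import numpy as np
+import onnxruntime as rt
+
+session = rt.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])
+print(session.get_inputs()[0].name)  # should be 'float_input'
+
+# one row of the 380 ingredient flags, with a single ingredient switched on
+sample = np.zeros((1, 380), dtype=np.float32)
+sample[0, 4] = 1.0
+
+outputs = session.run(None, {"float_input": sample})
+print(outputs[0])  # the predicted cuisine label
+```
+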
+## View your model
+
+Onnx models are not very visible in Visual Studio code, but there's a very good free software that many researchers use to visualize the model to ensure that it is properly built. Download [Netron](https://github.com/lutzroeder/Netron) and open your model.onnx file. You can see your simple model visualized, with its 380 inputs and classifier listed:
+
+
+
+Netron is a helpful tool to view your models.
+
+Now you are ready to use this neat model in a web app. Let's build an app that will come in handy when you look in your refrigerator and try to figure out which combination of your leftover ingredients you can use to cook a given cuisine, as determined by your model.
+
+## Build a recommender web application
+
+You can use your model directly in a web app. This architecture also allows you to run it locally and even offline if needed. Start by creating an `index.html` file in the same folder where you stored your `model.onnx` file.
+
+1. In this _index.html_ file, add the following markup:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+        <header>
+            <title>Cuisine Matcher</title>
+        </header>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. Now, working within the `body` tags, add a little markup to show a list of checkboxes reflecting some ingredients:
+
+    ```html
+    <!-- Reconstructed skeleton: the original markup was lost here. Only the heading, the
+         apple checkbox (value '4' is confirmed by the note below) and the startInference
+         button (named by the script walkthrough below) are restored; add one boxCont div
+         per ingredient, using its index from ingredient_indexes.csv as the value. -->
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+        <!-- ...more ingredient checkboxes... -->
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+    Notice that each checkbox is given a value. This reflects the index where the ingredient is found according to the dataset. Apple, for example, in this alphabetic list, occupies the fifth column, so its value is '4' since we start counting at 0. You can consult the [ingredients spreadsheet](../../../../4-Classification/data/ingredient_indexes.csv) to discover a given ingredient's index.
+
+    Continuing your work in the index.html file, add a script block where the model is called after the final closing `</div>`.
+
+1. Kwanza, ingiza [Onnx Runtime](https://www.onnxruntime.ai/):
+
+    ```html
+    <!-- pin whichever onnxruntime-web version you are targeting -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+ > Onnx Runtime inatumika kuwezesha kuendesha mifano yako ya Onnx kwenye majukwaa mbalimbali ya vifaa, ikiwa ni pamoja na uboreshaji na API ya kutumia.
+
+1. Mara Runtime inapowekwa, unaweza kuipiga:
+
+    ```html
+    <script>
+        // one slot per ingredient in the dataset; 1 = checked, 0 = not
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // called when the application starts
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    // toggle the state of the ingredient
+                    // based on the checkbox's value (1 or 0)
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+
+        function testCheckboxes() {
+            // validate whether at least one checkbox is checked
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            let atLeastOneChecked = testCheckboxes();
+
+            if (atLeastOneChecked) {
+                try {
+                    // create a new session and load the model asynchronously
+                    const session = await ort.InferenceSession.create('./model.onnx');
+
+                    // build the tensor and the 'feeds' matching the
+                    // 'float_input' name used during training
+                    const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+                    const feeds = { float_input: input };
+
+                    // feed inputs to the model and wait for a response
+                    const results = await session.run(feeds);
+
+                    // read the predicted label from the results
+                    alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+                } catch (e) {
+                    console.log('failed to inference ONNX model');
+                    console.error(e);
+                }
+            } else alert("Please check an ingredient");
+        }
+
+        init();
+    </script>
+    ```
+
+In this code, several things are happening:
+
+1. You created an array of 380 possible values (1 or 0) to be set and sent to the model for inference, depending on whether an ingredient checkbox is checked.
+2. You created an array of checkboxes and a way to determine whether they were checked in an `init` function that is called when the application starts. When a checkbox is checked, the `ingredients` array is altered to reflect the chosen ingredient.
+3. You created a `testCheckboxes` function that checks whether any checkbox was checked.
+4. You use the `startInference` function when the button is pressed and, if any checkbox is checked, you start inference.
+5. The inference routine includes:
+ 1. Setting up an asynchronous load of the model
+ 2. Creating a Tensor structure to send to the model
+ 3. Creating 'feeds' that reflects the `float_input` input that you created when training your model (you can use Netron to verify that name)
+ 4. Sending these 'feeds' to the model and waiting for a response
+
+## Test your application
+
+Open a terminal session in Visual Studio Code in the folder where your index.html file resides. Ensure that you have [http-server](https://www.npmjs.com/package/http-server) installed globally, and type `http-server` at the prompt. A localhost page should open and you can view your web app. Check which cuisine is recommended based on various ingredients:
+
+
+
+Congratulations, you have created a 'recommendation' web app with a few fields. Take some time to build out this system!
+
+## 🚀Changamoto
+
+Programu yako ya wavuti ni ndogo sana, kwa hivyo endelea kuijenga ukitumia viungo na fahirisi zao kutoka kwa data ya [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv). Ni mchanganyiko gani wa ladha hufanya chakula cha kitaifa fulani?
+
+## [Jaribio la baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## Mapitio & Kujisomea
+
+Wakati somo hili limegusia tu matumizi ya kujenga mfumo wa mapendekezo kwa viungo vya chakula, eneo hili la matumizi ya ML lina mifano mingi. Soma zaidi kuhusu jinsi mifumo hii inavyojengwa:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## Kazi
+
+[Jenga kipendekezo kipya](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au upotovu. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kuwa chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/4-Applied/assignment.md b/translations/sw/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..b95923f7b
--- /dev/null
+++ b/translations/sw/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# Jenga Recommender
+
+## Maelekezo
+
+Kutokana na mazoezi yako katika somo hili, sasa unajua jinsi ya kujenga programu ya wavuti inayotumia JavaScript kwa kutumia Onnx Runtime na mfano wa Onnx uliobadilishwa. Jaribu kujenga recommender mpya ukitumia data kutoka kwenye masomo haya au kutoka kwingine (tafadhali toa shukrani). Unaweza kuunda recommender ya wanyama wa kipenzi kulingana na sifa mbalimbali za utu, au recommender ya aina za muziki kulingana na hali ya mtu. Kuwa mbunifu!
+
+## Rubric
+
+| Vigezo | Bora kabisa | Inatosha | Inahitaji Kuboresha |
+| --------- | --------------------------------------------------------------------- | -------------------------------------- | ------------------------------------ |
+| | Programu ya wavuti na daftari vinaonyeshwa, vyote vina nyaraka nzuri na vinafanya kazi | Moja kati ya hizo mbili inakosekana au ina kasoro | Vyote vinakosekana au vina kasoro |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo na usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/4-Classification/README.md b/translations/sw/4-Classification/README.md
new file mode 100644
index 000000000..af218c6b0
--- /dev/null
+++ b/translations/sw/4-Classification/README.md
@@ -0,0 +1,30 @@
+# Kuanza na uainishaji
+
+## Mada ya Kieneo: Vyakula Vitamu vya Asia na India 🍜
+
+Katika Asia na India, mila za chakula ni nyingi sana, na ni tamu sana! Hebu tuangalie data kuhusu vyakula vya kieneo ili kujaribu kuelewa viungo vyao.
+
+
+> Picha na Lisheng Chang kwenye Unsplash
+
+## Kile utakachojifunza
+
+Katika sehemu hii, utajenga juu ya masomo yako ya awali ya Regression na kujifunza kuhusu waainishaji wengine ambao unaweza kutumia kuelewa data vizuri zaidi.
+
+> Kuna zana za low-code zinazoweza kukusaidia kujifunza kuhusu kufanya kazi na mifano ya uainishaji. Jaribu [Azure ML kwa kazi hii](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Masomo
+
+1. [Utangulizi wa uainishaji](1-Introduction/README.md)
+2. [Waainishaji zaidi](2-Classifiers-1/README.md)
+3. [Waainishaji wengine tena](3-Classifiers-2/README.md)
+4. [ML iliyotumika: jenga programu ya wavuti](4-Applied/README.md)
+
+## Shukrani
+
+"Kuanza na uainishaji" kiliandikwa kwa ♥️ na [Cassie Breviu](https://www.twitter.com/cassiebreviu) na [Jen Looper](https://www.twitter.com/jenlooper)
+
+Seti ya data ya vyakula vitamu ilitoka [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines).
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za kiotomatiki za AI. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo rasmi. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kitaalamu ya binadamu. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/1-Visualize/README.md b/translations/sw/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..8daf74916
--- /dev/null
+++ b/translations/sw/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,216 @@
+# Utangulizi wa clustering
+
+Clustering ni aina ya [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) ambayo inadhani kwamba dataset haina lebo au kwamba ingizo zake hazijalinganishwa na matokeo yaliyoainishwa awali. Inatumia algoriti mbalimbali kuchambua data isiyo na lebo na kutoa makundi kulingana na mifumo inayoona kwenye data.
+
+[](https://youtu.be/ty2advRiWJM "No One Like You by PSquare")
+
+> 🎥 Bofya picha hapo juu kwa video. Wakati unajifunza machine learning na clustering, furahia baadhi ya nyimbo za Nigerian Dance Hall - hii ni wimbo uliopendwa sana kutoka mwaka 2014 na PSquare.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+
+### Utangulizi
+
+[Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) ni muhimu sana kwa uchunguzi wa data. Hebu tuone kama inaweza kusaidia kugundua mwelekeo na mifumo katika njia ambayo hadhira ya Nigeria hutumia muziki.
+
+✅ Chukua dakika moja kufikiria matumizi ya clustering. Katika maisha halisi, clustering hutokea wakati wowote unapokuwa na rundo la nguo na unahitaji kupanga nguo za wanafamilia wako 🧦👕👖🩲. Katika data science, clustering hutokea wakati wa kujaribu kuchambua mapendeleo ya mtumiaji, au kubaini sifa za dataset yoyote isiyo na lebo. Clustering, kwa njia, husaidia kufanya mambo kuwa na maana, kama droo ya soksi.
+
+[](https://youtu.be/esmzYhuFnds "Introduction to Clustering")
+
+> 🎥 Bofya picha hapo juu kwa video: John Guttag wa MIT anatoa utangulizi wa clustering
+
+Katika mazingira ya kitaalamu, clustering inaweza kutumika kubaini mambo kama kugawanya soko, kubaini ni makundi ya umri gani yanayonunua bidhaa gani, kwa mfano. Matumizi mengine yanaweza kuwa kugundua udanganyifu, labda kugundua ulaghai kutoka kwenye dataset ya miamala ya kadi za mkopo. Au unaweza kutumia clustering kubaini uvimbe katika kundi la picha za matibabu.
+
+✅ Fikiria dakika moja jinsi unavyoweza kuwa umekutana na clustering 'katika mazingira ya asili', katika mazingira ya benki, e-commerce, au biashara.
+
+> 🎓 Inafurahisha, uchambuzi wa cluster ulitokana na nyanja za Anthropolojia na Saikolojia katika miaka ya 1930. Je, unaweza kufikiria jinsi ilivyoweza kutumika?
+
+Vinginevyo, unaweza kuitumia kwa kupanga matokeo ya utafutaji - kwa viungo vya ununuzi, picha, au hakiki, kwa mfano. Clustering ni muhimu unapokuwa na dataset kubwa ambayo unataka kupunguza na ambayo unataka kufanya uchambuzi wa kina zaidi, kwa hivyo mbinu hii inaweza kutumika kujifunza kuhusu data kabla ya kujenga mifano mingine.
+
+✅ Mara data yako inapopangwa katika clusters, unapeleka kitambulisho cha cluster, na mbinu hii inaweza kuwa na manufaa wakati wa kuhifadhi faragha ya dataset; badala yake unaweza kurejelea kipengele cha data kwa kitambulisho cha cluster, badala ya data inayotambulika zaidi. Je, unaweza kufikiria sababu nyingine kwa nini ungependa kurejelea kitambulisho cha cluster badala ya vipengele vingine vya cluster ili kuitambulisha?
+
+Jifunze zaidi kuhusu mbinu za clustering katika [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)
+
+## Kuanza na clustering
+
+[Scikit-learn inatoa mbinu nyingi](https://scikit-learn.org/stable/modules/clustering.html) za kufanya clustering. Aina unayochagua itategemea kesi yako ya matumizi. Kulingana na nyaraka, kila mbinu ina faida mbalimbali. Hapa kuna jedwali rahisi la mbinu zinazoungwa mkono na Scikit-learn na kesi zao za matumizi zinazofaa:
+
+| Jina la Mbinu | Kesi ya Matumizi |
+| :--------------------------- | :------------------------------------------------------------------ |
+| K-Means | matumizi ya jumla, inductive |
+| Affinity propagation | makundi mengi, yasiyo sawa, inductive |
+| Mean-shift | makundi mengi, yasiyo sawa, inductive |
+| Spectral clustering | makundi machache, sawa, transductive |
+| Ward hierarchical clustering | makundi mengi, yenye vikwazo, transductive |
+| Agglomerative clustering | makundi mengi, yenye vikwazo, umbali usio wa Euclidean, transductive|
+| DBSCAN | jiometri isiyo tambarare, makundi yasiyo sawa, transductive |
+| OPTICS | jiometri isiyo tambarare, makundi yasiyo sawa yenye msongamano tofauti, transductive|
+| Gaussian mixtures | jiometri tambarare, inductive |
+| BIRCH | dataset kubwa yenye outliers, inductive |
+
+> 🎓 Jinsi tunavyounda clusters inahusiana sana na jinsi tunavyokusanya pointi za data katika makundi. Hebu tuchambue baadhi ya msamiati:
+>
+> 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> Transductive inference inatokana na kesi za mafunzo zilizozingatiwa ambazo zinaelekeza kwenye kesi maalum za mtihani. Inductive inference inatokana na kesi za mafunzo ambazo zinaelekeza kwenye sheria za jumla ambazo baadaye zinatumika kwenye kesi za mtihani.
+>
+> Mfano: Fikiria una dataset ambayo ina lebo kwa sehemu tu. Vitu vingine ni 'rekodi', vingine ni 'cds', na vingine havina lebo. Kazi yako ni kutoa lebo kwa data isiyo na lebo. Ikiwa unachagua mbinu ya inductive, ungefundisha mfano kwa kutafuta 'rekodi' na 'cds', na kutumia lebo hizo kwenye data yako isiyo na lebo. Mbinu hii itapata shida kuainisha vitu ambavyo kwa kweli ni 'cassettes'. Mbinu ya transductive, kwa upande mwingine, hushughulikia data hii isiyojulikana kwa ufanisi zaidi kwani inafanya kazi ya kuunganisha vitu vinavyofanana na kisha kutoa lebo kwa kundi. Katika kesi hii, clusters zinaweza kuonyesha 'vitu vya muziki vya mviringo' na 'vitu vya muziki vya mraba'.
+>
+> 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> Imetokana na istilahi za kihesabu, jiometri isiyo tambarare vs. tambarare inahusu kipimo cha umbali kati ya pointi kwa njia ya 'tambarare' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) au 'isiyo tambarare' (isiyo ya Euclidean).
+>
+>'Tambarare' katika muktadha huu inarejelea jiometri ya Euclidean (sehemu za ambayo hufundishwa kama 'jiometri ya ndege'), na isiyo tambarare inarejelea jiometri isiyo ya Euclidean. Jiometri inahusiana vipi na machine learning? Naam, kama nyanja mbili ambazo zinategemea hesabu, lazima kuwe na njia ya kawaida ya kupima umbali kati ya pointi katika clusters, na hiyo inaweza kufanywa kwa njia ya 'tambarare' au 'isiyo tambarare', kulingana na asili ya data. [Umbali wa Euclidean](https://wikipedia.org/wiki/Euclidean_distance) unapimwa kama urefu wa kipande cha mstari kati ya pointi mbili. [Umbali usio wa Euclidean](https://wikipedia.org/wiki/Non-Euclidean_geometry) unapimwa kando ya curve. Ikiwa data yako, ikionyeshwa, inaonekana kuwa haipo kwenye ndege, unaweza kuhitaji kutumia algoriti maalum kuishughulikia.
+>
+
+> Infographic na [Dasani Madipalli](https://twitter.com/dasani_decoded)
+>
+> 🎓 ['Umbali'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> Clusters zinafafanuliwa na matrix ya umbali wao, yaani umbali kati ya pointi. Umbali huu unaweza kupimwa kwa njia kadhaa. Clusters za Euclidean zinafafanuliwa na wastani wa thamani za pointi, na zina 'centroid' au kituo. Umbali hupimwa kwa umbali hadi kwenye centroid hiyo. Umbali usio wa Euclidean unarejelea 'clustroids', pointi iliyo karibu zaidi na pointi nyingine. Clustroids kwa upande wao zinaweza kufafanuliwa kwa njia mbalimbali.
+>
+> 🎓 ['Yenye vikwazo'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) inaletwa 'semi-supervised' learning katika mbinu hii isiyo ya usimamizi. Mahusiano kati ya pointi yamewekwa alama kama 'hayawezi kuunganishwa' au 'lazima yaunganishwe' hivyo sheria fulani zinalazimishwa kwenye dataset.
+>
+> Mfano: Ikiwa algoriti imeachiwa huru kwenye kundi la data isiyo na lebo au yenye lebo kidogo, clusters zinazozalisha zinaweza kuwa za ubora duni. Katika mfano hapo juu, clusters zinaweza kuunganisha 'vitu vya muziki vya mviringo' na 'vitu vya muziki vya mraba' na 'vitu vya pembetatu' na 'biskuti'. Ikiwa imepewa vikwazo fulani, au sheria za kufuata ("kipengele lazima kiwe cha plastiki", "kipengele kinahitaji kuwa na uwezo wa kutoa muziki") hii inaweza kusaidia 'kuzuia' algoriti kufanya uchaguzi bora.
+>
+> 🎓 'Density'
+>
+> Data ambayo ni 'noisy' inachukuliwa kuwa 'dense'. Umbali kati ya pointi katika kila cluster zake unaweza kuonyesha, kwa uchunguzi, kuwa zaidi au chini ya dense, au 'imejaa' na hivyo data hii inahitaji kuchambuliwa kwa mbinu sahihi ya clustering. [Makala hii](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) inaonyesha tofauti kati ya kutumia algoriti za K-Means clustering vs. HDBSCAN kuchunguza dataset yenye kelele na msongamano wa cluster usio sawa.
+
+## Algoriti za clustering
+
+Kuna zaidi ya algoriti 100 za clustering, na matumizi yake yanategemea asili ya data iliyo mkononi. Hebu tujadili baadhi ya zile kuu:
+
+- **Hierarchical clustering**. Ikiwa kitu kimeainishwa kwa ukaribu wake na kitu kilicho karibu, badala ya kile kilicho mbali zaidi, clusters zinaundwa kulingana na umbali wa wanachama wake na vitu vingine. Agglomerative clustering ya Scikit-learn ni hierarchical.
+
+ 
+ > Infographic na [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Centroid clustering**. Algoriti hii maarufu inahitaji kuchagua 'k', au idadi ya clusters za kuunda, baada ya hapo algoriti inaamua kituo cha cluster na kukusanya data karibu na kituo hicho. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) ni toleo maarufu la centroid clustering. Kituo kinaamuliwa na wastani wa karibu, hivyo jina. Umbali wa mraba kutoka kwenye cluster unapunguzwa.
+
+ 
+ > Infographic na [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+- **Distribution-based clustering**. Inategemea modeli za takwimu, clustering inayotegemea usambazaji inajikita katika kubaini uwezekano kwamba kipengele cha data kinahusiana na cluster, na kukipeleka ipasavyo. Mbinu za Gaussian mixture ni za aina hii.
+
+- **Density-based clustering**. Vipengele vya data vinapewa clusters kulingana na density yao, au mkusanyiko wao karibu na kila mmoja. Vipengele vya data vilivyo mbali na kundi vinachukuliwa kuwa outliers au kelele. DBSCAN, Mean-shift na OPTICS ni za aina hii ya clustering.
+
+- **Grid-based clustering**. Kwa datasets za vipimo vingi, gridi inaundwa na data inagawanywa kati ya seli za gridi, hivyo kuunda clusters.
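+
+As a quick illustration of the first type above (Scikit-learn's Agglomerative clustering is hierarchical, as noted), here is a minimal sketch; the six 2D points are made up purely to show the API:
+
+```python
+import numpy as np
+from sklearn.cluster import AgglomerativeClustering
+
+# six 2D points forming two loose groups
+X = np.array([[1, 2], [1, 4], [2, 3], [8, 7], [8, 8], [9, 6]])
+labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
+print(labels)  # e.g. [1 1 1 0 0 0], one cluster label per point
+```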
+
+## Zoezi - panga data yako
+
+Clustering kama mbinu inasaidiwa sana na taswira sahihi, kwa hivyo hebu tuanze kwa kutazama data yetu ya muziki. Zoezi hili litatusaidia kuamua ni mbinu gani za clustering tunazopaswa kutumia kwa ufanisi zaidi kwa asili ya data hii.
+
+1. Fungua faili [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) katika folda hii.
+
+1. Ingiza kifurushi cha `Seaborn` kwa taswira bora ya data.
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. Ongeza data ya wimbo kutoka [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv). Pakia dataframe na data fulani kuhusu nyimbo. Jiandae kuchunguza data hii kwa kuingiza maktaba na kutoa data:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ Angalia mistari ya kwanza ya data:
+
+ | | name | album | artist | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. Pata taarifa kuhusu dataframe, kwa kuita `info()`:
+
+ ```python
+ df.info()
+ ```
+
+ Matokeo yanaonekana kama hivi:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. Angalia mara mbili kama kuna thamani za null, kwa kuita `isnull()` na kuthibitisha jumla kuwa 0:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ Inaonekana vizuri:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. Eleza data:
+
+ ```python
+ df.describe()
+ ```
+
+ | | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+ | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+ | mean | 2015.390566 | 222298.1698 | 17.507547 | 0.741619 | 0.265412 | 0.760623 | 0.016305 | 0.147308 | -4.953011 | 0.130748 | 116.487864 | 3.986792 |
+ | std | 3.131688 | 39696.82226 |
+
+## [Jaribio baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## Mapitio na Kujisomea
+
+Kabla ya kutumia algorithms za clustering, kama tulivyojifunza, ni wazo zuri kuelewa asili ya dataset yako. Soma zaidi kuhusu mada hii [hapa](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
+
+[Makala hii ya msaada](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/) inakuelekeza njia tofauti ambazo algorithms mbalimbali za clustering zinavyofanya kazi, kutokana na maumbo tofauti ya data.
+
+## Kazi
+
+[Fanya utafiti juu ya visualizations nyingine za clustering](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokubaliana. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo rasmi. Kwa taarifa muhimu, inashauriwa kupata tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutokuelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/1-Visualize/assignment.md b/translations/sw/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..ea86f0a75
--- /dev/null
+++ b/translations/sw/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# Tafiti Visualizations Nyingine kwa Clustering
+
+## Maelekezo
+
+Katika somo hili, umeshughulika na mbinu kadhaa za kuonyesha data ili kupata ufahamu wa jinsi ya kuchora data yako kwa maandalizi ya kuigawanya. Scatterplots, hasa, ni muhimu kwa kupata makundi ya vitu. Tafiti njia tofauti na maktaba tofauti za kuunda scatterplots na andika kazi yako kwenye daftari. Unaweza kutumia data kutoka somo hili, masomo mengine, au data unayopata mwenyewe (tafadhali toa chanzo chake, hata hivyo, katika daftari lako). Chora baadhi ya data ukitumia scatterplots na eleza kile unachogundua.
+
+## Rubric
+
+| Vigezo | Bora Zaidi | Inayotosheleza | Inayohitaji Kuboresha |
+| -------- | -------------------------------------------------------------- | --------------------------------------------------------------------------------------- | ---------------------------------- |
+| | Daftari linaonyeshwa na scatterplots tano zilizoandikwa vizuri | Daftari linaonyeshwa na scatterplots chini ya tano na limeandikwa kidogo | Daftari halijakamilika linaonyeshwa |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upotofu. Hati asili katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, inashauriwa kupata tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/sw/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..291f11ddb
--- /dev/null
+++ b/translations/sw/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotegemea mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu wa usahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/2-K-Means/README.md b/translations/sw/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..ae27e851e
--- /dev/null
+++ b/translations/sw/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means clustering
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+Katika somo hili, utajifunza jinsi ya kuunda vikundi kwa kutumia Scikit-learn na dataset ya muziki wa Nigeria uliyoiingiza awali. Tutashughulikia misingi ya K-Means kwa ajili ya Clustering. Kumbuka kwamba, kama ulivyojifunza katika somo la awali, kuna njia nyingi za kufanya kazi na vikundi na njia unayotumia inategemea data yako. Tutajaribu K-Means kwani ni mbinu ya kawaida zaidi ya clustering. Twende kazi!
+
+Maneno utakayojifunza:
+
+- Silhouette scoring
+- Elbow method
+- Inertia
+- Variance
+
+## Utangulizi
+
+[K-Means Clustering](https://wikipedia.org/wiki/K-means_clustering) ni mbinu inayotokana na eneo la usindikaji wa ishara. Inatumika kugawanya na kugawa vikundi vya data katika 'k' clusters kwa kutumia mfululizo wa uchunguzi. Kila uchunguzi hufanya kazi ya kuweka kipengele cha data karibu zaidi na 'mean' yake, au sehemu ya kati ya cluster.
+
+Vikundi vinaweza kuonyeshwa kama [Voronoi diagrams](https://wikipedia.org/wiki/Voronoi_diagram), ambazo zinajumuisha sehemu (au 'mbegu') na eneo lake linalolingana.
+
+
+
+> infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Mchakato wa K-Means clustering [unatekelezwa katika hatua tatu](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. Algorithimu huchagua idadi ya k ya sehemu za kati kwa kuchukua sampuli kutoka kwenye dataset. Baada ya hapo, inarudia:
+ 1. Inapeleka kila sampuli kwa centroid iliyo karibu zaidi.
+ 2. Inaunda centroids mpya kwa kuchukua thamani ya wastani wa sampuli zote zilizotolewa kwa centroids za awali.
+ 3. Kisha, inahesabu tofauti kati ya centroids mpya na za zamani na kurudia hadi centroids zitakapokuwa imara.
+
+Hasara moja ya kutumia K-Means ni kwamba utahitaji kuanzisha 'k', ambayo ni idadi ya centroids. Kwa bahati nzuri, 'elbow method' husaidia kukadiria thamani nzuri ya kuanzia kwa 'k'. Utaijaribu baada ya muda mfupi.
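+
+To make the three steps above concrete, here is a minimal NumPy sketch of the loop (an illustration only; later in this lesson you will use Scikit-learn's `KMeans`, which also handles details such as empty clusters and convergence checks):
+
+```python
+import numpy as np
+
+def kmeans(X, k, iters=10, seed=0):
+    rng = np.random.default_rng(seed)
+    # step 1: pick k initial centroids by sampling from the dataset
+    centroids = X[rng.choice(len(X), size=k, replace=False)]
+    for _ in range(iters):
+        # step 2a: assign each sample to its nearest centroid
+        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
+        labels = dists.argmin(axis=1)
+        # step 2b: recompute each centroid as the mean of its samples
+        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
+    return labels, centroids
+```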
+
+## Sharti
+
+Utafanya kazi katika faili la [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) la somo hili ambalo linajumuisha uingizaji wa data na usafishaji wa awali ulioufanya katika somo lililopita.
+
+## Mazoezi - maandalizi
+
+Anza kwa kuangalia tena data ya nyimbo.
+
+1. Unda boxplot, ukipiga `boxplot()` kwa kila safu:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+    Data hii ina kelele kidogo: kwa kuangalia kila safu kama boxplot, unaweza kuona outliers.
+
+ 
+
+Unaweza kupitia dataset na kuondoa hizi outliers, lakini hiyo ingefanya data kuwa kidogo sana.
+
+1. Kwa sasa, chagua safu ambazo utatumia kwa zoezi lako la clustering. Chagua zile zenye anuwai zinazofanana na encode safu ya `artist_top_genre` kama data ya nambari:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. Sasa unahitaji kuchagua idadi ya clusters za kulenga. Unajua kuna aina 3 za nyimbo ambazo tulizitenga kutoka kwenye dataset, kwa hivyo jaribu 3:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+Unaona safu iliyochapishwa na clusters zilizotabiriwa (0, 1, au 2) kwa kila safu ya dataframe.
+
+1. Tumia safu hii kuhesabu 'silhouette score':
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Silhouette score
+
+Tafuta silhouette score iliyo karibu na 1. Alama hii inatofautiana kutoka -1 hadi 1, na ikiwa alama ni 1, cluster ni mnene na imejitenga vizuri na clusters nyingine. Thamani karibu na 0 inawakilisha clusters zinazofuatana na sampuli ziko karibu sana na mipaka ya maamuzi ya clusters jirani. [(Chanzo)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+Alama yetu ni **.53**, kwa hivyo katikati. Hii inaonyesha kwamba data yetu haifai sana kwa aina hii ya clustering, lakini tuendelee.
+
+### Mazoezi - jenga modeli
+
+1. Ingiza `KMeans` na anza mchakato wa clustering.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ Kuna sehemu chache hapa ambazo zinahitaji maelezo.
+
+ > 🎓 range: Hizi ni iterations za mchakato wa clustering
+
+ > 🎓 random_state: "Inabainisha uzalishaji wa nambari za bahati nasibu kwa uanzishaji wa centroid." [Chanzo](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "within-cluster sums of squares" hupima umbali wa wastani wa mraba wa pointi zote ndani ya cluster hadi kwenye centroid ya cluster. [Chanzo](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 Inertia: Algorithimu za K-Means hujaribu kuchagua centroids kupunguza 'inertia', "kipimo cha jinsi clusters zilivyo na mshikamano wa ndani." [Chanzo](https://scikit-learn.org/stable/modules/clustering.html). Thamani inaongezwa kwenye variable ya wcss katika kila iteration.
+
+    > 🎓 k-means++: Katika [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) unaweza kutumia 'k-means++' optimization, ambayo "inaanzisha centroids kuwa (kwa ujumla) mbali kutoka kwa kila mmoja, na kusababisha matokeo bora kuliko uanzishaji wa nasibu."
+
+### Elbow method
+
+Awali, ulidhani kwamba, kwa kuwa ulilenga aina 3 za nyimbo, unapaswa kuchagua clusters 3. Lakini je, ni hivyo?
+
+1. Tumia 'elbow method' kuhakikisha.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ Tumia variable ya `wcss` ambayo uliijenga katika hatua ya awali kuunda chati inayoonyesha wapi 'bend' katika elbow ipo, ambayo inaonyesha idadi bora ya clusters. Labda ni **3**!
+
+ 
+
+## Mazoezi - onyesha clusters
+
+1. Jaribu mchakato tena, wakati huu ukiweka clusters tatu, na uonyeshe clusters kama scatterplot:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. Angalia usahihi wa modeli:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ Usahihi wa modeli hii sio mzuri sana, na umbo la clusters linakupa dokezo kwa nini.
+
+ 
+
+ Data hii ni isiyo na usawa, haijakolea sana na kuna tofauti kubwa kati ya thamani za safu ili kuunda clusters vizuri. Kwa kweli, clusters zinazoundwa zinaweza kuwa zimeathiriwa sana au kupotoshwa na aina tatu za muziki tulizozitaja hapo juu. Huo ulikuwa mchakato wa kujifunza!
+
+ Katika nyaraka za Scikit-learn, unaweza kuona kwamba modeli kama hii, yenye clusters ambazo hazijatengwa vizuri, ina tatizo la 'variance':
+
+ 
+ > Infographic kutoka Scikit-learn
+
+## Variance
+
+Variance inafafanuliwa kama "wastani wa tofauti za mraba kutoka kwa Mean" [(Chanzo)](https://www.mathsisfun.com/data/standard-deviation.html). Katika muktadha wa tatizo hili la clustering, inahusu data ambayo nambari za dataset yetu zina mwelekeo wa kutofautiana sana kutoka kwa wastani.
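+
+You can check this definition directly with a few numbers (a quick sketch):
+
+```python
+import numpy as np
+
+values = np.array([2, 4, 4, 4, 5, 5, 7, 9])
+diffs_squared = (values - values.mean()) ** 2   # squared differences from the mean
+print(diffs_squared.mean(), np.var(values))     # both print 4.0
+```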
+
+✅ Huu ni wakati mzuri wa kufikiria njia zote ambazo unaweza kurekebisha tatizo hili. Kuboresha data zaidi? Kutumia safu tofauti? Kutumia algorithimu tofauti? Dokezo: Jaribu [kusawazisha data yako](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) ili kuifanya iwe kawaida na jaribu safu zingine.
+
+> Jaribu hii '[variance calculator](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' kuelewa dhana zaidi.
+
+---
+
+## 🚀Changamoto
+
+Tumia muda na notebook hii, ukibadilisha vigezo. Je, unaweza kuboresha usahihi wa modeli kwa kusafisha data zaidi (kuondoa outliers, kwa mfano)? Unaweza kutumia uzito kutoa uzito zaidi kwa sampuli fulani za data. Unaweza kufanya nini kingine kuunda clusters bora?
+
+Dokezo: Jaribu kusawazisha data yako. Kuna msimbo uliotolewa maoni katika notebook unaoongeza kusawazisha kwa kiwango cha kawaida ili kufanya safu za data zifanane zaidi kwa karibu kwa suala la anuwai. Utapata kuwa wakati silhouette score inashuka, 'kink' katika grafu ya elbow inakuwa laini zaidi. Hii ni kwa sababu kuacha data bila kusawazishwa kunaruhusu data yenye tofauti ndogo kubeba uzito zaidi. Soma zaidi kuhusu tatizo hili [hapa](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## Mapitio & Kujisomea
+
+Angalia K-Means Simulator [kama hii](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). Unaweza kutumia zana hii kuona pointi za sampuli za data na kubaini centroids zake. Unaweza kuhariri nasibu ya data, idadi ya clusters na idadi ya centroids. Je, hii inakusaidia kupata wazo la jinsi data inaweza kugawanywa?
+
+Pia, angalia [handout hii juu ya K-Means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) kutoka Stanford.
+
+## Kazi
+
+[Jaribu njia tofauti za clustering](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotegemea mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asilia katika lugha yake ya asili inapaswa kuchukuliwa kama chanzo chenye mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/2-K-Means/assignment.md b/translations/sw/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..ddd7c1939
--- /dev/null
+++ b/translations/sw/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# Jaribu Njia Mbalimbali za Kuunda Makundi
+
+## Maelekezo
+
+Katika somo hili ulijifunza kuhusu K-Means clustering. Wakati mwingine K-Means haifai kwa data yako. Unda daftari la mazoezi ukitumia data kutoka kwenye masomo haya au kutoka sehemu nyingine (toa chanzo chako) na onyesha njia tofauti ya kuunda makundi ISIYOTUMIA K-Means. Umejifunza nini?
+## Rubric
+
+| Kigezo | Bora Sana | Kutosha | Inahitaji Kuboresha |
+| -------- | --------------------------------------------------------------- | -------------------------------------------------------------------- | ----------------------------- |
+| | Daftari la mazoezi limewasilishwa na lina modeli ya kuunda makundi iliyoandikwa vizuri | Daftari la mazoezi limewasilishwa bila maelezo mazuri na/au halijakamilika | Kazi isiyokamilika imewasilishwa |
+
+**Onyo**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/sw/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..5e36efbae
--- /dev/null
+++ b/translations/sw/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalam inapendekezwa. Hatutawajibika kwa kutokuelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/5-Clustering/README.md b/translations/sw/5-Clustering/README.md
new file mode 100644
index 000000000..d94b24b5b
--- /dev/null
+++ b/translations/sw/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# Mifano ya Clustering kwa ajili ya ujifunzaji wa mashine
+
+Clustering ni kazi ya ujifunzaji wa mashine ambapo inatafuta vitu vinavyofanana na kuviweka katika makundi yanayoitwa clusters. Kitu kinachotofautisha clustering na mbinu nyingine za ujifunzaji wa mashine ni kwamba mambo hufanyika moja kwa moja, kwa kweli, ni sahihi kusema ni kinyume cha ujifunzaji unaosimamiwa.
+
+## Mada ya Kikanda: mifano ya clustering kwa ladha ya muziki ya hadhira ya Nigeria 🎧
+
+Hadhira ya Nigeria ina ladha mbalimbali za muziki. Kutumia data zilizokusanywa kutoka Spotify (zilizoongozwa na [makala hii](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)), hebu tuangalie baadhi ya muziki maarufu nchini Nigeria. Dataset hii inajumuisha data kuhusu alama za 'danceability' za nyimbo mbalimbali, 'acousticness', kelele, 'speechiness', umaarufu na nishati. Itakuwa ya kuvutia kugundua mifumo katika data hii!
+
+
+
+> Picha na Marcela Laskoski kwenye Unsplash
+
+Katika mfululizo huu wa masomo, utagundua njia mpya za kuchambua data kwa kutumia mbinu za clustering. Clustering ni muhimu sana wakati dataset yako haina lebo. Ikiwa ina lebo, basi mbinu za uainishaji kama zile ulizojifunza katika masomo ya awali zinaweza kuwa na manufaa zaidi. Lakini katika hali ambapo unatafuta kuunganisha data isiyo na lebo, clustering ni njia nzuri ya kugundua mifumo.
+
+> Kuna zana za kiwango cha chini cha msimbo zinazoweza kukusaidia kujifunza kuhusu kufanya kazi na mifano ya clustering. Jaribu [Azure ML kwa kazi hii](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Masomo
+
+1. [Utangulizi wa clustering](1-Visualize/README.md)
+2. [Clustering ya K-Means](2-K-Means/README.md)
+
+## Shukrani
+
+Masomo haya yaliandikwa kwa 🎶 na [Jen Looper](https://www.twitter.com/jenlooper) pamoja na mapitio ya msaada kutoka kwa [Rishit Dagli](https://twitter.com/rishit_dagli) na [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan).
+
+Dataset ya [Nyimbo za Nigeria](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) ilitolewa kutoka Kaggle kama ilivyokusanywa kutoka Spotify.
+
+Mifano ya K-Means yenye manufaa ambayo ilisaidia katika kuunda somo hili ni pamoja na huu [uchunguzi wa iris](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), hii [notebook ya utangulizi](https://www.kaggle.com/prashant111/k-means-clustering-with-python), na huu [mfano wa kidhahania wa NGO](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering).
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asili katika lugha yake ya awali inapaswa kuchukuliwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/1-Introduction-to-NLP/README.md b/translations/sw/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..e5c59ec06
--- /dev/null
+++ b/translations/sw/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# Utangulizi wa usindikaji wa lugha asilia
+
+Somo hili linashughulikia historia fupi na dhana muhimu za *usindikaji wa lugha asilia*, tawi la *isimu ya kompyuta*.
+
+## [Jaribio kabla ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## Utangulizi
+
+NLP, kama inavyojulikana, ni moja ya maeneo yanayojulikana zaidi ambapo kujifunza kwa mashine kumetumika na kutumiwa katika programu za uzalishaji.
+
+✅ Je, unaweza kufikiria programu unayotumia kila siku ambayo labda ina baadhi ya NLP iliyojumuishwa? Je, kuhusu programu zako za kuandika au programu za simu unazotumia mara kwa mara?
+
+Utaweza kujifunza kuhusu:
+
+- **Wazo la lugha**. Jinsi lugha zilivyoendelea na maeneo makubwa ya utafiti yamekuwa nini.
+- **Ufafanuzi na dhana**. Pia utajifunza ufafanuzi na dhana kuhusu jinsi kompyuta zinavyosindika maandishi, ikiwa ni pamoja na uchambuzi, sarufi, na kutambua nomino na vitenzi. Kuna baadhi ya kazi za programu katika somo hili, na dhana kadhaa muhimu zinaanzishwa ambazo utajifunza kuandika baadaye katika masomo yafuatayo.
+
+## Isimu ya kompyuta
+
+Isimu ya kompyuta ni eneo la utafiti na maendeleo kwa miongo mingi ambalo linachunguza jinsi kompyuta zinaweza kufanya kazi na hata kuelewa, kutafsiri, na kuwasiliana na lugha. Usindikaji wa lugha asilia (NLP) ni uwanja unaohusiana unaozingatia jinsi kompyuta zinaweza kusindika lugha za 'asili', au lugha za kibinadamu.
+
+### Mfano - uchapaji kwa simu
+
+Ikiwa umewahi kutamka maneno kwa simu yako ili yaandikwe badala ya kuyachapa mwenyewe, au kumuuliza msaidizi wa virtual swali, hotuba yako ilibadilishwa kuwa maandishi na kisha kusindikwa au *kuchambuliwa* kutoka kwa lugha uliyoongea. Maneno muhimu yaliyogunduliwa kisha yalisindikwa kuwa muundo ambao simu au msaidizi angeweza kuelewa na kutekeleza.
+
+
+> Ufahamu wa lugha halisi ni ngumu! Picha na [Jen Looper](https://twitter.com/jenlooper)
+
+### Teknolojia hii inafanyikaje?
+
+Hii inawezekana kwa sababu mtu aliandika programu ya kompyuta kufanya hivi. Miongo michache iliyopita, baadhi ya waandishi wa hadithi za sayansi walitabiri kuwa watu wangezungumza zaidi na kompyuta zao, na kompyuta zingeelewa kila mara walichomaanisha. Kwa bahati mbaya, iligeuka kuwa tatizo gumu zaidi kuliko wengi walivyofikiria, na ingawa ni tatizo linaloeleweka vizuri zaidi leo, kuna changamoto kubwa katika kufanikisha usindikaji wa lugha asilia 'mkamilifu' linapokuja suala la kuelewa maana ya sentensi. Hili ni tatizo gumu hasa linapokuja suala la kuelewa ucheshi au kugundua hisia kama vile kejeli katika sentensi.
+
+Kwa wakati huu, unaweza kukumbuka madarasa ya shule ambapo mwalimu alifundisha sehemu za sarufi katika sentensi. Katika baadhi ya nchi, wanafunzi hufundishwa sarufi na isimu kama somo maalum, lakini katika nyingi, mada hizi hujumuishwa kama sehemu ya kujifunza lugha: ama lugha yako ya kwanza katika shule ya msingi (kujifunza kusoma na kuandika) na labda lugha ya pili baada ya msingi, au shule ya upili. Usijali ikiwa wewe si mtaalamu wa kutofautisha nomino na vitenzi au viambishi na vivumishi!
+
+Ikiwa unapata shida na tofauti kati ya *wakati uliopo rahisi* na *wakati uliopo endelevu*, hauko peke yako. Hii ni jambo gumu kwa watu wengi, hata wasemaji wa lugha ya asili. Habari njema ni kwamba kompyuta ni nzuri sana katika kutumia sheria rasmi, na utajifunza kuandika programu ambayo inaweza *kuchambua* sentensi kama vile binadamu. Changamoto kubwa zaidi utakayochunguza baadaye ni kuelewa *maana*, na *hisia*, ya sentensi.
+
+## Mahitaji ya awali
+
+Kwa somo hili, hitaji kuu la awali ni kuwa na uwezo wa kusoma na kuelewa lugha ya somo hili. Hakuna matatizo ya hesabu au usawa wa kutatua. Wakati mwandishi asili aliandika somo hili kwa Kiingereza, pia limetafsiriwa katika lugha zingine, kwa hivyo unaweza kuwa unasoma tafsiri. Kuna mifano ambapo lugha kadhaa tofauti zinatumika (kulinganisha sheria za sarufi tofauti za lugha tofauti). Hizi *hazijatafsiriwa*, lakini maandishi ya maelezo yametafsiriwa, kwa hivyo maana inapaswa kuwa wazi.
+
+Kwa kazi za programu, utatumia Python na mifano inatumia Python 3.8.
+
+Katika sehemu hii, utahitaji, na kutumia:
+
+- **Uelewa wa Python 3**. Uelewa wa lugha ya programu katika Python 3, somo hili linatumia pembejeo, loop, kusoma faili, arrays.
+- **Visual Studio Code + kiendelezi**. Tutatumia Visual Studio Code na kiendelezi chake cha Python. Unaweza pia kutumia IDE ya Python unayopenda.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob) ni maktaba rahisi ya usindikaji wa maandishi kwa Python. Fuata maelekezo kwenye tovuti ya TextBlob ili kuisakinisha kwenye mfumo wako (sakinisha corpora pia, kama inavyoonyeshwa hapa chini):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 Kidokezo: Unaweza kuendesha Python moja kwa moja katika mazingira ya VS Code. Angalia [docs](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) kwa habari zaidi.
+
+## Kuzungumza na mashine
+
+Historia ya kujaribu kufanya kompyuta kuelewa lugha ya kibinadamu inarudi nyuma miongo kadhaa, na mmoja wa wanasayansi wa kwanza kuzingatia usindikaji wa lugha asilia alikuwa *Alan Turing*.
+
+### Jaribio la 'Turing'
+
+Wakati Turing alikuwa akifanya utafiti juu ya *akili bandia* katika miaka ya 1950, alifikiria ikiwa jaribio la mazungumzo linaweza kufanywa kwa binadamu na kompyuta (kupitia mawasiliano yaliyoandikwa) ambapo binadamu katika mazungumzo hakuwa na uhakika ikiwa alikuwa akizungumza na binadamu mwingine au kompyuta.
+
+Ikiwa, baada ya muda fulani wa mazungumzo, binadamu hangeweza kuamua kuwa majibu yalitoka kwa kompyuta au la, basi je, kompyuta ingeweza kusemwa kuwa *inawaza*?
+
+### Msukumo - 'mchezo wa kuiga'
+
+Wazo hili lilitokana na mchezo wa karamu uitwao *Mchezo wa Kuiga* ambapo mhojiwa yuko peke yake katika chumba na ana jukumu la kuamua ni nani kati ya watu wawili (katika chumba kingine) ni mwanaume na mwanamke mtawalia. Mhojiwa anaweza kutuma maelezo, na lazima ajaribu kufikiria maswali ambapo majibu yaliyoandikwa yanafichua jinsia ya mtu wa fumbo. Bila shaka, wachezaji katika chumba kingine wanajaribu kumdanganya mhojiwa kwa kujibu maswali kwa njia ya kupotosha au kuchanganya mhojiwa, huku pia wakijaribu kuonekana kama wanajibu kwa uaminifu.
+
+### Kuendeleza Eliza
+
+Katika miaka ya 1960 mwanasayansi wa MIT aitwaye *Joseph Weizenbaum* aliendeleza [*Eliza*](https://wikipedia.org/wiki/ELIZA), 'mtaalamu' wa kompyuta ambaye angeuliza maswali kwa binadamu na kuonekana kama anaelewa majibu yao. Hata hivyo, ingawa Eliza angeweza kuchambua sentensi na kutambua baadhi ya miundo ya kisarufi na maneno muhimu ili kutoa jibu linalofaa, haingeweza kusemwa kuwa *inaelewa* sentensi. Ikiwa Eliza ingepewa sentensi inayofuata muundo "**Mimi ni** huzuni" inaweza kupanga upya na kubadilisha maneno katika sentensi ili kuunda jibu "Umekuwa **wewe ni** huzuni kwa muda gani".
+
+Hii ilitoa picha kwamba Eliza aliuelewa kauli hiyo na alikuwa akiuliza swali la ufuatiliaji, wakati kwa kweli, ilikuwa ikibadilisha wakati na kuongeza baadhi ya maneno. Ikiwa Eliza haingeweza kutambua neno kuu ambalo lilikuwa na jibu lake, ingetoa jibu la nasibu ambalo lingeweza kutumika kwa kauli nyingi tofauti. Eliza angeweza kudanganywa kwa urahisi, kwa mfano ikiwa mtumiaji aliandika "**Wewe ni** baiskeli" inaweza kujibu "Umekuwa **mimi ni** baiskeli kwa muda gani?", badala ya jibu lenye busara zaidi.
+
+[](https://youtu.be/RMK9AphfLco "Kuzungumza na Eliza")
+
+> 🎥 Bofya picha hapo juu kwa video kuhusu programu asili ya ELIZA
+
+> Kumbuka: Unaweza kusoma maelezo asili ya [Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) yaliyochapishwa mwaka 1966 ikiwa una akaunti ya ACM. Vinginevyo, soma kuhusu Eliza kwenye [wikipedia](https://wikipedia.org/wiki/ELIZA)
+
+## Zoezi - kuandika bot ya mazungumzo ya msingi
+
+Bot ya mazungumzo, kama Eliza, ni programu inayochochea pembejeo za mtumiaji na kuonekana kuelewa na kujibu kwa busara. Tofauti na Eliza, bot yetu haitakuwa na sheria kadhaa zinazompa mwonekano wa kuwa na mazungumzo yenye akili. Badala yake, bot yetu itakuwa na uwezo mmoja tu, kuendelea na mazungumzo na majibu ya nasibu ambayo yanaweza kufanya kazi karibu katika mazungumzo yoyote ya kijinga.
+
+### Mpango
+
+Hatua zako unapojenga bot ya mazungumzo:
+
+1. Chapisha maagizo yanayomshauri mtumiaji jinsi ya kuingiliana na bot
+2. Anzisha kitanzi
+ 1. Kubali pembejeo za mtumiaji
+ 2. Ikiwa mtumiaji ameomba kutoka, basi toka
+ 3. Sindika pembejeo za mtumiaji na uamue jibu (katika kesi hii, jibu ni chaguo la nasibu kutoka kwenye orodha ya majibu yanayowezekana)
+ 4. Chapisha jibu
+3. rudi kwenye hatua ya 2
+
+### Kujenga bot
+
+Wacha tuunde bot sasa. Tutaanza kwa kufafanua baadhi ya misemo.
+
+1. Unda bot hii mwenyewe kwa Python na majibu ya nasibu yafuatayo:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ Hapa kuna baadhi ya matokeo ya sampuli ya kukuongoza (pembejeo za mtumiaji ziko kwenye mistari inayoanza na `>`):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ Suluhisho moja linalowezekana kwa kazi ni [hapa](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)
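+
+    If you want to attempt it before peeking at the solution, here is a minimal sketch of the loop from the plan above (it reuses the `random_responses` list; the greeting text and the `bye` exit word simply follow the sample output shown):
+
+    ```python
+    import random
+
+    print("Hello, I am Marvin, the simple robot.")
+    print("You can end this conversation at any time by typing 'bye'")
+    print("After typing each answer, press 'enter'")
+    print("How are you today?")
+
+    while True:
+        user_input = input("> ")
+        if user_input.lower() == "bye":
+            break
+        # no understanding involved: any canned reply fits any remark
+        print(random.choice(random_responses))
+
+    print("It was nice talking to you, goodbye!")
+    ```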
+
+ ✅ Simama na fikiria
+
+ 1. Je, unafikiri majibu ya nasibu yangemdanganya mtu kufikiri kwamba bot kwa kweli ilimuelewa?
+ 2. Ni vipengele gani bot ingehitaji kuwa bora zaidi?
+ 3. Ikiwa bot ingeweza kweli 'kuelewa' maana ya sentensi, je, ingehitaji 'kumbuka' maana ya sentensi za awali katika mazungumzo pia?
+
+---
+
+## 🚀Changamoto
+
+Chagua mojawapo ya vipengele vya "simama na fikiria" hapo juu na ujaribu kuvitumia kwa programu au andika suluhisho kwenye karatasi kwa kutumia pseudocode.
+
+Katika somo lijalo, utajifunza kuhusu mbinu kadhaa zingine za kuchambua lugha asilia na kujifunza kwa mashine.
+
+## [Jaribio baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## Mapitio & Kujisomea
+
+Angalia marejeo hapa chini kama fursa za kusoma zaidi.
+
+### Marejeo
+
+1. Schubert, Lenhart, "Computational Linguistics", *The Stanford Encyclopedia of Philosophy* (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. Chuo Kikuu cha Princeton "Kuhusu WordNet." [WordNet](https://wordnet.princeton.edu/). Chuo Kikuu cha Princeton. 2010.
+
+## Kazi
+
+[Tafuta bot](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya asili inapaswa kuzingatiwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, inashauriwa kupata tafsiri ya kitaalamu ya kibinadamu. Hatutawajibika kwa maelewano au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/sw/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..58ca1c1d5
--- /dev/null
+++ b/translations/sw/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# Tafuta roboti
+
+## Maelekezo
+
+Roboti ziko kila mahali. Kazi yako: tafuta moja na uichukue! Unaweza kuzipata kwenye tovuti, katika programu za benki, na kwenye simu, kwa mfano unapoita kampuni za huduma za kifedha kwa ushauri au taarifa za akaunti. Changanua roboti hiyo na uone kama unaweza kuichanganya. Kama unaweza kuichanganya roboti hiyo, unadhani ni kwa nini hilo limetokea? Andika karatasi fupi kuhusu uzoefu wako.
+
+## Rubric
+
+| Kigezo | Bora | Inayokubalika | Inayohitaji Kuboresha |
+| -------- | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | --------------------- |
+| | Karatasi kamili imeandikwa, ikielezea muundo unaodhaniwa wa roboti na kuelezea uzoefu wako nayo | Karatasi haijakamilika au haijafanyiwa utafiti vizuri | Hakuna karatasi iliyowasilishwa |
+
+**Onyo**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asilia katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa maelewano mabaya au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/2-Tasks/README.md b/translations/sw/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..1e900e207
--- /dev/null
+++ b/translations/sw/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# Kazi na Mbinu za Kawaida za Usindikaji Lugha Asilia
+
+Kwa kazi nyingi za *usindikaji lugha asilia*, maandishi yanayopaswa kushughulikiwa lazima yavunjwe, yakaguliwe, na matokeo yake yahifadhiwe au yalinganishwe na sheria na seti za data. Kazi hizi zinamruhusu mtayarishaji programu kupata _maana_, _nia_, au hata _marudio_ tu ya maneno na misemo katika maandishi.
+
+## [Maswali ya awali ya mihadhara](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+Hebu tujifunze mbinu za kawaida zinazotumika katika usindikaji wa maandishi. Pamoja na kujifunza kwa mashine, mbinu hizi zinakusaidia kuchambua kiasi kikubwa cha maandishi kwa ufanisi. Kabla ya kutumia ML kwa kazi hizi, hata hivyo, hebu tuangalie matatizo yanayokutana na mtaalamu wa NLP.
+
+## Kazi za kawaida za NLP
+
+Kuna njia tofauti za kuchambua maandishi unayofanyia kazi. Kuna kazi ambazo unaweza kufanya na kupitia kazi hizi unaweza kuelewa maandishi na kutoa hitimisho. Kawaida unafanya kazi hizi kwa mpangilio.
+
+### Utoaji wa Tokeni
+
+Labda jambo la kwanza ambalo algorithimu nyingi za NLP zinahitaji kufanya ni kugawanya maandishi kuwa tokeni, au maneno. Ingawa hili linaweza kuonekana rahisi, kuzingatia alama za uakifishaji na mipaka ya maneno na sentensi za lugha tofauti kunaweza kufanya iwe ngumu. Unaweza kulazimika kutumia mbinu mbalimbali kubaini mipaka.
+
+
+> Kugawanya sentensi kutoka **Pride and Prejudice**. Picha na [Jen Looper](https://twitter.com/jenlooper)
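+
+As a minimal sketch using the `TextBlob` library (introduced later in this lesson), tokenization is a one-liner; the example sentence is the famous opening line used throughout these lessons:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.")
+# .words drops punctuation; use .tokens to keep punctuation as separate tokens
+print(blob.words)
+```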
+
+### Embeddings
+
+[Word embeddings](https://wikipedia.org/wiki/Word_embedding) are a way to convert your text data numerically. Embeddings are done in a way so that words with a similar meaning, or words used together, cluster together.
+
+
+> "I have a high respect for your nerves, they are my old friends." - Word embeddings for a sentence in **Pride and Prejudice**. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+✅ Try [this interesting tool](https://projector.tensorflow.org/) to experiment with word embeddings. Clicking on one word shows clusters of similar words: 'toy' clusters with 'disney', 'lego', 'playstation', and 'console'.
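+
+To make the idea concrete, here is a hedged sketch of training tiny embeddings with the gensim library (gensim is not used elsewhere in this lesson, and a two-sentence corpus is far too small to yield meaningful vectors - real embeddings are trained on millions of sentences):
+
+```python
+from gensim.models import Word2Vec
+
+# A toy corpus of pre-tokenized sentences
+corpus = [
+    ["the", "quick", "red", "fox", "jumped", "over", "the", "lazy", "brown", "dog"],
+    ["the", "lazy", "brown", "dog", "slept", "near", "the", "quick", "red", "fox"],
+]
+model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=42)
+print(model.wv["fox"])                       # the 50-dimensional vector for 'fox'
+print(model.wv.most_similar("fox", topn=3))  # nearest neighbors in embedding space
+```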
+
+### Parsing & Part-of-speech Tagging
+
+Every word that has been tokenized can be tagged as a part of speech - a noun, verb, or adjective. The sentence `the quick red fox jumped over the lazy brown dog` might be POS tagged as fox = noun, jumped = verb.
+
+
+
+> Parsing a sentence from **Pride and Prejudice**. Infographic by [Jen Looper](https://twitter.com/jenlooper)
+
+Parsing is recognizing which words are related to each other in a sentence - for instance `the quick red fox jumped` is an adjective-noun-verb sequence that is separate from the `lazy brown dog` sequence.
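+
+A minimal sketch of POS tagging with `TextBlob` (the exact tags can vary with the tagger model TextBlob has loaded):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+# a list of (word, tag) pairs, e.g. ('fox', 'NN') for a noun and ('jumped', 'VBD') for a past-tense verb
+print(blob.tags)
+```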
+
+### Word and Phrase Frequencies
+
+A useful procedure when analyzing a large body of text is to build a dictionary of every word or phrase of interest and how often it appears. The phrase `the quick red fox jumped over the lazy brown dog` has a word frequency of 2 for the word the.
+
+Let's look at an example text where we count the frequency of words. Rudyard Kipling's poem "The Winners" contains the following verse:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+As phrase frequencies can be case insensitive or case sensitive as required, the phrase `a friend` has a frequency of 2, `the` has a frequency of 6, and `travels` has a frequency of 2.
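+
+A minimal sketch of counting those frequencies in plain Python (the punctuation stripping here is deliberately crude; a real tokenizer would do better):
+
+```python
+from collections import Counter
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+# case-insensitive word list with trailing punctuation removed
+words = [w.strip(".,?") for w in verse.lower().split()]
+word_counts = Counter(words)
+bigram_counts = Counter(zip(words, words[1:]))
+
+print(word_counts["the"])              # 6
+print(word_counts["travels"])          # 2
+print(bigram_counts[("a", "friend")])  # 2
+```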
+
+### N-grams
+
+A text can be split into sequences of words of a set length: a single word (unigram), two words (bigrams), three words (trigrams) or any number of words (n-grams).
+
+For instance `the quick red fox jumped over the lazy brown dog` with an n-gram value of 2 produces the following n-grams:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+It might be easier to visualize it as a sliding window over the sentence. Here it is for n-grams of 3 words, with the n-gram in bold in each sentence:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> N-gram value of 3: Infographic by [Jen Looper](https://twitter.com/jenlooper)
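+
+`TextBlob` can generate these directly; a minimal sketch:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+# 10 words produce 8 trigrams
+for gram in blob.ngrams(n=3):
+    print(" ".join(gram))
+```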
+
+### Noun phrase Extraction
+
+In most sentences, there is a noun that is the subject or object of the sentence. In English, it is often identifiable as having 'a' or 'an' or 'the' preceding it. Identifying the subject or object of a sentence by 'extracting the noun phrase' is a common task in NLP when attempting to understand the meaning of a sentence.
+
+✅ In the sentence "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun.", can you identify the noun phrases?
+
+In the sentence `the quick red fox jumped over the lazy brown dog` there are 2 noun phrases: **quick red fox** and **lazy brown dog**.
+
+### Sentiment analysis
+
+A sentence or text can be analysed for sentiment, or how *positive* or *negative* it is. Sentiment is measured with *polarity* and *objectivity/subjectivity*. Polarity is measured from -1.0 to 1.0 (negative to positive) and 0.0 to 1.0 (most objective to most subjective).
+
+✅ Later you'll learn that there are different ways to determine sentiment using machine learning, but one way is to have a list of words and phrases that are categorized as positive or negative by a human expert and to apply that model to text to calculate a polarity score. Can you see how this would work well in some circumstances and less well in others?
+
+### Inflection
+
+Inflection enables you to take a word and get its singular or plural form.
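+
+A minimal sketch with TextBlob's `Word`:
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())      # foxes
+print(Word("boxes").singularize())  # box
+```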
+
+### Lemmatization
+
+A *lemma* is the root or headword for a set of words, for instance *flew*, *flies*, *flying* have a lemma of the verb *fly*.
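+
+A minimal sketch, again with TextBlob's `Word` (the "v" argument tells the underlying WordNet lemmatizer to treat each word as a verb):
+
+```python
+from textblob import Word
+
+for w in ["flew", "flies", "flying"]:
+    print(Word(w).lemmatize("v"))  # fly, fly, fly
+```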
+
+There are also useful databases available to the NLP researcher, notably:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/) is a database of words, synonyms, antonyms and many other details for every word in many different languages. It is incredibly useful when attempting to build translations, spell checkers, or language tools of any type.
+
+## NLP Libraries
+
+Luckily, you don't have to build all of these techniques yourself, as there are excellent Python libraries available that make NLP much more accessible to developers who aren't specialized in natural language processing or machine learning. The next lessons include more examples of these, but here you will learn some useful examples to help you with the next task.
+
+### Exercise - using the `TextBlob` library
+
+Let's use a library called TextBlob as it contains helpful APIs for tackling these types of tasks. TextBlob "stands on the giant shoulders of [NLTK](https://nltk.org) and [pattern](https://github.com/clips/pattern), and plays nicely with both." It has a considerable amount of ML embedded in its API.
+
+> Note: A useful [Quick Start](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) guide is available for TextBlob that is recommended for experienced Python developers
+
+When attempting to identify *noun phrases*, TextBlob offers several extractor options.
+
+1. Take a look at `ConllExtractor`.
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+   > What's going on here? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) is "A noun phrase extractor that uses chunk parsing trained with the ConLL-2000 training corpus." ConLL-2000 refers to the 2000 Conference on Computational Natural Language Learning. Each year the conference hosted a workshop to tackle a thorny NLP problem, and in 2000 it was noun chunking. A model was trained on the Wall Street Journal, with "sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens)". You can look at the procedures used [here](https://www.clips.uantwerpen.be/conll2000/chunking/) and the [results](https://ifarm.nl/erikt/research/np-chunking.html).
+
+### Challenge - improving your bot with NLP
+
+In the previous lesson you built a very simple Q&A bot. Now, you'll make Marvin a bit more sympathetic by analyzing your input for sentiment and printing out a response to match the sentiment. You'll also need to identify a `noun_phrase` and ask about it.
+
+Your steps when building a better conversational bot (a complete sketch follows the sample output below):
+
+1. Print instructions advising the user how to interact with the bot
+2. Start a loop
+   1. Accept user input
+   2. If the user has asked to exit, then exit
+   3. Process the user input and determine an appropriate sentiment response
+   4. If a noun phrase is detected in the input, pluralize it and ask for more input on that topic
+   5. Print the response
+3. Loop back to step 2
+
+Here is the code snippet to determine sentiment using TextBlob. Note there are only four *gradients* of sentiment response (you could have more if you like):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
+Here is some sample output to guide you (user input is on the lines starting with >):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
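+
+Putting the steps together, here is one possible minimal sketch of the whole loop. The greeting text mirrors the sample output above, and the `pluralize()` call on the first noun phrase is one illustrative choice among many; a complete solution is linked below:
+
+```python
+from textblob import TextBlob
+from textblob.np_extractors import ConllExtractor
+
+extractor = ConllExtractor()
+
+print("Hello, I am Marvin, the friendly robot.")
+print("You can end this conversation at any time by typing 'bye'")
+print("After typing each answer, press 'enter'")
+print("How are you today?")
+
+while True:
+    user_input = input("> ")
+    if user_input.lower() == "bye":
+        break
+    blob = TextBlob(user_input, np_extractor=extractor)
+    # map the polarity to one of four canned responses
+    if blob.polarity <= -0.5:
+        response = "Oh dear, that sounds bad. "
+    elif blob.polarity <= 0:
+        response = "Hmm, that's not great. "
+    elif blob.polarity <= 0.5:
+        response = "Well, that sounds positive. "
+    else:
+        response = "Wow, that sounds great. "
+    # if a noun phrase was found, pluralize it and ask about it
+    if blob.noun_phrases:
+        response += "Can you tell me more about " + blob.noun_phrases[0].pluralize() + "?"
+    else:
+        response += "Can you tell me more?"
+    print(response)
+
+print("It was nice talking to you, goodbye!")
+```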
+
+One possible solution to the task is [here](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)
+
+✅ Knowledge Check
+
+1. Do you think the sympathetic responses would 'trick' someone into thinking that the bot actually understood them?
+2. Does identifying the noun phrase make the bot more 'believable'?
+3. Why would extracting a 'noun phrase' from a sentence be a useful thing to do?
+
+---
+
+Implement the bot in the prior knowledge check and test it on a friend. Can it trick them? Can you make your bot more 'believable'?
+
+## 🚀Challenge
+
+Take a task in the prior knowledge check and try to implement it. Test the bot on a friend. Can it trick them? Can you make your bot more 'believable'?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## Review & Self Study
+
+In the next few lessons you will learn more about sentiment analysis. Research this interesting technique in articles such as these on [KDNuggets](https://www.kdnuggets.com/tag/nlp)
+
+## Assignment
+
+[Make a bot talk back](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/2-Tasks/assignment.md b/translations/sw/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..3316118d7
--- /dev/null
+++ b/translations/sw/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# Make a Bot talk back
+
+## Instructions
+
+In the past few lessons, you built a basic bot with whom to chat. This bot gives random answers until you say 'bye'. Can you make the answers a little less random, and trigger answers if you say specific things, like 'why' or 'how'? Think a bit about how machine learning might make this type of work less manual as you extend your bot. You can use the NLTK or TextBlob libraries to make your tasks easier.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| ------- | -------------------------------------------- | ------------------------------------------------ | ----------------------- |
+|         | A new bot.py file is presented and documented | A new bot file is presented but it contains bugs | A file is not presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/3-Translation-Sentiment/README.md b/translations/sw/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..039abaf8f
--- /dev/null
+++ b/translations/sw/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# Translation and sentiment analysis with ML
+
+In the previous lessons you learned how to build a basic bot using `TextBlob`, a library that embeds ML behind-the-scenes to perform basic NLP tasks such as noun phrase extraction. Another important challenge in computational linguistics is accurate translation of a sentence from one spoken or written language to another.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+Translation is a very hard problem, compounded by the fact that there are thousands of languages and each can have very different grammar rules. One approach is to convert the formal grammar rules for one language, like English, into a non-language dependent structure, and then translate it by converting back to another language. This approach means that you would take the following steps:
+
+1. **Identification**. Identify or tag the words in the input language as nouns, verbs etc.
+2. **Create translation**. Produce a direct translation of each word in the target language format.
+
+### Example sentence, English to Irish
+
+In 'English', the sentence _I feel happy_ is three words in the order:
+
+- **subject** (I)
+- **verb** (feel)
+- **adjective** (happy)
+
+However, in the 'Irish' language, the same sentence has a very different grammatical structure - emotions like "*happy*" or "*sad*" are expressed as being *upon* you.
+
+The English phrase `I feel happy` in Irish would be `Tá athas orm`. A *literal* translation would be `Happy is upon me`.
+
+An Irish speaker translating to English would say `I feel happy`, not `Happy is upon me`, because they understand the meaning of the sentence, even if the words and sentence structure are different.
+
+The formal order for the sentence in Irish is:
+
+- **verb** (Tá or is)
+- **adjective** (athas, or happy)
+- **subject** (orm, or upon me)
+
+## Translation
+
+A naive translation program might translate words only, ignoring the sentence structure.
+
+✅ If you've learned a second (or third or more) language as an adult, you might have started by thinking in your native language, translating a concept word by word in your head to the second language, and then speaking out your translation. This is similar to what naive translation computer programs are doing. It's important to get past this phase to attain fluency!
+
+Naive translation leads to bad (and sometimes hilarious) mistranslations: `I feel happy` translates literally to `Mise bhraitheann athas` in Irish. That means (literally) `me feel happy` and is not a valid Irish sentence. Even though English and Irish are languages spoken on two closely neighboring islands, they are very different languages with different grammar structures.
+
+> You can watch some videos about Irish linguistic traditions such as [this one](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### Machine learning approaches
+
+So far, you've learned about the formal rules approach to natural language processing. Another approach is to ignore the meaning of the words, and _instead use machine learning to detect patterns_. This can work in translation if you have lots of text (a *corpus*) or texts (*corpora*) in both the origin and target languages.
+
+For instance, consider the case of *Pride and Prejudice*, a well-known English novel written by Jane Austen in 1813. If you consult the book in English and a human translation of the book in *French*, you could detect phrases in one that are _idiomatically_ translated into the other. You'll do that in a minute.
+
+For instance, when an English phrase such as `I have no money` is translated literally to French, it might become `Je n'ai pas de monnaie`. "Monnaie" is a tricky French false cognate, as 'money' and 'monnaie' are not synonymous. A better translation that a human might make would be `Je n'ai pas d'argent`, because it better conveys the meaning that you have no money (rather than 'loose change', which is the meaning of 'monnaie').
+
+
+
+> Image by [Jen Looper](https://twitter.com/jenlooper)
+
+If an ML model has enough human translations to build a model on, it can improve the accuracy of translations by identifying common patterns in texts that have been previously translated by expert human speakers of both languages.
+
+### Exercise - translation
+
+You can use `TextBlob` to translate sentences. Try the famous first line of **Pride and Prejudice**:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` does a pretty good job at the translation: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+It can be argued that TextBlob's translation is far more exact, in fact, than the 1932 French translation of the book by V. Leconte and Ch. Pressoir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet egard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+In this case, the translation informed by ML does a better job than the human translator who is unnecessarily putting words in the original author's mouth for 'clarity'.
+
+> What's going on here? And why is TextBlob so good at translation? Well, behind the scenes, it's using Google Translate, a sophisticated AI able to parse millions of phrases to predict the best strings for the task at hand. There's nothing manual going on here and you need an internet connection to use `blob.translate`.
+
+✅ Try some more sentences. Which is better, ML or human translation? In which cases?
+
+## Sentiment analysis
+
+Another area where machine learning can work very well is sentiment analysis. A non-ML approach to sentiment is to identify words and phrases which are 'positive' and 'negative'. Then, given a new piece of text, calculate the total value of the positive, negative and neutral words to identify the overall sentiment.
+
+This approach is easily tricked as you may have seen in the Marvin task - the sentence `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` is a sarcastic, negative sentiment sentence, but the simple algorithm detects 'great', 'wonderful', 'glad' as positive and 'waste', 'lost' and 'dark' as negative. The overall sentiment is swayed by these conflicting words.
+
+✅ Stop a second and think about how we convey sarcasm as human speakers. Tone inflection plays a large role. Try to say the phrase "Well, that film was awesome" in different ways to discover how your voice conveys meaning.
+
+### ML approaches
+
+The ML approach would be to manually gather negative and positive bodies of text - tweets, or movie reviews, or anything where the human has given a score *and* a written opinion. Then NLP techniques can be applied to the opinions and scores, so that patterns emerge (e.g., positive movie reviews tend to have the phrase 'Oscar worthy' more than negative movie reviews, or positive restaurant reviews say 'gourmet' much more than 'disgusting').
+
+> ⚖️ **Example**: If you worked in a politician's office and there was some new law being debated, constituents might write to the office with emails supporting or opposing the particular new law. Let's say you are tasked with reading the emails and sorting them into 2 piles, *for* and *against*. If there were a lot of emails, you might be overwhelmed attempting to read them all. Wouldn't it be nice if a bot could read them all for you, understand them and tell you in which pile each email belonged?
+>
+> One way to achieve that is to use Machine Learning. You would train the model with a portion of the *against* emails and a portion of the *for* emails. The model would tend to associate phrases and words with the against side and the for side, *but it would not understand any of the content*, only that certain words and patterns were more likely to appear in an *against* or a *for* email. You could test it with some emails that you had not used to train the model, and see if it came to the same conclusion as you did. Then, once you were happy with the accuracy of the model, you could process future emails without having to read each one.
+
+✅ Does this process sound like processes you have used in previous lessons?
+
+## Exercise - sentimental sentences
+
+Sentiment is measured with a *polarity* of -1 to 1, meaning -1 is the most negative sentiment, and 1 is the most positive. Sentiment is also measured with a 0 - 1 score for objectivity (0) and subjectivity (1).
+
+Take another look at Jane Austen's *Pride and Prejudice*. The text is available here at [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm). The sample below shows a short program which analyses the sentiment of the first and last sentences from the book and displays its sentiment polarity and subjectivity/objectivity score.
+
+You should use the `TextBlob` library (described above) to determine `sentiment` (you don't have to write your own sentiment calculator) in the following task.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+You see the following output:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Challenge - check sentiment polarity
+
+Your task is to determine, using sentiment polarity, if *Pride and Prejudice* has more absolutely positive sentences than absolutely negative ones. For this task, you may assume that a polarity score of 1 or -1 is absolutely positive or negative respectively.
+
+**Steps** (a minimal sketch follows the list):
+
+1. Download a [copy of Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) from Project Gutenberg as a .txt file. Remove the metadata at the start and end of the file, leaving only the original text
+2. Open the file in Python and extract the contents as a string
+3. Create a TextBlob using the book string
+4. Analyse each sentence in the book in a loop
+   1. If the polarity is 1 or -1, store the sentence in an array or list of positive or negative messages
+5. At the end, print out all the positive sentences and negative sentences (separately) and the number of each.
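+
+A minimal sketch of those steps (the filename `pride_and_prejudice.txt` is an assumption; use whatever you saved the cleaned text as):
+
+```python
+from textblob import TextBlob
+
+# step 2: read the cleaned book text into a string
+with open("pride_and_prejudice.txt", encoding="utf-8") as f:
+    book = f.read()
+
+# step 3: create a TextBlob from the whole book
+blob = TextBlob(book)
+
+positive, negative = [], []
+# step 4: check the polarity of every sentence
+for sentence in blob.sentences:
+    if sentence.polarity == 1:
+        positive.append(str(sentence))
+    elif sentence.polarity == -1:
+        negative.append(str(sentence))
+
+# step 5: print the sentences (omitted here) and the count of each
+print("Absolutely positive sentences:", len(positive))
+print("Absolutely negative sentences:", len(negative))
+```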
+
+Here is one possible [solution](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
+
+✅ Knowledge Check
+
+1. The sentiment is based on words used in the sentence, but does the code *understand* the words?
+2. Do you think the sentiment polarity is accurate, or in other words, do you *agree* with the scores?
+   1. In particular, do you agree or disagree with the absolute **positive** polarity of the following sentences?
+      * “What an excellent father you have, girls!” said she, when the door was shut.
+      * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+      * How wonderfully these sort of things occur!
+      * I have the greatest dislike in the world to that sort of thing.
+      * Charlotte is an excellent manager, I dare say.
+      * “This is delightful indeed!
+      * I am so happy!
+      * Your idea of the ponies is delightful.
+   2. The next 3 sentences were scored with an absolute positive sentiment, but on close reading, they are not positive sentences. Why did the sentiment analysis think they were positive sentences?
+      * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+      * If I could but see you as happy!
+      * Our distress, my dear Lizzy, is very great.
+   3. Do you agree or disagree with the absolute **negative** polarity of the following sentences?
+      - Everybody is disgusted with his pride.
+      - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+      - The pause was to Elizabeth’s feelings dreadful.
+      - It would be dreadful!
+
+✅ Any aficionado of Jane Austen will understand that she often uses her books to critique the more ridiculous aspects of English Regency society. Elizabeth Bennett, the main character in *Pride and Prejudice*, is a keen social observer (like the author) and her language is often heavily nuanced. Even Mr. Darcy (the love interest in the story) notes Elizabeth's playful and teasing use of language: "I have had the pleasure of your acquaintance long enough to know that you find great enjoyment in occasionally professing opinions which in fact are not your own."
+
+---
+
+## 🚀Challenge
+
+Can you make Marvin even better by extracting other features from the user input?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Review & Self Study
+
+There are many ways to extract sentiment from text. Think of the business applications that might make use of this technique. Think about how it can go awry. Read more about sophisticated enterprise-ready systems that analyze sentiment such as [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Test some of the Pride and Prejudice sentences above and see if it can detect nuance.
+
+## Assignment
+
+[Poetic license](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/3-Translation-Sentiment/assignment.md b/translations/sw/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..3e07f1aa4
--- /dev/null
+++ b/translations/sw/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# Poetic license
+
+## Instructions
+
+In [this notebook](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency) you can find over 500 Emily Dickinson poems previously analyzed for sentiment using Azure text analytics. Using this dataset, analyze it using the techniques described in the lesson. Does the suggested sentiment of a poem match the more sophisticated Azure service's decision? Why or why not, in your opinion? Does anything surprise you?
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | ---------------------------------------------------------------------- | -------------------------------------------------------- | ------------------------ |
+|          | A notebook is presented with a solid analysis of a sample of the author's output | The notebook is incomplete or does not perform an analysis | No notebook is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/sw/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..0d4c24d96
--- /dev/null
+++ b/translations/sw/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/sw/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..1c480ac2a
--- /dev/null
+++ b/translations/sw/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/4-Hotel-Reviews-1/README.md b/translations/sw/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..b0edd227b
--- /dev/null
+++ b/translations/sw/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,264 @@
+# Sentiment analysis with hotel reviews - processing the data
+
+In this section you will use the techniques from the previous lessons to do some exploratory data analysis of a large dataset. Once you have a good understanding of the usefulness of the various columns, you will learn:
+
+- how to remove the unnecessary columns
+- how to calculate some new data based on the existing columns
+- how to save the resulting dataset for use in the final challenge
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Introduction
+
+So far you've learned how text data is quite unlike numerical types of data. If it's text that was written or spoken by a human, it can be analysed to find patterns and frequencies, sentiment and meaning. This lesson takes you into a real dataset with a real challenge: **[515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**, which carries a [CC0: Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/). It was scraped from Booking.com from public sources. The creator of the dataset is Jiashen Liu.
+
+### Preparation
+
+You will need:
+
+* The ability to run .ipynb notebooks using Python 3
+* pandas
+* NLTK, [which you should install locally](https://www.nltk.org/install.html)
+* The dataset, available on Kaggle: [515K Hotel Reviews Data in Europe](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe). It is around 230 MB unzipped. Download it to the root `/data` folder associated with these NLP lessons (a loading sketch follows this list).
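+
+As a quick sketch (assuming the unzipped file is saved as `Hotel_Reviews.csv` in that `/data` folder; the relative path below depends on where your notebook lives), loading the dataset with pandas looks like this. The resulting `df` dataframe is what the code snippets later in this lesson operate on:
+
+```python
+import time
+import pandas as pd
+
+# time the load - the CSV has over half a million rows
+start = time.time()
+df = pd.read_csv("../../data/Hotel_Reviews.csv")
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```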
+
+## Exploratory data analysis
+
+This challenge assumes that you are building a hotel recommendation bot using sentiment analysis and guest review scores. The dataset you will be using includes reviews of 1493 different hotels in 6 cities.
+
+Using Python, a dataset of hotel reviews, and NLTK's sentiment analysis you could find out:
+
+* What are the most frequently used words and phrases in reviews?
+* Do the official *tags* describing a hotel correlate with review scores (e.g. are there more negative reviews for a particular hotel from *Family with young children* than from *Solo traveler*, perhaps indicating it is better for *Solo travelers*?)
+* Do the NLTK sentiment scores 'agree' with the hotel reviewer's numerical score?
+
+#### Dataset
+
+Let's explore the dataset you've downloaded and saved locally. Open the file in an editor like VS Code or even Excel.
+
+The headers in the dataset are as follows:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+Here they are grouped in a way that might be easier to examine:
+##### Hotel columns
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (latitude), `lng` (longitude)
+  * Using *lat* and *lng* you could plot a map with Python showing the hotel locations (perhaps color coded for negative and positive reviews)
+  * Hotel_Address is not obviously useful to us, and we'll probably replace it with a country for easier sorting & searching
+
+**Hotel Meta-review columns**
+
+* `Average_Score`
+  * According to the dataset creator, this column is the *Average Score of the hotel, calculated based on the latest comment in the last year*. This seems like an unusual way to calculate the score, but it is the data scraped, so we may take it at face value for now.
+
+  ✅ Based on the other columns in this data, can you think of another way to calculate the average score?
+
+* `Total_Number_of_Reviews`
+  * The total number of reviews this hotel has received - it is not clear (without writing some code) if this refers to the reviews in the dataset.
+* `Additional_Number_of_Scoring`
+  * This means a review score was given but no positive or negative review was written by the reviewer
+
+**Review columns**
+
+- `Reviewer_Score`
+  - This is a numerical value with at most 1 decimal place between the min and max values 2.5 and 10
+  - It is not explained why 2.5 is the lowest possible score
+- `Negative_Review`
+  - If a reviewer wrote nothing, this field will have "**No Negative**"
+  - Note that a reviewer may write a positive review in the Negative review column (e.g. "there is nothing bad about this hotel")
+- `Review_Total_Negative_Word_Counts`
+  - Higher negative word counts indicate a lower score (without checking the sentimentality)
+- `Positive_Review`
+  - If a reviewer wrote nothing, this field will have "**No Positive**"
+  - Note that a reviewer may write a negative review in the Positive review column (e.g. "there is nothing good about this hotel at all")
+- `Review_Total_Positive_Word_Counts`
+  - Higher positive word counts indicate a higher score (without checking the sentimentality)
+- `Review_Date` and `days_since_review`
+  - A freshness or staleness measure might be applied to a review (older reviews might not be as accurate as newer ones because hotel management changed, renovations have been done, or a pool was added etc.)
+- `Tags`
+  - These are short descriptors that a reviewer may select to describe the type of guest they were (e.g. solo or family), the type of room they had, the length of stay and how the review was submitted.
+  - Unfortunately, using these tags is problematic; check the section below which discusses their usefulness
+
+**Reviewer columns**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+  - This might be a factor in a recommendation model, for instance, if you could determine that more prolific reviewers with hundreds of reviews were more likely to be negative rather than positive. However, the reviewer of any particular review is not identified with a unique code, and therefore cannot be linked to a set of reviews. There are 30 reviewers with 100 or more reviews, but it's hard to see how this can aid a recommendation model.
+- `Reviewer_Nationality`
+  - Some people might think that certain nationalities are more likely to give a positive or negative review because of a national inclination. Be careful building such anecdotal views into your models. These are national (and sometimes racial) stereotypes, and each reviewer was an individual who wrote a review based on their experience. It may have been filtered through many lenses such as their previous hotel stays, the distance travelled, and their personal temperament. Thinking that their nationality was the reason for a review score is hard to justify.
+
+##### Examples
+
+| Average Score | Total Number Reviews | Reviewer Score | Negative Review | Positive Review | Tags |
+| -------------- | ---------------------- | ---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | This is currently not a hotel but a construction site I was terrorized from early morning and all day with unacceptable building noise while resting after a long trip and working in the room People were working all day i e with jackhammers in the adjacent rooms I asked for a room change but no silent room was available To make things worse I was overcharged I checked out in the evening since I had to leave a very early flight and received an appropriate bill A day later the hotel made another charge without my consent in excess of the booked price It's a terrible place Don't punish yourself by booking here | Nothing Terrible place Stay away | Business trip Couple Standard Double Room Stayed 2 nights |
+
+As you can see, this guest did not have a happy stay at this hotel. The hotel has a good average score of 7.8 and 1945 reviews, but this reviewer gave it 2.5 and wrote 115 words about how negative their stay was. If they wrote nothing at all in the Positive_Review column, you might surmise there was nothing positive, but alas, they wrote 7 words of warning. If we just counted words instead of the meaning, or sentiment, of the words, we might have a skewed view of the reviewer's intent. Strangely, their score of 2.5 is confusing, because if the hotel stay was that bad, why give it any points at all? Investigating the dataset closely, you'll see that the lowest possible score is 2.5, not 0. The highest possible score is 10.
+
+##### Tags
+
+As mentioned above, at first glance, the idea of using `Tags` to categorize the data makes sense. Unfortunately these tags are not standardized, which means that in a given hotel, the options might be *Single room*, *Twin room*, and *Double room*, but in the next hotel, they are *Deluxe Single Room*, *Classic Queen Room*, and *Executive King Room*. These might be the same things, but there are so many variations that the choice becomes:
+
+1. Attempt to change all terms to a single standard, which is very difficult, because it is not clear what the conversion path would be in each case (e.g. *Classic single room* maps to *Single room* but *Superior Queen Room with Courtyard Garden or City View* is much harder to map)
+
+1. We can take an NLP approach and measure the frequency of certain terms like *Solo*, *Business Traveller*, or *Family with young kids* as they apply to each hotel, and factor that into the recommendation
+
+Tags are usually (but not always) a single field containing a list of 5 to 6 comma separated values aligning to *Type of trip*, *Type of guests*, *Type of room*, *Number of nights*, and *Type of device the review was submitted on*. However, because some reviewers don't fill in each field (they might leave one blank), the values are not always in the same order.
+
+As an example, take *Type of group*. There are 1025 unique possibilities in this field in the `Tags` column, and unfortunately only some of them refer to a group (some are the type of room etc.). If you filter only the ones that mention family, the results contain many *Family room* type results. If you include the term *with*, i.e. count the *Family with* values, the results are better, with over 80,000 of the 515,000 results containing the phrase "Family with young children" or "Family with older children".
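+
+A hedged sketch of that kind of substring counting with pandas (assuming the `df` dataframe loaded earlier; `na=False` treats missing tags as non-matches, and the exact counts you get depend on the matching rules you choose):
+
+```python
+# count reviews whose Tags field mentions a family, with and without 'with'
+family_any = df["Tags"].str.contains("Family", na=False).sum()
+family_with = df["Tags"].str.contains("Family with", na=False).sum()
+print("Tags mentioning 'Family':      " + str(family_any))
+print("Tags mentioning 'Family with': " + str(family_with))
+```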
+
+This means the tags column is not completely useless to us, but it will take some work to make it useful.
+
+##### Average hotel score
+
+There are a number of oddities or discrepancies with the dataset that I can't figure out, but they are illustrated here so you are aware of them when building your models. If you figure it out, please let us know in the discussion section!
+
+The dataset has the following columns relating to the average score and number of reviews:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+The hotel with the most reviews in this dataset is *Britannia International Hotel Canary Wharf* with 4789 reviews out of 515,000. But if we look at the `Total_Number_of_Reviews` value for this hotel, it is 9086. You might surmise that there are many more scores without reviews, so perhaps we should add in the `Additional_Number_of_Scoring` column value. That value is 2682, and adding it to 4789 gets us 7,471, which is still 1615 short of the `Total_Number_of_Reviews`.
+
+If you take the `Average_Score` column, you might surmise it is the average of the reviews in the dataset, but the description from Kaggle is "*Average Score of the hotel, calculated based on the latest comment in the last year*". That doesn't seem that useful, but we can calculate our own average based on the review scores in the dataset. Using the same hotel as an example, the average hotel score is given as 7.1 but the calculated score (average reviewer score *in* the dataset) is 6.8. This is close, but not the same value, and we can only guess that the scores given in the `Additional_Number_of_Scoring` reviews increased the average to 7.1. Unfortunately, with no way to test or prove that assertion, it is difficult to use or trust `Average_Score`, `Additional_Number_of_Scoring` and `Total_Number_of_Reviews` when they are based on, or refer to, data we do not have.
+
+To complicate things further, the hotel with the second highest number of reviews has a calculated average score of 8.12 and the dataset `Average_Score` is 8.1. Is the correct score a coincidence, or is the first hotel a discrepancy?
+
+On the possibility that these hotels might be outliers, and that maybe most of the values tally up (but some do not for some reason), we will write a short program next to explore the values in the dataset and determine the correct usage (or non-usage) of the values.
+
+> 🚨 A note of caution
+>
+> When working with this dataset you will write code that calculates something from the text without having to read or analyse the text yourself. This is the essence of NLP, interpreting meaning or sentiment without having a human do it. However, it is possible that you will read some of the negative reviews. I would urge you not to, because you don't have to. Some of them are silly, or irrelevant negative hotel reviews, such as "The weather wasn't great", something beyond the control of the hotel, or indeed, anyone. But there is a dark side to some reviews too. Sometimes the negative reviews are racist, sexist, or ageist. This is unfortunate but to be expected in a dataset scraped off a public website. Some reviewers leave reviews that you would find distasteful, uncomfortable, or upsetting. Better to let the code measure the sentiment than read them yourself and be upset. That said, it is a minority that write such things, but they exist all the same.
+rows have column `Positive_Review` values of "No Positive" 9. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative" ### Code answers 1. Print out the *shape* of the data frame you have just loaded (the shape is the number of rows and columns) ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+ ``` 2. Calculate the frequency count for reviewer nationalities: 1. How many distinct values are there for the column `Reviewer_Nationality` and what are they? 2. What reviewer nationality is the most common in the dataset (print country and number of reviews)? ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+ ``` 3. What are the next top 10 most frequently found nationalities, and their frequency count? ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+ ``` 3. What was the most frequently reviewed hotel for each of the top 10 most reviewer nationalities? ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+ ``` 4. How many reviews are there per hotel (frequency count of hotel) in the dataset? ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+ ``` | Hotel_Name | Total_Number_of_Reviews | Total_Reviews_Found | | :----------------------------------------: | :---------------------: | :-----------------: | | Britannia International Hotel Canary Wharf | 9086 | 4789 | | Park Plaza Westminster Bridge London | 12158 | 4169 | | Copthorne Tara Hotel London Kensington | 7105 | 3578 | | ... | ... | ... | | Mercure Paris Porte d Orleans | 110 | 10 | | Hotel Wagner | 135 | 10 | | Hotel Gallitzinberg | 173 | 8 | You may notice that the *counted in the dataset* results do not match the value in `Total_Number_of_Reviews`. It is unclear if this value in the dataset represented the total number of reviews the hotel had, but not all were scraped, or some other calculation. `Total_Number_of_Reviews` is not used in the model because of this unclarity. 5. While there is an `Average_Score` column for each hotel in the dataset, you can also calculate an average score (getting the average of all reviewer scores in the dataset for each hotel). Add a new column to your dataframe with the column header `Calc_Average_Score` that contains that calculated average. Print out the columns `Hotel_Name`, `Average_Score`, and `Calc_Average_Score`. ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+ ``` You may also wonder about the `Average_Score` value and why it is sometimes different from the calculated average score. As we can't know why some of the values match, but others have a difference, it's safest in this case to use the review scores that we have to calculate the average ourselves. That said, the differences are usually very small, here are the hotels with the greatest deviation from the dataset average and the calculated average: | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name | | :----------------------: | :-----------: | :----------------: | ------------------------------------------: | | -0.8 | 7.7 | 8.5 | Best Western Hotel Astoria | | -0.7 | 8.8 | 9.5 | Hotel Stendhal Place Vend me Paris MGallery | | -0.7 | 7.5 | 8.2 | Mercure Paris Porte d Orleans | | -0.7 | 7.9 | 8.6 | Renaissance Paris Vendome Hotel | | -0.5 | 7.0 | 7.5 | Hotel Royal Elys es | | ... | ... | ... | ... | | 0.7 | 7.5 | 6.8 | Mercure Paris Op ra Faubourg Montmartre | | 0.8 | 7.1 | 6.3 | Holiday Inn Paris Montparnasse Pasteur | | 0.9 | 6.8 | 5.9 | Villa Eugenie | | 0.9 | 8.6 | 7.7 | MARQUIS Faubourg St Honor Relais Ch teaux | | 1.3 | 7.2 | 5.9 | Kube Hotel Ice Bar | With only 1 hotel having a difference of score greater than 1, it means we can probably ignore the difference and use the calculated average score. 6. Calculate and print out how many rows have column `Negative_Review` values of "No Negative" 7. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" 8. Calculate and print out how many rows have column `Positive_Review` values of "No Positive" **and** `Negative_Review` values of "No Negative" ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+   ```
+
+## Another way
+
+Another way to count the items without lambdas is to use `sum` to count the rows:
+
+```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+```
+
+You may have noticed that there are 127 rows that have both "No Negative" and "No Positive" values for the columns `Negative_Review` and `Positive_Review` respectively. That means that the reviewer gave the hotel a numerical score, but declined to write either a positive or negative review. Luckily this is a small number of rows (127 out of 515738, or 0.02%), so it probably won't skew our model or results in any particular direction, but you might not have expected a dataset of reviews to have rows with no reviews, so it's worth exploring the data to discover rows like this.
+
+Now that you have explored the dataset, in the next lesson you will filter the data and add some sentiment analysis.
+
+---
+
+## 🚀Challenge
+
+This lesson demonstrates, as we saw in previous lessons, how critically important it is to understand your data and its foibles before performing operations on it. Text-based data, in particular, bears careful scrutiny. Dig through various text-heavy datasets and see if you can discover areas that could introduce bias or skewed sentiment into a model.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## Review & Self Study
+
+Take [this Learning Path on NLP](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) to discover tools to try when building speech and text-heavy models.
+
+## Assignment
+
+[NLTK](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/sw/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..bd2d7c375
--- /dev/null
+++ b/translations/sw/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Instructions
+
+NLTK is a well-known library for use in computational linguistics and NLP. Take this opportunity to read the '[NLTK book](https://www.nltk.org/book/)' and try out its exercises. In this ungraded assignment, you will get to know the library more deeply.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..2c1472893
--- /dev/null
+++ b/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..49b7a67c6
--- /dev/null
+++ b/translations/sw/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/5-Hotel-Reviews-2/README.md b/translations/sw/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..fd9c99890
--- /dev/null
+++ b/translations/sw/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Sentiment analysis with hotel reviews
+
+Now that you have explored the dataset in detail, it's time to filter the columns and then use NLP techniques on the dataset to gain new insights about the hotels.
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Filtering & Sentiment Analysis Operations
+
+As you've probably noticed, the dataset has a few issues. Some columns are filled with useless information, others seem incorrect. If they are correct, it's unclear how they were calculated, and the answers cannot be independently verified by your own calculations.
+
+## Exercise: a bit more data processing
+
+Clean the data just a bit more. Add columns that will be useful later, change the values in other columns, and drop certain columns entirely.
+
+1. Initial column processing
+
+   1. Drop `lat` and `lng`
+
+   2. Replace the `Hotel_Address` values with the following values (if the address contains the city and the country, change it to just the city and the country).
+
+      These are the only cities and countries in the dataset:
+
+ Amsterdam, Netherlands
+
+ Barcelona, Spain
+
+ London, United Kingdom
+
+ Milan, Italy
+
+ Paris, France
+
+ Vienna, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+   Now you can query country-level data:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | Hotel_Address | Hotel_Name |
+ | :--------------------- | :--------: |
+ | Amsterdam, Netherlands | 105 |
+ | Barcelona, Spain | 211 |
+ | London, United Kingdom | 400 |
+ | Milan, Italy | 162 |
+ | Paris, France | 458 |
+ | Vienna, Austria | 158 |
+
+2. Process Hotel Meta-review columns
+
+   1. Drop `Additional_Number_of_Scoring`
+
+   1. Replace `Total_Number_of_Reviews` with the total number of reviews for that hotel that are actually in the dataset
+
+   1. Replace `Average_Score` with our own calculated score
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+   df.Total_Number_of_Reviews = df.groupby('Hotel_Name').Reviewer_Score.transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. Process review columns
+
+   1. Drop `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` and `days_since_review`
+
+   2. Keep `Reviewer_Score`, `Negative_Review`, and `Positive_Review` as they are
+
+   3. Keep `Tags` for now
+
+      - We'll be doing some additional filtering operations on the tags in the next section and then tags will be dropped
+
+4. Process reviewer columns
+
+   1. Drop `Total_Number_of_Reviews_Reviewer_Has_Given`
+
+   2. Keep `Reviewer_Nationality`
+
+### Tag columns
+
+The `Tag` column is problematic as it is a list (in text form) stored in the column. Unfortunately the order and number of subsections in this column are not always the same. It's hard for a human to identify the correct phrases to be interested in, because there are 515,000 rows and 1427 hotels, and each has slightly different options a reviewer could choose. This is where NLP shines: you can scan the text, find the most common phrases, and count them.
+
+Unfortunately, we are not interested in single words, but in multi-word phrases (e.g. *Business trip*). Running a multi-word frequency distribution algorithm on that much data (6762646 words) could take an extraordinary amount of time, but without looking at the data, it would seem that is a necessary expense. This is where exploratory data analysis comes in useful: because you've seen a sample of the tags such as `[' Business trip  ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`, you can begin to ask if it's possible to greatly reduce the processing you have to do. Luckily, it is - but first you need to follow a few steps to ascertain the tags of interest.
+
+### Filtering tags
+
+Remember that the goal of the dataset is to add sentiment and columns that will help you choose the best hotel (for yourself or maybe a client tasking you to make a hotel recommendation bot). You need to ask yourself if the tags are useful or not in the final dataset. Here is one interpretation (if you needed the dataset for other reasons, different tags might stay in/out of the selection):
+
+1. The type of trip is relevant, and that should stay
+2. The type of guest group is important, and that should stay
+3. The type of room, suite, or studio that the guest stayed in is irrelevant (all hotels have basically the same rooms)
+4. The device the review was submitted on is irrelevant
+5. The number of nights a reviewer stayed for *could* be relevant if you attributed longer stays with them liking the hotel more, but it's a stretch, and probably irrelevant
+
+In summary, **keep 2 kinds of tags and remove the others**.
+
+First, you don't want to count the tags until they are in a better format, so that means removing the brackets and quotes. You can do this several ways, but you want the fastest, as it could take a long time to process a lot of data. Luckily, pandas has an easy way to do each of these steps.
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Each tag becomes something like: `Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+Next we find a problem. Some reviews, or rows, have 5 columns, some 3, some 6. This is a result of how the dataset was created, and hard to fix. You want to get a frequency count of each phrase, but they are in a different order in each review, so the count might be off, and a hotel might not get a tag assigned to it that it deserved.
+
+Instead you will use the different order to your advantage, because each tag is multi-word but also separated by a comma! The simplest way to do this is to create 6 temporary columns with each tag inserted into the column corresponding to its order in the tag. You can then merge the 6 columns into one big column and run the `value_counts()` method on the resulting column (a short sketch of this approach follows the sample table below). Printing that out, you'll see there were 2428 unique tags. Here is a small sample:
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
+
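+The snippet below is a minimal sketch of that approach, assuming `df.Tags` has already been cleaned into comma-separated strings as shown earlier; the temporary column layout is illustrative, not taken from the lesson's notebook.
+
+```python
+import pandas as pd
+
+# Split each Tags string into up to 6 temporary columns (n=5 splits -> 6 parts)
+tag_columns = df.Tags.str.split(",", n=5, expand=True)
+
+# Stack the temporary columns into one long Series, trim stray spaces,
+# then count how often each phrase occurs
+all_tags = tag_columns.stack().str.strip()
+print(all_tags.value_counts().head(20))
+```
+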
+Some of the common tags like `Submitted from a mobile device` are of no use to us, so it might be a smart thing to remove them before counting phrase occurrence, but it is such a fast operation you can leave them in and ignore them.
+
+### Removing the length of stay tags
+
+Removing these tags is step 1; it slightly reduces the total number of tags to be considered. Note that you do not remove them from the dataset, you just choose to remove them from consideration as values to count/keep in the reviews dataset.
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+There are a huge variety of rooms, suites, studios, apartments and so on. They all mean roughly the same thing and are not relevant to you, so remove them from consideration (a combined filtering sketch follows the table below).
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
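+As a rough sketch (reusing the illustrative `all_tags` Series from the previous snippet, not the notebook's own code), the stay-length, device, and room-type tags could be filtered out of the counts like this:
+
+```python
+import re
+
+def is_relevant(tag: str) -> bool:
+    # Drop length-of-stay tags such as "Stayed 3 nights"
+    if re.match(r"Stayed \d+ night", tag):
+        return False
+    # Drop the device tag and room/suite/studio/apartment descriptions
+    if tag == "Submitted from a mobile device":
+        return False
+    if re.search(r"room|suite|studio|apartment", tag, re.IGNORECASE):
+        return False
+    return True
+
+tag_counts = all_tags.value_counts()
+useful_tags = tag_counts[[is_relevant(t) for t in tag_counts.index]]
+print(useful_tags.head(10))
+```
+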
+Finally, and this is delightful (because it didn't take much processing at all), you will be left with the following *useful* tags:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+You could argue that `Travellers with friends` is more or less the same as `Group`, and it would be fair to combine the two as above. The code for identifying the correct tags is in [the Tags notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+The final step is to create new columns for each of these tags. Then, for every review row, if the `Tag` column matches one of the new columns, add a 1, if not, add a 0. The end result will be a count of how many reviewers chose this hotel (in aggregate) for, say, business vs leisure, or to bring a pet to, and this is useful information when recommending a hotel.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### Save your file
+
+Finally, save the dataset as it is now with a new name.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Sentiment Analysis Operations
+
+In this final section, you will apply sentiment analysis to the review columns and save the results in a dataset.
+
+## Exercise: load and save the filtered data
+
+Note that now you are loading the filtered dataset that was saved in the previous section, **not** the original dataset.
+
+```python
+import time
+import pandas as pd
+import nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Removing stop words
+
+If you were to run sentiment analysis on the negative and positive review columns, it could take a long time. Tested on a powerful test laptop with a fast CPU, it took 12 - 14 minutes depending on which sentiment library was used. That's a (relatively) long time, so worth investigating whether it can be sped up.
+
+Removing stop words, or common English words that do not change the sentiment of a sentence, is the first step. By removing them, the sentiment analysis should run faster without being less accurate (the stop words do not affect the sentiment, but they do slow down the analysis).
+
+The longest negative review was 395 words, but after removing the stop words, it is 195 words.
+
+Removing the stop words is also a fast operation; removing them from 2 review columns over 515,000 rows took 3.3 seconds on the test device. It could take slightly more or less time for you depending on your device's CPU speed, RAM, whether you have an SSD, and other factors. The relative shortness of the operation means that if it improves the sentiment analysis time, then it is worth doing.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+```
+
+### Performing sentiment analysis
+
+Now you should calculate the sentiment for both the negative and positive review columns, and store the results in 2 new columns. The test of the sentiment will be to compare it to the reviewer's score for the same review. For instance, if the sentiment analysis thinks the negative review had a sentiment of 1 (extremely positive sentiment) and the positive review a sentiment of 1, but the reviewer gave the hotel the lowest score possible, then either the review text doesn't match the score, or the sentiment analyser could not recognize the sentiment correctly. You should expect some sentiment scores to be completely wrong, and often that will be explainable, e.g. a review could be extremely sarcastic ("Of course I LOVED sleeping in a room with no heating") and the sentiment analyser thinks that's positive sentiment, even though a human reading it would know it was sarcasm.
+
+NLTK supplies different sentiment analyzers to learn with, and you can substitute them and see if the sentiment is more or less accurate. The VADER sentiment analysis is used here.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+Later in your program, when you are ready to calculate sentiment, you can apply it to each review as follows:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+This takes approximately 120 seconds on my computer, but it will vary on each computer. If you want to print the results and see whether the sentiment matches the review:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+The very last thing to do with the file before using it in the challenge is to save it! You should also consider reordering all your new columns so they are easier to work with (for a human, it's a cosmetic change).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+You should run the entire code for [the analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) (after you've run [your filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+
+To review, the steps are:
+
+1. The original dataset file **Hotel_Reviews.csv** was explored in the previous lesson with [the explorer notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb), resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb), resulting in **Hotel_Reviews_NLP.csv**
+4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
+
+## Conclusion
+
+When you started, you had a dataset with columns and data, but not all of it could be verified or used. You've explored the data, filtered out what you don't need, converted the tags into something useful, calculated your own averages, added some sentiment columns and, hopefully, learned some interesting things about processing natural text.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Challenge
+
+Now that you have your dataset analyzed for sentiment, see if you can use strategies you've learned in this curriculum (clustering, perhaps?) to determine patterns around sentiment.
+
+## Review & Self Study
+
+Take [this Learn module](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) to learn more and use different tools to explore sentiment in text.
+
+## Assignment
+
+[Try a different dataset](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/sw/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..526fb51bd
--- /dev/null
+++ b/translations/sw/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Try a different dataset
+
+## Instructions
+
+Now that you've learned about using NLTK to assign sentiment to text, try a different dataset. You'll probably need to do some data processing around it, so create a notebook and document your thought process. What do you discover?
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                           | Adequate                                   | Needs Improvement      |
+| -------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | ---------------------- |
+|          | A complete notebook and dataset are presented, with well-documented cells explaining how the sentiment is assigned | The notebook is missing good explanations | The notebook is flawed |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..21fa91334
--- /dev/null
+++ b/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..0633d34bc
--- /dev/null
+++ b/translations/sw/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/README.md b/translations/sw/6-NLP/README.md
new file mode 100644
index 000000000..04bd9456e
--- /dev/null
+++ b/translations/sw/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Getting started with natural language processing
+
+Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written, referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics. The whole field is directed at helping machines understand and process human language. This can then be used to perform tasks like spell check or machine translation. It has a variety of real-world applications in a number of fields, including medical research, search engines and business intelligence.
+
+## Regional topic: European languages and literature and romantic hotels of Europe ❤️
+
+In this section of the curriculum, you will be introduced to one of the most widespread uses of machine learning: natural language processing (NLP). Derived from computational linguistics, this category of artificial intelligence is the bridge between humans and machines via voice or textual communication.
+
+In these lessons we'll learn the basics of NLP by building small conversational bots to learn how machine learning aids in making these conversations more and more 'smart'. You'll travel back in time, chatting with Elizabeth Bennett and Mr. Darcy from Jane Austen's novel, **Pride and Prejudice**, published in 1813. Then, you'll further your knowledge by learning about sentiment analysis via hotel reviews in Europe.
+
+
+> Photo by Elaine Howlin on Unsplash
+
+## Lessons
+
+1. [Introduction to natural language processing](1-Introduction-to-NLP/README.md)
+2. [Common NLP tasks and techniques](2-Tasks/README.md)
+3. [Translation and sentiment analysis with machine learning](3-Translation-Sentiment/README.md)
+4. [Preparing your data](4-Hotel-Reviews-1/README.md)
+5. [NLTK for Sentiment Analysis](5-Hotel-Reviews-2/README.md)
+
+## Credits
+
+These natural language processing lessons were written with ☕ by [Stephen Howell](https://twitter.com/Howell_MSFT)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/6-NLP/data/README.md b/translations/sw/6-NLP/data/README.md
new file mode 100644
index 000000000..c22517b47
--- /dev/null
+++ b/translations/sw/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+Download the hotel review data to this folder.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/1-Introduction/README.md b/translations/sw/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..7449568bf
--- /dev/null
+++ b/translations/sw/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Introduction to time series forecasting
+
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this lesson and the one that follows, you will learn a bit about time series forecasting, an interesting and valuable part of a ML scientist's repertoire that is a bit less well-known than other topics. Time series forecasting is a sort of 'crystal ball': based on the past performance of a variable such as price, you can predict its future potential value.
+
+[](https://youtu.be/cBojo1hsHiI "Introduction to time series forecasting")
+
+> 🎥 Click the image above for a video about time series forecasting
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+It's a useful and interesting field with real value to business, given its direct application to problems of pricing, inventory, and supply chain issues. While deep learning techniques have started to be used to gain more insights and better predict future performance, time series forecasting remains a field greatly informed by classic ML techniques.
+
+> Penn State's useful time series curriculum can be found [here](https://online.stat.psu.edu/stat510/lesson/1)
+
+## Introduction
+
+Suppose you maintain an array of smart parking meters that provide data about how often they are used and for how long over time.
+
+> What if you could predict, based on the meter's past performance, its future value according to the laws of supply and demand?
+
+Accurately predicting when to act so as to achieve your goal is a challenge that could be tackled by time series forecasting. It wouldn't make folks happy to be charged more in busy times when they're looking for a parking spot, but it would be a sure way to generate revenue to clean the streets!
+
+Let's explore some of the types of time series algorithms and start a notebook to clean and prepare some data. The data you will analyze is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. Given the historical patterns of electricity load and temperature, you can predict future values of electricity load.
+
+In this example, you'll learn how to forecast one time step ahead, using historical load data only. Before starting, however, it's useful to understand what's going on behind the scenes.
+
+## Some definitions
+
+When encountering the term 'time series' you need to understand its use in several different contexts.
+
+🎓 **Time series**
+
+In mathematics, "a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time." An example of a time series is the daily closing value of the [Dow Jones Industrial Average](https://wikipedia.org/wiki/Time_series). The use of time series plots and statistical modeling is frequently encountered in signal processing, weather forecasting, earthquake prediction, and other fields where events occur and data points can be plotted over time.
+
+🎓 **Time series analysis**
+
+Time series analysis is the analysis of the above-mentioned time series data. Time series data can take distinct forms, including 'interrupted time series' which detects patterns in a time series' evolution before and after an interrupting event. The type of analysis needed depends on the nature of the data. Time series data itself can take the form of a series of numbers or characters.
+
+The analysis to be performed uses a variety of methods, including frequency-domain and time-domain, linear and nonlinear, and more. [Learn more](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm) about the many ways to analyze this type of data.
+
+🎓 **Time series forecasting**
+
+Time series forecasting is the use of a model to predict future values based on patterns displayed by previously gathered data as it occurred in the past. While it is possible to use regression models to explore time series data, with time indices as x variables on a plot, such data is best analyzed using special types of models.
+
+Time series data is a list of ordered observations, unlike data that can be analyzed by linear regression. The most common such model is ARIMA, an acronym that stands for "Autoregressive Integrated Moving Average".
+
+[ARIMA models](https://online.stat.psu.edu/stat510/lesson/1/1.1) "relate the present value of a series to past values and past prediction errors." They are most appropriate for analyzing time-domain data, where data is ordered over time.
+
+> There are several types of ARIMA models, which you can learn about [here](https://people.duke.edu/~rnau/411arim.htm) and which you will touch on in the next lesson.
+
+In the next lesson, you will build an ARIMA model using [Univariate Time Series](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm), which focuses on one variable that changes its value over time. An example of this type of data is [this dataset](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm) that records the monthly CO2 concentration at the Mauna Loa Observatory:
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Identify the variable that changes over time in this dataset
+
+## Time series data characteristics to consider
+
+When looking at time series data, you might notice that it has [certain characteristics](https://online.stat.psu.edu/stat510/lesson/1/1.1) that you need to take into account and mitigate to better understand its patterns. If you consider time series data as potentially providing a 'signal' you want to analyze, these characteristics can be thought of as 'noise'. You will often need to reduce this 'noise' by offsetting some of these characteristics using statistical techniques (a small smoothing sketch follows the list of concepts below).
+
+Here are some concepts you should know to be able to work with time series:
+
+🎓 **Trends**
+
+Trends are defined as measurable increases and decreases over time. [Read more](https://machinelearningmastery.com/time-series-trends-in-python). In the context of time series, it's about how to use and, if necessary, remove trends from your time series.
+
+🎓 **[Seasonality](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Seasonality is defined as periodic fluctuations, such as holiday rushes that might affect sales, for example. [Take a look](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm) at how different types of plots display seasonality in data.
+
+🎓 **Outliers**
+
+Outliers are data points far away from the standard data variance.
+
+🎓 **Long-run cycle**
+
+Independent of seasonality, data might display a long-run cycle such as an economic downturn that lasts longer than a year.
+
+🎓 **Constant variance**
+
+Over time, some data display constant fluctuations, such as energy usage per day and night.
+
+🎓 **Abrupt changes**
+
+The data might display an abrupt change that might need further analysis. The abrupt shuttering of businesses due to COVID, for example, caused changes in data.
+
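+As a brief illustration (not part of this lesson's notebook), a rolling mean is one simple statistical technique for smoothing away noise; `series` here is a hypothetical pandas Series of hourly observations indexed by time:
+
+```python
+import pandas as pd
+
+# Smooth hourly data with a one-day rolling mean
+smoothed = series.rolling(window=24).mean()
+
+# Remove a slow-moving weekly trend by subtracting a longer rolling mean
+detrended = series - series.rolling(window=24 * 7).mean()
+```
+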
+✅ Here is a [sample time series plot](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python) showing daily in-game currency spent over a few days. Can you identify any of the characteristics listed above in this data?
+
+
+
+## Exercise - getting started with power usage data
+
+Let's get started creating a time series model to predict future power usage given past usage.
+
+> The data in this example is taken from the GEFCom2014 forecasting competition. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. In the `working` folder of this lesson, open the _notebook.ipynb_ file. Start by adding libraries that will help you load and visualize data
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+   Note, you are using files from the included `common` folder, which sets up your environment and handles downloading the data.
+
+2. Next, examine the data as a dataframe by calling `load_data()` and `head()`:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+   You can see that there are two columns representing date and load:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Now, plot the data by calling `plot()`:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Now, plot the first week of July 2014, by providing it to `energy` as input in the `[from date]: [to date]` pattern:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   A beautiful plot! Take a look at these plots and see if you can determine any of the characteristics listed above. What can we surmise by visualizing the data?
+
+In the next lesson, you will create an ARIMA model to create some forecasts.
+
+---
+
+## 🚀Challenge
+
+Make a list of all the industries and areas of inquiry you can think of that would benefit from time series forecasting. Can you think of an application of these techniques in the arts? In econometrics? Ecology? Retail? Industry? Finance? Where else?
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Review & Self Study
+
+Although we won't cover them here, neural networks are sometimes used to enhance classic methods of time series forecasting. Read more about them [in this article](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## Assignment
+
+[Visualize some more time series](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/1-Introduction/assignment.md b/translations/sw/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..a98695e8b
--- /dev/null
+++ b/translations/sw/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Visualize some more time series
+
+## Instructions
+
+You've begun to learn about time series forecasting by looking at the type of data that requires this special modeling. You've visualized some data around energy. Now, look around for some other data that would benefit from time series forecasting. Find three examples (try [Kaggle](https://kaggle.com) and [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) and create a notebook to visualize them. Note any special characteristics they have (seasonality, abrupt changes, or other trends) in the notebook.
+
+## Rubric
+
+| Criteria | Exemplary                                               | Adequate                                             | Needs Improvement                                                                          |
+| -------- | ------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------ |
+|          | Three datasets are plotted and explained in a notebook | Two datasets are plotted and explained in a notebook | Few datasets are plotted or explained in a notebook, or the data presented is insufficient |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/sw/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..81e218f42
--- /dev/null
+++ b/translations/sw/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/sw/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..fee183455
--- /dev/null
+++ b/translations/sw/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/2-ARIMA/README.md b/translations/sw/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..40f1c579d
--- /dev/null
+++ b/translations/sw/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,397 @@
+# Time series forecasting with ARIMA
+
+In the previous lesson, you learned a bit about time series forecasting and loaded a dataset showing the fluctuations of electrical load over a time period.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Click the image above for a video: A brief introduction to ARIMA models. The example is done in R, but the concepts are universal.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average). ARIMA models are especially suited to data that shows [non-stationarity](https://wikipedia.org/wiki/Stationary_process).
+
+## General concepts
+
+To be able to work with ARIMA, there are some concepts you need to know about:
+
+- 🎓 **Stationarity**. In a statistical context, stationarity refers to data whose distribution does not change when shifted in time. Non-stationary data, then, shows fluctuations due to trends that must be transformed before they can be analyzed. Seasonality, for example, can introduce fluctuations in data and can be eliminated by a process of 'seasonal-differencing'.
+
+- 🎓 **[Differencing](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. Differencing data, again in a statistical context, refers to the process of transforming non-stationary data to make it stationary by removing its non-constant trend. "Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series." [Paper by Shixiong et al](https://arxiv.org/abs/1904.07632)
+
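+As a small illustration of differencing (not from this lesson's notebook; `energy` is assumed to be the hourly load DataFrame loaded later in this lesson):
+
+```python
+# First-order differencing: subtract each value from its predecessor
+first_diff = energy['load'].diff().dropna()
+
+# Seasonal differencing with a 24-hour lag removes a daily pattern
+seasonal_diff = energy['load'].diff(24).dropna()
+```
+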
+## ARIMA in the context of time series
+
+Let's unpack the parts of ARIMA to better understand how it helps us model time series and make predictions against it.
+
+- **AR - for AutoRegressive**. Autoregressive models, as the name implies, look 'back' in time to analyze previous values in your data and make assumptions about them. These previous values are called 'lags'. An example would be data that shows monthly sales of pencils. Each month's sales total would be considered an 'evolving variable' in the dataset. This model is built as the "evolving variable of interest is regressed on its own lagged (i.e., prior) values." [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - for Integrated**. As opposed to the similar 'ARMA' models, the 'I' in ARIMA refers to its *[integrated](https://wikipedia.org/wiki/Order_of_integration)* aspect. The data is 'integrated' when differencing steps are applied so as to eliminate non-stationarity.
+
+- **MA - for Moving Average**. The [moving-average](https://wikipedia.org/wiki/Moving-average_model) aspect of this model refers to the output variable being determined by observing the current and past values of lags.
+
+Bottom line: ARIMA is used to make a model fit the special form of time series data as closely as possible.
+
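+As a rough mathematical summary (not from the original lesson), an ARIMA(p, d, q) model expresses the series, after $d$ rounds of differencing, in terms of its own $p$ lagged values and $q$ lagged forecast errors:
+
+$$y'_t = c + \sum_{i=1}^{p} \phi_i \, y'_{t-i} + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + \varepsilon_t$$
+
+where $y'_t$ is the differenced series, $\phi_i$ and $\theta_j$ are learned coefficients, and $\varepsilon_t$ is white noise.
+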
+## Exercise - build an ARIMA model
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) file.
+
+1. Run the notebook to load the `statsmodels` Python library; you will need this for ARIMA models.
+
+1. Load the necessary libraries
+
+1. Now, load several more libraries useful for plotting data:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Plot all the available energy data from January 2012 to December 2014. There should be no surprises, as we saw this data in the last lesson:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+   Now, let's build a model!
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. You'll train your model on the train set. As usual, after the model has finished training, you'll evaluate its accuracy using the test set. You need to ensure that the test set covers a later period in time than the training set, to ensure that the model does not gain information from future time periods.
+
+1. Allocate the period from November 1 to December 30, 2014 to the training set; the test set will cover the period from December 30, 2014 onward, matching the dates defined below:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+   Since this data reflects the daily consumption of energy, there is a strong seasonal pattern, but the consumption is most similar to the consumption in more recent days.
+
+1. Visualize the differences:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   Therefore, using a relatively small window of time for training the data should be sufficient.
+
+   > Note: Since the function we use to fit the ARIMA model performs in-sample validation while fitting, we will omit separate validation data.
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by filtering and scaling it. Filter the dataset to include only the time periods and columns you need, and scale the data so it is projected into the interval 0,1.
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only the needed column 'load' plus the date:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+   You can see the shape of the data:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. Scale the data to be in the range (0, 1).
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Visualize the original vs. scaled data:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   > The original data
+
+ 
+
+   > The scaled data
+
+1. Now that you have calibrated the scaler on the training data, you can scale the test data:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### Implement ARIMA
+
+It's time to implement ARIMA! You'll now use the `statsmodels` library that you installed earlier.
+
+Now you need to follow several steps:
+
+   1. Define the model by calling `SARIMAX()` and passing in the model parameters: the p, d, and q parameters, and the P, D, and Q parameters.
+   2. Prepare the model for the training data by calling the `fit()` function.
+   3. Make predictions by calling the `forecast()` function and specifying the number of steps (the `horizon`) to forecast.
+
+> 🎓 What are all these parameters for? In an ARIMA model there are 3 parameters that are used to help model the major aspects of a time series: seasonality, trend, and noise. These parameters are:
+
+`p`: the parameter associated with the auto-regressive aspect of the model, which incorporates *past* values.
+`d`: the parameter associated with the integrated part of the model, which affects the amount of *differencing* (🎓 remember differencing 👆?) to apply to a time series.
+`q`: the parameter associated with the moving-average part of the model.
+
+> Note: If your data has a seasonal aspect (which this one does), we use a seasonal ARIMA model (SARIMA). In that case you need to use another set of parameters: `P`, `D`, and `Q`, which describe the same associations as `p`, `d`, and `q` but correspond to the seasonal components of the model.
+
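+As a quick illustration of what the `d` parameter controls, here is a minimal sketch of first-order differencing, assuming the `energy` dataframe loaded above:
+
+```python
+# First-order differencing (what d=1 applies internally): subtracting
+# each value from its predecessor removes trend from the series.
+energy['load'].diff().dropna().head()
+```
+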
+1. Start by setting your preferred horizon value. Let's try 3 hours:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+   Selecting the best values for an ARIMA model's parameters can be challenging, as it's somewhat subjective and time intensive. You might consider using the `auto_arima()` function from the [`pyramid` library](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html) instead.
+
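+   One hedged sketch of such an automated search, assuming the `pmdarima` package (the successor to `pyramid`) is installed:
+
+   ```python
+   # Automated (p,d,q)(P,D,Q,m) search - a sketch assuming `pmdarima` is
+   # installed; m=24 matches the hourly seasonality used further below.
+   import pmdarima as pm
+
+   stepwise_model = pm.auto_arima(train, seasonal=True, m=24,
+                                  stepwise=True, suppress_warnings=True)
+   print(stepwise_model.order, stepwise_model.seasonal_order)
+   ```
+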
+1. For now, try some manual selections to find a good model.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+   A table of results is printed.
+
+You've built your first model! Now we need to find a way to evaluate it.
+
+### Evaluate your model
+
+To evaluate your model, you can perform so-called `walk forward` validation. In practice, time series models are re-trained each time new data becomes available. This allows the model to make the best forecast at each time step.
+
+Starting at the beginning of the time series with this technique, train the model on the training data set. Then make a prediction for the next time step. The prediction is evaluated against the known value. The training set is then expanded to include the known value, and the process is repeated.
+
+> Note: You should keep the training set window fixed for more efficient training, so that every time you add a new observation to the training set, you remove an observation from the beginning of the set.
+
+This process provides a more robust estimation of how the model will perform in practice. However, it comes at the computational cost of creating so many models. This is acceptable if the data is small or if the model is simple, but could be an issue at scale.
+
+Walk-forward validation is the gold standard of time series model evaluation and is recommended for your own projects.
+
+1. First, create a test data point for each HORIZON step.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+   |            |          | load | load+1 | load+2 |
+   | ---------- | -------- | ---- | ------ | ------ |
+   | 2014-12-30 | 00:00:00 | 0.33 | 0.29   | 0.27   |
+   | 2014-12-30 | 01:00:00 | 0.29 | 0.27   | 0.27   |
+   | 2014-12-30 | 02:00:00 | 0.27 | 0.27   | 0.30   |
+   | 2014-12-30 | 03:00:00 | 0.27 | 0.30   | 0.41   |
+   | 2014-12-30 | 04:00:00 | 0.30 | 0.41   | 0.57   |
+
+   The data is shifted horizontally according to its horizon point.
+
+1. Make predictions on your test data using this sliding window approach, in a loop the size of the test data length:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+   You can watch the training happening:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Compare the predictions to the actual load:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+   Output
+   |     | timestamp  |          | h   | prediction | actual   |
+   | --- | ---------- | -------- | --- | ---------- | -------- |
+   | 0   | 2014-12-30 | 00:00:00 | t+1 | 3,008.74   | 3,023.00 |
+   | 1   | 2014-12-30 | 01:00:00 | t+1 | 2,955.53   | 2,935.00 |
+   | 2   | 2014-12-30 | 02:00:00 | t+1 | 2,900.17   | 2,899.00 |
+   | 3   | 2014-12-30 | 03:00:00 | t+1 | 2,917.69   | 2,886.00 |
+   | 4   | 2014-12-30 | 04:00:00 | t+1 | 2,946.99   | 2,963.00 |
+
+   Observe the hourly data's prediction, compared to the actual load. How accurate is this?
+
+### Check model accuracy
+
+Check the accuracy of your model by testing its mean absolute percentage error (MAPE) over all the predictions.
+
+> **🧮 Show me the math**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) is used to show prediction accuracy as a ratio defined by the above formula. The difference between the actual and predicted value at each step t is divided by the actual value. "The absolute value in this calculation is summed for every forecasted point in time and divided by the number of fitted points n." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
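+>
+> In symbols (reconstructing the formula in the image above), with actual<sub>t</sub> the observed value and predicted<sub>t</sub> the forecast:
+>
+> MAPE = (1/n) Σ<sub>t=1..n</sub> |(actual<sub>t</sub> − predicted<sub>t</sub>) / actual<sub>t</sub>| × 100%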
+
+1. Express the equation in code:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Calculate the MAPE of a single step forecast:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+   One step forecast MAPE: 0.5570581332313952 %
+
+1. Print the multi-step forecast MAPE:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+   A nice low number is best: consider that a forecast with a MAPE of 10 is off by 10%.
+
+1. But as always, it's easier to see this kind of accuracy measurement visually, so let's plot it:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+ fig = plt.figure(figsize=(15, 8))
+ ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ ax = fig.add_subplot(111)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 A very nice plot, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+Dig into the ways to test the accuracy of a time series model. We touched on MAPE in this lesson, but are there other methods you could use? Research them and annotate them. A helpful document can be found [here](https://otexts.com/fpp2/accuracy.html)
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Review & Self Study
+
+This lesson touches on only the basics of time series forecasting with ARIMA. Take some time to deepen your knowledge by digging into [this repository](https://microsoft.github.io/forecasting/) and its various model types to learn other ways to build time series models.
+
+## Assignment
+
+[A new ARIMA model](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/2-ARIMA/assignment.md b/translations/sw/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..e093ad062
--- /dev/null
+++ b/translations/sw/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# A New ARIMA Model
+
+## Instructions
+
+Now that you have built an ARIMA model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using MAPE.
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                            | Adequate                                                 | Needs Improvement                     |
+| -------- | --------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------- | -------------------------------------- |
+|          | A notebook is presented with a new ARIMA model built, tested and explained with visualizations and stated accuracy.    | The notebook presented is not annotated or contains bugs   | An incomplete notebook is presented    |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/sw/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..3645a4c67
--- /dev/null
+++ b/translations/sw/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/sw/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..9514ffe4f
--- /dev/null
+++ b/translations/sw/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/3-SVR/README.md b/translations/sw/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..28eb44c2c
--- /dev/null
+++ b/translations/sw/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,382 @@
+# Time Series Forecasting with Support Vector Regressor
+
+In the previous lesson, you learned how to use the ARIMA model to make time series predictions. Now you'll be looking at the Support Vector Regressor model, a regressor model used to predict continuous data.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Introduction
+
+In this lesson, you will discover a specific way to build models with [**SVM**: **S**upport **V**ector **M**achine](https://en.wikipedia.org/wiki/Support-vector_machine) for regression, or **SVR: Support Vector Regressor**.
+
+### SVR in the context of time series [^1]
+
+Before understanding the importance of SVR in time series prediction, here are some of the important concepts that you need to know:
+
+- **Regression:** A supervised learning technique to predict continuous values from a given set of inputs. The idea is to fit a curve (or line) in the feature space that passes through the maximum number of data points. [Click here](https://en.wikipedia.org/wiki/Regression_analysis) for more information.
+- **Support Vector Machine (SVM):** A type of supervised machine learning model used for classification, regression and outlier detection. The model is a hyperplane in the feature space, which in the case of classification acts as a boundary, and in the case of regression acts as the best-fit line. In SVM, a kernel function is generally used to transform the dataset into a space with a higher number of dimensions, so that the points become easily separable. [Click here](https://en.wikipedia.org/wiki/Support-vector_machine) for more information on SVMs.
+- **Support Vector Regressor (SVR):** A type of SVM that finds the best-fit line (which in the case of SVM is a hyperplane) passing through the maximum number of data points.
+
+### Why SVR? [^1]
+
+In the last lesson you learned about ARIMA, which is a very successful statistical linear method for forecasting time series data. However, in many cases, time series data exhibit *non-linearity*, which cannot be mapped by linear models. In such cases, the ability of SVM to account for non-linearity in the data for regression tasks makes SVR successful in time series forecasting.
+
+## Exercise - build an SVR model
+
+The first few steps for data preparation are the same as those of the previous lesson on [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA).
+
+Open the [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) folder in this lesson and find the [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) file. [^2]
+
+1. Run the notebook and import the necessary libraries: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Load the data from the `/data/energy.csv` file into a Pandas dataframe and take a look: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Plot all the available energy data from January 2012 to December 2014: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+   Now, let's build our SVR model.
+
+### Create training and testing datasets
+
+Now your data is loaded, so you can separate it into train and test sets. Then you'll reshape the data to create a time-step based dataset, which will be needed for the SVR. You'll train your model on the train set. After the model has finished training, you'll evaluate its accuracy on the training set, the testing set and then the full dataset to see the overall performance. You need to ensure that the test set covers a later period in time than the training set, to ensure that the model does not gain information from future time periods [^2] (a situation that would amount to data leakage).
+
+1. Allocate the period from November 1 through December 29, 2014 to the training set. The test set will include the two days of December 30 and 31, 2014: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Visualize the differences: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### Prepare the data for training
+
+Now, you need to prepare the data for training by performing filtering and scaling of your data. Filter your dataset to only include the time periods needed per set, and only the needed column 'load' plus the date: [^2]
+
+1. Filter the original dataset to include only the aforementioned time periods per set, and only the needed column 'load' plus the date: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Scale the training data to be in the range (0, 1): [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. Now, scale the testing data: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Create data with time-steps [^1]
+
+For the SVR, you transform the input data to be of the form `[batch, timesteps]`. So, you reshape the existing `train_data` and `test_data` such that there is a new dimension which refers to the timesteps.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+For this example, we take `timesteps = 5`. So, the input to the model is the data for the first 4 timesteps, and the output will be the data for the 5th timestep.
+
+```python
+timesteps=5
+```
+
+Converting the training data to a 2D tensor using a nested list comprehension:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+Converting the testing data to a 2D tensor:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+Selecting inputs and outputs from training and testing data:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### Implement SVR [^1]
+
+Now, it's time to implement SVR. To read more about this implementation, you can refer to [this documentation](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html). For our implementation, we follow these steps:
+
+   1. Define the model by calling `SVR()` and passing in the model hyperparameters: kernel, gamma, C and epsilon
+   2. Prepare the model for the training data by calling the `fit()` function
+   3. Make predictions by calling the `predict()` function
+
+Now we create an SVR model. Here we use the [RBF kernel](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel), and set the hyperparameters gamma, C and epsilon to 0.5, 10 and 0.05 respectively.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### Fit the model on the training data [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### Make model predictions [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+You've built your SVR! Now we need to evaluate it.
+
+### Evaluate your model [^1]
+
+For evaluation, first we will scale back the data to our original scale. Then, to check the performance, we will plot the original and predicted time series, and also print the MAPE result.
+
+Scale back the predicted and original output:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### Check model performance on training and testing data [^1]
+
+We extract the timestamps from the dataset to show on the x-axis of our plot. Note that we are using the first ```timesteps-1``` values as input for the first output, so the timestamps for the output will start after that.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+Plot the predictions for the training data:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+Print the MAPE for the training data
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+Plot the predictions for the testing data
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+Print the MAPE for the testing data
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 You have a very good result on the testing dataset!
+
+### Check model performance on the full dataset [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 Very nice plots, showing a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+- Try to tweak the hyperparameters (gamma, C, epsilon) while creating the model and evaluate on the data to see which set of hyperparameters gives the best results on the testing data (a hedged sketch of such a sweep follows this list). To know more about these hyperparameters, you can refer to the documentation [here](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel).
+- Try to use different kernel functions for the model and analyze their performance on the dataset. A helpful document can be found [here](https://scikit-learn.org/stable/modules/svm.html#kernel-functions).
+- Try using different values of `timesteps` for the model to look back in order to make the prediction.
+
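+As referenced in the first challenge bullet, here is one minimal sketch of such a manual sweep, assuming `x_train`, `y_train`, `x_test`, `y_test` are the scaled arrays from the "Implement SVR" step (before any inverse-scaling) and that `scaler` and `mape` are still in scope; the candidate values are illustrative, not tuned:
+
+```python
+from sklearn.svm import SVR
+
+# Sweep two hyperparameters and report test-set MAPE for each combination.
+for gamma in [0.1, 0.5, 1.0]:
+    for C in [1, 10, 100]:
+        candidate = SVR(kernel='rbf', gamma=gamma, C=C, epsilon=0.05)
+        candidate.fit(x_train, y_train[:,0])
+        preds = scaler.inverse_transform(candidate.predict(x_test).reshape(-1,1))
+        actuals = scaler.inverse_transform(y_test)
+        print(f"gamma={gamma}, C={C}: test MAPE = {mape(preds, actuals)*100:.3f}%")
+```
+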
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## Review & Self Study
+
+This lesson was meant to introduce the application of SVR for time series forecasting. To read more about SVR, you can refer to [this blog](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/). This [documentation on scikit-learn](https://scikit-learn.org/stable/modules/svm.html) provides a more comprehensive explanation of SVMs in general, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression), and other implementation details such as the different [kernel functions](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) that can be used and their parameters.
+
+## Assignment
+
+[A new SVR model](assignment.md)
+
+## Credits
+
+[^1]: The text, code and output in this section was contributed by [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)
+[^2]: The text, code and output in this section was taken from [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/3-SVR/assignment.md b/translations/sw/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..092c78904
--- /dev/null
+++ b/translations/sw/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# A new SVR model
+
+## Instructions [^1]
+
+Now that you have built an SVR model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using appropriate plots and MAPE. Also try tweaking the different hyperparameters and using different values for the timesteps.
+
+## Rubric [^1]
+
+| Criteria | Exemplary                                                                                                  | Adequate                                                   | Needs Improvement                   |
+| -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------ |
+|          | A notebook is presented with an SVR model built, tested and explained with visualizations and stated accuracy. | The notebook presented is not annotated or contains bugs.     | An incomplete notebook is presented  |
+
+[^1]: The text in this section is based on the [assignment from ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/7-TimeSeries/README.md b/translations/sw/7-TimeSeries/README.md
new file mode 100644
index 000000000..1b20598ca
--- /dev/null
+++ b/translations/sw/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# Introduction to time series forecasting
+
+What is time series forecasting? It's about predicting future events by analyzing trends of the past.
+
+## Regional topic: worldwide electricity usage ✨
+
+In these lessons, you will be introduced to time series forecasting, a somewhat lesser-known area of machine learning that is nevertheless extremely valuable for industry and business applications, among other fields. While neural networks can be used to enhance the utility of these models, we will study them in the context of classical machine learning, as models help predict future performance based on the past.
+
+Our regional focus is electrical usage in the world, an interesting dataset for learning about forecasting future power usage based on patterns of past load. You can see how this kind of forecasting can be extremely helpful in a business environment.
+
+
+
+Photo by [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) of electrical towers on a road in Rajasthan on [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## Lessons
+
+1. [Introduction to time series forecasting](1-Introduction/README.md)
+2. [Building ARIMA time series models](2-ARIMA/README.md)
+3. [Building a Support Vector Regressor for time series forecasting](3-SVR/README.md)
+
+## Credits
+
+"Introduction to time series forecasting" was written with ⚡️ by [Francesca Lazzeri](https://twitter.com/frlazzeri) and [Jen Looper](https://twitter.com/jenlooper). The notebooks first appeared online in the [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) originally written by Francesca Lazzeri. The SVR lesson was written by [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/1-QLearning/README.md b/translations/sw/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..a107767dd
--- /dev/null
+++ b/translations/sw/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,319 @@
+# Introduction to Reinforcement Learning and Q-Learning
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+Reinforcement learning involves three important concepts: the agent, some states, and a set of actions per state. By executing an action in a specified state, the agent is given a reward. Again imagine the computer game Super Mario. You are Mario, you are in a game level, standing next to a cliff edge. Above you is a coin. You, being Mario, in a game level, at a specific position ... that's your state. Moving one step to the right (an action) will take you over the edge, and that would give you a low numerical score. However, pressing the jump button would let you score a point and stay alive. That's a positive outcome and it should award you a positive numerical score.
+
+By using reinforcement learning and a simulator (the game), you can learn how to play the game to maximize the reward, which is staying alive and scoring as many points as possible.
+
+[](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 Click the image above to hear Dmitry discuss Reinforcement Learning
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## Prerequisites and Setup
+
+In this lesson, we will be experimenting with some code in Python. You should be able to run the Jupyter Notebook code from this lesson, either on your computer or somewhere in the cloud.
+
+You can open [the lesson notebook](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb) and walk through this lesson to build.
+
+> **Note:** If you are opening this code from the cloud, you also need to fetch the [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) file, which is used in the notebook code. Add it to the same directory as the notebook.
+
+## Introduction
+
+In this lesson, we will explore the world of **[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)**, inspired by a musical fairy tale by the Russian composer [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). We will use **Reinforcement Learning** to let Peter explore his environment, collect tasty apples and avoid meeting the wolf.
+
+**Reinforcement Learning** (RL) is a learning technique that allows us to learn the optimal behavior of an **agent** in some **environment** by running many experiments. An agent in this environment should have some **goal**, defined by a **reward function**.
+
+## The environment
+
+For simplicity, let's consider Peter's world to be a square board of size `width` x `height`, like this:
+
+
+
+Each cell in this board can either be:
+
+* **ground**, on which Peter and other creatures can walk.
+* **water**, on which you obviously cannot walk.
+* a **tree** or **grass**, a place where you can rest.
+* an **apple**, which represents something Peter would be glad to find in order to feed himself.
+* a **wolf**, which is dangerous and should be avoided.
+
+There is a separate Python module, [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py), which contains the code for working with this environment. Because this code is not important for understanding our concepts, we will import the module and use it to create the sample board (code block 1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+This code should print a picture of the environment similar to the one above.
+
+## Actions and policy
+
+In our example, Peter's goal would be to find an apple, while avoiding the wolf and other obstacles. To do this, he can essentially walk around until he finds an apple.
+
+Therefore, at any position, he can choose one of the following actions: up, down, left and right.
+
+We will define those actions as a dictionary, and map them to pairs of corresponding coordinate changes. For example, moving right (`R`) would correspond to the pair `(1,0)`. (code block 2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
+
+To sum up, the strategy and goal of this scenario are as follows:
+
+- **The strategy** of our agent (Peter) is defined by a so-called **policy**. A policy is a function that returns the action at any given state. In our case, the state of the problem is represented by the board, including the current position of the player.
+
+- **The goal** of reinforcement learning is to eventually learn a good policy that will allow us to solve the problem efficiently. However, as a baseline, let's consider the simplest policy, called a **random walk**.
+
+## Random walk
+
+First, let's solve our problem by implementing a random walk strategy. With a random walk, we will randomly choose the next action from the allowed actions, until we reach the apple (code block 3).
+
+1. Implement the random walk with the code below:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+   The call to `walk` should return the length of the corresponding path, which can vary from one run to another.
+
+1. Run the walk experiment a number of times (say, 100), and print the resulting statistics (code block 4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+   Note that the average length of a path is around 30-40 steps, which is quite a lot, given that the average distance to the nearest apple is around 5-6 steps.
+
+   You can also see what Peter's movement looks like during the random walk:
+
+ 
+
+## Reward function
+
+To make our policy more intelligent, we need to understand which moves are "better" than others. To do this, we need to define our goal.
+
+The goal can be defined in terms of a **reward function**, which will return some score value for each state. The higher the number, the better the reward. (code block 5)
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+An interesting thing about reward functions is that in most cases, *we are only given a substantial reward at the end of the game*. This means that our algorithm should somehow remember the "good" steps that lead to a positive reward at the end, and increase their importance. Similarly, all moves that lead to bad results should be discouraged.
+
+## Q-Learning
+
+The algorithm that we will discuss here is called **Q-Learning**. In this algorithm, the policy is defined by a function (or a data structure) called a **Q-Table**. It records the "goodness" of each of the actions in a given state.
+
+It is called a Q-Table because it is often convenient to represent it as a table, or a multi-dimensional array. Since our board has dimensions `width` x `height`, we can represent the Q-Table using a numpy array with shape `width` x `height` x `len(actions)`: (code block 6)
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=np.float64)*1.0/len(actions)  # np.float was removed in newer NumPy; use np.float64
+```
+
+Notice that we initialize all the values of the Q-Table with an equal value, in our case 0.25. This corresponds to the "random walk" policy, because all moves in each state are equally good. We can pass the Q-Table to the `plot` function in order to visualize the table on the board: `m.plot(Q)`.
+
+
+
+In the center of each cell there is an "arrow" that indicates the preferred direction of movement. Since all directions are equal, a dot is displayed.
+
+Now we need to run the simulation, explore our environment, and learn a better distribution of Q-Table values, which will allow us to find the path to the apple much faster.
+
+## Essence of Q-Learning: Bellman Equation
+
+Once we start moving, each action will have a corresponding reward, i.e. we can theoretically select the next action based on the highest immediate reward. However, in most states, the move will not achieve our goal of reaching the apple, and thus we cannot immediately decide which direction is better.
+
+> Remember that it is not the immediate result that matters, but rather the final result, which we will obtain at the end of the simulation.
+
+In order to account for this delayed reward, we need to use the principles of **[dynamic programming](https://en.wikipedia.org/wiki/Dynamic_programming)**, which allow us to think about our problem recursively.
+
+Suppose we are now at the state *s*, and we want to move to the next state *s'*. By doing so, we will receive the immediate reward *r(s,a)*, defined by the reward function, plus some future reward. If we suppose that our Q-Table correctly reflects the "attractiveness" of each action, then at state *s'* we will choose an action *a'* that corresponds to the maximum value of *Q(s',a')*. Thus, the best possible future reward we could get at state *s* will be defined as max<sub>a'</sub> *Q(s',a')* (the maximum here is computed over all possible actions *a'* at state *s'*).
+
+This gives the **Bellman formula** for calculating the value of the Q-Table at state *s*, given action *a*:
+
+
+
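+> Written out from the definitions above, the update target is: *Q(s,a) = r(s,a) + γ max<sub>a'</sub> Q(s',a')*
+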
+Here γ is the so-called **discount factor** that determines to what extent you should prefer the current reward over the future reward, and vice versa.
+
+## Learning Algorithm
+
+Given the equation above, we can now write pseudo-code for our learning algorithm:
+
+* Initialize Q-Table Q with equal numbers for all states and actions
+* Set learning rate α ← 1
+* Repeat simulation many times
+ 1. Start at random position
+ 1. Repeat
+ 1. Select an action *a* at state *s*
+ 2. Execute action by moving to a new state *s'*
+ 3. If we encounter end-of-game condition, or total reward is too small - exit simulation
+ 4. Compute reward *r* at the new state
+      5. Update the Q-Function according to the Bellman equation: *Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ max<sub>a'</sub>Q(s',a'))*
+ 6. *s* ← *s'*
+ 7. Update the total reward and decrease α.
+
+## Exploit vs. explore
+
+In the algorithm above, we did not specify how exactly we should choose an action at step 2.1. If we are choosing the action randomly, we will randomly **explore** the environment, and we are quite likely to die often as well as explore areas where we would not normally go. An alternative approach would be to **exploit** the Q-Table values that we already know, and thus to choose the best action (with higher Q-Table value) at state *s*. This, however, will prevent us from exploring other states, and it's likely we might not find the optimal solution.
+
+Thus, the best approach is to strike a balance between exploration and exploitation. This can be done by choosing the action at state *s* with probabilities proportional to values in the Q-Table. In the beginning, when Q-Table values are all the same, it would correspond to a random selection, but as we learn more about our environment, we would be more likely to follow the optimal route while allowing the agent to choose the unexplored path once in a while.
+
+## Python implementation
+
+We are now ready to implement the learning algorithm. Before we do that, we also need some function that will convert arbitrary numbers in the Q-Table into a vector of probabilities for corresponding actions.
+
+1. Create a function `probs()`:
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+ ```
+
+   We add a small `eps` to the original vector in order to avoid division by 0 in the initial case, when all components of the vector are identical.
+
+Run the learning algorithm through 5000 experiments, also called **epochs**: (code block 8)
+```python
+lpath = []  # assumed initialization: the original notebook defines this list earlier to track path lengths
+
+for epoch in range(5000):
+
+ # Pick initial point
+ m.random_start()
+
+ # Start travelling
+ n=0
+ cum_reward = 0
+ while True:
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ dpos = actions[a]
+ m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+ r = reward(m)
+ cum_reward += r
+ if r==end_reward or cum_reward < -1000:
+ lpath.append(n)
+ break
+ alpha = np.exp(-n / 10e5)
+ gamma = 0.5
+ ai = action_idx[a]
+ Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+ n+=1
+```
+
+After executing this algorithm, the Q-Table should be updated with values that define the attractiveness of different actions at each step. We can try to visualize the Q-Table by plotting a vector at each cell that points in the desired direction of movement. For simplicity, we draw a small circle instead of an arrow head.
+
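+For example, reusing the `plot` helper shown earlier:
+
+```python
+m.plot(Q)   # draws the learned Q-Table preferences on top of the board
+```
+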
+## Checking the policy
+
+Since the Q-Table lists the "attractiveness" of each action at each state, it is quite easy to use it to define efficient navigation in our world. In the simplest case, we can select the action corresponding to the highest Q-Table value: (code block 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> If you try the code above several times, you may notice that it sometimes "hangs", and you need to press the STOP button in the notebook to interrupt it. This happens because there can be situations in which two states "point" to each other in terms of optimal Q-Value, in which case the agent ends up moving between those states indefinitely.
+
+## 🚀Challenge
+
+> **Task 1:** Modify the `walk` function to limit the maximum length of the path to a certain number of steps (say, 100), and watch the code above return this value from time to time.
+
+> **Task 2:** Modify the `walk` function so that it does not go back to the places it has already visited. This will prevent `walk` from looping; however, the agent can still end up being "trapped" in a location from which it is unable to escape.
+
+## Navigation
+
+A better navigation policy would be the one we used during training, which combines exploitation and exploration. In this policy, we will select each action with a certain probability, proportional to the values in the Q-Table. This strategy may still result in the agent returning to a position it has already explored, but, as you can see from the code below, it results in a very short average path to the desired location (remember that `print_statistics` runs the simulation 100 times): (code block 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+After running this code, you should get a much smaller average path length than before, in the range of 3-6 steps.
+
+## Investigating the learning process
+
+As we have mentioned, the learning process is a balance between exploration and exploitation of gained knowledge about the structure of the problem space. We have seen that the results of learning (the ability to help an agent find a short path to the goal) have improved, but it is also interesting to observe how the average path length behaves during the learning process:
+
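+A minimal sketch to plot this, assuming the `lpath` list populated by the training loop above and `matplotlib` available:
+
+```python
+import matplotlib.pyplot as plt
+
+plt.plot(lpath)             # one entry per epoch: steps taken before the episode ended
+plt.xlabel('epoch')
+plt.ylabel('path length')
+plt.show()
+```
+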
+## Summary of the learnings
+
+- **Average path length increases**. What we see here is that at first, the average path length increases. This is probably due to the fact that when we know nothing about the environment, we are likely to get trapped in bad states, like water or the wolf. As we learn more and start using this knowledge, we can explore the environment for longer, but we still do not know well where the apples are.
+
+- **Path length decreases as we learn more**. Once we learn enough, it becomes easier for the agent to achieve the goal, and the path length starts to decrease. However, we are still open to exploration, so we often diverge from the best path and explore new options, making the path longer than optimal.
+
+- **Length increases abruptly**. What we also observe on this graph is that at some point, the length increased abruptly. This indicates the stochastic nature of the process, and that we can at some point "spoil" the Q-Table coefficients by overwriting them with new values. This should ideally be minimized by decreasing the learning rate (for example, towards the end of training, we only adjust Q-Table values by a small amount).
+
+Overall, it is important to remember that the success and quality of the learning process significantly depend on parameters such as the learning rate, learning rate decay, and discount factor. These are often called **hyperparameters**, to distinguish them from **parameters**, which we optimize during training (for example, the Q-Table coefficients). The process of finding the best hyperparameter values is called **hyperparameter optimization**, and it deserves a separate topic.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## Assignment
+
+[A More Realistic World](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/1-QLearning/assignment.md b/translations/sw/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..5aabd9de6
--- /dev/null
+++ b/translations/sw/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,28 @@
+# A More Realistic World
+
+In our situation, Peter was able to move around almost without getting tired or hungry. In a more realistic world, he has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic by implementing the following rules:
+
+1. By moving from one place to another, Peter loses **energy** and gains some **fatigue**.
+2. Peter can gain more energy by eating apples.
+3. Peter can get rid of fatigue by resting under a tree or on the grass (i.e. walking into a board location with a tree or grass - a green field).
+4. Peter needs to find and kill the wolf.
+5. In order to kill the wolf, Peter needs to have certain levels of energy and fatigue, otherwise he loses the battle.
+
+## Instructions
+
+Use the original [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) notebook as a starting point for your solution.
+
+Modify the reward function above according to the rules of the game, run the reinforcement learning algorithm to learn the best strategy for winning the game, and compare the results of a random walk with your algorithm in terms of the number of games won and lost.
+
+> **Note**: In your new world, the state is more complex, and in addition to the human position it also includes fatigue and energy levels. You may choose to represent the state as a tuple (Board, energy, fatigue), or define a class for the state (you may also want to derive it from `Board`), or even modify the original `Board` class inside [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py).
+
+In your solution, please keep the code responsible for the random walk strategy, and compare the results of your algorithm with the random walk at the end.
+
+> **Note**: You may need to adjust hyperparameters to make it work, especially the number of epochs. Because success in the game (fighting the wolf) is a rare event, you can expect a much longer training time.
+
+## Rubric
+
+| Criteria | Exemplary                                                                                                                                                                                               | Adequate                                                                                                                                                                                    | Needs Improvement                                                                                                                            |
+| -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
+|          | A notebook is presented with the definition of the new world rules, the Q-Learning algorithm and some textual explanations. Q-Learning significantly improves the results compared to the random walk. | A notebook is presented, Q-Learning is implemented and improves results compared to the random walk, but not significantly; or the notebook is poorly documented and the code is not well-structured | Some attempt to redefine the rules of the world is made, but the Q-Learning algorithm does not work, or the reward function is not fully defined |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/sw/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..6eff6923f
--- /dev/null
+++ b/translations/sw/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/sw/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..db7b3b9b4
--- /dev/null
+++ b/translations/sw/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/2-Gym/README.md b/translations/sw/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..c5c9f4b33
--- /dev/null
+++ b/translations/sw/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,324 @@
+## Vigezo vya awali
+
+Katika somo hili, tutatumia maktaba inayoitwa **OpenAI Gym** kusimulia mazingira tofauti. Unaweza kuendesha msimbo wa somo hili kwenye kompyuta yako (mfano kutoka Visual Studio Code), ambapo simulizi itafunguka kwenye dirisha jipya. Unapoendesha msimbo mtandaoni, unaweza kuhitaji kufanya mabadiliko kadhaa kwenye msimbo, kama ilivyoelezwa [hapa](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
+
+## OpenAI Gym
+
+Katika somo lililopita, sheria za mchezo na hali zilitolewa na darasa la `Board` ambalo tulilifafanua wenyewe. Hapa tutatumia mazingira maalum ya **simulizi**, ambayo yatasimulia fizikia ya fimbo inayosawazisha. Mojawapo ya mazingira maarufu ya simulizi kwa mafunzo ya algorithimu za kujifunza kuimarisha inaitwa [Gym](https://gym.openai.com/), ambayo inadumishwa na [OpenAI](https://openai.com/). Kwa kutumia gym hii tunaweza kuunda mazingira tofauti kutoka simulizi ya cartpole hadi michezo ya Atari.
+
+> **Kumbuka**: Unaweza kuona mazingira mengine yanayopatikana kutoka OpenAI Gym [hapa](https://gym.openai.com/envs/#classic_control).
+
+Kwanza, wacha tufunge gym na tuagize maktaba zinazohitajika (msimbo wa block 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## Zoezi - anzisha mazingira ya cartpole
+
+Ili kufanya kazi na tatizo la kusawazisha cartpole, tunahitaji kuanzisha mazingira yanayolingana. Kila mazingira yanaunganishwa na:
+
+- **Nafasi ya uchunguzi** inayofafanua muundo wa taarifa tunazopokea kutoka kwa mazingira. Kwa tatizo la cartpole, tunapokea nafasi ya fimbo, kasi na thamani nyinginezo.
+
+- **Nafasi ya hatua** inayofafanua hatua zinazowezekana. Katika kesi yetu nafasi ya hatua ni ya kidijitali, na inajumuisha hatua mbili - **kushoto** na **kulia**. (msimbo wa block 2)
+
+1. Ili kuanzisha, andika msimbo ufuatao:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+Ili kuona jinsi mazingira yanavyofanya kazi, wacha tuendeshe simulizi fupi kwa hatua 100. Katika kila hatua, tunatoa moja ya hatua zinazochukuliwa - katika simulizi hii tunachagua hatua kwa nasibu kutoka `action_space`.
+
+1. Endesha msimbo hapa chini na uone matokeo.
+
+ ✅ Kumbuka kuwa inapendekezwa kuendesha msimbo huu kwenye usakinishaji wa Python wa ndani! (msimbo wa block 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+ Unapaswa kuona kitu kinachofanana na picha hii:
+
+ 
+
+1. Wakati wa simulizi, tunahitaji kupata uchunguzi ili kuamua jinsi ya kuchukua hatua. Hapa, kazi ya hatua (`step`) inarejesha uchunguzi wa sasa, thamani ya tuzo, na bendera ya kumaliza inayoonyesha kama ina maana kuendelea na simulizi au la: (msimbo wa block 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+ Mwishowe utaona kitu kama hiki kwenye matokeo ya daftari:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+ Vector ya uchunguzi inayorejeshwa katika kila hatua ya simulizi inajumuisha thamani zifuatazo:
+ - Nafasi ya gari
+ - Kasi ya gari
+ - Pembe ya fimbo
+ - Kiwango cha mzunguko wa fimbo
+
+1. Pata thamani ndogo na kubwa ya namba hizo: (msimbo wa block 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+ Unaweza pia kugundua kuwa thamani ya tuzo katika kila hatua ya simulizi ni 1 kila wakati. Hii ni kwa sababu lengo letu ni kuishi kwa muda mrefu iwezekanavyo, yaani kuweka fimbo katika nafasi ya wima kwa muda mrefu zaidi.
+
+ ✅ Kwa kweli, simulizi ya CartPole inachukuliwa kuwa imetatuliwa ikiwa tutafanikiwa kupata wastani wa tuzo ya 195 katika majaribio 100 mfululizo.
+
+## Ugawanyaji wa hali
+
+Katika Q-Learning, tunahitaji kujenga Jedwali la Q linalofafanua nini cha kufanya katika kila hali. Ili kufanya hivyo, tunahitaji hali kuwa **ya kidijitali**, kwa usahihi zaidi, inapaswa kuwa na idadi ndogo ya thamani za kidijitali. Kwa hivyo, tunahitaji kwa namna fulani **kugawanya** uchunguzi wetu, kuziunganisha kwenye seti ndogo ya hali.
+
+Kuna njia kadhaa tunaweza kufanya hivi:
+
+- **Gawa katika sehemu**. Ikiwa tunajua kipindi cha thamani fulani, tunaweza kugawa kipindi hiki katika idadi ya **sehemu**, na kisha kubadilisha thamani kwa namba ya sehemu ambayo inahusiana nayo. Hii inaweza kufanywa kwa kutumia njia ya numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html). Katika kesi hii, tutajua kwa usahihi ukubwa wa hali, kwa sababu itategemea idadi ya sehemu tunazochagua kwa ajili ya digitalization.
+
+✅ Tunaweza kutumia usawazishaji wa mstari kuleta thamani kwa kipindi fulani (sema, kutoka -20 hadi 20), na kisha kubadilisha namba kuwa namba kamili (integers) kwa kuzungusha. Hii inatupa udhibiti mdogo wa ukubwa wa hali, hasa ikiwa hatujui mipaka halisi ya thamani za ingizo. Kwa mfano, katika kesi yetu thamani 2 kati ya 4 hazina mipaka ya juu/chini, jambo ambalo linaweza kusababisha idadi isiyo na kikomo ya hali.
+
+Katika mfano wetu, tutatumia mbinu ya pili. Kama utakavyogundua baadaye, licha ya mipaka isiyoeleweka ya juu/chini, thamani hizo mara chache huchukua thamani nje ya vipindi fulani, hivyo hali hizo zenye thamani za juu zitakuwa nadra sana.
+
+1. Hapa kuna kazi itakayochukua uchunguzi kutoka kwa mfano wetu na kutoa tuple ya thamani 4 za namba kamili: (msimbo wa block 6)
+
+    ```python
+    def discretize(x):
+        # tumia int ya Python badala ya np.int iliyoondolewa kwenye matoleo mapya ya NumPy
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+    ```
+
+1. Wacha pia tuchunguze njia nyingine ya ugawanyaji kwa kutumia sehemu: (msimbo wa block 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. Wacha sasa tuendeshe simulizi fupi na kuchunguza thamani hizo za mazingira ya kidijitali. Jisikie huru kujaribu zote mbili `discretize` na `discretize_bins` na uone kama kuna tofauti.
+
+ ✅ discretize_bins inarejesha namba ya sehemu, ambayo ni ya msingi 0. Kwa hivyo kwa thamani za ingizo karibu na 0 inarejesha namba kutoka katikati ya kipindi (10). Katika discretize, hatukujali kuhusu wigo wa thamani za matokeo, tukiruhusu kuwa hasi, hivyo thamani za hali hazijahamishwa, na 0 inahusiana na 0. (msimbo wa block 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+    ✅ Ondoa alama ya maoni kwenye mstari unaoanza na env.render ikiwa unataka kuona jinsi mazingira yanavyotekelezwa. Vinginevyo unaweza kuutekeleza kwa siri, ambayo ni haraka zaidi. Tutatumia utekelezaji huu wa "kisiri" wakati wa mchakato wetu wa Q-Learning.
+
+## Muundo wa Jedwali la Q
+
+Katika somo letu lililopita, hali ilikuwa jozi rahisi ya namba kutoka 0 hadi 8, na hivyo ilikuwa rahisi kuwakilisha Jedwali la Q kwa tensor ya numpy yenye umbo la 8x8x2. Ikiwa tunatumia ugawanyaji wa sehemu, ukubwa wa vector yetu ya hali pia unajulikana, hivyo tunaweza kutumia mbinu hiyo hiyo na kuwakilisha hali kwa safu yenye umbo la 20x20x10x10x2 (hapa 2 ni kipimo cha nafasi ya hatua, na vipimo vya kwanza vinahusiana na idadi ya sehemu tulizochagua kutumia kwa kila moja ya vigezo katika nafasi ya uchunguzi).
+
+Hata hivyo, wakati mwingine vipimo halisi vya nafasi ya uchunguzi havijulikani. Katika kesi ya kazi ya `discretize`, hatuwezi kuwa na uhakika kwamba hali yetu inakaa ndani ya mipaka fulani, kwa sababu baadhi ya thamani za awali hazina mipaka. Kwa hivyo, tutatumia mbinu tofauti kidogo na kuwakilisha Jedwali la Q kwa kamusi.
+
+1. Tumia jozi *(state,action)* kama ufunguo wa kamusi, na thamani itahusiana na thamani ya ingizo la Jedwali la Q. (msimbo wa block 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+ Hapa pia tunafafanua kazi `qvalues()`, inayorejesha orodha ya thamani za Jedwali la Q kwa hali fulani inayohusiana na hatua zote zinazowezekana. Ikiwa ingizo halipo kwenye Jedwali la Q, tutarejesha 0 kama chaguo-msingi.
+
+## Wacha tuanze Q-Learning
+
+Sasa tuko tayari kumfundisha Peter kusawazisha!
+
+1. Kwanza, wacha tuchague baadhi ya vigezo vya msingi: (msimbo wa block 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+    Hapa, `alpha` ni **kiwango cha kujifunza** kinachofafanua ni kwa kiwango gani tunapaswa kurekebisha thamani za sasa za Jedwali la Q katika kila hatua. Katika somo lililopita tulianza na 1, kisha tukapunguza `alpha` hadi thamani ndogo wakati wa mafunzo. Katika mfano huu tutaiweka bila kubadilika kwa ajili ya urahisi tu, na unaweza kujaribu kurekebisha thamani za `alpha` baadaye.
+
+    `gamma` ni **kipengele cha punguzo** kinachoonyesha ni kwa kiwango gani tunapaswa kuzipa kipaumbele tuzo za baadaye kuliko tuzo za sasa.
+
+    `epsilon` ni **kipengele cha uchunguzi/matumizi** kinachoamua kama tunapaswa kupendelea uchunguzi au matumizi. Katika algorithimu yetu, kwa asilimia `epsilon` ya matukio tutachagua hatua inayofuata kulingana na thamani za Jedwali la Q, na katika matukio yaliyobaki tutatekeleza hatua ya nasibu. Hii itaturuhusu kuchunguza maeneo ya nafasi ya utafutaji ambayo hatujawahi kuyaona.
+
+    ✅ Kwa upande wa kusawazisha - kuchagua hatua ya nasibu (uchunguzi) ni kama pigo la nasibu katika mwelekeo usio sahihi, na fimbo italazimika kujifunza jinsi ya kurejesha usawa kutoka kwa "makosa" hayo
+
+### Kuboresha algorithimu
+
+Tunaweza pia kufanya maboresho mawili kwenye algorithimu yetu kutoka somo lililopita:
+
+- **Kuhesabu wastani wa tuzo ya jumla** juu ya idadi fulani ya simulizi. Tutachapisha maendeleo kila kurudia 5000, na tutachukua wastani wa tuzo ya jumla katika kipindi hicho. Inamaanisha kwamba tukipata zaidi ya pointi 195, tunaweza kuchukulia tatizo limetatuliwa, kwa ubora wa juu hata zaidi ya unaohitajika.
+
+- **Kuhesabu matokeo ya juu zaidi ya wastani wa tuzo ya jumla**, `Qmax`, na tutahifadhi Jedwali la Q linalohusiana na matokeo hayo. Unapoendesha mafunzo utagundua kuwa wakati mwingine matokeo ya wastani ya jumla yanaanza kushuka, na tunataka kuhifadhi thamani za Jedwali la Q zinazohusiana na modeli bora iliyoonekana wakati wa mafunzo.
+
+1. Kusanya tuzo zote za jumla za kila simulizi kwenye vector ya `rewards` kwa ajili ya kuchora baadaye. (msimbo wa block 11)
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+
+ Qmax = 0
+ cum_rewards = []
+ rewards = []
+ for epoch in range(100000):
+ obs = env.reset()
+ done = False
+ cum_reward=0
+ # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # matumizi (exploitation): chagua hatua kulingana na uwezekano wa Jedwali la Q
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # uchunguzi (exploration): chagua hatua kwa nasibu
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            # sasisho la Q-Learning (fomula ya Bellman)
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # chapisha maendeleo kila kurudia 5000 na uhifadhi Jedwali la Q bora
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+ ```
+
+Unachoweza kugundua kutoka kwa matokeo hayo:
+
+- **Karibu na lengo letu**. Tuko karibu sana na kufikia lengo la kupata tuzo ya 195 kwa jumla katika mfululizo wa majaribio 100+, au tunaweza kuwa tumelifanikisha! Hata kama tunapata namba ndogo, bado hatujui, kwa sababu tunachukua wastani wa majaribio 5000, na ni majaribio 100 tu yanahitajika katika vigezo rasmi.
+
+- **Tuzo inaanza kushuka**. Wakati mwingine tuzo inaanza kushuka, jambo linalomaanisha kuwa tunaweza "kuharibu" thamani zilizojifunza tayari kwenye Jedwali la Q kwa zile zinazofanya hali kuwa mbaya zaidi.
+
+Uchunguzi huu unaonekana wazi zaidi ikiwa tutachora maendeleo ya mafunzo.
+
+## Kuchora Maendeleo ya Mafunzo
+
+Wakati wa mafunzo, tumekusanya thamani ya tuzo ya jumla katika kila moja ya kurudia kwenye vector ya `rewards`. Hivi ndivyo inavyoonekana tunapochora dhidi ya namba ya kurudia:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+Kutoka kwenye grafu hii haiwezekani kusema chochote, kwa sababu kutokana na asili ya nasibu ya mchakato wa mafunzo, urefu wa vikao vya mafunzo hutofautiana sana. Ili kuelewa zaidi grafu hii, tunaweza kuhesabu **wastani unaosonga** (running average) juu ya mfululizo wa majaribio, tuseme 100. Hii inaweza kufanywa kwa urahisi kwa kutumia `np.convolve`: (msimbo wa block 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## Kurekebisha vigezo vya msingi
+
+Ili kufanya mafunzo kuwa thabiti zaidi, ina maana kurekebisha baadhi ya vigezo vyetu vya msingi wakati wa mafunzo. Hasa:
+
+- **Kwa kiwango cha kujifunza**, `alpha`, tunaweza kuanza na thamani karibu na 1, kisha kuendelea kukipunguza kigezo hiki. Kadri muda unavyopita, tutakuwa tunapata thamani nzuri za uwezekano kwenye Jedwali la Q, na hivyo tunapaswa kuzirekebisha kidogo tu, badala ya kuzifuta kabisa kwa thamani mpya.
+
+- **Kuongeza epsilon**. Tunaweza kutaka kuongeza `epsilon` taratibu, ili kuchunguza kidogo na kutumia (exploit) zaidi. Huenda ikawa na maana kuanza na thamani ndogo ya `epsilon` na kuhamia karibu na 1. Mchoro mdogo hapa chini unaonyesha njia moja inayowezekana ya kufanya hivyo.
+
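+Hapa kuna mchoro mdogo wa kudhaniwa wa jinsi marekebisho hayo yanavyoweza kuingizwa kwenye kitanzi cha mafunzo; thamani za mwanzo, viwango vya mabadiliko na mipaka vilivyotumika hapa ni mifano tu, si sehemu ya somo la asili:
+
+```python
+# Mchoro wa kudhaniwa: ratiba rahisi za alpha na epsilon ndani ya kitanzi cha mafunzo
+alpha = 1.0        # anza juu, punguza taratibu
+epsilon = 0.3      # anza chini, ongeza taratibu
+
+for epoch in range(100000):
+    # ... endesha simulizi moja na sasisho la Q-Learning kama hapo juu ...
+    alpha = max(0.05, alpha * 0.9999)               # usishuke chini ya 0.05
+    epsilon = min(0.95, epsilon + 0.65 / 100000)    # karibia 0.95 mwishoni
+```
+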
+> **Kazi 1**: Cheza na thamani za vigezo vya msingi na uone kama unaweza kufikia tuzo ya juu zaidi. Je, unapata zaidi ya 195?
+
+> **Kazi 2**: Ili kutatua tatizo rasmi, unahitaji kupata tuzo ya wastani ya 195 katika majaribio 100 mfululizo. Pima hilo wakati wa mafunzo na hakikisha kuwa umetatua tatizo rasmi!
+
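+Kabla ya kuendelea, hapa kuna mchoro mdogo wa njia moja inayowezekana ya kukagua kigezo hicho rasmi baada ya mafunzo; unadhani kuwa `rewards` ina tuzo ya jumla ya kila jaribio, na unatumia kazi ya `running_average` iliyofafanuliwa hapo juu:
+
+```python
+ra = running_average(rewards, 100)   # wastani wa kila dirisha la majaribio 100 mfululizo
+if ra.size > 0 and ra.max() >= 195:
+    first = int(np.argmax(ra >= 195)) + 100
+    print(f"Tatizo limetatuliwa rasmi kufikia jaribio la {first}")
+else:
+    print("Kigezo rasmi bado hakijafikiwa - endelea kufundisha")
+```
+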
+## Kuona matokeo kwa vitendo
+
+Itakuwa ya kuvutia kuona jinsi mfano uliojifunza unavyofanya kazi. Wacha tuendeshe simulizi na kufuata mkakati wa kuchagua hatua kama wakati wa mafunzo, tukichagua kulingana na usambazaji wa uwezekano kwenye Jedwali la Q: (msimbo wa block 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+Unapaswa kuona kitu kama hiki:
+
+
+
+---
+
+## 🚀Changamoto
+
+> **Kazi 3**: Hapa tulikuwa tunatumia nakala ya mwisho ya Jedwali la Q, ambayo inaweza isiwe bora zaidi. Kumbuka kuwa tumehifadhi Jedwali la Q linalofanya kazi vizuri zaidi kwenye kigezo cha `Qbest`! Jaribu mfano huo huo ukitumia Jedwali hilo bora kwa kunakili `Qbest` kwenda `Q`, na uone kama utaona tofauti.
+
+> **Kazi 4**: Hapa hatukuwa tukichagua hatua bora katika kila hatua, bali tulikuwa tukichagua kwa kufuata usambazaji wa uwezekano unaohusiana. Je, ingekuwa na maana zaidi kuchagua kila wakati hatua bora, yenye thamani ya juu zaidi kwenye Jedwali la Q? Hii inaweza kufanywa kwa kutumia `np.argmax` ili kupata namba ya hatua inayohusiana na thamani ya juu zaidi ya Jedwali la Q. Tekeleza mkakati huu na uone kama unaboresha usawazishaji (angalia mchoro hapa chini).
+
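+Kwa Kazi 4, hapa kuna mchoro mdogo wa mkakati huo wa "greedy", ukitumia kazi zile zile za `discretize` na `qvalues` zilizofafanuliwa hapo juu:
+
+```python
+obs = env.reset()
+done = False
+while not done:
+    s = discretize(obs)
+    a = int(np.argmax(qvalues(s)))   # chagua kila wakati hatua yenye thamani kubwa zaidi
+    obs, _, done, _ = env.step(a)
+env.close()
+```
+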
+## [Jaribio la baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## Kazi
+[Fundisha Gari la Mlima](assignment.md)
+
+## Hitimisho
+
+Sasa tumejifunza jinsi ya kufundisha mawakala kufikia matokeo mazuri kwa kuwapa tuzo inayoelezea hali inayotakiwa ya mchezo, na kwa kuwapa fursa ya kuchunguza nafasi ya utafutaji kwa busara. Tumefanikiwa kutumia algorithimu ya Q-Learning katika hali za mazingira ya kidijitali na endelevu, lakini na hatua za kidijitali.
+
+Ni muhimu pia kujifunza hali ambapo nafasi ya hatua pia ni endelevu, na wakati nafasi ya uchunguzi ni ngumu zaidi, kama picha kutoka skrini ya mchezo wa Atari. Katika matatizo hayo mara nyingi tunahitaji kutumia mbinu za kujifunza kwa mashine zenye nguvu zaidi, kama vile mitandao ya neva, ili kufikia matokeo mazuri. Mada hizo za juu zaidi ni somo la kozi yetu ya AI ya juu zaidi inayokuja.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo rasmi. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/2-Gym/assignment.md b/translations/sw/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..acd88813a
--- /dev/null
+++ b/translations/sw/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# Funza Gari la Mlima
+
+[OpenAI Gym](http://gym.openai.com) imeundwa kwa namna ambayo mazingira yote yanatoa API sawa - yaani njia sawa za `reset`, `step` na `render`, na dhana sawa za **nafasi ya hatua** na **nafasi ya uchunguzi**. Kwa hivyo inapaswa kuwa rahisi kubadilisha algoriti za kujifunza kwa kuimarisha kwa mazingira tofauti kwa mabadiliko madogo ya msimbo.
+
+## Mazingira ya Gari la Mlima
+
+[Mazingira ya Gari la Mlima](https://gym.openai.com/envs/MountainCar-v0/) yana gari lililokwama kwenye bonde.
+Lengo ni kutoka kwenye bonde na kushika bendera, kwa kufanya moja ya hatua zifuatazo katika kila hatua:
+
+| Thamani | Maana |
+|---|---|
+| 0 | Kuongeza kasi kwenda kushoto |
+| 1 | Kutokuongeza kasi |
+| 2 | Kuongeza kasi kwenda kulia |
+
+Ujanja mkuu wa tatizo hili ni kwamba injini ya gari haina nguvu ya kutosha kupanda mlima kwa mzunguko mmoja. Kwa hivyo, njia pekee ya kufanikiwa ni kuendesha mbele na nyuma ili kujenga mwendo.
+
+Nafasi ya uchunguzi ina thamani mbili tu:
+
+| Nambari | Uchunguzi | Min | Max |
+|-----|--------------|-----|-----|
+| 0 | Nafasi ya Gari | -1.2| 0.6 |
+| 1 | Kasi ya Gari | -0.07 | 0.07 |
+
+Mfumo wa zawadi kwa gari la mlima ni mgumu kidogo:
+
+ * Zawadi ya 0 inatolewa ikiwa wakala atafikia bendera (nafasi = 0.5) juu ya mlima.
+ * Zawadi ya -1 inatolewa ikiwa nafasi ya wakala ni chini ya 0.5.
+
+Kipindi kinamalizika ikiwa nafasi ya gari ni zaidi ya 0.5, au urefu wa kipindi ni zaidi ya hatua 200.
+
+## Maelekezo
+
+Badilisha algoriti yetu ya kujifunza kwa kuimarisha ili kutatua tatizo la gari la mlima. Anza na msimbo uliopo katika [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb), badilisha mazingira mapya, badilisha kazi za kugawanya hali, na jaribu kufanya algoriti iliyopo kufunza kwa mabadiliko madogo ya msimbo. Boresha matokeo kwa kurekebisha vigezo vya hyper.
+
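+Kama mahali pa kuanzia, hapa kuna mchoro mdogo wa kudhaniwa wa kuanzisha mazingira mapya na kazi mbadala ya kugawanya hali kwa thamani mbili za uchunguzi; mizani iliyotumika hapa ni makadirio tu kutoka mipaka ya jedwali hapo juu:
+
+```python
+import gym
+import numpy as np
+
+env = gym.make("MountainCar-v0")
+actions = (0, 1, 2)   # kumbuka: sasa kuna hatua tatu, si mbili
+
+# ugawanyaji wa hali kwa thamani mbili: nafasi [-1.2, 0.6] na kasi [-0.07, 0.07]
+def discretize(x):
+    return tuple((x / np.array([0.1, 0.01])).astype(int))
+
+obs = env.reset()
+print(discretize(obs))
+```
+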
+> **Note**: Marekebisho ya vigezo vya hyper yanaweza kuhitajika ili kufanya algoriti kufikia lengo.
+
+## Rubric
+
+| Kigezo | Bora | Kutosha | Inahitaji Kuboresha |
+| -------- | --------- | -------- | ----------------- |
+| | Algoriti ya Q-Learning imebadilishwa kwa mafanikio kutoka mfano wa CartPole, kwa mabadiliko madogo ya msimbo, ambayo ina uwezo wa kutatua tatizo la kushika bendera chini ya hatua 200. | Algoriti mpya ya Q-Learning imechukuliwa kutoka mtandaoni, lakini imeandikwa vizuri; au algoriti iliyopo imebadilishwa, lakini haifiki matokeo yanayotarajiwa | Mwanafunzi hakuweza kubadilisha algoriti yoyote kwa mafanikio, lakini amechukua hatua kubwa kuelekea suluhisho (ameunda kazi za kugawanya hali, muundo wa data wa Q-Table, n.k.) |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/sw/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..95a83ca2a
--- /dev/null
+++ b/translations/sw/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au dosari. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/2-Gym/solution/R/README.md b/translations/sw/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..09e4814a7
--- /dev/null
+++ b/translations/sw/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/8-Reinforcement/README.md b/translations/sw/8-Reinforcement/README.md
new file mode 100644
index 000000000..52be30ac8
--- /dev/null
+++ b/translations/sw/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# Utangulizi wa kujifunza kwa kuimarisha
+
+Kujifunza kwa kuimarisha, RL, ni mojawapo ya mifumo ya msingi ya kujifunza kwa mashine, sambamba na kujifunza kwa usimamizi na kujifunza bila usimamizi. RL inahusu maamuzi: kutoa maamuzi sahihi au angalau kujifunza kutoka kwao.
+
+Fikiria una mazingira yaliyosimuliwa kama soko la hisa. Nini kitatokea ikiwa utaweka kanuni fulani? Je, ina athari nzuri au mbaya? Ikiwa kitu kibaya kitatokea, unahitaji kuchukua hii _kuimarisha hasi_, kujifunza kutoka kwayo, na kubadilisha mwelekeo. Ikiwa ni matokeo chanya, unahitaji kujenga juu ya hiyo _kuimarisha chanya_.
+
+
+
+> Peter na marafiki zake wanahitaji kukimbia mbwa mwitu mwenye njaa! Picha na [Jen Looper](https://twitter.com/jenlooper)
+
+## Mada ya Kanda: Peter na Mbwa Mwitu (Urusi)
+
+[Peter na Mbwa Mwitu](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) ni hadithi ya muziki iliyoandikwa na mtunzi wa Kirusi [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). Ni hadithi kuhusu kijana shujaa Peter, ambaye kwa ujasiri anatoka nyumbani kwake kwenda msituni kumfuata mbwa mwitu. Katika sehemu hii, tutafundisha algorithimu za kujifunza kwa mashine ambazo zitamsaidia Peter:
+
+- **Kuchunguza** eneo la karibu na kujenga ramani bora ya urambazaji
+- **Kujifunza** jinsi ya kutumia skateboard na kusawazisha juu yake, ili kuzunguka kwa haraka zaidi.
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 Bofya picha hapo juu kusikiliza Peter na Mbwa Mwitu na Prokofiev
+
+## Kujifunza kwa kuimarisha
+
+Katika sehemu zilizopita, umeona mifano miwili ya matatizo ya kujifunza kwa mashine:
+
+- **Kwa usimamizi**, ambapo tuna seti za data zinazopendekeza suluhisho za sampuli kwa tatizo tunalotaka kutatua. [Uainishaji](../4-Classification/README.md) na [urekebishaji](../2-Regression/README.md) ni kazi za kujifunza kwa usimamizi.
+- **Bila usimamizi**, ambapo hatuna data ya mafunzo yenye lebo. Mfano mkuu wa kujifunza bila usimamizi ni [Upangaji](../5-Clustering/README.md).
+
+Katika sehemu hii, tutakutambulisha kwa aina mpya ya tatizo la kujifunza ambalo halihitaji data ya mafunzo yenye lebo. Kuna aina kadhaa za matatizo kama hayo:
+
+- **[Kujifunza kwa nusu-usimamizi](https://wikipedia.org/wiki/Semi-supervised_learning)**, ambapo tuna data nyingi zisizo na lebo ambazo zinaweza kutumika kufundisha awali mfano.
+- **[Kujifunza kwa kuimarisha](https://wikipedia.org/wiki/Reinforcement_learning)**, ambapo wakala anajifunza jinsi ya kuenenda kwa kufanya majaribio katika mazingira yaliyosimuliwa.
+
+### Mfano - mchezo wa kompyuta
+
+Tuseme unataka kufundisha kompyuta kucheza mchezo, kama vile chess, au [Super Mario](https://wikipedia.org/wiki/Super_Mario). Ili kompyuta icheze mchezo, tunahitaji itabiri ni hatua gani ifanye katika kila hali ya mchezo. Ingawa hii inaweza kuonekana kama tatizo la uainishaji, sio - kwa sababu hatuna seti ya data na hali na hatua zinazolingana. Ingawa tunaweza kuwa na data kama vile mechi zilizopo za chess au kurekodi kwa wachezaji wakicheza Super Mario, kuna uwezekano kwamba data hiyo haitatosheleza idadi kubwa ya hali zinazowezekana.
+
+Badala ya kutafuta data iliyopo ya mchezo, **Kujifunza kwa Kuimarisha** (RL) kunategemea wazo la *kuifanya kompyuta icheze* mara nyingi na kuchunguza matokeo. Hivyo, ili kutumia Kujifunza kwa Kuimarisha, tunahitaji vitu viwili:
+
+- **Mazingira** na **simulator** ambayo huturuhusu kucheza mchezo mara nyingi. Simulator hii ingeweka sheria zote za mchezo pamoja na hali na hatua zinazowezekana.
+
+- **Kazi ya tuzo**, ambayo ingetueleza jinsi tulivyofanya vizuri wakati wa kila hatua au mchezo.
+
+Tofauti kuu kati ya aina nyingine za kujifunza kwa mashine na RL ni kwamba katika RL kwa kawaida hatujui kama tunashinda au kushindwa hadi tunapomaliza mchezo. Hivyo, hatuwezi kusema kama hatua fulani pekee ni nzuri au sio - tunapokea tuzo mwishoni mwa mchezo. Na lengo letu ni kubuni algorithimu ambazo zitatufanya tufundishe mfano chini ya hali zisizo na uhakika. Tutajifunza kuhusu algorithimu moja ya RL inayoitwa **Q-learning**.
+
+## Masomo
+
+1. [Utangulizi wa kujifunza kwa kuimarisha na Q-Learning](1-QLearning/README.md)
+2. [Kutumia mazingira ya simulation ya gym](2-Gym/README.md)
+
+## Shukrani
+
+"Utangulizi wa Kujifunza kwa Kuimarisha" uliandikwa kwa ♥️ na [Dmitry Soshnikov](http://soshnikov.com)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati ya asili katika lugha yake ya awali inapaswa kuzingatiwa kama chanzo sahihi. Kwa taarifa muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa maelewano mabaya au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/9-Real-World/1-Applications/README.md b/translations/sw/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..6b0edb104
--- /dev/null
+++ b/translations/sw/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# Postscript: Machine learning katika ulimwengu wa kweli
+
+
+> Sketchnote na [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+Katika mtaala huu, umejifunza njia nyingi za kuandaa data kwa ajili ya mafunzo na kuunda mifano ya machine learning. Umejenga mfululizo wa mifano ya regression, clustering, classification, natural language processing, na time series. Hongera! Sasa, unaweza kuwa unajiuliza yote haya ni kwa ajili ya nini... ni matumizi gani ya ulimwengu wa kweli kwa mifano hii?
+
+Wakati AI imevutia sana sekta nyingi, ambayo mara nyingi hutumia deep learning, bado kuna matumizi muhimu kwa mifano ya machine learning ya kawaida. Unaweza hata kutumia baadhi ya matumizi haya leo! Katika somo hili, utaangalia jinsi sekta nane tofauti na maeneo ya mada yanavyotumia aina hizi za mifano ili kufanya programu zao kuwa bora zaidi, za kuaminika, za akili, na zenye thamani kwa watumiaji.
+
+## [Jaribio la awali la somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 Fedha
+
+Sekta ya fedha inatoa fursa nyingi kwa machine learning. Shida nyingi katika eneo hili zinaweza kuigwa na kutatuliwa kwa kutumia ML.
+
+### Kugundua udanganyifu wa kadi za mkopo
+
+Tulijifunza kuhusu [k-means clustering](../../5-Clustering/2-K-Means/README.md) mapema katika kozi, lakini inaweza kutumika vipi kutatua matatizo yanayohusiana na udanganyifu wa kadi za mkopo?
+
+K-means clustering inasaidia wakati wa mbinu ya kugundua udanganyifu wa kadi za mkopo inayoitwa **outlier detection**. Outliers, au mabadiliko katika uchunguzi kuhusu seti ya data, zinaweza kutuambia ikiwa kadi ya mkopo inatumiwa kwa kawaida au ikiwa kuna kitu kisicho cha kawaida kinaendelea. Kama inavyoonyeshwa katika karatasi iliyounganishwa hapa chini, unaweza kupanga data za kadi za mkopo kwa kutumia algorithimu ya k-means clustering na kupeana kila muamala kwenye kundi kulingana na jinsi inavyoonekana kuwa outlier. Kisha, unaweza kutathmini makundi yenye hatari zaidi kwa miamala ya udanganyifu dhidi ya halali.
+[Reference](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
+
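+Kwa mfano, hapa kuna mchoro mdogo wa kudhaniwa wa wazo hilo kwa scikit-learn; data ni ya mfano tu, na katika mazoezi ungetumia sifa halisi za miamala:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# safu: [kiasi cha muamala, umbali kutoka eneo la kawaida la mteja]
+X = np.array([[20, 0.1], [25, 0.2], [22, 0.15], [18, 0.05], [950, 3.5]])
+km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
+print(km.labels_)   # muamala wa mwisho unaangukia kundi lake dogo - mgombea wa udanganyifu
+```
+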
+### Usimamizi wa utajiri
+
+Katika usimamizi wa utajiri, mtu binafsi au kampuni hushughulikia uwekezaji kwa niaba ya wateja wao. Kazi yao ni kudumisha na kukuza utajiri kwa muda mrefu, kwa hivyo ni muhimu kuchagua uwekezaji unaofanya vizuri.
+
+Njia moja ya kutathmini jinsi uwekezaji fulani unavyofanya kazi ni kupitia regression ya takwimu. [Linear regression](../../2-Regression/1-Tools/README.md) ni chombo muhimu kwa kuelewa jinsi mfuko unavyofanya kazi kulingana na baadhi ya alama. Tunaweza pia kuamua ikiwa matokeo ya regression ni muhimu kistatistiki, au jinsi yanavyoweza kuathiri uwekezaji wa mteja. Unaweza hata kupanua uchambuzi wako zaidi kwa kutumia multiple regression, ambapo sababu za hatari za ziada zinaweza kuzingatiwa. Kwa mfano wa jinsi hii ingefanya kazi kwa mfuko maalum, angalia karatasi hapa chini kuhusu kutathmini utendaji wa mfuko kwa kutumia regression.
+[Reference](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 Elimu
+
+Sekta ya elimu pia ni eneo la kuvutia sana ambapo ML inaweza kutumika. Kuna matatizo ya kuvutia ya kushughulikia kama vile kugundua udanganyifu kwenye mitihani au insha au kudhibiti upendeleo, wa makusudi au la, katika mchakato wa kurekebisha.
+
+### Kutabiri tabia ya mwanafunzi
+
+[Coursera](https://coursera.com), mtoa kozi za mtandaoni, ana blogi nzuri ya teknolojia ambapo wanajadili maamuzi mengi ya uhandisi. Katika utafiti huu wa kesi, walipanga mstari wa regression kujaribu kuchunguza uhusiano wowote kati ya rating ya chini ya NPS (Net Promoter Score) na uhifadhi au kuacha kozi.
+[Reference](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### Kupunguza upendeleo
+
+[Grammarly](https://grammarly.com), msaidizi wa uandishi unaokagua makosa ya tahajia na sarufi, hutumia mifumo ya [natural language processing](../../6-NLP/README.md) katika bidhaa zake. Walichapisha utafiti wa kesi wa kuvutia katika blogi yao ya teknolojia kuhusu jinsi walivyoshughulikia upendeleo wa kijinsia katika machine learning, ambayo ulijifunza katika [somo letu la utangulizi la haki](../../1-Introduction/3-fairness/README.md).
+[Reference](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 Rejareja
+
+Sekta ya rejareja inaweza kunufaika sana na matumizi ya ML, kutoka kuunda safari bora ya mteja hadi kusimamia hesabu kwa njia bora.
+
+### Kubinafsisha safari ya mteja
+
+Katika Wayfair, kampuni inayouza bidhaa za nyumbani kama samani, kusaidia wateja kupata bidhaa sahihi kwa ladha na mahitaji yao ni muhimu. Katika makala hii, wahandisi kutoka kampuni hiyo wanaelezea jinsi wanavyotumia ML na NLP "kuibua matokeo sahihi kwa wateja". Hasa, Injini yao ya Query Intent imejengwa kutumia uchimbaji wa entiti, mafunzo ya classifier, uchimbaji wa mali na maoni, na kuweka alama za hisia kwenye hakiki za wateja. Hii ni matumizi ya kawaida ya jinsi NLP inavyofanya kazi katika rejareja mtandaoni.
+[Reference](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### Usimamizi wa hesabu
+
+Kampuni za ubunifu na zenye ujasiri kama [StitchFix](https://stitchfix.com), huduma ya sanduku inayotuma nguo kwa watumiaji, hutegemea sana ML kwa mapendekezo na usimamizi wa hesabu. Timu zao za mitindo hufanya kazi pamoja na timu zao za biashara, kwa kweli: "mmoja wa wanasayansi wetu wa data alicheza na algorithimu ya kijeni na kuitekeleza kwa mavazi kutabiri ni kipande gani cha mavazi kitakuwa na mafanikio ambacho hakipo leo. Tulileta hiyo kwa timu ya biashara na sasa wanaweza kuitumia kama chombo."
+[Reference](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 Huduma za Afya
+
+Sekta ya huduma za afya inaweza kutumia ML kuboresha kazi za utafiti na pia matatizo ya kiutendaji kama kurudisha wagonjwa hospitalini au kuzuia magonjwa kuenea.
+
+### Usimamizi wa majaribio ya kliniki
+
+Sumu katika majaribio ya kliniki ni wasiwasi mkubwa kwa watengenezaji wa dawa. Kiasi gani cha sumu kinavumilika? Katika utafiti huu, kuchambua mbinu mbalimbali za majaribio ya kliniki kulisababisha maendeleo ya mbinu mpya ya kutabiri uwezekano wa matokeo ya majaribio ya kliniki. Hasa, waliweza kutumia random forest kutoa [classifier](../../4-Classification/README.md) inayoweza kutofautisha kati ya vikundi vya dawa.
+[Reference](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### Usimamizi wa kurudisha wagonjwa hospitalini
+
+Huduma za hospitali ni ghali, hasa wakati wagonjwa wanahitaji kurudishwa. Karatasi hii inajadili kampuni inayotumia ML kutabiri uwezekano wa kurudishwa kwa kutumia algorithimu za [clustering](../../5-Clustering/README.md). Makundi haya husaidia wachambuzi "kugundua vikundi vya kurudishwa ambavyo vinaweza kushiriki sababu ya kawaida".
+[Reference](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### Usimamizi wa magonjwa
+
+Janga la hivi karibuni limeweka mwanga mkali juu ya njia ambazo machine learning inaweza kusaidia kuzuia kuenea kwa magonjwa. Katika makala hii, utatambua matumizi ya ARIMA, logistic curves, linear regression, na SARIMA. "Kazi hii ni jaribio la kuhesabu kiwango cha kuenea kwa virusi hivi na hivyo kutabiri vifo, kupona, na kesi zilizothibitishwa, ili iweze kutusaidia kujiandaa vizuri na kuishi."
+[Reference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 Ikolojia na Teknolojia ya Kijani
+
+Asili na ikolojia inajumuisha mifumo mingi nyeti ambapo mwingiliano kati ya wanyama na asili unakuja mbele. Ni muhimu kuweza kupima mifumo hii kwa usahihi na kuchukua hatua ipasavyo ikiwa kitu kinatokea, kama moto wa msitu au kupungua kwa idadi ya wanyama.
+
+### Usimamizi wa misitu
+
+Umejifunza kuhusu [Reinforcement Learning](../../8-Reinforcement/README.md) katika masomo ya awali. Inaweza kuwa muhimu sana wakati wa kujaribu kutabiri mifumo katika asili. Hasa, inaweza kutumika kufuatilia matatizo ya ikolojia kama moto wa misitu na kuenea kwa spishi vamizi. Nchini Kanada, kikundi cha watafiti walitumia Reinforcement Learning kujenga mifano ya mienendo ya moto wa misitu kutoka picha za setilaiti. Kwa kutumia "spatially spreading process (SSP)", waliona moto wa misitu kama "wakala katika seli yoyote katika mandhari." "Seti ya vitendo ambavyo moto unaweza kuchukua kutoka eneo lolote kwa wakati wowote ni pamoja na kuenea kaskazini, kusini, mashariki, au magharibi au kutokuenea.
+
+Mbinu hii inageuza mpangilio wa kawaida wa RL kwa kuwa mienendo ya Mchakato wa Uamuzi wa Markov (MDP) unaolingana ni kazi inayojulikana kwa kuenea kwa moto mara moja." Soma zaidi kuhusu algorithimu za kawaida zinazotumiwa na kikundi hiki kwenye kiungo hapa chini.
+[Reference](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### Kugundua harakati za wanyama
+
+Wakati deep learning imeleta mapinduzi katika kufuatilia harakati za wanyama kwa kuona (unaweza kujenga yako mwenyewe [polar bear tracker](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) hapa), ML ya kawaida bado ina nafasi katika kazi hii.
+
+Vihisi vya kufuatilia harakati za wanyama wa shambani na IoT hutumia aina hii ya uchakataji wa kuona, lakini mbinu za msingi za ML ni muhimu kwa kuchakata data awali. Kwa mfano, katika karatasi hii, mkao wa kondoo ulifuatiliwa na kuchambuliwa kwa kutumia algorithimu mbalimbali za classifier. Unaweza kutambua ROC curve kwenye ukurasa wa 335.
+[Reference](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ Usimamizi wa Nishati
+
+Katika masomo yetu ya [time series forecasting](../../7-TimeSeries/README.md), tulitaja dhana ya mita za maegesho za kisasa ili kuzalisha mapato kwa mji kulingana na kuelewa usambazaji na mahitaji. Makala hii inajadili kwa kina jinsi clustering, regression na time series forecasting zilivyotumika pamoja kusaidia kutabiri matumizi ya nishati ya baadaye nchini Ireland, kwa kutumia mita za kisasa.
+[Reference](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 Bima
+
+Sekta ya bima ni sekta nyingine inayotumia ML kujenga na kuboresha mifano ya kifedha na ya kihisabati.
+
+### Usimamizi wa Mabadiliko
+
+MetLife, mtoa bima ya maisha, ni wazi kuhusu jinsi wanavyochambua na kupunguza mabadiliko katika mifano yao ya kifedha. Katika makala hii utaona visualizations za binary na ordinal classification. Pia utagundua visualizations za forecasting.
+[Reference](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 Sanaa, Utamaduni, na Fasihi
+
+Katika sanaa, kwa mfano katika uandishi wa habari, kuna matatizo mengi ya kuvutia. Kugundua habari za uongo ni tatizo kubwa kwani imethibitishwa kuathiri maoni ya watu na hata kupindua demokrasia. Makumbusho pia yanaweza kunufaika na kutumia ML katika kila kitu kutoka kupata viungo kati ya vitu hadi mipango ya rasilimali.
+
+### Kugundua habari za uongo
+
+Kugundua habari za uongo kumekuwa mchezo wa paka na panya katika vyombo vya habari vya leo. Katika makala hii, watafiti wanapendekeza kuwa mfumo unaochanganya mbinu kadhaa za ML tulizozisoma unaweza kujaribiwa na mfano bora zaidi kutumika: "Mfumo huu unategemea natural language processing ili kutoa sifa kutoka kwa data na kisha sifa hizi hutumika kwa mafunzo ya classifiers za machine learning kama Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), na Logistic Regression (LR)."
+[Reference](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
+Makala hii inaonyesha jinsi kuchanganya maeneo tofauti ya ML kunaweza kutoa matokeo ya kuvutia ambayo yanaweza kusaidia kuzuia habari za uongo kuenea na kuleta madhara halisi; katika kesi hii, msukumo ulikuwa kuenea kwa uvumi kuhusu matibabu ya COVID ambayo yalisababisha vurugu za umati.
+
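+Kwa mfano, hapa kuna mchoro mdogo wa kudhaniwa wa bomba la aina hiyo kwa scikit-learn - sifa za maandishi kwa TF-IDF zikifuatiwa na mmoja wa classifiers waliotajwa (data ya mafunzo hapa ni ya kudhaniwa kabisa):
+
+```python
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+texts = ["habari ya kweli ...", "habari ya uongo ..."]   # mifano ya kudhaniwa
+labels = [0, 1]                                          # 0 = kweli, 1 = uongo
+
+pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
+pipe.fit(texts, labels)
+print(pipe.predict(["habari mpya ya kushangaza ..."]))
+```
+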
+### ML katika Makumbusho
+
+Makumbusho yako kwenye hatihati ya mapinduzi ya AI ambapo kuorodhesha na kidijitali makusanyo na kupata viungo kati ya vitu vinakuwa rahisi kadri teknolojia inavyosonga mbele. Miradi kama [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) inasaidia kufungua siri za makusanyo yasiyofikika kama vile Maktaba ya Vatican. Lakini, kipengele cha biashara cha makumbusho pia kinanufaika na mifano ya ML.
+
+Kwa mfano, Taasisi ya Sanaa ya Chicago ilijenga mifano ya kutabiri kile ambacho watazamaji wanavutiwa nacho na wakati watakapotembelea maonyesho. Lengo ni kuunda uzoefu wa kibinafsi na ulioboreshwa kwa kila mgeni kila wakati mgeni anapotembelea makumbusho. "Katika mwaka wa fedha 2017, mfano ulitabiri mahudhurio na mapokezi ndani ya asilimia 1 ya usahihi, anasema Andrew Simnick, makamu wa rais mwandamizi katika Taasisi ya Sanaa."
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 Masoko
+
+### Uwekaji wa wateja katika makundi
+
+Mikakati bora zaidi ya masoko inalenga wateja kwa njia tofauti kulingana na makundi mbalimbali. Katika makala hii, matumizi ya algorithimu za Clustering yamejadiliwa ili kusaidia masoko tofauti. Masoko tofauti husaidia kampuni kuboresha utambuzi wa chapa, kufikia wateja zaidi, na kupata pesa zaidi.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 Changamoto
+
+Tambua sekta nyingine inayofaidika na baadhi ya mbinu ulizojifunza katika mtaala huu, na ugundue jinsi inavyotumia ML.
+
+## [Jaribio la baada ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## Mapitio & Kujisomea
+
+Timu ya sayansi ya data ya Wayfair ina video kadhaa za kuvutia kuhusu jinsi wanavyotumia ML katika kampuni yao. Inafaa [kuangalia](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## Kazi
+
+[Shindano la Upelelezi wa ML](assignment.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upotovu. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/9-Real-World/1-Applications/assignment.md b/translations/sw/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..66af60dc9
--- /dev/null
+++ b/translations/sw/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# Shindano la Upelelezi wa ML
+
+## Maelekezo
+
+Katika somo hili, ulijifunza kuhusu matumizi mengi ya maisha halisi yaliyotatuliwa kwa kutumia ML ya kawaida. Ingawa matumizi ya deep learning, mbinu mpya na zana katika AI, na kutumia neural networks yamechochea uzalishaji wa zana za kusaidia katika sekta hizi, ML ya kawaida kwa kutumia mbinu katika mtaala huu bado ina thamani kubwa.
+
+Katika kazi hii, fikiria kuwa unashiriki katika hackathon. Tumia kile ulichojifunza katika mtaala kupendekeza suluhisho kwa kutumia ML ya kawaida kutatua tatizo katika moja ya sekta zilizojadiliwa katika somo hili. Unda uwasilishaji ambapo unajadili jinsi utakavyotekeleza wazo lako. Pointi za ziada kama unaweza kukusanya data ya mfano na kujenga mfano wa ML kusaidia dhana yako!
+
+## Rubric
+
+| Vigezo | Bora Zaidi | Inayotosheleza | Inayohitaji Kuboresha |
+| -------- | ------------------------------------------------------------------ | ----------------------------------------------- | --------------------- |
+| | Uwasilishaji wa PowerPoint umeonyeshwa - bonasi kwa kujenga mfano | Uwasilishaji wa kawaida, usio wa ubunifu umeonyeshwa | Kazi haijakamilika |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upotofu. Hati asili katika lugha yake ya asili inapaswa kuchukuliwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/9-Real-World/2-Debugging-ML-Models/README.md b/translations/sw/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..acabe3ade
--- /dev/null
+++ b/translations/sw/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,118 @@
+# Postscript: Urekebishaji wa Modeli katika Kujifunza kwa Mashine kwa kutumia Vipengele vya dashibodi ya AI inayowajibika
+
+
+## [Jaribio la awali ya somo](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Utangulizi
+
+Kujifunza kwa mashine kunaathiri maisha yetu ya kila siku. AI inajipenyeza katika baadhi ya mifumo muhimu zaidi inayotugusa kama watu binafsi na jamii yetu, kutoka afya, fedha, elimu, na ajira. Kwa mfano, mifumo na modeli zinahusika katika kazi za kila siku za kufanya maamuzi, kama vile utambuzi wa afya au kugundua ulaghai. Matokeo yake, maendeleo katika AI pamoja na kupitishwa kwa kasi yanakutana na matarajio yanayobadilika ya jamii na kanuni zinazokua kwa kujibu. Tunapata maeneo ambapo mifumo ya AI inaendelea kukosa matarajio; inaonyesha changamoto mpya; na serikali zinaanza kudhibiti suluhisho za AI. Kwa hivyo, ni muhimu kwamba modeli hizi zichambuliwe ili kutoa matokeo ya haki, ya kuaminika, yanayojumuisha, wazi, na yanayowajibika kwa kila mtu.
+
+Katika mtaala huu, tutaangalia zana za vitendo zinazoweza kutumika kutathmini kama modeli ina masuala ya AI inayowajibika. Mbinu za jadi za kurekebisha kujifunza kwa mashine zinategemea hesabu za kiasi kama vile usahihi wa jumla au upotevu wa makosa wa wastani. Fikiria kinachoweza kutokea wakati data unayotumia kujenga modeli hizi inakosa baadhi ya idadi ya watu, kama vile rangi, jinsia, mtazamo wa kisiasa, dini, au inawakilisha idadi hiyo kwa uwiano usio sawa. Je, vipi kuhusu wakati matokeo ya modeli yanapotafsiriwa kupendelea idadi fulani ya watu? Hii inaweza kuanzisha uwakilishi wa ziada au pungufu wa makundi haya nyeti ya vipengele na kusababisha masuala ya haki, ujumuishaji, au uaminifu kutoka kwa modeli. Sababu nyingine ni, modeli za kujifunza kwa mashine zinachukuliwa kuwa masanduku meusi, jambo ambalo linawafanya kuwa magumu kuelewa na kuelezea nini kinachoendesha utabiri wa modeli. Haya yote ni changamoto zinazowakabili wataalamu wa data na watengenezaji wa AI wanapokosa zana za kutosha za kurekebisha na kutathmini haki au uaminifu wa modeli.
+
+Katika somo hili, utajifunza kuhusu kurekebisha modeli zako kwa kutumia:
+
+- **Uchambuzi wa Makosa**: tambua ni wapi katika usambazaji wa data yako modeli ina viwango vya juu vya makosa.
+- **Muhtasari wa Modeli**: fanya uchambuzi wa kulinganisha katika vikundi tofauti vya data ili kugundua tofauti katika vipimo vya utendaji wa modeli yako.
+- **Uchambuzi wa Data**: chunguza ni wapi kunaweza kuwa na uwakilishi wa ziada au pungufu wa data yako ambao unaweza kupotosha modeli yako kupendelea idadi moja ya watu kuliko nyingine.
+- **Umuhimu wa Vipengele**: elewa ni vipengele vipi vinaendesha utabiri wa modeli yako kwa kiwango cha jumla au cha ndani.
+
+## Sharti
+
+Kama sharti, tafadhali angalia [Zana za AI inayowajibika kwa watengenezaji](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+
+> 
+
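+Kwa muktadha, hapa kuna mchoro mdogo wa kudhaniwa wa jinsi dashibodi hiyo inavyoweza kuanzishwa kwa pakiti za Python za `responsibleai` na `raiwidgets`. Majina ya data na modeli (`model`, `train_df`, `test_df`, `'target'`) ni ya kudhaniwa, na saini za kazi zinaweza kutofautiana kati ya matoleo - angalia nyaraka rasmi za zana hizo:
+
+```python
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# model, train_df, test_df na safu ya 'target' ni vya kudhaniwa hapa
+rai_insights = RAIInsights(model, train_df, test_df, 'target', task_type='classification')
+rai_insights.error_analysis.add()   # Uchambuzi wa Makosa
+rai_insights.explainer.add()        # Umuhimu wa Vipengele
+rai_insights.compute()
+ResponsibleAIDashboard(rai_insights)
+```
+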
+## Uchambuzi wa Makosa
+
+Vipimo vya utendaji wa modeli vya jadi vinavyotumika kupima usahihi mara nyingi ni hesabu zinazotegemea utabiri sahihi dhidi ya usio sahihi. Kwa mfano, kuamua kuwa modeli ni sahihi kwa 89% ya wakati na upotevu wa makosa wa 0.001 inaweza kuchukuliwa kuwa utendaji mzuri. Makosa mara nyingi hayasambazwi kwa usawa katika seti yako ya data ya msingi. Unaweza kupata alama ya usahihi ya modeli ya 89% lakini kugundua kuwa kuna maeneo tofauti ya data yako ambayo modeli inashindwa kwa 42% ya wakati. Matokeo ya mifumo hii ya kushindwa na makundi fulani ya data yanaweza kusababisha masuala ya haki au uaminifu. Ni muhimu kuelewa maeneo ambapo modeli inafanya vizuri au la. Maeneo ya data ambapo kuna idadi kubwa ya kutokua sahihi katika modeli yako yanaweza kuwa idadi muhimu ya data.
+
+
+
+Kipengele cha Uchambuzi wa Makosa kwenye dashibodi ya RAI kinaonyesha jinsi kushindwa kwa modeli kunavyosambazwa katika vikundi mbalimbali kwa kutumia mchanganuo wa mti. Hii ni muhimu katika kutambua vipengele au maeneo yenye viwango vya juu vya makosa katika seti yako ya data. Kwa kuona ni wapi makosa mengi ya modeli yanatokea, unaweza kuanza kuchunguza sababu kuu. Unaweza pia kuunda vikundi vya data kufanya uchambuzi juu yake. Vikundi hivi vya data husaidia katika mchakato wa kurekebisha ili kubaini kwa nini utendaji wa modeli ni mzuri katika kundi moja, lakini si sahihi katika kundi jingine.
+
+
+
+Viashiria vya kuona kwenye ramani ya mti husaidia katika kupata maeneo yenye tatizo haraka. Kwa mfano, kivuli cha rangi nyekundu kilicho giza zaidi kwenye nodi ya mti, ndivyo kiwango cha makosa kinavyokuwa juu zaidi.
+
+Ramani ya joto ni kipengele kingine cha kuona ambacho watumiaji wanaweza kutumia kuchunguza kiwango cha makosa kwa kutumia kipengele kimoja au viwili ili kupata mchango wa makosa ya modeli katika seti nzima ya data au vikundi.
+
+
+
+Tumia uchambuzi wa makosa unapohitaji:
+
+* Kupata ufahamu wa kina juu ya jinsi kushindwa kwa modeli kunavyosambazwa katika seti ya data na katika vipengele mbalimbali vya ingizo na vipengele.
+* Kuvunja vipimo vya utendaji wa jumla ili kugundua moja kwa moja vikundi vya makosa ili kuarifu hatua zako za kurekebisha zilizolengwa.
+
+## Muhtasari wa Modeli
+
+Kutathmini utendaji wa modeli ya kujifunza kwa mashine kunahitaji kupata uelewa wa jumla wa tabia yake. Hii inaweza kupatikana kwa kupitia zaidi ya kipimo kimoja, kama vile kiwango cha makosa (error rate), usahihi (accuracy), kumbukumbu (recall), usahihi wa utabiri (precision), au MAE (Mean Absolute Error), ili kupata tofauti kati ya vipimo vya utendaji. Kipimo kimoja cha utendaji kinaweza kuonekana kizuri, lakini kasoro zinaweza kufichuliwa katika kipimo kingine. Zaidi ya hayo, kulinganisha vipimo kwa kutafuta tofauti katika seti nzima ya data au vikundi husaidia kutoa mwanga juu ya wapi modeli inafanya vizuri au la. Hii ni muhimu hasa katika kuona utendaji wa modeli kati ya vipengele nyeti dhidi ya visivyo nyeti (mfano, rangi ya mgonjwa, jinsia, au umri) ili kugundua uwezekano wa kutokuwa na haki ambao modeli inaweza kuwa nao. Kwa mfano, kugundua kuwa modeli ni sahihi zaidi katika kundi lenye vipengele nyeti kunaweza kufichua uwezekano huo wa kutokuwa na haki.
+
+Kipengele cha Muhtasari wa Modeli kwenye dashibodi ya RAI husaidia sio tu katika kuchambua vipimo vya utendaji wa uwakilishi wa data katika kundi, lakini inawapa watumiaji uwezo wa kulinganisha tabia ya modeli katika vikundi tofauti.
+
+
+
+Kipengele cha uchambuzi wa msingi wa vipengele cha kipengele hiki kinawaruhusu watumiaji kupunguza vikundi vidogo vya data ndani ya kipengele fulani ili kutambua hali zisizo za kawaida kwa kiwango cha kina. Kwa mfano, dashibodi ina akili iliyojengwa ndani ya kiotomatiki kuunda vikundi kwa kipengele kilichochaguliwa na mtumiaji (mfano, *"time_in_hospital < 3"* au *"time_in_hospital >= 7"*). Hii inamruhusu mtumiaji kutenganisha kipengele fulani kutoka kwa kundi kubwa la data ili kuona kama ni mshawishi mkuu wa matokeo yasiyo sahihi ya modeli.
+
+
+
+Kipengele cha Muhtasari wa Modeli kinasaidia aina mbili za vipimo vya tofauti:
+
+**Tofauti katika utendaji wa modeli**: Seti hizi za vipimo zinahesabu tofauti katika thamani za kipimo cha utendaji kilichochaguliwa kati ya vikundi vya data. Hapa kuna mifano michache:
+
+* Tofauti katika kiwango cha usahihi (accuracy)
+* Tofauti katika kiwango cha makosa (error rate)
+* Tofauti katika usahihi wa utabiri (precision)
+* Tofauti katika kumbukumbu (recall)
+* Tofauti katika makosa ya wastani kabisa (MAE)
+
+**Tofauti katika kiwango cha uteuzi**: Kipimo hiki kina tofauti katika kiwango cha uteuzi (utabiri mzuri) kati ya vikundi. Mfano wa hii ni tofauti katika viwango vya idhini ya mkopo. Kiwango cha uteuzi kinamaanisha sehemu ya pointi za data katika kila darasa zilizoainishwa kama 1 (katika uainishaji wa binary) au usambazaji wa maadili ya utabiri (katika regression).
+
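+Kwa mfano, mchoro mdogo wa kudhaniwa wa kuhesabu tofauti ya kiwango cha uteuzi kati ya vikundi viwili (data hapa ni ya mfano tu):
+
+```python
+import numpy as np
+
+preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])             # utabiri wa binary wa modeli
+group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
+
+rate_a = preds[group == "A"].mean()   # kiwango cha uteuzi kwa kikundi A
+rate_b = preds[group == "B"].mean()   # kiwango cha uteuzi kwa kikundi B
+print(abs(rate_a - rate_b))           # tofauti katika kiwango cha uteuzi
+```
+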
+## Uchambuzi wa Data
+
+> "Ukitesa data kwa muda mrefu vya kutosha, itakiri chochote" - Ronald Coase
+
+Kauli hii inasikika kali, lakini ni kweli kwamba data inaweza kudanganywa kuunga mkono hitimisho lolote. Udanganyifu kama huo wakati mwingine unaweza kutokea bila kukusudia. Kama wanadamu, sote tuna upendeleo, na mara nyingi ni vigumu kujua kwa ufahamu wakati unaleta upendeleo katika data. Kuhakikisha haki katika AI na kujifunza kwa mashine bado ni changamoto ngumu.
+
+Data ni kipofu kikubwa kwa vipimo vya jadi vya utendaji wa modeli. Unaweza kuwa na alama za juu za usahihi, lakini hii haionyeshi kila mara upendeleo wa msingi wa data ambao unaweza kuwa katika seti yako ya data. Kwa mfano, ikiwa seti ya data ya wafanyakazi ina 27% ya wanawake katika nafasi za utendaji katika kampuni na 73% ya wanaume katika ngazi hiyo hiyo, modeli ya matangazo ya kazi ya AI iliyofundishwa kwenye data hii inaweza kulenga zaidi hadhira ya kiume kwa nafasi za kazi za ngazi za juu. Kuwa na uwiano huu wa data kulipotosha utabiri wa modeli kupendelea jinsia moja. Hii inaonyesha tatizo la haki ambapo kuna upendeleo wa kijinsia katika modeli ya AI.
+
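+Ukaguzi rahisi wa uwakilishi kama huo unaweza kufanywa hata kabla ya mafunzo; hapa kuna mchoro mdogo wa kudhaniwa ukitumia pandas (jina la safu na data ni vya mfano tu):
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({"gender": ["F"] * 27 + ["M"] * 73})   # data ya mfano tu
+print(df["gender"].value_counts(normalize=True))         # uwiano wa kila kundi
+```
+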
+Kipengele cha Uchambuzi wa Data kwenye dashibodi ya RAI husaidia kutambua maeneo ambapo kuna uwakilishi wa ziada au pungufu katika seti ya data. Husaidia watumiaji kutambua sababu kuu za makosa na masuala ya haki yanayoletwa na uwiano wa data au ukosefu wa uwakilishi wa kundi fulani la data. Hii inawapa watumiaji uwezo wa kuona seti za data kulingana na matokeo yaliyotabiriwa na halisi, vikundi vya makosa, na vipengele maalum. Wakati mwingine kugundua kundi la data lililowakilishwa kidogo kunaweza pia kufichua kuwa modeli haijifunzi vizuri, hivyo makosa mengi. Kuwa na modeli ambayo ina upendeleo wa data sio tu tatizo la haki lakini inaonyesha kuwa modeli haijumuishi au haijakubalika.
+
+
+
+
+Tumia uchambuzi wa data unapohitaji:
+
+* Kuchunguza takwimu za seti yako ya data kwa kuchagua vichujio tofauti ili kugawanya data yako katika vipimo tofauti (inayojulikana pia kama vikundi).
+* Kuelewa usambazaji wa seti yako ya data katika vikundi tofauti na makundi ya vipengele.
+* Kuamua kama matokeo yako yanayohusiana na haki, uchambuzi wa makosa, na uhusiano (yanayotokana na vipengele vingine vya dashibodi) ni matokeo ya usambazaji wa seti yako ya data.
+* Kuamua ni maeneo gani ya kukusanya data zaidi ili kupunguza makosa yanayotokana na masuala ya uwakilishi, kelele za lebo, kelele za vipengele, upendeleo wa lebo, na mambo yanayofanana.
+
+## Ufafanuzi wa Modeli
+
+Modeli za kujifunza kwa mashine mara nyingi huwa masanduku meusi. Kuelewa ni vipengele vipi muhimu vya data vinavyoendesha utabiri wa modeli kunaweza kuwa changamoto. Ni muhimu kutoa uwazi kuhusu kwa nini modeli inatoa utabiri fulani. Kwa mfano, ikiwa mfumo wa AI unatabiri kuwa mgonjwa wa kisukari yuko hatarini kurudi hospitalini ndani ya siku 30, unapaswa kutoa data inayounga mkono iliyosababisha utabiri wake. Kuwa na viashiria vya data vinavyounga mkono kunaleta uwazi unaosaidia madaktari au hospitali kufanya maamuzi sahihi. Zaidi ya hayo, kuwa na uwezo wa kuelezea kwa nini modeli ilifanya utabiri kwa mgonjwa binafsi kunasaidia uwajibikaji kwa kanuni za afya. Unapotumia modeli za kujifunza kwa mashine kwa njia zinazogusa maisha ya watu, ni muhimu kuelewa na kuelezea ni nini kinachoathiri tabia ya modeli. Ufafanuzi (explainability) na utafsiri (interpretability) wa modeli husaidia kujibu maswali katika hali kama:
+
+* Kurekebisha modeli: Kwa nini modeli yangu ilikosea? Ninawezaje kuboresha modeli yangu?
+* Ushirikiano wa binadamu na AI: Ninawezaje kuelewa na kuamini maamuzi ya modeli?
+* Kufuata kanuni: Je, modeli yangu inakidhi mahitaji ya kisheria?
+
+Kipengele cha Umuhimu wa Vipengele cha dashibodi ya RAI kinakusaidia kurekebisha na kupata uelewa wa kina wa jinsi modeli inavyotoa utabiri. Pia ni zana muhimu kwa wataalamu wa kujifunza kwa mashine na watoa maamuzi kuelezea na kuonyesha ushahidi wa vipengele vinavyoathiri tabia ya modeli kwa kufuata kanuni. Kisha, watumiaji wanaweza kuchunguza maelezo ya jumla na ya ndani kuthibitisha ni vipengele vipi vinaendesha utabiri wa modeli. Maelezo ya jumla yanaorodhesha vipengele vya juu vilivyoathiri utabiri wa jumla wa modeli. Maelezo ya ndani yanaonyesha ni vipengele vipi vilivyopelekea utabiri wa modeli kwa kesi ya mtu binafsi. Uwezo wa kutathmini maelezo ya ndani pia ni muhimu katika kurekebisha au kukagua kesi maalum ili kuelewa vizuri na kufafanua kwa nini modeli ilitoa utabiri sahihi au usio sahihi.
+
+
+
+* Maelezo ya jumla: Kwa mfano, ni vipengele gani vinavyoathiri tabia ya jumla ya modeli ya kurudi hospitalini kwa kisukari?
+* Maelezo ya ndani: Kwa mfano, kwa nini mgonjwa wa kisukari aliye na umri wa zaidi ya miaka 60 na aliye na historia ya kulazwa hospitalini alitabiriwa kurudi au kutorudi hospitalini ndani ya siku 30?
+
+Katika mchakato wa kurekebisha utendaji wa modeli katika vikundi tofauti, Umuhimu wa Vipengele unaonyesha ni kiwango gani cha athari kipengele kinacho katika vikundi. Husaidia kufichua hali zisizo za kawaida wakati wa kulinganisha kiwango cha ushawishi kipengele kinacho katika kuendesha utabiri wa makosa ya modeli. Kipengele cha Umuhimu wa Vipengele kinaweza kuonyesha ni maadili gani katika kipengele yaliyoathiri utabiri wa modeli kwa njia nzuri au mbaya. Kwa mfano, ikiwa modeli ilitoa utabiri usio sahihi, kipengele hiki kinatoa uwezo wa kuchambua na kugundua ni vipengele vipi au maadili ya vipengele yaliyoendesha utabiri huo. Kiwango hiki cha kina husaidia sio tu katika kurekebisha lakini pia hutoa uwazi na uwajibikaji katika hali za ukaguzi. Hatimaye, kipengele hiki kinaweza kukusaidia kutambua masuala ya haki. Kwa mfano, ikiwa kipengele nyeti kama vile kabila au jinsia kina ushawishi mkubwa katika kuendesha utabiri wa modeli, hii inaweza kuwa ishara ya upendeleo wa rangi au jinsia katika modeli.
+
+
+
+Tumia ufafanuzi unapohitaji:
+
+* Kuamua jinsi utabiri wa mfumo wako wa AI unavyoweza kuaminika kwa kuelewa ni vipengele vipi vilivyo muhimu zaidi kwa utabiri.
+* Kuanza kurekebisha modeli yako kwa kuelewa kwanza na kutambua kama modeli inatumia vipengele vyenye afya au uhusiano wa uongo tu.
+* Kugundua vyanzo vinavyowezekana vya ukosefu wa haki kwa kuelewa kama modeli inategemea vipengele nyeti au vipengele vinavyohusiana sana navyo.
+* Kujenga imani ya mtumiaji katika maamuzi ya modeli yako kwa kutoa maelezo ya ndani ili kuelezea matokeo yao.
+* Kukamilisha ukaguzi wa kisheria wa mfumo wako wa AI ili kuthibitisha modeli na kufuatilia athari za maamuzi ya modeli kwa watu.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotegemea mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu wa usahihi. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa maelewano mabaya au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/sw/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..a180ca7b8
--- /dev/null
+++ b/translations/sw/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Chunguza dashibodi ya AI inayowajibika (RAI)
+
+## Maagizo
+
+Katika somo hili ulijifunza kuhusu dashibodi ya RAI, seti ya vipengele vilivyojengwa kwa zana za "chanzo-wazi" ili kusaidia wanasayansi wa data kufanya uchambuzi wa makosa, uchunguzi wa data, tathmini ya usawa, ufafanuzi wa modeli, tathmini za kinzani/what-if na uchambuzi wa kisababishi kwenye mifumo ya AI. Kwa kazi hii, chunguza baadhi ya [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) za mfano za dashibodi ya RAI na ripoti matokeo yako katika karatasi au uwasilishaji.
+
+## Rubric
+
+| Vigezo | Bora | Inayotosha | Inahitaji Kuboresha |
+| ------- | --------- | -------- | ----------------- |
+| | Karatasi au uwasilishaji wa powerpoint umeonyeshwa ukijadili vipengele vya dashibodi ya RAI, daftari lililoendeshwa, na hitimisho lililotolewa kutoka kwa uendeshaji huo | Karatasi imewasilishwa bila hitimisho | Hakuna karatasi iliyowasilishwa |
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu wa usahihi. Hati asili katika lugha yake ya awali inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/9-Real-World/README.md b/translations/sw/9-Real-World/README.md
new file mode 100644
index 000000000..99a1892ff
--- /dev/null
+++ b/translations/sw/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# Postscript: Matumizi ya ulimwengu halisi ya kujifunza kwa mashine ya jadi
+
+Katika sehemu hii ya mtaala, utatambulishwa kwenye baadhi ya matumizi ya ulimwengu halisi ya ML ya jadi. Tumetafuta kwenye mtandao kupata karatasi nyeupe na makala kuhusu matumizi ambayo yametumia mikakati hii, tukiepuka mitandao ya neva, kujifunza kwa kina na AI kadri tuwezavyo. Jifunze jinsi ML inavyotumika katika mifumo ya biashara, matumizi ya kiikolojia, fedha, sanaa na utamaduni, na zaidi.
+
+
+
+> Picha na Alexis Fauvet kwenye Unsplash
+
+## Somo
+
+1. [Matumizi ya Ulimwengu Halisi kwa ML](1-Applications/README.md)
+2. [Urekebishaji wa Mifano katika Kujifunza kwa Mashine kwa kutumia vipengele vya dashibodi ya AI yenye Uwajibikaji](2-Debugging-ML-Models/README.md)
+
+## Shukrani
+
+"Matumizi ya Ulimwengu Halisi" iliandikwa na timu ya watu, wakiwemo [Jen Looper](https://twitter.com/jenlooper) na [Ornella Altunyan](https://twitter.com/ornelladotcom).
+
+"Urekebishaji wa Mifano katika Kujifunza kwa Mashine kwa kutumia vipengele vya dashibodi ya AI yenye Uwajibikaji" iliandikwa na [Ruth Yakubu](https://twitter.com/ruthieyakubu)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/CODE_OF_CONDUCT.md b/translations/sw/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..2eb8afb26
--- /dev/null
+++ b/translations/sw/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft Open Source Code of Conduct
+
+Mradi huu umekubali [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+Rasilimali:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- Wasiliana na [opencode@microsoft.com](mailto:opencode@microsoft.com) kwa maswali au wasiwasi
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuelewana. Hati asili katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo chenye mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inashauriwa. Hatutawajibika kwa maelewano mabaya au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/CONTRIBUTING.md b/translations/sw/CONTRIBUTING.md
new file mode 100644
index 000000000..38aa9be34
--- /dev/null
+++ b/translations/sw/CONTRIBUTING.md
@@ -0,0 +1,19 @@
+# Kuchangia
+
+Mradi huu unakaribisha michango na mapendekezo. Michango mingi inahitaji wewe
+kukubaliana na Mkataba wa Leseni ya Mchangiaji (CLA) unaotangaza kuwa una haki ya,
+na kwa kweli unafanya, kutupatia haki za kutumia mchango wako. Kwa maelezo zaidi, tembelea
+https://cla.microsoft.com.
+
+> Muhimu: unapotafsiri maandishi katika repo hii, tafadhali hakikisha kwamba hutumii tafsiri ya mashine. Tutathibitisha tafsiri kupitia jamii, kwa hivyo tafadhali jitolee tu kwa tafsiri katika lugha ambazo una ujuzi nazo.
+
+Unapowasilisha ombi la kuvuta, CLA-bot itatambua moja kwa moja kama unahitaji
+kutoa CLA na kupamba PR ipasavyo (kwa mfano, lebo, maoni). Fuata tu
+maelekezo yaliyotolewa na bot. Utahitaji kufanya hivi mara moja tu katika hazina zote zinazotumia CLA yetu.
+
+Mradi huu umechukua [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+Kwa maelezo zaidi tazama [Maswali Yanayoulizwa Mara kwa Mara ya Kanuni za Maadili](https://opensource.microsoft.com/codeofconduct/faq/)
+au wasiliana na [opencode@microsoft.com](mailto:opencode@microsoft.com) kwa maswali au maoni ya ziada.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/README.md b/translations/sw/README.md
new file mode 100644
index 000000000..ac69f5774
--- /dev/null
+++ b/translations/sw/README.md
@@ -0,0 +1,155 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# Kujifunza Mashine kwa Anayeanza - Mtaala
+
+> 🌍 Safiri duniani tunapochunguza Kujifunza Mashine kupitia tamaduni za ulimwengu 🌍
+
+Wataalam wa Cloud katika Microsoft wana furaha kutoa mtaala wa wiki 12, masomo 26 kuhusu **Kujifunza Mashine**. Katika mtaala huu, utajifunza kuhusu kile kinachoitwa **kujifunza mashine kwa kawaida**, ukitumia Scikit-learn kama maktaba na kuepuka kujifunza kwa kina, ambayo inashughulikiwa katika [mtaala wetu wa AI kwa Anayeanza](https://aka.ms/ai4beginners). Unganisha masomo haya na mtaala wetu wa ['Sayansi ya Takwimu kwa Anayeanza'](https://aka.ms/ds4beginners), pia!
+
+Safiri nasi kote ulimwenguni tunapotumia mbinu hizi za kawaida kwa data kutoka maeneo mbalimbali ya ulimwengu. Kila somo linajumuisha maswali kabla na baada ya somo, maelekezo ya maandishi ya kukamilisha somo, suluhisho, kazi, na zaidi. Pedagogi yetu inayozingatia miradi inakuwezesha kujifunza wakati wa kujenga, njia iliyothibitishwa ya kuifanya ujuzi mpya 'kubaki'.
+
+**✍️ Shukrani za dhati kwa waandishi wetu** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu na Amy Boyd
+
+**🎨 Shukrani pia kwa wachora picha wetu** Tomomi Imura, Dasani Madipalli, na Jen Looper
+
+**🙏 Shukrani maalum 🙏 kwa waandishi wa Microsoft Student Ambassador, wakaguzi, na wachangiaji wa maudhui**, hasa Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, na Snigdha Agarwal
+
+**🤩 Shukrani za ziada kwa Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, na Vidushi Gupta kwa masomo yetu ya R!**
+
+# Kuanza
+
+Fuata hatua hizi:
+1. **Fork Repository**: Bonyeza kitufe cha "Fork" kwenye kona ya juu-kulia ya ukurasa huu.
+2. **Clone Repository**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [pata rasilimali zote za ziada kwa kozi hii katika mkusanyiko wetu wa Microsoft Learn](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[Wanafunzi](https://aka.ms/student-page)**, kutumia mtaala huu, fanya fork ya repo nzima kwenye akaunti yako ya GitHub na kamilisha mazoezi mwenyewe au na kikundi:
+
+- Anza na jaribio la awali la somo.
+- Soma somo na kamilisha shughuli, ukisimama na kutafakari katika kila ukaguzi wa maarifa.
+- Jaribu kuunda miradi kwa kuelewa masomo badala ya kuendesha msimbo wa suluhisho; hata hivyo msimbo huo unapatikana katika folda ya `/solution` ya kila somo linalohusiana na mradi.
+- Chukua jaribio la baada ya somo.
+- Kamilisha changamoto.
+- Kamilisha kazi.
+- Baada ya kukamilisha kikundi cha somo, tembelea [Bodi ya Majadiliano](https://github.com/microsoft/ML-For-Beginners/discussions) na "jifunze kwa sauti" kwa kujaza rubriki ya PAT inayofaa. 'PAT' ni Chombo cha Tathmini ya Maendeleo ambacho ni rubriki unayojaza ili kuendeleza ujifunzaji wako. Unaweza pia kuitikia PAT zingine ili tujifunze pamoja.
+
+> Kwa masomo zaidi, tunapendekeza kufuata moduli na njia za kujifunza hizi za [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott).
+
+**Walimu**, tumejumuisha [mapendekezo kadhaa](for-teachers.md) kuhusu jinsi ya kutumia mtaala huu.
+
+---
+
+## Maelezo ya Video
+
+Baadhi ya masomo yanapatikana kama video fupi. Unaweza kupata hizi zote ndani ya masomo, au kwenye [orodha ya kucheza ya ML kwa Anayeanza kwenye kituo cha YouTube cha Microsoft Developer](https://aka.ms/ml-beginners-videos) kwa kubonyeza picha hapa chini.
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## Kutana na Timu
+
+[](https://youtu.be/Tj1XWrDSYJU "Video ya promo")
+
+**Gif na** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 Bonyeza picha hapo juu kwa video kuhusu mradi na watu waliouunda!
+
+---
+
+## Pedagogi
+
+Tumechagua kanuni mbili za kipedagogia wakati wa kujenga mtaala huu: kuhakikisha kuwa ni ya vitendo **inayozingatia miradi** na kwamba inajumuisha **maswali ya mara kwa mara**. Zaidi ya hayo, mtaala huu una **mandhari** ya kawaida ili kuupa mshikamano.
+
+Kwa kuhakikisha kuwa maudhui yanaendana na miradi, mchakato unakuwa wa kuvutia zaidi kwa wanafunzi na uhifadhi wa dhana utaongezeka. Zaidi ya hayo, jaribio la hatari ndogo kabla ya darasa linaweka nia ya mwanafunzi kuelekea kujifunza mada, wakati jaribio la pili baada ya darasa linaongeza uhifadhi zaidi. Mtaala huu uliundwa kuwa rahisi na wa kufurahisha na unaweza kuchukuliwa kwa ujumla au sehemu. Miradi huanza ndogo na kuwa ngumu zaidi mwishoni mwa mzunguko wa wiki 12. Mtaala huu pia unajumuisha maelezo ya matumizi ya kweli ya ML, ambayo yanaweza kutumika kama ziada ya mkopo au kama msingi wa majadiliano.
+
+> Pata [Kanuni zetu za Maadili](CODE_OF_CONDUCT.md), [Kuchangia](CONTRIBUTING.md), na [Miongozo ya Tafsiri](TRANSLATIONS.md). Tunakaribisha maoni yako ya kujenga!
+
+## Kila somo linajumuisha
+
+- sketchnote ya hiari
+- video ya ziada ya hiari
+- maelezo ya video (masomo mengine tu)
+- jaribio la joto la kabla ya somo
+- somo la maandishi
+- kwa masomo yanayozingatia mradi, miongozo ya hatua kwa hatua juu ya jinsi ya kujenga mradi
+- ukaguzi wa maarifa
+- changamoto
+- kusoma kwa ziada
+- kazi
+- jaribio la baada ya somo
+
+> **Maelezo kuhusu lugha**: Masomo haya yameandikwa kimsingi kwa Python, lakini mengi yanapatikana pia kwa R. Ili kukamilisha somo la R, nenda kwenye folda ya `/solution` na tafuta masomo ya R. Yanajumuisha kiendelezi cha .rmd ambacho kinaonyesha faili ya **R Markdown** ambayo inaweza kuelezewa kwa urahisi kama kuingiza `code chunks` (ya R au lugha nyingine) na `YAML header` (inayoongoza jinsi ya kuunda matokeo kama PDF) katika `Markdown document`. Kwa hivyo, inatumika kama mfumo wa uandishi wa mfano kwa sayansi ya data kwani inakuwezesha kuchanganya msimbo wako, matokeo yake, na mawazo yako kwa kuandika kwa Markdown. Zaidi ya hayo, hati za R Markdown zinaweza kutolewa kwa fomati za matokeo kama PDF, HTML, au Word.
+
+> **Maelezo kuhusu maswali**: Maswali yote yamo kwenye [folda ya Quiz App](../../quiz-app), kwa jumla ya majaribio 52, kila moja likiwa na maswali matatu. Yameunganishwa kutoka ndani ya masomo lakini programu ya maswali inaweza kuendeshwa kwa ndani; fuata maagizo katika folda ya `quiz-app` kuendesha kwa ndani au kupeleka kwenye Azure.
+
+| Nambari ya Somo | Mada | Kundi la Somo | Malengo ya Kujifunza | Somo lililounganishwa | Mwandishi |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | Utangulizi wa kujifunza mashine | [Utangulizi](1-Introduction/README.md) | Jifunze dhana za msingi za kujifunza mashine | [Somo](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | Historia ya kujifunza mashine | [Utangulizi](1-Introduction/README.md) | Jifunze historia inayoelezea uwanja huu | [Somo](1-Introduction/2-history-of-ML/README.md) | Jen na Amy |
+| 03 | Usawa na kujifunza mashine | [Utangulizi](1-Introduction/README.md) | Masuala muhimu ya kifalsafa kuhusu usawa ambayo wanafunzi wanapaswa kuzingatia wanapojenga na kutumia mifano ya ML? | [Somo](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | Mbinu za kujifunza kwa mashine | [Introduction](1-Introduction/README.md) | Je, watafiti wa ML wanatumia mbinu gani kujenga mifano ya ML? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | Chris na Jen |
+| 05 | Utangulizi wa regression | [Regression](2-Regression/README.md) | Anza na Python na Scikit-learn kwa mifano ya regression | [Python](2-Regression/1-Tools/README.md) | |
+| 09 | Programu ya Wavuti 🔌 | [Web App](3-Web-App/README.md) | Jenga programu ya wavuti kutumia modeli yako iliyofunzwa | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | Utangulizi wa uainishaji | [Classification](4-Classification/README.md) | Safisha, andaa, na onyesha data yako; utangulizi wa uainishaji | [Python](4-Classification/1-Introduction/README.md) | |
+| 13 | Vyakula vitamu vya Asia na India 🍜 | [Classification](4-Classification/README.md) | Jenga programu ya wavuti ya kutoa mapendekezo kutumia modeli yako | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | Utangulizi wa clustering | [Clustering](5-Clustering/README.md) | Safisha, andaa, na onyesha data yako; utangulizi wa clustering | [Python](5-Clustering/1-Visualize/README.md) | |
+| 16 | Utangulizi wa usindikaji wa lugha asilia ☕️ | [Natural language processing](6-NLP/README.md) | Jifunze misingi ya NLP kwa kujenga bot rahisi | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | Majukumu ya kawaida ya NLP ☕️ | [Natural language processing](6-NLP/README.md) | Zidi kuelewa NLP kwa kuelewa majukumu ya kawaida yanayohitajika unaposhughulikia miundo ya lugha | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | Tafsiri na uchambuzi wa hisia ♥️ | [Natural language processing](6-NLP/README.md) | Tafsiri na uchambuzi wa hisia na Jane Austen | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | Hoteli za Kimapenzi za Ulaya ♥️ | [Natural language processing](6-NLP/README.md) | Uchambuzi wa hisia na maoni ya hoteli 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | Hoteli za Kimapenzi za Ulaya ♥️ | [Natural language processing](6-NLP/README.md) | Uchambuzi wa hisia na maoni ya hoteli 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | Utangulizi wa utabiri wa mfululizo wa muda | [Time series](7-TimeSeries/README.md) | Utangulizi wa utabiri wa mfululizo wa muda | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ Matumizi ya Nguvu Duniani ⚡️ - utabiri wa mfululizo wa muda na ARIMA | [Time series](7-TimeSeries/README.md) | Utabiri wa mfululizo wa muda na ARIMA | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ Matumizi ya Nguvu Duniani ⚡️ - utabiri wa mfululizo wa muda na SVR | [Time series](7-TimeSeries/README.md) | Utabiri wa mfululizo wa muda na Support Vector Regressor | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | Utangulizi wa kujifunza kwa kuimarisha | [Reinforcement learning](8-Reinforcement/README.md) | Utangulizi wa kujifunza kwa kuimarisha na Q-Learning | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | Msaidie Peter kuepuka mbwa mwitu! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | Gym ya kujifunza kwa kuimarisha | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | Matukio na matumizi halisi ya ML | [ML in the Wild](9-Real-World/README.md) | Matumizi ya kuvutia na kufichua ya ulimwengu halisi ya ML ya kimsingi | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | Urekebishaji wa Modeli katika ML kwa kutumia dashibodi ya RAI | [ML in the Wild](9-Real-World/README.md) | Urekebishaji wa Modeli katika Kujifunza kwa Mashine kwa kutumia vipengele vya dashibodi ya AI inayowajibika | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [pata rasilimali zote za ziada za kozi hii kwenye mkusanyiko wetu wa Microsoft Learn](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## Ufikiaji wa nje ya mtandao
+
+Unaweza kuendesha nyaraka hizi nje ya mtandao kwa kutumia [Docsify](https://docsify.js.org/#/). Fork repo hii, [sakinisha Docsify](https://docsify.js.org/#/quickstart) kwenye mashine yako ya ndani, na kisha kwenye folda ya mizizi ya repo hii, andika `docsify serve`. Tovuti itahudumiwa kwenye bandari 3000 kwenye localhost yako: `localhost:3000`.
+
+## PDFs
+Pata nakala ya pdf ya mtaala na viungo [hapa](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf).
+
+## Msaada Unahitajika
+
+Ungependa kuchangia tafsiri? Tafadhali soma [miongozo yetu ya tafsiri](TRANSLATIONS.md) na ongeza suala lililotayarishwa ili kusimamia mzigo wa kazi [hapa](https://github.com/microsoft/ML-For-Beginners/issues).
+
+## Mitaala Mingine
+
+Timu yetu inazalisha mitaala mingine! Angalia:
+
+- [AI kwa Anayeanza](https://aka.ms/ai4beginners)
+- [Sayansi ya Takwimu kwa Anayeanza](https://aka.ms/datascience-beginners)
+- [**Toleo Jipya 2.0** - AI ya Kizazi kwa Anayeanza](https://aka.ms/genai-beginners)
+- [**JIPYA** Usalama wa Mtandao kwa Anayeanza](https://github.com/microsoft/Security-101?WT.mc_id=academic-96948-sayoung)
+- [Maendeleo ya Wavuti kwa Anayeanza](https://aka.ms/webdev-beginners)
+- [IoT kwa Anayeanza](https://aka.ms/iot-beginners)
+- [Kujifunza Mashine kwa Anayeanza](https://aka.ms/ml4beginners)
+- [Maendeleo ya XR kwa Anayeanza](https://aka.ms/xr-dev-for-beginners)
+- [Kumudu GitHub Copilot kwa Uprogramishaji wa Pamoja wa AI](https://aka.ms/GitHubCopilotAI)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu wa usahihi. Hati asilia katika lugha yake ya asili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, inashauriwa kutumia tafsiri ya kibinadamu ya kitaalamu. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/SECURITY.md b/translations/sw/SECURITY.md
new file mode 100644
index 000000000..95f784473
--- /dev/null
+++ b/translations/sw/SECURITY.md
@@ -0,0 +1,40 @@
+## Usalama
+
+Microsoft inachukulia kwa uzito usalama wa bidhaa na huduma zetu za programu, ikijumuisha hazina zote za msimbo wa chanzo zinazosimamiwa kupitia mashirika yetu ya GitHub, ambayo ni pamoja na [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), na [mashirika yetu ya GitHub](https://opensource.microsoft.com/).
+
+Ikiwa unaamini umepata udhaifu wa usalama katika hazina yoyote inayomilikiwa na Microsoft ambayo inakidhi [ufafanuzi wa Microsoft wa udhaifu wa usalama](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott), tafadhali ripoti kwetu kama ilivyoelezwa hapa chini.
+
+## Kuripoti Masuala ya Usalama
+
+**Tafadhali usiripoti udhaifu wa usalama kupitia masuala ya umma ya GitHub.**
+
+Badala yake, tafadhali ripoti kwa Kituo cha Majibu ya Usalama cha Microsoft (MSRC) kupitia [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
+
+Ikiwa unapendelea kuwasilisha bila kuingia, tuma barua pepe kwa [secure@microsoft.com](mailto:secure@microsoft.com). Ikiwezekana, enkripti ujumbe wako kwa kutumia ufunguo wetu wa PGP; tafadhali pakua kutoka kwenye [ukurasa wa Ufunguo wa PGP wa Kituo cha Majibu ya Usalama cha Microsoft](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
+
+Unapaswa kupokea majibu ndani ya saa 24. Ikiwa kwa sababu fulani hupokei, tafadhali fuatilia kwa barua pepe ili kuhakikisha tumepokea ujumbe wako wa awali. Habari zaidi inaweza kupatikana kwenye [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+Tafadhali jumuisha habari iliyoorodheshwa hapa chini (kadri uwezavyo kutoa) ili kutusaidia kuelewa vizuri asili na wigo wa suala linalowezekana:
+
+ * Aina ya suala (mfano, buffer overflow, SQL injection, cross-site scripting, n.k.)
+ * Njia kamili za faili za chanzo zinazohusiana na udhihirisho wa suala hilo
+ * Eneo la msimbo wa chanzo ulioathirika (tag/branch/commit au URL moja kwa moja)
+ * Mipangilio maalum inayohitajika ili kuzalisha suala hilo
+ * Maelekezo ya hatua kwa hatua ya kuzalisha suala hilo
+ * Ushahidi wa dhana au msimbo wa shambulio (ikiwa inawezekana)
+ * Athari ya suala hilo, ikijumuisha jinsi mshambulizi anaweza kutumia suala hilo
+
+Habari hii itatusaidia kushughulikia ripoti yako haraka zaidi.
+
+Ikiwa unaripoti kwa ajili ya zawadi ya hitilafu, ripoti kamili zaidi zinaweza kuchangia tuzo kubwa ya zawadi. Tafadhali tembelea ukurasa wetu wa [Programu ya Zawadi ya Hitilafu ya Microsoft](https://microsoft.com/msrc/bounty) kwa maelezo zaidi kuhusu programu zetu zinazotumika.
+
+## Lugha Zinazopendekezwa
+
+Tunapendelea mawasiliano yote yawe kwa Kiingereza.
+
+## Sera
+
+Microsoft inafuata kanuni ya [Ufunuo wa Udhaifu Ulioratibiwa](https://www.microsoft.com/en-us/msrc/cvd).
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati asilia katika lugha yake ya awali inapaswa kuchukuliwa kama chanzo chenye mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/SUPPORT.md b/translations/sw/SUPPORT.md
new file mode 100644
index 000000000..5fcd20abe
--- /dev/null
+++ b/translations/sw/SUPPORT.md
@@ -0,0 +1,13 @@
+# Msaada
+## Jinsi ya kuripoti matatizo na kupata msaada
+
+Mradi huu unatumia GitHub Issues kufuatilia hitilafu na maombi ya vipengele. Tafadhali tafuta masuala yaliyopo kabla ya kuripoti masuala mapya ili kuepuka marudio. Kwa masuala mapya, ripoti hitilafu yako au ombi la kipengele kama Suala jipya.
+
+Kwa msaada na maswali kuhusu matumizi ya mradi huu, ripoti suala.
+
+## Sera ya Msaada ya Microsoft
+
+Msaada kwa hazina hii umepunguzwa kwa rasilimali zilizoorodheshwa hapo juu.
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati asili katika lugha yake ya awali inapaswa kuzingatiwa kama chanzo rasmi. Kwa habari muhimu, tafsiri ya kitaalamu ya binadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/TRANSLATIONS.md b/translations/sw/TRANSLATIONS.md
new file mode 100644
index 000000000..1a7431581
--- /dev/null
+++ b/translations/sw/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# Changia kwa kutafsiri masomo
+
+Tunakaribisha tafsiri za masomo katika mtaala huu!
+## Miongozo
+
+Kuna folda katika kila folda ya somo na folda ya utangulizi wa somo ambazo zina faili za markdown zilizotafsiriwa.
+
+> Kumbuka, tafadhali usitafsiri msimbo wowote katika faili za sampuli za msimbo; vitu pekee vya kutafsiri ni README, assignments, na majaribio. Asante!
+
+Faili zilizotafsiriwa zinapaswa kufuata mpangilio huu wa majina:
+
+**README._[language]_.md**
+
+ambapo _[language]_ ni kifupi cha herufi mbili za lugha kufuatia kiwango cha ISO 639-1 (mfano `README.es.md` kwa Kihispania na `README.nl.md` kwa Kiholanzi).
+
+**assignment._[language]_.md**
+
+Sawa na README, tafadhali tafsiri pia assignments.
+
+> Muhimu: unapofanya tafsiri ya maandishi katika hazina hii, tafadhali hakikisha kuwa hutumii tafsiri ya mashine. Tutathibitisha tafsiri kupitia jamii, kwa hivyo tafadhali jitolee tu kwa tafsiri katika lugha ambazo unazifahamu vizuri.
+
+**Majaribio**
+
+1. Ongeza tafsiri yako kwenye quiz-app kwa kuongeza faili hapa: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, kwa kutumia mpangilio sahihi wa majina (en.json, fr.json). **Hata hivyo, tafadhali usibadilishe maneno 'true' au 'false'. Asante!**
+
+2. Ongeza msimbo wa lugha yako kwenye dropdown katika faili ya App.vue ya quiz-app.
+
+3. Hariri faili ya [translations index.js](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) ya quiz-app ili kuongeza lugha yako.
+
+4. Hatimaye, hariri VIUNGO VYOTE vya majaribio katika faili zako za README.md zilizotafsiriwa ili kuelekeza moja kwa moja kwenye jaribio lako lililotafsiriwa: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 inakuwa https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id
+
+**ASANTE SANA**
+
+Tunathamini sana juhudi zako!
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati ya asili katika lugha yake ya asili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri potofu zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/docs/_sidebar.md b/translations/sw/docs/_sidebar.md
new file mode 100644
index 000000000..1cf95e2f2
--- /dev/null
+++ b/translations/sw/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- Utangulizi
+ - [Utangulizi wa Kujifunza Mashine](../1-Introduction/1-intro-to-ML/README.md)
+ - [Historia ya Kujifunza Mashine](../1-Introduction/2-history-of-ML/README.md)
+ - [Kujifunza Mashine na Haki](../1-Introduction/3-fairness/README.md)
+ - [Mbinu za Kujifunza Mashine](../1-Introduction/4-techniques-of-ML/README.md)
+
+- Usawazishaji
+ - [Zana za Kazi](../2-Regression/1-Tools/README.md)
+ - [Data](../2-Regression/2-Data/README.md)
+ - [Usawazishaji wa Mstari](../2-Regression/3-Linear/README.md)
+ - [Usawazishaji wa Logistic](../2-Regression/4-Logistic/README.md)
+
+- Tengeneza Programu ya Wavuti
+ - [Programu ya Wavuti](../3-Web-App/1-Web-App/README.md)
+
+- Uainishaji
+ - [Utangulizi wa Uainishaji](../4-Classification/1-Introduction/README.md)
+ - [Waainishaji 1](../4-Classification/2-Classifiers-1/README.md)
+ - [Waainishaji 2](../4-Classification/3-Classifiers-2/README.md)
+ - [Kujifunza Mashine Katika Matumizi](../4-Classification/4-Applied/README.md)
+
+- Kupanga Makundi
+ - [Onyesha Data Yako](../5-Clustering/1-Visualize/README.md)
+ - [K-Means](../5-Clustering/2-K-Means/README.md)
+
+- NLP
+ - [Utangulizi wa NLP](../6-NLP/1-Introduction-to-NLP/README.md)
+ - [Kazi za NLP](../6-NLP/2-Tasks/README.md)
+ - [Tafsiri na Hisia](../6-NLP/3-Translation-Sentiment/README.md)
+ - [Maoni ya Hoteli 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+ - [Maoni ya Hoteli 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- Utabiri wa Mfululizo wa Wakati
+ - [Utangulizi wa Utabiri wa Mfululizo wa Wakati](../7-TimeSeries/1-Introduction/README.md)
+ - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+ - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- Kujifunza kwa Kuimarisha
+ - [Q-Learning](../8-Reinforcement/1-QLearning/README.md)
+ - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- Kujifunza Mashine Katika Dunia Halisi
+ - [Matumizi](../9-Real-World/1-Applications/README.md)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwepo kwa usahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutokuelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/for-teachers.md b/translations/sw/for-teachers.md
new file mode 100644
index 000000000..21b1d1b22
--- /dev/null
+++ b/translations/sw/for-teachers.md
@@ -0,0 +1,26 @@
+## Kwa Walimu
+
+Je, ungependa kutumia mtaala huu darasani kwako? Tafadhali jisikie huru!
+
+Kwa kweli, unaweza kuitumia ndani ya GitHub yenyewe kwa kutumia GitHub Classroom.
+
+Ili kufanya hivyo, fork repo hii. Utahitaji kuunda repo kwa kila somo, kwa hivyo utahitaji kutoa kila folda kwenye repo tofauti. Kwa njia hiyo, [GitHub Classroom](https://classroom.github.com/classrooms) inaweza kuchukua kila somo kando kando.
+
+[Maelekezo kamili](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) yatakupa wazo la jinsi ya kuandaa darasa lako.
+
+## Kutumia repo kama ilivyo
+
+Kama ungependa kutumia repo hii kama ilivyo sasa, bila kutumia GitHub Classroom, hilo linaweza kufanyika pia. Utahitaji kuwajulisha wanafunzi wako ni somo gani la kufanyia kazi pamoja.
+
+Katika muundo wa mtandaoni (Zoom, Teams, au nyinginezo) unaweza kuunda vyumba vya vikundi kwa ajili ya majaribio, na kuwaelekeza wanafunzi ili kuwasaidia kujiandaa kujifunza. Kisha waalike wanafunzi kwa ajili ya majaribio na kuwasilisha majibu yao kama 'issues' kwa wakati fulani. Unaweza kufanya vivyo hivyo na kazi za nyumbani, kama unataka wanafunzi wafanye kazi kwa ushirikiano wazi wazi.
+
+Ikiwa unapendelea muundo wa faragha zaidi, waulize wanafunzi wako wafork mtaala, somo kwa somo, kwenye repo zao za GitHub kama repo za faragha, na wakupatie ufikiaji. Kisha wanaweza kukamilisha majaribio na kazi za nyumbani kwa faragha na kukuwasilishia kupitia issues kwenye repo yako ya darasa.
+
+Kuna njia nyingi za kufanya hili lifanye kazi katika muundo wa darasa la mtandaoni. Tafadhali tujulishe ni nini kinachofanya kazi vizuri zaidi kwako!
+
+## Tafadhali tupe maoni yako!
+
+Tunataka kufanya mtaala huu ufanye kazi kwako na wanafunzi wako. Tafadhali tupe [maoni](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u).
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI zinazotumia mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au upungufu. Hati asili katika lugha yake ya awali inapaswa kuchukuliwa kuwa chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/quiz-app/README.md b/translations/sw/quiz-app/README.md
new file mode 100644
index 000000000..236835edb
--- /dev/null
+++ b/translations/sw/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Maswali
+
+Maswali haya ni ya kabla na baada ya mihadhara ya mtaala wa ML kwenye https://aka.ms/ml-beginners
+
+## Kuanzisha Mradi
+
+```
+npm install
+```
+
+### Hukusanya na kupakia upya kwa maendeleo
+
+```
+npm run serve
+```
+
+### Hukusanya na kupunguza kwa uzalishaji
+
+```
+npm run build
+```
+
+### Hufanya lint na kurekebisha faili
+
+```
+npm run lint
+```
+
+### Kubadilisha usanidi
+
+Tazama [Marejeleo ya Usanidi](https://cli.vuejs.org/config/).
+
+Shukrani: Asante kwa toleo la awali la programu hii ya maswali: https://github.com/arpan45/simple-quiz-vue
+
+## Kuweka kwenye Azure
+
+Hapa kuna mwongozo wa hatua kwa hatua kukusaidia kuanza:
+
+1. Fork Repositori ya GitHub
+Hakikisha msimbo wa programu yako ya wavuti ya static uko kwenye repositori yako ya GitHub. Fork repositori hii.
+
+2. Unda Azure Static Web App
+- Unda [akaunti ya Azure](http://azure.microsoft.com)
+- Nenda kwenye [portal ya Azure](https://portal.azure.com)
+- Bonyeza “Create a resource” na tafuta “Static Web App”.
+- Bonyeza “Create”.
+
+3. Sanidi Static Web App
+- Msingi: Usajili: Chagua usajili wako wa Azure.
+- Kikundi cha Rasilimali: Unda kikundi kipya cha rasilimali au tumia kilichopo.
+- Jina: Toa jina kwa programu yako ya wavuti ya static.
+- Kanda: Chagua kanda iliyo karibu na watumiaji wako.
+
+- #### Maelezo ya Uwekaji:
+- Chanzo: Chagua “GitHub”.
+- Akaunti ya GitHub: Ruhusu Azure kufikia akaunti yako ya GitHub.
+- Shirika: Chagua shirika lako la GitHub.
+- Repositori: Chagua repositori inayoshikilia programu yako ya wavuti ya static.
+- Tawi: Chagua tawi unalotaka kuweka kutoka.
+
+- #### Maelezo ya Ujenzi:
+- Presets za Ujenzi: Chagua mfumo ambao programu yako imejengwa (mfano, React, Angular, Vue, nk.).
+- Mahali pa Programu: Eleza folda inayoshikilia msimbo wa programu yako (mfano, / ikiwa iko kwenye mzizi).
+- Mahali pa API: Ikiwa una API, eleza mahali pake (hiari).
+- Mahali pa Matokeo: Eleza folda ambapo matokeo ya ujenzi yanazalishwa (mfano, build au dist).
+
+4. Kagua na Unda
+Kagua mipangilio yako na bonyeza “Create”. Azure itaweka rasilimali zinazohitajika na kuunda mtiririko wa kazi wa GitHub Actions kwenye repositori yako.
+
+5. Mtiririko wa Kazi wa GitHub Actions
+Azure itaweka faili ya mtiririko wa kazi wa GitHub Actions kwenye repositori yako (.github/workflows/azure-static-web-apps-.yml). Mtiririko huu utashughulikia mchakato wa ujenzi na uwekaji.
+
+6. Fuata Uwekaji
+Nenda kwenye kichupo cha “Actions” kwenye repositori yako ya GitHub.
+Unapaswa kuona mtiririko wa kazi unaoendesha. Mtiririko huu utajenga na kuweka programu yako ya wavuti ya static kwenye Azure.
+Baada ya mtiririko wa kazi kukamilika, programu yako itakuwa hai kwenye URL iliyotolewa ya Azure.
+
+### Faili ya Mfano ya Mtiririko wa Kazi
+
+Hapa kuna mfano wa jinsi faili ya mtiririko wa kazi wa GitHub Actions inaweza kuonekana:
+```
+name: Azure Static Web Apps CI/CD
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - main
+
+jobs:
+  build_and_deploy_job:
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v1
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }}
+          action: "upload"
+          app_location: "/quiz-app" # App source code path
+          api_location: "" # API source code path - optional
+          output_location: "dist" # Built app content directory - optional
+```
+
+### Rasilimali za Ziada
+- [Nyaraka za Azure Static Web Apps](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [Nyaraka za GitHub Actions](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokubaliana. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo sahihi. Kwa taarifa muhimu, tafsiri ya kibinadamu ya kitaalamu inapendekezwa. Hatutawajibika kwa kutoelewana au kutafsiri vibaya kunakotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/sketchnotes/LICENSE.md b/translations/sw/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..a2d50a62d
--- /dev/null
+++ b/translations/sw/sketchnotes/LICENSE.md
@@ -0,0 +1,141 @@
+Attribution-ShareAlike 4.0 International
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") sio kampuni ya sheria na haitoi huduma za kisheria au ushauri wa kisheria. Ugawaji wa leseni za umma za Creative Commons hauundi uhusiano wa wakili-mteja au uhusiano mwingine wowote. Creative Commons inatoa leseni zake na taarifa zinazohusiana kwenye msingi wa "kama ilivyo". Creative Commons haitoi dhamana yoyote kuhusu leseni zake, nyenzo yoyote iliyotolewa chini ya masharti na masharti yake, au taarifa yoyote inayohusiana. Creative Commons inakanusha dhima yote kwa uharibifu unaotokana na matumizi yao kwa kadiri inayowezekana.
+
+Kutumia Leseni za Umma za Creative Commons
+
+Leseni za umma za Creative Commons hutoa seti ya kawaida ya masharti na masharti ambayo waundaji na wamiliki wengine wa haki wanaweza kutumia kushiriki kazi za asili za uandishi na nyenzo nyingine zinazolindwa na hakimiliki na haki zingine maalum zilizoainishwa katika leseni ya umma hapa chini. Yafuatayo ni kwa madhumuni ya taarifa tu, si kamili, na si sehemu ya leseni zetu.
+
+ Maoni kwa watoaji leseni: Leseni zetu za umma zinakusudiwa kutumiwa na wale walioidhinishwa kutoa ruhusa kwa umma kutumia nyenzo kwa njia ambazo vinginevyo zinazuiliwa na hakimiliki na haki zingine maalum. Leseni zetu haziwezi kubatilishwa. Watoaji leseni wanapaswa kusoma na kuelewa masharti na masharti ya leseni wanayochagua kabla ya kuitumia. Watoaji leseni wanapaswa pia kupata haki zote muhimu kabla ya kutumia leseni zetu ili umma uweze kutumia nyenzo kama inavyotarajiwa. Watoaji leseni wanapaswa kuweka wazi nyenzo zozote ambazo haziko chini ya leseni. Hii inajumuisha nyenzo nyingine zilizopewa leseni ya CC, au nyenzo zinazotumiwa chini ya ubaguzi au upungufu wa hakimiliki. Maoni zaidi kwa watoaji leseni:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ Maoni kwa umma: Kwa kutumia moja ya leseni zetu za umma, mtoaji leseni anatoa ruhusa kwa umma kutumia nyenzo zilizopewa leseni chini ya masharti na masharti maalum. Ikiwa ruhusa ya mtoaji leseni si muhimu kwa sababu yoyote - kwa mfano, kwa sababu ya ubaguzi au upungufu wowote unaotumika wa hakimiliki - basi matumizi hayo hayadhibitiwi na leseni. Leseni zetu zinatoa ruhusa tu chini ya hakimiliki na haki zingine maalum ambazo mtoaji leseni ana mamlaka ya kutoa. Matumizi ya nyenzo zilizopewa leseni yanaweza bado kuzuiliwa kwa sababu zingine, pamoja na kwa sababu wengine wana hakimiliki au haki zingine katika nyenzo. Mtoaji leseni anaweza kufanya maombi maalum, kama vile kuomba mabadiliko yote yamewekwa alama au kuelezewa. Ingawa si lazima kwa leseni zetu, unahimizwa kuheshimu maombi hayo pale inapofaa. Maoni zaidi kwa umma:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Leseni ya Umma ya Creative Commons Attribution-ShareAlike 4.0 International
+
+Kwa kutumia Haki Zilizotolewa (zilizoelezwa hapa chini), Unakubali na unakubaliana kufungwa na masharti na masharti ya Leseni hii ya Umma ya Creative Commons Attribution-ShareAlike 4.0 International ("Leseni ya Umma"). Kwa kadiri Leseni hii ya Umma inaweza kutafsiriwa kama mkataba, Unapewa Haki Zilizotolewa kwa kuzingatia kukubalika kwako kwa masharti na masharti haya, na Mtoaji leseni anakupa haki hizo kwa kuzingatia manufaa ambayo Mtoaji leseni anapata kutokana na kuweka Nyenzo Zilizotolewa chini ya masharti haya.
+
+Sehemu ya 1 -- Ufafanuzi.
+
+ a. Nyenzo Zilizobadilishwa inamaanisha nyenzo zinazolindwa na Hakimiliki na Haki Sawa ambazo zimetokana na au zinatokana na Nyenzo Zilizotolewa na ambapo Nyenzo Zilizotolewa zimetafsiriwa, kubadilishwa, kupangwa, kugeuzwa, au kurekebishwa kwa njia inayohitaji ruhusa chini ya Hakimiliki na Haki Sawa zinazoshikiliwa na Mtoaji leseni. Kwa madhumuni ya Leseni hii ya Umma, ambapo Nyenzo Zilizotolewa ni kazi ya muziki, utendaji, au rekodi ya sauti, Nyenzo Zilizobadilishwa huzalishwa kila wakati ambapo Nyenzo Zilizotolewa zimeunganishwa kwa uhusiano wa wakati na picha inayosonga.
+
+ b. Leseni ya Adapter inamaanisha leseni Unayotumia kwa Hakimiliki yako na Haki Sawa katika michango yako kwa Nyenzo Zilizobadilishwa kwa mujibu wa masharti na masharti ya Leseni hii ya Umma.
+
+ c. Leseni Inayolingana ya BY-SA inamaanisha leseni iliyoorodheshwa kwenye creativecommons.org/compatiblelicenses, iliyoidhinishwa na Creative Commons kama inayolingana kimsingi na Leseni hii ya Umma.
+
+ d. Hakimiliki na Haki Sawa inamaanisha hakimiliki na/au haki sawa zinazohusiana sana na hakimiliki ikiwa ni pamoja na, bila kikomo, utendaji, utangazaji, rekodi ya sauti, na Haki za Hifadhidata za Sui Generis, bila kujali jinsi haki hizo zinavyoitwa au kuainishwa. Kwa madhumuni ya Leseni hii ya Umma, haki zilizobainishwa katika Sehemu ya 2(b)(1)-(2) si Hakimiliki na Haki Sawa.
+
+ e. Hatua za Kiteknolojia Zinazofaa inamaanisha hatua ambazo, bila mamlaka sahihi, haziwezi kupitishwa chini ya sheria zinazotimiza majukumu chini ya Kifungu cha 11 cha Mkataba wa Hakimiliki wa WIPO uliopitishwa Desemba 20, 1996, na/au mikataba mingine ya kimataifa inayofanana.
+
+ f. Ubaguzi na Upungufu inamaanisha matumizi ya haki, biashara ya haki, na/au ubaguzi mwingine wowote au upungufu wa Hakimiliki na Haki Sawa zinazotumika kwa matumizi yako ya Nyenzo Zilizotolewa.
+
+ g. Vipengele vya Leseni inamaanisha sifa za leseni zilizoorodheshwa katika jina la Leseni ya Umma ya Creative Commons. Vipengele vya Leseni ya Umma hii ni Attribution na ShareAlike.
+
+ h. Nyenzo Zilizotolewa inamaanisha kazi ya sanaa au fasihi, hifadhidata, au nyenzo nyingine ambayo Mtoaji leseni alitumia Leseni hii ya Umma.
+
+ i. Haki Zilizotolewa inamaanisha haki zilizokubaliwa kwako chini ya masharti na masharti ya Leseni hii ya Umma, ambazo zimepunguzwa kwa Hakimiliki na Haki Sawa zinazotumika kwa matumizi yako ya Nyenzo Zilizotolewa na ambazo Mtoaji leseni ana mamlaka ya kutoa leseni.
+
+ j. Mtoaji leseni inamaanisha mtu binafsi au chombo kinachotoa haki chini ya Leseni hii ya Umma.
+
+ k. Kushiriki inamaanisha kutoa nyenzo kwa umma kwa njia yoyote au mchakato unaohitaji ruhusa chini ya Haki Zilizotolewa, kama vile uzazi, onyesho la umma, utendaji wa umma, usambazaji, uenezaji, mawasiliano, au uagizaji, na kufanya nyenzo zipatikane kwa umma ikiwa ni pamoja na kwa njia ambazo wanachama wa umma wanaweza kufikia nyenzo kutoka mahali na wakati waliochagua kibinafsi.
+
+ l. Haki za Hifadhidata za Sui Generis inamaanisha haki nyingine isipokuwa hakimiliki zinazotokana na Direktiva 96/9/EC ya Bunge la Ulaya na ya Baraza la Machi 11, 1996 juu ya ulinzi wa kisheria wa hifadhidata, kama ilivyorekebishwa na/au kufanikiwa, pamoja na haki nyingine zinazolingana kimsingi popote duniani.
+
+ m. Wewe inamaanisha mtu binafsi au chombo kinachotumia Haki Zilizotolewa chini ya Leseni hii ya Umma. Yako ina maana inayolingana.
+
+Sehemu ya 2 -- Wigo.
+
+ a. Utoaji wa leseni.
+
+ 1. Kwa mujibu wa masharti na masharti ya Leseni hii ya Umma, Mtoaji leseni anakupa leseni isiyoweza kubatilishwa, isiyo ya kipekee, isiyo na malipo ya kifalme, isiyo na leseni ndogo ya kutumia Haki Zilizotolewa katika Nyenzo Zilizotolewa ili:
+
+ a. kuzalisha na Kushiriki Nyenzo Zilizotolewa, kwa sehemu au kwa ujumla; na
+
+ b. kuzalisha, kuzalisha tena, na Kushiriki Nyenzo Zilizobadilishwa.
+
+ 2. Ubaguzi na Upungufu. Kwa kuepuka shaka, ambapo Ubaguzi na Upungufu unatumika kwa matumizi yako, Leseni hii ya Umma haifanyi kazi, na Huna haja ya kufuata masharti na masharti yake.
+
+ 3. Muda. Muda wa Leseni hii ya Umma umeainishwa katika Sehemu ya 6(a).
+
+ 4. Vyombo vya habari na miundo; marekebisho ya kiufundi yanayoruhusiwa. Mtoaji leseni anakuruhusu kutumia Haki Zilizotolewa katika vyombo vya habari na miundo yote ikiwa inajulikana sasa au itakayoundwa baadaye, na kufanya marekebisho ya kiufundi muhimu kufanya hivyo. Mtoaji leseni anakataa na/au anakubaliana kutothibitisha haki yoyote au mamlaka ya kukuzuia kufanya marekebisho ya kiufundi muhimu kutumia Haki Zilizotolewa, ikiwa ni pamoja na marekebisho ya kiufundi muhimu kupitisha Hatua za Kiteknolojia Zinazofaa. Kwa madhumuni ya Leseni hii ya Umma, kufanya marekebisho yaliyoidhinishwa na Sehemu hii 2(a)(4) hakuzalishi Nyenzo Zilizobadilishwa.
+
+ 5. Wapokeaji wa chini.
+
+ a. Ofa kutoka kwa Mtoaji leseni -- Nyenzo Zilizotolewa. Kila mpokeaji wa Nyenzo Zilizotolewa hupokea moja kwa moja ofa kutoka kwa Mtoaji leseni ya kutumia Haki Zilizotolewa chini ya masharti na masharti ya Leseni hii ya Umma.
+
+ b. Ofa ya ziada kutoka kwa Mtoaji leseni -- Nyenzo Zilizobadilishwa. Kila mpokeaji wa Nyenzo Zilizobadilishwa kutoka kwako hupokea moja kwa moja ofa kutoka kwa Mtoaji leseni ya kutumia Haki Zilizotolewa katika Nyenzo Zilizobadilishwa chini ya masharti ya Leseni ya Adapter unayotumia.
+
+ c. Hakuna vikwazo vya chini. Huwezi kutoa au kuweka masharti au masharti yoyote ya ziada au tofauti, au kutumia Hatua za Kiteknolojia Zinazofaa, kwa Nyenzo Zilizotolewa ikiwa kufanya hivyo kunazuia matumizi ya Haki Zilizotolewa na mpokeaji yeyote wa Nyenzo Zilizotolewa.
+
+ 6. Hakuna idhini. Hakuna kitu katika Leseni hii ya Umma kinachounda au kinaweza kufasiriwa kama ruhusa ya kuthibitisha au kuashiria kwamba Wewe ni, au kwamba matumizi yako ya Nyenzo Zilizotolewa yanahusiana na, au yamefadhiliwa, kuidhinishwa, au kupewa hadhi rasmi na, Mtoaji leseni au wengine walioteuliwa kupokea sifa kama ilivyoainishwa katika Sehemu ya 3(a)(1)(A)(i).
+
+ b. Haki nyingine.
+
+ 1. Haki za kimaadili, kama vile haki ya uadilifu, hazijatolewa leseni chini ya Leseni hii ya Umma, wala haki za umaarufu, faragha, na/au haki zingine sawa za utu; hata hivyo, kwa kadiri inavyowezekana, Mtoaji leseni anakataa na/au anakubaliana kutothibitisha haki zozote kama hizo zinazoshikiliwa na Mtoaji leseni kwa kadiri ndogo inavyohitajika ili kukuruhusu kutumia Haki Zilizotolewa, lakini si vinginevyo.
+
+ 2. Haki za patent na alama za biashara hazijatolewa leseni chini ya Leseni hii ya Umma.
+
+ 3. Kwa kadiri inavyowezekana, Mtoaji leseni anakataa haki yoyote ya kukusanya malipo kutoka kwako kwa matumizi ya Haki Zilizotolewa, iwe moja kwa moja au kupitia jamii ya ukusanyaji chini ya mpango wowote wa hiari au wa kisheria wa lazima. Katika kesi zote nyingine Mtoaji leseni anahifadhi wazi haki yoyote ya kukusanya malipo hayo.
+
+Sehemu ya 3 -- Masharti ya Leseni.
+
+Matumizi yako ya Haki Zilizotolewa yamewekwa wazi kuwa chini ya masharti yafuatayo.
+
+ a. Attribution.
+
+ 1. Ikiwa Unashiriki Nyenzo Zilizotolewa (ikiwa ni pamoja na kwa njia iliyobadilishwa), Lazima:
+
+ a. kuhifadhi yafuatayo ikiwa inatolewa na Mtoaji leseni na Nyenzo Zilizotolewa:
+
+ i. kitambulisho cha muundaji wa Nyenzo Zilizotolewa na wengine wowote walioteuliwa kupokea sifa, kwa njia yoyote inayofaa iliyoombwa na Mtoaji leseni (ikiwa ni pamoja na kwa jina bandia ikiwa limewekwa);
+
+ ii. taarifa ya hakimiliki;
+
+ iii. taarifa inayorejelea Leseni hii ya Umma;
+
+ iv. taarifa inayorejelea kanusho la dhamana;
+
+ v. URI au kiungo cha Nyenzo Zilizotolewa kwa kadiri inavyowezekana;
+
+ b. kuonyesha ikiwa Ulibadilisha Nyenzo Zilizotolewa na kuhifadhi dalili ya mabadiliko yoyote ya awali; na
+
+ c. kuonyesha kuwa Nyenzo Zilizotolewa zimetolewa leseni chini ya Leseni hii ya Umma, na kujumuisha maandishi ya, au URI au kiungo cha, Leseni hii ya Umma.
+
+ 2. Unaweza kutimiza masharti katika Sehemu ya 3(a)(1) kwa njia yoyote inayofaa kulingana na chombo, njia, na muktadha ambao Unashiriki Nyenzo Zilizotolewa. Kwa mfano, inaweza kuwa busara kutimiza masharti kwa kutoa URI au kiungo kwa rasilimali inayojumuisha taarifa inayohitajika.
+
+ 3. Ikiwa imeombwa na Mtoaji leseni, Lazima uondoe taarifa yoyote inayohitajika na Sehemu ya 3(a)(1)(A) kwa kadiri inavyowezekana.
+
+ b. ShareAlike.
+
+ Mbali na masharti katika Sehemu ya 3(a), ikiwa Unashiriki Nyenzo Zilizobadilishwa Unazozalisha, masharti yafuatayo pia yanatumika.
+
+ 1. Leseni ya Adapter Unayotumia lazima iwe leseni ya Creative Commons yenye Vipengele vya Leseni sawa, toleo hili au baadaye, au Leseni Inayolingana ya BY-SA.
+
+ 2. Lazima ujumuisha maandishi ya, au URI au kiungo cha, Leseni ya Adapter Unayotumia. Unaweza kutimiza sharti hili kwa njia yoyote inayofaa kulingana na chombo, njia, na muktadha ambao Unashiriki Nyenzo Zilizobadilishwa.
+
+ 3. Huwezi kutoa au kuweka masharti au masharti yoyote ya ziada au tofauti, au kutumia Hatua za Kiteknolojia Zinazofaa, kwa Nyenzo Zilizobadilishwa zinazozuia matumizi ya haki zilizotolewa chini ya Leseni ya Adapter Unayotumia.
+
+Sehemu ya 4 -- Haki za Hifadhidata za Sui Generis.
+
+Ambapo Haki Zilizotolewa zinajumuisha Haki za Hifadhidata za Sui Generis zinazotumika kwa matumizi yako ya Nyenzo Zilizotolewa:
+
+ a. kwa kuepuka shaka, Sehemu ya 2(a)(1) inakupa haki ya kutoa, kutumia tena, kuzalisha tena, na Kushiriki sehemu zote au sehemu kubwa ya yaliyomo kwenye hifadhidata;
+
+ b. ikiwa Unajumuisha sehemu zote au sehemu kubwa ya yaliyomo kwenye hifadhidata katika hifadhidata ambayo Una Haki za Hifadhidata za Sui Generis, basi hifadhidata ambayo Una Haki za Hifadhidata za Sui Generis (lakini si yaliyomo binafsi) ni Nyenzo Zilizobadilishwa,
+
+ ikiwa ni pamoja na kwa madhumuni ya Sehemu ya 3(b); na
+ c. Lazima ufuate masharti katika Sehemu ya 3(a) ikiwa Unashiriki sehemu zote au sehemu kubwa ya yaliyomo kwenye hifadhidata.
+
+Kwa kuepuka shaka, Sehemu hii ya 4 inaongeza na haibadilishi wajibu wako chini ya Leseni hii ya Umma ambapo Haki Zilizotolewa zinajumuisha Hakimiliki na Haki Sawa.
+
+Sehemu ya 5 -- Kanusho la Dhamana na Kizuizi cha Dhima.
+
+ a. ISIPOKUWA KAMA INAVYOFANYWA TOFAUTI NA MTOAJI LESENI, KWA KADIRI INAVYOWEZEKANA, MTOAJI LESENI ANATOA NYENZO ZILIZOTOLEWA KAMA ILIVYO NA KAMA INAVYOPATIKANA, NA HAFANYI MAELEZO AU DHAMANA YOYOTE YA AINA YOYOTE KUHUSU NYENZO ZILIZOTOLEWA, IKIWA NI MAELEZO, YANAYOPEWA, KISHERIA, AU NYINGINE. HII INAJUMUISHA, BILA KIKOMO, DHAMANA ZA UMILIKI, UWEZO WA KUUZA, KUFANIK
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za kutafsiri za AI. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kuwa tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokamilika. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa taarifa muhimu, tafsiri ya kitaalamu ya kibinadamu inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/sw/sketchnotes/README.md b/translations/sw/sketchnotes/README.md
new file mode 100644
index 000000000..297219da6
--- /dev/null
+++ b/translations/sw/sketchnotes/README.md
@@ -0,0 +1,10 @@
+Sketchnotes zote za mtaala zinaweza kupakuliwa hapa.
+
+🖨 Kwa uchapishaji katika azimio la juu, matoleo ya TIFF yanapatikana kwenye [repo hii](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff).
+
+🎨 Imeundwa na: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**Kanusho**:
+Hati hii imetafsiriwa kwa kutumia huduma za tafsiri za AI za mashine. Ingawa tunajitahidi kwa usahihi, tafadhali fahamu kwamba tafsiri za kiotomatiki zinaweza kuwa na makosa au kutokuwa sahihi. Hati ya asili katika lugha yake ya kiasili inapaswa kuzingatiwa kama chanzo cha mamlaka. Kwa habari muhimu, tafsiri ya kibinadamu ya kitaalam inapendekezwa. Hatutawajibika kwa kutoelewana au tafsiri zisizo sahihi zinazotokana na matumizi ya tafsiri hii.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/1-intro-to-ML/README.md b/translations/tr/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..a240ed1e2
--- /dev/null
+++ b/translations/tr/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# Makine Öğrenimine Giriş
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[](https://youtu.be/6mSx_KJxcHI "Yeni Başlayanlar İçin Makine Öğrenimi - Yeni Başlayanlar İçin Makine Öğrenimine Giriş")
+
+> 🎥 Bu dersi işleyen kısa bir video için yukarıdaki resme tıklayın.
+
+Yeni başlayanlar için klasik makine öğrenimi konusundaki bu kursa hoş geldiniz! Bu konuya tamamen yeni olsanız da, deneyimli bir ML uygulayıcısı olarak belirli bir alanı tazelemek isteseniz de, bize katıldığınız için mutluyuz! ML çalışmanıza dostça bir başlangıç noktası oluşturmak istiyoruz ve [geri bildiriminizi](https://github.com/microsoft/ML-For-Beginners/discussions) değerlendirmek, yanıtlamak ve dahil etmekten memnuniyet duyarız.
+
+[](https://youtu.be/h0e2HAPTGF4 "ML'ye Giriş")
+
+> 🎥 MIT'den John Guttag'ın makine öğrenimini tanıttığı video için yukarıdaki resme tıklayın
+
+---
+## Makine Öğrenimi ile Başlamak
+
+Bu müfredata başlamadan önce, bilgisayarınızı yerel olarak notebook'ları çalıştırmaya hazır hale getirmeniz gerekiyor.
+
+- **Bilgisayarınızı bu videolarla yapılandırın**. Sisteminizde [Python nasıl kurulur](https://youtu.be/CXZYvNRIAKM) ve geliştirme için bir [metin editörü nasıl ayarlanır](https://youtu.be/EU8eayHWoZg) öğrenmek için aşağıdaki bağlantıları kullanın.
+- **Python öğrenin**. Ayrıca bu kursta kullandığımız, veri bilimciler için faydalı bir programlama dili olan [Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott) hakkında temel bir anlayışa sahip olmanız önerilir.
+- **Node.js ve JavaScript öğrenin**. Bu kursta web uygulamaları oluştururken birkaç kez JavaScript kullanacağız, bu nedenle [node](https://nodejs.org) ve [npm](https://www.npmjs.com/) kurulu olmalı ve hem Python hem de JavaScript geliştirme için [Visual Studio Code](https://code.visualstudio.com/) kullanılabilir olmalıdır.
+- **GitHub hesabı oluşturun**. Bizi burada [GitHub](https://github.com) üzerinde bulduğunuza göre, muhtemelen bir hesabınız vardır, ancak yoksa bir hesap oluşturun ve bu müfredatı kendi kullanımınıza forklayın. (Bize bir yıldız vermekten çekinmeyin 😊)
+- **Scikit-learn'i keşfedin**. Bu derslerde referans verdiğimiz bir dizi ML kütüphanesi olan [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) ile tanışın.
+
+---
+## Makine Öğrenimi Nedir?
+
+'Makine öğrenimi' terimi, günümüzün en popüler ve sık kullanılan terimlerinden biridir. Teknolojiye aşina iseniz, hangi alanda çalışıyor olursanız olun, bu terimi en az bir kez duymuş olma ihtimaliniz oldukça yüksektir. Ancak, makine öğreniminin mekanikleri çoğu insan için bir gizemdir. Makine öğrenimine yeni başlayan biri için konu bazen bunaltıcı olabilir. Bu nedenle, makine öğreniminin ne olduğunu anlamak ve pratik örneklerle adım adım öğrenmek önemlidir.
+
+---
+## Hype Eğrisi
+
+
+
+> Google Trends, 'makine öğrenimi' teriminin son zamanlardaki 'hype eğrisini' gösteriyor
+
+---
+## Gizemli Bir Evren
+
+Büyüleyici gizemlerle dolu bir evrende yaşıyoruz. Stephen Hawking, Albert Einstein ve daha birçok büyük bilim insanı, etrafımızdaki dünyanın gizemlerini ortaya çıkaran anlamlı bilgileri aramak için hayatlarını adadılar. Bu, öğrenmenin insan halidir: Bir insan çocuğu yeni şeyler öğrenir ve büyüdükçe dünyasının yapısını yıl yıl keşfeder.
+
+---
+## Çocuğun Beyni
+
+Bir çocuğun beyni ve duyuları, çevresindeki gerçekleri algılar ve hayatın gizli kalıplarını yavaş yavaş öğrenir, bu da çocuğun öğrenilen kalıpları tanımlamak için mantıksal kurallar oluşturmasına yardımcı olur. İnsan beyninin öğrenme süreci, insanları bu dünyanın en sofistike canlısı yapar. Gizli kalıpları keşfederek sürekli öğrenmek ve ardından bu kalıplar üzerinde yenilik yapmak, yaşamımız boyunca kendimizi daha iyi hale getirmemizi sağlar. Bu öğrenme kapasitesi ve evrimleşme yeteneği, [beyin plastisitesi](https://www.simplypsychology.org/brain-plasticity.html) adlı bir kavramla ilgilidir. Yüzeysel olarak, insan beyninin öğrenme süreci ile makine öğrenimi kavramları arasında bazı motive edici benzerlikler çizebiliriz.
+
+---
+## İnsan Beyni
+
+[İnsan beyni](https://www.livescience.com/29365-human-brain.html), gerçek dünyadan şeyleri algılar, algılanan bilgileri işler, rasyonel kararlar alır ve duruma göre belirli eylemler gerçekleştirir. Buna zeki davranmak diyoruz. Zeki davranış sürecinin bir benzerini bir makineye programladığımızda, buna yapay zeka (AI) denir.
+
+---
+## Bazı Terminoloji
+
+Terimler karıştırılabilse de, makine öğrenimi (ML), yapay zekanın önemli bir alt kümesidir. **ML, rasyonel karar verme sürecini doğrulamak için algılanan verilerden anlamlı bilgiler ortaya çıkarmak ve gizli kalıpları bulmak için özel algoritmalar kullanmakla ilgilidir**.
+
+---
+## AI, ML, Derin Öğrenme
+
+
+
+> AI, ML, derin öğrenme ve veri bilimi arasındaki ilişkileri gösteren bir diyagram. [Jen Looper](https://twitter.com/jenlooper) tarafından, [bu grafikten](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining) ilham alınarak hazırlanan infografik
+
+---
+## Kapsanacak Konular
+
+Bu müfredatta, yeni başlayan birinin bilmesi gereken makine öğreniminin temel kavramlarını ele alacağız. Öğrencilerin temelleri öğrenmek için kullandığı mükemmel bir kütüphane olan Scikit-learn'ü kullanarak 'klasik makine öğrenimi' dediğimiz konuları kapsayacağız. Yapay zeka veya derin öğrenme gibi daha geniş kavramları anlamak için makine öğreniminde güçlü bir temel şarttır ve biz de bu temeli burada sunmak istiyoruz.
+
+---
+## Bu Kursta Öğrenecekleriniz:
+
+- makine öğreniminin temel kavramları
+- ML'nin tarihi
+- ML ve adalet
+- regresyon ML teknikleri
+- sınıflandırma ML teknikleri
+- kümeleme ML teknikleri
+- doğal dil işleme ML teknikleri
+- zaman serisi tahminleme ML teknikleri
+- pekiştirmeli öğrenme
+- ML'nin gerçek dünya uygulamaları
+
+---
+## Kapsamayacağımız Konular
+
+- derin öğrenme
+- sinir ağları
+- AI
+
+Daha iyi bir öğrenme deneyimi sağlamak için, sinir ağlarının karmaşıklıklarından, 'derin öğrenme' - sinir ağları kullanarak çok katmanlı model oluşturma - ve AI'dan kaçınacağız, bunları farklı bir müfredatta ele alacağız. Ayrıca, bu daha geniş alanın bir yönüne odaklanmak için gelecek veri bilimi müfredatını sunacağız.
+
+---
+## Neden Makine Öğrenimi Çalışmalıyız?
+
+Sistemler perspektifinden makine öğrenimi, verilerden gizli kalıpları öğrenerek akıllı kararlar almaya yardımcı olan otomatik sistemlerin oluşturulması olarak tanımlanır.
+
+Bu motivasyon, insan beyninin dış dünyadan algıladığı verilere dayanarak belirli şeyleri nasıl öğrendiğinden gevşek bir şekilde ilham almıştır.
+
+✅ Bir işletmenin, sabit kodlanmış kurallara dayalı bir motor oluşturmak yerine neden makine öğrenimi stratejilerini kullanmak isteyebileceğini düşünün.
+
+---
+## Makine Öğrenimi Uygulamaları
+
+Makine öğrenimi uygulamaları artık hemen her yerde ve akıllı telefonlarımız, bağlı cihazlarımız ve diğer sistemler tarafından üretilen veriler kadar yaygın. En son teknoloji makine öğrenimi algoritmalarının muazzam potansiyelini göz önünde bulundurarak, araştırmacılar, çok boyutlu ve çok disiplinli gerçek yaşam problemlerini büyük olumlu sonuçlarla çözme yeteneklerini araştırıyorlar.
+
+---
+## Uygulamalı ML Örnekleri
+
+**Makine öğrenimini birçok şekilde kullanabilirsiniz**:
+
+- Bir hastanın tıbbi geçmişinden veya raporlarından hastalık olasılığını tahmin etmek için.
+- Hava durumu verilerini kullanarak hava olaylarını tahmin etmek için.
+- Bir metnin duyarlılığını anlamak için.
+- Propagandanın yayılmasını durdurmak için sahte haberleri tespit etmek için.
+
+Finans, ekonomi, yer bilimi, uzay keşfi, biyomedikal mühendislik, bilişsel bilim ve hatta beşeri bilimler alanları, alanlarının zorlu, veri işleme ağırlıklı sorunlarını çözmek için makine öğrenimini benimsemiştir.
+
+---
+## Sonuç
+
+Makine öğrenimi, gerçek dünyadan veya üretilmiş verilerden anlamlı içgörüler bularak kalıp keşfetme sürecini otomatikleştirir. İş, sağlık ve finans uygulamaları da dahil olmak üzere birçok alanda son derece değerli olduğunu kanıtlamıştır.
+
+Yakın gelecekte, makine öğreniminin temellerini anlamak, yaygın olarak benimsenmesi nedeniyle herhangi bir alandaki insanlar için bir zorunluluk haline gelecektir.
+
+---
+# 🚀 Meydan Okuma
+
+Kağıt üzerinde veya [Excalidraw](https://excalidraw.com/) gibi bir çevrimiçi uygulama kullanarak, AI, ML, derin öğrenme ve veri bilimi arasındaki farklara ilişkin anlayışınızı gösteren bir şema çizin. Bu tekniklerin her birinin çözmekte iyi olduğu problemlere dair birkaç fikir de ekleyin.
+
+# [Ders Sonrası Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# İnceleme & Kendi Kendine Çalışma
+
+Bulutta ML algoritmalarıyla nasıl çalışabileceğiniz hakkında daha fazla bilgi edinmek için bu [Öğrenme Yolunu](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott) takip edin.
+
+ML'nin temelleri hakkında bir [Öğrenme Yolu](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott) alın.
+
+---
+# Ödev
+
+[Başlamak için](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlaşılmalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/1-intro-to-ML/assignment.md b/translations/tr/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..97ede70f3
--- /dev/null
+++ b/translations/tr/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# Başlamaya Hazır Ol
+
+## Talimatlar
+
+Bu notlandırılmamış ödevde, Python bilginizi tazelemeniz ve ortamınızı notebook'ları çalıştırabilecek şekilde kurup çalışır hale getirmeniz gerekiyor.
+
+Bu [Python Öğrenme Yolunu](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott) takip edin ve ardından bu tanıtıcı videoları izleyerek sistemlerinizi kurun:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/2-history-of-ML/README.md b/translations/tr/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..3d2441ca3
--- /dev/null
+++ b/translations/tr/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# Makine Öğrenmesinin Tarihi
+
+
+> Sketchnote: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[Makine Öğrenmesinin Tarihi (video)](https://youtu.be/N6wxM4wZ7V0 "Başlangıç seviyesinde ML - Makine Öğrenmesinin Tarihi")
+
+> 🎥 Bu dersi içeren kısa bir video için yukarıdaki bağlantıya tıklayın.
+
+Bu derste, makine öğrenmesi ve yapay zekanın tarihindeki önemli dönüm noktalarını inceleyeceğiz.
+
+Yapay zekanın (AI) bir alan olarak tarihi, makine öğrenmesinin tarihi ile iç içedir, çünkü ML'yi destekleyen algoritmalar ve hesaplama ilerlemeleri AI'nın gelişimine katkıda bulunmuştur. Bu alanların ayrı araştırma alanları olarak 1950'lerde şekillenmeye başladığını hatırlamak faydalıdır; ancak önemli [algoritmik, istatistiksel, matematiksel, hesaplama ve teknik keşifler](https://wikipedia.org/wiki/Timeline_of_machine_learning) bu dönemin hem öncesinde hem de sonrasında gerçekleşmiştir. Aslında, insanlar bu sorular hakkında [yüzlerce yıldır](https://wikipedia.org/wiki/History_of_artificial_intelligence) düşünmektedir: bu makale, 'düşünen makine' fikrinin tarihsel entelektüel temellerini tartışmaktadır.
+
+---
+## Önemli Keşifler
+
+- 1763, 1812 [Bayes Teoremi](https://wikipedia.org/wiki/Bayes%27_theorem) ve öncülleri. Bu teorem ve uygulamaları, bir olayın gerçekleşme olasılığını önceki bilgiye dayanarak tanımlayan çıkarımın temelini oluşturur (formül, listenin hemen ardından verilmiştir).
+- 1805 [En Küçük Kareler Teorisi](https://wikipedia.org/wiki/Least_squares) Fransız matematikçi Adrien-Marie Legendre tarafından. Bu teori, veri uyumlamada yardımcı olur ve Regresyon birimimizde öğreneceksiniz.
+- 1913 [Markov Zincirleri](https://wikipedia.org/wiki/Markov_chain), Rus matematikçi Andrey Markov'un adını taşır ve önceki bir duruma dayalı olarak olası olaylar dizisini tanımlar.
+- 1957 [Perceptron](https://wikipedia.org/wiki/Perceptron) Amerikan psikolog Frank Rosenblatt tarafından icat edilen bir tür lineer sınıflandırıcıdır ve derin öğrenmede ilerlemelerin temelini oluşturur.
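+
+Fikir vermesi açısından, Bayes Teoremi'nin standart biçimi şöyledir; B kanıtı gözlemlendiğinde A olayının olasılığını önsel bilgiyle ilişkilendirir:
+
+$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$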
+
+---
+
+- 1967 [En Yakın Komşu](https://wikipedia.org/wiki/Nearest_neighbor) başlangıçta rota haritalamak için tasarlanmış bir algoritmadır. ML bağlamında desenleri tespit etmek için kullanılır.
+- 1970 [Geri Yayılım](https://wikipedia.org/wiki/Backpropagation) [ileri beslemeli sinir ağlarını](https://wikipedia.org/wiki/Feedforward_neural_network) eğitmek için kullanılır.
+- 1982 [Yinelemeli Sinir Ağları](https://wikipedia.org/wiki/Recurrent_neural_network) ileri beslemeli sinir ağlarından türetilen ve zamansal grafikler oluşturan yapay sinir ağlarıdır.
+
+✅ Biraz araştırma yapın. ML ve AI tarihinde öne çıkan diğer tarihler nelerdir?
+
+---
+## 1950: Düşünen Makineler
+
+2019 yılında [kamuoyu tarafından](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) 20. yüzyılın en büyük bilim insanı olarak seçilen gerçekten olağanüstü bir kişi olan Alan Turing, 'düşünebilen bir makine' kavramının temellerini atmaya yardımcı olduğu için takdir edilir. Bu kavramın ampirik kanıtlarına duyduğu ihtiyaç ve karşıt görüşlerle başa çıkmak için [Turing Testi](https://www.bbc.com/news/technology-18475646) oluşturdu; bu testi NLP derslerimizde keşfedeceksiniz.
+
+---
+## 1956: Dartmouth Yaz Araştırma Projesi
+
+"Yapay zeka alanı için dönüm noktası olan Dartmouth Yaz Araştırma Projesi," ve burada 'yapay zeka' terimi icat edilmiştir ([kaynak](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth)).
+
+> Öğrenmenin her yönü veya zekanın başka herhangi bir özelliği, ilke olarak, bir makinenin onu simüle edebileceği kadar kesin bir şekilde tanımlanabilir.
+
+---
+
+Baş araştırmacı, matematik profesörü John McCarthy, "öğrenmenin her yönünün veya zekanın başka herhangi bir özelliğinin, bir makinenin onu simüle edebileceği kadar kesin tanımlanabileceği" hipotezine dayanarak ilerlemeyi umuyordu. Katılımcılar arasında alanın önde gelen isimlerinden Marvin Minsky de vardı.
+
+Çalıştay, "sembolik yöntemlerin yükselişi, sınırlı alanlara odaklanan sistemler (erken uzman sistemler) ve çıkarımsal sistemler ile tümevarımsal sistemler arasındaki tartışmaların" başlatılması ve teşvik edilmesiyle tanınır ([kaynak](https://wikipedia.org/wiki/Dartmouth_workshop)).
+
+---
+## 1956 - 1974: "Altın Yıllar"
+
+1950'lerden 1970'lerin ortalarına kadar, AI'nın birçok sorunu çözebileceği umuduyla büyük bir iyimserlik hakimdi. 1967'de Marvin Minsky, "Bir nesil içinde ... 'yapay zeka' yaratma sorunu büyük ölçüde çözülecek" diye güvenle belirtti (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall).
+
+Doğal dil işleme araştırmaları gelişti, arama daha rafine ve güçlü hale geldi ve 'mikro-dünyalar' konsepti oluşturuldu, burada basit görevler düz dil talimatları kullanılarak tamamlandı.
+
+---
+
+Araştırmalar devlet kurumları tarafından iyi finanse edildi, hesaplama ve algoritmalar konusunda ilerlemeler kaydedildi ve akıllı makinelerin prototipleri inşa edildi. Bu makinelerden bazıları şunlardır:
+
+* [Shakey the robot](https://wikipedia.org/wiki/Shakey_the_robot), görevleri 'akıllıca' nasıl gerçekleştireceğine karar verebilen ve manevra yapabilen bir robottu.
+
+ 
+ > 1972'de Shakey
+
+---
+
+* Eliza, erken bir 'chatterbot', insanlarla sohbet edebilen ve ilkel bir 'terapist' olarak hareket edebilen bir robottu. Eliza hakkında daha fazla bilgiyi NLP derslerimizde öğreneceksiniz.
+
+ 
+ > Bir sohbet botu olan Eliza'nın bir versiyonu
+
+---
+
+* "Blocks world" blokların istiflenip sıralanabileceği ve makinelerin karar vermeyi öğrenme deneylerinin test edilebileceği bir mikro-dünya örneğiydi. [SHRDLU](https://wikipedia.org/wiki/SHRDLU) gibi kütüphanelerle yapılan ilerlemeler dil işlemenin ilerlemesine yardımcı oldu.
+
+  [SHRDLU ile blocks world (video)](https://www.youtube.com/watch?v=QAJz4YKUwqw "SHRDLU ile blocks world")
+
+  > 🎥 Yukarıdaki bağlantıya tıklayarak bir video izleyin: SHRDLU ile Blocks world
+
+---
+## 1974 - 1980: "AI Kışı"
+
+1970'lerin ortalarına gelindiğinde, 'akıllı makineler' yapmanın karmaşıklığının hafife alındığı ve mevcut hesaplama gücü göz önüne alındığında vaatlerinin abartıldığı ortaya çıktı. Fonlar kurudu ve alandaki güven yavaşladı. Güveni etkileyen bazı sorunlar şunlardı:
+---
+- **Sınırlamalar**. Hesaplama gücü çok sınırlıydı.
+- **Kombinatoryal patlama**. Bilgisayarlardan daha fazla şey istendikçe eğitilmesi gereken parametrelerin sayısı üstel olarak arttı, ancak hesaplama gücü ve yetenekleri paralel olarak evrimleşmedi.
+- **Veri kıtlığı**. Algoritmaların test edilmesi, geliştirilmesi ve rafine edilmesi sürecini engelleyen veri kıtlığı vardı.
+- **Doğru soruları mı soruyoruz?**. Sorulan soruların kendisi sorgulanmaya başlandı. Araştırmacılar yaklaşımları hakkında eleştiriler almaya başladı:
+ - Turing testleri, diğer fikirler arasında, 'Çin odası teorisi' aracılığıyla sorgulandı, bu teori "dijital bir bilgisayarı programlamanın dili anlamasını sağlayabileceğini ancak gerçek bir anlayış üretemeyeceğini" öne sürüyordu ([kaynak](https://plato.stanford.edu/entries/chinese-room/)).
+ - "Terapist" ELIZA gibi yapay zekaların topluma tanıtılmasının etiği tartışıldı.
+
+---
+
+Aynı zamanda, çeşitli AI düşünce okulları oluşmaya başladı. ["Düzensiz" ve "düzenli AI"](https://wikipedia.org/wiki/Neats_and_scruffies) uygulamaları arasında bir ikilik oluştu. _Düzensiz_ laboratuvarlar, istedikleri sonuçları elde edene kadar programları saatlerce ayarladılar. _Düzenli_ laboratuvarlar "mantık ve resmi problem çözmeye" odaklandı. ELIZA ve SHRDLU, iyi bilinen _düzensiz_ sistemlerdi. 1980'lerde, ML sistemlerini tekrarlanabilir hale getirme talebi ortaya çıktıkça, _düzenli_ yaklaşım yavaş yavaş ön plana çıktı çünkü sonuçları daha açıklanabilir.
+
+---
+## 1980'ler Uzman Sistemler
+
+Alan büyüdükçe, iş dünyasına olan faydası daha net hale geldi ve 1980'lerde 'uzman sistemlerin' yaygınlaşması da öyle. "Uzman sistemler, ilk gerçekten başarılı yapay zeka (AI) yazılım biçimleri arasındaydı." ([kaynak](https://wikipedia.org/wiki/Expert_system)).
+
+Bu tür bir sistem aslında _hibrit_ bir yapıya sahiptir, kısmen iş gereksinimlerini tanımlayan bir kurallar motoru ve kurallar sisteminden yararlanarak yeni gerçekleri çıkaran bir çıkarım motorundan oluşur.
+
+Bu dönemde ayrıca sinir ağlarına artan bir ilgi gösterildi.
+
+---
+## 1987 - 1993: AI 'Soğuması'
+
+Uzman sistemlerin özelleşmiş donanımının yaygınlaşması, ne yazık ki, çok özelleşmiş hale gelmesine neden oldu. Kişisel bilgisayarların yükselişi de bu büyük, özelleşmiş, merkezi sistemlerle rekabet etti. Bilgi işlem demokratikleşmeye başlamıştı ve bu, sonunda büyük verinin modern patlamasına yol açtı.
+
+---
+## 1993 - 2011
+
+Bu dönem, ML ve AI'nın daha önce veri ve hesaplama gücü eksikliği nedeniyle yaşanan bazı sorunları çözebilmesi için yeni bir dönem oldu. Veri miktarı hızla artmaya ve daha geniş çapta erişilebilir hale gelmeye başladı, özellikle 2007 civarında akıllı telefonların ortaya çıkmasıyla birlikte, hem iyi hem de kötü yönleriyle. Hesaplama gücü üstel olarak genişledi ve algoritmalar buna paralel olarak evrimleşti. Alan, geçmişin serbest günlerinin gerçek bir disipline dönüşmesiyle olgunlaşmaya başladı.
+
+---
+## Günümüz
+
+Bugün makine öğrenmesi ve yapay zeka hayatımızın hemen her alanına dokunuyor. Bu dönem, bu algoritmaların insan yaşamı üzerindeki riskleri ve potansiyel etkilerini dikkatli bir şekilde anlamayı gerektirir. Microsoft'tan Brad Smith'in belirttiği gibi, "Bilgi teknolojisi, gizlilik ve ifade özgürlüğü gibi temel insan hakları korumalarının kalbine giden sorunları gündeme getiriyor. Bu sorunlar, bu ürünleri yaratan teknoloji şirketleri için sorumluluğu artırıyor. Bizim görüşümüze göre, aynı zamanda dikkatli hükümet düzenlemeleri ve kabul edilebilir kullanımlar etrafında normların geliştirilmesini gerektiriyor" ([kaynak](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/)).
+
+---
+
+Geleceğin ne getireceği henüz belli değil, ancak bu bilgisayar sistemlerini ve çalıştırdıkları yazılım ve algoritmaları anlamak önemlidir. Bu müfredatın, daha iyi bir anlayış kazanmanıza ve kendiniz karar vermenize yardımcı olacağını umuyoruz.
+
+[Derin öğrenmenin tarihi (video)](https://www.youtube.com/watch?v=mTtDfKgLm54 "Derin öğrenmenin tarihi")
+> 🎥 Yukarıdaki bağlantıya tıklayarak bir video izleyin: Yann LeCun bu derste derin öğrenmenin tarihini tartışıyor
+
+---
+## 🚀Meydan Okuma
+
+Bu tarihi anlardan birine derinlemesine dalın ve arkasındaki insanlar hakkında daha fazla bilgi edinin. İlginç karakterler var ve hiçbir bilimsel keşif kültürel bir boşlukta yaratılmamıştır. Ne keşfediyorsunuz?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## İnceleme ve Kendi Kendine Çalışma
+
+İzlemeniz ve dinlemeniz için öğeler:
+
+[Amy Boyd'un AI'nın evrimini tartıştığı bu podcast](http://runasradio.com/Shows/Show/739)
+[Amy Boyd tarafından yapay zekanın tarihi (video)](https://www.youtube.com/watch?v=EJt3_bFYKss "Amy Boyd tarafından yapay zekanın tarihi")
+
+---
+
+## Ödev
+
+[Bir zaman çizelgesi oluşturun](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/2-history-of-ML/assignment.md b/translations/tr/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..57d11c0a2
--- /dev/null
+++ b/translations/tr/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Bir zaman çizelgesi oluşturun
+
+## Talimatlar
+
+[Bu repoyu](https://github.com/Digital-Humanities-Toolkit/timeline-builder) kullanarak algoritmaların, matematiğin, istatistiğin, AI'nın veya ML'nin tarihinin bir yönüne ya da bunların bir kombinasyonuna dair bir zaman çizelgesi oluşturun. Bir kişiye, bir fikre veya uzun bir düşünce sürecine odaklanabilirsiniz. Multimedya unsurları eklemeyi unutmayın.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek | Yeterli | Geliştirilmesi Gerekiyor |
+| --------- | --------------------------------------------------- | --------------------------------------- | --------------------------------------------------------------- |
+| | Yayınlanmış bir zaman çizelgesi GitHub sayfası olarak sunulmuş | Kod eksik ve yayınlanmamış | Zaman çizelgesi eksik, iyi araştırılmamış ve yayınlanmamış |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/3-fairness/README.md b/translations/tr/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..449b3db68
--- /dev/null
+++ b/translations/tr/1-Introduction/3-fairness/README.md
@@ -0,0 +1,140 @@
+# Sorumlu AI ile Makine Öğrenimi Çözümleri Oluşturma
+
+
+> Sketchnote: [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Giriş
+
+Bu müfredatta, makine öğreniminin günlük hayatımızı nasıl etkileyebileceğini ve etkilediğini keşfetmeye başlayacaksınız. Şu anda bile, sağlık teşhisleri, kredi onayları veya dolandırıcılığı tespit etme gibi günlük karar verme görevlerinde sistemler ve modeller yer alıyor. Bu nedenle, bu modellerin güvenilir sonuçlar sağlamak için iyi çalışması önemlidir. Herhangi bir yazılım uygulaması gibi, AI sistemleri de beklentileri karşılamayacak veya istenmeyen sonuçlar doğuracaktır. Bu yüzden bir AI modelinin davranışını anlamak ve açıklayabilmek çok önemlidir.
+
+Bu modelleri oluşturmak için kullandığınız veriler belirli demografik gruplardan yoksunsa, örneğin ırk, cinsiyet, siyasi görüş, din veya bu demografik grupları orantısız bir şekilde temsil ediyorsa ne olabilir? Modelin çıktısı bazı demografik grupları kayıracak şekilde yorumlandığında ne olur? Uygulama için sonuçları nedir? Ayrıca, modelin olumsuz bir sonucu olduğunda ve insanlara zarar verdiğinde ne olur? AI sistemlerinin davranışından kim sorumludur? Bu müfredatta bu soruları keşfedeceğiz.
+
+Bu derste:
+
+- Makine öğreniminde adaletin önemi ve adaletle ilgili zararlar konusunda farkındalık kazanacaksınız.
+- Güvenilirlik ve güvenliği sağlamak için aykırı durumları ve olağandışı senaryoları keşfetme pratiğine aşina olacaksınız.
+- Herkesi güçlendirmek için kapsayıcı sistemler tasarlama ihtiyacını anlayacaksınız.
+- Verilerin ve insanların gizliliğini ve güvenliğini korumanın ne kadar önemli olduğunu keşfedeceksiniz.
+- AI modellerinin davranışını açıklamak için şeffaf bir yaklaşımın önemini göreceksiniz.
+- AI sistemlerine güven inşa etmek için hesap verebilirliğin ne kadar önemli olduğunun farkında olacaksınız.
+
+## Önkoşul
+
+Önkoşul olarak, "Sorumlu AI İlkeleri" öğrenme yolunu tamamlayın ve aşağıdaki videoyu izleyin:
+
+Sorumlu AI hakkında daha fazla bilgi edinmek için bu [Öğrenme Yolu](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott) bağlantısını takip edin.
+
+[Microsoft'un Sorumlu AI Yaklaşımı (video)](https://youtu.be/dnC8-uUZXSc "Microsoft'un Sorumlu AI Yaklaşımı")
+
+> 🎥 Yukarıdaki bağlantıya tıklayarak video izleyin: Microsoft'un Sorumlu AI Yaklaşımı
+
+## Adalet
+
+AI sistemleri herkese adil davranmalı ve benzer gruplardaki insanları farklı şekillerde etkilemekten kaçınmalıdır. Örneğin, AI sistemleri tıbbi tedavi, kredi başvuruları veya işe alım konusunda rehberlik sağladığında, benzer semptomlara, mali durumlara veya mesleki niteliklere sahip herkese aynı önerileri yapmalıdır. Hepimiz insan olarak, kararlarımızı ve eylemlerimizi etkileyen miras alınmış önyargılar taşırız. Bu önyargılar, AI sistemlerini eğitmek için kullandığımız verilerde de ortaya çıkabilir. Bu tür manipülasyonlar bazen istemeden olabilir. Verilerde önyargı yaratırken bunu bilinçli olarak fark etmek genellikle zordur.
+
+**“Adaletsizlik”**, ırk, cinsiyet, yaş veya engellilik durumu gibi bir grup insan için olumsuz etkileri veya “zararları” kapsar. Başlıca adaletle ilgili zararlar şu şekilde sınıflandırılabilir:
+
+- **Tahsis**, örneğin bir cinsiyet veya etnisitenin diğerine göre kayırılması.
+- **Hizmet kalitesi**. Verileri belirli bir senaryo için eğitmek, ancak gerçekte çok daha karmaşık olması, kötü performans gösteren bir hizmete yol açar. Örneğin, koyu tenli insanları algılayamayan bir el sabunu dağıtıcısı. [Referans](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **Küçük düşürme**. Bir şeyi veya birini haksız yere eleştirme ve etiketleme. Örneğin, bir görüntü etiketleme teknolojisi, koyu tenli insanların görüntülerini goril olarak yanlış etiketlemiştir.
+- **Aşırı veya yetersiz temsil**. Belirli bir grubun belirli bir meslekte görülmediği ve bu durumu teşvik eden herhangi bir hizmet veya işlevin zarara katkıda bulunduğu fikri.
+- **Stereotipleştirme**. Belirli bir grubu önceden belirlenmiş özelliklerle ilişkilendirme. Örneğin, İngilizce ve Türkçe arasında çeviri yapan bir dil çeviri sistemi, cinsiyetle ilişkilendirilen kelimeler nedeniyle hatalar yapabilir.
+
+
+> Türkçeye çeviri
+
+
+> İngilizceye geri çeviri
+
+AI sistemleri tasarlarken ve test ederken, AI'nın adil olduğundan ve önyargılı veya ayrımcı kararlar vermeye programlanmadığından emin olmalıyız, ki bu kararları insanlar da vermemelidir. AI ve makine öğreniminde adaleti sağlamak karmaşık bir sosyoteknik zorluktur.
+
+### Güvenilirlik ve güvenlik
+
+Güven inşa etmek için, AI sistemlerinin güvenilir, güvenli ve normal ve beklenmedik koşullar altında tutarlı olması gerekir. AI sistemlerinin çeşitli durumlarda nasıl davranacağını bilmek önemlidir, özellikle de aykırı durumlarda. AI çözümleri oluştururken, AI çözümlerinin karşılaşacağı geniş bir yelpazedeki durumları nasıl ele alacağına odaklanmak gerekir. Örneğin, kendi kendine giden bir araba, insanların güvenliğini en üst düzeyde tutmalıdır. Sonuç olarak, arabayı yönlendiren AI, gece, fırtınalar veya kar fırtınaları, sokakta koşan çocuklar, evcil hayvanlar, yol çalışmaları gibi arabanın karşılaşabileceği tüm olası senaryoları dikkate almalıdır. Bir AI sisteminin çeşitli koşulları güvenilir ve güvenli bir şekilde nasıl ele alabileceği, veri bilimci veya AI geliştiricisinin sistemin tasarımı veya test edilmesi sırasında ne kadar öngörülü olduğunu yansıtır.
+
+> [🎥 Video için buraya tıklayın: güvenilirlik ve güvenlik](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### Kapsayıcılık
+
+AI sistemleri herkesin katılımını sağlamalı ve güçlendirmelidir. AI sistemlerini tasarlarken ve uygularken veri bilimciler ve AI geliştiriciler, sistemi istemeden dışlayabilecek potansiyel engelleri belirler ve ele alır. Örneğin, dünya genelinde 1 milyar engelli insan var. AI'nın ilerlemesiyle, günlük yaşamlarında geniş bir bilgi ve fırsat yelpazesine daha kolay erişebilirler. Engelleri ele alarak, herkesin yararına daha iyi deneyimler sunan AI ürünlerini yenilik yapmak ve geliştirmek için fırsatlar yaratır.
+
+> [🎥 Video için buraya tıklayın: AI'da kapsayıcılık](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### Güvenlik ve gizlilik
+
+AI sistemleri güvenli olmalı ve insanların gizliliğine saygı göstermelidir. Gizliliklerini, bilgilerini veya hayatlarını riske atan sistemlere insanlar daha az güvenir. Makine öğrenimi modellerini eğitirken, en iyi sonuçları elde etmek için verilere güveniriz. Bunu yaparken, verilerin kaynağı ve bütünlüğü dikkate alınmalıdır. Örneğin, veriler kullanıcı tarafından mı gönderildi yoksa kamuya açık mıydı? Sonrasında, verilerle çalışırken, gizli bilgileri koruyabilen ve saldırılara karşı dirençli AI sistemleri geliştirmek önemlidir. AI daha yaygın hale geldikçe, gizliliği korumak ve önemli kişisel ve ticari bilgileri güvence altına almak daha kritik ve karmaşık hale geliyor. AI için gizlilik ve veri güvenliği sorunları, veriye erişimin AI sistemlerinin insanlar hakkında doğru ve bilgilendirilmiş tahminler ve kararlar vermesi için gerekli olması nedeniyle özellikle dikkat gerektirir.
+
+> [🎥 Video için buraya tıklayın: AI'da güvenlik](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- Endüstri olarak, GDPR (Genel Veri Koruma Yönetmeliği) gibi düzenlemelerle büyük ölçüde ilerlemeler kaydettik.
+- Ancak AI sistemleriyle, sistemleri daha kişisel ve etkili hale getirmek için daha fazla kişisel verilere ihtiyaç duyma ile gizlilik arasındaki gerilimi kabul etmeliyiz.
+- İnternetle bağlantılı bilgisayarların doğuşunda olduğu gibi, AI ile ilgili güvenlik sorunlarının sayısında büyük bir artış görüyoruz.
+- Aynı zamanda, AI'nın güvenliği artırmak için kullanıldığını gördük. Örneğin, çoğu modern antivirüs tarayıcıları bugün AI heuristikleri tarafından yönlendirilmektedir.
+- Veri Bilimi süreçlerimizin en son gizlilik ve güvenlik uygulamalarıyla uyumlu olmasını sağlamalıyız.
+
+### Şeffaflık
+AI sistemleri anlaşılabilir olmalıdır. Şeffaflığın önemli bir parçası, AI sistemlerinin ve bileşenlerinin davranışını açıklamaktır. AI sistemlerinin anlaşılmasını iyileştirmek, paydaşların nasıl ve neden çalıştığını anlamalarını gerektirir, böylece potansiyel performans sorunlarını, güvenlik ve gizlilik endişelerini, önyargıları, dışlayıcı uygulamaları veya istenmeyen sonuçları belirleyebilirler. AI sistemlerini kullananların, ne zaman, neden ve nasıl kullandıklarını ve sistemlerinin sınırlamalarını açıkça belirtmeleri gerektiğine inanıyoruz. Örneğin, bir banka tüketici kredi kararlarını desteklemek için bir AI sistemi kullanıyorsa, sonuçları incelemek ve sistemin önerilerini hangi verilerin etkilediğini anlamak önemlidir. Hükümetler, AI'yı endüstriler arasında düzenlemeye başlıyor, bu nedenle veri bilimciler ve kuruluşlar, AI sisteminin düzenleyici gereksinimleri karşılayıp karşılamadığını, özellikle istenmeyen bir sonuç olduğunda açıklamalıdır.
+
+> [🎥 Video için buraya tıklayın: AI'da şeffaflık](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- AI sistemleri çok karmaşık olduğu için nasıl çalıştıklarını ve sonuçları nasıl yorumladıklarını anlamak zordur.
+- Bu anlayış eksikliği, bu sistemlerin nasıl yönetildiğini, işletildiğini ve belgelenmesini etkiler.
+- Daha da önemlisi, bu anlayış eksikliği, bu sistemlerin ürettiği sonuçları kullanarak yapılan kararları etkiler.
+
+### Hesap Verebilirlik
+
+AI sistemlerini tasarlayan ve uygulayan kişiler, sistemlerinin nasıl çalıştığından sorumlu olmalıdır. Hesap verebilirlik ihtiyacı, özellikle yüz tanıma gibi hassas kullanım teknolojileri için çok önemlidir. Son zamanlarda, yüz tanıma teknolojisine olan talep artıyor, özellikle kayıp çocukları bulmak gibi kullanımlarda teknolojinin potansiyelini gören kolluk kuvvetleri tarafından. Ancak, bu teknolojiler, örneğin belirli bireylerin sürekli izlenmesini sağlayarak vatandaşların temel özgürlüklerini riske atmak için bir hükümet tarafından kullanılabilir. Bu nedenle, veri bilimciler ve kuruluşlar, AI sistemlerinin bireyleri veya toplumu nasıl etkilediğinden sorumlu olmalıdır.
+
+[Yüz Tanıma Yoluyla Kitle Gözetimi Uyarıları (video)](https://www.youtube.com/watch?v=Wldt8P5V6D0 "Microsoft'un Sorumlu AI Yaklaşımı")
+
+> 🎥 Yukarıdaki bağlantıya tıklayarak video izleyin: Yüz Tanıma Yoluyla Kitle Gözetimi Uyarıları
+
+Sonuçta, toplumda AI'yı tanıtan ilk nesil olarak, bilgisayarların insanlara hesap verebilir kalmasını nasıl sağlayacağımız ve bilgisayarları tasarlayan insanların diğer herkese hesap verebilir kalmasını nasıl sağlayacağımız, neslimizin en büyük sorularından biridir.
+
+## Etki Değerlendirmesi
+
+Bir makine öğrenimi modelini eğitmeden önce, AI sisteminin amacını, beklenen kullanımını, nerede konuşlandırılacağını ve sistemle kimlerin etkileşime gireceğini anlamak için bir etki değerlendirmesi yapmak önemlidir. Bu, sistemi değerlendiren gözden geçirenler veya test ediciler için potansiyel riskleri ve beklenen sonuçları belirlerken dikkate alınması gereken faktörleri bilmeleri açısından yararlıdır.
+
+Etki değerlendirmesi yaparken odaklanılması gereken alanlar şunlardır:
+
+* **Bireyler üzerinde olumsuz etki**. Sistem performansını engelleyen herhangi bir kısıtlama veya gereksinim, desteklenmeyen kullanım veya bilinen sınırlamaların farkında olmak, sistemin bireylere zarar verebilecek şekilde kullanılmamasını sağlamak için hayati öneme sahiptir.
+* **Veri gereksinimleri**. Sistemin verileri nasıl ve nerede kullanacağını anlamak, gözden geçirenlerin dikkate alması gereken veri gereksinimlerini (örneğin, GDPR veya HIPAA veri düzenlemeleri) araştırmalarını sağlar. Ayrıca, verinin kaynağının veya miktarının eğitim için yeterli olup olmadığını inceleyin.
+* **Etki özeti**. Sistemin kullanımından kaynaklanabilecek potansiyel zararların bir listesini toplayın. ML yaşam döngüsü boyunca, belirlenen sorunların hafifletilip hafifletilmediğini veya ele alınıp alınmadığını gözden geçirin.
+* Altı temel ilkenin her biri için **uygulanabilir hedefler**. Her ilkenin hedeflerinin karşılanıp karşılanmadığını ve herhangi bir boşluk olup olmadığını değerlendirin.
+
+## Sorumlu AI ile Hata Ayıklama
+
+Bir yazılım uygulamasında hata ayıklama gibi, bir AI sisteminde hata ayıklamak da sistemdeki sorunları belirleme ve çözme sürecidir. Bir modelin beklenildiği gibi veya sorumlu bir şekilde performans göstermemesine etki eden birçok faktör vardır. Çoğu geleneksel model performans metriği, bir modelin performansının nicel toplamlarıdır ve sorumlu AI ilkelerini nasıl ihlal ettiğini analiz etmek için yeterli değildir. Ayrıca, bir makine öğrenimi modeli, sonuçlarını neyin yönlendirdiğini anlamayı veya hata yaptığında açıklama yapmayı zorlaştıran bir kara kutudur. Bu kursun ilerleyen bölümlerinde, AI sistemlerinde hata ayıklamaya yardımcı olmak için Sorumlu AI panosunu nasıl kullanacağımızı öğreneceğiz. Pano, veri bilimciler ve AI geliştiricilerinin şu işlemleri yapmaları için kapsamlı bir araç sağlar (listenin hemen ardından küçük bir kullanım taslağı verilmiştir):
+
+* **Hata analizi**. Sistemin adaletini veya güvenilirliğini etkileyebilecek modelin hata dağılımını belirlemek.
+* **Model genel görünümü**. Modelin performansında veri grupları arasında farklılıklar olup olmadığını keşfetmek.
+* **Veri analizi**. Veri dağılımını anlamak ve adalet, kapsayıcılık ve güvenilirlik sorunlarına yol açabilecek potansiyel önyargıları belirlemek.
+* **Model yorumlanabilirliği**. Modelin tahminlerini neyin etkilediğini veya yönlendirdiğini anlamak. Bu, modelin davranışını açıklamak için önemlidir ve şeffaflık ve hesap verebilirlik için kritiktir.
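+
+Aşağıda, bu panonun bir defter içinde nasıl başlatılabileceğine dair küçük bir taslak verilmiştir. `responsibleai` ve `raiwidgets` paketlerinin kurulu olduğu; `model`, `train_df`, `test_df` ve `"hedef"` adlarının ise elinizdeki eğitilmiş modele ve verilere karşılık gelen, tamamen örnek amaçlı varsayımlar olduğu kabul edilmiştir:
+
+```python
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+# Varsayımlar: model eğitilmiş bir sınıflandırıcı; train_df/test_df pandas DataFrame;
+# "hedef", tahmin edilen sütunun adıdır (tümü örnek amaçlı adlardır).
+rai_insights = RAIInsights(model, train_df, test_df,
+                           target_column="hedef", task_type="classification")
+
+# Derste sayılan bileşenlerden ikisini ekleyin: hata analizi ve model yorumlanabilirliği
+rai_insights.error_analysis.add()
+rai_insights.explainer.add()
+rai_insights.compute()
+
+# Panoyu etkileşimli olarak görüntüleyin
+ResponsibleAIDashboard(rai_insights)
+```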
+
+## 🚀 Meydan Okuma
+
+Zararların baştan itibaren ortaya çıkmasını önlemek için şunları yapmalıyız:
+
+- Sistemler üzerinde çalışan insanların farklı geçmişlere ve perspektiflere sahip olmasını sağlamak
+- Toplumumuzun çeşitliliğini yansıtan veri setlerine yatırım yapmak
+- Makine öğrenimi yaşam döngüsü boyunca sorumlu AI'yı tespit etmek ve düzeltmek için daha iyi yöntemler geliştirmek
+
+Model oluşturma ve kullanımında bir modelin güvenilmezliğinin belirgin olduğu gerçek hayat senaryolarını düşünün. Başka neleri göz önünde bulundurmalıyız?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## İnceleme ve Kendi Kendine Çalışma
+
+Bu derste, makine öğreniminde adalet ve adaletsizlik kavramlarının bazı temel bilgilerini öğrendiniz.
+
+Konulara daha derinlemesine dalmak için bu atölyeyi izleyin:
+
+- Sorumlu AI Peşinde: Besmira Nushi, Mehrnoosh Sameki ve Amit Sharma tarafından ilkeleri pratiğe dökmek
+
+[Sorumlu AI Araç Kutusu (video)](https://www.youtube.com/watch?v=tGgJCrA-MZU "Sorumlu AI Araç Kutusu: Sorumlu AI oluşturmak için açık kaynaklı bir çerçeve")
+
+> 🎥 Yukarıdaki bağlantıya tıklayarak video izleyin: Sorumlu AI Araç Kutusu: Sorumlu AI oluşturmak için açık kaynaklı bir çerçeve
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/3-fairness/assignment.md b/translations/tr/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..9ca682b4f
--- /dev/null
+++ b/translations/tr/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# Sorumlu AI Araç Kutusunu Keşfet
+
+## Talimatlar
+
+Bu derste, veri bilimcilerin AI sistemlerini analiz etmelerine ve iyileştirmelerine yardımcı olmak için "açık kaynaklı, topluluk odaklı bir proje" olan Sorumlu AI Araç Kutusu hakkında bilgi edindiniz. Bu ödev için, RAI Toolbox'un [notebook'larından](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb) birini keşfedin ve bulgularınızı bir makale veya sunum olarak raporlayın.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirme Gerekli |
+| --------- | -------- | ------- | ------------------ |
+| | Fairlearn sistemlerini, çalıştırılan notebook'u ve çalıştırmanın sonuçlarını tartışan bir makale veya powerpoint sunumu sunulur | Sonuç içermeyen bir makale sunulur | Hiçbir makale sunulmaz |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/4-techniques-of-ML/README.md b/translations/tr/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..26ca83980
--- /dev/null
+++ b/translations/tr/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# Makine Öğrenimi Teknikleri
+
+Makine öğrenimi modellerini oluşturma, kullanma ve sürdürme süreci ve kullandıkları veriler, birçok diğer geliştirme iş akışından çok farklı bir süreçtir. Bu derste, bu süreci açıklığa kavuşturacak ve bilmeniz gereken ana teknikleri özetleyeceğiz. Şunları yapacaksınız:
+
+- Makine öğrenimini yüksek seviyede destekleyen süreçleri anlayın.
+- 'Modeller', 'tahminler' ve 'eğitim verileri' gibi temel kavramları keşfedin.
+
+## [Ders öncesi sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[Makine Öğrenimi Teknikleri (video)](https://youtu.be/4NGM0U2ZSHU "Başlangıç seviyesi için ML - Makine Öğrenimi Teknikleri")
+
+> 🎥 Bu derste ilerlemek için yukarıdaki bağlantıya tıklayarak kısa bir video izleyin.
+
+## Giriş
+
+Yüksek seviyede, makine öğrenimi (ML) süreçlerini oluşturma sanatı birkaç adımdan oluşur:
+
+1. **Soruyu belirleyin**. Çoğu ML süreci, basit bir koşullu program veya kurallara dayalı bir motorla cevaplanamayan bir soru sormakla başlar. Bu sorular genellikle bir veri koleksiyonuna dayalı tahminler etrafında döner.
+2. **Veri toplayın ve hazırlayın**. Sorunuzu cevaplayabilmek için veriye ihtiyacınız var. Verinizin kalitesi ve bazen de miktarı, başlangıç sorunuza ne kadar iyi cevap verebileceğinizi belirleyecektir. Veriyi görselleştirmek bu aşamanın önemli bir parçasıdır. Bu aşama ayrıca veriyi bir model oluşturmak için eğitim ve test gruplarına ayırmayı da içerir.
+3. **Eğitim yöntemini seçin**. Sorunuza ve verinizin doğasına bağlı olarak, verinizi en iyi yansıtacak ve doğru tahminler yapacak bir model eğitme yöntemini seçmeniz gerekecektir. Bu, ML sürecinizin belirli uzmanlık gerektiren ve genellikle önemli miktarda deneme gerektiren kısmıdır.
+4. **Modeli eğitin**. Eğitim verilerinizi kullanarak, çeşitli algoritmalar kullanarak bir model eğitirsiniz ve verideki desenleri tanımayı öğrenirsiniz. Model, verinin bazı bölümlerini diğerlerine göre önceliklendirmek için ayarlanabilen içsel ağırlıkları kullanabilir ve böylece daha iyi bir model oluşturabilir.
+5. **Modeli değerlendirin**. Topladığınız veri setinden daha önce hiç görülmemiş verileri (test verilerinizi) kullanarak modelin nasıl performans gösterdiğini görürsünüz.
+6. **Parametre ayarı**. Modelinizin performansına bağlı olarak, modeli eğitmek için kullanılan algoritmaların davranışını kontrol eden farklı parametreler veya değişkenler kullanarak süreci yeniden yapabilirsiniz.
+7. **Tahmin yapın**. Modelinizin doğruluğunu test etmek için yeni girdiler kullanın.
+
+## Hangi soruyu sormalı
+
+Bilgisayarlar, verilerde gizli kalmış desenleri keşfetmede özellikle yeteneklidir. Bu yetenek, belirli bir alanda kurallara dayalı bir motor oluşturarak kolayca cevaplanamayan soruları olan araştırmacılar için çok faydalıdır. Örneğin, bir aktüerya görevi verildiğinde, bir veri bilimci sigara içenler ile içmeyenlerin ölüm oranlarına ilişkin elle hazırlanmış kurallar oluşturabilir.
+
+Ancak denkleme başka birçok değişken girdiğinde, geçmiş sağlık verilerine dayalı olarak gelecekteki ölüm oranlarını tahmin etmek için bir ML modeli daha verimli olabilir. Daha neşeli bir örnek, bir yerin Nisan ayındaki hava durumunu enlem, boylam, iklim değişikliği, okyanusa yakınlık, jet akımı desenleri ve daha fazlasını içeren verilere dayanarak tahmin etmektir.
+
+✅ Bu [slayt sunumu](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf) hava durumu modelleri üzerinde ML kullanımı için tarihsel bir perspektif sunar.
+
+## Model oluşturma öncesi görevler
+
+Modelinizi oluşturmaya başlamadan önce tamamlamanız gereken birkaç görev vardır. Sorunuzu test etmek ve bir modelin tahminlerine dayalı bir hipotez oluşturmak için birkaç öğeyi tanımlamanız ve yapılandırmanız gerekir.
+
+### Veri
+
+Sorunuzu herhangi bir kesinlikle cevaplayabilmek için doğru türde yeterli miktarda veriye ihtiyacınız var. Bu noktada yapmanız gereken iki şey vardır:
+
+- **Veri toplayın**. Veri analizi dersindeki adaleti göz önünde bulundurarak verilerinizi dikkatle toplayın. Bu verilerin kaynaklarının farkında olun, sahip olabileceği herhangi bir önyargıyı bilin ve kökenini belgeleyin.
+- **Veriyi hazırlayın**. Veri hazırlama sürecinde birkaç adım vardır. Veriler farklı kaynaklardan geliyorsa, verileri bir araya getirip normalleştirmeniz gerekebilir. Verinin kalitesini ve miktarını çeşitli yöntemlerle artırabilirsiniz, örneğin dizeleri sayılara dönüştürmek ([Kümeleme](../../5-Clustering/1-Visualize/README.md) dersinde yaptığımız gibi). Ayrıca, orijinal veriye dayanarak yeni veriler oluşturabilirsiniz ([Sınıflandırma](../../4-Classification/1-Introduction/README.md) dersinde yaptığımız gibi). Veriyi temizleyip düzenleyebilirsiniz ([Web Uygulaması](../../3-Web-App/README.md) dersinden önce yapacağımız gibi). Son olarak, eğitim tekniklerinize bağlı olarak veriyi rastgeleleştirip karıştırmanız gerekebilir.
+
+✅ Verilerinizi topladıktan ve işledikten sonra, verinin şeklinin amacınıza uygun olup olmadığını görmek için bir an durun. Verilerin, belirli bir görevinizde iyi performans göstermeyebileceği ortaya çıkabilir, [Kümeleme](../../5-Clustering/1-Visualize/README.md) derslerimizde keşfettiğimiz gibi!
+
+### Özellikler ve Hedef
+
+Bir [özellik](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection), verinizin ölçülebilir bir özelliğidir. Birçok veri setinde 'tarih', 'boyut' veya 'renk' gibi sütun başlıkları olarak ifade edilir. Özellik değişkeniniz, genellikle `X` olarak temsil edilir ve modeli eğitmek için kullanılacak giriş değişkenidir.
+
+Bir hedef, tahmin etmeye çalıştığınız şeydir. Hedef genellikle `y` olarak temsil edilir ve verinizden sormaya çalıştığınız sorunun cevabını temsil eder: Aralık ayında hangi **renk** kabaklar en ucuz olacak? San Francisco'da hangi mahallelerde en iyi gayrimenkul **fiyatı** olacak? Bazen hedef, etiket özniteliği olarak da adlandırılır.
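+
+Küçük bir taslak olarak (buradaki veri çerçevesi ve sütun adları tamamen varsayımsaldır), özellikler ve hedef pandas ile şöyle ayrılabilir:
+
+```python
+import pandas as pd
+
+# Varsayımsal, küçük bir örnek veri seti; sütun adları yalnızca örnek amaçlıdır
+df = pd.DataFrame({
+    "Cesit": [1, 2, 3, 1],          # kodlanmış kabak çeşidi
+    "Paket": [1.1, 0.5, 0.5, 1.1],  # paket büyüklüğü
+    "Fiyat": [15.0, 11.5, 12.0, 16.0],
+})
+
+X = df[["Cesit", "Paket"]]  # özellikler: modele girdi olan ölçülebilir sütunlar
+y = df["Fiyat"]             # hedef: tahmin edilmeye çalışılan değer
+```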
+
+### Özellik değişkeninizi seçme
+
+🎓 **Özellik Seçimi ve Özellik Çıkarımı** Model oluştururken hangi değişkeni seçeceğinizi nasıl bileceksiniz? Muhtemelen en performanslı model için doğru değişkenleri seçmek için bir özellik seçimi veya özellik çıkarımı sürecinden geçeceksiniz. Ancak bunlar aynı şey değildir: "Özellik çıkarımı, orijinal özelliklerin fonksiyonlarından yeni özellikler oluştururken, özellik seçimi özelliklerin bir alt kümesini döndürür." ([kaynak](https://wikipedia.org/wiki/Feature_selection))
+
+### Verinizi görselleştirin
+
+Veri bilimcisinin araç setinin önemli bir yönü, Seaborn veya MatPlotLib gibi çeşitli mükemmel kütüphaneleri kullanarak veriyi görselleştirme gücüdür. Verinizi görsel olarak temsil etmek, yararlanabileceğiniz gizli korelasyonları ortaya çıkarmanıza olanak tanıyabilir. Görselleştirmeleriniz ayrıca önyargı veya dengesiz veri keşfetmenize yardımcı olabilir ([Sınıflandırma](../../4-Classification/2-Classifiers-1/README.md) dersinde keşfettiğimiz gibi).
+
+### Veri setinizi bölün
+
+Eğitimden önce, veri setinizi eşit olmayan boyutlarda iki veya daha fazla parçaya bölmeniz gerekir (listenin ardından küçük bir örnek verilmiştir).
+
+- **Eğitim**. Veri setinin bu kısmı modeli eğitmek için kullanılır. Bu set, orijinal veri setinin çoğunluğunu oluşturur.
+- **Test**. Bir test veri seti, genellikle orijinal verilerden toplanan bağımsız bir veri grubudur ve oluşturulan modelin performansını doğrulamak için kullanılır.
+- **Doğrulama**. Bir doğrulama seti, modelin hiperparametrelerini veya mimarisini ayarlamak için kullanılan daha küçük bağımsız bir örnek grubudur. Verinizin boyutuna ve sorduğunuz soruya bağlı olarak, bu üçüncü seti oluşturmanız gerekmeyebilir ([Zaman Serisi Tahmini](../../7-TimeSeries/1-Introduction/README.md) dersinde belirttiğimiz gibi).
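+
+Scikit-learn ile tipik bir eğitim/test bölmesi şöyle görünür (X ve y'nin bir önceki taslaktaki gibi tanımlandığı varsayılmıştır):
+
+```python
+from sklearn.model_selection import train_test_split
+
+# Verinin %80'i eğitim, %20'si test için ayrılır; random_state tekrarlanabilirlik sağlar
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```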
+
+## Model oluşturma
+
+Eğitim verilerinizi kullanarak, çeşitli algoritmalar kullanarak verinizin istatistiksel bir temsilini oluşturarak bir model oluşturmayı hedeflersiniz. Bir modeli eğitmek, onu veriye maruz bırakır ve keşfettiği, doğruladığı ve kabul ettiği veya reddettiği desenler hakkında varsayımlar yapmasına olanak tanır.
+
+### Eğitim yöntemini seçin
+
+Sorunuza ve verinizin doğasına bağlı olarak, onu eğitmek için bir yöntem seçeceksiniz. [Scikit-learn'ün belgelerini](https://scikit-learn.org/stable/user_guide.html) inceleyerek - bu derste kullandığımız - bir modeli eğitmenin birçok yolunu keşfedebilirsiniz. Deneyiminize bağlı olarak, en iyi modeli oluşturmak için birkaç farklı yöntemi denemeniz gerekebilir. Veri bilimcilerinin, modele görülmemiş veriler vererek performansını değerlendirdiği, doğruluk, önyargı ve diğer kaliteyi düşüren sorunları kontrol ettiği ve eldeki görev için en uygun eğitim yöntemini seçtiği bir süreçten geçmeniz muhtemeldir.
+
+### Bir modeli eğitin
+
+Eğitim verilerinizle donanmış olarak, onu bir model oluşturmak için 'fit' etmeye hazırsınız. Birçok ML kütüphanesinde 'model.fit' kodunu bulacağınızı fark edeceksiniz - bu sırada özellik değişkeninizi bir değerler dizisi (genellikle 'X') ve bir hedef değişkeni (genellikle 'y') olarak gönderirsiniz.
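+
+Örneğin, doğrusal regresyon için bu adım şöyle görünür (X_train ve y_train'in yukarıdaki bölmeden geldiği ve sayısal olduğu varsayılmıştır):
+
+```python
+from sklearn.linear_model import LinearRegression
+
+model = LinearRegression()
+model.fit(X_train, y_train)  # modeli özellikler (X) ve hedef (y) ile 'fit' edin
+```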
+
+### Modeli değerlendirin
+
+Eğitim süreci tamamlandığında (büyük bir modeli eğitmek için birçok yineleme veya 'epoch' gerekebilir), test verilerini kullanarak modelin kalitesini değerlendirebileceksiniz. Bu veri, modelin daha önce analiz etmediği orijinal verilerin bir alt kümesidir. Modelinizin kalitesi hakkında bir metrik tablosu yazdırabilirsiniz.
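+
+Küçük bir taslak (yukarıdaki modelin ve test bölmesinin mevcut olduğu varsayılmıştır):
+
+```python
+# Regresyon modellerinde score(), varsayılan olarak R^2 değerini döndürür
+print(model.score(X_test, y_test))
+
+# Daha ayrıntılı bir metrik için ortalama karesel hata da yazdırılabilir
+from sklearn.metrics import mean_squared_error
+print(mean_squared_error(y_test, model.predict(X_test)))
+```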
+
+🎓 **Model uyumu**
+
+Makine öğrenimi bağlamında, model uyumu, modelin altta yatan fonksiyonunun, tanımadığı verileri analiz etme girişimindeki doğruluğunu ifade eder.
+
+🎓 **Aşırı uyum** ve **eksik uyum**, modelin kalitesini düşüren yaygın sorunlardır, çünkü model ya yeterince iyi uymaz ya da çok iyi uyum sağlar. Bu, modelin tahminlerini ya eğitim verilerine çok yakın ya da çok gevşek bir şekilde hizalamasına neden olur. Aşırı uyumlu bir model, verilerin ayrıntılarını ve gürültüsünü çok iyi öğrendiği için eğitim verilerini çok iyi tahmin eder. Eksik uyumlu bir model ise, ne eğitim verilerini ne de henüz 'görmediği' verileri doğru bir şekilde analiz edebilir.
+
+
+> [Jen Looper](https://twitter.com/jenlooper) tarafından hazırlanan infografik
+
+## Parametre ayarı
+
+İlk eğitiminiz tamamlandığında, modelin kalitesini gözlemleyin ve 'hiperparametrelerini' ayarlayarak iyileştirmeyi düşünün. Süreç hakkında daha fazla bilgi için [belgelere](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott) göz atın.
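+
+Scikit-learn'de bu tür bir ayar genellikle ızgara aramasıyla yapılır; aşağıda, Ridge regresyonun `alpha` hiperparametresi üzerinde varsayımsal küçük bir örnek verilmiştir (X_train ve y_train'in önceki taslaklardan geldiği varsayılmıştır):
+
+```python
+from sklearn.linear_model import Ridge
+from sklearn.model_selection import GridSearchCV
+
+# Birkaç aday 'alpha' değerini 5 katlı çapraz doğrulama ile karşılaştırın
+arama = GridSearchCV(Ridge(), param_grid={"alpha": [0.1, 1.0, 10.0]}, cv=5)
+arama.fit(X_train, y_train)
+print(arama.best_params_)
+```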
+
+## Tahmin
+
+Bu, modelinizin doğruluğunu test etmek için tamamen yeni veriler kullanabileceğiniz andır. Bir üretim ortamında modeli kullanmak için web varlıkları oluşturduğunuz 'uygulamalı' bir ML ortamında, bu süreç, bir değişkeni ayarlamak ve değerlendirme veya çıkarım için modeli göndermek için kullanıcı girdisi (örneğin bir düğme basması) toplama işlemini içerebilir.
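+
+Örneğin (iki sayısal özellikle eğitilmiş yukarıdaki model ve tamamen varsayımsal girdi değerleri için):
+
+```python
+# Kullanıcıdan gelen tek bir yeni gözlem için tahmin (çıkarım)
+yeni_girdi = [[2, 0.5]]  # varsayımsal özellik değerleri: çeşit kodu ve paket büyüklüğü
+print(model.predict(yeni_girdi))
+```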
+
+Bu derslerde, bir veri bilimcisinin tüm hareketlerini ve daha fazlasını yaparak, 'tam yığın' bir ML mühendisi olma yolculuğunuzda ilerledikçe bu adımları nasıl hazırlayacağınızı, oluşturacağınızı, test edeceğinizi, değerlendireceğinizi ve tahmin edeceğinizi keşfedeceksiniz.
+
+---
+
+## 🚀Meydan Okuma
+
+Bir ML uygulayıcısının adımlarını yansıtan bir akış şeması çizin. Şu anda sürecin neresinde olduğunuzu düşünüyorsunuz? Nerede zorluk çekeceğinizi tahmin ediyorsunuz? Size ne kolay görünüyor?
+
+## [Ders sonrası sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Günlük çalışmalarını tartışan veri bilimcilerle yapılan röportajları çevrimiçi arayın. İşte [bir tane](https://www.youtube.com/watch?v=Z3IjgbbCEfs).
+
+## Ödev
+
+[Bir veri bilimcisiyle röportaj yapın](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/4-techniques-of-ML/assignment.md b/translations/tr/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..14c1a397d
--- /dev/null
+++ b/translations/tr/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# Bir veri bilimciyle röportaj
+
+## Talimatlar
+
+Şirketinizde, bir kullanıcı grubunda veya arkadaşlarınız ya da sınıf arkadaşlarınız arasında, profesyonel olarak veri bilimci olarak çalışan biriyle konuşun. Günlük işlerini anlatan kısa bir makale (500 kelime) yazın. Belirli bir alanda mı uzmanlaşmışlar, yoksa 'full stack' olarak mı çalışıyorlar?
+
+## Değerlendirme Ölçütü
+
+| Kriterler | Mükemmel | Yeterli | Geliştirmeye İhtiyacı Var |
+| --------- | ----------------------------------------------------------------------------------- | ----------------------------------------------------------------- | ----------------------------- |
+| | Doğru uzunlukta, kaynakları belirtilmiş bir makale .doc dosyası olarak sunulmuştur | Makale yetersiz kaynak belirtilmiş veya istenilen uzunluktan kısadır | Makale sunulmamıştır |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/1-Introduction/README.md b/translations/tr/1-Introduction/README.md
new file mode 100644
index 000000000..e0f64bd4f
--- /dev/null
+++ b/translations/tr/1-Introduction/README.md
@@ -0,0 +1,25 @@
+# Makine Öğrenimine Giriş
+
+Bu müfredat bölümünde, makine öğrenimi alanının temel kavramlarına, ne olduğuna, tarihine ve araştırmacıların bu alanda çalışmak için kullandıkları tekniklere giriş yapacaksınız. Haydi, bu yeni ML dünyasını birlikte keşfedelim!
+
+
+> Fotoğraf: Bill Oxford tarafından Unsplash üzerinde
+
+### Dersler
+
+1. [Makine öğrenimine giriş](1-intro-to-ML/README.md)
+1. [Makine öğrenimi ve yapay zekanın tarihi](2-history-of-ML/README.md)
+1. [Adalet ve makine öğrenimi](3-fairness/README.md)
+1. [Makine öğrenimi teknikleri](4-techniques-of-ML/README.md)
+### Katkıda Bulunanlar
+
+"Makine Öğrenimine Giriş", [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) ve [Jen Looper](https://twitter.com/jenlooper) dahil olmak üzere bir ekip tarafından ♥️ ile yazılmıştır.
+
+"Makine Öğreniminin Tarihi", [Jen Looper](https://twitter.com/jenlooper) ve [Amy Boyd](https://twitter.com/AmyKateNicho) tarafından ♥️ ile yazılmıştır.
+
+"Adalet ve Makine Öğrenimi", [Tomomi Imura](https://twitter.com/girliemac) tarafından ♥️ ile yazılmıştır.
+
+"Makine Öğrenimi Teknikleri", [Jen Looper](https://twitter.com/jenlooper) ve [Chris Noring](https://twitter.com/softchris) tarafından ♥️ ile yazılmıştır.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/1-Tools/README.md b/translations/tr/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..417096d56
--- /dev/null
+++ b/translations/tr/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# Python ve Scikit-learn ile Regresyon Modellerine Başlayın
+
+
+
+> Sketchnote [Tomomi Imura](https://www.twitter.com/girlie_mac) tarafından
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [Bu ders R dilinde de mevcut!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## Giriş
+
+Bu dört derste, regresyon modellerinin nasıl oluşturulacağını keşfedeceksiniz. Bunların ne işe yaradığını kısa süre içinde tartışacağız. Ancak herhangi bir şey yapmadan önce, süreci başlatmak için doğru araçlara sahip olduğunuzdan emin olun!
+
+Bu derste şunları öğreneceksiniz:
+
+- Bilgisayarınızı yerel makine öğrenimi görevleri için yapılandırma.
+- Jupyter defterleri ile çalışma.
+- Scikit-learn kullanımı, kurulum dahil.
+- Uygulamalı bir egzersiz ile doğrusal regresyonu keşfetme.
+
+## Kurulumlar ve yapılandırmalar
+
+[Araçlarınızı ayarlayın (video)](https://youtu.be/-DfeD2k2Kj0 "Yeni başlayanlar için ML - Makine Öğrenimi modelleri oluşturmak için araçlarınızı ayarlayın")
+
+> 🎥 Bilgisayarınızı ML için yapılandırma konusunda kısa bir video için yukarıdaki bağlantıya tıklayın.
+
+1. **Python'u yükleyin**. Bilgisayarınızda [Python](https://www.python.org/downloads/) yüklü olduğundan emin olun. Python'u birçok veri bilimi ve makine öğrenimi görevi için kullanacaksınız. Çoğu bilgisayar sistemi zaten bir Python kurulumu içerir. Ayrıca bazı kullanıcılar için kurulumu kolaylaştırmak adına kullanışlı [Python Kod Paketleri](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott) de mevcuttur.
+
+ Ancak, Python'un bazı kullanımları yazılımın bir sürümünü gerektirirken, diğerleri farklı bir sürüm gerektirir. Bu nedenle, bir [sanal ortam](https://docs.python.org/3/library/venv.html) içinde çalışmak faydalıdır.
+
+2. **Visual Studio Code'u yükleyin**. Bilgisayarınızda Visual Studio Code'un yüklü olduğundan emin olun. Temel kurulum için [Visual Studio Code'u yükleme](https://code.visualstudio.com/) talimatlarını izleyin. Bu kursta Python'u Visual Studio Code'da kullanacağınız için, Python geliştirme için [Visual Studio Code'u yapılandırma](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) konusunda bilgi edinmek isteyebilirsiniz.
+
+ > Python ile rahat çalışmak için bu [Öğrenme modülleri](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott) koleksiyonunu inceleyin
+ >
+ > [](https://youtu.be/yyQM70vi7V8 "Visual Studio Code ile Python Kurulumu")
+ >
+ > 🎥 Yukarıdaki resme tıklayarak VS Code içinde Python kullanımı hakkında bir video izleyin.
+
+3. **Scikit-learn'ü yükleyin**, [bu talimatları](https://scikit-learn.org/stable/install.html) izleyerek. Python 3 kullanmanız gerektiğinden, bir sanal ortam kullanmanız önerilir. Not: Bu kütüphaneyi bir M1 Mac'e yüklüyorsanız, yukarıdaki sayfada özel talimatlar bulunmaktadır.
+
+4. **Jupyter Notebook'u yükleyin**. [Jupyter paketini yüklemeniz](https://pypi.org/project/jupyter/) gerekecek.
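+
+Sanal ortamlar genellikle komut satırında `python -m venv .venv` komutuyla oluşturulur. Aşağıdaki küçük taslak, aynı işin standart kütüphanedeki `venv` modülüyle Python içinden de yapılabileceğini gösterir; `.venv` dizin adı burada varsayımsal bir seçimdir:
+
+```python
+import venv
+
+# '.venv' adlı bir dizinde, pip kurulu yeni bir sanal ortam oluşturur
+venv.create('.venv', with_pip=True)
+```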
+
+## Makine Öğrenimi Yazma Ortamınız
+
+Python kodunuzu geliştirmek ve makine öğrenimi modelleri oluşturmak için **defterler** (notebook) kullanacaksınız. Bu dosya türü, veri bilimciler için yaygın bir araçtır ve `.ipynb` uzantısından tanınabilir.
+
+Defterler, geliştiricinin hem kod yazmasına hem de kodun etrafında notlar ve dokümantasyon yazmasına olanak tanıyan etkileşimli bir ortamdır, bu da deneysel veya araştırma odaklı projeler için oldukça faydalıdır.
+
+[](https://youtu.be/7E-jC8FLA2E "Yeni başlayanlar için ML - Regresyon modelleri oluşturmaya başlamak için Jupyter Defterlerini ayarlayın")
+
+> 🎥 Bu egzersizi çalışırken kısa bir video için yukarıdaki resme tıklayın.
+
+### Egzersiz - bir defterle çalışma
+
+Bu klasörde, _notebook.ipynb_ dosyasını bulacaksınız.
+
+1. _notebook.ipynb_ dosyasını Visual Studio Code'da açın.
+
+   Python 3+ ile bir Jupyter sunucusu başlatılacaktır. Defterde çalıştırılabilir (`run`) alanlar, yani kod blokları bulacaksınız. Bir kod bloğunu çalıştırmak için oynat düğmesine benzeyen simgeyi seçmeniz yeterlidir.
+
+2. `md` simgesini seçin, biraz markdown ekleyin ve şu metni yazın: **# Defterinize hoş geldiniz**.
+
+ Sonra biraz Python kodu ekleyin.
+
+3. Kod bloğuna **print('hello notebook')** yazın.
+4. Kodu çalıştırmak için oku seçin.
+
+ Yazdırılan ifadeyi görmelisiniz:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+Kodunuzu yorumlarla birlikte ekleyerek defteri kendiliğinden belgelendirebilirsiniz.
+
+✅ Bir web geliştiricisinin çalışma ortamının veri bilimcinin çalışma ortamından ne kadar farklı olduğunu bir düşünün.
+
+## Scikit-learn ile Başlamak
+
+Python yerel ortamınızda artık kurulu olduğuna ve Jupyter defterleriyle rahatça çalıştığınıza göre, hadi Scikit-learn ile de aynı rahatlığı yakalayalım (`sci` kısmını `science` kelimesindeki gibi telaffuz edin). Scikit-learn, ML görevlerini gerçekleştirmenize yardımcı olacak [geniş bir API](https://scikit-learn.org/stable/modules/classes.html#api-ref) sunar.
+
+[Web sitesine](https://scikit-learn.org/stable/getting_started.html) göre, "Scikit-learn, denetimli ve denetimsiz öğrenmeyi destekleyen açık kaynaklı bir makine öğrenimi kütüphanesidir. Ayrıca model uydurma, veri ön işleme, model seçimi ve değerlendirme için çeşitli araçlar ile diğer birçok yardımcı program sağlar."
+
+Bu derste, Scikit-learn ve diğer araçları kullanarak 'geleneksel makine öğrenimi' görevlerini gerçekleştirmek için makine öğrenimi modelleri oluşturacaksınız. Sinir ağları ve derin öğrenmeden özellikle kaçındık, çünkü bunlar yakında çıkacak olan 'Yeni Başlayanlar için AI' müfredatımızda daha iyi ele alınmaktadır.
+
+Scikit-learn, modelleri oluşturmayı ve kullanmak üzere değerlendirmeyi kolaylaştırır. Öncelikle sayısal verilerle çalışmaya odaklanır ve öğrenme araçları olarak kullanılabilecek birkaç hazır veri seti içerir. Ayrıca öğrencilerin denemesi için önceden oluşturulmuş modeller de içerir. Hazır paketlenmiş verileri yükleme sürecini ve bazı temel verilerle ilk ML modelimizi Scikit-learn ile kullanma sürecini keşfedelim.
+
+## Egzersiz - ilk Scikit-learn defteriniz
+
+> Bu eğitim, Scikit-learn'ün web sitesindeki [doğrusal regresyon örneğinden](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py) esinlenmiştir.
+
+[](https://youtu.be/2xkXL5EUpS0 "Yeni başlayanlar için ML - Python'da İlk Doğrusal Regresyon Projeniz")
+
+> 🎥 Bu egzersizi çalışırken kısa bir video için yukarıdaki resme tıklayın.
+
+Bu derse bağlı _notebook.ipynb_ dosyasında, tüm hücreleri 'çöp kutusu' simgesine basarak temizleyin.
+
+Bu bölümde, öğrenme amacıyla Scikit-learn'e dahil edilen diyabet hakkında küçük bir veri seti ile çalışacaksınız. Diyabet hastaları için bir tedaviyi test etmek istediğinizi hayal edin. Makine Öğrenimi modelleri, değişkenlerin kombinasyonlarına dayalı olarak hangi hastaların tedaviye daha iyi yanıt vereceğini belirlemenize yardımcı olabilir. Çok basit bir regresyon modeli bile, görselleştirildiğinde, teorik klinik denemelerinizi düzenlemenize yardımcı olacak değişkenler hakkında bilgi gösterebilir.
+
+✅ Birçok türde regresyon yöntemi vardır ve hangisini seçeceğiniz, aradığınız cevaba bağlıdır. Belirli bir yaşta bir kişinin muhtemel boyunu tahmin etmek istiyorsanız, doğrusal regresyon kullanırsınız, çünkü **sayısal bir değer** arıyorsunuzdur. Bir tür mutfağın vegan olup olmadığını keşfetmekle ilgileniyorsanız, **kategori ataması** arıyorsunuzdur, bu yüzden lojistik regresyon kullanırsınız. Lojistik regresyon hakkında daha fazla bilgi edineceksiniz. Verilere sorabileceğiniz bazı soruları ve bu yöntemlerden hangisinin daha uygun olacağını düşünün.
+
+Hadi bu göreve başlayalım.
+
+### Kütüphaneleri İçe Aktarma
+
+Bu görev için bazı kütüphaneleri içe aktaracağız:
+
+- **matplotlib**. Kullanışlı bir [grafik aracıdır](https://matplotlib.org/); onu bir çizgi grafiği oluşturmak için kullanacağız.
+- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) Python'da sayısal verileri işlemek için kullanışlı bir kütüphanedir.
+- **sklearn**. Bu, [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) kütüphanesidir.
+
+Görevlerinizde yardımcı olması için bazı kütüphaneleri içe aktarın.
+
+1. Aşağıdaki kodu yazarak içe aktarmaları ekleyin:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   Yukarıda `matplotlib` ve `numpy` kütüphanelerini içe aktarıyor, `sklearn`'den ise `datasets`, `linear_model` ve `model_selection`'ı alıyorsunuz. `model_selection`, veriyi eğitim ve test setlerine ayırmak için kullanılır.
+
+### Diyabet veri seti
+
+Yerleşik [diyabet veri seti](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset), diyabetle ilgili 442 veri örneği ve 10 özellik değişkeni içerir; bunlardan bazıları şunlardır:
+
+- age: yıl cinsinden yaş
+- bmi: vücut kitle indeksi
+- bp: ortalama kan basıncı
+- s1 tc: T-Hücreleri (bir tür beyaz kan hücresi)
+
+✅ Bu veri seti, diyabet araştırmaları için önemli bir özellik değişkeni olarak 'cinsiyet' kavramını içerir. Birçok tıbbi veri seti bu tür ikili sınıflandırmalar içerir. Bu tür kategorilendirmelerin, nüfusun belirli kesimlerini tedavilerin dışında nasıl bırakabileceğini biraz düşünün.
+
+Şimdi X ve y verilerini yükleyin.
+
+> 🎓 Unutmayın, bu denetimli öğrenmedir ve adlandırılmış bir 'y' hedefine ihtiyacımız vardır.
+
+Yeni bir kod hücresinde, `load_diabetes()` fonksiyonunu çağırarak diyabet veri setini yükleyin. `return_X_y=True` girdisi, `X`'in bir veri matrisi ve `y`'nin regresyon hedefi olacağını belirtir.
+
+1. Veri matrisinin şeklini ve ilk öğesini göstermek için bazı yazdırma komutları ekleyin:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   Geri aldığınız yanıt bir demettir (tuple). Demetin ilk iki değerini `X` ve `y`'ye atıyorsunuz. Daha fazla bilgi için [demetler](https://wikipedia.org/wiki/Tuple) hakkında okuyun.
+
+ Bu verinin 10 elemanlı diziler halinde şekillendirilmiş 442 öğeye sahip olduğunu görebilirsiniz:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+ ✅ Veri ve regresyon hedefi arasındaki ilişkiyi biraz düşünün. Doğrusal regresyon, X özelliği ile y hedef değişkeni arasındaki ilişkileri tahmin eder. Diyabet veri seti için [hedefi](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) dokümantasyonda bulabilir misiniz? Bu veri seti, hedefi göz önünde bulundurarak neyi gösteriyor?
+
+2. Sonraki adım olarak, çizim için veri setinin bir bölümünü, 3. sütununu seçerek alın. Bunu, tüm satırları seçmek için `:` operatörünü kullanıp ardından 3. sütunu indeksle (2) seçerek yapabilirsiniz. Ayrıca, çizim için gerekli olduğu üzere, veriyi `reshape(n_rows, n_columns)` kullanarak 2B bir diziye dönüştürebilirsiniz. Parametrelerden biri -1 ise, ilgili boyut otomatik olarak hesaplanır.
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+ ✅ Her zaman, verilerin şeklini kontrol etmek için yazdırabilirsiniz.
+
+3. Artık çizilmeye hazır verileriniz olduğuna göre, bir makinenin bu veri setindeki sayılar arasında mantıklı bir ayrım yapıp yapamayacağını görebilirsiniz. Bunu yapmak için, hem verileri (X) hem de hedefi (y) test ve eğitim setlerine ayırmanız gerekir. Scikit-learn bunu yapmanın basit bir yolunu sunar; test verilerinizi belirli bir noktada bölebilirsiniz.
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. Artık modelinizi eğitmeye hazırsınız! Doğrusal regresyon modelini yükleyin ve `model.fit()` kullanarak X ve y eğitim setlerinizle eğitin:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()`, TensorFlow gibi birçok ML kütüphanesinde karşılaşacağınız bir fonksiyondur
+
+5. Ardından, `predict()` fonksiyonunu kullanarak test verisiyle bir tahmin oluşturun. Bu, veri grupları arasına çizgiyi çizmek için kullanılacaktır:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. Şimdi verileri bir grafikte göstermenin zamanı geldi. Matplotlib bu görev için çok kullanışlı bir araçtır. Tüm X ve y test verilerinin bir dağılım grafiğini oluşturun ve tahmini kullanarak modelin veri grupları arasındaki en uygun yere bir çizgi çizin.
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+ ✅ Burada ne olduğunu biraz düşünün. Bir düz çizgi, birçok küçük veri noktası arasında geçiyor, ancak tam olarak ne yapıyor? Bu çizgiyi kullanarak yeni, görülmemiş bir veri noktasının grafiğin y ekseni ile ilişkili olarak nereye oturması gerektiğini tahmin edebilmeniz gerektiğini görebiliyor musunuz? Bu modelin pratik kullanımını kelimelerle ifade etmeye çalışın.
+
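+Örneğin, modelin daha önce görmediği yeni bir nokta için tahmin almayı deneyebilirsiniz; aşağıdaki taslaktaki `0.07`, ölçeklenmiş BMI ekseninden seçilmiş varsayımsal bir değerdir:
+
+```python
+import numpy as np
+
+# Varsayımsal, ölçeklenmiş yeni bir BMI değeri (modelin görmediği bir nokta)
+new_bmi = np.array([[0.07]])
+print(model.predict(new_bmi))  # çizginin bu X değeri için öngördüğü y değeri
+```
+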
+Tebrikler, ilk doğrusal regresyon modelinizi oluşturdunuz, onunla bir tahmin yaptınız ve bunu bir grafikte gösterdiniz!
+
+---
+## 🚀Meydan Okuma
+
+Bu veri setinden farklı bir değişkeni çizin. İpucu: bu satırı düzenleyin: `X = X[:,2]`. Bu veri setinin hedefi göz önüne alındığında, diyabetin bir hastalık olarak ilerlemesi hakkında ne keşfedebiliyorsunuz?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Bu derste, tek değişkenli ya da çoklu doğrusal regresyon yerine basit doğrusal regresyonla çalıştınız. Bu yöntemler arasındaki farklar hakkında biraz okuyun veya [bu videoyu](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef) izleyin.
+
+Regresyon kavramı hakkında daha fazla bilgi edinin ve bu teknikle hangi tür soruların yanıtlanabileceğini düşünün. Bu [eğitimi](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) alarak bilginizi derinleştirin.
+
+## Ödev
+
+[Farklı bir veri seti](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/1-Tools/assignment.md b/translations/tr/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..93d1a5619
--- /dev/null
+++ b/translations/tr/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# Scikit-learn ile Regresyon
+
+## Talimatlar
+
+Scikit-learn'deki [Linnerud veri setine](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud) bir göz atın. Bu veri setinde birden fazla [hedef](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset) bulunmaktadır: 'Bir fitness kulübünde yirmi orta yaşlı erkekten toplanan üç egzersiz (veri) ve üç fizyolojik (hedef) değişkenden oluşur'.
+
+Kendi cümlelerinizle, bel ölçüsü ile kaç tane mekik çekildiği arasındaki ilişkiyi gösterecek bir Regresyon modelinin nasıl oluşturulacağını açıklayın. Bu veri setindeki diğer veri noktaları için de aynısını yapın.
+
+## Değerlendirme Kriterleri
+
+| Kriter | Örnek Teşkil Eden | Yeterli | Geliştirme Gerekiyor |
+| ------------------------------ | ----------------------------------- | ----------------------------- | -------------------------- |
+| Açıklayıcı bir paragraf gönderin | İyi yazılmış bir paragraf gönderildi | Birkaç cümle gönderildi | Hiçbir açıklama sağlanmadı |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/1-Tools/solution/Julia/README.md b/translations/tr/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..ae6b0deba
--- /dev/null
+++ b/translations/tr/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlama veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/2-Data/README.md b/translations/tr/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..793b4c443
--- /dev/null
+++ b/translations/tr/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# Scikit-learn kullanarak bir regresyon modeli oluşturun: verileri hazırlayın ve görselleştirin
+
+
+
+İnfografik: [Dasani Madipalli](https://twitter.com/dasani_decoded)
+
+## [Ders öncesi sınavı](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [Bu ders R dilinde de mevcut!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## Giriş
+
+Scikit-learn ile makine öğrenimi modeli oluşturma araçlarını kurduğunuza göre, verileriniz hakkında sorular sormaya başlayabilirsiniz. Veri ile çalışırken ve ML çözümleri uygularken, verisetinizin potansiyelini doğru bir şekilde açığa çıkarmak için doğru soruyu sormayı anlamak çok önemlidir.
+
+Bu derste öğrenecekleriniz:
+
+- Verilerinizi model oluşturma için nasıl hazırlayacağınız.
+- Veri görselleştirme için Matplotlib'i nasıl kullanacağınız.
+
+## Verilerinize doğru soruyu sormak
+
+Yanıtlanması gereken soru, hangi tür ML algoritmalarını kullanacağınızı belirleyecektir. Ve alacağınız yanıtın kalitesi, verinizin doğasına büyük ölçüde bağlı olacaktır.
+
+Bu ders için sağlanan [verilere](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) bir göz atın. Bu .csv dosyasını VS Code'da açabilirsiniz. Hızlı bir göz gezdirdiğinizde hemen boşluklar ve karışık string ve sayısal veriler olduğunu görürsünüz. Ayrıca 'Package' adlı tuhaf bir sütun var, burada veriler 'sacks', 'bins' ve diğer değerler arasında karışmış durumda. Aslında veriler biraz dağınık.
+
+[](https://youtu.be/5qGjczWTrDQ "Yeni başlayanlar için ML - Bir Veri Setini Nasıl Analiz Edip Temizlersiniz")
+
+> 🎥 Yukarıdaki resme tıklayarak bu ders için verileri hazırlama sürecini gösteren kısa bir videoya ulaşabilirsiniz.
+
+Aslında, kutudan çıkar çıkmaz bir ML modeli oluşturmak için tamamen hazır bir veri setine sahip olmak çok yaygın değildir. Bu derste, standart Python kütüphanelerini kullanarak ham bir veri setini nasıl hazırlayacağınızı öğreneceksiniz. Ayrıca verileri görselleştirmek için çeşitli teknikleri öğreneceksiniz.
+
+## Vaka çalışması: 'balkabağı pazarı'
+
+Bu klasörde, kök `data` klasöründe [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) adlı bir .csv dosyası bulacaksınız. Bu dosya, şehir bazında gruplandırılmış, balkabağı pazarı hakkında 1757 satır veri içerir. Bu veriler, ABD Tarım Bakanlığı tarafından dağıtılan [Özel Ürünler Terminal Pazarları Standart Raporları](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice) adresinden çıkarılmış ham verilerdir.
+
+### Verileri hazırlamak
+
+Bu veriler kamu malıdır. USDA web sitesinden şehir başına ayrı ayrı dosyalar olarak indirilebilir. Çok fazla ayrı dosya olmaması için, tüm şehir verilerini tek bir elektronik tabloya birleştirdik, böylece verileri biraz _hazırlamış_ olduk. Şimdi, verilere daha yakından bakalım.
+
+### Balkabağı verileri - ilk sonuçlar
+
+Bu veriler hakkında ne fark ediyorsunuz? Zaten stringler, sayılar, boşluklar ve anlamlandırmanız gereken tuhaf değerlerin karışımı olduğunu gördünüz.
+
+Bu verilerle bir Regresyon tekniği kullanarak hangi soruyu sorabilirsiniz? "Belirli bir ayda satılık bir balkabağının fiyatını tahmin et" ne dersiniz? Verilere tekrar baktığınızda, bu görev için gerekli veri yapısını oluşturmak için bazı değişiklikler yapmanız gerektiğini görüyorsunuz.
+
+## Alıştırma - balkabağı verilerini analiz et
+
+Bu balkabağı verilerini analiz etmek ve hazırlamak için verileri şekillendirmede çok yararlı bir araç olan [Pandas](https://pandas.pydata.org/) (adı `Python Data Analysis` anlamına gelir) kullanacağız.
+
+### İlk olarak, eksik tarihleri kontrol edin
+
+İlk olarak eksik tarihleri kontrol etmek için adımlar atmanız gerekecek:
+
+1. Tarihleri ay formatına dönüştürün (bunlar ABD tarihleri, bu yüzden format `MM/DD/YYYY`).
+2. Ayı yeni bir sütuna çıkarın.
+
+_Notebook.ipynb_ dosyasını Visual Studio Code'da açın ve elektronik tabloyu yeni bir Pandas dataframe'ine aktarın.
+
+1. İlk beş satırı görüntülemek için `head()` işlevini kullanın.
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+ ✅ Son beş satırı görüntülemek için hangi işlevi kullanırdınız?
+
+1. Mevcut dataframe'de eksik veri olup olmadığını kontrol edin:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+ Eksik veri var, ancak belki de bu görev için önemli olmayabilir.
+
+1. Dataframe'inizi daha kolay çalışılabilir hale getirmek için, yalnızca ihtiyacınız olan sütunları `loc` fonksiyonuyla seçin; bu fonksiyon, orijinal dataframe'den bir satır grubu (ilk parametre) ve sütun grubu (ikinci parametre) çıkarır. Aşağıdaki durumda `:` ifadesi "tüm satırlar" anlamına gelir.
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### İkinci olarak, balkabağının ortalama fiyatını belirleyin
+
+Belirli bir ayda bir balkabağının ortalama fiyatını belirlemeyi düşünün. Bu görev için hangi sütunları seçerdiniz? İpucu: 3 sütuna ihtiyacınız olacak.
+
+Çözüm: yeni bir Fiyat sütununu doldurmak için `Low Price` ve `High Price` sütunlarının ortalamasını alın ve Tarih sütununu yalnızca ayı gösterecek şekilde dönüştürün. Neyse ki, yukarıdaki kontrole göre tarihler veya fiyatlar için eksik veri yok.
+
+1. Ortalama hesaplamak için aşağıdaki kodu ekleyin:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+ ✅ `print(month)` kullanarak kontrol etmek istediğiniz herhangi bir veriyi yazdırabilirsiniz.
+
+2. Şimdi, dönüştürdüğünüz verileri yeni bir Pandas dataframe'ine kopyalayın:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+ Dataframe'inizi yazdırmak, yeni regresyon modelinizi oluşturabileceğiniz temiz, düzenli bir veri setini gösterecektir.
+
+### Ama bekleyin! Burada tuhaf bir şey var
+
+`Package` sütununa bakıldığında, kabakların birçok farklı konfigürasyonda satıldığı görülüyor. Bazıları '1 1/9 bushel', bazıları '1/2 bushel' ölçüsüyle, bazıları kabak başına, bazıları pound başına, bazıları ise farklı genişliklerde büyük kutularda satılıyor.
+
+> Kabakları tutarlı bir şekilde tartmak çok zor görünüyor
+
+Orijinal veriye bakıldığında, `Unit of Sale` değeri 'EACH' veya 'PER BIN' olan kayıtların `Package` türünün de inç başına, bin başına veya 'each' olması ilginçtir. Kabakları tutarlı bir şekilde tartmak çok zor görünüyor; bu yüzden yalnızca `Package` sütununda 'bushel' dizesi geçen kabakları seçerek verileri filtreleyelim.
+
+1. Dosyanın en üstüne, ilk .csv importunun altına bir filtre ekleyin:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+ Şimdi veriyi yazdırırsanız, yalnızca bushel ile satılan balkabaklarını içeren yaklaşık 415 satır veri aldığınızı görebilirsiniz.
+
+### Ama bekleyin! Yapılacak bir şey daha var
+
+Bushel miktarının satır başına değiştiğini fark ettiniz mi? Fiyatlandırmayı normalize etmeniz ve bushel başına fiyatı göstermeniz gerekiyor, bu yüzden standartlaştırmak için biraz matematik yapın.
+
+1. `new_pumpkins` dataframe'ini oluşturan bloğun hemen ardından şu satırları ekleyin:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308)'e göre, bir bushel'in ağırlığı ürün türüne bağlı olarak değişir, çünkü bushel bir hacim ölçüsüdür. "Örneğin, bir bushel domatesin 56 pound ağırlığında olması gerekir... Yapraklar ve yeşillikler daha az ağırlıkla daha fazla yer kaplar; bu yüzden bir bushel ıspanak yalnızca 20 pound'dur." Bu oldukça karmaşık! Bushel'den pound'a dönüşümle uğraşmak yerine bushel başına fiyatlandırma yapalım. Kabak bushel'ları üzerine yapılan tüm bu inceleme, verinizin doğasını anlamanın ne kadar önemli olduğunu gösteriyor!
+
+Şimdi, bushel ölçümlerine dayalı olarak birim başına fiyatlandırmayı analiz edebilirsiniz. Veriyi bir kez daha yazdırırsanız, nasıl standartlaştırıldığını görebilirsiniz.
+
+✅ Yarım bushel ile satılan kabakların çok pahalı olduğunu fark ettiniz mi? Nedenini bulabilir misiniz? İpucu: küçük kabaklar büyüklerinden çok daha pahalıdır; bunun nedeni muhtemelen, büyük ve içi boş bir turta kabağının kapladığı kullanılmayan alan düşünüldüğünde, bushel başına çok daha fazla küçük kabak sığmasıdır.
+
+## Görselleştirme Stratejileri
+
+Veri bilimcilerinin rolü, çalıştıkları verilerin kalitesini ve doğasını göstermektir. Bunu yapmak için, genellikle verilerin farklı yönlerini gösteren ilginç görselleştirmeler, grafikler ve tablolar oluştururlar. Bu şekilde, görsel olarak ilişkileri ve keşfedilmesi zor boşlukları gösterebilirler.
+
+[](https://youtu.be/SbUkxH6IJo0 "Yeni başlayanlar için ML - Matplotlib ile Veriler Nasıl Görselleştirilir")
+
+> 🎥 Yukarıdaki resme tıklayarak bu ders için verileri görselleştirme sürecini gösteren kısa bir videoya ulaşabilirsiniz.
+
+Görselleştirmeler, veriler için en uygun makine öğrenimi tekniğini belirlemeye de yardımcı olabilir. Örneğin, bir çizgiye benzeyen bir scatterplot, verilerin doğrusal regresyon için iyi bir aday olduğunu gösterir.
+
+Jupyter defterlerinde iyi çalışan bir veri görselleştirme kütüphanesi [Matplotlib](https://matplotlib.org/)'dir (önceki derste de görmüştünüz).
+
+> Veri görselleştirme ile daha fazla deneyim kazanmak için [bu eğitimlere](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott) göz atın.
+
+## Alıştırma - Matplotlib ile deney yapın
+
+Yeni oluşturduğunuz dataframe'i göstermek için bazı temel grafikler oluşturmaya çalışın. Temel bir çizgi grafiği ne gösterir?
+
+1. Dosyanın en üstüne, Pandas importunun altına Matplotlib'i ekleyin:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. Tüm defteri yeniden çalıştırarak yenileyin.
+1. Defterin altına, veriyi çizmek için bir hücre ekleyin:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ Bu faydalı bir grafik mi? Sizi şaşırtan bir şey var mı?
+
+ Bu çok faydalı değil çünkü verilerinizi belirli bir ayda yayılmış noktalar olarak gösterir.
+
+### Onu faydalı hale getirin
+
+Grafiklerin faydalı veriler göstermesi için genellikle verileri bir şekilde gruplamanız gerekir. Y ekseninin ayları gösterdiği ve verilerin dağılımını ortaya koyan bir grafik oluşturmayı deneyelim.
+
+1. Gruplandırılmış bir çubuk grafik oluşturmak için bir hücre ekleyin:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ Bu daha faydalı bir veri görselleştirme! Balkabağı fiyatlarının en yüksek olduğu dönemlerin Eylül ve Ekim olduğunu gösteriyor gibi görünüyor. Bu beklentinizi karşılıyor mu? Neden veya neden değil?
+
+---
+
+## 🚀Meydan okuma
+
+Matplotlib'in sunduğu farklı görselleştirme türlerini keşfedin. Hangi türler regresyon problemleri için en uygundur?
+
+## [Ders sonrası sınavı](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## İnceleme ve Kendi Kendine Çalışma
+
+Verileri görselleştirmenin birçok yoluna bir göz atın. Mevcut çeşitli kütüphanelerin bir listesini yapın ve hangi tür görevler için en uygun olduklarını not edin, örneğin 2D görselleştirmeler vs. 3D görselleştirmeler. Ne keşfediyorsunuz?
+
+## Ödev
+
+[Görselleştirmeyi keşfetmek](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/2-Data/assignment.md b/translations/tr/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..c31062ef7
--- /dev/null
+++ b/translations/tr/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# Görselleştirmeleri Keşfetmek
+
+Veri görselleştirme için kullanılabilecek çeşitli kütüphaneler vardır. Bu dersteki Kabak verilerini kullanarak matplotlib ve seaborn ile örnek bir not defterinde bazı görselleştirmeler oluşturun. Hangi kütüphanelerle çalışmak daha kolay?
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek | Yeterli | Geliştirilmeli |
+| -------- | --------- | -------- | ----------------- |
+| | İki keşif/görselleştirme içeren bir not defteri gönderildi | Bir keşif/görselleştirme içeren bir not defteri gönderildi | Not defteri gönderilmedi |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/2-Data/solution/Julia/README.md b/translations/tr/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..3771b60f2
--- /dev/null
+++ b/translations/tr/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/3-Linear/README.md b/translations/tr/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..5e7203319
--- /dev/null
+++ b/translations/tr/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# Scikit-learn kullanarak bir regresyon modeli oluşturun: dört farklı regresyon yöntemi
+
+
+> İnfografik [Dasani Madipalli](https://twitter.com/dasani_decoded) tarafından
+## [Ders öncesi sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [Bu ders R dilinde de mevcut!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### Giriş
+
+Şu ana kadar, bu derste kullanacağımız kabak fiyatlandırma veri setinden toplanan örnek verilerle regresyonun ne olduğunu keşfettiniz. Ayrıca Matplotlib kullanarak bu veriyi görselleştirdiniz.
+
+Şimdi, ML için regresyonun derinliklerine dalmaya hazırsınız. Görselleştirme, veriyi anlamlandırmanıza yardımcı olurken, Makine Öğreniminin gerçek gücü _modellerin eğitilmesinden_ gelir. Modeller, tarihi veriler üzerinde eğitilir ve veri bağımlılıklarını otomatik olarak yakalar, böylece modelin daha önce görmediği yeni veriler için sonuçları tahmin etmenizi sağlar.
+
+Bu derste, _temel doğrusal regresyon_ ve _polinomial regresyon_ olmak üzere iki tür regresyon hakkında daha fazla bilgi edineceksiniz ve bu tekniklerin altında yatan bazı matematiksel temelleri öğreneceksiniz. Bu modeller, farklı girdi verilerine bağlı olarak kabak fiyatlarını tahmin etmemize olanak tanıyacak.
+
+[](https://youtu.be/CRxFT8oTDMg "Yeni başlayanlar için ML - Doğrusal Regresyonu Anlamak")
+
+> 🎥 Doğrusal regresyon hakkında kısa bir video özet için yukarıdaki resme tıklayın.
+
+> Bu müfredat boyunca minimum düzeyde matematik bilgisi varsayıyor ve diğer alanlardan gelen öğrenciler için erişilebilir olmasını hedefliyoruz; bu yüzden notlara, 🧮 bilgi kutularına (callout), diyagramlara ve diğer öğrenme araçlarına dikkat edin.
+
+### Önkoşul
+
+Şu ana kadar incelediğimiz kabak verisinin yapısına aşina olmalısınız. Bu dersin _notebook.ipynb_ dosyasında önceden yüklenmiş ve temizlenmiş olarak bulabilirsiniz. Dosyada, kabak fiyatı yeni bir veri çerçevesinde bushel başına gösterilmektedir. Bu not defterlerini Visual Studio Code'daki çekirdeklerde çalıştırabildiğinizden emin olun.
+
+### Hazırlık
+
+Hatırlatma olarak, bu veriyi sorular sormak için yüklüyorsunuz.
+
+- Kabak almak için en iyi zaman ne zaman?
+- Mini kabakların bir kutusunun fiyatı ne olabilir?
+- Yarım bushel sepetlerde mi yoksa 1 1/9 bushel kutularda mı almalıyım?
+
+Bu veriyi incelemeye devam edelim.
+
+Önceki derste, bir Pandas veri çerçevesi oluşturup, orijinal veri setinin bir kısmıyla doldurmuştunuz, bushel başına fiyatlandırmayı standartlaştırmıştınız. Ancak, bunu yaparak sadece yaklaşık 400 veri noktası toplayabildiniz ve sadece sonbahar ayları için.
+
+Bu derse eşlik eden not defterindeki önceden yüklenmiş verilere bir göz atın. Veriler önceden yüklenmiş ve ay verilerini göstermek için başlangıçta bir saçılım grafiği çizilmiştir. Veriyi biraz daha temizleyerek, doğası hakkında belki daha fazla ayrıntı elde edebiliriz.
+
+## Doğrusal regresyon çizgisi
+
+Derste öğrendiğiniz gibi, bir doğrusal regresyon egzersizinin amacı şunları yapabilmek için bir çizgi çizebilmektir:
+
+- **Değişken ilişkilerini gösterin**. Değişkenler arasındaki ilişkiyi gösterin.
+- **Tahminler yapın**. Yeni bir veri noktasının bu çizgiye göre nereye düşeceğini doğru bir şekilde tahmin edin.
+
+Bu tür bir çizgi çizmek için **En Küçük Kareler Regresyonu** kullanmak tipiktir. 'En küçük kareler' terimi, regresyon çizgisini çevreleyen tüm veri noktalarının (çizgiye olan uzaklıklarının) karelerinin alınıp toplanması anlamına gelir. İdealde bu nihai toplam mümkün olduğunca küçük olmalıdır, çünkü az miktarda hata, yani `least-squares` (en küçük kareler) isteriz.
+
+Bunu yaparız çünkü tüm veri noktalarımızdan en az toplam mesafeye sahip bir çizgi modellemek istiyoruz. Ayrıca terimleri toplarken karelerini alırız çünkü yönünden ziyade büyüklüğü ile ilgileniriz.
+
+> **🧮 Bana matematiği göster**
+>
+> Bu çizgi, _en iyi uyum çizgisi_ olarak adlandırılır ve [bir denklemle](https://en.wikipedia.org/wiki/Simple_linear_regression) ifade edilebilir:
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X`, 'açıklayıcı değişken'dir; `Y` ise 'bağımlı değişken'dir. Çizginin eğimi `b`'dir; `a` ise y-kesişimidir ve `X = 0` olduğunda `Y`'nin aldığı değeri ifade eder.
+>
+>
+>
+> Önce eğimi (`b`) hesaplayın. İnfografik [Jen Looper](https://twitter.com/jenlooper) tarafından
+>
+> Başka bir deyişle, kabak verimizin asıl sorusuna ("bir kabağın bushel başına fiyatını aya göre tahmin et") dönersek, `X` fiyatı, `Y` ise satış ayını ifade eder.
+>
+>
+>
+> Y'nin değerini hesaplayın. Yaklaşık 4$ ödüyorsanız, Nisan olmalı! İnfografik [Jen Looper](https://twitter.com/jenlooper) tarafından
+>
+> Çizgiyi hesaplayan matematik, çizginin eğimini ortaya koymalıdır; eğim aynı zamanda kesişime, yani `X = 0` olduğunda `Y`'nin nerede bulunduğuna da bağlıdır.
+>
+> Bu değerlerin nasıl hesaplandığını [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) web sitesinde görebilirsiniz. Ayrıca sayıların değerlerinin çizgiyi nasıl etkilediğini izlemek için [bu En Küçük Kareler hesaplayıcısını](https://www.mathsisfun.com/data/least-squares-calculator.html) ziyaret edin.
+
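+Yukarıdaki en küçük kareler formüllerinin numpy ile nasıl hesaplanabileceğine dair küçük bir taslak; buradaki `x` ve `y` değerleri yalnızca gösterim amaçlı, varsayımsal sayılardır:
+
+```python
+import numpy as np
+
+# Varsayımsal örnek veri
+x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
+y = np.array([2.1, 2.9, 3.7, 4.2, 5.1])
+
+# Eğim: b = Σ((x - x̄)(y - ȳ)) / Σ((x - x̄)²), y-kesişimi: a = ȳ - b·x̄
+b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
+a = y.mean() - b * x.mean()
+print(f'Y = {a:.2f} + {b:.2f}X')
+```
+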
+## Korelasyon
+
+Anlaşılması gereken bir diğer terim, verilen X ve Y değişkenleri arasındaki **Korelasyon Katsayısı**dır. Bir saçılım grafiği kullanarak bu katsayıyı hızlıca görselleştirebilirsiniz. Veri noktaları düzgün bir çizgi halinde dizilen bir grafikte korelasyon yüksektir; veri noktalarının X ile Y arasında her yere saçıldığı bir grafikte ise korelasyon düşüktür.
+
+İyi bir doğrusal regresyon modeli, En Küçük Kareler Regresyonu yöntemi ve bir regresyon çizgisi kullanıldığında yüksek (0'dan çok 1'e yakın) bir Korelasyon Katsayısına sahip olan modeldir.
+
+✅ Bu derse eşlik eden not defterini çalıştırın ve Ay-Fiyat saçılım grafiğine bakın. Saçılım grafiğini görsel olarak yorumladığınızda, kabak satışlarında Ay'ı Fiyat'a bağlayan veriler size yüksek mi yoksa düşük korelasyonlu mu görünüyor? `Month` yerine daha ince taneli bir ölçü, örneğin *yılın günü* (yani yılın başından bu yana geçen gün sayısı) kullanırsanız bu değişiyor mu?
+
+Aşağıdaki kodda, veriyi temizlediğimizi ve aşağıdakine benzer, `new_pumpkins` adlı bir veri çerçevesi elde ettiğimizi varsayacağız:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> Veriyi temizleyen kod [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb) dosyasında mevcuttur. Önceki dersteki temizleme adımlarının aynısını uyguladık ve `DayOfYear` sütununu aşağıdaki ifadeyi kullanarak hesapladık:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+Şimdi doğrusal regresyonun ardındaki matematiği anladığınıza göre, hangi kabak paketinin en iyi fiyatlara sahip olacağını tahmin edip edemeyeceğimizi görmek için bir Regresyon modeli oluşturalım. Tatil kabak bahçesi için kabak satın alan biri, bahçe için kabak paketlerini optimize edebilmek için bu bilgiye sahip olmak isteyebilir.
+
+## Korelasyon Arayışı
+
+[](https://youtu.be/uoRq-lW2eQo "Yeni başlayanlar için ML - Korelasyon Arayışı: Doğrusal Regresyonun Anahtarı")
+
+> 🎥 Korelasyon hakkında kısa bir video özet için yukarıdaki resme tıklayın.
+
+Önceki dersten, farklı aylar için ortalama fiyatın şu şekilde göründüğünü muhtemelen görmüşsünüzdür:
+
+
+
+Bu, `Month` ile `Price` arasında veya `DayOfYear` ile `Price` arasında bir korelasyon olması gerektiğini düşündürüyor. İşte bu ikinci ilişkiyi gösteren saçılım grafiği:
+
+
+
+`corr` fonksiyonunu kullanarak bir korelasyon olup olmadığına bakalım:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+Görünüşe göre korelasyon oldukça küçük: `Month` için `-0.15`, `DayOfYear` için `-0.17`. Ancak başka önemli bir ilişki olabilir. Farklı kabak çeşitlerine karşılık gelen farklı fiyat kümeleri var gibi görünüyor. Bu hipotezi doğrulamak için her kabak kategorisini farklı bir renk kullanarak çizelim. `scatter` çizim fonksiyonuna bir `ax` parametresi geçirerek tüm noktaları aynı grafik üzerinde çizebiliriz:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+Araştırmamız, çeşidin genel fiyat üzerinde satış tarihinden daha fazla etkisi olduğunu öne sürüyor. Bunu bir çubuk grafikle görebiliriz:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+Şu an için sadece bir kabak çeşidine, 'turta tipi'ne odaklanalım ve tarihin fiyat üzerindeki etkisini görelim:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+Şimdi `Price` ile `DayOfYear` arasındaki korelasyonu `corr` fonksiyonunu kullanarak hesaplarsak, yaklaşık `-0.27` gibi bir değer elde ederiz; bu da tahmin edici bir model eğitmenin mantıklı olduğunu gösterir.
+
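+Bu değeri not defterinde kendiniz de doğrulayabilirsiniz; küçük bir taslak:
+
+```python
+# 'Turta tipi' kabaklar için DayOfYear ile Price arasındaki korelasyon
+print(pie_pumpkins['DayOfYear'].corr(pie_pumpkins['Price']))
+```
+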
+> Doğrusal regresyon modeli eğitmeden önce, verimizin temiz olduğundan emin olmak önemlidir. Doğrusal regresyon eksik değerlerle iyi çalışmaz, bu yüzden tüm boş hücrelerden kurtulmak mantıklıdır:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+Başka bir yaklaşım, bu boş değerleri ilgili sütunun ortalama değerleriyle doldurmak olabilir.
+
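+Örneğin, `Price` sütunu için böyle bir doldurma kabaca şöyle görünebilir (satırları silmek yerine kullanılabilecek, varsayımsal bir taslak):
+
+```python
+# Boş fiyat değerlerini sütunun ortalamasıyla doldurun
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```
+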
+## Basit Doğrusal Regresyon
+
+[](https://youtu.be/e4c_UP2fSjg "Yeni başlayanlar için ML - Scikit-learn kullanarak Doğrusal ve Polinomial Regresyon")
+
+> 🎥 Doğrusal ve polinomial regresyon hakkında kısa bir video özet için yukarıdaki resme tıklayın.
+
+Doğrusal Regresyon modelimizi eğitmek için **Scikit-learn** kütüphanesini kullanacağız.
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+Başlangıçta, giriş değerlerini (özellikler) ve beklenen çıktıyı (etiket) ayrı numpy dizilerine ayırıyoruz:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> Giriş verisi üzerinde `reshape` işlemi yapmamız gerektiğine dikkat edin; böylece Doğrusal Regresyon paketi veriyi doğru yorumlayabilir. Doğrusal Regresyon, her satırı bir girdi özellikleri vektörüne karşılık gelen 2B bir dizi bekler. Bizim durumumuzda yalnızca bir girdi olduğundan, N×1 şeklinde bir diziye ihtiyacımız var; burada N, veri setinin boyutudur.
+
+Daha sonra, veriyi eğitim ve test veri setlerine ayırmamız gerekiyor, böylece modeli eğittikten sonra doğrulayabiliriz:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+Son olarak, asıl Doğrusal Regresyon modelini eğitmek yalnızca iki satır kod gerektirir. Bir `LinearRegression` nesnesi tanımlar ve `fit` yöntemini kullanarak onu verimize uydururuz:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+`fit` işleminden sonra `LinearRegression` nesnesi, regresyonun tüm katsayılarını içerir; bunlara `.coef_` özelliği üzerinden erişilebilir. Bizim durumumuzda yalnızca bir katsayı vardır ve yaklaşık `-0.017` olmalıdır. Bu, fiyatların zamanla biraz düştüğü, ancak çok fazla değil, günde yaklaşık 2 sent düştüğü anlamına gelir. Regresyonun Y ekseniyle kesişme noktasına da `lin_reg.intercept_` ile erişebiliriz; bizim durumumuzda bu değer yaklaşık `21` olacaktır ve yıl başındaki fiyatı gösterir.
+
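+Bu değerlere bakmak için küçük bir taslak (tam çıktılar, rastgele eğitim/test bölünmesine göre değişebilir):
+
+```python
+print(lin_reg.coef_)       # eğim katsayısı/katsayıları
+print(lin_reg.intercept_)  # regresyonun Y eksenini kestiği nokta
+```
+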
+Modelimizin ne kadar doğru olduğunu görmek için, test veri setinde fiyatları tahmin edebilir ve ardından tahminlerimizin beklenen değerlere ne kadar yakın olduğunu ölçebiliriz. Bu, beklenen ve tahmin edilen değerler arasındaki tüm kare farklarının ortalaması olan ortalama kare hata (MSE) metrikleri kullanılarak yapılabilir.
+
+```python
+pred = lin_reg.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+Hatalarımız yaklaşık 2 puan gibi görünüyor, bu da ~%17. Çok iyi değil. Model kalitesinin başka bir göstergesi **belirleme katsayısı**dır ve şu şekilde elde edilebilir:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+Değer 0 ise, modelin girdi verilerini dikkate almadığı ve *en kötü doğrusal tahminci* olarak davrandığı anlamına gelir, bu da basitçe sonucun ortalama değeridir. Değer 1 ise, tüm beklenen çıktıları mükemmel bir şekilde tahmin edebildiğimiz anlamına gelir. Bizim durumumuzda, katsayı yaklaşık 0.06, bu oldukça düşük.
+
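+Varsayımsal bir ek kontrol olarak aynı skoru test verisi üzerinde de hesaplayabilirsiniz; eğitim ve test skorları arasındaki büyük bir fark, aşırı uyuma (overfitting) işaret edebilir:
+
+```python
+# Belirleme katsayısını bir de test seti üzerinde hesaplayın
+print('Test determination: ', lin_reg.score(X_test, y_test))
+```
+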
+Ayrıca test verilerini regresyon çizgisi ile birlikte çizerek, regresyonun bizim durumumuzda nasıl çalıştığını daha iyi görebiliriz:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## Polinomial Regresyon
+
+Doğrusal Regresyonun başka bir türü Polinomial Regresyondur. Bazen değişkenler arasında doğrusal bir ilişki vardır - kabak hacmi büyüdükçe fiyat artar - bazen bu ilişkiler bir düzlem veya düz bir çizgi olarak çizilemez.
+
+✅ İşte Polinomial Regresyonun kullanılabileceği verilere dair [bazı örnekler](https://online.stat.psu.edu/stat501/lesson/9/9.8)
+
+Tarih ve Fiyat arasındaki ilişkiye bir kez daha bakın. Bu saçılma grafiği mutlaka düz bir çizgi ile analiz edilmeli mi? Fiyatlar dalgalanamaz mı? Bu durumda, polinomial regresyonu deneyebilirsiniz.
+
+✅ Polinomlar, bir veya daha fazla değişken ve katsayıdan oluşabilen matematiksel ifadelerdir (örneğin, 2x² + 3x + 1 ikinci dereceden bir polinomdur)
+
+Polinomial regresyon, doğrusal olmayan verilere daha iyi uyan eğri bir çizgi oluşturur. Bizim durumumuzda, girdi verisine `DayOfYear` değişkeninin karesini eklersek, verimize, yıl içinde belirli bir noktada minimumu olan parabolik bir eğri uydurabiliriz.
+
+Scikit-learn, veri işleme adımlarını bir araya getirmek için kullanışlı bir [pipeline API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline) içerir. Bir **pipeline**, bir **estimators** zinciridir. Bizim durumumuzda, modelimize önce polinomial özellikler ekleyen ve ardından regresyonu eğiten bir pipeline oluşturacağız:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+`PolynomialFeatures(2)`, girdi verisindeki tüm ikinci derece polinomları dahil edeceğimiz anlamına gelir. Bizim durumumuzda bu yalnızca `DayOfYear`² demektir; ancak X ve Y gibi iki girdi değişkeni verildiğinde bu, X², XY ve Y² terimlerini ekler. İstersek daha yüksek dereceli polinomlar da kullanabiliriz.
+
+Pipeline'lar, orijinal `LinearRegression` nesnesiyle aynı şekilde kullanılabilir; yani pipeline'ı `fit` edebilir ve ardından tahmin sonuçlarını almak için `predict` kullanabiliriz. İşte test verisini ve yaklaşım eğrisini gösteren grafik:
+
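+Pipeline yukarıdaki gibi eğitildikten sonra, tahmin üretme ve hatayı ölçme, basit doğrusal regresyondakiyle aynı şekilde yapılabilir; küçük bir taslak:
+
+```python
+# Test verisi için tahmin üret ve hatayı ölç
+pred = pipeline.predict(X_test)
+mse = np.sqrt(mean_squared_error(y_test, pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+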
+
+
+Polinomial Regresyon kullanarak biraz daha düşük MSE ve biraz daha yüksek belirleme katsayısı elde edebiliriz, ancak fark önemli değildir. Diğer özellikleri de hesaba katmamız gerekiyor!
+
+> En düşük kabak fiyatlarının Cadılar Bayramı civarında gözlemlendiğini görebilirsiniz. Bunu nasıl açıklarsınız?
+
+🎃 Tebrikler, az önce turta kabaklarının fiyatını tahmin etmeye yardımcı olabilecek bir model oluşturdunuz. Muhtemelen aynı prosedürü tüm kabak türleri için tekrarlayabilirsiniz, ama bu sıkıcı olurdu. Şimdi modelimizde kabak çeşidini nasıl hesaba katacağımızı öğrenelim!
+
+## Kategorik Özellikler
+
+İdeal bir dünyada, farklı kabak çeşitlerinin fiyatlarını aynı modeli kullanarak tahmin edebilmek isteriz. Ancak `Variety` sütunu, `Month` gibi sütunlardan biraz farklıdır; çünkü sayısal olmayan değerler içerir. Bu tür sütunlara **kategorik** sütunlar denir.
+
+[](https://youtu.be/DYGliioIAE0 "Yeni başlayanlar için ML - Doğrusal Regresyon ile Kategorik Özellik Tahminleri")
+
+> 🎥 Kategorik özelliklerin kullanımına dair kısa bir video özet için yukarıdaki resme tıklayın.
+
+Burada ortalama fiyatın çeşide göre nasıl değiştiğini görebilirsiniz:
+
+
+
+
+Çeşidi hesaba katmak için önce onu sayısal forma dönüştürmemiz, yani **kodlamamız** gerekir. Bunu yapmanın birkaç yolu vardır:
+
+* Basit **sayısal kodlama**, farklı çeşitlerden bir tablo oluşturur ve ardından çeşit adını bu tablodaki bir indeksle değiştirir. Bu, doğrusal regresyon için en iyi fikir değildir; çünkü doğrusal regresyon, indeksin gerçek sayısal değerini alır ve bir katsayıyla çarparak sonuca ekler. Bizim durumumuzda, indeks numarası ile fiyat arasındaki ilişki, indekslerin belirli bir şekilde sıralandığından emin olsak bile açıkça doğrusal değildir.
+* **One-hot kodlama**, `Variety` sütununu her çeşit için bir tane olmak üzere 4 farklı sütunla değiştirir. Her sütun, ilgili satır belirli bir çeşitten ise `1`, değilse `0` değerini içerir. Bu, doğrusal regresyonda dört katsayı olacağı anlamına gelir; bunların her biri, belirli bir kabak çeşidi için 'başlangıç fiyatından' (daha doğrusu 'ek fiyattan') sorumlu olur.
+
+Aşağıdaki kod, çeşit sütununu nasıl one-hot kodlayabileceğimizi gösterir:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+One-hot kodlanmış çeşidi girdi olarak kullanarak doğrusal regresyon eğitmek için, yalnızca `X` ve `y` verilerini doğru şekilde başlatmamız yeterlidir:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+Kodun geri kalanı, yukarıda Doğrusal Regresyon eğitmek için kullandığımız kodla aynıdır. Denerseniz, ortalama kare hatanın yaklaşık aynı kaldığını, ancak belirleme katsayısının çok daha yüksek (~%77) olduğunu görürsünüz. Daha da doğru tahminler için, `Month` veya `DayOfYear` gibi sayısal özelliklerin yanı sıra daha fazla kategorik özelliği de hesaba katabiliriz. Tüm özellikleri tek bir büyük dizide toplamak için `join` kullanabiliriz:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+Burada ayrıca `City` ve `Package` türünü de hesaba katıyoruz; bu bize MSE 2.84 (%10) ve 0.94'lük bir belirleme katsayısı verir!
+
+## Hepsini bir araya getirmek
+
+En iyi modeli oluşturmak için, yukarıdaki örnekteki birleştirilmiş (one-hot kodlanmış kategorik + sayısal) veriyi Polinomial Regresyon ile birlikte kullanabiliriz. Kolaylık olması için tam kod aşağıdadır:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+Bu, yaklaşık %97'lik en iyi belirleme katsayısını ve MSE=2.23 (~%8 tahmin hatası) vermelidir.
+
+| Model | MSE | Belirleme |
+|-------|-----|-----------|
+| `DayOfYear` Doğrusal | 2.77 (%17.2) | 0.07 |
+| `DayOfYear` Polinomial | 2.73 (%17.0) | 0.08 |
+| `Variety` Doğrusal | 5.24 (%19.7) | 0.77 |
+| Tüm özellikler Doğrusal | 2.84 (%10.5) | 0.94 |
+| Tüm özellikler Polinomial | 2.23 (%8.25) | 0.97 |
+
+🏆 Tebrikler! Bir derste dört Regresyon modeli oluşturdunuz ve model kalitesini %97'ye çıkardınız. Regresyon üzerine son bölümde, kategorileri belirlemek için Lojistik Regresyon hakkında bilgi edineceksiniz.
+
+---
+## 🚀Meydan Okuma
+
+Bu not defterinde birkaç farklı değişkeni test edin ve korelasyonun model doğruluğuyla nasıl ilişkili olduğunu görün.
+
+## [Ders sonrası sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Bu derste Doğrusal Regresyon hakkında bilgi edindik. Başka önemli Regresyon türleri de vardır. Stepwise (adımsal) regresyon, Ridge, Lasso ve Elasticnet teknikleri hakkında okuyun. Daha fazlasını öğrenmek için iyi bir kaynak, [Stanford Statistical Learning kursudur](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning).
+
+## Ödev
+
+[Bir Model Oluşturun](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/3-Linear/assignment.md b/translations/tr/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..3e7c7c8c7
--- /dev/null
+++ b/translations/tr/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# Bir Regresyon Modeli Oluştur
+
+## Talimatlar
+
+Bu derste, Hem Doğrusal Hem de Polinom Regresyon kullanarak nasıl model oluşturulacağını öğrendiniz. Bu bilgiyi kullanarak, bir veri seti bulun veya Scikit-learn'in yerleşik setlerinden birini kullanarak yeni bir model oluşturun. Defterinizde hangi tekniği neden seçtiğinizi açıklayın ve modelinizin doğruluğunu gösterin. Eğer doğru değilse, nedenini açıklayın.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirme Gerekiyor |
+| --------- | -------------------------------------------------------------- | -------------------------- | ------------------------------- |
+| | iyi belgelenmiş bir çözümle eksiksiz bir defter sunar | çözüm eksiktir | çözüm hatalı veya sorunludur |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/3-Linear/solution/Julia/README.md b/translations/tr/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..0168f878a
--- /dev/null
+++ b/translations/tr/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/4-Logistic/README.md b/translations/tr/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..1b2636471
--- /dev/null
+++ b/translations/tr/2-Regression/4-Logistic/README.md
@@ -0,0 +1,344 @@
+# Kategorileri Tahmin Etmek İçin Lojistik Regresyon
+
+
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [Bu ders R dilinde mevcut!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## Giriş
+
+Regresyon üzerine olan bu son derste, temel _klasik_ ML tekniklerinden biri olan Lojistik Regresyonu inceleyeceğiz. Bu tekniği, ikili kategorileri tahmin etmek için desenleri keşfetmek amacıyla kullanabilirsiniz. Bu şeker çikolata mı yoksa değil mi? Bu hastalık bulaşıcı mı değil mi? Bu müşteri bu ürünü seçecek mi yoksa seçmeyecek mi?
+
+Bu derste öğrenecekleriniz:
+
+- Veri görselleştirme için yeni bir kütüphane
+- Lojistik regresyon teknikleri
+
+✅ Bu tür bir regresyonla çalışma konusundaki anlayışınızı derinleştirmek için bu [Öğrenme modülüne](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott) göz atın
+
+## Ön Koşul
+
+Balkabağı verileriyle çalıştıktan sonra, üzerinde çalışabileceğimiz bir ikili kategori olduğunu fark edecek kadar bu veriye aşina olduk: `Color`.
+
+Hadi bazı değişkenlere dayanarak _belirli bir balkabağının renginin ne olacağını_ (turuncu 🎃 veya beyaz 👻) tahmin etmek için bir lojistik regresyon modeli oluşturalım.
+
+> Neden regresyonla ilgili bir derste ikili sınıflandırmadan bahsediyoruz? Sadece dilsel kolaylık için, çünkü lojistik regresyon [aslında bir sınıflandırma yöntemidir](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression), ancak doğrusal tabanlıdır. Verileri sınıflandırmanın diğer yollarını bir sonraki ders grubunda öğrenin.
+
+## Soruyu Tanımlayın
+
+Amacımız doğrultusunda bunu ikili olarak ifade edeceğiz: 'Beyaz' veya 'Beyaz Değil'. Veri setimizde 'çizgili' kategorisi de var, ancak çok az örneği olduğu için onu kullanmayacağız; veri setinden boş değerleri kaldırdığımızda bu kategori zaten kayboluyor.
+
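+'Çizgili' örneklerin gerçekten az olduğunu hızlıca doğrulamak isterseniz, küçük bir taslak; burada `full_pumpkins`'in, başlangıç not defterinde yüklenen veri çerçevesi olduğunu varsayıyoruz:
+
+```python
+# Her renk kategorisinde kaç örnek olduğunu sayın
+print(full_pumpkins['Color'].value_counts())
+```
+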
+> 🎃 Eğlenceli bilgi, bazen beyaz balkabaklarına 'hayalet' balkabakları deriz. Oyulması çok kolay değildir, bu yüzden turuncu olanlar kadar popüler değillerdir ama havalı görünürler! Bu yüzden sorumuzu şu şekilde de yeniden formüle edebiliriz: 'Hayalet' veya 'Hayalet Değil'. 👻
+
+## Lojistik regresyon hakkında
+
+Lojistik regresyon, daha önce öğrendiğiniz doğrusal regresyondan birkaç önemli şekilde farklıdır.
+
+[](https://youtu.be/KpeCT6nEpBY "Başlangıç seviyesinde ML - Makine Öğrenimi Sınıflandırması için Lojistik Regresyonu Anlamak")
+
+> 🎥 Lojistik regresyon hakkında kısa bir genel bakış için yukarıdaki resme tıklayın.
+
+### İkili sınıflandırma
+
+Lojistik regresyon, doğrusal regresyonla aynı özellikleri sunmaz. İlki, bir ikili kategori hakkında bir tahmin sunar ("beyaz veya beyaz değil"), ikincisi ise sürekli değerleri tahmin edebilir, örneğin bir balkabağının kökeni ve hasat zamanı verildiğinde, _fiyatının ne kadar artacağını_.
+
+
+> İnfografik [Dasani Madipalli](https://twitter.com/dasani_decoded) tarafından
+
+### Diğer sınıflandırmalar
+
+Lojistik regresyonun başka türleri de vardır, bunlar arasında çoklu ve sıralı:
+
+- **Çoklu**, birden fazla kategoriye sahip olmak anlamına gelir - "Turuncu, Beyaz ve Çizgili".
+- **Sıralı**, sıralı kategorileri içerir, sonuçlarımızı mantıksal olarak sıralamak istediğimizde kullanışlıdır, örneğin sınırlı sayıda boyuta göre sıralanan balkabaklarımız (mini, küçük, orta, büyük, çok büyük, devasa).
+
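+Yukarıdaki 'çoklu' durum için, scikit-learn'de çoklu (multinomial) lojistik regresyonun ikiden fazla kategoriyle nasıl kurulabileceğine dair küçük bir taslak aşağıdadır; `make_classification` ile üretilen yapay veri ve tüm değişken adları yalnızca örnekleme amaçlıdır, ders verisinin parçası değildir:
+
+```python
+from sklearn.datasets import make_classification
+from sklearn.linear_model import LogisticRegression
+
+# Üç sınıflı yapay bir veri kümesi oluştur: 'multinomial' ayarı,
+# lojistik regresyonun ikiden fazla kategoriyi işlemesini sağlar
+X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
+                           n_classes=3, random_state=0)
+
+multi_model = LogisticRegression(multi_class='multinomial', max_iter=1000)
+multi_model.fit(X, y)
+print(multi_model.predict(X[:5]))  # üç farklı sınıftan etiketler döndürür
+```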
+
+
+### Değişkenlerin KORELASYONLU OLMASINA GEREK YOK
+
+Doğrusal regresyonun daha fazla korelasyonlu değişkenlerle daha iyi çalıştığını hatırlıyor musunuz? Lojistik regresyon bunun tersidir: değişkenlerin birbiriyle korelasyonlu olması gerekmez. Bu, oldukça zayıf korelasyonlara sahip bu veri için işe yarar.
+
+### Çok temiz verilere ihtiyacınız var
+
+Lojistik regresyon, daha fazla veri kullanırsanız daha doğru sonuçlar verir; küçük veri setimiz bu görev için optimal değildir, bu yüzden bunu aklınızda bulundurun.
+
+[](https://youtu.be/B2X4H9vcXTs "Başlangıç seviyesinde ML - Lojistik Regresyon için Veri Analizi ve Hazırlığı")
+
+> 🎥 Lojistik regresyon için veri analizi ve hazırlığı hakkında kısa bir genel bakış için yukarıdaki resme tıklayın
+
+✅ Lojistik regresyona iyi uyum sağlayacak veri türlerini düşünün
+
+## Alıştırma - veriyi düzenleme
+
+Öncelikle, verileri biraz temizleyin, boş değerleri kaldırın ve sadece bazı sütunları seçin:
+
+1. Aşağıdaki kodu ekleyin:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ Yeni veri çerçevenize bir göz atabilirsiniz:
+
+ ```python
+    pumpkins.info()
+ ```
+
+### Görselleştirme - kategorik grafik
+
+Şimdiye kadar [başlangıç not defterini](../../../../2-Regression/4-Logistic/notebook.ipynb) balkabağı verileriyle tekrar yüklediniz ve `Color` içeren birkaç değişkeni koruyarak temizlediniz. Veri çerçevesini farklı bir kütüphane kullanarak not defterinde görselleştirelim: [Seaborn](https://seaborn.pydata.org/index.html), daha önce kullandığımız Matplotlib üzerine kurulmuştur.
+
+Seaborn, verilerinizi görselleştirmenin bazı güzel yollarını sunar. Örneğin, `Variety` ve `Color` verilerinin dağılımlarını kategorik bir grafikte karşılaştırabilirsiniz.
+
+1. `catplot` fonksiyonunu, balkabağı verimiz `pumpkins` üzerinde kullanıp her balkabağı kategorisi (turuncu veya beyaz) için bir renk eşlemesi belirterek böyle bir grafik oluşturun:
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+ Verileri gözlemleyerek, Renk verisinin Çeşitlilik ile nasıl ilişkili olduğunu görebilirsiniz.
+
+ ✅ Bu kategorik grafiğe bakarak, hangi ilginç keşifleri hayal edebilirsiniz?
+
+### Veri ön işleme: özellik ve etiket kodlama
+
+Balkabağı veri setimiz tüm sütunlarında string değerler içerir. Kategorik verilerle çalışmak insanlar için sezgiseldir, ancak makineler için değildir; makine öğrenimi algoritmaları sayılarla iyi çalışır. Bu yüzden kodlama, veri ön işleme aşamasında çok önemli bir adımdır: kategorik verileri herhangi bir bilgi kaybı olmadan sayısal verilere dönüştürmemizi sağlar. İyi bir kodlama, iyi bir model oluşturmaya yol açar.
+
+Özellik kodlama için iki ana tür kodlayıcı vardır:
+
+1. Sıralı kodlayıcı: verileri mantıksal bir sıralamayı izleyen kategorik değişkenler olan sıralı (ordinal) değişkenler için uygundur; veri setimizdeki `Item Size` sütunu böyledir. Her kategori, sütundaki sırasına karşılık gelen bir sayı ile temsil edilir.
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. Kategorik kodlayıcı: verileri mantıksal bir sıralama izlemeyen kategorik değişkenler olan nominal değişkenler için uygundur; veri setimizde `Item Size` dışındaki tüm özellikler böyledir. Bu bir one-hot kodlamadır: her kategori bir ikili sütunla temsil edilir ve kodlanmış değişken, balkabağı o `Variety` değerine aitse 1, değilse 0 olur.
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+Sonra, birden fazla kodlayıcıyı tek bir adımda birleştirip uygun sütunlara uygulamak için `ColumnTransformer` kullanılır.
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+Öte yandan, etiketi kodlamak için scikit-learn'ün `LabelEncoder` sınıfını kullanırız; bu, etiketleri yalnızca 0 ile n_classes-1 (burada 0 ve 1) arasında değerler içerecek şekilde normalize etmeye yardımcı olan bir yardımcı sınıftır.
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
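+Kodlayıcının hangi etikete hangi sayıyı verdiğini merak ederseniz, `classes_` özniteliğine bakabilirsiniz (yukarıdaki `label_encoder` nesnesinin eğitilmiş olduğunu varsayan küçük bir taslak):
+
+```python
+# Sınıflar alfabetik sıraya göre kodlanır; bu veri için ['ORANGE' 'WHITE'] beklenir
+print(label_encoder.classes_)
+print(label_encoder.transform(label_encoder.classes_))  # [0 1]
+```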
+Özellikleri ve etiketi kodladıktan sonra, bunları yeni bir veri çerçevesi `encoded_pumpkins` içinde birleştirebiliriz.
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+✅ `Item Size` sütununu kodlamak için sıralı kodlayıcı kullanmanın avantajları nelerdir?
+
+### Değişkenler arasındaki ilişkileri analiz edin
+
+Verilerimizi ön işlemden geçirdiğimize göre, modelin verilen özelliklerden etiketi ne kadar iyi tahmin edebileceğine dair fikir edinmek için özellikler ile etiket arasındaki ilişkileri analiz edebiliriz.
+Bu tür bir analizi yapmanın en iyi yolu verileri grafiğe dökmektir. `Item Size`, `Variety` ve `Color` arasındaki ilişkileri kategorik bir grafikte görselleştirmek için yine Seaborn `catplot` fonksiyonunu kullanacağız. Verileri daha iyi çizebilmek için kodlanmış `Item Size` sütununu ve kodlanmamış `Variety` sütununu kullanacağız.
+
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### Bir 'swarm' grafiği kullanın
+
+Renk bir ikili kategori olduğundan (Beyaz veya Değil), görselleştirme için 'özelleşmiş bir yaklaşıma' ihtiyaç duyar. Bu kategorinin diğer değişkenlerle ilişkisini görselleştirmenin başka yolları da vardır.
+
+Seaborn grafikleri ile değişkenleri yan yana görselleştirebilirsiniz.
+
+1. Değerlerin dağılımını göstermek için bir 'swarm' grafiği deneyin:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**Dikkat**: Yukarıdaki kod bir uyarı üretebilir, çünkü seaborn bu kadar çok veri noktasını bir swarm grafiğinde temsil edemez. Olası bir çözüm, `size` parametresini kullanarak işaretçi boyutunu küçültmektir. Ancak bunun grafiğin okunabilirliğini etkilediğini unutmayın.
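+
+Örneğin, `size` parametresiyle işaretçileri küçülten olası bir çağrı (önceki hücrelerdeki `sns`, `encoded_pumpkins` ve `palette` tanımlarının mevcut olduğunu varsayan bir taslak):
+
+```python
+# Daha küçük işaretçiler uyarıyı azaltabilir ama okunabilirliği düşürebilir
+sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins,
+              palette=palette, size=3)
+```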
+
+> **🧮 Matematiği Göster**
+>
+> Lojistik regresyon, [sigmoid fonksiyonları](https://wikipedia.org/wiki/Sigmoid_function) kullanarak 'maksimum olasılık' kavramına dayanır. Bir 'Sigmoid Fonksiyonu' bir grafikte 'S' şekline benzer. Bir değeri alır ve 0 ile 1 arasında bir yere haritalar. Eğrisi ayrıca 'lojistik eğri' olarak da adlandırılır. Formülü şu şekildedir:
+>
+> 
+>
+> burada sigmoid'in orta noktası x'in 0 noktasında bulunur, L eğrinin maksimum değeridir ve k eğrinin dikliğidir. Fonksiyonun sonucu 0.5'ten büyükse, ilgili etiket ikili seçimin '1' sınıfına atanır. Değilse, '0' olarak sınıflandırılır.
+
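+Lojistik eğrinin bu davranışını, formüldeki L=1, k=1 ve x0=0 varsayımıyla küçük ve bağımsız bir Python taslağında da görebilirsiniz:
+
+```python
+import numpy as np
+
+def sigmoid(x, L=1.0, k=1.0, x0=0.0):
+    """Lojistik fonksiyon: girdiyi 0 ile L arasındaki bir değere eşler."""
+    return L / (1 + np.exp(-k * (x - x0)))
+
+for x in [-4, -1, 0, 1, 4]:
+    p = sigmoid(x)
+    # 0.5 eşiği: p > 0.5 ise '1' sınıfı, aksi halde '0'
+    print(f"x={x:>2}  sigmoid={p:.3f}  sinif={int(p > 0.5)}")
+```
+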
+## Modelinizi oluşturun
+
+Bu ikili sınıflandırmaları bulmak için bir model oluşturmak Scikit-learn'de şaşırtıcı derecede basittir.
+
+[](https://youtu.be/MmZS2otPrQ8 "Başlangıç seviyesinde ML - Verilerin sınıflandırması için Lojistik Regresyon")
+
+> 🎥 Lojistik regresyon modeli oluşturma hakkında kısa bir genel bakış için yukarıdaki resme tıklayın
+
+1. Sınıflandırma modelinizde kullanmak istediğiniz değişkenleri seçin ve `train_test_split()` çağırarak eğitim ve test setlerini ayırın:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. Şimdi modelinizi, eğitim verilerinizle `fit()` çağırarak eğitebilir ve sonucunu yazdırabilirsiniz:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ Modelinizin skor tablosuna bir göz atın. Yaklaşık 1000 satır veriniz olduğunu düşünürsek fena değil:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## Bir karışıklık matrisi ile daha iyi anlama
+
+Yukarıdaki öğeleri yazdırarak bir skor tablosu raporu [terimleri](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report) elde edebilirsiniz, ancak modelinizin performansını anlamanıza yardımcı olan bir [karışıklık matrisi](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix) kullanarak modelinizi daha kolay anlayabilirsiniz.
+
+> 🎓 Bir '[karışıklık matrisi](https://wikipedia.org/wiki/Confusion_matrix)' (veya 'hata matrisi') modelinizin gerçek vs. yanlış pozitiflerini ve negatiflerini ifade eden bir tablodur, böylece tahminlerin doğruluğunu ölçer.
+
+1. Bir karışıklık matrisi kullanmak için `confusion_matrix()` fonksiyonunu çağırın:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ Modelinizin karışıklık matrisine bir göz atın:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+Scikit-learn'de karışıklık matrisinin satırları (eksen 0) gerçek etiketleri, sütunları (eksen 1) ise tahmin edilen etiketleri gösterir.
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+Burada ne oluyor? Diyelim ki modelimiz balkabaklarını 'beyaz' ve 'beyaz değil' olmak üzere iki ikili kategori arasında sınıflandırmakla görevlendirildi.
+
+- Modeliniz bir balkabağını beyaz değil olarak tahmin ederse ve gerçekte kategori 'beyaz değil' ise buna doğru negatif denir, sol üst sayı ile gösterilir.
+- Modeliniz bir balkabağını beyaz olarak tahmin ederse ve gerçekte kategori 'beyaz değil' ise buna yanlış pozitif denir, sağ üst sayı ile gösterilir.
+- Modeliniz bir balkabağını beyaz değil olarak tahmin ederse ve gerçekte kategori 'beyaz' ise buna yanlış negatif denir, sol alt sayı ile gösterilir.
+- Modeliniz bir balkabağını beyaz olarak tahmin ederse ve gerçekte kategori 'beyaz' ise buna doğru pozitif denir, sağ alt sayı ile gösterilir.
+
+Tahmin edebileceğiniz gibi, daha fazla doğru pozitif ve doğru negatif ve daha az yanlış pozitif ve yanlış negatif olması tercih edilir, bu da modelin daha iyi performans gösterdiğini ima eder.
+
+Karışıklık matrisi hassasiyet ve hatırlama ile nasıl ilişkilidir? Yukarıda yazdırılan sınıflandırma raporu hassasiyet (0.85) ve hatırlama (0.67) gösterdi.
+
+Hassasiyet = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+Hatırlama = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
+
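+Bu değerleri scikit-learn ile de doğrulayabilirsiniz (yukarıdaki `y_test` ve `predictions` değişkenlerinin mevcut olduğunu varsayan bir taslak):
+
+```python
+from sklearn.metrics import precision_score, recall_score
+
+# Pozitif sınıf 1 ('beyaz') olduğunda, karışıklık matrisinden elle
+# hesaplanan ~0.846 ve ~0.667 değerleriyle aynı sonuçları vermelidir
+print('Hassasiyet:', precision_score(y_test, predictions))
+print('Hatirlama :', recall_score(y_test, predictions))
+```
+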
+✅ S: Karışıklık matrisine göre model nasıl performans gösterdi? C: Fena değil; çok sayıda doğru negatif var ama aynı zamanda birkaç yanlış negatif de var.
+
+Karışıklık matrisinin TP/TN ve FP/FN eşlemesi ile daha önce gördüğümüz terimleri yeniden gözden geçirelim:
+
+🎓 Hassasiyet: TP/(TP + FP) Geri getirilen örnekler arasında ilgili örneklerin kesri (örneğin, hangi etiketler iyi etiketlenmişti)
+
+🎓 Hatırlama: TP/(TP + FN) Tüm ilgili örnekler arasında geri getirilen, yani doğru tahmin edilen örneklerin kesri
+
+🎓 f1-skoru: (2 * hassasiyet * hatırlama)/(hassasiyet + hatırlama) Hassasiyet ve hatırlamanın ağırlıklı ortalaması; en iyisi 1, en kötüsü 0
+
+🎓 Destek: Geri getirilen her etiketin oluşum sayısı
+
+🎓 Doğruluk: (TP + TN)/(TP + TN + FP + FN) Bir örnek için doğru tahmin edilen etiketlerin yüzdesi.
+
+🎓 Makro Ortalama: Her etiket için ağırlıksız ortalama metriklerin hesaplanması, etiket dengesizliğini dikkate almadan.
+
+🎓 Ağırlıklı Ortalama: Her etiket için ortalama metriklerin hesaplanması, desteklerine (her etiket için gerçek örneklerin sayısı) göre ağırlıklandırarak etiket dengesizliğini dikkate alarak.
+
+✅ Modelinizin yanlış negatif sayısını azaltmasını istiyorsanız hangi metriği izlemeniz gerektiğini düşünebilir misiniz?
+
+## Bu modelin ROC eğrisini görselleştirin
+
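+ROC ("Receiver Operating Characteristic") eğrisi, farklı karar eşiklerinde doğru pozitif oranını yanlış pozitif oranına karşı çizer; eğri sol üst köşeye ne kadar yakınsa model o kadar iyidir. Yukarıdaki `model`, `X_test` ve `y_test` değişkenlerinin mevcut olduğunu varsayan olası bir taslak:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib.pyplot as plt
+
+# Pozitif sınıf ('1') için tahmin olasılıklarını al
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:, 1])
+
+plt.plot([0, 1], [0, 1], 'k--')  # rastgele tahmine karşılık gelen köşegen
+plt.plot(fpr, tpr)               # ROC eğrisi
+plt.xlabel('Yanlış Pozitif Oranı')
+plt.ylabel('Doğru Pozitif Oranı')
+plt.title('ROC Eğrisi')
+plt.show()
+
+# Eğrinin altındaki alan (AUC): 1'e yaklaştıkça model daha iyidir
+print('AUC:', roc_auc_score(y_test, y_scores[:, 1]))
+```
+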
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/4-Logistic/assignment.md b/translations/tr/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..71ec55a77
--- /dev/null
+++ b/translations/tr/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# Bazı Regresyonları Tekrar Denemek
+
+## Talimatlar
+
+Ders sırasında balkabağı verilerinin bir alt kümesini kullandınız. Şimdi, orijinal verilere geri dönün ve tamamını, temizlenmiş ve standartlaştırılmış haliyle, kullanarak bir Lojistik Regresyon modeli oluşturmayı deneyin.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirme Gerekiyor |
+| --------- | ---------------------------------------------------------------------- | ----------------------------------------------------------- | ----------------------------------------------------------- |
+| | İyi açıklanmış ve iyi performans gösteren bir modelle bir defter sunulmuştur | Minimum performans gösteren bir modelle bir defter sunulmuştur | Düşük performans gösteren bir modelle veya hiç model sunulmamıştır |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/4-Logistic/solution/Julia/README.md b/translations/tr/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..4deba144c
--- /dev/null
+++ b/translations/tr/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/2-Regression/README.md b/translations/tr/2-Regression/README.md
new file mode 100644
index 000000000..f88d8450f
--- /dev/null
+++ b/translations/tr/2-Regression/README.md
@@ -0,0 +1,43 @@
+# Makine öğrenmesi için regresyon modelleri
+## Bölgesel konu: Kuzey Amerika'da balkabağı fiyatları için regresyon modelleri 🎃
+
+Kuzey Amerika'da balkabakları, Cadılar Bayramı için korkunç yüzler oymak amacıyla sıkça kullanılır. Bu büyüleyici sebzeler hakkında daha fazla bilgi edinelim!
+
+
+> Fotoğraf Beth Teutschmann tarafından Unsplash'ta
+
+## Öğrenecekleriniz
+
+[](https://youtu.be/5QnJtDad4iQ "Regresyon Tanıtım videosu - İzlemek için Tıklayın!")
+> 🎥 Bu ders için kısa bir tanıtım videosu izlemek için yukarıdaki resme tıklayın
+
+Bu bölümdeki dersler, makine öğrenmesi bağlamında regresyon türlerini ele alır. Regresyon modelleri, değişkenler arasındaki _ilişkiyi_ belirlemeye yardımcı olabilir. Bu tür bir model, uzunluk, sıcaklık veya yaş gibi değerleri tahmin edebilir, böylece veri noktalarını analiz ederken değişkenler arasındaki ilişkileri ortaya çıkarabilir.
+
+Bu ders serisinde, doğrusal ve lojistik regresyon arasındaki farkları ve ne zaman birini diğerine tercih etmeniz gerektiğini keşfedeceksiniz.
+
+[](https://youtu.be/XA3OaoW86R8 "Başlangıç seviyesindekiler için ML - Makine Öğrenmesi için Regresyon Modellerine Giriş")
+
+> 🎥 Regresyon modellerini tanıtan kısa bir video izlemek için yukarıdaki resme tıklayın.
+
+Bu ders grubunda, makine öğrenmesi görevlerine başlamak için gerekli ayarları yapacaksınız, bunlar arasında veri bilimciler için ortak bir ortam olan notebookları yönetmek için Visual Studio Code'u yapılandırmak da bulunur. Makine öğrenmesi için bir kütüphane olan Scikit-learn'ü keşfedeceksiniz ve bu bölümde Regresyon modellerine odaklanarak ilk modellerinizi oluşturacaksınız.
+
+> Regresyon modelleriyle çalışmayı öğrenmenize yardımcı olabilecek kullanışlı düşük kod araçlar vardır. Bu görev için [Azure ML'i deneyin](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### Dersler
+
+1. [Ticaretin araçları](1-Tools/README.md)
+2. [Veri yönetimi](2-Data/README.md)
+3. [Doğrusal ve polinom regresyon](3-Linear/README.md)
+4. [Lojistik regresyon](4-Logistic/README.md)
+
+---
+### Katkıda Bulunanlar
+
+"Regresyon ile ML" [Jen Looper](https://twitter.com/jenlooper) tarafından ♥️ ile yazılmıştır
+
+♥️ Quiz katkıcıları arasında: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) ve [Ornella Altunyan](https://twitter.com/ornelladotcom) bulunur
+
+Balkabağı veri seti [bu Kaggle projesi](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) tarafından önerilmiştir ve veriler Amerika Birleşik Devletleri Tarım Bakanlığı tarafından dağıtılan [Specialty Crops Terminal Markets Standard Reports](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)'tan alınmıştır. Dağılımı normalize etmek için çeşide dayalı olarak renkle ilgili bazı noktalar ekledik. Bu veriler kamu malıdır.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/3-Web-App/1-Web-App/README.md b/translations/tr/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..c129e60a2
--- /dev/null
+++ b/translations/tr/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# Bir ML Modelini Kullanmak için Web Uygulaması Oluşturun
+
+Bu derste, _son yüzyıldaki UFO gözlemleri_ gibi dünyadışı bir veri seti üzerinde bir ML modeli eğiteceksiniz. Bu veriler NUFORC'un veritabanından alınmıştır.
+
+Öğreneceğiniz konular:
+
+- Eğitilmiş bir modeli nasıl 'pickle' yapacağınız
+- Bu modeli bir Flask uygulamasında nasıl kullanacağınız
+
+Verileri temizlemek ve modelimizi eğitmek için defterleri kullanmaya devam edeceğiz, ancak süreci bir adım öteye taşıyarak, modelinizi bir web uygulamasında kullanmayı keşfedebilirsiniz.
+
+Bunu yapmak için Flask kullanarak bir web uygulaması oluşturmanız gerekecek.
+
+## [Ders Öncesi Testi](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## Bir Uygulama Oluşturmak
+
+Makine öğrenimi modellerini tüketen web uygulamaları oluşturmanın birkaç yolu vardır. Web mimariniz, modelinizin nasıl eğitildiğini etkileyebilir. Bir işletmede çalıştığınızı ve veri bilimi grubunun bir model eğittiğini ve bu modeli bir uygulamada kullanmanızı istediğini hayal edin.
+
+### Dikkat Edilmesi Gerekenler
+
+Sormanız gereken birçok soru var:
+
+- **Bu bir web uygulaması mı yoksa mobil uygulama mı?** Bir mobil uygulama oluşturuyorsanız veya modeli bir IoT bağlamında kullanmanız gerekiyorsa, [TensorFlow Lite](https://www.tensorflow.org/lite/) kullanarak modeli bir Android veya iOS uygulamasında kullanabilirsiniz.
+- **Model nerede bulunacak?** Bulutta mı yoksa yerel olarak mı?
+- **Çevrimdışı destek.** Uygulamanın çevrimdışı çalışması gerekiyor mu?
+- **Modeli eğitmek için hangi teknoloji kullanıldı?** Seçilen teknoloji, kullanmanız gereken araçları etkileyebilir.
+ - **TensorFlow Kullanmak.** Örneğin, TensorFlow kullanarak bir model eğitiyorsanız, bu ekosistem, [TensorFlow.js](https://www.tensorflow.org/js/) kullanarak bir web uygulamasında kullanmak üzere bir TensorFlow modelini dönüştürme yeteneği sağlar.
+ - **PyTorch Kullanmak.** [PyTorch](https://pytorch.org/) gibi bir kütüphane kullanarak bir model oluşturuyorsanız, modeli JavaScript web uygulamalarında kullanmak üzere [Onnx Runtime](https://www.onnxruntime.ai/) kullanarak [ONNX](https://onnx.ai/) (Open Neural Network Exchange) formatında dışa aktarma seçeneğiniz vardır. Bu seçenek, gelecekteki bir derste Scikit-learn ile eğitilmiş bir model için incelenecektir.
+ - **Lobe.ai veya Azure Custom Vision Kullanmak.** [Lobe.ai](https://lobe.ai/) veya [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) gibi bir ML SaaS (Hizmet Olarak Yazılım) sistemi kullanarak bir model eğitiyorsanız, bu tür yazılımlar, modeli birçok platform için dışa aktarma yolları sağlar, bu da çevrimiçi uygulamanız tarafından bulutta sorgulanacak özel bir API oluşturmayı içerir.
+
+Ayrıca, modelin kendisini bir web tarayıcısında eğitebilecek bir Flask web uygulaması oluşturma fırsatınız da var. Bu, bir JavaScript bağlamında TensorFlow.js kullanılarak da yapılabilir.
+
+Bizim amacımız için, Python tabanlı defterlerle çalıştığımızdan, eğitilmiş bir modeli bu tür bir defterden Python ile oluşturulmuş bir web uygulaması tarafından okunabilir bir formata nasıl dışa aktaracağınızı inceleyelim.
+
+## Araç
+
+Bu görev için iki araca ihtiyacınız var: Flask ve Pickle, her ikisi de Python üzerinde çalışır.
+
+✅ [Flask](https://palletsprojects.com/p/flask/) nedir? Yaratıcıları tarafından bir 'mikro-çerçeve' olarak tanımlanan Flask, Python kullanarak web çerçevelerinin temel özelliklerini ve web sayfaları oluşturmak için bir şablon motoru sağlar. Flask ile inşa etmeyi pratik yapmak için [bu Öğrenme modülüne](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) göz atın.
+
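+Fikir vermesi için, tek rotalı asgari bir Flask uygulaması taslağı (bu dersin uygulamasının parçası değildir, yalnızca örnekleme amaçlıdır):
+
+```python
+from flask import Flask
+
+app = Flask(__name__)
+
+@app.route("/")
+def home():
+    # Kök URL'ye gelen isteklere düz metin döndüren tek bir rota
+    return "Merhaba Flask!"
+
+if __name__ == "__main__":
+    app.run(debug=True)
+```
+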
+✅ [Pickle](https://docs.python.org/3/library/pickle.html) nedir? Pickle 🥒, bir Python nesne yapısını serileştiren ve seri durumdan geri çıkaran bir Python modülüdür. Bir modeli 'pickle' yaptığınızda, yapısını webde kullanmak üzere serileştirir veya düzleştirirsiniz. Dikkatli olun: pickle doğası gereği güvenli değildir, bu yüzden bir dosyayı 'un-pickle' etmeniz istendiğinde dikkatli olun. Pickle edilmiş bir dosya `.pkl` uzantısına sahiptir.
+
+## Alıştırma - verilerinizi temizleyin
+
+Bu derste, [NUFORC](https://nuforc.org) (Ulusal UFO Raporlama Merkezi) tarafından toplanan 80.000 UFO gözleminden veri kullanacaksınız. Bu veriler, UFO gözlemlerine dair ilginç açıklamalar içerir, örneğin:
+
+- **Uzun örnek açıklama.** "Bir adam geceleyin çimenli bir alana parlayan bir ışık huzmesinden çıkar ve Texas Instruments otoparkına doğru koşar".
+- **Kısa örnek açıklama.** "ışıklar bizi kovaladı".
+
+[ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) elektronik tablosu, gözlemin nerede gerçekleştiğine dair `city`, `state` ve `country` sütunlarını, ayrıca nesnenin `shape` (şekil), `latitude` (enlem) ve `longitude` (boylam) bilgilerini içeren sütunları barındırır.
+
+Bu derste yer alan boş [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) dosyasında:
+
+1. Önceki derslerde yaptığınız gibi `pandas`, `matplotlib` ve `numpy` içe aktarın ve ufos elektronik tablosunu içe aktarın. Örnek bir veri setine göz atabilirsiniz:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. UFO verilerini yeni başlıklarla küçük bir dataframe'e dönüştürün. `Country` alanındaki benzersiz değerleri kontrol edin.
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. Şimdi, ele almamız gereken veri miktarını azaltmak için herhangi bir boş değeri atabilir ve sadece 1-60 saniye arasındaki gözlemleri içe aktarabilirsiniz:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. Metin değerlerini ülkelere dönüştürmek için Scikit-learn'ün `LabelEncoder` kütüphanesini içe aktarın:
+
+ ✅ LabelEncoder verileri alfabetik olarak kodlar
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+ Verileriniz şu şekilde görünmelidir:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
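+
+    Hangi sayının hangi ülkeye karşılık geldiğini görmek isterseniz, yukarıdaki tek satırlık kodlamaya alternatif olarak kodlayıcı nesnesini bir değişkende tutabilirsiniz (varsayımsal bir taslak; `Country` sütununun henüz kodlanmamış olduğunu varsayar):
+
+    ```python
+    from sklearn.preprocessing import LabelEncoder
+
+    le = LabelEncoder()
+    ufos['Country'] = le.fit_transform(ufos['Country'])
+    # Eşleme alfabetiktir; bu veri için örn. 3 -> 'gb', 4 -> 'us' beklenir
+    print(list(le.classes_))
+    ```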
+
+## Alıştırma - modelinizi oluşturun
+
+Şimdi verileri eğitim ve test gruplarına ayırarak bir model eğitmeye hazır olabilirsiniz.
+
+1. Eğitmek istediğiniz üç özelliği X vektörünüz olarak seçin; y vektörü ise `Country` olacaktır. `Seconds`, `Latitude` ve `Longitude` değerlerini girdiğinizde karşılığında bir ülke kimliği döndürülmesini istiyorsunuz.
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. Modelinizi lojistik regresyon kullanarak eğitin:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+Doğruluk fena değil **(yaklaşık %95)**; `Country` ile `Latitude/Longitude` arasında korelasyon olduğu düşünülürse bu şaşırtıcı değil.
+
+Oluşturduğunuz model çok devrimci sayılmaz, çünkü bir `Country` değerini `Latitude` ve `Longitude` değerlerinden zaten çıkarabilmeniz gerekir; ancak temizlediğiniz ham verilerden bir model eğitmeye çalışmak, bu modeli dışa aktarmak ve ardından bir web uygulamasında kullanmak iyi bir alıştırmadır.
+
+## Alıştırma - modelinizi 'pickle' yapın
+
+Şimdi, modelinizi _pickle_ yapma zamanı! Bunu birkaç satır kodla yapabilirsiniz. Bir kez _pickle_ edildikten sonra, pickle edilmiş modelinizi yükleyin ve saniye, enlem ve boylam değerlerini içeren örnek bir veri dizisine karşı test edin:
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+Model **'3'** değerini döndürüyor, bu da Birleşik Krallık için ülke kodu. Harika! 👽
+
+## Alıştırma - bir Flask uygulaması oluşturun
+
+Şimdi modelinizi çağıracak ve benzer sonuçlar döndürecek, ancak daha görsel olarak hoş bir şekilde, bir Flask uygulaması oluşturabilirsiniz.
+
+1. _notebook.ipynb_ dosyanızın ve _ufo-model.pkl_ dosyanızın bulunduğu yerde **web-app** adlı bir klasör oluşturun.
+
+1. Bu klasörün içinde üç klasör daha oluşturun: **static** (içinde bir **css** klasörü bulunacak) ve **templates**. Artık aşağıdaki dosya ve dizinlere sahip olmalısınız:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+ ✅ Bitmiş uygulamanın bir görünümünü görmek için çözüm klasörüne başvurun
+
+1. _web-app_ klasöründe oluşturulacak ilk dosya **requirements.txt** dosyasıdır. Bir JavaScript uygulamasındaki _package.json_ gibi, bu dosya uygulama tarafından gerekli bağımlılıkları listeler. **requirements.txt** dosyasına şu satırları ekleyin:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. Şimdi terminalde _web-app_ klasörüne gidin:
+
+ ```bash
+ cd web-app
+ ```
+
+1. Terminalinizde `pip install` yazarak _requirements.txt_ dosyasında listelenen kütüphaneleri yükleyin:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. Şimdi, uygulamayı bitirmek için üç dosya daha oluşturmaya hazırsınız:
+
+ 1. Kök dizinde **app.py** oluşturun.
+ 2. _templates_ dizininde **index.html** oluşturun.
+ 3. _static/css_ dizininde **styles.css** oluşturun.
+
+1. _styles.css_ dosyasını birkaç stil ile oluşturun:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. Ardından, _index.html_ dosyasını oluşturun:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+      <div class="grid">
+        <div class="box">
+
+          <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+          <form action="{{ url_for('predict') }}" method="post">
+            <input type="number" name="seconds" placeholder="Seconds" required="required" />
+            <input type="text" name="latitude" placeholder="Latitude" required="required" />
+            <input type="text" name="longitude" placeholder="Longitude" required="required" />
+            <button type="submit" class="btn">Predict country where the UFO is seen</button>
+          </form>
+
+          <p>{{ prediction_text }}</p>
+
+        </div>
+      </div>
+    </body>
+    </html>
+    ```
+
+    Bu dosyadaki şablonlamaya bir göz atın. Uygulama tarafından sağlanacak değişkenlerin etrafındaki 'bıyık' sözdizimine dikkat edin, örneğin tahmin metni: `{{}}`. Ayrıca `/predict` rotasına bir tahmin gönderen bir form da vardır.
+
+    Son olarak, modelin tüketilmesini ve tahminlerin görüntülenmesini sağlayan python dosyasını oluşturmaya hazırsınız:
+
+1. `app.py` dosyasına şunları ekleyin:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+    > 💡 İpucu: Flask ile web uygulamasını çalıştırırken [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode) eklerseniz, uygulamanızda yaptığınız değişiklikler sunucuyu yeniden başlatmaya gerek kalmadan anında yansıtılır. Dikkat! Bu modu bir üretim uygulamasında etkinleştirmeyin.
+
+`python app.py` veya `python3 app.py` komutunu çalıştırdığınızda web sunucunuz yerel olarak başlar ve UFO'ların nerede görüldüğüne dair yakıcı sorunuza yanıt almak için kısa bir form doldurabilirsiniz!
+
+Bunu yapmadan önce, `app.py` dosyasının bölümlerine bir göz atın:
+
+1. Önce bağımlılıklar yüklenir ve uygulama başlar.
+1. Ardından model içe aktarılır.
+1. Ardından ana rotada index.html işlenir.
+
+`/predict` rotasında, form gönderildiğinde birkaç şey olur:
+
+1. Form değişkenleri toplanır ve bir numpy dizisine dönüştürülür. Ardından modele gönderilir ve bir tahmin döndürülür.
+2. Görüntülenmesini istediğimiz ülkeler, tahmin edilen ülke kodlarından okunabilir metne dönüştürülür ve bu değer, şablonda işlenmek üzere index.html'e geri gönderilir.
+
+Bir modeli bu şekilde, Flask ve pickle edilmiş bir model ile kullanmak nispeten basittir. En zor kısım, bir tahmin almak için modele gönderilmesi gereken verinin hangi biçimde olduğunu anlamaktır. Bu, tamamen modelin nasıl eğitildiğine bağlıdır. Bu modelde bir tahmin almak için üç veri noktası girilmesi gerekir.
+
+Profesyonel bir ortamda, modeli eğiten kişiler ile onu bir web veya mobil uygulamada tüketen kişiler arasında iyi bir iletişimin ne kadar gerekli olduğunu görebilirsiniz. Bizim durumumuzda bu kişi tek: siz!
+
+---
+
+## 🚀 Challenge
+
+Bir not defterinde çalışıp modeli Flask uygulamasına aktarmak yerine, modeli doğrudan Flask uygulamasının içinde de eğitebilirsiniz! Not defterinizdeki Python kodunu, belki verileriniz temizlendikten sonra, `train` adlı bir rota üzerinden modeli uygulama içinde eğitecek şekilde dönüştürmeyi deneyin. Bu yöntemi izlemenin artıları ve eksileri nelerdir?
+
+## [Ders Sonrası Testi](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+ML modellerini tüketen bir web uygulaması oluşturmanın birçok yolu vardır. Makine öğrenimini kullanmak için JavaScript veya Python kullanarak bir web uygulaması oluşturmanın yollarını listeleyin. Mimariyi göz önünde bulundurun: model uygulamada mı kalmalı yoksa bulutta mı yaşamalı? Eğer ikinci seçenekse, ona nasıl erişirsiniz? Uygulamalı bir ML web çözümü için bir mimari model çizin.
+
+## Ödev
+
+[Farklı bir model deneyin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/3-Web-App/1-Web-App/assignment.md b/translations/tr/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..70c683ff4
--- /dev/null
+++ b/translations/tr/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# Farklı bir model deneyin
+
+## Talimatlar
+
+Eğitilmiş bir Regresyon modeli kullanarak bir web uygulaması oluşturduğunuza göre, bu web uygulamasını yeniden yapmak için önceki bir Regresyon dersinden bir modeli kullanın. Stili koruyabilir veya balkabağı verilerini yansıtacak şekilde farklı bir tasarım yapabilirsiniz. Modelinizin eğitim yöntemini yansıtmak için girdileri değiştirmeye dikkat edin.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirme Gerekiyor |
+| ------------------------- | -------------------------------------------------------- | -------------------------------------------------------- | ---------------------------------- |
+| Web uygulaması beklenildiği gibi çalışıyor ve buluta dağıtılmış | Web uygulaması hatalar içeriyor veya beklenmedik sonuçlar veriyor | Web uygulaması düzgün çalışmıyor |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/3-Web-App/README.md b/translations/tr/3-Web-App/README.md
new file mode 100644
index 000000000..9f23427a4
--- /dev/null
+++ b/translations/tr/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# ML modelinizi kullanmak için bir web uygulaması oluşturun
+
+Bu müfredat bölümünde, uygulamalı bir ML konusuyla tanışacaksınız: Scikit-learn modelinizi bir web uygulamasında tahminler yapmak için kullanılabilecek bir dosya olarak nasıl kaydedeceğinizi öğreneceksiniz. Model kaydedildikten sonra, Flask ile oluşturulmuş bir web uygulamasında nasıl kullanılacağını öğreneceksiniz. İlk olarak, UFO gözlemleri hakkında bazı veriler kullanarak bir model oluşturacaksınız! Daha sonra, belirli bir süre, enlem ve boylam değeri girerek hangi ülkenin bir UFO gördüğünü bildirdiğini tahmin etmenizi sağlayacak bir web uygulaması oluşturacaksınız.
+
+
+
+Fotoğraf: Michael Herren tarafından Unsplash
+
+## Dersler
+
+1. [Bir Web Uygulaması Oluşturun](1-Web-App/README.md)
+
+## Katkıda Bulunanlar
+
+"Bir Web Uygulaması Oluşturun" ♥️ ile [Jen Looper](https://twitter.com/jenlooper) tarafından yazılmıştır.
+
+♥️ Quizler Rohan Raj tarafından yazılmıştır.
+
+Veri seti [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings) kaynağından alınmıştır.
+
+Web uygulaması mimarisi kısmen [bu makale](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) ve Abhinav Sagar'ın [bu reposu](https://github.com/abhinavsagar/machine-learning-deployment) tarafından önerilmiştir.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlaşılmalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/1-Introduction/README.md b/translations/tr/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..5b27e0cec
--- /dev/null
+++ b/translations/tr/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# Sınıflandırmaya Giriş
+
+Bu dört derste, klasik makine öğreniminin temel odak noktalarından biri olan _sınıflandırma_ konusunu keşfedeceksiniz. Asya ve Hindistan'ın tüm muhteşem mutfakları hakkında bir veri kümesi kullanarak çeşitli sınıflandırma algoritmalarını adım adım inceleyeceğiz. Umarım açsınızdır!
+
+
+
+> Bu derslerde pan-Asya mutfaklarını kutlayın! Görsel: [Jen Looper](https://twitter.com/jenlooper)
+
+Sınıflandırma, regresyon teknikleriyle birçok ortak noktası olan bir [denetimli öğrenme](https://wikipedia.org/wiki/Supervised_learning) türüdür. Makine öğrenimi, veri kümelerini kullanarak değerlere veya isimlere tahminlerde bulunmakla ilgiliyse, sınıflandırma genellikle iki gruba ayrılır: _ikili sınıflandırma_ ve _çok sınıflı sınıflandırma_.
+
+[](https://youtu.be/eg8DJYwdMyg "Sınıflandırmaya giriş")
+
+> 🎥 Yukarıdaki görsele tıklayarak bir video izleyin: MIT'den John Guttag sınıflandırmayı tanıtıyor
+
+Unutmayın:
+
+- **Doğrusal regresyon** size değişkenler arasındaki ilişkileri tahmin etmenize ve yeni bir veri noktasının bu çizgiyle ilişkili olarak nereye düşeceğini doğru bir şekilde tahmin etmenize yardımcı oldu. Örneğin, _Eylül ve Aralık aylarında bir kabağın fiyatının ne olacağını_ tahmin edebilirsiniz.
+- **Lojistik regresyon** size "ikili kategorileri" keşfetmenize yardımcı oldu: bu fiyat noktasında, _bu kabak turuncu mu yoksa turuncu değil mi_?
+
+Sınıflandırma, bir veri noktasının etiketini veya sınıfını belirlemenin çeşitli yollarını belirlemek için çeşitli algoritmalar kullanır. Bu mutfak verileriyle çalışarak, bir grup malzemeyi gözlemleyerek hangi mutfağa ait olduğunu belirleyip belirleyemeyeceğimizi görelim.
+
+## [Ders öncesi sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [Bu ders R dilinde de mevcut!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### Giriş
+
+Sınıflandırma, makine öğrenimi araştırmacısının ve veri bilimcisinin temel faaliyetlerinden biridir. Temel bir ikili değerin sınıflandırılmasından ("bu e-posta spam mi değil mi?") karmaşık görüntü sınıflandırma ve segmentasyonuna kadar, verileri sınıflara ayırmak ve sorular sormak her zaman faydalıdır.
+
+Bu süreci daha bilimsel bir şekilde ifade etmek gerekirse, sınıflandırma yönteminiz, giriş değişkenleri ile çıkış değişkenleri arasındaki ilişkiyi haritalamanıza olanak tanıyan bir tahmin modeli oluşturur.
+
+
+
+> Sınıflandırma algoritmalarının ele alması gereken ikili ve çok sınıflı sorunlar. Bilgilendirme görseli: [Jen Looper](https://twitter.com/jenlooper)
+
+Verilerimizi temizleme, görselleştirme ve ML görevlerimize hazırlama sürecine başlamadan önce, makine öğreniminin verileri sınıflandırmak için nasıl kullanılabileceğini biraz öğrenelim.
+
+Klasik makine öğrenimi ile yapılan ve [istatistikten](https://wikipedia.org/wiki/Statistical_classification) türetilen sınıflandırma, X hastalığının gelişme olasılığını belirlemek için `smoker`, `weight` ve `age` gibi özellikler kullanır. Daha önce gerçekleştirdiğiniz regresyon alıştırmalarına benzer bir denetimli öğrenme tekniği olarak, verileriniz etiketlidir ve ML algoritmaları bu etiketleri kullanarak bir veri kümesinin sınıflarını (veya 'özelliklerini') sınıflandırır, tahmin eder ve onları bir gruba veya sonuca atar.
+
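+Bu fikri somutlaştırmak için, tamamen uydurma etiketli verilerle bir sınıflandırıcının nasıl eğitilip tahmin yaptığına dair küçük bir taslak (özellik adları ve değerler yalnızca örnekleme amaçlıdır):
+
+```python
+import pandas as pd
+from sklearn.tree import DecisionTreeClassifier
+
+# Uydurma etiketli veri: smoker, weight, age -> X hastalığı (1) / değil (0)
+data = pd.DataFrame({
+    'smoker':    [1, 0, 1, 0, 1, 0],
+    'weight':    [90, 70, 95, 60, 85, 72],
+    'age':       [55, 30, 62, 25, 48, 40],
+    'disease_x': [1, 0, 1, 0, 1, 0],
+})
+
+X = data[['smoker', 'weight', 'age']]
+y = data['disease_x']
+
+clf = DecisionTreeClassifier(random_state=0).fit(X, y)
+print(clf.predict(pd.DataFrame({'smoker': [1], 'weight': [88], 'age': [50]})))
+```
+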
+✅ Bir mutfak hakkında bir veri kümesi hayal etmek için bir an durun. Çok sınıflı bir model neyi cevaplayabilir? İkili bir model neyi cevaplayabilir? Belirli bir mutfağın çemen otu kullanma olasılığını belirlemek isteseydiniz ne olurdu? Bir torba yıldız anason, enginar, karnabahar ve yaban turpu dolu bir hediye alırsanız, tipik bir Hint yemeği yapıp yapamayacağınızı görmek isteseydiniz ne olurdu?
+
+[](https://youtu.be/GuTeDbaNoEU "Çılgın gizem sepetleri")
+
+> 🎥 Yukarıdaki görsele tıklayarak bir video izleyin. 'Chopped' adlı programın tüm konusu, şeflerin rastgele seçilen malzemelerden bir yemek yapmaları gereken 'gizem sepeti'dir. Kesinlikle bir ML modeli yardımcı olurdu!
+
+## Merhaba 'sınıflandırıcı'
+
+Bu mutfak veri kümesine sormak istediğimiz soru aslında bir **çok sınıflı soru**, çünkü üzerinde çalışabileceğimiz birkaç potansiyel ulusal mutfak var. Bir grup malzeme verildiğinde, veri bu birçok sınıftan hangisine uyar?
+
+Scikit-learn, çözmek istediğiniz sorunun türüne bağlı olarak verileri sınıflandırmak için kullanabileceğiniz birkaç farklı algoritma sunar. Önümüzdeki iki derste, bu algoritmalardan birkaçını öğreneceksiniz.
+
+## Egzersiz - verilerinizi temizleyin ve dengeleyin
+
+Bu projeye başlamadan önce yapılacak ilk görev, verilerinizi temizlemek ve daha iyi sonuçlar almak için **dengelemek**. Bu klasörün kökünde bulunan boş _notebook.ipynb_ dosyasıyla başlayın.
+
+İlk olarak kurulacak şey [imblearn](https://imbalanced-learn.org/stable/). Bu, verileri daha iyi dengelemenizi sağlayacak bir Scikit-learn paketidir (bu görev hakkında birazdan daha fazla bilgi edineceksiniz).
+
+1. `imblearn` paketini kurmak için `pip install` komutunu şu şekilde çalıştırın:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. Verilerinizi içe aktarmak ve görselleştirmek için ihtiyaç duyduğunuz paketleri içe aktarın, ayrıca `imblearn`'den `SMOTE`'u içe aktarın.
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ Şimdi verileri içe aktarmaya hazırsınız.
+
+1. Bir sonraki görev verileri içe aktarmak olacak:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+    `read_csv()` fonksiyonu, _cuisines.csv_ dosyasının içeriğini okuyup `df` değişkenine yerleştirecektir.
+
+1. Verilerin şeklini kontrol edin:
+
+ ```python
+ df.head()
+ ```
+
+ İlk beş satır şöyle görünüyor:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. Bu veriler hakkında bilgi almak için `info()` çağırın:
+
+ ```python
+ df.info()
+ ```
+
+ Çıktınız şu şekilde görünüyor:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## Egzersiz - mutfaklar hakkında bilgi edinme
+
+Şimdi işler daha ilginç hale gelmeye başlıyor. Mutfak başına verilerin dağılımını keşfedelim.
+
+1. `barh()` çağırarak verileri çubuk grafik olarak çizin:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ Sınırlı sayıda mutfak var, ancak veri dağılımı düzensiz. Bunu düzeltebilirsiniz! Bunu yapmadan önce, biraz daha keşfedin.
+
+1. Mutfak başına ne kadar veri olduğunu öğrenin ve yazdırın:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ çıktı şöyle görünüyor:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## Malzemeleri keşfetme
+
+Şimdi verileri daha derinlemesine inceleyebilir ve her mutfak için tipik malzemelerin neler olduğunu öğrenebilirsiniz. Mutfaklar arasında karışıklığa neden olan tekrarlayan verileri temizlemeniz gerekir; şimdi bu sorunu ele alalım.
+
+1. Bir malzeme veri çerçevesi oluşturmak için Python'da `create_ingredient_df()` fonksiyonunu yazın. Bu fonksiyon, işe yaramayan bir sütunu kaldırarak ve malzemeleri sayılarına göre sıralayarak başlayacak:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ Şimdi bu fonksiyonu, her mutfak için en popüler on malzeme hakkında bir fikir edinmek için kullanabilirsiniz.
+
+1. `create_ingredient_df()` fonksiyonunu çağırın ve `barh()` fonksiyonunu çağırarak grafiğini çizin:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Japon verileri için aynı işlemi yapın:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Şimdi Çin malzemeleri için:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Hint malzemelerini çizin:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Son olarak, Kore malzemelerini çizin:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. Şimdi, `drop()` çağırarak farklı mutfaklar arasında karışıklık yaratan en yaygın malzemeleri çıkarın:
+
+ Herkes pirinci, sarımsağı ve zencefili sever!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## Veri setini dengeleyin
+
+Verileri temizledikten sonra, [SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - "Sentetik Azınlık Aşırı Örnekleme Tekniği" - kullanarak dengeleyin.
+
+1. `fit_resample()` çağırın, bu strateji interpolasyon yoluyla yeni örnekler oluşturur.
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ Verilerinizi dengeleyerek, sınıflandırırken daha iyi sonuçlar alırsınız. İkili bir sınıflandırmayı düşünün. Verilerinizin çoğu bir sınıfsa, bir ML modeli bu sınıfı daha sık tahmin edecektir, çünkü bu sınıf için daha fazla veri vardır. Verileri dengelemek, herhangi bir dengesiz veriyi alır ve bu dengesizliği ortadan kaldırmaya yardımcı olur.
+
+1. Şimdi malzeme başına etiket sayısını kontrol edebilirsiniz:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ Çıktınız şöyle görünüyor:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ Veriler güzel ve temiz, dengeli ve çok lezzetli!
+
+1. Son adım, dengelenmiş verilerinizi, etiketler ve özellikler dahil olmak üzere, bir dosyaya aktarılabilecek yeni bir veri çerçevesine kaydetmektir:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. `transformed_df.head()` ve `transformed_df.info()` kullanarak verilere son bir kez bakabilirsiniz. Bu verilerin bir kopyasını gelecekteki derslerde kullanmak üzere kaydedin:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ Bu yeni CSV şimdi kök veri klasöründe bulunabilir.
+
+---
+
+## 🚀Meydan Okuma
+
+Bu müfredat birkaç ilginç veri kümesi içerir. `data` klasörlerini inceleyin: ikili veya çok sınıflı sınıflandırma için uygun olabilecek veri kümeleri var mı? Bu veri kümelerine hangi soruları sorardınız?
+
+## [Ders sonrası sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+SMOTE'un API'sini keşfedin. Hangi kullanım durumları için en iyi şekilde kullanılır? Hangi sorunları çözer?
+
+## Ödev
+
+[Sınıflandırma yöntemlerini keşfedin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/1-Introduction/assignment.md b/translations/tr/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..baff3f855
--- /dev/null
+++ b/translations/tr/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Sınıflandırma yöntemlerini keşfet
+
+## Talimatlar
+
+[Scikit-learn belgelerinde](https://scikit-learn.org/stable/supervised_learning.html) verileri sınıflandırmanın birçok yolunu bulacaksınız. Bu belgelerde küçük bir keşif yapın: amacınız sınıflandırma yöntemlerini aramak ve bu müfredattaki bir veri seti, ona sorabileceğiniz bir soru ve bir sınıflandırma tekniği ile eşleştirmektir. Bir elektronik tablo veya .doc dosyasında bir tablo oluşturun ve veri setinin sınıflandırma algoritması ile nasıl çalışacağını açıklayın.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirmeye İhtiyacı Var |
+| --------- | ----------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | 5 algoritmanın ve bir sınıflandırma tekniğinin genel bir bakışını içeren bir belge sunulmuştur. Genel bakış iyi açıklanmış ve detaylıdır. | 3 algoritmanın ve bir sınıflandırma tekniğinin genel bir bakışını içeren bir belge sunulmuştur. Genel bakış iyi açıklanmış ve detaylıdır. | Üçten az algoritmanın ve bir sınıflandırma tekniğinin genel bir bakışını içeren bir belge sunulmuştur ve genel bakış ne iyi açıklanmış ne de detaylıdır. |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/1-Introduction/solution/Julia/README.md b/translations/tr/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..3bee799cf
--- /dev/null
+++ b/translations/tr/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/2-Classifiers-1/README.md b/translations/tr/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..ed4d7bd39
--- /dev/null
+++ b/translations/tr/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,244 @@
+# Mutfak Sınıflandırıcıları 1
+
+Bu derste, son derste kaydettiğiniz, mutfaklarla ilgili dengeli ve temiz verilerle dolu veri setini kullanacaksınız.
+
+Bu veri setini çeşitli sınıflandırıcılarla kullanarak _belirli bir ulusal mutfağı bir grup malzemeye dayanarak tahmin edeceksiniz_. Bunu yaparken, algoritmaların sınıflandırma görevleri için nasıl kullanılabileceği hakkında daha fazla bilgi edineceksiniz.
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+
+## Hazırlık
+
+[Ders 1](../1-Introduction/README.md)'i tamamladığınızı varsayarak, bu dört ders için kök `/data` klasöründe _cleaned_cuisines.csv_ dosyasının mevcut olduğundan emin olun.
+
+## Egzersiz - bir ulusal mutfağı tahmin edin
+
+1. Bu dersin _notebook.ipynb_ dosyasında çalışarak, o dosyayı ve Pandas kütüphanesini içe aktarın:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ Veriler şu şekilde görünecektir:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. Şimdi, birkaç kütüphane daha içe aktarın:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. Eğitim için X ve y koordinatlarını iki dataframe'e bölün. `cuisine` etiketler dataframe'i olabilir:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ Şu şekilde görünecektir:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. `drop()` çağırarak `Unnamed: 0` sütununu ve `cuisine` sütununu düşürün. Geri kalan verileri eğitilebilir özellikler olarak kaydedin:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ Özellikleriniz şu şekilde görünecektir:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+Artık modelinizi eğitmeye hazırsınız!
+
+## Sınıflandırıcınızı Seçmek
+
+Verileriniz temiz ve eğitime hazır olduğuna göre, işi yapmak için hangi algoritmayı kullanacağınıza karar vermelisiniz.
+
+Scikit-learn, sınıflandırmayı Denetimli Öğrenme altında gruplandırır ve bu kategoride birçok sınıflandırma yöntemi bulacaksınız. [Çeşitlilik](https://scikit-learn.org/stable/supervised_learning.html) ilk bakışta oldukça kafa karıştırıcıdır. Aşağıdaki yöntemlerin tümü sınıflandırma tekniklerini içerir:
+
+- Doğrusal Modeller
+- Destek Vektör Makineleri
+- Stokastik Gradyan İnişi
+- En Yakın Komşular
+- Gauss Süreçleri
+- Karar Ağaçları
+- Topluluk (Ensemble) yöntemleri (Oylama Sınıflandırıcısı)
+- Çoklu sınıf ve çoklu çıktı algoritmaları (çoklu sınıf ve çoklu etiket sınıflandırması, çoklu sınıf-çoklu çıktı sınıflandırması)
+
+> Verileri sınıflandırmak için [sinir ağlarını da kullanabilirsiniz](https://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification), ancak bu dersin kapsamı dışındadır.
+
+### Hangi sınıflandırıcıyı seçmeli?
+
+Peki, hangi sınıflandırıcıyı seçmelisiniz? Çoğu zaman, birkaçını çalıştırıp iyi bir sonuç aramak, test etmenin bir yoludur. Scikit-learn, oluşturulmuş bir veri setinde KNeighbors'ı, iki farklı SVC'yi, GaussianProcessClassifier, DecisionTreeClassifier, RandomForestClassifier, MLPClassifier, AdaBoostClassifier, GaussianNB ve QuadraticDiscriminantAnalysis'i karşılaştıran bir [yan yana karşılaştırma](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html) sunar ve sonuçları görselleştirir:
+
+
+> Grafikler Scikit-learn'ün belgelerinde oluşturulmuştur
+
+> AutoML bu sorunu bulutta bu karşılaştırmaları çalıştırarak, verileriniz için en iyi algoritmayı seçmenize olanak tanıyarak düzgün bir şekilde çözer. Bunu [burada](https://docs.microsoft.com/learn/modules/automate-model-selection-with-azure-automl/?WT.mc_id=academic-77952-leestott) deneyin
+
+### Daha İyi Bir Yaklaşım
+
+Ancak, rastgele tahmin etmekten daha iyi bir yol, bu indirilebilir [ML Hile Sayfası](https://docs.microsoft.com/azure/machine-learning/algorithm-cheat-sheet?WT.mc_id=academic-77952-leestott) üzerindeki fikirleri takip etmektir. Burada, çoklu sınıf problemimiz için bazı seçeneklerimiz olduğunu keşfediyoruz:
+
+
+> Microsoft'un Algoritma Hile Sayfasının, çoklu sınıf sınıflandırma seçeneklerini detaylandıran bir bölümü
+
+✅ Bu hile sayfasını indirin, yazdırın ve duvarınıza asın!
+
+### Mantık Yürütme
+
+Sahip olduğumuz kısıtlamalar göz önünde bulundurularak farklı yaklaşımlar üzerinde mantık yürütebilir miyiz görelim:
+
+- **Sinir ağları çok ağır**. Temiz ama minimal veri setimiz ve eğitimleri yerel olarak defterler üzerinden çalıştırdığımız gerçeği göz önüne alındığında, sinir ağları bu görev için çok ağırdır.
+- **İki sınıflı sınıflandırıcı yok**. İki sınıflı sınıflandırıcı kullanmıyoruz; bu da bire karşı hepsi (one-vs-all) yaklaşımını dışarıda bırakır.
+- **Karar ağacı veya lojistik regresyon işe yarayabilir**. Bir karar ağacı veya çoklu sınıf verileri için lojistik regresyon işe yarayabilir.
+- **Çoklu Sınıf Güçlendirilmiş Karar Ağaçları farklı bir sorunu çözer**. Çoklu sınıf güçlendirilmiş karar ağacı, sıralamalar oluşturmak için tasarlanmış görevler gibi parametrik olmayan görevler için en uygundur, bu yüzden bizim için kullanışlı değildir.
+
+### Scikit-learn Kullanımı
+
+Verilerimizi analiz etmek için Scikit-learn kullanacağız. Ancak, Scikit-learn'de lojistik regresyon kullanmanın birçok yolu vardır. [Geçilecek parametrelere](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression) bir göz atın.
+
+Scikit-learn'den bir lojistik regresyon gerçekleştirmesini istediğimizde belirtmemiz gereken temelde iki önemli parametre vardır: `multi_class` ve `solver`. `multi_class` değeri belirli bir davranış uygular; çözücünün (solver) değeri ise hangi algoritmanın kullanılacağını belirler. Her çözücü her `multi_class` değeriyle eşleştirilemez.
+
+Belgelere göre, çoklu sınıf durumunda eğitim algoritması:
+
+- `multi_class` seçeneği `ovr` olarak ayarlanmışsa **bire karşı diğerleri (OvR) şemasını kullanır**
+- `multi_class` seçeneği `multinomial` olarak ayarlanmışsa **çapraz entropi kaybını kullanır**. (Şu anda `multinomial` seçeneği yalnızca 'lbfgs', 'sag', 'saga' ve 'newton-cg' çözücüleri tarafından desteklenmektedir.)
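+
+Bu eşleştirme kurallarını somut görmek için, her iki şemayı da küçük bir yapay veri kümesi üzerinde deneyen taslak bir örnek (tamamen örnek amaçlıdır; dersin veri setini kullanmaz):
+
+```python
+from sklearn.datasets import make_classification
+from sklearn.linear_model import LogisticRegression
+
+# Üç sınıflı küçük bir yapay veri kümesi
+X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
+
+# OvR şeması: her sınıf için ayrı bir ikili sınıflandırıcı eğitilir
+ovr = LogisticRegression(multi_class='ovr', solver='liblinear').fit(X, y)
+
+# Multinomial şema: yalnızca 'lbfgs', 'sag', 'saga' ve 'newton-cg' ile eşleşebilir
+multinomial = LogisticRegression(multi_class='multinomial', solver='lbfgs').fit(X, y)
+
+print(ovr.score(X, y), multinomial.score(X, y))
+```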
+
+> 🎓 Buradaki 'şema' ya 'ovr' (bire karşı diğerleri, one-vs-rest) ya da 'multinomial' olabilir. Lojistik regresyon aslında ikili sınıflandırmayı desteklemek için tasarlandığından, bu şemalar onun çoklu sınıf sınıflandırma görevlerini daha iyi ele almasını sağlar. [kaynak](https://machinelearningmastery.com/one-vs-rest-and-one-vs-one-for-multi-class-classification/)
+
+> 🎓 'Çözücü' (solver), "optimizasyon probleminde kullanılacak algoritma" olarak tanımlanır. [kaynak](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression).
+
+Scikit-learn, çözücülerin farklı türde veri yapılarının ortaya çıkardığı farklı zorlukları nasıl ele aldığını açıklamak için şu tabloyu sunar:
+
+
+
+## Egzersiz - verileri bölün
+
+İlk eğitim denememizde lojistik regresyona odaklanabiliriz; çünkü onu kısa süre önce önceki bir derste öğrendiniz.
+Verilerinizi `train_test_split()` çağırarak eğitim ve test gruplarına bölün:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+```
+
+## Egzersiz - lojistik regresyon uygulayın
+
+Çoklu sınıf durumunu kullandığınız için hangi _şemayı_ kullanacağınıza ve hangi _çözücüyü_ ayarlayacağınıza karar vermeniz gerekir. Lojistik regresyonu çoklu sınıf ayarı ve **liblinear** çözücüsü ile eğitin.
+
+1. `multi_class` değerini `ovr` ve çözücüyü `liblinear` olarak ayarlayarak bir lojistik regresyon oluşturun:
+
+ ```python
+ lr = LogisticRegression(multi_class='ovr',solver='liblinear')
+ model = lr.fit(X_train, np.ravel(y_train))
+
+ accuracy = model.score(X_test, y_test)
+ print ("Accuracy is {}".format(accuracy))
+ ```
+
+    ✅ Genellikle varsayılan olarak ayarlanan `lbfgs` gibi farklı bir çözücü deneyin
+
+    > Not: Gerektiğinde verilerinizi düzleştirmek için Pandas'ın [`ravel`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ravel.html) işlevini kullanın.
+
+    Doğruluk **%80**'in üzerinde; gayet iyi!
+
+1. Bu modeli, bir veri satırını (#50) test ederek iş başında görebilirsiniz:
+
+ ```python
+ print(f'ingredients: {X_test.iloc[50][X_test.iloc[50]!=0].keys()}')
+ print(f'cuisine: {y_test.iloc[50]}')
+ ```
+
+ Sonuç basılır:
+
+ ```output
+ ingredients: Index(['cilantro', 'onion', 'pea', 'potato', 'tomato', 'vegetable_oil'], dtype='object')
+ cuisine: indian
+ ```
+
+ ✅ Farklı bir satır numarası deneyin ve sonuçları kontrol edin
+
+1. Daha derine inerek, bu tahminin doğruluğunu kontrol edebilirsiniz:
+
+ ```python
+ test= X_test.iloc[50].values.reshape(-1, 1).T
+ proba = model.predict_proba(test)
+ classes = model.classes_
+ resultdf = pd.DataFrame(data=proba, columns=classes)
+
+ topPrediction = resultdf.T.sort_values(by=[0], ascending = [False])
+ topPrediction.head()
+ ```
+
+    Sonuç yazdırılır - Hint mutfağı, iyi bir olasılıkla en iyi tahmindir:
+
+ | | 0 |
+ | -------: | -------: |
+ | indian | 0.715851 |
+ | chinese | 0.229475 |
+ | japanese | 0.029763 |
+ | korean | 0.017277 |
+ | thai | 0.007634 |
+
+ ✅ Modelin neden Hint mutfağı olduğundan oldukça emin olduğunu açıklayabilir misiniz?
+
+1. Regresyon derslerinde olduğu gibi bir sınıflandırma raporu yazarak daha fazla ayrıntı alın:
+
+ ```python
+ y_pred = model.predict(X_test)
+ print(classification_report(y_test,y_pred))
+ ```
+
+ | | precision | recall | f1-score | support |
+ | ------------ | --------- | ------ | -------- | ------- |
+ | chinese | 0.73 | 0.71 | 0.72 | 229 |
+ | indian | 0.91 | 0.93 | 0.92 | 254 |
+ | japanese | 0.70 | 0.75 | 0.72 | 220 |
+ | korean | 0.86 | 0.76 | 0.81 | 242 |
+ | thai | 0.79 | 0.85 | 0.82 | 254 |
+    | accuracy     |           |        | 0.80     | 1199    |
+ | macro avg | 0.80 | 0.80 | 0.80 | 1199 |
+ | weighted avg | 0.80 | 0.80 | 0.80 | 1199 |
+
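+Sınıfların birbirleriyle nerede karıştırıldığını daha ayrıntılı görmek için, dersin başında zaten içe aktarılan `confusion_matrix` işlevini de kullanabilirsiniz (taslak bir örnek; `y_test` ve `y_pred` yukarıdaki adımlardan gelir):
+
+```python
+from sklearn.metrics import confusion_matrix
+
+# Satırlar gerçek sınıfları, sütunlar tahmin edilen sınıfları gösterir
+cm = confusion_matrix(y_test, y_pred)
+print(cm)
+```
+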
+## 🚀Meydan Okuma
+
+Bu derste, temizlenmiş verilerinizi kullanarak bir grup malzemeye dayanarak ulusal bir mutfağı tahmin edebilen bir makine öğrenme modeli oluşturdunuz. Scikit-learn'ün veri sınıflandırmak için sağladığı birçok seçeneği okumak için biraz zaman ayırın. 'Çözücü' kavramını daha derinlemesine inceleyerek perde arkasında neler olduğunu anlayın.
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/22/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Lojistik regresyonun matematiğini [bu derste](https://people.eecs.berkeley.edu/~russell/classes/cs194/f11/lectures/CS194%20Fall%202011%20Lecture%2006.pdf) biraz daha derinlemesine inceleyin.
+
+## Ödev
+
+[Çözücüleri inceleyin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalar için sorumluluk kabul etmiyoruz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/2-Classifiers-1/assignment.md b/translations/tr/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..d5dfe7a44
--- /dev/null
+++ b/translations/tr/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# Çözücüleri İnceleyin
+## Talimatlar
+
+Bu derste, doğru bir model oluşturmak için algoritmaları makine öğrenimi süreciyle eşleştiren çeşitli çözücüler hakkında bilgi edindiniz. Derste listelenen çözücüleri inceleyin ve ikisini seçin. Bu iki çözücüyü kendi kelimelerinizle karşılaştırın ve kıyaslayın. Hangi tür sorunları ele alırlar? Çeşitli veri yapılarıyla nasıl çalışırlar? Birini diğerine neden tercih edersiniz?
+## Değerlendirme Kriterleri
+
+| Kriter | Örnek | Yeterli | İyileştirme Gerekli |
+| -------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------ | ---------------------------- |
+| | İki paragraftan oluşan, her bir çözücü hakkında düşünceli bir şekilde karşılaştırma yapan bir .doc dosyası sunulmuştur. | Sadece bir paragraf içeren bir .doc dosyası sunulmuştur | Ödev tamamlanmamış |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/tr/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..04236addf
--- /dev/null
+++ b/translations/tr/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/3-Classifiers-2/README.md b/translations/tr/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..9b5f24d09
--- /dev/null
+++ b/translations/tr/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# Mutfak Sınıflandırıcıları 2
+
+Bu ikinci sınıflandırma dersinde, sayısal verileri sınıflandırmanın daha fazla yolunu keşfedeceksiniz. Ayrıca bir sınıflandırıcıyı diğerine tercih etmenin sonuçlarını da öğreneceksiniz.
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### Ön Koşul
+
+Önceki dersleri tamamladığınızı ve bu 4 derslik klasörün kök dizininde _cleaned_cuisines.csv_ adlı temizlenmiş bir veri kümesine sahip olduğunuzu varsayıyoruz.
+
+### Hazırlık
+
+_notebook.ipynb_ dosyanızı temizlenmiş veri kümesiyle yükledik ve model oluşturma sürecine hazır olacak şekilde X ve y veri çerçevelerine böldük.
+
+## Bir sınıflandırma haritası
+
+Daha önce, Microsoft'un hile sayfasını kullanarak verileri sınıflandırırken sahip olduğunuz çeşitli seçenekleri öğrendiniz. Scikit-learn, tahmincilerinizi (sınıflandırıcılar için başka bir terim) daraltmanıza yardımcı olabilecek benzer ancak daha ayrıntılı bir hile sayfası sunar:
+
+
+> İpucu: [bu haritayı çevrimiçi ziyaret edin](https://scikit-learn.org/stable/tutorial/machine_learning_map/) ve belgelere ulaşmak için yol boyunca tıklayın.
+
+### Plan
+
+Bu harita, verilerinizi net bir şekilde kavradığınızda çok yardımcı olur, çünkü yolları boyunca bir karara 'yürüyebilirsiniz':
+
+- 50'den fazla örneğimiz var
+- Bir kategori tahmin etmek istiyoruz
+- Etiketlenmiş verilerimiz var
+- 100K'den az örneğimiz var
+- ✨ Bir Linear SVC seçebiliriz
+- Bu işe yaramazsa, çünkü sayısal verilerimiz var
+ - ✨ KNeighbors Classifier deneyebiliriz
+ - Bu da işe yaramazsa, ✨ SVC ve ✨ Ensemble Classifiers deneyin
+
+Bu takip edilmesi gereken çok faydalı bir yoldur.
+
+## Egzersiz - verileri bölmek
+
+Bu yolu izleyerek, kullanacağımız bazı kütüphaneleri içe aktararak başlamalıyız.
+
+1. Gerekli kütüphaneleri içe aktarın:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. Eğitim ve test verilerinizi bölün:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## Linear SVC sınıflandırıcı
+
+Destek-Vektör kümeleme (SVC), Destek-Vektör makineleri ailesinin bir alt kümesidir (aşağıda bunlar hakkında daha fazla bilgi edinebilirsiniz). Bu yöntemde, etiketleri nasıl kümeleyeceğinizi belirlemek için bir 'çekirdek' (kernel) seçebilirsiniz. 'C' parametresi, parametrelerin etkisini düzenleyen 'düzenlileştirme'yi (regularization) ifade eder. Çekirdek [birkaç](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) türden biri olabilir; burada lineer SVC'den yararlanmak için onu 'linear' olarak ayarlıyoruz. Olasılık varsayılan olarak 'false'dur; burada olasılık tahminleri toplamak için onu 'true' olarak ayarlıyoruz. Verileri karıştırmak için rastgele durumu (random_state) '0' olarak ayarlıyoruz.
+
+### Egzersiz - bir linear SVC uygulayın
+
+Bir sınıflandırıcılar dizisi oluşturarak başlayın. Test ettikçe bu diziye kademeli olarak eklemeler yapacaksınız.
+
+1. Bir Linear SVC ile başlayın:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. Linear SVC kullanarak modelinizi eğitin ve bir rapor yazdırın:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ Sonuç oldukça iyi:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors sınıflandırıcı
+
+K-Neighbors, hem denetimli hem de denetimsiz öğrenme için kullanılabilen ML yöntemleri ailesinin bir parçasıdır. Bu yöntemde, önceden belirlenmiş sayıda nokta oluşturulur ve bu noktalar etrafında veriler toplanarak veriler için genelleştirilmiş etiketler tahmin edilebilir.
+
+### Egzersiz - K-Neighbors sınıflandırıcı uygulayın
+
+Önceki sınıflandırıcı iyiydi ve verilerle iyi çalıştı, ancak belki daha iyi doğruluk elde edebiliriz. Bir K-Neighbors sınıflandırıcı deneyin.
+
+1. Sınıflandırıcı dizinize bir satır ekleyin (Linear SVC öğesinden sonra bir virgül ekleyin):
+
+ ```python
+    'KNN classifier': KNeighborsClassifier(C),  # C (=10) burada komşu sayısı (n_neighbors) olarak geçirilir
+ ```
+
+ Sonuç biraz daha kötü:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors) hakkında bilgi edinin
+
+## Support Vector Classifier
+
+Support-Vector sınıflandırıcılar, sınıflandırma ve regresyon görevlerinde kullanılan [Support-Vector Machine](https://wikipedia.org/wiki/Support-vector_machine) ailesinin bir parçasıdır. SVM'ler, "eğitim örneklerini, iki kategori arasındaki mesafeyi en üst düzeye çıkaracak şekilde uzaydaki noktalara eşler." Sonraki veriler bu uzaya eşlenir; böylece kategorileri tahmin edilebilir.
+
+### Egzersiz - Support Vector Classifier uygulayın
+
+Biraz daha iyi doğruluk için bir Support Vector Classifier deneyelim.
+
+1. K-Neighbors öğesinden sonra bir virgül ekleyin ve ardından bu satırı ekleyin:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ Sonuç oldukça iyi!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ [Support-Vectors](https://scikit-learn.org/stable/modules/svm.html#svm) hakkında bilgi edinin
+
+## Ensemble Classifiers
+
+Önceki test oldukça iyi olmasına rağmen, yolun sonuna kadar takip edelim. Özellikle Random Forest ve AdaBoost gibi bazı 'Ensemble Classifiers' deneyelim:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+Sonuç özellikle Random Forest için çok iyi:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ [Ensemble Classifiers](https://scikit-learn.org/stable/modules/ensemble.html) hakkında bilgi edinin
+
+Bu Makine Öğrenimi yöntemi, modelin kalitesini artırmak için birkaç temel tahmincinin tahminlerini birleştirir. Örneğimizde, Random Trees ve AdaBoost kullandık.
+
+- [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#forest), aşırı uyumu önlemek için rastgelelik ile aşılanmış 'karar ağaçları'ndan oluşan bir 'orman' oluşturur. n_estimators parametresi, ağaç sayısını ayarlar.
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html), bir veri kümesine bir sınıflandırıcı uyarlar ve ardından aynı veri kümesine bu sınıflandırıcının kopyalarını uyarlar. Yanlış sınıflandırılan öğelerin ağırlıklarına odaklanır ve bir sonraki sınıflandırıcının uyumunu bunları düzeltecek şekilde ayarlar (aşağıdaki taslak örneğe bakın).
+
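+Aşağıdaki taslak, `n_estimators` parametresinin Random Forest'ın başarımını nasıl etkileyebileceğini çapraz doğrulama ile gösterir (yukarıdaki `cuisines_feature_df` ve `cuisines_label_df` veri çerçevelerinin hazır olduğu varsayılır; değer aralığı yalnızca örnek amaçlıdır):
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import cross_val_score
+import numpy as np
+
+for n in [10, 50, 100]:
+    rf = RandomForestClassifier(n_estimators=n, random_state=0)
+    scores = cross_val_score(rf, cuisines_feature_df, np.ravel(cuisines_label_df), cv=3)
+    print(n, scores.mean())
+```
+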
+---
+
+## 🚀Meydan Okuma
+
+Bu tekniklerin her birinin ayarlayabileceğiniz birçok parametresi vardır. Her birinin varsayılan parametrelerini araştırın ve bu parametreleri ayarlamanın modelin kalitesi için ne anlama gelebileceğini düşünün.
+
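+Başlangıç noktası olarak, Scikit-learn'ün `GridSearchCV` aracıyla sistemli bir parametre taraması yapabilirsiniz; aşağıdaki taslak, KNN sınıflandırıcısının `n_neighbors` parametresini tarar (`X_train` ve `y_train`'in derste olduğu gibi bölündüğü varsayılır; değer aralığı yalnızca örnek amaçlıdır):
+
+```python
+from sklearn.model_selection import GridSearchCV
+from sklearn.neighbors import KNeighborsClassifier
+import numpy as np
+
+grid = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': [3, 5, 7, 9, 11]}, cv=3)
+grid.fit(X_train, np.ravel(y_train))
+print(grid.best_params_, grid.best_score_)
+```
+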
+## [Ders Sonrası Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Bu derslerde çok fazla jargon var, bu yüzden bir dakika ayırarak [bu listeyi](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott) gözden geçirin!
+
+## Ödev
+
+[Parametre oyunu](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/3-Classifiers-2/assignment.md b/translations/tr/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..0f4d04c8e
--- /dev/null
+++ b/translations/tr/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# Parametre Oyunu
+
+## Talimatlar
+
+Bu sınıflandırıcılarla çalışırken varsayılan olarak ayarlanmış birçok parametre vardır. VS Code'daki Intellisense, bunları keşfetmenize yardımcı olabilir. Bu dersteki ML Sınıflandırma Tekniklerinden birini benimseyin ve çeşitli parametre değerlerini değiştirerek modelleri yeniden eğitin. Bazı değişikliklerin model kalitesine neden yardımcı olduğunu, bazılarının ise neden zarar verdiğini açıklayan bir defter oluşturun. Cevabınızda ayrıntılı olun.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek | Yeterli | Geliştirmeye İhtiyaç Var |
+| --------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------- |
+| | Bir sınıflandırıcı tam olarak oluşturulmuş ve parametreleri ayarlanmış ve değişiklikler metin kutularında açıklanmış bir defter sunulmuştur | Bir defter kısmen sunulmuş veya kötü açıklanmıştır | Bir defter hatalı veya kusurludur |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalar için sorumluluk kabul etmiyoruz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/tr/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..b08e01a3b
--- /dev/null
+++ b/translations/tr/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/4-Applied/README.md b/translations/tr/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..ae68bb379
--- /dev/null
+++ b/translations/tr/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# Bir Mutfak Önerici Web Uygulaması Oluşturma
+
+Bu derste, önceki derslerde öğrendiğiniz bazı teknikleri kullanarak ve bu seride kullanılan lezzetli mutfak veri seti ile bir sınıflandırma modeli oluşturacaksınız. Ayrıca, kaydedilmiş bir modeli kullanmak için Onnx'in web çalıştırma zamanını kullanarak küçük bir web uygulaması oluşturacaksınız.
+
+Makine öğreniminin en faydalı pratik kullanımlarından biri öneri sistemleri oluşturmaktır ve bugün bu yönde ilk adımı atabilirsiniz!
+
+[Applied ML](https://youtu.be/17wdM9AHMfg "Applied ML")
+
+> 🎥 Yukarıdaki resme tıklayarak bir video izleyin: Jen Looper, sınıflandırılmış mutfak verilerini kullanarak bir web uygulaması oluşturuyor
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+Bu derste öğrenecekleriniz:
+
+- Bir model nasıl oluşturulur ve Onnx modeli olarak nasıl kaydedilir
+- Netron'u kullanarak model nasıl incelenir
+- Bir web uygulamasında model nasıl kullanılır
+
+## Modelinizi Oluşturun
+
+Uygulamalı ML sistemleri oluşturmak, bu teknolojileri iş sistemlerinizde kullanmanın önemli bir parçasıdır. Onnx kullanarak modelleri web uygulamalarınızda kullanabilirsiniz (ve gerektiğinde çevrimdışı bir bağlamda kullanabilirsiniz).
+
+[Önceki bir derste](../../3-Web-App/1-Web-App/README.md), UFO gözlemleri hakkında bir Regresyon modeli oluşturmuş, "pickle" etmiş ve bir Flask uygulamasında kullanmıştınız. Bu mimariyi bilmek çok faydalı olsa da, tam yığın bir Python uygulamasıdır ve gereksinimleriniz bir JavaScript uygulamasının kullanımını içerebilir.
+
+Bu derste, çıkarım için temel bir JavaScript tabanlı sistem oluşturabilirsiniz. Ancak önce, bir model eğitmeniz ve Onnx ile kullanmak üzere dönüştürmeniz gerekiyor.
+
+## Alıştırma - sınıflandırma modeli eğitme
+
+Öncelikle, kullandığımız temizlenmiş mutfak veri setini kullanarak bir sınıflandırma modeli eğitin.
+
+1. Faydalı kütüphaneleri içe aktararak başlayın:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ Scikit-learn modelinizi Onnx formatına dönüştürmenize yardımcı olacak '[skl2onnx](https://onnx.ai/sklearn-onnx/)' gerekecek.
+
+1. Daha sonra, önceki derslerde yaptığınız gibi bir CSV dosyasını `read_csv()` kullanarak okuyarak verilerinizle çalışın:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. İlk iki gereksiz sütunu kaldırın ve kalan verileri 'X' olarak kaydedin:
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. Etiketleri 'y' olarak kaydedin:
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### Eğitim rutinine başlayın
+
+'SVC' kütüphanesini kullanacağız çünkü iyi bir doğruluğa sahiptir.
+
+1. Scikit-learn'den uygun kütüphaneleri içe aktarın:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. Eğitim ve test setlerini ayırın:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. Önceki derste yaptığınız gibi bir SVC Sınıflandırma modeli oluşturun:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. Şimdi modelinizi test edin, `predict()` çağrısı yaparak:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. Modelin kalitesini kontrol etmek için bir sınıflandırma raporu yazdırın:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ Daha önce gördüğümüz gibi, doğruluk iyi:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### Modelinizi Onnx'e dönüştürün
+
+Dönüştürmeyi doğru Tensor numarası ile yapmayı unutmayın. Bu veri setinde 380 malzeme listelenmiştir, bu yüzden `FloatTensorType` içinde bu sayıyı belirtmeniz gerekir:
+
+1. 380 tensor numarasını kullanarak dönüştürün.
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. `onx` nesnesini oluşturun ve **model.onnx** adlı bir dosya olarak kaydedin:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+    > Not: Dönüşüm betiğinizde [seçenekler](https://onnx.ai/sklearn-onnx/parameterized.html) geçirebilirsiniz. Bu örnekte 'nocl' değerini True, 'zipmap' değerini False olarak geçirdik. Bu bir sınıflandırma modeli olduğundan, bir sözlük listesi üreten (gerekli değil) ZipMap'i kaldırma seçeneğiniz vardır. `nocl`, sınıf bilgisinin modele dahil edilmesini ifade eder. `nocl` değerini 'True' olarak ayarlayarak modelinizin boyutunu küçültebilirsiniz.
+
+Artık not defterinin tamamını çalıştırmak bir Onnx modeli oluşturacak ve onu bu klasöre kaydedecektir.
+
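+Kaydedilen modeli Python tarafında hızlıca doğrulamak isterseniz, şuna benzer taslak bir `onnxruntime` çağrısı işe yarayabilir (paketin kurulu olduğu varsayılır; girdi değerleri yalnızca örnek amaçlıdır):
+
+```python
+import numpy as np
+import onnxruntime as rt
+
+sess = rt.InferenceSession("./model.onnx")
+# 380 malzemenin hiçbirinin seçilmediği örnek bir girdi
+sample = np.zeros((1, 380), dtype=np.float32)
+print(sess.run(None, {"float_input": sample})[0])
+```
+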
+## Modelinizi görüntüleyin
+
+Onnx modelleri Visual Studio Code'da pek görünür değildir; ancak birçok araştırmacının, modelin doğru şekilde oluşturulduğundan emin olmak amacıyla modeli görselleştirmek için kullandığı çok iyi bir ücretsiz yazılım vardır. [Netron](https://github.com/lutzroeder/Netron) uygulamasını indirin ve model.onnx dosyanızı açın. 380 girdisi ve sınıflandırıcısıyla basit modelinizin görselleştirilmiş halini görebilirsiniz:
+
+
+
+Netron, modellerinizi görüntülemek için faydalı bir araçtır.
+
+Artık bu şık modeli bir web uygulamasında kullanmaya hazırsınız. Buzdolabınıza bakıp arta kalan malzemelerinizin hangi kombinasyonuyla, modelinizin belirlediği bir mutfağa ait yemek pişirebileceğinizi bulmaya çalışırken işinize yarayacak bir uygulama oluşturalım.
+
+## Bir önerici web uygulaması oluşturun
+
+Modelinizi doğrudan bir web uygulamasında kullanabilirsiniz. Bu mimari ayrıca, gerekirse onu yerel olarak ve hatta çevrimdışı çalıştırmanıza da olanak tanır. `model.onnx` dosyanızı sakladığınız klasörde bir `index.html` dosyası oluşturarak başlayın.
+
+1. _index.html_ dosyasına aşağıdaki işaretlemeyi ekleyin:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+        <header>
+            <title>Cuisine Matcher</title>
+        </header>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. Şimdi, `body` etiketleri içinde çalışarak, bazı malzemeleri yansıtan bir dizi onay kutusu göstermek için biraz işaretleme ekleyin:
+
+    ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <!-- Her onay kutusunun 'value' değeri, malzemenin veri setindeki sütun indeksidir. -->
+        <!-- 'apple' için 4 değeri aşağıdaki metinde doğrulanmıştır; diğer değerler örnek -->
+        <!-- amaçlıdır ve ingredient_indexes.csv dosyasından kontrol edilmelidir. -->
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="247" class="checkbox">
+            <label>pear</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="77" class="checkbox">
+            <label>cherry</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="126" class="checkbox">
+            <label>fenugreek</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="302" class="checkbox">
+            <label>sake</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="327" class="checkbox">
+            <label>soy sauce</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="112" class="checkbox">
+            <label>cumin</label>
+        </div>
+    </div>
+    <div style="padding-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+ Her onay kutusuna bir değer verildiğine dikkat edin. Bu, malzemenin veri setine göre bulunduğu indeksi yansıtır. Örneğin, bu alfabetik listede Elma beşinci sütunu işgal eder, bu yüzden değeri '4' olur çünkü 0'dan saymaya başlarız. Belirli bir malzemenin indeksini keşfetmek için [malzemeler tablosuna](../../../../4-Classification/data/ingredient_indexes.csv) başvurabilirsiniz.
+
+    index.html dosyasındaki çalışmanızı sürdürerek, son kapanış `</div>` etiketinden sonra modelin çağrıldığı bir script bloğu ekleyin.
+
+1. İlk olarak, [Onnx Runtime](https://www.onnxruntime.ai/) içe aktarın:
+
+    ```html
+    <!-- Onnx Runtime'ı bir CDN üzerinden ekleyin; sürüm numarası örnek amaçlıdır -->
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+    > Onnx Runtime, Onnx modellerinizi geniş bir donanım platformu yelpazesinde çalıştırmak için kullanılır; optimizasyonlar ve kullanım için bir API içerir.
+
+1. Çalıştırma zamanı yerinde olduğunda onu çağırabilirsiniz. Aşağıdaki betik, izleyen listede açıklanan adımların taslak bir örneğidir:
+
+    ```html
+    <script>
+        // 380 olası malzeme için 1/0 değerlerini tutan dizi
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        checks.forEach(check => {
+            check.addEventListener('change', function() {
+                // onay kutusunun value değeri, malzemenin indeksidir
+                ingredients[check.value] = check.checked ? 1 : 0;
+            });
+        });
+
+        function testCheckboxes() {
+            // en az bir kutunun işaretli olup olmadığını döndürür
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            let atLeastOneChecked = testCheckboxes()
+
+            if (atLeastOneChecked) {
+                try {
+                    // modeli eşzamansız olarak yükle
+                    const session = await ort.InferenceSession.create('./model.onnx');
+
+                    // modele gönderilecek Tensor yapısı
+                    const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+
+                    // eğitim sırasında tanımlanan 'float_input' girdisini yansıtan feeds
+                    const feeds = { float_input: input };
+
+                    // feeds'i modele gönder ve yanıtı bekle
+                    const results = await session.run(feeds);
+
+                    alert('You can enjoy ' + results.label.data[0] + ' cuisine today!')
+                } catch (e) {
+                    console.log(`failed to inference ONNX model`);
+                    console.error(e);
+                }
+            } else alert("Please check an ingredient")
+        }
+    </script>
+    ```
+
+Bu kodda, birkaç şey oluyor:
+
+1. Bir malzeme onay kutusu işaretli olup olmadığına bağlı olarak ayarlanacak ve çıkarım için modele gönderilecek 380 olası değerden (1 veya 0) oluşan bir dizi oluşturdunuz.
+2. Uygulama başladığında çağrılan bir `init` işlevinde, bir onay kutusu dizisi ve bunların işaretli olup olmadığını belirlemenin bir yolunu oluşturdunuz. Bir onay kutusu işaretlendiğinde, `ingredients` dizisi seçilen malzemeyi yansıtacak şekilde değiştirilir.
+3. Herhangi bir onay kutusunun işaretlenip işaretlenmediğini kontrol eden bir `testCheckboxes` işlevi oluşturdunuz.
+4. Düğmeye basıldığında `startInference` işlevini kullanırsınız ve herhangi bir onay kutusu işaretliyse çıkarımı başlatırsınız.
+5. Çıkarım rutini şunları içerir:
+    1. Modelin eşzamansız olarak yüklenmesini ayarlamak
+    2. Modele gönderilecek bir Tensor yapısı oluşturmak
+    3. Modelinizi eğitirken oluşturduğunuz `float_input` girdisini yansıtan 'feeds' oluşturmak (bu adı doğrulamak için Netron'u kullanabilirsiniz)
+    4. Bu 'feeds'leri modele göndermek ve bir yanıt beklemek
+
+## Uygulamanızı test edin
+
+index.html dosyanızın bulunduğu klasörde, Visual Studio Code içinde bir terminal oturumu açın. [http-server](https://www.npmjs.com/package/http-server) paketinin global olarak kurulu olduğundan emin olun ve komut istemine `http-server` yazın. Bir localhost açılmalı ve web uygulamanızı görüntüleyebilirsiniz. Çeşitli malzemelere göre hangi mutfağın önerildiğini kontrol edin:
+
+
+
+Tebrikler, birkaç alan içeren bir 'öneri' web uygulaması oluşturdunuz. Bu sistemi geliştirmek için biraz zaman ayırın!
+
+## 🚀Meydan Okuma
+
+Web uygulamanız çok minimal, bu yüzden [ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv) verilerindeki malzemeler ve indeksleri kullanarak geliştirmeye devam edin. Hangi lezzet kombinasyonları belirli bir ulusal yemeği oluşturur?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## Gözden Geçirme & Kendi Kendine Çalışma
+
+Bu ders, yemek malzemeleri için bir öneri sistemi oluşturmanın faydasına sadece değindi, bu ML uygulamaları alanı çok zengin örneklerle doludur. Bu sistemlerin nasıl oluşturulduğu hakkında daha fazla okuyun:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## Ödev
+
+[Yeni bir önerici oluşturun](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/4-Applied/assignment.md b/translations/tr/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..cf7977eb0
--- /dev/null
+++ b/translations/tr/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# Bir önerici oluştur
+
+## Talimatlar
+
+Bu dersteki alıştırmalarınıza dayanarak, Onnx Runtime ve dönüştürülmüş Onnx modeli kullanarak JavaScript tabanlı bir web uygulaması oluşturmayı artık biliyorsunuz. Bu derslerden veya başka kaynaklardan elde edilen verileri kullanarak yeni bir önerici oluşturmayı deneyin (lütfen kaynak belirtin). Çeşitli kişilik özelliklerine göre bir evcil hayvan önerici veya bir kişinin ruh haline göre müzik türü önerici oluşturabilirsiniz. Yaratıcı olun!
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek Niteliğinde | Yeterli | Geliştirme Gerekiyor |
+| --------- | ---------------------------------------------------------------------- | -------------------------------------- | --------------------------------- |
+| | Bir web uygulaması ve not defteri sunulmuş, her ikisi de iyi belgelenmiş ve çalışıyor | Bu iki öğeden biri eksik veya hatalı | Her ikisi de eksik veya hatalı |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından doğacak herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/4-Classification/README.md b/translations/tr/4-Classification/README.md
new file mode 100644
index 000000000..d62bd2bdf
--- /dev/null
+++ b/translations/tr/4-Classification/README.md
@@ -0,0 +1,30 @@
+# Sınıflandırmaya Başlarken
+
+## Bölgesel Konu: Lezzetli Asya ve Hint Mutfağı 🍜
+
+Asya ve Hindistan'da yemek gelenekleri son derece çeşitlidir ve çok lezzetlidir! Malzemelerini anlamaya çalışmak için bölgesel mutfaklar hakkındaki verilere bir göz atalım.
+
+
+> Fotoğraf: Lisheng Chang tarafından Unsplash
+
+## Öğrenecekleriniz
+
+Bu bölümde, önceki Regresyon çalışmanız üzerine inşa edecek ve verileri daha iyi anlamak için kullanabileceğiniz diğer sınıflandırıcılar hakkında bilgi edineceksiniz.
+
+> Sınıflandırma modelleriyle çalışmayı öğrenmenize yardımcı olabilecek kullanışlı düşük kod araçları vardır. Bu görev için [Azure ML'yi deneyin](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Dersler
+
+1. [Sınıflandırmaya giriş](1-Introduction/README.md)
+2. [Daha fazla sınıflandırıcı](2-Classifiers-1/README.md)
+3. [Diğer sınıflandırıcılar](3-Classifiers-2/README.md)
+4. [Uygulamalı ML: bir web uygulaması oluşturun](4-Applied/README.md)
+
+## Katkıda Bulunanlar
+
+"Sınıflandırmaya Başlarken" ♥️ ile [Cassie Breviu](https://www.twitter.com/cassiebreviu) ve [Jen Looper](https://www.twitter.com/jenlooper) tarafından yazılmıştır.
+
+Lezzetli mutfaklar veri seti [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines) kaynaklıdır.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/1-Visualize/README.md b/translations/tr/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..c48e8c42e
--- /dev/null
+++ b/translations/tr/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,217 @@
+# Kümeleme Giriş
+
+Kümeleme, bir veri kümesinin etiketlenmediğini veya girdilerin önceden tanımlanmış çıktılarla eşleşmediğini varsayan bir tür [Gözetimsiz Öğrenme](https://wikipedia.org/wiki/Unsupervised_learning)'dir. Etiketlenmemiş verileri sıralamak ve veride algıladığı desenlere göre gruplamalar sağlamak için çeşitli algoritmalar kullanır.
+
+[No One Like You](https://youtu.be/ty2advRiWJM "PSquare tarafından No One Like You")
+
+> 🎥 Yukarıdaki resme tıklayarak bir video izleyebilirsiniz. Kümeleme ile makine öğrenimi çalışırken, bazı Nijeryalı Dance Hall şarkılarının tadını çıkarın - bu, PSquare tarafından 2014 yılında yayımlanmış yüksek puanlı bir şarkıdır.
+
+## [Ön Ders Testi](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+
+### Giriş
+
+[Kümeleme](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124), veri keşfi için çok kullanışlıdır. Kümelemenin, Nijeryalı dinleyicilerin müzik tüketimindeki eğilimleri ve desenleri keşfetmemize yardımcı olup olamayacağını görelim.
+
+✅ Kümeleme kullanım alanlarını düşünmek için bir dakika ayırın. Gerçek hayatta, çamaşır yığınınız olduğunda ve aile üyelerinizin kıyafetlerini ayırmanız gerektiğinde kümeleme olur 🧦👕👖🩲. Veri biliminde, kullanıcı tercihlerinin analiz edilmesi veya etiketlenmemiş herhangi bir veri kümesinin özelliklerinin belirlenmesi gerektiğinde kümeleme olur. Kümeleme, bir anlamda, kaosu anlamlandırmaya yardımcı olur, tıpkı bir çorap çekmecesi gibi.
+
+[Kümelemeye Giriş](https://youtu.be/esmzYhuFnds "Kümelemeye Giriş")
+
+> 🎥 Yukarıdaki resme tıklayarak bir video izleyebilirsiniz: MIT'den John Guttag kümelemeyi tanıtıyor.
+
+Profesyonel bir ortamda, kümeleme pazar segmentasyonu gibi şeyleri belirlemek için kullanılabilir, örneğin hangi yaş gruplarının hangi ürünleri satın aldığını belirlemek için. Bir başka kullanım alanı, kredi kartı işlemlerinden oluşan bir veri kümesinde dolandırıcılığı tespit etmek olabilir. Ya da tıbbi taramalardan oluşan bir veri kümesinde tümörleri belirlemek için kümeleme kullanabilirsiniz.
+
+✅ Bir bankacılık, e-ticaret veya iş ortamında 'vahşi doğada' kümelemeyle nasıl karşılaşmış olabileceğinizi bir dakika düşünün.
+
+> 🎓 İlginç bir şekilde, kümeleme analizi 1930'larda Antropoloji ve Psikoloji alanlarında ortaya çıktı. O zamanlar nasıl kullanıldığını hayal edebilir misiniz?
+
+Alternatif olarak, arama sonuçlarını gruplamak için kullanabilirsiniz - örneğin alışveriş bağlantıları, resimler veya incelemeler. Kümeleme, büyük bir veri kümesini azaltmak ve üzerinde daha ayrıntılı analiz yapmak istediğinizde kullanışlıdır, bu nedenle teknik, diğer modeller oluşturulmadan önce veri hakkında bilgi edinmek için kullanılabilir.
+
+✅ Verileriniz kümeler halinde düzenlendikten sonra onlara bir küme kimliği atarsınız; bu teknik, bir veri kümesinin gizliliğini korurken yararlı olabilir, çünkü bir veri noktasına daha belirgin tanımlanabilir veriler yerine küme kimliğiyle atıfta bulunabilirsiniz. Kümeyi tanımlamak için kümenin diğer öğeleri yerine bir küme kimliğine başvurmanızın başka hangi nedenleri olabilir?
+
+Kümeleme teknikleri hakkındaki bilginizi bu [Öğrenme modülünde](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott) derinleştirin.
+
+## Kümelemeye Başlarken
+
+[Scikit-learn geniş bir yelpazede](https://scikit-learn.org/stable/modules/clustering.html) kümeleme yöntemleri sunar. Seçtiğiniz tür, kullanım durumunuza bağlı olacaktır. Dokümana göre, her yöntemin çeşitli faydaları vardır. İşte Scikit-learn tarafından desteklenen yöntemlerin ve uygun kullanım durumlarının basitleştirilmiş bir tablosu:
+
+| Yöntem Adı | Kullanım Durumu |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | genel amaçlı, tümevarımsal |
+| Affinity propagation | çok, düzensiz kümeler, tümevarımsal |
+| Mean-shift | çok, düzensiz kümeler, tümevarımsal |
+| Spectral clustering | az, düzenli kümeler, tümdengelimsel |
+| Ward hierarchical clustering | çok, kısıtlı kümeler, tümdengelimsel |
+| Agglomerative clustering | çok, kısıtlı, Öklidyen olmayan mesafeler, tümdengelimsel |
+| DBSCAN | düz olmayan geometri, düzensiz kümeler, tümdengelimsel |
+| OPTICS | düz olmayan geometri, değişken yoğunluklu düzensiz kümeler, tümdengelimsel |
+| Gaussian mixtures | düz geometri, tümevarımsal |
+| BIRCH | büyük veri kümesi, aykırı değerlerle, tümevarımsal |
+
+> 🎓 Kümeleri nasıl oluşturduğumuz, veri noktalarını gruplara nasıl topladığımızla çok ilgilidir. Bazı terimleri açalım:
+>
+> 🎓 ['Tümdengelimsel' vs. 'Tümevarımsal'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> Tümdengelimsel (transdüktif) çıkarım, belirli test durumlarına eşlenen gözlemlenmiş eğitim vakalarından türetilir. Tümevarımsal (indüktif) çıkarım ise eğitim vakalarından genel kurallar türetir ve bu kurallar ancak ondan sonra test durumlarına uygulanır.
+>
+> Bir örnek: Kısmen etiketlenmiş bir veri kümeniz olduğunu hayal edin. Bazı şeyler 'kayıt', bazıları 'cd' ve bazıları boştur. Göreviniz, boşlara etiket vermektir. Tümevarımsal bir yaklaşım seçerseniz, 'kayıtlar' ve 'cd'ler arayan bir model eğitirsiniz ve bu etiketleri etiketlenmemiş verinize uygularsınız. Bu yaklaşım, aslında 'kaset' olan şeyleri sınıflandırmakta zorlanır. Tümdengelimsel bir yaklaşım ise bu bilinmeyen veriyi daha etkili bir şekilde ele alır çünkü benzer öğeleri bir araya getirir ve ardından bir gruba etiket uygular. Bu durumda, kümeler 'yuvarlak müzik şeyleri' ve 'kare müzik şeyleri' gibi olabilir.
+>
+> 🎓 ['Düz olmayan' vs. 'düz' geometri](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> Matematiksel terminolojiden türetilen düz olmayan ve düz geometri, noktalar arasındaki mesafelerin 'düz' ([Öklidyen](https://wikipedia.org/wiki/Euclidean_geometry)) veya 'düz olmayan' (Öklidyen olmayan) geometrik yöntemlerle ölçülmesini ifade eder.
+>
+>'Düz' bu bağlamda Öklidyen geometriyi ifade eder (bir kısmı 'düzlem' geometrisi olarak öğretilir), düz olmayan ise Öklidyen olmayan geometriyi ifade eder. Peki geometrinin makine öğrenimiyle ne ilgisi var? Matematikten köklenen iki alan olarak, kümelerdeki noktalar arasındaki mesafeleri ölçmenin ortak bir yolu olmalıdır ve bu, verinin doğasına bağlı olarak 'düz' veya 'düz olmayan' bir şekilde yapılabilir. [Öklidyen mesafeler](https://wikipedia.org/wiki/Euclidean_distance) iki nokta arasındaki bir doğru parçasının uzunluğu olarak ölçülür. [Öklidyen olmayan mesafeler](https://wikipedia.org/wiki/Non-Euclidean_geometry) bir eğri boyunca ölçülür. Veriniz, görselleştirildiğinde bir düzlemde yer almıyor gibi görünüyorsa, onu ele almak için özel bir algoritma kullanmanız gerekebilir.
+>
+
+> Bilgilendirme Grafiği [Dasani Madipalli](https://twitter.com/dasani_decoded) tarafından
+>
+> 🎓 ['Mesafeler'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> Kümeler, noktalar arasındaki mesafelerle tanımlanır. Bu mesafe birkaç şekilde ölçülebilir. Öklidyen kümeler, nokta değerlerinin ortalaması ile tanımlanır ve bir 'merkez nokta' içerir. Mesafeler, bu merkez noktaya olan mesafeyle ölçülür. Öklidyen olmayan mesafeler, diğer noktalara en yakın nokta olan 'clustroid'ler referans alınarak ölçülür. Clustroid'ler de çeşitli şekillerde tanımlanabilir.
+>
+> 🎓 ['Kısıtlı'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [Kısıtlı Kümeleme](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf), bu gözetimsiz yönteme 'yarı gözetimli' öğrenmeyi tanıtır. Noktalar arasındaki ilişkiler 'bağlanamaz' veya 'bağlanması gerekir' olarak işaretlenir, böylece veri kümesine bazı kurallar uygulanır.
+>
+>Bir örnek: Bir algoritma, etiketlenmemiş veya yarı etiketlenmiş bir veri kümesine serbest bırakıldığında, oluşturduğu kümeler kalitesiz olabilir. Yukarıdaki örnekte, kümeler 'yuvarlak müzik şeyleri', 'kare müzik şeyleri', 'üçgen şeyler' ve 'kurabiyeler' olarak gruplandırılabilir. Bazı kısıtlamalar veya kurallar verilirse ("öğe plastikten yapılmış olmalı", "öğe müzik üretebilmeli"), bu algoritmanın daha iyi seçimler yapmasına yardımcı olabilir.
+>
+> 🎓 'Yoğunluk'
+>
+> 'Gürültülü' veri 'yoğun' olarak kabul edilir. Her bir kümedeki noktalar arasındaki mesafeler, incelendiğinde daha veya az yoğun, yani 'kalabalık' olabilir ve bu nedenle bu veri, uygun kümeleme yöntemiyle analiz edilmelidir. [Bu makale](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html), düzensiz küme yoğunluğuna sahip gürültülü bir veri kümesini keşfetmek için K-Means kümeleme ile HDBSCAN algoritmalarını kullanmanın farkını göstermektedir.
+
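+Mesafe kavramını somutlaştırmak için küçük, taslak bir örnek (NumPy ile; değerler tamamen örnek amaçlıdır):
+
+```python
+import numpy as np
+
+points = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
+centroid = points.mean(axis=0)
+
+# Her noktanın merkez noktaya (centroid) Öklidyen uzaklığı
+distances = np.linalg.norm(points - centroid, axis=1)
+print(centroid, distances)
+```
+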
+## Kümeleme Algoritmaları
+
+100'den fazla kümeleme algoritması vardır ve kullanımları eldeki verinin doğasına bağlıdır. Bazı ana algoritmaları tartışalım:
+
+- **Hiyerarşik kümeleme**. Bir nesne, yakın bir nesneye olan yakınlığına göre sınıflandırıldığında, kümeler üyelerinin diğer nesnelere olan mesafelerine göre oluşturulur. Scikit-learn'ün agglomeratif kümelemesi hiyerarşiktir.
+
+ 
+ > Bilgilendirme Grafiği [Dasani Madipalli](https://twitter.com/dasani_decoded) tarafından
+
+- **Merkez noktası kümeleme**. Bu popüler algoritma, oluşturulacak küme sayısını belirledikten sonra, bir kümenin merkez noktasını belirler ve bu nokta etrafında veri toplar. [K-means kümeleme](https://wikipedia.org/wiki/K-means_clustering), merkez noktası kümelemesinin popüler bir versiyonudur. Merkez, en yakın ortalama ile belirlenir, bu nedenle adı. Kümeden olan kare mesafesi minimize edilir.
+
+ 
+ > Bilgilendirme Grafiği [Dasani Madipalli](https://twitter.com/dasani_decoded) tarafından
+
+- **Dağılım tabanlı kümeleme**. İstatistiksel modellemeye dayalı olan dağılım tabanlı kümeleme, bir veri noktasının bir kümeye ait olma olasılığını belirlemeye ve buna göre atamaya odaklanır. Gaussian karışım yöntemleri bu türe aittir.
+
+- **Yoğunluk tabanlı kümeleme**. Veri noktaları, yoğunluklarına veya birbirleri etrafında gruplandırılmalarına göre kümelere atanır. Grup dışındaki veri noktaları, aykırı değerler veya gürültü olarak kabul edilir. DBSCAN, Mean-shift ve OPTICS bu tür kümelemeye aittir.
+
+- **Izgara tabanlı kümeleme**. Çok boyutlu veri kümeleri için bir ızgara oluşturulur ve veri ızgaranın hücrelerine bölünerek kümeler oluşturulur.
+
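+Bu türlerden ikisinin pratikte nasıl ayrıştığını görmek için küçük, taslak bir karşılaştırma (Scikit-learn'ün yapay `make_moons` verisi üzerinde; parametre değerleri yalnızca örnek amaçlıdır):
+
+```python
+from sklearn.datasets import make_moons
+from sklearn.cluster import KMeans, DBSCAN
+
+# Düz olmayan (hilal şeklinde) iki küme içeren yapay veri
+X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
+
+# Merkez noktası tabanlı: küme sayısı önceden belirtilmelidir
+kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
+
+# Yoğunluk tabanlı: küme sayısını veriden çıkarır ve hilalleri daha iyi yakalayabilir
+dbscan_labels = DBSCAN(eps=0.3).fit_predict(X)
+
+print(set(kmeans_labels), set(dbscan_labels))
+```
+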
+## Alıştırma - Verinizi Kümeleyin
+
+Kümeleme tekniği, doğru görselleştirme ile büyük ölçüde desteklenir, bu yüzden müzik verilerimizi görselleştirmeye başlayalım. Bu alıştırma, bu verinin doğası için en etkili hangi kümeleme yöntemlerini kullanmamız gerektiğine karar vermemize yardımcı olacaktır.
+
+1. Bu klasördeki [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb) dosyasını açın.
+
+1. İyi veri görselleştirme için `Seaborn` paketini kurun:
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv) dosyasından şarkı verilerini ekleyin. Şarkılar hakkında bazı verilerle bir dataframe yükleyin. Kütüphaneleri içe aktararak ve verileri dökerek bu veriyi keşfetmeye hazırlanın:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ İlk birkaç satırı kontrol edin:
+
+ | | name | album | artist | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. `info()` çağırarak dataframe hakkında bazı bilgiler edinin:
+
+ ```python
+ df.info()
+ ```
+
+ Çıktı şöyle görünecek:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. `isnull()` çağırarak ve toplamın 0 olduğunu doğrulayarak null değerleri iki kez kontrol edin:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ İyi görünüyor:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. Verileri tanımlayın:
+
+ ```python
+ df.describe()
+ ```
+
+ | | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+    | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Kümeleme algoritmalarını uygulamadan önce, öğrendiğimiz gibi, veri setinizin doğasını anlamak iyi bir fikirdir. Bu konu hakkında daha fazla bilgi edinmek için [buraya](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html) tıklayın.
+
+[Bu faydalı makale](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/), farklı veri şekilleri göz önüne alındığında çeşitli kümeleme algoritmalarının nasıl davrandığını açıklar.
+
+## Ödev
+
+[Kümeleme için diğer görselleştirmeleri araştırın](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/1-Visualize/assignment.md b/translations/tr/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..7fd7f5005
--- /dev/null
+++ b/translations/tr/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# Kümeleme için diğer görselleştirmeleri araştırın
+
+## Talimatlar
+
+Bu derste, verilerinizi kümelemeye hazırlamak için bazı görselleştirme teknikleriyle çalıştınız. Özellikle, dağılım grafikleri nesne gruplarını bulmak için kullanışlıdır. Dağılım grafikleri oluşturmak için farklı yolları ve farklı kütüphaneleri araştırın ve çalışmalarınızı bir not defterinde belgeleyin. Bu dersteki verileri, diğer derslerden verileri veya kendinizin temin ettiği verileri kullanabilirsiniz (ancak kaynağını not defterinizde belirtmeyi unutmayın). Dağılım grafikleri kullanarak bazı verileri çizin ve neler keşfettiğinizi açıklayın.
+
+## Değerlendirme Kriterleri
+
+| Kriter | Mükemmel | Yeterli | Geliştirmeye İhtiyacı Var |
+| -------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------- |
+| | Beş iyi belgelenmiş dağılım grafiği içeren bir not defteri sunulur | Beşten az dağılım grafiği içeren ve daha az belgelenmiş bir not defteri sunulur | Eksik bir not defteri sunulur |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/tr/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..b57a2f2e3
--- /dev/null
+++ b/translations/tr/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğacak herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/2-K-Means/README.md b/translations/tr/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..9c26da4d0
--- /dev/null
+++ b/translations/tr/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means kümeleme
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+Bu derste, daha önce içe aktardığınız Nijerya müzik veri kümesini kullanarak Scikit-learn ile nasıl kümeler oluşturacağınızı öğreneceksiniz. K-Means ile Kümeleme'nin temellerini ele alacağız. Daha önceki derste öğrendiğiniz gibi, kümelerle çalışmanın birçok yolu vardır ve kullandığınız yöntem verilerinize bağlıdır. En yaygın kümeleme tekniği olduğu için K-Means'ı deneyeceğiz. Hadi başlayalım!
+
+Öğreneceğiniz terimler:
+
+- Siluet skoru
+- Dirsek yöntemi
+- Eylemsizlik
+- Varyans
+
+## Giriş
+
+[K-Means Kümeleme](https://wikipedia.org/wiki/K-means_clustering), sinyal işleme alanından gelen bir yöntemdir. Bir dizi gözlemi kullanarak veri gruplarını 'k' kümeye bölmek ve ayırmak için kullanılır; her gözlem, verilen bir veri noktasını en yakın 'ortalama'ya, yani bir kümenin merkez noktasına göre gruplandırır.
+
+Kümeler, bir nokta (veya 'tohum') ve karşılık gelen bölgesini içeren [Voronoi diyagramları](https://wikipedia.org/wiki/Voronoi_diagram) olarak görselleştirilebilir.
+
+
+
+> İnfografik: [Jen Looper](https://twitter.com/jenlooper)
+
+K-Means kümeleme süreci [üç adımlı bir süreçte çalışır](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. Algoritma, veri kümesinden örnekleme yaparak k sayıda merkez noktası seçer. Bundan sonra, döngüye girer:
+ 1. Her örneği en yakın merkez noktaya atar.
+ 2. Önceki merkez noktalara atanan tüm örneklerin ortalama değerini alarak yeni merkez noktalar oluşturur.
+ 3. Ardından, yeni ve eski merkez noktalar arasındaki farkı hesaplar ve merkez noktalar stabilize olana kadar tekrarlar.
+
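+Bu döngüyü kabaca görmek için saf NumPy ile yazılmış küçük bir taslak aşağıdadır; fonksiyon ve değişken adları varsayımsaldır ve gerçek çalışmada Scikit-learn'ün `KMeans` sınıfı tercih edilmelidir:
+
+```python
+import numpy as np
+
+def kmeans_taslak(X, k, max_iter=100, seed=0):
+    # X: (n, d) boyutlu bir NumPy dizisi, örn. df[...].to_numpy()
+    rng = np.random.default_rng(seed)
+    # 1. Veri kümesinden örnekleme yaparak k merkez noktası seç
+    merkezler = X[rng.choice(len(X), size=k, replace=False)]
+    for _ in range(max_iter):
+        # 2. Her örneği en yakın merkez noktaya ata
+        uzakliklar = np.linalg.norm(X[:, None, :] - merkezler[None, :, :], axis=2)
+        etiketler = uzakliklar.argmin(axis=1)
+        # 3. Atanan örneklerin ortalamasıyla yeni merkezleri hesapla;
+        #    boş kalan küme eski merkezini korur
+        yeni = np.array([X[etiketler == i].mean(axis=0) if np.any(etiketler == i)
+                         else merkezler[i] for i in range(k)])
+        # Merkezler stabilize olduysa dur
+        if np.allclose(yeni, merkezler):
+            break
+        merkezler = yeni
+    return merkezler, etiketler
+```
+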
+K-Means kullanmanın bir dezavantajı, 'k' yani merkez noktalarının sayısını belirlemeniz gerektiğidir. Neyse ki, 'dirsek yöntemi' 'k' için iyi bir başlangıç değeri tahmin etmenize yardımcı olur. Birazdan deneyeceksiniz.
+
+## Önkoşul
+
+Bu dersin [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) dosyasında çalışacaksınız, bu dosya önceki derste yaptığınız veri içe aktarma ve ön temizleme işlemlerini içerir.
+
+## Alıştırma - hazırlık
+
+Şarkı verilerine tekrar bir göz atarak başlayın.
+
+1. Her sütun için `boxplot()` çağırarak bir kutu grafiği oluşturun:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ Bu veri biraz gürültülü: her sütunu kutu grafiği olarak gözlemleyerek aykırı değerleri görebilirsiniz.
+
+ 
+
+Veri kümesinden bu aykırı değerleri çıkarabilirsiniz, ancak bu, elde kalan veriyi oldukça azaltır.
+
+1. Şimdi, kümeleme egzersiziniz için hangi sütunları kullanacağınıza karar verin. Benzer aralıklara sahip olanları seçin ve `artist_top_genre` sütununu sayısal veriler olarak kodlayın:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. Şimdi kaç küme hedefleyeceğinizi seçmeniz gerekiyor. Veri kümesinden 3 şarkı türü çıkardığınızı biliyorsunuz, bu yüzden 3'ü deneyelim:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+Dataframe'in her satırı için tahmin edilen kümeyi (0, 1 veya 2) içeren, yazdırılmış bir dizi görürsünüz.
+
+1. Bu diziyi kullanarak bir 'siluet skoru' hesaplayın:
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Siluet skoru
+
+1'e yakın bir siluet skoru arayın. Bu skor -1 ile 1 arasında değişir ve eğer skor 1 ise, küme yoğundur ve diğer kümelerden iyi ayrılmıştır. 0'a yakın bir değer, örneklerin komşu kümelerin karar sınırına çok yakın olduğu örtüşen kümeleri temsil eder. [(Kaynak)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+Bizim skorumuz **0.53**, yani tam ortada. Bu, verilerimizin bu tür bir kümelemeye pek uygun olmadığını gösteriyor, ancak devam edelim.
+
+### Alıştırma - bir model oluşturma
+
+1. `KMeans`'i içe aktarın ve kümeleme sürecine başlayın.
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ Burada açıklamaya değer birkaç bölüm var.
+
+ > 🎓 range: Bunlar kümeleme sürecinin iterasyonlarıdır
+
+ > 🎓 random_state: "Merkez noktası başlatma için rastgele sayı üretimini belirler." [Kaynak](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "küme içi kareler toplamı" bir küme içindeki tüm noktaların küme merkezine olan kareli ortalama mesafesini ölçer. [Kaynak](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce).
+
+ > 🎓 Inertia: K-Means algoritmaları, 'inertia'yı minimize edecek merkez noktaları seçmeye çalışır, "kümelerin ne kadar içsel olarak tutarlı olduğunu ölçen bir ölçüttür." [Kaynak](https://scikit-learn.org/stable/modules/clustering.html). Değer her iterasyonda wcss değişkenine eklenir.
+
+   > 🎓 k-means++: [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means)'de 'k-means++' optimizasyonunu kullanabilirsiniz; bu, "merkez noktalarını genellikle birbirinden uzak olacak şekilde başlatır ve rastgele başlatmadan muhtemelen daha iyi sonuçlar verir."
+
+### Dirsek yöntemi
+
+Daha önce, 3 şarkı türünü hedeflediğiniz için 3 küme seçmeniz gerektiğini varsaymıştınız. Ama gerçekten öyle mi?
+
+1. Emin olmak için 'dirsek yöntemini' kullanın.
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+   Önceki adımda oluşturduğunuz `wcss` değişkenini kullanarak, optimum küme sayısını işaret eden 'dirsek' bükümünün nerede olduğunu gösteren bir grafik oluşturun. Belki gerçekten de **3**!
+
+ 
+
+## Alıştırma - kümeleri gösterme
+
+1. Süreci tekrar deneyin, bu sefer üç küme ayarlayın ve kümeleri bir dağılım grafiği olarak gösterin:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. Modelin doğruluğunu kontrol edin:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+   Bu modelin doğruluğu pek iyi değil ve kümelerin şekli bunun nedenine dair size bir ipucu veriyor.
+
+ 
+
+   Bu veri çok dengesiz, korelasyonu düşük ve sütun değerleri arasında çok fazla varyans var; bu yüzden iyi kümelenmiyor. Aslında oluşan kümeler, muhtemelen yukarıda tanımladığımız üç tür kategorisinden büyük ölçüde etkileniyor ya da bu kategorilere doğru çarpıtılıyor. Bu bir öğrenme süreciydi!
+
+ Scikit-learn belgelerinde, bu model gibi, iyi belirlenmemiş kümeleri olan bir modelin 'varyans' problemi olduğunu görebilirsiniz:
+
+ 
+ > Infographic from Scikit-learn
+
+## Varyans
+
+Varyans, "Ortalamanın kareli farklarının ortalaması" olarak tanımlanır [(Kaynak)](https://www.mathsisfun.com/data/standard-deviation.html). Bu kümeleme problemi bağlamında, veri kümesindeki sayıların ortalamadan biraz fazla sapma eğiliminde olduğunu ifade eder.
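+
+Kavramı sayılarla görmek için, bağlantı verilen kaynaktaki köpek boyu örneğini (600, 470, 170, 430, 300) Python ile hesaplayan küçük bir taslak:
+
+```python
+import numpy as np
+
+veriler = np.array([600, 470, 170, 430, 300])
+ortalama = veriler.mean()                     # 394.0
+varyans = ((veriler - ortalama) ** 2).mean()  # kareli farkların ortalaması
+print(varyans)                                # 21704.0
+```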
+
+✅ Bu, bu sorunu düzeltmenin tüm yollarını düşünmek için harika bir an. Verileri biraz daha düzenlemek mi? Farklı sütunlar kullanmak mı? Farklı bir algoritma kullanmak mı? İpucu: Verilerinizi normalleştirmek için [ölçeklendirmeyi deneyin](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) ve diğer sütunları test edin.
+
+> Bu '[varyans hesaplayıcısı](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)'nı deneyerek kavramı biraz daha iyi anlayın.
+
+---
+
+## 🚀Meydan Okuma
+
+Bu notebook ile biraz zaman geçirin, parametreleri ayarlayın. Verileri daha fazla temizleyerek (örneğin aykırı değerleri çıkararak) modelin doğruluğunu artırabilir misiniz? Belirli veri örneklerine daha fazla ağırlık vermek için ağırlıklar kullanabilirsiniz. Daha iyi kümeler oluşturmak için başka ne yapabilirsiniz?
+
+İpucu: Verilerinizi ölçeklendirmeyi deneyin. Notebook'ta, veri sütunlarını aralık açısından daha benzer hale getirmek için standart ölçeklendirme ekleyen yorumlanmış kod bulacaksınız. Siluet skoru düşse de, dirsek grafiğindeki 'büküm' yumuşar. Bunun nedeni, verileri ölçeklendirilmemiş bırakmanın, daha az varyansa sahip verilerin daha fazla ağırlık taşımasına izin vermesidir. Bu sorun hakkında biraz daha okuyun [burada](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).
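+
+Bu ipucunu denemek için olası bir taslak aşağıdadır (değişken adları varsayımsaldır; `X`'in yukarıdaki alıştırmada seçtiğiniz sayısal sütunları içerdiği varsayılır):
+
+```python
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+# Sütunları ortalama 0, standart sapma 1 olacak şekilde ölçeklendir
+X_olcekli = StandardScaler().fit_transform(X)
+
+kmeans = KMeans(n_clusters=3, random_state=42)
+etiketler = kmeans.fit_predict(X_olcekli)
+```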
+
+## [Ders Sonrası Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+[Bunun gibi](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/) bir K-Means simülatörüne göz atın. Bu aracı kullanarak örnek veri noktalarını görselleştirebilir ve merkez noktalarını belirleyebilirsiniz. Verinin rastgeleliğini, küme sayısını ve merkez noktası sayısını düzenleyebilirsiniz. Bu, verilerin nasıl gruplanabileceği hakkında fikir edinmenize yardımcı oluyor mu?
+
+Ayrıca Stanford'dan [bu K-Means el kitabına](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) göz atın.
+
+## Ödev
+
+[Farklı kümeleme yöntemlerini deneyin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/2-K-Means/assignment.md b/translations/tr/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..8842b5622
--- /dev/null
+++ b/translations/tr/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,13 @@
+# Farklı kümeleme yöntemlerini deneyin
+
+## Talimatlar
+
+Bu derste K-Means kümeleme hakkında bilgi edindiniz. Bazen K-Means verileriniz için uygun olmayabilir. Bu derslerden veya başka bir kaynaktan (kaynağınızı belirtin) veri kullanarak bir defter oluşturun ve K-Means kullanmadan farklı bir kümeleme yöntemi gösterin. Ne öğrendiniz?
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek Niteliğinde | Yeterli | Geliştirmeye İhtiyaç Var |
+| --------- | ------------------------------------------------------------- | -------------------------------------------------------------------- | ------------------------------ |
+| | İyi belgelenmiş bir kümeleme modeli içeren bir defter sunulur | İyi belgelenmemiş ve/veya eksik bir defter sunulur | Eksik çalışma sunulmuştur |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlaşılma veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/tr/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..d02b660c6
--- /dev/null
+++ b/translations/tr/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğa önem versek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/5-Clustering/README.md b/translations/tr/5-Clustering/README.md
new file mode 100644
index 000000000..a88306f97
--- /dev/null
+++ b/translations/tr/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# Makine öğrenimi için kümeleme modelleri
+
+Kümeleme, benzer nesneleri bulmayı ve bunları kümeler olarak adlandırılan gruplar halinde gruplamayı amaçlayan bir makine öğrenimi görevidir. Kümelemeyi makine öğrenimindeki diğer yaklaşımlardan ayıran şey, her şeyin otomatik olarak gerçekleşmesidir. Aslında, denetimli öğrenmenin tam tersidir demek doğru olur.
+
+## Bölgesel konu: Nijeryalı bir izleyici kitlesinin müzik zevkine yönelik kümeleme modelleri 🎧
+
+Nijerya'nın çeşitlilik gösteren dinleyici kitlesinin müzik zevkleri de çeşitlidir. Spotify'dan alınan verileri kullanarak ([bu makaleden](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421) ilham alarak), Nijerya'da popüler olan bazı müziklere bakalım. Bu veri kümesi, çeşitli şarkıların 'dans edilebilirlik' puanı, 'akustiklik', ses yüksekliği, 'konuşkanlık', popülerlik ve enerji değerleri hakkında veriler içerir. Bu verilerdeki kalıpları keşfetmek ilginç olacak!
+
+
+
+> Fotoğraf: Unsplash üzerinde Marcela Laskoski tarafından
+
+Bu ders serisinde, kümeleme tekniklerini kullanarak verileri analiz etmenin yeni yollarını keşfedeceksiniz. Kümeleme, veri kümenizde etiketler olmadığında özellikle yararlıdır. Eğer etiketler varsa, önceki derslerde öğrendiğiniz sınıflandırma teknikleri daha yararlı olabilir. Ancak etiketlenmemiş verileri gruplamayı amaçladığınız durumlarda, kümeleme kalıpları keşfetmenin harika bir yoludur.
+
+> Kümeleme modelleri ile çalışmayı öğrenmenize yardımcı olabilecek kullanışlı düşük kod araçları vardır. Bu görev için [Azure ML'yi deneyin](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## Dersler
+
+1. [Kümelemeye giriş](1-Visualize/README.md)
+2. [K-Means kümeleme](2-K-Means/README.md)
+
+## Katkıda Bulunanlar
+
+Bu dersler 🎶 ile [Jen Looper](https://www.twitter.com/jenlooper) tarafından yazıldı ve [Rishit Dagli](https://twitter.com/rishit_dagli) ile [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) tarafından yapılan faydalı incelemelerle desteklendi.
+
+[Nijeryalı Şarkılar](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify) veri kümesi, Spotify'dan alınarak Kaggle'dan temin edilmiştir.
+
+Bu dersi oluştururken yardımcı olan faydalı K-Means örnekleri arasında bu [iris keşfi](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering), bu [giriş not defteri](https://www.kaggle.com/prashant111/k-means-clustering-with-python) ve bu [varsayımsal STK örneği](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering) bulunmaktadır.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/1-Introduction-to-NLP/README.md b/translations/tr/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..b3c971295
--- /dev/null
+++ b/translations/tr/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# Doğal Dil İşlemeye Giriş
+
+Bu ders, *hesaplamalı dilbilimin* bir alt alanı olan *doğal dil işlemenin* kısa bir tarihini ve önemli kavramlarını kapsar.
+
+## [Ders Öncesi Testi](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## Giriş
+
+Genellikle NLP olarak bilinen doğal dil işleme, makine öğreniminin uygulandığı ve üretim yazılımlarında kullanılan en bilinen alanlardan biridir.
+
+✅ Her gün kullandığınız ve muhtemelen içinde biraz NLP barındıran bir yazılım düşünebilir misiniz? Peki ya düzenli olarak kullandığınız kelime işlem programları veya mobil uygulamalar?
+
+Öğrenecekleriniz:
+
+- **Dillerin fikri**. Dillerin nasıl geliştiği ve ana çalışma alanlarının neler olduğu.
+- **Tanım ve kavramlar**. Bilgisayarların metni nasıl işlediğine dair tanımlar ve kavramlar, cümle çözümleme, dilbilgisi ve isim ve fiilleri tanımlama dahil. Bu derste bazı kodlama görevleri var ve sonraki derslerde kodlamayı öğreneceğiniz birkaç önemli kavram tanıtılıyor.
+
+## Hesaplamalı Dilbilim
+
+Hesaplamalı dilbilim, bilgisayarların dillerle nasıl çalışabileceğini, hatta anlayabileceğini, çevirebileceğini ve iletişim kurabileceğini araştıran ve geliştiren bir alandır. Doğal dil işleme (NLP), bilgisayarların 'doğal' veya insan dillerini nasıl işleyebileceğine odaklanan ilgili bir alandır.
+
+### Örnek - telefon dikte
+
+Telefonunuza yazmak yerine dikte ettiyseniz veya sanal bir asistana soru sorduysanız, konuşmanız önce metin biçimine dönüştürülmüş, ardından konuştuğunuz dil işlenmiş yani *çözümlenmiştir*. Algılanan anahtar kelimeler daha sonra, telefonun veya asistanın anlayıp işlem yapabileceği bir biçime dönüştürülmüştür.
+
+
+> Gerçek dilsel anlama zordur! Görsel [Jen Looper](https://twitter.com/jenlooper) tarafından
+
+### Bu teknoloji nasıl mümkün hale geliyor?
+
+Bu, birinin bunu yapmak için bir bilgisayar programı yazması sayesinde mümkündür. Birkaç on yıl önce, bazı bilim kurgu yazarları, insanların çoğunlukla bilgisayarlarıyla konuşacağını ve bilgisayarların her zaman ne demek istediklerini tam olarak anlayacağını öngörmüştü. Ne yazık ki, bu birçok kişinin hayal ettiğinden daha zor bir problem olduğu ortaya çıktı ve bugün çok daha iyi anlaşılan bir problem olmasına rağmen, bir cümlenin anlamını anlamak söz konusu olduğunda 'mükemmel' doğal dil işlemeye ulaşmakta önemli zorluklar vardır. Özellikle bir cümledeki mizahı anlamak veya alay gibi duyguları tespit etmek söz konusu olduğunda bu zor bir problemdir.
+
+Bu noktada, öğretmenin bir cümledeki dilbilgisi öğelerini ele aldığı okul derslerini hatırlayabilirsiniz. Bazı ülkelerde dilbilgisi ve dilbilim öğrencilere ayrı bir ders olarak öğretilirken, birçok ülkede bu konular bir dil öğrenmenin parçasıdır: ya ilkokulda ana dilinizi öğrenirken (okumayı ve yazmayı öğrenmek) ya da ortaokul veya lisede ikinci bir dil öğrenirken. İsimleri fiillerden veya zarfları sıfatlardan ayırt etme konusunda uzman değilseniz endişelenmeyin!
+
+*Geniş zaman* ile *şimdiki zaman* arasındaki farkla mücadele ediyorsanız, yalnız değilsiniz. Bu, birçok insan için, hatta bir dilin ana konuşmacıları için bile zor bir şeydir. İyi haber şu ki, bilgisayarlar resmi kuralları uygulamada gerçekten iyidir ve bir cümleyi bir insan kadar iyi *çözümleyecek* kod yazmayı öğreneceksiniz. Daha sonra inceleyeceğiniz daha büyük zorluk, bir cümlenin *anlamını* ve *duygusunu* anlamaktır.
+
+## Ön Koşullar
+
+Bu ders için ana ön koşul, bu dersin dilini okuyabilmek ve anlayabilmektir. Çözülecek matematik problemleri veya denklemler yoktur. Orijinal yazar bu dersi İngilizce yazmış olsa da, başka dillere de çevrilmiştir, bu yüzden bir çeviri okuyabilirsiniz. Birkaç farklı dilin kullanıldığı örnekler vardır (farklı dillerin dilbilgisi kurallarını karşılaştırmak için). Bu diller *çevirilmemiştir*, ancak açıklayıcı metin çevrilmiştir, bu yüzden anlam net olmalıdır.
+
+Kodlama görevleri için Python kullanacaksınız ve örnekler Python 3.8 kullanılarak yapılmıştır.
+
+Bu bölümde şunlara ihtiyacınız olacak ve şunları kullanacaksınız:
+
+- **Python 3 bilgisi**. Bu ders girdi alma, döngüler, dosya okuma ve dizileri kullandığından, Python 3'ü anlıyor olmanız gerekir.
+- **Visual Studio Code + eklenti**. Visual Studio Code ve Python eklentisini kullanacağız. Ayrıca tercih ettiğiniz bir Python IDE'sini de kullanabilirsiniz.
+- **TextBlob**. [TextBlob](https://github.com/sloria/TextBlob), Python için basitleştirilmiş bir metin işleme kütüphanesidir. TextBlob sitesindeki talimatları izleyerek sisteminize yükleyin (aşağıda gösterildiği gibi corpusları da yükleyin):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 İpucu: Python'u doğrudan VS Code ortamlarında çalıştırabilirsiniz. Daha fazla bilgi için [belgelere](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott) göz atın.
+
+## Makinelerle Konuşmak
+
+Bilgisayarların insan dilini anlamasını sağlamaya yönelik çalışmalar on yıllar öncesine dayanır ve doğal dil işlemeyi düşünen en erken bilim insanlarından biri *Alan Turing* idi.
+
+### 'Turing testi'
+
+Turing, 1950'lerde *yapay zeka* araştırmaları yaparken, bir insana ve bilgisayara (yazılı iletişim yoluyla) bir konuşma testi verilse, insanın konuşmada başka bir insanla mı yoksa bir bilgisayarla mı konuştuğundan emin olamaması durumunu düşündü.
+
+Belirli bir konuşma süresinden sonra, insan cevapların bir bilgisayardan mı yoksa başka bir insandan mı geldiğini belirleyemezse, bilgisayarın *düşündüğü* söylenebilir mi?
+
+### İlham - 'taklit oyunu'
+
+Bu fikir, bir sorgulayıcının bir odada yalnız olduğu ve başka bir odadaki iki kişinin cinsiyetini belirlemeye çalıştığı bir parti oyunundan geldi. Sorgulayıcı notlar gönderebilir ve yazılı cevapların gizemli kişinin cinsiyetini ortaya çıkaracak sorular düşünmeye çalışmalıdır. Tabii ki, diğer odadaki oyuncular, soruları yanıltıcı veya kafa karıştırıcı şekilde cevaplayarak sorgulayıcıyı yanıltmaya çalışırken, aynı zamanda dürüstçe cevap veriyormuş gibi görünmeye çalışır.
+
+### Eliza'yı geliştirmek
+
+1960'larda MIT'den bir bilim insanı olan *Joseph Weizenbaum*, [*Eliza*](https://wikipedia.org/wiki/ELIZA) adında bir bilgisayar 'terapisti' geliştirdi. Eliza, insana sorular sorar ve cevaplarını anlıyormuş gibi görünürdü. Ancak, Eliza bir cümleyi çözümleyip belirli dilbilgisi yapıları ve anahtar kelimeleri tanımlayarak makul bir cevap verebilirken, cümleyi *anladığı* söylenemezdi. Eliza, "**Ben** üzgünüm" formatındaki bir cümleye karşılık, cümledeki kelimeleri yeniden düzenleyip yerine koyarak "Ne kadar süredir **üzgün** olduğunuzu hissediyorsunuz" şeklinde yanıt verebilirdi.
+
+Bu, Eliza'nın ifadeyi anladığı ve bir takip sorusu sorduğu izlenimini verirken, gerçekte yalnızca fiilin zamanını değiştirip birkaç kelime ekliyordu. Eliza, yanıt verebileceği bir anahtar kelime bulamazsa, bunun yerine birçok farklı ifadeye uyabilecek rastgele bir yanıt verirdi. Eliza kolayca kandırılabilirdi; örneğin bir kullanıcı "**Sen** bir bisikletsin" yazarsa, mantıklı bir yanıt vermek yerine "Ne kadar süredir **ben** bir bisikletim?" şeklinde karşılık verebilirdi.
+
+[](https://youtu.be/RMK9AphfLco "Eliza ile Sohbet")
+
+> 🎥 Yukarıdaki görüntüye tıklayarak orijinal ELIZA programı hakkında bir video izleyebilirsiniz
+
+> Not: Bir ACM hesabınız varsa, 1966'da yayınlanan [Eliza'nın](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract) orijinal tanımını okuyabilirsiniz. Alternatif olarak, Eliza hakkında [wikipedia](https://wikipedia.org/wiki/ELIZA)'dan bilgi edinin
+
+## Alıştırma - Temel Bir Konuşma Botu Kodlama
+
+Eliza gibi bir konuşma botu, kullanıcı girdilerini alan ve anlamış gibi görünen ve akıllıca yanıt veren bir programdır. Eliza'nın aksine, botumuz akıllı bir konuşma izlenimi veren birkaç kurala sahip olmayacak. Bunun yerine, botumuzun tek bir yeteneği olacak, neredeyse her sıradan konuşmada işe yarayabilecek rastgele yanıtlarla konuşmayı sürdürmek.
+
+### Plan
+
+Bir konuşma botu oluştururken adımlarınız:
+
+1. Kullanıcıya botla nasıl etkileşime geçeceğini anlatan talimatları yazdırın
+2. Bir döngü başlatın
+ 1. Kullanıcı girdisini kabul edin
+ 2. Kullanıcı çıkmak isterse, çıkın
+ 3. Kullanıcı girdisini işleyin ve yanıtı belirleyin (bu durumda, yanıt olası genel yanıtlar listesinden rastgele bir seçimdir)
+ 4. Yanıtı yazdırın
+3. Adım 2'ye geri dönün
+
+### Botu Oluşturma
+
+Şimdi botu oluşturalım. Öncelikle bazı ifadeleri tanımlayarak başlayacağız.
+
+1. Aşağıdaki rastgele yanıtlarla bu botu kendiniz Python'da oluşturun:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
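+   Plandaki döngüyü birleştiren olası bir taslak aşağıdadır (bu yalnızca bir çizimdir; görevin tam bir çözümüne aşağıdaki bağlantıdan ulaşabilirsiniz):
+
+   ```python
+   import random
+
+   print("Hello, I am Marvin, the simple robot.")
+   print("You can end this conversation at any time by typing 'bye'")
+   print("After typing each answer, press 'enter'")
+   print("How are you today?")
+
+   while True:
+       user_input = input("> ")
+       if user_input.lower() == "bye":
+           break
+       # Bu basit botta yanıt, genel yanıtlar listesinden rastgele seçilir
+       print(random.choice(random_responses))
+
+   print("It was nice talking to you, goodbye!")
+   ```
+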
+ İşte size rehberlik etmesi için bazı örnek çıktılar (kullanıcı girdisi `>` ile başlayan satırlarda):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+   Görevin olası bir çözümünü [burada](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py) bulabilirsiniz.
+
+ ✅ Düşünün ve durun
+
+ 1. Rastgele yanıtların birinin botun gerçekten anladığını düşündüreceğini düşünüyor musunuz?
+ 2. Botun daha etkili olması için hangi özelliklere ihtiyaç duyardı?
+ 3. Bir bot gerçekten bir cümlenin anlamını anlayabilseydi, önceki cümlelerin anlamını da 'hatırlaması' gerekir miydi?
+
+---
+
+## 🚀Meydan Okuma
+
+Yukarıdaki "düşünün ve durun" unsurlarından birini seçin ve bunu kodda uygulamaya çalışın veya bir çözümü kağıt üzerinde sahte kod kullanarak yazın.
+
+Bir sonraki derste, doğal dili çözümleme ve makine öğrenimine yönelik başka yaklaşımlar hakkında bilgi edineceksiniz.
+
+## [Ders Sonrası Testi](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## İnceleme ve Kendi Kendine Çalışma
+
+Aşağıdaki referanslara daha fazla okuma fırsatı olarak göz atın.
+
+### Referanslar
+
+1. Schubert, Lenhart, "Computational Linguistics", *The Stanford Encyclopedia of Philosophy* (Spring 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>.
+2. Princeton University "WordNet Hakkında." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## Ödev
+
+[Bir bot arayın](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/tr/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..9ace8ba83
--- /dev/null
+++ b/translations/tr/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# Bir bot arayın
+
+## Talimatlar
+
+Botlar her yerde. Göreviniz: Bir tane bulun ve benimseyin! Onları web sitelerinde, bankacılık uygulamalarında ve telefonla, örneğin finansal hizmet şirketlerini aradığınızda tavsiye veya hesap bilgileri aldığınızda bulabilirsiniz. Botu analiz edin ve onu şaşırtıp şaşırtamayacağınıza bakın. Eğer botu şaşırtabiliyorsanız, bunun neden olduğunu düşünüyorsunuz? Deneyiminiz hakkında kısa bir makale yazın.
+
+## Değerlendirme Kriterleri
+
+| Kriter | Mükemmel | Yeterli | Geliştirilmeli |
+| -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------- | --------------------- |
+| | Bot mimarisinin varsayılan halini açıklayan ve onunla olan deneyiminizi özetleyen tam bir sayfa makale yazılmış | Makale eksik veya yeterince araştırılmamış | Makale teslim edilmemiş |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki versiyonu yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğacak yanlış anlamalar veya yanlış yorumlamalar için sorumluluk kabul etmiyoruz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/2-Tasks/README.md b/translations/tr/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..b7612ec66
--- /dev/null
+++ b/translations/tr/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# Doğal Dil İşleme Görevleri ve Teknikleri
+
+Çoğu *doğal dil işleme* görevi için işlenecek metin parçalanmalı, incelenmeli ve sonuçlar kurallar ve veri setleri ile çapraz referanslanarak saklanmalıdır. Bu görevler, programcının bir metindeki terimlerin ve kelimelerin _anlamını_ veya _amacını_ ya da sadece _frekansını_ çıkarmasına olanak tanır.
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+Metin işleme sırasında kullanılan yaygın teknikleri keşfedelim. Bu teknikler, makine öğrenimi ile birleştirildiğinde, büyük miktarda metni verimli bir şekilde analiz etmenize yardımcı olur. Ancak, bu görevlerde ML uygulamadan önce, bir NLP uzmanının karşılaştığı sorunları anlamak önemlidir.
+
+## NLP'de Yaygın Görevler
+
+Üzerinde çalıştığınız bir metni analiz etmenin farklı yolları vardır. Bu görevleri gerçekleştirebilir ve bu görevler aracılığıyla metni anlayabilir ve sonuçlar çıkarabilirsiniz. Genellikle bu görevleri bir sırayla yaparsınız.
+
+### Tokenizasyon
+
+Muhtemelen çoğu NLP algoritmasının yapması gereken ilk şey, metni tokenlere veya kelimelere ayırmaktır. Bu basit gibi görünse de, noktalama işaretlerini ve farklı dillerin kelime ve cümle ayırıcılarını hesaba katmak işleri zorlaştırabilir. Sınırları belirlemek için çeşitli yöntemler kullanmanız gerekebilir.
+
+
+> **Pride and Prejudice**'den bir cümlenin tokenizasyonu. Infografik: [Jen Looper](https://twitter.com/jenlooper)
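+
+Aşağıda tanıtılacak TextBlob kütüphanesiyle hızlı bir tokenizasyon denemesi için küçük bir taslak (TextBlob ve corpus'larının kurulu olduğu varsayılır):
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, that a single man "
+                "in possession of a good fortune, must be in want of a wife.")
+print(blob.words[:6])       # ilk altı kelime tokeni
+print(len(blob.sentences))  # cümle sayısı
+```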
+
+### Gömme Teknikleri
+
+[Kelime gömmeleri](https://wikipedia.org/wiki/Word_embedding), metin verilerinizi sayısal olarak dönüştürmenin bir yoludur. Gömme işlemleri, benzer anlamlara sahip kelimeler veya birlikte kullanılan kelimeler bir araya gelecek şekilde yapılır.
+
+
+> "Sinirlerinize en yüksek saygıyı duyuyorum, onlar benim eski arkadaşlarım." - **Pride and Prejudice**'den bir cümle için kelime gömmeleri. Infografik: [Jen Looper](https://twitter.com/jenlooper)
+
+✅ Kelime gömmeleriyle denemeler yapmak için [bu ilginç aracı](https://projector.tensorflow.org/) deneyin. Bir kelimeye tıklamak, benzer kelimelerin kümelerini gösterir: 'toy' 'disney', 'lego', 'playstation' ve 'console' ile kümelenir.
+
+### Ayrıştırma ve Sözcük Türü Etiketleme
+
+Tokenize edilen her kelime, bir isim, fiil veya sıfat gibi bir sözcük türü olarak etiketlenebilir. `the quick red fox jumped over the lazy brown dog` cümlesi POS etiketlemesiyle şu şekilde olabilir: fox = isim, jumped = fiil.
+
+
+
+> **Pride and Prejudice**'den bir cümleyi ayrıştırma. Infografik: [Jen Looper](https://twitter.com/jenlooper)
+
+Ayrıştırma, bir cümlede hangi kelimelerin birbiriyle ilişkili olduğunu tanımaktır - örneğin `the quick red fox jumped` sıfat-isim-fiil dizisi, `lazy brown dog` dizisinden ayrıdır.
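+
+Sözcük türü etiketlerini TextBlob ile görmek için küçük bir taslak; tam çıktı, kullanılan etiketleyiciye göre biraz değişebilir:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.tags)
+# [('the', 'DT'), ('quick', 'JJ'), ('red', 'JJ'), ('fox', 'NN'), ...] benzeri bir liste
+```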
+
+### Kelime ve İfade Frekansları
+
+Büyük bir metin kümesini analiz ederken yararlı bir prosedür, ilgilenilen her kelime veya ifadenin ve ne sıklıkta göründüğünün bir sözlüğünü oluşturmaktır. `the quick red fox jumped over the lazy brown dog` ifadesi için 'the' kelimesinin frekansı 2'dir.
+
+Kelime frekanslarını saydığımız bir örnek metne bakalım. Rudyard Kipling'in The Winners şiiri şu dizeyi içerir:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+İfade frekansları, gerektiğinde büyük/küçük harfe duyarlı veya duyarsız olarak sayılabilir. Bu dizede `a friend` ifadesinin frekansı 2, `the` kelimesinin frekansı 6 ve `travels` kelimesinin frekansı 2'dir.
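+
+Bu sayımları `collections.Counter` ile kendiniz doğrulayabilirsiniz; büyük/küçük harfe duyarsız küçük bir taslak:
+
+```python
+from collections import Counter
+import re
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+words = re.findall(r"[a-z']+", verse.lower())
+freq = Counter(words)
+bigrams = Counter(zip(words, words[1:]))
+
+print(freq["the"], freq["travels"])  # 6 2
+print(bigrams[("a", "friend")])      # 2
+```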
+
+### N-gramlar
+
+Bir metin, belirli bir uzunluktaki kelime dizilerine bölünebilir: tek kelime (unigram), iki kelime (bigram), üç kelime (trigram) veya herhangi bir sayıda kelime (n-gram).
+
+Örneğin `the quick red fox jumped over the lazy brown dog` ifadesi, n-gram uzunluğu 2 olduğunda şu n-gramları üretir:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+Bunu bir cümlenin üzerinde kayan bir kutu olarak görselleştirmek daha kolay olabilir. İşte 3 kelimelik n-gramlar için örnek, her cümlede n-gram kalın olarak belirtilmiştir:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> n-gram uzunluğu 3. Infografik: [Jen Looper](https://twitter.com/jenlooper)
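+
+Kayan pencere mantığını birkaç satır Python ile yazmak mümkündür; küçük bir taslak:
+
+```python
+def ngramlar(metin, n=2):
+    kelimeler = metin.split()
+    # n kelimelik pencereyi cümle boyunca kaydır
+    return [" ".join(kelimeler[i:i + n]) for i in range(len(kelimeler) - n + 1)]
+
+cumle = "the quick red fox jumped over the lazy brown dog"
+print(ngramlar(cumle, 2))  # yukarıdaki 9 bigram
+print(ngramlar(cumle, 3))  # yukarıdaki 8 trigram
+```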
+
+### İsim İfadesi Çıkarma
+
+Çoğu cümlede, cümlenin öznesi veya nesnesi olan bir isim vardır. İngilizcede, genellikle 'a', 'an' veya 'the' ile tanımlanabilir. Bir cümlenin öznesini veya nesnesini 'isim ifadesini çıkararak' tanımlamak, cümlenin anlamını anlamaya çalışırken NLP'de yaygın bir görevdir.
+
+✅ "Saat, yer, görünüş veya kelimeler, temeli atan şeyler üzerine karar veremem. Çok uzun zaman oldu. Başladığımı bilmeden önce ortasındaydım." cümlesinde isim ifadelerini tanımlayabilir misiniz?
+
+`the quick red fox jumped over the lazy brown dog` cümlesinde 2 isim ifadesi vardır: **quick red fox** ve **lazy brown dog**.
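+
+Aynı cümleyi TextBlob ile de deneyebilirsiniz; varsayılan çıkarıcının döndürdüğü ifadeler kullanılan modele göre biraz değişebilir:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.noun_phrases)  # ör. ['quick red fox', 'lazy brown dog'] benzeri bir liste
+```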
+
+### Duygu Analizi
+
+Bir cümle veya metin, ne kadar *pozitif* veya *negatif* olduğuna göre analiz edilebilir. Duygu, *kutupluluk* ile *nesnellik/öznellik* üzerinden ölçülür. Kutupluluk -1.0'dan 1.0'a (negatiften pozitife), nesnellik/öznellik ise 0.0'dan 1.0'a (en nesnelden en öznele) ölçeklenir.
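+
+TextBlob'da her iki ölçü, `sentiment` özelliğinde birlikte döner; küçük bir taslak (tam sayısal değerler modele göre değişebilir):
+
+```python
+from textblob import TextBlob
+
+print(TextBlob("What a wonderful day!").sentiment)        # yüksek kutupluluk, öznel
+print(TextBlob("The report is 12 pages long.").sentiment) # nötre ve nesnele yakın
+```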
+
+✅ Daha sonra makine öğrenimi kullanarak duyguyu belirlemenin farklı yollarını öğreneceksiniz, ancak bir yol, insan uzman tarafından pozitif veya negatif olarak kategorize edilen kelime ve ifadelerden oluşan bir listeye sahip olmak ve bu modeli metne uygulayarak bir kutupluluk skoru hesaplamaktır. Bunun bazı durumlarda nasıl işe yarayacağını ve diğerlerinde daha az işe yarayacağını görebilir misiniz?
+
+### Çekim
+
+Çekim, bir kelimeyi almanızı ve kelimenin tekil veya çoğul halini elde etmenizi sağlar.
+
+### Lematizasyon
+
+Bir *lemma*, bir kelime kümesi için kök veya baş kelimedir, örneğin *flew*, *flies*, *flying* kelimelerinin lemması *fly* fiilidir.
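+
+Hem çekim hem de lemmatizasyon için TextBlob'un `Word` sınıfı küçük yardımcılar sunar; minimal bir taslak (WordNet corpus'unun yüklü olduğu varsayılır):
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())        # foxes
+print(Word("dogs").singularize())     # dog
+print(Word("flew").lemmatize("v"))    # fly  ('v' = fiil)
+print(Word("flying").lemmatize("v"))  # fly
+```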
+
+NLP araştırmacısı için kullanışlı veritabanları da mevcuttur, özellikle:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/), birçok farklı dildeki her kelime için kelimeler, eşanlamlılar, zıt anlamlılar ve birçok diğer detayların yer aldığı bir veritabanıdır. Çeviri, yazım denetleyicileri veya herhangi bir türde dil araçları oluştururken inanılmaz derecede faydalıdır.
+
+## NLP Kütüphaneleri
+
+Neyse ki, tüm bu teknikleri kendiniz oluşturmanız gerekmiyor, çünkü doğal dil işleme veya makine öğrenimi konusunda uzman olmayan geliştiriciler için çok daha erişilebilir hale getiren mükemmel Python kütüphaneleri mevcut. Bir sonraki derslerde bunların daha fazla örneğini göreceksiniz, ancak burada bir sonraki görevinizde size yardımcı olacak bazı faydalı örnekler öğreneceksiniz.
+
+### Egzersiz - `TextBlob` kütüphanesi
+
+Bu tür görevleri ele almak için faydalı API'ler içeren TextBlob adlı bir kütüphaneyi kullanalım. TextBlob, "[NLTK](https://nltk.org) ve [pattern](https://github.com/clips/pattern) devlerinin omuzlarında yükselir ve her ikisiyle de uyumlu çalışır." API'sinde önemli miktarda gömülü ML bulunur.
+
+> Not: TextBlob için, deneyimli Python geliştiricilerine önerilen faydalı bir [Hızlı Başlangıç](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart) kılavuzu mevcuttur
+
+TextBlob, *isim ifadelerini* tanımlamaya çalışırken bunları bulmak için birkaç çıkarıcı (extractor) seçeneği sunar.
+
+1. `ConllExtractor` kullanımına bir göz atın:
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > Burada ne oluyor? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor), "ConLL-2000 eğitim korpusu ile eğitilmiş chunk ayrıştırma kullanan bir isim ifadesi çıkarıcısıdır." ConLL-2000, 2000 Yılı Hesaplamalı Doğal Dil Öğrenme Konferansı'na atıfta bulunur. Her yıl konferans, zorlu bir NLP sorununu ele almak için bir atölye çalışması düzenler ve 2000 yılında bu isim chunking idi. Bir model Wall Street Journal'da eğitildi, "15-18. bölümler eğitim verisi olarak (211727 token) ve 20. bölüm test verisi olarak (47377 token) kullanıldı". Kullanılan prosedürlere [buradan](https://www.clips.uantwerpen.be/conll2000/chunking/) ve [sonuçlara](https://ifarm.nl/erikt/research/np-chunking.html) bakabilirsiniz.
+
+### Meydan Okuma - Botunuzu NLP ile geliştirmek
+
+Önceki derste çok basit bir Soru-Cevap botu oluşturmuştunuz. Şimdi, Marvin'i biraz daha sempatik hale getirerek girdiğiniz metni analiz edip duyguya uygun bir yanıt vererek geliştireceksiniz. Ayrıca bir `noun_phrase` tanımlayıp onun hakkında daha fazla bilgi isteyeceksiniz.
+
+Daha iyi bir konuşma botu oluştururken adımlarınız:
+
+1. Kullanıcıya botla nasıl etkileşime geçeceğini açıklayan talimatları yazdırın
+2. Döngüye başlayın
+   1. Kullanıcı girdisini kabul edin
+ 2. Kullanıcı çıkmak isterse çıkın
+ 3. Kullanıcı girdisini işleyin ve uygun duygu yanıtını belirleyin
+   4. Girdide bir isim ifadesi tespit edilirse, onu çoğullaştırın ve bu konu hakkında daha fazla girdi isteyin
+ 5. Yanıtı yazdırın
+3. 2. adıma geri dönün
+
+İşte duyguyu belirlemek için TextBlob kullanan bir kod parçası. Not: Burada yalnızca dört *duygu derecesi* vardır (isterseniz daha fazlasını ekleyebilirsiniz):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
+
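+Dördüncü adım için olası bir devam aşağıdadır; bir isim ifadesi bulunursa son kelimesi çoğullaştırılıp yanıta eklenir (değişken adları, yukarıdaki parçayla uyumlu varsayımlardır):
+
+```python
+from textblob import Word
+
+noun_phrases = user_input_blob.noun_phrases
+if noun_phrases:
+    kelimeler = noun_phrases[0].split()
+    kelimeler[-1] = Word(kelimeler[-1]).pluralize()  # son kelimeyi çoğullaştır
+    response += "Can you tell me more about " + " ".join(kelimeler) + "?"
+else:
+    response += "Can you tell me more?"
+print(response)
+```
+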
+İşte bazı örnek çıktı (kullanıcı girdisi `>` ile başlayan satırlardadır):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+Görevin olası bir çözümünü [burada](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py) bulabilirsiniz.
+
+✅ Bilgi Kontrolü
+
+1. Sempatik yanıtların birini botun gerçekten anladığını düşündürebileceğini düşünüyor musunuz?
+2. İsim ifadesini belirlemek botu daha inandırıcı kılar mı?
+3. Bir cümleden 'isim ifadesi' çıkarmak neden faydalı olabilir?
+
+---
+
+## 🚀Meydan Okuma
+
+Önceki bilgi kontrolündeki bir görevi alın ve uygulamaya çalışın. Botu bir arkadaşınız üzerinde test edin. Onları kandırabilir mi? Botunuzu daha inandırıcı yapabilir misiniz?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## İnceleme ve Kendi Kendine Çalışma
+
+Sonraki birkaç derste duygu analizini daha fazla öğreneceksiniz. [KDNuggets](https://www.kdnuggets.com/tag/nlp) gibi makalelerde bu ilginç tekniği araştırın.
+
+## Ödev
+
+[Bir botu konuştur](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/2-Tasks/assignment.md b/translations/tr/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..c12626905
--- /dev/null
+++ b/translations/tr/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# Bir Botun Konuşmasını Sağlamak
+
+## Talimatlar
+
+Son birkaç derste, sohbet edebileceğiniz temel bir bot programladınız. Bu bot, 'bye' diyene kadar rastgele cevaplar veriyor. Cevapları biraz daha az rastgele hale getirebilir ve 'why' veya 'how' gibi belirli şeyler söylediğinizde cevap vermesini sağlayabilir misiniz? Botunuzu genişletirken, makine öğreniminin bu tür işleri nasıl daha az manuel hale getirebileceğini biraz düşünün. Görevlerinizi kolaylaştırmak için NLTK veya TextBlob kütüphanelerini kullanabilirsiniz.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek Çalışma | Yeterli | Geliştirilmesi Gerekiyor |
+| --------- | --------------------------------------------- | ------------------------------------------------ | ----------------------- |
+| | Yeni bir bot.py dosyası sunulmuş ve belgelenmiş | Yeni bir bot dosyası sunulmuş fakat hatalar içeriyor | Bir dosya sunulmamış |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/3-Translation-Sentiment/README.md b/translations/tr/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..a572c365b
--- /dev/null
+++ b/translations/tr/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# ML ile Çeviri ve Duygu Analizi
+
+Önceki derslerde, temel NLP görevlerini gerçekleştirmek için sahne arkasında ML kullanan `TextBlob` kütüphanesiyle temel bir botun nasıl oluşturulacağını öğrendiniz. Hesaplamalı dilbilimdeki bir diğer önemli zorluk, bir cümlenin konuşulan veya yazılan bir dilden diğerine doğru bir şekilde _çevrilmesidir_.
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+Çeviri, binlerce dilin bulunması ve her birinin çok farklı dilbilgisi kurallarına sahip olabilmesi nedeniyle çok zor bir problemdir. Bir yaklaşım, İngilizce gibi bir dilin resmi dilbilgisi kurallarını dil bağımsız bir yapıya dönüştürmek ve ardından bunu başka bir dile çevirerek geri dönüştürmektir. Bu yaklaşım şu adımları içerir:
+
+1. **Tanımlama**. Giriş dilindeki kelimeleri isimler, fiiller vb. olarak tanımlayın veya etiketleyin.
+2. **Çeviri Oluşturma**. Hedef dil formatında her kelimenin doğrudan çevirisini üretin.
+
+### Örnek cümle, İngilizceden İrlandacaya
+
+İngilizcede, _I feel happy_ cümlesi üç kelimeden oluşur ve sırası şöyledir:
+
+- **özne** (I)
+- **fiil** (feel)
+- **sıfat** (happy)
+
+Ancak İrlandacada aynı cümle çok farklı bir dilbilgisi yapısına sahiptir - "*happy*" veya "*sad*" gibi duygular, sizin *üzerinizde* olan şeyler olarak ifade edilir.
+
+İngilizce ifadesi `I feel happy` İrlandacada `Tá athas orm` olur. Kelimesi kelimesine çeviri `Happy is upon me` olurdu.
+
+Bir İrlandaca konuşan kişi İngilizceye çeviri yaparken `I feel happy` derdi, `Happy is upon me` değil, çünkü cümlenin anlamını anlarlar, kelimeler ve cümle yapısı farklı olsa bile.
+
+İrlandaca cümle için resmi sıra:
+
+- **fiil** (Tá veya is)
+- **sıfat** (athas, veya happy)
+- **özne** (orm, veya upon me)
+
+## Çeviri
+
+Naif bir çeviri programı, cümle yapısını göz ardı ederek yalnızca kelimeleri çevirebilir.
+
+✅ İkinci (veya üçüncü veya daha fazla) bir dili yetişkin olarak öğrendiyseniz, ana dilinizde düşünerek, bir kavramı kelime kelime ikinci dile çevirerek ve ardından çevirinizi konuşarak başlamış olabilirsiniz. Bu, naif çeviri bilgisayar programlarının yaptığına benzer. Akıcılık kazanmak için bu aşamayı geçmek önemlidir!
+
+Naif çeviri kötü (ve bazen komik) yanlış çevirilere yol açar: `I feel happy` İrlandacada kelimesi kelimesine `Mise bhraitheann athas` olarak çevrilir. Bu (kelimesi kelimesine) `me feel happy` anlamına gelir ve geçerli bir İrlandaca cümle değildir. İngilizce ve İrlandaca, iki yakın komşu adada konuşulan diller olmasına rağmen, çok farklı dilbilgisi yapıları olan farklı dillerdir.
+
+> İrlandaca dil gelenekleri hakkında bazı videolar izleyebilirsiniz, örneğin [bu video](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### Makine öğrenimi yaklaşımları
+
+Şimdiye kadar doğal dil işleme için resmi kurallar yaklaşımını öğrendiniz. Bir diğer yaklaşım, kelimelerin anlamını göz ardı edip _bunun yerine desenleri tespit etmek için makine öğrenimini kullanmaktır_. Hem kaynak hem de hedef dilde çok sayıda metniniz (bir *corpus*) veya metinleriniz (*corpora*) varsa, bu yaklaşım çeviride işe yarayabilir.
+
+Örneğin, Jane Austen tarafından 1813'te yazılan ünlü İngiliz romanı *Pride and Prejudice* örneğini ele alalım. Kitabı İngilizce ve kitabın *Fransızca* bir insan çevirisini incelediğinizde, birinin diğerine _deyimsel_ olarak çevrildiği ifadeleri tespit edebilirsiniz. Bunu birazdan yapacaksınız.
+
+Örneğin, İngilizce `I have no money` ifadesi Fransızcaya kelimesi kelimesine çevrildiğinde `Je n'ai pas de monnaie` olabilir. 'Monnaie', yanıltıcı bir Fransızca 'false cognate' (sahte eş köken) örneğidir; 'money' ile 'monnaie' eşanlamlı değildir. İnsan elinden çıkma daha iyi bir çeviri `Je n'ai pas d'argent` olurdu, çünkü bu, bozuk paranızın değil paranızın olmadığını daha iyi aktarır.
+
+
+
+> Görsel [Jen Looper](https://twitter.com/jenlooper) tarafından
+
+Bir ML modeli, her iki dilde de uzman insan konuşmacılar tarafından daha önce çevrilmiş metinlerdeki ortak desenleri belirleyerek çevirilerin doğruluğunu artırabilir.
+
+### Egzersiz - çeviri
+
+`TextBlob` kullanarak cümleleri çevirebilirsiniz. **Pride and Prejudice**'in ünlü ilk cümlesini deneyin:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob` çeviride oldukça iyi bir iş çıkarır: "C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!".
+
+TextBlob'un çevirisinin, aslında, V. Leconte ve Ch. Pressoir tarafından 1932'de yapılan Fransızca çevirisinden çok daha kesin olduğu söylenebilir:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet égard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+Bu durumda, ML tarafından bilgilendirilen çeviri, orijinal yazarın ağzına gereksiz yere kelimeler koyan insan çevirmeninden daha iyi bir iş çıkarır.
+
+> Burada neler oluyor ve TextBlob çeviride neden bu kadar iyi? Aslında arka planda, milyonlarca ifadeyi analiz ederek eldeki görev için en iyi dizeleri tahmin edebilen sofistike bir yapay zeka olan Google Translate'i kullanıyor. Burada manuel çalışan hiçbir şey yok ve `blob.translate` kullanmak için internet bağlantısına ihtiyacınız var.
+
+---
+
+## Duygu Analizi
+
+Duygu analizi, bir metnin olumlu, olumsuz veya tarafsız olup olmadığını belirlemek için kullanılır. Bu, kullanıcı geri bildirimleri, sosyal medya gönderileri veya müşteri incelemeleri gibi metinlerin genel tonunu anlamak için çok yararlıdır.
+
+> Örneğin, "Harika, bu karanlık yolda kaybolarak zaman israf ettiğimize sevindim" ifadesi alaycı, olumsuz duygulu bir cümledir; ancak basit bir algoritma 'harika' ve 'sevindim' gibi kelimeleri olumlu, 'israf', 'kayboldu' ve 'karanlık' gibi kelimeleri olumsuz olarak tespit eder. Genel duygu skoru, bu çelişkili kelimelerden etkilenir.
+
+✅ Bir saniye durun ve insan konuşmacılar olarak alaycılığı nasıl ifade ettiğimizi düşünün. Tonlama büyük bir rol oynar. "Peki, o film harikaydı" ifadesini farklı şekillerde söylemeye çalışın ve sesinizin anlamı nasıl ilettiğini keşfedin.
+
+### ML yaklaşımları
+
+ML yaklaşımı, olumsuz ve olumlu metinleri - tweetler, film incelemeleri veya bir insanın bir puan *ve* yazılı bir görüş verdiği herhangi bir şey - manuel olarak toplamaktır. Daha sonra, NLP teknikleri görüşlere ve puanlara uygulanabilir, böylece desenler ortaya çıkar (örneğin, olumlu film incelemeleri 'Oscar'a değer' ifadesini olumsuz film incelemelerinden daha fazla içerir veya olumlu restoran incelemeleri 'gurme' kelimesini 'iğrenç'ten daha fazla içerir).
+
+> ⚖️ **Örnek**: Bir politikacının ofisinde çalıştığınızı ve tartışılan yeni bir yasa olduğunu varsayalım, seçmenler ofise bu yeni yasayı destekleyen veya karşı çıkan e-postalar yazabilirler. Diyelim ki e-postaları okuyup iki yığın halinde ayırmakla görevlisiniz, *destekleyen* ve *karşı çıkan*. Çok fazla e-posta varsa, hepsini okumaya çalışmak sizi bunaltabilir. Bir botun hepsini sizin için okuyup, anlayıp, her e-postanın hangi yığına ait olduğunu söylemesi güzel olmaz mıydı?
+>
+> Bunu başarmanın bir yolu Makine Öğrenimi kullanmaktır. Modeli, *karşı çıkan* e-postaların bir kısmı ve *destekleyen* e-postaların bir kısmı ile eğitirdiniz. Model, belirli ifadeleri ve kelimeleri karşı çıkan veya destekleyen e-postalarla ilişkilendirme eğiliminde olur, *ancak içeriğin hiçbirini anlamaz*, yalnızca belirli kelimelerin ve desenlerin bir *karşı çıkan* veya *destekleyen* e-postada daha olası olduğunu bilir. Modeli, eğitmek için kullanmadığınız bazı e-postalarla test edebilir ve aynı sonuca varıp varmadığını görebilirsiniz. Daha sonra, modelin doğruluğundan memnun olduğunuzda, gelecekteki e-postaları okumadan işleyebilirsiniz.
+
+✅ Bu süreç, önceki derslerde kullandığınız süreçlere benziyor mu?
+
+## Egzersiz - duygusal cümleler
+
+Duygu, -1 ile 1 arasında bir *kutuplaşma* ile ölçülür, bu da -1'in en olumsuz duygu olduğunu ve 1'in en olumlu olduğunu gösterir. Duygu ayrıca nesnellik (0) ve öznellik (1) için 0 - 1 arası bir puanla ölçülür.
+
+Jane Austen'ın *Pride and Prejudice* eserine tekrar bir göz atın. Metin [Project Gutenberg](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) sitesinde mevcuttur. Aşağıdaki örnek, kitabın ilk ve son cümlelerinin duygusunu analiz eden ve duygusal kutuplaşma ve nesnellik/öznellik puanını gösteren kısa bir programı göstermektedir.
+
+Bu görevde `sentiment` belirlemek için (kendi duygu hesaplayıcınızı yazmanız gerekmez) `TextBlob` kütüphanesini (yukarıda açıklanmıştır) kullanmalısınız.
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+Aşağıdaki çıktıyı görüyorsunuz:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## Zorluk - duygu kutuplaşmasını kontrol etme
+
+Göreviniz, *Pride and Prejudice*'in kesinlikle olumlu cümlelerinin kesinlikle olumsuz cümlelerinden daha fazla olup olmadığını duygu kutuplaşmasını kullanarak belirlemektir. Bu görev için, 1 veya -1'lik bir kutuplaşma puanının kesinlikle olumlu veya olumsuz olduğunu varsayabilirsiniz.
+
+**Adımlar:**
+
+1. [Pride and Prejudice](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm) kopyasını Project Gutenberg'den .txt dosyası olarak indirin. Dosyanın başındaki ve sonundaki meta verileri kaldırın, yalnızca orijinal metni bırakın.
+2. Dosyayı Python'da açın ve içeriğini bir dize olarak çıkarın.
+3. Kitabın tamamını içeren dizeyi kullanarak bir TextBlob oluşturun.
+4. Kitaptaki her cümleyi bir döngüde analiz edin.
+ 1. Kutuplaşma 1 veya -1 ise cümleyi olumlu veya olumsuz mesajlar listesine kaydedin.
+5. Sonunda, tüm olumlu ve olumsuz cümleleri (ayrı ayrı) ve her birinin sayısını yazdırın.
+
+İşte bir örnek [çözüm](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb).
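+
+Bu adımların nasıl bir araya gelebileceğine dair küçük bir taslak (meta verilerden temizlenmiş metni `pride.txt` adıyla kaydettiğiniz varsayılmıştır; dosya adı bir varsayımdır):
+
+```python
+from textblob import TextBlob
+
+# Read the whole book into a single string
+with open("pride.txt", encoding="utf-8") as f:
+    book = f.read()
+
+blob = TextBlob(book)
+positives, negatives = [], []
+
+# Check the polarity of every sentence in the book
+for sentence in blob.sentences:
+    polarity = sentence.sentiment.polarity
+    if polarity == 1:
+        positives.append(str(sentence))
+    elif polarity == -1:
+        negatives.append(str(sentence))
+
+print("Kesinlikle olumlu cümle sayısı:", len(positives))
+print("Kesinlikle olumsuz cümle sayısı:", len(negatives))
+```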
+
+✅ Bilgi Kontrolü
+
+1. Duygu, cümlede kullanılan kelimelere dayanır, ancak kod kelimeleri *anlar* mı?
+2. Duygu kutuplaşmasının doğru olduğunu düşünüyor musunuz, başka bir deyişle, puanlarla *aynı fikirde misiniz*?
+ 1. Özellikle, aşağıdaki cümlelerin mutlak **olumlu** kutuplaşması ile aynı fikirde misiniz veya farklı mısınız?
+ * “Ne mükemmel bir babanız var, kızlar!” dedi kapı kapandığında.
+ * “Bay Darcy'nin incelemesi sona erdi sanırım,” dedi Bayan Bingley; “ve sonucu nedir?” “Bundan tamamen eminim ki Bay Darcy'nin hiçbir kusuru yok.
+ * Bu tür şeylerin nasıl harika bir şekilde gerçekleştiği!
+   * Dünyada bu tür şeylerden daha fazla hoşlanmadığım hiçbir şey yoktur.
+ * Charlotte mükemmel bir yöneticidir, sanırım.
+ * “Bu gerçekten harika!
+ * Çok mutluyum!
+ * Midilliler hakkındaki fikriniz harika.
+ 2. Aşağıdaki 3 cümle mutlak olumlu bir duygu ile puanlandı, ancak dikkatlice okunduğunda, olumlu cümleler değiller. Duygu analizi neden olumlu cümleler olduğunu düşündü?
+ * Netherfield'deki kalışı bittiğinde mutlu olacağım!” “Sizi rahatlatacak bir şey söyleyebilmek isterdim,” diye yanıtladı Elizabeth; “ama bu tamamen benim gücümün dışında.
+ * Sizi mutlu görebilseydim!
+ * Sevgili Lizzy, sıkıntımız çok büyük.
+ 3. Aşağıdaki cümlelerin mutlak **olumsuz** kutuplaşması ile aynı fikirde misiniz veya farklı mısınız?
+ - Herkes onun gururundan iğreniyor.
+ - “Yabancılar arasında nasıl davrandığını bilmek isterdim.” “O zaman duyacaksın ama kendini çok korkunç bir şeye hazırlamalısın.
+ - Duraklama Elizabeth'in hislerine korkunçtu.
+ - Bu korkunç olurdu!
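+
+Bu puanlardan herhangi birini kendiniz kontrol etmek isterseniz, küçük bir taslak (cümle, kitabın İngilizce orijinalinden alınmıştır):
+
+```python
+from textblob import TextBlob
+
+# Score one of the quiz sentences yourself and compare with your own judgement
+print(TextBlob("What an excellent father you have, girls!").sentiment)
+```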
+
+✅ Jane Austen'ın herhangi bir hayranı, onun kitaplarını İngiliz Regency toplumunun gülünç yönlerini eleştirmek için sıklıkla kullandığını bilir. *Pride and Prejudice*'in ana karakteri Elizabeth Bennett, (yazarı gibi) keskin bir sosyal gözlemcidir ve dili çoğu zaman yoğun biçimde nüanslıdır. Hikâyenin romantik kahramanı Bay Darcy bile Elizabeth'in dili alaycı ve şakacı biçimde kullandığını fark eder: "Sizinle tanışma zevkine, ara sıra aslında size ait olmayan görüşleri dile getirmekten büyük keyif aldığınızı bilecek kadar uzun süredir sahibim."
+
+---
+
+## 🚀Zorluk
+
+Marvin'i kullanıcı girdisinden diğer özellikleri çıkararak daha da geliştirebilir misiniz?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Metinden duygu çıkarmanın birçok yolu vardır. Bu tekniği kullanabilecek iş uygulamalarını düşünün. Yanlış gidebileceği yolları düşünün. Duyguyu analiz eden sofistike kurumsal sistemler hakkında daha fazla bilgi edinin, örneğin [Azure Text Analysis](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott). Yukarıdaki Pride and Prejudice cümlelerinin bazılarını test edin ve nüansı algılayıp algılayamayacağını görün.
+
+## Ödev
+
+[Şiirsel lisans](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlama durumunda sorumluluk kabul etmiyoruz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/3-Translation-Sentiment/assignment.md b/translations/tr/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..55c0162c8
--- /dev/null
+++ b/translations/tr/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,13 @@
+# Şairane Lisans
+
+## Talimatlar
+
+[Bu not defterinde](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency), daha önce Azure metin analitiği kullanılarak duygu analizi yapılmış 500'den fazla Emily Dickinson şiiri bulabilirsiniz. Bu veri setini derste anlatılan teknikleri kullanarak analiz edin. Bir şiirin önerilen duygu durumu, daha gelişmiş Azure hizmetinin kararıyla eşleşiyor mu? Sizce neden ya da neden değil? Sizi şaşırtan bir şey var mı?
+
+## Değerlendirme Ölçütü
+
+| Kriterler | Örnek | Yeterli | Geliştirilmesi Gereken |
+| --------- | ---------------------------------------------------------------------- | ------------------------------------------------------ | ----------------------------- |
+| | Bir yazarın örnek çıktısının sağlam bir analiziyle sunulan bir not defteri | Not defteri eksik veya analiz yapmıyor | Not defteri sunulmamış |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlaşılma veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/tr/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..f55354840
--- /dev/null
+++ b/translations/tr/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/tr/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..84d87c59b
--- /dev/null
+++ b/translations/tr/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/4-Hotel-Reviews-1/README.md b/translations/tr/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..f97b67918
--- /dev/null
+++ b/translations/tr/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,273 @@
+# Otel Yorumlarıyla Duygu Analizi - Verilerin İşlenmesi
+
+Bu bölümde, önceki derslerde öğrendiğiniz teknikleri kullanarak büyük bir veri seti üzerinde keşifsel veri analizi yapacaksınız. Çeşitli sütunların faydasını iyi anladığınızda, şunları öğreneceksiniz:
+
+- Gereksiz sütunları nasıl kaldıracağınızı
+- Mevcut sütunlara dayanarak bazı yeni verileri nasıl hesaplayacağınızı
+- Sonuçta oluşan veri setini nihai zorlukta kullanmak için nasıl kaydedeceğinizi
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### Giriş
+
+Şu ana kadar metin verilerinin sayısal veri türlerinden oldukça farklı olduğunu öğrendiniz. Bir metin bir insan tarafından yazılmış veya söylenmişse, desenleri ve frekansları, duyguyu ve anlamı bulmak için analiz edilebilir. Bu ders sizi gerçek bir veri seti ve gerçek bir zorlukla tanıştıracak: **[515K Avrupa'daki Otel Yorumları Verisi](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**. Veri seti [CC0: Kamu Malı lisansına](https://creativecommons.org/publicdomain/zero/1.0/) sahiptir ve Booking.com'dan kamuya açık kaynaklardan kazınmıştır. Veri setinin yaratıcısı Jiashen Liu'dur.
+
+### Hazırlık
+
+İhtiyacınız olacaklar:
+
+* Python 3 kullanarak .ipynb not defterlerini çalıştırma yeteneği
+* pandas
+* [Yerel olarak yüklemeniz gereken](https://www.nltk.org/install.html) NLTK
+* Kaggle'daki [515K Avrupa'daki Otel Yorumları Verisi](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe) veri seti. Açılmış haliyle yaklaşık 230 MB'dir. Veri setini bu NLP dersleriyle ilişkili kök `/data` klasörüne indirin.
+
+## Keşifsel veri analizi
+
+Bu zorluk, duygu analizi ve misafir yorum puanlarını kullanarak bir otel öneri botu oluşturduğunuzu varsayar. Kullanacağınız veri seti, 6 şehirdeki 1493 farklı otelin yorumlarını içerir.
+
+Python, otel yorumları veri seti ve NLTK'nın duygu analizini kullanarak şunları bulabilirsiniz:
+
+* Yorumlarda en sık kullanılan kelimeler ve ifadeler nelerdir?
+* Bir oteli tanımlayan resmi *etiketler* yorum puanlarıyla örtüşüyor mu? Örneğin, belirli bir otelde *Küçük çocuklu aile* etiketli yorumlar *Yalnız gezgin* etiketli yorumlardan daha mı olumsuz; bu belki de otelin *yalnız gezginler* için daha iyi olduğuna işaret ediyor olabilir mi?
+* NLTK duygu puanları, otel yorumcusunun sayısal puanıyla 'uyuşuyor' mu?
+
+#### Veri Seti
+
+İndirdiğiniz ve yerel olarak kaydettiğiniz veri setini keşfedelim. Dosyayı VS Code veya hatta Excel gibi bir editörde açın.
+
+Veri setindeki başlıklar şu şekildedir:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+İncelemesi daha kolay olacak şekilde gruplandırılmışlardır:
+##### Otel Sütunları
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (enlem), `lng` (boylam)
+ * *lat* ve *lng* kullanarak otel konumlarını gösteren bir harita çizebilirsiniz (belki olumsuz ve olumlu yorumlar için renk kodlu olarak)
+ * Hotel_Address bize açıkça yararlı değil ve muhtemelen daha kolay sıralama ve arama için bir ülkeyle değiştireceğiz
+
+**Otel Meta-yorum Sütunları**
+
+* `Average_Score`
+  * Veri seti yaratıcısına göre bu sütun, *son bir yıldaki en güncel yoruma göre hesaplanan otelin ortalama puanı* anlamına gelir. Bu, puanı hesaplamak için alışılmadık bir yol gibi görünüyor, ancak şimdilik bu veriyi olduğu gibi kabul edebiliriz.
+
+ ✅ Bu verideki diğer sütunlara dayanarak, ortalama puanı hesaplamak için başka bir yol düşünebilir misiniz?
+
+* `Total_Number_of_Reviews`
+  * Bu otelin aldığı toplam yorum sayısı - bunun veri setindeki yorumları mı kastettiği (biraz kod yazmadan) net değil.
+* `Additional_Number_of_Scoring`
+ * Bu, bir yorum puanı verildiği, ancak yorumcunun olumlu veya olumsuz bir yorum yazmadığı anlamına gelir.
+
+**Yorum Sütunları**
+
+- `Reviewer_Score`
+ - Bu, min ve max değerleri 2.5 ile 10 arasında olan en fazla 1 ondalık basamağa sahip bir sayısal değerdir
+ - Neden 2.5'in en düşük puan olduğu açıklanmamış
+- `Negative_Review`
+ - Bir yorumcu hiçbir şey yazmazsa, bu alan "**No Negative**" olacaktır
+ - Bir yorumcu olumsuz yorum sütununda olumlu bir yorum yazabilir (örneğin, "bu otelde kötü bir şey yok")
+- `Review_Total_Negative_Word_Counts`
+  - Yüksek olumsuz kelime sayıları daha düşük bir puana işaret eder (duyguyu kontrol etmeden; bu listeden sonraki küçük taslağa bakın)
+- `Positive_Review`
+ - Bir yorumcu hiçbir şey yazmazsa, bu alan "**No Positive**" olacaktır
+ - Bir yorumcu olumlu yorum sütununda olumsuz bir yorum yazabilir (örneğin, "bu otelde hiç iyi bir şey yok")
+- `Review_Total_Positive_Word_Counts`
+  - Yüksek olumlu kelime sayıları daha yüksek bir puana işaret eder (duyguyu kontrol etmeden)
+- `Review_Date` ve `days_since_review`
+ - Bir yorumun tazeliği veya bayatlığı ölçüsü uygulanabilir (daha eski yorumlar, otel yönetimi değiştiği veya yenilemeler yapıldığı için, veya bir havuz eklendiği için, daha yeni yorumlar kadar doğru olmayabilir)
+- `Tags`
+  - Bunlar, bir yorumcunun misafir türünü (örn. yalnız veya aile), oda türünü, kalış süresini ve yorumun nasıl gönderildiğini tanımlamak için seçebileceği kısa tanımlayıcılardır.
+ - Ne yazık ki, bu etiketleri kullanmak sorunludur, kullanışlılıklarını tartışan aşağıdaki bölüme bakın
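+
+Kelime sayısı ile puan arasındaki bu ilişkiyi kabaca kontrol etmek için küçük bir taslak (veri setinin, aşağıdaki alıştırmada olduğu gibi `df` adlı bir veri çerçevesine yüklendiği varsayılmıştır):
+
+```python
+# Rough sanity check of the claims above: negative word counts should correlate
+# negatively with Reviewer_Score, positive word counts positively
+print(df.Review_Total_Negative_Word_Counts.corr(df.Reviewer_Score))
+print(df.Review_Total_Positive_Word_Counts.corr(df.Reviewer_Score))
+```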
+
+**Yorumcu Sütunları**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - Bu, bir öneri modelinde bir faktör olabilir, örneğin, yüzlerce yorumu olan daha üretken yorumcuların daha olumsuz olmaktan ziyade daha olumlu olma olasılığını belirleyebilseydiniz. Ancak, belirli bir yorumun yorumcusu benzersiz bir kodla tanımlanmadığından, bir yorum setiyle ilişkilendirilemez. 100 veya daha fazla yorumu olan 30 yorumcu var, ancak bunun öneri modeline nasıl yardımcı olabileceğini görmek zor.
+- `Reviewer_Nationality`
+ - Bazı insanlar, belirli milliyetlerin ulusal bir eğilim nedeniyle olumlu veya olumsuz bir yorum yapma olasılığının daha yüksek olduğunu düşünebilir. Bu tür anekdot görüşleri modellerinize dahil ederken dikkatli olun. Bunlar ulusal (ve bazen ırksal) klişelerdir ve her yorumcu, deneyimlerine dayalı olarak bir yorum yazan bireydi. Önceki otel konaklamaları, seyahat edilen mesafe ve kişisel mizacı gibi birçok mercekten filtrelenmiş olabilir. Bir yorum puanının nedeni olarak milliyetlerini düşünmek zor.
+
+##### Örnekler
+
+| Ortalama Puan | Toplam Yorum Sayısı | Yorumcu Puanı | Olumsuz Yorum | Olumlu Yorum | Etiketler |
+| -------------- | ---------------------- | ---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | Bu şu anda bir otel değil, bir inşaat alanı Uzun bir yolculuktan sonra dinlenirken ve odada çalışırken sabah erken saatlerde ve tüm gün boyunca kabul edilemez inşaat gürültüsü ile terörize edildim İnsanlar tüm gün boyunca bitişik odalarda matkaplarla çalışıyordu Oda değişikliği talep ettim ama sessiz bir oda mevcut değildi Durumu daha da kötüleştirmek için fazla ücret alındım Akşam erken bir uçuşa çıkmak zorunda olduğum için akşam çıkış yaptım ve uygun bir fatura aldım Bir gün sonra otel benim rızam olmadan rezervasyon fiyatının üzerinde başka bir ücret aldı Berbat bir yer Kendinize burada rezervasyon yaparak ceza vermeyin | Hiçbir şey Berbat bir yer Uzak durun | İş gezisi Çift Standart Çift Kişilik Oda 2 gece kaldı |
+
+Gördüğünüz gibi bu misafir otelde mutlu bir konaklama geçirmemiş. Otelin 7.8 gibi iyi bir ortalama puanı ve 1945 yorumu var, ancak bu yorumcu 2.5 puan vermiş ve konaklamasının ne kadar olumsuz olduğuna dair 115 kelime yazmış. `Positive_Review` sütununa hiçbir şey yazmasalardı olumlu hiçbir şey olmadığını varsayabilirdiniz; ama ne yazık ki oraya da 7 kelimelik bir uyarı yazmışlar. Kelimelerin anlamı veya yorumcunun duygusu yerine yalnızca kelimeleri sayarsak, çarpık bir görüşe sahip olabiliriz. Garip bir şekilde 2.5 puan kafa karıştırıcı, çünkü otel konaklaması bu kadar kötüyse neden herhangi bir puan verilsin ki? Veri setini yakından incelediğinizde, en düşük olası puanın 0 değil 2.5 olduğunu göreceksiniz. En yüksek olası puan ise 10'dur.
+
+##### Etiketler
+
+Yukarıda belirtildiği gibi, ilk bakışta `Tags` kullanarak verileri kategorize etme fikri mantıklı geliyor. Ne yazık ki bu etiketler standartlaştırılmamış, bu da belirli bir otelde seçeneklerin *Tek kişilik oda*, *İkiz oda* ve *Çift kişilik oda* olabileceği, ancak bir sonraki otelde *Deluxe Tek Kişilik Oda*, *Klasik Kraliçe Odası* ve *Executive King Odası* olabileceği anlamına gelir. Bunlar aynı şeyler olabilir, ancak o kadar çok varyasyon var ki seçim şu şekilde olur:
+
+1. Tüm terimleri tek bir standarda dönüştürmeye çalışmak, bu çok zor çünkü her durumda dönüşüm yolunun ne olacağı net değil (örneğin, *Klasik tek kişilik oda* *Tek kişilik oda*ya eşlenebilir, ancak *Avlu Bahçesi veya Şehir Manzaralı Superior Kraliçe Odası* çok daha zor eşlenir)
+
+2. Bir NLP yaklaşımı benimseyebiliriz ve her otelde *Yalnız*, *İş Seyahatçisi* veya *Küçük çocuklu aile* gibi belirli terimlerin sıklığını ölçebiliriz ve bunu öneriye dahil edebiliriz
+
+Etiketler genellikle (ama her zaman değil) *Gezi türü*, *Misafir türü*, *Oda türü*, *Gece sayısı* ve *Yorumun gönderildiği cihaz türü* ile uyumlu 5 ila 6 virgülle ayrılmış değer içeren tek bir alan içerir. Ancak, bazı yorumcular her alanı doldurmazsa (birini boş bırakabilirler), değerler her zaman aynı sırada olmaz.
+
+Bir örnek olarak, *Grup türü* alın. `Tags` sütununda bu alanda 1025 benzersiz olasılık vardır ve ne yazık ki bunların yalnızca bazıları bir grubu ifade eder (bazıları oda türüdür vb.). Yalnızca aileden bahsedenleri filtrelerseniz, sonuçlar birçok *Aile odası* türü sonuç içerir. *ile* terimini dahil ederseniz, yani *Küçük çocuklu aile* veya *Büyük çocuklu aile* ifadelerini sayarsanız, sonuçlar daha iyi olur ve 515.000 sonucun 80.000'inden fazlası "Küçük çocuklu aile" veya "Büyük çocuklu aile" ifadesini içerir.
+
+Bu, etiketler sütununun tamamen işe yaramaz olmadığı anlamına gelir, ancak işe yarar hale getirmek için biraz çalışma gerekecektir.
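+
+Örneğin, aile ile ilgili ifadelerin kaç satırda geçtiğini saymaya yönelik küçük bir taslak (veri setinin `df` adlı bir veri çerçevesine yüklendiği varsayılmıştır):
+
+```python
+# Count rows whose Tags field mentions one of the family phrases
+family_young = df.Tags.str.contains("Family with young children").sum()
+family_older = df.Tags.str.contains("Family with older children").sum()
+print(family_young + family_older)  # more than 80,000 of the 515,000 rows, according to the text
+```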
+
+##### Ortalama otel puanı
+
+Veri setiyle ilgili anlayamadığım bazı tuhaflıklar veya tutarsızlıklar var, ancak modellerinizi oluştururken bunların farkında olmanız için burada gösterilmiştir. Çözerseniz, lütfen tartışma bölümünde bize bildirin!
+
+Veri seti, ortalama puan ve yorum sayısıyla ilgili aşağıdaki sütunlara sahiptir:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+Bu veri setinde en fazla yoruma sahip otel, 515.000 yorum içinden 4789 yorumla *Britannia International Hotel Canary Wharf*'tır. Ancak bu otelin `Total_Number_of_Reviews` değerine bakarsak 9086 olduğunu görürüz. Geriye kalanların yorum içermeyen puanlar olduğunu varsayabilirsiniz; o halde `Additional_Number_of_Scoring` sütun değerini eklemeliyiz. Bu değer 2682'dir ve 4789'a eklediğimizde 7471 elde ederiz; bu, `Total_Number_of_Reviews` değerinden hâlâ 1615 eksiktir.
+
+`Average_Score` sütununu ele alırsanız, bunun veri setindeki yorumların ortalaması olduğunu varsayabilirsiniz; ancak Kaggle açıklaması "*son bir yıldaki en güncel yoruma göre hesaplanan otelin ortalama puanı*" şeklindedir. Bu pek yararlı görünmüyor, ancak veri setindeki yorum puanlarına dayanarak kendi ortalamamızı hesaplayabiliriz. Aynı oteli örnek alırsak, otelin ortalama puanı 7.1 olarak verilmiştir, ancak hesaplanan puan (veri setindeki yorumcu puanlarının ortalaması) 6.8'dir. Bu yakın ama aynı değer değil; `Additional_Number_of_Scoring` yorumlarında verilen puanların ortalamayı 7.1'e yükselttiğini ancak tahmin edebiliriz. Ne yazık ki bu iddiayı test etmenin veya kanıtlamanın bir yolu olmadığından, `Average_Score`, `Additional_Number_of_Scoring` ve `Total_Number_of_Reviews` değerlerine dayanan veya bunlara atıfta bulunan verileri kullanmak ya da bunlara güvenmek zordur.
+
+İşleri daha da karmaşıklaştıran bir nokta: en fazla yoruma sahip ikinci otelin hesaplanan ortalama puanı 8.12, veri setindeki `Average_Score` değeri ise 8.1'dir. Bu doğru puan bir tesadüf mü, yoksa ilk oteldeki durum bir tutarsızlık mı?
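+
+Yukarıdaki aritmetiği kendiniz doğrulamak isterseniz, küçük bir taslak (CSV yolunun bir varsayım olduğunu unutmayın):
+
+```python
+import pandas as pd
+
+df = pd.read_csv("../../data/Hotel_Reviews.csv")
+
+britannia = df[df.Hotel_Name == "Britannia International Hotel Canary Wharf"]
+print("Reviews in dataset:", len(britannia))                                  # 4789
+print("Total_Number_of_Reviews:", britannia.Total_Number_of_Reviews.iloc[0])  # 9086
+print("Plus additional scoring:",
+      len(britannia) + britannia.Additional_Number_of_Scoring.iloc[0])        # 7471
+print("Calculated average:", round(britannia.Reviewer_Score.mean(), 1))       # 6.8 according to the text
+```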
+
+Bu otelin bir aykırı değer olabileceği ve çoğu değerin eşleştiği (ancak bazılarının bir nedenden ötürü eşleşmediği) varsayımıyla, veri setindeki değerleri keşfetmek ve değerlerin doğru kullanımını (veya kullanılmamasını) belirlemek için bir sonraki adımda kısa bir program yazacağız.
+
+> 🚨 Bir uyarı notu
+>
+> Bu veri setiyle çalışırken, metni kendiniz okumadan veya analiz etmeden bir şeyler hesaplayan kod yazacaksınız. Bu, NLP'nin özüdür: bir insanın yapmasına gerek kalmadan anlamı veya duyguyu yorumlamak. Yine de bazı olumsuz yorumları okumanız mümkündür. Okumanız gerekmediği için bunu yapmamanızı tavsiye ederim. Bazıları saçma veya alakasız olumsuz otel yorumlarıdır; örneğin "Hava iyi değildi" gibi otelin veya herhangi birinin kontrolü dışındaki şeyler. Ancak bazı yorumların karanlık bir tarafı da var. Bazen olumsuz yorumlar ırkçı, cinsiyetçi veya yaş ayrımcısı olabilir. Kamuya açık bir web sitesinden kazınan bir veri setinde bu beklenebilir. Bazı yorumcular, hoşlanmayacağınız, rahatsız edici veya üzücü bulacağınız yorumlar bırakır. Duyguyu kodun ölçmesine izin vermek, bunları kendiniz okuyup üzülmekten daha iyidir. Bununla birlikte, bu tür şeyleri yazanlar azınlıktadır, ancak yine de varlar.
+
+## Alıştırma - Veri keşfi
+### Veriyi yükleyin
+
+Veriyi görsel olarak yeterince incelediniz; şimdi biraz kod yazacak ve bazı cevaplar alacaksınız! Bu bölümde pandas kütüphanesi kullanılır. İlk göreviniz, CSV verilerini yükleyip okuyabildiğinizden emin olmaktır. pandas hızlı bir CSV yükleyiciye sahiptir ve sonuç, önceki derslerde olduğu gibi bir veri çerçevesine yerleştirilir. Yüklediğimiz CSV yarım milyondan fazla satır, ancak yalnızca 17 sütun içerir. pandas, bir veri çerçevesiyle etkileşim kurmak için her satır üzerinde işlem yapma yeteneği de dahil olmak üzere birçok güçlü yol sunar.
+
+Bu dersten itibaren, kod parçacıkları ve kodun bazı açıklamaları ve sonuçların ne anlama geldiği hakkında bazı tartışmalar olacaktır. Kodunuz için dahil edilen _notebook.ipynb_ dosyasını kullanın.
+
+Kullanacağınız veri dosyasıyla ilgili cevaplamanız gereken sorular şunlardır:
+
+1. Az önce yüklediğiniz veri çerçevesinin *şeklini* yazdırın (şekil, satır ve sütun sayısıdır)
+2. Yorumcu milliyetleri için frekans sayısını hesaplayın:
+   1. `Reviewer_Nationality` sütunu için kaç farklı değer var ve bunlar nelerdir?
+   2. Veri setinde en yaygın olan yorumcu milliyeti nedir (ülke ve yorum sayısını yazdırın)?
+   3. En sık bulunan bir sonraki 10 milliyet ve frekans sayıları nelerdir?
+3. İlk 10 yorumcu milliyetinin her biri için en sık yorumlanan otel hangisiydi?
+4. Veri setinde otel başına kaç yorum var (otel frekans sayısı)?
+5. Veri setindeki her otel için bir `Average_Score` sütunu olsa da, her otel için tüm yorumcu puanlarının ortalamasını alarak kendi ortalama puanınızı da hesaplayabilirsiniz. Veri çerçevenize, bu hesaplanmış ortalamayı içeren `Calc_Average_Score` başlıklı yeni bir sütun ekleyin
+6. Herhangi bir otelin (1 ondalık basamağa yuvarlanmış) `Average_Score` ve `Calc_Average_Score` değerleri aynı mı?
+7. `Negative_Review` sütunu "No Negative" değerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+8. `Positive_Review` sütunu "No Positive" değerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+9. `Positive_Review` sütunu "No Positive" **ve** `Negative_Review` sütunu "No Negative" değerlerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+
+### Kod cevapları
+
+1. Az önce yüklediğiniz veri çerçevesinin *şeklini* yazdırın (şekil, satır ve sütun sayısıdır)
+
+   ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+   ```
+
+2. Yorumcu milliyetlerinin frekans sayısını hesaplayın:
+
+   1. `Reviewer_Nationality` sütunu için kaç farklı değer var ve bunlar nelerdir?
+   2. Veri setinde en yaygın olan yorumcu milliyeti nedir (ülke ve yorum sayısını yazdırın)?
+
+   ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+   ```
+
+   3. En sık bulunan bir sonraki 10 milliyet ve frekans sayıları nelerdir?
+
+   ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+   ```
+
+3. İlk 10 yorumcu milliyetinin her biri için en sık yorumlanan otel hangisiydi?
+
+   ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+   ```
+
+4. Veri setinde otel başına kaç yorum var (otel frekans sayısı)?
+
+   ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+   ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   Veri setinde *sayılan* sonuçların `Total_Number_of_Reviews` değerleriyle eşleşmediğini fark edebilirsiniz. Bu değerin otelin sahip olduğu toplam yorum sayısını mı temsil ettiği, hepsinin kazınmadığı mı, yoksa başka bir hesaplama mı olduğu belirsizdir. Bu belirsizlik nedeniyle `Total_Number_of_Reviews` modelde kullanılmamaktadır.
+
+5. Veri setindeki her otel için bir `Average_Score` sütunu olmasına rağmen, her otel için tüm yorumcu puanlarının ortalamasını alarak bir ortalama puan da hesaplayabilirsiniz. Veri çerçevenize, bu hesaplanmış ortalamayı içeren `Calc_Average_Score` başlıklı yeni bir sütun ekleyin. `Hotel_Name`, `Average_Score` ve `Calc_Average_Score` sütunlarını yazdırın.
+
+   ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+   ```
+
+   `Average_Score` değerinin ve hesaplanan ortalama puanın neden bazen farklı olduğunu merak edebilirsiniz. Bazı değerlerin neden eşleştiğini ancak diğerlerinde neden fark olduğunu bilemediğimiz için, bu durumda en güvenli yol ortalamayı kendimiz hesaplamaktır. Bununla birlikte farklar genellikle çok küçüktür; işte veri seti ortalaması ile hesaplanan ortalama arasındaki sapmanın en büyük olduğu oteller:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   Puan farkı 1'den büyük olan yalnızca 1 otel olduğundan, farkı görmezden gelip hesaplanan ortalama puanı kullanabiliriz.
+
+6. `Negative_Review` sütunu "No Negative" değerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+
+7. `Positive_Review` sütunu "No Positive" değerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+
+8. `Positive_Review` sütunu "No Positive" **ve** `Negative_Review` sütunu "No Negative" değerlerine sahip kaç satır olduğunu hesaplayın ve yazdırın
+
+   ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+   ```
+
+## Başka bir yol
+
+Öğeleri lambda kullanmadan saymanın bir başka yolu, satırları saymak için sum kullanmaktır:
+
+   ```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+   ```
+
+   `Negative_Review` ve `Positive_Review` sütunları için sırasıyla "No Negative" ve "No Positive" değerlerine sahip 127 satır olduğunu fark etmiş olabilirsiniz. Bu, yorumcunun otele sayısal bir puan verdiği, ancak olumlu veya olumsuz bir yorum yazmaktan kaçındığı anlamına gelir. Neyse ki bu küçük bir satır miktarıdır (515738 satırdan 127'si, yani %0.02), bu yüzden modelimizi veya sonuçlarımızı belirli bir yöne çekmeyecektir; ancak bir yorum veri setinin yorum içermeyen satırlara sahip olmasını beklemeyebilirsiniz, bu nedenle bu tür satırları bulmak için verileri keşfetmeye değer.
+
+Veri setini keşfettiğinize göre, bir sonraki derste verileri filtreleyecek ve bazı duygu analizleri ekleyeceksiniz.
+
+---
+
+## 🚀Meydan okuma
+
+Bu ders, önceki derslerde gördüğümüz gibi, veriler üzerinde işlem yapmadan önce verilerinizi ve tuhaflıklarını anlamanın ne kadar kritik derecede önemli olduğunu gösterir. Özellikle metin tabanlı veriler dikkatli bir inceleme gerektirir. Çeşitli metin ağırlıklı veri setlerini inceleyin ve bir modele önyargı veya çarpık duygu katabilecek alanları keşfedip edemeyeceğinizi görün.
+
+## [Ders sonrası sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## İnceleme ve Kendi Kendine Çalışma
+
+[Bu NLP Öğrenme Yolunu](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott) alın ve konuşma ve metin ağırlıklı modeller oluştururken deneyebileceğiniz araçları keşfedin.
+
+## Ödev
+
+[NLTK](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğa özen göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanılmasından kaynaklanabilecek herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/tr/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..2d4a2e8ff
--- /dev/null
+++ b/translations/tr/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## Talimatlar
+
+NLTK, hesaplamalı dilbilim ve NLP alanında kullanılan iyi bilinen bir kütüphanedir. Bu fırsatı değerlendirerek '[NLTK kitabını](https://www.nltk.org/book/)' okuyun ve alıştırmalarını deneyin. Bu notlandırılmayan ödevde kütüphaneyi daha derinlemesine tanıyacaksınız.
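+
+Başlamak için küçük bir taslak (NLTK'nın kurulu olduğu varsayılmıştır):
+
+```python
+import nltk
+
+# The tokenizer model needs to be downloaded once
+nltk.download("punkt")
+
+from nltk.tokenize import word_tokenize
+
+# A first taste of NLTK: split a sentence into word tokens
+print(word_tokenize("NLTK is a leading platform for building Python programs to work with human language data."))
+```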
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan yanlış anlaşılmalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..29884c1e0
--- /dev/null
+++ b/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..3c8d8fbcd
--- /dev/null
+++ b/translations/tr/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından doğacak herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/5-Hotel-Reviews-2/README.md b/translations/tr/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..86876af77
--- /dev/null
+++ b/translations/tr/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# Otel yorumları ile duygu analizi
+
+Artık veri setini ayrıntılı bir şekilde incelediğinize göre, sütunları filtreleyip veri seti üzerinde NLP tekniklerini kullanarak oteller hakkında yeni bilgiler edinmenin zamanı geldi.
+
+## [Ders öncesi sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### Filtreleme ve Duygu Analizi İşlemleri
+
+Muhtemelen fark etmişsinizdir, veri setinde bazı sorunlar var. Bazı sütunlar gereksiz bilgilerle dolu, diğerleri ise yanlış görünüyor. Doğru olsalar bile, nasıl hesaplandıkları belirsiz ve cevaplar kendi hesaplamalarınızla bağımsız olarak doğrulanamıyor.
+
+## Egzersiz: biraz daha veri işleme
+
+Verileri biraz daha temizleyin. Daha sonra kullanışlı olacak sütunlar ekleyin, diğer sütunlardaki değerleri değiştirin ve bazı sütunları tamamen kaldırın.
+
+1. İlk sütun işlemleri
+
+ 1. `lat` ve `lng`'i kaldırın
+
+ 2. `Hotel_Address` değerlerini aşağıdaki değerlerle değiştirin (adres şehir ve ülke ismini içeriyorsa, sadece şehir ve ülke olarak değiştirin).
+
+ Veri setinde sadece bu şehirler ve ülkeler var:
+
+ Amsterdam, Netherlands
+
+ Barcelona, Spain
+
+ London, United Kingdom
+
+ Milan, Italy
+
+ Paris, France
+
+ Vienna, Austria
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+ Artık ülke düzeyinde veri sorgulayabilirsiniz:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+   | Hotel_Address          | Hotel_Name |
+ | :--------------------- | :--------: |
+ | Amsterdam, Netherlands | 105 |
+ | Barcelona, Spain | 211 |
+ | London, United Kingdom | 400 |
+ | Milan, Italy | 162 |
+ | Paris, France | 458 |
+ | Vienna, Austria | 158 |
+
+2. Otel Meta-inceleme sütunlarını işleyin
+
+   1. `Additional_Number_of_Scoring` sütununu kaldırın
+
+   2. `Total_Number_of_Reviews` değerini, o otelin veri setinde gerçekten bulunan toplam yorum sayısıyla değiştirin
+
+   3. `Average_Score` değerini kendi hesapladığımız puanla değiştirin
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+    # Count the reviews per hotel via a single column (transform('count') on the whole frame would return a DataFrame)
+    df.Total_Number_of_Reviews = df.groupby('Hotel_Name')['Reviewer_Score'].transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. İnceleme sütunlarını işleyin
+
+   1. `Review_Total_Negative_Word_Counts`, `Review_Total_Positive_Word_Counts`, `Review_Date` ve `days_since_review` sütunlarını kaldırın
+
+   2. `Reviewer_Score`, `Negative_Review` ve `Positive_Review` sütunlarını olduğu gibi tutun
+
+   3. `Tags` sütununu şimdilik tutun
+
+      - Bir sonraki bölümde etiketler üzerinde bazı ek filtreleme işlemleri yapacağız ve ardından etiketler kaldırılacak
+
+4. Yorumcu sütunlarını işleyin
+
+   1. `Total_Number_of_Reviews_Reviewer_Has_Given` sütununu kaldırın
+
+   2. `Reviewer_Nationality` sütununu tutun
+
+### Etiket sütunları
+
+`Tag` sütunu, sütunda (metin biçiminde) saklanan bir liste olduğu için sorunludur. Ne yazık ki bu sütundaki alt bölümlerin sırası ve sayısı her zaman aynı değildir. 515.000 satır ve 1427 otel olduğundan ve her birinde bir yorumcunun seçebileceği biraz farklı seçenekler bulunduğundan, bir insanın ilgilenilecek doğru ifadeleri belirlemesi zordur. İşte NLP burada parlar. Metni tarayıp en yaygın ifadeleri bulabilir ve sayabilirsiniz.
+
+Ne yazık ki tek kelimelerle değil, çok kelimeli ifadelerle (örn. *İş seyahati*) ilgileniyoruz. Bu kadar çok veri (6762646 kelime) üzerinde çok kelimeli bir frekans dağılımı algoritması çalıştırmak olağanüstü uzun sürebilir; verilere bakmadan bunun gerekli bir maliyet olduğu düşünülebilir. Keşifsel veri analizi burada işe yarar: `[' İş seyahati ', ' Yalnız gezgin ', ' Tek Kişilik Oda ', ' 5 gece kaldı ', ' Mobil cihazdan gönderildi ']` gibi bir etiket örneği gördüğünüz için, yapmanız gereken işlemi büyük ölçüde azaltmanın mümkün olup olmadığını sormaya başlayabilirsiniz. Neyse ki mümkün - ancak önce ilgi çekici etiketleri belirlemek için birkaç adımı izlemeniz gerekiyor.
+
+### Etiketleri filtreleme
+
+Veri setinin amacının, en iyi oteli seçmenize yardımcı olacak duygu ve sütunlar eklemek olduğunu unutmayın (kendiniz için veya belki size bir otel öneri botu yapma görevi veren bir müşteri için). Etiketlerin nihai veri setinde yararlı olup olmadığını kendinize sormanız gerekiyor. İşte bir yorum (eğer veri setine başka nedenlerle ihtiyacınız varsa, farklı etiketler seçime dahil olabilir veya dışarıda kalabilir):
+
+1. Seyahat türü önemlidir ve kalmalıdır
+2. Misafir grubu türü önemlidir ve kalmalıdır
+3. Misafirin kaldığı oda, süit veya stüdyo türü önemsizdir (tüm otellerde temelde aynı odalar vardır)
+4. İncelemenin gönderildiği cihaz önemsizdir
+5. İncelemecinin kaldığı gece sayısı, eğer daha uzun kalmalarını oteli daha çok sevmeleriyle ilişkilendirirseniz, önemli olabilir, ancak bu biraz zorlayıcıdır ve muhtemelen önemsizdir
+
+Özetle, **2 tür etiketi tutun ve diğerlerini kaldırın**.
+
+İlk olarak, etiketleri daha iyi bir formata getirmeden saymak istemezsiniz, bu da köşeli parantezleri ve tırnak işaretlerini kaldırmak anlamına gelir. Bunu birkaç şekilde yapabilirsiniz, ancak en hızlı yolu istersiniz çünkü çok fazla veriyi işlemek uzun sürebilir. Neyse ki, pandas bu adımların her birini kolayca yapmanın bir yolunu sunar.
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+Her etiket şu şekilde olur: `İş seyahati, Yalnız gezgin, Tek Kişilik Oda, 5 gece kaldı, Mobil cihazdan gönderildi`.
+
+Sonra bir sorunla karşılaşırız. Bazı yorumlar veya satırlar 5 sütuna, bazıları 3'e, bazıları 6'ya sahiptir. Bu, veri setinin nasıl oluşturulduğunun bir sonucudur ve düzeltilmesi zordur. Her ifadenin frekans sayısını elde etmek istiyorsunuz, ancak ifadeler her yorumda farklı sırada olduğundan sayım hatalı olabilir ve bir otel, hak ettiği bir etiketi alamayabilir.
+
+Bunun yerine bu farklı sıralamayı lehimize kullanacağız, çünkü her etiket çok kelimeli olsa da virgülle ayrılmıştır! Bunun en basit yolu, her etiketi etiketteki sırasına karşılık gelen sütuna ekleyerek 6 geçici sütun oluşturmaktır. Daha sonra 6 sütunu tek bir büyük sütunda birleştirebilir ve elde edilen sütun üzerinde `value_counts()` yöntemini çalıştırabilirsiniz (bu yaklaşımın küçük bir taslağı aşağıdaki tablodan sonra verilmiştir). Çıktıyı yazdırdığınızda 2428 benzersiz etiket olduğunu göreceksiniz. İşte küçük bir örnek:
+
+| Etiket                         | Sayı   |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
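+
+Yukarıda açıklanan "böl, birleştir ve say" yaklaşımının küçük bir taslağı (etiketlerin yukarıdaki gibi köşeli parantez ve tırnaklardan temizlendiği varsayılmıştır):
+
+```python
+import pandas as pd
+
+# Split the comma separated tags into up to 6 temporary columns,
+# stack them into one long Series and count the unique phrases
+tag_columns = df.Tags.str.split(",", n=5, expand=True)
+all_tags = pd.concat([tag_columns[c] for c in tag_columns.columns])
+print(all_tags.str.strip().dropna().value_counts())
+```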
+
+`Mobil cihazdan gönderildi` gibi yaygın etiketlerin bazıları bizim için hiçbir işe yaramaz; bu nedenle ifade sıklığını saymadan önce bunları kaldırmak akıllıca olabilir, ancak bu o kadar hızlı bir işlemdir ki onları bırakıp göz ardı edebilirsiniz.
+
+### Kalış süresi etiketlerini kaldırma
+
+Bu etiketleri kaldırmak 1. adımdır ve dikkate alınacak toplam etiket sayısını biraz azaltır. Bunları veri setinden kaldırmadığınızı, yalnızca yorum veri setinde sayılacak/tutulacak değerler olarak dikkate almaktan çıkardığınızı unutmayın.
+
+| Kalış süresi     | Sayı   |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+Çok çeşitli odalar, süitler, stüdyolar, apartman daireleri vb. vardır. Hepsi kabaca aynı anlama gelir ve sizin için önemli değildir, bu yüzden onları da dikkate almaktan çıkarın.
+
+| Oda türü                      | Sayı  |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+Son olarak, ve bu keyiflidir (çünkü neredeyse hiç işlem gerektirmedi), elinizde aşağıdaki *yararlı* etiketler kalacaktır:
+
+| Etiket                                        | Sayı   |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+`Travellers with friends` etiketinin aşağı yukarı `Group` ile aynı olduğu ileri sürülebilir ve yukarıdaki gibi ikisini birleştirmek adil olur. Doğru etiketleri belirlemeye yönelik kod [Etiketler not defterindedir](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb).
+
+Son adım, bu etiketlerin her biri için yeni sütunlar oluşturmaktır. Ardından, her yorum satırı için `Tags` sütunu yeni sütunlardan biriyle eşleşiyorsa 1, eşleşmiyorsa 0 ekleyin. Nihai sonuç, bu oteli (toplamda) örneğin iş veya eğlence için ya da evcil hayvanla kalmak üzere kaç yorumcunun seçtiğinin sayısı olacaktır ve bu, otel önerirken yararlı bir bilgidir.
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### Dosyanızı kaydedin
+
+Son olarak, veri setini şu anki haliyle yeni bir adla kaydedin.
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## Duygu Analizi İşlemleri
+
+Bu son bölümde, inceleme sütunlarına duygu analizi uygulayacak ve sonuçları bir veri setinde kaydedeceksiniz.
+
+## Egzersiz: filtrelenmiş verileri yükleyin ve kaydedin
+
+Filtrelenmiş veri setini kaydettiğinize göre, artık **orijinal** veri setini değil, filtrelenmiş veri setini yüklediğinizi unutmayın.
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### Stop kelimelerini kaldırma
+
+Negatif ve Pozitif inceleme sütunlarında Duygu Analizi çalıştırırsanız, uzun sürebilir. Hızlı bir CPU'ya sahip güçlü bir test dizüstü bilgisayarında test edildiğinde, kullanılan duygu kütüphanesine bağlı olarak 12 - 14 dakika sürdü. Bu (nispeten) uzun bir süre, bu nedenle hızlandırılıp hızlandırılamayacağını araştırmaya değer.
+
+Stop kelimeleri, yani bir cümlenin duygusunu değiştirmeyen yaygın İngilizce kelimeleri kaldırmak ilk adımdır. Onları kaldırarak, duygu analizi daha hızlı çalışmalı, ancak daha az doğru olmamalıdır (çünkü stop kelimeleri duyguyu etkilemez, ancak analizi yavaşlatır).
+
+En uzun negatif inceleme 395 kelimeydi, ancak stop kelimeleri kaldırıldıktan sonra 195 kelime oldu.
+
+Stop kelimeleri kaldırmak da hızlı bir işlemdir, 515.000 satırda 2 inceleme sütunundan stop kelimelerini kaldırmak test cihazında 3.3 saniye sürdü. Cihazınızın CPU hızına, RAM'e, SSD'ye sahip olup olmamanıza ve bazı diğer faktörlere bağlı olarak sizin için biraz daha uzun veya kısa sürebilir. İşlemin nispi kısalığı, duygu analizi süresini iyileştiriyorsa, yapmaya değer olduğu anlamına gelir.
+
+```python
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+```
+
+### Duygu analizi gerçekleştirme
+
+Şimdi negatif ve pozitif yorum sütunları için duygu analizini hesaplamalı ve sonucu 2 yeni sütunda saklamalısınız. Duygunun testi, onu aynı yorum için yorumcunun puanıyla karşılaştırmak olacaktır. Örneğin, duygu analizi negatif yorumun duygu puanını 1 (son derece pozitif duygu), pozitif yorumun duygu puanını da 1 olarak hesapladıysa ama yorumcu otele mümkün olan en düşük puanı verdiyse, ya yorum metni puanla örtüşmüyordur ya da duygu analizörü duyguyu doğru tanıyamamıştır. Bazı duygu puanlarının tamamen yanlış olmasını beklemelisiniz ve bu çoğu zaman açıklanabilir olacaktır; örneğin yorum son derece alaycı olabilir: "Tabii ki ısıtması olmayan bir odada uyumayı ÇOK SEVDİM" - duygu analizörü bunu olumlu bir duygu sanabilir, oysa bunu okuyan bir insan alaycılık olduğunu anlar.
+
+NLTK, öğrenmek için farklı duygu analizörleri sağlar ve bunları değiştirebilir ve duygu analizinin daha doğru olup olmadığını görebilirsiniz. Burada VADER duygu analizi kullanılmıştır.
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: Sosyal Medya Metni için Basit Kurallara Dayalı Bir Model. Sekizinci Uluslararası Webloglar ve Sosyal Medya Konferansı (ICWSM-14). Ann Arbor, MI, Haziran 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+Programınızın ilerleyen bölümlerinde duygu analizi yapmaya hazır olduğunuzda, her incelemeye aşağıdaki gibi uygulayabilirsiniz:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+Bu, bilgisayarımda yaklaşık 120 saniye sürüyor, ancak her bilgisayarda değişecektir. Sonuçları yazdırmak ve duygunun incelemeye uyup uymadığını görmek isterseniz:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
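+
+Duygu puanlarının yorumcu puanıyla ne ölçüde uyuştuğuna dair kaba bir kontrol için küçük bir taslak (kesin bir ölçüt değildir, yalnızca bir fikir verir):
+
+```python
+# Negative sentiment should fall as the reviewer score rises, positive sentiment should rise
+print(df.Negative_Sentiment.corr(df.Reviewer_Score))
+print(df.Positive_Sentiment.corr(df.Reviewer_Score))
+```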
+
+Dosyayı kullanmadan önce yapmanız gereken son şey, onu kaydetmektir! Ayrıca, yeni sütunlarınızı yeniden düzenlemeyi düşünmelisiniz, böylece çalışmak daha kolay olur (bir insan için, bu kozmetik bir değişikliktir).
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+Artık [analiz defterinin](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) tamamını çalıştırmalısınız (Hotel_Reviews_Filtered.csv dosyasını oluşturmak için [filtreleme defterinizi](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) çalıştırdıktan sonra).
+
+Gözden geçirmek için adımlar:
+
+1. Orijinal veri seti dosyası **Hotel_Reviews.csv** önceki derste [keşif defteri](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb) ile incelenmiştir
+2. Hotel_Reviews.csv [filtreleme defteri](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb) ile filtrelenir ve **Hotel_Reviews_Filtered.csv** elde edilir
+3. Hotel_Reviews_Filtered.csv [duygu analizi defteri](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb) ile işlenir ve **Hotel_Reviews_NLP.csv** elde edilir
+4. Aşağıdaki NLP Challenge'da Hotel_Reviews_NLP.csv dosyasını kullanın
+
+### Sonuç
+
+Başladığınızda, sütunlar ve veriler içeren bir veri setiniz vardı, ancak hepsi doğrulanabilir veya kullanılabilir değildi. Verileri incelediniz, ihtiyacınız olmayanları filtrelediniz, etiketleri faydalı bir şeye dönüştürdünüz, kendi ortalamalarınızı hesapladınız, bazı duygu sütunları eklediniz ve umarım doğal metni işlemede ilginç şeyler öğrendiniz.
+
+## [Ders sonrası sınav](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## Zorluk
+
+Artık veri setinizde duygu analizi yapılmış olduğuna göre, bu müfredatta öğrendiğiniz stratejileri (belki kümeleme?) kullanarak duygu etrafında kalıplar belirleyip belirleyemeyeceğinizi görün.
+
+## İnceleme ve Kendi Kendine Çalışma
+
+Bu [Learn modülünü](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott) alarak daha fazla bilgi edinin ve metinlerde duygu keşfetmek için farklı araçlar kullanın.
+
+## Ödev
+
+[Farklı bir veri seti deneyin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/tr/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..77f1556bf
--- /dev/null
+++ b/translations/tr/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# Farklı bir veri seti deneyin
+
+## Talimatlar
+
+Artık NLTK kullanarak metne duygu durumu atamayı öğrendiğinize göre, farklı bir veri seti deneyin. Muhtemelen veriler üzerinde bazı ön işleme adımları uygulamanız gerekecek; bu yüzden bir notebook oluşturun ve düşünce sürecinizi belgeleyin. Ne keşfettiniz?
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Mükemmel | Yeterli | Geliştirme Gerekli |
+| -------- | ----------------------------------------------------------------------------------------------------------------- | ----------------------------------------- | ---------------------- |
+| | Duygu durumunun nasıl atandığını açıklayan iyi belgelenmiş hücrelerle eksiksiz bir notebook ve veri seti sunulur | Notebook iyi açıklamalardan yoksundur | Notebook hatalıdır |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..267f0b649
--- /dev/null
+++ b/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından doğabilecek yanlış anlamalar veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..976ba076b
--- /dev/null
+++ b/translations/tr/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluğu sağlamak için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan yanlış anlamalar veya yanlış yorumlamalar için sorumluluk kabul etmiyoruz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/README.md b/translations/tr/6-NLP/README.md
new file mode 100644
index 000000000..7ed7abf65
--- /dev/null
+++ b/translations/tr/6-NLP/README.md
@@ -0,0 +1,27 @@
+# Doğal Dil İşleme ile Başlarken
+
+Doğal dil işleme (NLP), bir bilgisayar programının insan dilini konuşulduğu ve yazıldığı haliyle - yani 'doğal dil' olarak - anlama yeteneğidir. Yapay zekanın (AI) bir bileşenidir. NLP, 50 yılı aşkın bir süredir var olup kökleri dilbilim alanına dayanır. Alanın tamamı, makinelerin insan dilini anlamasına ve işlemesine yardımcı olmaya yöneliktir; bu yetenek, yazım denetimi veya makine çevirisi gibi görevleri yerine getirmek için kullanılabilir. Tıp araştırmaları, arama motorları ve iş zekası gibi birçok alanda çeşitli gerçek dünya uygulamaları bulunmaktadır.
+
+## Bölgesel Konu: Avrupa Dilleri ve Edebiyatı ve Avrupa'nın Romantik Otelleri ❤️
+
+Bu müfredat bölümünde, makine öğreniminin en yaygın kullanımlarından biri olan doğal dil işlemeye (NLP) giriş yapacaksınız. Hesaplamalı dilbilimden türetilen bu yapay zeka kategorisi, insanlar ve makineler arasında sesli veya metinsel iletişim yoluyla bir köprü görevi görür.
+
+Bu derslerde, küçük konuşma botları oluşturarak NLP'nin temellerini öğrenecek ve makine öğreniminin bu konuşmaları nasıl giderek daha 'akıllı' hale getirdiğini göreceksiniz. Zaman içinde geriye giderek, Jane Austen'in 1813'te yayımlanan klasik romanı **Gurur ve Önyargı**'dan Elizabeth Bennett ve Bay Darcy ile sohbet edeceksiniz. Ardından, Avrupa'daki otel yorumları üzerinden duygu analizi yapmayı öğrenerek bilginizi daha da geliştireceksiniz.
+
+
+> Fotoğraf: Elaine Howlin, Unsplash üzerinden
+
+## Dersler
+
+1. [Doğal dil işlemeye giriş](1-Introduction-to-NLP/README.md)
+2. [Yaygın NLP görevleri ve teknikleri](2-Tasks/README.md)
+3. [Makine öğrenimi ile çeviri ve duygu analizi](3-Translation-Sentiment/README.md)
+4. [Verilerinizi hazırlama](4-Hotel-Reviews-1/README.md)
+5. [Duygu Analizi için NLTK](5-Hotel-Reviews-2/README.md)
+
+## Katkıda Bulunanlar
+
+Bu doğal dil işleme dersleri, [Stephen Howell](https://twitter.com/Howell_MSFT) tarafından ☕ eşliğinde yazılmıştır.
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/6-NLP/data/README.md b/translations/tr/6-NLP/data/README.md
new file mode 100644
index 000000000..ce83e55df
--- /dev/null
+++ b/translations/tr/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+Otel inceleme verilerini bu klasöre indirin.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/1-Introduction/README.md b/translations/tr/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..06401d2a3
--- /dev/null
+++ b/translations/tr/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# Zaman Serisi Tahminine Giriş
+
+
+
+> Çizim [Tomomi Imura](https://www.twitter.com/girlie_mac) tarafından
+
+Bu derste ve bir sonraki derste, zaman serisi tahmini hakkında biraz bilgi edineceksiniz. Bu, bir ML bilim insanının repertuarının ilginç ve değerli bir parçasıdır, ancak diğer konular kadar bilinmemektedir. Zaman serisi tahmini, bir tür 'kristal küre' gibidir: fiyat gibi bir değişkenin geçmiş performansına dayanarak, gelecekteki potansiyel değerini tahmin edebilirsiniz.
+
+[](https://youtu.be/cBojo1hsHiI "Zaman serisi tahminine giriş")
+
+> 🎥 Zaman serisi tahmini hakkında bir video için yukarıdaki resme tıklayın
+
+## [Ders Öncesi Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+Fiyatlandırma, envanter ve tedarik zinciri sorunlarına doğrudan uygulanabilirliği göz önüne alındığında, iş dünyası için gerçek değeri olan faydalı ve ilginç bir alandır. Derin öğrenme teknikleri, gelecekteki performansı daha iyi tahmin etmek için daha fazla içgörü elde etmek amacıyla kullanılmaya başlanmış olsa da, zaman serisi tahmini, büyük ölçüde klasik ML teknikleri tarafından bilgilendirilen bir alandır.
+
+> Penn State'in faydalı zaman serisi müfredatına [buradan](https://online.stat.psu.edu/stat510/lesson/1) ulaşabilirsiniz.
+
+## Giriş
+
+Diyelim ki, zamanla ne kadar sıklıkla kullanıldıkları ve ne kadar süreyle kullanıldıkları hakkında veri sağlayan bir dizi akıllı otopark sayacını yönetiyorsunuz.
+
+> Geçmiş performansına dayanarak, arz ve talep yasalarına göre bir sayacın gelecekteki değerini tahmin edebilir miydiniz?
+
+Hedefinize ulaşmak için ne zaman harekete geçmeniz gerektiğini doğru bir şekilde tahmin etmek, zaman serisi tahmini ile ele alınabilecek bir zorluktur. İnsanlar park yeri ararken yoğun zamanlarda daha fazla ücret alınmasından hoşlanmayabilirler, ancak bu, sokakları temizlemek için gelir elde etmenin kesin bir yolu olacaktır!
+
+Bazı zaman serisi algoritmalarını inceleyelim ve bazı verileri temizlemek ve hazırlamak için bir defter başlatalım. Analiz edeceğiniz veriler, GEFCom2014 tahmin yarışmasından alınmıştır. 2012 ve 2014 yılları arasında 3 yıllık saatlik elektrik yükü ve sıcaklık değerlerinden oluşmaktadır. Elektrik yükü ve sıcaklıklarının geçmiş desenlerine bakarak, gelecekteki elektrik yükü değerlerini tahmin edebilirsiniz.
+
+Bu örnekte, yalnızca geçmiş yük verilerini kullanarak bir zaman adımını ileriye tahmin etmeyi öğreneceksiniz. Ancak başlamadan önce, perde arkasında neler olduğunu anlamak faydalı olacaktır.
+
+## Bazı Tanımlar
+
+'Zaman serisi' terimiyle karşılaştığınızda, onun farklı bağlamlarda nasıl kullanıldığını anlamanız gerekir.
+
+🎓 **Zaman serisi**
+
+Matematikte, "bir zaman serisi, zaman sırasına göre dizinlenmiş (veya listelenmiş veya grafiğe dökülmüş) bir veri noktaları serisidir. En yaygın olarak, bir zaman serisi, ardışık eşit aralıklı zaman noktalarında alınan bir dizidir." Bir zaman serisi örneği, [Dow Jones Sanayi Ortalaması](https://wikipedia.org/wiki/Time_series)'nın günlük kapanış değeridir. Zaman serisi grafikleri ve istatistiksel modellemenin kullanımı, sinyal işleme, hava durumu tahmini, deprem tahmini ve olayların meydana geldiği ve veri noktalarının zamanla çizilebileceği diğer alanlarda sıkça karşılaşılır.
+
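+Kavramı somutlaştırmak için küçük, varsayımsal bir örnek: pandas'ta bir zaman serisi, tarih dizinli bir Series olarak temsil edilebilir (değerler tamamen uydurmadır):
+
+```python
+import pandas as pd
+
+# Eşit aralıklı (günlük) zaman noktalarında gözlemlenen basit bir seri
+idx = pd.date_range("2024-01-01", periods=5, freq="D")
+ts = pd.Series([101.2, 102.5, 101.9, 103.0, 104.1], index=idx)
+print(ts)
+```
+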
+🎓 **Zaman serisi analizi**
+
+Zaman serisi analizi, yukarıda bahsedilen zaman serisi verilerinin analizidir. Zaman serisi verileri, bir kesinti olayından önce ve sonra bir zaman serisinin evrimindeki desenleri tespit eden 'kesintili zaman serileri' de dahil olmak üzere farklı biçimler alabilir. Zaman serisi için gereken analiz türü, verilerin doğasına bağlıdır. Zaman serisi verileri, sayı veya karakter serileri biçiminde olabilir.
+
+Yapılacak analiz, frekans alanı ve zaman alanı, doğrusal ve doğrusal olmayan ve daha fazlası dahil olmak üzere çeşitli yöntemler kullanır. Bu tür verileri analiz etmenin birçok yolu hakkında [daha fazla bilgi edinin](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm).
+
+🎓 **Zaman serisi tahmini**
+
+Zaman serisi tahmini, geçmişte meydana gelen veriler tarafından gösterilen desenlere dayanarak gelecekteki değerleri tahmin etmek için bir modelin kullanılmasıdır. Zaman serisi verilerini keşfetmek için regresyon modelleri kullanmak mümkün olsa da, zaman dizinlerinin bir grafikte x değişkenleri olarak kullanılmasıyla, bu tür veriler özel model türleri kullanılarak en iyi şekilde analiz edilir.
+
+Zaman serisi verileri, doğrusal regresyonla analiz edilebilecek verilerden farklı olarak, sıralı gözlemlerden oluşan bir listedir. Bu tür veriler için en yaygın model, "Otoregresif Entegre Hareketli Ortalama" anlamına gelen ARIMA'dır.
+
+[ARIMA modelleri](https://online.stat.psu.edu/stat510/lesson/1/1.1) "bir serinin mevcut değerini geçmiş değerler ve geçmiş tahmin hatalarıyla ilişkilendirir." Zamanla sıralanan verilerin analiz edilmesi için en uygun olanıdır.
+
+> ARIMA modellerinin birkaç türü vardır, bunlar hakkında [buradan](https://people.duke.edu/~rnau/411arim.htm) bilgi edinebilir ve bir sonraki derste bu konulara değineceksiniz.
+
+Bir sonraki derste, zamanla değişen bir değişkene odaklanan [Tek Değişkenli Zaman Serisi](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm) kullanarak bir ARIMA modeli oluşturacaksınız. Bu tür verilere bir örnek, Mauna Loa Gözlemevi'nde aylık CO2 konsantrasyonunu kaydeden [bu veri setidir](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm):
+
+| CO2 | YearMonth | Year | Month |
+| :----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ Bu veri setinde zamanla değişen değişkeni belirleyin
+
+## Dikkate Alınması Gereken Zaman Serisi Veri Özellikleri
+
+Zaman serisi verilerine baktığınızda, daha iyi anlamak için dikkate almanız ve azaltmanız gereken [belirli özelliklere](https://online.stat.psu.edu/stat510/lesson/1/1.1) sahip olduğunu fark edebilirsiniz. Zaman serisi verilerini analiz etmek istediğiniz bir 'sinyal' olarak düşünürseniz, bu özellikler 'gürültü' olarak düşünülebilir. Bu 'gürültüyü' azaltmak için bazı istatistiksel teknikler kullanarak bu özelliklerden bazılarını dengelemeniz gerekecektir.
+
+Zaman serisi ile çalışabilmek için bilmeniz gereken bazı kavramlar şunlardır:
+
+🎓 **Trendler**
+
+Trendler, zamanla ölçülebilir artışlar ve azalmalar olarak tanımlanır. [Daha fazla okuyun](https://machinelearningmastery.com/time-series-trends-in-python). Zaman serisi bağlamında önemli olan, trendleri zaman serinizde nasıl kullanacağınızı ve gerekirse nasıl kaldıracağınızı bilmektir.
+
+🎓 **[Mevsimsellik](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+Mevsimsellik, örneğin satışları etkileyebilecek tatil yoğunluğu gibi periyodik dalgalanmalar olarak tanımlanır. Verilerde mevsimselliği gösteren farklı türde grafiklerin nasıl göründüğüne [bir göz atın](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm).
+
+🎓 **Aykırı Değerler**
+
+Aykırı değerler, standart veri varyansından uzak olan verilerdir.
+
+🎓 **Uzun Vadeli Döngü**
+
+Mevsimsellikten bağımsız olarak, veriler bir yıldan uzun süren bir ekonomik durgunluk gibi uzun vadeli bir döngü gösterebilir.
+
+🎓 **Sabit Varyans**
+
+Zamanla, bazı veriler günlük ve gece enerji kullanımı gibi sabit dalgalanmalar gösterir.
+
+🎓 **Ani Değişiklikler**
+
+Veriler, daha fazla analiz gerektirebilecek ani bir değişiklik gösterebilir. Örneğin, COVID nedeniyle iş yerlerinin ani kapanması, verilerde değişikliklere neden oldu.
+
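+Bu özelliklerden eğilim ve mevsimselliğin bir seriden nasıl ayrıştırılabileceğine dair varsayımsal bir taslak; `statsmodels` kütüphanesinin kurulu olduğu varsayılmış ve sentetik bir seri kullanılmıştır:
+
+```python
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# Sentetik saatlik seri: eğilim + 24 saatlik mevsimsellik + gürültü
+idx = pd.date_range("2024-01-01", periods=24*14, freq="H")
+values = (np.linspace(0, 10, len(idx))              # eğilim
+          + 5*np.sin(2*np.pi*idx.hour/24)           # günlük mevsimsellik
+          + np.random.normal(0, 0.5, len(idx)))     # gürültü
+series = pd.Series(values, index=idx)
+
+# Eğilim, mevsimsellik ve artık bileşenlerini ayrı panellerde çizer
+seasonal_decompose(series, model="additive", period=24).plot()
+plt.show()
+```
+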
+✅ İşte birkaç yıl boyunca günlük oyun içi para harcamasını gösteren [örnek bir zaman serisi grafiği](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python). Bu verilerde yukarıda listelenen özelliklerden herhangi birini belirleyebilir misiniz?
+
+
+
+## Egzersiz - Güç Kullanım Verileri ile Başlamak
+
+Geçmiş kullanıma dayanarak gelecekteki güç kullanımını tahmin etmek için bir zaman serisi modeli oluşturmaya başlayalım.
+
+> Bu örnekteki veriler, GEFCom2014 tahmin yarışmasından alınmıştır. 2012 ve 2014 yılları arasında 3 yıllık saatlik elektrik yükü ve sıcaklık değerlerinden oluşmaktadır.
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli ve Rob J. Hyndman, "Olasılıksal enerji tahmini: Global Energy Forecasting Competition 2014 ve ötesi", International Journal of Forecasting, cilt 32, no.3, ss 896-913, Temmuz-Eylül, 2016.
+
+1. Bu dersin `working` klasöründe, _notebook.ipynb_ dosyasını açın. Verileri yüklemenize ve görselleştirmenize yardımcı olacak kütüphaneleri ekleyerek başlayın
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+ Not: Ortamınızı hazırlayan ve verilerin indirilmesini yöneten, derse dahil edilmiş `common` klasöründeki dosyaları kullanıyorsunuz.
+
+2. Ardından, `load_data()` ve `head()` fonksiyonlarını çağırarak verileri bir dataframe olarak inceleyin:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ Tarih ve yükü temsil eden iki sütun olduğunu görebilirsiniz:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. Şimdi, `plot()` çağrısı yaparak verileri grafiğe dökün:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. Şimdi, `energy` verisini `[from date]: [to date]` deseninde dilimleyerek 2014 Temmuz'unun ilk haftasını grafiğe dökün:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Güzel bir grafik! Bu grafiklere bakın ve yukarıda listelenen özelliklerden herhangi birini belirleyip belirleyemeyeceğinizi görün. Verileri görselleştirerek ne çıkarımlar yapabiliriz?
+
+Bir sonraki derste, bazı tahminler oluşturmak için bir ARIMA modeli oluşturacaksınız.
+
+---
+
+## 🚀Meydan Okuma
+
+Zaman serisi tahmininden fayda sağlayabilecek tüm endüstrileri ve araştırma alanlarını listeleyin. Bu tekniklerin sanatlarda bir uygulamasını düşünebilir misiniz? Ekonometride? Ekolojide? Perakendede? Endüstride? Finansta? Başka nerede?
+
+## [Ders Sonrası Quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Burada ele almayacak olsak da, zaman serisi tahmininin klasik yöntemlerini geliştirmek için bazen sinir ağları kullanılır. Bu konuda daha fazla bilgi edinmek için [bu makaleyi](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412) okuyun.
+
+## Ödev
+
+[Daha fazla zaman serisi görselleştirin](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlaşılma veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/1-Introduction/assignment.md b/translations/tr/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..ef158a542
--- /dev/null
+++ b/translations/tr/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# Bazı Zaman Serilerini Daha Fazla Görselleştir
+
+## Talimatlar
+
+Zaman Serisi Tahmini hakkında, bu özel modellemeyi gerektiren veri türlerine bakarak öğrenmeye başladınız. Enerji ile ilgili bazı verileri görselleştirdiniz. Şimdi, Zaman Serisi Tahmini'nden fayda sağlayabilecek başka veriler arayın. Üç örnek bulun (örneğin [Kaggle](https://kaggle.com) ve [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)) ve bunları görselleştirmek için bir notebook oluşturun. Notebook'ta sahip oldukları özel özellikleri (mevsimsellik, ani değişiklikler veya diğer trendler) not edin.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek | Yeterli | Gelişmeye İhtiyacı Var |
+| --------- | --------------------------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------------------------------------- |
+| | Üç veri kümesi bir notebook'ta görselleştirilmiş ve açıklanmıştır | İki veri kümesi bir notebook'ta görselleştirilmiş ve açıklanmıştır | Az sayıda veri kümesi bir notebook'ta görselleştirilmiş veya açıklanmış ya da sunulan veri yetersizdir |
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından doğabilecek yanlış anlama veya yanlış yorumlamalardan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/tr/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..7f95ca4a3
--- /dev/null
+++ b/translations/tr/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal diliyle yazılmış hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/tr/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..40ef57c6e
--- /dev/null
+++ b/translations/tr/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/2-ARIMA/README.md b/translations/tr/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..7954da670
--- /dev/null
+++ b/translations/tr/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,397 @@
+# ARIMA ile Zaman Serisi Tahmini
+
+Önceki derste, zaman serisi tahmini hakkında biraz bilgi edindiniz ve bir zaman dilimi boyunca elektrik yükünün dalgalanmalarını gösteren bir veri kümesini yüklediniz.
+
+[](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 Yukarıdaki görüntüye tıklayarak bir video izleyin: ARIMA modellerine kısa bir giriş. Örnek R dilinde yapılmıştır, ancak kavramlar evrenseldir.
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## Giriş
+
+Bu derste, [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average) ile model oluşturmanın belirli bir yolunu keşfedeceksiniz. ARIMA modelleri, özellikle [durağan olmayan](https://wikipedia.org/wiki/Stationary_process) verilere uyum sağlamak için uygundur.
+
+## Genel Kavramlar
+
+ARIMA ile çalışabilmek için bilmeniz gereken bazı kavramlar vardır:
+
+- 🎓 **Durağanlık**. İstatistiksel bağlamda, durağanlık, zaman içinde kaydırıldığında dağılımı değişmeyen verilere atıfta bulunur. Durağan olmayan veriler ise analiz edilmek üzere dönüştürülmesi gereken eğilimlerden kaynaklanan dalgalanmalar gösterir. Örneğin, mevsimsellik verilerde dalgalanmalara neden olabilir ve 'mevsimsel fark alma' süreci ile ortadan kaldırılabilir.
+
+- 🎓 **[Fark Alma](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**. İstatistiksel bağlamda, fark alma, durağan olmayan verileri durağan hale getirmek için değişken eğilimlerini ortadan kaldırma sürecine atıfta bulunur. "Fark alma, bir zaman serisinin seviyesindeki değişiklikleri ortadan kaldırarak eğilim ve mevsimselliği ortadan kaldırır ve böylece zaman serisinin ortalamasını stabilize eder." [Shixiong ve diğerlerinin makalesi](https://arxiv.org/abs/1904.07632)
+
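+Fark alma işleminin kodda nasıl görünebileceğine dair küçük bir taslak (alıştırmada yüklenen `energy` dataframe'inin mevcut olduğu varsayılmıştır):
+
+```python
+# 'energy' dataframe'inin saatlik 'load' sütunu içerdiği varsayılmıştır
+first_diff = energy['load'].diff(1).dropna()      # birinci fark: eğilimi azaltır
+seasonal_diff = energy['load'].diff(24).dropna()  # 24 saatlik fark: günlük mevsimselliği azaltır
+print(first_diff.head())
+print(seasonal_diff.head())
+```
+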
+## Zaman Serisi Bağlamında ARIMA
+
+ARIMA'nın bölümlerini açarak, zaman serilerini nasıl modellediğini ve tahmin yapmamıza nasıl yardımcı olduğunu daha iyi anlayalım.
+
+- **AR - Otoregresif**. Otoregresif modeller, adından da anlaşılacağı gibi, verilerinizdeki önceki değerlere bakarak onları analiz eder ve varsayımlar yapar. Bu önceki değerlere 'gecikmeler' denir. Örneğin, aylık kalem satışlarını gösteren veriler. Her ayın satış toplamı, veri kümesinde 'gelişen değişken' olarak kabul edilir. Bu model, "ilgilenen gelişen değişkenin kendi gecikmiş (yani, önceki) değerlerine göre regresyona tabi tutulduğu" şeklinde oluşturulur. [wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - Entegre**. Benzer 'ARMA' modellerinden farklı olarak, ARIMA'daki 'I', *[entegre](https://wikipedia.org/wiki/Order_of_integration)* yönünü ifade eder. Veriler, durağanlığı ortadan kaldırmak için fark alma adımları uygulandığında 'entegre' olur.
+
+- **MA - Hareketli Ortalama**. Bu modelin [hareketli ortalama](https://wikipedia.org/wiki/Moving-average_model) yönü, çıkış değişkeninin, mevcut ve geçmiş gecikme değerlerini gözlemleyerek belirlendiğini ifade eder.
+
+Sonuç: ARIMA, zaman serisi verilerinin özel formuna mümkün olduğunca yakın bir model oluşturmak için kullanılır.
+
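+Gecikme (lag) kavramını somutlaştırmak için varsayımsal bir taslak; `energy` dataframe'inin alıştırmadaki gibi yüklendiği varsayılmıştır:
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+from pandas.plotting import autocorrelation_plot
+
+# 'load' sütununun 1 ve 24 saat gecikmeli kopyaları
+lags = pd.DataFrame({
+    'load': energy['load'],
+    'lag_1': energy['load'].shift(1),
+    'lag_24': energy['load'].shift(24),
+})
+print(lags.head())
+
+# Serinin kendi gecikmeli değerleriyle korelasyonunu görselleştirir
+autocorrelation_plot(energy['load'])
+plt.show()
+```
+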
+## Alıştırma - Bir ARIMA Modeli Oluşturun
+
+Bu dersteki [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) klasörünü açın ve [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) dosyasını bulun.
+
+1. ARIMA modelleri için ihtiyacınız olan `statsmodels` Python kütüphanesini yüklemek için notebook'u çalıştırın.
+
+1. Gerekli kütüphaneleri yükleyin.
+
+1. Şimdi, verileri çizmek için faydalı olan birkaç kütüphaneyi daha yükleyin:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. Verileri `/data/energy.csv` dosyasından bir Pandas dataframe'ine yükleyin ve bir göz atın:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. Ocak 2012'den Aralık 2014'e kadar mevcut tüm enerji verilerini çizin. Bu verileri önceki derste gördüğümüz için sürpriz olmamalı:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ Şimdi, bir model oluşturalım!
+
+### Eğitim ve Test Veri Setleri Oluşturun
+
+Verileriniz yüklendi, bu yüzden onları eğitim ve test setlerine ayırabilirsiniz. Modelinizi eğitim setinde eğiteceksiniz. Her zamanki gibi, model eğitimi tamamlandıktan sonra, doğruluğunu test setini kullanarak değerlendireceksiniz. Modelin gelecekteki zaman dilimlerinden bilgi almamasını sağlamak için test setinin eğitim setinden sonraki bir dönemi kapsadığından emin olmanız gerekir.
+
+1. Eğitim setine 1 Kasım - 30 Aralık 2014 dönemini ayırın. Test seti, 30 - 31 Aralık 2014 dönemini (son 48 saati) kapsayacaktır:
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ Bu veriler günlük enerji tüketimini yansıttığı için güçlü bir mevsimsel desen vardır; ancak tüketim, en çok yakın geçmişteki günlerin tüketimine benzer.
+
+1. Farklılıkları görselleştirin:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Bu nedenle, verileri eğitmek için nispeten küçük bir zaman penceresi kullanmak yeterli olmalıdır.
+
+ > Not: ARIMA modelini uyarlamak için kullandığımız fonksiyon, uyarlama sırasında örnek içi doğrulama kullandığından, doğrulama verilerini göz ardı edeceğiz.
+
+### Verileri Eğitime Hazırlayın
+
+Şimdi, verileri filtreleyip ölçeklendirerek eğitime hazırlamanız gerekiyor. Veri kümenizi yalnızca ihtiyaç duyduğunuz zaman dilimlerini ve sütunları içerecek şekilde filtreleyin ve verilerin (0, 1) aralığına yansıtılması için ölçeklendirin.
+
+1. Orijinal veri kümesini, set başına yalnızca belirtilen zaman dilimlerini ve yalnızca gerekli olan 'load' sütunu ile tarih sütununu içerecek şekilde filtreleyin:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ Verinin şeklini görebilirsiniz:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. Verileri (0, 1) aralığında ölçeklendirin.
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. Orijinal ve ölçeklendirilmiş verileri görselleştirin:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > Orijinal veri
+
+ 
+
+ > Ölçeklendirilmiş veri
+
+1. Şimdi ölçeklendirilmiş verileri kalibre ettiğinize göre, test verilerini de ölçeklendirebilirsiniz:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### ARIMA'yı Uygulayın
+
+ARIMA'yı uygulama zamanı geldi! Daha önce yüklediğiniz `statsmodels` kütüphanesini kullanacaksınız.
+
+Şimdi birkaç adımı takip etmeniz gerekiyor:
+
+ 1. Modeli, `SARIMAX()` fonksiyonunu çağırıp model parametrelerini (p, d ve q parametreleri ile P, D ve Q parametreleri) geçirerek tanımlayın.
+ 2. `fit()` fonksiyonunu çağırarak modeli eğitim verileri için hazırlayın.
+ 3. `forecast()` fonksiyonunu çağırıp tahmin edilecek adım sayısını (`horizon` değerini) belirterek tahminler yapın.
+
+> 🎓 Tüm bu parametreler ne işe yarar? Bir ARIMA modelinde, bir zaman serisinin başlıca yönlerini - mevsimsellik, eğilim ve gürültü - modellemeye yardımcı olmak için kullanılan 3 parametre vardır. Bu parametreler şunlardır:
+
+`p`: modelin otoregresif yönüyle ilişkili olan ve *geçmiş* değerleri dahil eden parametre.
+`d`: modelin entegre kısmıyla ilişkili olan ve bir zaman serisine uygulanacak *fark alma* miktarını (🎓 fark almayı hatırlıyor musunuz 👆?) etkileyen parametre.
+`q`: modelin hareketli ortalama kısmıyla ilişkili parametre.
+
+> Not: Verilerinizin mevsimsel bir yönü varsa - ki bu verilerde var - mevsimsel bir ARIMA modeli (SARIMA) kullanılır. Bu durumda, `p`, `d` ve `q` ile aynı ilişkileri tanımlayan ancak modelin mevsimsel bileşenlerine karşılık gelen ikinci bir parametre kümesi kullanmanız gerekir: `P`, `D` ve `Q`.
+
+1. Tercih ettiğiniz ufuk değerini ayarlayarak başlayın. 3 saat deneyelim:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+Bir ARIMA modelinin parametreleri için en iyi değerleri seçmek zordur, çünkü bu biraz öznel ve zaman alıcıdır. [`pyramid` kütüphanesindeki](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html) `auto_arima()` fonksiyonunu kullanmayı düşünebilirsiniz.
+
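+Merak edenler için varsayımsal bir taslak: `pyramid` projesinin devamı olan `pmdarima` paketi ile otomatik parametre araması (paketin kurulu olduğu varsayılmıştır; arama, saatlik veride hesaplama açısından maliyetli olabilir):
+
+```python
+import pmdarima as pm
+
+# 24 periyotlu mevsimsel bileşenle kademeli (stepwise) arama
+stepwise_model = pm.auto_arima(train['load'], seasonal=True, m=24,
+                               stepwise=True, suppress_warnings=True)
+print(stepwise_model.order, stepwise_model.seasonal_order)
+```
+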
+1. Şimdilik iyi bir model bulmak için bazı manuel seçimler deneyin.
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ Bir sonuç tablosu yazdırılır.
+
+İlk modelinizi oluşturdunuz! Şimdi onu değerlendirmek için bir yol bulmamız gerekiyor.
+
+### Modelinizi Değerlendirin
+
+Modelinizi değerlendirmek için, 'yürüyen ileri' (walk-forward) doğrulama adı verilen yöntemi uygulayabilirsiniz. Pratikte, zaman serisi modelleri her yeni veri geldiğinde yeniden eğitilir. Bu, modelin her zaman adımında en iyi tahmini yapmasına olanak tanır.
+
+Bu tekniği kullanarak zaman serisinin başından başlayarak, modeli eğitim veri setinde eğitin. Ardından bir sonraki zaman adımında tahmin yapın. Tahmin, bilinen değere karşı değerlendirilir. Eğitim seti daha sonra bilinen değeri içerecek şekilde genişletilir ve işlem tekrarlanır.
+
+> Not: Eğitimi daha verimli hale getirmek için eğitim seti penceresini sabit tutmalısınız; böylece eğitim setine her yeni gözlem eklediğinizde, setin başındaki gözlemi kaldırırsınız.
+
+Bu süreç, modelin pratikte nasıl performans göstereceğine dair daha sağlam bir tahmin sağlar. Ancak, bu kadar çok model oluşturmanın hesaplama maliyeti vardır. Veri küçükse veya model basitse kabul edilebilir, ancak ölçek büyüdüğünde sorun olabilir.
+
+Yürüyen ileri doğrulama, zaman serisi modeli değerlendirmesinin altın standardıdır ve kendi projelerinizde tavsiye edilir.
+
+1. İlk olarak, her HORIZON adımı için bir test veri noktası oluşturun.
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+     test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+ | | | load | load+1 | load+2 |
+ | ---------- | -------- | ---- | ------ | ------ |
+ | 2014-12-30 | 00:00:00 | 0.33 | 0.29 | 0.27 |
+ | 2014-12-30 | 01:00:00 | 0.29 | 0.27 | 0.27 |
+ | 2014-12-30 | 02:00:00 | 0.27 | 0.27 | 0.30 |
+ | 2014-12-30 | 03:00:00 | 0.27 | 0.30 | 0.41 |
+ | 2014-12-30 | 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ Veriler ufuk noktasına göre yatay olarak kaydırılmıştır.
+
+1. Test verilerinizde bu kayan pencere yaklaşımını kullanarak bir döngü içinde tahminler yapın:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+     model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+     model_fit = model.fit()
+     yhat = model_fit.forecast(steps = HORIZON)
+     predictions.append(yhat)
+     obs = list(test_ts.iloc[t])
+     # move the training window
+     history.append(obs[0])
+     history.pop(0)
+     print(test_ts.index[t])
+     print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ Eğitimin gerçekleştiğini izleyebilirsiniz:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. Tahminleri gerçek yükle karşılaştırın:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ Çıktı:
+
+ | | | timestamp | h | prediction | actual |
+ | --- | ---------- | --------- | --- | ---------- | -------- |
+ | 0 | 2014-12-30 | 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+ | 1 | 2014-12-30 | 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+ | 2 | 2014-12-30 | 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+ | 3 | 2014-12-30 | 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+ | 4 | 2014-12-30 | 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+
+ Saatlik verilerin tahminini, gerçek yükle karşılaştırın. Ne kadar doğru?
+
+### Model Doğruluğunu Kontrol Edin
+
+Modelinizin doğruluğunu, tüm tahminler üzerindeki ortalama mutlak yüzde hatasını (MAPE) test ederek kontrol edin.
+
+> **🧮 Matematiği Göster**
+>
+> 
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) tahmin doğruluğunu yukarıdaki formülle tanımlanan bir oran olarak göstermek için kullanılır. Gerçek<sub>t</sub> ile tahmin<sub>t</sub> arasındaki fark, gerçek<sub>t</sub> değerine bölünür. "Bu hesaplamadaki mutlak değer her tahmin edilen zaman noktasında toplanır ve uydurulan nokta sayısı olan n'e bölünür." [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
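+>
+> Formül, A<sub>t</sub> gerçek değeri, F<sub>t</sub> tahmini ve n uydurulan nokta sayısını göstermek üzere LaTeX ile şöyle yazılabilir:
+>
+> $$\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|$$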
+
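+Ders deposundaki `common.utils` modülünden içe aktarılan `mape` fonksiyonunun nasıl görünebileceğine dair varsayımsal bir taslak (gerçek gerçekleme farklı olabilir):
+
+```python
+import numpy as np
+
+def mape_sketch(predictions, actuals):
+    """Ortalama mutlak yüzde hatayı oran olarak döndürür; actuals değerlerinin sıfır içermediği varsayılır."""
+    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
+    return np.mean(np.abs((predictions - actuals) / actuals))
+```
+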
+1. Denklemi kodda ifade edin:
+
+ ```python
+ if(HORIZON > 1):
+     eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+     print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. Bir adımın MAPE'sini hesaplayın:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+ Bir adım tahmin MAPE'si: 0.5570581332313952 %
+
+1. Çok adımlı tahmin MAPE'sini yazdırın:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+ Düşük bir değer en iyisidir: MAPE'si 10 olan bir tahminin %10 saptığını düşünün.
+
+1. Ancak her zaman olduğu gibi, bu tür doğruluk ölçümünü görsel olarak görmek daha kolaydır, bu yüzden bunu çizelim:
+
+ ```python
+ if(HORIZON == 1):
+     ## Plotting single step forecast
+     eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+     ## Plotting multi step forecast
+     plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+     for t in range(1, HORIZON+1):
+         plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+     fig = plt.figure(figsize=(15, 8))
+     ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+     ax = fig.add_subplot(111)
+     for t in range(1, HORIZON+1):
+         x = plot_df['timestamp'][(t-1):]
+         y = plot_df['t+'+str(t)][0:len(x)]
+         ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+     ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 Çok güzel bir grafik, iyi doğruluğa sahip bir modeli gösteriyor. Aferin!
+
+---
+
+## 🚀Meydan Okuma
+
+Bir Zaman Serisi Modelinin doğruluğunu test etmenin yollarını inceleyin. Bu derste MAPE'ye değiniyoruz, ancak kullanabileceğiniz başka yöntemler var mı? Onları araştırın ve not edin. Yardımcı bir belgeyi [burada](https://otexts.com/fpp2/accuracy.html) bulabilirsiniz.
+
+## [Ders Sonrası Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## Gözden Geçirme ve Kendi Kendine Çalışma
+
+Bu ders, ARIMA ile Zaman Serisi Tahmininin yalnızca temel konularına değinmektedir. [Bu depo](https://microsoft.github.io/forecasting/) ve çeşitli model türlerine göz atarak Zaman Serisi modelleri oluşturmanın diğer yollarını öğrenmek için bilginizi derinleştirin.
+
+## Ödev
+
+[Yeni bir ARIMA modeli](assignment.md)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilmektedir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/2-ARIMA/assignment.md b/translations/tr/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..6e08a2ac4
--- /dev/null
+++ b/translations/tr/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# Yeni bir ARIMA modeli
+
+## Talimatlar
+
+Artık bir ARIMA modeli oluşturduğunuza göre, yeni verilerle (Duke'dan [bu veri setlerinden](http://www2.stat.duke.edu/~mw/ts_data_sets.html) birini deneyin) yeni bir tane oluşturun. Çalışmanızı bir not defterinde açıklayın, verileri ve modelinizi görselleştirin ve doğruluğunu MAPE kullanarak test edin.
+
+## Değerlendirme Kriterleri
+
+| Kriterler | Örnek | Yeterli | Geliştirilmesi Gerekiyor |
+| --------- | ------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- | ----------------------------------- |
+| | Yeni bir ARIMA modeli oluşturulmuş, test edilmiş ve görselleştirmelerle açıklanmış, doğruluk belirtilmiş bir not defteri sunulmuştur. | Sunulan not defteri açıklanmamış veya hatalar içermektedir | Eksik bir not defteri sunulmuştur |
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belgenin kendi dilindeki hali yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/tr/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..7ea17fe26
--- /dev/null
+++ b/translations/tr/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba sarf etsek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/tr/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..b91dee279
--- /dev/null
+++ b/translations/tr/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Feragatname**:
+Bu belge, makine tabanlı AI çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Orijinal belge, kendi dilinde yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi önerilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/3-SVR/README.md b/translations/tr/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..a3c5b41f7
--- /dev/null
+++ b/translations/tr/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,386 @@
+# Destek Vektör Regresörü ile Zaman Serisi Tahmini
+
+Önceki derste, ARIMA modelini kullanarak zaman serisi tahminleri yapmayı öğrendiniz. Şimdi sürekli verileri tahmin etmek için kullanılan bir regresör modeli olan Destek Vektör Regresörü modeline bakacağız.
+
+## [Ders Öncesi Test](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## Giriş
+
+Bu derste, regresyon için [**SVM**: **D**estek **V**ektör **M**akinesi](https://en.wikipedia.org/wiki/Support-vector_machine) veya **SVR: Destek Vektör Regresörü** ile model oluşturmanın belirli bir yolunu keşfedeceksiniz.
+
+### Zaman serisi bağlamında SVR [^1]
+
+Zaman serisi tahmininde SVR'nin önemini anlamadan önce bilmeniz gereken bazı önemli kavramlar şunlardır:
+
+- **Regresyon:** Verilen bir dizi girdiden sürekli değerleri tahmin etmek için kullanılan denetimli öğrenme tekniği. Amaç, özellik alanında maksimum veri noktası sayısına sahip bir eğri (veya çizgi) uyarlamaktır. Daha fazla bilgi için [buraya tıklayın](https://en.wikipedia.org/wiki/Regression_analysis).
+- **Destek Vektör Makinesi (SVM):** Sınıflandırma, regresyon ve aykırı değer tespiti için kullanılan bir tür denetimli makine öğrenme modeli. Model, sınıflandırma durumunda sınır olarak, regresyon durumunda ise en iyi uyum çizgisi olarak işlev gören özellik alanında bir hiper düzlemdir. SVM'de, genellikle veri kümesini daha yüksek boyut sayısına sahip bir alana dönüştürmek için bir Çekirdek fonksiyonu kullanılır, böylece kolayca ayrılabilir hale gelirler. SVM'ler hakkında daha fazla bilgi için [buraya tıklayın](https://en.wikipedia.org/wiki/Support-vector_machine).
+- **Destek Vektör Regresörü (SVR):** En fazla veri noktasına sahip en iyi uyum çizgisini (SVM durumunda bu bir hiper düzlemdir) bulmak için kullanılan bir SVM türü.
+
+### Neden SVR? [^1]
+
+Son derste, zaman serisi verilerini tahmin etmek için çok başarılı bir istatistiksel doğrusal yöntem olan ARIMA hakkında bilgi edindiniz. Ancak birçok durumda, zaman serisi verileri doğrusal olmayan özelliklere sahiptir ve bu doğrusal modellerle haritalanamaz. Bu gibi durumlarda, SVR'nin doğrusal olmayan verileri regresyon görevleri için dikkate alma yeteneği, SVR'yi zaman serisi tahmininde başarılı kılar.
+
+## Egzersiz - bir SVR modeli oluşturun
+
+Veri hazırlama için ilk birkaç adım, [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA) hakkındaki önceki dersteki adımlarla aynıdır.
+
+Bu dersteki [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working) klasörünü açın ve [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb) dosyasını bulun.[^2]
+
+1. Not defterini çalıştırın ve gerekli kütüphaneleri içe aktarın: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. Verileri `/data/energy.csv` dosyasından bir Pandas veri çerçevesine yükleyin ve bir göz atın: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. Ocak 2012'den Aralık 2014'e kadar mevcut tüm enerji verilerini görselleştirin: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ Şimdi, SVR modelimizi oluşturalım.
+
+### Eğitim ve test veri setleri oluşturun
+
+Artık verileriniz yüklendiğine göre, onları eğitim ve test setlerine ayırabilirsiniz. Daha sonra, SVR için gerekli olan zaman adımı tabanlı bir veri seti oluşturmak için verileri yeniden şekillendireceksiniz. Modelinizi eğitim setinde eğiteceksiniz. Model eğitimi tamamlandıktan sonra, doğruluğunu eğitim setinde, test setinde ve ardından genel performansı görmek için tüm veri setinde değerlendireceksiniz. Test setinin, modelin gelecekteki zaman dilimlerinden bilgi edinmesini engellemek için eğitim setinden daha sonraki bir dönemi kapsadığından emin olmanız gerekir [^2] (bu duruma *Aşırı Uyum* denir).
+
+1. Eğitim setine 1 Kasım - 30 Aralık 2014 dönemini ayırın. Test seti ise 30 - 31 Aralık 2014 dönemini (son 48 saati) içerecektir: [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. Farklılıkları görselleştirin: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+
+
+### Verileri eğitime hazırlayın
+
+Şimdi, verilerinizi filtreleyip ölçeklendirerek eğitime hazırlamanız gerekiyor. Veri setinizi yalnızca gerekli zaman dilimlerini ve sütunları içerecek şekilde filtreleyin ve verilerin (0, 1) aralığına yansıtılması için ölçeklendirin.
+
+1. Orijinal veri setini yalnızca yukarıda belirtilen zaman dilimlerini içerecek şekilde filtreleyin ve yalnızca gerekli 'load' sütununu ve tarihi dahil edin: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. Eğitim verilerini (0, 1) aralığında ölçeklendirin: [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+4. Şimdi, test verilerini ölçeklendirin: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### Zaman adımları ile veri oluşturun [^1]
+
+SVR için, giriş verilerini `[batch, timesteps]` biçimine dönüştürürsünüz. Yani, mevcut `train_data` ve `test_data` dizilerini, zaman adımlarını ifade eden yeni bir boyut eklenecek şekilde yeniden şekillendirirsiniz.
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+Bu örnek için, `timesteps = 5` alıyoruz. Yani, modele girdi olarak ilk 4 zaman adımının verilerini veriyoruz ve çıktı 5. zaman adımının verileri olacak.
+
+```python
+timesteps=5
+```
+
+Eğitim verilerini iç içe liste üreteci (nested list comprehension) kullanarak 2D tensöre dönüştürme:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
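+
+Kayan pencerenin ne ürettiğini görmek için küçük, varsayımsal bir örnek:
+
+```python
+import numpy as np
+
+# 7 elemanlı oyuncak bir seri ve 5'lik pencere
+toy = np.arange(1, 8).reshape(-1, 1)
+timesteps = 5
+windows = np.array([[j for j in toy[i:i+timesteps]]
+                    for i in range(0, len(toy)-timesteps+1)])[:, :, 0]
+print(windows)
+# [[1 2 3 4 5]
+#  [2 3 4 5 6]
+#  [3 4 5 6 7]]
+```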
+
+Test verilerini 2D tensöre dönüştürme:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+ Eğitim ve test verilerinden giriş ve çıkışları seçme:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### SVR'yi uygulayın [^1]
+
+Şimdi, SVR'yi uygulama zamanı. Bu uygulama hakkında daha fazla bilgi edinmek için [bu belgeleri](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) inceleyebilirsiniz. Bizim uygulamamız için şu adımları takip ediyoruz:
+
+ 1. Modeli, `SVR()` fonksiyonunu çağırıp model hiperparametrelerini (kernel, gamma, c ve epsilon) geçirerek tanımlayın.
+ 2. `fit()` fonksiyonunu çağırarak modeli eğitim verileri için hazırlayın.
+ 3. `predict()` fonksiyonunu çağırarak tahminler yapın.
+
+Şimdi bir SVR modeli oluşturuyoruz. Burada [RBF çekirdeğini](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel) kullanıyoruz ve hiperparametreleri gamma, C ve epsilon olarak sırasıyla 0.5, 10 ve 0.05 olarak ayarlıyoruz.
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### Modeli eğitim verileri üzerinde eğitin [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### Model tahminleri yapın [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+SVR'nizi oluşturdunuz! Şimdi bunu değerlendirmemiz gerekiyor.
+
+### Modelinizi değerlendirin [^1]
+
+Değerlendirme için, önce verileri orijinal ölçeğimize geri ölçeklendireceğiz. Daha sonra performansı kontrol etmek için orijinal ve tahmin edilen zaman serisi grafiğini çizeceğiz ve MAPE sonucunu yazdıracağız.
+
+Tahmin edilen ve orijinal çıktıyı ölçeklendirin:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### Eğitim ve test verileri üzerinde model performansını kontrol edin [^1]
+
+Grafiğimizin x ekseninde göstermek üzere veri setinden zaman damgalarını çıkarıyoruz. İlk `timesteps-1` değeri ilk çıktının girişi olarak kullandığımız için, çıktının zaman damgaları bundan sonra başlayacaktır.
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+Eğitim verileri için tahminleri çizin:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+Eğitim verileri için MAPE'yi yazdırın
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+Test verileri için tahminleri çizin
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+Test verileri için MAPE'yi yazdırın
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 Test veri setinde çok iyi bir sonuç elde ettiniz!
+
+### Tüm veri seti üzerinde model performansını kontrol edin [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 Very nice plots, indicating a model with good accuracy. Well done!
+
+---
+
+## 🚀Challenge
+
+- Try changing the hyperparameters (gamma, C, epsilon) while creating the model, and evaluate on the testing data to see which set of hyperparameters gives the best results. To know more about these hyperparameters, you can refer to the [document here](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel); see the sketch after this list for one way to set up such a search.
+- Try using different kernel functions for the model, and analyze their performance on the dataset. A helpful document can be found [here](https://scikit-learn.org/stable/modules/svm.html#kernel-functions).
+- Try using different values of `timesteps`, the number of steps the model looks back to make a prediction.
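+
+As a hedged starting point for the first task, here is a minimal sketch of a grid search over the SVR hyperparameters. It assumes the `x_train` and `y_train` arrays prepared earlier in the lesson (before the inverse scaling performed above), and the candidate values are arbitrary choices to experiment with:
+
+```python
+from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
+from sklearn.svm import SVR
+
+# Candidate values to try; widen or narrow these ranges as needed
+param_grid = {
+    'gamma': [0.1, 0.5, 1.0],
+    'C': [1, 10, 100],
+    'epsilon': [0.01, 0.05, 0.1],
+}
+
+# TimeSeriesSplit keeps the temporal order of samples during cross-validation
+search = GridSearchCV(SVR(kernel='rbf'), param_grid,
+                      cv=TimeSeriesSplit(n_splits=3),
+                      scoring='neg_mean_absolute_error')
+search.fit(x_train, y_train.ravel())
+print(search.best_params_)
+```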
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## Review & Self Study
+
+This lesson was to introduce the application of SVR for time series forecasting. To read more about SVR, you can refer to [this blog](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/). This [documentation on scikit-learn](https://scikit-learn.org/stable/modules/svm.html) provides a more comprehensive explanation of SVMs in general, [SVRs](https://scikit-learn.org/stable/modules/svm.html#regression), and also other implementation details such as the different [kernel functions](https://scikit-learn.org/stable/modules/svm.html#kernel-functions) that can be used, and their parameters.
+
+## Assignment
+
+[A new SVR model](assignment.md)
+
+
+
+## Credits
+
+[^1]: The text, code and output in this section was contributed by [@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)
+[^2]: The text, code and output in this section was taken from [ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/3-SVR/assignment.md b/translations/tr/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..bc7e1849a
--- /dev/null
+++ b/translations/tr/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# A new SVR model
+
+## Instructions [^1]
+
+Now that you have built an SVR model, build a new one with fresh data (try one of [these datasets from Duke](http://www2.stat.duke.edu/~mw/ts_data_sets.html)). Annotate your work in a notebook, visualize the data and your model, and test its accuracy using appropriate plots and MAPE. Also try tweaking the different hyperparameters and using different values for the timesteps.
+
+## Rubric [^1]
+
+| Criteria | Exemplary                                                   | Adequate                                                  | Needs Improvement                   |
+| -------- | ----------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------- |
+|          | A notebook is presented with an SVR model built, tested and explained with visualizations, and with its accuracy stated. | The notebook presented is not annotated or contains bugs. | An incomplete notebook is presented. |
+
+[^1]: The text in this section was based on the [assignment from ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/7-TimeSeries/README.md b/translations/tr/7-TimeSeries/README.md
new file mode 100644
index 000000000..a9b2e34fa
--- /dev/null
+++ b/translations/tr/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# Introduction to time series forecasting
+
+What is time series forecasting? It's about predicting future events by analyzing trends of the past.
+
+## Regional topic: worldwide electricity usage ✨
+
+In these two lessons, you will be introduced to time series forecasting, a somewhat lesser known area of machine learning that is nevertheless extremely valuable for industry and business applications, among other fields. While neural networks can be used to enhance the utility of these models, we will study them in the context of classical machine learning, as models help predict future performance based on the past.
+
+Our regional focus is electrical usage in the world, an interesting dataset to learn about forecasting future power usage based on patterns of past load. You can see how this kind of forecasting can be extremely helpful in a business environment.
+
+
+
+Photo by [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) of electrical towers on a road in Rajasthan on [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## Lessons
+
+1. [Introduction to time series forecasting](1-Introduction/README.md)
+2. [Building ARIMA time series models](2-ARIMA/README.md)
+3. [Building a Support Vector Regressor for time series forecasting](3-SVR/README.md)
+
+## Credits
+
+"Introduction to time series forecasting" was written with ⚡️ by [Francesca Lazzeri](https://twitter.com/frlazzeri) and [Jen Looper](https://twitter.com/jenlooper). The notebooks first appeared online in the [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) originally written by Francesca Lazzeri. The SVR lesson was written by [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/1-QLearning/README.md b/translations/tr/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..cdc4292d3
--- /dev/null
+++ b/translations/tr/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,59 @@
+## Checking the policy
+
+Since the Q-Table lists the "attractiveness" of each action at each state, it is quite easy to use it to define efficient navigation in our world. In the simplest case, we can select the action corresponding to the highest Q-Table value: (code block 9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> If you try the code above several times, you may notice that sometimes it "hangs", and you need to press the STOP button in the notebook to interrupt it. This happens because there could be situations when two states "point" to each other in terms of optimal Q-Value, in which case the agent ends up moving between those states indefinitely.
+
+## 🚀Challenge
+
+> **Task 1:** Modify the `walk` function to limit the maximum length of the path by a certain number of steps (say, 100), and watch the code above return this value from time to time.
+
+> **Task 2:** Modify the `walk` function so that it does not go back to the places where it has already been previously. This will prevent `walk` from looping, however, the agent can still end up being "trapped" in a location from which it is unable to escape.
+
+## Navigation
+
+A better navigation policy would be the one that we used during training, which combines exploitation and exploration. In this policy, we will select each action with a certain probability, proportional to the values in the Q-Table. This strategy may still result in the agent returning back to a position it has already explored, but, as you can see from the code below, it results in a very short average path to the desired location (remember that `print_statistics` runs the simulation 100 times): (code block 10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+After running this code, you should get a much smaller average path length than before, in the range of 3-6.
+
+## Investigating the learning process
+
+As we have mentioned, the learning process is a balance between exploration and exploitation of gained knowledge about the structure of the problem space. We have seen that the results of learning (the ability to help an agent find a short path to the goal) have improved, but it is also interesting to observe how the average path length behaves during the learning process:
+
+The learnings can be summarized as:
+
+- **Average path length increases**. What we see here is that at first, the average path length increases. This is probably due to the fact that when we know nothing about the environment, we are likely to get trapped in bad states, water or the wolf. As we learn more and start using this knowledge, we can explore the environment for longer, but we still do not know very well where the apples are.
+
+- **Path length decreases as we learn more**. Once we learn enough, it becomes easier for the agent to achieve the goal, and the path length starts to decrease. However, we are still open to exploration, so we often diverge away from the best path and explore new options, making the path longer than optimal.
+
+- **Length increases abruptly**. What we also observe on this graph is that at some point, the length increased abruptly. This indicates the stochastic nature of the process, and that we can at some point "spoil" the Q-Table coefficients by overwriting them with new values. This should ideally be minimized by decreasing the learning rate (for example, towards the end of training, we only adjust Q-Table values by a small amount).
+
+Overall, it is important to remember that the success and quality of the learning process significantly depends on parameters such as the learning rate, learning rate decay, and discount factor. Those are often called **hyperparameters**, to distinguish them from **parameters**, which we optimize during training (for example, Q-Table coefficients). The process of finding the best hyperparameter values is called **hyperparameter optimization**, and it deserves a separate topic.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## Assignment
+[A More Realistic World](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/1-QLearning/assignment.md b/translations/tr/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..c2c6c4a12
--- /dev/null
+++ b/translations/tr/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,28 @@
+# A More Realistic World
+
+In our situation, Peter was able to move around almost without getting tired or hungry. In a more realistic world, he has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic, by implementing the following rules:
+
+1. By moving from one place to another, Peter loses **energy** and gains some **fatigue**.
+2. Peter can gain more energy by eating apples.
+3. Peter can get rid of fatigue by resting under the tree or on the grass (i.e. walking into a board location with a tree or grass - a green field)
+4. Peter needs to find and kill the wolf.
+5. In order to kill the wolf, Peter needs to have certain levels of energy and fatigue, otherwise he loses the battle.
+
+## Instructions
+
+Use the original [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) notebook as a starting point for your solution.
+
+Modify the reward function according to the rules of the game as stated above, run the reinforcement learning algorithm to learn the best strategy for winning the game, and compare the results of the random walk with your algorithm in terms of the number of games won and lost.
+
+> **Note**: In your new world, the state is more complex, and in addition to the human position also includes fatigue and energy levels. You may choose to represent the state as a tuple (Board,energy,fatigue), or define a class for the state (you may also want to derive it from `Board`), or even modify the original `Board` class inside [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py). A hedged sketch of the tuple approach follows below.
+
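+For illustration only, here is a minimal sketch of the tuple-based state representation; the bucketing thresholds are hypothetical choices, not part of the original lesson code:
+
+```python
+# Hypothetical helper: bundle the board position with coarse energy/fatigue
+# levels so that the state stays discrete enough for a Q-Table.
+def make_state(m, energy, fatigue):
+    # Bucket the continuous-ish quantities into a few levels (thresholds are arbitrary)
+    energy_level = min(int(energy // 25), 3)    # 0..3
+    fatigue_level = min(int(fatigue // 25), 3)  # 0..3
+    return (m.human, energy_level, fatigue_level)
+```
+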
+In your solution, please keep the code responsible for the random walk strategy, and compare the results of your algorithm with the random walk at the end.
+
+> **Note**: You may need to adjust hyperparameters to make it work, especially the number of epochs. Because the success of the game (fighting the wolf) is a rare event, you can expect much longer training time.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+|           | A notebook is presented with the definition of new world rules, the Q-Learning algorithm and some textual explanations. Q-Learning is able to significantly improve the results compared to the random walk. | A notebook is presented, Q-Learning is implemented and improves results compared to the random walk, but not significantly; or the notebook is poorly documented and the code is not well-structured | Some attempt to re-define the rules of the world is made, but the Q-Learning algorithm does not work, or the reward function is not fully defined |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/tr/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..29676d329
--- /dev/null
+++ b/translations/tr/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/tr/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..c1ca5e02a
--- /dev/null
+++ b/translations/tr/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/2-Gym/README.md b/translations/tr/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..f9ea78a3f
--- /dev/null
+++ b/translations/tr/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,343 @@
+# CartPole Skating
+
+The problem we have been solving in the previous lesson might seem like a toy problem, not really applicable to real life scenarios. This is not the case, because many real world problems also share this scenario - including playing Chess or Go. They are similar, because we also have a board with given rules and a **discrete state**.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## Introduction
+
+In this lesson we will apply the same principles of Q-Learning to a problem with **continuous state**, i.e. a state that is given by one or more real numbers. We will deal with the following problem:
+
+> **Problem**: If Peter wants to escape from the wolf, he needs to be able to move faster. We will see how Peter can learn to skate, in particular, to keep balance, using Q-Learning.
+
+
+
+> Peter and his friends get creative to escape the wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+We will use a simplified version of balancing known as the **CartPole** problem. In the cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a vertical pole on top of the slider as it moves.
+
+## Prerequisites
+
+In this lesson, we will be using a library called **OpenAI Gym** to simulate different **environments**. You can run this lesson's code locally (e.g. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
+
+## OpenAI Gym
+
+In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is called a [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using this gym we can create different **environments**, from a cartpole simulation to Atari games.
+
+> **Note**: You can see other environments available from OpenAI Gym [here](https://gym.openai.com/envs/#classic_control).
+
+First, let's install the gym and import the required libraries (code block 1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## Exercise - initialize a cartpole environment
+
+To work with the cartpole balancing problem, we need to initialize the corresponding environment. Each environment is associated with:
+
+- An **observation space** that defines the structure of information that we receive from the environment. For the cartpole problem, we receive the position of the pole, velocity and some other values.
+
+- An **action space** that defines possible actions. In our case the action space is discrete, and consists of two actions - **left** and **right**. (code block 2)
+
+1. To initialize, type the following code:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`.
+
+1. Run the code below and see what it leads to.
+
+    ✅ Remember that it is preferable to run this code on a local Python installation! (code block 3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+    You should see something similar to this:
+
+ 
+
+1. During the simulation, we need to get observations in order to decide how to act. In fact, the step function returns current observations, a reward function, and a done flag that indicates whether it makes sense to continue the simulation or not: (code block 4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+    You should see something like this in the notebook output:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+    The observation vector that is returned at each step of the simulation contains the following values:
+    - position of cart
+    - velocity of cart
+    - angle of pole
+    - rotation rate of pole
+
+1. Get the min and max values of those numbers: (code block 5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+    You may also notice that the reward value at each simulation step is always 1. This is because our goal is to survive as long as possible, i.e. keep the pole in a reasonably vertical position for the longest period of time.
+
+    ✅ In fact, the CartPole simulation is considered solved if we manage to get an average reward of 195 over 100 consecutive trials.
+
+## State discretization
+
+In Q-Learning, we need to build a Q-Table that defines what to do at each state. To be able to do this, we need the state to be **discrete**, more precisely, it should contain a finite number of discrete values. Thus, we need to **discretize** our observations, mapping them to a finite set of states.
+
+There are a few ways we can do this:
+
+- **Divide into bins**. If we know the interval of a certain value, we can divide this interval into a number of **bins**, and then replace the value by the number of the bin that it belongs to. This can be done using the numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) method. In this case, we will precisely know the state size, because it will depend on the number of bins we select for digitalization.
+
+✅ We can use linear interpolation to bring values to some finite interval (say, from -20 to 20), and then convert numbers to integers by rounding them. This gives us a bit less control on the size of the state, especially if we do not know the exact ranges of input values. For example, in our case 2 of the 4 values do not have upper/lower bounds on their values, which may result in an infinite number of states.
+
+In our example, we will go with the second approach. As you may notice later, despite the undefined upper/lower bounds, those values rarely take values outside of certain finite intervals, thus those states with extreme values will be very rare.
+
+1. Here is the function that will take the observation from our model and produce a tuple of 4 integer values: (code block 6)
+
+ ```python
+ def discretize(x):
+        # np.int was removed in recent NumPy versions; the builtin int works here
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. Let's also explore another discretization method using bins: (code block 7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. Let's now run a short simulation and observe those discrete environment values. Feel free to try both `discretize` and `discretize_bins` and see if there is a difference.
+
+    ✅ discretize_bins returns the bin number, which is 0-based. Thus for values of the input variable around 0 it returns the number from the middle of the interval (10). In discretize, we did not care about the range of output values, allowing them to be negative, thus the state values are not shifted, and 0 corresponds to 0. (code block 8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+    ✅ Uncomment the line starting with env.render if you want to see how the environment executes. Otherwise you can execute it in the background, which is faster. We will use this "invisible" execution during our Q-Learning process.
+
+## The Q-Table structure
+
+In the previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent the Q-Table by a numpy tensor with a shape of 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent the state by an array of shape 20x20x10x10x2 (here 2 is the dimension of the action space, and the first dimensions correspond to the number of bins we have selected to use for each of the parameters in the observation space).
+
+However, sometimes the precise dimensions of the observation space are not known. In the case of the `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bound. Thus, we will use a slightly different approach and represent the Q-Table by a dictionary.
+
+1. Use the pair *(state,action)* as the dictionary key, and the value would correspond to the Q-Table entry value. (code block 9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+    Here we also define a function `qvalues()`, which returns a list of Q-Table values for a given state that corresponds to all possible actions. If the entry is not present in the Q-Table, we will return 0 as the default.
+
+## Let's start Q-Learning
+
+Now we are ready to teach Peter to balance!
+
+1. First, let's set some hyperparameters: (code block 10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+    Here, `alpha` is the **learning rate** that defines to which extent we should adjust the current values of Q-Table at each step. In the previous lesson we started with 1, and then decreased `alpha` to lower values during training. In this example we will keep it constant just for simplicity, and you can experiment with adjusting `alpha` values later.
+
+ `gamma` is the **discount factor** that shows to which extent we should prioritize future reward over current reward.
+
+ `epsilon` is the **exploration/exploitation factor** that determines whether we should prefer exploration to exploitation or vice versa. In our algorithm, we will in `epsilon` percent of the cases select the next action according to Q-Table values, and in the remaining number of cases we will execute a random action. This will allow us to explore areas of the search space that we have never seen before.
+
+ ✅ In terms of balancing - choosing random action (exploration) would act as a random punch in the wrong direction, and the pole would have to learn how to recover the balance from those "mistakes"
+
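+    For reference, `alpha` and `gamma` enter through the standard Q-Learning update rule (the textbook formulation, shown here for clarity):
+
+    ```latex
+    Q(s,a) \leftarrow (1-\alpha)\,Q(s,a) + \alpha\,\left(r + \gamma \max_{a'} Q(s',a')\right)
+    ```
+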
+### Improve the algorithm
+
+We can also make two improvements to our algorithm from the previous lesson:
+
+- **Calculate average cumulative reward**, over a number of simulations. We will print the progress every 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 points - we can consider the problem solved, with even higher quality than required.
+
+- **Calculate maximum average cumulative result**, `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of Q-Table that correspond to the best model observed during training.
+
+1. Collect all cumulative rewards at each simulation in the `rewards` vector for further plotting. (code block 11)
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+
+ Qmax = 0
+ cum_rewards = []
+ rewards = []
+ for epoch in range(100000):
+ obs = env.reset()
+ done = False
+ cum_reward=0
+ # == do the simulation ==
+ while not done:
+ s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - choose the action according to Q-Table probabilities
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - randomly choose an action
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == Periodically print results and calculate average reward ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+ ```
+
+What you may notice from those results:
+
+- **Close to our goal**. We are very close to achieving the goal of getting 195 cumulative rewards over 100+ consecutive runs of the simulation, or we may have actually achieved it! Even if we get smaller numbers, we still do not know for sure, because we average over 5000 runs, while only 100 runs are required by the formal criteria.
+
+- **Reward starts to drop**. Sometimes the reward starts to drop, which means that we can "destroy" already learnt values in the Q-Table with ones that make the situation worse.
+
+This observation is more clearly visible if we plot the training progress.
+
+## Plotting training progress
+
+During training, we have collected the cumulative reward value at each of the iterations into the `rewards` vector. Here is how it looks when we plot it against the iteration number:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+From this graph, it is not possible to tell anything, because due to the nature of the stochastic training process the length of training sessions varies greatly. To make more sense of this graph, we can calculate the **running average** over a series of experiments, let's say 100. This can be done conveniently using `np.convolve`: (code block 12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## Varying hyperparameters
+
+To make learning more stable, it makes sense to adjust some of our hyperparameters during training. In particular:
+
+- **For the learning rate**, `alpha`, we may start with values close to 1, and then keep decreasing the parameter. With time, we will be getting good probability values in the Q-Table, and thus we should be adjusting them slightly, and not overwriting them completely with new values.
+
+- **Increase epsilon**. We may want to increase `epsilon` slowly, in order to explore less and exploit more. It probably makes sense to start with a lower value of `epsilon`, and move up to almost 1. A sketch of such a schedule follows below.
+
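+For illustration, here is a minimal sketch of how such schedules could be wired into the training loop above; the decay and growth constants are assumptions to experiment with, not values from the lesson:
+
+```python
+alpha = 1.0      # start aggressive, then decay
+epsilon = 0.3    # start exploratory, then shift towards exploitation
+
+for epoch in range(100000):
+    # ... run one simulation episode as in code block 11 ...
+    alpha = max(0.1, alpha * 0.99995)      # slowly decay the learning rate
+    epsilon = min(0.99, epsilon + 1e-5)    # slowly increase exploitation
+```
+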
+> **Task 1**: Play with the hyperparameter values and see if you can achieve a higher cumulative reward. Are you getting above 195?
+
+> **Task 2**: To formally solve the problem, you need to get a 195 average reward across 100 consecutive runs. Measure that during training and make sure that you have formally solved the problem!
+
+## Seeing the result in action
+
+It would be interesting to actually see how the trained model behaves. Let's run the simulation and follow the same action selection strategy as during training, sampling according to the probability distribution in the Q-Table: (code block 13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+You should see something like this:
+
+
+
+---
+
+## 🚀Challenge
+
+> **Task 3**: Here, we were using the final copy of the Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table in the `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.
+
+> **Task 4**: Here we were not selecting the best action on each step, but rather sampling with the corresponding probability distribution. Would it make more sense to always select the best action, with the highest Q-Table value? This can be done by using the `np.argmax` function to find out the action number corresponding to the highest Q-Table value. Implement this strategy and see if it improves the balancing.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## Assignment
+[Train a Mountain Car](assignment.md)
+
+## Conclusion
+
+We have now learned how to train agents to achieve good results just by providing them with a reward function that defines the desired state of the game, and by giving them an opportunity to intelligently explore the search space. We have successfully applied the Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions.
+
+It's important to also study situations where the action state is also continuous, and when the observation space is much more complex, such as the image from the Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of our more advanced AI course.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/2-Gym/assignment.md b/translations/tr/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..d74768d96
--- /dev/null
+++ b/translations/tr/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# Train Mountain Car
+
+[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
+
+## A Mountain Car Environment
+
+The [Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley:
+The goal is to get out of the valley and capture the flag, by doing at each step one of the following actions:
+
+| Value | Meaning |
+|---|---|
+| 0 | Accelerate to the left |
+| 1 | Do not accelerate |
+| 2 | Accelerate to the right |
+
+The main trick of this problem is, however, that the car's engine is not strong enough to climb the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum.
+
+The observation space consists of just two values:
+
+| Num | Observation  | Min | Max |
+|-----|--------------|-----|-----|
+| 0 | Car Position | -1.2 | 0.6 |
+| 1 | Car Velocity | -0.07 | 0.07 |
+
+The reward system for the mountain car is rather tricky:
+
+ * A reward of 0 is awarded if the agent reaches the flag (position = 0.5) on top of the mountain.
+ * A reward of -1 is awarded if the position of the agent is less than 0.5.
+
+The episode terminates if the car position is more than 0.5, or if the episode length is greater than 200. A hedged setup sketch follows below.
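+
+As a hedged starting point (not part of the original assignment), here is a minimal sketch of initializing the environment and discretizing its two-value state; the bin counts are assumptions to tune:
+
+```python
+import gym
+import numpy as np
+
+env = gym.make("MountainCar-v0")
+print(env.action_space)       # Discrete(3)
+print(env.observation_space)  # Box with (position, velocity)
+
+# Bin edges over the documented ranges; 20 bins each is an arbitrary choice
+position_bins = np.linspace(-1.2, 0.6, 20)
+velocity_bins = np.linspace(-0.07, 0.07, 20)
+
+def discretize(obs):
+    pos, vel = obs
+    return (int(np.digitize(pos, position_bins)), int(np.digitize(vel, velocity_bins)))
+
+obs = env.reset()
+print(discretize(obs))
+```
+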
+## Instructions
+
+Adapt our reinforcement learning algorithm to solve the mountain car problem. Start with the existing [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) code, substitute the new environment, change the state discretization functions, and try to make the existing algorithm train with minimal code modifications. Optimize the result by adjusting the hyperparameters.
+
+> **Note**: Hyperparameter adjustment is likely to be needed to make the algorithm converge.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+|--------|-------|---------|----------------------|
+| | The Q-Learning algorithm is successfully adapted from the CartPole example, with minimal code modifications, and is able to solve the problem of capturing the flag under 200 steps. | A new Q-Learning algorithm has been adopted from the Internet, but is well-documented; or an existing algorithm adopted, but does not reach the desired results | The student was not able to successfully adopt any algorithm, but has made substantial steps towards the solution (implemented state discretization, Q-Table data structure, etc.) |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/tr/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..346279c88
--- /dev/null
+++ b/translations/tr/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/2-Gym/solution/R/README.md b/translations/tr/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..2d1eb433f
--- /dev/null
+++ b/translations/tr/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/8-Reinforcement/README.md b/translations/tr/8-Reinforcement/README.md
new file mode 100644
index 000000000..186d8d2a4
--- /dev/null
+++ b/translations/tr/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# Introduction to reinforcement learning
+
+Reinforcement learning, RL, is seen as one of the basic machine learning paradigms, next to supervised learning and unsupervised learning. RL is all about decisions: delivering the right decisions or at least learning from them.
+
+Imagine you have a simulated environment, such as the stock market. What happens if you impose a given regulation? Does it have a positive or negative effect? If something negative happens, you need to take this _negative reinforcement_, learn from it, and change course. If it's a positive outcome, you need to build on that _positive reinforcement_.
+
+
+
+> Peter and his friends need to escape the hungry wolf! Image by [Jen Looper](https://twitter.com/jenlooper)
+
+## Regional topic: Peter and the Wolf (Russia)
+
+[Peter and the Wolf](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) is a musical fairy tale written by the Russian composer [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev). It is a story about the young pioneer Peter, who bravely goes out of his house to the forest clearing to chase the wolf. In this section, we will train machine learning algorithms that will help Peter:
+
+- **Explore** the surrounding area and build an optimal navigation map
+- **Learn** how to use a skateboard and balance on it, in order to move around faster.
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 Click the image above to listen to Peter and the Wolf
+
+## Reinforcement learning
+
+In previous sections, you have seen two examples of machine learning problems:
+
+- **Supervised**, where we have datasets that suggest sample solutions to the problem we want to solve. [Classification](../4-Classification/README.md) and [regression](../2-Regression/README.md) are supervised learning tasks.
+- **Unsupervised**, in which we do not have labeled training data. The main example of unsupervised learning is [Clustering](../5-Clustering/README.md).
+
+In this section, we will introduce you to a new type of learning problem that does not require labeled training data. There are several types of such problems:
+
+- **[Semi-supervised learning](https://wikipedia.org/wiki/Semi-supervised_learning)**, where we have a lot of unlabeled data that can be used to pre-train the model.
+- **[Reinforcement learning](https://wikipedia.org/wiki/Reinforcement_learning)**, in which an agent learns how to behave by performing experiments in some simulated environment.
+
+### Example - computer game
+
+Suppose you want to teach a computer to play a game, such as chess, or [Super Mario](https://wikipedia.org/wiki/Super_Mario). For the computer to play a game, we need it to predict which move to make in each of the game states. While this may seem like a classification problem, it is not - because we do not have a dataset with states and corresponding actions. While we may have some data, like records of existing chess matches or recordings of players playing Super Mario, it is likely that that data will not sufficiently cover a large enough number of possible states.
+
+Instead of looking for existing game data, **Reinforcement Learning** (RL) is based on the idea of *making the computer play* many times and observing the result. Thus, to apply Reinforcement Learning, we need two things:
+
+- **An environment** and **a simulator** which allow us to play a game many times. This simulator would define all the game rules as well as possible states and actions.
+
+- **A reward function**, which would tell us how well we did during each move or game.
+
+The main difference between other types of machine learning and RL is that in RL we typically do not know whether we win or lose until we finish the game. Thus, we cannot say whether a certain move alone is good or not - we only receive a reward at the end of the game. Our goal is to design algorithms that will allow us to train a model under uncertain conditions. We will learn about one RL algorithm called **Q-learning**.
+
+## Lessons
+
+1. [Introduction to reinforcement learning and Q-Learning](1-QLearning/README.md)
+2. [Using the Gym simulation environment](2-Gym/README.md)
+
+## Credits
+
+"Introduction to Reinforcement Learning" was written with ♥️ by [Dmitry Soshnikov](http://soshnikov.com)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/9-Real-World/1-Applications/README.md b/translations/tr/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..e9a5c7553
--- /dev/null
+++ b/translations/tr/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# Postscript: Machine learning in the real world
+
+
+> Sketchnote by [Tomomi Imura](https://www.twitter.com/girlie_mac)
+
+In this curriculum, you have learned many ways to prepare data for training and create machine learning models. You built a series of classic regression, clustering, classification, natural language processing, and time series models. Congratulations! Now, you might be wondering what it's all for... what are the real world applications for these models?
+
+While a lot of interest in industry has been garnered by AI, which usually leverages deep learning, there are still valuable applications for classical machine learning models. You might even use some of these applications today! In this lesson, you'll explore how eight different industries and subject-matter domains use these types of models to make their applications more performant, reliable, intelligent, and valuable to users.
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 Finance
+
+The finance sector offers many opportunities for machine learning. Many problems in this area lend themselves to being modeled and solved by using ML.
+
+### Credit card fraud detection
+
+We learned about [k-means clustering](../../5-Clustering/2-K-Means/README.md) earlier in the course, but how can it be used to solve problems related to credit card fraud?
+
+K-means clustering comes in handy during a credit card fraud detection technique called **outlier detection**. Outliers, or deviations in observations about a set of data, can tell us if a credit card is being used in a normal capacity or if something unusual is going on. As shown in the paper linked below, you can sort credit card data using a k-means clustering algorithm and assign each transaction to a cluster based on how much of an outlier it appears to be. Then, you can evaluate the riskiest clusters for fraudulent versus legitimate transactions.
+[Reference](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
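+
+As a hedged illustration of the idea (a toy sketch, not code from the referenced paper), transactions that land in very small clusters can be flagged as candidates for review:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+# Toy transaction features (amount, hour-of-day); real systems use many more
+rng = np.random.default_rng(0)
+normal = rng.normal(loc=[25.0, 14.0], scale=[10.0, 3.0], size=(200, 2))
+fraud = np.array([[980.0, 3.0], [1020.0, 4.0]])  # a couple of extreme outliers
+X = np.vstack([normal, fraud])
+
+kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
+labels = kmeans.labels_
+
+# Very small clusters are candidate outlier groups worth a manual review
+sizes = np.bincount(labels, minlength=3)
+print("cluster sizes:", sizes)
+print("suspicious transactions:\n", X[sizes[labels] < 0.05 * len(X)])
+```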
+
+### Wealth management
+
+In wealth management, an individual or firm handles investments on behalf of their clients. Their job is to sustain and grow wealth in the long term, so it is essential to choose investments that perform well.
+
+One way to evaluate how a particular investment performs is through statistical regression. [Linear regression](../../2-Regression/1-Tools/README.md) is a valuable tool for understanding how a fund performs relative to some benchmark. We can also deduce whether or not the results of the regression are statistically significant, and how much they would affect a client's investments. You could even further expand your analysis using multiple regression, where additional risk factors can be taken into account. For an example of how this would work for a specific fund, check out the paper below on evaluating fund performance using regression.
+[Reference](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
+
+## 🎓 Education
+
+The education sector is also a very interesting area where ML can be applied. There are interesting problems to be tackled, such as detecting cheating on tests or essays, or managing bias, unintentional or not, in the correction process.
+
+### Predicting student behavior
+
+[Coursera](https://coursera.com), an online open course provider, has a great tech blog where they discuss many engineering decisions. In this case study, they plotted a regression line to explore any correlation between a low NPS (Net Promoter Score) rating and course retention or drop-off.
+[Reference](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### Mitigating bias
+
+[Grammarly](https://grammarly.com), a writing assistant that checks for spelling and grammar errors in its products, uses sophisticated [natural language processing systems](../../6-NLP/README.md). They published an interesting case study in their tech blog about how they dealt with gender bias in machine learning, which you learned about in our [introductory fairness lesson](../../1-Introduction/3-fairness/README.md).
+[Reference](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 Retail
+
+The retail sector can definitely benefit from the use of ML, with everything from creating a better customer journey to stocking inventory in an optimal way.
+
+### Personalizing the customer journey
+
+At Wayfair, a company that sells home goods, helping customers to find the right products for their taste and needs is paramount. In this article, engineers from the company describe how they use ML and NLP to surface the right results for customers. Notably, their Query Intent Engine has been built to use entity extraction, classifier training, asset and opinion extraction, and sentiment tagging on customer reviews. This is a classic use case of how NLP works in online retail.
+[Reference](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### Inventory management
+
+Innovative, nimble companies like [StitchFix](https://stitchfix.com), a box service that ships clothing to consumers, rely heavily on ML for recommendations and inventory management. Their styling teams work together with their merchandising teams, in fact: "one of our data scientists tinkered with a genetic algorithm and applied it to apparel to predict what would be a successful piece of clothing that doesn't exist today. We brought that to the merchandise team and now they can use that as a tool."
+[Reference](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 Health Care
+
+The health care sector can leverage ML to optimize research tasks as well as logistic problems like readmitting patients or stopping diseases from spreading.
+
+### Managing clinical trials
+
+Toxicity in clinical trials is a major concern to drug makers. How much toxicity is tolerable? In this study, analyzing various clinical trial methods led to the development of a new approach for predicting the odds of clinical trial outcomes. Specifically, they were able to use random forest to produce a [classifier](../../4-Classification/README.md) that is able to distinguish between groups of drugs.
+[Reference](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### Hospital readmission management
+
+Hospital care is costly, especially when patients have to be readmitted. This paper discusses a company that uses ML to predict readmission potential using [clustering](../../5-Clustering/README.md) algorithms. These clusters help analysts to "discover groups of readmissions that may share a common cause".
+[Reference](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### Disease management
+
+The recent pandemic has shone a bright light on the ways that machine learning can aid in stopping the spread of disease. In this article, you'll recognize the use of ARIMA, logistic curves, linear regression, and SARIMA. "This work is an attempt to calculate the rate of spread of this virus and thus to predict the deaths, recoveries, and confirmed cases, so that it may help us to prepare better and survive."
+[Reference](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 Ecology and Green Tech
+
+Nature and ecology consist of many sensitive systems where the interplay between animals and nature comes into focus. It's important to be able to measure these systems accurately and act appropriately if something happens, like a forest fire or a drop in the animal population.
+
+### Forest management
+
+You learned about [Reinforcement Learning](../../8-Reinforcement/README.md) in previous lessons. It can be very useful when trying to predict patterns in nature. In particular, it can be used to track ecological problems like forest fires and the spread of invasive species. In Canada, a group of researchers used Reinforcement Learning to build forest wildfire dynamics models from satellite images. Using an innovative "spatially spreading process (SSP)", they envisioned a forest fire as "the agent at any cell in the landscape." "The set of actions the fire can take from a location at any point in time includes spreading north, south, east, or west or not spreading.
+
+This approach inverts the usual RL setup since the dynamics of the corresponding Markov Decision Process (MDP) is a known function." Read more about the classic algorithms used by this group at the link below.
+[Reference](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### Motion sensing of animals
+
+While deep learning has created a revolution in visually tracking animal movements (you can build your own [polar bear tracker](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott) here), classic ML still has a place in this task.
+
+Sensors to track the movements of farm animals and IoT make use of this type of visual processing, but more basic ML techniques are useful to preprocess data. For example, in this paper, sheep postures were monitored and analyzed using various classifier algorithms. You might recognize the ROC curve on page 335.
+[Reference](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ Energy Management
+
+In our lessons on [time series forecasting](../../7-TimeSeries/README.md), we invoked the concept of smart parking meters to generate revenue for a town based on understanding supply and demand. This article discusses in detail how clustering, regression and time series forecasting combined to help predict future energy use in Ireland, based on smart metering.
+[Reference](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 Insurance
+
+The insurance sector is another sector that uses ML to construct and optimize viable financial and actuarial models.
+
+### Volatility management
+
+MetLife, a life insurance provider, is forthcoming with the way they analyze and mitigate volatility in their financial models. In this article you'll notice binary and ordinal classification visualizations. You'll also discover forecasting visualizations.
+[Reference](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 Arts, Culture, and Literature
+
+In the arts, for example in journalism, there are many interesting problems. Detecting fake news is a huge problem, as it has been proven to influence the opinion of people and even to topple democracies. Museums can also benefit from using ML in everything from finding links between artifacts to resource planning.
+
+### Fake news detection
+
+Detecting fake news has become a game of cat and mouse in today's media. In this article, researchers suggest that a system combining several of the ML techniques we have studied can be tested and the best model deployed: "This system is based on natural language processing to extract features from the data and then these features are used for the training of machine learning classifiers such as Naive Bayes, Support Vector Machine (SVM), Random Forest (RF), Stochastic Gradient Descent (SGD), and Logistic Regression (LR)."
+[Reference](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
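+As a hedged sketch of that kind of pipeline (TF-IDF features feeding one of the named classifiers, with made-up toy data rather than the paper's dataset):
+
+```python
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+texts = ["Scientists publish peer-reviewed study results",
+         "Miracle cure doctors don't want you to know about"]
+labels = [0, 1]  # 0 = real, 1 = fake (toy labels for illustration)
+
+# Extract features with TF-IDF, then train a Logistic Regression classifier
+clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
+clf.fit(texts, labels)
+print(clf.predict(["Shocking secret cure revealed"]))
+```
+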
+This article shows how combining different ML domains can produce interesting results that can help stop fake news from spreading and creating real damage; in this case, the impetus was the spread of rumors about COVID treatments that incited mob violence.
+
+### Museum ML
+
+Museums are at the cusp of an AI revolution in which cataloging and digitizing collections and finding links between artifacts is becoming easier as technology advances. Projects such as [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) are helping unlock the mysteries of inaccessible collections such as the Vatican Archives. But the business side of museums benefits from ML models as well.
+
+For example, the Art Institute of Chicago built models to predict what audiences are interested in and when they will attend exhibitions. The goal is to create individualized and optimized visitor experiences each time a user visits the museum. "During fiscal 2017, the model predicted attendance and admissions within 1 percent of accuracy," says Andrew Simnick, senior vice president at the Art Institute of Chicago.
+[Reference](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 Marketing
+
+### Customer segmentation
+
+The most effective marketing strategies target customers in different ways based on various groupings. This article discusses the uses of clustering algorithms to support differentiated marketing, which helps companies improve brand recognition, reach more customers, and make more money.
+[Reference](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
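+
+A minimal segmentation sketch in the spirit of the article, using k-means from our clustering lessons; the file name and feature columns are hypothetical placeholders, and the number of clusters is a choice you would tune for your own data.
+
+```python
+import pandas as pd
+from sklearn.cluster import KMeans
+from sklearn.preprocessing import StandardScaler
+
+customers = pd.read_csv("customers.csv")  # hypothetical customer dataset
+features = customers[["annual_spend", "visit_frequency", "avg_basket_size"]]
+
+# Scale the features so no single one dominates the distance metric
+scaled = StandardScaler().fit_transform(features)
+
+# Group customers into four segments (k=4 is an assumption, not a rule)
+kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)
+customers["segment"] = kmeans.labels_
+
+# Profile each segment to decide how to market to it
+print(customers.groupby("segment")[features.columns].mean())
+```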
+
+## 🚀 Challenge
+
+Identify another sector that benefits from some of the techniques you learned in this curriculum, and discover how it uses ML.
+
+## [Post-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## Review & Self Study
+
+The Wayfair data science team has several interesting videos on how they use ML at their company. It's [worth taking a look](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## Assignment
+
+[A ML scavenger hunt](assignment.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/9-Real-World/1-Applications/assignment.md b/translations/tr/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..b0209bfde
--- /dev/null
+++ b/translations/tr/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# A ML Scavenger Hunt
+
+## Instructions
+
+In this lesson, you learned about many real-life use cases that were solved using classic ML. While the use of deep learning, new techniques and tools in AI, and the adoption of neural networks have helped speed up the production of tools in these sectors, classic ML using the techniques in this curriculum still holds great value.
+
+In this assignment, imagine that you are participating in a hackathon. Use what you learned in the curriculum to propose a solution using classic ML to solve a problem in one of the sectors discussed in this lesson. Create a presentation in which you discuss how you will implement your idea. Bonus points if you can gather sample data and build an ML model to support your concept!
+
+## Rubric
+
+| Criteria  | Exemplary                                                        | Adequate                                           | Needs Improvement          |
+| --------- | ---------------------------------------------------------------- | -------------------------------------------------- | -------------------------- |
+|           | A PowerPoint presentation is presented - bonus for building a model | A non-innovative, basic presentation is presented  | The work is incomplete     |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/9-Real-World/2-Debugging-ML-Models/README.md b/translations/tr/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..70d25e2e8
--- /dev/null
+++ b/translations/tr/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,130 @@
+# Postscript: Model Debugging in Machine Learning using Responsible AI dashboard components
+
+## [Pre-lecture quiz](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## Introduction
+
+Machine learning impacts our everyday lives. AI is finding its way into some of the most important systems that affect us as individuals and as a society, such as healthcare, finance, education, and employment. For instance, systems and models are involved in daily decision-making tasks such as health care diagnoses or detecting fraud. Consequently, the advances in AI and its accelerated adoption are being met with evolving societal expectations and growing regulation. We constantly see areas where AI systems miss expectations and expose new challenges, and governments are starting to regulate AI solutions. So it is important that these models are analyzed to provide fair, reliable, inclusive, transparent, and accountable outcomes for everyone.
+
+In this curriculum, we will look at practical tools that can be used to assess whether a model has responsible AI issues. Traditional machine learning debugging techniques tend to be based on quantitative calculations such as aggregated accuracy or average error loss. Imagine what can happen when the data you are using to build these models lacks certain demographics, such as race, gender, political view, or religion. What about when the model's output is interpreted to favor a certain demographic? This can introduce an over- or under-representation of these sensitive feature groups, resulting in fairness, inclusiveness, or reliability issues from the model. Another factor is that machine learning models are considered black boxes, which makes it hard to understand and explain what drives a model's prediction. All of these are challenges data scientists and AI developers face when they do not have adequate tools to debug and assess the fairness or trustworthiness of a model.
+
+In this lesson, you will learn about debugging your models using:
+
+- **Error Analysis**: identify where in your data distribution the model has high error rates.
+- **Model Overview**: perform comparative analysis across different data cohorts to discover disparities in your model's performance metrics.
+- **Data Analysis**: investigate where there could be an over- or under-representation of your data that can skew your model to favor one data demographic over another.
+- **Feature Importance**: understand which features are driving your model's predictions on a global or local level.
+
+## Prerequisite
+
+As a prerequisite, please review [Responsible AI tools for developers](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard).
+
+
+## Error Analysis
+
+Traditional model performance metrics used for measuring accuracy are mostly calculations based on correct vs incorrect predictions. For example, determining that a model is accurate 89% of the time with an error loss of 0.001 can be considered good performance. But errors are often not distributed uniformly in your underlying dataset: you may get an 89% model accuracy score yet discover that there are regions of your data for which the model is failing 42% of the time. The consequence of these failure patterns with certain data groups can lead to fairness or reliability issues. It is essential to understand the areas where the model is performing well or not, because the data regions where your model has a high number of inaccuracies may turn out to be an important data demographic.
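+
+Here is a minimal sketch, on synthetic data rather than the RAI dashboard itself, of why an aggregate score can hide exactly this kind of cohort-level failure. The "cohort" column is a hypothetical grouping.
+
+```python
+import pandas as pd
+
+# 80 examples in cohort A, 20 in cohort B (synthetic, for illustration only)
+results = pd.DataFrame({
+    "cohort":  ["A"] * 80 + ["B"] * 20,
+    "correct": [True] * 75 + [False] * 5 + [True] * 14 + [False] * 6,
+})
+
+print("Overall accuracy:", results["correct"].mean())  # 0.89 overall
+print(results.groupby("cohort")["correct"].mean())     # A: ~0.94, B: 0.70
+```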
+
+
+
+The Error Analysis component on the RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. This is useful in identifying features or areas with a high error rate in your dataset. By seeing where most of the model's inaccuracies are coming from, you can start investigating the root cause. You can also create cohorts of data to perform analysis on; these data cohorts help in the debugging process to determine why model performance is good in one cohort but erroneous in another.
+
+
+
+The visual indicators on the tree map help in locating the problem areas more quickly. For instance, the darker the shade of red on a tree node, the higher the error rate.
+
+A heat map is another visualization functionality that users can use to investigate the error rate using one or two features, to find contributors to the model errors across the entire dataset or cohorts.
+
+
+
+Use error analysis when you need to:
+
+* Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.
+* Break down the aggregate performance metrics to automatically discover erroneous cohorts and inform your debugging steps.
+
+## Model Overview
+
+Evaluating the performance of a machine learning model requires getting a holistic understanding of its behavior. This can be achieved by reviewing more than one metric, such as error rate, accuracy, recall, precision, or MAE (Mean Absolute Error), to find disparities among performance metrics. One performance metric may look great, while inaccuracies can be exposed in another metric. In addition, comparing the metrics across the entire dataset or across cohorts helps shed light on where the model is performing well or poorly. This is especially important in seeing the model's performance among sensitive vs insensitive features (e.g., a patient's race, gender, or age) to uncover potential unfairness the model may have. For example, discovering that the model is more erroneous in a cohort that has sensitive features can reveal potential unfairness in the model.
+
+The Model Overview component of the RAI dashboard not only helps in analyzing the performance metrics of the data representation in a cohort, but it also gives users the ability to compare the model's behavior across different cohorts.
+
+
+
+The component's feature-based analysis functionality allows users to narrow down data subgroups within a particular feature to identify anomalies on a granular level. For example, the dashboard can automatically generate cohorts for a user-selected feature (e.g., *"time_in_hospital < 3"* or *"time_in_hospital >= 7"*). This enables a user to isolate a particular feature from a larger data group and see whether it is a key influencer of the model's erroneous outcomes.
+
+
+
+The Model Overview component supports two classes of disparity metrics:
+
+**Disparity in model performance**: These metrics calculate the disparity (difference) in the values of the selected performance metric across subgroups of data. Here are a few examples:
+
+* Disparity in accuracy rate
+* Disparity in error rate
+* Disparity in precision
+* Disparity in recall
+* Disparity in mean absolute error (MAE)
+
+**Disparity in selection rate**: This metric contains the difference in selection rate (favorable prediction) among subgroups. An example of this is the disparity in loan approval rates. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or the distribution of prediction values (in regression).
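+
+A hedged sketch of both disparity ideas on a toy dataset; the group labels and columns are hypothetical, and the dashboard computes richer variants of these quantities than this hand calculation.
+
+```python
+import pandas as pd
+
+# Tiny synthetic example: two subgroups with true labels and predictions
+df = pd.DataFrame({
+    "group":  ["F", "F", "F", "M", "M", "M", "M", "M"],
+    "y_true": [1, 0, 1, 1, 1, 0, 0, 1],
+    "y_pred": [1, 0, 0, 1, 1, 0, 1, 1],
+})
+
+# Disparity in a performance metric: per-group accuracy, max minus min
+correct = (df["y_true"] == df["y_pred"])
+accuracy = correct.groupby(df["group"]).mean()
+
+# Disparity in selection rate: share of favorable (label 1) predictions per group
+selection_rate = df.groupby("group")["y_pred"].mean()
+
+print("Accuracy disparity:      ", accuracy.max() - accuracy.min())
+print("Selection-rate disparity:", selection_rate.max() - selection_rate.min())
+```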
+
+## Data Analysis
+
+> "If you torture the data long enough, it will confess to anything" - Ronald Coase
+
+This statement sounds extreme, but it is true that data can be manipulated to support any conclusion, and such manipulation can sometimes happen unintentionally. As humans, we all have bias, and it is often difficult to consciously know when you are introducing bias into data. Guaranteeing fairness in AI and machine learning remains a complex challenge.
+
+Data is a big blind spot for traditional model performance metrics. You may have high accuracy scores, but this does not always reflect the underlying data bias that could be in your dataset. For example, if a dataset of employees has 27% women in executive positions in a company and 73% men at the same level, a job advertising AI model trained on this data may target a mostly male audience for senior-level job positions. Having this imbalance in the data skewed the model's prediction to favor one gender. This reveals a fairness issue where there is a gender bias in the AI model.
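+
+A quick sketch of how you might surface the kind of representation gap described above before training; the dataset file and column names are hypothetical placeholders.
+
+```python
+import pandas as pd
+
+hiring = pd.read_csv("executive_roles.csv")  # hypothetical dataset
+
+# Share of each gender in the training data (e.g. 0.73 vs 0.27)
+print(hiring["gender"].value_counts(normalize=True))
+
+# Representation broken down by the label the model will learn from
+print(pd.crosstab(hiring["gender"], hiring["hired"], normalize="index"))
+```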
+
+The Data Analysis component on the RAI dashboard helps identify areas of over- and under-representation in the dataset. It helps users diagnose the root cause of errors and fairness issues introduced by data imbalances or by a lack of representation of a particular data group. This gives users the ability to visualize datasets based on predicted and actual outcomes, error groups, and specific features. Sometimes discovering an underrepresented data group can also reveal that the model is not learning well, hence the high inaccuracies. A model with data bias is not just a fairness issue; it also shows that the model is not inclusive or reliable.
+
+
+
+Use data analysis when you need to:
+
+* Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts).
+* Understand the distribution of your dataset across different cohorts and feature groups.
+* Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution.
+* Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors.
+
+## Model Interpretability
+
+Machine learning models tend to be black boxes. Understanding which key data features drive a model's prediction can be challenging, yet it is important to be able to explain why a model makes a certain prediction. For example, if an AI system predicts that a diabetic patient is at risk of being readmitted to a hospital within 30 days, it should be able to provide the supporting data that led to its prediction. Having supporting data indicators brings transparency that helps clinicians or hospitals make well-informed decisions. In addition, being able to explain why a model made a prediction for an individual patient enables accountability with health regulations. When you are using machine learning models in ways that affect people's lives, it is crucial to understand and explain what influences a model's behavior. Model explainability and interpretability help answer questions in scenarios such as:
+
+* Model debugging: Why did my model make this mistake? How can I improve my model?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
+* Regulatory compliance: Does my model satisfy legal requirements?
+
+The Feature Importance component of the RAI dashboard helps you debug and get a comprehensive understanding of how a model makes predictions. It is also a useful tool for machine learning professionals and decision makers to explain and show evidence of the features influencing a model's behavior for regulatory compliance. Users can explore both global and local explanations to validate which features drive a model's predictions. Global explanations list the top features that affected a model's overall predictions. Local explanations display which features led to a model's prediction for an individual case. The ability to evaluate local explanations is also helpful in debugging or auditing a specific case, to better understand and interpret why a model made an accurate or inaccurate prediction.
+
+
+
+* Global explanations: For example, what features affect the overall behavior of a diabetes hospital readmission model?
+* Local explanations: For example, why was a diabetic patient over 60 years old with prior hospitalizations predicted to be readmitted or not readmitted to a hospital within 30 days?
+
+In the process of examining a model's performance across different cohorts, Feature Importance shows what level of impact a feature has across the cohorts. It helps reveal anomalies when comparing the level of influence a feature has in driving the model's erroneous predictions. The Feature Importance component can show which values in a feature positively or negatively influenced the model's outcome. For instance, if a model made an inaccurate prediction, the component gives you the ability to drill down and pinpoint which features or feature values drove the prediction. This level of detail helps not just in debugging but also provides transparency and accountability in auditing situations. Finally, the component can help you identify fairness issues. To illustrate, if a sensitive feature such as ethnicity or gender is highly influential in driving a model's prediction, this could be a sign of race or gender bias in the model.
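+
+As a minimal sketch of the *global* view, here is one common technique, permutation importance from scikit-learn, applied to sklearn's toy diabetes dataset as a stand-in for the readmission data; the RAI dashboard uses its own explainers and also offers the local, per-prediction views that this sketch does not.
+
+```python
+from sklearn.datasets import load_diabetes
+from sklearn.ensemble import RandomForestRegressor
+from sklearn.inspection import permutation_importance
+from sklearn.model_selection import train_test_split
+
+# Toy dataset standing in for a real readmission dataset
+X, y = load_diabetes(return_X_y=True, as_frame=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
+
+# Shuffle each feature in turn and measure how much the score degrades:
+# a large drop means the model relies heavily on that feature (global view)
+result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
+
+for name, score in sorted(zip(X.columns, result.importances_mean),
+                          key=lambda pair: -pair[1])[:5]:
+    print(f"{name}: {score:.3f}")
+```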
+
+
+
+Use interpretability when you need to:
+
+* Determine how trustworthy your AI system's predictions are by understanding which features are most important for the predictions.
+* Approach the debugging of your model by understanding it first and identifying whether the model is using healthy features or merely false correlations.
+* Uncover potential sources of unfairness by understanding whether the model is basing predictions on sensitive features or on features that are highly correlated with them.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
+* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
+
+## Conclusion
+
+All of the RAI dashboard components are practical tools to help you build machine learning models that are less harmful and more trustworthy to society. They help prevent threats to human rights, such as discriminating against or excluding certain groups from life opportunities, and reduce the risk of physical or psychological injury. They also help build trust in your model's decisions by generating local explanations to illustrate their outcomes. Some of the potential harms can be classified as:
+
+- **Allocation**, if, for example, one gender or ethnicity is favored over another.
+- **Quality of service**. If you train the data for one specific scenario but the reality is much more complex, it leads to a poorly performing service.
+- **Stereotyping**. Associating a given group with pre-assigned attributes.
+- **Denigration**. Unfairly criticizing and labeling something or someone.
+- **Over- or under-representation**. The idea is that a certain group is not seen in a certain profession, and any service or function that keeps promoting that is contributing to harm.
+
+### Azure RAI dashboard
+
+The [Azure RAI dashboard](https://learn.microsoft.com
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/tr/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..33d69f4d2
--- /dev/null
+++ b/translations/tr/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# Explore the Responsible AI (RAI) dashboard
+
+## Instructions
+
+In this lesson, you learned about the RAI dashboard, a suite of components built on "open-source" tools to help data scientists perform error analysis, data exploration, fairness assessment, model interpretability, counterfactual/what-if assessments, and causal analysis on AI systems. For this assignment, explore some of the RAI dashboard's sample [notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) and report your findings in a paper or presentation.
+
+## Rubric
+
+| Criteria | Exemplary | Adequate | Needs Improvement |
+| -------- | --------- | -------- | ----------------- |
+|          | A paper or PowerPoint presentation is presented discussing the RAI dashboard's components, the notebook that was run, and the conclusions drawn from running it | A paper is presented without conclusions | No paper is presented |
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/9-Real-World/README.md b/translations/tr/9-Real-World/README.md
new file mode 100644
index 000000000..fc5b25a85
--- /dev/null
+++ b/translations/tr/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# Postscript: Real-world applications of classic machine learning
+
+In this section of the curriculum, you will be introduced to some real-world applications of classical ML. We scoured the internet to find articles and white papers about applications that have used these strategies, avoiding neural networks, deep learning, and AI as much as possible. Learn about how ML is used in business systems, ecological applications, finance, and arts and culture.
+
+
+
+> Photo by Alexis Fauvet on Unsplash
+
+## Lessons
+
+1. [Real-World Applications for ML](1-Applications/README.md)
+2. [Model Debugging in Machine Learning using Responsible AI dashboard components](2-Debugging-ML-Models/README.md)
+
+## Credits
+
+"Real-World Applications" was written by a team including [Jen Looper](https://twitter.com/jenlooper) and [Ornella Altunyan](https://twitter.com/ornelladotcom).
+
+"Model Debugging in Machine Learning using Responsible AI dashboard components" was written by [Ruth Yakubu](https://twitter.com/ruthieyakubu).
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/CODE_OF_CONDUCT.md b/translations/tr/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..93c1af329
--- /dev/null
+++ b/translations/tr/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft Open Source Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+Resources:
+
+- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/CONTRIBUTING.md b/translations/tr/CONTRIBUTING.md
new file mode 100644
index 000000000..b72341938
--- /dev/null
+++ b/translations/tr/CONTRIBUTING.md
@@ -0,0 +1,14 @@
+# Contributing
+
+This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
+
+> Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
+
+When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/README.md b/translations/tr/README.md
new file mode 100644
index 000000000..35e91eff2
--- /dev/null
+++ b/translations/tr/README.md
@@ -0,0 +1,155 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# Machine Learning for Beginners - A Curriculum
+
+> 🌍 Travel around the world as we explore Machine Learning by means of world cultures 🌍
+
+Cloud Advocates at Microsoft are pleased to offer a 12-week, 26-lesson curriculum all about **Machine Learning**. In this curriculum, you will learn about what is sometimes called **classic machine learning**, using primarily Scikit-learn as a library and avoiding deep learning, which is covered in our [AI for Beginners curriculum](https://aka.ms/ai4beginners). Pair these lessons with our ['Data Science for Beginners' curriculum](https://aka.ms/ds4beginners), as well!
+
+Travel with us around the world as we apply these classic techniques to data from many areas of the world. Each lesson includes pre- and post-lesson quizzes, written instructions to complete the lesson, a solution, an assignment, and more. Our project-based pedagogy allows you to learn while building, a proven way for new skills to 'stick'.
+
+**✍️ Hearty thanks to our authors** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu, and Amy Boyd
+
+**🎨 Thanks as well to our illustrators** Tomomi Imura, Dasani Madipalli, and Jen Looper
+
+**🙏 Special thanks to our Microsoft Student Ambassador authors, reviewers, and content contributors**, notably Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila, and Snigdha Agarwal
+
+**🤩 Extra gratitude to Microsoft Student Ambassadors Eric Wanjau, Jasleen Sondhi, and Vidushi Gupta for our R lessons!**
+
+# Getting Started
+
+Follow these steps:
+1. **Fork the Repository**: Click on the "Fork" button at the top-right corner of this page.
+2. **Clone the Repository**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [Find all additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[Students](https://aka.ms/student-page)**, to use this curriculum, fork the entire repo to your own GitHub account and complete the exercises on your own or with a group:
+
+- Start with a pre-lecture quiz.
+- Read the lesson and complete the activities, pausing and reflecting at each knowledge check.
+- Try to create the projects by comprehending the lessons rather than running the solution code; however, that code is available in the `/solution` folders in each project-oriented lesson.
+- Take the post-lecture quiz.
+- Complete the challenge.
+- Complete the assignment.
+- After completing a lesson group, visit the [Discussion Board](https://github.com/microsoft/ML-For-Beginners/discussions) and "learn out loud" by filling out the appropriate PAT rubric. A 'PAT' is a rubric you fill out to further your learning. You can also react to other PATs so we can learn together.
+
+> For further study, we recommend following these [Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott) modules and learning paths.
+
+**Teachers**, we have [included some suggestions](for-teachers.md) on how to use this curriculum.
+
+---
+
+## Video walkthroughs
+
+Some of the lessons are available as short-form video. You can find all of these in-line in the lessons, or on the [ML for Beginners playlist on the Microsoft Developer YouTube channel](https://aka.ms/ml-beginners-videos) by clicking the image below.
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## Meet the Team
+
+[](https://youtu.be/Tj1XWrDSYJU "Promo video")
+
+**Gif by** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 Click the image above for a video about the project and the folks who created it!
+
+---
+
+## Pedagoji
+
+Bu müfredatı oluştururken iki pedagojik ilkeyi seçtik: elverişli ve **proje tabanlı** olmasını ve **sık sınavlar** içermesini sağlamak. Ayrıca, bu müfredatın uyumlu bir **temaya** sahip olmasını sağladık.
+
+İçeriğin projelerle uyumlu olmasını sağlayarak, süreç öğrenciler için daha ilgi çekici hale gelir ve kavramların kalıcılığı artırılır. Ayrıca, bir ders öncesinde düşük riskli bir sınav, öğrencinin bir konuyu öğrenmeye yönelik niyetini belirlerken, ders sonrası ikinci bir sınav daha fazla kalıcılığı sağlar. Bu müfredat esnek ve eğlenceli olacak şekilde tasarlanmıştır ve tamamı veya kısmen alınabilir. Projeler küçük başlar ve 12 haftalık döngünün sonunda giderek daha karmaşık hale gelir. Bu müfredat ayrıca, ekstra kredi veya tartışma temeli olarak kullanılabilecek ML'nin gerçek dünya uygulamaları üzerine bir ek içerir.
+
+> [Davranış Kuralları](CODE_OF_CONDUCT.md), [Katkıda Bulunma](CONTRIBUTING.md) ve [Çeviri](TRANSLATIONS.md) yönergelerimizi bulun. Yapıcı geri bildiriminizi bekliyoruz!
+
+## Each lesson includes
+
+- optional sketchnote
+- optional supplemental video
+- video walkthrough (some lessons only)
+- pre-lecture warmup quiz
+- written lesson
+- for project-based lessons, step-by-step guides on how to build the project
+- knowledge checks
+- a challenge
+- supplemental reading
+- assignment
+- post-lecture quiz
+
+> **A note about languages**: These lessons are primarily written in Python, but many are also available in R. To complete an R lesson, go to the `/solution` folder and look for the R lessons. They have an .rmd extension, which represents an **R Markdown** file, which can be simply defined as an embedding of `code chunks` (of R or other languages) and a `YAML header` (that guides how to format outputs such as PDF) in a `Markdown document`. As such, it serves as an exemplary authoring framework for data science since it allows you to combine your code, its output, and your thoughts by writing them down in Markdown. Moreover, R Markdown documents can be rendered to output formats such as PDF, HTML, or Word.
+
+> **A note about quizzes**: All quizzes are contained in the [Quiz App folder](../../quiz-app), for 52 total quizzes of three questions each. They are linked from within the lessons, but the quiz app can also be run locally; follow the instructions in the `quiz-app` folder to host it locally or deploy it to Azure.
+
+| Lesson Number | Topic | Lesson Grouping | Learning Objectives | Linked Lesson | Author |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | Introduction to machine learning | [Introduction](1-Introduction/README.md) | Learn the basic concepts behind machine learning | [Lesson](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | The history of machine learning | [Introduction](1-Introduction/README.md) | Learn the history underlying this field | [Lesson](1-Introduction/2-history-of-ML/README.md) | Jen and Amy |
+| 03 | Fairness and machine learning | [Introduction](1-Introduction/README.md) | What are the important philosophical issues around fairness that students should consider when building and applying ML models? | [Lesson](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | Techniques for machine learning | [Introduction](1-Introduction/README.md) | What techniques do ML researchers use to build ML models? | [Lesson](1-Introduction/4-techniques-of-ML/README.md) | Chris and Jen |
+| 05 | Introduction to regression | [Regression](2-Regression/README.md) | Get started with Python and Scikit-learn for regression models | | |
+| 09 | A Web App 🔌 | [Web App](3-Web-App/README.md) | Build a web app to use your trained model | [Python](3-Web-App/1-Web-App/README.md) | Jen |
+| 10 | Introduction to classification | [Classification](4-Classification/README.md) | Clean, prep, and visualize your data; introduction to classification | | |
+| 13 | Delicious Asian and Indian cuisines 🍜 | [Classification](4-Classification/README.md) | Build a recommender web app using your model | [Python](4-Classification/4-Applied/README.md) | Jen |
+| 14 | Introduction to clustering | [Clustering](5-Clustering/README.md) | Clean, prep, and visualize your data; introduction to clustering | | |
+| 16 | Introduction to natural language processing ☕️ | [Natural language processing](6-NLP/README.md) | Learn the basics about NLP by building a simple bot | [Python](6-NLP/1-Introduction-to-NLP/README.md) | Stephen |
+| 17 | Common NLP Tasks ☕️ | [Natural language processing](6-NLP/README.md) | Deepen your NLP knowledge by understanding common tasks required when dealing with language structures | [Python](6-NLP/2-Tasks/README.md) | Stephen |
+| 18 | Translation and sentiment analysis ♥️ | [Natural language processing](6-NLP/README.md) | Translation and sentiment analysis with Jane Austen | [Python](6-NLP/3-Translation-Sentiment/README.md) | Stephen |
+| 19 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 1 | [Python](6-NLP/4-Hotel-Reviews-1/README.md) | Stephen |
+| 20 | Romantic hotels of Europe ♥️ | [Natural language processing](6-NLP/README.md) | Sentiment analysis with hotel reviews 2 | [Python](6-NLP/5-Hotel-Reviews-2/README.md) | Stephen |
+| 21 | Introduction to time series forecasting | [Time series](7-TimeSeries/README.md) | Introduction to time series forecasting | [Python](7-TimeSeries/1-Introduction/README.md) | Francesca |
+| 22 | ⚡️ World Power Usage ⚡️ - time series forecasting with ARIMA | [Time series](7-TimeSeries/README.md) | Time series forecasting with ARIMA | [Python](7-TimeSeries/2-ARIMA/README.md) | Francesca |
+| 23 | ⚡️ World Power Usage ⚡️ - time series forecasting with SVR | [Time series](7-TimeSeries/README.md) | Time series forecasting with Support Vector Regressor | [Python](7-TimeSeries/3-SVR/README.md) | Anirban |
+| 24 | Introduction to reinforcement learning | [Reinforcement learning](8-Reinforcement/README.md) | An introduction to reinforcement learning with Q-Learning | [Python](8-Reinforcement/1-QLearning/README.md) | Dmitry |
+| 25 | Help Peter avoid the wolf! 🐺 | [Reinforcement learning](8-Reinforcement/README.md) | Reinforcement learning Gym | [Python](8-Reinforcement/2-Gym/README.md) | Dmitry |
+| Postscript | Real-World ML scenarios and applications | [ML in the Wild](9-Real-World/README.md) | Interesting and revealing real-world applications of classical ML | [Lesson](9-Real-World/1-Applications/README.md) | Team |
+| Postscript | Model Debugging in ML using the RAI dashboard | [ML in the Wild](9-Real-World/README.md) | Model Debugging in Machine Learning using Responsible AI dashboard components | [Lesson](9-Real-World/2-Debugging-ML-Models/README.md) | Ruth Yakubu |
+
+> [Find all additional resources for this course in our Microsoft Learn collection](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+## Offline access
+
+You can run this documentation offline by using [Docsify](https://docsify.js.org/#/). Fork this repo, [install Docsify](https://docsify.js.org/#/quickstart) on your local machine, and then, in the root folder of this repo, type `docsify serve`. The website will be served on port 3000 on your localhost: `localhost:3000`.
+
+## PDFs
+Find a PDF of the curriculum with links [here](https://microsoft.github.io/ML-For-Beginners/pdf/readme.pdf).
+
+## Help Wanted
+
+Would you like to contribute a translation? Please read our [translation guidelines](TRANSLATIONS.md) and add a templated issue to manage the workload [here](https://github.com/microsoft/ML-For-Beginners/issues).
+
+## Other Curricula
+
+Our team produces other curricula! Check out:
+
+- [AI for Beginners](https://aka.ms/ai4beginners)
+- [Data Science for Beginners](https://aka.ms/datascience-beginners)
+- [**New Version 2.0** - Generative AI for Beginners](https://aka.ms/genai-beginners)
+- [**NEW** Cybersecurity for Beginners](https://github.com/microsoft/Security-101??WT.mc_id=academic-96948-sayoung)
+- [Web Dev for Beginners](https://aka.ms/webdev-beginners)
+- [IoT for Beginners](https://aka.ms/iot-beginners)
+- [Machine Learning for Beginners](https://aka.ms/ml4beginners)
+- [XR Development for Beginners](https://aka.ms/xr-dev-for-beginners)
+- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/SECURITY.md b/translations/tr/SECURITY.md
new file mode 100644
index 000000000..23c90abba
--- /dev/null
+++ b/translations/tr/SECURITY.md
@@ -0,0 +1,40 @@
+## Security
+
+Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, including [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
+
+If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)?WT.mc_id=academic-77952-leestott), please report it to us as described below.
+
+## Reporting Security Issues
+
+**Please do not report security vulnerabilities through public GitHub issues.**
+
+Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
+
+If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
+
+You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+ * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
+ * Full paths of the source file(s) related to the manifestation of the issue
+ * The location of the affected source code (tag/branch/commit or direct URL)
+ * Any special configuration required to reproduce the issue
+ * Step-by-step instructions to reproduce the issue
+ * Proof-of-concept or exploit code (if possible)
+ * Impact of the issue, including how an attacker might exploit it
+
+This information will help us triage your report more quickly.
+
+If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
+
+## Preferred Languages
+
+We prefer all communications to be in English.
+
+## Policy
+
+Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/SUPPORT.md b/translations/tr/SUPPORT.md
new file mode 100644
index 000000000..5247c7568
--- /dev/null
+++ b/translations/tr/SUPPORT.md
@@ -0,0 +1,13 @@
+# Support
+## How to file issues and get help
+
+This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new ones to avoid duplicates. For new issues, file your bug or feature request as a new Issue.
+
+For help and questions about using this project, please file an issue.
+
+## Microsoft Support Policy
+
+Support for this repository is limited to the resources listed above.
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/TRANSLATIONS.md b/translations/tr/TRANSLATIONS.md
new file mode 100644
index 000000000..593ca989f
--- /dev/null
+++ b/translations/tr/TRANSLATIONS.md
@@ -0,0 +1,37 @@
+# Contribute by translating lessons
+
+We welcome translations for the lessons in this curriculum!
+## Guidelines
+
+There are folders in each lesson folder and in the lesson-intro folder that contain the translated markdown files.
+
+> Note: please do not translate any code in the code sample files; the only things to translate are the README, assignments, and the quizzes. Thanks!
+
+Translated files should follow this naming convention:
+
+**README._[language]_.md**
+
+where _[language]_ is a two-letter language abbreviation following the ISO 639-1 standard (e.g. `README.es.md` for Spanish and `README.nl.md` for Dutch).
+
+**assignment._[language]_.md**
+
+Similar to Readmes, please translate the assignments as well.
+
+> Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
+
+**Quizzes**
+
+1. Add your translation to the quiz-app by adding a file here: https://github.com/microsoft/ML-For-Beginners/tree/main/quiz-app/src/assets/translations, with the proper naming convention (en.json, fr.json). **Please don't localize the words 'true' or 'false', however. Thanks!**
+
+2. Add your language code to the dropdown in the quiz-app's App.vue file.
+
+3. Edit the quiz-app's [translations index.js file](https://github.com/microsoft/ML-For-Beginners/blob/main/quiz-app/src/assets/translations/index.js) to add your language.
+
+4. Finally, edit ALL the quiz links in your translated README.md files to point directly to your translated quiz: https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1 becomes https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1?loc=id
+
+**THANK YOU**
+
+We truly appreciate your efforts!
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/docs/_sidebar.md b/translations/tr/docs/_sidebar.md
new file mode 100644
index 000000000..ebfb198d0
--- /dev/null
+++ b/translations/tr/docs/_sidebar.md
@@ -0,0 +1,46 @@
+- Introduction
+  - [Introduction to Machine Learning](../1-Introduction/1-intro-to-ML/README.md)
+  - [History of Machine Learning](../1-Introduction/2-history-of-ML/README.md)
+  - [ML and Fairness](../1-Introduction/3-fairness/README.md)
+  - [Techniques of ML](../1-Introduction/4-techniques-of-ML/README.md)
+
+- Regression
+  - [Tools](../2-Regression/1-Tools/README.md)
+  - [Data](../2-Regression/2-Data/README.md)
+  - [Linear Regression](../2-Regression/3-Linear/README.md)
+  - [Logistic Regression](../2-Regression/4-Logistic/README.md)
+
+- Build a Web App
+  - [Web App](../3-Web-App/1-Web-App/README.md)
+
+- Classification
+  - [Introduction to Classification](../4-Classification/1-Introduction/README.md)
+  - [Classifiers 1](../4-Classification/2-Classifiers-1/README.md)
+  - [Classifiers 2](../4-Classification/3-Classifiers-2/README.md)
+  - [Applied ML](../4-Classification/4-Applied/README.md)
+
+- Clustering
+  - [Visualize Your Data](../5-Clustering/1-Visualize/README.md)
+  - [K-Means](../5-Clustering/2-K-Means/README.md)
+
+- NLP
+  - [Introduction to NLP](../6-NLP/1-Introduction-to-NLP/README.md)
+  - [NLP Tasks](../6-NLP/2-Tasks/README.md)
+  - [Translation and Sentiment](../6-NLP/3-Translation-Sentiment/README.md)
+  - [Hotel Reviews 1](../6-NLP/4-Hotel-Reviews-1/README.md)
+  - [Hotel Reviews 2](../6-NLP/5-Hotel-Reviews-2/README.md)
+
+- Time Series Forecasting
+  - [Introduction to Time Series Forecasting](../7-TimeSeries/1-Introduction/README.md)
+  - [ARIMA](../7-TimeSeries/2-ARIMA/README.md)
+  - [SVR](../7-TimeSeries/3-SVR/README.md)
+
+- Reinforcement Learning
+  - [Q-Learning](../8-Reinforcement/1-QLearning/README.md)
+  - [Gym](../8-Reinforcement/2-Gym/README.md)
+
+- Real World ML
+  - [Applications](../9-Real-World/1-Applications/README.md)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/for-teachers.md b/translations/tr/for-teachers.md
new file mode 100644
index 000000000..401322570
--- /dev/null
+++ b/translations/tr/for-teachers.md
@@ -0,0 +1,26 @@
+## For Educators
+
+Would you like to use this curriculum in your classroom? Please feel free!
+
+In fact, you can use it within GitHub itself by using GitHub Classroom.
+
+To do that, fork this repo. You are going to need to create a repo for each lesson, so you will need to extract each folder into a separate repo. That way, [GitHub Classroom](https://classroom.github.com/classrooms) can pick up each lesson separately.
+
+These [full instructions](https://github.blog/2020-03-18-set-up-your-digital-classroom-with-github-classroom/) will give you an idea of how to set up your classroom.
+
+## Using the repo as-is
+
+If you would like to use this repo as it currently stands, without using GitHub Classroom, that can be done as well. You would need to communicate with your students which lesson to work through together.
+
+In an online format (Zoom, Teams, or other), you might form breakout rooms for the quizzes and mentor students to help them get ready to learn. Then invite students to take the quizzes and submit their answers as 'issues' at a certain time. You might do the same with assignments, if you want students to work collaboratively in the open.
+
+If you prefer a more private format, ask your students to fork the curriculum, lesson by lesson, to their own private GitHub repos and give you access. Then they can complete quizzes and assignments privately and submit them to you via issues on your classroom repo.
+
+There are many ways to make this work in an online classroom format. Please let us know what works best for you!
+
+## Please give us your thoughts!
+
+We want to make this curriculum work for you and your students. Please give us [feedback](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2humCsRZhxNuI79cm6n0hRUQzRVVU9VVlU5UlFLWTRLWlkyQUxORTg5WS4u).
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/quiz-app/README.md b/translations/tr/quiz-app/README.md
new file mode 100644
index 000000000..d34ef5ec2
--- /dev/null
+++ b/translations/tr/quiz-app/README.md
@@ -0,0 +1,115 @@
+# Quizzes
+
+These quizzes are the pre- and post-lecture quizzes for the ML curriculum at https://aka.ms/ml-beginners
+
+## Project setup
+
+```
+npm install
+```
+
+### Compiles and hot-reloads for development
+
+```
+npm run serve
+```
+
+### Compiles and minifies for production
+
+```
+npm run build
+```
+
+### Lints and fixes files
+
+```
+npm run lint
+```
+
+### Customize configuration
+
+See the [Configuration Reference](https://cli.vuejs.org/config/).
+
+Credits: Thanks to the original version of this quiz app: https://github.com/arpan45/simple-quiz-vue
+
+## Deploying to Azure
+
+Here is a step-by-step guide to help you get started:
+
+1. Fork a GitHub Repository
+Make sure your static web app code is in your GitHub repository. Fork this repository.
+
+2. Create an Azure Static Web App
+- Create an [Azure account](http://azure.microsoft.com)
+- Go to the [Azure portal](https://portal.azure.com)
+- Click on "Create a resource" and search for "Static Web App".
+- Click "Create".
+
+3. Configure the Static Web App
+- Basics: Subscription: Select your Azure subscription.
+- Resource Group: Create a new resource group or use an existing one.
+- Name: Provide a name for your static web app.
+- Region: Choose the region closest to your users.
+
+- #### Deployment Details:
+- Source: Select "GitHub".
+- GitHub Account: Authorize Azure to access your GitHub account.
+- Organization: Select your GitHub organization.
+- Repository: Choose the repository containing your static web app.
+- Branch: Select the branch you want to deploy from.
+
+- #### Build Details:
+- Build Presets: Choose the framework your app is built with (e.g., React, Angular, Vue, etc.).
+- App Location: Specify the folder containing your app code (e.g., / if it's in the root).
+- API Location: If you have an API, specify its location (optional).
+- Output Location: Specify the folder where the build output is generated (e.g., build or dist).
+
+4. Review and Create
+Review your settings and click "Create". Azure will set up the necessary resources and create a GitHub Actions workflow in your repository.
+
+5. GitHub Actions Workflow
+Azure will automatically create a GitHub Actions workflow file in your repository (.github/workflows/azure-static-web-apps-.yml). This workflow will handle the build and deployment process.
+
+6. Monitor the Deployment
+Go to the "Actions" tab in your GitHub repository.
+You should see a workflow running; this workflow will build and deploy your static web app to Azure.
+Once the workflow completes, your app will be live on the provided Azure URL.
+
+### Example Workflow File
+
+Here is an example of what the GitHub Actions workflow file might look like:
+
+```
+name: Azure Static Web Apps CI/CD
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - main
+
+jobs:
+  build_and_deploy_job:
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v1
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }}
+          action: "upload"
+          app_location: "/quiz-app"  # App source code path
+          api_location: ""  # API source code path - optional
+          output_location: "dist"  # Built app content directory - optional
+```
+
+### Additional Resources
+- [Azure Static Web Apps documentation](https://learn.microsoft.com/azure/static-web-apps/getting-started)
+- [GitHub Actions documentation](https://docs.github.com/actions/use-cases-and-examples/deploying/deploying-to-azure-static-web-app)
+
+**Disclaimer**:
+This document has been translated using machine-based AI translation services. While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
\ No newline at end of file
diff --git a/translations/tr/sketchnotes/LICENSE.md b/translations/tr/sketchnotes/LICENSE.md
new file mode 100644
index 000000000..eac23fc89
--- /dev/null
+++ b/translations/tr/sketchnotes/LICENSE.md
@@ -0,0 +1,278 @@
+Attribution-ShareAlike 4.0 Uluslararası
+
+=======================================================================
+
+Creative Commons Corporation ("Creative Commons") bir hukuk firması değildir ve
+hukuki hizmet veya danışmanlık sağlamaz. Creative Commons kamu lisanslarının
+dağıtılması, avukat-müvekkil veya başka bir ilişki yaratmaz. Creative Commons,
+lisanslarını ve ilgili bilgileri "olduğu gibi" sunar. Creative Commons,
+lisansları, bu lisanslar altında lisanslanan herhangi bir materyal veya ilgili
+bilgiler hakkında hiçbir garanti vermez. Creative Commons, kullanımlarından
+kaynaklanan zararlardan sorumluluğunu mümkün olan en geniş ölçüde reddeder.
+
+Creative Commons Kamu Lisanslarının Kullanımı
+
+Creative Commons kamu lisansları, yazarların ve diğer hak sahiplerinin telif
+hakkı ve aşağıda belirtilen belirli diğer haklara tabi orijinal eserleri ve
+diğer materyalleri paylaşmak için kullanabileceği standart bir terim ve
+koşul seti sağlar. Aşağıdaki hususlar yalnızca bilgilendirme amaçlıdır,
+kapsamlı değildir ve lisanslarımızın bir parçasını oluşturmaz.
+
+ Lisans verenler için hususlar: Kamu lisanslarımız,
+ telif hakkı ve belirli diğer haklar tarafından kısıtlanmış materyalin
+ kamu tarafından kullanılmasına izin vermek için yetkilendirilmiş
+ kişiler tarafından kullanılmak üzere tasarlanmıştır. Lisanslarımız
+ geri alınamaz. Lisans verenler, bir lisansı uygulamadan önce seçtikleri
+ lisansın şartlarını ve koşullarını okumalı ve anlamalıdır. Lisans verenler,
+ lisanslarımızı uygulamadan önce tüm gerekli hakları güvence altına almalıdır,
+ böylece kamu beklenildiği gibi materyali yeniden kullanabilir. Lisans verenler,
+ lisansa tabi olmayan herhangi bir materyali açıkça işaretlemelidir. Bu,
+ diğer CC lisanslı materyalleri veya telif hakkı istisnası veya sınırlaması
+ altında kullanılan materyalleri içerir. Lisans verenler için daha fazla husus:
+ wiki.creativecommons.org/Considerations_for_licensors
+
+ Kamu için hususlar: Kamu lisanslarımızdan birini kullanarak,
+ lisans veren, lisanslı materyali belirli terim ve koşullar altında
+ kullanma izni verir. Lisans verenin izni herhangi bir nedenle gerekli
+ değilse - örneğin, geçerli bir istisna veya telif hakkı sınırlaması nedeniyle -
+ bu kullanım lisans tarafından düzenlenmez. Lisanslarımız yalnızca telif hakkı
+ ve lisans verenin yetki verdiği belirli diğer haklar altında izin verir.
+ Lisanslı materyalin kullanımı, başkalarının materyalde telif hakkı veya
+ diğer haklara sahip olması gibi diğer nedenlerle hala kısıtlanabilir.
+ Lisans veren, tüm değişikliklerin işaretlenmesi veya tanımlanması gibi
+ özel taleplerde bulunabilir. Lisanslarımız tarafından zorunlu kılınmamakla
+ birlikte, bu taleplere makul olduğu sürece saygı göstermeniz teşvik edilir.
+ Kamu için daha fazla husus:
+ wiki.creativecommons.org/Considerations_for_licensees
+
+=======================================================================
+
+Creative Commons Attribution-ShareAlike 4.0 Uluslararası Kamu Lisansı
+
+Lisanslı Hakları (aşağıda tanımlanmıştır) kullanarak, bu Creative Commons
+Attribution-ShareAlike 4.0 Uluslararası Kamu Lisansı ("Kamu Lisansı")
+şart ve koşullarına bağlı olmayı kabul edersiniz. Bu Kamu Lisansı bir
+sözleşme olarak yorumlanabiliyorsa, bu şart ve koşulları kabul etmeniz
+karşılığında Lisanslı Haklar size verilir ve Lisans veren, Lisanslı
+Materyali bu şartlar ve koşullar altında sunmanın getirdiği faydalar
+karşılığında size bu hakları verir.
+
+Bölüm 1 -- Tanımlar.
+
+ a. Uyarlanmış Materyal, Lisans Verenin elinde bulundurduğu Telif Hakkı
+ ve Benzer Haklar kapsamında izin gerektiren bir şekilde Lisanslı
+ Materyal'den türetilen veya buna dayalı olan ve Lisanslı Materyalin
+ çevrildiği, değiştirildiği, düzenlendiği, dönüştürüldüğü veya başka
+ şekilde değiştirildiği materyal anlamına gelir. Bu Kamu Lisansı
+ amaçları doğrultusunda, Lisanslı Materyal bir müzik eseri, performans
+ veya ses kaydı ise, Lisanslı Materyal hareketli bir görüntü ile
+ zamanlı ilişki içinde senkronize edildiğinde her zaman Uyarlanmış
+ Materyal üretilir.
+
+ b. Uyarlayıcı Lisansı, bu Kamu Lisansı'nın şart ve koşullarına uygun
+ olarak Uyarlanmış Materyale katkılarınızda Telif Hakkı ve Benzer
+ Haklarınızı uyguladığınız lisans anlamına gelir.
+
+ c. BY-SA Uyumlu Lisans, creativecommons.org/compatiblelicenses adresinde
+ listelenen ve Creative Commons tarafından bu Kamu Lisansı'nın esasen
+ eşdeğeri olarak onaylanan bir lisans anlamına gelir.
+
+  d. Telif Hakkı ve Benzer Haklar, performans, yayın, ses kaydı ve Sui
+     Generis Veritabanı Hakları dahil ancak bunlarla sınırlı olmamak üzere,
+     telif hakkı ve/veya telif hakkıyla yakından ilişkili benzer haklar
+     anlamına gelir. Bu Kamu Lisansı amaçları doğrultusunda, Bölüm 2(b)(1)-(2)'de
+ belirtilen haklar Telif Hakkı ve Benzer Haklar değildir.
+
+ e. Etkili Teknolojik Önlemler, uygun yetki olmadan 20 Aralık 1996'da
+ kabul edilen WIPO Telif Hakkı Antlaşması'nın 11. Maddesi ve/veya benzer
+ uluslararası anlaşmalar kapsamındaki yükümlülükleri yerine getiren
+ yasalara göre aşılmaması gereken önlemler anlamına gelir.
+
+ f. İstisnalar ve Sınırlamalar, Lisanslı Materyali kullanmanız için
+ geçerli olan adil kullanım, adil işlem ve/veya diğer herhangi bir
+ telif hakkı ve benzer haklar istisnası veya sınırlaması anlamına gelir.
+
+ g. Lisans Unsurları, bir Creative Commons Kamu Lisansı'nın adında listelenen
+     lisans nitelikleri anlamına gelir. Bu Kamu Lisansı'nın Lisans Unsurları
+     Atıf (Attribution) ve ShareAlike'tır.
+
+ h. Lisanslı Materyal, Lisans Verenin bu Kamu Lisansı'nı uyguladığı sanatsal
+ veya edebi eser, veritabanı veya diğer materyal anlamına gelir.
+
+ i. Lisanslı Haklar, bu Kamu Lisansı'nın şart ve koşullarına tabi olarak
+ size verilen haklar anlamına gelir ve Lisans Verenin lisans verme yetkisine
+ sahip olduğu ve Lisanslı Materyali kullanmanız için geçerli olan tüm Telif
+ Hakkı ve Benzer Haklarla sınırlıdır.
+
+ j. Lisans Veren, bu Kamu Lisansı kapsamında haklar veren birey(ler) veya
+ kuruluş(lar) anlamına gelir.
+
+ k. Paylaşmak, çoğaltma, kamuya açık gösterim, kamuya açık performans, dağıtım,
+ yayma, iletişim veya ithalat gibi Lisanslı Haklar kapsamında izin gerektiren
+ herhangi bir araç veya süreçle materyali halka sunmak ve materyali kamuya,
+ bireylerin kendi seçtikleri yer ve zamanda erişebilecekleri şekilde sunmak
+ anlamına gelir.
+
+ l. Sui Generis Veritabanı Hakları, 11 Mart 1996 tarihli Avrupa Parlamentosu
+ ve Konseyi'nin 96/9/EC Yönergesi'nden kaynaklanan ve değiştirilen ve/veya
+ yerine geçen, dünyanın herhangi bir yerindeki diğer esasen eşdeğer haklar
+ anlamına gelir.
+
+ m. Siz, bu Kamu Lisansı altında Lisanslı Hakları kullanan birey veya kuruluş
+ anlamına gelir. "Sizin" de buna karşılık gelen bir anlamı vardır.
+
+Bölüm 2 -- Kapsam.
+
+ a. Lisans verme.
+
+ 1. Bu Kamu Lisansı'nın şart ve koşullarına tabi olarak, Lisans Veren,
+ size dünya çapında, telif ücretsiz, alt lisans verilemez, münhasır
+ olmayan, geri alınamaz bir lisans vererek Lisanslı Hakları Lisanslı
+ Materyal üzerinde kullanmanızı sağlar:
+
+ a. Lisanslı Materyali, tamamen veya kısmen çoğaltmak ve paylaşmak; ve
+
+ b. Uyarlanmış Materyal üretmek, çoğaltmak ve paylaşmak.
+
+ 2. İstisnalar ve Sınırlamalar. İstisnalar ve Sınırlamalar kullanımınıza
+ uygulandığında, bu Kamu Lisansı uygulanmaz ve bu şart ve koşullara
+ uymanız gerekmez.
+
+ 3. Süre. Bu Kamu Lisansı'nın süresi, Bölüm 6(a)'da belirtilmiştir.
+
+ 4. Medya ve formatlar; teknik değişikliklere izin verilir. Lisans Veren,
+ Lisanslı Hakları şimdi bilinen veya daha sonra oluşturulan tüm medya
+ ve formatlarda kullanmanıza ve bunu yapmak için gerekli teknik
+       değişiklikleri yapmanıza izin verir. Lisans Veren, Etkili Teknolojik
+       Önlemleri aşmak için gerekli teknik değişiklikler de dahil olmak üzere,
+       Lisanslı Hakları kullanmak için gerekli teknik değişiklikleri yapmanızı
+       yasaklama yönündeki herhangi bir hak veya yetkiden feragat eder ve/veya
+       bunları ileri sürmemeyi kabul eder. Bu Kamu Lisansı amaçları doğrultusunda,
+ bu Bölüm 2(a)(4) tarafından yetkilendirilen değişiklikleri yapmak asla
+ Uyarlanmış Materyal üretmez.
+
+ 5. Aşağı yönlü alıcılar.
+
+ a. Lisans Veren'den Teklif -- Lisanslı Materyal. Lisanslı Materyalin
+ her alıcısı, bu Kamu Lisansı'nın şart ve koşulları altında Lisanslı
+ Hakları kullanma teklifini otomatik olarak Lisans Verenden alır.
+
+ b. Lisans Veren'den Ek Teklif -- Uyarlanmış Materyal. Sizden Uyarlanmış
+ Materyal alan her alıcı, Uyarlanmış Materyalde Lisanslı Hakları,
+ uyguladığınız Uyarlayıcı Lisans koşulları altında kullanma teklifini
+ otomatik olarak Lisans Verenden alır.
+
+        c. Aşağı yönlü kısıtlamalar yok. Lisanslı Materyalin herhangi bir
+           alıcısının Lisanslı Hakları kullanmasını kısıtlayacak şekilde,
+           Lisanslı Materyale herhangi bir ek veya farklı şart veya koşul
+           sunamaz veya uygulayamaz ya da Etkili Teknolojik Önlemler
+           uygulayamazsınız.
+
+ 6. Onay yok. Bu Kamu Lisansı'nda hiçbir şey, sizin veya Lisanslı Materyali
+ kullanımınızın Lisans Veren veya Bölüm 3(a)(1)(A)(i)'de belirtilen diğer
+ kişilere atıfta bulunularak bağlantılı, sponsorlu, onaylı veya resmi statüde
+ olduğunu iddia etme veya ima etme izni olarak yorumlanamaz.
+
+ b. Diğer haklar.
+
+    1. Bütünlük hakkı gibi manevi haklar ile tanıtım, gizlilik ve/veya diğer
+       benzer kişilik hakları bu Kamu Lisansı kapsamında lisanslanmamıştır;
+       ancak, mümkün olduğu ölçüde Lisans Veren, yalnızca Lisanslı Hakları
+       kullanmanızı sağlamak için gerekli olan sınırlı ölçüde, elinde
+       bulundurduğu bu tür haklardan feragat etmeyi veya bunları ileri
+       sürmemeyi kabul eder.
+
+ 2. Patent ve ticari marka hakları bu Kamu Lisansı kapsamında lisanslanmamıştır.
+
+ 3. Mümkün olduğu ölçüde, Lisans Veren, Lisanslı Hakları kullanmanız için sizden
+ doğrudan veya herhangi bir gönüllü veya feragat edilebilir yasal veya zorunlu
+ lisanslama planı kapsamında bir toplama topluluğu aracılığıyla telif ücreti
+ toplama hakkından feragat eder. Diğer tüm durumlarda, Lisans Veren, bu tür
+ telif ücretlerini toplama hakkını açıkça saklı tutar.
+
+Bölüm 3 -- Lisans Koşulları.
+
+Lisanslı Hakları kullanmanız, açıkça aşağıdaki koşullara tabi olarak yapılmalıdır.
+
+ a. Atıf.
+
+ 1. Lisanslı Materyali (değiştirilmiş formda dahil) paylaşırsanız, şunları yapmalısınız:
+
+ a. Lisans Veren tarafından Lisanslı Materyal ile birlikte sağlanmışsa,
+ aşağıdakileri koruyun:
+
+ i. Lisanslı Materyalin yaratıcısının (yaratıcılarının) ve Lisans Veren
+ tarafından makul bir şekilde istenen herhangi bir şekilde (takma adla
+ belirtilmişse dahil) atıf almak üzere belirlenen diğer kişilerin
+ kimlik bilgilerini;
+
+ ii. bir telif hakkı bildirimi;
+
+ iii. bu Kamu Lisansına atıfta bulunan bir bildirim;
+
+ iv. garanti feragatnamesine atıfta bulunan bir bildirim;
+
+ v. makul ölçüde uygulanabilir olduğu sürece, Lisanslı Materyale bir URI
+ veya bağlantı;
+
+ b. Lisanslı Materyali değiştirdiğinizi belirtin ve önceki değişikliklerin
+ herhangi bir göstergesini koruyun; ve
+
+ c. Lisanslı Materyalin bu Kamu Lisansı kapsamında lisanslandığını belirtin ve
+ bu Kamu Lisansının metnini veya URI'sini veya bağlantısını ekleyin.
+
+ 2. Bölüm 3(a)(1) koşullarını, Lisanslı Materyali paylaştığınız ortam, araç ve
+ bağlama göre makul herhangi bir şekilde yerine getirebilirsiniz. Örneğin,
+ gerekli bilgileri içeren bir kaynağa URI veya bağlantı sağlayarak koşulları
+ yerine getirmek makul olabilir.
+
+ 3. Lisans Veren tarafından talep edilirse, Bölüm 3(a)(1)(A) tarafından gerekli
+ bilgilerin makul ölçüde uygulanabilir olduğu sürece kaldırılmasını sağlamalısınız.
+
+  b. ShareAlike.
+
+ Bölüm 3(a) koşullarına ek olarak, ürettiğiniz Uyarlanmış Materyali paylaşırsanız,
+ aşağıdaki koşullar da geçerlidir.
+
+ 1. Uyguladığınız Uyarlayıcı Lisans, aynı Lisans Unsurlarına sahip bir Creative
+ Commons lisansı olmalı, bu sürüm veya daha sonra, veya bir BY-SA Uyumlu Lisans
+ olmalıdır.
+
+ 2. Uyguladığınız Uyarlayıcı Lisansın metnini veya URI'sini veya bağlantısını
+ eklemelisiniz. Bu koşulu, Uyarlanmış Materyali paylaştığınız ortam, araç ve
+ bağlama göre makul herhangi bir şekilde yerine getirebilirsiniz.
+
+ 3. Uyarlanmış Materyalin, uyguladığınız Uyarlayıcı Lisans altında verilen hakların
+ kullanılmasını kısıtlayan herhangi bir ek veya farklı şart veya koşul sunamaz veya
+ uygulayamaz veya Etkili Teknolojik Önlemler uygulayamazsınız.
+
+Bölüm 4 -- Sui Generis Veritabanı Hakları.
+
+Lisanslı Haklar, Lisanslı Materyali kullanmanız için geçerli olan Sui Generis Veritabanı
+Haklarını içeriyorsa:
+
+ a. Şüpheye mahal vermemek için, Bölüm 2(a)(1), veritabanının içeriğinin tamamını veya
+ önemli bir kısmını çıkarmak, yeniden kullanmak, çoğaltmak ve paylaşmak hakkını size verir;
+
+  b. Veritabanı içeriğinin tamamını veya önemli bir kısmını, Sui Generis Veritabanı
+     Haklarına sahip olduğunuz bir veritabanına dahil ederseniz, Sui Generis Veritabanı
+     Haklarına sahip olduğunuz veritabanı (ancak bireysel içerikleri değil), Bölüm 3(b)
+     amaçları doğrultusunda Uyarlanmış Materyaldir; ve
+
+  c. Veritabanı içeriğinin tamamını veya önemli bir kısmını paylaşırsanız, Bölüm 3(a)
+     koşullarına uymalısınız.
+
+Şüpheye mahal vermemek için, bu Bölüm 4, Lisanslı Hakların diğer Telif Hakkı ve Benzer
+Hakları içerdiği durumlarda bu Kamu Lisansı kapsamındaki yükümlülüklerinizi tamamlar ve
+yerine geçmez.
+
+Bölüm 5 -- Garanti Feragatnamesi ve Sorumluluk Sınırlaması.
+
+ a. LİSANS VEREN TARAFINDAN AYRICA YAPILMADIĞI SÜRECE, MÜMKÜN OLDUĞU ÖLÇÜDE, LİSANS
+ VEREN, LİSANSLI MATERYALİ OLDUĞU GİBİ VE MEVCUT OLDUĞU GİBİ SUNAR VE LİSANSLI
+ MATERYAL HAKKINDA HİÇBİR TÜRDE BEYAN VEYA GARANTİ VERMEZ, İSTER AÇIK, İSTER ZIMNİ,
+ YASAL VEYA DİĞER. BU, BAŞLIK GARANTİLERİ, SATILABİLİRLİK, BELİRLİ BİR AMACA UYGUNLUK,
+ İHLAL ETMEME, GİZLİ VEYA DİĞER KUSURLARIN BULUNMAMASI, DOĞRULUK VEYA HATALARIN VARLIĞI
+    VEYA YOKLUĞU, BİLİNEN VEYA KEŞFEDİLEBİLEN DAHİL ANCAK BUNLARLA SINIRLI
+    OLMAMAK ÜZERE TÜM GARANTİLERİ KAPSAR.
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/tr/sketchnotes/README.md b/translations/tr/sketchnotes/README.md
new file mode 100644
index 000000000..effb3d01b
--- /dev/null
+++ b/translations/tr/sketchnotes/README.md
@@ -0,0 +1,10 @@
+Tüm müfredatın sketchnote'larını buradan indirebilirsiniz.
+
+🖨 Yüksek çözünürlükte yazdırmak için, TIFF versiyonları [bu depoda](https://github.com/girliemac/a-picture-is-worth-a-1000-words/tree/main/ml/tiff) mevcuttur.
+
+🎨 Oluşturan: [Tomomi Imura](https://github.com/girliemac) (Twitter: [@girlie_mac](https://twitter.com/girlie_mac))
+
+[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+
+**Feragatname**:
+Bu belge, makine tabanlı yapay zeka çeviri hizmetleri kullanılarak çevrilmiştir. Doğruluk için çaba göstersek de, otomatik çevirilerin hata veya yanlışlıklar içerebileceğini lütfen unutmayın. Belgenin orijinal dili, yetkili kaynak olarak kabul edilmelidir. Kritik bilgiler için profesyonel insan çevirisi tavsiye edilir. Bu çevirinin kullanımından kaynaklanan herhangi bir yanlış anlama veya yanlış yorumlamadan sorumlu değiliz.
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/1-intro-to-ML/README.md b/translations/zh/1-Introduction/1-intro-to-ML/README.md
new file mode 100644
index 000000000..1197360ff
--- /dev/null
+++ b/translations/zh/1-Introduction/1-intro-to-ML/README.md
@@ -0,0 +1,148 @@
+# 机器学习简介
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/1/)
+
+---
+
+[初学者的机器学习 - 初学者的机器学习简介](https://youtu.be/6mSx_KJxcHI)
+
+> 🎥 点击上面的链接观看一个简短的视频,了解本课内容。
+
+欢迎来到这个面向初学者的经典机器学习课程!无论你是完全不了解这个话题,还是一个有经验的ML从业者想要复习某个领域,我们都很高兴你能加入我们!我们希望为你的ML学习创造一个友好的起点,并乐于评估、回应并采纳你的[反馈](https://github.com/microsoft/ML-For-Beginners/discussions)。
+
+[ML简介](https://youtu.be/h0e2HAPTGF4)
+
+> 🎥 点击上面的链接观看视频:MIT的John Guttag介绍机器学习
+
+---
+## 机器学习入门
+
+在开始这个课程之前,你需要将你的计算机设置好,并准备好在本地运行笔记本。
+
+- **通过这些视频配置你的机器**。使用以下链接了解[如何在系统中安装Python](https://youtu.be/CXZYvNRIAKM)和[设置开发用的文本编辑器](https://youtu.be/EU8eayHWoZg)。
+- **学习Python**。还建议你对[Python](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott)有基本的了解,这是一门对数据科学家很有用的编程语言,我们在本课程中会使用它。
+- **学习Node.js和JavaScript**。我们在构建Web应用程序时也会多次使用JavaScript,所以你需要安装[node](https://nodejs.org)和[npm](https://www.npmjs.com/),以及用于Python和JavaScript开发的[Visual Studio Code](https://code.visualstudio.com/)。
+- **创建一个GitHub账户**。既然你在[GitHub](https://github.com)上找到了我们,你可能已经有一个账户了,如果没有,创建一个,然后fork这个课程以便自己使用。(也可以给我们一个star 😊)
+- **探索Scikit-learn**。熟悉一下[Scikit-learn](https://scikit-learn.org/stable/user_guide.html),这是我们在这些课程中引用的一组ML库。
+
+---
+## 什么是机器学习?
+
+“机器学习”这个术语是当今最流行和经常使用的术语之一。如果你对技术有一定了解,无论你从事什么领域,你都有很大可能至少听过一次这个术语。然而,机器学习的机制对大多数人来说仍然是一个谜。对于机器学习初学者来说,这个主题有时会让人感到不知所措。因此,理解机器学习的真正含义,并通过实际例子一步步学习它,是很重要的。
+
+---
+## 热潮曲线
+
+
+
+> 谷歌趋势显示了最近“机器学习”一词的“热潮曲线”
+
+---
+## 神秘的宇宙
+
+我们生活在一个充满迷人谜团的宇宙中。伟大的科学家如斯蒂芬·霍金、阿尔伯特·爱因斯坦等,终其一生致力于寻找揭示我们周围世界谜团的有意义的信息。这是人类学习的本质:一个人类孩子通过感知周围环境的事实,逐年揭示世界的结构,直到成年。
+
+---
+## 孩子的脑袋
+
+孩子的脑袋和感官感知周围环境的事实,并逐渐学习生活中的隐藏模式,这帮助孩子制定逻辑规则来识别学到的模式。人类大脑的学习过程使人类成为这个世界上最复杂的生物。通过不断发现隐藏的模式并在这些模式上进行创新,使我们在一生中变得越来越好。这种学习能力和进化能力与一个叫做[脑可塑性](https://www.simplypsychology.org/brain-plasticity.html)的概念有关。从表面上看,我们可以在一定程度上将人类大脑的学习过程与机器学习的概念联系起来。
+
+---
+## 人类大脑
+
+[人类大脑](https://www.livescience.com/29365-human-brain.html)从现实世界中感知事物,处理感知到的信息,做出理性决策,并根据情况执行某些行为。这就是我们所说的智能行为。当我们将这种智能行为过程的仿真程序化到机器上时,这就是人工智能(AI)。
+
+---
+## 一些术语
+
+虽然这些术语可能会混淆,但机器学习(ML)是人工智能的一个重要子集。**ML关注的是使用专门的算法从感知到的数据中发现有意义的信息和隐藏的模式,以支持理性决策过程**。
+
+---
+## AI, ML, 深度学习
+
+
+
+> 一张展示AI、ML、深度学习和数据科学之间关系的图表。由[Jen Looper](https://twitter.com/jenlooper)制作,灵感来自[这张图](https://softwareengineering.stackexchange.com/questions/366996/distinction-between-ai-ml-neural-networks-deep-learning-and-data-mining)
+
+---
+## 涵盖的概念
+
+在这个课程中,我们将只涵盖机器学习的核心概念,这是初学者必须了解的。我们主要使用Scikit-learn,一个许多学生用来学习基础知识的优秀库,来讲解我们所谓的“经典机器学习”。要理解人工智能或深度学习的更广泛概念,扎实的机器学习基础知识是不可或缺的,因此我们希望在这里提供这些知识。
+
+---
+## 在本课程中你将学习:
+
+- 机器学习的核心概念
+- ML的历史
+- ML与公平性
+- 回归ML技术
+- 分类ML技术
+- 聚类ML技术
+- 自然语言处理ML技术
+- 时间序列预测ML技术
+- 强化学习
+- ML的实际应用
+
+---
+## 我们不会涵盖的内容
+
+- 深度学习
+- 神经网络
+- AI
+
+为了提供更好的学习体验,我们将避免神经网络的复杂性、“深度学习”——使用神经网络构建多层模型——和AI,我们将在不同的课程中讨论这些内容。我们还将提供即将推出的数据科学课程,以专注于这个更大领域的这一方面。
+
+---
+## 为什么要学习机器学习?
+
+从系统的角度来看,机器学习被定义为创建能够从数据中学习隐藏模式以辅助智能决策的自动化系统。
+
+这种动机在某种程度上是受人类大脑如何根据从外部世界感知的数据学习某些事物的启发。
+
+✅ 想一想为什么企业会想尝试使用机器学习策略,而不是创建一个基于硬编码规则的引擎。
+
+---
+## 机器学习的应用
+
+机器学习的应用现在几乎无处不在,就像由我们的智能手机、连接设备和其他系统生成的数据在我们的社会中流动一样无处不在。考虑到最先进的机器学习算法的巨大潜力,研究人员一直在探索其解决多维和多学科现实生活问题的能力,并取得了很好的成果。
+
+---
+## 应用ML的例子
+
+**你可以通过多种方式使用机器学习**:
+
+- 从患者的病史或报告中预测疾病的可能性。
+- 利用天气数据预测天气事件。
+- 理解文本的情感。
+- 识别假新闻以阻止宣传的传播。
+
+金融、经济学、地球科学、太空探索、生物医学工程、认知科学,甚至人文学科领域都采用机器学习来解决它们领域中繁重的数据处理问题。
+
+---
+## 结论
+
+机器学习通过从现实世界或生成的数据中发现有意义的见解来自动化模式发现过程。它在商业、健康和金融应用等领域中已经证明了自己的高度价值。
+
+在不久的将来,了解机器学习的基础知识将成为任何领域的人们必须掌握的技能,因为它被广泛采用。
+
+---
+# 🚀 挑战
+
+在纸上或使用[Excalidraw](https://excalidraw.com/)等在线应用程序,画出你对AI、ML、深度学习和数据科学之间区别的理解。添加一些这些技术擅长解决的问题的想法。
+
+# [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/2/)
+
+---
+# 复习与自学
+
+要了解更多关于如何在云中使用ML算法的信息,请关注这个[学习路径](https://docs.microsoft.com/learn/paths/create-no-code-predictive-models-azure-machine-learning/?WT.mc_id=academic-77952-leestott)。
+
+参加一个关于ML基础知识的[学习路径](https://docs.microsoft.com/learn/modules/introduction-to-machine-learning/?WT.mc_id=academic-77952-leestott)。
+
+---
+# 作业
+
+[开始运行](assignment.md)
+
+**免责声明**:
+本文件是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应以原文档的母语版本为权威来源。对于关键信息,建议使用专业的人类翻译。我们不对因使用本翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/1-intro-to-ML/assignment.md b/translations/zh/1-Introduction/1-intro-to-ML/assignment.md
new file mode 100644
index 000000000..80af76f9c
--- /dev/null
+++ b/translations/zh/1-Introduction/1-intro-to-ML/assignment.md
@@ -0,0 +1,12 @@
+# 启动和运行
+
+## 说明
+
+在这个不计分的作业中,你应该复习一下Python,并使你的环境能够运行notebooks。
+
+请参考这个 [Python 学习路径](https://docs.microsoft.com/learn/paths/python-language/?WT.mc_id=academic-77952-leestott),然后通过以下入门视频设置你的系统:
+
+https://www.youtube.com/playlist?list=PLlrxD0HtieHhS8VzuMCfQD4uJ9yne1mE6
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务进行翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/2-history-of-ML/README.md b/translations/zh/1-Introduction/2-history-of-ML/README.md
new file mode 100644
index 000000000..32bc03141
--- /dev/null
+++ b/translations/zh/1-Introduction/2-history-of-ML/README.md
@@ -0,0 +1,152 @@
+# 机器学习的历史
+
+
+> 由 [Tomomi Imura](https://www.twitter.com/girlie_mac) 绘制的速写笔记
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/3/)
+
+---
+
+[机器学习初学者 - 机器学习的历史](https://youtu.be/N6wxM4wZ7V0)
+
+> 🎥 点击上方链接观看本课的简短视频。
+
+在本课中,我们将回顾机器学习和人工智能历史上的重要里程碑。
+
+人工智能(AI)作为一个领域的历史与机器学习的历史紧密相连,因为支撑机器学习的算法和计算进步推动了人工智能的发展。值得注意的是,虽然这些领域作为独立的研究领域在20世纪50年代开始成型,但重要的[算法、统计、数学、计算和技术发现](https://wikipedia.org/wiki/Timeline_of_machine_learning)早在这一时期之前就已经出现并且有所重叠。实际上,人们已经思考这些问题[数百年](https://wikipedia.org/wiki/History_of_artificial_intelligence)了:这篇文章讨论了“思考机器”这一概念的历史智力基础。
+
+---
+## 著名发现
+
+- 1763年, 1812年 [贝叶斯定理](https://wikipedia.org/wiki/Bayes%27_theorem)及其前身。这个定理及其应用在推理中起着重要作用,描述了基于先验知识发生事件的概率。
+- 1805年 [最小二乘法](https://wikipedia.org/wiki/Least_squares) 由法国数学家Adrien-Marie Legendre提出。这个理论,你将在我们的回归单元中学习,帮助进行数据拟合。
+- 1913年 [马尔可夫链](https://wikipedia.org/wiki/Markov_chain),由俄罗斯数学家Andrey Markov命名,用于描述基于前一个状态的一系列可能事件。
+- 1957年 [感知器](https://wikipedia.org/wiki/Perceptron) 是一种由美国心理学家Frank Rosenblatt发明的线性分类器,支撑了深度学习的进步。
+
+---
+
+- 1967年 [最近邻](https://wikipedia.org/wiki/Nearest_neighbor) 是一种最初设计用于绘制路线的算法。在机器学习背景下,它用于检测模式。
+- 1970年 [反向传播](https://wikipedia.org/wiki/Backpropagation) 用于训练[前馈神经网络](https://wikipedia.org/wiki/Feedforward_neural_network)。
+- 1982年 [递归神经网络](https://wikipedia.org/wiki/Recurrent_neural_network) 是从前馈神经网络派生的人工神经网络,创建时间图。
+
+✅ 做一些研究。还有哪些日期在机器学习和人工智能历史上是关键的?
+
+---
+## 1950年:会思考的机器
+
+Alan Turing,一个真正了不起的人物,被[公众在2019年](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century)投票选为20世纪最伟大的人物,被认为帮助奠定了“会思考的机器”这一概念的基础。他通过创建[图灵测试](https://www.bbc.com/news/technology-18475646),部分回应了反对者的质疑,也部分满足了他自己对这一概念经验证据的需求;你将在我们的自然语言处理课程中探讨图灵测试。
+
+---
+## 1956年:达特茅斯夏季研究项目
+
+“达特茅斯夏季人工智能研究项目是人工智能作为一个领域的奠基性事件”,在这里“人工智能”一词被创造出来了([来源](https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth))。
+
+> 学习或任何其他智能特征的每一个方面原则上都可以如此精确地描述,以至于可以制造出模拟它的机器。
+
+---
+
+首席研究员、数学教授John McCarthy希望“基于这样一种假设进行研究,即学习或任何其他智能特征的每一个方面原则上都可以如此精确地描述,以至于可以制造出模拟它的机器。” 参与者中还包括该领域的另一位著名人物Marvin Minsky。
+
+该研讨会被认为启动并鼓励了几次讨论,包括“符号方法的兴起、专注于有限领域的系统(早期专家系统)以及演绎系统与归纳系统的对立。”([来源](https://wikipedia.org/wiki/Dartmouth_workshop))。
+
+---
+## 1956 - 1974年:“黄金时代”
+
+从1950年代到70年代中期,人们对AI能够解决许多问题充满乐观。1967年,Marvin Minsky自信地表示,“在一代人之内……创造‘人工智能’的问题将基本解决。”(Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
+
+自然语言处理研究蓬勃发展,搜索得到改进并变得更强大,“微观世界”的概念被创造出来,在那里可以使用简单的语言指令完成简单的任务。
+
+---
+
+政府机构提供了充足的资金,计算和算法方面取得了进展,智能机器的原型被建造出来。这些机器包括:
+
+* [Shakey机器人](https://wikipedia.org/wiki/Shakey_the_robot),能够智能地操纵和决定如何执行任务。
+
+ 
+ > 1972年的Shakey
+
+---
+
+* Eliza,一个早期的“聊天机器人”,可以与人对话并充当原始的“治疗师”。你将在自然语言处理课程中了解更多关于Eliza的信息。
+
+ 
+ > Eliza,一个聊天机器人的版本
+
+---
+
+* “积木世界”是一个微观世界的例子,在那里可以堆叠和分类积木,并可以测试教机器做决定的实验。使用诸如[SHRDLU](https://wikipedia.org/wiki/SHRDLU)之类的库构建的进步推动了语言处理的发展。
+
+  [积木世界与SHRDLU](https://www.youtube.com/watch?v=QAJz4YKUwqw)
+
+  > 🎥 点击上方链接观看视频:积木世界与SHRDLU
+
+---
+## 1974 - 1980年:“AI寒冬”
+
+到70年代中期,制造“智能机器”的复杂性被低估的事实变得显而易见,而其承诺在现有的计算能力下被夸大了。资金枯竭,对该领域的信心减弱。一些影响信心的问题包括:
+
+---
+- **限制**。计算能力太有限。
+- **组合爆炸**。随着对计算机要求的增加,需要训练的参数数量呈指数增长,而计算能力和能力没有相应地进化。
+- **数据稀缺**。数据稀缺,阻碍了测试、开发和改进算法的过程。
+- **我们在问正确的问题吗?**。提出的问题本身开始受到质疑。研究人员开始面对对其方法的批评:
+ - 图灵测试受到质疑,其中包括“中文房间理论”这一观点,该理论认为,“编程一个数字计算机可能会使其看起来理解语言,但不能产生真正的理解。”([来源](https://plato.stanford.edu/entries/chinese-room/))
+ - 将像“治疗师”ELIZA这样的人工智能引入社会的伦理问题受到挑战。
+
+---
+
+同时,各种人工智能思想流派开始形成。“[简洁AI与凌乱AI](https://wikipedia.org/wiki/Neats_and_scruffies)”的二分法被确立。_凌乱_实验室通过长时间调整程序来获得期望的结果。_简洁_实验室“专注于逻辑和正式问题解决”。ELIZA和SHRDLU是著名的_凌乱_系统。到了80年代,随着对机器学习系统可重复性需求的出现,_简洁_方法逐渐占据了主导地位,因为其结果更具解释性。
+
+---
+## 1980年代 专家系统
+
+随着该领域的发展,其对商业的好处变得更加明显,到了80年代,“专家系统”也开始普及。“专家系统是最早真正成功的人工智能(AI)软件形式之一。”([来源](https://wikipedia.org/wiki/Expert_system))。
+
+这种类型的系统实际上是_混合_的,部分由定义业务需求的规则引擎组成,部分由利用规则系统推导新事实的推理引擎组成。
+
+这个时代也越来越关注神经网络。
+
+---
+## 1987 - 1993年:AI‘寒潮’
+
+专门的专家系统硬件的普及产生了不幸的后果,即变得过于专业化。个人计算机的兴起也与这些大型、专门、集中化的系统竞争。计算的民主化已经开始,并最终为现代大数据的爆发铺平了道路。
+
+---
+## 1993 - 2011年
+
+这一时期标志着机器学习和人工智能能够解决早期由于数据和计算能力不足而导致的一些问题。数据量开始迅速增加并变得更广泛可用,无论是好是坏,尤其是在2007年左右智能手机的出现。计算能力成倍增长,算法也随之发展。随着过去自由放任的日子开始凝聚成一个真正的学科,该领域开始走向成熟。
+
+---
+## 现在
+
+今天,机器学习和人工智能几乎触及我们生活的每个部分。这个时代需要对这些算法对人类生活的风险和潜在影响有仔细的理解。正如微软的Brad Smith所说,“信息技术提出了涉及基本人权保护的核心问题,如隐私和言论自由。这些问题加剧了创造这些产品的科技公司的责任。在我们看来,这也呼吁政府进行深思熟虑的监管,并制定关于可接受用途的规范”([来源](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/))。
+
+---
+
+未来将如何发展尚未可知,但理解这些计算机系统以及它们运行的软件和算法是很重要的。我们希望这门课程能帮助你更好地理解,以便你自己做出决定。
+
+[深度学习的历史](https://www.youtube.com/watch?v=mTtDfKgLm54)
+> 🎥 点击上方链接观看视频:Yann LeCun在这次讲座中讨论了深度学习的历史
+
+---
+## 🚀挑战
+
+深入了解这些历史时刻之一,了解背后的人物。这些人物非常有趣,没有任何科学发现是在文化真空中产生的。你发现了什么?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/4/)
+
+---
+## 复习与自学
+
+这里有一些可以观看和聆听的内容:
+
+[这个播客中,Amy Boyd讨论了AI的演变](http://runasradio.com/Shows/Show/739)
+[艾米·博伊德讲述AI历史](https://www.youtube.com/watch?v=EJt3_bFYKss)
+
+---
+
+## 作业
+
+[创建时间线](assignment.md)
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的本国语言版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用此翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/2-history-of-ML/assignment.md b/translations/zh/1-Introduction/2-history-of-ML/assignment.md
new file mode 100644
index 000000000..b98ff4592
--- /dev/null
+++ b/translations/zh/1-Introduction/2-history-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# 创建时间线
+
+## 说明
+
+使用[这个仓库](https://github.com/Digital-Humanities-Toolkit/timeline-builder),创建一个关于算法、数学、统计、人工智能或机器学习历史某一方面的时间线,或这些领域的组合。你可以专注于一个人、一个想法,或一个长时间段的思想。确保添加多媒体元素。
+
+## 评分标准
+
+| 标准 | 模范 | 合格 | 需要改进 |
+| ------ | ------------------------------------------------- | --------------------------------------- | --------------------------------------------------------------- |
+| | 部署的时间线作为GitHub页面展示 | 代码不完整且未部署 | 时间线不完整,研究不充分且未部署 |
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应以原文档的母语版本为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/3-fairness/README.md b/translations/zh/1-Introduction/3-fairness/README.md
new file mode 100644
index 000000000..99c2b301e
--- /dev/null
+++ b/translations/zh/1-Introduction/3-fairness/README.md
@@ -0,0 +1,159 @@
+# 构建负责任的人工智能解决方案
+
+
+> 速写笔记由 [Tomomi Imura](https://www.twitter.com/girlie_mac) 提供
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## 介绍
+
+在本课程中,你将开始了解机器学习如何以及正在影响我们的日常生活。即使在现在,系统和模型已经参与了日常的决策任务,例如医疗诊断、贷款审批或检测欺诈。因此,这些模型需要表现良好,以提供可信的结果。就像任何软件应用程序一样,AI系统也会有未达预期或产生不理想结果的情况。这就是为什么理解和解释AI模型行为是至关重要的。
+
+想象一下,当你用来构建这些模型的数据缺乏某些人口统计信息,如种族、性别、政治观点、宗教,或者这些人口统计信息被不成比例地代表时,会发生什么情况。再想想当模型的输出被解读为偏向某些人口统计时,会有什么后果?另外,当模型产生不利结果并对人们造成伤害时,会发生什么?谁应该对AI系统的行为负责?这些是我们将在本课程中探讨的问题。
+
+在本课中,你将:
+
+- 提高对机器学习中公平性的重要性及相关危害的认识。
+- 熟悉探索异常值和不寻常场景的实践,以确保可靠性和安全性。
+- 了解设计包容性系统以赋权所有人的必要性。
+- 探讨保护数据和个人隐私与安全的重要性。
+- 了解采用透明方法解释AI模型行为的重要性。
+- 注意到责任感在建立AI系统信任中的重要性。
+
+## 先决条件
+
+作为先决条件,请完成“负责任AI原则”学习路径并观看以下视频:
+
+通过以下[学习路径](https://docs.microsoft.com/learn/modules/responsible-ai-principles/?WT.mc_id=academic-77952-leestott)了解更多关于负责任AI的信息。
+
+[Microsoft's Approach to Responsible AI](https://youtu.be/dnC8-uUZXSc)
+
+> 🎥 点击上面的链接观看视频:Microsoft's Approach to Responsible AI
+
+## 公平性
+
+AI系统应该公平对待每个人,避免对相似群体的人产生不同的影响。例如,当AI系统提供医疗建议、贷款申请或就业指导时,它们应该对有相似症状、财务状况或专业资格的每个人提供相同的建议。我们每个人都携带着影响我们决策和行动的继承偏见。这些偏见可能体现在我们用来训练AI系统的数据中。这种操控有时是无意的。通常很难有意识地知道你何时在数据中引入了偏见。
+
+**“不公平”** 包括对某一群体的负面影响或“伤害”,例如按种族、性别、年龄或残疾状态定义的群体。主要的公平性相关伤害可以分类为:
+
+- **分配**,例如一个性别或种族被偏爱于另一个。
+- **服务质量**。如果你为一个特定场景训练数据,但现实要复杂得多,就会导致服务表现不佳。例如,一个手部肥皂分配器似乎无法感应深色皮肤的人。[参考](https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773)
+- **诽谤**。不公平地批评和标签某物或某人。例如,一种图像标签技术臭名昭著地将深色皮肤人的图像错误标记为大猩猩。
+- **过度或不足代表**。某一群体在某一职业中未被看到的想法,任何继续推广这种现象的服务或功能都是在造成伤害。
+- **刻板印象**。将某一群体与预先分配的属性联系在一起。例如,英语和土耳其语之间的语言翻译系统可能由于与性别有关的刻板印象而出现错误。
+
+
+> 翻译成土耳其语
+
+
+> 翻译回英语
+
+在设计和测试AI系统时,我们需要确保AI是公平的,并且没有被编程成做出有偏见或歧视性的决策,这是人类也被禁止做出的。确保AI和机器学习中的公平性仍然是一个复杂的社会技术挑战。
+
+### 可靠性和安全性
+
+为了建立信任,AI系统需要在正常和意外条件下可靠、安全和一致。了解AI系统在各种情况下的行为是很重要的,特别是在异常情况下。构建AI解决方案时,需要大量关注如何处理AI解决方案可能遇到的各种情况。例如,一辆自动驾驶汽车需要将人的安全放在首位。因此,驱动汽车的AI需要考虑汽车可能遇到的所有场景,如夜晚、雷暴或暴风雪、孩子跑过街道、宠物、道路施工等。一个AI系统在多种条件下可靠和安全地处理情况的能力反映了数据科学家或AI开发人员在设计或测试系统时所考虑的预期水平。
+
+> [🎥 点击这里观看视频:AI 中的可靠性和安全性](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+
+### 包容性
+
+AI系统应该设计成能够吸引和赋权所有人。在设计和实施AI系统时,数据科学家和AI开发人员会识别和解决系统中可能无意中排除某些人的潜在障碍。例如,全球有10亿残疾人。随着AI的发展,他们可以更轻松地在日常生活中访问各种信息和机会。通过解决这些障碍,可以创新和开发出为每个人带来更好体验的AI产品。
+
+> [🎥 点击这里观看视频:AI中的包容性](https://www.microsoft.com/videoplayer/embed/RE4vl9v)
+
+### 安全和隐私
+
+AI系统应该是安全的,并尊重人们的隐私。人们对那些将其隐私、信息或生活置于风险中的系统信任度较低。在训练机器学习模型时,我们依赖数据来产生最佳结果。在此过程中,数据的来源和完整性必须考虑。例如,数据是用户提交的还是公开可用的?接下来,在处理数据时,至关重要的是开发能够保护机密信息并抵御攻击的AI系统。随着AI的普及,保护隐私和确保重要的个人和商业信息的安全变得越来越重要和复杂。隐私和数据安全问题需要特别关注AI,因为访问数据对于AI系统做出准确和知情的预测和决策至关重要。
+
+> [🎥 点击这里观看视频:AI中的安全性](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- 作为一个行业,我们在隐私和安全方面取得了重大进展,这主要得益于如GDPR(通用数据保护条例)等法规的推动。
+- 然而,对于AI系统,我们必须承认在需要更多个人数据以使系统更个性化和有效与隐私之间的紧张关系。
+- 就像互联网带来的连接计算机一样,我们也看到与AI相关的安全问题数量激增。
+- 同时,我们也看到AI被用来改善安全性。例如,大多数现代的防病毒扫描器今天都是由AI启发驱动的。
+- 我们需要确保我们的数据科学流程与最新的隐私和安全实践和谐融合。
+
+### 透明性
+
+AI系统应该是可理解的。透明性的一个关键部分是解释AI系统及其组件的行为。提高对AI系统的理解需要利益相关者理解它们的工作原理和原因,以便他们能够识别潜在的性能问题、安全和隐私问题、偏见、排他性做法或意外结果。我们还认为,使用AI系统的人应该诚实并坦率地说明何时、为何以及如何选择部署它们,以及所使用系统的局限性。例如,如果一家银行使用AI系统来支持其消费者贷款决策,重要的是检查结果并了解哪些数据影响了系统的建议。政府开始在各个行业对AI进行监管,因此数据科学家和组织必须解释AI系统是否符合监管要求,特别是在出现不理想结果时。
+
+> [🎥 点击这里观看视频:AI中的透明性](https://www.microsoft.com/videoplayer/embed/RE4voJF)
+
+- 由于AI系统非常复杂,很难理解它们的工作原理并解释结果。
+- 这种缺乏理解会影响这些系统的管理、操作和文档编制方式。
+- 更重要的是,这种缺乏理解会影响使用这些系统产生结果的决策。
+
+### 责任
+
+设计和部署AI系统的人必须对其系统的运行负责。责任感的需求在敏感技术的使用中尤为重要,如面部识别技术。最近,对面部识别技术的需求不断增长,特别是执法机构看到了这项技术在寻找失踪儿童等用途中的潜力。然而,这些技术可能会被政府用来威胁公民的基本自由,例如,通过使对特定个人的持续监控成为可能。因此,数据科学家和组织需要对其AI系统对个人或社会的影响负责。
+
+[面部识别技术大规模监控的警告](https://www.youtube.com/watch?v=Wldt8P5V6D0)
+
+> 🎥 点击上面的链接观看视频:面部识别技术大规模监控的警告
+
+最终,对于我们这一代人来说,作为将AI引入社会的第一代人,最大的一个问题是如何确保计算机对人负责,以及如何确保设计计算机的人对所有人负责。
+
+## 影响评估
+
+在训练机器学习模型之前,进行影响评估以了解AI系统的目的、预期用途、部署地点以及与系统互动的人是谁,这一点很重要。这些对于评审员或测试人员评估系统时了解在识别潜在风险和预期后果时需要考虑的因素非常有帮助。
+
+进行影响评估时的重点领域如下:
+
+* **对个人的不利影响**。了解任何限制或要求、不支持的使用或任何已知的性能限制对于确保系统不会以可能对个人造成伤害的方式使用至关重要。
+* **数据要求**。了解系统如何以及在哪里使用数据,使评审员能够探索你需要注意的任何数据要求(例如 GDPR 或 HIPAA 数据法规)。此外,检查数据的来源或数量是否足够用于训练。
+* **影响总结**。收集一份使用系统可能带来的潜在伤害清单。在ML生命周期中,检查是否已缓解或解决识别出的问题。
+* **每个核心原则的适用目标**。评估每个原则的目标是否实现,以及是否存在任何差距。
+
+## 使用负责任AI进行调试
+
+类似于调试软件应用程序,调试AI系统是识别和解决系统问题的必要过程。许多因素会影响模型未按预期或负责任地表现。大多数传统的模型性能指标是模型性能的定量汇总,这不足以分析模型如何违反负责任AI原则。此外,机器学习模型是一个黑箱,难以理解其结果的驱动因素或在出现错误时提供解释。在本课程的后期,我们将学习如何使用负责任AI仪表板来帮助调试AI系统。仪表板为数据科学家和AI开发人员提供了一个全面的工具,用于执行:
+
+* **错误分析**。识别模型的错误分布,这可能影响系统的公平性或可靠性。
+* **模型概述**。发现模型在数据群体中的性能差异。
+* **数据分析**。了解数据分布并识别数据中可能导致公平性、包容性和可靠性问题的潜在偏见。
+* **模型可解释性**。了解影响或影响模型预测的因素。这有助于解释模型的行为,对于透明性和责任感非常重要。
+
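+下面是一个简化的示意草图,展示上述“模型概述”式分组分析的基本思路:用开源的 Fairlearn 库按敏感特征分组比较准确率。其中的数据与列名均为虚构,它也不是负责任 AI 仪表板本身:
+
+```python
+# 示意:按敏感特征(这里假设为性别)分组比较模型准确率
+import pandas as pd
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import accuracy_score
+from fairlearn.metrics import MetricFrame
+
+# 虚构的小数据集:两个特征、一个敏感特征和一个标签
+data = pd.DataFrame({
+    "feature_1": [1, 2, 3, 4, 5, 6, 7, 8],
+    "feature_2": [5, 3, 6, 2, 7, 1, 8, 4],
+    "sex":       ["F", "M", "F", "M", "F", "M", "F", "M"],
+    "label":     [1, 0, 1, 0, 1, 0, 1, 1],
+})
+
+X = data[["feature_1", "feature_2"]]
+y = data["label"]
+
+model = LogisticRegression().fit(X, y)
+y_pred = model.predict(X)
+
+# MetricFrame 按组汇总指标,便于发现组间的性能差异
+mf = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
+                 sensitive_features=data["sex"])
+print("整体准确率:", mf.overall)
+print("分组准确率:\n", mf.by_group)
+```
+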
+## 🚀 挑战
+
+为了防止伤害的引入,我们应该:
+
+- 让系统开发团队拥有多样化的背景和观点
+- 投资反映我们社会多样性的数据集
+- 在整个机器学习生命周期中开发更好的方法,以便在不负责任的 AI 行为出现时及时检测和纠正
+
+思考一些现实生活中的场景,模型的不可信性在模型构建和使用中变得显而易见。我们还应该考虑什么?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## 复习与自学
+
+在本课中,你学习了机器学习中公平性和不公平性的基本概念。
+
+观看这个研讨会,深入了解这些主题:
+
+- 追求负责任AI:将原则付诸实践,作者:Besmira Nushi、Mehrnoosh Sameki 和 Amit Sharma
+
+[RAI Toolbox: An open-source framework for building responsible AI](https://www.youtube.com/watch?v=tGgJCrA-MZU)
+
+> 🎥 点击上面的链接观看视频:RAI Toolbox: An open-source framework for building responsible AI by Besmira Nushi, Mehrnoosh Sameki, and Amit Sharma
+
+另外,阅读:
+
+- Microsoft的RAI资源中心:[负责任的AI资源 – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- Microsoft的FATE研究组:[FATE: 公平性、责任感、透明性和AI伦理 - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+RAI Toolbox:
+
+- [负责任AI工具箱GitHub仓库](https://github.com/microsoft/responsible-ai-toolbox)
+
+阅读关于Azure机器学习的工具以确保公平性:
+
+- [Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-fairness-ml?WT.mc_id=academic-77952-leestott)
+
+## 作业
+
+[探索RAI工具箱](assignment.md)
+
+**免责声明**:
+本文件已使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业人工翻译。我们对使用本翻译可能引起的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/3-fairness/assignment.md b/translations/zh/1-Introduction/3-fairness/assignment.md
new file mode 100644
index 000000000..edbbbafb9
--- /dev/null
+++ b/translations/zh/1-Introduction/3-fairness/assignment.md
@@ -0,0 +1,14 @@
+# 探索负责任的AI工具箱
+
+## 说明
+
+在本课中,您了解了负责任的AI工具箱,这是一个“开源的、社区驱动的项目,旨在帮助数据科学家分析和改进AI系统。”在这个作业中,请探索RAI工具箱的一个[notebook](https://github.com/microsoft/responsible-ai-toolbox/blob/main/notebooks/responsibleaidashboard/getting-started.ipynb),并在报告或演示文稿中汇报您的发现。
+
+## 评分标准
+
+| 标准 | 模范 | 合格 | 需要改进 |
+| -------- | --------- | -------- | -------------- |
+| | 提交了一篇讨论Fairlearn系统、运行的notebook及其结论的论文或PowerPoint演示文稿 | 提交了一篇没有结论的论文 | 没有提交论文 |
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于重要信息,建议使用专业的人类翻译。对于因使用本翻译而产生的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/4-techniques-of-ML/README.md b/translations/zh/1-Introduction/4-techniques-of-ML/README.md
new file mode 100644
index 000000000..9efd27181
--- /dev/null
+++ b/translations/zh/1-Introduction/4-techniques-of-ML/README.md
@@ -0,0 +1,121 @@
+# 机器学习技术
+
+构建、使用和维护机器学习模型及其使用的数据的过程,与许多其他开发工作流非常不同。在本课中,我们将揭开这个过程的神秘面纱,并概述您需要了解的主要技术。您将:
+
+- 从高层次上理解机器学习的基本过程。
+- 探索基本概念,如“模型”、“预测”和“训练数据”。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/7/)
+
+[ML for beginners - Techniques of Machine Learning](https://youtu.be/4NGM0U2ZSHU)
+
+> 🎥 点击上方链接观看本课的简短视频。
+
+## 介绍
+
+从高层次来看,创建机器学习(ML)过程的工艺包含若干步骤:
+
+1. **确定问题**。大多数ML过程都是从提出一个不能通过简单的条件程序或基于规则的引擎回答的问题开始。这些问题通常围绕基于数据集合的预测展开。
+2. **收集和准备数据**。为了能够回答您的问题,您需要数据。数据的质量和数量有时会决定您能多好地回答最初的问题。数据可视化是这一阶段的重要方面。此阶段还包括将数据分为训练组和测试组以构建模型。
+3. **选择训练方法**。根据您的问题和数据的性质,您需要选择如何训练模型以最好地反映您的数据并对其进行准确预测。这是您的ML过程需要特定专业知识并且通常需要大量实验的部分。
+4. **训练模型**。使用您的训练数据,您将使用各种算法训练模型以识别数据中的模式。模型可能利用内部权重,可以调整这些权重以优先考虑数据的某些部分,从而构建更好的模型。
+5. **评估模型**。您使用从未见过的数据(您的测试数据)来查看模型的表现。
+6. **参数调整**。根据模型的表现,您可以使用不同的参数或变量重新进行该过程,这些参数或变量控制用于训练模型的算法的行为。
+7. **预测**。使用新输入来测试模型的准确性。
+
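+为了让上面的步骤更具体,下面给出一个极简的端到端草图(仅作示意,并非本课的正式练习;这里直接使用 Scikit-learn 自带的糖尿病数据集,并省略了数据可视化与参数调整等步骤):
+
+```python
+# 端到端示意:准备数据 -> 分割 -> 训练 -> 评估 -> 预测
+from sklearn import datasets, linear_model, model_selection
+
+# 第 1-2 步:确定问题并准备数据(这里直接加载内置数据集)
+X, y = datasets.load_diabetes(return_X_y=True)
+
+# 第 2 步的一部分:把数据分为训练组和测试组
+X_train, X_test, y_train, y_test = model_selection.train_test_split(
+    X, y, test_size=0.2, random_state=0)
+
+# 第 3-4 步:选择训练方法并训练模型
+model = linear_model.LinearRegression()
+model.fit(X_train, y_train)
+
+# 第 5 步:用模型从未见过的测试数据评估模型
+print("R^2 得分:", model.score(X_test, y_test))
+
+# 第 7 步:对新输入进行预测
+print("预测值:", model.predict(X_test[:1]))
+```
+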
+## 提出什么问题
+
+计算机特别擅长发现数据中的隐藏模式。这一功能对于研究人员来说非常有用,他们在某个领域有一些问题,这些问题不能通过创建基于条件的规则引擎轻松回答。举例来说,给定一个精算任务,数据科学家可能能够构建关于吸烟者与非吸烟者死亡率的手工规则。
+
+然而,当许多其他变量被引入时,基于过去的健康历史,ML模型可能更有效地预测未来的死亡率。一个更愉快的例子可能是根据包括纬度、经度、气候变化、海洋的接近度、喷流模式等数据,预测某个地点四月份的天气。
+
+✅ 这份关于天气模型的[幻灯片](https://www2.cisl.ucar.edu/sites/default/files/2021-10/0900%20June%2024%20Haupt_0.pdf)提供了使用ML进行天气分析的历史视角。
+
+## 构建前的任务
+
+在开始构建模型之前,您需要完成几个任务。为了测试您的问题并根据模型的预测形成假设,您需要识别和配置几个元素。
+
+### 数据
+
+为了能够以任何确定性回答您的问题,您需要大量合适类型的数据。在这一点上,您需要做两件事:
+
+- **收集数据**。牢记上一课关于数据分析公平性的内容,谨慎收集您的数据。了解这些数据的来源、它可能具有的任何内在偏见,并记录其来源。
+- **准备数据**。数据准备过程有几个步骤。如果数据来自不同的来源,您可能需要整理数据并使其规范化。您可以通过各种方法提高数据的质量和数量,例如将字符串转换为数字(如我们在[聚类](../../5-Clustering/1-Visualize/README.md)中所做的)。您还可以基于原始数据生成新数据(如我们在[分类](../../4-Classification/1-Introduction/README.md)中所做的)。您可以清理和编辑数据(如我们在[Web应用](../../3-Web-App/README.md)课程之前所做的)。最后,根据您的训练技术,您可能还需要对其进行随机化和打乱。
+
+✅ 收集和处理数据后,花点时间看看它的形状是否能让您解决预期的问题。正如我们在[聚类](../../5-Clustering/1-Visualize/README.md)课程中发现的那样,数据可能无法在您的任务中表现良好!
+
+### 特征和目标
+
+一个[特征](https://www.datasciencecentral.com/profiles/blogs/an-introduction-to-variable-and-feature-selection)是数据的可测量属性。在许多数据集中,它表现为列标题,如“日期”、“大小”或“颜色”。您的特征变量,通常在代码中表示为`X`,代表用于训练模型的输入变量。
+
+目标是您试图预测的事物。目标通常在代码中表示为`y`,代表您试图从数据中询问的问题的答案:在十二月,哪种**颜色**的南瓜最便宜?在旧金山,哪个社区的房地产**价格**最好?有时目标也称为标签属性。
+
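+下面是一个小草图,演示如何在代码中分离特征 `X` 和目标 `y`(其中的数据和列名均为虚构,仅作示意):
+
+```python
+# 示意:从一个假设的南瓜价格表中分离特征变量 X 和目标 y
+import pandas as pd
+
+df = pd.DataFrame({
+    "Month": [9, 9, 10, 10, 11],             # 特征:销售月份
+    "Size":  [4.5, 5.0, 6.1, 4.8, 7.2],      # 特征:南瓜大小
+    "Price": [15.0, 13.5, 18.0, 14.2, 19.5], # 目标:要预测的价格
+})
+
+X = df[["Month", "Size"]]  # 输入变量
+y = df["Price"]            # 标签/目标
+print(X.shape, y.shape)
+```
+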
+### 选择您的特征变量
+
+🎓 **特征选择和特征提取** 如何在构建模型时知道选择哪个变量?您可能会通过特征选择或特征提取的过程来选择最合适的变量以构建性能最佳的模型。然而,它们并不是一回事:“特征提取从原始特征的函数中创建新特征,而特征选择返回特征的子集。”([来源](https://wikipedia.org/wiki/Feature_selection))
+
+### 可视化您的数据
+
+数据科学家工具包的重要方面是使用Seaborn或MatPlotLib等出色库来可视化数据的能力。以可视化方式表示您的数据可能会让您发现可以利用的隐藏关联。您的可视化还可能帮助您发现偏见或不平衡的数据(正如我们在[分类](../../4-Classification/2-Classifiers-1/README.md)中发现的那样)。
+
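+下面是一个最小的可视化草图(数据为虚构,仅作示意):
+
+```python
+# 示意:用 Matplotlib 画散点图,观察两个变量之间是否存在可利用的关系
+import matplotlib.pyplot as plt
+
+months = [8, 9, 9, 10, 10, 11]                 # 假设的月份
+prices = [12.0, 14.5, 15.0, 18.0, 17.5, 16.0]  # 假设的价格
+
+plt.scatter(months, prices)
+plt.xlabel("Month")
+plt.ylabel("Price")
+plt.show()
+```
+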
+### 分割您的数据集
+
+在训练之前,您需要将数据集分割成两个或多个不等大小的部分,这些部分仍能很好地代表数据。
+
+- **训练**。数据集的这一部分用于训练模型。这部分数据构成了原始数据集的大部分。
+- **测试**。测试数据集是一个独立的数据组,通常从原始数据中收集,用于确认构建模型的性能。
+- **验证**。验证集是一个较小的独立数据组,您用它来调整模型的超参数或架构以改进模型。根据您的数据大小和您提出的问题,您可能不需要构建这个第三组(正如我们在[时间序列预测](../../7-TimeSeries/1-Introduction/README.md)中所指出的那样)。
+
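+下面是一个按上述三部分分割数据的简单草图(比例与数据均为示例):
+
+```python
+# 示意:先留出测试集,再从剩余数据中留出验证集
+import numpy as np
+from sklearn.model_selection import train_test_split
+
+X = np.arange(100).reshape(-1, 1)  # 假设的特征
+y = np.arange(100)                 # 假设的目标
+
+# 先留出 20% 作为测试集
+X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+# 再从剩余 80% 中取 25%(即总量的 20%)作为验证集
+X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)
+
+print(len(X_train), len(X_val), len(X_test))  # 60 20 20
+```
+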
+## 构建模型
+
+使用您的训练数据,您的目标是使用各种算法构建一个模型或数据的统计表示来**训练**它。训练模型使其暴露于数据,并允许其对发现、验证和接受或拒绝的感知模式做出假设。
+
+### 决定训练方法
+
+根据您的问题和数据的性质,您将选择一种方法来训练它。通过[Scikit-learn的文档](https://scikit-learn.org/stable/user_guide.html)——我们在本课程中使用的库,您可以探索许多训练模型的方法。根据您的经验,您可能需要尝试几种不同的方法来构建最佳模型。您可能会经历一个过程,即数据科学家通过向模型提供未见过的数据来评估模型的性能,检查其准确性、偏差和其他质量降低问题,并选择最适合手头任务的训练方法。
+
+### 训练模型
+
+有了您的训练数据,您就可以“拟合”它来创建一个模型。您会注意到,在许多ML库中,您会发现代码“model.fit”——此时,您将以值数组(通常是“X”)的形式发送您的特征变量和目标变量(通常是“y”)。
+
+### 评估模型
+
+一旦训练过程完成(训练大型模型可能需要多次迭代或“周期”),您将能够使用测试数据来评估模型的质量,以衡量其性能。这些数据是模型之前未分析过的原始数据的一个子集。您可以打印出一张关于模型质量的指标表。
+
+🎓 **模型拟合**
+
+在机器学习的背景下,模型拟合是指模型的基础函数在尝试分析不熟悉的数据时的准确性。
+
+🎓 **欠拟合**和**过拟合**是降低模型质量的常见问题,因为模型要么拟合得不够好,要么拟合得太好。这导致模型的预测要么与训练数据过于紧密对齐,要么与训练数据对齐过于松散。过拟合的模型因为过于了解数据的细节和噪音而预测训练数据太好。欠拟合的模型不准确,因为它既不能准确分析其训练数据,也不能准确分析尚未“见过”的数据。
+
+
+> 信息图由[Jen Looper](https://twitter.com/jenlooper)制作
+
+## 参数调整
+
+初步训练完成后,观察模型的质量并考虑通过调整其“超参数”来改进它。阅读更多关于这一过程的内容[在文档中](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters?WT.mc_id=academic-77952-leestott)。
+
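+作为示意(上面的链接介绍的是 Azure 机器学习中的做法,这里改用 Scikit-learn 演示同样的思路),下面的草图用网格搜索在交叉验证下尝试不同的超参数组合:
+
+```python
+# 示意:用 GridSearchCV 调整 Ridge 回归的正则化强度 alpha
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+from sklearn.model_selection import GridSearchCV
+
+X, y = load_diabetes(return_X_y=True)
+
+param_grid = {"alpha": [0.1, 1.0, 10.0]}  # 待尝试的超参数取值
+search = GridSearchCV(Ridge(), param_grid, cv=5)
+search.fit(X, y)
+
+print("最佳超参数:", search.best_params_)
+print("对应的交叉验证得分:", search.best_score_)
+```
+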
+## 预测
+
+这是您可以使用全新数据测试模型准确性的时刻。在“应用”ML设置中,当您构建Web资产以在生产中使用模型时,这个过程可能涉及收集用户输入(例如按钮按下),以设置变量并将其发送到模型进行推断或评估。
+
+在这些课程中,您将发现如何使用这些步骤来准备、构建、测试、评估和预测——所有数据科学家的操作以及更多内容,随着您在成为“全栈”ML工程师的旅程中不断前进。
+
+---
+
+## 🚀挑战
+
+绘制一个反映ML实践者步骤的流程图。您现在在这个过程中看到自己处于哪个阶段?您预测您会在哪个阶段遇到困难?对您来说什么看起来很容易?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/8/)
+
+## 回顾与自学
+
+在网上搜索与数据科学家讨论他们日常工作的采访。这里有一个[采访](https://www.youtube.com/watch?v=Z3IjgbbCEfs)。
+
+## 作业
+
+[采访数据科学家](assignment.md)
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用此翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/4-techniques-of-ML/assignment.md b/translations/zh/1-Introduction/4-techniques-of-ML/assignment.md
new file mode 100644
index 000000000..7d40e6506
--- /dev/null
+++ b/translations/zh/1-Introduction/4-techniques-of-ML/assignment.md
@@ -0,0 +1,14 @@
+# 采访数据科学家
+
+## 说明
+
+在你的公司、用户群、朋友或同学中,找一个专业从事数据科学工作的人进行交谈。写一篇关于他们日常工作的简短文章(500字)。他们是专门从事某一领域,还是从事“全栈”工作?
+
+## 评分标准
+
+| 标准 | 卓越 | 合格 | 需要改进 |
+| ------ | -------------------------------------------------------------------------------------- | ------------------------------------------------------------------ | --------------------- |
+| | 提交了一篇符合字数要求的文章,且注明了来源,并以 .doc 文件格式呈现 | 文章来源注明不充分或篇幅短于要求长度 | 未提交文章 |
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/1-Introduction/README.md b/translations/zh/1-Introduction/README.md
new file mode 100644
index 000000000..2eabc296a
--- /dev/null
+++ b/translations/zh/1-Introduction/README.md
@@ -0,0 +1,26 @@
+# 机器学习简介
+
+在本课程部分中,您将了解机器学习领域的基本概念、它是什么,并了解其历史以及研究人员使用的技术。让我们一起探索这个机器学习的新世界吧!
+
+
+> 图片由 Bill Oxford 提供,来自 Unsplash
+
+### 课程
+
+1. [机器学习简介](1-intro-to-ML/README.md)
+1. [机器学习和人工智能的历史](2-history-of-ML/README.md)
+1. [公平性与机器学习](3-fairness/README.md)
+1. [机器学习的技术](4-techniques-of-ML/README.md)
+
+### 致谢
+
+“机器学习简介”由包括 [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan), [Ornella Altunyan](https://twitter.com/ornelladotcom) 和 [Jen Looper](https://twitter.com/jenlooper) 在内的团队用♥️编写。
+
+“机器学习的历史”由 [Jen Looper](https://twitter.com/jenlooper) 和 [Amy Boyd](https://twitter.com/AmyKateNicho) 用♥️编写。
+
+“公平性与机器学习”由 [Tomomi Imura](https://twitter.com/girliemac) 用♥️编写。
+
+“机器学习的技术”由 [Jen Looper](https://twitter.com/jenlooper) 和 [Chris Noring](https://twitter.com/softchris) 用♥️编写。
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误释,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/1-Tools/README.md b/translations/zh/2-Regression/1-Tools/README.md
new file mode 100644
index 000000000..016f5d338
--- /dev/null
+++ b/translations/zh/2-Regression/1-Tools/README.md
@@ -0,0 +1,228 @@
+# 使用 Python 和 Scikit-learn 构建回归模型入门
+
+
+
+> 由 [Tomomi Imura](https://www.twitter.com/girlie_mac) 绘制的手绘笔记
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/9/)
+
+> ### [本课程提供 R 语言版本!](../../../../2-Regression/1-Tools/solution/R/lesson_1.html)
+
+## 简介
+
+在这四节课中,你将学习如何构建回归模型。我们将很快讨论这些模型的用途。但在开始之前,请确保你已经准备好了合适的工具!
+
+在本课中,你将学到:
+
+- 配置你的电脑以进行本地机器学习任务。
+- 使用 Jupyter 笔记本。
+- 安装和使用 Scikit-learn。
+- 通过动手练习探索线性回归。
+
+## 安装和配置
+
+[机器学习初学者 - 设置工具以构建机器学习模型](https://youtu.be/-DfeD2k2Kj0)
+
+> 🎥 点击上面的链接观看一个简短的视频,了解如何配置你的电脑以进行机器学习。
+
+1. **安装 Python**。确保你的电脑上安装了 [Python](https://www.python.org/downloads/)。你将使用 Python 进行许多数据科学和机器学习任务。大多数计算机系统已经包含了 Python 安装。也有一些有用的 [Python 编码包](https://code.visualstudio.com/learn/educators/installers?WT.mc_id=academic-77952-leestott),可以简化某些用户的设置过程。
+
+   然而,有些 Python 应用需要某一个特定版本的软件,而另一些则需要不同的版本。因此,在 [虚拟环境](https://docs.python.org/3/library/venv.html) 中工作是很有用的。
+
+2. **安装 Visual Studio Code**。确保你的电脑上安装了 Visual Studio Code。按照这些说明进行 [安装 Visual Studio Code](https://code.visualstudio.com/) 的基本安装。在本课程中,你将使用 Visual Studio Code 中的 Python,因此你可能需要了解如何 [配置 Visual Studio Code](https://docs.microsoft.com/learn/modules/python-install-vscode?WT.mc_id=academic-77952-leestott) 以进行 Python 开发。
+
+ > 通过学习这组 [Learn 模块](https://docs.microsoft.com/users/jenlooper-2911/collections/mp1pagggd5qrq7?WT.mc_id=academic-77952-leestott) 来熟悉 Python
+ >
+   > [使用 Visual Studio Code 设置 Python](https://youtu.be/yyQM70vi7V8)
+   >
+   > 🎥 点击上面的链接观看视频:在 VS Code 中使用 Python。
+
+3. **安装 Scikit-learn**,按照 [这些说明](https://scikit-learn.org/stable/install.html)。由于你需要确保使用 Python 3,建议你使用虚拟环境。请注意,如果你在 M1 Mac 上安装此库,请参阅上述链接页面中的特别说明。
+
+1. **安装 Jupyter Notebook**。你需要 [安装 Jupyter 包](https://pypi.org/project/jupyter/)。
+
+## 你的机器学习编写环境
+
+你将使用 **notebooks** 来开发你的 Python 代码并创建机器学习模型。这种类型的文件是数据科学家的常用工具,可以通过其后缀或扩展名 `.ipynb` 识别。
+
+笔记本是一种交互式环境,允许开发者编写代码并添加注释和文档,这对于实验或研究导向的项目非常有帮助。
+
+[机器学习初学者 - 设置 Jupyter 笔记本以开始构建回归模型](https://youtu.be/7E-jC8FLA2E)
+
+> 🎥 点击上面的链接观看一个简短的视频,了解如何进行此练习。
+
+### 练习 - 使用笔记本
+
+在这个文件夹中,你会找到 _notebook.ipynb_ 文件。
+
+1. 在 Visual Studio Code 中打开 _notebook.ipynb_。
+
+   一个 Jupyter 服务器将会启动,并运行 Python 3+。你会发现笔记本中有一些区域可以 `run`,即代码片段。你可以通过选择一个看起来像播放按钮的图标来运行一个代码块。
+
+1. 选择 `md` 图标并添加一些 markdown,以及以下文本 **# 欢迎来到你的笔记本**。
+
+ 接下来,添加一些 Python 代码。
+
+1. 在代码块中输入 **print('hello notebook')**。
+1. 选择箭头来运行代码。
+
+ 你应该会看到打印的语句:
+
+ ```output
+ hello notebook
+ ```
+
+
+
+你可以在代码中插入注释,以自我记录笔记本。
+
+✅ 想一想,网页开发者的工作环境与数据科学家的工作环境有何不同。
+
+## 使用 Scikit-learn 入门
+
+现在 Python 已经在你的本地环境中设置完毕,并且你已经熟悉了 Jupyter 笔记本,让我们同样熟悉一下 Scikit-learn(其中 `sci` 的发音同 `science` 中的 sci)。Scikit-learn 提供了一个 [广泛的 API](https://scikit-learn.org/stable/modules/classes.html#api-ref),帮助你执行机器学习任务。
+
+根据他们的 [网站](https://scikit-learn.org/stable/getting_started.html),"Scikit-learn 是一个开源机器学习库,支持监督学习和非监督学习。它还提供了各种工具用于模型拟合、数据预处理、模型选择和评估,以及许多其他实用工具。"
+
+在本课程中,你将使用 Scikit-learn 和其他工具来构建机器学习模型,以执行我们称之为“传统机器学习”的任务。我们故意避开了神经网络和深度学习,因为它们将在我们即将推出的“AI for Beginners”课程中更好地覆盖。
+
+Scikit-learn 使构建模型和评估它们的使用变得简单。它主要专注于使用数值数据,并包含几个现成的数据集供学习使用。它还包括预构建的模型供学生尝试。让我们探索一下加载预打包数据和使用内置估算器来构建第一个机器学习模型的过程。
+
+## 练习 - 你的第一个 Scikit-learn 笔记本
+
+> 本教程的灵感来自 Scikit-learn 网站上的 [线性回归示例](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py)。
+
+[机器学习初学者 - 你的第一个 Python 线性回归项目](https://youtu.be/2xkXL5EUpS0)
+
+> 🎥 点击上面的链接观看一个简短的视频,了解如何进行此练习。
+
+在与本课相关的 _notebook.ipynb_ 文件中,按下“垃圾桶”图标清除所有单元格。
+
+在本节中,你将使用一个内置于 Scikit-learn 中的小数据集,关于糖尿病。假设你想测试一种糖尿病患者的治疗方法。机器学习模型可以帮助你确定哪些患者在不同变量组合下对治疗反应更好。即使是一个非常基本的回归模型,当可视化时,也可能显示出关于变量的信息,这些信息可以帮助你组织理论上的临床试验。
+
+✅ 回归方法有很多种,选择哪一种取决于你要回答的问题。如果你想预测一个特定年龄的人的可能身高,你会使用线性回归,因为你在寻找一个**数值值**。如果你想知道某种菜肴是否应该被认为是素食,你在寻找一个**类别分配**,因此你会使用逻辑回归。你稍后会学习更多关于逻辑回归的内容。想一想你可以向数据提出的一些问题,以及哪种方法更合适。
+
+让我们开始这个任务。
+
+### 导入库
+
+对于这个任务,我们将导入一些库:
+
+- **matplotlib**。这是一个有用的 [绘图工具](https://matplotlib.org/),我们将使用它来创建折线图。
+- **numpy**。 [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) 是一个处理 Python 数值数据的有用库。
+- **sklearn**。这是 [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) 库。
+
+导入一些库来帮助你完成任务。
+
+1. 通过输入以下代码添加导入:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn import datasets, linear_model, model_selection
+ ```
+
+   上面你导入了 `matplotlib` 和 `numpy`,并从 `sklearn` 中导入了 `datasets`、`linear_model` 和 `model_selection`。`model_selection` 用于将数据分割为训练集和测试集。
+
+### 糖尿病数据集
+
+内置的[糖尿病数据集](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)包含 442 个与糖尿病相关的数据样本,具有 10 个特征变量,其中包括:
+
+- age:年龄(岁)
+- bmi:身体质量指数
+- bp:平均血压
+- s1 tc:T 细胞(一种白细胞)
+
+✅ 这个数据集把“性别”作为对糖尿病研究很重要的特征变量。许多医疗数据集都包含这种二元分类。想一想,这样的分类可能会如何将人群中的某些部分排除在治疗之外。
+
+现在,加载 X 和 y 数据。
+
+> 🎓 记住,这是监督学习,我们需要一个名为 'y' 的目标。
+
+在一个新的代码单元格中,通过调用 `load_diabetes()` 加载糖尿病数据集。输入参数 `return_X_y=True` 表示 `X` 将是一个数据矩阵,而 `y` 将是回归目标。
+
+1. 添加一些打印命令以显示数据矩阵的形状及其第一个元素:
+
+ ```python
+ X, y = datasets.load_diabetes(return_X_y=True)
+ print(X.shape)
+ print(X[0])
+ ```
+
+   你得到的响应是一个元组。你所做的是将元组的前两个值分别分配给 `X` 和 `y`。了解更多 [关于元组](https://wikipedia.org/wiki/Tuple)。
+
+ 你可以看到这个数据有 442 个项目,形状为 10 个元素的数组:
+
+ ```text
+ (442, 10)
+ [ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076
+ -0.04340085 -0.00259226 0.01990842 -0.01764613]
+ ```
+
+ ✅ 想一想数据和回归目标之间的关系。线性回归预测特征 X 和目标变量 y 之间的关系。你能在文档中找到糖尿病数据集的 [目标](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) 吗?考虑到目标,这个数据集在展示什么?
+
+2. 接下来,通过选择数据集的第 3 列来绘制一部分数据。你可以使用 `:` 运算符选择所有行,然后用索引 (2) 选择第 3 列。你还可以使用 `reshape(n_rows, n_columns)` 将数据重塑为绘图所需的二维数组;如果其中一个参数为 -1,则相应的维度会被自动计算。
+
+ ```python
+ X = X[:, 2]
+ X = X.reshape((-1,1))
+ ```
+
+ ✅ 随时打印出数据以检查其形状。
+
+3. 现在你已经准备好要绘制的数据,可以看看机器是否能帮助确定此数据集中数字之间的逻辑分割。为此,你需要将数据(X)和目标(y)分成测试集和训练集。Scikit-learn 有一个简单的方法来做到这一点;你可以在给定点分割测试数据。
+
+ ```python
+ X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)
+ ```
+
+4. 现在你准备好训练你的模型了!加载线性回归模型,并使用 `model.fit()` 训练你的 X 和 y 训练集:
+
+ ```python
+ model = linear_model.LinearRegression()
+ model.fit(X_train, y_train)
+ ```
+
+   ✅ `model.fit()` 是你会在 TensorFlow 等许多机器学习库中见到的函数
+
+5. 然后,使用函数 `predict()` 对测试数据进行预测。预测结果将用于在数据组之间画线
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+6. 现在是时候在图中显示数据了。Matplotlib 是完成此任务的非常有用的工具。创建所有 X 和 y 测试数据的散点图,并使用预测在模型的数据组之间的最合适位置画一条线。
+
+ ```python
+ plt.scatter(X_test, y_test, color='black')
+ plt.plot(X_test, y_pred, color='blue', linewidth=3)
+ plt.xlabel('Scaled BMIs')
+ plt.ylabel('Disease Progression')
+ plt.title('A Graph Plot Showing Diabetes Progression Against BMI')
+ plt.show()
+ ```
+
+ 
+
+ ✅ 想一想这里发生了什么。一条直线穿过许多小数据点,但它到底在做什么?你能看到如何使用这条线来预测一个新的、未见过的数据点在图的 y 轴上的位置吗?试着用语言描述这个模型的实际用途。
+
+恭喜你,你构建了你的第一个线性回归模型,用它进行了预测,并在图中显示了它!
+
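+作为延伸(本课原文未包含这一步),你还可以用 Scikit-learn 的指标来量化模型的拟合质量。下面的草图假设你已经运行了上面的单元格,因此 `y_test` 和 `y_pred` 已经存在:
+
+```python
+# 示意:量化模型在测试集上的表现(假设 y_test 与 y_pred 来自上面的练习)
+from sklearn.metrics import mean_squared_error, r2_score
+
+print("均方误差 (MSE):", mean_squared_error(y_test, y_pred))
+print("决定系数 (R^2):", r2_score(y_test, y_pred))
+```
+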
+---
+## 🚀挑战
+
+绘制此数据集的不同变量。提示:编辑这行:`X = X[:,2]`。考虑到这个数据集的目标,你能发现糖尿病作为一种疾病的进展情况吗?
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/10/)
+
+## 复习与自学
+
+在本教程中,你使用了简单线性回归,而不是单变量或多变量线性回归。阅读一下这些方法之间的区别,或者看看 [这个视频](https://www.coursera.org/lecture/quantifying-relationships-regression-models/linear-vs-nonlinear-categorical-variables-ai2Ef)。
+
+阅读更多关于回归概念的内容,并思考可以用这种技术回答哪些问题。参加这个 [教程](https://docs.microsoft.com/learn/modules/train-evaluate-regression-models?WT.mc_id=academic-77952-leestott) 以加深理解。
+
+## 作业
+
+[不同的数据集](assignment.md)
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/1-Tools/assignment.md b/translations/zh/2-Regression/1-Tools/assignment.md
new file mode 100644
index 000000000..c83d52ae8
--- /dev/null
+++ b/translations/zh/2-Regression/1-Tools/assignment.md
@@ -0,0 +1,16 @@
+# 使用Scikit-learn进行回归分析
+
+## 说明
+
+查看Scikit-learn中的 [Linnerud 数据集](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_linnerud.html#sklearn.datasets.load_linnerud)。这个数据集有多个 [目标](https://scikit-learn.org/stable/datasets/toy_dataset.html#linnerrud-dataset):'它由来自健身俱乐部的二十名中年男性的三项运动(数据)和三项生理(目标)变量组成'。
+
+用你自己的话描述如何创建一个回归模型,该模型将绘制腰围与完成仰卧起坐次数之间的关系。对该数据集中的其他数据点也做同样的描述。
+
+## 评分标准
+
+| 标准 | 杰出 | 合格 | 需要改进 |
+| ----------------------------- | -------------------------------- | --------------------------- | ------------------------ |
+| 提交描述性段落 | 提交的段落写得很好 | 提交了几句话 | 没有提供描述 |
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/1-Tools/solution/Julia/README.md b/translations/zh/2-Regression/1-Tools/solution/Julia/README.md
new file mode 100644
index 000000000..60f126054
--- /dev/null
+++ b/translations/zh/2-Regression/1-Tools/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对因使用此翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/2-Data/README.md b/translations/zh/2-Regression/2-Data/README.md
new file mode 100644
index 000000000..332261e01
--- /dev/null
+++ b/translations/zh/2-Regression/2-Data/README.md
@@ -0,0 +1,215 @@
+# 使用 Scikit-learn 构建回归模型:准备和可视化数据
+
+
+
+信息图由 [Dasani Madipalli](https://twitter.com/dasani_decoded) 提供
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/11/)
+
+> ### [本课程有 R 版本!](../../../../2-Regression/2-Data/solution/R/lesson_2.html)
+
+## 介绍
+
+现在你已经准备好了使用 Scikit-learn 构建机器学习模型的工具,你可以开始向你的数据提出问题了。在处理数据和应用机器学习解决方案时,理解如何提出正确的问题以正确地释放数据集的潜力是非常重要的。
+
+在本课中,你将学习:
+
+- 如何为模型构建准备数据。
+- 如何使用 Matplotlib 进行数据可视化。
+
+## 向数据提出正确的问题
+
+你需要回答的问题将决定你将使用哪种类型的机器学习算法。而你得到的答案的质量将在很大程度上取决于你的数据的性质。
+
+看看为本课提供的[数据](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv)。你可以在 VS Code 中打开这个 .csv 文件。快速浏览一下,你会发现有空白和混合的字符串和数字数据。还有一个奇怪的列叫做 'Package',其中的数据是 'sacks'、'bins' 和其他值的混合。实际上,这些数据有点乱。
+
+[初学者的机器学习 - 如何分析和清理数据集](https://youtu.be/5qGjczWTrDQ)
+
+> 🎥 点击上方链接观看准备本课数据的短视频。
+
+实际上,很少有数据集是完全准备好直接用于创建机器学习模型的。在本课中,你将学习如何使用标准 Python 库准备原始数据集。你还将学习各种数据可视化技术。
+
+## 案例研究:'南瓜市场'
+
+在这个文件夹中,你会在根 `data` 文件夹中找到一个名为 [US-pumpkins.csv](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) 的 .csv 文件,其中包含关于南瓜市场的 1757 行数据,按城市分组。这是从美国农业部分发的[特色作物终端市场标准报告](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)中提取的原始数据。
+
+### 准备数据
+
+这些数据是公共领域的数据。可以从 USDA 网站按城市下载多个单独的文件。为了避免太多单独的文件,我们将所有城市的数据合并到一个电子表格中,因此我们已经_准备_了一些数据。接下来,让我们仔细看看这些数据。
+
+### 南瓜数据 - 初步结论
+
+你注意到这些数据有什么特点?你已经看到有字符串、数字、空白和奇怪的值混合在一起,需要你理解。
+
+你可以用回归技术向这些数据提出什么问题?比如“预测某个月份出售的南瓜价格”。再看看数据,你需要做一些更改来创建所需的任务数据结构。
+
+## 练习 - 分析南瓜数据
+
+让我们使用 [Pandas](https://pandas.pydata.org/),(名称代表 `Python Data Analysis`)这是一个非常有用的数据处理工具,来分析和准备这些南瓜数据。
+
+### 首先,检查缺失日期
+
+你首先需要采取措施检查缺失日期:
+
+1. 将日期转换为月份格式(这些是美国日期,所以格式是 `MM/DD/YYYY`)。
+2. 提取月份到新列。
+
+在 Visual Studio Code 中打开 _notebook.ipynb_ 文件,并将电子表格导入到新的 Pandas 数据框中。
+
+1. 使用 `head()` 函数查看前五行。
+
+ ```python
+ import pandas as pd
+ pumpkins = pd.read_csv('../data/US-pumpkins.csv')
+ pumpkins.head()
+ ```
+
+ ✅ 你会使用什么函数来查看最后五行?
+
+1. 检查当前数据框中是否有缺失数据:
+
+ ```python
+ pumpkins.isnull().sum()
+ ```
+
+ 有缺失数据,但可能对当前任务没有影响。
+
+1. 为了使数据框更容易处理,使用 `loc` 函数仅选择需要的列。`loc` 会从原始数据框中提取一组行(作为第一个参数传入)和一组列(作为第二个参数传入)。在下面的例子中,表达式 `:` 表示“所有行”。
+
+ ```python
+ columns_to_select = ['Package', 'Low Price', 'High Price', 'Date']
+ pumpkins = pumpkins.loc[:, columns_to_select]
+ ```
+
+### 其次,确定南瓜的平均价格
+
+思考如何确定某个月份南瓜的平均价格。你会选择哪些列来完成这个任务?提示:你需要 3 列。
+
+解决方案:取 `Low Price` 和 `High Price` 列的平均值填充新的 Price 列,并将 Date 列转换为仅显示月份。幸运的是,根据上面的检查,日期或价格没有缺失数据。
+
+1. 要计算平均值,添加以下代码:
+
+ ```python
+ price = (pumpkins['Low Price'] + pumpkins['High Price']) / 2
+
+ month = pd.DatetimeIndex(pumpkins['Date']).month
+
+ ```
+
+ ✅ 随时使用 `print(month)` 打印任何你想检查的数据。
+
+2. 现在,将转换后的数据复制到新的 Pandas 数据框中:
+
+ ```python
+ new_pumpkins = pd.DataFrame({'Month': month, 'Package': pumpkins['Package'], 'Low Price': pumpkins['Low Price'],'High Price': pumpkins['High Price'], 'Price': price})
+ ```
+
+ 打印出你的数据框,你会看到一个干净、整洁的数据集,你可以在其上构建新的回归模型。
+
+### 但是等等!这里有些奇怪的东西
+
+如果你看看 `Package` 列,会发现南瓜以许多不同的配置出售。有些按“1 1/9 蒲式耳”计量,有些按“1/2 蒲式耳”计量,有些按每个南瓜、每磅出售,还有些装在宽度各异的大箱子里。
+
+> 南瓜似乎很难统一称重
+
+深入查看原始数据会发现一个有趣的现象:凡是 `Unit of Sale` 等于 'EACH' 或 'PER BIN' 的行,其 `Package` 类型也是按英寸、按箱或按“每个”计。南瓜似乎很难统一称重,所以让我们通过只选择 `Package` 列中包含字符串 'bushel' 的南瓜来过滤数据。
+
+1. 在文件顶部的初始 .csv 导入下添加一个过滤器:
+
+ ```python
+ pumpkins = pumpkins[pumpkins['Package'].str.contains('bushel', case=True, regex=True)]
+ ```
+
+ 如果你现在打印数据,你会看到你只得到了大约 415 行包含蒲式耳单位的南瓜数据。
+
+### 但是等等!还有一件事要做
+
+你注意到每行的蒲式耳量不同吗?你需要标准化定价以显示每蒲式耳的价格,所以做一些数学运算来标准化它。
+
+1. 在创建 new_pumpkins 数据框的代码块后添加这些行:
+
+ ```python
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1 1/9'), 'Price'] = price/(1 + 1/9)
+
+ new_pumpkins.loc[new_pumpkins['Package'].str.contains('1/2'), 'Price'] = price/(1/2)
+ ```
+
+✅ 根据 [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308),蒲式耳的重量取决于产品的类型,因为它是一个体积测量单位。“例如,一蒲式耳的西红柿应该重 56 磅……叶类蔬菜占据更多空间但重量较轻,所以一蒲式耳的菠菜只有 20 磅。”这都相当复杂!我们不必进行蒲式耳到磅的转换,而是按蒲式耳定价。所有这些对南瓜蒲式耳的研究,表明了解数据的性质是多么重要!
+
+现在,你可以根据蒲式耳测量分析每单位的定价。如果你再打印一次数据,你会看到它是如何标准化的。
+
+✅ 你注意到按半蒲式耳出售的南瓜非常贵吗?你能找出原因吗?提示:小南瓜比大南瓜贵得多,可能是因为每蒲式耳有更多的小南瓜,而一个大南瓜占用了很多空间。
+
+## 可视化策略
+
+数据科学家的职责之一是展示他们所处理的数据的质量和性质。为此,他们经常创建有趣的可视化,例如图表、图形和图表,展示数据的不同方面。通过这种方式,他们能够直观地展示关系和难以发现的差距。
+
+[初学者的机器学习 - 如何使用 Matplotlib 进行数据可视化](https://youtu.be/SbUkxH6IJo0)
+
+> 🎥 点击上方链接观看本课数据可视化的短视频。
+
+可视化还可以帮助确定最适合数据的机器学习技术。例如,似乎遵循一条线的散点图表明该数据是线性回归练习的良好候选数据。
+
+一个在 Jupyter 笔记本中效果很好的数据可视化库是 [Matplotlib](https://matplotlib.org/)(你在上一课中也看到了它)。
+
+> 在[这些教程](https://docs.microsoft.com/learn/modules/explore-analyze-data-with-python?WT.mc_id=academic-77952-leestott)中获得更多数据可视化经验。
+
+## 练习 - 试验 Matplotlib
+
+尝试创建一些基本图表来显示你刚创建的新数据框。基本折线图会显示什么?
+
+1. 在文件顶部,Pandas 导入下方导入 Matplotlib:
+
+ ```python
+ import matplotlib.pyplot as plt
+ ```
+
+1. 重新运行整个笔记本以刷新。
+1. 在笔记本底部添加一个单元格,将数据绘制为散点图:
+
+ ```python
+ price = new_pumpkins.Price
+ month = new_pumpkins.Month
+ plt.scatter(price, month)
+ plt.show()
+ ```
+
+ 
+
+ 这是一个有用的图表吗?它有什么让你惊讶的地方吗?
+
+ 这并不是特别有用,因为它只是在给定月份中显示你的数据点的分布。
+
+### 使其有用
+
+为了让图表显示有用的数据,你通常需要以某种方式对数据进行分组。让我们尝试创建一个图表,按月份对数据分组,展示每个月价格的分布情况。
+
+1. 添加一个单元格以创建分组条形图:
+
+ ```python
+ new_pumpkins.groupby(['Month'])['Price'].mean().plot(kind='bar')
+ plt.ylabel("Pumpkin Price")
+ ```
+
+ 
+
+ 这是一个更有用的数据可视化!它似乎表明南瓜的最高价格出现在九月和十月。这个符合你的预期吗?为什么或为什么不?
+
+---
+
+## 🚀挑战
+
+探索 Matplotlib 提供的不同类型的可视化。哪些类型最适合回归问题?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/12/)
+
+## 回顾与自学
+
+看看数据可视化的多种方式。列出各种可用的库,并注明哪些库最适合特定类型的任务,例如 2D 可视化与 3D 可视化。你发现了什么?
+
+## 作业
+
+[探索可视化](assignment.md)
+
+**免责声明**:
+本文件是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对使用此翻译引起的任何误解或曲解承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/2-Data/assignment.md b/translations/zh/2-Regression/2-Data/assignment.md
new file mode 100644
index 000000000..b799a65e7
--- /dev/null
+++ b/translations/zh/2-Regression/2-Data/assignment.md
@@ -0,0 +1,11 @@
+# 探索可视化
+
+有几种不同的库可用于数据可视化。使用本课中的南瓜数据在一个示例笔记本中创建一些可视化图表,使用matplotlib和seaborn。哪些库更容易使用?
+## 评分标准
+
+| 标准 | 卓越 | 充分 | 需要改进 |
+| ---- | ---- | ---- | -------- |
+| | 提交的笔记本中有两个探索/可视化图表 | 提交的笔记本中有一个探索/可视化图表 | 未提交笔记本 |
+
+**免责声明**:
+本文件使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业人工翻译。我们对因使用此翻译而引起的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/2-Data/solution/Julia/README.md b/translations/zh/2-Regression/2-Data/solution/Julia/README.md
new file mode 100644
index 000000000..40832f9a9
--- /dev/null
+++ b/translations/zh/2-Regression/2-Data/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/3-Linear/README.md b/translations/zh/2-Regression/3-Linear/README.md
new file mode 100644
index 000000000..1d2ba68dd
--- /dev/null
+++ b/translations/zh/2-Regression/3-Linear/README.md
@@ -0,0 +1,370 @@
+# 使用 Scikit-learn 构建回归模型:四种回归方法
+
+
+> 信息图作者 [Dasani Madipalli](https://twitter.com/dasani_decoded)
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/13/)
+
+> ### [这节课也有 R 版本!](../../../../2-Regression/3-Linear/solution/R/lesson_3.html)
+### 介绍
+
+到目前为止,你已经用南瓜定价数据集的示例数据探索了什么是回归,并用 Matplotlib 对其进行了可视化。
+
+现在你已经准备好深入了解机器学习中的回归。虽然可视化可以帮助你理解数据,但机器学习的真正力量在于_训练模型_。模型在历史数据上进行训练,以自动捕捉数据依赖关系,并允许你预测模型从未见过的新数据的结果。
+
+在本课中,你将学习两种类型的回归:_基本线性回归_和_多项式回归_,以及这些技术背后的一些数学原理。这些模型将允许我们根据不同的输入数据预测南瓜价格。
+
+[](https://youtu.be/CRxFT8oTDMg "初学者的机器学习 - 理解线性回归")
+
+> 🎥 点击上面的图片观看关于线性回归的简短视频概述。
+
+> 在整个课程中,我们假设读者只需具备最低限度的数学知识,并努力让来自其他领域的学生也能理解,因此请留意笔记、🧮 数学标注、图表和其他学习工具,以帮助理解。
+
+### 前提条件
+
+到现在为止,你应该熟悉我们正在检查的南瓜数据的结构。你可以在本课的_notebook.ipynb_文件中找到预加载和预清理的数据。在文件中,南瓜价格按蒲式耳显示在一个新的数据框中。确保你可以在 Visual Studio Code 的内核中运行这些笔记本。
+
+### 准备工作
+
+提醒一下,你正在加载这些数据以便提出问题。
+
+- 什么时候是购买南瓜的最佳时间?
+- 我可以期待一个迷你南瓜盒的价格是多少?
+- 我应该购买半蒲式耳的篮子还是 1 1/9 蒲式耳的盒子?
+
+让我们继续深入挖掘这些数据。
+
+在上一课中,你创建了一个 Pandas 数据框,并用部分原始数据集填充它,按蒲式耳标准化价格。然而,通过这样做,你只能收集到约400个数据点,而且只是在秋季月份。
+
+看看我们在本课配套笔记本中预加载的数据。数据已预加载,并绘制了初始散点图以显示月份数据。也许我们可以通过进一步清理数据来了解更多关于数据的性质。
+
+## 线性回归线
+
+正如你在第一课中所学,线性回归练习的目标是能够绘制一条线来:
+
+- **显示变量之间的关系**。
+- **做出预测**。准确预测新数据点在该线上的位置。
+
+这种类型的线通常是通过**最小二乘回归**绘制的。术语“最小二乘”的含义是:把回归线周围每个数据点到该线的距离(误差)平方后再相加。理想情况下,这个最终的总和要尽可能小,因为我们希望误差尽量少,即 `least-squares`(最小二乘)。
+
+我们这样做是因为我们希望建模出一条与所有数据点的累计距离最小的线。我们在相加之前先对各项取平方,因为我们关心的是误差的大小而不是方向。(数学说明之后附有一个用 NumPy 手动计算的示例。)
+
+> **🧮 给我展示数学**
+>
+> 这条线,称为_最佳拟合线_,可以通过[一个方程](https://en.wikipedia.org/wiki/Simple_linear_regression)来表示:
+>
+> ```
+> Y = a + bX
+> ```
+>
+> `X` 是“解释变量”(自变量),`Y` 是“因变量”。直线的斜率是 `b`,而 `a` 是 y 轴截距,即当 `X = 0` 时 `Y` 的值。
+>
+>
+>
+> 首先,计算斜率 `b`。信息图作者 [Jen Looper](https://twitter.com/jenlooper)
+>
+> 换句话说,回到我们南瓜数据的原始问题:“按月份预测每蒲式耳南瓜的价格”,`X` 对应价格,`Y` 对应销售月份。
+>
+>
+>
+> 计算 Y 的值。如果你花了大约 4 美元,那一定是四月!信息图作者 [Jen Looper](https://twitter.com/jenlooper)
+>
+> 计算这条线的数学方法必须能够给出直线的斜率,它也依赖于截距,即当 `X = 0` 时 `Y` 所在的位置。
+>
+> 你可以在 [Math is Fun](https://www.mathsisfun.com/data/least-squares-regression.html) 网站上查看这些值的计算方法。还可以访问[这个最小二乘计算器](https://www.mathsisfun.com/data/least-squares-calculator.html),观察数值如何影响这条线。
+
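+作为补充,下面用一个最小的 Python 示例(假设性的演示数据,非课程原始代码)按上述公式手动计算斜率 `b` 和截距 `a`:
+
+```python
+import numpy as np
+
+# 假设性的小数据集,仅用于演示最小二乘公式
+X = np.array([1, 2, 3, 4, 5], dtype=float)
+Y = np.array([2, 4, 5, 4, 5], dtype=float)
+
+# b = Σ((x - x̄)(y - ȳ)) / Σ((x - x̄)²),a = ȳ - b * x̄
+x_mean, y_mean = X.mean(), Y.mean()
+b = ((X - x_mean) * (Y - y_mean)).sum() / ((X - x_mean) ** 2).sum()
+a = y_mean - b * x_mean
+
+print(f'Y = {a:.2f} + {b:.2f}X')  # 最佳拟合线
+```
+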
+## 相关性
+
+还需要理解的一个术语是给定 X 和 Y 变量之间的**相关系数**。使用散点图可以快速直观地看出这个系数:数据点整齐地排成一条线的图具有高相关性,而数据点在 X 和 Y 之间到处散布的图则相关性较低。
+
+一个好的线性回归模型,应当是在使用最小二乘回归方法得到回归线后,相关系数较高(接近 1 而不是 0)的模型。
+
+✅ 运行本课配套的笔记本,查看“月份与价格”的散点图。根据你对散点图的直观判断,南瓜销售中月份与价格的关联看起来是高相关还是低相关?如果你使用比 `Month` 更细粒度的度量,例如*一年中的第几天*(即自年初以来的天数),结果会改变吗?
+
+在下面的代码中,我们假设已经清理了数据,并得到了一个名为 `new_pumpkins` 的数据框,类似于下面这样:
+
+ID | Month | DayOfYear | Variety | City | Package | Low Price | High Price | Price
+---|-------|-----------|---------|------|---------|-----------|------------|-------
+70 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+71 | 9 | 267 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+72 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 18.0 | 18.0 | 16.363636
+73 | 10 | 274 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 17.0 | 17.0 | 15.454545
+74 | 10 | 281 | PIE TYPE | BALTIMORE | 1 1/9 bushel cartons | 15.0 | 15.0 | 13.636364
+
+> 清理数据的代码可以在 [`notebook.ipynb`](../../../../2-Regression/3-Linear/notebook.ipynb) 中找到。我们执行了与上一课相同的清理步骤,并使用以下表达式计算了 `DayOfYear` 列:
+
+```python
+day_of_year = pd.to_datetime(pumpkins['Date']).apply(lambda dt: (dt-datetime(dt.year,1,1)).days)
+```
+
+现在你已经了解了线性回归背后的数学原理,让我们创建一个回归模型,看看我们是否可以预测哪种南瓜包装的价格最好。有人为节日南瓜田购买南瓜可能需要这些信息,以优化他们的南瓜包购买。
+
+## 寻找相关性
+
+[](https://youtu.be/uoRq-lW2eQo "初学者的机器学习 - 寻找相关性:线性回归的关键")
+
+> 🎥 点击上面的图片观看关于相关性的简短视频概述。
+
+从上一课中你可能已经看到,不同月份的平均价格如下所示:
+
+
+
+这表明应该存在一些相关性,我们可以尝试训练线性回归模型来预测 `Month` 与 `Price` 之间、或 `DayOfYear` 与 `Price` 之间的关系。下面的散点图展示了后一种关系:
+
+
+
+让我们使用 `corr` 函数看看是否存在相关性:
+
+```python
+print(new_pumpkins['Month'].corr(new_pumpkins['Price']))
+print(new_pumpkins['DayOfYear'].corr(new_pumpkins['Price']))
+```
+
+看起来相关性很小:按 `Month` 计算为 -0.15,按 `DayOfYear` 计算为 -0.17。不过可能还存在另一种重要的关系:不同的价格簇似乎对应着不同的南瓜品种。为了验证这个假设,让我们用不同的颜色绘制每个南瓜类别。通过向 `scatter` 绘图函数传递 `ax` 参数,我们可以把所有点绘制在同一张图上:
+
+```python
+ax=None
+colors = ['red','blue','green','yellow']
+for i,var in enumerate(new_pumpkins['Variety'].unique()):
+ df = new_pumpkins[new_pumpkins['Variety']==var]
+ ax = df.plot.scatter('DayOfYear','Price',ax=ax,c=colors[i],label=var)
+```
+
+
+
+我们的调查表明,品种对总体价格的影响比实际销售日期更大。我们可以通过柱状图看到这一点:
+
+```python
+new_pumpkins.groupby('Variety')['Price'].mean().plot(kind='bar')
+```
+
+
+
+让我们暂时只关注一种南瓜品种,“派型”,看看日期对价格的影响:
+
+```python
+pie_pumpkins = new_pumpkins[new_pumpkins['Variety']=='PIE TYPE']
+pie_pumpkins.plot.scatter('DayOfYear','Price')
+```
+
+
+如果我们现在用 `corr` 函数计算 `Price` 和 `DayOfYear` 之间的相关性,会得到大约 `-0.27` 的值,这意味着训练一个预测模型是有意义的。
+
+> 在训练线性回归模型之前,确保数据干净是很重要的。线性回归不适用于缺失值,因此清除所有空单元格是有意义的:
+
+```python
+pie_pumpkins.dropna(inplace=True)
+pie_pumpkins.info()
+```
+
+另一种方法是用相应列的平均值填充这些空值。
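+
+例如,下面是一个用列平均值填充空值的示意写法(假设性的替代方案;本课实际采用的是 `dropna`):
+
+```python
+# 假设性替代方案:用平均值填充缺失的价格,而不是删除整行
+pie_pumpkins['Price'] = pie_pumpkins['Price'].fillna(pie_pumpkins['Price'].mean())
+```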
+
+## 简单线性回归
+
+[](https://youtu.be/e4c_UP2fSjg "初学者的机器学习 - 使用 Scikit-learn 的线性和多项式回归")
+
+> 🎥 点击上面的图片观看关于线性和多项式回归的简短视频概述。
+
+为了训练我们的线性回归模型,我们将使用**Scikit-learn**库。
+
+```python
+from sklearn.linear_model import LinearRegression
+from sklearn.metrics import mean_squared_error
+from sklearn.model_selection import train_test_split
+```
+
+我们首先将输入值(特征)和预期输出(标签)分离到单独的 numpy 数组中:
+
+```python
+X = pie_pumpkins['DayOfYear'].to_numpy().reshape(-1,1)
+y = pie_pumpkins['Price']
+```
+
+> 请注意,我们必须对输入数据执行`reshape`操作,以便线性回归包能够正确理解它。线性回归期望输入为二维数组,其中数组的每一行对应一个输入特征向量。在我们的例子中,由于我们只有一个输入 - 我们需要一个形状为 N×1 的数组,其中 N 是数据集大小。
+
+然后,我们需要将数据分成训练和测试数据集,以便在训练后验证我们的模型:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+```
+
+最后,训练实际的线性回归模型只需要两行代码。我们定义 `LinearRegression` 对象,并使用 `fit` 方法将其拟合到我们的数据上:
+
+```python
+lin_reg = LinearRegression()
+lin_reg.fit(X_train,y_train)
+```
+
+`fit` 之后的 `LinearRegression` 对象包含回归的所有系数,可以通过 `.coef_` 属性访问。在我们的例子中只有一个系数,大约为 `-0.017`。这意味着价格似乎随时间略有下降,但降幅不大,大约每天 2 美分。我们还可以通过 `lin_reg.intercept_` 访问回归线与 Y 轴的交点:在我们的例子中约为 `21`,表示年初的价格。
+
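+下面的小片段演示了如何查看这些值(接着上文训练好的 `lin_reg`):
+
+```python
+# 查看回归系数和 Y 轴截距(接上文训练好的 lin_reg)
+print('Slope: ', lin_reg.coef_)           # 约为 -0.017
+print('Intercept: ', lin_reg.intercept_)  # 约为 21
+```
+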
+为了查看我们的模型有多准确,我们可以在测试数据集上预测价格,然后测量预测值与预期值的接近程度。这可以使用均方误差(MSE)指标来完成,即所有预期值与预测值之间平方差的平均值。下面的代码对 MSE 取了平方根(即均方根误差,RMSE),以便让误差与价格使用相同的单位。
+
+```python
+import numpy as np
+
+pred = lin_reg.predict(X_test)
+
+# 对 MSE 取平方根得到 RMSE,使误差与价格同单位
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+```
+
+我们的误差约为 2 美元,大约是 17%。另一个衡量模型质量的指标是**决定系数**,可以这样获得:
+
+```python
+score = lin_reg.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+如果值为 0,意味着模型不考虑输入数据,表现为*最差的线性预测器*,即简单地输出结果的平均值。值为 1 则意味着我们可以完美预测所有预期输出。在我们的例子中,决定系数约为 0.06,相当低。
+
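+决定系数也可以按照其定义手动验证。下面是一个示意片段(接上文的 `y_test` 和 `pred`,在测试集上计算):
+
+```python
+# R² = 1 - SS_res / SS_tot(残差平方和 / 总平方和)
+ss_res = ((y_test - pred) ** 2).sum()
+ss_tot = ((y_test - y_test.mean()) ** 2).sum()
+print('R^2 on test data: ', 1 - ss_res / ss_tot)
+```
+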
+我们还可以将测试数据与回归线一起绘制,以更好地了解回归在我们的案例中是如何工作的:
+
+```python
+plt.scatter(X_test,y_test)
+plt.plot(X_test,pred)
+```
+
+
+
+## 多项式回归
+
+另一种线性回归是多项式回归。有时变量之间存在线性关系 - 南瓜体积越大,价格越高 - 有时这些关系不能绘制为平面或直线。
+
+✅ 这里有[更多示例](https://online.stat.psu.edu/stat501/lesson/9/9.8)说明可以使用多项式回归的数据
+
+再看看日期与价格之间的关系。这个散点图看起来是否一定要用直线来分析?价格不能波动吗?在这种情况下,你可以尝试多项式回归。
+
+✅ 多项式是可能包含一个或多个变量和系数的数学表达式
+
+多项式回归创建一条曲线以更好地拟合非线性数据。在我们的例子中,如果我们在输入数据中包含一个平方的`DayOfYear`变量,我们应该能够用一条抛物线来拟合我们的数据,该抛物线将在一年中的某一点达到最低点。
+
+Scikit-learn 包含一个有用的[pipeline API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html?highlight=pipeline#sklearn.pipeline.make_pipeline),可以将不同的数据处理步骤组合在一起。**管道**是**估计器**的链。在我们的例子中,我们将创建一个管道,首先将多项式特征添加到我们的模型中,然后训练回归:
+
+```python
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+
+pipeline.fit(X_train,y_train)
+```
+
+使用 `PolynomialFeatures(2)` 意味着我们将包含输入数据的所有二次多项式特征。在我们的例子中,它只是 `DayOfYear`²;但如果给定两个输入变量 X 和 Y,它会加入 X²、XY 和 Y²。如果需要,我们也可以使用更高次的多项式。
+
+管道的使用方式与原始的 `LinearRegression` 对象相同,即我们可以 `fit` 这个管道,然后使用 `predict` 获得预测结果。下图展示了测试数据和近似曲线:
+
+
+
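+作为参考,下面是用管道进行预测并按与前文相同方式评估的示意片段(接上文的 `pipeline` 和测试数据):
+
+```python
+# 用管道在测试集上预测,并计算与前文相同的指标
+pred = pipeline.predict(X_test)
+
+mse = np.sqrt(mean_squared_error(y_test, pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+print('Model determination: ', pipeline.score(X_train, y_train))
+```
+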
+使用多项式回归,我们可以得到略低的 MSE 和略高的决定系数,但提升并不显著。我们需要考虑其他特征!
+
+> 你可以看到,南瓜价格的最低点出现在万圣节前后。你如何解释这一点?
+
+🎃 恭喜,你刚刚创建了一个可以帮助预测派南瓜价格的模型。你可以对所有南瓜品种重复同样的过程,但那会很繁琐。现在让我们学习如何在模型中考虑南瓜品种!
+
+## 分类特征
+
+在理想情况下,我们希望能够使用同一个模型预测不同南瓜品种的价格。然而,`Variety` 列与 `Month` 这样的列有些不同,因为它包含非数值的值。这样的列称为**分类(categorical)**列。
+
+[](https://youtu.be/DYGliioIAE0 "初学者的机器学习 - 使用线性回归进行分类特征预测")
+
+> 🎥 点击上面的图片观看关于使用分类特征的简短视频概述。
+
+在这里你可以看到平均价格如何取决于品种:
+
+
+
+为了把品种考虑进来,我们首先需要将其转换为数值形式,即对其进行**编码**。有几种方法可以做到:
+
+* 简单的**数值编码**会建立一个包含不同品种的表,然后用表中的索引替换品种名称。这对线性回归来说不是最好的主意,因为线性回归会取索引的实际数值,乘以某个系数后加到结果中。在我们的例子中,索引号与价格之间的关系显然是非线性的,即使我们确保索引以某种特定方式排序也是如此。
+* **独热编码(One-hot encoding)**会用 4 个不同的列替换 `Variety` 列,每个品种一列。如果对应行属于某个品种,该列的值为 `1`,否则为 `0`。这意味着线性回归中将有四个系数,每个南瓜品种一个,负责该特定品种的“起始价格”(或者说“附加价格”)。
+
+下面的代码显示了我们如何对一个品种进行独热编码:
+
+```python
+pd.get_dummies(new_pumpkins['Variety'])
+```
+
+ ID | FAIRYTALE | MINIATURE | MIXED HEIRLOOM VARIETIES | PIE TYPE
+----|-----------|-----------|--------------------------|----------
+70 | 0 | 0 | 0 | 1
+71 | 0 | 0 | 0 | 1
+... | ... | ... | ... | ...
+1738 | 0 | 1 | 0 | 0
+1739 | 0 | 1 | 0 | 0
+1740 | 0 | 1 | 0 | 0
+1741 | 0 | 1 | 0 | 0
+1742 | 0 | 1 | 0 | 0
+
+要使用独热编码后的品种作为输入来训练线性回归,我们只需要正确初始化 `X` 和 `y` 数据:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety'])
+y = new_pumpkins['Price']
+```
+
+其余代码与我们上面用于训练线性回归的代码相同。如果你尝试一下,会发现均方误差大致相同,但我们得到了更高的决定系数(约 77%)。为了获得更准确的预测,我们可以考虑更多的分类特征,以及 `Month` 或 `DayOfYear` 这样的数值特征。要把它们组合成一个大的特征数组,我们可以使用 `join`:
+
+```python
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+```
+
+在这里我们还考虑了 `City` 和 `Package` 类型,这使我们得到 MSE 2.84(10%)和 0.94 的决定系数!
+
+## 综合起来
+
+为了制作最佳模型,我们可以使用上述示例中的组合(独热编码分类 + 数值)数据与多项式回归。以下是完整代码,供你参考:
+
+```python
+# set up training data
+X = pd.get_dummies(new_pumpkins['Variety']) \
+ .join(new_pumpkins['Month']) \
+ .join(pd.get_dummies(new_pumpkins['City'])) \
+ .join(pd.get_dummies(new_pumpkins['Package']))
+y = new_pumpkins['Price']
+
+# make train-test split
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+# setup and train the pipeline
+pipeline = make_pipeline(PolynomialFeatures(2), LinearRegression())
+pipeline.fit(X_train,y_train)
+
+# predict results for test data
+pred = pipeline.predict(X_test)
+
+# calculate MSE and determination
+mse = np.sqrt(mean_squared_error(y_test,pred))
+print(f'Mean error: {mse:3.3} ({mse/np.mean(pred)*100:3.3}%)')
+
+score = pipeline.score(X_train,y_train)
+print('Model determination: ', score)
+```
+
+这应该给我们几乎97%的最佳决定系数,MSE=2.23(~8%的预测误差)。
+
+| 模型 | MSE | 决定系数 |
+|-------|-----|-----------|
+| `DayOfYear` 线性 | 2.77 (17.2%) | 0.07 |
+| `DayOfYear` 多项式 | 2.73 (17.0%) | 0.08 |
+| `Variety` 线性 | 5.24 (19.7%) | 0.77 |
+| 所有特征线性 | 2.84 (10.5%) | 0.94 |
+| 所有特征多项式 | 2.23 (8.25%) | 0.97 |
+
+🏆 做得好!你在一节课中创建了四个回归模型,并将模型质量提高到97%。在回归的最后一部分中,你将学习逻辑回归以确定类别。
+
+---
+## 🚀挑战
+
+在这个笔记本中测试几个不同的变量,看看相关性如何对应于模型的准确性。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/14/)
+
+## 复习与自学
+
+在本课中我们学习了线性回归。还有其他重要的回归类型。阅读逐步回归、岭回归、套索回归和弹性网回归技术。一个很好的课程是[斯坦福统计学习课程](https://online.stanford.edu/courses/sohs-ystatslearning-statistical-learning)
+
+## 作业
+
+[构建一个模型](assignment.md)
+
+**免责声明**:
+本文件使用机器翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/3-Linear/assignment.md b/translations/zh/2-Regression/3-Linear/assignment.md
new file mode 100644
index 000000000..6afc4ae35
--- /dev/null
+++ b/translations/zh/2-Regression/3-Linear/assignment.md
@@ -0,0 +1,14 @@
+# 创建回归模型
+
+## 说明
+
+在本课中,你学习了如何使用线性回归和多项式回归来构建模型。利用这些知识,找到一个数据集或使用Scikit-learn的内置数据集来构建一个新的模型。在你的笔记本中解释你选择的技术,并展示你的模型的准确性。如果不准确,请解释原因。
+
+## 评分标准
+
+| 标准 | 模范表现 | 合格表现 | 需要改进 |
+| -------- | ------------------------------------------------------------ | -------------------------- | ----------------------------- |
+| | 提供了一个完整的笔记本,并且有详细的解决方案文档 | 解决方案不完整 | 解决方案有缺陷或存在错误 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文件的母语版本视为权威来源。对于重要信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/3-Linear/solution/Julia/README.md b/translations/zh/2-Regression/3-Linear/solution/Julia/README.md
new file mode 100644
index 000000000..b3f15efd9
--- /dev/null
+++ b/translations/zh/2-Regression/3-Linear/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/4-Logistic/README.md b/translations/zh/2-Regression/4-Logistic/README.md
new file mode 100644
index 000000000..7407e413a
--- /dev/null
+++ b/translations/zh/2-Regression/4-Logistic/README.md
@@ -0,0 +1,395 @@
+# 使用逻辑回归预测分类
+
+
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/15/)
+
+> ### [本课程也有 R 版本!](../../../../2-Regression/4-Logistic/solution/R/lesson_4.html)
+
+## 简介
+
+在这最后一节关于回归的课程中,我们将介绍逻辑回归,这是一种经典的机器学习技术。你可以使用这种技术来发现模式并预测二元分类。这颗糖果是巧克力吗?这种疾病是否具有传染性?这个顾客会选择这个产品吗?
+
+在本课中,你将学习:
+
+- 一个新的数据可视化库
+- 逻辑回归的技巧
+
+✅ 在这个 [学习模块](https://docs.microsoft.com/learn/modules/train-evaluate-classification-models?WT.mc_id=academic-77952-leestott) 中深入了解如何使用这种回归方法
+
+## 前提条件
+
+在处理南瓜数据的过程中,我们已经足够熟悉这个数据集,意识到有一个二元分类可以使用:`Color`。
+
+让我们构建一个逻辑回归模型来预测,给定一些变量,_某个南瓜的颜色可能是什么_(橙色 🎃 或白色 👻)。
+
+> 为什么在回归课程中讨论二元分类?只是为了语言上的方便,因为逻辑回归实际上是 [一种分类方法](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression),虽然它基于线性。了解其他分类数据的方法将在下一个课程组中讨论。
+
+## 定义问题
+
+对于我们的目的,我们将其表达为二元分类:“白色”或“非白色”。我们的数据集中还有一个“条纹”类别,但实例很少,所以我们不会使用它。无论如何,一旦我们从数据集中移除空值,它就会消失。
+
+> 🎃 有趣的事实,我们有时称白色南瓜为“幽灵”南瓜。它们不太容易雕刻,所以不像橙色南瓜那么受欢迎,但它们看起来很酷!所以我们也可以将问题重新表述为:“幽灵”或“非幽灵”。👻
+
+## 关于逻辑回归
+
+逻辑回归与之前学过的线性回归在几个重要方面有所不同。
+
+[](https://youtu.be/KpeCT6nEpBY "机器学习初学者 - 了解用于分类的逻辑回归")
+
+> 🎥 点击上方图片观看关于逻辑回归的简短视频概述。
+
+### 二元分类
+
+逻辑回归不提供与线性回归相同的功能。前者提供关于二元分类(“白色或非白色”)的预测,而后者能够预测连续值,例如给定南瓜的产地和收获时间,_其价格将上涨多少_。
+
+
+> 信息图由 [Dasani Madipalli](https://twitter.com/dasani_decoded) 提供
+
+### 其他分类
+
+还有其他类型的逻辑回归,包括多项和序数:
+
+- **多项式**,涉及多个类别 - “橙色、白色和条纹”。
+- **序数**,涉及有序类别,如果我们想根据有限数量的尺寸(迷你、小、中、大、超大、特大)对南瓜进行逻辑排序,这种方法很有用。
+
+
+
+### 变量不需要相关
+
+还记得线性回归在变量相关性更高时效果更好吗?逻辑回归正好相反 - 变量不需要对齐。这适用于相关性较弱的数据。
+
+### 你需要大量干净的数据
+
+如果使用更多数据,逻辑回归会给出更准确的结果;我们的小数据集对于这个任务来说并不理想,所以请记住这一点。
+
+[](https://youtu.be/B2X4H9vcXTs "机器学习初学者 - 数据分析与准备")
+
+✅ 思考哪些类型的数据适合逻辑回归
+
+## 练习 - 整理数据
+
+首先,稍微清理一下数据,删除空值并选择一些列:
+
+1. 添加以下代码:
+
+ ```python
+
+ columns_to_select = ['City Name','Package','Variety', 'Origin','Item Size', 'Color']
+ pumpkins = full_pumpkins.loc[:, columns_to_select]
+
+ pumpkins.dropna(inplace=True)
+ ```
+
+ 你可以随时查看新的数据框:
+
+ ```python
+ pumpkins.info
+ ```
+
+### 可视化 - 分类图
+
+到现在为止,你已经加载了[起始笔记本](../../../../2-Regression/4-Logistic/notebook.ipynb),再次导入南瓜数据并完成了清理,得到了一个包含 `Color` 在内的若干变量的数据集。让我们使用另一个库来可视化数据框:[Seaborn](https://seaborn.pydata.org/index.html),它构建在我们之前使用的 Matplotlib 之上。
+
+Seaborn 提供了一些很好的方法来可视化你的数据。例如,你可以在分类图中比较每个 `Variety` 和 `Color` 的数据分布。
+
+1. 使用 `catplot` 函数和我们的南瓜数据 `pumpkins`,并为每个南瓜类别(橙色或白色)指定颜色映射,来创建这样的图:
+
+ ```python
+ import seaborn as sns
+
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+
+ sns.catplot(
+ data=pumpkins, y="Variety", hue="Color", kind="count",
+ palette=palette,
+ )
+ ```
+
+ 
+
+ 通过观察数据,你可以看到颜色数据与品种的关系。
+
+ ✅ 根据这个分类图,你可以设想哪些有趣的探索?
+
+### 数据预处理:特征和标签编码
+
+我们的南瓜数据集的所有列都包含字符串值。处理分类数据对人类来说很直观,但对机器来说却不是。机器学习算法在处理数字时效果更好。这就是为什么编码是数据预处理阶段非常重要的一步,因为它使我们能够将分类数据转换为数值数据,而不会丢失任何信息。良好的编码有助于构建一个好的模型。
+
+对于特征编码,主要有两种编码器:
+
+1. 序数编码器:适用于序数变量,即其数据具有逻辑顺序的分类变量,如数据集中的 `Item Size` 列。它创建一个映射,使每个类别由一个数字表示,该数字是列中类别的顺序。
+
+ ```python
+ from sklearn.preprocessing import OrdinalEncoder
+
+ item_size_categories = [['sml', 'med', 'med-lge', 'lge', 'xlge', 'jbo', 'exjbo']]
+ ordinal_features = ['Item Size']
+ ordinal_encoder = OrdinalEncoder(categories=item_size_categories)
+ ```
+
+2. 分类编码器:适用于名义变量,即其数据没有逻辑顺序的分类变量,如数据集中除 `Item Size` 之外的所有特征。它是一种独热编码,这意味着每个类别由一个二进制列表示:如果南瓜属于该品种,则编码变量等于1,否则为0。
+
+ ```python
+ from sklearn.preprocessing import OneHotEncoder
+
+ categorical_features = ['City Name', 'Package', 'Variety', 'Origin']
+ categorical_encoder = OneHotEncoder(sparse_output=False)
+ ```
+
+然后,使用 `ColumnTransformer` 将多个编码器组合成一个步骤,并将其应用于适当的列。
+
+```python
+ from sklearn.compose import ColumnTransformer
+
+ ct = ColumnTransformer(transformers=[
+ ('ord', ordinal_encoder, ordinal_features),
+ ('cat', categorical_encoder, categorical_features)
+ ])
+
+ ct.set_output(transform='pandas')
+ encoded_features = ct.fit_transform(pumpkins)
+```
+
+另一方面,为了编码标签,我们使用 scikit-learn 的 `LabelEncoder` 类。这是一个实用类,帮助规范化标签,使它们只包含 0 到 n_classes-1 之间的值(这里是 0 和 1)。
+
+```python
+ from sklearn.preprocessing import LabelEncoder
+
+ label_encoder = LabelEncoder()
+ encoded_label = label_encoder.fit_transform(pumpkins['Color'])
+```
+
+一旦我们对特征和标签完成编码,就可以将它们合并到一个新的数据框 `encoded_pumpkins` 中。
+
+```python
+ encoded_pumpkins = encoded_features.assign(Color=encoded_label)
+```
+
+✅ 对 `Item Size` 列使用序数编码器有什么优势?
+
+### 分析变量之间的关系
+
+现在我们已经完成了数据预处理,可以分析特征与标签之间的关系,从而大致了解在给定特征的情况下,模型预测标签的能力会有多强。
+执行这类分析的最佳方式是绘制数据。我们将再次使用 Seaborn 的 `catplot` 函数,在分类图中可视化 `Item Size`、`Variety` 和 `Color` 之间的关系。为了更好地绘制数据,我们将使用编码后的 `Item Size` 列和未编码的 `Variety` 列。
+```python
+ palette = {
+ 'ORANGE': 'orange',
+ 'WHITE': 'wheat',
+ }
+ pumpkins['Item Size'] = encoded_pumpkins['ord__Item Size']
+
+ g = sns.catplot(
+ data=pumpkins,
+ x="Item Size", y="Color", row='Variety',
+ kind="box", orient="h",
+ sharex=False, margin_titles=True,
+ height=1.8, aspect=4, palette=palette,
+ )
+ g.set(xlabel="Item Size", ylabel="").set(xlim=(0,6))
+ g.set_titles(row_template="{row_name}")
+```
+
+
+### 使用群图
+
+由于颜色是一个二元类别(白色或非白色),它需要“[专门的可视化方法](https://seaborn.pydata.org/tutorial/categorical.html?highlight=bar)”。还有其他方法可以可视化这个类别与其他变量的关系。
+
+你可以使用 Seaborn 图表并排可视化变量。
+
+1. 尝试使用“群图”来显示值的分布:
+
+ ```python
+ palette = {
+ 0: 'orange',
+ 1: 'wheat'
+ }
+ sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette)
+ ```
+
+ 
+
+**注意**:上述代码可能会生成警告,因为 seaborn 无法在群图中表示如此多的数据点。一个可能的解决方案是减小标记的大小,使用 'size' 参数。然而,请注意这会影响图的可读性。
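+
+例如(示意写法,`size` 的具体取值需要自行权衡):
+
+```python
+# 通过 size 参数减小标记,以缓解数据点过多的警告;注意这可能影响可读性
+sns.swarmplot(x="Color", y="ord__Item Size", data=encoded_pumpkins, palette=palette, size=2)
+```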
+
+> **🧮 展示数学**
+>
+> 逻辑回归依赖于使用 [Sigmoid 函数](https://wikipedia.org/wiki/Sigmoid_function) 的“最大似然”概念。Sigmoid 函数在图上看起来像一个“S”形。它取一个值并将其映射到0到1之间的某个位置。它的曲线也被称为“逻辑曲线”。其公式如下:
+>
+> 
+>
+> 其中 Sigmoid 的中点位于 x 的0点,L 是曲线的最大值,k 是曲线的陡度。如果函数的结果大于0.5,则该标签将被赋予二元选择的“1”类。如果不是,则将其分类为“0”。
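+>
+> 下面是一个最小的 Python 示意(非课程原始代码),展示标准 Sigmoid(L=1、k=1、中点为 0)如何把任意值映射到 0 和 1 之间:
+>
+> ```python
+> import numpy as np
+>
+> def sigmoid(x):
+>     # 标准逻辑函数:1 / (1 + e^(-x))
+>     return 1 / (1 + np.exp(-x))
+>
+> print(sigmoid(np.array([-4, 0, 4])))  # 约 [0.018 0.5 0.982]
+> ```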
+
+## 构建你的模型
+
+在 Scikit-learn 中构建一个二元分类模型非常简单。
+
+[](https://youtu.be/MmZS2otPrQ8 "机器学习初学者 - 用于数据分类的逻辑回归")
+
+> 🎥 点击上方图片观看关于构建逻辑回归模型的简短视频概述。
+
+1. 选择要在分类模型中使用的变量,并调用 `train_test_split()` 来拆分训练和测试集:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ X = encoded_pumpkins[encoded_pumpkins.columns.difference(['Color'])]
+ y = encoded_pumpkins['Color']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+
+ ```
+
+2. 现在你可以通过调用 `fit()` 使用训练数据训练你的模型,并打印出其结果:
+
+ ```python
+ from sklearn.metrics import f1_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('F1-score: ', f1_score(y_test, predictions))
+ ```
+
+ 看一下你模型的评分板。考虑到你只有大约1000行数据,这还不错:
+
+ ```output
+ precision recall f1-score support
+
+ 0 0.94 0.98 0.96 166
+ 1 0.85 0.67 0.75 33
+
+ accuracy 0.92 199
+ macro avg 0.89 0.82 0.85 199
+ weighted avg 0.92 0.92 0.92 199
+
+ Predicted labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0
+ 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0
+ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
+ 0 0 0 1 0 0 0 0 0 0 0 0 1 1]
+ F1-score: 0.7457627118644068
+ ```
+
+## 通过混淆矩阵更好地理解
+
+虽然你可以通过打印上面的项目得到包含各项[指标](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html?highlight=classification_report#sklearn.metrics.classification_report)的评分报告,但借助[混淆矩阵](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix),你可能更容易理解你的模型,它能帮助我们了解模型的表现。
+
+> 🎓 “[混淆矩阵](https://wikipedia.org/wiki/Confusion_matrix)”(或“错误矩阵”)是一个表,表达了模型的真 vs. 假阳性和阴性,从而评估预测的准确性。
+
+1. 要使用混淆矩阵,请调用 `confusion_matrix()`:
+
+ ```python
+ from sklearn.metrics import confusion_matrix
+ confusion_matrix(y_test, predictions)
+ ```
+
+ 看一下你模型的混淆矩阵:
+
+ ```output
+ array([[162, 4],
+ [ 11, 22]])
+ ```
+
+在 Scikit-learn 中,混淆矩阵的行(轴0)是实际标签,列(轴1)是预测标签。
+
+| | 0 | 1 |
+| :---: | :---: | :---: |
+| 0 | TN | FP |
+| 1 | FN | TP |
+
+这里发生了什么?假设我们的模型被要求在两个二元类别之间分类南瓜,即“白色”和“非白色”。
+
+- 如果你的模型预测南瓜为非白色,实际上属于“非白色”类别,我们称之为真阴性,显示在左上角。
+- 如果你的模型预测南瓜为非白色,而实际上它属于“白色”类别,我们称之为假阴性,显示在左下角。
+- 如果你的模型预测南瓜为白色,而实际上它属于“非白色”类别,我们称之为假阳性,显示在右上角。
+- 如果你的模型预测南瓜为白色,实际上属于“白色”类别,我们称之为真阳性,显示在右下角。
+
+你可能已经猜到,真阳性和真阴性的数量越多,假阳性和假阴性的数量越少,模型的表现就越好。
+
+混淆矩阵如何与精度和召回率相关?请记住,上面打印的分类报告显示了精度(0.85)和召回率(0.67)。
+
+精度 = tp / (tp + fp) = 22 / (22 + 4) = 0.8461538461538461
+
+召回率 = tp / (tp + fn) = 22 / (22 + 11) = 0.6666666666666666
+
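+这些数值也可以直接用 Scikit-learn 验证(示意代码,接上文的 `y_test` 和 `predictions`):
+
+```python
+from sklearn.metrics import precision_score, recall_score
+
+# 验证上面手算的精度与召回率
+print('Precision:', precision_score(y_test, predictions))  # 22 / (22 + 4)
+print('Recall:   ', recall_score(y_test, predictions))     # 22 / (22 + 11)
+```
+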
+✅ 问:根据混淆矩阵,模型表现如何?答:还不错,有很多真阴性,但也有一些假阴性。
+
+让我们借助混淆矩阵的 TP/TN 和 FP/FN 映射,重新审视之前看到的术语:
+
+🎓 精度:TP/(TP + FP) 在检索到的实例中相关实例的比例(例如,哪些标签被正确标记)
+
+🎓 召回率:TP/(TP + FN) 检索到的相关实例的比例,无论是否标记正确
+
+🎓 f1-score:(2 * 精度 * 召回率)/(精度 + 召回率)精度和召回率的加权平均值,最好为1,最差为0
+
+🎓 支持:每个标签检索到的实例的数量
+
+🎓 准确率:(TP + TN)/(TP + TN + FP + FN)样本中准确预测的标签的百分比。
+
+🎓 宏平均:每个标签的未加权平均指标的计算,不考虑标签不平衡。
+
+🎓 加权平均:每个标签的平均指标的计算,考虑标签不平衡,通过其支持(每个标签的真实实例数量)加权。
+
+✅ 如果你希望模型减少假阴性的数量,你应该关注哪个指标?
+
+## 可视化此模型的 ROC 曲线
+
+[](https://youtu.be/GApO575jTA0 "机器学习初学者 - 使用 ROC 曲线分析逻辑回归性能")
+
+> 🎥 点击上方图片观看关于 ROC 曲线的简短视频概述
+
+让我们做一个可视化,看看所谓的“ROC”曲线:
+
+```python
+from sklearn.metrics import roc_curve, roc_auc_score
+import matplotlib
+import matplotlib.pyplot as plt
+%matplotlib inline
+
+y_scores = model.predict_proba(X_test)
+fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
+
+fig = plt.figure(figsize=(6, 6))
+plt.plot([0, 1], [0, 1], 'k--')
+plt.plot(fpr, tpr)
+plt.xlabel('False Positive Rate')
+plt.ylabel('True Positive Rate')
+plt.title('ROC Curve')
+plt.show()
+```
+
+使用 Matplotlib 绘制模型的[接收者操作特征](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html?highlight=roc)曲线,即 ROC。ROC 曲线通常用于查看分类器输出在真阳性与假阳性方面的表现。“ROC 曲线通常在 Y 轴上显示真阳性率,在 X 轴上显示假阳性率。”因此,曲线的陡度以及曲线与中线之间的空间很重要:你希望曲线迅速向上并越过中线。在我们的例子中,一开始有一些假阳性,随后曲线正确地向上并越过中线:
+
+
+
+最后,使用 Scikit-learn 的 [`roc_auc_score` API](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html?highlight=roc_auc#sklearn.metrics.roc_auc_score) 计算实际的“曲线下面积”(AUC):
+
+```python
+auc = roc_auc_score(y_test,y_scores[:,1])
+print(auc)
+```
+
+结果是 `0.9749908725812341`。由于 AUC 的范围是 0 到 1,你希望分数越大越好,因为一个预测 100% 正确的模型的 AUC 为 1;在这个例子中,模型_非常好_。
+
+在未来的分类课程中,你将学习如何迭代以提高模型的分数。但现在,恭喜你!你已经完成了这些回归课程!
+
+---
+## 🚀挑战
+
+关于逻辑回归还有很多内容需要解读!但学习的最佳方式是实验。找到一个适合这种分析的数据集,并用它构建一个模型。你学到了什么?提示:试试 [Kaggle](https://www.kaggle.com/search?q=logistic+regression+datasets) 找有趣的数据集。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/16/)
+
+## 复习与自学
+
+阅读 [斯坦福大学的这篇论文](https://web.stanford.edu/~jurafsky/slp3/5.pdf) 的前几页,了解逻辑回归的一些实际用途。思考哪些任务更适合我们到目前为止学习的回归任务。哪种方法最适合?
+
+## 作业
+
+[重试这个回归](assignment.md)
+
+**免责声明**:
+本文件已使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对使用此翻译引起的任何误解或误读负责。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/4-Logistic/assignment.md b/translations/zh/2-Regression/4-Logistic/assignment.md
new file mode 100644
index 000000000..8d2c2b432
--- /dev/null
+++ b/translations/zh/2-Regression/4-Logistic/assignment.md
@@ -0,0 +1,14 @@
+# 重试一些回归
+
+## 说明
+
+在本课中,你使用了南瓜数据的一个子集。现在,回到原始数据,尝试使用全部数据,进行清洗和标准化,构建一个逻辑回归模型。
+
+## 评分标准
+
+| 标准 | 杰出表现 | 充分表现 | 需要改进 |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------------------- |
+| | 提供了一个解释清晰且表现良好的模型的笔记本 | 提供了一个表现最低限度的模型的笔记本 | 提供了一个表现不佳的模型的笔记本或没有提供模型的笔记本 |
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/4-Logistic/solution/Julia/README.md b/translations/zh/2-Regression/4-Logistic/solution/Julia/README.md
new file mode 100644
index 000000000..80ee9ef74
--- /dev/null
+++ b/translations/zh/2-Regression/4-Logistic/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。我们对因使用此翻译而产生的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/2-Regression/README.md b/translations/zh/2-Regression/README.md
new file mode 100644
index 000000000..636c46fe8
--- /dev/null
+++ b/translations/zh/2-Regression/README.md
@@ -0,0 +1,43 @@
+# 机器学习的回归模型
+## 区域专题:北美南瓜价格的回归模型 🎃
+
+在北美,南瓜经常被雕刻成万圣节的恐怖面孔。让我们一起来探索这些迷人的蔬菜吧!
+
+
+> 图片由 Beth Teutschmann 拍摄,发布在 Unsplash
+
+## 你将学到什么
+
+[](https://youtu.be/5QnJtDad4iQ "回归简介视频 - 点击观看!")
+> 🎥 点击上面的图片观看本课的快速简介视频
+
+本节课程涵盖了机器学习背景下的各种回归类型。回归模型可以帮助确定变量之间的_关系_。这种类型的模型可以预测长度、温度或年龄等数值,从而在分析数据点时揭示变量之间的关系。
+
+在这一系列课程中,你将发现线性回归和逻辑回归之间的区别,以及何时应该选择其中之一。
+
+[](https://youtu.be/XA3OaoW86R8 "初学者的机器学习 - 机器学习回归模型简介")
+
+> 🎥 点击上面的图片观看回归模型简介短视频。
+
+在这一组课程中,你将开始机器学习任务,包括配置 Visual Studio Code 来管理笔记本,这是数据科学家的常用环境。你将了解 Scikit-learn,这是一个机器学习库,并在本章中构建你的第一个模型,重点是回归模型。
+
+> 有一些有用的低代码工具可以帮助你学习如何使用回归模型。试试 [Azure ML 进行此任务](https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+### 课程
+
+1. [行业工具](1-Tools/README.md)
+2. [数据管理](2-Data/README.md)
+3. [线性和多项式回归](3-Linear/README.md)
+4. [逻辑回归](4-Logistic/README.md)
+
+---
+### 致谢
+
+"ML with regression" 由 [Jen Looper](https://twitter.com/jenlooper) 用 ♥️ 编写
+
+♥️ 测验贡献者包括: [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) 和 [Ornella Altunyan](https://twitter.com/ornelladotcom)
+
+南瓜数据集由 [Kaggle 上的这个项目](https://www.kaggle.com/usda/a-year-of-pumpkin-prices) 推荐,其数据来源于美国农业部发布的 [特种作物终端市场标准报告](https://www.marketnews.usda.gov/mnp/fv-report-config-step1?type=termPrice)。我们根据品种添加了一些颜色数据以规范分布。这些数据属于公共领域。
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/3-Web-App/1-Web-App/README.md b/translations/zh/3-Web-App/1-Web-App/README.md
new file mode 100644
index 000000000..b0fc87e8d
--- /dev/null
+++ b/translations/zh/3-Web-App/1-Web-App/README.md
@@ -0,0 +1,348 @@
+# 构建一个使用机器学习模型的Web应用
+
+在本课中,你将使用一个非常特别的数据集训练一个机器学习模型:_过去一个世纪的UFO目击事件_,这些数据来源于NUFORC的数据库。
+
+你将学习到:
+
+- 如何“pickle”一个训练好的模型
+- 如何在Flask应用中使用该模型
+
+我们将继续使用notebook来清理数据和训练我们的模型,但你可以更进一步,探索在实际环境中使用模型:在一个web应用中。
+
+要做到这一点,你需要使用Flask构建一个web应用。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/17/)
+
+## 构建应用
+
+有几种方法可以构建web应用来使用机器学习模型。你的web架构可能会影响你训练模型的方式。想象一下,你在一个企业中工作,数据科学团队已经训练了一个模型,他们希望你在应用中使用。
+
+### 考虑因素
+
+你需要问自己很多问题:
+
+- **这是一个web应用还是一个移动应用?** 如果你正在构建一个移动应用或需要在物联网环境中使用模型,你可以使用 [TensorFlow Lite](https://www.tensorflow.org/lite/) 并在Android或iOS应用中使用该模型。
+- **模型将驻留在哪里?** 在云端还是本地?
+- **离线支持。** 应用是否需要离线工作?
+- **使用什么技术训练模型?** 所选技术可能会影响你需要使用的工具。
+ - **使用TensorFlow。** 如果你使用TensorFlow训练模型,例如,该生态系统提供了使用 [TensorFlow.js](https://www.tensorflow.org/js/) 将TensorFlow模型转换为web应用使用的能力。
+ - **使用PyTorch。** 如果你使用诸如 [PyTorch](https://pytorch.org/) 之类的库构建模型,你可以选择将其导出为 [ONNX](https://onnx.ai/) (开放神经网络交换) 格式,用于可以使用 [Onnx Runtime](https://www.onnxruntime.ai/) 的JavaScript web应用。这种选择将在未来的课程中探索,用于一个Scikit-learn训练的模型。
+ - **使用Lobe.ai或Azure Custom Vision。** 如果你使用诸如 [Lobe.ai](https://lobe.ai/) 或 [Azure Custom Vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/?WT.mc_id=academic-77952-leestott) 之类的ML SaaS(软件即服务)系统训练模型,这类软件提供了为多种平台导出模型的方法,包括构建一个定制API,通过你的在线应用在云端查询。
+
+你还有机会构建一个完整的Flask web应用,该应用可以在web浏览器中自行训练模型。这也可以在JavaScript环境中使用TensorFlow.js完成。
+
+为了我们的目的,因为我们一直在使用基于Python的notebook,让我们来探索将训练好的模型从这样的notebook导出为一个Python构建的web应用可读的格式所需的步骤。
+
+## 工具
+
+完成这项任务,你需要两个工具:Flask和Pickle,它们都运行在Python上。
+
+✅ 什么是 [Flask](https://palletsprojects.com/p/flask/)? 由其创建者定义为“微框架”,Flask提供了使用Python和一个模板引擎构建网页的基本web框架功能。看看 [这个学习模块](https://docs.microsoft.com/learn/modules/python-flask-build-ai-web-app?WT.mc_id=academic-77952-leestott) 来练习使用Flask构建。
+
+✅ 什么是 [Pickle](https://docs.python.org/3/library/pickle.html)? Pickle 🥒 是一个Python模块,用于序列化和反序列化Python对象结构。当你“pickle”一个模型时,你将其结构序列化或扁平化,以便在web上使用。注意:pickle本质上是不安全的,因此如果被提示“un-pickle”一个文件时要小心。一个pickled文件的后缀是 `.pkl`。
+
+## 练习 - 清理数据
+
+在本课中,你将使用由 [NUFORC](https://nuforc.org)(国家UFO报告中心)收集的80,000个UFO目击事件数据。这些数据包含一些有趣的UFO目击描述,例如:
+
+- **长描述示例。** “一个人从夜间照在草地上的光束中出现,并跑向德州仪器的停车场”。
+- **短描述示例。** “灯光追逐我们”。
+
+[ufos.csv](../../../../3-Web-App/1-Web-App/data/ufos.csv) 电子表格包括关于`city`、`state`和`country`的列,记录了目击事件发生的地点、物体的`shape`、其`latitude`和`longitude`。
+
+在本课包含的空白 [notebook](../../../../3-Web-App/1-Web-App/notebook.ipynb) 中:
+
+1. 导入`pandas`、`matplotlib`和`numpy`,并导入ufos电子表格。你可以查看一个样本数据集:
+
+ ```python
+ import pandas as pd
+ import numpy as np
+
+ ufos = pd.read_csv('./data/ufos.csv')
+ ufos.head()
+ ```
+
+1. 将ufos数据转换为一个具有新标题的小数据框。检查`Country`字段中的唯一值。
+
+ ```python
+ ufos = pd.DataFrame({'Seconds': ufos['duration (seconds)'], 'Country': ufos['country'],'Latitude': ufos['latitude'],'Longitude': ufos['longitude']})
+
+ ufos.Country.unique()
+ ```
+
+1. 现在,你可以通过删除任何空值并仅导入1-60秒之间的目击事件来减少需要处理的数据量:
+
+ ```python
+ ufos.dropna(inplace=True)
+
+ ufos = ufos[(ufos['Seconds'] >= 1) & (ufos['Seconds'] <= 60)]
+
+ ufos.info()
+ ```
+
+1. 导入Scikit-learn的`LabelEncoder`库,将国家的文本值转换为数字:
+
+ ✅ LabelEncoder按字母顺序编码数据
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+
+ ufos['Country'] = LabelEncoder().fit_transform(ufos['Country'])
+
+ ufos.head()
+ ```
+
+ 你的数据应该看起来像这样:
+
+ ```output
+ Seconds Country Latitude Longitude
+ 2 20.0 3 53.200000 -2.916667
+ 3 20.0 4 28.978333 -96.645833
+ 14 30.0 4 35.823889 -80.253611
+ 23 60.0 4 45.582778 -122.352222
+ 24 3.0 3 51.783333 -0.783333
+ ```
+
+## 练习 - 构建你的模型
+
+现在你可以通过将数据分为训练组和测试组来准备训练模型。
+
+1. 选择三个用于训练的特征作为 X 向量,y 向量则是 `Country`。我们希望输入 `Seconds`、`Latitude` 和 `Longitude`,并得到一个国家 ID 作为返回值。
+
+ ```python
+ from sklearn.model_selection import train_test_split
+
+ Selected_features = ['Seconds','Latitude','Longitude']
+
+ X = ufos[Selected_features]
+ y = ufos['Country']
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
+ ```
+
+1. 使用逻辑回归训练你的模型:
+
+ ```python
+ from sklearn.metrics import accuracy_score, classification_report
+ from sklearn.linear_model import LogisticRegression
+ model = LogisticRegression()
+ model.fit(X_train, y_train)
+ predictions = model.predict(X_test)
+
+ print(classification_report(y_test, predictions))
+ print('Predicted labels: ', predictions)
+ print('Accuracy: ', accuracy_score(y_test, predictions))
+ ```
+
+准确率不错 **(约95%)**,这并不意外,因为 `Country` 与 `Latitude/Longitude` 是相关的。
+
+你创建的这个模型并不算很有革命性,因为你本来就应该能够从 `Latitude` 和 `Longitude` 推断出 `Country`;但对于练习从原始数据出发进行清理、导出,然后在 web 应用中使用训练出的模型来说,这是一个很好的尝试。
+
+## 练习 - “pickle”你的模型
+
+现在,是时候 _pickle_ 你的模型了!你可以用几行代码完成。完成 pickle 后,加载你的 pickled 模型,并用一个包含秒数、纬度和经度值的样本数据数组对其进行测试:
+
+```python
+import pickle
+model_filename = 'ufo-model.pkl'
+pickle.dump(model, open(model_filename,'wb'))
+
+model = pickle.load(open('ufo-model.pkl','rb'))
+print(model.predict([[50,44,-12]]))
+```
+
+模型返回**'3'**,这是英国的国家代码。太神奇了!👽
+
+## 练习 - 构建一个Flask应用
+
+现在你可以构建一个Flask应用来调用你的模型并返回类似的结果,但以更具视觉吸引力的方式。
+
+1. 先在_notebook.ipynb_文件所在的地方创建一个名为**web-app**的文件夹,其中包含你的_ufo-model.pkl_文件。
+
+1. 在该文件夹中创建三个文件夹:**static**,其中包含一个**css**文件夹,以及**templates**。你现在应该有以下文件和目录:
+
+ ```output
+ web-app/
+ static/
+ css/
+ templates/
+ notebook.ipynb
+ ufo-model.pkl
+ ```
+
+ ✅ 参考解决方案文件夹以查看完成的应用
+
+1. 在_web-app_文件夹中创建的第一个文件是**requirements.txt**文件。就像JavaScript应用中的_package.json_一样,该文件列出了应用所需的依赖项。在**requirements.txt**中添加以下几行:
+
+ ```text
+ scikit-learn
+ pandas
+ numpy
+ flask
+ ```
+
+1. 现在,通过导航到_web-app_运行此文件:
+
+ ```bash
+ cd web-app
+ ```
+
+1. 在你的终端中键入`pip install`,以安装_requirements.txt_中列出的库:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+1. 现在,你准备创建另外三个文件以完成应用:
+
+ 1. 在根目录中创建**app.py**。
+ 2. 在_templates_目录中创建**index.html**。
+ 3. 在_static/css_目录中创建**styles.css**。
+
+1. 使用一些样式构建_styles.css_文件:
+
+ ```css
+ body {
+ width: 100%;
+ height: 100%;
+ font-family: 'Helvetica';
+ background: black;
+ color: #fff;
+ text-align: center;
+ letter-spacing: 1.4px;
+ font-size: 30px;
+ }
+
+ input {
+ min-width: 150px;
+ }
+
+ .grid {
+ width: 300px;
+ border: 1px solid #2d2d2d;
+ display: grid;
+ justify-content: center;
+ margin: 20px auto;
+ }
+
+ .box {
+ color: #fff;
+ background: #2d2d2d;
+ padding: 12px;
+ display: inline-block;
+ }
+ ```
+
+1. 接下来,构建_index.html_文件:
+
+    ```html
+    <!DOCTYPE html>
+    <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>🛸 UFO Appearance Prediction! 👽</title>
+      <link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
+    </head>
+
+    <body>
+     <div class="grid">
+
+      <div class="box">
+
+      <p>According to the number of seconds, latitude and longitude, which country is likely to have reported seeing a UFO?</p>
+
+        <form action="{{ url_for('predict')}}" method="post">
+          <input type="number" name="seconds" placeholder="Seconds" required="required" min="0" max="60" />
+          <input type="text" name="latitude" placeholder="Latitude" required="required" />
+          <input type="text" name="longitude" placeholder="Longitude" required="required" />
+          <button type="submit" class="btn">Predict country where the UFO is seen</button>
+        </form>
+
+       <p>{{ prediction_text }}</p>
+
+     </div>
+    </div>
+
+    </body>
+    </html>
+    ```
+
+    查看此文件中的模板。注意被应用提供的变量所包围的“胡须”语法 `{{}}`,例如预测文本 `{{ prediction_text }}`。还有一个表单,它会把预测请求 POST 到 `/predict` 路由。
+
+    最后,你就可以构建驱动模型使用和预测展示的 Python 文件了:
+
+1. 在 `app.py` 中添加:
+
+ ```python
+ import numpy as np
+ from flask import Flask, request, render_template
+ import pickle
+
+ app = Flask(__name__)
+
+ model = pickle.load(open("./ufo-model.pkl", "rb"))
+
+
+ @app.route("/")
+ def home():
+ return render_template("index.html")
+
+
+ @app.route("/predict", methods=["POST"])
+ def predict():
+
+ int_features = [int(x) for x in request.form.values()]
+ final_features = [np.array(int_features)]
+ prediction = model.predict(final_features)
+
+ output = prediction[0]
+
+ countries = ["Australia", "Canada", "Germany", "UK", "US"]
+
+ return render_template(
+ "index.html", prediction_text="Likely country: {}".format(countries[output])
+ )
+
+
+ if __name__ == "__main__":
+ app.run(debug=True)
+ ```
+
+    > 💡 提示:当你使用 Flask 运行 web 应用时,如果添加了 [`debug=True`](https://www.askpython.com/python-modules/flask/flask-debug-mode),你对应用所做的任何更改都会立即生效,而无需重启服务器。注意!不要在生产应用中启用此模式。
+
+如果你运行 `python app.py` 或 `python3 app.py`,你的 web 服务器就会在本地启动,你可以填写一个简短的表单,来回答你那个关于 UFO 目击地点的迫切问题!
+
+在此之前,先看一下 `app.py` 的各个部分:
+
+1. 首先,加载依赖项并启动应用。
+1. 然后,导入模型。
+1. 然后,在主路由上渲染 index.html。
+
+在 `/predict` 路由上,当表单被提交时会发生几件事:
+
+1. 收集表单变量并将其转换为一个 numpy 数组,然后将其发送给模型,并返回一个预测结果。
+2. 我们希望显示的国家会根据预测出的国家代码重新渲染为可读文本,并将该值发送回 index.html,在模板中渲染。
+
+以这种方式(借助 Flask 和一个 pickled 模型)使用模型是相对简单的。最难的是理解必须发送给模型以获得预测的数据是什么形状,这完全取决于模型是如何训练的。这个模型需要输入三个数据点才能得到预测。
+
+在专业环境中,你会看到训练模型的人和在 web 或移动应用中使用模型的人之间良好的沟通是多么必要。在我们的案例中,这只有一个人,就是你!
+
+---
+
+## 🚀挑战
+
+除了在笔记本中工作并将模型导入 Flask 应用之外,你还可以在 Flask 应用内部直接训练模型!试着把你笔记本中的 Python 代码(也许在数据清理之后)转换一下,在应用内一个名为 `train` 的路由上训练模型。采用这种方法的优缺点是什么?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/18/)
+
+## 复习与自学
+
+有很多方法可以构建一个使用机器学习模型的web应用。列出你可以使用JavaScript或Python构建一个利用机器学习的web应用的方法。考虑架构:模型应该留在应用中还是驻留在云端?如果是后者,你将如何访问它?画出一个应用机器学习web解决方案的架构模型。
+
+## 作业
+
+[尝试一个不同的模型](assignment.md)
+
+**免责声明**:
+本文件是使用基于机器的人工智能翻译服务翻译的。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于使用本翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/3-Web-App/1-Web-App/assignment.md b/translations/zh/3-Web-App/1-Web-App/assignment.md
new file mode 100644
index 000000000..a7c155545
--- /dev/null
+++ b/translations/zh/3-Web-App/1-Web-App/assignment.md
@@ -0,0 +1,14 @@
+# 尝试不同的模型
+
+## 说明
+
+现在你已经使用训练好的回归模型构建了一个网页应用程序,请使用之前回归课程中的一个模型重新制作这个网页应用程序。你可以保持原有的风格或设计一个不同的风格以反映南瓜数据。注意更改输入以反映你的模型的训练方法。
+
+## 评分标准
+
+| 标准 | 杰出 | 合格 | 需要改进 |
+| -------------------------- | ------------------------------------------------------ | ------------------------------------------------------ | -------------------------------------- |
+| | 网页应用程序按预期运行并部署到云端 | 网页应用程序包含缺陷或显示意外结果 | 网页应用程序不能正常运行 |
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/3-Web-App/README.md b/translations/zh/3-Web-App/README.md
new file mode 100644
index 000000000..e871f92b4
--- /dev/null
+++ b/translations/zh/3-Web-App/README.md
@@ -0,0 +1,24 @@
+# 构建一个使用你的ML模型的Web应用
+
+在本课程的这一部分,你将了解一个应用性的机器学习主题:如何将你的 Scikit-learn 模型保存为文件,以便在 web 应用中用于预测。模型保存好之后,你将学习如何在一个用 Flask 构建的 web 应用中使用它。你将首先使用一些关于 UFO 目击事件的数据创建一个模型!然后,你将构建一个 web 应用,它允许你输入秒数以及纬度和经度值,来预测是哪个国家报告了 UFO 目击事件。
+
+
+
+照片由 Michael Herren 拍摄,来自 Unsplash
+
+## 课程
+
+1. [构建一个Web应用](1-Web-App/README.md)
+
+## 致谢
+
+"构建一个Web应用" 由 [Jen Looper](https://twitter.com/jenlooper) 用 ♥️ 编写。
+
+♥️ 测验由 Rohan Raj 编写。
+
+数据集来源于 [Kaggle](https://www.kaggle.com/NUFORC/ufo-sightings)。
+
+Web应用架构部分参考了 [这篇文章](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) 和 [这个仓库](https://github.com/abhinavsagar/machine-learning-deployment) ,由 Abhinav Sagar 提供。
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/1-Introduction/README.md b/translations/zh/4-Classification/1-Introduction/README.md
new file mode 100644
index 000000000..866d87731
--- /dev/null
+++ b/translations/zh/4-Classification/1-Introduction/README.md
@@ -0,0 +1,302 @@
+# 分类简介
+
+在这四节课中,你将探索经典机器学习的一个基本重点——_分类_。我们将使用关于亚洲和印度所有美味菜肴的数据集,演示如何使用各种分类算法。希望你已经准备好享受这场美食盛宴了!
+
+
+
+> 在这些课程中庆祝泛亚洲美食吧!图片由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+分类是一种[监督学习](https://wikipedia.org/wiki/Supervised_learning)形式,它与回归技术有很多相似之处。如果说机器学习的全部内容是通过使用数据集来预测值或事物的名称,那么分类通常分为两类:_二元分类_和_多类分类_。
+
+[](https://youtu.be/eg8DJYwdMyg "Introduction to classification")
+
+> 🎥 点击上面的图片观看视频:MIT的John Guttag介绍分类
+
+记住:
+
+- **线性回归**帮助你预测变量之间的关系,并准确预测新数据点相对于这条线的位置。因此,你可以预测_南瓜在九月和十二月的价格_。
+- **逻辑回归**帮助你发现“二元类别”:在这个价格点上,_这个南瓜是橙色的还是非橙色的_?
+
+分类使用各种算法来确定数据点的标签或类别的其他方式。让我们使用这个美食数据,看看通过观察一组配料,是否可以确定它的来源美食。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)
+
+> ### [本课程有R语言版本!](../../../../4-Classification/1-Introduction/solution/R/lesson_10.html)
+
+### 介绍
+
+分类是机器学习研究人员和数据科学家的基本活动之一。从对二元值的基本分类(“这封邮件是垃圾邮件还是不是?”),到使用计算机视觉进行复杂的图像分类和分割,能够将数据分类并对其提问总是很有用的。
+
+以更科学的方式陈述这个过程,你的分类方法创建了一个预测模型,使你能够将输入变量与输出变量之间的关系映射出来。
+
+
+
+> 分类算法处理的二元与多类问题。信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+在开始清理数据、可视化数据并为我们的机器学习任务做准备之前,让我们先了解一下机器学习分类数据的各种方式。
+
+分类源自[统计学](https://wikipedia.org/wiki/Statistical_classification)。使用经典机器学习进行分类时,会利用诸如 `smoker`、`weight` 和 `age` 这样的特征来确定_患某种疾病的可能性_。作为一种与你之前所做回归练习类似的监督学习技术,你的数据是有标签的,机器学习算法使用这些标签来分类和预测数据集的类别(或“特征”),并将其分配到某个组或结果中。
+
+✅ 想象一下关于美食的数据集。一个多类模型能够回答什么问题?一个二元模型能够回答什么问题?如果你想确定某种美食是否可能使用葫芦巴怎么办?如果你想知道,如果得到一袋满是八角、洋蓟、花椰菜和辣根的杂货,你能否做出一道典型的印度菜?
+
+[](https://youtu.be/GuTeDbaNoEU "Crazy mystery baskets")
+
+> 🎥 点击上面的图片观看视频。节目“Chopped”的整个前提是“神秘篮子”,厨师们必须用随机选择的配料做出一些菜肴。肯定有一个机器学习模型会有所帮助!
+
+## 你好,'分类器'
+
+我们想要向这个美食数据集提出的问题实际上是一个**多类问题**,因为我们有几个潜在的国家美食可以处理。给定一批配料,这些数据将适合哪些类别?
+
+Scikit-learn提供了几种不同的算法来分类数据,具体取决于你要解决的问题类型。在接下来的两节课中,你将学习这些算法中的几种。
+
+## 练习 - 清理和平衡你的数据
+
+在开始这个项目之前的第一个任务是清理和**平衡**你的数据,以获得更好的结果。从这个文件夹根目录中的空白_notebook.ipynb_文件开始。
+
+首先要安装的是[imblearn](https://imbalanced-learn.org/stable/)。这是一个Scikit-learn包,它将允许你更好地平衡数据(你将在稍后了解更多关于此任务的内容)。
+
+1. 要安装`imblearn`,运行`pip install`,如下所示:
+
+ ```python
+ pip install imblearn
+ ```
+
+1. 导入需要的包以导入数据并可视化它,还要从`imblearn`中导入`SMOTE`。
+
+ ```python
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ import numpy as np
+ from imblearn.over_sampling import SMOTE
+ ```
+
+ 现在你已设置好读取导入数据。
+
+1. 下一个任务是导入数据:
+
+ ```python
+ df = pd.read_csv('../data/cuisines.csv')
+ ```
+
+   使用 `read_csv()` 读取 csv 文件 _cuisines.csv_ 的内容,并将其放入变量 `df` 中。
+
+1. 检查数据的形状:
+
+ ```python
+ df.head()
+ ```
+
+ 前五行看起来像这样:
+
+ ```output
+ | | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+ | --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+ | 0 | 65 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 1 | 66 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 2 | 67 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 3 | 68 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+ | 4 | 69 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+ ```
+
+1. 通过调用`info()`获取有关此数据的信息:
+
+ ```python
+ df.info()
+ ```
+
+ 你的输出类似于:
+
+ ```output
+
+ RangeIndex: 2448 entries, 0 to 2447
+ Columns: 385 entries, Unnamed: 0 to zucchini
+ dtypes: int64(384), object(1)
+ memory usage: 7.2+ MB
+ ```
+
+## 练习 - 了解美食
+
+现在工作开始变得更有趣了。让我们发现每种美食的数据分布
+
+1. 通过调用`barh()`将数据绘制为条形图:
+
+ ```python
+ df.cuisine.value_counts().plot.barh()
+ ```
+
+ 
+
+ 有有限数量的美食,但数据分布不均。你可以修复它!在这样做之前,多探索一下。
+
+1. 找出每种美食有多少数据并打印出来:
+
+ ```python
+ thai_df = df[(df.cuisine == "thai")]
+ japanese_df = df[(df.cuisine == "japanese")]
+ chinese_df = df[(df.cuisine == "chinese")]
+ indian_df = df[(df.cuisine == "indian")]
+ korean_df = df[(df.cuisine == "korean")]
+
+ print(f'thai df: {thai_df.shape}')
+ print(f'japanese df: {japanese_df.shape}')
+ print(f'chinese df: {chinese_df.shape}')
+ print(f'indian df: {indian_df.shape}')
+ print(f'korean df: {korean_df.shape}')
+ ```
+
+ 输出如下所示:
+
+ ```output
+ thai df: (289, 385)
+ japanese df: (320, 385)
+ chinese df: (442, 385)
+ indian df: (598, 385)
+ korean df: (799, 385)
+ ```
+
+## 发现配料
+
+现在你可以深入挖掘数据,了解每种美食的典型配料。你应该清理会在美食之间造成混淆的重复数据,所以让我们了解这个问题。
+
+1. 在 Python 中创建一个函数 `create_ingredient_df()` 来创建配料数据框。这个函数首先会删除一个无用的列,然后按配料的计数进行排序:
+
+ ```python
+ def create_ingredient_df(df):
+ ingredient_df = df.T.drop(['cuisine','Unnamed: 0']).sum(axis=1).to_frame('value')
+ ingredient_df = ingredient_df[(ingredient_df.T != 0).any()]
+ ingredient_df = ingredient_df.sort_values(by='value', ascending=False,
+ inplace=False)
+ return ingredient_df
+ ```
+
+ 现在你可以使用该函数了解每种美食最受欢迎的前十种配料。
+
+1. 调用 `create_ingredient_df()`,并通过调用 `barh()` 绘制结果:
+
+ ```python
+ thai_ingredient_df = create_ingredient_df(thai_df)
+ thai_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 对日本数据做同样的操作:
+
+ ```python
+ japanese_ingredient_df = create_ingredient_df(japanese_df)
+ japanese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 现在是中国配料:
+
+ ```python
+ chinese_ingredient_df = create_ingredient_df(chinese_df)
+ chinese_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 绘制印度配料:
+
+ ```python
+ indian_ingredient_df = create_ingredient_df(indian_df)
+ indian_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 最后,绘制韩国配料:
+
+ ```python
+ korean_ingredient_df = create_ingredient_df(korean_df)
+ korean_ingredient_df.head(10).plot.barh()
+ ```
+
+ 
+
+1. 现在,通过调用`drop()`删除在不同美食之间造成混淆的最常见配料:
+
+ 每个人都喜欢大米、大蒜和姜!
+
+ ```python
+ feature_df= df.drop(['cuisine','Unnamed: 0','rice','garlic','ginger'], axis=1)
+ labels_df = df.cuisine #.unique()
+ feature_df.head()
+ ```
+
+## 平衡数据集
+
+现在你已经清理了数据,使用[SMOTE](https://imbalanced-learn.org/dev/references/generated/imblearn.over_sampling.SMOTE.html) - “合成少数过采样技术” - 来平衡它。
+
+1. 调用`fit_resample()`,这种策略通过插值生成新样本。
+
+ ```python
+ oversample = SMOTE()
+ transformed_feature_df, transformed_label_df = oversample.fit_resample(feature_df, labels_df)
+ ```
+
+ 通过平衡你的数据,当分类它时你将获得更好的结果。想想一个二元分类。如果你大部分数据都是一个类别的,机器学习模型会更频繁地预测该类别,仅仅因为它有更多的数据。平衡数据会修正任何倾斜的数据,并有助于消除这种不平衡。
+
+1. 现在你可以检查每种配料的标签数量:
+
+ ```python
+ print(f'new label count: {transformed_label_df.value_counts()}')
+ print(f'old label count: {df.cuisine.value_counts()}')
+ ```
+
+ 你的输出如下所示:
+
+ ```output
+ new label count: korean 799
+ chinese 799
+ indian 799
+ japanese 799
+ thai 799
+ Name: cuisine, dtype: int64
+ old label count: korean 799
+ indian 598
+ chinese 442
+ japanese 320
+ thai 289
+ Name: cuisine, dtype: int64
+ ```
+
+ 数据干净、平衡,非常美味!
+
+1. 最后一步是将你的平衡数据,包括标签和特征,保存到一个可以导出到文件的新数据框中:
+
+ ```python
+ transformed_df = pd.concat([transformed_label_df,transformed_feature_df],axis=1, join='outer')
+ ```
+
+1. 你可以使用 `transformed_df.head()` 和 `transformed_df.info()` 再检查一遍数据。保存一份副本,以供将来的课程使用:
+
+ ```python
+ transformed_df.head()
+ transformed_df.info()
+ transformed_df.to_csv("../data/cleaned_cuisines.csv")
+ ```
+
+ 这个新的CSV现在可以在根数据文件夹中找到。
+
+---
+
+## 🚀挑战
+
+这个课程包含几个有趣的数据集。浏览`data`文件夹,看看是否有适合二元或多类分类的数据集?你会向这个数据集提出什么问题?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)
+
+## 复习与自学
+
+探索SMOTE的API。它最适用于哪些用例?它解决了哪些问题?
+
+## 作业
+
+[探索分类方法](assignment.md)
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/1-Introduction/assignment.md b/translations/zh/4-Classification/1-Introduction/assignment.md
new file mode 100644
index 000000000..a43b10346
--- /dev/null
+++ b/translations/zh/4-Classification/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# 探索分类方法
+
+## 说明
+
+在 [Scikit-learn 文档](https://scikit-learn.org/stable/supervised_learning.html) 中,你会发现大量的分类数据的方法。请在这些文档中进行一次小型寻宝游戏:你的目标是寻找分类方法,并将其与本课程中的数据集、你可以提出的问题以及分类技术相匹配。创建一个电子表格或 .doc 文件的表格,并解释该数据集如何与分类算法一起工作。
+
+## 评分标准
+
+| 标准 | 卓越 | 适当 | 需要改进 |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| | 提供了一份概述5种算法及其分类技术的文档。概述解释得非常清楚且详细。 | 提供了一份概述3种算法及其分类技术的文档。概述解释得非常清楚且详细。 | 提供了一份概述少于三种算法及其分类技术的文档,且概述解释得既不清楚也不详细。 |
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对因使用此翻译而引起的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/1-Introduction/solution/Julia/README.md b/translations/zh/4-Classification/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..f36170959
--- /dev/null
+++ b/translations/zh/4-Classification/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而引起的任何误解或误释,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/2-Classifiers-1/README.md b/translations/zh/4-Classification/2-Classifiers-1/README.md
new file mode 100644
index 000000000..db60d31df
--- /dev/null
+++ b/translations/zh/4-Classification/2-Classifiers-1/README.md
@@ -0,0 +1,244 @@
+# 美食分类器 1
+
+在本课中,你将使用上节课保存的数据集,这些数据是关于美食的平衡、干净的数据。
+
+你将使用这个数据集与各种分类器一起工作,_根据一组食材预测给定的国家美食_。在此过程中,你将了解一些算法如何被用来完成分类任务。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/21/)
+# 准备工作
+
+假设你已经完成了[第一课](../1-Introduction/README.md),确保在根目录 `/data` 文件夹中存在一个 _cleaned_cuisines.csv_ 文件,以供这四节课使用。
+
+## 练习 - 预测国家美食
+
+1. 在本课的 _notebook.ipynb_ 文件夹中,导入该文件和 Pandas 库:
+
+ ```python
+ import pandas as pd
+ cuisines_df = pd.read_csv("../data/cleaned_cuisines.csv")
+ cuisines_df.head()
+ ```
+
+ 数据看起来是这样的:
+
+| | Unnamed: 0 | cuisine | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| --- | ---------- | ------- | ------ | -------- | ----- | ---------- | ----- | ------------ | ------- | -------- | --- | ------- | ----------- | ---------- | ----------------------- | ---- | ---- | --- | ----- | ------ | -------- |
+| 0 | 0 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | indian | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 2 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 3 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 4 | indian | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+
+1. 现在,导入更多的库:
+
+ ```python
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ from sklearn.svm import SVC
+ import numpy as np
+ ```
+
+1. 将 X 和 y 坐标分成两个用于训练的数据框架。`cuisine` 可以作为标签数据框:
+
+ ```python
+ cuisines_label_df = cuisines_df['cuisine']
+ cuisines_label_df.head()
+ ```
+
+ 它看起来是这样的:
+
+ ```output
+ 0 indian
+ 1 indian
+ 2 indian
+ 3 indian
+ 4 indian
+ Name: cuisine, dtype: object
+ ```
+
+1. 调用 `drop()` 删除 `Unnamed: 0` 列和 `cuisine` 列,将剩余的数据保存为可训练的特征:
+
+ ```python
+ cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1)
+ cuisines_feature_df.head()
+ ```
+
+ 你的特征看起来是这样的:
+
+| | almond | angelica | anise | anise_seed | apple | apple_brandy | apricot | armagnac | artemisia | artichoke | ... | whiskey | white_bread | white_wine | whole_grain_wheat_flour | wine | wood | yam | yeast | yogurt | zucchini |
+| ---: | -----: | -------: | ----: | ---------: | ----: | -----------: | ------: | -------: | --------: | --------: | ---: | ------: | ----------: | ---------: | ----------------------: | ---: | ---: | ---: | ----: | -----: | -------: |
+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
+
+现在你已经准备好训练你的模型了!
+
+## 选择你的分类器
+
+现在你的数据已经清理干净并准备好进行训练,你需要决定使用哪种算法来完成这项任务。
+
+Scikit-learn 将分类归入监督学习,在这个类别中你会找到很多分类方法。[种类之多](https://scikit-learn.org/stable/supervised_learning.html),乍一看可能会让人眼花缭乱。以下方法都包含分类技术:
+
+- 线性模型
+- 支持向量机
+- 随机梯度下降
+- 最近邻
+- 高斯过程
+- 决策树
+- 集成方法(投票分类器)
+- 多类和多输出算法(多类和多标签分类,多类-多输出分类)
+
+> 你也可以使用[神经网络来分类数据](https://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification),但这超出了本课的范围。
+
+### 选择哪个分类器?
+
+那么,你应该选择哪个分类器呢?通常,同时运行多个分类器并比较结果是一种不错的测试方法。Scikit-learn 在一个人造数据集上提供了一个[并排比较](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html),比较了 KNeighbors、两种方式的 SVC、GaussianProcessClassifier、DecisionTreeClassifier、RandomForestClassifier、MLPClassifier、AdaBoostClassifier、GaussianNB 和 QuadraticDiscriminantAnalysis,并将结果可视化展示:
+
+
+> 图表来自 Scikit-learn 的文档
+
+> AutoML 通过在云中运行这些比较,允许你选择最适合你数据的算法,巧妙地解决了这个问题。试试[这里](https://docs.microsoft.com/learn/modules/automate-model-selection-with-azure-automl/?WT.mc_id=academic-77952-leestott)
+
+### 更好的方法
+
+比盲目猜测更好的方法是遵循这个可下载的[机器学习速查表](https://docs.microsoft.com/azure/machine-learning/algorithm-cheat-sheet?WT.mc_id=academic-77952-leestott)上的想法。在这里,我们发现,对于我们的多类问题,我们有一些选择:
+
+
+> 微软算法速查表的一部分,详细介绍了多类分类选项
+
+✅ 下载这个速查表,打印出来,挂在墙上!
+
+### 推理
+
+让我们看看能否根据我们面临的限制推理出不同的方法:
+
+- **神经网络太重了**。考虑到我们的数据集干净但很少,并且我们是通过笔记本本地运行训练,神经网络对于这个任务来说太重了。
+- **没有两类分类器**。我们不使用两类分类器,因此排除了 one-vs-all。
+- **决策树或逻辑回归可能有效**。决策树可能有效,或者多类数据的逻辑回归。
+- **多类增强决策树解决不同的问题**。多类增强决策树最适合非参数任务,例如设计用于构建排名的任务,因此对我们没有用。
+
+### 使用 Scikit-learn
+
+我们将使用 Scikit-learn 来分析我们的数据。然而,在 Scikit-learn 中有许多方法可以使用逻辑回归。看看[需要传递的参数](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression)。
+
+本质上,当我们要求 Scikit-learn 执行逻辑回归时,有两个需要指定的重要参数:`multi_class` 和 `solver`。`multi_class` 的值决定某种行为,solver 的值则指定要使用的算法。并非所有求解器都能与所有 `multi_class` 值搭配使用。
+
+根据文档,在多类情况下,训练算法:
+
+- 如果 `multi_class` 选项设置为 `ovr`,则**使用 one-vs-rest (OvR) 方案**
+- 如果 `multi_class` 选项设置为 `multinomial`,则**使用交叉熵损失**。(目前 `multinomial` 选项仅受 'lbfgs'、'sag'、'saga' 和 'newton-cg' 求解器支持。)
+
+> 🎓 这里的"方案"可以是 'ovr'(one-vs-rest)或 'multinomial'。由于逻辑回归本质上是为支持二元分类而设计的,这些方案使它能够更好地处理多类分类任务。[来源](https://machinelearningmastery.com/one-vs-rest-and-one-vs-one-for-multi-class-classification/)
+
+> 🎓 "solver"(求解器)被定义为"在优化问题中使用的算法"。[来源](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressio#sklearn.linear_model.LogisticRegression)。
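+
+下面是一个简单的示意,展示这两个参数如何搭配(仅作说明,变量名为示意用):
+
+```python
+from sklearn.linear_model import LogisticRegression
+
+# 'ovr' 方案:为每个类别训练一个 one-vs-rest 二元分类器
+ovr_lr = LogisticRegression(multi_class='ovr', solver='liblinear')
+
+# 'multinomial' 方案使用交叉熵损失,只支持部分求解器(如 lbfgs)
+multinomial_lr = LogisticRegression(multi_class='multinomial', solver='lbfgs')
+```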
+
+Scikit-learn 提供了下表来解释求解器如何应对不同数据结构带来的不同挑战:
+
+
+
+## 练习 - 分割数据
+
+由于你在上一课刚刚学习过逻辑回归,我们的第一次训练尝试可以专注于它。
+通过调用 `train_test_split()` 将数据分割为训练组和测试组:
+
+```python
+X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+```
+
+## 练习 - 应用逻辑回归
+
+由于你使用的是多类情况,你需要选择什么 _方案_ 和设置什么 _求解器_。使用 LogisticRegression 的多类设置和 **liblinear** 求解器进行训练。
+
+1. 创建一个逻辑回归,将多类设置为 `ovr`,求解器设置为 `liblinear`:
+
+ ```python
+ lr = LogisticRegression(multi_class='ovr',solver='liblinear')
+ model = lr.fit(X_train, np.ravel(y_train))
+
+ accuracy = model.score(X_test, y_test)
+ print ("Accuracy is {}".format(accuracy))
+ ```
+
+    ✅ 尝试一个不同的求解器,例如 `lbfgs`(它通常被设置为默认值)
+
+    > 注意,需要时可使用 Pandas 的 [`ravel`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ravel.html) 函数来展平你的数据。
+
+ 准确率超过 **80%**,效果很好!
+
+1. 你可以通过测试一行数据(#50)来看到这个模型的实际效果:
+
+ ```python
+ print(f'ingredients: {X_test.iloc[50][X_test.iloc[50]!=0].keys()}')
+ print(f'cuisine: {y_test.iloc[50]}')
+ ```
+
+ 结果打印出来:
+
+ ```output
+ ingredients: Index(['cilantro', 'onion', 'pea', 'potato', 'tomato', 'vegetable_oil'], dtype='object')
+ cuisine: indian
+ ```
+
+ ✅ 尝试一个不同的行号并检查结果
+
+1. 更深入地了解,你可以检查这个预测的准确性:
+
+ ```python
+ test= X_test.iloc[50].values.reshape(-1, 1).T
+ proba = model.predict_proba(test)
+ classes = model.classes_
+ resultdf = pd.DataFrame(data=proba, columns=classes)
+
+ topPrediction = resultdf.T.sort_values(by=[0], ascending = [False])
+ topPrediction.head()
+ ```
+
+ 结果打印出来 - 印度菜是最好的猜测,概率很高:
+
+ | | 0 |
+ | -------: | -------: |
+ | indian | 0.715851 |
+ | chinese | 0.229475 |
+ | japanese | 0.029763 |
+ | korean | 0.017277 |
+ | thai | 0.007634 |
+
+ ✅ 你能解释为什么模型非常确定这是印度菜吗?
+
+1. 通过打印分类报告,获取更多细节,就像在回归课程中所做的那样:
+
+ ```python
+ y_pred = model.predict(X_test)
+ print(classification_report(y_test,y_pred))
+ ```
+
+ | | precision | recall | f1-score | support |
+ | ------------ | --------- | ------ | -------- | ------- |
+ | chinese | 0.73 | 0.71 | 0.72 | 229 |
+ | indian | 0.91 | 0.93 | 0.92 | 254 |
+ | japanese | 0.70 | 0.75 | 0.72 | 220 |
+ | korean | 0.86 | 0.76 | 0.81 | 242 |
+ | thai | 0.79 | 0.85 | 0.82 | 254 |
+    | accuracy     |           |        | 0.80     | 1199    |
+ | macro avg | 0.80 | 0.80 | 0.80 | 1199 |
+ | weighted avg | 0.80 | 0.80 | 0.80 | 1199 |
+
+## 🚀挑战
+
+在本课中,你使用清理后的数据构建了一个机器学习模型,可以根据一系列食材预测国家美食。花点时间阅读 Scikit-learn 提供的许多分类数据的选项。深入了解“求解器”的概念,了解幕后发生了什么。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/22/)
+
+## 回顾与自学
+
+深入了解逻辑回归背后的数学原理,在[这节课](https://people.eecs.berkeley.edu/~russell/classes/cs194/f11/lectures/CS194%20Fall%202011%20Lecture%2006.pdf)
+## 作业
+
+[研究求解器](assignment.md)
+
+**免责声明**:
+本文档已使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。我们对使用本翻译可能引起的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/2-Classifiers-1/assignment.md b/translations/zh/4-Classification/2-Classifiers-1/assignment.md
new file mode 100644
index 000000000..234aa1cbe
--- /dev/null
+++ b/translations/zh/4-Classification/2-Classifiers-1/assignment.md
@@ -0,0 +1,12 @@
+# 研究求解器
+## 说明
+
+在本课中,你了解了将算法与机器学习过程相结合以创建准确模型的各种求解器。浏览课中列出的求解器并选择两个。用你自己的话对这两个求解器进行比较和对比:它们解决什么样的问题?它们如何处理各种数据结构?为什么你会选择其中一个而不是另一个?
+## 评分标准
+
+| 标准 | 模范 | 合格 | 需要改进 |
+| ------ | --------------------------------------------------------------------------------------------- | ---------------------------------------------- | --------------------------- |
+| | 提交的 .doc 文件包含两个段落,每个段落分别比较一个求解器,并进行深思熟虑的比较。 | 提交的 .doc 文件只有一个段落 | 作业不完整 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文件的本地语言版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而产生的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/2-Classifiers-1/solution/Julia/README.md b/translations/zh/4-Classification/2-Classifiers-1/solution/Julia/README.md
new file mode 100644
index 000000000..7d329e2a9
--- /dev/null
+++ b/translations/zh/4-Classification/2-Classifiers-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对因使用本翻译而产生的任何误解或误读负责。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/3-Classifiers-2/README.md b/translations/zh/4-Classification/3-Classifiers-2/README.md
new file mode 100644
index 000000000..bf8592de2
--- /dev/null
+++ b/translations/zh/4-Classification/3-Classifiers-2/README.md
@@ -0,0 +1,238 @@
+# 美食分类器 2
+
+在第二节分类课程中,您将探索更多分类数值数据的方法。您还将了解选择不同分类器的后果。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/23/)
+
+### 前提条件
+
+我们假设您已经完成了前面的课程,并在您的 `data` 文件夹中有一个名为 _cleaned_cuisines.csv_ 的已清理数据集,该文件位于这四节课的根目录中。
+
+### 准备
+
+我们已经加载了您的 _notebook.ipynb_ 文件,并将已清理的数据集划分为 X 和 y 数据框,准备进行模型构建过程。
+
+## 分类图
+
+之前,您通过微软的速查表了解了分类数据的各种选项。Scikit-learn 提供了一个类似但更细致的速查表,可以进一步帮助您缩小估算器(分类器的另一种说法)的选择范围:
+
+
+> 提示:[在线访问此图](https://scikit-learn.org/stable/tutorial/machine_learning_map/)并点击路径以阅读文档。
+
+### 计划
+
+一旦您对数据有了清晰的理解,这张图就非常有帮助,因为您可以沿着路径“走”到一个决策:
+
+- 我们有超过50个样本
+- 我们想预测一个类别
+- 我们有标记的数据
+- 我们有少于100K个样本
+- ✨ 我们可以选择一个线性SVC
+- 如果这不起作用,因为我们有数值数据
+ - 我们可以尝试一个 ✨ KNeighbors 分类器
+ - 如果这不起作用,尝试 ✨ SVC 和 ✨ 集成分类器
+
+这是一个非常有用的路径。
+
+## 练习 - 划分数据
+
+按照这条路径,我们应该从导入一些需要的库开始。
+
+1. 导入所需的库:
+
+ ```python
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.svm import SVC
+ from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
+ from sklearn.model_selection import train_test_split, cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve
+ import numpy as np
+ ```
+
+1. 划分您的训练和测试数据:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3)
+ ```
+
+## 线性 SVC 分类器
+
+支持向量聚类(SVC)是支持向量机这一机器学习技术家族中的一员(下文会详细介绍)。在这种方法中,你可以选择一个"核"来决定如何聚类标签。参数 'C' 指的是"正则化",用于调节参数的影响。核可以是[几种](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC)中的一种;这里我们将其设置为 'linear',以确保我们使用线性 SVC。参数 probability 默认为 'false';这里我们将其设置为 'true' 以收集概率估计。我们将 random_state 设置为 '0',在打乱数据后获得概率估计。
+
+### 练习 - 应用线性 SVC
+
+首先创建一个分类器数组。我们将在测试时逐步添加到这个数组中。
+
+1. 从线性 SVC 开始:
+
+ ```python
+ C = 10
+ # Create different classifiers.
+ classifiers = {
+ 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0)
+ }
+ ```
+
+2. 使用线性 SVC 训练您的模型并打印出报告:
+
+ ```python
+ n_classifiers = len(classifiers)
+
+ for index, (name, classifier) in enumerate(classifiers.items()):
+ classifier.fit(X_train, np.ravel(y_train))
+
+ y_pred = classifier.predict(X_test)
+ accuracy = accuracy_score(y_test, y_pred)
+ print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 结果相当不错:
+
+ ```output
+ Accuracy (train) for Linear SVC: 78.6%
+ precision recall f1-score support
+
+ chinese 0.71 0.67 0.69 242
+ indian 0.88 0.86 0.87 234
+ japanese 0.79 0.74 0.76 254
+ korean 0.85 0.81 0.83 242
+ thai 0.71 0.86 0.78 227
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+## K-Neighbors 分类器
+
+K-Neighbors 是“邻居”家族的机器学习方法的一部分,可以用于监督和非监督学习。在这种方法中,会创建预定义数量的点,并围绕这些点收集数据,以便为数据预测通用标签。
+
+### 练习 - 应用 K-Neighbors 分类器
+
+之前的分类器效果不错,并且与数据配合良好,但也许我们可以获得更好的准确性。试试 K-Neighbors 分类器。
+
+1. 在分类器数组中添加一行(在线性 SVC 项目后添加一个逗号):
+
+ ```python
+ 'KNN classifier': KNeighborsClassifier(C),
+ ```
+
+ 结果稍差一些:
+
+ ```output
+ Accuracy (train) for KNN classifier: 73.8%
+ precision recall f1-score support
+
+ chinese 0.64 0.67 0.66 242
+ indian 0.86 0.78 0.82 234
+ japanese 0.66 0.83 0.74 254
+ korean 0.94 0.58 0.72 242
+ thai 0.71 0.82 0.76 227
+
+ accuracy 0.74 1199
+ macro avg 0.76 0.74 0.74 1199
+ weighted avg 0.76 0.74 0.74 1199
+ ```
+
+ ✅ 了解 [K-Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#neighbors)
+
+## 支持向量分类器
+
+支持向量分类器是 [支持向量机](https://wikipedia.org/wiki/Support-vector_machine) 家族的一部分,这些机器学习方法用于分类和回归任务。SVMs 将“训练示例映射到空间中的点”以最大化两个类别之间的距离。随后将数据映射到此空间,以便预测它们的类别。
+
+### 练习 - 应用支持向量分类器
+
+让我们尝试用支持向量分类器获得更好的准确性。
+
+1. 在 K-Neighbors 项目后添加一个逗号,然后添加这一行:
+
+ ```python
+ 'SVC': SVC(),
+ ```
+
+ 结果相当好!
+
+ ```output
+ Accuracy (train) for SVC: 83.2%
+ precision recall f1-score support
+
+ chinese 0.79 0.74 0.76 242
+ indian 0.88 0.90 0.89 234
+ japanese 0.87 0.81 0.84 254
+ korean 0.91 0.82 0.86 242
+ thai 0.74 0.90 0.81 227
+
+ accuracy 0.83 1199
+ macro avg 0.84 0.83 0.83 1199
+ weighted avg 0.84 0.83 0.83 1199
+ ```
+
+ ✅ 了解 [支持向量](https://scikit-learn.org/stable/modules/svm.html#svm)
+
+## 集成分类器
+
+让我们走到这条路径的尽头,尽管前面的测试结果已经相当好。让我们尝试一些“集成分类器”,特别是随机森林和 AdaBoost:
+
+```python
+ 'RFST': RandomForestClassifier(n_estimators=100),
+ 'ADA': AdaBoostClassifier(n_estimators=100)
+```
+
+结果非常好,尤其是随机森林:
+
+```output
+Accuracy (train) for RFST: 84.5%
+ precision recall f1-score support
+
+ chinese 0.80 0.77 0.78 242
+ indian 0.89 0.92 0.90 234
+ japanese 0.86 0.84 0.85 254
+ korean 0.88 0.83 0.85 242
+ thai 0.80 0.87 0.83 227
+
+ accuracy 0.84 1199
+ macro avg 0.85 0.85 0.84 1199
+weighted avg 0.85 0.84 0.84 1199
+
+Accuracy (train) for ADA: 72.4%
+ precision recall f1-score support
+
+ chinese 0.64 0.49 0.56 242
+ indian 0.91 0.83 0.87 234
+ japanese 0.68 0.69 0.69 254
+ korean 0.73 0.79 0.76 242
+ thai 0.67 0.83 0.74 227
+
+ accuracy 0.72 1199
+ macro avg 0.73 0.73 0.72 1199
+weighted avg 0.73 0.72 0.72 1199
+```
+
+✅ 了解 [集成分类器](https://scikit-learn.org/stable/modules/ensemble.html)
+
+这种机器学习方法“结合了几个基础估算器的预测”以提高模型的质量。在我们的示例中,我们使用了随机森林和 AdaBoost。
+
+- [随机森林](https://scikit-learn.org/stable/modules/ensemble.html#forest),一种平均方法,构建一个充满随机性的“决策树”森林,以避免过拟合。参数 n_estimators 设置为树的数量。
+
+- [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) 将分类器拟合到数据集,然后将该分类器的副本拟合到同一数据集。它关注错误分类项的权重,并调整下一个分类器的拟合以进行修正。
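+
+例如,下面的草图(假设沿用上文已拟合的分类器字典和 `X_train`)展示了如何查看随机森林认为最重要的前几个特征:
+
+```python
+import pandas as pd
+
+# 假设 classifiers['RFST'] 已在上面的训练循环中完成拟合
+rf = classifiers['RFST']
+importances = pd.Series(rf.feature_importances_, index=X_train.columns)
+print(importances.sort_values(ascending=False).head(10))
+```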
+
+---
+
+## 🚀挑战
+
+这些技术中的每一个都有大量参数可以调整。研究每个技术的默认参数,并思考调整这些参数对模型质量的影响。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/24/)
+
+## 复习与自学
+
+这些课程中有很多术语,所以花点时间复习一下[这个列表](https://docs.microsoft.com/dotnet/machine-learning/resources/glossary?WT.mc_id=academic-77952-leestott)中的有用术语!
+
+## 作业
+
+[参数调试](assignment.md)
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应以原文档的母语版本为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用本翻译而产生的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/3-Classifiers-2/assignment.md b/translations/zh/4-Classification/3-Classifiers-2/assignment.md
new file mode 100644
index 000000000..730b86646
--- /dev/null
+++ b/translations/zh/4-Classification/3-Classifiers-2/assignment.md
@@ -0,0 +1,14 @@
+# 参数演练
+
+## 说明
+
+在使用这些分类器时,有很多默认设置的参数。VS Code 中的 Intellisense 可以帮助你深入了解它们。在本课中采用一种机器学习分类技术,并通过调整各种参数值来重新训练模型。创建一个笔记本,解释为什么某些更改有助于模型质量,而其他更改则会降低质量。请在回答中详细说明。
+
+## 评分标准
+
+| 标准 | 卓越 | 适当 | 需要改进 |
+| ------ | ---------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- | ----------------------------- |
+| | 提供一个完全构建的分类器的笔记本,并在文本框中解释参数调整和更改 | 提供的笔记本部分完成或解释不充分 | 提供的笔记本有错误或缺陷 |
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于重要信息,建议使用专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/3-Classifiers-2/solution/Julia/README.md b/translations/zh/4-Classification/3-Classifiers-2/solution/Julia/README.md
new file mode 100644
index 000000000..f7171b0ee
--- /dev/null
+++ b/translations/zh/4-Classification/3-Classifiers-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/4-Applied/README.md b/translations/zh/4-Classification/4-Applied/README.md
new file mode 100644
index 000000000..e3adbdaf0
--- /dev/null
+++ b/translations/zh/4-Classification/4-Applied/README.md
@@ -0,0 +1,317 @@
+# 构建一个美食推荐 Web 应用
+
+在本课中,你将使用之前课程中学到的一些技术,结合贯穿本系列课程使用的美食数据集,构建一个分类模型。此外,你还将构建一个小型 Web 应用来使用保存的模型,并利用 Onnx 的 Web 运行时。
+
+机器学习最有用的实际应用之一是构建推荐系统,你今天可以迈出这一方向的第一步!
+
+[](https://youtu.be/17wdM9AHMfg "应用机器学习")
+
+> 🎥 点击上面的图片观看视频:Jen Looper 使用分类美食数据构建 Web 应用
+
+## [课前小测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/25/)
+
+在本课中你将学习:
+
+- 如何构建模型并将其保存为 Onnx 模型
+- 如何使用 Netron 检查模型
+- 如何在 Web 应用中使用你的模型进行推断
+
+## 构建你的模型
+
+构建应用机器学习系统是利用这些技术为你的业务系统服务的重要部分。你可以使用 Onnx 在 Web 应用中使用模型(如果需要,也可以在离线环境中使用它们)。
+
+在[之前的课程](../../3-Web-App/1-Web-App/README.md)中,你构建了一个关于 UFO 目击事件的回归模型,将其“腌制”并在 Flask 应用中使用。虽然这种架构非常有用,但它是一个全栈的 Python 应用,而你的需求可能包括使用 JavaScript 应用。
+
+在本课中,你可以构建一个基于 JavaScript 的基本推断系统。但首先,你需要训练一个模型并将其转换为 Onnx 格式。
+
+## 练习 - 训练分类模型
+
+首先,使用我们之前使用的清理后的美食数据集训练一个分类模型。
+
+1. 首先导入有用的库:
+
+ ```python
+ !pip install skl2onnx
+ import pandas as pd
+ ```
+
+ 你需要 '[skl2onnx](https://onnx.ai/sklearn-onnx/)' 来帮助将你的 Scikit-learn 模型转换为 Onnx 格式。
+
+1. 然后,以与之前课程相同的方式处理你的数据,通过 `read_csv()` 读取 CSV 文件:
+
+ ```python
+ data = pd.read_csv('../data/cleaned_cuisines.csv')
+ data.head()
+ ```
+
+1. 移除前两个不必要的列,并将剩余的数据保存为 'X':
+
+ ```python
+ X = data.iloc[:,2:]
+ X.head()
+ ```
+
+1. 将标签保存为 'y':
+
+ ```python
+ y = data[['cuisine']]
+ y.head()
+
+ ```
+
+### 开始训练流程
+
+我们将使用具有良好准确性的 'SVC' 库。
+
+1. 从 Scikit-learn 导入适当的库:
+
+ ```python
+ from sklearn.model_selection import train_test_split
+ from sklearn.svm import SVC
+ from sklearn.model_selection import cross_val_score
+ from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report
+ ```
+
+1. 分离训练集和测试集:
+
+ ```python
+ X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
+ ```
+
+1. 构建一个 SVC 分类模型,如你在之前的课程中所做的那样:
+
+ ```python
+ model = SVC(kernel='linear', C=10, probability=True,random_state=0)
+ model.fit(X_train,y_train.values.ravel())
+ ```
+
+1. 现在,测试你的模型,调用 `predict()`:
+
+ ```python
+ y_pred = model.predict(X_test)
+ ```
+
+1. 打印分类报告以检查模型的质量:
+
+ ```python
+ print(classification_report(y_test,y_pred))
+ ```
+
+ 如我们之前所见,准确性很好:
+
+ ```output
+ precision recall f1-score support
+
+ chinese 0.72 0.69 0.70 257
+ indian 0.91 0.87 0.89 243
+ japanese 0.79 0.77 0.78 239
+ korean 0.83 0.79 0.81 236
+ thai 0.72 0.84 0.78 224
+
+ accuracy 0.79 1199
+ macro avg 0.79 0.79 0.79 1199
+ weighted avg 0.79 0.79 0.79 1199
+ ```
+
+### 将你的模型转换为 Onnx
+
+确保使用正确的 Tensor 数进行转换。此数据集中列出了 380 种成分,因此你需要在 `FloatTensorType` 中标注该数字:
+
+1. 使用 380 的 tensor 数进行转换。
+
+ ```python
+ from skl2onnx import convert_sklearn
+ from skl2onnx.common.data_types import FloatTensorType
+
+ initial_type = [('float_input', FloatTensorType([None, 380]))]
+ options = {id(model): {'nocl': True, 'zipmap': False}}
+ ```
+
+1. 创建 onx 文件并保存为 **model.onnx**:
+
+ ```python
+ onx = convert_sklearn(model, initial_types=initial_type, options=options)
+ with open("./model.onnx", "wb") as f:
+ f.write(onx.SerializeToString())
+ ```
+
+   > 注意,你可以在转换脚本中传递[选项](https://onnx.ai/sklearn-onnx/parameterized.html)。在本例中,我们将 'nocl' 设置为 True,并将 'zipmap' 设置为 False。由于这是一个分类模型,你可以选择移除 ZipMap,它会生成一个(不必要的)字典列表。`nocl` 指的是模型中是否包含类别信息。将 `nocl` 设置为 'True' 可以减小模型的体积。
+
+现在,运行整个 notebook 将构建一个 Onnx 模型并将其保存到此文件夹中。
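+
+保存之后,可以用一个简单的草图快速确认导出结果(假设已安装 `onnx` 包;这只是示意,用下文的 Netron 检查同样可行):
+
+```python
+# 示意:加载刚导出的模型并查看其输入定义
+import onnx
+
+m = onnx.load("./model.onnx")
+print(m.graph.input)  # 应显示名为 float_input、第二维为 380 的输入
+```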
+
+## 查看你的模型
+
+在 Visual Studio Code 中,Onnx 模型的可视性不佳,但有一个很好的免费软件,许多研究人员用它来可视化模型,以确保模型构建正确。下载 [Netron](https://github.com/lutzroeder/Netron) 并打开你的 model.onnx 文件。你可以看到这个简单模型的可视化结果,其中列出了 380 个输入和分类器:
+
+
+
+Netron 是查看模型的有用工具。
+
+现在,你就可以在 Web 应用中使用这个小巧的模型了。让我们构建一个实用的应用:当你查看冰箱,想弄清楚可以用哪些剩余食材的组合来烹饪某种美食(由你的模型判断)时,它会派上用场。
+
+## 构建一个推荐 Web 应用
+
+你可以在 Web 应用中直接使用你的模型。这种架构还允许你在本地运行,甚至在需要时离线运行。首先,在你存放 `model.onnx` 文件的同一文件夹中创建一个 `index.html` 文件。
+
+1. 在这个文件 _index.html_ 中,添加以下标记:
+
+    ```html
+    <html>
+        <head>
+            <title>Cuisine Matcher</title>
+        </head>
+        <body>
+            ...
+        </body>
+    </html>
+    ```
+
+1. 现在,在 `body` 标签内工作,添加一些标记以显示反映某些成分的复选框列表:
+
+    ```html
+    <h1>Check your refrigerator. What can you create?</h1>
+    <div id="wrapper">
+        <div class="boxCont">
+            <input type="checkbox" value="4" class="checkbox">
+            <label>apple</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="247" class="checkbox">
+            <label>pear</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="77" class="checkbox">
+            <label>cherry</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="126" class="checkbox">
+            <label>fenugreek</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="302" class="checkbox">
+            <label>sake</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="327" class="checkbox">
+            <label>soy sauce</label>
+        </div>
+
+        <div class="boxCont">
+            <input type="checkbox" value="112" class="checkbox">
+            <label>cumin</label>
+        </div>
+    </div>
+    <div style="margin-top:10px">
+        <button onClick="startInference()">What kind of cuisine can you make?</button>
+    </div>
+    ```
+
+ 注意,每个复选框都有一个值。这反映了成分在数据集中的索引位置。例如,苹果在这个按字母顺序排列的列表中占据第五列,所以它的值是 '4',因为我们从 0 开始计数。你可以查阅[成分电子表格](../../../../4-Classification/data/ingredient_indexes.csv)来发现给定成分的索引。
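+
+    如果想用代码核对某个食材的索引,可以在 notebook 里做一个简单的检查(示意,假设沿用上文训练时的特征数据框 `X`):
+
+    ```python
+    # 示意:查找 'apple' 在特征列中的位置(从 0 开始计数)
+    print(list(X.columns).index('apple'))  # 预期输出 4
+    ```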
+
+    继续在 index.html 文件中工作,在最后一个关闭的 `</div>` 之后添加一个脚本块,在其中调用模型。
+
+1. 首先,导入 [Onnx Runtime](https://www.onnxruntime.ai/):
+
+    ```html
+    <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web@1.9.0/dist/ort.min.js"></script>
+    ```
+
+ > Onnx Runtime 用于在广泛的硬件平台上运行你的 Onnx 模型,包括优化和使用的 API。
+
+1. 一旦 Runtime 就位,你可以调用它:
+
+    ```html
+    <script>
+        // 注:此脚本按照下文列表的描述重建,输入名 float_input 可用 Netron 核实
+        const ingredients = Array(380).fill(0);
+
+        const checks = [...document.querySelectorAll('.checkbox')];
+
+        // init:应用启动时调用,为每个复选框注册状态监听
+        function init() {
+            checks.forEach(check => {
+                check.addEventListener('change', function() {
+                    // 根据复选框的 value 把对应食材置为 1 或 0
+                    ingredients[check.value] = check.checked ? 1 : 0;
+                });
+            });
+        }
+        init();
+
+        function testCheckboxes() {
+            // 检查是否至少勾选了一个复选框
+            return checks.some(check => check.checked);
+        }
+
+        async function startInference() {
+            if (!testCheckboxes()) return;
+
+            try {
+                // 异步加载模型并创建会话
+                const session = await ort.InferenceSession.create('./model.onnx');
+
+                // 创建要发送给模型的 Tensor 结构
+                const input = new ort.Tensor(new Float32Array(ingredients), [1, 380]);
+
+                // 'feeds' 对应训练模型时创建的 float_input 输入
+                const feeds = { float_input: input };
+
+                // 发送 'feeds' 并等待模型响应
+                const results = await session.run(feeds);
+
+                // 读取结果并展示
+                alert('You can enjoy ' + results.label.data[0] + ' cuisine today!');
+            } catch (e) {
+                console.error(e);
+            }
+        }
+    </script>
+    ```
+
+在这段代码中,发生了几件事:
+
+1. 你创建了一个包含 380 个可能值(1 或 0)的数组,这些值将根据成分复选框是否被选中而设置并发送到模型进行推断。
+2. 你创建了一个复选框数组,并在应用启动时调用的 `init` 函数中提供了一种确定它们是否被选中的方法。当某个复选框被勾选时,`ingredients` 数组会被修改以反映所选的食材。
+3. 你创建了一个 `testCheckboxes` 函数,用于检查是否有任何复选框被勾选。
+4. 当按钮被按下时,你使用 `startInference` 函数;如果有任何复选框被勾选,就开始推断。
+5. 推断流程包括:
+   1. 设置模型的异步加载
+   2. 创建要发送给模型的 Tensor 结构
+   3. 创建 'feeds',它对应你训练模型时创建的 `float_input` 输入(你可以用 Netron 验证该名称)
+   4. 将这些 'feeds' 发送给模型并等待响应
+
+## 测试你的应用
+
+在 Visual Studio Code 中,在 index.html 文件所在的文件夹里打开一个终端会话。确保已全局安装 [http-server](https://www.npmjs.com/package/http-server),然后在提示符下输入 `http-server`。服务会在 localhost 上打开,你可以查看你的 Web 应用。检查根据不同成分推荐的美食:
+
+
+
+恭喜,你已经创建了一个带有几个字段的“推荐” Web 应用。花点时间来完善这个系统吧!
+
+## 🚀挑战
+
+你的 Web 应用非常简单,因此继续使用[ingredient_indexes](../../../../4-Classification/data/ingredient_indexes.csv) 数据中的成分及其索引来完善它。哪些口味组合可以创造出特定的国家菜肴?
+
+## [课后小测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/26/)
+
+## 复习与自学
+
+虽然本课只是触及了创建食材推荐系统的实用性,但这个机器学习应用领域有很多丰富的例子。阅读更多关于这些系统如何构建的内容:
+
+- https://www.sciencedirect.com/topics/computer-science/recommendation-engine
+- https://www.technologyreview.com/2014/08/25/171547/the-ultimate-challenge-for-recommendation-engines/
+- https://www.technologyreview.com/2015/03/23/168831/everything-is-a-recommendation/
+
+## 作业
+
+[构建一个新的推荐系统](assignment.md)
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业人工翻译。我们对因使用此翻译而引起的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/4-Applied/assignment.md b/translations/zh/4-Classification/4-Applied/assignment.md
new file mode 100644
index 000000000..8e6c7162c
--- /dev/null
+++ b/translations/zh/4-Classification/4-Applied/assignment.md
@@ -0,0 +1,14 @@
+# 构建推荐系统
+
+## 说明
+
+根据你在本课中的练习,你现在知道如何使用 Onnx Runtime 和转换后的 Onnx 模型来构建基于 JavaScript 的 web 应用。尝试使用这些课程中的数据或其他来源的数据(请注明出处)来构建一个新的推荐系统。你可以根据各种个性属性创建一个宠物推荐系统,或者根据一个人的心情创建一个音乐类型推荐系统。发挥你的创造力吧!
+
+## 评分标准
+
+| 标准 | 杰出表现 | 合格表现 | 需要改进 |
+| -------- | ---------------------------------------------------------------------- | ------------------------------------- | --------------------------------- |
+| | 提供了一个 web 应用和笔记本,且两者都记录良好并能运行 | 其中一个缺失或有缺陷 | 两者都缺失或有缺陷 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对因使用此翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/4-Classification/README.md b/translations/zh/4-Classification/README.md
new file mode 100644
index 000000000..18684e666
--- /dev/null
+++ b/translations/zh/4-Classification/README.md
@@ -0,0 +1,30 @@
+# 分类入门
+
+## 区域话题:美味的亚洲和印度美食 🍜
+
+在亚洲和印度,饮食传统极其多样化,而且非常美味!让我们看看有关区域美食的数据,试着了解它们的成分。
+
+
+> 照片由 Lisheng Chang 提供,来自 Unsplash
+
+## 你将学到什么
+
+在本节中,你将基于之前对回归的学习,了解其他分类器,以便更好地理解数据。
+
+> 有一些有用的低代码工具可以帮助你学习如何使用分类模型。试试 [Azure ML 来完成这个任务](https://docs.microsoft.com/learn/modules/create-classification-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## 课程
+
+1. [分类简介](1-Introduction/README.md)
+2. [更多分类器](2-Classifiers-1/README.md)
+3. [其他分类器](3-Classifiers-2/README.md)
+4. [应用机器学习:构建一个 web 应用](4-Applied/README.md)
+
+## 致谢
+
+"分类入门" 由 [Cassie Breviu](https://www.twitter.com/cassiebreviu) 和 [Jen Looper](https://www.twitter.com/jenlooper) 用 ♥️ 编写
+
+美味的美食数据集来源于 [Kaggle](https://www.kaggle.com/hoandan/asian-and-indian-cuisines)。
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对因使用此翻译而引起的任何误解或误释承担责任。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/1-Visualize/README.md b/translations/zh/5-Clustering/1-Visualize/README.md
new file mode 100644
index 000000000..1154465a2
--- /dev/null
+++ b/translations/zh/5-Clustering/1-Visualize/README.md
@@ -0,0 +1,221 @@
+# 聚类简介
+
+聚类是一种[无监督学习](https://wikipedia.org/wiki/Unsupervised_learning)方法,假设数据集是未标记的,或者其输入未与预定义的输出匹配。它使用各种算法对未标记的数据进行分类,并根据数据中识别出的模式提供分组。
+
+[](https://youtu.be/ty2advRiWJM "PSquare的No One Like You")
+
+> 🎥 点击上面的图片观看视频。在学习聚类机器学习的同时,享受一些尼日利亚舞厅音乐——这是PSquare在2014年发布的一首备受好评的歌曲。
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/27/)
+### 简介
+
+[聚类](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124)对于数据探索非常有用。让我们看看它是否可以帮助发现尼日利亚观众消费音乐的趋势和模式。
+
+✅ 花一分钟时间思考聚类的用途。在现实生活中,每当你有一堆洗好的衣服需要分类到家庭成员的衣物中时,就会发生聚类🧦👕👖🩲。在数据科学中,聚类发生在尝试分析用户的偏好或确定任何未标记数据集的特征时。聚类在某种程度上帮助我们理解混乱,比如一个袜子抽屉。
+
+[](https://youtu.be/esmzYhuFnds "聚类简介")
+
+> 🎥 点击上面的图片观看视频:MIT的John Guttag介绍聚类
+
+在专业环境中,聚类可以用来确定市场细分,例如确定哪个年龄段购买哪些物品。另一个用途是异常检测,可能用于从信用卡交易数据集中检测欺诈行为。或者你可能会使用聚类来确定一批医学扫描中的肿瘤。
+
+✅ 想一分钟,你可能在银行、电子商务或商业环境中如何遇到过聚类。
+
+> 🎓 有趣的是,聚类分析起源于20世纪30年代的人类学和心理学领域。你能想象它可能是如何使用的吗?
+
+或者,你可以用它来对搜索结果进行分组——例如购物链接、图片或评论。当你有一个大型数据集需要缩小并进行更细粒度的分析时,聚类是非常有用的,因此这种技术可以在构建其他模型之前了解数据。
+
+✅ 一旦你的数据被组织成聚类,你可以为其分配一个聚类ID,这种技术在保护数据集隐私时非常有用;你可以通过其聚类ID而不是更具揭示性的可识别数据来引用数据点。你能想到其他为什么你会用聚类ID而不是聚类的其他元素来识别它的原因吗?
+
+在这个[学习模块](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-77952-leestott)中深入了解聚类技术
+## 聚类入门
+
+[Scikit-learn提供了大量](https://scikit-learn.org/stable/modules/clustering.html)的聚类方法。你选择的类型将取决于你的用例。根据文档,每种方法都有不同的优点。以下是Scikit-learn支持的方法及其适用用例的简化表:
+
+| 方法名称 | 用例 |
+| :--------------------------- | :--------------------------------------------------------------------- |
+| K-Means | 通用,归纳式 |
+| Affinity propagation | 许多,不均匀的聚类,归纳式 |
+| Mean-shift | 许多,不均匀的聚类,归纳式 |
+| Spectral clustering | 少数,均匀的聚类,传导式 |
+| Ward hierarchical clustering | 许多,受约束的聚类,传导式 |
+| Agglomerative clustering | 许多,受约束的,非欧几里得距离,传导式 |
+| DBSCAN | 非平面几何,不均匀的聚类,传导式 |
+| OPTICS | 非平面几何,不均匀的聚类,密度可变,传导式 |
+| Gaussian mixtures | 平面几何,归纳式 |
+| BIRCH | 带有离群值的大型数据集,归纳式 |
+
+> 🎓 我们如何创建聚类与我们如何将数据点聚集成组有很大关系。让我们解开一些词汇:
+>
+> 🎓 ['传导式' vs. '归纳式'](https://wikipedia.org/wiki/Transduction_(machine_learning))
+>
+> 传导推理是从观察到的训练案例中推导出来的,这些案例映射到特定的测试案例。归纳推理是从训练案例中推导出来的,这些案例映射到一般规则,然后才应用于测试案例。
+>
+> 举个例子:假设你有一个部分标记的数据集。有些东西是'唱片',有些是'CD',有些是空白的。你的任务是为空白部分提供标签。如果你选择归纳方法,你会训练一个模型寻找'唱片'和'CD',并将这些标签应用于未标记的数据。这种方法在分类实际上是'磁带'的东西时会遇到麻烦。另一方面,传导方法更有效地处理这种未知数据,因为它会将相似的项目分组,然后将标签应用于一个组。在这种情况下,聚类可能反映'圆形音乐物品'和'方形音乐物品'。
+>
+> 🎓 ['非平面' vs. '平面'几何](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
+>
+> 源自数学术语,非平面与平面几何指的是通过'平面'([欧几里得](https://wikipedia.org/wiki/Euclidean_geometry))或'非平面'(非欧几里得)几何方法测量点之间的距离。
+>
+>'平面'在此上下文中指的是欧几里得几何(其中的一部分被教授为'平面'几何),而非平面指的是非欧几里得几何。几何与机器学习有什么关系?好吧,作为两个根植于数学的领域,必须有一种常见的方法来测量聚类中点之间的距离,这可以通过'平面'或'非平面'方式完成,具体取决于数据的性质。[欧几里得距离](https://wikipedia.org/wiki/Euclidean_distance)被测量为两点之间线段的长度。[非欧几里得距离](https://wikipedia.org/wiki/Non-Euclidean_geometry)沿曲线测量。如果你的数据在可视化时似乎不存在于平面上,你可能需要使用专门的算法来处理它。
+>
+
+> 信息图由[Dasani Madipalli](https://twitter.com/dasani_decoded)制作
+>
+> 🎓 ['距离'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
+>
+> 聚类由其距离矩阵定义,例如点之间的距离。这种距离可以通过几种方式测量。欧几里得聚类由点值的平均值定义,并包含一个'质心'或中心点。因此距离通过到质心的距离来测量。非欧几里得距离指的是'聚类中心点',即最接近其他点的点。聚类中心点反过来可以通过各种方式定义。
+>
+> 🎓 ['受约束'](https://wikipedia.org/wiki/Constrained_clustering)
+>
+> [受约束聚类](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf)将'半监督'学习引入这种无监督方法。点之间的关系被标记为'不能链接'或'必须链接',因此在数据集上强加了一些规则。
+>
+> 举个例子:如果一个算法在一批未标记或半标记的数据上自由运行,它产生的聚类可能质量很差。在上面的例子中,聚类可能会将'圆形音乐物品'、'方形音乐物品'、'三角形物品'和'饼干'分组。如果给出一些约束或规则(“项目必须是塑料制成的”,“项目需要能够产生音乐”),这可以帮助'约束'算法做出更好的选择。
+>
+> 🎓 '密度'
+>
+> 被认为是'噪声'的数据被认为是'密集'的。通过检查,每个聚类中点之间的距离可能证明是更密集或更稀疏的,因此需要使用适当的聚类方法来分析这种数据。[这篇文章](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html)展示了使用K-Means聚类与HDBSCAN算法探索具有不均匀聚类密度的噪声数据集的区别。
+
+## 聚类算法
+
+有超过100种聚类算法,它们的使用取决于手头数据的性质。让我们讨论一些主要的:
+
+- **层次聚类**。如果一个对象是通过其与附近对象的接近程度来分类的,而不是与远离的对象分类,则聚类是基于其成员与其他对象的距离形成的。Scikit-learn的凝聚聚类是层次聚类。
+
+ 
+ > 信息图由[Dasani Madipalli](https://twitter.com/dasani_decoded)制作
+
+- **质心聚类**。这种流行的算法需要选择'k',即要形成的聚类数量,然后算法确定聚类的中心点并围绕该点收集数据。[K-means聚类](https://wikipedia.org/wiki/K-means_clustering)是质心聚类的流行版本。中心由最近的均值确定,因此得名。聚类的平方距离被最小化。
+
+ 
+ > 信息图由[Dasani Madipalli](https://twitter.com/dasani_decoded)制作
+
+- **基于分布的聚类**。基于统计建模,基于分布的聚类中心在确定数据点属于某个聚类的概率,并相应地分配它。高斯混合方法属于这种类型。
+
+- **基于密度的聚类**。数据点根据其密度或围绕彼此的分组被分配到聚类。远离组的数据点被视为离群点或噪声。DBSCAN、Mean-shift和OPTICS属于这种类型的聚类。
+
+- **基于网格的聚类**。对于多维数据集,创建一个网格并将数据分配到网格的单元中,从而创建聚类。
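+
+下面是一个简单的示意,展示其中几类算法在 Scikit-learn 中对应的估算器如何创建(仅作说明,未在数据上拟合):
+
+```python
+from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
+
+kmeans = KMeans(n_clusters=3)                   # 质心聚类
+agglo = AgglomerativeClustering(n_clusters=3)   # 层次(凝聚)聚类
+dbscan = DBSCAN(eps=0.5, min_samples=5)         # 基于密度的聚类
+```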
+
+## 练习 - 聚类你的数据
+
+聚类作为一种技术大大受益于适当的可视化,所以让我们开始可视化我们的音乐数据。这个练习将帮助我们决定哪种聚类方法最有效地用于这种数据的性质。
+
+1. 打开此文件夹中的[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/notebook.ipynb)文件。
+
+1. 导入`Seaborn`包以获得良好的数据可视化效果。
+
+ ```python
+ !pip install seaborn
+ ```
+
+1. 导入 [_nigerian-songs.csv_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/data/nigerian-songs.csv) 中的歌曲数据,加载一个包含歌曲相关数据的数据框。通过导入这些库并读入数据,为探索这些数据做好准备:
+
+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ df = pd.read_csv("../data/nigerian-songs.csv")
+ df.head()
+ ```
+
+ 检查前几行数据:
+
+ | | name | album | artist | artist_top_genre | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | --- | ------------------------ | ---------------------------- | ------------------- | ---------------- | ------------ | ------ | ---------- | ------------ | ------------ | ------ | ---------------- | -------- | -------- | ----------- | ------- | -------------- |
+ | 0 | Sparky | Mandy & The Jungle | Cruel Santino | alternative r&b | 2019 | 144000 | 48 | 0.666 | 0.851 | 0.42 | 0.534 | 0.11 | -6.699 | 0.0829 | 133.015 | 5 |
+ | 1 | shuga rush | EVERYTHING YOU HEARD IS TRUE | Odunsi (The Engine) | afropop | 2020 | 89488 | 30 | 0.71 | 0.0822 | 0.683 | 0.000169 | 0.101 | -5.64 | 0.36 | 129.993 | 3 |
+ | 2 | LITT! | LITT! | AYLØ | indie r&b | 2018 | 207758 | 40 | 0.836 | 0.272 | 0.564 | 0.000537 | 0.11 | -7.127 | 0.0424 | 130.005 | 4 |
+ | 3 | Confident / Feeling Cool | Enjoy Your Life | Lady Donli | nigerian pop | 2019 | 175135 | 14 | 0.894 | 0.798 | 0.611 | 0.000187 | 0.0964 | -4.961 | 0.113 | 111.087 | 4 |
+ | 4 | wanted you | rare. | Odunsi (The Engine) | afropop | 2018 | 152049 | 25 | 0.702 | 0.116 | 0.833 | 0.91 | 0.348 | -6.044 | 0.0447 | 105.115 | 4 |
+
+1. 获取有关数据框架的信息,调用`info()`:
+
+ ```python
+ df.info()
+ ```
+
+ 输出如下所示:
+
+ ```output
+
+ RangeIndex: 530 entries, 0 to 529
+ Data columns (total 16 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 name 530 non-null object
+ 1 album 530 non-null object
+ 2 artist 530 non-null object
+ 3 artist_top_genre 530 non-null object
+ 4 release_date 530 non-null int64
+ 5 length 530 non-null int64
+ 6 popularity 530 non-null int64
+ 7 danceability 530 non-null float64
+ 8 acousticness 530 non-null float64
+ 9 energy 530 non-null float64
+ 10 instrumentalness 530 non-null float64
+ 11 liveness 530 non-null float64
+ 12 loudness 530 non-null float64
+ 13 speechiness 530 non-null float64
+ 14 tempo 530 non-null float64
+ 15 time_signature 530 non-null int64
+ dtypes: float64(8), int64(4), object(4)
+ memory usage: 66.4+ KB
+ ```
+
+1. 通过调用`isnull()`并验证总和为0来仔细检查是否有空值:
+
+ ```python
+ df.isnull().sum()
+ ```
+
+ 看起来不错:
+
+ ```output
+ name 0
+ album 0
+ artist 0
+ artist_top_genre 0
+ release_date 0
+ length 0
+ popularity 0
+ danceability 0
+ acousticness 0
+ energy 0
+ instrumentalness 0
+ liveness 0
+ loudness 0
+ speechiness 0
+ tempo 0
+ time_signature 0
+ dtype: int64
+ ```
+
+1. 描述数据:
+
+ ```python
+ df.describe()
+ ```
+
+ | | release_date | length | popularity | danceability | acousticness | energy | instrumentalness | liveness | loudness | speechiness | tempo | time_signature |
+ | ----- | ------------ | ----------- | ---------- | ------------ | ------------ | -------- | ---------------- | -------- | --------- | ----------- | ---------- | -------------- |
+ | count | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 | 530 |
+ | mean | 2015.390566 | 222298.1698 | 17.507547 | 0.741619 | 0.265412 | 0.760623 | 0.016305 | 0.147308 | -4.953011 | 0.130748 | 116.487864 | 3.986792 |
+ | std | 3.131688 | 39696.82226 | 18.992212 | 0.117522 | 0.208342 | 0.148533 | 0.090321 | 0.123588 | 2.464186 | 0.092939 | 23.518601 | 0.333701 |
+ | min | 1998 | 89488 | 0 | 0.255 | 0.000665 | 0.111 | 0 | 0.0283 | -19.362 | 0.0278 | 61.695 | 3 |
+ | 25% | 2014 | 199305 | 0 | 0.681 | 0.089525 | 0.669 | 0 | 0.07565 | -6.29875 | 0.0591 | 102.96125 | 4 |
+ | 50% | 2016 | 218509 | 13 | 0.761 | 0.2205 | 0.7845 | 0.000004 | 0.1035 | -4.5585 | 0.09795 | 112.7145 | 4 |
+ | 75% | 2017 | 242098.5 | 31 | 0.8295 | 0.403 | 0.87575 | 0.000234 | 0.164 | -3.331 | 0.177 | 125.03925 | 4 |
+    | max   | 2020 | 511738 | 73 | 0.966 | 0.954 | 0.995 | 0.91 | 0.811 | 0.582 | 0.514 | 206.007 | 5 |
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/28/)
+
+## 复习与自学
+
+在应用聚类算法之前,正如我们所学,了解数据集的性质是一个好主意。可以在[这里](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)阅读更多相关内容。
+
+[这篇有用的文章](https://www.freecodecamp.org/news/8-clustering-algorithms-in-machine-learning-that-all-data-scientists-should-know/)会带你了解在不同数据形状下,各种聚类算法的不同表现方式。
+
+## 作业
+
+[研究其他聚类的可视化方式](assignment.md)
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/1-Visualize/assignment.md b/translations/zh/5-Clustering/1-Visualize/assignment.md
new file mode 100644
index 000000000..e4050a7d4
--- /dev/null
+++ b/translations/zh/5-Clustering/1-Visualize/assignment.md
@@ -0,0 +1,14 @@
+# 研究其他聚类的可视化方法
+
+## 指导说明
+
+在本课中,你已经学习了一些可视化技术,以便为聚类做好数据绘图的准备。特别是散点图对于寻找对象组非常有用。研究不同的方法和不同的库来创建散点图,并在笔记本中记录你的工作。你可以使用本课的数据、其他课程的数据或你自己找到的数据(不过请在笔记本中注明其来源)。使用散点图绘制一些数据,并解释你发现了什么。
+
+## 评分标准
+
+| 标准 | 模范 | 适当 | 需要改进 |
+| -------- | ------------------------------------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------- |
+| | 提交一个包含五个有详细记录的散点图的笔记本 | 提交一个包含少于五个散点图且记录不太详细的笔记本 | 提交一个不完整的笔记本 |
+
+**免责声明**:
+本文档是使用基于机器的AI翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/1-Visualize/solution/Julia/README.md b/translations/zh/5-Clustering/1-Visualize/solution/Julia/README.md
new file mode 100644
index 000000000..167b51bf3
--- /dev/null
+++ b/translations/zh/5-Clustering/1-Visualize/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文件使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业的人类翻译。我们不对因使用本翻译而引起的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/2-K-Means/README.md b/translations/zh/5-Clustering/2-K-Means/README.md
new file mode 100644
index 000000000..ff1ab3f86
--- /dev/null
+++ b/translations/zh/5-Clustering/2-K-Means/README.md
@@ -0,0 +1,250 @@
+# K-Means 聚类
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)
+
+在本课中,你将学习如何使用 Scikit-learn 和之前导入的尼日利亚音乐数据集创建聚类。我们将介绍 K-Means 聚类的基础知识。请记住,正如你在之前的课程中所学,有很多方法可以处理聚类,使用的方法取决于你的数据。我们将尝试 K-Means,因为它是最常见的聚类技术。让我们开始吧!
+
+你将学习的术语:
+
+- Silhouette 评分
+- 肘部法
+- 惯性
+- 方差
+
+## 介绍
+
+[K-Means 聚类](https://wikipedia.org/wiki/K-means_clustering) 是一种源自信号处理领域的方法。它用于通过一系列观察将数据分成 'k' 个聚类。每个观察都试图将一个给定的数据点分配到离它最近的 '均值',即聚类的中心点。
+
+这些聚类可以可视化为 [Voronoi 图](https://wikipedia.org/wiki/Voronoi_diagram),其中包括一个点(或 '种子')及其对应的区域。
+
+
+
+> 信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+K-Means 聚类过程 [分三步执行](https://scikit-learn.org/stable/modules/clustering.html#k-means):
+
+1. 算法通过从数据集中采样选择 k 个中心点。之后,它循环执行以下步骤:
+ 1. 将每个样本分配到最近的质心。
+ 2. 通过取分配给前一个质心的所有样本的平均值来创建新的质心。
+ 3. 然后计算新旧质心之间的差异,并重复直到质心稳定。
+
+使用 K-Means 的一个缺点是你需要确定 'k',即质心的数量。幸运的是,'肘部法'可以帮助估计一个好的起始值。你将在稍后尝试它。
+
+## 先决条件
+
+你将在本课的 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/notebook.ipynb) 文件中工作,该文件包含你在上一课中完成的数据导入和初步清理。
+
+## 练习 - 准备工作
+
+首先,再次查看歌曲数据。
+
+1. 为每列创建一个箱线图,调用 `boxplot()`:
+
+ ```python
+ plt.figure(figsize=(20,20), dpi=200)
+
+ plt.subplot(4,3,1)
+ sns.boxplot(x = 'popularity', data = df)
+
+ plt.subplot(4,3,2)
+ sns.boxplot(x = 'acousticness', data = df)
+
+ plt.subplot(4,3,3)
+ sns.boxplot(x = 'energy', data = df)
+
+ plt.subplot(4,3,4)
+ sns.boxplot(x = 'instrumentalness', data = df)
+
+ plt.subplot(4,3,5)
+ sns.boxplot(x = 'liveness', data = df)
+
+ plt.subplot(4,3,6)
+ sns.boxplot(x = 'loudness', data = df)
+
+ plt.subplot(4,3,7)
+ sns.boxplot(x = 'speechiness', data = df)
+
+ plt.subplot(4,3,8)
+ sns.boxplot(x = 'tempo', data = df)
+
+ plt.subplot(4,3,9)
+ sns.boxplot(x = 'time_signature', data = df)
+
+ plt.subplot(4,3,10)
+ sns.boxplot(x = 'danceability', data = df)
+
+ plt.subplot(4,3,11)
+ sns.boxplot(x = 'length', data = df)
+
+ plt.subplot(4,3,12)
+ sns.boxplot(x = 'release_date', data = df)
+ ```
+
+ 这些数据有点嘈杂:通过观察每列的箱线图,你可以看到异常值。
+
+ 
+
+你可以遍历数据集并删除这些异常值,但这会使数据变得非常少。
+
+1. 现在,选择你将在聚类练习中使用的列。选择具有相似范围的列,并将 `artist_top_genre` 列编码为数值数据:
+
+ ```python
+ from sklearn.preprocessing import LabelEncoder
+ le = LabelEncoder()
+
+ X = df.loc[:, ('artist_top_genre','popularity','danceability','acousticness','loudness','energy')]
+
+ y = df['artist_top_genre']
+
+ X['artist_top_genre'] = le.fit_transform(X['artist_top_genre'])
+
+ y = le.transform(y)
+ ```
+
+1. 现在你需要选择目标聚类的数量。你知道从数据集中提取了 3 种歌曲类型,所以我们试试 3 个聚类:
+
+ ```python
+ from sklearn.cluster import KMeans
+
+ nclusters = 3
+ seed = 0
+
+ km = KMeans(n_clusters=nclusters, random_state=seed)
+ km.fit(X)
+
+ # Predict the cluster for each data point
+
+ y_cluster_kmeans = km.predict(X)
+ y_cluster_kmeans
+ ```
+
+你会看到一个数组,打印出每行数据框的预测聚类(0、1 或 2)。
+
+1. 使用此数组计算 'silhouette score':
+
+ ```python
+ from sklearn import metrics
+ score = metrics.silhouette_score(X, y_cluster_kmeans)
+ score
+ ```
+
+## Silhouette 评分
+
+寻找接近 1 的 silhouette 评分。此评分范围从 -1 到 1,如果评分为 1,聚类密集且与其他聚类分离良好。接近 0 的值表示聚类重叠,样本非常接近相邻聚类的决策边界。[(来源)](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam)
+
+我们的评分是 **0.53**,所以在中间。这表明我们的数据不太适合这种聚类,但让我们继续。
+
+### 练习 - 构建模型
+
+1. 导入 `KMeans` 并开始聚类过程。
+
+ ```python
+ from sklearn.cluster import KMeans
+ wcss = []
+
+ for i in range(1, 11):
+ kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
+ kmeans.fit(X)
+ wcss.append(kmeans.inertia_)
+
+ ```
+
+ 这里有几个部分值得解释。
+
+ > 🎓 range: 这些是聚类过程的迭代次数
+
+ > 🎓 random_state: "确定质心初始化的随机数生成。" [来源](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)
+
+ > 🎓 WCSS: "组内平方和" 测量聚类内所有点到聚类质心的平方平均距离。 [来源](https://medium.com/@ODSC/unsupervised-learning-evaluating-clusters-bd47eed175ce)
+
+ > 🎓 惯性: K-Means 算法试图选择质心以最小化 '惯性',"衡量聚类内部的一致性。" [来源](https://scikit-learn.org/stable/modules/clustering.html)。该值在每次迭代时附加到 wcss 变量上。
+
+    > 🎓 k-means++: 在 [Scikit-learn](https://scikit-learn.org/stable/modules/clustering.html#k-means) 中,你可以使用 'k-means++' 优化,它"将质心初始化为(通常)彼此相距较远,从而可能比随机初始化产生更好的结果"。
+
+### 肘部法
+
+之前,你推测因为你目标是 3 种歌曲类型,所以你应该选择 3 个聚类。但真的是这样吗?
+
+1. 使用 '肘部法' 来确认。
+
+ ```python
+ plt.figure(figsize=(10,5))
+ sns.lineplot(x=range(1, 11), y=wcss, marker='o', color='red')
+ plt.title('Elbow')
+ plt.xlabel('Number of clusters')
+ plt.ylabel('WCSS')
+ plt.show()
+ ```
+
+ 使用你在上一步中构建的 `wcss` 变量创建一个图表,显示肘部的 '弯曲' 位置,指示最佳的聚类数量。也许确实是 3!
+
+ 
+
+## 练习 - 显示聚类
+
+1. 再次尝试此过程,这次设置三个聚类,并将聚类显示为散点图:
+
+ ```python
+ from sklearn.cluster import KMeans
+ kmeans = KMeans(n_clusters = 3)
+ kmeans.fit(X)
+ labels = kmeans.predict(X)
+ plt.scatter(df['popularity'],df['danceability'],c = labels)
+ plt.xlabel('popularity')
+ plt.ylabel('danceability')
+ plt.show()
+ ```
+
+1. 检查模型的准确性:
+
+ ```python
+ labels = kmeans.labels_
+
+ correct_labels = sum(y == labels)
+
+ print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y.size))
+
+ print('Accuracy score: {0:0.2f}'. format(correct_labels/float(y.size)))
+ ```
+
+ 该模型的准确性不是很好,聚类的形状也给你一个提示。
+
+ 
+
+ 这些数据太不平衡,相关性太小,各列值之间的方差太大,无法很好地聚类。事实上,形成的聚类可能受到我们上面定义的三种类型类别的严重影响或偏斜。这是一个学习过程!
+
+ 在 Scikit-learn 的文档中,你可以看到像这样的模型,聚类没有很好地划分,有 '方差' 问题:
+
+ 
+ > 信息图来自 Scikit-learn
+
+## 方差
+
+方差定义为 "平均平方差" [(来源)](https://www.mathsisfun.com/data/standard-deviation.html)。在这个聚类问题的背景下,它指的是我们数据集的数字倾向于从均值中偏离太多。
+
+✅ 这是一个很好的时机来思考你可以用哪些方法来解决这个问题。稍微调整数据?使用不同的列?使用不同的算法?提示:尝试[缩放数据](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/)以标准化它并测试其他列。
+
+> 试试这个 '[方差计算器](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' 来更好地理解这个概念。
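+
+作为上面提示的补充,下面是一个缩放数据的简单草图(假设沿用上文的特征 `X`;这只是示意,并非唯一做法):
+
+```python
+from sklearn.preprocessing import StandardScaler
+from sklearn.cluster import KMeans
+
+# 先把各列缩放到相近的尺度,再进行聚类
+X_scaled = StandardScaler().fit_transform(X)
+kmeans = KMeans(n_clusters=3, random_state=0).fit(X_scaled)
+print(kmeans.inertia_)
+```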
+
+---
+
+## 🚀挑战
+
+花一些时间在这个 notebook 上,调整参数。你能通过进一步清理数据(例如删除异常值)来提高模型的准确性吗?你可以使用权重来给某些数据样本更多的权重。你还能做些什么来创建更好的聚类?
+
+提示:尝试缩放你的数据。notebook 中有注释代码,添加标准缩放以使数据列在范围上更相似。你会发现,虽然 silhouette 评分下降了,但肘部图的 '弯曲' 更加平滑。这是因为让数据不缩放会使方差较小的数据权重更大。阅读更多关于这个问题的信息 [这里](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226)。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)
+
+## 回顾与自学
+
+看看一个 K-Means 模拟器 [比如这个](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/)。你可以使用这个工具来可视化样本数据点并确定其质心。你可以编辑数据的随机性、聚类数量和质心数量。这是否有助于你了解数据如何分组?
+
+另外,看看斯坦福大学的 [K-Means 讲义](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html)。
+
+## 作业
+
+[尝试不同的聚类方法](assignment.md)
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/2-K-Means/assignment.md b/translations/zh/5-Clustering/2-K-Means/assignment.md
new file mode 100644
index 000000000..6e8701476
--- /dev/null
+++ b/translations/zh/5-Clustering/2-K-Means/assignment.md
@@ -0,0 +1,14 @@
+# 尝试不同的聚类方法
+
+## 说明
+
+在本课中,你学习了K-Means聚类。有时K-Means并不适合你的数据。请创建一个笔记本,使用这些课程中的数据或其他来源的数据(请注明来源),并展示一种不同于K-Means的聚类方法。你学到了什么?
+
+## 评分标准
+
+| 标准 | 杰出表现 | 合格表现 | 需要改进 |
+| -------- | ------------------------------------------------------------------ | --------------------------------------------------------------- | ---------------------------- |
+| | 提供了一个有详细文档记录的聚类模型的笔记本 | 提供了一个没有良好文档记录和/或不完整的笔记本 | 提交的作品不完整 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对使用本翻译所产生的任何误解或误释承担责任。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/2-K-Means/solution/Julia/README.md b/translations/zh/5-Clustering/2-K-Means/solution/Julia/README.md
new file mode 100644
index 000000000..755dd134d
--- /dev/null
+++ b/translations/zh/5-Clustering/2-K-Means/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/5-Clustering/README.md b/translations/zh/5-Clustering/README.md
new file mode 100644
index 000000000..91e31f126
--- /dev/null
+++ b/translations/zh/5-Clustering/README.md
@@ -0,0 +1,31 @@
+# 机器学习中的聚类模型
+
+聚类是一种机器学习任务,旨在寻找相似的对象并将它们分组到称为簇的组中。聚类与机器学习中的其他方法不同,因为它是自动发生的,实际上可以说它是监督学习的反面。
+
+## 地区专题:为尼日利亚观众的音乐品味设计的聚类模型 🎧
+
+尼日利亚的观众有着多样化的音乐品味。使用从 Spotify 抓取的数据(灵感来源于[这篇文章](https://towardsdatascience.com/country-wise-visual-analysis-of-music-taste-using-spotify-api-seaborn-in-python-77f5b749b421)),让我们看看尼日利亚流行的一些音乐。这个数据集包括各种歌曲的"舞蹈性"、"声学性"、响度、"演讲性"、流行度和能量的得分。发现这些数据中的模式将会很有趣!
+
+
+
+> 照片由Marcela Laskoski拍摄,发布于Unsplash
+
+在这一系列课程中,你将发现使用聚类技术分析数据的新方法。当你的数据集缺乏标签时,聚类特别有用。如果数据集有标签,那么你在之前课程中学到的分类技术可能会更有用。但在你想要对未标记的数据进行分组的情况下,聚类是发现模式的好方法。
+
+> 有一些有用的低代码工具可以帮助你学习如何使用聚类模型。试试[Azure ML来完成这个任务](https://docs.microsoft.com/learn/modules/create-clustering-model-azure-machine-learning-designer/?WT.mc_id=academic-77952-leestott)
+
+## 课程
+
+1. [聚类简介](1-Visualize/README.md)
+2. [K-Means聚类](2-K-Means/README.md)
+
+## 致谢
+
+这些课程由 [Jen Looper](https://www.twitter.com/jenlooper) 精心编写,并得到了 [Rishit Dagli](https://twitter.com/rishit_dagli) 和 [Muhammad Sakib Khan Inan](https://twitter.com/Sakibinan) 的有益评审。
+
+[Nigerian Songs](https://www.kaggle.com/sootersaalu/nigerian-songs-spotify)数据集来源于Kaggle,由Spotify抓取。
+
+一些有用的K-Means示例对创建这节课提供了帮助,包括这个[iris探索](https://www.kaggle.com/bburns/iris-exploration-pca-k-means-and-gmm-clustering),这个[入门笔记](https://www.kaggle.com/prashant111/k-means-clustering-with-python),以及这个[假设的NGO示例](https://www.kaggle.com/ankandash/pca-k-means-clustering-hierarchical-clustering)。
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/1-Introduction-to-NLP/README.md b/translations/zh/6-NLP/1-Introduction-to-NLP/README.md
new file mode 100644
index 000000000..25b13fbcd
--- /dev/null
+++ b/translations/zh/6-NLP/1-Introduction-to-NLP/README.md
@@ -0,0 +1,168 @@
+# 自然语言处理简介
+
+本课涵盖了*自然语言处理*(NLP)这一*计算语言学*的子领域的简史和重要概念。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/31/)
+
+## 介绍
+
+NLP是机器学习应用和生产软件中最知名的领域之一。
+
+✅ 你能想到每天使用的软件中可能嵌入了一些NLP吗?你经常使用的文字处理程序或移动应用程序呢?
+
+你将学习:
+
+- **语言的概念**。语言是如何发展的,主要的研究领域是什么。
+- **定义和概念**。你还将学习计算机如何处理文本的定义和概念,包括解析、语法以及识别名词和动词。本课中有一些编码任务,并引入了一些重要概念,你将在接下来的课程中学习如何编码这些概念。
+
+## 计算语言学
+
+计算语言学是一个研究和开发领域,研究计算机如何处理、理解、翻译和与语言交流。自然语言处理(NLP)是一个相关领域,专注于计算机如何处理“自然”或人类语言。
+
+### 示例 - 手机语音输入
+
+如果你曾经使用手机语音输入而不是打字,或者向虚拟助手提问,你的语音会被转换为文本形式,然后进行处理或*解析*。检测到的关键词会被处理成手机或助手能够理解和执行的格式。
+
+
+> 真正的语言理解很难!图片由[Jen Looper](https://twitter.com/jenlooper)提供
+
+### 这种技术是如何实现的?
+
+这是因为有人编写了一个计算机程序来实现这一点。几年前,一些科幻作家预测人们主要会与计算机对话,计算机会总是准确理解他们的意思。遗憾的是,这个问题比许多人想象的要难得多,虽然今天我们对这个问题有了更好的理解,但在实现“完美”的自然语言处理方面仍然面临重大挑战,特别是在理解句子的意义时。这在理解幽默或检测句子中的情感(如讽刺)时尤其困难。
+
+此时,你可能会回想起学校课堂上老师讲解句子语法部分的情景。在一些国家,学生会专门学习语法和语言学,但在许多国家,这些主题是作为学习语言的一部分:在小学学习母语(学习阅读和写作),可能在中学学习第二语言。如果你不擅长区分名词和动词或副词和形容词,也不用担心!
+
+如果你在区分*简单现在时*和*现在进行时*方面有困难,你并不孤单。这对许多人来说是一个挑战,即使是某种语言的母语者。好消息是,计算机非常擅长应用正式规则,你将学习编写代码,能够像人类一样*解析*句子。更大的挑战是理解句子的*意义*和*情感*。
+
+## 前提条件
+
+本课的主要前提条件是能够阅读和理解本课的语言。本课没有数学问题或方程需要解决。虽然原作者用英语写了本课,但它也被翻译成其他语言,所以你可能在阅读翻译版本。有些例子使用了不同的语言(以比较不同语言的语法规则)。这些例子*没有*翻译,但解释性文本是翻译的,所以意思应该是清楚的。
+
+对于编码任务,你将使用Python,例子使用的是Python 3.8。
+
+在本节中,你将需要并使用:
+
+- **Python 3 理解**。编程语言理解Python 3,本课使用输入、循环、文件读取、数组。
+- **Visual Studio Code + 扩展**。我们将使用Visual Studio Code及其Python扩展。你也可以使用你喜欢的Python IDE。
+- **TextBlob**。[TextBlob](https://github.com/sloria/TextBlob)是一个简化的Python文本处理库。按照TextBlob网站上的说明将其安装到你的系统中(同时安装语料库,如下所示):
+
+ ```bash
+ pip install -U textblob
+ python -m textblob.download_corpora
+ ```
+
+> 💡 提示:你可以直接在VS Code环境中运行Python。查看[文档](https://code.visualstudio.com/docs/languages/python?WT.mc_id=academic-77952-leestott)以获取更多信息。
+
+## 与机器对话
+
+让计算机理解人类语言的历史可以追溯到几十年前,最早考虑自然语言处理的科学家之一是*阿兰·图灵*。
+
+### '图灵测试'
+
+当图灵在20世纪50年代研究*人工智能*时,他考虑是否可以给人类和计算机(通过打字通信)进行一个对话测试,让人类在对话中无法确定他们是在与另一个人还是计算机对话。
+
+如果在一定长度的对话后,人类无法确定回答是否来自计算机,那么是否可以说计算机在*思考*?
+
+### 灵感来源 - '模仿游戏'
+
+这个想法来自一个叫做*模仿游戏*的聚会游戏,审问者独自在一个房间里,任务是确定另一个房间里的两个人分别是男性和女性。审问者可以发送纸条,并且必须尝试提出问题,通过书面回答来揭示神秘人物的性别。当然,另一个房间里的玩家试图通过回答问题来误导或困惑审问者,同时给出看似诚实的回答。
+
+### 开发Eliza
+
+在20世纪60年代,一位MIT科学家*约瑟夫·魏岑鲍姆*开发了[*Eliza*](https://wikipedia.org/wiki/ELIZA),一个计算机“治疗师”,会向人类提问并给出理解他们答案的假象。然而,虽然Eliza可以解析句子并识别某些语法结构和关键词,从而给出合理的回答,但不能说它*理解*句子。如果Eliza遇到格式为“**我很** 难过”的句子,它可能会重新排列并替换句子中的单词,形成“你**一直** 难过多久了”的回答。
+
+这给人一种Eliza理解了陈述并在问后续问题的印象,而实际上,它只是改变了时态并添加了一些单词。如果Eliza无法识别出有响应的关键词,它会给出一个随机的回答,这个回答可以适用于许多不同的陈述。例如,如果用户写“**你是** 自行车”,它可能会回答“我**一直**是 自行车多久了?”,而不是一个更合理的回答。
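+
+下面是一个用 Python 写的极简示意(并非原始 ELIZA 的实现),演示这种"改变时态并添加单词"的模式替换:
+
+```python
+import re
+
+def eliza_reply(text):
+    # 把 "I am X" 改写为 "How long have you been X?"
+    match = re.match(r"I am (.*)", text)
+    if match:
+        return f"How long have you been {match.group(1)}?"
+    # 没有匹配到关键词时,退回到一个通用回答
+    return "Please tell me more."
+
+print(eliza_reply("I am sad"))  # How long have you been sad?
+```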
+
+[](https://youtu.be/RMK9AphfLco "与Eliza聊天")
+
+> 🎥 点击上方图片观看关于原始ELIZA程序的视频
+
+> 注意:如果你有ACM账户,可以阅读1966年发表的[Eliza](https://cacm.acm.org/magazines/1966/1/13317-elizaa-computer-program-for-the-study-of-natural-language-communication-between-man-and-machine/abstract)原始描述。或者,可以在[wikipedia](https://wikipedia.org/wiki/ELIZA)上了解Eliza
+
+## 练习 - 编写一个基本的对话机器人
+
+一个对话机器人,如Eliza,是一个引导用户输入并似乎能够理解和智能回应的程序。与Eliza不同,我们的机器人不会有多个规则来让它看起来像是在进行智能对话。相反,我们的机器人只有一个功能,就是通过随机回应来保持对话,这些回应在几乎任何琐碎的对话中都可能有效。
+
+### 计划
+
+构建对话机器人的步骤:
+
+1. 打印指示,告知用户如何与机器人互动
+2. 开始一个循环
+ 1. 接受用户输入
+ 2. 如果用户要求退出,则退出
+ 3. 处理用户输入并确定回应(在本例中,回应是从可能的通用回应列表中随机选择的)
+ 4. 打印回应
+3. 返回第2步循环
+
+### 构建机器人
+
+接下来让我们创建机器人。我们将从定义一些短语开始。
+
+1. 使用以下随机回应在Python中自己创建这个机器人:
+
+ ```python
+ random_responses = ["That is quite interesting, please tell me more.",
+ "I see. Do go on.",
+ "Why do you say that?",
+ "Funny weather we've been having, isn't it?",
+ "Let's change the subject.",
+ "Did you catch the game last night?"]
+ ```
+
+ 这里有一些示例输出供你参考(用户输入在以`>`开头的行上):
+
+ ```output
+ Hello, I am Marvin, the simple robot.
+ You can end this conversation at any time by typing 'bye'
+ After typing each answer, press 'enter'
+ How are you today?
+ > I am good thanks
+ That is quite interesting, please tell me more.
+ > today I went for a walk
+ Did you catch the game last night?
+ > I did, but my team lost
+ Funny weather we've been having, isn't it?
+ > yes but I hope next week is better
+ Let's change the subject.
+ > ok, lets talk about music
+ Why do you say that?
+ > because I like music!
+ Why do you say that?
+ > bye
+ It was nice talking to you, goodbye!
+ ```
+
+ 任务的一个可能解决方案在[这里](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/1-Introduction-to-NLP/solution/bot.py)
+
+ ✅ 停下来思考
+
+ 1. 你认为随机回应会“骗”某人认为机器人真的理解他们吗?
+ 2. 机器人需要哪些功能才能更有效?
+ 3. 如果一个机器人真的能“理解”句子的意义,它是否需要“记住”对话中前面句子的意义?
+
+---
+
+## 🚀挑战
+
+选择上面的一个“停下来思考”元素,尝试在代码中实现它们,或者用伪代码在纸上写出解决方案。
+
+在下一课中,你将学习一些其他解析自然语言和机器学习的方法。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/32/)
+
+## 复习与自学
+
+查看下面的参考资料,作为进一步阅读的机会。
+
+### 参考资料
+
+1. Schubert, Lenhart, "Computational Linguistics", *斯坦福哲学百科全书* (2020年春季版), Edward N. Zalta (编), URL = <https://plato.stanford.edu/archives/spr2020/entries/computational-linguistics/>。
+2. Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
+
+## 作业
+
+[搜索一个机器人](assignment.md)
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议进行专业的人类翻译。我们对使用本翻译所产生的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/1-Introduction-to-NLP/assignment.md b/translations/zh/6-NLP/1-Introduction-to-NLP/assignment.md
new file mode 100644
index 000000000..72a13e013
--- /dev/null
+++ b/translations/zh/6-NLP/1-Introduction-to-NLP/assignment.md
@@ -0,0 +1,14 @@
+# 寻找机器人
+
+## 说明
+
+机器人无处不在。你的任务是:找到一个并采纳它!你可以在网站上、银行应用程序中以及电话中找到它们,例如,当你打电话给金融服务公司咨询或获取账户信息时。分析这个机器人,看看你能否让它混淆。如果你能让机器人混淆,为什么会发生这种情况?写一篇简短的文章描述你的经历。
+
+## 评分标准
+
+| 标准 | 模范 | 合格 | 需要改进 |
+| -------- | ------------------------------------------------------------------------------------------------------------- | -------------------------------------------- | --------------------- |
+| | 写了一整页的文章,解释了假定的机器人架构并概述了你与它的体验 | 文章不完整或研究不充分 | 未提交文章 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于重要信息,建议进行专业的人类翻译。对于因使用本翻译而引起的任何误解或误释,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/2-Tasks/README.md b/translations/zh/6-NLP/2-Tasks/README.md
new file mode 100644
index 000000000..b41527182
--- /dev/null
+++ b/translations/zh/6-NLP/2-Tasks/README.md
@@ -0,0 +1,217 @@
+# 常见的自然语言处理任务和技术
+
+对于大多数*自然语言处理*任务,需要将要处理的文本分解、检查,并将结果存储或与规则和数据集进行交叉引用。这些任务使程序员能够推导出文本中的_意义_或_意图_,或仅仅是术语和单词的_频率_。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/33/)
+
+让我们来了解一些常见的文本处理技术。结合机器学习,这些技术可以帮助你高效地分析大量文本。然而,在将机器学习应用于这些任务之前,我们先来了解一下NLP专家所遇到的问题。
+
+## NLP常见任务
+
+分析文本有不同的方法。你可以执行一些任务,通过这些任务你能够理解文本并得出结论。通常你会按顺序执行这些任务。
+
+### 分词
+
+大多数NLP算法首先要做的可能是将文本分割成标记或单词。虽然这听起来很简单,但考虑到标点符号和不同语言的单词和句子分隔符会使其变得复杂。你可能需要使用各种方法来确定分界线。
+
+
+> 分词一个来自**傲慢与偏见**的句子。信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
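+
+下面是一个简单的示意,用 TextBlob(下文练习会用到的库)对一句话进行分词:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!")
+print(blob.words)  # 标点被去除,只留下单词标记
+```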
+
+### 嵌入
+
+[词嵌入](https://wikipedia.org/wiki/Word_embedding)是一种将文本数据数值化的方法。嵌入的方式是使具有相似意义或经常一起使用的单词聚集在一起。
+
+
+> “我对你的神经非常尊重,它们是我的老朋友。” - 来自**傲慢与偏见**的一句话的词嵌入。信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+✅ 尝试[这个有趣的工具](https://projector.tensorflow.org/)来实验词嵌入。点击一个单词会显示相似单词的聚类:'toy'与'disney'、'lego'、'playstation'和'console'聚在一起。
+
+### 解析和词性标注
+
+每个被分词的单词都可以被标注为词性 - 名词、动词或形容词。句子 `the quick red fox jumped over the lazy brown dog` 可能被词性标注为 fox = 名词, jumped = 动词。
+
+
+
+> 解析一个来自**傲慢与偏见**的句子。信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+解析是识别句子中哪些单词是相关的 - 例如 `the quick red fox jumped` 是一个形容词-名词-动词序列,与 `lazy brown dog` 序列分开。
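+
+下面的草图用 TextBlob 给示例句子做词性标注,输出是 (单词, 词性标签) 对:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+print(blob.tags)  # 例如 ('fox', 'NN') 是名词,('jumped', 'VBD') 是动词
+```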
+
+### 词和短语频率
+
+分析大量文本时,一个有用的过程是建立一个感兴趣的每个单词或短语的词典,并记录其出现频率。短语 `the quick red fox jumped over the lazy brown dog` 中 the 的词频为2。
+
+让我们看一个例子文本,我们数一下单词的频率。鲁德亚德·吉卜林的诗《胜利者》中包含以下诗句:
+
+```output
+What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone.
+```
+
+由于短语频率可以根据需要区分大小写或不区分大小写,短语 `a friend` 的频率为 2,`the` 的频率为 6,`travels` 的频率为 2。
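+
+下面的草图用 TextBlob 统计上面诗句中的词频(`word_counts` 不区分大小写):
+
+```python
+from textblob import TextBlob
+
+verse = """What the moral? Who rides may read.
+When the night is thick and the tracks are blind
+A friend at a pinch is a friend, indeed,
+But a fool to wait for the laggard behind.
+Down to Gehenna or up to the Throne,
+He travels the fastest who travels alone."""
+
+blob = TextBlob(verse)
+print(blob.word_counts['the'])      # 6
+print(blob.word_counts['friend'])   # 2
+print(blob.word_counts['travels'])  # 2
+```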
+
+### N-grams
+
+文本可以被分割成固定长度的单词序列,一个单词(unigram)、两个单词(bigrams)、三个单词(trigrams)或任意数量的单词(n-grams)。
+
+例如 `the quick red fox jumped over the lazy brown dog` 以2的n-gram得分产生以下n-grams:
+
+1. the quick
+2. quick red
+3. red fox
+4. fox jumped
+5. jumped over
+6. over the
+7. the lazy
+8. lazy brown
+9. brown dog
+
+可以将其想象为一个滑动框在句子上。以下是3个单词的n-grams,每个句子中的n-gram用粗体表示:
+
+1. **the quick red** fox jumped over the lazy brown dog
+2. the **quick red fox** jumped over the lazy brown dog
+3. the quick **red fox jumped** over the lazy brown dog
+4. the quick red **fox jumped over** the lazy brown dog
+5. the quick red fox **jumped over the** lazy brown dog
+6. the quick red fox jumped **over the lazy** brown dog
+7. the quick red fox jumped over **the lazy brown** dog
+8. the quick red fox jumped over the **lazy brown dog**
+
+
+
+> N-gram值为3:信息图由 [Jen Looper](https://twitter.com/jenlooper) 提供
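+
+用 TextBlob 可以直接生成 n-grams,如下面的草图所示:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob("the quick red fox jumped over the lazy brown dog")
+for ngram in blob.ngrams(n=3)[:3]:
+    print(ngram)  # ['the', 'quick', 'red'] 等
+```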
+
+### 名词短语提取
+
+在大多数句子中,有一个名词是主语或宾语。在英语中,它通常可以通过前面有'a'、'an'或'the'来识别。通过'提取名词短语'来识别句子的主语或宾语是NLP中试图理解句子意义时的常见任务。
+
+✅ 在句子 "I cannot fix on the hour, or the spot, or the look or the words, which laid the foundation. It is too long ago. I was in the middle before I knew that I had begun." 中,你能识别出名词短语吗?
+
+在句子 `the quick red fox jumped over the lazy brown dog` 中有2个名词短语:**quick red fox** 和 **lazy brown dog**。
+
+### 情感分析
+
+一个句子或文本可以被分析其情感,或者它有多*积极*或*消极*。情感通过*极性*和*客观性/主观性*来衡量。极性从-1.0到1.0(消极到积极)和0.0到1.0(最客观到最主观)。
+
+✅ 你将会学到有不同的方法使用机器学习来确定情感,但一种方法是有一个由人类专家分类为积极或消极的单词和短语列表,并将该模型应用于文本以计算极性得分。你能看到这在某些情况下如何工作,而在其他情况下效果较差吗?
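+
+下面的草图展示了 TextBlob 返回的极性和主观性数值:
+
+```python
+from textblob import TextBlob
+
+print(TextBlob("I love this wonderful day").sentiment)
+# 输出形如 Sentiment(polarity=..., subjectivity=...),极性为正值
+print(TextBlob("I hate this terrible day").sentiment)
+# 极性为负值
+```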
+
+### 词形变化
+
+词形变化使你可以获取一个单词并得到该单词的单数或复数形式。
+
+### 词干提取
+
+*词干*是一个词组的根或词头,例如 *flew*、*flies*、*flying* 的词干是动词 *fly*。
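+
+下面的草图用 TextBlob 的 `Word` 演示单复数变形和词形还原(与词干提取相关的一种方法):
+
+```python
+from textblob import Word
+
+print(Word("fox").pluralize())       # foxes
+print(Word("foxes").singularize())   # fox
+print(Word("flies").lemmatize("v"))  # fly(还原为动词原形)
+```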
+
+对于NLP研究人员,还有一些有用的数据库,尤其是:
+
+### WordNet
+
+[WordNet](https://wordnet.princeton.edu/) 是一个包含许多语言中每个单词的同义词、反义词及其他许多细节的数据库。在构建翻译、拼写检查器或任何类型的语言工具时,它非常有用。
+
+## NLP库
+
+幸运的是,你不需要自己构建所有这些技术,因为有一些优秀的Python库可以使其对非自然语言处理或机器学习专业的开发人员更易于使用。接下来的课程中会有更多这些库的示例,但在这里你会学到一些有用的例子来帮助你完成下一个任务。
+
+### 练习 - 使用 `TextBlob` 库
+
+让我们使用一个名为 TextBlob 的库,它包含用于处理这类任务的有用 API。TextBlob"站在 [NLTK](https://nltk.org) 和 [pattern](https://github.com/clips/pattern) 这两个巨人的肩膀上,并且与两者都能很好地配合"。它的 API 中嵌入了大量机器学习能力。
+
+> 注意:TextBlob 提供了一个有用的[快速入门](https://textblob.readthedocs.io/en/dev/quickstart.html#quickstart)指南,推荐有经验的 Python 开发人员阅读
+
+在尝试识别*名词短语*时,TextBlob 提供了几种提取器选项来查找名词短语。
+
+1. 看一下 `ConllExtractor`
+
+ ```python
+ from textblob import TextBlob
+ from textblob.np_extractors import ConllExtractor
+ # import and create a Conll extractor to use later
+ extractor = ConllExtractor()
+
+ # later when you need a noun phrase extractor:
+ user_input = input("> ")
+ user_input_blob = TextBlob(user_input, np_extractor=extractor) # note non-default extractor specified
+ np = user_input_blob.noun_phrases
+ ```
+
+ > 这里发生了什么? [ConllExtractor](https://textblob.readthedocs.io/en/dev/api_reference.html?highlight=Conll#textblob.en.np_extractors.ConllExtractor) 是“一个使用ConLL-2000训练语料库进行块解析的名词短语提取器。” ConLL-2000 指的是2000年计算自然语言学习会议。每年会议都会举办一个研讨会来解决一个棘手的NLP问题,2000年的问题是名词块解析。模型是在《华尔街日报》上训练的,“第15-18节作为训练数据(211727个标记)和第20节作为测试数据(47377个标记)”。你可以在[这里](https://www.clips.uantwerpen.be/conll2000/chunking/)查看使用的程序和[结果](https://ifarm.nl/erikt/research/np-chunking.html)。
+
+### 挑战 - 使用NLP改进你的机器人
+
+在上一课中,你构建了一个非常简单的问答机器人。现在,你将通过分析输入的情感并打印出相应的回应,使Marvin更具同情心。你还需要识别一个 `noun_phrase` 并询问相关内容。
+
+构建更好的对话机器人的步骤:
+
+1. 打印指示,指导用户如何与机器人互动
+2. 开始循环
+ 1. 接受用户输入
+ 2. 如果用户要求退出,则退出
+ 3. 处理用户输入并确定适当的情感回应
+ 4. 如果在情感中检测到名词短语,将其复数化并要求更多相关输入
+ 5. 打印回应
+3. 返回步骤2
+
+以下是使用TextBlob确定情感的代码片段。注意只有四个情感回应的*梯度*(如果你愿意,可以有更多):
+
+```python
+if user_input_blob.polarity <= -0.5:
+ response = "Oh dear, that sounds bad. "
+elif user_input_blob.polarity <= 0:
+ response = "Hmm, that's not great. "
+elif user_input_blob.polarity <= 0.5:
+ response = "Well, that sounds positive. "
+elif user_input_blob.polarity <= 1:
+ response = "Wow, that sounds great. "
+```
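+
+对于步骤4中的“将名词短语复数化”,下面是一个可能的实现示意(并非唯一做法,示例输入仅作说明):
+
+```python
+from textblob import TextBlob, Word
+from textblob.np_extractors import ConllExtractor
+
+extractor = ConllExtractor()
+blob = TextBlob("I went for a walk and saw a lovely cat", np_extractor=extractor)
+
+response = "Well, that sounds positive. "
+if blob.noun_phrases:
+    # pluralize the last word of the first noun phrase found
+    words = blob.noun_phrases[0].split()
+    words[-1] = Word(words[-1]).pluralize()
+    response += "Can you tell me more about " + " ".join(words) + "?"
+else:
+    response += "Can you tell me more?"
+print(response)
+```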
+
+以下是一些示例输出以供参考(用户输入以>开头的行):
+
+```output
+Hello, I am Marvin, the friendly robot.
+You can end this conversation at any time by typing 'bye'
+After typing each answer, press 'enter'
+How are you today?
+> I am ok
+Well, that sounds positive. Can you tell me more?
+> I went for a walk and saw a lovely cat
+Well, that sounds positive. Can you tell me more about lovely cats?
+> cats are the best. But I also have a cool dog
+Wow, that sounds great. Can you tell me more about cool dogs?
+> I have an old hounddog but he is sick
+Hmm, that's not great. Can you tell me more about old hounddogs?
+> bye
+It was nice talking to you, goodbye!
+```
+
+任务的一个可能解决方案在[这里](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/2-Tasks/solution/bot.py)
+
+✅ 知识检查
+
+1. 你认为同情回应会“欺骗”某人认为机器人真的理解他们吗?
+2. 识别名词短语是否使机器人更“可信”?
+3. 为什么从句子中提取“名词短语”是一件有用的事情?
+
+---
+
+实现之前知识检查中的机器人,并在朋友身上测试。它能欺骗他们吗?你能让你的机器人更“可信”吗?
+
+## 🚀挑战
+
+尝试实现之前知识检查中的一个任务。在朋友身上测试机器人。它能欺骗他们吗?你能让你的机器人更“可信”吗?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/34/)
+
+## 复习与自学
+
+在接下来的几节课中,你将学习更多关于情感分析的内容。研究这种有趣的技术,例如在[KDNuggets](https://www.kdnuggets.com/tag/nlp)上的文章。
+
+## 作业
+
+[让机器人对话](assignment.md)
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/2-Tasks/assignment.md b/translations/zh/6-NLP/2-Tasks/assignment.md
new file mode 100644
index 000000000..4c3686b81
--- /dev/null
+++ b/translations/zh/6-NLP/2-Tasks/assignment.md
@@ -0,0 +1,14 @@
+# 让机器人回应
+
+## 说明
+
+在过去的几节课中,你编写了一个基本的聊天机器人。这个机器人会给出随机的回答,直到你说“再见”。你能让这些回答不那么随机,并在你说特定的话(如“为什么”或“怎么”)时触发特定的回答吗?在扩展你的机器人时,想一想机器学习如何使这类工作变得不那么繁琐。你可以使用 NLTK 或 TextBlob 库来简化你的任务。
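+
+作为起点,下面是一个关键词触发回应的最小示意(关键词与回应内容均为示例,可自行扩展):
+
+```python
+import random
+
+# map trigger keywords to canned responses (both are just examples)
+responses = {
+    "why": ["Why do you ask?", "Why indeed!"],
+    "how": ["How would you do it?", "I was wondering how, too."],
+}
+
+while True:
+    user_input = input("> ").lower()
+    if user_input == "bye":
+        break
+    for keyword, options in responses.items():
+        if keyword in user_input:
+            print(random.choice(options))
+            break
+    else:
+        # no keyword matched this input
+        print("Tell me more.")
+```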
+
+## 评分标准
+
+| 标准 | 杰出 | 合格 | 需要改进 |
+| ------ | --------------------------------------------- | ----------------------------------------------- | ---------------------- |
+| | 提供了一个新的 bot.py 文件并进行了文档记录 | 提供了一个新的机器人文件,但包含错误 | 没有提供文件 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议进行专业人工翻译。我们不对使用此翻译所产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/3-Translation-Sentiment/README.md b/translations/zh/6-NLP/3-Translation-Sentiment/README.md
new file mode 100644
index 000000000..d962d4657
--- /dev/null
+++ b/translations/zh/6-NLP/3-Translation-Sentiment/README.md
@@ -0,0 +1,190 @@
+# 机器学习中的翻译和情感分析
+
+在前面的课程中,你学习了如何使用`TextBlob`构建一个基本的机器人。`TextBlob`是一个在幕后嵌入了机器学习的库,可以执行名词短语提取等基本的自然语言处理任务。计算语言学中的另一个重要挑战,是准确地将一种语言的句子翻译成另一种语言。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/35/)
+
+翻译是一个非常困难的问题,因为有成千上万种语言,每种语言都有非常不同的语法规则。一种方法是将一种语言的正式语法规则(如英语)转换为一种不依赖语言的结构,然后通过转换回另一种语言来进行翻译。这种方法的步骤如下:
+
+1. **识别**。将输入语言中的单词标记为名词、动词等。
+2. **创建翻译**。生成目标语言格式的每个单词的直接翻译。
+
+### 示例句子,从英语到爱尔兰语
+
+在“英语”中,句子 _I feel happy_ 是三个单词,顺序是:
+
+- **主语** (I)
+- **动词** (feel)
+- **形容词** (happy)
+
+然而,在“爱尔兰语”中,同一句子有非常不同的语法结构——像“*happy*”或“*sad*”这样的情感被表达为*降临在你身上*。
+
+英语短语`I feel happy`在爱尔兰语中是`Tá athas orm`。一个*字面*翻译是`Happy is upon me`。
+
+一个将爱尔兰语翻译成英语的人会说`I feel happy`,而不是`Happy is upon me`,因为他们理解句子的意思,即使单词和句子结构不同。
+
+爱尔兰语句子的正式顺序是:
+
+- **动词** (Tá 或 is)
+- **形容词** (athas, 或 happy)
+- **主语** (orm, 或 upon me)
+
+## 翻译
+
+一个简单的翻译程序可能只翻译单词,而忽略句子结构。
+
+✅ 如果你作为成年人学习了第二(或第三或更多)语言,你可能会开始时用母语思考,在脑海中逐字翻译概念到第二语言,然后说出你的翻译。这类似于简单的翻译计算机程序所做的。要达到流利程度,重要的是要超越这个阶段!
+
+简单的翻译会导致糟糕(有时甚至搞笑)的误译:`I feel happy`字面翻译成爱尔兰语是`Mise bhraitheann athas`。这意味着(字面上)`me feel happy`,并不是一个有效的爱尔兰语句子。尽管英语和爱尔兰语是两个紧邻岛屿上使用的语言,它们却有着非常不同的语法结构。
+
+> 你可以观看一些关于爱尔兰语言传统的视频,如[这个](https://www.youtube.com/watch?v=mRIaLSdRMMs)
+
+### 机器学习方法
+
+到目前为止,你已经了解了自然语言处理的形式规则方法。另一种方法是忽略单词的含义,而是_使用机器学习来检测模式_。如果你同时拥有原始语言和目标语言的大量文本(单数称*语料库* corpus,复数称 corpora),这种方法就可以用于翻译。
+
+例如,考虑*傲慢与偏见*的情况,这是简·奥斯汀在1813年写的一本著名的英语小说。如果你查阅这本书的英文版和人类翻译的*法文*版,你可以在其中检测到一种语言中成语式翻译成另一种语言的短语。你很快就会这样做。
+
+例如,当英语短语`I have no money`被逐字翻译成法语时,可能会变成`Je n'ai pas de monnaie`。“Monnaie”是一个棘手的法语“假同源词”,因为“money”和“monnaie”并不是同义词。人类译者更可能给出更好的译法`Je n'ai pas d'argent`,因为它更准确地传达了“你没有钱”的意思(而“monnaie”的意思是“零钱”)。
+
+
+
+> 图片由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+如果一个机器学习模型有足够的人类翻译来建立一个模型,它可以通过识别先前由两种语言的专家人类翻译的文本中的常见模式来提高翻译的准确性。
+
+### 练习 - 翻译
+
+你可以使用`TextBlob`来翻译句子。试试**傲慢与偏见**的著名第一句话:
+
+```python
+from textblob import TextBlob
+
+blob = TextBlob(
+ "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife!"
+)
+print(blob.translate(to="fr"))
+
+```
+
+`TextBlob`的翻译非常好:“C'est une vérité universellement reconnue, qu'un homme célibataire en possession d'une bonne fortune doit avoir besoin d'une femme!”。
+
+可以说,TextBlob的翻译实际上比1932年V. Leconte和Ch. Pressoir的法语翻译更准确:
+
+"C'est une vérité universelle qu'un célibataire pourvu d'une belle fortune doit avoir envie de se marier, et, si peu que l'on sache de son sentiment à cet égard, lorsqu'il arrive dans une nouvelle résidence, cette idée est si bien fixée dans l'esprit de ses voisins qu'ils le considèrent sur-le-champ comme la propriété légitime de l'une ou l'autre de leurs filles."
+
+在这种情况下,由机器学习驱动的翻译比人工翻译做得更好,后者为了“清晰”而不必要地往原作者嘴里塞话。
+
+> 这里发生了什么?为什么TextBlob在翻译方面如此出色?实际上,它使用了Google翻译,这是一个复杂的AI,能够解析数百万的短语,以预测最适合手头任务的字符串。这里没有任何手动操作,你需要互联网连接来使用`blob.translate`.
+
+✅ 再尝试一些其他句子。机器翻译和人工翻译哪个更好?在哪些情况下?
+
+## 情感分析
+
+机器学习可以大显身手的另一个领域是情感分析。一种非机器学习的情感分析方法是识别“积极”和“消极”的单词和短语,然后对一段新文本计算积极、消极和中性词的总值,以确定整体情感。
+
+正如你在 Marvin 任务中可能已经看到的,这种方法很容易被欺骗——句子 `Great, that was a wonderful waste of time, I'm glad we are lost on this dark road` 是一个讽刺的、负面情感的句子,但简单的算法会将“great”、“wonderful”、“glad”检测为正面,将“waste”、“lost”和“dark”检测为负面。整体情感被这些相互冲突的单词所左右。
+
+✅ 停下来想一想我们作为人类如何表达讽刺。语调起了很大的作用。尝试用不同的方式说“好吧,那部电影真棒”,看看你的声音如何传达意义。
+
+### 机器学习方法
+
+机器学习的方法是手动收集负面和正面的文本——推文或电影评论,或者任何人们给出评分和书面意见的内容。然后可以将自然语言处理技术应用于意见和评分,以便出现模式(例如,正面的电影评论往往比负面的电影评论更多地使用“奥斯卡级别”的短语,或正面的餐馆评论更多地说“美食”而不是“恶心”)。
+
+> ⚖️ **示例**:如果你在政治家的办公室工作,有一项新的法律正在辩论中,选民可能会写信给办公室,支持或反对这项新法律。假设你负责阅读这些邮件并将它们分类为支持和反对。如果有很多邮件,你可能会感到不堪重负,试图阅读所有的邮件。如果有一个机器人可以为你阅读所有邮件,理解它们并告诉你每封邮件属于哪一类,那不是很好吗?
+>
+> 一种实现方法是使用机器学习。你会用一部分反对的邮件和一部分支持的邮件来训练模型。模型会倾向于将某些短语和单词与反对方和支持方相关联,但它不会理解任何内容,只是某些单词和模式更有可能出现在反对或支持的邮件中。你可以用一些没有用于训练模型的邮件来测试它,看看它是否得出了与你相同的结论。然后,一旦你对模型的准确性感到满意,你就可以处理未来的邮件,而不必阅读每一封邮件。
+
+✅ 这个过程听起来像你在前面的课程中使用的过程吗?
+
+## 练习 - 情感句子
+
+情感通过一个从-1到1的*极性*来衡量,-1是最负面的情感,1是最正面的情感。情感还通过0到1的评分来衡量客观性(0)和主观性(1)。
+
+再看看简·奥斯汀的*傲慢与偏见*。该文本可在[古腾堡计划](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm)上找到。下面的示例显示了一个短程序,它分析了书中的第一句和最后一句的情感,并显示其情感极性和主观性/客观性评分。
+
+在以下任务中,你应该使用上面介绍的`TextBlob`库来确定`sentiment`(你不必编写自己的情感计算器)。
+
+```python
+from textblob import TextBlob
+
+quote1 = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife."""
+
+quote2 = """Darcy, as well as Elizabeth, really loved them; and they were both ever sensible of the warmest gratitude towards the persons who, by bringing her into Derbyshire, had been the means of uniting them."""
+
+sentiment1 = TextBlob(quote1).sentiment
+sentiment2 = TextBlob(quote2).sentiment
+
+print(quote1 + " has a sentiment of " + str(sentiment1))
+print(quote2 + " has a sentiment of " + str(sentiment2))
+```
+
+你会看到以下输出:
+
+```output
+It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. has a sentiment of Sentiment(polarity=0.20952380952380953, subjectivity=0.27142857142857146)
+
+Darcy, as well as Elizabeth, really loved them; and they were
+ both ever sensible of the warmest gratitude towards the persons
+ who, by bringing her into Derbyshire, had been the means of
+ uniting them. has a sentiment of Sentiment(polarity=0.7, subjectivity=0.8)
+```
+
+## 挑战 - 检查情感极性
+
+你的任务是通过情感极性来确定*傲慢与偏见*是否有更多绝对正面的句子而不是绝对负面的句子。对于这个任务,你可以假设极性评分为1或-1是绝对正面或负面。
+
+**步骤:**
+
+1. 从古腾堡计划下载一本[傲慢与偏见](https://www.gutenberg.org/files/1342/1342-h/1342-h.htm)的副本作为.txt文件。删除文件开头和结尾的元数据,只保留原文。
+2. 在Python中打开文件并将内容提取为字符串。
+3. 使用书字符串创建一个TextBlob。
+4. 在循环中分析书中的每个句子。
+ 1. 如果极性是1或-1,将句子存储在正面或负面的消息数组或列表中。
+5. 最后,分别打印出所有正面句子和负面句子及其数量。
+
+这里有一个示例[解决方案](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/3-Translation-Sentiment/solution/notebook.ipynb)。
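+
+下面是一个最小的示意(假设你已将书保存为 `book.txt`——文件名仅为示例——并已删除 Gutenberg 元数据):
+
+```python
+from textblob import TextBlob
+
+# book.txt: the Project Gutenberg text with header/footer metadata removed
+with open("book.txt", encoding="utf-8") as f:
+    book = TextBlob(f.read())
+
+positive, negative = [], []
+for sentence in book.sentences:
+    polarity = sentence.sentiment.polarity
+    if polarity == 1:
+        positive.append(str(sentence))
+    elif polarity == -1:
+        negative.append(str(sentence))
+
+print("Absolutely positive sentences:", len(positive))
+print("Absolutely negative sentences:", len(negative))
+```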
+
+✅ 知识检查
+
+1. 情感是基于句子中使用的单词,但代码*理解*单词吗?
+2. 你认为情感极性准确吗?换句话说,你*同意*这些评分吗?
+ 1. 特别是,你是否同意以下句子的绝对**正面**极性?
+ * “What an excellent father you have, girls!” said she, when the door was shut.
+ * “Your examination of Mr. Darcy is over, I presume,” said Miss Bingley; “and pray what is the result?” “I am perfectly convinced by it that Mr. Darcy has no defect.
+ * How wonderfully these sort of things occur!
+ * I have the greatest dislike in the world to that sort of thing.
+ * Charlotte is an excellent manager, I dare say.
+ * “This is delightful indeed!
+ * I am so happy!
+ * Your idea of the ponies is delightful.
+ 2. 以下三个句子被评分为绝对正面情感,但仔细阅读,它们并不是正面的句子。为什么情感分析认为它们是正面的句子?
+ * Happy shall I be, when his stay at Netherfield is over!” “I wish I could say anything to comfort you,” replied Elizabeth; “but it is wholly out of my power.
+ * If I could but see you as happy!
+ * Our distress, my dear Lizzy, is very great.
+ 3. 你是否同意以下句子的绝对**负面**极性?
+ - Everybody is disgusted with his pride.
+ - “I should like to know how he behaves among strangers.” “You shall hear then—but prepare yourself for something very dreadful.
+ - The pause was to Elizabeth’s feelings dreadful.
+ - It would be dreadful!
+
+✅ 任何简·奥斯汀的爱好者都会理解,她经常用她的书来批评英国摄政时期社会中更荒谬的方面。伊丽莎白·班内特,《傲慢与偏见》的主角,是一个敏锐的社会观察者(像作者一样),她的语言经常充满了微妙的意味。即使是故事中的爱情对象达西先生也注意到伊丽莎白的戏谑和戏弄的语言使用:“我已经有幸与你相识足够长的时间,知道你偶尔会表达一些并非你真正观点的意见,以此为乐。”
+
+---
+
+## 🚀挑战
+
+你能通过从用户输入中提取其他特征来让Marvin变得更好吗?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/36/)
+
+## 复习与自学
+
+有很多方法可以从文本中提取情感。想想可能利用这种技术的商业应用。想想它可能会出错的地方。阅读更多关于分析情感的复杂企业级系统,如[Azure文本分析](https://docs.microsoft.com/azure/cognitive-services/Text-Analytics/how-tos/text-analytics-how-to-sentiment-analysis?tabs=version-3-1?WT.mc_id=academic-77952-leestott)。测试一些上述的傲慢与偏见的句子,看看它是否能检测到微妙之处。
+
+## 作业
+
+[诗意许可](assignment.md)
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业人工翻译。我们对因使用本翻译而产生的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/3-Translation-Sentiment/assignment.md b/translations/zh/6-NLP/3-Translation-Sentiment/assignment.md
new file mode 100644
index 000000000..fbbf9ce59
--- /dev/null
+++ b/translations/zh/6-NLP/3-Translation-Sentiment/assignment.md
@@ -0,0 +1,14 @@
+# 诗意的许可
+
+## 说明
+
+在[这个笔记本](https://www.kaggle.com/jenlooper/emily-dickinson-word-frequency)中,你可以找到超过500首艾米莉·狄金森的诗,这些诗已经使用Azure文本分析进行了情感分析。使用这个数据集,按照课程中描述的方法进行分析。一首诗的建议情感是否与更复杂的Azure服务的决定相匹配?为什么或为什么不,在你看来?有什么让你感到惊讶的吗?
+
+## 评分标准
+
+| 标准 | 杰出表现 | 合格表现 | 需要改进 |
+| -------- | -------------------------------------------------------------------------- | ------------------------------------------------------- | ------------------------ |
+| | 提供了一个包含作者样本输出的完整分析的笔记本 | 笔记本不完整或未进行分析 | 没有提供笔记本 |
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/3-Translation-Sentiment/solution/Julia/README.md b/translations/zh/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
new file mode 100644
index 000000000..8c5272084
--- /dev/null
+++ b/translations/zh/6-NLP/3-Translation-Sentiment/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的本国语言版本视为权威来源。对于重要信息,建议进行专业的人类翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/3-Translation-Sentiment/solution/R/README.md b/translations/zh/6-NLP/3-Translation-Sentiment/solution/R/README.md
new file mode 100644
index 000000000..353b1d6c7
--- /dev/null
+++ b/translations/zh/6-NLP/3-Translation-Sentiment/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文件使用机器翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/4-Hotel-Reviews-1/README.md b/translations/zh/6-NLP/4-Hotel-Reviews-1/README.md
new file mode 100644
index 000000000..286ee1f55
--- /dev/null
+++ b/translations/zh/6-NLP/4-Hotel-Reviews-1/README.md
@@ -0,0 +1,315 @@
+# 使用酒店评论进行情感分析 - 处理数据
+
+在本节中,你将使用前几节课中学到的技术对一个大型数据集进行一些探索性数据分析。一旦你对各列的实用性有了较好的理解,你将学到:
+
+- 如何删除不必要的列
+- 如何基于现有列计算一些新数据
+- 如何保存结果数据集以便在最终挑战中使用
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/37/)
+
+### 介绍
+
+到目前为止,你已经了解了文本数据与数值数据类型有很大的不同。如果是人类书写或说出的文本,可以通过分析找到模式和频率、情感和意义。本课将带你进入一个真实的数据集,面临一个真实的挑战:**[欧洲515K酒店评论数据](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)**,并包含一个[CC0:公共领域许可](https://creativecommons.org/publicdomain/zero/1.0/)。它是从Booking.com的公共资源中抓取的。数据集的创建者是Jiashen Liu。
+
+### 准备
+
+你将需要:
+
+* 能够使用Python 3运行.ipynb笔记本
+* pandas
+* NLTK,[你应该在本地安装](https://www.nltk.org/install.html)
+* 数据集可在Kaggle上找到[欧洲515K酒店评论数据](https://www.kaggle.com/jiashenliu/515k-hotel-reviews-data-in-europe)。解压后大约230 MB。下载到与这些NLP课程相关的根`/data`文件夹中。
+
+## 探索性数据分析
+
+这个挑战假设你正在使用情感分析和客人评论评分来构建一个酒店推荐机器人。你将使用的数据集包括6个城市中1493家不同酒店的评论。
+
+使用Python、一个酒店评论数据集和NLTK的情感分析,你可以找出:
+
+* 评论中最常用的词和短语是什么?
+* 描述酒店的官方标签与评论评分是否相关(例如,某个酒店的“有小孩的家庭”标签的负面评论是否多于“单人旅行者”的负面评论,这可能表明它更适合单人旅行者?)
+* NLTK情感评分是否与酒店评论者的数值评分一致?
+
+#### 数据集
+
+让我们探索你已下载并保存在本地的数据集。用VS Code或Excel等编辑器打开文件。
+
+数据集的标题如下:
+
+*Hotel_Address, Additional_Number_of_Scoring, Review_Date, Average_Score, Hotel_Name, Reviewer_Nationality, Negative_Review, Review_Total_Negative_Word_Counts, Total_Number_of_Reviews, Positive_Review, Review_Total_Positive_Word_Counts, Total_Number_of_Reviews_Reviewer_Has_Given, Reviewer_Score, Tags, days_since_review, lat, lng*
+
+以下是按便于检查的方式分组的标题:
+##### 酒店列
+
+* `Hotel_Name`, `Hotel_Address`, `lat` (纬度), `lng` (经度)
+  * 使用*lat*和*lng*,你可以用Python绘制一张显示酒店位置的地图(或许用颜色区分负面和正面评论),见下面的示意
+  * `Hotel_Address`对我们来说没有明显的用处,我们可能会用国家名替换它,以便更容易排序和搜索
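+
+一个用 *lat* 和 *lng* 绘制酒店位置的最小示意(假设 `df` 已按本课后文的方式加载;配色方案仅作示意):
+
+```python
+import matplotlib.pyplot as plt
+
+# one point per hotel, colored by its average score
+hotels = df.drop_duplicates(subset=["Hotel_Name"])
+plt.scatter(hotels.lng, hotels.lat, c=hotels.Average_Score, cmap="RdYlGn", s=10)
+plt.colorbar(label="Average_Score")
+plt.xlabel("longitude")
+plt.ylabel("latitude")
+plt.show()
+```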
+
+**酒店元评论列**
+
+* `Average_Score`
+ * 根据数据集创建者,这一列是*酒店的平均评分,根据去年最新评论计算*。这种计算评分的方式似乎有些不寻常,但这是抓取的数据,所以我们暂且接受。
+
+ ✅ 基于数据中的其他列,你能想到另一种计算平均评分的方法吗?
+
+* `Total_Number_of_Reviews`
+ * 该酒店收到的总评论数 - 不清楚(不写代码的话)这是否指的是数据集中的评论数。
+* `Additional_Number_of_Scoring`
+ * 这意味着给出了评论评分,但评论者没有写正面或负面评论
+
+**评论列**
+
+- `Reviewer_Score`
+ - 这是一个最多有一位小数的数值,最小值和最大值在2.5到10之间
+ - 没有解释为什么2.5是最低可能的评分
+- `Negative_Review`
+ - 如果评论者什么都没写,这一栏将显示“**No Negative**”
+ - 注意评论者可能会在负面评论栏写正面评论(例如,“这家酒店没有什么不好的地方”)
+- `Review_Total_Negative_Word_Counts`
+ - 更高的负面词汇数量表示较低的评分(不检查情感性)
+- `Positive_Review`
+ - 如果评论者什么都没写,这一栏将显示“**No Positive**”
+ - 注意评论者可能会在正面评论栏写负面评论(例如,“这家酒店根本没有什么好的地方”)
+- `Review_Total_Positive_Word_Counts`
+ - 更高的正面词汇数量表示较高的评分(不检查情感性)
+- `Review_Date`和`days_since_review`
+ - 可以对评论应用新鲜度或陈旧度的衡量(较旧的评论可能不如较新的评论准确,因为酒店管理变了,或进行了装修,或添加了游泳池等)
+- `Tags`
+ - 这些是评论者可能选择用来描述他们是何种客人的简短描述(例如,单人或家庭),他们住的房间类型,停留时间以及评论提交的方式。
+ - 不幸的是,使用这些标签是有问题的,请查看下面讨论它们实用性的部分
+
+**评论者列**
+
+- `Total_Number_of_Reviews_Reviewer_Has_Given`
+ - 这可能是推荐模型中的一个因素,例如,如果你能确定更多的多产评论者(有数百条评论)更可能给出负面而不是正面的评论。然而,任何特定评论的评论者没有用唯一代码标识,因此无法链接到一组评论。有30位评论者有100条或更多评论,但很难看出这如何有助于推荐模型。
+- `Reviewer_Nationality`
+ - 有些人可能认为某些国籍的人更可能给出正面或负面评论,因为有国家倾向。小心将这种轶事观点构建到你的模型中。这些是国家(有时是种族)刻板印象,每个评论者都是基于他们的经验写评论的个体。可能通过多种镜头过滤,如他们以前的酒店住宿、旅行距离和个人性格。认为他们的国籍是评论评分的原因是难以证明的。
+
+##### 示例
+
+| 平均评分 | 总评论数 | 评论者评分 | 负面评论 | 正面评论 | 标签 |
+| -------------- | ---------------------- | ---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
+| 7.8 | 1945 | 2.5 | 这目前不是一家酒店,而是一个施工现场。我从早上到全天都被不可接受的建筑噪音吓坏了,无法在长途旅行后休息和在房间工作。人们整天都在用凿岩机在相邻房间工作。我要求换房,但没有安静的房间。更糟的是,我被多收了费用。我在晚上退房,因为我有很早的航班,并收到了一张合适的账单。一天后,酒店未经我同意再次收取超出预订价格的费用。这是一个可怕的地方,不要通过预订来惩罚自己。 | 没有任何好地方,远离这里 | 商务旅行,夫妻标准双人房,住了2晚 |
+
+如你所见,这位客人在这家酒店住得并不愉快。这家酒店有7.8的良好平均评分和1945条评论,但这位评论者给了2.5分,并写了115个字描述他们的负面经历。如果他们在Positive_Review栏中什么都没写,你可能会推测没有什么正面的,但他们写了7个警告词。如果我们只是数词而不是词的意义或情感,我们可能会对评论者的意图有偏差的看法。奇怪的是,他们的2.5分令人困惑,因为如果酒店住宿如此糟糕,为什么还给它任何分数?仔细调查数据集,你会发现最低可能的分数是2.5,而不是0。最高可能的分数是10。
+
+##### 标签
+
+如上所述,乍一看,使用`Tags`来分类数据的想法是有道理的。不幸的是,这些标签没有标准化,这意味着在某家酒店中,选项可能是*单人房*、*双床房*和*大床房*,但在下一家酒店中,它们是*豪华单人房*、*经典大床房*和*行政大床房*。这些可能是相同的东西,但有太多的变体,于是选择变成了:
+
+1. 尝试将所有术语更改为单一标准,这非常困难,因为不清楚每种情况下的转换路径是什么(例如,*经典单人房*映射到*单人房*,但*带庭院花园或城市景观的高级大床房*更难映射)
+
+1. 我们可以采取NLP方法,测量某些术语如*单人旅行者*、*商务旅行者*或*带小孩的家庭*在每家酒店中的频率,并将其纳入推荐
+
+标签通常(但不总是)是一个包含5到6个逗号分隔值的字段,对应于*旅行类型*、*客人类型*、*房间类型*、*夜晚数*和*提交评论的设备类型*。然而,因为有些评论者没有填写每个字段(他们可能留一个空白),值并不总是按相同顺序排列。
+
+以*团体类型*为例。在`Tags`列中有1025种不同的可能取值,不幸的是,其中只有一部分指的是团体类型(有些是房间类型等)。如果你只过滤提到“家庭”的那些,结果会包含许多*Family room*(家庭房)类型的结果。如果你把*with*一词包括进来,即统计*Family with*的值,结果会更好:在515,000条结果中,有超过80,000条包含“Family with young children”或“Family with older children”短语。
+
+这意味着标签列对我们来说并非完全无用,但需要一些工作使其有用。
+
+##### 酒店平均评分
+
+数据集中有一些奇怪之处或差异,我无法弄清楚,但在此说明以便你在构建模型时注意到它们。如果你弄清楚了,请在讨论区告诉我们!
+
+数据集有以下列与平均评分和评论数量相关:
+
+1. Hotel_Name
+2. Additional_Number_of_Scoring
+3. Average_Score
+4. Total_Number_of_Reviews
+5. Reviewer_Score
+
+数据集中评论最多的单个酒店是*Britannia International Hotel Canary Wharf*,在515,000条评论中占4789条。但如果我们看该酒店的`Total_Number_of_Reviews`值,它是9086。你可能会推测还有更多只打分而没有写评论的评分,所以也许我们应该加上`Additional_Number_of_Scoring`列的值。该值是2682,加上4789得到7471,仍然比`Total_Number_of_Reviews`少1615。
+
+如果你取`Average_Score`列,你可能推测它是数据集中评论的平均值,但Kaggle的描述是“*根据去年最新评论计算的酒店平均评分*”。这似乎没有那么有用,但我们可以根据数据集中的评论评分计算自己的平均值。以同一家酒店为例,给出的酒店平均评分是7.1,但计算得出的评分(数据集中评论者的平均评分)是6.8。这接近但不是相同的值,我们只能猜测`Additional_Number_of_Scoring`评论中的评分将平均值提高到7.1。不幸的是,没有办法测试或证明这一断言,因此很难使用或信任`Average_Score`、`Additional_Number_of_Scoring`和`Total_Number_of_Reviews`,因为它们基于或引用了我们没有的数据。
+
+更复杂的是,评论第二多的酒店的计算平均评分为8.12,而数据集`Average_Score`是8.1。这是正确的评分巧合还是第一家酒店的差异?
+
+考虑到这些酒店可能是异常值,也许大多数值匹配(但某些原因导致部分不匹配),我们将在下一步编写一个短程序来探索数据集中的值,并确定这些值的正确使用(或不使用)。
+
+> 🚨 注意
+>
+> 使用此数据集时,你将编写代码从文本中计算某些内容,而不必自己阅读或分析文本。这是NLP的本质,解释意义或情感而不需要人类来做。然而,你可能会阅读一些负面评论。我建议你不要这样做,因为你不需要这样做。有些是愚蠢的或无关紧要的负面酒店评论,如“天气不好”,这是酒店或任何人无法控制的。但也有一些负面评论是种族主义、性别歧视或年龄歧视的。这是从公共网站抓取的数据集中不可避免的。某些评论者留下的评论会让你觉得令人反感、不舒服或不安。最好让代码测量情感,而不是自己阅读它们并感到不安。尽管如此,写这种东西的人是少数,但它们依然存在。
+
+## 练习 - 数据探索
+### 加载数据
+
+视觉检查数据已经够多了,现在你将编写一些代码并得到一些答案!本节使用pandas库。你的第一个任务是确保你能加载和读取CSV数据。pandas库有一个快速的CSV加载器,结果放在一个数据框中,如前几课所示。我们加载的CSV有超过50万行,但只有17列。pandas为你提供了许多强大的方法与数据框交互,包括对每一行执行操作的能力。
+
+从这里开始,将有代码片段和一些代码解释以及一些关于结果含义的讨论。使用包含的_notebook.ipynb_进行你的代码编写。
+
+让我们从加载你将使用的数据文件开始:
+
+```python
+# Load the hotel reviews from CSV
+import pandas as pd
+import time
+# importing time so the start and end time can be used to calculate file loading time
+print("Loading data file now, this could take a while depending on file size")
+start = time.time()
+# df is 'DataFrame' - make sure you downloaded the file to the data folder
+df = pd.read_csv('../../data/Hotel_Reviews.csv')
+end = time.time()
+print("Loading took " + str(round(end - start, 2)) + " seconds")
+```
+
+现在数据已加载,我们可以对其进行一些操作。将此代码保留在程序顶部以便下一部分使用。
+
+## 探索数据
+
+在这种情况下,数据已经是*干净的*,这意味着它已经准备好使用,并且没有可能使只期望英文字符的算法出错的其他语言字符。
+
+✅ 你可能需要处理需要一些初步处理才能格式化的数据,然后应用NLP技术,但这次不需要。如果需要,你将如何处理非英文字符?
+
+花点时间确保数据加载后,你可以用代码探索它。非常容易想要专注于`Negative_Review`和`Positive_Review`列。它们充满了你的NLP算法要处理的自然文本。但等等!在你跳入NLP和情感分析之前,你应该按照下面的代码确定数据集中给定的值是否与你用pandas计算的值匹配。
+
+## 数据框操作
+
+本课的第一个任务是编写一些代码检查以下断言是否正确(不更改数据框)。
+
+> 像许多编程任务一样,有几种方法可以完成它,但好的建议是以最简单、最容易理解的方式完成,特别是当你将来回到这段代码时更容易理解。对于数据框,有一个全面的API,通常有一种方法可以高效地完成你想要的操作。
+
+将以下问题视为编码任务,尝试在不查看解决方案的情况下回答它们。
+
+1. 打印出你刚加载的数据框的*形状*(形状是行数和列数)
+2. 计算评论者国籍的频率计数:
+ 1. 列`Reviewer_Nationality`有多少不同的值,它们是什么?
+ 2. 数据集中最常见的评论者国籍是什么(打印国家和评论数)?
+ 3. 下一个最常见的10个国籍及其频率计数是什么?
+3. 每个最常见的10个评论者国籍中评论最多的酒店是什么?
+4. 数据集中每家酒店有多少评论(酒店的频率计数)?
+5. 虽然数据集中每家酒店都有`Average_Score`列,但你也可以计算一个平均评分(获取数据集中每家酒店的所有评论者评分的平均值)。在你的数据框中添加一个新列,标题为`Calc_Average_Score`,包含计算的平均值。
+6. 是否有任何酒店的`Average_Score`和`Calc_Average_Score`(四舍五入到一位小数)相同?
+   1. 尝试编写一个 Python 函数,它接受一个 Series(行)作为参数,比较这两个值,并在值不相等时打印一条消息。然后使用`.apply()`方法处理每一行。
+7. 计算并打印出`Negative_Review`列值为“No Negative”的行数
+8. 计算并打印出`Positive_Review`列值为“No Positive”的行数
+9. 计算并打印出`Positive_Review`列值为“No Positive”**且**`Negative_Review`列值为“No Negative”的行数
+
+### 代码答案
+
+1. 打印出你刚加载的数据框的*形状*(形状是行数和列数)
+
+    ```python
+ print("The shape of the data (rows, cols) is " + str(df.shape))
+ > The shape of the data (rows, cols) is (515738, 17)
+    ```
+
+2. 计算评论者国籍的频率计数:
+
+   1. 列`Reviewer_Nationality`有多少不同的值,它们是什么?
+   2. 数据集中最常见的评论者国籍是什么(打印国家和评论数)?
+
+    ```python
+ # value_counts() creates a Series object that has index and values in this case, the country and the frequency they occur in reviewer nationality
+ nationality_freq = df["Reviewer_Nationality"].value_counts()
+ print("There are " + str(nationality_freq.size) + " different nationalities")
+ # print first and last rows of the Series. Change to nationality_freq.to_string() to print all of the data
+ print(nationality_freq)
+
+ There are 227 different nationalities
+ United Kingdom 245246
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ ...
+ Comoros 1
+ Palau 1
+ Northern Mariana Islands 1
+ Cape Verde 1
+ Guinea 1
+ Name: Reviewer_Nationality, Length: 227, dtype: int64
+    ```
+
+   3. 下一个最常见的10个国籍及其频率计数是什么?
+
+    ```python
+ print("The highest frequency reviewer nationality is " + str(nationality_freq.index[0]).strip() + " with " + str(nationality_freq[0]) + " reviews.")
+ # Notice there is a leading space on the values, strip() removes that for printing
+ # What is the top 10 most common nationalities and their frequencies?
+ print("The next 10 highest frequency reviewer nationalities are:")
+ print(nationality_freq[1:11].to_string())
+
+ The highest frequency reviewer nationality is United Kingdom with 245246 reviews.
+ The next 10 highest frequency reviewer nationalities are:
+ United States of America 35437
+ Australia 21686
+ Ireland 14827
+ United Arab Emirates 10235
+ Saudi Arabia 8951
+ Netherlands 8772
+ Switzerland 8678
+ Germany 7941
+ Canada 7894
+ France 7296
+    ```
+
+3. 每个最常见的10个评论者国籍中评论最多的酒店是什么?
+
+    ```python
+ # What was the most frequently reviewed hotel for the top 10 nationalities
+ # Normally with pandas you will avoid an explicit loop, but wanted to show creating a new dataframe using criteria (don't do this with large amounts of data because it could be very slow)
+ for nat in nationality_freq[:10].index:
+ # First, extract all the rows that match the criteria into a new dataframe
+ nat_df = df[df["Reviewer_Nationality"] == nat]
+ # Now get the hotel freq
+ freq = nat_df["Hotel_Name"].value_counts()
+ print("The most reviewed hotel for " + str(nat).strip() + " was " + str(freq.index[0]) + " with " + str(freq[0]) + " reviews.")
+
+ The most reviewed hotel for United Kingdom was Britannia International Hotel Canary Wharf with 3833 reviews.
+ The most reviewed hotel for United States of America was Hotel Esther a with 423 reviews.
+ The most reviewed hotel for Australia was Park Plaza Westminster Bridge London with 167 reviews.
+ The most reviewed hotel for Ireland was Copthorne Tara Hotel London Kensington with 239 reviews.
+ The most reviewed hotel for United Arab Emirates was Millennium Hotel London Knightsbridge with 129 reviews.
+ The most reviewed hotel for Saudi Arabia was The Cumberland A Guoman Hotel with 142 reviews.
+ The most reviewed hotel for Netherlands was Jaz Amsterdam with 97 reviews.
+ The most reviewed hotel for Switzerland was Hotel Da Vinci with 97 reviews.
+ The most reviewed hotel for Germany was Hotel Da Vinci with 86 reviews.
+ The most reviewed hotel for Canada was St James Court A Taj Hotel London with 61 reviews.
+    ```
+
+4. 数据集中每家酒店有多少评论(酒店的频率计数)?
+
+    ```python
+ # First create a new dataframe based on the old one, removing the uneeded columns
+ hotel_freq_df = df.drop(["Hotel_Address", "Additional_Number_of_Scoring", "Review_Date", "Average_Score", "Reviewer_Nationality", "Negative_Review", "Review_Total_Negative_Word_Counts", "Positive_Review", "Review_Total_Positive_Word_Counts", "Total_Number_of_Reviews_Reviewer_Has_Given", "Reviewer_Score", "Tags", "days_since_review", "lat", "lng"], axis = 1)
+
+ # Group the rows by Hotel_Name, count them and put the result in a new column Total_Reviews_Found
+ hotel_freq_df['Total_Reviews_Found'] = hotel_freq_df.groupby('Hotel_Name').transform('count')
+
+ # Get rid of all the duplicated rows
+ hotel_freq_df = hotel_freq_df.drop_duplicates(subset = ["Hotel_Name"])
+ display(hotel_freq_df)
+    ```
+
+   | Hotel_Name                                 | Total_Number_of_Reviews | Total_Reviews_Found |
+   | :----------------------------------------: | :---------------------: | :-----------------: |
+   | Britannia International Hotel Canary Wharf | 9086                    | 4789                |
+   | Park Plaza Westminster Bridge London       | 12158                   | 4169                |
+   | Copthorne Tara Hotel London Kensington     | 7105                    | 3578                |
+   | ...                                        | ...                     | ...                 |
+   | Mercure Paris Porte d Orleans              | 110                     | 10                  |
+   | Hotel Wagner                               | 135                     | 10                  |
+   | Hotel Gallitzinberg                        | 173                     | 8                   |
+
+   你可能注意到*数据集中实际统计到的*结果与`Total_Number_of_Reviews`中的值不匹配。不清楚数据集中的这个值是代表酒店的总评论数(但并非全部被抓取),还是其他某种计算。由于这种不确定性,模型中不使用`Total_Number_of_Reviews`。
+
+5. 虽然数据集中每家酒店都有`Average_Score`列,你也可以计算一个平均评分(获取数据集中每家酒店所有评论者评分的平均值)。在你的数据框中添加一个标题为`Calc_Average_Score`的新列,包含该计算出的平均值,并打印出`Hotel_Name`、`Average_Score`和`Calc_Average_Score`列。
+
+    ```python
+ # define a function that takes a row and performs some calculation with it
+ def get_difference_review_avg(row):
+ return row["Average_Score"] - row["Calc_Average_Score"]
+
+ # 'mean' is mathematical word for 'average'
+ df['Calc_Average_Score'] = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+
+ # Add a new column with the difference between the two average scores
+ df["Average_Score_Difference"] = df.apply(get_difference_review_avg, axis = 1)
+
+ # Create a df without all the duplicates of Hotel_Name (so only 1 row per hotel)
+ review_scores_df = df.drop_duplicates(subset = ["Hotel_Name"])
+
+ # Sort the dataframe to find the lowest and highest average score difference
+ review_scores_df = review_scores_df.sort_values(by=["Average_Score_Difference"])
+
+ display(review_scores_df[["Average_Score_Difference", "Average_Score", "Calc_Average_Score", "Hotel_Name"]])
+    ```
+
+   你可能还会好奇`Average_Score`的值,以及它为什么有时与计算出的平均评分不同。由于我们无法知道为什么有些值匹配而另一些存在差异,在这种情况下最安全的做法是用我们手头的评论评分自己计算平均值。话虽如此,差异通常非常小,以下是与数据集给出的平均分和计算平均分偏差最大的酒店:
+
+   | Average_Score_Difference | Average_Score | Calc_Average_Score | Hotel_Name                                  |
+   | :----------------------: | :-----------: | :----------------: | ------------------------------------------: |
+   | -0.8                     | 7.7           | 8.5                | Best Western Hotel Astoria                  |
+   | -0.7                     | 8.8           | 9.5                | Hotel Stendhal Place Vend me Paris MGallery |
+   | -0.7                     | 7.5           | 8.2                | Mercure Paris Porte d Orleans               |
+   | -0.7                     | 7.9           | 8.6                | Renaissance Paris Vendome Hotel             |
+   | -0.5                     | 7.0           | 7.5                | Hotel Royal Elys es                         |
+   | ...                      | ...           | ...                | ...                                         |
+   | 0.7                      | 7.5           | 6.8                | Mercure Paris Op ra Faubourg Montmartre     |
+   | 0.8                      | 7.1           | 6.3                | Holiday Inn Paris Montparnasse Pasteur      |
+   | 0.9                      | 6.8           | 5.9                | Villa Eugenie                               |
+   | 0.9                      | 8.6           | 7.7                | MARQUIS Faubourg St Honor Relais Ch teaux   |
+   | 1.3                      | 7.2           | 5.9                | Kube Hotel Ice Bar                          |
+
+   只有1家酒店的评分差异大于1,这意味着我们大概可以忽略这种差异,直接使用计算出的平均评分。
+
+6. 计算并打印出`Negative_Review`列值为“No Negative”的行数
+
+7. 计算并打印出`Positive_Review`列值为“No Positive”的行数
+
+8. 计算并打印出`Positive_Review`列值为“No Positive”**且**`Negative_Review`列值为“No Negative”的行数
+
+    ```python
+ # with lambdas:
+ start = time.time()
+ no_negative_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" else False , axis=1)
+ print("Number of No Negative reviews: " + str(len(no_negative_reviews[no_negative_reviews == True].index)))
+
+ no_positive_reviews = df.apply(lambda x: True if x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of No Positive reviews: " + str(len(no_positive_reviews[no_positive_reviews == True].index)))
+
+ both_no_reviews = df.apply(lambda x: True if x['Negative_Review'] == "No Negative" and x['Positive_Review'] == "No Positive" else False , axis=1)
+ print("Number of both No Negative and No Positive reviews: " + str(len(both_no_reviews[both_no_reviews == True].index)))
+ end = time.time()
+ print("Lambdas took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Lambdas took 9.64 seconds
+    ```
+
+## 另一种方法
+
+另一种不使用 lambda 的计数方法,是用 `sum` 来统计满足条件的行数:
+
+```python
+ # without lambdas (using a mixture of notations to show you can use both)
+ start = time.time()
+ no_negative_reviews = sum(df.Negative_Review == "No Negative")
+ print("Number of No Negative reviews: " + str(no_negative_reviews))
+
+ no_positive_reviews = sum(df["Positive_Review"] == "No Positive")
+ print("Number of No Positive reviews: " + str(no_positive_reviews))
+
+ both_no_reviews = sum((df.Negative_Review == "No Negative") & (df.Positive_Review == "No Positive"))
+ print("Number of both No Negative and No Positive reviews: " + str(both_no_reviews))
+
+ end = time.time()
+ print("Sum took " + str(round(end - start, 2)) + " seconds")
+
+ Number of No Negative reviews: 127890
+ Number of No Positive reviews: 35946
+ Number of both No Negative and No Positive reviews: 127
+ Sum took 0.19 seconds
+```
+
+你可能已经注意到,有127行在`Negative_Review`和`Positive_Review`列中分别同时具有“No Negative”和“No Positive”值。这意味着评论者给了酒店一个数值评分,但拒绝写任何正面或负面评论。幸运的是,这样的行很少(515738行中的127行,即0.02%),所以它大概不会使我们的模型或结果向任何特定方向倾斜;但你可能没有预料到一个评论数据集中会有没有评论的行,所以值得通过探索数据来发现这样的行。
+
+既然你已经探索了数据集,下一课你将过滤数据并添加一些情感分析。
+
+---
+
+## 🚀挑战
+
+本课展示了(正如我们在之前的课程中所见),在对数据执行操作之前,了解数据及其缺陷是多么重要。尤其是基于文本的数据,更需要仔细审查。翻阅各种文本密集的数据集,看看你能否发现可能给模型引入偏见或情感偏斜的地方。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/38/)
+
+## 复习与自学
+
+学习[这个NLP学习路径](https://docs.microsoft.com/learn/paths/explore-natural-language-processing/?WT.mc_id=academic-77952-leestott),了解在构建语音和文本密集型模型时可以尝试的工具。
+
+## 作业
+
+[NLTK](assignment.md)
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/4-Hotel-Reviews-1/assignment.md b/translations/zh/6-NLP/4-Hotel-Reviews-1/assignment.md
new file mode 100644
index 000000000..b8e0ece45
--- /dev/null
+++ b/translations/zh/6-NLP/4-Hotel-Reviews-1/assignment.md
@@ -0,0 +1,8 @@
+# NLTK
+
+## 指南
+
+NLTK 是一个在计算语言学和自然语言处理领域中广为人知的库。请利用这个机会阅读 '[NLTK book](https://www.nltk.org/book/)' 并尝试其中的练习。在这个不计分的作业中,你将更深入地了解这个库。
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md b/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
new file mode 100644
index 000000000..9c8c3c215
--- /dev/null
+++ b/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用基于机器的AI翻译服务翻译的。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用此翻译而产生的任何误解或误释,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/R/README.md b/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
new file mode 100644
index 000000000..a0f7c2ff5
--- /dev/null
+++ b/translations/zh/6-NLP/4-Hotel-Reviews-1/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本地语言版本视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/5-Hotel-Reviews-2/README.md b/translations/zh/6-NLP/5-Hotel-Reviews-2/README.md
new file mode 100644
index 000000000..a4e3347b1
--- /dev/null
+++ b/translations/zh/6-NLP/5-Hotel-Reviews-2/README.md
@@ -0,0 +1,377 @@
+# 酒店评论的情感分析
+
+现在你已经详细探索了数据集,是时候过滤列并使用NLP技术在数据集上获得关于酒店的新见解了。
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/39/)
+
+### 过滤和情感分析操作
+
+正如你可能已经注意到的,数据集存在一些问题。有些列充满了无用的信息,其他一些似乎不正确。如果它们是正确的,也不清楚它们是如何计算的,并且无法通过你自己的计算独立验证答案。
+
+## 练习:更多数据处理
+
+进一步清理数据。添加以后有用的列,改变其他列中的值,并完全删除某些列。
+
+1. 初步列处理
+
+ 1. 删除`lat`和`lng`
+
+ 2. 将`Hotel_Address`的值替换为以下值(如果地址包含城市和国家的名称,将其更改为仅包含城市和国家)。
+
+ 数据集中只有以下城市和国家:
+
+ 阿姆斯特丹,荷兰
+
+ 巴塞罗那,西班牙
+
+ 伦敦,英国
+
+ 米兰,意大利
+
+ 巴黎,法国
+
+ 维也纳,奥地利
+
+ ```python
+ def replace_address(row):
+ if "Netherlands" in row["Hotel_Address"]:
+ return "Amsterdam, Netherlands"
+ elif "Barcelona" in row["Hotel_Address"]:
+ return "Barcelona, Spain"
+ elif "United Kingdom" in row["Hotel_Address"]:
+ return "London, United Kingdom"
+ elif "Milan" in row["Hotel_Address"]:
+ return "Milan, Italy"
+ elif "France" in row["Hotel_Address"]:
+ return "Paris, France"
+ elif "Vienna" in row["Hotel_Address"]:
+ return "Vienna, Austria"
+
+ # Replace all the addresses with a shortened, more useful form
+ df["Hotel_Address"] = df.apply(replace_address, axis = 1)
+ # The sum of the value_counts() should add up to the total number of reviews
+ print(df["Hotel_Address"].value_counts())
+ ```
+
+ 现在你可以查询国家级别的数据:
+
+ ```python
+ display(df.groupby("Hotel_Address").agg({"Hotel_Name": "nunique"}))
+ ```
+
+ | 酒店地址 | 酒店名称 |
+ | :------------------ | :------: |
+ | 阿姆斯特丹,荷兰 | 105 |
+ | 巴塞罗那,西班牙 | 211 |
+ | 伦敦,英国 | 400 |
+ | 米兰,意大利 | 162 |
+ | 巴黎,法国 | 458 |
+ | 维也纳,奥地利 | 158 |
+
+2. 处理酒店元评论列
+
+ 1. 删除`Additional_Number_of_Scoring`
+
+   1. 用数据集中该酒店实际拥有的评论总数替换`Total_Number_of_Reviews`
+
+   1. 用我们自己计算的平均分替换`Average_Score`
+
+ ```python
+ # Drop `Additional_Number_of_Scoring`
+ df.drop(["Additional_Number_of_Scoring"], axis = 1, inplace=True)
+ # Replace `Total_Number_of_Reviews` and `Average_Score` with our own calculated values
+ df.Total_Number_of_Reviews = df.groupby('Hotel_Name').transform('count')
+ df.Average_Score = round(df.groupby('Hotel_Name').Reviewer_Score.transform('mean'), 1)
+ ```
+
+3. 处理评论列
+
+   1. 删除`Review_Total_Negative_Word_Counts`、`Review_Total_Positive_Word_Counts`、`Review_Date`和`days_since_review`
+
+   2. 保持`Reviewer_Score`、`Negative_Review`和`Positive_Review`原样不变
+
+   3. 暂时保留`Tags`
+
+      - 我们将在下一节对标签做一些额外的过滤操作,之后标签列将被删除
+
+4. 处理评论者列
+
+   1. 删除`Total_Number_of_Reviews_Reviewer_Has_Given`
+
+   2. 保留`Reviewer_Nationality`
+
+### 标签列
+
+`Tag`列是有问题的,因为它是一个以文本形式存储在列中的列表。不幸的是,这一列中子部分的顺序和数量并不总是相同。因为数据集中有515,000行、1427家酒店,而且每家酒店提供给评论者选择的选项都略有不同,人类很难识别出值得关注的正确短语。这正是NLP的用武之地:你可以扫描文本,找出最常见的短语并统计它们。
+
+不幸的是,我们感兴趣的不是单个词,而是多词短语(例如*Business trip*)。在如此多的数据(6762646个词)上运行多词短语频率分布算法可能需要非常长的时间;但在没有查看数据之前,这似乎又是必要的开销。这正是探索性数据分析的用处所在——因为你已经看过标签的样本,例如 `[' Business trip ', ' Solo traveler ', ' Single Room ', ' Stayed 5 nights ', ' Submitted from a mobile device ']`,你就可以开始问:是否有可能大大减少必须做的处理?幸运的是,这是可能的——但首先你需要遵循几个步骤来确定感兴趣的标签。
+
+### 过滤标签
+
+记住数据集的目标是添加情感和列,以帮助你选择最佳酒店(为自己或客户要求你制作一个酒店推荐机器人)。你需要问自己这些标签在最终数据集中是否有用。这里有一个解释(如果你出于其他原因需要数据集,不同的标签可能会被保留/排除在选择之外):
+
+1. 旅行类型是相关的,应该保留
+2. 客人群体类型是重要的,应该保留
+3. 客人入住的房间、套房或工作室类型是无关紧要的(所有酒店基本上都有相同的房间)
+4. 提交评论的设备是无关紧要的
+5. 评论者入住的夜晚数量*可能*是相关的,如果你将较长的入住时间与他们更喜欢酒店联系起来,但这有点牵强,可能是无关紧要的
+
+总之,**保留两种标签,删除其他的**。
+
+首先,你不想计算标签,直到它们处于更好的格式,这意味着删除方括号和引号。你可以通过多种方式来做这件事,但你想要最快的方法,因为处理大量数据可能需要很长时间。幸运的是,pandas有一种简单的方法来完成每个步骤。
+
+```Python
+# Remove opening and closing brackets
+df.Tags = df.Tags.str.strip("[']")
+# remove all quotes too
+df.Tags = df.Tags.str.replace(" ', '", ",", regex = False)
+```
+
+每个标签变成类似这样的:`Business trip, Solo traveler, Single Room, Stayed 5 nights, Submitted from a mobile device`.
+
+接下来我们会发现一个问题:有些评论(行)有5个字段,有些3个,有些6个。这是数据集创建方式造成的,很难修复。你想要得到每个短语的频率计数,但短语在每条评论中的顺序不同,计数可能会不准确,导致某家酒店得不到它应得的标签。
+
+相反,你可以利用这种不同的顺序,因为每个标签都是多词短语,同时又以逗号分隔!最简单的方法是创建6个临时列,把每个标签插入到与其在标签中顺序对应的列中。然后你可以将这6列合并成一个大列,并对结果列运行`value_counts()`方法。打印出来,你会看到有2428个唯一标签。这是一个小样本(表格之后附有一个实现示意):
+
+| Tag | Count |
+| ------------------------------ | ------ |
+| Leisure trip | 417778 |
+| Submitted from a mobile device | 307640 |
+| Couple | 252294 |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Solo traveler | 108545 |
+| Stayed 3 nights | 95821 |
+| Business trip | 82939 |
+| Group | 65392 |
+| Family with young children | 61015 |
+| Stayed 4 nights | 47817 |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Family with older children | 26349 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Stayed 5 nights | 20845 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+| 2 rooms | 12393 |
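+
+上表的计数可以用类似下面的示意得到(假设 `df` 的 `Tags` 列已按前文清理,并已将 pandas 导入为 `pd`;列数与变量名仅作示意):
+
+```python
+# split the cleaned Tags string into up to 6 temporary columns,
+# stack them into one long Series, then count the unique values
+tag_columns = df.Tags.str.split(",", n=5, expand=True)
+all_tags = pd.concat([tag_columns[col] for col in tag_columns.columns])
+print(all_tags.dropna().str.strip().value_counts())
+```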
+
+一些常见的标签,如`Submitted from a mobile device`,对我们没有用处,所以在统计短语出现次数之前删除它们可能是明智之举;但这是一个非常快的操作,你也可以保留并忽略它们。
+
+### 删除住宿时长标签
+
+删除这些标签是第一步,它可以略微减少要考虑的标签总数。注意,你并没有把它们从数据集中删除,只是选择不再把它们作为评论数据集中要统计/保留的值。
+
+| Length of stay | Count |
+| ---------------- | ------ |
+| Stayed 1 night | 193645 |
+| Stayed 2 nights | 133937 |
+| Stayed 3 nights | 95821 |
+| Stayed 4 nights | 47817 |
+| Stayed 5 nights | 20845 |
+| Stayed 6 nights | 9776 |
+| Stayed 7 nights | 7399 |
+| Stayed 8 nights | 2502 |
+| Stayed 9 nights | 1293 |
+| ... | ... |
+
+房间、套房、工作室、公寓等的种类繁多。它们的含义大致相同,且与你的目标无关,所以将它们从考虑范围中移除。
+
+| Type of room | Count |
+| ----------------------------- | ----- |
+| Double Room | 35207 |
+| Standard Double Room | 32248 |
+| Superior Double Room | 31393 |
+| Deluxe Double Room | 24823 |
+| Double or Twin Room | 22393 |
+| Standard Double or Twin Room | 17483 |
+| Classic Double Room | 16989 |
+| Superior Double or Twin Room | 13570 |
+
+最后——令人高兴的是(因为几乎不需要什么处理)——你将剩下以下这些*有用的*标签:
+
+| Tag | Count |
+| --------------------------------------------- | ------ |
+| Leisure trip | 417778 |
+| Couple | 252294 |
+| Solo traveler | 108545 |
+| Business trip | 82939 |
+| Group (combined with Travellers with friends) | 67535 |
+| Family with young children | 61015 |
+| Family with older children | 26349 |
+| With a pet | 1405 |
+
+你可以说`Travellers with friends`和`Group`大致是一回事,像上面那样将两者合并是合理的。识别正确标签的代码在[标签笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)中。
+
+最后一步是为每个这样的标签创建新列。然后,对于每一行评论,如果`Tag`列与某个新列匹配,则在该列添加1,否则添加0。最终结果将是一个计数,显示有多少评论者(总体上)选择这家酒店用于例如商务旅行还是休闲旅行,或者是否带宠物入住——这在推荐酒店时是有用的信息。
+
+```python
+# Process the Tags into new columns
+# The file Hotel_Reviews_Tags.py, identifies the most important tags
+# Leisure trip, Couple, Solo traveler, Business trip, Group combined with Travelers with friends,
+# Family with young children, Family with older children, With a pet
+df["Leisure_trip"] = df.Tags.apply(lambda tag: 1 if "Leisure trip" in tag else 0)
+df["Couple"] = df.Tags.apply(lambda tag: 1 if "Couple" in tag else 0)
+df["Solo_traveler"] = df.Tags.apply(lambda tag: 1 if "Solo traveler" in tag else 0)
+df["Business_trip"] = df.Tags.apply(lambda tag: 1 if "Business trip" in tag else 0)
+df["Group"] = df.Tags.apply(lambda tag: 1 if "Group" in tag or "Travelers with friends" in tag else 0)
+df["Family_with_young_children"] = df.Tags.apply(lambda tag: 1 if "Family with young children" in tag else 0)
+df["Family_with_older_children"] = df.Tags.apply(lambda tag: 1 if "Family with older children" in tag else 0)
+df["With_a_pet"] = df.Tags.apply(lambda tag: 1 if "With a pet" in tag else 0)
+
+```
+
+### 保存文件
+
+最后,以新名称保存现在的数据集。
+
+```python
+df.drop(["Review_Total_Negative_Word_Counts", "Review_Total_Positive_Word_Counts", "days_since_review", "Total_Number_of_Reviews_Reviewer_Has_Given"], axis = 1, inplace=True)
+
+# Saving new data file with calculated columns
+print("Saving results to Hotel_Reviews_Filtered.csv")
+df.to_csv(r'../data/Hotel_Reviews_Filtered.csv', index = False)
+```
+
+## 情感分析操作
+
+在这一最后部分,你将对评论列应用情感分析,并将结果保存在数据集中。
+
+## 练习:加载和保存过滤后的数据
+
+请注意,现在你加载的是在上一部分保存的过滤后的数据集,而不是原始数据集。
+
+```python
+import time
+import pandas as pd
+import nltk as nltk
+from nltk.corpus import stopwords
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+nltk.download('vader_lexicon')
+
+# Load the filtered hotel reviews from CSV
+df = pd.read_csv('../../data/Hotel_Reviews_Filtered.csv')
+
+# Your code will be added here
+
+
+# Finally remember to save the hotel reviews with new NLP data added
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r'../data/Hotel_Reviews_NLP.csv', index = False)
+```
+
+### 移除停用词
+
+如果你在负面和正面评论列上运行情感分析,可能需要很长时间。测试在一台强大的测试笔记本电脑上,使用快速CPU,耗时12-14分钟,具体取决于使用的情感库。这是一个(相对)较长的时间,所以值得调查是否可以加快速度。
+
+移除停用词,即不改变句子情感的常见英语词汇,是第一步。通过移除它们,情感分析应该运行得更快,但不会降低准确性(因为停用词不会影响情感,但会减慢分析速度)。
+
+最长的负面评论是395个词,但在移除停用词后是195个词。
+
+移除停用词也是一个快速操作,在测试设备上从515,000行的两个评论列中移除停用词耗时3.3秒。根据你的设备CPU速度、RAM、是否有SSD等因素,这可能会稍微多一点或少一点时间。操作的相对短暂性意味着如果它能改善情感分析时间,那么这是值得做的。
+
+```python
+import time
+import pandas as pd
+from nltk.corpus import stopwords
+
+# Load the hotel reviews from CSV
+df = pd.read_csv("../../data/Hotel_Reviews_Filtered.csv")
+
+# Remove stop words - can be slow for a lot of text!
+# Ryan Han (ryanxjhan on Kaggle) has a great post measuring performance of different stop words removal approaches
+# https://www.kaggle.com/ryanxjhan/fast-stop-words-removal # using the approach that Ryan recommends
+start = time.time()
+cache = set(stopwords.words("english"))
+def remove_stopwords(review):
+ text = " ".join([word for word in review.split() if word not in cache])
+ return text
+
+# Remove the stop words from both columns
+df.Negative_Review = df.Negative_Review.apply(remove_stopwords)
+df.Positive_Review = df.Positive_Review.apply(remove_stopwords)
+end = time.time()
+print("Removing stop words took " + str(round(end - start, 2)) + " seconds")
+```
+
+### 执行情感分析
+
+现在你应该计算负面和正面评论列的情感分析,并将结果存储在两个新列中。情感测试将与同一评论的评论者评分进行比较。例如,如果情感认为负面评论的情感是1(极其正面的情感)而正面评论的情感也是1,但评论者给酒店的评分是最低的,那么要么评论文本与评分不匹配,要么情感分析器无法正确识别情感。你应该预期一些情感评分是完全错误的,通常这可以解释,例如评论可能是极其讽刺的“当然,我喜欢在没有暖气的房间里睡觉”,而情感分析器认为这是正面的情感,即使人类阅读它会知道这是讽刺。
+
+NLTK提供了不同的情感分析器供学习,你可以替换它们,看看情感是否更准确。这里使用的是VADER情感分析。
+
+> Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+```python
+from nltk.sentiment.vader import SentimentIntensityAnalyzer
+
+# Create the vader sentiment analyser (there are others in NLTK you can try too)
+vader_sentiment = SentimentIntensityAnalyzer()
+# Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
+
+# There are 3 possibilities of input for a review:
+# It could be "No Negative", in which case, return 0
+# It could be "No Positive", in which case, return 0
+# It could be a review, in which case calculate the sentiment
+def calc_sentiment(review):
+ if review == "No Negative" or review == "No Positive":
+ return 0
+ return vader_sentiment.polarity_scores(review)["compound"]
+```
+
+在你的程序中,当你准备好计算情感时,可以将其应用于每个评论,如下所示:
+
+```python
+# Add a negative sentiment and positive sentiment column
+print("Calculating sentiment columns for both positive and negative reviews")
+start = time.time()
+df["Negative_Sentiment"] = df.Negative_Review.apply(calc_sentiment)
+df["Positive_Sentiment"] = df.Positive_Review.apply(calc_sentiment)
+end = time.time()
+print("Calculating sentiment took " + str(round(end - start, 2)) + " seconds")
+```
+
+这在我的电脑上大约需要120秒,但在每台电脑上都会有所不同。如果你想打印结果并查看情感是否与评论匹配:
+
+```python
+df = df.sort_values(by=["Negative_Sentiment"], ascending=True)
+print(df[["Negative_Review", "Negative_Sentiment"]])
+df = df.sort_values(by=["Positive_Sentiment"], ascending=True)
+print(df[["Positive_Review", "Positive_Sentiment"]])
+```
+
+在使用文件之前要做的最后一件事是保存它!你还应该考虑重新排序所有新列,以便于使用(对人类来说,这是一种外观上的变化)。
+
+```python
+# Reorder the columns (This is cosmetic, but to make it easier to explore the data later)
+df = df.reindex(["Hotel_Name", "Hotel_Address", "Total_Number_of_Reviews", "Average_Score", "Reviewer_Score", "Negative_Sentiment", "Positive_Sentiment", "Reviewer_Nationality", "Leisure_trip", "Couple", "Solo_traveler", "Business_trip", "Group", "Family_with_young_children", "Family_with_older_children", "With_a_pet", "Negative_Review", "Positive_Review"], axis=1)
+
+print("Saving results to Hotel_Reviews_NLP.csv")
+df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
+```
+
+你应该运行整个[分析笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb)的代码(在你运行[过滤笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)以生成Hotel_Reviews_Filtered.csv文件之后)。
+
+回顾一下,步骤是:
+
+1. 原始数据集文件**Hotel_Reviews.csv**在上一课中通过[探索笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/4-Hotel-Reviews-1/solution/notebook.ipynb)进行了探索
+2. 通过[过滤笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/1-notebook.ipynb)过滤Hotel_Reviews.csv,生成**Hotel_Reviews_Filtered.csv**
+3. 通过[情感分析笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/6-NLP/5-Hotel-Reviews-2/solution/3-notebook.ipynb)处理Hotel_Reviews_Filtered.csv,生成**Hotel_Reviews_NLP.csv**
+4. 在下面的NLP挑战中使用Hotel_Reviews_NLP.csv
+
+### 结论
+
+当你开始时,你有一个包含列和数据的数据集,但并非所有数据都可以验证或使用。你已经探索了数据,过滤了不需要的内容,将标签转换为有用的东西,计算了自己的平均值,添加了一些情感列,并希望学习了一些关于处理自然文本的有趣知识。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/40/)
+
+## 挑战
+
+现在你已经对数据集进行了情感分析,看看你是否可以使用本课程中学到的策略(例如聚类)来确定情感模式。
+
+## 复习与自学
+
+参加[这个学习模块](https://docs.microsoft.com/en-us/learn/modules/classify-user-feedback-with-the-text-analytics-api/?WT.mc_id=academic-77952-leestott)以了解更多并使用不同的工具探索文本中的情感。
+## 作业
+
+[尝试不同的数据集](assignment.md)
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/5-Hotel-Reviews-2/assignment.md b/translations/zh/6-NLP/5-Hotel-Reviews-2/assignment.md
new file mode 100644
index 000000000..98749bff9
--- /dev/null
+++ b/translations/zh/6-NLP/5-Hotel-Reviews-2/assignment.md
@@ -0,0 +1,14 @@
+# 尝试不同的数据集
+
+## 说明
+
+既然你已经了解了如何使用NLTK来为文本分配情感值,现在试试一个不同的数据集。你可能需要对数据进行一些处理,因此请创建一个笔记本并记录你的思考过程。你发现了什么?
+
+## 评分标准
+
+| 标准 | 卓越 | 合格 | 需要改进 |
+| -------- | ----------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ---------------------- |
+| | 提供了一个完整的笔记本和数据集,并且有详细记录的单元格解释了如何分配情感值 | 笔记本缺少良好的解释 | 笔记本有缺陷 |
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md b/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
new file mode 100644
index 000000000..ede0e0c40
--- /dev/null
+++ b/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文件使用基于机器的AI翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/R/README.md b/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
new file mode 100644
index 000000000..237cdb6db
--- /dev/null
+++ b/translations/zh/6-NLP/5-Hotel-Reviews-2/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本地语言版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/README.md b/translations/zh/6-NLP/README.md
new file mode 100644
index 000000000..7d9b60823
--- /dev/null
+++ b/translations/zh/6-NLP/README.md
@@ -0,0 +1,27 @@
+# 自然语言处理入门
+
+自然语言处理 (NLP) 是计算机程序理解人类语言(口语和书面语)的能力,称为自然语言。它是人工智能 (AI) 的一个组成部分。NLP 已经存在了超过 50 年,并且在语言学领域有其根源。整个领域旨在帮助机器理解和处理人类语言。这可以用于执行诸如拼写检查或机器翻译等任务。它在许多领域中有各种实际应用,包括医学研究、搜索引擎和商业智能。
+
+## 地区主题:欧洲语言文学和欧洲浪漫酒店 ❤️
+
+在课程的这一部分,你将了解机器学习的最广泛使用之一:自然语言处理 (NLP)。源自计算语言学,这种人工智能类别通过语音或文本通信在人与机器之间架起桥梁。
+
+在这些课程中,我们将通过构建小型对话机器人来学习 NLP 的基础知识,以了解机器学习如何帮助使这些对话变得越来越“智能”。你将穿越时光,与简·奥斯汀 1813 年出版的经典小说《傲慢与偏见》中的伊丽莎白·班纳特和达西先生聊天。然后,你将通过学习欧洲酒店评论的情感分析来进一步提高你的知识。
+
+
+> 照片由 Elaine Howlin 提供,来自 Unsplash
+
+## 课程
+
+1. [自然语言处理简介](1-Introduction-to-NLP/README.md)
+2. [常见的 NLP 任务和技术](2-Tasks/README.md)
+3. [使用机器学习进行翻译和情感分析](3-Translation-Sentiment/README.md)
+4. [准备你的数据](4-Hotel-Reviews-1/README.md)
+5. [NLTK 进行情感分析](5-Hotel-Reviews-2/README.md)
+
+## 致谢
+
+这些自然语言处理课程由 [Stephen Howell](https://twitter.com/Howell_MSFT) 用 ☕ 编写。
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/6-NLP/data/README.md b/translations/zh/6-NLP/data/README.md
new file mode 100644
index 000000000..7b205831d
--- /dev/null
+++ b/translations/zh/6-NLP/data/README.md
@@ -0,0 +1,4 @@
+下载酒店评论数据到此文件夹。
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们概不负责。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/1-Introduction/README.md b/translations/zh/7-TimeSeries/1-Introduction/README.md
new file mode 100644
index 000000000..cd695349d
--- /dev/null
+++ b/translations/zh/7-TimeSeries/1-Introduction/README.md
@@ -0,0 +1,188 @@
+# 时间序列预测简介
+
+
+
+> 由 [Tomomi Imura](https://www.twitter.com/girlie_mac) 绘制的速写图
+
+在本课及接下来的课程中,你将学习一些关于时间序列预测的知识,这是机器学习科学家工具库中一个有趣且有价值的部分,虽然它不像其他主题那么广为人知。时间序列预测就像一种“水晶球”:基于某个变量(如价格)的过去表现,你可以预测其未来的潜在价值。
+
+[](https://youtu.be/cBojo1hsHiI "时间序列预测简介")
+
+> 🎥 点击上方图片观看关于时间序列预测的视频
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/41/)
+
+这是一个有用且有趣的领域,对商业具有实际价值,因为它直接应用于定价、库存和供应链问题。虽然深度学习技术已经开始用于获得更多见解以更好地预测未来表现,但时间序列预测仍然是一个主要由经典机器学习技术所主导的领域。
+
+> 宾夕法尼亚州立大学的实用时间序列课程可以在[这里](https://online.stat.psu.edu/stat510/lesson/1)找到
+
+## 介绍
+
+假设你维护一系列智能停车计时器,这些计时器提供关于它们在一段时间内使用频率和时长的数据。
+
+> 如果你可以根据计时器的过去表现,预测其未来价值会怎样呢?这可以根据供需法则进行预测。
+
+准确预测何时采取行动以实现目标是一个挑战,可以通过时间序列预测来解决。在寻找停车位的繁忙时段被收取更多费用可能不会让人们高兴,但这肯定是产生收入来清洁街道的一种方法!
+
+让我们探讨一些时间序列算法类型,并开始一个笔记本来清理和准备一些数据。你将分析的数据来自GEFCom2014预测竞赛。它包括2012年至2014年之间的3年每小时电力负载和温度值。根据电力负载和温度的历史模式,你可以预测电力负载的未来值。
+
+在这个例子中,你将学习如何仅使用历史负载数据来预测一个时间步长。开始之前,了解幕后发生的事情是有用的。
+
+## 一些定义
+
+当遇到“时间序列”这个术语时,你需要理解它在几个不同上下文中的使用。
+
+🎓 **时间序列**
+
+在数学中,“时间序列是一系列按时间顺序索引(或列出或绘图)的数据点。最常见的是,时间序列是在连续的等间隔时间点上获取的一系列。”时间序列的一个例子是[道琼斯工业平均指数](https://wikipedia.org/wiki/Time_series)的每日收盘值。时间序列绘图和统计建模的使用在信号处理、天气预报、地震预测和其他事件发生并且数据点可以随时间绘制的领域中经常遇到。
+
+🎓 **时间序列分析**
+
+时间序列分析是对上述时间序列数据的分析。时间序列数据可以采取不同的形式,包括“中断时间序列”,它检测时间序列在中断事件前后的演变模式。所需的时间序列分析类型取决于数据的性质。时间序列数据本身可以采取一系列数字或字符的形式。
+
+要执行的分析使用各种方法,包括频域和时域、线性和非线性等。[了解更多](https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4.htm)关于分析此类数据的多种方法。
+
+🎓 **时间序列预测**
+
+时间序列预测是使用模型根据过去收集的数据展示的模式来预测未来值。虽然可以使用回归模型来探索时间序列数据,并在图上将时间索引作为x变量,但此类数据最好使用特殊类型的模型进行分析。
+
+时间序列数据是一组有序的观察值,不同于可以用线性回归分析的数据,最好用专门类型的模型来分析。其中最常见的一种是ARIMA,它是“自回归积分滑动平均”(AutoRegressive Integrated Moving Average)的缩写。
+
+[ARIMA模型](https://online.stat.psu.edu/stat510/lesson/1/1.1)“将一系列的当前值与过去的值和过去的预测误差相关联。”它们最适合分析时域数据,即随时间排序的数据。
+
+> 有几种类型的ARIMA模型,你可以在[这里](https://people.duke.edu/~rnau/411arim.htm)了解更多,你将在下一课中涉及这些内容。
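+
+作为预览,下面是一个最小的 ARIMA 示意(假设已安装 statsmodels,且 `series` 是一个按时间索引的 pandas Series;下一课才会正式展开):
+
+```python
+from statsmodels.tsa.arima.model import ARIMA
+
+# order=(p, d, q): autoregressive terms, differencing, moving-average terms
+model = ARIMA(series, order=(4, 1, 0))
+fitted = model.fit()
+print(fitted.forecast(steps=1))  # predict one time step ahead
+```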
+
+在下一课中,你将使用[单变量时间序列](https://itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm)构建一个ARIMA模型,该模型关注随时间变化的一个变量。这类数据的一个例子是[这个数据集](https://itl.nist.gov/div898/handbook/pmc/section4/pmc4411.htm),记录了Mauna Loa天文台的每月CO2浓度:
+
+| CO2 | YearMonth | Year | Month |
+| :-----: | :-------: | :---: | :---: |
+| 330.62 | 1975.04 | 1975 | 1 |
+| 331.40 | 1975.13 | 1975 | 2 |
+| 331.87 | 1975.21 | 1975 | 3 |
+| 333.18 | 1975.29 | 1975 | 4 |
+| 333.92 | 1975.38 | 1975 | 5 |
+| 333.43 | 1975.46 | 1975 | 6 |
+| 331.85 | 1975.54 | 1975 | 7 |
+| 330.01 | 1975.63 | 1975 | 8 |
+| 328.51 | 1975.71 | 1975 | 9 |
+| 328.41 | 1975.79 | 1975 | 10 |
+| 329.25 | 1975.88 | 1975 | 11 |
+| 330.97 | 1975.96 | 1975 | 12 |
+
+✅ 识别这个数据集中随时间变化的变量
+
+## 时间序列数据需要考虑的特性
+
+在查看时间序列数据时,你可能会注意到它具有[某些特性](https://online.stat.psu.edu/stat510/lesson/1/1.1),你需要考虑并减轻这些特性以更好地理解其模式。如果你将时间序列数据视为可能提供“信号”的数据,这些特性可以被视为“噪声”。你通常需要通过使用一些统计技术来减少这些“噪声”。
+
+以下是一些你应该了解的概念,以便能够处理时间序列:
+
+🎓 **趋势**
+
+趋势被定义为随时间的可测量的增加和减少。[阅读更多](https://machinelearningmastery.com/time-series-trends-in-python)。在时间序列的上下文中,这涉及如何使用并在必要时从你的时间序列中移除趋势。
+
+🎓 **[季节性](https://machinelearningmastery.com/time-series-seasonality-with-python/)**
+
+季节性被定义为周期性的波动,例如可能影响销售的假日高峰。[看看](https://itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm)不同类型的图表如何显示数据中的季节性。
+
+🎓 **异常值**
+
+异常值是指远离标准数据方差的数据点。
+
+🎓 **长期周期**
+
+独立于季节性,数据可能显示出长期周期,例如持续超过一年的经济衰退。
+
+🎓 **恒定方差**
+
+一些数据随时间推移显示出恒定的波动,例如白天与夜间的能源使用。
+
+🎓 **突然变化**
+
+数据可能显示出需要进一步分析的突然变化。例如,COVID 导致企业突然关闭,使数据发生了变化。
+
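+上面列出的趋势和季节性这类特性,可以用代码直观地分离出来。下面是一个示意性的例子(假设已安装 `statsmodels`;示例序列是人为构造的):
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+# 构造一个带线性趋势和每周周期的示例序列
+idx = pd.date_range('2020-01-01', periods=365, freq='D')
+values = [i * 0.05 + (i % 7) for i in range(365)]
+series = pd.Series(values, index=idx)
+
+# 分解为趋势、季节性和残差三个分量
+result = seasonal_decompose(series, model='additive', period=7)
+result.plot()
+plt.show()
+```
+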
+✅ 这里有一个[示例时间序列图](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python),显示了几年来每日的游戏内货币花费。你能在这个数据中识别出上面列出的任何特性吗?
+
+
+
+## 练习 - 开始使用电力使用数据
+
+让我们开始创建一个时间序列模型,以预测给定过去使用情况的未来电力使用情况。
+
+> 本示例中的数据来自GEFCom2014预测竞赛。它包括2012年至2014年之间的3年每小时电力负载和温度值。
+>
+> Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli 和 Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol.32, no.3, pp 896-913, July-September, 2016.
+
+1. 在本课的 `working` 文件夹中,打开 _notebook.ipynb_ 文件。首先添加将帮助你加载和可视化数据的库
+
+ ```python
+ import os
+ import matplotlib.pyplot as plt
+ from common.utils import load_data
+ %matplotlib inline
+ ```
+
+    请注意,你正在使用附带的 `common` 文件夹中的文件,它会设置你的环境并处理数据下载。
+
+2. 接下来,通过调用 `load_data()` 和 `head()` 将数据作为数据框进行查看:
+
+ ```python
+ data_dir = './data'
+ energy = load_data(data_dir)[['load']]
+ energy.head()
+ ```
+
+ 你可以看到有两列表示日期和负载:
+
+ | | load |
+ | :-----------------: | :----: |
+ | 2012-01-01 00:00:00 | 2698.0 |
+ | 2012-01-01 01:00:00 | 2558.0 |
+ | 2012-01-01 02:00:00 | 2444.0 |
+ | 2012-01-01 03:00:00 | 2402.0 |
+ | 2012-01-01 04:00:00 | 2403.0 |
+
+3. 现在,通过调用 `plot()` 绘制数据:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+4. 现在,按 `[起始日期]: [结束日期]` 的模式对 `energy` 进行切片,绘制2014年7月的第一周:
+
+ ```python
+ energy['2014-07-01':'2014-07-07'].plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 一个美丽的图!看看这些图表,看看你是否能确定上面列出的任何特性。通过可视化数据,我们可以推测什么?
+
+在下一课中,你将创建一个ARIMA模型来进行一些预测。
+
+---
+
+## 🚀挑战
+
+列出你能想到的所有可以从时间序列预测中受益的行业和研究领域。你能想到这些技术在艺术中的应用吗?在计量经济学中?生态学?零售业?工业?金融?还有哪里?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/42/)
+
+## 复习与自学
+
+虽然我们不会在这里讨论,但有时会使用神经网络来增强经典的时间序列预测方法。阅读更多关于它们的信息[在这篇文章中](https://medium.com/microsoftazure/neural-networks-for-forecasting-financial-and-economic-time-series-6aca370ff412)
+
+## 作业
+
+[可视化更多时间序列](assignment.md)
+
+**免责声明**:
+本文件使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于重要信息,建议使用专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/1-Introduction/assignment.md b/translations/zh/7-TimeSeries/1-Introduction/assignment.md
new file mode 100644
index 000000000..8110c0399
--- /dev/null
+++ b/translations/zh/7-TimeSeries/1-Introduction/assignment.md
@@ -0,0 +1,14 @@
+# 可视化更多时间序列
+
+## 说明
+
+你已经开始通过查看需要这种特殊建模的数据类型来学习时间序列预测。你已经可视化了一些关于能源的数据。现在,寻找一些其他可以从时间序列预测中受益的数据。找到三个例子(可以试试 [Kaggle](https://kaggle.com) 和 [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/catalog/?WT.mc_id=academic-77952-leestott)),并创建一个笔记本来可视化它们。在笔记本中记录它们的任何特殊特征(季节性、突然变化或其他趋势)。
+
+## 评分标准
+
+| 标准 | 杰出表现 | 充足表现 | 需要改进 |
+| -------- | ------------------------------------------------------ | ---------------------------------------------------- | ----------------------------------------------------------------------------------------- |
+| | 在笔记本中绘制并解释了三个数据集 | 在笔记本中绘制并解释了两个数据集 | 在笔记本中绘制或解释的数据集较少,或者呈现的数据不足 |
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应以原文档的母语版本为权威来源。对于关键信息,建议寻求专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/1-Introduction/solution/Julia/README.md b/translations/zh/7-TimeSeries/1-Introduction/solution/Julia/README.md
new file mode 100644
index 000000000..171efef23
--- /dev/null
+++ b/translations/zh/7-TimeSeries/1-Introduction/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对使用本翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/1-Introduction/solution/R/README.md b/translations/zh/7-TimeSeries/1-Introduction/solution/R/README.md
new file mode 100644
index 000000000..f727a1eaf
--- /dev/null
+++ b/translations/zh/7-TimeSeries/1-Introduction/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档视为权威来源。对于关键信息,建议进行专业人工翻译。我们不对因使用本翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/2-ARIMA/README.md b/translations/zh/7-TimeSeries/2-ARIMA/README.md
new file mode 100644
index 000000000..1855b77d9
--- /dev/null
+++ b/translations/zh/7-TimeSeries/2-ARIMA/README.md
@@ -0,0 +1,396 @@
+# ARIMA 时间序列预测
+
+在上一节课中,你了解了一些时间序列预测的基础知识,并加载了一个显示某段时间内电力负载波动的数据集。
+
+[![ARIMA 简介](https://img.youtube.com/vi/IUSk-YDau10/0.jpg)](https://youtu.be/IUSk-YDau10 "Introduction to ARIMA")
+
+> 🎥 点击上面的图片观看视频:ARIMA 模型的简要介绍。示例使用 R 语言,但概念是通用的。
+
+## [课前小测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/43/)
+
+## 简介
+
+在本课中,你将了解一种特定的构建模型的方法,即 [ARIMA: *A*uto*R*egressive *I*ntegrated *M*oving *A*verage](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)。ARIMA 模型特别适合拟合显示 [非平稳性](https://wikipedia.org/wiki/Stationary_process)的数据。
+
+## 基本概念
+
+为了能够使用 ARIMA,有一些概念你需要了解:
+
+- 🎓 **平稳性**。在统计学背景下,平稳性指的是数据的分布在时间上不变。非平稳数据则由于趋势而显示波动,必须通过转换来进行分析。例如,季节性可以引入数据波动,可以通过“季节性差分”过程来消除。
+
+- 🎓 **[差分](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing)**。在统计学背景下,差分指的是通过移除非恒定趋势将非平稳数据转换为平稳数据的过程。“差分移除了时间序列中的水平变化,消除了趋势和季节性,从而稳定了时间序列的均值。” [Shixiong 等人的论文](https://arxiv.org/abs/1904.07632)
+
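+差分的效果可以用几行 pandas 代码直观地演示。下面是一个最小示意(序列是人为构造的,仅作演示):
+
+```python
+import pandas as pd
+
+# 一条带加速上升趋势的序列(非平稳)
+s = pd.Series([10, 12, 15, 19, 24, 30])
+
+first = s.diff().dropna()       # 一阶差分:2, 3, 4, 5, 6,仍有趋势
+second = first.diff().dropna()  # 二阶差分:1, 1, 1, 1,趋势已被消除
+print(second.tolist())
+```
+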
+## ARIMA 在时间序列中的应用
+
+让我们拆解 ARIMA 的各个部分,以便更好地理解它如何帮助我们建模时间序列并进行预测。
+
+- **AR - 自回归**。自回归模型,顾名思义,是向“后”看,分析数据中的先前值并对其进行假设。这些先前值被称为“滞后”。例如,显示每月铅笔销售数据的数据集。每个月的销售总额将被视为数据集中的“演变变量”。该模型是“对其自身滞后(即先前)值进行回归”。[wikipedia](https://wikipedia.org/wiki/Autoregressive_integrated_moving_average)
+
+- **I - 积分**。与类似的“ARMA”模型不同,ARIMA 中的“I”指的是其 *[积分](https://wikipedia.org/wiki/Order_of_integration)* 方面。通过应用差分步骤来消除非平稳性,数据被“积分”。
+
+- **MA - 移动平均**。该模型的 [移动平均](https://wikipedia.org/wiki/Moving-average_model) 方面指的是通过观察当前和过去的滞后值来确定输出变量。
+
+总而言之:ARIMA 被用来使模型尽可能紧密地拟合时间序列数据的特殊形式。
+
+## 练习 - 构建 ARIMA 模型
+
+打开本课中的 [_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/working) 文件夹,并找到 [_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/2-ARIMA/working/notebook.ipynb) 文件。
+
+1. 运行 notebook 以加载 `statsmodels` Python 库;你将需要它来构建 ARIMA 模型。
+
+1. 加载必要的库
+
+1. 现在,加载一些对绘图有用的库:
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from pandas.plotting import autocorrelation_plot
+ from statsmodels.tsa.statespace.sarimax import SARIMAX
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ from IPython.display import Image
+
+ %matplotlib inline
+ pd.options.display.float_format = '{:,.2f}'.format
+ np.set_printoptions(precision=2)
+ warnings.filterwarnings("ignore") # specify to ignore warning messages
+ ```
+
+1. 将 `/data/energy.csv` 文件中的数据加载到 Pandas 数据框中并查看:
+
+ ```python
+ energy = load_data('./data')[['load']]
+ energy.head(10)
+ ```
+
+1. 绘制 2012 年 1 月至 2014 年 12 月的所有可用能源数据。应该没有意外,因为我们在上一节课中看到了这些数据:
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 现在,让我们构建一个模型!
+
+### 创建训练和测试数据集
+
+现在你的数据已经加载,可以将其分成训练集和测试集。你将用训练集来训练你的模型。像往常一样,模型训练完成后,你将使用测试集评估其准确性。你需要确保测试集覆盖比训练集晚的时间段,以确保模型不会从未来时间段获取信息。
+
+1. 将 2014 年 11 月 1 日至 12 月 30 日这大约两个月的数据分配给训练集。测试集则从 2014 年 12 月 30 日开始,包含最后两天(与下方代码中的日期一致):
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+ 由于这些数据反映了每日能源消耗,存在明显的季节性模式,但最近几天的消耗最为相似。
+
+1. 可视化差异:
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 因此,使用相对较小的时间窗口来训练数据应该是足够的。
+
+ > 注意:由于我们用来拟合 ARIMA 模型的函数在拟合过程中使用了样本内验证,我们将省略验证数据。
+
+### 准备数据进行训练
+
+现在,你需要通过对数据进行过滤和缩放来准备训练数据。过滤数据集以仅包括所需的时间段和列,并缩放以确保数据在 0 到 1 之间。
+
+1. 过滤原始数据集以仅包括上述时间段和所需的“load”列及日期:
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ 你可以看到数据的形状:
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+1. 将数据缩放到 0 到 1 的范围内。
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ train.head(10)
+ ```
+
+1. 可视化原始数据与缩放后的数据:
+
+ ```python
+ energy[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']].rename(columns={'load':'original load'}).plot.hist(bins=100, fontsize=12)
+ train.rename(columns={'load':'scaled load'}).plot.hist(bins=100, fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ > 原始数据
+
+ 
+
+ > 缩放后的数据
+
+1. 现在你已经校准了缩放后的数据,可以缩放测试数据:
+
+ ```python
+ test['load'] = scaler.transform(test)
+ test.head()
+ ```
+
+### 实现 ARIMA
+
+现在是时候实现 ARIMA 了!你将使用之前安装的 `statsmodels` 库。
+
+现在你需要执行以下几个步骤:
+
+ 1. 通过调用 `SARIMAX()` 并传入模型参数(p、d、q,以及 P、D、Q)来定义模型。
+ 2. 通过调用 `fit()` 函数在训练数据上拟合模型。
+ 3. 通过调用 `forecast()` 函数并指定要预测的步数(即 `horizon`)来进行预测。
+
+> 🎓 这些参数都是做什么的?在 ARIMA 模型中,有 3 个参数用于帮助建模时间序列的主要方面:季节性、趋势和噪声。这些参数是:
+
+`p`:与模型的自回归部分相关的参数,包含*过去*的值。
+`d`:与模型的积分部分相关的参数,影响应用于时间序列的*差分*(🎓 还记得差分吗 👆?)的程度。
+`q`:与模型的移动平均部分相关的参数。
+
+> 注意:如果你的数据具有季节性(本例的数据正是如此),我们就使用季节性 ARIMA 模型(SARIMA)。此时你需要使用另一组参数:`P`、`D` 和 `Q`,它们描述与 `p`、`d`、`q` 相同的关联,但对应于模型的季节性组件。
+
+1. 首先设置你首选的 horizon 值。让我们尝试 3 小时:
+
+ ```python
+ # Specify the number of steps to forecast ahead
+ HORIZON = 3
+ print('Forecasting horizon:', HORIZON, 'hours')
+ ```
+
+    为 ARIMA 模型选择最佳参数值可能具有挑战性,因为这在一定程度上既主观又耗时。你可以考虑使用 [`pyramid` 库](https://alkaline-ml.com/pmdarima/0.9.0/modules/generated/pyramid.arima.auto_arima.html)中的 `auto_arima()` 函数,见下面的示意。
+
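+    例如,一个使用 `auto_arima()` 的最小示意(假设安装的是 `pyramid` 的后继包 `pmdarima`,其接口与上面链接的文档一致;参数值仅供参考):
+
+    ```python
+    import pmdarima as pm
+
+    # 在训练序列上自动搜索 (p, d, q) 及季节性参数;m=24 表示以 24 小时为季节周期
+    auto_model = pm.auto_arima(train['load'], seasonal=True, m=24,
+                               suppress_warnings=True, trace=True)
+    print(auto_model.summary())
+    ```
+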
+1. 现在尝试一些手动选择以找到一个好的模型。
+
+ ```python
+ order = (4, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ model = SARIMAX(endog=train, order=order, seasonal_order=seasonal_order)
+ results = model.fit()
+
+ print(results.summary())
+ ```
+
+ 打印出结果表格。
+
+你已经构建了你的第一个模型!现在我们需要找到一种方法来评估它。
+
+### 评估你的模型
+
+为了评估你的模型,你可以进行所谓的 `walk forward` 验证。实际上,时间序列模型在每次新数据可用时都会重新训练。这允许模型在每个时间步长上做出最佳预测。
+
+使用这种技术从时间序列的开头开始,在训练数据集上训练模型。然后对下一个时间步长进行预测。预测结果与已知值进行评估。然后扩展训练集以包括已知值,并重复该过程。
+
+> 注意:你应该保持训练集窗口固定,这样每当一个新的观察值被加入训练集时,就从训练集的开头移除一个最早的观察值。
+
+此过程提供了模型在实际应用中的更稳健估计。然而,创建如此多的模型会带来计算成本。如果数据量小或模型简单,这是可以接受的,但在大规模应用中可能会成为问题。
+
+步进验证是时间序列模型评估的黄金标准,推荐用于你自己的项目。
+
+1. 首先,为每个 HORIZON 步长创建一个测试数据点。
+
+ ```python
+ test_shifted = test.copy()
+
+ for t in range(1, HORIZON+1):
+ test_shifted['load+'+str(t)] = test_shifted['load'].shift(-t, freq='H')
+
+ test_shifted = test_shifted.dropna(how='any')
+ test_shifted.head(5)
+ ```
+
+    | timestamp | load | load+1 | load+2 |
+    | :-----------------: | :--: | :----: | :----: |
+    | 2014-12-30 00:00:00 | 0.33 | 0.29 | 0.27 |
+    | 2014-12-30 01:00:00 | 0.29 | 0.27 | 0.27 |
+    | 2014-12-30 02:00:00 | 0.27 | 0.27 | 0.30 |
+    | 2014-12-30 03:00:00 | 0.27 | 0.30 | 0.41 |
+    | 2014-12-30 04:00:00 | 0.30 | 0.41 | 0.57 |
+
+ 数据根据其 horizon 点水平移动。
+
+1. 使用这种滑动窗口方法对测试数据进行预测,循环大小为测试数据长度:
+
+ ```python
+ %%time
+ training_window = 720 # dedicate 30 days (720 hours) for training
+
+ train_ts = train['load']
+ test_ts = test_shifted
+
+ history = [x for x in train_ts]
+ history = history[(-training_window):]
+
+ predictions = list()
+
+ order = (2, 1, 0)
+ seasonal_order = (1, 1, 0, 24)
+
+ for t in range(test_ts.shape[0]):
+ model = SARIMAX(endog=history, order=order, seasonal_order=seasonal_order)
+ model_fit = model.fit()
+ yhat = model_fit.forecast(steps = HORIZON)
+ predictions.append(yhat)
+ obs = list(test_ts.iloc[t])
+ # move the training window
+ history.append(obs[0])
+ history.pop(0)
+ print(test_ts.index[t])
+ print(t+1, ': predicted =', yhat, 'expected =', obs)
+ ```
+
+ 你可以观看训练过程:
+
+ ```output
+ 2014-12-30 00:00:00
+ 1 : predicted = [0.32 0.29 0.28] expected = [0.32945389435989236, 0.2900626678603402, 0.2739480752014323]
+
+ 2014-12-30 01:00:00
+ 2 : predicted = [0.3 0.29 0.3 ] expected = [0.2900626678603402, 0.2739480752014323, 0.26812891674127126]
+
+ 2014-12-30 02:00:00
+ 3 : predicted = [0.27 0.28 0.32] expected = [0.2739480752014323, 0.26812891674127126, 0.3025962399283795]
+ ```
+
+1. 比较预测值和实际负载:
+
+ ```python
+ eval_df = pd.DataFrame(predictions, columns=['t+'+str(t) for t in range(1, HORIZON+1)])
+ eval_df['timestamp'] = test.index[0:len(test.index)-HORIZON+1]
+ eval_df = pd.melt(eval_df, id_vars='timestamp', value_name='prediction', var_name='h')
+ eval_df['actual'] = np.array(np.transpose(test_ts)).ravel()
+ eval_df[['prediction', 'actual']] = scaler.inverse_transform(eval_df[['prediction', 'actual']])
+ eval_df.head()
+ ```
+
+ 输出
+    | | timestamp | h | prediction | actual |
+    | --- | ------------------- | --- | ---------- | -------- |
+    | 0 | 2014-12-30 00:00:00 | t+1 | 3,008.74 | 3,023.00 |
+    | 1 | 2014-12-30 01:00:00 | t+1 | 2,955.53 | 2,935.00 |
+    | 2 | 2014-12-30 02:00:00 | t+1 | 2,900.17 | 2,899.00 |
+    | 3 | 2014-12-30 03:00:00 | t+1 | 2,917.69 | 2,886.00 |
+    | 4 | 2014-12-30 04:00:00 | t+1 | 2,946.99 | 2,963.00 |
+
+ 观察每小时数据的预测值与实际负载。准确性如何?
+
+### 检查模型准确性
+
+通过测试所有预测的平均绝对百分比误差(MAPE)来检查模型的准确性。
+
+> **🧮 展示数学公式**
+>
+> MAPE = (1/n) × Σ | (实际值t − 预测值t) / 实际值t | × 100%
+>
+> [MAPE](https://www.linkedin.com/pulse/what-mape-mad-msd-time-series-allameh-statistics/) 用于显示预测准确性,定义如上公式。实际值t和预测值t之差除以实际值t。“在此计算中,每个预测点的绝对值之和除以拟合点数 n。” [wikipedia](https://wikipedia.org/wiki/Mean_absolute_percentage_error)
+
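+`mape` 辅助函数来自课程附带的 `common/utils.py`。作为参考,一个最小的示意实现大致如下(假设输入为等长数组,且实际值不为 0):
+
+```python
+import numpy as np
+
+def mape(predictions, actuals):
+    # 返回 0~1 之间的小数,乘以 100 即为百分比
+    predictions, actuals = np.asarray(predictions), np.asarray(actuals)
+    return np.mean(np.abs(predictions - actuals) / np.abs(actuals))
+```
+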
+1. 用代码表达公式:
+
+ ```python
+ if(HORIZON > 1):
+ eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
+ print(eval_df.groupby('h')['APE'].mean())
+ ```
+
+1. 计算一步的 MAPE:
+
+ ```python
+ print('One step forecast MAPE: ', (mape(eval_df[eval_df['h'] == 't+1']['prediction'], eval_df[eval_df['h'] == 't+1']['actual']))*100, '%')
+ ```
+
+    ```output
+    One step forecast MAPE:  0.5570581332313952 %
+    ```
+
+1. 打印多步预测 MAPE:
+
+ ```python
+ print('Multi-step forecast MAPE: ', mape(eval_df['prediction'], eval_df['actual'])*100, '%')
+ ```
+
+ ```output
+ Multi-step forecast MAPE: 1.1460048657704118 %
+ ```
+
+    数值越低越好:一个预测 MAPE 为 10 的模型,意味着平均偏差为 10%。
+
+1. 但和往常一样,这类准确性度量以可视化方式查看更直观,所以让我们把它绘制出来:
+
+ ```python
+ if(HORIZON == 1):
+ ## Plotting single step forecast
+ eval_df.plot(x='timestamp', y=['actual', 'prediction'], style=['r', 'b'], figsize=(15, 8))
+
+ else:
+ ## Plotting multi step forecast
+ plot_df = eval_df[(eval_df.h=='t+1')][['timestamp', 'actual']]
+ for t in range(1, HORIZON+1):
+ plot_df['t+'+str(t)] = eval_df[(eval_df.h=='t+'+str(t))]['prediction'].values
+
+        fig = plt.figure(figsize=(15, 8))
+        ax = fig.add_subplot(111)
+        ax.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
+ for t in range(1, HORIZON+1):
+ x = plot_df['timestamp'][(t-1):]
+ y = plot_df['t+'+str(t)][0:len(x)]
+ ax.plot(x, y, color='blue', linewidth=4*math.pow(.9,t), alpha=math.pow(0.8,t))
+
+ ax.legend(loc='best')
+
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+🏆 一个非常漂亮的图表,显示了一个准确性良好的模型。做得好!
+
+---
+
+## 🚀挑战
+
+深入研究测试时间序列模型准确性的方法。在本课中我们讨论了 MAPE,但还有其他方法可以使用吗?研究它们并注释。一份有用的文档可以在 [这里](https://otexts.com/fpp2/accuracy.html) 找到。
+
+## [课后小测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/44/)
+
+## 复习与自学
+
+本课仅触及了使用 ARIMA 进行时间序列预测的基础知识。花些时间通过深入研究 [这个仓库](https://microsoft.github.io/forecasting/) 及其各种模型类型来加深你的知识,学习其他构建时间序列模型的方法。
+
+## 作业
+
+[一个新的 ARIMA 模型](assignment.md)
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应以原文档的母语版本为权威来源。对于关键信息,建议寻求专业人工翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/2-ARIMA/assignment.md b/translations/zh/7-TimeSeries/2-ARIMA/assignment.md
new file mode 100644
index 000000000..c8b821d79
--- /dev/null
+++ b/translations/zh/7-TimeSeries/2-ARIMA/assignment.md
@@ -0,0 +1,14 @@
+# 一个新的 ARIMA 模型
+
+## 说明
+
+既然你已经建立了一个 ARIMA 模型,现在使用新的数据建立一个新的模型(可以尝试[杜克大学的这些数据集](http://www2.stat.duke.edu/~mw/ts_data_sets.html))。在笔记本中对你的工作进行注释,直观展示数据和你的模型,并使用 MAPE 测试其准确性。
+
+## 评分标准
+
+| 标准 | 模范 | 足够 | 需要改进 |
+| -------- | ------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | --------------------------- |
+| | 提供了一个包含新 ARIMA 模型的笔记本,进行了测试并通过可视化和准确性说明。 | 提供的笔记本没有注释或包含错误 | 提供了不完整的笔记本 |
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/2-ARIMA/solution/Julia/README.md b/translations/zh/7-TimeSeries/2-ARIMA/solution/Julia/README.md
new file mode 100644
index 000000000..06fbab6bc
--- /dev/null
+++ b/translations/zh/7-TimeSeries/2-ARIMA/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/2-ARIMA/solution/R/README.md b/translations/zh/7-TimeSeries/2-ARIMA/solution/R/README.md
new file mode 100644
index 000000000..3b5757b7f
--- /dev/null
+++ b/translations/zh/7-TimeSeries/2-ARIMA/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于使用此翻译引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/3-SVR/README.md b/translations/zh/7-TimeSeries/3-SVR/README.md
new file mode 100644
index 000000000..f62f3e53a
--- /dev/null
+++ b/translations/zh/7-TimeSeries/3-SVR/README.md
@@ -0,0 +1,382 @@
+# 使用支持向量回归进行时间序列预测
+
+在上一课中,你学习了如何使用ARIMA模型进行时间序列预测。现在你将学习使用支持向量回归模型,这是一种用于预测连续数据的回归模型。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/51/)
+
+## 介绍
+
+在本课中,你将了解如何使用[**SVM**: **支持向量机**](https://en.wikipedia.org/wiki/Support-vector_machine)进行回归,或**SVR: 支持向量回归**。
+
+### 时间序列背景下的SVR [^1]
+
+在理解SVR在时间序列预测中的重要性之前,这里有一些你需要了解的重要概念:
+
+- **回归:** 一种监督学习技术,用于从给定的输入集预测连续值。其思想是在特征空间中拟合一条包含最多数据点的曲线(或直线)。[点击这里](https://en.wikipedia.org/wiki/Regression_analysis)了解更多信息。
+- **支持向量机 (SVM):** 一种用于分类、回归和异常检测的监督学习模型。在分类中,该模型在特征空间中作为边界,在回归中作为最佳拟合线。SVM中通常使用核函数将数据集转换到更高维度的空间,使其更易于分离。[点击这里](https://en.wikipedia.org/wiki/Support-vector_machine)了解更多关于SVM的信息。
+- **支持向量回归 (SVR):** 一种SVM,用于找到包含最多数据点的最佳拟合线(在SVM中是超平面)。
+
+### 为什么选择SVR? [^1]
+
+在上一课中,你学习了ARIMA,这是一种非常成功的统计线性方法,用于预测时间序列数据。然而,在许多情况下,时间序列数据具有*非线性*,这无法通过线性模型映射。在这种情况下,SVM在回归任务中考虑数据非线性的能力使得SVR在时间序列预测中非常成功。
+
+## 练习 - 构建一个SVR模型
+
+数据准备的前几步与上一课的[ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)相同。
+
+打开本课的[_/working_](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/3-SVR/working)文件夹,找到[_notebook.ipynb_](https://github.com/microsoft/ML-For-Beginners/blob/main/7-TimeSeries/3-SVR/working/notebook.ipynb)文件。[^2]
+
+1. 运行笔记本并导入必要的库: [^2]
+
+ ```python
+ import sys
+ sys.path.append('../../')
+ ```
+
+ ```python
+ import os
+ import warnings
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import pandas as pd
+ import datetime as dt
+ import math
+
+ from sklearn.svm import SVR
+ from sklearn.preprocessing import MinMaxScaler
+ from common.utils import load_data, mape
+ ```
+
+2. 从`/data/energy.csv`文件中加载数据到Pandas数据框并查看: [^2]
+
+ ```python
+ energy = load_data('../../data')[['load']]
+ ```
+
+3. 绘制2012年1月至2014年12月的所有可用能源数据: [^2]
+
+ ```python
+ energy.plot(y='load', subplots=True, figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+ 现在,让我们构建我们的SVR模型。
+
+### 创建训练和测试数据集
+
+现在数据已经加载,你可以将其分为训练集和测试集。然后你将重塑数据,以创建 SVR 所需的基于时间步长的数据集。你将在训练集上训练模型;模型训练完成后,你将评估它在训练集、测试集和完整数据集上的准确性,以查看整体性能。你需要确保测试集覆盖训练集之后的时间段,以确保模型不会从未来时间段获取信息 [^2](否则会造成*数据泄露*)。
+
+1. 将2014年11月1日至12月30日这大约两个月的数据分配给训练集。测试集则从2014年12月30日开始,包含最后两天(与下方代码中的日期一致): [^2]
+
+ ```python
+ train_start_dt = '2014-11-01 00:00:00'
+ test_start_dt = '2014-12-30 00:00:00'
+ ```
+
+2. 可视化差异: [^2]
+
+ ```python
+ energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)][['load']].rename(columns={'load':'train'}) \
+ .join(energy[test_start_dt:][['load']].rename(columns={'load':'test'}), how='outer') \
+ .plot(y=['train', 'test'], figsize=(15, 8), fontsize=12)
+ plt.xlabel('timestamp', fontsize=12)
+ plt.ylabel('load', fontsize=12)
+ plt.show()
+ ```
+
+ 
+
+### 准备训练数据
+
+现在,你需要通过对数据进行过滤和缩放来准备训练数据。过滤数据集以仅包括所需的时间段和列,并缩放以确保数据在0到1的范围内。
+
+1. 过滤原始数据集,仅包括每个集合的上述时间段,并仅包括所需的'load'列和日期: [^2]
+
+ ```python
+ train = energy.copy()[(energy.index >= train_start_dt) & (energy.index < test_start_dt)][['load']]
+ test = energy.copy()[energy.index >= test_start_dt][['load']]
+
+ print('Training data shape: ', train.shape)
+ print('Test data shape: ', test.shape)
+ ```
+
+ ```output
+ Training data shape: (1416, 1)
+ Test data shape: (48, 1)
+ ```
+
+2. 将训练数据缩放到(0,1)范围: [^2]
+
+ ```python
+ scaler = MinMaxScaler()
+ train['load'] = scaler.fit_transform(train)
+ ```
+
+3. 现在,缩放测试数据: [^2]
+
+ ```python
+ test['load'] = scaler.transform(test)
+ ```
+
+### 创建带有时间步长的数据 [^1]
+
+对于SVR,你需要将输入数据转换为 `[batch, timesteps]` 的形式。因此,你需要重塑现有的 `train_data` 和 `test_data`,使其新增一个表示时间步长的维度。
+
+```python
+# Converting to numpy arrays
+train_data = train.values
+test_data = test.values
+```
+
+对于这个例子,我们取`timesteps = 5`。因此,模型的输入是前4个时间步长的数据,输出是第5个时间步长的数据。
+
+```python
+timesteps=5
+```
+
+使用嵌套列表推导将训练数据转换为2D张量:
+
+```python
+train_data_timesteps=np.array([[j for j in train_data[i:i+timesteps]] for i in range(0,len(train_data)-timesteps+1)])[:,:,0]
+train_data_timesteps.shape
+```
+
+```output
+(1412, 5)
+```
+
+将测试数据转换为2D张量:
+
+```python
+test_data_timesteps=np.array([[j for j in test_data[i:i+timesteps]] for i in range(0,len(test_data)-timesteps+1)])[:,:,0]
+test_data_timesteps.shape
+```
+
+```output
+(44, 5)
+```
+
+选择训练和测试数据的输入和输出:
+
+```python
+x_train, y_train = train_data_timesteps[:,:timesteps-1],train_data_timesteps[:,[timesteps-1]]
+x_test, y_test = test_data_timesteps[:,:timesteps-1],test_data_timesteps[:,[timesteps-1]]
+
+print(x_train.shape, y_train.shape)
+print(x_test.shape, y_test.shape)
+```
+
+```output
+(1412, 4) (1412, 1)
+(44, 4) (44, 1)
+```
+
+### 实现SVR [^1]
+
+现在,是时候实现SVR了。要了解更多关于此实现的信息,你可以参考[此文档](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)。对于我们的实现,我们遵循以下步骤:
+
+ 1. 通过调用 `SVR()` 并传入模型超参数(kernel、gamma、C 和 epsilon)来定义模型
+ 2. 通过调用 `fit()` 函数在训练数据上拟合模型
+ 3. 通过调用 `predict()` 函数进行预测
+
+现在我们创建一个SVR模型。这里我们使用[RBF核](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel),并将超参数gamma、C和epsilon分别设置为0.5、10和0.05。
+
+```python
+model = SVR(kernel='rbf',gamma=0.5, C=10, epsilon = 0.05)
+```
+
+#### 在训练数据上拟合模型 [^1]
+
+```python
+model.fit(x_train, y_train[:,0])
+```
+
+```output
+SVR(C=10, cache_size=200, coef0=0.0, degree=3, epsilon=0.05, gamma=0.5,
+ kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
+```
+
+#### 进行模型预测 [^1]
+
+```python
+y_train_pred = model.predict(x_train).reshape(-1,1)
+y_test_pred = model.predict(x_test).reshape(-1,1)
+
+print(y_train_pred.shape, y_test_pred.shape)
+```
+
+```output
+(1412, 1) (44, 1)
+```
+
+你已经构建了你的SVR!现在我们需要评估它。
+
+### 评估你的模型 [^1]
+
+为了评估,首先我们将数据缩放回原始比例。然后,为了检查性能,我们将绘制原始和预测的时间序列图,并打印MAPE结果。
+
+缩放预测和原始输出:
+
+```python
+# Scaling the predictions
+y_train_pred = scaler.inverse_transform(y_train_pred)
+y_test_pred = scaler.inverse_transform(y_test_pred)
+
+print(len(y_train_pred), len(y_test_pred))
+```
+
+```python
+# Scaling the original values
+y_train = scaler.inverse_transform(y_train)
+y_test = scaler.inverse_transform(y_test)
+
+print(len(y_train), len(y_test))
+```
+
+#### 检查模型在训练和测试数据上的表现 [^1]
+
+我们从数据集中提取时间戳,以显示在图表的x轴上。注意,我们使用前 `timesteps-1` 个值作为第一个输出的输入,因此输出的时间戳将从那之后开始。
+
+```python
+train_timestamps = energy[(energy.index < test_start_dt) & (energy.index >= train_start_dt)].index[timesteps-1:]
+test_timestamps = energy[test_start_dt:].index[timesteps-1:]
+
+print(len(train_timestamps), len(test_timestamps))
+```
+
+```output
+1412 44
+```
+
+绘制训练数据的预测:
+
+```python
+plt.figure(figsize=(25,6))
+plt.plot(train_timestamps, y_train, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(train_timestamps, y_train_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.title("Training data prediction")
+plt.show()
+```
+
+
+
+打印训练数据的MAPE
+
+```python
+print('MAPE for training data: ', mape(y_train_pred, y_train)*100, '%')
+```
+
+```output
+MAPE for training data: 1.7195710200875551 %
+```
+
+绘制测试数据的预测
+
+```python
+plt.figure(figsize=(10,3))
+plt.plot(test_timestamps, y_test, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(test_timestamps, y_test_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+打印测试数据的MAPE
+
+```python
+print('MAPE for testing data: ', mape(y_test_pred, y_test)*100, '%')
+```
+
+```output
+MAPE for testing data: 1.2623790187854018 %
+```
+
+🏆 你在测试数据集上得到了非常好的结果!
+
+### 检查模型在完整数据集上的表现 [^1]
+
+```python
+# Extracting load values as numpy array
+data = energy.copy().values
+
+# Scaling
+data = scaler.transform(data)
+
+# Transforming to 2D tensor as per model input requirement
+data_timesteps=np.array([[j for j in data[i:i+timesteps]] for i in range(0,len(data)-timesteps+1)])[:,:,0]
+print("Tensor shape: ", data_timesteps.shape)
+
+# Selecting inputs and outputs from data
+X, Y = data_timesteps[:,:timesteps-1],data_timesteps[:,[timesteps-1]]
+print("X shape: ", X.shape,"\nY shape: ", Y.shape)
+```
+
+```output
+Tensor shape: (26300, 5)
+X shape: (26300, 4)
+Y shape: (26300, 1)
+```
+
+```python
+# Make model predictions
+Y_pred = model.predict(X).reshape(-1,1)
+
+# Inverse scale and reshape
+Y_pred = scaler.inverse_transform(Y_pred)
+Y = scaler.inverse_transform(Y)
+```
+
+```python
+plt.figure(figsize=(30,8))
+plt.plot(Y, color = 'red', linewidth=2.0, alpha = 0.6)
+plt.plot(Y_pred, color = 'blue', linewidth=0.8)
+plt.legend(['Actual','Predicted'])
+plt.xlabel('Timestamp')
+plt.show()
+```
+
+
+
+```python
+print('MAPE: ', mape(Y_pred, Y)*100, '%')
+```
+
+```output
+MAPE: 2.0572089029888656 %
+```
+
+🏆 非常好的图表,显示了一个具有良好准确性的模型。做得好!
+
+---
+
+## 🚀挑战
+
+- 尝试在创建模型时调整超参数(gamma、C、epsilon),并在数据上进行评估,看看哪组超参数在测试数据上给出最佳结果。要了解更多关于这些超参数的信息,你可以参考[此文档](https://scikit-learn.org/stable/modules/svm.html#parameters-of-the-rbf-kernel)。
+- 尝试为模型使用不同的核函数,并分析它们在数据集上的表现。可以参考[此文档](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)。
+- 尝试使用不同的`timesteps`值来让模型回顾以进行预测。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/52/)
+
+## 复习与自学
+
+本课旨在介绍SVR在时间序列预测中的应用。要了解更多关于SVR的信息,你可以参考[这篇博客](https://www.analyticsvidhya.com/blog/2020/03/support-vector-regression-tutorial-for-machine-learning/)。这篇[scikit-learn文档](https://scikit-learn.org/stable/modules/svm.html)提供了关于SVM的一般解释,[SVR](https://scikit-learn.org/stable/modules/svm.html#regression)以及其他实现细节,如可以使用的不同[核函数](https://scikit-learn.org/stable/modules/svm.html#kernel-functions)及其参数。
+
+## 作业
+
+[一个新的SVR模型](assignment.md)
+
+## 致谢
+
+[^1]: 本节中的文字、代码和输出由[@AnirbanMukherjeeXD](https://github.com/AnirbanMukherjeeXD)贡献
+[^2]: 本节中的文字、代码和输出取自[ARIMA](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA)
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误释,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/3-SVR/assignment.md b/translations/zh/7-TimeSeries/3-SVR/assignment.md
new file mode 100644
index 000000000..a8c7a4304
--- /dev/null
+++ b/translations/zh/7-TimeSeries/3-SVR/assignment.md
@@ -0,0 +1,16 @@
+# 一个新的 SVR 模型
+
+## 说明 [^1]
+
+现在你已经构建了一个 SVR 模型,使用新的数据再构建一个模型(试试[这些来自杜克大学的数据集](http://www2.stat.duke.edu/~mw/ts_data_sets.html))。在笔记本中对你的工作进行注释,直观展示数据和你的模型,并使用适当的图表和 MAPE 测试其准确性。同时尝试调整不同的超参数,并使用不同的时间步长值。
+
+## 评分标准 [^1]
+
+| 标准 | 优秀 | 合格 | 需要改进 |
+| -------- | ------------------------------------------------------------ | --------------------------------------------------------- | ----------------------------------- |
+| | 提交的笔记本中包含构建、测试并通过可视化和准确性说明的 SVR 模型。 | 提交的笔记本未注释或包含错误。 | 提交的笔记本不完整。 |
+
+[^1]: 本节中的文字基于 [ARIMA 的作业](https://github.com/microsoft/ML-For-Beginners/tree/main/7-TimeSeries/2-ARIMA/assignment.md)
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对使用本翻译所产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/7-TimeSeries/README.md b/translations/zh/7-TimeSeries/README.md
new file mode 100644
index 000000000..a0e6006ed
--- /dev/null
+++ b/translations/zh/7-TimeSeries/README.md
@@ -0,0 +1,26 @@
+# 时间序列预测简介
+
+什么是时间序列预测?它是通过分析过去的趋势来预测未来的事件。
+
+## 区域主题:全球电力使用 ✨
+
+在这几节课中,你将了解时间序列预测,这是机器学习中一个相对鲜为人知、但在工业和商业等应用领域极具价值的部分。虽然可以使用神经网络来增强这些模型的实用性,但我们将在经典机器学习的背景下研究它们,因为这些模型有助于根据过去的表现预测未来的表现。
+
+我们的区域重点是全球的电力使用,这是一个有趣的数据集,可以了解如何根据过去的负载模式预测未来的电力使用情况。你可以看到这种预测在商业环境中是多么有用。
+
+
+
+照片由 [Peddi Sai hrithik](https://unsplash.com/@shutter_log?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) 拍摄于拉贾斯坦邦的一条道路上的电塔,发布在 [Unsplash](https://unsplash.com/s/photos/electric-india?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
+
+## 课程
+
+1. [时间序列预测简介](1-Introduction/README.md)
+2. [构建ARIMA时间序列模型](2-ARIMA/README.md)
+3. [构建支持向量回归器进行时间序列预测](3-SVR/README.md)
+
+## 鸣谢
+
+"时间序列预测简介" 由 [Francesca Lazzeri](https://twitter.com/frlazzeri) 和 [Jen Looper](https://twitter.com/jenlooper) 用 ⚡️ 编写。这些笔记本最初出现在 [Azure "Deep Learning For Time Series" repo](https://github.com/Azure/DeepLearningForTimeSeriesForecasting) 上,由 Francesca Lazzeri 编写。SVR 课程由 [Anirban Mukherjee](https://github.com/AnirbanMukherjeeXD) 编写。
+
+**免责声明**:
+本文档是使用机器翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对使用此翻译而产生的任何误解或曲解承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/1-QLearning/README.md b/translations/zh/8-Reinforcement/1-QLearning/README.md
new file mode 100644
index 000000000..da85aa901
--- /dev/null
+++ b/translations/zh/8-Reinforcement/1-QLearning/README.md
@@ -0,0 +1,319 @@
+# 强化学习和Q-Learning简介
+
+
+> [Tomomi Imura](https://www.twitter.com/girlie_mac) 的手绘笔记
+
+强化学习涉及三个重要概念:智能体、状态和每个状态的一组动作。通过在指定状态下执行一个动作,智能体会获得奖励。再次想象一下电脑游戏超级马里奥。你是马里奥,你在游戏关卡中,站在悬崖边。你上方有一个硬币。你是马里奥,在游戏关卡中,处于一个特定位置……这就是你的状态。向右移动一步(一个动作)会让你掉下悬崖,这会给你一个低分。然而,按下跳跃按钮会让你得分并且你会活下来。这是一个正面的结果,应该给你一个正的分数。
+
+通过使用强化学习和模拟器(游戏),你可以学习如何玩游戏以最大化奖励,即尽可能多地得分和保持存活。
+
+[![强化学习简介](https://img.youtube.com/vi/lDq_en8RNOo/0.jpg)](https://www.youtube.com/watch?v=lDq_en8RNOo)
+
+> 🎥 点击上面的图片听 Dmitry 讨论强化学习
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/45/)
+
+## 先决条件和设置
+
+在本课中,我们将用Python实验一些代码。你应该能够在你的电脑或云端运行本课的Jupyter Notebook代码。
+
+你可以打开[课程笔记本](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/notebook.ipynb)并通过本课构建。
+
+> **注意:** 如果你从云端打开此代码,你还需要获取 [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py) 文件,它在笔记本代码中使用。将其添加到与笔记本相同的目录中。
+
+## 简介
+
+在本课中,我们将探索**[彼得与狼](https://en.wikipedia.org/wiki/Peter_and_the_Wolf)**的世界,这个故事灵感来自俄罗斯作曲家[谢尔盖·普罗科菲耶夫](https://en.wikipedia.org/wiki/Sergei_Prokofiev)创作的音乐童话。我们将使用**强化学习**让彼得探索他的环境,收集美味的苹果并避免遇到狼。
+
+**强化学习**(RL)是一种学习技术,它允许我们通过运行许多实验来学习**智能体**在某些**环境**中的最佳行为。在这种环境中,智能体应该有某种**目标**,由**奖励函数**定义。
+
+## 环境
+
+为了简单起见,让我们将彼得的世界看作一个大小为 `width` x `height` 的正方形棋盘,如下所示:
+
+
+
+这个棋盘上的每个格子可以是:
+
+* **地面**,彼得和其他生物可以在上面行走。
+* **水**,显然你不能在上面行走。
+* **树**或**草地**,可以休息的地方。
+* **苹果**,彼得会很高兴找到它来喂饱自己。
+* **狼**,很危险,应该避免。
+
+有一个单独的Python模块 [`rlboard.py`](https://github.com/microsoft/ML-For-Beginners/blob/main/8-Reinforcement/1-QLearning/rlboard.py),包含处理这个环境的代码。因为这些代码对于理解我们的概念并不重要,所以我们将导入该模块并用它来创建示例棋盘(代码块1):
+
+```python
+from rlboard import *
+
+width, height = 8,8
+m = Board(width,height)
+m.randomize(seed=13)
+m.plot()
+```
+
+这段代码应该打印出类似上图的环境。
+
+## 动作和策略
+
+在我们的例子中,彼得的目标是找到一个苹果,同时避免狼和其他障碍。为此,他可以四处走动,直到找到一个苹果。
+
+因此,在任何位置,他可以选择以下动作之一:向上、向下、向左和向右。
+
+我们将这些动作定义为一个字典,并将它们映射到相应的坐标变化对。例如,向右移动(`R`)对应于坐标对 `(1,0)`。(代码块2):
+
+```python
+actions = { "U" : (0,-1), "D" : (0,1), "L" : (-1,0), "R" : (1,0) }
+action_idx = { a : i for i,a in enumerate(actions.keys()) }
+```
+
+总结一下,这个场景的策略和目标如下:
+
+- **策略**,我们的智能体(彼得)的策略由一个所谓的**策略函数**定义。策略是一个函数,在任何给定状态下返回动作。在我们的例子中,问题的状态由棋盘表示,包括玩家的当前位置。
+
+- **目标**,强化学习的目标是最终学习一个好的策略,能够有效地解决问题。然而,作为基线,我们考虑最简单的策略,称为**随机行走**。
+
+## 随机行走
+
+让我们首先通过实现随机行走策略来解决我们的问题。通过随机行走,我们将从允许的动作中随机选择下一个动作,直到我们到达苹果(代码块3)。
+
+1. 使用下面的代码实现随机行走:
+
+ ```python
+ def random_policy(m):
+ return random.choice(list(actions))
+
+ def walk(m,policy,start_position=None):
+ n = 0 # number of steps
+ # set initial position
+ if start_position:
+ m.human = start_position
+ else:
+ m.random_start()
+ while True:
+ if m.at() == Board.Cell.apple:
+ return n # success!
+ if m.at() in [Board.Cell.wolf, Board.Cell.water]:
+ return -1 # eaten by wolf or drowned
+ while True:
+ a = actions[policy(m)]
+ new_pos = m.move_pos(m.human,a)
+ if m.is_valid(new_pos) and m.at(new_pos)!=Board.Cell.water:
+ m.move(a) # do the actual move
+ break
+ n+=1
+
+ walk(m,random_policy)
+ ```
+
+ 调用 `walk` 应该返回相应路径的长度,每次运行可能会有所不同。
+
+1. 多次运行行走实验(例如100次),并打印结果统计数据(代码块4):
+
+ ```python
+ def print_statistics(policy):
+ s,w,n = 0,0,0
+ for _ in range(100):
+ z = walk(m,policy)
+ if z<0:
+ w+=1
+ else:
+ s += z
+ n += 1
+ print(f"Average path length = {s/n}, eaten by wolf: {w} times")
+
+ print_statistics(random_policy)
+ ```
+
+    请注意,平均路径长度大约是30-40步;考虑到到最近苹果的平均距离只有大约5-6步,这个数字相当大。
+
+ 你还可以看到彼得在随机行走期间的移动情况:
+
+ 
+
+## 奖励函数
+
+为了让我们的策略更智能,我们需要了解哪些移动比其他移动“更好”。为此,我们需要定义我们的目标。
+
+目标可以用**奖励函数**来定义,它会为每个状态返回一些分数值。数值越高,奖励函数越好。(代码块5)
+
+```python
+move_reward = -0.1
+goal_reward = 10
+end_reward = -10
+
+def reward(m,pos=None):
+ pos = pos or m.human
+ if not m.is_valid(pos):
+ return end_reward
+ x = m.at(pos)
+ if x==Board.Cell.water or x == Board.Cell.wolf:
+ return end_reward
+ if x==Board.Cell.apple:
+ return goal_reward
+ return move_reward
+```
+
+关于奖励函数的一个有趣之处在于,在大多数情况下,*我们只有在游戏结束时才会得到实质性的奖励*。这意味着我们的算法应该以某种方式记住导致最终正面奖励的“好”步骤,并增加它们的重要性。同样,所有导致不良结果的移动应该被抑制。
+
+## Q-Learning
+
+我们将在这里讨论的算法称为**Q-Learning**。在这个算法中,策略由一个函数(或数据结构)定义,称为**Q-Table**。它记录了在给定状态下每个动作的“好坏”。
+
+它被称为Q-Table,因为将其表示为表格或多维数组通常很方便。由于我们的棋盘尺寸为 `width` x `height`,我们可以使用形状为 `width` x `height` x `len(actions)` 的numpy数组来表示Q-Table:(代码块6)
+
+```python
+Q = np.ones((width,height,len(actions)),dtype=float)*1.0/len(actions)
+```
+
+请注意,我们用相等的值初始化Q-Table的所有值,在我们的例子中是0.25。这对应于"随机行走"策略,因为每个状态下的所有移动都同样好。我们可以将Q-Table传递给 `plot` 函数,以便在棋盘上可视化该表:`m.plot(Q)`。
+
+
+
+每个单元格的中心有一个"箭头",指示首选的移动方向。由于所有方向都相同,因此显示为一个点。
+
+现在我们需要运行模拟来探索环境,并学习更好的 Q-Table 值分布,这将使我们能够更快地找到通往苹果的路径。
+
+## Q-Learning 的精髓:贝尔曼方程
+
+一旦我们开始移动,每个动作都会有相应的奖励,即理论上我们可以根据最高的即时奖励来选择下一个动作。然而,在大多数状态下,移动并不能直接实现我们到达苹果的目标,因此我们无法立即判断哪个方向更好。
+
+> 请记住,重要的不是即时结果,而是我们在模拟结束时获得的最终结果。
+
+为了考虑这种延迟奖励,我们需要使用**[动态规划](https://en.wikipedia.org/wiki/Dynamic_programming)**的原理,它允许我们以递归的方式思考问题。
+
+假设我们现在处于状态 *s*,并且想要移动到下一个状态 *s'*。这样做,我们将获得由奖励函数定义的即时奖励 *r(s,a)*,加上一些未来奖励。如果我们假设 Q-Table 正确地反映了每个动作的"吸引力",那么在状态 *s'* 我们会选择对应于最大 *Q(s',a')* 值的动作 *a'*。因此,我们在状态 *s* 所能获得的最佳未来奖励将定义为 maxa' *Q(s',a')*(这里的最大值是在状态 *s'* 的所有可能动作 *a'* 上计算的)。
+
+这给出了在给定动作 *a* 时计算状态 *s* 的 Q-Table 值的**贝尔曼公式**:
+
+*Q(s,a)* = *r(s,a)* + γ maxa'*Q(s',a')*
+
+这里的 γ 是所谓的**折扣因子**,它决定了你应在多大程度上偏好当前奖励而非未来奖励,反之亦然。
+
+## 学习算法
+
+给定上面的方程,我们现在可以为学习算法编写伪代码:
+
+* 用相等的数值为所有状态和动作初始化 Q-Table Q
+* 设置学习率 α ← 1
+* 多次重复模拟
+  1. 从随机位置开始
+  1. 重复
+     1. 在状态 *s* 选择一个动作 *a*
+     2. 通过移动到新状态 *s'* 来执行该动作
+     3. 如果遇到游戏结束条件,或总奖励太小,则退出模拟
+     4. 计算新状态下的奖励 *r*
+     5. 根据贝尔曼方程更新 Q 函数:*Q(s,a)* ← *(1-α)Q(s,a)+α(r+γ maxa'Q(s',a'))*
+     6. *s* ← *s'*
+     7. 更新总奖励并减小 α。
+
+## 利用与探索
+
+在上面的算法中,我们没有具体说明在步骤 2.1 中应该如何选择动作。如果我们随机选择动作,就会随机地**探索**环境,但也很可能经常"死掉",并探索到我们通常不会去的区域。另一种方法是**利用**已知的 Q-Table 值,从而在状态 *s* 选择最佳动作(Q-Table 值较高的动作)。然而,这会阻止我们探索其他状态,很可能导致我们找不到最优解。
+
+因此,最好的方法是在探索和利用之间取得平衡。这可以通过在状态 *s* 以与 Q-Table 中的值成比例的概率选择动作来实现。在开始时,当 Q-Table 的值都相同时,这相当于随机选择;但随着我们对环境了解得越来越多,我们会更倾向于遵循最优路线,同时允许智能体偶尔选择未探索过的路径。
+
+## Python 实现
+
+我们现在准备实现学习算法。在此之前,我们还需要一个函数,将 Q-Table 中的任意数值转换为相应动作的概率向量。
+
+1. 创建一个函数 `probs()`:
+
+ ```python
+ def probs(v,eps=1e-4):
+ v = v-v.min()+eps
+ v = v/v.sum()
+ return v
+ ```
+
+ 我们在原始向量中添加了一些 `eps`,以避免在初始情况下,向量的所有分量相同时发生除以0的情况。
+
+通过 5000 次实验(也称为 **epochs**)运行学习算法:(代码块8)
+
+```python
+lpath = []  # 记录每个 epoch 的路径长度,便于之后观察学习进度
+
+for epoch in range(5000):
+
+    # Pick initial point
+    m.random_start()
+
+    # Start travelling
+    n=0
+    cum_reward = 0
+    while True:
+        x,y = m.human
+        v = probs(Q[x,y])
+        a = random.choices(list(actions),weights=v)[0]
+        dpos = actions[a]
+        m.move(dpos,check_correctness=False) # we allow player to move outside the board, which terminates episode
+        r = reward(m)
+        cum_reward += r
+        if r==end_reward or cum_reward < -1000:
+            lpath.append(n)
+            break
+        alpha = np.exp(-n / 10e5)
+        gamma = 0.5
+        ai = action_idx[a]
+        Q[x,y,ai] = (1 - alpha) * Q[x,y,ai] + alpha * (r + gamma * Q[x+dpos[0], y+dpos[1]].max())
+        n+=1
+```
+
+执行此算法后,Q-Table 应该会更新每一步中不同动作的吸引力值。我们可以尝试通过在每个单元格绘制一个指向期望移动方向的向量来可视化Q-Table。为了简化,我们画一个小圆圈代替箭头。
+
+## 检查策略
+
+由于Q-Table列出了每个状态下每个动作的“吸引力”,因此很容易使用它来定义在我们的世界中的高效导航。在最简单的情况下,我们可以选择对应于最高Q-Table值的动作:(代码块9)
+
+```python
+def qpolicy_strict(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = list(actions)[np.argmax(v)]
+ return a
+
+walk(m,qpolicy_strict)
+```
+
+> 如果你多次尝试上述代码,你可能会注意到有时它会“卡住”,你需要按下笔记本中的停止按钮来中断它。这是因为可能会出现两种状态在最优Q-Value方面“指向”彼此的情况,在这种情况下,智能体最终会在这些状态之间无限移动。
+
+## 🚀挑战
+
+> **任务1:** 修改 `walk` 函数,将路径的最大长度限制为一定的步数(比如100),并观察上面的代码不时返回这个值。
+
+> **任务2:** 修改 `walk` 函数,使其不会回到之前已经去过的地方。这将防止 `walk` 陷入循环,不过智能体仍然可能被"困"在一个无法逃脱的位置。
+
+## 导航
+
+更好的导航策略是我们在训练期间使用的那种结合了利用与探索的策略。在这个策略中,我们将以与 Q-Table 中的值成比例的一定概率选择每个动作。这种策略仍可能导致智能体返回到已经探索过的位置,但正如你从下面的代码中看到的,它会得到通往目标位置的非常短的平均路径(请记住,`print_statistics` 会将模拟运行100次):(代码块10)
+
+```python
+def qpolicy(m):
+ x,y = m.human
+ v = probs(Q[x,y])
+ a = random.choices(list(actions),weights=v)[0]
+ return a
+
+print_statistics(qpolicy)
+```
+
+运行此代码后,你应该会得到比之前小得多的平均路径长度,大约在3-6之间。
+
+## 调查学习过程
+
+正如我们所提到的,学习过程是在探索和利用已获得的关于问题空间结构的知识之间的平衡。我们已经看到学习的结果(帮助智能体找到通向目标的短路径的能力)有所改善,但观察学习过程中平均路径长度的变化也很有趣:
+
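+例如,假设你在训练循环中已把每个 epoch 的路径长度收集到 `lpath` 列表中(见上面的代码块8),下面这个示意片段就可以绘制其移动平均曲线(窗口大小为假设值):
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+window = 100  # 平滑窗口,可自行调整
+smoothed = np.convolve(lpath, np.ones(window)/window, mode='valid')
+plt.plot(smoothed)
+plt.xlabel('epoch')
+plt.ylabel('average path length')
+plt.show()
+```
+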
+学习总结如下:
+
+- **平均路径长度增加**。我们看到的是,起初平均路径长度增加。这可能是因为当我们对环境一无所知时,我们可能会陷入不良状态,如水或狼。随着我们了解更多并开始使用这些知识,我们可以更长时间地探索环境,但我们仍然不知道苹果的位置。
+
+- **随着学习的深入,路径长度减少**。一旦我们学到足够多,智能体更容易实现目标,路径长度开始减少。然而,我们仍然保持探索,因此经常偏离最佳路径去探索新选项,使路径比最优路径更长。
+
+- **长度突然增加**。我们在这个图表上还观察到某些时候,长度突然增加。这表明过程的随机性,并且我们可能会在某个时候通过用新值覆盖它们来“破坏”Q-Table系数。这理想情况下应该通过降低学习率来最小化(例如,在训练结束时,我们只调整Q-Table值一个小值)。
+
+总的来说,重要的是要记住,学习过程的成功和质量在很大程度上取决于参数,如学习率、学习率衰减和折扣因子。这些通常称为**超参数**,以区别于**参数**,我们在训练过程中优化(例如Q-Table系数)。找到最佳超参数值的过程称为**超参数优化**,它值得单独讨论。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/46/)
+
+## 作业
+[一个更真实的世界](assignment.md)
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的本国语言版本视为权威来源。对于重要信息,建议使用专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/1-QLearning/assignment.md b/translations/zh/8-Reinforcement/1-QLearning/assignment.md
new file mode 100644
index 000000000..e77a1e41e
--- /dev/null
+++ b/translations/zh/8-Reinforcement/1-QLearning/assignment.md
@@ -0,0 +1,28 @@
+# 一个更真实的世界
+
+在我们的情境中,彼得几乎可以不感到疲倦或饥饿地四处移动。在一个更真实的世界中,他需要时不时地坐下来休息,还需要吃东西。让我们通过实现以下规则使我们的世界更加真实:
+
+1. 从一个地方移动到另一个地方时,彼得会失去**能量**并获得一些**疲劳**。
+2. 彼得可以通过吃苹果来获得更多能量。
+3. 彼得可以通过在树下或草地上休息来消除疲劳(即走到有树或草的棋盘位置 - 绿色区域)。
+4. 彼得需要找到并杀死狼。
+5. 为了杀死狼,彼得需要有一定的能量和疲劳水平,否则他会输掉战斗。
+## 指导
+
+使用原始的 [notebook.ipynb](../../../../8-Reinforcement/1-QLearning/notebook.ipynb) 笔记本作为解决方案的起点。
+
+根据游戏规则修改上述奖励函数,运行强化学习算法以学习赢得游戏的最佳策略,并比较随机漫步算法与您的算法在赢得和输掉游戏数量方面的结果。
+
+> **Note**: 在您的新世界中,状态更加复杂,除了人类位置外,还包括疲劳和能量水平。您可以选择将状态表示为元组 (Board,energy,fatigue),或者为状态定义一个类(您也可以从 `Board` 派生它),甚至可以修改原始的 `Board` 类在 [rlboard.py](../../../../8-Reinforcement/1-QLearning/rlboard.py) 中。
+
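+例如,一个最小的状态表示示意(这是针对本作业的假设性扩展,字段名均为自拟,仅供参考):
+
+```python
+from dataclasses import dataclass
+
+@dataclass(frozen=True)
+class State:
+    # 把彼得的位置、能量和疲劳打包成可哈希的状态,便于用作 Q 表字典的键
+    position: tuple  # 棋盘上的 (x, y) 坐标
+    energy: int
+    fatigue: int
+
+# 可作为字典键使用,例如 Q[(state, action)]
+s = State(position=(3, 4), energy=10, fatigue=0)
+```
+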
+在您的解决方案中,请保留负责随机漫步策略的代码,并在最后将您的算法结果与随机漫步进行比较。
+
+> **Note**: 您可能需要调整超参数以使其正常工作,尤其是训练次数。由于游戏的成功(与狼战斗)是一个罕见事件,您可以预期更长的训练时间。
+## 评分标准
+
+| 标准 | 杰出 | 合格 | 需要改进 |
+| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| | 提供了一个定义新世界规则的笔记本,Q-Learning 算法和一些文字解释。Q-Learning 能够显著改善与随机漫步相比的结果。 | 提供了笔记本,Q-Learning 已实现并改善了与随机漫步相比的结果,但并不显著;或者笔记本记录不充分,代码结构不良。 | 尝试重新定义世界规则,但 Q-Learning 算法不起作用,或者奖励函数未完全定义。 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议进行专业的人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/1-QLearning/solution/Julia/README.md b/translations/zh/8-Reinforcement/1-QLearning/solution/Julia/README.md
new file mode 100644
index 000000000..ee7e14800
--- /dev/null
+++ b/translations/zh/8-Reinforcement/1-QLearning/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/1-QLearning/solution/R/README.md b/translations/zh/8-Reinforcement/1-QLearning/solution/R/README.md
new file mode 100644
index 000000000..709ed844b
--- /dev/null
+++ b/translations/zh/8-Reinforcement/1-QLearning/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业的人类翻译。对于因使用此翻译而产生的任何误解或误释,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/2-Gym/README.md b/translations/zh/8-Reinforcement/2-Gym/README.md
new file mode 100644
index 000000000..ef2b871de
--- /dev/null
+++ b/translations/zh/8-Reinforcement/2-Gym/README.md
@@ -0,0 +1,342 @@
+# 小车杆平衡
+
+在前一课中我们解决的问题看起来像是一个玩具问题,似乎并不适用于实际场景。但事实并非如此,因为许多现实世界的问题也有类似的场景,包括下棋或围棋。它们是相似的,因为我们也有一个带有给定规则的棋盘和一个**离散状态**。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/47/)
+
+## 介绍
+
+在本课中,我们将把Q学习的相同原理应用到一个**连续状态**的问题上,即由一个或多个实数给定的状态。我们将处理以下问题:
+
+> **问题**:如果彼得想要逃离狼,他需要能够更快地移动。我们将看看彼得如何学习滑冰,特别是如何通过Q学习保持平衡。
+
+
+
+> 彼得和他的朋友们想出妙招逃离狼!图片来自 [Jen Looper](https://twitter.com/jenlooper)
+
+我们将使用一种称为**小车杆**问题的简化平衡版本。在小车杆世界中,我们有一个可以左右移动的水平滑块,目标是让垂直杆在滑块上保持平衡。
+
+## 先决条件
+
+在本课中,我们将使用一个名为**OpenAI Gym**的库来模拟不同的**环境**。你可以在本地运行本课的代码(例如在Visual Studio Code中),在这种情况下,模拟将会在一个新窗口中打开。当在线运行代码时,你可能需要对代码进行一些调整,如[这里](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7)所述。
+
+## OpenAI Gym
+
+在前一课中,游戏的规则和状态由我们自己定义的`Board`类给出。这里我们将使用一个特殊的**模拟环境**,它将模拟平衡杆背后的物理现象。训练强化学习算法最流行的模拟环境之一是由[OpenAI](https://openai.com/)维护的[Gym](https://gym.openai.com/)。通过使用这个gym,我们可以创建从小车杆模拟到Atari游戏的不同**环境**。
+
+> **注意**:你可以在[这里](https://gym.openai.com/envs/#classic_control)查看OpenAI Gym提供的其他环境。
+
+首先,让我们安装gym并导入所需的库(代码块1):
+
+```python
+import sys
+!{sys.executable} -m pip install gym
+
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import random
+```
+
+## 练习 - 初始化一个小车杆环境
+
+要处理小车杆平衡问题,我们需要初始化相应的环境。每个环境都关联着:
+
+- 一个**观察空间**,定义我们从环境中接收的信息结构。对于小车杆问题,我们接收杆的位置、速度和其他一些值。
+
+- 一个**动作空间**,定义可能的动作。在我们的案例中,动作空间是离散的,由两个动作组成 - **左**和**右**。(代码块2)
+
+1. 要初始化,请输入以下代码:
+
+ ```python
+ env = gym.make("CartPole-v1")
+ print(env.action_space)
+ print(env.observation_space)
+ print(env.action_space.sample())
+ ```
+
+要查看环境如何工作,让我们运行一个100步的简短模拟。在每一步,我们提供一个要采取的动作 - 在此模拟中,我们只是随机选择一个来自`action_space`的动作。
+
+1. 运行下面的代码,看看会导致什么结果。
+
+ ✅ 记住,最好在本地Python安装中运行此代码!(代码块3)
+
+ ```python
+ env.reset()
+
+ for i in range(100):
+ env.render()
+ env.step(env.action_space.sample())
+ env.close()
+ ```
+
+ 你应该会看到类似这样的图像:
+
+ 
+
+1. 在模拟过程中,我们需要获取观察值以决定如何行动。事实上,step 函数会返回当前的观察值、奖励值和一个指示是否应继续模拟的完成标志:(代码块4)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ print(f"{obs} -> {rew}")
+ env.close()
+ ```
+
+ 你将在笔记本输出中看到类似这样的内容:
+
+ ```text
+ [ 0.03403272 -0.24301182 0.02669811 0.2895829 ] -> 1.0
+ [ 0.02917248 -0.04828055 0.03248977 0.00543839] -> 1.0
+ [ 0.02820687 0.14636075 0.03259854 -0.27681916] -> 1.0
+ [ 0.03113408 0.34100283 0.02706215 -0.55904489] -> 1.0
+ [ 0.03795414 0.53573468 0.01588125 -0.84308041] -> 1.0
+ ...
+ [ 0.17299878 0.15868546 -0.20754175 -0.55975453] -> 1.0
+ [ 0.17617249 0.35602306 -0.21873684 -0.90998894] -> 1.0
+ ```
+
+ 在模拟的每一步返回的观察向量包含以下值:
+ - 小车的位置
+ - 小车的速度
+ - 杆的角度
+ - 杆的旋转速率
+
+1. 获取这些数字的最小值和最大值:(代码块5)
+
+ ```python
+ print(env.observation_space.low)
+ print(env.observation_space.high)
+ ```
+
+ 你可能还会注意到,每一步模拟的奖励值总是1。这是因为我们的目标是尽可能长时间地生存,即尽可能长时间地保持杆在合理的垂直位置。
+
+ ✅ 实际上,如果我们能在100次连续试验中平均获得195的奖励,小车杆模拟就被认为是解决了。
+
+## 状态离散化
+
+在Q学习中,我们需要构建Q表,定义在每个状态下该做什么。为了能够做到这一点,我们需要状态是**离散的**,更准确地说,它应该包含有限数量的离散值。因此,我们需要以某种方式**离散化**我们的观察值,将它们映射到有限的状态集合。
+
+有几种方法可以做到这一点:
+
+- **划分为箱子**。如果我们知道某个值的区间,我们可以将该区间划分为多个**箱子**,然后用它所属的箱子编号替换该值。这可以使用numpy的[`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html)方法来完成。在这种情况下,我们将准确知道状态的大小,因为它将取决于我们为数字化选择的箱子数量。
+
+✅ 我们可以使用线性插值将值带到某个有限区间(例如,从-20到20),然后通过四舍五入将数字转换为整数。这给了我们对状态大小的控制稍微少一些,特别是如果我们不知道输入值的确切范围。例如,在我们的例子中,4个值中的2个没有上/下限,这可能导致无限数量的状态。
+
+在我们的例子中,我们将使用第二种方法。正如你可能稍后注意到的,尽管没有明确的上/下限,这些值很少会取到某些有限区间之外的值,因此那些具有极端值的状态将非常罕见。
+
+1. 这是一个函数,它将从我们的模型中获取观察值并生成一个包含4个整数值的元组:(代码块6)
+
+ ```python
+ def discretize(x):
+        return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
+ ```
+
+1. 让我们还探索另一种使用箱子的离散化方法:(代码块7)
+
+ ```python
+ def create_bins(i,num):
+ return np.arange(num+1)*(i[1]-i[0])/num+i[0]
+
+ print("Sample bins for interval (-5,5) with 10 bins\n",create_bins((-5,5),10))
+
+ ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter
+ nbins = [20,20,10,10] # number of bins for each parameter
+ bins = [create_bins(ints[i],nbins[i]) for i in range(4)]
+
+ def discretize_bins(x):
+ return tuple(np.digitize(x[i],bins[i]) for i in range(4))
+ ```
+
+1. 现在让我们运行一个简短的模拟并观察这些离散的环境值。随意尝试 `discretize` 和 `discretize_bins`,看看是否有差异。
+
+ ✅ discretize_bins返回箱子编号,这是从0开始的。因此,对于接近0的输入变量值,它返回区间中间的数字(10)。在discretize中,我们不关心输出值的范围,允许它们为负,因此状态值没有偏移,0对应于0。(代码块8)
+
+ ```python
+ env.reset()
+
+ done = False
+ while not done:
+ #env.render()
+ obs, rew, done, info = env.step(env.action_space.sample())
+ #print(discretize_bins(obs))
+ print(discretize(obs))
+ env.close()
+ ```
+
+ ✅ 如果你想查看环境的执行情况,请取消注释以env.render开头的行。否则,你可以在后台执行它,这样更快。在我们的Q学习过程中,我们将使用这种“隐形”执行。
+
+## Q表结构
+
+在前一课中,状态是从0到8的简单数对,因此用形状为8x8x2的numpy张量表示Q表是方便的。如果我们使用箱子离散化,状态向量的大小也是已知的,所以我们可以使用相同的方法,用形状为20x20x10x10x2的数组表示状态(这里2是动作空间的维度,前面的维度对应于我们为观察空间中的每个参数选择的箱子数量)。
+
+然而,有时观察空间的确切维度是不知道的。在`discretize`函数的情况下,我们可能永远无法确定我们的状态是否保持在某些限制内,因为某些原始值没有限制。因此,我们将使用一种稍微不同的方法,用字典表示Q表。
+
+1. 使用对*(state,action)*作为字典键,值对应于Q表条目值。(代码块9)
+
+ ```python
+ Q = {}
+ actions = (0,1)
+
+ def qvalues(state):
+ return [Q.get((state,a),0) for a in actions]
+ ```
+
+ 这里我们还定义了一个函数`qvalues()`,它返回给定状态对应于所有可能动作的Q表值列表。如果Q表中没有该条目,我们将返回0作为默认值。
+
+## 开始Q学习
+
+现在我们准备教彼得保持平衡了!
+
+1. 首先,让我们设置一些超参数:(代码块10)
+
+ ```python
+ # hyperparameters
+ alpha = 0.3
+ gamma = 0.9
+ epsilon = 0.90
+ ```
+
+    这里,`alpha` 是**学习率**,它定义了我们在每一步应在多大程度上调整 Q-Table 的当前值。在上一课中,我们从 1 开始,然后在训练过程中将 `alpha` 逐渐减小。在本例中,为了简单起见,我们将保持它不变,你可以稍后尝试调整 `alpha` 的值。
+
+    `gamma` 是**折扣因子**,表示我们应在多大程度上优先考虑未来奖励而非当前奖励。
+
+    `epsilon` 是**探索/利用因子**,决定我们应偏好探索还是利用。在我们的算法中,我们将在 `epsilon` 比例的情况下根据 Q-Table 的值选择下一个动作,其余情况下执行随机动作。这将使我们能够探索以前从未见过的搜索空间区域。
+
+    ✅ 就保持平衡而言,选择随机动作(探索)就像朝错误方向的随机一击,杆子必须学会如何从这些"错误"中恢复平衡。
+
+### 改进算法
+
+我们还可以对上一课的算法做两个改进:
+
+- **计算平均累计奖励**。我们将每 5000 次迭代打印一次进度,并对这段时间内的累计奖励取平均值。这意味着如果我们获得超过 195 分,就可以认为问题已解决,而且质量甚至高于要求。
+
+- **计算最大平均累计结果** `Qmax`,并存储对应于该结果的 Q-Table。当你运行训练时,你会注意到有时平均累计结果开始下降,我们希望保留对应于训练期间观察到的最佳模型的 Q-Table 值。
+
+1. 将每次模拟的所有累计奖励收集到 `rewards` 向量中,以便进一步绘图。(代码块11)
+
+    ```python
+    def probs(v,eps=1e-4):
+        v = v-v.min()+eps
+        v = v/v.sum()
+        return v
+
+    Qmax = 0
+    cum_rewards = []
+    rewards = []
+    for epoch in range(100000):
+        obs = env.reset()
+        done = False
+        cum_reward=0
+        # == do the simulation ==
+        while not done:
+            s = discretize(obs)
+            if random.random()<epsilon:
+                # exploitation - 按 Q-Table 值的概率分布选择动作
+                v = probs(np.array(qvalues(s)))
+                a = random.choices(actions,weights=v)[0]
+            else:
+                # exploration - 随机选择一个动作
+                a = np.random.randint(env.action_space.n)
+
+            obs, rew, done, info = env.step(a)
+            cum_reward+=rew
+            ns = discretize(obs)
+            # 按贝尔曼方程更新 Q-Table
+            Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))
+        cum_rewards.append(cum_reward)
+        rewards.append(cum_reward)
+        # == 每 5000 个 epoch 打印一次平均累计奖励,并保存最佳 Q-Table ==
+        if epoch%5000==0:
+            print(f"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}")
+            if np.average(cum_rewards) > Qmax:
+                Qmax = np.average(cum_rewards)
+                Qbest = Q
+            cum_rewards=[]
+    ```
+
+你可能从这些结果中注意到:
+
+- **接近我们的目标**。我们非常接近实现目标,即在100+次连续运行模拟中获得195的累计奖励,或者我们实际上已经实现了!即使我们得到较小的数字,我们也不知道,因为我们平均超过5000次运行,而正式标准仅需要100次运行。
+
+- **奖励开始下降**。有时奖励开始下降,这意味着我们可能用使情况变得更糟的新值“破坏”了Q表中已经学习到的值。
+
+如果我们绘制训练进度,这一观察结果会更加明显。
+
+## 绘制训练进度
+
+在训练过程中,我们已经将每次迭代的累计奖励值收集到`rewards`向量中。以下是我们将其与迭代次数一起绘制时的样子:
+
+```python
+plt.plot(rewards)
+```
+
+
+
+从这个图表中,我们无法判断任何事情,因为由于随机训练过程的性质,训练会话的长度变化很大。为了使这个图表更有意义,我们可以计算一系列实验的**移动平均值**,比如100。这可以方便地使用`np.convolve`完成:(代码块12)
+
+```python
+def running_average(x,window):
+ return np.convolve(x,np.ones(window)/window,mode='valid')
+
+plt.plot(running_average(rewards,100))
+```
+
+
+
+## 调整超参数
+
+为了使学习更加稳定,有必要在训练过程中调整一些超参数。特别是:
+
+- **对于学习率** `alpha`,我们可以从接近 1 的值开始,然后不断减小该参数。随着时间推移,Q-Table 中会得到较好的概率值,因此我们应该只对它们进行轻微调整,而不是用新值完全覆盖。
+
+- **增大 epsilon**。我们可能希望缓慢增大 `epsilon`,以便减少探索、增加利用。先从较低的 `epsilon` 值开始,再逐渐上升到接近 1,可能是合理的做法。
+
+> **任务1**:尝试调整超参数值,看看是否能获得更高的累计奖励。你能达到195以上吗?
+
+> **任务2**:要正式解决这个问题,你需要在100次连续运行中获得195的平均奖励。在训练过程中测量这一点,并确保你已经正式解决了这个问题!
+
+## 查看结果
+
+实际上看到训练模型的行为会很有趣。让我们运行模拟,并按照训练期间的相同动作选择策略,根据Q表中的概率分布进行采样:(代码块13)
+
+```python
+obs = env.reset()
+done = False
+while not done:
+ s = discretize(obs)
+ env.render()
+ v = probs(np.array(qvalues(s)))
+ a = random.choices(actions,weights=v)[0]
+ obs,_,done,_ = env.step(a)
+env.close()
+```
+
+你应该会看到类似这样的图像:
+
+
+
+---
+
+## 🚀挑战
+
+> **任务3**:这里,我们使用的是Q表的最终副本,这可能不是最好的。记住,我们已经把表现最好的Q表存储在 `Qbest` 变量中!将 `Qbest` 复制到 `Q`,用表现最好的Q表尝试同样的例子,看看你是否能注意到差异。
+
+> **任务4**:这里我们并不是在每一步都选择最佳动作,而是按相应的概率分布进行采样。始终选择具有最高Q表值的最佳动作是否更合理?这可以通过使用 `np.argmax` 函数找到对应于最高Q表值的动作编号来实现。实现这一策略,看看它是否改善了平衡效果。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/48/)
+
+## 作业
+[训练一辆山地车](assignment.md)
+
+## 结论
+
+我们现在已经学会了如何通过提供定义游戏期望状态的奖励函数,并给他们机会智能地探索搜索空间,来训练智能体以取得良好的结果。我们已经成功地在离散和连续环境中应用了Q学习算法,但动作仍然是离散的。
+
+重要的是还要研究动作状态也是连续的情况,以及观察空间更复杂的情况,例如来自Atari游戏屏幕的图像。在这些问题中,我们通常需要使用更强大的机器学习技术,例如神经网络,以取得良好的结果。这些更高级的话题是我们即将到来的更高级AI课程的主题。
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业的人工翻译。我们不对使用此翻译所产生的任何误解或误释承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/2-Gym/assignment.md b/translations/zh/8-Reinforcement/2-Gym/assignment.md
new file mode 100644
index 000000000..af2231b29
--- /dev/null
+++ b/translations/zh/8-Reinforcement/2-Gym/assignment.md
@@ -0,0 +1,43 @@
+# 训练山地车
+
+[OpenAI Gym](http://gym.openai.com) 的设计方式使得所有环境都提供相同的 API - 即相同的方法 `reset`、`step` 和 `render`,以及相同的 **动作空间** 和 **观察空间** 抽象。因此,应该可以在不同环境中以最小的代码更改来适应相同的强化学习算法。
+
+## 山地车环境
+
+[山地车环境](https://gym.openai.com/envs/MountainCar-v0/) 包含一辆卡在山谷中的车:
+目标是驶出山谷并夺取旗帜,每一步执行以下动作之一:
+
+| 值 | 含义 |
+|---|---|
+| 0 | 向左加速 |
+| 1 | 不加速 |
+| 2 | 向右加速 |
+
+这个问题的主要难点在于,小车的引擎动力不足,无法一次性爬上山顶。因此,唯一的成功方法是来回行驶以积累动量。
+
+观察空间仅由两个值组成:
+
+| 数字 | 观察值 | 最小值 | 最大值 |
+|-----|--------------|-----|-----|
+| 0 | 车的位置 | -1.2| 0.6 |
+| 1 | 车的速度 | -0.07 | 0.07 |
+
+山地车的奖励系统相当棘手:
+
+ * 如果代理在山顶(位置 = 0.5)到达旗帜,则奖励为0。
+ * 如果代理的位置小于0.5,则奖励为-1。
+
+如果车的位置超过0.5,或者剧集长度超过200,剧集将终止。
+
+## 说明
+
+调整我们的强化学习算法以解决山地车问题。从现有的 [notebook.ipynb](../../../../8-Reinforcement/2-Gym/notebook.ipynb) 代码开始,替换为新的环境,修改状态离散化函数,并尝试在尽量少改动代码的情况下让现有算法完成训练(下面给出一个可能的离散化函数示意)。通过调整超参数来优化结果。
+
+> **注意**:可能需要调整超参数才能使算法收敛。
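+
+下面是一个可能的离散化函数示意,分箱数 `bins=(20, 20)` 仅为假设,需要根据实验调整:
+
+```python
+import numpy as np
+
+# 示意:将 (位置, 速度) 连续观察值映射为整数网格坐标
+def discretize(obs, bins=(20, 20)):
+    lo = np.array([-1.2, -0.07])   # 观察空间下界
+    hi = np.array([0.6, 0.07])     # 观察空间上界
+    ratios = (np.asarray(obs) - lo) / (hi - lo)
+    idx = (ratios * np.array(bins)).astype(int)
+    return tuple(np.clip(idx, 0, np.array(bins) - 1))
+```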
+
+## 评分标准
+
+| 标准 | 模范 | 充分 | 需要改进 |
+| -------- | --------- | -------- | ----------------- |
+| | 成功将 Q-Learning 算法从 CartPole 示例中改编过来,只需很少的代码修改即可解决在200步内拿到旗帜的问题。 | 采用了一个新的 Q-Learning 算法,并有良好的文档记录;或者采用了现有算法,但未达到预期结果 | 学生未能成功采用任何算法,但朝着解决方案迈出了实质性步骤(实现了状态离散化、Q表数据结构等) |
+
+**免责声明**:
+本文档已使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/2-Gym/solution/Julia/README.md b/translations/zh/8-Reinforcement/2-Gym/solution/Julia/README.md
new file mode 100644
index 000000000..f11f67076
--- /dev/null
+++ b/translations/zh/8-Reinforcement/2-Gym/solution/Julia/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业的人类翻译。我们不对使用本翻译所产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/2-Gym/solution/R/README.md b/translations/zh/8-Reinforcement/2-Gym/solution/R/README.md
new file mode 100644
index 000000000..8189dc310
--- /dev/null
+++ b/translations/zh/8-Reinforcement/2-Gym/solution/R/README.md
@@ -0,0 +1,4 @@
+
+
+**免责声明**:
+本文档使用基于机器的人工智能翻译服务进行翻译。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始文档的母语版本视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用此翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/8-Reinforcement/README.md b/translations/zh/8-Reinforcement/README.md
new file mode 100644
index 000000000..0710ee914
--- /dev/null
+++ b/translations/zh/8-Reinforcement/README.md
@@ -0,0 +1,56 @@
+# 强化学习简介
+
+强化学习(Reinforcement Learning,RL)被视为与监督学习和无监督学习并列的基本机器学习范式之一。RL 关注的是决策:做出正确的决策,或者至少从中学习。
+
+想象一下你有一个模拟环境,比如股票市场。如果你实施了一项规定,会发生什么?它会产生积极还是消极的影响?如果发生了消极的事情,你需要从这个_负强化_中学习并改变方向。如果是积极的结果,你需要在这个_正强化_的基础上进一步发展。
+
+
+
+> 彼得和他的朋友们需要逃离饥饿的狼!图片由 [Jen Looper](https://twitter.com/jenlooper) 提供
+
+## 地区主题:彼得与狼(俄罗斯)
+
+[彼得与狼](https://en.wikipedia.org/wiki/Peter_and_the_Wolf) 是由俄罗斯作曲家 [Sergei Prokofiev](https://en.wikipedia.org/wiki/Sergei_Prokofiev) 写的一部音乐童话。故事讲述了年轻的先锋彼得,他勇敢地走出家门,来到森林空地去追逐狼。在本节中,我们将训练机器学习算法来帮助彼得:
+
+- **探索** 周围区域并建立最佳导航地图
+- **学习** 如何使用滑板并在上面保持平衡,以便更快地移动。
+
+[](https://www.youtube.com/watch?v=Fmi5zHg4QSM)
+
+> 🎥 点击上面的图片听 Prokofiev 的《彼得与狼》
+
+## 强化学习
+
+在前面的章节中,你已经看到了两种机器学习问题的例子:
+
+- **监督学习**,我们有数据集来建议我们想要解决的问题的样本解决方案。[分类](../4-Classification/README.md) 和 [回归](../2-Regression/README.md) 是监督学习任务。
+- **无监督学习**,我们没有标记的训练数据。无监督学习的主要例子是 [聚类](../5-Clustering/README.md)。
+
+在本节中,我们将向你介绍一种不需要标记训练数据的新型学习问题。有几种类型的此类问题:
+
+- **[半监督学习](https://wikipedia.org/wiki/Semi-supervised_learning)**,我们有大量未标记的数据,可以用来预训练模型。
+- **[强化学习](https://wikipedia.org/wiki/Reinforcement_learning)**,其中一个代理通过在某些模拟环境中进行实验来学习如何行为。
+
+### 示例 - 电脑游戏
+
+假设你想教电脑玩游戏,比如国际象棋或 [超级马里奥](https://wikipedia.org/wiki/Super_Mario)。为了让电脑玩游戏,我们需要它预测在每个游戏状态下应该做出哪一步。这看起来像是一个分类问题,但实际上不是——因为我们没有一个包含状态和相应动作的数据集。虽然我们可能有一些现有的象棋比赛或玩家玩超级马里奥的记录数据,但这些数据可能不足以涵盖足够多的可能状态。
+
+与其寻找现有的游戏数据,**强化学习**(RL)基于*让电脑多次玩游戏*并观察结果的想法。因此,要应用强化学习,我们需要两样东西:
+
+- **一个环境**和**一个模拟器**,允许我们多次玩游戏。这个模拟器将定义所有的游戏规则以及可能的状态和动作。
+
+- **一个奖励函数**,告诉我们每一步或每局游戏的表现如何。
+
+RL 与其他类型机器学习的主要区别在于:在比赛结束之前,我们通常不知道自己是否会赢。因此,我们无法单独判断某一步动作的好坏,只有在游戏结束时才能获得奖励。我们的目标是设计能够在这种不确定条件下训练模型的算法。我们将学习一种称为 **Q-learning** 的 RL 算法。
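+
+作为直观示意,下面是一个让电脑“玩一局游戏”的最小示例(随机策略,仅用于演示环境、动作与奖励这几个要素;环境名 CartPole-v1 为示例假设):
+
+```python
+import gym
+
+env = gym.make("CartPole-v1")                     # 环境 + 模拟器
+obs = env.reset()
+done = False
+total = 0
+while not done:
+    action = env.action_space.sample()            # 随机选择一个动作(尚未学习)
+    obs, reward, done, info = env.step(action)    # 奖励函数给出即时反馈
+    total += reward
+env.close()
+print("本局累计奖励:", total)
+```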
+
+## 课程
+
+1. [强化学习和 Q-Learning 简介](1-QLearning/README.md)
+2. [使用 Gym 模拟环境](2-Gym/README.md)
+
+## 致谢
+
+"强化学习简介" 由 [Dmitry Soshnikov](http://soshnikov.com) 倾情撰写
+
+**免责声明**:
+本文件是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议使用专业的人类翻译。对于因使用本翻译而引起的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/9-Real-World/1-Applications/README.md b/translations/zh/9-Real-World/1-Applications/README.md
new file mode 100644
index 000000000..227c5f5e4
--- /dev/null
+++ b/translations/zh/9-Real-World/1-Applications/README.md
@@ -0,0 +1,149 @@
+# 后记:现实世界中的机器学习
+
+
+> Sketchnote 由 [Tomomi Imura](https://www.twitter.com/girlie_mac) 提供
+
+在本课程中,你学到了许多准备数据进行训练和创建机器学习模型的方法。你构建了一系列经典的回归、聚类、分类、自然语言处理和时间序列模型。恭喜你!现在,你可能会想,这一切是为了什么……这些模型在现实世界中的应用是什么?
+
+虽然深度学习的 AI 在工业界引起了很多关注,但经典机器学习模型仍然有许多有价值的应用。你今天甚至可能会使用其中的一些应用!在本课中,你将探索八个不同的行业和主题领域如何使用这些类型的模型来使他们的应用程序更高效、更可靠、更智能并为用户提供更多价值。
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/49/)
+
+## 💰 金融
+
+金融领域为机器学习提供了许多机会。这个领域的许多问题都可以通过使用机器学习来建模和解决。
+
+### 信用卡欺诈检测
+
+我们在课程中学到了 [k-means 聚类](../../5-Clustering/2-K-Means/README.md),但它如何用于解决与信用卡欺诈相关的问题呢?
+
+k-means 聚类在一种称为 **异常检测** 的信用卡欺诈检测技术中非常有用。异常或一组数据观察中的偏差可以告诉我们信用卡是否在正常使用,或者是否有异常情况发生。正如下面链接的论文所示,你可以使用 k-means 聚类算法对信用卡数据进行分类,并根据每笔交易的异常程度将其分配到一个簇中。然后,你可以评估这些最具风险的簇,以区分欺诈交易和合法交易。
+[参考](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.680.1195&rep=rep1&type=pdf)
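+
+一个最小示意(数据为随机生成的假设值):用样本到其所属簇心的距离作为异常分数,距离最大的交易最可疑。
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+X = np.random.rand(1000, 5)                        # 假设的交易特征矩阵
+km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
+dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
+suspects = np.argsort(dist)[-20:]                  # 距簇心最远的20笔交易
+```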
+
+### 财富管理
+
+在财富管理中,个人或公司代表客户处理投资。他们的工作是长期维持和增长财富,因此选择表现良好的投资是至关重要的。
+
+评估特定投资表现的一种方法是通过统计回归。[线性回归](../../2-Regression/1-Tools/README.md) 是理解基金相对于某个基准表现的有价值工具。我们还可以推断回归结果是否具有统计显著性,或者它们对客户投资的影响程度。你甚至可以进一步扩展你的分析,使用多重回归,将其他风险因素考虑在内。有关如何为特定基金进行此操作的示例,请参阅下面关于使用回归评估基金表现的论文。
+[参考](http://www.brightwoodventures.com/evaluating-fund-performance-using-regression/)
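+
+一个最小示意(数据为随机生成的假设值):将基金收益对基准收益做线性回归,截距近似 alpha、斜率近似 beta,并用 p 值判断统计显著性。
+
+```python
+import numpy as np
+import statsmodels.api as sm
+
+rng = np.random.default_rng(0)
+benchmark = rng.normal(0.001, 0.01, 250)                      # 假设的基准日收益
+fund = 0.0002 + 1.1 * benchmark + rng.normal(0, 0.005, 250)   # 假设的基金日收益
+
+model = sm.OLS(fund, sm.add_constant(benchmark)).fit()
+print(model.params)    # [截距≈alpha, 斜率≈beta]
+print(model.pvalues)   # p值用于判断统计显著性
+```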
+
+## 🎓 教育
+
+教育领域也是一个非常有趣的应用机器学习的领域。有许多有趣的问题需要解决,例如检测考试或论文中的作弊行为,或管理纠正过程中的偏见,无论是有意还是无意的。
+
+### 预测学生行为
+
+[Coursera](https://coursera.com),一个在线开放课程提供商,有一个很棒的技术博客,他们在博客中讨论了许多工程决策。在这个案例研究中,他们绘制了一条回归线,试图探索低 NPS(净推荐值)评分与课程保留或退课之间的任何相关性。
+[参考](https://medium.com/coursera-engineering/controlled-regression-quantifying-the-impact-of-course-quality-on-learner-retention-31f956bd592a)
+
+### 缓解偏见
+
+[Grammarly](https://grammarly.com),一个检查拼写和语法错误的写作助手,在其产品中使用了复杂的 [自然语言处理系统](../../6-NLP/README.md)。他们在技术博客中发布了一篇有趣的案例研究,讨论了他们如何处理机器学习中的性别偏见问题,你在我们的[公平性介绍课程](../../1-Introduction/3-fairness/README.md)中学到了这一点。
+[参考](https://www.grammarly.com/blog/engineering/mitigating-gender-bias-in-autocorrect/)
+
+## 👜 零售
+
+零售行业肯定可以从机器学习的使用中受益,从打造更好的客户旅程,到以最优方式管理库存。
+
+### 个性化客户旅程
+
+在销售家具等家居用品的公司 Wayfair,帮助客户找到符合他们品味和需求的产品至关重要。在这篇文章中,公司工程师描述了他们如何使用机器学习和自然语言处理来“为客户提供正确的结果”。特别是,他们的查询意图引擎使用了实体提取、分类器训练、资产和意见提取以及客户评论的情感标记。这是 NLP 在在线零售中的经典用例。
+[参考](https://www.aboutwayfair.com/tech-innovation/how-we-use-machine-learning-and-natural-language-processing-to-empower-search)
+
+### 库存管理
+
+像 [StitchFix](https://stitchfix.com) 这样创新且灵活的公司,一个向消费者发送服装的盒子服务,严重依赖机器学习进行推荐和库存管理。他们的造型团队与商品团队密切合作:“我们的一个数据科学家尝试了一种遗传算法,并将其应用于服装,以预测今天不存在的成功服装。我们将其带给商品团队,现在他们可以将其作为一种工具使用。”
+[参考](https://www.zdnet.com/article/how-stitch-fix-uses-machine-learning-to-master-the-science-of-styling/)
+
+## 🏥 医疗保健
+
+医疗保健领域可以利用机器学习来优化研究任务以及物流问题,例如重新接纳患者或阻止疾病传播。
+
+### 管理临床试验
+
+临床试验中的毒性是药物制造商的一个主要关注点。多少毒性是可以容忍的?在这项研究中,分析各种临床试验方法导致开发了一种新的方法来预测临床试验结果的概率。具体来说,他们能够使用随机森林来生成一个 [分类器](../../4-Classification/README.md),该分类器能够区分药物组。
+[参考](https://www.sciencedirect.com/science/article/pii/S2451945616302914)
+
+### 医院再入院管理
+
+医院护理成本高昂,尤其是当患者需要重新入院时。本文讨论了一家公司如何使用机器学习通过 [聚类](../../5-Clustering/README.md) 算法来预测再入院的可能性。这些聚类帮助分析师“发现可能有共同原因的再入院群体”。
+[参考](https://healthmanagement.org/c/healthmanagement/issuearticle/hospital-readmissions-and-machine-learning)
+
+### 疾病管理
+
+最近的疫情使人们对机器学习如何帮助阻止疾病传播有了更深的认识。在这篇文章中,你会看到 ARIMA、逻辑曲线、线性回归和 SARIMA 的使用。“这项工作试图计算这种病毒的传播率,从而预测死亡、康复和确诊病例,以便我们能够更好地准备和生存。”
+[参考](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7979218/)
+
+## 🌲 生态与绿色科技
+
+自然和生态由许多敏感的系统组成,动物和自然之间的相互作用成为焦点。准确测量这些系统并在发生情况时采取适当行动(例如森林火灾或动物数量下降)是非常重要的。
+
+### 森林管理
+
+你在之前的课程中学到了 [强化学习](../../8-Reinforcement/README.md)。它在预测自然模式时非常有用。特别是,它可以用于跟踪生态问题,例如森林火灾和入侵物种的传播。在加拿大,一组研究人员使用强化学习从卫星图像中构建了森林火灾动态模型。使用一种创新的“空间传播过程 (SSP)”,他们将森林火灾视为“景观中任何单元格的代理”。“火灾在任何时间点从某个位置采取的行动集合包括向北、向南、向东或向西传播或不传播。
+
+这种方法颠倒了通常的 RL 设置,因为相应的马尔可夫决策过程 (MDP) 的动态是立即火灾传播的已知函数。”阅读更多关于该组使用的经典算法的链接如下。
+[参考](https://www.frontiersin.org/articles/10.3389/fict.2018.00006/full)
+
+### 动物运动传感
+
+虽然深度学习在视觉跟踪动物运动方面引发了一场革命(你可以在这里构建自己的 [北极熊追踪器](https://docs.microsoft.com/learn/modules/build-ml-model-with-azure-stream-analytics/?WT.mc_id=academic-77952-leestott)),但经典机器学习在这个任务中仍然有一席之地。
+
+用于跟踪农场动物运动的传感器和物联网利用了这种类型的视觉处理,但更基本的机器学习技术对于预处理数据非常有用。例如,在这篇论文中,使用各种分类器算法监测和分析了绵羊的姿势。你可能会在第 335 页看到 ROC 曲线。
+[参考](https://druckhaus-hofmann.de/gallery/31-wj-feb-2020.pdf)
+
+### ⚡️ 能源管理
+
+在我们关于[时间序列预测](../../7-TimeSeries/README.md)的课程中,我们提到了通过了解供需关系来为一个小镇生成收入的智能停车计时器的概念。本文详细讨论了聚类、回归和时间序列预测如何结合起来,帮助预测爱尔兰未来的能源使用,基于智能计量。
+[参考](https://www-cdn.knime.com/sites/default/files/inline-images/knime_bigdata_energy_timeseries_whitepaper.pdf)
+
+## 💼 保险
+
+保险行业是另一个使用机器学习来构建和优化可行的金融和精算模型的行业。
+
+### 波动性管理
+
+MetLife,一家人寿保险提供商,公开了他们分析和缓解财务模型波动性的方法。在这篇文章中,你会注意到二元和序数分类可视化。你还会发现预测可视化。
+[参考](https://investments.metlife.com/content/dam/metlifecom/us/investments/insights/research-topics/macro-strategy/pdf/MetLifeInvestmentManagement_MachineLearnedRanking_070920.pdf)
+
+## 🎨 艺术、文化与文学
+
+在艺术领域,例如新闻业,有许多有趣的问题。检测假新闻是一个巨大的问题,因为它已被证明会影响人们的意见,甚至颠覆民主。博物馆也可以通过使用机器学习在从发现文物之间的联系到资源规划的各个方面受益。
+
+### 假新闻检测
+
+在当今的媒体中,检测假新闻已成为猫捉老鼠的游戏。在这篇文章中,研究人员建议测试结合我们学习的几种机器学习技术的系统,并部署最佳模型:“该系统基于自然语言处理从数据中提取特征,然后这些特征用于训练机器学习分类器,如朴素贝叶斯、支持向量机 (SVM)、随机森林 (RF)、随机梯度下降 (SGD) 和逻辑回归 (LR)。”
+[参考](https://www.irjet.net/archives/V7/i6/IRJET-V7I6688.pdf)
+
+这篇文章展示了如何结合不同的机器学习领域可以产生有趣的结果,帮助阻止假新闻传播并造成实际损害;在这种情况下,动机是关于 COVID 治疗的谣言传播引发了暴力。
+
+### 博物馆机器学习
+
+博物馆正处于 AI 革命的前沿,随着技术的进步,编目和数字化收藏以及发现文物之间的联系变得更加容易。像 [In Codice Ratio](https://www.sciencedirect.com/science/article/abs/pii/S0306457321001035#:~:text=1.,studies%20over%20large%20historical%20sources.) 这样的项目正在帮助揭开梵蒂冈档案馆等无法访问的收藏的神秘面纱。但是,博物馆的业务方面也受益于机器学习模型。
+
+例如,芝加哥艺术博物馆建立了模型来预测观众的兴趣以及他们何时会参观展览。目标是每次用户访问博物馆时创建个性化和优化的访问体验。“在 2017 财年,模型预测的出席率和入场率的准确率在 1% 以内,”芝加哥艺术博物馆高级副总裁 Andrew Simnick 说。
+[参考](https://www.chicagobusiness.com/article/20180518/ISSUE01/180519840/art-institute-of-chicago-uses-data-to-make-exhibit-choices)
+
+## 🏷 市场营销
+
+### 客户细分
+
+最有效的营销策略是根据不同的群体以不同的方式针对客户。在这篇文章中,讨论了聚类算法的使用,以支持差异化营销。差异化营销帮助公司提高品牌认知度,吸引更多客户,并赚取更多利润。
+[参考](https://ai.inqline.com/machine-learning-for-marketing-customer-segmentation/)
+
+## 🚀 挑战
+
+找出另一个受益于本课程中所学技术的行业,并了解它如何使用机器学习。
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/50/)
+
+## 复习与自学
+
+Wayfair的数据科学团队有几个关于他们公司如何使用机器学习的有趣视频。值得[一看](https://www.youtube.com/channel/UCe2PjkQXqOuwkW1gw6Ameuw/videos)!
+
+## 作业
+
+[机器学习寻宝游戏](assignment.md)
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业人工翻译。我们不对因使用本翻译而产生的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/9-Real-World/1-Applications/assignment.md b/translations/zh/9-Real-World/1-Applications/assignment.md
new file mode 100644
index 000000000..de4bedca1
--- /dev/null
+++ b/translations/zh/9-Real-World/1-Applications/assignment.md
@@ -0,0 +1,16 @@
+# 机器学习寻宝游戏
+
+## 指示
+
+在本课中,你了解了许多通过经典机器学习解决的实际应用案例。尽管深度学习、人工智能中的各种新技术与新工具以及神经网络的应用加快了这些领域中工具的产出,但运用本课程所学技术的经典机器学习仍然具有很大的价值。
+
+在这个作业中,想象你正在参加一个黑客马拉松。利用你在课程中学到的知识,提出一个使用经典机器学习解决本课讨论的某个领域问题的方案。制作一个演示文稿,讨论你将如何实现你的想法。如果你能收集样本数据并构建一个支持你概念的机器学习模型,还可以获得额外加分!
+
+## 评分标准
+
+| 标准 | 模范表现 | 合格表现 | 需要改进 |
+| -------- | ---------------------------------------------------------------- | --------------------------------------------- | ---------------------- |
+| | 提交了一个PowerPoint演示文稿 - 构建模型可以获得额外加分 | 提交了一个非创新的基础演示文稿 | 工作不完整 |
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议使用专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担责任。
\ No newline at end of file
diff --git a/translations/zh/9-Real-World/2-Debugging-ML-Models/README.md b/translations/zh/9-Real-World/2-Debugging-ML-Models/README.md
new file mode 100644
index 000000000..c61a762b2
--- /dev/null
+++ b/translations/zh/9-Real-World/2-Debugging-ML-Models/README.md
@@ -0,0 +1,174 @@
+# 后记:使用负责任的AI仪表板组件进行机器学习模型调试
+
+
+## [课前测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/5/)
+
+## 介绍
+
+机器学习影响着我们的日常生活。人工智能正在逐渐融入一些对个人和社会至关重要的系统,如医疗、金融、教育和就业。例如,系统和模型参与了日常决策任务,如医疗诊断或欺诈检测。因此,随着人工智能的进步和加速采用,社会期望也在不断演变,法规也在不断增加。我们经常看到人工智能系统未能达到预期的领域;它们暴露了新的挑战;政府也开始对人工智能解决方案进行监管。因此,分析这些模型以提供公平、可靠、包容、透明和负责任的结果对每个人都非常重要。
+
+在本课程中,我们将探讨一些实用工具,用于评估模型是否存在负责任的人工智能方面的问题。传统的机器学习调试技术往往基于定量计算,如聚合准确率或平均误差损失。想象一下,当你用来构建这些模型的数据缺乏某些人口统计信息(如种族、性别、政治观点、宗教),或者这些人口统计信息在数据中占比不均时,会发生什么?又或者,当模型输出被解释为偏向某些人口统计群体时会怎样?这可能导致这些敏感特征群体在数据中被过度或不足地代表,从而使模型在公平性、包容性或可靠性方面出现问题。另一个因素是,机器学习模型常被视为黑箱,这使得理解和解释模型预测的驱动因素变得困难。当数据科学家和人工智能开发人员缺乏足够的工具来调试和评估模型的公平性或可信度时,这些都是他们面临的挑战。
+
+在本课程中,您将学习如何使用以下方法调试您的模型(列表之后给出一个最小的构建示意):
+
+- **错误分析**:识别模型在数据分布中错误率较高的地方。
+- **模型概览**:在不同的数据群体中进行比较分析,发现模型性能指标的差异。
+- **数据分析**:调查数据的过度或不足代表性,可能导致模型偏向某些数据人口统计。
+- **特征重要性**:了解哪些特征在全局或局部层面驱动模型的预测。
+
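+以下是一个最小示意,展示如何用开源 Responsible AI Toolbox 的 `responsibleai` 与 `raiwidgets` 包把这些组件组合进一个仪表板(数据集与模型仅为演示假设):
+
+```python
+# 最小示意:用一个小的 sklearn 模型构建 RAI 仪表板
+from sklearn.datasets import load_breast_cancer
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+from responsibleai import RAIInsights
+from raiwidgets import ResponsibleAIDashboard
+
+data = load_breast_cancer(as_frame=True)
+train_df, test_df = train_test_split(data.frame, random_state=0)
+model = RandomForestClassifier().fit(train_df.drop(columns="target"), train_df["target"])
+
+rai = RAIInsights(model, train_df, test_df, target_column="target", task_type="classification")
+rai.error_analysis.add()   # 错误分析组件
+rai.explainer.add()        # 特征重要性(可解释性)组件
+rai.compute()
+ResponsibleAIDashboard(rai)  # 仪表板中同时包含模型概览与数据分析视图
+```
+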
+## 前提条件
+
+作为前提条件,请查看 [开发人员的负责任AI工具](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+
+> 
+
+## 错误分析
+
+用于测量准确率的传统模型性能指标大多基于正确与错误预测的计算。例如,确定模型在89%的情况下预测准确、误差损失为0.001,就可以被认为是良好的性能。然而,错误在底层数据集中的分布往往并不均匀。你可能获得89%的模型准确率,却发现模型在数据的某些区域中有42%的情况会出错。这些特定数据组的失败模式可能导致公平性或可靠性问题。因此,了解模型表现良好或不佳的区域至关重要;模型中不准确率较高的数据区域,可能恰恰对应着重要的数据人群。
+
+
+
+RAI仪表板上的错误分析组件通过树形可视化展示了模型失败在各个群体中的分布情况。这对于识别数据集中错误率较高的特征或区域非常有用。通过查看模型大部分不准确的来源,您可以开始调查根本原因。您还可以创建数据群体进行分析。这些数据群体有助于调试过程,以确定为什么模型在一个群体中表现良好,而在另一个群体中表现错误。
+
+
+
+树形图上的视觉指示器有助于更快地定位问题区域。例如,树节点的红色越深,错误率就越高。
+
+热图是另一种可视化功能,用户可以使用一个或两个特征来调查错误率,以查找整个数据集或群体中导致模型错误的因素。
+
+
+
+使用错误分析时,您需要:
+
+* 深入了解模型失败在数据集和多个输入和特征维度中的分布情况。
+* 分解聚合性能指标,以自动发现错误群体,以便采取有针对性的缓解措施。
+
+## 模型概览
+
+评估机器学习模型的性能需要全面了解其行为。这可以通过查看多个指标(如错误率、准确性、召回率、精度或MAE(平均绝对误差))来发现性能指标之间的差异来实现。一个性能指标可能看起来很好,但在另一个指标中可能会暴露不准确性。此外,比较整个数据集或群体中的指标差异有助于揭示模型表现良好或不良的地方。这在查看模型在敏感特征(如患者种族、性别或年龄)与非敏感特征之间的表现时尤为重要,以发现模型可能存在的潜在不公平。例如,发现模型在包含敏感特征的群体中更容易出错,可以揭示模型可能存在的潜在不公平。
+
+RAI仪表板的模型概览组件不仅有助于分析数据表示在群体中的性能指标,还为用户提供了比较不同群体中模型行为的能力。
+
+
+
+该组件的基于特征的分析功能允许用户缩小特定特征中的数据子组,以更细粒度地识别异常。例如,仪表板具有内置智能,自动为用户选择的特征生成群体(例如,*"time_in_hospital < 3"* 或 *"time_in_hospital >= 7"*)。这使用户能够从较大的数据组中隔离出特定特征,以查看其是否是模型错误结果的关键影响因素。
+
+
+
+模型概览组件支持两类差异指标:
+
+**模型性能差异**:这些指标集计算数据子组中所选性能指标值的差异(差距)。以下是一些示例:
+
+* 准确率差异
+* 错误率差异
+* 精度差异
+* 召回率差异
+* 平均绝对误差(MAE)差异
+
+**选择率差异**:该指标包含数据子组中选择率(有利预测)的差异。一个示例是贷款批准率的差异。选择率指的是每个类别中被分类为1的数据点的比例(在二分类中)或预测值的分布(在回归中)。
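+
+一个最小示意(数据为假设值):按子组计算选择率,并求组间差异。
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({
+    "group": ["A", "A", "A", "B", "B", "B"],    # 假设的子组划分
+    "pred":  [1, 0, 1, 0, 0, 1],                # 模型的二分类预测
+})
+rates = df.groupby("group")["pred"].mean()       # 每组被预测为1的比例(选择率)
+print(rates)
+print("选择率差异:", rates.max() - rates.min())
+```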
+
+## 数据分析
+
+> "如果你对数据进行足够长时间的折磨,它会承认任何事情" - 罗纳德·科斯
+
+这句话听起来极端,但数据确实可以被操纵以支持任何结论。这种操纵有时可能是无意的。作为人类,我们都有偏见,通常很难有意识地知道何时在数据中引入偏见。确保人工智能和机器学习中的公平性仍然是一个复杂的挑战。
+
+数据是传统模型性能指标的一个巨大盲点。您可能有很高的准确性评分,但这并不总是反映数据集中可能存在的潜在数据偏见。例如,如果一个公司高管职位的数据集中有27%的女性和73%的男性,一个基于此数据训练的职位广告AI模型可能主要面向男性观众发布高级职位。这种数据的不平衡使模型的预测偏向一个性别。这揭示了AI模型中的性别偏见问题。
+
+RAI仪表板上的数据分析组件有助于识别数据集中过度和不足代表的区域。它帮助用户诊断由数据不平衡或缺乏特定数据组的代表性引起的错误和公平性问题的根本原因。这使用户能够根据预测和实际结果、错误组和特定特征可视化数据集。有时发现一个代表性不足的数据组还可以揭示模型学习不佳,从而导致高不准确性。具有数据偏见的模型不仅是一个公平性问题,还表明模型不具包容性或可靠性。
+
+
+
+
+使用数据分析时,您需要:
+
+* 通过选择不同的过滤器来探索数据集统计信息,将数据切分为不同的维度(也称为群体)。
+* 了解数据集在不同群体和特征组中的分布情况。
+* 确定与公平性、错误分析和因果关系相关的发现(由其他仪表板组件得出)是否是数据集分布的结果。
+* 决定在哪些领域收集更多数据,以缓解由于代表性问题、标签噪声、特征噪声、标签偏见等因素导致的错误。
+
+## 模型可解释性
+
+机器学习模型往往是黑箱。了解哪些关键数据特征驱动模型的预测可能具有挑战性。提供模型为何做出某个预测的透明性非常重要。例如,如果一个AI系统预测某糖尿病患者在不到30天内有被重新入院的风险,它应该能够提供支持其预测的数据。具有支持数据指标可以带来透明性,帮助临床医生或医院做出明智的决策。此外,能够解释为什么模型对个别患者做出预测,有助于符合健康法规的问责制。当您使用机器学习模型影响人们的生活时,理解和解释模型行为的驱动因素至关重要。模型解释性和可解释性有助于回答以下场景中的问题:
+
+* 模型调试:为什么我的模型会犯这个错误?如何改进我的模型?
+* 人工智能协作:我如何理解和信任模型的决策?
+* 法规合规:我的模型是否满足法律要求?
+
+RAI仪表板的特征重要性组件有助于调试并全面了解模型如何做出预测。它也是机器学习专业人员和决策者的有用工具,用于解释和展示影响模型行为的特征,以满足法规要求。接下来,用户可以探索全局和局部解释,以验证哪些特征驱动模型的预测。全局解释列出了影响模型整体预测的主要特征。局部解释显示了导致模型对个别案例做出预测的特征。评估局部解释的能力在调试或审计特定案例时也很有帮助,以更好地理解和解释模型为什么做出准确或不准确的预测。
+
+
+
+* 全局解释:例如,哪些特征影响糖尿病医院再入院模型的整体行为?
+* 局部解释:例如,为什么一个超过60岁、曾有住院经历的糖尿病患者被预测为在30天内再次入院或不再次入院?
+
+在检查模型在不同群体中的性能的调试过程中,特征重要性显示了特征在各个群体中的影响程度。它有助于在比较特征在驱动模型错误预测中的影响程度时揭示异常。特征重要性组件可以显示特征中的哪些值对模型的结果产生了正面或负面影响。例如,如果模型做出了不准确的预测,该组件使您能够深入了解并确定哪些特征或特征值驱动了预测。这种细节不仅有助于调试,还在审计情况下提供了透明性和问责制。最后,该组件可以帮助您识别公平性问题。举例来说,如果一个敏感特征如种族或性别在驱动模型预测中具有很高的影响力,这可能表明模型中存在种族或性别偏见。
+
+
+
+使用解释性时,您需要:
+
+* 通过了解哪些特征对预测最重要来确定您的AI系统预测的可信度。
+* 通过首先理解模型并确定模型是否使用健康特征或仅仅是错误关联来进行模型调试。
+* 通过了解模型是否基于敏感特征或与之高度相关的特征进行预测来发现潜在的公平性来源。
+* 通过生成局部解释来展示模型决策的结果,从而建立用户对模型决策的信任。
+* 完成AI系统的法规审计,以验证模型并监控模型决策对人类的影响。
+
+## 结论
+
+所有的RAI仪表板组件都是实用工具,帮助您构建对社会危害更小、更值得信赖的机器学习模型。它们有助于防范对人权的威胁、对某些群体生活机会的歧视或排斥,以及造成身体或心理伤害的风险,还能通过生成局部解释来展示模型决策的依据,从而建立用户对模型决策的信任。某些潜在的危害可以分类为:
+
+- **分配**,例如,如果某个性别或种族被优待。
+- **服务质量**。如果您为一个特定场景训练数据,但现实要复杂得多,这会导致服务表现不佳。
+- **刻板印象**。将某个特定群体与预先分配的属性联系起来。
+- **贬低**。不公平地批评和贴标签。
+- **过度或不足代表性**。某个群体在某个职业中不被看到,任何继续推广这种情况的服务或功能都在助长危害。
+
+### Azure RAI仪表板
+
+[Azure RAI仪表板](https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu) 基于领先学术机构和组织(包括微软)开发的开源工具,为数据科学家和AI开发人员更好地理解模型行为、发现和缓解AI模型中的不良问题提供了重要帮助。
+
+- 通过查看RAI仪表板 [文档](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-responsible-ai-dashboard?WT.mc_id=aiml-90525-ruyakubu) 学习如何使用不同的组件。
+
+- 查看一些RAI仪表板 [示例笔记本](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks) 以在Azure机器学习中调试更多负责任的AI场景。
+
+---
+## 🚀 挑战
+
+为了防止统计或数据偏见的引入,我们应该:
+
+- 让从事系统工作的人员拥有多样的背景和观点
+- 投资于反映我们社会多样性的数据集
+- 发展更好的方法来检测和纠正偏见
+
+思考在模型构建和使用中存在不公平的现实场景。我们还应该考虑什么?
+
+## [课后测验](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/6/)
+## 复习与自学
+
+在本课中,您学习了一些在机器学习中融入负责任AI的实际工具。
+
+观看此工作坊以深入了解这些主题:
+
+- 负责任AI仪表板:负责任AI实践的一站式解决方案,由Besmira Nushi和Mehrnoosh Sameki主讲
+
+[](https://www.youtube.com/watch?v=f1oaDNl3djg "负责任AI仪表板:负责任AI实践的一站式解决方案")
+
+> 🎥 点击上方图片观看视频:负责任AI仪表板:负责任AI实践的一站式解决方案,由Besmira Nushi和Mehrnoosh Sameki主讲
+
+参考以下材料以了解更多关于负责任AI以及如何构建更可信模型的信息:
+
+- 微软的RAI仪表板工具用于调试ML模型:[负责任AI工具资源](https://aka.ms/rai-dashboard)
+
+- 探索负责任AI工具包:[Github](https://github.com/microsoft/responsible-ai-toolbox)
+
+- 微软的RAI资源中心:[负责任AI资源 – Microsoft AI](https://www.microsoft.com/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4)
+
+- 微软的FATE研究组:[FATE: Fairness, Accountability, Transparency, and Ethics in AI - Microsoft Research](https://www.microsoft.com/research/theme/fate/)
+
+## 作业
+
+[探索RAI仪表板](assignment.md)
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文档视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用此翻译而引起的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/9-Real-World/2-Debugging-ML-Models/assignment.md b/translations/zh/9-Real-World/2-Debugging-ML-Models/assignment.md
new file mode 100644
index 000000000..066ee8217
--- /dev/null
+++ b/translations/zh/9-Real-World/2-Debugging-ML-Models/assignment.md
@@ -0,0 +1,14 @@
+# 探索负责任的人工智能(RAI)仪表板
+
+## 说明
+
+在本课程中,您了解了RAI仪表板,这是一个基于“开源”工具构建的组件套件,帮助数据科学家进行错误分析、数据探索、公平性评估、模型可解释性、反事实/假设评估和因果分析。对于此作业,请探索一些RAI仪表板的示例[notebooks](https://github.com/Azure/RAI-vNext-Preview/tree/main/examples/notebooks),并在论文或演示文稿中报告您的发现。
+
+## 评分标准
+
+| 标准 | 杰出 | 充分 | 需要改进 |
+| ---- | ---- | ---- | -------- |
+| | 提交的论文或PPT演示文稿讨论了RAI仪表板的组件、运行的notebook以及从中得出的结论 | 提交的论文没有结论 | 未提交论文 |
+
+**免责声明**:
+本文档是使用基于机器的人工智能翻译服务翻译的。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议使用专业的人类翻译。我们对使用此翻译而引起的任何误解或误读不承担责任。
\ No newline at end of file
diff --git a/translations/zh/9-Real-World/README.md b/translations/zh/9-Real-World/README.md
new file mode 100644
index 000000000..6760d49f7
--- /dev/null
+++ b/translations/zh/9-Real-World/README.md
@@ -0,0 +1,21 @@
+# 后记:经典机器学习的实际应用
+
+在本课程的这一部分中,你将了解经典机器学习的一些实际应用。我们在互联网上搜寻了白皮书和文章,介绍了使用这些策略的应用,尽量避免涉及神经网络、深度学习和人工智能。了解机器学习在商业系统、生态应用、金融、艺术和文化等方面的应用。
+
+
+
+> 照片由 Alexis Fauvet 拍摄,发布在 Unsplash
+
+## 课程
+
+1. [机器学习的实际应用](1-Applications/README.md)
+2. [使用Responsible AI仪表板组件进行机器学习模型调试](2-Debugging-ML-Models/README.md)
+
+## 致谢
+
+"实际应用" 由一组团队编写,包括 [Jen Looper](https://twitter.com/jenlooper) 和 [Ornella Altunyan](https://twitter.com/ornelladotcom)。
+
+"使用Responsible AI仪表板组件进行机器学习模型调试" 由 [Ruth Yakubu](https://twitter.com/ruthieyakubu) 编写。
+
+**免责声明**:
+本文件使用基于机器的人工智能翻译服务进行翻译。尽管我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原文档视为权威来源。对于关键信息,建议进行专业的人类翻译。我们不对使用此翻译引起的任何误解或曲解承担责任。
\ No newline at end of file
diff --git a/translations/zh/CODE_OF_CONDUCT.md b/translations/zh/CODE_OF_CONDUCT.md
new file mode 100644
index 000000000..f2f52779f
--- /dev/null
+++ b/translations/zh/CODE_OF_CONDUCT.md
@@ -0,0 +1,12 @@
+# Microsoft 开源行为准则
+
+此项目已采用 [Microsoft 开源行为准则](https://opensource.microsoft.com/codeofconduct/)。
+
+资源:
+
+- [Microsoft 开源行为准则](https://opensource.microsoft.com/codeofconduct/)
+- [Microsoft 行为准则常见问题](https://opensource.microsoft.com/codeofconduct/faq/)
+- 如有问题或疑虑,请联系 [opencode@microsoft.com](mailto:opencode@microsoft.com)
+
+**免责声明**:
+本文档使用基于机器的AI翻译服务进行翻译。尽管我们力求准确,但请注意,自动翻译可能包含错误或不准确之处。应将原文档的母语版本视为权威来源。对于关键信息,建议进行专业人工翻译。我们不对因使用此翻译而引起的任何误解或误读承担责任。
\ No newline at end of file
diff --git a/translations/zh/CONTRIBUTING.md b/translations/zh/CONTRIBUTING.md
new file mode 100644
index 000000000..bdd807e58
--- /dev/null
+++ b/translations/zh/CONTRIBUTING.md
@@ -0,0 +1,12 @@
+# 贡献
+
+此项目欢迎贡献和建议。大多数贡献需要您同意贡献者许可协议 (CLA),声明您有权并确实授予我们使用您贡献的权利。详情请访问 https://cla.microsoft.com。
+
+> 重要提示:在翻译此库中的文本时,请确保不使用机器翻译。我们将通过社区验证翻译,因此请仅在您精通的语言中自愿进行翻译。
+
+当您提交拉取请求时,CLA-bot 会自动确定您是否需要提供 CLA 并适当地装饰 PR(例如标签、评论)。只需按照 bot 提供的指示操作。您只需要在所有使用我们 CLA 的库中执行一次此操作。
+
+此项目采用了 [Microsoft 开源行为准则](https://opensource.microsoft.com/codeofconduct/)。有关更多信息,请参阅 [行为准则常见问题](https://opensource.microsoft.com/codeofconduct/faq/) 或联系 [opencode@microsoft.com](mailto:opencode@microsoft.com) 以获取任何其他问题或评论。
+
+**免责声明**:
+本文件是使用基于机器的人工智能翻译服务翻译的。虽然我们努力确保准确性,但请注意,自动翻译可能包含错误或不准确之处。应将原始语言的文件视为权威来源。对于关键信息,建议进行专业人工翻译。对于因使用本翻译而产生的任何误解或误读,我们不承担任何责任。
\ No newline at end of file
diff --git a/translations/zh/README.md b/translations/zh/README.md
new file mode 100644
index 000000000..3fa9a7b9d
--- /dev/null
+++ b/translations/zh/README.md
@@ -0,0 +1,155 @@
+[](https://github.com/microsoft/ML-For-Beginners/blob/master/LICENSE)
+[](https://GitHub.com/microsoft/ML-For-Beginners/graphs/contributors/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/issues/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/pulls/)
+[](http://makeapullrequest.com)
+
+[](https://GitHub.com/microsoft/ML-For-Beginners/watchers/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/network/)
+[](https://GitHub.com/microsoft/ML-For-Beginners/stargazers/)
+
+[](https://discord.gg/zxKYvhSnVp?WT.mc_id=academic-000002-leestott)
+
+# 初学者的机器学习 - 课程
+
+> 🌍 跟随我们一起环游世界,通过世界文化来探索机器学习 🌍
+
+微软的云倡导者很高兴为大家提供一个为期12周、共26课的**机器学习**课程。在这个课程中,你将学习到有时被称为**经典机器学习**的内容,主要使用Scikit-learn库,并避免涉及深度学习(在我们的[AI初学者课程](https://aka.ms/ai4beginners)中有详细介绍)。你也可以将这些课程与我们的['数据科学初学者课程'](https://aka.ms/ds4beginners)结合起来学习!
+
+跟随我们环游世界,将这些经典技术应用于世界各地的数据。每节课包括课前和课后测验、完成课程的书面指导、解决方案、作业等。我们的项目驱动教学法让你在构建中学习,这是一种经过验证的、行之有效的新技能学习方式。
+
+**✍️ 衷心感谢我们的作者** Jen Looper, Stephen Howell, Francesca Lazzeri, Tomomi Imura, Cassie Breviu, Dmitry Soshnikov, Chris Noring, Anirban Mukherjee, Ornella Altunyan, Ruth Yakubu 和 Amy Boyd
+
+**🎨 同样感谢我们的插画师** Tomomi Imura, Dasani Madipalli 和 Jen Looper
+
+**🙏 特别感谢 🙏 我们的微软学生大使作者、审阅者和内容贡献者**,尤其是 Rishit Dagli, Muhammad Sakib Khan Inan, Rohan Raj, Alexandru Petrescu, Abhishek Jaiswal, Nawrin Tabassum, Ioan Samuila 和 Snigdha Agarwal
+
+**🤩 额外感谢微软学生大使 Eric Wanjau, Jasleen Sondhi 和 Vidushi Gupta 为我们的R课程提供帮助!**
+
+# 开始
+
+按照以下步骤操作:
+1. **Fork 这个仓库**:点击页面右上角的"Fork"按钮。
+2. **克隆这个仓库**: `git clone https://github.com/microsoft/ML-For-Beginners.git`
+
+> [在我们的Microsoft Learn合集里找到所有额外资源](https://learn.microsoft.com/en-us/collections/qrqzamz1nn2wx3?WT.mc_id=academic-77952-bethanycheum)
+
+**[学生们](https://aka.ms/student-page)**,要使用这个课程,请将整个仓库fork到你自己的GitHub账户中,并独自或与小组一起完成练习:
+
+- 从课前测试开始。
+- 阅读课程并完成活动,在每次知识检查时暂停并反思。
+- 尝试通过理解课程内容来创建项目,而不是直接运行解决方案代码;不过这些代码在每个项目导向的课程的`/solution`文件夹中都可以找到。
+- 完成课后测试。
+- 完成挑战。
+- 完成作业。
+- 完成一个课程组后,访问[讨论板](https://github.com/microsoft/ML-For-Beginners/discussions)并通过填写适当的PAT评分表来“公开学习”。PAT是一个进度评估工具,你可以通过填写评分表来进一步学习。你也可以对其他PAT做出反应,以便我们一起学习。
+
+> 进一步学习,我们推荐你跟随这些[Microsoft Learn](https://docs.microsoft.com/en-us/users/jenlooper-2911/collections/k7o7tg1gp306q4?WT.mc_id=academic-77952-leestott)模块和学习路径。
+
+**教师们**,我们[提供了一些建议](for-teachers.md)关于如何使用这个课程。
+
+---
+
+## 视频讲解
+
+部分课程有短视频形式。你可以在课程中找到所有这些视频,或者点击下图在[微软开发者YouTube频道的初学者机器学习播放列表](https://aka.ms/ml-beginners-videos)中观看。
+
+[](https://aka.ms/ml-beginners-videos)
+
+---
+
+## 团队介绍
+
+[](https://youtu.be/Tj1XWrDSYJU "宣传视频")
+
+**Gif 制作** [Mohit Jaisal](https://linkedin.com/in/mohitjaisal)
+
+> 🎥 点击上图观看关于项目和创建者的视频!
+
+---
+
+## 教学法
+
+我们在设计这个课程时选择了两个教学原则:确保它是动手的**项目驱动**和包含**频繁的测验**。此外,这个课程有一个共同的**主题**,以确保连贯性。
+
+通过确保内容与项目对齐,这个过程变得更有吸引力,学生对概念的记忆也会增强。此外,课前低风险测验可以让学生集中注意力学习一个主题,而课后测验可以进一步巩固记忆。这个课程设计灵活有趣,可以整体或部分完成。项目从小开始,到12周周期结束时逐渐变得复杂。这个课程还包括一个关于机器学习在现实世界应用的附录,可以作为额外学分或讨论的基础。
+
+> 查找我们的[行为准则](CODE_OF_CONDUCT.md),[贡献指南](CONTRIBUTING.md)和[翻译指南](TRANSLATIONS.md)。我们欢迎你的建设性反馈!
+
+## 每节课包括
+
+- 可选的草图笔记
+- 可选的补充视频
+- 视频讲解(部分课程)
+- 课前热身测验
+- 书面课程
+- 对于项目导向的课程,提供逐步构建项目的指南
+- 知识检查
+- 挑战
+- 补充阅读
+- 作业
+- 课后测验
+
+> **关于语言的说明**:这些课程主要用Python编写,但许多课程也有R版本。要完成R课程,请转到 `/solution` 文件夹并查找R课程。它们带有 .rmd 扩展名,即 **R Markdown** 文件。R Markdown 文件可以简单理解为在 Markdown 文档中嵌入了 `code chunks`(R或其他语言的代码块)和 `YAML header`(指导如何格式化PDF等输出)。作为一个优秀的数据科学创作框架,它允许你将代码、输出和想法结合起来,用Markdown写下来。此外,R Markdown文档还可以渲染为PDF、HTML或Word等输出格式。
+
+> **关于测验的说明**:所有测验都包含在[测验应用文件夹](../../quiz-app)中,共有52个测验,每个测验有三道题。它们在课程中链接,但测验应用可以本地运行;按照`quiz-app`文件夹中的说明进行本地托管或部署到Azure。
+
+| 课程编号 | 主题 | 课程分组 | 学习目标 | 链接课程 | 作者 |
+| :-----------: | :------------------------------------------------------------: | :-------------------------------------------------: | ------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------: |
+| 01 | 机器学习介绍 | [介绍](1-Introduction/README.md) | 学习机器学习的基本概念 | [课程](1-Introduction/1-intro-to-ML/README.md) | Muhammad |
+| 02 | 机器学习的历史 | [介绍](1-Introduction/README.md) | 学习这一领域的历史 | [课程](1-Introduction/2-history-of-ML/README.md) | Jen 和 Amy |
+| 03 | 机器学习中的公平性 | [介绍](1-Introduction/README.md) | 学生在构建和应用机器学习模型时应该考虑哪些重要的哲学问题? | [课程](1-Introduction/3-fairness/README.md) | Tomomi |
+| 04 | 机器学习技术 | [介绍](1-Introduction/README.md) | 机器学习研究人员使用哪些技术来构建机器学习模型? | [课程](1-Introduction/4-techniques-of-ML/README.md) | Chris 和 Jen |
+| 05 | 回归介绍 | [Regression](2-Regression/README.md) | 开始使用Python和Scikit-learn进行回归模型构建 |