From dc7d81b5a42f1178ce284e71755ab65406b10dc5 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Wed, 28 Jul 2021 19:12:54 +0530
Subject: [PATCH 1/8] Update README.md
---
2-Regression/4-Logistic/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/2-Regression/4-Logistic/README.md b/2-Regression/4-Logistic/README.md
index a4488c11..f694a554 100644
--- a/2-Regression/4-Logistic/README.md
+++ b/2-Regression/4-Logistic/README.md
@@ -284,7 +284,7 @@ In future lessons on classifications, you will learn how to iterate to improve y
---
## 🚀Challenge
-There's a lot more to unpack regarding logistic regression! But the best way to learn is to experiment. Find a dataset that lends itself to this type of analysis and build a model with it. What do you learn? tip: try [Kaggle](https://kaggle.com) for interesting datasets.
+There's a lot more to unpack regarding logistic regression! But the best way to learn is to experiment. Find a dataset that lends itself to this type of analysis and build a model with it. What do you learn? Tip: try [Kaggle](https://www.kaggle.com/search?q=logistic+regression+datasets) for interesting datasets.
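If a starting point helps, here is a minimal sketch of fitting a logistic regression model on a Kaggle-style CSV with Scikit-learn. The file name and column names are placeholders, not part of any lesson dataset:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder file and columns: swap in the dataset you find on Kaggle.
df = pd.read_csv("my_kaggle_dataset.csv").dropna()
X = df[["feature_1", "feature_2"]]   # numeric predictor columns
y = df["label"]                      # binary target column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```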
## [Post-lecture quiz](https://jolly-sea-0a877260f.azurestaticapps.net/quiz/16/)
## Review & Self Study
From 03015f91c4073aa8ab281e84eb46f23c0210e60b Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Wed, 28 Jul 2021 19:18:39 +0530
Subject: [PATCH 2/8] Update README.md
---
2-Regression/1-Tools/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/2-Regression/1-Tools/README.md b/2-Regression/1-Tools/README.md
index e36c34fe..6225f7cd 100644
--- a/2-Regression/1-Tools/README.md
+++ b/2-Regression/1-Tools/README.md
@@ -95,7 +95,7 @@ For this task we will import some libraries:
- **matplotlib**. It's a useful [graphing tool](https://matplotlib.org/) and we will use it to create a line plot.
- **numpy**. [numpy](https://numpy.org/doc/stable/user/whatisnumpy.html) is a useful library for handling numeric data in Python.
-- **sklearn**. This is the Scikit-learn library.
+- **sklearn**. This is the [Scikit-learn](https://scikit-learn.org/stable/user_guide.html) library.
Import some libraries to help with your tasks.
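As a rough sketch (not the notebook's exact cell), those imports typically look like this:

```python
import matplotlib.pyplot as plt                # line plots and other charts
import numpy as np                             # numeric arrays
from sklearn import datasets, linear_model     # Scikit-learn data and regression tools
from sklearn.model_selection import train_test_split
```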
From 31826eec95ac7821b4ab8b74d369a1da6473f385 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Wed, 28 Jul 2021 20:13:21 +0530
Subject: [PATCH 3/8] Update README.md
Stanford's K-Means Simulator has been removed from its original location. Added a link to a new K-Means simulator.
---
5-Clustering/2-K-Means/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/5-Clustering/2-K-Means/README.md b/5-Clustering/2-K-Means/README.md
index 153932e6..bd59e080 100644
--- a/5-Clustering/2-K-Means/README.md
+++ b/5-Clustering/2-K-Means/README.md
@@ -242,7 +242,7 @@ Hint: Try to scale your data. There's commented code in the notebook that adds s
## Review & Self Study
-Take a look at Stanford's K-Means Simulator [here](https://stanford.edu/class/engr108/visualizations/kmeans/kmeans.html). You can use this tool to visualize sample data points and determine its centroids. With fresh data, click 'update' to see how long it takes to find convergence. You can edit the data's randomness, numbers of clusters and numbers of centroids. Does this help you get an idea of how the data can be grouped?
+Take a look at a K-Means Simulator [here](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). You can use this tool to visualize sample data points and determine its centroids. You can edit the data's randomness, numbers of clusters and numbers of centroids. Does this help you get an idea of how the data can be grouped?
Also, take a look at [this handout on k-means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) from Stanford.
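To pair the simulator with code, here is a minimal Scikit-learn sketch of the same idea; the blob data below is a stand-in, not the lesson's dataset:

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Stand-in data with a known number of groupings.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Fit k-means and look at the centroids it converges to.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

plt.scatter(X[:, 0], X[:, 1], c=labels, s=10)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker="x", color="red", s=100)
plt.title("k-means clusters and their centroids")
plt.show()
```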
From dc0eda9f923f18444e269c53f197538b98f9aaa8 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Thu, 29 Jul 2021 07:19:32 +0530
Subject: [PATCH 4/8] Update README.md
Fix broken ELIZA Wikipedia link
---
6-NLP/1-Introduction-to-NLP/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/6-NLP/1-Introduction-to-NLP/README.md b/6-NLP/1-Introduction-to-NLP/README.md
index 924ce73c..4a7a88d6 100644
--- a/6-NLP/1-Introduction-to-NLP/README.md
+++ b/6-NLP/1-Introduction-to-NLP/README.md
@@ -69,7 +69,7 @@ The idea for this came from a party game called *The Imitation Game* where an in
### Developing Eliza
-In the 1960's an MIT scientist called *Joseph Weizenbaum* developed [*Eliza*](https:/wikipedia.org/wiki/ELIZA), a computer 'therapist' that would ask the human questions and give the appearance of understanding their answers. However, while Eliza could parse a sentence and identify certain grammatical constructs and keywords so as to give a reasonable answer, it could not be said to *understand* the sentence. If Eliza was presented with a sentence following the format "**I am** sad" it might rearrange and substitute words in the sentence to form the response "How long have **you been** sad".
+In the 1960's an MIT scientist called *Joseph Weizenbaum* developed [*Eliza*](https://wikipedia.org/wiki/ELIZA), a computer 'therapist' that would ask the human questions and give the appearance of understanding their answers. However, while Eliza could parse a sentence and identify certain grammatical constructs and keywords so as to give a reasonable answer, it could not be said to *understand* the sentence. If Eliza was presented with a sentence following the format "**I am** sad" it might rearrange and substitute words in the sentence to form the response "How long have **you been** sad".
This gave the impression that Eliza understood the statement and was asking a follow-on question, whereas in reality, it was changing the tense and adding some words. If Eliza could not identify a keyword that it had a response for, it would instead give a random response that could be applicable to many different statements. Eliza could be easily tricked, for instance if a user wrote "**You are** a bicycle" it might respond with "How long have **I been** a bicycle?", instead of a more reasoned response.
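To make the rearrange-and-substitute behaviour concrete, here is a toy sketch in the spirit of Eliza; the patterns and canned replies are illustrative, not Weizenbaum's actual rules:

```python
import random
import re

# Generic replies used when no keyword pattern matches.
FALLBACKS = ["Please go on.", "Tell me more about that.", "Why do you say that?"]

def respond(statement: str) -> str:
    text = statement.strip().rstrip(".!?")
    match = re.match(r"i am (.+)", text, re.IGNORECASE)
    if match:
        # "I am sad" -> swap pronoun and tense -> "How long have you been sad?"
        return f"How long have you been {match.group(1)}?"
    match = re.match(r"you are (.+)", text, re.IGNORECASE)
    if match:
        # Easily tricked: "You are a bicycle" -> "How long have I been a bicycle?"
        return f"How long have I been {match.group(1)}?"
    return random.choice(FALLBACKS)

print(respond("I am sad"))           # How long have you been sad?
print(respond("You are a bicycle"))  # How long have I been a bicycle?
```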
From d9e007b72916a46a34173fd59be83c89f22f9ce1 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Thu, 29 Jul 2021 07:26:26 +0530
Subject: [PATCH 5/8] Update README.md
Fix path of the bot.py solution link
---
6-NLP/1-Introduction-to-NLP/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/6-NLP/1-Introduction-to-NLP/README.md b/6-NLP/1-Introduction-to-NLP/README.md
index 4a7a88d6..51235856 100644
--- a/6-NLP/1-Introduction-to-NLP/README.md
+++ b/6-NLP/1-Introduction-to-NLP/README.md
@@ -133,7 +133,7 @@ Let's create the bot next. We'll start by defining some phrases.
It was nice talking to you, goodbye!
```
- One possible solution to the task is [here](../solution/bot.py)
+ One possible solution to the task is [here](solution/bot.py)
✅ Stop and consider
From f29dc516b89d5da7de6f72a2259881e7644ab177 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Thu, 29 Jul 2021 07:34:20 +0530
Subject: [PATCH 6/8] Update README.md
Fix path of the sample solution notebook link
---
6-NLP/3-Translation-Sentiment/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/6-NLP/3-Translation-Sentiment/README.md b/6-NLP/3-Translation-Sentiment/README.md
index bcd6cdd1..0c6b568b 100644
--- a/6-NLP/3-Translation-Sentiment/README.md
+++ b/6-NLP/3-Translation-Sentiment/README.md
@@ -143,7 +143,7 @@ Your task is to determine, using sentiment polarity, if *Pride and Prejudice* ha
1. If the polarity is 1 or -1 store the sentence in an array or list of positive or negative messages
5. At the end, print out all the positive sentences and negative sentences (separately) and the number of each.
-Here is a sample [solution](solutions/notebook.ipynb).
+Here is a sample [solution](solution/notebook.ipynb).
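As a rough sketch of those steps, assuming the book's text is in a local file and sentence polarity comes from TextBlob (the file path below is a placeholder):

```python
from textblob import TextBlob

# Placeholder path: point this at your local copy of the book's text.
with open("pride_and_prejudice.txt", encoding="utf-8") as f:
    book = TextBlob(f.read())

positive, negative = [], []
for sentence in book.sentences:
    polarity = sentence.sentiment.polarity
    if polarity == 1:
        positive.append(str(sentence))
    elif polarity == -1:
        negative.append(str(sentence))

# Print each group separately, followed by its count.
print("\n".join(positive))
print(len(positive), "positive sentences")
print("\n".join(negative))
print(len(negative), "negative sentences")
```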
✅ Knowledge Check
From f17f05ace97acfa46b0b6a9a4c7f28fa066908b1 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Thu, 29 Jul 2021 07:48:06 +0530
Subject: [PATCH 7/8] Update README.md
Fix links to the Jupyter notebooks
---
6-NLP/5-Hotel-Reviews-2/README.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/6-NLP/5-Hotel-Reviews-2/README.md b/6-NLP/5-Hotel-Reviews-2/README.md
index 7d8a4d03..12b9a15a 100644
--- a/6-NLP/5-Hotel-Reviews-2/README.md
+++ b/6-NLP/5-Hotel-Reviews-2/README.md
@@ -347,13 +347,13 @@ print("Saving results to Hotel_Reviews_NLP.csv")
df.to_csv(r"../data/Hotel_Reviews_NLP.csv", index = False)
```
-You should run the entire code for [the analysis notebook](solution/notebook-sentiment-analysis.ipynb) (after you've run [your filtering notebook](solution/notebook-filtering.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
+You should run the entire code for [the analysis notebook](solution/3-notebook.ipynb) (after you've run [your filtering notebook](solution/1-notebook.ipynb) to generate the Hotel_Reviews_Filtered.csv file).
To review, the steps are:
-1. Original dataset file **Hotel_Reviews.csv** is explored in the previous lesson with [the explorer notebook](../4-Hotel-Reviews-1/solution/notebook-explorer.ipynb)
-2. Hotel_Reviews.csv is filtered by [the filtering notebook](solution/notebook-filtering.ipynb) resulting in **Hotel_Reviews_Filtered.csv**
-3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](solution/notebook-sentiment-analysis.ipynb) resulting in **Hotel_Reviews_NLP.csv**
+1. Original dataset file **Hotel_Reviews.csv** is explored in the previous lesson with [the explorer notebook](../4-Hotel-Reviews-1/solution/notebook.ipynb)
+2. Hotel_Reviews.csv is filtered by [the filtering notebook](solution/1-notebook.ipynb) resulting in **Hotel_Reviews_Filtered.csv**
+3. Hotel_Reviews_Filtered.csv is processed by [the sentiment analysis notebook](solution/3-notebook.ipynb) resulting in **Hotel_Reviews_NLP.csv**
4. Use Hotel_Reviews_NLP.csv in the NLP Challenge below
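Boiled down, that pipeline is a chain of pandas reads and writes; a rough sketch of the hand-offs is below. Only the Hotel_Reviews_NLP.csv path is taken from the code above; the other paths and the elided processing are assumptions:

```python
import pandas as pd

# Step 2 (filtering notebook): read the original data, filter it, save the result.
df = pd.read_csv("../data/Hotel_Reviews.csv")
# ... column clean-up and filtering happens here (see the filtering notebook) ...
df.to_csv("../data/Hotel_Reviews_Filtered.csv", index=False)

# Step 3 (sentiment analysis notebook): read the filtered data, add sentiment columns, save.
df = pd.read_csv("../data/Hotel_Reviews_Filtered.csv")
# ... sentiment scoring happens here (see the sentiment analysis notebook) ...
df.to_csv("../data/Hotel_Reviews_NLP.csv", index=False)

# Step 4: the NLP Challenge starts from Hotel_Reviews_NLP.csv.
```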
### Conclusion
From 7539844e90c7c1921fafacc4aa923799c510f132 Mon Sep 17 00:00:00 2001
From: Abhinav Sharma <63901956+abhi-bhatra@users.noreply.github.com>
Date: Thu, 29 Jul 2021 08:30:58 +0530
Subject: [PATCH 8/8] Update README.md
Reword the K-Means clustering simulator link
---
5-Clustering/2-K-Means/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/5-Clustering/2-K-Means/README.md b/5-Clustering/2-K-Means/README.md
index bd59e080..6e0724b5 100644
--- a/5-Clustering/2-K-Means/README.md
+++ b/5-Clustering/2-K-Means/README.md
@@ -242,7 +242,7 @@ Hint: Try to scale your data. There's commented code in the notebook that adds s
## Review & Self Study
-Take a look at a K-Means Simulator [here](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). You can use this tool to visualize sample data points and determine its centroids. You can edit the data's randomness, numbers of clusters and numbers of centroids. Does this help you get an idea of how the data can be grouped?
+Take a look at a K-Means simulator, [such as this one](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). You can use this tool to visualize sample data points and determine their centroids. You can edit the data's randomness, the number of clusters, and the number of centroids. Does this help you get an idea of how the data can be grouped?
Also, take a look at [this handout on k-means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) from Stanford.