diff --git a/4-Classification/1-Introduction/solution/R/lesson_10-R.ipynb b/4-Classification/1-Introduction/solution/R/lesson_10-R.ipynb index 251bbe082..4592429f9 100644 --- a/4-Classification/1-Introduction/solution/R/lesson_10-R.ipynb +++ b/4-Classification/1-Introduction/solution/R/lesson_10-R.ipynb @@ -103,8 +103,8 @@ "cell_type": "code", "execution_count": null, "source": [ - "suppressWarnings(if (!require(\"pacman\"))install.packages(\"pacman\"))\n", - "\n", + "suppressWarnings(if (!require(\"pacman\"))install.packages(\"pacman\"))\r\n", + "\r\n", "pacman::p_load(tidyverse, tidymodels, DataExplorer, themis, here)" ], "outputs": [], @@ -138,12 +138,12 @@ "cell_type": "code", "execution_count": null, "source": [ - "# Import data\n", - "df <- read_csv(file = \"https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/4-Classification/data/cuisines.csv\")\n", - "\n", - "# View the first 5 rows\n", - "df %>% \n", - " slice_head(n = 5)\n" + "# Import data\r\n", + "df <- read_csv(file = \"https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/4-Classification/data/cuisines.csv\")\r\n", + "\r\n", + "# View the first 5 rows\r\n", + "df %>% \r\n", + " slice_head(n = 5)\r\n" ], "outputs": [], "metadata": { @@ -163,12 +163,12 @@ "cell_type": "code", "execution_count": null, "source": [ - "# Basic information about the data\n", - "df %>%\n", - " introduce()\n", - "\n", - "# Visualize basic information above\n", - "df %>% \n", + "# Basic information about the data\r\n", + "df %>%\r\n", + " introduce()\r\n", + "\r\n", + "# Visualize basic information above\r\n", + "df %>% \r\n", " plot_intro(ggtheme = theme_light())" ], "outputs": [], @@ -193,17 +193,17 @@ "cell_type": "code", "execution_count": null, "source": [ - "# Count observations per cuisine\n", - "df %>% \n", - " count(cuisine) %>% \n", - " arrange(n)\n", - "\n", - "# Plot the distribution\n", - "theme_set(theme_light())\n", - "df %>% \n", - " count(cuisine) %>% \n", - " ggplot(mapping = aes(x = 
n, y = reorder(cuisine, -n))) +\n", - " geom_col(fill = \"midnightblue\", alpha = 0.7) +\n", + "# Count observations per cuisine\r\n", + "df %>% \r\n", + " count(cuisine) %>% \r\n", + " arrange(n)\r\n", + "\r\n", + "# Plot the distribution\r\n", + "theme_set(theme_light())\r\n", + "df %>% \r\n", + " count(cuisine) %>% \r\n", + " ggplot(mapping = aes(x = n, y = reorder(cuisine, -n))) +\r\n", + " geom_col(fill = \"midnightblue\", alpha = 0.7) +\r\n", " ylab(\"cuisine\")" ], "outputs": [], @@ -214,15 +214,17 @@ { "cell_type": "markdown", "source": [ - "There are a finite number of cuisines, but the distribution of data is uneven. You can fix that! Before doing so, explore a little more.\n", - "\n", - "Next, let's assign each cuisine into its individual table and find out how much data is available (rows, columns) per cuisine.\n", - "\n", - "
\n",
- "
\n",
- "
\r\n",
+ "
\r\n",
+ "
\n",
- "
\n",
- "
\r\n",
+ "
\r\n",
+ "
\n",
- "
\n",
- "
\r\n",
+ " \r\n",
+ " \r\n",
+ "
\r\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " cuisine n \n",
+ "1 korean 799\n",
+ "2 indian 598\n",
+ "3 chinese 442\n",
+ "4 japanese 320\n",
+ "5 thai 289"
+ ],
+ "text/markdown": [
+ "\n",
+ "A tibble: 5 × 2\n",
+ "\n",
+ "| cuisine <fct> | n <int> |\n",
+ "|---|---|\n",
+ "| korean | 799 |\n",
+ "| indian | 598 |\n",
+ "| chinese | 442 |\n",
+ "| japanese | 320 |\n",
+ "| thai | 289 |\n",
+ "\n"
+ ],
+ "text/latex": [
+ "A tibble: 5 × 2\n",
+ "\\begin{tabular}{ll}\n",
+ " cuisine & n\\\\\n",
+ " \n",
+ "\tcuisine almond angelica anise anise_seed apple apple_brandy apricot armagnac artemisia ⋯ whiskey white_bread white_wine whole_grain_wheat_flour wine wood yam yeast yogurt zucchini \n",
+ "\n",
+ "\n",
+ "\t<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ⋯ <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> \n",
+ "\tindian 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\tindian 1 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\tindian 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\tindian 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\n",
+ "indian 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 1 0 \n",
+ "
\n"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 735
+ },
+ "id": "jhCrrH22IWVR",
+ "outputId": "d444a85c-1d8b-485f-bc4f-8be2e8f8217c"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Perfect! Now it's time to split the data so that 70% goes to training and 30% goes to testing. We'll also apply a `stratification` technique when splitting the data to `maintain the proportion of each cuisine` in the training and test sets.\n",
+ "\n",
+ "[rsample](https://rsample.tidymodels.org/), a package in Tidymodels, provides infrastructure for efficient data splitting and resampling:"
+ ],
+ "metadata": {
+ "id": "AYTjVyajIdny"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "source": [
+ "# Load the core Tidymodels packages into R session\r\n",
+ "library(tidymodels)\r\n",
+ "\r\n",
+ "# Create split specification\r\n",
+ "set.seed(2056)\r\n",
+ "cuisines_split <- initial_split(data = df_select,\r\n",
+ " strata = cuisine,\r\n",
+ " prop = 0.7)\r\n",
+ "\r\n",
+ "# Extract the data in each split\r\n",
+ "cuisines_train <- training(cuisines_split)\r\n",
+ "cuisines_test <- testing(cuisines_split)\r\n",
+ "\r\n",
+ "# Print the number of cases in each split\r\n",
+ "cat(\"Training cases: \", nrow(cuisines_train), \"\\n\",\r\n",
+ " \"Test cases: \", nrow(cuisines_test), sep = \"\")\r\n",
+ "\r\n",
+ "# Display the first few rows of the training set\r\n",
+ "cuisines_train %>% \r\n",
+ " slice_head(n = 5)\r\n",
+ "\r\n",
+ "\r\n",
+ "# Display distribution of cuisines in the training set\r\n",
+ "cuisines_train %>% \r\n",
+ " count(cuisine) %>% \r\n",
+ " arrange(desc(n))"
+ ],
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Training cases: 1712\n",
+ "Test cases: 736"
+ ]
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " cuisine almond angelica anise anise_seed apple apple_brandy apricot armagnac\n",
+ "1 chinese 0 0 0 0 0 0 0 0 \n",
+ "2 chinese 0 0 0 0 0 0 0 0 \n",
+ "3 chinese 0 0 0 0 0 0 0 0 \n",
+ "4 chinese 0 0 0 0 0 0 0 0 \n",
+ "5 chinese 0 0 0 0 0 0 0 0 \n",
+ " artemisia ⋯ whiskey white_bread white_wine whole_grain_wheat_flour wine wood\n",
+ "1 0 ⋯ 0 0 0 0 1 0 \n",
+ "2 0 ⋯ 0 0 0 0 1 0 \n",
+ "3 0 ⋯ 0 0 0 0 0 0 \n",
+ "4 0 ⋯ 0 0 0 0 0 0 \n",
+ "5 0 ⋯ 0 0 0 0 0 0 \n",
+ " yam yeast yogurt zucchini\n",
+ "1 0 0 0 0 \n",
+ "2 0 0 0 0 \n",
+ "3 0 0 0 0 \n",
+ "4 0 0 0 0 \n",
+ "5 0 0 0 0 "
+ ],
+ "text/markdown": [
+ "\n",
+ "A tibble: 5 × 381\n",
+ "\n",
+ "| cuisine <fct> | almond <dbl> | angelica <dbl> | anise <dbl> | anise_seed <dbl> | apple <dbl> | apple_brandy <dbl> | apricot <dbl> | armagnac <dbl> | artemisia <dbl> | ⋯ ⋯ | whiskey <dbl> | white_bread <dbl> | white_wine <dbl> | whole_grain_wheat_flour <dbl> | wine <dbl> | wood <dbl> | yam <dbl> | yeast <dbl> | yogurt <dbl> | zucchini <dbl> |\n",
+ "|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n",
+ "| chinese | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ⋯ | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |\n",
+ "| chinese | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ⋯ | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |\n",
+ "| chinese | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ⋯ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n",
+ "| chinese | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ⋯ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n",
+ "| chinese | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ⋯ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n",
+ "\n"
+ ],
+ "text/latex": [
+ "A tibble: 5 × 381\n",
+ "\\begin{tabular}{lllllllllllllllllllll}\n",
+ " cuisine & almond & angelica & anise & anise\\_seed & apple & apple\\_brandy & apricot & armagnac & artemisia & ⋯ & whiskey & white\\_bread & white\\_wine & whole\\_grain\\_wheat\\_flour & wine & wood & yam & yeast & yogurt & zucchini\\\\\n",
+ " \n",
+ "\tcuisine n \n",
+ "\n",
+ "\n",
+ "\t<fct> <int> \n",
+ "\tkorean 799 \n",
+ "\tindian 598 \n",
+ "\tchinese 442 \n",
+ "\tjapanese 320 \n",
+ "\n",
+ "thai 289 \n",
+ "
\n"
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " cuisine n \n",
+ "1 korean 559\n",
+ "2 indian 418\n",
+ "3 chinese 309\n",
+ "4 japanese 224\n",
+ "5 thai 202"
+ ],
+ "text/markdown": [
+ "\n",
+ "A tibble: 5 × 2\n",
+ "\n",
+ "| cuisine <fct> | n <int> |\n",
+ "|---|---|\n",
+ "| korean | 559 |\n",
+ "| indian | 418 |\n",
+ "| chinese | 309 |\n",
+ "| japanese | 224 |\n",
+ "| thai | 202 |\n",
+ "\n"
+ ],
+ "text/latex": [
+ "A tibble: 5 × 2\n",
+ "\\begin{tabular}{ll}\n",
+ " cuisine & n\\\\\n",
+ " \n",
+ "\tcuisine almond angelica anise anise_seed apple apple_brandy apricot armagnac artemisia ⋯ whiskey white_bread white_wine whole_grain_wheat_flour wine wood yam yeast yogurt zucchini \n",
+ "\n",
+ "\n",
+ "\t<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ⋯ <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> \n",
+ "\tchinese 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 1 0 0 0 0 0 \n",
+ "\tchinese 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 1 0 0 0 0 0 \n",
+ "\tchinese 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\tchinese 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "\n",
+ "chinese 0 0 0 0 0 0 0 0 0 ⋯ 0 0 0 0 0 0 0 0 0 0 \n",
+ "
\n"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 535
+ },
+ "id": "w5FWIkEiIjdN",
+ "outputId": "2e195fd9-1a8f-4b91-9573-cce5582242df"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## 2. Deal with imbalanced data\n",
+ "\n",
+ "As you might have noticed in the original data set as well as in our training set, the cuisines are quite unequally distributed: there are *almost* 3 times as many Korean observations as Thai ones. Imbalanced data often has negative effects on model performance. Many models perform best when the number of observations per class is equal and thus tend to struggle with imbalanced data.\n",
+ "\n",
+ "There are two main ways of dealing with imbalanced data sets:\n",
+ "\n",
+ "- adding observations to the minority class: `Over-sampling`, e.g. using the SMOTE algorithm, which synthetically generates new examples of the minority class from the nearest neighbors of existing cases.\n",
+ "\n",
+ "- removing observations from the majority class: `Under-sampling`\n",
+ "\n",
+ "In our previous lesson, we demonstrated how to deal with imbalanced data sets using a `recipe`. A recipe can be thought of as a blueprint that describes what steps should be applied to a data set in order to get it ready for analysis. In our case, we want an equal distribution of cuisines in our `training set`. Let's get right into it.\n",
+ ],
+ "metadata": {
+ "id": "daBi9qJNIwqW"
+ }
+ },
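The under-sampling route mentioned above can be expressed with the same recipe grammar via themis's `step_downsample()`. A minimal sketch, using a hypothetical `toy_train` data frame in place of the real `cuisines_train` so it runs on its own:

```r
library(recipes)
library(themis)

# Hypothetical stand-in for the training data: 12 korean vs 6 thai rows
toy_train <- data.frame(
  cuisine = factor(rep(c("korean", "thai"), times = c(12, 6))),
  almond  = c(rep(0, 12), rep(1, 6)),
  yogurt  = rep(c(0, 1), 9)
)

# Under-sampling counterpart to step_smote(): drop majority-class rows
# until every class matches the minority-class count
downsampled <- recipe(cuisine ~ ., data = toy_train) %>%
  step_downsample(cuisine) %>%
  prep() %>%
  bake(new_data = NULL)

table(downsampled$cuisine)
```

Note that, like `step_smote()`, `step_downsample()` is skipped when baking new data by default, so it only resizes the training set.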
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "source": [
+ "# Load themis package for dealing with imbalanced data\r\n",
+ "library(themis)\r\n",
+ "\r\n",
+ "# Create a recipe for preprocessing training data\r\n",
+ "cuisines_recipe <- recipe(cuisine ~ ., data = cuisines_train) %>% \r\n",
+ " step_smote(cuisine)\r\n",
+ "\r\n",
+ "# Print recipe\r\n",
+ "cuisines_recipe"
+ ],
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ "Data Recipe\n",
+ "\n",
+ "Inputs:\n",
+ "\n",
+ " role #variables\n",
+ " outcome 1\n",
+ " predictor 380\n",
+ "\n",
+ "Operations:\n",
+ "\n",
+ "SMOTE based on cuisine"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 200
+ },
+ "id": "Az6LFBGxI1X0",
+ "outputId": "29d71d85-64b0-4e62-871e-bcd5398573b6"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "You can of course go ahead and confirm (using `prep()` + `bake()`) that the recipe works as you expect: every cuisine label ends up with `559` observations.\r\n",
+ "\r\n",
+ "Since we'll be using this recipe as a preprocessor for modeling, a `workflow()` will do all the prep and bake for us, so we won't have to manually estimate the recipe.\r\n",
+ "\r\n",
+ "Now we are ready to train a model 👩‍💻👨‍💻!\r\n",
+ "\r\n",
+ "## 3. Choosing your classifier\r\n",
+ "\r\n",
+ " \n",
+ "\tcuisine n \n",
+ "\n",
+ "\n",
+ "\t<fct> <int> \n",
+ "\tkorean 559 \n",
+ "\tindian 418 \n",
+ "\tchinese 309 \n",
+ "\tjapanese 224 \n",
+ "\n",
+ "thai 202
\r\n",
+ "
\r\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 248
+ },
+ "id": "CqtckvtsKqax",
+ "outputId": "e57fe557-6a68-4217-fe82-173328c5436d"
+ }
+ },
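The prep + bake check described above can be sketched as follows. This is a minimal, self-contained illustration using a hypothetical `toy_train` data frame in place of `cuisines_train` (with the real data, each class would end up at 559 rows, the majority-class count):

```r
library(recipes)
library(themis)

# Hypothetical stand-in for cuisines_train: 12 korean vs 6 thai rows,
# with numeric predictors as SMOTE requires
toy_train <- data.frame(
  cuisine = factor(rep(c("korean", "thai"), times = c(12, 6))),
  almond  = c(rep(0, 12), 0:5 / 5),
  yogurt  = seq(0, 1, length.out = 18)
)

# Estimate the SMOTE recipe on the training data, then apply it to
# that same data (new_data = NULL returns the processed training set)
balanced_df <- recipe(cuisine ~ ., data = toy_train) %>%
  step_smote(cuisine) %>%
  prep() %>%
  bake(new_data = NULL)

# Every class now matches the majority-class count
table(balanced_df$cuisine)
```

In the lesson itself, a `workflow()` performs this prep and bake automatically, so the explicit check is optional.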
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Great job! In Tidymodels, evaluating model performance can be done using [yardstick](https://yardstick.tidymodels.org/) - a package used to measure the effectiveness of models using performance metrics. As we did in our logistic regression lesson, let's begin by computing a confusion matrix."
+ ],
+ "metadata": {
+ "id": "8w5N6XsBKss7"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "source": [
+ "# Confusion matrix for categorical data\n",
+ "conf_mat(data = results, truth = cuisine, estimate = .pred_class)\n"
+ ],
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " Truth\n",
+ "Prediction chinese indian japanese korean thai\n",
+ " chinese 83 1 8 15 10\n",
+ " indian 4 163 1 2 6\n",
+ " japanese 21 5 73 25 1\n",
+ " korean 15 0 11 191 0\n",
+ " thai 10 11 3 7 70"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 133
+ },
+ "id": "YvODvsLkK0iG",
+ "outputId": "bb69da84-1266-47ad-b174-d43b88ca2988"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "When dealing with multiple classes, it's generally more intuitive to visualize this as a heat map, like this:"
+ ],
+ "metadata": {
+ "id": "c0HfPL16Lr6U"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "source": [
+ "update_geom_defaults(geom = \"tile\", new = list(color = \"black\", alpha = 0.7))\n",
+ "# Visualize confusion matrix\n",
+ "results %>% \n",
+ " conf_mat(cuisine, .pred_class) %>% \n",
+ " autoplot(type = \"heatmap\")"
+ ],
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ "plot without title"
+ ],
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAA0gAAANICAMAAADKOT/pAAADAFBMVEUAAAABAQECAgIDAwMEBAQFBQUGBgYHBwcICAgJCQkKCgoLCwsMDAwNDQ0ODg4PDw8QEBARERESEhITExMUFBQVFRUWFhYXFxcYGBgZGRkaGhobGxscHBwdHR0eHh4fHx8gICAhISEiIiIjIyMkJCQlJSUmJiYnJycoKCgpKSkqKiorKyssLCwtLS0uLi4vLy8wMDAxMTEyMjIzMzM0NDQ1NTU2NjY3Nzc4ODg5OTk6Ojo7Ozs8PDw9PT0+Pj4/Pz9AQEBBQUFCQkJDQ0NERERFRUVGRkZHR0dISEhJSUlKSkpLS0tMTExNTU1OTk5PT09QUFBRUVFSUlJTU1NUVFRVVVVWVlZXV1dYWFhZWVlaWlpbW1tcXFxdXV1eXl5fX19gYGBhYWFiYmJjY2NkZGRlZWVmZmZnZ2doaGhpaWlqampra2tsbGxtbW1ubm5vb29wcHBxcXFycnJzc3N0dHR1dXV2dnZ3d3d4eHh5eXl6enp7e3t8fHx9fX1+fn5/f3+AgICBgYGCgoKDg4OEhISFhYWGhoaHh4eIiIiJiYmKioqLi4uMjIyNjY2Ojo6Pj4+QkJCRkZGSkpKTk5OUlJSVlZWWlpaXl5eYmJiZmZmampqbm5ucnJydnZ2enp6fn5+goKChoaGioqKjo6OkpKSlpaWmpqanp6eoqKipqamqqqqrq6usrKytra2urq6vr6+wsLCxsbGysrKzs7O0tLS1tbW2tra3t7e4uLi5ubm6urq7u7u8vLy9vb2+vr6/v7/AwMDBwcHCwsLDw8PExMTFxcXGxsbHx8fIyMjJycnKysrLy8vMzMzNzc3Ozs7Pz8/Q0NDR0dHS0tLT09PU1NTV1dXW1tbX19fY2NjZ2dna2trb29vc3Nzd3d3e3t7f39/g4ODh4eHi4uLj4+Pk5OTl5eXm5ubn5+fo6Ojp6enq6urr6+vs7Ozt7e3u7u7v7+/w8PDx8fHy8vLz8/P09PT19fX29vb39/f4+Pj5+fn6+vr7+/v8/Pz9/f3+/v7////isF19AAAACXBIWXMAABJ0AAASdAHeZh94AAAgAElEQVR4nO3deWBU9b3//0+ibApWrbYuvYorXaxoaatWvVqpqG2HsCmLBAqoVXBDjCKbKMqOQUDFFVxKqyhVFLUqWKJsxg3Lz2IFGilLiEqptMX0hpzvnJkMCbx5/W5vz5k5Z+D5/OOc85nEz3w8Mw9mMjmo84gocC7qBRDtCQGJKISARBRCQCIKISARhRCQiEIISEQhBCSiEAISUQgBiSiEgEQUQkAiCiEgEYUQkIhCCEhEIQQkohACElEIAYkohIBEFEJAIgohIBGFEJCIQijHkLb+NUZVRb2Ahn26OeoVNCxWpyZWi/mbeGbnGNJl42PUmbNiVM+7nohRlz4ao/o/HqOuFc/sHEMasShGdfwsRt3+1qYYNXJ9jLq9MkZNE89sIMUkIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOS
CkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSbI+EVLQktatJvB8FpNGtv9LoqMteTx7d/f2vND6h5M3oIS07xT0XxjyhQJpx+sGNj79pbfCJgkJ6o7V72t/f4FKdFSmkRa3dnNTBgnYHNPneY7GCVPvB1ggg3ezaTZraq+C8RYvGF7a64cbW7rLIIU1sdmR8IE1ynX4957qC9pFDGtvsiDSkywsn+D0eJaRxycWkIC1pcdzYSecUzIwTpP+0YJBOONJ/CTqncP6iI49YsGjRwqMOjhrSS03GTY0PpJNaVia3P92nImJIc5vcWZqG1LVFoInCgPRCkzF3pyF1bLa8snLdd1pGC+nTOy8uvvdLr+iVEZ2KF/hv7WoTC0f07zvf8zaP79Vl8CrPe+2qzsX3Vu8YZgHS8cf624sKF5RdO84/+plbEDGk8oWfxQjSt7/pb7vu80ngmYJBWvTa+jpIPz0ickhLFlSmIa1vVuSPR7lXI4V0w9jN6wdM94qu+fCfj3XZ5v+MVDRwi/dKl23eoPFfVD/es3pj+/e3b7xudmaYDUjD3C+fmz+6aee64Zsnfz3QdOF82BAjSFPckOV/nrFfv+AzBf6woQ7SWa3Wr18dLaRkaUiL3FB/MMdNjhLS6sTG5KbcK3ra8zYmKlKQ5nrepsQnqxKbkz8zdStblVjtedu9zDD5zyxpn+wPIUJadFsz5wp7pz5i+P1vH2i372gg7dT9+yfPz/WVwScKC1Lrlh0PdAddvyYOkJ5zd/mDN9ywKCG92b42tS9anHwvl/g4BSl9WJZINbv2ng4ls9Z7mWHye9/4cbL3QoR0T/MzRt91SWHqI4bJzh0+KdBsex6kZw/4yYzfXL7PzfGB1LKw20P3F7mL4gDpSTfVHyxzN0YJaVH77WlIS+ohpQ+XJjJv4zbNG9mhrH64uwJBeuPwE/0Xo66FTya3L44f2ragF5AatPGo7/ovRlcULo0NpLff87dd3ZwYQHrOTfIHZW54lJDWJCo876MXdgNpbWJl8usbvZotyd30wZlhFiA97VJwJrjMLL9wDwGpvrfddf7uCXdPbCCle8IFmi8kSEvcLf7gqfQLU1SQvEEjKtddd+9uIHlDS6pqXuzy+at9Pq7dPGRKZpgVSD383Wg3+PkbHkyTugVI9ZW7/v7uEXdXbCCtXOlv73fjYgBpQ4uf+4MhrixSSFvu6NJz2rbdQdo8ruslJSu82ll9Ova6+++ZYRYgvdH8mDeSuw7usRcLT/WPLnGTgVTfxq+02pjc9Xa/jwukdwsv9AfnFbwRA0iVlzZ5p7Jy7bHfDjTXHnGt3UB32qgJFxe2XbSo2H332pLzC77zRsSQ5pWW9nADSkvfiQOkTXe6Hz/wxOWFRcFnCgbp2QkTurorJ0xYvL6Paztu1OmuX6DpgkGaO2lSN9d/0qRlle8dfPTQO37QaA6QFo06qWmjlleWLVr0Zkmrps2O7fFqoNlCgNQ7fS2ZezAWkDY9+P39Gp8wZH3wiYJB6ll3Vu5dv3ZM6xZNT5kYaLaAkHrVLWZ6ZeWiC1s0Pe2ZQLPtIZDCjau/ZVz9rQKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgq
INmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEgqINmAJAOSCkg2IMmApAKSDUgyIKmAZAOSDEiqmEDqOSJGnXJfjOpw+z0xqtOUGNU76rPRsMvEMzvHkEa+GqMuHBajznv8tRg1IOoFNGzgKzFqiHhm5xjSvZ/GqF/MjFHd3on6zWXD7ox6AQ27oypG3Sue2UCKSUCSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyhQTpraYHhzDLL/7zp/3YY9zg1MFDHQ7Z92sXz0ge3fTt5o2O6j0jWkjLTnHPhTFPGJDmt23e/OTSquAThQFpXcl/NW45bFPwiUKEVJN4f6fxpkTFrjdlGVLVmS5aSL0bH1wH6QeFF155puswc+b1BUf37HWi6xQppInNjowNpJcbtbx90jnuluAzhQEpsc9V0y9xJcEnChFS7Qdbd4W0601ZhjSp8bmRQhrWqPiyNKQS1y25/f43Z8z82qEPzJz58GEHRAnppSbjpsYG0o8O+ONnn1V9Z7+NgWcKAdJsd1ty+/Mzg78kZfGtXRLSv/29oUD6wwEll0YKadyomXWQftT0ofRND/e4zt+d7R6IEFL5ws/iA2nydH/bx/0p8EwhQOrSfF3wSVKF+9auNrFwRP++8z1v9aAuVy9Mv7WrGN6964gN3o4vZQ/SRSeujxZSsjpIh540c2aDH4tmnPDV/3TCkD5siA+kdOceGnyOECAdfW5VVWXwaapC/xmpaOAW75Uu22r7lW6rGpKGdGXptn+MKfEyX8oepIcK5n0aE0gzCs7t8/WC/S9KvQw9dNfw0/e5BkgNe9jdHnyS4JA2FfaadEzBQf0/iR+kuf5buk/+mNjoeUvSkLZ+6XmLO9RmvpT8xgVtkr0dNqQ/HdL307hAut8deuxVN15Y0Ma/qcS5Q274jyfcIyH9utlFsfjUrsId9b0Hnrqq8Gfxg7TY8zYnPi5rv93zPklDWj6kuLhboibzpeQ3lvdM9mHYkLoeviY2kB50zacndz9xtya3U6+/7LSCBJDqG7dPpw0hTBMc0l/cQWuSu8vcK7GDtCSlZX77Ws9bk4K0odPsam+pD2lJBtJuCg7pqYKHKyoquh1csS4GkGY2+6a/HeT61N3cPkUKSKmudIM+DWOeEH5GanGmv/2NuyvwTNmBtDxR6XllKUhlRTWe92j2IfVzdZ0fB0itDvO317orphQP948Gur5AqmtgQWkIs3wWCqQzjve3j7l7As+UHUjVPUq3rrs5BWllYsW/Fg5O
VGUb0tsv+LU74IU34wCplytJbs8oHD+14Jv+p3ftUmMgJXvahfWJRQiQxrunk9su+7wVeKbsQPI+ur7z1e8k/uzfNKN7jylbB3bblGVI6aL9GWlonz5nu4v69Jkw86GWTdr3+6E7f+bMn7nju/c+veC4//QaoTAgzSst7eEGlJYGnyq4gcrjDipN9V7gqUKAtK71foPuLnKXB59pz7rWLmJIP657d3nVzJn3tv3KPocVJ/XM6H1046bfuGj6fzpnGJB6163rwcAzBYf0UeYt+GOBpwrjEqGP+3yt0XFj43WtXZC4+lvF1d8yrv62AUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAckGJBWQZECyAUkFJBmQbEBSAUkGJBuQVECSAcl28S9jVKt2Mer4blGfjob9KOoFNOxnV8Soi8QzO8eQ7loVo7r/JUYNf/KNGDXkTzFq1PoYNU08s3MMafLaGNUz6rcJDbvtmWUxanhFjIrV+8z7xDMbSDEJSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIs/yBt6n1EoUsFpFwHJFn+Qbp437a9+6UCUq4Dkiz/IH312WwBAtL/FpBk+QdpvyogRRWQZPkH6ezXgRRVQJLlH6S3f7gYSBEFJFn+QTrzv9x+R6cCUq4Dkiz/IJ3dNhOQch2QZPkHKfsBSQUkWT5C+uyFBx56+Qsg5T4gyfIP0vZBjfzLGvYfD6ScByRZ/kEa7zo+/OIL91/gHgVSrgOSLP8gfeuG9P6K7wEp1wFJln+QmsxP7+c1A1KuA5Is/yDt/3x6/2xzIOU6IMnyD9JZP672d9vanQukXAckWf5Bmldw1JWjbr/8iMJXgZTrgCTLP0jeb7/pf/z93XnZcgQkGZBkeQjJ89a/VV6ZNUZA0gFJlpeQshyQVECS5RmkVqO9VjsCUq4DkizPIJ1W6p22IyDlOiDJ8gxSTgKSCkiy/IPU5sP0/ulvASnXAUmWf5BceWr3P7c1BlKuA5Is3yC5+rhoNecBSZZvkN6/2xWl/uuQl434C5ByHZBk+QbJ8y74U7YAAel/C0iy/IPkbZyS3FTdtglIOQ9IsvyDtPIw/1OGCnfYaiDlOiDJ8g9Sh+Pf8ncfHt8JSLkOSLL8g3ToI+n9/S2AlOuAJMs/SM2eSO9/tV9MIc07t3nzk8ZW+Ie/P9k9GT2kkvSvC86OGtJrmV9c
jF+2bNoPvtL4xMFLo4T0/Dn773/SmDUVFdenV3Vm9JCWneKeC2OefwvSjy6o8Xdf/ODMzC01ifdjBOnZfY8eNuYsd2PycHSzI+IA6ZeFd/n9OmpIbw5JdX7Br5ZNKmx1402nuCsihPTbfY8eOvosN6iiom/hWL+ZkUOa2OzIHEJ6ueDYASNH9Dm08OXMLbUfbI0RpNNbvLt2bcW391uz9rdNRk2KA6TuBwSfI10Yb+1eP7TDsmXfOLJs2bJFRx8cIaTTWrxdUbHmW/utqri4RaCJQoP0UpNxU3MIyXuljf9CfHJc/4bs+Lv9bbFbvrbsd2tjAelnRwafI10YkC458NVliwdO9A9/7sqigzRusr/t6d6ruPDweEAqX/hZTiF53mcf/H8N/4vF/lu7iuHdu47Y4FUnXh7cr+9SLzOuTSwc0b/vfM/bPL5Xl8GrPO+1qzoX31u9Y5gFSOnOPiS1iwWks1tVVa0NPk1VKJCeLCzJHC5tfVigqcL4sOHsQyoqzjyxomJlDCAlyzGkXfIhXVm67R9jSpKH1/3Ve7XDlszYKxq4xXulyzZv0Pgvqh/vWb2x/fvbN143OzNM/sP/XJfsy7Ah3eeGxQfSKcd0PsgdNOgvsYB0/qFvpPZvzH34gn3HRg3pHje0ouLklkUHuoOu/WhvgrTbvyHrQ9qatLC4Q21N4jnP2971lczYK5rreZsSn6xKbE7+LNWtbFVidfLrXmaY/IcXtEn2dsiQZjZrVxEfSMcU9pj5UEf30+AzBYf0ZOGg9MFU5w4vDTZXcEiPNDt/TUVFy8JL7r8n4S7YmyDt9m/I+pCWDyku7paoqUksS95w1azM2CtanHxbl/i4LJFqdu09HUpmrfcyw+T3rrg52c4XSQSGNGqfotVr4wPp/RX+trubG3im4JC6Nn49ffC7icPPL/hFtJBu36f9x8ndknJ/cLF7ai+CtNuSkDZ0ml3tLfUh+f9fzCt+nRl7RUtSkJYmquu+edO8kR3K6oe7Kyikfu7aT9bGCFK637hRgecIDGnp13/UYNTXzYgSUl93zZ/rR4+6EUB6v6yoxvMe9SE97XnVnV/LjDOQ1iZWJr9xo1ezJbmbPjgzzAqkqwvG7jiOBaTVq/3tQ25i4JkCQ3rE3eLvXip5xN/d5YZGCGlAwZj0wYoV/vYeN3ovgrR/g3b8DdkkpJWJFf9aODhRVZMYUFE9q+PfMuMMJG9oSVXNi10+f7XPx7Wbh0zJDLMB6VduZP0gDpA+KLzI37UtWBI9pKvdr/zd7wq/tyS56+qmRgfp8cwr0LLCdv7u3ILX9yJIXZO1anRG5w6nFLS5ugEkb0b3HlO2Duy2IfHiTZ37lXuZ8aYMpM3jul5SssKrndWnY6+7/54ZZgHSmmMPHDvOb8naOePGXeJ+OW7cm9FCqurnzp8w+gx3efCZAkP6uft9at/bnXz9ze0KTloSGaRVxxw4JnVBw6KK3u680SN/6PoEmS4MSPNKS3u4AaWl7+QAUrLZJ23wdyu/ObchpB2H74g5/g8FgvR+5oKyB9deWnc0LWJIG8efckDTU0uDTxQc0tmF6f3Swa2aNjuu5+uBJgsE6d3M4/RAxeo7Tm7RtPW4QI7CgNQ788zJDaSTnkrv72tdd8P2jxI7PnWLHlLYcfW3jKu/Vf8WpMavpfezm9TdsLDDqFog5SQgyfIP0hGXpna1XQ8PTgZI/7eAJMs/SLe67147atSAb7nBQMp1QJLlH6TacYf7P5EdMrwGSLkOSLL8g5Sk9Mmypau3Z4sRkHRAkuUjpG1vzfnU+x8g5T4gyfIQ0sQWzi3xhvwia5SApAKSLP8gPeDaT09CenTf8UDKdUCS5R+kk6/0tiUhebecCKRcByRZ/kFq+moa0u8aASnXAUmWf5C+9nwa0lMHACnXAUmWf5B+cs4/fUifn9QOSLkOSLL8g/T6Psdf5/r2PqDRm0DKdUCS5R8k77VT/Ssbfvj7bDkCkgxIsjyE5Hmb3ntv
c9YYAUkHJFn+QToje/+JVSD9LwFJln+QvjEJSFEFJFn+QXruW7/9F5CiCUiy/IN09ndd4yOO9gNSrgOSLP8gnXle27qAlOuAJMs/SNkPSCogyfIO0rZlb24BUkQBSZZvkCa3cK5R/y/FNwIpuwFJlmeQnnEtbxh2lrtafCOQshuQZHkG6eyW/v8utm+jvwEpioAkyzNIzYf727dc1i5YBdL/X0CS5Rkkd7+/3eBeFt8JpKwGJFm+QXrQ3250LwEpioAkAxKQ/v2AJMs3SLcsSTbPlfo7IOU6IMnyDVLDgJTrgCTLM0i3NgxIuQ5IsjyDlJOApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZAOSCkgyINmApAKSDEg2IKmAJAOSDUgqIMmAZOvQM0Yd/4sYdWq7TjHqB5fGqJ/2iVEXimd2jiFNWR+jij+PUaOWboxRiWkxanTUj03DYvKKBCQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiQbkFRAkgHJBiQVkGRAsgFJBSQZkGxAUgFJBiRbUEhvtHZP+/sbXKqzIodUduEBTdr8KoSJAkNa1No9s+tRFJBGHOWuSx1cf3zjxifcsPNtkUEK7XEKBVJN4p1oIY1tdkQa0uWFE/wejxrS2y2OmzD53IIngs8UFNK45Kl5ZpejKCB1a3xQGs2V7shLLj1s35sa3hYZpPAepz0C0twmd5amIXVtEWii0CB1bvbh559vOumY4DMFhPR8k9GT03zqj6KANKjRJT3TaA498K5p0ya0aNXwtsgghfc47RGQFr22vg7ST4+IBaSqZh393Wj3euCpAkJaPH9jHZ/6oygg3XrLtDSaMe4sf9y2YHz9bZFBCvFxCg1SzbCRNX8d36tzyYfe9sTv+k32No/v1WXwKs+rGN6964gNXm1i4Yj+fednBVKyOkhntVq/fnX0kJa54f5urpsaeKrgHzbU84kQUrI0mjvcef6gixtYf1tkkEJ8nEKDVFrypTfo1i1fPtz1b17RwFX/9AaN/6L68Z7V3pWl2/4xpsRL3rjFe6XLtuS3f74s2d+yAql1y44HuoOuXxMxpBfc3f5uiRsReKo9DdLU/Y7yB23cZTGAFOLjFBakJ/p/4a1OrPW86osXeEVPet6qxGbPq+1W5m390vMWd6j1iuZ63qbEJ8lvX9Am2dtZgdSysNtD9xe5iyKG9Iy7z9+9424KPNWeBmlawv33yNsu
aOH6xgBSiI9TSJDGJv7geW+2r00O+v/GKyrzvLJEqtne8iHFxd0SNV7RYs/bnPcB240AABDoSURBVPg4+R2rpyRblxVIb7/nb7u6OdFCmucm+7vF7tbAU+1xkO4+r8C5b13qrowBpBAfp5Ag9RsxsKYO0lVPeEVLPG9pojr1tQ2dZlcnBzWpG9OQdlNYkNI94UZGC+ltN8zfzUn/gReoPQ7StGljS+5M/ow0LAaQQnycQoJUvrXPI94a/43bts7zU2bWJlYmv7LRKyuq8bxHcwVp5Up/e78bFy2kT1sk/N1wtzjwVHsgJL/v7jclBpBCfJxC+7BhRYd3vZKRX2y7r+c/Uma8oSVVNS92+XxlYsW/Fg5OVOUE0ruFF/qD8wreiBbS58VNln/++YZjvxN8pj0O0umHTp42bXDhORZX7iGF+DiF93ukx4u3VN3R89Lbkj/8pCBtHtf1kpIVnjeje48pWwd225RFSM9OmNDVXTlhwuL1fVzbcaNOd/0CTRcCpD98teXwMT9sNDf4TAEhzZ04sZu7auLEpQ2OooB0Q48ep7u2PXqMnHZFwQnFHZp/dWzD2yKDFN7jtEdca9czfYWdu3f92jGtWzQ9ZWKg2UK51m7ZRS2anvFcCBMFhFRcd2rua3AUBaSz6u69z7Rpfb7RqPlpd+58W1SQwnuc9ghIIcfV3zKu/lYByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSRUTSB2LY9SJfWJUm469Y1TLs2NUIurHpmEXimd2jiFNq4xRvaP+c79ht77zWYwatSlGXV8eo24Xz2wgxSQgyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSDYgqYAkA5INSCogyYBkA5IKSDIg2YCkApIMSLagkBa1dnNSBwvaHdDke49F
Cym5mGd2PYoW0pz/PrjJSZM+DT5RcEh/cnXNjBbSgsw6JpSXzzq7eeOT7oo1pKIlOw1rEu9nBdK4ZkekIS1pcdzYSecUzIwSkr+YZ3Y5ihbSrMKTx0443Q2OA6R1d6UqKng9WkiLh6Y6v2BW+Zz9j7p56GkFE+MKafnHBlLtB1uzAemFJmPuTkPq2Gx5ZeW677SMENLzTUZPTvOpP4oYUsuj13322cbjD40DpHSrDy8OPkkIb+0WHtqxvPyCpi+Vly894RtxhXTbiwaSLBikJQsq05DWNyvyx6Pcq9FBWjx/Yx2f+qNoIVXe8YS/6+HWxQbSZQd/FHySECB1PXB++bKm5/uHg9wT8YQ0pH2n672iV0Z0Kl7geRXDu3cdsSFrb+0q6yAtckP9wRw3OTpIyer5xAJSuk9P+0bwSUKC9Gbh2BBmCQ5pduHN5eVPuwH+8XQ3Ip6QvH7+K9I1H/7zsS7bvCtLt/1jTEkdpPXPJKvKBqTn3F3+4A03DEg7tWH5y50bzQw+T0iQOhz+lxBmCQ6p3aGLyssfcMP846fc1XGG9LTnbUxUeFu/9LzFHWrTkBa0SfZ2NiA96ab6g2XuRiDt1DPOHfWbEOYJB9KbhXeGMU1gSLMLb0xup7nb/MGz7vI4Q1rseZsTH3vLhxQXd0vUZP8VaZI/KHPDgbRTH/1qaseCgcHnCQfS5Y1XhzFNYEjdGi9Mbh90Q/3BU+6aOENakoK0odPsam9pBtJuCgnSEneLP3gq/cIEpJ0a5F4NPEcokCqP+EkY0wSG9NbXz/R3c1x/f3dP+oUp3pDKimo879HsQ9rQ4uf+YIgrA1J9fxz3O3/3azc5HpBecpPCmCYwpBnpl6Jl+5/n7wa4p2IKqf/Df89AWplY8a+FgxNV2YZUeWmTdyor1x777UBz7WmQPio8syq5+6V7Jh6Qhrvgv4z1CwrpGjcrte/Q+Pny8kX/dUKgybIIaW7nPhlI3ozuPaZsHdhtQ3YgzZ00qZvrP2nSssr3Dj566B0/aDQnQkhzJ07s5q6aOHFpg6NoIX12nfvhqImdCr5fFQ9I3d2aMKYJDCnhFqb28w48csCgk/edHldI/5eCQepVd9nU9MrKRRe2aHraM4FmCwipuG4x9zU4ihjSp5NObrb/t66uCD5TKJAuKAxjluCQ/ruw7uDpc/Zvcup9wSbbIyCFHFd/y7j6WwUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkFZBsQJIBSQUkG5BkQFIByQYkGZBUQLIBSQYkVUwgzXg8Rg2MegENGzY16hU07NqoF9CwWD1O08QzO8eQiPbMgEQUQkAiCiEgEYUQkIhCCEhEIQQkohACElEIAYkohIBEFEJAIgohIBGFEJCIQghIRCGUl5CmTY56BQ2ad+emqJdQ34d3Lo16CfV9eeesqJfQoBl3ZnX6vISUaBf1Chp0R5uPo15Cfa+2eTzqJdS3
tc3VUS+hQb2/n9XpgRQ0IKmAFPeApAKSDEg2IKmAJAMSUfwDElEIAYkohPIDUtGS1K4m8X7ECzFL2JSoyPmqYnAaTDWJd6Jegq7u6ZMpK+cvryDVfrA14oWYJSQh5XxVMTgNpthCWv6xgZSV85dXkGJYElLUS4hFsYV024u5efrEHNKnd15cfO+XXtErIzoVL/Bfk2sTC0f07zvf8zaP79Vl8CrPe+2qzsX3Vu8YZruGS1g9qMvVC9Nv7SqGd+86YoO340vZXkPmDqsTLw/u13epZxaQ69PjQ6oZNrLmr+N7dS750Nue+F2/yTvuNadnZ+eGtO90febpk1nH3vjW7oaxm9cPmO4VXfPhPx/rss0/A0UDt3ivdNnmDRr/RfXjPas3tn9/+8brZmeGWV9QgyXU9ivdVjUkDenK0m3/GFPi7Vhd1tdQd4c1iev+6r3aYYtZQK5Pjw+ptORLb9CtW758uOvfkutY9c8d95rTs7NL/fxXpPTTp/6k7XWQVic2JjflXtHTnrcx/ZQtmuu/n/pkVWJz8s1ut7JVidWet93LDLO+ogZL+KO/uCXpVW390vMWd6jNfCn7a6i7w5rEc8l//a6v7LqAnJ+eJKQn+n+RfMDWel71xQu8oie9+nvN6dnZpRSk9NOn/qTtdZDebF+b2hctTr5ZSXycehanD8sSqWbX3tOhZNZ6LzPM+ooaLqH9ds/7JA1p+ZDi4m6JmsyXsr+GujusSSxL3nDVrF0XkPPTU5MYm/hD5gHr/xuvKIl2x73m9OzsUgpS3f3uOGl7HaRF/nPVS/+0mIGUPlyayLxP2TRvZIey+mGWa7CE+f6TZk0K0oZOs6u9pf5TZUluIGXusCaRfI54V/x61wXk/PTUJPqNGFhTB+mqJ1LryNxrbs/OLvV7ccfTp/6k7XWQ1vifiX30wm4grU2sTH59o1ezJbmbPjgzzHoNlrA8Uen/qetDKiuq8bxHcwgpc4c1ieS7lurOr+26gJyfnppE+dY+jyQfsOQbt22d56fWkbnX3J6dXWoAqf6k7XWQvEEjKtddd+9uIHlDS6pqXuzy+at9Pq7dPGRKZpj1BTVYQnWP0q3rbk5BWplY8a+FgxNVOYOUucOaxICK6lkd/2YWkOvT43/YsKLDu17JyC+23dfzH+lPnOvuNbdnZ5f6P/z3zP3Wn7S9D9KWO7r0nLZtd5A2j+t6SckKr3ZWn4697v57Zpj1Gi7ho+s7X/1O4s/+TTO695iydWC3TbmClLnDDYkXb+rcr9wzC8j16Un9Hunx4i1Vd/S89LZ1db+6ydxrTs/OLs3t3GfHA7bjpO19kGg3NfgTNba/B93rAlLetf0j/zPtdECKS0DKuxZ2GFWbOQZSXAISUQgBiSiEgEQUQkAiCiEgEYUQkPK3X7pMp+32622Pzu169uqAlL+9PnXq1Gtd5+TWXNb9nv+4AimHASm/e92V7u7mKUDKcUDK7+ognXn28984w2vd2j8u+qp3QfLtXhuv7XFrLmze/JLsX8lLQMr36iCdd/I373mhHtKfilz5h17blq1HP3tjwS+iXeFeEpDyuzpIbd2c5HYHJK+f23Hjj74W4fL2noCU32UgNf6XZyE19a/J61UY4fL2noCU32UgHeFvd4V0tD/sx0OcizjL+V0G0tH+FkjRxVnO73aCdOpJ/vY0IEUQZzm/2wnSeYckfyja1CwJ6TL3P0DKaZzl/G4nSJPdmMp3f/ydJKQR7rangZTLOMv53U6Qqm84sknr5we08Ly/nNqoFZByGWeZKISARBRCQCIKISARhRCQiEIISEQhBCSiEAISUQgBiSiEgEQUQkAiCiEgEYXQ/wMhANIDIZLX1QAAAABJRU5ErkJggg=="
+ },
+ "metadata": {
+ "image/png": {
+ "width": 420,
+ "height": 420
+ }
+ }
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 436
+ },
+ "id": "HsAtwukyLsvt",
+ "outputId": "3032a224-a2c8-4270-b4f2-7bb620317400"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The darker squares in the confusion matrix plot indicate high numbers of cases, and you can hopefully see a diagonal line of darker squares indicating cases where the predicted and actual label are the same.\n",
+ "\n",
+ "Let's now calculate summary statistics for the confusion matrix."
+ ],
+ "metadata": {
+ "id": "oOJC87dkLwPr"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "source": [
+ "# Summary stats for confusion matrix\n",
+ "conf_mat(data = results, truth = cuisine, estimate = .pred_class) %>% \n",
+ "summary()"
+ ],
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " .metric .estimator .estimate\n",
+ "1 accuracy multiclass 0.7880435\n",
+ "2 kap multiclass 0.7276583\n",
+ "3 sens macro 0.7780927\n",
+ "4 spec macro 0.9477598\n",
+ "5 ppv macro 0.7585583\n",
+ "6 npv macro 0.9460080\n",
+ "7 mcc multiclass 0.7292724\n",
+ "8 j_index macro 0.7258524\n",
+ "9 bal_accuracy macro 0.8629262\n",
+ "10 detection_prevalence macro 0.2000000\n",
+ "11 precision macro 0.7585583\n",
+ "12 recall macro 0.7780927\n",
+ "13 f_meas macro 0.7641862"
+ ],
+ "text/markdown": [
+ "\n",
+ "A tibble: 13 × 3\n",
+ "\n",
+ "| .metric <chr> | .estimator <chr> | .estimate <dbl> |\n",
+ "|---|---|---|\n",
+ "| accuracy | multiclass | 0.7880435 |\n",
+ "| kap | multiclass | 0.7276583 |\n",
+ "| sens | macro | 0.7780927 |\n",
+ "| spec | macro | 0.9477598 |\n",
+ "| ppv | macro | 0.7585583 |\n",
+ "| npv | macro | 0.9460080 |\n",
+ "| mcc | multiclass | 0.7292724 |\n",
+ "| j_index | macro | 0.7258524 |\n",
+ "| bal_accuracy | macro | 0.8629262 |\n",
+ "| detection_prevalence | macro | 0.2000000 |\n",
+ "| precision | macro | 0.7585583 |\n",
+ "| recall | macro | 0.7780927 |\n",
+ "| f_meas | macro | 0.7641862 |\n",
+ "\n"
+ ],
+ "text/latex": [
+ "A tibble: 13 × 3\n",
+ "\\begin{tabular}{lll}\n",
+ " .metric & .estimator & .estimate\\\\\n",
+ " <chr> & <chr> & <dbl>\\\\\n",
+ "\\hline\n",
+ "\t accuracy & multiclass & 0.7880435\\\\\n",
+ "\t kap & multiclass & 0.7276583\\\\\n",
+ "\t sens & macro & 0.7780927\\\\\n",
+ "\t spec & macro & 0.9477598\\\\\n",
+ "\t ppv & macro & 0.7585583\\\\\n",
+ "\t npv & macro & 0.9460080\\\\\n",
+ "\t mcc & multiclass & 0.7292724\\\\\n",
+ "\t j\\_index & macro & 0.7258524\\\\\n",
+ "\t bal\\_accuracy & macro & 0.8629262\\\\\n",
+ "\t detection\\_prevalence & macro & 0.2000000\\\\\n",
+ "\t precision & macro & 0.7585583\\\\\n",
+ "\t recall & macro & 0.7780927\\\\\n",
+ "\t f\\_meas & macro & 0.7641862\\\\\n",
+ "\\end{tabular}\n"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 494
+ },
+ "id": "OYqetUyzL5Wz",
+ "outputId": "6a84d65e-113d-4281-dfc1-16e8b70f37e6"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "If we narrow down to metrics such as accuracy, sensitivity and ppv, we are not badly off for a start 🥳!\n",
+ "\n",
+ "## 5. Digging Deeper\n",
+ "\n",
+ "Let's ask one subtle question: What criterion is used to settle on a given type of cuisine as the predicted outcome?\n",
+ "\n",
+ "Well, statistical machine learning algorithms such as logistic regression are based on `probability`, so what a classifier actually predicts is a probability distribution over a set of possible outcomes. The class with the highest probability is then chosen as the most likely outcome for the given observation.\n",
+ "\n",
+ "Let's see this in action by making both hard class predictions and probabilities."
+ ],
+ "metadata": {
+ "id": "43t7vz8vMJtW"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "source": [
+ "# Make hard class prediction and probabilities\n",
+ "results_prob <- cuisines_test %>%\n",
+ " select(cuisine) %>% \n",
+ " bind_cols(mr_fit %>% predict(new_data = cuisines_test)) %>% \n",
+ " bind_cols(mr_fit %>% predict(new_data = cuisines_test, type = \"prob\"))\n",
+ "\n",
+ "# Print out results\n",
+ "results_prob %>% \n",
+ " slice_head(n = 5)"
+ ],
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " cuisine .pred_class .pred_chinese .pred_indian .pred_japanese .pred_korean\n",
+ "1 indian thai 1.551259e-03 0.4587877 5.988039e-04 2.428503e-04\n",
+ "2 indian indian 2.637133e-05 0.9999488 6.648651e-07 2.259993e-05\n",
+ "3 indian indian 1.049433e-03 0.9909982 1.060937e-03 1.644947e-05\n",
+ "4 indian indian 6.237482e-02 0.4763035 9.136702e-02 3.660913e-01\n",
+ "5 indian indian 1.431745e-02 0.9418551 2.945239e-02 8.721782e-03\n",
+ " .pred_thai \n",
+ "1 5.388194e-01\n",
+ "2 1.577948e-06\n",
+ "3 6.874989e-03\n",
+ "4 3.863391e-03\n",
+ "5 5.653283e-03"
+ ],
+ "text/markdown": [
+ "\n",
+ "A tibble: 5 × 7\n",
+ "\n",
+ "| cuisine <fct> | .pred_class <fct> | .pred_chinese <dbl> | .pred_indian <dbl> | .pred_japanese <dbl> | .pred_korean <dbl> | .pred_thai <dbl> |\n",
+ "|---|---|---|---|---|---|---|\n",
+ "| indian | thai | 1.551259e-03 | 0.4587877 | 5.988039e-04 | 2.428503e-04 | 5.388194e-01 |\n",
+ "| indian | indian | 2.637133e-05 | 0.9999488 | 6.648651e-07 | 2.259993e-05 | 1.577948e-06 |\n",
+ "| indian | indian | 1.049433e-03 | 0.9909982 | 1.060937e-03 | 1.644947e-05 | 6.874989e-03 |\n",
+ "| indian | indian | 6.237482e-02 | 0.4763035 | 9.136702e-02 | 3.660913e-01 | 3.863391e-03 |\n",
+ "| indian | indian | 1.431745e-02 | 0.9418551 | 2.945239e-02 | 8.721782e-03 | 5.653283e-03 |\n",
+ "\n"
+ ],
+ "text/latex": [
+ "A tibble: 5 × 7\n",
+ "\\begin{tabular}{lllllll}\n",
+ " cuisine & .pred\\_class & .pred\\_chinese & .pred\\_indian & .pred\\_japanese & .pred\\_korean & .pred\\_thai\\\\\n",
+ " <fct> & <fct> & <dbl> & <dbl> & <dbl> & <dbl> & <dbl>\\\\\n",
+ "\\hline\n",
+ "\t indian & thai & 1.551259e-03 & 0.4587877 & 5.988039e-04 & 2.428503e-04 & 5.388194e-01\\\\\n",
+ "\t indian & indian & 2.637133e-05 & 0.9999488 & 6.648651e-07 & 2.259993e-05 & 1.577948e-06\\\\\n",
+ "\t indian & indian & 1.049433e-03 & 0.9909982 & 1.060937e-03 & 1.644947e-05 & 6.874989e-03\\\\\n",
+ "\t indian & indian & 6.237482e-02 & 0.4763035 & 9.136702e-02 & 3.660913e-01 & 3.863391e-03\\\\\n",
+ "\t indian & indian & 1.431745e-02 & 0.9418551 & 2.945239e-02 & 8.721782e-03 & 5.653283e-03\\\\\n",
+ "\\end{tabular}\n"
+ ]
+ },
+ "metadata": {}
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 248
+ },
+ "id": "xdKNs-ZPMTJL",
+ "outputId": "68f6ac5a-725a-4eff-9ea6-481fef00e008"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Much better!\n",
+ "\n",
+ "✅ Can you explain why the model is pretty sure that the first observation is Thai?\n",
+ "\n",
+ "## **🚀Challenge**\n",
+ "\n",
+ "In this lesson, you used your cleaned data to build a machine learning model that can predict a national cuisine based on a series of ingredients. Take some time to read through the [many options](https://www.tidymodels.org/find/parsnip/#models) Tidymodels provides to classify data and [other ways](https://parsnip.tidymodels.org/articles/articles/Examples.html#multinom_reg-models) to fit multinomial regression.\n",
+ "\n",
+ "#### THANK YOU TO:\n",
+ "\n",
+ "[`Allison Horst`](https://twitter.com/allison_horst/) for creating the amazing illustrations that make R more welcoming and engaging. Find more illustrations at her [gallery](https://www.google.com/url?q=https://github.com/allisonhorst/stats-illustrations&sa=D&source=editors&ust=1626380772530000&usg=AOvVaw3zcfyCizFQZpkSLzxiiQEM).\n",
+ "\n",
+ "[Cassie Breviu](https://www.twitter.com/cassieview) and [Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️\n",
+ "\n",
+ "Would have thrown in some jokes but I donut understand food puns 😅.\n",
+ "\n",
+ "\n",
+ "\n",
+ "Happy Learning,\n",
+ "\n",
+ "[Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.\n"
+ ],
+ "metadata": {
+ "id": "2tWVHMeLMYdM"
+ }
+ }
+ ]
+}
\ No newline at end of file
diff --git a/4-Classification/2-Classifiers-1/solution/R/lesson_11.Rmd b/4-Classification/2-Classifiers-1/solution/R/lesson_11.Rmd
new file mode 100644
index 000000000..a4221217b
--- /dev/null
+++ b/4-Classification/2-Classifiers-1/solution/R/lesson_11.Rmd
@@ -0,0 +1,349 @@
+---
+title: 'Build a classification model: Delicious Asian and Indian Cuisines'
+output:
+ html_document:
+ df_print: paged
+ theme: flatly
+ highlight: breezedark
+ toc: yes
+ toc_float: yes
+ code_download: yes
+---
+
+## Cuisine classifiers 1
+
+In this lesson, we'll explore a variety of classifiers to *predict a given national cuisine based on a group of ingredients.* While doing so, we'll learn more about some of the ways that algorithms can be leveraged for classification tasks.
+
+### [**Pre-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/21/)
+
+### **Preparation**
+
+This lesson builds on our [previous lesson](https://github.com/microsoft/ML-For-Beginners/blob/main/4-Classification/1-Introduction/solution/lesson_10-R.ipynb) where we:
+
+- Made a gentle introduction to classification using a dataset about all the brilliant cuisines of Asia and India.
+
+- Explored some [dplyr verbs](https://dplyr.tidyverse.org/) to prep and clean our data.
+
+- Made beautiful visualizations using ggplot2.
+
+- Demonstrated how to deal with imbalanced data by preprocessing it using [recipes](https://recipes.tidymodels.org/articles/Simple_Example.html).
+
+- Demonstrated how to `prep` and `bake` our recipe to confirm that it will work as expected.
+
+#### **Prerequisite**
+
+For this lesson, we'll require the following packages to clean, prep and visualize our data:
+
+- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to make data science faster, easier and more fun!
+
+- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.
+
+- `DataExplorer`: The [DataExplorer package](https://cran.r-project.org/web/packages/DataExplorer/vignettes/dataexplorer-intro.html) is meant to simplify and automate the EDA process and report generation.
+
+- `themis`: The [themis package](https://themis.tidymodels.org/) provides Extra Recipes Steps for Dealing with Unbalanced Data.
+
+- `nnet`: The [nnet package](https://cran.r-project.org/web/packages/nnet/nnet.pdf) provides functions for estimating feed-forward neural networks with a single hidden layer, and for multinomial logistic regression models.
+
+You can install them as:
+
+`install.packages(c("tidyverse", "tidymodels", "DataExplorer", "themis", "nnet", "here"))`
+
+Alternatively, the script below checks whether you have the packages required to complete this module and installs them for you in case they are missing.
+
+```{r, message=F, warning=F}
+suppressWarnings(if (!require("pacman"))install.packages("pacman"))
+
+pacman::p_load(tidyverse, tidymodels, DataExplorer, themis, nnet, here)
+```
+
+Now, let's hit the ground running!
+
+## 1. Split the data into training and test sets
+
+We'll start by picking a few steps from our previous lesson.
+
+### Drop the most common ingredients that create confusion between distinct cuisines, using `dplyr::select()`.
+
+Everyone loves rice, garlic and ginger!
+
+```{r recap_drop}
+# Load the original cuisines data
+df <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/4-Classification/data/cuisines.csv")
+
+# Drop id column, rice, garlic and ginger from our original data set
+df_select <- df %>%
+ select(-c(1, rice, garlic, ginger)) %>%
+ # Encode cuisine column as categorical
+ mutate(cuisine = factor(cuisine))
+
+# Display new data set
+df_select %>%
+ slice_head(n = 5)
+
+# Display distribution of cuisines
+df_select %>%
+ count(cuisine) %>%
+ arrange(desc(n))
+```
+
+Perfect! Now, time to split the data such that 70% goes to training and 30% goes to testing. We'll also apply a `stratification` technique when splitting the data to `maintain the proportion of each cuisine` in the training and test sets.
+
+[rsample](https://rsample.tidymodels.org/), a package in Tidymodels, provides infrastructure for efficient data splitting and resampling:
+
+```{r data_split}
+# Load the core Tidymodels packages into R session
+library(tidymodels)
+
+# Create split specification
+set.seed(2056)
+cuisines_split <- initial_split(data = df_select,
+ strata = cuisine,
+ prop = 0.7)
+
+# Extract the data in each split
+cuisines_train <- training(cuisines_split)
+cuisines_test <- testing(cuisines_split)
+
+# Print the number of cases in each split
+cat("Training cases: ", nrow(cuisines_train), "\n",
+ "Test cases: ", nrow(cuisines_test), sep = "")
+
+# Display the first few rows of the training set
+cuisines_train %>%
+ slice_head(n = 5)
+
+
+# Display distribution of cuisines in the training set
+cuisines_train %>%
+ count(cuisine) %>%
+ arrange(desc(n))
+
+
+```
+
+## 2. Deal with imbalanced data
+
+As you might have noticed in the original data set, as well as in our training set, there is quite an unequal distribution in the number of cuisines. Korean cuisines appear *almost* 3 times as often as Thai cuisines. Imbalanced data often has negative effects on model performance. Many models perform best when the number of observations is equal and, thus, tend to struggle with unbalanced data.
+
+There are two main ways of dealing with imbalanced data sets:
+
+- adding observations to the minority class: `Over-sampling`, e.g. using the SMOTE algorithm, which synthetically generates new examples of the minority class using nearest neighbors of these cases.
+
+- removing observations from the majority class: `Under-sampling`
+
+In our previous lesson, we demonstrated how to deal with imbalanced data sets using a `recipe`. A recipe can be thought of as a blueprint that describes what steps should be applied to a data set in order to get it ready for data analysis. In our case, we want to have an equal distribution in the number of our cuisines for our `training set`. Let's get right into it.
+
+```{r recap_balance}
+# Load themis package for dealing with imbalanced data
+library(themis)
+
+# Create a recipe for preprocessing training data
+cuisines_recipe <- recipe(cuisine ~ ., data = cuisines_train) %>%
+ step_smote(cuisine)
+
+# Print recipe
+cuisines_recipe
+
+```
+
+You can of course go ahead and confirm (using prep + bake) that the recipe works as you expect, with each cuisine label ending up with `559` observations.
+
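+A quick sketch of that check, assuming the `cuisines_recipe` and `cuisines_train` objects created above: `prep()` estimates the recipe from the training data and `bake(new_data = NULL)` returns the preprocessed training set, so counting the cuisines should reveal the balanced classes.
+
+```{r check_balance}
+# Estimate the recipe, apply it to the training data, then count classes
+cuisines_recipe %>%
+  prep() %>%
+  bake(new_data = NULL) %>%
+  count(cuisine)
+```
+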
+Since we'll be using this recipe as a preprocessor for modeling, a `workflow()` will do all the prep and bake for us, so we won't have to manually estimate the recipe.
+
+Now we are ready to train a model 👩‍💻👨‍💻!
+
+## 3. Choosing your classifier
+
+
+Now we have to decide which algorithm to use for the job 🤔.
+
+In Tidymodels, the [`parsnip package`](https://parsnip.tidymodels.org/index.html) provides a consistent interface for working with models across different engines (packages). Please see the parsnip documentation to explore [model types & engines](https://www.tidymodels.org/find/parsnip/#models) and their corresponding [model arguments](https://www.tidymodels.org/find/parsnip/#model-args). The variety is quite bewildering at first sight. For instance, the following methods all include classification techniques:
+
+- C5.0 Rule-Based Classification Models
+
+- Flexible Discriminant Models
+
+- Linear Discriminant Models
+
+- Regularized Discriminant Models
+
+- Logistic Regression Models
+
+- Multinomial Regression Models
+
+- Naive Bayes Models
+
+- Support Vector Machines
+
+- Nearest Neighbors
+
+- Decision Trees
+
+- Ensemble methods
+
+- Neural Networks
+
+The list goes on!
+
+### **What classifier to go with?**
+
+So, which classifier should you choose? Often, running through several and looking for a good result is a practical way to decide.
+
+> AutoML solves this problem neatly by running these comparisons in the cloud, allowing you to choose the best algorithm for your data. Try it [here](https://docs.microsoft.com/learn/modules/automate-model-selection-with-azure-automl/?WT.mc_id=academic-15963-cxa)
+
+The choice of classifier also depends on our problem. For instance, when the outcome can be categorized into `more than two classes`, as in our case, you must use a `multiclass classification algorithm` as opposed to `binary classification`.
+
+### **A better approach**
+
+A better way than wildly guessing, however, is to follow the ideas on this downloadable [ML Cheat sheet](https://docs.microsoft.com/azure/machine-learning/algorithm-cheat-sheet?WT.mc_id=academic-15963-cxa). Here, we discover that, for our multiclass problem, we have some choices:
+
+
+### **Reasoning**
+
+Let's see if we can reason our way through different approaches given the constraints we have:
+
+- **Deep Neural networks are too heavy**. Given our clean, but minimal dataset, and the fact that we are running training locally via notebooks, deep neural networks are too heavyweight for this task.
+
+- **No two-class classifier**. We do not use a two-class classifier, so that rules out one-vs-all.
+
+- **Decision tree or logistic regression could work**. A decision tree might work, or multinomial regression/multiclass logistic regression for multiclass data.
+
+- **Multiclass Boosted Decision Trees solve a different problem**. The multiclass boosted decision tree is most suitable for nonparametric tasks, e.g. tasks designed to build rankings, so it is not useful for us.
+
+Also, normally, before embarking on more complex machine learning models, e.g. ensemble methods, it's a good idea to build the simplest possible model to get an idea of what is going on. So for this lesson, we'll start with a `multinomial logistic regression` model.
+
+> Logistic regression is a technique used when the outcome variable is categorical (or nominal). In binary logistic regression the outcome has two possible categories, whereas in multinomial logistic regression it has more than two. See [Advanced Regression Methods](https://bookdown.org/chua/ber642_advanced_regression/multinomial-logistic-regression.html) for further reading.
+
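+Concretely, multinomial logistic regression models the probability of each class $k$ with the softmax function, where $\beta_k$ is the vector of coefficients learned for class $k$ and $x$ holds the ingredient features:
+
+$$P(y = k \mid x) = \frac{e^{\beta_k \cdot x}}{\sum_{j=1}^{K} e^{\beta_j \cdot x}}$$
+
+The class with the largest probability becomes the hard prediction.
+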
+## 4. Train and evaluate a multinomial logistic regression model
+
+In Tidymodels, `parsnip::multinom_reg()` defines a model that uses linear predictors to predict multiclass data using the multinomial distribution. See `?multinom_reg()` for the different ways/engines you can use to fit this model.
+
+For this example, we'll fit a Multinomial regression model via the default [nnet](https://cran.r-project.org/web/packages/nnet/nnet.pdf) engine.
+
+> I picked a value for `penalty` sort of randomly. There are better ways to choose this value, that is, by using `resampling` and `tuning` the model, which we'll discuss later.
+>
+> See [Tidymodels: Get Started](https://www.tidymodels.org/start/tuning/) in case you want to learn more on how to tune model hyperparameters.
+
+```{r multinorm_reg}
+# Create a multinomial regression model specification
+mr_spec <- multinom_reg(penalty = 1) %>%
+ set_engine("nnet", MaxNWts = 2086) %>%
+ set_mode("classification")
+
+# Print model specification
+mr_spec
+
+```
+
+Great job 🥳! Now that we have a recipe and a model specification, we need a way of bundling them together into an object that will first preprocess the data, then fit the model on the preprocessed data, and also allow for potential post-processing activities. In Tidymodels, this convenient object is called a [`workflow`](https://workflows.tidymodels.org/) and it holds all your modeling components. This is what we'd call *pipelines* in *Python*.
+
+So let's bundle everything up into a workflow! 📦
+
+```{r workflow}
+# Bundle recipe and model specification
+mr_wf <- workflow() %>%
+ add_recipe(cuisines_recipe) %>%
+ add_model(mr_spec)
+
+# Print out workflow
+mr_wf
+
+```
+
+Workflows! A **`workflow()`** can be fit in much the same way a model can. So, time to train a model!
+
+```{r train}
+# Train a multinomial regression model
+mr_fit <- fit(object = mr_wf, data = cuisines_train)
+
+mr_fit
+```
+
+The output shows the coefficients that the model learned during training.
+
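+If you'd like those coefficients in a tidy tibble rather than raw printed output, here's a sketch, assuming the fitted workflow `mr_fit` above and a recent version of the workflows package where `extract_fit_parsnip()` is available:
+
+```{r tidy_coefs}
+# Pull the underlying parsnip model out of the workflow,
+# then tidy its coefficients into a tibble
+mr_fit %>%
+  extract_fit_parsnip() %>%
+  tidy() %>%
+  slice_head(n = 5)
+```
+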
+### Evaluate the Trained Model
+
+It's time to see how the model performed by evaluating it on the test set! Let's begin by making predictions on the test set.
+
+```{r test}
+# Make predictions on the test set
+results <- cuisines_test %>% select(cuisine) %>%
+ bind_cols(mr_fit %>% predict(new_data = cuisines_test))
+
+# Print out results
+results %>%
+ slice_head(n = 5)
+
+```
+
+Great job! In Tidymodels, evaluating model performance can be done using [yardstick](https://yardstick.tidymodels.org/) - a package used to measure the effectiveness of models using performance metrics. As we did in our logistic regression lesson, let's begin by computing a confusion matrix.
+
+```{r conf_mat}
+# Confusion matrix for categorical data
+conf_mat(data = results, truth = cuisine, estimate = .pred_class)
+
+
+```
+
+When dealing with multiple classes, it's generally more intuitive to visualize this as a heat map, like this:
+
+```{r conf_viz}
+update_geom_defaults(geom = "tile", new = list(color = "black", alpha = 0.7))
+# Visualize confusion matrix
+results %>%
+ conf_mat(cuisine, .pred_class) %>%
+ autoplot(type = "heatmap")
+```
+
+The darker squares in the confusion matrix plot indicate high numbers of cases, and you can hopefully see a diagonal line of darker squares indicating cases where the predicted and actual label are the same.
+
+Let's now calculate summary statistics for the confusion matrix.
+
+```{r conf_stats}
+# Summary stats for confusion matrix
+conf_mat(data = results, truth = cuisine, estimate = .pred_class) %>% summary()
+```
+
+If we narrow down to metrics such as accuracy, sensitivity and ppv, we are not badly off for a start 🥳!
+
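+If you only need a handful of those metrics, yardstick's `metric_set()` bundles them into a single function; a sketch, using the `results` tibble created earlier:
+
+```{r metric_subset}
+# Combine accuracy, sensitivity and ppv into one metric function
+eval_metrics <- metric_set(accuracy, sens, ppv)
+
+eval_metrics(data = results, truth = cuisine, estimate = .pred_class)
+```
+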
+## 5. Digging Deeper
+
+Let's ask one subtle question: What criterion is used to settle on a given type of cuisine as the predicted outcome?
+
+Well, statistical machine learning algorithms such as logistic regression are based on `probability`, so what a classifier actually predicts is a probability distribution over a set of possible outcomes. The class with the highest probability is then chosen as the most likely outcome for the given observation.
+
+Let's see this in action by making both hard class predictions and probabilities.
+
+```{r pred_prob}
+# Make hard class prediction and probabilities
+results_prob <- cuisines_test %>%
+ select(cuisine) %>%
+ bind_cols(mr_fit %>% predict(new_data = cuisines_test)) %>%
+ bind_cols(mr_fit %>% predict(new_data = cuisines_test, type = "prob"))
+
+# Print out results
+results_prob %>%
+ slice_head(n = 5)
+
+
+```
+
+Much better!
+
+✅ Can you explain why the model is pretty sure that the first observation is Thai?
+
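+One way to answer that, sketched with the `results_prob` tibble above: reshape the probability columns into long format and rank them for the first test observation.
+
+```{r rank_probs}
+# Rank the predicted class probabilities for the first test observation
+results_prob %>%
+  slice(1) %>%
+  select(-cuisine, -.pred_class) %>%
+  pivot_longer(everything(), names_to = "class", values_to = "probability") %>%
+  arrange(desc(probability))
+```
+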
+## **🚀Challenge**
+
+In this lesson, you used your cleaned data to build a machine learning model that can predict a national cuisine based on a series of ingredients. Take some time to read through the [many options](https://www.tidymodels.org/find/parsnip/#models) Tidymodels provides to classify data and [other ways](https://parsnip.tidymodels.org/articles/articles/Examples.html#multinom_reg-models) to fit multinomial regression.
+
+#### THANK YOU TO:
+
+[`Allison Horst`](https://twitter.com/allison_horst/) for creating the amazing illustrations that make R more welcoming and engaging. Find more illustrations at her [gallery](https://www.google.com/url?q=https://github.com/allisonhorst/stats-illustrations&sa=D&source=editors&ust=1626380772530000&usg=AOvVaw3zcfyCizFQZpkSLzxiiQEM).
+
+[Cassie Breviu](https://www.twitter.com/cassieview) and [Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️
+
+Happy Learning,
+
+[Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.