{
"nbformat": 4,
"nbformat_minor": 2,
"metadata": {
"colab": {
"name": "lesson_10-R.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "ir",
"display_name": "R"
},
"language_info": {
"name": "R"
},
"coopTranslator": {
"original_hash": "2621e24705e8100893c9bf84e0fc8aef",
"translation_date": "2025-09-06T15:39:42+00:00",
"source_file": "4-Classification/1-Introduction/solution/R/lesson_10-R.ipynb",
"language_code": "en"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"# Build a classification model: Delicious Asian and Indian Cuisines\n"
],
"metadata": {
"id": "ItETB4tSFprR"
}
},
{
"cell_type": "markdown",
"source": [
"## Introduction to classification: Clean, prep, and visualize your data\n",
"\n",
"In these four lessons, you'll dive into one of the core areas of classic machine learning: *classification*. We'll explore various classification algorithms using a dataset about the diverse and delicious cuisines of Asia and India. Get ready to whet your appetite!\n",
"\n",
"<p>\n",
" <img src=\"../../images/pinch.png\"\n",
" width=\"600\"/>\n",
" <figcaption>Celebrate pan-Asian cuisines in these lessons! Image by Jen Looper</figcaption>\n",
"</p>\n",
"\n",
"Classification is a type of [supervised learning](https://wikipedia.org/wiki/Supervised_learning) that shares many similarities with regression techniques. In classification, you train a model to predict which `category` an item belongs to. If machine learning is about predicting values or assigning names to things using datasets, then classification generally falls into two groups: *binary classification* and *multiclass classification*.\n",
"\n",
"Keep in mind:\n",
"\n",
"- **Linear regression** helped you predict relationships between variables and make accurate predictions about where a new data point would fall in relation to that line. For example, you could predict numeric values like *the price of a pumpkin in September versus December*.\n",
"\n",
"- **Logistic regression** helped you identify \"binary categories\": at a certain price point, *is this pumpkin orange or not-orange*?\n",
"\n",
"Classification uses various algorithms to determine other ways of assigning a label or class to a data point. In this lesson, we'll use cuisine data to see if we can predict the cuisine of origin based on a set of ingredients.\n",
"\n",
"### [**Pre-lecture quiz**](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/19/)\n",
"\n",
"### **Introduction**\n",
"\n",
"Classification is a fundamental task for machine learning researchers and data scientists. From simple binary classification (\"is this email spam or not?\") to complex image classification and segmentation using computer vision, the ability to sort data into classes and analyze it is invaluable.\n",
"\n",
"To put it in more scientific terms, classification involves creating a predictive model that maps the relationship between input variables and output variables.\n",
"\n",
"<p>\n",
" <img src=\"../../images/binary-multiclass.png\"\n",
" width=\"600\"/>\n",
" <figcaption>Binary vs. multiclass problems for classification algorithms to handle. Infographic by Jen Looper</figcaption>\n",
"</p>\n",
"\n",
"Before we dive into cleaning, visualizing, and preparing our data for machine learning tasks, let's explore the different ways machine learning can be used to classify data.\n",
"\n",
"Derived from [statistics](https://wikipedia.org/wiki/Statistical_classification), classification in classic machine learning uses features like `smoker`, `weight`, and `age` to predict *the likelihood of developing a certain disease*. As a supervised learning technique similar to the regression exercises you've done before, classification uses labeled data and machine learning algorithms to predict and assign classes (or 'labels') to the items of a dataset, sorting them into specific groups or outcomes.\n",
"\n",
"✅ Take a moment to imagine a dataset about cuisines. What kinds of questions could a multiclass model answer? What about a binary model? For instance, could you predict whether a given cuisine is likely to use fenugreek? Or, if you were handed a grocery bag containing star anise, artichokes, cauliflower, and horseradish, could you determine whether you could create a typical Indian dish?\n",
"\n",
"### **Hello 'classifier'**\n",
"\n",
"The question we want to answer with this cuisine dataset is a **multiclass question**, as we have several possible national cuisines to consider. Based on a set of ingredients, which of these many classes does the data belong to?\n",
"\n",
"Tidymodels provides several algorithms for classifying data, depending on the type of problem you're trying to solve. In the next two lessons, you'll learn about some of these algorithms.\n",
"\n",
"#### **Prerequisite**\n",
"\n",
"For this lesson, we'll need the following packages to clean, prepare, and visualize our data:\n",
"\n",
"- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to make data science faster, easier, and more enjoyable.\n",
"\n",
"- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.\n",
"\n",
"- `DataExplorer`: The [DataExplorer package](https://cran.r-project.org/web/packages/DataExplorer/vignettes/dataexplorer-intro.html) simplifies and automates the exploratory data analysis (EDA) process and report generation.\n",
"\n",
"- `themis`: The [themis package](https://themis.tidymodels.org/) provides additional recipe steps for handling unbalanced data.\n",
"\n",
"You can install them using:\n",
"\n",
"`install.packages(c(\"tidyverse\", \"tidymodels\", \"DataExplorer\", \"themis\", \"here\"))`\n",
"\n",
"Alternatively, the script below checks whether the required packages for this module are installed and installs them for you if they're missing.\n"
],
"metadata": {
"id": "ri5bQxZ-Fz_0"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"suppressWarnings(if (!require(\"pacman\")) install.packages(\"pacman\"))\r\n",
"\r\n",
"pacman::p_load(tidyverse, tidymodels, DataExplorer, themis, here)"
],
"outputs": [],
"metadata": {
"id": "KIPxa4elGAPI"
}
},
{
"cell_type": "markdown",
"source": [
"Later in the lesson, we'll load these packages and make them available in our current R session. (This is purely for illustration; `pacman::p_load()` has already done it for you.)\n"
],
"metadata": {
"id": "YkKAxOJvGD4C"
}
},
{
"cell_type": "markdown",
"source": [
"## Exercise - clean and balance your data\n",
"\n",
"The first task before starting this project is to clean and **balance** your data to achieve better results.\n",
"\n",
"Let's take a look at the data! 🕵️\n"
],
"metadata": {
"id": "PFkQDlk0GN5O"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Import data\r\n",
"df <- read_csv(file = \"https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/4-Classification/data/cuisines.csv\")\r\n",
"\r\n",
"# View the first 5 rows\r\n",
"df %>% \r\n",
" slice_head(n = 5)\r\n"
],
"outputs": [],
"metadata": {
"id": "Qccw7okxGT0S"
}
},
{
"cell_type": "markdown",
"source": [
"Interesting! From the looks of it, the first column is a kind of `id` column. Let's get a little more information about the data.\n"
],
"metadata": {
"id": "XrWnlgSrGVmR"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Basic information about the data\r\n",
"df %>%\r\n",
" introduce()\r\n",
"\r\n",
"# Visualize basic information above\r\n",
"df %>% \r\n",
" plot_intro(ggtheme = theme_light())"
],
"outputs": [],
"metadata": {
"id": "4UcGmxRxGieA"
}
},
{
"cell_type": "markdown",
"source": [
"From the output, we can immediately see that we have `2448` rows, `385` columns, and `0` missing values. We also have 1 discrete column, *cuisine*.\n",
"\n",
"## Exercise - exploring cuisines\n",
"\n",
"Now the task gets more engaging. Let's analyze the data distribution by cuisine.\n"
],
"metadata": {
"id": "AaPubl__GmH5"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Count observations per cuisine\r\n",
"df %>% \r\n",
" count(cuisine) %>% \r\n",
" arrange(n)\r\n",
"\r\n",
"# Plot the distribution\r\n",
"theme_set(theme_light())\r\n",
"df %>% \r\n",
" count(cuisine) %>% \r\n",
" ggplot(mapping = aes(x = n, y = reorder(cuisine, -n))) +\r\n",
" geom_col(fill = \"midnightblue\", alpha = 0.7) +\r\n",
" ylab(\"cuisine\")"
],
"outputs": [],
"metadata": {
"id": "FRsBVy5eGrrv"
}
},
{
"cell_type": "markdown",
"source": [
"There are a limited number of cuisines, but the data distribution is imbalanced. You can address this! Before proceeding, take some time to explore further.\n",
"\n",
"Next, let's separate each cuisine into its own tibble and determine the amount of data available (rows, columns) for each cuisine.\n",
"\n",
"> A [tibble](https://tibble.tidyverse.org/) is a modern version of a data frame.\n",
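"\n",
"For example, a quick sketch (with made-up counts, purely for illustration):\n",
"\n",
"```r\n",
"library(tibble)\n",
"\n",
"# Unlike a base data frame, a tibble prints its dimensions and\n",
"# column types, and shows only the first rows\n",
"tibble(cuisine = c(\"thai\", \"korean\"), n_recipes = c(289, 799))\n",
"```\n",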
"\n",
"<p>\n",
" <img src=\"../../images/dplyr_filter.jpg\"\n",
" width=\"600\"/>\n",
" <figcaption>Illustration by @allison_horst</figcaption>\n",
"</p>\n"
],
"metadata": {
"id": "vVvyDb1kG2in"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Create individual tibble for the cuisines\r\n",
"thai_df <- df %>% \r\n",
" filter(cuisine == \"thai\")\r\n",
"japanese_df <- df %>% \r\n",
" filter(cuisine == \"japanese\")\r\n",
"chinese_df <- df %>% \r\n",
" filter(cuisine == \"chinese\")\r\n",
"indian_df <- df %>% \r\n",
" filter(cuisine == \"indian\")\r\n",
"korean_df <- df %>% \r\n",
" filter(cuisine == \"korean\")\r\n",
"\r\n",
"\r\n",
"# Find out how much data is available per cuisine\r\n",
"cat(\" thai df:\", dim(thai_df), \"\\n\",\r\n",
" \"japanese df:\", dim(japanese_df), \"\\n\",\r\n",
" \"chinese_df:\", dim(chinese_df), \"\\n\",\r\n",
" \"indian_df:\", dim(indian_df), \"\\n\",\r\n",
" \"korean_df:\", dim(korean_df))"
],
"outputs": [],
"metadata": {
"id": "0TvXUxD3G8Bk"
}
},
{
"cell_type": "markdown",
"source": [
"## **Exercise - Discovering top ingredients by cuisine using dplyr**\n",
"\n",
"Now you can dive deeper into the data and explore the typical ingredients for each cuisine. You'll need to clean up recurring data that causes confusion between cuisines, so let's tackle this issue.\n",
"\n",
"Create a function `create_ingredient()` in R that returns a dataframe of ingredients. This function removes an unhelpful column and then sorts the ingredients by their count.\n",
"\n",
"The basic structure of a function in R is:\n",
"\n",
"`myFunction <- function(arglist){`\n",
"\n",
"**`...`**\n",
"\n",
"**`return`**`(value)`\n",
"\n",
"`}`\n",
"\n",
"A concise introduction to R functions can be found [here](https://skirmer.github.io/presentations/functions_with_r.html#1).\n",
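"\n",
"For instance, a minimal (hypothetical) function following this structure:\n",
"\n",
"```r\n",
"# Return the n largest values of a numeric vector\n",
"top_n_values <- function(x, n = 3){\n",
"  out <- sort(x, decreasing = TRUE)[1:n]\n",
"  return(out)\n",
"}\n",
"\n",
"top_n_values(c(4, 8, 15, 16, 23, 42), n = 2) # 42 23\n",
"```\n",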
"\n",
"Let's jump right in! We'll use the [dplyr verbs](https://dplyr.tidyverse.org/) we've learned in previous lessons. As a quick refresher:\n",
"\n",
"- `dplyr::select()`: helps you choose which **columns** to keep or exclude.\n",
"\n",
"- `tidyr::pivot_longer()`: allows you to \"lengthen\" data, increasing the number of rows while reducing the number of columns.\n",
"\n",
"- `dplyr::group_by()` and `dplyr::summarise()`: enable you to calculate summary statistics for different groups and organize them into a neat table.\n",
"\n",
"- `dplyr::filter()`: creates a subset of the data containing only rows that meet your conditions.\n",
"\n",
"- `dplyr::mutate()`: lets you create or modify columns.\n",
"\n",
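"As a tiny, hypothetical illustration of these verbs working together (this mirrors the function we're about to write):\n",
"\n",
"```r\n",
"library(tidyverse)\n",
"\n",
"# Toy data: one row per recipe, one 0/1 column per ingredient\n",
"tibble(cuisine = c(\"thai\", \"thai\", \"korean\"),\n",
"       rice = c(1, 0, 1), mirin = c(0, 0, 1)) %>%\n",
"  # Lengthen: one row per (recipe, ingredient) pair\n",
"  pivot_longer(!cuisine, names_to = \"ingredients\", values_to = \"count\") %>%\n",
"  # Total occurrences of each ingredient\n",
"  group_by(ingredients) %>%\n",
"  summarise(n_instances = sum(count)) %>%\n",
"  # Keep ingredients that appear, most common first\n",
"  filter(n_instances != 0) %>%\n",
"  arrange(desc(n_instances))\n",
"```\n",
"\n",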
"Check out this [*art*-filled learnr tutorial](https://allisonhorst.shinyapps.io/dplyr-learnr/#section-welcome) by Allison Horst, which introduces some handy data wrangling functions in dplyr *(part of the Tidyverse)*.\n"
],
"metadata": {
"id": "K3RF5bSCHC76"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Create a function that returns the top ingredients by class\r\n",
"\r\n",
"create_ingredient <- function(df){\r\n",
" \r\n",
" # Drop the id column, which is the first column\r\n",
" ingredient_df <- df %>% select(-1) %>% \r\n",
" # Transpose data to a long format\r\n",
" pivot_longer(!cuisine, names_to = \"ingredients\", values_to = \"count\") %>% \r\n",
" # Find the top most ingredients for a particular cuisine\r\n",
" group_by(ingredients) %>% \r\n",
" summarise(n_instances = sum(count)) %>% \r\n",
" filter(n_instances != 0) %>% \r\n",
" # Arrange by descending order\r\n",
" arrange(desc(n_instances)) %>% \r\n",
" mutate(ingredients = factor(ingredients) %>% fct_inorder())\r\n",
" \r\n",
" \r\n",
" return(ingredient_df)\r\n",
"} # End of function"
],
"outputs": [],
"metadata": {
"id": "uB_0JR82HTPa"
}
},
{
"cell_type": "markdown",
"source": [
"Now we can use the function to get an idea of the ten most popular ingredients by cuisine. Let's test it with `thai_df`.\n"
],
"metadata": {
"id": "h9794WF8HWmc"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Call create_ingredient and display popular ingredients\r\n",
"thai_ingredient_df <- create_ingredient(df = thai_df)\r\n",
"\r\n",
"thai_ingredient_df %>% \r\n",
" slice_head(n = 10)"
],
"outputs": [],
"metadata": {
"id": "agQ-1HrcHaEA"
}
},
{
"cell_type": "markdown",
"source": [
"In the previous section, we used `geom_col()`; let's see how you can also use `geom_bar()` to create bar charts. Run `?geom_bar` for further reading.\n"
],
"metadata": {
"id": "kHu9ffGjHdcX"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Make a bar chart for popular thai cuisines\r\n",
"thai_ingredient_df %>% \r\n",
" slice_head(n = 10) %>% \r\n",
" ggplot(aes(x = n_instances, y = ingredients)) +\r\n",
" geom_bar(stat = \"identity\", width = 0.5, fill = \"steelblue\") +\r\n",
" xlab(\"\") + ylab(\"\")"
],
"outputs": [],
"metadata": {
"id": "fb3Bx_3DHj6e"
}
},
{
"cell_type": "markdown",
"source": [
"Let's do the same for the Japanese data.\n"
],
"metadata": {
"id": "RHP_xgdkHnvM"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Get popular ingredients for Japanese cuisines and make bar chart\r\n",
"create_ingredient(df = japanese_df) %>% \r\n",
" slice_head(n = 10) %>%\r\n",
" ggplot(aes(x = n_instances, y = ingredients)) +\r\n",
" geom_bar(stat = \"identity\", width = 0.5, fill = \"darkorange\", alpha = 0.8) +\r\n",
" xlab(\"\") + ylab(\"\")\r\n"
],
"outputs": [],
"metadata": {
"id": "019v8F0XHrRU"
}
},
{
"cell_type": "markdown",
"source": [
"Now let's explore the Chinese data.\n"
],
"metadata": {
"id": "iIGM7vO8Hu3v"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Get popular ingredients for Chinese cuisines and make bar chart\r\n",
"create_ingredient(df = chinese_df) %>% \r\n",
" slice_head(n = 10) %>%\r\n",
" ggplot(aes(x = n_instances, y = ingredients)) +\r\n",
" geom_bar(stat = \"identity\", width = 0.5, fill = \"cyan4\", alpha = 0.8) +\r\n",
" xlab(\"\") + ylab(\"\")"
],
"outputs": [],
"metadata": {
"id": "lHd9_gd2HyzU"
}
},
{
"cell_type": "markdown",
"source": [
"Let's take a look at the Indian cuisines 🌶️.\n"
],
"metadata": {
"id": "ir8qyQbNH1c7"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Get popular ingredients for Indian cuisines and make bar chart\r\n",
"create_ingredient(df = indian_df) %>% \r\n",
" slice_head(n = 10) %>%\r\n",
" ggplot(aes(x = n_instances, y = ingredients)) +\r\n",
" geom_bar(stat = \"identity\", width = 0.5, fill = \"#041E42FF\", alpha = 0.8) +\r\n",
" xlab(\"\") + ylab(\"\")"
],
"outputs": [],
"metadata": {
"id": "ApukQtKjH5FO"
}
},
{
"cell_type": "markdown",
"source": [
"Finally, plot the Korean ingredients.\n"
],
"metadata": {
"id": "qv30cwY1H-FM"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Get popular ingredients for Korean cuisines and make bar chart\r\n",
"create_ingredient(df = korean_df) %>% \r\n",
" slice_head(n = 10) %>%\r\n",
" ggplot(aes(x = n_instances, y = ingredients)) +\r\n",
" geom_bar(stat = \"identity\", width = 0.5, fill = \"#852419FF\", alpha = 0.8) +\r\n",
" xlab(\"\") + ylab(\"\")"
],
"outputs": [],
"metadata": {
"id": "lumgk9cHIBie"
}
},
{
"cell_type": "markdown",
"source": [
"From the data visualizations, we can now exclude the most common ingredients that cause confusion between different cuisines, using `dplyr::select()`.\n",
"\n",
"Everyone loves rice, garlic, and ginger!\n"
],
"metadata": {
"id": "iO4veMXuIEta"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Drop id column, rice, garlic and ginger from our original data set\r\n",
"df_select <- df %>% \r\n",
" select(-c(1, rice, garlic, ginger))\r\n",
"\r\n",
"# Display new data set\r\n",
"df_select %>% \r\n",
" slice_head(n = 5)"
],
"outputs": [],
"metadata": {
"id": "iHJPiG6rIUcK"
}
},
{
"cell_type": "markdown",
"source": [
"## Preprocessing data using recipes 👩‍🍳👨‍🍳 - Dealing with imbalanced data ⚖️\n",
"\n",
"<p>\n",
" <img src=\"../../images/recipes.png\"\n",
" width=\"600\"/>\n",
" <figcaption>Artwork by @allison_horst</figcaption>\n",
"</p>\n",
"\n",
"Since this lesson is about cuisines, we need to frame `recipes` in the right context.\n",
"\n",
"Tidymodels offers another handy package: `recipes` - a package designed for data preprocessing.\n"
],
"metadata": {
"id": "kkFd-JxdIaL6"
}
},
{
"cell_type": "markdown",
"source": [
"Let's take a look at the distribution of our cuisines again.\n"
],
"metadata": {
"id": "6l2ubtTPJAhY"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Distribution of cuisines\r\n",
"old_label_count <- df_select %>% \r\n",
" count(cuisine) %>% \r\n",
" arrange(desc(n))\r\n",
"\r\n",
"old_label_count"
],
"outputs": [],
"metadata": {
"id": "1e-E9cb7JDVi"
}
},
{
"cell_type": "markdown",
"source": [
"As you can see, the cuisines are quite unevenly distributed: there are almost three times as many Korean observations as Thai ones. Imbalanced data often negatively impacts model performance. Consider a binary classification scenario: if most of your data belongs to one class, a machine learning model will tend to predict that class more frequently, simply because there is more data available for it. Balancing the data corrects this skewed distribution. Many models perform best when the number of observations per class is equal, and they often struggle with unbalanced data.\n",
"\n",
"There are primarily two approaches to handling imbalanced data sets:\n",
"\n",
"- Adding observations to the minority class: `Over-sampling`, for example, using a SMOTE algorithm.\n",
"\n",
"- Removing observations from the majority class: `Under-sampling`.\n",
"\n",
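"In this lesson we'll use over-sampling, but as a sketch, the under-sampling counterpart would swap in [`themis::step_downsample()`](https://themis.tidymodels.org/reference/step_downsample.html), which removes rows from the majority classes instead:\n",
"\n",
"```r\n",
"library(tidymodels)\n",
"library(themis)\n",
"\n",
"# Hypothetical alternative: shrink majority classes down to the\n",
"# size of the minority class\n",
"downsample_recipe <- recipe(cuisine ~ ., data = df_select) %>%\n",
"  step_downsample(cuisine)\n",
"```\n",
"\n",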
"Now, let's demonstrate how to handle imbalanced data sets using a `recipe`. A recipe can be thought of as a blueprint that outlines the steps to be applied to a data set to prepare it for data analysis.\n"
],
"metadata": {
"id": "soAw6826JKx9"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Load themis package for dealing with imbalanced data\r\n",
"library(themis)\r\n",
"\r\n",
"# Create a recipe for preprocessing data\r\n",
"cuisines_recipe <- recipe(cuisine ~ ., data = df_select) %>% \r\n",
" step_smote(cuisine)\r\n",
"\r\n",
"cuisines_recipe"
],
"outputs": [],
"metadata": {
"id": "HS41brUIJVJy"
}
},
{
"cell_type": "markdown",
"source": [
"Let's break down our preprocessing steps.\n",
"\n",
"- The call to `recipe()` with a formula specifies the *roles* of the variables using the `df_select` data as a reference. For example, the `cuisine` column is assigned the `outcome` role, while the other columns are assigned the `predictor` role.\n",
"\n",
"- [`step_smote(cuisine)`](https://themis.tidymodels.org/reference/step_smote.html) defines a *step* in the recipe that synthetically generates new examples for the minority class using the nearest neighbors of those cases.\n",
"\n",
"Now, if we want to view the preprocessed data, we need to [**`prep()`**](https://recipes.tidymodels.org/reference/prep.html) and [**`bake()`**](https://recipes.tidymodels.org/reference/bake.html) the recipe.\n",
"\n",
"`prep()`: calculates the necessary parameters from a training set, which can then be applied to other datasets.\n",
"\n",
"`bake()`: applies the operations from a prepped recipe to any dataset.\n"
],
"metadata": {
"id": "Yb-7t7XcJaC8"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Prep and bake the recipe\r\n",
"preprocessed_df <- cuisines_recipe %>% \r\n",
" prep() %>% \r\n",
" bake(new_data = NULL) %>% \r\n",
" relocate(cuisine)\r\n",
"\r\n",
"# Display data\r\n",
"preprocessed_df %>% \r\n",
" slice_head(n = 5)\r\n",
"\r\n",
"# Quick summary stats\r\n",
"preprocessed_df %>% \r\n",
" introduce()"
],
"outputs": [],
"metadata": {
"id": "9QhSgdpxJl44"
}
},
{
"cell_type": "markdown",
"source": [
"Let's now check the distribution of our cuisines and compare them with the imbalanced data.\n"
],
"metadata": {
"id": "dmidELh_LdV7"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Distribution of cuisines\r\n",
"new_label_count <- preprocessed_df %>% \r\n",
" count(cuisine) %>% \r\n",
" arrange(desc(n))\r\n",
"\r\n",
"list(new_label_count = new_label_count,\r\n",
" old_label_count = old_label_count)"
],
"outputs": [],
"metadata": {
"id": "aSh23klBLwDz"
}
},
{
"cell_type": "markdown",
"source": [
"Yum! The data is nice and clean, balanced, and very delicious 😋!\n",
"\n",
"> Typically, a recipe is used as a preprocessor for modeling, where it specifies the steps to be applied to a dataset to prepare it for modeling. In such cases, a `workflow()` is generally used (as we've seen in previous lessons) instead of manually processing a recipe.\n",
">\n",
"> Therefore, you usually don't need to **`prep()`** and **`bake()`** recipes when working with tidymodels. However, these functions are useful tools for verifying that recipes are performing as expected, as in our example.\n",
">\n",
"> When you **`bake()`** a prepped recipe with **`new_data = NULL`**, you retrieve the original data you provided when defining the recipe, but with the preprocessing steps applied.\n",
"\n",
"Now, let's save a copy of this data for use in future lessons:\n"
],
"metadata": {
"id": "HEu80HZ8L7ae"
}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Save preprocessed data\r\n",
"write_csv(preprocessed_df, \"../../../data/cleaned_cuisines_R.csv\")"
],
"outputs": [],
"metadata": {
"id": "cBmCbIgrMOI6"
}
},
{
"cell_type": "markdown",
"source": [
"This fresh CSV can now be found in the root data folder.\n",
"\n",
"**🚀Challenge**\n",
"\n",
"This curriculum contains several interesting datasets. Explore the `data` folders and see whether any contain datasets suitable for binary or multiclass classification. What questions could you ask of these datasets?\n",
"\n",
"## [**Post-lecture quiz**](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/20/)\n",
"\n",
"## **Review & Self Study**\n",
"\n",
"- Check out the [themis package](https://github.com/tidymodels/themis). What other techniques could you use to address imbalanced data?\n",
"\n",
"- The Tidymodels [reference website](https://www.tidymodels.org/start/).\n",
"\n",
"- H. Wickham and G. Grolemund, [*R for Data Science: Visualize, Model, Transform, Tidy, and Import Data*](https://r4ds.had.co.nz/).\n",
"\n",
"#### THANK YOU TO:\n",
"\n",
"[`Allison Horst`](https://twitter.com/allison_horst/) for creating the wonderful illustrations that make R more approachable and engaging. You can find more of her work in her [gallery](https://www.google.com/url?q=https://github.com/allisonhorst/stats-illustrations&sa=D&source=editors&ust=1626380772530000&usg=AOvVaw3zcfyCizFQZpkSLzxiiQEM).\n",
"\n",
"[Cassie Breviu](https://www.twitter.com/cassieview) and [Jen Looper](https://www.twitter.com/jenlooper) for developing the original Python version of this module ♥️\n",
"\n",
"<p>\n",
" <img src=\"../../images/r_learners_sm.jpeg\"\n",
" width=\"600\"/>\n",
" <figcaption>Artwork by @allison_horst</figcaption>\n",
"</p>\n"
],
"metadata": {
"id": "WQs5621pMGwf"
}
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n---\n\n**Disclaimer**: \nThis document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we aim for accuracy, please note that automated translations may include errors or inaccuracies. The original document in its native language should be regarded as the authoritative source. For critical information, professional human translation is advised. We are not responsible for any misunderstandings or misinterpretations resulting from the use of this translation.\n"
]
}
]
}