Add R resources for lessons 05 and 06

pull/230/head
R-icntay 3 years ago
parent 73e2d4a206
commit 97fee635f4

Binary file not shown.

After

Width:  |  Height:  |  Size: 558 KiB

@ -0,0 +1,436 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "lesson_1-R.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true
},
"kernelspec": {
"name": "ir",
"display_name": "R"
},
"language_info": {
"name": "R"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "YJUHCXqK57yz"
},
"source": [
"#Build a regression model: Get started with R and Tidymodels for regression models"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LWNNzfqd6feZ"
},
"source": [
"## Introduction to Regression - Lesson 1\n",
"\n",
"#### Putting it into perspective\n",
"\n",
"✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use `linear regression`, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment** so you would use `logistic regression`. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.\n",
"\n",
"In this section, you will work with a [small dataset about diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.\n",
"\n",
"That said, let's get started on this task!\n",
"\n",
"![Artwork by \\@allison_horst](../images/encouRage.jpg){width=\"630\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FIo2YhO26wI9"
},
"source": [
"## 1. Loading up our tool set\n",
"\n",
"For this task, we'll require the following packages:\n",
"\n",
"- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to makes data science faster, easier and more fun!\n",
"\n",
"- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.\n",
"\n",
"You can have them installed as:\n",
"\n",
"`install.packages(c(\"tidyverse\", \"tidymodels\"))`\n",
"\n",
"The script below checks whether you have the packages required to complete this module and installs them for you in case some are missing."
]
},
{
"cell_type": "code",
"metadata": {
"id": "cIA9fz9v7Dss",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "2df7073b-86b2-4b32-cb86-0da605a0dc11"
},
"source": [
"if (!require(\"pacman\")) install.packages(\"pacman\")\n",
"pacman::p_load(tidyverse, tidymodels)"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"Loading required package: pacman\n",
"\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gpO_P_6f9WUG"
},
"source": [
"Now, let's load these awesome packages and make them available in our current R session.(This is for mere illustration, `pacman::p_load()` already did that for you)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "NLMycgG-9ezO"
},
"source": [
"# load the core Tidyverse packages\n",
"library(tidyverse)\n",
"\n",
"# load the core Tidymodels packages\n",
"library(tidymodels)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "KM6iXLH996Cl"
},
"source": [
"## 2. The diabetes dataset\n",
"\n",
"In this exercise, we'll put our regression skills into display by making predictions on a diabetes dataset. The [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt) includes `442 samples` of data around diabetes, with 10 predictor feature variables, `age`, `sex`, `body mass index`, `average blood pressure`, and `six blood serum measurements` as well as an outcome variable `y`: a quantitative measure of disease progression one year after baseline.\n",
"\n",
"|Number of observations|442|\n",
"|----------------------|:---|\n",
"|Number of predictors|First 10 columns are numeric predictive|\n",
"|Outcome/Target|Column 11 is a quantitative measure of disease progression one year after baseline|\n",
"|Predictor Information|- age in years\n",
"||- sex\n",
"||- bmi body mass index\n",
"||- bp average blood pressure\n",
"||- s1 tc, total serum cholesterol\n",
"||- s2 ldl, low-density lipoproteins\n",
"||- s3 hdl, high-density lipoproteins\n",
"||- s4 tch, total cholesterol / HDL\n",
"||- s5 ltg, possibly log of serum triglycerides level\n",
"||- s6 glu, blood sugar level|\n",
"\n",
"\n",
"\n",
"\n",
"> 🎓 Remember, this is supervised learning, and we need a named 'y' target.\n",
"\n",
"Before you can manipulate data with R, you need to import the data into R's memory, or build a connection to the data that R can use to access the data remotely.\n",
"\n",
"> The [readr](https://readr.tidyverse.org/) package, which is part of the Tidyverse, provides a fast and friendly way to read rectangular data into R.\n",
"\n",
"Now, let's load the diabetes dataset provided in this source URL: <https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html>\n",
"\n",
"Also, we'll perform a sanity check on our data using `glimpse()` and dsiplay the first 5 rows using `slice()`.\n",
"\n",
"Before going any further, let's also introduce something you will encounter often in R code 🥁🥁: the pipe operator `%>%`\n",
"\n",
"The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying \"and then\" in your code."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Z1geAMhM-bSP"
},
"source": [
"# Import the data set\n",
"diabetes <- read_table2(file = \"https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt\")\n",
"\n",
"\n",
"# Get a glimpse and dimensions of the data\n",
"glimpse(diabetes)\n",
"\n",
"\n",
"# Select the first 5 rows of the data\n",
"diabetes %>% \n",
" slice(1:5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "UwjVT1Hz-c3Z"
},
"source": [
"`glimpse()` shows us that this data has 442 rows and 11 columns with all the columns being of data type `double` \n",
"\n",
"<br>\n",
"\n",
"\n",
"\n",
"> glimpse() and slice() are functions in [`dplyr`](https://dplyr.tidyverse.org/). Dplyr, part of the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges\n",
"\n",
"<br>\n",
"\n",
"Now that we have the data, let's narrow down to one feature (`bmi`) to target for this exercise. This will require us to select the desired columns. So, how do we do this?\n",
"\n",
"[`dplyr::select()`](https://dplyr.tidyverse.org/reference/select.html) allows us to *select* (and optionally rename) columns in a data frame."
]
},
{
"cell_type": "code",
"metadata": {
"id": "RDY1oAKI-m80"
},
"source": [
"# Select predictor feature `bmi` and outcome `y`\n",
"diabetes_select <- diabetes %>% \n",
" select(c(bmi, y))\n",
"\n",
"# Print the first 5 rows\n",
"diabetes_select %>% \n",
" slice(1:10)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "SDk668xK-tc3"
},
"source": [
"## 3. Training and Testing data\n",
"\n",
"It's common practice in supervised learning to *split* the data into two subsets; a (typically larger) set with which to train the model, and a smaller \"hold-back\" set with which to see how the model performed.\n",
"\n",
"Now that we have data ready, we can see if a machine can help determine a logical split between the numbers in this dataset. We can use the [rsample](https://tidymodels.github.io/rsample/) package, which is part of the Tidymodels framework, to create an object that contains the information on *how* to split the data, and then two more rsample functions to extract the created training and testing sets:\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "EqtHx129-1h-"
},
"source": [
"set.seed(2056)\n",
"# Split 67% of the data for training and the rest for tesing\n",
"diabetes_split <- diabetes_select %>% \n",
" initial_split(prop = 0.67)\n",
"\n",
"# Extract the resulting train and test sets\n",
"diabetes_train <- training(diabetes_split)\n",
"diabetes_test <- testing(diabetes_split)\n",
"\n",
"# Print the first 3 rows of the training set\n",
"diabetes_train %>% \n",
" slice(1:10)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "sBOS-XhB-6v7"
},
"source": [
"## 4. Train a linear regression model with Tidymodels\n",
"\n",
"Now we are ready to train our model!\n",
"\n",
"In Tidymodels, you specify models using `parsnip()` by specifying three concepts:\n",
"\n",
"- Model **type** differentiates models such as linear regression, logistic regression, decision tree models, and so forth.\n",
"\n",
"- Model **mode** includes common options like regression and classification; some model types support either of these while some only have one mode.\n",
"\n",
"- Model **engine** is the computational tool which will be used to fit the model. Often these are R packages, such as **`\"lm\"`** or **`\"ranger\"`**\n",
"\n",
"This modeling information is captured in a model specification, so let's build one!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "20OwEw20--t3"
},
"source": [
"# Build a linear model specification\n",
"lm_spec <- \n",
" # Type\n",
" linear_reg() %>% \n",
" # Engine\n",
" set_engine(\"lm\") %>% \n",
" # Mode\n",
" set_mode(\"regression\")\n",
"\n",
"\n",
"# Print the model specification\n",
"lm_spec"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_oDHs89k_CJj"
},
"source": [
"After a model has been *specified*, the model can be `estimated` or `trained` using the [`fit()`](https://parsnip.tidymodels.org/reference/fit.html) function, typically using a formula and some data.\n",
"\n",
"`y ~ .` means we'll fit `y` as the predicted quantity/target, explained by all the predictors/features ie, `.` (in this case, we only have one predictor: `bmi` )"
]
},
{
"cell_type": "code",
"metadata": {
"id": "YlsHqd-q_GJQ"
},
"source": [
"# Build a linear model specification\n",
"lm_spec <- linear_reg() %>% \n",
" set_engine(\"lm\") %>%\n",
" set_mode(\"regression\")\n",
"\n",
"\n",
"# Train a linear regression model\n",
"lm_mod <- lm_spec %>% \n",
" fit(y ~ ., data = diabetes_train)\n",
"\n",
"# Print the model\n",
"lm_mod"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "kGZ22RQj_Olu"
},
"source": [
"From the model output, we can see the coefficients learned during training. They represent the coefficients of the line of best fit that gives us the lowest overall error between the actual and predicted variable.\n",
"<br>\n",
"\n",
"## 5. Make predictions on the test set\n",
"\n",
"Now that we've trained a model, we can use it to predict the disease progression y for the test dataset using [parsnip::predict()](https://parsnip.tidymodels.org/reference/predict.model_fit.html). This will be used to draw the line between data groups."
]
},
{
"cell_type": "code",
"metadata": {
"id": "nXHbY7M2_aao"
},
"source": [
"# Make predictions for the test set\n",
"predictions <- lm_mod %>% \n",
" predict(new_data = diabetes_test)\n",
"\n",
"# Print out some of the predictions\n",
"predictions %>% \n",
" slice(1:5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "R_JstwUY_bIs"
},
"source": [
"Woohoo! 💃🕺 We just trained a model and used it to make predictions!\n",
"\n",
"When making predictions, the tidymodels convention is to always produce a tibble/data frame of results with standardized column names. This makes it easy to combine the original data and the predictions in a usable format for subsequent operations such as plotting.\n",
"\n",
"`dplyr::bind_cols()` efficiently binds multiple data frames column."
]
},
{
"cell_type": "code",
"metadata": {
"id": "RybsMJR7_iI8"
},
"source": [
"# Combine the predictions and the original test set\n",
"results <- diabetes_test %>% \n",
" bind_cols(predictions)\n",
"\n",
"\n",
"results %>% \n",
" slice(1:5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "XJbYbMZW_n_s"
},
"source": [
"## 6. Plot modelling results\n",
"\n",
"Now, its time to see this visually 📈. We'll create a scatter plot of all the `y` and `bmi` values of the test set, then use the predictions to draw a line in the most appropriate place, between the model's data groupings.\n",
"\n",
"R has several systems for making graphs, but `ggplot2` is one of the most elegant and most versatile. This allows you to compose graphs by **combining independent components**."
]
},
{
"cell_type": "code",
"metadata": {
"id": "R9tYp3VW_sTn"
},
"source": [
"# Set a theme for the plot\n",
"theme_set(theme_light())\n",
"# Create a scatter plot\n",
"results %>% \n",
" ggplot(aes(x = bmi)) +\n",
" # Add a scatter plot\n",
" geom_point(aes(y = y), size = 1.6) +\n",
" # Add a line plot\n",
" geom_line(aes(y = .pred), color = \"blue\", size = 1.5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "zrPtHIxx_tNI"
},
"source": [
"> ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.\n",
"\n",
"Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!\n"
]
}
]
}

@ -0,0 +1,250 @@
---
title: 'Build a regression model: Get started with R and Tidymodels for regression models'
output:
html_document:
df_print: paged
theme: flatly
highlight: breezedark
toc: yes
toc_float: yes
code_download: yes
---
## Introduction to Regression - Lesson 1
#### Putting it into perspective
✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use `linear regression`, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment** so you would use `logistic regression`. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
In this section, you will work with a [small dataset about diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
That said, let's get started on this task!
![Artwork by \@allison_horst](../images/encouRage.jpg){width="630"}
## 1. Loading up our tool set
For this task, we'll require the following packages:
- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to make data science faster, easier and more fun!
- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.
You can install them as:
`install.packages(c("tidyverse", "tidymodels"))`
The script below checks whether you have the packages required to complete this module and installs them for you in case they are missing.
```{r, message=F, warning=F}
if (!require("pacman")) install.packages("pacman")
pacman::p_load(tidyverse, tidymodels)
```
Now, let's load these awesome packages and make them available in our current R session. (This is for illustration only; `pacman::p_load()` already did that for you.)
```{r load_tidy_verse_models, message=F, warning=F}
# load the core Tidyverse packages
library(tidyverse)
# load the core Tidymodels packages
library(tidymodels)
```
## 2. The diabetes dataset
In this exercise, we'll put our regression skills on display by making predictions on a diabetes dataset. The [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt) includes `442 samples` of data around diabetes, with 10 predictor feature variables, `age`, `sex`, `body mass index`, `average blood pressure`, and `six blood serum measurements` as well as an outcome variable `y`: a quantitative measure of disease progression one year after baseline.
+----------------------------+------------------------------------------------------------------------------------+
| **Number of observations** | **442** |
+============================+====================================================================================+
| **Number of predictors** | First 10 columns are numeric predictive values |
+----------------------------+------------------------------------------------------------------------------------+
| **Outcome/Target** | Column 11 is a quantitative measure of disease progression one year after baseline |
+----------------------------+------------------------------------------------------------------------------------+
| **Predictor Information** | - age age in years |
| | - sex |
| | - bmi body mass index |
| | - bp average blood pressure |
| | - s1 tc, total serum cholesterol |
| | - s2 ldl, low-density lipoproteins |
| | - s3 hdl, high-density lipoproteins |
| | - s4 tch, total cholesterol / HDL |
| | - s5 ltg, possibly log of serum triglycerides level |
| | - s6 glu, blood sugar level |
+----------------------------+------------------------------------------------------------------------------------+
> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
Before you can manipulate data with R, you need to import the data into R's memory, or build a connection to the data that R can use to access the data remotely.\
> The [readr](https://readr.tidyverse.org/) package, which is part of the Tidyverse, provides a fast and friendly way to read rectangular data into R.
Now, let's load the diabetes dataset provided in this source URL: <https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html>
Also, we'll perform a sanity check on our data using `glimpse()` and display the first 5 rows using `slice()`.
Before going any further, let's introduce something you will encounter quite often in R code: the pipe operator `%>%`
The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying "and then" in your code.\
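As a tiny standalone illustration (the numbers here are made up and are not part of the lesson's data), compare a nested call with its piped equivalent:

```{r pipe_demo, message=F, warning=F}
# Nested: read inside-out
round(mean(c(2, 4, 7)), 1)

# Piped: read left to right, "take the vector, and then average it, and then round it"
c(2, 4, 7) %>% 
  mean() %>% 
  round(1)
```

Both expressions return the same value; the piped form simply makes the sequence of operations easier to follow.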
```{r load_dataset, message=F, warning=F}
# Import the data set
diabetes <- read_table2(file = "https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt")
# Get a glimpse and dimensions of the data
glimpse(diabetes)
# Select the first 5 rows of the data
diabetes %>%
slice(1:5)
```
`glimpse()` shows us that this data has 442 rows and 11 columns, with all of the columns being of data type `double`.
> glimpse() and slice() are functions in [`dplyr`](https://dplyr.tidyverse.org/). dplyr, part of the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges.
Now that we have the data, let's narrow down to one feature (`bmi`) to target for this exercise. This will require us to select the desired columns. So, how do we do this?
[`dplyr::select()`](https://dplyr.tidyverse.org/reference/select.html) allows us to *select* (and optionally rename) columns in a data frame.
```{r select, message=F, warning=F}
# Select predictor feature `bmi` and outcome `y`
diabetes_select <- diabetes %>%
select(c(bmi, y))
# Print the first 5 rows
diabetes_select %>%
slice(1:5)
```
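Incidentally, `select()` can do more than plain column picking. A small sketch of two common variants, using the column names of this dataset (`body_mass` and `progression` are illustrative names chosen here, not part of the lesson):

```{r select_variants, message=F, warning=F}
# Drop a column by negating it
diabetes %>% 
  select(-age) %>% 
  names()

# Select and rename in one step: new_name = old_name
diabetes %>% 
  select(body_mass = bmi, progression = y) %>% 
  slice(1:3)
```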
## 3. Training and Testing data
It's common practice in supervised learning to *split* the data into two subsets: a (typically larger) set with which to train the model, and a smaller "hold-back" set with which to see how the model performed.
Now that we have data ready, we can see if a machine can help determine a logical split between the numbers in this dataset. We can use the [rsample](https://tidymodels.github.io/rsample/) package, which is part of the Tidymodels framework, to create an object that contains the information on *how* to split the data, and then two more rsample functions to extract the created training and testing sets:
```{r split, message=F, warning=F}
set.seed(2056)
# Split 67% of the data for training and the rest for testing
diabetes_split <- diabetes_select %>%
initial_split(prop = 0.67)
# Extract the resulting train and test sets
diabetes_train <- training(diabetes_split)
diabetes_test <- testing(diabetes_split)
# Print the first 3 rows of the training set
diabetes_train %>%
slice(1:3)
```
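A quick sanity check on the split never hurts; this short sketch confirms the train/test proportions:

```{r split_check, message=F, warning=F}
# Row counts for each set
nrow(diabetes_train)
nrow(diabetes_test)

# Fraction of rows used for training; should be close to 0.67
nrow(diabetes_train) / nrow(diabetes_select)
```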
## 4. Train a linear regression model with Tidymodels
Now we are ready to train our model!
In Tidymodels, you specify models using the `parsnip` package by specifying three concepts:
- Model **type** differentiates models such as linear regression, logistic regression, decision tree models, and so forth.
- Model **mode** includes common options like regression and classification; some model types support either of these while some only have one mode.
- Model **engine** is the computational tool which will be used to fit the model. Often these are R packages, such as **`"lm"`** or **`"ranger"`**
This modeling information is captured in a model specification, so let's build one!
```{r lm_model_spec, message=F, warning=F}
# Build a linear model specification
lm_spec <-
# Type
linear_reg() %>%
# Engine
set_engine("lm") %>%
# Mode
set_mode("regression")
# Print the model specification
lm_spec
```
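If you are curious how parsnip maps this abstract specification onto the underlying engine, `parsnip::translate()` shows the templated `lm()` call that will eventually be made (purely illustrative; nothing is fitted yet):

```{r spec_translate, message=F, warning=F}
# Show the engine-specific fitting call behind the specification
lm_spec %>% 
  translate()
```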
After a model has been *specified*, the model can be `estimated` or `trained` using the [`fit()`](https://parsnip.tidymodels.org/reference/fit.html) function, typically using a formula and some data.
`y ~ .` means we'll fit `y` as the predicted quantity/target, explained by all the predictors/features, i.e. `.` (in this case, we only have one predictor: `bmi`).
```{r train, message=F, warning=F}
# Build a linear model specification
lm_spec <- linear_reg() %>%
set_engine("lm") %>%
set_mode("regression")
# Train a linear regression model
lm_mod <- lm_spec %>%
fit(y ~ ., data = diabetes_train)
# Print the model
lm_mod
```
From the model output, we can see the coefficients learned during training. They represent the coefficients of the line of best fit that gives us the lowest overall error between the actual and predicted variable.
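To work with those coefficients programmatically rather than reading them off the printout, `broom::tidy()` (loaded as part of tidymodels) returns them as a tibble; a brief sketch:

```{r tidy_coefs, message=F, warning=F}
# Extract the intercept and slope as a tidy data frame
lm_mod %>% 
  tidy()
```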
## 5. Make predictions on the test set
Now that we've trained a model, we can use it to predict the disease progression y for the test dataset using [parsnip::predict()](https://parsnip.tidymodels.org/reference/predict.model_fit.html). We'll then use these predictions to draw our regression line through the data.
```{r test, message=F, warning=F}
# Make predictions for the test set
predictions <- lm_mod %>%
predict(new_data = diabetes_test)
# Print out some of the predictions
predictions %>%
slice(1:5)
```
Woohoo! 💃🕺 We just trained a model and used it to make predictions!
When making predictions, the tidymodels convention is to always produce a tibble/data frame of results with standardized column names. This makes it easy to combine the original data and the predictions in a usable format for subsequent operations such as plotting.
`dplyr::bind_cols()` efficiently binds multiple data frames by column.
```{r test_pred, message=F, warning=F}
# Combine the predictions and the original test set
results <- diabetes_test %>%
bind_cols(predictions)
results %>%
slice(1:5)
```
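Since `results` now holds the ground truth (`y`) side by side with the predictions (`.pred`), we can also quantify how well the model does using the yardstick package (part of tidymodels); a sketch:

```{r eval_metrics, message=F, warning=F}
# Compute RMSE, R-squared and MAE for the test set
results %>% 
  metrics(truth = y, estimate = .pred)
```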
## 6. Plot modelling results
Now, it's time to see this visually 📈. We'll create a scatter plot of all the `y` and `bmi` values of the test set, then use the predictions to draw a line in the most appropriate place, between the model's data groupings.
R has several systems for making graphs, but `ggplot2` is one of the most elegant and most versatile. It allows you to compose graphs by **combining independent components**.
```{r plot_pred, message=F, warning=F}
# Set a theme for the plot
theme_set(theme_light())
# Create a scatter plot
results %>%
ggplot(aes(x = bmi)) +
# Add a scatter plot
geom_point(aes(y = y), size = 1.6) +
# Add a line plot
geom_line(aes(y = .pred), color = "blue", size = 1.5)
```
> ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.
Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.0 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 125 KiB

@ -0,0 +1,644 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "lesson_2-R.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true
},
"kernelspec": {
"name": "ir",
"display_name": "R"
},
"language_info": {
"name": "R"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Pg5aexcOPqAZ"
},
"source": [
"# Build a regression model: prepare and visualize data\n",
"\n",
"## **Linear Regression for Pumpkins - Lesson 2**\n",
"#### Introduction\n",
"\n",
"Now that you are set up with the tools you need to start tackling machine learning model building with Tidymodels and the Tidyverse, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right question to properly unlock the potentials of your dataset.\n",
"\n",
"In this lesson, you will learn:\n",
"\n",
"- How to prepare your data for model-building.\n",
"\n",
"- How to use `ggplot2` for data visualization.\n",
"\n",
"The question you need answered will determine what type of ML algorithms you will leverage. And the quality of the answer you get back will be heavily dependent on the nature of your data.\n",
"\n",
"Let's see this by working through a practical exercise.\n",
"\n",
"![Artwork by \\@allison_horst](../images/unruly_data.jpg){width=\"700\"} <br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dc5WhyVdXAjR"
},
"source": [
"## 1. Importing pumpkins data and summoning the Tidyverse\n",
"\n",
"We'll require the following packages to slice and dice this lesson:\n",
"\n",
"- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to makes data science faster, easier and more fun!\n",
"\n",
"You can have them installed as:\n",
"\n",
"`install.packages(c(\"tidyverse\"))`\n",
"\n",
"The script below checks whether you have the packages required to complete this module and installs them for you in case some are missing."
]
},
{
"cell_type": "code",
"metadata": {
"id": "GqPYUZgfXOBt"
},
"source": [
"if (!require(\"pacman\")) install.packages(\"pacman\")\n",
"pacman::p_load(tidyverse)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "kvjDTPDSXRr2"
},
"source": [
"Now, let's fire up some packages and load the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "VMri-t2zXqgD"
},
"source": [
"# Load the core Tidyverse packages\n",
"library(tidyverse)\n",
"\n",
"# Import the pumpkins data\n",
"pumpkins <- read_csv(file = \"https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/2-Regression/data/US-pumpkins.csv\")\n",
"\n",
"\n",
"# Get a glimpse and dimensions of the data\n",
"glimpse(pumpkins)\n",
"\n",
"\n",
"# Print the first 50 rows of the data set\n",
"pumpkins %>% \n",
" slice_head(n =50)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "REWcIv9yX29v"
},
"source": [
"A quick `glimpse()` immediately shows that there are blanks and a mix of strings (`chr`) and numeric data (`dbl`). The `Date` is of type character and there's also a strange column called `Package` where the data is a mix between `sacks`, `bins` and other values. The data, in fact, is a bit of a mess 😤.\n",
"\n",
"In fact, it is not very common to be gifted a dataset that is completely ready to use to create a ML model out of the box. But worry not, in this lesson, you will learn how to prepare a raw dataset using standard R libraries 🧑‍🔧. You will also learn various techniques to visualize the data.📈📊\n",
"<br>\n",
"\n",
"> A refresher: The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying \"and then\" in your code.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Zxfb3AM5YbUe"
},
"source": [
"## 2. Check for missing data\n",
"\n",
"One of the most common issues data scientists need to deal with is incomplete or missing data. R represents missing, or unknown values, with special sentinel value: `NA` (Not Available).\n",
"\n",
"So how would we know that the data frame contains missing values?\n",
"<br>\n",
"- One straight forward way would be to use the base R function `anyNA` which returns the logical objects `TRUE` or `FALSE`"
]
},
{
"cell_type": "code",
"metadata": {
"id": "G--DQutAYltj"
},
"source": [
"pumpkins %>% \n",
" anyNA()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "mU-7-SB6YokF"
},
"source": [
"Great, there seems to be some missing data! That's a good place to start.\n",
"\n",
"- Another way would be to use the function `is.na()` that indicates which individual column elements are missing with a logical `TRUE`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "W-DxDOR4YxSW"
},
"source": [
"pumpkins %>% \n",
" is.na() %>% \n",
" head(n = 7)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "xUWxipKYY0o7"
},
"source": [
"Okay, got the job done but with a large data frame such as this, it would be inefficient and practically impossible to review all of the rows and columns individually😴.\n",
"\n",
"- A more intuitive way would be to calculate the sum of the missing values for each column:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ZRBWV6P9ZArL"
},
"source": [
"pumpkins %>% \n",
" is.na() %>% \n",
" colSums()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9gv-crB6ZD1Y"
},
"source": [
"Much better! There is missing data, but maybe it won't matter for the task at hand. Let's see what further analysis brings forth.\n",
"\n",
"> Along with the awesome sets of packages and functions, R has a very good documentation. For instance, use `help(colSums)` or `?colSums` to find out more about the function."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o4jLY5-VZO2C"
},
"source": [
"## 3. Dplyr: A Grammar of Data Manipulation\n",
"\n",
"![Artwork by \\@allison_horst](../images/dplyr_wrangling.png){width=\"569\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i5o33MQBZWWw"
},
"source": [
"[`dplyr`](https://dplyr.tidyverse.org/), a package in the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges. In this section, we'll explore some of dplyr's verbs!\n",
"<br>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "x3VGMAGBZiUr"
},
"source": [
"#### dplyr::select()\n",
"\n",
"`select()` is a function in the package `dplyr` which helps you pick columns to keep or exclude.\n",
"\n",
"To make your data frame easier to work with, drop several of its columns, using `select()`, keeping only the columns you need.\n",
"\n",
"For instance, in this exercise, our analysis will involve the columns `Package`, `Low Price`, `High Price` and `Date`. Let's select these columns."
]
},
{
"cell_type": "code",
"metadata": {
"id": "F_FgxQnVZnM0"
},
"source": [
"# Select desired columns\n",
"pumpkins <- pumpkins %>% \n",
" select(Package, `Low Price`, `High Price`, Date)\n",
"\n",
"\n",
"# Print data set\n",
"pumpkins %>% \n",
" slice_head(n = 5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2KKo0Ed9Z1VB"
},
"source": [
"#### dplyr::mutate()\n",
"\n",
"`mutate()` is a function in the package `dplyr` which helps you create or modify columns, while keeping the existing columns.\n",
"\n",
"The general structure of mutate is:\n",
"\n",
"`data %>% mutate(new_column_name = what_it_contains)`\n",
"\n",
"Let's take `mutate` out for a spin using the `Date` column by doing the following operations:\n",
"\n",
"1. Convert the dates (currently of type character) to a month format (these are US dates, so the format is `MM/DD/YYYY`).\n",
"\n",
"2. Extract the month from the dates to a new column.\n",
"\n",
"In R, the package [lubridate](https://lubridate.tidyverse.org/) makes it easier to work with Date-time data. So, let's use `dplyr::mutate()`, `lubridate::mdy()`, `lubridate::month()` and see how to achieve the above objectives. We can drop the Date column since we won't be needing it again in subsequent operations."
]
},
{
"cell_type": "code",
"metadata": {
"id": "5joszIVSZ6xe"
},
"source": [
"# Load lubridate\n",
"library(lubridate)\n",
"\n",
"pumpkins <- pumpkins %>% \n",
" # Convert the Date column to a date object\n",
" mutate(Date = mdy(Date)) %>% \n",
" # Extract month from Date\n",
" mutate(Month = month(Date)) %>% \n",
" # Drop Date column\n",
" select(-Date)\n",
"\n",
"# View the first few rows\n",
"pumpkins %>% \n",
" slice_head(n = 7)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "nIgLjNMCZ-6Y"
},
"source": [
"Woohoo! 🤩\n",
"\n",
"Next, let's create a new column `Price`, which represents the average price of a pumpkin. Now, let's take the average of the `Low Price` and `High Price` columns to populate the new Price column.\n",
"<br>"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Zo0BsqqtaJw2"
},
"source": [
"# Create a new column Price\n",
"pumpkins <- pumpkins %>% \n",
" mutate(Price = (`Low Price` + `High Price`)/2)\n",
"\n",
"# View the first few rows of the data\n",
"pumpkins %>% \n",
" slice_head(n = 5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "p77WZr-9aQAR"
},
"source": [
"Yeees!💪\n",
"\n",
"\"But wait!\", you'll say after skimming through the whole data set with `View(pumpkins)`, \"There's something odd here!\"🤔\n",
"\n",
"If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in `1 1/9 bushel` measures, and some in `1/2 bushel` measures, some per pumpkin, some per pound, and some in big boxes with varying widths.\n",
"\n",
"Let's verify this:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "XISGfh0IaUy6"
},
"source": [
"# Verify the distinct observations in Package column\n",
"pumpkins %>% \n",
" distinct(Package)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "7sMjiVujaZxY"
},
"source": [
"Amazing!👏\n",
"\n",
"Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string *bushel* in the `Package` column and put this in a new data frame `new_pumpkins`.\n",
"<br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L8Qfcs92ageF"
},
"source": [
"#### dplyr::filter() and stringr::str_detect()\n",
"\n",
"[`dplyr::filter()`](https://dplyr.tidyverse.org/reference/filter.html): creates a subset of the data only containing **rows** that satisfy your conditions, in this case, pumpkins with the string *bushel* in the `Package` column.\n",
"\n",
"[stringr::str_detect()](https://stringr.tidyverse.org/reference/str_detect.html): detects the presence or absence of a pattern in a string.\n",
"\n",
"The [`stringr`](https://github.com/tidyverse/stringr) package provides simple functions for common string operations."
]
},
{
"cell_type": "code",
"metadata": {
"id": "hy_SGYREampd"
},
"source": [
"# Retain only pumpkins with \"bushel\"\n",
"new_pumpkins <- pumpkins %>% \n",
" filter(str_detect(Package, \"bushel\"))\n",
"\n",
"# Get the dimensions of the new data\n",
"dim(new_pumpkins)\n",
"\n",
"# View a few rows of the new data\n",
"new_pumpkins %>% \n",
" slice_head(n = 5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "VrDwF031avlR"
},
"source": [
"You can see that we have narrowed down to 415 or so rows of data containing pumpkins by the bushel.🤩\n",
"<br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mLpw2jH4a0tx"
},
"source": [
"#### dplyr::case_when()\n",
"\n",
"**But wait! There's one more thing to do**\n",
"\n",
"Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, not per 1 1/9 or 1/2 bushel. Time to do some math to standardize it.\n",
"\n",
"We'll use the function [`case_when()`](https://dplyr.tidyverse.org/reference/case_when.html) to *mutate* the Price column depending on some conditions. `case_when` allows you to vectorise multiple `if_else()`statements.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "P68kLVQmbM6I"
},
"source": [
"# Convert the price if the Package contains fractional bushel values\n",
"new_pumpkins <- new_pumpkins %>% \n",
" mutate(Price = case_when(\n",
" str_detect(Package, \"1 1/9\") ~ Price/(1 + 1/9),\n",
" str_detect(Package, \"1/2\") ~ Price/(1/2),\n",
" TRUE ~ Price))\n",
"\n",
"# View the first few rows of the data\n",
"new_pumpkins %>% \n",
" slice_head(n = 30)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "pS2GNPagbSdb"
},
"source": [
"Now, we can analyze the pricing per unit based on their bushel measurement. All this study of bushels of pumpkins, however, goes to show how very `important` it is to `understand the nature of your data`!\n",
"\n",
 ✅ According to">
"> ✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. \"A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds.\" It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel.\n",
">\n",
"> ✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are way pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken by one big hollow pie pumpkin.\n",
"<br>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qql1SowfbdnP"
},
"source": [
"Now lastly, for the sheer sake of adventure 💁‍♀️, let's also move the Month column to the first position i.e `before` column `Package`.\n",
"\n",
"`dplyr::relocate()` is used to change column positions."
]
},
{
"cell_type": "code",
"metadata": {
"id": "JJ1x6kw8bixF"
},
"source": [
"# Create a new data frame new_pumpkins\n",
"new_pumpkins <- new_pumpkins %>% \n",
" relocate(Month, .before = Package)\n",
"\n",
"new_pumpkins %>% \n",
" slice_head(n = 7)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "y8TJ0Za_bn5Y"
},
"source": [
"Good job!👌 You now have a clean, tidy dataset on which you can build your new regression model!\n",
"<br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mYSH6-EtbvNa"
},
"source": [
"## 4. Data visualization with ggplot2\n",
"\n",
"![Infographic by Dasani Madipalli](../images/data-visualization.png){width=\"600\"}\n",
"\n",
"There is a *wise* saying that goes like this:\n",
"\n",
"> \"The simple graph has brought more information to the data analyst's mind than any other device.\" --- John Tukey\n",
"\n",
"Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of data. In this way, they are able to visually show relationships and gaps that are otherwise hard to uncover.\n",
"\n",
"Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.\n",
"\n",
"R offers a number of several systems for making graphs, but [`ggplot2`](https://ggplot2.tidyverse.org/index.html) is one of the most elegant and most versatile. `ggplot2` allows you to compose graphs by **combining independent components**.\n",
"\n",
"Let's start with a simple scatter plot for the Price and Month columns.\n",
"\n",
"So in this case, we'll start with [`ggplot()`](https://ggplot2.tidyverse.org/reference/ggplot.html), supply a dataset and aesthetic mapping (with [`aes()`](https://ggplot2.tidyverse.org/reference/aes.html)) then add a layers (like [`geom_point()`](https://ggplot2.tidyverse.org/reference/geom_point.html)) for scatter plots.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "g2YjnGeOcLo4"
},
"source": [
"# Set a theme for the plots\n",
"theme_set(theme_light())\n",
"\n",
"# Create a scatter plot\n",
"p <- ggplot(data = new_pumpkins, aes(x = Price, y = Month))\n",
"p + geom_point()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ml7SDCLQcPvE"
},
"source": [
"Is this a useful plot 🤷? Does anything about it surprise you?\n",
"\n",
"It's not particularly useful as all it does is display in your data as a spread of points in a given month.\n",
"<br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jMakvJZIcVkh"
},
"source": [
"### **How do we make it useful?**\n",
"\n",
"To get charts to display useful data, you usually need to group the data somehow. For instance in our case, finding the average price of pumpkins for each month would provide more insights to the underlying patterns in our data. This leads us to one more **dplyr** flyby:\n",
"\n",
"#### `dplyr::group_by() %>% summarize()`\n",
"\n",
"Grouped aggregation in R can be easily computed using\n",
"\n",
"`dplyr::group_by() %>% summarize()`\n",
"\n",
"- `dplyr::group_by()` changes the unit of analysis from the complete dataset to individual groups such as per month.\n",
"\n",
"- `dplyr::summarize()` creates a new data frame with one column for each grouping variable and one column for each of the summary statistics that you have specified.\n",
"\n",
"For example, we can use the `dplyr::group_by() %>% summarize()` to group the pumpkins into groups based on the **Month** columns and then find the **mean price** for each month."
]
},
{
"cell_type": "code",
"metadata": {
"id": "6kVSUa2Bcilf"
},
"source": [
"# Find the average price of pumpkins per month\n",
"new_pumpkins %>%\n",
" group_by(Month) %>% \n",
" summarise(mean_price = mean(Price))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Kds48GUBcj3W"
},
"source": [
"Succinct!✨\n",
"\n",
"Categorical features such as months are better represented using a bar plot 📊. The layers responsible for bar charts are `geom_bar()` and `geom_col()`. Consult `?geom_bar` to find out more.\n",
"\n",
"Let's whip up one!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "VNbU1S3BcrxO"
},
"source": [
"# Find the average price of pumpkins per month then plot a bar chart\n",
"new_pumpkins %>%\n",
" group_by(Month) %>% \n",
" summarise(mean_price = mean(Price)) %>% \n",
" ggplot(aes(x = Month, y = mean_price)) +\n",
" geom_col(fill = \"midnightblue\", alpha = 0.7) +\n",
" ylab(\"Pumpkin Price\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "zDm0VOzzcuzR"
},
"source": [
"🤩🤩This is a more useful data visualization! It seems to indicate that the highest price for pumpkins occurs in September and October. Does that meet your expectation? Why or why not?\n",
"\n",
"Congratulations on finishing the second lesson 👏! You did prepared your data for model building, then uncovered more insights using visualizations!"
]
}
]
}

@ -0,0 +1,345 @@
---
title: 'Build a regression model: prepare and visualize data'
output:
html_document:
df_print: paged
theme: flatly
highlight: breezedark
toc: yes
toc_float: yes
code_download: yes
---
## **Linear Regression for Pumpkins - Lesson 2**
#### Introduction
Now that you are set up with the tools you need to start tackling machine learning model building with Tidymodels and the Tidyverse, you are ready to start asking questions of your data. As you work with data and apply ML solutions, it's very important to understand how to ask the right questions to properly unlock the potential of your dataset.
In this lesson, you will learn:
- How to prepare your data for model-building.
- How to use `ggplot2` for data visualization.
The question you need answered will determine what type of ML algorithms you will leverage. And the quality of the answer you get back will be heavily dependent on the nature of your data.
Let's see this by working through a practical exercise.
![Artwork by \@allison_horst](../images/unruly_data.jpg){width="700"}
## 1. Importing pumpkins data and summoning the Tidyverse
We'll require the following packages to slice and dice this lesson:
- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to make data science faster, easier and more fun!
You can have them installed as:
`install.packages(c("tidyverse"))`
The script below checks whether you have the packages required to complete this module and installs them for you in case they are missing.
```{r, message=F, warning=F}
if (!require("pacman")) install.packages("pacman")
pacman::p_load(tidyverse)
```
Now, let's fire up some packages and load the [data](https://github.com/microsoft/ML-For-Beginners/blob/main/2-Regression/data/US-pumpkins.csv) provided for this lesson!
```{r load_tidy_verse_models, message=F, warning=F}
# Load the core Tidyverse packages
library(tidyverse)
# Import the pumpkins data
pumpkins <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/2-Regression/data/US-pumpkins.csv")
# Get a glimpse and dimensions of the data
glimpse(pumpkins)
# Print the first 50 rows of the data set
pumpkins %>%
  slice_head(n = 50)
```
A quick `glimpse()` immediately shows that there are blanks and a mix of strings (`chr`) and numeric data (`dbl`). The `Date` is of type character and there's also a strange column called `Package` where the data is a mix between `sacks`, `bins` and other values. The data, in fact, is a bit of a mess 😤.
In fact, it is not very common to be gifted a dataset that is completely ready to use to create a ML model out of the box. But worry not, in this lesson, you will learn how to prepare a raw dataset using standard R libraries 🧑‍🔧. You will also learn various techniques to visualize the data.📈📊
> A refresher: The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying "and then" in your code.
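As a quick illustration, the two calls below do the same thing (a minimal sketch using base R's `dim()`, which returns the number of rows and columns):

```{r pipe_demo, message=F, warning=F}
# Without the pipe: dim() takes pumpkins as its argument directly
dim(pumpkins)

# With the pipe: pumpkins is passed forward as the first argument of dim()
pumpkins %>% 
  dim()
```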
## 2. Check for missing data
One of the most common issues data scientists need to deal with is incomplete or missing data. R represents missing or unknown values with a special sentinel value: `NA` (Not Available).
So how would we know that the data frame contains missing values?
- One straightforward way would be to use the base R function `anyNA()`, which returns a single logical value: `TRUE` or `FALSE`
```{r anyNA, message=F, warning=F}
pumpkins %>%
anyNA()
```
Great, there seems to be some missing data! That's a good place to start.
- Another way would be to use the function `is.na()` that indicates which individual column elements are missing with a logical `TRUE`.
```{r is_na, message=F, warning=F}
pumpkins %>%
is.na() %>%
head(n = 7)
```
Okay, got the job done but with a large data frame such as this, it would be inefficient and practically impossible to review all of the rows and columns individually😴.
- A more intuitive way would be to calculate the sum of the missing values for each column:
```{r colSum_NA, message=F, warning=F}
pumpkins %>%
is.na() %>%
colSums()
```
Much better! There is missing data, but maybe it won't matter for the task at hand. Let's see what further analysis brings forth.
> Along with its awesome sets of packages and functions, R has very good documentation. For instance, use `help(colSums)` or `?colSums` to find out more about the function.
## 3. Dplyr: A Grammar of Data Manipulation
![Artwork by \@allison_horst](../images/dplyr_wrangling.png){width="569"}
[`dplyr`](https://dplyr.tidyverse.org/), a package in the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges. In this section, we'll explore some of dplyr's verbs!
#### dplyr::select()
`select()` is a function in the package `dplyr` which helps you pick columns to keep or exclude.
To make your data frame easier to work with, drop several of its columns, using `select()`, keeping only the columns you need.
For instance, in this exercise, our analysis will involve the columns `Package`, `Low Price`, `High Price` and `Date`. Let's select these columns.
```{r select, message=F, warning=F}
# Select desired columns
pumpkins <- pumpkins %>%
select(Package, `Low Price`, `High Price`, Date)
# Print data set
pumpkins %>%
slice_head(n = 5)
```
#### dplyr::mutate()
`mutate()` is a function in the package `dplyr` which helps you create or modify columns, while keeping the existing columns.
The general structure of mutate is:
`data %>% mutate(new_column_name = what_it_contains)`
Let's take `mutate` out for a spin using the `Date` column by doing the following operations:
1. Convert the dates (currently of type character) to a month format (these are US dates, so the format is `MM/DD/YYYY`).
2. Extract the month from the dates to a new column.
In R, the package [lubridate](https://lubridate.tidyverse.org/) makes it easier to work with Date-time data. So, let's use `dplyr::mutate()`, `lubridate::mdy()`, `lubridate::month()` and see how to achieve the above objectives. We can drop the Date column since we won't be needing it again in subsequent operations.
```{r mut_date, message=F, warning=F}
# Load lubridate
library(lubridate)
pumpkins <- pumpkins %>%
# Convert the Date column to a date object
mutate(Date = mdy(Date)) %>%
# Extract month from Date
mutate(Month = month(Date)) %>%
# Drop Date column
select(-Date)
# View the first few rows
pumpkins %>%
slice_head(n = 7)
```
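As an optional aside, if you would rather see month names than numbers, one simple sketch is to index base R's built-in `month.abb` vector with the month number; the `Month_name` column here is purely illustrative and isn't used in the rest of the lesson:

```{r month_names, message=F, warning=F}
# Optional: map month numbers to abbreviated names (e.g. 9 becomes "Sep")
# using the built-in month.abb character vector
pumpkins %>% 
  mutate(Month_name = month.abb[Month]) %>% 
  slice_head(n = 3)
```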
Woohoo! 🤩
Next, let's create a new column `Price`, which represents the average price of a pumpkin. We'll populate it by taking the average of the `Low Price` and `High Price` columns.
```{r price, message=F, warning=F}
# Create a new column Price
pumpkins <- pumpkins %>%
mutate(Price = (`Low Price` + `High Price`)/2)
# View the first few rows of the data
pumpkins %>%
slice_head(n = 5)
```
Yeees!💪
"But wait!", you'll say after skimming through the whole data set with `View(pumpkins)`, "There's something odd here!"🤔
If you look at the `Package` column, pumpkins are sold in many different configurations. Some are sold in `1 1/9 bushel` measures, and some in `1/2 bushel` measures, some per pumpkin, some per pound, and some in big boxes with varying widths.
Let's verify this:
```{r Package, message=F, warning=F}
# Verify the distinct observations in Package column
pumpkins %>%
distinct(Package)
```
Amazing!👏
Pumpkins seem to be very hard to weigh consistently, so let's filter them by selecting only pumpkins with the string *bushel* in the `Package` column and put this in a new data frame `new_pumpkins`.
#### dplyr::filter() and stringr::str_detect()
[`dplyr::filter()`](https://dplyr.tidyverse.org/reference/filter.html): creates a subset of the data only containing **rows** that satisfy your conditions, in this case, pumpkins with the string *bushel* in the `Package` column.
[stringr::str_detect()](https://stringr.tidyverse.org/reference/str_detect.html): detects the presence or absence of a pattern in a string.
The [`stringr`](https://github.com/tidyverse/stringr) package provides simple functions for common string operations.
```{r filter, message=F, warning=F}
# Retain only pumpkins with "bushel"
new_pumpkins <- pumpkins %>%
filter(str_detect(Package, "bushel"))
# Get the dimensions of the new data
dim(new_pumpkins)
# View a few rows of the new data
new_pumpkins %>%
slice_head(n = 5)
```
You can see that we have narrowed down to 415 or so rows of data containing pumpkins by the bushel.🤩
#### dplyr::case_when()
**But wait! There's one more thing to do**
Did you notice that the bushel amount varies per row? You need to normalize the pricing so that you show the pricing per bushel, not per 1 1/9 or 1/2 bushel. Time to do some math to standardize it.
We'll use the function [`case_when()`](https://dplyr.tidyverse.org/reference/case_when.html) to *mutate* the Price column depending on some conditions. `case_when()` allows you to vectorise multiple `if_else()` statements.
```{r normalize_price, message=F, warning=F}
# Convert the price if the Package contains fractional bushel values
new_pumpkins <- new_pumpkins %>%
mutate(Price = case_when(
str_detect(Package, "1 1/9") ~ Price/(1 + 1/9),
str_detect(Package, "1/2") ~ Price/(1/2),
TRUE ~ Price))
# View the first few rows of the data
new_pumpkins %>%
slice_head(n = 30)
```
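To see what `case_when()` buys us, here is the same conditional logic written with nested `if_else()` calls, sketched on a small made-up tibble (hypothetical packages and prices) so we don't accidentally re-normalize `new_pumpkins`. Notice how the nesting gets harder to read as conditions accumulate:

```{r nested_ifelse, message=F, warning=F}
# A toy tibble with made-up values, just to compare the two styles
toy <- tibble(
  Package = c("1 1/9 bushel cartons", "1/2 bushel cartons", "bushel cartons"),
  Price = c(16, 15, 15)
)

# The nested if_else() equivalent of the case_when() call above
toy %>% 
  mutate(Price = if_else(str_detect(Package, "1 1/9"), Price / (1 + 1/9),
                 if_else(str_detect(Package, "1/2"), Price / (1/2),
                         Price)))
```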
Now, we can analyze the pricing per unit based on their bushel measurement. All this study of bushels of pumpkins, however, goes to show how very **important** it is to **understand the nature of your data**!
> ✅ According to [The Spruce Eats](https://www.thespruceeats.com/how-much-is-a-bushel-1389308), a bushel's weight depends on the type of produce, as it's a volume measurement. "A bushel of tomatoes, for example, is supposed to weigh 56 pounds... Leaves and greens take up more space with less weight, so a bushel of spinach is only 20 pounds." It's all pretty complicated! Let's not bother with making a bushel-to-pound conversion, and instead price by the bushel.
>
> ✅ Did you notice that pumpkins sold by the half-bushel are very expensive? Can you figure out why? Hint: little pumpkins are way pricier than big ones, probably because there are so many more of them per bushel, given the unused space taken by one big hollow pie pumpkin.
Now lastly, for the sheer sake of adventure 💁‍♀️, let's also move the Month column to the first position, i.e. before the `Package` column.
`dplyr::relocate()` is used to change column positions.
```{r new_pumpkins, message=F, warning=F}
# Create a new data frame new_pumpkins
new_pumpkins <- new_pumpkins %>%
relocate(Month, .before = Package)
new_pumpkins %>%
slice_head(n = 7)
```
Good job!👌 You now have a clean, tidy dataset on which you can build your new regression model!
## 4. Data visualization with ggplot2
![Infographic by Dasani Madipalli](../images/data-visualization.png){width="600"}
There is a *wise* saying that goes like this:
> "The simple graph has brought more information to the data analyst's mind than any other device." --- John Tukey
Part of the data scientist's role is to demonstrate the quality and nature of the data they are working with. To do this, they often create interesting visualizations, or plots, graphs, and charts, showing different aspects of data. In this way, they are able to visually show relationships and gaps that are otherwise hard to uncover.
Visualizations can also help determine the machine learning technique most appropriate for the data. A scatterplot that seems to follow a line, for example, indicates that the data is a good candidate for a linear regression exercise.
R offers several systems for making graphs, but [`ggplot2`](https://ggplot2.tidyverse.org/index.html) is one of the most elegant and versatile. `ggplot2` allows you to compose graphs by **combining independent components**.
Let's start with a simple scatter plot for the Price and Month columns.
So in this case, we'll start with [`ggplot()`](https://ggplot2.tidyverse.org/reference/ggplot.html), supply a dataset and aesthetic mapping (with [`aes()`](https://ggplot2.tidyverse.org/reference/aes.html)), then add layers (like [`geom_point()`](https://ggplot2.tidyverse.org/reference/geom_point.html) for scatter plots).
```{r scatter_plt, message=F, warning=F}
# Set a theme for the plots
theme_set(theme_light())
# Create a scatter plot
p <- ggplot(data = new_pumpkins, aes(x = Price, y = Month))
p + geom_point()
```
Is this a useful plot 🤷? Does anything about it surprise you?
It's not particularly useful, as all it does is display your data as a spread of points in a given month.
### **How do we make it useful?**
To get charts to display useful data, you usually need to group the data somehow. For instance, in our case, finding the average price of pumpkins for each month would provide more insight into the underlying patterns in our data. This leads us to one more **dplyr** flyby:
#### `dplyr::group_by() %>% summarize()`
Grouped aggregation in R can be easily computed using
`dplyr::group_by() %>% summarize()`
- `dplyr::group_by()` changes the unit of analysis from the complete dataset to individual groups such as per month.
- `dplyr::summarize()` creates a new data frame with one column for each grouping variable and one column for each of the summary statistics that you have specified.
For example, we can use `dplyr::group_by() %>% summarize()` to group the pumpkins based on the **Month** column and then find the **mean price** for each month.
```{r grp_sumry, message=F, warning=F}
# Find the average price of pumpkins per month
new_pumpkins %>%
group_by(Month) %>%
summarise(mean_price = mean(Price))
```
Succinct!✨
Categorical features such as months are better represented using a bar plot 📊. The layers responsible for bar charts are `geom_bar()` and `geom_col()`. Consult `?geom_bar` to find out more.
Let's whip up one!
```{r bar_plt, message=F, warning=F}
# Find the average price of pumpkins per month then plot a bar chart
new_pumpkins %>%
group_by(Month) %>%
summarise(mean_price = mean(Price)) %>%
ggplot(aes(x = Month, y = mean_price)) +
geom_col(fill = "midnightblue", alpha = 0.7) +
ylab("Pumpkin Price")
```
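In a nutshell: `geom_col()` draws bars whose heights come from values already in the data (the mean prices above), while `geom_bar()` counts rows for you. A quick sketch of the counting variant, showing how many sales records fall in each month:

```{r geom_bar_demo, message=F, warning=F}
# geom_bar() tallies the number of observations per month itself,
# so no y aesthetic is supplied
new_pumpkins %>% 
  ggplot(aes(x = Month)) +
  geom_bar(fill = "midnightblue", alpha = 0.7) +
  ylab("Number of sales records")
```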
🤩🤩 This is a more useful data visualization! It seems to indicate that the highest price for pumpkins occurs in September and October. Does that meet your expectation? Why or why not?
Congratulations on finishing the second lesson 👏! You prepared your data for model building, then uncovered more insights using visualizations!