{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"anaconda-cloud": "",
"kernelspec": {
"display_name": "R",
"language": "R",
"name": "ir"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
},
"colab": {
"name": "lesson_14.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true
},
"coopTranslator": {
"original_hash": "ad65fb4aad0a156b42216e4929f490fc",
"translation_date": "2025-09-06T15:36:34+00:00",
"source_file": "5-Clustering/2-K-Means/solution/R/lesson_15-R.ipynb",
"language_code": "en"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "GULATlQXLXyR"
},
"source": [
"## Explore K-Means clustering using R and Tidy data principles.\n",
"\n",
"### [**Pre-lecture quiz**](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/29/)\n",
"\n",
"In this lesson, you will learn how to create clusters using the Tidymodels package and other packages in the R ecosystem (we'll call them friends 🧑‍🤝‍🧑), along with the Nigerian music dataset you imported earlier. We will cover the basics of K-Means for clustering. Remember, as you learned in the previous lesson, there are many ways to work with clusters, and the method you choose depends on your data. We'll try K-Means since it's the most common clustering technique. Let's dive in!\n",
"\n",
"Terms you will learn about:\n",
"\n",
"- Silhouette scoring \n",
"- Elbow method \n",
"- Inertia \n",
"- Variance \n",
"\n",
"### **Introduction**\n",
"\n",
"[K-Means Clustering](https://wikipedia.org/wiki/K-means_clustering) is a method derived from the field of signal processing. It is used to divide and group data into `k clusters` based on similarities in their features.\n",
"\n",
"The clusters can be visualized as [Voronoi diagrams](https://wikipedia.org/wiki/Voronoi_diagram), which consist of a point (or 'seed') and its corresponding region.\n",
"\n",
"<p>\n",
" <img src=\"../../images/voronoi.png\"\n",
" width=\"500\"/>\n",
" <figcaption>Infographic by Jen Looper</figcaption>\n",
"\n",
"K-Means clustering follows these steps:\n",
"\n",
"1. The data scientist begins by specifying the desired number of clusters to be created. \n",
"2. The algorithm then randomly selects K observations from the dataset to serve as the initial centers for the clusters (i.e., centroids). \n",
"3. Each of the remaining observations is assigned to its closest centroid. \n",
"4. The new means of each cluster are calculated, and the centroid is moved to the mean. \n",
"5. Once the centers are recalculated, every observation is checked again to see if it might be closer to a different cluster. All objects are reassigned using the updated cluster means. The cluster assignment and centroid update steps are repeated iteratively until the cluster assignments stop changing (i.e., when convergence is achieved). Typically, the algorithm stops when each new iteration results in negligible movement of centroids, and the clusters stabilize.\n",
"\n",
"<div>\n",
"\n",
"> Note that due to the randomization of the initial k observations used as starting centroids, slightly different results can occur each time the procedure is applied. For this reason, most algorithms use several *random starts* and select the iteration with the lowest WCSS. Therefore, it is strongly recommended to always run K-Means with several values of *nstart* to avoid an *undesirable local optimum.*\n",
"\n",
"</div>\n",
"\n",
"This short animation using the [artwork](https://github.com/allisonhorst/stats-illustrations) of Allison Horst illustrates the clustering process:\n",
"\n",
"<p>\n",
" <img src=\"../../images/kmeans.gif\"\n",
" width=\"550\"/>\n",
" <figcaption>Artwork by @allison_horst</figcaption>\n",
"\n",
"A key question that arises in clustering is: how do you determine the number of clusters to divide your data into? One limitation of K-Means is that you need to define `k`, the number of `centroids`. Fortunately, the `elbow method` helps estimate a good starting value for `k`. You'll try it shortly.\n",
"\n",
"### **Prerequisite**\n",
"\n",
"We'll pick up right where we left off in the [previous lesson](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb), where we analyzed the dataset, created various visualizations, and filtered the dataset to focus on observations of interest. Be sure to check it out!\n",
"\n",
"We'll need some packages to complete this module. You can install them using: \n",
"`install.packages(c('tidyverse', 'tidymodels', 'cluster', 'summarytools', 'plotly', 'paletteer', 'factoextra', 'patchwork'))`\n",
"\n",
"Alternatively, the script below checks whether you have the required packages for this module and installs any missing ones for you.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ah_tBi58LXyi"
},
"source": [
"suppressWarnings(if(!require(\"pacman\")) install.packages(\"pacman\"))\n",
"\n",
"pacman::p_load('tidyverse', 'tidymodels', 'cluster', 'summarytools', 'plotly', 'paletteer', 'factoextra', 'patchwork')\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "7e--UCUTLXym"
},
"source": [
"## 1. A dance with data: Narrow down to the 3 most popular music genres\n",
"\n",
"This is a summary of what we covered in the previous lesson. Let's analyze and break down some data!\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Ycamx7GGLXyn"
},
"source": [
"# Load the core tidyverse and make it available in your current R session\n",
"library(tidyverse)\n",
"\n",
"# Import the data into a tibble\n",
"df <- read_csv(file = \"https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/5-Clustering/data/nigerian-songs.csv\", show_col_types = FALSE)\n",
"\n",
"# Narrow down to top 3 popular genres\n",
"nigerian_songs <- df %>% \n",
" # Concentrate on top 3 genres\n",
" filter(artist_top_genre %in% c(\"afro dancehall\", \"afropop\",\"nigerian pop\")) %>% \n",
" # Remove unclassified observations\n",
" filter(popularity != 0)\n",
"\n",
"\n",
"\n",
"# Visualize popular genres using bar plots\n",
"theme_set(theme_light())\n",
"nigerian_songs %>%\n",
" count(artist_top_genre) %>%\n",
" ggplot(mapping = aes(x = artist_top_genre, y = n,\n",
" fill = artist_top_genre)) +\n",
" geom_col(alpha = 0.8) +\n",
" paletteer::scale_fill_paletteer_d(\"ggsci::category10_d3\") +\n",
" ggtitle(\"Top genres\") +\n",
" theme(plot.title = element_text(hjust = 0.5))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "b5h5zmkPLXyp"
},
"source": [
"🤩 That went well!\n",
"\n",
"## 2. More data exploration.\n",
"\n",
"How clean is this data? Let's check for outliers using box plots. We will focus on numeric columns with fewer outliers (although you could remove the outliers). Boxplots can show the range of the data and will help decide which columns to use. Keep in mind, boxplots do not display variance, which is a key factor for good clusterable data. For more information, check out [this discussion](https://stats.stackexchange.com/questions/91536/deduce-variance-from-boxplot).\n",
"\n",
"[Boxplots](https://en.wikipedia.org/wiki/Box_plot) are used to visually represent the distribution of `numeric` data, so let's begin by *selecting* all numeric columns along with the popular music genres.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "HhNreJKLLXyq"
},
"source": [
"# Select top genre column and all other numeric columns\n",
"df_numeric <- nigerian_songs %>% \n",
" select(artist_top_genre, where(is.numeric)) \n",
"\n",
"# Display the data\n",
"df_numeric %>% \n",
" slice_head(n = 5)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "uYXrwJRaLXyq"
},
"source": [
"See how the selection helper `where` simplifies this process 💁? Discover more functions like this [here](https://tidyselect.r-lib.org/).\n",
"\n",
"Since well be creating a boxplot for each numeric feature and want to avoid using loops, lets reshape our data into a *longer* format. This will enable us to use `facets`—subplots that showcase individual subsets of the data.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "gd5bR3f8LXys"
},
"source": [
"# Pivot data from wide to long\n",
"df_numeric_long <- df_numeric %>% \n",
" pivot_longer(!artist_top_genre, names_to = \"feature_names\", values_to = \"values\") \n",
"\n",
"# Print out data\n",
"df_numeric_long %>% \n",
" slice_head(n = 15)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "-7tE1swnLXyv"
},
"source": [
"Much longer! Now it's time for some `ggplots`! So, which `geom` are we going to use?\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "r88bIsyuLXyy"
},
"source": [
"# Make a box plot\n",
"df_numeric_long %>% \n",
" ggplot(mapping = aes(x = feature_names, y = values, fill = feature_names)) +\n",
" geom_boxplot() +\n",
" facet_wrap(~ feature_names, ncol = 4, scales = \"free\") +\n",
" theme(legend.position = \"none\")\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "EYVyKIUELXyz"
},
"source": [
"Easy-gg!\n",
"\n",
"Now we can see that this data is somewhat messy: by looking at each column as a boxplot, you can spot outliers. You could go through the dataset and eliminate these outliers, but that would leave the data quite sparse.\n",
"\n",
"For now, let's decide which columns we'll use for our clustering exercise. We'll select the numeric columns with similar ranges. While we could encode `artist_top_genre` as numeric, we'll leave it out for now.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "-wkpINyZLXy0"
},
"source": [
"# Select variables with similar ranges\n",
"df_numeric_select <- df_numeric %>% \n",
" select(popularity, danceability, acousticness, loudness, energy) \n",
"\n",
"# Normalize data\n",
"# df_numeric_select <- scale(df_numeric_select)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "D7dLzgpqLXy1"
},
"source": [
"## 3. Computing k-means clustering in R\n",
"\n",
"We can compute k-means in R using the built-in `kmeans` function; refer to `help(\"kmeans()\")`. The `kmeans()` function takes a data frame with all numeric columns as its main argument.\n",
"\n",
"The first step in using k-means clustering is to define the number of clusters (k) that will be created in the final solution. Since we know there are 3 song genres identified in the dataset, let's try using 3:\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "uC4EQ5w7LXy5"
},
"source": [
"set.seed(2056)\n",
"# Kmeans clustering for 3 clusters\n",
"kclust <- kmeans(\n",
" df_numeric_select,\n",
" # Specify the number of clusters\n",
" centers = 3,\n",
" # How many random initial configurations\n",
" nstart = 25\n",
")\n",
"\n",
"# Display clustering object\n",
"kclust\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "hzfhscWrLXy-"
},
"source": [
"The kmeans object contains several pieces of information, which are well explained in `help(\"kmeans()\")`. For now, let's focus on a few. We can see that the data has been divided into 3 clusters with sizes of 65, 110, and 111. The output also includes the cluster centers (means) for the 3 groups across the 5 variables.\n",
"\n",
"The clustering vector represents the cluster assignment for each observation. Let's use the `augment` function to add the cluster assignment to the original dataset.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "0XwwpFGQLXy_"
},
"source": [
"# Add predicted cluster assignment to data set\n",
"augment(kclust, df_numeric_select) %>% \n",
" relocate(.cluster) %>% \n",
" slice_head(n = 10)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "NXIVXXACLXzA"
},
"source": [
"Perfect, we have just divided our dataset into 3 groups. So, how good is our clustering 🤷? Let's take a look at the `Silhouette score`.\n",
"\n",
"### **Silhouette score**\n",
"\n",
"[Silhouette analysis](https://en.wikipedia.org/wiki/Silhouette_(clustering)) can be used to evaluate the separation distance between the resulting clusters. This score ranges from -1 to 1, where a score close to 1 indicates that the cluster is compact and well-separated from other clusters. A score near 0 suggests overlapping clusters, with samples positioned close to the decision boundary of neighboring clusters. [source](https://dzone.com/articles/kmeans-silhouette-score-explained-with-python-exam).\n",
"\n",
"The average silhouette method calculates the mean silhouette score of observations for different values of *k*. A high average silhouette score signifies effective clustering.\n",
"\n",
"The `silhouette` function in the cluster package computes the average silhouette width.\n",
"\n",
"> The silhouette can be calculated using any [distance](https://en.wikipedia.org/wiki/Distance \"Distance\") metric, such as [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance \"Euclidean distance\") or [Manhattan distance](https://en.wikipedia.org/wiki/Manhattan_distance \"Manhattan distance\"), which we discussed in the [previous lesson](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/solution/R/lesson_14-R.ipynb).\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Jn0McL28LXzB"
},
"source": [
"# Load cluster package\n",
"library(cluster)\n",
"\n",
"# Compute average silhouette score\n",
"ss <- silhouette(kclust$cluster,\n",
" # Compute euclidean distance\n",
" dist = dist(df_numeric_select))\n",
"mean(ss[, 3])\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "QyQRn97nLXzC"
},
"source": [
"Our score is **.549**, which places us right in the middle. This suggests that our data isn't particularly well-suited for this type of clustering. Let's check if we can confirm this assumption visually. The [factoextra package](https://rpkgs.datanovia.com/factoextra/index.html) offers functions (`fviz_cluster()`) to visualize clustering.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "7a6Km1_FLXzD"
},
"source": [
"library(factoextra)\n",
"\n",
"# Visualize clustering results\n",
"fviz_cluster(kclust, df_numeric_select)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "IBwCWt-0LXzD"
},
"source": [
"The overlap in clusters indicates that our data is not particularly well-suited for this type of clustering, but let's proceed.\n",
"\n",
"## 4. Determining optimal clusters\n",
"\n",
"A common question that arises in K-Means clustering is this: without having predefined class labels, how can you determine the number of clusters to divide your data into?\n",
"\n",
"One approach to address this is to use a data sample to `create a series of clustering models` with an increasing number of clusters (e.g., from 1 to 10) and evaluate clustering metrics such as the **Silhouette score.**\n",
"\n",
"We can determine the optimal number of clusters by running the clustering algorithm for different values of *k* and assessing the **Within Cluster Sum of Squares** (WCSS). The total within-cluster sum of squares (WCSS) measures how compact the clusters are, and we aim for it to be as small as possible. Lower values indicate that the data points within a cluster are closer to each other.\n",
"\n",
"Let's examine how different choices of `k`, ranging from 1 to 10, impact this clustering.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "hSeIiylDLXzE"
},
"source": [
"# Create a series of clustering models\n",
"kclusts <- tibble(k = 1:10) %>% \n",
" # Perform kmeans clustering for 1,2,3 ... ,10 clusters\n",
" mutate(model = map(k, ~ kmeans(df_numeric_select, centers = .x, nstart = 25)),\n",
" # Farm out clustering metrics eg WCSS\n",
" glanced = map(model, ~ glance(.x))) %>% \n",
" unnest(cols = glanced)\n",
" \n",
"\n",
"# View clustering rsulsts\n",
"kclusts\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "m7rS2U1eLXzE"
},
"source": [
"Now that we have the total within-cluster sum-of-squares (tot.withinss) for each clustering algorithm with center *k*, we use the [elbow method](https://en.wikipedia.org/wiki/Elbow_method_(clustering)) to determine the optimal number of clusters. This method involves plotting the WCSS as a function of the number of clusters and selecting the [elbow of the curve](https://en.wikipedia.org/wiki/Elbow_of_the_curve \"Elbow of the curve\") as the ideal number of clusters.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "o_DjHGItLXzF"
},
"source": [
"set.seed(2056)\n",
"# Use elbow method to determine optimum number of clusters\n",
"kclusts %>% \n",
" ggplot(mapping = aes(x = k, y = tot.withinss)) +\n",
" geom_line(size = 1.2, alpha = 0.8, color = \"#FF7F0EFF\") +\n",
" geom_point(size = 2, color = \"#FF7F0EFF\")\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "pLYyt5XSLXzG"
},
"source": [
"The plot shows a significant decrease in WCSS (indicating greater *compactness*) as the number of clusters increases from one to two, followed by another noticeable drop from two to three clusters. Beyond that, the reduction becomes less significant, creating an `elbow` 💪 in the chart around three clusters. This suggests that there are likely two to three well-separated clusters of data points.\n",
"\n",
"We can now proceed to extract the clustering model with `k = 3`:\n",
"\n",
"> `pull()`: used to extract a single column\n",
">\n",
"> `pluck()`: used to access elements in data structures like lists\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "JP_JPKBILXzG"
},
"source": [
"# Extract k = 3 clustering\n",
"final_kmeans <- kclusts %>% \n",
" filter(k == 3) %>% \n",
" pull(model) %>% \n",
" pluck(1)\n",
"\n",
"\n",
"final_kmeans\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "l_PDTu8tLXzI"
},
"source": [
"Great! Let's proceed to visualize the clusters we've obtained. How about adding some interactivity with `plotly`?\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "dNcleFe-LXzJ"
},
"source": [
"# Add predicted cluster assignment to data set\n",
"results <- augment(final_kmeans, df_numeric_select) %>% \n",
" bind_cols(df_numeric %>% select(artist_top_genre)) \n",
"\n",
"# Plot cluster assignments\n",
"clust_plt <- results %>% \n",
" ggplot(mapping = aes(x = popularity, y = danceability, color = .cluster, shape = artist_top_genre)) +\n",
" geom_point(size = 2, alpha = 0.8) +\n",
" paletteer::scale_color_paletteer_d(\"ggthemes::Tableau_10\")\n",
"\n",
"ggplotly(clust_plt)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "6JUM_51VLXzK"
},
"source": [
"Perhaps we might have anticipated that each cluster (indicated by different colors) would correspond to unique genres (indicated by different shapes).\n",
"\n",
"Let's examine the model's accuracy.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "HdIMUGq7LXzL"
},
"source": [
"# Assign genres to predefined integers\n",
"label_count <- results %>% \n",
" group_by(artist_top_genre) %>% \n",
" mutate(id = cur_group_id()) %>% \n",
" ungroup() %>% \n",
" summarise(correct_labels = sum(.cluster == id))\n",
"\n",
"\n",
"# Print results \n",
"cat(\"Result:\", label_count$correct_labels, \"out of\", nrow(results), \"samples were correctly labeled.\")\n",
"\n",
"cat(\"\\nAccuracy score:\", label_count$correct_labels/nrow(results))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "C50wvaAOLXzM"
},
"source": [
"This model's accuracy is decent, but not exceptional. Its possible that the data isnt well-suited for K-Means Clustering. The dataset is too imbalanced, the correlations are weak, and theres too much variance between column values to form effective clusters. In fact, the clusters that do form are likely heavily influenced or skewed by the three genre categories we defined earlier.\n",
"\n",
"That said, this has been a valuable learning experience!\n",
"\n",
"In Scikit-learn's documentation, youll find that a model like this one, where clusters are not clearly defined, faces a 'variance' issue:\n",
"\n",
"<p >\n",
" <img src=\"../../images/problems.png\"\n",
" width=\"500\"/>\n",
" <figcaption>Infographic from Scikit-learn</figcaption>\n",
"\n",
"\n",
"\n",
"## **Variance**\n",
"\n",
"Variance is described as \"the average of the squared differences from the Mean\" [source](https://www.mathsisfun.com/data/standard-deviation.html). In the context of this clustering problem, it means that the values in our dataset tend to deviate too much from the mean.\n",
"\n",
"✅ This is a great opportunity to brainstorm ways to address this issue. Should you adjust the data further? Use different columns? Try a different algorithm? Hint: Consider [scaling your data](https://www.mygreatlearning.com/blog/learning-data-science-with-k-means-clustering/) to normalize it and experiment with other columns.\n",
"\n",
"> Use this '[variance calculator](https://www.calculatorsoup.com/calculators/statistics/variance-calculator.php)' to deepen your understanding of the concept.\n",
"\n",
"------------------------------------------------------------------------\n",
"\n",
"## **🚀Challenge**\n",
"\n",
"Spend some time experimenting with this notebook and tweaking the parameters. Can you improve the models accuracy by cleaning the data further (e.g., removing outliers)? You could assign weights to emphasize certain data samples. What other strategies can you think of to create better clusters?\n",
"\n",
"Hint: Try scaling your data. Theres commented code in the notebook that applies standard scaling to make the data columns more comparable in range. Youll notice that while the silhouette score decreases, the 'kink' in the elbow graph becomes smoother. This happens because unscaled data allows features with less variance to disproportionately influence the clustering. You can read more about this issue [here](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering/21226#21226).\n",
"\n",
"## [**Post-lecture quiz**](https://gray-sand-07a10f403.1.azurestaticapps.net/quiz/30/)\n",
"\n",
"## **Review & Self Study**\n",
"\n",
"- Explore a K-Means Simulator [like this one](https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng574/k-means/). This tool allows you to visualize sample data points and determine their centroids. You can adjust the datas randomness, the number of clusters, and the number of centroids. Does this help you better understand how data can be grouped?\n",
"\n",
"- Check out [this handout on K-Means](https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) from Stanford.\n",
"\n",
"Want to test your new clustering skills on datasets that are well-suited for K-Means clustering? Take a look at:\n",
"\n",
"- [Train and Evaluate Clustering Models](https://rpubs.com/eR_ic/clustering) using Tidymodels and related tools\n",
"\n",
"- [K-means Cluster Analysis](https://uc-r.github.io/kmeans_clustering), UC Business Analytics R Programming Guide\n",
"\n",
"- [K-means clustering with tidy data principles](https://www.tidymodels.org/learn/statistics/k-means/)\n",
"\n",
"## **Assignment**\n",
"\n",
"[Experiment with different clustering methods](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/2-K-Means/assignment.md)\n",
"\n",
"## THANK YOU TO:\n",
"\n",
"[Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️\n",
"\n",
"[`Allison Horst`](https://twitter.com/allison_horst/) for designing the wonderful illustrations that make R more approachable and engaging. You can find more of her work in her [gallery](https://www.google.com/url?q=https://github.com/allisonhorst/stats-illustrations&sa=D&source=editors&ust=1626380772530000&usg=AOvVaw3zcfyCizFQZpkSLzxiiQEM).\n",
"\n",
"Happy Learning,\n",
"\n",
"[Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.\n",
"\n",
"<p >\n",
" <img src=\"../../images/r_learners_sm.jpeg\"\n",
" width=\"500\"/>\n",
" <figcaption>Artwork by @allison_horst</figcaption>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n---\n\n**Disclaimer**: \nThis document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we aim for accuracy, please note that automated translations may include errors or inaccuracies. The original document in its native language should be regarded as the authoritative source. For critical information, professional human translation is advised. We are not responsible for any misunderstandings or misinterpretations resulting from the use of this translation.\n"
]
}
]
}