Data-Science-For-Beginners/2-Working-With-Data/08-data-preparation/notebook.ipynb

{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"name": "python3",
"display_name": "Python 3",
"language": "python"
},
"language_info": {
"mimetype": "text/x-python",
"nbconvert_exporter": "python",
"name": "python",
"file_extension": ".py",
"version": "3.5.4",
"pygments_lexer": "ipython3",
"codemirror_mode": {
"version": 3,
"name": "ipython"
}
},
"colab": {
"name": "notebook.ipynb",
"provenance": []
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "rQ8UhzFpgRra"
},
"source": [
"# Data Preparation\n",
"\n",
"[Original Notebook source from *Data Science: Introduction to Machine Learning for Data Science Python and Machine Learning Studio by Lee Stott*](https://github.com/leestott/intro-Datascience/blob/master/Course%20Materials/4-Cleaning_and_Manipulating-Reference.ipynb)\n",
"\n",
"## Exploring `DataFrame` information\n",
"\n",
"> **Learning goal:** By the end of this subsection, you should be comfortable finding general information about the data stored in pandas DataFrames.\n",
"\n",
"Once you have loaded your data into pandas, it will more likely than not be in a `DataFrame`. However, if the data set in your `DataFrame` has 60,000 rows and 400 columns, how do you even begin to get a sense of what you're working with? Fortunately, pandas provides some convenient tools to quickly look at overall information about a `DataFrame` in addition to the first few and last few rows.\n",
"\n",
"In order to explore this functionality, we will import the Python scikit-learn library and use an iconic dataset that every data scientist has seen hundreds of times: British biologist Ronald Fisher's *Iris* data set used in his 1936 paper \"The use of multiple measurements in taxonomic problems\":"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "hB1RofhdgRrp"
},
"source": [
"import pandas as pd\n",
"from sklearn.datasets import load_iris\n",
"\n",
"iris = load_iris()\n",
"iris_df = pd.DataFrame(data=iris['data'], columns=iris['feature_names'])"
],
"execution_count": 1,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "AGA0A_Y8hMdz"
},
"source": [
"### `DataFrame.shape`\n",
"We have loaded the Iris Dataset into the variable `iris_df`. Before diving into the data, it is valuable to know how many datapoints we have and the overall size of the dataset, i.e. the volume of data we are dealing with. "
]
},
{
"cell_type": "code",
"metadata": {
"id": "LOe5jQohhulf",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "4641a412-8abb-4e2f-d1ec-ff9b5004e361"
},
"source": [
"iris_df.shape"
],
"execution_count": 2,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(150, 4)"
]
},
"metadata": {},
"execution_count": 2
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "smE7AGzOhxk2"
},
"source": [
"So, we are dealing with 150 rows and 4 columns of data. Each row represents one datapoint, and each column represents a single feature of the data frame. In other words, there are 150 datapoints with 4 features each.\n",
"\n",
"`shape` here is an attribute of the dataframe and not a function, which is why it doesn't end in a pair of parentheses. "
]
},
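Since `shape` is an attribute, its value is an ordinary Python tuple and can be unpacked directly. A minimal sketch, using a small stand-in `DataFrame` (not the Iris data) to keep it self-contained:

```python
import pandas as pd

# A small stand-in DataFrame: 3 rows (datapoints), 2 columns (features)
df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# shape is an attribute, not a method, so there are no parentheses;
# it holds a (rows, columns) tuple that can be unpacked
n_rows, n_cols = df.shape
print(n_rows, n_cols)  # 3 2
```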
{
"cell_type": "markdown",
"metadata": {
"id": "d3AZKs0PinGP"
},
"source": [
"### `DataFrame.columns`\n",
"Let us now look at the 4 columns of data. What exactly does each of them represent? The `columns` attribute will give us the names of the columns in the dataframe. "
]
},
{
"cell_type": "code",
"metadata": {
"id": "YPGh_ziji-CY",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "0f9c41ea-d480-4245-d7e2-56d514ac7724"
},
"source": [
"iris_df.columns"
],
"execution_count": 3,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"Index(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)',\n",
" 'petal width (cm)'],\n",
" dtype='object')"
]
},
"metadata": {},
"execution_count": 3
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TsobcU_VjCC_"
},
"source": [
"As we can see, there are four (4) columns. The `columns` attribute tells us the names of the columns and essentially nothing else. It becomes important when we want to identify the features a dataset contains."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2UTlvkjmgRrs"
},
"source": [
"### `DataFrame.info`\n",
"The amount of data (given by the `shape` attribute) and the names of the features or columns (given by the `columns` attribute) tell us something about the dataset. Now, we want to dive deeper into the dataset, and the `DataFrame.info()` function is quite useful for this. "
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "dHHRyG0_gRrt",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "94d5e48a-746c-4e58-b08f-c63b377a61b1"
},
"source": [
"iris_df.info()"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 150 entries, 0 to 149\n",
"Data columns (total 4 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 sepal length (cm) 150 non-null float64\n",
" 1 sepal width (cm) 150 non-null float64\n",
" 2 petal length (cm) 150 non-null float64\n",
" 3 petal width (cm) 150 non-null float64\n",
"dtypes: float64(4)\n",
"memory usage: 4.8 KB\n"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1XgVMpvigRru"
},
"source": [
"From here, we can make a few observations:\n",
"1. The DataType of each column: In this dataset, all of the data is stored as 64-bit floating-point numbers.\n",
"2. Number of Non-Null values: Dealing with null values is an important step in data preparation. It will be dealt with later in the notebook."
]
},
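Both observations can also be pulled out individually: the `dtypes` attribute reports the per-column data types, and summing the mask from `isnull()` counts the missing values per column. A small sketch with a hypothetical stand-in `DataFrame` containing one missing value:

```python
import numpy as np
import pandas as pd

# Stand-in data: the "width" column has one missing value
df = pd.DataFrame({
    "length": [5.1, 4.9, 4.7],
    "width": [3.5, np.nan, 3.2],
})

print(df.dtypes)           # both columns are float64
print(df.isnull().sum())   # length: 0 missing, width: 1 missing
```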
{
"cell_type": "markdown",
"metadata": {
"id": "IYlyxbpWFEF4"
},
"source": [
"### `DataFrame.describe`\n",
"Say we have a lot of numerical data in our dataset. Univariate statistical calculations such as the mean, median, quartiles etc. can be done on each of the columns individually. The `DataFrame.describe()` function provides us with a statistical summary of the numerical columns of a dataset.\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "tWV-CMstFIRA",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 297
},
"outputId": "b01322a1-4296-4ad0-f990-6e0dcba668f6"
},
"source": [
"iris_df.describe()"
],
"execution_count": 5,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>sepal length (cm)</th>\n",
" <th>sepal width (cm)</th>\n",
" <th>petal length (cm)</th>\n",
" <th>petal width (cm)</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>count</th>\n",
" <td>150.000000</td>\n",
" <td>150.000000</td>\n",
" <td>150.000000</td>\n",
" <td>150.000000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>mean</th>\n",
" <td>5.843333</td>\n",
" <td>3.057333</td>\n",
" <td>3.758000</td>\n",
" <td>1.199333</td>\n",
" </tr>\n",
" <tr>\n",
" <th>std</th>\n",
" <td>0.828066</td>\n",
" <td>0.435866</td>\n",
" <td>1.765298</td>\n",
" <td>0.762238</td>\n",
" </tr>\n",
" <tr>\n",
" <th>min</th>\n",
" <td>4.300000</td>\n",
" <td>2.000000</td>\n",
" <td>1.000000</td>\n",
" <td>0.100000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25%</th>\n",
" <td>5.100000</td>\n",
" <td>2.800000</td>\n",
" <td>1.600000</td>\n",
" <td>0.300000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>50%</th>\n",
" <td>5.800000</td>\n",
" <td>3.000000</td>\n",
" <td>4.350000</td>\n",
" <td>1.300000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>75%</th>\n",
" <td>6.400000</td>\n",
" <td>3.300000</td>\n",
" <td>5.100000</td>\n",
" <td>1.800000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>max</th>\n",
" <td>7.900000</td>\n",
" <td>4.400000</td>\n",
" <td>6.900000</td>\n",
" <td>2.500000</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n",
"count 150.000000 150.000000 150.000000 150.000000\n",
"mean 5.843333 3.057333 3.758000 1.199333\n",
"std 0.828066 0.435866 1.765298 0.762238\n",
"min 4.300000 2.000000 1.000000 0.100000\n",
"25% 5.100000 2.800000 1.600000 0.300000\n",
"50% 5.800000 3.000000 4.350000 1.300000\n",
"75% 6.400000 3.300000 5.100000 1.800000\n",
"max 7.900000 4.400000 6.900000 2.500000"
]
},
"metadata": {},
"execution_count": 5
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zjjtW5hPGMuM"
},
"source": [
"The output above shows the total number of datapoints, the mean, standard deviation, minimum, lower quartile (25%), median (50%), upper quartile (75%), and maximum value of each column."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-lviAu99gRrv"
},
"source": [
"### `DataFrame.head`\n",
"With all of the functions and attributes above, we have gotten a top-level view of the dataset. We know how many datapoints there are, how many features there are, the data type of each feature, and the number of non-null values for each feature.\n",
"\n",
"Now it's time to look at the data itself. Let's see what the first few rows (the first few datapoints) of our `DataFrame` look like:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "DZMJZh0OgRrw",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 204
},
"outputId": "14b1e3cd-54ac-47dc-f7b2-231d51d93741"
},
"source": [
"iris_df.head()"
],
"execution_count": 6,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>sepal length (cm)</th>\n",
" <th>sepal width (cm)</th>\n",
" <th>petal length (cm)</th>\n",
" <th>petal width (cm)</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>5.1</td>\n",
" <td>3.5</td>\n",
" <td>1.4</td>\n",
" <td>0.2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>4.9</td>\n",
" <td>3.0</td>\n",
" <td>1.4</td>\n",
" <td>0.2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>4.7</td>\n",
" <td>3.2</td>\n",
" <td>1.3</td>\n",
" <td>0.2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>4.6</td>\n",
" <td>3.1</td>\n",
" <td>1.5</td>\n",
" <td>0.2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>5.0</td>\n",
" <td>3.6</td>\n",
" <td>1.4</td>\n",
" <td>0.2</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n",
"0 5.1 3.5 1.4 0.2\n",
"1 4.9 3.0 1.4 0.2\n",
"2 4.7 3.2 1.3 0.2\n",
"3 4.6 3.1 1.5 0.2\n",
"4 5.0 3.6 1.4 0.2"
]
},
"metadata": {},
"execution_count": 6
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EBHEimZuEFQK"
},
"source": [
"In the output, we can see five (5) entries of the dataset. If we look at the index on the left, we find that these are the first five rows."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oj7GkrTdgRry"
},
"source": [
"### Exercise:\n",
"\n",
"From the example given above, it is clear that, by default, `DataFrame.head` returns the first five rows of a `DataFrame`. In the code cell below, can you figure out a way to display more than five rows?"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "EKRmRFFegRrz"
},
"source": [
"# Hint: Consult the documentation by using iris_df.head?"
],
"execution_count": 7,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "BJ_cpZqNgRr1"
},
"source": [
"### `DataFrame.tail`\n",
"Another way of looking at the data can be from the end (instead of the beginning). The flipside of `DataFrame.head` is `DataFrame.tail`, which returns the last five rows of a `DataFrame`:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "heanjfGWgRr2",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 204
},
"outputId": "d4e22b38-ba5d-4dd1-bbd2-b9cd9ad7b150"
},
"source": [
"iris_df.tail()"
],
"execution_count": 8,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>sepal length (cm)</th>\n",
" <th>sepal width (cm)</th>\n",
" <th>petal length (cm)</th>\n",
" <th>petal width (cm)</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>145</th>\n",
" <td>6.7</td>\n",
" <td>3.0</td>\n",
" <td>5.2</td>\n",
" <td>2.3</td>\n",
" </tr>\n",
" <tr>\n",
" <th>146</th>\n",
" <td>6.3</td>\n",
" <td>2.5</td>\n",
" <td>5.0</td>\n",
" <td>1.9</td>\n",
" </tr>\n",
" <tr>\n",
" <th>147</th>\n",
" <td>6.5</td>\n",
" <td>3.0</td>\n",
" <td>5.2</td>\n",
" <td>2.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>148</th>\n",
" <td>6.2</td>\n",
" <td>3.4</td>\n",
" <td>5.4</td>\n",
" <td>2.3</td>\n",
" </tr>\n",
" <tr>\n",
" <th>149</th>\n",
" <td>5.9</td>\n",
" <td>3.0</td>\n",
" <td>5.1</td>\n",
" <td>1.8</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n",
"145 6.7 3.0 5.2 2.3\n",
"146 6.3 2.5 5.0 1.9\n",
"147 6.5 3.0 5.2 2.0\n",
"148 6.2 3.4 5.4 2.3\n",
"149 5.9 3.0 5.1 1.8"
]
},
"metadata": {},
"execution_count": 8
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "31kBWfyLgRr3"
},
"source": [
"In practice, it is useful to be able to easily examine the first few rows or the last few rows of a `DataFrame`, particularly when you are looking for outliers in ordered datasets. \n",
"\n",
"Together, the functions and attributes shown above, demonstrated with code examples, help us get a look and feel of the data. \n",
"\n",
"> **Takeaway:** Even just by looking at the metadata about the information in a DataFrame or the first and last few values in one, you can get an immediate idea about the size, shape, and content of the data you are dealing with."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TvurZyLSDxq_"
},
"source": [
"### Missing Data\n",
"Let us now dive into missing data. Missing data occurs when no value is stored in some of the columns. \n",
"\n",
"Let us take an example: say someone who is conscious about their weight doesn't fill in the weight field of a survey. Then, the weight value for that person will be missing. \n",
"\n",
"Missing values occur in real-world datasets most of the time.\n",
"\n",
"**How pandas handles missing data**\n",
"\n",
"\n",
"Pandas handles missing values in two ways. The first you've seen before in previous sections: `NaN`, or Not a Number. This is actually a special value that is part of the IEEE floating-point specification, and it is only used to indicate missing floating-point values.\n",
"\n",
"For missing values apart from floats, pandas uses the Python `None` object. While it might seem confusing that you will encounter two different kinds of values that say essentially the same thing, there are sound programmatic reasons for this design choice and, in practice, going this route enables pandas to deliver a good compromise for the vast majority of cases. Notwithstanding this, both `None` and `NaN` carry restrictions that you need to be mindful of with regards to how they can be used."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lOHqUlZFgRr5"
},
"source": [
"### `None`: non-float missing data\n",
"Because `None` comes from Python, it cannot be used in NumPy and pandas arrays that are not of data type `'object'`. Remember, NumPy arrays (and the data structures in pandas) can contain only one type of data. This is what gives them their tremendous power for large-scale data and computational work, but it also limits their flexibility. Such arrays have to upcast to the “lowest common denominator,” the data type that will encompass everything in the array. When `None` is in the array, it means you are working with Python objects.\n",
"\n",
"To see this in action, consider the following example array (note the `dtype` for it):"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "QIoNdY4ngRr7",
"outputId": "e2ea93a4-b967-4319-904b-85479c36b169",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"import numpy as np\n",
"\n",
"example1 = np.array([2, None, 6, 8])\n",
"example1"
],
"execution_count": 9,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([2, None, 6, 8], dtype=object)"
]
},
"metadata": {},
"execution_count": 9
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pdlgPNbhgRr7"
},
"source": [
"The reality of upcast data types carries two side effects with it. First, operations will be carried out at the level of interpreted Python code rather than compiled NumPy code. Essentially, this means that any operations involving `Series` or `DataFrame`s with `None` in them will be slower. While you might not notice this performance hit on small datasets, it can become an issue for large ones.\n",
"\n",
"The second side effect stems from the first. Because `None` essentially drags `Series` or `DataFrame`s back into the world of vanilla Python, using NumPy/pandas aggregations like `sum()` or `min()` on arrays that contain a ``None`` value will generally produce an error:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "gWbx-KB9gRr8",
"outputId": "ff2a899b-5419-4a5c-b054-bc1e6ab906c5",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 292
}
},
"source": [
"example1.sum()"
],
"execution_count": 10,
"outputs": [
{
"output_type": "error",
"ename": "TypeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-10-ce9901ad18bd>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mexample1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/numpy/core/_methods.py\u001b[0m in \u001b[0;36m_sum\u001b[0;34m(a, axis, dtype, out, keepdims, initial, where)\u001b[0m\n\u001b[1;32m 45\u001b[0m def _sum(a, axis=None, dtype=None, out=None, keepdims=False,\n\u001b[1;32m 46\u001b[0m initial=_NoValue, where=True):\n\u001b[0;32m---> 47\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mumr_sum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0ma\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0maxis\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mout\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkeepdims\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minitial\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwhere\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 48\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 49\u001b[0m def _prod(a, axis=None, dtype=None, out=None, keepdims=False,\n",
"\u001b[0;31mTypeError\u001b[0m: unsupported operand type(s) for +: 'int' and 'NoneType'"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LcEwO8UogRr9"
},
"source": [
"**Key takeaway**: Addition (and other operations) between integers and `None` values is undefined, which can limit what you can do with datasets that contain them."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pWvVHvETgRr9"
},
"source": [
"### `NaN`: missing float values\n",
"\n",
"In contrast to `None`, NumPy (and therefore pandas) supports `NaN` for its fast, vectorized operations and ufuncs. The bad news is that any arithmetic performed on `NaN` always results in `NaN`. For example:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "rcFYfMG9gRr9",
"outputId": "a452b675-2131-47a7-ff38-2b4d6e923d50",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"np.nan + 1"
],
"execution_count": 11,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"nan"
]
},
"metadata": {},
"execution_count": 11
}
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "BW3zQD2-gRr-",
"outputId": "6956b57f-8ae7-4880-cc1d-0cf54edfe6ee",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"np.nan * 0"
],
"execution_count": 12,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"nan"
]
},
"metadata": {},
"execution_count": 12
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fU5IPRcCgRr-"
},
"source": [
"The good news: aggregations run on arrays with `NaN` in them don't raise errors. The bad news: the results are not uniformly useful:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "LCInVgSSgRr_",
"outputId": "57ad3201-3958-48c6-924b-d46b61d4aeba",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"example2 = np.array([2, np.nan, 6, 8]) \n",
"example2.sum(), example2.min(), example2.max()"
],
"execution_count": 13,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(nan, nan, nan)"
]
},
"metadata": {},
"execution_count": 13
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nhlnNJT7gRr_"
},
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "yan3QRaOgRr_"
},
"source": [
"# What happens if you add np.nan and None together?\n"
],
"execution_count": 14,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_iDvIRC8gRsA"
},
"source": [
"Remember: `NaN` is just for missing floating-point values; there is no `NaN` equivalent for integers, strings, or Booleans."
]
},
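To see this restriction in action: a NumPy integer array cannot hold `NaN` at all, while pandas upcasts an integer `Series` to `float64` as soon as a missing value appears in it. A quick sketch:

```python
import numpy as np
import pandas as pd

# An integer NumPy array refuses NaN outright
int_array = np.array([1, 2, 3])
try:
    int_array[0] = np.nan
except ValueError as err:
    print("NumPy refuses:", err)

# pandas instead upcasts the whole Series to float64 to make room for NaN
s = pd.Series([1, np.nan, 3])
print(s.dtype)  # float64
```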
{
"cell_type": "markdown",
"metadata": {
"id": "kj6EKdsAgRsA"
},
"source": [
"### `NaN` and `None`: null values in pandas\n",
"\n",
"Even though `NaN` and `None` can behave somewhat differently, pandas is nevertheless built to handle them interchangeably. To see what we mean, consider a `Series` of integers:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "Nji-KGdNgRsA",
"outputId": "8dbdf129-cd8b-40b5-96ba-21a7f3fa0044",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"int_series = pd.Series([1, 2, 3], dtype=int)\n",
"int_series"
],
"execution_count": 15,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"0 1\n",
"1 2\n",
"2 3\n",
"dtype: int64"
]
},
"metadata": {},
"execution_count": 15
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WklCzqb8gRsB"
},
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "Cy-gqX5-gRsB"
},
"source": [
"# Now set an element of int_series equal to None.\n",
"# How does that element show up in the Series?\n",
"# What is the dtype of the Series?\n"
],
"execution_count": 16,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "WjMQwltNgRsB"
},
"source": [
"In the process of upcasting data types to establish data homogeneity in `Series` and `DataFrame`s, pandas will willingly switch missing values between `None` and `NaN`. Because of this design feature, it can be helpful to think of `None` and `NaN` as two different flavors of \"null\" in pandas. Indeed, some of the core methods you will use to deal with missing values in pandas reflect this idea in their names:\n",
"\n",
"- `isnull()`: Generates a Boolean mask indicating missing values\n",
"- `notnull()`: Opposite of `isnull()`\n",
"- `dropna()`: Returns a filtered version of the data\n",
"- `fillna()`: Returns a copy of the data with missing values filled or imputed\n",
"\n",
"These are important methods to master and get comfortable with, so let's go over them each in some depth."
]
},
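Before going over each method in depth, here is a minimal preview sketch applying all four to a small `Series`:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

print(s.isnull().tolist())   # [False, True, False]: mask of missing values
print(s.notnull().tolist())  # [True, False, True]: the opposite mask
print(s.dropna().tolist())   # [1.0, 3.0]: missing value removed
print(s.fillna(0).tolist())  # [1.0, 0.0, 3.0]: missing value filled with 0
```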
{
"cell_type": "markdown",
"metadata": {
"id": "Yh5ifd9FgRsB"
},
"source": [
"### Detecting null values\n",
"\n",
"Now that we have understood the importance of missing values, we need to detect them in our dataset before dealing with them.\n",
"Both `isnull()` and `notnull()` are your primary methods for detecting null data. Both return Boolean masks over your data."
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "e-vFp5lvgRsC"
},
"source": [
"example3 = pd.Series([0, np.nan, '', None])"
],
"execution_count": 17,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "1XdaJJ7PgRsC",
"outputId": "1fd6c6af-19e0-4568-e837-985d571604f4",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"example3.isnull()"
],
"execution_count": 18,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"0 False\n",
"1 True\n",
"2 False\n",
"3 True\n",
"dtype: bool"
]
},
"metadata": {},
"execution_count": 18
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PaSZ0SQygRsC"
},
"source": [
"Look closely at the output. Does any of it surprise you? While `0` is an arithmetic null, it's nevertheless a perfectly good integer and pandas treats it as such. `''` is a little more subtle. While we used it in Section 1 to represent an empty string value, it is nevertheless a string object and not a representation of null as far as pandas is concerned.\n",
"\n",
"Now, let's turn this around and use these methods in a manner more like you will use them in practice. You can use Boolean masks directly as a ``Series`` or ``DataFrame`` index, which can be useful when trying to work with isolated missing (or present) values.\n",
"\n",
"If we want the total number of missing values, we can just do a sum over the mask produced by the `isnull()` method."
]
},
{
"cell_type": "code",
"metadata": {
"id": "JCcQVoPkHDUv",
"outputId": "c0002689-f529-4e3e-c73b-41ac513c59d3",
"colab": {
"base_uri": "https://localhost:8080/"
}
},
"source": [
"example3.isnull().sum()"
],
"execution_count": 19,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"2"
]
},
"metadata": {},
"execution_count": 19
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PlBqEo3mgRsC"
},
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "ggDVf5uygRsD"
},
"source": [
"# Try running example3[example3.notnull()].\n",
"# Before you do so, what do you expect to see?\n"
],
"execution_count": 20,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "D_jWN7mHgRsD"
},
"source": [
"**Key takeaway**: Both the `isnull()` and `notnull()` methods produce similar results when you use them on DataFrames: they return Boolean masks over your data along with its index, which will help you enormously as you wrestle with your data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BvnoojWsgRr4"
},
"source": [
"### Dealing with missing data\n",
"\n",
"> **Learning goal:** By the end of this subsection, you should know how and when to replace or remove null values from DataFrames.\n",
"\n",
"Machine learning models can't deal with missing data themselves. So, before passing the data into the model, we need to deal with these missing values.\n",
"\n",
"How missing data is handled carries subtle tradeoffs that can affect your final analysis and real-world outcomes.\n",
"\n",
"There are primarily two ways of dealing with missing data:\n",
"\n",
"\n",
"1. Drop the row containing the missing value\n",
"2. Replace the missing value with some other value\n",
"\n",
"We will discuss both of these methods and their pros and cons in detail.\n",
"\n"
]
},
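The two strategies can be sketched side by side on a tiny hypothetical `DataFrame`. Note the tradeoff: dropping discards a whole observation, while filling (here with the column mean, one common choice) keeps every row at the cost of inventing a value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan, 3.0], "y": [4.0, 5.0, 6.0]})

# Strategy 1: drop the row containing the missing value
dropped = df.dropna()
print(dropped.shape)  # (2, 2): one observation is lost

# Strategy 2: replace the missing value, e.g. with the column mean
filled = df.fillna(df.mean())
print(filled["x"].tolist())  # [1.0, 2.0, 3.0]: NaN became the mean of x
```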
{
"cell_type": "markdown",
"metadata": {
"id": "3VaYC1TvgRsD"
},
"source": [
"### Dropping null values\n",
"\n",
"Beyond identifying missing values, pandas provides a convenient means to remove null values from `Series` and `DataFrame`s. (Particularly on large data sets, it is often more advisable to simply remove missing [NA] values from your analysis than deal with them in other ways.) To see this in action, let's return to `example3`:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "7uIvS097gRsD"
},
"source": [
"example3 = example3.dropna()\n",
"example3"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "hil2cr64gRsD"
},
"source": [
"Note that this should look like your output from `example3[example3.notnull()]`. The difference here is that, rather than just indexing on the masked values, `dropna` has removed those missing values from the `Series` `example3`.\n",
"\n",
"Because `DataFrame`s have two dimensions, they afford more options for dropping data."
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "an-l74sPgRsE"
},
"source": [
"example4 = pd.DataFrame([[1, np.nan, 7], \n",
" [2, 5, 8], \n",
" [np.nan, 6, 9]])\n",
"example4"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "66wwdHZrgRsE"
},
"source": [
"(Did you notice that pandas upcast two of the columns to floats to accommodate the `NaN`s?)\n",
"\n",
"You cannot drop a single value from a `DataFrame`, so you have to drop full rows or columns. Depending on what you are doing, you might want to do one or the other, and so pandas gives you options for both. Because in data science, columns generally represent variables and rows represent observations, you are more likely to drop rows of data; the default setting for `dropna()` is to drop all rows that contain any null values:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "jAVU24RXgRsE"
},
"source": [
"example4.dropna()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "TrQRBuTDgRsE"
},
"source": [
"If necessary, you can drop NA values from columns. Use `axis=1` to do so:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "GrBhxu9GgRsE"
},
"source": [
"example4.dropna(axis='columns')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "KWXiKTfMgRsF"
},
"source": [
"Notice that this can drop a lot of data that you might want to keep, particularly in smaller datasets. What if you just want to drop rows or columns that contain several or even just all null values? You specify those settings in `dropna` with the `how` and `thresh` parameters.\n",
"\n",
"By default, `how='any'` (if you would like to check for yourself or see what other parameters the method has, run `example4.dropna?` in a code cell). You could alternatively specify `how='all'` so as to drop only rows or columns that contain all null values. Let's expand our example `DataFrame` to see this in action."
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "Bcf_JWTsgRsF"
},
"source": [
"example4[3] = np.nan\n",
"example4"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "oXXSfQFHgRsF"
},
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "ExUwQRxpgRsF"
},
"source": [
"# How might you go about dropping just column 3?\n",
"# Hint: remember that you will need to supply both the axis parameter and the how parameter.\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "38kwAihWgRsG"
},
"source": [
"The `thresh` parameter gives you finer-grained control: you set the number of *non-null* values that a row or column needs to have in order to be kept:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "M9dCNMaagRsG"
},
"source": [
"example4.dropna(axis='rows', thresh=3)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "fmSFnzZegRsG"
},
"source": [
"Here, the first and last rows have been dropped because each contains only two non-null values."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mCcxLGyUgRsG"
},
"source": [
"### Filling null values\n",
"\n",
"Depending on your dataset, it can sometimes make more sense to fill null values with valid ones rather than drop them. You could use an `isnull` mask to assign replacement values by hand, but that can be laborious, particularly if you have a lot of values to fill. Because this is such a common task in data science, pandas provides `fillna`, which returns a copy of the `Series` or `DataFrame` with the missing values replaced with one of your choosing. Let's create another example `Series` to see how this works in practice."
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "0ybtWLDdgRsG"
},
"source": [
"example5 = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))\n",
"example5"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "yrsigxRggRsH"
},
"source": [
"You can fill all of the null entries with a single value, such as `0`:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "KXMIPsQdgRsH"
},
"source": [
"example5.fillna(0)"
],
"execution_count": null,
"outputs": []
},
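{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also pass a dictionary to `fillna` to fill different entries with different values; for a `Series`, the keys are index labels. A quick sketch (the fill values here are arbitrary, chosen just for illustration):"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false
},
"source": [
"example5.fillna({'b': 0.5, 'd': 2.5})"
],
"execution_count": null,
"outputs": []
},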
{
"cell_type": "markdown",
"metadata": {
"id": "FI9MmqFJgRsH"
},
3 years ago
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "af-ezpXdgRsH"
},
"source": [
"# What happens if you try to fill null values with a string, like ''?\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "kq3hw1kLgRsI"
},
"source": [
"You can **forward-fill** null values, using the last valid value to fill a null:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "vO3BuNrggRsI"
},
"source": [
"example5.ffill()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "nDXeYuHzgRsI"
},
"source": [
"You can also **back-fill** to propagate the next valid value backward to fill a null:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "4M5onHcEgRsI"
},
"source": [
"example5.bfill()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"id": "MbBzTom5gRsI"
},
"source": [
"As you might guess, this works the same with `DataFrame`s, but you can also specify an `axis` along which to fill null values:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "aRpIvo4ZgRsI"
},
"source": [
"example4"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "VM1qtACAgRsI"
},
"source": [
"example4.ffill(axis=1)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZeMc-I1EgRsI"
},
"source": [
"Notice that when a previous value is not available for forward-filling, the null value remains."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eeAoOU0RgRsJ"
},
"source": [
"### Exercise:"
]
},
{
"cell_type": "code",
"metadata": {
"collapsed": true,
"trusted": false,
"id": "e8S-CjW8gRsJ"
},
"source": [
"# What output does example4.bfill(axis=1) produce?\n",
"# What about example4.ffill() or example4.bfill()?\n",
"# Can you think of a longer code snippet to write that can fill all of the null values in example4?\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "YHgy0lIrgRsJ"
},
"source": [
"You can be creative about how you use `fillna`. For example, let's look at `example4` again, but this time let's fill each missing value with the mean of its column:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "OtYVErEygRsJ"
},
"source": [
"example4.fillna(example4.mean())"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "zpMvCkLSgRsJ"
},
"source": [
"Notice that column 3 is still valueless: `example4.mean()` computes each column's mean, and the mean of an all-`NaN` column is itself `NaN`, so there is no valid value to fill column 3 with.\n",
"\n",
"> **Takeaway:** There are multiple ways to deal with missing values in your datasets. The specific strategy you use (removing them, replacing them, or even how you replace them) should be dictated by the particulars of that data. You will develop a better sense of how to deal with missing values the more you handle and interact with datasets."
]
},
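{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to finish the job (a sketch, with `0` chosen as an arbitrary fallback): chain a second `fillna` so that any column whose mean is undefined, like the all-`NaN` column 3, still gets a value:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false
},
"source": [
"example4.fillna(example4.mean()).fillna(0)"
],
"execution_count": null,
"outputs": []
},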
{
"cell_type": "markdown",
"metadata": {
"id": "K8UXOJYRgRsJ"
},
"source": [
"## Removing duplicate data\n",
"\n",
"> **Learning goal:** By the end of this subsection, you should be comfortable identifying and removing duplicate values from DataFrames.\n",
"\n",
"In addition to missing data, you will often encounter duplicated data in real-world datasets. Fortunately, pandas provides an easy means of detecting and removing duplicate entries."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qrEG-Wa0gRsJ"
},
"source": [
"### Identifying duplicates: `duplicated`\n",
"\n",
"You can easily spot duplicate values using the `duplicated` method in pandas, which returns a Boolean mask indicating whether an entry in a `DataFrame` is a duplicate of an earlier one. Let's create another example `DataFrame` to see this in action."
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "ZLu6FEnZgRsJ"
},
"source": [
"example6 = pd.DataFrame({'letters': ['A','B'] * 2 + ['B'],\n",
" 'numbers': [1, 2, 1, 3, 3]})\n",
"example6"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "cIduB5oBgRsK"
},
"source": [
"example6.duplicated()"
],
"execution_count": null,
"outputs": []
},
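{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `duplicated` returns a Boolean mask, summing it is a quick way to count how many duplicate rows a `DataFrame` contains:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false
},
"source": [
"example6.duplicated().sum()"
],
"execution_count": null,
"outputs": []
},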
{
"cell_type": "markdown",
"metadata": {
"id": "0eDRJD4SgRsK"
},
"source": [
"### Dropping duplicates: `drop_duplicates`\n",
"`drop_duplicates` simply returns a copy of the data for which all of the `duplicated` values are `False`:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "w_YPpqIqgRsK"
},
"source": [
"example6.drop_duplicates()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "69AqoCZAgRsK"
},
"source": [
"Both `duplicated` and `drop_duplicates` default to considering all columns, but you can specify that they examine only a subset of the columns in your `DataFrame`:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false,
"id": "BILjDs67gRsK"
},
"source": [
"example6.drop_duplicates(['letters'])"
],
"execution_count": null,
"outputs": []
},
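{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, both methods keep the *first* occurrence of each duplicate; you can pass `keep='last'` to keep the last occurrence instead:"
]
},
{
"cell_type": "code",
"metadata": {
"trusted": false
},
"source": [
"example6.drop_duplicates(['letters'], keep='last')"
],
"execution_count": null,
"outputs": []
},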
{
"cell_type": "markdown",
"metadata": {
"id": "GvX4og1EgRsL"
},
"source": [
"> **Takeaway:** Removing duplicate data is an essential part of almost every data-science project. Duplicate data can change the results of your analyses and give you inaccurate results!"
]
}
]
}