{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Estimation of COVID-19 Pandemic\n",
"\n",
"## Loading Data\n",
"\n",
"We will use data on COVID-19 infected individuals, provided by the [Center for Systems Science and Engineering](https://systems.jhu.edu/) (CSSE) at [Johns Hopkins University](https://jhu.edu/). Dataset is available in [this GitHub Repository](https://github.com/CSSEGISandData/COVID-19)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"plt.rcParams[\"figure.figsize\"] = (10,3) # make figures larger"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can load the most recent data directly from GitHub using `pd.read_csv`. If for some reason the data is not available, you can always use the copy available locally in the `data` folder - just uncomment the line below that defines `base_url`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"base_url = \"https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/\" # loading from Internet\n",
"# base_url = \"../../data/COVID/\" # loading from disk\n",
"infected_dataset_url = base_url + \"time_series_covid19_confirmed_global.csv\"\n",
"recovered_dataset_url = base_url + \"time_series_covid19_recovered_global.csv\"\n",
"deaths_dataset_url = base_url + \"time_series_covid19_deaths_global.csv\"\n",
"countries_dataset_url = base_url + \"../UID_ISO_FIPS_LookUp_Table.csv\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's now load the data for infected individuals and see how the data looks like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected = pd.read_csv(infected_dataset_url)\n",
"infected.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that each row of the table defines the number of infected individuals for each country and/or province, and columns correspond to dates. Similar tables can be loaded for other data, such as number of recovered and number of deaths."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"recovered = pd.read_csv(recovered_dataset_url)\n",
"deaths = pd.read_csv(deaths_dataset_url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Making Sense of the Data\n",
"\n",
"From the table above the role of province column is not clear. Let's see the different values that are present in `Province/State` column:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected['Province/State'].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the names we can deduce that countries like Australia and China have more detailed breakdown by provinces. Let's look for information on China to see the example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected[infected['Country/Region']=='China']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-processing the Data \n",
"\n",
"We are not interested in breaking countries down to further territories, thus we would first get rid of this breakdown and add information on all territories together, to get info for the whole country. This can be done using `groupby`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected = infected.groupby('Country/Region').sum()\n",
"recovered = recovered.groupby('Country/Region').sum()\n",
"deaths = deaths.groupby('Country/Region').sum()\n",
"\n",
"infected.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that due to using `groupby` all DataFrames are now indexed by Country/Region. We can thus access the data for a specific country by using `.loc`:|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected.loc['US'][3:].plot()\n",
"recovered.loc['US'][3:].plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note** how we use `[3:]` to remove first two elements of a sequence that contain geolocation of a country. We can also drop those two columns altogether:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"infected.drop(columns=['Lat','Long'],inplace=True)\n",
"recovered.drop(columns=['Lat','Long'],inplace=True)\n",
"deaths.drop(columns=['Lat','Long'],inplace=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Investigating the Data\n",
"\n",
"Let's now switch to investigating a specific country. Let's create a frame that contains the data on infections indexed by date:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def mkframe(country):\n",
" df = pd.DataFrame({ 'infected' : infected.loc[country][1:] ,\n",
" 'recovered' : recovered.loc[country][1:],\n",
" 'deaths' : deaths.loc[country][1:]})\n",
" df.index = pd.to_datetime(df.index)\n",
" return df\n",
"\n",
"df = mkframe('US')\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's compute the number of new infected people each day. This will allow us to see the speed at which pandemic progresses. The easiest day to do it is to use `diff`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df['ninfected'] = df['infected'].diff()\n",
"df['ninfected'].plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see high fluctuations in data. Let's look closer at one of the months:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[(df.index.year==2020) & (df.index.month==7)]['ninfected'].plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It clearly looks like there are weekly fluctuations in data. Because we want to be able to see the trends, it makes sense to smooth out the curve by computing running average (i.e. for each day we will compute the average value of the previous several days):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df['ninfav'] = df['ninfected'].rolling(window=7).mean()\n",
"df['ninfav'].plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to be able to compare several countries, we might want to take the country's population into account, and compare the percentage of infected individuals with respect to country's population. In order to get country's population, let's load the dataset of countries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"countries = pd.read_csv(countries_dataset_url)\n",
"countries"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because this dataset contains information on both countries and provinces, to get the population of the whole country we need to be a little bit clever: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pop = countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]['Population'].iloc[0]\n",
"df['pinfected'] = df['infected']*100 / pop\n",
"df['pinfected'].plot(figsize=(10,3))\n",
"plt.show()"
]
},
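{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same per-capita computation can be repeated for other countries. The sketch below is one possible way to compare a few countries on one plot (the list of countries is an arbitrary choice, and it assumes the country names match between the infections table and the countries dataset):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for c in ['US','Italy','Germany']:\n",
"    # look up the whole-country population (rows with empty Province_State)\n",
"    p = countries[(countries['Country_Region']==c) & countries['Province_State'].isna()]['Population'].iloc[0]\n",
"    # [1:] skips the leading non-date column, as in mkframe\n",
"    (infected.loc[c][1:]*100/p).plot(label=c)\n",
"plt.legend()\n",
"plt.show()"
]
},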
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Computing $R_t$\n",
"\n",
"To see how infectious is the disease, we look at the **basic reproduction number** $R_0$, which indicated the number of people that an infected person would further infect. When $R_0$ is more than 1, the epidemic is likely to spread.\n",
"\n",
"$R_0$ is a property of the disease itself, and does not take into account some protective measures that people may take to slow down the pandemic. During the pandemic progression, we can estimate the reproduction number $R_t$ at any given time $t$. It has been shown that this number can be roughly estimated by taking a window of 8 days, and computing $$R_t=\\frac{I_{t-7}+I_{t-6}+I_{t-5}+I_{t-4}}{I_{t-3}+I_{t-2}+I_{t-1}+I_t}$$\n",
"where $I_t$ is the number of newly infected individuals on day $t$.\n",
"\n",
"Let's compute $R_t$ for our pandemic data. To do this, we will take a rolling window of 8 `ninfected` values, and apply the function to compute the ratio above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df['Rt'] = df['ninfected'].rolling(8).apply(lambda x: x[4:].sum()/x[:4].sum())\n",
"df['Rt'].plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that there are some gaps in the graph. Those can be caused by either `NaN`, if `inf` values being present in the dataset. `inf` may be caused by division by 0, and `NaN` can indicate missing data, or no data available to compute the result (like in the very beginning of our frame, where rolling window of width 8 is not yet available). To make the graph nicer, we need to fill those values using `replace` and `fillna` function.\n",
"\n",
"Let's further look at the beginning of the pandemic. We will also limit the y-axis values to show only values below 6, in order to see better, and draw horizontal line at 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ax = df[df.index<\"2020-05-01\"]['Rt'].replace(np.inf,np.nan).fillna(method='pad').plot(figsize=(10,3))\n",
"ax.set_ylim([0,6])\n",
"ax.axhline(1,linestyle='--',color='red')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another interesting indicator of the pandemic is the **derivative**, or **daily difference** in new cases. It allows us to see clearly when pandemic is increasing or declining. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df['ninfected'].diff().plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given the fact that there are a lot of fluctuations in data caused by reporting, it makes sense to smooth the curve by running rolling average to get the overall picture. Let's again focus on the first months of the pandemic:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ax=df[df.index<\"2020-06-01\"]['ninfected'].diff().rolling(7).mean().plot()\n",
"ax.axhline(0,linestyle='-.',color='red')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Challenge\n",
"\n",
"Now it is time for you to play more with the code and data! Here are a few suggestions you can experiment with:\n",
"* See the spread of the pandemic in different countries.\n",
"* Plot $R_t$ graphs for several countries on one plot for comparison, or make several plots side-by-side\n",
"* See how the number of deaths and recoveries correlate with number of infected cases.\n",
"* Try to find out how long a typical disease lasts by visually correlating infection rate and deaths rate and looking for some anomalies. You may need to look at different countries to find that out.\n",
"* Calculate the fatality rate and how it changes over time. You may want to take into account the length of the disease in days to shift one time series before doing calculations"
]
},
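{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a starting point for the first two suggestions, here is one possible sketch that wraps the $R_t$ computation from above into a function and plots the curves for a few countries on the same axes (the set of countries is an arbitrary choice):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def rt(country):\n",
"    d = mkframe(country)\n",
"    d['ninfected'] = d['infected'].diff()\n",
"    # same 8-day window estimate as above\n",
"    return d['ninfected'].rolling(8).apply(lambda x: x[4:].sum()/x[:4].sum())\n",
"\n",
"for c in ['US','Italy','Germany']:\n",
"    rt(c).replace(np.inf,np.nan).fillna(method='pad').plot(label=c)\n",
"plt.ylim(0,6)\n",
"plt.legend()\n",
"plt.show()"
]
},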
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"You may look at further studies of COVID epidemic spread in the following publications:\n",
"* [Sliding SIR Model for Rt Estimation during COVID Pandemic](https://soshnikov.com/science/sliding-sir-model-for-rt-estimation/), blog post by [Dmitry Soshnikov](http://soshnikov.com)\n",
"* T.Petrova, D.Soshnikov, A.Grunin. [Estimation of Time-Dependent Reproduction Number for Global COVID-19 Outbreak](https://www.preprints.org/manuscript/202006.0289/v1). *Preprints* **2020**, 2020060289 (doi: 10.20944/preprints202006.0289.v1)\n",
"* [Code for the above paper on GitHub](https://github.com/shwars/SlidingSIR)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"interpreter": {
"hash": "86193a1ab0ba47eac1c69c1756090baa3b420b3eea7d4aafab8b85f8b312f0c5"
},
"kernelspec": {
"display_name": "Python 3.8.8 64-bit (conda)",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}