Add comment of Feature Engineering and Selection

pull/2/head
benjas 5 years ago
parent 4dfd6788ed
commit 64705d565b

@@ -2068,7 +2068,6 @@
],
"source": [
"# Histogram Plot of Site EUI\n",
"\n",
"figsize(8, 8)\n",
"plt.hist(data['Site EUI (kBtu/ft²)'].dropna(), bins = 20, edgecolor = 'black');\n",
"plt.xlabel('Site EUI'); \n",
@@ -2812,7 +2811,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pairs Plot"
"## Pairs Plot\n",
"Draws several kinds of plots: the upper triangle shows scatter plots, the diagonal shows histograms, and the lower triangle shows the correlation coefficient and a kernel density estimate for each pair of variables"
]
},
{
@@ -2840,45 +2840,44 @@
}
],
"source": [
"# Extract the columns to plot\n",
"plot_data = features[['score',\n",
" 'Site EUI (kBtu/ft²)',\n",
" 'Weather Normalized Source EUI (kBtu/ft²)',\n",
" 'log_Total GHG Emissions (Metric Tons CO2e)']]\n",
"# Extract the columns to plot\n",
"plot_data = features[['score', 'Site EUI (kBtu/ft²)', \n",
" 'Weather Normalized Source EUI (kBtu/ft²)', \n",
" 'log_Total GHG Emissions (Metric Tons CO2e)']]\n",
"\n",
"# Replace inf with nan\n",
"plot_data = plot_data.replace({np.inf: np.nan,-np.inf:np.nan})\n",
"# Replace the inf with nan\n",
"plot_data = plot_data.replace({np.inf: np.nan, -np.inf: np.nan})\n",
"\n",
"# Rename columns\n",
"plot_data = plot_data.rename(columns = {'Site EUI (kBtu/ft²)': 'Site EUI',\n",
" 'Weather Normalized Source EUI (kBtu/ft²)':'Weather Norm EUI',\n",
"# Rename columns \n",
"plot_data = plot_data.rename(columns = {'Site EUI (kBtu/ft²)': 'Site EUI', \n",
" 'Weather Normalized Source EUI (kBtu/ft²)': 'Weather Norm EUI',\n",
" 'log_Total GHG Emissions (Metric Tons CO2e)': 'log GHG Emissions'})\n",
"\n",
"# Drop na values\n",
"# Drop na values\n",
"plot_data = plot_data.dropna()\n",
"\n",
"# Calculate the correlation coefficient between two columns\n",
"def corr_func(x,y,**kwargs):\n",
" r = np.corrcoef(x,y)[0][1]\n",
"# Function to calculate correlation coefficient between two columns\n",
"def corr_func(x, y, **kwargs):\n",
" r = np.corrcoef(x, y)[0][1]\n",
" ax = plt.gca()\n",
" ax.annotate(\"r = {:.2f}\".format(r),\n",
" xy = (.2,.8),\n",
" xycoords = ax.transAxes,\n",
" xy=(.2, .8), xycoords=ax.transAxes,\n",
" size = 20)\n",
" \n",
"# Create the pairgrid object\n",
"grid = sns.PairGrid(data = plot_data,size=3)\n",
"\n",
"# Upper triangle is a scatter plot\n",
"grid.map_upper(plt.scatter,color = 'red', alpha =0.6)\n",
"# Create the pairgrid object\n",
"grid = sns.PairGrid(data = plot_data, size = 3)\n",
"\n",
"# Diagonal is a histogram\n",
"grid.map_diag(plt.hist,color ='red',edgecolor = 'black')\n",
"# Upper is a scatter plot\n",
"grid.map_upper(plt.scatter, color = 'red', alpha = 0.6)\n",
"\n",
"# Lower triangle is the correlation coefficient and a 2D kernel density plot\n",
"# Diagonal is a histogram\n",
"grid.map_diag(plt.hist, color = 'red', edgecolor = 'black')\n",
"\n",
"# Bottom is correlation and density plot\n",
"grid.map_lower(corr_func);\n",
"grid.map_lower(sns.kdeplot,cmap = plt.cm.Reds)\n",
"grid.map_lower(sns.kdeplot, cmap = plt.cm.Reds)\n",
"\n",
"# Title for entire plot\n",
"plt.suptitle('Pairs Plot of Energy Data', size = 36, y = 1.02);"
]
},
@@ -2886,7 +2885,45 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feature Engineering"
"We can examine the relationship between any two features: for example, in the lower left the correlation coefficient between log GHG Emissions and score is -0.35, and in the upper right we can see a scatter plot of that relationship"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feature Engineering and Selection\n",
"Now that we have explored the trends and relationships in the data, we can construct a set of features from it. We can also use the results of the EDA to guide the feature engineering:\n",
"* The distribution of the score varies by building type and, to a lesser extent, by borough.\n",
"* Taking the log transform of the features does not lead to a significant increase in the linear correlation between the features and the score.\n",
"\n",
"How to define feature engineering and selection:\n",
"* [Feature engineering](https://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/): the process of taking raw data and extracting or creating new features that allow a machine learning model to learn a mapping between these features and the target. This might mean transforming variables, such as taking the log or the square root, or one-hot encoding categorical variables so they can be used in a model. Generally, I think of feature engineering as adding additional features derived from the raw data.\n",
"* [Feature selection](https://machinelearningmastery.com/an-introduction-to-feature-selection/): the process of choosing the most relevant features in the data. \"Most relevant\" can depend on many factors; it might be something simple, such as the highest correlation with the target, or the features with the greatest variance. In feature selection, we remove features that do not help our model learn the relationship between the features and the target. This helps the model generalize better to new data and yields a more interpretable model. Generally, I think of feature selection as subtracting features, so we are left with only the most important ones.\n",
"\n",
"Feature engineering and selection are iterative processes that usually take several attempts to get right. Often we will use the results of modeling, such as the feature importances from a random forest, to go back and redo feature selection, or we will discover relationships that call for creating new variables. These processes usually also blend domain knowledge with the statistical qualities of the data.\n",
"\n",
"[Feature engineering and selection](https://www.featurelabs.com/blog/secret-to-data-science-success/) often have the highest return on time invested in a machine learning problem. They can take quite a while to get right, but tend to matter more than the exact algorithm and hyperparameters used for the model. If we don't feed the model the right data, we can't expect it to learn anything useful!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this project, we will take the following steps:\n",
"* Select the numeric variables and two categorical variables\n",
"* Add the log transform of the numeric variables\n",
"* One-hot encode the categorical variables\n",
"\n",
"For feature selection, we will:\n",
"* Remove [collinear features](https://statinfer.com/204-1-9-issue-of-multicollinearity-in-python/)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Select the numeric features, add the log transform of all numeric features, one-hot encode the categorical features, and concatenate the feature sets together."
]
},
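The feature-construction steps just described can be sketched in standalone form as follows. This is a minimal illustration on toy data, with a made-up `build_features` helper; it is not the notebook's actual cell, and the notebook applies the same idea to the building dataset's columns:

```python
import numpy as np
import pandas as pd

def build_features(data, categorical_cols):
    """Sketch: numeric columns + their log transforms + one-hot categoricals."""
    numeric = data.select_dtypes('number').copy()
    # Log-transform every numeric column (log of non-positive values yields nan/-inf,
    # which would later be replaced or dropped, as in the pairs-plot cell above)
    logs = np.log(numeric).rename(columns=lambda c: 'log_' + c)
    # One-hot encode the categorical columns
    dummies = pd.get_dummies(data[categorical_cols])
    # Concatenate the three feature sets column-wise
    return pd.concat([numeric, logs, dummies], axis=1)

# Toy frame standing in for the building data
df = pd.DataFrame({'x': [1.0, 2.0, 4.0], 'borough': ['A', 'B', 'A']})
features = build_features(df, ['borough'])
```

Each original numeric column is kept, each gains a `log_`-prefixed copy, and each categorical level becomes its own indicator column, which is why the column count grows so quickly.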
{
@@ -2943,6 +2980,13 @@
"features.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point we have 11,319 rows and 109 columns of features (one of which is the score), but not every feature has a positive effect on the result, and some are redundant because they are highly correlated."
]
},
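The removal of highly correlated features can be sketched as below. The `remove_collinear` helper and its threshold are illustrative assumptions, not the notebook's actual code; the idea is simply to drop one feature from every pair whose absolute correlation exceeds the threshold:

```python
import numpy as np
import pandas as pd

def remove_collinear(df, threshold=0.6):
    """Sketch: drop one feature from each pair with |correlation| > threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is considered exactly once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    # A column is dropped if it is too correlated with any earlier column
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# 'b' is an exact multiple of 'a', so one of the pair is removed
df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [2, 4, 6, 8], 'c': [4, 1, 3, 2]})
reduced = remove_collinear(df, threshold=0.9)
```

Scanning only the upper triangle avoids double-counting pairs and guarantees that, of two collinear features, the one appearing later in the column order is the one discarded.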
{
"cell_type": "markdown",
"metadata": {},
