@@ -1,8 +1,59 @@

# Displaying airport data

You have been provided a [database](https://raw.githubusercontent.com/Microsoft/Data-Science-For-Beginners/main/2-Working-With-Data/05-relational-databases/airports.db) built on [SQLite](https://sqlite.org/index.html) which contains information about airports. The schema is displayed below. You will use the [SQLite extension](https://marketplace.visualstudio.com/items?itemName=alexcvzz.vscode-sqlite&WT.mc_id=academic-40229-cxa) in [Visual Studio Code](https://code.visualstudio.com?WT.mc_id=academic-40229-cxa) to display information about different cities' airports.

## Instructions

To get started with the assignment, you'll need to perform a couple of steps: install a bit of tooling and download the sample database.

### Set up your system

You can use Visual Studio Code and the SQLite extension to interact with the database.

1. Navigate to [code.visualstudio.com](https://code.visualstudio.com?WT.mc_id=academic-40229-cxa) and follow the instructions to install Visual Studio Code
1. Install the [SQLite extension](https://marketplace.visualstudio.com/items?itemName=alexcvzz.vscode-sqlite&WT.mc_id=academic-40229-cxa) as instructed on the Marketplace page

### Download and open the database

Next you will download and open the database.

1. Download the [database file from GitHub](https://raw.githubusercontent.com/Microsoft/Data-Science-For-Beginners/main/2-Working-With-Data/05-relational-databases/airports.db) and save it to a directory
1. Open Visual Studio Code
1. Open the database in the SQLite extension by selecting **Ctrl-Shift-P** (or **Cmd-Shift-P** on a Mac) and typing `SQLite: Open database`
1. Select **Choose database from file** and open the **airports.db** file you downloaded previously
1. After opening the database (you won't see an update on the screen), create a new query window by selecting **Ctrl-Shift-P** (or **Cmd-Shift-P** on a Mac) and typing `SQLite: New query`

Once open, the new query window can be used to run SQL statements against the database. You can use the command **Ctrl-Shift-Q** (or **Cmd-Shift-Q** on a Mac) to run queries against the database.

> [!NOTE] For more information about the SQLite extension, you can consult the [documentation](https://marketplace.visualstudio.com/items?itemName=alexcvzz.vscode-sqlite&WT.mc_id=academic-40229-cxa)

## Database schema

A database's schema is its table design and structure. The **airports** database has two tables: `cities`, which contains a list of cities in the United Kingdom and Ireland, and `airports`, which contains the list of all airports. Because some cities may have multiple airports, two tables were created to store the information. In this exercise you will use joins to display information for different cities. A sample join query (run from Python, purely as an illustration) is sketched after the tables below.

| Cities           |
| ---------------- |
| id (PK, integer) |
| city (text)      |
| country (text)   |

| Airports                         |
| -------------------------------- |
| id (PK, integer)                 |
| name (text)                      |
| code (text)                      |
| city_id (FK to id in **Cities**) |
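
If you would like to sanity-check a join outside of Visual Studio Code, here is a minimal Python sketch using the standard library's `sqlite3` module. It is only an illustration, not part of the assignment: it assumes `airports.db` sits in the directory you run it from, and the table and column names are taken from the schema above.

```python
# Illustrative only: run a join against airports.db with Python's built-in
# sqlite3 module. Assumes airports.db is in the current working directory.
import sqlite3

connection = sqlite3.connect("airports.db")

# Join each airport to its city so values from both tables appear in one row.
query = """
    SELECT cities.city, cities.country, airports.code
    FROM airports
    INNER JOIN cities ON airports.city_id = cities.id
    LIMIT 5;
"""

for city, country, code in connection.execute(query):
    print(f"{code}: {city}, {country}")

connection.close()
```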

## Assignment

Create queries to return the following information:

1. all city names in the `Cities` table
1. all cities in Ireland in the `Cities` table
1. all airport names with their city and country
1. all airports in London, United Kingdom

## Rubric

| Exemplary | Adequate | Needs Improvement |
| --------- | -------- | ----------------- |

@@ -0,0 +1,76 @@

{
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "Copyright (c) Microsoft Corporation. All rights reserved.\r\n",
        "\r\n",
        "Licensed under the MIT License."
      ],
      "metadata": {}
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Exploring NYC Taxi data in Winter and Summer\r\n",
        "\r\n",
        "Refer to the [Data dictionary](https://www1.nyc.gov/assets/tlc/downloads/pdf/data_dictionary_trip_records_yellow.pdf) to explore the columns that have been provided.\r\n"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "source": [
        "!pip install pandas"
      ],
      "outputs": [],
      "metadata": {
        "scrolled": true
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "source": [
        "import pandas as pd\r\n",
        "import glob\r\n",
        "\r\n",
        "path = '../../data/Taxi/yellow_tripdata_2019-{}.csv'\r\n",
        "july_taxi = pd.read_csv(path.format('07'))\r\n",
        "january_taxi = pd.read_csv(path.format('01'))\r\n",
        "\r\n",
        "df = pd.concat([january_taxi, july_taxi])\r\n",
        "\r\n",
        "print(df)"
      ],
      "outputs": [],
      "metadata": {}
    }
  ],
  "metadata": {
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3.9.7 64-bit ('venv': venv)"
    },
    "language_info": {
      "mimetype": "text/x-python",
      "name": "python",
      "pygments_lexer": "ipython3",
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "version": "3.9.7",
      "nbconvert_exporter": "python",
      "file_extension": ".py"
    },
    "name": "04-nyc-taxi-join-weather-in-pandas",
    "notebookId": 1709144033725344,
    "interpreter": {
      "hash": "6b9b57232c4b57163d057191678da2030059e733b8becc68f245de5a75abe84e"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}

@@ -0,0 +1,25 @@

{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "source": [
        "# print(pd.read_csv('../../data/Taxi/yellow_tripdata_2019-01.csv'))\r\n",
        "# all_files = glob.glob('../../data/Taxi/*.csv')\r\n",
        "\r\n",
        "# df = pd.concat((pd.read_csv(f) for f in all_files))\r\n",
        "# print(df)"
      ],
      "outputs": [],
      "metadata": {}
    }
  ],
  "metadata": {
    "orig_nbformat": 4,
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}

@@ -0,0 +1,115 @@

# Data Science in the Real World

We're almost at the end of this learning journey!

We started with definitions of data science and ethics, explored various tools & techniques for data analysis, reviewed the data science lifecycle, and looked at scaling and automating data science workflows with cloud computing services.

And right now, you're probably wondering: "_How do these lessons translate to real-world contexts?_"

In this lesson, we'll talk about the real-world applications of data science and dive into a select few examples that explore data science in research, sustainability and digital humanities contexts. And we'll conclude with resources to help you continue the learning journey and explore some of these application ideas on your own.

## Where is Data Science Used Today?

Data Science technologies and techniques are finding a home in almost every industry today - thanks in no small part to the democratization of AI, allowing developers to integrate data insights and decision-making intelligence into user experiences and workflows.

Here are some examples of "applied" data science in the real world:

* [Google Flu Trends](https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/) used data science to correlate search terms with flu trends. While the approach had flaws, it raised awareness of the possibilities (and challenges) of data-driven healthcare predictions.

* [UPS Routing Predictions](https://www.technologyreview.com/2018/11/21/139000/how-ups-uses-ai-to-outsmart-bad-weather/) - explains how UPS uses data science and machine learning to predict optimal routes for delivery, taking into account weather conditions, traffic patterns, delivery deadlines and more.

* [NYC Taxicab Route Visualization](http://chriswhong.github.io/nyctaxi/) - data gathered using [Freedom Of Information Laws](https://chriswhong.com/open-data/foil_nyc_taxi/) helped visualize a day in the life of NYC cabs, helping us understand how they navigate the busy city, the money they make, and the duration of trips over each 24-hour period.

* [Uber Data Science Workbench](https://eng.uber.com/dsw/) - uses data (on pickup & dropoff locations, trip duration, preferred routes etc.) gathered from millions of Uber trips *daily* to build a data analytics tool to help with pricing, safety, fraud detection and navigation decisions.

* [Sports Analytics](https://towardsdatascience.com/scope-of-analytics-in-sports-world-37ed09c39860) - focuses on _predictive analytics_ (team and player analysis - think [Moneyball](https://datasciencedegree.wisconsin.edu/blog/moneyball-proves-importance-big-data-big-ideas/) - and fan management) and _data visualization_ (team & fan dashboards, games etc.) with applications like talent scouting, sports gambling and inventory/venue management.

* [Data Science in Banking](https://data-flair.training/blogs/data-science-in-banking/) - highlights the value of data science in the finance industry with applications ranging from risk modeling and fraud detection, to customer segmentation, real-time prediction and recommender systems. Predictive analytics also drive critical measures like [credit scores](https://dzone.com/articles/using-big-data-and-predictive-analytics-for-credit).

* [Data Science in Healthcare](https://data-flair.training/blogs/data-science-in-healthcare/) - highlights applications like medical imaging (e.g., MRI, X-Ray, CT-Scan), genomics (DNA sequencing), drug development (risk assessment, success prediction), predictive analytics (patient care & supply logistics), disease tracking & prevention etc.

![Data Science Applications in The Real World](data-science-applications.png) Image Credit: [Data Flair: 6 Amazing Data Science Applications](https://data-flair.training/blogs/data-science-applications/)

There are many other application domains to consider (see the image above as one example) - check out the [Review & Self Study](?id=review-amp-self-study) section for some relevant resources. For now, let's take a slightly deeper look at a few interesting examples in the following sections.

## Research: Gender Shades Study

Researchers are often the earliest members of the technical community to explore real-world applications for big data algorithms and applied AI. The focus is often on both _exploring opportunities_ to do good and _uncovering challenges_ that lead to potential harms or unintended consequences.

Let's talk about one example - the [Gender Shades](http://gendershades.org/overview.html) project from MIT, one of the earliest to explore data ethics topics like fairness and bias, to highlight the need for more transparency in algorithm design and AI, and demand more inclusive testing of products.

The project evaluated the accuracy of AI-powered _gender classification_ products (from companies like IBM, Microsoft and Face++) using a dataset of 1270 images (from African and European countries) as the benchmark. While overall accuracy of classification was high for all products, the study identified non-trivial differences in the error rates _between different groups of users_, with misgendering being higher for female subjects or those with darker skin.
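
As a concrete illustration of what "error rates between different groups" means in code, here is a minimal sketch using the open-source [Fairlearn](https://fairlearn.org/) library (mentioned in the research links below) with made-up toy labels. It simply computes accuracy overall and per group - the kind of disaggregated evaluation the study performed, not a reproduction of its results.

```python
# Minimal sketch with made-up data: compute a metric overall and per group.
# Requires: pip install fairlearn scikit-learn
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

# Toy ground-truth labels, model predictions and a sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print("Overall accuracy:", mf.overall)
print("Accuracy by group:")
print(mf.by_group)  # differences here are what a disaggregated audit surfaces
```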

The study had broader implications for facial analysis algorithms as a whole, highlighting the potential for individual and social harms when used in contexts like law enforcement or hiring. Many organizations have since created _responsible AI_ principles and practices to improve the fairness of AI systems.

**Want to learn about relevant research efforts at Microsoft?**

* Check out these [Microsoft Research Projects](https://www.microsoft.com/research/research-area/artificial-intelligence/?facet%5Btax%5D%5Bmsr-research-area%5D%5B%5D=13556&facet%5Btax%5D%5Bmsr-content-type%5D%5B%5D=msr-project)
* Explore student projects and coursework from the [Microsoft Research Data Science Summer School](https://www.microsoft.com/en-us/research/academic-program/data-science-summer-school/).
* Check out the [Fairlearn](https://fairlearn.org/) open-source, community-driven effort to improve fairness in AI systems.

## Digital Humanities: Poetics

Digital Humanities [has been defined](https://digitalhumanities.stanford.edu/about-dh-stanford) as "a collection of practices and approaches combining computational methods with humanistic inquiry". [Stanford projects](https://digitalhumanities.stanford.edu/projects) like _"rebooting history"_ and _"poetic thinking"_ illustrate the linkage between [Digital Humanities and Data Science](https://digitalhumanities.stanford.edu/digital-humanities-and-data-science) - emphasizing techniques like network analysis, information visualization, spatial and text analysis that can help us revisit historical and literary data sets to derive new insights and perspectives.

*Want to explore and extend a project in this space?*

Check out ["Emily Dickinson and the Meter of Mood"](https://gist.github.com/jlooper/ce4d102efd057137bc000db796bfd671) - a great example from [Jen Looper](https://twitter.com/jenlooper) that asks how we can use data science to revisit familiar poetry and re-evaluate its meaning and the contributions of its author in new contexts. For instance, _can we predict the year in which a poem was authored by analyzing its tone or sentiment_ - and what does this tell us about the author's state of mind over the relevant period?

To answer that question, we follow the steps of our data science lifecycle (a short code sketch follows this list):

* [`Data Acquisition`](https://gist.github.com/jlooper/ce4d102efd057137bc000db796bfd671#acquiring-the-dataset) - to collect a relevant dataset for analysis. Options include using an API (e.g., [Poetry DB API](https://poetrydb.org/index.html)) or scraping web pages (e.g., [Project Gutenberg](https://www.gutenberg.org/files/12242/12242-h/12242-h.htm)) using tools like [Scrapy](https://scrapy.org/).
* [`Data Cleaning`](https://gist.github.com/jlooper/ce4d102efd057137bc000db796bfd671#clean-the-data) - explains how text can be formatted, sanitized and simplified using basic tools like Visual Studio Code and Microsoft Excel.
* [`Data Analysis`](https://gist.github.com/jlooper/ce4d102efd057137bc000db796bfd671#working-with-the-data-in-a-notebook) - explains how we can now import the dataset into "Notebooks" for analysis using Python packages (like pandas, numpy and matplotlib) to organize and visualize the data.
* [`Sentiment Analysis`](https://gist.github.com/jlooper/ce4d102efd057137bc000db796bfd671#sentiment-analysis-using-cognitive-services) - explains how we can integrate cloud services like Text Analytics, using low-code tools like [Power Automate](https://flow.microsoft.com/en-us/) for automated data processing workflows.
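
Here is a minimal sketch of the acquisition and analysis steps, assuming the public Poetry DB API linked above returns JSON records with `title`, `lines` and `linecount` fields (check its documentation if the response shape differs). The word counts at the end are only a crude stand-in for the sentiment step, which the gist performs with a cloud Text Analytics service.

```python
# Sketch of data acquisition + a simple analysis pass (not the full gist).
# Assumes the Poetry DB endpoint below returns a JSON list of poems, each
# with 'title', 'linecount' and 'lines' fields.
import requests
import pandas as pd

response = requests.get("https://poetrydb.org/author/Emily%20Dickinson", timeout=30)
poems = response.json()

# Flatten each poem into one row: title, line count and full text.
df = pd.DataFrame(
    [
        {
            "title": poem["title"],
            "linecount": int(poem["linecount"]),
            "text": " ".join(poem["lines"]),
        }
        for poem in poems
    ]
)

# Crude stand-in for sentiment: count a few season-related words per poem.
for word in ["summer", "winter", "sun", "frost"]:
    df[word] = df["text"].str.lower().str.count(word)

print(df[["title", "linecount", "summer", "winter", "sun", "frost"]].head())
```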

Using this workflow, we can explore the seasonal impacts on the sentiment of the poems, and fashion our own perspectives on the author. Try it out yourself - then extend the notebook to ask other questions or visualize the data in new ways!
## Sustainability: Planetary Data
|
||||
|
||||
The [2030 Agenda For Sustainable Development](https://sdgs.un.org/2030agenda) - adopted by all United Nations members in 2015 - identifies 17 goals including ones that focus on **Protecting the Planet** from degradation and the impact of climate change. The [Microsoft Sustainability](https://www.microsoft.com/en-us/sustainability) initiative supports these goals by exploring ways in which technology solutions can support and build more sustainable futures with a [focus on 4 goals](https://dev.to/azure/a-visual-guide-to-sustainable-software-engineering-53hh) - being carbon negative, water positive, zero waste, and bio-diverse by 2030.
|
||||
|
||||
Tackling these challenges in a scalable and timely manner requires cloud-scale thinking - and large scale data. That's where the [Planetary Computer](https://planetarycomputer.microsoft.com/) initiative. It consists of 4 components:
|
||||
|
||||
* [Data Catalog](https://planetarycomputer.microsoft.com/catalog) - with petabytes of data on Earth systems, hosted on Azure, available for free.
|
||||
* [Planetary API](https://planetarycomputer.microsoft.com/docs/reference/stac/) - to help users search for relevant data across space and time.
|
||||
* [Hub](https://planetarycomputer.microsoft.com/docs/overview/environment/) - a managed environment for scientists to process massive geospatial datasets.
|
||||
* [Applications](https://planetarycomputer.microsoft.com/applications) - showcasing use cases and tools using this data, for sustainability insights.
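
To get a feel for what the Planetary API looks like from code, here is a minimal sketch that lists a few dataset collections. It assumes the API follows the STAC convention at the root URL shown - confirm the exact URL and response shape in the documentation linked below before relying on it.

```python
# Minimal sketch: list a few collections from a STAC-style API root.
# The root URL below is an assumption - verify it against the Planetary
# Computer documentation.
import requests

STAC_ROOT = "https://planetarycomputer.microsoft.com/api/stac/v1"

response = requests.get(f"{STAC_ROOT}/collections", timeout=30)
response.raise_for_status()

# A STAC /collections response contains a list of datasets, each with an id.
for collection in response.json().get("collections", [])[:10]:
    print(collection["id"], "-", collection.get("title", ""))
```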

Check out [the documentation](https://planetarycomputer.microsoft.com/docs/overview/about) for more details and explore applications like [Ecosystem Monitoring](https://analytics-lab.org/ecosystemmonitoring/) to get ideas for how you can use the data sets to derive useful insights or build applications that can motivate relevant behavioral changes for sustainability.

**The Planetary Computer Project is currently in preview (as of Sep 2021)**

Please [request access](https://planetarycomputer.microsoft.com/account/request) to get started with your own exploration and connect with your peers in this space.

## Pre-Lecture Quiz

[Pre-lecture quiz]()

## 🚀 Challenge

## Post-Lecture Quiz

[Post-lecture quiz]()

## Review & Self Study

Want to explore more use cases? Here are a few relevant articles:

* [17 Data Science Applications and Examples](https://builtin.com/data-science/data-science-applications-examples) - Jul 2021
* [11 Breathtaking Data Science Applications in Real World](https://myblindbird.com/data-science-applications-real-world/) - May 2021
* [Data Science In The Real World](https://towardsdatascience.com/data-science-in-the-real-world/home) - Article Collection
* [Data Science In Education](https://data-flair.training/blogs/data-science-in-education/)
* [Data Science In Agriculture](https://data-flair.training/blogs/data-science-in-agriculture/)
* [Data Science in Finance](https://data-flair.training/blogs/data-science-in-finance/)
* [Data Science at the Movies](https://data-flair.training/blogs/data-science-at-movies/)

## Assignment

[Assignment Title](assignment.md)