Merge branch 'microsoft:main' into main

pull/325/head
Vidushi Gupta 3 years ago committed by GitHub
commit 9501179a16
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23

@ -50,7 +50,7 @@ Since data is ubiquitous, data science itself is also a
<dl>
<dt>Databases</dt>
<dd>
A critical consideration is **how to store** the data, i.e. how to structure it in a way that allows faster processing. There are different types of databases that store structured and unstructured data, which <a href="../../2-Working-With-Data/README.md">we will consider in our course</a>.
A critical consideration is **how to store** the data, i.e. how to structure it in a way that allows faster processing. There are different types of databases that store structured and unstructured data, which <a href="../../../2-Working-With-Data/README.md">we will consider in our course</a>.
</dd>
<dt>Big Data</dt>
<dd>
@ -66,7 +66,7 @@ An area of Machine Learning called artificial intelligence (AI
</dd>
<dt>Visualization</dt>
<dd>
Very large amounts of data are incomprehensible to a human being, but once we create useful visualizations with that data, we can make more sense of it and draw some conclusions. It is therefore important to know many ways of visualizing information, something we will cover in <a href="../../3-Data-Visualization/README.md">Section 3</a> of our course. Related fields also include **Infographics**, and **Human-Computer Interaction** in general.
Very large amounts of data are incomprehensible to a human being, but once we create useful visualizations with that data, we can make more sense of it and draw some conclusions. It is therefore important to know many ways of visualizing information, something we will cover in <a href="../../../3-Data-Visualization/README.md">Section 3</a> of our course. Related fields also include **Infographics**, and **Human-Computer Interaction** in general.
</dd>
</dd>
</dl>

@ -0,0 +1,164 @@
# Defining Data Science
| ![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev) ](../../../sketchnotes/01-Definitions.png) |
| :----------------------------------------------------------------------------------------------------: |
| Defining Data Science - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |
---
[![Defining Data Science Video](../images/video-def-ds.png)](https://youtu.be/beZ7Mb_oz9I)
## [Pre-lecture quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/0)
## What is Data?
In our everyday life, we are constantly surrounded by data. The text you are reading right now is data. The list of your friends' phone numbers on your smartphone is data, as is the current time displayed on your watch. As human beings, we naturally work with data, whether it is counting the money we have or writing messages to our friends.
However, data became much more important with the introduction of computers. The primary role of computers is to perform computations, but they need data to operate on. So we need to understand how computers store and process data.
With the rise of the internet, the role of computers as data-handling devices increased. If you think about it, we now use computers more and more for data processing and communication, rather than for actual computations. When we write an e-mail to a friend or search for information on the internet, we are essentially creating, storing, transmitting, and manipulating data.
> Can you remember the last time you had a computer perform an actual computation?
## What is Data Science?
[Wikipedia](https://en.wikipedia.org/wiki/Data_science) defines **Data Science** as *an interdisciplinary research field concerning scientific methods, processes, and systems to extract knowledge and insights from (both structured and unstructured) data.*
This definition highlights the following important aspects of data science:
* The main goal of data science is to distill **knowledge** from data, in other words - to **understand** the data, find hidden relationships, and build a **model**.
* Data science uses **scientific methods**, such as probability and statistics. When the term *data science* was first introduced, some people even argued that data science was just a fancy new name for statistics. Nowadays it has become clear that the field is much broader.
* The knowledge obtained should be applied to produce **actionable insights**, i.e. practical insights that you can apply to real business situations.
* We should be able to work with both **structured** and **unstructured** data. We will come back to discussing different types of data later in the course.
* **Application domain** is an important concept, and data scientists often need at least some degree of expertise in the problem domain, for example: finance, medicine, marketing, etc.
> Another important aspect of Data Science is that it studies how data can be gathered, stored, and operated upon using computers. While statistics gives us mathematical foundations, data science applies mathematical concepts to actually extract insights from data.
One of the ways (attributed to [Jim Gray](https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist))) to look at data science is to see it as a separate paradigm of science:
* **Empirical**, in which we rely mostly on observations and the results of experiments
* **Theoretical**, where new concepts emerge from existing scientific knowledge
* **Computational**, where we discover new principles based on some computational experiments
* **Data-Driven**, based on discovering relationships and patterns in the data
## Other Related Fields
Because data is ubiquitous, data science itself is also a broad field, touching many other disciplines.
<dl>
<dt>Databases</dt>
<dd>
A critical consideration is **how to store** the data, i.e. how to structure it in a way that allows faster processing. There are different types of databases that store structured and unstructured data, which <a href="../../../2-Working-With-Data/README.md">we will consider in our course</a>.
</dd>
<dt>Big Data</dt>
<dd>
Often we need to store and process very large amounts of data with a relatively simple structure. There are special approaches and tools to store that data in a distributed manner on a computer cluster, and to process it efficiently.
</dd>
<dt>Machine learning</dt>
<dd>
One way to understand data is to build **a model** that will be able to predict a desired outcome. Developing models from data is called **machine learning**. You may want to have a look at our <a href="https://aka.ms/ml-beginners">Machine Learning for Beginners</a> curriculum to learn more about it.
</dd>
<dt>Artificial Intelligence</dt>
<dd>
An area of machine learning known as Artificial Intelligence (AI) also relies on data, and involves building models of high complexity that mimic human thought processes. AI methods often allow us to turn unstructured data (for example, natural language) into structured insights.
</dd>
<dt>Visualization</dt>
<dd>
Enormous amounts of data are incomprehensible to a human being, but once we create useful visualizations using that data, we can make better sense of the data and draw some conclusions. It is therefore important to know many ways to visualize information - something we will cover in <a href="../../../3-Data-Visualization/README.md">Section 3</a> of our course. Related fields also include **Infographics** and **Human-Computer Interaction** in general.
</dd>
</dd>
</dl>
## Types of Data
As we have already mentioned, data is everywhere. We just need to capture it in the right way! It is useful to distinguish between **structured** and **unstructured** data. The former is usually represented in some well-structured form, often as a table or a number of tables, while the latter is just a collection of files. Sometimes we can also talk about **semi-structured** data, which has some sort of structure that may vary greatly.
| Structured | Semi-structured | Unstructured |
| -------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------ |
| List of people with their phone numbers | Wikipedia pages with links | Text of Encyclopaedia Britannica |
| Temperature in all rooms of a building at every minute for the last 20 years | Collection of scientific papers in JSON format with authors, publication data, and abstracts | File storage with corporate documents |
| Data on the age and gender of all people entering the building | Internet pages | Raw video feed from surveillance cameras |
## Where to Get Data
There are many possible sources of data, and it would be impossible to list them all! However, let's mention some of the typical places where you can get data:
* **Structured**
  - **Internet of Things** (IoT), including data from different sensors, such as temperature or pressure sensors, provides a lot of useful data. For example, if an office building is equipped with IoT sensors, we can automatically control heating and lighting in order to minimize costs.
  - **Surveys** that we ask users to fill in after a purchase, or after visiting a website.
  - **Analysis of behavior** can, for example, help us understand how deeply a user goes into a website, and what the typical reason for leaving the site is.
* **Unstructured**
  - **Texts** can be a rich source of insights, such as an overall **sentiment score**, or the extraction of keywords and semantic meaning.
  - **Images** or **Video**. A video from a surveillance camera can be used to estimate traffic on the road and inform people about potential traffic jams.
  - Web server **logs** can be used to understand which pages of our site are visited most often, and for how long (see the short sketch after this list).
* **Semi-structured**
  - **Social network** graphs can be great sources of data about user personalities and the potential effectiveness of spreading information.
  - When we have a bunch of photographs from a party, we can try to extract **group dynamics** data by building a graph of people taking pictures with each other.
By knowing the different possible sources of data, you can try to think about different scenarios where data science techniques can be applied to understand the situation better and to improve business processes.
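As a small illustration of the web server logs item above, here is a minimal sketch that counts visits per page with plain Python (the file name `access.log` and its Common Log Format layout are assumptions):

```python
from collections import Counter

# Hypothetical log line:
# 203.0.113.4 - - [10/Oct/2021:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326
page_visits = Counter()

with open("access.log") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue  # skip malformed lines
        request = parts[1].split()  # e.g. ['GET', '/index.html', 'HTTP/1.1']
        if len(request) >= 2:
            page_visits[request[1]] += 1

# The most frequently visited pages
print(page_visits.most_common(5))
```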
## What You Can Do with Data
In Data Science, we focus on the following steps of the data journey:
<dl>
<dt>1) Data acquisition</dt>
<dd>
The first step is to collect the data. While in many cases this can be a straightforward process, such as data flowing into a database from a web application, sometimes we need to use special techniques. For example, data from IoT sensors can be overwhelming, and it is good practice to use buffering endpoints such as IoT Hub to collect all the data before further processing.
</dd>
<dt>2) Data storage</dt>
<dd>
Storing data can be challenging, especially if we are talking about big data. When deciding how to store data, it makes sense to anticipate the way you will query the data in the future. There are several ways data can be stored:
<ul>
<li>A relational database stores a collection of tables, and uses a special language called SQL to query them (see the small sketch after these steps). Tables are usually organized into different groups called schemas. In many cases we need to convert the data from its original form to fit the schema.</li>
<li><a href="https://en.wikipedia.org/wiki/NoSQL">A NoSQL</a> database, such as <a href="https://azure.microsoft.com/services/cosmos-db/?WT.mc_id=academic-31812-dmitryso">CosmosDB</a>, does not enforce a schema on the data, and allows storing more complex data, for example, hierarchical JSON documents or graphs. However, NoSQL databases do not have the rich querying capabilities of SQL, and cannot enforce referential integrity, i.e. rules on how the data in tables is structured, and governing the relationships between tables.</li>
<li><a href="https://en.wikipedia.org/wiki/Data_lake">Data Lake</a> storage is used for large collections of data in raw, unstructured form. Data lakes are often used with big data, where all the data cannot fit on one machine and has to be stored and processed by a cluster of servers. <a href="https://en.wikipedia.org/wiki/Apache_Parquet">Parquet</a> is the data format that is often used in conjunction with big data.</li>
</ul>
</dd>
<dt>3) Data processing</dt>
<dd>
This is the most exciting part of the data journey, which involves converting the data from its original form into a form that can be used for visualization or model training. When dealing with unstructured data such as text or images, we may need to use some AI techniques to extract **features** from the data, thus converting it to structured form.
</dd>
<dt>4) Visualization / Human insights</dt>
<dd>
Often, to understand the data, we need to visualize it. With many different visualization techniques in our toolbox, we can find the right view to gain an insight. Often, a data scientist needs to "play with the data", visualizing it many times and looking for relationships. We can also use statistical techniques to test a hypothesis or prove a correlation between different pieces of data.
</dd>
<dt>5) Training a predictive model</dt>
<dd>
Because the ultimate goal of data science is to be able to make decisions based on data, we may want to use the techniques of <a href="http://github.com/microsoft/ml-for-beginners">Machine Learning</a> to build a predictive model. We can then use it to make predictions on new datasets with a similar structure.
</dd>
</dl>
Of course, depending on the actual data, some steps may be missing (for example, when we already have the data in a database, or when we do not need model training), or some steps may be repeated several times (such as data processing).
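To make the relational database step above a bit more concrete, here is a minimal sketch using Python's built-in `sqlite3` module (the table and the data are made up):

```python
import sqlite3

# An in-memory relational database with one table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, phone TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [("Ada", "555-0100"), ("Grace", "555-0101")],
)

# SQL queries the structured data declaratively
for name, phone in conn.execute("SELECT name, phone FROM people ORDER BY name"):
    print(name, phone)
```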
## Digitalization and Digital Transformation
In the last decade, many businesses have come to understand the importance of data in making business decisions. To apply data science principles to running a business, one first needs to collect some data, i.e. translate business processes into digital form. This is known as **digitalization**. Applying data science techniques to this data in order to guide decisions can lead to significant increases in productivity (or even a business pivot), which is called **digital transformation**.
Let's consider an example. Suppose we have a data science course (like this one) that we deliver online to students, and we want to use data science to improve it. How can we do that?
We can start by asking "What can be digitized?" The simplest way would be to measure the time it takes each student to complete each module, and to measure the knowledge obtained by giving a multiple-choice test at the end of each module. By averaging the time-to-complete across all students, we can find out which modules cause the most difficulty for students, and work on simplifying them.
> You might argue that this approach is not ideal, because modules can have different lengths. It is probably fairer to divide the time by the length of the module (in number of characters), and compare those values instead.
When we start analyzing the results of the multiple-choice tests, we can try to determine which concepts students have difficulty understanding, and use that information to improve the content. To do that, we need to design the tests in such a way that each question maps to a certain concept or chunk of knowledge.
If we want to make it even more complicated, we can plot the time taken for each module against the age category of the students. We might find out that for some age categories it takes an inappropriately long time to complete the module, or that students drop out before completing it. This can help us provide age recommendations for the module and minimize people's dissatisfaction caused by wrong expectations.
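A minimal pandas sketch of the normalization idea above (the column names and the numbers are made up):

```python
import pandas as pd

# Hypothetical per-student module timings; lengths are in characters
df = pd.DataFrame({
    "module": ["Intro", "Intro", "Stats", "Stats"],
    "minutes": [30, 40, 90, 110],
    "module_length": [12000, 12000, 15000, 15000],
})

# Normalize time by module length, then average across students
df["minutes_per_kchar"] = df["minutes"] / (df["module_length"] / 1000)
print(df.groupby("module")["minutes_per_kchar"].mean())
```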
## 🚀 Challenge
In this challenge, we will try to find concepts relevant to the field of Data Science by looking at texts. We will take a Wikipedia article on Data Science, download and process the text, and then build a word cloud like this one:
![Word Cloud for Data Science](../images/ds_wordcloud.png)
Visit ['notebook.ipynb'](notebook.ipynb) to read through the code. You can also run the code and see how it performs all the data transformations in real time.
> If you don't know how to run code in a Jupyter Notebook, have a look at [this article](https://soshnikov.com/education/how-to-execute-notebooks-from-github/).
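If you want a quick standalone sketch of the same idea (not the notebook's exact code; it assumes the third-party `wordcloud` and `requests` packages are installed):

```python
import requests
import matplotlib.pyplot as plt
from wordcloud import WordCloud  # pip install wordcloud

# Fetch the raw wikitext of the Wikipedia article
url = "https://en.wikipedia.org/w/index.php?title=Data_science&action=raw"
text = requests.get(url).text

# Build and display the word cloud
wc = WordCloud(background_color="white", width=800, height=400).generate(text)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```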
## [Post-lecture quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/1)
## Assignments
* **Task 1**: Modify the code above to find related concepts for the fields of **Big Data** and **Machine Learning**
* **Task 2**: [Think about Data Science scenarios](assignment.md)
## Credits
This lesson has been written with ♥️ by [Dmitry Soshnikov](http://soshnikov.com)

@ -45,7 +45,7 @@ Since data is a widespread concept, data science itself is also a
<dl>
<dt>Databases</dt>
<dd>
The most obvious thing to consider is **how to store** the data, i.e. how to structure it in a way that allows faster processing. There are different types of databases that store structured and unstructured data, which <a href="../../2-Working-With-Data/README.md">we will consider in this course</a>.
The most obvious thing to consider is **how to store** the data, i.e. how to structure it in a way that allows faster processing. There are different types of databases that store structured and unstructured data, which <a href="../../../2-Working-With-Data/README.md">we will consider in this course</a>.
</dd>
<dt>Big Data</dt>
<dd>
@ -61,7 +61,7 @@ Like machine learning, artificial intelligence also relies on data
</dd>
<dt>Visualization</dt>
<dd>
Vast amounts of data are incomprehensible to a human being, but once we create useful visualizations - we can start making much more sense of the data, and drawing some conclusions. Therefore, it is important to know many ways of visualizing information - something we will cover in <a href="../../3-Data-Visualization/README.md">Section 3</a> of our course. Related fields also include **Infographics**, and **Human-Computer Interaction** in general.
Vast amounts of data are incomprehensible to a human being, but once we create useful visualizations - we can start making much more sense of the data, and drawing some conclusions. Therefore, it is important to know many ways of visualizing information - something we will cover in <a href="../../../3-Data-Visualization/README.md">Section 3</a> of our course. Related fields also include **Infographics**, and **Human-Computer Interaction** in general.
</dd>
</dd>
</dl>

@ -1,23 +0,0 @@
# Assignment: Data Processing in Python
In this assignment, extend the code we started developing in our challenges. The assignment has two sections:
## COVID-19 Spread Modeling
- [ ] Plot the $R_t$ graphs for 5-6 different countries on one plot for comparison, or place several plots side by side
- [ ] See how the numbers of deaths and recoveries correlate with the number of infected cases
- [ ] Find out how long a typical disease lasts by correlating the infection rate with the death rate, and looking for anomalies. You may need to look at different countries to find that out.
- [ ] Calculate the fatality rate and how it changes over time. *You may want to take the length of the disease in days into account, so that you can shift one time series before doing the calculations*
## COVID-19 Papers Analysis
- [ ] Build co-occurrence matrix of different medications, and see which medications often occur together (i.e. mentioned in one abstract). You can modify the code for building co-occurrence matrix for medications and diagnoses.
- [ ] Visualize this matrix using heatmap.
- [ ] As a stretch goal, visualize the co-occurrence of medications using [chord diagram](https://en.wikipedia.org/wiki/Chord_diagram). [This library](https://pypi.org/project/chord/) may help you draw a chord diagram.
- [ ] As another stretch goal, extract dosages of different medications (such as **400mg** in *take 400mg of chloroquine daily*) using regular expressions, and build dataframe that shows different dosages for different medications. **Note**: consider numeric values that are in close textual vicinity of the medicine name.
## Rubric
Exemplary | Acceptable | Needs Improvement
--- | --- | -- |
Every task is complete, illustrated, and explained, including at least one of the two stretch goals | More than 5 tasks are complete, no stretch goals are attempted, or the results are not clear | Fewer than 5 (but more than 3) tasks are complete, and the illustrations do not clarify the point

@ -0,0 +1 @@
<!--add translations to this folder-->

@ -2,7 +2,7 @@
## Instructions
In this lesson, you worked with line charts, scatterplots, and bar charts to show interesting facts about this dataset. In this assignment, you will dig deeper into the dataset to discover facts about a given type of bird. For example, create a notebook visualizing all the interesting data you can surface about Snoew Geese. Use the three plots mentioned above to build your notebook.
In this lesson, you worked with line charts, scatterplots, and bar charts to show interesting facts about this dataset. In this assignment, you will dig deeper into the dataset to discover facts about a given type of bird. For example, create a notebook visualizing all the interesting data you can surface about Snow Geese. Use the three plots mentioned above to build your notebook.
## Rubric

@ -20,6 +20,15 @@ birds = pd.read_csv('../../data/birds.csv')
birds.head()
```
| | Name | ScientificName | Category | Order | Family | Genus | ConservationStatus | MinLength | MaxLength | MinBodyMass | MaxBodyMass | MinWingspan | MaxWingspan |
| ---: | :--------------------------- | :--------------------- | :-------------------- | :----------- | :------- | :---------- | :----------------- | --------: | --------: | ----------: | ----------: | ----------: | ----------: |
| 0 | Black-bellied whistling-duck | Dendrocygna autumnalis | Ducks/Geese/Waterfowl | Anseriformes | Anatidae | Dendrocygna | LC | 47 | 56 | 652 | 1020 | 76 | 94 |
| 1 | Fulvous whistling-duck | Dendrocygna bicolor | Ducks/Geese/Waterfowl | Anseriformes | Anatidae | Dendrocygna | LC | 45 | 53 | 712 | 1050 | 85 | 93 |
| 2 | Snow goose | Anser caerulescens | Ducks/Geese/Waterfowl | Anseriformes | Anatidae | Anser | LC | 64 | 79 | 2050 | 4050 | 135 | 165 |
| 3 | Ross's goose | Anser rossii | Ducks/Geese/Waterfowl | Anseriformes | Anatidae | Anser | LC | 57.3 | 64 | 1066 | 1567 | 113 | 116 |
| 4 | Greater white-fronted goose | Anser albifrons | Ducks/Geese/Waterfowl | Anseriformes | Anatidae | Anser | LC | 64 | 81 | 1930 | 3310 | 130 | 165 |
In general, you can quickly look at the way data is distributed by using a scatter plot as we did in the previous lesson:
```python
@ -31,6 +40,8 @@ plt.xlabel('Max Length')
plt.show()
```
![max length per order](images/scatter-wb.png)
This gives an overview of the general distribution of body length per bird Order, but it is not the optimal way to display true distributions. That task is usually handled by creating a Histogram.
## Working with histograms
@ -40,7 +51,7 @@ Matplotlib offers very good ways to visualize data distribution using Histograms
birds['MaxBodyMass'].plot(kind = 'hist', bins = 10, figsize = (12,12))
plt.show()
```
![distribution over the entire dataset](images/dist1.png)
![distribution over the entire dataset](images/dist1-wb.png)
As you can see, most of the 400+ birds in this dataset fall in the range of under 2000 for their Max Body Mass. Gain more insight into the data by changing the `bins` parameter to a higher number, something like 30:
@ -48,7 +59,7 @@ As you can see, most of the 400+ birds in this dataset fall in the range of unde
birds['MaxBodyMass'].plot(kind = 'hist', bins = 30, figsize = (12,12))
plt.show()
```
![distribution over the entire dataset with larger bins param](images/dist2.png)
![distribution over the entire dataset with larger bins param](images/dist2-wb.png)
This chart shows the distribution in a bit more granular fashion. A chart less skewed to the left could be created by ensuring that you only select data within a given range:
@ -59,7 +70,7 @@ filteredBirds = birds[(birds['MaxBodyMass'] > 1) & (birds['MaxBodyMass'] < 60)]
filteredBirds['MaxBodyMass'].plot(kind = 'hist',bins = 40,figsize = (12,12))
plt.show()
```
![filtered histogram](images/dist3.png)
![filtered histogram](images/dist3-wb.png)
✅ Try some other filters and data points. To see the full distribution of the data, remove the `['MaxBodyMass']` filter to show labeled distributions.
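As a starting point for that suggestion, here is a minimal sketch (it reuses the `filteredBirds` dataframe and the `plt` import from above; the `bins` value is arbitrary):

```python
# Without the ['MaxBodyMass'] selection, pandas plots every numeric
# column as its own labeled histogram series
filteredBirds.plot(kind='hist', bins=40, figsize=(12, 12), alpha=0.5)
plt.show()
```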
@ -76,7 +87,7 @@ hist = ax.hist2d(x, y)
```
There appears to be an expected correlation between these two elements along an expected axis, with one particularly strong point of convergence:
![2D plot](images/2D.png)
![2D plot](images/2D-wb.png)
Histograms work well by default for numeric data. What if you need to see distributions according to text data?
## Explore the dataset for distributions using text data
@ -115,7 +126,7 @@ plt.gca().set(title='Conservation Status', ylabel='Max Body Mass')
plt.legend();
```
![wingspan and conservation collation](images/histogram-conservation.png)
![wingspan and conservation collation](images/histogram-conservation-wb.png)
There doesn't seem to be a good correlation between minimum wingspan and conservation status. Test other elements of the dataset using this method. You can try different filters as well. Do you find any correlation?
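If you want a starting point for that experiment, here is a sketch of one variant (hypothetical; it assumes the same `filteredBirds` dataframe and swaps `MaxLength` in for the body-mass column):

```python
# Overlay a MaxLength histogram for each conservation status
for status, group in filteredBirds.groupby('ConservationStatus'):
    plt.hist(group['MaxLength'], bins=20, alpha=0.5, label=status)

plt.gca().set(title='Conservation Status', xlabel='Max Length', ylabel='Frequency')
plt.legend()
plt.show()
```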


@ -57,6 +57,12 @@ Take this data and convert the 'class' column to a category:
cols = mushrooms.select_dtypes(["object"]).columns
mushrooms[cols] = mushrooms[cols].astype('category')
```
```python
edibleclass=mushrooms.groupby(['class']).count()
edibleclass
```
Now, if you print out the mushrooms data, you can see that it has been grouped into categories according to the poisonous/edible class:
@ -78,7 +84,7 @@ plt.show()
```
Voila, a pie chart showing the proportions of this data according to these two classes of mushrooms. It's quite important to get the order of the labels correct, especially here, so be sure to verify the order with which the label array is built!
![pie chart](images/pie1.png)
![pie chart](images/pie1-wb.png)
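One quick way to check that ordering (a sketch; it assumes the `edibleclass` frame built above):

```python
# The labels list must match this index order
print(edibleclass.index.tolist())  # expected: ['Edible', 'Poisonous']
```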
## Donuts!
@ -108,7 +114,7 @@ plt.title('Mushroom Habitats')
plt.show()
```
![donut chart](images/donut.png)
![donut chart](images/donut-wb.png)
This code draws a chart and a center circle, then adds that center circle in the chart. Edit the width of the center circle by changing `0.40` to another value.

@ -1,18 +0,0 @@
# Try it in Excel
## Instructions
Did you know you can create donut, pie, and waffle charts in Excel? Using a dataset of your choice, create these three charts right in an Excel spreadsheet.
## Rubric
| Exemplary | Adequate | Needs Improvement |
| ------------------------------------------------------- | ------------------------------------------------- | ------------------------------------------------------ |
| An Excel spreadsheet is presented with all three charts | An Excel spreadsheet is presented with two charts | An Excel spreadsheet is presented with only one chart |


@ -1,227 +0,0 @@
# Visualizing Proportions
|![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev) ](../../sketchnotes/11-Visualizing-Proportions.png)|
|:---:|
|Visualizing Proportions - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |
In this lesson, you will use a different nature-focused dataset to visualize proportions, such as how many different types of fungi populate a given dataset about mushrooms. Let's explore these fascinating fungi using a dataset sourced from Audubon listing details about 23 species of gilled mushrooms in the Agaricus and Lepiota families. You will experiment with tasty visualizations such as:
- Pie charts 🥧
- Donut charts 🍩
- Waffle charts 🧇
> 💡 A very interesting project called [Charticulator](https://charticulator.com) by Microsoft Research offers a free drag and drop interface for data visualizations. In one of their tutorials they also use this mushroom dataset! So you can explore the data and learn the library at the same time: [Charticulator tutorial](https://charticulator.com/tutorials/tutorial4.html).
## [Pre-lecture quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/20)
## Get to know your mushrooms 🍄
Mushrooms are very interesting. Let's import a dataset to study them:
```python
import pandas as pd
import matplotlib.pyplot as plt
mushrooms = pd.read_csv('../../data/mushrooms.csv')
mushrooms.head()
```
A table is printed out with some great data for analysis:
| class | cap-shape | cap-surface | cap-color | bruises | odor | gill-attachment | gill-spacing | gill-size | gill-color | stalk-shape | stalk-root | stalk-surface-above-ring | stalk-surface-below-ring | stalk-color-above-ring | stalk-color-below-ring | veil-type | veil-color | ring-number | ring-type | spore-print-color | population | habitat |
| --------- | --------- | ----------- | --------- | ------- | ------- | --------------- | ------------ | --------- | ---------- | ----------- | ---------- | ------------------------ | ------------------------ | ---------------------- | ---------------------- | --------- | ---------- | ----------- | --------- | ----------------- | ---------- | ------- |
| Poisonous | Convex | Smooth | Brown | Bruises | Pungent | Free | Close | Narrow | Black | Enlarging | Equal | Smooth | Smooth | White | White | Partial | White | One | Pendant | Black | Scattered | Urban |
| Edible | Convex | Smooth | Yellow | Bruises | Almond | Free | Close | Broad | Black | Enlarging | Club | Smooth | Smooth | White | White | Partial | White | One | Pendant | Brown | Numerous | Grasses |
| Edible | Bell | Smooth | White | Bruises | Anise | Free | Close | Broad | Brown | Enlarging | Club | Smooth | Smooth | White | White | Partial | White | One | Pendant | Brown | Numerous | Meadows |
| Poisonous | Convex | Scaly | White | Bruises | Pungent | Free | Close | Narrow | Brown | Enlarging | Equal | Smooth | Smooth | White | White | Partial | White | One | Pendant | Black | Scattered | Urban |
Right away, you notice that all the data is textual. You will have to convert this data to be able to use it in a chart. Most of the data, in fact, is represented as an object:
```python
print(mushrooms.select_dtypes(["object"]).columns)
```
The output is:
```output
Index(['class', 'cap-shape', 'cap-surface', 'cap-color', 'bruises', 'odor',
'gill-attachment', 'gill-spacing', 'gill-size', 'gill-color',
'stalk-shape', 'stalk-root', 'stalk-surface-above-ring',
'stalk-surface-below-ring', 'stalk-color-above-ring',
'stalk-color-below-ring', 'veil-type', 'veil-color', 'ring-number',
'ring-type', 'spore-print-color', 'population', 'habitat'],
dtype='object')
```
Take this data and convert the 'class' column to a category:
```python
cols = mushrooms.select_dtypes(["object"]).columns
mushrooms[cols] = mushrooms[cols].astype('category')
```
Now, if you print out the mushrooms data, you can see that it has been grouped into categories according to the poisonous/edible class:
| | cap-shape | cap-surface | cap-color | bruises | odor | gill-attachment | gill-spacing | gill-size | gill-color | stalk-shape | ... | stalk-surface-below-ring | stalk-color-above-ring | stalk-color-below-ring | veil-type | veil-color | ring-number | ring-type | spore-print-color | population | habitat |
| --------- | --------- | ----------- | --------- | ------- | ---- | --------------- | ------------ | --------- | ---------- | ----------- | --- | ------------------------ | ---------------------- | ---------------------- | --------- | ---------- | ----------- | --------- | ----------------- | ---------- | ------- |
| class | | | | | | | | | | | | | | | | | | | | | |
| Edible | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | ... | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 | 4208 |
| Poisonous | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | ... | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 | 3916 |
If you follow the order presented in this table to create your class category labels, you can build a pie chart:
## Pie!
```python
# Group the data by class first - the pie chart needs the per-class counts
edibleclass = mushrooms.groupby(['class']).count()

labels = ['Edible', 'Poisonous']
plt.pie(edibleclass['population'], labels=labels, autopct='%.1f %%')
plt.title('Edible?')
plt.show()
```
Voila, a pie chart showing the proportions of this data according to these two classes of mushrooms. It's quite important to get the order of the labels correct, especially here, so be sure to verify the order with which the label array is built!
![pie chart](images/pie1.png)
## Donuts!
A somewhat more visually interesting pie chart is a donut chart, which is a pie chart with a hole in the middle. Let's look at our data using this method.
Take a look at the various habitats where mushrooms grow:
```python
habitat=mushrooms.groupby(['habitat']).count()
habitat
```
Here, you are grouping your data by habitat. There are 7 listed, so use those as labels for your donut chart:
```python
labels = ['Grasses', 'Leaves', 'Meadows', 'Paths', 'Urban', 'Waste', 'Wood']

plt.pie(habitat['class'], labels=labels,
        autopct='%1.1f%%', pctdistance=0.85)
center_circle = plt.Circle((0, 0), 0.40, fc='white')
fig = plt.gcf()
fig.gca().add_artist(center_circle)
plt.title('Mushroom Habitats')
plt.show()
```
![donut chart](images/donut.png)
This code draws a chart and a center circle, then adds that center circle in the chart. Edit the width of the center circle by changing `0.40` to another value.
Donut charts can be tweaked in several ways to change the labels. The labels in particular can be highlighted for readability. Learn more in the [docs](https://matplotlib.org/stable/gallery/pie_and_polar_charts/pie_and_donut_labels.html?highlight=donut).
Now that you know how to group your data and then display it as a pie or donut, you can explore other types of charts. Try a waffle chart, which is just a different way of exploring quantity.
## Waffles!
A 'waffle' type chart is a different way to visualize quantities as a 2D array of squares. Try visualizing the different quantities of mushroom cap colors in this dataset. To do this, you need to install a helper library called [PyWaffle](https://pypi.org/project/pywaffle/) and use Matplotlib:
```python
pip install pywaffle
```
Select a segment of your data to group:
```python
capcolor=mushrooms.groupby(['cap-color']).count()
capcolor
```
Create a waffle chart by creating labels and then grouping your data:
```python
import pandas as pd
import matplotlib.pyplot as plt
from pywaffle import Waffle

data = {'color': ['brown', 'buff', 'cinnamon', 'green', 'pink', 'purple', 'red', 'white', 'yellow'],
        'amount': capcolor['class']
        }
df = pd.DataFrame(data)

fig = plt.figure(
    FigureClass=Waffle,
    rows=100,
    values=df.amount,
    labels=list(df.color),
    figsize=(30, 30),
    colors=["brown", "tan", "maroon", "green", "pink", "purple", "red", "whitesmoke", "yellow"],
)
```
Using a waffle chart, you can plainly see the proportions of cap colors of this mushrooms dataset. Interestingly, there are many green-capped mushrooms!
![waffle chart](images/waffle.png)
✅ Pywaffle supports icons within the charts that use any icon available in [Font Awesome](https://fontawesome.com/). Do some experiments to create an even more interesting waffle chart using icons instead of squares.
In this lesson, you learned three ways to visualize proportions. First, you need to group your data into categories and then decide which is the best way to display the data - pie, donut, or waffle. All are delicious and gratify the user with an instant snapshot of a dataset.
## 🚀 Challenge
Try recreating these tasty charts in [Charticulator](https://charticulator.com).
## [Post-lecture quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/21)
## Review & Self Study
Sometimes it's not obvious when to use a pie, donut, or waffle chart. Here are some articles to read on this topic:
https://www.beautiful.ai/blog/battle-of-the-charts-pie-chart-vs-donut-chart
https://medium.com/@hypsypops/pie-chart-vs-donut-chart-showdown-in-the-ring-5d24fd86a9ce
https://www.mit.edu/~mbarker/formula1/f1help/11-ch-c6.htm
https://medium.datadriveninvestor.com/data-visualization-done-the-right-way-with-tableau-waffle-chart-fdf2a19be402
Do some research to find more information on this sticky decision.
## Assignment
[Try it in Excel](assignment.md)

@ -1,17 +0,0 @@
# Build your own custom vis
## Instructions
Use the code sample in this project that creates a social network, and mock up data of your own social interactions. You could map your usage of social media or make a diagram of your family members. Create an interesting web app that shows a unique visualization of a social network.
## Rubric
Exemplary | Adequate | Needs Improvement
--- | --- | -- |
A GitHub repo is presented with code that runs properly (try deploying it as a static web app) and has an annotated README explaining the project | The repo does not run properly or is not documented well | The repo does not run properly and is not documented well

@ -1,232 +0,0 @@
# Making Meaningful Visualizations
|![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev) ](../../sketchnotes/13-MeaningfulViz.png)|
|:---:|
| Meaningful Visualizations - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |
> "If you torture the data long enough, it will confess to anything" -- [Ronald Coase](https://en.wikiquote.org/wiki/Ronald_Coase)
> "데이터를 충분히 오래 고문하면, 그것은 무엇이든 자백할 것입니다." [Ronald Coase] (https://en.wikiquote.org/wiki/Ronald_Coase)
One of the basic skills of a data scientist is the ability to create a meaningful data visualization that helps answer questions you might have. Prior to visualizing your data, you need to ensure that it has been cleaned and prepared, as you did in prior lessons. After that, you can start deciding how best to present the data.
In this lesson, you will review:
1. How to choose the right chart type
2. How to avoid deceptive charting
3. How to work with color
4. How to style your charts for readability
5. How to build animated or 3D charting solutions
6. How to build a creative visualization
## [Pre-Lecture Quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/24)
## Choose the right chart type
In previous lessons, you experimented with building all kinds of interesting data visualizations using Matplotlib and Seaborn for charting. In general, you can select the [right kind of chart](https://chartio.com/learn/charts/how-to-select-a-data-vizualization/) for the question you are asking using this table:
| You need to: | You should use: |
| -------------------------- | ------------------------------- |
| Show data trends over time | Line |
| Compare categories | Bar, Pie |
| Compare totals | Pie, Stacked Bar |
| Show relationships | Scatter, Line, Facet, Dual Line |
| Show distributions | Scatter, Histogram, Box |
| Show proportions | Pie, Donut, Waffle |
> ✅ Depending on the makeup of your data, you might need to convert it from text to numeric to get a given chart to support it.
## Avoid deception
Even if a data scientist is careful to choose the right chart for the right data, there are plenty of ways that data can be displayed in a way to prove a point, often at the cost of undermining the data itself. There are many examples of deceptive charts and infographics!
[![How Charts Lie by Alberto Cairo](./images/tornado.png)](https://www.youtube.com/watch?v=oX74Nge8Wkw "How charts lie")
> 🎥 Click the image above for a conference talk about deceptive charts
This chart reverses the X axis to show the opposite of the truth, based on date:
![bad chart 1](images/bad-chart-1.png)
[This chart](https://media.firstcoastnews.com/assets/WTLV/images/170ae16f-4643-438f-b689-50d66ca6a8d8/170ae16f-4643-438f-b689-50d66ca6a8d8_1140x641.jpg) is even more deceptive, as the eye is drawn to the right to conclude that, over time, COVID cases have declined in the various counties. In fact, if you look closely at the dates, you find that they have been rearranged to give that deceptive downward trend.
![bad chart 2](images/bad-chart-2.jpg)
This notorious example uses color AND a flipped Y axis to deceive: instead of concluding that gun deaths spiked after the passage of gun-friendly legislation, in fact the eye is fooled to think that the opposite is true:
![bad chart 3](images/bad-chart-3.jpg)
This strange chart shows how proportion can be manipulated, to hilarious effect:
![bad chart 4](images/bad-chart-4.jpg)
Comparing the incomparable is yet another shady trick. There is a [wonderful web site](https://tylervigen.com/spurious-correlations) all about 'spurious correlations' displaying 'facts' correlating things like the divorce rate in Maine and the consumption of margarine. A Reddit group also collects the [ugly uses](https://www.reddit.com/r/dataisugly/top/?t=all) of data.
It's important to understand how easily the eye can be fooled by deceptive charts. Even if the data scientist's intention is good, the choice of a bad type of chart, such as a pie chart showing too many categories, can be deceptive.
## Color
You saw in the 'Florida gun violence' chart above how color can provide an additional layer of meaning to charts, especially ones not designed using libraries such as Matplotlib and Seaborn which come with various vetted color libraries and palettes. If you are making a chart by hand, do a little study of [color theory](https://colormatters.com/color-and-design/basic-color-theory)
> ✅ Be aware, when designing charts, that accessibility is an important aspect of visualization. Some of your users might be color blind - does your chart display well for users with visual impairments?
Be careful when choosing colors for your chart, as color can convey meaning you might not intend. The 'pink ladies' in the 'height' chart above convey a distinctly 'feminine' ascribed meaning that adds to the bizarreness of the chart itself.
[Color meaning](https://colormatters.com/color-symbolism/the-meanings-of-colors) might be different in different parts of the world, and colors tend to change in meaning according to their shade. Generally speaking, color meanings include:
| Color | Meaning |
| ------ | ------------------- |
| red | power |
| blue | trust, loyalty |
| yellow | happiness, caution |
| green | ecology, luck, envy |
| purple | happiness |
| orange | vibrance |
If you are tasked with building a chart with custom colors, ensure that your charts are both accessible and the color you choose coincides with the meaning you are trying to convey.
## Styling your charts for readability
Charts are not meaningful if they are not readable! Take a moment to consider styling the width and height of your chart to scale well with your data. If one variable (such as all 50 states) need to be displayed, show them vertically on the Y axis if possible so as to avoid a horizontally-scrolling chart.
Label your axes, provide a legend if necessary, and offer tooltips for better comprehension of data.
If your data is textual and verbose on the X axis, you can angle the text for better readability. [Matplotlib](https://matplotlib.org/stable/tutorials/toolkits/mplot3d.html) offers 3d plotting, if your data supports it. Sophisticated data visualizations can be produced using `mpl_toolkits.mplot3d`.
![3d plots](images/3d.png)
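For instance, here is a minimal 3D line plot sketch (the spiral data is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # provided by mpl_toolkits.mplot3d

# A simple parametric spiral as stand-in data
theta = np.linspace(0, 4 * np.pi, 200)
ax.plot(np.sin(theta), np.cos(theta), theta, label='spiral')
ax.legend()
plt.show()
```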
## Animation and 3D chart display
Some of the best data visualizations today are animated. Shirley Wu has amazing ones done with D3, such as '[film flowers](http://bl.ocks.org/sxywu/raw/d612c6c653fb8b4d7ff3d422be164a5d/)', where each flower is a visualization of a movie. Another example for the Guardian is 'bussed out', an interactive experience combining visualizations with Greensock and D3 plus a scrollytelling article format to show how NYC handles its homeless problem by bussing people out of the city.
![busing](images/busing.png)
> "Bussed Out: How America Moves its Homeless" from [the Guardian](https://www.theguardian.com/us-news/ng-interactive/2017/dec/20/bussed-out-america-moves-homeless-people-country-study). Visualizations by Nadieh Bremer & Shirley Wu
While this lesson is insufficient to go into depth to teach these powerful visualization libraries, try your hand at D3 in a Vue.js app using a library to display a visualization of the book "Dangerous Liaisons" as an animated social network.
> "Les Liaisons Dangereuses" is an epistolary novel, or a novel presented as a series of letters. Written in 1782 by Choderlos de Laclos, it tells the story of the vicious, morally-bankrupt social maneuvers of two dueling protagonists of the French aristocracy in the late 18th century, the Vicomte de Valmont and the Marquise de Merteuil. Both meet their demise in the end but not without inflicting a great deal of social damage. The novel unfolds as a series of letters written to various people in their circles, plotting for revenge or simply to make trouble. Create a visualization of these letters to discover the major kingpins of the narrative, visually.
> "Les Liaisons Dangereuses"는 편지 소설 또는 일련의 편지로 표현된 소설이다. 1782년 Choderlos de Laclos에 의해 쓰여진 이 책은 18세기 후반 프랑스 귀족의 결투적인 두 주인공인 Vicomte de Valmont와 Marquise de Merteuil의 잔인하고 도덕적으로 타락한 사회적 책략에 대한 이야기를 들려준다. 둘 다 결국 그들의 죽음을 맞이하지만 큰 사회적 피해를 입히지 않고는 아니다. 이 소설은 복수의 음모를 꾸미거나 단순히 문제를 일으키기 위해 서클에 있는 다양한 사람들에게 쓴 일련의 편지들로 전개된다. 이 글자들을 시각화해서 이야기의 주요 킹핀을 시각적으로 발견하세요.
You will complete a web app that will display an animated view of this social network. It uses a library that was built to create a [visual of a network](https://github.com/emiliorizzo/vue-d3-network) using Vue.js and D3. When the app is running, you can pull the nodes around on the screen to shuffle the data around.
![liaisons](images/liaisons.png)
## Project: Build a chart to show a network using D3.js
> This lesson folder includes a `solution` folder where you can find the completed project, for your reference.
1. Follow the instructions in the README.md file in the starter folder's root. Make sure you have NPM and Node.js running on your machine before installing your project's dependencies.
2. Open the `starter/src` folder. You'll discover an `assets` folder where you can find a .json file with all the letters from the novel, numbered, with a 'to' and 'from' annotation.
3. Complete the code in `components/Nodes.vue` to enable the visualization. Look for the method called `createLinks()` and add the following nested loop.
Loop through the .json object to capture the 'to' and 'from' data for the letters and build up the `links` object so that the visualization library can consume it:
```javascript
// loop through letters
let f = 0;
let t = 0;
for (var i = 0; i < letters.length; i++) {
  for (var j = 0; j < characters.length; j++) {
    if (characters[j] == letters[i].from) {
      f = j;
    }
    if (characters[j] == letters[i].to) {
      t = j;
    }
  }
  this.links.push({ sid: f, tid: t });
}
```
Run your app from the terminal (`npm run serve`) and enjoy the visualization!
## 🚀 Challenge
Take a tour of the internet to discover deceptive visualizations. How does the author fool the user, and is it intentional? Try correcting the visualizations to show how they should look.
## [Post-lecture quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/25)
## Review & Self Study
Here are some articles to read about deceptive data visualization:
https://gizmodo.com/how-to-lie-with-data-visualization-1563576606
http://ixd.prattsi.org/2017/12/visual-lies-usability-in-deceptive-data-visualizations/
Take a look at these interesting visualizations of historical assets and artifacts:
https://handbook.pubpub.org/
Look through this article on how animation can enhance your visualizations:
https://medium.com/@EvanSinar/use-animation-to-supercharge-data-visualization-cd905a882ad4
## Assignment
[Build your own custom visualization](assignment.md)

@ -28,7 +28,7 @@ Defining the goals of the project will require deeper context into the problem o
Questions a data scientist may ask:
- Has this problem been approached before? What was discovered?
- Is the purpose and goal understood by all involved?
- Is there ambiguity, and how can it be reduced?
- What are the constraints?
- What will the end result potentially look like?
- What resources (time, people, computational) are available?
@ -62,16 +62,18 @@ Considerations of how and where the data is stored can influence the cost of its
Here are some aspects of modern data storage systems that can affect these choices:
**On premise vs off premise vs public or private cloud**
On premise refers to hosting and managing the data on your own equipment, like owning a server with hard drives that store the data, while off premise relies on equipment that you don't own, such as a data center. The public cloud is a popular choice for storing data that requires no knowledge of how or where exactly the data is stored, where public refers to a unified underlying infrastructure that is shared by all who use the cloud. Some organizations have strict security policies that require complete access to the equipment where the data is hosted, and will rely on a private cloud that provides its own cloud services. You'll learn more about data in the cloud in [later lessons](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/5-Data-Science-In-Cloud).
**Cold vs hot data**
When training your models, you may require more training data, and if you're content with your model, more data will keep arriving for the model to serve its purpose. In any case, the cost of storing and accessing data will increase as you accumulate more of it. Separating rarely used data, known as cold data, from frequently accessed hot data can be a cheaper storage option through hardware or software services. If cold data needs to be accessed, it may take a little longer to retrieve than hot data.
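As an illustration only, the sketch below separates cold records from hot ones by last access date; the 90-day threshold, the file names, and the `last_accessed` column are hypothetical, not part of any particular storage service.
```python
# A hypothetical hot/cold split: records untouched for 90+ days go to a
# cheaper "cold" file, the rest stay in the fast "hot" file.
from datetime import datetime, timedelta
import pandas as pd

records = pd.read_csv('access_log.csv', parse_dates=['last_accessed'])
cutoff = datetime.now() - timedelta(days=90)

hot = records[records['last_accessed'] >= cutoff]
cold = records[records['last_accessed'] < cutoff]

hot.to_csv('records_hot.csv', index=False)    # fast, more expensive tier
cold.to_csv('records_cold.csv', index=False)  # slower, cheaper tier
```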
### Managing Data
As you work with data you may discover that some of the data needs to be cleaned using some of the techniques covered in the lesson focused on [data preparation](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/2-Working-With-Data/08-data-preparation) to build accurate models. When new data arrives, it will need some of the same applications to maintain consistency in quality. Some projects will involve use of an automated tool for cleansing, aggregation, and compression before the data is moved to its final location. Azure Data Factory is an example of one of these tools.
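As an illustration of the kind of repeatable cleaning step this describes, here is a minimal pandas sketch; the file name and column names are hypothetical, and a production pipeline would typically hand this work to a tool like the one named above.
```python
# A minimal sketch of a cleaning step applied to each arriving batch.
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                    # remove repeated rows
    df = df.dropna(subset=['id'])                # drop rows missing a key field
    df['category'] = df['category'].str.lower()  # standardize casing
    return df

new_batch = clean(pd.read_csv('incoming.csv'))
```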
### Securing the Data
One of the main goals of securing data is ensuring that those working with it are in control of what is collected and in what context it is being used. Keeping data secure involves limiting access to only those who need it, adhering to local laws and regulations, as well as maintaining ethical standards, as covered in the [ethics lesson](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/1-Introduction/02-ethics).
Here are some things that a team may do with security in mind:
- Confirm that all data is encrypted

@ -0,0 +1,23 @@
# Assessing a Dataset
A client has approached your team for help in investigating taxi customers' seasonal spending habits in New York City.
They want to know: **Do yellow taxi passengers in New York City tip drivers more in the winter or summer?**
Your team is in the [Capturing](Readme.md#Capturing) stage of the Data Science Lifecycle and you are in charge of handling the dataset. You have been provided a notebook and [data](../../data/taxi.csv) to explore.
In this directory is a [notebook](notebook.ipynb) that uses Python to load yellow taxi trip data from the [NYC Taxi & Limousine Commission](https://docs.microsoft.com/en-us/azure/open-datasets/dataset-taxi-yellow?tabs=azureml-opendatasets).
You can also open the taxi data file in a text editor or in spreadsheet software like Excel.
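A minimal first look might resemble the sketch below, assuming pandas is installed and the code runs from this directory; the specific checks are illustrative, not part of the provided notebook.
```python
# A first pass at assessing whether the data can answer the question.
import pandas as pd

df = pd.read_csv('../../data/taxi.csv')

print(df.shape)    # how many trips and columns we have
print(df.columns)  # do we have pickup dates and tip amounts?
print(df.head())   # eyeball a few rows for obvious quality issues
```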
## Instructions
- Assess whether or not the data in this dataset can help answer the question.
- Explore the [NYC Open Data catalog](https://data.cityofnewyork.us/browse?sortBy=most_accessed&utf8=%E2%9C%93). Identify an additional dataset that could potentially be helpful in answering the client's question.
- Write 3 questions that you would ask the client for more clarification and better understanding of the problem.
Refer to the [dataset's dictionary](https://www1.nyc.gov/assets/tlc/downloads/pdf/data_dictionary_trip_records_yellow.pdf) and [user guide](https://www1.nyc.gov/assets/tlc/downloads/pdf/trip_record_user_guide.pdf) for more information about the data.
## Rubric
Exemplary | Adequate | Needs Improvement
--- | --- | ---

@ -1,211 +0,0 @@
# Introduction to the Data Science Lifecycle
|![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev) ](../../../sketchnotes/14-DataScience-Lifecycle.png)|
|:---:|
| Introduction to the Data Science Lifecycle - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |
## [Pre-Lecture Quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/26)
At this point you have probably realized that data science is a process. This process can be broken down into 5 stages:
- Capturing
- Processing
- Analysis
- Communication
- Maintenance
This lesson focuses on three parts of the lifecycle: capturing, processing, and maintenance.
![Diagram of the data science lifecycle](../images/data-science-lifecycle.jpg)
> Image from the [Berkeley School of Information](https://ischoolonline.berkeley.edu/data-science/what-is-data-science/)
## Capturing
The first stage of the lifecycle is very important because the following stages depend on it. It is practically two stages combined into one: acquiring the data, and defining the purpose and the problems that need to be addressed.
Defining the goals of the project requires deeper context into the problem or question. First, we need to identify and acquire those who need their problem solved. These may be stakeholders in a business or sponsors of the project, who can help identify who will benefit from this project, as well as what they need and why. A well-defined goal should be measurable and quantifiable to define an acceptable result.
Questions a data scientist may ask:
- Has this problem been approached before? What was discovered?
- Is the purpose and goal understood by all involved?
- Is there ambiguity, and how can it be reduced?
- What are the constraints?
- What will the end result potentially look like?
- What resources (time, people, computational) are available?
Next comes identifying, collecting, and finally exploring the data needed to achieve these defined goals. At this step of acquisition, data scientists must also evaluate the quantity and quality of the data. This requires some data exploration to confirm that what has been acquired will support reaching the desired result.
Questions a data scientist may ask about the data:
- What data is already available to me?
- Who owns this data?
- What are the privacy concerns?
- Do I have enough data to solve this problem?
- Is the data of acceptable quality for this problem?
- If I discover additional information through this data, should we consider changing or redefining the goals?
## Processing
The processing stage of the lifecycle focuses on discovering patterns in the data as well as modeling. Some techniques used in this stage require statistical methods to uncover the patterns. Typically, this would be a tedious task for a human to do with a large data set, so computers are relied upon to do the heavy lifting and speed up the process. This stage is also where data science and machine learning intersect. As you learned in the first lesson, machine learning is the process of building models to understand the data. Models are a representation of the relationships between variables in the data and help predict outcomes.
Common techniques used in this stage are covered in the ML for Beginners curriculum. Follow the links to learn more about them:
- [Classification](https://github.com/microsoft/ML-For-Beginners/tree/main/4-Classification): Organizing data into categories for more efficient use.
- [Clustering](https://github.com/microsoft/ML-For-Beginners/tree/main/5-Clustering): Grouping data into similar groups.
- [Regression](https://github.com/microsoft/ML-For-Beginners/tree/main/2-Regression): Determining the relationships between variables to predict or forecast values.
## Maintaining
In the lifecycle diagram, you may have noticed that maintenance sits between capturing and processing. Maintenance is an ongoing process of managing, storing, and securing the data throughout the course of the project, and it must be taken into consideration for the project's entirety.
### Storing Data
Considerations of how and where the data is stored can influence the cost of its storage as well as how fast the data can be accessed. Decisions like these are not likely to be made by a data scientist alone, but the data scientist may still find themselves making choices about how to work with the data based on how it is stored.
Here are some aspects of modern data storage systems that can affect these choices:
**On premise vs off premise vs public or private cloud**
On premise refers to hosting and managing the data on your own equipment, like owning a server with hard drives that store the data, while off premise relies on equipment that you don't own, such as a data center. The public cloud is a popular choice for storing data that requires no knowledge of how or where exactly the data is stored, where public refers to a unified underlying infrastructure that is shared by all who use the cloud. Some organizations have strict security policies that require complete access to the equipment where the data is hosted, and will rely on a private cloud that provides its own cloud services. You'll learn more about data in the cloud in [later lessons](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/5-Data-Science-In-Cloud).
**Cold vs hot data**
When training your models, you may require more training data, and if you're content with your model, more data will keep arriving for the model to serve its purpose. In any case, the cost of storing and accessing data will increase as you accumulate more of it. Separating rarely used data, known as cold data, from frequently accessed hot data can be a cheaper storage option through hardware or software services. If cold data needs to be accessed, it may take a little longer to retrieve than hot data.
### Managing Data
As you work with data you may discover that some of the data needs to be cleaned, using some of the techniques covered in the lesson focused on [data preparation](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/2-Working-With-Data/08-data-preparation), to build accurate models. When new data arrives, it will need some of the same applications to maintain consistency in quality. Some projects will involve use of an automated tool for cleansing, aggregation, and compression before the data is moved to its final location. Azure Data Factory is an example of one of these tools.
### Securing the Data
One of the main goals of securing data is ensuring that those working with it are in control of what is collected and in what context it is being used. Keeping data secure involves limiting access to only those who need it, adhering to local laws and regulations, as well as maintaining ethical standards, as covered in the [ethics lesson](https://github.com/microsoft/Data-Science-For-Beginners/tree/main/1-Introduction/02-ethics).
Here are some things that a team may do with security in mind:
- Confirm that all data is encrypted
- Provide customers with information on how their data is used
- Remove data access from those who have left the project
- Let only certain project members alter the data
## 🚀 Challenge
There are many versions of the data science lifecycle. Each step may have different names and a different number of stages, but they still contain the same processes mentioned in this lesson.
Explore the [Team Data Science Process lifecycle](https://docs.microsoft.com/en-us/azure/architecture/data-science-process/lifecycle) and the [Cross-industry standard process for data mining](https://www.datascience-pm.com/crisp-dm-2/). Name 3 similarities and differences between the two.
|Team Data Science Process (TDSP)|Cross-industry standard process for data mining (CRISP-DM)|
|--|--|
|![Team Data Science Lifecycle](../images/tdsp-lifecycle2.png) | ![Data Science Process Alliance Image](../images/CRISP-DM.png) |
| Image by [Microsoft](https://docs.microsoft.com/azure/architecture/data-science-process/lifecycle) | Image by [Data Science Process Alliance](https://www.datascience-pm.com/crisp-dm-2/) |
## [Post-Lecture Quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/27)
## Review & Self Study
Applying the data science lifecycle involves multiple roles and tasks, where some may focus on a particular part of each stage. The Team Data Science Process provides a few resources that explain the types of roles and tasks that someone may have in a project.
* [Team Data Science Process roles and tasks](https://docs.microsoft.com/en-us/azure/architecture/data-science-process/roles-tasks)
* [Execute data science tasks: exploration, modeling, and deployment](https://docs.microsoft.com/en-us/azure/architecture/data-science-process/execute-data-science-tasks)
## Assignment
[Assessing a Dataset](assignment.md)

@ -0,0 +1,223 @@
# The Data Science Lifecycle: Communication
|![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev)](../../sketchnotes/16-Communicating.png)|
|:---:|
| Data Science Lifecycle: Communication - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |
## [Pre-Lecture Quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/30)
Test your knowledge of what's to come with the Pre-Lecture Quiz above!
# Introduction
### What is Communication?
Let's start this lesson by defining what it means to communicate. **To communicate is to convey or exchange information.** Information can be ideas, thoughts, feelings, messages, covert signals, or data: anything that a **_sender_** (someone sending information) wants a **_receiver_** (someone receiving information) to understand. In this lesson, we will refer to senders as communicators, and receivers as the audience.
### Data Communication & Storytelling
We understand that when communicating, the aim is to convey or exchange information. But when communicating data, your aim shouldn't be to simply pass along numbers to your audience. Your aim should be to communicate a story that is informed by your data - effective data communication and storytelling go hand-in-hand. Your audience is more likely to remember a story you tell, than a number you give. Later in this lesson, we will go over a few ways that you can use storytelling to communicate your data more effectively.
### Types of Communication
Throughout this lesson two different types of communication will be discussed: one-way communication and two-way communication.
**One-way communication** happens when a sender sends information to a receiver, without any feedback or response. We see examples of one-way communication every day in bulk/mass emails, when the news delivers the most recent stories, or even when a television commercial comes on and informs you about why their product is great. In each of these instances, the sender is not seeking an exchange of information. They are only seeking to convey or deliver information.
**Two-way communication** happens when all involved parties act as both senders and receivers. A sender will begin by communicating to a receiver, and the receiver will provide feedback or a response. Two-way communication is what we traditionally think of when we talk about communication. We usually think of people engaged in a conversation - either in person, or over a phone call, social media, or text message.
When communicating data, there will be cases where you will be using one-way communication (think about presenting at a conference, or to a large group where questions won't be asked directly after) and there will be cases where you will use two-way communication (think about using data to persuade a few stakeholders for buy-in, or to convince a teammate that time and effort should be spent building something new).
# Effective Communication
### Your Responsibilities as a communicator
When communicating, it is your job to make sure that your receiver(s) take away the information that you want them to take away. When you're communicating data, you don't just want your receivers to take away numbers, you want them to take away a story that's informed by your data. A good data communicator is a good storyteller.
How do you tell a story with data? There are infinite ways, but below are five that we will talk about in this lesson.
1. Understand Your Audience, Your Medium, & Your Communication Method
2. Begin with the End in Mind
3. Approach it Like an Actual Story
4. Use Meaningful Words & Phrases
5. Use Emotion
Each of these strategies is explained in greater detail below.
### 1. Understand Your Audience, Your Channel & Your Communication Method
The way you communicate with family members is likely different than the way you communicate with your friends. You probably use different words and phrases that the people you're speaking to are more likely to understand. You should take the same approach when communicating data. Think about who you're communicating to. Think about their goals and the context that they have around the situation that you're explaining to them.
You can likely group the majority of your audience within a category. In a _Harvard Business Review_ article, "[How to Tell a Story with Data](http://blogs.hbr.org/2013/04/how-to-tell-a-story-with-data/)," Dell Executive Strategist Jim Stikeleather identifies five categories of audiences.
- **Novice**: first exposure to the subject, but doesn't want oversimplification
- **Generalist**: aware of the topic, but looking for an overview understanding and major themes
- **Managerial**: in-depth, actionable understanding of intricacies and interrelationships with access to detail
- **Expert**: more exploration and discovery and less storytelling with great detail
- **Executive**: only has time to glean the significance and conclusions of weighted probabilities
These categories can inform the way you present data to your audience.
In addition to thinking about your audience's category, you should also consider the channel you're using to communicate with your audience. Your approach should be slightly different if you're writing a memo or email vs having a meeting or presenting at a conference.
On top of understanding your audience, knowing how you will be communicating with them (using one-way communication or two-way) is also critical.
If you are communicating with a majority Novice audience and you're using one-way communication, you must first educate the audience and give them proper context. Then you must present your data to them and tell them what your data means and why your data matters. In this instance, you may want to be laser focused on driving clarity, because your audience will not be able to ask you any direct questions.
If you are communicating with a majority Managerial audience and you're using two-way communication, you likely won't need to educate your audience or provide them with much context. You may be able to jump straight into discussing the data that you've collected and why it matters. In this scenario though, you should be focused on timing and controlling your presentation. When using two-way communication (especially with a Managerial audience who is seeking an "actionable understanding of intricacies and interrelationships with access to detail") questions may pop up during your interaction that may take the discussion in a direction that doesn't relate to the story that you're trying to tell. When this happens, you can take action and move the discussion back on track with your story.
### 2. Begin With The End In Mind
Beginning with the end in mind means understanding your intended takeaways for your audience before you start communicating with them. Being thoughtful ahead of time about what you want your audience to take away can help you craft a story that your audience can follow. Beginning with the end in mind is appropriate for both one-way communication and two-way communication.
How do you begin with the end in mind? Before communicating your data, write down your key takeaways. Then, every step of the way as you're preparing the story that you want to tell with your data, ask yourself, "How does this integrate into the story I'm telling?"
Be aware: while starting with the end in mind is ideal, you don't want to communicate only the data that supports your intended takeaways. Doing this is called cherry-picking, which happens when a communicator only communicates data that supports the point they are trying to make and ignores all other data.
If all the data that you collected clearly supports your intended takeaways, great. But if there is data that you collected that doesn't support your takeaways, or even supports an argument against your key takeaways, you should communicate that data as well. If this happens, be upfront with your audience and let them know why you're choosing to stick with your story even though all the data doesn't necessarily support it.
### 3. Approach it Like an Actual Story
A traditional story happens in 5 phases. You may have heard these phases expressed as Exposition, Rising Action, Climax, Falling Action, and Denouement. Or the easier to remember: Context, Conflict, Climax, Closure, Conclusion. When communicating your data and your story, you can take a similar approach.
You can begin with context, set the stage and make sure your audience is all on the same page. Then introduce the conflict. Why did you need to collect this data? What problems were you seeking to solve? After that, the climax. What is the data? What does the data mean? What solutions does the data tell us we need? Then you get to the closure, where you can reiterate the problem, and the proposed solution(s). Lastly, we come to the conclusion, where you can summarize your key takeaways and the next steps you recommend the team takes.
### 4. Use Meaningful Words & Phrases
If you and I were working together on a product, and I said to you "Our users take a long time to onboard onto our platform," how long would you estimate that "long time" to be? An hour? A week? It's hard to know. What if I said that to an entire audience? Everyone in the audience may end up with a different idea of how long users take to onboard onto our platform.
Instead, what if I said "Our users take, on average, 3 minutes to sign up and onboard onto our platform."
That messaging is more clear. When communicating data, it can be easy to think that everyone in your audience is thinking just like you. But that is not always the case. Driving clarity around your data and what it means is one of your responsibilities as a communicator. If the data or your story is not clear, your audience will have a hard time following, and it is less likely that they will understand your key takeaways.
You can communicate data more clearly when you use meaningful words and phrases, instead of vague ones. Below are a few examples.
- We had an *impressive* year!
- One person could think impressive means a 2% - 3% increase in revenue, and one person could think it means a 50% - 60% increase.
- Our users' success rates increased *dramatically*.
- How large of an increase is a dramatic increase?
- This undertaking will require *significant* effort.
- How much effort is significant?
Using vague words could be useful as an introduction to more data that's coming, or as a summary of the story that you've just told. But consider ensuring that every part of your presentation is clear for your audience.
### 5. Use Emotion
Emotion is key in storytelling. It's even more important when you're telling a story with data. When you're communicating data, everything is focused on the takeaways you want your audience to have. When you evoke an emotion for an audience it helps them empathize, and makes them more likely to take action. Emotion also increases the likelihood that an audience will remember your message.
You may have encountered this before with TV commercials. Some commercials are very somber, and use a sad emotion to connect with their audience and make the data that they're presenting really stand out. Others are very upbeat and happy, and may make you associate their data with a happy feeling.
How do you use emotion when communicating data? Below are a couple of ways.
- Use Testimonials and Personal Stories
- When collecting data, try to collect both quantitative and qualitative data, and integrate both types of data when you're communicating. If your data is primarily quantitative, seek stories from individuals to learn more about their experience with whatever your data is telling you.
- Use Imagery
  - Images help an audience see themselves in a situation. When you use images, you can push an audience toward the emotion that you feel they should have about your data.
- Use Color
  - Different colors evoke different emotions. Popular colors and the emotions they evoke are below. Be aware that colors can have different meanings in different cultures.
    - Blue usually evokes emotions of peace and trust
    - Green is usually related to nature and the environment
    - Red usually evokes passion and excitement
    - Yellow usually evokes optimism and happiness
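Color choices can be rehearsed quickly in code. Below is a minimal sketch, assuming matplotlib is available; the metric, the numbers, and the hex value (matplotlib's default blue) are illustrative only.
```python
# A small sketch: choosing a chart color deliberately.
import matplotlib.pyplot as plt

quarters = ['Q1', 'Q2', 'Q3', 'Q4']
success_rate = [72, 75, 81, 90]

# Blue tends to read as calm and trustworthy, a sensible default for
# neutral reporting; swap in a red tone to signal urgency instead.
plt.bar(quarters, success_rate, color='#1f77b4')
plt.ylabel('Success rate (%)')
plt.title('User success rates by quarter')
plt.show()
```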
# Communication Case Study
Emerson is a Product Manager for a mobile app. Emerson has noticed that customers submit 42% more complaints and bug reports on the weekends. Emerson also noticed that customers who submit a complaint that goes unanswered for 48 hours are 32% more likely to give the app a rating of 1 or 2 in the app store.
After doing research, Emerson has a couple of solutions that will address the issue. Emerson sets up a 30-minute meeting with the 3 company leads to communicate the data and the proposed solutions.
During this meeting, Emerson's goal is to have the company leads understand that the 2 solutions below can improve the app's rating, which will likely translate into higher revenue.
**Solution 1.** Hire customer service reps to work on weekends
**Solution 2.** Purchase a new customer service ticketing system where customer service reps can easily identify which complaints have been in the queue the longest so they can tell which to address most immediately.
In the meeting, Emerson spends 5 minutes explaining why having a low rating on the app store is bad, 10 minutes explaining the research process and how the trends were identified, 10 minutes going through some of the recent customer complaints, and the last 5 minutes glossing over the 2 potential solutions.
Was this an effective way for Emerson to communicate during this meeting?
During the meeting, one company lead fixated on the 10 minutes of customer complaints that Emerson went through. After the meeting, these complaints were the only thing that this team lead remembered. Another company lead primarily focused on Emerson describing the research process. The third company lead did remember the solutions proposed by Emerson but wasn't sure how those solutions could be implemented.
In the situation above, you can see that there was a significant gap between what Emerson wanted the team leads to take away, and what they ended up taking away from the meeting. Below is another approach that Emerson could consider.
How could Emerson improve this approach?
Context, Conflict, Climax, Closure, Conclusion
**Context** - Emerson could spend the first 5 minutes introducing the entire situation and making sure that the team leads understand how the problems affect metrics that are critical to the company, like revenue.
It could be laid out this way: "Currently, our app's rating in the app store is a 2.5. Ratings in the app store are critical to App Store Optimization, which impacts how many users see our app in search and how our app is viewed by prospective users. And, of course, the number of users we have is tied directly to revenue."
**Conflict** Emerson could then move on to the conflict for the next 5 minutes or so.
It could go like this: "Users submit 42% more complaints and bug reports on the weekends. Customers who submit a complaint that goes unanswered for 48 hours are 32% less likely to give our app a rating over a 2 in the app store. Improving our app's rating in the app store to a 4 would improve our visibility by 20-30%, which I project would increase revenue by 10%." Of course, Emerson should be prepared to justify these numbers.
**Climax** After laying the groundwork, Emerson could then move to the Climax for 5 or so minutes.
Emerson could introduce the proposed solutions, lay out how those solutions will address the issues outlined, how those solutions could be implemented into existing workflows, how much the solutions cost, what the ROI of the solutions would be, and maybe even show some screenshots or wireframes of how the solutions would look if implemented. Emerson could also share testimonials from users who took over 48 hours to have their complaint addressed, and even a testimonial from a current customer service representative within the company who has comments on the current ticketing system.
**Closure** Now Emerson can spend 5 minutes restating the problems faced by the company, revisit the proposed solutions, and review why those solutions are the right ones.
**Conclusion** Because this is a meeting with a few stakeholders where two-way communication will be used, Emerson could then plan to leave 10 minutes for questions, to make sure that anything that was confusing to the team leads could be clarified before the meeting is over.
If Emerson took approach #2, it is much more likely that the team leads will take away from the meeting exactly what Emerson intended for them to take away: that the way complaints and bugs are handled could be improved, and that there are 2 solutions that could be put in place to make that improvement happen. This approach would be a much more effective way to communicate the data, and the story, that Emerson wants to communicate.
# Conclusion
### Summary of main points
- To communicate is to convey or exchange information.
- When communicating data, your aim shouldn't be to simply pass along numbers to your audience. Your aim should be to communicate a story that is informed by your data.
- There are 2 types of communication, One-Way Communication (information is communicated with no intention of a response) and Two-Way Communication (information is communicated back and forth).
- There are many strategies you can use to tell a story with your data; the 5 strategies we went over are:
- Understand Your Audience, Your Medium, & Your Communication Method
- Begin with the End in Mind
- Approach it Like an Actual Story
- Use Meaningful Words & Phrases
- Use Emotion
### Recommended Resources for Self Study
[The Five C's of Storytelling - Articulate Persuasion](http://articulatepersuasion.com/the-five-cs-of-storytelling/)
[1.4 Your Responsibilities as a Communicator Business Communication for Success (umn.edu)](https://open.lib.umn.edu/businesscommunication/chapter/1-4-your-responsibilities-as-a-communicator/)
[How to Tell a Story with Data (hbr.org)](https://hbr.org/2013/04/how-to-tell-a-story-with-data)
[Two-Way Communication: 4 Tips for a More Engaged Workplace (yourthoughtpartner.com)](https://www.yourthoughtpartner.com/blog/bid/59576/4-steps-to-increase-employee-engagement-through-two-way-communication)
[6 succinct steps to great data storytelling - BarnRaisers, LLC (barnraisersllc.com)](https://barnraisersllc.com/2021/05/02/6-succinct-steps-to-great-data-storytelling/)
[How to Tell a Story With Data | Lucidchart Blog](https://www.lucidchart.com/blog/how-to-tell-a-story-with-data)
[6 Cs of Effective Storytelling on Social Media | Cooler Insights](https://coolerinsights.com/2018/06/effective-storytelling-social-media/)
[The Importance of Emotions In Presentations | Ethos3 - A Presentation Training and Design Agency](https://ethos3.com/2015/02/the-importance-of-emotions-in-presentations/)
[Data storytelling: linking emotions and rational decisions (toucantoco.com)](https://www.toucantoco.com/en/blog/data-storytelling-dataviz)
[Emotional Advertising: How Brands Use Feelings to Get People to Buy (hubspot.com)](https://blog.hubspot.com/marketing/emotions-in-advertising-examples)
[Choosing Colors for Your Presentation Slides | Think Outside The Slide](https://www.thinkoutsidetheslide.com/choosing-colors-for-your-presentation-slides/)
[How To Present Data [10 Expert Tips] | ObservePoint](https://resources.observepoint.com/blog/10-tips-for-presenting-data)
[Microsoft Word - Persuasive Instructions.doc (tpsnva.org)](https://www.tpsnva.org/teach/lq/016/persinstr.pdf)
[The Power of Story for Your Data (thinkhdi.com)](https://www.thinkhdi.com/library/supportworld/2019/power-story-your-data.aspx)
[Common Mistakes in Data Presentation (perceptualedge.com)](https://www.perceptualedge.com/articles/ie/data_presentation.pdf)
[Infographic: Here are 15 Common Data Fallacies to Avoid (visualcapitalist.com)](https://www.visualcapitalist.com/here-are-15-common-data-fallacies-to-avoid/)
[Cherry Picking: When People Ignore Evidence that They Dislike Effectiviology](https://effectiviology.com/cherry-picking/#How_to_avoid_cherry_picking)
[Tell Stories with Data: Communication in Data Science | by Sonali Verghese | Towards Data Science](https://towardsdatascience.com/tell-stories-with-data-communication-in-data-science-5266f7671d7)
[1. Communicating Data - Communicating Data with Tableau [Book] (oreilly.com)](https://www.oreilly.com/library/view/communicating-data-with/9781449372019/ch01.html)
## [Post-Lecture Quiz](https://red-water-0103e7a0f.azurestaticapps.net/quiz/31)
Review what you've just learned with the Post-Lecture Quiz above!
## Assignment
[Market Research](assignment.md)

@ -1,6 +1,6 @@
# The Data Science Lifecycle: Communication
|![ Sketchnote by [(@sketchthedocs)](https://sketchthedocs.dev)](../../sketchnotes/16-Communicating.png)|
|:---:|
| The Data Science Lifecycle: Communication - _Sketchnote by [@nitya](https://twitter.com/nitya)_ |

@ -50,7 +50,7 @@ Azure ML provides all the tools developers and data scientists need for their ma
### 1.2 The Heart Failure Prediction Project:
There is no doubt that making and building projects is the best way to put your skills and knowledge to the test. In this lesson, we are going to explore two different ways of building a data science project for the prediction of heart failure attacks in Azure ML Studio, through Low code/No code and through the Azure ML SDK as shown in the following schema:
![project-schema](images/project-schema.PNG)

@ -0,0 +1 @@
<!--add translations to this folder-->

@ -76,7 +76,7 @@ Let's create a compute instance to provision a jupyter notebook.
Congratulations, you have just created a compute instance! We will use this compute instance to create a Notebook in the [Creating Notebooks section](#23-creating-notebooks).
### 2.3 Loading the Dataset
Refer to the [previous lesson](../18-Low-Code/README.md), section **2.3 Loading the Dataset**, if you have not uploaded the dataset yet.
### 2.4 Creating Notebooks
@ -97,7 +97,7 @@ Now that we have a Notebook, we can start training the model with Azure ML SDK.
### 2.5 Training a model
First of all, if you are ever in doubt, refer to the [Azure ML SDK documentation](https://docs.microsoft.com/python/api/overview/azure/ml?WT.mc_id=academic-40229-cxa&ocid=AID3041109). It contains all the necessary information to understand the modules we are going to see in this lesson.
#### 2.5.1 Setup Workspace, experiment, compute cluster and dataset
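As a rough sketch of what this setup step typically looks like with the `azureml-core` (SDK v1) package; the workspace config file, experiment name, dataset name, and cluster settings below are hypothetical placeholders, not values prescribed by this lesson.
```python
# Connect to the workspace and set up an experiment, dataset, and cluster.
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # reads config.json downloaded from the portal
experiment = Experiment(workspace=ws, name='heart-failure-experiment')

# Retrieve a dataset previously registered in the workspace.
dataset = Dataset.get_by_name(ws, name='heart-failure-records')

# Provision a small CPU cluster for training (errors if the name exists).
compute_config = AmlCompute.provisioning_configuration(
    vm_size='STANDARD_DS3_V2', max_nodes=4)
compute_target = ComputeTarget.create(ws, 'cpu-cluster', compute_config)
compute_target.wait_for_completion(show_output=True)
```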

@ -26,7 +26,7 @@ Thanks to the democratization of AI, developers are now finding it easier to des
* [Sports Analytics](https://towardsdatascience.com/scope-of-analytics-in-sports-world-37ed09c39860) - focuses on _predictive analytics_ (team and player analysis - think [Moneyball](https://datasciencedegree.wisconsin.edu/blog/moneyball-proves-importance-big-data-big-ideas/) - and fan management) and _data visualization_ (team & fan dashboards, games etc.) with applications like talent scouting, sports gambling and inventory/venue management.
* [Data Science in Banking](https://data-flair.training/blogs/data-science-in-banking/) - highlights the value of data science in the finance industry with applications ranging from risk modeling and fraud detection, to customer segmentation, real-time prediction and recommender systems. Predictive analytics also drive critical measures like [credit scores](https://dzone.com/articles/using-big-data-and-predictive-analytics-for-credit).
* [Data Science in Healthcare](https://data-flair.training/blogs/data-science-in-healthcare/) - highlights applications like medical imaging (e.g., MRI, X-Ray, CT-Scan), genomics (DNA sequencing), drug development (risk assessment, success prediction), predictive analytics (patient care & supply logistics), disease tracking & prevention etc.
