fairness quizzes

pull/34/head
Jen Looper 3 years ago
parent 44b36137b7
commit 346fa15060

@ -32,9 +32,6 @@ Learn more about Responsible AI by following this [Learning Path](https://docs.m
This sounds extreme but it is true that data can be manipulated to support any conclusion. Such manipulation can sometimes happen unintentionally. As humans, we all have bias, and it is often difficult to know when you are consciously introducing bias into data.
Guaranteeing fairness in AI and machine learning remains a complex sociotechnical challenge. This means that it cannot be addressed from a purely social or a purely technical perspective.
### Fairness-related harms
What do we mean by unfairness? "Unfairness" encompasses negative impacts, or "harms", for a group of people, such as those defined in terms of race, gender, age, or disability status.
@ -149,6 +146,7 @@ This introductory lesson does not dive deeply into the details of algorithmic un
### Fairlearn
[Fairlearn](https://fairlearn.github.io/) is an open-source Python package that allows you to assess your systems' fairness and mitigate unfairness.
The tool helps you assess how a model's predictions affect different groups, enabling you to compare multiple models using fairness and performance metrics, and it supplies a set of algorithms to mitigate unfairness in binary classification and regression; a brief sketch follows below.
- Learn how to use the different components by checking out Fairlearn's [GitHub](https://github.com/fairlearn/fairlearn/)
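As a concrete illustration, here is a minimal, hypothetical sketch of those two capabilities: disaggregating metrics by group with `MetricFrame`, then retraining under a `DemographicParity` constraint with the `ExponentiatedGradient` reduction algorithm. The toy dataset and the `sex` sensitive feature are invented for illustration only; see the Fairlearn docs linked above for real usage.

```python
# A minimal sketch of Fairlearn's assessment and mitigation APIs.
# The toy data and the "sex" sensitive feature below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Hypothetical toy dataset: one feature, binary labels, binary sensitive group.
X = pd.DataFrame({"feature": [0.2, 0.4, 0.6, 0.8, 0.1, 0.9, 0.3, 0.7]})
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
sex = ["F", "F", "M", "M", "F", "M", "F", "M"]

# Train an unmitigated baseline model.
model = LogisticRegression().fit(X, y_true)
y_pred = model.predict(X)

# Assessment: MetricFrame computes each metric overall and per sensitive
# group, so disparities between groups become visible.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # metric values for each group
print(mf.difference())  # largest between-group gap per metric

# Mitigation: retrain under a demographic-parity constraint using one of
# Fairlearn's reduction algorithms for binary classification.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity()
)
mitigator.fit(X, y_true, sensitive_features=sex)
y_mitigated = mitigator.predict(X)
```

`MetricFrame.difference()` collapses the per-group results into a single disparity number per metric, which makes it easy to compare candidate models side by side, as the lesson describes.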
@ -162,7 +160,7 @@ The tool helps you to assesses how a model's predictions affect different groups
- Check out these [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/contrib/fairness) for more fairness assessment scenarios in Azure Machine Learning.
## 🚀 Challenge
To avoid biases to be introduced in the first place, we should:
To prevent biases from being introduced in the first place, we should:
- have a diversity of backgrounds and perspectives among the people working on systems
- invest in datasets that reflect the diversity of our society

@ -112,7 +112,7 @@
},
{
"id": 3,
"title": "History of Machine Learning: Post-Lecture Quiz",
"title": "History of Machine Learning: Pre-Lecture Quiz",
"quiz": [
{
"questionText": "q1",
@ -218,51 +218,55 @@
},
{
"id": 5,
"title": "Fairness and Machine Learning: Post-Lecture Quiz",
"title": "Fairness and Machine Learning: Pre-Lecture Quiz",
"quiz": [
{
"questionText": "q1",
"questionText": "Unfairness in Machine Learning can happen",
"answerOptions": [
{
"answerText": "a",
"answerText": "intentionally",
"isCorrect": "false"
},
{
"answerText": "b",
"isCorrect": "true"
"answerText": "unintentionally",
"isCorrect": "false"
},
{
"answerText": "c",
"isCorrect": "false"
"answerText": "both of the above",
"isCorrect": "true"
}
]
},
{
"questionText": "q2",
"questionText": "The term 'unfairness' in ML connotes:",
"answerOptions": [
{
"answerText": "a",
"answerText": "harms for a group of people",
"isCorrect": "true"
},
{
"answerText": "b",
"answerText": "harm to one person",
"isCorrect": "false"
},
{
"answerText": "harms for the majority of people",
"isCorrect": "false"
}
]
},
{
"questionText": "q3",
"questionText": "The five main types of harms include",
"answerOptions": [
{
"answerText": "a",
"isCorrect": "false"
"answerText": "allocation, quality of service, stereotyping, denigration, and over- or under- representation",
"isCorrect": "true"
},
{
"answerText": "b",
"isCorrect": "true"
"answerText": "elocation, quality of service, stereotyping, denigration, and over- or under- representation ",
"isCorrect": "false"
},
{
"answerText": "c",
"answerText": "allocation, quality of service, stereophonics, denigration, and over- or under- representation ",
"isCorrect": "false"
}
]
@ -274,48 +278,52 @@
"title": "Fairness and Machine Learning: Post-Lecture Quiz",
"quiz": [
{
"questionText": "q1",
"questionText": "Unfairness in a model can be caused by",
"answerOptions": [
{
"answerText": "a",
"isCorrect": "false"
"answerText": "overrreliance on historical data",
"isCorrect": "true"
},
{
"answerText": "b",
"isCorrect": "true"
"answerText": "underreliance on historical data",
"isCorrect": "false"
},
{
"answerText": "c",
"answerText": "too closely aligning to historical data",
"isCorrect": "false"
}
]
},
{
"questionText": "q2",
"questionText": "To mitigate unfairness, you can",
"answerOptions": [
{
"answerText": "a",
"isCorrect": "true"
"answerText": "identify harms and affected groups",
"isCorrect": "false"
},
{
"answerText": "b",
"answerText": "define fairness metrics",
"isCorrect": "false"
},
{
"answerText": "both the above",
"isCorrect": "true"
}
]
},
{
"questionText": "q3",
"questionText": "Fairlearn is a package that can",
"answerOptions": [
{
"answerText": "a",
"isCorrect": "false"
"answerText": "compare multiple models by using fairness and performance metrics",
"isCorrect": "true"
},
{
"answerText": "b",
"isCorrect": "true"
"answerText": "choose the best model for your needs",
"isCorrect": "false"
},
{
"answerText": "c",
"answerText": "help you decide what is fair and what is not",
"isCorrect": "false"
}
]
