Update 1-Introduction/3-fairness/README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
pull/965/head
Lee Stott 2 weeks ago committed by GitHub
parent cc67b44259
commit 83da5c5bbb

@@ -54,7 +54,7 @@ When designing and testing AI systems, we need to ensure that AI is fair and not
To build trust, AI systems need to be reliable, safe, and consistent under both normal and unexpected conditions. It is important to know how AI systems will behave in a variety of situations, especially when conditions are outliers. When building AI solutions, substantial focus is needed on how to handle the wide variety of circumstances the AI solution will encounter. For example, a self-driving car needs to put people's safety as a top priority. As a result, the AI powering the car needs to consider all the possible scenarios the car could come across, such as night, thunderstorms or blizzards, kids running across the street, pets, road construction, etc. How reliably and safely an AI system handles a wide range of conditions reflects the level of anticipation the data scientist or AI developer applied during the design and testing of the system.
-> [🎥 Click here for a video: ](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
+> [🎥 Click here for a video: reliability and safety in AI](https://www.microsoft.com/videoplayer/embed/RE4vvIl)
### Inclusiveness
