diff --git a/Introduction/1-intro-to-ML/README.md b/Introduction/1-intro-to-ML/README.md
index 46f6c330..7129d6f3 100644
--- a/Introduction/1-intro-to-ML/README.md
+++ b/Introduction/1-intro-to-ML/README.md
@@ -4,9 +4,6 @@
 > Click this image to watch a video discussing the difference between Machine Learning, AI, and Deep Learning.
 
 ## [Pre-lecture quiz](https://jolly-sea-0a877260f.azurestaticapps.net/quiz/1/)
-
-Describe what we will learn
-
 ### Introduction
 
 Describe what will be covered
diff --git a/Introduction/2-history-of-ML/README.md b/Introduction/2-history-of-ML/README.md
index f49037a7..daf5aa21 100644
--- a/Introduction/2-history-of-ML/README.md
+++ b/Introduction/2-history-of-ML/README.md
@@ -1,13 +1,9 @@
 # History of Machine Learning
-
-Add a sketchnote if possible/appropriate
-
 ## [Pre-lecture quiz](https://jolly-sea-0a877260f.azurestaticapps.net/quiz/3/)
 
 In this lesson, we will walk through the major milestones of the history of Machine Learning and AI.
 
-The history of Artificial Intelligence as a field is intertwined with the history of Machine Learning, as the algorithms and computational advances that underpin ML fed into the development of AI. It is useful to remember that, while these fields as distinct areas of inquiry began to crystallize in the 1950s, important [algorithmical, statistical, mathematical, computational and technical discoveries](https://wikipedia.org/wiki/Timeline_of_machine_learning) predated and overlapped this era.
-
+The history of Artificial Intelligence as a field is intertwined with the history of Machine Learning, as the algorithms and computational advances that underpin ML fed into the development of AI. It is useful to remember that, while these fields as distinct areas of inquiry began to crystallize in the 1950s, important [algorithmic, statistical, mathematical, computational and technical discoveries](https://wikipedia.org/wiki/Timeline_of_machine_learning) predated and overlapped this era.
 In fact, people have been thinking about these questions for [hundreds of years](https://wikipedia.org/wiki/History_of_artificial_intelligence): this article discusses the historical intellectual underpinnings of the idea of a 'thinking machine'.
 
 ## Notable Discoveries
 
 - 1763, 1812 [Bayes Theorem](https://wikipedia.org/wiki/Bayes%27_theorem) and its predecessors. This theorem and its applications underlie inference, describing the probability of an event occuring based on prior knowledge.
@@ -19,7 +15,6 @@ The history of Artificial Intelligence as a field is intertwined with the histor
 - 1982 [Recurrent Neural Network](https://wikipedia.org/wiki/Recurrent_neural_network) are artificial neural networks derived from feedforward neural networks that create temporal graphs.
 
 ✅ Do a little research. What other dates stand out as pivotal in the history of ML and AI?
-
 ## 1950: Machines that Think
 
 Alan Turing, a truly remarkable person who was voted [by the public in 2019](https://wikipedia.org/wiki/Icons:_The_Greatest_Person_of_the_20th_Century) as the greatest scientist of the 20th century, is credited as helping to lay the foundation for the concept of a 'machine that can think'. He grappled with naysayers and his own need for empirical evidence of this concept in part by creating the [Turing Test](https://www.bbc.com/news/technology-18475646), which you will explore in our NLP lessons.
@@ -31,73 +26,68 @@ Alan Turing, a truly remarkable person who was voted [by the public in 2019](htt
 
 The lead researcher, mathematics professor John McCarthy, hoped "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". The participants included another luminary in the field, Marvin Minsky.
 The Workshop is credited with having initiated and encouraged including "the rise of symbolic methods, systems focussed on limited domains (early expert systems), and deductive systems versus inductive systems."[source](https://wikipedia.org/wiki/Dartmouth_workshop)
 
+## 1956 - 1974: "The Golden Years"
-## 1956 - 1974: "The Gold Rush"
-
-Optimism was high in this era that AI could solve many problems. Research was very well funded. Shakey the robot could maneuver and decide. Eliza could converse w/ ppl. Blocksworld
+From the 1950s through the mid '70s, optimism ran high in the hope that AI could solve many problems. In 1967 Marvin Minsky stated confidently that "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." (Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall)
-## 1974 - 1980: "AI Winter"
+Natural Language Processing research flourished, search was refined and made more powerful, and the concept of 'micro-worlds' emerged, where simple tasks could be completed using plain language instructions.
-Funding stopped, optimism lowered. Some issues included:
-compute power was too limited
-combinatorial explosion: the amount of parameters needing to be trained exploded w/o compute keeping up
-paucity of data, hindered the process of using algorithms
-how to frame the question...were we asking the right questions, were they specific enough
-lots of criticism about approaches
- criticism on turing tests
- chinese room theory
- ethical criticism of eliza
+Research was well funded by government agencies, advances were made in computation and algorithms, and prototypes of intelligent machines were built. Some of these machines included:
-scruffy vs. neat AI
-neat AI has lots of trees and logical reasoning
-scruffy AI encompasses an idea's metadata
-led to progressions in OO programming
+[Shakey the robot](https://wikipedia.org/wiki/Shakey_the_robot) could maneuver and decide how to perform tasks 'intelligently'.
-## 1980s Expert systems
+![Shakey, an intelligent robot](images/shakey.jpg)
+> Shakey in 1972
-knowledge became the focus of AI and its businenss impact became acknowledged
+Eliza, an early 'chatterbot', could converse with people and act as a primitive 'therapist'. You'll learn more about Eliza in the NLP lessons.
-revival of connectionism (NN) behind the scenes, in research
-hopfield net
-backpropagation
-applied neural networks
+![Eliza, a bot](images/eliza.png)
+> A version of Eliza, a chatbot
-## 1987 - 1993: AI Chill
-hardware had become too specialized
-moving into an era of personal computers
- computing becoming democratized
+"Blocks world" was an example of a micro-world where blocks could be stacked and sorted, and experiments in teaching machines to make decisions could be carried out. Advances built with libraries such as [SHRDLU](https://wikipedia.org/wiki/SHRDLU) helped propel language processing forward.
-## 1990s: AI based on Robotics
+[![blocks world with SHRDLU](https://img.youtube.com/vi/QAJz4YKUwqw/0.jpg)](https://www.youtube.com/watch?v=QAJz4YKUwqw "blocks world with SHRDLU")
-To show real intelligence AI needs a body
+## 1974 - 1980: "AI Winter"
+By the mid 1970s, it had become apparent that the complexity of making 'intelligent machines' had been understated and that its promise, given the available compute power, had been overblown. Funding dried up and confidence in the field waned. Some issues that impacted confidence included:
+- Compute power was too limited
+- 'Combinatorial explosion': the number of parameters that needed to be trained grew exponentially as more was asked of computers, without a parallel evolution of compute power and capability
+- There was a paucity of data that hindered the process of testing, developing, and refining algorithms
+- The very questions that were being asked began to be questioned. Researchers began to field criticism about their approaches:
+  - Turing tests came into question by means, among other ideas, of the 'Chinese room' theory, which posited that "programming a digital computer may make it appear to understand language but could not produce real understanding."[source](https://plato.stanford.edu/entries/chinese-room/)
+  - The ethics of introducing artificial intelligences such as the "therapist" ELIZA into society was challenged
+At the same time, various AI schools of thought began to form. A dichotomy was established between ["scruffy" vs. "neat AI"](https://en.wikipedia.org/wiki/Neats_and_scruffies) practices. 'Scruffy' labs tweaked programs for hours until they had the desired results. 'Neat' labs "focused on logic and formal problem solving". ELIZA and SHRDLU were well-known 'scruffy' systems. In the 1980s, as demand emerged to make ML systems reproducible, the 'neat' approach gradually took the forefront, as its results are more explainable.
+## 1980s Expert systems
+As the field grew, its benefit to business became clearer, and in the 1980s so did the proliferation of 'expert systems'. "Expert systems were among the first truly successful forms of artificial intelligence (AI) software."[source](https://wikipedia.org/wiki/Expert_system)
+This type of system is actually hybrid, consisting partially of a rules engine defining business requirements and an inference engine that leverages the rules to deduce new facts.
+This era also saw increasing attention paid to neural networks.
-[![The history of Deep Learning](https://img.youtube.com/vi/mTtDfKgLm54/0.jpg)](https://www.youtube.com/watch?v=mTtDfKgLm54 "The history of Deep Learning")
-> Yann LeCun discusses the history of Deep Learning in this lecture
+## 1987 - 1993: AI 'Chill'
+The proliferation of specialized expert systems hardware had the unfortunate effect that it became too specialized. The rise of personal computers also competed with these large, specialized, centralized systems. The democratization of computing had begun, and it eventually paved the way for the modern explosion of big data.
+## 1993 - 2011
+This epoch saw ML and AI begin to solve some of the problems that had earlier been caused by the lack of data and compute power. Data began to proliferate and become more widely available, for better and for worse, especially with the advent of the smartphone around 2007. Compute power expanded exponentially, and algorithms evolved alongside. The field began to gain maturity as the freewheeling days of the past began to crystallize into a true discipline.
+## Now
+Today, Machine Learning and AI touch almost every part of our lives. This era calls for careful understanding of the risks and potential effects of these algorithms on human lives. As Microsoft's Brad Smith has stated, "Information technology raises issues that go to the heart of fundamental human-rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses."[source](https://www.technologyreview.com/2019/12/18/102365/the-future-of-ais-impact-on-society/). It remains to be seen what the future holds, but it is important to understand these computer systems and the software and algorithms that they run. We hope that this curriculum will help you to gain a better understanding so that you can decide for yourself.
-Optional: add a screenshot of the completed lesson's UI if appropriate
+[![The history of Deep Learning](https://img.youtube.com/vi/mTtDfKgLm54/0.jpg)](https://www.youtube.com/watch?v=mTtDfKgLm54 "The history of Deep Learning")
+> Yann LeCun discusses the history of Deep Learning in this lecture
+## 🚀Challenge
+Dig into one of these historical moments and learn more about the people behind them. There are fascinating characters, and no scientific discovery was ever created in a cultural vacuum. What do you discover?
 ## [Post-lecture quiz](https://jolly-sea-0a877260f.azurestaticapps.net/quiz/4/)
-
 ## Review & Self Study
-Here are two items to review:
+Here are items to watch and listen to:
 [This podcast where Amy Boyd discusses the evolution of AI](http://runasradio.com/Shows/Show/739)
diff --git a/Introduction/2-history-of-ML/images/eliza.png b/Introduction/2-history-of-ML/images/eliza.png
new file mode 100644
index 00000000..04f14146
Binary files /dev/null and b/Introduction/2-history-of-ML/images/eliza.png differ
diff --git a/Introduction/2-history-of-ML/images/shakey.jpg b/Introduction/2-history-of-ML/images/shakey.jpg
new file mode 100644
index 00000000..53ce4b35
Binary files /dev/null and b/Introduction/2-history-of-ML/images/shakey.jpg differ