Introduction to reinforcement learning

Reinforcement learning, or RL, is considered one of the fundamental paradigms of machine learning, alongside supervised learning and unsupervised learning. RL focuses on decision-making: making the right decisions or, at the very least, learning from them.

Imagine you have a simulated environment, like the stock market. What happens if you implement a specific regulation? Does it lead to positive or negative outcomes? If something negative occurs, you need to take this negative reinforcement, learn from it, and adjust your approach. If the outcome is positive, you should build on that positive reinforcement.

Peter and his friends need to escape the hungry wolf! Image by Jen Looper

Regional topic: Peter and the Wolf (Russia)

Peter and the Wolf is a musical fairy tale written by the Russian composer Sergei Prokofiev. It tells the story of a young pioneer, Peter, who bravely ventures out of his house into a forest clearing to confront a wolf. In this section, we will train machine learning algorithms to help Peter:

  • Explore the surrounding area and create an optimal navigation map.
  • Learn how to use a skateboard and maintain balance on it to move around more quickly.

🎥 Click the image above to listen to Peter and the Wolf by Prokofiev

Reinforcement learning

In earlier sections, you encountered two types of machine learning problems:

  • Supervised learning, where we have datasets that provide example solutions to the problem we aim to solve. Classification and regression are examples of supervised learning tasks.
  • Unsupervised learning, where we lack labeled training data. A primary example of unsupervised learning is Clustering.

In this section, we will introduce a new type of learning problem that does not rely on labeled training data. Consider the following example of such a problem:

Example - computer game

Imagine you want to teach a computer to play a game, such as chess or Super Mario. For the computer to play the game, it needs to predict which move to make in each game state. While this might seem like a classification problem, it is not—because we do not have a dataset containing states and corresponding actions. Although we might have some data, like records of chess matches or gameplay footage of Super Mario, it is unlikely that this data will sufficiently cover the vast number of possible states.

Instead of relying on existing game data, Reinforcement Learning (RL) is based on the idea of letting the computer play the game repeatedly and observing the outcomes. To apply Reinforcement Learning, we need two key components:

  • An environment and a simulator that allow the computer to play the game multiple times. This simulator defines all the game rules, as well as possible states and actions.

  • A reward function, which evaluates how well the computer performed during each move or game.
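The two ingredients above can be sketched in a few lines of Python. The environment below is a toy 1-D corridor invented for illustration, not a real game or any library's API; the class and method names are assumptions chosen to mirror the structure the lessons will use.

```python
# A minimal sketch of the two RL ingredients: a simulated environment
# (a hypothetical 1-D corridor) and a reward function.
class CorridorEnv:
    """The agent starts at cell 0 and must reach the last cell; actions are -1 or +1."""

    def __init__(self, size=5):
        self.size = size
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Apply the action, clipping the position to the corridor bounds.
        self.state = max(0, min(self.size - 1, self.state + action))
        done = self.state == self.size - 1
        # Reward function: +1 only when the goal is reached, 0 otherwise.
        reward = 1.0 if done else 0.0
        return self.state, reward, done

env = CorridorEnv()
state = env.reset()
total_reward = 0.0
done = False
while not done:
    state, reward, done = env.step(+1)  # a fixed policy: always move right
    total_reward += reward
print(total_reward)  # 1.0: the reward arrives only at the very end
```

Note that the agent receives zero reward for every intermediate move; this is exactly the delayed-feedback situation described next.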

The primary difference between RL and other types of machine learning is that in RL, we typically do not know whether we have won or lost until the game is over. Therefore, we cannot determine whether a specific move is good or bad on its own—we only receive feedback (a reward) at the end of the game. Our goal is to design algorithms that enable us to train a model under these uncertain conditions. In this section, we will explore one RL algorithm called Q-learning.
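As a preview of the next lesson, the core of tabular Q-learning is a single update rule, Q(s, a) ← Q(s, a) + α [r + γ max Q(s', a') − Q(s, a)], which propagates the end-of-game reward backwards through earlier states. The sketch below applies one such update; the states, actions, and parameter values are illustrative, not taken from the lessons' code.

```python
# A sketch of one tabular Q-learning update step.
from collections import defaultdict

alpha, gamma = 0.5, 0.9      # learning rate and discount factor (assumed values)
Q = defaultdict(float)       # Q-table mapping (state, action) -> estimated value
actions = [-1, +1]           # illustrative action set

def q_update(state, action, reward, next_state):
    # Bootstrap from the best action available in the next state.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Suppose that from state 3, action +1 reached the goal state 4 with reward 1.
q_update(3, +1, 1.0, 4)
print(Q[(3, +1)])  # 0.5 = 0.5 * (1.0 + 0.9 * 0 - 0)
```

Repeating this update over many simulated games is what lets the reward at the end of an episode gradually inform decisions made at its start.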

Lessons

  1. Introduction to reinforcement learning and Q-Learning
  2. Using a gym simulation environment

Credits

"Introduction to Reinforcement Learning" was written with ♥️ by Dmitry Soshnikov


Disclaimer:
This document has been translated using the AI translation service Co-op Translator. While we strive for accuracy, please note that automated translations may contain errors or inaccuracies. The original document in its native language should be regarded as the authoritative source. For critical information, professional human translation is recommended. We are not responsible for any misunderstandings or misinterpretations resulting from the use of this translation.