tweaks for reinforcement

pull/38/head
Jen Looper 3 years ago
parent 7d3a1010a8
commit c590fb7fed

@ -1,6 +1,6 @@
# A More Realistic World
In our situation, Peter was able to move around almost without getting tired or hungry. In more realistic world, we has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic, by implementing the following rules:
In our situation, Peter was able to move around almost without getting tired or hungry. In a more realistic world, he has to sit down and rest from time to time, and also to feed himself. Let's make our world more realistic by implementing the following rules:
1. By moving from one place to another, Peter loses **energy** and gains some **fatigue**.
2. Peter can gain more energy by eating apples.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@ -1,25 +1,25 @@
# CartPole Skating
The problem we have been solving in the previous lesson might seem like a toy problem, not really applicable for real life scenarios. This is not the case, because many real world problems are like that - including playing chess or go. They are similar, because we also have a board with given rules and **discrete state**.
The problem we have been solving in the previous lesson might seem like a toy problem, not really applicable to real-life scenarios. This is not the case, because many real-world problems also share this scenario - including playing chess or Go. They are similar because we also have a board with given rules and a **discrete state**.
In this lesson we will apply the same principles of Q-Learning to a problem with **continuous state**, i.e. a state that is given by one or more real numbers. We will deal with the following problem:
> **Problem**: If Peter wants to escape from the wolf, he needs to be able to move faster than him. We will see how Peter can learn to skate, in particular, to keep balance, using Q-Learning.
> **Problem**: If Peter wants to escape from the wolf, he needs to be able to move faster. We will see how Peter can learn to skate and, in particular, keep his balance, using Q-Learning.
We will use a simplified version of balancing known as **CartPole** problem. In cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a pole staying on top of it.
We will use a simplified version of balancing known as the **CartPole** problem. In the cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a vertical pole on top of the slider.
<img src="images/cartpole.png" width="200"/>
<img alt="a cartpole" src="images/cartpole.png" width="200"/>
## Prerequisites
In this lesson, we will be using a library called **OpenAI Gym** to simulate different **environments**. It is preferred to run this lesson's code locally (eg. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
In this lesson, we will be using a library called **OpenAI Gym** to simulate different **environments**. You can run this lesson's code locally (e.g. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7).
## OpenAI Gym
In the previous lesson, the rules of the game and the state were given by `Board` class, which we defined ourselves. Here we will use a special **sumulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training Reinforcement Learning algorithms is called [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using gym we can create difference **environments**: from cartpole simulation to Atari games.
In the previous lesson, the rules of the game and the state were given by the `Board` class which we defined ourselves. Here we will use a special **simulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training reinforcement learning algorithms is called [Gym](https://gym.openai.com/), and it is maintained by [OpenAI](https://openai.com/). Using Gym, we can create different **environments**, from a cartpole simulation to Atari games.
> **Note**: You can see other environments available from OpenAI Gym [here](https://gym.openai.com/envs/#classic_control).
First, let's install the gym and import required libraries:
First, let's install the gym and import required libraries (code block 1):
```python
import sys
@ -35,7 +35,7 @@ import random
To work with the CartPole balancing problem, we need to initialize the corresponding environment. Each environment is associated with:
* **Observation space** that defines the structure of the information we receive from the environment. For the cartpole problem, we receive the position of the cart, its velocity, and some other values.
* **Action space** that defines possible actions. In our case action space is discrete, and consists of two actions - **left** and **right**.
* **Action space** that defines the possible actions. In our case the action space is discrete, and consists of two actions - **left** and **right**. (code block 2)
```python
env = gym.make("CartPole-v1")
@ -46,7 +46,7 @@ print(env.action_space.sample())
To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`. Run the code below and see what it leads to.
> **Note**: Remember that it is preferred to run this code on local Python installation!
> **Note**: Remember that it is preferred to run this code on a local Python installation! (code block 3)
```python
env.reset()
@ -59,9 +59,9 @@ env.close()
You should see something similar to this image:
![](images/cartpole-nobalance.gif)
![non-balancing cartpole](images/cartpole-nobalance.gif)
During simulation, we need to get observatons in order to decide how to act. In fact, `step` function returns us back current observations, reward function, and the `done` flag that indicates whether it makes sense to continue the simulation or not:
During the simulation, we need to get observations in order to decide how to act. In fact, the `step` function returns the current observations, a reward value, and a `done` flag that indicates whether it makes sense to continue the simulation or not: (code block 4)
```python
env.reset()
@ -91,7 +91,7 @@ The observation vector that is returned at each step of the simulation contains
* Angle of pole
* Rotation rate of pole
We can get min and max value of those numbers:
We can get the min and max values of those numbers: (code block 5)
```python
print(env.observation_space.low)
@ -113,13 +113,12 @@ There are a few ways we can do this:
In our example, we will go with the second approach. As you may notice later, despite the undefined upper/lower bounds, those values rarely fall outside of certain finite intervals, so states with extreme values will be very rare.
Here is the function that will take the observation from our model, and produces a tuple of 4 integer values:
Here is the function that takes the observation from our model and produces a tuple of 4 integer values: (code block 6)
```python
def discretize(x):
    # scale each observation component, then truncate it to an integer bucket
    return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(int))
```
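To get a feel for what this produces, here is a small usage example, assuming the `env` created in code block 2 (the printed values are hypothetical - they depend on the random initial state):
```python
obs = env.reset()        # e.g. array([ 0.03, -0.02,  0.01,  0.04])
print(discretize(obs))   # e.g. (0, 0, 1, 0) - a tuple of 4 small integers
```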
Let's also explore other discretization method using bins:
Let's also explore another discretization method using bins: (code block 7)
```python
def create_bins(i,num):
return np.arange(num+1)*(i[1]-i[0])/num+i[0]
@ -136,7 +135,7 @@ def discretize_bins(x):
Let's now run a short simulation and observe those discrete environment values. Feel free to try both `discretize` and `discretize_bins` and see if there is a difference.
> **Note**: `discretize_bins` returns the bin number, which is 0-based, thus for values of input variable around 0 it returns the number from the middle of the interval (10). In `discretize`, we did not care about the range of output values, allowing them to be negative, thus the state values are not shifted, and 0 corresponds to 0.
> **Note**: `discretize_bins` returns the bin number, which is 0-based, thus for values of the input variable around 0 it returns the number from the middle of the interval (10). In `discretize`, we did not care about the range of output values, allowing them to be negative, so the state values are not shifted, and 0 corresponds to 0. (code block 8)
```python
env.reset()
@ -155,7 +154,7 @@ env.close()
In our previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent the Q-Table by a numpy tensor of shape 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent the state by an array of shape 20x20x10x10x2 (here 2 is the dimension of the action space, and the first dimensions correspond to the number of bins we have selected for each of the parameters in the observation space).
However, sometimes precise dimensions of the observation space are not known. In case of `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bound. Thus, we will use slightly different approach and represent Q-Table by a dictionary. We will use the pair *(state,action)* as the dictionary key, and the value would correspond to Q-Table entry value.
However, sometimes the precise dimensions of the observation space are not known. In the case of the `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bounded. Thus, we will use a slightly different approach and represent the Q-Table by a dictionary. We will use the pair *(state,action)* as the dictionary key, and the value will correspond to the Q-Table entry value. (code block 9)
```python
Q = {}
@ -169,7 +168,7 @@ Here we also define a function `qvalues`, which returns a list of Q-Table values
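In other words, the whole Q-Table boils down to something like this (a minimal sketch of code block 9; `qvalues` returns 0 for actions that have not been seen in a given state yet):
```python
Q = {}              # Q-Table: (state, action) -> estimated value
actions = (0, 1)    # the two discrete actions: push left / push right

def qvalues(state):
    # Q-values of all actions in the given state, defaulting to 0 for unseen entries
    return [Q.get((state, a), 0) for a in actions]
```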
## Let's Start Q-Learning!
Now we are ready to teach Peter to balance! First, let's set some hyperparameterers:
Now we are ready to teach Peter to balance! First, let's set some hyperparameters: (code block 10)
```python
# hyperparameters
@ -191,7 +190,7 @@ We would also make two improvements to our algorithm from the previous lesson:
* Calculating the average cumulative reward over a number of simulations. We will print the progress every 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 points - we can consider the problem solved, with even higher quality than required.
* We will calculate the maximum average cumulative result `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training, you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of the Q-Table that correspond to the best model observed during training.
We will also collect all cumulative rewards at each simulation at `rewards` vector for further plotting.
We will also collect all cumulative rewards at each simulation in the `rewards` vector for further plotting. (code block 11)
```python
def probs(v,eps=1e-4):
@ -242,9 +241,9 @@ This is more clearly visible if we plot training progress.
During training, we have collected the cumulative reward value at each of the iterations into the `rewards` vector. Here is how it looks when we plot it against the iteration number:
![](images/train_progress_raw.png)
![raw progress](images/train_progress_raw.png)
From this graph, it is not possible to tell anything, because due to the nature of stochastic training process the length of training sessions varies greatly. To make more sense of this graph, we can calculate **running average** over series of experiments, let's say 100. This can be done conveniently using `np.convolve`:
From this graph it is not possible to tell anything, because, due to the nature of the stochastic training process, the length of training sessions varies greatly. To make more sense of this graph, we can calculate the **running average** over a series of experiments, let's say 100. This can be done conveniently using `np.convolve`: (code block 12)
```python
def running_average(x,window):
@ -253,7 +252,7 @@ def running_average(x,window):
plt.plot(running_average(rewards,100))
```
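As a quick sanity check, here is a tiny, self-contained example of the same moving average (the input numbers are made up purely for illustration):
```python
import numpy as np

def running_average(x, window):
    # mean of every `window` consecutive values, computed via convolution
    return np.convolve(x, np.ones(window) / window, mode='valid')

print(running_average([1, 2, 3, 4, 5], 2))   # -> [1.5 2.5 3.5 4.5]
```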
![](images/train_progress_runav.png)
![training progress](images/train_progress_runav.png)
## Varying Hyperparameters
@ -267,7 +266,7 @@ To make learning more stable, it makes sense to adjust some of our hyperparamete
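For example, one simple schedule (a hypothetical sketch, not the lesson's code) decays `alpha` toward a small floor and slowly raises `epsilon` toward 1 at the end of every epoch:
```python
# hypothetical per-epoch adjustment: trust new samples less, exploit the Q-Table more
alpha = max(0.05, alpha * 0.999)       # learning rate decays toward 0.05
epsilon = min(0.99, epsilon * 1.001)   # exploitation probability grows toward 0.99
```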
## Seeing the Result in Action
Now it would be interesting to actually see how the trained model behaves. Let's run the simulation, and we will be following the same action selection strategy as during training: sampling according to the probability distribution in Q-Table:
Now it would be interesting to actually see how the trained model behaves. Let's run the simulation, and we will be following the same action selection strategy as during training: sampling according to the probability distribution in the Q-Table: (code block 13)
```python
obs = env.reset()
@ -283,7 +282,10 @@ env.close()
You should see something like this:
![](images/cartpole-balance.gif)
![a balancing cartpole](images/cartpole-balance.gif)
---
## 🚀Challenge
> **Task 3**: Here, we were using the final copy of the Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table in the `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.
@ -291,8 +293,10 @@ You should see something like this:
## [Post-lecture quiz](link-to-quiz-app)
## Assignment: [Train Mountain Car](assignment.md)
## Assignment: [Train a Mountain Car](assignment.md)
## Conclusion
We have now learnt how to train agents to achieve good results just by providing them a reward function that defines the desired state of the game, and by giving it an opportunity to intelligently explore the search space. We have successfully applied Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions. In the are of reinforcement learning, we need to further study situations where action state is also continuous, and when observation space is much more complex, such as the image from Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of more advanced Deep Reinforcement Learning course.
We have now learned how to train agents to achieve good results just by providing them a reward function that defines the desired state of the game, and by giving them an opportunity to intelligently explore the search space. We have successfully applied the Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions.
It's also important to study situations where the action space is continuous and the observation space is much more complex, such as an image from an Atari game screen. In these problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. These more advanced topics are the subject of our forthcoming, more advanced AI course.

@ -2,7 +2,7 @@
[OpenAI Gym](http://gym.openai.com) has been designed in such a way that all environments provide the same API - i.e. the same methods `reset`, `step` and `render`, and the same abstractions of **action space** and **observation space**. Thus it should be possible to adapt the same reinforcement learning algorithms to different environments with minimal code changes.
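As a quick illustration of this shared API, here is a minimal sketch that drives the Mountain Car environment (introduced below) with random actions, using the same old-style Gym calls as the lesson:
```python
import gym

env = gym.make("MountainCar-v0")   # same reset/step/render API as CartPole-v1
obs = env.reset()
done = False
while not done:
    env.render()
    obs, rew, done, info = env.step(env.action_space.sample())  # random action
env.close()
```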
## Mountain Car Environment
## A Mountain Car Environment
The [Mountain Car environment](https://gym.openai.com/envs/MountainCar-v0/) contains a car stuck in a valley:

@ -28,117 +28,53 @@
"source": [
"## CartPole Skating\n",
"\n",
"The problem we have been solving in the previous lesson might seem like a toy problem, not really applicable for real life scenarios. This is not the case, because many real world problems are like that - including playing chess or go. They are similar, because we also have a board with given rules and **discrete state**.\n",
"\n",
"In this lesson we will apply the same principles of Q-Learning to a problem with **continuous state**, i.e. a state that is given by one or more real numbers. We will deal with the following problem:\n",
"\n",
"> **Problem**: If Peter wants to escape from the wolf, he needs to be able to move faster than him. We will see how Peter can learn to skate, in particular, to keep balance, using Q-Learning.\n",
"\n",
"We will use a simplified version of balancing known as **CartPole** problem. In cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a pole staying on top of it.\n",
"\n",
"<img src=\"images/cartpole.png\" width=\"200\"/>\n",
"\n",
"## OpenAI Gym\n",
"\n",
"In the previous lesson, the rules of the game and the state were given by `Board` class, which we defined ourselves. Here we will use a special **sumulation environment**, which will simulate the physics behind the balancing pole. One of the most popular simulation environments for training Reinforcement Learning algorithms is called [Gym](https://gym.openai.com/), which is maintained by [OpenAI](https://openai.com/). By using gym we can create difference **environments**: from cartpole simulation to Atari games. \n",
"\n",
"First, let's install the gym and import required libraries:"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Requirement already satisfied: gym in c:\\winapp\\miniconda3\\lib\\site-packages (0.18.3)\nRequirement already satisfied: pyglet<=1.5.15,>=1.4.0 in c:\\winapp\\miniconda3\\lib\\site-packages (from gym) (1.5.15)\nRequirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in c:\\winapp\\miniconda3\\lib\\site-packages (from gym) (1.2.2)\nRequirement already satisfied: Pillow<=8.2.0 in c:\\winapp\\miniconda3\\lib\\site-packages (from gym) (7.2.0)\nRequirement already satisfied: scipy in c:\\winapp\\miniconda3\\lib\\site-packages (from gym) (1.6.1)\nRequirement already satisfied: numpy>=1.10.4 in c:\\winapp\\miniconda3\\lib\\site-packages (from gym) (1.19.5)\n"
]
}
],
"source": [
"import sys\n",
"!{sys.executable} -m pip install gym "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"import matplotlib.pyplot as plt\n",
"from IPython import display\n",
"import numpy as np\n",
"import random"
"#code block 1"
]
},
{
"source": [
"## CartPole Environment\n",
"\n",
"To work with CartPole balancing problem, we need to initialize corresponding environment. Each environment is associated with:\n",
"* **Observation space** that defines the structure of information that we receive from the environment. For cartpole problem, we receive position of the pole, velocity and some other values.\n",
"* **Action space** that defines possible actions. In our case action space is discrete, and consists of two actions - **left** and **right**."
"## Create a cartpole environment"
],
"cell_type": "markdown",
"metadata": {}
},
{
"source": [
"#code block 2"
],
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Discrete(2)\nBox(-3.4028234663852886e+38, 3.4028234663852886e+38, (4,), float32)\n1\n"
]
}
],
"source": [
"env = gym.make(\"CartPole-v1\")\n",
"print(env.action_space)\n",
"print(env.observation_space)\n",
"print(env.action_space.sample())"
]
"execution_count": null,
"outputs": []
},
{
"source": [
"To see how the environment works, let's run a short simulation for 100 steps. At each step, we provide one of the actions to be taken - in this simulation we just randomly select an action from `action_space`. Run the code below and see what it leads to.\n",
"\n",
"> **Note**: It is preferred to run this code locally (eg. from Visual Studio Code), in which case the simulation will open in a new window. When running the code online, you may need to make some tweaks to the code, as described [here](https://towardsdatascience.com/rendering-openai-gym-envs-on-binder-and-google-colab-536f99391cc7)."
"To see how the environment works, let's run a short simulation for 100 steps."
],
"cell_type": "markdown",
"metadata": {}
},
{
"source": [
"#code block 3"
],
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"C:\\winapp\\miniconda3\\lib\\site-packages\\gym\\logger.py:30: UserWarning: \u001b[33mWARN: You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior.\u001b[0m\n warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))\n"
]
}
],
"source": [
"env.reset()\n",
"\n",
"for i in range(100):\n",
" env.render()\n",
" env.step(env.action_space.sample())\n",
"env.close()"
]
"execution_count": null,
"outputs": []
},
{
"source": [
@ -148,51 +84,17 @@
"metadata": {}
},
{
"source": [
"#code block 4"
],
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"[-0.01781364 0.16446158 0.00575593 -0.26601863] -> 1.0\n",
"[-1.45244123e-02 3.59500908e-01 4.35556587e-04 -5.56880543e-01] -> 1.0\n",
"[-0.00733439 0.55461674 -0.01070205 -0.84942621] -> 1.0\n",
"[ 0.00375794 0.35964236 -0.02769058 -0.56012774] -> 1.0\n",
"[ 0.01095079 0.16491978 -0.03889313 -0.27629582] -> 1.0\n",
"[ 0.01424918 0.36057441 -0.04441905 -0.58098753] -> 1.0\n",
"[ 0.02146067 0.5562897 -0.0560388 -0.8873258 ] -> 1.0\n",
"[ 0.03258647 0.75212567 -0.07378532 -1.19708542] -> 1.0\n",
"[ 0.04762898 0.55803219 -0.09772702 -0.92841056] -> 1.0\n",
"[ 0.05878962 0.75432799 -0.11629524 -1.25013537] -> 1.0\n",
"[ 0.07387618 0.56087255 -0.14129794 -0.99602608] -> 1.0\n",
"[ 0.08509363 0.75757231 -0.16121846 -1.32953877] -> 1.0\n",
"[ 0.10024508 0.56480922 -0.18780924 -1.09133681] -> 1.0\n",
"[ 0.11154126 0.76184222 -0.20963598 -1.43658114] -> 1.0\n"
]
}
],
"source": [
"env.reset()\n",
"\n",
"done = False\n",
"while not done:\n",
" env.render()\n",
" obs, rew, done, info = env.step(env.action_space.sample())\n",
" print(f\"{obs} -> {rew}\")\n",
"env.close()"
]
"execution_count": null,
"outputs": []
},
{
"source": [
"The observation vector that is returned at each step of the simulation contains the following values:\n",
"* Position of cart\n",
"* Velocity of cart\n",
"* Angle of pole\n",
"* Rotation rate of pole\n",
"\n",
"We can get min and max value of those numbers:\n"
"We can get min and max value of those numbers:"
],
"cell_type": "markdown",
"metadata": {}
@ -211,32 +113,12 @@
}
],
"source": [
"print(env.observation_space.low)\n",
"print(env.observation_space.high)"
"#code block 5"
]
},
{
"source": [
"You may also notice that reward value on each simulation step is always 1. This is because our goal is to survive as long as possible, i.e. keep the pole to a reasonably vertical position for the longest period of time.\n",
"\n",
"> In fact, CartPole simulation is considered solved if we manage to get the average reward of 195 over 100 consecutive trials."
],
"cell_type": "markdown",
"metadata": {}
},
{
"source": [
"## State Discretization\n",
"\n",
"In Q=Learning, we need to build Q-Table that defines what to do at each state. To be able to do this, we need state to be **discreet**, more precisely, it should contain finite number of disctete values. Thus, we need somehow to **discretize** our observations, mapping them to finite set of states.\n",
"\n",
"There are a few ways we can do this:\n",
"* If we know the interval of a certain value, we can divide this interval into a number of **bins**, and then replace the value by the number of bin that it belongs to. This can be done using numpy [`digitize`](https://numpy.org/doc/stable/reference/generated/numpy.digitize.html) method. In this case, we will precisely know the state size, because it will depend on the number of bins we select for digitalization.\n",
"* We can use linear interpolation to bring values to some finite interval (say, from -20 to 20), and then convert numbers to integers by rounding them. This gives us a bit less control on the size of the state, especially if we do not know the exact ranges of input values. For example, in our case 2 out of 4 values do not have upper/lower bounds on their values, which may result in the infinite number of states.\n",
"\n",
"In our example, we will go with the second approach. As you may notice later, despite undefined upper/lower bounds, those value rarely take values outside of certain finite intervals, thus those states with extreme values will be very rare.\n",
"\n",
"Here is the function that will take the observation from our model, and produces a tuple of 4 integer values:"
"## State Discretization"
],
"cell_type": "markdown",
"metadata": {}
@ -247,8 +129,7 @@
"metadata": {},
"outputs": [],
"source": [
"def discretize(x):\n",
" return tuple((x/np.array([0.25, 0.25, 0.01, 0.1])).astype(np.int))"
"#code block 6"
]
},
{
@ -272,24 +153,12 @@
}
],
"source": [
"def create_bins(i,num):\n",
" return np.arange(num+1)*(i[1]-i[0])/num+i[0]\n",
"\n",
"print(\"Sample bins for interval (-5,5) with 10 bins\\n\",create_bins((-5,5),10))\n",
"\n",
"ints = [(-5,5),(-2,2),(-0.5,0.5),(-2,2)] # intervals of values for each parameter\n",
"nbins = [20,20,10,10] # number of bins for each parameter\n",
"bins = [create_bins(ints[i],nbins[i]) for i in range(4)]\n",
"\n",
"def discretize_bins(x):\n",
" return tuple(np.digitize(x[i],bins[i]) for i in range(4))"
"#code block 7"
]
},
{
"source": [
"Let's now run a short simulation and observe those discrete environemnt values. Feel free to try both `discretize` and `discretize_bins` and see if there is a difference.\n",
"\n",
"> **Note**: `discretize_bins` returns the bin number, which is 0-based, thus for values of input variable around 0 it returns the number from the middle of the interval (10). In `discretize`, we did not care about the range of output values, allowing them to be negative, thus the state values are not shifted, and 0 corresponds to 0."
"Let's now run a short simulation and observe those discrete environment values."
],
"cell_type": "markdown",
"metadata": {}
@ -308,24 +177,12 @@
}
],
"source": [
"env.reset()\n",
"\n",
"done = False\n",
"while not done:\n",
" #env.render()\n",
" obs, rew, done, info = env.step(env.action_space.sample())\n",
" #print(discretize_bins(obs))\n",
" print(discretize(obs))\n",
"env.close()"
"#code block 8"
]
},
{
"source": [
"## Q-Table Structure\n",
"\n",
"In our previous lesson, the state was a simple pair of numbers from 0 to 8, and thus it was convenient to represent Q-Table by numpy tensor with shape 8x8x2. If we use bins discretization, the size of our state vector is also known, so we can use the same approach and represent state by an array of shape 20x20x10x10x2 (here 2 is the dimension of action space, and first dimensions correspond to the number of bins we have selected to use for each of the parameters in observation space).\n",
"\n",
"However, sometimes precise dimensions of the observation space are not known. In case of `discretize` function, we may never be sure that our state stays within certain limits, because some of the original values are not bound. Thus, we will use slightly different approach and represent Q-Table by a dictionary. We will use the pair *(state,action)* as the dictionary key, and the value would correspond to Q-Table entry value. "
"## Q-Table Structure"
],
"cell_type": "markdown",
"metadata": {}
@ -336,20 +193,12 @@
"metadata": {},
"outputs": [],
"source": [
"Q = {}\n",
"actions = (0,1)\n",
"\n",
"def qvalues(state):\n",
" return [Q.get((state,a),0) for a in actions]"
"#code block 9"
]
},
{
"source": [
"Here we also define a function `qvalues`, which returns a list of Q-Table values for a given state that correspond to all possible actions. If the entry is not present in the Q-Table, we will return 0 as the default.\n",
"\n",
"## Let's Start Q-Learning!\n",
"\n",
"Now we are ready to teach Peter to balance! First, let's set some hyperparameterers:"
"## Let's Start Q-Learning!"
],
"cell_type": "markdown",
"metadata": {}
@ -360,32 +209,9 @@
"metadata": {},
"outputs": [],
"source": [
"# hyperparameters\n",
"alpha = 0.3\n",
"gamma = 0.90\n",
"epsilon = 0.9"
"#code block 10"
]
},
{
"source": [
"Here, `alpha` is the **learning rate** that defines to which extent we should adjust the current values of Q-Table at each step. In previous lesson we have started with 1, and then decreased `alpha` to lower values during training. In this example we will keep it constant just for simplicity, and you can experiment with adjusting `alpha` values later.\n",
"\n",
"`gamma` is the **discount factor** that shows to which extent we should prioritize future reward over current reward.\n",
"\n",
"`epsilon` is the **exploration/exploitation factor** that determines whether we should prefer exploration to exploitation or vice versa. In our algorithm, we will in `epsilon` percent of the cases select the next action according to Q-Table values, and in the remaining number of cases we will execute random action. This will allow us to explore the areas of search space that we have never seen before. \n",
"\n",
"> In terms of balancing - choosing random action (exploration) would act as a random punch in the wrong direction, and the pole would have to learn how to recover the balance from those \"mistakes\"\n",
"\n",
"We would also make two improvements to our algorithm from the previous lesson:\n",
"\n",
"* Calculating average cumulative reward over a number of simulations. We will print the progress each 5000 iterations, and we will average out our cumulative reward over that period of time. It means that if we get more than 195 point - we can consider the problem solved, with even higher quality than required.\n",
"* We will calculate maximum average cumulative result `Qmax`, and we will store the Q-Table corresponding to that result. When you run the training, you will notice that sometimes the average cumulative result starts to drop, and we want to keep the values of Q-Table that correspond to the best model observed during training.\n",
"\n",
"We will also collect all cumulative rewards at each simulation in `rewards` vector for further plotting."
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 14,
@ -419,59 +245,12 @@
}
],
"source": [
"def probs(v,eps=1e-4):\n",
" v = v-v.min()+eps\n",
" v = v/v.sum()\n",
" return v\n",
"\n",
"random.seed(13)\n",
"np.random.seed(13)\n",
"env.seed(13)\n",
"\n",
"Qmax = 0\n",
"cum_rewards = []\n",
"rewards = []\n",
"for epoch in range(100000):\n",
" obs = env.reset()\n",
" done = False\n",
" cum_reward=0\n",
" # == do the simulation ==\n",
" while not done:\n",
" s = discretize(obs)\n",
" if random.random()<epsilon:\n",
" # exploitation - chose the action according to Q-Table probabilities\n",
" v = probs(np.array(qvalues(s)))\n",
" a = random.choices(actions,weights=v)[0]\n",
" else:\n",
" # exploration - randomly chose the action\n",
" a = np.random.randint(env.action_space.n)\n",
"\n",
" obs, rew, done, info = env.step(a)\n",
" cum_reward+=rew\n",
" ns = discretize(obs)\n",
" Q[(s,a)] = (1 - alpha) * Q.get((s,a),0) + alpha * (rew + gamma * max(qvalues(ns)))\n",
" cum_rewards.append(cum_reward)\n",
" rewards.append(cum_reward)\n",
" # == Periodically print results and calculate average reward ==\n",
" if epoch%5000==0:\n",
" print(f\"{epoch}: {np.average(cum_rewards)}, alpha={alpha}, epsilon={epsilon}\")\n",
" if np.average(cum_rewards) > Qmax:\n",
" Qmax = np.average(cum_rewards)\n",
" Qbest = Q\n",
" cum_rewards=[]"
"#code block 11"
]
},
{
"source": [
"What you may notice from those results:\n",
"* We are very close achieving the goal of getting 195 cumulative reward over 100+ consecutive runs of the simulation, or we may have actually achieved it! Even if we get smaller numbers, we still do not know, because we average over 5000 runs, and only 100 runs is required in the formal criteria.\n",
"* Sometimes the reward start to drop, which means that we can \"destroy\" already learnt values in Q-Table with the ones that make situation worse\n",
"\n",
"This is more clearly visible if we plot training progress.\n",
"\n",
"## Plotting Training Progress\n",
"\n",
"During training, we have collected the cumulative reward value at each of the iterations into `rewards` vector. Here is how it looks when we plot it against the iteration number:"
"## Plotting Training Progress"
],
"cell_type": "markdown",
"metadata": {}
@ -542,25 +321,12 @@
}
],
"source": [
"def running_average(x,window):\n",
" return np.convolve(x,np.ones(window)/window,mode='valid')\n",
"\n",
"plt.plot(running_average(rewards,100))"
"#code block 12"
]
},
{
"source": [
"## Varying Hyperparameters\n",
"\n",
"To make learning more stable, it makes sense to adjust some of our hyperparameters during training. In particular:\n",
"* For **learning rate**, `alpha`, we may start with values close to 1, and then keep decreasing the parameter. With time, we will be getting good probability values in Q-Table, and thus we should be adjusting them slightly, and not overwriting completely with new values.\n",
"* We may want to increase the `eplilon` slowly, in order to be exploring less, and expliting more. It probably makes sense to start with lower value of `epsilon`, and move up to almost 1\n",
"\n",
"> **Task 1**: Play with hyperparameter values and see if you can achieve higher cumulative reward. Are you getting above 195?\n",
"\n",
"> **Task 2**: To formally solve the problem, you need to get 195 average reward across 100 consecutive runs. Measure that during training and make sure that you have formally solved the problem!\n",
"\n",
"## Seeing the Result in Action\n",
"## Varying Hyperparameters and Seeing the Result in Action\n",
"\n",
"Now it would be interesting to actually see how the trained model behaves. Let's run the simulation, and we will be following the same action selection strategy as during training: sampling according to the probability distribution in Q-Table: "
],
@ -573,24 +339,13 @@
"metadata": {},
"outputs": [],
"source": [
"obs = env.reset()\n",
"done = False\n",
"while not done:\n",
" s = discretize(obs)\n",
" env.render()\n",
" v = probs(np.array(qvalues(s)))\n",
" a = random.choices(actions,weights=v)[0]\n",
" obs,_,done,_ = env.step(a)\n",
"env.close()"
"# code block 13"
]
},
{
"source": [
"> **Task 3**: Here, we were using the final copy of Q-Table, which may not be the best one. Remember that we have stored the best-performing Q-Table into `Qbest` variable! Try the same example with the best-performing Q-Table by copying `Qbest` over to `Q` and see if you notice the difference.\n",
"\n",
"> **Task 4**: Here we were not selecting the best action on each step, but rather sampling with corresponding probability distribution. Would it make more sense to always select the best action, with highest Q-Table value? This can be easily done by using `np.argmax` function to find out the action number corresponding to highers Q-Table value. Implement this strategy and see if it improves the balancing.\n",
"\n",
"## Saving result to animated GIF\n",
"## Saving result to an animated GIF\n",
"\n",
"If you want to impress your friends, you may want to send them the animated GIF picture of the balancing pole. To do this, we can invoke `env.render` to produce an image frame, and then save those to animated GIF using PIL library:"
],
@ -628,22 +383,6 @@
"ims[0].save('images/cartpole-balance.gif',save_all=True,append_images=ims[1::2],loop=0,duration=5)\n",
"print(i)"
]
},
{
"source": [
"## Conclusion\n",
"\n",
"We have now learnt how to train agents to achieve good results just by providing them a reward function that defines the desired state of the game, and by giving it an opportunity to intelligently explore the search space. We have successfully applied Q-Learning algorithm in the cases of discrete and continuous environments, but with discrete actions. In the are of reinforcement learning, we need to further study situations where action state is also continuous, and when observation space is much more complex, such as the image from Atari game screen. In those problems we often need to use more powerful machine learning techniques, such as neural networks, in order to achieve good results. Those more advanced topics are the subject of more advanced Deep Reinforcement Learning course."
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
]
}

File diff suppressed because one or more lines are too long

@ -32,8 +32,8 @@ The main difference between other types of machine learning and RL is that in RL
## Lessons
1. [Introduction to Reinforcement Learning and Q-Learning](1-QLearning/README.md)
2. [Using gym simulation environment](2-Gym/README.md)
1. [Introduction to reinforcement learning and Q-Learning](1-QLearning/README.md)
2. [Using a gym simulation environment](2-Gym/README.md)
## Credits
