diff --git a/2-Regression/4-Logistic/solution/lesson_4-R.ipynb b/2-Regression/4-Logistic/solution/lesson_4-R.ipynb
index 609e8301..59468575 100644
--- a/2-Regression/4-Logistic/solution/lesson_4-R.ipynb
+++ b/2-Regression/4-Logistic/solution/lesson_4-R.ipynb
@@ -430,8 +430,11 @@
 ">\r\n",
 "> Remember how `linear regression` often used `ordinary least squares` to arrive at a value? `Logistic regression` relies on the concept of 'maximum likelihood' using [`sigmoid functions`](https://wikipedia.org/wiki/Sigmoid_function). A Sigmoid Function on a plot looks like an `S shape`. It takes a value and maps it to somewhere between 0 and 1. Its curve is also called a 'logistic curve'. Its formula looks like this:\r\n",
 ">\r\n",
- "> ![](images/sigmoid.png)\r\n",
- ">\r\n",
+ "> \r\n",
+ "> $f(x) = \\frac{L}{1 + e^{-k(x - x_0)}}$\r\n",
+ "> \r\n",
+ ">\r\n",
+ " where the sigmoid's midpoint lies at $x = x_0$, $L$ is the curve's maximum value, and $k$ is the curve's steepness. If the outcome of the function is more than 0.5, the label in question will be given the class 1 of the binary choice. If not, it will be classified as 0.\r\n",
 "\r\n",
 "Let's begin by splitting the data into `training` and `test` sets. The training set is used to train a classifier so that it finds a statistical relationship between the features and the label value.\r\n",