Logistic Regression
Binary prediction using the sigmoid function, built from scratch
Regression Algorithms (Series)
What is Logistic Regression?
Logistic Regression is one of many supervised machine learning algorithms, just like Linear Regression. Instead of predicting a continuous value, it predicts the probability of an event happening, i.e. whether something is true or false. So this algorithm is mostly used for binary classification problems. Here are some use cases of Logistic Regression:
- Predict if an email is spam or not.
- Predict if a credit card transaction is fraudulent or not.
- Predict if a customer will churn or not.
- Predict if a patient has cancer or not.
However, it also has the same limitations as Linear Regression, such as:
- It is a linear model, so it cannot capture complex non-linear relationships. In other words, it assumes that the data is linearly separable.
- It assumes that the observations are independent of each other, i.e. not correlated with one another.
- It is sensitive to outliers.
Mathematics Behind Logistic Regression
First, Logistic Regression still uses the linear equation from Linear Regression, expressed as:

$$f(x) = \beta_0 + \beta_1 x$$
Second, we are going to need the Sigmoid function. This function’s sole job is to convert the output of the linear equation into a probability value between $0$ and $1$:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$
where $z = f(x) = \beta_0 + \beta_1 x$. Then the Sigmoid function can be rewritten as:

$$\sigma(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$
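To get a feel for what the Sigmoid does, here is a quick numerical check (a sketch I am adding, not from the original post). Note that a naive `np.exp(-x)` overflows for large negative inputs; the variant below, based on the Exp-Normalize Trick cited in the References, avoids that:

```python
import numpy as np

def stable_sigmoid(z):
    # For z >= 0, 1 / (1 + exp(-z)) is safe; for z < 0 rewrite it as
    # exp(z) / (1 + exp(z)) so np.exp never sees a large positive argument.
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    exp_z = np.exp(z[~pos])
    out[~pos] = exp_z / (1.0 + exp_z)
    return out

print(stable_sigmoid(np.array([-1000.0, -2.0, 0.0, 2.0, 1000.0])))
# [0.         0.11920292 0.5        0.88079708 1.        ]
```

Whatever the input, the output always lands strictly between $0$ and $1$, which is what lets us read it as a probability.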
After acquiring the probability value from the Sigmoid function, we can use a threshold value to classify each instance into one class or the other. To generalize, we can use the following equation to predict the probability of an event occurring:

$$P(y = 1 \mid x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$$
Once we have the probability of an event occurring, we use a threshold value to round the probability to either $0$ or $1$: if $P(y = 1 \mid x) \geq 0.5$, the instance is classified as $1$; otherwise it is classified as $0$.
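As a quick worked example, take a flower with sepal length $7.0$ and, purely for illustration, parameter values close to the ones we will fit later in this post:

$$z = -13.0 + 2.05 \times 7.0 = 1.35, \qquad \sigma(1.35) = \frac{1}{1 + e^{-1.35}} \approx 0.79 \geq 0.5 \;\Rightarrow\; \hat{y} = 1$$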
Similar to what we did in the Linear Regression post, we need to estimate the best $\beta_0$ and $\beta_1$ using the Gradient Descent algorithm. What Gradient Descent does is update the $\beta_0$ and $\beta_1$ values based on the cost function and the learning rate.
Since this example is just a simple linear model, we are going to use the following equation to update the intercept and the coefficient:

$$\theta_j := \theta_j - \alpha \frac{\partial J}{\partial \theta_j}, \qquad \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right) x_j^{(i)}$$
where $\alpha$ is the learning rate, $\theta_j$ is the $j$-th parameter, $J$ is the cost function, and $x_j$ is the $j$-th feature.
Since we only have $\beta_0$ and $\beta_1$, we can simplify the equations above to:

$$\beta_0 := \beta_0 - \frac{\alpha}{m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right)$$

$$\beta_1 := \beta_1 - \frac{\alpha}{m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right) x^{(i)}$$
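For completeness (the post references $J$ without writing it out): the cost function that produces these update rules is the log loss, also known as binary cross-entropy,

$$J(\beta_0, \beta_1) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log \hat{y}^{(i)} + \left( 1 - y^{(i)} \right) \log \left( 1 - \hat{y}^{(i)} \right) \right]$$

where $\hat{y}^{(i)} = \sigma(\beta_0 + \beta_1 x^{(i)})$. Differentiating $J$ with respect to $\beta_0$ and $\beta_1$ yields exactly the sums in the update rules above.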
Implementation
First things first, we need to import the necessary libraries.
```python
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
```

Say that we only have one feature $x$, the sepal length, and we want to determine whether an instance is a Virginica or not. Let’s prepare the data.
```python
iris = load_iris()
sepal_length = iris.data[:, 0]
target = iris.target

is_virginica_dict = {0: 0, 1: 0, 2: 1}
is_virginica = np.array([is_virginica_dict[i] for i in target])

species_dict = {0: 'setosa', 1: 'versicolor', 2: 'virginica'}
species_name = np.array([species_dict[i] for i in target])
```

You might be wondering why we have `is_virginica_dict`.
I need that variable to collapse the three species into two classes, so that the non-Virginica samples sit at the bottom of the plot ($0$) and the Virginica samples sit at the top ($1$).
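A quick check (my addition, not part of the original walkthrough) confirms the mapping: the Iris dataset has 50 samples per species, so we end up with 100 negatives and 50 positives.

```python
print(np.unique(is_virginica, return_counts=True))
# (array([0, 1]), array([100,  50]))
```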
So let’s add a line like we did in the Linear Regression post, with $-1.2$ as the intercept and $0.28$ as the coefficient.
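The original figure isn’t reproduced here, but a minimal sketch of how such a plot could be drawn (assuming the straight line uses the same $-1.2$ intercept and $0.28$ coefficient that serve as the initial guess further below) is:

```python
xs = np.linspace(sepal_length.min(), sepal_length.max(), 100)

# scatter the 0/1 labels against sepal length, colored by species
sns.scatterplot(x=sepal_length, y=is_virginica, hue=species_name)
plt.plot(xs, -1.2 + 0.28 * xs, color='black', label='linear model')
plt.xlabel('sepal length (cm)')
plt.ylabel('is virginica')
plt.legend()
plt.show()
```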
It’s clear that our data do not follow the pattern that the straight line shows in the graph. Thus, we need the Sigmoid function to bend the line into an S-shaped curve. Let’s define the functions we need:
```python
def accuracy(y_pred, y):
    return np.sum(y_pred == y) / len(y)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def linear_function(intercept, coefficient, x):
    return intercept + coefficient * x

def threshold(x):
    return np.where(x > 0.5, 1, 0)

def gradient_descent(x, y, epochs, alpha=0.01):
    intercept, coefficient = -1.2, 0.28  # initial guess
    for _ in range(epochs):
        y_pred = np.array([
            sigmoid(linear_function(intercept, coefficient, i))
            for i in x
        ])
        # gradient of the log loss with respect to each parameter
        intercept = intercept - alpha * np.sum(y_pred - y) / len(y)
        coefficient = coefficient - alpha * np.sum((y_pred - y) * x) / len(y)
    return intercept, coefficient
```

Let’s train our model for $100{,}000$ epochs.
```python
intercept, coefficient = gradient_descent(sepal_length, is_virginica, 100000)
predicted_value = np.array([
    sigmoid(linear_function(intercept, coefficient, i))
    for i in sepal_length
])
corrected_prediction = threshold(predicted_value)
pred_acc = accuracy(corrected_prediction, is_virginica)

print(f'accuracy: {pred_acc:.4f}')        # 0.8000
print(f'intercept: {intercept:.4f}')      # -13.0048
print(f'coefficient: {coefficient:.4f}')  # 2.0548
```

Not bad: the accuracy is $0.8$, with $-13.0048$ as the intercept and $2.0548$ as the coefficient.
You can find the full code in this repository.
Conclusion
- Logistic Regression is a supervised machine learning algorithm that is used for binary classification problems.
- It uses the Sigmoid function to calculate the probability of an event occurring.
- It uses a threshold value to round the probability to either $0$ or $1$: if $P(y = 1 \mid x) \geq 0.5$, the instance is classified as $1$, otherwise as $0$.
References
- Vieira, Tim. Exp-Normalize Trick. https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/
- IBM. Logistic Regression. https://developer.ibm.com/articles/implementing-logistic-regression-from-scratch-in-python/