…something not as hard as you may think

TL;DR

If you are here for a quick solution that just works, then here it is in just a few lines of code:

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='sgd', loss='binary_crossentropy')  # binary cross-entropy (BCE) loss
model.fit(x_train, y_train, epochs=100)  # x_train, y_train: your training data

The long way

Now, if you’re still with me, it means you don’t just want to copy and paste a few lines of code, but to see how you can actually implement this method yourself from scratch.

TensorFlow is a rich library with many APIs. Among them is the Keras API, which can be used to build a logistic regression model very quickly, as you saw above. And there’s nothing wrong with that. If you have to implement a complex deep learning model, perhaps one you saw in a new paper, Keras saves you a lot of time; it lets you focus on what’s important, without having to care about every math operation involved.

But if your purpose is to learn a basic machine learning technique like logistic regression, it is worth using TensorFlow’s core math functions and implementing it from scratch.

Knowing TensorFlow’s lower-level math APIs can also help you build a deep learning model when you need to implement a custom training loop, or a custom activation or loss function. It can also be more fun!

So, let’s get started!


To better understand what we’re going to do next, you can read my previous article about logistic regression:

What’s our plan for implementing Logistic Regression with TensorFlow?

Let’s first think of the underlying math that we want to use.

There are many ways to define a loss function and then find its optimal parameters. Among them, our LogisticRegression class will implement the following 3 ways of learning the parameters:

  • We will rewrite the logistic regression equation so that we turn it into a least-squares linear regression problem with different labels and then, we use the closed-form formula to find the weights:

$$w = (X^T X)^{-1} X^T \ln\frac{y}{1-y}$$

(the logarithm is applied element-wise to y)

  • Like above, we turn logistic regression into least-squares linear regression, but instead of the closed-form formula, we use stochastic gradient descent (SGD) to minimize the following loss function:

$$f(w) = \left(Xw - \ln\frac{y}{1-y}\right)^T \left(Xw - \ln\frac{y}{1-y}\right)$$

which was obtained by substituting the y in the sum of squared errors loss

$$f(w) = (Xw - y)^T (Xw - y)$$

with the right-hand side of

$$\tilde{y} = \ln\frac{y}{1-y}$$

  • We use the maximum likelihood estimation (MLE) method: we write the likelihood function, play around with it, restate it as a minimization problem, and apply SGD with the following loss function:

$$h(w) = -\sum_{i=1}^{n}\Big[y_i \ln\big(\sigma(x_i w)\big) + (1 - y_i)\ln\big(1 - \sigma(x_i w)\big)\Big]$$

where $\sigma(z) = \frac{1}{1 + e^{-z}}$ is the logistic function and $x_i$ is the i-th row of X.

In the above equations, X is the input matrix that contains observations on the row axis and features on the column axis; y is a column vector that contains the classification labels (0 or 1); f is the sum of squared errors loss function; h is the loss function for the MLE method.

If you want to find out more about how we obtained the above equations, please check out the above-linked article.

So now, this is our goal: translate the above equations into code. And we’ll use TensorFlow for that.

We plan to use an object-oriented approach for the implementation. We’ll create a LogisticRegression class with 3 public methods: fit(), predict(), and accuracy().

Among fit’s parameters, one determines how our model learns. This parameter is named method (not to be confused with a method of a class) and it can take the following strings as values: 'ols_solve' (OLS stands for Ordinary Least Squares), 'ols_sgd', and 'mle_sgd'.

To keep the fit() method from getting too long, we’ll split the code into 3 private methods, each responsible for one way of finding the parameters.

We will have the __ols_solve() private method for applying the closed-form formula.

In this method and in the other methods that use the OLS approach, we will use the constant EPS to make sure the labels are not exactly 0 or 1, but something in between. That’s to avoid getting plus or minus infinity for the logarithm in the equations above.

In __ols_solve() we first check if X has full column rank so that we can apply this method. Then we force y to be between EPS and 1-EPS. The ols_y variable holds the labels of the ordinary least-squares linear regression problem that’s equivalent to our logistic regression problem. Basically, we transform the labels that we have for logistic regression so that they are compliant with the linear regression equations. After that, we apply the closed-form formula using TensorFlow functions.
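Here is a minimal sketch of what __ols_solve() could look like (the exact signature, the EPS value, and the error messages are my assumptions, not necessarily the article’s verbatim code):

import tensorflow as tf

EPS = 1e-7  # assumed value; keeps the labels strictly inside (0, 1)

# inside the LogisticRegression class
def __ols_solve(self, x, y):
    rows, cols = x.shape
    # the closed-form formula requires X to have full column rank
    if rows < cols or tf.linalg.matrix_rank(x) < cols:
        print("x doesn't have full column rank; method='ols_solve' cannot be used.")
        return
    # force the labels between EPS and 1-EPS, then map them to the
    # labels of the equivalent linear regression problem
    y = tf.clip_by_value(y, EPS, 1 - EPS)
    ols_y = tf.math.log(y / (1 - y))
    # closed-form OLS: w = (X^T X)^{-1} X^T ols_y
    xt = tf.transpose(x)
    self.weights = tf.linalg.inv(xt @ x) @ xt @ ols_y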

For the 2 SGD-based algorithms, it would be redundant to implement 2 separate methods, since they would share almost all of their code; the only difference is the part where we compute the loss value, as we have 2 different loss functions.

So we’ll create a generic __sgd() method that does not rely on a particular loss function. Instead, it expects as a parameter a function responsible for computing the loss value, which __sgd() will call.

In this method, we first initialize the weights to a random column vector with values drawn from a normal distribution with mean 0 and a standard deviation of 1/(number of features). The intuition for this standard deviation is that the more features we have, the smaller the weights need to be for training to converge (and not blow up our gradients). Then we pass through the whole dataset iterations times. At the start of each pass we randomly shuffle the dataset; then, for each batch of data, we compute the loss value using the loss_fn function received as a parameter, use TensorFlow to take the gradient of this loss with respect to (w.r.t.) self.weights, and update the weights.

The loss needs to be computed inside a with tf.GradientTape() as tape: block. This tells TensorFlow to keep track of the operations applied, so that it knows how to compute the gradient.

Then, to take the gradient of the loss w.r.t. the weights, we use grads = tape.gradient(loss, self.weights), and to subtract the gradient multiplied by the learning rate we use self.weights.assign_sub(learning_rate*grads).
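Putting it all together, __sgd() might look roughly like this (the parameter names iterations, learning_rate, and batch_size and their defaults are assumptions):

# inside the LogisticRegression class
def __sgd(self, x, y, loss_fn, iterations=100, learning_rate=0.01, batch_size=32):
    rows, cols = x.shape
    # random init: normal with mean 0 and std 1/(number of features)
    self.weights = tf.Variable(tf.random.normal(shape=(cols, 1), stddev=1.0 / cols))
    for _ in range(iterations):
        # shuffle the dataset at the start of each pass
        idx = tf.random.shuffle(tf.range(rows))
        x_shuffled, y_shuffled = tf.gather(x, idx), tf.gather(y, idx)
        for start in range(0, rows, batch_size):
            x_batch = x_shuffled[start:start + batch_size]
            y_batch = y_shuffled[start:start + batch_size]
            # record the operations so TensorFlow knows how to differentiate the loss
            with tf.GradientTape() as tape:
                loss = loss_fn(x_batch, y_batch)
            grads = tape.gradient(loss, self.weights)
            self.weights.assign_sub(learning_rate * grads)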

For 'ols_sgd' and 'mle_sgd' we’ll create 2 private methods, __sse_loss() and __mle_loss(), that compute and return the loss value for these 2 different techniques.

For these 2 methods, we simply apply the formulas for f and h using TensorFlow’s math functions.
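As a sketch, the two loss functions could be implemented as follows (again assuming the EPS constant from above):

# inside the LogisticRegression class
def __sse_loss(self, x, y):
    # f(w) = (Xw - y~)^T (Xw - y~), where y~ = ln(y / (1 - y))
    y = tf.clip_by_value(y, EPS, 1 - EPS)
    ols_y = tf.math.log(y / (1 - y))
    err = x @ self.weights - ols_y
    return tf.transpose(err) @ err

def __mle_loss(self, x, y):
    # h(w) = -sum[y*ln(sigmoid(Xw)) + (1-y)*ln(1-sigmoid(Xw))]
    y_hat = tf.sigmoid(x @ self.weights)
    y_hat = tf.clip_by_value(y_hat, EPS, 1 - EPS)  # guard the logarithms
    return -tf.reduce_sum(y * tf.math.log(y_hat) + (1 - y) * tf.math.log(1 - y_hat))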

So, when fit() is called with method='ols_solve' we call __ols_solve(); when method='ols_sgd' we call __sgd() with loss_fn=self.__sse_loss; and when method='mle_sgd' we call __sgd() with loss_fn=self.__mle_loss.
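So the dispatching inside fit() can be as simple as the following sketch (the hyperparameter names and defaults are assumptions):

# inside the LogisticRegression class
def fit(self, x, y, method, iterations=100, learning_rate=0.01, batch_size=32):
    if method == 'ols_solve':
        self.__ols_solve(x, y)
    elif method == 'ols_sgd':
        self.__sgd(x, y, self.__sse_loss, iterations, learning_rate, batch_size)
    elif method == 'mle_sgd':
        self.__sgd(x, y, self.__mle_loss, iterations, learning_rate, batch_size)
    else:
        print(f"Unknown method: '{method}'")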

In predict() we first check whether fit() was called previously by looking for the weights attribute (fit() is the only method that creates it). Then we check whether the shapes of the input matrix x and the weights vector allow multiplication; otherwise, we return error messages. If everything is OK, we do the multiplication and pass the result through the logistic function.
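A sketch of predict() along these lines (the error messages are mine):

# inside the LogisticRegression class
def predict(self, x):
    if not hasattr(self, 'weights'):
        print('Cannot predict. You should call the fit() method first.')
        return None
    if x.shape[1] != self.weights.shape[0]:
        print('Shapes do not match: x and the weights vector cannot be multiplied.')
        return None
    # pass Xw through the logistic (sigmoid) function
    return tf.sigmoid(x @ self.weights)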

In accuracy() we make predictions using the above method, then check whether the shape of the predictions matches that of the true labels; otherwise, we show an error message. After that, we make sure both the predictions and the true labels have values of either 0 or 1 by a simple rule: if a value is >= 0.5, consider it a 1; otherwise, a 0.

To compute the accuracy, we check for equality between y and y_hat, which returns a vector of Boolean values. We then cast these Booleans to float (False becomes 0.0, and True becomes 1.0). The accuracy is simply the mean of these values.
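Which could translate to something like this sketch:

# inside the LogisticRegression class
def accuracy(self, x, y):
    y_hat = self.predict(x)
    if y_hat is None or y_hat.shape != y.shape:
        print('Shapes of predictions and true labels do not match.')
        return None
    # threshold at 0.5 so both vectors contain only 0s and 1s
    y = tf.cast(y >= 0.5, tf.float32)
    y_hat = tf.cast(y_hat >= 0.5, tf.float32)
    # equality -> Booleans -> floats -> mean = fraction of correct predictions
    return float(tf.reduce_mean(tf.cast(y == y_hat, tf.float32)))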

Here is the full code of the LogisticRegression class:


Now, we would like to test our LogisticRegression class with some real-world data. For that, we will use this heart disease dataset from Kaggle. You can read more about this dataset on Kaggle, but the main idea is to predict the “target” column (0 if healthy, 1 if the patient has heart disease) based on the other columns.

Below is the code that shows our LogisticRegression class in action (cells 1 & 2 are not shown below to avoid repetition; they appear in the snippet above).
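If you want to reproduce it, a test run could look roughly like this (the file name heart.csv, the 80/20 split, and the use of scikit-learn’s train_test_split are my assumptions; depending on the learning rate, you may also want to standardize the features first so that SGD converges):

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

# load the Kaggle heart disease dataset (file name assumed)
df = pd.read_csv('heart.csv')
x = df.drop(columns=['target']).values.astype(np.float32)
y = df[['target']].values.astype(np.float32)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

log_reg = LogisticRegression()
log_reg.fit(tf.constant(x_train), tf.constant(y_train), method='mle_sgd')
print('train accuracy:', log_reg.accuracy(tf.constant(x_train), tf.constant(y_train)))
print('test accuracy:', log_reg.accuracy(tf.constant(x_test), tf.constant(y_test)))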

As you can see, we were able to obtain a decent 80%+ accuracy both in training and testing with our from-scratch implementation.


You can see the full notebook on Kaggle.

I hope you found this information useful and thanks for reading!

This article is also posted on Medium here. Feel free to have a look!


Dorian

Passionate about Data Science, AI, Programming & Math
