Multivariate Logistic Regression Demo

Source: 🤖Homemade Machine Learning repository


Logistic regression is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, logistic regression is a predictive analysis. It is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.

Logistic Regression is used when the dependent variable (target) is categorical.

For example:

  • To predict whether an email is spam (1) or not spam (0).
  • Whether an online transaction is fraudulent (1) or not (0).
  • Whether a tumor is malignant (1) or not (0).
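Under the hood, logistic regression converts a weighted sum of the input features into a probability between 0 and 1 using the sigmoid (logistic) function, and then thresholds that probability to get the binary answer. Here is a minimal sketch; the feature values and parameters below are made up purely for illustration:

import numpy as np

def sigmoid(z):
    # Squash any real number into the (0, 1) range.
    return 1 / (1 + np.exp(-z))

# Made-up example: one email described by a bias feature and two feature values,
# plus made-up model parameters (thetas).
features = np.array([1.0, 2.5, 0.3])
thetas = np.array([-1.0, 0.8, 1.2])

spam_probability = sigmoid(thetas @ features)  # around 0.8 for these made-up numbers
is_spam = spam_probability >= 0.5              # threshold the probability to get the binary answer
print(spam_probability, is_spam)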

Demo Project: In this example we will train a clothes classifier that recognizes clothing types (10 categories) from 28x28 pixel images.

In [1]:
# To make debugging of the logistic_regression module easier we enable the autoreload feature for imported modules.
# This way you may change the code of the logistic_regression library and all changes will be picked up here.
%load_ext autoreload
%autoreload 2

# Add project root folder to module loading paths.
import sys
sys.path.append('../..')

Import Dependencies

  • pandas - library that we will use for loading and displaying the data in a table
  • numpy - library that we will use for linear algebra operations
  • matplotlib - library that we will use for plotting the data
  • math - math library that we will use to calculate square roots, etc.
  • logistic_regression - custom implementation of logistic regression
In [2]:
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math

# Import custom logistic regression implementation.
from homemade.logistic_regression import LogisticRegression

Load the Data

In this demo we will use a sample of the Fashion MNIST dataset in CSV format.

Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.

Instead of using the full dataset with 60000 training examples, we will use a reduced dataset of just 5000 examples that we will also split into training and testing sets.

Each row in the dataset consists of 785 values: the first value is the label (a category from 0 to 9) and the remaining 784 values (a 28x28 pixel image) are the pixel values (a number from 0 to 255).

Each training and test example is assigned to one of the following labels:

  • 0 T-shirt/top
  • 1 Trouser
  • 2 Pullover
  • 3 Dress
  • 4 Coat
  • 5 Sandal
  • 6 Shirt
  • 7 Sneaker
  • 8 Bag
  • 9 Ankle boot
In [3]:
# Load the data.
data = pd.read_csv('../../data/fashion-mnist-demo.csv')

# Let's create the mapping between the numeric category and the category name.
label_map = {
    0: 'T-shirt/top',
    1: 'Trouser',
    2: 'Pullover',
    3: 'Dress',
    4: 'Coat',
    5: 'Sandal',
    6: 'Shirt',
    7: 'Sneaker',
    8: 'Bag',
    9: 'Ankle boot',
}

# Print the data table.
data.head(10)
Out[3]:
label pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 pixel8 pixel9 ... pixel775 pixel776 pixel777 pixel778 pixel779 pixel780 pixel781 pixel782 pixel783 pixel784
0 2 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
1 9 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
2 6 0 0 0 0 0 0 0 5 0 ... 0 0 0 30 43 0 0 0 0 0
3 0 0 0 0 1 2 0 0 0 0 ... 3 0 0 0 0 1 0 0 0 0
4 3 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
5 4 0 0 0 5 4 5 5 3 5 ... 7 8 7 4 3 7 5 0 0 0
6 4 0 0 0 0 0 0 0 0 0 ... 14 0 0 0 0 0 0 0 0 0
7 5 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
8 4 0 0 0 0 0 0 3 2 0 ... 1 0 0 0 0 0 0 0 0 0
9 8 0 0 0 0 0 0 0 0 0 ... 203 214 166 0 0 0 0 0 0 0

10 rows × 785 columns
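As a quick sanity check of the format described above, you might verify the table dimensions and the set of labels. This is a small optional sketch; the label column name is taken from the table header above:

# The dataset should contain 785 columns per row: 1 label + 784 (28x28) pixel values.
print(data.shape)

# The labels should cover the 10 clothing categories (0 to 9).
print(sorted(data['label'].unique()))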

Plot the Data

Let's take the first 25 rows of the dataset and display them as images to see examples of the clothes we will be working with.

In [4]:
# How many images to display.
numbers_to_display = 25

# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(numbers_to_display))

# Make the plot a little bit bigger than default one.
plt.figure(figsize=(10, 10))

# Go through the first images in a training set and plot them.
for plot_index in range(numbers_to_display):
    # Extract image data.
    digit = data[plot_index:plot_index + 1].values
    digit_label = digit[0][0]
    digit_pixels = digit[0][1:]

    # Calculate image size (remember that each picture has square proportions).
    image_size = int(math.sqrt(digit_pixels.shape[0]))
    
    # Convert image vector into the matrix of pixels.
    frame = digit_pixels.reshape((image_size, image_size))
    
    # Plot the image matrix.
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.imshow(frame, cmap='Greys')
    plt.title(label_map[digit_label])
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)

# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()

Split the Data Into Training and Test Sets

In this step we will split our dataset into training and testing subsets (in an 80/20 proportion).

The training dataset will be used for training our model. The testing dataset will be used for validating the model. All data from the testing dataset will be new to the model, so we may check how accurate the model's predictions are.

In [5]:
# Split the dataset into training and test sets with an 80/20 proportion.
# Function sample() returns a random sample of items.
pd_train_data = data.sample(frac=0.8)
pd_test_data = data.drop(pd_train_data.index)

# Convert training and testing data from Pandas to NumPy format.
train_data = pd_train_data.values
test_data = pd_test_data.values

# Extract training/test labels and features.
num_training_examples = 3000
x_train = train_data[:num_training_examples, 1:]
y_train = train_data[:num_training_examples, [0]]

x_test = test_data[:, 1:]
y_test = test_data[:, [0]]
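Note that sample() above picks rows at random, so the exact split (and the precision numbers later in this demo) may vary slightly between runs. If you want a reproducible split, you could pass a fixed seed via pandas' random_state parameter; a small sketch, where the seed value 42 is arbitrary:

# Fix the random seed so the 80/20 split is the same on every run.
pd_train_data = data.sample(frac=0.8, random_state=42)
pd_test_data = data.drop(pd_train_data.index)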

Init and Train Logistic Regression Model

☝🏻This is the place where you might want to play with model configuration.

  • polynomial_degree - the degree of additional polynomial features (x1^2 * x2, x1^2 * x2^2, ...). Adding such features makes the decision boundary more curved.
  • max_iterations - the maximum number of iterations that the gradient descent algorithm will use to find the minimum of the cost function. Too few iterations may prevent gradient descent from reaching the minimum; too many will make the algorithm run longer without improving its accuracy.
  • regularization_param - the parameter that helps to fight overfitting. The higher the parameter, the simpler the model will be (see the cost function sketch right after this list).
  • sinusoid_degree - the degree of the sinusoid multipliers of additional features (sin(x), sin(2*x), ...). This allows you to curve the predictions by adding a sinusoidal component to the prediction curve.
  • normalize_data - a boolean flag that indicates whether data normalization is needed or not.
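To make the role of regularization_param more concrete, here is a minimal sketch of the regularized logistic cost for a single binary (one-vs-all) classifier. It only illustrates the formula and is not the library's actual implementation:

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def regularized_cost(theta, x, y, regularization_param):
    # x: (m, n) feature matrix with a leading column of ones (the bias feature),
    # y: (m, 1) vector of 0/1 labels, theta: (n, 1) parameter vector.
    m = y.shape[0]
    predictions = sigmoid(x @ theta)
    # Standard logistic (cross-entropy) loss...
    log_loss = -(y * np.log(predictions) + (1 - y) * np.log(1 - predictions)).mean()
    # ...plus a penalty on parameter magnitudes (the bias theta[0] is conventionally not penalized).
    penalty = (regularization_param / (2 * m)) * np.sum(theta[1:] ** 2)
    return log_loss + penalty

The larger regularization_param is, the more heavily large thetas are penalized, which pushes the model towards simpler decision boundaries.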
In [6]:
# Set up logistic regression parameters.
max_iterations = 10000  # Max number of gradient descent iterations.
regularization_param = 25  # Helps to fight model overfitting.
polynomial_degree = 0  # The degree of additional polynomial features.
sinusoid_degree = 0  # The degree of sinusoid parameter multipliers of additional features.
normalize_data = True  # Whether we need to normalize the data to make it more uniform or not.

# Init logistic regression instance.
logistic_regression = LogisticRegression(x_train, y_train, polynomial_degree, sinusoid_degree, normalize_data)

# Train logistic regression.
(thetas, costs) = logistic_regression.train(regularization_param, max_iterations)

Let's see what the model parameters (thetas) look like. For each clothing class (from 0 to 9) we've just trained a set of 785 parameters: a bias term plus one theta for each of the 784 image pixels. These parameters represent how important every pixel is for recognizing the specific class.

In [7]:
# Print thetas table.
pd.DataFrame(thetas)
Out[7]:
0 1 2 3 4 5 6 7 8 9 ... 775 776 777 778 779 780 781 782 783 784
0 -6.077491 -0.054096 -0.072268 0.031625 0.074728 0.090190 0.137357 -0.128473 -0.072395 0.011406 ... -0.071843 -0.054607 0.139552 0.056198 0.197529 0.005156 -0.083541 -0.040630 -0.009371 -0.009137
1 -6.871615 -0.002022 -0.002043 -0.013173 -0.019387 0.199802 -0.046613 0.038926 0.143524 0.172239 ... 0.100372 0.056467 -0.019896 -0.024661 -0.012431 -0.004531 -0.009489 -0.003141 -0.013222 -0.016581
2 -5.484414 -0.001377 0.065189 -0.223125 -0.037388 -0.061283 -0.142390 0.096851 -0.069638 -0.006118 ... -0.031186 0.160019 0.142373 0.137873 0.064698 0.043028 -0.085860 -0.019212 0.172103 0.179450
3 -6.479539 -0.025596 -0.025370 0.149444 -0.010553 -0.003944 0.052908 0.111660 -0.075223 -0.063640 ... 0.180960 -0.030979 -0.028157 -0.190387 -0.125556 -0.126413 -0.066519 -0.051773 -0.048392 -0.028799
4 -5.916161 -0.001441 -0.024669 0.023222 0.064555 -0.013986 -0.024218 -0.027205 -0.021830 -0.085304 ... -0.077075 0.155594 -0.070225 -0.066349 0.160192 0.074204 0.037278 -0.004909 -0.080100 -0.003308
5 -10.188132 -0.000555 -0.000587 -0.041603 -0.000990 -0.005956 -0.008254 -0.023400 -0.032593 -0.061880 ... -0.056071 -0.030024 0.052833 -0.038149 -0.042217 -0.107069 -0.038814 0.019141 -0.089063 -0.003123
6 -4.696624 0.087019 0.022476 0.068921 -0.200872 -0.088585 0.084472 -0.050343 0.062190 -0.034068 ... -0.041652 -0.046297 -0.032799 0.014693 -0.286354 -0.072298 0.229458 0.103750 0.052720 -0.094161
7 -9.695491 -0.000492 -0.000397 -0.000271 -0.000577 -0.004215 -0.002173 -0.001870 -0.002409 -0.003127 ... -0.036599 -0.040486 -0.014132 -0.008360 -0.003225 -0.002326 -0.001465 -0.007164 -0.004659 0.000013
8 -6.024387 -0.007143 -0.010259 -0.046641 -0.034966 0.056934 0.012818 0.040539 0.104894 0.079651 ... -0.029042 0.079349 -0.017436 0.021008 -0.067486 -0.120647 -0.120105 -0.044941 -0.008850 -0.033903
9 -8.194469 0.000054 -0.000283 -0.000571 -0.000219 -0.015695 -0.005539 -0.002578 -0.001912 -0.002804 ... -0.010609 -0.018612 -0.045792 -0.009738 -0.005582 0.034933 0.029350 0.033669 0.045969 0.003467

10 rows × 785 columns
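Prediction then works in a one-vs-all fashion: each row of thetas scores an image for its class, and the class with the highest sigmoid probability wins. Below is a rough sketch of this idea only; it skips the feature normalization (and any polynomial/sinusoid features) the library applies internally, and it assumes the rows of thetas follow the sorted label order 0-9, so it will not exactly reproduce the library's predict() output:

def predict_class(pixels, thetas):
    # Prepend the bias feature (1) and score the image against all 10 classifiers.
    features = np.hstack(([1], pixels))
    scores = thetas @ features
    # The sigmoid is monotonic, so the highest raw score also has the highest probability.
    return np.argmax(scores)

# Rough illustration on the first training image (raw, unnormalized pixels).
print(label_map[predict_class(x_train[0], thetas)])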

Illustrate the Model Parameters for Each Class

Each of the ten one-vs-all classifiers learned something from the training process. What it learned is represented by its theta parameters. Each classifier has 28x28 input thetas (one for each input image pixel), and each theta represents how valuable that pixel is for recognizing the particular clothing class. So let's plot how valuable each pixel of the input image is for each class, based on its theta values.

In [8]:
# How many images to display.
numbers_to_display = 9

# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(numbers_to_display))

# Make the plot a little bit bigger than default one.
plt.figure(figsize=(10, 10))

# Go through the thetas and print them.
for plot_index in range(numbers_to_display):
    # Extract thetas data.
    digit_pixels = thetas[plot_index][1:]

    # Calculate image size (remember that each picture has square proportions).
    image_size = int(math.sqrt(digit_pixels.shape[0]))
    
    # Convert image vector into the matrix of pixels.
    frame = digit_pixels.reshape((image_size, image_size))
    
    # Plot the thetas matrix.
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.imshow(frame, cmap='Greys')
    plt.title(plot_index)
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)

# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()

Analyze Gradient Descent Progress

The plot below illustrates how the cost function value changes over the iterations. You should see it decreasing.

If the cost function value increases instead, it may mean that gradient descent has overshot the cost function minimum and is moving further away from it with each step.

From this plot you may also get an understanding of how many iterations you need to get an optimal value of the cost function.
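In addition to the plot below, a quick numeric sanity check is to compare the first and the last recorded cost for every one-vs-all classifier; a small sketch that relies on the costs list returned by train() above:

for index, label in enumerate(logistic_regression.unique_labels):
    # Each entry of costs holds the cost history for one class.
    print('{:>12s}: cost went from {:.4f} to {:.4f}'.format(
        label_map[label], costs[index][0], costs[index][-1]))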

In [9]:
# Draw gradient descent progress for each label.
labels = logistic_regression.unique_labels
for index, label in enumerate(labels):
    plt.plot(range(len(costs[index])), costs[index], label=label_map[labels[index]])

plt.xlabel('Gradient Steps')
plt.ylabel('Cost')
plt.legend()
plt.show()

Calculate Model Training Precision

Let's calculate how many of the training and test examples have been classified correctly. Normally we want the test precision to be as high as possible. If the training precision is high but the test precision is low, it may mean that the model is overfitted (it works really well with the training dataset but is not good at classifying new, unknown data from the test dataset). In this case you may want to play with the regularization_param parameter to fight the overfitting.

In [10]:
# Make training set predictions.
y_train_predictions = logistic_regression.predict(x_train)
y_test_predictions = logistic_regression.predict(x_test)

# Check what percentage of them are actually correct.
train_precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100
test_precision = np.sum(y_test_predictions == y_test) / y_test.shape[0] * 100

print('Training Precision: {:5.4f}%'.format(train_precision))
print('Test Precision: {:5.4f}%'.format(test_precision))
Training Precision: 93.8333%
Test Precision: 82.9000%
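To see which clothing types the classifier struggles with most, you could also break the test accuracy down per class. A rough sketch, assuming the y_test and y_test_predictions arrays computed above:

for label, name in label_map.items():
    # Select all test examples of this class and check how many were predicted correctly.
    mask = (y_test[:, 0] == label)
    if mask.any():
        class_accuracy = np.mean(y_test_predictions[mask, 0] == label) * 100
        print('{:>12s}: {:5.1f}%'.format(name, class_accuracy))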

Plot Test Dataset Predictions

In order to illustrate how our model classifies unknown examples, let's plot the first 64 predictions for the testing dataset. All green clothes on the plot below have been recognized correctly, while all the red clothes have been misclassified. On top of each image you may see the clothes class (type) that the classifier predicted for that image.

In [11]:
# How many images to display.
numbers_to_display = 64

# Calculate the number of cells that will hold all the images.
num_cells = math.ceil(math.sqrt(numbers_to_display))

# Make the plot a little bit bigger than default one.
plt.figure(figsize=(15, 15))

# Go through the first images in the test set and plot them.
for plot_index in range(numbers_to_display):
    # Extract image data.
    digit_label = y_test[plot_index, 0]
    digit_pixels = x_test[plot_index, :]
    
    # Predicted label.
    predicted_label = y_test_predictions[plot_index][0]

    # Calculate image size (remember that each picture has square proportions).
    image_size = int(math.sqrt(digit_pixels.shape[0]))
    
    # Convert image vector into the matrix of pixels.
    frame = digit_pixels.reshape((image_size, image_size))
    
    # Plot the image matrix.
    color_map = 'Greens' if predicted_label == digit_label else 'Reds'
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.imshow(frame, cmap=color_map)
    plt.title(label_map[predicted_label])
    plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False)

# Plot all subplots.
plt.subplots_adjust(hspace=0.5, wspace=0.5)
plt.show()