Source: 🤖Homemade Machine Learning repository
☝Before moving on with this demo you might want to take a look at:
Linear regression is a linear model, i.e. a model that assumes a linear relationship between the input variables (x) and the single output variable (y). More specifically, the output variable (y) can be calculated as a linear combination of the input variables (x).
Multivariate Linear Regression is a linear regression that has more than one input parameter and one output label.
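For example, with two input features the multivariate hypothesis is just a weighted sum of the inputs plus a bias term. A minimal sketch (the parameter names below are purely illustrative; the actual parameters are learned during training later in this demo):
# Illustrative only: a linear hypothesis with two input features.
# theta_0 is the bias (intercept), theta_1 and theta_2 are the feature weights.
def linear_hypothesis(x1, x2, theta_0, theta_1, theta_2):
    return theta_0 + theta_1 * x1 + theta_2 * x2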
Demo Project: In this demo we will build a model that predicts Happiness.Score for countries based on the Economy.GDP.per.Capita and Freedom parameters.
# To make debugging of linear_regression module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of linear_regression library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import plotly
import plotly.graph_objs as go
# Configure Plotly to be rendered inline in the notebook.
plotly.offline.init_notebook_mode()
# Import custom linear regression implementation.
from homemade.linear_regression import LinearRegression
In this demo we will use the World Happiness Report dataset for 2017.
# Load the data.
data = pd.read_csv('../../data/world-happiness-report-2017.csv')
# Print the data table.
data.head(10)
# Print histograms for each feature to see how they vary.
histograms = data.hist(grid=False, figsize=(10, 10))
In this step we will split our dataset into training and testing subsets (in an 80/20 proportion).
The training set will be used to train our linear model. The testing set will be used to validate the model. All data in the testing set is new to the model, so we can check how accurate its predictions are.
# Split data set on training and test sets with proportions 80/20.
# Function sample() returns a random sample of items.
train_data = data.sample(frac=0.8)
test_data = data.drop(train_data.index)
# Decide what fields we want to process.
input_param_name_1 = 'Economy..GDP.per.Capita.'
input_param_name_2 = 'Freedom'
output_param_name = 'Happiness.Score'
# Split training set input and output.
x_train = train_data[[input_param_name_1, input_param_name_2]].values
y_train = train_data[[output_param_name]].values
# Split test set input and output.
x_test = test_data[[input_param_name_1, input_param_name_2]].values
y_test = test_data[[output_param_name]].values
Let's visualize the training and test datasets to see the shape of the data.
# Configure the plot with training dataset.
plot_training_trace = go.Scatter3d(
x=x_train[:, 0].flatten(),
y=x_train[:, 1].flatten(),
z=y_train.flatten(),
name='Training Set',
mode='markers',
marker={
'size': 10,
'opacity': 1,
'line': {
'color': 'rgb(255, 255, 255)',
'width': 1
},
}
)
# Configure the plot with test dataset.
plot_test_trace = go.Scatter3d(
x=x_test[:, 0].flatten(),
y=x_test[:, 1].flatten(),
z=y_test.flatten(),
name='Test Set',
mode='markers',
marker={
'size': 10,
'opacity': 1,
'line': {
'color': 'rgb(255, 255, 255)',
'width': 1
},
}
)
# Configure the layout.
plot_layout = go.Layout(
title='Data Sets',
scene={
'xaxis': {'title': input_param_name_1},
'yaxis': {'title': input_param_name_2},
'zaxis': {'title': output_param_name}
},
margin={'l': 0, 'r': 0, 'b': 0, 't': 0}
)
plot_data = [plot_training_trace, plot_test_trace]
plot_figure = go.Figure(data=plot_data, layout=plot_layout)
# Render 3D scatter plot.
plotly.offline.iplot(plot_figure)
☝🏻This is the place where you might want to play with model configuration.
- num_iterations - the number of iterations the gradient descent algorithm will use to find the minimum of the cost function. Too few iterations may prevent gradient descent from reaching the minimum; too many will make the algorithm run longer without improving its accuracy.
- learning_rate - the size of the gradient descent step. A small step makes the algorithm run longer and will probably require more iterations to reach the minimum of the cost function. A big step may cause the algorithm to miss the minimum, so the cost function value grows with each new iteration.
- regularization_param - a parameter that helps fight overfitting. The higher the parameter, the simpler the model will be.
- polynomial_degree - the degree of additional polynomial features (x1^2 * x2, x1^2 * x2^2, ...). Adding such features lets you curve the predictions: the more features, the more curved the prediction surface will be (see the sketch below).
- sinusoid_degree - the degree of sinusoid multipliers for additional features (sin(x), sin(2*x), ...). This lets you curve the predictions by adding a sinusoidal component to the prediction surface.
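The exact feature expansion happens inside the homemade library, but conceptually a degree-2 polynomial expansion of the two inputs looks roughly like this (illustrative sketch only):
# Illustrative only: a degree-2 polynomial expansion of two input features.
# The homemade library generates its own feature set; this just shows the idea.
def polynomial_features_degree_2(x1, x2):
    return np.array([x1, x2, x1 ** 2, x1 * x2, x2 ** 2])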
# Set up linear regression parameters.
num_iterations = 500 # Number of gradient descent iterations.
regularization_param = 0 # Helps to fight model overfitting.
learning_rate = 0.01 # The size of the gradient descent step.
polynomial_degree = 0 # The degree of additional polynomial features.
sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.
# Init linear regression instance.
linear_regression = LinearRegression(x_train, y_train, polynomial_degree, sinusoid_degree)
# Train linear regression.
(theta, cost_history) = linear_regression.train(
learning_rate,
regularization_param,
num_iterations
)
# Print training results.
print('Initial cost: {:.2f}'.format(cost_history[0]))
print('Optimized cost: {:.2f}'.format(cost_history[-1]))
# Print model parameters
theta_table = pd.DataFrame({'Model Parameters': theta.flatten()})
theta_table.head()
The plot below illustrates how the cost function value changes over each iteration. You should see it decreasing.
If the cost function value increases, it may mean that gradient descent overshot the minimum and is moving further away from it with each step. In that case you might want to reduce the learning rate parameter (the size of the gradient step).
This plot also gives you a sense of how many iterations are needed to reach a near-optimal cost. In the current example you can see that there is not much sense in increasing the number of gradient descent iterations beyond 500, since it will not reduce the cost function significantly.
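For reference, the classic gradient descent update that these parameters control looks roughly like the sketch below (the actual implementation, including regularization, lives in the homemade library and may differ in details):
# Illustrative only: one unregularized gradient descent step for linear regression.
# features has shape (num_examples, num_features); theta and labels are column vectors.
def gradient_descent_step(theta, features, labels, learning_rate):
    num_examples = labels.shape[0]
    predictions = features @ theta
    gradient = (features.T @ (predictions - labels)) / num_examples
    return theta - learning_rate * gradient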
# Plot gradient descent progress.
plt.plot(range(num_iterations), cost_history)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Gradient Descent Progress')
plt.show()
Now that our model is trained, we may plot its predictions over the training and test datasets to see how well it fits the data.
# Generate different combinations of X and Y sets to build a predictions plane.
predictions_num = 10
# Find min and max values along X and Y axes.
x_min = x_train[:, 0].min()
x_max = x_train[:, 0].max()
y_min = x_train[:, 1].min()
y_max = x_train[:, 1].max()
# Generate a predefined number of values for each axis between the corresponding min and max values.
x_axis = np.linspace(x_min, x_max, predictions_num)
y_axis = np.linspace(y_min, y_max, predictions_num)
# Create empty vectors for X and Y axes predictions
# We're going to find cartesian product of all possible X and Y values.
x_predictions = np.zeros((predictions_num * predictions_num, 1))
y_predictions = np.zeros((predictions_num * predictions_num, 1))
# Find cartesian product of all X and Y values.
x_y_index = 0
for x_index, x_value in enumerate(x_axis):
for y_index, y_value in enumerate(y_axis):
x_predictions[x_y_index] = x_value
y_predictions[x_y_index] = y_value
x_y_index += 1
# Predict Z value for all X and Y pairs.
z_predictions = linear_regression.predict(np.hstack((x_predictions, y_predictions)))
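As a side note, the same grid of X/Y pairs could be built without the explicit double loop by using np.meshgrid; an equivalent sketch:
# Equivalent grid construction with np.meshgrid (same pairs, no explicit loops).
# Alternative variable names are used here to avoid overwriting the arrays above.
x_grid, y_grid = np.meshgrid(x_axis, y_axis, indexing='ij')
x_predictions_alt = x_grid.reshape(-1, 1)
y_predictions_alt = y_grid.reshape(-1, 1)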
# Plot training data with predictions.
# Configure the plot with the predictions plane.
plot_predictions_trace = go.Scatter3d(
x=x_predictions.flatten(),
y=y_predictions.flatten(),
z=z_predictions.flatten(),
name='Prediction Plane',
mode='markers',
marker={
'size': 1,
},
opacity=0.8,
surfaceaxis=2,
)
plot_data = [plot_training_trace, plot_test_trace, plot_predictions_trace]
plot_figure = go.Figure(data=plot_data, layout=plot_layout)
plotly.offline.iplot(plot_figure)
Let's calculate the value of the cost function for the training and test datasets. The lower this value, the better.
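For reference, an unregularized mean squared error cost can be computed roughly as in the sketch below (the exact formula used by get_cost, including the regularization term, is defined in the homemade library):
# Illustrative only: unregularized mean squared error cost.
def mse_cost(predictions, labels):
    num_examples = labels.shape[0]
    return np.sum((predictions - labels) ** 2) / (2 * num_examples)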
train_cost = linear_regression.get_cost(x_train, y_train, regularization_param)
test_cost = linear_regression.get_cost(x_test, y_test, regularization_param)
print('Train cost: {:.2f}'.format(train_cost))
print('Test cost: {:.2f}'.format(test_cost))
Let's now render the table of predictions that our trained model makes for unseen data (the test dataset). You should see that the predicted happiness scores are quite close to the known happiness scores from the test dataset.
test_predictions = linear_regression.predict(x_test)
test_predictions_table = pd.DataFrame({
'Economy GDP per Capita': x_test[:, 0].flatten(),
'Freedom': x_test[:, 1].flatten(),
'Test Happiness Score': y_test.flatten(),
'Predicted Happiness Score': test_predictions.flatten(),
'Prediction Diff': (y_test - test_predictions).flatten()
})
test_predictions_table.head(10)