The source of this map is this wonderful blog post.

This website guides you through each topic from start to finish via the Next buttons at the bottom of each page. Alternatively, you can jump straight to any specific topic from the contents below.

In supervised learning we have a set of training data as an input and a set of labels or “correct answers” for each training example as an output. We then train our model (the machine learning algorithm's parameters) to map the input to the output correctly (to make correct predictions). The ultimate purpose is to find model parameters that continue the correct input→output mapping (predictions) even for new, previously unseen input examples.

In regression problems we predict real-valued outputs. Basically, we try to draw a line/plane/hyperplane along the training examples.

*Usage examples: stock price forecasting, sales analysis, modeling the dependency between any continuous quantities, etc.*

- 📗 Math | Linear Regression - theory and links for further readings
- ⚙️ Demo | Univariate Linear Regression - implementation example
- ⚙️ Demo | Multivariate Linear Regression - implementation example
- ⚙️ Demo | Non Linear Regression - implementation example
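The univariate case above can be sketched in a few lines: fit `y ≈ w·x + b` by gradient descent on the mean squared error. This is a minimal illustration, not the demo's actual implementation; the data points and hyperparameters below are made up.

```python
import numpy as np

# Toy training set: y roughly follows y = 2x + 1 (hypothetical data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Model: prediction = w * x + b; fit w and b by gradient descent on MSE.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    error = w * x + b - y
    w -= lr * (2 * error * x).mean()  # d(MSE)/dw
    b -= lr * (2 * error).mean()      # d(MSE)/db

print(round(w, 2), round(b, 2))  # ≈ 1.96 1.1 (the least-squares fit)
```

The same loop generalizes to the multivariate case by replacing the scalar `w` with a weight vector and the products with dot products.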

In classification problems we split input examples by a certain characteristic.

*Usage examples: spam-filters, language detection, finding similar documents, handwritten letters recognition, etc.*

- 📗 Math | Logistic Regression - theory and links for further readings
- ▶️ Demo | Logistic Regression (Linear Boundary) - predict Iris flower `class` based on `petal_length` and `petal_width`

- ▶️ Demo | Logistic Regression (Non-Linear Boundary) - predict microchip `validity` based on `param_1` and `param_2`
- ▶️ Demo | Multivariate Logistic Regression | MNIST - recognize handwritten digits from `28x28` pixel images.
- ▶️ Demo | Multivariate Logistic Regression | Fashion MNIST - recognize clothes types from `28x28` pixel images.
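At its core, logistic regression with a linear boundary is linear regression pushed through a sigmoid and trained on cross-entropy loss. A minimal sketch on two hypothetical features (the data, learning rate, and iteration count below are invented for illustration):

```python
import numpy as np

# Two features per example (e.g. two petal measurements) and binary labels.
X = np.array([[1.0, 0.5], [1.2, 0.4], [0.9, 0.6],   # class 0
              [3.0, 1.5], [3.2, 1.4], [2.8, 1.6]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weights and bias by gradient descent on the cross-entropy loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(1000):
    p = sigmoid(X @ w + b)  # predicted probability of class 1
    grad = p - y            # derivative of cross-entropy w.r.t. the logits
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
print(preds)  # recovers the training labels on this separable toy set
```

Non-linear boundaries come from the same model applied to polynomial feature expansions; the multi-class MNIST demos train one such classifier per digit (one-vs-all).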

Unsupervised learning is a branch of machine learning that learns from data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data.

In clustering problems we split the training examples by unknown characteristics. The algorithm itself decides what characteristic to use for splitting.

*Usage examples: market segmentation, social networks analysis, organize computing clusters, astronomical data analysis, image compression, etc.*

- 📗 Reading and Wiki | K-means Algorithm - theory and links for further readings
- ▶️ Demo | K-means Algorithm - split Iris flowers into clusters based on `petal_length` and `petal_width`
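K-means alternates two steps until the centroids stop moving: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A minimal sketch on made-up 2-D points (the naive first/last-point initialization below is for illustration; real implementations typically initialize randomly or with k-means++):

```python
# A minimal K-means sketch on 2-D points, k = 2 (hypothetical data).
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
          (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]

def kmeans(points, k, iterations=20):
    centroids = [points[0], points[-1]]  # naive init for illustration
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(points, 2)
print(clusters)  # the two obvious groups of three points each
```

Note that the characteristic used for the split (distance in feature space) is discovered by the algorithm, not supplied as labels.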

Anomaly detection (also outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.

*Usage examples: intrusion detection, fraud detection, system health monitoring, removing anomalous data from the dataset etc.*

- 📗 Math | Anomaly Detection using Gaussian Distribution - theory and links for further readings
- ▶️ Demo | Anomaly Detection - find anomalies in server operational parameters like `latency` and `threshold`
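The Gaussian approach works by fitting a normal distribution to the data and flagging points whose probability density falls below a small threshold ε. A minimal univariate sketch (the latency values and ε below are invented; in practice ε is tuned on a labeled validation set):

```python
import math

# Hypothetical server latencies; 30.0 is the planted outlier.
latencies = [12.0, 11.5, 12.3, 11.8, 12.1, 11.9, 12.2, 30.0]

# Estimate the Gaussian's mean and variance from the data.
mu = sum(latencies) / len(latencies)
var = sum((x - mu) ** 2 for x in latencies) / len(latencies)

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

epsilon = 0.01  # density threshold below which a point is flagged
anomalies = [x for x in latencies if gaussian_pdf(x, mu, var) < epsilon]
print(anomalies)  # [30.0]
```

With several parameters (e.g. `latency` together with other metrics), the same idea applies with one Gaussian per feature (or a multivariate Gaussian), multiplying the per-feature densities.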

The neural network itself isn’t an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.

*Usage examples: as a substitute for all other algorithms in general, image recognition, voice recognition, image processing (applying a specific style), language translation, etc.*

- 📗 Math | Multilayer Perceptron - theory and links for further readings
- ▶️ Demo | Multilayer Perceptron | MNIST - recognize handwritten digits from `28x28` pixel images.
- ▶️ Demo | Multilayer Perceptron | Fashion MNIST - recognize the type of clothes from `28x28` pixel images.
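The multilayer perceptron's key idea is stacking layers with non-linear activations so the network can fit functions no single linear model can. A minimal sketch trained with plain backpropagation on XOR (the classic non-linearly-separable example); the layer sizes, learning rate, and iteration count are made up for illustration and this is not the demos' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not solvable by a single linear layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden units -> 1 output, sigmoid activations throughout.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print((out >= 0.5).astype(int).ravel())
```

The MNIST demos use the same structure at scale: a `28x28 = 784`-unit input layer, hidden layers, and a 10-unit output layer (one per class).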