[Interactive demo: a scatter plot where you can click anywhere to adjust the target hyperplane, with a live readout of the perceptron's three weights.]

The Perceptron

By Matt Mazur on August 28, 2014

The perceptron is a simple supervised learning algorithm: it learns a linear boundary that separates two classes of points.

This project is inspired by the neural network chapter in Daniel Shiffman's amazing Nature of Code book. In it, he describes how a perceptron works and provides easy-to-follow code examples and visualizations to assist in his explanations. You should check it out.

The main difference in this project is that you can adjust the red line (the target hyperplane) by clicking anywhere on the graph. This allows you to observe how the perceptron adapts over time.

What exactly am I looking at?

On the graph above there are 1,000 points. The perceptron's job is to figure out which of those points are above the target hyperplane and which are below it. If the perceptron calculates that a point is below the target hyperplane, it colors the point in; otherwise it leaves the point empty.
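To make that decision, a perceptron takes a weighted sum of the point's inputs and looks at its sign. Here is a minimal sketch of that step, not the demo's actual source: the names `weights`, `bias`, `Point`, and `predict` are illustrative assumptions.

```typescript
// A point on the graph.
type Point = { x: number; y: number };

// Three weights: one for x, one for y, one for a constant bias input.
// They start out random, so early guesses are essentially coin flips.
const weights: number[] = [
  Math.random() * 2 - 1,
  Math.random() * 2 - 1,
  Math.random() * 2 - 1,
];
const bias = 1;

// Weighted sum of the inputs passed through a sign activation:
// +1 means "above the hyperplane", -1 means "below" (colored in).
function predict(p: Point): number {
  const sum = weights[0] * p.x + weights[1] * p.y + weights[2] * bias;
  return sum >= 0 ? 1 : -1;
}
```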

Each time the perceptron comes up with an answer (above or below), the answer is compared to the real answer (whether the point is actually above or below the target hyperplane). The perceptron then feeds this information back into itself to improve the accuracy of its predictions.
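Continuing the sketch above, the feedback step is the classic perceptron learning rule: nudge each weight in proportion to the error and the corresponding input. `learningRate` and `train` are illustrative names, not the demo's code.

```typescript
// How far to move the weights on each mistake.
const learningRate = 0.01;

// label is the true answer: +1 if the point is really above
// the target hyperplane, -1 if it is really below.
function train(p: Point, label: number): void {
  const guess = predict(p);
  const error = label - guess; // 0 if the guess was right, ±2 if wrong
  weights[0] += error * p.x * learningRate;
  weights[1] += error * p.y * learningRate;
  weights[2] += error * bias * learningRate;
}
```

When the guess is correct the error is zero and nothing changes; when it's wrong, every weight shifts slightly toward the answer it should have given, which is why the predictions improve over time.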

The thin gray line is the current hyperplane: the boundary that the perceptron's weights currently define between the points it classifies as below the line and those it classifies as above. You can adjust the target hyperplane and watch how the current hyperplane changes as the perceptron adapts.
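The gray line can be recovered directly from the weights by setting the weighted sum to zero and solving for y. A hypothetical helper, continuing the earlier sketch:

```typescript
// Solve weights[0]*x + weights[1]*y + weights[2]*bias = 0 for y,
// giving the y-coordinate of the current hyperplane at a given x.
function currentHyperplaneY(x: number): number {
  return -(weights[0] * x + weights[2] * bias) / weights[1];
}
```

As training pushes the weights around, this derived line drifts toward the red target hyperplane.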

A single-layer perceptron is good at linearly separable problems like this one, but doesn't work well on more complicated tasks. For those we'll need a multi-layer perceptron, which we'll explore in future projects.