[Interactive demo: a Start button, a training-set table with operator tabs (XOR, OR, NAND, AND) showing Input 1, Input 2, Target, and Prediction for each input pair, an average-error sparkline, and an epoch counter.]

Neural Network

By Matt Mazur on October 15, 2014

This Emergent Mind project (#10!) implements a JavaScript-based neural network with backpropagation that can learn various logical operators.
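To give a sense of what's under the hood, here's a minimal sketch, not the project's actual code, of a 2-2-1 sigmoid network trained by backpropagation on one example at a time. All of the names, the weight layout, and the learning-rate handling are illustrative:

```javascript
// A minimal sketch, not the project's actual code: a 2-2-1 network with
// sigmoid activations, trained by backpropagation one example at a time.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }
function rand() { return Math.random() * 2 - 1; } // random weight in [-1, 1)

// Randomly initialized weights and biases (all names are illustrative).
let w1 = [[rand(), rand()], [rand(), rand()]]; // input i -> hidden j
let b1 = [rand(), rand()];
let w2 = [rand(), rand()];                     // hidden j -> output
let b2 = rand();

// Forward pass: compute the hidden activations and the output prediction.
function forward(x) {
  const h = [0, 1].map(j => sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + b1[j]));
  const o = sigmoid(h[0] * w2[0] + h[1] * w2[1] + b2);
  return { h, o };
}

// One backpropagation step for a single (inputs, target) example.
function train(x, target, lr) {
  const { h, o } = forward(x);
  // Output delta: derivative of the squared error through the sigmoid.
  const deltaO = (o - target) * o * (1 - o);
  // Hidden deltas: push the output delta back through the output weights.
  const deltaH = [0, 1].map(j => deltaO * w2[j] * h[j] * (1 - h[j]));
  // Gradient-descent weight updates.
  for (let j = 0; j < 2; j++) {
    w2[j] -= lr * deltaO * h[j];
    b1[j] -= lr * deltaH[j];
    for (let i = 0; i < 2; i++) w1[i][j] -= lr * deltaH[j] * x[i];
  }
  b2 -= lr * deltaO;
  return Math.abs(o - target); // per-example error, used for the average
}
```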

To begin the learning process, simply click the Start button above.

By default the neural network will learn the XOR operator, but you can have it learn a different operator by selecting a different training set.

As the neural network learns the operator, its predictions will get closer and closer to what the operator actually returns. For example, XOR should return 1 only when exactly one of its inputs is 1: 00 should return 0, 01 should return 1, 10 should return 1, and 11 should return 0. At first the neural network's predictions will be essentially random (they're driven by randomly initialized weights), but as each epoch passes and we train it on what the output should be for that operator, its predictions will converge on the correct values.
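In code, that XOR training set might look something like this (a hypothetical representation; the demo's internal format may differ):

```javascript
// Hypothetical training set for XOR: one row per input pair.
const XOR_TRAINING_SET = [
  { inputs: [0, 0], target: 0 },
  { inputs: [0, 1], target: 1 },
  { inputs: [1, 0], target: 1 },
  { inputs: [1, 1], target: 0 },
];
```

Switching to a different operator amounts to swapping out the target column: OR, NAND, and AND all share the same four input pairs.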

The average difference between the neural network's predictions and the actual values of the operator (the targets) is displayed as a sparkline so you can watch the error gradually shrink as it learns. If you switch to a different operator while the neural network is learning, you'll notice a spike in the average error, followed by another gradual decline as the network adjusts its weights to learn the new operator. It may take a few thousand epochs for the weights to settle, so if the average error seems stuck around a certain number, just sit tight for a minute or two while it does its thing.
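One epoch, and the average error it produces, could be computed like this sketch, reusing the hypothetical train() function from above:

```javascript
// One epoch: train on every example once and return the average
// absolute error, the kind of number a sparkline like this would plot.
function runEpoch(trainingSet, lr) {
  let totalError = 0;
  for (const { inputs, target } of trainingSet) {
    totalError += train(inputs, target, lr);
  }
  return totalError / trainingSet.length;
}
```

With a modest learning rate (say 0.5), repeating this for a few thousand epochs will usually drive the average error for XOR toward zero, which matches the gradual decline the sparkline shows.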

The lines connecting the neurons represent the weights between them: black lines represent positive weights and gray lines represent negative weights. The thickness of a line indicates the strength of the weight: thick lines have larger absolute values than thin lines.
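As a rough illustration (assuming an HTML canvas, with an arbitrary color and thickness mapping, not necessarily what the demo uses), the line style for a weight could be derived like this:

```javascript
// Sign picks the color, magnitude picks the thickness.
function strokeForWeight(ctx, weight) {
  ctx.strokeStyle = weight >= 0 ? "#000" : "#999"; // black: positive, gray: negative
  ctx.lineWidth = Math.min(1 + Math.abs(weight), 6); // thicker = larger |weight|
}
```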

If you’d like to learn more about how the backpropagation algorithm works, I recommend starting with Wikipedia’s backpropagation article, which gives a great introduction.