
On Sparse Distributed Representations and Catastrophic Forgetting


Hello,

As some of you may know, I am running a series where I try to outperform DeepMind's Atari results. I use a different technique based on HTM (hierarchical temporal memory), but both DeepMind's approach and mine fall under reinforcement learning.

So recently I posted about the HTM-RL architecture I was going to use. In it, I decided to use a multilayer perceptron to learn Q values from the outputs of HTM regions. For this to work, I realized I would have to use an experience replay mechanism, much like DeepMind did.

Since I find experience replay inelegant, I wanted another approach to the problem of "catastrophic forgetting": if you train a multilayer perceptron online, updates from newer samples tend to overwrite what was learned from earlier ones. Experience replay mitigates this by performing stochastic gradient descent on a window of past samples. This works well, but the memory requirements become enormous as you extend the replay window.
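For anyone unfamiliar with the mechanism, here is a minimal sketch of an experience replay buffer (the names and sizes are my own illustration, not DeepMind's exact setup):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past (state, action, reward, next_state) transitions.
    Illustrative sketch only."""
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)  # oldest samples are evicted automatically

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Train on a random minibatch of old and new transitions;
        # replaying old samples is what keeps them from being forgotten,
        # at the cost of storing the whole window in memory.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```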

So I decided to take a hint from HTM, and I developed a new supervised online learning algorithm that does not suffer from catastrophic forgetting. It uses SDRs (sparse distributed representations) to represent its input in a field of nodes; the output is then simply a linear combination of the SDR that was formed. This keeps it a universal function approximator, while at the same time the spatial pooling (input pattern generalization) greatly reduces catastrophic forgetting: the strength of each learning update is scaled by the output of the SDR cells, so weights attached to inactive cells are left untouched.
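To make the idea concrete, here is a simplified sketch of the scheme as described above (my own reconstruction, not the actual AILib code): RBF-style cells tile the input space, only the k most active cells form the SDR, and the delta-rule update is gated by each cell's activation.

```python
import numpy as np

class SDRNet:
    """Simplified sketch of an SDR-gated linear function approximator.
    Hypothetical reconstruction, not the AILib SDRRBFNetwork."""
    def __init__(self, n_cells=200, k_active=10, x_min=0.0, x_max=2*np.pi, lr=0.1):
        # RBF centers tile the input range; each center is one cell in the field.
        self.centers = np.linspace(x_min, x_max, n_cells)
        self.width = (x_max - x_min) / n_cells
        self.k = k_active
        self.w = np.zeros(n_cells)  # linear readout weights
        self.lr = lr

    def sdr(self, x):
        # Spatial pooling: keep only the k most active cells (sparse code).
        act = np.exp(-((x - self.centers) ** 2) / (2 * self.width ** 2))
        sparse = np.zeros_like(act)
        top = np.argpartition(act, -self.k)[-self.k:]
        sparse[top] = act[top]
        return sparse

    def predict(self, x):
        # Output is a linear combination of the SDR.
        return self.w @ self.sdr(x)

    def train(self, x, y):
        s = self.sdr(x)
        err = y - self.w @ s
        # The update is scaled by cell activity: weights of inactive cells
        # are not touched, so new samples cannot erase distant memories.
        self.w += self.lr * err * s
        return err
```

Because each sample only touches the handful of cells that respond to it, training on one region of the input space leaves the weights for other regions intact; that locality is what stands in for the replay buffer.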

I ran a simple experiment: I trained this "SDR-net" to learn a sine function of its input, feeding it all the samples in order, and then did the same with an MLP. Incrementing the X value by 0.001 at a time and learning the corresponding Y value, the SDR-net accumulated roughly 270 times less error over a 20π interval!
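For reference, the experiment loop looks something like this (again my own reconstruction, using the SDRNet sketch above; the numbers are illustrative, not a reproduction of the reported result):

```python
# One pass over the data, strictly in order -- no replay buffer anywhere.
net = SDRNet(n_cells=2000, k_active=10, x_min=0.0, x_max=20*np.pi)
x, total_err, n = 0.0, 0.0, 0
while x < 20*np.pi:
    total_err += abs(net.train(x, np.sin(x)))
    x += 0.001
    n += 1
print("mean absolute error:", total_err / n)
```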

But don't take my word for it! Source code can be found in my AI experimentation library, "AILib". It is named SDRRBFNetwork in the "rbf" directory. Link: https://github.com/222464/AILib/tree/master/Source

This algorithm has quite possibly been discovered already. But in case it hasn't, I wanted to post about it here!

TL;DR: sparse distributed representations are awesome!

submitted by CireNeikual
