
My Attempt at Outperforming DeepMind's Atari Results


Hello!

I am a reinforcement learning hobbyist, and I have set myself a challenge: to outperform DeepMind's reinforcement learning agent in the Arcade Learning Environment!

I will be posting updates here as I go!

The codebase I am using (currently separate from the ALE for easier experimentation) is available here: https://github.com/222464/AILib

It contains a large number of agents; the one I am currently working on is called HTMRL (hierarchical temporal memory reinforcement learning).

What do I do differently?

First off, I do not use a fixed time window of previous inputs to "solve" the hidden state problem. Rather, I use HTM (hierarchical temporal memory) to form a temporal context for the input automatically, and also to compress the input down to a manageable number of features.
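
To make that concrete, here is a toy sketch in Python of the general idea (the actual AILib code is C++, and everything here, including the class names, sparsity, and decay constant, is illustrative rather than taken from the repo): a k-winners-take-all pooler compresses the input into a sparse code, and a crude decayed trace of past codes stands in for the temporal context that HTM's temporal memory cells would provide.

    import numpy as np

    class TinySpatialPooler:
        """Toy stand-in for an HTM spatial pooler: maps a dense input
        to a sparse code via k-winners-take-all over random projections.
        The real thing also learns permanences and applies boosting."""
        def __init__(self, input_size, num_columns, sparsity=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.uniform(0.0, 1.0, (num_columns, input_size))
            self.k = max(1, int(sparsity * num_columns))

        def encode(self, x):
            overlap = self.weights @ x              # per-column overlap score
            active = np.argsort(overlap)[-self.k:]  # k most active columns win
            sdr = np.zeros(self.weights.shape[0])
            sdr[active] = 1.0
            return sdr

    class ContextEncoder:
        """Crude stand-in for HTM temporal memory: appends a decayed trace
        of past codes, so recent history is embedded in the features."""
        def __init__(self, pooler, decay=0.8):
            self.pooler, self.decay = pooler, decay
            self.trace = np.zeros(pooler.weights.shape[0])

        def step(self, x):
            sdr = self.pooler.encode(x)
            self.trace = self.decay * self.trace + sdr
            return np.concatenate([sdr, np.minimum(self.trace, 1.0)])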

From there, I use simple feed-forward neural networks as the actor and critic (I have two versions; one has an actor for continuous actions, which is not necessary for the ALE). These take the output of the last HTM region as input.

The critic-only version (discrete actions) learns with standard Q-learning updates plus eligibility traces.
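
As a minimal sketch of that update (assuming, for brevity, a linear critic over the HTM features; the actual HTMRL critic is a feed-forward net, and the hyperparameters are placeholders):

    import numpy as np

    def q_lambda_update(w, e, phi, a, r, phi_next,
                        alpha=0.1, gamma=0.99, lam=0.9):
        """One Q(lambda) step for a linear critic Q(s, a) = w[a] . phi(s).
        w, e: (num_actions, num_features) weights and eligibility traces.
        phi, phi_next: feature vectors for the current and next state."""
        q = w[a] @ phi
        q_next = np.max(w @ phi_next)      # greedy bootstrap target
        delta = r + gamma * q_next - q     # TD error
        e *= gamma * lam                   # decay all traces
        e[a] += phi                        # accumulate trace for taken action
        w += alpha * delta * e             # credit recent state-action pairs
        return w, e

Each step, the agent picks an action (e.g. epsilon-greedy from w @ phi), observes the reward and next features, and calls this once; the traces let a single reward propagate back over several recent decisions.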

The actor-critic version maintains a Q function in the critic and uses a form of policy gradient to optimize the actor on the Q values.
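
One plausible form of that actor update (the post doesn't pin down the exact rule, so this is a guess at the simplest version: a Gaussian policy with a linear mean over the HTM features, trained with the likelihood-ratio gradient scaled by the critic's TD error):

    import numpy as np

    def actor_step(theta, phi, rng, sigma=0.3):
        """Sample a continuous action from a Gaussian policy
        with linear mean mu = theta . phi and fixed std sigma."""
        mu = theta @ phi
        return rng.normal(mu, sigma), mu

    def actor_update(theta, phi, a, mu, delta, alpha=0.01, sigma=0.3):
        """Policy-gradient step: grad log pi = (a - mu) / sigma**2 * phi
        for a Gaussian policy; delta is the critic's TD error, so actions
        that turned out better than expected are made more likely."""
        return theta + alpha * delta * (a - mu) / sigma**2 * phi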

Right now I am working on getting it to function flawlessly on the pole balancing task before scaling up to the ALE.

Here is an image of HTMRL performing pole balancing. The top right shows the highest-level HTM region.

http://i1218.photobucket.com/albums/dd401/222464/HTMRLPoleBalancing.png~original

More coming soon!

submitted by CireNeikual
