Hello again!
Time for an update!
The first thing I did after my original post was try to optimize HTMRL in order to get more gradient descent updates in for the actor/critic portion. I started by experimenting with different numbers and sizes of HTM layers so that the last layer (the input to the actor/critic) was small enough to run fast yet large enough to convey sufficient information.
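To make the sizing tradeoff concrete, here is a minimal sketch of the kind of shrinking layer schedule I mean (the layout and the particular sizes are hypothetical, not HTMRL's actual configuration):

```cpp
#include <iostream>
#include <vector>

// Hypothetical layer schedule: each HTM region is a square grid of columns,
// and successive regions shrink so the last one (the actor/critic input)
// stays small enough to train on quickly.
int main() {
    std::vector<int> regionEdgeSizes = { 64, 44, 28, 16 }; // illustrative sizes

    int last = regionEdgeSizes.back();
    std::cout << "Actor/critic input size: " << last * last << "\n"; // 256 values
}
```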
Secondly, I stopped using vanilla stochastic gradient descent for the actor/critic and started using RMSProp, just like in the original DeepMind paper. It was a simple change but quickly gave me better performance.
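In case it's useful, here's a minimal sketch of the RMSProp step I mean (the learning rate, decay, and epsilon values are just illustrative defaults, not necessarily what I use):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal RMSProp step: keep a running average of squared gradients and
// scale each parameter's step by its root-mean-square.
void rmspropStep(std::vector<float> &params,
                 const std::vector<float> &grads,
                 std::vector<float> &meanSquare, // running average, same size as params
                 float learnRate = 0.001f,
                 float decay = 0.9f,
                 float epsilon = 1e-6f) {
    for (std::size_t i = 0; i < params.size(); ++i) {
        meanSquare[i] = decay * meanSquare[i] + (1.0f - decay) * grads[i] * grads[i];
        params[i] -= learnRate * grads[i] / (std::sqrt(meanSquare[i]) + epsilon);
    }
}
```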
Thirdly, I started experimenting with ways to convert the binary information output by the last HTM region into a floating-point representation for the actor/critic that preserves information without hurting generalization. Originally I did a straight binary-to-floating-point conversion, but this makes similar HTM configurations result in vastly different outputs, which greatly hurts generalization. So I instead opted for a lossy but better-generalizing approach: sum the inputs and divide by the maximum count. This doesn't take all the positional information of the input into account, but with a small enough condensing radius it provides a decent compression/loss tradeoff.
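Here's a rough sketch of that condensing step (the square-grid layout and the non-overlapping stride are my simplifying assumptions):

```cpp
#include <vector>

// Condense a square binary SDR (side x side, row-major, values 0/1) into a
// smaller grid of floats by counting active bits in each (2r+1)x(2r+1)
// neighborhood and dividing by the maximum possible count.
std::vector<float> condense(const std::vector<int> &sdr, int side, int radius) {
    int stride = 2 * radius + 1;
    int outSide = side / stride;
    std::vector<float> out(outSide * outSide, 0.0f);

    for (int oy = 0; oy < outSide; ++oy) {
        for (int ox = 0; ox < outSide; ++ox) {
            int count = 0;
            for (int dy = 0; dy < stride; ++dy)
                for (int dx = 0; dx < stride; ++dx)
                    count += sdr[(oy * stride + dy) * side + (ox * stride + dx)];

            // Divide by the maximum count so the result lies in [0, 1].
            out[oy * outSide + ox] = static_cast<float>(count) / (stride * stride);
        }
    }
    return out;
}
```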
Fourth, I played with some advantage learning replacements for standard Q-learning. Advantage learning works better than standard Q-learning in continuous environments, since it amplifies the differences in state-action value between successive timesteps. This makes it less susceptible to function approximation error as well as noise.
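For reference, here is a minimal tabular sketch of the advantage learning update I'm referring to, in the style of Harmon and Baird (the scaling constant k and the tabular setup are illustrative):

```cpp
#include <algorithm>
#include <vector>

// One tabular advantage-learning backup.
// With k = 1 this reduces to standard Q-learning; k < 1 amplifies the gap
// between an action's value and the state value, which is what makes the
// updates more robust to approximation error and noise.
void advantageUpdate(std::vector<std::vector<float>> &A, // A[state][action]
                     int s, int a, float reward, int sNext,
                     float alpha, float gamma, float k) {
    float v     = *std::max_element(A[s].begin(), A[s].end());
    float vNext = *std::max_element(A[sNext].begin(), A[sNext].end());

    // Target: state value plus the (scaled-up) one-step advantage.
    float target = v + (reward + gamma * vNext - v) / k;

    A[s][a] += alpha * (target - A[s][a]);
}
```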
Finally, I received help from the SFML community in making the code work on more platforms. They even added CMake support for me! Many thanks!
That's it for this update, time to get back to coding!