Hi MachineLearning,
Let me preface this with an apology. I'm not sure if this is the right subreddit for this question. CogSci seems more interested in the philosophical side of Neural Networks, and Neuro is obviously more interested in straight Neuroscience. If you feel this is the wrong subreddit, feel free to ignore this and, mods, please lock it - but in this case, a pointer in the right direction would be great.
A friend and I were introduced to Neural Networks by a professor. I am a Neuroscience student, he is a Computer Science student. For the purposes of curiosity and learning, we have decided to try our hand at them. He has already made a few with good results.
At the moment we are building an ANN with a special focus on adherence to known Neuroscience. Whereas before we were using some basic positive/negative reinforcement for our learning, we are now looking at more closely modelling synaptic plasticity - probably with some variation of Hebbian Learning. At present, we have implemented Oja's rule, but I was wondering what else is out there. The Neuroscientist in me bristles at some of the assumptions and foundations of Oja's rule (especially the built-in decay of synaptic weight), so I was wondering if there are any papers presenting algorithms more closely based on long-term potentiation and long-term depression. I've been searching via my university library, but it has been fruitless so far.
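For context, here is roughly what the Oja update looks like - a minimal illustrative Python sketch, not our actual code. The last term in the update is the multiplicative decay I'm uneasy about:

    import numpy as np

    def oja_update(w, x, lr=0.01):
        """One Oja's-rule update for a single linear neuron.

        w  : weight vector
        x  : input vector (one training sample)
        lr : learning rate
        """
        y = np.dot(w, x)  # post-synaptic activity
        # Plain Hebbian term (lr * y * x) plus the decay term
        # (-lr * y**2 * w) that keeps the weight norm bounded -
        # this decay is the part that feels biologically suspect to me.
        return w + lr * (y * x - (y ** 2) * w)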
Any references would be a great help, and thank you in advance.