
Stupid question about NN weight update equations


I'm working with long short-term memory (LSTM) cells, which require that not every node in one layer connects to every node in the next, i.e. there are many 0-weight connections that must stay zero. The typical weight update looks like delta_w(i,j) = alpha * error(j) * output(i), where node i connects to node j. If I compute this for a whole layer with matrix operations, I can't easily skip the 0-weight connections without losing a lot of speed. How can I use matrix ops without changing the 0-weight connections? I was thinking of multiplying the update by the current weight, but I don't know whether that would hurt the overall accuracy.
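One common way to do this, sketched below under assumed NumPy shapes and placeholder values, is to keep a fixed 0/1 connectivity mask alongside the weight matrix and multiply the dense update by that mask element-wise. Unlike multiplying by the current weight, the mask only zeroes out the forbidden connections and leaves the magnitude of every real update unchanged, so the learning dynamics for the existing connections are not rescaled. All names and sizes here (`mask`, `n_in`, `n_out`, the random vectors standing in for error(j) and output(i)) are illustrative assumptions, not part of the original post.

```python
import numpy as np

# Hypothetical layer sizes and learning rate, for illustration only.
n_in, n_out = 4, 3
alpha = 0.1

rng = np.random.default_rng(0)

# Binary connectivity mask: 1 where a connection exists, 0 where it must stay absent.
mask = rng.integers(0, 2, size=(n_out, n_in)).astype(float)

# Weights start (and stay) at zero wherever the mask is zero.
W = rng.normal(size=(n_out, n_in)) * mask

# Placeholders for the real output(i) and error(j) vectors.
output = rng.normal(size=n_in)   # output(i) of the presynaptic layer
error = rng.normal(size=n_out)   # error(j) at the postsynaptic layer

# Dense update: dW[j, i] = alpha * error(j) * output(i), computed in one matrix op.
dW = alpha * np.outer(error, output)

# Element-wise masking keeps the 0-weight connections at exactly zero
# while the update for every existing connection is applied unchanged.
W += dW * mask
```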

submitted by purpleladydragons
