Channel: Machine Learning

L1 penalty on the activation values?


In his paper (Deep Sparse Rectifier Neural Networks), Glorot proposes penalizing the activation values with an L1 norm in order to use the ReLU function in an autoencoder.

Does anybody know how it is done?
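One common reading is that the penalty is simply added to the training objective: the loss is the reconstruction error plus a scaled L1 norm of the hidden activations (not of the weights). A minimal NumPy sketch of that idea, assuming a one-hidden-layer ReLU autoencoder with a linear decoder; the variable names and the penalty strength `lam` are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))            # a batch of 16 inputs with 8 features
W_enc = rng.normal(scale=0.1, size=(8, 4))  # encoder weights (assumed shapes)
W_dec = rng.normal(scale=0.1, size=(4, 8))  # decoder weights
lam = 1e-3                              # sparsity strength (illustrative value)

h = np.maximum(0.0, x @ W_enc)          # ReLU hidden activations
x_hat = h @ W_dec                       # linear reconstruction

recon = np.mean((x - x_hat) ** 2)       # reconstruction error
sparsity = lam * np.sum(np.abs(h))      # L1 penalty on the activation values
loss = recon + sparsity
```

Note that after the ReLU every activation is non-negative, so `np.abs(h)` equals `h` and the penalty reduces to `lam * h.sum()`; its gradient with respect to each active unit is just the constant `lam`, which pushes activations toward zero and encourages sparse codes.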

submitted by rishok
