How to train Auto-Encoders with Tied Weights?

Hi there.

I started reading about auto-encoders a short time ago and I am trying to understand how I could use an under-complete AE (I'm considering the simplest scenario possible: no denoising, with only a single hidden layer).

The idea of reconstructing the input at the output layer seems straightforward to me, but I can't understand how the backpropagation algorithm is applied during training if W is the encoder weight matrix and W' = W^T is the decoder weight matrix. Furthermore, what advantages or disadvantages does using tied weights give me, and why does that simple constraint provide those advantages? And how do I guarantee that the two matrices remain transposes of each other while I backpropagate the error?
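To make my confusion concrete, here is a toy NumPy sketch of how I imagine the tied-weight version would have to work (my own example, assuming a sigmoid hidden layer, a linear output and squared-error loss; all names are just illustrative). With a single matrix W, the decoder just reuses its transpose, and the gradient for W would be the sum of the contribution coming through the decoder and the contribution coming through the encoder:

```python
import numpy as np

# Toy under-complete auto-encoder with tied weights: the decoder reuses the
# encoder matrix W (i.e. W' = W transposed), so only one matrix is ever stored.
rng = np.random.default_rng(0)
n_in, n_hid = 8, 3
W = rng.normal(scale=0.1, size=(n_hid, n_in))   # the single shared weight matrix
b = np.zeros(n_hid)                             # encoder bias
c = np.zeros(n_in)                              # decoder bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(5, n_in))                  # a toy mini-batch

for step in range(100):
    # forward pass
    h = sigmoid(x @ W.T + b)                    # encoder uses W
    x_hat = h @ W + c                           # decoder uses the transpose of W (same array)
    err = x_hat - x                             # gradient of the squared-error loss w.r.t. x_hat

    # backward pass
    dW_dec = h.T @ err                          # contribution to dL/dW through the decoder
    da = (err @ W.T) * h * (1.0 - h)            # gradient at the hidden pre-activation
    dW_enc = da.T @ x                           # contribution to dL/dW through the encoder
    dW = dW_dec + dW_enc                        # tied weights: the two contributions are summed
    db, dc = da.sum(axis=0), err.sum(axis=0)

    # gradient step: because there is only one W, encoder and decoder stay
    # exact transposes of each other by construction
    lr = 0.1 / x.shape[0]
    W -= lr * dW
    b -= lr * db
    c -= lr * dc
```

Is this the right way to think about it, or am I missing something?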

I am pretty sure that something is wrong in my head and I can't see what is going on. I hope someone can help :)

Thank you in advance.

submitted by sungiv
