
Why are deep networks difficult to train, really?


As I understand it, deep learning is basically about neural networks with multiple hidden layers. For a long time (decades), such networks went largely unused because they were hard to train, in contrast to shallow networks. Then people discovered pretraining (autoencoders and the like) and got past that roadblock: the "good initialization" + "gradient-descent fine-tuning" approach worked.
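
To be concrete about what I mean by pretraining, here is a rough sketch (a toy illustration of my own, not from any particular paper; the layer sizes, squared reconstruction loss, and plain SGD updates are arbitrary choices) of greedy layer-wise autoencoder pretraining producing the initialization that fine-tuning would then start from:

```python
# Greedy layer-wise pretraining sketch: each layer is trained as a small
# autoencoder on the previous layer's codes, and the encoder weights become
# the "good initialization" for later gradient-descent fine-tuning.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(data, n_hidden, lr=0.1, epochs=50):
    """Train a one-hidden-layer autoencoder, return its encoder weights."""
    n_in = data.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ W_enc)          # encode
        recon = h @ W_dec                  # decode (linear output)
        err = recon - data                 # d(squared loss)/d(recon), up to 1/N
        # Backprop through the tiny autoencoder.
        grad_dec = h.T @ err / len(data)
        dh = (err @ W_dec.T) * h * (1 - h)
        grad_enc = data.T @ dh / len(data)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

# Stack the layers: each autoencoder sees the codes of the previous one.
X = rng.normal(size=(256, 20))             # stand-in for real data
weights, codes = [], X
for n_hidden in [15, 10, 5]:
    W = pretrain_layer(codes, n_hidden)
    weights.append(W)
    codes = sigmoid(codes @ W)

# `weights` is the pretrained initialization; ordinary backprop on the
# supervised loss ("fine-tuning") would start from here.
print([W.shape for W in weights])
```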

It is often said that the real reason for the difficulties was not the initialization (or many bad local minima), but the use of first-order methods ("pathological curvature" causes problems). This is said to be solved by exploiting curvature information, e.g. in so-called Hessian-free methods (which, amusingly, do use the Hessian, just never form it explicitly).
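
As I understand the "Hessian-free" trick, the method only ever needs Hessian-vector products, which can be obtained without building the matrix, e.g. by finite-differencing the gradient (or exactly via Pearlmutter's R-operator). A toy sketch of my own, on a quadratic whose Hessian is known so the product can be checked:

```python
# Hessian-vector products without forming the Hessian.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + np.eye(5)        # fixed SPD matrix, so the true Hessian is A

def grad(theta):
    """Gradient of f(theta) = 0.5 * theta^T A theta."""
    return A @ theta

def hessian_vector_product(grad_fn, theta, v, eps=1e-6):
    """Approximate H(theta) @ v using only two gradient evaluations."""
    return (grad_fn(theta + eps * v) - grad_fn(theta)) / eps

theta = rng.normal(size=5)
v = rng.normal(size=5)
print(np.allclose(hessian_vector_product(grad, theta, v), A @ v, atol=1e-4))

# Hessian-free optimization runs conjugate gradient on H d = -g using only
# such products, so curvature information enters without an explicit Hessian.
```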

That does not quite add up for me. Surely someone took L-BFGS or a conjugate gradient method and applied it to deep networks back in the bad old days? Wouldn't such methods have removed the problems with "pathological curvature", and wouldn't people have noticed? Why did it take decades for the discovery to happen? Or do these methods not work after all? Is there some special sauce in Hessian-free methods that makes them work where L-BFGS or CG doesn't? If I try L-BFGS or CG with deep networks, will I actually run into any problems other than costlier function evaluations compared to HF methods?
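
For reference, the kind of experiment I have in mind is simply handing a network's loss and gradient to an off-the-shelf optimizer. A rough sketch of my own (a tiny tanh MLP on synthetic data, fed to SciPy's L-BFGS-B and CG routines; a real test would use a much deeper net and real data):

```python
# Feed an MLP's loss and analytic gradient to generic L-BFGS / CG optimizers.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sizes = [10, 32, 32, 1]                    # two tanh hidden layers, linear output
X = rng.normal(size=(200, sizes[0]))
y = np.sin(X.sum(axis=1, keepdims=True))   # synthetic regression target

shapes = [(m, n) for m, n in zip(sizes[:-1], sizes[1:])]

def unpack(theta):
    """Split the flat parameter vector into (weight, bias) pairs."""
    params, i = [], 0
    for m, n in shapes:
        W = theta[i:i + m * n].reshape(m, n); i += m * n
        b = theta[i:i + n];                   i += n
        params.append((W, b))
    return params

def loss_and_grad(theta):
    """Mean squared loss of the MLP and its gradient via backprop."""
    params = unpack(theta)
    acts = [X]
    for k, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if k < len(params) - 1 else z)  # linear output
    delta = (acts[-1] - y) / len(X)
    loss = 0.5 * np.sum((acts[-1] - y) ** 2) / len(X)
    grads = []
    for k in range(len(params) - 1, -1, -1):
        W, _ = params[k]
        grads.append((acts[k].T @ delta, delta.sum(axis=0)))
        if k > 0:
            delta = (delta @ W.T) * (1 - acts[k] ** 2)   # tanh derivative
    grad = np.concatenate([np.concatenate([gW.ravel(), gb])
                           for gW, gb in reversed(grads)])
    return loss, grad

n_params = sum(m * n + n for m, n in shapes)
theta0 = rng.normal(scale=0.1, size=n_params)
for method in ["L-BFGS-B", "CG"]:
    res = minimize(loss_and_grad, theta0, jac=True, method=method,
                   options={"maxiter": 200})
    print(method, "final loss:", res.fun)
```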

submitted by Coffee2theorems
