I've been looking into "Sequence to Sequence Learning with Neural Networks" by Sutskever, Vinyals, and Le (arXiv:1409.3215), and would like to understand how/why they implement a deep (4-layer) Long Short-Term Memory (LSTM) network.
They do not explain in detail how the network is wired together, but from following the references I get the impression that they use a connection structure like this: http://i.imgur.com/J3DwxSF.png (from "Hybrid speech recognition with deep bidirectional LSTM", http://www.cs.toronto.edu/~graves/asru_2013.pdf), except with LSTM units.
If they were using a deep RNN, though, I would think that a connection structure like this would work better: http://i.imgur.com/9txOrbN.png. The difference is that the output of the whole network is fed back into the input at the next time step, instead of each layer's output feeding into that same layer's input at the next time step.
Why do they do it this way?
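To make sure I'm asking about the right thing, here is a minimal sketch of the two wirings as I understand them, with plain tanh cells standing in for LSTM cells (all names and shapes are mine, not the paper's):

```python
import numpy as np

def cell(x, h, W_x, W_h):
    # plain tanh unit standing in for an LSTM cell; only the wiring matters here
    return np.tanh(x @ W_x + h @ W_h)

def stacked_step(x_t, h_prev, params):
    # My reading of the first figure (Graves-style stack): layer k at time t
    # sees layer k-1's output at time t plus its OWN hidden state from t-1.
    inp, h_new = x_t, []
    for (W_x, W_h), h_k in zip(params, h_prev):
        h_k = cell(inp, h_k, W_x, W_h)
        h_new.append(h_k)
        inp = h_k          # pass upward within the same time step
    return h_new           # one hidden state per layer, carried to t+1

def feedback_step(x_t, top_prev, layers):
    # The alternative (second figure): the stack is purely feed-forward within
    # a time step, and only the top layer's output from t-1 is fed back to the
    # bottom of the stack as extra input at time t.
    inp = np.concatenate([x_t, top_prev])
    for W in layers:
        inp = np.tanh(inp @ W)
    return inp             # becomes top_prev at the next time step
```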
Also, is there a good, modern reference that compares and contrasts approaches to recurrent neural networks? For example, I would like to know:
Why are these special memory units better than a simple (possibly deep) recurrent neural network (RNN)?
Is there any good understanding of the difference between LSTM units and the simpler memory units introduced in "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation" (arXiv:1406.1078)? It seems like this paper gets good results even with a shallow network (they use only one layer of hidden units).
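For reference, here is my rough understanding of the two update rules as a small numpy sketch (biases omitted, and gate conventions differ between papers, so treat this as a paraphrase rather than the papers' exact equations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, p):
    # common LSTM formulation (no peepholes): a separate cell state c
    # controlled by input, forget, and output gates
    i = sigmoid(x @ p['Wi'] + h @ p['Ui'])   # input gate
    f = sigmoid(x @ p['Wf'] + h @ p['Uf'])   # forget gate
    o = sigmoid(x @ p['Wo'] + h @ p['Uo'])   # output gate
    g = np.tanh(x @ p['Wg'] + h @ p['Ug'])   # candidate cell contents
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def gru_step(x, h, p):
    # the simpler gated unit from arXiv:1406.1078: no separate cell state,
    # just an update gate z and a reset gate r acting on the hidden state
    z = sigmoid(x @ p['Wz'] + h @ p['Uz'])
    r = sigmoid(x @ p['Wr'] + h @ p['Ur'])
    h_tilde = np.tanh(x @ p['Wh'] + (r * h) @ p['Uh'])
    return z * h + (1 - z) * h_tilde
```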
Do any neural network packages have good built-in support for RNNs and these variants?
So far, I have found LISA Groundhog https://github.com/lisa-groundhog/GroundHog and some simpler Theano examples (like https://github.com/gwtaylor/theano-rnn); a minimal sketch of the kind of thing I mean is at the bottom of this post.
In torch7, nnx has a simple recurrent module (https://github.com/clementfarabet/lua---nnx#nnx.Recurrent). I don't know whether/how it works with deep RNNs, and haven't found support for LSTMs or other memory modules.
I haven't found anything in Caffe.
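To be concrete about what I mean by a "simple Theano example", here is a toy hand-rolled RNN step using theano.scan (my own sketch, not taken from either repo; sizes and names are arbitrary):

```python
import numpy as np
import theano
import theano.tensor as T

n_in, n_hid = 8, 16
rng = np.random.RandomState(0)
floatX = theano.config.floatX

W_in = theano.shared(0.01 * rng.randn(n_in, n_hid).astype(floatX))
W_rec = theano.shared(0.01 * rng.randn(n_hid, n_hid).astype(floatX))
b = theano.shared(np.zeros(n_hid, dtype=floatX))

x = T.matrix('x')  # (time, n_in) input sequence

def step(x_t, h_prev):
    # one vanilla tanh recurrence; an LSTM/GRU step would replace this function
    return T.tanh(T.dot(x_t, W_in) + T.dot(h_prev, W_rec) + b)

h, _ = theano.scan(step, sequences=x, outputs_info=T.zeros((n_hid,)))
last_hidden = theano.function([x], h[-1])
```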