I am a newbie at neural networks and I was learning about good ol' simple feed-forward neural networks.
Am I right in saying that going from one layer to the next is essentially just transforming a vector with some transformation matrix and then applying an activation function elementwise to the resulting vector?
Or am I missing some crucial piece of understanding regarding neural networks?
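For what it's worth, here is a minimal sketch of what I mean, in plain Python (the weights, bias, and inputs are made up, and I'm assuming a ReLU activation). The standard formulation also adds a bias vector before the activation:

```python
def relu(v):
    # Elementwise ReLU: max(0, x) applied to each component.
    return [max(0.0, x) for x in v]

def layer_forward(x, W, b, activation=relu):
    # y = activation(W @ x + b): each output unit is the dot product
    # of one weight row with the input vector, plus a bias term.
    z = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
         for row, b_i in zip(W, b)]
    return activation(z)

# Hypothetical layer mapping 2 inputs to 3 units:
W = [[ 1.0, -1.0],
     [ 0.5,  0.5],
     [-2.0,  1.0]]
b = [0.0, 0.5, 1.0]
x = [2.0, 1.0]
print(layer_forward(x, W, b))  # → [1.0, 2.0, 0.0]
```

So one layer transition is just matrix-times-vector, add bias, then squash each component through the nonlinearity.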