Practically all of the literature focuses on neural networks with separate hidden layers, in which a neuron in the nth layer only receives input connections from layer n - 1. I was playing with the idea of wiring up sigmoid units by hand as a white-box model, and realized that the result can be seen as a neural net with no distinct layers: just a directed acyclic graph of units, where any unit can feed any later unit. Since a layered network is a special case of this (you simply never add connections that skip layers), the layerless form has at least the same representational power per neuron, and my intuition says it has more, even if only by a constant factor. It definitely has the downside of being harder to implement quickly on a computer (the forward pass no longer reduces to a chain of matrix multiplications), but if it really does offer greater representational power per neuron than a network with discrete layers, perhaps it is worth some research.
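
To make this concrete, here's a minimal sketch of what I mean by a layerless net (the names `Unit` and `forward` are just mine for illustration, not from any library): each unit sums weighted inputs taken from any earlier unit or raw input, applies a sigmoid, and the whole graph is evaluated once in topological order instead of layer by layer.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class Unit:
        """One sigmoid unit in a layerless (DAG-shaped) network."""
        def __init__(self, bias=0.0):
            self.bias = bias
            self.inputs = []  # list of (source, weight); source is a
                              # Unit evaluated earlier, or an int index
                              # into the external input vector

        def connect(self, source, weight):
            self.inputs.append((source, weight))

    def forward(units, x):
        """Evaluate units in topological order (assumed to be the list order).

        Returns a dict mapping each Unit to its activation.
        """
        values = {}
        for u in units:
            total = u.bias
            for src, w in u.inputs:
                v = x[src] if isinstance(src, int) else values[src]
                total += w * v
            values[u] = sigmoid(total)
        return values

    # Example: unit c reads the raw input x[0] *and* both a's and b's
    # outputs, ignoring any notion of layers.
    a = Unit(bias=-1.0); a.connect(0, 2.0)
    b = Unit(bias=0.5);  b.connect(0, -1.5); b.connect(a, 1.0)
    c = Unit();          c.connect(0, 0.7);  c.connect(a, -2.0); c.connect(b, 1.2)
    print(forward([a, b, c], x=[0.8])[c])

The per-unit loop is exactly the cost I mentioned: with arbitrary connections the weights form a sparse DAG rather than dense per-layer matrices, so there's no single matrix multiplication to hand off to fast linear algebra routines.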