I could do backprop on the inputs (holding the weights fixed) and learn input values that maximize the probability of the output, but this is slow, and since I have to do it many times it's undesirable.
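As a reference point, here's a minimal sketch of that backprop-on-inputs idea, assuming a toy one-hidden-layer sigmoid net with random placeholder weights, and using squared error to a target output as a stand-in for the output probability:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)  # placeholder "trained" weights
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.standard_normal(3)     # random initial guess at the input
target = np.array([1.0, 0.0])  # output we want the net to produce
lr = 0.5

for _ in range(500):
    # Forward pass with the weights held fixed.
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    # Backprop the squared error all the way down to the input itself.
    dy = (y - target) * y * (1 - y)
    dh = (W2.T @ dy) * h * (1 - h)
    dx = W1.T @ dh
    x -= lr * dx               # gradient step on the input, not the weights
```

The cost is one forward and one backward pass per step, for hundreds of steps, per query, which is what makes doing this many times painful.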
It would help if there were a fast algorithm to do this for a single-layer net: I could settle for finding activations in the second-to-last layer that maximize the likelihood of the output, then repeat on earlier layers until I'm back at input values.
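One layer-by-layer scheme along these lines (a sketch under my own assumptions, not a known-good algorithm) would be to invert each sigmoid layer with a least-squares pseudo-inverse of its weights; `invert_layer` and `invert_net` here are hypothetical helpers:

```python
import numpy as np

def invert_layer(W, b, a_target, eps=1e-6):
    """One least-squares inversion of a layer a = sigmoid(W @ h + b):
    given target activations a_target, recover activations h for the
    layer below."""
    a = np.clip(a_target, eps, 1.0 - eps)  # keep the logit finite
    z = np.log(a / (1.0 - a))              # invert the sigmoid
    h, *_ = np.linalg.lstsq(W, z - b, rcond=None)
    return h

def invert_net(layers, a_out):
    """Walk from the top layer down to the input, one inversion per layer.
    layers is a list of (W, b) pairs ordered from last layer to first."""
    a = a_out
    for W, b in layers:
        a = invert_layer(W, b, a)
    return a
```

Note that when a layer isn't square this only picks out the minimum-norm pre-image among the many activations mapping to the same output, so it's a heuristic rather than a true maximum-likelihood inversion.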
Another approach would be to train some reconstruction weights to reconstruct the layer below after each feedforward pass during learning, as in the wake-sleep algorithm for deep belief nets. But this would suffer from 'explaining away', because the net isn't a deep belief net. (In a deep belief net, the forward weights learn features that can easily be reconstructed via the reconstruction weights, thus explaining away explaining away.) My nets are trained to predict a target function, so I don't see any way around explaining away.
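For concreteness, here's a minimal sketch of what training per-layer reconstruction weights alongside the feedforward pass might look like. `R`, `c`, and `reconstruction_step` are hypothetical names, and the update is plain gradient descent on squared reconstruction error rather than the actual wake-sleep update rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruction_step(R, c, v, h, lr=0.1):
    """After a feedforward pass has produced activations h from the
    layer below's activations v, nudge reconstruction weights R and
    bias c so that sigmoid(R @ h + c) better reconstructs v."""
    v_hat = sigmoid(R @ h + c)
    err = (v_hat - v) * v_hat * (1 - v_hat)  # grad of squared error w.r.t. R @ h + c
    R -= lr * np.outer(err, h)
    c -= lr * err
    return R, c
```

The worry above is exactly that these reconstruction weights have no influence on what the forward weights learn, so nothing pushes the forward features toward being easily reconstructable.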
I'd love to hear your thoughts, along with links to any relevant papers you might have.
Edit: I've posted this question here to help me brainstorm before I actually implement anything.