Hi, I have a simple question. Thank you for taking time to help answer.
Let's say we have a single layer of feature learning (sparse coding, an RBM, or an autoencoder) with a logistic regression classifier on top, and we train the classifier and read off the relative importance of each learned feature from the classifier's weights.
Are there interpretation advantages to using sparse coding (linear) over the nonlinear embeddings from a restricted Boltzmann machine or an autoencoder? Maybe what I'm asking is: could we trace the important "learned features" back to the original data space for one method but not the others?
I'm confused because in sparse coding we know the basis and the coefficients (even if the dictionary is overcomplete), but in an RBM we also know the energy function and the weights.
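To make the sparse coding side of the question concrete, here's a minimal sketch (all names and dimensions are hypothetical, not from the post). Because the code is linear in the dictionary, x ≈ D·a, a classifier weight on coefficient a[j] attaches directly to atom D[:, j], which is itself a direction in input space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64-dim inputs, 100 dictionary atoms (overcomplete).
n_features, n_atoms = 64, 100
D = rng.normal(size=(n_features, n_atoms))  # learned dictionary (basis)
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

# Suppose the logistic regression on top assigns weight w[j] to coefficient j.
w = rng.normal(size=n_atoms)

# Since x ≈ D @ a, each classifier weight corresponds to a specific atom
# D[:, j] in input space; a crude input-space "importance map" is the
# weighted combination of atoms:
importance_map = D @ w                      # shape: (n_features,)

# The single most influential learned feature and its input-space pattern:
top_atom = int(np.argmax(np.abs(w)))
pattern = D[:, top_atom]
print(importance_map.shape, pattern.shape)
```

For an RBM or autoencoder the analogous trace-back exists for the first-layer weights (each hidden unit's weight vector is also an input-space pattern), but the nonlinearity means the classifier weight no longer composes linearly with it the way it does here.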
Thank you very much.