As I see it, the backbone of the recent "deep learning revolution" is the ability to use RBMs (Restricted Boltzmann Machines) and autoencoders to learn sophisticated features from unlabeled data. However, I've noticed that you only really seem to hear about the efficacy of these techniques in regimes that deal with image data (e.g. object recognition, image segmentation) and audio data (e.g. speech recognition). Is this because those feature-extraction methods only work well on relatively homogeneous data, like pixels and amplitudes, or have I just not heard as much about the strides being made in other areas?
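To make concrete what I mean by "learning features from unlabeled data": here's a toy sketch (my own illustration, not from any particular paper) of a single-hidden-layer autoencoder in plain NumPy. It compresses 20-dimensional inputs through a 5-unit bottleneck and is trained only to reconstruct its input, so the hidden activations become learned features with no labels involved.

```python
import numpy as np

# Toy autoencoder sketch: learn a compressed representation of
# unlabeled data by training the network to reconstruct its input.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic unlabeled "data": 200 samples, 20 features, low-rank structure.
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 20))
X = sigmoid(latent @ mixing)

n_in, n_hidden = X.shape[1], 5       # 5-unit bottleneck
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b2 = np.zeros(n_in)

lr = 0.5
errors = []
for epoch in range(500):
    # Forward pass: encode to the bottleneck, then decode back.
    H = sigmoid(X @ W1 + b1)          # hidden activations = learned features
    Xhat = sigmoid(H @ W2 + b2)       # reconstruction of the input
    err = Xhat - X
    errors.append(np.mean(err ** 2))

    # Backpropagate the mean-squared reconstruction error.
    d_out = err * Xhat * (1 - Xhat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_hid) / len(X)
    b1 -= lr * d_hid.mean(axis=0)

print(errors[0], errors[-1])  # reconstruction error falls during training
```

Nothing here assumes the inputs are pixels or amplitudes, which is part of why I'm wondering where the practical limits actually are.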
I ask because I'm interested in applying techniques from deep learning to problems in the medical sciences, where the available information can come in many forms (microarray data, symptom lists, genetic factors, etc.), but I don't know how far it's possible to push these methods before they break down. Thoughts?