I'm very interested in the emerging field of deep learning. One thing I've noticed, though, is that most of the example applications involve homogeneous input data.
What I mean is that each feature is the same "kind" of value. For example, many applications involve images or speech signals: in an image, each feature is a pixel intensity, and in an audio segment, each feature is an amplitude sample. They're all the same kind of value.
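To make the contrast concrete, here's a toy sketch (in PyTorch, purely illustrative; the feature names and one common workaround, embedding the categorical feature and concatenating, are just my assumptions about how you'd even feed mixed features in):

```python
import torch
import torch.nn as nn

# Homogeneous input: every feature is the same kind of value,
# e.g. a 28x28 grayscale image flattened into 784 pixel intensities.
image = torch.rand(1, 784)

# Heterogeneous input: features of different kinds mixed together,
# e.g. two continuous values plus a categorical id (names are made up).
age = torch.tensor([[0.42]])     # normalized continuous feature
income = torch.tensor([[0.13]])  # normalized continuous feature
zip_code = torch.tensor([7])     # integer id of a categorical feature

# One possible approach: embed the categorical feature into a dense
# vector, then concatenate it with the continuous features.
zip_embedding = nn.Embedding(num_embeddings=100, embedding_dim=4)
x = torch.cat([age, income, zip_embedding(zip_code).view(1, -1)], dim=1)

net = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 1))
print(net(x))  # output of shape (1, 1)
```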
Even the winners of the Merck competition were faced with features that were homogeneous (as far as I could tell).
So my question is: is homogeneity of the features a requirement for deep learning? Are there examples of people successfully using completely heterogeneous features for deep learning? And is deep learning particularly good for large feature sets?