Channel: Machine Learning

"Deep Learning" with non-Image/Audio/Text data. (Proteins)


Hi, I work with data that isn't vision/images, audio, or natural text: protein sequences. I've done a fair bit of work on extracting features from the sequences using a variety of representations (k-mer frequencies, discrete and continuous properties, time-signal representations, etc.), and so far I've gotten my best results with SVMs and Random Forests.
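
To make the feature-extraction setup concrete, here's a minimal sketch of k-mer frequency counting with scikit-learn's CountVectorizer; the protein sequences below are made-up placeholders, and 3-mers are just one illustrative choice of n-gram length:

```python
# Hypothetical sketch: k-mer (character n-gram) frequency features from
# protein sequences. The sequences here are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer

sequences = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "MSSHEGGKKKALKQPKKQAKEMDEEEKAFKQKQ",
    "MAHMTQIPLSSVKRPLQVRGIVFLGICLVAVAE",
]

# analyzer="char" slides a window over each sequence string, so this
# counts overlapping 3-mers.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(sequences)

print(X.shape)  # (3, number of distinct 3-mers observed)
```

The vocabulary (and hence the feature dimensionality) grows quickly with the n-gram length, which matches the "hundreds to tens of thousands of features" range described below.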

I'm interested in applying some more "advanced" methods, mainly: A) techniques that can take advantage of huge amounts of unlabelled (incredibly heterogeneous) samples, i.e. some of the ~80 million known protein sequences. From what I know, deep belief networks and the various autoencoders (plus stacked layers thereof) would be a good starting point for this? B) Use of multi-layer networks (e.g. multilayer perceptrons) and other techniques to better classify the data (in conjunction with A, presumably).
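
As one possible sketch of option (A): scikit-learn ships a BernoulliRBM, the building block of deep belief networks, which can be used as an unsupervised feature learner ahead of a supervised classifier. Everything below (data, layer sizes, learning rate) is placeholder, not a recommendation:

```python
# Minimal sketch: unsupervised RBM pretraining + logistic regression on top.
# The data is random filler standing in for real k-mer feature matrices.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
X = rng.rand(200, 50)          # placeholder feature matrix
y = rng.randint(0, 2, 200)     # placeholder binary labels

# BernoulliRBM expects inputs in [0, 1], hence the scaler.
model = Pipeline([
    ("scale", MinMaxScaler()),
    ("rbm", BernoulliRBM(n_components=30, learning_rate=0.05,
                         n_iter=10, random_state=0)),
    ("clf", LogisticRegression()),
])
model.fit(X, y)
print(model.score(X, y))
```

Stacking several RBM steps in the pipeline gives a DBN-style layer-wise pretraining; the RBM step can also be fit on the large unlabelled pool before the classifier sees the labelled subset.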

The features are mostly partially linear but weak (i.e. individually relevant but weak features; a linear SVM does work, but with mediocre performance). The dimensionality of the "raw" features is moderately high: roughly hundreds to tens of thousands, depending on the n-gram length I extract. (I've tried applying PCA, but performance dropped.)
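
One thing worth checking when PCA hurts performance is whether the reduction is fitted inside the cross-validation loop. A hedged sketch on synthetic data (TruncatedSVD stands in for PCA here since it also handles sparse k-mer matrices):

```python
# Sketch: linear SVM on raw features vs. the same SVM after dimensionality
# reduction, with the reduction kept inside the pipeline so CV folds don't
# leak information. Data and sizes are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(300, 100)
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)  # weak linear signal

raw = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
reduced = cross_val_score(
    make_pipeline(TruncatedSVD(n_components=20), LinearSVC(max_iter=5000)),
    X, y, cv=5,
).mean()
print(raw, reduced)
```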

I usually frame this as a binary classification task. In most cases my positive set is a few hundred to a few thousand samples, while my negative set is a random sampling from the "unknown" background of many millions of proteins not annotated as having the positive set's properties (i.e. PU learning framed as a binary problem). So overfitting is a very big issue, even with cross-validation and relatively simple models such as RFs/ensembles.
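
A minimal sketch of that PU-as-binary setup, with synthetic placeholder data standing in for the real feature matrices:

```python
# Sketch: small positive set plus negatives drawn at random from a large
# unlabeled pool, evaluated with cross-validation. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X_pos = rng.rand(500, 40) + 0.1          # "annotated" positives
X_unlabeled = rng.rand(100000, 40)       # large unlabeled background

# Draw a random negative sample, here the same size as the positive set.
idx = rng.choice(len(X_unlabeled), size=len(X_pos), replace=False)
X = np.vstack([X_pos, X_unlabeled[idx]])
y = np.array([1] * len(X_pos) + [0] * len(X_pos))

scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
print(scores.mean())
```

Since the "negatives" may contain unlabelled positives, CV scores in this setup are optimistic about ranking but pessimistic about absolute precision, which is one reason overfitting and evaluation are tricky here.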

All my work so far has been with scikit-learn (Python 3+). Nolearn and Breze look good, but I haven't managed to get them running on my home Windows PC, and the lab's Linux PC is limited in what I can install. Ease of use is a major concern for me: I'm a neophyte with no rigorous background in ML, and I completely lack intuitions when it comes to nets. E.g.: How many layers should I start with? Should they be larger or smaller than the input dimensionality (the literature is conflicted)? Should the layers go [large, medium, small, medium, large], or something else? Does it really make sense to try stacked sparse filtering and to have the first layer be larger than the input dimensionality (an example from the FastML blog)? Etc.
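
For what it's worth, one concrete starting point for the layer-size questions, using scikit-learn's own MLPClassifier (added to the library in a later release than this post's toolchain); the sizes here are illustrative guesses, not recommendations, and the data is random filler:

```python
# Sketch: a small MLP with 1-2 hidden layers, each somewhat smaller than
# the input dimensionality, shrinking toward the output. Placeholder data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
X = rng.rand(300, 100)           # 100 input features
y = rng.randint(0, 2, 300)

model = make_pipeline(
    StandardScaler(),            # nets are sensitive to feature scale
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                  random_state=0),
)
model.fit(X, y)
print(model.score(X, y))
```

Swapping `hidden_layer_sizes` for other shapes (one wide layer, a deeper funnel, etc.) and cross-validating each is one cheap way to build intuition without leaving scikit-learn.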

Almost all the literature I've seen focuses on images, audio, or text, and not "here's an unknown dataset, get better results than an RF on it" (again, a rare exception being the FastML "Kaggle Black Box challenge" post). Thank you very much!

PS - Here's a basic form of the features and datasets I have in mind: http://www.ncbi.nlm.nih.gov/pubmed/24336809

PPS - If it makes things easier, imagine I'm asking about the Kaggle "Forest Cover Type Prediction" challenge as an example; the same issues apply in terms of lack of intuition.

Thanks very much for any tips/advice/folk wisdom!

submitted by ddofer
