
Is the apparent similarity between the random subspace method and dropout anything more than superficial?


As I understand it, in the random subspace method (e.g. random forests) you randomly omit some of your features when training each weak classifier, so the weak classifiers don't all rely on the same few strong features and become highly correlated; otherwise you basically have an ensemble of many copies of the same classifier, which is pretty useless.
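For concreteness, here is a minimal sketch of what I mean (the function name, the decision-tree base learner, and the 50% subspace fraction are just illustrative choices, not from any particular paper):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fit_random_subspace_ensemble(X, y, n_estimators=10, subspace_frac=0.5, seed=0):
        """Train each weak classifier on a random subset of the features."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        k = max(1, int(subspace_frac * n_features))
        ensemble = []
        for _ in range(n_estimators):
            # Each member sees only k randomly chosen features; the rest are omitted.
            feats = rng.choice(n_features, size=k, replace=False)
            clf = DecisionTreeClassifier().fit(X[:, feats], y)
            ensemble.append((feats, clf))
        return ensemble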

In dropout you randomly omit units from the neural network during training to prevent co-adaptation, i.e. a unit becoming dependent on other units, so that a learned feature/metafeature (what is the proper term for this? I guess just "unit"?) is useful only in the presence of certain other learned features.
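Again a minimal sketch of the mechanism as I understand it, using the common "inverted dropout" scaling convention (the original paper instead scales the weights down at test time; the function name and p_drop value are illustrative):

    import numpy as np

    def dropout_forward(activations, p_drop=0.5, rng=None):
        """Zero out each unit independently with probability p_drop (training only).

        Surviving activations are scaled by 1/(1 - p_drop) so that no extra
        rescaling is needed at test time.
        """
        rng = rng or np.random.default_rng()
        mask = rng.random(activations.shape) >= p_drop  # True = keep this unit
        return activations * mask / (1.0 - p_drop)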

In both cases you use model averaging over the resulting ensemble to obtain the final model.
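For the random subspace ensemble the averaging is explicit, something like the sketch below (assuming the ensemble format from the earlier sketch); for dropout the average over the exponentially many thinned networks is only approximated, e.g. by the activation scaling above or by scaling the weights at test time:

    import numpy as np

    def average_predictions(ensemble, X):
        """Average class-probability predictions over the ensemble members."""
        probs = [clf.predict_proba(X[:, feats]) for feats, clf in ensemble]
        return np.mean(probs, axis=0)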

I could be completely wrong, please feel free to correct me!

submitted by alexgmcm
