Channel: Machine Learning

Why are extremely randomized trees more efficient than standard Random Forests?


Ok so I have recently learned about decision trees and some of the developments to address overfitting in that technique. It is not surprising that combining different decision trees (Random Forest), using subsets of the training data and randomization, leads to a more general solution and therefore less overfitting.

Then it seems that by completely randomizing both the feature selection and the split threshold (this second part makes no sense to me), an even better method can be developed.

Can someone explain why such a method works and why it is better than Random Forests? Is this any different from blindly throwing out a threshold multiple times and choosing the one that gives the best result?
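For anyone else confused by the same point, here is a minimal, stdlib-only sketch (not the actual scikit-learn implementation) of the difference at a single node: a Random Forest node scans every candidate threshold and keeps the best one, while an Extra-Trees node draws one threshold at random from the feature's range and uses it as-is. The function names and the toy data are made up for illustration.

```python
import random

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_score(xs, ys, threshold):
    """Weighted Gini impurity after splitting on `threshold` (lower is better)."""
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    n = len(ys)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

def best_split(xs, ys):
    """Random-Forest style: exhaustively scan candidate thresholds, keep the best."""
    candidates = sorted(set(xs))[:-1]  # simplistic candidates; real trees use midpoints
    return min(candidates, key=lambda t: split_score(xs, ys, t))

def random_split(xs, ys, rng):
    """Extra-Trees style: draw ONE threshold uniformly in [min(x), max(x)], no search."""
    return rng.uniform(min(xs), max(xs))

# Toy feature and labels where 0.2 is the perfect cut point.
xs = [0.1, 0.2, 0.8, 0.9]
ys = [0, 0, 1, 1]
t_best = best_split(xs, ys)                       # always finds the optimal cut
t_rand = random_split(xs, ys, random.Random(0))   # a single random draw
```

So it is not "throw several thresholds and keep the best" (that would just be the Random Forest search again): each Extra-Trees node commits to its random draw, and the averaging over many such trees is what recovers accuracy while cutting variance (and the per-node cost of the exhaustive scan).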

submitted by BagOfWords1000

