For example, in AdaBoost, at each iteration we need to select the weak classifier that gives the smallest weighted error rate. If we want to build trees with depth greater than 1, it seems natural to train the tree greedily, node by node. What criterion should we use to find the best split at each node? Should we select the split that gives the smallest weighted error, or use another rule such as the Gini value?
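In case it helps clarify the question, here is a minimal sketch of the two criteria I have in mind, assuming `w` holds the current AdaBoost sample weights for the examples reaching the node (the function and variable names are just for illustration):

    import numpy as np

    def weighted_error(y_left, w_left, y_right, w_right):
        """Weighted misclassification error if each child predicts its
        weighted majority class."""
        err = 0.0
        for y, w in ((y_left, w_left), (y_right, w_right)):
            if len(y) == 0:
                continue
            classes = np.unique(y)
            class_weights = np.array([w[y == c].sum() for c in classes])
            majority = classes[np.argmax(class_weights)]
            err += w[y != majority].sum()
        return err / (w_left.sum() + w_right.sum())

    def weighted_gini(y_left, w_left, y_right, w_right):
        """Weighted Gini impurity of the split, averaging the children's
        impurities by their share of the total weight."""
        total = w_left.sum() + w_right.sum()
        gini = 0.0
        for y, w in ((y_left, w_left), (y_right, w_right)):
            if len(y) == 0:
                continue
            p = np.array([w[y == c].sum() for c in np.unique(y)]) / w.sum()
            gini += (w.sum() / total) * (1.0 - np.sum(p ** 2))
        return gini

So the question is basically whether the greedy split search should minimize something like `weighted_error` or something like `weighted_gini` (as I understand it, libraries such as scikit-learn just plug the sample weights into the usual impurity when you pass `sample_weight` to `fit`).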
Thanks a lot!