
How is it that when training boosted tree algorithms the training error so often diverges from the test error, and yet the test error pretty much never increases?

I've seen this so many times (w/r to classification). Randomly choose a set of test data and, while training, plot the training and test errors (w/r to iteration). The errors diverge (with training error falling ever further below the test error), and yet the test error pretty much never increases and usually continues to decrease. I would expect that an increasing divergence between training and test error means there's some overfitting going on, but if that's true, shouldn't the test error increase?

Heck, I've seen test error decrease well after the training error has hit 0.
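The experiment described above can be sketched with scikit-learn; the dataset, hyperparameters, and variable names here are illustrative choices, not from the original post. `staged_predict` yields the ensemble's predictions after each boosting iteration, which gives exactly the two per-iteration error curves being discussed.

```python
# Sketch of the train-vs-test error experiment from the post,
# using scikit-learn's GradientBoostingClassifier.
# Dataset and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = GradientBoostingClassifier(
    n_estimators=500, learning_rate=0.1, max_depth=3, random_state=0
)
clf.fit(X_tr, y_tr)

# staged_predict gives predictions after each boosting iteration,
# so both error curves can be traced over the full run.
train_err = [np.mean(p != y_tr) for p in clf.staged_predict(X_tr)]
test_err = [np.mean(p != y_te) for p in clf.staged_predict(X_te)]

# Typical outcome matching the post: training error reaches (near)
# zero well before the last iteration, while test error keeps
# drifting down or plateaus rather than climbing.
print(f"final train error: {train_err[-1]:.3f}")
print(f"final test error:  {test_err[-1]:.3f}")
```

One commonly cited explanation for this behavior is margin theory: even after the training error hits zero, further boosting iterations keep increasing the classification margins on training points, which can continue to improve generalization.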

submitted by duckandcover
