help with odd learning behaviour on MLP training test sample

I have an MLP-based neural network producing the following learning graph for the training test sample (the held-out sample whose error I monitor during training):

http://i.imgur.com/bi6cD6K.jpg

My understanding is that, as the number of training cycles increases, the error (the squared difference between the neural network output and the target value) on the training sample should drop towards a minimum, while the error on the training test sample should drop to a minimum and then rise (as the neural network overfits the training data).
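
To make concrete what I expect to see, here is a minimal sketch of how I understand the two curves are produced (illustrative only: it assumes scikit-learn's MLPRegressor and a made-up toy dataset, not my actual network or data):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(400, 2))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(400)  # noisy toy target

    X_train, y_train = X[:300], y[:300]   # training sample
    X_test, y_test = X[300:], y[300:]     # training test sample (held out)

    net = MLPRegressor(hidden_layer_sizes=(32,), learning_rate_init=0.01,
                       random_state=0)

    train_err, test_err = [], []
    for cycle in range(500):
        net.partial_fit(X_train, y_train)  # one training cycle
        # mean squared difference between network output and target
        train_err.append(np.mean((net.predict(X_train) - y_train) ** 2))
        test_err.append(np.mean((net.predict(X_test) - y_test) ** 2))

The textbook expectation is that train_err falls more or less monotonically while test_err falls to a minimum and then rises.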

The result I'm getting here for the training test sample is that the error decreases, then increases, then decreases again and stabilises near the minimum. Does anyone know what may be causing this behaviour?
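
One check I thought of (a hypothetical sketch continuing from the snippet above, so it reuses X_train, y_train, X_test and y_test): repeat the run with several random initialisations and average the test-error curves, to see whether the second dip is systematic or just noise from one particular initialisation.

    curves = []
    for seed in range(10):
        net = MLPRegressor(hidden_layer_sizes=(32,), learning_rate_init=0.01,
                           random_state=seed)
        errs = []
        for cycle in range(500):
            net.partial_fit(X_train, y_train)
            errs.append(np.mean((net.predict(X_test) - y_test) ** 2))
        curves.append(errs)

    mean_curve = np.mean(curves, axis=0)  # averaged test-error curve

If the decrease-increase-decrease shape survives the averaging, it is presumably a property of the training dynamics rather than an artefact of a single run.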

submitted by d3pd
