
Neural network always gives same output regardless of input


Hello! I have a neural network coded up in PyBrain. It has 4 input units, 3 hidden units, and 4 output units. The network has been trained until convergence. Now, however, whenever I activate the network on a set of data, it returns exactly the same values regardless of the input.

Code snippet for the creation of the network:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

netWL = buildNetwork(4, 3, 4, bias=True)
dsWL = SupervisedDataSet(4, 4)
trainerWL = BackpropTrainer(netWL, dsWL)

# Add all 916 training samples: the model* lists are the inputs,
# the woodland* lists are the targets.
for j in range(916):
    dsWL.addSample((modelSM[j], modelAT[j], modelRH[j], modelST[j]),
                   (woodlandSM[j], woodlandAT[j], woodlandRH[j], woodlandST[j]))

trainerWL.trainUntilConvergence(verbose=True, validationProportion=0.20,
                                maxEpochs=2000, continueEpochs=20)

Then, I activate the network using:

netWL.activate([modelSM[i], modelAT[i], modelRH[i], modelST[i]]) 

(The above is inside a loop that iterates over all of my data; I have triple-checked that a different input vector is in fact being passed to the ANN each time.)
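
For reference, the loop is roughly this (a sketch; it assumes the same parallel model* lists used to build the training set above):

for i in range(len(modelSM)):
    inputs = [modelSM[i], modelAT[i], modelRH[i], modelST[i]]
    # Log each input vector next to its output to confirm the inputs vary.
    print(inputs, netWL.activate(inputs))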

For example, two of my inputs are:

['0.146500', '16.264000', '93.900000', '18.851000']
['0.223800', '14.178000', '97.000000', '15.608000']

Both return this output:

[0.2610043921756522, -1.1608110829979246, 80.40450766175121, 3.1899894546988423] 
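
A minimal check that reproduces this, using just those two samples (a sketch assuming the trained netWL from above; my values are stored as strings, so they are cast to float first):

samples = [
    ['0.146500', '16.264000', '93.900000', '18.851000'],
    ['0.223800', '14.178000', '97.000000', '15.608000'],
]
for s in samples:
    # Cast the string-formatted readings to floats before activating.
    print(netWL.activate([float(v) for v in s]))

Both iterations print the same four values shown above.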

I can't figure out what I've done wrong that makes the ANN return the same values every time. Any ideas?

submitted by anonymouse72
