New to neural networks. Can someone help me clarify something? I'm not sure how to handle real-valued data. (This question is not about the learning algorithm, but about the architecture, or maybe the component neurons, of the neural network.)
I've seen examples of neural networks doing things like, given a set of words, predicting the most likely next word, or, given a hand-written digit, recognizing it. But in those cases the input data is quite restricted.
In the first case it worked the following way. Take all the words in the English language and order them: (sun, car, yellow, is, me, ...). Then any word can be represented by a vector whose entries are zero for all the words you don't want to represent, and one for the word that you do. So "car" would be represented by the vector (0, 1, 0, 0, ...). In the example I saw for predicting the next word, the input was several such vectors, and the neural network would then propagate forward through sigmoid neurons.
In the example for the hand-written digits, each pixel could take any value between zero and one, but that's still a compact interval. This one also propagated forward using sigmoid neurons.
I'm stressing that one example worked on binary data and the other on data in a compact interval, because sigmoid neurons seem especially suited to these situations, since their output lies between zero and one.
How would I go about handling input (and output) data in a neural network that can take any real value? For concreteness, imagine I want to predict the value of a stock given its past 5 values. The easy question is: are sigmoid neurons ruled out because it's somehow impossible to use them for this problem, and why exactly? And if so, how does one handle such data?
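One approach I've seen mentioned (I'm not sure it's the standard one, so treat this as a guess): standardize the real-valued inputs, keep sigmoid hidden units, and make the output neuron linear so it isn't stuck in (0, 1). All the prices and weights below are made up purely to illustrate:

```python
import math

# Made-up past 5 stock values.
prices = [102.0, 105.0, 101.0, 108.0, 110.0]

# Standardize inputs: subtract the mean, divide by the standard deviation.
mean = sum(prices) / len(prices)
std = (sum((p - mean) ** 2 for p in prices) / len(prices)) ** 0.5
x = [(p - mean) / std for p in prices]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden sigmoid unit with arbitrary (untrained) weights...
w_hidden = [0.1, -0.2, 0.05, 0.3, 0.1]
h = sigmoid(sum(w * xi for w, xi in zip(w_hidden, x)))

# ...followed by a *linear* output neuron: just a weighted sum, no squashing,
# so the prediction can be any real value.
w_out, b_out = 4.0, 0.0
y_scaled = w_out * h + b_out

# Undo the standardization to get back to the price scale.
prediction = y_scaled * std + mean
print(prediction)
```

The point of the sketch is only the architecture: sigmoids in the hidden layer are fine because their bounded output feeds the next layer, and the unbounded range comes from leaving the output unit linear.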
Thank you for your time