
Do input/output neurons of neural networks have activation functions? (Newbie question)


CONTEXT

I am trying to code a simple single-hidden-layer neural network from scratch to approximate functions like exp(x) - just to get an idea of how the data ends up represented in the weights.

I am planning on using a single input neuron that takes a single real-valued input.

This input (and a bias unit) goes through weights to the hidden layer, which has a sigmoid activation function. The hidden activations are then weighted and summed to give the output neuron's value - and that output is supposed to be the function approximation.

So essentially, if x is the input to the neural network, the output of the output neuron is W·sigmoid(b + Vx) + c, where b is the bias vector of the hidden layer, c is the bias of the output neuron, V is the vector of weights between the input and the hidden layer, and W is the vector of weights between the hidden and output layer. Learning is done through simple backpropagation.
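To make this concrete, here is a rough NumPy sketch of the forward pass and one backpropagation step I have in mind; the hidden size, learning rate, iteration count, and the exp(x) target range are just placeholder choices, not part of the design itself:

```python
import numpy as np

# 1 input -> H hidden (sigmoid) -> 1 linear output, as described above.
rng = np.random.default_rng(0)
H = 10                       # hidden layer size (placeholder)
V = rng.normal(size=(H, 1))  # input -> hidden weights
b = np.zeros((H, 1))         # hidden-layer bias
W = rng.normal(size=(1, H))  # hidden -> output weights
c = np.zeros((1, 1))         # output bias
lr = 0.01                    # learning rate (placeholder)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # x has shape (1, N): a batch of N scalar inputs
    h = sigmoid(b + V @ x)   # hidden activations, shape (H, N)
    y = W @ h + c            # linear output, shape (1, N)
    return h, y

def train_step(x, t):
    # One step of plain backpropagation under mean squared error 0.5*(y - t)^2.
    global V, b, W, c
    h, y = forward(x)
    n = x.shape[1]
    dy = (y - t) / n                       # dL/dy, averaged over the batch
    dW = dy @ h.T                          # hidden -> output weight gradient
    dc = dy.sum(axis=1, keepdims=True)     # output bias gradient
    dh = W.T @ dy                          # backprop into hidden activations
    dz = dh * h * (1.0 - h)                # through the sigmoid derivative
    dV = dz @ x.T                          # input -> hidden weight gradient
    db = dz.sum(axis=1, keepdims=True)     # hidden bias gradient
    W -= lr * dW; c -= lr * dc; V -= lr * dV; b -= lr * db

# Example target: fit exp(x) on [-1, 1].
x = np.linspace(-1, 1, 200).reshape(1, -1)
t = np.exp(x)
for _ in range(20000):
    train_step(x, t)
```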


QUESTIONS

  1. Is this a sound design?

  2. It makes sense for the input and output layers to not have activation functions, doesn't it? Are there cases when the input/output layers will have activation functions too?

The latter question is the main reason I made this thread as evidenced by the title.

submitted by TheHiddenLayer