calculate the derivative at that point either the proper way with formulas, or the lazy way: x -> y1, x + 0.001 -> y2, (y2 - y1) / 0.001 = the slope you want
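a minimal sketch of the lazy way in Python, assuming f is whatever function you're probing (the names numerical_slope and h are mine, not from the thread):

```python
def numerical_slope(f, x, h=0.001):
    """Forward-difference approximation of the derivative of f at x."""
    y1 = f(x)      # value at the point
    y2 = f(x + h)  # value a tiny step to the right
    return (y2 - y1) / h  # rise over run ~ slope at x

# example: derivative of x^2 at x = 3 is ~6
print(numerical_slope(lambda x: x * x, 3.0))  # ~6.001
```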
1) Somebody told me a perceptron is just a line based on the training dataset. 2) The bias in the perceptron is the b in a simple line equation, y = mx + b. 3) What I want to get is the m of a perceptron which has 2 weights.
1) this is wrong. all of the neural network stuff relies on non-linearity, usually done by an activation function 2) there should be an activation function, something like tanh(mx + b). if not, then it's wrong. 3) if you insist, then take the average of the weights: (m1 + m2) / 2
1) I meant ignoring the activation function, sorry. 2) Right... mine is a sign function. It returns +1 or -1. 3) Thanks, but I don't know why it doesn't work!
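for what it's worth, a hedged sketch of a different way to read a slope off a 2-weight sign perceptron, sign(w1*x + w2*y + b): its decision boundary is where w1*x + w2*y + b = 0, which rearranges to y = -(w1/w2)*x - b/w2, so the line's m is -w1/w2 rather than an average (the names w1, w2, b are assumptions, not from the thread):

```python
def boundary_line(w1, w2, b):
    """Decision boundary of sign(w1*x + w2*y + b):
    w1*x + w2*y + b = 0  ->  y = -(w1/w2)*x - b/w2.
    Returns (slope m, intercept c). Assumes w2 != 0."""
    m = -w1 / w2
    c = -b / w2
    return m, c

# example: weights (2.0, 1.0) and bias -1.0 give the line y = -2x + 1
print(boundary_line(2.0, 1.0, -1.0))  # (-2.0, 1.0)
```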
what's the source of the first and second claims? to me it's clearly a neural network that solves linear regression; it's still a neural network, just not an interesting one
all the layers will collapse into 1, so a linear neural network can always be represented by a 1-layer neural network, which is usually not called a neural network today
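a quick numeric check of the collapse claim, assuming plain matrix layers with no activation (numpy, made-up shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)

# two "linear layers" stacked: y = W2 @ (W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
two_layer = W2 @ (W1 @ x + b1) + b2

# the same map as a single layer: W = W2 @ W1, b = W2 @ b1 + b2
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layer, one_layer))  # True: the layers collapse
```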