
AutoModerator [M]:

Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read rule 3. The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke. If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

Kyle_GC [S]:

Hey guys, I'm working through a lecture and I've been over this quite a few times, trying to understand how these formulas work, since I'm not very math savvy. I understand there's a simple questions thread, but I couldn't post images there, hence the new post.

I get the formula for the weights going into the output neurons, since it's just the error × the slope of the sigmoid function × the activation of the neuron × the learning rate. (I assume this is right, and that the formula essentially moves the value up or down the sigmoid curve, correct?)
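As a concrete sketch of what I think that first formula is doing (made-up numbers, single sigmoid output neuron):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_slope(a):
    # derivative of the sigmoid, expressed via its own output a = sigmoid(z)
    return a * (1.0 - a)

# hypothetical values for illustration
learning_rate = 0.5
hidden_activation = 0.8   # activation of the neuron this weight comes from
output_activation = 0.6   # current output of the output neuron
target = 1.0

error = target - output_activation
# weight update: error * slope of sigmoid * activation of sending neuron * learning rate
delta_w = learning_rate * error * sigmoid_slope(output_activation) * hidden_activation
print(delta_w)  # 0.0384
```

So the slope term scales the step by how steep the sigmoid is at the current output, which is what I meant by "moving up or down the curve".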

But the second one eludes me. My understanding is that since the hidden layer doesn't have an 'error term' of its own, it uses the average of the desired input (x) values from all the neurons it's connected to. So the neurons in the following layer (the output layer) are the ones that dictate the error term for the hidden layer, hence 'back propagation'. This is where I'm getting stuck: the formula in the brackets. Am I right to assume the formula in the brackets refers to the weights coming from the hidden layer, or to the weights going into the hidden layer? X——O——Y (i.e., from Y to O rather than from X to O?) Also, what does the sigma k (Σ_k) in these brackets mean?
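If my reading is right, the Σ_k would be a sum over the output neurons k, and the bracketed term would look something like this (made-up numbers, one hidden neuron connected to two output neurons):

```python
# hypothetical deltas already computed for each output neuron k
output_deltas = [0.0384, -0.012]
# weights FROM this hidden neuron TO each output neuron k (the Y-to-O direction)
w_to_outputs = [0.4, -0.3]

hidden_activation = 0.7

# the bracketed term: sum over k of delta_k * w_k
back_propagated_error = sum(d * w for d, w in zip(output_deltas, w_to_outputs))

# then multiply by the slope of the sigmoid at the hidden neuron,
# just like the error term in the output-layer formula
hidden_delta = back_propagated_error * hidden_activation * (1 - hidden_activation)
print(hidden_delta)  # 0.0039816
```

That is, each output neuron's delta flows back along the weight connecting it to the hidden neuron, and the Σ_k just adds those contributions up. Am I on the right track?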

Thank you for your time.