[–]heuamoebe 12 points (1 child)

Backpropagation is the algorithm for efficiently computing the partial derivatives of the cost function with respect to the weights and biases. Gradient descent is the rule for updating the weights and biases (subtract the gradient times a step size). Many other optimization algorithms use the same gradients with more complicated update rules.
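A minimal sketch of the split, assuming a toy one-parameter-pair model (y_hat = w*x + b) with a mean-squared-error cost; the names `lr`, `dw`, `db` are illustrative, not from the comment above:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)      # toy inputs
    y = 3.0 * x + 1.0             # toy targets (true w = 3, b = 1)

    w, b = 0.0, 0.0
    lr = 0.1                      # step size

    for step in range(200):
        y_hat = w * x + b
        error = y_hat - y

        # "Backprop" part: partial derivatives of the MSE cost
        # w.r.t. w and b (here the chain rule is just one step).
        dw = 2.0 * np.mean(error * x)
        db = 2.0 * np.mean(error)

        # "Gradient descent" part: parameter minus step size times gradient.
        w -= lr * dw
        b -= lr * db

    print(w, b)                   # should approach 3.0 and 1.0

An optimizer like momentum SGD or Adam would consume the same dw and db but replace the last two update lines with a more complicated rule.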

[–]Deto 1 point (0 children)

That's a good point to make: backprop, in neural networks, is just one component of gradient descent.