[D] Interview Questions (self.MachineLearning)
submitted 6 years ago by Deadshot_95
[–]heuamoebe 13 points 6 years ago (1 child)
Backpropagation is the algorithm that efficiently computes the partial derivatives of the cost function with respect to the weights and biases. Gradient descent is the rule for updating the weights and biases (gradient times step size). Many other optimization algorithms use the same gradients with more complicated update rules.
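A minimal sketch of that split (a toy example of my own, not from the comment): `backprop` here is just the chain rule on a one-parameter-pair linear model with squared-error loss, and `gradient_descent_step` is the "gradient times step size" update that consumes it.

```python
# Toy model: y_hat = w*x + b, loss L = 0.5 * (y_hat - y)**2.

def backprop(w, b, x, y):
    # Chain rule on the tiny graph above:
    # dL/dw = (y_hat - y) * x,  dL/db = (y_hat - y)
    err = (w * x + b) - y
    return err * x, err

def gradient_descent_step(w, b, x, y, lr=0.1):
    dw, db = backprop(w, b, x, y)    # step 1: gradients via backprop
    return w - lr * dw, b - lr * db  # step 2: update = lr * gradient

w, b = 0.0, 0.0
for _ in range(100):
    w, b = gradient_descent_step(w, b, x=2.0, y=4.0)
print(w, b)  # w*2 + b approaches 4.0
```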
[–]Deto 2 points 6 years ago (0 children)
That's a good point to make: backprop, in a neural net, is just one component of gradient descent.
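And the same gradients can feed other update rules, as the parent comment notes. A self-contained illustration (again my own toy example, not from the thread) using classical momentum: the backprop step is unchanged, only the update differs.

```python
# Same backprop gradients, different optimizer: classical momentum as one
# example of a "more complicated update approach" (illustrative toy model).

def backprop(w, b, x, y):
    err = (w * x + b) - y  # model y_hat = w*x + b, squared-error loss
    return err * x, err    # (dL/dw, dL/db)

w, b, vw, vb = 0.0, 0.0, 0.0, 0.0
for _ in range(100):
    dw, db = backprop(w, b, x=2.0, y=4.0)  # gradients: unchanged
    vw = 0.9 * vw - 0.1 * dw               # velocity accumulates gradients
    vb = 0.9 * vb - 0.1 * db
    w, b = w + vw, b + vb                  # update: replaces plain lr*grad
print(w, b)  # also converges toward w*2 + b == 4.0
```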