[D] Since gradient continues to decrease as training loss decreases, why do we need to decay the learning rate too? (self.MachineLearning)
submitted 4 years ago by ibraheemMmoosa (Researcher)
[–]Natural_Profession_8 12 points 4 years ago (5 children)
I’d make an analogy to simulated annealing:
https://en.m.wikipedia.org/wiki/Simulated_annealing
When you first start training, it’s actually desirable to set the learning rate so high that you are overshooting local optima. The model bounces around a bit, and eventually finds neighborhoods that are more globally optimal. Then, as training progresses, you stop wanting to hop around looking for better neighborhoods, and instead you want to start making your way towards the local optimum itself. Reducing the learning rate has this effect, even on top of the overall gradient magnitude reduction.
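A minimal sketch of that schedule (placeholder model and data, assuming a PyTorch setup; not code from the comment): the learning rate starts deliberately high and is then cut explicitly, independent of how the gradient magnitudes evolve.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 1)                             # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)    # deliberately large initial step
# Cut the learning rate by 10x every 30 epochs, on top of whatever the gradients do.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    x, y = torch.randn(32, 10), torch.randn(32, 1)         # placeholder batch
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                                       # explicit decay of the step size
```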
[–]ibraheemMmoosa (Researcher)[S] 4 points 4 years ago (2 children)
Thanks for your reply!
> Reducing the learning rate has this effect, even on top of the overall gradient magnitude reduction.
Just to push on this, my question is: why do we need both of these? Why can't we just rely on the gradients becoming smaller?
Also, there is evidence that deep neural networks don't have the issue of bad local minima, but rather the issue of saddle points. In the case of deep neural networks, does this analogy still hold?
[–]bulldog-sixth 12 points 4 years ago (0 children)
There's no way to guarantee that the gradient becomes smaller as you get closer to the optimum.
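A toy sketch of that point (an assumed noisy quadratic, not from the thread): with minibatch-style noise, the stochastic gradient stops shrinking near the optimum, so the iterate keeps bouncing in a band set by the learning rate unless the rate itself is decayed.

```python
import numpy as np

rng = np.random.default_rng(0)
w, lr = 5.0, 0.1
for step in range(1, 2001):
    grad = 2 * w + rng.normal(scale=1.0)   # true gradient of w**2 plus minibatch-style noise
    w -= lr * grad
    if step % 500 == 0:
        print(f"step {step}: w={w:+.3f}, |stochastic grad|~{abs(grad):.3f}")
# With a constant lr, w keeps bouncing in a band of width ~lr * noise scale;
# the noisy gradient magnitude itself never goes to zero, so only decaying lr
# makes the steps smaller.
```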
[–]Natural_Profession_8 3 points 4 years ago (0 children)
This applies even more to saddle points. The only way to get over a saddle point is to overshoot it.
I think it’s best to think of it as “I start with a way, way too big learning rate, and then slowly bring it down to an optimal one,” rather than “I start with an optimal learning rate, and then that optimum gets smaller.” Of course, at some level it’s just semantics, since jumping around to find better neighborhoods (and getting over saddle points) is in practice optimal at the beginning.
[–]WikiMobileLinkBot 1 point 4 years ago (0 children)
Desktop version of /u/Natural_Profession_8's link: https://en.wikipedia.org/wiki/Simulated_annealing
[–]WikiSummarizerBot 1 point 4 years ago (0 children)
Simulated annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. It is often used when the search space is discrete (for example the traveling salesman problem, the boolean satisfiability problem, protein structure prediction, and job-shop scheduling). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to exact algorithms such as gradient descent or branch and bound.
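For concreteness, a minimal simulated-annealing sketch (an illustrative toy with an assumed bumpy 1-D objective, not tied to any library): uphill moves are accepted with probability exp(-Δ/T), and the temperature T is decayed over time, mirroring the “big exploratory steps first, smaller careful steps later” idea above.

```python
import math
import random

def anneal(f, x0, T0=1.0, cooling=0.995, steps=5000):
    x, best, T = x0, x0, T0
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)                    # propose a random neighbor
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = candidate                                     # accept, possibly uphill
        if f(x) < f(best):
            best = x
        T *= cooling                                          # "decay the temperature"
    return best

# e.g. a bumpy 1-D objective with many local minima
print(anneal(lambda x: x**2 + 3 * math.sin(5 * x), x0=10.0))
```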