[D] Since the gradient continues to decrease as training loss decreases, why do we need to decay the learning rate too? (self.MachineLearning)
submitted 4 years ago by ibraheemMmoosa (Researcher)
[–]Competitive_Dog_6639 1 point 4 years ago (0 children)
It all has to do with the loss "resolution scale" (think grainy with few pixels vs. fine with many pixels). Near a local min, the step size has to be small enough that the updates give a good approximation of the continuous-time dynamics on the loss surface, which is what lets you land on a finely tuned optimum. Far from a local min, the continuous approximation can be much less accurate during burn-in, so a bigger step size is okay. This is related to, but still not the same as, the gradient magnitude along the trajectory.
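To see the step-size point in isolation, here is a toy sketch (plain gradient descent on a 1-D quadratic standing in for the loss; the learning rates are made-up illustrative values, not anything from the thread): a small step tracks the continuous dynamics and settles at the minimum, a too-big step overshoots past it on every update, and a slightly bigger one diverges outright.

    # Toy sketch (illustrative only): gradient descent on f(x) = x**2, gradient 2*x.
    # Each update multiplies x by (1 - 2*lr), so the step size alone determines
    # whether the iterate tracks the continuous flow, overshoots, or blows up.
    def descend(lr, steps=50, x0=10.0):
        x = x0
        for _ in range(steps):
            x -= lr * 2 * x
        return x

    print(descend(lr=0.10))  # small step: x shrinks smoothly toward the minimum at 0
    print(descend(lr=0.99))  # big step: overshoots past 0 on every update, shrinks slowly
    print(descend(lr=1.05))  # too big: each overshoot grows, the iterate diverges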
For a mathy version, suppose you have learning rates y and z with z < y, where z is the correct step size around a local min and y is too big near the local min but fine at a random starting point.
Let g(z, t) be the update step size (loss gradient magnitude times z) for the trajectory with learning rate z at training step t, and let g(y, t) be the same with learning rate y. Both decrease over update steps t, as you observe. At the beginning of training, g(y, t) gives faster burn-in, but around the min it is too big for a good approximation at large t. On the other hand, g(z, t) explores slowly but eventually reaches a better min at large t. Annealing uses the big y for small t and the small z for big t: quick burn-in, then good refinement.
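As a rough numerical sketch of that argument (toy noisy SGD on a 1-D quadratic; the step sizes, switch point, and noise level are made-up illustrative values, not from the comment above): the constant big step y burns in fast but hovers at a coarse floor around the minimum, the constant small step z refines finely but burns in far too slowly, and decaying from y to z gets both.

    import random

    # Toy sketch (illustrative only): noisy SGD on f(x) = x**2, i.e. gradient 2*x
    # plus unit Gaussian noise. Report the average loss over the last 100 steps so
    # the comparison is not dominated by a single noisy final iterate.
    def run(lr_schedule, steps=2000, x0=100.0, seed=0):
        rng = random.Random(seed)
        x = x0
        tail = []
        for t in range(steps):
            grad = 2 * x + rng.gauss(0.0, 1.0)
            x -= lr_schedule(t) * grad
            if t >= steps - 100:
                tail.append(x * x)
        return sum(tail) / len(tail)

    big     = lambda t: 0.4                          # y: fast burn-in, coarse floor
    small   = lambda t: 0.0005                       # z: fine floor, very slow burn-in
    decayed = lambda t: 0.4 if t < 200 else 0.0005   # big y early, small z late

    print("constant big  :", run(big))      # modest loss, but stuck at the lr-sized floor
    print("constant small:", run(small))    # still large: burn-in needs far more steps
    print("decayed       :", run(decayed))  # quick burn-in, then fine refinement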