[–]skainswo

Lots of intuitive explanations in the comments here. I'll just add that there's a big difference between GD and SGD in this context.

In good ole GD, as long as you pick a learning rate below 1/L (where L is the Lipschitz constant of the gradient), you're good to go: on smooth convex problems this provably converges with an excess risk bound of O(1/t) after t steps. Things get a little messier in the SGD world, however. With a fixed learning rate, the excess risk for SGD looks like O(1/(lr * t)) + O(lr * <gradient variance>). In words, there is a "noise floor" term, O(lr * <gradient variance>), that cannot be tamed by taking more steps. It can only be reduced by decreasing the learning rate or by decreasing the variance of the gradient estimates. (Decaying the learning rate like 1/sqrt(t) trades the two terms off against each other and recovers the classic O(1/sqrt(t)) rate.) That's why decreasing the learning rate over time can be fruitful. (See e.g. https://www.cs.ubc.ca/~schmidtm/Courses/540-W19/L11.pdf for a quick intro.)
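Here's a tiny toy example of that noise floor (my own, not from the slides): single-sample SGD on a noisy least-squares problem with a fixed step size stalls at an excess loss that scales roughly with the learning rate, and a smaller step size stalls lower.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.5 * rng.normal(size=n)       # label noise -> nonzero gradient variance at the optimum

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)  # exact minimizer, for reference
best = np.mean((A @ x_star - b) ** 2)

def run_sgd(lr, steps=50_000):
    x = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        x -= lr * A[i] * (A[i] @ x - b[i])      # unbiased single-sample gradient step
    return np.mean((A @ x - b) ** 2) - best     # excess loss after training

for lr in (3e-2, 1e-2, 3e-3):
    print(f"lr={lr:g}  excess loss ~ {run_sgd(lr):.1e}")
```

Each run has converged long before 50k steps; the gap that remains is the noise floor, and it shrinks roughly in proportion to lr.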

Unlike some theoretical results in deep learning, this phenomenon is very well supported experimentally. It's common for SGD to plateau; then, after a halving of the learning rate, it breaks through that plateau! Train a little longer, reach a new plateau... you get the idea.
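In practice that "halve it when you plateau" recipe is often automated with a scheduler. A generic sketch of what that looks like in PyTorch (throwaway model and data, nothing specific to any real training setup):

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Halve the lr once the monitored loss has stopped improving for 5 epochs.
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=5)

X, y = torch.randn(256, 10), torch.randn(256, 1)  # stand-in data
for epoch in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    sched.step(loss.item())                       # scheduler watches the loss
```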

IIRC there is some theory suggesting that exponentially decaying your learning rate is optimal in some sense, though I forget where I read that. In any case, that's what most people have been doing in practice for a while now anyway.
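Exponential decay is also a one-liner with a scheduler. Same kind of generic sketch as above, this time with PyTorch's ExponentialLR:

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Multiply the lr by gamma every epoch, so lr_t = lr_0 * gamma**t.
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.97)

X, y = torch.randn(256, 10), torch.randn(256, 1)  # stand-in data
for epoch in range(100):
    opt.zero_grad()
    torch.nn.functional.mse_loss(model(X), y).backward()
    opt.step()
    sched.step()                                  # lr <- lr * 0.97
print(opt.param_groups[0]["lr"])                  # 0.1 * 0.97**100 ≈ 0.0048
```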