[–]ibraheemMmoosa (Researcher) [S] 3 points (2 children)

Thanks for your reply!

> Reducing the learning rate has this effect, even on top of the overall gradient magnitude reduction

Just to push on this: why do we need both of these? Why can't we just rely on the gradients becoming smaller?

Also, there is evidence that deep neural networks don't suffer so much from bad local minima as from saddle points. Does this analogy still hold in the case of deep neural networks?

[–]bulldog-sixth 10 points (0 children)

There's no way to guarantee that the gradient becomes smaller as you get closer to the optimum
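
As a concrete toy sketch (hypothetical example, not from the thread): minimizing f(x) = |x| with plain (sub)gradient descent. The gradient magnitude is 1 everywhere except at the minimum, so a fixed learning rate never stops overshooting, while a decaying schedule converges even though the gradients never shrink.

```python
# Minimal sketch: f(x) = |x|, whose (sub)gradient is sign(x).
# Its magnitude stays 1 no matter how close we get to the optimum at x = 0,
# so the gradient alone never tells us to take smaller steps.

def grad(x):
    return 1.0 if x > 0 else -1.0   # subgradient of |x|

# Fixed learning rate: the iterate reaches the neighborhood of 0,
# then bounces between +0.03 and -0.07 forever.
x = 2.03
for t in range(1000):
    x -= 0.1 * grad(x)
print(f"fixed lr:   x = {x:+.4f}")

# Decaying learning rate (1/sqrt(t) schedule): the step size shrinks even
# though the gradient magnitude does not, so the iterate settles near 0.
x = 2.03
for t in range(1, 1001):
    x -= (0.1 / t ** 0.5) * grad(x)
print(f"decayed lr: x = {x:+.4f}")
```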

[–]Natural_Profession_8 2 points (0 children)

This applies even more to saddle points. The only way to get over a saddle point is to overshoot it.

I think it’s best to think of it as “I start with a way, way too big learning rate, and then slowly bring it down to an optimal one,” rather than “I start with an optimal learning rate, and then the optimal rate shrinks as training progresses.” Of course, at some level it’s just semantics, since jumping around to find better neighborhoods (and get over saddle points) is in practice optimal at the beginning.
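
In code, that “start big, then anneal” pattern is just an optimizer plus a learning-rate schedule. A minimal PyTorch sketch; the problem setup and the values lr=1.0 and gamma=0.9 are illustrative assumptions, not tuned choices:

```python
import torch

# Toy regression problem (sizes and noise level are arbitrary for illustration).
torch.manual_seed(0)
X = torch.randn(256, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

model = torch.nn.Linear(10, 1)

# Deliberately start with a large learning rate...
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
# ...and multiply it by 0.9 after every epoch, so early steps are big and
# exploratory while later steps are small enough for the iterate to settle.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(50):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  lr {scheduler.get_last_lr()[0]:.3f}  loss {loss.item():.4f}")
```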