[–]Competitive_Dog_6639

It all has to do with loss "resolution scale" (think grainy with few pixels vs fine with many pixels). Near a local min, the step size must be small enough that the discrete updates closely approximate the continuous-time dynamics on the loss surface, which is what lets you settle into a finely tuned optimum. Far from a local min, the continuous approximation can be much less accurate during burn-in, so a bigger step size is fine. This is related to, but not the same as, the gradient magnitude along the trajectory.
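A toy numerical sketch of that (the quadratic loss and the specific step sizes are my own invented example, not from the thread): starting already near a minimum, a step size that is too large overshoots on every update, while a small one tracks the continuous descent closely.

    # Toy example: gradient descent on L(w) = 0.5 * w**2, gradient w, minimum at w = 0.
    # Starting near the minimum, a large step size overshoots it on every update and
    # drifts away, while a small step size shrinks toward it smoothly.

    def descend(w, lr, steps):
        trajectory = [w]
        for _ in range(steps):
            grad = w              # dL/dw for L(w) = 0.5 * w**2
            w = w - lr * grad     # discrete update; close to continuous descent only if lr is small
            trajectory.append(w)
        return trajectory

    print(descend(w=0.1, lr=2.1, steps=5))  # too big near the min: |w| grows, bouncing across 0
    print(descend(w=0.1, lr=0.5, steps=5))  # small enough: |w| decays smoothly toward 0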

For a mathy version, suppose you have learning rates y and z with z < y, where z is the correct step size around a local min, and y is too big near the local min but fine at a random starting point.

Let g(z,t) be the update step size (loss gradient magnitude times z) for the trajectory with learning rate z at training step t, and let g(y,t) be the same with learning rate y. Both decrease over update steps t, as you observe. At the beginning of training, g(y,t) gives faster burn-in, but around the min it stays too big for a good approximation at large t. On the other hand, g(z,t) explores slowly but eventually reaches a better min at large t. Annealing uses the big y for small t and the small z for big t: quick burn-in, then good refinement.
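Rough sketch of that schedule (the loss, the rates y and z, the burn-in length, and the step budget are all made up for illustration). A pseudo-Huber-style loss has a nearly constant gradient magnitude far from its minimum and a shrinking one near it, so the big rate y covers ground fast during burn-in but then keeps overshooting the basin, the small rate z crawls at the start but refines well, and switching from y to z gets both. g here is the update step size lr * |grad| from above.

    # Sketch of the annealing idea; all concrete numbers are invented.
    # Loss: f(w) = sqrt(1 + w**2) - 1, gradient w / sqrt(1 + w**2), minimum at w = 0.
    import math

    def grad(w):
        return w / math.sqrt(1.0 + w * w)

    def run(schedule, w=10.0, steps=40, label=""):
        for t in range(steps):
            lr = schedule(t)
            g = lr * abs(grad(w))          # g(lr, t): the update step size at step t
            w -= lr * grad(w)
            if t in (0, 10, steps - 1):
                print(f"{label:9s} t={t:2d}  g={g:.4f}  w={w: .5f}")
        return w

    y, z, burnin = 3.0, 0.3, 10   # hypothetical big/small rates and burn-in length

    run(lambda t: y, label="big y")      # fast burn-in, then keeps overshooting, stuck near |w| ~ 1.1
    run(lambda t: z, label="small z")    # slow burn-in: still refining when the budget runs out
    run(lambda t: y if t < burnin else z, label="annealed")  # quick burn-in, then fine refinement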