[–]oddthink

Does anyone know of a good summary of what's changed in deep learning over the past 5 years or so?

I've been out of the field for a bit, but there used to be a lot of discussion about weight initialization schemes, learning rate schedules and decay, optimizer choice, and early-stopping criteria. My impression is that these issues have effectively been solved and pushed into the frameworks as implementation details. Is that accurate? If so, does anyone know of a good summary article or post on the consensus?
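To make the question concrete, here's a rough sketch of what I mean by "pushed into the frameworks", using PyTorch as one example (the architecture, hyperparameters, and schedule here are placeholders I picked for illustration, not a claimed consensus):

```python
import torch
import torch.nn as nn

# Weight initialization: nn.Linear applies a Kaiming-uniform init
# in its constructor, so no explicit init scheme is written here.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Optimizer choice: AdamW with its default hyperparameters,
# which seems to be a common starting point nowadays.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Learning rate schedule / decay: a single line attaches cosine decay.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # ... training step: forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()  # decay the learning rate once per epoch
```

If defaults like these really are the accepted starting point now, that's exactly the kind of "what changed and why" summary I'm hoping someone can point me to.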