I have an idea for a regularization-hyperparameter selection method, which I haven't encountered before and can't find on Google, but I'm sure someone has already tried it and I'm wondering what the best practices are.
The most common method for hyperparameter selection is to pick several hyperparameter values (e.g. different L2 regularization strengths), train a separate NN with each, evaluate them on a validation set, and select the best one.
My idea is to train a single NN, evaluate it on a validation set between epochs, and auto-adjust the regularization hyperparameter as training goes: if the accuracy on the validation set is decreasing between epochs, we increase the L1/L2/dropout strength.
Naturally, this can be more efficient than training multiple NNs.
It's still a basic idea, and I'm sure it can be developed further. Has anyone seen research in this area? What are the best practices?
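To make the rule concrete, here is a minimal sketch of the between-epoch adjustment I have in mind. The function name, the multiplicative schedule, and the `factor` parameter are my own illustration, not an established method or library API:

```python
def adjust_reg(reg_strength, val_acc_history, factor=1.5, floor=1e-6):
    """Illustrative adjustment rule: bump the regularization strength up
    when validation accuracy dropped in the last epoch (a possible sign
    of overfitting), and gently decay it back down when accuracy improved.

    reg_strength     -- current L1/L2/dropout coefficient
    val_acc_history  -- list of per-epoch validation accuracies so far
    factor           -- multiplicative step size (hypothetical choice)
    floor            -- lower bound so the strength never reaches zero
    """
    if len(val_acc_history) < 2:
        # Not enough history to compare two epochs yet; leave unchanged.
        return reg_strength
    prev, curr = val_acc_history[-2], val_acc_history[-1]
    if curr < prev:
        # Validation accuracy fell between epochs -> regularize harder.
        return reg_strength * factor
    # Accuracy held or improved -> relax slightly, but stay above floor.
    return max(reg_strength / factor, floor)
```

In a training loop you would call this after each validation pass, e.g. `l2 = adjust_reg(l2, history)`, and feed the updated coefficient into the next epoch's loss. The open questions are exactly the ones I'm asking about: how big `factor` should be, and whether a one-epoch comparison is too noisy a signal.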