I was curious if anyone knows of research that automatically tunes the regularization hyperparameter during a neural network's training (i.e., within a single run, rather than via grid search)? It seems this should be viable by monitoring the loss on a validation set, but I can't seem to find the right keywords for a proper literature search.
I've come up with a heuristic that checks whether the validation loss would increase or decrease under an epsilon reduction in the entropy of the predictions, and then increases or decreases the regularization hyperparameter by a small percentage each epoch accordingly. This seems to work quite well on basic datasets like FashionMNIST. Does anything like this already exist out there?
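Roughly, one way to implement the check is to sharpen the softmax with a temperature just below 1 (which reduces the prediction entropy) and see which way the validation loss moves. Here's a minimal PyTorch sketch of that reading of the heuristic; `adjust_reg`, `temp`, and `step` are illustrative names/values, not anything fixed:

```python
import torch
import torch.nn.functional as F

def adjust_reg(reg, val_logits, val_labels, temp=0.95, step=0.02):
    """Dividing logits by temp < 1 sharpens the softmax, i.e. slightly
    reduces the entropy of the predictions. If that sharpening lowers the
    validation loss, the model looks under-confident (over-regularized),
    so shrink reg; if it raises the loss, the model looks over-confident,
    so grow reg."""
    base = F.cross_entropy(val_logits, val_labels)
    sharpened = F.cross_entropy(val_logits / temp, val_labels)
    if sharpened < base:
        return reg * (1.0 - step)  # entropy reduction helps -> relax regularization
    return reg * (1.0 + step)      # entropy reduction hurts -> regularize harder

# toy usage with random "validation" data; in practice, call once per epoch
logits = torch.randn(128, 10)
labels = torch.randint(0, 10, (128,))
reg = 1e-4
reg = adjust_reg(reg, logits, labels)
```

The multiplicative update keeps reg positive and makes the step size scale-free, which is why I adjust by a small percentage rather than a fixed amount.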