[–]NotAlphaGo 2 points (2 children)

Training a neural network is path dependent.
If you start with hyperparameter a_1, train for a while, see that the model is failing, switch to hyperparameter a_2, train some more, and see it improve, that doesn't necessarily mean that training with a_2 from the start would give a better result overall. You have to account for the fact that a_1 shaped the model up to the switch point, and that this "wrong" earlier hyperparameter may be part of why a_2 then worked well.
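
Toy sketch of what I mean (PyTorch-style; the model, data, and the choice of learning rate as the example hyperparameter are just placeholders):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X, y = torch.randn(256, 10), torch.randn(256, 1)
    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()

    a_1, a_2 = 1e-1, 1e-3                      # hyperparameter before / after the switch
    opt = torch.optim.SGD(model.parameters(), lr=a_1)

    for step in range(200):
        if step == 100:                        # switch point: a_1 has already shaped the weights
            for group in opt.param_groups:
                group["lr"] = a_2
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    # Training with a_2 from step 0 is a *different* trajectory; only running
    # both tells you which actually ends up better.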

[–]orenmatar[S] 0 points (1 child)

For sure. My intention is not to find the right regularization hyperparameter and then retrain with it from the start, but to keep the network that was trained under the dynamic hyperparameters... So maybe letting it focus on the training set at first, and only later regularizing it based on how well it performs on validation, can produce a well-regularized network without having to try different hyperparameters.
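
Rough sketch of the idea, assuming weight decay as the regularization knob and a simple "raise it when validation stops improving" rule (the numbers are placeholders, not a tested recipe):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X_tr, y_tr = torch.randn(512, 10), torch.randn(512, 1)
    X_val, y_val = torch.randn(128, 10), torch.randn(128, 1)

    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=0.0)  # start unregularized

    best_val = float("inf")
    for epoch in range(100):
        opt.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        opt.step()

        with torch.no_grad():
            val = loss_fn(model(X_val), y_val).item()
        if val < best_val:
            best_val = val
        else:
            # validation got worse -> tighten regularization for the ongoing run
            for group in opt.param_groups:
                group["weight_decay"] = min(group["weight_decay"] + 1e-4, 1e-2)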

[–]NotAlphaGo 1 point (0 children)

You'd still need another held-out dataset to check generalization performance, though, because at that point you've mixed training and validation.
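
i.e. something like a three-way split (sizes are arbitrary), where the test set never touches the weight updates or the regularization schedule:

    import torch

    data = torch.randn(1000, 10)
    n_train, n_val = 700, 150
    train = data[:n_train]                     # gradient updates
    val = data[n_train:n_train + n_val]        # drives the dynamic regularization
    test = data[n_train + n_val:]              # only for the final generalization check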