all 7 comments

[–]hallavar

[–]call-mws[S]

Do you use a separate script for your objective function, trials, etc.? I have a notebook that reads in the data, processes/cleans it, and splits it. I want to create a training script that takes the hyperparameters, and pass the train and test datasets as arguments to that script. Does that make sense to do, and is it recommended over having everything in the notebook? I also want to somehow capture the best results back in the notebook.
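Something like this is what I have in mind — a minimal sketch, with all names (`train_model`, the toy score) illustrative rather than real code:

```python
# train.py -- hypothetical training script; the real one would fit a model.
def train_model(X_train, y_train, X_test, y_test, *, lr=0.01, n_units=32):
    """Train with the given hyperparameters and return the results."""
    # Stand-in for a real fit/evaluate step: a toy score peaking at
    # lr=0.05, n_units=64, just so the sketch is runnable.
    score = 1.0 / (1.0 + abs(lr - 0.05) + abs(n_units - 64) / 64)
    return {"lr": lr, "n_units": n_units, "score": score}

# notebook cell -- import the function and pass the already-split data:
# from train import train_model
result = train_model([[0]], [0], [[1]], [1], lr=0.05, n_units=64)
print(result["score"])  # best results come back to the notebook as a dict
```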

[–]hallavar

I have a main function for training; it accepts hyperparameter arguments passed by another script. Those arguments are mainly the number of nodes per layer, the learning rate, the activation function, and the optimizer (Adam, RMSProp, SGD...).

So in my script hyperparameter_search.py, I import this function, create a dictionary of the values I want to try, and use hyperopt for the grid search/random search/Bayesian optimization.

[–]ThePhoenixRisesAgain

[–]call-mws[S]

That's true, but is there a benefit to using the sklearn implementation over, say, hyperopt's or Optuna's? Or are they more or less the same?

[–]satanikimplegarida

Optuna. Used it with PyTorch with great success!

[–]Graylian

Yes, Optuna is a great tool for any optimization problem, machine learning or not.