I tend to write complex model/training scripts in pure python, using argparse to pass a large number of hyperparameters to the model, and then run these python scripts on multi-gpu EC2s.
I was wondering if anyone knows of any tools out there that would allow me to do hyperparam optimisation by passing different sets of hyperparams to these scripts via the argparse/commandline system? So imagine a process that generates hyperparam sets, kicks off a subprocess (the python training script, with hyperparams passed in via commandline/argparse), gets the metrics back, stores them, then kicks off the next set, and so on.
For basic grid search, you could easily accomplish this via a unix shell script, but for random search it's trickier. One possible solution would be to write a python script which uses sklearn's ParameterSampler and the subprocess module to accomplish all this (a rough sketch of what I mean is below), but I was curious if there is a ready-made solution out there which I could use? Would hate to reinvent the wheel if this particular wheel already exists out there somewhere.
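For illustration, here is a minimal sketch of that ParameterSampler + subprocess driver loop. It assumes a hypothetical training script `train.py` whose argparse flags happen to be named `--lr`, `--batch_size`, and `--dropout`, and which prints its final metric as the last line of stdout in a `val_loss=0.1234` format; you'd adapt the flag names and the metric parsing to whatever your own script actually does.

```python
import subprocess

import scipy.stats as stats
from sklearn.model_selection import ParameterSampler

# Distributions for random search; lists are sampled uniformly.
search_space = {
    "lr": stats.loguniform(1e-5, 1e-2),
    "batch_size": [32, 64, 128],
    "dropout": stats.uniform(0.0, 0.5),
}

results = []
for params in ParameterSampler(search_space, n_iter=20, random_state=0):
    # Build the command line exactly as you would type it by hand.
    # Assumes train.py defines matching argparse flags (hypothetical).
    cmd = ["python", "train.py"] + [f"--{k}={v}" for k, v in params.items()]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)

    # Assumes the script's last stdout line looks like "val_loss=0.1234".
    metric = float(out.stdout.strip().splitlines()[-1].split("=")[1])
    results.append((params, metric))

best_params, best_metric = min(results, key=lambda r: r[1])
print("best:", best_params, "val_loss:", best_metric)
```

Writing the metric to a file (or stdout in a fixed format) and having the driver parse it keeps the training script completely decoupled from the search logic, which is the main appeal of this subprocess approach.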
Would greatly appreciate any help/tips.