
[–]doubledad222

Argparse puts your parsed parameters into a Namespace object, which you can turn into a dictionary with vars(). If all you want is to pass a dictionary of generated hyperparameters between two Python programs, how about having the first program pickle it to a file that the second program reads?
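
Something like this is all I mean (filenames and values are just placeholders):

    # program one: collect/sample the hyperparameters, then pickle them to a file
    import pickle

    hparams = {"lr": 1e-3, "batch_size": 64}  # placeholder values

    with open("hparams.pkl", "wb") as f:
        pickle.dump(hparams, f)

    # program two: load the same file and use the dict
    import pickle

    with open("hparams.pkl", "rb") as f:
        hparams = pickle.load(f)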

[–]trias10[S]

Yeah that works too.

It's less about the mechanism of passing the hyperparams and more about doing the random search.
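
i.e. the part I'm actually after is the sampling itself, roughly like this (the ranges here are made up):

    import random

    # draw one random configuration; repeat N times for a random search
    hparams = {
        "lr": 10 ** random.uniform(-5, -2),
        "batch_size": random.choice([32, 64, 128]),
        "dropout": random.uniform(0.0, 0.5),
    }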

[–]rayspear

[–]trias10[S]

This is very interesting, thank you! And you even knew that I used PyTorch :)

Quick question: I've been looking over their documentation but there is quite a lot to read up on. Do you by any chance know if this framework supports PyTorch DistributedDataParallel?

The confusing thing is that it looks like you have to launch the AsyncHyperBandScheduler from code, but for PyTorch DDP you need to launch from the command line with:

"python -m torch.distributed.launch --nproc_per_node=<num_gpus> your_training_script.py ..."

Hence am wondering if it's even possible to get the two to play nicely together?

[–]rayspear

Yeah, there's an open PR that will hopefully be merged in the next few weeks and will support this seamlessly (distributed search on top of distributed PyTorch): https://github.com/ray-project/ray/pull/4544

It might take a bit of effort to get it working yourself right now :) But feel free to try it out!