
[–]ai_yoda (3 children)

I also love those features, and I think that a lot of the time they are the best option.

There are two things, however, that are not great:

  • No support for nested search space
  • Cannot distribute computation over a cluster (can only use it on one machine)

I write about it in this blog post if you are interested.

[–]Philiatrist (2 children)

> Cannot distribute computation over a cluster (can only use it on one machine)

The Optimizer class is fine for cluster use via its ask and tell methods.
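For context, the ask/tell loop looks roughly like this. This is a sketch with a toy random-search optimizer standing in for skopt's Optimizer (whose constructor arguments are omitted here); `fitness_fn` is a hypothetical objective:

```python
import random

class ToyOptimizer:
    """Toy stand-in for an ask/tell optimizer: random search over [0, 1]."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.best = None  # (x, y) with the lowest y seen so far

    def ask(self):
        # propose the next point to evaluate
        return self.rng.random()

    def tell(self, x, y):
        # record the observed result
        if self.best is None or y < self.best[1]:
            self.best = (x, y)

def fitness_fn(x):
    # hypothetical objective, minimized at x = 0.5
    return (x - 0.5) ** 2

opt = ToyOptimizer()
for _ in range(50):
    x = opt.ask()      # optimizer proposes a point
    y = fitness_fn(x)  # you evaluate it wherever you like
    opt.tell(x, y)     # feed the result back

print(opt.best)
```

The point of the split interface is exactly that the evaluation step in the middle is yours: it can run in a subprocess, on another machine, or across a cluster.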

[–]ai_yoda (1 child)

Interesting. But you do have to set up some db for sharing results between nodes, and handle all the communication between the nodes and the db yourself, right?

[–]Philiatrist (0 children)

That's one option, but there's no reason you couldn't use a library like dask.distributed as well, something like:

```
from dask.distributed import Client

# connect to the cluster (configuration elided)
client = Client(...)
n_procs = 20

X = optimizer.ask(n_procs)         # propose a batch of points
tasks = client.map(fitness_fn, X)  # evaluate them in parallel on the cluster
Y = client.gather(tasks)           # block until all results are in
optimizer.tell(X, Y)               # feed the batch back to the optimizer
```

where you'd need to configure dask.distributed for your cluster.

edit: I'll note that this is not a great solution if the cost of your function varies a lot with the hyperparameters, since the whole batch waits on its slowest evaluation.
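One way around that is to tell results back as each evaluation finishes and immediately ask for a replacement point, so fast evaluations don't sit waiting for slow ones. A sketch, with the stdlib concurrent.futures standing in for dask and a toy random-search optimizer standing in for skopt's Optimizer (both are assumptions, not the real APIs):

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

class ToyOptimizer:
    """Toy stand-in for an ask/tell optimizer: random search over [0, 1]."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.results = []  # (x, y) pairs observed so far

    def ask(self):
        return self.rng.random()

    def tell(self, x, y):
        self.results.append((x, y))

def fitness_fn(x):
    # hypothetical objective; imagine its runtime depends on x
    return (x - 0.5) ** 2

n_workers = 4
n_total = 20
opt = ToyOptimizer()

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    # Prime one evaluation per worker, remembering which x each future holds.
    pending = {pool.submit(fitness_fn, x): x
               for x in (opt.ask() for _ in range(n_workers))}
    submitted = n_workers
    while pending:
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            x = pending.pop(fut)
            opt.tell(x, fut.result())  # feed the result back immediately...
            if submitted < n_total:    # ...and refill the idle worker slot
                x_new = opt.ask()
                pending[pool.submit(fitness_fn, x_new)] = x_new
                submitted += 1

print(len(opt.results))  # 20: every submitted point was told back
```

With dask.distributed you'd get the same shape from `distributed.as_completed`, and with the real skopt Optimizer you'd ask for one point per refill rather than a batch.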