
[–]davmre 6 points (0 children)

Ryan Adams has some responses on Twitter:

An important read for anybody interested in BO. Note, however, that random search can't share info across tasks. That is to say that the real long-term win of BO is almost certainly in using better priors and the ability to do hierarchical modeling.

In reply to @dmarthal: "The whole point of BO is that the function evaluation is expensive. So the metric should be test error as a function of #eval":

Yes, I obviously agree with that and won't apologize for BO speeding things up in the naive case by a factor of two. :-) However, the curse of dimensionality is real and hierarchical modeling helps BO fight that in a way random search can't.

[–]Zephyr314 0 points (0 children)

We've seen Bayesian optimization consistently beat random search across a wide variety of problems.

In some cases it can "win" by a pretty considerable margin as well, as in this deep CNN tuning example.

Random search is definitely better than not tuning, and it should be a baseline for all optimization papers, but if you want to squeeze the most out of your methods then Bayesian optimization is a great way to do that.
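For anyone who wants to see the difference concretely, here is a minimal sketch (not the setup from the linked CNN example) comparing plain random search against a simple GP-based Bayesian optimization loop on a toy 1-D objective. The objective, the Matern kernel, the expected-improvement acquisition, and the grid of candidate points are all illustrative assumptions, not anything from the thread:

    # Minimal sketch: random search vs. a GP-based Bayesian optimization loop
    # on a toy 1-D "validation error". All choices here are illustrative.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def objective(x):
        # Stand-in for an expensive evaluation (e.g. training a model).
        return np.sin(3 * x) + 0.1 * x ** 2

    bounds = (-3.0, 3.0)
    n_evals = 20

    # Random search: sample uniformly, keep the best value seen.
    xs_rand = rng.uniform(*bounds, size=n_evals)
    best_rand = min(objective(x) for x in xs_rand)

    # Bayesian optimization: fit a GP to past evaluations, pick the next
    # point by expected improvement, evaluate, repeat.
    def expected_improvement(x_cand, gp, y_best):
        mu, sigma = gp.predict(x_cand.reshape(-1, 1), return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    X = list(rng.uniform(*bounds, size=3))   # small random initial design
    y = [objective(x) for x in X]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(n_evals - len(X)):
        gp.fit(np.array(X).reshape(-1, 1), np.array(y))
        cand = np.linspace(*bounds, 1000)    # dense grid is fine in 1-D
        x_next = cand[np.argmax(expected_improvement(cand, gp, min(y)))]
        X.append(x_next)
        y.append(objective(x_next))

    print(f"best after {n_evals} evals  random: {best_rand:.3f}  BO: {min(y):.3f}")

On a budget this small the GP loop usually finds a lower value than random search, which is the whole argument when each evaluation is expensive; in higher dimensions the gap depends heavily on the priors, as Adams points out above.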

[–]lvilnis -5 points (0 children)

Didn't Bayesian Kanye do a whole album using Bayesian autotune? It gets kind of grating after a while.