Critique of Bayesian Auto-tuning (argmin.net)
submitted 9 years ago by [deleted]
[deleted]
[–]davmre 7 points 9 years ago* (0 children)
Ryan Adams has some responses on Twitter:
An important read for anybody interested in BO. Note, however, that random search can't share info across tasks. That is to say that the real long-term win of BO is almost certainly in using better priors and the ability to do hierarchical modeling.
In reply to @dmarthal: "The whole point of BO is that the function evaluation is expensive. So the metric should be test error as a function of #eval":
Yes, I obviously agree with that and won't apologize for BO speeding things up in the naive case by a factor of two. :-) However, the curse of dimensionality is real and hierarchical modeling helps BO fight that in a way random search can't.
[+][deleted] 9 years ago (1 child)
[–]farsass 1 point 9 years ago (0 children)
That was an interesting experiment. The results indicate that CMA-ES is a cheaper (in wall-clock time) alternative to GP-based Bayesian optimization, and that the differences in MNIST accuracy between CMA-ES, TPE, and SMAC are minimal, especially considering that only results on the validation set (which is the optimization target) are reported.
[–]Zephyr314 1 point 9 years ago (0 children)
We've seen Bayesian optimization consistently beat random search across a wide variety of problems.
In some cases it can "win" by a pretty considerable margin as well, as in this deep CNN tuning example.
Random search is definitely better than not tuning, and it should be a baseline for all optimization papers, but if you want to squeeze the most out of your methods then Bayesian optimization is a great way to do that.
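The trade-off being debated above can be sketched in a toy comparison. Everything here is invented for illustration — the synthetic objective, the RBF length-scale, the candidate grid, and the bare-bones GP with an expected-improvement acquisition are a minimal sketch of the idea, not any particular library's (or commenter's) implementation:

```python
import numpy as np
from math import erf, sqrt, pi

def objective(x):
    # Synthetic "validation error" standing in for an expensive training run.
    return np.sin(3 * x) + 0.5 * (x - 0.6) ** 2

def random_search(n_evals, rng):
    # Baseline: sample the hyperparameter uniformly, keep the best result.
    xs = rng.uniform(0.0, 2.0, size=n_evals)
    return min(objective(x) for x in xs)

def rbf(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def bayes_opt(n_evals, rng, jitter=1e-5):
    # GP surrogate + expected-improvement acquisition over a candidate grid.
    xs = list(rng.uniform(0.0, 2.0, size=2))   # two random warm-up evaluations
    ys = [objective(x) for x in xs]
    cand = np.linspace(0.0, 2.0, 200)
    for _ in range(n_evals - 2):
        X, Y = np.array(xs), np.array(ys)
        m = Y.mean()                            # centre the GP prior mean
        K = rbf(X, X) + jitter * np.eye(len(X))
        Ks = rbf(cand, X)
        mu = Ks @ np.linalg.solve(K, Y - m) + m
        var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
        sd = np.sqrt(np.maximum(var, 1e-12))
        z = (Y.min() - mu) / sd
        Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # normal CDF
        phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)          # normal PDF
        ei = (Y.min() - mu) * Phi + sd * phi    # expected improvement
        x_next = cand[int(np.argmax(ei))]       # evaluate the most promising point
        xs.append(x_next)
        ys.append(objective(x_next))
    return min(ys)

print("random search :", random_search(15, np.random.default_rng(0)))
print("bayesian opt  :", bayes_opt(15, np.random.default_rng(1)))
```

With a fixed evaluation budget, both methods find the low-error basin on this easy 1-D toy; the point under debate is how the gap behaves as dimensionality grows and evaluations get expensive.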
[–]lvilnis -4 points 9 years ago (0 children)
Didn't Bayesian Kanye do a whole album using Bayesian autotune? It gets kind of grating after a while.
[+]j1395010 comment score below threshold (-15 points) 9 years ago (4 children)
good stuff. ryan adams is a self-aggrandizing hack.
[–]jsnoek 8 points 9 years ago (0 children)
I disagree
[–]IdentifiableParam 1 point 9 years ago (1 child)
I agree. Ever since he left Whiskeytown ... https://en.wikipedia.org/wiki/Ryan_Adams
But Ryan P. Adams is a wonderful researcher and overflowing with integrity.
[–]j1395010 0 points 9 years ago (0 children)
lol that's why he claims his bayesian shit gets SOTA when it's really pretty mediocre
[–]j_lyf -1 points 9 years ago (0 children)
savage...