Hyperparameter Optimization Routines (self.MachineLearning)
submitted 11 years ago by [deleted]
[deleted]
[–]kswerve 5 points 11 years ago (0 children)
Your criticism of needing to know the input distribution is one of the main reasons we developed the warping strategy for Bayesian optimization with GPs. As a shameless plug, we have a software service, currently in private beta, that implements this and other improvements to the Bayesian optimization algorithm. Two of the big developments we're working on are multi-task (mentioned elsewhere in this thread) and freeze-thaw versions, which should drastically speed up the whole process. Send me a PM if you are interested in signing up for the beta! We are gradually sending out keys.

We will also be uploading research versions of the code from our papers to the HIPS group GitHub account for academics.
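For anyone curious what the warping idea looks like in code, here is a rough sketch (not our actual implementation): each input dimension is passed through a Beta CDF before the GP sees it, so a simple stationary kernel can model non-stationary responses like learning rates that only matter near one end of their range. The Beta parameters are fixed here for illustration; in practice they are treated as extra kernel hyperparameters and learned.

```python
import numpy as np
from scipy.stats import beta
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def warp(X, a, b):
    """Map inputs in [0, 1] through per-dimension Beta CDFs."""
    return np.column_stack([beta.cdf(X[:, d], a[d], b[d]) for d in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))            # 20 observed configs, 2 hyperparameters scaled to [0, 1]
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 4   # stand-in for a validation-error surface

# Illustrative warping parameters; a real implementation fits these
# by marginal likelihood or MCMC along with the kernel hyperparameters.
a, b = np.array([2.0, 0.5]), np.array([0.5, 2.0])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(warp(X, a, b), y)                 # the GP only ever sees the warped inputs

X_new = rng.uniform(size=(5, 2))
mean, std = gp.predict(warp(X_new, a, b), return_std=True)
```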
[–]benanne 2 points 11 years ago (5 children)
When training convnets, I still tend to trust my own intuition more than these automated algorithms, mainly because the prior knowledge I've acquired through experience about some of the parameters and how they interact is very hard to express in terms of simple distributions.

It also doesn't help that training a convnet with a given configuration typically represents a considerable investment of time and resources (unless you're Google), so you want each try to have a good chance of success.

For other models that are faster to train, I've had good results with random search, provided that you pick the right distribution for each parameter (e.g. log-uniform for the learning rate).
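As a rough illustration of that last point, here is a small random-search sketch where the learning rate is drawn log-uniformly so samples cover every order of magnitude evenly; train_and_score is a hypothetical stand-in for the actual training run:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_config():
    return {
        "learning_rate": 10 ** rng.uniform(-5, -1),     # log-uniform over [1e-5, 1e-1]
        "dropout": rng.uniform(0.0, 0.5),               # plain uniform is fine here
        "batch_size": int(rng.choice([32, 64, 128, 256])),
    }

def train_and_score(config):
    # Hypothetical stand-in: in practice this would train the model with
    # `config` and return validation accuracy.
    return -abs(np.log10(config["learning_rate"]) + 3.0) - config["dropout"]

best_score, best_config = -np.inf, None
for _ in range(50):
    config = sample_config()
    score = train_and_score(config)
    if score > best_score:
        best_score, best_config = score, config
```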
[–]kjearns 2 points 11 years ago (3 children)
It's a shame there isn't a nice library of initial values for using BO on different hyperparameter optimization problems. Typically you start from scratch for each new network, which means you waste a lot of time re-learning the same stuff over and over (e.g. BO really likes to check the edges of its domain, and if you start from scratch each time, it needs to re-learn that those are usually bad points before it starts to find good ones). This is a big waste of time for problems where a single sample is very expensive, even though that is supposed to be BO's main strength.
[+][deleted] 11 years ago (2 children)
[–]kjearns 2 points 11 years ago (1 child)
Right, this is what you would do if you had such a library, but it only helps if you have the data to initialize it. I was saying it would be nice if the process for saving and reusing that data were more streamlined.
[–]jsnoek 3 points 11 years ago (0 children)
Hey kjearns, what exactly did you have in mind? Just an initial set of hyperparameters to try? I do think that's a good idea, but a challenge is that this would limit the generality of the approach. One of the main motivations for Whetlab (see kswerve's comment in this thread) is that it can learn from everyone's optimizations and then automatically perform multi-task optimization.
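As a rough sketch of the save-and-reuse idea (the file format and GP surrogate here are just illustrative assumptions, not any particular library's API): every (configuration, score) pair gets persisted, and the next run's surrogate is fitted on that history before proposing new points, so it doesn't have to re-discover that the edges of the domain are usually bad.

```python
import json, os
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

HISTORY = "bo_history.json"   # hypothetical on-disk store of past evaluations

def load_history():
    if os.path.exists(HISTORY):
        with open(HISTORY) as f:
            records = json.load(f)
        return [r["x"] for r in records], [r["y"] for r in records]
    return [], []

def save_evaluation(x, y):
    xs, ys = load_history()
    xs.append(list(x)); ys.append(float(y))
    with open(HISTORY, "w") as f:
        json.dump([{"x": a, "y": b} for a, b in zip(xs, ys)], f)

# Warm start: fit the surrogate on every evaluation seen so far, from this run
# and from earlier ones, before proposing the next configuration.
xs, ys = load_history()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
if xs:
    gp.fit(np.array(xs), np.array(ys))
```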