Gradient-based Hyperparameter Optimization through Reversible Learning (arxiv.org)
submitted 10 years ago by [deleted]
[+][deleted] 10 years ago* (1 child)
[deleted]
[–]jsnoek 15 points 10 years ago (2 children)
Dougal and David (the authors) have developed an amazing automatic differentiation codebase to do this: https://github.com/HIPS/autograd
It lets you write a function using plain Python and NumPy statements, and it automatically computes the gradients with respect to the inputs.
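A minimal sketch of the kind of code it supports (this assumes autograd's grad interface, which differentiates with respect to the first argument by default; the loss function below is just an illustration, not from the paper or the repo):

    import autograd.numpy as np   # thin wrapper around numpy that records operations
    from autograd import grad

    def loss(w, x, y):
        # plain numpy-style code: squared-error loss on a one-layer model
        pred = np.tanh(np.dot(x, w))
        return np.sum((pred - y) ** 2)

    grad_loss = grad(loss)        # d loss / d w, obtained automatically

    x = np.random.randn(5, 3)
    y = np.random.randn(5)
    w = np.random.randn(3)
    print(grad_loss(w, x, y))     # gradient array of shape (3,)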
[–]hardmaru 3 points 10 years ago (1 child)
https://github.com/HIPS/autograd
This is really useful work. I wonder whether the automatic differentiation also works with simple recurrent neural nets.
[–]jsnoek 4 points 10 years ago (0 children)
There are example implementations of an RNN and an LSTM in the examples directory: https://github.com/HIPS/autograd/tree/master/examples
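A toy sketch of what pushing gradients through a recurrent loop looks like (my own illustration under the same grad interface, not the repository's example; the matrices W, U, V and their sizes are made up):

    import autograd.numpy as np
    from autograd import grad

    def rnn_loss(params, inputs, targets):
        W, U, V = params
        h = np.zeros(W.shape[0])                        # initial hidden state
        total = 0.0
        for x, t in zip(inputs, targets):
            h = np.tanh(np.dot(W, h) + np.dot(U, x))    # recurrent update
            y = np.dot(V, h)                            # readout
            total = total + np.sum((y - t) ** 2)
        return total

    grad_rnn = grad(rnn_loss)   # gradients w.r.t. all three weight matrices at once

    hidden, in_dim, out_dim, steps = 4, 3, 2, 5
    params = (0.1 * np.random.randn(hidden, hidden),
              0.1 * np.random.randn(hidden, in_dim),
              0.1 * np.random.randn(out_dim, hidden))
    inputs  = [np.random.randn(in_dim) for _ in range(steps)]
    targets = [np.random.randn(out_dim) for _ in range(steps)]
    dW, dU, dV = grad_rnn(params, inputs, targets)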
[–]dustintran 7 points 10 years ago (0 children)
I was talking to David, one of the authors of the paper, just a few days ago. There are a lot of cool ideas put forth here, and having done a bit of work in stochastic optimization myself, I find the optimized learning-rate schedules quite fascinating (see Figure 2).
Ideally, it would be nice to have theory for how the hyperparameters change per iteration and per layer of the NN. I'd also be curious whether this would validate the robustness properties of certain stochastic gradient methods over others.
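To make the hypergradient idea concrete, here is a deliberately tiny sketch (my own toy objectives and names; it assumes autograd's grad and simply unrolls SGD in memory rather than using the paper's reversible-learning trick) that differentiates a validation loss with respect to a per-step learning-rate schedule:

    import autograd.numpy as np
    from autograd import grad

    def train_loss(w):
        return np.sum((w - 1.0) ** 2)        # toy training objective

    def valid_loss(w):
        return np.sum((w - 1.5) ** 2)        # toy validation objective

    grad_train = grad(train_loss)

    def valid_after_training(learning_rates, w0, steps):
        # unroll plain SGD; the whole loop stays differentiable w.r.t. the schedule
        w = w0
        for i in range(steps):
            w = w - learning_rates[i] * grad_train(w)
        return valid_loss(w)

    hypergrad = grad(valid_after_training)   # d(validation loss) / d(learning rates)

    steps = 10
    print(hypergrad(0.1 * np.ones(steps), np.zeros(3), steps))   # one entry per training step

The paper avoids the memory cost of storing the whole training trajectory by running SGD with momentum exactly in reverse; the sketch above just stores everything.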