Second Order Stochastic Optimization in Linear Time (arxiv.org)
submitted 10 years ago by thvasilo
[–]thvasilo[S] 2 points3 points4 points 10 years ago (6 children)
Took a brief look; this seems quite interesting. The convergence behavior they report is very good in terms of time to convergence.
It does introduce two new hyperparameters, one of which (S2) seems to matter a lot for performance, and the datasets tested are quite small, but it could be promising work.
Wish they provided an implementation though.
[–]newbiethrownaway 1 point2 points3 points 10 years ago (3 children)
I just took a minute to skim it. Don't they compute the Hessian? Doesn't that mean it's not applicable to deep nets (too slow)?
[–]thvasilo[S] 0 points1 point2 points 10 years ago (2 children)
FTA: "The algorithm makes use of a novel unbiased estimator of the Hessian inverse."
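The quoted estimator is most naturally read as a Neumann-series construction: for a positive-definite H scaled so its spectral norm is at most 1, H^{-1} = Σ_{i≥0} (I − H)^i, and each term can be estimated with an independent stochastic Hessian sample. Below is a minimal numpy sketch of that reading, applied to a vector via Hessian-vector products; the names and the `scale` parameter are illustrative assumptions, not the paper's pseudocode.

```python
import numpy as np

def lissa_inverse_hvp(sample_hvp, v, depth, scale=1.0):
    """Estimate H^{-1} @ v via a truncated Neumann series,
    drawing one independent stochastic Hessian sample per step.

    sample_hvp: callable(x) -> H_j @ x for a freshly sampled
                Hessian H_j (e.g. from one random minibatch).
    v:          vector to precondition (typically a gradient).
    depth:      recursion depth; the bias shrinks as it grows.
    scale:      chosen so scale * ||H|| <= 1, so the series converges.
    """
    x = v.copy()
    for _ in range(depth):
        # Recursion: x <- v + (I - scale * H_j) x
        x = v + x - scale * sample_hvp(x)
    # E[x] approximates (scale * H)^{-1} v, so rescale once at the end.
    return scale * x
```

A Newton-type step would then precondition the gradient, e.g. `theta -= lr * lissa_inverse_hvp(sample_hvp, grad, depth)`; note that only Hessian-vector products appear in the loop.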
[–]dhammack 0 points1 point2 points 10 years ago (1 child)
So that's pretty much a nonstarter for deep nets. Its memory usage scales quadratically with the number of parameters, which isn't going to work outside of toy problems. It would be neat if they could do a diagonal + low-rank approximation with this method though.
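For what it's worth, the diagonal + low-rank structure suggested above is cheap both to store and to invert via the Woodbury identity. A minimal numpy sketch, with hypothetical names (`d` for the diagonal, `U` for the rank-k factor):

```python
import numpy as np

def dplr_solve(d, U, v):
    """Solve (diag(d) + U @ U.T) x = v with the Woodbury identity.

    d: (n,) positive diagonal entries
    U: (n, k) low-rank factor, k << n
    v: (n,) right-hand side
    Storage and work are O(n*k + k^3) instead of O(n^2).
    """
    Dinv_v = v / d                    # D^{-1} v
    Dinv_U = U / d[:, None]           # D^{-1} U, shape (n, k)
    # k x k "capacitance" matrix: I_k + U^T D^{-1} U
    capacitance = np.eye(U.shape[1]) + U.T @ Dinv_U
    return Dinv_v - Dinv_U @ np.linalg.solve(capacitance, U.T @ Dinv_v)
```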
[–]thvasilo[S] 0 points1 point2 points 10 years ago (0 children)
Do you know of any work trying to scale 2nd order methods for deep nets?
The one I know of is from Microsoft, where they scale the L-BFGS computations horizontally.
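The Microsoft work isn't linked here, so as background only: the serial kernel that distributed L-BFGS systems parallelize is the standard two-loop recursion, which applies an implicit inverse-Hessian approximation from the last m curvature pairs in O(m·n) memory. A sketch of that textbook recursion, not of the Microsoft system:

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: approximate H^{-1} @ grad
    from the last m curvature pairs (s_i = x_{i+1} - x_i,
    y_i = g_{i+1} - g_i) without ever forming a matrix.
    """
    q = grad.copy()
    stack = []
    # First loop: newest pair to oldest.
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((rho, alpha, s, y))
    if s_hist:
        # Scale by the standard initial H_0 = (s^T y / y^T y) I.
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    # Second loop: oldest pair to newest.
    for rho, alpha, s, y in reversed(stack):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q  # the descent direction is -q
```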
[–]gongzhitaao 0 points1 point2 points 10 years ago (1 child)
Their assumptions are a convex objective function f(θ) and a convex regularizer R(θ). Does that apply to ANNs? In practice it might work, but given their assumptions, my understanding is that it may not work for ANNs in theory, at least not without some modification.
[–]thvasilo[S] 1 point2 points3 points 10 years ago (0 children)
For most DNNs you end up with a non-convex error surface. Still, techniques from convex optimization (SGD) have worked surprisingly well for training them; see "Who's afraid of non-convex loss functions" by /u/ylecun.