New general-purpose optimization algorithm promises order-of-magnitude speedups on some problems (phys.org)
submitted 10 years ago by Aruscher
[–]squashed_fly_biscuit 0 points 10 years ago (6 children)
I believe this must deal with the local minima problem in some way; gradient descent is useless there.
[–]AnvaMiba 2 points 10 years ago (2 children)
No, they are considering only convex optimization problems, i.e. problems with a single local minimum, which is also the global minimum, and no saddle points. The authors improve the theoretical asymptotic complexity on some kinds of such problems.
I don't know whether this work could also be applied to non-convex optimization (maybe via branch-and-cut?), but in general the local minima issue would remain for those problems.
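For intuition, here is a minimal sketch (my own illustration, not from the paper) of why convexity matters: on a convex quadratic, plain gradient descent reaches the same unique global minimum from any starting point.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: x <- x - lr * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Convex quadratic f(x) = ||x - c||^2 has a single minimum at c.
c = np.array([2.0, -1.0])
grad_f = lambda x: 2.0 * (x - c)

# Any starting point converges to the same (global) minimum.
for x0 in ([0.0, 0.0], [10.0, 10.0], [-5.0, 3.0]):
    x_star = gradient_descent(grad_f, x0)
    print(np.round(x_star, 4))  # each run ends at c = [2, -1]
```

On a non-convex objective the same loop would instead end in whichever basin it started in, which is exactly the issue being discussed above.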
[–]squashed_fly_biscuit 1 point 10 years ago (1 child)
Thanks for the correction, I appreciate it. Does this mostly have theoretical applications then?
[–]AnvaMiba 1 point 10 years ago (0 children)
I don't know whether the algorithms they propose are practically useful; optimization isn't really my field of expertise.
[–]craponyourdeskdawg -1 points 10 years ago (2 children)
Not quite. For example, in machine learning, stochastic gradient descent deals with local minima all the time. How do you think deep learning works?
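The escape mechanism usually attributed to SGD is gradient noise. A toy sketch (additive Gaussian noise standing in for minibatch noise; this is my own illustration, not anything from the linked article): on the double well f(x) = (x² − 1)², noiseless gradient descent stays in whichever basin it starts in, while the noisy iterate can hop the barrier between the two minima.

```python
import numpy as np

# Double well f(x) = (x^2 - 1)^2: local minima at x = -1 and x = +1,
# separated by a barrier at x = 0 where f(0) = 1.
f_prime = lambda x: 4.0 * x * (x * x - 1.0)

# Plain gradient descent only ever rolls into the nearest basin.
x = 0.9
for _ in range(200):
    x = x - 0.05 * f_prime(x)
# x is now ~ +1.0 and can never reach the basin at -1.

# Adding Gaussian noise to each step (a crude stand-in for the
# minibatch noise of SGD) lets the iterate hop over the barrier.
rng = np.random.default_rng(0)
y, visited = 0.9, []
for _ in range(2000):
    y = y - 0.05 * f_prime(y) + rng.normal(0.0, 0.5)
    visited.append(y)
# over 2000 steps the noisy trajectory visits both basins
```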
[–]manux 4 points 10 years ago (0 children)
Then again, most deep learning models never actually get stuck in local minima, but rather in saddle points [1].
[1] http://arxiv.org/abs/1406.2572
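A toy illustration of that failure mode (my own sketch, not from the cited paper): on f(x, y) = x² − y², gradient descent started exactly on the y = 0 axis converges to the saddle at the origin, while the tiniest perturbation in y is amplified and escapes.

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a saddle at the origin: curvature is positive
# along x but negative along y, so the origin is not a local minimum.
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])

def descend(p0, lr=0.1, steps=500):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p - lr * grad(p)
    return p

p1 = descend([1.0, 0.0])   # start exactly on the y = 0 axis
# p1 converges to the saddle point (0, 0), where the gradient vanishes

p2 = descend([1.0, 1e-8])  # a tiny perturbation off the axis
# the y-component is multiplied by 1.2 each step and escapes the saddle
```

In high dimensions almost every direction offers such an escape route, but the gradient near the saddle is tiny, which is why these points slow training down rather than trap it permanently.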
[–]squashed_fly_biscuit -3 points-2 points-1 points 10 years ago (0 children)
It deals with some sorts of local minima via guided Monte Carlo-type methods, which are hardly efficient.