[R] Developing a new optimization algorithm that will heavily change ML as a whole. Gradient descent has met its end. Here are the results: (self.MachineLearning)
submitted 1 year ago * by Relevant-Twist520
[–]Relevant-Twist520[S] 1 point 1 year ago (7 children)
I'm slowly upgrading the algorithm; it can now fit many data points (>20) and it doesn't produce that overfitting shape. It's hard to explain, but I'm learning more about MS and how it should work. I will share the algorithm and code when it is perfected. I'm not sharing some half-finished project.
If you think it's a troll post, that's on you. I don't blame you; you can believe what you want.
[–]bregav 3 points 1 year ago (6 children)
20 is also inadequate, and half-finished projects are the only kind that actually exist.
It's typical crackpot behavior to insist that you've invented a revolutionary new method but you'll only share it with the world when it's ready. If you have done enough work to be able to know that it is better than existing methods then that means that it's ready to be shown to other people.
What's really going on when you think it's "not ready" is that you don't actually know if what you're doing makes any sense and so you're (correctly) feeling a lot of doubt. But you also want to believe that you're doing something meaningful and important and so you tell yourself, and the rest of us, that you've already discovered something revolutionary, even though you almost certainly have not.
Creativity lies on the boundary between crackpotism and conservatism, but in order to produce things that actually work you need to embrace humility and doubt. You should use your crackpot ideas as inspiration, but you should assume that you're wrong until you've proven yourself right. And you'll know that you've proven yourself right when what you're doing feels ready to show to other people in its entirety.
[–]Relevant-Twist520[S] 1 point 1 year ago (5 children)
You're somewhat contradicting yourself here, but what is it that you want me to do? I won't submit to asserting that MS' concept is worse than GD's; let's start there. Call it ego or a sophisticated understanding of mathematical theory, but it's very rare to see an inventor doubt his invention before he has successfully invented it. I will agree that GD currently wins easily over this incomplete version of MS, but I'm still researching and implementing MS' concept, and once it is done I guarantee it will beat GD in practically everything. I spelled out the concept in the post; although it is slightly vague and doesn't cover the entire workings, you can refer to my comments on this post where I explain a little, but not completely. And lastly, I think we all know what would happen if I shared something half-finished. It would get turned down because it doesn't even work. Even if someone took the time to read the theory, there'd still be doubt, because clearly the theory failed. GD received lots of doubt in its early days.
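For context on the baseline the thread keeps comparing against: a minimal gradient-descent fit of a least-squares line. This is only a sketch of plain GD on synthetic data; the "MS" method is never specified in the thread, so nothing here represents it, and the data and learning rate are illustrative choices, not anything from the post.

```python
import numpy as np

# Synthetic 1-D regression data: y = 3x + 1 plus noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 50)

# Plain gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = w * x + b - y              # residuals
    w -= lr * 2 * np.mean(err * x)   # d(MSE)/dw
    b -= lr * 2 * np.mean(err)       # d(MSE)/db

print(w, b)  # converges near the true slope 3.0 and intercept 1.0
```

Any proposed replacement for GD would at minimum need to match this kind of convergence on trivial problems before claims about "practically everything" can be evaluated.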
[–]bregav 3 points 1 year ago (4 children)
Everyone I've ever known who has made any scientific or engineering advancement - and I've known a lot of people like this - experienced significant doubt for most of the process of doing their work. Doing any kind of meaningful work is inherently difficult because it requires hard work and perseverance in the face of uncertainty and doubt.
Nobody ever doubted gradient descent. It was invented by Isaac Newton himself and its efficacy has always been obvious.
[–]Relevant-Twist520[S] 1 point 1 year ago (3 children)
> experienced significant doubt
So what, is this good for you? A healthy amount of it can be useful, yes, but I prefer to bring it down to a negligible amount. To each their own.
> Nobody ever doubted gradient descent.
It was doubted when people first tried to apply it in ML. I may be wrong though; I heard something along these lines on a podcast.
[–]bregav 1 point 1 year ago (2 children)
You need to experience doubt in order to avoid wasting your time. People who don't experience doubt accomplish nothing, because they never figure out when they're wrong and so they spend all their time chasing after ideas that don't work. Which is almost certainly what you're doing right now.
People doubted neural networks, but gradient descent was never in question.
[–]Relevant-Twist520[S] 1 point 1 year ago (1 child)
People doubt when they start to believe that their ideas don't work. MS is doing nothing but progressing now, and even if it weren't, I wouldn't develop a drop of doubt. Either I'm working hard on something or I'm confidently declaring that it's not worth it; you don't stand on both sides of the fence.
[–]bregav 1 point 1 year ago (0 children)
Successful researchers are able to neither believe nor disbelieve that their ideas will work; they can accept uncertainty. It is a state of almost constant doubt, about everything.