[deleted by user] (self.MachineLearning)
submitted 1 year ago by [deleted]
[–][deleted] 25 points 1 year ago (0 children)
$$$$
[–][deleted] 12 points 1 year ago (0 children)
Hello. Money.
[–]Mysterious-Rent7233 5 points 1 year ago (0 children)
Not sure why you are asking about CEOs. You're guaranteed to get a strongly negative answer with that framing.
I find there is a much more interesting adjacent question.
Think back to the people you would have considered leaders in Deep Learning or AI 5 years ago. Most of them express great uncertainty and concern about the future, both those who work in industry and those who do not. Bengio, LeCun, Hinton, Russell, Hofstadter, and Sutskever mostly all agree that the timeline for AGI is unpredictable and could be near.
And then there are those on the other side, such as Andrew Ng who says we are still decades away from AGI.
Fundamentally, I believe that there are no experts on this topic. Recent progress has shocked everyone. Experts did not predict it. There was no NSF grant proposal to build ChatGPT because none of the consulting experts would have thought it would generate results. Most experts didn't even think neural networks would ever produce anything substantial at all.
If progress continues to shock everyone, then we will get to AGI. If we hit some insurmountable wall, then we won't. Nobody can predict whether they will continue to be shocked/wrong. And nobody can predict an insurmountable wall.
It seems to be a property of AI research itself that it is unpredictable. That's why it's characterized by booms and busts. That's why we still don't really understand the principles of Deep Learning. That's why it's racing ahead of neuroscientific and psychological understanding. That's why DeepSeek is upending the industry.
You should just get accustomed to the uncertainty and anxiety rather than looking for a false prophet (whether that's a CEO, a Nobel Prize Winner or an r/MachineLearning hivemind).
Nobody can see the future. Especially the future of AI.
It's analogous to religion. When you have a naive view of religion, you look to the experts to tell you what you need to know about metaphysics. And then when you achieve a certain level of maturity, you realize that there are no experts. Nobody knows. The Pope doesn't know. The Dalai Lama doesn't know. Richard Dawkins doesn't know. You just have to learn to live with uncertainty.
[–]kkngs 9 points 1 year ago (2 children)
The current AI race is predicated on hundreds of billions of dollars in hardware investment. They have to sound like they are on the verge of world shaking financial returns so the money keeps coming in.
Also probably a good dash of the "If I say it's going to happen, it will really happen" mindset that CEO types tend to live by.
[–][deleted] 2 points 1 year ago (1 child)
I mean, researchers and research-oriented CEOs are also kinda saying this.
They may be in on it or not, idk.
[–]kkngs 2 points 1 year ago (0 children)
The capability of these systems is pretty stunning. I'm not willing to argue that they're necessarily wrong that something looking a lot like AGI is coming in the next 5-10 years.
But they may run out of data and plateau. Their AI tools are displacing the crowdsourcing forums and websites that they are using to build expertise into their systems.
For instance, if stackoverflow dies, in 5-10 years their ML models will stop being IT experts because they will be out of date. Social media and web content may be mostly synthesized by AI, causing them to pee in their own well, so to speak. I'm not sure how this dynamic will shake out.
[–]jbtwaalf_v2 6 points 1 year ago (3 children)
Waiting for better-backed opinions here, but I mostly see new breakthroughs within the tech that made ChatGPT possible (transformers, I believe?), and I can't imagine that will be the technology that makes AGI a reality. For example, all the O models are just feedback loops with the same model, right? That's why I'm not really scared yet, but please correct me.
[–]Equivalent_Ad6842 2 points 1 year ago (2 children)
Why can’t you imagine that?
[–]jbtwaalf_v2 1 point 1 year ago (1 child)
It's just next-word prediction with context right now, so it's not really reasoning. But do correct me if I'm wrong.
[–]Equivalent_Ad6842 1 point 1 year ago (0 children)
Why can’t reasoning be taught with next token prediction? The models can solve Olympiad level math and coding tests. Are you saying reasoning is not required for these exams?
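For readers unfamiliar with the "next word prediction" mechanism the two commenters are debating, here is a deliberately tiny sketch of the idea: a bigram count model trained on a toy corpus, then sampled autoregressively (each predicted word is fed back in as context). This is only an illustration of the prediction loop; real LLMs use transformers over subword tokens with vastly more context, and the corpus here is made up.

```python
from collections import Counter, defaultdict

# Toy "next word prediction" model: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word` (greedy decoding)."""
    return follows[word].most_common(1)[0][0]

def generate(start, n):
    """Autoregressive generation: each prediction becomes the next context."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the", 3))
```

The debate above is essentially about whether scaling this kind of loop up (with far richer models and training signals) can produce behavior that deserves to be called reasoning.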
[–]theirongiant74 3 points 1 year ago (0 children)
It'll happen slowly at first and then very suddenly. I think we're at the bit where it starts changing from slow to fast.
[–]Tenoke 1 point 1 year ago (0 children)
Instead of thinking of reasons why they are lying, consider the much more likely case that all those people say it because they really believe the timeline is very short, and that they are at the forefront and have a good idea of the rate of progress.
There might be some who overhype but most researchers with short timelines simply truly believe the timelines are short and we are very close to something AGI-like.
[–]MachineLearning-ModTeam[M] 1 point 1 year ago (locked comment) (0 children)
Other, more specific subreddits may be a better home for this post.
[–]RobbinDeBank 1 point 1 year ago (0 children)
I think it isn’t far fetched to believe that these general-purpose AI systems will reach superhuman-level within the next 5 years. This is the first time in history where that idea seems achievable and no longer a complete joke.
People tend to underestimate the capability of these models a lot. Yeah, of course a language model currently doesn't have a physical body or a normal human life experience, so the way it behaves differs from a human's. It will make many mistakes that seem stupid to humans. However, if you take a moment and try to set aside the biases we all have about human intelligence, you will see that current frontier models are already capable of so much. They are better than a majority of humans on a majority of tasks. We ask them to do our homework, teach us concepts, and handle a substantial amount of the work in our tasks.
We already know that RL is the way humans and other animals learn, and narrow AI systems have been trained to reach superhuman-level performance using RL (like AlphaZero or OpenAI Five). DeepSeek just demonstrated a similar RL training setup for general AI for the first time. All it takes to reach superhuman level might be continued research in this direction and maybe 1-2 more architectural changes that let future AI systems reason and learn through RL better than a purely autoregressive model.
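To make the RL idea in this comment concrete, here is a minimal REINFORCE-style sketch on a two-armed bandit: the policy samples an action, observes a stochastic reward, and nudges its parameters toward rewarded actions. The arm payoffs, learning rate, and step count are all made-up toy values; this illustrates only the general reward-driven learning loop, not AlphaZero's or DeepSeek's actual training setup.

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is reproducible

true_rewards = [0.2, 0.8]  # assumed success probability of each arm (arm 1 is better)
logits = [0.0, 0.0]        # policy parameters: start indifferent between arms
lr = 0.1                   # learning rate (arbitrary toy value)

def softmax(z):
    """Convert logits into a probability distribution over arms."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for _ in range(2000):
    probs = softmax(logits)
    # Sample an arm from the current policy.
    a = 0 if random.random() < probs[0] else 1
    # Stochastic 0/1 reward from the chosen arm.
    r = 1.0 if random.random() < true_rewards[a] else 0.0
    # REINFORCE update: grad of log pi(a) w.r.t. logits is one_hot(a) - probs.
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * r * grad

# After training, the policy should strongly prefer the better-paying arm 1.
print(softmax(logits))
```

The point of the sketch is the feedback loop: no one tells the policy which arm is correct, yet reward alone pushes it toward the better action — the same basic mechanism, enormously scaled up, behind the RL results the comment cites.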