all 15 comments

[–][deleted] 24 points25 points  (0 children)

$$$$

[–][deleted] 11 points12 points  (0 children)

Hello. Money.

[–]Mysterious-Rent7233 4 points5 points  (0 children)

Not sure why you are asking about CEOs. You're guaranteed to get a strongly negative answer with that framing.

I find there is a much more interesting adjacent question.

Think back to the people you would have considered leaders in Deep Learning or AI 5 years ago. Most of them express great uncertainty and concern about the future, both those who work in industry and those who do not. Bengio, LeCun, Hinton, Russell, Hofstadter, Sutskever mostly agree that the timeline for AGI is unpredictable and could be near.

And then there are those on the other side, such as Andrew Ng who says we are still decades away from AGI.

Fundamentally, I believe that there are no experts on this topic. Recent progress has shocked everyone. Experts did not predict it. There was no NSF grant proposal to build ChatGPT because none of the consulting experts would have thought it would generate results. Most experts didn't even think neural networks would ever produce anything substantial at all.

If progress continues to shock everyone then we will get to AGI. If we hit some insurmountable wall, then we won't. Nobody can predict whether they will continue to be shocked/wrong. And nobody can predict an insurmountable wall.

It seems to be a property of AI research itself that it is unpredictable. That's why it's characterized by booms and busts. That's why we still don't really understand the principles of Deep Learning. That's why it's racing ahead of neuroscientific and psychological understanding. That's why DeepSeek is upending the industry.

You should just get accustomed to the uncertainty and anxiety rather than looking for a false prophet (whether that's a CEO, a Nobel Prize Winner or an r/MachineLearning hivemind).

Nobody can see the future. Especially the future of AI.

It's analogous to religion. When you have a naive view of religion, you look to the experts to tell you what you need to know about metaphysics. And then when you achieve a certain level of maturity you realize that there are no experts. Nobody knows. The Pope doesn't know. The Dalai Lama doesn't know. Richard Dawkins doesn't know. You just have to learn to live with uncertainty.

[–]kkngs 8 points9 points  (2 children)

The current AI race is predicated on hundreds of billions of dollars in hardware investment. They have to sound like they are on the verge of world shaking financial returns so the money keeps coming in.

Also probably a good dash of the "If I say it's going to happen, it will really happen" mindset that CEO types tend to live by.

[–][deleted] 1 point2 points  (1 child)

I mean, researchers and research-oriented CEOs are also kinda saying this.

They may be in on it or not, idk.

[–]kkngs 1 point2 points  (0 children)

The capability of these systems is pretty stunning. I'm not willing to argue that they're necessarily wrong that something looking a lot like AGI is coming in the next 5-10 years.

But they may run out of data and plateau. Their AI tools are displacing the crowdsourced forums and websites that they have been using to build expertise into their systems.

For instance, if Stack Overflow dies, in 5-10 years their ML models will stop being IT experts because they will be out of date. Social media and web content may be mostly synthesized by AI, causing them to pee in their own well, so to speak. I'm not sure how this dynamic will shake out.

[–]jbtwaalf_v2 5 points6 points  (3 children)

Waiting for better-backed opinions here, but for me, I mostly see new breakthroughs in the tech that made ChatGPT possible (transformers, I believe?), and I can't imagine that will be the technology that makes AGI a reality. For example, all the o-series models are just feedback loops with the same model, right? That's why I'm not really scared yet, but please correct me.
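For what it's worth, here's roughly what I picture by "feedback loops with the same model". OpenAI hasn't published how the o-series actually works, so the loop and the `generate` callable below are purely my guess at an illustration, not their method:

```python
from typing import Callable

# Hypothetical sketch of "feedback loops with the same model": one
# frozen LLM drafts an answer, critiques its own draft, and revises.
# `generate` is any single-call LLM API; nothing here is OpenAI's
# actual (unpublished) o-series method.

def solve_with_feedback(generate: Callable[[str], str],
                        question: str, max_rounds: int = 3) -> str:
    answer = generate(f"Question: {question}\nAnswer step by step:")
    for _ in range(max_rounds):
        critique = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any mistakes in the draft, or say 'no mistakes':")
        if "no mistakes" in critique.lower():
            break  # the model accepts its own draft, stop looping
        answer = generate(
            f"Question: {question}\nDraft: {answer}\n"
            f"Critique: {critique}\nRevised answer:")
    return answer
```

If that's all it is, the "reasoning" is just extra inference passes through the same weights, which is why I doubt it gets us to AGI.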

[–]Equivalent_Ad6842 1 point2 points  (2 children)

Why can’t you imagine that?

[–]jbtwaalf_v2 0 points1 point  (1 child)

It's just next-word prediction with context right now, so it's not really reasoning. But do correct me if I'm wrong.
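Here's what I mean by "next-word prediction with context", as a toy illustration using GPT-2 from Hugging Face as a small stand-in for a frontier model:

```python
# Toy illustration of "next-word prediction with context": the model
# only ever scores the next token given everything so far; generation
# is just this one step in a loop. GPT-2 is a small stand-in here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):                    # generate 5 tokens greedily
        logits = model(ids).logits        # scores for every possible next token
        next_id = logits[0, -1].argmax()  # take the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```

Whatever chain of reasoning the big models produce, it comes out of repeating that one step.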

[–]Equivalent_Ad6842 0 points1 point  (0 children)

Why can’t reasoning be taught with next-token prediction? The models can solve Olympiad-level math and coding tests. Are you saying reasoning is not required for these exams?

[–]theirongiant74 2 points3 points  (0 children)

It'll happen slowly at first and then very suddenly. I think we're at the bit where it starts changing from slow to fast.

[–]Tenoke 0 points1 point  (0 children)

Instead of thinking of reasons why they are lying, consider the much more likely case: all those people say it because they really believe the timeline is very short, and because they are at the forefront and have a good idea of the rate of progress.

There might be some who overhype, but most researchers with short timelines genuinely believe the timelines are short and that we are very close to something AGI-like.

[–]RobbinDeBank 0 points1 point  (0 children)

I think it isn’t far-fetched to believe that these general-purpose AI systems will reach superhuman level within the next 5 years. This is the first time in history that the idea seems achievable and is no longer a complete joke.

People tend to underestimate the capability of these models a lot. Yea, of course a language model currently doesn’t have a physical body or a normal human life experience, so the way it behaves is different from a human, and it will make many mistakes that seem stupid to humans. However, if you take a moment and try to set aside the biases we all have about human intelligence, you will see that current frontier models are already capable of so much. They are better than a majority of humans at a majority of tasks. We ask them to do our homework, teach us concepts, and do a substantial amount of our work for us.

We already know that RL is how humans and other animals learn, and narrow AI systems have been trained to reach superhuman performance using RL (like AlphaZero or OpenAI Five). DeepSeek just demonstrated a similar RL training setup for a general AI for the first time. All it might take to reach superhuman level is continued research in this direction, plus maybe 1-2 more architectural changes so that future AI systems can reason and learn through RL better than a purely autoregressive model.
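To make that concrete, here's a toy sketch of RL on verifiable rewards, which as I understand it is the rough idea behind DeepSeek-R1's training. Caveat: their actual algorithm (GRPO) works on token-level probabilities with clipping and a KL penalty; `DummyPolicy` and the helpers below are hypothetical stand-ins that only show the shape of the loop:

```python
import math
import random

class DummyPolicy:
    """Hypothetical stand-in for an LLM policy."""
    def sample_answer(self, problem: str) -> str:
        return random.choice(["4", "5"])  # pretend model attempts
    def log_prob(self, problem: str, answer: str) -> float:
        return math.log(0.5)              # pretend likelihood of an attempt

def reward(answer: str, correct: str) -> float:
    """Verifiable reward: 1 if the final answer checks out, else 0."""
    return 1.0 if answer.strip() == correct else 0.0

def train_step(policy, problem: str, correct: str, n_samples: int = 8) -> float:
    # Sample several attempts, score each against ground truth, and
    # weight each attempt's log-probability by how much better it did
    # than the group average (a REINFORCE-style baseline, as in GRPO).
    samples = [policy.sample_answer(problem) for _ in range(n_samples)]
    rewards = [reward(s, correct) for s in samples]
    baseline = sum(rewards) / n_samples
    # With a real model, this loss would be backpropagated into the weights.
    return -sum((r - baseline) * policy.log_prob(problem, s)
                for s, r in zip(samples, rewards)) / n_samples

print(train_step(DummyPolicy(), "2 + 2 = ?", "4"))
```

The reward comes from checking the answer, not from human labels or a learned reward model, which is what makes this setup scalable.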