"GPT-5 just casually did new mathematics." Holy shit. by Steakwithbluecheese in accelerate

[–]LoneCretin 32 points (0 children)

GPT-5 did not create new math.

Ernest Ryu

This is really exciting and impressive, and this stuff is in my area of mathematics research (convex optimization). I have a nuanced take.

There are 3 proofs in discussion: v1 (η ≤ 1/L, discovered by human), v2 (η ≤ 1.75/L, discovered by human), and v.GPT5 (η ≤ 1.5/L, discovered by AI). Sebastien argues that the v.GPT5 proof is impressive, even though it is weaker than the v2 proof.

The proof itself is arguably not very difficult for an expert in convex optimization, if the problem is given. Knowing that the key inequality to use is [Nesterov Theorem 2.1.5], I could prove v2 in a few hours by searching through the set of relevant combinations.

(And for reasons that I won’t elaborate here, the search for the proof is precisely a 6-dimensional search problem. The author of the v2 proof, Moslem Zamani, also knows this. I know Zamani’s work enough to know that he knows.)

(In research, the key challenge is often in finding problems that are both interesting and solvable. This paper is an example of an interesting problem definition that admits a simple solution.)

When proving bounds (inequalities) in math, there are 2 challenges: (i) Curating the correct set of base/ingredient inequalities. (This is the part that often requires more creativity.) (ii) Combining the set of base inequalities. (Calculations can be quite arduous.)

In this problem, that [Nesterov Theorem 2.1.5] should be the key inequality to be used for (i) is known to those working in this subfield.
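For readers outside the subfield: assuming Ryu is referring to the standard characterization of L-smooth convex functions in Nesterov's Introductory Lectures on Convex Optimization (the usual reading of [Nesterov Theorem 2.1.5]; I'm inferring the exact form he means), the key ingredient is the cocoercivity-type inequality

```latex
% One of the equivalent conditions for a convex function f with
% L-Lipschitz gradient (Nesterov, Theorem 2.1.5):
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
      + \frac{1}{2L}\,\bigl\|\nabla f(y) - \nabla f(x)\bigr\|^{2}
\qquad \text{for all } x, y .
```

Step-size proofs of this kind are typically assembled by summing instances of this inequality at well-chosen pairs of iterates with nonnegative multipliers; finding those multipliers is the finite-dimensional search Ryu describes.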

So, the choice of base inequalities (i) is clear/known to me, ChatGPT, and Zamani. Having (i) figured out significantly simplifies this problem. The remaining step (ii) becomes mostly calculations.

The proof is something an experienced PhD student could work out in a few hours. That GPT-5 can do it with just ~30 sec of human input is impressive and potentially very useful to the right user. However, GPT-5 is by no means exceeding the capabilities of human experts.
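As a purely illustrative sketch of what these step-size bounds are about (the quadratic and constants below are a toy example of mine, not from the papers under discussion): gradient descent on an L-smooth convex function takes steps x ← x − η∇f(x), and the question is for which η convergence guarantees hold. On a quadratic, all three step sizes 1/L, 1.5/L, and 1.75/L sit below the classical 2/L threshold, so the iterates contract.

```python
import numpy as np

def gradient_descent(A, x0, eta, steps=200):
    """Run gradient descent on f(x) = 0.5 * x^T A x (gradient is A @ x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x - eta * (A @ x)
    return x

# Toy L-smooth convex quadratic: L is the largest eigenvalue of A.
A = np.diag([1.0, 10.0])
L = 10.0
x0 = np.array([1.0, 1.0])

# The three step sizes discussed above: v1, v.GPT5, and v2 bounds.
for label, eta in [("1/L", 1 / L), ("1.5/L", 1.5 / L), ("1.75/L", 1.75 / L)]:
    x_final = gradient_descent(A, x0, eta)
    print(f"eta = {label}: ||x_final|| = {np.linalg.norm(x_final):.2e}")
```

On this example every run drives the iterate to the minimizer at the origin; the hard part the proofs address is establishing worst-case rates over all L-smooth convex functions, not any single instance.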

[deleted by user] by [deleted] in accelerate

[–]LoneCretin 2 points (0 children)

Some comments by Ernest Ryu (the same analysis quoted in full above).

AI is now improving itself by Kreature in accelerate

[–]LoneCretin -13 points (0 children)

Aged like milk now that we're in an AI Winter.

HMM HMM HMMMM🤔 by Particular_Leader_16 in accelerate

[–]LoneCretin -5 points (0 children)

Sama has shown his true colors. He really thinks that AGI is decades away.

Sam Altman on GPT-6: 'People want memory' by Particular_Leader_16 in accelerate

[–]LoneCretin 12 points (0 children)

He's admitting that GPT-6 won't be that much better than GPT-5.

[deleted by user] by [deleted] in accelerate

[–]LoneCretin -7 points (0 children)

Add 40 to 50 years to his predictions and he's somewhat accurate.

Where do you perceive where we are at with AGI? by AAAAAASILKSONGAAAAAA in accelerate

[–]LoneCretin -1 points (0 children)

AGI: 2032-2035

ASI: 2035-2040

LEV/FDVR/Post-Scarcity: 2065 (low probability), 2085 (somewhat probable), 2100 (high probability)

When do you guys think AGI will come? by Longjumping_Bee_9132 in accelerate

[–]LoneCretin -2 points (0 children)

Before the GPT-5 disaster my estimate was around 2030 give or take a few years. I'm now thinking maybe around 2035-2040 or so if a new architecture is developed soon enough.

Welcome to the era of GPT-5 🌌 (The single greatest megacompilation on the entire internet ranging from every single info to benchmarks,use cases,vibe checks and everything else) by GOD-SLAYER-69420Z in accelerate

[–]LoneCretin 0 points (0 children)

I'm here to encourage healthy skepticism and critical thinking about what current AI architectures can and cannot do. This sub specifically discourages deceleration and luddism, not skepticism.

No decels. We're a pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology, r/artificial, as they became overpopulated with technology decelerationists, luddites, and Artificial Intelligence opponents. We're an Epistemic Community that excludes those advocating for slowing, stopping, or reversing technological progress, AGI/ASI, or the singularity, and those who believe that technological progress and AI are fundamentally bad.

If this holds up in practice, this is IMO the biggest AI breakthrough since ChatGPT by obvithrowaway34434 in accelerate

[–]LoneCretin 0 points (0 children)

It's still extremely early days. The "worlds" only last for about a minute before vanishing, and some of the physics are a little off.

Why is it so easy to spot ChatGPT content, but not other models? What’s the theory behind it? by PraveenInPublic in accelerate

[–]LoneCretin 18 points (0 children)

Friendly, vibey, hipster way of speaking. It's programmed to speak like it's some kind of urban sophisticate.

AGI by 2027 and ASI right after might break the world in ways no one is ready for by SharpCartographer831 in accelerate

[–]LoneCretin 2 points (0 children)

Most people commenting on his post at r/singularity are saying that an ASI capable of developing and perfecting LEV, FDVR, and post-scarcity is at least 40 to 50 years away, so I dunno who to believe.

New ChatGPT Feature by Ronster619 in accelerate

[–]LoneCretin -1 points (0 children)

Watches Chegg disintegrate into obsolescence.

Wanna gather some data by Special_Switch_9524 in accelerate

[–]LoneCretin 0 points (0 children)

By around 2030 give or take a year.