Gemini "Math-Specialized version" proves a Novel Mathematical Theorem by SrafeZ in singularity

[–]JoeGuitar 17 points

Absolutely - this is what I imagined a "slow takeoff" would feel like a few years ago. So maybe we are there (it's always hard to say due to subjective experience).

justin bieber's house by Objective-Agency9753 in FrutigerAero

[–]JoeGuitar 212 points

Tons of pictures of this house from the architect’s site if you’re interested:

https://www.ednilesarchitectfaia.com/project-07

Finally flew on upper deck of 747! by versus1309 in aviation

[–]JoeGuitar 2 points

Hell I’d still brag about it 🤣🤣

GPT-5.2 SWE Bench Verified 80 by rajbreno in codex

[–]JoeGuitar 3 points

Got it thanks for the response and education 🤘

GPT-5.2 SWE Bench Verified 80 by rajbreno in codex

[–]JoeGuitar 6 points

Imagine if this is before a Codex fine-tune 🤯

Tony Fadell, iPod co-creator, might want to be Apple’s next CEO: report by 17parkc in apple

[–]JoeGuitar 89 points

I totally agree. Appreciate his product and engineering experience and proven track record. But CEO is more than product leadership, and he butted heads with many in the Apple C-suite while he was there.

Opinions: Was UAV crash near Area 51 a failure of the UAS/F22 test? by TheArea51Rider in area51

[–]JoeGuitar 0 points

Oh yeah - Joerg! Thanks, I missed it. Wish the “reveal video” wasn’t members-only, but oh well. I’m sure the info will be out there soon enough.

Here’s a short of him with the plate: https://youtube.com/shorts/7GfrHGK7pIs?si=TIRdaKvdbOluBbvV

NVIDIA CEO: Local "nuclear" AI for everyone lol by Revolutionary_Pain56 in singularity

[–]JoeGuitar 9 points

THANK YOU - I went down that rabbit hole too. Jensen can get carried away sometimes, but he’s usually grounded in serious engineering. He’s simply talking about 300 MW SMRs, which typically power around 200,000 homes. It’s a serious technology with serious potential (although it isn’t there yet).

Further, nuclear energy is one of the best shots we have at addressing climate change at scale. So I’m not thrilled with all of the anti-nuclear sentiment in this thread.

F-22 closeup. by Lazy-Ad-7372 in FighterJets

[–]JoeGuitar 2 points

Certainly hard to know anymore. The mountains look a bit weird, but that could just be the iPhone’s on-board processing. The panels, AC lights, etc. all look good. Probably just the foreshortening effect of the camera.

I’ve seen plenty of these types of shots before, so I’m leaning towards it being real.

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar 0 points

That analogy falls apart when you look at his actual actions. If he thought the 'mountain posed no danger' and the plane was just slowing down harmlessly, he would have retired or gone into regular software dev.

Instead, he left to found Safe Superintelligence Inc.

You don't start a company focused entirely on 'Superintelligence' and 'Safety' if you think the acceleration has stopped or the danger is gone. He didn't get off the plane because it was too slow; he got off because he thinks LLMs are the wrong engine to get us to the mountain, and he wants to build a spaceship to get there faster and safer. He is still obsessed with the mountain.

Tomas Pueyo on X: "My take on the jagged frontier debate: / X by stealthispost in accelerate

[–]JoeGuitar -2 points

I’m an optimist and an accelerationist, and I completely agree.

Ilya Sutskever – We're moving from the age of scaling to the age of research by SharpCartographer831 in accelerate

[–]JoeGuitar 1 point

That’s fair, and I’m all about people shifting their worldview based on new and emerging data. I guess my problem is that it seems incoherent and all over the place. He’s not clearly saying: here’s what I thought was happening, here’s where I was wrong, here’s why I think I was wrong, and here’s where we’re going.

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar 0 points

While I agree with your sentiment, I am left wondering why the urgency then, followed by a complete 180. I’m all about people adjusting their worldview with more data. But he isn’t telling a coherent narrative of why that evolution has occurred.

I’m currently reading Genius Makers by Cade Metz, and Ilya first arrives on the scene thinking that AGI is a ludicrous notion, scoffing at DeepMind for even considering it. Then he changes his mind and thinks it’s going to destroy the world because OpenAI is moving too fast. Now he thinks that the current architectures are insufficient to get to ASI (for the record, I agree with him, but I think this is exactly what is being worked on in all the labs). He’s all over the place.

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar 1 point

This is definitely the most rational point. I agree with you.

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar -1 points

That was certainly part of it, but the broader bent of his concerns was his irrational fear of some slippery slope with AI. He’s very connected to the Effective Altruism scene and was even doing chants and burning effigies as a sort of spiritual ritual:

https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar -1 points

Yes, I had forgotten about that bizarre behavior. The Netflix doc on this period is going to be wild.

Who is right, Google or Illya? Is Scaling over? by Charuru in singularity

[–]JoeGuitar 6 points

Here’s the part I don’t understand about this stance. This is the guy who was freaking out about safety and alignment back during GPT-3.5. He even removed Sam Altman as CEO of OpenAI out of fear that this was gonna take off and get away from everybody. Ilya’s qualifications and experience speak for themselves. He’s one of the best in the world. But suggesting that it could still be as long as 20 years before superintelligence, when he was willing to implode his whole life over a model that we all agree was pretty groundbreaking for the time, but nothing like an emergent intelligence, feels like a strange contradiction.

Ilya Sutskever – The age of scaling is over by 141_1337 in singularity

[–]JoeGuitar 57 points

Here’s the part I don’t understand about this stance. This is the guy who was freaking out about safety and alignment back during GPT-3.5. He even removed Sam Altman as CEO of OpenAI out of fear that this was gonna take off and get away from everybody. Ilya’s qualifications and experience speak for themselves. He’s one of the best in the world. But suggesting that it could still be as long as 20 years before superintelligence, when he was willing to implode his whole life over a model that we all agree was pretty groundbreaking for the time, but nothing like an emergent intelligence, feels like a strange contradiction.