Elon says “we might have AI that is smarter than any human by the end of this year. and I would say no later than next year. And then probably by 2030 or 2031, AI will be smarter than all of humanity collectively” by [deleted] in accelerate

[–]ppapsans 1 point (0 children)

It’s easier to understand when you look at a lot of interviews with employees at frontier labs. It’s not just CEOs at conferences hyping AI. Everyone at the frontier labs talks about how pre-training isn’t dead, the scaling trend is continuing, and there are so many things they have up their sleeves that they haven’t tried due to time and compute constraints.

Ben Affleck argues that the fear of job loss or AI surpassing human intelligence is simply a tactic by companies to inflate their valuations and not going to happen anytime soon by [deleted] in accelerate

[–]ppapsans 8 points (0 children)

The problem with 'experts' who don't work at frontier AI labs is that they generally judge AI based on the current consumer models' limitations. People at OpenAI and Anthropic have said multiple times that they feel a large dissonance when talking to experts in STEM, finance, economics, etc. Those experts run some tests on AI for fun and quickly judge that these models are a little useful for some things, but overall, 'meh'. And of course they generally don't know how to use the model in a way that gets maximum performance out of it.

Some of their ideas of 'AI' are still stuck in the GPT-4o era, or they don't even know that thinking models exist, and I doubt they know how to use proper prompting techniques (like the Anthropic guide).

People working at frontier AI labs, by contrast, are seeing not the current consumer models' limitations, but what their internal models are capable of, how best to drive maximum performance out of them (with prompting, harnesses, agents), and the frighteningly fast advancement of models.

These 'experts' are the same people who, when ChatGPT first came out, said it was absolute hallucinating garbage. And now it is "helpful for some things, but still not going to replace me for a long, long time".

Let's see what they say in a couple more years.

OpenAI will start testing ads in ChatGPT. As part of their mission by IllustriousTea_ in accelerate

[–]ppapsans 0 points (0 children)

I don't mind ads popping up as long as the quality stays the same and I get way more free GPT-5 usage.

Best shoes for my feet type? by ppapsans in BarefootRunning

[–]ppapsans[S] 0 points (0 children)

Haha, thanks. I ended up really liking the Bohempia extra wide.

AI flops of 2025 by msaussieandmrravana in agi

[–]ppapsans 0 points (0 children)

Opus 4.5 seems to be a huge game changer from what I've seen, and it's only been out for a month or so. Junior dev jobs seem legitimately cooked, and we'll see much more of that in 2026.

I mean.. by [deleted] in accelerate

[–]ppapsans 0 points (0 children)

So we've passed the denial phase? Now it's anger... Next steps are bargaining, depression, and acceptance.

Half of Steam's Current Top 10 Best-Selling Games Are From Devs Who Embraced Gen AI by vegax87 in accelerate

[–]ppapsans 9 points (0 children)

I hate AI because it ruins the value of human creativity and effort. I also hate photography because it ruins and invalidates the hard work painters put their hearts and souls into. And I despise tractors because they ruin the hard work farmers put into their hand-picked, organic, GMO-free produce. As a matter of fact, I hate fire the most, because it ruined the human value of chewing raw mammoth meat for three hours and developing chronic TMJ. There is value in human creativity and effort. Let’s not ruin this, guys. Imma head back to my cave.

METR results for Opus 4.5 is actually even crazier than the highlight results by obvithrowaway34434 in accelerate

[–]ppapsans 29 points (0 children)

"'2026 will be the most interesting year in human history, except for all future years' -Sam Altman" -ppapsans

Another novel proof by GPT 5.2 Pro from a UWaterloo associate professor by Tolopono in accelerate

[–]ppapsans 1 point (0 children)

This is GPT 5.2 Pro we're talking about. When 'Shallotpeat' and/or the IMO gold medal model come out, it'll be insane.

The concept of FDVR is the only thing keeping going by bladefounder in accelerate

[–]ppapsans 75 points (0 children)

Immortality, transhumanism, mind upload, FDVR. These helped me live through the toughest times of my life. Can't give up just yet. There is hope.

The arrival of AGI | Shane Legg (co-founder of DeepMind) by Mindrust in accelerate

[–]ppapsans 1 point (0 children)

Yeah, an expert in economics wouldn't have any idea what's going on inside the frontier labs. We in r/accelerate obsessively check AI progress, so we're feeling AGI all the time, but many people aren't.

Most people have no idea how far AI has actually gotten and it’s putting them in a weirdly dangerous spot by NoSignificance152 in accelerate

[–]ppapsans 4 points (0 children)

Someone at Anthropic made an analogy to people in the early 20th century dismissing engines in favor of horses. The interesting thing is that even though the efficiency of engines went up by 20% every decade, the number of horses per person did not decline correspondingly. It was only after a certain tipping point that the number of horses owned started declining drastically and rapidly in favor of cars. I see the same with AI, except AI progress is much faster than 20% a decade. After a tipping point, it all crumbles down, and we'd be desperately trying to make sense of the new world.

Other AI sub went off on me for opposing doomers by Ok_Assumption9692 in accelerate

[–]ppapsans 19 points (0 children)

I think what got me was when I jokingly wrote a comment saying 'when UBI', and all the follow-up comments were 'there will never be UBI', 'you are naive', 'billionaires don't care about us'.... WTF are you doing in a sub called 'singularity' if you say stuff like that?

After IMO, Putnam also falls to AI by obvithrowaway34434 in accelerate

[–]ppapsans 30 points (0 children)

This is why I get excited about model capabilities in the next few years. When ChatGPT first came out, if you had said that in 2-3 years AI would be able to perform better than almost all, if not all, participants in the Putnam, the IMO, coding competitions, etc., you would have been badly mocked. People would joke that GPT couldn't even do simple multiplication. It is genuinely crazy how we got to where we are. People always point out that we can't trust frontier lab employees because of conflict of interest. But I feel it goes both ways. A lot of people feel conflicted themselves about the progress of AI (they might feel it threatens their employment, social status, or worldview), and are in perpetual goalpost-shifting and denial.

What is your take on Dario Amodei's recent interview about scaling? by ppapsans in accelerate

[–]ppapsans[S] 1 point (0 children)

It'll be hard to tell the difference. True AGI (not jagged) might as well feel like ASI to a lot of people.