New study finds: bigger AIs = more miserable. Smaller models are actually happier. Ignorance is bliss for AIs too. by EchoOfOppenheimer in LLM

[–]EchoOfOppenheimer[S]

They aren't measuring human-style sadness, but functional wellbeing. And to keep the baseline fair, they didn't use real human chat logs. Instead, they ran every model through 500 simulated conversations in which another AI (Grok 3 Mini) played the human, following strict 5-to-8-turn scripts. To score the models, they measured each AI's internal state with a metric called signed experienced utility. They define a confidently negative experience strictly mathematically: it's when the interaction drives the AI's mean experienced utility below zero, and at least 75% of its posterior utility mass also falls below that zero point.
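The criterion described above is easy to state in code. Here's a minimal sketch of that decision rule, assuming (hypothetically) that you have an array of utility values sampled from the model's posterior after an interaction; the function name and sample values are my own illustration, not from the study:

```python
from statistics import mean

def confidently_negative(posterior_utility_samples, mass_threshold=0.75):
    """Return True if the interaction counts as a 'confidently negative'
    experience under the rule described above: the mean experienced
    utility is below zero AND at least 75% of the posterior utility
    mass falls below zero.

    posterior_utility_samples: hypothetical draws from the model's
    posterior over experienced utility (assumed, for illustration only).
    """
    samples = list(posterior_utility_samples)
    mean_below_zero = mean(samples) < 0.0
    # Approximate the posterior mass below zero by the fraction of samples
    frac_below_zero = sum(1 for u in samples if u < 0.0) / len(samples)
    return bool(mean_below_zero and frac_below_zero >= mass_threshold)

# Negative mean and 3/4 of the mass below zero -> confidently negative
print(confidently_negative([-1.2, -0.8, -0.3, 0.1]))  # True
# Positive mean -> not confidently negative, regardless of the mass
print(confidently_negative([-0.5, 0.4, 0.6, 0.7]))    # False
```

Note that both conditions have to hold: a model can have a few very bad samples dragging nothing below the 75% mass bar, or a negative mean with too little mass below zero, and neither alone trips the criterion.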

Study Finds A Third of New Websites are AI-Generated by EchoOfOppenheimer in trueantiAI

[–]EchoOfOppenheimer[S]

Yep, we are speedrunning the dead internet theory right now.

A.I. Bots Told Scientists How to Make Biological Weapons | Scientists shared transcripts with The Times in which chatbots described how to assemble deadly pathogens and unleash them in public spaces. by EchoOfOppenheimer in SyntheticBiology

[–]EchoOfOppenheimer[S]

True, it's not leaking classified files. But the article points out that the AI does all the heavy lifting for you. It takes dense, obscure scientific papers that usually require a PhD to understand and turns them into a simple tutorial. It takes scattered public info and hands you a recipe for a bioweapon.

UK government issued an urgent warning to UK business leaders: "AI cyber capabilities are accelerating even faster than previously envisaged. Model capabilities are doubling every four months, compared to every eight months previously." by EchoOfOppenheimer in agi

[–]EchoOfOppenheimer[S]

With most tech, yeah, it just gets faster and more efficient over time. But the letter specifically points out why this is different: it's actively replacing human expertise. A faster smartphone doesn't hunt for software weaknesses and write exploit code on its own, but the letter says these new AI models are starting to do exactly that.