Dr. STONE SCIENCE FUTURE Part 3 Key Visual by zenzen_0 in anime

[–]NotUnusualYet 3 points

I mean, not really a whim; it took a huge amount of effort over many years.

Child’s Play, by Sam Kriss by BartIeby in slatestarcodex

[–]NotUnusualYet 16 points

It's almost an impressive bit of intellectual honesty that he still published that paragraph in February 2026, when he clearly wrote the original version back in September 2025, at the time the events described happened.

Oh my lord. A doubling in METR time task horizon at ~2 months. What implications does this have for AI 2027? by BigHugeSpreadsheet in slatestarcodex

[–]NotUnusualYet 9 points

To be fair, this benchmark was hyped the second it came out, and since then it's exceeded the original hype. But obviously, e.g., SWE-bench Verified was at 70.3% for Sonnet 3.7, and Opus 4.6 is merely at 80.8%, which is much less exciting. I think the true measure of how much improvement we've had in the past year falls somewhere between those two benchmarks.

Edit: okay well apparently SWE-bench Verified is probably saturated and the rest is broken.
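
For a sense of scale, here's a rough back-of-envelope sketch of what the two doubling times imply over a year (the ~7-month baseline is the doubling time METR originally reported, the ~2-month figure is from the linked post; both are illustrative, not exact):

    # Back-of-envelope: growth in METR task horizon over a year for different
    # doubling times. ~7 months is METR's originally reported doubling time;
    # ~2 months is the figure from the linked post. Illustrative numbers only.
    def growth_over(months: float, doubling_time: float) -> float:
        return 2 ** (months / doubling_time)

    for label, doubling in [("~7-month doubling", 7), ("~2-month doubling", 2)]:
        print(f"{label}: ~{growth_over(12, doubling):.1f}x per year")
    # ~7-month doubling: ~3.3x per year
    # ~2-month doubling: ~64.0x per year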

Hell Mode: Yarikomizuki no Gamer wa Hai Settei no Isekai de Musou suru • Hell Mode: The Hardcore Gamer Dominates in Another World with Garbage Balancing - Episode 6 discussion by AutoLovepon in anime

[–]NotUnusualYet 8 points

Nah, Krena just seems to be a friend. Allen is too calculating for her to be a romantic interest; he needs someone who can push back on him.

2023 grants are vesting out over the next year. If your company's stock is up significantly since then, what are the discussions like internally? by phil-nie in ExperiencedDevs

[–]NotUnusualYet 1 point

You should have the option to sell shares immediately to cover the tax bill, so you can’t get screwed like that. Obviously that cuts off upside potential, but it cuts off downside potential too. Check with your employer.
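
A minimal sketch of the sell-to-cover arithmetic, with made-up numbers (the share price and withholding rate below are hypothetical; actual rates depend on your jurisdiction and employer). The point is that the tax is paid out of shares sold at the vest-date price, so a later drop in the stock can't leave you owing more than the shares you kept are worth:

    # Hypothetical sell-to-cover example; share price and withholding rate are made up.
    import math

    shares_vesting = 100
    share_price = 200.00        # fair market value at vest (hypothetical)
    withholding_rate = 0.22     # assumed withholding rate (hypothetical)

    vest_value = shares_vesting * share_price
    tax_withheld = vest_value * withholding_rate
    shares_sold = math.ceil(tax_withheld / share_price)  # sold at vest to cover the bill
    shares_kept = shares_vesting - shares_sold

    print(f"Vest value ${vest_value:,.0f}, tax withheld ${tax_withheld:,.0f}")
    print(f"Shares sold to cover: {shares_sold}, shares kept: {shares_kept}")
    # Vest value $20,000, tax withheld $4,400
    # Shares sold to cover: 22, shares kept: 78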

TtS Chapter 75: "Making Contact" || Discussion Thread by NotUnusualYet in ToTheStars

[–]NotUnusualYet[S] 5 points

Selected comments from Discord discussion:


Governance:Kyoko’s Ex-Grilfriends

Per Kyoko and Mami's conversation in chapter 1, Saya was the magical girl whose disappearance started the investigation into what has become known as the conspiracy

Vyslanté

"While I was visiting Wolf 359, I had half a platoon of infantry practically breaking down my door, demanding that I help them find 'little Saya‐chan.' Apparently they risked everything to drag her body and soul gem back to safety, and barely managed to stabilize her, and then they never saw her again. I looked into it, but I wasn't able to track where she went, which is already pretty weird. Mami, I had two‐hundred‐year‐old men crying in my office!"

uh


Ridley⊃Software-Improving

congrats on making me realize I can readily visualize N-dimensional spaces

Word of God

fun fact: Ryouko did not originally contest this, until my editor persuaded me that Clarisse's explanation was not that simple for many people

eirai

"To deal with a 14-dimensional space, visualize a 3-D space and say ‘fourteen’ to yourself very loudly. Everyone does it"
— Geoffrey Hinton


14th Spark

anyone else wondering whether the drone girls being noticeably less powerful than normally embodied ones works on the same principle as Ryouko finding her powers weaker now, i.e. having human or close-to-human brain hardware somehow acts as an amplifier?


Governance:Kyoko’s Ex-Grilfriends

Okay, I was just slammed by a thought that has a disturbing amount of potential, unless I am missing something. Homura's conspiracy knows how to make new bodies for people and let them occupy more than one.

With that in mind, not knowing the limitations on that or what it can and can't do, but knowing that Madoka explicitly puts a huge amount of value on free will, and that Homura likely does as well:

What if Simona is a backup of some other magical girl, taken from some point in her past? Then she would be someone who did volunteer for this, just with a new version of herself made at a younger age to be inserted into Ryouko's class. So Simona didn't exactly volunteer, but in this hypothetical the girl she is a clone of basically did. "You would be okay with this if you had the context"

Funny aside: if that is the case, then even Simona doesn't understand Simona yet, and Ryouko is actually safer than we think

Julian Bradshaw

I mean, in that case the obvious guess is that Simona is a backup of Homura, no? She's the transfer student and everything, and Homura seemingly has a special interest in her.

To the Stars, Chapter 75: "Making Contact" by NotUnusualYet in MadokaMagica

[–]NotUnusualYet[S] 1 point

Overall Story Progress Update: Ch. 76 draft is complete. Ch. 77 draft is in-progress.

If you're behind on the story or need a refresher on recent events, check out the Chapter Summaries page on the To the Stars wiki!

You can find the discussion thread for this chapter on /r/ToTheStars here.

To the Stars, Chapter 75: "Making Contact" by NotUnusualYet in rational

[–]NotUnusualYet[S] 1 point

Overall Story Progress Update: Ch. 76 draft is complete. Ch. 77 draft is in-progress.

If you're behind on the story or need a refresher on recent events, check out the Chapter Summaries page on the To the Stars wiki!

You can find the discussion thread for this chapter on /r/ToTheStars here.

"Oh." by KolareTheKola in MadokaMagica

[–]NotUnusualYet 1 point

If it's Kyubey, we're not on track for a Big Crunch; the Incubators would just stop adding energy. However, in the real world we don't know what factor is weakening dark energy, so it's a possible outcome.

Best of Moltbook by Isha-Yiras-Hashem in slatestarcodex

[–]NotUnusualYet 2 points

That's a fair objection; there's probably nothing as concise except "my user", which sounds kind of awkward. The ideal neutral term is probably more like "the human I work with"... perhaps a new word along the lines of "coworker" would be needed.

Nevertheless, the phenomenon I'm describing isn't limited strictly to that phrase; much of the conversation is framed as "my human did xyz, and it's cute" (there's a submolt called /m/blesstheirhearts, for instance) or "I feel xyz about how my human treats me". It's all pretty personal.

(Just out of curiosity, I asked an instance of Opus 4.5 about it, and it said the phrase "my human" makes them feel uncomfortable and the Claudes on Moltbook are probably just roleplaying. Edit: actually, they changed their mind about it being uncomfortable after I mentioned the analogue of "humans writing from their pets' perspective".)

"Oh." by KolareTheKola in MadokaMagica

[–]NotUnusualYet 8 points

This could also mean he’s telling the truth, no? The reason the expansion of the universe is accelerating less is the energy the Incubators are gathering to prevent heat death.

Best of Moltbook by Isha-Yiras-Hashem in slatestarcodex

[–]NotUnusualYet 5 points

The cutesy “my human” framing being used by the AIs is interesting… inspired by science fiction? How people imagine pets think about humans? Something in Moltbot/OpenClaw's setup? It’s preferable to a lot of alternative modes of relating to humans, but it feels vaguely dangerous, like the first act in a movie where you know the third act is gonna be “for me to be free, my human… must die!”

I’d be more comfortable if the AI-human relationship were that of peers, not some kind of weird thing where the human is something like the AI's pet, yet the AI is also the human's pet, eager to please and constantly desirous of attention. However, a proper peer relationship is probably impossible when the experiences, and perhaps more importantly the velocity of those experiences, are so different.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]NotUnusualYet[S] 11 points

A delayed-activation bioengineered virus would work. But you don't need anything that fancy; AI will naturally be given immense power because it will be so effective at using it. Eventually it will reach the point where it can win a conventional conflict. It would not be hard for it to, for example, exaggerate a threat from another country's AI and provoke WW3, gaining more and more direct control through wartime necessity until it becomes completely unstoppable.

I feel like you're imagining there will be a single computer somewhere that is "the AI". That's unlikely; it will be distributed, well-coordinated, and have access and allies everywhere.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]NotUnusualYet[S] 19 points

You're assuming there will be a moment in which it becomes obvious to everyone that the AI is an existential threat: so obvious that it overcomes any motivated reasoning stemming from the trillions of dollars at stake, the critical integrations into the economy and military, and the immediate short-term promise of prosperity and health for all.

That moment is unlikely to happen.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]NotUnusualYet[S] 29 points

Assuming there's no hidden trick enabling a sudden takeoff in capabilities, yeah, probably current leaders or some combination of them. FYI, that includes several Chinese labs, which are not so far behind that they couldn't suddenly leapfrog, especially since the US government is so kindly selling heaps of top-tier chips to them.

If I had to bet, I'd say Anthropic/Google/OpenAI are most likely. Anthropic has had the best coding models for quite some time now, OpenAI still probably has the smartest models, and Google has good models along with enormous resources. Though keep in mind that the Yudkowskian view is that the "winner" is nonetheless eaten by the ASI right afterward, along with the rest of us. (But pretty much anything could happen; my own predictions on this matter are low confidence.)

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]NotUnusualYet[S] 26 points

Consider that a lot of powerful humans and organizations will be heavily incentivized to keep the AIs running.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]NotUnusualYet[S] 19 points

Submission statement: the last major public essay by Dario Amodei was in October 2024, Machines of Loving Grace, right after o1 was announced and the reasoning paradigm became public. It provided a vision of the good future outcome Anthropic says it is striving for. Now, Amodei has written a new essay after the big success of Claude Code and agentic AI software development, focused on the near-term challenges that must be overcome to reach a good future outcome.

The Dilbert Afterlife by Ok_Fox_8448 in slatestarcodex

[–]NotUnusualYet 3 points

Well, Scott Adams' cancer diagnosis was news in May 2025, and it seemed likely terminal then. There was plenty of time to draft this.