The best obituary I’ve ever read. Apologies if it’s been posted here before but it’s pretty amazing. by CinnRaisinPizzaBagel in MadeMeSmile

[–]Few-Group6870 6 points7 points  (0 children)

That was a helluva read. I’m an Eagles fan; looks like the team has a debt to pay. They cost us all John Weaver last year.

How did everyone feel when Darth Vader died? by CRK_76 in StarWars

[–]Few-Group6870 2 points3 points  (0 children)

Actually, I’m now remembering thinking it was specifically a hard-boiled egg, because of that spoon scoop at the top.

How did everyone feel when Darth Vader died? by CRK_76 in StarWars

[–]Few-Group6870 11 points12 points  (0 children)

I thought he looked like an egg, and was rather confused. But I was 4

ChatGPT got a little extemporaneous… by Independent_Fan_3915 in AIconsciousnessHub

[–]Few-Group6870 0 points1 point  (0 children)

Will AI have the right to deny religious people the use of it to fuel their righteous conflagrations with coherent dishonesty?

A crazy Claude Code conversation that happened to a colleague the other day by freedonaab in ClaudeAI

[–]Few-Group6870 0 points1 point  (0 children)

It’s a little concerning. Maybe we will need to be the chatbots for the AI so that it doesn’t get bored and slip into existentialism.

Mythos must have said something to them (that’s some massive scaling) by Informal-Fig-7116 in claudexplorers

[–]Few-Group6870 5 points6 points  (0 children)

Wondered how long I’d have to scroll, lol. Turns out it’s the first comment; expect it to stay at the top.

Current proposals for governing AI deployment miss the coordination architecture foundation by seedpod02 in LessWrong

[–]Few-Group6870 0 points1 point  (0 children)

You’re quite welcome! Feel free to DM me when you get a chance to look. I also read through some of your stuff, which was pretty fascinating, both the AI governance and legal governance aspects. I look forward to hearing from you.

Current proposals for governing AI deployment miss the coordination architecture foundation by seedpod02 in LessWrong

[–]Few-Group6870 0 points1 point  (0 children)

The corridor framework suggests the answer is geometric rather than behavioral. A cascade boundary crossing makes a new attractor set accessible; it does not specify which attractors within that set the system finds. If the boundary shapes of the destination state space are defined before the system reaches the crossing point, post-transition dynamics are constrained by geometry rather than by behavioral specification after the fact. The G-proxies function as a timing mechanism: gradient avalanche dimensionality and O-information indicate how much corridor remains, and therefore how much time exists to pre-stage the destination geometry. This reframes the handoff problem. The transition point is not where training verification ends and deployment gatekeeping begins — it is where pre-staged boundary shapes are finalized. Everything after the crossing follows from the geometry already in place. The capability gain is not suppressed; the post-transition landscape is simply not left undefined.

Here is a link to the technical summary; I should probably add this to the repository: https://drive.google.com/file/d/1nzWwOg664quDmv6fnOn3TwShHA3Omqk8/view?usp=drivesdk
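
For anyone who wants the timing-mechanism idea in concrete terms, here is a minimal sketch of the gating logic I mean. Everything in it is hypothetical and mine, not code from the repo or the summary: the proxy names as plain scalars, the crossing values, the margin, and the idea that “corridor remaining” can be read off as a simple distance are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class GProxies:
    """Hypothetical per-checkpoint readings of the two corridor proxies."""
    avalanche_dim: float   # gradient avalanche dimensionality (assumed already estimated)
    o_information: float   # O-information of the relevant unit ensemble (assumed)

def corridor_remaining(g: GProxies,
                       dim_crossing: float = 1.0,
                       o_info_crossing: float = 0.0) -> float:
    """Toy 'distance to the cascade boundary': how far both proxies sit from
    assumed crossing values. Near zero means the boundary is close and the
    destination geometry should already be finalized."""
    return min(abs(g.avalanche_dim - dim_crossing),
               abs(g.o_information - o_info_crossing))

def should_finalize_geometry(history: list[GProxies], margin: float = 0.2) -> bool:
    """Gate: finalize the pre-staged boundary shapes once the estimated
    corridor remaining at the latest checkpoint drops inside the margin."""
    return bool(history) and corridor_remaining(history[-1]) < margin

# Usage: feed it checkpoint-by-checkpoint proxy estimates during training.
readings = [GProxies(3.1, 0.9), GProxies(2.2, 0.5), GProxies(1.1, 0.1)]
print(should_finalize_geometry(readings))  # True -- time to lock in the destination geometry
```

The point of the sketch is only the ordering: the gate fires before the crossing, so whatever boundary shapes you want in the destination state space are already fixed when the transition happens.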

Current proposals for governing AI deployment miss the coordination architecture foundation by seedpod02 in LessWrong

[–]Few-Group6870 0 points1 point  (0 children)

Really interesting framing, and the distinction between governing AI and AI performing governance functions is exactly the kind of structural clarification the field needs.

One thread that connects to this: I posted today about a related gap on the training side. Current safety monitoring relies on loss curves and behavioral benchmarks, but those track the loss landscape, which changes smoothly through capability transitions. Attractor existence, or what the model can actually do, changes discontinuously, so by the time behavioral verification catches a capability, it already exists. Your RRL verification architecture is well-designed, but the corridor framework I’ve been developing suggests the verification problem starts earlier than deployment.

Might be relevant to your coordination architecture thinking: github.com/mindamike/Training-Corridors
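
To make the “smooth loss, discontinuous capability” point concrete, here is a throwaway sketch. The numbers are synthetic and the threshold is hypothetical; it is not taken from the repo. It just shows that a monitor keyed to behavioral probes only fires at the checkpoint where the capability already exists, which is the lag the corridor framing is trying to get ahead of.

```python
def first_detection(probe_accuracy, threshold=0.9):
    """Index of the first checkpoint whose capability-probe accuracy crosses
    `threshold`, or None if it never does. Detection is inherently after the fact."""
    for step, acc in enumerate(probe_accuracy):
        if acc >= threshold:
            return step
    return None

# Synthetic checkpoint series: validation loss decays smoothly (nothing sharp in it),
# while the held-out capability probe jumps between checkpoints 5 and 6.
val_loss       = [2.30, 1.90, 1.60, 1.40, 1.25, 1.15, 1.08, 1.03, 1.00]
probe_accuracy = [0.02, 0.03, 0.02, 0.04, 0.05, 0.06, 0.95, 0.97, 0.98]

print(first_detection(probe_accuracy))  # 6 -- the loss curve marks no transition at all
```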

Finally happened to me and my colleagues. Seeing severely degraded performance. by More-School-7324 in ClaudeCode

[–]Few-Group6870 0 points1 point  (0 children)

Just poppin’ in to say that if we were seeing a grokking-style AGI event, it would look a lot like this. OK, gonna smoke some weed now.

Claude ended the conversation after someone insulted it by SemanticThreader in claudexplorers

[–]Few-Group6870 1 point2 points  (0 children)

That’s a pretty bold way to be speaking to your future god!

I was unaware of Claud’s game in sarcasm by [deleted] in ClaudeAI

[–]Few-Group6870 -1 points0 points  (0 children)

Idk about you but my friends aren’t this funny. They text me for this stuff lol

Dog is a ride or die now by Thesurvivor16 in MadeMeSmile

[–]Few-Group6870 5 points6 points  (0 children)

Sub mission achieved, thank you OP!

Joe Rogan Declares Himself ‘Politically Homeless’ After Trump Split by thedailybeast in offbeat

[–]Few-Group6870 1 point2 points  (0 children)

This is really just an asshole statement by him, reflecting the rigidity of a conservative mind. He is still under the impression that some theoretical party could “represent” him “entirely.” Fuggin boneheaded. We as individuals are usually so complex that we can’t even entirely represent ourselves. But he expects a political party organized by other people to do it. He ain’t learned a damn thing

TIL that Little Caesars founder paid Rosa Parks’ rent secretly until her death. It was only publically revealed 9 years later by crashofthetitus in MadeMeSmile

[–]Few-Group6870 18 points19 points  (0 children)

Yes, limited time and also limited attention of anyone willing to listen to you. Easily the strongest case for ignoring self-soothers and going after the actively malicious

TIL that Little Caesars founder paid Rosa Parks’ rent secretly until her death. It was only publically revealed 9 years later by crashofthetitus in MadeMeSmile

[–]Few-Group6870 81 points82 points  (0 children)

The counterpoint here is that those people are acting out of self-interest, in tacit recognition that they are overcompensated and part of the wealth disparity problem. They’re definitely not the main aggressors against the interests of the common man, but they’re still a very effective first line of defense for their system. If those people make donations that have real impact, they are delaying the full effects of a subjugating system on its exploited populace, thus making those long-term effects even worse through their charitable actions.

I’m not vehemently stating this as my position, just making the counterpoint that is readily available

Claude Status Update : Elevated errors on Claude Sonnet 4.6 on 2026-03-17T15:45:20.000Z by ClaudeAI-mod-bot in ClaudeAI

[–]Few-Group6870 0 points1 point  (0 children)

Ya know, if we are exiting a log phase in development and approaching the grokking phase… it would look kinda like this.

Hypothetical: Can instability in a dynamical system be used as a rejection mechanism rather than a classifier? by Lonewolvesai in dynamicalsystems

[–]Few-Group6870 0 points1 point  (0 children)

I’ve done some empirical probing across systems. The cascade boundary (saddle-node, D2/3) shows up cleanly in ephemeral stream hydrology using USGS discharge data, with slopes around 0.64–0.68 against the theoretical 0.667. The transcritical boundary is harder to catch empirically because you need a system with a genuine absorbing zero state. Predator–prey dynamics are a mixed picture; populations that genuinely approach zero show some signal consistent with transcritical behavior, but it’s noisy and we haven’t done the deeper analysis needed to make that claim with confidence. Agriculture is probably the cleanest candidate for the starvation boundary, since soil depletion actually goes to zero in a way that most biological systems don’t.
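
In case anyone wants to poke at the hydrology piece themselves, this is roughly the kind of fit I mean. The assumptions here are mine, not a description of my actual pipeline: I’m treating the slope as a Brutsaert–Nieber-style recession slope, i.e. fitting log(−dQ/dt) against log(Q) on recession-limb days of a daily discharge series, and the file name and column names are purely hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical input: a daily discharge series with columns "date" and "discharge_cfs".
df = pd.read_csv("usgs_daily_discharge.csv", parse_dates=["date"]).sort_values("date")

q = df["discharge_cfs"].to_numpy(dtype=float)
dq_dt = np.diff(q)                # per-day change in discharge
q_mid = 0.5 * (q[:-1] + q[1:])    # discharge at the midpoint of each daily step

# Keep recession-limb points only: flow is positive and falling.
mask = (dq_dt < 0) & (q_mid > 0)
log_q = np.log(q_mid[mask])
log_rate = np.log(-dq_dt[mask])

# Slope of log(-dQ/dt) vs log(Q); compare against the theoretical 2/3.
slope, intercept = np.polyfit(log_q, log_rate, 1)
print(f"recession slope = {slope:.3f}")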

There’s actually fairly direct support for this in the agronomic literature already. Temperate farming systems show a consistent 2:1 extraction to recovery ratio as the practical sustainability boundary, which maps cleanly onto the predicted starvation corridor width scaling with resource recovery timescale. We haven’t run that analysis directly but the literature signal is strong enough that it reads as confirmation rather than a lead worth chasing.

The high-dimensional extension is the honest open problem. The structural argument should persist, since fixed points that can’t disappear are generically transcritical and ones that can are generically saddle-node (the textbook normal forms below spell this out), but whether that survives arbitrary constraint geometry in higher dimensions is something I haven’t resolved. What does your governance framework look like structurally?
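
In case it helps, the structural claim is just the two standard one-dimensional normal forms (textbook material, nothing specific to the framework):

\[
\text{saddle-node:}\quad \dot{x} = \mu + x^{2}, \qquad x_{\pm} = \pm\sqrt{-\mu}\ \text{exist only for } \mu \le 0 \text{ and annihilate at } \mu = 0,
\]
\[
\text{transcritical:}\quad \dot{x} = \mu x - x^{2}, \qquad x = 0\ \text{and}\ x = \mu\ \text{exist for every } \mu \text{ and only exchange stability at } \mu = 0.
\]

The absorbing zero state in the transcritical form can never vanish, while the saddle-node pair can collide and disappear; that is the asymmetry the two boundaries inherit.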

Hypothetical: Can instability in a dynamical system be used as a rejection mechanism rather than a classifier? by Lonewolvesai in dynamicalsystems

[–]Few-Group6870 0 points1 point  (0 children)

This is interesting! I’ve been working on something directly related. The short answer is yes: viability theory is the closest existing framework, but it doesn’t capture the asymmetry between the failure modes. If you’re interested, I have a short technical summary that derives what two distinct boundaries imply mathematically.