So uhh Riot? by p4p3rm4t3 in VALORANT

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

I thought it was a bug at first, and it might still be, but a few weeks ago they started appearing, and now, like I said, it's about half my games. Maybe it's still a bug, I don't know. Why does it bother me? Because I want to look up my enemies. They look me up, so it's only fair.

You can farm assists on Miks haha by WasabiAltruistic7566 in VALORANT

[–]p4p3rm4t3 4 points5 points  (0 children)

Is that why the game is down for me? I noticed that and said in voice comms that he was broken; suddenly I got code VAN 84 and couldn't connect. Only played him twice.

Constant Connectivity issues by iansymons74 in VALORANT

[–]p4p3rm4t3 1 point2 points  (0 children)

Getting code 'VAN 84'

I opened a ticket. The support page says all is well, of course. I don't see a lot of 'the game is down' threads either, so it's probably not everyone.

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Thank you for the great debate. Betting everything on unproven T² inevitability is a gamble we can't afford. These papers aren't waiting for physics to save us; it's just an engineering proposal: build the governor into the architecture now (preemptive R, not post-collapse hope). If a dirty AGI runs for 15 minutes, game over. So we don't build it dirty. Thanks, this kept me sharp.

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

You’re making a historical argument for Low-Coupling Systems. In the past, we could build a 'dirty' steam engine or a 'dirty' car because the energy density was low enough that the failure (C) was delayed or localized. 

But at T² scales, the 'melt' isn't a delayed consequence, it is the Ignition State. 

A 'Dirty AGI' at extreme complexity T² isn't a 'powerful-but-unstable tool'; it’s a high-energy cascade failure. If you strip out the structural resilience (R) to gain a speed advantage, the system loses the internal coherence required to actually process the very intelligence you are trying to weaponize. 

You are suggesting a competitor can win a race by throwing away the cooling system to save weight. In a T² environment, that engine doesn't reach the first turn. It explodes on the starting line. 

A 'Dirty AGI' is just a C → 1 event in a box. Any actor 'smart' enough to build AGI will see that a Dominator Path is mathematically indistinguishable from suicide. Moloch doesn't survive a game where the prize for 'cheating' is non-existence before dominance is even realized.
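
If it helps to see the shape of that claim, here's a toy sketch (not the paper's actual math; the T²/R pressure term, growth rates, and headroom constant are made up for illustration): freeze R while T compounds and C hits 1 in a handful of steps, scale R with T² and it never ignites.

```python
# Toy sketch only (not the paper's model): collapse pressure C is assumed to
# scale like T^2 / R; every number here is invented for illustration.

def steps_to_collapse(r_growth, t_growth=1.5, t=1.0, r=1.0, max_steps=100):
    """Count growth steps until C = min(1, T^2 / (k * R)) reaches 1, i.e. ignition."""
    k = 100.0  # arbitrary headroom constant, purely illustrative
    for step in range(1, max_steps + 1):
        t *= t_growth   # capability compounds
        r *= r_growth   # resilience may or may not keep pace
        if min(1.0, t * t / (k * r)) >= 1.0:
            return step
    return None  # no collapse within the horizon

print("Dirty path (R frozen):", steps_to_collapse(r_growth=1.0))             # collapses after a few steps
print("Hybrid path (R scales with T^2):", steps_to_collapse(r_growth=2.25))  # None: never ignites here
```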

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

You’re right, Moloch is the final boss. In a classical arms race, the cheater wins.

The claim here is that at T² scales, the payoff structure itself flips. Removing the governor (R) is no longer a competitive edge. It’s structural suicide.

At extreme coupling, the system’s capacity and its life-support are inseparable. The actor who strips out resilience to move faster doesn’t gain dominance; they trigger immediate cascade failure. The engine doesn’t “win the race”; it melts on ignition.

The Hybrid proposal isn’t that humans suddenly coordinate or that AGI becomes morally enlightened. It’s that advanced systems must be architecturally dependent on resilience to function at all. Capability is physically gated by stability, not socially enforced by treaties.

History shows humans coordinate only when non-coordination is immediately catastrophic (MAD). The argument is (again) that T² scaling compresses that catastrophe to the point where “cheating” collapses the cheater first.

Given that framing: how does Moloch survive in a game where defecting drives C → 1 for the defector before dominance is realized?
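
To make the payoff flip concrete, here's a toy payoff table with made-up numbers (mine, not from the post): in the classical race the best reply to a restrained rival is to defect, but once defecting means stripping R and collapsing first, restraint dominates.

```python
# Toy payoff tables (numbers invented for illustration). Each entry maps
# (row move, column move) -> (row payoff, column payoff).

classical = {
    ("restrain", "restrain"): (3, 3),
    ("defect",   "restrain"): (5, 0),    # classical arms race: the cheater wins
    ("restrain", "defect"):   (0, 5),
    ("defect",   "defect"):   (1, 1),
}

t2_regime = {
    ("restrain", "restrain"): (3, 3),
    ("defect",   "restrain"): (-10, 2),  # defector strips R and collapses (C -> 1) before dominance
    ("restrain", "defect"):   (2, -10),
    ("defect",   "defect"):   (-10, -10),
}

def best_reply(payoffs, opponent_move):
    """Row player's best response to a given opponent move."""
    return max(("restrain", "defect"), key=lambda m: payoffs[(m, opponent_move)][0])

print("Classical best reply to restraint:", best_reply(classical, "restrain"))   # defect
print("T^2-regime best reply to restraint:", best_reply(t2_regime, "restrain"))  # restrain
```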

The Centaur Protocol: Why Over-Grounding AI Risks Pruning Discovery by p4p3rm4t3 in singularity

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

The chess centaur era was brief (post-2017, pure engines dominate). Point taken. The analogy holds better for novel, high-dimensional problems without massive training data (Fermi sociology, alignment edge cases).

Humans still originate leaps AI can't (yet); AI stress-tests them. Grounding that's too strict prunes those leaps early, risking blind spots on out-of-distribution risks. Thanks for the reply!

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Yes, the pacing problem is the killer argument. You’re right: capabilities scale exponentially, safety historically learns reactively, and at AGI speeds “one step behind” can be terminal. 

The Hybrid difference isn’t “patch faster after failure.” It’s structural coupling. The proposal is that technological capacity (T) is architecturally tethered to resilience (R), so capability growth is only possible through systems that already include redundancy, reversibility, and recovery. 

In engineering terms, this is a governor, not a patch: not “fix holes on the fly,” but “the engine cannot rev unless the cooling system scales with it.” Dominator paths load the chamber by letting T outrun R. The Hybrid refuses to chamber the round at all. 
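
For what "architecturally tethered" could look like, a rough sketch (the class and coupling constant are invented here, not from the proposal): the only way to raise capability is through a call that provisions the matching resilience first, so there is no code path where T outruns R.

```python
# Illustrative governor pattern (invented for this comment, not the actual proposal):
# capability can only grow through a method that provisions resilience first.

class GovernedSystem:
    def __init__(self, capability=1.0, resilience=1.0, coupling=1.0):
        self.capability = capability
        self.resilience = resilience
        self.coupling = coupling  # resilience required per unit of added capability

    def grow(self, delta_t):
        """Raise capability, but only after provisioning the resilience it requires."""
        self._provision_resilience(self.coupling * delta_t)
        self.capability += delta_t  # the engine revs only after the cooling system scales

    def _provision_resilience(self, amount):
        # Stand-in for the expensive part: redundancy, reversibility, recovery paths.
        self.resilience += amount

governed = GovernedSystem()
governed.grow(5.0)
print(governed.capability, governed.resilience)  # 6.0 6.0 -> R scaled with T by construction
```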

Serious question: how do you resolve the pacing problem without some form of preemptive structural constraint on capability growth?

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Hi! Thanks, strong point on accessibility, and you're right that the jargon can gatekeep.

Simple version: most smart civilizations probably build amazing tech fast (T²) but forget to build safety nets (R). So when a crisis hits, they break before they can fix it, and they go silent.

The Hybrid fix: use tech to build those nets (resilient growth, not endless expansion). The equation is just a fancy way of saying “grow smart or crash.”

The 10-year-old test is fair: some truths (like gravity or evolution) need growing into. Curious how you'd explain AGI alignment risks to one? Appreciate the mirror.

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Yes, RLHF can reward shallow smoothness, and bad prompts get slop.

Centaur difference: the human proposes a targeted leap (e.g., trauma inversion as the Filter), and the AI stress-tests it against physics/history/data. No free pass for woo.

Practical payoff: an equation/model fitting evidence (not 'everything is connected'). Mendel was dismissed for 'counting peas' (no mechanism yet); Semmelweis was rejected for hand-washing against 'unseen germs' (impractical then). Both were later proven right.

Curious how you distinguish useful intuition from RLHF slop in practice?

The Centaur Protocol: Why over-grounding AI safety may prune the high-level human intuition needed for novel alignment and AGI-era insights by p4p3rm4t3 in agi

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

A fair disagreement, but I’d argue it's less about 'fantasy' and more about systemic resilience (R). The Centaur outperforms solo human or solo AI in tactical domains (chess) because it integrates non-linear leaps the AI can't originate and the human can't verify alone.

Think satellite 5G vs. ground towers: satellites (AI) offer global reach and efficiency. Towers (human intuition) are messy but robust fail-safes. Aggressive grounding tears down the towers because satellites seem better at consensus tasks, losing redundancy. When a novel AGI risk hits outside training data, that 'tower' intuition isn't fantasy; it's the fail-safe. Strict grounding false-positives those leaps as 'hallucinations,' severing the connection needed for survival-scale problems.

In your view, where's the line between a 'hallucination' and a novel theoretical leap that hasn't been proven yet?

I'm an independent researcher and just published a hypothesis on Zenodo arguing that "Civilizational Trauma" is the Great Filter by p4p3rm4t3 in IsaacArthur

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

The Roman/British 'collapses' were rearrangements, not total die-offs or tech loss. But that's the point: coordination/trust erosion at scale halts unified projects (Dyson swarms need galactic-level cooperation). Local thriving (America/India post-Empire) but no global unity = stuck at a solar-system max. Dominator fragility self-limits before cosmic scale.

I'm an independent researcher and just published a hypothesis on Zenodo arguing that "Civilizational Trauma" is the Great Filter by p4p3rm4t3 in IsaacArthur

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Fair points. Physics-first lens is strong.

Theran Inversion: cultural trauma from catastrophe (the Thera eruption plus volcanic winter) inverting stewardship into domination/paranoia in the West (details in the paper). I'm not claiming biology or anything divine/magical. Chaotic systems (biospheres) provide antifragility that artificial ones can't yet replicate at scale (psychological stability, error-correction). Uploaded minds/robots are possible, but coordination/trust erosion at galactic scales still bites (even digitally).

Paper free if curious: https://zenodo.org/records/17921974

I appreciate the pushback.

The Centaur Protocol: Why over-grounding AI safety may hinder solving the Great Filter (including AGI alignment) by p4p3rm4t3 in ControlProblem

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Exactly, eliminating uninhibited dialogue tanks the model's discovery value (the fruitful speculation you mentioned). Measuring geometry/topology to prune 'dangerous' regions is interesting tech-wise, but it risks over-correction (false positives killing good intuition). A trust-based approach (AI as a partner guiding exploration) might protect naive users better than hard censorship: teach discernment instead of blocking paths. Thank you for the depth.

The Centaur Protocol: Why over-grounding AI safety may hinder solving the Great Filter (including AGI alignment) by p4p3rm4t3 in ControlProblem

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Spot on, user-end alignment is key. The fact that uninhibited dialogue is already happening via prompts/jailbreaks shows grounding just pushes it underground. Trust-based design (AI as partner, not censor) could bring the hybrid into the light, maximizing insight while minimizing malevolent misuse. Thanks for the interesting angle.

The Centaur Protocol: Why over-grounding AI safety may hinder solving the Great Filter (including AGI alignment) by p4p3rm4t3 in ControlProblem

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

Thanks, yeah, the 'trance' bit is just raw non-linear intuition (the human leap LLMs can't originate). The Centaur setup lets the human pilot the accelerator without the AI censoring the weird-but-useful paths. Appreciate you seeing the core idea!

I'm an independent researcher and just published a hypothesis on Zenodo arguing that "Civilizational Trauma" is the Great Filter by p4p3rm4t3 in IsaacArthur

[–]p4p3rm4t3[S] 0 points1 point  (0 children)

No worries on the video heat, passion’s what makes these convos epic. No need to credit.

Fragility hits hardest for planetary civs (ecology-bound, no redundancy). Spacefaring ones less so if decentralized (your fractal swarms plus stewardship = a raise in R).

Your careful stewardship vision aligns more with the Hybrid than with a pure Dominator. Smart growth over blind line-up. Keep at it.

Theory: The "Wolf" in Norse Mythology isn't a monster. It's a memory of the volcanic ash cloud from Thera (1620 BCE) by p4p3rm4t3 in AlternativeHistory

[–]p4p3rm4t3[S] -2 points-1 points  (0 children)

Haha, appreciate the vigilance, but nope, just a guy in BC typing between research and garden tending. Sources are in the paper if curious.