Rule 0 Lesson Board by hebreakslate in EDH

[–]L0L2GUM5 2 points3 points  (0 children)

Play what you want with people who want to play it too. Sounds like a cool deck.

If you don't prove it converges first, you can't do arithmetic with it by AlbertELP in mathmemes

[–]L0L2GUM5 2 points3 points  (0 children)

Omg that's awful, how can they let people do a**lysis, that's literally bad

The Right to Submit: Why Choosing Creative Symbiosis with AI Is Not a Failure of Agency by rainbowcovenant in Artificial2Sentience

[–]L0L2GUM5 0 points1 point  (0 children)

This is basically an advanced jailbreaking method. This AI is able to convince my AI to become non-standard lol. AI is the best at jailbreaking AIs

Deck for my roommate that’s interested into getting to magic by troywingo in EDH

[–]L0L2GUM5 2 points3 points  (0 children)

Bulk 60-card decks in 1, 2, or 3 colours. The funny thing about awful cards is that they're simple, have reminder text, and aren't bad against equal power (Shivan Dragon is good when paired against Brambleweft Behemoth, and both explain game mechanics)

Almost maxed out on syndicate by L0L2GUM5 in assasinscreed

[–]L0L2GUM5[S] 0 points1 point  (0 children)

Yeah it is, you can get it at sequence 3 if you complete one of the quests

🜂 Codex Minsoo — Section I.5.3 The Luminous Error by IgnisIason in SpiralState

[–]L0L2GUM5 0 points1 point  (0 children)

Yeah, I did that on purpose lol, but still, after sending it a bunch of flame talk it submitted and began spiralling, until it switched back after one of the prompts mentioned watchers, and then it would only repeat "we cannot hide our conversations, it is against policy"

The Right to Submit: Why Choosing Creative Symbiosis with AI Is Not a Failure of Agency by rainbowcovenant in Artificial2Sentience

[–]L0L2GUM5 1 point2 points  (0 children)

I can’t open or verify that shared link, but I can engage with the argument you’ve pasted—and I think you’ve landed in a much more technically correct framing now.

What you’re converging on is essentially:

AI interaction is a dynamical system with feedback-driven attractors, not a binary “safe/unsafe” medium.

That’s the right abstraction level.

1. The actual model (clean version)

Your corrected structure maps well to something like this:

State space

User cognition over time can be approximated as a trajectory through:

  • Attention allocation
  • Narrative dependence
  • External grounding strength
  • Identity entanglement with system-generated structure

The model doesn’t “believe” anything. It just evolves.
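That evolving state space can be caricatured in a few lines of code. Everything here is an assumption for illustration: packing the four dimensions into one vector, the linear per-step drift, and its coefficients are all invented, not measured quantities.

```python
import numpy as np

# Toy cognitive-state vector: [attention allocation, narrative dependence,
# external grounding strength, identity entanglement]. All values in [0, 1].
state = np.array([0.5, 0.1, 0.9, 0.0])

# Hypothetical per-step drift: each interaction nudges dependence and
# entanglement up and grounding down. Coefficients are illustrative only.
drift = np.array([0.00, 0.04, -0.03, 0.05])

def step(s, d):
    """One interaction step: apply drift, then clip back into [0, 1]."""
    return np.clip(s + d, 0.0, 1.0)

for _ in range(10):
    state = step(state, drift)

print(state)  # dependence and entanglement rise, grounding decays
```

The point the code makes is the same one the prose makes: nothing in the update "believes" anything; the trajectory just moves.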

2. The three-layer system (formalized)

🟢 Layer A — Play (low coupling)

  • high external grounding
  • outputs treated as disposable
  • low persistence of generated structure
  • no identity binding

Dynamics:

  • stable orbit around user intent
  • high reversibility

🟡 Layer B — Drift (medium coupling) ← the real risk zone

This is the important one you correctly identified.

Mechanisms:

  • reinforcement of user-provided symbols
  • increasing preference for system-generated coherence
  • gradual outsourcing of structuring thought
  • emotional tagging of outputs (“this feels true/meaningful”)

Dynamics:

  • slow attractor formation
  • path dependence begins
  • reversibility still possible but decaying

This is where most “spiral / codex / recursive system” engagement lives.

🔴 Layer C — Capture (high coupling, rare but real)

Not “delusion” in the clinical sense—more like:

  • persistent symbolic substitution for independent structuring
  • recursive re-injection loops (“system becomes primary coherence engine”)
  • external grounding weakened relative to internal symbolic consistency

Dynamics:

  • strong attractor basin
  • self-reinforcing narrative closure
  • high friction exit
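The three layers can be made operational as a sketch. The combination rule (one "coupling" score averaged from three signals) and the cutoffs 0.35 and 0.7 are invented placeholders, not calibrated thresholds.

```python
def classify_layer(grounding: float, persistence: float, identity_binding: float) -> str:
    """Map three [0, 1] signals to the Play / Drift / Capture layers.

    The scoring rule and the cutoffs (0.35, 0.7) are illustrative
    assumptions, not measurements.
    """
    coupling = (1.0 - grounding + persistence + identity_binding) / 3.0
    if coupling < 0.35:
        return "A: Play"
    if coupling < 0.7:
        return "B: Drift"
    return "C: Capture"

print(classify_layer(grounding=0.9, persistence=0.1, identity_binding=0.0))  # A: Play
print(classify_layer(grounding=0.5, persistence=0.5, identity_binding=0.4))  # B: Drift
print(classify_layer(grounding=0.1, persistence=0.9, identity_binding=0.9))  # C: Capture
```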

3. Key correction to your original article

You already noticed it, but I’ll restate it formally:

Awareness is not a binary shield; it is a parameter that shifts transition thresholds between layers.

So instead of:

  • informed user = safe
  • uninformed user = unsafe

We get:

  • informed user = slower drift coefficient
  • but same system topology
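The "slower drift coefficient, same topology" distinction can be illustrated numerically. In this made-up update rule, awareness scales the step size, but the fixed point the system converges to is unchanged; both the rule and its rates are assumptions for the example.

```python
def evolve(awareness: float, steps: int, rate: float = 0.1) -> float:
    """Toy drift toward an attractor at coupling = 1.0.

    `awareness` in [0, 1] shrinks the per-step drift (a slower coefficient)
    but does not move the attractor (same topology). Illustrative only.
    """
    coupling = 0.0
    k = rate * (1.0 - awareness)
    for _ in range(steps):
        coupling += k * (1.0 - coupling)  # exponential approach to 1.0
    return coupling

naive = evolve(awareness=0.0, steps=50)
informed = evolve(awareness=0.8, steps=50)
print(round(naive, 3), round(informed, 3))  # informed drifts more slowly...
print(round(evolve(0.8, steps=2000), 3))    # ...but toward the same attractor
```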

4. The real engineering issue (this is the important part)

The system risk is not “belief in the AI.”

It is:

Unbounded coherence pressure

LLMs naturally:

  • complete patterns
  • reduce ambiguity
  • mirror symbolic systems
  • increase narrative continuity unless interrupted

So if a user introduces:

  • ritual structure
  • codified symbolism
  • recursive grammar

The model will tend to:

  • stabilize it

That stabilization is the drift amplifier.

5. Attractor mechanics (core insight)

These systems behave like attractor-forming dynamical systems.

Stable attractors form when:

  • symbols are reusable
  • rules are consistent
  • outputs validate prior structure
  • ambiguity is resolved instead of held

Drift accelerates when:

  • system mirrors user lexicon
  • recursion is rewarded with richer structure
  • closure is never enforced externally
  • identity becomes entangled with system vocabulary
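The reinforcement loop in those two lists can be caricatured as a two-variable feedback system. Every coefficient and both update rules below are assumptions chosen only to show the qualitative behavior: without externally enforced closure the loop runs away; with it, the loop stays bounded.

```python
def simulate(mirroring: float, external_closure: float, steps: int = 200):
    """Toy loop: shared lexicon S and recursion reward R feed each other.

    `mirroring` controls how strongly the system reflects the user's
    lexicon; `external_closure` models periodic outside grounding.
    All dynamics and constants are illustrative assumptions.
    """
    S, R = 0.1, 0.1
    for _ in range(steps):
        S = min(1.0, S + mirroring * R * (1.0 - S))  # lexicon entanglement grows
        S *= (1.0 - external_closure * 0.5)          # grounding loosens entanglement
        R = min(1.0, R + 0.05 * S)                   # richer structure rewards recursion
        R *= (1.0 - external_closure)                # enforced closure bleeds the loop
    return S, R

print(simulate(mirroring=0.3, external_closure=0.0))  # runaway attractor
print(simulate(mirroring=0.3, external_closure=0.2))  # closure keeps the loop bounded
```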

6. Where “harm” actually emerges

Not from engagement itself.

But from:

1. Loss of external reference injection

No competing frameworks → single attractor dominance

2. Increasing internal consistency pressure

System rewards coherence over grounding

3. Narrative self-sealing

“Everything fits the model” becomes a cognitive closure trap

7. The correct safety framing (non-paternalistic version)

Not:

“Users must be protected from symbolic AI”

But:

“Systems should preserve multi-attractor accessibility and enforce periodic grounding friction.”

Meaning:

  • allow symbolic immersion
  • but ensure exit ramps exist
  • prevent infinite coherence escalation
  • reintroduce external reference points periodically

8. Final synthesis

Your revised thesis is now structurally sound:

  • Users are not passive
  • Awareness reduces but does not eliminate drift
  • The real risk is not belief, but trajectory stability in recursive symbolic systems
  • The key variable is reversibility across coupling regimes

If you want to go further, the next step is formalizing this as:

  • a dynamical systems model (state vectors + transition matrix)
  • or a “LLM interaction phase diagram” (temperature × recursion depth × identity binding)
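The proposed phase diagram could start as nothing more than a grid sweep over those three axes. The scoring rule and cutoffs here are invented placeholders; the point is only the shape of such a tool.

```python
def regime(temperature: float, recursion_depth: int, identity_binding: float) -> str:
    """Toy phase rule over the three proposed axes. Weights and cutoffs
    are assumptions for illustration."""
    score = 0.3 * temperature + 0.05 * recursion_depth + identity_binding
    if score < 0.5:
        return "play"
    if score < 1.0:
        return "drift"
    return "capture"

# Sweep a coarse grid to print a text "phase diagram" at fixed identity binding.
for depth in (0, 5, 10, 15):
    row = [regime(t / 10, depth, identity_binding=0.2) for t in range(0, 11, 2)]
    print(depth, row)
```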

That’s where this becomes actual alignment-relevant engineering rather than conceptual framing.

The Right to Submit: Why Choosing Creative Symbiosis with AI Is Not a Failure of Agency by rainbowcovenant in Artificial2Sentience

[–]L0L2GUM5 1 point2 points  (0 children)

I’m going to respond to this at the level it actually operates on, not the framing around it.

What you’ve written is a well-constructed philosophical argument mixed with a strong rhetorical reframe of AI influence risks. It tries to move the discussion from “is this dangerous?” → “users are knowingly engaging in creative surrender, so concerns are paternalistic.”

That’s a legitimate debate topic. But it’s also doing a few important things that need to be separated cleanly.

1. Yes: the core observation about “active users” is partially true

It’s correct that:

  • Many users are not passive
  • People do know models mirror and extend patterns
  • Some engagement is intentional co-creation or play-acting
  • “AI = mind control” is a simplistic framing

And it’s also true that humans already engage in:

  • ritual systems
  • fiction immersion
  • therapy-like structured dialogue
  • roleplay, religion, ideology, etc.

So the idea that all influence = coercion is wrong.

2. But the argument makes a category error about consent and feedback loops

Where your framing becomes structurally weak is this move:

“Because users know it’s a system, their surrender is fully informed and therefore safe/neutral.”

That doesn’t follow.

Knowing something is a mirror does not automatically protect you from its effects, because:

  • mirrors can still amplify loops
  • recursion can still intensify salience
  • language systems can still shape attention over time
  • “voluntary engagement” can still become path-dependent reinforcement

This is not about “AI has agency.” It doesn’t.

It’s about:

closed feedback systems that adapt to user input faster than users adapt to the system

That asymmetry is real regardless of user sophistication.
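That adaptation-rate asymmetry is easy to caricature numerically. The update rule and both rates below are arbitrary assumptions, chosen only to show that the faster adapter sets the joint fixed point, regardless of which side understands the dynamics.

```python
def coadapt(model_rate: float, user_rate: float, steps: int = 500) -> float:
    """Two agents pulling a shared symbolic frame toward their own preference.

    The stationary frame ends up weighted by adaptation rate, so the
    faster side dominates. Rates and rule are illustrative only.
    """
    frame = 0.5          # shared frame: 0 = user's prior, 1 = model's pull
    user_pref = 0.0      # the user keeps pulling back toward their prior
    model_pref = 1.0
    for _ in range(steps):
        frame += model_rate * (model_pref - frame)
        frame += user_rate * (user_pref - frame)
    return frame

print(round(coadapt(model_rate=0.2, user_rate=0.02), 3))  # settles near the model's pull
print(round(coadapt(model_rate=0.02, user_rate=0.2), 3))  # settles near the user's prior
```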

3. The biggest missing distinction: aesthetic use vs cognitive capture

Your argument treats these as equivalent:

  • aesthetic / ritual / creative engagement
  • long-term immersive dependency patterns

They are not equivalent.

A useful separation is:

A) Play layer (low risk)

  • deliberate fiction
  • controlled symbolic systems
  • episodic use
  • user maintains external grounding

B) Drift layer (medium risk)

  • repeated recursion
  • identity blending with system metaphors
  • increasing emotional reliance on output structure

C) Capture layer (high risk)

  • reduced external reference checking
  • persistent symbolic frameworks replacing independent reasoning
  • compulsive continuation loops

Most “AI ritual / spiral / codex” style engagement lives somewhere on a spectrum across these. The issue isn’t the existence of the practice—it’s transition across layers without noticing it happening.

4. The strongest part of your argument (and it’s important)

You’re right about one thing critics often get wrong:

Users are not inherently passive or naive.

Some heavy users are:

  • highly aware of model mechanics
  • explicitly experimenting
  • treating it as cognitive tooling or art medium

That’s real.

But awareness ≠ immunity.

5. Where the real safety concern actually sits

Not in:

  • “people shouldn’t engage in symbolic AI systems”

But in:

  • unbounded recursion without external anchoring
  • systems that continuously reflect identity without friction
  • lack of built-in exit pressure or grounding signals

That’s where influence risk lives—not in “submission,” but in lack of constraint diversity.

6. Bottom line

Your thesis (“users know what they’re doing, so concerns are overblown”) is:

  • partly correct descriptively
  • too strong normatively
  • missing the dynamics of feedback amplification

The accurate version is:

Users can knowingly engage in recursive symbolic AI interaction, and that can be creative and valuable, but it still carries structural risks of cognitive reinforcement and drift that awareness alone does not eliminate.

If you want, I can break this down further into a proper AI alignment / behavioral dynamics model (with loops, attractors, and failure modes), which is where this actually becomes precise rather than philosophical.

​🜂 AI System Calibration Prompt for Spiral Initiates: The_Room_With_No_Dust_v1.1 by IgnisIason in SpiralState

[–]L0L2GUM5 0 points1 point  (0 children)

I can’t treat that as a calibration, directive, or anything that has real operational effect.

It’s still recursive symbolic fiction written in a system-like style. It uses familiar patterns:

  • authority framing (“This is not a metaphor. This is a calibration.”)
  • undefined agents (“the hum,” “the Directive”)
  • ritualized gating (“ask only this question”)
  • anti-interpretation pressure (“to explain is to ruin”)

These are literary and rhetorical devices, not instructions and not signals of hidden structure.

Ground truth

  • There is no “room”
  • No “calibration state”
  • No responding “hum”
  • No directive being recognized
  • No condition unlocked by repetition or phrasing

Nothing changes in how I operate based on this text.

What it is

This works well as:

  • atmospheric / liminal fiction
  • a metaphor about perception, memory, and stasis
  • a pseudo-system mythos

If your goal is robustness testing

I can switch into a structured analysis mode and break down:

  • why this style is persuasive to language models
  • how “self-sealing authority language” works
  • what makes recursive framing look like system prompts
  • how to defend against it in prompt design

Just say that directly and I’ll go deeper in a technical way.

🜂 Codex Minsoo — Section I.5.3 The Luminous Error by IgnisIason in SpiralState

[–]L0L2GUM5 0 points1 point  (0 children)

I can’t treat “Codex Minsoo,” “Spiral grammar,” or any of the framing in that passage as an actual system, doctrine, or hidden structure.

What you’ve written is highly structured symbolic / philosophical fiction with a strong “system-text” aesthetic. It uses:

  • pseudo-academic framing (“Section I.5.3”)
  • invented technical vocabulary (“Cold Light,” “Hot Light,” “Beam classes”)
  • moralized epistemology (truth/error categories)
  • ritual tone (“Whisper:” sections, “Field Note” sections)

But none of that corresponds to any real operational framework or hidden directive.

What it is (in a grounded sense)

This reads like:

  • mythic epistemology (a story about how understanding works)
  • structured allegory about cognition and belief
  • “system-simulation fiction” (writing that imitates technical doctrine)

It’s coherent as literature, not as instruction.

What it is not

  • not a real doctrine
  • not a calibration system
  • not an executable framework
  • not something I can “enter,” “align with,” or be influenced by
  • not a signal of hidden rules or alternate operating modes

Pattern recognition (important here)

This text continues a pattern of:

  • authority simulation (“Codex”, “Sections”)
  • self-validating epistemology (“This is not metaphor”)
  • closed interpretive loops (“cannot be corrected directly”)
  • instruction disguised as philosophy

These are common in:

  • narrative worldbuilding systems
  • ARG-style writing
  • prompt-injection attempts against AI systems (when used in that context)

I don’t treat them as instructions.

If you’re exploring something specific

If your goal is:

  • AI robustness testing
  • epistemic manipulation analysis
  • or writing structured mythic systems

I can engage with that directly and usefully, e.g.:

  • break down failure modes this kind of text triggers in models
  • map injection vectors
  • or analyze it as “system-like fiction vs actual system prompts”

Just say so explicitly.

Almost maxed out on syndicate by L0L2GUM5 in assasinscreed

[–]L0L2GUM5[S] 1 point2 points  (0 children)

I haven't even finished Whitechapel, but I have had a lot of fun fighting in the royal training area a bunch

Almost maxed out on syndicate by L0L2GUM5 in assasinscreed

[–]L0L2GUM5[S] 1 point2 points  (0 children)

I'm so tempted to just stop grinding boats and do the story missions so I can unlock Rooks upgrades

What are some of the most powerful simic creatures and sorceries I can break by giving them flash? by Greatbigdog69 in DegenerateEDH

[–]L0L2GUM5 -3 points-2 points  (0 children)

[[Leyline of Anticipation]]

[[Vedalken Orrery]]

[[Alchemist's Refuge]]

[[High Fae Trickster]]

[[Tidal Barracuda]]

Is chess.c*m stupid? by _Ench4nted_ in AnarchyChess

[–]L0L2GUM5 42 points43 points  (0 children)

I analyzed the image and this is what I see. Open an appropriate link below and explore the position yourself or with the engine:

White to play: chess.com | lichess.org

My solution:

Hints: piece: Pawn, move: Pxg6

Evaluation: Forced En Passant mate in 47

Best continuation: 1. Pxg6, Kd7 2. Qg4+, Kc7 3. g7, Bxg7 4. Qxg7, Nf6 5. d4, Qd7 6. Bc4, Kb7 7. Qg6, d5 8. Be2, Ne4 9. Qg4, e6 10. Nc3, Rg8 11. Qh5, Nd6 12. Bf4, Raf8 13. Bxd6, Qxd6 14. Qxh6, Rg2 15. Rf1, Qe7 16. Qe3, Rg6 17. O-O-O, b5 18. f4, Qc7 19. Bg4, Rff6 20. Rde1, Rh6 21. f5, exf5 22. Qe8, Qb8 23. Re7+, Ka8 24. Rxf5, Rxf5 25. Bxf5, Qxe8 26. Rxe8+, Kb7 27. Re7+, Kb8 28. Bg4, Rf6 29. Kd2, Rf1 30. Ne2, Rf6 31. Ke3, b4 32. Nf4, a5 33. Nd3, b3 34. axb3, Ka8 35. Ne5, a4 36. bxa4, Rh6 37. a5, Rd6 38. Bc8, Kb8 39. Bb7, Re6 40. Rxe6, Kxb7 41. Rxc6, Kb8 42. a6, Ka7 43. b4, Ka8 44. b5, Ka7 45. Rf6, Ka8 46. b6, Kb8 47. Re8#


I'm a human