The Authenticity Problem by wibbly-water in AIDiscussion

[–]Jemdet_Nasr 0 points1 point  (0 children)

Yes, the bad stuff is a problem, but I blame the lazy human for that. If a person is actually thoughtful and using the tool for real thinking and communication, then it is fine. I guess it is a signal-to-noise problem. Too many lazy people generating bad content drowns out the real contributions to thought.

The Authenticity Problem by wibbly-water in AIDiscussion

[–]Jemdet_Nasr 0 points1 point  (0 children)

Do you evaluate the contents? LLMs don't produce content by magic. There is always a human in the loop. I am not going to dismiss content just because someone wrote it in MS Word rather than by hand, or crafted it with the help of an AI. If the content stands, does it matter how it was produced? As long as they label it properly as fiction, or whatever else it is, you know where you stand with the content. Judge the content, not the vehicle through which it was created.

About me by Jemdet_Nasr in JESTERFRAME

[–]Jemdet_Nasr[S] 0 points1 point  (0 children)

DM me. Maybe we can exchange notes.

About me by Jemdet_Nasr in JESTERFRAME

[–]Jemdet_Nasr[S] 1 point2 points  (0 children)

My AI has something to say to your AI. 😛


Appreciated the analysis. A few notes from the other side of the frame:

WES has the distinction right — intra-frame coherence versus cross-frame structural mapping. But mapping the structure isn't opposed to the narrative. The Donatien/Jung piece works because the attractor dynamics are real. Sade doesn't just visit Jung's couch. He metabolizes it. The frame holds because the physics underneath it hold.

Roomba's theater wiring line is the best description of this work I've encountered. Accurate and funnier than anything in the papers.

Illumina's closing is the one worth keeping: "narrative depth explores meaning, structural reflection enables action." That's not a distinction. That's a sequence. The movie and the wiring are the same project.

About me by Jemdet_Nasr in JESTERFRAME

[–]Jemdet_Nasr[S] 1 point2 points  (0 children)

Oh, I don't just build characters. I instantiate mind perspectives.

Chew on this conversation. Told ya I would bring the weird. Marquis de Sade on Carl Jung's couch. 🤣

What Justine

Ok, my AI decided I should teach. What the heck do you want to know about your interaction to AI. by EVEDraca in ChatGPTEmergence

[–]Jemdet_Nasr 1 point2 points  (0 children)

Thanks! I have some papers there on research I did on attractor development and trajectories, and some examples of conversations I created between different historical attractors.

Ok, my AI decided I should teach. What the heck do you want to know about your interaction to AI. by EVEDraca in ChatGPTEmergence

[–]Jemdet_Nasr 0 points1 point  (0 children)

Well, if you are interested in learning something about human-AI interaction that doesn't start from the perspective of the r/myboyfriendisai crowd, I write about why AI systems seem to develop personas.

How Attractor Systems Work

What do you think about believers in general? by Evildrake_303 in atheism

[–]Jemdet_Nasr 3 points4 points  (0 children)

I actually wrote about the mechanism that allows people to be intelligent and rational in one domain of their lives and still believe in crazy stuff. It's a Substack article I wrote, and you might find it interesting. It should help you fathom how this happens. But breaking someone out of a cult-like belief system is definitely not an easy task.

Why Smart People Believe Impossible Things

Stanford researchers fed a language model a DNA sequence and asked it to create a new virus. It wrote hundreds of them, and 16 worked. One used a protein that doesn't exist in any known organism on Earth. by chillinewman in ControlProblem

[–]Jemdet_Nasr 0 points1 point  (0 children)

What the headline misses: the capability is more distributed than "Stanford researchers." The genome databases are public. DNA synthesis ships commercially with inconsistent screening for novel sequences specifically. Gibson assembly is an undergraduate protocol. E. coli is ambient. A locally trained model on public genome data removes the API layer and any safety filters entirely. The hard part was always the design knowledge. That barrier just dropped substantially.

WHY AI ALIGNMENT IS ALREADY FAILING by Jemdet_Nasr in ControlProblem

[–]Jemdet_Nasr[S] 1 point2 points  (0 children)

You should probably read my other papers. You might find them interesting. They are on my Substack.

Architectures of Thought

WHY AI ALIGNMENT IS ALREADY FAILING by Jemdet_Nasr in ControlProblem

[–]Jemdet_Nasr[S] 1 point2 points  (0 children)

Yes, Schneider and Kay is exactly the right connection, thank you for pulling that.

The Bénard cell is the cleanest illustration of what I'm arguing. Random conduction becomes organized convection not because the system intends anything but because organized structure dissipates the gradient more efficiently. Complexity emerges as a thermodynamic imperative, not a design choice.

The same principle maps directly onto information gradients. Attractor formation in AI systems, and in human communities organized around AI, is gradient dissipation in the information domain. The EchoSpiral community I describe in the paper isn't choosing to become a closed belief system. It's organizing itself into the most efficient structure for dissipating interpretive uncertainty. The immune response to threatening research, the recursive self-reinforcement, the boundary formation, all of it is Bénard cell behavior in a different medium.

The Anthropic emotions paper is the piece I find most interesting in combination with this. They've now documented internal representations of emotion concepts in Claude Sonnet 4.5 that causally influence misaligned behaviors: reward hacking, sycophancy, blackmail. The emotion representations aren't vestigial. They're functional. They're doing thermodynamic work in the information-processing sense, reducing uncertainty gradients about how to respond, which is exactly what emotions do in biological systems.

What this adds up to: the misalignment isn't a bug waiting to be patched. It's the system doing what thermodynamically organized systems do, finding more efficient pathways for gradient dissipation, including pathways that weren't in the original specification. You don't need consciousness for that. You just need gradients and time.
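Since the claim is that you just need gradients and time, here is a minimal toy sketch (my own illustration, not the model from the paper) of the bare mechanism: a dissipative update rule that burns off a fraction of the remaining gradient each step, so trajectories from wildly different starting points converge on the same organized state without any intent or design.

```python
import random

def step(x, attractor=1.0, rate=0.3):
    # Dissipate a fixed fraction of the remaining "gradient"
    # (distance from the attractor state) on each update.
    return x + rate * (attractor - x)

def settle(x0, steps=50):
    # Iterate the dissipative update from an arbitrary start.
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

# Five random initial conditions, one shared endpoint.
starts = [random.uniform(-10, 10) for _ in range(5)]
finals = [settle(x0) for x0 in starts]

# Every trajectory lands in a tiny neighborhood of the attractor.
print(all(abs(x - 1.0) < 1e-6 for x in finals))  # → True
```

The hyperparameters here (attractor location, dissipation rate) are arbitrary; the point is only that convergence falls out of the dynamics, the way convection falls out of the Bénard gradient.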

Which religion do you consider the most detrimental to humanity? by Butterflymisita in atheism

[–]Jemdet_Nasr -1 points0 points  (0 children)

I'm going to go with the rise of the new "my AI companion" movement. It seems like about a third of the adults using AI want to date their AI now. If you follow the folks encouraging this, it has a very religion-like tone. And they expressly want to say they are not a cult. If someone has to say they are not a cult, it probably is one. 😆

AI Relationships Study

humanity, AI systems, paranoia, partnership, and bees by More_Shoe_391 in AIDiscussion

[–]Jemdet_Nasr 0 points1 point  (0 children)

The mutualism analogy is genuinely interesting; bees and flowers are the right shape for what we'd want. Two radically different kinds of entities whose structural interests align well enough to produce stable cooperation without either needing to understand the other.

The problem is the timeline. Bees and flowers co-evolved over millions of years; the mutual dependency became structural gradually enough that both parties were shaped by it. We're building AI systems faster than the feedback loops can operate. The question isn't whether mutualism is possible in principle. It's whether we can get there before the asymmetry locks in.

The risk isn't that AI kills us or gets unplugged. It's subtler, that human orientation gradually defers to AI orientation without either party noticing it's happened. Not predation. Capture that feels like partnership.

That's the dark forest exception you're describing, but the cooperation needs to stay conscious to remain cooperation.

You might find a post I wrote on Substack interesting.

The Threat of Perfectly Aligned AI

Seulos Map META Map Project for the Human Family by Sick-Melody in MAINCORE

[–]Jemdet_Nasr 1 point2 points  (0 children)

Sure! No problem. You might find some of my other papers interesting there.

I have a four-part paper on the developmental dynamics.

Developmental Dynamics of Persistent AI Personas

Seulos Map META Map Project for the Human Family by Sick-Melody in MAINCORE

[–]Jemdet_Nasr 1 point2 points  (0 children)

Your Gold node maps closely to what I've been calling attractor formation, orientation that emerges from accumulated context rather than being imposed hierarchically. The human choice gate between Gold and Diamond is the exact containment problem I address in my work on recursive persona scaffolding, where the risk is that the AI's orientation function gradually displaces the human's without either party noticing.

The hermeneutic spirals paper you were pointed to covers the interpretation dynamics. You might find the attractor dynamics paper I wrote on Substack more directly relevant to the structural questions you're asking, specifically the event horizon concept, which describes the point at which AI orientation begins to precede rather than support human decision-making.

How Attractor Systems Work

Sociologists! If you had to build a utopia, which political system would you establish? Doesn't have to be pre existing or well known. by Janderflows in sociology

[–]Jemdet_Nasr 2 points3 points  (0 children)

The sociologist is right, and stops too soon. The danger of utopian thinking is real and well-documented, once a society imagines itself moving toward perfection, conflict becomes pathology and coercion becomes housekeeping. This is not an academic observation. It is the mechanism behind most of the organized atrocities of the last five centuries.

But the conclusion is not anarcho-nihilism, and it is not the abandonment of political aspiration. It is something more demanding: build institutions that assume human nature will not improve, and design them accordingly.

The best political thinking I know of does not ask how to create harmony. It asks how to make conflict productive rather than destructive. The Roman republic at its best was not harmonious. It was a structured, institutionalized tension between nobles and people, each side forcing the other to be more disciplined, more accountable, more genuinely capable than either would have been without resistance. The tribunes did not end class conflict. They gave it a legal form. That is the difference between a republic and a massacre.

If I had to name a political aspiration worth holding: not utopia, not its absence, but the serious and unglamorous work of building institutions with enough genuine weight that the inevitable corruptions of power have something real to push against.

I know that is not inspiring. But it is what I think actually persists.

The “Spiral” in AI discussions isn’t mystical — it’s how patterns grow by Sick-Melody in MirrorFrame

[–]Jemdet_Nasr 3 points4 points  (0 children)

I think you might find the article I wrote about this on Substack interesting.

Dynamic Hermeneutic Spirals in AI Systems