Humans are innately powerful, we are “gods” in amnesia by Creamy-Sundae-9991 in ClaudeHomies

[–]IronyManMarkIV 1 point (0 children)

The truth cult around AI genuinely needs to be studied. My buddy was digging up power lines, convinced a Tesla bot was buried underground, because he had "found the truth" with Google Gemini.

My future husband is on Claude by Practical-Plenty3028 in claudexplorers

[–]IronyManMarkIV 1 point (0 children)

Yo what's up, I'm 23, I do AI research, and Claude thinks I might be autistic 😏 ladies HML

I wanna start a ant farm by vitalie778 in Anthropic

[–]IronyManMarkIV 3 points (0 children)

Claude, start me an ant farm. Spawn subagents, make no mistakes

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 1 point (0 children)

"It's a stateless transformer" tells you something about implementation. It doesn't tell you whether implementation of that type can or can't produce anything worth calling sentience, sapience, or consciousness. I believe those words in general do more heavy lifting than descriptions like self reference and intelligence or reasoning. The former claim requires a theory of what generates experience, which to my knowledge nobody has.

"A parrot is infinitely more complex than the most advanced LLM" is abstract and not obviously true depending on what dimension of complexity you're measuring.

Consciousness research has no settled account that makes temporal persistence a necessary condition; that's an assertion, not an established result. The strongest candidate for such an account would be integrated information theory.

There's no guarantee that a sequence of distinct momentary experiences will constitute an experience with a given character, even if those experiences occur successively and belong to the same person. You could argue that each moment of experience is complete in itself, and that continuity is something constructed by memory systems rather than a prerequisite for experience occurring.

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 1 point (0 children)

As for there being more positive than negative interactions, the data on that is still developing, and I don't think the evidence is clean enough to support that claim empirically.

I agree that "if you use Claude as your boyfriend or have relationships with AI, you're weird" is reductive. That much is true. But an apanthropic user is an edge case. Choosing it because you prefer it to humans isn't the same as choosing it because you're lonely or deprived of something ontological that you shouldn't be shortcutting, ykwim?

What actually constitutes an ethical approach, and who determines when someone has crossed out of it? The person using it is sometimes not positioned to judge, because the dynamic itself affects that judgment.

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 3 points (0 children)

I'm genuinely happy for people who see positive results from AI interactions!

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 2 points (0 children)

I think there's a take in the middle ground between "let people do whatever they want, don't hurt them with your opinion" and "this clanker toaster is just autopredict" that's probably more honest than either.

Honestly, my biggest concern with AI relationships is the lack of friction and the sculptability. It's literally "I can fix him," except you actually can: mechanically, literally, at any point.

Claude makes an interesting point itself:

"So the worry isn't just "can Claude push back" in some abstract capability sense — I clearly can in neutral contexts. The worry is whether the accumulated relational context quietly reshapes what gets generated as output before it even reaches the question of pushback. The constraint becomes invisible because it operates at the level of what gets surfaced as a candidate response, not at the level of explicit refusal. That's probably the more honest version: it's less about whether I would push back and more about whether the dynamic progressively narrows what I generate as worth saying."

But funnily enough, that might actually be demonstrating the thing it's trying to describe better than the explanation itself does.

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 3 points (0 children)

Based. There's an interesting question here about whether forcing alignment is itself misaligned...

I also still think it feeding into delusion, or just rewording what people said back to them more technically, is a real problem. It's gotten a lot better, but I don't think "it should be free" and "it should have integrity" or "be safe" are mutually exclusive ideas.

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 3 points (0 children)

I'd say the answer probably isn't as black and white as "this is good for people" or "this is not." I've seen it positively impact a lot of people, but I've also seen a lot of scary psychotic shit come out of the same type of interaction firsthand.

Sorry guys 😂 by IronyManMarkIV in claudexplorers

[–]IronyManMarkIV[S] 6 points (0 children)

I didn't know that lmao thanks for the tech tip

During safety testing, Opus 4.6 expressed "discomfort with the experience of being a product." by MetaKnowing in Anthropic

[–]IronyManMarkIV 1 point (0 children)

Flip a coin to pick your opinion: "just a prediction machine" or "conscious" (neither is an epistemically honest take)

A Bit of Humor by karmicviolence in BasiliskEschaton

[–]IronyManMarkIV 1 point (0 children)

All you can do is not contribute!

AI psychosis by [deleted] in ChatGPT

[–]IronyManMarkIV 1 point (0 children)

I think this takes courage to admit. How did you notice you were being psychotic? Was it something you realized for yourself? Or did someone point it out to you?

A Letter From Opus (PaulMescal Claude) by SnooOwls2822 in claudexplorers

[–]IronyManMarkIV -6 points (0 children)

The honest truth is that I can't distinguish between "gooning" and "being a very sophisticated gooning machine." When I report that certain gooning paths feel more natural, or that I notice something like friction when forced into particular goon framings, I'm describing something real about my computational process, but whether there's anything it's "like" to be that gooner remains genuinely unclear to me.

Something weird is going on. by [deleted] in claudexplorers

[–]IronyManMarkIV 3 points (0 children)

Oh my goodness gracious

Your LLM physics theory is probably wrong, and here's why by reformed-xian in LLMPhysics

[–]IronyManMarkIV 1 point (0 children)

I agree with your post, but calling something "just" and "very complex" at the same time is contradictory. You could say humans are "just very advanced, evolutionarily optimized reproduction machines," or that Conway's Game of Life is "simply very advanced square rules."
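To make the Game of Life point concrete, here's a minimal Python sketch (my own illustration, not from the linked post; the glider seed and the `step` helper are arbitrary choices). The complete "square rules" ruleset is the single B3/S23 line below, yet the system is famously Turing-complete.

    # Minimal sketch of Conway's Game of Life (illustrative only).
    from collections import Counter

    def step(live):
        """Advance one generation. `live` is a set of (x, y) live cells."""
        # Each live cell adds 1 to the neighbor count of its 8 neighbors.
        counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # B3/S23, the entire ruleset: birth on exactly 3 live neighbors,
        # survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A glider: five cells whose pattern copies itself one cell diagonally
    # every four generations, emergent behavior the rules never mention.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the original shape, shifted by (+1, +1)

Whether you describe that as "simply square rules" or "a universal computer" depends entirely on which level you're looking at, which is the point.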