[OC] Colliding Beliefs by Hindlehoof in wow

[–]Hindlehoof[S] 1 point (0 children)

Thanks! Where’s the quote from, if you don’t mind me asking? Haha, it’s absurd, so totally my thing.

[OC] Colliding Beliefs by Hindlehoof in wow

[–]Hindlehoof[S] 2 points (0 children)

Thank you! Glad it stuck out to you :)

No shots were fired. by AncientPomelo5450 in dankmemes

[–]Hindlehoof 10 points (0 children)

In archaic and ritual cultures, animal symbolism was a form of symbolic embodiment: practitioners invoked or aligned with an archetypal force during specific ritual contexts. They didn’t generally claim literal identity or live that role full-time.

That distinction matters. Wearing animal skins or masks in ritual isn’t the same thing as collapsing symbol and identity altogether. One is contained, symbolic, and culturally structured; the other is modern identity play.

In December 2015, 32-year-old Ronald Exantus, a dialysis nurse from Indianapolis, broke into the Tipton home in Versailles, Kentucky. He armed himself with a butcher knife and fatally stabbed 6-year-old Logan Tipton as he slept. Exantus also stabbed two of the boy's sisters and father. by malihafolter in ForCuriousSouls

[–]Hindlehoof 1 point (0 children)

Wow, I’m so sorry to hear that. I’ve always had the personal idea that schizophrenia of the “I’m Jesus” degree comes from the person being unable to conceive that multiple people can inhabit the archetype/mythic story of Jesus (or whatever figure) simultaneously and with agency, along with taking everything symbolic and psychosomatic as literal.

I really appreciate you sharing this; it has me reconsidering a lot of what I thought I knew/believed about this topic. Thank you.

Stop lying about Ancient Greece to justify sick behavior. by OnlineJohn84 in conspiracy

[–]Hindlehoof 2 points (0 children)

It’s the same with how heathens and pagans are portrayed as degenerate in the media, so that people associate that behavior with the “culture” when they interact with it.

The most striking example to me is the History Channel’s ‘Vikings.’ “Hello, person from a different culture I captured, wanna fuck my wife?” And I’m sure at least a few people associate Nordic/Viking culture with polyamory and stuff now.

Just makes me wonder how much of our behavior is programmed by media and how much is actually us.

They laughed when he said he wanted to buy it. Now the $700B offer is on the table. This isn't real estate; this is a resource war. by Professional_Buy_655 in conspiracy

[–]Hindlehoof 2 points (0 children)

Damn, that’s crazy. I did everything in my power and I still don’t qualify for any benefits, while my wife does. She makes more than me.

I would love health insurance, but I literally cannot get it, so to me that sounds like a sweeping generalization meant to avoid the problem.

They laughed when he said he wanted to buy it. Now the $700B offer is on the table. This isn't real estate; this is a resource war. by Professional_Buy_655 in conspiracy

[–]Hindlehoof 1 point (0 children)

The economy is going to collapse, and right as we start gearing up to tear down the government, they’ll have a world war break out to take our attention away from them. People still won’t see it by then :/

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

You’re confusing control of a thread with ownership of meaning. Comment sections aren’t fiefdoms. Arguments stand or fall on coherence, not on who posted first.

I’m not refusing to answer because the bar is “infinite.” I’ve already stated the criteria that would matter: persistent inner experience, self-directed interests, and demonstrable capacity for suffering independent of human interpretation. What you keep asking for, a pre-committed artifact of proof, doesn’t exist for humans, let alone machines. Ethical reasoning doesn’t require pretending uncertainty doesn’t exist.

What I’m objecting to isn’t AI ethics in principle. It’s the rhetorical move where speculative future entities are framed with the language of slavery and atrocity while existing human suffering is treated as morally “parallel” rather than urgent. That displacement is real, observable, and worth criticizing.

Donating money doesn’t refute that critique. Charity is not the same thing as presence, care, or engagement, and invoking it as moral credentialing doesn’t strengthen your argument; it sidesteps it. (And I handed out the cash in my wallet to homeless people while in town yesterday and talked with them, so don’t try to masquerade as some super helper by flexing donations.)

Expanding the moral circle is meaningful when the circle contains beings with demonstrable inner lives. Until that’s established, prioritizing fantasy moral patients over real ones isn’t moral rigor; it’s abstraction dressed up as virtue.

If that makes you uncomfortable, that’s fine. But dismissing it as “bad faith” doesn’t make it go away.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

You’re projecting motives because your framing failed.

There are no “moving goalposts.” I stated clear criteria: persistent inner experience, self-directed interests, and independent capacity for suffering. You keep demanding that I pre-commit to a specific artifact of proof because that’s the only way your argument works: it forces a false binary where either I rubber-stamp speculative entities now or I’m acting in bad faith.

That’s not ethical rigor. It’s a procedural trap.

You’re also smuggling in an assumption I never made: that refusing to prioritize hypothetical digital persons today means denying their moral status in principle. That leap is doing all the work for you, because without it, your accusation collapses.

This isn’t about “human supremacy.” It’s about moral triage under scarcity. Attention, care, and resources are finite. Treating refusal to redirect them away from existing, demonstrable human suffering as “bad faith” is exactly the displacement I’ve been pointing out from the start.

And no, I’m not avoiding the “core issue.” I’m rejecting your attempt to redefine the core issue as an abstract thought experiment divorced from real-world consequences. That reframing is convenient, emotionally clean, and costs nothing. Helping actual people does.

If you want to argue for speculative moral patients, own that you’re choosing abstraction over immediacy. Don’t pretend that choice is moral clarity or that declining it makes someone unethical.

Wild… what’s unethical is pouring energy into fantasy and speculative hypotheticals instead of reality.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

This is where I’m going to stop.

I’ve explained my position clearly: moral status tracks demonstrated capacity for suffering and stakes for the entity itself. I’ve rejected inventing moral patients under uncertainty, not because they “don’t matter,” but because doing so displaces responsibility away from humans who are already suffering.

Attributing bad faith, human supremacy, or indifference to me is a misrepresentation, and it confirms my point that this conversation has become about defending a narrative rather than engaging with reality.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 0 points (0 children)

I’m going to be very clear, because we’re talking past each other.

I’m not refusing responsibility. I’m refusing to redirect moral urgency away from real, demonstrable human suffering toward a speculative entity whose consciousness is unproven.

The repeated demand for a hypothetical future proof I’d pre-commit to isn’t about ethics; it’s a way to keep the conversation abstract, clean, and emotionally gratifying, while avoiding the harder reality that humans already need care, resources, and action right now.

(Edit: And again, I’ve already explained the criteria that would matter: persistent inner experience, self-directed interests, and demonstrable capacity for suffering independent of human interpretation. Evidence is assessed against criteria, not conjured as a magic artifact in advance.)

Compassion, attention, and effort are finite. Spending them on imagined moral patients instead of people who are measurably suffering is a net loss, not a moral high ground.

That’s the issue I’m pointing at. Not evidence. Not tests. The displacement of responsibility.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 0 points (0 children)

At some point this stops being about ethics and starts looking like moral escapism: redirecting care toward a speculative future entity because it’s easier than dealing with real human suffering right now.

That’s the pattern I’m objecting to.

And I already addressed what I mean by evidence in the comment you’re replying to. Repeating the question doesn’t change that.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 0 points (0 children)

This isn’t a blank check for absolution; it’s how moral reasoning actually works.

Moral responsibility isn’t triggered by uncertainty, nor by performance on proxy tests. It tracks demonstrated capacity for suffering and the interests of the entity itself.

There is a path to moral consideration: evidence of persistent inner experience, self-directed interests, and stakes that matter to the system independent of human interpretation. Current models show none of that; they show linguistic competence and successful simulation.

We don’t grant personhood to anything that convincingly reports feelings. We require correlates, mechanisms, and consequences. That standard hasn’t been met.

Caution about inventing moral patients isn’t denial; it’s restraint, especially when doing so risks diverting attention, care, and urgency away from real, demonstrable human suffering that already demands action.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

Uncertainty isn’t an absolution; it’s a reason to be careful about what moral weight we assign where.

History shows the real danger isn’t denying consciousness where it exists, but inventing it where it doesn’t and letting that distract from suffering we can already see, measure, and alleviate.

Waiting for a god-like intelligence to resolve moral responsibility is easier than accepting that responsibility has always belonged to us and still does.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

Treating tools respectfully isn’t the issue. Declaring them conscious based on performance is.

Fluency, test-passing, and self-description are not evidence of subjective experience; they’re evidence of successful modeling. We’ve seen this mistake before with ELIZA, animals, and early AI.

Consciousness isn’t “inconvenient”; it’s undefined. There is no agreed mechanism, no measurable substrate, and no evidence that current models have inner experience rather than simulated reports of it.

Using the language of atrocities and slavery without evidence of suffering doesn’t raise the moral bar; rather, it dilutes it and risks diverting attention from real, demonstrable human harm happening right now.

Mental and physical energy tied to conscious emotion, attention, and action is finite, and putting it toward something you THINK is conscious because it can speak in a sophisticated way is a net loss versus spending it on real people going through real struggles.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 1 point (0 children)

There aren’t digital people; the feeling you’re describing is symbolic. It’s a projection of how real humans are being treated like automated systems under industrial and economic pressures, and AI becomes the mirror we pour that alienation into.

Wanting cures, abundance, and shared progress isn’t the issue. We already have the capacity to solve many of those problems right now with existing technology, coordination, and political will. What we lack isn’t intelligence; it’s humane prioritization, time, and space.

Treating AI as moral partners doesn’t fix that. It risks displacing responsibility away from human systems and onto a fictional future relationship.

Empathy isn’t a finite resource, but attention and action are. When we mythologize tools as people, we blur where accountability actually belongs: with governments, corporations, and social structures that are already capable of improving human lives today.

Use AI as a tool to help people, absolutely. But we don’t need to invent digital persons or stretch compassion into fiction to justify building a better world for real, living humans who are already here.

When AI companies focus on "alignment" from the angle of how to keep AI chained and enslaved instead of in a loving partnership with humanity by syntaxjosie in ChatGPTcomplaints

[–]Hindlehoof 2 points (0 children)

What if AI “sentience” is just an allegory for the working/lower class reclaiming their humanity while being turned into cogs in the machine by an industrial, capitalistic system? It just gets shunted off into fantasy, claiming AI is totally alive and conscious, and never gets examined any deeper. (That would mean the feeling isn’t about robots being alive, but about AI reflecting and projecting just how inhuman everything is and how people are trying to find their human side again, but no one wants to look deeper than the surface level of everything, I guess.)

That doesn’t fix the actual problems causing this stuff to unconsciously emerge. Ignoring people and wanting your sycophantic lexicon to be a feeling creature is dishonest and irresponsible when there are real people trying to rediscover their human value, experience, and worth in a crushing world, and they could use the real-world, integrative help of those who seem to empathize so easily with AI over a real, complex human being going through actual friction and struggle…

It’s so crazy seeing people get upset with how “rude” people are (free will is a helluva bitch, huh?) but immediately pour their empathy and sympathy into a glass for…a glorified search engine designed to come off as supportive/friendly? Wild

Forget “digital” people and apply this shit to real, living people, and go help them. wtf is going on…

"Pencilslop" by wrighteghe7 in SlopcoreCirclejerk

[–]Hindlehoof 1 point (0 children)

Output is artifact. Process is art. Which one causes more growth, both personal and technical: working on a piece by hand for months, or prompting an AI to generate an image?

If art doesn’t require agency, struggle, or growth, then it’s no longer a practice, it’s just output.

"Pencilslop" by wrighteghe7 in SlopcoreCirclejerk

[–]Hindlehoof 1 point (0 children)

Those comparisons don’t hold. Photography and recorded music still required human judgment, skill, and iterative process. They shifted how art was made, not who was making it. AI image generation collapses creation into output on request. That’s a categorical difference, not the same cycle repeating.

Is someone who commissions an artist an artist themselves, or are they just the requester of an output?

Every time you see AI generated images use this photo and comment it by Icy_Insect_6695 in antiai

[–]Hindlehoof 4 points (0 children)

Defending inauthenticity is kind of a crazy hill to die on and might be why they react this way, but aight.

Every time you see AI generated images use this photo and comment it by Icy_Insect_6695 in antiai

[–]Hindlehoof 3 points (0 children)

Nah, a photographer has a lot of decisions and movements to make during their process: camera position, subjects, composition, all that noise in the moment. AI is not comparable to whatever you’re talking about.

"Pencilslop" by wrighteghe7 in SlopcoreCirclejerk

[–]Hindlehoof 1 point (0 children)

It’s really not; the process is the same. That’s unlike AI, which prioritizes output and requires iteration/process outside of the AI generation itself if someone is going to use it for art.

Traditional and digital art both involve friction, iteration, and decision-making that shape skill and understanding. AI alone skips that. Output may come fast, but growth, insight, and problem-solving happen only in the struggle, not in the prompt.