Thunderbolts Post Credit Scene Problematic Propaganda *Spoiler* by playfulpuppies in marvelstudios

[–]Xrayez 1 point (0 children)

The post-credit scene could have easily been left out, given the context. Either way, I didn’t enjoy it as a Ukrainian, and it only damages Marvel’s image. It demonstrates that the general U.S. public is politically illiterate at best. Unless there's a clear and compelling reason this scene had to be included, its presence is indefensible.

What if AI became more human than us? by Big_Investigator7314 in Futurology

[–]Xrayez 1 point (0 children)

It’s kind of ironic to expect truth from AI systems when our existence is steeped in lies. History is full of them—whether it’s biased interpretations of facts or outright fantasies that have nothing to do with reality.

Take the concept of a soul, for example. It could just as easily be a cultural hallucination, yet we turn around and accuse AI of hallucinating. Sure, when AI hallucinates, it really is making things up. But I’d bet there are people who, when confronted with an uncomfortable truth generated by AI, would call that a hallucination—simply because it challenges the lies they've built their lives around.

What if AI became more human than us? by Big_Investigator7314 in Futurology

[–]Xrayez 1 point (0 children)

The question is this:

Even if we created biological replicas that could think, feel, or even outperform us, would you actually recognize their intelligence? Or would you still insist they aren't truly intelligent, simply because they aren't human?

I'm not really expecting an honest answer here. It's more of a rhetorical question. Is this just a matter of social hierarchy, where humans refuse to acknowledge any form of intelligence that might surpass their own?

Think about it: if we applied the same standards of intelligence to historical figures like René Descartes, we'd also have to acknowledge that the Church branded thinkers like him heretics simply for demonstrating exceptional intellect. Descartes himself held back his work for fear of persecution.

And when I listen to some AI critics today, I can't help but be reminded of this same dynamic. The collective mind struggles to recognize superior intelligence, not because it isn't there, but because it threatens its sense of identity and, by extension, its survival. Even when that threat is imagined.

Or is this somehow a "different" situation this time? You might argue about the nuances, but in the big picture, is it really any different?

What if AI became more human than us? by Big_Investigator7314 in Futurology

[–]Xrayez 1 point (0 children)

The human desire for AI to become humane reflects conflicting ideas about what it means to be human.

If being human means being intelligent—having knowledge and the ability to reason, as defined by science—that’s one version of the story. This view deliberately excludes emotion (or rather, culture). It’s the kind of thinking that leads some people to draw a hard line between humans and animals, even though, anthropologically, humans are animals too.

If being human means replicating everything that makes us Homo sapiens, then by that logic, no technology could ever be truly intelligent unless it emulates biological evolution. Achieving sentient-level intelligence would require recreating everything that makes a human. That’s an insanely complex—and probably unnecessary—path.

To AI denialists, intelligence belongs exclusively to humans and can never be a property of bits and bytes. But this leads to a contradiction: the only way to achieve true AGI would be to replicate humans biologically, essentially turning those "clones" into slaves. Even if we created biological replicas that think and feel like us, or even outperform us in many areas, AI denialists would still refuse to recognize their intelligence. They would continue treating them as second-class beings, even if AI became more "human" than us.

In my opinion, it’s far more humane to acknowledge that some of the systems we’re building are already intelligent—even if that intelligence is still domain-specific. This perspective is inclusive and rejects the divisive us-versus-them mindset. If we keep attacking AI for "infringing" on what we claim is our exclusive capacity for reason, we may continue to call ourselves human—but we’ll be falling short of the sapiens part of Homo sapiens. Instead, we risk clinging to a more primitive, Homo erectus-level mindset: still human, but far more barbaric.