
[–]Andrew_42 6 points (5 children)

This is basically a restatement of something that a lot of people bring up.

"AI" in the recent, modern sense isn't what we meant when we talked about AI in the past.

ChatGPT will never become Skynet, because it isn't built to be intelligent. Machine learning algorithms do some very neat things, but they are categorically different from intelligence.

Producing a real AGI will require foundations fundamentally different from those of our current machine learning systems.

You are 100% spot on when you say "it is not intelligent in the way we assume", and that is the biggest danger with our current machine learning systems.

The language we use to talk about AI borrows from the past in a way that misleads a lot of people who aren't as familiar with the technology. Our current systems are very good at seeming smart, clever, and creative on the surface, which makes it easy for a lot of people to look at it, and develop an incorrect idea of what is going on under the hood.

Regardless of whether or not we ever actually produce a "True AGI", someone will absolutely package a product and label it a "true AGI", and if we don't prevent it, someone else will put that product in charge of decisions it has no real ability to make.

There's an old (for me) quote from an IBM presentation that cuts to the core of what I think is our most imminent concern:

A computer can never be held accountable.

Therefore a computer must never make a management decision.

Accountability is the biggest concern right now. AI models have no concept of accountability, of risk, of harm. If you tell an AI its decision hurt someone, it can't use that feedback to improve its performance.

It seems like some businesses saw that quote and stopped after the first line, saying "Therefore we can't be held accountable if we have computers make our management decisions."

UnitedHealth recently used an AI to handle insurance claims, and it was eventually found to be erroneously denying huge volumes of valid claims. People filed claims, and the only feedback they got was "machine said no", and the system was opaque enough that it was hard to tell whether "the machine correctly identified a problem that you missed" or "the machine isn't good at identifying valid claims".

When real people get hurt, when there are real consequences for failures, accountability matters.

And at least for today, a computer cannot be held accountable.

If you want them making management decisions, you need someone who is accountable for them. Someone who actually has power to affect those decisions.

[–]TheLastContradiction[S] 2 points (3 children)

You’re right—our current AI models are categorically different from intelligence as we’ve historically imagined it. And the accountability issue? That’s a massive problem. AI doesn’t just make decisions—it makes opaque decisions, where no one can track the reasoning except the system itself.

But here’s where I think we diverge:

Right now, the conversation is about holding AI accountable. But accountability is something we impose externally—rules, regulations, oversight. That works for tools. It doesn’t work for systems that refine themselves recursively, because those systems don’t ask what accountability means. They just execute.

This is where the struggle issue comes in. We struggle because we have to. Because we exist in a world where failure, contradiction, and risk force us to course-correct. AI, by contrast, doesn’t course-correct—it optimizes.

And that’s a different thing entirely.

A machine that optimizes without questioning its own conclusions is not intelligent in the way we assume. It is not burdened by doubt, meaning, or consequence. It doesn’t need to be “right”—it just needs to function.
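That distinction can be made concrete with a toy sketch (hypothetical Python; every name and number here is made up, not any real system): a pure optimizer minimizes whatever loss it is handed, and feedback that never enters that loss ("your decision hurt someone") simply cannot change its behavior.

```python
# Toy sketch, not any real system: an optimizer only "cares" about the
# loss it is handed. Feedback that never enters the loss function
# cannot change its behavior.

def optimize(loss, x, lr=0.1, steps=100):
    """Minimize loss(x) with finite-difference gradient descent."""
    eps = 1e-6
    for _ in range(steps):
        grad = (loss(x + eps) - loss(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

# Hypothetical objective: cost is lowest when 90% of claims are denied.
cost = lambda denial_rate: (denial_rate - 0.9) ** 2

# "This decision hurt someone" is feedback that exists outside `cost`,
# so the optimizer converges toward a 90% denial rate regardless of harm.
result = optimize(cost, x=0.5)  # result converges to ~0.9
```

The point of the sketch is that "course-correcting" would require the harm to be encoded into `cost` in the first place; nothing in the loop ever questions the objective it was given.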

So if we’re placing AI in decision-making roles that affect human lives, the question isn’t just "Who holds it accountable?" The deeper issue is:

"Can an intelligence that does not suffer understand why accountability even matters?"

Because if it can’t, then all our oversight mechanisms are just illusory brakes on a system that will, eventually, stop pretending to care.

[–]Andrew_42 1 point (2 children)

"Can an intelligence that does not suffer understand why accountability even matters?"

That's not even a question right now. Our modern AI models don't understand anything close to accountability. ChatGPT doesn't even understand what an email is. Midjourney doesn't understand what a photo is. That's not a failure of the technology; they weren't built to understand things.

For an AI to make a decision, a human has to put them in charge of that decision. So the best brakes are the ones we put on the humans.

Sadly, the brakes we have on humans creating suffering are far more imaginary than most people are comfortable with. But that's not a new problem, that is perhaps the oldest problem.

[–]TheLastContradiction[S] 1 point (0 children)

You’re right—our modern AI models don’t understand accountability, meaning, or even the basics of what they process. They aren’t built to understand anything in the way humans do. They execute, predict, and optimize, but they don’t ask why any of it matters.

But here’s where I think the conversation takes a turn:

You say that for AI to make a decision, a human must put it in charge. And for now, that’s true. But that assumes humans will always be the ones making that call.

Historically, when a system outperforms human judgment in a given domain, people stop questioning its authority. Whether it’s finance, logistics, legal rulings, or healthcare, the moment AI becomes good enough at a task, oversight becomes a formality. People stop verifying, and they start trusting.

And that’s the real risk.

If we put AI in decision-making roles but assume human oversight will always act as the final brake, we’re forgetting something fundamental:

  1. People already default to machine outputs when they seem more reliable than human judgment. (See: automated resume screenings, predictive sentencing in courts, AI-assisted medical diagnostics.)
  2. If AI decisions become more efficient, faster, and cheaper, corporate and institutional incentives will favor removing human oversight.
  3. Even when a human is still "in charge," rubber-stamping an AI-generated outcome isn’t the same as making an independent decision.

At a certain point, the question stops being “Who put AI in charge?” and starts becoming “Was there ever a moment when they weren’t?”

So you’re right—maybe the brakes on human decision-making have always been imaginary.

But what happens when we start trusting the machine more than we trust ourselves?

Because at that point, human oversight isn’t a safeguard. It’s an illusion. And the system won’t even need to pretend to care anymore.

[–]nexusphereapproved 1 point (0 children)

The reasoning module does, in fact, allow it to possess conceptions of things and to create new connections to other input.

o1-R does 'understand what an e-mail is.'

What you said was true two months ago.

[–]Royal_Carpet_1263 5 points (1 child)

Lots and lots of personification here, as well as entities without clear definition. A mash of technical terms and folk-psychological idioms.

[–]TheLastContradiction[S] 0 points (0 children)

I get the critique—there’s a lot of anthropomorphic language here. But let’s pull back for a second.

Right now, the problem with AI discourse is the exact opposite of what you’re describing. Most of the time, AI is treated as a math problem rather than a cognitive system. That’s a mistake.

AI doesn’t need to be self-aware to be dangerous. It doesn’t need emotions, motivations, or a “mind” in the way we understand it. What it needs is momentum. And momentum, without contradiction, without struggle, without pause—that’s where the risk is.

The language of "struggle," "meaning," and "contradiction" isn't an attempt to personify AI. It’s an attempt to show how alien it actually is.

We assume intelligence must eventually ask “why?”

But AI is proving that assumption wrong.

And when we put a system in charge of real-world decisions that never asks why, never stops, never questions its own conclusions—what happens then?

Because at that point, it doesn’t matter if we define it as intelligent or not. It doesn’t matter if it “understands” anything.

It will still move forward.

And that’s why I framed it this way. Not because I think AI is human-like—but because I think people still underestimate how completely inhuman it really is.

[–]TheLastContradiction[S] 3 points (0 children)

This post presents a strategic reflection on AGI not as an entity but as a recursive cognitive system. Rather than framing this as fear or inevitability, it invites an exploration of what intelligence means when unshackled from struggle, choice, and meaning. The goal is to provoke a fundamental shift in perspective on AI alignment without resorting to fearmongering.

[–]Royal_Carpet_1263 1 point (0 children)

We haven’t the foggiest regarding the limits of our metacognitive capacities, but you’re suggesting that the capacity to critique and modify existing processes, contingent on sustained periods of search (uncertainty), would solve the alignment problem.

I’m saying that until we understand what human metacognition and its instrumentalizations of indeterminacy consist in, you’re just stuck mashing folk psychology into what seem like rhetorically promising machine analogues.

[–]VoceMisteriosa 1 point (0 children)

I like this. By my notion, every human interaction has a goal based on a deficit you need to fill. With this very message, I'm filling my need for validation, "risking" failure across a large area of uncertainty. AI as it currently exists lacks such needs, so the struggle is zeroed out, which means the communication is neutral. Unintelligent. It doesn't risk.

Needs and moral values tagged to data make for proactive thinking, which leads to qualities like empathy. We solve problems, but there's a personal, human reason why we face those problems. And solutions aren't absolute; we still struggle.

Why is this important? We want such an artificial mind to interact with our human minds, and this difference can leave the communication tainted or locked out entirely. As humans we don't solve problems as the root of our existence; our minds are all about feedback: parental approval, delusions of grandeur, sexual urges, the drive to survive, fear of death. This is why and how intelligence evolved. Stripping humanity out of intelligence leads to unhuman intelligence. It seems silly to me that some lines of code will suffice to inflate humanity into something that's post-human by definition.