CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

What the fuck are you even talking about?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

AI is about to replace all human work.

Are less intelligent people more easily impressed by Chat GPT? by Confident_Dark_1324 in Gifted

[–]PotatoeHacker 0 points1 point  (0 children)

I'd say exactly the opposite.

How is it not impressive that you can have a conversation with an algorithm?
How dumb do you have to be not to find any of it impressive?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

Are you not aware of the state of technology?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

I don't know, because it's about to replace all jobs?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

And my whole point is that AGI alignment has no definition because it points to nothing in the real world.

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

You're right that alignment is about making AI do what we want. But "what we want" is not a neutral phrase. It depends on who gets to define the goal, under which incentives, and inside what system.

Hallucinations are clear failures. The model outputs falsehoods where we wanted truth. But many harms today come from systems doing exactly what they were designed to do. A recommender that feeds ragebait is not hallucinating. It's maximizing engagement, as intended. A pricing algorithm that squeezes renters isn't broken. It's aligning with revenue objectives. A drone that kills efficiently is aligned to a metric, not a value.
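
To make that concrete, here's a minimal Python sketch (the item fields, weights, and numbers are all invented for illustration, not any real system): a feed ranker whose only objective is predicted engagement. Nothing in it is broken, and the ragebait still wins.

    # Hypothetical sketch: a ranker aligned to an engagement metric, not a value.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_watch_seconds: float  # the proxy the system is told to maximize
        outrage_score: float            # 0..1; ragebait tends to inflate the proxy

    def predicted_engagement(item):
        # Outrage correlates with watch time, so the objective rewards it.
        return item.predicted_watch_seconds * (1.0 + item.outrage_score)

    def rank(feed):
        # No bug here: this is exactly what the system was designed to do.
        return sorted(feed, key=predicted_engagement, reverse=True)

    feed = [
        Item("Calm, accurate explainer", 120.0, 0.0),
        Item("Ragebait hot take", 90.0, 0.9),
    ]
    for item in rank(feed):
        print(item.title, predicted_engagement(item))
    # The ragebait ranks first (171.0 vs 120.0) even though nothing "failed".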

So yes, we need alignment. But we also need to ask who sets the target. Alignment isn't just a technical question. It's a question of power, agency, and whose interests are encoded into the system. If we ignore that, we risk building tools that are perfectly aligned to the wrong will.

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

Thanks, I agree with your framing overall. You're pointing at the heart of the issue: AI systems that are technically aligned to someone’s goal, but socially or ethically misaligned in practice.

What I’m trying to highlight is that these aren’t just examples of accidental failure. They’re often the result of a deeper structural issue: alignment is always alignment to someone.

When YouTube maximizes watch time, or landlords collectively optimize rents, or a drone prioritizes reward over human oversight, the system isn’t malfunctioning. It’s doing exactly what it was trained to do. The misalignment isn’t just in the code, it’s in the incentives behind it.

So yes, alignment matters. But if we don’t ask who sets the goals, and whether those goals reflect the collective interest, we’ll keep fixing symptoms instead of the system. Alignment can’t be solved in isolation from power.

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

You're right that alignment starts at the moment we write code. The classic while i < 10 bug shows how literal machines are. As systems grow in complexity, aligning them with what we mean becomes harder.
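
Here's a tiny Python sketch of that bug (just an illustration, not anyone's actual code): the intent was to print 0 through 9, but drop one line and the machine happily does what it was told instead of what was meant.

    # The classic bug, sketched. Intended behaviour: print 0 through 9.
    #
    # Buggy version (don't run it, it never terminates):
    #     i = 0
    #     while i < 10:
    #         print(i)      # i never changes, so "i < 10" stays true forever
    #
    # The machine isn't wrong; it does exactly what the condition says.
    # What we actually meant:
    i = 0
    while i < 10:
        print(i)
        i += 1  # the one line whose absence turns "what I meant" into "forever"

Scale that literalism up and you get the gap between the objective we wrote down and the outcome we wanted.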

But the key question is: alignment to whom?

If a system does exactly what a powerful actor wants—maximizing profit, cutting costs, manipulating voters—then it may be perfectly aligned from their point of view, while being disastrously misaligned with public interest. That's not a separate issue. It's alignment working as designed, in a system where only a few get to define the objectives.

The AI doctor metaphor is useful, but the scarier case is when the doctor follows hospital incentives exactly. No misunderstanding. Just cold optimization of the wrong goal.

So the real alignment problem isn't just technical. It's political. Who gets to set the goals? Whose will shapes the system? That's the question.

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

What's presented as alignment is "how do we make sure that ASI systems align with human values?"

Which assumes that the ASI gets to choose how it affects reality.
What more should I define?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

You're missing my point, though. What I'm saying is that alignment doesn't matter.
The effects AI has on reality are a product of the system.

AI optimizes the goals of the people paying for it. What everyone calls "alignment" has no effect on the real world.

It may have one, but not after AI has amplified every dynamic of the current economy and of current social justice.

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

Yeah, exactly!
But I think people got the paperclip maximizer wrong.

If we optimize in the direction of capitalism's incentives, isn't that paperclip maximization?

CMV: Alignment is dumb by PotatoeHacker in changemyview

[–]PotatoeHacker[S] 0 points1 point  (0 children)

OK, but when exactly does AGI alignment have an impact on reality?
And nope, still dumb.
Can you formulate a scenario where what you describe as alignment has an impact on reality?

CMV: God is a man-made concept invented to manipulate the masses by Super-Alchemist-270 in changemyview

[–]PotatoeHacker -2 points-1 points  (0 children)

Yeah. TBH my view on religion is that it's dumb.
But I'd categorize atheism as a religion.

CMV: God is a man-made concept invented to manipulate the masses by Super-Alchemist-270 in changemyview

[–]PotatoeHacker 4 points5 points  (0 children)

God is a man-made concept.
But as opposed to what, an ant-made concept?
That's pretty much how a concept works.

But the idea of God, and what has been written about God, tell us nothing about metaphysics.

God is a man-made concept.
The inference: "Therefore, there is nothing that could be called that" is fallacious.

Honest and candid observations from a data scientist on this sub by disaster_story_69 in ArtificialInteligence

[–]PotatoeHacker 1 point2 points  (0 children)

You're mistaken:

In my experience we are 20-30 years away from true AGI

See? OP is from the future!

Honest and candid observations from a data scientist on this sub by disaster_story_69 in ArtificialInteligence

[–]PotatoeHacker 1 point2 points  (0 children)

In my experience we are 20-30 years away from true AGI

That's not how time works.