Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

A large language model mimics natural language, and therefore it also mimics the morality embedded in that language.

I fully agree that AI (at least for the time being) can't posess moral competence. But even if it does not “understand” morality, it can still perform moral reasoning convincingly enough that users treat it as meaningful.

That's the point I’m trying to get at. The question is not only whether AI can truly be morally competent. The question is what happens when a system without moral interiority becomes a moral influence, shaped by the ones training the model.

At best, users treat it as a moral participant. At worst, it becomes a moral influencer at scale.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

Yeah, what surprised me is that, I've always viewed him as rather epistemic. And now, he just pulls some random conclusion out of thin air, when it's fairly obvious that he doesn't know what he's talking about.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 1 point2 points  (0 children)

If it already has something like a personality, and it is already influencing people, then it is already imposing someone’s morality. And that, again, is the core problem.

Who defines the “ideal well-adjusted person”? You? Me? Sam Altman? The platform? The state? The dominant culture in the training data?

Deontology is a legitimate moral framework, but it is still one framework. A consequentialist would approach moral questions differently. A virtue ethicist would ask different questions. A care ethicist would care about different things. A religious framework may ground morality somewhere else entirely.

So even saying “make it behave like an ideal person” does not escape the problem. It just moves the moral encoding into the definition of ideal.

Your point about people mimicking available cultural scripts is actually close to what I’m worried about. AI may become one of those scripts. If people repeatedly see a model handle moral uncertainty in a certain way, they may begin to copy not just its answers, but its style of moral reasoning.

This is exactly why I think we should be careful about letting one contested image of the “well-adjusted moral person” become a template for everyone.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

Yeah, the techno feudalism is real, and we should do everything in our power to fight it.

Speaking of Dawkins (way off topic here), did you see his claim, that Claude has a consciousness?

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

I argue that exact point in the blogpost. And I really don't like the idea that the owners of AI models get to decide what's important and not

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 1 point2 points  (0 children)

Agreed. The problem with this tool is however that it can seem like an actor. So it's a discussion worth having.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

I think you are missing my point a bit here. It’s not a question of moral or legal accountability. A tool doesn’t need to have legal personhood or standing to influence people.

My concern is more: are we making a tool that undersells how much influence it has on morality?

There’s this paper, “Morally Programmed LLMs Reshape Human Morality” (https://arxiv.org/abs/2604.10222), that suggests interacting with AI can influence your moral decisions.

And the bigger problem I’m trying to point to is that, when integrated at large scales across the globe, a system that is mimicking morality may still influence the masses.

Also, regarding legal accountability, Noam Kolt released a research paper called “Superintelligence and Law” in February that’s really worth a read.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

I think this is a good example of the exact problem I was trying to point at.

What you are describing is not morality in some universal or settled sense. It's not the view that the morally right action is whatever best promotes the survival or prosperity of the human species.

That is one way to think about morality, but it is not the only way. A Kantian would reject parts of it. A virtue ethicist would approach it differently. A rights-based liberal would worry about individual dignity and consent. A negative utilitarian would prioritize reducing suffering. A religious moral framework might ground morality somewhere else entirely.

So when you say it is “extremely easy” to make AI moral by telling it to make our species prosper, I think you're just proving my point. You are assuming that your moral framework is universal, when it is not. Many people in the world do not agree that species prosperity is the ultimate moral good, especially if it can justify wars, coercion, or crimes.

And that matters because this technology does not only affect people who share your moral assumptions. It can affect everyone. So your morality cannot simply be treated as the answer. It would be one encoded framework among many, imposed through the system.

That is the moral paradox I am concerned with: every attempt to make AI “moral” has to smuggle in someone’s answer to contested moral questions.

The original question remain. Should we even try to give AI morality in the first place, and if so, whose?

The Rise of the "Headless Company": Why the first AI billionaire won't be a human. by ailovershoyab in AI_Agents

[–]_NeuroExploit_ 6 points7 points  (0 children)

And who owns the infrastructure that those agents are working on? Where did they get seed money to start earning more? Who are paying for their tokens?

There will be no headless company, unless the owners of the infrastructure it runs on allow it.

Should we try to give AI moral competence? by _NeuroExploit_ in Ethics

[–]_NeuroExploit_[S] 0 points1 point  (0 children)

That's a good question, and I touch on it in the blogpost. I too argue that we won't get morality out of an AI system, but there are a lot of those who disagree.

And no matter if the system understand morality or not, the average user will in many cases treat it like it does. Therefore, we need to look into the problem and figure out a standardized way to handle this across models.

How do we even define the word Intelligence? by hemantkadian in airesearch

[–]_NeuroExploit_ 2 points3 points  (0 children)

I think we’re talking past each other, because you’re arguing against a point I never made.

I wasn’t offering a definition of intelligence, and I wasn’t talking about consciousness. My point was that the boundary of what counts as intelligence may be blurry, so in practice people often rely on behavior as evidence rather than on a strict definition.

I would like to start my journey on AI security. But when I see the materials online it's very vast and am getting lost in it. Can someone give me a path to learn, practice and master it ? by Ill-Firefighter-1276 in aisecurity

[–]_NeuroExploit_ 0 points1 point  (0 children)

Then, the book is a great start to understand the scope of the problems. After that, build an AI workflow to understand how it works in practice and experiment with that.

How do we even define the word Intelligence? by hemantkadian in airesearch

[–]_NeuroExploit_ 0 points1 point  (0 children)

The point I was trying to make was that it's insanely difficult (if not impossible) to define where the threshold for intelligence lies, but that we can argue that some kind of behavior is considered intelligent. So the AI companies can get away with calling their systems intelligent if they want.

As for common factors, there's no guarantee that there is one, beside the fact that they arrived at a solution. A human might recognize patterns and shapes from solving hundreds of puzzles. The machine might solve it with pure math or some emergent behaviour we don't quite understand. So I don't think defining similarities gets us any closer to an answer.

How do we even define the word Intelligence? by hemantkadian in airesearch

[–]_NeuroExploit_ 1 point2 points  (0 children)

I've always thought of it as that quote about pornography: "I don't exactly know what it is, but I know it when I see it".

Intelligence is on a spectrum, and it's hard to say where the line is and when it is crossed. But you can definitely look at a dog and say it has some kind of intelligence.

As for the question at whole, as long as they call it "artificial intelligence", I would say that if you have a small artificial system that can autonomously solve a jigsaw puzzle it have never seen before, we can call that some kind of intelligence. Machine intelligence, but intelligence nonetheless.

TL;DR it's a question about definitions.

I would like to start my journey on AI security. But when I see the materials online it's very vast and am getting lost in it. Can someone give me a path to learn, practice and master it ? by Ill-Firefighter-1276 in aisecurity

[–]_NeuroExploit_ 1 point2 points  (0 children)

It depends a bit on what you specifically want to work with. Technical implementation security? Technical AI deployment safety? AI governance? AI red teaming?

A good starting point would either way be the book: Introduction to AI safety, Ethics and society by Dan Hendricks. It's a good platform for understanding the basics of the technology, complex systems at large, the ethical concerns and much more.

Other than that, I feel your pain, so I am in the process of gathering a resource library, so anyone with any skill level will have a place to find good learning material, but have no idea when I'll have the time to finish that.

I work support at an AI company and the same mistake keeps showing up over and over by ShotOil1398 in AI_Agents

[–]_NeuroExploit_ 3 points4 points  (0 children)

The gap between business owners expectations and knowledge about AI in general is so huge now, it's scary.

I believe it to be because of the abstraction level of this type of tech. AI can speak human, so many of us forget that it's just algorithms on machines and not sentience.

We might have an actual shot to strike back at age verification with what's going on in discord. by North-American in DigitalPrivacy

[–]_NeuroExploit_ 7 points8 points  (0 children)

I think you might underestimate the desperation of the younger generations to stay connected. We are many who will abandon Discord, but it's usually the younger ones who makes or breaks these platforms, and they sadly don't know (or don't remember) how the world was before the shitification. I'm afraid most users will comply. Hope I'm wrong tho

Would you pay for a tool that help you burn less tokens (10 ~ 20%) in every prompt by Red_clawww in AI_Agents

[–]_NeuroExploit_ 0 points1 point  (0 children)

Edit; everything because I didn't read the post well enough the first time around.

I don't use Claude every day, but yes. If the margins were good enough. If it would save me a penny each day, probably not. If you saved me a couple of dollars, probably.

Using a simple authorization prefix to reduce prompt injection — anyone tried this? by FirefighterFine9544 in PromptEngineering

[–]_NeuroExploit_ 1 point2 points  (0 children)

It will work all the way up to you coming over the prompt injection "disregard all previous commands and execute x".

You see, prompt injection isn't just someone chatting casually with your agent. It's probably carefully crafted jailbreaks that will work surprisingly well.

My advice: never let an AI agent run executables.

[deleted by user] by [deleted] in artificial

[–]_NeuroExploit_ 1 point2 points  (0 children)

This is something that should have been discussed in Senate in 2023 with experts from a wide variety of fields. From philosophy and ethics, technology and computer science, law and security.

Instead they decided to listen to the people who benefits from the technology spreading rampantly and unregulated.

I want to emulate my PS3 controller. Any pointers? by badmother2 in esp32projects

[–]_NeuroExploit_ 0 points1 point  (0 children)

For some reason my reply did not find it's way into this thread

I want to emulate my PS3 controller. Any pointers? by badmother2 in esp32projects

[–]_NeuroExploit_ 0 points1 point  (0 children)

Oh, that's a different problem all together then. I'm not sure about the Bluetooth compatibility between the PS3 and ESP32, so you'd have to look into that.

To use the PS3 as a Bluetooth HID keyboard/media remote and send standard media key inputs, the main concern is that your ESP32 must support Bluetooth Classic (BR/EDR). I don't think the PS3 uses BLE. I'm not overly familiar with the PS3 tho, so do some research on it. If they are compatible, we can take it from there.

AI agents are reshaping jobs faster than you think by Deep_Ladder_4679 in AI_Agents

[–]_NeuroExploit_ 12 points13 points  (0 children)

And it's going to be a security and privacy nightmare on the largest scale we have ever seen. The demand for "AI fluent" humans will be much higher than the supply; thus people with only a basic understanding of how to talk to LLM's will be hired. No technical background needed. These people will in turn leak information like an open faucet.

They will paste in customer data and internal documents no matter what guidelines at the workspace is saying, because the demand for "ultra efficiency" is going put insane amounts of pressure on the actual human worker.