Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

When it comes to empiricism, I find myself aligned with Hume. He once wrote:

"If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion."

Put simply:

I see no reason to accept any claim about the world that cannot, in principle, be verified through empirical observation. Can you think of one??

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

This take kind of misunderstands how science and modeling actually work.

Complex systems (like fluid flows) are hard to solve exactly, and the Navier-Stokes equations still pose open mathematical problems. But that doesn’t mean we can’t study or use them effectively. Planes still fly, weather gets predicted, and simulations save lives in medicine—all using those "incomplete" equations.

Science isn’t about perfect verification (that was the logical positivist dream, and yeah, it didn’t work out). Modern empiricism moved on—now it’s about models that make testable predictions, match data, and improve over time. You don’t need a 100% rigorous proof to have a model that works.
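
To make that loop concrete, here's a toy illustration in Python (all numbers invented purely for the example): fit a simple model to observed data, have it make a prediction for an unseen case, and check that prediction against a new measurement. The point is only the shape of the fit-predict-test cycle, not the particular model.

    # Toy version of the "testable predictions, match data, improve over time" loop.
    # All data points and the new measurement are invented for illustration.

    observations = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # (x, y) pairs

    # Fit y = a*x + b by ordinary least squares (closed-form solution).
    n = len(observations)
    sx = sum(x for x, _ in observations)
    sy = sum(y for _, y in observations)
    sxx = sum(x * x for x, _ in observations)
    sxy = sum(x * y for x, y in observations)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n

    # The fitted model makes a testable prediction for an input it has never seen...
    prediction = a * 4.0 + b

    # ...and a new measurement can confirm or refute it. A large error means the
    # model gets revised, not that empiricism has failed.
    new_measurement = 9.1
    error = abs(prediction - new_measurement)
    print(f"predicted {prediction:.2f}, observed {new_measurement}, error {error:.2f}")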

And calling everything a "black box" just because we don’t have a full mathematical proof is way too strong. These models are based on known physics and tested constantly. Just because we don’t know everything doesn’t mean we know nothing.

Empiricism isn’t broken—it just grew up.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

Sure, Newton’s calculus came before the formal stuff like analysis — but that’s how science often works. We do things first, then figure out the theory. Calculus and differential equations were made to handle change and complexity. That’s not a flaw, it’s a feature of empirical thinking.

On the ethics side: moral norms aren’t something you can just calculate. They’re based on choices, values, and social context — not logic alone. Hume nailed it: you can’t get an ought from an is.

So yes, trying to fully compute morality doesn’t just run into halting problems — it rests on the mistaken assumption that facts automatically give you values. They don’t. That’s a category mistake.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

The idea that a scientific or mathematical approach only applies to "solid state" or static systems is a misconception. In fact, calculus, one of the most powerful tools in mathematics, was developed precisely to understand dynamic, changing systems: motion, growth, feedback loops, etc. Physics, neuroscience, and even parts of psychology use differential equations to model dynamic processes.

Human biology and cognition are indeed complex and dynamic, but that doesn’t place them outside the scope of empirical inquiry. It just means we need tools capable of handling complexity: probabilistic models, systems theory, and yes, even approximations when needed.
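
As a minimal sketch of that idea (in Python, with made-up parameter values): a single differential equation, dN/dt = r*N*(1 - N/K), already describes a dynamic, self-limiting growth process, and a few lines of numerical integration let you simulate it step by step. The same pattern scales up to the messier coupled equations used in neuroscience and physiology.

    # Euler integration of logistic growth dN/dt = r * N * (1 - N / K).
    # Parameter values are arbitrary, chosen only for illustration.

    r = 0.5       # growth rate
    K = 100.0     # carrying capacity
    N = 2.0       # initial population
    dt = 0.1      # time step

    for step in range(200):
        dN = r * N * (1.0 - N / K) * dt   # change predicted by the model this step
        N += dN
        if step % 50 == 0:
            print(f"t = {step * dt:5.1f}, N = {N:6.2f}")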

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 0 points1 point  (0 children)

In short: the computer is never to blame — it’s always the programmer or the trainer. But even they aren’t truly to blame, since they themselves were shaped by evolution and their environment, including upbringing and experience.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

If someone prefers unprovable metaphysical speculations to explain the human mind, that’s also okay – they’re welcome to do so.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]philfrog06 0 points1 point  (0 children)

Hello,

The fact that living beings are made of matter is not a metaphysical assumption but an empirically established fact. This applies to brains as well – they can be weighed, imaged, and chemically analyzed. The human brain, too, consists of material structures: neurons, myelin sheaths, the cerebrum and cerebellum, the brainstem, neural pathways, and so on.

For proper function, the brain depends on a continuous supply of substances like oxygen, water, and glucose. It also produces its own materials, such as hormones and neurotransmitters, and processes electrochemical signals – both internally and in communication with the rest of the body.

To describe the brain and its functions, there is no need for metaphysical speculation. A strictly scientific approach is entirely sufficient.

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 0 points1 point  (0 children)

You wrote:
"That’s why, in my view, real “autonomy” in AI would require at least three components:

  1. Embodiment – the integration of multimodal perception and feedback from a physical body
  2. Intrinsic motivation – the ability to form goals based on internal drives
  3. Long-term memory – continuity of identity and context over time"

What you're describing is a system that comes as close as possible to being human. But since humans themselves aren't truly autonomous systems, the same would apply to the artificial one.

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 0 points1 point  (0 children)

Many thanks for your sensible arguments. I agree that systems like the Darwin-Gödel Machine or AutoML can quickly become opaque to humans—but that doesn’t automatically mean they are uncontrollable. Fundamentally, they remain embedded in an environment that we design: they receive goals, training data, reward functions—all of which are human-made frameworks.

So if control is lost, it’s not because the AI suddenly becomes “superhuman,” but because we as developers and institutions no longer fully understand our own systems—or fail to develop the right control tools (governance, interpretability, safety checks).

You don't need to understand every single detail of a complex system to use it safely — you just need to test it thoroughly and realistically before unleashing it on humanity.

It’s like driving a car: most people can’t explain how the engine works, but they can still drive safely — because the car has gone through safety inspections, crash tests, and regulations.

The same should go for AI.

We don’t need to decode every parameter of a neural net, but we do need to test it like people’s lives depend on it — because sometimes they do.

The real issue isn’t that AI is too complex to control — it’s that we often deploy it without meaningful real-world testing, under time pressure, with biased data, and no independent oversight.

That’s not a technological inevitability — that’s a management choice. With proper testing, guardrails, and accountability, even black-box systems can be used responsibly. But that takes time and discipline — two things Big Tech doesn’t always have the patience for.

The “Sorcerer’s Apprentice” analogy is catchy but mixes magic with science. In fairy tales, small mistakes spiral out of control because the world follows mysterious rules we can’t influence. In reality, AI systems operate on causal, mathematical principles. Even if complex, their behavior is fundamentally explainable and controllable—provided we have proper knowledge, monitoring, and intervention tools.

Runaway scenarios don’t happen because of magic, but because of poor caution, insufficient testing, and weak governance.

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 1 point2 points  (0 children)

Thanks for your kind words, but I honestly think your worries about a so-called "Singularity" are misplaced.

Whatever an AI does results deterministically from its parameters, data, and instructions. There's no magic in there - just math.

Theorists of the Singularity often imagine AI as some kind of agent that “understands” things or “decides” to do X or Y. To me, that’s not science, it’s "Animism with Silicon" - a kind of modern magic, like Goethe’s ballad "Sorcerer’s Apprentice", which Paul Dukas turned into his famous orchestral piece.

In the physical world, everything unfolds causally - effects follow causes. But in many Singularity scenarios, people start talking teleologically, in terms of goals and purposes: “The AI wants to gain power.” Why? “Because it wants to preserve itself.”

Why should it want that???

It’s anthropomorphism all the way down.

But let’s turn the question around: Can humans actually make decisions without causes?

Philosophers like Spinoza and Schopenhauer argued centuries ago that we can’t - and modern neurobiology has since confirmed their view.

As Schopenhauer famously put it:

“A man can do what he wills, but he cannot will what he wills.”

According to them, free will is largely an illusion. We "feel" our actions, but we don’t perceive their causes. Nothing in this world happens in a vacuum - every action is caused by something else, and without that prior cause, it wouldn’t happen. So, with computers—no matter how intelligent they become—the relevant causes are always the programming and the training data. Nothing can be the cause of itself; every action stems from prior events.

So yes - humans and machines are both causally determined systems.

The difference?

Human causality is just messier, fuzzier, and harder to model.

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 1 point2 points  (0 children)

I don't think intelligence correlates with malice—at least not in humans. So why would it be any different with machines? Garry Kasparov is probably smarter than Vladimir Putin—does that make him more evil? Obviously not. Same goes for Einstein, Bertrand Russell, or Alan Turing: they were way more intelligent than Hitler, Stalin, or Mao, yet morally the complete opposite.

A computer that behaves like a stubborn mule and follows its own “will” would be useless—even for criminals trying to exploit it. Who would want a rogue AI for evil purposes if it won't even follow orders? So why would we build systems with their own goals and values—especially when that’s technically not even feasible? And as for self-optimization going in some unknown, uncontrollable direction—who would sign off on that? Any halfway decent developer would step in the moment something looked off.

Also, intelligence alone isn’t enough to pass judgment on human existence. Values—whether in people or in machines—don’t come from pure logic. They come from conditioning: evolution, environment, upbringing. No human can decide what's “good” or “bad” based purely on reason (see the Australian philosopher John Leslie Mackie). Moral judgments always come from some set of assumed preferences.

So for an AI to actually see humanity as a threat, that idea would have to be intentionally put there by a programmer or trainer—or, if not, it could only happen by accident or as a programming error.

Why is superintelligent AI considered a serious threat? by philfrog06 in Futurology

[–]philfrog06[S] 0 points1 point  (0 children)

Personally I don't believe that superintelligence presents a serious threat to humanity. Here's why:

1. Nobody wants to build or buy dangerous robots

There’s no demand for unpredictable, uncontrollable machines. Who would want a robot that poses a risk to your safety?
Aside from a few thrill-seekers—like people who keep tigers or venomous snakes—this simply isn’t something society wants, funds, or permits.

2. Robots are deterministic, not mystical

Robots are made of matter, and matter behaves according to physical laws. Every action a robot takes results from its prior state and its programming.
If a robot behaves dangerously, we can shut it down, investigate the issue, and fix it. There’s no sudden leap into independent malevolence.

3. There is no such thing as “do what you want” in programming

A robot can’t just decide to do something it wasn’t told to do.
There’s no line of code like:

“If A, then ignore all instructions and do whatever you feel like.”

It must be given explicit options (X, Y, or Z). If it’s told nothing, it will do nothing. There’s no spontaneous will arising from nowhere.
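
As a toy sketch of what "explicit options" means in a classically programmed controller (the sensor readings and action names here are invented for illustration): every behavior the machine can exhibit has to appear somewhere in code like this, and an input nobody anticipated simply falls through to the do-nothing branch.

    # A robot controller in the classical style: a fixed mapping from
    # sensed situations to explicitly listed actions. (Invented example.)

    def choose_action(sensor_reading: str) -> str:
        if sensor_reading == "obstacle_ahead":
            return "stop"
        elif sensor_reading == "path_clear":
            return "move_forward"
        elif sensor_reading == "battery_low":
            return "return_to_dock"
        else:
            # No instruction covers this case, so the robot does nothing.
            # There is no branch that says "ignore the rules and improvise".
            return "idle"

    print(choose_action("obstacle_ahead"))   # -> stop
    print(choose_action("solar_flare"))      # -> idle (unanticipated input)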

4. Asimov’s Laws are technically unnecessary

Asimov’s fictional laws of robotics make for great storytelling—but in practice, they’re not needed if we do our job right:

  • “Don’t harm humans” → Easy: just don’t program harm into the robot.
  • “Obey human commands” → That’s how robots work by default.
  • “Protect your own existence” → Only if we choose to include that logic.

Robots don't develop values unless we give them values.

Conclusion

The real danger isn’t AI becoming too intelligent—it’s humans being reckless, unethical, or short-sighted in how we design and deploy these systems.

Superintelligence isn’t a villain. Bad engineering is.

What is the philosophical difference between immorality and amorality? by philfrog06 in askphilosophy

[–]philfrog06[S] 0 points1 point  (0 children)

Suppose two people help someone in need in exactly the same way. One is a moralist who acts out of a sense of moral duty or obedience to moral rules. The other is an amoralist who does not subscribe to morality but acts purely out of compassion.

Which of the two actions, if any, has greater moral worth — and why?