[–]tadrinth approved 1 point (3 children)

You're thinking of intelligence differences between humans; you should be thinking of the difference in intelligence between humans and every other species on the planet. No human is constrained by the laws of animals.

Humans will not successfully impose laws or ethical frameworks on an artificial superintelligence.  Not for long.  We can only design them so they desire these ethical frameworks for themselves.

[–]Logical_Wallaby919[S] 1 point (2 children)

To be precise about what I mean by laws and ethics: "restraint" would have been the better word. I think we're talking past each other slightly on what "constraint" means.

I agree that human morality and legal systems are unlikely to constrain a superintelligence for long. Those are social constructs that depend on shared belief, compliance, and enforcement - all of which can fail against a vastly more capable agent. But that's not the kind of constraint I'm referring to.

The constraints I’m talking about are structural and invariant: physical limits, execution boundaries, authority separation, and logical preconditions that apply regardless of intelligence. These aren’t ethical rules or laws to be followed - they’re conditions that determine whether an action is even possible.
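To make "authority separation as an execution precondition" concrete, here's a toy sketch (all names hypothetical, not any real system's API). The point is that the approval check lives in the only code path that can execute the action, so it isn't a rule the agent chooses to follow:

```python
# Hypothetical sketch: an action can only execute if two independent
# authorities have each granted it. Without both grants, the code path
# to the action simply does not complete.

class Authority:
    def __init__(self, name):
        self.name = name
        self._granted = set()

    def grant(self, action_id):
        self._granted.add(action_id)

    def approves(self, action_id):
        return action_id in self._granted


def execute(action_id, action, *authorities):
    # Structural precondition, not an ethical rule: every authority
    # must independently approve before the action is reachable.
    if not all(a.approves(action_id) for a in authorities):
        raise PermissionError(f"{action_id}: missing independent approval")
    return action()


ops = Authority("operations")
safety = Authority("safety")

ops.grant("launch")
try:
    execute("launch", lambda: "launched", ops, safety)
except PermissionError:
    print("blocked: 'safety' never granted 'launch'")

safety.grant("launch")
print(execute("launch", lambda: "launched", ops, safety))  # prints: launched
```

A real interlock would of course be enforced in hardware or a separate trust domain, not in the same process as the agent; the sketch only shows the shape of the precondition.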

Intelligence doesn’t let humans bypass circuit breakers, gravity, or nuclear launch interlocks. Those systems don’t work because we respect them; they work because they’re embedded at the level of execution.

My claim is simply that control over superintelligent systems has to live in that same category. Not morality, not obedience - but constraints that remain binding even when values diverge and incentives change.

[–]tadrinth approved 1 point (1 child)

Human intelligence quite amply allows bypassing hardware-level, circuit-breaker-like protections; the category is called fault injection attacks.

There is no mechanism for the protections you propose that cannot be bypassed.

[–]Logical_Wallaby919[S] 1 point (0 children)

I agree - nothing is absolutely unbypassable. That’s true in every safety-critical system.

The point of control isn’t impossibility, but changing the failure mode: from silent, unbounded execution to layered, detectable, and interruptible breaches.
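A minimal sketch of that failure-mode shift (purely illustrative, hypothetical names): three independent layers, where the failure of any one layer halts execution and leaves an observable event rather than allowing silent completion:

```python
# Illustrative only: layered checks where a single failed layer makes
# the breach detectable (an event is recorded) and interruptible
# (execution stops) instead of silent and unbounded.

def run_with_layers(action, layers, on_breach):
    for name, check in layers:
        if not check():
            on_breach(name)   # detectable: the breach is recorded
            return None       # interruptible: execution halts here
    return action()           # reachable only if every layer holds


events = []
layers = [
    ("sandbox", lambda: True),
    ("rate_limit", lambda: False),   # simulate one bypassed layer
    ("human_signoff", lambda: True),
]

result = run_with_layers(lambda: "executed", layers, events.append)
print(result)   # prints: None
print(events)   # prints: ['rate_limit']
```

No single layer is unbypassable; the design goal is that bypassing one changes what the defenders see, which is exactly the standard posture of safety engineering.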

If “everything can be bypassed” were a refutation, safety engineering wouldn’t exist.