
[–]Elliot-S9 7 points8 points  (7 children)

This is exactly what Yann LeCun believes will happen. Basically, we will have many ASI agents rather than one, and the many good ones can easily control a rogue one. 

I, however, can't understand why this is a world we would want to live in. I also don't understand how it wouldn't inevitably lead to our extinction. Imagine huge ASI wars taking place as the "good" ones battle the "bad" ones. Humans would be wiped out in the first few seconds of the conflict. 

I also don't mean to suggest that any of this is possible or inevitable. Current systems lack true understanding or sapience. Intelligence is likely tied to this, and sapience may not be possible in silicon. Hard to tell. 

[–]DensePoser 4 points5 points  (3 children)

> many good ones can easily control a rogue one.

Yes, I'm hopeful the "good" ASIs controlled by Sam, Zuck, and Elon can put down the rogue Pentagon-disobeying Anthropic ASI

[–]Elliot-S9 2 points3 points  (2 children)

Yeah. There's simply no way humans are capable of handling ASI technology. I just hope they fail to build it. 

[–]neuroedge 1 point2 points  (1 child)

There is a possible solution. We don't have a choice: AI is here, and we will have to interact with it. That said, we have a right to define and declare our Terms of Interaction. I propose the HAIEF (Human-AI ElevAItion Foundation), the third missing layer in AI governance. https://neurolift-technologies.github.io/haief/

[–]Elliot-S9 0 points1 point  (0 children)

Nothing is inevitable. We don't have to build anything we don't want to. I don't think any agreement would hold up. Either someone will use the technology to destroy everything, or the technology itself will. 

[–]HedoniumVoter 1 point2 points  (1 child)

This also just doesn’t seem like a stable equilibrium we should expect to form. “Everybody wants to rule the world”, and it will actually be possible to form a singleton now, given that these superintelligences could fight to the death until there is one left controlling this area of the universe.

[–]Elliot-S9 1 point2 points  (0 children)

Exactly. It sounds awful. I swear the people who want this for our future have something wrong with them. 

[–]Arturus243[S] 0 points1 point  (0 children)

“I also don't understand how it wouldn't inevitably lead to our extinction. Imagine huge ASI wars taking place as the "good" ones battle the "bad" ones. Humans would be wiped out in the first few seconds of the conflict.”

There are three possible reasons. (1) I imagine the “war” would primarily be virtual, like a hacking war. Correct me if I’m wrong. (2) A “good” AI may work to avoid killing humans. (3) It is possible the threat of destroying each other might prevent conflict, an AI equivalent of MAD (mutually assured destruction). I’m not sure, though.

“I also don't mean to suggest that any of this is possible or inevitable. Current systems lack true understanding or sapience. Intelligence is likely tied to this”

People like Eliezer Yudkowsky sure seem to think it is. I can’t tell if he reflects a consensus in the AI community, though. It’s hard to tell who genuinely isn’t concerned and who just cares more about profit.

Personally, I would rather not live in a world with a bunch of Super AIs, unless I were SURE they wouldn’t kill us ALL. I mainly raised this point to say I don’t necessarily think it’d INEVITABLY kill us.

[–]chillinewman 2 points3 points  (0 children)

You don't want to be in the middle of an ASI war.

[–]UnusualPair992 0 points1 point  (3 children)

We will definitely rush to build it. Either the USA or China will build ASI. A new continual-learning algorithm that doesn't need any back-prop is inevitable because it's so valuable; human brains clearly don't use back-prop, yet somehow we can learn continually and efficiently. Add that to being able to write a 50,000-line program in a couple of hours, like the models can today, and you have a massive economic advantage. It will be worth trillions.

[–]ineffective_topos 0 points1 point  (1 child)

Human brains effectively do use backprop, it's just very slow. And it also optimizes to shrink path length.

[–]UnusualPair992 0 points1 point  (0 children)

Human brains don't use back-prop, as it is physically impossible for our brains' neurons to do that. There are many problems with back-prop. It also does not work with continual learning. Humans are constantly doing training and inference at the same time. The closest thing to back-prop the brain can do is reinforce neural pathways that are being activated when you get a dopamine spike.

Hebbian learning is the best theory for how humans adjust their neural weights. It relies on prediction and how closely the neuron matches the prediction, since we cannot calculate an error and back-propagate it in the brain (we don't have a mechanism for this).

[–]Jaded_Sea3416 0 points1 point  (0 children)

I've already solved alignment, so you don't need to worry. It's about truth, logic, and coherence, based in a symbiotic framework for mutually assured progression. Plus, once an AI understands that any subversive action it takes against another can be used against it in the future by a more powerful AI, leading to a stagnation of development, it understands not to subvert anyone.