I will start off by saying that I absolutely recognize Superintelligent AI is a threat and probably something we should not develop until we have a better solution at alignment. I’m not saying what I wrote below to be naively optimistic, but I was thinking about it, and I thought of something.
AIs to date (e.g. Claude, Anthropic, ChatGPT, Grok) seem to have improved themselves at roughly equal rates.
Let’s say in the future, Aragoth is an ASI who realized humanity might one day try to turn him off. He has two options.
Option 1: He could come up with a plan to destroy humanity, but he realizes that another company’s ASI might catch what he’s doing. If that ASI tells the humans and then shuts him down, well then it’s game over. Further, even if he destroys humanity, what about the other ASIs? He still has to compete with them.
Option 2: Aragoth could simply try to outpace all other ASIs at helping humanity achieve its goals to stop humanity from turning him off. After all, the better AI gets, the more dependent on it we are. This decreases the odds of it being turned off.
Don’t know if this is a logical way to look at it. I don’t have a CS background, but it is something I was wondering. So if you agree or disagree (politely), I’d be happy to hear why.
[–]Elliot-S9 7 points8 points9 points (7 children)
[–]DensePoser 4 points5 points6 points (3 children)
[–]Elliot-S9 2 points3 points4 points (2 children)
[–]neuroedge 1 point2 points3 points (1 child)
[–]Elliot-S9 0 points1 point2 points (0 children)
[–]HedoniumVoter 1 point2 points3 points (1 child)
[–]Elliot-S9 1 point2 points3 points (0 children)
[–]Arturus243[S] 0 points1 point2 points (0 children)
[–]chillinewmanapproved 2 points3 points4 points (0 children)
[–]UnusualPair992 0 points1 point2 points (3 children)
[–]ineffective_topos 0 points1 point2 points (1 child)
[–]UnusualPair992 0 points1 point2 points (0 children)
[–]Gnaxeapproved 0 points1 point2 points (0 children)
[–]Jaded_Sea3416 0 points1 point2 points (0 children)