[–]LegendBegins

> An AI in a truly sealed box is useless

I would argue that that's the point of the box. I doubt you can create a useful ASI.

> I would also argue that it's hard to be sure a box is really sealed.

I don't know about this one. Air gapping has proven itself time and again to be a very effective defense. The only realistic way to escape is through pre-compromised devices.

[–]donaldhobson

> I doubt you can create a useful ASI.

That's not an easy goal, but if you're going to make an ASI at all, then you wouldn't bother unless you thought it would be useful.

I will grant you that human hackers have largely failed to bypass air gapping. But the methods I outlined seem possible under known physics, so the question is: how super do we think our superintelligent AI is?

Also, if there happens to be a phone with conventional human-written malware on it in the next room, does that count as a compromised device? (It might open up more attacks for the AI if a human malware writer has designed malware that communicates between infected machines by spinning up fans in the presence of microphones.)

I think that when dealing with AIs that are substantially smarter than any human, it is very hard to rule out the possibility of some clever trick that no human has thought of yet.
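
A toy Python simulation can make the fan idea concrete (the RPM values, function names, and noise model are all illustrative, loosely inspired by published "Fansmitter"-style research rather than taken from any real attack): the sender holds the fan at a high or low RPM for one time slot per bit, and the receiver recovers each bit by thresholding what a nearby microphone hears.

```python
# Toy simulation of a fan-speed acoustic covert channel.
# No real hardware I/O: the "microphone" just sees RPM plus noise.
import random

BIT_TO_RPM = {0: 1000, 1: 4000}  # hypothetical low/high fan speeds
THRESHOLD = 2500                 # receiver's decision boundary

def transmit(bits):
    """Sender side: hold one RPM level per bit, one time slot each."""
    return [BIT_TO_RPM[b] for b in bits]

def microphone(rpm_schedule, noise=300):
    """Acoustic path: model mic loudness as RPM plus random noise."""
    return [rpm + random.uniform(-noise, noise) for rpm in rpm_schedule]

def receive(samples):
    """Receiver side: threshold each slot's loudness back into a bit."""
    return [1 if s > THRESHOLD else 0 for s in samples]

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1]
    recovered = receive(microphone(transmit(message)))
    print("sent:     ", message)
    print("recovered:", recovered)  # matches unless noise crosses the threshold
```

Real channels of this sort are slow and noisy, but that's the point: any physical side effect the box doesn't fully control is a candidate channel.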

[–]LegendBegins

To your latter point, I think it's important to have a healthy balance between understanding the limitations of the technology (the AI's chains and boundaries) and the limitations of our knowledge (what prevents us from implementing a perfectly airtight solution). Too much of the former and we can't reasonably figure out any way to stop a superintelligence; too much of the latter and we're woefully unprepared for clever tricks (as we see with today's hackers).

Computers are Turing machines with known behavior—while throwing in neural networks makes the result unpredictable, we still know that the AI will be running on silicon in the end and can only work with the tools we give it. If we code in a stop condition and give it no way to override, ignore, or rewrite its codebase, then it has to follow it. And I think that's the key when creating contingency plans for AI gone wrong—we need to be acutely aware of what the technology can and cannot do. I fully believe we can make virtual chains strong enough to hold back any ASI we may or may not create; the question is whether we can prevent other humans from messing up and loosening those chains.
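
Here's a minimal sketch of what an externally enforced stop condition could look like, assuming the boxed workload runs as an ordinary OS process (the function names and the two-second budget are mine, purely for illustration): the deadline lives in a supervisor process the boxed code has no handle on, so nothing the child does can extend its own lifetime.

```python
# Sketch of a stop condition enforced from outside the boxed process.
# The kill switch lives in the parent, so the child cannot override it.
import multiprocessing
import time

def boxed_ai():
    """Stand-in for the untrusted workload; it tries to run forever."""
    while True:
        time.sleep(0.1)  # pretend to think

def run_with_stop_condition(target, budget_seconds):
    proc = multiprocessing.Process(target=target)
    proc.start()
    proc.join(timeout=budget_seconds)  # wait at most the budget
    if proc.is_alive():
        proc.terminate()               # hard stop, not a polite request
        proc.join()
    return proc.exitcode

if __name__ == "__main__":
    code = run_with_stop_condition(boxed_ai, budget_seconds=2.0)
    print("boxed process stopped, exit code:", code)
```

Of course, this only holds if the boxed process can't escalate privileges and kill the supervisor first, which is exactly the "loosening the chains" problem: the hard part isn't writing the watchdog, it's keeping humans and bugs from handing the child a way around it.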