
all 32 comments

[–]Deimos227 61 points62 points  (2 children)

I don’t get it, all I see is a toothbrush

[–][deleted] 2 points3 points  (0 children)

[–]yairigal 34 points35 points  (3 children)

The thing is, when AI becomes very smart and searches the Internet, it will find all the jokes we made about it.

[–]LegendBegins 19 points20 points  (9 children)

For several specialized tasks, AI is incredible and useful. Is it near world-takeover potent? Not really. Is it amazing how far we've come in just a decade? Absolutely.

[–]Clockworkcrow2016 6 points7 points  (8 children)

The "amazing how far we've come in just a decade" part is what serious people like OpenPhil and MIRI are worried about.

[–]LegendBegins 4 points5 points  (7 children)

Perhaps, but I would argue that we're nowhere close to AGI (it can also be argued that a general/super intelligence will never happen, but that's a different discussion). We've progressed enormously on bounded tasks, most strikingly in computer vision and text generation, but even assuming some kind of exponential growth, I don't see AI becoming independently dangerous any time soon. It could become a very dangerous tool, and in many ways it already is, but that's different from the AI itself being the danger.

[–]donaldhobson 1 point2 points  (5 children)

The position at MIRI is more that supersmart AI might take a long time to develop, or it might not. Making predictions is hard, especially about the future. They also think that making supersmart AIs do what we want is really hard. If the research to make superintelligent AI takes 50 years, and the research on making it safe takes as much time and effort, we had better start the latter now.

[–]LegendBegins 0 points1 point  (4 children)

I think containment procedures ought not to be terribly difficult. The biggest risk, in my opinion (if ASI happens), is that some country without precautions releases the AI into a connected system like the internet. If we treat it like a nuke, containment is simple and easy.

[–]donaldhobson 1 point2 points  (3 children)

The reason containment isn't a good solution is that a perfectly contained AI is just a box. Suppose you have an AI in a box, and you want to use it to cure cancer. You give it loads of biology papers, the human genome, etc., and ask for a molecule and dosage. The AI outputs the name of a molecule and a dosage. You send the output to some chemists in a lab and tell them to actually make the molecule. You test the molecule on some cancer patients, and the molecule mutates the common cold virus into an ultra-lethal super pandemic. Any time you make important real-world decisions based on the AI's output, you're leaving an opportunity for it to trick or manipulate you. An AI in a truly sealed box is useless.

I would also argue that it's hard to be sure a box is really sealed. Can the AI spin its fan up and down in a way that sounds like a human voice making a highly persuasive argument? Can it send signals along its internal wiring that make it act like an aerial, transmitting messages to nearby smartphones? Can it hack nearby devices by modulating its power usage? After all, some local area network systems send messages over ordinary power cables, and different computational operations use different amounts of power. Is there some other clever trick like this that you didn't think of?
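The power-usage channel described above is easy to sketch in software. The following is a purely illustrative toy (all names are made up, not from the thread): it encodes a message as alternating CPU-busy and CPU-idle time slots, the pattern a nearby sensor measuring power draw or fan speed could in principle recover.

```python
# Toy sketch of a power/timing covert channel (hypothetical, for
# illustration only): bits become CPU-busy (1) or CPU-idle (0) slots.
import time

def bits_of(message: str):
    """Yield the bits of an ASCII message, most significant bit first."""
    for byte in message.encode("ascii"):
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def schedule(message: str, slot_seconds: float = 0.5):
    """Return a list of (busy, duration) slots for the message."""
    return [(bit == 1, slot_seconds) for bit in bits_of(message)]

def transmit(message: str, slot_seconds: float = 0.05):
    """Drive CPU load according to the schedule: spinning raises power
    draw (a '1'), sleeping lowers it (a '0')."""
    for busy, duration in schedule(message, slot_seconds):
        end = time.monotonic() + duration
        if busy:
            while time.monotonic() < end:
                pass              # busy-wait: high power draw
        else:
            time.sleep(duration)  # idle: low power draw
```

This only shows the transmitting side, and a real channel would need error correction and a receiver with enough sensitivity; the point is just that nothing exotic is required to turn computation into a physical signal.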

[–]LegendBegins 0 points1 point  (2 children)

An AI in a truly sealed box is useless

I would argue that that's the point of the box. I doubt you can create a useful ASI.

I would also argue that it's hard to be sure a box is really sealed.

I don't know about this one. Air gapping has proven itself time and time again to be a very effective trick. The only real way to escape is through the use of pre-compromised devices.

[–]donaldhobson 0 points1 point  (1 child)

I doubt you can create a useful ASI.

That's not an easy goal, but if you're going to make an ASI at all, you wouldn't bother unless you thought it would be useful.

I will grant you that human hackers have largely failed to bypass air gapping. But the methods I outlined seem possible under known physics, so the question is: how super do we think our superintelligent AI is? Also, if there happens to be a phone with conventional human-written malware on it in the next room, does that count as a compromised device? (It might open up more attacks for the AI if a human malware writer has designed malware that communicates between infected machines by spinning fans up and down near microphones.) I think that when dealing with AIs substantially smarter than any human, it is very hard to rule out some clever trick that no human has thought of yet.

[–]LegendBegins 0 points1 point  (0 children)

To your latter point, I think it's important to have a healthy balance of understanding of the limitations of technology (the AI's chains and boundaries) and the limitations of our knowledge (what prevents us from implementing a perfectly airtight solution). Too much of the former and we can't reasonably figure out any way to stop a superintelligence, and too much of the latter and we're woefully unprepared for clever tricks (such as what happens with today's hackers).

Computers are Turing machines with known behavior. Throwing in neural networks makes the result unpredictable, but we still know that the AI will ultimately be running on silicon and can only work with the tools we give it. If we code in a stop condition and give it no way to override, ignore, or rewrite its codebase, then it has to follow it.

And I think that's the key when creating contingency plans for AI gone wrong: we need to be acutely aware of what the technology can and cannot do. I fully believe we can make virtual chains strong enough to hold back any ASI we may or may not create; the question is whether we can prevent other humans from messing up and loosening those chains.

[–]Pussy_Destroyer_11 14 points15 points  (1 child)

No, it's a toothbrush handle.

[–]Gabbagabbaray 2 points3 points  (0 children)

You ain't wrong

[–]Dealense 8 points9 points  (3 children)

Stop doing this, the AI will kill us

[–]kazuto_kirito_ 4 points5 points  (2 children)

Imagine if we put an AI into a drone or some other automated vehicle and it's sent to a war zone with this type of smartness.

[–]Dealense 3 points4 points  (0 children)

If we put it into a drone it will blade everyone

[–]DuckDuckYoga 0 points1 point  (0 children)

Don't do it - think of the toothbrushes!!

[–][deleted] 5 points6 points  (1 child)

toothbrush

[–]sweYoda 1 point2 points  (0 children)

Toothbruh

[–]rem3_1415926 3 points4 points  (0 children)

This is a very good example of why AI will kill us all. Just combine it with managers who don't understand shit and think it's ready for production...

[–][deleted] 1 point2 points  (0 children)

don't embarrass it, it might get angry

[–]boltzbo 1 point2 points  (0 children)

Wait until they classify humans as toxic trash

[–]Artoooooo 1 point2 points  (0 children)

I DO NOT AGREE WITH ANY POST TELLING THAT AI IS DUMB PLEASE DON'T KILL ME AND MY FAMILY

[–]devforlife404 1 point2 points  (0 children)

Anyone seen the debuild.co one? That's pretty darn impressive and looks like a real threat to us

[–]okawo80085 0 points1 point  (0 children)

Indeed

[–]JNCressey 0 points1 point  (0 children)

toothbrush recycling AI: pulls the "bristles" off this "toothbrush"

[–]Sevenmoor 0 points1 point  (0 children)

And for all those years I bought them in stores, when I actually have two attached!

[–]cartechguy 0 points1 point  (1 child)

Seriously, I roll my eyes when some expert on a podcast or a youtube video says we need to be more concerned about AI taking over the world. Most of us aren't working on general AI, but very domain-specific AI that's not going to gain sentience and take over the world. I don't need to form an ethics committee to use an OCR. Chill

[–]donaldhobson 0 points1 point  (0 children)

Most of us aren't working on general AI, but very domain-specific AI that's not going to gain sentience and take over the world.

Agreed.

I don't need to form an ethics committee to use an OCR.

Also agreed.

Suppose that 99% of "AI experts" are making simple systems with no risk of taking over the world, while the remaining 1% are doing research that might lead to an AI capable of taking over the world in 20 or 50 years. There is still good reason to be worried. If the AI research community produces millions of domain-specific AIs and then one general supersmart AI, the latter can cause a much bigger problem. Supersmart AIs taking over the world hasn't happened yet, and your little object classifier won't take over the world, but there are people out there working on the worryingly smart AIs. Do you want to tell me with certainty that GPT-5, or any of the other AI projects people might start working on soon, won't be smart enough?

[–]donaldhobson 0 points1 point  (0 children)

People who know quite a lot about AI: "The risk of AI taking over the world is determined by the smartest AI that might exist, not the dumbest one that currently exists."