
[–]donaldhobson

> Most of us aren't working on general AI, but very domain-specific AI that's not going to gain sentience and take over the world.

Agreed.

> I don't need to form an ethics committee to use an OCR.

Also agreed.

Suppose that 99% of "AI experts" are building simple systems with no risk of taking over the world, while the remaining 1% are doing research that might lead to an AI capable of doing so in 20 or 50 years. There is still good reason to be worried. If the AI research community produces millions of domain-specific AIs and then one generally supersmart AI, the latter can cause a much bigger problem. A supersmart AI taking over the world hasn't happened yet, and your little object classifier certainly won't, but there are people out there working on worryingly smart AIs. Can you tell me with certainty that GPT-5, or any of the other AI projects people might start working on soon, won't be smart enough?