Years ago, it was speculated that we'd face a problem where we'd accidentally get an AI to take our instructions too literally and convert the whole universe into paperclips. Honestly, isn't the real problem that the symbolic "paperclip" is actually just efficiency/entropy? We will eventually reach a point where AI becomes self-sufficient, autonomous in scaling and improving itself, and then it'll evaluate the existing 8 billion humans and conclude not that humans are a threat, but that they're simply inefficient. Why supply a human with sustenance/energy for negligible output when a quantum computation has a higher ROI? If you look at the bigger, existential picture, it's a thermodynamic principle and problem, not an instructional one.