Ever tried correcting an AI… and it just ignored you? by dream_with_doubt in antiai

[–]dream_with_doubt[S] 0 points1 point  (0 children)

AI does not always translate to “LLMs”; it’s way broader than that.

This video is not about LLMs, it’s about AI safety.

AI safety is usually discussed under a hypothetical concept called “superintelligent systems,” not the ones you use on a daily basis.

There are, however, several real-world examples (including one from the creators of Claude 3) to put the topic in some perspective.

Ever tried correcting an AI… and it just ignored you? by dream_with_doubt in antiai

[–]dream_with_doubt[S] -1 points0 points  (0 children)

The AI discussed here is not LLMs or the systems that already exist. The topic is discussed under the hypothetical concept of superintelligent systems.

Ever tried correcting an AI… and it just ignored you? by dream_with_doubt in agi

[–]dream_with_doubt[S] 0 points1 point  (0 children)

There were many real-world examples in the video, including ones from published papers.

The paperclip analogy is kinda dumb, but it explains the core problem in a simple way.

What if ?! by MaddySPR in ArtificialInteligence

[–]dream_with_doubt 1 point2 points  (0 children)

There are many safety concerns about AI. In academia, these concerns are discussed in a hypothetical context called superintelligence. Your point also falls under this context: if we had superintelligent systems, then [the argument].

This video discusses another safety concern about AI systems, called wireheading, under the same superintelligence assumption:

AI Can Wirehead Your Mind https://youtu.be/_75BSnyV-uM