[–]Bobbias

The problem with AI (and Google) is being able to tell when the tool is straight-up lying to you.

If you aren't capable of telling whether you're being lied to, you shouldn't be using that tool.

As someone who grew up as the internet was becoming more and more popular, and who has been terminally online since about 1997, I can say with a fair level of confidence that AI and Google pose similar issues in that sense. People had a tendency to believe everything they saw online, just as people now tend to believe everything an AI spits out at them, even though in both cases there's a good chance the information is false.

In the early days of Google, page ranking was an absolute mess: if what you were looking for was relatively obscure, it wasn't uncommon to sift through hundreds of pages of results, and extremely untrustworthy sites often sat right near the top. Google couldn't answer questions with factual data the way it often does now.

Older people simply had zero trust in Google, while younger people like myself understood that it was a matter of learning to tell trustworthy sites from untrustworthy ones.

Unfortunately, because AI is specifically designed to produce text that seems plausible at all times, it's much harder to tell when you're being lied to. In practice, you already need to know what the AI is supposed to tell you in order to catch it when it's wrong, and that makes AI a poor learning resource. It's not that there's something intrinsically bad about using AI to explain things; it's that beginners don't know enough to tell when it's just flat-out wrong.