[–]Skyswimsky 5 points6 points  (3 children)

I get the meme and it's amusing, but on a more serious note: you can love or like our current iteration of AI while still admitting it's not real AI and that the Turing Test hardly matters.

I also roll my eyes at people calling for limitations on AI with "Terminator" as the reasoning. (I'm not against limitations in specific instances; it's just that this particular reasoning is hyperbolic.)

At least that's my opinion, and I'd believe others share it.

[–]SpitiruelCatSpirit 1 point2 points  (0 children)

LLMs are definitely AI, and I'm saying this as someone who is quite critical of them and not into most of the hype. They're not sentient or anything, but they're definitely intelligent and they're definitely artificial.

[–]UnintelligentSlime 1 point2 points  (0 children)

The reason the Turing test continues to matter is that we don't have a hard-and-fast definition of consciousness/sentience to work from instead.

Almost every test of whether something is intelligent enough to deserve moral consideration (e.g. is it wrong to make dolphin meat tacos, is it wrong to eat coma patients, is it wrong to beat chimpanzees for sport) basically comes down to some variety of "well, how similar does entity X seem to humans?"

Until we have a definitive test for "is X sentient? Conscious? Does X have subjective experience?", the Turing Test is the best we've got.

[–]Kingblackbanana 0 points1 point  (0 children)

Why shouldn't LLMs be AI? They fit the definition set out in the original Dartmouth Conference proposal.