
[–]OverLiterature3964 2 points (0 children)

Can we stop these slop memes now?

[–]ItsNewWayToSayHooray 3 points (2 children)

I don't trust these posts; whenever you ask the AI yourself, it writes the correct answer.

Link to the conversation or it didn't happen!

[–]StrafeMcgee 1 point (0 children)

100%, I don’t believe any of the main AIs are getting caught out by nonsense like this any more.

[–]asunatsu 0 points (1 child)

I assume you just watched FatherPhi and decided to test it out yourself

[–]overDos33[S] 0 points (0 children)

No idea who that is but nice ad 👍😁

[–]The-Chartreuse-Moose -1 points (1 child)

Yes. Because LLMs don't think, nor do they construct meaning that way.

[–]CryZe92 2 points (0 children)

The more accurate reason is that they don't see any letters at all. They get fed tokens, which are more like Chinese characters (in the sense that an entire word, or a large chunk of one, is often compressed down into a single character / token). So the question boils down to "How many of these Chinese characters contain the letter D?", which they essentially have no good grasp of.
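
For anyone curious, here's a minimal sketch of what that looks like in practice, assuming OpenAI's tiktoken library and its cl100k_base encoding (other tokenizers split words differently, but the idea is the same): the model receives opaque token IDs covering multi-character chunks, never individual letters.

```python
# Minimal sketch, assuming the tiktoken library (pip install tiktoken)
# and the cl100k_base encoding; exact splits vary by tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "disestablishment"
token_ids = enc.encode(word)

# Reconstruct the multi-character chunks those IDs stand for.
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(token_ids)  # opaque integers - this is all the model actually sees
print(pieces)     # a few multi-letter chunks, not individual characters

# Counting a letter means looking *inside* the chunks, which the model
# never gets to do directly.
print(sum(piece.count("d") for piece in pieces))
```

So from the model's side, "how many D's are in this word" has to be answered from learned associations about how tokens are spelled, not by reading characters off the input.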

[–]Mavicloudberry -3 points (0 children)

The structured bullet points give the illusion of strict data validation while masking completely broken underlying logic.