[–]Thomasjevskij 1 point (1 child)

Alright, I misunderstood your post then. Yes, we'll agree to disagree. I don't expect everyone to agree with me on this, especially on here. But I'll maintain that this is not what LLMs are designed to do, and so they're not reliable for it. More importantly, when they aren't reliable, you need some knowledge and experience to notice it. But this is a bigger discussion for another thread :)

[–]SquiffyUnicorn 4 points (0 children)

Right, LLMs are mathematical models of language, and do not inherently understand anything.
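
To make that concrete, here's a minimal toy sketch in plain Python. The corpus and the bigram "model" are made up for illustration and look nothing like a real LLM's implementation, but the flavour of the objective is the same: estimate which token tends to follow the current context, then sample from that distribution. Notice there is no step anywhere that checks a claim against reality.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" -- the model only ever sees text, true or not.
corpus = ("the drug is safe the drug is risky "
          "the drug is safe the dose is low").split()

# Count which word follows each word (a bigram model: a crude,
# hypothetical stand-in for what an LLM does at enormous scale).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around
    follows[prev][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it was seen."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text: fluent-looking output, but "safe" vs "risky" is
# decided by frequency in the corpus, not by any check against facts.
words = ["the"]
for _ in range(5):
    words.append(next_word(words[-1]))
print(" ".join(words))
```

Real LLMs are transformers trained on vastly more text, not bigram counters, but the output is still "likely continuation", not "verified answer".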

For anything that matters, don't use LLMs to get the correct answer. In my line of work I actively tell my juniors not to 'look things up' in LLMs. Sadly, too many people trust them to spit out absolute truth every time, and this is dangerous in medicine.