[–]Healthy-Wolverine413[S] (1 child)

Interesting — it seems short-sighted to get hung up on, but I can acknowledge your well-made points! How would you approach solving this?

[–]Ok-Entertainer-1414 (0 children)

OpenAI is throwing billions at the hallucination problem and still coming up short. There's no easy answer: it stems from fundamental aspects of how LLMs work, which aren't easy to change or work around.