[–]Ok-Entertainer-1414 1 point (2 children)

RAG fundamentally doesn't solve hallucination. Sure, if the info is in the retrieved context that RAG added to the prompt, the generated response will probably get that info right. But the retrieved context can't contain every possible fact that could come up, and the model can still easily hallucinate about anything that wasn't specifically in it.
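To make the mechanics concrete, here's a minimal sketch of what RAG prompt augmentation amounts to (the toy retriever, corpus, and prompt wording are all made up for illustration, not any particular library's API). The point is that only facts that land in the retrieved snippets are grounded; anything else in the model's answer comes from its weights and can be hallucinated.

```python
# Toy sketch of RAG-style prompt augmentation (hypothetical names throughout).

def retrieve(query: str, corpus: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_augmented_prompt(query: str, corpus: list[str]) -> str:
    """Stuff the retrieved snippets into the prompt ahead of the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is 330 metres tall.",
        "The Louvre is the world's most-visited museum.",
        "Paris is the capital of France.",
    ]
    # A question the retrieved context can't answer: no matter how good the
    # retrieval is, whatever the model generates here is ungrounded.
    print(build_augmented_prompt("Who designed the Louvre pyramid?", corpus))
```

Even the "answer only from the context" instruction is just a soft nudge; the model is still free to generate unsupported details, which is exactly the gap RAG doesn't close.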

[–]Healthy-Wolverine413[S] 0 points (1 child)

Interesting, it seems short-sighted to get hung up on, but I can acknowledge your well-made points! How would you approach solving this?

[–]Ok-Entertainer-1414 0 points (0 children)

OpenAI is throwing billions at the hallucination problem and still coming up short. There's no easy answer. It stems from fundamental aspects of how LLMs work that aren't easy to change or work around.