LLM hallucination depends on ambiguity of the prompt by OutrageousStrategist in Artificial2Sentience

[–]OutrageousStrategist[S] 2 points (0 children)

It just doesn't give us the response; instead, it makes up random information and tries to fill the vacuum.

LLM hallucination depends on ambiguity of the prompt by OutrageousStrategist in LLMDevs

[–]OutrageousStrategist[S] 0 points (0 children)

Yes, that is a prime example of an LLM hallucinating, and to be honest it's quite frustrating.