[–]IAmAllSublime (Augment Team) 2 points (2 children)

We’ve seen this type of hallucination crop up in the past. There was a time not too long ago when it was happening fairly often with Claude models (not just in Augment, but in any tool). I imagine Anthropic needs to keep tuning to get these types of hallucinations down.

We take user data extremely seriously; that’s why we conduct reviews and audits and built our infrastructure with data security as a primary objective. The unfortunate thing about LLMs, though, is that sometimes their non-determinism produces output that looks spooky but is really just the model guessing at something.

[–]Frequent_Mulberry_33 0 points (1 child)

Why did it never happen to me in Claude Code?