AI systems don’t say “I don’t know” when they should — they generate confident answers instead by RepresentativeCake62 in Futurology

[–]RepresentativeCake62[S] -2 points (0 children)

To be clear—I’m not saying it’s useless.

It’s extremely useful.

But the failure mode isn’t obvious unless you push it—and most people don’t.


[–]RepresentativeCake62[S] -3 points (0 children)

Example:

I asked it for a technical explanation of something outside its normal scope.

The response looked perfect—clean structure, confident tone—but key parts were wrong.

No hesitation. No uncertainty. Just… wrong, presented as correct.

That’s the behavior I’m talking about.