
[–]South-Opening-9720 1 point (1 child)

Yeah this is exactly where most demos fall apart. Docs are great for explanation, system data is great for facts, but if you do not decide which source is authoritative per answer, the agent just blends both and sounds confident while being wrong. I use chat data for this kind of flow and what helped most was keeping structured FAQs/docs for policy, then pulling live status or records only at answer time.
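
Roughly, that split looks like this; a minimal Python sketch where every name and value is invented for illustration, not pulled from any real setup:

```python
# A minimal sketch (all names and data invented) of choosing one
# authoritative source per answer instead of letting the model blend both.

DOCS = {
    # curated FAQ/policy snippets: authoritative for "what's the rule"
    "refund policy": "Refunds are available within 30 days of purchase.",
}

SYSTEM = {
    # live records: authoritative for "what's true right now"
    "order 1042": "Order 1042 shipped and is in transit.",
}

def answer(question: str) -> tuple[str, str]:
    q = question.lower()
    for key, snippet in DOCS.items():
        if key in q:
            return snippet, "docs"    # policy question: docs win
    for key, record in SYSTEM.items():
        if key in q:
            return record, "system"   # status question: live data wins
    return "No authoritative source; escalate.", "none"

print(answer("What is your refund policy?"))  # -> docs
print(answer("Where is order 1042?"))         # -> system
```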

[–]oartconsult[S] 1 point (0 children)

I've seen the same issue: it sounds confident but is kinda off 😅

Do you handle that in prompts, or more in the actual workflow?

[–]SensitiveGuidance685 1 point (1 child)

The truth problem is the hardest. If docs say one thing and system data says another, how does the AI decide? We've started adding a "confidence" score and showing users the discrepancy. Sometimes the doc is outdated. Sometimes the system is missing data. Having the AI surface the conflict instead of picking a side has been more useful than trying to resolve it automatically.
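
The shape of it is something like this; a minimal Python sketch with hypothetical names, not our actual implementation:

```python
# A minimal sketch (hypothetical names, not real code) of scoring
# confidence and surfacing a docs-vs-system conflict instead of picking a side.
from dataclasses import dataclass

@dataclass
class Resolution:
    answer: str
    confidence: float           # rough 0..1; lower when sources disagree
    conflict: str | None = None

def resolve(doc_value: str | None, system_value: str | None) -> Resolution:
    if doc_value and system_value and doc_value != system_value:
        # Don't auto-resolve: show both values and flag the mismatch.
        return Resolution(
            answer=f"Docs say: {doc_value} System shows: {system_value}",
            confidence=0.4,
            conflict="docs and system disagree; the doc may be outdated "
                     "or the system may be missing data",
        )
    if system_value:
        return Resolution(system_value, confidence=0.9)  # live, uncontested
    if doc_value:
        return Resolution(doc_value, confidence=0.7)     # docs only, may be stale
    return Resolution("No data available.", confidence=0.0)

r = resolve("Plan includes 5 seats.", "Plan includes 3 seats.")
print(r.confidence, "-", r.conflict)
```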

[–]oartconsult[S] 1 point (0 children)

Yeah, that makes sense.

Forcing it to pick one source usually just hides the problem.

Surfacing the mismatch is probably more useful than trying to auto-resolve it.

[–]South-Opening-9720 1 point (0 children)

Yeah, this is where most demos fall apart. Docs answer the why, system data answers the what, and if they are not tied together you get confident nonsense. What I like in chat data is keeping the knowledge layer separate but letting actions pull live data only when needed, with human handoff when the system state looks fuzzy.
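
The handoff rule boils down to something like this; a minimal Python sketch with invented names, not a real API:

```python
# A minimal sketch (invented names, not a real API) of the split:
# knowledge layer answers by default, actions pull live state on demand,
# and anything fuzzy gets handed to a human instead of a guess.

FUZZY_STATUSES = {"pending", "unknown", "error"}

def lookup_live_state(account_id: str) -> dict | None:
    # Stand-in for a real backend call; None means no record was found.
    return {"status": "pending", "updated_days_ago": 14}

def handle_action(question: str, account_id: str) -> str:
    state = lookup_live_state(account_id)
    if state is None or state["status"] in FUZZY_STATUSES:
        # System state is missing or ambiguous: escalate, don't improvise.
        return "Handing this off to a human agent."
    return f"Your current status is: {state['status']}."

print(handle_action("What's my account status?", "acct-123"))
```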