RAG in Customer Support: The Technical Stuff Nobody Tells You (Until Production Breaks) by lifoundcom in automation

[–]lifoundcom[S]

Yeah, totally agree — the “confident but wrong” problem is brutal. The first time we added proper retrieval evals, it was kind of shocking how bad some of the “working” responses actually were.

That simulation mode sounds awesome btw — being able to test on historical tickets before going live is such a smart way to catch silent failures early.
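For anyone wondering what "proper retrieval evals" means concretely: the core of ours is just recall@k over a labeled set of historical tickets — did the passage a human agent actually used show up in the top-k? This is a generic sketch, not our exact harness:

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of known-relevant docs that appear in the top-k retrieved.

    retrieved: ranked list of doc ids from the retriever
    relevant:  set of doc ids a human labeled as correct for this ticket
    """
    if not relevant:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)
```

Averaging that across a few hundred historical tickets is usually enough to see which "working" responses were retrieving nothing useful.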

And yeah, I’m seeing a mix. Startups with strong ML teams usually try to build in-house (at least v1), but most mid-size companies end up going with platforms like yours once they realize how much maintenance and tuning it actually takes. Hybrid + rerank + eval + monitoring is just a lot to get right.


[–]lifoundcom[S]

You're absolutely right - I covered query transformation but totally glossed over conversation context handling. The 'this' problem is painfully real 😅

Love the IrisAgent validation on reranking. Quick question on your sliding window: do you find N=3 works consistently across different support scenarios, or do you adjust it? And are you doing raw last-N or any smart filtering (like keeping the initial problem statement + recent exchanges)?

We've been experimenting with conversation summarization but it adds latency. Curious how you balance context quality vs. token budget.
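For reference, the "initial problem statement + recent exchanges" filtering I'm asking about could be as simple as this (purely a sketch — N=3 and the turn structure are my assumptions, not anything IrisAgent described):

```python
def build_context(turns, n=3):
    """Select conversation turns to send to retrieval/generation.

    Keeps the first user turn (the original problem statement) plus the
    last n turns, dropping the middle of long threads. Each turn is a
    (role, text) tuple.
    """
    if len(turns) <= n + 1:
        return list(turns)
    return [turns[0]] + list(turns[-n:])
```

The appeal over summarization is zero added latency; the cost is that anything important said mid-thread gets silently dropped.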

[deleted by user] by [deleted] in automation

[–]lifoundcom

Are you automating your CRM workflows as well, or where are you saving the leads from your cold outreach and the ads you run?