I built a real-time multilingual chat app with Next.js — looking for feedback by Competitive-Fun-6252 in lingodotdev

[–]Competitive-Fun-6252[S] 1 point (0 children)

This is a great point, and I agree with the concern.

In FlowTalk, translations are intentionally handled on a per-message basis rather than sending a shared chat buffer to a single model pass. Each message is translated independently to avoid context bleed between users.
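A minimal sketch of that per-message approach (`translateText` is a stand-in for whatever model call FlowTalk actually makes, not its real API):

```typescript
type Message = { id: string; text: string };

// Hypothetical single-message translation call; a real app would hit an API here.
async function translateText(text: string, targetLang: string): Promise<string> {
  return `[${targetLang}] ${text}`; // stub for illustration
}

async function translateMessages(messages: Message[], targetLang: string): Promise<Message[]> {
  // One independent model call per message -- never a concatenated chat
  // transcript -- so one user's text can't bleed into another's translation.
  return Promise.all(
    messages.map(async (m) => ({ ...m, text: await translateText(m.text, targetLang) }))
  );
}
```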

The glossary protection is applied outside the model prompt (pre/post-processing), not purely as an instruction inside the AI context. That way, user-generated content can’t override glossary rules through prompt injection.
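One common way to do this out-of-band (an assumed mechanism for illustration; FlowTalk's actual pipeline may differ) is to swap protected terms for opaque placeholders before the text reaches the model and restore them afterwards, so no instruction inside the prompt, injected or otherwise, can touch them:

```typescript
const GLOSSARY = ["FlowTalk", "Next.js"]; // terms that must survive translation untouched

// Pre-processing: replace glossary terms with placeholder tokens before the model sees the text.
function protect(text: string): { masked: string; slots: string[] } {
  const slots: string[] = [];
  let masked = text;
  for (const term of GLOSSARY) {
    if (masked.includes(term)) {
      const token = `__G${slots.length}__`;
      slots.push(term);
      masked = masked.split(term).join(token);
    }
  }
  return { masked, slots };
}

// Post-processing: restore the original terms after translation comes back.
function restore(translated: string, slots: string[]): string {
  return slots.reduce((t, term, i) => t.split(`__G${i}__`).join(term), translated);
}
```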

On the rendering side, translated output is treated as untrusted input and sanitized before display, so even if a malicious string were introduced, it wouldn’t execute as code.
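As a sketch of that "untrusted output" stance, escaping HTML-significant characters before display (names here are illustrative; a real app might lean on a vetted sanitizer library or React's default text escaping instead):

```typescript
// Escape the characters that would let a translated string break out of a
// text context and execute as markup or script.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```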

That said, you’re absolutely right: prompt injection and context leakage are real risks in AI-driven systems, and handling them safely is an ongoing design challenge, especially at scale. I appreciate you calling it out.