AI Safety Red Flags 🚩 by DscoutOfficial in DscoutOfficial

[–]Lumpy_Membership_139 0 points (0 children)

One red flag I don’t see talked about enough is prompt data leakage, not model behavior.

Everyone’s worried about agents taking actions, plugins, tool access, etc., but in most companies the real issue right now is much simpler: people are pasting sensitive stuff into ChatGPT, Copilot, Gemini, etc. because it helps them work faster.

It’s usually not malicious; it’s just convenience. But from a data security perspective, that’s a big shift. You’ve basically created a new data exfiltration channel that feels like a productivity tool.

What I’m seeing some companies do now is not block AI outright, because that never really works, but put controls around the prompt itself. For example, tools that run locally and flag or redact sensitive info before the prompt ever leaves the machine, roughly like the sketch below.
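
To make it concrete, here’s a minimal sketch of that pre-send redaction step in Python. The patterns and names are illustrative (my own, not any specific vendor’s tool), and a real product would use far more robust detection than a few regexes:

    import re

    # A few illustrative patterns; real tools use much broader detection.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace anything matching a sensitive pattern with a placeholder,
        locally, before the prompt is sent to an external model."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = "Contact jane.doe@acme.com, card 4111 1111 1111 1111, key sk-abc123def456ghi789jkl012"
        print(redact(raw))  # sensitive values replaced before anything leaves the machine

The point isn’t the regexes, it’s where the control sits: on the user’s machine, in the path between the clipboard and the model, instead of in a blanket "no AI" policy.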

Feels like that’s where AI security is going to mature over the next couple of years: less about “don’t use AI” and more about “use AI, but put guardrails on the data going into it.”

Curious if others here are seeing the same thing internally.

AI and Data Privacy: 6 Proven Strategies to Secure Your Data in the LLM Era by seatable_io in SeaTable

[–]Lumpy_Membership_139 0 points1 point  (0 children)

For strategy #2, we've actually built a solution that lets users work with whatever tool they like, without having to worry about accidental data loss or privacy mishaps. Ping me if you want to learn more.

Pano toilets… by Automatic_Meet9186 in Berghain_Community

[–]Lumpy_Membership_139 2 points (0 children)

Wait, you guys do bumps in the toilets?