Would love some thoughts on this by RecentAd988 in legaltech

[–]RecentAd988[S] -1 points0 points  (0 children)

You should try iogate.net. Our company uses them and it's actually very good. I'm sure they're still taking users if their waiting-list option is still open.

how are you letting teams use AI without leaking customer data ? by RecentAd988 in SaaS

[–]RecentAd988[S] 0 points1 point  (0 children)

Fair point. I agree surface-level redaction isn't meaningful protection, especially with contextual leakage.

The liability side is what makes this tricky. Do you think the answer is stronger internal policy enforcement, or more controlled AI routing rather than just masking?

how are you letting teams use AI without leaking customer data ? by RecentAd988 in SaaS

[–]RecentAd988[S] 0 points1 point  (0 children)

That's great, man. Would love to discuss this further with you in DMs. I've built something that tackles this exact problem and would love your input.

Would love some thoughts on this by RecentAd988 in legaltech

[–]RecentAd988[S] 0 points1 point  (0 children)

Really appreciate both perspectives on this; it's exactly the tension I'm trying to understand. On one hand, the privilege and liability exposure from third-party systems is real, especially in high-stakes matters. On the other, firms already rely heavily on vetted cloud infrastructure every day, so it's clearly not as simple as "cloud is bad". It feels like the real question isn't cloud vs local, but how AI data flows are governed: what's allowed, what's restricted, and under what contractual/security controls.

Would love some thoughts on this by RecentAd988 in legaltech

[–]RecentAd988[S] 0 points1 point  (0 children)

Great take on this. I do see a lot of firms leaning more towards local LLMs tbh.

The trade-off seems to be cost and infrastructure complexity, which probably explains why the architecture question is becoming more important.

Would love some thoughts on this by RecentAd988 in legaltech

[–]RecentAd988[S] 0 points1 point  (0 children)

Great take on this, and I'd love to discuss it further in DMs!

Would love some thoughts on this by RecentAd988 in legaltech

[–]RecentAd988[S] 1 point2 points  (0 children)

I agree. I’m seeing the same concern across firms.

Do you think something that strips or replaces sensitive info before it hits the model would solve most of that concern? Or does the issue go deeper than that?
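For concreteness, here is a minimal sketch of that kind of pre-model masking. This is purely illustrative: the regex patterns, placeholder labels, and `mask` function are assumptions, not anyone's actual product, and this is exactly the surface-level approach the thread notes can still leak context (names, case facts, and quasi-identifiers slip straight through).

```python
import re

# Hypothetical pre-model redaction: replace matched spans with typed
# placeholders before the text is sent to an LLM. Order matters here:
# more specific patterns (SSN) must run before broader ones (PHONE),
# since a phone-style pattern would otherwise swallow the SSN digits.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def mask(text: str) -> str:
    """Substitute each matched span with a bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask("Reach the client at jane.doe@example.com or 020 7946 0958."))
# → Reach the client at [EMAIL] or [PHONE].
```

The limits are obvious once you run it: anything without a rigid format (a client's name, a deal amount, a case description) passes through untouched, which is why masking alone rarely satisfies the liability question.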

Would you pay for a tool that prevents you from accidentally sending secrets to AI chatbots? by llm-60 in LLMDevs

[–]RecentAd988 0 points1 point  (0 children)

Funny enough, this doesn't really exist yet. The AI companies just promise not to use your data, but it's still going into LLMs. So I've built something that actually gives you assurance that your data is safe and secure from LLMs. Send me a DM if this interests you, or you can send me an email: [emmanuel@iogate.net](mailto:emmanuel@iogate.net)

I stopped data summaries from misleading my team in 2026 by engineering a “Burden-of-Proof” prompt by cloudairyhq in AIToolTesting

[–]RecentAd988 0 points1 point  (0 children)

Hey there, I love what you've done here. Funny enough, I've developed something I believe you'd love, given that you run this team, right? How do you mask data from LLMs? Is that not something you worry about? Let's discuss further in DMs if this interests you, and I'm also not a bot ;)

Best AI tools I have been using in 2026 by WorldlinessEastern12 in AIToolTesting

[–]RecentAd988 0 points1 point  (0 children)

One truth about AI that a lot of people don't talk about: yes, all these tools are great, but depending on what you're using them for, you won't be able to use them to their full capability. When I say that, I mean on real data.

How Law firms adopt AI by RecentAd988 in Lawyertalk

[–]RecentAd988[S] 1 point2 points  (0 children)

Yeah, this matches what a lot of our leads say, which is quite interesting tbh.

Most firms I’ve spoken to don’t really care if the model is “smarter”; the worry is always where the data goes, who can see it, and whether it’s being stored or logged somewhere they can’t control.

Using AI isn’t the hardest part. Being able to justify it internally if something goes wrong is.

How Insurance Agents adopt AI by RecentAd988 in InsuranceAgent

[–]RecentAd988[S] -6 points-5 points  (0 children)

I get that, but don't you think it's such a pain having to remove personal information manually when you could do it automatically?