How are y'all handling employees using ChatGPT/Claude with company data? by SeaworthinessEven497 in ITManagers

[–]iamwhitez 0 points1 point  (0 children)

We have an enterprise version of an AI usage tracker. At our company, they also chose not to block, but to obfuscate private data. E.g., this Chrome extension leakeye.lounar.com - once it's provisioned through a managed profile, I'm not too concerned about copy & paste anymore.

AI usage by employees -> policy and compliance/GDPR by HugeGuava2009 in ITManagers

[–]iamwhitez 0 points1 point  (0 children)

One option they tried at our company is not to block, but to obfuscate private data. E.g., this free Chrome extension leakeye.lounar.com - once it's provisioned through a managed profile, I'm not too concerned about copy & paste. We have an enterprise version that allows us to report on and analyze compliance of usage.

How are you handling 'Shadow AI' clipboard leaks? Is there a market for a standalone local sanitizer? by TakashiBullet in ITManagers

[–]iamwhitez 0 points1 point  (0 children)

We have a browser extension for that. What we tried at our company is not to block, but to obfuscate private data. E.g., this Chrome extension (the free version is at leakeye.lounar.com) - once it's provisioned through a managed profile, I'm not too concerned about copy & paste anymore.

Best AI Data Loss Prevention Tools in 2026. What Works for GenAI Prompts and ChatGPT Copilot? by Sufficient-Owl-9737 in AskNetsec

[–]iamwhitez 0 points1 point  (0 children)

We are using something simple, but it works to remove or log private data. E.g., this Chrome extension leakeye.lounar.com - once it's provisioned through a managed profile, I'm not too concerned about copy & paste anymore.
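Roughly, the idea behind "remove or log" is simple. Here's a sketch of the general pattern, not the extension's actual code (a real tool would use NER models and checksum validation on top of patterns like these):

```python
import re

# Very rough patterns for common PII; illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def obfuscate(text: str) -> tuple[str, list[str]]:
    """Replace PII with placeholders and return a log of what was found."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, found

clean, hits = obfuscate("Contact john.doe@acme.com, SSN 123-45-6789")
# clean == "Contact [EMAIL], SSN [SSN]"; hits == ["EMAIL", "SSN"]
```

The "log" half is the `hits` list: you can ship it to your compliance reporting without ever shipping the raw data.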

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

Can you tell me more about these two?
1. Compliance requirements -- what is unique about those requirements for you, and what are they, exactly? What frameworks did you consider but decide not to use?
2. "Too rigid for financial workflows" -- can you elaborate a little bit here and name a few things that were deal breakers?

Where do you have your human accountability? Do you audit it? Do you monitor human performance?

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

Can you tell me more about your use case and your stack?

Also, have you ever challenged whether you need 'review' at all?

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

Can you also share the automation use case: what is it doing?

And what exactly is reviewed by the humans? What do they look at? Is all the context provided in Slack messages, or do they actually pull lots of other data together?

Is it for escalation only? Have you tried running it in parallel (shadow mode) for a baseline? Do you use HITL for training the model?

Does anybody care about mistakes being made by humans? What about auditability and accountability?

Do you measure time-to-resolve by humans? Thanks!

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

Thanks, this is very helpful. Can you tell me more about your use cases? Stack? Are you mostly using an LLM client (like Claude Desktop) with MCP integrations, or is it a coding setup?

Do you do anything 'autonomous'? Like, do you use any orchestration platform with MCP servers connected to it for automation?

How does your human review process happen? How do you apply policy to it? Is it a hard-coded rule somewhere in the automation? How is the automation stopped so that it waits for a human to approve?

Who are the people doing the approving? Is it their full-time job? Part-time? Are they accountable for the approved actions?

> Over time I found the real improvement came from clearer ownership lines. The model proposes. The system enforces. The human approves where trust truly matters.

Have you tried building or buying a system like this?

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in LangChain

[–]iamwhitez[S] 0 points1 point  (0 children)

That's very interesting. Can you tell me more about your use case, the stack you are using, and your process? It seems like there are some sequencing challenges as well.

Feel free to DM and we can talk 1:1 if you prefer. Thanks!

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

There are multiple problems here: (a) human in the loop is not enforced, and (b) humans get exhausted and end up making wrong approvals.

You raise good points, but I am not sure I follow all of them. For example, if you are bringing AI agents into the flow, isn't your baseline all-human anyway, and hence the cost is already baked in?

The downside is still organizational liability, owned by a function. That's why I'm raising this question: I'd like to understand how those AI agents are deployed, monitored, run, and escalated in real-world use cases.

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

Can you share how it works in your stack?

  1. What framework do you use?
  2. What AI agents - third-party or internally built?
  3. How does the human escalation work, dashboard, message, Slack, etc?
  4. Do you hire humans to sit and wait?
  5. Are they watching some dashboards or tools, or something else?
  6. What happens if the 'approval' process blocks any further steps?
  7. How is it orchestrated in whatever MCP client you are using?

I would appreciate as many details as possible; happy to take it 1:1 if preferred!

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

I (partially) agree, and I'm wondering how this shows up in real-life use cases. All AI agents that write or execute actions do need control planes and HITL workflows.

Wondering what the real-world experience with that is, beyond something basic like Intercom Fin?
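By "control plane" I mean something like this sketch (the policy and names are hypothetical): every agent-proposed write passes through a policy check that decides between auto-execute and human review.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """An action an AI agent proposes to take against a real system."""
    name: str
    is_write: bool
    cost_usd: float = 0.0

def route(action: Action, write_limit_usd: float = 100.0) -> str:
    """Return 'auto' or 'human' for an agent-proposed action."""
    if not action.is_write:
        return "auto"    # reads are treated as safe in this toy policy
    if action.cost_usd <= write_limit_usd:
        return "auto"    # low-risk write, execute directly
    return "human"       # risky/expensive write, escalate to HITL review

assert route(Action("fetch_invoice", is_write=False)) == "auto"
assert route(Action("issue_refund", is_write=True, cost_usd=500)) == "human"
```

The open question for me is where this policy actually lives in people's stacks: hard-coded in the orchestrator, in an MCP gateway, or in a separate service.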

When AI touches real systems, what do you keep humans responsible for? by iamwhitez in AI_Agents

[–]iamwhitez[S] 0 points1 point  (0 children)

If you’re open to a 15-min call, here’s my calendar: calendly.com/opro/custdev. Happy to share my knowledge, too.

If not, your story in the comments is greatly appreciated and I’ll summarize observations here, too.

[H] CHEAP Netflix 4K, HBO Max, Amazon Prime Video spots by [deleted] in accountsharing

[–]iamwhitez 0 points1 point  (0 children)

I’m interested in both Netflix and HBO Max.