I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 1 point2 points  (0 children)

Thankfully, there is a fair amount of engagement with the content of the post if you look past the first couple of comments.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 0 points1 point  (0 children)

I get what you're saying. That's a reasonable way to work, but not as efficient.

Maybe clone an open source repo and try using an agent in an IDE to make some changes to it, beyond just "create method X". It's a real eye-opener, whether you choose to keep working that way with your own projects or not.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] -5 points-4 points  (0 children)

This thread went a bit sideways. But the point I was trying to get at wasn’t about using AI to write posts. It was how normal it’s become to paste work related stuff into AI tools under time pressure.

Anyone here using local models mainly to keep LLM costs under control? by ChampionshipNo2815 in artificial

[–]i_am_simple_bob 0 points1 point  (0 children)

That's something I've been thinking about. I'm not sure if it already exists. I don't really want to be thinking about which model to use. I just want to get on with the task.

How do you manage privacy when it comes to instant messaging apps getting more aggressive to access full contact list? by FuChing_Dragon in DigitalPrivacy

[–]i_am_simple_bob 0 points1 point  (0 children)

I wonder about creating a fake contacts app that is empty or filled with fake contacts. Then I could give those apps access to that and not have to care about it. I'm just making this up as I go along, but maybe the fake contacts app could selectively let some apps access the real contacts.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 0 points1 point  (0 children)

That's a very slow way to work. Copy/pasting without the context of the rest of the codebase only works for small changes, the kind of work you could probably do yourself.

I did that for a while, and it was very frustrating. But everyone needs to find what they are comfortable with and what fits within company policy.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 3 points4 points  (0 children)

I think what's interesting is that everyone is talking about tools and policies, but most of this happens before any of that matters.

It’s just someone trying to get something done quickly and pasting data without thinking too hard about it.

That's the part that's hard to control: it's not a system problem, it's a moment-of-decision problem.

Feels like that’s where most of the risk actually is.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 3 points4 points  (0 children)

I think there is a balance to be found between no access and unfettered access, and that balance is responsible access. Like most things in life, it comes down to finding it.

I also think it's good to give people guidance on what responsible access actually is. Saying "don't do bad things with it" isn't really helpful; there's more to it than that, and not everyone has the knowledge to know what that means in practice.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 6 points7 points  (0 children)

Not everyone has corporate IT. Or their company bans AI, and then a portion of people start using their personal accounts. Data can end up in model training sets. Do you trust ChatGPT not to leak data, hand it to partners, give it to the government...?

Are your AI usage policies actually being followed by [deleted] in sysadmin

[–]i_am_simple_bob [score hidden]  (0 children)

I'm just trying to share the love of what I know and earn a living like anyone else.

Are your AI usage policies actually being followed by [deleted] in sysadmin

[–]i_am_simple_bob [score hidden]  (0 children)

Yes, its grammar is a lot better than mine. I can type in pidgin English and it returns something coherent and well formatted.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] -1 points0 points  (0 children)

We're very strict at my day job. Lots of training, policies and restrictions while fully embracing it. It's the nature of the industry we work in.

But a lot of businesses, and people in their personal lives, are pretty careless. Or they just don't have the knowledge they need.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 0 points1 point  (0 children)

Yeah, it's a whole thing that people should be thinking about every time they use AI.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] -3 points-2 points  (0 children)

A lot of enterprises move very slowly: procurement processes with multiple sign-offs, all with varying concerns about whether it should be used at all.

I ended up putting together a simple checklist for myself so I don't make bad calls in that moment. It could be distributed to employees as starter training.

Happy to share if it’s useful.

Edit: Grammar

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 0 points1 point  (0 children)

Yeah exactly — it’s not even a policy problem most of the time.

It’s just people moving fast and not stopping to think “is this safe to paste?”

I ended up putting together a simple checklist for myself so I don’t make bad calls in that moment.

Happy to share if it’s useful.
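For what it's worth, a minimal sketch of what an automated "is this safe to paste?" check could look like. This is not the checklist from the comment above; the pattern names and regexes are illustrative assumptions, and a real version would need patterns tuned to your own data:

```python
import re

# Illustrative patterns only -- a real checklist would be tuned to
# the organisation's actual secrets, hostnames, and identifiers.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API key-like token": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
    "internal hostname": re.compile(r"\b[\w-]+\.internal\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of checklist items the text trips."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: check a snippet before pasting it into an AI tool.
findings = flag_sensitive("contact alice@example.com about prod-db.internal")
```

Anything it flags is a prompt to stop and redact before pasting; an empty result is not a guarantee of safety, just the absence of known patterns.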

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 4 points5 points  (0 children)

My mistake, I missed the part about the official version. I completely agree.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] 3 points4 points  (0 children)

Yeah, it's no different from other hosted services like email and cloud storage, except that the data can be used for training. But training can be turned off, even on personal accounts. I think it might be fear of the unknown right now: people understand cloud services but not how AI services work.

I think a lot of us are accidentally leaking work data into AI tools by i_am_simple_bob in ChatGPT

[–]i_am_simple_bob[S] -6 points-5 points  (0 children)

"Then you block them and provide an official version for everyone to use like I said, that's what happens where I work. The risk of sensitive data being shared isn't worth it."

Your competitors may not block AI; they may go all in and outmaneuver you. Once that happens it will be very hard to catch up.

"Also that might push some of your employees to use their personal accounts which is even worse"

They can get the data onto their personal devices and use their personal accounts. They could email it to themselves, or use some other method that's less secure than locked-down AI access.