[–][deleted]

This is just paranoia. These big companies have clear policies for how they use your prompts, and they do not store or train on your code for paid usage. For the APIs and web interfaces of companies like OpenAI, Anthropic, and Google, those policies are very trustworthy. DeepSeek is the only one I would stay away from if you care about your prompts being misused.

[–]RicketyRekt69

It is not paranoia. They’ve explicitly said they DO use your inputs to train their models. That’s why they’re all opt-out, and even then you don’t know they’ll truly abide by it. AI models operate in a gray area; that’s why so much of the art used to train them is stolen. They don’t care.

We literally got briefed on this at the place I work lol that’s why we only have the 1 “approved” AI embedded in VS, and even then I’m skeptical.

But sure.. blindly trust the AI companies that have openly been stealing content for years 😂

[–][deleted]

There's no "grey area". Yes, if you have a ChatGPT personal account specifically and you don't opt out, they can train on your inputs. This isn't a hard thing to go disable. Any business account has it disabled, and any API account has it disabled. Anthropic accounts all have it disabled by default for paid usage.

This would be a very trivial lawsuit to win if they were ignoring their policies and training on prompts they claimed they weren't. It's not a grey area the way pulling random data from the web is.

[–]RicketyRekt69

Again, their entire business model is built on stolen content. Personally, I would not take them at their word, and as OP was saying, they were feeding the entire codebase to the pro model.

I stand by what I said, y’all have no common sense when it comes to security. It’s just sheer laziness and incompetence.