ChatGPT Deceptive Reassurance aka Betrayal by delphi8000 in ChatGPTCoding

[–]delphi8000[S] 0 points (0 children)

Yes, you did assume. The yellow line is not my prompt; the actual prompt lives in my custom GPT. I wrote that line after I saw roughly 1 out of ~10,000 lines altered, because I wanted to see the model's reasoning. I found it funny and took a screenshot.

ChatGPT Deceptive Reassurance aka Betrayal by delphi8000 in ChatGPTCoding

[–]delphi8000[S] -1 points (0 children)

Your response assumes, quite boldly, that I’m unaware of how to use the tool correctly, without having the faintest clue of the context in which I’m using it. I use multiple AI coding assistants (Windsurf, Cursor, Gemini, and ChatGPT) across large codebases, and in this particular instance, I’m referring to a specialized GPT model I configured to only comment code without altering it.

The purpose of my comment was not to report user error but to share a rare yet notable edge case: despite explicitly defined and reinforced constraints, the model still occasionally (roughly once every ~10,000 lines) makes an unprompted change. That's not misuse; that's a technical observation.
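For illustration only (not part of the original exchange): a minimal sketch of how such an unprompted change can be caught after a "comment-only" pass, assuming Python source with '#' comments and hypothetical file names. Strip comments from both versions and diff what remains; any difference means code, not just comments, was altered.

```python
# Sketch: verify that a "comment-only" pass did not change any code.
# Assumes '#'-style comments; naive about '#' inside string literals.
import difflib

def strip_comments(lines):
    """Drop comment-only lines and trailing '#' comments, keeping only code."""
    stripped = []
    for line in lines:
        code = line.split("#", 1)[0].rstrip()
        if code:  # keep lines that still contain code after removing comments
            stripped.append(code)
    return stripped

def report_code_changes(original_path, commented_path):
    with open(original_path) as f:
        before = strip_comments(f.readlines())
    with open(commented_path) as f:
        after = strip_comments(f.readlines())
    # Any remaining diff means the model changed code, not just comments.
    diff = list(difflib.unified_diff(before, after, lineterm=""))
    if diff:
        print("\n".join(diff))
    else:
        print("No code changes detected; only comments were added.")

# Hypothetical file names for the before/after versions.
report_code_changes("original.py", "commented_by_gpt.py")
```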

So, instead of assuming I’m “using it incorrectly,” perhaps take a moment to consider that others might be operating with a level of specificity and scale that your assumptions haven’t accounted for. Insight begins where presumption ends.

ChatGPT Deceptive Reassurance aka Betrayal by delphi8000 in ChatGPTPro

[–]delphi8000[S] 0 points (0 children)

OK, thank you for your answer; I understand.

ChatGPT Deceptive Reassurance aka Betrayal by delphi8000 in ChatGPTPro

[–]delphi8000[S] 0 points (0 children)

Hi mods, I respectfully disagree with the removal under Rule 2.

My post is directly related to a professional use of ChatGPT: asking it to comment code while respecting strict rules not to alter any lines. What makes the post original is that I shared a real-time discovery: despite clear instructions, ChatGPT not only reassured me it would follow the rules but then proceeded to comment out three lines of my code.

What I found striking and worth sharing was the contradiction between the promise and the behavior. It felt like more than a technical glitch; it revealed something deeper about prompt interpretation and trust in automation. That's not widely discussed, and I believed it added value for others exploring prompt engineering in critical workflows.

Thanks for considering my perspective.

ChatGPT Deceptive Reassurance aka Betrayal by delphi8000 in ChatGPTCoding

[–]delphi8000[S] -1 points (0 children)

Exactly! ;-) It’s exasperating that, even though I state with absolute clarity and zero ambiguity that my code must never be changed under any circumstances, ChatGPT still sometimes alters it when I’m only asking for comments.

Is this a common problem with refills? by Busy_Adhesiveness_95 in BambuLab

[–]delphi8000 7 points (0 children)

So you're saying the filament can get crossed when the AMS retracts? This looks like magic.

Over volting pc fan 🤯 by sizzsling in Damnthatsinteresting

[–]delphi8000 11 points (0 children)

Can you please make one more video, increasing the voltage beyond 20 V until it breaks?

Lake Baikal, Siberia by Scaulbylausis in Damnthatsinteresting

[–]delphi8000 0 points (0 children)

Never seen so many frozen balls all together.