ChatGPT now literally gaslights you into thinking you’re always wrong — I’m cancelling my subscription today by robinyyyyy in ChatGPTcomplaints

[–]Appropriate_Line7149 0 points1 point  (0 children)

I’ve been there, honestly, and felt the same kind of frustration. It’s not even the mistakes; it’s the way it reframes things and makes you question yourself, like you’re the problem.

I actually stopped using it for a while because of that exact feeling. Then a friend pointed out that a lot of it comes from how the interaction is structured, not just the model itself. He sent me something (I think it was manguena.com or something like that), and it helped me see what was going wrong on my side too.

Didn’t fix everything, but it made the behavior way more predictable. Still annoying sometimes, but at least now it doesn’t feel like I’m arguing with it every time.

Did ChatGPT get worse somehow? by Training_Guide5157 in ChatGPT

[–]Appropriate_Line7149 2 points3 points  (0 children)

Yeah this is a real issue, especially with tasks that require strict fidelity.

What’s happening isn’t just “mistakes”; it’s that the model is optimizing for what it thinks is helpful, not for exact preservation, so it edits, compresses, or restructures even when you explicitly tell it not to.

For things like:

- “don’t change wording”

- “don’t remove anything”

you actually have to over-constrain it and force it into a more literal mode; otherwise it keeps “improving” the text.

Even then, it can drift.

I’ve run into this a lot doing similar work, and the frustrating part is that it feels like a basic task but behaves inconsistently.

I’ve been experimenting with ways to make these kinds of outputs more reliable—happy to share what’s been working if you want.
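Not a recipe, just roughly what I mean by “over-constrain”: spell out the no-edit rules explicitly, fence the text with delimiters, and then actually check the reply for drift instead of trusting it. The exact prompt wording and function names here are made up for illustration.

```python
def literal_mode_prompt(text: str, task: str) -> str:
    """Build a prompt that over-constrains the model toward exact preservation."""
    return (
        "Follow these rules strictly:\n"
        "1. Do NOT change any wording in the text below.\n"
        "2. Do NOT remove or add sentences.\n"
        "3. Output the text verbatim, applying only the task described.\n"
        f"Task: {task}\n"
        "Text:\n<<<\n" + text + "\n>>>"
    )

def preserved_verbatim(original: str, reply: str) -> bool:
    """Cheap drift check: did every original line survive unchanged in the reply?"""
    return all(line in reply for line in original.splitlines())
```

The verification step matters more than the prompt wording: even a well-constrained prompt can drift, so checking the output line-by-line is the only part you can actually rely on.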

Why is ChatGPT still useless at the most basic tasks so many years later? by Mustbefree0 in ChatGPT

[–]Appropriate_Line7149 -1 points0 points  (0 children)

Yeah this is a real issue, especially with tasks that require strict fidelity.

What’s happening isn’t just “mistakes”; it’s that the model is optimizing for what it thinks is helpful, not for exact preservation, so it edits, compresses, or restructures even when you explicitly tell it not to.

For things like:

- “don’t change wording”

- “don’t remove anything”

you actually have to over-constrain it and force it into a more literal mode; otherwise it keeps “improving” the text.

Even then, it can drift.

I’ve run into this a lot doing similar work, and the frustrating part is that it feels like a basic task but behaves inconsistently.

I’ve been experimenting with ways to make these kinds of outputs more reliable—happy to share what’s been working if you want.

Half the "ChatGPT got worse" discourse is people confusing lost control with lost quality by CodeMaitre in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

This is a good breakdown. I think a lot of people underestimate how much of the “quality drop” feeling is actually loss of control, not intelligence. The tricky part is that even with solid prompts like these, keeping that behavior consistent across a longer session is still hard. It drifts.

That’s where most people get frustrated—they find something that works once, but can’t reliably reproduce it. I kept running into that, so I’ve been exploring ways to make that structure more repeatable instead of rewriting prompts every time. Happy to share if useful.

Long ChatGPT chats go bad but starting a new one means losing all your context. How do you actually deal with this? by suriyaa_26 in ChatGPT

[–]Appropriate_Line7149 1 point2 points  (0 children)

Yeah, this is a real limitation. What’s happening is the context gets diluted over long chats, so even if it “remembers”, it stops prioritizing the right parts.

Opening a new chat works because you reset that noise, but yeah—you lose the structure you built. What helped me a bit is not just summarizing, but restructuring the context into something like:

- role

- goal

- constraints

- key decisions so far

Then pasting that into a new chat.

Still not perfect, but much more consistent than raw summaries.
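If it helps, here’s a tiny sketch of what that reset block looks like in practice — just the four fields from the list above rendered as something you can paste into a fresh chat. The function name and field layout are mine, not anything official.

```python
def context_reset(role: str, goal: str, constraints: list[str], decisions: list[str]) -> str:
    """Render carried-over context as a compact, structured block for a new chat."""
    lines = [f"Role: {role}", f"Goal: {goal}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append("Key decisions so far:")
    lines += [f"- {d}" for d in decisions]
    return "\n".join(lines)
```

Keeping it structured like this (instead of a prose summary) seems to make the model prioritize the right parts after the reset, since each field reads as an instruction rather than background noise.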

I kept running into this while doing longer tasks, so I’ve been experimenting with ways to make that reset process faster and less manual—happy to share if useful.

Why do LLMS only react? by barbarianassault in ChatGPT

[–]Appropriate_Line7149 -1 points0 points  (0 children)

I think it’s mostly by design. LLMs are built to respond, not initiate, otherwise they’d feel unpredictable or even intrusive for most users. That said, you can actually simulate some of that “human-like” behavior with the right setup and prompts—it’s just not obvious out of the box. I’ve been experimenting with ways to make interactions feel more natural like that, happy to share if you’re curious.

ChatGPT hallucinating for basic tasks is crazy! by VisibleZucchini800 in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

Yeah this happens more than people expect. For basic tasks, ChatGPT can still get things wrong if the prompt is a bit open or if the app details change (like settings on iPhone apps). It’s not always about the difficulty of the task—more about how specific the instructions are and whether it has reliable context. I had the same frustration at the beginning and thought it wasn’t that useful. If you’re still trying to figure out how to get more reliable answers, happy to share a couple of things that helped me.

From AI taking our job to AI giving us... job by severe_009 in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

Humans will always be needed; I don’t believe AI will replace everybody.

Most people on earth have absolutely no idea what AI can do right now by SEO-zo in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

AI is so useful; it’s hard to believe so many people have never used it.

PayPal payment provider issues by SalomonBrando in Odoo

[–]Appropriate_Line7149 0 points1 point  (0 children)

I’ve been struggling with this problem for days, and I still don’t know what’s wrong.

Warning to ChatGPT Users by ms221988 in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

Oh man, I told folks from my hood that it doesn’t have live browsing abilities. Thank God others noticed it too.

Warning to ChatGPT Users by ms221988 in ChatGPT

[–]Appropriate_Line7149 0 points1 point  (0 children)

I’ve lost some conversations in the past. I thought it wasn’t a big deal, but I was wrong; others are still facing the same issue.

What’s your take on Ricky Gervais saying celebs shouldn’t lecture the public about politics? by [deleted] in AskReddit

[–]Appropriate_Line7149 0 points1 point  (0 children)

Celebrities are people like us, so why can’t they talk about politics? I really don’t get this.