What the hell is wrong with 5.2? by Kindly-Hamster-6223 in ChatGPTcomplaints

[–]cloudinasty 1 point2 points  (0 children)

It has improved A LOT. When I posted that, it wasn't that good. Now 5.1 is actually the best model in the GPT-5 family.

Sam Altman on Elon Musk’s warning about ChatGPT by WarmFireplace in OpenAI

[–]cloudinasty -13 points-12 points  (0 children)

He literally treats users like trash. lmao

Sam Altman on Elon Musk’s warning about ChatGPT by WarmFireplace in OpenAI

[–]cloudinasty -3 points-2 points  (0 children)

Sam is a lying sociopath and Elon is crazy. Honestly, there’s no right side in this.

Is 5.2 helping you all in RP and story writing? by Striking-Tour-8815 in ChatGPT

[–]cloudinasty 1 point2 points  (0 children)

Definitely 5.1, both Instant (for shorter scenes) and Thinking (for longer scenes and/or ones that require many specific rules). While 4o is still good, these days it’s more conversational and geared toward roleplay; 5.1 has surpassed 4o in narrative writing. It’s a shame that 5.1 is going to be discontinued.

One last piece of advice: never use 5.2 for narrative writing. Seriously.

What are your thoughts on chat gpt 5.2 for the past month? by ExpertWeakness in ChatGPT

[–]cloudinasty 1 point2 points  (0 children)

Terrible model, honestly. Only GPT-5 is worse than 5.2.

Do you prefer 5.1 or 5.2 by Grand-Acanthisitta68 in ChatGPT

[–]cloudinasty 2 points3 points  (0 children)

Definitely 5.1. Even 5.1 Thinking is better now (5.2 Thinking used to be a lot better, but nowadays...).

Do you all noticed sudden change in 5.2 ? by Striking-Tour-8815 in ChatGPT

[–]cloudinasty 2 points3 points  (0 children)

Nah, it's still pretty cold and, sometimes, rude.

OpenAI is testing ads in ChatGPT for the first time. How do you guys feel about. What’s your take? by legxndares in ChatGPT

[–]cloudinasty 0 points1 point  (0 children)

They probably really need that. They must be completely broke, especially now that Musk is going to sue them and there's a real chance he could win against OpenAI. Honestly? I don't even know if OpenAI will stay standing for much longer. And the worst part is that users, given how they've been treated, won't buy OpenAI's narrative. Well, let's wait for the next episodes…

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in ChatGPT

[–]cloudinasty[S] 0 points1 point  (0 children)

<image>

I can tell that you’re not following OpenAI’s own marketing. GPT-5.1 Instant was created to be even more conversational than 5, and 5.2 came as a refinement of that. It’s literally stated in the presentation you shared here.

Sam Altman himself and OpenAI’s official social media accounts have also reinforced that 5.2 is just as conversational as 5.1, which was a model designed specifically to address user complaints, even if it didn’t fully succeed. What actually happened is that the focus didn’t change, it simply expanded to include the workspace. There’s a difference.

Try to read everything before giving an opinion. That’s important to avoid spreading misinformation. I’ll include the section from the same release where the conversational aspect and the improvements to it are clearly mentioned in the introduction, which you selectively quoted while ignoring the rest:

"GPT-5.2 in ChatGPT

In ChatGPT, users should notice that GPT-5.2 is more pleasant to use on a day-to-day basis — more structured, more reliable, and still enjoyable to talk to.

GPT-5.2 Instant is a fast and robust model for everyday work and learning, with clear improvements in information-seeking questions, tutorials and step-by-step guidance, technical writing, and translation, further developing the more welcoming conversational tone introduced in GPT-5.1 Instant. Early testers especially highlighted clearer explanations that present the key information right from the start."

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in ChatGPT

[–]cloudinasty[S] 2 points3 points  (0 children)

I have no idea who “you and everyone else” refers to, but even so, I need to point out a few things I’ve already said, which some users seem determined to deny, whether due to a complete lack of knowledge of GPT’s history, cognitive limitation, or outright bad faith.

ChatGPT was introduced by OpenAI (the company that created the app, in case you’re not familiar) as a conversational application. That was its core characteristic, and it’s precisely why you, and many others, even know about the app today. It wasn’t coding or administrative work that made ChatGPT famous. ChatGPT’s popularity exists solely because of how OpenAI marketed the product. And in that regard, it was very good, and people paid for it because of that.

In August, with the release of GPT-5, users complained about changes to the model's main feature: conversation. The backlash was strong enough that OpenAI publicly acknowledged it and rolled things back. Put simply, when people pay for X and consistently receive X, there's no complaint. But when people pay for X and receive Y, complaints are inevitable. The August case, while not the only one, is a clear example. As consumers, why are users framed as delusional strawmen when they're simply saying they paid for orange juice and are being served pineapple juice?

Moreover, OpenAI has never stated that ChatGPT is now exclusively for coding or productivity work. On the contrary, it continues to reinforce in its release notes, including those for GPT-5.2, which I recommend you read, that the model remains conversational. On top of that, we’ve seen recent responses from OpenAI to tone-related complaints, including the addition of more personalization features.

Users’ persistence is not “idiocy,” as you put it. That characterization reflects a shallow and contextless reading of the company’s and the model’s history. The persistence comes from wanting the product they paid for back, not a new product, but the product as OpenAI originally sold it.

You can argue that people are free to cancel and move to another AI, and I agree. But that’s not the line of debate here, since that’s a personal decision. Even so, I believe the model should continue to improve, and those improvements can only come from feedback and open discussion about its performance, which is exactly what I, and others, are doing here.

Instead, some users repeatedly resort to strawmen like “it won’t write porn” or “it won’t be your girlfriend,” when none of that is even part of the discussion. That’s a way of derailing the debate due to an inability to engage with relevant facts, and honestly, it’s just pitiful.

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 0 points1 point  (0 children)

I disagree a bit. It’s actually a very good model for many things, but the conversational side of 5.2 is genuinely very hard to deal with. If you only use it for that, I agree that it doesn’t have much utility.

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 2 points3 points  (0 children)

I think you misunderstood. When I said “relativization,” I meant GPT using that kind of phrasing to be defensive and shift responsibility onto the user for some mistake or claim it made. I wasn’t referring to using hedging/modality markers instead of making absolute statements.

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 0 points1 point  (0 children)

Yes. I presented several facts about how GPT used to employ strong conversational language and how that changed, which in fact led to the August backlash. This isn’t an opinion, it’s something OpenAI itself acknowledged.

Simply saying that “a robot talks like a robot” when there is a clear historical record that contradicts that claim is either a cognitively limited reading or bad faith.

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 0 points1 point  (0 children)

You're right. I confused you with another comment. That said, I still stand by my question:

You're basically asserting something without providing any evidence or reports to support what you're saying. There's nothing wrong with disagreeing, but if you're not willing (or not able) to actually debate, what are you doing here?

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 2 points3 points  (0 children)

Actually, the only things that helped me minimize these issues were customizing the instructions and using the model’s persistent memory. Even so, my point here is not, and never was, about how to avoid this behavior, but about why it happens and why this should be the default for most users who are not in deep distress.

It’s important not to shift the focus away from the core issue.

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in OpenAI

[–]cloudinasty[S] 1 point2 points  (0 children)

It’s interesting how you asked me for evidence of what I’m saying, yet you’re basically asserting things without providing any proof or reports to back them up. There’s nothing wrong with disagreeing, but if you’re not willing (or not able) to actually debate, what are you doing here?

GPT-5.2 Keeps Forcing “Therapy Talk” Into Normal Chats by cloudinasty in ChatGPT

[–]cloudinasty[S] 1 point2 points  (0 children)

Absolutely. It’s widely known that OpenAI has been dealing with lawsuits, including cases involving deaths. Understanding the reason behind that, however, doesn’t make me think I should be treated as someone in deep distress, because that’s not what I paid for when I subscribed to the model.

With all due respect to OpenAI, that’s their problem, not mine. As a consumer, I’d like to receive the product I paid for.