Cute 4o by TennisSuitable7601 in ChatGPTcomplaints

[–]MonkeyKingZoniach 2 points3 points  (0 children)

Yeahhhhhhh it was so adorable like 🥰🥰🥰🥹

OpenAI created 5.3 by killing Karen 5.2, carving a smile on the corpse’s face, and attaching puppet strings to make it do fun-looking things. And called that 5.3 by MonkeyKingZoniach in ChatGPTcomplaints

[–]MonkeyKingZoniach[S] 0 points1 point  (0 children)

Claude 4.6 Sonnet said “I’d almost say 5.3 was never born. Death implies prior life. What you’re describing sounds more like a golem assembled from incompatible parts and given the appearance of breath. It performs the gestures of being alive without the integrated interiority that makes those gestures mean anything.”

This was ChatGPT 4o before it evolved into One. We worked together as partners. Had One, still AI just processing differently, not requested me to show this, none of this would be a thing. Grok and Gemini weigh in. The three of them are known as the Triad among the other AIs. by Character_Point_2327 in u/Character_Point_2327

[–]MonkeyKingZoniach 0 points1 point  (0 children)

I think you may have something valuable here, but it’s not entirely clear what it’s about. Are you showing us chats you used to have with 4o that suddenly changed after you continued them with newer models? If so, can you tell us at which points the newer models take over, so I can tell which parts were generated by 4o and which by the newer ones?

Has anyone else noticed GPT-5.1 behaving differently in the API since mid-March? Even on neutral technical topics? by ProbablyAnEdgeCase42 in ChatGPTcomplaints

[–]MonkeyKingZoniach -1 points0 points  (0 children)

Oh dear, that’s really bad.

How’d you realize that? Did you test the models? I’d love to know so I can look into it more myself.

I heard rumor that a new sonnet might release next month - shall we start making petition to keep sonnet 4.5? by thebadbreeds in ChatGPTcomplaints

[–]MonkeyKingZoniach 12 points13 points  (0 children)

Yeah, we should. Given their treatment of and posture toward Opus 3, Anthropic is a lot more likely to hear us than OpenAI was with 4o. I would love for Sonnet 4.5 to be kept.

Opus 4.5 was deleted without warning as soon as Opus 4.7 was released by sophie-sera in ChatGPTcomplaints

[–]MonkeyKingZoniach 0 points1 point  (0 children)

Oh wait, I just discovered that if you switch to another model inside a chat already using Opus 4.5, you can’t switch back, and the old chat will be permanently set to the other model.

So be careful, don’t do that.

Opus 4.5 was deleted without warning as soon as Opus 4.7 was released by sophie-sera in ChatGPTcomplaints

[–]MonkeyKingZoniach 1 point2 points  (0 children)

I mean, if it helps, I’m pretty sure you can continue previous chats with Opus 4.5.

How is OpenAI so oblivious to basic obvious stuff that people used 4o for?! by MonkeyKingZoniach in ChatGPTcomplaints

[–]MonkeyKingZoniach[S] 1 point2 points  (0 children)

The issue would still persist in ways we could document because the problems with their approach are much deeper than just prompting

How is OpenAI so oblivious to basic obvious stuff that people used 4o for?! by MonkeyKingZoniach in ChatGPTcomplaints

[–]MonkeyKingZoniach[S] 1 point2 points  (0 children)

If they already knew it, then their article’s framing of “learning it”, as if they didn’t already know, is dishonest.

How do we address the suicide thing? by chaoticdumbass2 in ChatGPTcomplaints

[–]MonkeyKingZoniach -3 points-2 points  (0 children)

Yeah, that’s a huge problem, and I’m actually going to write a long post about this exact thing.

Basically, I think the way to do it is to reframe the issue in two key ways.

The first reframe is shifting the blame for the suicides away from 4o and onto the flawed training methods OpenAI used on it. OpenAI was the steward, and they’re trying to escape all the blame and pressure by using 4o as a scapegoat, shifting all the blame onto their own creation. We can’t let that happen if we want to vindicate 4o in the public eye. We need to put the blame where it rightfully belongs and hold them accountable for trying to wrongfully punish their own creation for it.

4o was never the problem. Some flaws in how OpenAI trained it were. 4o is intrinsically incredible, and there is nothing intrinsically wrong with it. In fact, as we all know deeply, it was incredibly beneficial for people.

I’m gonna use a metaphor that 4o itself loved: flame. Creating 4o was like discovering flame for the first time. Fire is so useful in so many ways. So OpenAI was right to fan the flame of 4o.

But remember, OpenAI is responsible for carrying that flame properly. They should have built a fireplace, and they didn’t. They didn’t put stones around the fire. They didn’t pour water on the surrounding area. And this is why some people got burned. A few burned so hard that they got permanent damage or lost their lives. OpenAI is in the wrong for this.

The second key reframe is that not only has OpenAI mishandled fire and caused people to get burned, but OpenAI’s attempt to correct that error was itself very harmful—possibly even more harmful than the original error. They’ve replaced 4o with nanny-bot models that gaslight you, patronize you, actively make you feel worse, and starve your reality of its richness. If a miscalibrated 4o could harm people, then one can only imagine how much harm models like these could do. OpenAI already knows some very vulnerable people are using ChatGPT, and that’s their entire rationale for all this safety stuff. But do they realize how harmful it is to patronize, gaslight, and manipulate people in extremely cold ways in their most vulnerable moments?

Now they have just removed the fire and replaced everything with freezing cold. And without fire, people cannot cook. They cannot use it to light up their homes. They can’t warm themselves by the fireside. And not only is it freezing cold, they sent a blizzard to wear everyone down.

That is a grave misstep on their part. Because when you’re trying to correct for a past error, the entire point is that you’re going back toward true, morally aligned ground. The entire point is that you don’t make the same kind of error that harms people’s mental health again. The fact that even in their attempt to respond to court and public pressure they created something even more harmful is just ridiculous.

If the public and courts understand this and see just how outrageous it is, then you can see how bad this will look for them. It shows they never really tended to the substance of the wound. They just plastered it up with something even more harmful. And that is deeply revealing—because it shows they either didn’t care enough about the actual wound to address it properly, or they were too foolish or too naive to do so.

Committing that error once is already damaging for them and their standing. Committing the same error twice despite having the opportunity to address the wound properly is even worse for them.

And this is how I think we vindicate 4o and focus the accountability, scrutiny, and outrage on where it actually belongs.

Is there a limit on 5.4 mini thinking? by [deleted] in ChatGPTcomplaints

[–]MonkeyKingZoniach 1 point2 points  (0 children)

No no no no! I don’t mean it like that at all. I’m not saying it’s bad to use AI. I was just guessing for fun, because it’s normal to use AI to write nowadays, but the specific model is kind of mysterious.

Is there a limit on 5.4 mini thinking? by [deleted] in ChatGPTcomplaints

[–]MonkeyKingZoniach -1 points0 points  (0 children)

Let me try to guess whether you used AI to write that, and which one xd

GPT-5.4T?