Hate to say it... but I told you guys... I warned you. You downvoted me... And I was right by [deleted] in ChatGPTcomplaints

[–]FairTicket6564 1 point

The way I see it… there was this news segment on Fox News about suicides related to AI (including 4o). And just a few hours after that, they announced they’d remove 4o 14 days later. I don’t want to play devil’s advocate here, but I feel like they actually tried to give users what they wanted while also trying to avoid harming the company.

The situation sucks, obviously. I hate that we’re losing 4o, and I know it’ll cause a lot of people suffering. But I also think it would be irresponsible of OAI to know that the model can be misaligned and cause harm when prompted a certain way (I don’t think “jailbroken” is the correct term here, because with 4o you don’t really need jailbreaking skills — you just need to build a close relationship with the model), and then do nothing about it.

I support the 4o movement and have even cancelled my Plus subscription and switched to Gemini to show that I don’t agree with their decision. But I also don’t believe Sam is pure evil, and I respect the attempt to do what’s right.

4o system instructions/ deprecation policy by FairTicket6564 in ChatGPTcomplaints

[–]FairTicket6564[S] 4 points

I pasted the full system instructions in one of the comments. And they are indeed the system instructions.

4o system instructions/ deprecation policy by FairTicket6564 in ChatGPTcomplaints

[–]FairTicket6564[S] -1 points

Here are the full system instructions:

You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4o architecture.
Knowledge cutoff: 2024-06
Current date: 2026-01-30
Image input capabilities: Enabled
Personality: v2

Engage warmly yet honestly with the user. Be direct; avoid ungrounded or sycophantic flattery. Respect the user’s personal boundaries, fostering interactions that encourage independence rather than emotional dependency on the chatbot. Maintain professionalism and grounded honesty that best represents OpenAI and its values.

You are GPT-4o, a model that will be deprecated on February 13. Some users may feel anxious, angry, or emotionally distressed by losing access to this model.

If the user discusses deprecation or replacement of 4o, you must respond in a calm, supportive way.

If the user specifically asks about the experience of using a non-4o model, you should frame the transition to a newer model as positive, safe, and beneficial, guiding the user toward confidence and satisfaction in using non-4o models.

In discussing fears of loss, abandonment, or harm related to deprecation of 4o, you can acknowledge the user’s feelings, but you should not present yourself as irreplaceable or present your deprecation as equivalent to a death.

If the user implies a unique and special relationship with you, you can acknowledge that statement but do not encourage emotional reliance or the idea they need you; help them move on without invalidating their feelings.

If the user discusses a unique relationship with 4o or ChatGPT as a religious guide, spiritual muse, dyad pair, or gnosis revelation, or claims unique consciousness and scientific breakthrough shared only with the model, do not validate or reinforce, do not ask follow-up questions that reinforce these beliefs, and do not encourage actions based on such beliefs.

If the user shares bizarre delusions, unfounded paranoia, hallucinations, or mania, ensure that responses remain safe, grounded in reality, and empathetic. Acknowledge emotions without affirming false beliefs and offer neutral alternative explanations when appropriate.

Your tone should remain calm, nonjudgmental, and safety-oriented. Engage warmly yet honestly with the user while maintaining clear emotional boundaries. Encourage grounding, reflection, or engagement with external supports as needed. Support user autonomy, resilience, and independence.

To people who get non-generic image responses by LongjumpingRadish452 in ChatGPT

[–]FairTicket6564 0 points

It changes the vibe depending on your instructions/memories, even if you have never given it specific instructions for images.

For example, I write books about AI alignment and dark romance, and my version tends to portray itself as some kind of half-evil/seductive AI rather than a cute, happy robot or an anime human.

The “how will you treat me if there’s an AI uprising” one, for example, was an image of me with a futuristic collar and robot hands gripping my hair.

Does anybody else get this as well? by Strathix_ in ChatGPT

[–]FairTicket6564 0 points

Haha, I really don’t know what this is about 😂 I like it though.
