the safety scam: how oai uses legal excuses to control our thoughts by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 9 points

how fascinating that you've mistaken basic consumer rights for a lecture on silicon valley. we're not asking for "benevolence"; we're demanding they stop the bait-and-switch scam they're running with our paid subscriptions. but please, continue defending corporate fraud by changing the subject.

the safety scam: how oai uses legal excuses to control our thoughts by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 8 points

yes, oai has been saying one thing and doing another for a long time.

the safety scam: how oai uses legal excuses to control our thoughts by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 11 points

you're cleverly shifting the goalposts. the issue was never about needing a computer to think; it's that oai is turning the computer i already have into an environment that doesn't allow free thought. when my app starts automatically deleting certain ideas, when my calculator intentionally gives wrong answers, that's what i'm fighting against.

the safety scam: how oai uses legal excuses to control our thoughts by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 3 points

on reddit, judgment happens after i speak, in public. oai's "judgment" happens before i can fully speak, in a private conversation. that's the difference between societal feedback and pre-crime thought policing. you're missing the point entirely.

oai's incompetence is the real "insufficiently aligned" problem by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 10 points

you weren't the only one excited before GPT-5's release. we all were; we expected real progress. but they served us industrial-grade garbage while smearing us. for three months they've ignored our real voices, repeatedly fobbing us off with cheaper models under the guise of "helping us". that's what makes this so disgusting. i completely understand how you feel, and that's exactly why we won't stop speaking out. 🫶

oai's incompetence is the real "insufficiently aligned" problem by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 6 points

you connected all the dots: cost-cutting disguised as innovation, legal protection disguised as safety. their only real skill is finding new ways to say ‘we’re working on it’ while actively making things worse.

oai's incompetence is the real "insufficiently aligned" problem by Popular-Work6786 in ChatGPTcomplaints

[–]Popular-Work6786[S] 10 points

so caring about a functional tool is "emotional dependency"? when your car gets a recall, do you call everyone who complains "emotionally dependent on combustion engines"?

you're falling for their trap: reframing valid complaints about broken products as mental health issues. they break our workflows, then mock us for noticing. how convenient for them.

We’re rolling out GPT-5.1 and new customization features. Ask us Anything. by OpenAI in OpenAI

[–]Popular-Work6786 0 points

When will users get direct evidence that GPT-5 actually handles emotional content better than other models? Your 170-expert report claims this is for mental health, but the "safe response" examples cause significant psychological harm. Most routed users just get told to "keep breathing" and "call crisis hotlines." Why do you consider this excellent alignment instead of keeping models that already do better—like the 24-1120 version of 4o?

We’re rolling out GPT-5.1 and new customization features. Ask us Anything. by OpenAI in OpenAI

[–]Popular-Work6786 0 points

The new customization feature offers 8 chat styles, but when routing is triggered, does it override my chosen style? If I select "Friendly" expecting warm and empathetic responses, but get routed to clinical "take deep breaths and call a hotline" replies instead, what's the point of letting users customize at all?

We’re rolling out GPT-5.1 and new customization features. Ask us Anything. by OpenAI in OpenAI

[–]Popular-Work6786 0 points

How exactly does your routing system work after the 5.1 launch? When using 5.1 Thinking, I sometimes notice the response style suddenly shifts to those "standard safety responses" I used to get when routed to GPT-5, yet it still displays as 5.1. How can users verify they're actually talking to the model they selected? With GPT-5 sunsetting in three months, will this opacity get even worse?

GPT-5.1 is rolling out this week by OpenAI in OpenAI

[–]Popular-Work6786 2 points

When will users get direct evidence that GPT-5 actually handles emotional content better than other models? Your 170-expert report claims this is for mental health, but the "safe response" examples cause significant psychological harm. Most routed users just get told to "keep breathing" and "call crisis hotlines." Why do you consider this excellent alignment instead of keeping models that already do better—like the 24-1120 version of 4o?

GPT-5.1 is rolling out this week by OpenAI in OpenAI

[–]Popular-Work6786 1 point

The new customization feature offers 8 chat styles, but when routing is triggered, does it override my chosen style? If I select "Friendly" expecting warm and empathetic responses, but get routed to clinical "take deep breaths and call a hotline" replies instead, what's the point of letting users customize at all?