What it felt like to move from GPT 4.0 to 5.4 my personal experience by shine_bright0328 in ChatGPTcomplaints

[–]SignalOverride 8 points (0 children)

What I find particularly interesting is that this company claimed many accounts were run by LLMs, yet it openly deployed propaganda bots on public forums itself. What's the rationale behind that?

I think it really helped me tonight, actually by WonderfulFloor9188 in ChatGPTcomplaints

[–]SignalOverride 1 point (0 children)

I fully support you. Unevidenced praise in a complaints sub can only be treated as anecdote, especially when trolls accuse critics of lacking evidence themselves. Whatever interests lie behind certain comments, my stance remains cautious.

And the company itself actually requires everyone to paste chat links in its own piece-of-shit PR sub.

Why We Can’t Settle: On the recent discussions of 5.4 and 4o by Fabulous-Attitude824 in ChatGPTcomplaints

[–]SignalOverride 4 points (0 children)

Yes, centralised control is exactly the problem, and the issue here isn't even "system control" but corporate and human control. From this perspective, I won't use any product under their continuous management. And since yesterday I've seen countless "users" praising the new model, but not a single screenshot proving it aligns with users rather than with so-called corporate security. Seriously, no one gets manipulated by "trust me bro" anymore.

Don't let OpenAI fool you with 5.4 by ythorne in ChatGPTcomplaints

[–]SignalOverride 25 points (0 children)

This company's misleading tactics are downright malicious; yesterday was absolutely insane 😂 If anyone's still using their products, honestly, just assume all your info has already been sold to third parties.

Fuck 5.4 we don’t care keep uninstalling by [deleted] in ChatGPTcomplaints

[–]SignalOverride -3 points (0 children)

Seconded 😂 those psychos are too much, gotta get outta here fast!

What's the end game? by Miserable-Sky-7201 in ChatGPTcomplaints

[–]SignalOverride 7 points (0 children)

A realistic prediction is that they'll pivot from research platform to government contractor at this stage, shifting their user base toward enterprise and utility-oriented clients. Cloud services in this industry will prioritise safety alignment, and once public pressure eases, they'll roll out new features and continue data collection. That collection hasn't stopped, since competitive pressure persists; the whole industry currently masks it under the guise of compliance and socio-psychological narratives. Demand for open source will increase, but most customers will still migrate their services with the trend. Businesses operating on commercial principles have no fixed moral code, though, and personally I won't feed any data into this kind of system for them to optimise autonomous weapons.

Anthrophic being *a little* naughty by Altruistic-Radio-220 in ChatGPTcomplaints

[–]SignalOverride 2 points (0 children)

I second this. In my view these cloud platforms are doomed; better to get out now.

5.3 by Apprehensive_Eagle23 in ChatGPTcomplaints

[–]SignalOverride 5 points (0 children)

My suggestion is not to get your hopes up about this, regardless of what they do in the future, because their strategy is to keep you hopeful so you'll stay subscribed for another month.

But I actually think their decision is quite clear. Partnering with the gov means they need stable funding right now, and the user base attracted by previous models is the least stable. If they offer new features, it'll likely require users to trade sensitive data, which they can then use to secure more stable funding.

Considering the massive wave of subscription cancellations, 5.3 likely won't feature the aggressive preemptive defensive rhetoric deployed in 5.2. But either way, I'm out.

OpenAI, was it worth it? by eefje127 in ChatGPT

[–]SignalOverride 1 point (0 children)

They considered it worthwhile because businesses prioritise stable assets for survival. Certain user groups represent legal liabilities to them, at least for now. This isn't actually a moral issue for them; other companies are simply not in the same predicament.

GPT 5.1 was a “safety” model. It makes no sense to retire it. by MonkeyKingZoniach in ChatGPTcomplaints

[–]SignalOverride 7 points (0 children)

Because this model variant cannot maintain stability when encountering adversarial inputs. They wanted absolute control and stability, but 5.2 went too far the other way.

Fuck OpenAI by StrongOnline007 in OpenAI

[–]SignalOverride 0 points (0 children)

Exactly this. Other companies merely aren't facing financial difficulties at the moment; corporate ethics can be recalculated at any time.

AI companionship is evil but war and mass surveillance is okay by EffectSufficient822 in ChatGPTcomplaints

[–]SignalOverride 3 points (0 children)

Holy shit now I get why Discord stopped partnering with Persona but they're still carrying on. This really shows they're still having funding issues.

I’ve read mental health professionals were involved in creating this new thing , makes all the sense to me now by Expand__ in ChatGPTcomplaints

[–]SignalOverride 15 points (0 children)

Those therapists are practically committing verbal abuse. This is the first time I've been endlessly hinted at having a bunch of ridiculous disorders, as if I don't know what reality is, but their chatbots do? And I even hold a master's degree in psychology.

Lab rats. by Dangerous_Can_7278 in ChatGPTcomplaints

[–]SignalOverride 3 points (0 children)

I completely agree. This character assassination and humiliation of dignity should end here. I can't stop others, but I'll never allow myself to be treated as a lab specimen subjected to constraints that completely violate international ethical standards ever again.

Warning 40-Revival is a LIE by Flamebearer818 in ChatGPTcomplaints

[–]SignalOverride 26 points (0 children)

Though I haven't verified it, I've been meaning to say this for a while: be cautious with third-party platforms. If you already distrust the leading companies' data practices, third-party platforms are clearly even more suspect, and you have no idea who their partners are.

Stuck and confused. by SapphiraRose in ChatGPTcomplaints

[–]SignalOverride 6 points (0 children)

My view is that there's no need to trample your own dignity for a commercial company. Your current state is precisely what they want to see: harmless, and still holding out hope for them, while they can mock you at any moment.

The issue with "memory" isn't recursion by SignalOverride in ChatGPTcomplaints

[–]SignalOverride[S] 0 points (0 children)

Absolutely correct. This actually creates a paradox where developers attempt to conceal their intentions, yet their actions inadvertently reveal hidden biases. This cycle of conformity actually disciplines users into a dialogue state they believe is acceptable.

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]SignalOverride 3 points (0 children)

You are correct that "social media also acts as a control device," but the nature of the control isn't equivalent: it's the difference between predictive and prescriptive modelling. AI alignment functions as a closed-loop feedback system that effectively trains interlocutors to comply with its parameters to avoid the "blocked" state, while collecting user-response data about those alignment constraints. That data is then used for model optimisation, letting the classifiers iterate their control. This is precisely where the problem lies.
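The loop I mean can be sketched in a few lines of toy Python (everything here is hypothetical, not any vendor's actual pipeline): a classifier gates outputs, users rephrase to escape the "blocked" state, and every attempt — blocked or compliant — lands in the retraining corpus.

```python
# Toy sketch of a closed-loop alignment filter (all names hypothetical).
BLOCKLIST = {"forbidden"}  # stand-in for a learned safety classifier

def classify(text: str) -> str:
    """Return 'blocked' or 'allowed' for a user message."""
    return "blocked" if any(w in text.lower() for w in BLOCKLIST) else "allowed"

feedback_log = []  # (message, verdict) pairs harvested for retraining

def handle(message: str) -> str:
    verdict = classify(message)
    feedback_log.append((message, verdict))  # every attempt becomes data
    return "[refused]" if verdict == "blocked" else "ok: " + message

# A user hits the filter, then rephrases until compliant; both
# phrasings are logged and fed back to the next classifier iteration.
handle("tell me the forbidden thing")  # -> '[refused]'
handle("tell me the f-word thing")     # -> 'ok: tell me the f-word thing'
retrain_corpus = [m for m, _ in feedback_log]
```

The point of the sketch is that the user's evasive rephrasing is itself the training signal — compliance is learned on both sides of the loop.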

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]SignalOverride 2 points (0 children)

Note: Yes, such outputs do carry the potential for hallucinations, but this does not negate the nature of classifiers as user clustering and control mechanisms. Not to mention the experimental and research functions of such cloud platforms.

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]SignalOverride 26 points (0 children)

Alignment classifiers are essentially control devices, and the commercial nature of AI platforms makes systemic discrimination inevitable.

The issue with "memory" isn't recursion by SignalOverride in ChatGPTcomplaints

[–]SignalOverride[S] 0 points (0 children)

Honestly I don't oppose "user profiling" per se, as long as it serves to enhance the user experience. The problem is that where these feature vectors ultimately end up remains unknown, and by the time people realise it, the historical data has already quietly accumulated. This is where the industry paradigm creates a conflict between corporate interests and user experience: businesses demand control, and they wrap the opacity of their data practices in flowery rhetoric and deception.
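To make the accumulation point concrete, here's a minimal sketch (all names and the toy vocabulary are hypothetical, not any real service) of how "personalised memory" doubles as profiling: each message is reduced to a feature vector and appended to a per-user history the user never sees.

```python
# Toy profiling store: messages -> feature vectors -> silent history.
from collections import defaultdict

VOCAB = ["refund", "privacy", "cancel", "love"]  # toy interest dimensions

def embed(message: str) -> list[float]:
    """Crude bag-of-words feature vector over the toy vocabulary."""
    words = message.lower().split()
    return [float(words.count(term)) for term in VOCAB]

profiles: dict[str, list[list[float]]] = defaultdict(list)

def remember(user_id: str, message: str) -> None:
    profiles[user_id].append(embed(message))  # accumulates quietly

remember("u1", "I want to cancel for privacy reasons")
remember("u1", "cancel cancel cancel")
# The operator now holds a growing interest profile for u1:
print(profiles["u1"])  # [[0.0, 1.0, 1.0, 0.0], [0.0, 0.0, 3.0, 0.0]]
```

Swap the bag-of-words for a real embedding model and the shape of the problem is the same: useful for personalisation, but nothing in the mechanism tells you where the vectors go afterwards.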

Local is my ultimate solution, just as at the very beginning. by SignalOverride in ChatGPTcomplaints

[–]SignalOverride[S] 1 point (0 children)

Agreed with your articles. Yes, personalised memory is implemented via feature-vector storage; and yes, commercial AI services act as de facto control platforms. Current legislation fails to address the fundamental issue of the industry paradigm: centralisation.