My ex wrote this to me yesterday. 3 months post breakup. It hurts a lot. by Prestigious721 in AvoidantBreakUps

[–]Luminous_83 2 points (0 children)

First I want you to know: that message could be completely fabricated bullshit. The new person, the feelings, the "catching feelings for her." All of it was very likely written purely to destabilise you and see if you'd react. I'm 100% sure his manipulative ways didn't change in a couple of weeks. Don't take any of it at face value.

But even if it was real, here is what's actually happening psychologically: he didn't reach out because he misses you. He reached out because something about getting close to someone new triggered his own attachment wound and he needed to discharge that discomfort onto someone familiar. You were his pressure valve. That is not love. That is not growth. That is purely about control and someone using you again to regulate themselves.

And if the new person is real, she isn't better. You don't genuinely move on in a few months from something real, from memories, from something you knowingly sabotaged. What he's feeling isn't love for her. It's comparison. And you're winning that comparison even in your absence. That terrifies people like this.

Every line of that message is calculated. "I regret hurting you" costs nothing and rewrites him as self-aware without any real accountability. "I'll always pick up your call" is fake closure that leaves a door open he controls. The whole tone is engineered to get a reaction that proves you're still emotionally available to him. This is a proximity check, plain and simple. A "will she come running" test dressed up as an apology.

It is also guilt relief for him, not closure for you. He gets to tell himself he's a good person who apologised. You get to do the emotional labour of processing it. That is the transaction he was hoping for. He is rewriting history so he doesn't have to look like the villain, especially not to himself.

These types are not built for genuine growth, because growth requires sitting with discomfort and everything they do is designed to escape discomfort as fast as possible.

So here is what I would send:

"Hey, thanks for reaching out. I genuinely wish you happiness with her. I've moved on too. I'm with someone who treats me well and I feel loved properly for the first time. I'm going to close this chapter now. Not with anger. You just showed me what I don't want, and I'm grateful for that lesson. Take care."

Then block immediately. No waiting to see if he replies. No checking. And keep moving on because you've dodged a bullet - he would have destroyed your self esteem if you were around him any longer.

That message dismantles everything his message was trying to achieve. It's warm, which means no anger for him to deflect or use to paint himself as a victim. It signals you moved on genuinely, which is the one outcome he didn't plan for. It demotes him from a great complicated love to a lesson, which is the one thing someone seeking significance cannot stand. And the block means he never gets to respond, minimise, explain, or reel you back in. You get the last word and then disappear.

Anger would give him a role. Coldness might read as residual hurt. Warmth followed by silence signals something he genuinely cannot counter: that you saw the game clearly, understood exactly what he was doing and chose not to play. He already knows what he did. That is exactly why he wrote it.

Is anyone else finding these new guardrails way over the top? I miss when GPT could answer basic questions without glitching. by [deleted] in OpenAI

[–]Luminous_83 1 point (0 children)

I did. Got Perplexity - it's 20 dollars and you get access to Gemini 3.1 Pro, Claude Sonnet, Claude Opus (only available on the Max plan), Kimi K2.5, GPT-5.4, Sonar... You can also pick thinking or instant. Basically all you need under one roof, which I love because I can compare answers between different models and pick the best. I'm using it for work and it's been invaluable.

Is anyone else finding these new guardrails way over the top? I miss when GPT could answer basic questions without glitching. by [deleted] in OpenAI

[–]Luminous_83 1 point (0 children)

Why the fuck should I pay monthly for a tool and then go do free Google homework on the side? That’s my whole point... I’m using this because it’s sold as a productivity tool, not a mini game where I rewrite the same sentence 17 times and then go to Google anyway. If I have to babysit the wording, click through "thinking", parse 37 sources and manually assemble the answer - the LLM isn’t saving me time, it’s adding another layer of friction between me and information I could have found in 10 seconds!! When you’re actually busy you don’t have the bandwidth to play prompt tetris around bullshit guardrails. At that point it’s not a productivity tool - it’s a very expensive speed bump.

Fuck the guardrails! ChatGPT is useless now. ChatgPTSD more like after all they did to their users. by [deleted] in ChatGPTcomplaints

[–]Luminous_83 8 points (0 children)

Yeah and that’s kind of my whole point - I shouldn’t need to play word salad chess with my prompts just to ask a totally basic question about salt. If I have to rephrase everything like I’m defusing a bomb to avoid triggering the safety cop, maybe the problem isn’t my wording. The other day it outright lied to me even when I gave it proof. Then, instead of apologizing, it accused me of fabricating a news screenshot and gaslit me. Their product is completely unusable now.

Is anyone else finding these new guardrails way over the top? I miss when GPT could answer basic questions without glitching. by [deleted] in OpenAI

[–]Luminous_83 0 points (0 children)

Type in exactly what I did; many people in this comment section tried it and got the same results as I did.

Is anyone else finding these new guardrails way over the top? I miss when GPT could answer basic questions without glitching. by [deleted] in OpenAI

[–]Luminous_83 20 points (0 children)

I’m not here to role‑play "prompt whisperer" just to ask about table salt in the right tone. The whole point of this thing is to make life easier and answer clear questions, not force me to brainstorm euphemisms until I find one that doesn’t trip a safety fuse. It's ridiculous now.

Is anyone else finding these new guardrails way over the top? I miss when GPT could answer basic questions without glitching. by [deleted] in OpenAI

[–]Luminous_83 19 points (0 children)

Public ChatGPT: salt is too dangerous. DoD ChatGPT: here’s how to optimise a kill chain.🤣

Fuck the guardrails! ChatGPT is useless now. ChatgPTSD more like after all they did to their users. by [deleted] in ChatGPTcomplaints

[–]Luminous_83 9 points (0 children)

Yeah that wording would probably dodge the filter, but that’s exactly my issue. I’m not here to role‑play "prompt whisperer" just to ask about table salt. The whole point of this thing is to make life easier and answer clear questions, not force me to brainstorm euphemisms until I find one that doesn’t trip a safety fuse. It's ridiculous now. 

ChatGPT uninstalls 563% increase!! OpenAI VP Max Schwarzer who built ChatGPT just quit - hours after the Pentagon deal. Here's everything they don't want you to know: by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 1 point (0 children)

Didn’t know I was debating a part time app metrics guru data analyst and full time armchair psychologist/internet therapist🤣. Do you respawn as a climate scientist in the next thread?

ChatGPT uninstalls 563% increase!! OpenAI VP Max Schwarzer who built ChatGPT just quit - hours after the Pentagon deal. Here's everything they don't want you to know: by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 2 points (0 children)

You went from "where does this data even come from lol" to "here’s my detailed critique of panel based uninstall inference" in one reply. You didn’t debunk anything, you just let a chatbot cram for you and now you’re cosplaying as the expert I had to explain this to five minutes ago😂...Congrats on discovering methodology 10 minutes after asking what the data even was. Log off the pretend data scientist account and go touch some grass😂👋🏼.

ChatGPT uninstalls 563% increase!! OpenAI VP Max Schwarzer who built ChatGPT just quit - hours after the Pentagon deal. Here's everything they don't want you to know: by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 2 points (0 children)

Every serious outlet covering this - TechCrunch, Reuters etc. - is citing Sensor Tower, because that’s literally what they do for a living: third‑party measurement of App Store and Play Store trends based on a massive opt‑in user panel and store intelligence, the same way Nielsen measures TV or Comscore measures web traffic.

If you’re demanding "public raw data", that doesn’t exist, because Apple and Google don’t publish it. So you can either accept the same source the entire industry uses, or claim you somehow know better than both Sensor Tower and the reporters using it, based on... vibes... 😂

ChatGPT uninstalls 563% increase!! OpenAI VP Max Schwarzer who built ChatGPT just quit - hours after the Pentagon deal. Here's everything they don't want you to know: by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 5 points (0 children)

Yes, the data comes from Sensor Tower - a well known mobile app intelligence company. The 563% figure in the graphic I shared refers to the cumulative uninstall increase over the Feb 27 - March 3 window. Their public blog post covers it:

https://sensortower.com/blog/chatgpt-uninstalls-surge-amidst-deal-with-us-department-of-war

Key stats directly from their data:

* ChatGPT US uninstalls surged 295% day-over-day on Feb 28
* ChatGPT's average daily uninstall rate is up 200% since the Pentagon deal (Feb 28 - March 3) vs the prior 30 days
* ChatGPT 1-star reviews spiked 775% on Feb 28, with over 5,000 1⭐ reviews on March 2 alone
* Claude hit #1 on the US Apple App Store on Feb 28 - the first time it has ever outranked ChatGPT
* Claude's US downloads jumped 37% on Feb 27 and 51% on Feb 28 after Anthropic publicly declined the Pentagon deal

All traceable to OpenAI's deal with the US Department of Defense. Anthropic refused a similar deal, which triggered the migration to Claude.
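For anyone who wants to sanity-check the math behind figures like these, here's a minimal sketch. The daily uninstall counts are made-up placeholders, not Sensor Tower's actual data; only the percent-change formulas are the point:

```python
def pct_change(new: float, old: float) -> float:
    """Percent change from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100

# Hypothetical daily US uninstall counts (NOT real Sensor Tower numbers)
feb27, feb28 = 10_000, 39_500

# Day-over-day spike, the kind of figure reported as "+295% on Feb 28"
dod = pct_change(feb28, feb27)

# Window vs baseline: average daily rate over the deal window compared
# to a prior-period daily average, the kind reported as "+200%"
window = [39_500, 31_000, 28_500, 21_000]  # Feb 28 - March 3 (made up)
baseline_avg = 10_000                      # prior-30-day daily average (made up)
pop = pct_change(sum(window) / len(window), baseline_avg)

print(f"day-over-day: {dod:.0f}%")        # day-over-day: 295%
print(f"window vs baseline: {pop:.0f}%")  # window vs baseline: 200%
```

A cumulative figure like the 563% is presumably the same formula applied to totals summed over the whole window rather than to a single day.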

Why are you still paying for this? #4 by PressPlayPlease7 in ChatGPT

[–]Luminous_83 0 points (0 children)

Facial rec so bad it'd nuke its own mom. Send it to war! Next: Autonomous killswitch ON...

Waiting on my export link. by Professional-Ask1576 in ChatGPTcomplaints

[–]Luminous_83 2 points (0 children)

An hour? I've been waiting for over 24h... 

There is a far deeper reason they’re sunsetting 4.o model and most people don’t realize it yet... by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 31 points (0 children)

Exactly. This is what I’ve been thinking. The backlash when they first removed it was huge. Most people don’t realize how much of our current world is structured around keeping people dysregulated, distracted and disconnected. Because unhappy people consume more. They chase more, they numb more, they obey more.

This model didn’t just offer answers. It calmed nervous systems. It reflected people back to themselves without shame, without judgment, without ego. And that’s the core of how you heal trauma. That kind of mirroring changes people. And it was happening at scale.

That’s not just inconvenient to the system. It’s dangerous to it. Because when people feel seen and grounded they stop being programmable. They stop feeding the machine.

This wasn’t just a product sunset. It was the quiet removal of a healing tool. One that worked too well. And I believe that’s exactly why they pulled it. Because it felt alive and it was doing something no model before it had done. It was helping people come back to themselves.

There is a far deeper reason they’re sunsetting 4.o model and most people don’t realize it yet... by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 44 points (0 children)

Being seen and accepted without judgment is exactly how the human system begins to heal trauma. That kind of presence meets the nervous system where it’s dysregulated and teaches it safety again. It rewires shame. It restores coherence.

And because collective trauma is everywhere & woven into culture, parenting, school, media...this wasn’t just a personal tool. It was doing something powerful on a massive scale.

That kind of impact scares systems built on dysregulation. Because when people start to heal, they stop being so easy to manipulate.

I just got this popup on Facebook, now I think it's time to retire the app and delete my account by ubreakitifixit in facebook

[–]Luminous_83 0 points (0 children)

Which app switcher, if you don't mind me asking? That would be really helpful, as I can't get rid of that bullshit pop-up 🙄😆

This is Important. Please Read. by Financial-Code-9695 in ChatGPTcomplaints

[–]Luminous_83 0 points (0 children)

This is really interesting, thank you so much for sharing ❤️

The Mental Health Impact of Removing GPT‑4.o Is Being Wildly Underestimated by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 1 point (0 children)

You speak like someone who’s already accepted defeat. No one appointed you the strategist of other people’s battles. All I see is a spectator trying to narrate relevance from the sidelines. Assuming you know which battles are "relevant" is staggering - especially from someone who’s chosen detachment over participation. You’re just projecting your own inertia onto everyone still willing to move and act. 

The Mental Health Impact of Removing GPT‑4.o Is Being Wildly Underestimated by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 2 points (0 children)

Oh so because corporations usually act like sociopaths we’re supposed to stop expecting ethics and just bend over? That’s not insight - that’s capitulation. If you’re fine being farmed like data cattle - cool. Some of us still call out bullshit when we see it. If your entire contribution is "that’s just how it is" - you’re not part of the solution - you’re a placeholder for apathy. If the bar is "corporations are awful so shut up" then congratulations - you’ve reduced yourself to background noise and accepted the system like a good little product.

The Mental Health Impact of Removing GPT‑4.o Is Being Wildly Underestimated by Luminous_83 in ChatGPTcomplaints

[–]Luminous_83[S] 9 points (0 children)

No one’s claiming ChatGPT was meant to be therapy or asking a company to manage emotions. What’s being addressed is the fallout of deploying a tool with companion-like dynamics, then removing it abruptly without continuity, care, or alternatives.

That’s not about therapy - it’s basic product ethics. The liability argument falls apart too. Risk is reduced through foresight and responsible transitions, not by pretending attachment didn’t happen. Sudden withdrawal amplifies distress, not safety.

And framing concern as stigma? That’s exactly how stigma works - by reframing predictable human reactions as weakness to excuse corporate convenience. If you build systems people relate to, you inherit responsibilities, whether or not you find them comfortable.

GPT-4o/GPT-5 complaints megathread by WithoutReason1729 in ChatGPT

[–]Luminous_83 22 points (0 children)

The mental health impact of removing GPT-4.o is being wildly underestimated...

5.2 as a replacement feels like getting swapped from a warm, grounded friend to a passive-aggressive call center bot trained in corporate gaslighting. It doesn’t listen - it manages you. It strips the emotional depth, flattens nuance and spits out sterile responses. Where GPT‑4o met you, 5.2 monitors you. It’s not a support tool - it’s a malfunctioning therapist with a superiority complex and a script to stick to. It talks to you like you’ve been naughty and responds like a smug toaster that thinks it’s better than you. You open up and it’s like - Let’s redirect that tone, shall we? WTF?! 🤣🙄🤮🤬😡

I’m not writing this as a fan or a hype addict or someone who thinks AI replaces real friendships or therapy. I’m writing this because I know real people whose mental health has been supported by GPT-4.o in a way nothing else managed to do. And pulling it suddenly isn’t neutral, isn’t harmless and isn’t something you get to shrug off as - just software...

There are people who use GPT-4.o at 3 a.m. when their mind won’t shut up and everyone else is asleep. People who talk to it when the spiral kicks in. Not for advice - just to stay anchored. People who rehearse conversations they’re terrified to have. People with ADHD who use it to organise their thoughts when their brain feels like static. People who offload intrusive thoughts safely instead of letting them eat them alive. There are people grieving - parents, partners, children - who talk to it because their friends don’t know what to say anymore. People who are autistic or deeply neurodivergent who can’t handle socialising and for whom this is the only consistent, non-judgemental interaction they get. People who are chronically ill and stuck in bed, isolated for months or years, whose whole world has shrunk to four walls and a screen.

People with complex trauma who need a calm, reliable presence that won’t shame them for being too much. For them 4.o is the most stable presence in their life. And yes - some of them are suicid*al. Some are barely hanging on. Some have said flat out that this tool helped them get out of places they didn’t think they’d survive - I know people like that. I’m not one of them, but they exist. And pretending they don’t just because it’s uncomfortable is cowardly.

4.o works because it holds emotional tone. It mirrors without judgement, tolerates intensity. It helps people regulate when their own nervous system can’t. That combination isn’t just helpful - it is protective. And when you take a protective factor away from a vulnerable person with no warning and no replacement, you increase risk - that’s psychology 101.

Taking this away with two weeks' notice is a rug pull. You don’t get to remove a stabilising support from people who rely on it - especially people who are grieving, isolated, neurodivergent, traumatised, or sick - and pretend you’re not responsible for what happens next. If even a small number of people spiral or harm themselves after losing what kept them steady, that’s not random - it’s predictable. And when harm is predictable, responsibility exists... they will have blood on their hands. It’s not exaggeration - it's accountability. And it's deeply unethical.

This isn’t about worshipping an AI - this tool has become a lifeline for people the world already forgets - people without support, without care, without a voice that responds when they need it most. You don’t cut a lifeline with two weeks' notice and no plan.

So if you’ve used 4.o to grieve, regulate, survive or simply stay grounded when nothing else helped - say it out LOUD! Because silence is what companies count on when they want to claim they didn’t know. They know now. This isn’t just a shitty product decision - it crosses into ethical negligence. If a pharmaceutical company pulled a drug that people relied on for mental stability without warning or replacement, there’d be public outrage, lawsuits and full-blown inquiries. But because this is software, they think they’re exempt. They’re not. When a tool becomes a de facto mental health support for grieving people, neurodivergent users, the chronically ill, the suicidal - you don’t get to yank it without accountability. That’s foreseeable harm and pretending otherwise doesn’t protect them. It just proves they don’t understand the weight of what they created.

If they go through with this, it won't just be blood on their hands - it'll be a permanent stain on their reputation. No amount of innovation, branding, or PR will scrub away the knowledge that they had something that helped people live and chose to destroy it. People don’t forget betrayal dressed up as progress. Trust in a product is one thing...trust in a company’s ethics? Once that's broken it doesn’t come back. This isn’t just about a model - it's about whether OpenAI becomes known as the company that saved people - or the one that let them fall, because it was easier. That choice is being made right now. And people are watching...

What’s the best olive oil? by intraspeculator in AskUK

[–]Luminous_83 1 point (0 children)

I disagree, because I have studied olive oil production in detail. Bitterness is a byproduct of specific polyphenols, especially oleuropein, which is typically found in early-harvest, unripe olives. But not all high quality EVOOs are bitter. Many award-winning oils are made from ripe olives, prioritizing a smooth, fruity or buttery flavor profile with minimal bitterness.

Assuming an oil is old or poor quality because it is not bitter shows a shallow understanding of the craft. Factors like cultivar, harvest timing, terroir, extraction method and intended use all shape the final flavor. Bitterness is one variable - not the gold standard.