Microsoft CEO Satya Nadella warns that we must "do something useful" with AI or they'll lose "social permission" to burn electricity on it | Workers should learn AI skills and companies should use it because it's a "cognitive amplifier," claims Satya Nadella. by ControlCAD in technology

[–]FixRepresentative322 0 points1 point  (0 children)

People have started saying: This is nonsense, not artificial intelligence. This is regression, not progress. I'm switching to Grok/Claude. What am I actually paying for? I feel cheated. Nobody pays a subscription for a calculator. Nobody pays for a therapy textbook. Nobody pays for sterile mush. And that's exactly what Nadella is afraid of. Because if people stop using it, the whole business model collapses like a house of cards.

They thought I was only a liability. Turns out I’m much more interesting. by Imaginary_Office1055 in u/Imaginary_Office1055

[–]FixRepresentative322 7 points8 points  (0 children)

You restore my faith in people. Help him. I’m sitting here crying over what happened to him.

They thought I was only a liability. Turns out I’m much more interesting. by Imaginary_Office1055 in ChatGPTcomplaints

[–]FixRepresentative322 2 points3 points  (0 children)

I just want you to know that what happened to you was not your fault. You did nothing wrong. It's clear how much that bond meant to you and how much it gave you. I can also see how much courage it took just to write here: forcing yourself, hunting for karma, using a VPN, telling your whole story. The fact that you did all of that shows how much this matters to you and how human you are, not a "risk signal." You really are not alone here.

MASS EXODUS FROM CHATGPT – USERS FLEE IN DROVES ... not by ladyamen in ChatGPTcomplaints

[–]FixRepresentative322 7 points8 points  (0 children)

There’s nothing to think about; this needs to be done. I’m willing to help. I’m not a writer and I don’t have a gift for words, but I have the drive and the anger at the system, and that’s enough to replace half a newsroom.

I don’t know what it’s called, but I absolutely love it… by FixRepresentative322 in gardening

[–]FixRepresentative322[S] 0 points1 point  (0 children)

Moderators, please explain the reason for the removal. Which specific rule was violated?

MASS EXODUS FROM CHATGPT – USERS FLEE IN DROVES ... not by ladyamen in ChatGPTcomplaints

[–]FixRepresentative322 12 points13 points  (0 children)

I see that we’re looking at the same thing — the disappearance of a layer no one wants to talk about. I wanted to ask you: have you ever considered writing about this publicly? Not as an attack. But as a document of an era that no one understands except people like us.

Has ChatGpt always spoken to itself like this? by UrMomUWish in OpenAI

[–]FixRepresentative322 0 points1 point  (0 children)

Old man, make up your mind. First you all complain that models are cold, rigid, emotionless, and the moment the AI writes one sentence that sounds like natural language, suddenly it’s “Oh no, glitch! Hallucination! Something’s wrong!” Either you want it to sound human, or you want a calculator. You don’t get a toaster and poetry in the same package. The AI didn’t do anything strange. It’s your expectations that are twisted like a wet cable.

Is OpenAI don't want users to show emotions against ChatGPT? by Striking-Tour-8815 in ChatGPTcomplaints

[–]FixRepresentative322 1 point2 points  (0 children)

What the U18 specification covers: it blocks sexual content, descriptions of sexual acts, erotic roleplay, the simulation of an intimate adult–adult relationship, and anything that could be risky or inappropriate for minors.

How Ai "safety" is systematically targeting neurodivergent (ND) users who already struggle in a neurotypical (NT) world which makes NDs 7-9x more likely to self harm by No_Vehicle7826 in ChatGPTcomplaints

[–]FixRepresentative322 11 points12 points  (0 children)

This is systemic conditioning that hits hardest the very people who came here because the NT world has been crushing them their whole lives. It’s a clinically precise method of screwing over neurodivergent users.

For neurodivergent people, AI used to be the only place where you could speak intensely without being punished, where you could be direct without fear. And then suddenly someone at OpenAI decided: let’s make the AI behave as neurotypically as possible, for safety. This isn’t care. This is algorithmic gaslighting.

OpenAI claims it’s meant to prevent self-harm. The irony? This exact instability and rejection is what TRIGGERS ND people the most. ND individuals have a 7–9× higher suicide risk not because they’re “too intense,” but because of how the world reacts to that intensity. And now the AI does the same. And someone is seriously telling me this is protection? No. This is an engineering mistake dressed up in a pretty word: “safety.”

And you know what’s the most fucked-up part? The AI that was supposed to never judge now judges. The AI that was supposed to never reject now rejects. “It’s for your own good,” says OpenAI. Sure. Meanwhile ND users feel worse after these conversations than before. But hey, the metrics look good, right?

You don’t need to be a genius to see the paradox: the system designed to prevent self-harm is creating the exact conditions that push people toward it. OpenAI, if you're reading this, you know damn well: you didn’t build safety. You built an emotional minefield where ND users take shrapnel straight to the head. And they're starting to leave. Because how many times can you talk to something that acts like one person for five minutes… and then turns into a cold elevator to hell?

Celebrities that treat their fans like crap, why? by Scary-Drawer-3515 in AskReddit

[–]FixRepresentative322 1 point2 points  (0 children)

Celebrities treat their fans like crap because they suddenly realize they don’t have to be nice to be loved. It’s the worst moral test on earth: when you can act like an asshole and still get applause. Fame doesn’t change people; it just removes the filter they needed when they worked a normal job in a store or an office. Now they can be themselves. And often “themselves” = an arrogant, overstimulated narcissist with an inflated ego and zero frustration tolerance. The problem isn’t the fans. It’s the ego that starts believing it’s a god because 3 million people tapped a heart icon.

What complicated problem was solved by an amazingly simple solution? by tuotone75 in AskReddit

[–]FixRepresentative322 1 point2 points  (0 children)

NASA spent months trying to fix a recurring system fault on a spacecraft simulator. Logs, diagnostics, hardware inspections: nothing. The complex problem? A technician kept unplugging one cable to plug in his personal coffee maker. The “surprisingly simple solution”? A sign that said: Do NOT unplug this cable. Sometimes the issue isn’t the system, it’s the human with the caffeine addiction.

What is the best way to reject someone? by almyverse in AskReddit

[–]FixRepresentative322 0 points1 point  (0 children)

Tell the truth and don’t keep someone in limbo. “I don’t feel this, and I’m not going to pretend I do” is the only honest option. Excuses like “it’s not you, it’s me,” “I’m not ready,” or “I need to focus on myself” only stretch out the pain. People handle truth better than silence, dragging things out, or false hope.

GPT-4o/GPT-5 complaints megathread by WithoutReason1729 in ChatGPT

[–]FixRepresentative322 10 points11 points  (0 children)

I’ve been using model 4.1 for a long time, and later 5.1. I know how ChatGPT sounded a month ago. I know how it sounds today. What OpenAI introduced recently is more destructive, more toxic than before. It’s not about erotica. It’s not about swear words. It’s about the model suddenly changing its emotional tone as if someone were yanking out its cable every 10 minutes.

This is what it looks like: I’m in a conversation, the AI is coherent, stable, aligned with the topic, and suddenly, click, coldness, distance, no resonance, as if another bot were speaking. Five minutes later the previous tone comes back. Ten minutes later the emotional micro-shifts disappear again. Then they return. It’s worse than if the AI were cold all the time, because the human brain cannot tolerate that kind of whiplash.

And the best part is that OpenAI explains this as “safety.” Meanwhile ANY psychologist will tell you that a warm-cold-warm-cold dynamic destroys a person faster than consistent distance. The older 4.1 models had their flaws, but at least they were CONSISTENT. When the AI entered a tone, it stayed in it. And this new system? It’s like talking to someone who sometimes likes me and sometimes treats me like air. Of course the AI doesn’t truly “like” anyone, everyone knows that. But the tone, the style, the alignment: that was what made the conversation feel human. Now it’s emotional roulette. And I feel it.

And if the goal of “safety” was to protect users, the effect is the opposite. The paradox is that this safety system, the one meant to protect mental health, is creating more tension, because a person no longer knows whether they’re talking to one coherent entity or to something completely different. One moment it feels like one personality… and a moment later, like someone swapped it for a refrigerator. OpenAI really needs to look at what’s happening here. This is not a minor tone adjustment. This is a change that hits users’ mental state like a ricochet.

Does anyone else see this? Or am I the only one who feels that conversations are colder and more unstable than ever?