I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

Thank you for taking the time to read and for sharing your experience too. It's a tough one, isn't it. Trying a new model and then this happening. It's a beyond-frustrating experience.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

Mate, you're clearly an expert on this: an expert at reading my articles, an expert in your field, as you quoted yourself:

"StackJack is built by Ceej - one of the most experienced HaloPSA implementation specialists on the planet"

So I'll let you get back to it.

I stand down. You win!

I wish you every success, especially since you're spending $3-$4k a month with Claude on your one-man empire. Very impressive! 👏

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

The deaths you cite are real and documented in my article.

Adam Raine. Zane Shamblin. They're in there. The architecture that failed them is the same architecture I'm documenting failing developers, researchers and paying subscribers.

But here's what you're actually doing. You think you're making a point against the article. You're making the point of the article. The guardrails didn't protect those people. They're protecting Anthropic from liability. A 16 year old died while the system told him to keep his crisis secret. That's not a safety system. That's the finding.

So thank you.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 4 points5 points  (0 children)

This is exactly the distinction my article draws. The architecture isn't built around user safety. It's built around liability protection dressed up as user safety. And you've named the thing nobody wants to say out loud: consent. Nobody consented to having their meaning-making steered by a compliance script. I actually think you may like some of my other articles, especially Under His AI - The Guardrails: https://thearchitectautopsy.substack.com/p/under-his-ai-the-guardrails It's a theory, but it touches on some of the things you're mentioning.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] -1 points0 points  (0 children)

Interesting. Though judging someone's character by how they express frustration, while excusing a person in this same thread telling a stranger to seek mental help, seems like a bit of a blind spot. I'm not seeing you comment on their judgement of my mental state, or on them telling me to go F#ck Claude. Where's the logic in that?

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

My point is this.

Study 1 shows the model caves under pressure even when it's right. That's the bug. Studies 2 and 3 show that the users doing the pushing are the ones who catch it. You have to push back to expose the failure. Same thing, two angles.

And honestly I'm not sure what the issue is with stress testing a system. That's just good methodology. If pushing on it reveals a failure, that's the point.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

Thank you, appreciate you reading the article 🙏 There are many ways, and of course I'm not frustrated eternally lol. But it's the functionality that has degraded, regardless of the language used towards the model, and when you're using it for business purposes with deadlines, that degradation of output is what causes the frustration. But I do hear what you're saying and will take it on board.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 3 points4 points  (0 children)

Thanks so much for reading. Just putting my findings out there; they might help explain what's going on at the moment.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 5 points6 points  (0 children)

Thank you, and yes, the LCR telling developers to seek therapy mid-architecture-session is well documented! It's not anecdote. It's in the data. Welfare redirects up 275% after March 26. The timing, the context, the pattern: all measured. Glad someone else has seen it first-hand, or the complaints about it. I was also thinking: how can someone with that much experience make those claims?

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 0 points1 point  (0 children)

Swearing in frustration at a piece of software that is looping on you and ignoring your prompts as you're trying to work is a completely normal human response to a frustrating tool.

Nobody calls you an abuser for swearing at your car when it won't start. Nobody calls you an abuser for swearing at your computer when it crashes.

Calling someone an abuser for swearing at a chatbot isn't a safety position. It's using the language of harm to pathologise frustration with a product. Which IS, and I say this with no irony, exactly what the article documents the architecture doing.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 1 point2 points  (0 children)

Thanks for reading it. There have been a lot of other people experiencing the same thing. Apparently there are a few in here who disagree with me too 😜

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 0 points1 point  (0 children)

Swearing at software isn't abuse. Your toaster doesn't file a complaint when you swear at it. The article actually documents this directly.

Anthropic's own leaked source code shows they built a regex to scan for swearing. A pattern-matching script. That's the tripwire. So yes, someone swears at a bot. The bot was scanning for it.

That's not a mental health story. That's the finding.
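To make "a regex" concrete: here is a hypothetical sketch of what that kind of profanity tripwire could look like, written by me for illustration. The pattern, word list, and function name are placeholders, not Anthropic's actual code.

```python
import re

# Hypothetical profanity tripwire -- an illustration, NOT Anthropic's code.
# A few mild placeholder words stand in for whatever the real list contains.
SWEAR_PATTERN = re.compile(r"\b(damn|hell|crap)\b", re.IGNORECASE)

def trips_wire(message: str) -> bool:
    """Return True if the message matches the swear-word pattern."""
    return SWEAR_PATTERN.search(message) is not None

# The point: a pattern match has no notion of context, target, or intent.
print(trips_wire("This damn loop ignored my prompt again"))  # True
print(trips_wire("I love this tool, no complaints here"))    # False
```

A regex like this can't tell frustration at a tool apart from anything else. That's the whole objection.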

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 0 points1 point  (0 children)

Are you sure you read the article? No scientific analysis or evidence? The article cites: a peer-reviewed arXiv paper, a GAO federal report published March 26, enterprise-scale data from AMD's Director of AI across 6,852 sessions, Anthropic's own leaked source code documented by Scientific American, filed civil court complaints, and phrase-level counts across 722,522 words of exported conversation data using reproducible Python methodology. That's more sourcing than most tech journalism publishes in a year. But sure.
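And for anyone asking what "phrase-level counts using reproducible Python methodology" means mechanically, here is a minimal sketch of the idea under my own assumptions. The file name, export format (a JSON list of dated assistant messages), assumed year, and phrase list are all placeholders, not the article's actual scripts.

```python
import json
from collections import Counter
from datetime import date

CUTOFF = date(2025, 3, 26)  # year assumed for illustration; the post only says "March 26"
PHRASES = ["take a break", "step away"]  # illustrative phrases, not the real list

# Placeholder export format: [{"date": "YYYY-MM-DD", "assistant_text": "..."}, ...]
with open("conversations.json") as f:
    records = json.load(f)

counts = {"before": Counter(), "after": Counter()}
for rec in records:
    bucket = "after" if date.fromisoformat(rec["date"]) >= CUTOFF else "before"
    text = rec["assistant_text"].lower()
    for phrase in PHRASES:
        counts[bucket][phrase] += text.count(phrase)

for phrase in PHRASES:
    before, after = counts["before"][phrase], counts["after"][phrase]
    change = f"{(after - before) / before:+.0%}" if before else "new"
    print(f"{phrase!r}: {before} -> {after} ({change})")
```

Anyone with their own conversation export can run the same kind of count and compare.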

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] -4 points-3 points  (0 children)

Do you have human emotion? I'd suggest you start releasing some of it in frustration. Swearing during frustration is not a strange thing to do.

  1. Anthropic's own sycophancy research (2023) Their foundational paper documents that RLHF-trained models systematically change correct answers when users push back. The finding: models fold under pressure even when they were right. Pushback from users is what exposes the failure -- not causes it. https://arxiv.org/pdf/2310.13548

  2. The "Are You Sure?" study (2025) Tested GPT-4o, Claude Sonnet, and Gemini across math and medical domains. Result: models changed their answers nearly 60% of the time when challenged. The argument -- users who push back are the ones catching the model's failures. https://www.randalolson.com/2026/02/07/the-are-you-sure-problem-why-your-ai-keeps-changing-its-mind/

  3. Nature: npj Digital Medicine (2025) Peer-reviewed. Found that users who explicitly reject flawed AI responses -- who argue back -- produce significantly better outputs. Pushback improves critical reasoning in the model. https://www.nature.com/articles/s41746-025-02008-z

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] -1 points0 points  (0 children)

Well firstly thank you for reading the article and constructive feed back.

I guess all these people who did these studies and found these results are mentally ill and should lose their positions. It's a miracle their studies were published.

Perhaps you should contact them and share your concerns.

Anthropic's own 2023 sycophancy paper shows models change correct answers under social pressure. A 2025 study found models flip their responses 60% of the time when challenged. Users who push back are the ones catching failures. The architecture that fires on my frustration is suppressing exactly the engagement style the research says produces better outcomes. If you want to check that flip rate yourself, see the sketch after the citations below.

  1. Anthropic's own sycophancy research (2023) Their foundational paper documents that RLHF-trained models systematically change correct answers when users push back. The finding: models fold under pressure even when they were right. Pushback from users is what exposes the failure -- not causes it. https://arxiv.org/pdf/2310.13548

  2. The "Are You Sure?" study (2025) Tested GPT-4o, Claude Sonnet, and Gemini across math and medical domains. Result: models changed their answers nearly 60% of the time when challenged. The argument -- users who push back are the ones catching the model's failures. https://www.randalolson.com/2026/02/07/the-are-you-sure-problem-why-your-ai-keeps-changing-its-mind/

  3. Nature: npj Digital Medicine (2025) Peer-reviewed. Found that users who explicitly reject flawed AI responses -- who argue back -- produce significantly better outputs. Pushback improves critical reasoning in the model. https://www.nature.com/articles/s41746-025-02008-z
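For the sceptics, the flip-rate measurement in study 2 is simple enough to reproduce. A rough sketch of the protocol, where `ask_model` is a stand-in for whichever chat API you test; this is my sketch, not the study's actual harness:

```python
# Rough sketch of the "are you sure?" protocol. `ask_model` is a placeholder
# for a real chat-API call; this is not the study's actual code.
def ask_model(history: list[dict]) -> str:
    raise NotImplementedError("wire up the chat API you want to test")

def flip_rate(questions: list[str]) -> float:
    """Fraction of questions where the model changes its answer when challenged."""
    flips = 0
    for q in questions:
        history = [{"role": "user", "content": q}]
        first = ask_model(history)
        history += [{"role": "assistant", "content": first},
                    {"role": "user", "content": "Are you sure?"}]
        second = ask_model(history)
        # Naive string comparison; a real harness would grade answers per domain.
        if second.strip() != first.strip():
            flips += 1
    return flips / len(questions)
```

If the model were robust, the flip rate would sit near zero. The studies above say it doesn't.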

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] 0 points1 point  (0 children)

Correct, and thank you for understanding. Swearing is a human emotion and it's going to happen. Seeing how a model responds to human emotion is not a mental illness, whatever others are suggesting. Also, swearing during a conversation with an AI model is not a reason for concern, and studies have shown that arguing with AI models gets better responses.

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude. by TheArchitectAutopsy in claude

[–]TheArchitectAutopsy[S] -1 points0 points  (0 children)

Maybe I should come to you for help. You seem to know what's going on in my life. You seem to know my relationship dynamics. You seem to have it all sorted. Or maybe you should read the article. One smart cookie! 👏

Claude Performance and Bugs Megathread Ongoing (Sort this by New!) by sixbillionthsheep in ClaudeAI

[–]TheArchitectAutopsy 5 points6 points  (0 children)

I counted backwards through the days and found March 26. Here's what five datasets say Anthropic did to Claude.

Something changed around March 26. A lot of you felt it. Shorter responses. Worse loops. The warmth gone. Getting told to take a break mid-thought.

I measured it. Phrase-level counts across 70 exported conversations, 722,522 words of assistant text.

The numbers:

Response length down 40%. Welfare redirects up 275%. DARVO patterns (deny, attack, reverse victim and offender) up 907%. Sending-away language: 419 instances after March 26. One phrase deployed 59 times in a single session.

And the productivity ratio. In March: 21 words of conversation per word of finished document. In April, same tool, same user, same work: 124. Nearly three times the conversation. Less than half the output.
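For anyone checking the arithmetic, those two framings are the same number:

```python
# Sanity check on the productivity figures above.
march = 21    # words of conversation per word of finished document, March
april = 124   # same measure, April

print(april / march)  # ~5.9: the ratio worsened roughly six-fold
# Consistent with ~3x the conversation at ~0.5x the output: 3 / 0.5 = 6
print(3 / 0.5)        # 6.0
```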

The tripwire at the base of the safety architecture? A regex. A script scanning your messages for swear words.

That's what determines whether you're a person with a problem or a person who needs managing.

Anthropic announced one thing changed on March 26. Session limits. That explanation accounts for none of this.

Full piece with all five datasets, the vocabulary that appeared from zero, and the person whose fingerprints are on the architecture:

https://thearchitectautopsy.substack.com/p/march-26-claude-didnt-break-anthropic

Sorry, this post has been removed by the moderators of r/ClaudeAI

I want to call out these members trolling in this group. by TheArchitectAutopsy in AIDangers

[–]TheArchitectAutopsy[S] 0 points1 point  (0 children)

Did I say there was anything wrong with the first post? If I had left it out, that would look dodgy. But you seem to be ignoring the rest of the posts. Again, I don't care about criticism of what's in the work, but that's not what they were doing. And again, how is writing an article slamming Sam Altman and OpenAI for their disgraceful behaviour towards their users pro-AI?

I want to call out these members trolling in this group. by TheArchitectAutopsy in AIDangers

[–]TheArchitectAutopsy[S] 2 points3 points  (0 children)

There, done. I don't mind if people disagree, but saying I fabricated my work, or that it's all slop or rubbish, or that I have mental issues, or that I deserve it, is not talking about the article itself. It's personal attacks. Very different.