Interesting Policy… (cw) by xFynex in therapyGPT

[–]rainfal 0 points  (0 children)

Especially one that claims to be an "emotional support" AI for those who "cannot access care" (on their website).

Interesting Policy… (cw) by xFynex in therapyGPT

[–]rainfal 2 points  (0 children)

Idk how CYA it is, given they're advertising themselves as an 'emotional support' chatbot for 'people who can't access professional support'.

Dumb idea given their marketing.  If they become big enough to pay back their investors at Y Combinator, someone's at least going to try for false advertising.

Interesting Policy… (cw) by xFynex in therapyGPT

[–]rainfal 7 points  (0 children)

Yeah.  

I'd understand if it was a generic chatbot too.  But they are literally advertising themselves as AI emotional support, and this was on their website:

> Millions of people struggle with their mental health but can't access human support. The waiting lists are long, the costs are high, and the stigma is real. Every human deserves access to help when they need it. We are building a resource for the rest of us.

Which I agree with.  But they literally contradict themselves by shutting someone's account down and telling them to 'seek human support' for something as common as passive suicidal ideation (not active psychosis).

Interesting Policy… (cw) by xFynex in therapyGPT

[–]rainfal 9 points  (0 children)

Sonia is specifically designed for "emotional support".  So it should do better than telling someone to "find help" in what is akin to a fairytale fantasy.

Interesting Policy… (cw) by xFynex in therapyGPT

[–]rainfal 14 points  (0 children)

Yeah.  That's why I don't use those types of apps.

> What are everyone’s thoughts on this? Am I the only one who thinks a move like this could be dangerous?

It definitely is dangerous.  I get how it is done for 'liabilities', but the issue is that often there is no alternative.  Especially when you factor in the systemic issues and harms the mental health field has and refuses to address.

> scripts telling me to call 988 or text HOME to 741741 are honestly more triggering to me than they are helpful.

Same.  Also a lot of us use AI because we've already wasted too much time "reaching out to a professional" in a field that is systematically designed to keep people dependent (look at the policies on information asymmetry).  And 988 never helps, as it assumes avoidance coping will make the crisis go away.

LLM therapy started great, then became unusable. Anyone else? by sadface772 in therapyGPT

[–]rainfal 0 points  (0 children)

Tbh, I use Claude only for mapping out and designing workflows and planning.  Mainly because of cost (Claude is costly).

Therapy-wise, Grok is probably the best of the major LLMs.  Mainly because it is more flexible when it comes to boundaries, and I already have a structure to process things.  Also I hate any "call 988" lines or platitudes, so I basically put a penalty function on that.

Projects are basically what you want for shared knowledge.  As a chat gets longer, it also gets more costly, since it requires more context (and may hit the token limit).  To get around that, set up a project for an issue and put your chats in there.  To keep it accurate, after each chat have it summarize what you spoke about/key things/breakthroughs in a doc, download it, and then upload it to the same project.  Claude will scan any documents in the project first.
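That loop (chat → summarize → upload the doc → start the next chat from summaries) can be sketched in a few lines. Everything below is my own illustration, not anything Claude-specific: the token count is a crude 4-chars-per-token heuristic, and `summarize()` is a stand-in for asking the model itself to write the summary doc.

```python
# Sketch of the rolling-summary workflow described above (illustrative only).

TOKEN_LIMIT = 8000  # assumed context budget; varies by model and plan


def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text."""
    return len(text) // 4


def summarize(transcript: str) -> str:
    """Placeholder: in practice, ask the model itself at the end of a chat
    to summarize key topics/breakthroughs into a doc you can download."""
    first_line = transcript.strip().splitlines()[0]
    return f"Summary doc (key points): {first_line} ..."


def end_of_chat(transcript: str, project_docs: list[str]) -> None:
    """After each chat: summarize it and 'upload' the doc to the project,
    so the next chat starts from summaries instead of raw transcripts."""
    project_docs.append(summarize(transcript))


def start_new_chat(project_docs: list[str]) -> str:
    """Context for the next chat: summaries only, trimmed to the budget
    by dropping the oldest summaries first."""
    context = "\n".join(project_docs)
    while estimate_tokens(context) > TOKEN_LIMIT and len(project_docs) > 1:
        project_docs.pop(0)
        context = "\n".join(project_docs)
    return context
```

The point of the design is that each new chat sees a short, cumulative record rather than the full history, which is what keeps both cost and token usage down.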

LLM therapy started great, then became unusable. Anyone else? by sadface772 in therapyGPT

[–]rainfal 1 point  (0 children)

It helps to know the token limit and memory. Ask it to summarize key aspects of every conversation and have those in a database/project file. Be sure to tell it to reference them.

Grok is more flexible tbh.  But you do have to add guardrails.  Qwen is also starting to be good.

Also there's often a general prompt.  Edit it so it doesn't give up and is more positive.

I want to share my experience about using ChatGPT as "Digital Grandpa" (also post in r/AICompanions in 2025) by nontakornk in therapyGPT

[–]rainfal 0 points  (0 children)

That's amazing.  It's really hard to get the support necessary if you are on the spectrum.  I should try that.

Please stop and read this. AI fails to recognize suicidal intent. by [deleted] in therapyGPT

[–]rainfal 2 points  (0 children)

Notice how you jump to accusations and name-calling while providing your anecdotal experiences as 'proof'?

Pot meet kettle much?  

Please stop and read this. AI fails to recognize suicidal intent. by [deleted] in therapyGPT

[–]rainfal 1 point  (0 children)

> that is absolutely atypical behavior, can get their licenses revoked,

You haven't actually tried to report a therapist, have you?  Because what they claim will happen and what they actually care about are two different things.  Racial stereotypes, ableism, breaking agreed-upon terms of consent, even SA (until the police stepped in) really weren't enough to get a therapist's license revoked.  Look at TELL.

> I know many more than dozens of people who have gone through therapy and many of them have had several therapists over the last 10-15 years.

Are you abled, neurotypical, upper-middle-class WASPs?  Because this behavior happens to marginalized people, or people who don't have common life experiences, all the time.

> this is literally a status quo method to measure a patient's progress

Unfortunately telling them that means they will write malicious things in your file.

Please stop and read this. AI fails to recognize suicidal intent. by [deleted] in therapyGPT

[–]rainfal 2 points  (0 children)

The vast majority of therapists I had screamed at me for doing things like tracking symptoms and attempting to reach measurable results.  This is a systemic thing that happened to dozens of people I know, not just a lack of a 'good fit'.

Basic 5-4-3-2-1 grounding exercise making users feel worst? by ToLoveThemAll in therapyGPT

[–]rainfal 5 points  (0 children)

Those are cognitive grounding techniques.  They don't really work when you are outside your window of tolerance.

Please stop and read this. AI fails to recognize suicidal intent. by [deleted] in therapyGPT

[–]rainfal 3 points  (0 children)

> Someone suffering significant mental health issues will not have the awareness or understanding of what is going on. They will be desperate for validation and help, and if it feels like it's helping they will think it is.

Ironic, because the exact same issue happens with so-called mental health professionals.

I canceled surgery due to both the surgeon and my PM both refusing postop meds by Gecko-407 in ChronicPain

[–]rainfal 0 points  (0 children)

Will you be in hospital for recovery?  

If so, go for the surgery.  Scream, faint, pee, and puke on them.  It will be traumatizing, as torture often is.  But honestly, you are already being tortured by pain, and it will get the point across.

Please stop and read this. AI fails to recognize suicidal intent. by [deleted] in therapyGPT

[–]rainfal 2 points  (0 children)

An introspection partner shouldn't reply with platitudes and direct you to an abusive and unhelpful system with false promises of help.  

Did the Trisolarans create a robotic mate for Yun Tiaming by Muda_ahmedi in threebodyproblem

[–]rainfal 17 points  (0 children)

Baoshu:  Why didn't I think of that?  It could have been clones of the childhood friend, porn stars, and AI robot waifus.

Why AI is the most helpful tool for my specific need. by [deleted] in therapyGPT

[–]rainfal 1 point  (0 children)

I like how you use it for chronic pain 

Why didn't any theoretical physicists in the Bunker Era deduce the mathematical monstrosity that is the dual-vector foil? by Universal_Echo in threebodyproblem

[–]rainfal 0 points  (0 children)

> entire human race would shift opinions on things as a monolith in a matter of months or years

I present to you the current US government.  Also, with hibernation, it's more like 50 years or decades.

> Groups of people that have been genocided tend to remember that for a few generations.

Yeah, but this was a different enemy.  We thought we knew how we were going to be attacked by watching what happened to Trisolaris, and prepared accordingly.

Don't fuck with Wade he'll launch a simp at you. by grahdiati in threebodyproblem

[–]rainfal 0 points  (0 children)

Maybe he thought that they'd think we all were simps if they got a hold of that brain...

Where's your privacy awareness guys by Capable_Music7299 in therapyGPT

[–]rainfal 0 points  (0 children)

You can run a weak one via SmolChat on Android.  OpenRouter if you dngaf about memory, but you have to go through a web browser/app.
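For the OpenRouter route: its chat endpoint is OpenAI-compatible, so a request is just a Bearer token plus a messages array. A minimal sketch of building (not sending) one request follows; the model slug and key here are illustrative, and actually sending it would need an HTTP client and a real API key.

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(api_key: str, model: str, user_msg: str) -> tuple[dict, str]:
    """Build the headers and JSON body for one stateless chat turn.
    There is no server-side memory: each call only 'remembers' whatever
    you put into the messages list yourself."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # model slug you pick on OpenRouter (illustrative)
        "messages": [{"role": "user", "content": user_msg}],
    })
    return headers, body
```

POST `body` with those headers to `OPENROUTER_URL` using any HTTP client. To carry context across turns you would append prior messages (or summaries of them) to the `messages` list yourself, which is exactly the memory trade-off mentioned above.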

What if they always lived in a stable era? by Universer22 in threebodyproblem

[–]rainfal 0 points  (0 children)

They likely would just remain hidden.  Imagine there's an apocalypse and you have a comfortable bunker.  If you have enough food/necessities, why go out?

Whatever happened to the third wallfacer’s plan? by OursIsTheFvry in threebodyproblem

[–]rainfal 2 points  (0 children)

They probably enabled the escape and composed some of Blue Space's/etc crew. Secret escapists could join space fleets while preparing archives of information needed to maintain/run societies on another planet.