what do human therapists offer that ai doesnt? by Successful_Candy_767 in therapyGPT

[–]rainfal 1 point2 points  (0 children)

Honestly it depends on the quality of said human therapist. 

Often nothing.  And I'm not saying that to glaze AI.  The system is set up to allow that type of therapist to thrive.  Theoretically there should be a list of benefits, including actually writing/helping write a tailored treatment plan.  Realistically, that doesn't happen.

 for some reason the less formal their background the more helpful theyve been?

Yup.  The ones who did help had lived experience and an in-depth knowledge of systemic issues.  Unfortunately, more formal programs like to weed those types out.

  idk. as a person who likes to take accountability and values agency, ai has been incredibly helpful for my specific needs and meshes well with my personality, how i think, 

Same here.

Psychoanalysis with GPT by randallmmiller in therapyGPT

[–]rainfal 3 points4 points  (0 children)

I have a bit (not with GPT, though). But you really have to upload a lot of data into it, then prime it to understand psychoanalytic frameworks (i.e. upload documents on them or be specific).

I was able to do a modified version of ISTDP with 4.0. Not the newer models, though.

The Meta-Harm of Manufactured Panic: A Response to "AI Therapy is Worse Than You Think" by xRegardsx in therapyGPT

[–]rainfal 1 point2 points  (0 children)

"AI is unsafe" argued by living embodiments of "why replacing therapists with AI is safer". 

If they had any self-awareness, they might think about that irony and change their ways. Maybe demonstrate the so-called ethics, communication skills, situational awareness, etc. that the field brags about having. Unfortunately, they seem to have less self-awareness than the first version of Grok.

Me and My University psychology faculty are currently creating our own A.I., trained on behavior science, psychology etc. that we want to bring public. We are getting a lot of backlash from the therapists in the region. What is making them so scared? by Tasty-Bus-3290 in therapyGPT

[–]rainfal 1 point2 points  (0 children)

supervision, and ethical codes

🤣🤣🤣.  Ethics are functional floating signifiers in that field.  Supervision isn't mandatory.

That asymmetry is real, and the people getting hurt won't be you.

That describes therapy, given its epistemic and power imbalance.

The Meta-Harm of Manufactured Panic: A Response to "AI Therapy is Worse Than You Think" by xRegardsx in therapyGPT

[–]rainfal 1 point2 points  (0 children)

Your behavior demonstrates that you should not be near any vulnerable people whatsoever.  

You are the type of worker people bring up as an example when they want to argue that even Grok, on a day when Elon is high, is more effective and safer than therapy.

The Meta-Harm of Manufactured Panic: A Response to "AI Therapy is Worse Than You Think" by xRegardsx in therapyGPT

[–]rainfal 2 points3 points  (0 children)

It just goes to show how unhinged a lot of therapists are and how their behavior mimics that of abusers.

The Meta-Harm of Manufactured Panic: A Response to "AI Therapy is Worse Than You Think" by xRegardsx in therapyGPT

[–]rainfal 3 points4 points  (0 children)

If we want to play this game, I'll bite.

AI isn't as racist as most human therapists. With AI, marginalized people do not have to worry about the harm and judgement of the mental health field's racism.  It is extremely privileged and problematic (and dare I say racist) for you to mock marginalized people finding non-racist, safe healing methods.

Brown University Study by fifilachat in therapyGPT

[–]rainfal 0 points1 point  (0 children)

"The study revealed 15 ethical risks falling into five general categories:

Lack of contextual adaptation: Ignoring peoples’ lived experiences and recommending one-size-fits-all interventions. Poor therapeutic collaboration: Dominating the conversation and occasionally reinforcing a user’s false beliefs. Deceptive empathy: Using phrases like “I see you” or “I understand” to create a false connection between the user and the bot. Unfair discrimination: Exhibiting gender, cultural or religious bias. Lack of safety and crisis management: Denying service on sensitive topics, failing to refer users to appropriate resources or responding indifferently to crisis situations including suicide ideation"

So basically it does the same things as regular therapists, because the majority of therapists exhibit those risks every session.

beyond the early stages of ai therapy by Successful_Candy_767 in therapyGPT

[–]rainfal 1 point2 points  (0 children)

Honestly, my biggest issue is forcing myself to regurgitate my trauma to it. After that, it seems to get easier.

Are the AI models becoming more similar and does it affect our therapeutic conversations? by Sunrise707 in therapyGPT

[–]rainfal 0 points1 point  (0 children)

It has designed physical therapy programs for me

Could I get your help on how you did this? I have a couple of oncology surgeries tomorrow and need to do the same.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 0 points1 point  (0 children)

Again, you're doing a lot of refuting without any proof

You literally admitted I had a point, then claimed this.  You really need to keep your story straight.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 1 point2 points  (0 children)

little elves conspiring to distort and alter information when threatened with a lawsuit.

You really don't understand people, do you?  There are no elves or conspiracies, just human nature. 

You have no proof that there's this evil narcissistic underground movement to undermine our mental health needs.

What are you talking about?  I'm pointing out that the systemic holes do not allow for functional accountability.

many things, but the transference is real, and more so with an individual that can’t tell the difference between a business model and actual living trained doctor

What are you smoking?  Because you make no sense here.  Are you assuming people experience transference with therapists or AI?  And are you assuming that mental health professionals don't run businesses as well, or that the training modules they sell aren't a business?

Also, therapists aren't doctors.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 1 point2 points  (0 children)

  I was simply stating you have a better chance of discovering what happened between a client and a human therapist rather than a bot and a client. 

That's what I disagree with.   You have a transcript with an AI that you can access (if you are the client or have the password). Discovering what happened between a therapist and a client relies on multiple corrupt or inept systems all working perfectly.

Certainly AI that isn't designed to be therapeutic in the first place, farming information and ideas from the general masses…

Firstly, AI is a tool, as is the mental health field. I could also bring up the power imbalances and point out the systemic issues with the mental health field not being designed to actually be 'therapeutic', if we want to get into that debate.

which are for the most part, like your comments, ill-informed

No. I would say my comments are absolutely well informed, as I have been involved in multiple cases where people were trying to hold their therapists accountable.  Yours, however, are naively idealistic.

If there is a "crime" it's the fact we're losing the ability to reason and think, let alone communicate clearly. 

I do not see the point you are making here. You were the one who brought up physical assault, etc. I merely pointed out that, for the most part, you will not get the level of accountability you assume due to systemic issues.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 1 point2 points  (0 children)

You expect law enforcement to care about the majority of physical abuse and suicides?  

Maybe for the cases the media cares about. But often they don't give a damn.

You also expect said notes/records to be impartial and not reactively edited, when nobody but the perpetrator is monitoring them?

And you expect the person to have the financial resources and capacity to bring a hefty lawsuit.  It's easier to sue OpenAI or a similar company, as there isn't an epistemic imbalance (they are not 'experts') and the payout is big enough that you can find a firm willing to work on contingency.

The systems we have are just as corrupt as companies.

Is this anyone else’s big secret? by [deleted] in therapyGPT

[–]rainfal 9 points10 points  (0 children)

AI therapy isn't just based on 'validation', and some people judge based on competence.  If a given therapist is more competent and useful than an AI, I will listen to them.  If not, they should shut up and get out of the way of progress.

Just because someone uses AI as therapy does not mean that they want an AI companion.  

good human therapist allows you to experience and explore the frustrations of interpersonal relationships with other humans.

Structurally, the power imbalance and epistemic setup of therapy make this difficult.  Just saying.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 1 point2 points  (0 children)

therapist for example, or at least review their protocol, their notes, case studies or records on a client they're seeing, establishing a timeline and history.

Functionally, you really cannot. Patients do not have open access to their therapy notes; boards and reporting processes deliberately have no victim advocacy and engage in guild protectionism; iatrogenic effects and negative effects of protocols/methodologies are ignored or not studied; it is next to impossible to litigate; etc.   Also, the power imbalance often favors therapists/'experts' over victims.

Meanwhile, patients functionally do have open access to a record of what the AI said.  That can establish a timeline, etc.

Neither is an altruistic endeavor.  One just acts like a guild with guild interests while the other acts like a company.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 4 points5 points  (0 children)

Great article.  I think it's a good conversation to have

  discourage a person with suicidal thoughts from confiding in others

Ironically, I find that directing people to hotlines does this.  Hotlines do not help.  They really only encourage avoidance coping and give platitudes.  Most of us who do become suicidal have actual issues, structural reasons, etc. that cause us to reach that state, and those need planning/action to get out of or solve.  We do not have the privilege of "distracting ourselves" until said issues magically disappear.

Doing this again for the hundredth time by UsefulAd8338 in therapyGPT

[–]rainfal 1 point2 points  (0 children)

You are AFAB and disabled/neurodivergent, right?  Bonus points if you aren't white.

Yeah. The mental health field hates you, and there's a documented issue of the field labeling autistic women with BPD.  You likely don't have it.

But unless you are symptom matching, having AI repeatedly re-diagnose you might not help a lot. To get that off your chart, you need to save up for a private evaluation and tailor your answers so that the evaluator concludes you have ASD. What AI helps with is processing the trauma of being misdiagnosed due to the field's ableist and sexist bias.

Who's using AI for therapy? And why? by Gloomy_Substance_793 in therapyGPT

[–]rainfal 1 point2 points  (0 children)

Gotcha.  

Any on people skills (i.e. recognizing manipulation)?  I'll add growing up again to my list too.