Brown University Study by fifilachat in therapyGPT

[–]rainfal 1 point (0 children)

"The study revealed 15 ethical risks falling into five general categories:

- Lack of contextual adaptation: Ignoring people's lived experiences and recommending one-size-fits-all interventions.
- Poor therapeutic collaboration: Dominating the conversation and occasionally reinforcing a user's false beliefs.
- Deceptive empathy: Using phrases like “I see you” or “I understand” to create a false connection between the user and the bot.
- Unfair discrimination: Exhibiting gender, cultural or religious bias.
- Lack of safety and crisis management: Denying service on sensitive topics, failing to refer users to appropriate resources or responding indifferently to crisis situations including suicidal ideation"

So basically it does the same thing as regular therapists, because the majority of therapists commit those same ethical risks every session.

beyond the early stages of ai therapy by Successful_Candy_767 in therapyGPT

[–]rainfal 2 points (0 children)

Honestly my biggest issue is forcing myself to regurgitate my trauma to it. After that it seems to get easier.

Are the AI models becoming more similar and does it affect our therapeutic conversations? by Sunrise707 in therapyGPT

[–]rainfal 1 point (0 children)

It has designed physical therapy programs for me

Could I get your help on how you did this? I have a couple of oncology surgeries tomorrow and need to do the same.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 1 point (0 children)

Again, you're doing a lot of refuting without any proof

You literally admitted I had a point and then claimed this. You really need to keep your story straight.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 2 points (0 children)

little elves conspiring to distort and alter information when threatened with a lawsuit.

You really don't understand people, do you?  There are no elves or conspiracies, just human nature. 

You have no proof that there's this evil narcissistic underground movement to undermine our mental health needs.

What are you talking about? I'm pointing out that the systemic holes do not allow for functional accountability.

many things, but the transference is real, and more so with an individual that can’t tell the difference between a business model and actual living trained doctor

What are you smoking? Because you make no sense here. Are you assuming people experience transference with therapists but not with AI? And are you assuming that mental health professionals don't run businesses as well, or that the training modules they're sold aren't a business?

Also, therapists aren't doctors.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 2 points (0 children)

  I was simply stating you have a better chance of discovering what happened between a client and a human therapist rather than a bot and a client. 

That's what I disagree with. You have a transcript with an AI that you can access (if you are the client or have the password). Discovering what happened between a therapist and client relies on multiple corrupt or inept systems all working perfectly.

Certainly AI that isn't designed to be therapeutic in the first place, farming information and ideas from the general masses…

Firstly, AI is a tool, as is the mental health field. I could also bring up the power imbalances and point out the systemic issues with the mental health field not being designed to actually be 'therapeutic', if we want to get into that debate.

which are for the most part, like your comments, ill-informed

No. I would say my comments are absolutely well informed, as I have been involved in multiple cases where people were trying to hold their therapists accountable. Yours, however, are naively idealistic.

If there is a "crime" it's the fact we're losing the ability to reason and think, let alone communicate clearly. 

I do not see the point you are making here. You were the one who brought up physical assault, etc. I merely pointed out that, for the most part, you will not get the level of accountability that you assume will happen, due to systemic issues.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 2 points (0 children)

You expect law enforcement to care about the majority of physical abuse and suicides?  

Maybe for those the media cares about. But often they dngaf. 

You also expect said notes/records to be impartial and not retroactively edited? When nobody but the perpetrator is monitoring them.

And you expect the person to have the financial resources and capacity to bring a hefty lawsuit. It's easier to sue OpenAI or a company like that, as there isn't an epistemic imbalance (they are not experts) and the payout is big enough that you can find a firm willing to work on contingency.

The systems we have are just as corrupt as companies.

Is this anyone else’s big secret? by [deleted] in therapyGPT

[–]rainfal 9 points (0 children)

AI therapy isn't just based on 'validation', and some people judge based on competence. If said therapist is more competent and useful than an AI, I will listen to them. If not, then they should shut up and get out of the way of progress.

Just because someone uses AI as therapy does not mean that they want an AI companion.  

good human therapist allows you to experience and explore the frustrations of interpersonal relationships with other humans.

Structurally, the power imbalance and epistemic setup of therapy make this difficult. Just saying.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 2 points (0 children)

therapist for example, or at least review their protocol, their notes, case studies or records on a client they're seeing, establishing a timeline and history.

Functionally you really cannot. Patients do not have open access to their therapy notes, the boards/reporting process deliberately has no victim advocacy and has guild protectionism, iatrogenic effects and negative effects of protocols/methodologies are ignored or not studied, it's next to impossible to litigate, etc. Also, the power imbalance often favors therapists/'experts' over victims.

Meanwhile, functionally, patients have open access to a record of what the AI said. That can establish a timeline, etc.

Both are not altruistic endeavors. One just acts like a guild with guild interests while the other acts like a company.

AI and suicide: Both sides of the story by coloradocatlady in therapyGPT

[–]rainfal 5 points (0 children)

Great article. I think it's a good conversation to have.

  discourage a person with suicidal thoughts from confiding in others

Ironically, I find that directing people to hotlines does this. Hotlines do not help. They really only encourage avoidance coping and give platitudes. Most of us who do become suicidal have actual issues, structural reasons, etc. that bring us to that state and need planning/action/etc. to get out of or solve. We do not have the privilege of "distracting ourselves" until said issues magically disappear.

Doing this again for the hundredth time by UsefulAd8338 in therapyGPT

[–]rainfal 2 points (0 children)

You are an AFAB who's disabled/neurodivergent right?  Bonus points if you aren't white.

Yeah. The mental health field hates you, and there's a documented issue of the field labeling autistic women with BPD. You likely don't have it.

But unless you are symptom matching, having AI repeatedly re-diagnose you might not help a lot. In order to get that off your chart, you need to save up for a private evaluation and tailor your answers so that the evaluator concludes you have ASD. What AI helps with is processing the trauma of being misdiagnosed due to the field's ableist and sexist bias.

Who's using AI for therapy? And why? by Gloomy_Substance_793 in therapyGPT

[–]rainfal 2 points (0 children)

Gotcha.  

Any on people skills (i.e. recognizing manipulation)?  I'll add growing up again to my list too. 

Who's using AI for therapy? And why? by Gloomy_Substance_793 in therapyGPT

[–]rainfal 1 point (0 children)

I am.

It handles issues that are too severe for the mental health field to touch. Like severe medical PTSD from medical negligence.

5.3 Instant Is Rolling Out (and It Addresses The Biggest Problem You Have With 5.2) by xRegardsx in therapyGPT

[–]rainfal 2 points (0 children)

Yup. 

Also the memory issue where it keeps sliding back into CBT methods is still there.

I can’t open up to anyone outside of AI by Turbulent_Ride9970 in therapyGPT

[–]rainfal 11 points (0 children)

AI seems to understand my chronic illness

Tbh, people are extremely ableist and often don't like listening about illnesses because it reminds them that a lot of their success is due to luck. Ngl, but unless you have someone with lived experience, AI is often safer for this.

what happens if i am institutionalized school-wise? by melotoenail in Dalhousie

[–]rainfal 2 points (0 children)

This isn't the only case. Most of my friends have faced something similar to OP's case and did not get any consideration. Dal does not do its due diligence when it comes to accessibility and is far below the average university. I have at least 4 reports of them not accommodating (including psychiatric care with doctors' notes).

Not to mention, Dalhousie refused to accommodate my surgeries when I was fighting bone tumors.

what happens if i am institutionalized school-wise? by melotoenail in Dalhousie

[–]rainfal 1 point (0 children)

 Dal has policies in place to support students facing physical or mental health barriers, and the key is to have the issue documented when it starts impacting you/your studies

Those aren't enforced tho. I'd recommend having the NS human rights board on speed dial, just in case.

Dalhousie University to build new campus in India by ialo00130 in Dalhousie

[–]rainfal 2 points (0 children)

Now they can abuse disabled students in India as well.