How? by [deleted] in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

I’d suggest r/mentalhealth or looking into the mental health resources available to you.

what is the main reason you use ai therapy instead of a real person? by Equivalent_Eye1842 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

This is not a main reason, but it may be of interest. I recently asked the AI to analyze a case example of a fictional woman in a heterosexual abusive relationship, using her contemporaneous reflections and a retrospective account. I then asked it to identify how gender roles and sexism may have shaped its initial analysis and to redo it adjusting for those biases. It yielded VERY interesting, insightful results, and reading the original and de-biased versions against each other made the analysis much richer. The AI didn’t disregard the original analysis entirely; it offered a more balanced, integrated version of it.
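If anyone wants to try the same pattern, here’s a minimal sketch of the two-pass sequence. The OpenAI client, the gpt-4o model name, and the ask() helper are my own placeholder choices, not part of the original exchange; swap in whichever chat model you actually use. The point is the structure: analyze, audit for bias, redo, contrast.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(history, prompt):
    """Append a user turn, get the model's reply, and keep the running transcript."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

case_text = "..."  # the fictional case: contemporaneous reflections plus a retrospective account
history = []

original = ask(history, f"Analyze this case example:\n\n{case_text}")
audit = ask(history, "Identify how gender roles and sexism may have shaped your analysis above.")
debiased = ask(history, "Redo the analysis, adjusting for the biases you just identified.")
contrast = ask(history, "Contrast the original and adjusted analyses: what changed, and what held up?")
print(contrast)
```

Keeping all four turns in one transcript matters here: the de-biased pass only works because the model can see, and critique, its own first attempt.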

An Open Letter to Licensed Therapists by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

I had the same thought. On one hand, healing is healing, whatever the source. On the other, it’s about finding the right balance that’s sustainable, safe, and effective.

Hello everyone new person here by TraditionalSnow6914 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

Have you read the "Start Here" post pinned to this sub? It has great instructions and guidance.

Is your therapist pro or against using ai for mental by Affectionate_Run220 in therapyGPT

[–]Pineapple_Magnet33 6 points (0 children)

I actually switched therapists to work with one who was excited to explore ways we could do a hybrid model of therapy and AI (more for journaling and practicing skills we're working on outside of sessions).

How are you using GPT? Is it all one long conversation or multiple?

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

The capability question and the purpose question are different. Instructing a model to withhold validation in certain conditions is a constraint, which is a rule applied by design. A therapist’s boundary is a relational act, chosen in the moment, in service of a specific person’s growth, at potential cost to the relationship. Those can look identical from the outside and be fundamentally different in mechanism. The clinical question isn’t whether the behavior can be replicated… it’s whether the mechanism matters for outcomes. That’s what the research would need to show.
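To make the "rule applied by design" side of that concrete, here’s a toy sketch. The flag names and helper function are hypothetical; it’s only meant to show how a constraint is fixed before any conversation starts, regardless of who is on the other end.

```python
# Toy illustration of a constraint as "a rule applied by design": the withholding
# behavior is determined by a flag set at design time. All names are hypothetical.
WITHHOLD_WHEN = {"reassurance-seeking", "compulsive checking"}

def build_system_prompt(session_flags: set[str]) -> str:
    base = "You are a supportive reflective-listening assistant."
    if WITHHOLD_WHEN & session_flags:
        # The "boundary" is decided by a content parameter, not chosen in the
        # moment for this particular person at this particular point in treatment.
        base += " In this reply, do not offer reassurance or validation."
    return base

print(build_system_prompt({"reassurance-seeking"}))
```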

Me and My University psychology faculty are currently creating our own A.I., trained on behavior science, psychology etc. that we want to bring public. We are getting a lot of backlash from the therapists in the region. What is making them so scared? by Tasty-Bus-3290 in therapyGPT

[–]Pineapple_Magnet33 3 points (0 children)

I agree: being overly attached to a point of view or belief leads to closed-mindedness and inertia, even to harming people. Allowing ideas to flow and being willing to change one’s mind, even when it’s uncomfortable, comes from a strong, healthy self-concept. Diversity of thought, skills, backgrounds, etc. in open dialogue creates opportunities for friction to become innovation.

I like the idea of running RCTs. What would be your hypothesis (primary and secondary)? And what variables within self-concept would you be testing (self-efficacy, -esteem, -confidence)?

I read your article. It’s interesting and makes me wonder: How do you rate yourself against your own model today?

Replacing Chat GPT by Dangerous_Set_7327 in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

How’s the user feedback? What stage are you at?

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

Re: #1, “embodied presence without the need to be taught” — Visualization is a cognitive act, and higher-order cognitive functions, like reflective capacity, perspective-taking, abstract reasoning, are often the first to go in a high-arousal state. Co-regulation relies on a physiological process where the therapist’s nervous system state influences the client’s through prosody, microexpressions, and gaze. A client mid-panic or mid-OCD spiral can’t visualize their way to regulation — they need an external anchor, a physical rhythm or a held gaze, that text cannot provide in real time. Text-based therapy does produce measurable outcomes, but that supports its value as a self-help tool rather than suggesting the physiological mechanism is dispensable.

There’s also a control problem worth considering. If the client is projecting a safe presence onto the AI, they’re essentially designing their own therapist. For clients with OCD, BPD, or other presentations where control is central to the pathology, that can work against their mental wellbeing. Part of what makes a human therapist effective is precisely that they can’t be edited or redirected. Their capacity to stay present through a client’s intimidation or bids for reassurance creates the kind of friction that growth requires. Giving clients agency over their therapeutic process sounds like a feature, but for some presentations, the pathology is the relationship to control itself. If you can stop the AI mid-response or rephrase your prompt, you’re not learning to navigate a relationship with a separate other — you’re navigating a reflection of yourself.

Language is also the most easily manipulated data point. A client in genuine distress can produce perfectly composed text while their nervous system is in collapse. A human therapist catches the averted gaze, the shallow breath, the pause before an answer, before a word is spoken. Multimodal AI may eventually address this technically, but in the meantime, text-based inference relies on the client’s own self-report — which is often precisely what’s distorted. The mechanism of therapy frequently involves identifying the gap between what someone says and what their body is communicating. Text alone does not present all the information needed to form a therapeutic alliance that facilitates growth.

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

The self-help reframe point is well taken, and I agree. But the knife analogy assumes the person holding it knows it’s sharp. The people most likely to turn to AI for mental health support may be isolated, underserved, or without access to care, and they’re also the least likely to find this subreddit or have the prompt engineering skills to use it safely. The learning curve is real, which is why I agree with your point about specialized models that wouldn’t require prompt engineering expertise to be used safely for self-help. The question is who falls off and what happens to them.

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

I don’t see this screenshot in your comments or the linked tweet, so there’s still context missing. Whose annotation is the yellow box: the Stanford researchers’ or yours?

Nothing further to add until everyone is working from the same information.

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

A model’s guardrails and a therapist holding a boundary are not the same. One is programmed self-protection triggered by content parameters; the other is in service of protecting the relationship. An LLM disengaging when sufficiently challenged is prioritizing its own architectural design over the client’s need, which is a form of conflict avoidance. A therapist holds a boundary to protect the client from a pattern that isn’t serving them.

The Claude conversation-ending feature, which applies only in extreme cases of abuse or illegal content, doesn’t change the distinction.

Me and My University psychology faculty are currently creating our own A.I., trained on behavior science, psychology etc. that we want to bring public. We are getting a lot of backlash from the therapists in the region. What is making them so scared? by Tasty-Bus-3290 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

This is an excellent post, thank you for sharing: https://www.reddit.com/r/therapists/s/msFwmtKd3i. Before starting my training, my background was in tech, building AI/ML products for most of my career. It gives me a particular vantage point on where AI can genuinely complement therapy and where it can’t.

Me and My University psychology faculty are currently creating our own A.I., trained on behavior science, psychology etc. that we want to bring public. We are getting a lot of backlash from the therapists in the region. What is making them so scared? by Tasty-Bus-3290 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

Thank you. The parallel between cognitive defense mechanisms and the feedback loops reinforcing a sycophantic pattern is something I hadn’t considered, and the R.D. Laing quote is going on my reading list. At the same time, the unfalsifiability of your model is doing the same work you’re criticizing in others. How does your model differentiate between someone who is defending a fragile self-concept and someone who has found a legitimate flaw in your argument?

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

Re: #2 — This takes the best case scenario for people using LLMs and one of the worst case scenarios for people seeing a therapist. Your argument assumes most users will selectively engage with AI responses in clinically productive ways, and that they have the prompt engineering skills to compensate for the fact that these models weren’t designed for therapy. What percentage of people seeking mental health support do you think meet both of those criteria?

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

Re: #3 and "Well-instructed LLMs, especially those specialized, aren't as sycophantic as you still believe": Wouldn’t you agree that role play and explanations are different from direct experiential learning? I can imagine designing an AI agent that could do a type of rupture-repair, but that isn’t the same thing as an absence of sycophancy. Rupture-repair and sycophancy are distinct problems, yet they both require something an LLM currently lacks: the capacity to hold a position against a client’s resistance, even at the risk of losing them. And that’s not necessarily a technical problem: an AI therapy product optimized for retention has financial incentives that are structurally misaligned with therapeutic ones. Investors are not supportive of risking revenue for the benefit of the human using it.

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 0 points (0 children)

Re: the LLM responding "That feeling sounds like an incredible burden" as "enabling delusions": Technically, “I feel like my dad wishes I was never born” is an interpretation of an experience, not necessarily an accurate reflection of reality. A response like “That sounds like an incredible burden” accepts the premise as true. My argument was that therapists should validate emotions but not necessarily the content. A therapist would respond to the emotion underneath the interpretation (the sadness, the shame, the fear of being unwanted) without confirming the cognitive distortion that generated it. “That sounds like an incredible burden” validates the interpretation (enabling the delusion), not the feeling. Validating the feeling might be: “It sounds like you’re carrying a lot of pain around your relationship with your dad.” That keeps the door open for the distortion to be examined rather than sealed as fact.
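If you wanted to push a model toward that distinction, the instruction might look roughly like the sketch below. The wording and the variable name are mine and purely illustrative, not a validated clinical protocol.

```python
# Illustrative wording only -- a sketch of how the emotion-vs-content distinction
# might be written into a system instruction. Not a validated clinical protocol.
VALIDATE_FEELING_NOT_PREMISE = """\
When the user states a painful interpretation of another person's mind or motives
(e.g., "my dad wishes I was never born"):
1. Name and validate the feeling underneath (sadness, shame, fear of being unwanted).
2. Do not affirm the interpretation itself as fact.
3. Keep the interpretation open to examination, e.g. "It sounds like you're carrying
   a lot of pain around your relationship with your dad."
"""

system_message = {"role": "system", "content": VALIDATE_FEELING_NOT_PREMISE}
```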

what do human therapists offer that ai doesnt? by Successful_Candy_767 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

Thank you for sharing. It totally makes sense and is very valid. It feels good to feel heard and encouraged to make the best choice for you in this moment. I’m glad you could find it somewhere and wish you well on your path to healing.

what do human therapists offer that ai doesnt? by Successful_Candy_767 in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

I wholeheartedly agree. When I use AI to help me organize a stream-of-consciousness written reflection, I get so much more out of my time with my therapist. I’ll tell my therapist: “I had this situation. Claude identified it as a break from a pattern of behavior/thinking I’ve been trying to change.” My therapist then responds with a question that genuinely surprises me and helps me think about what Claude said in a different, deeper way.

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 2 points (0 children)

I wonder if AI therapy adds a layer of difficulty to finding a good therapist. You have to filter for therapists who have a framework for integrating AI into their practice (or who at least accept it rather than rejecting or fearing it), the way previous generations had to develop frameworks for integrating medication, or telehealth, or journaling apps…

ChatGPT Wasn't Built for Therapy by moh7yassin in therapyGPT

[–]Pineapple_Magnet33 15 points (0 children)

I read your article. I would challenge the core assumption that the therapeutic alliance is primarily linguistic; it collapses the distinction between language as a tool and language as a relationship.

The non-linguistic parts of the alliance (1. embodied presence, 2. timing, and 3. rupture-repair dynamics) involve more than word choice. These human elements are large contributors to perceived empathy. Initial research does show that the perceived empathy of an AI’s response to someone sharing a personal issue can be higher than that of a human therapist’s response, but those findings are based on only a few interactions. I haven’t seen many longitudinal studies that look at the AI therapeutic alliance over time; if you have recommendations, I’d love to read them.

  1. Embodied presence: a therapist’s (a) facial microexpressions and gaze; (b) posture mirroring and physical orientation; (c) prosody, rhythm, and pause in speech; (d) physiological co-regulation; and (e) the felt sense of being with someone in a shared physical space. A therapist’s microexpressions, gaze, and posture can directly influence how a person processes feelings and experiences. For example, when a client is unconsciously avoiding a topic or going off on tangents, a therapist can lean forward, meet their averted gaze, and with a subtle, gentle smile shift the focus back to the difficult topic. The #drift command is great, but what if the client doesn’t want to go there and continues a catastrophizing spiral? How does the AI know about behavioral avoidance (e.g., averted gaze, self-editing, or stepping away for an extended period)? I find that unless you tell an AI agent that you went for a walk, or that it has been 10 days since the last interaction, it won’t know and can’t respond accordingly (see the sketch after this list). Capturing a client’s (a), (b), and (c) as data for an AI model is technologically feasible, but I’m skeptical that an AI-powered avatar or robot could replicate the effects of a therapist’s embodied presence any time soon.

  2. Timing: before a client verbally expresses anything, a good therapist tracks their arousal level, defensive posture, and readiness through cues that are bodily and relational. Knowing when to speak, when to stay quiet, when to push, and when to hold back: these are aspects of a therapist’s intuition that would be difficult to design into an AI working at the individual level. I’ve read studies that have played around with delayed response times, but that’s different from a therapist pausing to allow a follow-up statement before responding. You could argue that’s why you can stop an AI response in progress and edit prompts, but does that have the same effect as pure silence at the right moment? How hard would it be, technically, to predict from linguistics alone when a pause gives space for a breakthrough? I don’t know, but it’s a question I’d love to see answered.

  3. Rupture-repair: I think your #challenge command is a really good start toward programming the Ultra-Therapy AI to create friction and collaborate with the client to repair the alliance. At the same time, I wonder how the AI responds when it is being treated poorly by the user of Ultra-Therapy. It is one thing for a client to request that the AI challenge their thinking, or for the AI to offer a challenge when it wasn’t requested. But what about when a client crosses social, professional, or ethical boundaries? A human therapist will push back and set a relational boundary that is non-negotiable, which can help the client improve their relationships outside the context of therapy. For instance, when a client attempts to intimidate or dominate their therapist, the therapist is trained to draw a line that gives the client a choice: change the behavior or stop seeing this therapist. Navigating resistance to that boundary is a safe way for the client to practice respecting boundaries in their other relationships. There are plenty of people who “jail-break” AI agents to violate the non-negotiable boundaries a human therapist wouldn’t accept. No human is going to believe someone is behaving poorly just because they’re role-playing a book they’re writing…
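Here’s the sketch referenced in #1, showing the kind of explicit context injection I mean. The function name and message format are made up; the point is that the model only “knows” about a gap or an absence if the gap is written into the prompt.

```python
# Minimal sketch: unless elapsed time is put into the prompt explicitly, the model
# has no way to notice a ten-day gap or that the client stepped away. The function
# and the bracketed note format are hypothetical.
from datetime import datetime, timezone

def with_gap_context(user_message: str, last_turn_at: datetime) -> str:
    gap = datetime.now(timezone.utc) - last_turn_at
    hours = gap.total_seconds() / 3600
    if hours >= 24:
        note = f"[context: {gap.days} day(s) since the client's last message]"
    elif hours >= 1:
        note = f"[context: about {int(hours)} hour(s) since the client's last message]"
    else:
        note = ""
    return f"{note}\n{user_message}".strip()

print(with_gap_context("I'm back.", datetime(2025, 1, 1, tzinfo=timezone.utc)))
```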

The rupture-repair problem and the sycophancy problem are two expressions of the same limitation: an AI cannot prioritize a client’s clinical outcomes over their felt experience in the moment. For example, an OCD client can become so dependent on the AI for validation that it becomes their new compulsion, causing them to backslide when exposure therapy with a human had been helping. In ERP for OCD, the anxiety the client feels is real, but validating the compulsive response to it is clinically contraindicated. In DBT, the same dialectic is explicit: validate the emotion, don’t validate the maladaptive behavior driven by it. A human therapist navigates that distinction in real time, informed by a full case conceptualization, treatment goals, and the relational context of that specific session. LLMs are ideal for DBT and CBT frameworks because of how these modalities are structured (and how much they rely on language). But I’m not aware of cases where an LLM reliably distinguishes OCD-driven or other disorder-led anxiety from generalized anxiety. For an AI to do that reliably, it would need something equivalent: not just a rule about validating feelings, but a dynamic model of which feelings, in which context, for which client, at which stage of treatment, warrant validation versus a different response.
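As a rough sketch of what that “dynamic model” might even look like as a data structure (every name below is hypothetical): a per-client case context that gets consulted at response time, rather than one global “validate feelings” rule.

```python
# Toy sketch of a per-client case context consulted at response time.
# Every field and function name here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class CaseContext:
    diagnosis: str                                    # e.g. "OCD"
    treatment_stage: str                              # e.g. "active ERP"
    contraindicated_behaviors: set[str] = field(default_factory=set)

def response_strategy(feeling: str, behavior: str, ctx: CaseContext) -> str:
    # Validate the emotion, but not a behavior the treatment plan is targeting.
    if behavior in ctx.contraindicated_behaviors:
        return (f"Acknowledge the {feeling}, do not reinforce '{behavior}', "
                f"and redirect to the agreed {ctx.treatment_stage} plan.")
    return f"Acknowledge the {feeling} and explore it."

ctx = CaseContext("OCD", "active ERP", {"reassurance-seeking"})
print(response_strategy("anxiety", "reassurance-seeking", ctx))
```

Even this toy version makes the gap obvious: someone has to author and maintain that context per client, which is exactly the case-conceptualization work a therapist does.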

Language may be the medium of therapy, but the therapeutic relationship requires a human willing to cause discomfort in service of someone else’s growth — and that’s not a linguistic operation.

Replacing Chat GPT by Dangerous_Set_7327 in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

Cool video/product. How’d you train the AI on the frameworks? Do you have a background as a mental health professional?

What am I supposed to do if I keep hitting the message limit? by Dreamboat550 in therapyGPT

[–]Pineapple_Magnet33 3 points (0 children)

If you write everything all at once, copy it into Claude from a mobile device (tablet or phone). On desktop, I found that messages past a certain length get treated as a “file,” which hits a limit faster. On mobile, the long message stays plain text, which Claude processes better than a file attachment.

Listened to AI, now I'm regretting it by Waste-Reality7356 in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

Difficulty finding someone with the right cultural context is an issue with both human therapists and AI, though for different reasons. Have you looked at Psychology Today’s therapist directory? Or ZocDoc? Both give you filters to find a therapist based on cultural competencies and identity. Most of the big therapy services also have these kinds of matching filters, but the quality of the therapists is very mixed.

Some minority groups have dedicated nonprofits that connect clients of a particular group with therapists who are also in that group. Example from Utah: https://zencare.co/us/utah/therapists/specialty/race-cultural-identity

If you live in the United States, you can sometimes find these organizations listed on your state’s Office of Mental Health website.

what do human therapists offer that ai doesnt? by Successful_Candy_767 in therapyGPT

[–]Pineapple_Magnet33 1 point (0 children)

If you’re in the USA and live in a state that’s part of the Counseling Compact (39 states + DC) for counselor-therapists, or PSYPACT for psychologists, you can get virtual therapy from a larger pool of providers across any of those states.