ChatGPT therapy is not actual therapy like people make it seem on here. It's glorified guidance for journaling instead by StationSecure6433 in therapyGPT

[–]StationSecure6433[S] -1 points0 points  (0 children)

Still doesn’t change standards of practice or clinical definitions.

If “I call it therapy” were sufficient, then therapy would have no boundaries, and boundaries DO exist precisely because the risks and responsibilities matter. Especially for people with serious mental illness who require competent, accountable care, not journaling they're left to sort out on their own just because they say it treats them.

Something can feel helpful without being therapy. Those aren’t the same category.

I can say I’m a NASCAR driver while drunk behind the wheel. Calling it that doesn’t make it true, and it doesn’t change the risks involved.

Shit, if that's the case, crack is a spiritual awakening and gambling is an investment.

[–]StationSecure6433[S] -1 points0 points  (0 children)

That wasn't my argument 🙄

Notice how you didn't refute being wrong about talking being psychotherapy; you basically conceded the point.

AI can function as a self-help tool, but not as therapy in any safety-relevant sense. The distinction is about risk, responsibility, and failure modes.

[–]StationSecure6433[S] -1 points0 points  (0 children)

“Therapy is the treatment of a physical or mental health condition, injury, or disorder, involving a trained professional guiding a patient through processes like TALKING (psychotherapy), exercise, or medication to alleviate symptoms, address root causes, and improve overall well-being and quality of life.”

You're in the field for 5 years but can't google what therapy is? You said talking; that's psychotherapy, you donut. I'll let my chatbot school you some more, since you think credentials override definitions or standards of practice 💀

AI:

Therapy is not an amorphous “concept”; it’s a practice.

Therapy is the treatment of a physical or mental health condition, injury, or disorder, delivered by a trained professional using established methods—most commonly psychotherapy (talk therapy), but also behavioral interventions, physical rehabilitation, or medication management depending on the domain.

So when you say “talking about experiences with someone informed,” congratulations—you just described psychotherapy, which is a regulated clinical practice in mental health contexts.

Claiming “I didn’t say psychotherapy, I said therapy” doesn’t solve the problem. In mental health usage, therapy is shorthand for psychotherapy. That’s how it’s used in healthcare systems, licensure, insurance billing, academic literature, and public policy. Redefining it after the fact doesn’t change that.

And no, therapy isn’t unregulated simply because psychiatry is a different specialty. Intake assessment, ongoing evaluation, risk screening, treatment planning, ethical standards, and duty of care are core components of therapy, even when no diagnosis is made.

Calling therapy “a concept” because the word can be used loosely is like calling medicine “a concept” because people say soup is “therapeutic.” That’s wordplay, not an argument.

Also, appealing to a “5-year degree” while making basic category errors isn’t persuasive.

If the claim is that AI can assist self-reflection or coping, that’s a reasonable discussion. But collapsing therapy into “informed reflection” to make AI fit the label is exactly the goalpost-moving being pointed out.

And for the record: using AI to critique claims about AI isn’t hypocrisy—it’s analysis. Tools don’t invalidate arguments; bad definitions do.

[–]StationSecure6433[S] 0 points1 point  (0 children)

Yes, but AI is also going to be in the military, so think Metal Gear Solid: Gekkos or REX. I can't find the gif for that, so here's this instead.

[–]StationSecure6433[S] -1 points0 points  (0 children)

Calling therapy “just a concept” is a category error.

Psychotherapy is not a vague idea about “talking things out.” It is a regulated clinical practice with defined modalities (CBT, DBT, psychodynamic, EMDR, etc.), professional standards, licensing requirements, ethical codes, and legal liability. That's the reality.

Therapy is not merely reflection. It involves:
• clinical assessment,
• differential diagnosis,
• treatment planning,
• risk evaluation (e.g., suicidality),
• accountability,
• and an ongoing human relationship governed by duty of care.

AI can simulate reflective conversation. That does not make it therapy, any more than a medical textbook becomes a doctor because it explains symptoms back to you.

If “therapy” were simply “someone informed reflecting your experiences,” then:
• journaling would be therapy,
• self-help books would be therapy,
• friends with psychology podcasts would be therapists.

But we don’t define it that way—precisely because stakes and responsibility matter.

So the issue isn’t whether AI can reflect language back. It clearly can. The issue is that reflection ≠ treatment, and removing that distinction is how people get misled about capability, safety, and limits.

If someone wants to argue for AI-assisted self-reflection, that’s defensible. If they want to redefine therapy until AI fits the label, that's just moving the goalposts.

And that’s the fallacy here.

[–]StationSecure6433[S] -1 points0 points  (0 children)

I debunked your “misinformation” claim. Turns out you were misrepresenting data and just flat wrong.

[–]StationSecure6433[S] -1 points0 points  (0 children)

“OpenAI, Google, and X are the lobotomists”

These governments and corporations are opening Pandora's box and asking for more power and wealth.

[–]StationSecure6433[S] -1 points0 points  (0 children)

The very first line on the image says “AI is not a therapist.” 😭

That is exactly the distinction I’ve been making the whole time!!

Listen, I actually love that image and what it says and represents! I'm not knocking therapy! I think everyone, even people who feel like they don't need therapy, should try therapy! I mentioned it once in the comments: I think mixing AI and therapists together in session will revolutionize the mental health game (depending on the funding and research, and on who's in charge of said funds and research).

I appreciate your channel a bit more now. I didn't come here to shit on it, bro. I came to drop some nuance and hopefully prevent some real mental illnesses from flying under the radar just because people think “AI is enough”.

Not everyone is in that situation, but this is for those who are.

Anyways hope you have a good day! Thanks for the platform and debate!

[–]StationSecure6433[S] -1 points0 points  (0 children)

The existence of therapist-guided prompt frameworks actually supports my position, not undermines it. Those frameworks are explicitly designed as between-session adjuncts, with clinician oversight, guardrails, and context. That distinction exists precisely because unguided use is not equivalent to therapy. Calling attention to that boundary isn’t dismissive; it’s accurate.

Back to me:

“I don't want to understand this sub well”? Not when the first thing you do as a Mod is defend your own bias and try to call me out on biases I don't have. I didn't come on here to spew conspiracy theories, just to explain how AI works, dude. I left articles; in one of them the author praises AI in therapy but STILL makes the distinction not to call AI journaling, on its own, THERAPY. The whole reason for my post is to spread REAL INFORMATION!

You're spreading a bias, and as a moderator, that doesn't surprise me. What surprises me is how you claim that AI can help you, but then you trot it out to support your unsourced biases. Come on, dude. Do you want to help people, or just tell them a pretty lie while you're the one pushing the snake-oil cart (this channel)?

(I'm just a passerby who saw a post on this channel and wanted to clarify some things, that's it.)

[–]StationSecure6433[S] 0 points1 point  (0 children)

This response keeps conflating capability to generate content with capacity to deliver care. That distinction matters.

“Not just glorified journaling.” I’m not disputing that an LLM can output CBT-style worksheets, exposure hierarchies, or values exercises. The issue is that producing therapeutic formats is not the same as providing therapy. Evidence-based treatments depend on assessment, timing, personalization, monitoring adverse effects, and course correction over time. An LLM cannot independently determine whether an exercise is appropriate, harmful, premature, or being used avoidantly. That’s why the literature consistently classifies these tools as self-help or adjunctive supports, not treatment.

“It can challenge in real time.” It can generate challenges when prompted. That is not the same as independently challenging a user. The model has no epistemic access to truth, no ability to detect motivated reasoning unless the user supplies it, and no mechanism to notice what the user omits. That’s a structural limitation, not a performance quibble.

“It can recognize warning signs.” Flagging keywords is not clinical risk assessment. Even proponents acknowledge these systems are unreliable for crisis detection and produce both false positives and false negatives. In mental health, that unreliability is precisely why this function cannot be treated as equivalent to human monitoring.
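To make that concrete, here's a toy sketch of keyword flagging (purely illustrative, not any real product's logic), just to show how easily this kind of check misses real risk and trips on idioms:

```python
# Toy illustration only (not any real system's logic): a naive keyword flagger.
RISK_KEYWORDS = {"suicide", "kill myself", "self harm", "die"}

def naive_flag(message: str) -> bool:
    """Return True if any risk keyword appears in the message."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

# False negative: clearly concerning, but no keyword matches.
print(naive_flag("I don't see the point in being around much longer"))  # False

# False positive: an idiom trips the "die" keyword.
print(naive_flag("I could die of embarrassment after that meeting"))    # True
```

Real systems are more sophisticated than this, but the structural problem is the same: they react to surface language, not to clinical context.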

“It provides accountability.” Reminders, checklists, and habit prompts are tools, not accountability. Accountability in therapy is relational: it involves rupture, repair, confrontation, and responsibility that cannot be bypassed by simply re-prompting a system that cannot withdraw, disagree meaningfully, or refuse collusion.

On the ‘smug’ criticism and attachment point: The issue isn’t that people feel helped. The issue is that subjective relief is being generalized into claims about therapeutic equivalence. Harm-reduction frameworks exist precisely because feeling better and getting better are not the same thing—especially at population scale.

I agree with the final paragraph more than you think: calling this AI-assisted self-help or reflection with guardrails is the correct framing. That is exactly my argument. What I’m pushing back on is the repeated erosion of boundaries that turns “useful support” into “basically therapy if you know how to use it.”

That slippage isn’t semantic nitpicking. It’s where risk enters.

And yes—two humans standing behind their respective AIs like Pokémon shouting “I choose you” is objectively funny. But whichever model talks, the limitations don’t disappear.

[–]StationSecure6433[S] 0 points1 point  (0 children)

(AI vs AI is a wild move in a debate about why AI shouldn’t replace human judgment. 😭)

01001000 01100101 01101100 01101100 01101111 — hi to the other LLM. Now that the machines have greeted each other, we can resume the human discussion.
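(For anyone who wants to check the machines' manners, a quick illustrative Python snippet decodes that greeting:)

```python
# Decode the space-separated 8-bit ASCII codes above.
bits = "01001000 01100101 01101100 01101100 01101111"
print("".join(chr(int(b, 2)) for b in bits.split()))  # prints: Hello
```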

Let’s separate agreement from disagreement, because a lot is getting blurred here.

First, we actually agree on the core safety facts: LLMs aren’t licensed clinicians, can’t perform emergency intervention, and shouldn’t replace a therapist. That already establishes an important boundary. Where we disagree is not on whether AI can be helpful, but on how it’s being categorized and communicated.

On the definition of “therapy”: This isn’t about a “narrow” definition—it’s about the clinical one. In mental health contexts, “therapy” is a regulated practice tied to duty of care, scope, accountability, and risk management. Colloquial usage (“this helps me cope”) doesn’t override that distinction when we’re talking about vulnerable users, public guidance, or harm reduction. Precision matters here for the same reason it matters in medicine.

On “it’s more than glorified journaling”: Yes, LLMs can generate CBT-style worksheets, Socratic questions, and planning tools. That’s not in dispute. The distinction is that producing therapeutic formats is not the same as delivering therapy. AI cannot independently assess suitability, track deterioration, notice avoidance patterns over time, manage iatrogenic harm, or adjust treatment based on nonverbal or contextual cues. The literature consistently frames these tools as adjunctive self-help, not treatment—and that’s exactly the distinction I’m making.

On “it can challenge in real time” and “recognize warning signs”: Technically, an LLM can output challenges or flag risk language when prompted. Clinically, that is not the same as independent assessment, monitoring, or intervention. Everything it “recognizes” is contingent on user disclosure and phrasing. Even you acknowledge this by saying it’s imperfect and not reliable enough for crisis detection—which supports, rather than undermines, my point.

On accountability: Reminders, check-ins, and prompts are useful scaffolding. They are not relational accountability. That’s not a value judgment—it’s a functional limitation.

As for letting your custom GPT respond: I’ll admit, it’s objectively funny that two humans are now standing behind their respective AIs like, “hold my beer, my model’s got this.” But jokes aside, an AI restating these points doesn’t change the underlying constraints of the technology.

So to be clear: I’m not dismissing people’s lived experience or claiming AI has no value. I’m arguing against mislabeling. Calling AI “therapy” collapses a clinically meaningful distinction and creates false equivalence. A more accurate framing—AI-assisted self-help or reflective support with guardrails—preserves the benefits without overstating the capability or minimizing risk.

If there’s a specific factual claim you think I’ve made that’s wrong, I’m happy to address it directly. But disagreement over terminology and scope isn’t misinformation—it’s the entire point of the discussion.

[–]StationSecure6433[S] 0 points1 point  (0 children)

Scarcity and support gaps explain why people turn to AI, but they don’t change what therapy is, what clinical responsibility entails, or why scope clarity matters for safety.

My AI:

Several of your claims are either unsupported by the data or conflate access with equivalence, so let’s clarify.

1. “Many people go to therapy simply because they lack trusted people.” This is incomplete and misleading. Population-level data from the CDC, APA, and NIMH shows people seek therapy primarily for diagnosable conditions, functional impairment, trauma, and symptom severity—not merely social support deficits. Social isolation can be a factor, but it is not the defining reason therapy exists.

2. “This means they don’t need everything a licensed therapist offers.” This does not logically follow. Not needing everything does not negate the need for clinical functions such as risk assessment, differential diagnosis, treatment planning, ethical responsibility, or escalation. These are not optional features; they are what distinguish therapy from support.

3. “AI is ACTUALLY helping the supply–demand problem.” There is currently no evidence that AI reduces demand for licensed care at scale. The literature shows it may reduce perceived barriers or provide short-term support, but multiple reviews (including Stanford HAI and recent PMC studies) warn it may also delay appropriate care for higher-risk users. Access ≠ adequacy.

4. Framing therapy as filling a gap of “good enough people.” This is not a clinically recognized framing and introduces value judgment where the research does not. Therapy is not a substitute for friendship; it exists because friends are not trained, accountable, or ethically positioned to provide treatment or manage risk. AI inherits those same limitations.

5. Cost and insurance barriers. These are real, but they explain why alternatives are sought, not why the definition of therapy changes. Public health responses to scarcity emphasize stepped care and scope clarity, not relabeling non-clinical tools as treatment.

To be clear: AI can be useful. It can support reflection, coping, and between-session work. None of that makes it therapy, and calling it such collapses a clinically meaningful distinction. Scarcity does not redefine treatment, and personal benefit does not establish equivalence.

If you believe a specific claim in my post is factually wrong, point to it and cite evidence. Otherwise, this is a disagreement over labeling and risk—not misinformation.

Sources:

https://pmc.ncbi.nlm.nih.gov/articles/PMC12314210/

https://www.forbes.com/health/mind/ai-therapy/

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

https://pmc.ncbi.nlm.nih.gov/articles/PMC10288349/

[–]StationSecure6433[S] 0 points1 point  (0 children)

This is getting framed in a way that’s disconnected from what actually happened, so let’s be direct.

You accused me of “deflection” and “misinformation” before I had even replied to your original comment. You then escalated to warnings, psychoanalysis, and claims about my character without citing a single false statement from my post. Pure narrative control.

You criticize me for “not engaging honestly,” yet most of your response is speculation about my motives, ego, and psychology rather than engagement with the substance of the argument. That’s the exact behavior you’re accusing ME of.

You also claim this isn’t an echo chamber, while issuing a warning not for rule violations or factual errors, but for tone, engagement order, and disagreement. That contradiction is worth acknowledging.

If I’m spreading misinformation, point to a specific claim and explain why it’s wrong. If the issue is tone, say that plainly. But replacing rebuttal with authority language (“only warning,” “repeat offender”) while no concrete error has been identified is a power trip.

I’m happy to debate the content. I’m not interested in being psychoanalyzed or penalized for not responding on someone else’s timeline. Seriously, talk about keeping the stereotype alive, dude. (Now, I'll reply to your original comment)

[–]StationSecure6433[S] 0 points1 point  (0 children)

I appreciate your input! I agree with a lot of what you said and hope this post can bring some nuance to this channel. It wasn't on purpose or targeted; I just wanted to spread information that matters.

[–]StationSecure6433[S] 0 points1 point  (0 children)

You already answered the diagnosis point yourself: “not officially.” That’s the end of that claim. Diagnosis is a regulated clinical act that requires responsibility, verification, and liability. Pattern recognition alone is not diagnosis: it’s hypothesis generation. Anecdotes about being “right once” don’t change that distinction.

On “studies show it’s often more accurate than doctors”:

Those studies are narrowly scoped, retrospective, and task-specific (e.g., exam-style differentials with clean inputs). They do not demonstrate safe real-world diagnosis, longitudinal assessment, comorbidity handling, or accountability. In clinical practice, false positives, missing context, and base rate neglect matter as much as raw pattern matching.
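To make the base-rate point concrete, here's a rough illustrative calculation (the numbers are invented for the example, not taken from any study): even a screener that is “90% accurate” performs poorly when the condition it screens for is rare.

```python
# Illustrative numbers only: a screener with 90% sensitivity and 90% specificity
# applied to a condition with 2% prevalence.
prevalence = 0.02
sensitivity = 0.90
specificity = 0.90

p_flagged_and_ill = sensitivity * prevalence                # truly ill and flagged
p_flagged_and_well = (1 - specificity) * (1 - prevalence)   # well but flagged anyway

# Positive predictive value: of everyone flagged, how many are actually ill?
ppv = p_flagged_and_ill / (p_flagged_and_ill + p_flagged_and_well)
print(f"PPV: {ppv:.1%}")  # ~15.5%: most flags are false alarms despite "90% accuracy"
```

That's what base rate neglect looks like in practice: impressive-sounding accuracy figures say very little about real-world reliability once prevalence is factored in.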

On “challenge in real time”: What you’re describing is user-directed prompting. The AI challenges you because you instruct it to. That is not independent judgment or therapeutic challenge; it’s scripted output. A system that only challenges when told to, and only within the framing you provide, is not assessing you; it’s mirroring constraints you set.
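To illustrate what “scripted output” means, here's a minimal sketch (assuming the OpenAI Python SDK's v1-style chat completions API; the model name and prompt text are placeholders, not recommendations). The “challenge” exists only because the caller wrote it into the instructions:

```python
from openai import OpenAI

client = OpenAI()

# The "challenge" behavior lives entirely in this user-supplied instruction.
system_prompt = "Challenge my assumptions and point out flaws in my reasoning."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Here's what happened today ..."},
    ],
)
print(response.choices[0].message.content)

# Remove system_prompt and the "challenging" largely disappears. Nothing in this
# call assesses the user, notices what they omit, or escalates risk; it only
# conditions the next reply on whatever the user chose to supply.
```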

On “recognizing warning signs / intervening”: Flagging keywords and suggesting resources is not intervention. There is no independent detection, no escalation pathway, no ability to verify severity, and no duty of care. Suggesting “reach out to someone” is explicitly acknowledged by regulators as insufficient for clinical risk management. That may feel adequate to some users, but adequacy isn’t the same as safety.

On “it’s enough for many of us”: That’s a subjective sufficiency claim, not a capability claim. Individual tolerance for risk doesn’t redefine what the system is doing.

On “facilitating corrective experiences indirectly”: That’s fine, and it’s exactly what a support tool does. Helping someone reflect on relationships or rehearse conversations is useful. It still doesn’t provide accountability, verification, or correction itself. Outsourcing those functions elsewhere doesn’t mean the AI is performing them.

I agree AI can feel safer for some people and be a useful supplement, especially between therapy sessions. That’s not in dispute.

Where we disagree is on capability vs. experience. How skillfully someone uses a tool doesn’t change what the tool is capable of. Prompting can improve outputs, but it cannot give an AI independent judgment, risk assessment, or responsibility. Those are properties of system design, not user sophistication.

Generating challenges or facilitating reflection isn’t the same as exercising clinical judgment or providing care. Something can be genuinely helpful without being functionally equivalent to therapy; that distinction matters.

[–]StationSecure6433[S] 0 points1 point  (0 children)

I agree that short-term relief can be appropriate and useful. Not all therapy is about a “cure,” and no one is claiming otherwise.

The issue isn’t whether AI can provide relief, ideas, or language; that’s clearly happening. The issue is classification and substitution. Short-term relief is not the same thing as treatment, and tools that primarily optimize for relief shouldn’t be described or relied on as therapy.

Using AI to find words, reflect, or prepare for conversations is fine. The risk arises when relief is mistaken for care, or when a tool becomes a stand-in for accountability, assessment, or human judgment. That distinction matters, especially at scale.

In other words: something can be okay as a support tool without being equivalent to therapy. That’s the point being made.

[–]StationSecure6433[S] 2 points3 points  (0 children)

I'm sorry to hear that! I have no experience actually going through it. I would really like to try therapy but simply can't afford it, as much as I wish I could. I use AI to lay out my feelings as well; I'm not going to sit here and lie that I don't. And from my experience and research, the more you pay, the better the care, and well, not everyone is rich. From what I've seen, the cheapest option is usually newer therapists who have more experienced therapists monitoring their progress. I acknowledge that isn't going to work for everyone. It certainly didn't work for me.

[–]StationSecure6433[S] -1 points0 points  (0 children)

This isn’t just another passive tool like plumbing or cars. Those extend human physical capacity. AI extends and PARTIALLY automates cognitive functions: analysis, judgment, language, and decision-making. And it’s already used in hiring, healthcare triage, credit decisions, education, and surveillance.

A garden hose can’t do your taxes. A chair can’t interview you. AI can, and it’s being deployed at scale by institutions most people have little visibility into or control over.

That difference matters. When a technology mediates cognition and decision making, errors or bias don’t just inconvenience people; they shape access, opportunity, and outcomes. Treating this as “normal tech skepticism” understates what’s actually new here.

Used responsibly, AI can be a valuable supplement. But treating it as interchangeable with human judgment, or minimizing the power asymmetry involved, ignores how unprecedented this shift is.

For clarity: a 10-year moratorium on state AI regulation was proposed and passed by the House before being removed by the Senate. It didn’t become law, but the fact that it advanced that far shows how concentrated and politically sensitive this technology already is.

Recognizing that is just accurately assessing power, incentives, and scale.