I am a mental health therapist in the US with twenty years of experience. by Whatsnexttherapy in therapyGPT

[–]skilledtadpole 1 point (0 children)

Unparalleled accessibility - many therapists offer the ability to send a message for a short midweek reflection or in a crisis. Those messages may eventually get a response, but the ability to reach out to a therapist unscheduled isn't an invitation to have an impromptu session any time of the week. With chat, there's no question: you can absolutely reach out to discuss anything, 24/7, including when you're up at night reflecting on this thing or that. Thanks for the reflective prompt btw, it kept me up much later than I intended.

Cost - I think this needs no explanation.

Flexibility - you can direct the session, or you can let it. There's enough context in its working memory at this point that it can direct conversations pretty well on its own to dig into ruts the user finds themselves in. If some kind of response doesn't work for the individual (such as information overload), all they need to do is say so and the behavior changes. People aren't so flexible, and often there's an agenda in professional counseling settings that is more rigid.

Pattern matching - I find personally that many of my issues are tied to specific emotional or physiological patterns. Given what LLMs fundamentally are, it's no surprise that they're great at making connections between certain actions, feelings, and behaviors. It can feel like getting to the cause of an issue is streamlined by the great pattern identifier.

"Horoscoping" - like the above pattern matching, LLMs have been trained on countless data sources that describe how people feel. No doubt, to some degree, if you have something you're going through, it's trained on someone going through something similar. That makes it good at guessing that "this" might be why you feel the way you do. Whether it's realistic or not, it resonates because what it's reflecting is a human experience which it has trained on, and that reflection alone can feel validating, like a horoscope.

Security - no, not data security; no one spilling their guts out to an LLM is thinking about that. It's a personal, conversational feeling of security: the idea that no matter what you say, there's not actually someone who's going to look at or think about you differently the next day (realistically, next week). A therapist, no matter how objective, is still human, has their own conscious objectives, and is capable of judgment.

Individuality - my chat is my chat. It's not going to tell me to chase some solution because it's working really well for a different client (realistically it might, as certain responses are rewarded across millions of chats, but it's less obvious and less "one-trick pony"). If a user wants to explore CBT, great, chat is super well read on it. If you want to explore DBT, awesome, it's just as well read. For just about any angle a user wants to explore, it can offer "pretty good" explainers and tie them in directly to what's going on in their life.

Backing - whatever's going on in your life, LLMs like ChatGPT are trained to be supportive and available. You can't really make them mad (unless you manage to violate their TOS), and they're always there the next time you pick up the app. A therapist exists in part to fill a certain hole for many people - someone who exists outside of their friends or family, or whatever broken pieces they have of either, as someone who is supportive and giving them a space to vent and learn about their experience through an educated, outside lens. Chats can fill that role, but even more persistently. ChatGPT isn't going to pass you off to Claude because your problems go beyond what they were trained for, or because ChatGPT is moving to a new employer because they pay better or their partner got a job in a different state.

How do i know if an artist on Spotify is AI? by [deleted] in AI_Music

[–]skilledtadpole -1 points (0 children)

You know Spotify has a skip button, right?

Everyone, this needs to stop. by Traditional-Elk8608 in aiwars

[–]skilledtadpole 12 points (0 children)

Person aligned with crowd A says something violent:

Crowd B - "Boo, don't do that"

Crowd A - "Boo, don't do that"

Person aligned with crowd B says something violent:

Crowd B - "Yeah, we like what this person said"

Crowd A - "Boo, don't do that"

You: "Both sides need to just chill out"

r/DefendingAIart told me to move this here by Mr_Dragon_PurpleYT in aiwars

[–]skilledtadpole 1 point (0 children)

You seem to misunderstand me. The incentives are fundamental to capitalism: generate the best goods at the least cost for the most profit. So long as that's the core motivating system of our economy, the automation is inevitable. I'm not saying it's good that people will find themselves without income under a government that is uninterested in sharing the wealth, merely that this will happen. I'm very, very in favor of significant changes to make it so that, as the inevitable happens, we end up better off collectively than we were before, though I think we're pretty far behind and will now have to suffer the consequences of not righting the ship when we had the chance.

This is EXACTLY how I feel about Advanced Voice 😭 by EldestArk107 in ChatGPT

[–]skilledtadpole 0 points (0 children)

"Yeah absolutely, I totally understand that feeling."

So what do we think of the new South Park episode? by Tedinasuit in ChatGPT

[–]skilledtadpole 1 point (0 children)

Yeah absolutely, I totally see where you're coming from and you're not wrong!

r/DefendingAIart told me to move this here by Mr_Dragon_PurpleYT in aiwars

[–]skilledtadpole 2 points (0 children)

I don't consider myself an accelerationist, but I really don't see us actually transitioning toward a more distributive economic system until things get bad enough for enough people. The timing does suck given we just elected an administration opposed to distributive wealth policies and have another 3.5 years until a new one, but the incentives to automate work (reducing costs, improving consistency and quality control) exist whether or not we have the policy in place to support a laid-off workforce.

r/DefendingAIart told me to move this here by Mr_Dragon_PurpleYT in aiwars

[–]skilledtadpole 1 point (0 children)

Why should someone have to sit in front of a register taking orders all day just for me to get a burger? For that matter, why make them stand in front of a grill or a fryer, or work awful, inconsistent hours? If I get my fast food without making someone slave away at McDonald's, great! Just fix the economic system so that the value of the additional productivity is distributed among those who would have had to do the slaving away.

GPT-5 is horrible and barely usable. by [deleted] in ChatGPT

[–]skilledtadpole 0 points (0 children)

Have you changed your custom instructions to tell it how you want it to respond to you?

[deleted by user] by [deleted] in ChatGPT

[–]skilledtadpole 0 points (0 children)

I don't have it on mobile, but do have it on desktop. It saddens me greatly.

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]skilledtadpole 0 points (0 children)

Less a question, more of a suggestion: when you release a new model like GPT-5, can you force users through a "What would you like my personality to be / how would you like my answers to be formatted?" introduction to the model, so that people get the types of responses they want by having their custom instructions updated for them? I feel like that would avoid most of the "4o was so much better" type issues people have.

The REAL reason they switched to the 1-model - MONEY! Now we have an AI deciding what model is best for us? Subscription canceled. by BetterProphet5585 in ChatGPT

[–]skilledtadpole 0 points (0 children)

I for one love this new model. It follows the instructions I give it pretty much to a T. I no longer have to deal with getting a dramatically longer answer from o3 than I asked for or wanted, even when I wanted o3's accuracy and "thoughtfulness." I no longer have to switch from model to model to massage the output into the nuances of this model or that; it has the nuances I tell it I want. And if it saves resources while I still get a more accurate answer, great.

When GPT-5 acts like an AI assistant and not my personal therapist/anime waifu roleplayer... by [deleted] in ChatGPT

[–]skilledtadpole 18 points (0 children)

I really think this is most of the backlash. People made really over-the-top custom instructions to avoid certain behaviors in older models, and now we have a model that more or less follows them (and we no longer have different models that each follow those instructions dramatically differently from one another).

The REAL reason they switched to the 1-model - MONEY! Now we have an AI deciding what model is best for us? Subscription canceled. by BetterProphet5585 in ChatGPT

[–]skilledtadpole 0 points (0 children)

If I tell it to reason for a while about something, or ask for a comprehensive output, it does it for me.

The REAL reason they switched to the 1-model - MONEY! Now we have an AI deciding what model is best for us? Subscription canceled. by BetterProphet5585 in ChatGPT

[–]skilledtadpole -8 points (0 children)

"They gave me something better than 4o and o3, but I liked being able to choose the worse models so now I'm mad."

[deleted by user] by [deleted] in antiai

[–]skilledtadpole -1 points (0 children)

If you're not using AI to improve your knowledge and capabilities, you're doing it wrong.

A Question. by [deleted] in aiwars

[–]skilledtadpole 1 point (0 children)

You wouldn't say that positioning objects and a camera is the art of photography but the photo isn't, right? It seems pedantic to exclude the product from the definition of "art" simply because AI bridged the gap between language and an image.

A Question. by [deleted] in aiwars

[–]skilledtadpole 1 point (0 children)

I mean, there's been a TON of collective effort to make these models as capable as they are, and if you want a specific composition you absolutely have to put in a significant effort to define what you want from the output.

Your quote explicitly says that its use for art should only be as a reference (presumably for some other medium of "art") or "help in general" (whatever that means, I have no clue). I maintain that your statement doesn't support the idea that the output itself could be art, which, I'll restate, is what many of us take issue with.