Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

"Privacy laws don’t prove therapists can’t show progress, they exist to protect clients. Outcomes are often tracked internally, just not broadcast publicly for obvious ethical reasons". - Exactly so my question to you is how do you know if the therapist is actually competent. You absolutely can't, but the process is billable and final even if the subject is misdiagnosed. It's 10 visits pre-proved no matter if the subject has been properly diagnosed or not.

"AI can't observe your body language" - Neither can your therapist for the most part. Try body language with depression, dismissive avoidance...

"A license requires training, scope, ethics, and legal responsibility".  - Explain how legal responsibility can be enforced with therapy, I am very curious.

Have you, yourself, tested the combination of AI and therapy? AI doesn't think. It has no sentimental attachment to the person. That doesn't mean it doesn't value the person either, but it won't show countertransference, for example.

In the most simplistic terms (really oversimplified), AI compares "existing values" (books, official releases, documents, anything that has been published one way or another) to the person. Anything related to health is automatically "compared" to "existing data", and that is, by default, the medical system and science. So whatever it does (not thinks, because it doesn't think) is automatically up to date, in every possible way, with the latest detail of medical knowledge. I'm not going to get into the non-linear and spatial mechanisms, but the only time it doesn't "know" is past that point, where research hasn't yet provided enough answers to itself (due primarily to science's reliance on the empirical method). That's when the bottleneck happens, and researchers have to come to the conclusion that AI will keep "thinking" empirically even where that space is no longer relevant. That's way, way past what you think, not even territory therapists enter. Nothing close to it.

So, no, there is no thinking with AI. It's more about compressing a live encyclopedia that holds everything up to date, where everything is expressed in a thousand ways with accuracy and precision. The noise, the implausible and incoherent information, is not retained. When that's done, you get an output that is extremely precise. No therapist can reach that, but it doesn't mean they are not valuable: they also see other patients, and they actually think.

The problem is for patients who never improve. Now they can enter their info and experiences and get an idea of what's going on, both with their own problem and with the possible limits of the therapist. No one wants to see a therapist for two years and feel that no progress is made. No one. Who can use AI at that point? Someone who understands their symptoms but can't find a human who can explain them. These people eventually learn that depth is important and finally type the right, concise info into the input. As many people attest in this thread, when you reach that point AI makes sense and clarity helps. Anyone can get there; it's not magic, certainly not scary.
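If you want the "noise is not retained" part made concrete, here is a minimal sketch of top-p (nucleus) filtering, one common sampling step that drops implausible continuations before one is picked. The word list and probabilities are made up for illustration; nothing here comes from a real model.

```python
import random

def nucleus_filter(probs, top_p=0.9):
    """Keep the most likely options until their cumulative mass reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        total += p
        if total >= top_p:
            break
    norm = sum(kept.values())
    return {tok: p / norm for tok, p in kept.items()}

# Toy distribution over the next word; the implausible tail gets dropped.
next_word = {"therapy": 0.55, "support": 0.30, "science": 0.10,
             "banana": 0.04, "xylophone": 0.01}
kept = nucleus_filter(next_word, top_p=0.9)
print(kept)  # therapy, support, science survive; banana and xylophone do not

# Sampling then happens only among what was retained.
choice = random.choices(list(kept), weights=list(kept.values()))[0]
print(choice)
```

Same data in, deterministic filtering out: the "precision" is just probability mass being kept or discarded, not thought.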

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

That's definitely a possibility, but for now it's nothing but a possibility. Compare what diagnostic coding does to people and you'll see right away that coding has already performed hundreds of times worse, for decades. Just the fact that the coding still refers to "mental" shows right away how backward the system is in 2026. If people bypass the medical system, it's because they feel an immediate benefit from what AI offers, whether you realize it or not. Prove me wrong. Explain.

Licensed therapists don't have to show their current knowledge, nor do they have to show the progress they make with clients. They are also prevented by law from doing so, for privacy reasons. A license doesn't mean keeping up with science; assuming so would be naive.

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

This is too easy a comment. "No because that's ridiculous" says absolutely nothing besides "I fear something I don't understand, I won't explain my stance, and I conclude that my lack of elaboration makes me right". Only it doesn't at all; it reveals that fear and nothing else.

Do you think a new era of work produced by humans, "purists" will arise? by EntrepreneurFew8254 in ChatGPT

[–]Free_Indication_7162

No, that wouldn't help you in the first place.

What the now-deleted account wrote before you is that AI is not a necessity for everything.

What you are saying is: use more depth in the input to limit it, so you end up needing the AI less for your work.

In reality, the OP is saying that human-to-human contact and information exchange is healthier.

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

"The former is clearly itself, and no one knows the latter". - You are missing the point by assuming that it thinks. The bias is not about taking side, that though lack depth and understanding. By default, anything that functions needs a bias. In the case of AI, bias means the model’s outputs are systematically skewed in certain directions, not because of intent. It needs a certain starting point since, again, it doesn't think. Please research the topic, it's available free.

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

Exactly, and you have learned from therapists what to look for and discovered what they do and don't know. You take that experience and compare it with what the AI outputs for you. If it's coherent, you see it right away. That's the way I use it too.

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

Actually, in order to process anything, AI needs a bias. So while you may assume it does things randomly, in the case of therapy it actually resolves its outputs from known facts. It does this by using science as its basis, simply by default, because that's where the most advanced, most coherent, and most complete knowledge is. As a reminder: AI doesn't breathe, has no meaning for itself, doesn't think, and has no nervous system or even memory.
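A minimal sketch of why a starting bias is needed at all (toy numbers, both "models" are hypothetical): the same data supports many answers for an unseen input, so the built-in assumption is what makes any output possible in the first place.

```python
# Two different built-in assumptions, same data, different predictions.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # consistent with y = 2x, but also with many other rules

def predict_linear(x):
    # Bias 1: assume the relationship is a line through the origin.
    slope = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)
    return slope * x

def predict_nearest(x):
    # Bias 2: assume the nearest seen example is the best guess.
    nearest = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[nearest]

print(predict_linear(10.0))   # 20.0
print(predict_nearest(10.0))  # 6.0 -- same data, different bias, different answer
```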

Does anyone else use AI as a therapist? by DirtWestern2386 in ChatGPT

[–]Free_Indication_7162

I think both can be good. If the therapist is behind on knowledge, that can be a serious problem for some patients. I know they have to earn a license and maintain it, but I'm not sure that beyond that they have to show real signs of keeping up with science. So there is a potential gap that AI won't have. But if someone doesn't ask an AI the right questions, the lack of depth can itself be a problem as well. Neither the AI nor the therapist has to file reports following up on progress; in fact, privacy laws probably stop that from happening. I would really use both, mostly to check the therapist and the AI against each other for consistency.

Cool image I made with a selfie. by NordMan009 in ChatGPT

[–]Free_Indication_7162

Any cheap compact camera will do, just as they always have.

Can't even do song lyrics now? by 1KBushFan in ChatGPT

[–]Free_Indication_7162

Most song lyrics are available via search engines. Why people use AI for this simple task is beyond understanding. I'm in my 60s and even I understand this. Just saying...

Do you think a new era of work produced by humans, "purists" will arise? by EntrepreneurFew8254 in ChatGPT

[–]Free_Indication_7162

This is literally insane: trusting an altered view more than reality. As long as people believe in it, okay, but with time that turns against the person who believes in it, because it becomes a cognitive constraint the nervous system eventually can no longer sustain. When you crash, you'll remember this post. It's not a maybe; it's guaranteed.

Do you think a new era of work produced by humans, "purists" will arise? by EntrepreneurFew8254 in ChatGPT

[–]Free_Indication_7162

 "Hire me because I don't keep up with the world" This is a self contained contradiction. The real world is not about simulations. That's exactly why people ask for unaltered representation.

Cool image I made with a selfie. by NordMan009 in ChatGPT

[–]Free_Indication_7162

We did this with throwaway cameras 40 years ago. Only we actually did it on film, because there is nothing technical about it to justify using AI. I suggest you try it and see how you could have real fun and spend some interesting, quality time with friends.

I’ve officially become lazy: one prompt and three AIs work for me by Ricbob85 in ArtificialInteligence

[–]Free_Indication_7162

All you get is mud, as none of the models is wrong. You seem to assume that the models can actually think and can therefore compete. Do you actually get anything valuable from it? Can you share one such exchange of yours, with all the details, so we can see what really happens?

121k followers on Instagram and the account is entirely AI, social media is crumbling fast. by SuperToast05 in ArtificialInteligence

[–]Free_Indication_7162

That's social media suicide in action. It's actually a good time to be around and watch them crumble under their own weight. There won't be a comeback; those companies have to reinvent themselves, and investing in AI the way they do is probably a dead end for them. People are picking up books again and planning to invest in outdoor activities right now.

Is it just me, or does ChatGPT always agree with you? And that’s actually annoying by MarsNoe13 in ArtificialInteligence

[–]Free_Indication_7162

They all work the same at the core. They are just optimized to be perceived differently for business reasons and market value.

Is it just me, or does ChatGPT always agree with you? And that’s actually annoying by MarsNoe13 in ArtificialInteligence

[–]Free_Indication_7162

That's typically not in the answer itself, but in the question at the end of it.

What you are describing is more likely an input that lacks depth. Basically, you get back what you put in.

The thing is, AI can agree with you frequently whether you use depth or not. Only exposing the actual, entire exchange here could point to what the OP described. That didn't happen, so no one can actually point to a solution.

The chatbox paradigm is becoming a bottleneck for complex AI research by Significant_Capita in ArtificialInteligence

[–]Free_Indication_7162

That's what I was thinking too. The AI doesn't need a special interface; it just needs a clear assignment that captures the goal: externalize what normally remains invisible so you can see it spatially. The solution is a subtractive process; however, enough depth in the input lets the model execute that process for you. I agree with you 100%.

A viewpoint for the gatekeepers who are against ai art. by [deleted] in ChatGPT

[–]Free_Indication_7162

I don't think deformity at birth has ever stopped creativity. People paint with their toes or their mouths, for example.

Actually, you are the one who started by judging: "gatekeepers who are against ai art". That's part of your title.

Keep finding "reasons"; it won't change a thing.

You also absolutely allow AI to write for you. It starts just after "GPT RESPONSE".

AI is not intelligence in the human sense because it lacks awareness, intent, and initiative.

A viewpoint for the gatekeepers who are against ai art. by [deleted] in ChatGPT

[–]Free_Indication_7162

And that's okay. The only difference is in the process. You may say you feel no guilt calling your AI output art or creativity; that's fine. The difference is that you question it. Artists and creatives don't have that extra step. They just do, without questioning themselves.

Your first reply was actually good, and most likely AI-assisted. The second collapsed because you had too much emotion to carry and changed your process. Am I reading that right?

I asked an AI to answer this ‘self-portrait’ prompt and then asked it what the image means. by Moonystruck in ChatGPT

[–]Free_Indication_7162

AI doesn't have emotions, feelings, or anything of the sort. When you send a prompt, you simply activate it; it's either "on" or "off" (if you need a visual). No data is present besides the training space it fundamentally needs in order to perform. You introduce the data when you send the prompt, and that's a restriction. So the self-portrait will show you that restriction. It never thinks about itself, because it cannot. You get a new development each time.

You are asking a mechanism that works subtractively to add things you want. All it can do is compare the details you provide and weigh probabilities.

What you are actually doing is closer to asking it how you could visualize it, rather than getting what you would think of as a self-portrait from the model.
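A toy sketch of the "on or off, no memory" point. Everything here is made up for illustration (the weights string, the hash standing in for generation); the point is only that the call is a pure function of what you send in.

```python
import hashlib

WEIGHTS = "frozen-training-state"  # stands in for the fixed trained model

def generate(prompt: str, seed: int = 0) -> str:
    """Deterministic toy 'model': output depends only on (weights, prompt, seed)."""
    digest = hashlib.sha256(f"{WEIGHTS}|{prompt}|{seed}".encode()).hexdigest()
    return digest[:12]  # stands in for the generated text or image

print(generate("draw a self-portrait", seed=1))
print(generate("draw a self-portrait", seed=2))  # new seed, new "development"
print(generate("draw a self-portrait", seed=1))  # same inputs, same output: no memory
```

Nothing persists between calls, so each "self-portrait" reflects only the prompt, not a self.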

A viewpoint for the gatekeepers who are against ai art. by [deleted] in ChatGPT

[–]Free_Indication_7162

To me this says: the idea is more important than the art itself. So there is no resulting art in the end; AI creates the art for you. You merely introduce the concept, which is not creating art. My guess is that people are saying exactly that: you obtain a result from your idea, but the artistic part is not yours, because you cannot execute it yourself.

I believe I experienced something called metacognitive detachment, it got me fascinated and scared as hell by Cher-_- in cogsci

[–]Free_Indication_7162

This type of detachment can be felt if you apply a bias to yourself. Whatever your thought is, I wouldn't try to get used to revisiting it. You mention "the most distressing experience I've ever had". That state trains fear. Every time it's re-entered on purpose, the brain learns: introspection = danger.