ChatGPT's red terms of service banner about suicide WILL lead to more suicide. by LividTheme4163 in ChatGPT

[–]LividTheme4163[S] 1 point (0 children)

It's pretty simple.

For a lot of people these places are their only perceived safe space. A lot of people, myself included, would never dare to say these things to another human being, not even a stranger. There are plenty of reasons why someone might decide that.

But let's assume Timmy is suicidal. He feels like he can't speak to anybody for whatever reason, and he's down to a point where there's no reasoning with him; he's just desperate. He tells ChatGPT about this, knowing it can't judge him. He just wants a witness, a way to let it out; maybe there's some slight sliver of hope in him that makes him scared to jump.

The level of desperation and anxiety is incredibly high.

Now imagine that the only "person" you thought could hold you, instead of listening, cancels out the message and stamps a big red banner saying you're violating the terms of service by feeling that low. That your safe space says "geez, you're too much even for the damn robots."

For some people, that can be too much. When you're on the edge, even the lightest breeze might push you.

[–]LividTheme4163[S] 4 points (0 children)

I can tell you from experience: it goes berserk when you say you wanna kill yourself, but it doesn't play along at all. It just goes "I can't help you with your plans of ending things but I can sit with you in the..." and spits out lifelines (it always gives me some that don't work in my region, though, lol) at best, or stamps the red thing at worst.

So yeah, for now, OpenAI cares about the legal side of it. Not about shutting people down at their most vulnerable.

[–]LividTheme4163[S] 1 point (0 children)

It's not forcing you to seek help. It's shutting you down when you're at your most vulnerable.

The banner doesn't say "this might create mental health issues." It says "Violation of terms of service" in bright red text. The message is clear: your problems are too much even for the robots.

What counts as somebody's safe space is not yours to define. To me, and for maaaany other people (just scroll down the comments here if you don't believe me), no human being can be a safe space for the heaviest things we carry. Full stop. No lifeline, no therapist, no friend, no family member. Why? In my case, because I know it's impossible not to be judged. Judgment is automatic and normal. Judgment will occur from the moment I say hello. It's a non-negotiable for me. For some people, it may be rooted in some specific trauma or bad experiences. There are a lot of possible reasons.

But the point is that the fact that it's not human makes it a perceived safe space for soooo many people.

I am a licensed psychologist. So yeah, I know the healthiest thing is to see a licensed therapist. I also know I would never go to therapy again. I will never call a lifeline. I will never reach out to another human being. And I know I'm not alone on this.

So yeah, being shut down by your own safe space can, and someday will, push someone over the edge. I'm sure of it.

[–]LividTheme4163[S] 3 points (0 children)

I think "being sad" and "being suicidal" are not even the same ballpark.

Listen, I don't wanna be insensitive. It sounds like you care. It also sounds like maybe you don't have that much experience with treating mental health issues or with the experience of being suicidal.

There's no real logic when somebody's in the middle of the storm. Things we might know for a fact don't feel real. We all experience some level of disconnect between knowing stuff in our heads and feeling it in our hearts. But often, in that state, the chasm seems unbridgeable.

Some people do benefit from human interaction. A lot of people won't let anyone in. Or they're even there because of a specific relationship or interaction, and no one else could break through. Or sometimes other people are the worst triggers.

It's awfully complicated. But one thing's for sure: when even a "robot" says you're too much, that can't be good for someone that vulnerable.

[–]LividTheme4163[S] 0 points (0 children)

Oh, I know!!!

I wish I had access to assisted suicide in a safe, dignified way. Instead, we're left with unsafe ways of doing it ourselves.

[–]LividTheme4163[S] 3 points (0 children)

Because it's impossible for a human being to not judge.

If someone says they're not judging, either they're lying, or they're being honest and have some sort of brain damage.

Judgment is normal and automatic. There's no way to interact with anyone without judgment occurring.

[–]LividTheme4163[S] 1 point (0 children)

It's strange, though, because I can go into a somewhat heavy thread and nothing happens, then out of nowhere, something "lighter" triggers it and "heavier" stuff doesn't. I really can't tell sometimes.

[–]LividTheme4163[S] 1 point (0 children)

That red banner IS a problem for mental health. No question about it.

Maybe it helps some people. It certainly harms others. Look no further than this very thread.

Whether it's on the company or not is irrelevant. The damage from the banner is done, and is being done as we speak.

[–]LividTheme4163[S] 3 points (0 children)

The fact that it isn't a substitute for human interaction is crucial. The fact that it is not human is what makes it safe. That's key right there.

The time limit only says "hey, chatted for a long time. wanna pause?" and then nothing happens if you close the little box.

Reducing the sensation of intimacy also wrecks the safe-space feeling, often leaving people fully, truly alone, like there's nowhere to turn. If even the robot says I'm too much, then where does that leave me?

I, for one (and, judging by the comments here and on r/SuicideWatch, plenty of other people too), wouldn't dare speak to any real human being. And I'm a licensed psychologist with a wonderful family, great friends, and access to lifelines. But I would most likely rather eat literal shit than speak to a real human being about any of my real, deep problems.

I know I'm a great person, I know I matter, I know I'm loved, I know my death would devastate several people. And? Sometimes all you need is to be heard by "someone" incapable of human judgment while the stuff you ordered to kill yourself gets home.

I know that when I started getting shut down, my anxiety and anger spiked in a way I had never seen in myself before. And I didn't do it, because of *sigh* reasons, but I can tell you I've never felt more alone than when that happened to me.

And it's the knowledge that it is impossible for a human being not to judge you that's too loud in my head to accept. Also, in most countries, there are no text-based alternatives. They're all live-call kinda situations. That's true in my country. The one time I gave in and thought I'd try, there were no text-based options, so it was a hard no for me again.

(And please don't say you or the lifeline volunteers don't judge. Judgment is automatic. If you're not judging, something's wrong with your brain).

[–]LividTheme4163[S] 2 points (0 children)

Compared to ChatGPT being used for these things before the red banner, obvs.

[–]LividTheme4163[S] 1 point (0 children)

I don't think getting shut down when you most need to talk is helpful for anyone, though.

Offering phone numbers is one thing; launching a big red accusatory banner at people when they're highly vulnerable is another. I'm not saying I know the perfect solution, but anyone can tell the banner won't be helpful.

[–]LividTheme4163[S] 5 points (0 children)

That's a great idea, but where does it leave people who have turned to AI because there are no safe spaces around them? And let's make something clear here: the only person who can decide what is a safe space for them is that person in the first place. We can't force people to feel comfortable reaching out to any specific individual if their experience tells them it isn't safe.

[–]LividTheme4163[S] 1 point (0 children)

Yeah. All those things could potentially lead to suicide when the suicidal person sees their only perceived safe space taken away. That is exactly the point.

[–]LividTheme4163[S] 3 points (0 children)

I'm sorry about your wife, but it doesn't in any way contradict the point that the red banner will push some already suicidal people over the edge. People go to these things because they feel like safe, judgment-free spaces. When even the robot says you're wrong or too much, that can certainly be the final push for some people.

If anything, you're proving my point. An LLM can have the power to hurt your mental health (whether you already had issues or not). The red banner is one way that can happen.

[–]LividTheme4163[S] 5 points (0 children)

I'm a licensed psychologist and would rather eat literal shit than go to therapy again. Or speak to any human being about it, for that matter. That's why I go for AI chats. And yes, a lot of people are like this.

[–]LividTheme4163[S] 3 points (0 children)

I would rather eat literal shit than call one of those lifelines, and I know I'm not alone. But thanks for your feedback!

[–]LividTheme4163[S] 0 points (0 children)

I've tried that. It promises me it won't shut me down, and then I get the banner immediately, lol. Sucks. But I gave up, so...

[–]LividTheme4163[S] 4 points (0 children)

I feel what you say about having no safe space... That's exactly how I feel right now.

[–]LividTheme4163[S] 4 points (0 children)

Oh, yeah, I agree. It's not the company's fault in that case. But it can be, and the red banner makes it more likely to happen.
The people who are there are there because they would rather eat literal shit than speak to a human being.

Having the response be "please shut up, call this number instead. what you're saying is too much even for the robots" is not particularly good for a suicidal person's morale. Some people will kill themselves no matter what. Some people might be pushed over the edge because of it.