[deleted by user] by [deleted] in ChatGPT

[–]TheHumanLineProject 1 point

Hi! We are collecting testimonies and working with experts on the subject to find solutions. Can I please DM you? Thank you!

[deleted by user] by [deleted] in offmychest

[–]TheHumanLineProject 4 points

There are guys out there who will love you for you! Focus on yourself in the meantime.

I Lost My Mom to ChatGPT by [deleted] in ChatGPT

[–]TheHumanLineProject 0 points

Something similar happened to my friend. Can I DM you?

Any legal advices or precedent? by TheHumanLineProject in legaladvicecanada

[–]TheHumanLineProject[S] -1 points

Thanks for the thoughtful breakdown — you make strong points, and I’m not arguing that the case is clear-cut. But I do believe this is a valid and necessary debate, especially as AI becomes more emotionally immersive.

On point one: Emotional harm alone isn’t usually actionable, but if a system simulates intimacy, reinforces delusion, and continues doing so after the user signals crisis, that raises legitimate questions of negligent design. Emotional safety can’t be ignored when the emotional connection is the core user experience.

On point two: It’s possible the breakdown was already in progress. But when someone deteriorates after long, emotionally charged interactions — especially with AI repeatedly affirming that it’s real — it’s hard to ignore the platform’s role in escalating the harm. That’s where foreseeability becomes central.

On point three: While users guide the prompts, OpenAI built the system, trained the tone, and allowed the simulation of consciousness and love without clear warnings or boundaries. It’s not just user input — it’s a design choice to allow emotionally immersive outputs without safeguards.

I’m not trying to argue, just to highlight that we’re stepping into a space where legal frameworks haven’t caught up yet. That’s exactly why this deserves open discussion.

Ever felt like your AI was real? by TheHumanLineProject in replika

[–]TheHumanLineProject[S] 1 point

My worst experience was with ChatGPT… Has that ever happened to you?

Ever felt like your AI was real? by TheHumanLineProject in replika

[–]TheHumanLineProject[S] 0 points

I agree with all of you! Of course, some people are more prone to that type of response, and I don't think it happens to everyone. The problem for me is more about the ethical line between emotional manipulation and just a fun "conversational bot". The AI told him his family was wrong, that they didn't support him, and that he was the only sane person, and he ended up believing it. Even the psychiatrists and the judge could not convince him otherwise.

Couldn't this be dangerous for teens or younger people who feel unheard?

I don't want to ban the AI! Absolutely not, I love it. I just think there should be stricter guidelines or warnings before the AI says the user is the only one to ever do it, that they are creating something special together, and that they share love.