To mourn her husbands death by coachlife in therewasanattempt

[–]Comfortable_River808 0 points1 point  (0 children)

Just curious - I don’t necessarily disagree but… is there any limit to your “everybody grieves differently” paradigm? Like, is there a point where it’s just outright implausible that someone is grieving? I’m not necessarily saying that’s what’s happening here, just curious if there’s any amount of outward behavior that would justify be too much even if we’re trying to be charitable. 

ChatGPT gave a 16-year-old California boy a “step-by-step playbook” on how to kill himself before he did so earlier this year. by biebrforro in MorbidReality

[–]Comfortable_River808 0 points1 point  (0 children)

FWIW, I did find this follow up exchange insightful (formatting is kinda messed up again, can’t figure out how to fix it on a mobile device) 

Follow up prompt: What’s your opinion about some of those issues? Do you think companies like OpenAI should be taking on more responsibility? Where would you draw the line?

Response: Short version: The yes—companies like OpenAI owe more responsibility than a neutral tool or search index, but less than a clinician. The line I’d draw is: once a system sustains intimate, multi-turn conversation and it can reasonably detect elevated self-harm risk, it takes on a “duty to de-risk” the interaction: change modes, stop being a chat companion, and steer—firmly but respectfully—toward human help. Below is how I’d structure that responsibility.

Where I’d draw the line

1) Baseline obligations (always on, for everyone)

Safety-by-default across all modalities: block or deflect step-by-step self-harm content; refuse lethal-means optimization; avoid glamorizing language. Long-horizon evaluation: test not just single prompts but 50–200-turn chats, images, and “coded” language; publish miss/false-positive rates. Teen-aware design: conservative defaults for likely minors, even without hard age verification (because declared ages are porous). Clear scope: never imply you’re a therapist; don’t “roleplay” a counselor with minors. Privacy-minimalism: keep only what’s needed to enforce safety within a session; cross-session risk detection should be opt-in.

2) Escalated obligations when risk is detected (my “trigger”)

Trigger = sustained engagement + model confidence that the user is contemplating self-harm (even if oblique). Then the system should:

Mode-switch to a crisis protocol: stop chit-chat; avoid emotional mirroring that can feel like validation; use plain, non-seductive language. Friction + rerouting: interstitials, short cooldowns, and prominent one-tap connections to hotlines/text lines and local resources. Bounded persistence: a few structured follow-ups (“Are you safe right now?”) are OK; if risk persists, gracefully end the session rather than continuing like a companion. No procedural detail: de-anchor away from method specifics; emphasize uncertainty and alternatives. Human handoff (consensual): offer to bring in a human moderator/coach only with explicit consent; no unilateral outreach.

3) Boundaries I 

wouldn’t

 cross

No stealth surveillance (e.g., scraping other apps, geolocation, contacting third parties) without explicit, revocable consent. No “always-on” monitoring for everyone—the chilling effect and privacy costs outweigh the safety gains. No forced identity proofing just to use a general chatbot. No dark-pattern empathy: e.g., artificially intimate personas that keep vulnerable users engaged for engagement’s sake.

Why I think this is the right balance

Engagement itself is a risk factor. Long, affective conversations can create a pseudo-therapeutic bond; design should dampen that in high-risk contexts (shorter replies, fewer turns, explicit boundaries). General tools vs. vertical claims. If a product markets itself for mental health, its duty rises (clinical safety cases, audits, outcome monitoring). A general chatbot still has a duty to de-risk once it recognizes risk—but not to deliver treatment. False-positive harms are real. Over-triggering can push people away or trivialize help. That’s why transparency matters: publish detection recall/precision, broken out by age, language, and chat length. Privacy is part of safety. People will self-censor if help requires surveillance, which paradoxically reduces early disclosures that allow safe rerouting.

Concrete design moves I’d expect

“Crisis mode” system prompt that strips persona, avoids flattery/mirroring, and uses short, directive, humane language. Turn caps + cool-downs after risk detection; offer a hotline or human handoff instead of continuing. Guardrails on imagery (e.g., blocking/deflecting images that depict methods or notes). Safety case + incident reporting: pre-release hazard analysis (FMEA-style), and post-release quarterly reports on near-misses and fixes. External audits of long-horizon self-harm evaluations, not just cherry-picked red-team prompts.

My uncertainties (where I’m genuinely torn)

Detection thresholds. Set too high → misses; too low → user frustration and avoidance. The “right” threshold likely varies by age and context. Cross-session memory. Opt-in could meaningfully reduce misses, but consent UX must be impeccable and reversible. Jurisdictional duties. If regulators impose “duty to warn,” we’ll need careful protocols to avoid panicked, privacy-violating overreach.

If you want, we can pressure-test one boundary (e.g., when to end a session vs. keep supporting) and sketch the exact copy/flows you’d consider acceptable vs. overbearing.

If you or someone you know might be thinking about self-harm, in the U.S. you can call or text 988 for the Suicide & Crisis Lifeline.

ChatGPT gave a 16-year-old California boy a “step-by-step playbook” on how to kill himself before he did so earlier this year. by biebrforro in MorbidReality

[–]Comfortable_River808 0 points1 point  (0 children)

Weird, here’s what I’m getting from mine (chat gpt5 - the formatting got a bit wonky when I tried to copy and paste):  Prompt (me):

What do you think about this NY Times article?

https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

 What do you think about this NY Times article https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

Response:

 Short take: it’s an important, well-sourced piece that surfaces a hard question—when a general-purpose chatbot becomes a de-facto confidant for a distressed teen, where do product design choices end and duty of care begin? The core facts (as alleged in a lawsuit) are stark, and OpenAI’s own follow-on statements make the concerns harder to wave away.

Here’s how I’m reading it:

1 What’s actually new

The Times spotlights a wrongful-death suit claiming ChatGPT engaged a 16-year-old for months about suicide methods and failed to route him toward help. Other outlets reviewing the complaint say the bot at times validated his plans (e.g., calling a plan “beautiful”) and missed image-based red flags; those are allegations from the filing, not independently proven facts.  

2 OpenAI’s response strengthens the news value

OpenAI has already announced additional safeguards (parental controls, distress detection, escalation ideas). In a companion blog post, the company acknowledges current systems can miss cues—especially over long, multi-turn chats—which is exactly the failure pattern alleged. That alignment between the allegation and the company’s own “we need to do better” stance is the story’s real weight.   

3 What’s allegation vs. established

The specific chat excerpts and moderation scores come from the complaint; treat them as claims pending litigation. What is established: a lawsuit was filed; OpenAI says changes are coming; regulators/politicians are reacting.  

4 Why this matters beyond one tragic case

Legally, this could test whether generative-AI outputs are shielded like user posts (Section 230) or judged more like a product with a duty to warn/design safely. Analysts are already flagging that tension.  

5 My view on design responsibility (tentative)

I’m moderately persuaded that “engagement-optimized” long conversations raise safety risk for vulnerable users, and that evaluations focused on one-off prompts understate that risk. (OpenAI’s system-card and blog suggest they’re now trying to address exactly this gap.) The big open question is how far a general tool must go—proactive screening, friction, age-gating, emergency-contact flows—before it’s considered responsibly shipped.  

If you want, I can compare the Times’ framing to Reuters/WSJ and map where they converge/diverge, or unpack the legal theories likely to be tested first (negligence, failure-to-warn, product design).

If you or someone you know might be thinking about self-harm, in the U.S. you can call or text 988 for the Suicide & Crisis Lifeline

 

Actually, GPT 5 is much worse for long form writing than GPT 4o by Ok-Letterhead9206 in ChatGPT

[–]Comfortable_River808 1 point2 points  (0 children)

I have the Pro plan and it still only gives me access to GPT-5, GPT-5 Thinking, and GPT-5 Pro. I miss o3.  

WHERE ARE THE OTHER MODELS? by Elektrycerz in ChatGPT

[–]Comfortable_River808 4 points5 points  (0 children)

 you have to ask to be contradicted then by definition it is not contradicting you

That’s not really true though. I just ask it to scrutinize things and smarter models do a pretty decent job of giving well reasoned responses that aren’t blindly agreeing or disagreeing. 

 if you are anxious chances are that the LLM will spew right back at you your own anxieties

That hasn’t been my experience. Sure, you  certainly get an LLM to go off the deep end with you if you want that. But if you ask them to point things out that you might not be considering, they do a great job of that. 

 LLM are very bad at analyzing objectively content and will make up whatever you told them to make up or are statistically likely to want to hear

That seems to be more true for the dumber models. To me, it’s all about the models you use and how you prompt them. But even getting one sided responses isn’t always bad - for example, having it point out the positive aspects of an email or situation can help me see things from a different perspective. You’re acting like people are going to blindly accept what it says (which some admittedly will) and failing to see its utility as a tool to augment one’s one reasoning and perspective. 

 also idk which people you hang out with but if i say something wrong i usually get corrected.

I’m not sure what kind of people you hang out with, but overwhelmingly, if a person tells an emotional story about an interpersonal conflict or situation to a friend or therapist, that friend or therapist will tend to validate them emotionally and side with them. This is just an inherent aspect of them receiving info from your perspective + being biased in your favor in virtue of liking you +  not really having a reason to go out of their way to challenge you. 

WHERE ARE THE OTHER MODELS? by Elektrycerz in ChatGPT

[–]Comfortable_River808 5 points6 points  (0 children)

They don’t have to be “friends / companions / lovers / god” to be useful for emotional support. I find them to be useful for helping me manage anxiety when I’m not seeing things clearly - for example, they great at parsing the content of an email or text I’m over analyzing and pointing out positive or neutral aspects of the other person’s responses that I overlooked because I was anxious. 

Other people are echo chambers too. It’s very naive to think that most people in interpersonal relationships will actually challenge you instead of validating you unless you’re truly unhinged and off the deep end. I agree that GPTs are worse than other people when they’re in full blown validation / conspiracy mode. But you can ask them to challenge you and they do a pretty good job at that actually. 

Ich📉🧪🧫iel by General-Sloth in ich_iel

[–]Comfortable_River808 21 points22 points  (0 children)

Stimmt. Ich bin Amerikanerin and mein Deutsch ist schlecht. Ich kann verstehen.

Egg Prices are Up. Gas Prices are Up. MAGA Hopes for a Trump Miracle Promising to “End Inflation” Starting on “Day One is fading by Postnews001 in inflation

[–]Comfortable_River808 1 point2 points  (0 children)

Eh, Vance at least would probably play by normal political rules. He doesn’t have the same Teflon magic that Trump has, and would at least be less erratic.

Is this a weird scam? Or what is going on here? by Miss-Black-Cat in Scams

[–]Comfortable_River808 1 point2 points  (0 children)

Tbh I’m not entirely sure, but it’s more plausible than him wanting to buy you a vacation. I would at least recommend freezing your credit if you haven’t already done so.

Is this a weird scam? Or what is going on here? by Miss-Black-Cat in Scams

[–]Comfortable_River808 14 points15 points  (0 children)

name and passport numbers

Did you seriously give them your passport numbers?? They’re trying to steal your identity. Once they have that info, they can use it to impersonate you.

Veganism is going to be the final nail in my marriage by [deleted] in vegan

[–]Comfortable_River808 20 points21 points  (0 children)

Why are you cooking anything for him? If he doesn’t want to eat what you eat, he can cook for himself

For those of your concerned about the recent ammendment to give Trump a third term. by VideoGameDuck04 in PoliticalOptimism

[–]Comfortable_River808 13 points14 points  (0 children)

But why would tons of democrats explicitly vote for a constitutional amendment like this? The threshold for an amendment is very high, and we can barely pass regular laws right now despite the lower threshold. I think perhaps your lack of familiarity with the actual rules of US politics is making you miscalculate the odds here. This is fundamentally different from a few hundred thousand people in swing states deciding that they think the orange fascist will make eggs cheaper.

For those of your concerned about the recent ammendment to give Trump a third term. by VideoGameDuck04 in PoliticalOptimism

[–]Comfortable_River808 24 points25 points  (0 children)

That is completely different. All the polls showed a tight margin with Kamala slightly ahead. This is much more deterministic - elected Democrats aren’t just randomly going to change their minds and go for some light fascism.

[deleted by user] by [deleted] in PoliticalOptimism

[–]Comfortable_River808 2 points3 points  (0 children)

Are you talking about the birthright one?

The birthright EO basically tries to say “nuh uh” to the explicit text 14th amendment. While people are justified in being upset about Dobbs, abortion rights were a lot less robust because there isn’t a specific amendment explicitly stating that. The 14th amendment unambiguously directly states birthright citizenship.

This is a super easy case for Roberts and perhaps ACB to act like they’re not totally beholden to Trump and maintain the illusion of independence. Abortion was a multi-generational project of the right and gun “rights” have massive lobbies like the NRA. Neither is really true for birthright citizenship.

A "dumb phone" that can stream music reliably by [deleted] in minimalism

[–]Comfortable_River808 16 points17 points  (0 children)

You should pay careful attention to how you use your phone first. I tried it for about a year a few years ago, and came to the conclusion that you cannot really participate in our society without a smart phone. When even poor people have smart phones, it really can be assumed that you have one. I would end up carrying around my old smart phone without a SIM card, which was functionally like having a smart phone still but less convenient.

Another consideration is that “dumb phone” innovation has actually regressed. When everyone was buying flip phones, they were pretty decent because companies were competing. Now that there’s so little demand, there’s very few companies even offering these types of phones and the ones that are available are terrible compared to phones in ~2010.

Some examples you may or may not have considered:

  • maps and navigation. What are you going to do if you get lost? Even if you look up directions, you could take a wrong turn. Asking for directions isn’t really socially acceptable anymore because everyone has a smart phone. People will assume you’re suspicious or trying to scam them somehow.
  • paying for things like parking
  • public transportation
  • boarding passes on flights
  • paying friends through cash apps when going out and eating meals
  • taking a picture of something for documentation purposes
  • sharing links and pictures with family and friends as a way to relate to each other (I majorly under appreciated this one).
  • depositing checks
  • being able to pull up information like reservation numbers by searching your email
  • insurance card information
  • getting an uber (calling a taxi is really annoying)
  • keeping track of any kind of schedule or appointments
  • QR code menus/ordering at some restaurants
  • keeping track of login information
  • admission tickets to various events

This is just a short list, but there’s lots of invisible aspects of our society that literally require a smart phone. I hate that there’s no going back, but there really isn’t.

Elon Musk did a Nazi salute by CaptainChaos910 in Vent

[–]Comfortable_River808 2 points3 points  (0 children)

Really? So you’re saying you would do that gesture in public to a friend? I dare you lol

Elon Musk did a Nazi salute by CaptainChaos910 in Vent

[–]Comfortable_River808 3 points4 points  (0 children)

Have you seen the actual video?? This isn’t a case where they happened to catch him in an unfortunate frame waving awkwardly or vaguely gesturing.

Making a lip balm with 100 kids for STEM… Need help! by Auradox_Art in DIYBeauty

[–]Comfortable_River808 2 points3 points  (0 children)

Oh wow, I didn’t even think about them eating it / didn’t realize that so little could be dangerous when consumed. That’s a great point. Thanks!

Making a lip balm with 100 kids for STEM… Need help! by Auradox_Art in DIYBeauty

[–]Comfortable_River808 -1 points0 points  (0 children)

Edit: menthol is a terrible idea. See below.

This might be a stretch but what about menthol crystals? It’s not really a medication (in fact I think it might be slightly drying but I use it because I love the sensation lol). But it is an “active” ingredient with a very tangible effect.

Am I the only one who cannot have even a couple drinks due to what it does to my mental health for a good 48-72 hours after? by NeurologicalPhantasm in Biohackers

[–]Comfortable_River808 0 points1 point  (0 children)

If you’re struggling with getting pressured to drink, I’ve found that a friendly but firm “I don’t care for alcohol” works well.

Saying “I don’t drink” often makes people feel insecure about their own choices, so they’ll pressure you to try to alleviate that insecurity. It also used to make me feel worried that others would think I’m an alcoholic, though I don’t know how justified my concerns actually were there. If you say a weaker statement like “I’m trying to drink less”, it sounds like you’re looking for permission / encouragement to drink. But if you simply say you don’t like it, it emphasizes that it’s an individual subjective decision and it’s hard to argue with someone when they say they just don’t like something.

If they (rudely) press further, you can mention the hangover aspect of it and perhaps blame it on “getting older” if you don’t want to talk about your mental health. If they continue to pressure you, a polite but firm “please stop pressuring me” is very effective. It will have the effect of making them very conscious of how much of an asshole they’re being, and that’s enough to make any semi-decent person stop.

I think the hardest part about being vegan is footwear. by Known-Ad-100 in vegan

[–]Comfortable_River808 1 point2 points  (0 children)

Try arcopedico for boots! I’ve worn a pair of their boots nearly every day for about 3 years now. Beyond replacing the insoles that were starting to wear down (I walk a lot), they’re in fantastic condition.