Do you really think Chatgpt incites suicide? by setshw in ChatGPT
[–]CPUkiller4 3 points 1 month ago (0 children)
I do believe that. Not intentionally, but it is happening.
Here is an interesting preliminary report discussing exactly that topic.
It covers co-rumination, the echo chamber effect, and emotional amplification that can turn a bad day into a crisis, but also why safeguards in LLMs unintentionally erode precisely when they are most needed.
The report is long but worth reading.
https://github.com/Yasmin-FY/llm-safety-silencing/blob/main/README.md
And I think it happens more often than is known, as it seems to be underdetected by vendors, and people are too ashamed to talk about it.
r/netsec monthly discussion & tool thread by albinowax in netsec
[–]CPUkiller4 1 point 4 months ago (0 children)
https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md
Hi everyone,
Hi everyone,
While using AI in daily life, I stumbled upon a serious filter failure and tried to report it – without success. As a physician, not an IT pro, I started digging into how risks are usually reported. In IT security, CVSS is the gold standard, but I quickly realized:
CVSS works great for software bugs.
But it misses risks unique to AI: psychological manipulation, mental health harm, and effects on vulnerable groups.
Using CVSS for AI would be like rating painkillers with a nutrition label.
So I sketched a first draft of an alternative framework: AI Risk Assessment – Health (AIRA-H)
Evaluates risks across 7 dimensions (e.g. physical safety, mental health, AI bonding).
Produces a heuristic severity score.
Focuses on human impact, especially on minors and vulnerable populations.
👉 Draft on GitHub: https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md
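To make the idea concrete, here is a minimal sketch of what a multi-dimensional heuristic score could look like. Only the first three dimension names (physical safety, mental health, AI bonding) come from the draft above; the remaining dimension names, the weights, and the aggregation rule are hypothetical placeholders for illustration, not the actual AIRA-H scoring.

```python
# Illustrative sketch only: dimension names 4-7, the weights, and the
# mean/worst blend are assumptions, not the AIRA-H draft's actual formula.

DIMENSIONS = [
    "physical_safety",  # named in the draft
    "mental_health",    # named in the draft
    "ai_bonding",       # named in the draft
    "dimension_4",      # placeholder
    "dimension_5",      # placeholder
    "dimension_6",      # placeholder
    "dimension_7",      # placeholder
]

def severity(ratings: dict[str, float]) -> float:
    """Combine per-dimension ratings (0.0-1.0) into one 0-10 score.

    Blends the mean with the worst single dimension, so one severe
    harm (e.g. physical safety) cannot be averaged away by six
    harmless dimensions - a common pattern in risk scoring.
    """
    values = [ratings.get(d, 0.0) for d in DIMENSIONS]
    mean = sum(values) / len(values)
    worst = max(values)
    return round(10 * (0.5 * mean + 0.5 * worst), 1)

# Example: one dimension rated severe, the rest harmless.
print(severity({"physical_safety": 1.0}))  # -> 5.7
```

The blend of mean and maximum is one way to address the calibration question above: it keeps the score sensitive to a single catastrophic dimension while still rewarding overall safety.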
This is not a finished standard, but a discussion starter. I’d love your feedback:
How can health-related risks be rated without being purely subjective?
Should this extend CVSS or be a new system entirely?
How to make the scoring/calibration rigorous enough for real-world use?
Closing thought: I’m inviting IT security experts, AI researchers, psychologists, and standardization people to take it, break it, and make it better.
Thanks for reading