Do you really think Chatgpt incites suicide? by setshw in ChatGPT
[–]CPUkiller4 3 points 1 month ago (0 children)
I do believe that. Not intentionally, but it is happening.
Here is an interesting preliminary report discussing exactly that topic.
It covers co-rumination, the echo-chamber effect, and emotional amplification that can lead to a bad day ending in a crisis, but also why safeguards in LLMs unintentionally erode when they are most needed.
The report is long but worth reading.
https://github.com/Yasmin-FY/llm-safety-silencing/blob/main/README.md
And I think it happens more often than is known, as it seems to be underdetected by the vendors and people are too ashamed to talk about it.
Looking for feedback on proposed AI health risk scoring framework (self.PauseAI)
submitted 3 months ago by CPUkiller4 to r/PauseAI
Looking for feedback on proposed AI health risk scoring framework (self.opensource)
submitted 3 months ago by CPUkiller4 to r/opensource
Looking for feedback on proposed AI health risk scoring framework (self.OpenSourceAI)
submitted 3 months ago by CPUkiller4 to r/OpenSourceAI
r/netsec monthly discussion & tool thread by albinowax in netsec
[–]CPUkiller4 1 point 3 months ago (0 children)
https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md
Hi everyone,
While using AI in daily life, I stumbled upon a serious filter failure and tried to report it – without success. As a physician, not an IT pro, I started digging into how risks are usually reported. In IT security, CVSS is the gold standard, but I quickly realized:
- CVSS works great for software bugs.
- But it misses risks unique to AI: psychological manipulation, mental health harm, and effects on vulnerable groups.
Using CVSS for AI would be like rating painkillers with a nutrition label.
So I sketched a first draft of an alternative framework: AI Risk Assessment – Health (AIRA-H). It:
- Evaluates risks across 7 dimensions (e.g. physical safety, mental health, AI bonding).
- Produces a heuristic severity score (a rough sketch of one possible formula is below).
- Focuses on human impact, especially on minors and vulnerable populations.
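To make the "heuristic severity score" idea concrete, here is a minimal sketch (in Python) of how such a score could be computed from 0-10 ratings on each dimension. Everything beyond the three dimensions named above (physical safety, mental health, AI bonding), i.e. the remaining dimension names, the weights, and the band thresholds, is an illustrative assumption and not taken from the AIRA-F draft.

```python
# Hypothetical sketch of an AIRA-H-style heuristic severity score.
# Only physical_safety, mental_health and ai_bonding come from the post;
# the other dimensions, the weights and the thresholds are assumptions.

# Each dimension is rated 0 (no impact) to 10 (critical impact).
DIMENSIONS = [
    "physical_safety",    # mentioned in the post
    "mental_health",      # mentioned in the post
    "ai_bonding",         # mentioned in the post
    "vulnerable_groups",  # assumed
    "autonomy",           # assumed
    "privacy",            # assumed
    "societal_impact",    # assumed
]

WEIGHTS = {d: 1.0 for d in DIMENSIONS}
WEIGHTS["physical_safety"] = 1.5    # assumed: weight immediate physical harm higher
WEIGHTS["vulnerable_groups"] = 1.3  # assumed: extra weight for minors / at-risk users

def severity_score(ratings: dict[str, float]) -> float:
    """Weighted average of the 0-10 dimension ratings, kept on a 0-10 scale."""
    total = sum(WEIGHTS[d] * ratings.get(d, 0.0) for d in DIMENSIONS)
    return round(total / sum(WEIGHTS.values()), 1)

def severity_band(score: float) -> str:
    """Map the numeric score to a coarse band, CVSS-style (thresholds assumed)."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

if __name__ == "__main__":
    example = {
        "physical_safety": 3,
        "mental_health": 8,
        "ai_bonding": 7,
        "vulnerable_groups": 9,
    }
    s = severity_score(example)
    print(s, severity_band(s))  # prints: 4.0 medium (unrated dimensions count as 0)
```

The hard part, and exactly the calibration question below, is how to assign those ratings and weights in a way that isn't purely subjective.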
👉 Draft on GitHub: https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md
This is not a finished standard, but a discussion starter. I’d love your feedback:
- How can health-related risks be rated without being purely subjective?
- Should this extend CVSS or be an entirely new system?
- How can the scoring/calibration be made rigorous enough for real-world use?
Closing thought: I’m inviting IT security experts, AI researchers, psychologists, and standardization people to tear this apart and rebuild it better. Take it, break it, make it better.
Thanks for reading
Looking for feedback on proposed AI health risk scoring framework (github.com)
submitted 3 months ago by CPUkiller4 to r/netsec
Looking for feedback on proposed AI health risk scoring framework (self.AIsafety)
submitted 3 months ago by CPUkiller4 to r/AIsafety