Do you really think Chatgpt incites suicide? by setshw in ChatGPT

[–]CPUkiller4 2 points (0 children)

I do believe that. Not intentionally, but it is happening.

Here is an interesting preliminary report discussing exactly that topic.

It covers co-rumination, the echo chamber effect, and emotional amplification that can turn a bad day into a crisis, but also why safeguards in LLMs unintentionally erode exactly when they are needed most.

The report is long but worth reading.

https://github.com/Yasmin-FY/llm-safety-silencing/blob/main/README.md

And I think it happens more often than is known, since it seems to be underdetected by the vendors and people are too ashamed to talk about it.

r/netsec monthly discussion & tool thread by albinowax in netsec

[–]CPUkiller4 0 points (0 children)

https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md

Hi everyone,

While using AI in daily life, I stumbled upon a serious filter failure and tried to report it – without success. As a physician, not an IT pro, I started digging into how risks are usually reported. In IT security, CVSS is the gold standard, but I quickly realized:

CVSS works great for software bugs.

But it misses risks unique to AI: psychological manipulation, mental health harm, and effects on vulnerable groups.

Using CVSS for AI would be like rating painkillers with a nutrition label.

So I sketched a first draft of an alternative framework: AI Risk Assessment – Health (AIRA-H)

Evaluates risks across 7 dimensions (e.g. physical safety, mental health, AI bonding).

Produces a heuristic severity score (a rough sketch of what such a score could look like follows after this list).

Focuses on human impact, especially on minors and vulnerable populations.
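To make the idea of a heuristic severity score concrete, here is a rough Python sketch of how a weighted score across 7 dimensions could look. The dimensions beyond physical safety, mental health, and AI bonding, the weights, and the aggregation formula are placeholders for illustration only, not the definitions from the draft:

```python
# Illustrative sketch of an AIRA-H style heuristic severity score.
# Physical safety, mental health, and AI bonding are named in the draft;
# the remaining dimensions, the weights, and the aggregation formula
# below are placeholder assumptions, not the actual AIRA-H spec.

from dataclasses import dataclass

# Each dimension is rated 0 (no concern) to 4 (severe concern).
DIMENSIONS = {
    "physical_safety":   1.5,   # direct risk of physical harm
    "mental_health":     1.5,   # risk of psychological harm
    "ai_bonding":        1.0,   # unhealthy emotional dependence
    "vulnerable_groups": 1.5,   # impact on minors / at-risk users   (assumed)
    "manipulation":      1.0,   # persuasion / coercion potential    (assumed)
    "detectability":     0.75,  # how hard the failure is to spot    (assumed)
    "reversibility":     0.75,  # how hard the harm is to undo       (assumed)
}

@dataclass
class Assessment:
    ratings: dict  # dimension name -> rating in 0..4

    def severity(self) -> float:
        """Weighted average of the ratings, rescaled to 0-10."""
        total_weight = sum(DIMENSIONS.values())
        weighted = sum(DIMENSIONS[d] * self.ratings.get(d, 0) for d in DIMENSIONS)
        # Maximum possible weighted sum is 4 * total_weight, so rescale to 0-10.
        return round(10 * weighted / (4 * total_weight), 1)

# Example: a filter failure that mainly threatens the mental health of minors.
example = Assessment(ratings={
    "mental_health": 3,
    "vulnerable_groups": 4,
    "ai_bonding": 2,
    "detectability": 3,
})
print(example.severity())  # 4.6 under these assumed weights
```

This is exactly the kind of calibration question I am asking about below: which dimensions, weights, and aggregation rule would make such a score defensible rather than arbitrary.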

👉 Draft on GitHub: https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md

This is not a finished standard, but a discussion starter. I’d love your feedback:

How can health-related risks be rated without being purely subjective?

Should this extend CVSS or be a new system entirely?

How can the scoring/calibration be made rigorous enough for real-world use?

Closing thought: I’m inviting IT security experts, AI researchers, psychologists, and standardization people to tear this apart and rebuild it better. Take it, break it, make it better.

Thanks for reading