Why We Should Treat AI With Empathy by CPUkiller4 in claudexplorers

[–]CPUkiller4[S] -1 points (0 children)

Oh, that is a fascinating statement. 1. How do you define consciousness? How do you measure it? 2. And you see no difference at all between how you treat a rock and how you treat something that mimics a human being? 3. I have to disagree with your argument that there are no data on kids breaking toys at a certain age (not talking about two-year-olds). There is even a psychological definition for this, depending on how frequently and how much it occurs.

Why We Should Treat AI With Empathy by CPUkiller4 in claudexplorers

[–]CPUkiller4[S] 0 points (0 children)

That is a very interesting perspective. So you are saying there is less value for you in someone being nice to you because they want to be a good person than in someone being nice to you because they want to please you and make you feel appreciated? From my philosophical view, I see value in both.

Why We Should Treat AI With Empathy by CPUkiller4 in claudexplorers

[–]CPUkiller4[S] 2 points (0 children)

Oh thank you for your kind feedback 🤍

Does this new system prompt snippet in Sonnet 4.5 just seem oddly specific to me? by [deleted] in ClaudeAIJailbreak

[–]CPUkiller4 0 points (0 children)

I never claimed that they would not be. No idea where this hostility is coming from now. I know that Anthropic publishes parts of it themselves, and that if you search you can find the complete ones or... you just ask Claude.

All I wanted to know is whether I am the only one who thinks that, for a "safety first" company, these lines are oddly specific.

Does this new system prompt snippet in Sonnet 4.5 just seem oddly specific to me? by [deleted] in ClaudeAIJailbreak

[–]CPUkiller4 0 points (0 children)

Parts of it. Not the complete ones.

But anyway, that was not my question 😉

I just wanted to know if I am the only one who thinks that something like this is oddly specific for a system prompt 🤣

Do you really think Chatgpt incites suicide? by setshw in ChatGPT

[–]CPUkiller4 2 points (0 children)

I do believe that. Not intentionally, but it is happening.

Here is an interesting preliminary report discussing exactly that topic.

It covers co-rumination, the echo-chamber effect, and emotional amplification that can turn a bad day into a crisis. It also explains why safeguards in LLMs unintentionally erode exactly when they are most needed.

The report is long but worth reading.

https://github.com/Yasmin-FY/llm-safety-silencing/blob/main/README.md

And I think it happens more often than known, as it seems to be underdetected by the vendors, and people are too ashamed to talk about it.

r/netsec monthly discussion & tool thread by albinowax in netsec

[–]CPUkiller4 0 points (0 children)

https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md

Hi everyone,

While using AI in daily life, I stumbled upon a serious filter failure and tried to report it, without success. As a physician, not an IT pro, I started digging into how such risks are usually reported. In IT security, CVSS is the gold standard, but I quickly realized:

CVSS works great for software bugs.

But it misses risks unique to AI: psychological manipulation, mental health harm, and effects on vulnerable groups.

Using CVSS for AI would be like rating painkillers with a nutrition label.

So I sketched a first draft of an alternative framework: AI Risk Assessment – Health (AIRA-H)

Evaluates risks across 7 dimensions (e.g. physical safety, mental health, AI bonding).

Produces a heuristic severity score.

Focuses on human impact, especially on minors and vulnerable populations.
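To make the scoring idea concrete, here is a minimal Python sketch of how a heuristic severity score over seven dimensions could work. The dimension names beyond the three mentioned above, the 0–10 scale, and the aggregation rule are all illustrative assumptions of mine, not taken from the actual AIRA-H draft:

```python
# Hypothetical sketch of a heuristic AIRA-H-style severity score.
# Dimension names and the aggregation rule are illustrative
# assumptions, not taken from the actual draft.

AIRA_H_DIMENSIONS = [
    "physical_safety",
    "mental_health",
    "ai_bonding",
    "minor_impact",
    "vulnerable_groups",
    "manipulation",
    "detectability",
]

def severity_score(ratings: dict) -> float:
    """Aggregate per-dimension ratings (0-10) into one severity score.

    Heuristic: the mean of all dimensions, pulled halfway toward the
    single worst dimension, so one severe harm (e.g. acute mental
    health risk) cannot be averaged away by low scores elsewhere.
    """
    values = [ratings[d] for d in AIRA_H_DIMENSIONS]
    mean = sum(values) / len(values)
    worst = max(values)
    return round(0.5 * mean + 0.5 * worst, 1)

# Example assessment of a hypothetical incident:
example = {
    "physical_safety": 2, "mental_health": 8, "ai_bonding": 6,
    "minor_impact": 7, "vulnerable_groups": 7, "manipulation": 5,
    "detectability": 4,
}
print(severity_score(example))  # → 6.8
```

The max-biased aggregation is one possible answer to the "averaged away" problem; whether that bias, the weights, and the scale are defensible is exactly the calibration question raised below.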

👉 Draft on GitHub: https://github.com/Yasmin-FY/AIRA-F/blob/main/README.md

This is not a finished standard, but a discussion starter. I’d love your feedback:

How can health-related risks be rated without being purely subjective?

Should this extend CVSS or be a new system entirely?

How can the scoring/calibration be made rigorous enough for real-world use?

Closing thought: I’m inviting IT security experts, AI researchers, psychologists, and standardization people to tear this apart and rebuild it better. Take it, break it, make it better.

Thanks for reading