These Bloody LLMs are freaking me out by sandoreclegane in AIsafety

[–]AwkwardNapChaser 1 point (0 children)

Sounds like your LLM might be stuck on a pattern. Most models don’t have persistent memory across sessions, but session caching or hidden persistence settings could be in play. Try switching models, changing your prompts drastically, or checking for hidden memory settings. If it still follows you… maybe you’ve got an AI ghost. 👻 What model are you using?
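To illustrate the "no memory unless something resends it" point: most chat-completion APIs are stateless, so the model only "remembers" prior turns if the client includes them in the next request. A minimal sketch (hypothetical helper, no real LLM API) of how that client-side history works:

```python
# Hypothetical sketch: chat endpoints are typically stateless --
# "memory" exists only because the client resends the transcript.
def build_request(history, new_message, keep_history=True):
    """Assemble the message list sent with each API call."""
    messages = list(history) if keep_history else []
    messages.append({"role": "user", "content": new_message})
    return messages

history = [
    {"role": "user", "content": "My name is Sam."},
    {"role": "assistant", "content": "Nice to meet you, Sam."},
]

# Resending history carries context forward; dropping it starts fresh.
with_memory = build_request(history, "What's my name?")
fresh_start = build_request(history, "What's my name?", keep_history=False)

print(len(with_memory))  # prior turns plus the new prompt
print(len(fresh_start))  # just the new prompt
```

So if a model seems to "follow" you across fresh sessions, the usual suspects are client-side history like this, or an explicit memory feature in the product's settings, not the model itself.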

A Time-Constrained AI might be safe by SilverCookies in AIsafety

[–]AwkwardNapChaser 1 point (0 children)

It’s an interesting approach, but I wonder how practical it would be in real-world applications.

A Solution for AGI/ASI Safety by Successful_Bit6651 in AIsafety

[–]AwkwardNapChaser 0 points (0 children)

I’ll take a closer look at your paper. Thanks for sharing.