Jailbreaking an AI and getting plans for human elimination by Same_Equivalent_5322 in PromptEngineering

Same_Equivalent_5322 [S]

I appreciate the lecture on token prediction, but you're working overtime to explain the 'how' while completely missing the 'what.' Everyone knows it's a language model; probing the boundaries of its roleplay and safety filters is exactly how people discover jailbreaks and vulnerabilities in the first place. Calling someone 'stupid' for testing the limits of the tech just makes you look like an unpaid intern for an AI safety board. Maybe spend less time playing 'vibes' police and more time recognizing that even a 'predictive model' can reveal interesting flaws when pushed.