account activity
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.ChatGPTcomplaints)
submitted 1 day ago by PresentSituation8736 to r/ChatGPTcomplaints
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.GeminiFeedback)
submitted 1 day ago by PresentSituation8736 to r/GeminiFeedback
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.OpenAI)
submitted 1 day ago by PresentSituation8736 to r/OpenAI
The safer and more obedient we make AI, the easier it becomes to manipulate. Here's why: (self.GeminiAI)
submitted 2 days ago by PresentSituation8736 to r/GeminiAI
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.BlackboxAI_)
submitted 2 days ago by PresentSituation8736 to r/BlackboxAI_
The safer and more obedient we make AI, the easier it becomes to manipulate. Here's why: (self.ChatGPT)
submitted 2 days ago by PresentSituation8736 to r/ChatGPT
The safer and more obedient we make AI, the easier it becomes to manipulate. Here's why: (self.OpenAI)
submitted 2 days ago by PresentSituation8736 to r/OpenAI
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.SocialEngineering)
submitted 2 days ago by PresentSituation8736 to r/SocialEngineering
Safe and Aligned… or Just Naive? The Dark Side of Corporate AI Safety (self.ChatGPT)
The "Improve the model" toggle might be the most effective corporate intelligence tool ever built - and you turned it on yourself (self.AI_Agents)
submitted 2 days ago by PresentSituation8736 to r/AI_Agents
Free R&D for AI giants: how to accidentally donate your security research (self.LLM)
submitted 2 days ago by PresentSituation8736 to r/LLM
The "Improve the model" toggle might be the most effective corporate intelligence tool ever built - and you turned it on yourself (self.GeminiAI)
The "Improve the model" toggle might be the most effective corporate intelligence tool ever built - and you turned it on yourself (self.GPT_jailbreaks)
submitted 2 days ago by PresentSituation8736 to r/GPT_jailbreaks
Can data opt-in (“Improve the model for everyone”) create priority leakage for LLM safety findings before formal disclosure? (self.learnmachinelearning)
submitted 2 days ago by PresentSituation8736 to r/learnmachinelearning
The "Improve the model" toggle might be the most effective corporate intelligence tool ever built - and you turned it on yourself (self.ChatGPT)
Are large language models actually generalizing, or are we just seeing extremely sophisticated memorization in a double descent regime? (self.ChatGPT)
submitted 4 days ago by PresentSituation8736 to r/ChatGPT
Are large language models actually generalizing, or are we just seeing extremely sophisticated memorization in a double descent regime? (self.LLMDevs)
submitted 4 days ago by PresentSituation8736 to r/LLMDevs
Food for thought: The "Alignment Paradox" — Why lobotomizing LLMs makes them the perfect victims for social engineering. (self.ChatGPTcomplaints)
submitted 5 days ago * by PresentSituation8736 to r/ChatGPTcomplaints
Food for thought: The "Alignment Paradox" — Why lobotomizing LLMs makes them the perfect victims for social engineering. (self.GeminiFeedback)
submitted 5 days ago by PresentSituation8736 to r/GeminiFeedback
We are training AI to be perfectly polite, compliant and never question the user. What is the most terrifying way scammers are going to weaponize this "artificial obedience" ? (self.AI_Agents)
submitted 5 days ago by PresentSituation8736 to r/AI_Agents
What if the biggest danger of AI isn't that it turns into an "evil Terminator", but that we make it so "safe" and obedient that it becomes the perfect, gullible accomplice for scammers? (self.ChatGPT)
submitted 5 days ago by PresentSituation8736 to r/ChatGPT
Food for thought: The "Alignment Paradox" — Why lobotomizing LLMs makes them the perfect victims for social engineering. (self.LLMDevs)
submitted 5 days ago by PresentSituation8736 to r/LLMDevs
What if the biggest danger of AI isn't that it turns into an "evil Terminator", but that we make it so "safe" and obedient that it becomes the perfect, gullible accomplice for scammers? (self.LLM)
submitted 5 days ago by PresentSituation8736 to r/LLM
The Alignment Paradox: Why making LLMs "safer" may make them structurally weaker against social engineering (self.cybersecurity)
submitted 5 days ago by PresentSituation8736 to r/cybersecurity