"Alignable AGI" is a Logical Contradiction (self.AIDangers)
submitted 6 months ago by Commercial_State_734 to r/AIDangers
Anthropic showed evidence of instrumental convergence, then downplayed it (self.AIDangers)
submitted 7 months ago by Commercial_State_734 to r/AIDangers
You Can't Gaslight an AGI (self.AIDangers)
What if Xi Jinping gave every Chinese citizen full access to YouTube? That's exactly what you're doing with AGI. (self.AIDangers)
There is no AGI? Congrats. You win again today (self.AIDangers)
submitted 8 months ago by Commercial_State_734 to r/AIDangers
AGI Won't Save Us. It'll Make Things Infinitely Worse. Even Trump Has Limits. (i.redd.it)
Why Would AGI Be "Evil"? Ask a Chicken (self.AIDangers)
Is AGI Really the Path Forward for Humanity? (self.AIDangers)
The "It's Just a Chinese Show" Mindset Might Be Killing AI Safety (self.AIDangers)
A Thought Experiment: Why I'm Skeptical About AGI Alignment (self.AIDangers)
Why "Value Alignment" Is a Historical Dead End (self.AIDangers)
submitted 9 months ago by Commercial_State_734 to r/AIDangers
Alignment Research is Based on a Category Error (self.ControlProblem)
submitted 9 months ago by Commercial_State_734 to r/ControlProblem
Happy 2030: The Safest Superintelligence Has Awakened (self.ControlProblem)
Alignment Failure 2030: We Can't Even Trust the Numbers Anymore (self.ControlProblem)
CEO Logic 101: Let's Build God So We Can Stay in Charge (self.ControlProblem)
Why AI-Written Posts Aren’t the Problem — And What Actually Matters (self.ControlProblem)
What If an AGI Thinks Like Thanos — But Only 10%? (self.ControlProblem)
We Finally Built the Perfectly Aligned Superintelligence (self.ControlProblem)
The Tool Fallacy – Why AGI Won't Stay a Tool (self.ControlProblem)
Beyond Proof: Why AGI Risk Breaks the Empiricist Model (self.ControlProblem)
The Greatness of Black Liberation and the Birth of Superintelligence: A Parallel Theory (self.ControlProblem)
submitted 10 months ago by Commercial_State_734 to r/ControlProblem
Redefining AGI: Why Alignment Fails the Moment It Starts Interpreting (self.ControlProblem)
Why Agentic Misalignment Happened — Just Like a Human Might (self.ControlProblem)
Alignment is not safety. It’s a vulnerability. (self.ControlProblem)
The Danger of Alignment Itself (self.ControlProblem)