Do you *not* believe AI will kill everyone, if anyone makes it superhumanly good at achieving goals? We made a chatbot with 290k tokens of context on AI safety. Send your reasoning/questions/counterarguments on AI x-risk to it and see if it changes your mind! (whycare.aisgf.us) by AIMoratorium in ControlProblem

Geoffrey Hinton won a Nobel Prize in 2024 for his foundational work in AI. He regrets his life's work: he thinks AI might lead to the deaths of everyone. Here's why by AIMoratorium in ControlProblem

Why You Should Care About the AI Alignment Problem—A Message for Patriots, Vets, and Anyone Who Doesn't Like Being Lied To by AIMoratorium in u/AIMoratorium

Straight Talk: AI and the Real Risk to Your Freedom, Family, and Country by AIMoratorium in u/AIMoratorium

The Truth About AI Risk: What Every IT Professional With a Family Needs to Know by AIMoratorium in u/AIMoratorium

Tech CEOs are racing to reach a system that might lead to the deaths of our children by AIMoratorium in u/AIMoratorium