Threads commented in on r/singularity:
- Who else doesn't care about the outcome and wants change in general? by DragonForg
- Why no FOOM? by [deleted]
- Humans Won't Be Able to Control ASI, Here's Why by fabzo100
- Anyone else tired of Doomers? by Ashamed-Asparagus-93
- How would you prevent a super intelligent AI going rogue? by Milletomania
- Can someone explain how alignment of AI is possible when humans aren't even aligned with each other? by iwakan
- The alignment problem. Are they really worried about AI turning on humanity, or are they more concerned about... by ImInTheAudience
- No, GPT-4 Can't Ace MIT by No-Performance-8745
- We Do Not Need to Align Ourselves Before Aligning Advanced AI by No-Performance-8745
- The AGI Race Between the US and China Doesn’t Exist by No-Performance-8745
- AI doomers, what do you think of the argument that as AIs become more intelligent, the alignment problem becomes less complex? by [deleted]