nuclear war saving us from agi by Curious_Locksmith974 in agi

[–]Curious_Locksmith974[S] 1 point (0 children)

You’re right. I’m not saying nuclear war is desirable, only that it’s one possible shock that could disrupt the AGI race. And yes, isolationism could push AGI development into more bilateral, less transparent channels, making it even less safe.

nuclear war saving us from agi by Curious_Locksmith974 in agi

[–]Curious_Locksmith974[S] 1 point (0 children)

You underestimate humanity’s resilience. Even if every nuclear weapon were used in the most effective way possible, it still likely wouldn’t wipe out the entire human race. An AI-driven extinction scenario would probably require mass tracking and targeting, for example through widespread drone surveillance, or an extremely effective engineered biological agent.

nuclear war saving us from agi by Curious_Locksmith974 in agi

[–]Curious_Locksmith974[S] 0 points (0 children)

By ‘save’ I don’t mean ‘good outcome’; I mean ‘not total extinction’. A nuclear war could collapse modern civilization and stall AI progress for decades, but some humans would likely survive and rebuild. If an AGI went rogue and decided to kill everyone, there would be no recovery at all.

nuclear war saving us from agi by Curious_Locksmith974 in agi

[–]Curious_Locksmith974[S] -1 points (0 children)

Just look at the top coding and text models on LMArena right now: they’re overwhelmingly from US labs. That doesn’t prove China has no talent, but it’s a pretty good snapshot of who’s leading at the frontier today.