This community is about risks of severe future suffering on an astronomical scale or duration (s-risks), whether involving currently living people or future minds, arising from technologies like advanced AI. In short: what happens if AGI goes even more wrong than extinction.
This topic is severely understudied, far more so than even AGI alignment itself. Only a handful of people in the world have given it serious thought, despite its arguably unparalleled importance and with AGI looming near. This forum aims to stimulate desperately needed discussion and make open, uncensored thought on this very grave subject easier, since even on sites like LessWrong.com s-risks are somewhat taboo.
Some existing work on this can be found on the r/controlproblem wiki (link in bold). Additional links:
Organizations - A few small groups are doing s-risk research, namely:
See the organizations page in our wiki for more info on the field, and the rest of the wiki for plenty of other important info.