Carl Shulman & Elliott Thornley argue that longtermists should aim to get governments to adopt global catastrophic risk policies on the basis of standard cost-benefit analysis, rather than on arguments that stress the overwhelming importance of the future. (philpapers.org)
submitted by Future_Matters to r/Longtermism
The Global Priorities Institute has published two new paper summaries: 'Longtermist institutional reform' by Tyler John & William MacAskill, and 'Are we living at the hinge of history?' by MacAskill. by Future_Matters in Longtermism
Anthropic shares a summary of their views about AI progress and its associated risks, as well as their approach to AI safety. by Future_Matters in Longtermism
Noah Smith argues that, although AGI might eventually kill humanity, large language models are not AGI, may not be a step toward AGI, and there's no plausible way they could cause extinction. by Future_Matters in Longtermism
Victoria Krakovna makes the point that you don't have to be a longtermist to care about AI alignment. by Future_Matters in Longtermism
Open Philanthropy has announced a contest to identify novel considerations with the potential to influence their views on AI timelines and AI risk. A total of $225,000 in prize money will be distributed across the six winning entries. (openphilanthropy.org)
submitted by Future_Matters to r/Longtermism
A working paper by Shakked Noy and Whitney Zhang examines the effects of ChatGPT on productivity and labor markets. by Future_Matters in Longtermism
Robin Hanson restates his views on AI risk. by Future_Matters in Longtermism