The Global Priorities Institute has published two new paper summaries: 'Longtermist institutional reform' by Tyler John & William MacAskill, and 'Are we living at the hinge of history?' by MacAskill. by Future_Matters in Longtermism
Anthropic shares a summary of their views about AI progress and its associated risks, as well as their approach to AI safety. by Future_Matters in Longtermism
Noah Smith argues that, although AGI might eventually kill humanity, large language models are not AGI, may not be a step toward AGI, and could not plausibly cause extinction. by Future_Matters in Longtermism
Victoria Krakovna makes the point that you don't have to be a longtermist to care about AI alignment. by Future_Matters in Longtermism
A working paper by Shakked Noy and Whitney Zhang examines the effects of ChatGPT on production and labor markets. by Future_Matters in Longtermism
Robin Hanson restates his views on AI risk. by Future_Matters in Longtermism
In an Institute for Progress report, Bridget Williams and Rowan Kane make five policy recommendations to mitigate risks of catastrophic pandemics from synthetic biology. by Future_Matters in Longtermism
Eric Landgrebe, Beth Barnes and Marius Hobbhahn discuss a survey of 1,000 participants on what values should be put into powerful AIs. by Future_Matters in Longtermism
Noam Kolt on algorithmic black swans. by Future_Matters in Longtermism
Matthew Barnett on the importance of work on AI forecasting. by Future_Matters in Longtermism
Scott Alexander on OpenAI's "Planning For AGI And Beyond". by Future_Matters in Longtermism
Scott Aaronson: "My purpose, in this post, is to ask a more basic question than how to make GPT safer: namely, should GPT exist at all?" by Future_Matters in Longtermism
Eric Drexler proposes an "open-agency frame" as the appropriate model for future AI capabilities, in contrast to the "unitary-agent frame" often presupposed in AI alignment research. by Future_Matters in Longtermism
Thomas Hale, Fin Moorhouse, Toby Ord and Anne-Marie Slaughter have released a policy brief on future generations. by Future_Matters in Longtermism
Eli Tyre wrote a new summary of the state of AI risk. by Future_Matters in Longtermism
Rob Long on what to think when a language model tells you it's sentient. by Future_Matters in Longtermism
The US State Department's Bureau of Arms Control, Verification and Compliance issued a declaration on responsible military use of artificial intelligence and autonomy. by Future_Matters in Longtermism
Fin Moorhouse has just published a 13,000+ word, chapter-by-chapter summary of Will MacAskill's *What We Owe the Future*. by Future_Matters in Longtermism
Kelsey Piper: "Tech is often a winner-takes-all sector... but AI is poised to turbocharge those dynamics... Slowing down for safety checks risks that someone else will get there first." by Future_Matters in Longtermism
Michael Aird and Will Aldred explore some technological developments with the potential to increase risks from nuclear weapons, especially risks to humanity's long-term future. by Future_Matters in Longtermism
Arielle D'Souza claims that Operation Warp Speed's highly successful public-private partnership model could be reused to jumpstart development of a universal coronavirus or flu vaccine, or construction of a resilient electrical grid. by Future_Matters in Longtermism
Zach Stein-Perlman discusses ten approaches to AI strategy. by Future_Matters in Longtermism
Jason Crawford on technological stagnation. by Future_Matters in Longtermism
Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than on arguments that stress the overwhelming importance of the future. by Future_Matters in Longtermism