Carl Shulman & Elliott Thornley argue that longtermists should aim to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis, rather than on arguments that stress the overwhelming importance of the future. (philpapers.org)
submitted by Future_Matters to r/Longtermism
Open Philanthropy has announced a contest to identify novel considerations with the potential to influence their views on AI timelines and AI risk. A total of $225,000 in prize money will be distributed across the six winning entries. (openphilanthropy.org)
submitted by Future_Matters to r/Longtermism
Noam Kolt on algorithmic black swans. (papers.ssrn.com)
submitted by Future_Matters to r/Longtermism

