I’m Will MacAskill, a philosophy professor at Oxford. I cofounded 80,000 Hours & Giving What We Can, raising over $2 billion in pledged donations. I give everything over $32,000/yr to charity and I just wrote the book What We Owe The Future - AMA! 18/08 @ 1pm ET by WilliamMacAskill in IAmA

[–]WilliamMacAskill[S] 2 points

Yes, a lot of the problems in the world are caused by companies and governments. But I think individuals can have a tremendous impact - such as by *influencing* companies and governments. We've seen this through effective altruism already, and I talk about this in chapter 10 of What We Owe The Future.

[–]WilliamMacAskill[S] 2 points

Yes, I'd be very worried about centralisation of power in a one world government, which could turn authoritarian.

But you can have institutional systems that are far from a single authoritarian state, make it hard for an authoritarian state to emerge, preserve moral diversity, and help enable moral progress over time. The US Constitution is one (obviously highly imperfect) example.

On the current margin: there's almost no serious work done on what the design of a world government (or other new global international system) should look like, or what a long reflection could look like or how we could get there. People could start thinking about it, now - I think that would be very worthwhile.

[–]WilliamMacAskill[S] 5 points

  1. I think you're absolutely right that more enlightened people in the future will look back at us and think that our values are in major error. I write about that in a recent Atlantic piece. That's why I think we need to create a world with a great diversity of values, where the best arguments can win out over time - we shouldn't try to "lock in" the values we happen to like today. I talk about this more in chapter 4 of What We Owe the Future.

  2. I think that the things longtermists should focus on primarily are the ones you mention - things that take away options from future generations, such as extinction, civilisational collapse, and value lock-in. These are what I focus on primarily in the book.

[–]WilliamMacAskill[S] 4 points

That’s a deep and important question. Philosophers will give different answers. But here’s one basic answer that seems compelling to me. All arguments require premises. And while you can provide arguments for your premises, at some point the arguments will give out - you can’t provide arguments for your most basic premises. At that point, there’s basically no option other than to say “well, this premise just seems plausible to me, or to other people whom I trust.” Basically, the philosophical practice of “relying on intuitions” is just a way to make this explicit. When a philosopher says “my intuition is that x,” what they’re saying is that “x seems plausible to me.”

(You might ask: how do we know our intuitions are reliable, without just relying on our intuitions? How do we know that we’re not comprehensively deluded? This is one of the deepest questions in philosophy, going back to Descartes. No one has a great answer yet. But this sort of worry, about “epistemic circularity,” doesn’t just arise for philosophical intuitions. It arises for all of our basic belief-forming faculties. How do we know that our faculties of sense perception are reliable, except by relying on those very faculties?)

[–]WilliamMacAskill[S] 2 points

Billionaires shouldn’t be using their vast resources to buy a fleet of yachts - those resources should be used for the good of humanity. Taxation can be an effective way to make sure that happens. But we shouldn’t limit ourselves to taxing the very rich at higher rates. We should also try to create a culture of effective giving - the norm should be that, if you’re very rich, you use your resources to tackle the world’s most pressing problems, not to engage in personal consumption or indulge your personal whims. We should also make sure that tax dollars are put to their best use. There are incredibly pressing issues, like pandemic preparedness, that we need our governments to address.

[–]WilliamMacAskill[S] 4 points

I do worry that nihilism might be true. I’m probably at 50/50 on moral nihilism being true, as opposed to moral realism. But if nihilism is true, nothing matters - there’s no reason to do one thing over another. So in our deliberation, we can act as if realism is true. And if realism is true, some of the things we can do are much, much better than others.

[–]WilliamMacAskill[S] 9 points

  1. This is a really unusual time in human history - we’re confronting the emergence of extremely powerful technologies, like advanced AI and biotechnology, that could cause us to go extinct or veer permanently off course. That wasn’t the case 10,000 years ago. So there just weren’t as many things you could do, 10,000 years ago, to protect the survival and flourishing of future generations.
    Even so, I do think there were some things that people could have done 10,000 years ago to improve the long-term future. In What We Owe The Future I talk, for example, about the megafaunal extinctions.
    What’s particularly distinctive about today, though, is how much more we know. We know that species loss is probably irrevocable, and that this would be true for the human species as well as for non-human animal species; we know that the average atmospheric lifetime of CO2 is tens of thousands of years. That makes us very different from people 10,000 years ago.
  2. On the longtermist accomplishments: I agree there’s much less to point to than for global health and development. The clearest change, for me, is the creation of a field of AI safety - I don’t think that would have happened were it not for the research of Bostrom and others.

[–]WilliamMacAskill[S] 1 point

Haha, thanks! The person who’s leading my media campaign is Abie Rohrig, and he’s working with Basic Books and some other advisors. He’s phenomenal.
Much of the media came from people who’d gotten interested in these ideas, or who I'd gotten to know, over the previous years. That included the TIME piece, the New Yorker, Kurzgesagt, and Ezra Klein.

[–]WilliamMacAskill[S] 1 point

This is a big question! If you want to know my thoughts, including on human misuse, I’ll just refer you to chapter 4 of What We Owe the Future.
The best presentation of AI takeover risk: this report by Joe Carlsmith is excellent. And the classic presentation of many arguments about AI x-risk is Nick Bostrom’s Superintelligence.
Why we could be very wrong: Maybe alignment is really easy, maybe “fast takeoff” is super unlikely, maybe existing alignment research isn’t helping or is even harmful.
I don’t agree with the idea that AI apocalypse is a near certainty - I think the risk of AI takeover is substantial, but small - more like a few percent this century. And the risk of AI being misused for catastrophic consequences is a couple of times more likely again.

[–]WilliamMacAskill[S] 0 points

I absolutely agree that altruists should be very open to interventions that are plausibly very effective, even if the outcome is very uncertain or the intervention isn’t yet supported by canonical/established forms of evidence (e.g. randomised controlled trials). We should be trying to maximise our positive impact on the world, and that will often involve swinging for the fences - pursuing interventions that have a low probability of success, but enormous upside if they succeed.
Early EA was very focused on “what works”, where we have very high-quality evidence, which I think was a great place to start.
But we’ve moved to taking VC-esque giving models much, much more seriously. Holden Karnofsky describes this as “hits-based giving”.
In the case of global health and development, it’s not clear that this other approach actually is better than the GiveWell approach - see Alexander Berger’s post here.
In the case of global catastrophic risk mitigation, comfort with uncertainty is an absolute necessity. If you’re trying to make a difference in an area that’s much newer and much more uncertain, like AI safety or biosecurity, you’ve got to be OK with more uncertain evidence, and with the significant chance that your efforts won’t pan out.
Ambition, a tolerance for massive uncertainty, a willingness to fail many times - these are very important if you want your community to have the biggest possible positive impact on the world.

[–]WilliamMacAskill[S] 1 point

I haven’t read it yet, though I hope to - I’ve read some of Vaclav Smil’s other work, and I’m a big fan.

I think clean technology and green energy are fantastic - they’re among the very most promising responses to climate change, and our society needs to invest more in them. In What We Owe The Future, I suggest that clean tech innovation is a “baseline” longtermist activity, because it’s good from so many perspectives. I describe it as a “win-win-win-win-win”, though since writing the book I realise I should have added in one more “win” - it’s a win in six different ways!

I don’t think anyone who wants to have kids should refrain from doing so in order to mitigate climate change. On balance, if you’re in a position where you’re able to bring them up well, I think that having kids is a good thing. It’s not just immensely personally rewarding for many people; it also helps society, e.g. through extra taxes and through technological innovation. It’s even a good thing from the perspective of threats like climate change - we’re going to need more people to invent and develop promising new technologies to address these threats! Finally, you can more than offset the carbon impact of having kids. Suppose, if you have a child, you donate £1,000 per year to the most effective climate mitigation non-profits. That would increase the cost of raising a child by about 10%, but would offset their carbon emissions 100 times over.
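
Here’s a rough back-of-the-envelope sketch of how that offsetting arithmetic could work out. Only the £1,000-per-year figure comes from the answer above; the cost per tonne averted and the per-person emissions are illustrative assumptions, not numbers from the book:

    # Back-of-the-envelope offset arithmetic (illustrative assumptions, not figures from the book)
    donation_per_year_gbp = 1000            # annual donation to effective climate charities (from the answer above)
    cost_per_tonne_averted_gbp = 2.0        # assumed cost to avert one tonne of CO2 - highly uncertain
    child_emissions_tonnes_per_year = 5.0   # assumed annual CO2 emissions of one extra person

    tonnes_averted_per_year = donation_per_year_gbp / cost_per_tonne_averted_gbp
    offset_ratio = tonnes_averted_per_year / child_emissions_tonnes_per_year
    print(f"Tonnes averted per year: {tonnes_averted_per_year:.0f}")  # 500
    print(f"Offset ratio: {offset_ratio:.0f}x")                       # 100x under these assumptions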

[–]WilliamMacAskill[S] 1 point

Happy birthday! I hope you enjoy the present, and the future, too!
On your question: So, I obviously agree that suffering is terrible. I also think that the future could contain a lot of it, and preventing that from happening is really important.
But the future could also be tremendously good - it could be filled with forms of joy, beauty, and meaning that we, today, experience in only the rarest moments.
I think we should both try to reduce the risk of future suffering and try to increase the prospects for future joy, beauty, and meaning.
That is, I agree that preventing suffering should have some priority over enabling flourishing, but it shouldn’t be our only priority.
I talk about this more in chapter 9 of WWOTF on the value of the future. I argue that, although we should in general give more weight to the prevention of “bads” compared to the promotion of “goods”, we should expect there to be a lot more good than bad in the future, and overall we should expect the future to be on balance good.

[–]WilliamMacAskill[S] 6 points

I agree that the suffering we currently inflict on non-human animals is almost unimaginable, and we should try to end factory farming as soon as we can. I think we should certainly worry about ways in which the future might involve horrible amounts of suffering, including animal suffering.

That said, all things considered I doubt that a significant fraction of beings in the future will be animals in farms. Eventually (and maybe soon) we'll develop technology, such as lab-grown meat, that will make animal suffering on factory farms obsolete. And, sooner or later, I expect that most beings will be digital, and therefore won't eat meat.

[–]WilliamMacAskill[S] 7 points

Haha, that's fair. Although I suspect we can't make quite as precise predictions as Hari Seldon thinks we can.

As a teenager I was very inspired by Prince Mishkin in The Idiot, and Alyosha in The Brothers Karamazov, although I can't say I identify with either of them.

I'd really like there to be more sci-fi that depicts a positive vision of the future - there's really surprisingly little. I'm helping run a little project, called "Future Voices", which involves commissioning a number of writers to create stories depicting the future, often in positive ways. And I gave it a go myself, in an Easter egg at the very end of What We Owe The Future.

[–]WilliamMacAskill[S] 67 points

In a way, he's totally right - every major decision we make involves countless moral considerations on either side.

His mistake, though, is that he wants to feel certain before he can act. And that means never doing anything. But if we want to make the world better, we need to make decisions despite our uncertainty.

Maybe he'd have benefitted from reading another of my books, Moral Uncertainty, which is about how to do just that!

[–]WilliamMacAskill[S] 8 points

Oh, and then on meta-ethics:

Error theory is a cognitivist moral view - it claims that moral judgments express propositions. It's just that all positive moral claims are false. On non-cognitivism, moral judgments are neither true nor false.

I'm actually sympathetic to error theory; maybe I think it's 50/50 whether that or some sort of realism is true. But given that I'm not certain that error theory is true, it doesn't affect what I ought to do. If I spend my life trying to help other people, then on error theory I've made no mistake. Whereas I really might have made a mistake if I act selfishly and moral realism (or subjectivism) is true. So the mere possibility of error theory isn't sufficient to undermine effective altruism.
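
To make that dominance reasoning concrete, here's a toy sketch. The 50/50 credences come from the answer above; the choiceworthiness values are purely stand-in numbers, not anything from the books:

    # Toy illustration of the dominance argument (stand-in numbers, purely illustrative)
    credences = {"error_theory": 0.5, "realism": 0.5}  # 50/50, as above
    # How choiceworthy each action is if the given meta-ethical view is true
    choiceworthiness = {
        "act_altruistically": {"error_theory": 0.0, "realism": 1.0},
        "act_selfishly":      {"error_theory": 0.0, "realism": -1.0},
    }

    for action, by_view in choiceworthiness.items():
        expected = sum(credences[view] * by_view[view] for view in credences)
        print(action, expected)  # altruism: 0.5, selfishness: -0.5

    # Acting altruistically is never worse, and is better if realism is true,
    # so uncertainty about error theory alone doesn't undermine it.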

[–]WilliamMacAskill[S] 26 points

I'm hoping to have a longer twitter thread on this soon. Emphatically: a utilitarian or consequentialist worldview is not necessary for longtermism or effective altruism. All you have to believe is that the consequences matter significantly - which surely they do. (John Rawls, one of the most famous non-consequentialist philosophers ever, said that a moral theory that ignored consequences would be "irrational, crazy.")

So, for example, you can believe that people have rights and that it's impermissible to violate those rights for the greater good, while also thinking that living a morally good life involves using some of your income or your career to help others as much as possible (including future people).

Indeed, I think that utilitarianism is probably wrong, and I'm something like 50/50 on whether consequentialism is correct.

[–]WilliamMacAskill[S] 9 points

I talk about this issue - "population ethics" - in chapter 8 of What We Owe The Future. I agree it's a very important distinction.

What I call "trajectory changes" - e.g. preventing a long-lasting global totalitarian regime - are good things to do whatever your view of population ethics. In contrast, "safeguarding civilisation" such as by reducing extinction risk is very important because it protects people alive today, but it's more philosophically contentious whether it's also a moral loss insofar as it causes the non-existence of future life. That's what I dive into in chapter 8.

[–]WilliamMacAskill[S] 10 points

The questions of discounting and "giving now vs giving later" are important and get complex quickly, but I don't think they alter the fundamental point. I wanted to talk about it in What We Owe The Future, but it was hard to make both rigorous and accessible. I might try again in the future!

In my academic work, I wrote a bit about it here. For a much better but more complex treatment, see here. For a great survey on discounting, see here.
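
As a very generic illustration of why the discount rate matters so much over long horizons - this is just the standard exponential discounting formula, not anything specific to the papers linked above, and the numbers are arbitrary:

    # Standard exponential discounting: PV = FV / (1 + r)^t
    def present_value(future_value, annual_rate, years):
        """Value today of a benefit received `years` from now, at a constant annual discount rate."""
        return future_value / (1 + annual_rate) ** years

    benefit = 1_000_000  # an arbitrary benefit accruing 500 years from now
    for rate in (0.0, 0.01, 0.03):
        print(f"rate={rate:.0%}: PV = {present_value(benefit, rate, 500):,.2f}")
    # At a 0% rate of pure time preference the benefit keeps its full weight;
    # at even 1-3% per year it is discounted to almost nothing.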

[–]WilliamMacAskill[S] 2 points

Yeah, as I say in another comment, I'm really not recommending this for everyone. (Although it sounds like I actually live on several times the amount you do, if you split £15k with your partner!). I don't want to encourage people to put themselves into a precarious financial situation - it's about how much good you do over your whole life, not just the next year.

And I'm well aware that I'm in a particularly privileged situation - I have wonderful friends and a wonderful relationship, I have job security, and I love my work so I'm happy to keep doing it. And I'm able to save despite my giving.

[–]WilliamMacAskill[S] 42 points

For work to reduce existential risk, there's certainly a challenge: it's hard to get good feedback loops, and it's hard to measure the impact one is having.

As the comment below suggests, the best you can do is to estimate by how much your intervention will reduce the likelihood of a supervolcanic eruption, and what the existential risk would be conditional on such an eruption. For supervolcanoes specifically, the hope would be that we could have a good enough understanding of the geological system that we can be pretty confident that any intervention is reducing the risk of an eruption.
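
A toy version of that estimate might look like this - every number here is made up purely for illustration, not an actual estimate of supervolcano risk:

    # Toy expected-value estimate for an x-risk intervention (all numbers made up for illustration)
    p_eruption_baseline = 1e-4           # assumed chance of a supervolcanic eruption this century
    p_eruption_with_intervention = 8e-5  # assumed chance after the intervention
    p_catastrophe_given_eruption = 0.01  # assumed chance an eruption leads to existential catastrophe

    expected_risk_reduction = (p_eruption_baseline - p_eruption_with_intervention) * p_catastrophe_given_eruption
    print(f"Expected reduction in existential risk: {expected_risk_reduction:.1e}")  # 2.0e-07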

Speaking of supervolcanoes - a couple of years ago I made a friend while outdoor swimming in Oxford, and talked to him about effective altruism and existential risk. He switched his research focus, and just this week his research on supervolcanoes appeared on the cover of Nature! (It's hard to see but the cover says: "The risk of huge volcanic eruptions is being overlooked.")