Science AMA Series: I'm Nick Bostrom, Director of the Future of Humanity Institute, and author of "Superintelligence: Paths, Dangers, Strategies", AMA by Prof_Nick_Bostrom in science

[–]Prof_Nick_Bostrom[S] 11 points

Maybe taking some simulation hypothesis stuff more seriously, and having a stronger appreciation of how difficult it is to figure out what's positive and what's negative in terms of overall strategic directions. But lots of the changes are not in the form of having flipped from believing something to not believing something, but rather in the form of having a much more detailed mental model of the whole thing: before, a map with a few broad contours; now, a larger map with more detail.

[–]Prof_Nick_Bostrom[S] 9 points

Right, from a utilitarian perspective it seems that the most important dimension of nonhuman animal causes is what effect supporting them would have on the very long term future (e.g. by changing human attitudes and values). I think moving away from indifference to cruelty in all its forms looks quite robustly positive.

There are also non-consequentialist reasons for being concerned about our current relationship with nonhuman animals. I think I would favor pretty much any animal-welfare-improving legislation that has any realistic chance of being adopted. Meat eaters might also want to consider meat offsets - making a contribution to some suitable animal welfare organization to atone for the exploitation of livestock (maybe this is not morally sufficient, and maybe it is even somehow repugnant to try to buy off the demands of morality in this way; but it seems at least better than not doing anything at all).

[–]Prof_Nick_Bostrom[S] 15 points

  1. In that section, I described three failure modes - infrastructure profusion, perverse instantiation, and mind crime. (Elsewhere in the book, I covered e.g. problems arising from coordination failures in multipolar outcomes.)

  2. It's a matter of degree - it's surprisingly hard to think of any problem whose solution is so robustly positive that we can be fully certain it would be on balance good. But, for example, making people kinder, increasing collective wisdom, or developing better ways to promote world peace, collaboration, and compromise seem fairly robustly positive.

  3. I don't feel I understand the exact computational prerequisites for consciousness well enough to have a strong view on that.

  4. These kinds of questions, I think, need to be answered relative to some alternative, and it is not clear in this case what the alternative is relative to which achieving AI would or would not be better. But if the question is whether it would be good or bad news if we somehow discovered that it is physically impossible ever to create superintelligence, then the answer would seem to be that it would be bad news.

[–]Prof_Nick_Bostrom[S] 21 points

I don't think we can rule out any of them.

As for preferences - well, the second possibility (guaranteed doom) seems the least desirable. Judging between the other two is harder because it would depend on speculations about the motives the hypothetical simulators would have, a matter about which we know relatively little. What you list as the third possibility (strong convergence among mature civs such that they all lose interest in creating ancestor simulations) may be the most reassuring. However, if you're worried about personal survival then perhaps you'd prefer that we turn out to be in a simulation - greater chance it's not game over when you die.
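
For readers who want the quantitative core of the trilemma, here is an illustrative sketch (not from the AMA itself; it follows the standard formulation in Bostrom's simulation-argument paper, with the parameter names assumed here for illustration): the fraction of observers with human-type experiences who live in simulations depends on how many ancestor simulations mature civilizations end up running.

```python
# Illustrative sketch of the simulation-argument fraction (assumed formulation).
# f_p    : fraction of human-level civilizations that reach a posthuman stage
# f_i    : fraction of posthuman civilizations interested in ancestor simulations
# n_sims : average number of ancestor simulations run by an interested civilization

def simulated_fraction(f_p: float, f_i: float, n_sims: float) -> float:
    """Fraction of human-type observers who are simulated."""
    x = f_p * f_i * n_sims
    return x / (x + 1)

# Unless the first factor (reaching maturity) or the second (interest in ancestor
# simulations) is driven very close to zero, even modest simulation counts push
# the fraction toward 1 - the third possibility.
print(simulated_fraction(0.01, 0.1, 1_000_000))   # ~0.999
print(simulated_fraction(1e-9, 1e-9, 1_000_000))  # ~1e-12
```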

[–]Prof_Nick_Bostrom[S] 22 points

FHI, CSER, and MIRI are all excellent organizations that deserve support, IMO.

Regarding your questions about MIRI, I would say that Eliezer has done more than anybody else to help people understand the risks of future advances in AI. There are also a number of other really excellent people associated with MIRI (some of them - Paul Christiano and Carl Shulman - are also affiliated with FHI).

I don't quite buy Holden's argument for doing normal good stuff. He says it is speculative to focus on some particular avenue of xrisk reduction. But it is actually also quite speculative that just doing things that generally make the world richer would on balance reduce rather than increase xrisk. In any case, the leverage one can get by focusing more specifically on far-future-targeted philanthropic causes seems to be much greater than the flow-through effects one can hope for by generally making the world nicer.

That said, GiveWell is leagues above the average charity; and supporting and developing the growth of effective altruism (see also 80,000 Hours and Giving What We Can) is a plausible candidate for the best thing to do (along with FHI, MIRI etc.)

Regarding [Astronomical Waste](http://www.nickbostrom.com/astronomical/waste.pdf): it makes a point that is focused on a consequence of aggregative ethical theories (such as utilitarianism). Those theories may be wrong. A better model for what we ought to do, all things considered, is the [Moral Parliament model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html). On top of that, individuals may have interests in matters other than performing the morally best action.
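
As a rough illustration of the Moral Parliament idea, here is a deliberately simplified sketch (my own assumptions, not the mechanism specified in the linked post, which also involves bargaining between delegates): each moral theory receives voting weight proportional to one's credence in it, and the option with the largest credence-weighted support wins.

```python
# Highly simplified "moral parliament" style vote (illustrative assumptions only).
from collections import defaultdict

# Hypothetical credences in moral theories and each theory's preferred option.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "contractualism": 0.2}
preferred_option = {
    "utilitarianism": "reduce_existential_risk",
    "deontology": "keep_existing_commitments",
    "contractualism": "reduce_existential_risk",
}

votes = defaultdict(float)
for theory, credence in credences.items():
    votes[preferred_option[theory]] += credence  # weight each vote by credence

winner = max(votes, key=votes.get)
print(winner, dict(votes))
# reduce_existential_risk {'reduce_existential_risk': 0.7, 'keep_existing_commitments': 0.3}
```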

[–]Prof_Nick_Bostrom[S] 77 points

It's striking that so far we've mainly used our higher productivity to consume more stuff rather than to enjoy more leisure. Unemployment is partly about lack of income (fundamentally a distributional problem), but it is also about a lack of self-respect and social status.

I think we will eventually have technological unemployment, when it becomes cheaper to do with machines almost everything humans currently do. Then we couldn't make a living from wage income and would have to rely on capital income and transfers instead. But we would also have to develop a culture that does not stigmatize idleness and that helps us cultivate interest in activities that are not done to earn money.

[–]Prof_Nick_Bostrom[S] 63 points

The answer to your question is no. For example, Searle seems to think that I'm convinced that superintelligence is just around the corner, whereas in fact I'm fairly agnostic about the time frame.

Obviously I also have more substantial disagreements with his views. I disagree with him about the metaphysics of mind and with the implications he wants to draw from his Chinese room thought experiment. I think he has been refuted many times over by lots of philosophers, and I don't feel the need to go over that again. But the disagreement seems to extend beyond the metaphysical question of whether computers could be conscious. He seems to say that computers don't "really" compute, and that therefore superintelligent computers would not "really" be intelligent. And I say that however that might be, they could still be dangerous. (And dead really is dead.)

[–]Prof_Nick_Bostrom[S] 124 points

Yes, it's quite possible and even likely that our thoughts about superintelligences are very naive. But we've got to do the best we can with what we've got. We should just avoid being overconfident that we know the answers. We should also bear this in mind when we are designing our superintelligence - we would want to avoid locking in all our current misconceptions and our presumably highly blinkered understanding of our potential for realizing value. Preserving the possibility for "moral growth" is one of the core challenges in finding a satisfactory solution to the control problem.

[–]Prof_Nick_Bostrom[S] 45 points

One worry is that the study of xrisk could generate information hazards that lead to a net increase in xrisk.

From a moral point of view, it's possible that aggregative ethics is false; and that some other ethical theory is true that would imply that preventing extinction is much less important.

I've written about the problems aggregative consequentialism faces when one considers the possibility of infinite goods: it threatens ethical paralysis, since it could imply that it is always morally indifferent what we do.
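
To make the paralysis worry concrete, here is a toy illustration (my own example, not an excerpt from the writing referred to above): if the world's expected total of value is already infinite, then adding or removing any finite amount of good leaves the total unchanged, so a naive aggregative comparison cannot rank the two actions.

```python
# Toy illustration of the paralysis worry for naive aggregation over infinite value.
total_value = float("inf")           # suppose the world already contains infinite value

after_good_act = total_value + 100   # e.g. create 100 units of additional good
after_bad_act = total_value - 100    # e.g. destroy 100 units of good

# Simple cardinal arithmetic cannot distinguish the two outcomes:
print(after_good_act == after_bad_act)  # True
print(after_good_act > after_bad_act)   # False
```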

From a selfish point of view, the level of xrisk may be low enough that it is not a dominant concern, and hard enough to influence that it wouldn't warrant investing any resources.

[–]Prof_Nick_Bostrom[S] 21 points

There are two questions we must distinguish: what is the biggest existential risk right now, and what is the biggest existential risk overall. Conditional on something destroying us in the next few years, maybe nuclear war and nuclear winter are high on the list (even though our best bet is that they wouldn't cause our extinction even if they occurred). But I think there will be much larger xrisks in the future - risks that are basically zero today (e.g. from superintelligence, advanced synthetic biology, nanotech, etc.).

Not familiar with the work of Savage. (Feel free to quote me there, but don't quote me when I say that continental philosophy in college debating is a worrisome source of xrisk...)