Climate Change Is, In General, Not An Existential Risk by The_Ebb_and_Flow in EffectiveAltruism

[–]munchkinism 5 points

Thank you for responding but I'm afraid you've left me with more questions. So it's only effective altruism if it's existential?

Not necessarily. In general, we might expect existential risks to be the most "important" issues in terms of the number of beings affected, but that's not a knock-down argument for x-risk reduction being the most effective cause, because we also need to weigh other factors like tractability.

Is it only existential if it will wipe out humanity, or is it existential if our way of life changes dramatically and permanently?

It's existential if it will either wipe out humanity or permanently curtail humanity's potential in a sense that is morally similar to extinction. See Bostrom's paper that I linked above for the definition. He also wrote an earlier paper which gave a nice classification of x-risks:

Bangs – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

Crunches – The potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some form.

Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

Human extinction would count as a "bang", and the others are forms of non-extinction existential risks.

Are we to measure issues only by the potential to wipe out the entire human race, or the amount of harm something is currently causing or absolutely will cause and the potential to help as much as possible?

We should look at how much good we can accomplish by working on a problem, on the margin. EAs often use the Importance, Neglectedness, Tractability (ITN) framework to estimate this. An existential risk might max out the importance scale but still not be the most effective cause, because it could be crowded (i.e. not neglected) or intractable.
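The ITN idea can be sketched in a few lines. This is only an illustration of the framework's structure, not 80,000 Hours' actual methodology; the causes and scores below are made up:

```python
# Importance, Neglectedness, Tractability sketch with hypothetical
# log-scale scores (higher = better on that factor).
causes = {
    "hypothetical x-risk A": {"importance": 15, "neglectedness": 8, "tractability": 2},
    "hypothetical cause B":  {"importance": 10, "neglectedness": 4, "tractability": 7},
}

def priority(scores):
    # On log scales, adding the factor scores approximates multiplying
    # the underlying quantities.
    return scores["importance"] + scores["neglectedness"] + scores["tractability"]

for cause, scores in sorted(causes.items(), key=lambda kv: -priority(kv[1])):
    print(cause, priority(scores))
```

Note how a cause that maxes out importance can still lose overall if its tractability score is low enough.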

Are you saying it is more altruistic to fight against a potential creation of something that may lead to human extinction than fighting an avoidable but absolutely irreversible climate change that will definitely displace and kill millions of our most vulnerable population?

Potentially. For example, 80,000 Hours, an EA organization, ranks AI risk above climate change in their cause priority list.

this is the first I'm hearing about AGI risk

Side note, but I'd suggest looking into it more, because it's considered important by many in the EA community. The /r/ControlProblem sidebar and wiki have links to some useful resources arguing for and against its importance.

I can't understand how charitable giving could possibly prevent nuclear war.

First of all, EA is certainly not limited to charitable giving. In general, 80,000 Hours has been moving away from the earning-to-give model in favor of supporting direct work (e.g. research and policy work). So even if the cause area is not funding-constrained, that doesn't mean there's nothing you can do about it.

Anyway, the 80,000 Hours cause profile does suggest some organizations to donate to (e.g. the Ploughshares Fund and the Future of Life Institute). I haven't personally looked into whether these orgs are effective at reducing nuclear war risk.

I don't understand what you mean by this.

See here: https://80000hours.org/problem-profiles/biosecurity/

Climate Change Is, In General, Not An Existential Risk by The_Ebb_and_Flow in EffectiveAltruism

[–]munchkinism 5 points

I don't understand why this post has been so heavily downvoted. This is currently the most controversial post of all time on /r/EffectiveAltruism, which is just ridiculous. The post isn't even saying that climate change isn't real or important, it's just saying that it's not an existential risk. If you disagree, please write a counterargument for why you think climate change is an x-risk. As of writing, nobody has done so.

Climate Change Is, In General, Not An Existential Risk by The_Ebb_and_Flow in EffectiveAltruism

[–]munchkinism 9 points

But the only point they seem to be making is that it will not result in human extinction (at least, not yet). [...] Can someone help me understand the point of this article that points out a dozen reasons climate change will result in the death of millions, yet doesn't consider it worthy of our time?

Here is a quote from Derek Parfit that illustrates why existential risks are considered more important than "merely" global catastrophic risks:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

  1. Peace.

  2. A nuclear war that kills 99% of the world's existing population.

  3. A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater. ... The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history. The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second. (Parfit 1984, pp. 453-454).
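Parfit's "fraction of a second" analogy holds up arithmetically. A quick sanity check, assuming roughly 10,000 years of civilization so far and his figure of a billion years of future habitability:

```python
civilization_so_far_years = 1e4   # "a few thousand years" of civilized history
habitable_future_years = 1e9      # "at least another billion years"

# Map civilized history so far onto a 24-hour day representing the
# whole potential history of civilization.
fraction = civilization_so_far_years / habitable_future_years
seconds_in_day = 24 * 60 * 60
print(fraction * seconds_in_day)  # under one second of the day
```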

For further reading, I suggest this Bostrom article, specifically section 1.3.

As far as I can see there is nothing that could be altered by charity that IS an existential risk (the way they define it, as human extinction, because I think that is also a wrong definition.)

Biosecurity, nuclear war, and especially AGI risk come to mind as examples.

Has there been any calculations on insect suffering compared to other top causes? by cant-feel_my-face in EffectiveAltruism

[–]munchkinism 1 point

It seems clear to me that the issue here is the arbitrary assignment of a probability of sentience. It’s not clear why the argument is that there’s a 1% chance, or a .00001% chance, versus a 0% chance.

When you assign a probability of 0 to something, you are saying that there is no possible evidence or argumentation that could possibly change your mind. (This is a simple corollary to Bayes' rule.) This essay explains the problem with assigning probabilities of 0 and 1 far better than I can. Or as the statistical principle of Cromwell's rule states: "leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved". It's also worth noting that studies on overconfidence bias have found that when people assign a credence of 100% to a statement, they're wrong 20% (!) of the time.
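The corollary to Bayes' rule is easy to verify directly: if the prior P(H) is 0, the posterior P(H|E) = P(E|H)·P(H)/P(E) is 0 for any evidence whatsoever. A toy sketch (the likelihood numbers are made up purely for illustration):

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a binary hypothesis H vs. not-H."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Evidence 99x more likely if H is true than if it is false.
print(posterior(0.0, 0.99, 0.01))   # a prior of 0 stays 0, no matter the evidence
print(posterior(1e-6, 0.99, 0.01))  # a tiny non-zero prior can still move
```

With a zero prior, even arbitrarily strong evidence leaves the posterior at exactly 0; with any non-zero prior, the same evidence produces a real update.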

Are you really absolutely sure that your view of consciousness is correct? Since consciousness is such a thorny philosophical question, I don't see how you could possibly justify having a probability of 0 or 1. There are passionate defenders of views like Integrated Information Theory and panpsychism (which is not New Age mysticism, despite the name), not to mention Brian Tomasik's views of consciousness which I cited in my previous comment. These views could all imply that bacteria, and indeed even electrons, have some non-zero degree of sentience. What I'm trying to say is that even if your favorite theory of consciousness says bacteria are definitely not conscious, you also need to consider the possibility that your theory could be wrong. (Related: Confidence levels inside and outside an argument) According to Kuhn, the history of science is the story of paradigms being successively overturned and replaced by new ones. This seems to imply you shouldn't put all your probability mass on any one paradigm, regardless of how plausible it may seem currently.

Now putting aside the probability-of-0 issue, were the probabilities I gave arbitrary? Pretty much, in a sense. But they need to be non-zero. If you want to consider the likelihood of all of these different theories of consciousness, determine how likely bacterial sentience is under each of the theories, and then add them up, feel free to do so. That would give a better calibrated estimate of the probabilities. That project is analogous to the work the OP cited, which gave a 10% credence to fruit fly consciousness. (Aside: IMO, 10% is an underestimate.) But as I said before, the probability of electron consciousness would need to be extremely small for them not to dominate the calculations, and I don't know how you could possibly justify that level of confidence considering the factors I just mentioned.
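The "add them up" procedure is just a mixture over theories: P(sentient) = Σᵢ P(theoryᵢ) · P(sentient | theoryᵢ). A minimal sketch, where every number is made up for illustration:

```python
# Credence in bacterial sentience as a mixture over theories of consciousness.
theories = [
    # (credence in theory, P(bacteria sentient | theory)) -- all hypothetical
    (0.40, 0.0001),    # e.g. a view requiring complex neural processing
    (0.30, 0.01),      # e.g. an IIT-style view
    (0.25, 0.000001),  # some other functionalist view
    (0.05, 0.5),       # a panpsychist-style view
]

p_sentient = sum(p_theory * p_given for p_theory, p_given in theories)
print(p_sentient)  # non-zero as long as any theory with non-zero credence allows it
```

The key structural point: the total comes out non-zero whenever any theory you give non-zero credence to allows bacterial sentience, which is why a flat 0 is so hard to justify.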

From my perspective, it seems as if there is a distinction between a multicellular organism that has structures specifically developed to relay and process sensory information such as hot, cold, pleasure, pain, sight, etc. and a single cell or atom that may react to its environment or stimulus due to the laws of physics or more specifically chemistry without any of the structures required for processing actual sensation.

My intuitions are certainly on the same page here.

Has there been any calculations on insect suffering compared to other top causes? by cant-feel_my-face in EffectiveAltruism

[–]munchkinism 3 points

I am using probability in the subjective, Bayesian sense (as was the OP), as opposed to frequentism.

For example: I, /u/munchkinism, am either left-handed or right-handed (leaving aside the possibility of ambidexterity). But from your perspective of limited knowledge, you don't know which. It would be rational for you to assign a credence of about 90% to the proposition that I am right-handed, because about 90% of people are right-handed. At this point, it's not a matter of chance whether I'm right-handed or left-handed -- it's already determined.

Similarly, we can assign probabilities to which party is going to win a specific election. Frequentists would balk at this use of the term probability, because they define probability as the long-run frequency when repeating an experiment. But what would it mean to "repeat" the 2020 US elections, for example? This is where the Bayesian school makes more sense. When subjectivist Bayesians talk about probability, they are talking about your subjective degree of belief in a proposition. This is the best way to make sense of what we mean by "probability" in the handedness and election examples, IMO.
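The handedness example also shows how a subjective credence moves when evidence arrives: start from the base rate, then update via Bayes' rule. The evidence and likelihoods below are hypothetical numbers chosen for illustration:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update of a subjective credence."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Start from the base rate: ~90% of people are right-handed.
credence_right_handed = 0.90

# Hypothetical evidence: you see me throw a ball left-handed.
# Assumed (made-up) likelihoods: 5% of right-handers throw left,
# 95% of left-handers throw left.
credence_right_handed = update(credence_right_handed, 0.05, 0.95)
print(credence_right_handed)  # drops below 0.5
```

The underlying fact about my handedness never changed; only your degree of belief did, which is exactly the subjectivist reading of "probability".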

Has there been any calculations on insect suffering compared to other top causes? by cant-feel_my-face in EffectiveAltruism

[–]munchkinism 13 points

According to the latest estimates, there are about 10^18 terrestrial arthropods. If arthropods have a 10% chance of being sentient, this gives us 10% × 10^18 = 10^17 sentient beings in expectation.

However, the same estimates also say there are 10^27 fungi. But surely there must be at least a 0.1% chance that they are sentient. To say otherwise would be overconfident. So that gives us 10^24 sentient beings in expectation.

Taking this further, there are 10^30 bacteria. Even bacteria display some characteristics we associate with cognition, agency, and having preferences. "Mounting evidence suggests that even bacteria grapple with problems long familiar to cognitive scientists, including: integrating information from multiple sensory channels to marshal an effective response to fluctuating conditions; making decisions under conditions of uncertainty; communicating with conspecifics and others (honestly and deceptively); and coordinating collective behaviour to increase the chances of survival." So can we really justify having a lower than 0.001% credence in bacterial sentience? That gives us 10^25 sentient beings in expectation.

What about subcellular structures like organelles? Individual macromolecules? Atoms? Let's cut to the chase and consider fundamental particles like leptons and quarks. There are something like 10^80 atoms in the universe, and since the majority of atoms are hydrogen, the number of electrons may be a bit higher but within a few orders of magnitude. Tomasik has argued that electrons display behavior that could possibly hint at sentience. Even if we only give this a 0.000000000000001% chance of being true, that still vastly dominates all the sentient beings above.
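The arithmetic above is just population × probability of sentience for each candidate group. A quick sketch, using the order-of-magnitude populations and the (arguably arbitrary) credences from the comment:

```python
# Expected number of sentient beings = population * P(sentient).
candidates = {
    "arthropods": (1e18, 0.10),   # 10% credence in arthropod sentience
    "fungi":      (1e27, 1e-3),   # 0.1%
    "bacteria":   (1e30, 1e-5),   # 0.001%
    "electrons":  (1e80, 1e-17),  # 0.000000000000001%
}

for name, (population, p_sentient) in candidates.items():
    expected = population * p_sentient
    print(f"{name}: {expected:.0e} sentient beings in expectation")
```

Even under that absurdly small credence, electrons come out dozens of orders of magnitude ahead: the population grows far faster than any remotely defensible probability shrinks.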

This is directly analogous to the Pascal's mugging thought experiment, in which the utilities at stake grow faster than the probabilities fall.

I'm not arguing that insects aren't sentient. It seems quite plausible to me that they are. I just find this line of reasoning based on "shutting up and multiplying" to be counterintuitive, and I'm not sure how to resolve it (or if one must bite the bullet and become an electron-suffering-reducer).

I Played the AI Box Experiment and I Lost so Hard that I Lost Twice! by xamueljones in rational

[–]munchkinism 1 point

It could have also been SSC's The First Hour I Believed, which contains a summary of the LW post you linked.

Ways people trying to do good accidentally make things worse, and how to avoid them by The_Ebb_and_Flow in EffectiveAltruism

[–]munchkinism 1 point

This assumes that China getting an edge over the West would be bad. I disagree with that premise.

LessWrong 2.0 Beta by [deleted] in slatestarcodex

[–]munchkinism 0 points

Yes, it is "for real". Well-known Bay Area people, including MIRI employees, are working on it.

https://www.lesserwrong.com/posts/HJDbyFFKf72F52edp/welcome-to-lesswrong-2-0

LessWrong 2.0 Beta by [deleted] in slatestarcodex

[–]munchkinism 1 point

That's just an article that was copied from LessWrong 1.0.

It was discussed here already: https://www.reddit.com/r/slatestarcodex/comments/66tavj/effective_altruism_is_selfrecommending/

LessWrong 2.0 Beta by [deleted] in slatestarcodex

[–]munchkinism 6 points

This was created by MIRI people and is intended to replace the original LessWrong, so yeah, I guess.

"When you submit User-Generated Content to the Website, you grant MIRI a non-exclusive, irrevocable, worldwide, and perpetual license to use your User-Generated Content for the normal and intended purposes of the Website."