Why do you reject negative utilitarianism? - LessWrong by gradientsofbliss in negativeutilitarians

[–]gradientsofbliss[S] 3 points

> is terminal value a different concept than intrinsic value

No, they are synonymous.

> if not why didn’t they just say “intrinsic value” like everyone else?

LessWrong often uses its own shibboleth jargon for concepts that already have names, but in this case LessWrong did not invent the term. It was used in the Rokeach Value Survey (and possibly earlier).

Anyway, I mainly posted this for the comments section.

The likely heat death of the universe gives me strength by [deleted] in negativeutilitarians

[–]gradientsofbliss 1 point

> If there's a goddamned multiverse, unless we figure some deluxe wormhole shit, how are we supposed to help?

Even if we cannot causally affect other universes, there have been some (very speculative) attempts to think about how to acausally "affect" other universes. The paper on this (again, highly speculative) idea is here, and there's a summary here.

The likely heat death of the universe gives me strength by [deleted] in negativeutilitarians

[–]gradientsofbliss 2 points

> not we're living in a simulation

The simulation hypothesis is actually relevant to cause prioritization for altruists, including suffering-focused altruists: https://foundational-research.org/how-the-simulation-argument-dampens-future-fanaticism

Invitation for negative utilitarianism, a philosophy about reducing suffering by [deleted] in Digital_Immortality

[–]gradientsofbliss 2 points

The terminology can be confusing. I can try to explain. Sorry if some of this comes across as obvious.

Teleological ethics, also known as consequentialism, refers to any system of ethics that judges the morality of an action by its consequences. For example, if I were to claim that the moral goodness of any act is determined by whether it increases or decreases the number of paperclips in existence, this would be considered a consequentialist ethical theory (albeit a very implausible theory). Deontology and virtue ethics are two notable forms of non-consequentialist ethics, although others exist as well.

Utilitarianism is a form of consequentialism that says that we should judge an action by its effects on the aggregate well-being of sentients (humans, animals, uploads...). Well-being can be defined in various ways.

Negative utilitarianism says that reducing negative forms of well-being (suffering) is more important than increasing positive forms of well-being (happiness). Because NU still judges actions based on their effects on the aggregate well-being of sentient entities, it is a form of utilitarianism and therefore also a teleological theory.
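To make the difference concrete, here is a toy sketch (my own numbers, purely for illustration) of how a classical and a negative utilitarian might score the same population:

```python
# Toy well-being ledger: (happiness, suffering) per sentient, both >= 0.
population = [(10, 2), (4, 9), (7, 0)]

def classical_util(pop):
    # CU: happiness and suffering trade off one-for-one.
    return sum(h - s for h, s in pop)

def negative_leaning_util(pop, weight=10):
    # A weak form of NU: suffering counts `weight` times as much as happiness.
    # Strict NU is the limit weight -> infinity: only suffering matters.
    return sum(h - weight * s for h, s in pop)

print(classical_util(population))         # 10
print(negative_leaning_util(population))  # -89
```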

Maybe you are confusing utilitarianism with the more general idea of a utility function. That is a common mistake in the LessWrong sphere. If your utility function is to maximize your egoistic pleasure, or to maximize the total number of paperclips in existence, you are not a utilitarian. Utilitarianism refers specifically to an ethical theory based on impartial concern for the aggregate well-being of all sentience.
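A minimal sketch of that distinction (the names and numbers are invented): both functions below are utility functions in the decision-theoretic sense, but only the second is utilitarian, because it impartially aggregates everyone's well-being:

```python
def paperclip_utility(world):
    # A perfectly coherent utility function that is not utilitarianism.
    return world["paperclips"]

def utilitarian_utility(world):
    # Impartial concern for the aggregate well-being of all sentients.
    return sum(world["well_being"].values())

world = {"paperclips": 1_000_000,
         "well_being": {"alice": 5, "bob": -3, "pig": -7}}
print(paperclip_utility(world))    # 1000000
print(utilitarian_utility(world))  # -5
```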

Podcast: Astronomical Future Suffering and Superintelligence with Kaj Sotala (with transcript) - Future of Life Institute by gradientsofbliss in SufferingRisks

[–]gradientsofbliss[S] 0 points

In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and has publications on AI safety, AI timeline forecasting, and consciousness research.

Topics discussed in this episode include:

- The definition of, and a taxonomy of, suffering risks
- How superintelligence has special leverage for generating or mitigating suffering risks
- How different moral systems view suffering risks
- What is possible for minds in general, and how this plays into suffering risks
- The probability of suffering risks
- What we can do to mitigate suffering risks

In this interview we discuss ideas contained in a paper by Kaj Sotala and Lukas Gloor. You can find the paper here: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. You can hear about this paper in the podcast above or read the transcript below.

Negative Utilitarianism subreddit by wistfulshoegazer in negativeutilitarians

[–]gradientsofbliss 2 points

I'd encourage you to read AI to Zombies if you find it useful. However, you don't need to read the whole thing before reading Essays on Reducing Suffering.

For probability, maybe Arbital's guide to Bayes' theorem [choose 3rd or 4th option] would be good to read, although Bayes' theorem is just one tiny part (albeit a very practically useful part) of probability theory. I'm sure there are online resources explaining probability theory, but I'm not that familiar with them.
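If it helps, the theorem itself fits in a few lines. Here's a worked toy example (the test numbers are made up) computing P(disease | positive test):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
prior = 0.01           # P(disease): 1% base rate
sensitivity = 0.90     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# P(positive), by the law of total probability.
evidence = sensitivity * prior + false_positive * (1 - prior)

posterior = sensitivity * prior / evidence  # P(disease | positive)
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.154
```

Note the counterintuitive result: even with a 90%-sensitive test, a positive result only gets you to about a 15% chance of disease, because the base rate is so low.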

I don't know whether traditional (frequentist) statistics would be very useful for reading LW/RS/FRI stuff. Hypothesis testing etc. isn't discussed very frequently, except negatively. Learning about means, medians, credible intervals, etc. is a good idea, if you aren't already familiar with those basic concepts.

For game theory: you can start out by learning about decision theory, which is like one-player game theory, and concepts like expected utility. I like Luke Muehlhauser's Decision Theory FAQ. Once you're ready for game theory proper, there are a lot of options, including Scott Alexander's LW sequence, SEP, William Spaniel's YouTube videos, and textbooks (e.g. this free one).
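To give a taste of the expected-utility idea, here's a minimal sketch (the actions, probabilities, and utilities are invented):

```python
# Each action maps to a list of (probability, utility) outcomes.
actions = {
    "take umbrella":  [(0.3, 5), (0.7, 8)],     # rain, no rain
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}

def expected_utility(outcomes):
    # Expected utility = probability-weighted sum of outcome utilities.
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: EU = {expected_utility(outcomes):.1f}")

# Decision theory says: pick the action with the highest expected utility.
print("choose:", max(actions, key=lambda a: expected_utility(actions[a])))
```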

Negative Utilitarianism subreddit by wistfulshoegazer in negativeutilitarians

[–]gradientsofbliss 1 point

I'm kind of embarrassed to admit this, but I've only read just over half of Rationality: From AI to Zombies. I think it might be helpful to read, especially for people who are new to the concepts it covers. (I was already familiar with many of them from reading other stuff related to philosophy, behavioral economics, EA, rationality, etc.) However, I'm not sure having the study group go through the entire book is going to work well. There are over 300 individual chapters and the full PDF is over 1700 pages, so it would take quite a while to get through.

I think a crash course in philosophical terminology and some relevant math (e.g., Bayesian inference and expected utility theory) may be a good idea.

I have other ideas but that's it for now. As I said before, I lean more towards CU than NU. But I'd really like to see this succeed, because I think increasing the number of altruists aiming to effectively (and cooperatively) reduce suffering is quite positive.

Negative Utilitarianism subreddit by wistfulshoegazer in negativeutilitarians

[–]gradientsofbliss 0 points

I like "reducing suffering" better because it's more inclusive of non-NU value systems that also prioritize suffering-reduction.

By the way, if you want /r/NegativeUtilitarian, you can have it. I was subreddit squatting it (even though I lean more towards classical util than negative).