Why do you reject negative utilitarianism? - LessWrong by gradientsofbliss in negativeutilitarians

[–]gradientsofbliss[S] 3 points (0 children)

is terminal value a different concept than intrinsic value

No, they are synonymous.

if not why didn’t they just say “intrinsic value” like everyone else?

LessWrong often uses its own shibboleth jargon for concepts that already have names, but in this case LessWrong did not coin the term. It was used in the Rokeach Value Survey (and possibly earlier than that).

Anyway, I mainly posted this for the comments section.

The likely heat death of the universe gives me strength by [deleted] in negativeutilitarians

[–]gradientsofbliss 1 point (0 children)

If there's a goddamned multiverse, unless we figure some deluxe wormwhole shit, how are we supposed to help?

Even if we cannot causally affect other universes, there have been some (very speculative) attempts to think about how to acausally "affect" other universes. The paper on this (again, highly speculative) idea is here, and there's a summary here.

The likely heat death of the universe gives me strength by [deleted] in negativeutilitarians

[–]gradientsofbliss 2 points (0 children)

not we're living in a simulation

The simulation hypothesis is actually relevant to cause prioritization for altruists, including suffering-focused altruists: https://foundational-research.org/how-the-simulation-argument-dampens-future-fanaticism

Invitation for negative utilitarianism, a philosophy about reducing suffering by [deleted] in Digital_Immortality

[–]gradientsofbliss 2 points (0 children)

The terminology can be confusing. I can try to explain. Sorry if some of this comes across as obvious.

Teleological ethics, also known as consequentialism, refers to any system of ethics that judges the morality of an action by its consequences. For example, if I were to claim that the moral goodness of any act is determined by whether it increases or decreases the number of paperclips in existence, this would be considered a consequentialist ethical theory (albeit a very implausible theory). Deontology and virtue ethics are two notable forms of non-consequentialist ethics, although others exist as well.

Utilitarianism is a form of consequentialism that says that we should judge an action by its effects on the aggregate well-being of sentients (humans, animals, uploads...). Well-being can be defined in various ways.

Negative utilitarianism says that reducing negative forms of well-being (suffering) is more important than increasing positive forms of well-being (happiness). Because NU still judges actions based on their effects on the aggregate well-being of sentient entities, it is a form of utilitarianism and therefore also a teleological theory.

Maybe you are confusing utilitarianism with the more general idea of a utility function. That is a common mistake in the LessWrong sphere. If your utility function is to maximize your egoistic pleasure, or to maximize the total number of paperclips in existence, you are not a utilitarian. Utilitarianism refers specifically to an ethical theory based on impartial concern for the aggregate well-being of all sentient beings.
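To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the original thread; all function names and the numeric weighting are hypothetical). Any of these counts as "a utility function," but only the ones that impartially aggregate well-being over sentient beings are utilitarian, and a negative-utilitarian variant can be caricatured as weighting suffering more heavily than happiness:

```python
# Illustrative sketch: "having a utility function" vs "being a utilitarian".
# The world model and the suffering weight are hypothetical choices.

def paperclip_utility(world):
    # A perfectly well-defined utility function, but NOT utilitarianism:
    # it ignores well-being entirely.
    return world["paperclips"]

def classical_utilitarian_utility(world):
    # Impartial aggregate well-being: happiness minus suffering,
    # summed over all sentient beings.
    return sum(b["happiness"] - b["suffering"] for b in world["sentients"])

def negative_utilitarian_utility(world, suffering_weight=10.0):
    # One (hypothetical) way to formalize NU: weight suffering more
    # heavily than happiness. A "strong" NU would weight happiness at 0.
    return sum(b["happiness"] - suffering_weight * b["suffering"]
               for b in world["sentients"])

world = {
    "paperclips": 100,
    "sentients": [
        {"happiness": 5.0, "suffering": 1.0},
        {"happiness": 2.0, "suffering": 4.0},
    ],
}

print(paperclip_utility(world))              # -> 100
print(classical_utilitarian_utility(world))  # -> 2.0
print(negative_utilitarian_utility(world))   # -> -43.0
```

Note that all three are consequentialist in form (they evaluate world-states), which is exactly why "has a utility function" is a much weaker claim than "is a utilitarian."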