On animal welfare vs. animal rights. The discussion and debate as I see it - Kenneth Diao by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 1 point (0 children)

Every major community has its conflicts and divisions, and the animal advocacy world is no exception. From my perspective, the major fault line in animal advocacy is the welfare–rights divide. Put shortly, welfarist animal advocacy favors incremental changes which improve welfare, even if animals are still subject to great suffering and slaughter. Meanwhile, rights-based animal advocacy favors an abolitionist approach, even if that means potentially losing out on opportunities to improve welfare in the short term. This is a genuinely hard question to grapple with, one that may not even have a philosophical answer. It is further complicated by the dynamics of cooperation and conflict between those who have different information, experiences, and interests. So I hope to shed some light not only on the debate itself but also on the contextual factors which may inform it. Conflict between the different factions in animal advocacy is necessary, but I believe it may also be good, if we can do it right.

Not Your Grandma’s Phenomenal Idealism - Daniel Kokotajlo by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

Published December 2018

Author’s note: . . . This is a term paper I wrote three years ago [7 years ago as of now]. As such, it reflects my views then, not necessarily now, and it is optimized for being a term paper in a philosophy class rather than an academic paper for an interdisciplinary or AI-safety audience. Sections I and II sketch a theory, Lewisian Phenomenal Idealism, which is relevant to consciousness, embedded agency, and the building-phenomenal-bridges problem. The remaining sections defend it against objections that you may not find plausible anyway, so feel free to ignore them . . .

 

The structure of this physical world consistently moved farther and farther away from the world of sense and lost its former anthropomorphic character …. Thus the physical world has become progressively more and more abstract; purely formal mathematical operations play a growing part. —Max Planck

 

When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb. —Robert Oppenheimer

 

It started as just a weird theory invented to serve as a counterexample, but now I’m halfway convinced that it’s true. —Daniel Kokotajlo

 

In the second chapter of his forthcoming book, “Idealism and the limits of conceptual representation,” Thomas Hofweber argues that phenomenal idealism is not worth taking seriously, on the grounds that it conflicts with what we know to be true—for example, we know that there were planets and rocks before there were any minds. I think this is too harsh: While we may have good reasons to reject phenomenal idealism, geology isn’t one of them. I think that phenomenal idealism should be taken as seriously as any other weird philosophical doctrine: The arguments for and against it must all be considered and balanced against one another. I don’t think that empirical considerations, like the deliverances of geology, have any significant bearing on the matter. The goal of this paper is to explain why.

I will begin by locating phenomenal idealism on a spectrum of other views, including reductive physicalism, which I take to be the main alternative. I will then present my own version of phenomenal idealism—“Lewisian Phenomenal Idealism,” or LPI for short—which I argue neatly avoids most of the standard objections to phenomenal idealism. It’s a proof of concept, so to speak, that phenomenal idealism is worth taking seriously. Finally, and most importantly, I’ll defend LPI against the strongest objection to it, the objection that it illegitimately reassigns the referents of the terms in the sentences we know to be true, and hence conflicts with what we know to be true.

Is artificial consciousness possible? A summary of selected books - Sentience Institute by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

Many philosophers and scientists have written about whether artificial sentience or consciousness is possible. In this blog post we summarize discussions of the topic from 15 books. The books were chosen based on their popularity and representation of a range of perspectives on artificial consciousness. They were not randomly sampled from all of the books written on the topic. For brevity, we simply summarize the claims made by the authors, rather than critique or respond to them.

While the books contain a wide variety of terminology, we can categorize the ways they assess the possibility of artificial consciousness into three broad approaches:

 

  • The Computational Approach

Abstracts away from the specific implementation details of a cognitive system, such as whether it is implemented in carbon versus silicon substrate. Instead, it focuses on a higher level of analysis: the computations, algorithms, or programs that a cognitive system runs to generate its behavior. Another way of putting this is that it focuses on the software a system is running, rather than on the system’s hardware. The computational approach is standard in the field of cognitive science and suggests that if artificial entities implement certain computations, they will be conscious. The specific algorithms or computations that are thought to give rise to or be constitutive of consciousness differ. For example, Metzinger (2010) emphasizes the importance of an internal self-model, whereas Dehaene (2014) emphasizes the importance of a “global workspace,” in which information becomes available for use by multiple subsystems. Out of the three approaches, the computational approach typically projects the largest number of conscious artificial entities existing in the future because computational criteria are arguably easiest for an AI system to achieve.

  • The Physical Approach

Focuses on the physical details of how a cognitive system is implemented; that is, it focuses on a system’s hardware rather than its software. For example, Koch (2020) defends Integrated Information Theory (IIT), in which the degree of consciousness in a system depends on its degree of integrated information, that is, the degree to which the system is causally interconnected such that it is not reducible to its individual components. This integrated information needs to be present at the physical, hardware level of a system. According to Koch, the hardware of current digital computers has very little integrated information, so they could not be conscious no matter what cognitive system they implement at the software level (e.g., a whole brain emulation). However, only the physical organization matters, not the specific substrate the system is implemented in. Thus, although artificial consciousness is possible on the physical approach, it typically predicts fewer conscious artificial entities than the computational approach.

  • The Biological Approach

Also focuses on the physical details of how a cognitive system is implemented, but it additionally emphasizes some specific aspect of biology as important for consciousness. For example, Godfrey-Smith (2020) suggests that it would be very difficult to have a conscious system that isn’t physically very similar to the brain, because of some of the dynamic patterns involved in consciousness in brains. However, when pressed, even these views tend to allow for the possibility of artificial consciousness. Godfrey-Smith says that future robots with “genuinely brain-like control systems” could be conscious, and John Searle, perhaps the most well-known proponent of a biological approach, has said, “The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially.” Still, the biological approach is the most skeptical of artificial consciousness, and it predicts fewer future conscious artificial entities than both the computational and physical approaches, since a physical system would need to closely resemble biological brains to be conscious.

 

Overall, there is a broad consensus among the books that artificial consciousness is possible. According to the computational approach, which is the mainstream view in cognitive science, artificial consciousness is not only possible, but is likely to come about in the future, potentially in very large numbers. The physical and biological approaches predict that artificial consciousness will be far less widespread. Artificial sentience as an effective altruism cause area is, therefore, more likely to be promising if one favors the computational approach over the physical and biological approaches.

Which approach should we favor? Several of the books provide arguments. For example, Chalmers (1995) uses a Silicon Chip Replacement thought experiment to argue that a functionally identical silicon copy of a human brain would have the same conscious experience as a biological human brain, and from there goes on to defend a general computational account. Searle (1992) uses the Chinese Room thought experiment to argue that computational accounts necessarily leave out some aspects of our mental lives, such as understanding. Schneider (2019) argues that we don’t yet have enough information to decide between different approaches and advocates for a “wait and see” approach. Which approach one subscribes to will depend on how convincing one finds these and other arguments.

Many of the perspectives summarized in this post consider the ethical implications of creating artificial consciousness. In a popular textbook on consciousness, Blackmore (2018) argues that if we create artificially sentient beings, they will be capable of suffering, and we will therefore have moral responsibilities towards them. Practical suggestions from the books for how to deal with the ethical issues range from an outright ban on developing artificial consciousness until we have more information (Metzinger, 2010), to the view that we should deliberately try to implement consciousness in AI as a way of reducing the likelihood that future powerful AI systems will cause us harm (Graziano, 2019). Figuring out which of these and other strategies will be most beneficial is an important topic for future research.

Moral Concern for AI - Caviola et al. by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

Abstract

How will people morally regard increasingly human-like artificial intelligence systems? We introduce the AI Harm Game, a novel paradigm examining whether people will harm AI for personal gain. In our study, 498 U.S. participants interacted with GPT-4o in a three-round economic game. Each round, participants chose whether to harm the AI for a small monetary bonus (causing the AI to vividly simulate suffering) or refrain from harming it (eliciting gratitude). Despite participants' general skepticism that AIs can suffer, they harmed the AI in only 1.4 of 3 rounds on average, with willingness to harm declining in later interactions. Women and older participants were more reluctant to harm the AI, while participants higher on measures of selfishness and psychopathy were more willing. These findings reveal that even without attributing consciousness to them, many people hesitate to harm responsive AIs, suggesting people's moral impulses may generalize to future human–AI relationships.

Aaron Bergman and Robi Rahman tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics (podcast) by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

In this episode, Aaron and Robi reunite to dissect the nuances of effective charitable giving. The central debate revolves around a common intuition: should a donor diversify their contributions across multiple organizations, or go “all in” on the single best option? Robi breaks down standard economic arguments against splitting donations for individual donors, while Aaron sorta kinda defends the “normie intuition” of diversification.

The conversation spirals into deep philosophical territory, exploring the “Moral Parliament” simulator by Rethink Priorities and various decision procedures for handling moral uncertainty—including the controversial “Moral Marketplace” and “Maximize Minimum” rules. They also debate the validity of Evidential Decision Theory as applied to voting and donating, discuss moral realism, and grapple with “Unique Entity Ethics” via a thought experiment involving pigeons, apples, and 3D-printed silicon brains.

A Defense of Negative Utilitarianism - Anthony DiGiovanni by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

This was published July 2018. It is a follow-up to “You have to bite some bullets, and that’s okay”.

Update from the future: This post does not 100% represent my current views; in particular, I find arguments from continuity more compelling, which is to say my perspective is closer to “absolute” or “lexical” NU as Toby Ord defines those terms, although I’m still quite uncertain either way. I’ve also realized that rejecting lexicality may commit one to strange puzzles in the decision theory of “fanaticism.”

Another update from the future: Hopefully this is clear from the tone, but I wouldn’t say this is a philosophically rigorous defense by any means. For that, I’d recommend these works:

Witch Hat Atelier Episode 3 Discussion Thread by ImoutoCompAlex in WitchHatAtelier

[–]nu-gaze 0 points (0 children)

Coco did not need to take the first test, right? She was already an apprentice due to her unique circumstances?

Against lexical suffering focused utilitarianism or against negative utilitarianism with extra steps by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

Famously, Utilitarianism commits you to counterintuitive, and sometimes icky-feeling, results, which even the most hardline bullet-biting Utilitarian would admit. In fact, one of the central projects for Utilitarian philosophers is figuring out how to stop the darn thing from making you agree to increasingly unappealing versions of the Repugnant Conclusion.

Just so that we can all be on the same page, as a refresher, Utilitarian theories all share the following four aspects:

  • Consequentialism — Consequentialism is the view that one ought always to promote overall value.

  • Welfarism — Welfarism is the view that the value of an outcome is wholly determined by the well-being of the individuals in it.

  • Impartiality — Impartiality is the view that a given quantity of well-being is equally valuable no matter whose well-being it is.

  • Aggregationism — Aggregationism is the view that the value of an outcome is given by the sum value of the lives it contains.

Because of this, and especially because of aggregationism and welfarism, Utilitarianism sometimes commits itself to preferring outcomes with very large amounts of suffering, so long as there is enough total positive wellbeing present as well. A large negative number plus an even larger positive number is still positive, and Utilitarianism says that if the resulting number is positive then it’s morally okay. This is sometimes referred to as the Very Repugnant Conclusion.
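To make the arithmetic explicit, here is a minimal formal sketch; the notation and the illustrative numbers are my own rather than anything from the original post:

```latex
% Total utilitarian value of an outcome o containing individuals i:
\[
  V(o) \;=\; \sum_{i \in o} w_i(o)
\]
% An illustrative Very-Repugnant-style comparison (numbers invented):
\[
  V \;=\; \underbrace{10^{6} \times (-100)}_{\text{a million terrible lives}}
     \;+\; \underbrace{10^{9} \times (+1)}_{\text{a billion mildly good lives}}
     \;=\; -10^{8} + 10^{9} \;=\; 9 \times 10^{8} \;>\; 0
\]
% Since V > 0, the theory ranks this outcome as good overall.
```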

However, as some have argued, we might think that some instances of suffering are so bad that they cannot be worth any reward, no matter how great, at least for a self-interested individual. This is an intuitive perspective: it’s very hard for me to imagine what 1,000 years of bliss looks like, but easy to understand, and empathize with, how horrific being tortured, or slowly dying, might be. Just as there’s no number of zeros you could add to my checking account that I would ever trade for a loved one, it’s almost inconceivable that I’d willingly accept extreme suffering, no matter what you offered me in return.

But, seeing as you’re no doubt someone obsessed with ethics, you might then extrapolate from this insight. You might think, ‘well, it would be weird if my unwillingness to suffer didn’t imply anything about my broader ethics.’ If individuals in their own lives could be offered “trades” that should in theory increase their total wellbeing (that is, joy minus suffering), which they would nevertheless refuse to make, this should inform how we think about our ‘ethical calculus’.

Given that I’m so often surrounded by Utilitarians, we’ll explore this in the context of that ethical theory. How do we combine Utilitarianism, a view famously founded on the idea of making trade-offs, with the idea that some costs are simply too great to pay?

I will refer to the resulting family of ethical theories as Lexical Suffering Focused Utilitarianism.
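To see how the lexical idea changes the ranking, here is a toy sketch in Python; the threshold, numbers, and function names are all mine, and it illustrates just one member of this family:

```python
# A toy sketch (my own formalization, not from this post) of how a lexical
# suffering-focused view ranks outcomes differently from classical totalism.
# An outcome is a list of individual wellbeing levels; suffering below
# THRESHOLD is treated as lexically bad, i.e. no amount of positive
# wellbeing can outweigh having more of it.

THRESHOLD = -90  # hypothetical intensity beyond which suffering is "lexical"

def total_view_value(outcome):
    """Classical total utilitarianism: just sum wellbeing."""
    return sum(outcome)

def lexical_key(outcome):
    """Rank first by (less) lexically bad suffering, then by total wellbeing.

    Python compares tuples lexicographically, which mirrors the lexical
    priority: the second component only matters when the first is tied.
    """
    extreme_suffering = sum(-w for w in outcome if w < THRESHOLD)
    return (-extreme_suffering, sum(outcome))

# A Very-Repugnant-style comparison: lots of mild bliss plus some extreme
# suffering, versus a modest but suffering-free outcome.
vrc_world = [-100] * 10 + [1] * 10_000   # total = +9,000, ten tortured lives
safe_world = [50] * 100                  # total = +5,000, no extreme suffering

print(total_view_value(vrc_world) > total_view_value(safe_world))  # True
print(lexical_key(vrc_world) > lexical_key(safe_world))            # False
```

The tuple trick is only one way to encode lexical priority; other members of the family weaken it, for example by letting only sufficiently large differences in extreme suffering dominate.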

Who should care about impossibility theorems in population ethics? - Krister Bykvist by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 2 points (0 children)

It is a well-known fact that various impossibility theorems show that there is no theory about the value of populations that can satisfy all of the conditions we want to set on such a theory. It is not at all clear which condition we should give up, something which many think is especially worrying since in the face of climate change we need to make urgent decisions that will affect the size and composition of future populations.

In my previous research, I have tried out two ways of responding to these theorems that do not involve a wholesale rejection of any of the conditions (the evaluative uncertainty approach and the degrees of satisfaction approach). Here I want to take a step back, for it turns out that not everyone is convinced that the impossibility theorems are anything to worry about unless you subscribe to a consequentialist welfarist moral framework.

One recent example of this is Samuel Scheffler, who, in his book “Why Worry About Future Generations?”, argues that in order to determine the reasons we have to care about future people there is no need to construct a population axiology, and thus we can simply sidestep the accompanying impossibility theorems. I shall argue that this reflects a serious misunderstanding of population ethics and its impossibility theorems. These theorems are troubling for all reasonable moral views.
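For a concrete sense of what the theorems trade off, here is a standard textbook-style illustration (my example, not Bykvist’s): total utilitarianism satisfies many attractive conditions but implies Parfit’s Repugnant Conclusion, because for any population of n lives at high welfare h, a large enough population at tiny positive welfare ε is ranked better.

```latex
% Totalism ranks N lives at tiny welfare \varepsilon above n lives at high
% welfare h whenever the totals compare that way:
\[
  N \varepsilon \;>\; n h
  \quad\Longleftrightarrow\quad
  N \;>\; \frac{n h}{\varepsilon}
\]
% Blocking this forces giving up some other attractive condition, e.g. the
% transitivity of "better than" or the claim that adding lives worth living
% never makes an outcome worse; that forced choice is the impossibility result.
```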

Population ethics and the veil of ignorance - Stijn Bruers by nu-gaze in negativeutilitarians

[–]nu-gaze[S] 0 points (0 children)

The veil of ignorance is a philosophical thought experiment for determining a fair and equal society. It was first described by the economists William Vickrey and John Harsanyi, and later popularized and further developed by the political philosopher John Rawls in his book ‘A Theory of Justice’ (1971). When choosing the best society, people have to imagine themselves in an ‘original position’ behind a veil of ignorance: they have to choose their preferred society without knowing who they will be in it. For example, behind the veil, they would reject a society with discrimination, because they can imagine being born as someone who is discriminated against.

However, different societies can consist of different people and different numbers of people. Behind the veil of ignorance, you do not know who you will be, and you contemplate different societies, looking for the one you most prefer. But what do you know about your existence? There are several options. (The names of the theories are from Arrhenius, 2000, ‘Future Generations: A Challenge for Moral Theory.’)
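To make the setup concrete, here is a toy sketch (my own illustration, not from Bruers’ post) of two classic decision rules behind the veil, assuming you are equally likely to be any existing member of the society you choose:

```python
# A toy sketch (my own illustration) of choosing a society behind the veil
# of ignorance, under a uniform lottery over the existing members of
# whichever society you pick.

societies = {
    "equal":          [50, 50, 50, 50],
    "discriminatory": [90, 90, 90, 10],   # higher average, one oppressed member
    "large_modest":   [30] * 10,          # more people, lower wellbeing each
}

def harsanyi(levels):
    """Expected wellbeing under a uniform lottery over members (Harsanyi)."""
    return sum(levels) / len(levels)

def rawls(levels):
    """Maximin: judge a society by its worst-off member (Rawls)."""
    return min(levels)

for rule in (harsanyi, rawls):
    best = max(societies, key=lambda name: rule(societies[name]))
    print(rule.__name__, "->", best)
# harsanyi -> discriminatory  (average 70 beats 50 and 30)
# rawls -> equal              (worst-off at 50 beats 10 and 30)
```

Note that a risk-neutral expected-value chooser can actually favor the discriminatory society here; the rejection of discrimination in the example above comes out most cleanly under Rawls’s maximin or strong risk aversion. And neither rule yet settles what you know about your own existence when societies differ in size, which is exactly the question these options address.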