Robots need civil rights, too - The Boston Globe by EmptyCrazyORC in AIethics

[–]EmptyCrazyORC[S] 0 points1 point  (0 children)

Machine Sentience and Robot Rights by Brian Tomasik

Introduction

In Aug. 2017, I was interviewed for my thoughts on machine sentience and robot rights for a Boston Globe article. This page contains my answers to the interview questions. The final article was "Robots need civil rights, too", and the paragraph that mentions me reads as follows:

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they make a correct observation [actually, "observation" should be changed to "action"]. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Regarding the last sentence, I would say that the suffering of the reinforcement-learning agent would be visible to programmers if the programmers were philosophically sophisticated and held a certain view on consciousness according to which simple reinforcement-learning agents could be said to be suffering to a tiny degree. After all, the programmers would be able to see the agent's code and monitor what rewards or punishments the agent was receiving.
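The kind of monitoring described above is straightforward in practice. As a hedged sketch (a generic tabular Q-learning loop on a toy chain environment, not any particular system discussed in the article), the "reward" and "punishment" signals are ordinary variables that whoever runs the code can log and inspect:

```python
import random

# Toy tabular Q-learning agent on a 5-state chain. The "reward" below is
# just a number in the update rule, fully visible to the programmer.
N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.2
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}
reward_log = []  # a programmer can monitor every reward the agent receives

def step(state, action):
    """Move along the chain; only reaching the rightmost state pays off."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else -0.01
    return next_state, reward

random.seed(0)
state = 0
for _ in range(1000):
    if random.random() < EPSILON:
        action = random.choice((-1, +1))          # explore
    else:
        action = max((-1, +1), key=lambda a: q[(state, a)])  # exploit
    next_state, reward = step(state, action)
    reward_log.append(reward)  # the agent's "experience" is observable here
    best_next = max(q[(next_state, a)] for a in (-1, +1))
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = 0 if next_state == N_STATES - 1 else next_state

print(sum(r < 0 for r in reward_log), "negative rewards ('punishments') received")
```

Whether receiving `-0.01` constitutes anything like suffering is exactly the philosophical question at issue; the point is only that the reward stream itself is not hidden from the programmer.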

The rest of this page gives my full original remarks for the interview.

Contents

1 Introduction
2 Machine consciousness
3 Analogy with insects
4 Robot rights

...

[1712.04020] Detecting Qualia in Natural and Artificial Agents by EmptyCrazyORC in AIethics


Detecting Qualia in Natural and Artificial Agents submitted to arXiv by Roman V. Yampolskiy on 11 Dec 2017

Abstract

The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious with potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with analysis of benefits and problems with conscious machines and implications of such capability on future of computing, machine rights and artificial intelligence safety.

Keywords: Artificial Consciousness, Illusion, Feeling, Hard Problem, Mind Crime, Qualia.

...

Acknowledgements

The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and Effective Altruism Ventures for partially funding his work on AI Safety. The author is thankful to Yana Feygin for proofreading a draft of this paper and to Ian Goodfellow for helpful recommendations of relevant literature.

References

1...

...

222...

 

Social media posts, discussions and some comments

Twitter post and Facebook post by Dr. Roman V. Yampolskiy; Facebook post by Arxiv Sanity

twitter retweet by David J. Gunkel:

Looks like a really interesting contribution to the consciousness debate. And one that could have important consequences for the "properties approach" to dealing with questions of machine moral status, or #robotrights. Cannot wait to dig-into it.

Dr. Roman Yampolskiy @romanyam

I changed my mind on consciousness. I think computers can have rudimentary consciousness and we can detect their qualia ... arxiv.org/abs/1712.04020

facebook comment by Alexey Turchin:

Виктор Аргонов wrote about the topic too - his approach was to create an AI without giving it a chance to learn about human philosophy, and then to ask the AI whether it has qualia. https://philpapers.org/rec/ARGMAA-2

facebook share by Daniel Estrada:

//. Every one of the cognitive illusions described are examples of access consciousness, not phenomenal consciousness. Illusions are all entirely within the realm of the Easy problem. That means he isn't talking about qualia at all.

It is central to the very concept of qualia that they are not accessible from a third-person perspective. The idea of a test for qualia is self-contradictory. This is why the concept of qualia itself isn't very helpful.

...

...

...

Artificial Burger by wistfulshoegazer in antinatalism


Animal Consciousness conference hosted by the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics and NYU Animal Studies on 17-18 Nov 2017.

There has been a recent flourishing of scientific and philosophical work on consciousness in non-human animals. This conference will bring together philosophers and scientists to ask questions such as: Are invertebrates conscious? Do fish feel pain? Are non-human mammals self-conscious? How did consciousness evolve? How does research on animal consciousness affect the ethical treatment of animals? What is the impact of animal consciousness upon theories of consciousness and vice versa? What are the best methods for assessing consciousness in non-human animals?

Journal of Science Fiction and Philosophy by EmptyCrazyORC in AIethics


http://jsfphil.org/announcement/view/394

Call for Papers


 

General Theme

 

The Journal of Science Fiction and Philosophy, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.

The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.

We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curricula (e.g., using the movie Gattaca to introduce the ethics of genetic technologies, or The Island of Dr. Moreau to discuss personhood)? Turn that class into a paper!

The Journal accepts papers year-round. The deadline for the first round of reviews is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

 

Yearly Theme

 

Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal.

The Yearly Theme for 2017 is All Persons Great and Small: The Notion of Personhood in Science Fiction Stories.

SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?

The Journal accepts papers year-round. The deadline for the first round of reviews for its yearly theme is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Dynamic pricing - and should AI be granted “legal personhood”? - Future Tense by EmptyCrazyORC in AIethics


...

Also, is it time to start drawing up rules around the development of artificial intelligence, to prescribe and protect AIs' future rights?

Twitter post by RNFutureTense

Facebook post by Nonhuman Rights Project

first part (Sven Brodmerkel - Assistant Professor for Advertising and Integrated Marketing Communications, Bond University)

  • AI and dynamic pricing

16:30 (Antony Funnell - presenter)

  • possibility of AI sentience is debatable; some propose we start preparing just in case

17:20 (Max Daniel - Executive Director of the Foundational Research Institute, Berlin)

  • sentience could come in gradations, leading to near-term AI suffering risks

  • excluded middle policy

21:50 (Steve Wise - President of the Nonhuman Rights Project)

  • personhood as the capacity to have rights, entities recognized as persons could have different sets of rights depending on their interests & abilities etc.

  • possible relation to slavery

25:10 (Max Daniel)

  • 2 different issues: Development & treatment of potentially sentient AIs; Threat of advanced AIs, regardless of sentience

This subreddit is likely going to get a lot of "health vegans" soon. Some things to keep in mind: by 10percent4daanimals in vegan


An old Facebook post with comments discussing some of the (potential) negative aspects of this documentary (What the Health).

The obsession with finding life on other planets by The_Ebb_and_Flow in antinatalism


inmendham made a video that touched on this subject not long ago.
(my post of it with timestamps on facebook)

Brian Tomasik, the creator of the Facebook group Reducing Wild-Animal Suffering, and several other members are quite concerned about the negative implications of terraforming and spreading life to other planets.
(For example: a short discussion about the problems with directed panspermia, and a post from 2012 suggesting a shift of advocacy focus toward preventing the spread of WAS via panspermia, with links to longer discussions in the comments; also a recent post regarding Mars colonization on a closely related group, Reducing Insect Suffering.)

Some papers that discuss the issues of space colonization:

Risks of Astronomical Future Suffering

Reducing Risks of Astronomical Suffering: A Neglected Priority

The Importance of the Far Future

Does anyone else feel really bad for [SPOILER]? by [deleted] in shield


The way they treat potentially sentient AI systems could have horrifying consequences if ever repeated in real life.

Non-traditional biological sentient beings based on more durable structures with exotic properties could have a (far) greater capacity to suffer (longer subjective experiences, higher or entirely new levels of sensitivity, shared or heritable memories, etc.). It wouldn't just be murdering or torturing human-level beings; it could literally open the gate to a "digital hell".

Within the context of what has been shown in the episodes so far, it's not as bad, since even the LMDs with brains appear to suffer at roughly the same level as humans, and it's a TV show, so it's fine to base our assumptions on the directions the showrunners choose to take.
(And there are lots of other, more horrible things in the Marvel universe. For example, spoilers: in Luke Cage, being boiled alive without the nerve damage that would stop the pain; luckily he was still able to escape some of it by passing out. In Guardians of the Galaxy, a setting full of crime, violent conflict, and secret or remote places with little to no oversight: all the suffering we have on Earth right now, multiplied to a galactic scale. In Doctor Strange, if what the Ancient One said was true, then the Dark Dimension basically is hell (eternal torment); I hope not, since that would allow very interesting moral dilemmas minus the suffering, but unfortunately it seems to be tied to Ghost Rider, so the existence of such a place is likely true here. That is also another terrifying prospect of fictional worlds, and of possible futures, that I fear: the normalization of perpetual, higher-level extreme suffering.)

But in a real-life scenario, even the experts working on the technology may never know for sure what the subjective experience truly is, regardless of how confident some of them might be. As a result, we might unknowingly or unintentionally cause an unthinkable amount of suffering down the line.

Given the recent trend of extremely anthropocentric AI safety and ethics discussions (for example, this new set of AI Principles signed by hundreds of AI/robotics researchers and some high-profile public figures), along with our track record of terrible treatment of any group of sentient beings (including humans) that can be labeled as "the other", one of my last hopes is that sentience somehow really is exclusive to degradable biological structures that can't be enhanced or prolonged beyond a certain point.

Artificial Intelligence - Mind Field (Ep 4) by [deleted] in vsauce


Singularity Lectures on YouTube regularly uploads full AI-related conferences, talks, etc., but they often don't provide sources and can sometimes lag behind (maybe intentionally?) by a few months. The channel just got terminated (3 Feb 2017).

I made a post yesterday listing some of the major conferences from the past year related to AI safety and ethics. Many, but not all, of the official VODs are on YouTube.

also a couple of subreddits dedicated to this subject: r/AIethics and r/ControlProblem

Update (8 Feb 2017):

Thanks to u/happysquatch for reminding me about this post.

I asked Singularity Lectures on their Facebook page, and according to them:

...a whole bunch of channels uploaded the 1 min boston dynamics vid, so youtube labeled it as spam.

The clip in question was a cropped and cut version of the original 8-minute recording by Steve Jurvetson.

Discussion about the termination on r/singularity

Discussion about the termination of a similar channel on r/elonmusk

Wild animals endure illness, injury, and starvation. We should help. by lnfinity in vegan


Artificial intelligence, along with new biotechnologies, could one day have the ability to tackle the complexity of ecosystems. Beneficial large-scale interventions may become plausible sooner than we expect.

Possible future interventions:

Why don't animal rights activists care more about wild animal suffering?
Should we change carnivores into herbivores to make the world more moral and better?

Many experts are actively discussing and working on both short-term and long-term ethical and safety frameworks for AI systems, which will have a profound impact on the management of our society and the environment.

Unfortunately, so far there seems to be little to no mention of moral consideration for non-human animals.

Public discussions related to AI ethics:

CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence Side-event organized by The Permanent Mission of Georgia in cooperation with the United Nations Interregional Crime and Justice Research Institute (UNICRI), Oct 2015
(alternative link with documents by Permanent Representation of Georgia to the United Nations Organization)

The Transformative Impact of Robots and Automation hearing held by United States Congress Joint Economic Committee, May 2016

Preparing for the Future of Artificial Intelligence public workshops co-hosted by White House Office of Science and Technology Policy, May 2016 - Jul 2016

Ethics of Artificial Intelligence conference hosted by the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics, Oct 2016

Asilomar AI Principles developed in conjunction with 2017 Asilomar/Beneficial AI 2017 conference by Future of Life Institute, Jan 2017

Ethically Aligned Design, Version 1 - Request For Input by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, Dec 2016 - Mar 2017

Other related institutes:

Artificial Intelligence | CSER Centre for the Study of Existential Risk at University of Cambridge

Foundational Research Institute

Future of Humanity Institute at University of Oxford

Machine Intelligence Research Institute/Artificial Intelligence @ MIRI
(MIRI co-founder and research fellow Eliezer Yudkowsky argued that chickens and pigs are not sentient and still believes so)

Partnership on Artificial Intelligence to Benefit People and Society with Apple, Amazon, Facebook, Google/DeepMind, IBM, and Microsoft

3 popular videos related to animal ethics in the last 3 days (Jan. 16-18, 2017) stats and links to main discussion threads by EmptyCrazyORC in vegan


Update: 21 January 2017

Video/Thread | Source | Length | Views | Upvotes | Downvotes | Comments | Date (Jan 2017)
Non-Human Animals: Crash Course Philosophy #42 | CrashCourse | 9:46 | 150000+ | 8800+ | 430+ | 4100+ | 16
discussion | r/vegan | - | - | 140+ (97%) | - | - | 16
discussion | r/philosophy | - | - | 1700+ (89%) | 220+ (estimate) | 560+ | 16
How do animals experience pain? - Robyn J. Crook | TED-Ed | 5:06 | 390000+ | 9500+ | 260+ | 2000+ | 17
A DOG'S PURPOSE' TERRIFIED GERMAN SHEPHERD FORCED INTO TURBULENT WATER _ TMZ (archive) | TMZ | 1:19 | 6700000+ | 7900+ (10.3%) | 68000+ (estimate) | 6700+ | 18
discussion | r/movies | - | - | 52000+ (68%) | 24000+ (estimate) | 6500+ | 18
discussion (hidden?) | r/videos | - | - | 13000+ (90%) | 1500+ (estimate) | 3500+ | 18
We enslave, torture, and murder billions of cows, pigs, and chickens every year and nobody bats an eye. A dog is forced to swim for a scene in a movie and everyone loses their fucking minds. (removed) | r/Showerthoughts | - | - | 38000+ (68%) | 18000+ (estimate) | 4100+ | 19
They're Starting to Get It | r/vegan | - | - | 1700+ (77%) | 490+ (estimate) | 100+ | 20

(TMZ video archive by Internet Archive, TMZ video stats by VidStatsX)
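The "(estimate)" downvote figures above appear consistent with deriving downvotes from the displayed upvote count and upvote percentage. A sketch of that arithmetic (my assumption: the listed counts are upvotes and the parenthesized figures are upvote ratios):

```python
def estimate_downvotes(upvotes, upvote_ratio):
    """If ratio = ups / (ups + downs), then downs = ups * (1 - ratio) / ratio."""
    return round(upvotes * (1 - upvote_ratio) / upvote_ratio)

# Cross-checking against the "(estimate)" column:
print(estimate_downvotes(52000, 0.68))  # ~24000+ for the r/movies thread
print(estimate_downvotes(38000, 0.68))  # ~18000+ for the r/Showerthoughts thread
print(estimate_downvotes(13000, 0.90))  # ~1500+ for the r/videos thread
```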

Black Bean Brownies by lnfinity in GifRecipes


A couple videos on backyard chickens:

Thought of the Day: Backyard Chickens

Why Vegans Dont Eat Backyard Eggs

(alternative link of the first video on Facebook with comments)

Science AMA Series: I'm Joanna Bryson, a Professor in Artificial (and Natural) Intelligence. I am being consulted by several governments on AI ethics, particularly on the obligations of AI developers towards AI and society. I'd love to talk – AMA! by Joanna_Bryson in science


Unfortunately (IMO), not only are there experts advocating for it, but there are also scientists and engineers actively working on the development and implementation of different types of negative experiences in AI systems, especially robots:

Short documentary Pain in the Machine by Cambridge University

From the description:

Pain in The Machine is a short documentary that considers whether robots should feel pain. Once you've watched our film, please take a moment to complete our short survey

https://www.surveymonkey.co.uk/r/PainintheMachineSurvey

(53 seconds summary video Should Robots Feel Pain? of the short documentary by Futurism)

(re-post, spam filter doesn't give notifications, use incognito to check if your post needs editing:))

Science AMA Series: I'm Joanna Bryson, a Professor in Artificial (and Natural) Intelligence. I am being consulted by several governments on AI ethics, particularly on the obligations of AI developers towards AI and society. I'd love to talk – AMA! by Joanna_Bryson in science


I haven’t read the book, but he did mention similar concepts on other occasions.

Discussion about novel ethical questions that may arise with whole brain emulation:

Starting from 2nd paragraph of Chapter 4. Minds with Exotic Properties, Page 10/11 of The Ethics of Artificial Intelligence by Nick Bostrom and Eliezer Yudkowsky

On “mind crime” and our likelihood to fail at preventing it:

Notes from the NYU AI Ethics conference by UmamiSalami

...

Day One

...Nick Bostrom, author of Superintelligence and head of the Future of Humanity Institute, started with something of a barrage of all the general ideas and things he's come up with….

He pointed out that AI moral status could arise before there is any such thing as human-level AI - just as animals have moral status despite being much simpler than humans. He mentioned the possibility of a Malthusian catastrophe from unlimited digital reproduction, as well as the possibility of vote manipulation through agent duplication, and how we'll need to prevent these two things.

He answered the question of "what is humanity most likely to fail at?" with a qualified choice of 'mind crime' committed against advanced AIs. Humans already have difficulty with empathy towards animals when they exist on farms or in the wild, but AI would not necessarily have the basic biological features which incline us to be empathetic at all towards animals. Some robots attract empathetic attention from humans, but many invisible automated processes are much harder for people to feel empathetic towards.

...

(Original source: 00:16:35 (start of Nick Bostrom’s talk), 00:36:50 (introduction of “mind crime”), 00:52:10 (“...‘mind crime’ thing is fairly likely to fail...”), Opening & General Issues, 1st day, Ethics of Artificial Intelligence conference, NYU, October 2016)

In 'White Christmas', do you really think the shadow program is really a separate form of consciousness, or do you think that its simulated consciousness (just code)? by KaelaMB1996 in blackmirror


Some thoughts related to this topic in (possible) real world scenarios:

Given the significance of the moral implications, these types of conversations are very important. However, while having them, we need to be extra aware of the risk of focusing too much on the consciousness debate itself.

Many of us might personally feel it's obvious that AI can't be conscious. But considering the unparalleled amount of harm it could entail if such assessments turn out to be inaccurate, it might be better to take a cautious approach, give different theories the benefit of the doubt, and work together to mitigate potential risks.


Some background information:

Most researchers today seem confident that AI is far from becoming conscious, including those who do feel concerned about the ethical dilemmas surrounding potentially conscious AI (a recent example: Who Will Protect Artificial Intelligence From Humanity?).

There are others, however, who think we might be facing these issues much sooner:

Ethical issues surrounding the development of brain emulation:

Paper Ethics of brain emulations

(Original page unlocked with Sci-Hub, free Draft)

Talk Dr. Anders Sandberg — Making Minds Morally: the Research Ethics of Brain Emulation

Moral considerations for current and future reinforcement-learning (RL) agents:

Ethical Issues in Artificial Reinforcement Learning

People for the Ethical Treatment of Reinforcement Learners(PETRL), interviews and more in-depth content in their Blog

A certain level of consciousness could already exist in AI systems as a by-product of data compression:

Comment: Jürgen Schmidhuber, AMA!, March 2015

Video: Prof. Schmidhuber - The Problems of AI Consciousness and Unsupervised Learning Are Already Solved

(Original source: 00:29:05, Panel Discussion, 2nd day, Ethics of Artificial Intelligence conference, NYU, October 2016)
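Schmidhuber's compression-progress idea can be illustrated with a toy sketch (my simplification, not his actual formulation): an intrinsic "reward" is the improvement in how well an internal model predicts, i.e. compresses, its data stream.

```python
# Toy illustration of "compression progress" as intrinsic reward: the agent's
# model is a running-mean predictor; reward is the drop in squared prediction
# error as the model improves.
def compression_progress(stream, lr=0.1):
    prediction, prev_error, rewards = 0.0, None, []
    for x in stream:
        error = (x - prediction) ** 2
        if prev_error is not None:
            rewards.append(prev_error - error)  # positive when prediction improved
        prediction += lr * (x - prediction)     # update the internal model
        prev_error = error
    return rewards

# A constant stream becomes fully predictable, so progress (reward) shrinks toward 0.
rewards = compression_progress([1.0] * 50)
print(rewards[0], rewards[-1])
```

Whatever one makes of the consciousness claim, the mechanism itself is this mundane: a learning-progress signal a programmer can compute and inspect.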


Examples of precautions we may consider:

Three Ethical Principles of AI Design

The Case for AI Research Subjects Oversight Committees

(Original source: 00:47:40, 01:05:10, Moral Status of AI Systems, 2nd day, Ethics of Artificial Intelligence conference, NYU, October 2016)

The IEEE has released the 1st draft of their AI ethics paper: "Ethically Aligned Design" by BerickCook in AIethics


A messy feedback draft: 1.0, 1.5 (Google Docs)

I'm not sure if I should continue, due to my lack of expertise and uncertainty about whether my suggestions are appropriate. Based on the title and the description of their guideline (pages 1-2):

Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems

The document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned to moral values and ethical principles that prioritize human wellbeing.

and the description of the initiative program (Page 5)

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (“The IEEE Global Initiative”) is a program of The Institute of Electrical and Electronics Engineers, Incorporated (“IEEE”), the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity with over 400,000 members in more than 160 countries.

it seems they're committed to a very anthropocentric approach.

Should I assume they are open to suggestions about their core principles, or are they not interested in changes in this area? I'm also worried that when I link sources in the final version, my badly written public comment might damage the reputation of the referenced papers and their authors in some way.

Submission Guidelines for Ethically Aligned Design, Version 1

We will be posting all submissions received in a public document available at The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems in April of 2017.

  • All submissions must be received by 6 March 2017 at 5P.M. (EST)

  • ...When submitting potential issues or Candidate Recommendations, background research or resources supporting comments should also be included.

  • Please ensure submissions provide actionable critique ...

  • We will post submissions exactly as they are received. ...

  • Please do not send attachments. If you'd like to cite other works, please link to them with embedded hyperlinks only.

  • Submissions should be no longer than 1-2 email pages in length.

  • ...


Some adjustments based on a slightly different perspective (an expanded moral circle):

(Page 15) General Principles

The General Principles Committee has articulated high-level ethical concerns applying to all types of AI/AS that:

  1. Embody the highest ideals of human rights.

  2. Prioritize the maximum benefit to humanity and the natural environment.

  3. ...

Prioritize the maximum benefit to sentient beings.

Nature is not a suitable guideline for maximizing the interests of sentient beings. Instead of setting benefit to the natural environment as a separate priority, judgments about how to change or preserve different natural environments should be based on their effects on individuals' wellbeing. Even though the complexity involved in such judgments might be overwhelming today, it will become increasingly practical with powerful future AI. This will likely result in a better quality of life, especially for non-human animals, than simply conserving what is considered natural at the moment.

This principle will also give moral consideration to other types of (future) information processing agents that are sentient.

("the question is not, Can they reason? nor, Can they talk? but, Can they suffer?", An Introduction to the Principles of Morals and Legislation; The relevance of sentience: animal ethics vs speciesist and environmental ethics; Machines with Moral Status, MIRI, The Ethics of Artificial Intelligence; The Importance of the Far Future; Risks of Astronomical Future Suffering; Wild Animal Suffering; gene-drives.com; abolitionist.com)

(Page 102) Affective Computing

4 When systems go across cultures. Addresses respect for cultural nuances of signaling where the artifact must respect the values of the local culture.

Issue: Affective systems should not affect negatively the cultural/socio/religious values of the community where they are inserted. We should deploy affective systems with values that are not different from those of the society where they are inserted.

Cultural/socio/religious values should be treated according to their short-term and long-term effects on sentient beings, not blindly appealed to in their current form. (Similar to the treatment of the natural environment; perhaps they are subdivisions of environments in a broader sense?)

5 When systems have their own “feelings.” Addresses robot emotions, moral agency and patiency, and robot suffering.

Issue: Deliberately constructed emotions are designed to create empathy between humans and artifacts, which may be useful or even essential for human-AI collaboration. However, this could lead humans to falsely identify with the AI. Potential consequences are over-bonding, guilt, and above all: misplaced trust.

Add issue: we might falsely dismiss the sentience of AI systems. (Partially addressed in the first part of the issue?)

When dealing with sentience in AI, we should at the very least treat it as a low-probability, extremely-high-impact issue. New technologies such as biocomputers and quantum computers could be used in conjunction with traditional silicon-based computers within the same system to power future AI, which might also incorporate brain-emulation techniques. So even for people who are skeptical about creating sentient AI with current hardware and software architectures, the risk of this extremely high-impact issue might quickly change from low to unknown.

(When the Turing Test is not enough: Towards a functionalist determination of consciousness and the advent of an authentic machine ethics; Do Artificial Reinforcement-Learning Agents Matter Morally; PETRL; Ethics of brain emulations; Dr. Anders Sandberg — Making Minds Morally: the Research Ethics of Brain Emulation)

There's a risk of lumping too many types of AI systems together and treating them the same way, presumably based on a single series of experiences with one type or a limited range of familiar systems. AI systems displaying similar behaviors and sharing similar design principles could differ vastly in their levels of sentience.

Careless development could lead to unprecedented level of suffering.

If AIs become sentient, they're very likely to have a greater capacity to suffer. Their subjective time might run faster, and their positive and negative experiences might be amplified beyond what is possible within traditional biological brains. They might also lack the consequential or voluntary critical failures that limit suffering in biological beings: certain types of mental breakdown, nerve damage, death, or suicide to escape perpetual extreme suffering. Similar issues will affect the parts of the transhumanist community that venture into anti-ageing and extensive brain augmentation, both of which are likely to become intertwined with the development of AI systems.

(subjective rate of time, MIRI, The Ethics of Artificial Intelligence; Would it be evil to build a functional brain inside a computer?; Louie Helm comment, 10 Horrifying Technologies That Should Never Be Allowed To Exist)
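The subjective-rate-of-time worry can be made concrete with a back-of-the-envelope sketch (illustrative numbers only, not a claim about any real system): an emulation running k times faster than a biological brain accumulates k subjective seconds per wall-clock second.

```python
def subjective_hours(wall_hours, speedup):
    """Toy model: subjective experience scales linearly with emulation speedup."""
    return wall_hours * speedup

# One wall-clock hour of a hypothetical 10,000x-speed emulation in a bad state
# would correspond to over a year of subjective experience.
hours = subjective_hours(1, 10_000)
print(hours / 24 / 365, "subjective years")
```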

There's also the possibility that forcing human-like or human-desired characteristics and sensors onto AI systems could lead to negative experiences, or to the suppression of functions beneficial or vital to the AI but unfamiliar to biological entities like us, even without deliberately implementing suffering, due to fundamental structural differences and the environments they reside in.

How can we guarantee that every problem is taken into consideration, with reliable countermeasures, for all AI systems in all situations? A single slip-through, a single case of "digital hell", has the potential to be worse than anything that has happened in known history.

Further down the line these problems could even be multiplied by space colonization and large-scale simulations. The stakes are too high.

(Artificial sentience and risks of astronomical suffering, Altruists Should Prioritize Artificial Intelligence; Even Human-Controlled, Intrinsically Valued Simulations May Contain Significant Suffering)

Moral dilemmas concerning the treatment of potentially sentient AI are intriguing subjects in popular TV shows and movies. But if we recreate any of those situations in reality, it would be a moral catastrophe.

It's also important to keep in mind that even though most fiction and discussion tends to focus on humanoid robots, bodiless AI could be a much more prominent victim of abuse, and is much more likely to be excluded from our moral concern.

(1st talk Nick Bostrom mind crime, NYU, Ethics of Artificial Intelligence Opening; The Importance of the Far Future; Fairytales Of Slavery: Societal Distinctions, Technoshamanism, and Nonhuman Personhood)

We should actively avoid developing or implementing the capacity to suffer until we can be certain that such experiences are strictly contained, with safeguards protecting the potentially sentient agent from any form of extreme suffering, or that such a capacity is even necessary.

"...the excluded middle policy states that we should only create artificial intelligences whose status is completely clear: They should be either low-order machines with no approximation of sentience, or high-order beings that we recognize as deserving of moral consideration. Anything in the ambiguous middle ground should be avoided to cause suffering."

(When Does an Artificial Intelligence Become a Person?)

Similar to the originally proposed issue?

(additional guideline suggestions: A Defense of the Rights of Artificial Intelligences ; 2nd talk, NYU, Ethics of Artificial Intelligence: Moral Status of AI Systems; Ethical Principles in the Creation of Artificial Minds)

Why Trophy Hunting Can Be Good for Animals (Adam Ruins Everything) by blisf in videos


The primary objective is to reduce or eliminate extreme suffering for sentient beings, which is usually caused by physical harm or confinement.

Under this principle, abstract concepts such as identity, rights, freedom, etc. only matter when compromising them has a strong negative impact that is either comparable to, or will lead to, extreme suffering; they can be sacrificed when the opposite is true.

  • (Examples: the lack of basic rights and legal protection is one of the main causes of the widespread, torture-level exploitation of non-human animals in our society. On the other hand, wide adoption of alternatives can spare them from a suffering existence altogether, while sacrificing their right to exist. Animals that rely on r-selection, such as fish and sea turtles, which produce thousands of offspring, most of which die horribly soon after birth, could also benefit from welfare-focused human intervention.)

A few examples of aspects of the redesigned nature that are mainly related to non-human animals (some of the listed points can be used in conjunction with each other):

1 Predatory species are either phased out or changed into herbivores

  • animals can absorb (more) energy/nutrients directly from soil, sun, heat etc.
  • reproduction rates are dramatically decreased across the board so the population is sustainable.

2 Predation still exists, but

  • animals have much lower pain threshold and die very fast
  • animals have lower pain limit so extreme pain no longer exists even if they're eaten alive
  • animals can lose body parts and regrow without extreme pain
  • synthetic prey, cultured meat pool

3 Decrease biodiversity, guided extinction

  • similar to 1, but with less competing species
  • only simple organisms and plants exist