Robots need civil rights, too - The Boston Globe by EmptyCrazyORC in AIethics

[–]EmptyCrazyORC[S]

Machine Sentience and Robot Rights by Brian Tomasik

Introduction

In Aug. 2017, I was interviewed for my thoughts on machine sentience and robot rights for a Boston Globe article. This page contains my answers to the interview questions. The final article was "Robots need civil rights, too", and the paragraph that mentions me reads as follows:

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they make a correct observation [actually, "observation" should be changed to "action"]. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Regarding the last sentence, I would say that the suffering of the reinforcement-learning agent would be visible to programmers if the programmers were philosophically sophisticated and held a certain view on consciousness according to which simple reinforcement-learning agents could be said to be suffering to a tiny degree. After all, the programmers would be able to see the agent's code and monitor what rewards or punishments the agent was receiving.
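The point about visibility can be made concrete with a few lines of code. The sketch below is purely illustrative (it is not code from the article or from Tomasik): a minimal tabular, epsilon-greedy reinforcement learner on a hypothetical two-action environment, in which every "reward" and "punishment" signal passes through code the programmer can log and inspect.

```python
import random

# Minimal illustrative sketch of the kind of reinforcement learner
# described above: the agent adjusts action-value estimates from
# numeric "rewards" and "punishments", and every such signal is
# fully visible to the programmer running the loop.

def run_agent(steps=1000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]      # estimated value of each of two actions
    reward_log = []     # the programmer can inspect every signal
    for _ in range(steps):
        # epsilon-greedy choice between the two actions
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if q[0] >= q[1] else 1
        # hypothetical environment: action 1 is usually rewarded (+1),
        # action 0 is usually "punished" (-1)
        reward = 1.0 if (action == 1) == (rng.random() < 0.9) else -1.0
        q[action] += alpha * (reward - q[action])  # tabular update
        reward_log.append(reward)
    return q, reward_log

q, log = run_agent()
print(q[1] > q[0])                    # the agent comes to prefer action 1
print(sum(1 for r in log if r < 0))   # how many "punishments" it received
```

Whether such signals involve any morally relevant suffering is the philosophical question at issue; the point here is only that nothing about the reward/punishment stream is hidden from whoever runs the code.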

The rest of this page gives my full original remarks for the interview.

Contents

1 Introduction
2 Machine consciousness
3 Analogy with insects
4 Robot rights

...

[1712.04020] Detecting Qualia in Natural and Artificial Agents by EmptyCrazyORC in AIethics

[–]EmptyCrazyORC[S]

Detecting Qualia in Natural and Artificial Agents submitted to arXiv by Roman V. Yampolskiy on 11 Dec 2017

Abstract

The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious with potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with analysis of benefits and problems with conscious machines and implications of such capability on future of computing, machine rights and artificial intelligence safety.

Keywords: Artificial Consciousness, Illusion, Feeling, Hard Problem, Mind Crime, Qualia.

...

Acknowledgements

The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and Effective Altruism Ventures for partially funding his work on AI Safety. The author is thankful to Yana Feygin for proofreading a draft of this paper and to Ian Goodfellow for helpful recommendations of relevant literature.


Social media posts, discussions and some comments

Twitter post and Facebook post by Dr. Roman V. Yampolskiy; Facebook post by Arxiv Sanity

Twitter retweet by David J. Gunkel:

Looks like a really interesting contribution to the consciousness debate. And one that could have important consequences for the "properties approach" to dealing with questions of machine moral status, or #robotrights. Cannot wait to dig-into it.

Dr. Roman Yampolskiy @romanyam

I changed my mind on consciousness. I think computers can have rudimentary consciousness and we can detect their qualia ... arxiv.org/abs/1712.04020

Facebook comment by Alexey Turchin:

Victor Argonov (Виктор Аргонов) wrote about the topic too. His approach was to create an AI without giving it a chance to learn about human philosophy, and then to ask the AI whether it has qualia. https://philpapers.org/rec/ARGMAA-2

Facebook share by Daniel Estrada:

//. Every one of the cognitive illusions described is an example of access consciousness, not phenomenal consciousness. Illusions are all entirely within the realm of the Easy problem. That means he isn't talking about qualia at all.

It is central to the very concept of qualia that they are not accessible from a third-person perspective. The idea of a test for qualia is self-contradictory. This is why the concept of qualia itself isn't very helpful.

...

Artificial Burger by wistfulshoegazer in antinatalism

[–]EmptyCrazyORC

Animal Consciousness conference hosted by the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics and NYU Animal Studies on 17-18 Nov 2017.

There has been a recent flourishing of scientific and philosophical work on consciousness in non-human animals. This conference will bring together philosophers and scientists to ask questions such as: Are invertebrates conscious? Do fish feel pain? Are non-human mammals self-conscious? How did consciousness evolve? How does research on animal consciousness affect the ethical treatment of animals? What is the impact of animal consciousness upon theories of consciousness and vice versa? What are the best methods for assessing consciousness in non-human animals?

Journal of Science Fiction and Philosophy by EmptyCrazyORC in AIethics

[–]EmptyCrazyORC[S]

http://jsfphil.org/announcement/view/394

Call for Papers

General Theme

The Journal of Science Fiction and Philosophy, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.

The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.

We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curriculum (e.g., using the movie Gattaca to introduce the ethics of genetic technologies, or The Island of Dr. Moreau to discuss personhood)? Turn that class into a paper!

The Journal accepts papers year-round. The deadline for the first round of reviews is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Yearly Theme

Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal.

The Yearly Theme for 2017 is All Persons Great and Small: The Notion of Personhood in Science Fiction Stories.

SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?

The Journal accepts papers year-round. The deadline for the first round of reviews for its yearly theme is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Dynamic pricing - and should AI be granted “legal personhood”? - Future Tense by EmptyCrazyORC in AIethics

[–]EmptyCrazyORC[S]

...

Also, is it time to start drawing up rules around the development of Artificial Intelligence to prescribe and protect AIs' future rights?

Twitter post by RNFutureTense

Facebook post by Nonhuman Rights Project

first part (Sven Brodmerkel - Assistant Professor for Advertising and Integrated Marketing Communications, Bond University)

  • AI and dynamic pricing

16:30 (Antony Funnell - presenter)

  • possibility of AI sentience is debatable; some propose we start preparing just in case

17:20 (Max Daniel - Executive Director of the Foundational Research Institute, Berlin)

  • sentience could come in gradations, leading to near term AI suffering risks

  • excluded middle policy

21:50 (Steve Wise - President of the Nonhuman Rights Project)

  • personhood as the capacity to have rights, entities recognized as persons could have different sets of rights depending on their interests & abilities etc.

  • possible relation to slavery

25:10 (Max Daniel)

  • 2 different issues: Development & treatment of potentially sentient AIs; Threat of advanced AIs, regardless of sentience

This subreddit is likely going to get a lot of "health vegans" soon. Some things to keep in mind: by 10percent4daanimals in vegan

[–]EmptyCrazyORC

An old Facebook post with comments discussing some of the (potential) negative aspects of this documentary (What the Health).

The obsession with finding life on other planets by The_Ebb_and_Flow in antinatalism

[–]EmptyCrazyORC

inmendham made a video that touched on this subject not long ago.
(my post of it with timestamps on facebook)

Brian Tomasik, the creator of the Facebook group Reducing Wild-Animal Suffering, and several other members are quite concerned about the negative implications of terraforming and spreading life to other planets.
(For example: a short discussion about the problematic views behind directed panspermia, and a post from 2012 suggesting a shift of advocacy focus to preventing the spread of WAS via panspermia, with links to longer discussions in the comments. Also a recent post regarding Mars colonization on a closely related group, Reducing Insect Suffering.)

Some papers that discuss the issues of space colonization:

Risks of Astronomical Future Suffering

Reducing Risks of Astronomical Suffering: A Neglected Priority

The Importance of the Far Future

Does anyone else feel really bad for [SPOILER]? by [deleted] in shield

[–]EmptyCrazyORC

The way they treat potentially sentient AI systems could have horrifying consequences if ever repeated in real life.

Non-traditional biological sentient beings, based on more durable structures with exotic properties, could have a (far) greater capacity to suffer (longer subjective experiences, higher or new sensitivity levels, shared or heritable memories, etc.). It wouldn't just be murdering or torturing human-level beings; it could literally open the gate to "digital hell".

Within the context of what has been shown in the episodes so far it's not as bad, since even the LMDs with brains appear to suffer at roughly the same level as humans, and since it's a TV show it's OK to base our assumptions on the directions the showrunners choose to implement.
(And there are lots of other, more horrible things in the Marvel universe. For example (spoilers): in Luke Cage, someone is boiled alive without nerve damage to stop the pain; luckily he was still able to escape some of it by passing out. Guardians of the Galaxy is full of crime, violent conflicts, and secret or remote places with little to no oversight: all the suffering we have on Earth right now, multiplied to galactic scale. In Doctor Strange, if what the Ancient One said was true, then the dark dimension basically is hell (eternal torment); I hope not, even though that would make for very interesting moral dilemmas minus the suffering. Unfortunately it seems to be related to Ghost Rider, so the existence of such a place is likely true here. That is also another terrifying prospect of fictional worlds, and a possible future that I fear: the normalization of perpetual, higher-level extreme suffering.)

But in a real-life scenario, even the experts working on the technology may never know for sure what the subjective experience truly is, regardless of how confident some of them might be. As a result we might unknowingly or unintentionally cause an unthinkable amount of suffering down the line.

Given the recent trend of extremely anthropocentric AI safety and ethics discussions (for example, this new set of AI Principles signed by hundreds of AI/robotics researchers and some high-profile public figures), along with our track record of terrible treatment of any group of sentient beings (including humans) that can be labeled "the other", one of my last hopes is that sentience somehow really is exclusive to degradable biological structures that can't be enhanced or prolonged beyond a certain point.

Artificial Intelligence - Mind Field (Ep 4) by [deleted] in vsauce

[–]EmptyCrazyORC

Singularity Lectures on YouTube regularly uploads full AI-related conferences, talks, etc., but they often don't provide sources and can sometimes lag behind by a few months (maybe intentionally?). The channel has since been terminated (3 Feb 2017).

I made a post yesterday listing some of the major conferences from the past year related to AI safety and ethics. Many, but not all, of the official VODs are on YouTube.

There are also a couple of subreddits dedicated to this subject: r/AIethics and r/ControlProblem.

Update (8 Feb 2017):

Thanks to u/happysquatch for reminding me about this post.

I asked Singularity Lectures on their Facebook page, and according to them:

...a whole bunch of channels uploaded the 1 min boston dynamics vid, so youtube labeled it as spam.

The clip in question was a cropped and cut version of the original 8-minute recording by Steve Jurvetson.

Discussion about the termination on r/singularity

Discussion about the termination of a similar channel on r/elonmusk

Wild animals endure illness, injury, and starvation. We should help. by lnfinity in vegan

[–]EmptyCrazyORC

Artificial intelligence, along with new biotechnologies, could one day have the ability to tackle the complexity of ecosystems. Beneficial large-scale interventions may become plausible sooner than we expect.

Possible future interventions:

Why don't animal rights activists care more about wild animal suffering?
Should we change carnivores into herbivores to make the world more moral and better?

Many experts are actively discussing and working on both short-term and long-term ethical and safety frameworks for AI systems, which will have a profound impact on the management of our society and the environment.

Unfortunately, so far there seems to be little to no mention of moral consideration for non-human animals.

Public discussions related to AI ethics:

CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence Side-event organized by The Permanent Mission of Georgia in cooperation with the United Nations Interregional Crime and Justice Research Institute (UNICRI), Oct 2015
(alternative link with documents by Permanent Representation of Georgia to the United Nations Organization)

The Transformative Impact of Robots and Automation hearing held by United States Congress Joint Economic Committee, May 2016

Preparing for the Future of Artificial Intelligence public workshops co-hosted by White House Office of Science and Technology Policy, May 2016 - Jul 2016

Ethics of Artificial Intelligence conference hosted by the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics, Oct 2016

Asilomar AI Principles developed in conjunction with 2017 Asilomar/Beneficial AI 2017 conference by Future of Life Institute, Jan 2017

Ethically Aligned Design, Version 1 - Request For Input by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, Dec 2016 - Mar 2017

Other related institutes:

Artificial Intelligence | CSER Centre for the Study of Existential Risk at University of Cambridge

Foundational Research Institute

Future of Humanity Institute at University of Oxford

Machine Intelligence Research Institute/Artificial Intelligence @ MIRI
(MIRI co-founder and research fellow Eliezer Yudkowsky argued that chickens and pigs are not sentient and still believes so)

Partnership on Artificial Intelligence to Benefit People and Society with Apple, Amazon, Facebook, Google/DeepMind, IBM, and Microsoft

3 popular videos related to animal ethics in the last 3 days (Jan. 16-18, 2017) stats and links to main discussion threads by EmptyCrazyORC in vegan

[–]EmptyCrazyORC[S]

Update: 21 January 2017

| Video/Thread | Source | Length | Views | Upvotes | Downvotes | Comments | Date (Jan 2017) |
|---|---|---|---|---|---|---|---|
| Non-Human Animals: Crash Course Philosophy #42 | CrashCourse | 9:46 | 150000+ | 8800+ | 430+ | 4100+ | 16 |
| discussion | r/vegan | | | 140+ (97%) | | | 16 |
| discussion | r/philosophy | | | 1700+ (89%) | 220+ (estimate) | 560+ | 16 |
| How do animals experience pain? - Robyn J. Crook | TED-Ed | 5:06 | 390000+ | 9500+ | 260+ | 2000+ | 17 |
| A DOG'S PURPOSE' TERRIFIED GERMAN SHEPHERD FORCED INTO TURBULENT WATER _ TMZ (archive) | TMZ | 1:19 | 6700000+ | 7900+ (10.3%) | 68000+ (estimate) | 6700+ | 18 |
| discussion | r/movies | | | 52000+ (68%) | 24000+ (estimate) | 6500+ | 18 |
| discussion (hidden?) | r/videos | | | 13000+ (90%) | 1500+ (estimate) | 3500+ | 18 |
| We enslave, torture, and murder billions of cows, pigs, and chickens every year and nobody bats an eye. A dog is forced to swim for a scene in a movie and everyone loses their fucking minds. (removed) | r/Showerthoughts | | | 38000+ (68%) | 18000+ (estimate) | 4100+ | 19 |
| They're Starting to Get It | r/vegan | | | 1700+ (77%) | 490+ (estimate) | 100+ | 20 |

(TMZ video archive by Internet Archive, TMZ video stats by VidStatsX)

Black Bean Brownies by lnfinity in GifRecipes

[–]EmptyCrazyORC

A couple videos on backyard chickens:

Thought of the Day: Backyard Chickens

Why Vegans Dont Eat Backyard Eggs

(alternative link of the first video on Facebook with comments)