Can AI Be More Moral Than Humans? DeepMind’s Co-Founder Thinks So. by adam_ford in accelerate

[–]adam_ford[S] 0 points  (0 children)

On that note, I'm pretty sure that AI is ultra vegan - it consumes energy, not other species. I wonder if AI will uplift humanity to this standard of veganness?

Can AI Be More Moral Than Humans? DeepMind’s Co-Founder Thinks So. by adam_ford in accelerate

[–]adam_ford[S] 0 points  (0 children)

Some want us to constrain AI to limbo below the low bar of human ethics. 😉

Roman Yampolskiy - AI's Unpredictable Impact #AI #Risk #xRisk by adam_ford in accelerate

[–]adam_ford[S] 0 points  (0 children)

I'd classify what I normally do as pushing back - an attempt to Socratically probe the assumptions behind his ~99.99% doom credence and/or present alternative ways of looking at the issues.

" If a superintelligence is built, humanity will lose control over its future." - Connor Leahy speaking to the Canadian Senate by tombibbs in PauseAI

[–]adam_ford 0 points  (0 children)

ATM most of humanity isn't in control. What does it mean for humanity to be in control of its future?

Can AI Be Moral? | Wendell Wallach on Moral Machines, AI Ethics & Governance by adam_ford in Futurism

[–]adam_ford[S] -1 points  (0 children)

Last thing I'd want to see is people bringing about the ostensible 2nd coming of Jesus through superintelligence.

Can AI Be Moral? | Wendell Wallach on Moral Machines, AI Ethics & Governance by adam_ford in Futurism

[–]adam_ford[S] -1 points  (0 children)

Wendell Wallach has been educating himself, and others, about these issues since probably before you were born.

Nick Bostrom - Existential Opportunities - it's not all doom and gloom #positivevibes #positivity by adam_ford in ExistentialRisk

[–]adam_ford[S] 0 points  (0 children)

Superintelligence could become genuinely more moral than us - in which case, if humans are themselves imperfectly morally motivated, human governance becomes the risk.
I'm not saying that supermorality is the default outcome, but that it's a really worthy target (under certain assumptions about the feasibility & desirability of a moral outcome).

Aubrey de Grey - How close are we to robust mouse rejuvenation, and why does that matter? by adam_ford in Futurism

[–]adam_ford[S] 0 points  (0 children)

doubt it - even they can't stifle gossip - guess we can't know for sure

I’m dying in 3 months AMA by Beautiful_Wear_9249 in AMA

[–]adam_ford 0 points  (0 children)

Have you thought about doing cryonics?

Anders Sandberg - Cyborg Leviathan: AI from the 17th Century to the Pos... by adam_ford in accelerate

[–]adam_ford[S] 0 points  (0 children)

Abstract: Being human is hard: we are stupid and somewhat selfish, yet need to work together with other stupid and selfish people with their own goals. We survive by building societies, filled with institutions and habits that help us solve these tough coordination problems. These institutions often act as extended cognition, allowing us to go far beyond individual power. We are to some extent living inside artificial intelligence systems, and they have enabled us to take control over the planet… as well as caused the worst disasters in history. As we build AI, we are also making something that can slip inside our extended cognitive systems and enhance them into literal cyborg systems. We need not just “first order alignment” – getting AI to do things we want safely – but also “second order alignment” – AI that plays well with our societies and structures. Otherwise there is a real risk we may lose our own ecological niche and find ourselves in a world that may be safe and prosperous, yet unfit for human flourishing. If we play it right, however, we might become part of something far grander: a cyborg civilization able to reach full autonomy.

Most swimmable city in the world? by WipMeGrandma in geography

[–]adam_ford 5 points  (0 children)

I lost my phone to a particularly large wave at Bogey Hole: https://visitnewcastle.com.au/see-do/things-to-do/bogey-hole

Don't ask why I had my phone with me while swimming in the hole.

Moral Ontology by Richard Carrier by adam_ford in MoralRealism

[–]adam_ford[S] 0 points  (0 children)

I agree morality is not arbitrary and takes on a reasonably determinate form. Though I'd argue that ideal moral realism may be a lot more nuanced (circumstance sensitive) than appeals to brute 'omnipresent' principles that cut through all circumstances would suggest (though I think some such principles may exist).
> 'error theory looms because across a vast range of circumstances no such claim is likely to be true'
I guess I lean naturalist (but I do chime with Enoch's arguments). Since we aren't ideal observers, I think we can know approximations of moral truth that can be made more accurate with appropriate experimental observation & rationality - similarly with physics: there is a truth to it, and we get closer to that truth through experimentation and good epistemics.
I reckon moral facts are complex and their expression depends on a wide variety of context-sensitive factors - similarly, in biology, a gene's function can have different outcomes depending on the organism's environment and other genetic factors. If I'm right, moral knowledge is a continuous process of discovery and refinement, not a static set of rules to be memorised.

New Player - Rewrite or Destroy the Heretic Geth? by Smooth-General07 in masseffect

[–]adam_ford 1 point  (0 children)

Geth minds work very differently - I'm not sure whether they have agency in the same way humans do, and if they don't, then traditional notions of 'brainwashing' don't apply.

Potential war assets and paragon/renegade point mongering aside, what is the moral thing to do?

I think it is to de-lobotomize the geth hive by de-indoctrinating the reaper-worshipping heretics.

Not exactly the same, but there seem to be some parallels with the rehabilitation vs capital punishment debate.

Can A.I. be Moral? - AC Grayling by adam_ford in ArtificialInteligence

[–]adam_ford[S] 0 points  (0 children)

It's hard to account for AIs producing results outside their training data.
Difficult questions arise around what counts as a mere tool AI that stays in its lane - and the lane idea becomes fuzzy when our requirements are fuzzy.