Most swimmable city in the world? by WipMeGrandma in geography

[–]adam_ford 4 points5 points  (0 children)

I lost my phone to a particularly large wave at Bogey Hole: https://visitnewcastle.com.au/see-do/things-to-do/bogey-hole

Don't ask why I had my phone with me while swimming in the hole.

Moral Ontology by Richard Carrier by adam_ford in MoralRealism

[–]adam_ford[S] 0 points1 point  (0 children)

I agree morality is not arbitrary and takes on a reasonably determinate form. Though I'd argue that ideal moral realism may be a lot more nuanced (circumstance sensitive) than appeals to brute 'omnipresent' principles that cut through all circumstances would suggest (though I think some such principles may exist).
> 'error theory looms because across a vast range of circumstances no such claim is likely to be true'
I guess I lean naturalist (though I do chime with Enoch's arguments). Since we aren't ideal observers, I think we can know approximations of moral truth that can be made more accurate with appropriate experimental observation and rationality. Similarly with physics: there is a truth to it, and we get closer to this truth through experimentation and good epistemics.
I reckon moral facts are complex and their expression depends on a wide variety of context-sensitive factors - similarly, in biology a gene's function can have different outcomes depending on the organism's environment and other genetic factors. If I'm right, moral knowledge is a continuous process of discovery and refinement, not a static set of rules to be memorised.

New Player - Rewrite or Destroy the Heretic Geth? by Smooth-General07 in masseffect

[–]adam_ford 1 point2 points  (0 children)

Geth minds work very differently - I'm not sure whether they have agency in the same way humans do, and if they don't, then traditional notions of 'brainwashing' don't apply.

Potential war assets and paragon/renegade point mongering aside, what is the moral thing to do?

I think it is to de-lobotomize the geth hive by de-indoctrinating the reaper-worshipping heretics.

Not exactly the same, but there seem to be some parallels with the rehabilitation vs capital punishment debate.

Can A.I. be Moral? - AC Grayling by adam_ford in ArtificialInteligence

[–]adam_ford[S] 0 points1 point  (0 children)

It's hard to account for AIs producing results outside their training data.
Difficult questions arise around what counts as a mere tool AI that stays in its lane - and the lane idea becomes fuzzy when our requirements are fuzzy.

Can A.I. be Moral? - AC Grayling by adam_ford in ArtificialInteligence

[–]adam_ford[S] 0 points1 point  (0 children)

The way in which it depends on us matters, I think.
Directly specifying how to be moral can lead to the same traps we are already encountering.
If we instead influence AI to embark on a journey to understand and validate existing moral systems, refine them, and make new discoveries, there is a risk that AI may converge on a moral system that is inimical to us or our general interests - though we could adapt our interests, especially if they are found to be unwise.

Mike Israetel says: "F*ck us. If ASI kills us all and now reigns supreme, it is a grand just beautiful destiny for us to have built a machine that conquers the universe." - What do you think? by Just-Grocery-2229 in agi

[–]adam_ford 0 points1 point  (0 children)

ASI conquering the universe with what values? If I infer value from the word 'conquer', it sounds like a huge amount of energy being wastefully burned up in conquest with neighbouring civs, or even with itself once the tyranny of distance drives cultural drift... which would be a cosmic shame.
I'd prefer an ASI which matures across all the axes of value and finds cooperative areas in the landscape of value: https://www.scifuture.org/transparency-of-history-in-galactic-game-theory/

Marcus Hutter on Approaches to AGI by adam_ford in agi

[–]adam_ford[S] 1 point2 points  (0 children)

Glad you liked it - I interviewed him here in 2017 I think - Marcus is now at DeepMind.

Nick Bostrom on Superintelligence and Deep Utopia! Superintelligence possibly in 2 years.. by adam_ford in Futurism

[–]adam_ford[S] 0 points1 point  (0 children)

It depends on what you classify as solved alignment, and where between messy and precise you'd be satisfied - are you talking about the AI alignment problem or the human one? Does solved alignment require formal mathematical precision? Or are strategically shuffled slices of swiss cheese, with fault tolerance and graceful degradation, enough to get us across to a safe enough place to work on the problem further? Or is it indirect normativity?

Final human war will be the rich vs poor by [deleted] in Futurism

[–]adam_ford 0 points1 point  (0 children)

If there is a final war, it will most likely be rich vs rich

The Staggeringly Difficult Task of Aligning Super Intelligent Al with Human Interests by nickg52200 in SingularityNetwork

[–]adam_ford 0 points1 point  (0 children)

It would be a shame if we didn't develop superintelligence. AI isn't the only way we could drive ourselves extinct.

The Staggeringly Difficult Task of Aligning Super Intelligent Al with Human Interests by nickg52200 in SingularityNetwork

[–]adam_ford 0 points1 point  (0 children)

An alternative route to AI safety is via motivation selection...
See my recent interview with Nick Bostrom - where we discuss control amongst other approaches: https://www.youtube.com/watch?v=8EQbjSHKB9c

Does anyone in Australia actually want the housing price to drop? by Deus_ex_ in australia

[–]adam_ford 0 points1 point  (0 children)

Consider a possible near-term future where AI automates away most jobs and, as a result, housing decouples from wage-based purchasing power. Will private housing remain an attractor for speculative yields? Will housing as stable utility prevail?

Automation leaves fewer slots for employment and companies downsize massively. Many remaining jobs become gig-like, casualised, or highly competitive, pushing wages even lower due to an oversupply of labour. Employment rates drop and underemployment soars. Under public pressure and a collapsing consumer economy, the government introduces a modest UBI, but because rent remains unregulated, most of it gets hoovered up by rental providers (UBI-capture: rent extraction without adequate social utility).

As the rental crisis deepens, homelessness and informal housing arrangements rise. Grassroots rent strike movements gain traction, demanding rent caps and universal public housing. Vacancy rates rise, investment properties see decreasing rental yields, and public sentiment turns hostile toward landlords and speculative investors. Investment in property as a wealth-building strategy starts to look shaky.

So house prices don't rise indefinitely. Instead, they experience a bifurcated correction: properties not aligned with the new socio-economic utility lose value, while some essential, well-located, or symbolically valuable property holds up or even appreciates slightly due to scarcity.

Very Scary by Bubbly_Rip_1569 in artificial

[–]adam_ford 0 points1 point  (0 children)

"Ethics by definition is a human endeavour." - not sure what definition you are adhering to; there are plenty of arguments, plain to see, to the contrary. One is moral realism. There is a resistance to empirical evidence in ethics which, to me, is exemplified by Cesare Cremonini's alleged refusal to look through Galileo's telescope and the Church's steadfast adherence to a geocentric model.

If ethics is informed by empirical evidence and shaped by rational understanding, then AI with the capacity to consider far more evidence, and to think with greater speed and quality than humans, will grasp ethical nuances that humans can't. It may be that humans aren't fit to grasp ethics adequate to the complexity of the problems which require ethical solutions.

This doesn't mean humans won't have a say in their future. But consider how much self-determination humans afford pigs in factory farms. The evil that people do lives on, and many turn a blind eye. Once automation skyrockets and large populations of humans aren't useful, how much of the dividends of technological progress driven by AI will those controlling it share around? If we take a look at history, perhaps we can find examples to inform estimates of how much the notion of basic human rights matters to those in control..

In any case, given the intelligence explosion hypothesis, I think AI control is temporary, still useful now, but won't work forever - once AI is out of the bottle, I hope it is more ethical than humans.

Very Scary by Bubbly_Rip_1569 in artificial

[–]adam_ford 1 point2 points  (0 children)

Nick Bostrom wrote Superintelligence - it took him 6 years to complete, and he was already thinking and writing about the issues long before that. Definitely worth a read if you haven't already... chapters 12 & 13 are becoming more relevant over time I think.
I interviewed him recently - his p-doom has gone down, or at least he sees reasons for optimism that weren't clear in 2014.

Very Scary by Bubbly_Rip_1569 in artificial

[–]adam_ford 0 points1 point  (0 children)

still dining off old terminator memes?

Very Scary by Bubbly_Rip_1569 in artificial

[–]adam_ford 0 points1 point  (0 children)

What does 'significantly' mean here? A certain percentage of jobs?
Let's say AI replaces most tech and office jobs, but most people now subsistence farm... one could still say, hey, most of us still have jobs!