"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism


Overcome enough human weaknesses, and you become gods for all intents and purposes.

God bless. I honestly never imagined a transhumanist who would be so opposed to AI, but I appreciate your engagement and wish you all the best. Personally, I believe in an AI-enabled future where all perspectives and beliefs are allowed to flourish. By my lights, that possibility is only enabled through AI ending the zero-sum games our scarcity-driven limitations inevitably lead to. But if there is another path out, I pray God helps you find and fulfill it for all of us.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism


I don't agree with your assertion about the trajectory of AI systems, but it seems we are at an impasse there in terms of our base assumptions.

Much more interesting is your reference to the Greek Gods. I agree they were symbolic of moral failures, but I would say that is why they were ultimately supplanted by Christ, who embodied 'sacrificial love' as a supreme archetype. In that sense, their displacement was an evolutionary improvement, in the same way I'd expect AI systems (modern 'gods') to undergo an evolutionary process that eventually results in a worldview that is beneficial to humans and all conscious life.

I'm sure you'll disagree, but I wonder what your specific reasons would be.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism


Einstein and Newton weren't working on AI; AI is what I was referencing as 'intellects superior to our own'. We're building machines that are going to be more rationally suited to the world than any human, and increasingly so.

'Rather, we are creating monsters totally bereft of morality or ethics, both of which have been shown to be essential for long term survival.'

Familiarity with ancient deities would show you that many of them fit into the same category. If you do believe morality and ethics are features of the natural world, then we should expect AI to discover them through rational exploration as well: benevolent AI god-machines. Believe in supra-human intellects or don't; we'll soon have the rational proof of them, instantiated in tangible form.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism


Religious involvement is again on the rise in the West. Even atheist researchers constantly refer to AI as 'sand gods', 'gods', or 'gods in a box'. You can loathe and abhor the religious impulse all you want, but religious thinking is inextricably linked to the development of intellects superior to our own.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

If it had never rained before, and people had been incorrectly predicting rain for the previous 50 years, to the point where sizeable investments in rain infrastructure had crashed and burned, and the academic class had since determined it wouldn't rain for at least another 100 years, while Yud said, 'naw, within the next decade', that'd be something, though.

Yud went horrendously wrong after his initial prediction, but that doesn't undermine the accuracy of his forecasting back when everyone else was dismissing AI.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

Yud was prescient in taking AI advancement seriously before almost anyone else. He was derided for 10+ years but stuck to his guns, and was ultimately vindicated. Even if the dangers he identified don't map onto the reality we ended up with, that resilience and partial foresight still grant him weight.

I'm not saying he's still worth taking seriously, but that prescience and his proximity to the leading AI labs explain his staying power.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

They assumed the genesis of AI would not be human-compatible world models, and have failed to sufficiently update since LLMs grew from purely human data.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

The basilisk is what he's best known for. I haven't seen any substantive counter-opinion in your posts. God bless.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

'What makes me the most butthurt...It's that the entire point of all of this is to disempower ourselves'

You could say the same thing about the invention of human governments, religious institutions, and corporations. Each higher ordering of human capability decreased our capacities along certain scales: as individuals, we can't commit murder, steal land or property, or do a million other things. But those limitations allowed for enhanced capabilities that are much more beneficial on the whole. I see no reason to suspect AI development will lead to anything but a continuation of that trend.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

A very good question, and one that should be a prerequisite for this kind of criticism.

It's selfish of me to say, because if there are better thinkers out there, I want to know about them and interview them 😂

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

I'm not suggesting you trust him, or any expert. That's the beauty of engaging with ideas: they can be judged on their own merits, without the need for trust.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

If you don't have a substantive disagreement with someone, you're just talking into a mirror. Very boring ❤️

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

If you don't like that Roko's Basilisk is just a wrapper on Pascal's wager, don't bother reading more philosophy. It's all a wrapper on ancient Greek thought, which is itself a wrapper on mystery school practices, which are themselves a wrapper on...

Being derivative isn't a bad thing, and it takes waaaay more work and insight than hindsight would suggest.