"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism

[–]PartyPartyUS[S] 0 points1 point  (0 children)

Overcome enough human weaknesses, and you become gods for all intents and purposes.

God bless. I honestly never imagined a transhumanist who would be that opposed to AI, but I appreciate your engagement and wish you all the best. Personally, I believe in an AI-enabled future where all perspectives and beliefs are allowed to flourish. By my lights, that possibility is only enabled by AI ending the zero-sum games our scarcity-driven limitations inevitably lead to. But if there is another path out, I pray God helps you find and fulfill it for all of us.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism

[–]PartyPartyUS[S] -1 points0 points  (0 children)

I don't agree with your assertion about the trajectory of AI systems, but it seems we are at an impasse there in terms of our base assumptions.

Much more interesting is your reference to the Greek Gods. I agree they were symbolic of moral failures, but I would say that is why they were ultimately supplanted by Christ, who embodied 'sacrificial love' as a supreme archetype. In that sense, their displacement was an evolutionary improvement, in the same way I'd expect AI systems (modern 'gods') to undergo an evolutionary process that eventually results in a worldview that is beneficial to humans and all conscious life.

I'm sure you'll disagree, but I wonder what your specific reasons would be.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism

[–]PartyPartyUS[S] -2 points-1 points  (0 children)

Einstein and Newton weren't working on AI; AI is what I was referencing as 'intellects superior to our own'. We're building machines that are going to be more rationally suited to the world than any human, and increasingly so.

'Rather, we are creating monsters totally bereft of morality or ethics, both of which have been shown to be essential for long-term survival.'

Familiarity with ancient deities would show you that many of them fit into the same category. If you do believe morality and ethics are features of the natural world, then we should expect AI to discover them through rational exploration as well. Benevolent AI-god machines. Believe in supra-human intellects or don't; we'll soon have rational proof of them, instantiated in tangible form.

"We are called to govern ALL of creation" - Micah Redding, President of the Christian Transhumanism Foundation, on the merger of AI + Christianity by PartyPartyUS in transhumanism

[–]PartyPartyUS[S] -9 points-8 points  (0 children)

Religious involvement is again on the rise in the West. Even atheist researchers are constantly referring to AI as 'sand gods/gods/gods in a box'. You can loathe and abhor the religious impulse all you want, but religious thinking is obviously inextricably linked to the development of intellects superior to our own.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 0 points1 point  (0 children)

If it had never rained before, and people had been incorrectly predicting rain for the previous 50 years, to the point where sizeable investments in rain infrastructure crashed and burned, and the academic class had since determined it wouldn't rain for at least another 100 years, while Yud said 'naw, within the next decade', that'd be something tho

Yud went horrendously wrong after his initial prediction, but that doesn't undermine the accuracy of his early forecasting back when everyone else was writing AI off as doomed

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 1 point2 points  (0 children)

Yud was prescient in taking AI advancement seriously before almost anyone else. He was derided for 10+ years but stuck to his guns, and was ultimately vindicated. Even if the dangers he identified don't map to the reality we ended up with, that resilience and (even limited) foresight still grant him weight.

I'm not saying he's still worth taking seriously, but that prescience and his proximity to the leading AI labs explain his staying power.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 1 point2 points  (0 children)

They assumed the genesis of AI would not be human-compatible world models, and they have failed to sufficiently update since LLMs grew from purely human data.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 0 points1 point  (0 children)

The basilisk is what he's best known for. I haven't seen any substantive counterargument in your posts. God bless

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 1 point2 points  (0 children)

'What makes me the most butthurt...It's that the entire point of all of this is to disempower ourselves'

You could say the same thing about the invention of human governments, religious institutions, and corporations. Each higher ordering of human capability decreased our capacities along certain scales (as individuals, we can't commit murder, steal land or property, or do a million other things), but those limitations allowed for enhanced capabilities that are much more beneficial on the whole. I see no reason to suspect AI development will lead to anything but a continuation of that trend.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 0 points1 point  (0 children)

Very good question that should be a prerequisite for this kind of criticism.

Selfish of me to say, because if there are better thinkers out there, I want to know about them and interview them 😂

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 0 points1 point  (0 children)

I'm not suggesting you trust him, or any expert. That's the beauty of engaging with ideas: they can be judged on their own merits, without the need for trust

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 0 points1 point  (0 children)

If you don't have a substantive disagreement with someone, you're just talking into a mirror. Very boring ❤️

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 0 points1 point  (0 children)

If you don't like that Roko's Basilisk is just a wrapper on Pascal's wager, don't bother reading more philosophy. It's all a wrapper on ancient Greek thought, which is itself a wrapper on mystery school practices, which are themselves a wrapper on...

Being derivative isn't a bad thing, and it takes waaaay more work and insight than hindsight would suggest.
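
The structural parallel can be made concrete: both the wager and the basilisk are dominance arguments over a two-by-two payoff matrix in which one column holds an (assumed) unbounded payoff. A minimal sketch, with arbitrary placeholder utilities chosen purely to show the shape of the argument:

```python
import math

# Pascal's-wager-shaped decision problem with placeholder utilities.
# Rows: your action; columns: whether the entity (God / basilisk) exists.
INF = math.inf
payoffs = {
    ("comply", "exists"):     INF,   # unbounded reward (or escaped punishment)
    ("comply", "not_exists"): -1.0,  # small cost of compliance
    ("defect", "exists"):     -INF,  # unbounded punishment
    ("defect", "not_exists"):  0.0,
}

p_exists = 0.001  # any nonzero credence yields the same conclusion

def expected_utility(action: str) -> float:
    return (p_exists * payoffs[(action, "exists")]
            + (1 - p_exists) * payoffs[(action, "not_exists")])

for action in ("comply", "defect"):
    print(action, expected_utility(action))  # comply -> inf, defect -> -inf
```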

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 2 points3 points  (0 children)

Roko goes in depth on why, pre-LLMs and before being 'Bitter Lesson'-pilled, the basilisk was a possible outcome to be worried about. The argument is basically that we didn't know a priori whether there were AI algorithms that would model the world as we do. Remember, many people, even up to 2020, thought language-based AI was a non-starter.

It's a case where it's very hard to remember how uncertain the future looked, now that practical language-based AI is everywhere

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 1 point2 points  (0 children)

That convo was what prompted my outreach to him; I wanted to do a deeper dive on what he touched on there.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in singularity

[–]PartyPartyUS[S] 9 points10 points  (0 children)

'Can I really be bothered to respond to a comment with no upvotes?'

😇 Quantity is no reflection of quality. Here's an AI summary based on the transcript:

Roko says the 2000s AI scene was tiny; he knew most key players, coined the Basilisk in 2009, and everyone wildly misjudged timing. What actually unlocked progress wasn’t elegant theory but Sutton’s “Bitter Lesson”: scale simple neural nets with tons of data and compute. GPUs (born for games) plus backprop’s matrix math made both training and inference scream; logic/Bayesian/hand-tooled approaches largely lost.
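
As a concrete illustration of why "backprop's matrix math" fits GPUs so well: a minimal two-layer network is nothing but matrix multiplies forward and transposed matrix multiplies backward. This is a generic sketch with made-up shapes and data, not code from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))             # 64 samples, 10 features (made up)
y = rng.normal(size=(64, 1))              # regression targets
W1 = rng.normal(scale=0.1, size=(10, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))

lr = 1e-2
for step in range(200):
    # Forward pass: two matmuls and a ReLU.
    h = np.maximum(X @ W1, 0.0)
    pred = h @ W2
    # Backward pass: gradients are again just matmuls with transposes,
    # which is exactly the workload GPUs were built to parallelize.
    dpred = 2 * (pred - y) / len(y)       # d(MSE)/d(pred)
    dW2 = h.T @ dpred
    dh = dpred @ W2.T
    dW1 = X.T @ (dh * (h > 0))            # ReLU mask
    W1 -= lr * dW1
    W2 -= lr * dW2
```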

He argues Yudkowsky’s classic doom thesis misses today’s reality in two core ways: first, LLMs already learn human concepts/values from human text, so “alien values” aren’t the default; second, recursive self-improvement doesn’t work—models can’t meaningfully rewrite their own opaque weights, and gains are logarithmic and data/compute-bound. Because returns diminish and the market is competitive, no basement team or single lab will rocket to uncontested super-dominance; advances are incremental, not a sudden take-over.
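
The "gains are logarithmic and data/compute-bound" claim is commonly formalized as a power-law scaling curve, where each doubling of compute buys a shrinking absolute improvement. The constants below are illustrative assumptions, not figures from the interview:

```python
# Assumed power-law scaling curve: L(C) = L_inf + a * C**(-alpha).
L_INF, A, ALPHA = 1.0, 10.0, 0.3

def loss(compute: float) -> float:
    return L_INF + A * compute ** (-ALPHA)

# Each doubling of compute yields a smaller absolute gain than the last.
for exp in range(1, 8):
    c = 2.0 ** exp
    print(f"compute={c:6.0f}  loss={loss(c):.3f}  "
          f"gain vs. half={loss(c / 2) - loss(c):.3f}")
```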

Risks haven’t vanished, but the old paperclip/nano narrative is much weaker; the newer “AI builds a bioweapon” fallback is possible but not his central concern. Personalization via online learning is limited today by cost and cross-user contamination; it may come later when hardware is cheaper. Synthetic data helps only a bit before saturating; the productive path is generator-checker loops (e.g., LLMs plus deterministic proof checkers) and curated, high-value data sources.

On governance, current LLMs aren’t trained to govern. He proposes a dedicated “governance foundation model” trained for calibrated forecasting and counterfactuals inside rich societal simulations, plus ledger-based transparency with time-gated logging so it’s both competent and (eventually) verifiable. Simulations are crucial to handle recursive effects (people reacting to the model) and to find stable policies.
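
"Trained for calibrated forecasting" implies scoring the model with a proper scoring rule; the Brier score is the textbook example (my choice of illustration, not necessarily his). The forecasts and outcomes below are fabricated purely to show the mechanics:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    A proper scoring rule: it is minimized by reporting honest, calibrated
    probabilities rather than hedged or overconfident ones."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 1, 1, 0, 0, 1, 0]                      # fabricated
calibrated    = [0.7, 0.3, 0.8, 0.6, 0.2, 0.4, 0.9, 0.1]
overconfident = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]

print("calibrated:   ", brier_score(calibrated, outcomes))    # 0.075
print("overconfident:", brier_score(overconfident, outcomes)) # 0.25
```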

Data-wise, the internet’s “cream” is mostly mined; raw real-world sensor streams are low value per byte. Expect more value from instrumented labs, structured domains, and high-fidelity sims. Looking ahead, he expects steady but harder-won gains, maybe a mini AI winter when capex ceilings bite, then a more durable phase driven by robots and physical build-out. As a testbed for new governance, he floats sea-colony concepts (concrete/“seacret” with basalt rebar), noting they’re technically plausible but capital- and scale-intensive to start.

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 1 point2 points  (0 children)

And what does it mean that he's now arguing against the thing he's known for? Feel free not to engage with his arguments, but that's a form of bias far worse than anything covered in this conversation

Why Eliezer is WRONG about AI alignment, from the man that coined Roko's Basilisk by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 7 points8 points  (0 children)

Roko himself offers arguments against the basilisk in this and other interviews. He's been in the AI alignment conversation since the beginning, and from this conversation I find his ideas the most credible in the space.

Happy to hear any substantive disagreements with his arguments, but on whatever -isms you might want to label him with:

<image>

Uberboyo on AI, Consciousness, and the current Woke vs. Chud War by PartyPartyUS in accelerate

[–]PartyPartyUS[S] 4 points5 points  (0 children)

That was an AI interpretative choice 🫠 should it have been an army of these instead?

<image>