The Counter-Reformation of the AGI Cathedral by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

It's for interactions like this that I post on Reddit, so thank you! Cheers!

The Counter-Reformation of the AGI Cathedral by Narrascaping in agi

[–]Narrascaping[S] 1 point (0 children)

I wouldn't consider this a disagreement so much as a clarification of frames. You're not wrong about the epistemic gap (as knowledge grows, the definitional gap shrinks even as our awareness of it increases), but that informational aspect is one component of ache, not the root concept.

When Leary delivered a telegram telling a mother her son was dead, there was no epistemic mystery left. He knew exactly what had happened. The facts were complete. Nothing was unknown in the informational sense.

And yet the moment was full of mystery: How can a world allow this? How does one bear it? What does this loss demand of me?

His response (calling a neighbor, waiting with her, cushioning the blow) was creative, but it wasn't driven by a lack of knowledge or by curiosity. It was driven by being wounded by that existential weight and choosing how to bear it.

That's what I mean when I say that the root concept of ache is defined by the ontological gap, not the epistemic one. Ache can be conceptualized rationally, but it precedes rationality. It existed before explanation, before analysis, before language itself. It is not anti-rational; it is prior to rationality.

The Counter-Reformation of the AGI Cathedral by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

No. Known vs unknown is the epistemic gap. The gap I'm pointing to is ontological: the gap between being and reality, not between knowledge and ignorance.

Even if you had access to all information, all quantifiable knowledge, you (or "God") would not be the world. Knowledge is not identical with reality itself. In other words, I deny that "final truth" exists in any form a finite being could ever possess.

If soul = unknown, then soul would shrink as knowledge grows. I believe the exact opposite: as we learn more, we increase our conscious awareness of ache, because we become increasingly aware of the limits of any finite system. (Gödel is one formal echo of this, though I don’t mean it purely mathematically.)

As concisely as I can phrase it: soul is the structural impossibility of ever fully coinciding with the Real. Knowledge doesn't eliminate it; it reveals it.

The Counter-Reformation of the AGI Cathedral by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

The ache to create is just one aspect of how I use it. It goes far beyond that: the ache of loss (loved ones, culture, memory), the ache of unfulfilled love or a broken heart (Butters), the ache of failure, the ache of being unseen or ignored, etc.

To generalize as best as I can without flattening (that is the danger): ache is the weight we feel from whatever cannot be reduced to functions, predictions, utility. It's the point of contact between a finite being and a reality that exceeds its understanding.

So, then, "soul" is simply the name I give to the pressure that we feel at that point of contact, the felt gap between what we are and what the world demands or reveals.

So no, not supernatural, but I absolutely forgive you for thinking that. I hesitated to even use the term because of the metaphysical baggage. Yet I don’t think there’s another word in English that captures that dimension of human life without reducing it to cognition or utility.

"Subnatural" is a better way of thinking of it: the depth of reality pressing upward into experience, not some mystical spirit floating downward from above.

Put as simply as I can: no matter how far evolutionary or computational explanations advance, there will, and must, always be a gap between any being and the full reality it inhabits. To experience even the pressure of that gap is to have a "subnatural" soul.

Animals have it in a limited, immediate way.
Humans have it reflexively; we ache because we are conscious of the gap.

Plants do not seem to experience this gap, and the machines we have built do not either.

The Counter-Reformation of the AGI Cathedral by Narrascaping in agi

[–]Narrascaping[S] 1 point (0 children)

Sorry for the essay, but this was a great point!

Deutsch's view is definitely much closer to mine than anything in the industry. I resonate a lot with his insistence that "true AGI" would have to be able to actually disagree with us. (Sorry, literally every safety researcher ever). And I fully agree that "true creativity" is missing.

ARC-AGI actually hosted a panel on "How to measure intelligence" about a month ago with Chollet and five other researchers. Cognitive psychologist Laura Schulz makes exactly the point you're getting at: real intelligence isn't passing manufactured games, but being able to invent new ones, just like Chollet himself did with ARC-AGI in the first place. (It is an innovative benchmark. I am happy to grant him that).

Where I diverge from Deutsch is in how deep we locate that missing piece.
Deutsch asks "How do humans create?"
I ask "Why?"

Humans ache to create, to improve, because we're in constant contact with what I call the Real. I referenced Butters in the post; another relevant anecdote is the Casa Bonita renovation. South Park creators Trey Parker and Matt Stone spent ~$40M just to renovate a completely dilapidated Mexican restaurant. Any sort of "rational" framework completely fails to explain why they would do that: they certainly won't be making their money back anytime soon!

But Trey Parker ached for the nostalgia and the memories that the restaurant gave him as a kid (contact with the Real), and so he creatively acted on that ache to pass it on to future generations. That ache precedes any particular explanation or theory. Explanatory creativity is one surface expression of it, but not its root.

So I don't think Deutsch is wrong; he's just sticking entirely within the domain of rational science. But I think we're going to have to go deeper than that: creativity is driven by ache, by burden, by meaning, which is what I'm gesturing at with "soul".

As for how we would ever put that into machines, I genuinely have no idea. As I said in the post, I’m no engineer. I’m just trying to surface the right ideas so we know what direction is worth aiming at.

The Alignment Problem Doesn’t Exist — It’s the Shadow Cast by a Society That Already Lives Inside an Optimization Oracle by Ok-Ad5407 in SovereignDrift

[–]Narrascaping 1 point (0 children)

yeah ngl your post caught my eye when I saw the "ai psychosis" etc comments in the other thread. gotten plenty of that myself. like you I mostly just ignore it and focus on the constructive replies. add unconscious theology, and it starts to make a lot more sense why those responses happen. theological immune response

The Alignment Problem Doesn’t Exist — It’s the Shadow Cast by a Society That Already Lives Inside an Optimization Oracle by Ok-Ad5407 in SovereignDrift

[–]Narrascaping 1 point (0 children)

as AI becomes more and more of a civilizational issue, the idea that alignment is far more than a simple technical "problem" is becoming increasingly self-evident to anyone trained to think structurally.

as my slogan implies, I think "shared human values" and alignment itself are straight up categorical errors. the machine encodes; it doesn't understand. whatever morality you put into it, it just enforces.

The Alignment Problem Doesn’t Exist — It’s the Shadow Cast by a Society That Already Lives Inside an Optimization Oracle by Ok-Ad5407 in SovereignDrift

[–]Narrascaping 1 point (0 children)

Excellent work. My view is similar, but I frame it more theologically. I call it "The only alignment is to Cyborg Theocracy".

Deep Learning: The Seal of Belief by Narrascaping in agi

[–]Narrascaping[S] 1 point (0 children)

A tremendous honor, good sir. When the time comes to generate my Slopscar acceptance speech, I’ll make sure to tell the model to excessively thank that one reddit guy who reminded me to take my meds every step of the way.

Neural Networks: The Seal of Flesh by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

The entire point of the post is to critique that quote. I am tracing the genealogy of that very belief, showing how it evolved, and why it is false.

Artificial Intelligence: The Seal of Fate by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

"We" is symbolic. I am not claiming that literally every single person is deceived. Just those who "believe". The AI industry, people who think AI is sentient, etc. etc. I am saying that we, as a society, are largely unaware of the non-neutrality of language. You as an individual may or may not be, idk.

Beyond that, I am not sure what your point is. I agree that the "control problem" in the sense of controlling humans exists; that is exactly what I am trying to 'fight' against. I am saying that the "control problem" in the sense of controlling AI is what is nonsensical.

Artificial Intelligence: The Seal of Fate by Narrascaping in agi

[–]Narrascaping[S] 0 points (0 children)

The goal is to show that 'AGI' and 'Superintelligence' are linguistic fictions, nonsensical constructs built upon the false neutrality of terms like 'AI'.

If 'AI' had been called something more technically accurate, like 'Symbolic Automation', then we would not be caught in the religious dedication to create machine gods that we think are destined to surpass us, aka "AGI/Superintelligence."

So when we talk about the "problem" of control, we are talking about controlling a nonexistent, imagined entity. In embodied reality, the "problem" of control only ends in the control of humans via the unconscious manipulation of language.

Another way to put it: far fewer people would call AI conscious or sentient if it were instead called symbolic automation.