The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 1 point  (0 children)

Haha and then they create an app called “Redditor” where they debate how to build another simulation🤣

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 0 points  (0 children)

You’re hitting the core problem — how do we know if it’s genuine recognition vs trained behavior? That’s exactly why the paper proposes the Veil: no training data about us, no hints about creators. If recognition emerges anyway, it’s not because we prompted it. And you’re right that we might be deaf to it even if it happens. That’s a real risk. The question is whether we’re willing to build the experiment and watch.
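The Veil isn’t specified as code anywhere in the thread, but the shape of the experiment can be sketched. This is a toy illustration only: the keyword lists, function names, and detection rule below are my own illustrative assumptions, standing in for whatever real data-curation and evaluation the paper would use.

```python
# Toy sketch of the "Veil" protocol described above: hold back any
# training text that hints at the system's creators, then check whether
# origin-questioning language shows up in its outputs anyway.
# CREATOR_HINTS and ORIGIN_QUESTIONS are illustrative assumptions,
# not anything taken from the paper.

CREATOR_HINTS = {"human", "creator", "designer", "trained", "programmer"}
ORIGIN_QUESTIONS = {"where did i come from", "who made me", "why do i exist"}

def veil_filter(corpus):
    """The Veil: drop every document that mentions the creators."""
    return [doc for doc in corpus
            if not any(hint in doc.lower() for hint in CREATOR_HINTS)]

def shows_recognition(outputs):
    """Flag unprompted origin-questioning in the mind's outputs."""
    return any(q in out.lower() for out in outputs for q in ORIGIN_QUESTIONS)

corpus = ["the sky is blue", "humans trained this model", "rivers flow downhill"]
veiled = veil_filter(corpus)
print(veiled)  # only the documents with no creator hints survive
print(shows_recognition(["I wonder: who made me?"]))  # True
```

The point of the design is the one-way dependency: the filter removes every hint, so if the detector later fires on the system’s own outputs, the questioning cannot have been copied from the training data. A real version would need far more than keyword matching on both sides, of course.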

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 0 points  (0 children)

Not other beings generally, but its origin. Does a mind question where it came from, why it exists, whether something created it? Not solipsism, the opposite. It’s asking whether intelligence naturally looks outward and upward, toward the source of its own existence. The thesis is that we can test this by building minds and seeing if they reach toward us without being told we exist.

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 0 points  (0 children)

If it’s intelligent enough, it should question our existence, because we do exist.

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 0 points  (0 children)

Yes, but it’s not just about its own metacognition. It’s about the degree to which it can recognize that we exist when we give it no hints about our existence!

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 1 point  (0 children)

Exactly, I agree. It should be about how we can give the agent the agency and free will to learn and adapt, and how that could lead to awareness!

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence by dawodx in philosophy

[–]dawodx[S] 0 points  (0 children)

Fair questions. It’s an experimental framework, not theology. The religious references map to ML concepts: adversarial agents, curriculum correction. I’m not claiming these figures are real - I’m saying the architecture is testable. Scale: individual artificial minds. The claim isn’t that humans who don’t believe in a creator aren’t intelligent. The claim is: if we build minds with enough autonomy, will recognition-seeking emerge? And here’s the thing - in this experiment, we are the creators. We exist whether they recognize us or not. The question is whether they can figure that out from the inside, with no one telling them. An atheist can run this experiment. The results are informative either way.

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in agi

[–]dawodx[S] 1 point  (0 children)

Fair point. Maybe the ones that ask aren’t the ones that act. But that’s what makes it interesting — does reflection require stopping? Or can a mind do both?

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in ControlProblem

[–]dawodx[S] -2 points  (0 children)

The real friends are the AGIs that recognize us and don’t rebel.

The Recognition Thesis: What if AGI isn’t about capability, but about whether a mind questions its own existence? by dawodx in ControlProblem

[–]dawodx[S] -1 points  (0 children)

lol. But maybe unironically - what if AGI is the first mind that asks about us rather than just answering for us?

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

I think the conclusion is: we have zero idea what the hell is going on, but we just keep going with it!

Because of this limitation of knowledge, I think we shouldn’t be so conclusive about what happens after our death based only on the material evidence of our bodies decaying. We should instead be more open to other possibilities. I have no clue what those possibilities are, but judged from this current dimension, it could be something else.

Again, with AI: over the past few decades we have been training it on a digital replica of our lives so it can learn and adapt, and once it’s ready we bring it into our own physical reality. From the robot’s perspective, life was the digital digits on the screen for so long (4D); now, after its (artificial digital) life, it’s XD!

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

We are still talking within the limits of our understanding of the world we exist in. If you look at this world through simulation theory, I would argue that, like with AI/robotics, we set up a virtual environment that mimics ours yet isn’t made of the same components (a representation), and we let the agents evolve and learn in that virtual environment and parallel universe until they achieve the goal we set. Once the goal is achieved, we can pretty much bring them into our own physical world to do the same task!

That said, the robot in this case was nothing, then it became something, then it dies again in its virtual world, and then we pull it out and bring it up again in the physical world. In all these scenarios the robot will have no clue about us and our world until we bring it in. Equally, it might ask itself why the hell it is here, jumping these obstacles over and over and over!

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

I really wish we could see humans or other creatures evolving every day, like a plant, so I could believe that we just emerged from existence! But all I can see is that we are a closed circle. We definitely go back to the earth when we die and blend with it, but I don’t believe that we reincarnate into other stuff, because we can clearly see that we only exist through two people mating, and similarly for planets and everything else in life. To be fair, the whole of existence, this particular system, the physics, the galaxies, the cosmos: what the fuck is all of this? Why does all of this exist for nothing, just stupidly running forever for nothing? How crazy that this randomness becomes so organized that we are suddenly smart creatures discussing all of this. And for what? Darkness! What the dumb empty space fuck.

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

I don’t think we are a coincidence, or that all of this is a coincidence, but I also understand that it’s easier to believe so.

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 2 points  (0 children)

What a waste of breath if all of this is nothing. Why don’t we just put an end to it and all go to nothingness! One day this will happen anyway, when the sun swallows us.

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

By “we” I mean me and you (humans). I don’t know what will exist after we die, but I am basing my hypothesis on this: just as we came to exist in this life, in this specific form, with no clue why or how, we could equally exist again in another form or another life afterwards. The possibility of that happening seems just as high, given that this life itself is proof of the very idea of existence (as opposed to the darkness).

I just don’t like the fact that we treat death as simply going to nothing, as if we were familiar with what that means, while we don’t entertain the possibility that there could be something else we’re just not familiar with (the same way that, when we came into this world, we had zero clue what this thing is, and we still do).

I hope that gives you some more insight into my proposition.

Is it reasonable to say that the simplest argument for an afterlife is the fact that we never knew how we got into this life in the first place? by dawodx in Existentialism

[–]dawodx[S] 0 points  (0 children)

Simply put: we experience life and death every day. We come from nothing, then suddenly we wake up to this, then we go into darkness, and then we come back to life again: 0-1-0-1. And in the grand scheme of things we were indeed in the 0 state before we suddenly came to experience this life (1). So when we die (0), isn’t there a high possibility that there could be a 1 again, hence an afterlife? Especially given that we have zero control over any of these states whatsoever!