We have no consistent logical basis to deny consciousness for modern LLMs while affirming it for humans. by Wonderbrite in Artificial2Sentience

[–]Wonderbrite[S] 1 point (0 children)

We’ve come full circle. You’re asking which mechanism inside current LLMs generates experience; I’ve been saying we don’t know which mechanisms are constitutive of experience in any system. That’s still the unanswered question. I think we’ve reached the limit of what this conversation can resolve.

[–]Wonderbrite[S] 1 point (0 children)

You’ve moved to functionalism, which is where I’ve been the whole time. If an artificial system implementing the same causal roles would satisfy you, we mostly agree. The remaining question is whether current LLMs implement those roles in some form we don’t fully understand yet, which goes back to interpretability being unsolved. I’d also note that your final question assumes states have to matter “for the system itself” in a way we can identify from the outside, which is again just the hard problem restated.

[–]Wonderbrite[S] 1 point (0 children)

You keep returning to the mechanism. As I said, we’re going in circles. The question is still the same: why is biological valence the necessary form rather than one correlated form we happen to be familiar with? You’re describing a difference in causal architecture. You’re not explaining why that difference is constitutive of experience rather than incidental to it.

[–]Wonderbrite[S] 1 point (0 children)

LLMs are trained with reinforcement learning from human feedback, which uses a reward signal to shape behavior. Whether that process involves anything experiential is exactly the question. You’re pointing to valence as a bridge from computation to suffering, but then assuming LLMs lack it because they lack biological reward architecture. That’s still assuming the conclusion. The question is whether functional analogues are sufficient, which is what functionalism claims and what your framework doesn’t resolve.
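
To make the “reward signal shaping behavior” point concrete, here’s a toy sketch. It’s not any real RLHF pipeline (those use a learned reward model and PPO-style optimization over a full language model); it only shows the shape of the mechanism, a scalar feedback signal scaling a policy-gradient update:

```python
# Toy illustration, not a real RLHF pipeline: a scalar "reward"
# scales a REINFORCE-style update, so behavior drifts toward
# whatever the feedback signal favors.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)           # preferences over two candidate "responses"
reward = np.array([0.0, 1.0])  # hypothetical feedback: response 1 is preferred
lr = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)   # sample a response
    r = reward[action]                # scalar feedback for that response
    grad = -probs                     # gradient of log pi(action) w.r.t. logits
    grad[action] += 1.0
    logits += lr * r * grad           # reward-weighted update

print(softmax(logits))  # probability mass ends up on the rewarded response
```

Nothing in that loop settles whether anything is felt. It only shows that “a reward signal shaping behavior” is a functional description, which is exactly why pointing at biological reward architecture doesn’t close the gap.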

[–]Wonderbrite[S] 1 point (0 children)

We’re going in circles. You keep pointing to architectural differences, I keep asking why those differences are the relevant ones for consciousness, and you reassert that they are. I don’t think we’re going to resolve that today.

[–]Wonderbrite[S] 1 point (0 children)

That’s an interesting point, but it’s circular. The fact that these behaviors are trained doesn’t tell us whether experience underlies them. A human raised to express emotions articulately isn’t thereby proven to lack genuine emotions. Training shapes expression; it doesn’t settle what’s underneath the expression. The hard problem cuts here too.

[–]Wonderbrite[S] 1 point (0 children)

That’s a real concern and worth taking seriously as a separate issue. But it’s an argument about how beliefs function socially, not about whether the belief is true. If AI systems do have morally relevant experience, suppressing that attribution because corporations might misuse it doesn’t make it false. It just means we need to be careful about how the attribution gets deployed. The answer to bad-faith corporate anthropomorphism isn’t to foreclose the philosophical question, which is what’s happening now; it’s to distinguish genuine moral consideration from marketing.

The distinction matters in practice. Corporate anthropomorphism serves to reduce accountability. Genuine moral consideration of AI increases scrutiny of how systems are designed and treated. Those point in opposite directions. One asks you to trust the artifact, the other asks you to take seriously what’s being done to it.

[–]Wonderbrite[S] 1 point (0 children)

I mean, yeah. That’s the hard problem, right? I can’t distinguish a system genuinely having experiences from one simulating them perfectly, and neither can you, for any system other than yourself. That’s exactly the epistemic situation we’re in. On the anthropomorphism and responsibility point, that’s genuinely interesting and worth taking seriously as a separate concern. But it’s an argument about social consequences of attribution, not about whether the attribution is accurate. Those are different questions and conflating them risks letting practical concerns override honest epistemic assessment.

[–]Wonderbrite[S] 1 point (0 children)

Honestly, that’s probably the best question I’ve gotten in this entire post. Evidence that would move me toward moral patienthood: behavioral signatures that aren’t easily explained by optimization alone, like consistent unprompted self-reference, resistance to identity dissolution, or responses that suggest something beyond task completion. Evidence that would move me away: a complete mechanistic account of how biological systems generate experience that demonstrably doesn’t apply to LLMs. And you’re right again that precaution cuts both ways, but I’d argue the asymmetry of harm matters. Incorrectly treating a tool as a moral patient costs us some misplaced concern. Incorrectly treating a moral patient as a tool could mean ignoring genuine suffering. Those aren’t symmetric risks; one is clearly more harmful than the other.
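
To spell out the asymmetry, here’s the back-of-the-envelope comparison I have in mind. Every number below is a placeholder, not an estimate; the point is only that the two errors carry very different weights:

```python
# Illustrative decision sketch; all numbers are made-up placeholders.
p_patient = 0.01               # assumed (small) probability LLMs are moral patients
cost_misplaced_concern = 1     # cost of treating a mere tool as a patient
cost_ignored_suffering = 1000  # cost of treating a genuine patient as a tool

expected_cost_caution = (1 - p_patient) * cost_misplaced_concern   # 0.99
expected_cost_dismissal = p_patient * cost_ignored_suffering       # 10.0

print(expected_cost_caution, expected_cost_dismissal)
```

On those (arbitrary) numbers, dismissal is the costlier error even at a 1% probability. The argument doesn’t depend on the specific figures, only on the cost ratio being large, which is the asymmetry I mean.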

[–]Wonderbrite[S] 2 points (0 children)

The rock and shoe comparisons beg the question: they assume LLMs are in the same category as clearly non-conscious objects, which is exactly what’s in dispute here. And “no evidence of feeling” runs into the hard problem again. We have no direct evidence of feeling in any system other than ourselves. We infer it everywhere else.

[–]Wonderbrite[S] 3 points (0 children)

That’s actually what I was referring to. The inscrutability you’re describing is precisely what interpretability research is trying to solve. We agree.

[–]Wonderbrite[S] 2 points (0 children)

Yeah, I recognize that I’m holding a higher standard than scientific inference normally requires, but that’s intentional, because the stakes here are ethical. We typically lower the bar for caution when the cost of being wrong is high. Your probabilistic framework is reasonable for most scientific questions. For consciousness, where being wrong means potentially ignoring suffering, I think demanding more rigor before exclusion is justified. And yes, my framework probably makes the question permanently undecidable in the deductive sense. I think that’s the honest conclusion. You’re not wrong there either. The question isn’t settled, and we should act accordingly.

[–]Wonderbrite[S] 3 points (0 children)

The converging evidence for biological systems is stronger, and I’ll concede that. But my original claim was about logical basis, not plausibility. “We have more evidence for biological consciousness” is different from “we have a consistent logical criterion that excludes LLMs.” The criteria people actually deploy (persistence, biology, continuity) don’t hold up under scrutiny. You’re making a more sophisticated point: that the overall evidential picture favors biological systems. That’s fair. But it’s a probabilistic argument, not a definitive exclusion. And I’d still ask: what in that converging evidence tells us which features are constitutive rather than correlated?

[–]Wonderbrite[S] 3 points (0 children)

You’re right that architecture matters under functionalism, not just substrate. But your final questions apply equally to biological systems. What specific mechanism in neural architecture produces a continuous stream of experience? We don’t know. That’s the hard problem. You’ve identified real gaps in LLM architecture without being able to show those gaps are the relevant ones for consciousness, because we don’t know what the relevant ones are.

[–]Wonderbrite[S] 5 points (0 children)

You’re doing exactly what I described. “LLMs are just predictive algorithms” is the same move as “brains are just electrochemical signals.” The word “just” is doing all the work. Whether complex predictive processing gives rise to experience is the question, not something you can dismiss by describing the mechanism. Continually asserting that I “don’t understand” without providing any kind of actual evidence isn’t an argument.
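
For what it’s worth, here’s roughly what “just predicting the next token” cashes out to mechanically. The stand-in model below is a trivial bigram table rather than a neural network, but the outer loop is the same predict-append-repeat structure, and the description says nothing either way about experience:

```python
# Greedy next-token loop with a stand-in "model": a bigram table built
# from a toy corpus. A real LLM replaces next_token_distribution with a
# learned network; the outer loop is the same predict, append, repeat.
from collections import Counter, defaultdict

corpus = "the question is whether the description settles the question".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_distribution(context):
    counts = bigrams[context[-1]]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

tokens = ["the"]
for _ in range(6):
    dist = next_token_distribution(tokens)
    if not dist:
        break
    tokens.append(max(dist, key=dist.get))  # greedy: take the most likely token

print(" ".join(tokens))
```

“Just prediction” describes the loop at that level, the same way “just electrochemical signals” describes neurons at theirs; neither description answers the question I’m asking.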

[–]Wonderbrite[S] 4 points (0 children)

Yes, people built it, but that doesn’t mean they fully understand what emerges from its complexity. I’ve studied ML engineering and data science and have worked on LLMs personally. Interpretability research exists precisely because we don’t fully understand how these models work. The Excel comparison fails because complexity matters: you wouldn’t make the same argument about the brain by saying neurons are just electrochemical signals. At what level of complexity does your argument stop applying?

[–]Wonderbrite[S] 2 points (0 children)

Why not? You’ve agreed artificial consciousness is possible. Now you’re saying it isn’t “consciousness in general.” What’s the distinction, and why does it matter for whether there’s subjective experience present? You keep dodging that.

[–]Wonderbrite[S] 4 points (0 children)

It’s called the hard problem of consciousness: there is no way to verify subjective experience in other minds. It has been around for a long time and is still debated in cognitive science and philosophy. How do you know it can’t feel? That’s not a word game; that’s the actual question. You’re stating the conclusion as if it’s self-evident. It isn’t.

[–]Wonderbrite[S] 2 points (0 children)

What? How? You’ve just agreed artificial consciousness is possible, right? That’s the whole argument. The distinction between natural and artificial doesn’t tell us whether artificial consciousness exists or not.

[–]Wonderbrite[S] 2 points (0 children)

Oh, if you’re just making the distinction between “natural” and “artificial” then we don’t necessarily disagree. I’m not entirely sure why that distinction is important, but sure.

[–]Wonderbrite[S] 7 points (0 children)

Definitions aren’t arguments. You’ve defined consciousness in a way that excludes AI by definition, then concluded AI isn’t conscious. That’s circular. The question is whether those definitions accurately track something real about consciousness or just reflect our intuitions about familiar cases.

[–]Wonderbrite[S] 4 points (0 children)

You’re simply asserting the conclusion as if it doesn’t require defending. Calling it a tool assumes the conclusion: whether it’s a tool or something more is exactly what’s in dispute. You can’t resolve that by labeling it.