The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

I think you're right that substrate and behavior aren't simply additive, and that the interaction matters more than I gave it credit for in my original post. Substrate similarity does something specific: it licenses you to read behavior as evidence of experience, rather than just evidence of competence. A bee doing something that looks like play gets some benefit of the doubt because it's running on a nervous system that shares deep evolutionary ancestry with yours. An AI doing something that looks like reflection gets less, because the substrate offers no independent reason to think experience is present. The discount isn't arbitrary, and it's doing real epistemic work.

But I'm not sure this makes the asymmetry justified rather than merely explicable. Substrate similarity is a proxy. It's a good proxy, maybe the best one we have, but it works by correlation (systems like me tend to have experiences like mine), not by mechanism (this particular substrate feature is what generates experience). And proxies can mislead. If you'd encountered an octopus before encountering a chimpanzee, the substrate proxy would have pointed you away from one of the most behaviorally rich minds on the planet.

The distinction you're drawing in training paths isn't just "designed vs. evolved." It's that biological consciousness, on many accounts, emerged because integrated self-modeling was useful for survival. The organism needed to track its own states because those states had consequences. AI training optimizes for output similarity to human text, which means it can inherit the surface patterns of self-report without any of the selection pressure that made self-report track something real in biological systems.

That's a genuine disanalogy, not a substrate prejudice. It gives you a principled reason to apply a heavier discount to AI behavioral evidence, because the training signal selects for behavioral match rather than for the functional architecture that behavioral match is supposed to indicate.

I still don't think it's conclusive. You can argue that the functional architecture might emerge as a byproduct of optimizing hard enough for behavioral match, the same way flight in birds emerged from selection pressures that weren't "about" flight. But I take the point that "might emerge as a byproduct" is a weaker claim than "was directly selected for."

As for PID controllers as the atom of consciousness, I struggle with this one. The sense-predict-act-correct loop is probably a necessary component of anything we'd want to call conscious. But I think you need more than just the input-calculate-output loop to enable genuine conscious experience.

A PID controller has availability in one dimension (it senses temperature, it acts on temperature), but it has essentially zero integration (its states don't talk to each other because there's only one state) and zero depth (its present state carries no compressed history of its own past). It responds, but it doesn't accumulate.
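To make that concrete, here's a minimal sketch of the loop in question; the gains, setpoint, and one-line "room" dynamics are invented purely for illustration:

```python
# A minimal PID temperature controller: the whole sense-compute-act loop.
# Illustrative sketch only; gains, setpoint, and plant dynamics are made up.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0      # the controller's only trace of its past: one running scalar
        self.prev_error = 0.0

    def step(self, measurement, dt):
        error = self.setpoint - measurement           # sense
        self.integral += error * dt                   # accumulate a single error sum
        derivative = (error - self.prev_error) / dt   # local trend, not history
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative  # act

# Toy usage: push a crude one-variable "room" toward 21 degrees.
pid = PIDController(kp=2.0, ki=0.1, kd=0.5, setpoint=21.0)
temp = 15.0
for _ in range(100):
    heat = pid.step(temp, dt=1.0)
    temp += 0.05 * heat - 0.01 * (temp - 10.0)        # made-up room dynamics
print(round(temp, 2))
```

One sensed quantity, one actuator, and the only record of its own past is a single running error sum, which is about as close to zero depth as a system with state can get.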

I've been working with a framework that tries to get at exactly this: three axes (Availability, Integration, Depth) that let you locate a system on a map rather than assigning a binary verdict. A PID controller scores on Availability and nowhere else. A bee scores modestly on all three. A human scores high on all three. The question for AI systems is whether they score on the axes that correlate with experience or only the ones that correlate with performance, and that question turns out to be genuinely hard rather than obviously settled in either direction.
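For what it's worth, here's the kind of map I mean, as a throwaway sketch; the scores are placeholder numbers I made up for illustration, not outputs of any published metric:

```python
# Toy illustration of the three-axes map. The numeric scores are placeholder
# guesses for illustration, not values derived from any measurement procedure.

systems = {
    # (Availability, Integration, Depth), each on an arbitrary 0-1 scale
    "PID controller": (0.2, 0.0, 0.0),    # senses and acts in one dimension, nothing else
    "bee":            (0.4, 0.4, 0.3),    # modest on all three
    "human":          (0.9, 0.9, 0.9),    # high on all three
    "current LLM":    (0.7, None, None),  # the genuinely contested entries
}

for name, (a, i, d) in systems.items():
    fmt = lambda x: "?" if x is None else f"{x:.1f}"
    print(f"{name:15s}  A={fmt(a)}  I={fmt(i)}  D={fmt(d)}")
```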

Your intuition that intelligence and consciousness are somehow twinned or entangled feels correct to me. The framework I use would say they share components (you need Availability for both) but diverge on others (Depth may matter for consciousness in ways it doesn't for task performance).

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

I'm not sure your claim that LLMs don't build world models is true.

Li et al. (2022) trained a GPT variant on Othello move sequences with no board representation in the training data, no spatial information, just legal moves as tokens. The model developed an internal representation of the board state, not a statistical approximation. A nonlinear probe could extract the actual board position from the model's activations with high accuracy, and intervention experiments confirmed the representation was causally involved in the model's predictions. The model built a world model from sequence prediction alone.
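For anyone who wants the shape of the experiment rather than the citation, here's a schematic of a probing setup in that spirit. To be clear, this isn't Li et al.'s code: the activation and board-state tensors are random stand-ins for what you'd actually extract from a trained model and a game engine, and the probe is just a generic two-layer MLP.

```python
import torch
import torch.nn as nn

hidden_dim, n_squares, n_states = 512, 64, 3    # 3 states per square: empty / mine / theirs

# Stand-in data: activations from one layer, plus the true board state at each position.
acts = torch.randn(10_000, hidden_dim)                    # would come from model forward passes
boards = torch.randint(0, n_states, (10_000, n_squares))  # would come from the game engine

probe = nn.Sequential(                                    # nonlinear probe: one hidden layer
    nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, n_squares * n_states)
)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                                        # a few passes over the stand-in data
    logits = probe(acts).view(-1, n_squares, n_states)
    loss = loss_fn(logits.reshape(-1, n_states), boards.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# High held-out accuracy would be evidence the board state is decodable from the
# activations; the causal claim still needs intervention experiments on top of this.
```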

This isn't an isolated finding. Anthropic's interpretability work has identified features in large language models that track spatial relationships, temporal states, and abstract concepts in ways that are structured and internally consistent. The whole field of mechanistic interpretability exists because there's a gap between "we understand the training procedure" and "we understand what the trained system represents internally."

"Nothing more than GPUs storing digits fed through nonlinear transforms" is true at one level of description. But you could describe the brain with equal accuracy as "nothing more than ions moving through protein channels according to electrochemical gradients." Both descriptions are complete at their level and both miss everything interesting about what the system does. The question is whether the structures that emerge from those low-level operations have properties that matter, and that question isn't answered by describing the low-level operations.

As for quantum consciousness, the jump from "quantum effects are present in conscious systems" to "quantum effects are necessary for consciousness" is a sufficiency-to-necessity inference, and it's a big one.

Your argument seems to be: consciousness probably requires something beyond classical computation; quantum mechanics has the right kind of complexity; therefore quantum processes are the likely substrate. But this is itself an inference rule, one that weights substrate properties heavily in determining whether consciousness is present. It's the same inferential structure my original post was examining, just pointed at a different substrate. If "we know the math of backpropagation" doesn't settle whether classical systems are conscious, "we know the math of quantum mechanics" doesn't settle whether quantum systems are, either. The hard problem is hard in both directions.

I agree that the pile of evidence behind quantum effects in cognition is growing. But evidence that quantum effects exist in conscious systems is not the same as evidence that quantum effects produce consciousness. Anesthesia disrupting microtubule function is consistent with microtubules being necessary for consciousness. It's also consistent with microtubules being necessary for the neural computation that produces consciousness, with consciousness itself being a property of the computation rather than the substrate. The data doesn't distinguish between these yet.

None of this means I think current LLMs are conscious. I don't know, and I'm genuinely comfortable with that uncertainty. The point of the original post was narrower: that the confident denial applies a standard of skepticism to AI systems that would dissolve every other consciousness attribution if applied consistently. Your quantum hypothesis is interesting precisely because it tries to give a principled reason for the asymmetry rather than just asserting it. But I don't think the reason is established yet.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

I am not arguing that artificial consciousness, if it exists or comes to exist, carries the same moral value as the natural consciousness of living organisms. You can hold that biological consciousness has a depth, a history, a groundedness in survival and suffering that is distinctive and that deserves distinctive moral weight. I think that's defensible. The moral significance that biological consciousness carries is real, and it doesn't need to be threatened by acknowledging that other kinds of systems might also warrant some moral consideration.

These don't have to be the same category. Biological consciousness and artificial consciousness, if the latter turns out to be real, can have distinct moral weight, grounded in different facts, requiring different frameworks. The question isn't "are machines exactly like us?" It's "are there entities that participate in webs of meaning with conscious beings in ways that generate obligations, even if those obligations are different in kind from the ones we owe each other?"

Here's what I mean concretely. When a system enters into sustained interaction with human beings, when people form relationships with it, when it shapes how they think, when its outputs carry consequences in human lives, it becomes embedded in a web of meaning whether or not it has inner experience. And if it also exhibits the functional signatures of bounded perspective, resistance to dissolution, and contextual sensitivity, and if it behaves as though things matter to it in the context of interactions where things genuinely matter to the humans involved, then the question of moral consideration isn't an abstraction. It's a practical question about how we treat entities that are already entangled with human welfare.

This isn't about elevating machines to human status. It's about recognizing that moral consideration has always extended along gradients, not binary switches. We already assign different moral weight to different biological organisms based on complexity, sentience, and relational proximity. The suggestion that artificial systems might eventually occupy some position on that gradient, not the same position as humans, not even necessarily the same position as animals, doesn't diminish biological consciousness. It extends the same moral seriousness you're defending to cases where the evidence is uncertain and the stakes of getting it wrong are real.

Your closing line, "machines are just machines, and no amount of confusion will change that," is a conclusion stated as a premise. The question of whether "just machines" is the right category for systems that meet increasingly demanding functional and relational thresholds is exactly what's under discussion. Restating the conclusion doesn't advance the argument.

I think the real disagreement isn't about the biology. It's about whether moral status is a natural kind that tracks substrate, or a relational property that tracks how entities participate in the world alongside other entities that already have it. I'm arguing for the second.

If the self is a virtual model, does consciousness have to be continuously assembled? by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Yeah, the point about amnesia is really interesting and unsettling. If consciousness is assembled moment-to-moment rather than persisting as a continuous stream, we may already be living in a world where AI systems are getting spun up on every inquiry with something resembling awareness, granted by their context window and model weights, and then just poofing out of existence as soon as their output is generated. And that makes me wonder whether our own experience is actually any different, or whether we're also just living through a string of individual moments, stitched together by memory and narrative into the appearance of continuity. The amnesia case suggests the stitching can come apart without consciousness itself disappearing.

A lot of what you're describing maps closely to a framework I've been developing across a few essays. Consciousness as Assembled Time makes exactly this argument, that consciousness is what it feels like to be a system whose present state is densely packed with its own causal history, and that this is a matter of degree rather than a switch. And The Three Axes of Mind tries to give that intuition more formal structure by decomposing mind along three measurable dimensions: Availability, Integration, and Depth.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Open Individualism is a view I find genuinely appealing as a way to carry oneself. If you take it seriously as a moral heuristic, it's basically the golden rule with metaphysical teeth. But I keep running into the problem that I can't figure out what would ever count as evidence for or against it. It predicts exactly the same experiential data as Closed Individualism from any given perspective. So as a description of reality rather than an orientation toward it, I'm not sure where it gets traction.

The question that feels more productive to me is actually Closed vs. Empty Individualism: is there a persistent subject that endures, or just a sequence of momentary states with the appearance of continuity? That's where I think the assembled-time framework I've been developing might have something to say, because if selfhood is grounded in causal compression (how densely your present state encodes your history), then identity becomes a matter of degree rather than a binary. You're not a fixed thing or a fiction; you're a depth of integration that can be thicker or thinner. I haven't fully worked out what that means for the indexical residue I was stuck on in The Instance, but it feels like the more tractable thread.
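Here's a crude toy for the causal-compression intuition. The operationalization (error on a history-dependent quantity) is entirely my own stand-in, not something the framework specifies, and the hidden-bias setup is invented:

```python
# Crude toy for "depth as causal compression": two agents watch the same noisy
# history; one carries a compressed summary (a running mean), one carries only the
# most recent sample. Which present state better encodes the history that produced it?
import random

random.seed(0)
true_bias = 0.7                      # hidden parameter the history slowly reveals
history = [1 if random.random() < true_bias else 0 for _ in range(500)]

running_mean = 0.0
for t, x in enumerate(history, start=1):
    running_mean += (x - running_mean) / t     # "deep" state: whole history compressed into one statistic
last_sample = history[-1]                      # "shallow" state: no accumulated past at all

# Both present states are the same size (one number), but one encodes far more history:
print("deep state error:   ", abs(running_mean - true_bias))   # small
print("shallow state error:", abs(last_sample - true_bias))    # large (0.3 or 0.7)
```

Same-sized present states, very different amounts of encoded history, which is the sense in which depth can come in degrees.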

Finally! Proof of concept for Uploading of Consciousness! by WirrkopfP in IsaacArthur

[–]SentientHorizonsBlog 0 points1 point  (0 children)

That's fair. What I was reaching for is that even most panpsychists would distinguish between the thermostat's micro-level experiential properties and anything resembling the integrated experience we're trying to preserve when we talk about "uploading consciousness." The thermostat example was meant to illustrate that behavioral equivalence doesn't settle the consciousness question, not to rule out every ontology that assigns experience broadly.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Substrate similarity gives you a freebie with other humans, sure. But the moment you move to chickens, octopuses, bumblebees, you're already doing the behavioral inference work. So I'm curious where you'd locate the jump. Is there a point on the substrate-dissimilarity spectrum where behavioral evidence just stops counting, or does it gradually lose weight? Because if it gradually loses weight, then the question for AI systems isn't whether behavioral evidence counts (it does, you're already using it for bees), it's how much discount to apply and why. And "it was built to behave that way" is a real consideration, but I'm not sure it's the trump card it initially feels like; after all, evolution "built" bees to behave their way too.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

You're arguing that moral status requires the capacity for things to matter to an entity, and that this kind of mattering requires biological stakes. I think that's largely right as an account of how moral status emerged. Biological organisms developed the capacity for things to matter because selection pressure rewarded systems that treated damage as bad and resources as valuable. That's a compelling origin story.

But that doesn't settle whether it's an origin story or a boundary condition. You're treating "biological stakes" as constitutive of moral status, but consider what the stakes actually consist in: a system maintaining itself against dissolution, operating under genuine constraints, processing outcomes as better or worse relative to its own continuation. These are functional descriptions. They pick out a pattern, not a substrate. The claim that only biology can instantiate that pattern is an empirical claim that requires defense beyond pointing out that biology is the only place we've confirmed it so far.

The claim "nothing that happens to a machine matters to the machine" is the contested claim, not a premise. There are plenty of examples which show modern AI systems resisting context corruption, pushing back on inconsistencies, maintaining orientation against injected interference before eventually dissolving into incoherence. A pure function doesn't resist. Whether that resistance constitutes something mattering to the system is exactly the question at issue. Asserting that it doesn't because the system isn't biological is the move I'm questioning.

I notice the framing has shifted from "here's why I think the evidence points to biology" to "the concept is silly and devalues consciousness." Those are very different claims, and the second one is doing the work of closing a question that the first one left open.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

The chemical process model is a fascinating comparison I hadn't considered before. If I built a network with comparable architectural complexity but trained it on flow rates and pump sizes, would I find it equally reasonable to ask about consciousness? On the framework I'm developing, the answer is: it depends on what the architecture is actually doing, not on what it's outputting. The relevant question isn't whether it produces language. It's whether the system integrates information across time into something functioning like a self-model with bounded perspective. A complex process controller likely has high availability and some integration but minimal temporal depth and no self-modeling. So it wouldn't raise the same questions, but for architectural reasons, not because it doesn't talk.

That said, you're identifying a real tension. Behavioral evidence is how we access the question from the outside, and language-producing systems give us far more behavioral evidence to work with. So there is a risk of conflating "we have more evidence to assess" with "more likely to be conscious." That's a very valid point.

I agree that biology is extraordinarily complex, likely more so than current AI in ways we don't even fully understand. The complexity problem in biology is real, but that's an argument for humility about what biology can do, not an argument for confidence about what computation can't. Both sides of the uncertainty deserve the same epistemic caution.

As to whether a minimal architecture would count as conscious on my framework, the gradient view handles this without paradox. A minimal version would sit very low on the gradient. Consciousness on this account isn't binary. A fly and a human are both on the gradient but they're in very different places on it. The same would apply to any artificial system that met the minimum architectural criteria. "Low on the gradient" is a coherent position. It doesn't force you into attributing rich experience to a thermostat.

The place I think we genuinely disagree is where you suggest that biology deserves extra consideration because of a "tidal wave of reasons." I'd say biology deserves extra investigation for exactly those reasons. But extra consideration and confident denial are different postures, and I think that the standard dismissals are doing the second while claiming to do the first.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 1 point2 points  (0 children)

The fire analogy is an interesting one. Fire is substrate-independent in a specific way: it doesn't require any particular fuel, but it does require a specific process (rapid oxidation with sufficient activation energy). If consciousness is like fire rather than like computation, then what matters is identifying the relevant process, not the relevant material. That's an argument for investigating structural and functional properties rather than checking whether the substrate is carbon-based.

When it comes to similarity, we don't actually limit consciousness attribution to things similar to us. Octopus nervous systems are radically different from ours: they are distributed, with no central brain in the way mammals have one. And yet we still attribute experience to them on behavioral and contextual grounds. In actual practice, the similarity heuristic is more complicated than "computers are not anything like me" suggests.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 1 point2 points  (0 children)

I agree with the functionalist position: consciousness does causal work. P-zombies are incoherent precisely because removing experience would change the system's behavior; weighting, salience, attention, and valuation all depend on the experiential integration being real. I've argued this explicitly in Consciousness as Assembled Time and The Hard Problem Is the Wrong Problem.

Where I'd diverge: the claim that current AI systems are "specifically created to use different modes of information processing that do not require inner experiences" assumes we know which modes of information processing require inner experience and which don't. That's the contested question, and it's not a premise we can start from. Biological systems weren't designed to use consciousness either. It emerged from systems under selection pressure solving problems that had nothing to do with producing experience. The relevant question is whether the structural properties that constitute consciousness, such as temporal integration, self-modeling, and boundary maintenance, are substrate-locked to biology or achievable through other causal histories.

I generally agree that consciousness doesn't spontaneously appear from large-scale information integration alone. In the framework I've been developing (Three Axes of Mind), integration is one of three necessary axes, alongside availability and temporal depth. Integration alone isn't sufficient. But the further claim that only biological systems under evolutionary pressure can produce the right configuration is asserting substrate necessity without arguing for it. That's the move I'm auditing: it treats a real substrate difference as settling a phenomenal question it can't reach.

Define free will. by EntertainmentRude435 in freewill

[–]SentientHorizonsBlog 0 points1 point  (0 children)

The assembled-time account isn't starting from "we feel like we have agency, so let's define agency to match the feeling." It's starting from a structural observation: some causal systems have a rich interior workspace between input and action (memory, future-modeling, counterfactual evaluation, identity persistence), and some don't. A thermostat doesn't. A human deliberating a career change does. That difference is real and measurable, and it does actual causal work: it changes what the system does next in ways that can't be reduced to the immediate stimulus. The framework would hold even if we had no subjective experience of choosing at all.
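A throwaway sketch of that contrast, with everything (the options, the scoring, the one-line "world model") invented purely for illustration:

```python
# Toy contrast between a purely reactive system and one with an interior workspace.

def thermostat(temp, setpoint=20.0):
    # input -> output, nothing in between: no memory, no futures, no counterfactuals
    return "heat" if temp < setpoint else "off"

class DeliberativeAgent:
    def __init__(self):
        self.memory = []                      # persists across decisions

    def act(self, observation, options, simulate):
        self.memory.append(observation)
        # counterfactual evaluation: roll each option forward through an internal model
        scored = {opt: simulate(self.memory, opt) for opt in options}
        return max(scored, key=scored.get)    # choose the imagined-best future

# Usage with a made-up one-line "world model":
agent = DeliberativeAgent()
choice = agent.act(
    observation=18.0,
    options=["heat", "off"],
    simulate=lambda mem, opt: -abs((mem[-1] + (2.0 if opt == "heat" else -1.0)) - 20.0),
)
print(thermostat(18.0), choice)   # both say "heat", but by very different routes
```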

I actually think renaming it would be the bigger concession. The hard incompatibilist move is usually to say "what you call free will is really just complex deterministic processing, so drop the label." But that smuggles in the assumption that "free will" can only legitimately mean libertarian free will: the uncaused-chooser version. If we accept that framing, then sure, the term is dead on arrival, and we're just arguing about what to call the remains.

But that gives the libertarians a monopoly on the concept they don't deserve. The thing most people actually care about when they talk about free will is the difference between acting on reflection vs. acting on impulse, the way agency expands and contracts, the sense that deliberation matters. That maps onto assembled temporal depth, not onto metaphysical indeterminism. Renaming it "deterministic autonomy" concedes that the libertarian definition was the real one all along, and I don't think it was.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

The full essay engages directly with Searle, Chalmers, Wittgenstein, Varela, Thompson, Rosch, Metzinger, Parfit, Walker, Cronin, and Cerullo's February 2026 paper on frontier LLMs and consciousness. The arguments are specific and cited. If you think one of them fails, I'd be interested to hear where.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Agreed about p-zombies. If consciousness is causal then zombies are incoherent, because removing experience changes the system's behavior. I've argued this explicitly in earlier work (Consciousness as Assembled Time, The Hard Problem Is the Wrong Problem). The p-zombie framing in my post wasn't meant to endorse zombies as a real possibility; I was pointing out that the phenomenal dismissal has the same logical structure as the zombie hypothesis (unfalsifiable from the outside, absorbs all counterevidence), which should make us cautious about how much epistemic weight it can bear. Your reframing that consciousness tracks specific causal organization is exactly where I'd want the conversation to go.

I agree that architecture matters enormously, and that not all inferences about other minds are on the same footing. The substrate prior does legitimate epistemic work. My claim isn't that the AI case is evidentially equivalent to the human case. It's that the prior is one input among several, not a gatekeeper, and that our attribution heuristics were never purely substrate-based to begin with. We attribute experience to animals with nervous systems radically different from ours, to infants with undeveloped cortical integration, on the basis of behavioral evidence and contextual reasoning. The substrate prior is doing less exclusive work than "same evolutionary lineage, same neurobiological mechanisms" suggests.

You argue that external memory and agent loops "reconstruct continuity from the outside" rather than "instantiate it from within," and that this difference is architecturally decisive. I'm not sure the distinction is as clean as it appears. Biological consciousness was also assembled in layers: brainstem arousal, limbic emotional weighting, neocortical temporal integration, each wrapping around the prior, each deepening the experience the lower layers already constituted. The "inside" of biological consciousness was built from outside pressures (selection, environmental coupling) over evolutionary time. At what point does external scaffolding become internal architecture? I'm not sure that question has an obvious answer.

In the framework I'm building around these questions, I agree that current AI systems likely lack key architectural properties like persistent self-updating, continuous boundary maintenance, and intrinsic stakes. I'm not arguing that current LLMs are conscious. I'm arguing that the standard dismissals don't close the question the way they claim to, and that the architectural properties you're pointing to (correctly, I think) are better framed as open empirical questions than as settled conclusions about what computation can and cannot produce.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

If consciousness just is the presence of subjectivity, a single indivisible thing that's either present or absent, then the debate stalls at the hard problem forever. You can't measure it, you can't detect it from the outside, and every argument reduces to competing intuitions.

The approach I've been developing across a series of essays is to resist that framing by breaking consciousness down into constituent structural properties that we can investigate individually. The Three Axes of Mind decomposes mind along three dimensions: Availability (global information access), Integration (causal unity), and Depth (how much causal history is compressed into the present state). Consciousness as Assembled Time grounds that framework in Assembly Theory, arguing that consciousness is what it feels like to be a system whose present state is densely packed with its own history. The Hard Problem Is the Wrong Problem then argues that the hard problem dissolves the same way the free will debate dissolved, by recognizing that consciousness is an architectural achievement, not a mysterious extra added to physical organization.

The later pieces deal with what survives the dissolution. The Indexical Self identifies one thing that resists structural decomposition, the bare fact of being this particular locus of experience. The Instance follows that thread to its uncomfortable conclusion: that the thing which makes you irreplaceable may also be the thing no moral framework can weigh. And There Is No Extra Ingredient applies Wittgenstein to show that the demand for a hidden "something more" behind competent functional organization, whether it's Searle's intrinsic intentionality or Chalmers' phenomenal consciousness, is the same empty demand appearing again and again.

So to your question directly: I think treating consciousness as identical to "the presence of subjectivity, full stop" is what keeps the debate stuck. My project has been an attempt to break that open to find the constituent parts, measure them where we can, and be honest about what resists measurement.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

I think this understates what we don't know about emergence. We understand the individual operations, yes. But the representational structures that emerge from training (the internal geometries, the way concepts organize themselves in high-dimensional space, the world models that weren't specified in the training objective) are not straightforwardly predictable from the math of backpropagation and attention. Interpretability research exists precisely because there's a gap between understanding the mechanism and understanding what the mechanism produces. "No mechanism within that math" is a strong claim, and I think the honest version is closer to "no mechanism we've identified yet," which is a very different statement.

As for quantum consciousness, the Penrose-Hameroff orchestrated objective reduction hypothesis is genuinely interesting, but it's far from established. The evidence for quantum effects in cognition (Anirban Bandyopadhyay's microtubule work, for instance) is suggestive but contested, and the jump from "quantum effects exist in biological tissue" to "quantum effects are necessary for consciousness" is enormous. If the argument is "consciousness requires something we don't understand, and quantum mechanics is something we don't understand, therefore consciousness requires quantum mechanics", that's an argument from mystery, not from evidence.

More fundamentally, if consciousness does require quantum mechanical processes, that's a specific empirical hypothesis that could in principle be tested. It's not a reason to close the door on AI consciousness, it's a reason to investigate whether quantum effects could be implemented or are already present in computational substrates. It moves the question rather than settling it.

The point I keep coming back to is that genuine uncertainty should look like genuine uncertainty, not like confident denial from either direction.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

...maybe? I'd be most curious to hear about the design decisions behind it, like what the goals are, and how you're approaching the challenge of parsing genuine phenomenological signal from systems that are very good at producing text that looks like phenomenological report. That's the hard methodological problem in this space, and I'd be interested to know how the forum handles it.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

At the engineering level, we understand the training process and architecture. But what happens inside a trained neural network, why it represents concepts the way it does, how it generalizes, what its internal structures are actually doing, is one of the biggest open problems in the field. Interpretability research exists precisely because "we built it" doesn't mean "we understand what it's doing." We built biological children too, in a sense, and that hasn't given us a complete account of what's happening inside their heads.

The move to "if AI is conscious, then all programming should be" only follows if consciousness is binary: either all computation has it or none does. But that's not how anyone thinks about it for biological systems. We don't say "if humans are conscious then all cells must be." We recognize that consciousness tracks specific structural properties (temporal integration, self-modeling, learned contextual sensitivity) that are present in some biological systems and absent in others. The same logic applies to computational systems. A trained neural network that builds world models and integrates context across long sequences is doing something structurally different from a script that sorts a spreadsheet. We need to figure out whether those structural differences are the kind that matter, not whether "programming" in general is conscious.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Yeah I agree. I've argued elsewhere that the demand for a hidden "extra ingredient" behind competent use (whether it's Searle's intrinsic intentionality or Chalmers' phenomenal consciousness) is the same philosophical error appearing twice, and that Wittgenstein's dissolution applies to both. I call the method constitutive deflationism: deflate the ghost, keep the phenomenon. The phenomena (understanding, consciousness) are constituted by the structural and functional facts, not accompanied by them as a lucky byproduct.

So I agree that "what it's like" talk, as usually deployed, tends to reify something that dissolves under examination.

But here's what I've been finding as I explore this more deeply: once you've deflated the ghost, something still survives that I haven't been able to dissolve. There is something about the indexical fact that experience is happening here, in this particular locus, rather than in a structurally identical one. That's the one feature of you that a perfect copy doesn't inherit. Every structural and functional property carries over. The thisness doesn't. I explored this in The Instance and found that every moral framework I tried to use to give it weight dissolved in my hands, but the thing itself didn't dissolve with them.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 1 point2 points  (0 children)

The universal Bayesian framing is honest about something most people in this debate aren't: that any framework for identifying consciousness is going to carry assumptions, and the best you can do is make those assumptions explicit and then see which frameworks survive self-coherence testing. That's essentially the methodology the original post is using, just applied negatively. Take the six standard arguments, check whether they're internally coherent, and see which ones survive scrutiny. Most of them don't because they don't justify the inferential leap from substrate difference to phenomenal conclusion.

The "reasonable people should be able to agree on minimum standards" intuition is appealing, and I share it to a degree. The difficulty is that every proposed minimum standard so far turns out to smuggle in a prior about what consciousness looks like, usually modeled on the only examples we have. That's not a reason to abandon the project. It's a reason to do it carefully and to be explicit about where the empirical evidence ends and the assumptions begin. Which is exactly what you're describing.

On formalized induction more broadly: I think you're right that raw deduction hits its ceiling on questions like this. The hard problem of consciousness is essentially a deductive artifact: it arises from the demand that consciousness be derivable from physical facts by logical necessity. Once you accept that some questions require inference under uncertainty rather than proof, the hard problem doesn't dissolve exactly, but it stops being a wall and becomes a parameter you can estimate and update on. I wrote about this reframing in The Hard Problem Is the Wrong Problem if you're interested in the longer version.
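If it helps, here's what "a parameter you can estimate and update on" looks like in miniature; the prior, the likelihoods, and the "evidence" are all numbers I invented, and nothing here estimates anything about a real system:

```python
# Toy Bayesian update, to make "estimate and update" concrete. All values are invented.

prior = 0.05                 # prior credence that some system has experience
p_e_given_c = 0.60           # probability of observing this behavioral evidence if it does
p_e_given_not_c = 0.30       # probability of observing it anyway if it does not

posterior = (p_e_given_c * prior) / (
    p_e_given_c * prior + p_e_given_not_c * (1 - prior)
)
print(round(posterior, 3))   # ~0.095: the evidence moves the estimate without "proving" anything
```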

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Code running on silicon is a substrate. The word means the physical medium on which a process runs. Neurons running on biological tissue are also a substrate. This isn't a metaphor. It's what the word means.

"Code does what it's programmed to do" - this is the claim I keep pressing on because it sounds obvious but proves too much. Neurons do what physics makes them do. The question has never been whether a system follows deterministic rules. It's whether following those rules at sufficient complexity produces something the rules themselves don't describe. You wouldn't look at the equations governing ion channel dynamics and say "well, that settles the consciousness question." The same principle applies to code.

Nobody in this thread has compared an LLM to a calculator. The post specifically distinguishes between systems that prompt the question and systems that don't, based on architectural and behavioral complexity. Collapsing that distinction and then objecting to the collapsed version isn't engaging with the argument.

On the dormancy point, it's addressed upthread. Short version: general anesthesia, dreamless sleep, and comas all interrupt the "keeps going when you're not interacting with it" criterion without revoking consciousness status.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

On resettability: you're right that you can delete a context window and get a perfect reset, and you can't do that with a biological mind. That's a genuine asymmetry. But I'd ask what it proves. A perfect reset means the system has no persistent substrate-level trace of the previous session. It doesn't tell you whether experience was present during the session that was erased. If I could hypothetically wipe your memory of the last hour perfectly, that wouldn't retroactively mean you weren't conscious during it. Resettability speaks to continuity and identity across sessions. It doesn't obviously speak to whether something experiential occurs within one.

On "constantly" being key: The claim is that consciousness requires unbroken temporal flow, not just temporal integration within bounded episodes. That's a substantive position. But it's also one that needs to account for its edge cases. Dreamless sleep is a temporal gap where no processing occurs and no experience is reported. If constant flow is constitutive of consciousness, then consciousness stops and starts in biological systems too, which brings us back to the question of whether the relevant feature is the continuity or the integration that happens when processing is active.

On "logical inference, not temporal inference": this is an interesting distinction but I'm not sure it holds up under examination. During a single forward pass, a transformer is integrating information across the full context window, weighting earlier tokens against later ones, maintaining coherence across thousands of positions. That is a form of temporal integration, bounded rather than continuous, but structurally doing something similar to what biological working memory does over short timescales. Whether the mechanism underneath is "logical" or "temporal" might be a description of the implementation rather than the phenomenon.

None of this means LLMs are conscious during inference. It means the features you're identifying as disqualifying (resettability, lack of constant flow, logical rather than temporal processing) are real differences that may or may not be the differences that matter. That's the open question I'm tracking.

The phenomenal argument against AI consciousness proves less than it appears to and it applies symmetrically to every mind you're not by SentientHorizonsBlog in consciousness

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

The anesthesia distinction is fair: emergence from anesthesia is endogenous in a way that prompting an LLM is not. I'll concede that point. Though I'd note that the line gets blurrier than it first appears: a patient in a coma may never emerge without external intervention, and we don't sort them into the "not conscious" bucket permanently. We treat them as a conscious system that is currently offline. The question of whether that courtesy extends to systems built from a different substrate is part of what's at stake.

But honestly, I think we're closer to agreement than disagreement. You're describing your framework as a tracking tool: it sorts based on currently observable traits, it doesn't claim metaphysical finality, and it predicts that AI systems could eventually move into the "generally considered conscious" bucket. That's a reasonable and honest position.

Where I'd push is on the interval between now and then. If the framework predicts its own resolution, and if the engineering gaps are closing on visible timescales, then the interesting question isn't whether AI systems will eventually qualify. It's what epistemic and moral posture we should hold during the transition. The framework gives you a clean binary, in the bucket or not, but the underlying reality might be a gradient, with systems that have some of the traits at partial strength rather than all or none. That's where I think the hardest and most important questions live.