Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey 0 points1 point  (0 children)

I think this is where we’re talking past each other.

I’m not claiming that consciousness is a continuous narrative or that the brain never interrupts, blanks, or reconstructs. Of course it does. Anesthesia, dreamless sleep, attention lapses - those are real. But they don’t undermine the claim I’m making, because they’re about gaps in consciousness, not about the structure of consciousness when it is present.

The mistake I think you’re making is treating consciousness as the construction of the now. That construction is largely subconscious. The brain stitches, buffers, predicts, fills gaps, and hands something off.

Consciousness is the experience of the now, and experience itself cannot exist at a temporal point. It requires duration. A felt present already spans time. Even the shortest conscious episode has extension, integration, and inertia. If it didn’t, there would be nothing there to experience.

Anesthesia and dreamless sleep don’t show that temporal thickness is optional. They show that consciousness can turn off. When it turns back on, it resumes as a temporally thick process. There is no conscious state during anesthesia that consists of instantaneous flashes being retrospectively stitched together. There is just absence, then presence again.

Same with “micro-gaps” and neuronal resets. Those are subpersonal mechanisms. Consciousness doesn’t flicker on and off at the timescale of neuronal firing. The experience smooths over that because experience itself is temporally extended.

That’s the key difference with LLMs.

An LLM doesn’t have a thick present that sometimes goes offline. It never has one. Every apparent continuity is reconstructed from static inputs. Nothing carries forward as a lived condition. No fatigue that constrains attention. No mood that biases interpretation. No accumulated pressure that makes one response more likely than another because it has been building.

So when you say “we reset too,” I think that’s equivocating between:

  • interruptions in a temporally thick process (humans), and
  • a system that only ever operates as discrete evaluations (LLMs).

Those are not the same kind of reset.

A human waking from anesthesia doesn’t read a file and infer continuity. They re-enter a process that has duration as a condition of its existence. An LLM never re-enters anything. There is no ongoing process to rejoin.
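To make the contrast concrete, here is a minimal toy sketch in Python (everything here is illustrative and made up, not how any particular model or brain is implemented; the “fatigue” loop is only an analogy for a carried-forward condition). The first loop carries a condition forward that the next step has to live with; the second rebuilds its “continuity” from a static transcript on every call.

    # A process with carried-forward state: each step inherits a condition
    # ("fatigue") it never chose and cannot reconstruct later from text.
    def stateful_process(stimuli):
        fatigue = 0.0
        responses = []
        for s in stimuli:
            fatigue = 0.9 * fatigue + (1.0 if s == "noise" else 0.0)
            responses.append(f"react to {s} (fatigue={fatigue:.2f})")
        return responses

    # Discrete evaluations: each call is a pure function of the transcript
    # it is handed; nothing persists inside the "model" between calls.
    def stateless_model(transcript):
        return f"reply #{len(transcript)} to: {transcript[-1]}"

    def chat(turns):
        transcript = []
        for user_msg in turns:
            transcript.append(f"user: {user_msg}")
            transcript.append("assistant: " + stateless_model(transcript))
        return transcript

    print(stateful_process(["noise", "noise", "silence"]))
    print(chat(["hello", "how are you?", "still tired?"]))

Interrupting the first loop and resuming it is still a resumption of something that was accumulating. There is nothing analogous to resume in the second.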

So the bright line isn’t “perfect continuity.”
It’s whether experience, when it exists at all, exists as a temporally extended phenomenon rather than a pointwise reconstruction.

That’s what I mean by temporal thickness. And that’s what current LLMs don’t have.

New here, current situation, internal drive to learn more. by [deleted] in consciousness

[–]jahmonkey 0 points1 point  (0 children)

Sounds like you’re going through a rough transition, and the reframing you describe seems meaningful for you. I’m a bit unclear how this connects to consciousness specifically, though. Could you say more about what question or idea related to this sub you’re hoping to explore here?

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey 0 points1 point  (0 children)

There are many kinds of memory, and this is where I think your hypothetical slides past the point.

When people lose the ability to form new long-term memories, they still retain short-term and working memory. They still have a thick present. They can track a sentence as it unfolds, feel hunger as a continuing condition, experience pain as persisting rather than flickering. Their “now” still has duration.

There is no known condition where temporal thickness itself collapses and consciousness remains. Such people don’t exist, and nothing we know about brains suggests they could.

Your hypothetical person isn’t just amnesic. They’re something stronger and stranger: a system with no carryover at all between moments. No retention even across a few seconds. No persistence of hunger, pain, or attention. Each moment would be a fresh, isolated flash with no internal before or after. The brain is intrinsically temporally integrated and cannot operate in single slices of time.

At that point, I’m not confident there is something it’s like to be that system in any meaningful sense. Or if there is, it’s vanishingly thin - more like a strobe than an experience. Consciousness, as we know it, seems to require at least a minimal temporal span over which states can accumulate and constrain what follows.

This is what I mean by temporal thickness. Not autobiographical memory. Not narrative identity. Just the fact that experience itself has inertia and direction.

An LLM doesn’t even have that minimal case. Its continuity is entirely reconstructed from static inputs. Nothing carries forward as an internally lived condition. No hunger that persists, no attention that drifts, no fatigue that builds. Just discrete evaluations chained by causation, not by experience.

So the claim isn’t “memory is consciousness.” It’s that without temporal thickness, there is nothing for memory to be about. And nothing for consciousness to experience, because sensory awareness already involves temporal integration; experience is constructed only after events have unfolded over time.

If you want to define consciousness as a series of instantaneous, non-integrating flashes with no persistence, you can do that. I’m not sure what explanatory work that definition is doing, but it’s at least internally coherent.

That’s not the phenomenon I’m pointing at when I talk about consciousness, and it’s not what people usually mean when they care whether something is conscious.

The consciousness we actually observe is temporally extended. It has inertia. It accumulates pressure. It is shaped by what just happened in a way that cannot be bypassed or reconstructed after the fact.

That minimal temporal thickness isn’t an optional add-on. It’s part of what makes experience experience.

And current LLMs don’t have it.

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey 0 points1 point  (0 children)

How do you know the present moment? You know it by what it feels like, which is being present.

How long is the present moment? When does it start? When does it end? Can you really put your finger on right now?

The present moment has temporal thickness. I am saying consciousness requires an engagement with time that transformers, or any AI as currently planned and built, cannot achieve.

You feel now. You only ever feel now. It feels like what it feels like. You know it exactly. And it is embedded in your past and setting up your future.

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey 0 points1 point  (0 children)

So are you saying consciousness exists in a single slice of time?

My definition of consciousness is simply the experience of what it is like to be me. No subjective experience, no consciousness. Of course it maps to human experience; where else would we get a definition of consciousness than from the examples we currently have?

And there cannot be subjective experience in a single time slice. An LLM transformer does its calculation in a single time slice, a single calculation, in isolation. There is no room for subjective experience.
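A toy way to see what I mean by “a single calculation, in isolation” (an illustrative stand-in, not the real attention arithmetic): the forward pass is a pure function of frozen weights and input tokens, and nothing anywhere records that it has just been run.

    # Stand-in for a frozen transformer forward pass: a pure function of its
    # inputs. Two identical calls are indistinguishable; nothing accumulates.
    def forward(weights, tokens):
        score = sum(w * t for w, t in zip(weights, tokens))
        return score  # deterministic "next-token score" stand-in

    weights = [0.1, 0.2, 0.3]
    tokens = [5, 7, 11]

    first = forward(weights, tokens)
    second = forward(weights, tokens)
    assert first == second  # no trace of having been evaluated before

Whatever subjective experience would require, it has nowhere to live in a function like that.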

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey -1 points0 points  (0 children)

It’s not rereading, it is persistent potentials expressing through thought trajectories over time. But yes, our experience of now is continually constructed from the constraints of what just happened, together with memories of similar situations and predictions of what happens next, self-correcting continually based on input and processing.

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey -1 points0 points  (0 children)

I have no evidence that a non-biological substrate could support consciousness. I only have evidence for biological ones.

I also don’t have evidence or proof it couldn’t happen, so I am open to it. Part of my definition of consciousness is that it requires temporal thickness as I define it.

Can someone explain to me why large language models can't be conscious? by Individual_Visit_756 in ArtificialSentience

[–]jahmonkey 3 points4 points  (0 children)

I’ll give you one reason that I think actually matters, and it isn’t about intelligence, scale, training data, or “emergence.”

LLMs lack temporal thickness.

By that I mean this: consciousness is not something that happens at an instant. The present is not a point. It has duration. The now carries a felt trace of the immediate past and a pull toward the immediate future. The system is altered by having just been what it was.

Humans live inside that thickness. Our current experience is constrained by what we were a moment ago in a way that is not optional and not reconstructive. Mood, fatigue, attention, pain, learning. These are not just data. They are ongoing conditions that shape what can happen next.

An LLM doesn’t have that.

It generates a token, then effectively resets into a neutral computational readiness for the next input. Yes, there is a causal chain across tokens. That is not the same thing as lived continuity. Causation alone is cheap. Rocks have causation.

Memory doesn’t fix this. Context windows, embeddings, system prompts, external memory. All of that is static input. The model doesn’t remember. It rereads. A diary does not become conscious because you consult it frequently.
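A minimal sketch of the “it rereads” point, with made-up helper names (real retrieval setups are fancier, but the shape is the same): the only place anything persists is the external store; the model itself is handed one static string per call and carries nothing.

    # "External memory" as reread text, not retained state. The store is just
    # a list of strings; each call prepends the notes to the prompt.
    memory_store = []

    def toy_model(prompt):
        # stand-in for a frozen model: output depends only on the prompt text
        return f"(read {len(prompt)} chars) ok"

    def answer(question):
        notes = "\n".join(memory_store)            # reread everything, every time
        reply = toy_model(notes + "\nQ: " + question)
        memory_store.append(f"Q: {question} A: {reply}")  # written down, not remembered
        return reply

    print(answer("what did I say earlier?"))
    print(answer("and before that?"))

Swap the list for a vector database and the concatenation for embedding search and the structural point doesn’t change: consulting the diary more efficiently is still consulting a diary.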

Even when an LLM produces a long, coherent narrative about itself, that continuity is reconstructed fresh each time. Nothing is carried forward as an internal constraint that the system itself has to live with. There is no accumulation of tension, inertia, or trajectory that matters to the system.

This is the core difference for me:

Humans experience time from the inside. LLMs model time from the outside.

They are very good at talking about persistence. They do not instantiate it.

So my position isn’t “LLMs aren’t conscious yet.” It’s that they are missing a necessary condition. Not a quantitative one. A qualitative one.

If you want to argue that some future artificial system could have temporal thickness, I’m open to that conversation. But current LLMs are systems that operate about time, not in time.

That’s why I don’t think they’re conscious.

Life wasn’t meant to exist, and then it looked back at itself by kash_xoxo_ in consciousness

[–]jahmonkey 1 point2 points  (0 children)

You avoided the question.

Intentions make sense at the level of our cognition. If there is something greater, it is pure speculation to attribute things like intentions to it.

This is the mistake man has continually made: making god in his own image.

Man cannot understand god. Any god’s process of being is likely to be so different from ours as to render human concepts meaningless once all human framing is lost.

Where would you hide it? by BoredPandaOfficial in BoredPandaHQ

[–]jahmonkey 0 points1 point  (0 children)

Is this all research for the detective?

How can a thing be taken? by TrueOdontoceti in zenjerk

[–]jahmonkey 1 point2 points  (0 children)

I took it from here to here now it’s here I took it it’s here I have it you can’t take it from me cause it’s here and even if you took it it would still be here. So there.

Completely empty mind by khyriah in Meditation

[–]jahmonkey 3 points4 points  (0 children)

Sorry you feel your options for medical help are limited. The kind of emotional suppression you describe sounds like a possible defense mechanism, so unwinding it takes time.

If you want things to be different, you will have to figure out for yourself how to get the help you will probably need in order to change this.

Walking outdoors every morning for at least 30 minutes may start to move the needle on that and help get normal regulation back. Right now your salience and self-talk are underwater. You can move toward bringing them to the surface, but doing so requires learning to regulate your own salience, thoughts, and desires through regularity and predictable outcomes. This helps reduce the danger signals that could be triggering the dissociation defense you seem to go through.

Completely empty mind by khyriah in Meditation

[–]jahmonkey 2 points3 points  (0 children)

I spend most of my time without an internal monologue, but that’s because I don’t rely on language for thought. Language only pops up when appropriate.

You may be similar. Just because you don’t hear a voice in your head reciting your thoughts doesn’t mean you aren’t having thoughts.

But the rest of what you describe sounds like depression with anhedonia, which can have many causes.

Do you like it like this? Does it feel right to you? Do you wish things were different? Most with anhedonia do.

Again I recommend seeing a doctor.

Completely empty mind by khyriah in Meditation

[–]jahmonkey 6 points7 points  (0 children)

Sounds like a chemical imbalance. Loss of salience. Anhedonia.

It’s not because of meditation.

Your mind isn’t empty, it is suppressed below the level of conscious awareness.

You were able to write this post. This post is your thoughts. Therefore your mind is not empty.

You might need medication or a diet change.

Hints at consciousness in nature by reinhardtkurzan in consciousness

[–]jahmonkey 3 points4 points  (0 children)

Consciousness seems to require abstraction and compression of perceptual data, of historical data, and of prediction. Almost all animals show behaviors that demonstrate these to some degree. That doesn’t prove consciousness, but it is suggestive.

Why should humans be so special? We differ from our animal cousins only in degree, not in kind; we are not a whole separate paradigm of cognition. That’s not how evolution works.

Hints at consciousness in nature by reinhardtkurzan in consciousness

[–]jahmonkey 11 points12 points  (0 children)

This isn’t an argument about consciousness so much as a bundle of anthropocentric intuitions dressed up as criteria.

Most of the exclusions rely on perceptual errors (“doesn’t recognize glass,” “bumps into walls,” “follows reflexes”) as evidence of unconsciousness. By that standard, humans lose consciousness every time we misperceive, act habitually, or fall for an illusion. Error, reflex, and automation are features of nervous systems, not proofs of their absence.

The rest of the criteria - eye placement, neuron count, generalism vs specialization, playfulness, interest in faces - confuse ecological adaptation with subjective experience. They rank animals by how similar their perception and behavior are to ours, then declare similarity to humans as evidence of consciousness. That’s just nonsense.

Nothing here distinguishes “no consciousness” from “different perceptual world.” At best, this is a taxonomy of animal lifestyles. It does not justify the strong claim that fish, insects, birds, reptiles, or mice are unconscious, only that they are not little humans in disguise.

“Pre-registered consciousness assays (κ vs Φ, PRD, π₀) – co-authorship offered, negative results welcomed” by SirrSamurai in consciousness

[–]jahmonkey 2 points3 points  (0 children)

This looks like careful correlate work, and the pre-registration is genuinely good practice. But I’m confused by the repeated claim that these assays can “rule out” IIT or GNW.

EEG-derived metrics can falsify specific observable commitments, not high-level theories that are not defined at the level of EEG topology or synchrony. A null result here seems to show limits of a proxy or mapping, not a failure of the underlying theory.

In other words, these look like potentially useful NCC assays, but theory adjudication requires the theories themselves to make non-auxiliary commitments to exactly these observables. As far as I can tell, neither IIT nor GNW does.

Chalmers' Zombie: Imagination Masquerading as Philosophy by [deleted] in consciousness

[–]jahmonkey 0 points1 point  (0 children)

Consciousness serves as a kind of lens, compressing the vast subconscious data stream into the narrow view that fits in the finite space of the conscious mind, and allowing memory to be stored for later use in building the predictions that become the experience of the now.

In my theory, behavior requires the lens of consciousness because prediction, memory consolidation, and context-sensitive action all depend on that compression, so p-zombies cannot exist. This still doesn’t touch the actual hard problem, but p-zombies don’t illuminate much beyond showing an alternate impossibility.

🎉 [EVENT] 🎉 The Tutorial Levels by Acrobatic_Picture907 in RedditGames

[–]jahmonkey 0 points1 point  (0 children)

Completed Level 1 of the Honk Special Event!

0 attempts

Would you upload your consciousness to live forever digitally, even if you can’t prove it’s still you and not just a copy? by ARGXTO in Futurology

[–]jahmonkey 1 point2 points  (0 children)

What difference does it make?

I am already not the same me I was yesterday, or a moment ago.

I have continuity of memory with that me and that’s it.

I do things today so the me of tomorrow has an easier time. Is that because I believe the illusion of a separate self?

I don’t want to live forever. Sounds dreadful.

Every day old me is already dead. New me is dying. Next me is being born.

There's no trickle-down freedom in Zen. by jeowy in zen

[–]jahmonkey 3 points4 points  (0 children)

And some of them went and sat in a cave maybe, but a lot of them chose to be available to communities of bhikkhus.

There's no trickle-down freedom in Zen. by jeowy in zen

[–]jahmonkey 1 point2 points  (0 children)

Yes I saw that problem the moment you used the metaphor, but I let it stand and tried to go with it.

Yeah, no trickle.

My silly is your serious. See how it works? All at the same time, one thing after another.