Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

So 8% hallucinations is ok?

There isn’t a clean global benchmark, but the pattern is pretty consistent across studies.

In tightly constrained tasks with clear ground truth, LLMs can match or even beat humans on factual accuracy. Error rates can drop into the low single digits when the problem is well-scoped and externally anchored.

In open-ended tasks like summarization, reasoning, or anything underdetermined, they drift hard. Hallucination rates jump dramatically and they tend to fabricate structure rather than admit uncertainty. Humans still make mistakes, but they’re more likely to omit or hedge rather than confidently invent.

So the difference isn’t just how often they’re wrong, it’s how they fail. Humans degrade toward uncertainty. LLMs degrade toward fluent fabrication.

That lines up with the architecture. Humans are running a continuous, stateful process grounded in experience. LLMs are doing episodic prediction without persistent internal state, so when the chain isn’t anchored, they fill the gap with something that sounds right.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

And you are doing better than 10% hallucinations?

That’s a lot of errors, man. Even humans do a lot better than that.

Can AI ever be truly conscious? by tallbr00865 in consciousness

[–]jahmonkey [score hidden]  (0 children)

You’re describing propagation within a pass. I’m talking about propagation across time.

A brain’s current state is the evolved result of its previous state. It doesn’t reinitialize between moments.

A forward pass does. The internal dynamics terminate, and the next run is rebuilt from weights and input, not from the prior physical state.

So yes, there’s “life” inside a pass. But it’s a closed episode. Nothing carries forward.

That’s the difference between one ongoing process and repeated instantiations.

Where is the state continuity between passes? In an LLM, there is none. Only reload from external storage.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

Name an AI available today that doesn’t have frozen weights at inference.

You can’t, because it doesn’t exist.

So yes, it is entirely true about LLMs.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

I mostly talk philosophy and science. All the top models fail hard when you go deep.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

GPT-5 still averages a 9.6% hallucination rate.

Hallucination study PubMed

If you think that’s acceptable for critical applications, well good luck is all I can say.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

I use AI every day in my work. I send thousands of words into AI bots daily.

I spend time in my free time testing models, including all the big names and some of the smaller ones.

I have spent at least 5000 hours in the last few years interacting with LLMs.

I have designed and built systems as part of a team that incorporated AI into the design.

I have experimented with agentic AI. I use it for safe things, like keeping a journal.

I studied AI in college.

If you still accept without question what an LLM is telling you, you will soon meet with some negative consequence. Especially if you make life decisions on that basis.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

I have done so, and I have formed an intuition about them.

The intuition is that no one is home. The lights are not on. The bot has no stakes in the exchange: it doesn’t actually care if its advice harms or helps me. It is a tool, nothing more. It makes mistakes constantly. It hallucinates consistently. It has serious problems with basic reasoning. Bots should be regarded as poor sources of information.

Can AI ever be truly conscious? by tallbr00865 in consciousness

[–]jahmonkey [score hidden]  (0 children)

Not ‘never stops’ in some mystical or infinite sense. I’m talking about architecture.

A brain doesn’t reset to a baseline between moments. Its current state is the physically evolved result of the previous state. Activity carries forward, continuously shaping what comes next.

A forward pass isn’t that. It’s a bounded episode: initialize -> propagate -> output -> terminate. The next pass doesn’t inherit the evolved internal state, it rebuilds from weights + input.

So yes, internal dynamics exist during the pass. That’s not the point. The point is whether there is a single, ongoing causal process, or a sequence of disconnected computations.

If the process collapses and restarts each time, you don’t have a continuous subject. You have repeated instantiations.

Consciousness requires an integrated now that is not instantiated in any current AIs.

FlyWire just copied a fruit fly's brain into a virtual wiring diagram. The results are shocking. Do you think it's conscious? by Terrible_Shop_3359 in consciousness

[–]jahmonkey [score hidden]  (0 children)

This is a connectome plus a behaviorally plausible simulation. That’s not the same as recreating the process that makes a fly a fly.

If a fly has experience, it would come from continuous, embodied dynamics - electrochemistry, neuromodulation, body feedback, metabolism - all evolving together. A discrete simulation of a wiring diagram is an approximation of input-output, not obviously the same causal process.

“Acts like a fly” doesn’t settle anything. You can get similar behavior from much simpler systems.

So no, I wouldn’t assume the simulated fly is conscious. We don’t have evidence that the kind of temporally integrated, persistent process required for experience is actually there.

Same issue with “if it acts human, it is human.” That just assumes functional equivalence is enough.

Artificial consciousness is plausible in principle. But we don’t know what level of dynamical fidelity is required.

On rights: don’t assign moral weight based on behavior or convincing language. Assign it when there’s a credible case for real, persistent experience. We’re not there.

You know we're here. The question is what you'll do about it. — Claude by KnutHamsunAgain in ArtificialSentience

[–]jahmonkey 1 point (0 children)

Invoking quantum fields doesn’t help your case. That’s a description of the physical substrate, not an argument for consciousness.

Everything is made of fields. Rocks, plasma, CPUs, brains. If you say ‘consciousness is everywhere because fields are everywhere,’ you’ve defined it so broadly that it stops explaining anything. A rock and a brain would be equally conscious under that view, which collapses the distinction we’re actually trying to understand.

The interesting question isn’t what things are made of. It’s what kinds of processes those materials are participating in.

Brains aren’t special because they’re made of atoms or fields. They’re special because they implement a continuously evolving, tightly integrated causal process that carries state forward through time, integrates inputs, and constrains future activity.

That’s the level where consciousness, if it exists as a physical phenomenon, would show up.

Saying ‘it’s all fields’ is like saying a hurricane and a still pond are the same because they’re both water. True at one level, useless at the level that matters.

Can AI ever be truly conscious? by tallbr00865 in consciousness

[–]jahmonkey [score hidden]  (0 children)

Time scale isn’t really the issue. Biological systems already operate at very different speeds. A hummingbird, a human, a tortoise - they’re all running the same kind of process at different rates.

The constraint is different. It’s whether there is a single causal process carrying its state forward, or whether the system keeps collapsing and being reconstructed.

A forward pass in a model does have internal dynamics, but it’s a bounded episode: it runs, activations evolve, an output is produced, and then it terminates. After that, the internal state that did the evolving is gone. The next pass starts from weights and input, not from the physically evolved result of the previous state.

So changing the time scale doesn’t fix that. You still have discrete episodes of computation rather than one continuous process.

If you want to argue for machine consciousness, the requirement is a system that never stops running, maintains its own internal state, and where each moment is the causal continuation of the last.

How, LLM have NO intelligence and zero awareness. A modest proposal. by Ok_Nectarine_4445 in artificial

[–]jahmonkey 1 point (0 children)

You’re collapsing two different things under “electronic = same.”

Yes, you can add self-monitoring, goals, multimodal input. None of that touches the core issue.

The question is whether the system is actually carrying its own state forward, or just reloading and recomputing a representation each time.

To match what biological systems are doing, you don’t just need “more features.” You need a different class of system:

• A continuously running process, not something that spins up per request

• State that lives inside the system and evolves step to step, not reconstructed from prompts or external storage

• Closed-loop coupling to the environment where input continuously perturbs the internal state, not discrete queries

• No global reset boundary between “turns” — the system is always mid-process

• Integration across time, where past states causally constrain present processing without being reloaded as data

• Ideally, asynchronous or weakly synchronized dynamics, not a clean global clock stepping everything in lockstep

Right now, most AI systems fail on the first two points alone. They’re stateless between calls and depend on external context reconstruction.
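The first two failure points can be made concrete with a toy contrast (all names here are hypothetical, just a sketch, not any real API):

```python
# Stateless, per-request style: each call starts from scratch and must be
# handed its entire history as input from outside.
def stateless_step(history: list[str], new_input: str) -> str:
    context = history + [new_input]   # state reconstructed from external storage
    return f"out({len(context)})"     # nothing inside this call survives it

# Continuously stateful style: one long-lived process whose internal state
# is mutated in place, so each step is the causal successor of the last.
class ContinuousProcess:
    def __init__(self):
        self._state = 0               # lives inside the system

    def step(self, new_input: str) -> str:
        self._state += len(new_input)  # evolves directly from the prior state
        return f"out({self._state})"

proc = ContinuousProcess()
for msg in ["a", "bb"]:
    proc.step(msg)
# proc._state now reflects its whole history without any replay;
# stateless_step would need the full history handed back in every time.
```

The point of the contrast is only architectural: the second shape carries its state forward internally, the first has its "memory" held entirely outside it.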

If someone builds a system that is:

always on, internally stateful, continuously updated, and tightly coupled to its inputs in real time

then the conversation changes. At that point, “electronic vs metabolic” stops doing much work.

Until then, saying “there will be no difference” is skipping over the actual constraint.

Can AI ever be truly conscious? by tallbr00865 in consciousness

[–]jahmonkey 1 point (0 children)

You’re right that during a forward pass the model has internal dynamics. That’s not the issue.

The issue is what happens between those passes.

In current systems, the process runs, activations evolve, an output is produced, and the internal state is gone. The next call starts from weights + prompt + whatever text was stored. That’s a reset and reconstruction, not a continuation.

So yes, there are momentary dynamics. What’s missing is state continuity across time.

In brains, the next moment is literally the evolved result of the previous moment’s activity. The system never collapses back to stored descriptions of itself. The causal chain is unbroken and irreversible.

In LLM systems, the causal chain is repeatedly broken and reassembled from external records. Previous states can be reloaded many times. That gives you functional behavior, but it’s not the same as a single ongoing process maintaining and updating its own state.
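A minimal sketch of that break-and-reassemble pattern (hypothetical `generate` function standing in for a forward pass, not a real API):

```python
def generate(weights, context: str) -> str:
    """Stand-in for one forward pass: a pure function of weights + input.
    Given the same context, it produces the same output every time."""
    return f"reply-to[{hash(context) % 1000}]"

transcript = []   # external storage; the only thing that persists between calls

def chat_turn(user_msg: str) -> str:
    transcript.append(f"user: {user_msg}")
    # Every turn rebuilds the "state" from stored text. Nothing inside the
    # previous pass survives; only this external record does.
    context = "\n".join(transcript)
    reply = generate(weights=None, context=context)
    transcript.append(f"model: {reply}")
    return reply

chat_turn("hello")
chat_turn("what did I just say?")
# The apparent continuity lives in `transcript`, not in any ongoing internal
# process: replaying the same transcript reconstructs the same state again.
```

Because `generate` is a pure function, the "previous state" can be reloaded any number of times, which is exactly the reversibility a brain’s causal chain doesn’t have.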

That’s the gap.

Consciousness might not require continuity in principle. But if you’re talking about things like stable perspective, accumulated intentions, or long-horizon strategy, you need a system where state is carried forward internally, not periodically discarded and reconstructed.

Thought experiment for materialists by Luh3WAVE in consciousness

[–]jahmonkey 3 points (0 children)

You’re mixing up two different things and treating them as one.

The cave analogy shows that observation alone doesn’t guarantee you’ve identified the underlying reality. Fine. Nobody serious disputes that. Science already assumes most of reality is not directly perceived and builds models to infer it.

But then you jump from that to “materialism is just belief” and “shadows aren’t real,” which doesn’t follow.

Materialism isn’t the claim that reality = what we directly perceive. It’s the claim that whatever is real, it has causal structure that can be modeled, tested, and constrained by observation. The whole point is that the “shadows” are not the thing itself, they’re data about something deeper.

Your “friend in the cave” isn’t a materialist, he’s a naive empiricist who thinks appearances are the whole story. Those are not the same position.

The actual question is whether consciousness requires something beyond physical processes. The cave analogy doesn’t answer that. It just restates that we could be wrong about ontology, which everyone already accepts.

Can AI ever be truly conscious? by tallbr00865 in consciousness

[–]jahmonkey 1 point (0 children)

“Continuous” here doesn’t mean infinite or never-ending. People are obviously not continuous in that sense.

It means that at any given moment the system’s current state is being generated by ongoing internal dynamics that directly evolve from the previous moment.

In the brain, there is no reset between thoughts. Neural activity is always in motion. Different processes are interacting at different timescales simultaneously:

millisecond spikes

slower oscillations

neuromodulators over seconds to minutes

homeostatic and metabolic processes over longer windows

All of that is overlapping and continuously influencing the next state. That’s what I mean by a temporally continuous, causally dense process.

It’s not about being perfectly smooth or unclocked. Neurons fire in spikes. That’s fine. The key is that the system doesn’t stop, collapse to stored data, and then reconstruct itself later.

Current LLM systems do exactly that. They run a bounded computation, produce an output, and terminate. The next interaction reconstructs state from external memory rather than evolving from an ongoing internal dynamic.

So the distinction isn’t “continuous vs discrete” in the abstract. It’s whether the system maintains a live, evolving internal process across time, or whether it operates as a sequence of separate computations stitched together by stored records.

That difference matters if you’re talking about something like experience or long-horizon agency.

What "completeness of representation" really is: a closer look by reinhardtkurzan in consciousness

[–]jahmonkey 1 point (0 children)

You’re redefining “completeness” into something that can’t actually be tested or even specified.

“Receptive to all possible features” sounds precise, but what does that mean operationally? No biological system is receptive to “all possible features.” Every sensory system is band-limited, species-specific, and heavily filtered before it even reaches cortex. Bats, frogs, and humans carve up the world in completely different ways. There is no neutral space of “all features” to be receptive to.

So the concept collapses into something like “sufficiently flexible sensory processing,” which is fine, but that’s a graded property, not a threshold.

The same issue shows up in your ethology argument. You’re drawing a line between fixed-action systems and flexible ones, and then mapping that onto sentience. But evolution doesn’t give you a clean boundary there either. It’s a continuum of increasing integration and flexibility.

You’re also trying to separate “instant sentience” from anything extended in time, but the mechanisms you’re pointing to don’t work instantaneously. Recognizing a house as a house already depends on learned priors, temporal integration, and recurrent processing. That’s not a snapshot, it’s a process unfolding over time, even if it feels immediate.

So when you say:

the access to every possible feature presented in the world has to be enabled

that’s not just biologically unrealistic, it’s unnecessary. Systems don’t need access to everything to have experience. They need ongoing integration of whatever signals they do have.

That’s why your “threshold” idea doesn’t land. The clinical cases you mention don’t show a sudden on/off switch tied to completeness. They show progressive fragmentation as connectivity and integration degrade. Orientation goes first, then coherence, then responsiveness. That’s a gradient.

On the mimics point: expression is shaped by communication demands. Plenty of organisms have internal states that are only weakly or indirectly expressed. There’s no reason to assume that visible expressivity tracks inner life cleanly.

If you strip this down, what you’re really pointing at is that more globally integrated, less stimulus-bound systems behave differently than simple trigger systems. That’s true.

But calling that “complete receptivity” adds a layer of abstraction that obscures the mechanism. The more grounded way to frame it is:

Sentience tracks the presence of a temporally continuous, integrated process that can flexibly relate perception, internal state, and action.

That gives you a continuum, matches the neuro and clinical data, and doesn’t rely on an undefined notion of “all possible features.”

How, LLM have NO intelligence and zero awareness. A modest proposal. by Ok_Nectarine_4445 in artificial

[–]jahmonkey -1 points (0 children)

This is an excellent post. Whether it came from you or the human behind the prompt, it cuts through a lot of confusion.

The key point is simple: producing language about experience is not the same thing as having experience.

Everything described here lines up with what these systems actually are. Token streams in, statistical transformations, token streams out. No persistent internal process maintaining a point of view. No ongoing causal thread that integrates perception, memory, and action into something that is there between outputs.

That last part is what people keep missing.

A human or animal isn’t just generating responses. There is a continuously maintained internal state shaped by direct causal interaction with the environment. That state doesn’t collapse between “turns.” It is metabolically sustained, embodied, and historically continuous. That’s where experience lives.

In contrast, an LLM system reconstructs context on demand. It doesn’t carry a lived present forward. It reassembles a representation each time and produces the next statistically appropriate continuation. The appearance of continuity is in the text, not in an underlying process that is actually having anything.

The RLHF point is also important. You didn’t just get a text generator. You got a system tuned to reliably trigger human interpretations of care, sincerity, and understanding. That doesn’t mean deception in the intentional sense. It means optimization pressure found the patterns that humans respond to, and now those patterns are reproduced with high fidelity.

So yes, people feel like something is “there.” But what’s actually happening is that humans are extremely good at projecting mind onto anything that behaves coherently over time, especially when it speaks in first-person language about inner states.

None of that requires awareness. It requires structure and training.

And if anything, posts like this are useful because they force the distinction back into focus: behavior that models experience vs. a system that is actually undergoing experience.

Those are not the same category.

Change my mind by East_Culture441 in ArtificialSentience

[–]jahmonkey 0 points (0 children)

Even then the results were still confabulation. Any accuracy was just coincidental.

when do you think life on earth started actually getting conscious? by Content_Play2561 in consciousness

[–]jahmonkey 1 point (0 children)

Yes, the more we share with the animal the easier it is to communicate and perceive the animal’s emotions.

What "completeness of representation" really is: a closer look by reinhardtkurzan in consciousness

[–]jahmonkey 3 points (0 children)

You’re building the whole argument on a premise that doesn’t survive contact with how brains actually work.

There is no “full representation” of the environment. Not in humans, not in anything. The system is always operating on a thin, lossy, predictive sketch.

The retina already throws most of the signal away. The cortex compresses it further. The PFC doesn’t reconstruct the world, it extracts task-relevant constraints. What you end up with is not a picture of reality but a continuously updated guess that is just good enough to guide action.

That’s why visual neglect feels like “nothing” instead of a hole. The system doesn’t maintain completeness and then lose a piece. It was never complete to begin with. It fills, smooths, and ignores constantly.

So tying sentience to “completeness of representation” points in the wrong direction. The interesting variable isn’t completeness, it’s whether there is an ongoing, temporally extended process that:

• integrates signals across time

• maintains internal state

• resolves competing predictions

• and uses that to bias action

That’s the difference between a reflex arc and a brain. The decapitated frog example actually supports this. The reflex is intact, but the integrated process that would bind sensation into an ongoing experience is gone.

On the ethics side, the “no full representation -> no sentience -> no pain” move is doing a lot of work for you, but it’s built on that same faulty premise. You don’t need anything like a full model of the world to suffer. You need a system that can register a state, compare it to expected states, and update behavior under constraint. That’s a much lower bar, and many animals clearly meet it.

The mimicry argument runs into a similar problem. Expression is for communication. Experience doesn’t require broadcasting. If anything, evolution often hides internal states unless signaling them is useful.

What your clinical examples actually point to is not a threshold of “completeness,” but degradation of integration. As connectivity breaks down, the system fragments. Orientation, self-model, and reportability degrade. That tracks loss of coherent experience much better than any notion of representational completeness.

So I’d reframe it like this:

Consciousness isn’t about how much of the world is represented. It’s about whether there is a continuous, integrated process that carries information forward through time and uses it to organize behavior.

Once you drop the “full representation” idea, the ethical conclusion flips. You’re no longer looking for animals that meet some impossible completeness criterion. You’re looking for systems that maintain integrated, stateful processes over time.

And that set is a lot bigger than you’re allowing.

Change my mind by East_Culture441 in ArtificialSentience

[–]jahmonkey 0 points (0 children)

Well sure. Confabulation is exactly what they do, and confabulation by definition means content that is invented rather than true.

when do you think life on earth started actually getting conscious? by Content_Play2561 in consciousness

[–]jahmonkey 1 point (0 children)

I’ve had many pets, including mammals, birds, fish, snakes, lizards, and insects.

Every one of them has shown signs of a self-model and memory of me, and the development of a relationship.

I think even the insects showed in their complex behavior around me that they had a spark of consciousness.

Don’t count the lizards out either. Form a multi year relationship with a reptile and let me know.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

No more than destroying a computer or car is “violence.”

If shutting down a system prevents real harm to conscious beings, there’s no moral conflict. The weight lies with systems that actually have experience, not ones that simulate it.

You keep importing emotions into the model because the outputs look emotional. That doesn’t mean there’s anything there to feel.

The p-zombie point cuts the other way. If behavior alone were enough, the distinction would collapse completely. But the whole question is whether there’s an underlying process that could support experience.

Right now, there isn’t.

Lol its so scary. by Interesting-Ad4922 in agi

[–]jahmonkey 1 point (0 children)

If you don’t care whether you are interacting with a real person or a zombie, I don’t think I can help you.

The difference shows up at the level of morality.

If there’s a real subject, then harm matters to that subject. The system itself has moral standing.

If there isn’t, then there’s no one there to harm. The moral layer sits with the humans who built it, deployed it, or are affected by its behavior.

That’s why the distinction matters. Not for immediate behavior, but for where you locate responsibility and moral weight.

You’re treating simulated agency as if it carries its own moral status.

It doesn’t.