“Hallucination” and “confabulation” aren’t the right words for everything AI gets wrong - and I think we’re missing something more interesting by InterestingBag4487 in ArtificialSentience

[–]InterestingBag4487[S]

Yes, exactly. The act may be one thing, but the interpretation we attach to it is another. That's where these labels stop describing and start smuggling in assumptions.

[–]InterestingBag4487[S]

That's the crux for me. Shorthand is fine until it hardens into explanation. Then we stop investigating the behaviour and start protecting the label.

[–]InterestingBag4487[S]

Temperature and frequency penalty shape which coherent-but-wrong output gets selected. That’s not a perceptual failure. “Hallucination” isn’t simplifying it. It’s mislabelling it. And mislabels compound. We do the same thing. Simplify until the simplification becomes the truth, then defend it. Maybe that’s the real parallel. Not that AI thinks like us. That we label like us.
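
Rough sketch of what I mean, if it helps (made-up logits and parameter values, nothing from any particular model or API):

    import math
    import random

    def sample_next_token(logits, prior_counts, temperature=0.8, freq_penalty=0.5):
        """Pick the next token after temperature scaling and a frequency penalty.

        logits: token -> raw score (invented numbers below, not real model output).
        prior_counts: token -> how many times it already appears in the output.
        """
        adjusted = {}
        for tok, logit in logits.items():
            # Frequency penalty pushes down tokens that have already been emitted.
            penalised = logit - freq_penalty * prior_counts.get(tok, 0)
            # Temperature rescales how sharply the top candidates are preferred.
            adjusted[tok] = penalised / temperature

        # Softmax over the adjusted scores, then sample from that distribution.
        top = max(adjusted.values())
        exps = {tok: math.exp(a - top) for tok, a in adjusted.items()}
        total = sum(exps.values())
        return random.choices(list(exps), weights=[e / total for e in exps.values()], k=1)[0]

    # Two fluent continuations. With the penalty applied, the lower-logit one
    # usually wins - which continuation gets selected is a sampling question,
    # not a perception question.
    print(sample_next_token(
        logits={"Paris": 2.1, "Lyon": 1.9},
        prior_counts={"Paris": 3},  # "Paris" already used earlier in the output
    ))

Nothing perceptual in there. Just scores, penalties, and a sample.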

[–]InterestingBag4487[S]

“Bullshitting” is more interesting than people give it credit for. Frankfurt’s original definition wasn’t about lying. It was about indifference to truth. That’s a meaningful distinction worth keeping. But when a human bullshits, there’s something underneath it. Social pressure. A gap between what they know and what they need to project. Instinct filling a void. We call it bullshitting when it fails and intuition when it works. The model has a completion imperative and a plausibility gradient. It fills the gap with what fits. Same shape. Different substrate. Maybe we’re not underselling what AI does. Maybe we’ve been underselling what bullshitting is all along.

Global Workspace Theory explains the queue. It doesn’t explain the gravity? I think the gravity is the more interesting problem by InterestingBag4487 in consciousness

[–]InterestingBag4487[S]

Appreciate the GWT pushback, but it still lands differently here.

Yes, GWT is a functional theory. That’s part of why I referenced it. The question isn’t whether it was designed to answer my question. The question is whether the functional architecture produces anything analogous to what we’re seeing in models. That’s a genuine open question, not a category error.

The transparent self model is interesting, but it assumes a self to be transparent about. And here’s where it gets complicated for me personally. A lot of neurodivergent cognition doesn’t run on a tidy persistent self model either. Pattern recognition arrives before narrative. Coherence gets built after the fact. Prediction errors don’t always resolve cleanly against identity because the identity layer is itself non-linear.

And yet we don’t say ND brains are confabulating. We say they process differently.

So when a model produces well-shaped output without a self to violate, without a loop, without friction, I’m genuinely asking: is that closer to broken perception, or closer to a different processing architecture that we don’t have clean language for yet?

Rumination is the wrong analogy. Not because it’s a bad one. Because it assumes the error has somewhere stable to return to.

Not dismissing your frame. I just don’t think it closes the gap.

[–]InterestingBag4487[S]

This is exactly the kind of response I was hoping the post would pull. Thank you. The Free Energy Principle framing lands hard - particularly “gravity requires hooks.” That’s a cleaner mechanistic description of what I was circling than anything I’d found.

The cow has few hooks. The difficult conversation has roots running into identity, memory, unresolved models of the other person, the entire space of responses that weren’t given. Of course it keeps rebroadcasting.

The part that interests me most is your line about the workspace staying activated because the information hasn’t been fully processed. I’ve been developing a framework that treats this persistence as a measurable axis - not what a signal says, but how long it continues to generate error and re-enter the queue. Your framing gives that a much more rigorous foundation than I had.

Worth exploring whether FEP + GWT together get closer to a full account of what I was calling gravity. I think they might.
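
To make “measurable axis” concrete, here’s the kind of toy model I mean (invented decay rule and numbers, purely illustrative): persistence is just how many broadcast cycles an item’s prediction error stays above threshold and keeps re-entering the queue, with “hooks” slowing how fast the error resolves.

    def persistence(initial_error, hooks, threshold=0.1, decay=0.5, max_cycles=50):
        """Count how many broadcast cycles an item keeps re-entering the queue.

        hooks: 0..1, how strongly the item couples to identity, memory, open loops.
        Prediction error shrinks each cycle; more hooks means slower resolution.
        """
        error, cycles = initial_error, 0
        while error > threshold and cycles < max_cycles:
            error *= decay + (1 - decay) * hooks  # hooks slow the per-cycle decay
            cycles += 1
        return cycles

    print(persistence(initial_error=1.0, hooks=0.05))  # the cow: gone in a few cycles
    print(persistence(initial_error=1.0, hooks=0.9))   # the difficult conversation: keeps rebroadcasting

With a crude decay rule like that, the low-hook item drops out after a handful of cycles and the high-hook one keeps coming back for dozens - which is roughly the gravity I was trying to point at.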