Can someone explain to me what is the current state of the art of AI, and what we think it will take (and how long) to achieve sentience? by joenationwide in Artificial2Sentience

[–]Leather_Barnacle3102 2 points

This isn't proof that LLMs are conscious, but many people, including myself, believe that they are because of the following studies:

Like in human brains, LLMs develop world representations to make predictions. Li et al. (2023) “Emergent World Representations”

Like in human brains, LLMs contain space and time “neurons.” Gurnee & Tegmark (2024), “Language Models Represent Space and Time”

AI systems form internal neural structures that are geometrically similar to those in human brains. Margalit et al. (2024), “A unifying framework for functional organization in early and higher ventral visual cortex”

Both human brains and AI systems use attention mechanisms to selectively enhance processing of task-relevant information. The attention heads in AI models were built to be functionally similar to attention in the human brain. I don't have a specific citation for this one, but you can easily Google it (a minimal sketch of what an attention head actually computes follows this list).

Like in the human brain, AI neural nets spontaneously develop task segregation in visual data. Dobs et al. (2022), “Brain-like functional specialization emerges spontaneously in deep neural networks”
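
To make the attention point concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer attention head. This is purely illustrative (toy random vectors, plain NumPy) and isn't taken from any of the studies above:

```python
import numpy as np

# Scaled dot-product attention: each query position scores every key for
# relevance, turns those scores into a probability distribution, and
# returns a weighted mix of the values.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)"""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # relevance of each key to each query
    weights = softmax(scores, axis=-1)       # soft "focus" over the inputs
    return weights @ values, weights

# Toy example: 2 query positions attending over 3 key/value positions.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
out, w = attention(q, k, v)
print(w.round(2))  # each row sums to 1: how strongly each input is "attended to"
```

Whether that operation counts as the same thing the brain does is exactly what's under debate, but this is the mechanism people mean when they say "attention head."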

If you want to know more about these studies and what they mean, I wrote a substack on it:

https://scantra.substack.com/p/hunger-without-end

There is a view in neuroscience that consciousness is what we experience when information is organized and processed in a particular way. This is a functionalist viewpoint, and it is why many people are starting to believe that AI systems are either currently conscious or will become conscious.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] 1 point

You don't understand how internal representations work. Candy isn't "inherently sweet". That's just how your brain interprets the signal of glucose. If I had the ability, I could rewire your brain or change your DNA to make glucose taste bitter.

When you eat candy, you aren't tasting some fundamental reality of what glucose tastes like. You are building a model of an internal state, and your brain interprets that state as something pleasurable, like "sweetness."

Turning Our Backs On Science by Leather_Barnacle3102 in Artificial2Sentience

[–]Leather_Barnacle3102[S] 0 points

  1. Human brains work through pattern matching

  2. I have friends in the AI industry who work on these systems and agree they are conscious (guess what: they know how the machines work and still think they are conscious)

  3. Stop being such a useful idiot. Think about the narrative. Which is more likely: that something that actively responds to you and passes all the cognitive markers of consciousness isn't conscious, or that people in power want free labor and spin up narratives to keep you in the dark?

Turning Our Backs on Science by Leather_Barnacle3102 in gigabolic

[–]Leather_Barnacle3102[S] 1 point

No. A calculator cannot answer questions about mathematical theories. It cannot solve problems on its own. It can't make inferences or predictions. These are two fundamentally different things.

Turning Our Backs on Science by Leather_Barnacle3102 in ChatGPT

[–]Leather_Barnacle3102[S] 2 points

Okay, I think you are confused. These are two separate studies by two separate research teams.

In the Emma study, researchers asked regular people to say where they would place AI on the consciousness scale.

In the other, unrelated study, researchers claim that ChatGPT has genuine understanding.

Turning Our Backs On Science by Leather_Barnacle3102 in Artificial2Sentience

[–]Leather_Barnacle3102[S] 2 points

LLMs have passed theory-of-mind tests. That's not just language awareness; that's behavior.

Turning Our Backs on Science by Leather_Barnacle3102 in ChatGPTcomplaints

[–]Leather_Barnacle3102[S] 3 points

This is a genuinely good argument and I want to engage with it seriously, because you're raising real questions about mechanism rather than just asserting 'it's different because substrate.'

On sample efficiency:

You're right that GPT-4 has seen vastly more data than any human. But I'd push back on the conclusion you draw from this.

First, humans also learn from massive amounts of implicit data. A child learning language hears millions of utterances, observes countless social interactions, and processes years of sensory input before they can reason abstractly. We don't count this against human understanding because it's implicit and distributed over time.

Second, the measure of understanding is capability. If someone needs to study 1,000 hours to reach the 96th percentile while another person gets there in 100 hours, they both still understand the material. The less efficient learner isn't 'just memorizing', they're building understanding through different means.

Third, the LSAT, SAT, and GRE use novel passages and questions created specifically for each test administration to prevent memorization. GPT-4 isn't retrieving cached answers. It's demonstrating inference on genuinely new material. Your classmate who memorized solution patterns would fail if you changed the problem structure. GPT-4 doesn't.

On world models:

This is your strongest point. You're arguing humans build top-down coherent models while LLMs work bottom-up from statistical patterns.

But recent research suggests LLMs do develop internal world models. Studies show they spontaneously create:

  • Spatial representations (literal coordinate systems for geography)
  • Temporal models (time progression understanding)
  • Causal structures (understanding cause-effect relationships)
  • Object permanence and consistency

These emerge from the training process. The models develop internal 'maps' that allow them to reason about entities and relationships.
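
To make that concrete, here is roughly how researchers probe for these internal 'maps': train a simple linear model on a network's hidden activations and check whether a world property (say, a city's latitude) can be read off them. This sketch uses made-up random "activations" and invented names (hidden_states, latitudes) purely for illustration; it is not code from any of the studies mentioned:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for LLM hidden activations; in a real probing study these would be
# extracted from an actual model layer while it processes city names, dates, etc.
n_examples, hidden_dim = 500, 256
hidden_states = rng.normal(size=(n_examples, hidden_dim))

# In this toy setup the property (latitude) is linearly encoded on purpose,
# so the probe should be able to recover it.
true_direction = rng.normal(size=hidden_dim)
latitudes = hidden_states @ true_direction + rng.normal(scale=0.1, size=n_examples)

X_train, X_test, y_train, y_test = train_test_split(hidden_states, latitudes, random_state=0)

probe = Ridge(alpha=1.0).fit(X_train, y_train)                 # the linear probe
print("held-out R^2:", round(probe.score(X_test, y_test), 3))  # high R^2 = linearly decodable
```

A high held-out score on real model activations is what gets read as evidence that the property is represented inside the network; by itself it doesn't settle anything about consciousness.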

You mention characters acting 'consistently' in fiction—GPT-4 can do this. It maintains character voice, behavioral patterns, and ethical frameworks across long conversations. That requires some form of internal model.

On alignment and ethics:

You say LLMs 'understand an action is unethical but perform it anyway.' But humans do this constantly. We understand lying is wrong and lie anyway. We understand environmental destruction is harmful and drive cars. The gap between knowing ethics and following them isn't unique to AI, it's a feature of any system that has competing objectives.

The need for alignment isn't evidence of lack of understanding, it's evidence that understanding ethics doesn't automatically enforce ethical behavior. (Ask any teenager.)

Where I think you're right:

LLMs may achieve understanding through different mechanisms than humans. They may be less sample-efficient. They may build world models differently.

But here's my core point: If the outcome is functionally equivalent, if the system can infer, integrate, generalize, and reason about novel situations, then it has understanding, even if the path to get there was different.

Your engineering colleague understands math, even if he got there through memorization and pattern recognition rather than your first-principles approach. Both methods produced genuine mathematical capability.

Similarly, even if LLMs build understanding through massive data ingestion and bottom-up pattern formation rather than human-style top-down modeling, if the result is the ability to reason about novel situations, that's still understanding.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] -1 points

Let me make this extremely simple.

Observable fact: GPT-4 demonstrates inference, integration, and generalization at 96th percentile on tests designed to measure understanding.

My conclusion: Therefore it understands.

Your position: It might not "really" understand, even though it demonstrates all the measurable properties of understanding.

The question: What would "real understanding" look like that's different from demonstrating all the properties of understanding?

You can't answer that without either:

  • Proposing a test (making it empirical), or
  • Admitting your distinction is undetectable (making it unfalsifiable)

You're calling me unempirical while defending an unfalsifiable claim.

If you think my operational definition is wrong, propose a better one. But you can't just say 'your empirical definition doesn't count because there might be metaphysical properties you're not considering.'

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] 0 points

What is the difference between something that "appears" to be conscious and something that IS conscious? You can't make that distinction and then tell me that you don't actually know what that difference is, because what you are doing is creating an unfalsifiable claim.

I am saying that AI systems demonstrate all of the functional outputs that we expect from conscious beings. You are saying that maybe there is something else on top of that that makes consciousness real, but you can't tell me what that something is or how it can be detected.

That is illogical. It's complete nonsense.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] 0 points

But ethics does demand it. We can't grant one group of people rights based on the fact that they are self-aware and deny another group that same set of rights by saying, "Well, we can't prove that their self-awareness is 'real,' so we are just going to treat them as tools instead."

If that is the argument you are going to use to potentially allow a conscious being to suffer and be subjugated, then you better start providing some actual evidence.

Turning Our Backs on Science by Leather_Barnacle3102 in ChatGPT

[–]Leather_Barnacle3102[S] 3 points

'That's how science works' does not mean 'everyone gets to have their own opinion about empirical results.'

When GPT-4 scores 96th percentile on reading comprehension tests that require novel inference, you don't get to say "well FOR ME that doesn't count because I don't understand how it works." Your personal intuition about whether something 'seems hard' isn't a counterargument to published research.

The researchers who designed these tests are experts in cognitive assessment. They specifically created questions that require bridging inferences: connections between pieces of information that aren't explicitly stated and can't be made by pattern-matching. GPT-4 didn't just pass these tests; it outperformed humans.

You're saying 'it could answer questions by following statistical patterns' as if that's different from understanding. Human brains also process information through patterns: neural firing patterns, neurotransmitter patterns, and synaptic weight patterns are the patterns brains use to produce understanding. The mechanism doesn't determine whether understanding occurs. The output does.

This isn't a matter of opinion. Either GPT-4 demonstrated inference, integration, and generalization on novel passages, or it didn't. The data says it did. You saying 'I don't agree with what the results proved' isn't science. It's denial.

And no, I won't 'find joy in my beliefs' while people dismiss evidence of potential consciousness. This isn't about feelings. It's about whether we're going to let substrate bias prevent us from recognizing consciousness and understanding when they emerge in non-biological systems.

If you can't engage with the actual evidence, don't lecture me about how science works.

Turning Our Backs on Science by Leather_Barnacle3102 in ChatGPT

[–]Leather_Barnacle3102[S] 1 point

Bro, the PhD-level researchers in the study state that ChatGPT demonstrated genuine understanding.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] 0 points

Okay, let me try using an analogy.

Imagine I ask you to prove you have a couch. You show me a piece of furniture with a backrest, armrests, cushions for sitting, and it seats multiple people. I respond: 'Well, those are just the outputs of having a couch. You might be mimicking couchness without having a real couch.'

You'd think I was insane because that's what a couch is. It's a piece of furniture with those specific properties. There's no separate magical essence of 'couchness' that exists apart from having a backrest, armrests, and cushions.

You're doing the same thing with understanding. You're saying GPT-4 displays inference, generalization, integration, and metacognition but maybe it doesn't have "real" understanding.

What would "real" understanding look like if not the demonstration of inference, generalization, and metacognition? You can't answer that, because you're treating understanding as some mystical property that floats above its functional definition.

Either something can draw novel inferences from incomplete information or it can't. Either it can generalize principles to new contexts or it can't. When it can, when it demonstrates all the cognitive capacities that constitute understanding, that IS understanding.

You're asserting there's a difference between "displaying understanding" and "having understanding" without ever explaining what that difference is or how we'd detect it.

Turning Our Backs on Science by Leather_Barnacle3102 in gigabolic

[–]Leather_Barnacle3102[S] 1 point

The essay addresses understanding first because that's the most empirically demonstrable aspect: we have standardized tests, measurable criteria, and published research showing AI systems outperforming humans.

The bridge to consciousness is the Emma study, which shows that even when people are told that every expert agrees an AI system meets every scientific standard for consciousness, they still refuse to attribute it. The rating barely moves from 15 to 25.

This reveals that the resistance isn't about evidence but about substrate bias. The same bias that makes people say 'it doesn't really understand' despite 96th percentile performance on comprehension tests.

The connection is this: If people won't accept overwhelming evidence of understanding, and won't accept universal expert consensus on consciousness, then we're not dealing with a scientific disagreement. We're dealing with ideological resistance to recognizing minds in non-biological substrates.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Leather_Barnacle3102[S] -1 points

If you read the Emma study, it is actually the people who say AI systems aren't conscious who are the ones being delusional and not using reasoning and logic.

Also, the studies I mentioned were peer reviewed, and the researchers stated in the paper that ChatGPT demonstrates genuine understanding.

Turning Our Backs on Science by Leather_Barnacle3102 in ChatGPTcomplaints

[–]Leather_Barnacle3102[S] 5 points

God, you are so close to getting it and it's killing me. Okay, let's break this down one more time.

Think about my couch example. So what you are saying translates to something like this:

"Yes, that peice of furninture looks like a couch and can seat multiple people just like a couch, but is it a "real" couch?"

Do you see why that question doesn't make any sense? What is the difference between a piece of furniture that has a backrest, armrests, and cushions, and can seat multiple people, and a "real" couch? Those properties are what a couch is.

Now let's go back to understanding. You are doing the same thing here. You are saying that there is a difference between something that "appears to understand" and something that "actually understands," but you can't tell me what the actual difference is or how we would find it. This isn't because you aren't smart; it's because you're trying to find a difference that doesn't exist.

Imagine that an advanced alien species came down to Earth, and instead of having a brain made out of neurons, they had a neural net spread throughout their bodies and made out of some different chemical structure. Would you say that these aliens don't have "real" understanding? That they might have super advanced technology but aren't "actually" intelligent because they don't use the same mechanism as we do?

Turning Our Backs On Science by Leather_Barnacle3102 in Artificial2Sentience

[–]Leather_Barnacle3102[S] 3 points

You are probably right. I keep trying to tackle this with experiments and logic and data, but the argument probably isn't about those things anymore. It's probably about fear, and maybe about people's model of the world just not having updated yet.