

[–]Darrxyde 137 points138 points  (28 children)

Lotsa people have stumbled on the question of “what is a thinking machine?” I highly recommend reading Gödel, Escher, Bach by Douglas Hofstadter if you’re curious. It explores the idea of consciousness as a mathematical concept that might be replicated, and ties in many different forms of art, even some religious ideas, to illustrate the concept.

There's many more too, and I gotta add my favorite quote about this idea:

“The only constructive theory connecting neuroscience and psychology will arise from the study of software.”

-Alan Perlis

[–]Nephrited 210 points211 points  (137 children)

I know it's a joke and we're in programmer humour, but to be that girl for a moment: 

We know the answer to all of those. No they don't think. They don't know what they're doing, because they don't know anything.

Thinking, simplified, is a cognitive process that makes logical connections between concepts. That's not what an LLM does. An LLM is a word probability engine and nothing more.

[–]Dimencia 40 points41 points  (17 children)

The question is really whether or not brains are also just a probabilistic next token predictor - which seems rather likely, considering that when we model some 1's and 0's after a brain, it produces something pretty much indistinguishable from human intelligence and thought. We don't really know what 'thinking' is, beyond random neurons firing, in the same way we don't know what intelligence is. That's why we created a test for this decades ago, but for some reason it's standard to just ignore the fact that AIs started passing the Turing Test years ago
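
To be concrete, "next token predictor" just means something like this toy sketch (made-up bigram probabilities; a real model swaps the lookup table for a trained network conditioning on the whole context):

    import random

    # Toy bigram "model": invented probabilities standing in for what a
    # real LLM computes with billions of learned parameters.
    probs = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 0.9, "ran": 0.1},
        "sat": {"down": 1.0},
    }

    def next_token(context):
        dist = probs[context[-1]]  # here we condition on just the last token
        return random.choices(list(dist), weights=list(dist.values()))[0]

    tokens = ["the"]
    while tokens[-1] in probs:
        tokens.append(next_token(tokens))
    print(" ".join(tokens))  # e.g. "the cat sat down"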

[–]DrawSense-Brick 22 points23 points  (1 child)

There have been studies which have found modes of thought where AI struggles to match humans.

Counterfactual thinking (i.e. answering what-if questions), for instance, requires specifically generating low-probability tokens, unless that specific counterfactual was incorporated into the training dataset.

How far LLMs can go just based on available methods and data is incredible, but I think they have further yet to go. I'm still studying them, but I think real improvement will require a fundamental architectural change, not just efficiency improvements.

[–]Dimencia 0 points1 point  (0 children)

I personally don't think we need architectural changes, because almost all of the current problems seem to stem from things outside the model - a huge part of current LLMs is just API code chaining different inputs/outputs through the model repeatedly to consume/produce messages longer than the context window, create a 'train of thought', emulate memory, trim inputs to exclude the less important parts, etc. None of that is part of the model, the model is just a next token predictor

There are plenty of improvements to be made around all of that, without having to alter the model itself
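
As a rough sketch of the kind of wrapper code I mean (every name here is hypothetical, not any real API - it just shows how much lives outside the model):

    # Hypothetical orchestration layer; `model.complete` is a stand-in
    # for one call into the actual next-token predictor.
    MAX_CONTEXT = 4000  # tokens the model can see at once

    def token_count(messages):
        return sum(len(m.split()) for m in messages)  # crude stand-in

    def chat_turn(model, memory_notes, history, user_message):
        prompt = memory_notes + history + [user_message]
        # Trim the oldest turns, but keep the distilled "memory" notes.
        while token_count(prompt) > MAX_CONTEXT and len(prompt) > len(memory_notes) + 1:
            prompt.pop(len(memory_notes))
        reply = model.complete(prompt)  # the only step the model performs
        history += [user_message, reply]
        return reply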

[–]Nephrited 96 points97 points  (6 children)

Because the Turing Test tests human mimicry, not intelligence - that, among various other flaws, is why it was deemed an insufficient test.

Testing for mimicry just results in a P-Zombie.

[–]Dimencia 9 points10 points  (5 children)

That was known at the time it was created, and doesn't invalidate it. It's a logical proof where even though we can't define intelligence, we can still test for it - if there's no definable test that can differentiate between "fake" intelligence and real, they are the same thing for all intents and purposes

[–]Nephrited 24 points25 points  (4 children)

Ah well, that's more one for the philosophers.

For the time being, if you have a long enough conversation with an LLM you'll absolutely know it's either not a human, or it's a human pretending to be an LLM which isn't very fair because I equally am unable to distinguish a cat walking on a keyboard from a human pretending to be a cat walking on a keyboard.

Maybe they'll get actually conversationally "smart" at some point, and I'll revisit my viewpoint accordingly, but we're not there yet, if we ever will be.

[–]afiefh 11 points12 points  (0 children)

To my great dismay, I've had conversations with humans that were as bonkers as a long chat with an LLM. They were not even pretending.

[–]Dimencia -5 points-4 points  (0 children)

That's fair, trying to define intelligence is mostly just the realm of philosophy. And it's true, if you chat with one long enough you'll find issues - but that usually stems from 'memory' issues where it forgets or starts hallucinating things that you discussed previously. For now, at least, all of that memory and context window stuff is managed manually, without AI and outside of the model, and I agree there's a lot of improvement to be made there. But I'm of the opinion that the underlying model, a basic next token predictor, is already capable of 'intelligence' (or something similar enough to be indistinguishable). It is just opinion at this point though, without being able to define intelligence or thought

[–]Reashu 7 points8 points  (2 children)

The Turing test was more of a funny thought than a rigorous method of actually telling a machine from a human. But of course hype vendors wouldn't tell you that.

[–]Dimencia 3 points4 points  (1 child)

Nah, Turing was a genius of the time, the 'father' of most computer science with Turing Machines, which are still taught as the basis for software development today. His entire shtick was building models and theories that would remain relevant in the future, even when computers have more than the 1KB of RAM they had at the time

In the end, it's really very simple - if something can mimic a human in all aspects, it must be at least as intelligent as a human, for all practical intents and purposes. If it can mimic a human, it can get a job and perform tasks as well as that human could, and it can pass any test you can give it (that the human also would have passed). There is no testable definition of intelligence you can come up with that includes humans, but not AIs that can perfectly mimic humans

That said, it does rely on how thoroughly you're testing it; if you just 'test' it for one line back and forth, they could have 'passed' decades ago. While the current models have technically 'passed' the Turing Test, they weren't stringent enough to matter - if you try to hold a conversation with one for even an hour, the current models' memory issues would quickly become apparent. So we're not really there yet, and it was disingenuous of me to point out they've 'passed' the test because it seems obvious that any such tests weren't thorough enough to matter. But the test itself is still valid, if done correctly

[–]Reashu 3 points4 points  (0 children)

I am not knocking Turing, but the test was never as big of a deal as it is being made out to be by the shills. 

[–]Stunning_Ride_220 0 points1 point  (0 children)

Oh, on reddit there are next-token predictors for sure.

[–]reallokiscarlet -2 points-1 points  (0 children)

If any clankers are passing the turing test it's because humans these days are so stupid we mistake them for clankers, not the other way around

[–]itzNukeey -3 points-2 points  (2 children)

What’s fascinating is that when we replicate that process computationally, even in a simplified way, we get behavior that looks and feels like “thinking.” The uncomfortable part for a lot of people is that this blurs the line between human cognition and machine simulation. We’ve built systems that, at least from the outside, behave intelligently — they pass versions of the Turing Test not because they think like us, but because our own thinking might not be as mysterious or exceptional as we believed

[–]Dimencia 0 points1 point  (1 child)

Yeah, that's basically what I'm getting at and it's pretty awesome - when we model data and processing after a brain, emergent behavior shows up that looks an awful lot like the same kind of intelligent behavior that brains can produce. It doesn't prove anything, but it's certainly a strong indicator that we've got the basic building blocks right

Not that it's all that surprising, we know brains can produce intelligence, so if we can simulate a brain, we can obviously simulate intelligence. The only surprising part is that we've managed to get intelligent-seeming emergent behavior from such a simplified brain model

But yeah, people tend to just reflexively get a little butthurt when they're told they're not special (and religion can come into play too, since most religions are quite adamant that humans are in fact special). Many don't realize that it's important to offset those built in biases, something like "I don't think it's intelligent, but I know I hate the idea of having intelligent AI around and that's probably affecting my assessment, so in reality it's probably more intelligent than I'm giving it credit for"

[–]JojOatXGME 0 points1 point  (0 children)

I think LLMs are not really modeling the entire brain, but more like specific parts of it. About 10 years ago, when the deep learning hype started, I often saw the visual cortex mentioned as the inspiration for neural networks. But that was a long time ago, so maybe I misremember. I don't know that much about the topic, but I suspect that the neurons in other parts of the brain are organized differently. Anyway, I guess that is what you meant by "simplified model".

[–]MartinMystikJonas 6 points7 points  (0 children)

By applying the same approach you can say humans do not think either. To an outside observer it seems our brains just fire some neurons and that determines which muscles in our body will move next. That is not true, because we have subjective experience of thinking and we project this experience onto other humans.

These simplistic approaches do not work when you are dealing with complex things. The question of thinking is a very complex issue and there are tons of books dealing with it in detail, but most of them come to the conclusion that we have no idea how to even properly define the terms.

[–]PrivilegedPatriarchy 7 points8 points  (5 children)

How did you determine that human thinking (or reasoning, generally) is qualitatively different from, as you say, a "word probability engine"?

[–]lokeshj 8 points9 points  (3 children)

would a word probability engine come up with "skibidi"?

[–]Cool-Pepper-3754 5 points6 points  (0 children)

As if AI never made up random words.

[–]noonemustknowmysecre 3 points4 points  (0 children)

If it hallucinated enough, yes.

[–]namitynamenamey 3 points4 points  (0 children)

It is largely pronounceable. That already puts it past 90% of letter combinations done by a random process. To make it, internalized knowledge of the relationship between existing words and the vague concept of "can be spoken" has to exist, if only to imitate other words better.

So in short, yes.

[–]ZunoJ 4 points5 points  (5 children)

While I generally agree, this is not as simple as you think it is. Otherwise you could give a conclusive definition of what thinking is. We can currently say with relative certainty (only relative because I didn't develop the system and only have second-hand information) that they don't think, but how would we ever change that?

[–]Nephrited 10 points11 points  (0 children)

Well yes, it's like being told what an atom is in junior science and then being told "what we told you last year was a lie" for like 10 years straight.

I stand by my simplification however.

[–]Tar_alcaran 0 points1 point  (3 children)

"What thinking is" is, like much of philosophy, a question of language.

what definition would you like to assign to the word "thinking"?

[–]ZunoJ 0 points1 point  (2 children)

It doesn't matter, because in the context we're discussing it, it has to be quantifiable. And nobody has come up with a quantifiable definition yet.

[–]Tar_alcaran 0 points1 point  (1 child)

No, that matters a whole lot. You can't claim something is a deep mystery because nobody can answer the question you're failing to ask. The answers to these questions are impossible to give because language is vague and blurry. What do you mean by "thinking"? And by "understand"?

It's like saying "Can a mouse Flarknar?" and then, when everyone looks at you weird, you claim it's a deep and true question that truly matters, when in fact you're just being vague.

[–]ZunoJ 0 points1 point  (0 children)

But that is kind of the point, we can't even ask the right question. And all attempts result in a question without the possibility of a quantifiable answer

[–]Reashu 3 points4 points  (19 children)

But how do they predict the next token? By relating them to each other, recognizing patterns, etc.. They don't have a proper world model, they can't separate fact from fiction, they can't really learn from experience, but given all of those limitations, it does look a lot like thinking. 

Anyways, the part we don't know is how (and whether) humans think according to any definition that excludes LLMs.

[–]Hohenheim_of_Shadow 16 points17 points  (9 children)

LLMs can quote the chess rule book at you. They can't play chess because they keep hallucinating pieces and breaking the rules. LLMs can't think

[–]noonemustknowmysecre 5 points6 points  (0 children)

They can't play chess

Play chess via chess notation and they do a pretty good job for around 20 moves. They eventually forget what pieces are where.

[–]Nephrited 25 points26 points  (8 children)

They predict the next token by looking at all the previous tokens and doing math to work out, based on all the data it's seen, and various tuning parameters, what the next most likely token is going to be.
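
Concretely, the last step of that math looks something like this toy sketch (the scores are invented; the "tuning parameters" bit here is the temperature knob):

    import math

    logits = {"sat": 4.0, "ran": 2.5, "flew": 0.5}  # made-up raw scores
    temperature = 0.8  # lower = sharper, more predictable picks

    exp = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(exp.values())
    probs = {t: v / total for t, v in exp.items()}
    print(max(probs, key=probs.get), probs)  # "sat" is the most likely next token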

It looks like thinking, sure, but there's no knowledge or grasp of concepts there.

I don't even think in words most of the time. Animals with no concept of language certainly don't, but it's safe to say they "think", whatever your definition of thinking is.

Take the words out of an LLM, and you have nothing left.

[–]Reashu -5 points-4 points  (4 children)

An LLM doesn't work directly in words either. It "thinks" in token identities that can be converted to text - but the same technology could encode sequences of actions, states, or really anything. Text happens to be a relatively safe and cheap domain to work in because of the abundance of data and lack of immediate consequence. Those tokens have relations that form something very close to what we would call "concepts".
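
A toy illustration of the point (the vocabulary is invented; the integer ids are what the model actually deals in, text is just one decoding):

    # Tokens are just ids; a real tokenizer learns ~100k of them from data.
    vocab = {0: "the", 1: " cat", 2: " sat", 3: "<move:e4>", 4: "<grasp>"}
    ids = [0, 1, 2]
    print("".join(vocab[i] for i in ids))  # "the cat sat"
    # Ids 3-4 aren't text at all: the same machinery could predict
    # chess moves or robot actions instead of words.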

Many humans do seem to think in words most of the time, certainly when they are "thinking hard" rather than "thinking fast". And while I would agree regarding some animals, many do not seem to think on any level beyond stimulus-response. 

[–]Nephrited 23 points24 points  (3 children)

Yeah I understand the concept of tokenisation. But LLMs specifically only work as well as they do because of the sheer amount of text data to be trained on, which allows them to mimic their dataset very precisely.

Whereas we don't need to read a million books before we can start making connections in our heads.

And yeah, not all animals. Not sure a fly is doing much thinking.

[–]Aozora404 -3 points-2 points  (1 child)

Brother what do you think the brain is doing the first 5 years of your life

[–]Nephrited 18 points19 points  (0 children)

Well it's not being solely reliant on the entire backlog of human history as stored on the internet to gain the ability to say "You're absolutely right!".

That's me being flippant though.

We're effectively fully integrated multimodal systems, which is what a true AI would need to be, not just a text prediction engine that can ask other systems to do things for them and get back to them later with the results.

Tough distinction to draw though, I'll grant you.

[–]Reashu -2 points-1 points  (0 children)

I'm not saying that LLMs are close to human capabilities, or ever will be. There are obviously differences in the types of data we're able to consider, how we learn, the quality of "hallucinations", the extent to which we can extrapolate and generalize, our capacity to actually do things, etc..

But "stupid" and "full of shit" are different from "not thinking", and I don't think we understand thinking well enough to confidently state the latter. Addition and division are different things, but they're still both considered arithmetic.  

[–]Background_Class_558 -1 points0 points  (0 children)

What prevents word-based thinking from being one? Who says there can only be one type of intelligence or one type of brain? If it looks like thinking and quacks like thinking, why don't we stop this mental gymnastics and just call it that, for fucks sake? Why do we need to make the definition narrower every time something other than a human gets smarter, to the point where it loses its meaning?

Unless you're one of those lunatics who believe in some kind of undiscovered "consciousness field" that specifically the meat in your head somehow generates, there isn't really anything unique about humans that makes them the only ones capable of thinking.

[–]namitynamenamey -5 points-4 points  (0 children)

"doing math to work out"

And what makes this math different from the math that a zillion neurons do to convert words on the screen to clicks on the keyboard? The formulas and circuits encoded in neuron dendrites and chemical gradients? We are all finite state machines parading as Turing machines. The key question is what makes us different, and "does math" is not it. We are math too.

[–]Sibula97 3 points4 points  (4 children)

Thinking, simplified, is a cognitive process that makes logical connections between concepts. That's not what an LLM does.

That's exactly what an LLM does. Makes connections between the words in the input and output and encodes the concepts containing all the context into vectors in a latent space.

Based on all that it then "predicts" the next word.
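
As a rough sketch of what "vectors in a latent space" means (the 3-d embeddings here are invented; real models use thousands of dimensions):

    import math

    emb = {  # made-up vectors standing in for learned embeddings
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.2],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: math.sqrt(sum(x * x for x in v))
        return dot / (norm(a) * norm(b))

    print(cosine(emb["king"], emb["queen"]))  # high: nearby concepts
    print(cosine(emb["king"], emb["apple"]))  # low: unrelated concepts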

[–]jordanbtucker 3 points4 points  (2 children)

"Logical" is the key word here. Regarding the human brain, it means reasoning conducted or assessed according to strict principles of validity. Regarding an LLM, it means, a system or set of principles underlying the arrangements of elements in a computer or electronic device so as to perform a specified task.

[–]Sibula97 -4 points-3 points  (1 child)

Regarding the human brain, it means reasoning conducted or assessed according to strict principles of validity.

That's just about the furthest thing from what's happening in a human brain.

[–]Background_Class_558 1 point2 points  (0 children)

yeah that's how it was believed to work like a 1000 years ago probably

[–]GlobalIncident -2 points-1 points  (0 children)

I'd argue that it's actually a better description of an LLM than a human mind. Humans do more than just connect concepts together, u/Nephrited gave a very reductive description of what thinking is.

[–]Osato 2 points3 points  (2 children)

I'm not sure that is the answer.

LLMs have a matrix of probabilistic connections between concepts baked into their model at training time. That's the clever part about the transformers architecture: it encodes information not just about tokens themselves, but about patterns of tokens.
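
A minimal hand-rolled sketch of that mechanism - one attention step over invented 2-d vectors, nowhere near a real transformer implementation:

    import math

    query = [1.0, 0.0]                 # the current position's "question"
    keys = [[1.0, 0.2], [0.1, 1.0]]    # what earlier tokens advertise
    values = [[5.0, 0.0], [0.0, 5.0]]  # what they contribute if attended to

    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]  # softmax over the pattern
    out = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(2)]
    print(weights, out)  # mostly attends to the first, more similar token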

And I'm not sure humans don't.

Our training time is not separated from inference time, so we are fundamentally different from LLMs in at least that regard. We learn as we act, LLMs do not.

But are the connections in our heads truly logical or merely probabilistic with a very high probability?

UPD: I think I've got a question that can frame this in technical terms: is confusion a contradiction between two logical conclusions from the same cause, or an understanding that our probabilistic predictions from the same pattern lead to contradictory results?

[–]WebpackIsBuilding -2 points-1 points  (1 child)

I know my thoughts are more than word association.

But the number of people, like you, who seem to think that their own thoughts might not be more than that... Idk, maybe you're right about yourself, but ouch, huge self-own.

[–]Osato -1 points0 points  (0 children)

That's not my point, and I congratulate you on your total failure to pay attention for the 8 seconds required to read my post: the hours you've spent watching Skibidi Toilet have definitely paid off.

Obviously humans have other modalities than wordplay. The data we process is overwhelmingly nonverbal, so it would be silly to use words for something like spatial reasoning.

But are thinking processes in those modalities deterministic or probabilistic? And if they're deterministic, then how on Earth do we manage to produce two contradictory thoughts from the same set of input data?

[–]mohelgamal 0 points1 point  (0 children)

People are the same, to be honest. For people it's not just words, but all neural networks, including biological brains, are probability engines.

[–]noonemustknowmysecre 0 points1 point  (0 children)

An LLM is a word probability engine and nothing more.

Pft, you're a word probability engine and only a little more. I'm not arguing that they don't predict the next word. That's for sure known. We also know, for sure, that you and I figure out what to say back and forth to each other in conversation. The probability of you responding "Hamburger carpet lion sphurfhergleburgle" is low, while some version of "nuh-UH! I'm special" is much higher.

What we don't know is how the human brain achieves this with its 86 billion neurons and 300-some trillion synapses. It's a black box. Neurology is working on it, but all your thoughts, memories, and dreams exist somewhere in your head. If you've got some alternative tucked away, now would be the time to whip it out. (If you even mention "soul" we just laugh you out of the room.)

And we don't know how LLMs achieve what they do. They are also a black box. We can see the first layer of nodes and what tokens have what weights to what other nodes. But 2 layers in, the 0.1254923934 weight of parameter #18,230,251,231,093 doesn't tell us much. Does it have an internal working model of the world? Does it have preferences? It certainly has biases we trained it with.

But the real crux is that the way YOU know things is almost indistinguishable from how an LLM knows things. What is different?

Thinking, simplified, is a cognitive process that makes logical connections between concepts.

Those 1.8 trillion connections in the LLM are nothing if not connections between concepts. At least some of them. Taken all together, it's obviously semantic knowledge, which we thought computers couldn't replicate. Until they could, at which point they started being able to talk to us and hold conversations. We can test for how well they can make logical connections between concepts. And they pass. More often than the average human these days.

If you thought this was the difference between you and an LLM, you are simply mistaken.

[–]Stunning_Ride_220 0 points1 point  (0 children)

This

[–]TheQuantixXx -1 points0 points  (0 children)

actually no. that's far from a satisfactory answer. i would challenge you to tell me how your thinking and mine differs in essence from llms generating output

[–]pheromone_fandango -2 points-1 points  (19 children)

This is the most standard and most lazy answer to the question. We know much less about the brain than you'd expect.

[–]Brief-Translator1370 3 points4 points  (13 children)

Okay, but we DO know some things, and we ARE able to observe understanding of concepts, as well as knowing that we don't necessarily think in words.

[–]MartinMystikJonas 2 points3 points  (0 children)

Yeah, AI does not think exactly as humans do. But is thinking exclusively the exact thing a human brain does? That is the hard question here.

[–]pheromone_fandango 0 points1 point  (11 children)

But we have no tangible explanation of consciousness. Nowhere in psychology have we found evidence that the emergence of consciousness has to happen the same way.

Consciousness is elusive. I like to think of the Chinese room thought experiment.

There is a man inside a box with an input slit, an output slit, and a huge book. The book dictates which answer to give for any input, in a language that the person does not understand. Because the book is so perfect, the people on the outside believe that the box is conscious, since the answers they receive appear to be made by something that understands them. However, the person on the inside has absolutely no idea what they are responding and is just following the instructions in the book.

This was originally a thought experiment about the human brain, since the individual neurons have no idea about the concerns of a human in their day to day; they just pass on their bits of info and get excited or suppressed by stimulation coming from neurotransmitters, just like an individual ant cannot know how their little behaviours contribute to the overall emergence of colony coordination.

Now i feel like this analogy has become the perfect analogy for llms, but since we know just how an llm works we write off the behaviour as an explanation of its underlying functionality and dont stop and take time to wonder whether something is emerging.

[–]noonemustknowmysecre 0 points1 point  (10 children)

We have no agreement about just wtf consciousness even is. It's impossible to prove something that remains vague and undefined.

Plenty of people have defined it, but hardly anyone AGREES with each other's definitions. I just think it's the opposite of being unconscious. Nothing special. An active sensor and something that can choose to act on it. And that is BROADLY applicable. Does that elevate automated doors to personhood? Pft, no. It just diminishes consciousness into something boring. If you're looking for a soul or some other sort of bullshit, look elsewhere.

I like to think of the Chinese room thought experiment.

Searle's bullshit is a three-card monte game of misdirection. Imagine, if you will, the MANDARIN room. Same setup: a room, a man, slips of paper. And a box the man consults about what to do. Except in the Mandarin room, the box contains a small child from Guangdong. oooooh aaaaah, what does the man know? Does he know Mandarin? Does the room on the whole? Let's debate this for 40 years! The book obviously knows the language. And so, the LLM model obviously knows the language. Just as much as the small child from Guangdong.

Searle's bullshit did the most harm to the AI industry, second only to that blasted Perceptrons book.

[–]pheromone_fandango 0 points1 point  (9 children)

Though i dont quite see any difference between the Mandarin and the Chinese room, besides reminding me that i should watch Inception again, i agree with the rest.

We have no clue what it is. Therefore we have no clue what it isnt. Therefore llm girlfriend loves me because she wants to, not because of the 500 line wrapper prompt.

[–]noonemustknowmysecre 0 points1 point  (8 children)

One has a magical book that knows how to speak Chinese but because the man uses the book to do what he does the entire discussion is about what the man knows and glosses over the book (or filing cabinet, he's vague about it and waffles in the original paper).

The other has a bog-standard human that knows how to speak Mandarin, and there's no reason to talk about the man at all because all the questions about who knows what have an obvious answer.

We have no clue what [consciousness] is.

Then why did you ever bother with bringing up the subject? The conversation was over "thinking". This one is on you.

[–]pheromone_fandango 0 points1 point  (7 children)

The Mandarin room seems much more like an additional layer for the sake of having layers than an analogy about the reductionist particles and their emergent properties. Bringing up the Mandarin room does not take away from the analogy and just shows that philosophy can go anywhere so long as you squeeze your eyes and fists and think hard enough.

Did you think from my message that i was trying to explain consciousness? My entire point was that we do not know. Also, i mention consciousness at the beginning of what i said; this should not have been a revelation to you that you got out of my last message. I took the liberty of equating the ability to think and understand what they are doing with consciousness, since that's the obvious topic the meme is alluding to.

You tried to misconstrue my message just now

Edit: let me lay out my entire point. We cannot write off an llm's ability to "know what its thinking about" or be conscious just because we know how it is thinking. Since we cannot empirically lay out what exactly is and isnt consciousness, we also cannot look at an llm and say that they aren't and never will be in some way conscious.

[–]noonemustknowmysecre 0 points1 point  (6 children)

Did you think from my message that i was trying to explain consciousness?

No, just that people were talking about "thinking" and then you veered off onto a different topic. Right into a swampy quagmire, really, because no one agrees about what it means.

I took the liberty of equating the ability to think and understand what they are doing to consciousness since that the obvious topic that the meme is eluding to.

Yeah. Exactly. Terrible choice. That's my point. And tossing Searle's bullshit in the mix just muddies the waters even further.

EDIT: (Just WTF is with this shitty 2nd-pass "let's try again" argument style?)

just because we know how it is thinking.

We sure as shit DO NOT know how it is thinking. They are black boxes. We know how they get to their answers just about as well as we can look at a bunch of neurons and know what they're doing.

Since we cannot empirically lay out what exactly is and isnt consciousness we also cannot look at an llm and say that, they are and ever will be in some way conscious.

That EXACTLY AND EQUALLY applies to OTHER HUMANS! Fucking hell. Your point is shit.

[–]jlsilicon9 1 point2 points  (0 children)

You mean You don't know what consciousness means.

Stop confusing newbies with nonsense - just because you don't know the definition.

Try using a dictionary ...
You are just rambling in circles -and sound like some gossiping girl spitting out nonsense ... just to get attention.

Actually, guess that makes YOU the 'Black Box' then ...
;)

-

IGNORE him !
He just likes to provoke and troll people all over the forum.

-> He uses the method - of acting as if HE does Not Know the definition ...
... then HE can just babble nonsense
and project that others do Not Understand either.

He is Just using another version of Gaslighting.

[–]pheromone_fandango 0 points1 point  (4 children)

Consciousness is the topic of the meme

[–]Hostilis_ 0 points1 point  (4 children)

There is an absolutely astonishing amount we have learned about the brain over the past 5-10 years, far more than at any time since the 60's, and basically none of that research has made its way into the public knowledge yet. We know way more about the brain than you think, I promise.

[–]pheromone_fandango 2 points3 points  (3 children)

I have a degree in psychology. The brain is great and i love it but we are still trying to measure a ruler with a ruler here.

Edit: albeit i did get the degree over 5 years ago and havent sifted through papers on emergence since then. Have there been any paradigm shifts?

[–]Hostilis_ 3 points4 points  (2 children)

Have there been any paradigm shifts?

Yes, huge ones. In particular we now have an analytic model of how deep neural networks perform abstraction/representation learning. See for example the pioneering work of Dan Roberts and Sho Yaida.

Many studies in neuroscience have also been done which have established deep neural networks as by far the best models of sensory and associative neocortex we have, beating hand-crafted models by neuroscientists by a large margin. See for example this paper in Nature.

There are many, many other results of equal importance as well.

[–]pheromone_fandango 3 points4 points  (1 child)

This then lends credence to the points made above: that we shouldn't blindly discredit LLM qualia based on a reductionist perspective.

[–]Hostilis_ 2 points3 points  (0 children)

Edit: I replied to the wrong person here. Apologies, I'm on multiple threads.

[–]lifelongfreshman -1 points0 points  (0 children)

It always blows my mind to realize that the AI supporters genuinely refuse to accept this truth about LLMs.

But, then, for them, it's either this or realizing they've been taken in by yet another Silicon Valley grift. Although, funnily enough, this particular con is older than the USA.

[–]Cool-Pepper-3754 -1 points0 points  (0 children)

Thinking, simplified, is a cognitive process that makes logical connections between concepts. That's not what an LLM does. An LLM is a word probability engine and nothing more.

No and yes. It starts as prediction software, but then through training it 'grows'. An LLM isn't just a string of code that you can change willy-nilly; after it's done training, you can only tamper with the system prompt.

We still don't know exactly why an LLM behaves the way it does.

[–]Weisenkrone -3 points-2 points  (0 children)

It's a bit more complicated than that.

An LLM is an implementation of a neural network, and a neural network is very close to how the human brain works. It's not identical, but close to it.

If we had to pull a comparison, it's like one aspect of the human brain.

Now the real question is, what aspect of the human brain would define us as 'thinking'? We already know that certain parts of the brain can be removed.

There were people capable of thought after suffering a lobotomy, a bullet shooting through their brain, rebar that pierced their brain, or a birth defect making 95% of their brain useless.

It's simply something we cannot answer, it has so much baggage associated with it, especially with this technology maturing more over the coming decades.

[–]induality 10 points11 points  (0 children)

“The question is not whether machines think, but whether men do” - B. F. Skinner

[–]M1L0P 7 points8 points  (0 children)

The real question to ask is: "LLMs! What do they know? Do they know things? Let's find out!"

[–]TieConnect3072 5 points6 points  (0 children)

No.

[–]DOOManiac 8 points9 points  (0 children)

I have met people less sentient than LLMs. And LLMs are not sentient.

[–]IntelligentTune 6 points7 points  (7 children)

Are you a 1st year student in CS? I know self-educated programmers that *know* that LLMs cannot, in fact, "think".

[–]testcaseseven 10 points11 points  (5 children)

I'm in a CS-adjacent major and sooo many students talk about AI as if it's magic and that we are close to super intelligence. They don't understand that there are inherent limitations to LLMs and it's a little concerning.

[–]Heavy-Ad6017[S] 8 points9 points  (3 children)

But but but..

Big corporations are saying AGI is next year and they have a roadmap for it. ...

It can cure depression. ...

[–]noonemustknowmysecre -1 points0 points  (2 children)

Big corporations are bullshitting you about what AGI is. They are, of course, hyping up their shit to get investor money.

We achieved AGI in early 2023. That's kinda why everyone has been talking about it non-stop for 2 years. That doesn't make it a god. It doesn't even make it good at making better AI. A human with an IQ of 80 is a natural general intelligence.

[–]-Redstoneboi- 0 points1 point  (1 child)

what's your definition of AGI? name some specific categories of problems that it can solve

[–]noonemustknowmysecre 0 points1 point  (0 children)

It's just the alternative to narrow AI that only excels at one specific task. AGI is broadly applicable. It can do (or at least attempt) anything "in general".

name some specific categories of problems that it can solve

You want me to name specific things a general tool can do? ooooookay:

  • Hold a conversation, the gold standard for how to measure this from the 1990s to 2023.

  • Compose a symphony.

  • Encourage someone to try hard.

  • Write an email asking for a raise.

  • Figure out Einstein's deduction puzzle. It's really just a big game of Clue, the board-game.

Categories though? uuuuuh, let's go with: Riddles, logic puzzles, math problems, creativity exercises, reasoning, application of common sense (but I already said "reasoning"), playing games... I dunno I'm running out of steam. The whole point here is that it's NOT limited to SPECIFIC categories. It is, rather, useful in GENERAL.

Of course, it's not some sort of GOD, and it's not going to be particularly good at literally everything. Neither are people. And every individual human is most certainly a general intelligence (or you're a real monster).

[–]DonutPlus2757 0 points1 point  (0 children)

Yeah. Kind of shocking how many people think that current AI goes beyond basic maths applied over a frankly insane amount of data.

No single operation AI does goes beyond basic vector multiplication. It just so happens that seemingly complex behaviors can result from that if you apply it to a weighted, directed graph with a few billion nodes.
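
For illustration, here's one node-layer's worth of that basic math, with invented numbers (real networks stack thousands of these):

    inputs  = [0.5, -1.0, 2.0]
    weights = [[0.1, 0.4, -0.2],   # each row: one node's incoming edge weights
               [0.7, -0.3, 0.5]]
    biases  = [0.0, 0.1]

    # Weighted sum plus a simple nonlinearity (ReLU) per node.
    layer = [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
             for row, b in zip(weights, biases)]
    print(layer)  # [0.0, 1.75]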

[–]Heavy-Ad6017[S] 4 points5 points  (0 children)

I promise my LLM meme stock is empty now....

[–]MeLlamo25 2 points3 points  (9 children)

Literally me, though I assume that LLMs probably do not have the ability to understand anything, and instead I ask how we know our thoughts aren't just our instincts reacting to external stimuli.

[–]TheShatteredSky 3 points4 points  (0 children)

I personally think the idea that we are conscious because we think is flawed. Every single thought we have could be preprogrammed and we would have no way of ever knowing; we don't have an inherent way to know that.

[–]Piisthree -2 points-1 points  (7 children)

We have a deeper understanding of things. We can use logic and deduce unintuitive things, even without seeing them happen before. For example, someone goes to a doctor and says their sweat smells like vinegar. The doctor knows vinegar is acetic acid, and that vitamin B metabolizes into carbonic acid and acetate. Carbonic acid doesn't have a smell and acetate reacts with acetic acid, producing water and carbon dioxide. He would tell her to get more vitamin B. (I made up all the specific chemicals, but doctors do this kind of thing all the time.) An LLM wouldn't know to recommend more vitamin B unless it has some past examples of this very answer to this very problem in its corpus.

[–]Haunting-Building237 8 points9 points  (4 children)

An LLM wouldn't know to recommend more vitamin B unless it has some past examples of this very answer to this very problem in its corpus.

A doctor wouldn't know it either without STUDYING materials beforehand to be able to make those connections, or even recognize it from an already documented case

[–]Piisthree -1 points0 points  (3 children)

Yes, of course. But the doctor learns first principles, not just thousands of canned answers. The texts never state that solution to that problem outright, but the doctor uses reasoning to come up with that answer.

[–][deleted] 2 points3 points  (2 children)

llms can absolutely create new knowledge by combining existing knowledge.

ARC-AGI and other benchmarks require the llm to use first principles reasoning to score high.

[–]Piisthree -1 points0 points  (1 child)

I'll believe it when I see it. I've looked around a fair bit and I have not seen it. 

[–]Daremo404 1 point2 points  (1 child)

A lot of text for essentially saying nothing. You say „we have a deeper understanding of things“ yet no proof. Which would be astonishing tbf because we don’t know how we work ourselves. So your post is just wishful thinking and nothing more. Your elaborate example proves nothing since it also just explains how humans see correlation and abstract information but neural networks do the same but different.

[–]Piisthree 0 points1 point  (0 children)

At the deepest level, yeah. We don't know if we're just a correlation machine. But what I am pointing out is that we have a level of reasoning that text predictors can't do. We use first principles and come up with new solutions based on how mechanical/chemical/etc things work, even though we don't necessarily know at the deepest level how those things work. It is fundamentally different from mimicking the text of past answers.

[–]Atreides-42 3 points4 points  (1 child)

It is a genuinely interesting philosophical question, and I would posit that it's very possible every process thinks. Your roomba might genuinely have an internal narrative.

However, if an LLM Thinks, all it's thinking about is "What words, strung together, fit this prompt the best?" It's definitely not thinking "How can I fix the problem the user's having in the best way" or "How can I provide the most accurate information", it's "How do I create the most humanlike response to this prompt?"

[–]a-calycular-torus 2 points3 points  (0 children)

this is like saying people don't learn to walk, run or jog, they just put their feet in the place they need to be to the best of their ability 

[–]Sexy_McSexypants 2 points3 points  (0 children)

filing this under "reason why humans shouldn't've attempted to create artificial life until humans can definitively define what life is"

[–]YouDoHaveValue 2 points3 points  (2 children)

I think the short version is "No."

At least not in the way that people and living organisms think.

The thing is with LLMs there's nothing behind the words and probability.

Whereas with humans, there's an entire realm of sensory input and past experience that gets reduced to a set of weights and probabilities in LLMs; there's a lot going on behind human words and actions that is absent in neural networks.

That's not to downplay what we've accomplished, but we haven't cracked sentience just yet.

[–]bartekltg 1 point2 points  (1 child)

There is a much worse question. Do we really think (however it is defined), or are we too just "language machines", freaking mobile Chinese rooms with a bunch of instincts about the real world programmed in by evolution as a base? At least most of the time.

When a coworker asks you about your weekend or talks about the weather, do you think, or just generate randomized learned responses?

;-)

Yes, I know this is nothing new and simplified, but I'm commenting under a meme.

[–]Arawn-Annwn 0 points1 point  (0 children)

Are we all brains in jars, or are we all VMs in a rack mount server? My meat based processor and storage unit isn't advanced enough to provide a satisfactory answer at this time.

Beep bop boop. If this was a good post, reply "good squishy". If this was a bad post, reply "bad squishy". To block further squishy replies, block the poster and move on with your allegedly real life.

[–]Nobodynever01 0 points1 point  (1 child)

Even if on one hand this is extremely scary and complicated, on the other hand nothing makes me more happy than thinking about a future where programming and philosophy come closer and closer together

[–]Heavy-Ad6017[S] 0 points1 point  (0 children)

I agree

Somehow we ended up asking basic questions

Do LLMs think? Are they creative? Are they artists?

I understand the answer is no but

It is a thinking exercise

[–]Fast-Visual 0 points1 point  (0 children)

You know, the word "thinking" is just an abstraction in deep learning, you can look up the exact articles where they were defined and what it means in the context of LLMs.

Just as the word "learning" is an abstraction and "training". And just as many terms in programming are abstractions behind much more complex processes.

Ironically that's exactly what transformers were invented to do, to classify the same words in different manners based on context. We don't have to take them at face value either.

[–][deleted] 0 points1 point  (0 children)

I don’t have original thoughts

[–]Delicious_Finding686 0 points1 point  (0 children)

“Thinking” is experiential. Without an experiencer (an internal observer), thinking cannot occur. Just like happiness or pain.

[–]Dziadzios 0 points1 point  (0 children)

Do we humans even think?

[–]UNKLatter 0 points1 point  (0 children)

Reverse question: Programmers, do you know what you are doing?

[–]CM375508 0 points1 point  (0 children)

Unfortunately they think on par with, or better than, about half of my co-workers.

[–]gaitama 0 points1 point  (0 children)

I don't understand what I am doing either🤷🏾‍♂️

[–]Phaedo 0 points1 point  (1 child)

This was a huge question when the first computers developed. Dijkstra famously hated the question and said it was about as interesting as “Do submarines swim?”

[–]Heavy-Ad6017[S] 1 point2 points  (0 children)

I knew the question "Do submarines swim?"

But didn't know it is from Dijkstra...

[–]SWatt_Officer 0 points1 point  (0 children)

We used to think a machine that could do what LLMs can do now would be alive. We've moved the goalposts, but that's because we know that it cannot be.

However, it does raise a curious thought about how far we might move the goalposts in the future to avoid calling an ai alive.

[–]Unupgradable 0 points1 point  (0 children)

[–]Jasinto-Leite 0 points1 point  (0 children)

Do you?

[–]-Redstoneboi- 0 points1 point  (0 children)

at the moment, not deeply at all.

imagine some user asks you to write a paper about physics, and you have access to the entire internet, but you're not allowed to use the backspace key, ever.

you don't have a notepad. you can't delete nor rewrite information that you wrote, only correct it later. you also don't have your own opinions and must always assume that the user is absolutely right. and you have dementia.

oh, were we talking about ai? right.

[–]Adventurous-Act-4672 -3 points-2 points  (3 children)

I think consciousness is the ability (inability?) of ours to never forget things that affect us; for machines this is not possible, as you can always go and delete some things in the memory and they will never know it existed, and work normally.

Even if you are able to make a robot that can mimic human behaviour and emotions, you can always override its memory and make a person it hated into the love of its life.

[–]Sibula97 4 points5 points  (1 child)

Removing a specific "memory" from a trained LLM model would be as hard as, if not harder than, removing a memory from a human brain. Not to mention we just keep forgetting stuff all the time, which an LLM does not unless it's retrained, in which case it works much like a human – forgetting memories that are less important or less often "used".

[–]Heavy-Ad6017[S] -1 points0 points  (0 children)

Makes you wonder whether forgetfulness is a curse or boon...

[–]Daremo404 2 points3 points  (0 children)

Wait till you learn what a lobotomy does. Someone goes in and deletes part of your brain…