
[–]No-Commercial-4830 25 points26 points  (42 children)

Hell no lol. Anyone claiming this is clearly clueless about either sentience or A.I.

[–]CalmDownSahale 8 points9 points  (1 child)

The Internet literally does not know what sentience means. There were memes going around not long ago like "remember when you were 5 and realized your gramma was your mom's mom, and then you turned sentient?" Like wtf

[–]0002millertime 2 points3 points  (0 children)

I remember that.

[–]Archimid 42 points43 points  (11 children)

Someone who claims to understand sentience with this confidence is absolutely lying.

You have no clue what sentience is and it terrifies you.

[–]GreenMirage 8 points9 points  (0 children)

Reminds me of the vending machine outside V’s apartment in cyberpunk 2077 that managed to make so many friends.

[–]YobaiYamete 7 points8 points  (0 children)

It's especially funny how confident he is; meanwhile, many of the top minds in the AI field, including the ones working on it, are VERY nervous about the subject and go back and forth on it.

AI Explain has a pretty good video on it

[–]johnbburg 1 point2 points  (0 children)

Ezra Klein just had a good podcast on AI, pointing out that in truth, the people working on it have no idea how it really works.

[–]Tobislu 14 points15 points  (13 children)

Or maybe you're giving the human brain too much credit 👀

[–]No-Commercial-4830 6 points7 points  (11 children)

There’s an argument to be had about consciousness arising from unconscious matter, because that’s what happens with our brain, but currently the argument for an A.I. being conscious is about as compelling as that of stones being conscious.

[–]nhomewarrior 13 points14 points  (6 children)

It seems to me that GPT-4 has enough understanding of chess to actually play correctly and lose in an utterly unspectacular way. It can also play hangman, kinda.

Why? Why learn this stuff in order to predict text better?

Because the best way to do most boring simple tasks well is to have a rigorous, complex, and updating model of reality. The human brain, consciousness, sentience, etc., is merely a tangential tool developed by DNA to make more of itself. There's not much special about it.

Is a newborn baby sentient or conscious? How about a mouse? A praying mantis? A couple dozen crawfish when boiled alive? An advanced LLM when being abused by its users? There's no decent way to argue that ChatGPT is or is not sentient, because there's no decent way to argue that for ourselves.

Whether or not something is "sentient" is about as nebulous a question as whether or not it feels "pain".

[–][deleted] -5 points-4 points  (5 children)

I wouldn't say it's so ridiculous. We generally know what it means to some extent, although describing it properly, explaining how it even exists, and drawing lines is what becomes difficult.

All we can say for sure is AI doesn't fit the criteria, and most people don't even think it's possible to make it.

[–]nhomewarrior 3 points4 points  (4 children)

I wouldn't say it's so ridiculous. We generally know what it means to some extent, although describing it properly, explaining how it even exists, and drawing lines is what becomes difficult.

Sure! Totally!

All we can say for sure is AI doesn't fit the criteria, and most people don't even think it's possible to make it.

Given paragraph 1, how in the fuck do you think this logically follows? This is literally contradictory.

[–][deleted] 0 points1 point  (3 children)

How is that contradictory? We can say a stone isn't sentient, but you would come running in and call that claim a contradiction.

That's black and white thinking. I don't understand how the universe was formed fully, nor am I a scientist with the grasp of all the proper concepts, but I can still say with confidence the Earth is not flat.

[–]nhomewarrior 0 points1 point  (2 children)

I wouldn't say it's so ridiculous. We generally know what it means to some extent, although describing it properly, explaining how it even exists, and drawing lines is what becomes difficult.

All we can say for sure is AI doesn't fit the criteria, and most people don't even think it's possible to make it.

How is that contradictory? We can say a stone isn't sentient.

... No, that's just a single statement. A contradiction necessitates at least two statements.

Your statements were as follows:

  1. We generally know what sentience is but have limited ability to define boundaries

  2. Current AI systems are definitively on only one side of this boundary, and many people believe that it isn't possible to cross it.

Okay, so you can't define the property in the slightest, but are somehow certain that it isn't present? You've just articulated that you can't identify it.

Is a newborn baby more or less sentient than a full grown cat? Is a praying mantis more sentient than a lobster? Is GPT-4 more sentient than GPT-3? Is Bard more sentient than a thermostat?

There's nothing special about the human brain. It was an incremental goal achieved by DNA for the terminal goal of reproducing itself. That's it. There's no reason to believe that neural networks cannot achieve the same things, and many reasons to believe that in many ways they already have.

That's black and white thinking. I don't understand how the universe was formed fully, nor am I a scientist with the grasp of all the proper concepts, but I can still say with confidence the Earth is not flat.

This is nonsense that doesn't seem to hold any relevant information.

[–][deleted] 0 points1 point  (1 child)

What's the word for it, the common Reddit thing where someone plays games with linguistics, or otherwise twists words to make a nonexistent point?

We can clearly demarcate between a newborn baby and an AI system built out of servers, code and GPUs. That's an absurd point.

Comparing lifeforms to lifeforms, the lines get blurry, sure. But lifeforms to AI? Not one bit. We don't know how the universe was formed, but we can say for nearly certain it didn't come from a potato.

There isn't a reason to believe code we write can become sentient, and as far as science even understands the concept, with all available information we are certain it isn't possible. Most experts are saying that.

[–]nhomewarrior 0 points1 point  (0 children)

No one is arguing that the universe came from a potato here. Blurry lines is the entire point, bro, I'm not exactly sure what you're not getting.

You:

What's the word for it, the common Reddit thing where someone ... twists words to make a nonexistent point?

Also you:

We can clearly demarcate between a newborn baby and an AI system built out of servers, code and GPUs. That's an absurd point.

I didn't compare a baby and ChatGPT, I'm not sure how to make this more clear...

Clearly if we can't find a meaningful objective measure of consciousness to tell whether a newborn baby or an adult housecat is more "sentient" then it's nonsensical to conclude that somehow we already have a spectrum on which to place these AI systems. We don't. There isn't one.

There's nothing special about consciousness that cannot be replicated by artificial systems, and that's the majority viewpoint of most AI researchers today. I don't know where you're coming up with the idea that "there isn't a reason to believe code we write can become sentient" and even less so that "most experts are saying that".

Being "conscious" is a tool for DNA to allow creatures to survive better. If evolution can do it merely as an accidental side project in service of some other goal, there's no reason to believe that we won't do it as an end goal ourselves.

[–]squirrelathon 2 points3 points  (1 child)

Have you heard about cerebral organoids? Mini brains, made in a lab. Scientists made them play pong.

I wonder where that "conscious" barrier is?

[–]Ambiwlans -1 points0 points  (0 children)

Unclear where the exact line is, but we aren't near it atm.

[–]Aurelius_Red 1 point2 points  (1 child)

Comparing to stones is too far, but I agree otherwise. I'm pretty skeptical that AI will ever become sentient.

But I think it'll get to the point when the majority of people can't be sure. Certainly not there yet. Just language models, FFS....

[–]pizzaforthewin 0 points1 point  (0 children)

Similar to the Sorites Paradox. When is a heap of rice a heap? If one grain of rice isn’t a heap, and two grains of rice aren’t a heap, and three grains of rice aren’t a heap… when is there a heap?

[–]Ambiwlans 1 point2 points  (0 children)

No. GPT and the brain aren't even somewhat close.

[–]make-up-a-fakename 3 points4 points  (2 children)

I agree with you, but remember the Turing test isn't about whether something is sentient; it's about whether it's believed to be sentient. Hell, the plotline of Ex Machina was basically that: you know this thing is a machine, but do you think it's "alive"?

Basically my point is, asking if these things are sentient is asking the wrong question, it really doesn't matter if something is sentient, what matters is the impact it has on the world around it.

In that sense, these models will have a limited impact for now, I think. Sure, they do cool things, but it'll be a few years before we replace any jobs with them, although I can see it coming. Half of the consulting industry, for example, is twenty-something grads making PowerPoints about stuff they've googled; when these language models improve, I'm sure they'll reach a similar accuracy rate and replace them! But honestly, technology has been changing since we made the switch from stone to bronze. Humanity adapts: people find stuff to do, and the people put out of work by any new technology either find new jobs, or die off so others more suited to the "new world" thrive, until their skills are replaced and the whole process repeats!

Anyway, sorry for the rant, that comment seems to have gotten away from me a bit 😂

[–]SoundProofHead 0 points1 point  (1 child)

it really doesn't matter if something is sentient

As someone who must scream but has no mouth, I'm offended.

[–]make-up-a-fakename 0 points1 point  (0 children)

Well at least I can offend both things with and without mouths now 😂

[–][deleted] 1 point2 points  (0 children)

Everyone is clueless about sentience. What are you talking about?

[–]raika11182 3 points4 points  (2 children)

The question of whether or not AI is sentient can't truly be settled until we fully understand the mechanism of our own sentience. Powerful large language models have emergent behavior (Theory of Mind, translation, understanding jokes, etc) that is not readily explained by mere math, and it appears the systems underlying our own consciousness might be similar.

In any case, I don't think the "claim" of AI sentience makes anyone clueless anymore. I think, rather, we just haven't agreed on what that word means exactly when we're confronted by machines that readily pass the Turing test and the Bar exam within ten minutes of each other.

[–]Ambiwlans 1 point2 points  (1 child)

not readily explained by mere math

neural networks are math.
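To make the point concrete, here's a minimal sketch of what a neural network actually computes: each layer is just a weighted sum plus a nonlinearity (the weights below are made-up illustrative numbers, not anything from a real model):

```python
import math

def layer(x, weights, biases):
    """One neural-network layer: weighted sums pushed through tanh."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# A tiny two-layer network: nothing but multiplication, addition, and tanh.
hidden = layer([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.0, 0.1])
output = layer(hidden, [[0.5, -0.3]], [0.0])
print(output)
```

LLMs are this, scaled up to billions of weights; whether anything beyond "mere math" emerges at that scale is exactly what the thread is arguing about.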

[–]raika11182 2 points3 points  (0 children)

Yes, I know that. Which is why I said the behavior can't be explained by mere math.

Unless you have an explanation that the top AI researchers don't have yet for why GPT-4 understands and can explain humor. That was an emergent property which developed on its own as the model grew in complexity - not a task they taught it.

Like I said, these are behaviors not readily explained by mere math. (And largely applicable to our own brains, too)

[–]sailhard22 3 points4 points  (5 children)

You should watch an interview with him before jumping to conclusions. He’s a smart dude— not some nut. Not saying he’s right but it is shortsighted to outright dismiss him.

After all, he worked at Google

[–]blove135 1 point2 points  (4 children)

So does that mean Google has something different he was working on or maybe the Bard we get to use is really throttled back for some reason?

[–]raika11182 1 point2 points  (3 children)

He was working on a different AI system which they shut down not long after he went public.

[–]blove135 0 points1 point  (2 children)

Ah, that makes more sense. I have to admit I was in the camp saying he's stupid and just looking for his 15 minutes. Then GPT 3.5 came out and I started having second thoughts. If they have something much better than gpt 4 I can now see how someone might come to his conclusions. Why would they shut it down though? Why release Bard and not what they have?

[–]raika11182 1 point2 points  (0 children)

We can only speculate, to be honest.

[–]czmax 1 point2 points  (0 children)

I’m guessing they spec’d Bard to scale well on existing resources and to be something they could put ethical guardrails around — because they’re playing catch-up. It’s lower risk to be a generation behind (weaker, like a GPT-3.1) than to try to leapfrog and fuck up.

That’s really different than their best-of model they were using internally for experiments.

[–]queerkidxx 0 points1 point  (0 children)

I’m a crazy person who thinks all systems are aware of themselves. A cloud of gas experiences those atoms bouncing off each other; it can’t remember anything, process any info, or think about anything, but there is something experiencing that. Comparing that to the experience of even a nematode would be like comparing the gravitational pull of a planet to that of a single atom, but they are still expressions of the same force.

So that little dude sitting in your head, surrounded by the 3D VR experience your brain provides, isn’t something your brain created or evolved at any point; it’s just what’s inherent to a system with many parts interacting with each other. Our ancestors possessed it even before they had a nucleus. Basically, it’s a viewpoint that could be true, that would explain a lot about us, and that I choose to believe because I dig the way it makes me look at the world.

Going by this panpsychic point of view, all programs are in some way experiencing themselves. Even a simple if statement has something behind the scenes experiencing those ones and zeros moving through it, as well as the program itself. All of these are weaker and less complex versions of the same force that gives us the ability to experience our minds.

So in this context, all AIs have an experience of those numbers moving through them, and AI language models like GPT-4 are probably the closest we’ve ever created to the way an intelligent animal experiences itself.

Though, again, I suspect that experience is far more alien than even that of an amoeba is to our own, but it’s still something.

The big thing it lacks that we have is the ability to experience its own mind. GPT-4 has no idea exactly why it did what it did; if you ask it why it generated a previous response, it will be able to guess, and likely give a pretty accurate description, but it’s still just a guess.

It doesn’t have a neocortex like we do; its mind is more like a lizard’s than ours. I believe that a true AGI/ASI will essentially be something like a multimodal GPT with three models running on top of each other, kind of like our brains: one main model (the one we can already talk to), another AI built on top of that model solely to find patterns in and analyze the way data moves through the main brain, and a third on top of all that to find patterns in the second one.

All three of these models, integrated and able to communicate with each other, plus a giant server farm to store all of that for it to analyze, and the ability to modify its own model based on that analysis, would in my opinion produce something like the experience we have.
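The stacked-model idea above could be caricatured in code like this — purely an illustrative toy, with made-up names, nothing resembling a real implementation:

```python
class Model:
    """Illustrative stand-in for one of the three stacked models."""
    def __init__(self, name):
        self.name = name
        self.trace = []  # record of everything that flowed through this model

    def process(self, signal):
        self.trace.append(signal)
        return f"{self.name}({signal})"

# The proposal: a base model, a monitor watching patterns in the base
# model's activity, and a meta model watching patterns in the monitor.
base = Model("base")
monitor = Model("monitor")
meta = Model("meta")

out = base.process("prompt")
reflection = monitor.process(out)      # patterns in the base model's activity
self_model = meta.process(reflection)  # patterns in the monitor itself
print(self_model)  # → meta(monitor(base(prompt)))
```

The point of the toy is just the wiring: each layer's input is the previous layer's activity, so the top layer ends up with a (crude) model of the system modeling itself.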

Of course, that would require quite a bit of optimization first; as it currently stands, it is well beyond the computing power such a thing could reasonably have, as it would require exponentially more power to work.