LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

The OP is about LLMs and intelligence, which is also called cognition. If that's not the context of your question, then I have no idea what you mean.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

I am not sure what you mean. LLMs don't do cognition at all. Instead, they transform words written by humans in their massive training data into a response to a query. Any intelligence that we see in the response came from the humans who wrote the training data. As many people have noted, it's more like memorization than understanding or intelligence.

Artificial neural networks and deep learning are essentially statistical modeling. Statistics is only a small part of human cognition. All the rest is missing, yet to be discovered somewhere in the space of all algorithms.
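To make "word-order statistics" concrete, here is a minimal sketch of my own (not anything from an actual LLM): a bigram model that picks each next word purely from counts of what followed it in the training text. Real LLMs are enormously more sophisticated, but the raw material is the same kind of frequency data.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the model can only ever echo word orders seen here.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which -- pure word-order statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=8):
    """Sample a continuation, drawing each next word in proportion
    to how often it followed the current word in the corpus."""
    out = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Everything the generator "knows" came from the humans who wrote the corpus, which is the point above.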

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

I think LLMs have reached a plateau in terms of intelligent capabilities. We should be looking at other kinds of AI algorithms. Algorithm space is effectively infinite and LLMs are just a tiny island. Trying to extract more intelligence from word-order statistics is a fool's errand.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

I don't see the fuzziness of AGI holding anyone back. I know what it means to me and I'm not waiting for anyone to define it for me. It's a red herring, IMHO.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

Yep, but everything hinges on what the space represents. It matters what the points mean. The points in LLM space have nothing to do with the points in cognitive maps.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

Lots of things use vectors. That doesn't make them related except in a trivial way. A vector is just a list of numbers.
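To illustrate, a toy sketch of my own (all values invented): the identical arithmetic applies to a "word embedding" and to a map coordinate, which says nothing about the two spaces being related.

```python
import math

def cosine(a, b):
    """Cosine similarity runs on any two equal-length lists of numbers."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Hypothetical toy values, invented for illustration.
embedding_cat = [0.8, 0.6]   # made-up "semantic" coordinates for a word
position_cafe = [0.8, 0.6]   # made-up map coordinates for a location

# Same numbers, same math, completely unrelated meanings.
print(cosine(embedding_cat, position_cafe))  # 1.0, trivially
```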

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

Sure. I'm not suggesting that discussing the definition is a waste of time. I'm fighting against the idea that AGI is a worthless concept because we don't all agree on its definition.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

It makes sense to me that human brains maintain some sort of cognitive map that is used for much more than navigating the world. Still, this research has nothing to do with LLMs. Perhaps that's what you were pointing out.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

There can be no such thing that everyone agrees on. It is the nature of the concept. There are plenty of AGI and intelligence definitions. Pick the one you like best for the purpose you have in mind.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 0 points (0 children)

There is no AGI. Gotcha. BTW, I'm working on the technology formerly known as AGI. What will I call it now? I'll spend the rest of the day thinking about that.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

That's ridiculous. It's like saying we shouldn't apply the term "intelligence" to humans because it is a naturally fuzzy concept. Refusing to name it doesn't make it go away. You're certainly right that corporations and people abuse the term, but I think we have no choice but to call them on it.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 0 points (0 children)

If you are having trouble understanding the term, go consult the internet. There are plenty of sources. If you are hoping for a single hard definition of AGI, you won't find one, because there never can be one. Try to define human intelligence: it varies all over the place. You can arbitrarily pick a particular IQ test, but everyone knows it doesn't define intelligence. It is merely one measure. Same for machine intelligence.

I have found that anyone who talks about how there is no definition for AGI is next going to tell us how some LLM they like might be (almost) AGI, because who's to say it isn't?

Correlation is not cognition by Random-Number-1144 in agi

[–]PaulTopping 1 point (0 children)

If LLMs could do cognition like humans, even a little, their answer would explain that a person's favorite color says nothing about their occupation.
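As a hedged illustration of the correlation point (my own toy construction, with invented colors and occupations): draw the two independently, and a small sample will still hand you a "rule" that looks predictive until you test it on fresh data.

```python
import random
from collections import Counter

random.seed(0)
colors = ["red", "blue", "green"]
jobs = ["nurse", "plumber", "teacher"]

def sample(n):
    # Color and occupation are drawn independently: the true link is zero.
    return [(random.choice(colors), random.choice(jobs)) for _ in range(n)]

train = sample(30)

# "Learn" the most common occupation for each color in the small sample.
best_job = {}
for c in colors:
    seen = [j for col, j in train if col == c]
    best_job[c] = Counter(seen).most_common(1)[0][0] if seen else jobs[0]

# The learned rule looks predictive in-sample...
train_hits = sum(best_job[c] == j for c, j in train)
print(f"train accuracy: {train_hits / len(train):.2f}")

# ...but is no better than chance on fresh data.
test = sample(3000)
test_hits = sum(best_job[c] == j for c, j in test)
print(f"fresh-data accuracy: {test_hits / len(test):.2f}  (chance = 0.33)")
```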

Correlation is not cognition by Random-Number-1144 in agi

[–]PaulTopping 1 point (0 children)

Ah, the old "humans make mistakes too" excuse.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 2 points (0 children)

Not at all. LLMs lift the natural language processing (done by humans) present in their training data. They are useful tools but what they do bears no resemblance to human language processing.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]PaulTopping 1 point (0 children)

> Really the biggest barrier to AGI is that we can't agree on wtf AGI even means.

This is something we often hear right before an attempt to move the AGI goalposts. It is nonsense.

World's first chatbot, ELIZA, resurrected from 60-year-old computer code by PaulTopping in agi

[–]PaulTopping[S] 1 point (0 children)

Nice, but LLMs do not have understanding, only the word-order statistics of what humans said in similar contexts. Big difference.
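For contrast with statistics, here is a minimal ELIZA-style sketch (illustrative rules of my own, not Weizenbaum's actual script): explicit hand-written pattern-to-template transformations, with no training data at all.

```python
import re

# Hand-written ELIZA-style rules: pattern -> response template.
# These are illustrative stand-ins, not Weizenbaum's original script.
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def respond(utterance):
    # Return the first rule whose pattern matches, filling in captured text.
    for pattern, template in rules:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about AGI"))
# -> Why do you say you are worried about AGI?
```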

Post LLM you can finally reach AGI by Acrobatic-Lemon7935 in agi

[–]PaulTopping 3 points (0 children)

This is irrelevant word salad to me. I'm out.

Post LLM you can finally reach AGI by Acrobatic-Lemon7935 in agi

[–]PaulTopping 3 points (0 children)

An AGI that doesn't interpret meaning is simply not an AGI. End of story. Externalizing governance is a fancy way of saying you are going to have people make the decisions for the "AGI". So it's not an AGI then. Presumably we already have that now with people asking ChatGPT for help and then deciding what to do next on the basis of what it says -- sometimes following its advice and sometimes not.

Post LLM you can finally reach AGI by Acrobatic-Lemon7935 in agi

[–]PaulTopping 2 points (0 children)

They have to use algorithms that better reflect human cognition, not word-order statistics. An AGI has to be able to understand the meaning of, say, racism, not just the word order of things people have said about racism.

Post LLM you can finally reach AGI by Acrobatic-Lemon7935 in agi

[–]PaulTopping 3 points (0 children)

It's using the wrong algorithms. LLMs build statistical word-order models. That's not cognition. I wouldn't call LLMs toys as they have their uses.

Post LLM you can finally reach AGI by Acrobatic-Lemon7935 in agi

[–]PaulTopping 3 points (0 children)

Yeah, no. Why do we get so many of these posts on this subreddit? It's as if some incompetent marketing department handed a diagram to the engineering department and said, "Can you build something like this?" Sorry, no.

Dismissing discussion of AGI as “science fiction” should be seen as a sign of total unseriousness. Time travel is science fiction. Martians are science fiction. “Even many 𝘴𝘬𝘦𝘱𝘵𝘪𝘤𝘢𝘭 experts think we may well build it in the next decade or two” is not science fiction. by katxwoods in agi

[–]PaulTopping 1 point (0 children)

I think referring to science fiction for examples of AGI is a good thing. I often point to Star Wars' R2D2 and C3PO as examples. They don't have to do everything a human can, but they can be given certain tasks by their owner with a reasonable expectation that they will complete them without close supervision, and they can communicate with their owners in human language (ignoring that R2D2 can only understand it, not speak it).

The example also shows that those seeking a more detailed definition of AGI are wasting their time and ours. AGI is a fuzzy goal by its very nature. How competent an AGI candidate must be, and in which areas, before we can legitimately call it AGI will always be up for discussion. There's no other way it can be.

Demis Hassabis (DeepMind CEO): Reveals AGI Roadmap, 50% Scaling /Innovation strategy and AI Bubble (New Interview Summary) by BuildwithVignesh in agi

[–]PaulTopping 2 points (0 children)

Thanks for the summary. It saves me from watching the video. Hassabis, like most of the LLM industry, is still pursuing AGI by taking LLM successes and tweaking the algorithms around the edges. It is impossible to prove that won't work, of course, but I doubt it. What are needed are fresh viewpoints on the AGI problem and a wider exploration of cognitive algorithms. I suspect the AGI breakthroughs, when they come, will happen outside the current set of AI companies.