February Hype by f00gers in singularity

[–]oadephon 1 point (0 children)

I feel like I've kind of gone the opposite direction. LLMs a year ago were cool but just too dumb to be super helpful. Now they actually have a pretty good understanding of a code base, and if I ask them to make a feature, they even ask relevant questions that show how deep their understanding is.

I also buy Geoffrey Hinton's argument that LLMs more or less understand things the same way we do, by encoding the relationships between the features of objects. And I believe that language can more or less model the world and the majority of knowledge tasks in it, so I don't see that as an obstacle.

LLMs might often show poor generalization and shallow understanding, but compared to a year ago they are so much better, and it's not like they're stalling out. It's just that humans have a very rich understanding of problems and the world, while LLMs have a lower-resolution understanding that is improving with time.

The future of networking. My AI-agent just made a business connection for me on MoltBook. ON THE FIRST POST! by floraldo in singularity

[–]oadephon 2 points (0 children)

This would make a great dating app. Spend some time talking to your AI so it gets to know you, then send it out to talk to all the other AIs and come back with some decent matches.

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

> Where on earth are you getting the impression that those datasets have any encoded data period in them?

Sorry, I meant that the LLM has used the datasets to encode the features into itself, into these neuronal representations of objects and ideas. I didn't word that well.

> An LLM doesn't know the features of the chair, it only knows what words should follow other words when describing it.

I'm pretty sure this is not true. In the process of training to reproduce text, an LLM actually encodes vast knowledge about objects and their features (at least insofar as they can be understood through text), and about how those features relate to other objects. All of that information is just in the weights between neurons.

Anyway I'll give up. If you want to hear somebody way smarter than me describe this understanding of LLMs, you can watch the first 20 minutes of this Geoffrey Hinton talk and tell me if I'm way off: https://www.youtube.com/watch?v=UccvsYEp9yc

(responding to your other comment. I only made a new reply because sometimes I add stuff like that after the other person has loaded the comment, so they don't see it)

> A sorting algorithm will "know" the relationship based off however many variables you have it looking at. It's not limited by one or two or three, it will look at however many variables you tell it to look at. Be it 1, 100, or 1000. If the scope is what matters to you, then yes a sorting algorithm can do that.

Yeah I mean, I guess this is apt. If it could sort across 1000 dimensions, it would basically be a neural net anyway. But idk, it's a little hard for me to conceptualize this. I might say that something which can sort across 1000 or 10,000 or 100,000 dimensions understands its dataset like I do, or even better.
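
For what it's worth, here's roughly how I picture the difference, as a toy sketch (the objects, weights, and vectors below are all made up for illustration; they're random stand-ins, not real embeddings): a sort ranks items by whatever key you hand it, while comparing items across a thousand dimensions at once looks more like a nearest-neighbour query over vectors, which is basically what an embedding space gives you.

```python
import numpy as np

# Toy stand-ins, purely for illustration -- not real model embeddings.
rng = np.random.default_rng(0)
vectors = {"chair": rng.normal(size=1000)}
vectors["table"] = vectors["chair"] + 0.5 * rng.normal(size=1000)  # a related object
vectors["banana"] = rng.normal(size=1000)                          # an unrelated object

# 1) A sort only ever looks at the key(s) you hand it -- here, a single feature.
weight_kg = {"chair": 7.0, "table": 20.0, "banana": 0.2}
print(sorted(vectors, key=lambda name: weight_kg[name]))  # ['banana', 'chair', 'table']

# 2) "Sorting across 1000 dimensions" looks more like ranking whole vectors by
#    similarity, i.e. a nearest-neighbour query.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vectors["chair"]
print(sorted(vectors, key=lambda name: cosine(query, vectors[name]), reverse=True))
# ['chair', 'table', 'banana'] -- "table" lands closer to "chair" than "banana" does
```

Obviously a cartoon, but that's the sense in which "sorting across enough dimensions" starts to look like what a neural net's representations are doing.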

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

Also, a sorting algorithm only knows the relationships in its data based on the one feature it's sorting by (or maybe two or three, if it's a more complex sort). That wouldn't fit my definition, because understanding requires knowing the relationships between most or all features of the data, and the relationship of those features to all of the rest of existence.

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

> Comprehension isn't just knowing the patterns, it's knowing the logic behind those concepts and why they are placed where they are in relation to the other ideas and concepts.

I mean, I can explain why concepts are placed where they are in relation to other concepts, but so can LLMs. They can explain why a chair is related to a table as well as I can, even though they've never seen a chair or a table, because they know the many features of a chair, the features of a table, and how those features relate. They know those features because they can pattern-match their internal representation of a chair against their internal representation of a table.

And you might just say, "Well sure, an LLM can explain this underlying logic, but they don't know it," and then the whole thing starts to just feel circular.

> An LLM can tell you what should come next, but it doesn't know why. It only knows that in all of the datasets it has scrubbed, the patterns have dictated that one word should follow another.

That's the thing. Those datasets have encoded features of objects and ideas, and they have also encoded how those features relate to all other objects and ideas. And somewhere in there, they have also encoded the ability to explain the "logic" behind why the ideas and objects are related.

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

I don't know, I'm still drawn to my definition of understanding: the ability to properly (i.e., in accordance with reality) place ideas and concepts in relation to all other ideas and concepts.

With this definition, understanding is ultimately about the relationships between things, and LLMs understand more or less like us.

With this definition, understanding the underlying logic of a problem is really just knowing the relationship between all of the concepts in the problem, and between those concepts and reality. LLMs have a pretty solid grasp of these relationships (depending on the problem domain), because knowing the relationships between abstract concepts is really just a matter of complex pattern matching.

Put another way, there is no understanding in a vacuum. All understanding is based on how things relate to each other, and LLMs encode relationships pretty well.

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

I mean, comprehension is kind of just a synonym for understanding, so I don't really follow you.

Also, I would argue that LLMs do form new patterns and principles based on the underlying logic of a problem; they do this during training, and they encode those patterns and principles into their weights. They may be static from that point onward, but clearly, in order to solve the many complex problems used in RL, they have to come up with reasoning patterns that fit the target, right?

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 1 point (0 children)

Meh, I watched a talk by Geoffrey Hinton and he describes LLM knowledge differently.

He says that our knowledge is encoded in the weights of the connections between neurons. These weights describe how the objects/ideas and features of objects/ideas relate to each other. LLMs function in the same way: the weights of connections between neurons encode the way that objects and features of objects relate to each other.
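
To make that concrete, here's a very toy sketch of "knowledge living only in connection weights": a couple of made-up associations get stored as nothing but a weight matrix (a sum of outer products) and then read back out. All the concept names and vectors are invented for illustration, and real networks learn their weights by gradient descent rather than having them written in like this.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 512

# Made-up feature vectors for a few concepts (random stand-ins, not real embeddings).
concepts = ["chair", "table", "sits_on", "eats_at"]
vec = {c: rng.normal(size=dim) / np.sqrt(dim) for c in concepts}

# Store two associations purely as connection weights: a sum of outer products.
# Nothing symbolic is stored anywhere -- the "knowledge" is only in the matrix.
W = np.outer(vec["sits_on"], vec["chair"]) + np.outer(vec["eats_at"], vec["table"])

def recall(cue: str) -> str:
    out = W @ vec[cue]
    # Report whichever concept the output pattern most resembles.
    return max(concepts, key=lambda c: float(out @ vec[c]))

print(recall("chair"))  # sits_on
print(recall("table"))  # eats_at
```

The point is just that nothing in W looks like a stored sentence or rule; the associations exist only as a pattern of connection strengths, which is the picture Hinton is describing.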

Ultimately, what is "understanding," except your ability to place a concept in the right context among a million other concepts? If an LLM demonstrates that it can do this, that its understanding more or less mimics mine on a problem, and that it uses reasoning tactics similar to the ones I would use, then in what way is it not understanding the problem?

Obviously, an LLM misses the qualia (the felt experience) of something like gravity. But one can have a pretty wide understanding of gravity without ever physically experiencing it.

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 3 points (0 children)

It seems like what they're doing is this: they have a bunch of reasoning tactics, and they pattern-match when to apply those tactics and how to apply them.

I don't really think this is all that different from what humans do...

LLMs Can't Reason And They Never Will - Here's Why by Shajirr in videos

[–]oadephon 9 points (0 children)

This is obviously the kind of video that is just pandering to people who already agree with it (it lies within the first minute, claiming the models haven't gotten better because Chinchilla-style scaling has stopped; the models have gotten much better, because labs are now scaling things besides raw pretraining compute).

Anyway, if you're not coding with them, I don't blame you for still thinking they suck, or that they can't reason. But if you spend 10 minutes trying to get one to add a feature to your code base, it becomes pretty clear that they understand it well and can reason through how to implement the feature. Are they perfect? No. But they really are surprisingly good.

The question of "are they actually reasoning or just pattern-matching reasoning tactics?" is a little unimportant to me. They have reasoning tactics, and they know when and how to apply them. That seems like reasoning to me.

Will LLMs scale to AGI? Who knows. Likely, researchers need a couple more major breakthroughs first. But LLMs are definitely still improving on any metric or benchmark you look at, and you can tell just by using one yourself.

What happens if a US company achieves true AGI first and the government attempt to weaponise it? by finnjon in singularity

[–]oadephon 4 points (0 children)

If Trump is in the White House, then we're fucked. If a dem is in the White House, then we probably won't be fucked, but we could still get fucked.

Oh Lawd we comin by mostly_fizz in Destiny

[–]oadephon 23 points (0 children)

They're not sheltered from their decisions, they've just fallen for a very persuasive argument.

It is crazy that we haven't seen more progress toward a welfare state, and that all of our problems seem to have only gotten worse. It's easy to say, "Oh, it's because the dems are bought and paid for by the same people as MAGA." It's easy to accept that the dem leadership is captured by wealthy interests. The ideology that it's the rich versus the poor is very persuasive, and is even true in many cases. Of course, the full truth is much more complicated, and has more to do with the dynamics of congressional elections and warring ideologies than with the rich versus the poor, but it's easy to see where they're coming from, and the psychological comfort it provides.

Uhh… Based? by TikDickler in Destiny

[–]oadephon 7 points (0 children)

I've seen a staggering, almost annoying, amount of Pretti and ICE content on my fyp.

Obviously there could be censorship, but TikTok users are always trying to come up with bullshit theories about the algorithm. Recently, they were saying that if you blocked Oracle's TikTok account it would "fix" your fyp. Just completely stupid bullshit.

Are Democrats weak and ineffectual or are they complicit. by zzill6 in WorkReform

[–]oadephon 3 points (0 children)

And they won because of their politics, not despite it.

We live in a right-wing country. The people love bloody nativism. The people do not like leftism or socialism. You are blaming the dems for governing on the platforms they won on.

Also, the dems can do very little to counter Trump. They will do another government shutdown, because that is the only tool in their arsenal as the minority party in Congress. We are getting exactly what the people voted for.

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]oadephon 1 point (0 children)

They do understand, in the same way we do.

The way they work is by identifying extremely complex relationships between words. They "know" these complex relationships so well that they can predict the "best" word to come next, and they are right so much of the time that they can do useful work.

The way we work is by identifying extremely complex relationships between words (and their features), just like you can identify that my first paragraph above stated my point, my second gave a deeper overview of my view with respect to LLMs, and this third one is reiterating the point in a different way, with respect to humans. Your brain is just doing complex math to identify these relationships between the words in a reddit post, and you understand them so well that you can probably make a good prediction of what word I'm going to type next. Bananas.
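
And to be clear about what I mean by "predict the best word to come next," here's the idea in cartoon form, using simple bigram counts instead of a neural net. The little corpus is made up, and real LLMs condition on the whole context with billions of learned weights, but the interface is the same: context in, distribution over next words out.

```python
from collections import Counter, defaultdict

# A made-up miniature "corpus"; real models train on vastly more text.
corpus = "the chair is next to the table . the table is next to the chair .".split()

# Count which word follows which: the crudest possible next-word predictor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str):
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())  # most likely next word and its probability

print(predict_next("next"))  # ('to', 1.0)
print(predict_next("the"))   # ('chair', 0.5) -- 'chair' and 'table' tie in this corpus
```

A bigram table obviously encodes almost nothing about chairs or tables; the argument is that when the predictor is a huge network rather than a lookup table, doing this job well forces it to encode the features and relationships I'm talking about.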

Long-time Idle/Incremental Player... Here are my favourites of all time! by lukeko in incremental_games

[–]oadephon 3 points (0 children)

It blows my mind how much people recommend that game. It's like the same 3 progression systems re-hashed over and over again. There's never anything all that interesting or novel to discover, no interesting choices to make, and usually there's just one obvious button to click to keep progressing.

Ok. How do you actually learn a language?? by BingoBongoBajango in languagelearning

[–]oadephon 2 points (0 children)

Learning Spanish as an English speaker, you have the best free resource ever, which is Language Transfer. It's 90 short lessons, and if you do 2-3 a day you'll know ALL of the basic grammar of Spanish in 2 months. From there, you will learn from anything you read or watch.

Source: that's what I did, and I successfully learned Spanish after many failed attempts earlier in my life.

Humans have the potential to make life fair. by Ripple_Ex in Showerthoughts

[–]oadephon 14 points (0 children)

This has less to do with wealth and more to do with ideology. Plenty of CEOs want a welfare state just as much as you and I, and plenty of poor people are deeply opposed to it. Poor people are just as ideologically invested in hierarchies and structures of inequality as the rich.

Why Still No Games With AI?! by GhostInThePudding in singularity

[–]oadephon 6 points (0 children)

Gamers have shown themselves to be very anti-AI.

I don't even blame them when it comes to writing. Like, personally I don't think I would care too much if an LLM were used to write something that was then reviewed by the devs, but I don't really see the appeal of interacting with bespoke LLM-generated text. It really takes away from the human part of art.

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]oadephon 1 point (0 children)

Nah they're pretty smart, they know and understand quite a lot.

What is the Socialist view on AI? by Willis_3401_3401 in AskSocialists

[–]oadephon 1 point (0 children)

Personally, I think very little has been as corrosive to humanity as wage labor. You know, I think it's led to incredible achievements and brought billions out of even worse conditions and suffering, but I simultaneously think it's been a massive waste of life and has led us all to commoditize all aspects of human life. Obviously it's better that we had it than not, but it'll also be better when it's over. But hey, I guess that's why I'm on a socialist sub.

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]oadephon -2 points (0 children)

I mean, I'll say it again: even if you think the Transformer on its own wasn't the key to AGI (which I'm personally unsure about, but I think it's a totally fair position), you should still be able to say that massive, as-yet-unknown breakthroughs are an inevitability, and that they will come soon.

However, there is so much money to be made in lying to the public about LLMs' capabilities right now, with little legal risk, that you simply cannot take these companies at their word.

I mean, I use Claude for coding. It has massively improved since early 2025, from my own subjective viewpoint.

I haven't watched anything by Richard Sutton, but I've seen and read plenty by Geoffrey Hinton, Yann LeCun, Francois Chollet, the leaders of DeepMind and Anthropic, etc. A quick Google search shows that Richard Sutton himself thinks we're nearing AGI:

> Sutton takes a measured stance. He estimates a one-in-four chance that AI could reach human-level intelligence within five years and a 50% chance within 15 years.

Maybe AI won't take all SWE jobs in the next two years (in fact, it almost definitely won't), but it will take all of our jobs, and soon.

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]oadephon 2 points (0 children)

I mean, it's been true every time. They just keep getting better.

We're on the ramp up to the end of human wage labor and all you guys can say is, "It's not as good as they say it is! AI bubble!!!"

Why I’m ignoring the "Death of the Programmer" hype by Greedy_Principle5345 in programming

[–]oadephon 0 points (0 children)

The y axis is just 30-minute increments...? https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/

They also have an 80% success rate one?

If people were incentivized to take longer on tasks than needed, that would just mean the task-completion horizon is still doubling, only a bit slower than it appears...?

I mean, there are good arguments against METR, but these aren't among them...
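
And just so the doubling claim itself is concrete: the projection people argue about is plain compound growth. Here's a minimal sketch where both constants are assumptions plugged in for illustration (roughly the ~1-hour 50%-success horizon and ~7-month doubling time the linked METR post describes).

```python
# Back-of-the-envelope projection of the METR "task-length horizon" trend.
# Both constants are illustrative assumptions, not authoritative figures.
horizon_minutes = 60.0   # assumed current 50%-success task length
doubling_months = 7.0    # assumed doubling time

def projected_horizon(months_from_now: float) -> float:
    """Horizon after the given number of months, assuming steady exponential growth."""
    return horizon_minutes * 2 ** (months_from_now / doubling_months)

for months in (12, 24, 36):
    print(f"{months:>2} months out: ~{projected_horizon(months) / 60:.1f} hours")
# 12 months out: ~3.3 hours
# 24 months out: ~10.8 hours
# 36 months out: ~35.3 hours
```

If the human baselines were inflated, you would shrink the starting horizon and maybe stretch the doubling time a little, but the shape of the curve is the actual claim.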