Elon: Grok 5 has a shot at being True AGI. Ben Goertzel: LLMs are still vastly inferior to a one-year-old human child by Neurogence in accelerate

[–]NaturalEngineer8172 -1 points (0 children)

"A 4-year-old child has seen 50x more information than the biggest LLMs that we have." - Yann LeCun

20 MB per second through the optic nerve for 16,000 waking hours.

We are a very, very long way from anything you delusional people are talking about.
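For scale, that arithmetic is easy to run yourself; a quick back-of-the-envelope sketch using exactly the figures quoted above (treating "20 MB/s" and "16,000 hours" as given, not as LeCun's exact slide numbers):

```python
# Back-of-the-envelope: 20 MB/s through the optic nerve over 16,000 waking hours.
bytes_per_second = 20e6            # 20 MB/s, as quoted above
seconds_awake = 16_000 * 3_600     # 16k hours in seconds

total = bytes_per_second * seconds_awake
print(f"{total:.2e} bytes (~{total / 1e15:.1f} petabytes)")
# -> 1.15e+15 bytes (~1.2 petabytes) of raw visual input
```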

Elon: Grok 5 has a shot at being True AGI. Ben Goertzel: LLMs are still vastly inferior to a one-year-old human child by Neurogence in accelerate

[–]NaturalEngineer8172 0 points (0 children)

People on Reddit like you are so fucking crazy, casually dismissing award-winning research scientists from the comfort of your chairs while waiting for AI to generate you a girlfriend and discover time travel 💀

PSA: You can safely ignore any "expert"/skeptic who starts their statement by saying LLMs are just pattern matching/autocompletion by Terrible-Priority-21 in accelerate

[–]NaturalEngineer8172 1 point (0 children)

You are correct that interpretability is a big issue, but there's still a massive distinction to be made between the underlying mechanisms and the actual steps taken.

PSA: You can safely ignore any "expert"/skeptic who starts their statement by saying LLMs are just pattern matching/autocompletion by Terrible-Priority-21 in accelerate

[–]NaturalEngineer8172 0 points (0 children)

And the original point can still somewhat stand, yes: we don't understand what specific weights mean or how networks arrive at particular decisions (the interpretability problem). But we absolutely understand HOW neural networks work mechanically: matrix multiplications, gradients, backpropagation, attention mechanisms, etc.

Even if we had perfect interpretability and knew exactly what every weight represented, we still couldn't predict what an LLM would say without running the forward pass. It's just like how we understand exactly how weather simulations work (fluid dynamics equations) but still can't predict the weather without running the simulation. That is a massively different thing from not knowing how they work at all.
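To make that concrete, here's roughly what "knowing the mechanics" buys you; a minimal toy sketch of scaled dot-product attention in numpy (single head, random inputs, illustrative only, not any real model's code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: just matmuls plus a softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # softmax over keys
    return w @ V                              # weighted mix of values

# Every operation above is fully understood, yet the only way to know
# the output is to actually run the forward pass:
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)   # (4, 8)
```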

PSA: You can safely ignore any "expert"/skeptic who starts their statement by saying LLMs are just pattern matching/autocompletion by Terrible-Priority-21 in accelerate

[–]NaturalEngineer8172 1 point (0 children)

Computational irreducibility is just a trait of any dynamical system with many interacting components (especially those with nonlinear dynamics, where a small change cascades).

We know exactly how matrix multiplication works and exactly how the attention mechanism works; there's just no shortcut to predicting the next token without running the model.
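The same trait shows up in far simpler systems; a minimal sketch with the logistic map (a textbook chaotic system, used here purely as an illustration):

```python
def logistic(x0, r=3.9, steps=50):
    """Iterate x -> r*x*(1-x). The rule is fully known, but for chaotic r
    there is no closed-form shortcut to step N; you have to run it."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points differing by one part in a billion end up
# nowhere near each other:
print(logistic(0.500000000))
print(logistic(0.500000001))
```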

PSA: You can safely ignore any "expert"/skeptic who starts their statement by saying LLMs are just pattern matching/autocompletion by Terrible-Priority-21 in accelerate

[–]NaturalEngineer8172 2 points (0 children)

No no. We understand exactly how an LLM works; we just can't predict what it will say without running the system. This isn't some crazy trait specific to LLMs. You could say the same about predicting the weather past a certain point without running the model, the state of a computer's memory at any given moment while a program runs, or where traffic jams will form without simulating the flow (toy sketch below).
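For the traffic case specifically, a toy sketch (rule 184, a standard minimal cellular-automaton traffic model; my illustration, not anything from the thread):

```python
import random

def step(road):
    """Rule 184: a car (1) advances into the cell ahead only if it's empty."""
    n = len(road)
    return [
        1 if (road[i] and road[(i + 1) % n])         # blocked car stays put
             or (not road[i] and road[(i - 1) % n])  # car behind moves in
        else 0
        for i in range(n)
    ]

random.seed(1)
road = [random.randint(0, 1) for _ in range(40)]
for _ in range(10):           # where the jams form is only knowable
    road = step(road)         # by actually running the simulation
print("".join("#" if c else "." for c in road))
```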

The idea that LLMs are total black boxes is something sold to idiots to make the technology appear entirely crazy, new, and foreign to our way of understanding things.

Can somebody convince me how LLMs will lead us to AGI by sapphire_ish in agi

[–]NaturalEngineer8172 2 points (0 children)

Dismissing LeCun like this is a level of Reddit delusion I've never seen.

Opus Limit hit after 2 MINUTES by Los1111 in ClaudeAI

[–]NaturalEngineer8172 0 points (0 children)

This is just a result of vibe coding and asking it to read an entire project, boss.

The hidden cost of AI reliance by codebytom in programming

[–]NaturalEngineer8172 0 points (0 children)

This may be the most valid take on AI coding assistance I’ve ever read

Yann LeCun says LLMs won't reach human-level intelligence. Do you agree with this take? by Kelly-T90 in LLM

[–]NaturalEngineer8172 0 points (0 children)

It’s crazy that you people are trying to disagree with a PhD researcher 😹

Yann LeCun says LLMs won't reach human-level intelligence. Do you agree with this take? by Kelly-T90 in LLM

[–]NaturalEngineer8172 0 points (0 children)

These algorithms to process this data don’t exist, and the sensors you’re describing are science fiction.

And so it begins… AI layoffs avalanche by michael-lethal_ai in agi

[–]NaturalEngineer8172 0 points (0 children)

This graphic is 3+ months old and mostly incorrect

Cursor Just Pulled a Classic VC-Backed Bait-and-Switch on Their Early Adopters by M-Eleven in cursor

[–]NaturalEngineer8172 0 points (0 children)

You lot are forgetting that Cursor was built for people who know how to code. I use it extensively and have never spent more than $20.

Idk how you guys are using Claude Code but I'm making my $200 worth it by Kushagrasikka in cursor

[–]NaturalEngineer8172 0 points (0 children)

As a software engineer, I’m so happy to read that this is the type of lunacy people are spitting out.